Class Meeting 09: Reinforcement Learning


Today's Class Meeting



What You'll Need for Today's Class



Q-Learning Algorithm


\(\textrm{Algorithm Q_Learning}:\)
\( \qquad \textrm{initialize} \: Q \)
\( \qquad t = 0 \)
\( \qquad \textrm{while} \: Q \: \textrm{has not converged:} \)
\( \qquad \qquad \textrm{select} \: a_t \: \textrm{at random} \)
\( \qquad \qquad \textrm{perform} \: a_t \)
\( \qquad \qquad \textrm{receive} \: r_t \)
\( \qquad \qquad Q(s_t, a_t) \leftarrow Q(s_t, a_t) + \alpha \cdot \Big( r_t + \gamma \cdot \max_a Q(s_{t+1}, a) - Q(s_t, a_t) \Big)\)
\( \qquad \qquad t = t + 1\)

Where:

\( Q(s_t, a_t) \) is the current estimate of the value of taking action \( a_t \) in state \( s_t \)
\( \alpha \) is the learning rate
\( \gamma \) is the discount factor
\( r_t \) is the reward received after taking action \( a_t \) in state \( s_t \)
\( s_{t+1} \) is the state the agent ends up in after taking action \( a_t \)
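As a minimal sketch of the update rule above (the function name and the numpy representation of \( Q \) as a 2D array are our own choices, not part of the algorithm):

```python
import numpy as np

def q_learning_update(Q, s, a, r, s_next, alpha, gamma):
    """Apply one Q-learning update for the transition (s, a, r, s_next).

    Q     -- 2D array of shape (num_states, num_actions)
    alpha -- learning rate
    gamma -- discount factor
    """
    # Temporal-difference target: immediate reward plus the discounted
    # value of the best action available in the next state.
    td_target = r + gamma * np.max(Q[s_next])
    # Move Q(s, a) a fraction alpha of the way toward the target.
    Q[s, a] += alpha * (td_target - Q[s, a])
    return Q
```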


Class Exercise: Q-Learning in a 5-Room Building


For today's class exercise, please get together in groups of 2-3 students.


Specifying the MDP Model


Let's consider a situation in which our Turtlebot3 robot is placed inside a building that has a floorplan like that shown in the following image.

We can characterize this space as an MDP, where each state represents one room in the building (or outside, i.e., room 5) and where the agent can transition between rooms by moving north, south, east, or west. The agent cannot remain in the same state from one time step to the next, except once it is outside.

[Figure: building floorplan]

We can specify the reward function as depicted in the following diagram: the agent receives a reward when it transitions outside from either room 1 or room 4, and once it is outside it continues to receive that reward at every time step.

[Figure: reward diagram]
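To make this specification concrete, here is a minimal Python sketch. The state and action sets come from the description above; the reward magnitude of 100 is an assumed placeholder, so substitute the value shown in the reward diagram.

```python
STATES = [0, 1, 2, 3, 4, 5]                   # rooms 0-4 plus room 5 (outside)
ACTIONS = ["North", "South", "East", "West"]
OUTSIDE = 5

GOAL_REWARD = 100   # assumed placeholder; use the value from the reward diagram

def reward(state, action, next_state):
    """Reward for the transition (state, action) -> next_state.

    The agent is rewarded whenever the step ends outside, which covers both
    transitioning outside (from room 1 or room 4) and remaining outside.
    """
    return GOAL_REWARD if next_state == OUTSIDE else 0
```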

Our robot's goal in this world is to get outside (room 5) from its starting position in room 2.


Applying the Q-Learning Algorithm in this World


The following table / matrix can be used to represent \(Q(s_t, a_t)\), where the rows represent the world states (rooms) and the columns represent the actions that the robot can take. The table is initialized so that all valid state-action pairs start at 0 and all invalid state-action pairs are set to -1.

        North   South   East    West
rm0     -1      0       -1      -1
rm1     0       0       -1      -1
rm2     -1      -1      -1      0
rm3     0       -1      0       0
rm4     0       0       0       -1
rm5     0       0       0       0
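As a sketch, the same table can be written as a numpy array, with rows rm0-rm5 and columns in the North/South/East/West order above (the variable name Q is our own choice):

```python
import numpy as np

# Rows: rm0-rm5; columns: North, South, East, West.
# Valid state-action pairs start at 0; invalid pairs are marked with -1.
Q = np.array([
    [-1,  0, -1, -1],   # rm0
    [ 0,  0, -1, -1],   # rm1
    [-1, -1, -1,  0],   # rm2
    [ 0, -1,  0,  0],   # rm3
    [ 0,  0,  0, -1],   # rm4
    [ 0,  0,  0,  0],   # rm5
], dtype=float)
```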

Your goal in this exercise is to update \(Q(s_t, a_t)\) according to the Q-learning algorithm, under the following assumptions:

Here are action trajectories that you'll have your robot agent follow through the environment (assume that your agent always starts in room 2 at the beginning of each trajectory). For each action within each trajectory, update \(Q(s_t, a_t)\) according to the Q-learning algorithm. You can check your answers on this page, which provides \(Q(s_t, a_t)\) after each trajectory.
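One way to organize the bookkeeping is sketched below, continuing from the Q array and the q_learning_update function above. The trajectory entry, learning rate, and discount factor shown are placeholders only; substitute the assumptions and trajectories given for the exercise.

```python
# Each step of a trajectory is a (state, action_index, reward, next_state)
# tuple, with action indices 0-3 corresponding to North, South, East, West.
# The single step below is a placeholder, not one of the exercise trajectories.
trajectory = [(2, 3, 0, 3)]

alpha = 1.0   # placeholder; use the learning rate assumed for the exercise
gamma = 0.8   # placeholder; use the discount factor assumed for the exercise

for s, a, r, s_next in trajectory:
    Q = q_learning_update(Q, s, a, r, s_next, alpha, gamma)

print(Q)   # Q(s_t, a_t) after the trajectory, to check against the answer page
```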


More on Reinforcement Learning


If you want to learn more about reinforcement learning or go further into the topics introduced in class, please check out:


Acknowledgments


The lecture for today's class was informed by Kaelbling et al. (1996), Charles Isbell and Michael Littman's Udacity course on Reinforcement Learning, and geeksforgeeks.org's article on the upper confidence bound for reinforcement learning. The class exercise for today was inspired by the Q-learning tutorial on mnemstudio.org.