Particle Filter Localization Project

Project Material


Objectives


Your goal in this project is to gain in-depth knowledge of and experience with solving the problem of robot localization using the particle filter algorithm. This problem set is designed to give you the opportunity to learn about probabilistic approaches within robotics and to continue to grow your skills in robot programming. Like before, if you have any questions about this project or find yourself getting stuck, please post on the course Slack or send a Slack DM to the teaching team. Even if you don't hit any roadblocks, feel free to share with your peers what's working well for you.

Learning Goals

Teaming & Logistics

You are expected to work with 1 other student for this project. Your project partner may be in a different lab section than you; however, both of you are expected to attend the same lab section for the next 2 weeks while you're working on this project. If you and your partner cannot attend the same lab section due to schedule conflicts, you will need to find a different partner. If you strongly prefer working by yourself, please reach out to the teaching team to discuss your individual case. A team of 3 will only be allowed if there is an odd number of students. Your team will submit your code and writeup together (in 1 Github repo).

Selecting a partner: You can either choose your own partner, or you can post in the #find-project-partner Slack channel that you're looking for a partner and find someone else who's also looking for one.


Deliverables


Like last project, you'll submit this project using Github Classroom (both the code and the writeup). Both partners will contribute to the same github repo.

Implementation Plan

Please put your implementation plan within your README.md file. Your implementation plan should contain the following:

Note: Class Meeting 05 covers two measurement models for range finders and updating your particles using a measurement model, which you may find helpful in filling out your Implementation Plan.

Writeup

Like last project, please modify the README.md file as your writeup for this project. Please add pictures, YouTube videos, and/or embedded animated gifs to showcase and describe your work. Your writeup should:

Code

Use our Github Classroom to access the starter git repo particle_filter_project (here's a direct link to the starter repo), and put it within your ~/catkin_ws/src/ directory. The starter git repo / ROS package contains the following files:

particle_filter_project/gazebo_custom_meshes/turtlebot3_maze.dae
particle_filter_project/launch/navigate_to_goal.launch
particle_filter_project/launch/turtlebot3_maze.launch
particle_filter_project/launch/visualize_particles.launch
particle_filter_project/rviz/particle_filter_project_v2.rviz
particle_filter_project/scripts/particle_filter.py
particle_filter_project/worlds/turtlebot3_maze.world
particle_filter_project/CMakeLists.txt
particle_filter_project/package.xml
particle_filter_project/README.md

In addition to these files, you'll also create your own map of the maze, and store those files in a map folder in your particle_filter_project ROS package:

particle_filter_project/map/maze_map.pgm
particle_filter_project/map/maze_map.yaml

You will write your code within the particle_filter.py file we've provided. You may also create other Python scripts that provide helper functions for the main steps of the particle filter localization contained within particle_filter.py.

Please remember that when grading your code, we'll be looking for:

gif or Embedded Video

At the beginning or end of your writeup please include a gif or embedded video (e.g., mp4) of one of your most successful particle filter localization runs. In your gif/video, (at minimum) please show what's happening in your rviz window. We should see the particles in your particle filter localization (visualized in rviz) converging on the actual location of the robot.

Note that, unfortunately, rviz has no built-in recording options. We recommend that you record your rviz window directly from your base OS or from within the Ubuntu VM.

If you are on macOS, QuickTime Player is a great out-of-the-box option for screen recording. On Windows, several options work well, such as OBS Studio or the built-in Windows 10 Game Bar. On Ubuntu, one option is SimpleScreenRecorder. Another, quicker option is to record the whole screen using Ctrl+Shift+Alt+R to both start and stop a screen recording.

rosbag

Record a run of your particle filter localization in a rosbag. Please record the following topics: /map, /scan, /cmd_vel, /particle_cloud, /estimated_robot_pose, and any other topics you generate and use in your particle filter localization project. Please do not record all of the topics, since the camera topics make the rosbags very large. For ease of use, here's how to record a rosbag:

$ rosbag record -O filename.bag topic-names
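For example, to record just the topics listed above into a hypothetical file named particle_filter_run.bag, you might run:

$ rosbag record -O particle_filter_run.bag /map /scan /cmd_vel /particle_cloud /estimated_robot_pose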
Please refer to the ROS Resources page for further details on how to record a rosbag.

Partner Contributions Survey

The final deliverable is ensuring that each team member completes this Partner Contributions Google Survey. The purpose of this survey is to accurately capture the contributions of each partner to your combined particle filter localization project deliverables.

Grading

The Particle Filter Project will be graded as follows:

New to this project is the Individual Contribution grade. This will be assessed through your responses to the partner contributions survey. Your grade will reflect how much you contributed to your team's project.


Deadlines & Submission


Submission

As was true with the warmup project, you will use Gradescope to submit your particle filter project deliverables. As a reminder:


Running the Code


When testing and running your particle filter code, you'll have the following commands running in different terminals or terminal tabs.


With the Physical Robot


When you're ready to test your particle filter code on a physical robot, run the following commands.

First terminal: run roscore.

$ roscore

Second terminal: run Bringup on the Pi.

$ ssh pi@IP_OF_TURTLEBOT
$ set_ip LAST_THREE_DIGITS
$ bringup

Third terminal: launch the launchfile that we've constructed that 1) starts up the map server, 2) sets up some important coordinate transformations, and 3) runs rviz with some helpful configurations already in place for you (visualizing the map, particle cloud, robot location). If the map doesn't show up when you run this command, we recommend shutting down all of your terminals (including roscore) and starting them all up again in the order we present here.

$ roslaunch particle_filter_project visualize_particles.launch

Fourth terminal: run the Turtlebot3 provided code to teleoperate the robot.

$ roslaunch turtlebot3_teleop turtlebot3_teleop_key.launch

Fifth terminal: run your particle filter code.

$ rosrun particle_filter_project particle_filter.py

In Gazebo


You may use Gazebo to test your particle filter code prior to running it on a physical Turtlebot, as the testing process is much faster and easier to tweak using Gazebo. However, make sure that your official run of the particle filter code is on the physical robot!

Note: If your computer experiences significant lag in Gazebo, you may find it easier to simply work with the physical turtlebot.

First terminal: run roscore.

$ roscore

Second terminal: run your Gazebo simulator. For this project, we're using a simulated version of the particle filter maze.

$ roslaunch particle_filter_project turtlebot3_maze.launch
Note: You may find that the maze doesn't show up when you run the roslaunch particle_filter_project turtlebot3_maze.launch command. This issue is caused by the main directory having a name other than particle_filter_project (which happens when Github Classroom creates a unique repository name for your project group). To fix this and allow the map to show up, simply change your directory name from something like particle-filter-project-sarah-sebo to particle_filter_project. On the command line, you can execute:
$ cd ~/catkin_ws/src/
$ mv particle-filter-project-sarah-sebo particle_filter_project

Third terminal: launch the launchfile that we've constructed that 1) starts up the map server, 2) sets up some important coordinate transformations, and 3) runs rviz with some helpful configurations already in place for you (visualizing the map, particle cloud, robot location). If the map doesn't show up when you run this command, we recommend shutting down all of your terminals (including roscore) and starting them all up again in the order we present here.

$ roslaunch particle_filter_project visualize_particles.launch

Fourth terminal: run the Turtlebot3 provided code to teleoperate the robot.

$ roslaunch turtlebot3_teleop turtlebot3_teleop_key.launch

Fifth terminal: run your particle filter code.

$ rosrun particle_filter_project particle_filter.py
Note: It is very important that you execute roslaunch particle_filter_project visualize_particles.launch AFTER you execute roslaunch particle_filter_project turtlebot3_maze.launch to ensure that the coordinate transforms work properly.

The Particle Filter Localization


The goal of our particle filter localization (i.e., Monte Carlo localization) will be to help a robot answer the question "Where am I?" This problem assumes that the robot has a map of its environment; however, the robot either does not know or is unsure of its position and orientation within that environment.

To solve the localization problem, the particle filter localization approach (i.e., Monte Carlo Localization algorithm) makes many guesses (particles) for where the robot could be, all over the map. Then, it compares what the robot is seeing (using its sensors) with what each guess (each particle) would see. Guesses that see things similar to what the robot sees are more likely to be close to the true position of the robot. As the robot moves around in its environment, it should become clearer and clearer which guesses most likely correspond to the robot's actual position. Please see the Class Meeting 04 page for more information about the MCL algorithm.

In more detail, the particle filter localization first initializes a set of particles in random locations and orientations within the map and then iterates over the following steps until the particles have (hopefully) converged to the position of the robot:

  1. Capture the movement of the robot from the robot's odometry
  2. Update the position and orientation of each of the particles based on the robot's movement
  3. Compare the laser scan of the robot with the hypothetical laser scan of each particle, assigning each particle a weight that corresponds to how similar the particle's hypothetical laser scan is to the robot's laser scan
  4. Resample with replacement a new set of particles probabilistically according to the particle weights
  5. Update your estimate of the robot's location
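As a high-level sketch in code, one iteration of this loop might look like the following (the function names here are hypothetical; the starter code provides its own structure):

# A sketch of one iteration of the particle filter (hypothetical names).
def run_one_iteration(self, delta_pose, scan):
    # step 2: apply the robot's movement (plus noise) to every particle
    self.update_particles_with_motion_model(delta_pose)
    # step 3: weight each particle by how well its hypothetical scan matches the robot's
    self.update_particle_weights_with_measurement_model(scan)
    # normalize the weights so they form a probability distribution
    self.normalize_particles()
    # step 4: resample with replacement according to the weights
    self.resample_particles()
    # step 5: update the estimate of the robot's pose (e.g., average the particles)
    self.update_estimated_robot_pose()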

In the following sections, we'll walk through each of the main steps involved with programming your own particle filter localization algorithm.


1. Making a Map of the Environment


Your first step in this project is to record a map of the maze (pictured below). You will record the map using the built-in turtlebot3 SLAM tools that we go over during Lab D: roslaunch turtlebot3_slam turtlebot3_slam.launch slam_methods:=gmapping. Please save your map in the particle_filter_project/map directory under the name maze_map.

maze
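Once gmapping has built a map you're happy with, you can save it (while the SLAM nodes are still running) using the map_server package's map_saver tool, which writes out the maze_map.pgm and maze_map.yaml files:

$ rosrun map_server map_saver -f ~/catkin_ws/src/particle_filter_project/map/maze_map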

2. Initializing Particles on the Map


Once you've created your map, you'll next want to initialize your particles. Ideally, your particles should be initialized only within the light grey cells on the map (as opposed to the black obstacles/walls or the area outside the map), and randomly distributed across that area.

To initialize your particles within the map's boundaries, you'll need to work with the particle filter's map attribute, which is of type nav_msgs/OccupancyGrid. The map's data list (data) uses row-major ordering, and the map's info contains useful information about the width, height, resolution, and more. You'll also want to locate the origin of the map to help you debug.
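For instance, here's a minimal sketch of drawing random particle poses from the map's free cells (self.map, self.num_particles, and the particle representation are hypothetical names; adapt them to your own code):

import numpy as np

# indices of the free cells (in an OccupancyGrid, 0 = free, 100 = occupied, -1 = unknown)
free_cells = [i for i, val in enumerate(self.map.data) if val == 0]

# sample cell indices uniformly at random
chosen_cells = np.random.choice(free_cells, size=self.num_particles)

for idx in chosen_cells:
    # convert the row-major index back to (column, row) grid coordinates
    col = idx % self.map.info.width
    row = idx // self.map.info.width
    # convert grid coordinates to world coordinates using the origin and resolution
    x = self.map.info.origin.position.x + (col + 0.5) * self.map.info.resolution
    y = self.map.info.origin.position.y + (row + 0.5) * self.map.info.resolution
    yaw = np.random.uniform(0.0, 2.0 * np.pi)  # random orientation
    # ... create a particle at (x, y, yaw) with a uniform initial weight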

Note: It can be helpful to add a rospy.sleep() command (e.g., rospy.sleep(1)) in your code to allow time for the ROS node, as well as its publishers and subscribers, to set up before you publish your particle cloud for the first time.

3. Updating Particle Position/Orientation Based on the Robot's Movement


Next, we need to ensure that when our robot moves around and turns (as you're teleoperating it), the particles also move around and turn the exact same amount. You'll teleoperate the robot with either the alias teleop or the longer roslaunch turtlebot3_teleop turtlebot3_teleop_key.launch. To keep track of the robot's movements, the starter code already computes the difference in the robot's xy position and orientation (yaw) in the robot_scan_received() function. If the robot moves "enough" based on the linear and angular movement thresholds, it triggers one iteration of the particle filter localization algorithm.

Your job is to use the precomputed difference in the robot's xy position and orientation (yaw) to update the particle positions, while also adding in some movement noise.
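For example, here's a simple sketch of applying the robot's movement to each particle, assuming delta_x, delta_y, and delta_yaw hold the precomputed differences and treating the movement as travel straight along the robot's heading (the particle attribute names are hypothetical):

import numpy as np

# distance traveled by the robot since the last update
dist = np.hypot(delta_x, delta_y)

for p in self.particle_cloud:
    # add Gaussian movement noise (the standard deviations are tuning parameters)
    noisy_dist = dist + np.random.normal(0.0, 0.05)
    noisy_turn = delta_yaw + np.random.normal(0.0, 0.05)
    # move each particle the same distance along its *own* heading
    p.x += noisy_dist * np.cos(p.yaw)
    p.y += noisy_dist * np.sin(p.yaw)
    p.yaw += noisy_turn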

Note: We recommend that during this project, you test your code early and often. It's much easier to incrementally debug your code than to write up the whole thing and then start debugging.

For example, we recommend you test out the movement of your particles by lowering your number of particles to something like 4-10 and then observing their movement closely to see if it matches that of the turtlebot.


4. Assigning Particle Weights Based on Sensor Measurements


This is a critical component of the particle filter localization algorithm: you assign a weight to each particle that represents how well the robot's sensor measurements match up with what would be seen from the particle's location on the map. As we went over in Class Meeting 05, you are welcome to use either a ray casting approach or a likelihood field approach.

One helper function we've provided to assist with this section is compute_prob_zero_centered_gaussian(dist, sd). This function takes in a distance, which represents the difference between the LiDAR distance you receive from the robot and the distance you compute for your particle based on either the ray casting or likelihood field algorithm. With that input, the function outputs a probability value. For example, if your robot "sees" a distance of 2.0m at 90 degrees and your particle also "sees" a distance of 2.0m at 90 degrees, the distance value you'd feed into compute_prob_zero_centered_gaussian(dist, sd) would be 2.0 - 2.0 = 0, and you'd get a high probability output, since these values are closely aligned.
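As one illustration, here's a sketch of a likelihood-field-style weight update (closest_obstacle_distance() is a hypothetical helper that returns the distance from a map point to the nearest obstacle; the chosen angles, sd value, and particle attribute names are assumptions to adapt):

import math

for p in self.particle_cloud:
    q = 1.0
    for angle in [0, 45, 90, 135, 180, 225, 270, 315]:  # a subset of the 360 scan angles
        z = scan.ranges[angle]  # the robot's measured range at this angle
        if 0.0 < z < scan.range_max:  # skip invalid / out-of-range readings
            # project the endpoint of this measurement from the particle's pose
            x_z = p.x + z * math.cos(p.yaw + math.radians(angle))
            y_z = p.y + z * math.sin(p.yaw + math.radians(angle))
            # how far is that endpoint from the nearest obstacle on the map?
            dist = closest_obstacle_distance(x_z, y_z)
            q *= compute_prob_zero_centered_gaussian(dist, sd=0.1)
    p.weight = q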


5. Resampling


After you've computed the particle weights, you'll resample with replacement a new set of particles (from the prior set of particles) with probability in proportion to the particle weights.
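For instance, one common way to do this is with numpy's weighted sampling (particle attribute names are hypothetical; the deep copy ensures resampled duplicates can later move independently):

import numpy as np
from copy import deepcopy

# normalize the weights so they sum to 1
weights = np.array([p.weight for p in self.particle_cloud])
weights = weights / weights.sum()

# draw particle indices with replacement, in proportion to the weights
indices = np.random.choice(len(self.particle_cloud), size=self.num_particles, p=weights)
self.particle_cloud = [deepcopy(self.particle_cloud[i]) for i in indices]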


6. Updating Your Estimate of the Robot's Location


Finally, you'll update the estimate of the robot's location based on the average position and orientation of the particles.

You will notice that your estimate will be very poor until your particles have begun to converge. This is ok and expected.
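One subtlety worth noting: positions can be averaged directly, but orientations should be averaged as unit vectors (a circular mean), since naively averaging angles near the +/-180 degree boundary gives misleading results. A sketch, with hypothetical particle attributes:

import numpy as np

xs = [p.x for p in self.particle_cloud]
ys = [p.y for p in self.particle_cloud]
yaws = [p.yaw for p in self.particle_cloud]

est_x = np.mean(xs)
est_y = np.mean(ys)
# circular mean of the yaws: average the sines and cosines, then recombine
est_yaw = np.arctan2(np.mean(np.sin(yaws)), np.mean(np.cos(yaws)))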


7. Optimization


Parameter and code optimization is a large component of student success in this project. Notably, this algorithm can be very computationally expensive, so it's important that you adjust your code and particle filter parameters to enable your particle filter localization code to run successfully.

Code Optimization: There are ways you can restructure your code to enable better performance, such as:

Parameter Optimization: There are many parameters you can adjust in your code to either (1) improve runtime performance and reduce lag or (2) improve the performance of the algorithm itself. Some examples of parameters we encourage you to change and optimize include:

Note: Most of you will see an error in your terminal that Gazebo's time is going backwards. This is an indication that your particle filter localization code's computation is taking too long and can't keep up with the robot's movement. Try reducing the number of particles or one of the other optimization techniques listed above, and then run your code again.

A View of the Goal


The following example gifs show the progression of the particles over time. You can see that at the beginning, the particles are randomly distributed throughout the map. Over time, the particles converge on likely locations of the robot, based on its sensor measurements. And eventually, the particles converge on the true location of the robot.

particle filter example
particle filter example

And in this example, you can see the particle filter side-by-side with the real robot in the maze.

particle filter example

A Personal Note


In many ways, I would not be a professor at UChicago teaching this class had it not been for the particle filter that I programmed during my undergrad at Franklin W. Olin College of Engineering. My partner in crime, Keely, and I took a semester to learn about algorithms used in self-driving cars (under the guidance of Professor Lynn Stein) and implemented our own particle filter on a real robot within a maze that we constructed ourselves. It was this project that propelled me to get my PhD in Computer Science studying human-robot interaction, which then led me to UChicago. Below, I've included some photos of our project.

Vrobot
Vrobot
Vrobot
Vrobot

And here's the video of our particle filter working, where you can see the particles (blue) converging to the location of the robot (red).

Acknowledgments


The design of this course project was influenced by the particle filter project that Keely Haverstock and I completed in Fall 2013 at Olin College of Engineering as well as Paul Ruvolo and his Fall 2020 A Computational Introduction to Robotics course taught at Olin College of Engineering. The gifs providing examples of the particle filter are used with permission from former students Max Lederman, Emilia Lim, Shengjie Lin, Sam Nitkin, and David Pan.