Your goal in this project is to explore a topic in robotics that is interesting to you and your team. This open-ended project is an opportunity to be creative and ambitious, and we are excited to see what you will accomplish!
For this project, your team should consist of 2-4 members. There are no restrictions on who your team members can be (e.g., you can work with partners from either of the previous two projects).
Your project proposal should include:
Your project proposal should be placed in a Google Drive folder your team creates for its deliverables. When you're ready to turn in your project proposal:
All of your presentations should be uploaded to your team's Google Drive folder before the class when you're presenting.
You'll organize your code into one GitHub repo.
Like in the previous projects, your GitHub README will serve as your project writeup. Your writeup should include the following:
Your demo should be either 1) presented in your writeup as a gif or 2) included in your GitHub repo as a video (or whatever format is most conducive to showing your project off). Your demo should clearly exhibit all of the functionality and features of your project.
Your team's project proposal and presentation slides should all be uploaded within the same Google Drive folder that your group creates for this project. For your first deliverable (the project proposal), please 1) give all the members of the teaching team read & comment access to the Google Drive folder (sarahsebo@uchicago.edu, pouya@uchicago.edu, yvesshum@uchicago.edu) and 2) DM all of the teaching team the link to your Google Drive folder on Slack.
Your project team will make a total of 3 presentations over the course of the project. Upload your slides for each of these presentations to your Google Drive folder before the class period where you'll be presenting them.
Once you're done with your project, fill out the team contributions survey. The purpose of this survey is to accurately capture the contributions of each team member to your combined final project deliverables.
The following final project deliverables are due on Friday, June 4 11:00am CST:
As noted on the Course Info page, we are not accepting flex hours for this assignment.
This project was inspired by both the idea of a robot goalie and the game of Pong: the TurtleBot must stop a launched ball from entering the goal it is defending. This team wanted to make a robot that could move to the location of an incoming ball and block it before it entered the goal or hit the area behind the robot, using some form of reinforcement learning. What makes this project interesting and different from past work is that it involves a large, continuous state space comprising the ball's pose (position, orientation, and velocity) on the field as well as the robot's pose. In the earlier Q-learning project, there was a small, finite number of states that could be searched exhaustively. In this goalie project, however, the ball can come from any point along the midline of the field the team created, and at any angle, which yields a number of states that is at the very least computationally intractable for traditional Q-learning on a standard computer. The team thus had to find an approach that still used reinforcement learning but reduced the overall complexity.
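The summary doesn't say exactly how the team tamed the continuous state space, but one common approach is to discretize the ball's launch position and angle into coarse bins so that tabular Q-learning becomes tractable again. A minimal sketch of that idea, with all bin counts, field dimensions, and action names being illustrative assumptions rather than the team's actual code:

```python
from collections import defaultdict

def discretize(ball_x, ball_angle, x_bins=10, angle_bins=8,
               field_width=2.0, max_angle=1.57):
    """Map a continuous ball launch (position along the midline, approach
    angle in radians) onto a coarse grid cell, so the state space becomes
    finite. Bin counts here are illustrative, not the team's values."""
    xi = min(int(ball_x / field_width * x_bins), x_bins - 1)
    ai = min(int((ball_angle + max_angle) / (2 * max_angle) * angle_bins),
             angle_bins - 1)
    return (xi, ai)

# Q-table over (discrete state, action); actions = goalie lateral moves.
ACTIONS = ["left", "stay", "right"]
Q = defaultdict(float)

def q_update(state, action, reward, next_state, alpha=0.1, gamma=0.9):
    """Standard tabular Q-learning update applied to discretized states."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += alpha * (reward + gamma * best_next
                                   - Q[(state, action)])
```

With, say, 10 position bins and 8 angle bins, the table has only 80 states per robot configuration, which a standard computer can learn over exhaustively.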
The goal of this project was controller teleoperation: connecting a game controller gives the user full control of the robot's wheels and arm. This is interesting because it makes moving the robot, and especially the arm, much easier, and it contributes a physical, hardware component that was missing from this class. Through this project, the team was able to make the robot move, turn, and grab things purely through buttons and joysticks on a controller. The three main components of the project are controller input, navigation, and inverse kinematics: the controller input drives the navigation and feeds the inverse kinematics that move the arm the way the user wants it to move.
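The core of the controller-input component is a mapping from joystick axes to drive velocities, typically published as a velocity command to the robot. A sketch of that mapping under illustrative assumptions (the speed limits and deadzone value are placeholders, not the team's actual parameters):

```python
def joystick_to_velocity(axis_forward, axis_turn,
                         max_linear=0.26, max_angular=1.82, deadzone=0.05):
    """Map joystick axes in [-1, 1] to (linear m/s, angular rad/s) drive
    commands. The deadzone keeps a resting stick from causing drift;
    the max speeds here are illustrative defaults."""
    if abs(axis_forward) < deadzone:
        axis_forward = 0.0
    if abs(axis_turn) < deadzone:
        axis_turn = 0.0
    return (axis_forward * max_linear, axis_turn * max_angular)
```

In a ROS setup, the returned pair would typically fill the `linear.x` and `angular.z` fields of a `geometry_msgs/Twist` message published to the robot's velocity topic.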
Coming from similar backgrounds in game design, this team produced a game using their knowledge of robotics algorithms. Their idea was most similar to a survival game: the player operates one TurtleBot in a dark maze while another "monster" bot tries to approach the player. The monster knows the maze and the player's position at all times. A creature of the dark, this monster cannot stay in the glow of the player's flashlight; thus, the monster bot has to be smart in planning its path to the player.
The main components of the project are A* Search and Sensory Controls. The A* Search finds the best path from the monster to the player. The robot uses its Sensory Controls to traverse the maze, center itself in the maze hallways, avoid wall collisions, make smooth turns, and detect light. If the monster runs into the player's light, the A* Search takes this into account and recalculates the new best path to sneak up behind the player.
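The A* component can be sketched on a grid representation of the maze. This is a minimal, generic A* over a 4-connected occupancy grid with a Manhattan-distance heuristic, not the team's actual implementation; the flashlight-avoidance behavior could be modeled by marking lit cells as blocked before replanning:

```python
import heapq

def a_star(grid, start, goal):
    """A* over a 4-connected occupancy grid (0 = free, 1 = blocked),
    using a Manhattan-distance heuristic. Returns the path as a list of
    (row, col) cells, or None if the goal is unreachable."""
    def h(cell):
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    frontier = [(h(start), 0, start, [start])]  # (f, g, cell, path)
    seen = set()
    while frontier:
        _, cost, cell, path = heapq.heappop(frontier)
        if cell == goal:
            return path
        if cell in seen:
            continue
        seen.add(cell)
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                    and grid[nr][nc] == 0):
                heapq.heappush(frontier, (cost + 1 + h((nr, nc)), cost + 1,
                                          (nr, nc), path + [(nr, nc)]))
    return None
```

When the monster runs into the player's light, replanning amounts to setting the lit cells to 1 and calling `a_star` again from the monster's current cell.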
Dubbed Pacturtle, this final project was inspired by Pacman. This team's program features a user-operated Pacturtle whose goal is to evade the four ghost turtles for as long as possible within a specially designed maze world. By giving all the ghost turtles different search algorithms (e.g., breadth-first search, map exploration, random movements), they hoped to make the game as challenging as possible and, in this way, immerse the user in an interactive and fun game.
The goal of this project was to program a robot to be pet-like, connecting reinforcement learning with human-computer interaction. Reinforcement learning has become a hot topic in computer science and robotics, and with the pandemic, people have had more time to play with and train their pets. So, this team thought it would be interesting to combine these two things and create something that encourages human-robot interaction similar to the way humans interact with pets (more specifically dogs, because they are more responsive to and dependent on humans). Overall, this team was able to make the robot pet-like in that it can be given specific commands and execute the correct action (while also having some personality). The different commands included: roll (spin around), shake (extend the arm and move it up and down), come (go to the person, as represented by a yellow object), follow (follow the yellow object as the user moves it around), find [color] (go to the dumbbell of the specified color), and fetch [color] (go to and bring back the dumbbell of the specified color).
The motivation for this project was to create an advanced prototype of the themes covered so far by combining more complex robot navigation and manipulation with a pathfinding algorithm. With this team's implementation, the robot can find the shortest route between any number of destinations on the map and navigate to those points while avoiding obstacles, stopping at stop signs, and yielding to one other moving robot. The other robot drives the same cyclical path around the street, and the navigation bot stops to let it pass whenever their paths cross.
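The summary doesn't specify how the shortest route over multiple destinations is found; for a handful of stops, a common approach is to compute pairwise travel distances (e.g., from grid shortest paths) and then brute-force the visiting order. A sketch under that assumption, with all names illustrative:

```python
from itertools import permutations

def best_route(start, stops, dist):
    """Brute-force the visiting order over a small set of destinations,
    given a dist(a, b) function (e.g., precomputed shortest-path lengths
    on the street map). Returns (total_distance, ordered_stops). Trying
    every permutation is only feasible for a handful of stops."""
    best = (float("inf"), None)
    for order in permutations(stops):
        total, here = 0, start
        for stop in order:
            total += dist(here, stop)
            here = stop
        best = min(best, (total, list(order)))
    return best
```

With more than roughly ten destinations, the factorial blow-up would call for a heuristic (e.g., nearest-neighbor or 2-opt) instead.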
The design of this course project was influenced by Paul Ruvolo and his Fall 2020 course, A Computational Introduction to Robotics, taught at Olin College of Engineering. I also want to thank the students from Winter 2021, whose projects are the featured examples on this page.