Class Meeting 11: Partially Observable Markov Decision Processes (POMDPs)
Today's Class Meeting
- Learning about partially observable Markov decision processes (POMDPs) and an application of POMDPs to child-robot tutoring
- Here's a link to the slides
- If you're interested in learning more about the child-robot tutoring research project presented in class today, here's a link to our full research paper, which was published at the AAAI conference in 2019.
Resources where you can learn more about POMDPs
- Hossein Kamalzadeh and Michael Hahsler's POMDP: Introduction to Partially Observable Markov Decision Processes, which details the use of the pomdp R package
- pomdp.org, which contains tutorials, code, examples, and more on POMDPs
- The Wikipedia page on POMDPs
- Cassandra, A. R., Kaelbling, L. P., & Littman, M. L. (1994, July). Acting optimally in partially observable stochastic domains. In AAAI (Vol. 94, pp. 1023-1028).
- Kaelbling, L. P., Littman, M. L., & Cassandra, A. R. (1998). Planning and acting in partially observable stochastic domains. Artificial Intelligence, 101(1-2), 99-134. (The tiger example from this line of work is sketched in the code below.)
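To make the core POMDP idea concrete, here is a minimal sketch (not from the lecture materials) of the Bayesian belief update, using the classic two-state "tiger" problem that appears in the Cassandra/Kaelbling/Littman papers above. The 0.85 hearing accuracy and the observation/action names are illustrative assumptions, not values taken from this page.

```python
# Minimal sketch of a POMDP belief update on the two-state tiger problem.
# The agent never observes the state (which door hides the tiger) directly;
# it maintains a belief, i.e., a probability distribution over states.

STATES = ["tiger-left", "tiger-right"]

def transition(s_next, s, action):
    # Listening does not move the tiger, so the transition model is the identity.
    return 1.0 if s_next == s else 0.0

def observation(obs, s_next, action):
    # Assumed model: with probability 0.85 the agent hears the tiger on the
    # correct side when it listens; other actions give uninformative observations.
    if action != "listen":
        return 0.5
    correct = "hear-left" if s_next == "tiger-left" else "hear-right"
    return 0.85 if obs == correct else 0.15

def belief_update(belief, action, obs):
    """Bayes-filter update: b'(s') is proportional to O(o | s', a) * sum_s T(s' | s, a) * b(s)."""
    new_belief = {}
    for s_next in STATES:
        predicted = sum(transition(s_next, s, action) * belief[s] for s in STATES)
        new_belief[s_next] = observation(obs, s_next, action) * predicted
    total = sum(new_belief.values())
    return {s: p / total for s, p in new_belief.items()}

# Starting from a uniform belief, two consistent "hear-left" observations make
# the agent increasingly confident the tiger is behind the left door.
b = {"tiger-left": 0.5, "tiger-right": 0.5}
b = belief_update(b, "listen", "hear-left")   # belief in tiger-left rises to 0.85
b = belief_update(b, "listen", "hear-left")   # belief in tiger-left rises to about 0.97
print(b)
```

A POMDP solver such as the pomdp R package linked above goes a step further: rather than just tracking the belief, it computes a policy that maps beliefs to actions (e.g., keep listening until the belief is confident enough to open a door).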
Acknowledgments
Today's lecture was informed by tutorial content on pomdp.org and by images from Hossein Kamalzadeh and Michael Hahsler's POMDP: Introduction to Partially Observable Markov Decision Processes.