CS297 Proposal
Dynamic Difficulty Adjustment of Video Games using Deep Learning
Ayan Abhiranya Singh (ayanabhiranya.singh@sjsu.edu)
Advisor: Dr. Chris Pollett
Description:
Accessibility features are growing more diverse and gaining importance in video games. Prominent game developers are making it a priority to add more of these features to their games and reach as wide an audience as possible. Video game players span a wide range of skill levels. Our project aims to address the problem of dynamic difficulty adjustment (DDA) within video games using deep learning. The basic three-level difficulty system (easy, normal, hard) now seems quite archaic, and a solution where the game adjusts to the skill of the player would be extremely lucrative for game companies today. In this project, we aim to study transfer learning algorithms and apply them to teach our AI to play video games. We will then experiment with different algorithms over the course of the semester to train our AI to adjust to varying frame rates of a video game.
Schedule:
Week 1 (August 23 - August 30): Finalize project proposal and find relevant research papers.
Week 2 (August 30 - September 6): Read [1] and find sample datasets for training.
Week 3 (September 6 - September 13): Read [1] and demo understanding of Q-learning.
Week 4 (September 13 - September 20): [Del-1] Q-learning.
Week 5 (September 20 - September 27): Research implementing [Del-1] on a neural network.
Week 6 (September 27 - October 4): Read [2].
Week 7 (October 4 - October 11): Read [3] and demo progress on [Del-2].
Week 8 (October 11 - October 18): [Del-2] Neural network.
Week 9 (October 18 - October 25): Read [4] and study PacMan game mechanics.
Week 10 (October 25 - November 1): Complete [4] and demo understanding of reinforcement learning within the game.
Week 11 (November 1 - November 8): Present understanding of the Atari paper [4] and neural net-based learning for video games.
Week 12 (November 8 - November 15): Read [5] and research enhancing [Del-2] to play a simple version of PacMan.
Week 13 (November 15 - November 22): [Del-3] Game-playing agent for PacMan.
Week 14 (November 22 - November 29): [Del-4] PacMan with reinforcement learning.
Week 15 (November 29 - December 6): [Del-5] Finish 297 report.
Deliverables:
The full project will be done when CS298 is completed. The following will be done by the end of CS297:
1. Q-learning implementation to learn dirt-generation policies for Vacuum World.
2. Neural network version of the table lookup in Q-learning.
3. Simple agent that can play PacMan.
4. Deep Q-learning implementation that can play PacMan.
5. Completed CS 297 report.
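Deliverable 1 can be sketched as a tabular Q-learning loop. The toy 1-D Vacuum World below, its reward values, and the hyperparameters are illustrative assumptions for this sketch, not values fixed by the proposal.

```python
import random

# Toy 1-D Vacuum World: the agent moves along N_CELLS cells and sucks dirt.
# A state is (position, frozenset of dirty cells).
ACTIONS = ["left", "right", "suck"]
N_CELLS = 4

def step(pos, dirt, action):
    """Apply one action; return (new_pos, new_dirt, reward)."""
    if action == "suck":
        if pos in dirt:
            return pos, dirt - {pos}, 10.0   # reward for cleaning a cell
        return pos, dirt, -1.0               # wasted suck
    new_pos = max(0, pos - 1) if action == "left" else min(N_CELLS - 1, pos + 1)
    return new_pos, dirt, -1.0               # small cost per move

def train(episodes=2000, alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    Q = {}  # lookup table: (state, action) -> estimated value
    for _ in range(episodes):
        pos, dirt = 0, frozenset({1, 3})     # fixed dirt layout for the demo
        for _ in range(50):
            if not dirt:
                break
            s = (pos, dirt)
            # epsilon-greedy action selection
            if rng.random() < epsilon:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda x: Q.get((s, x), 0.0))
            pos, dirt, r = step(pos, dirt, a)
            s2 = (pos, dirt)
            best_next = max(Q.get((s2, x), 0.0) for x in ACTIONS)
            q = Q.get((s, a), 0.0)
            # one-step Q-learning update
            Q[(s, a)] = q + alpha * (r + gamma * best_next - q)
    return Q

def greedy_rollout(Q, max_steps=20):
    """Follow the learned greedy policy; return leftover dirt and step count."""
    pos, dirt = 0, frozenset({1, 3})
    steps = 0
    while dirt and steps < max_steps:
        s = (pos, dirt)
        a = max(ACTIONS, key=lambda x: Q.get((s, x), 0.0))
        pos, dirt, _ = step(pos, dirt, a)
        steps += 1
    return dirt, steps
```

Deliverable 2 would then replace the Q dictionary with a neural network that maps states to action values, which is what makes the approach scale to state spaces (such as PacMan's) too large to tabulate.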
References:
[1] "Part V, Machine Learning, Chapter 22 Reinforcement Learning" in Artificial Intelligence: A Modern Approach. S. Russell and P. Norvig. Fourth Edition, New Jersey: Pearson Education, Inc. 2021, pp. 789-821.
[2] "Mobile Object Detection using Tensorflow Lite and Transfer Learning.". Alsing, Oscar. 2018
[3] "Deep Learning for Real-Time Atari Game Play using Offline Monte-Carlo Tree Search Planning." Guo, Xiaoxiao, Satinder Singh, Honglak Lee, Richard L. Lewis, and Xiaoshi Wang. Advances in Neural Information Processing Systems 27. 2014
[4] "Playing Atari with Deep Reinforcement Learning." Mnih, Volodymyr, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, and Martin Riedmiller. arXiv Preprint arXiv:1312.5602. 2013.
[5] "Transfer Learning for Related Reinforcement Learning Tasks Via Image-to-Image Translation.". Gamrian, Shani and Yoav Goldberg. PMLR, .2019
[6] "AlphaDDA: Game Artificial Intelligence with Dynamic Difficulty Adjustment using AlphaZero.". Fujita, Kazuhisa. arXiv Preprint arXiv:2111.06266.2021
[7] "Dynamic Difficulty Adjustment through an Adaptive AI. Silva, Mirna Paula, Victor do Nascimento Silva, and Luiz Chaimowicz.". IEEE. 2015. |