CS299 Proposal

Intelligent Behavior for Computer Game Characters Using Prediction and Learning

Leo Lee (leo@leolees.com)

Advisor: Dr. Chris Pollett (cpollett@yahoo.com)

Committee Members: Dr. Rudy Rucker (rudy@rudyrucker.com) and Dr. Jeff Smith (smithJ@cs.sjsu.edu)

Abstract:

While the graphics of video games have improved dramatically, sophisticated AI in video games remains a rarity. Video game AI has typically been built on elaborate finite state machines (FSMs), which lead to static behavior in the computer-controlled non-player characters (NPCs). In turn, this static behavior detracts from the playability and entertainment value of the game.

The purpose of this thesis is to develop an AI system for a computer game that allows the NPC to learn and adapt its behavior to the player. The game itself will be a 3D fighting game called Alpha Fighter. The player will control one character while the computer AI system controls the other. Each character will have a variety of fighting moves that can be used to defeat the opponent. The goal is to have the AI fighter behave more like a human player would: it will dynamically learn and adapt its behavior according to the behavior of the human player, making predictions about its opponent based on observations and formulating strategies based on those predictions. To achieve this, a Hidden Markov Model or Dynamic Bayesian Network will be used.
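
As a rough illustration (not the actual model to be designed), the following C++ sketch shows the kind of prediction a discrete HMM can provide: the forward algorithm estimates a belief over hidden "strategy" states from the opponent's observed moves, and the most likely next move is read off the one-step-ahead emission probabilities. The two states, three moves, and all probabilities below are invented placeholders.

    #include <array>
    #include <cstdio>
    #include <vector>

    // Hypothetical model: 2 hidden "strategy" states, 3 observable moves.
    // All probabilities are placeholders for illustration only.
    enum Move { PUNCH = 0, KICK = 1, BLOCK = 2 };
    constexpr int kStates = 2, kMoves = 3;

    const double pi[kStates] = {0.6, 0.4};                // initial state distribution
    const double A[kStates][kStates] = {{0.7, 0.3},       // state transition probabilities
                                        {0.4, 0.6}};
    const double B[kStates][kMoves] = {{0.5, 0.3, 0.2},   // emission: P(move | state)
                                       {0.1, 0.2, 0.7}};

    // Forward algorithm: normalized distribution over hidden states
    // after seeing the (non-empty) observation sequence.
    std::array<double, kStates> forwardPass(const std::vector<int>& obs) {
        std::array<double, kStates> alpha{};
        for (int s = 0; s < kStates; ++s) alpha[s] = pi[s] * B[s][obs[0]];
        for (size_t t = 1; t < obs.size(); ++t) {
            std::array<double, kStates> next{};
            for (int j = 0; j < kStates; ++j) {
                double sum = 0.0;
                for (int i = 0; i < kStates; ++i) sum += alpha[i] * A[i][j];
                next[j] = sum * B[j][obs[t]];
            }
            alpha = next;
        }
        double z = 0.0;
        for (double a : alpha) z += a;
        for (double& a : alpha) a /= z;                    // normalize
        return alpha;
    }

    // Predict the most probable next move: propagate the state belief one
    // step, then average the emission distributions.
    int predictNextMove(const std::vector<int>& obs) {
        std::array<double, kStates> belief = forwardPass(obs);
        double pMove[kMoves] = {0.0, 0.0, 0.0};
        for (int j = 0; j < kStates; ++j) {
            double pState = 0.0;
            for (int i = 0; i < kStates; ++i) pState += belief[i] * A[i][j];
            for (int m = 0; m < kMoves; ++m) pMove[m] += pState * B[j][m];
        }
        int best = 0;
        for (int m = 1; m < kMoves; ++m) if (pMove[m] > pMove[best]) best = m;
        return best;
    }

    int main() {
        std::vector<int> observed = {PUNCH, PUNCH, KICK, PUNCH};  // opponent's recent moves
        const char* names[kMoves] = {"punch", "kick", "block"};
        std::printf("predicted next move: %s\n", names[predictNextMove(observed)]);
        return 0;
    }

In the actual system, the transition and emission probabilities would be learned and updated from the human player's observed behavior rather than fixed by hand.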

CS297 Results

  • Developed a word prediction program based on n-grams and a Rock, Paper, Scissors game where the computer predicts the player's next move using a string-matching algorithm (a sketch of the n-gram approach follows this list).
  • Wrote a document of game concept, describing Alpha Fighter's game-play, concept, and features.
  • Created a partial character model to be used in the game.
  • Developed a move prediction program using a Hidden Markov Model.
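
The sketch below gives a rough, hypothetical picture of the n-gram idea behind the CS297 prediction work: count how often each move followed each pair of preceding moves, then predict the most frequent continuation of the current pair. The TrigramPredictor class and the R/P/S symbols are illustrative assumptions, not the code actually written for CS297.

    #include <iostream>
    #include <map>
    #include <string>
    #include <vector>

    // Hypothetical trigram predictor: given the last two symbols, predict the
    // symbol that most often followed that pair in the observed history.
    // Symbols stand for moves (e.g. "R", "P", "S" in Rock, Paper, Scissors).
    class TrigramPredictor {
    public:
        void observe(const std::string& symbol) {
            history_.push_back(symbol);
            size_t n = history_.size();
            if (n >= 3) {
                std::string context = history_[n - 3] + "," + history_[n - 2];
                counts_[context][history_[n - 1]]++;
            }
        }

        // Most frequent continuation of the current two-symbol context,
        // or an empty string if the context has never been seen.
        std::string predict() const {
            size_t n = history_.size();
            if (n < 2) return "";
            std::string context = history_[n - 2] + "," + history_[n - 1];
            auto it = counts_.find(context);
            if (it == counts_.end()) return "";
            std::string best;
            int bestCount = 0;
            for (const auto& kv : it->second) {
                if (kv.second > bestCount) { best = kv.first; bestCount = kv.second; }
            }
            return best;
        }

    private:
        std::vector<std::string> history_;
        std::map<std::string, std::map<std::string, int>> counts_;
    };

    int main() {
        TrigramPredictor predictor;
        for (const std::string& move : {"R", "P", "R", "P", "R", "P", "R"})
            predictor.observe(move);
        std::cout << "predicted next move: " << predictor.predict() << std::endl;  // "P"
        return 0;
    }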

Proposed Schedule

Week 1 (Jan. 23-29): Submit Deliverable 1 - a high-level description of the AI system. Work on Stage 1: getting the game core running, including the camera, a simple scene graph, and a basic graphics module. The result will be a skybox and a single character on the screen.
Week 2 (Jan. 30-Feb. 5): Finish Stage 1.
Weeks 3-4 (Feb. 6-19): Work on Stage 2: controls and animation capabilities.
Week 5 (Feb. 20-26): Work on Stage 3: adding title, options, and character selection screens with menus.
Week 6 (Feb. 27-Mar. 5): Submit Deliverable 2 - the game design document. Work on Stage 4: incorporate the other fighter into the game and add collision detection/response.
Weeks 7-8 (Mar. 6-19): Work on Stage 5: implementing the AI system.
Week 9 (Mar. 20-26): Submit Deliverable 3 - Alpha Fighter. Write the first draft of the report and have it reviewed by the committee.
Week 10 (Mar. 27-Apr. 1): Produce the second draft of the report and submit it to the graduate studies office.
Weeks 11-18 (Apr. 3-May 17): Fix errors found by the graduate studies office and produce the final draft of the report for binding. Prepare for the defense.

Key Deliverables:

  • Software
    • A high-level design of how the AI system will work. This will include a diagram of the Hidden Markov Model or Dynamic Bayesian Network to be used for Alpha Fighter, as well as how the model will be updated to adapt to the environment.
    • Game design document - This will explain the concept, game-play, background, controls, and features of the game. It will also go into detail about the design of the program. This will include UML diagrams and an overview of any complex algorithms used.
    • Alpha Fighter - The finished game will have many components (a minimal structural sketch follows this list), including:
      • Game core - This module is responsible for coordinating the update and display of the game as well as managing the other components.
      • Graphics - This module provides a wrapper around the Direct3D library and is responsible for rendering the scene as well as managing graphical resources.
      • Input - This module includes a wrapper around the DirectInput library to receive user input. It is also responsible for reacting to user input such as moving the fighter or attacking.
      • Sound - This module provides a wrapper around the DirectSound and DirectMusic libraries, enabling the game to play sound and music.
      • Physics - This module provides the collision detection and handling for the game. It is also responsible for the effects of external forces such as gravity and friction.
      • AI - This module provides the behavior for the NPC. It is responsible for controlling the NPC fighter.
      • Game logic - This module handles the miscellaneous tasks required in the game. This includes transitioning between game states, checking for game over conditions, and adjusting various game parameters.
  • Report
    • First draft of the report.
    • Draft to be submitted to the graduate studies office for review.
    • Final draft to be submitted for binding.
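
To make the intended decomposition concrete, here is a minimal, hypothetical C++ sketch of how the game core might drive the other modules each frame. The class and method names (GameCore, runFrame, and the placeholder module structs) are illustrative only; the real interfaces will be specified in the game design document.

    #include <vector>

    // Placeholder interfaces standing in for the planned modules; the names
    // are illustrative only, not the actual Alpha Fighter class design.
    struct InputModule    { void poll()           { /* read DirectInput devices */ } };
    struct AIModule       { void decide(float dt) { /* choose the NPC's next action */ } };
    struct PhysicsModule  { void step(float dt)   { /* collisions, gravity, friction */ } };
    struct GameLogic      { bool gameOver() const { return false; }
                            void update(float dt) { /* state transitions, win conditions */ } };
    struct SoundModule    { void update()         { /* keep music and effects playing */ } };
    struct GraphicsModule { void render()         { /* draw the scene with Direct3D */ } };

    // The game core coordinates one frame: gather input, let the AI act,
    // advance physics and game logic, then play audio and render.
    class GameCore {
    public:
        void runFrame(float dt) {
            input_.poll();
            ai_.decide(dt);
            physics_.step(dt);
            logic_.update(dt);
            sound_.update();
            graphics_.render();
        }
        bool finished() const { return logic_.gameOver(); }

    private:
        InputModule    input_;
        AIModule       ai_;
        PhysicsModule  physics_;
        GameLogic      logic_;
        SoundModule    sound_;
        GraphicsModule graphics_;
    };

    int main() {
        GameCore core;
        const float dt = 1.0f / 60.0f;   // fixed 60 Hz update step
        for (int frame = 0; frame < 3 && !core.finished(); ++frame)
            core.runFrame(dt);           // a real game would loop until the player quits
        return 0;
    }

Keeping the per-frame ordering in one place (input, AI, physics, logic, sound, rendering) makes it easy to reason about which modules see up-to-date state each frame.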

Innovations and Challenges

  • Learning in video game AI systems is still quite new; indeed, sophisticated video game AI in general is relatively new. A successful implementation of the proposed AI system would therefore place Alpha Fighter at the forefront of game AI technology.
  • Development of character models and animations has proven to be quite challenging for the author.
  • Successfully implementing and tweaking the AI system will be challenging.
  • A game has many components. The sheer scope of the program and the limited time available to complete the thesis will be a challenge. Although the author has a previously developed code base to build on, it must still be reworked to suit Alpha Fighter.
  • Collision detection and reaction will be very important. In a fighting game the player expects collision detection to be very accurate. Also, the context in which the collision occurred is important. For example, colliding with a hand is handled differently depending on whether the hand was part of a punch (see the sketch after this list).
  • Animation of the characters may prove to be challenging. Not only must predefined, key-framed animations be played at the right time and flow smoothly, but dynamic animations also need to be generated when a fighter gets hit.
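
As a small, hypothetical illustration of context-dependent collision handling, the sketch below treats a hand collision as a landed hit only when the attacker's punch animation is active, and as incidental contact otherwise. The types, fields, and responses are invented for illustration and are not the game's actual collision design.

    #include <cstdio>

    // Hypothetical body-part and fighter descriptions used only to
    // illustrate context-dependent collision response.
    enum class BodyPart { Hand, Foot, Torso };

    struct Fighter {
        bool punching = false;   // true while a punch animation is active
        int  health   = 100;
    };

    // A hand in an active punch deals damage; an idle hand only produces a
    // light push-back so the fighters do not interpenetrate.
    void resolveCollision(Fighter& attacker, Fighter& defender, BodyPart part) {
        if (part == BodyPart::Hand && attacker.punching) {
            defender.health -= 10;   // hit lands: apply damage
            std::printf("punch connected, defender health = %d\n", defender.health);
        } else {
            std::printf("incidental contact, apply push-back only\n");
        }
    }

    int main() {
        Fighter player, npc;
        player.punching = true;
        resolveCollision(player, npc, BodyPart::Hand);   // treated as a landed punch
        player.punching = false;
        resolveCollision(player, npc, BodyPart::Hand);   // treated as incidental contact
        return 0;
    }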

References:

[Charniak96] Charniak, E. (1996). Statistical language learning. Cambridge, MA: MIT Press.

[Elias04] Elias, H. (2004). Inverse kinematics - improved methods. Retrieved December 6, 2004, from http://freespace.virgin.net/hugo.elias/models/m_ik2.htm

[Evans02] Evans, R. (2002). Varieties of learning. In S. Rabin (Ed.), AI game programming wisdom (pp. 567-578). Hingham, MA: Charles River Media.

[Jakobsen03] Jakobsen, T. (2003, January). Advanced character physics. Retrieved December 6, 2004, from http://www.gamasutra.com/resource_guide/20030121/jacobson_01.php

[Kaukoranta04] Kaukoranta, T., Smed, J., & Hakonen, H. (2004). Understanding pattern recognition methods. In S. Rabin (Ed.), AI game programming wisdom 2 (pp. 579-589). Hingham, MA: Charles River Media.

[Laramée02] Laramée, F. D. (2002). Using n-gram statistical models to predict player behavior. In S. Rabin (Ed.), AI game programming wisdom (pp. 596-601). Hingham, MA: Charles River Media.

[Luna04] Luna, F. (2004). Skinned mesh character animation with Direct3D 9.0c. Retrieved December 15, 2004, from http://www.moon-labs.com/resources/d3dx_skinnedmesh.pdf

[Manslow02] Manslow, J. (2002). Learning and adaptation. In S. Rabin (Ed.), AI game programming wisdom (pp. 557-566). Hingham, MA: Charles River Media.

[Manslow04] Manslow, J. (2004). Using reinforcement learning to solve AI control problems. In S. Rabin (Ed.), AI game programming wisdom 2 (pp. 591-601). Hingham, MA: Charles River Media.

[Microsoft03] Microsoft Corporation. (2003). Microsoft DirectX 9.0c SDK Documentation.

[Mommersteeg02] Mommersteeg, F. (2002). Pattern recognition with sequential prediction. In S. Rabin (Ed.), AI game programming wisdom (pp. 586-595). Hingham, MA: Charles River Media.

[Rabin04a] Rabin, S. (2004). Common game AI techniques. In S. Rabin (Ed.), AI game programming wisdom 2 (pp. 3-14). Hingham, MA: Charles River Media.

[Rabin04b] Rabin, S. (2004). Promising game AI techniques. In S. Rabin (Ed.), AI game programming wisdom 2 (pp. 15-27). Hingham, MA: Charles River Media.

[Russell03] Russell, S., & Norvig, P. (2003). Artificial intelligence: a modern approach (2nd ed.). Upper Saddle River, NJ: Prentice Hall.

[Sloper01] Sloper, T. (2001). Lesson #2: sample outline for a game design. Retrieved December 6, 2004, from http://www.sloperama.com/advice/specs.htm

[Tozour02a] Tozour, P. (2002). Introduction to Bayesian networks and reasoning under uncertainty. In S. Rabin (Ed.), AI game programming wisdom (pp. 345-357). Hingham, MA: Charles River Media.

[Tozour02b] Tozour, P. (2002). The evolution of game AI. In S. Rabin (Ed.), AI game programming wisdom (pp. 3-15). Hingham, MA: Charles River Media.