Simple multi-layer AI using Keras (Built on top of TensorFlow and Theano)
Description:
In order to play the game, the machine needs to be able to control it. Deliverable 2 consists of a Python program that controls the keyboard and mouse of the laptop. This program is then used to control the game and to take screenshots of it (a small sketch follows below).
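For context, here is a minimal sketch of that kind of keyboard/mouse control in Python. It assumes the PyAutoGUI library; the library actually used in the Deliverable 2 program is not named on this page, so treat this only as an illustration.

    # Sketch of keyboard/mouse control plus a screenshot, assuming PyAutoGUI
    # (the library used in the actual Deliverable 2 program is not stated here).
    import pyautogui

    pyautogui.moveTo(200, 300)           # move the mouse to screen position (200, 300)
    pyautogui.click()                    # left-click at the current position
    pyautogui.press('space')             # press and release the space key
    shot = pyautogui.screenshot()        # capture the screen as a PIL image
    shot.save('frame.png')               # save the frame for later processing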
About the program (cnn.py):
- The first hidden layer is a convolutional layer, Convolution2D. It has 30 feature maps of size 5 by 5 and a rectifier (ReLU) activation function. This is the input layer, expecting images with the structure outlined above: [pixels][width][height].
- Next is a max pooling layer, MaxPooling2D, configured with a pool size of 2 by 2.
- The third hidden layer is another convolutional layer, Convolution2D, with 15 feature maps of size 3 by 3.
- Next is another max pooling layer, MaxPooling2D, again with a pool size of 2 by 2.
- The next layer is a regularization layer, Dropout. It is configured to randomly exclude 20% of the neurons in the layer in order to reduce overfitting.
- Next is a Flatten layer, which converts the 2D feature maps into a vector so the output can be processed by standard fully connected layers.
- Next is a fully connected layer with 128 neurons and a rectifier activation function.
- After that is another fully connected layer with 50 neurons and a rectifier activation function.
- Finally, the output layer has 10 neurons, one for each of the 10 classes, with a softmax activation function to output probability-like predictions for each class. (A sketch of this model in Keras follows the list.)
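For reference, here is a minimal sketch of what a cnn.py matching the description above could look like. It is a sketch under assumptions, not the exact file: it uses the newer Keras layer name Conv2D (the description above uses the older Convolution2D), loads MNIST via keras.datasets, and uses the TensorFlow-default channels-last input shape (28, 28, 1); the [pixels][width][height] ordering mentioned above is the Theano channels-first equivalent.

    # Minimal sketch of a cnn.py matching the layer description above.
    # Assumptions: newer Keras API (Conv2D instead of Convolution2D) and
    # channels-last input of shape (28, 28, 1).
    from keras.datasets import mnist
    from keras.models import Sequential
    from keras.layers import Conv2D, MaxPooling2D, Dropout, Flatten, Dense
    from keras.utils import np_utils

    # Load MNIST and scale pixel values to [0, 1]
    (X_train, y_train), (X_test, y_test) = mnist.load_data()
    X_train = X_train.reshape(-1, 28, 28, 1).astype('float32') / 255
    X_test = X_test.reshape(-1, 28, 28, 1).astype('float32') / 255
    y_train = np_utils.to_categorical(y_train, 10)
    y_test = np_utils.to_categorical(y_test, 10)

    model = Sequential()
    # 30 feature maps, 5x5 kernels, ReLU activation (input layer)
    model.add(Conv2D(30, (5, 5), activation='relu', input_shape=(28, 28, 1)))
    model.add(MaxPooling2D(pool_size=(2, 2)))   # 2x2 max pooling
    # 15 feature maps, 3x3 kernels
    model.add(Conv2D(15, (3, 3), activation='relu'))
    model.add(MaxPooling2D(pool_size=(2, 2)))   # 2x2 max pooling
    model.add(Dropout(0.2))                     # drop 20% of neurons to reduce overfitting
    model.add(Flatten())                        # 2D feature maps -> 1D vector
    model.add(Dense(128, activation='relu'))    # fully connected, 128 neurons
    model.add(Dense(50, activation='relu'))     # fully connected, 50 neurons
    model.add(Dense(10, activation='softmax'))  # 10 classes, probability-like output

    model.compile(loss='categorical_crossentropy', optimizer='adam',
                  metrics=['accuracy'])
    model.fit(X_train, y_train, validation_data=(X_test, y_test),
              epochs=10, batch_size=200)        # fewer epochs -> shorter runtime
    model.save('my_model.h5')                   # the file the steps below refer to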
Steps to run the program:
- Download the program that trains on MNIST from here: cnn.py
- Download the Theano library from here, OR (if using Linux) type the following in your terminal: "pip install theano"
- Download the TensorFlow library from here, OR (if using Linux) type the following in your terminal: "pip install tensorflow"
- Download the Keras library from here, OR (if using Linux) type the following in your terminal: "pip install keras"
- To run the program, type the following in your terminal: "python cnn.py"
- After the program runs successfully, an HDF5 file called "my_model.h5" will be generated.
- Once the model has been generated, download the program that predicts the labels from here: cnn_test.py (a sketch of this step is shown after these steps)
- To run it, type the following in your terminal: "python cnn_test.py"
- You should now see the labels of the predicted images!
Note: cnn.py might take a long time to run. To shorten the run time, decrease the number of epochs.
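For reference, here is a minimal sketch of the prediction side (cnn_test.py). It assumes the model was saved as my_model.h5 by the training sketch above and that the test images come from keras.datasets.mnist; the actual cnn_test.py may load its images differently.

    # Minimal sketch of the prediction step (cnn_test.py), assuming my_model.h5
    # exists and that test images come from keras.datasets.mnist.
    import numpy as np
    from keras.datasets import mnist
    from keras.models import load_model

    (_, _), (X_test, y_test) = mnist.load_data()
    X_test = X_test.reshape(-1, 28, 28, 1).astype('float32') / 255

    model = load_model('my_model.h5')      # reload the trained network
    probs = model.predict(X_test[:5])      # probabilities over the 10 classes
    print(np.argmax(probs, axis=1))        # predicted labels
    print(y_test[:5])                      # true labels, for comparison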