Chris Pollett> CS256

HW#4 --- last modified November 22 2021 18:47:06.

Solution set.

Due date: Nov 19

Files to be submitted:
  Hw4.zip

Purpose: To learn how to build more complicated nets with Keras and TensorFlow, and to conduct regularization and optimization experiments.

Related Course Outcomes:

The main course outcomes covered by this assignment are:

CLO4 -- Be able to select neural network layer types to build a network suitable for various learning tasks such as object classification, object detection, language processing, planning, policy selection, etc.

CLO5 -- Be able to select an appropriate regularization technique for a given learning task.

CLO6 -- Be able to code and train with a library such as Tensorflow or Pytorch a multi-layer neural network.

CLO7 -- Be able to measure the performance of a model, determine if more data is needed, as well as how to tune the model.

Description: For this homework you will implement variations on LeNET5 in Keras capable of learning the card suit training data of Hw2. To this end, first modify your training suit generator so that it outputs 32x32 images. Then, for your training network, modify the LeNET5 from the Nov. 8 lecture, replacing the output layer with a softmax layer with four classes. Your program will be run for training from the command line using the syntax:

python cnn_net.py train kernel_type topology loss optimizer model_file_name data_folder

Here train is exactly the string train and indicates that you are training the net. The remaining arguments are:

  1. kernel_type is 5square or 5diamond, indicating whether we are using standard 5x5 kernels or custom layers implementing size 5 diamond-shaped kernels;
  2. topology is either 1 or 2, indicating whether we have the normal LeNET or one with skip connections;
  3. loss should be one of cross, cross-l1, or cross-l2, indicating the loss function should be cross entropy, cross entropy with L1 regularization, or cross entropy with L2 regularization;
  4. optimizer should be one of sgd or adam, indicating the Keras optimizer should be SGD (gradient descent with momentum) or the Adam algorithm;
  5. model_file_name should be the name of a file to save the model to in h5 format;
  6. data_folder should be the name of a folder containing your card suit data.

You should implement the 5diamond feature by creating a subclass of keras.layers.Layer. A filter used to make a feature map for such a layer should have the shape:

__X__
_XXX_
XXXXX
_XXX_
__X__

where the input positions used to produce one output feature map pixel are indicated by X. To create the layers of your network, you should use the functional API, using the Model class directly; i.e., do not use the Sequential class. In the case where topology is set to 1, the overall network should just be LeNET5 where the last layer is a softmax. In the case where topology is set to 2, the network layers 1-6 should be the same as before. However, layer 7 now should receive 136 inputs rather than 120, but still have 84 outputs. The additional 16 inputs should come from doing a 10x10 max-pooling of each of the 16 feature maps of layer 3.
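One way to realize the diamond constraint, as a sketch: build a 0/1 mask over the 5x5 kernel positions whose Manhattan distance from the center is at most 2, then, inside your keras.layers.Layer subclass, multiply the kernel weights by this mask before convolving. The mask construction below is plain Python so it can be checked independently of TensorFlow; the name diamond_mask and the commented layer wiring are my own assumptions, not part of the spec.

```python
def diamond_mask(size=5):
    """Return a size x size grid of 0/1 ints, 1 where the cell lies
    within Manhattan distance size//2 of the center -- a diamond."""
    c = size // 2
    return [[1 if abs(i - c) + abs(j - c) <= c else 0
             for j in range(size)]
            for i in range(size)]

# Inside a keras.layers.Layer subclass one might (sketch, untested):
#   mask = tf.constant(diamond_mask(5), dtype=tf.float32)[:, :, None, None]
#   ... and in call(): tf.nn.conv2d(x, self.kernel * mask, ...)
# so the weights outside the diamond never contribute to the output.

if __name__ == "__main__":
    for row in diamond_mask(5):
        print("".join("X" if v else "_" for v in row))
```

Run directly, this prints the same X/underscore picture as the diagram above, which is a quick way to check the mask matches the intended shape.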

In addition to the command line controllable training parameters indicated above, your program will probably need to set parameters like batch size, number of epochs for training, etc. Pull these out as constants at the top of your code. Do training on the standard LeNET5 setup to see what values work the best before conducting the additional experiment below.
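The argument handling and pulled-out constants might look like the following minimal sketch (the function name parse_train_args and the particular constant values are my own; the real script would pass the returned values on to its model-building code):

```python
import sys

# Hyperparameters not controlled from the command line, pulled out
# as constants as described above (example values only).
BATCH_SIZE = 32
NUM_EPOCHS = 20

def parse_train_args(argv):
    """Map the positional arguments of
    `python cnn_net.py train kernel_type topology loss optimizer
     model_file_name data_folder`
    into a dict, validating the constrained choices."""
    _, mode, kernel_type, topology, loss, optimizer, model_file, data_folder = argv
    assert mode == "train"
    assert kernel_type in ("5square", "5diamond")
    assert topology in ("1", "2")
    assert loss in ("cross", "cross-l1", "cross-l2")
    assert optimizer in ("sgd", "adam")
    return {"kernel_type": kernel_type, "topology": int(topology),
            "loss": loss, "optimizer": optimizer,
            "model_file": model_file, "data_folder": data_folder}
```

For example, parse_train_args(["cnn_net.py", "train", "5diamond", "2", "cross-l2", "adam", "m.h5", "data"]) returns a dict with topology 2 and optimizer "adam".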

For testing your program will be run as:

python cnn_net.py test model_file_name data_folder

When run in test mode, your program should output the overall accuracy over the test data, the rates from the confusion matrix, the total runtime for training, and the total runtime for testing.
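The accuracy and the rates can be computed directly from the true and predicted labels; a pure-Python sketch, assuming a 4x4 confusion matrix with rows indexed by true class and columns by predicted class (the function names are my own):

```python
def confusion_matrix(y_true, y_pred, num_classes=4):
    """Rows = true class, columns = predicted class."""
    m = [[0] * num_classes for _ in range(num_classes)]
    for t, p in zip(y_true, y_pred):
        m[t][p] += 1
    return m

def accuracy(m):
    """Fraction of examples on the diagonal of the confusion matrix."""
    correct = sum(m[i][i] for i in range(len(m)))
    total = sum(sum(row) for row in m)
    return correct / total

def per_class_rates(m):
    """Per-class true-positive rate (recall): the diagonal of the
    row-normalized confusion matrix."""
    return [row[i] / sum(row) if sum(row) else 0.0
            for i, row in enumerate(m)]
```

The training and testing runtimes can then be recorded around the relevant calls with, e.g., time.perf_counter().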

Once you have written the above program, I would like you to design and conduct experiments which investigate the following:

  1. How does the choice of loss function affect the training convergence rate of your model? How does it affect the final accuracy?
  2. How does the choice of kernel type affect the training of your model? How does it affect the final accuracy?
  3. How does the choice of optimizer affect the training of your model? How does it affect the final accuracy?
  4. How does the choice of topology affect the training of your model? How does it affect the final accuracy?

The write-ups of your experiments should follow the guidelines from the Sep 29 lecture. Make sure to have at least one graph/experiment generated using matplotlib. Put your experiment write-ups in a file Hw4.pdf, included in your Hw4.zip file.

Finally, for Hw5 I am going to let you create a neural network of your choice, subject to certain conditions. As Hw5 is already up, please take a look at it. For Hw4, I want you to put a HW5Proposal.txt file into your Hw4.zip where you tell me what you want to try for Hw5, so I can do a quick sanity check to make sure it is feasible.

Point Breakdown

cnn_net.py works according to the spec above, and suits_generator.py from Hw2 was modified to output 32x32 images (2pts)
Experiments 1-4 (2pts each) (8pts)
Total: 10pts