Chris Pollett > CS156
HW#5 --- last modified Wednesday, 06-Dec-2017 21:54:07 PST.

Solution set.

Due date: Dec 11

Files to be submitted:

Purpose: To learn about default reasoning. To understand the computations a probabilistic agent might do. To gain experience with learning algorithms.

Related Course Outcomes:

The main course outcomes covered by this assignment are:

LO11 -- Students should be able to describe default reasoning.

LO12 -- Students should be able to describe or implement at least one learning algorithm.


This homework will consist of both a written and a coding part. Submit the written part in a file Hw5.pdf as part of your zip. It should consist of answers to the following questions:

  1. Come up with a simple knowledge base that has facts of the form Fish(Salmon), Fish(Tuna), ... and Bird(Pigeon), Bird(Seagull), ... Come up with default rules that say fish by default swim, fish by default do not fly, birds by default do not swim, and birds by default fly. Include in your knowledge base one counterexample bird and one counterexample fish. Compute an extension of your default logic knowledge base.
  2. Consider the following partially explored Wumpus World (ignore how we ended up with a world explored in this fashion):
    Here `X` indicates explored but nothing detected; `B` indicates a breeze was felt on that square; `S` indicates a stench was smelt. For each square on the frontier, use the method from the Nov 27 lecture to compute the probability that the given square is safe. Which square should a rational agent who must find the gold search next?
  3. Consider the following table of data concerning whether or not to sleep in:
    Have Coffee | Day of Week | Sunny Outside | Stayed Up Late | Sleep In
    In the above, DNM means does not matter for this case. Work out by hand (show work) the decision tree that our decision tree learning algorithm would compute for the above training set.
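As a reminder of the computation the decision tree learning algorithm performs at each node, the sketch below shows entropy and information gain for choosing a split attribute. The tiny two-attribute dataset is purely illustrative and is not the homework's table; you would apply the same calculation by hand to the table above.

```python
from collections import Counter
from math import log2

def entropy(labels):
    # H(S) = -sum over classes of p * log2(p), where p is the class proportion
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def info_gain(rows, labels, attr_index):
    # Expected reduction in entropy from splitting on attribute attr_index
    n = len(labels)
    splits = {}
    for row, label in zip(rows, labels):
        splits.setdefault(row[attr_index], []).append(label)
    remainder = sum(len(s) / n * entropy(s) for s in splits.values())
    return entropy(labels) - remainder

# Illustrative toy set (NOT the homework table): attributes are
# (Have Coffee, Day of Week); the label is Sleep In.
rows = [("Yes", "Sat"), ("Yes", "Mon"), ("No", "Sat"), ("No", "Mon")]
labels = ["No", "No", "Yes", "Yes"]
print(info_gain(rows, labels, 0))  # Have Coffee perfectly predicts the label here
print(info_gain(rows, labels, 1))  # Day of Week carries no information here
```

The attribute with the largest gain becomes the root of the (sub)tree, and the algorithm recurses on each branch.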

For the coding portion of the homework, I'd like you to code the perceptron learning algorithm discussed in class and use it to learn two-input threshold functions.

Your program will be run as follows:

python slope intercept num_examples

The perceptron you are training should learn to output 1 on inputs (x, y) such that y - slope * x - intercept ≥ 0, and to output 0 otherwise. To do this, your program will generate num_examples many examples of the form (x, y, label), where label is 1 or 0 depending on whether y - slope * x - intercept ≥ 0, and will train a two-input perceptron with initially random weights using the training algorithm from class. After training on all the examples, your program will output its final weights. To choose a training example, pick the x value uniformly at random from ± intercept/slope and the y value uniformly at random from ± intercept (the b of the slope-intercept form y = slope * x + b), then determine whether the pair satisfies y - slope * x - intercept ≥ 0.
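A minimal sketch of what such a trainer might look like, assuming the standard perceptron update rule (each weight is nudged by learning_rate * error * input on a misclassified example). The learning rate, seed, and function name are my own illustrative choices, not part of the assignment, and the sketch assumes slope is nonzero.

```python
import random
import sys

def train_perceptron(slope, intercept, num_examples, learning_rate=0.1, seed=0):
    """Train a two-input perceptron to predict whether y - slope*x - intercept >= 0.

    Weights are (w0, w1, w2) for the bias, x, and y inputs; assumes slope != 0.
    """
    rng = random.Random(seed)
    w = [rng.uniform(-1, 1) for _ in range(3)]  # initially random weights
    for _ in range(num_examples):
        # Draw a training example as the assignment describes
        x = rng.uniform(-abs(intercept / slope), abs(intercept / slope))
        y = rng.uniform(-abs(intercept), abs(intercept))
        label = 1 if y - slope * x - intercept >= 0 else 0
        guess = 1 if w[0] + w[1] * x + w[2] * y >= 0 else 0
        # Perceptron update: only changes weights on a misclassified example
        error = label - guess
        w[0] += learning_rate * error
        w[1] += learning_rate * error * x
        w[2] += learning_rate * error * y
    return w

if __name__ == "__main__" and len(sys.argv) == 4:
    slope, intercept, n = float(sys.argv[1]), float(sys.argv[2]), int(sys.argv[3])
    print(train_perceptron(slope, intercept, n))
```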

As part of your project, you should conduct some experiments by varying num_examples and experimentally determining how accurately your trained perceptron (based on the output weights) classifies new data drawn in the same way. To do this, write a second program, which runs from the command line with a line like:

python slope intercept weight0 weight1 weight2 num_tests

Here weight0, weight1, and weight2 are the values output from your first program, slope and intercept are the values used to train it, and num_tests is the number of test triples to test on. These test triples should be generated in the same way as in your first program. On the inputs above, this program should output the fraction of the test examples that the given weights would have classified correctly. Using this program, conduct your experiments. Write up these experiments in a file Experiments.pdf, which you also include with your project.

Point Breakdown

Written exercises (2pts each): 6pts
Computes num_examples training examples as described: 1/2pt
Outputs final weights of trained perceptron: 1/2pt
Trains a perceptron using the training algorithm from class: 1pt
Works as described: 1pt
Experiment write-up: 1pt

Total: 10pts