Nonparametric Learning




CS156

Chris Pollett

May 11, 2015

Outline

Feed-Forward Learning

More Feed-Forward Learning

Back Propagation Learning Algorithm

Putting this all together we get the following algorithm:

The Back Propagation Algorithm

The book has a graph showing that, on the restaurant example, decision tree learning performs only slightly better than a feed-forward network.
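The slides don't reproduce the algorithm's pseudocode here, so as a hedged illustration only: below is a minimal one-hidden-layer sigmoid network trained by back propagation on XOR. The network size, learning rate, and the XOR task are my own choices for the sketch, not taken from the slides; the function name `train_backprop` is likewise hypothetical.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train_backprop(data, hidden=4, epochs=4000, lr=0.5, seed=1):
    """Train a one-hidden-layer sigmoid network with back propagation.

    data: list of ((x1, x2), y) pairs with y in {0, 1}.
    Returns (predict, initial_loss, final_loss)."""
    rng = random.Random(seed)
    # w_h[j] = (w1, w2, bias) for hidden unit j; w_o = output weights + bias
    w_h = [[rng.uniform(-1, 1) for _ in range(3)] for _ in range(hidden)]
    w_o = [rng.uniform(-1, 1) for _ in range(hidden + 1)]

    def forward(x1, x2):
        h = [sigmoid(w[0] * x1 + w[1] * x2 + w[2]) for w in w_h]
        o = sigmoid(sum(w_o[j] * h[j] for j in range(hidden)) + w_o[-1])
        return h, o

    def loss():
        return sum((y - forward(*x)[1]) ** 2 for x, y in data) / len(data)

    initial = loss()
    for _ in range(epochs):
        for (x1, x2), y in data:
            h, o = forward(x1, x2)
            # output delta: error times the sigmoid derivative o*(1-o)
            d_o = (y - o) * o * (1 - o)
            # hidden deltas: propagate d_o back through the output weights
            d_h = [d_o * w_o[j] * h[j] * (1 - h[j]) for j in range(hidden)]
            for j in range(hidden):
                w_o[j] += lr * d_o * h[j]
            w_o[-1] += lr * d_o
            for j in range(hidden):
                w_h[j][0] += lr * d_h[j] * x1
                w_h[j][1] += lr * d_h[j] * x2
                w_h[j][2] += lr * d_h[j]
    return (lambda x1, x2: forward(x1, x2)[1]), initial, loss()
```

Note the two-phase structure the lecture describes: a forward pass to compute activations, then a backward pass propagating output error to earlier weights.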

Nonparametric Learning

Nearest Neighbor Model
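The idea of the model can be sketched in a few lines: classify a query point by a majority vote among its k nearest training examples. This is a minimal sketch, not the slides' code; the function name `knn_classify` and the default k=3 are my own choices.

```python
import math
from collections import Counter

def knn_classify(query, examples, k=3):
    """Classify query by majority vote among the k nearest training examples.

    examples: list of (point, label) pairs; points are numeric tuples.
    Uses Euclidean distance (see the distance-function discussion below)."""
    def dist(p, q):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))
    nearest = sorted(examples, key=lambda ex: dist(ex[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]
```

Note there is no training phase at all: the "model" is simply the stored examples, which is what makes the method nonparametric.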

Choice of Distance Function
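A common family of choices here is the Minkowski (L^p) distance, which includes Manhattan and Euclidean distance as special cases. The sketch below is illustrative; the function name `minkowski` is my own.

```python
def minkowski(x, z, p=2):
    """Minkowski (L^p) distance between numeric tuples x and z.

    p=1 gives Manhattan (city-block) distance, p=2 Euclidean.
    In practice features are often normalized (e.g. by their standard
    deviation) first, so no one dimension dominates the sum."""
    return sum(abs(a - b) ** p for a, b in zip(x, z)) ** (1.0 / p)
```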

Making Lookups of Nearest Neighbors Efficient

Quiz

Which of the following is true?

  1. The Importance function in our decision tree learning algorithm chooses, from the remaining attributes, the one that provides the least information gain as the attribute to use for the root of the tree.
  2. A 2-input perceptron, provided we've chosen the weights accordingly, can be used to compute the XOR of its inputs.
  3. Our learning algorithm for sigmoid perceptrons tries to choose weights so as to minimize the loss function, and does this using a steepest descent method.

Support Vector Machines

Building a Better Separator

Separators for traditional perceptron model (a) and SVM model (b)

Computing Maximum Margin Separator
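The exact maximum-margin separator is usually found by solving a quadratic program; as a lighter-weight sketch only, the code below approximates it by subgradient descent on the regularized hinge loss (a soft-margin formulation). The function name `hinge_svm`, the learning rate, and the regularization constant are all my own choices for the illustration.

```python
import random

def hinge_svm(data, lam=0.01, lr=0.1, epochs=500, seed=0):
    """Approximate the maximum-margin separator by subgradient descent on
    the regularized hinge loss  lam*|w|^2/2 + avg max(0, 1 - y*(w.x + b)).

    data: list of (x_tuple, y) pairs with y in {-1, +1}.
    Returns the separator parameters (w, b)."""
    rng = random.Random(seed)
    w = [0.0] * len(data[0][0])
    b = 0.0
    for _ in range(epochs):
        for x, y in rng.sample(data, len(data)):
            margin = y * (sum(wi * xi for wi, xi in zip(w, x)) + b)
            if margin < 1:  # point inside the margin: hinge term is active
                w = [wi + lr * (y * xi - lam * wi) for wi, xi in zip(w, x)]
                b += lr * y
            else:           # outside the margin: only the regularizer acts
                w = [wi * (1 - lr * lam) for wi in w]
    return w, b
```

The regularizer shrinks |w|, and since the margin width is 2/|w|, minimizing |w| subject to classifying the data correctly is exactly what "maximum margin" means.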

SVMs in General

Example of SVM with a kernel
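As a small worked illustration of the kernel trick (my own sketch, not the slides' example): the quadratic kernel k(x, z) = (x . z)^2 computes the dot product in the feature space F(x) = (x1^2, x2^2, sqrt(2) x1 x2) without ever constructing F(x) explicitly, and in that space the XOR points, which no linear separator handles in the original space, become linearly separable.

```python
import math

def phi(x):
    """Explicit quadratic feature map F(x) = (x1^2, x2^2, sqrt(2)*x1*x2)."""
    x1, x2 = x
    return (x1 * x1, x2 * x2, math.sqrt(2) * x1 * x2)

def quad_kernel(x, z):
    """Quadratic kernel k(x, z) = (x . z)^2.

    Equals dot(phi(x), phi(z)) but never builds the feature vectors."""
    return (x[0] * z[0] + x[1] * z[1]) ** 2

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def g(p):
    """A linear function in feature space; on F(x) it equals (x1 - x2)^2,
    so thresholding it at 0.5 separates the XOR classes."""
    return p[0] + p[1] - math.sqrt(2) * p[2]
```

An SVM with this kernel therefore learns a linear separator in the 3-dimensional feature space, which corresponds to a quadratic decision boundary back in the original 2-dimensional input space.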