Question 1 (10 points) For this question, you will prove that a network of perceptrons can compute any logical function. All that is necessary to prove this is to show that you can implement AND, OR, and NOT in perceptrons since any logical function can be constructed by a combination of these. (It's actually even easier than this, but we'll stick with AND, OR and NOT for this question.) To answer this question, you should provide 3 sets of weights, one for a perceptron that recognizes AND, one for a perceptron that recognizes OR, and one for a perceptron that recognizes NOT.
Question 2 (10 points) Consider a learning problem where the inputs are X and Y, and the target concept is a ball of radius 1 in the X-Y plane, i.e., everything inside the ball is "True" and everything outside the ball is "False". a) Explain why the true and false classes are not linearly separable if the inputs are just X and Y. b) Suppose the inputs are now X, Y, their squares, and their product. Explain whether the classes are linearly separable now. c) Suppose you were forced to use a classifier with just one feature. Provide a single feature that would make the classes linearly separable.
In this homework, you will design a perceptron classifier. You will test your classifier on two image data sets: a set of scanned handwritten digit images and a set of face images in which edges have already been detected. Even with simple features, your classifiers will be able to do quite well on these tasks when given enough training data.
Optical character recognition (OCR) is the task of extracting text from image sources. The first data set on which you will run your classifiers is a collection of handwritten numerical digits (0-9). This is a very commercially useful technology, similar to the technique used by the US post office to route mail by zip codes. There are systems that can perform with over 99% classification accuracy (see LeNet-5 for an example system in action).
Face detection is the task of localizing faces within video or still images. The faces can be at any location and vary in size. There are many applications for face detection, including human-computer interaction and surveillance. You will attempt a simplified face detection task in which your system is presented with an image that has been pre-processed by an edge detection algorithm. The task is to determine whether the edge image is a face or not. There are several systems in use that perform quite well at the face detection task. Your digital camera or the camera on your phone may do some basic face detection.
The code for this homework includes the following files and data, available as a zip file.
- Data file, including the digit and face data.
Files you will edit:
- perceptron.py: The location where you will write your perceptron classifier.
- svmClassifier.py: The location where you will write your SVM classifier as a bonus question.
Files you should read but NOT edit:
- Abstract super class for the classifiers you will write. (You should read this file carefully to see how the infrastructure is set up.)
- I/O code to read in the classification data.
- util.py: Code defining some useful tools. You may be familiar with some of these by now, and they will save you a lot of time.
- A simple baseline classifier that just labels every instance as the most frequent class.
What to submit: You should submit perceptron.py, and svmClassifier.py if you do the bonus question.
Evaluation: Your code will be autograded for technical correctness. Please do not change the names of any provided functions or classes within the code, or you will wreak havoc on the autograder.
To try out the classification pipeline, run

python dataClassifier.py

from the command line. This will classify the digit data using the default classifier (mostFrequent), which blindly classifies every example with the most frequent label.
As usual, you can learn more about the possible command line options by running:
python dataClassifier.py -h
We have defined some simple features for you. Our simple feature set includes one feature for each pixel location, which can take values 0 or 1 (off or on). The features are encoded as a util.Counter whose keys are feature locations (represented as (column, row) tuples) and whose values are 0 or 1. The face recognition data set has value 1 only for those pixels identified by a Canny edge detector.
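A minimal sketch of this encoding, using a plain dict in place of the assignment's util.Counter (basic_features is an illustrative name, not part of the provided code):

```python
# Sketch of the binary pixel-feature encoding described above.
# The assignment's util.Counter is a dict subclass; a plain dict of
# (column, row) -> 0/1 illustrates the same idea.

def basic_features(pixels):
    """Map a 2D grid of pixel values to binary features keyed by (col, row)."""
    features = {}
    for row, line in enumerate(pixels):
        for col, value in enumerate(line):
            features[(col, row)] = 1 if value > 0 else 0
    return features

image = [[0, 1],
         [1, 0]]
feats = basic_features(image)
# feats[(1, 0)] == 1 and feats[(0, 0)] == 0
```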
Implementation Note: You'll find it easiest to hard-code the binary feature assumption. If you do, make sure you don't include any non-binary features. Or, you can write your code more generally, to handle arbitrary feature values, though this will probably involve a preliminary pass through the training set to find all possible feature values (and you'll need an "unknown" option in case you encounter a value in the test data you never saw during training).
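The preliminary pass described in the note might look like the following sketch; collect_feature_values, canonicalize, and the UNKNOWN marker are hypothetical names, not part of the assignment code:

```python
# Hypothetical sketch of the "general" approach from the note above:
# one preliminary pass collects every value each feature takes in the
# training data, with a shared UNKNOWN bucket for unseen test values.

UNKNOWN = "<unknown>"

def collect_feature_values(training_data):
    """Map each feature key to the set of values seen in training."""
    legal_values = {}
    for datum in training_data:          # each datum maps feature -> value
        for key, value in datum.items():
            legal_values.setdefault(key, set()).add(value)
    return legal_values

def canonicalize(datum, legal_values):
    """Replace values never seen in training with the UNKNOWN marker."""
    return {key: (value if value in legal_values.get(key, set()) else UNKNOWN)
            for key, value in datum.items()}

train = [{"a": 0, "b": 1}, {"a": 1, "b": 1}]
legal = collect_feature_values(train)
test_datum = canonicalize({"a": 2, "b": 1}, legal)
# test_datum == {"a": "<unknown>", "b": 1}
```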
You will write your perceptron in perceptron.py by filling in the train function.
A perceptron does not use probabilities to make its decisions. Instead, it keeps a prototype weight vector w^y for each class y. Given a feature vector f (a map from properties to counts, e.g. pixels to intensities), the perceptron computes the class whose prototype is most similar to the input vector. Formally, we score each class y with:

score(f, y) = Σ_i f_i · w_i^y
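In code, this scoring rule amounts to a dot product between the feature vector and each class's weight vector, followed by an argmax over classes. A sketch with plain dicts standing in for util.Counter (score and classify are illustrative names, not the assignment's API):

```python
# A minimal sketch of the scoring rule above: each class's score is the
# dot product of the feature vector with that class's weight vector.

def score(features, weights):
    """Dot product of two sparse vectors represented as dicts."""
    return sum(value * weights.get(key, 0) for key, value in features.items())

def classify(features, weight_vectors):
    """Return the label whose prototype scores highest on the input."""
    return max(weight_vectors, key=lambda label: score(features, weight_vectors[label]))

weights = {"face": {(0, 0): 2, (1, 0): -1},
           "not-face": {(0, 0): -1, (1, 0): 3}}
print(classify({(0, 0): 1, (1, 0): 1}, weights))  # prints "not-face" (score 2 vs 1)
```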
Using the addition, subtraction, and multiplication functionality of the Counter class in util.py, the perceptron updates should be relatively easy to code. Certain implementation issues have been taken care of for you in perceptron.py, such as handling iterations over the training data and ordering the update trials. Furthermore, the code sets up the weights data structure for you: each legal label gets its own prototype Counter full of weights.
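One training pass then looks roughly like this: classify each example with the current weights, and on a mistake add the feature vector to the true class's prototype and subtract it from the mispredicted one. A dict-based sketch (train_pass and add_to are hypothetical helpers; the provided perceptron.py structures this differently):

```python
# Sketch of one perceptron update pass, assuming dict-based weight and
# feature vectors (util.Counter supports the same arithmetic directly).

def add_to(weights, features, sign):
    """In-place update: weights += sign * features."""
    for key, value in features.items():
        weights[key] = weights.get(key, 0) + sign * value

def train_pass(data, labels, weight_vectors):
    """One pass over the training data; returns the number of mistakes."""
    mistakes = 0
    for features, true_label in zip(data, labels):
        guess = max(weight_vectors,
                    key=lambda lbl: sum(v * weight_vectors[lbl].get(k, 0)
                                        for k, v in features.items()))
        if guess != true_label:
            add_to(weight_vectors[true_label], features, +1)  # pull prototype toward input
            add_to(weight_vectors[guess], features, -1)       # push wrong prototype away
            mistakes += 1
    return mistakes
```

On separable data, repeated passes eventually make a mistake-free pass, at which point the weights stop changing.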
Question 3 (10 points) Implement the train method for the perceptron algorithm and test it using the basic pixel features on the face and digit data (use the -c perceptron option). What classification performance do you get for each? As a sanity check, run the command:
> python dataClassifier.py -d digits -t 100 -c perceptron
The number of training iterations is controlled by the -i iterations option. Try different numbers of iterations and see how they influence performance. In practice, you would use the performance on the validation set to figure out when to stop training, but you don't need to implement this stopping criterion for this assignment.
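Though it is not required here, the validation-based stopping idea can be sketched in a few lines; train_fn and evaluate_fn below are placeholders for your own per-iteration training routine and validation-accuracy computation:

```python
# Not required for the assignment: pick the iteration count that gives
# the best accuracy on a held-out validation set.

def train_with_early_stopping(train_fn, evaluate_fn, max_iterations=20):
    """Keep the iteration count with the best validation accuracy."""
    best_accuracy, best_iteration = -1.0, 0
    for iteration in range(1, max_iterations + 1):
        train_fn()                 # one more pass over the training data
        accuracy = evaluate_fn()   # accuracy on the held-out validation set
        if accuracy > best_accuracy:
            best_accuracy, best_iteration = accuracy, iteration
    return best_iteration, best_accuracy
```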
Don't worry about implementing an SVM by hand. Instead, we'll use the scikit-learn module, which provides an easy-to-use SVM. All you need to do is get scikit-learn running for our case.
You'll need to write your code in svmClassifier.py to complete the classifier.
You do not have to know exactly how SVMs work to complete this question, but you are encouraged to read the SVM page on Wikipedia.
Make sure you can import sklearn without error in Python. Third-party software packages (when explicitly permitted by the instructor!) can save you time over implementing everything from scratch. The first step is usually installation, which can sometimes be quite challenging by itself. Luckily, scikit-learn is not so hard to install, and you can follow the steps in its installation instructions. We recommend installing it with easy_install, a distribution tool for conveniently installing Python modules.
If easy_install ends up compiling the downloaded source code, make sure you have all the dependencies available on your machine, e.g. python-numpy-devel if you are running Linux. Please note that since this is extra credit, we may not be able to provide you the same level of support as for regular assignments. You may need to rough it a bit, but it's good practice for the real world.
Complete your classifier in svmClassifier.py. Compare the performance of your SVM classifier to your perceptron classifier with 100 training samples. Run your code with:
python dataClassifier.py -c svmClassifier
Hints and observations:
Your classifier should be built on scikit-learn. Refer to its usage and documentation. There is also a quick-start example.
It may help to normalize the feature values; the formattingData() method can perform this task with mean and std as input parameters.
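The underlying computation is plain mean/std standardization; a self-contained illustration (standardize is an illustrative name, not the provided formattingData() helper):

```python
# Illustration of mean/std feature scaling: shift each feature column to
# zero mean and divide by its standard deviation.

def standardize(column):
    """Scale a list of feature values to zero mean and unit variance."""
    n = len(column)
    mean = sum(column) / n
    variance = sum((x - mean) ** 2 for x in column) / n
    std = variance ** 0.5 or 1.0   # avoid dividing by zero for constant features
    return [(x - mean) / std for x in column]

print(standardize([0.0, 2.0, 4.0]))  # roughly [-1.2247, 0.0, 1.2247]
```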
The performance of the classifier depends heavily on the arguments of svm.SVC(...), especially the choice of the kernel function and its parameters. To push the performance of your classifier to the limit, try optimizing these parameters over some "grid" with regard to accuracy on the validation dataset. Scikit-learn provides a grid search routine that can perform this task.
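The grid search can be as simple as a double loop over candidate parameter values, keeping whichever combination scores best on the validation set (scikit-learn's GridSearchCV automates the same loop). A toy sketch in which validation_accuracy stands in for training an svm.SVC with those parameters and scoring it on the validation data:

```python
# Sketch of the grid search the hint describes: try each (C, gamma)
# combination and keep the one with the best validation accuracy.

import itertools

def grid_search(validation_accuracy, Cs, gammas):
    """Return the (C, gamma) pair with the highest validation accuracy."""
    best_params, best_score = None, -1.0
    for C, gamma in itertools.product(Cs, gammas):
        current = validation_accuracy(C, gamma)   # placeholder for train + evaluate
        if current > best_score:
            best_params, best_score = (C, gamma), current
    return best_params, best_score

# Toy stand-in for the real train-and-evaluate step:
fake = {(1, 0.01): 0.80, (1, 0.1): 0.85, (10, 0.01): 0.90, (10, 0.1): 0.88}
params, acc = grid_search(lambda C, g: fake[(C, g)], [1, 10], [0.01, 0.1])
print(params, acc)  # prints (10, 0.01) 0.9
```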