1. 9 "Modeling the Neo-Cortex" 2. no 3. no 4. 3 5. 5 6. 0 This project report was very poorly written. With constant run-on sentences, disorganized structure, and many grammatical errors per sentence, it was a struggle to read. While there was an overall structure, it seemed as though bits and pieces were out of place. For example, there was no evaluation section, but instead all of the evaluation was crammed into the results section, and no clear distinction was made between phase one and phase two results. The overall idea of the project, modeling a neural network after the neo-cortex, fits well into the framework of developmental robotics. Unfortunately, the implementation does not. First of all, the fact that features for objects were hand labeled by a person rather than derived from actual data violates the grounding principle of developmental robotics: the data was not grounded in the robot's own sensory perception, but that of an external observer. Also, hand labeling the features defeats the idea of the project. The purpose of the neo-cortex is to take in raw sensory data and output the features from the data, but since this was done by humans, the algorithms don't have a purpose. The best thing about this project is its idea. Modeling an algorithm off the way the neo-cortex works is very interesting. Hawkin's model of the cortex provides some useful insight into how the brain works, but as of yet, no one has gotten a very good working model of it. If this project could deliver on that, it would be a very useful contribution to science. This project didn't even come close to delivering on that. The worst thing about this project is the implementation. The methods used to implement this project lack real foundation in developmental robotics. The phase one neural network, since it was all just linear combinations of activations, didn't provide useful results. The methods in this project line up with what was proposed in the initial proposal. The results and contributions, as I predicted in my initial review, are useless, just like the initial propsal indicated they would be. There are quite a few details left out. First of all, HOW DO YOU TAKE THE DFT OF THE CENTROID OF AN OBJECT?! I would love to know how... Also, details of the evaluation of phase two are almost non-existent, as well the accuracy for phase two isn't even reported. How was the evaluation performed? What was the training and test set? There is no related work section, which would have been very helpful to have. The results in this paper do not demonstrate success; they are just noise. The results from phase one indicate, as expected, that when the update rules for weights for each neuron don't take into account the firing rates of the other neurons, all the neurons eventaully converge to the same output. When they do take that into account, they start to have different outputs, but the outputs are meaningless. There is no relation between objects with similar outputs. Also, only 38 datapoints were used, as well as in one test only 10 datapoints, that were randomly generated, were used. This is no where near enough datapoints to train a neural network or provide meaningful results. Finally, randomly generating data points is not a valid method for acquiring data. These networks are supposed to discover structure in the data, but by definition, there is no structure in random data! The results for phase two are equally as meaningless. 
The results for phase two are equally meaningless. Of what little is reported, the ROC AUC is given as greater than 0.6, which the authors claim indicates effective learning. It does not: an AUC of 0.5 is chance, so a value just above 0.6 is not very informative. Looking at the provided confusion matrix, the results appear to be random chance. In fact, computing the accuracy from the confusion matrix (the sum of the diagonal entries, i.e. the correct classifications, divided by the total number of examples) yields 0%; according to the matrix, none of the classifications were correct. This is not unexpected, as the authors are attempting to get a multilayer perceptron to classify a small set of objects from hand-labeled binary features.

There is some hope, though, as the initial idea was very promising. If the authors wish to improve this project, the first thing to do is a more thorough literature search. Modeling the neo-cortex with neural networks has been attempted several times, sometimes with success; Hawkins himself attempted to implement an algorithm based on the ideas in his book. The authors might also consider looking into Deep Belief Networks, which have shown much promise and are very similar to what the authors are trying to do.

This paper, in its current form, is not publishable. If it were substantially overhauled it might become publishable, but as it stands, it is not.