1. Project number and Title: 3 – Developmentally Learning the Support Affordance of a Platform
2. Should this project be considered for the Best Project award? (yes/no): No
3. Should this project be considered for the top 3 project awards? (yes/no): Yes
4. On a scale of 1 to 10, how would you rate the overall organization/clarity of the project report? (1-10): 9
5. On a scale of 1 to 10, how would you rate the overall project idea? (1-10): 9
6. On a scale of 1 to 10, how would you rate the overall research contribution of the project idea, methodology and/or results? (1-10): 9

Overall, is the project report clear, concise, and well-organized?

The final project report, like the proposal, is very clear, concise, and well written. On a first read nothing stood out as blatantly wrong. On a second read a few nits can be identified: justify the text so that it fills out the columns, caption the equation, label the objects in figure 1 (I don't recall whether the shape types are called out), and increase the size of the points in the point clouds so that they are more easily identified.

How does the project idea and methodology fit within the framework of Developmental Robotics?

The project idea and methodology fit well into the framework of Developmental Robotics. A system that is capable of learning and using the affordances of a platform is definitely something that is highly desirable.

Describe what you like BEST about the project?

The project team did a very good job of exploring the problem space from a variety of directions. This is evident in their experimental layout, where they chose to have four different table configurations, and again in their analysis, where they chose to test four different algorithms. It was nice to see that they even took the extra effort to try to understand why the approaches worked as they did.
Their future work section goes on to further demonstrate this mentality, as their only planned extension tries to improve their approach with the full dataset.

Describe what you like LEAST about the project?

While the group did a good job of looking at the problem from many directions, their future work section seems rather sparse. Given the way the group attacked the problem, you would expect to see a plethora of future avenues of exploration, yet there was only one. This was quite disappointing and something I would definitely recommend they work on if they were to submit the paper for publication.

Do the methods, results and contributions of the final project correspond to what was presented in the initial project proposal?

As clearly stated in the final report, the group was forced to deviate from their initial proposal due to limitations of the robotic system. Rather than having a raised platform, they had to implement the ramp as an extension of the table. This change was probably for the better, since it led them to implement multiple configurations, which I believe greatly enhanced the project and allowed them to look at the problem from multiple directions. The remaining methods, results, and contributions were largely in line with what they had proposed; at the very least, they followed closely enough that nothing was glaringly off.

Are there any major details left out with regards to the methods, algorithms, or experimental design described in the report?

The final project report is fairly comprehensive and describes the methods, algorithms, and experimental setup with sufficient detail to be clear while remaining very concise. The only things that may be missing are an explicit statement of the types of shapes used during the trials, whether any adjustments were necessary to reposition the block when it first started to go out of control, and how success was measured.
An interesting thought: if success was measured over the entire path of the block, then the success rate may be artificially inflated. In other words, since the block spends the majority of its time in one state or the other, all a classifier really has to do is predict when the state changes. Even if the detection of the state change is off by a significant margin, overall it may still appear that the classifier is successful at detecting whether or not the shape is out of control. It would be interesting to see how quickly each method detected the transition from under control to out of control, and to come up with an explanation of why this is the case.

Do the experimental results reported in the paper demonstrate success?

The experimental results as detailed in the final report demonstrate success. The project group was able to show that all of their approaches (Naïve Bayes, kNN, J48, and entropy measurements) were capable of classifying the object as under control or out of control at a rate significantly better than chance. In the worst case they were still able to achieve a claimed 85 percent correct classification, which is quite impressive. There is, of course, the question raised above about how success was measured.

Do you have any suggestions for improvement and future work?

* Is there a reason the disappearance of the object is not immediately interpreted as out of control? Does the tracking system frequently fail to detect the object? Were alternatives to color tracking, such as texture tracking, considered?
* Is there a reason you chose to color all of the objects the same color? Was it truly necessary? It seems like the initial setup could easily have been changed to identify the color of the object and then use that color for future tracking purposes.
* Do all of the algorithms report the block as going out of control at the same point in time? If not, why, and can the delay be correlated to anything in the experimental setup?
* How does the current object affect the success rate?
Is it the same for all objects? Does changing the orientation of the objects influence the results?
* What changes would be required in your approach to learning the affordances of objects so that you can successfully stack them on top of one another? In other words, what steps or experiments would you need to run in order to get back to the original project idea before it was simplified?

How close is the final project report to being publishable as a conference or journal paper (consider the research papers that were part of the course reading)? What would it take to get there?

The final project report seems fairly close to being publishable. The methods and results are definitely there; it is just a matter of ensuring everything is finalized and cleaned up. The only sections that may need some effort are the future work section, since it seems rather sparse, and possibly the results, given the question raised earlier about how success was measured.
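As an aside, the success-measurement concern raised earlier can be illustrated with a minimal sketch. All numbers below are invented for illustration and are unrelated to the report's actual results; the point is only that a classifier which detects the transition from under control to out of control many frames late can still score high frame-level accuracy.

```python
# Hypothetical illustration: frame-level accuracy can look high even
# when the predicted transition from "under control" (0) to
# "out of control" (1) is badly mistimed.

def frame_accuracy(truth, pred):
    """Fraction of frames where the prediction matches ground truth."""
    return sum(t == p for t, p in zip(truth, pred)) / len(truth)

n_frames = 100
true_transition = 80   # block actually goes out of control at frame 80
late_transition = 95   # classifier only detects it 15 frames late

truth = [0] * true_transition + [1] * (n_frames - true_transition)
pred  = [0] * late_transition + [1] * (n_frames - late_transition)

acc = frame_accuracy(truth, pred)
print(f"frame-level accuracy: {acc:.2f}")  # 0.85 despite a 15-frame miss
```

A transition-timing metric (e.g. detection delay in frames) would separate the two failure modes that frame-level accuracy conflates.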