Project 6: Quantifying and Evaluating Uncertainty in the Internal Representations of Robots

should be considered for top project: no
should be considered for top 3: yes
organization/clarity: 8/10
project idea: 9/10
research contribution: 8/10

The overall idea for this project is very solid, though the motivation is somewhat uncertain. The proposal dwelt at length on an only vaguely related motivation drawn from religious texts and artwork. That material is notably absent from this work; it has been replaced by a survey of related work on entropy-based reasoning in various applications, which goes a long way toward increasing the relevance of the work.

The conceptually strongest aspect of the project is the apparent unification of a number of tasks under a single system. This kind of unification has proven popular, and sometimes revolutionary, across many applications and research areas (physics comes to mind). Given the scope of the project assignment, most projects would be expected to produce only a single decent result, since each experimental setup requires relatively rigorous preparation.

The weakest aspect of the project is the execution and presentation of the mathematics. The first issue is length: the math section is long, and it is nearly impossible to determine the relevance of each part. The connections between some of the equations are at times unclear, making them hard to follow, and the equations themselves seem both under-explained and overly explicit. The single best improvement would be to draw constant analogies to the individual platforms, or to one of them, so that the progression and relevance remain easy to follow as more and more equations are introduced into the system.

The promises made in the proposal appear to have been fulfilled in this work. A mathematical model is provided that attempts to give a relevant entropy-based reasoning framework for a number of different tasks, and several applications are presented along with their respective results and impact. The mathematical representation is certainly present and, as far as I can determine, complete. A notable missing component, however, is the methodology for converting this block of math into the reported results. The report should include the individual algorithms used to obtain the results for each of the test applications. As the project stands, I do not believe it is possible to reproduce, and reproducibility seems like a relevant metric for measuring the success of a given work.

The experimental results do show a relatively high measure of success but raise a few questions. Two times are reported as the time it takes the model to reach a confident categorization: 30 seconds for the grasping behavior on one of the tasks (I assume the containers one) and 20 seconds for the game experiment. Although these are reported as fast times, 30 seconds seems like a long time to shake an object to figure out whether there is something inside it, and 20 seconds seems entirely out of the question for a game environment, since 20 seconds is a virtual eternity even for slower games. This raises the question of whether it is worth considering a less generalized model to obtain faster results for the specific tasks. As already mentioned, a future iteration of this work would have to include the concrete algorithm used to implement this entropy model.
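To make concrete the kind of algorithmic description that is missing, the following is a minimal sketch of an entropy-threshold stopping rule of the sort the report seems to imply. The function names, the threshold value, and the belief-update callback are my own assumptions for illustration, not anything specified by the authors.

```python
import math

def entropy(belief):
    """Shannon entropy (in bits) of a categorical belief distribution."""
    return -sum(p * math.log2(p) for p in belief if p > 0)

def categorize_until_confident(update_belief, initial_belief,
                               threshold_bits=0.5, max_steps=1000):
    """Gather observations and update the belief until its entropy drops
    below a threshold, i.e., until the model is 'confident'.

    update_belief: hypothetical callback that collects one more observation
    and returns the posterior belief -- exactly the per-application step
    the report never spells out.
    """
    belief = list(initial_belief)
    for step in range(max_steps):
        if entropy(belief) < threshold_bits:
            return belief, step   # confident categorization reached
        belief = update_belief(belief)
    return belief, max_steps      # gave up without reaching confidence
```

Even a per-application description at roughly this level of detail (what the observation is, how the belief is updated, and what confidence threshold is used) would go most of the way toward making the reported results reproducible.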
Reproducibility is probably the biggest thing keeping this work from being publishable. In addition, the math section would need to be vastly pruned so that it better conveys the relevance of the work. A clearer description of the research tasks this work augments, and perhaps a research task unique to it, would better exemplify the goal of the project.