A framework for learning behavior-grounded object categories
Shane Griffith, Jivko Sinapov, Vlad Sukhoy, Matt Miller, and Alex Stoytchev. Developmental Robotics Laboratory, Iowa State University
I earned my M.S. after 3.5 years of studying what a container is and how a humanoid robot can learn that concept.
Although a growing body of robotics literature had addressed many different container manipulation problems, individual papers only chipped away at isolated problems one at a time.
This meant that the algorithms for one domain were not directly applicable to other domains.
After I saw this problem, I made it the goal of my thesis to identify how a robot could begin to learn about containers in a more general way.
Because people have a representation of containers that generalizes across many different container manipulation problems, I looked to psychology for insight into the origins of container learning.
Psychologists observed that infants form an abstract spatial category for containers, which allows them to apply their knowledge to novel containers.
At the time, however, theories of object categorization were not clear about exactly how infants form an object category for containers.
Consequently, I looked more deeply into the psychology literature to understand how infants learn.
By citing many different theories and observations from psychology, I extrapolated an explanation for how infants learn object categories.
Drawing on the expertise of the whole team, we created a computational framework that allows a robot to learn object categories in a similar way.
Our experiments with containers showed that this method of object categorization works, and works well.
Our work was well received when we submitted it to the IEEE Transactions on Autonomous Mental Development (TAMD).
An eminent developmental psychologist reviewed the object categorization theory (the other two reviewers were roboticists). In her "comments to the author," which we received when the paper was accepted, she signed her name (reviews are usually anonymous) and wrote:
"I commend the authors on a fantastic literature review of my domain. The authors accurately cite a broad array of the relevant literature. There were no relevant articles missing. I do not have any suggested changes because I think the literature is very good as it is. ...I was tickled by the unification of citations from people that are often perceived to be in opposing theoretical camps. ...I signed this review because I hope that the authors send me a copy when they get it published. I find the work fascinating and I would like to refer to their [work] in my own work."
In addition to technical comments that helped us to improve our work, the two roboticists said "[this paper presents] an interesting and out-of-the-box way of addressing concept acquisition" and "this paper makes a significant contribution to the existing literature." In the end, my research productivity for my M.S. came to rest at π (11 papers in 3.5 years).
Ritika Sahai, Shane Griffith, and Alex Stoytchev. Developmental Robotics Laboratory, Iowa State University
Learning to write is known to be a tough problem for humans due to the intricate nature of writing. That's why we want robots to do it: a successful learning paradigm for this problem could also be applied to solve other difficult learning tasks. Moreover, if robots could do it they may offer people a personalized and more natural form of communication (e.g., by leaving post-it notes on the fridge).
Our first study tested the assumption that a robot can identify a good writing utensil for a given surface. The results were interesting and informed how we designed subsequent studies.
Shane Griffith, Alex Baumgarten, Jon Dashner, Kyle Miller, Mark Rabe, Chris Tott, Jon Watson, Joshua Watt, and Nicola Elia
For this research project seven other seniors and I created a holonomic robot, i.e., a mobile robot that can move in any direction without turning.
To create the robot, we developed a localization system, robot AI, a sensor-data logging system, a telemetry board, and a robot chassis.
I focused my attention on the localization system.
I researched existing solutions, analyzed worst case time delay and position error, and identified a camera that fit the system demands.
Also, I applied image processing algorithms for tracking the robot and implemented a communication backbone for routing the robot's position.
After 9 months, we created a fully functional holonomic robot research testbed.
The project succeeded through the whole team's effort.
We created a fast robot that could target an equally fast-moving tennis ball.
Through multiple demonstrations, we documented the different milestones of the project.
Now, Dr. Elia uses the robot for controls research, including a study on balancing a freestanding pendulum on top of the robot.
Creating a Wireless Sensor Network
(May 2006 - May 2007)
Shane Griffith, Kyle Byerly, and Daji Qiao. Wireless Sensor Network Laboratory, Iowa State University
My first research project was for the Wireless Sensor Network Laboratory at ISU.
Dr. Qiao wanted an easily deployable wireless sensor network that he could use to give demonstrations and that could also serve as a research testbed.
Kyle and I experimentally tested ad-hoc communication protocols, implemented a hierarchical communication network, and designed a graphical user interface.
After a year of work, we created a self-organizing, scalable, practical, and modular wireless sensor network.
We tested our system by deploying multiple motes within a dormitory.
The sensors captured light-changing events, the wireless network routed the data to a database, and the user interface graphed different trends.
One dataset captured the energy-using habits of some undergraduates.
Another dataset captured the sunrise.
Pictures: network, user interface, dorm room, sunrise.