5 Q’s for Stefanie Tellex, Assistant Professor of Computer Science at Brown University
The Center for Data Innovation spoke with Stefanie Tellex, assistant professor of computer science at Brown University. Tellex discussed her work with the Humans to Robots Laboratory, which aims to create and share training data for robotic grasping to help empower humans with robotics.
This interview has been lightly edited.
Joshua New: You help run the Humans to Robots Laboratory at Brown University, which focuses on “empowering every person with a collaborative robot.” Could you elaborate on this? How do you empower someone with a robot?
Stefanie Tellex: Robots are capable of making changes to the physical world. Our goal is to teach robots to make changes that benefit people. Through this collaboration, the person can benefit from the robot’s ability to change the physical environment. However, there are many changes a person might want: set the table, empty the dishwasher, clean the room, make the bed. Depending on the details, it is important for the robot to work with the person to identify the best changes to make.
My goal is for every person to be empowered with a collaborative robot partner. This could be in a home to help the elderly, in a laboratory to help with an experiment, or in a factory to help assemble an airplane.
New: Could you explain the Million Object Challenge?
Tellex: The goal of the Million Object Challenge is to collect the largest ever dataset of object manipulation experiences by pooling data across multiple Baxter robots. Baxter is one of the most widely distributed robots in research today, and most of them sit idle much of the time.
Human children spend the first years of their lives exploring and playing as they learn about the world. Similarly, robots need access to large amounts of data in order to learn how to interact with the physical world. However, if every robot had to “play” for two years after it came off the assembly line before it could be useful, it wouldn’t be very practical. Instead, we allow one robot to download data that another robot collected, so that as soon as it’s assembled, it can engage in useful behavior.
New: Teaching robots how to grasp new objects efficiently seems to be a pretty common focus in robotics research. Why is that? Just how valuable is grasping?
Tellex: Picking objects up is useful in itself because the robot can change its environment, but it is also a gateway to other capabilities. Picking up objects is the foundation of everything else the robot does. Grasping an object requires localizing the object, moving the arm to the object’s physical location, and planning a trajectory that causes the arm to lift the object. The robot must map between sensor information and arm movements that cause the object to move.
New: When a robot learns how to grasp effectively, could this method of problem solving be applied to other tasks?
Tellex: Yes, grasping an object is the first step to many other tasks. For example, to make a cup of tea, the robot must first grasp the mug. To open a bottle, it must grasp the bottle and the bottle cap. Similarly, to learn about objects that move, such as a pair of scissors, it must first pick them up and explore their dynamics.
New: When it comes to improving how robots interact with their environment, is hardware or software the biggest limitation? For example, is it too difficult at this point to connect enough sensors for a robot to effectively sense its surroundings? Or are the algorithms powering the robot simply not well-developed enough?
Tellex: Both are a challenge. In terms of hardware, many robots have a very small workspace and can’t reach the floor, a table, a counter, or a shelf. Robot hands are also an area of active research, including improved tactile sensors and force feedback to detect when the robot is touching something.
But there are strong limitations in software as well. Existing software often uses only a subset of the information available on a given robot because of the overhead costs and time necessary to integrate each modality. By collecting data, we aim to enable robots to automatically learn to incorporate information from their sensors, making it easier to benefit from new information.