In today’s episode we talk about a new generation of affordable robots with the Bilibot project and its leader Garratt Gallagher from MIT.
Garratt Gallagher joined MIT in 2009 as a research engineer after completing a Master's in Robotics at Carnegie Mellon University. During his day job, he works with the PR2 robots from Willow Garage. On the side, however, Gallagher has been developing the Bilibot, a cheap hobbyist/research robot that merges the capabilities of ROS, the iRobot Create, the Kinect and a robust manipulator. The end result is an excellent platform with state-of-the-art sensing technology that has the potential to perform a variety of service tasks, such as picking up your room or fetching a beer from the fridge.
To bring the Bilibot project to the next level, Gallagher partnered up with two master's students in Operations Management at MIT. The company they recently founded is now selling Bilibots for $1,200, with a $350 rebate if you make a video of the Bilibot doing something cool, share your code with the ROS community and collaborate with other developers. With this step, his team hopes to build a user community excited about the robot and prepare for their next big step: a robot app store.
In today’s episode we meet with Natalie Freed, David Robert and Adam Setapen from Cynthia Breazeal’s Personal Robots Group at the MIT Media Lab. They’ll be telling us about the Playtime Computing System, a playground where kids can interact with the physical world and its virtual extension.
The playground looks like a dreamlike play area with objects kids can interact with, including a robot that looks like an alphabet block and can be decorated with letters, shapes and even a mustache. The physical playground is surrounded by an engaging virtual world projected on a set of screens. Robot characters can seamlessly transition from the real world to the virtual world by entering a portal (which is basically a robot garage). Since anything is possible in the virtual world, robots can gain new capabilities, such as flying, and kids can rearrange the world or add their own virtual objects to the mix using a Creation Station. The children's behavior is tracked using 3D motion capture as well as other sensors such as cameras and audio inputs.
The playground brings a whole new dimension to the idea of play, getting kids off the couch and engaging them in creative activities that could take them to a virtual cafe in France to learn French or let them build a whole new world to share with other kids around the globe. In the interview, David, Adam and Natalie tell us what they learned from experiments with the Playtime Computing System, the fun anecdotes that come up when working with kids, and the future of interactive media.
So when do we get one of these at home?
Natalie Freed finished her Master's in Computer Science at Arizona State University with a concentration in Arts, Media, and Engineering. She joined the MIT Media Lab last summer as a graduate student and has since been interested in studying human-robot interaction with kids.
David Robert has a decade of expertise in the film industry working as a Technical Director and Animator. Over the years he's consulted and worked with the world's top animation studios, including Pixar, DreamWorks, LucasArts, ILM and Disney Imagineering. He has also taught at The Academy of Art, Walt Disney Feature Animation and Pixar University, and has given lectures around the world. He's currently doing a PhD at the Personal Robots Group as a first step in showing that the "future of animation is off the screen".
Adam Setapen has a Master's in Computer Science from the University of Texas at Austin and a strong background in AI. He joined the Personal Robots Group as a graduate student with the hope of developing robots for children that support long-term interaction.
For the occasion, we speak with 12 scientists about the most remarkable developments in robotics over the last 50 years and their predictions for the next half-century. This 50th special is split into two episodes, with the second half airing in two weeks.
We’ve also upgraded our website so that you can easily browse through episodes by topic, interviewee, tag or just listen to one of our favorites, so have a look!
You can interact with the ROBOTS community by leaving comments directly under episode posts or on our sleek new forum. To do both, just log in once via the top bar of the website.
Rolf Pfeifer is Professor at the University of Zurich, where he directs the Artificial Intelligence Laboratory. He pioneered a new approach to artificial intelligence ("New AI"), which emphasizes the role of embodiment and argues that thought is not independent of the body, but tightly constrained, and at the same time enabled, by it.
Mark Tilden is a famous robot inventor who builds new robots on a daily basis. He pioneered a philosophy for making simple and reactive robots and tagged it BEAM robotics (which stands for Biology, Electronics, Aesthetics, and Mechanics). Lately, Tilden has been making famous products such as the Robosapien and Femisapien robots at WowWee.
Schofield is an expert in underwater robots, taking part in recent projects such as the Scarlet Knight glider which crossed the Atlantic Ocean fully autonomously while dodging fishing nets, strong currents and even the occasional shark.
As director of the Center for Engineering Education Outreach, Rogers tours the elementary schools of the world trying to bring engineering and robotics to young children. He has also worked with LEGO to develop ROBOLAB, a robotic approach to learning science and math.
In this episode we speak with two experts in robot learning. Andrea Thomaz from Georgia Tech looks at how humans teach and how humanoids learn, in the hope of creating good human-robot interactions. We then speak with Sethu Vijayakumar from the University of Edinburgh about machine learning and how it can be used to teach a robot hand to balance a pole.
With Simon and other humanoid robots such as Junior, she is looking at how to make social robots that can learn from humans in their everyday environment. With this endeavor in mind, her lab is studying how humans actually teach and draws conclusions that could be useful when designing future machine learning algorithms. She is also taking inspiration from nature to make robots that can learn in an incremental manner by observing and reproducing what people in their environment are doing, similar to what happens when you put two kids together in a playpen.
In today's show we'll be looking at robots used for the rehabilitation of stroke patients. Our first guest, Ludovic Dovat from the National University of Singapore, is part of a multi-national team working on robotic devices that help patients regain the use of their hands. Our second guest, David Brown, is co-founder of Kinea Design, a company near Chicago that makes a rehabilitation robot called the KineAssist. As a physiotherapist, he gives us his hands-on view on how robots can help patients re-learn to walk.
Dovat explains that most stroke victims are sent home as soon as they are able to walk and never get the chance to re-learn essential but more delicate tasks like gripping and writing, because rehabilitating the hand is complex and expensive. His robotic systems are used in conjunction with physiotherapists to ease the recovery process for both patient and therapist, helping patients lead fuller and ultimately happier lives while reducing the cost of therapy.
He specializes in post-stroke disabilities and novel engineering that can help his patients get back on their feet. Balancing his background in physiotherapy with academic science, he has hands-on experience with machines such as the KineAssist, which can challenge patients with difficult walking exercises while catching them if they fall. Over the years, Kinea Design has been expanding its portfolio with products like arm prosthetics and haptic interfaces for DARPA's Revolutionizing Prosthetics Program, which Dean Kamen recently presented on our show.
More generally, Brown tells us about his patients, his colleagues and the market for rehabilitation robots from a medical perspective.