Archive for the ‘Podcast’ Category

February 20th, 2015

Robots: Sensors for Autonomous Driving - Transcript

In this episode, Audrow Nash interviews Christoph Stiller from the Karlsruhe Institute of Technology. Stiller speaks about the sensors required for various levels of autonomous driving, the ethics of autonomous cars, and his experience in the Defense Advanced Research Projects Agency (DARPA) Grand Challenge.

 

Christoph Stiller

Christoph Stiller studied Electrical Engineering in Aachen, Germany, and Trondheim, Norway, and received the Diploma degree and the Dr.-Ing. degree (Ph.D.) from Aachen University of Technology in 1988 and 1994, respectively. In 1994/1995 he spent a post-doctoral year as a Member of the Scientific Staff at INRS-Telecommunications in Montreal, Canada. In 1995 he joined the Corporate Research and Advanced Development division of Robert Bosch GmbH, Germany. In 2001 he became chaired professor and director of the Institute for Measurement and Control Systems at the Karlsruhe Institute of Technology, Germany.

Dr. Stiller serves as immediate Past President of the IEEE Intelligent Transportation Systems Society and as Associate Editor for the IEEE Transactions on Intelligent Transportation Systems (2004-ongoing), the IEEE Transactions on Image Processing (1999-2003), and the IEEE Intelligent Transportation Systems Magazine (2012-ongoing). He served as Editor-in-Chief of the IEEE Intelligent Transportation Systems Magazine (2009-2011). He was Program Chair of the IEEE Intelligent Vehicles Symposium 2004 in Italy and General Chair of the IEEE Intelligent Vehicles Symposium 2011 in Germany. His automated driving team AnnieWAY was a finalist in the DARPA Urban Challenge 2007 and the winner of the Grand Cooperative Driving Challenge in 2011.


December 27th, 2014

Robots: 3D SLAM

In this episode, Audrow Nash speaks with Professor John Leonard from MIT about his research on dense, object-based 3D Simultaneous Localization And Mapping (SLAM).

Leonard explains what SLAM is, as well as its practical applications. The explanations include what it means for SLAM to be object-based (versus feature-based) and to have dense (versus sparse) environmental mapping. The interview closes with advice for aspiring roboticists.
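
For readers who want a concrete feel for the "localization" half of SLAM, the toy Python sketch below shows a single, heavily simplified correction step: re-observing a landmark that is already in the map nudges the robot's pose estimate toward consistency. The function and its fixed gain are illustrative assumptions; a real SLAM back end (an EKF, a factor graph, etc.) derives this gain from the estimated uncertainties. This is not Leonard's system, just a sketch of the underlying idea.

```python
import numpy as np

def correct_pose(pose, landmark, measured_offset, gain=0.5):
    """One heavily simplified SLAM-style correction step (illustrative).

    pose            -- current (x, y) estimate of the robot position
    landmark        -- mapped (x, y) position of a known landmark
    measured_offset -- landmark position relative to the robot, as sensed
    gain            -- fixed stand-in for the gain a real filter would
                       derive from pose and landmark uncertainties
    """
    predicted_offset = landmark - pose       # where we expect the landmark
    innovation = measured_offset - predicted_offset
    return pose - gain * innovation          # nudge the estimate to agree

pose_estimate = np.array([0.0, 0.0])         # initial guess
landmark = np.array([2.0, 1.0])              # known map entry
true_pose = np.array([0.2, -0.1])            # where the robot really is
measured = landmark - true_pose              # an idealized relative measurement
pose_estimate = correct_pose(pose_estimate, landmark, measured)
print(pose_estimate)                         # [ 0.1  -0.05]: halfway to truth
```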

John Leonard
John J. Leonard is Professor of Mechanical and Ocean Engineering and Associate Department Head for Research in the MIT Department of Mechanical Engineering. He is also a member of the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL). His research addresses the problems of navigation and mapping for autonomous mobile robots. He holds a B.S.E.E. in Electrical Engineering and Science from the University of Pennsylvania (1987) and a D.Phil. in Engineering Science from the University of Oxford (1994). He studied at Oxford under a Thouron Fellowship and a Research Assistantship funded by the ESPRIT program of the European Community. Prof. Leonard joined the MIT faculty in 1996, after five years as a Post-Doctoral Fellow and Research Scientist in the MIT Sea Grant Autonomous Underwater Vehicle (AUV) Laboratory. He has served as an associate editor of the IEEE Journal of Oceanic Engineering and of the IEEE Transactions on Robotics and Automation. He was team leader for MIT’s DARPA Urban Challenge team, which was one of eleven teams to qualify for the Urban Challenge final event and one of six teams to complete the race. He is the recipient of an NSF Career Award (1998), an E.T.S. Walton Visitor Award from Science Foundation Ireland (2004), and the King-Sun Fu Memorial Best Transactions on Robotics Paper Award (2006), and he is an IEEE Fellow (2014).


May 31st, 2013

Robots: Curved Artificial Compound Eye

In this episode, we speak with Ramon Pericet-Camara and Michal Dobrzynski from EPFL about their Curved Artificial Compound Eye (CurvACE), published in the Proceedings of the National Academy of Sciences. Inspired by the fly’s vision system, their sensor enables a large range of applications that require motion detection using a small plug-and-play device. As shown in the video below, these sensors could be used to control small robots navigating an environment, even in the dark, or to equip a small autonomous flying robot with limited payload. Other applications include home automation, surveillance, medical instruments, prosthetic devices, and smart clothing.


The artificial compound eye features a panoramic, hemispherical field of view with a resolution identical to that of the fruit fly, in a package less than 1 mm thick. It can also extract images three times faster than a fruit fly, and its neuromorphic photoreceptors allow motion perception in a wide range of lighting conditions, from bright sunlight to moonlight. To build the sensor, the researchers align an array of microlenses, an array of photodetectors, and a flexible PCB that mechanically supports and electrically connects the ensemble.
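
The motion-detection principle behind such insect-inspired sensors is often introduced via the Hassenstein-Reichardt elementary motion detector, which correlates each photoreceptor signal with a delayed copy of its neighbour's. The Python sketch below is a minimal textbook version of that model, not the actual CurvACE processing pipeline; the sinusoidal stimulus and the one-sample delay are assumptions for illustration.

```python
import numpy as np

def reichardt_emd(left, right, delay=1):
    """Hassenstein-Reichardt elementary motion detector (textbook model).

    Correlates each receptor signal with a delayed copy of its neighbour;
    the difference of the two mirror-symmetric correlations yields a
    direction-selective motion signal.
    """
    l_delayed = np.roll(left, delay)
    r_delayed = np.roll(right, delay)
    l_delayed[:delay] = 0.0                  # discard wrapped-around samples
    r_delayed[:delay] = 0.0
    return l_delayed * right - r_delayed * left

t = np.arange(200)
left = np.sin(0.3 * t)                       # brightness at the left receptor
right = np.sin(0.3 * t - 0.5)                # same pattern, arriving later
response = reichardt_emd(left, right)
print(response.mean())                       # > 0: motion from left to right
```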

This work is part of the European project CurvACE, which brings together 15 people from four partners in France, Germany, and Switzerland.

You can read our full coverage of this new sensor on Robohub.

Ramon Pericet Camara
Ramon Pericet-Camara is the scientific coordinator of the CurvACE project and a postdoctoral researcher at the Laboratory of Intelligent Systems at EPFL. His research interests include bio-inspired robotics, soft robotics, and soft condensed matter physics.

Ramon received a Masters degree in Physics in 2000 from the University of Granada (Spain) and a PhD in Multidisciplinary Research from the University of Geneva (Switzerland) in 2006. Subsequently, he was granted a fellowship for prospective researchers from the Swiss National Science Foundation to join the Max Planck Institute for Polymer Research in Mainz (Germany).

Michal Dobrzynski
Michal Dobrzynski is a PhD student at the Laboratory of Intelligent Systems at EPFL. He obtained his Master’s degree in Automatic Control and Robotics in 2006 from the Warsaw University of Technology (Poland). He then joined SGAR S.L. (Barcelona, Spain) as a Robot and PLC Software Engineer, where his work focused on the programming and visualization of industrial robots and automated lines. In 2007, he joined the Numerical Methods Laboratory at the Politehnica University of Bucharest (Romania), where he spent two years working as a researcher in the FP6 “Early Stage Training 3” project.

February 10th, 2012

Robots: Senseable Robots

In today’s episode we look at some of the work done by the Senseable City Lab. We talk to Carlo Ratti, the director of the Lab, about two of the Lab’s many projects: Flyfire and Seaswarm.

Carlo Ratti

An architect and engineer by training, Carlo Ratti practices in Italy and teaches at the Massachusetts Institute of Technology, where he directs the Senseable City Lab. He graduated from the Politecnico di Torino and the École Nationale des Ponts et Chaussées in Paris, and later earned his MPhil and PhD at the University of Cambridge, UK.

As well as being a regular contributor to the architecture magazine Domus and the Italian newspaper Il Sole 24 Ore, Carlo has written for the BBC, La Stampa, Scientific American and The New York Times. His work has been exhibited worldwide at venues such as the Venice Biennale, the Design Museum Barcelona, the Science Museum in London, GAFTA in San Francisco and The Museum of Modern Art in New York. His Digital Water Pavilion at the 2008 World Expo was hailed by Time Magazine as one of the ‘Best Inventions of the Year’. Carlo was recently a presenter at TED 2011 and is serving as a member of the World Economic Forum Global Agenda Council for Urban Management. He is also a program director at the Strelka Institute for Media, Architecture and Design in Moscow and a curator of the 2012 BMW Guggenheim Pavilion in Berlin.

Carlo founded the Senseable City Lab in 2004 within the City Design and Development group at the Department of Urban Studies and Planning, in collaboration with the MIT Media Lab. The Lab’s mission is to creatively intervene in and investigate the interface between people, technologies, and the city. Fostering an interdisciplinary approach, the Lab draws on fields such as urban planning, architecture, design, engineering, computer science, natural sciences, and economics to capture the full nature of urban problems and to deliver research and applications that empower citizens to make choices that result in a more liveable urban condition.


January 28th, 2011

Robots: Odor Source Localization

In this episode we revisit robot olfaction and take a closer look at the problem of odor source localization. Our first guest, Hiroshi Ishida from the Tokyo University of Agriculture and Technology, is an expert in the field whose sniffing robots range from blimps to ground and underwater robots. Our second guest, Thomas Lochmatter from EPFL, talks about the tradeoffs between biologically inspired and probabilistic approaches to navigating a gas plume.

Hiroshi Ishida

Hiroshi Ishida is an Associate Professor in the Department of Mechanical Systems Engineering at the Tokyo University of Agriculture and Technology, Japan.
The focus of his research group is to develop robots that can find the sources of airborne gas plumes or underwater chemical plumes. To this end, they have developed the Active Stereo Nose (see figure below), a differential gas sampling system inspired by the dog’s nose, and the Crayfish robot, which mimics the mechanism crayfish use to create unidirectional water currents.
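
To make the differential ("two-nostril") idea concrete: the simplest way to exploit two spatially separated gas sensors is tropotaxis, i.e. steering toward the side that reports the higher concentration. The Python sketch below is a minimal textbook version of that rule, with a made-up gain and deadband; it is not Ishida's actual controller.

```python
def tropotaxis_step(left_ppm, right_ppm, gain=0.1, deadband=0.5):
    """Steer toward the stronger-smelling side (textbook tropotaxis).

    left_ppm, right_ppm -- gas concentration readings from the two sensors
    gain                -- turn rate per unit concentration difference
    deadband            -- ignore differences below the sensor noise floor

    Returns a steering command in radians: positive turns toward the
    left sensor, negative toward the right.
    """
    diff = left_ppm - right_ppm
    if abs(diff) < deadband:
        return 0.0                   # readings too similar: hold heading
    return gain * diff

print(tropotaxis_step(12.0, 8.0))    # 0.4 rad: turn toward the left sensor
print(tropotaxis_step(10.1, 10.3))   # 0.0: difference within the deadband
```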

Thomas Lochmatter

image credit: SNF


During his PhD at the Distributed Intelligent Systems and Algorithms Laboratory at EPFL in Switzerland, Thomas Lochmatter developed a modular odor-sensing system for the Khepera III robot. His research focused on the pros and cons of biologically inspired and probabilistic algorithms for odor source localization, in both single-robot and multi-robot systems.
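
On the probabilistic side, one common formulation maintains a grid of candidate source locations and applies Bayes' rule after every detection or non-detection event. The Python sketch below shows one such update under an assumed (and deliberately crude) exponential-decay detection model; the grid, the plume_scale parameter, and the likelihood itself are illustrative choices, not Lochmatter's implementation.

```python
import numpy as np

def bayes_update(belief, robot_xy, detected, cells, plume_scale=2.0):
    """One Bayesian update of a grid belief over the odor source location.

    Assumed likelihood: the probability of detecting odor decays
    exponentially with the distance between the robot and the source.
    """
    dists = np.linalg.norm(cells - robot_xy, axis=1)
    p_detect = np.exp(-dists / plume_scale)
    likelihood = p_detect if detected else 1.0 - p_detect
    belief = belief * likelihood
    return belief / belief.sum()             # renormalize to a distribution

# 10x10 grid of candidate source cells, uniform prior
xs, ys = np.meshgrid(np.arange(10.0), np.arange(10.0))
cells = np.stack([xs.ravel(), ys.ravel()], axis=1)
belief = np.full(len(cells), 1.0 / len(cells))

belief = bayes_update(belief, np.array([2.0, 3.0]), True, cells)   # odor hit
belief = bayes_update(belief, np.array([7.0, 7.0]), False, cells)  # no odor
print(cells[np.argmax(belief)])              # most likely source cell so far
```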
