June 6th, 2008

Robots: Cornell Racing Team and Velodyne’s LIDAR Sensor - Transcript

Our inaugural episode centers on the 2007 DARPA Urban Challenge, featuring interviews with Professor Daniel Huttenlocher from Team Cornell and Rick Yoder from Velodyne, a producer of LIDAR sensors used by several teams in the robot car race.

Dan Huttenlocher

Dan Huttenlocher is a professor of Computing, Information Science, and Business at Cornell University in Ithaca, New York. As co-leader of Cornell’s racing team for the 2007 DARPA Urban Challenge, he spent countless hours testing the autonomous car, which ultimately finished among the six vehicles capable of following California’s rules of the road over 56 miles of a mock urban environment. With design in mind, his team of 13 students managed to discreetly embed a Velodyne LIDAR, three IBEO 1.5D LIDARs, five 1D SICK LIDARs, five millimeter-wave radars, and four cameras in a slick black 2007 Chevy Tahoe. Of course, millions of data points per second don’t come for free, and the trunk of Cornell’s car is home to 17 dual-core processors.

Since a pile of impressive hardware and CPU power is not enough, Team Cornell developed the artificial intelligence and control software needed to allow their robot to represent its immediate surroundings on the road and to further figure out, on a more global scale, where it really was in the world. Moreover, the Cornell car also needed to localize and track other objects in the environment and ideally reason about their next moves. So, what went wrong in this little fender bender with MIT’s car (see video below)? I guess the professional human drivers during the challenge weren’t wrong when they said that Cornell’s car drove like a human.



Velodyne LIDAR

Rick Yoder is an employee at Velodyne, a newcomer in the field of LIDAR (Light Detection and Ranging) sensors. The HDL-64E LIDAR uses an impressive 64 stationary lasers on a base rotating at 900 rpm. This sensor was specifically designed for the 2007 DARPA Urban Challenge and was used by around a third of the participating teams, although some other teams may have been turned away by the hefty US$75,000 price tag! Though not yet destined for the consumer market, Rick hints at a new series of sensors that may soon find their way into your car.


Latest News:

Visit the Robots Forum for links and discussions about the IEEE Spectrum Magazine’s Singularity, Robin Murphy’s Survivor Buddy, Georgia Tech’s Sandbot, the rapid-prototyper robot “RepRap” and the Japanese Navirobo teddy bear mentioned in the podcast.

View and post comments in the forum


Comments:

  • Sabine Hauert

    Hey!

    It was great to speak with Dan Huttenlocher who gave a very interesting talk here at the EPFL.
    I was personally amazed by the speed at which the robotics community is striving towards autonomous cars. There’s been a huge step since the already spectacular 2005 DARPA Grand Challenge and its desert roads. When I asked Prof. Huttenlocher why he wanted to participate in the DARPA Urban Challenge, he said "Because I thought it was impossible".
    Well, I guess not, and if 13 students could pack up today’s robotic technologies and create a robot car for urban environments in just one year, then we might not be that far from seeing these cars in our streets soon.

    Autonomous cars…. bring it on…

  • Bender

    Yeah, it’s crazy. Just a few years ago the best teams didn’t even manage to finish the desert track and now we have several teams completing the Urban Challenge… I wonder what DARPA will come up with next…

  • Johnny 5

    Guess you don’t have to look very far … the answer’s right here on the forum: viewtopic.php?f=9&t=43&p=81&hilit=chembot#p81

  • Marvin

    I guess a next logical step would be a challenge to automate convoys. That’s what they need to run their military supply chains in autonomous mode. And it would be quite useful for cars as well … just hit a button once you’re on the highway and your car joins the chain of moving cars.

    If you have some data on the car’s weight and brake performance, this could make convoy driving much safer. And you could save tons of gas by slipstreaming.

  • Number 6

    Velodyne’s LIDAR sensor being used to make a Radiohead video:


    More info at:
    http://code.google.com/creative/radiohead/

  • Anonymous

    AWESOME!

  • Marvin

    Ah, at last! Humans merge with robots …

  • ipodfansmail

    May i have a personal question? counld u sent me your email address?
    My email is ipodfansmail@gmail.com, thx, man.

  • Anonymous

    [quote="ipodfansmail"]May i have a personal question? counld u sent me your email address?
    My email is ipodfansmail@gmail.com, thx, man.[/quote]

    Whose, mine? So you can spam me? No way!! :>

    But you can probably contact an individual user by sending him a private message through the forum … (not me though :> hehe! )

  • LeonidAdams

    ipodfansmail, you can send your question to me by PM

  • Pingback: 017 – Roboter | omega tau

Transcript

Cornell Racing Team

Interview with Dan Huttenlocher

ROBOTS (Sabine):  Hi Dan, welcome to Robots.

Dan Huttenlocher:  Hi.

ROBOTS:  Can you please present yourself to our listeners?

Dan Huttenlocher:  Sure.  I’m a professor at Cornell University, which is in New York State in the United States, both in the computer science department there and in the business school.

ROBOTS:  With Team Cornell, you developed one of the six successful autonomous cars capable of driving 56 miles in a mock urban environment.  Can you present Team Cornell to us?

Dan Huttenlocher:  Sure.  The team was largely composed of undergraduate students and some masters level students also.  And the students designed this vehicle really from the ground up.  They started with a standard car and then they designed all of the aspects of the mechanical engineering for controlling the car, the electrical engineering for interfacing to the systems in the car, and then all of the computer algorithms for sensing what was going on in the environment and doing the actual planning and control of the vehicle.

ROBOTS:  And how many students were on this team?

Dan Huttenlocher:  It was a fairly small team, about a dozen students.  I was one of two faculty advisors, myself and a guy named Mark Campbell, who is in mechanical engineering and brought more of the expertise from that side.

ROBOTS:  So maybe for those who do not already know, what is the DARPA Urban Challenge?

Dan Huttenlocher:  The DARPA Urban Challenge is the third in a string of competitions run by the Defense Advanced Research Projects Agency, which is an agency of the United States government, and the goal of these challenges has been to advance the development of robotic vehicles.  The first two challenges both involved vehicles driving in the desert, which, while quite complicated from the robotics point of view, was a relatively simple environment in terms of a human being driving in the desert; there were not many obstacles around.  There was a lot of success in the second of those challenges, so for this Urban Challenge DARPA decided to up the stakes a lot.  They took a piece of an old army base (actually an air force base) that was no longer used but was an urban environment, and set up a many-kilometer course there where they had not only the robots competing but also a bunch of traffic vehicles driven by professional drivers, really to simulate what driving in traffic in an urban environment would be like.

ROBOTS:  So, concretely, what were the main skills that these cars needed to demonstrate?  Were there stop signs?

Dan Huttenlocher:  Yes, the cars had to follow the rules of the road of the state of California.  Like every good person first learning how to drive, you look through these rule books and then of course after you become a practiced driver you forget about them completely, but in developing the vehicle we had to pay a lot of attention to what the rules of the road are.  Then the cars had to negotiate various kinds of situations that would happen in a more urban environment: things like stop signs, where when you approach the intersection you have to pay attention to who was there before you and wait your turn, which some human drivers don’t do such a good job of, as we have all seen; things like merging into moving traffic, where you have to pull in between vehicles that were already driving along.  So a set of these kinds of behaviors, and then in addition some other behaviors that maybe are not quite so common in normal everyday urban driving, things like where a road would be blocked and so you have to plan a new route and go around the blockage.
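
As an aside for readers who like to see ideas in code, here is a minimal Python sketch of the "wait your turn" stop-sign behavior Dan describes; the class, names and queue logic are purely illustrative and are not Team Cornell's actual software.

    # Illustrative sketch of stop-sign precedence ("wait your turn"); not Team Cornell's code.
    from dataclasses import dataclass, field

    @dataclass
    class StopSignIntersection:
        """Tracks arrival order at a four-way stop and tells us when it is our turn."""
        queue: list = field(default_factory=list)   # vehicle ids in order of arrival

        def vehicle_arrived(self, vehicle_id: str) -> None:
            if vehicle_id not in self.queue:
                self.queue.append(vehicle_id)

        def vehicle_departed(self, vehicle_id: str) -> None:
            if vehicle_id in self.queue:
                self.queue.remove(vehicle_id)

        def may_proceed(self, my_id: str) -> bool:
            # Proceed only once every vehicle that stopped before us has gone.
            return bool(self.queue) and self.queue[0] == my_id

    # Usage: another car stopped before us, so we wait until it has cleared the intersection.
    intersection = StopSignIntersection()
    intersection.vehicle_arrived("car_east")
    intersection.vehicle_arrived("self")
    assert not intersection.may_proceed("self")
    intersection.vehicle_departed("car_east")
    assert intersection.may_proceed("self")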

ROBOTS:  This sounds like quite a daunting task, and you had these 13 students working on it.  In the end, what was the Cornell car like?

Dan Huttenlocher:  The Cornell car was characterized by a few things.  One is that the team paid a lot of attention to what I would call engineering elegance.  The vehicle looks much more like a regular car than some of the other vehicles in the competition.  They were careful to take the sensors and try to embed them in the vehicle so that it did not look like a robot car but like a regular car.  They paid a lot of attention to having the car drive very smoothly, and in fact I think one of the greatest compliments that we got was one we actually had to overhear in a bar, because the drivers who were driving these traffic vehicles would all come to the bars in the evening afterward, and the drivers were commenting that the Cornell vehicle drove most like a person, that a lot of these other cars seemed like robots and this car seemed to drive like a human, and to us I think that was really a very big compliment.

ROBOTS:  And what car model was used?

Dan Huttenlocher:  The car was a Chevy Tahoe, or is, I shouldn’t even put it in the past tense because the vehicle is still up and running and being used for research.  The car is a Chevy Tahoe so it is a fairly large SUV and there were a couple of reasons for choosing that vehicle platform.  One is that it has a lot of extra room under the hood so that we could put a second alternator to generate more electrical power, and then also there is plenty of room in the back for all the computers and there are a lot of computers in the back for controlling the car.

ROBOTS:  And what were the sensors, actuators, and CPU which allowed this car to drive so human-like?

Dan Huttenlocher:  There are a set of sensors, ranging from cameras that were used for doing things like recognizing where the road is in front of the vehicle and finding the stop lines when you had to come to a stop, to what are known as LIDAR sensors, which are laser ranging devices that tell you the distance to things in the world.  They produce something that looks a lot like an image from a regular camera, except instead of seeing brightness in each pixel of the image, you actually see distance.  So there were these LIDAR units, and then in addition we also had radars that could detect other cars in the environment.  The sensors were a really critical piece of this, because if you think about what makes driving in an urban environment hard, it is dealing with the other vehicles around you.  And then in terms of the computing power, we had a whole rack full of actually 17 dual-processor Pentiums in the back of the car, so really there was a whole data center back there.  And that was necessary both for running the software that was used to process the sensing data and for the artificial intelligence software that was doing the planning and control of the vehicle.
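
To make the "image of distances" analogy concrete, here is a small Python sketch that turns per-beam ranges (distance plus beam direction) into 3D points; all angles and ranges are made-up values for illustration, not output from any team's sensor driver.

    # Turning a tiny "range image" into 3D points in the sensor frame (illustrative values only).
    import math

    def range_to_xyz(range_m, azimuth_deg, elevation_deg):
        """Convert one laser return into x (forward), y (left), z (up) coordinates."""
        az = math.radians(azimuth_deg)
        el = math.radians(elevation_deg)
        horizontal = range_m * math.cos(el)      # projection onto the ground plane
        return (horizontal * math.cos(az),
                horizontal * math.sin(az),
                range_m * math.sin(el))

    # Rows are beam elevations, columns are azimuth steps; each cell holds a distance in metres.
    elevations = [-2.0, 0.0, 2.0]
    azimuths = [-1.0, 0.0, 1.0]
    ranges = [[12.1, 12.0, 12.2],
              [30.5, 30.4, 30.6],
              [55.0, 54.8, 55.1]]

    point_cloud = [range_to_xyz(ranges[i][j], azimuths[j], elevations[i])
                   for i in range(len(elevations)) for j in range(len(azimuths))]
    print(len(point_cloud), "points, first:", point_cloud[0])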

ROBOTS:  And what was this car worth in the end?

Dan Huttenlocher:  It is very hard to put a price tag on it.  In some sense all of the vehicles in the urban challenge really are unique things where it is not so easy to put a price on them.  But just if you think about all of the physical hardware that went into the car, it is several hundred thousand dollars, so on the order of three or four hundred thousand dollars, but it is also probably 18 hours a day, 7 days a week, for 6 months’ worth of effort from the whole team, and it is hard to put a price on that.

ROBOTS:  Let us look a little bit more into the AI.  How was the robot capable of perceiving its environment, figuring out where it was, etcetera?

Dan Huttenlocher:  There are really several layers to the way the robot represents what is going on in the environment, and maybe the simplest thing is to think a little bit by analogy to human driving.  The perceptual processing in the car is broken down into something that we call an egocentric representation, which is the vehicle detecting what is out there in the world with respect to itself.  And in that egocentric representation, the car is able to do things like obstacle avoidance.  So for example, if something were to come out into the road in front of the car, it would know how to swerve out of the way or stop in order to try to avoid it.  And that is a very local, almost reactive response to what is going on in the environment around you.  Then the next level up from that is a longer range representation which is related to a map.  And if you think about how you drive as a person, there is a similar sort of difference, right?  I mean, when you are driving, if a ball rolls out into the street in front of your car or something, you very quickly come to a stop and try to avoid it, without really thinking about where am I on the road?  You are not really paying attention to the map at that point.  And then this higher level representation with the map allows you to do things like reason about what kind of route you ought to be taking from one place to another, but also lets you reason about vehicles that are approaching.  So for instance, if you are on a road where there are two lanes of traffic, one in each direction, and there is a car coming at you, you react very differently if you know that it is a two-way road rather than a one-way road.  If it is one-way, you are going to very quickly try to get out of the way because someone is clearly doing something very confused.  So this higher level representation of the environment, where we put the vehicle on a map, localize it, and then reason about the world with respect to the map, allows you to do this longer range kind of planning.
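
Here is a hedged Python sketch of the two layers Dan contrasts: a reactive egocentric check that needs no map, and a transform that places the same detection on a global map for route-level reasoning. The thresholds and frame conventions are invented for illustration.

    # Two layers of representation (illustrative only): reactive egocentric check vs map frame.
    import math

    def egocentric_emergency_stop(obstacles_ego, lane_half_width=1.5, stop_distance=8.0):
        """Reactive layer: brake if anything sits directly ahead in our lane; no map needed."""
        return any(0.0 < x < stop_distance and abs(y) < lane_half_width
                   for x, y in obstacles_ego)    # (x forward, y left) in the vehicle frame

    def ego_to_map(point_ego, vehicle_pose):
        """Map layer: place an egocentric detection into the global map frame."""
        px, py, heading = vehicle_pose           # vehicle position and yaw on the map
        x, y = point_ego
        return (px + x * math.cos(heading) - y * math.sin(heading),
                py + x * math.sin(heading) + y * math.cos(heading))

    # A ball-like obstacle 5 m ahead triggers the reactive stop regardless of where we are on the map.
    print(egocentric_emergency_stop([(5.0, 0.2)]))
    # The same detection, localized on the map, can feed longer-range route reasoning instead.
    print(ego_to_map((5.0, 0.2), (100.0, 250.0, math.pi / 2)))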

ROBOTS:  Here you just spoke of a rolling ball and cars which are moving; how do you actually track these objects and figure out what they are doing?

Dan Huttenlocher:  There are several layers to the perceptual system.  But what the perceptual system tries to do, at a high level of description, is take data from all of these different sensors, LIDAR, radar, vision cameras, and integrate that together into one overall representation.  So for example, you may get radar data and LIDAR data that correspond to the same location in the world, and then you believe that that is probably the same object.  You also get some vision data from there that may help you further characterize the object.  And what the car does in terms of the software at that level is it actually tracks all of the individual objects that it can separate from the background in the environment.  So things that it thinks are not part of the ground or part of buildings, based on their size or the fact that they are moving, it tries to track each of those objects independently and keep track of them.  And if you think again by analogy to human driving, as a human you have a hard time tracking more than a couple of objects in your environment at once.  You are generally paying attention to one or two other cars in terms of your focus of attention, like the car in front of you or maybe somebody next to you who might be trying to come into your lane.  But once we are able to track any vehicle with software, tracking a lot of them is not so much harder; in fact the car and the software on the car are potentially better than people in the longer run at keeping track of a lot of things going on in the environment.
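
As a rough illustration of the association step Dan mentions (radar and LIDAR returns that land near the same place are treated as one object and then followed over time), here is a toy nearest-neighbour tracker in Python; it is a sketch of the general idea, not Team Cornell's fusion pipeline.

    # Toy multi-object tracker: detections close to an existing track update it, others start new tracks.
    import math

    class SimpleTracker:
        def __init__(self, gate_m=2.0):
            self.gate_m = gate_m                 # maximum distance for "same object"
            self.tracks = {}                     # track id -> last known (x, y)
            self._next_id = 0

        def update(self, detections):
            """detections: (x, y) positions from any sensor, already in a common frame."""
            for det in detections:
                best_id, best_dist = None, self.gate_m
                for tid, pos in self.tracks.items():
                    d = math.dist(det, pos)
                    if d < best_dist:
                        best_id, best_dist = tid, d
                if best_id is None:              # nothing nearby: start a new track
                    best_id, self._next_id = self._next_id, self._next_id + 1
                self.tracks[best_id] = det       # associate: update the matched track
            return self.tracks

    tracker = SimpleTracker()
    tracker.update([(10.0, 0.0), (30.0, 5.0)])   # e.g. two LIDAR detections
    print(tracker.update([(10.4, 0.1)]))         # a radar return near the first joins its track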

ROBOTS:  With the MIT team, there was a minor fender bender, which seems to have resulted from not interpreting the tracking data quite correctly.  First of all, can you tell us about this fender bender and what would have been needed for it not to happen?

Dan Huttenlocher:  Sure.  So what happened in about the middle of the race is that none of these systems work anywhere near perfectly yet and certainly, while in the long term one can see these kinds of things doing better than human drivers, they still do not at the moment, and so our car was a little bit confused.  There was a big concrete barrier right at the edge of the road, and our vehicle thought that that might be another vehicle that might start moving, and so it was sort of trying to get around it and leave enough room, and so MIT’s vehicle came up behind us and basically got impatient with waiting for our car, which was sort of stopping and going and backing up and trying to get around this thing.  So their vehicle went around and passed ours, but they pulled back into the lane that we were in basically where we were also moving, and the two cars almost kissed.  They came together moving in the same direction, and there are a number of interesting videos of that available on the internet.  It is amazing, anything that happens these days is available out there.  So in terms of what was necessary to prevent that, there were a few things.  One is that for all of these vehicles, the ability to do tracking reliably over long time periods is still quite limited.  And, in particular, in the case of the particular solution that MIT used, they in addition had some problem with telling slow moving vehicles from stopped vehicles, so they thought we were standing still when our vehicle was actually sort of stopping and going, and so they tried to pull in too tight.  But if you think back to what does a human do in this kind of situation?  So say there was a human driving our car instead of a robot, the human would say somebody is trying to pass me here in this place where there are not even two lanes.  I am just going to stop and wait for them to go by.  I do not know what they are doing but it does not make any sense.  Whereas all of these vehicles really react much more locally to what is going on in the environment.  They don’t have a representation that is over a long enough time period to say somebody is trying to pass me, and so, in reacting to this very local representation of things, they cannot make the kinds of more intelligent decisions that humans might make that are more based on understanding a much longer time period of observation and action.  In fact this is one of the big research areas that we are looking at now, how do you get these vehicles and these perceptual systems to be able to actually anticipate the actions of others.  Right now they just react and a good human driver is not just reacting, you’re actually anticipating what someone might do next.
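
To make the "react versus anticipate" point more tangible, here is a small Python sketch of the kind of longer-horizon check Dan says the 2007 cars lacked: looking at a neighbour's track over several seconds and concluding that it is overtaking us, rather than reacting frame by frame. All thresholds and the track itself are invented for illustration.

    # Toy overtaking detector built on a few seconds of tracked relative positions.
    def is_overtaking(neighbour_track, window_s=5.0, dt=0.5):
        """neighbour_track: (x ahead of us, y to our left) samples, oldest first, spaced dt apart."""
        steps = int(window_s / dt)
        recent = neighbour_track[-steps:]
        if len(recent) < steps:
            return False                                   # not enough history to judge
        started_behind = recent[0][0] < -2.0               # began behind our rear bumper
        now_alongside_or_ahead = recent[-1][0] > -1.0
        stayed_beside_us = all(abs(y) > 1.5 for _, y in recent)   # kept to the adjacent lane
        return started_behind and now_alongside_or_ahead and stayed_beside_us

    # A car creeping up on our left over five seconds is flagged, so we could decide to yield.
    history = [(-8.0 + 1.5 * i, 3.0) for i in range(10)]
    print(is_overtaking(history))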

ROBOTS:  Do you have any insight on how this could be done?

Dan Huttenlocher:  There certainly are some prerequisites at least to being able to do anticipation, which involve being able to track and represent the actions of vehicles over longer time periods, and then the kind of directions that we are planning to take that involve machine learning sorts of techniques, where you can look at a variety of previous experiences where certain sorts of things led to certain sorts of outcomes, and then potentially use that to predict what outcomes are likely given the situations that you will observe in the future.
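
A minimal sketch of that learning-from-experience direction, under the assumption that past situations are summarized as feature vectors with an observed outcome: predict the outcome of a new situation from its most similar past case. The features and labels below are hypothetical; Dan only hints at the general approach.

    # Nearest-case prediction from a memory of past (situation, outcome) pairs (illustrative only).
    import math

    class ExperienceMemory:
        def __init__(self):
            self.cases = []                      # list of (feature_vector, outcome_label)

        def record(self, features, outcome):
            self.cases.append((features, outcome))

        def predict(self, features):
            """Return the outcome of the most similar previously observed situation."""
            _, outcome = min(self.cases,
                             key=lambda case: math.dist(case[0], features))
            return outcome

    memory = ExperienceMemory()
    # Hypothetical features: (relative speed m/s, gap m, lateral offset m) of a neighbouring car.
    memory.record((4.0, 6.0, 3.0), "will_overtake")
    memory.record((0.0, 15.0, 0.0), "will_follow")
    print(memory.predict((3.5, 7.0, 2.8)))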

ROBOTS:  Here we spoke of the MIT team.  How were the strategies used in your car different from what the other teams used?

Dan Huttenlocher:  One of the bigger differences was in the way that the software worked on our car, the AI layers, which is really where most of the strategy of the vehicle is.  The solution that our students took in designing this was to have our car really try to track objects over at least medium-scale time periods.  As we were just discussing, the really long time period tracking that you would need to be able to anticipate actions we cannot do yet.  But our car was able to track other vehicles for periods of 10 or 20 seconds fairly reliably, and from that was able to do some better kinds of prediction of actions.  So I think that was a big distinguishing characteristic.  Most of the other teams, instead of trying to say what obstacles are out there in the environment and track them as independent obstacles, instead used what are known as occupancy representations, where you just say what portions of the world have something in them versus free space, and you do your planning with respect to that.  We took a more dynamic view of the world, in terms of the kind of solution where objects are things that are actually moving independently.  So in some sense, at a high level, our approach is a little bit more like a video game with a bunch of things flying around in it, rather than a video game where the world is all static.
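
The contrast Dan draws can be sketched in a few lines of Python: an occupancy view only marks which cells contain "something", while a dynamic view keeps a list of objects with their own velocities so their future positions can be predicted. Cell size, values and object fields are illustrative, not any team's actual representation.

    # Occupancy grid versus tracked dynamic objects (toy example).
    CELL = 0.5   # metres per grid cell

    def mark_occupancy(grid, detections):
        """Occupancy view: forget what the object is, just mark its cell as occupied."""
        for x, y in detections:
            grid[(int(x // CELL), int(y // CELL))] = True
        return grid

    def advance_dynamic_objects(objects, dt):
        """Dynamic view: each object carries a velocity, so we can predict where it will be."""
        return [{"id": o["id"],
                 "pos": (o["pos"][0] + o["vel"][0] * dt, o["pos"][1] + o["vel"][1] * dt),
                 "vel": o["vel"]} for o in objects]

    grid = mark_occupancy({}, [(12.3, 4.1)])                 # static: "this cell is occupied"
    cars = advance_dynamic_objects(
        [{"id": 7, "pos": (12.3, 4.1), "vel": (5.0, 0.0)}], dt=1.0)
    print(grid, cars[0]["pos"])                              # dynamic: object 7 will be at (17.3, 4.1)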

ROBOTS:  Let us imagine we are one year before the DARPA Urban Challenge: what would you do differently?

Dan Huttenlocher:  I am not sure I would actually do much differently.  We made a decision at Cornell, on our team, and I think it was a good decision from a lot of perspectives, to really let the students make all the design decisions.  A lot of the teams were more professional engineers who already had quite a lot of experience, or faculty and PhD students or postdocs who were further along in their education.  And we wanted to do something a little bit different, because we thought this would be a good learning experience for students at the undergraduate and masters level, and I don’t think I would change that.  Of course, the problem is that having made that commitment it is then very hard to change anything else, because the students make all the decisions in the end.  If there is one thing I would try to do even more than I did, it is to enforce even more testing discipline.  We put a lot of effort into testing, but the teams that really did the best in the challenge had done even more testing than we had, and one of the things that happened to us was that in the later part of the race we had a hardware bug that we knew was in the car but had not been able to track down.  It was intermittent, and it slowed our vehicle down to a maximum speed of 5 miles an hour when it was occurring, so I think even more testing would have certainly made the vehicle perform better.

ROBOTS:  Let us look a bit at the future now.  What are the future steps in making autonomous cars?

Dan Huttenlocher:  I think there is both a good deal of research and also a lot of engineering in taking the research and really building systems that work and that are reliable.  These cars drove for a few hours, which on the one hand is a huge accomplishment compared to the previous state of the art, but we all drive our cars for years and I do not think any of these solutions are reliable enough yet to last that kind of length of time.  So there are a lot of engineering challenges in making these systems robust and reliable, but there are also the kinds of research questions, some of which we touched on before, about how you can develop representations that allow these vehicles not just to react but actually to anticipate what is going to happen.  So I think those are both directions where there is considerable need for further work before we will have fully autonomous vehicles out on the roads every day.  But on the other hand, I think a lot of this technology is already starting to become available, not in the setting of fully autonomous vehicles, but more in the setting of safety systems in automobiles, and I think there we are already starting to see this and we will see much more of it over the next few years.

ROBOTS:  One last question.  I interviewed several people on the street about what they think of having a car that would drive them to work, and one of the women said "hell no" and just laughed, so I am wondering what it will take to get people into an autonomous car.

Dan Huttenlocher:  I think it will happen incrementally.  Already there are these systems in high end automobiles that are called adaptive cruise control, so with regular cruise control you set a speed on your car and it just drives at that speed, and if there is somebody in front of you going slower you just hit them.  The adaptive cruise control uses radar to tell the distance to the vehicle in front of you and actually its speed and then can compute the relative speed between the two vehicles.  It backs off and slows down your car to a safe following distance based on the speed that you are moving, which is already doing a fair amount of reasoning about what is going on in the environment, and this is a safety system, and I think we are going to see more and more complicated safety systems in these cars over time.  There are other safety systems that are used for big trucks, where they have vision sensors in the front and they can tell when the driver is drifting out of the lane, based on the painted lane markings on the road.  So I think as more and more of these safety systems come into vehicles, people will get used to the car warning them when they are doing something unsafe, like when they are drifting out of the lane, being able to adjust to the vehicles around them, and eventually the difference between that and a fully autonomous car will not be a very big step.
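
For the curious, here is a simplified Python sketch of the adaptive cruise control logic Dan describes: radar provides the gap to the car ahead and the closing speed, and the controller backs off to hold a safe time gap. The gains, time gap and speed limits are invented for illustration and do not correspond to any production system.

    # Toy adaptive cruise control: pick a target speed from the gap and closing speed.
    def acc_command(own_speed, gap_m, closing_speed, set_speed,
                    time_gap_s=2.0, k_gap=0.3, k_speed=0.8):
        """Return a target speed in m/s for the throttle/brake layer to track."""
        if gap_m is None:                         # radar sees nothing ahead: plain cruise control
            return set_speed
        desired_gap = time_gap_s * own_speed      # e.g. a two-second following distance
        gap_error = gap_m - desired_gap           # negative when we are too close
        target = own_speed + k_gap * gap_error - k_speed * closing_speed
        return max(0.0, min(set_speed, target))   # never exceed the driver's set speed

    # Following a slower car: 25 m gap, closing at 3 m/s, cruise set to 30 m/s; the command backs off.
    print(acc_command(own_speed=27.0, gap_m=25.0, closing_speed=3.0, set_speed=30.0))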

ROBOTS:  Thanks, Dan, for being here with us on Robots.

Dan Huttenlocher:  Thank you.

Velodyne’s LIDAR Sensor

Interview with Rick Yoder

ROBOTS (Narrator):  Most of the finishing teams in the DARPA Urban Challenge, including Team Cornell, used this same LIDAR sensor.  Now we go to Adam, who spoke with Rick Yoder from Velodyne, the manufacturer of this sensor.

ROBOTS (Adam):  Hi Rick Yoder, and welcome to Robots.  You work at Velodyne, producer of the LIDAR sensor that was used by 5 out of the 6 finishing teams that competed in the DARPA Urban Challenge.  Can you explain to us what exactly a LIDAR is?

Rick Yoder:  The LIDAR is a 64-laser-element system that fires all 64 lasers simultaneously across a 26-degree vertical array, which gives you a vertical field of view of 26 degrees and simultaneous returns from each of those lasers, giving you unprecedented amounts of data.

ROBOTS:  Okay.  Why is this sensor so useful for current navigation?

Rick Yoder:  Because it operates in real time.  It gives you a complete 360-degree view around the vehicle, at the aforementioned 26-degree vertical field of view, in real time, so every revolution gives you a single scan, and you can operate the unit anywhere between 5 and 15 hertz.

ROBOTS:  Why is this better than, say, an omnidirectional camera, or two omnidirectional cameras or something?

Rick Yoder:  An omnidirectional camera will not necessarily give you distance data.  Your main concern is to capture a three-dimensional environment that gives you a distance measurement for every point in the point cloud, so any obstacle that might be in the view of the sensor yields accurate distance data, not only straight-line range but also X, Y and Z coordinates that can be used for calculations.

ROBOTS:  Okay.  That sounds like a lot of data is being sent through this LIDAR all at once.  How do you deal with all this information?

Rick Yoder:  To tell you the truth, we basically do all the processing on board with DSPs; the data is run through our various conversion stages and then sent out as UDP packets over Ethernet, and you are basically receiving a little over a million data points per second.
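
Putting the figures from this interview together (64 lasers, a rotation rate selectable between 5 and 15 hertz, and the "little over a million data points per second" Rick mentions), here is a back-of-the-envelope Python calculation of points per revolution and the implied azimuth step; these are rough numbers derived from the conversation, not official specifications.

    # Rough numbers implied by the interview figures; actual HDL-64E specifications may differ.
    LASERS = 64
    POINTS_PER_SECOND = 1_000_000        # "a little over a million data points per second"

    for rotation_hz in (5, 10, 15):
        points_per_rev = POINTS_PER_SECOND / rotation_hz
        firings_per_laser_per_rev = points_per_rev / LASERS
        azimuth_step_deg = 360.0 / firings_per_laser_per_rev
        print(f"{rotation_hz:>2} Hz: ~{points_per_rev:,.0f} points/rev, "
              f"azimuth step ~{azimuth_step_deg:.2f} deg")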

ROBOTS:  Okay, that sounds like a lot of data to deal with I guess.

Rick Yoder:  Quite a bit.  We do not write any proprietary software for the unit itself.  We pretty much left that up to the imaginations of the end users, so that they can set it up for whatever application they particularly want to use it for.

ROBOTS:  And they have been using your sensor quite successfully; as I said, 5 out of 6 finishing teams used your system.  Do you think your sensor will be in consumer cars anytime soon?

Rick Yoder:  Probably not in its current format.  We are constantly working to improve and develop new products that will be more applicable to the modern vehicle.  At the moment it is a bit large and cumbersome, but we have got some things in the works that will hopefully prove to be a pretty nice setup for almost all the auto manufacturers in the future.  We are also looking at several different types of markets.

ROBOTS:  Such as what other markets?

Rick Yoder:  We are actually looking at some surveillance and mapping markets.  Another interesting application that we have come across is with the unit mounted 90 degrees from its normal mounting position.  It can then be used to accumulate data rather than giving a real-time type of acquisition: it scans the environment horizontally as you drive down the road at speed, and the accumulated scans give you a high-definition point cloud of the entire surrounding area along the roadside.
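
The "accumulate as you drive" mode Rick describes amounts to dropping each scan into a world frame using the vehicle pose at the time of the scan. Here is a minimal 2D Python sketch of that accumulation; the poses and scan points are fabricated, and the real workflow obviously involves full 3D data and accurate positioning.

    # Accumulating successive scans into one roadside point cloud (toy 2D version).
    import math

    def scan_to_world(scan_points, pose):
        """scan_points: (x, y) in the sensor frame; pose: (x, y, heading) of the vehicle in the world."""
        px, py, heading = pose
        c, s = math.cos(heading), math.sin(heading)
        return [(px + x * c - y * s, py + x * s + y * c) for x, y in scan_points]

    world_cloud = []
    drive = [((0.0, 0.0, 0.0),  [(0.0, 5.0), (0.0, -5.0)]),    # pose, then points seen at that pose
             ((10.0, 0.0, 0.0), [(0.0, 5.2), (0.0, -4.9)]),
             ((20.0, 0.0, 0.0), [(0.0, 5.1), (0.0, -5.0)])]
    for pose, scan in drive:
        world_cloud.extend(scan_to_world(scan, pose))
    print(len(world_cloud), "accumulated points:", world_cloud)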

ROBOTS:  So you can use it for like mapping or something like that.

Rick Yoder:  Yes, absolutely.  We are working with a few different companies that are doing just that, as well as surveying different assets along the roadside such as guard-rails, bridges, signage, and also tree growth.

ROBOTS:  That is pretty interesting.

Rick Yoder:  It is getting pretty exciting.  There are a lot of different applications that we are finding as more and more customers pick up these units and experiment with them.

ROBOTS:  Okay.  What should we expect to see in the next 3 or 4 years from you guys?  Any consumer products or still sticking to the big heavy units?

Rick Yoder:  [Regarding] the big heavy units, we are in the process of building another 50 units for sale.  They are just about ready; they should be shipping by the end of this month.  And we are constantly developing other new products that I cannot necessarily talk about at this time, but there is some pretty exciting stuff as far as color LIDAR systems, using red, green and blue lasers, as well as some miniature systems that can be used on roving robots and handheld devices.

ROBOTS:  Excellent.  Sounds like there are some interesting things coming from you guys.  We will keep an eye out.  Thank you very much Rick, for the interview, and we hope to hear from you soon.


All audio interviews are transcribed and checked with great care. However, we cannot assume any responsibility for their accuracy.