Archive for the ‘Podcast’ Category

March 12th, 2010

Robots: The Future of Artificial Intelligence

In this episode we stray into the realm of artificial intelligence: what it means, its early beginnings and where it may be going in the future. We speak with Kristinn R. Thórisson from Reykjavik University in Iceland, who has been involved in the AI scene for the last 20 years. He tells us about some of the great advances, but also some of the disappointments in the field, and where he thinks AI will be used in the near future. We then close with an answer to the question “What is a Robot?” from Prof. Wendelin Reich of the Swedish Collegium for Advanced Study at Uppsala University, Sweden.

Kristinn R. Thórisson

Kristinn R. Thórisson is an Associate Professor in the School of Computer Science at Reykjavik University in Iceland. After completing his doctoral studies at the famous MIT Media Lab, Thórisson went on to found several companies specialising in AI, as well as two separate AI labs in Iceland (CADIA and IIIM). Thórisson leads us on a guided tour of AI since its inception in the 50s, through ages of promise and darkness, to where it stands today. He also talks about his own research into constructivist AI and where he hopes to see AI applied in the future, in wide-ranging fields from simulation to governance at a national scale.

What is a Robot?

This week we received an excellent “Robot” definition from Wendelin Reich, professor of social psychology at the Swedish Collegium for Advanced Study at Uppsala University, Sweden.

A robot is an artificial, physically embodied ‘agent tool’. In other words, a thing that a large number of people call a ‘robot’ tends to satisfy the following criteria:

(1) It can be described as an ‘agent’ [more precisely put: it displays the typical properties of objects which we humans, and other animals, were evolutionarily designed to view as agents: self-propelled motion; goal-orientation; instrumental rationality etc.*];

(2) it is a physical object [as opposed to a virtual agent etc.];

(3) it has been constructed by someone else [humans or aliens, but not biological evolution];

(4) it fulfills a function for this someone [which makes it a ‘tool’];

(5) and it is, or is expected to be, under ultimate control by this someone [that is, a robot is autonomous only to the extent that we allow it to be so, and a ‘rogue robot’ is, by definition, an undesirable aberration].

At ROBOTS we’re pretty convinced by this definition and would like to know what you think! We’ve therefore started a discussion topic on our forum where you can debate this definition and all the other great ones we’ve received, listed below. Sincere thanks to all the contributors who made this debate possible!

“A robot is a physical machine manipulated to automatically perform an undesirable work function that supports a desired human outcome.” Kevin Makice

“A reprogrammable, multifunctional manipulator designed to move material, parts, tools, or specialized devices through various programmed motions for the performance of a variety of tasks.” Robot Institute of America, 1979

“A robot is a physical apparatus designed to perform a specific function. Functional complexity varies greatly – from the simple repetitive task involving little or no embedded software, to a set of complex tasks requiring decisions to be made based on parameters sensed in real time. These tasks and decisions may involve cooperation with other robots and/or assistance from one or more humans either directly or remotely.”

“A robot is an intelligent machine that moves, reacts and interacts with its environment in an autonomous manner.” Pius Agius

“A robot must be able to sense its environment, understand that environment and make calculated and intelligent decisions to affect that environment or its position within that environment while producing useful work without human intervention.”

“A robot is a machine with a very small and very powerful processor (and/or sensing devices) with an equally powerful software program, mounted on a strong, flexible frame or chassis, which outperforms all present machines of its time in all parameters/categories (accuracy, easy to program or instruct, easy controls, intelligent, reliable).”

“I think a robot should have a certain amount of autonomy, or be preprogrammed enough to do some work by itself. If it’s completely remote controlled, it shouldn’t be called a robot. I’m bringing this up because it seems like there are a lot of machines being used in the military and by police to disarm bombs and such, which, from what I gather, are really just remote controlled. Am I right? They look like what we think robots should look like, because they have arms and they’re mobile. But my opinion is, if they can’t really do anything on their own, they shouldn’t be called robots!…”


Latest News:

As always, more information on this episode’s news including 100+ years Popular Science archives, Japan’s Kojiro anthropomimetic robot and the open-source Roomba-enhancing project Gåågle Bot can be found on the Robots Forum.

View and post comments on this episode in the forum


Related episodes:

November 20th, 2009

Robots: Learning

In this episode we speak with two experts in robot learning. Andrea Thomaz from Georgia Tech looks at how humans can teach humanoids, and how humanoids learn, in the hope of creating good human-robot interactions. We then speak with Sethu Vijayakumar from the University of Edinburgh about machine learning and how it can be used to teach a robot hand to balance a pole.

Andrea Thomaz

Andrea Thomaz is a professor at Georgia Tech and the director of the Socially Intelligent Machines Research Laboratory. With a foot in human-robot interaction thanks to her PhD and post-doc at MIT with Cynthia Breazeal, Thomaz went on to design her own humanoid creature named Simon, augmented with an amazing designer head and flanked by the most expressive ears you’ll be seeing anytime soon. Simon features an articulated torso, dual 7-DOF arms, and anthropomorphic hands from Meka Robotics.

With Simon and other humanoid robots such as Junior, she is looking at how to make social robots that can learn from humans in their everyday environment. To this end, her lab studies how humans actually teach, and draws conclusions that could be useful when designing future machine-learning algorithms. She also takes inspiration from nature to make robots that learn incrementally by observing and reproducing what the people around them are doing, similar to what happens when you put two kids together in a playpen.

Andrea Thomaz is also the author of the blog “So, Where’s My Robot?”, where she posts thoughts on social machine learning. Finally, she was named one of MIT Technology Review’s 2009 Young Innovators Under 35.

Sethu Vijayakumar

Sethu Vijayakumar is the Director of the Institute of Perception, Action & Behavior in the School of Informatics at the University of Edinburgh and an associate member of the Institute for Adaptive & Neural Computation. With the Statistical Machine Learning and Motor Control Group there, he has been looking at how robots can learn complex tasks such as balancing a pole using an anthropomorphic arm. His pursuit of the holy grail of machine learning has brought him to tackle the intricacies of highly changing and dynamic environments. His research interests therefore span a broad interdisciplinary curriculum, involving basic research in statistical machine learning, robotics, human motor control, Bayesian inference techniques and computational neuroscience. Finally, he tells us more generally how machine learning differs from human learning and what he sees as the next steps in this area, with a short excursion into the world of prosthetics.

Since August 2007, he has held a Senior Research Fellowship of the Royal Academy of Engineering in Learning Robotics, co-funded by Microsoft Research.


Latest News:

For more information on the autopsy-performing Virtobot, a great video of the Pac-Man Robot Game, and to revisit some of 2009’s memorable robots, including SCRATCHBOT, Festo’s Robot Penguins, the wirelessly controlled beetle and robot fashion model HRP-4C, have a look at the Robots forum!

View and post comments on this episode in the forum


Related episodes:

September 12th, 2008

Robots: An Uncertain Revolution

In this episode we dive into the revolution brought on by the field of probabilistic robotics with Claudio Mattiussi who is Senior Researcher at the Laboratory of Intelligent Systems in Lausanne, Switzerland. We then launch a most “uncertain” competition to see how our listeners are able to cope with uncertainty in estimating the cleaning capabilities of our Roomba robot.

Claudio Mattiussi

As a Senior Researcher at the Laboratory of Intelligent Systems at EPFL in Lausanne, Switzerland, Claudio Mattiussi has been exploring evolutionary computation, neural networks and machine learning applied to tasks such as reverse-engineering gene regulatory networks, synthesizing neural networks, and designing electronic circuits. Thanks to his experience with real-world applications and his years in industry, Mattiussi has become keenly aware of the need to deal with uncertainty, which is present in most environments and living beings. As a solution, he presents the probabilistic, or Bayesian, approach to perceiving the world, with a touch of history, philosophy and projection. Rather than opposing good old-fashioned artificial intelligence (GOFAI) or Brooks’ behavior-based approach, he proposes the “uncertain” revolution, with the probabilistic paradigm as a compromise for the future.

Finally, he discusses how probabilities can be used to make decisions about robot behavior using neural structures and evolutionary techniques.
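To make the probabilistic paradigm a little more concrete, here is a minimal sketch (our own illustration, not code from the episode) of the basic Bayesian update a robot can use to fuse a noisy sensor reading with its prior belief, rather than trusting either one outright:

```python
def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """Return P(state | observation) given P(state) and the sensor model."""
    numerator = likelihood_if_true * prior
    evidence = numerator + likelihood_if_false * (1.0 - prior)
    return numerator / evidence

# Hypothetical example: the robot thinks a doorway ahead is clear with
# probability 0.5. Its sensor reports "clear", but the sensor is right
# only 90% of the time when the way really is clear, and wrongly reports
# "clear" 20% of the time when it is blocked.
belief = 0.5
belief = bayes_update(belief, 0.9, 0.2)   # one "clear" reading
print(round(belief, 3))                   # belief rises to 0.818

# A second consistent reading strengthens the belief further.
belief = bayes_update(belief, 0.9, 0.2)
print(round(belief, 3))                   # now 0.953
```

The robot never commits to a hard yes/no; it carries a degree of belief that each new observation sharpens, which is the essence of the approach Mattiussi describes.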

Uncertain Contest

For a detailed view of some of the subjects presented in this show, win the new book “Bio-Inspired Artificial Intelligence: Theories, Methods, and Technologies”, written by Dario Floreano and Claudio Mattiussi, out on the 30th of September 2008.

To get you to apply your own probabilistic approach to a concrete problem, we’ll be asking you to guess (or compute) the percentage of dirt collected by a Roomba robot in its own “uncertain” environment. We’re waiting for your vote by Wednesday, September 24th at 9AM GMT.

All the details for the competition can be found on our forum.


Latest News:

Check out the Robots Forum for pictures, links, videos and some ongoing discussion for this episode’s news, including the most recent iRobot headlines, Rod Brooks’ new Heartland Robotics as well as the gigantic robot spider roaming Liverpool.

View and post comments on this episode in the forum


Related episodes:

May 9th, 2008

Talking Robots: Blue Brain Robotics

In this episode of Talking Robots we speak with Henry Markram, director of the Blue Brain Project, director of the Center for Neuroscience and Technology, and co-director of EPFL’s Brain Mind Institute in Switzerland. While most roboticists have been working on abstracting the brain, the Blue Brain Project has been painting the whole picture of a rat neocortical column (NCC) from the bottom up: starting with the cells and neurons, and finally drawing the connections which generate the jungle of the mind. It seems that modeling our grey matter as a whole might give rise to emergent features such as consciousness or self-representation, and provide the necessary tools for the study of brain disorders such as Alzheimer’s or autism. Finally, robots embedded with such an in-silico replication of the brain might not only be more efficient at communicating, showing emotions and planning; they could also serve as essential testbeds for better understanding what’s happening in our heads.


Related episodes:

April 11th, 2008

Talking Robots: Personal Robots

In this episode of Talking Robots we talk to Cynthia Breazeal, Associate Professor of Media Arts and Sciences at the Massachusetts Institute of Technology in the USA, where she founded and directs the Personal Robots Group at the Media Lab. With her creaturoids, animoids, humanoids and robotized objects, Breazeal has been working to make robots and humans team up in a human-centric way, work together as peers, and learn from one another. Breazeal’s work includes personal robots such as the very expressive Kismet, the Huggable™ robot teddy, Leonardo the social creature, and the MDS (Mobile/Dexterous/Social) humanoid robot.

