May 20th, 2013

Transcript – curious & creative

ROBOTS (Per): So, we’re going to speak about artificial intelligence, how that ends up in robotics, and how people can cooperate with robots. But let’s start with how you got started in artificial intelligence and take it from there.

Rob: Okay, so when I was looking for an undergraduate degree I was interested in computers. I was interested in being creative with computers. I have always been interested in art and design and I knew I wanted to do something in computer science, but I wasn’t sure what. I came down to a short list of either doing Cybernetics at Reading University or doing artificial intelligence at Edinburgh University in the UK. I chose to do artificial intelligence and I’ve always been very happy with that choice. So I was combining a lot of work in computer graphics and artificial intelligence. This was the area I was most interested in as an undergraduate. I was looking at how I could use techniques from artificial intelligence (neural networks, genetic algorithms and other such techniques) to explore the potential for generating forms: 2D images and 3D models. Where I ended up with that was very much influenced by the work of Karl Sims, who was using genetic programming to evolve 2D images, and the work of William Latham, who was using genetic algorithms to evolve 3D sculptures. William Latham was an artist. Karl Sims mostly works in special effects, but he’s perhaps most famous for his work at Thinking Machines, the makers of the Connection Machine supercomputers, as their artist in residence. He worked a lot with the supercomputers there, exploring the potential for physics simulations and so on. So I tried to combine the work of both of those: like Karl Sims, I used genetic programming, in my case to evolve three-dimensional forms, and interactive evolution to allow users to explore the space of possibilities. That was always very interesting to me. I think possibly the thing I’ve always wanted to do, and the thing I do now really, is give creative people the tools to explore a space of possibilities.
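In miniature, an interactive-evolution loop of that kind might look like the sketch below. This is an illustrative toy in the spirit of Sims’ expression-tree images, not the software described here: the primitives, the ASCII rendering and every constant are invented for the example. The key point is that a person, not a fitness function, does the selecting.

```python
# Toy interactive evolution: expression trees over (x, y) are the genotypes,
# tiny ASCII images are the phenotypes, and a human picks the parent.
import math
import random

# Primitive functions over values in [-1, 1]; all return values in [-1, 1].
FUNCS = [
    ("sin", 1, lambda a: math.sin(math.pi * a)),
    ("cos", 1, lambda a: math.cos(math.pi * a)),
    ("avg", 2, lambda a, b: (a + b) / 2),
    ("mul", 2, lambda a, b: a * b),
]
TERMS = ["x", "y"]

def random_tree(depth=3):
    """Grow a random expression tree (the genotype)."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMS)
    name, arity, _ = random.choice(FUNCS)
    return (name, [random_tree(depth - 1) for _ in range(arity)])

def evaluate(tree, x, y):
    """Evaluate a tree at one point; the whole image is the phenotype."""
    if tree == "x":
        return x
    if tree == "y":
        return y
    name, args = tree
    fn = next(f for n, a, f in FUNCS if n == name)
    return fn(*(evaluate(arg, x, y) for arg in args))

def mutate(tree, rate=0.2):
    """Replace random subtrees to get a 'similar but different' child."""
    if random.random() < rate:
        return random_tree(depth=2)
    if isinstance(tree, str):
        return tree
    name, args = tree
    return (name, [mutate(a, rate) for a in args])

def render(tree, size=16):
    """Print a tiny ASCII image so a person can judge it by eye."""
    ramp = " .:-=+*#%@"
    for j in range(size):
        row = ""
        for i in range(size):
            v = evaluate(tree, 2 * i / size - 1, 2 * j / size - 1)
            row += ramp[min(int((v + 1) / 2 * len(ramp)), len(ramp) - 1)]
        print(row)

# Interactive evolution: the user steers the search through design space.
population = [random_tree() for _ in range(4)]
for generation in range(3):
    for idx, tree in enumerate(population):
        print(f"\n--- candidate {idx} ---")
        render(tree)
    choice = int(input("Pick the image you like best (0-3): "))
    parent = population[choice]
    # Next generation: the favourite plus mutated variations of it.
    population = [parent] + [mutate(parent) for _ in range(3)]
```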

I always find it most satisfying when I create a piece of software and then hand that over to people who think completely differently from me, who then produce things I could never imagine with that software. That is always a very exciting possibility for me. And so that’s what I was doing in my undergraduate research. And then when I was looking to do my PhD, I was interested in trying to extend this into design: to go from generating sculptures, three-dimensional forms, to evolving forms in design. And that’s how I started at Sydney University doing my PhD. But actually then I quickly got interested in what was motivating people when they’re exploring these evolved spaces with an interactive evolutionary tool. What was motivating them to select what they were selecting? How were they exploring that space? That got me into the psychology of creativity, and actually that’s been the focus of my research since then: the field of computational creativity, trying to model creative processes such that when we look at a piece of software, we can attribute some creativity to it, whatever that means. I mean creativity is still one of those terms – like life and intelligence, all those interesting questions – which we still don’t have a very good definition of. But the simplest or the most agreed definition is the ability to produce things that are both novel and appropriate, where appropriate may mean aesthetically beautiful, or valuable, or achieving a purpose in a design. And when I started in that field it quickly became apparent to me that a lot of the research in computational creativity and in artificial intelligence applied to design focused almost entirely on this question of how to produce things that are appropriate or valuable or aesthetic. That continues to be the focus. After all, it’s very hard to appreciate anything if it’s not valuable.

But for me the big question that wasn’t being asked was about novelty. We were leaving the production of something new purely to chance, and that doesn’t seem to fit very well with how humans design or are creative. And in fact when you go into the psychology, you find that people have a very specific preference: they prefer things that are similar but different to those that they’ve experienced before. There are good evolutionary explanations for this: things that are similar but different to those that we’ve experienced before give us the maximum opportunity for learning about them very quickly, and therefore for learning about the environment. So it goes into a sort of developmental psychology, and there’s been some great work by people like Juergen Schmidhuber and others who have looked at the computational modeling of curiosity.
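That similar-but-different preference is often drawn as a hedonic (Wundt) curve, with interest peaking at moderate novelty and falling away when things are too familiar or too strange. One common construction is the difference of two sigmoids, sketched below with purely illustrative constants:

```python
# The "similar but different" preference as a hedonic (Wundt) curve,
# built as a difference of two sigmoids. Constants are illustrative.
import math

def interest(novelty, lo=0.4, hi=0.7, k=20.0):
    """Reward for some novelty minus punishment for too much.

    `novelty` is assumed normalized to [0, 1], e.g. a categorization error
    from the agent's learned model of past experiences; `lo` and `hi` are
    the midpoints of the reward and punishment sigmoids.
    """
    s = lambda x: 1.0 / (1.0 + math.exp(-x))
    return s(k * (novelty - lo)) - s(k * (novelty - hi))

for n in [0.0, 0.2, 0.4, 0.55, 0.7, 0.9, 1.0]:
    print(f"novelty={n:.2f} -> interest={interest(n):.2f}")
```

With these constants, interest peaks near novelty 0.55 and drops toward zero at both extremes, which is the shape the psychology describes.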

My interest is specifically the computational modeling of curiosity within creativity, within creative pursuits, whether it be art, design, music, whatever that might be. And how curiosity both drives the creative exploration and how it feeds back into social interaction between individuals and the development of cultures. So all of these things come together. In software, a lot of the work that I’ve done is producing agents that are curious, in that they model some of these aspects of having a preference for things which are similar but different to those that have been categorized within the system before. So they’ve learned a model of these things, and then often the hardest bit is figuring out the dimensions along which to judge similarity. That’s often where I still have my fingers in the pie the most, I suppose: where I still have the most influence is in determining what the perceptual capabilities of one of these agents are and how they determine similarity.

But after that, it’s a matter of modeling curiosity, and typically that means modeling something close to boredom. That means we can have agents which are capable of determining that recent experiences are very similar to those that they’ve had before, or that they are so different from those that they’ve had before that they cannot categorize these new experiences. In those two cases there is a search for some middle ground where they can learn at an appropriate rate. And really it starts refining this whole idea of creativity being about being both novel and appropriate: it says that there are certain types of novelty that are more appropriate than other types, because we can experience them, understand them and learn about them quickly. And that gets into a whole social aspect: when we have a group of agents, those agents can share what they produce. So let’s say they use an evolutionary algorithm to produce images; they can share those images with other agents, and those agents can make judgments about how interesting the images are based on their own models of the images they’ve experienced before, on how novel the new ones are, and on their own preference for novelty. So agents can have different preferences for the amount of novelty that they’ll tolerate; whether they’re interested depends on how fast they can learn, or on other aspects that may have influenced their preference for novelty.
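A minimal version of that boredom mechanism might look like the sketch below, reusing the interest curve above. Here novelty is measured as the distance to the nearest learned prototype, a simple stand-in for the self-organizing categorizers often used in this literature; all values and thresholds are illustrative.

```python
# Boredom sketch: experiences too close to known prototypes are boring,
# experiences too far away are uncategorizable, and only the middle
# ground is interesting enough to learn from.
import math

def interest(n, lo=0.4, hi=0.7, k=20.0):
    s = lambda x: 1.0 / (1.0 + math.exp(-x))
    return s(k * (n - lo)) - s(k * (n - hi))

class CuriousAgent:
    def __init__(self, initial_experiences):
        self.prototypes = list(initial_experiences)  # model of past experience

    def novelty(self, experience):
        """Categorization error: distance to the closest thing seen before."""
        return min(min(abs(experience - p) for p in self.prototypes), 1.0)

    def appraise(self, experience):
        n = self.novelty(experience)
        if interest(n) > 0.5:          # similar but different: learnable
            self.prototypes.append(experience)
            return "interesting"
        return "boring" if n < 0.5 else "incomprehensible"

agent = CuriousAgent([0.5])
for e in [0.5, 0.52, 1.05, 1.6, 3.0]:
    print(f"{e:.2f} -> {agent.appraise(e)}")
# 1.05 is absorbed, which shifts the middle ground so 1.6 becomes learnable,
# while 3.0 stays too far from anything known to categorize.
```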

We get into a social aspect there, where we can start looking at what social structures evolve. So if we actually create a simulation where we have agents with a range of preferences for novelty, and we allow them to communicate generated artifacts, they naturally form into what we call cliques. They naturally form into groups where they like the artifacts of the other agents in their group, but they don’t like the artifacts of another group of agents, because that group has gone off into a different part of what we might call the design space, the space of possible designs.
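A toy version of that clique experiment is sketched below: agents generate artifacts near their own position in a one-dimensional “design space”, share them, and drift slightly toward work they find interesting under the same Wundt-curve appraisal as above. Nearby agents tend to coalesce into groups while distant work is ignored. Everything here, including the drift rule, is invented for illustration.

```python
# Toy clique formation: agents share artifacts and drift toward work
# they appraise as interesting; distant work scores too novel to matter.
import math
import random

def interest(n, lo=0.4, hi=0.7, k=20.0):
    s = lambda x: 1.0 / (1.0 + math.exp(-x))
    return s(k * (n - lo)) - s(k * (n - hi))

random.seed(1)
positions = [random.uniform(0, 10) for _ in range(8)]  # each agent's "style"
memories = [[p] for p in positions]                    # artifacts seen so far

for step in range(500):
    author = random.randrange(len(positions))
    artifact = positions[author] + random.gauss(0, 0.3)  # similar but different
    for agent in range(len(positions)):
        if agent == author:
            continue
        n = min(min(abs(artifact - m) for m in memories[agent]), 1.0)
        if interest(n) > 0.5:     # interesting: learn it and drift toward it
            memories[agent].append(artifact)
            positions[agent] += 0.1 * (artifact - positions[agent])

print("final styles:", sorted(round(p, 1) for p in positions))
```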

So they become increasingly differentiated, and we may end up with these social groups that have very close internal communication networks. But it can be very interesting as well if the groups then come close enough that it is possible for agents to exchange artifacts between the groups: we end up with major shifts, possibly major injections of new ideas into a group.

And that’s an interesting sort of social model that comes straight out of it. We can also model things like having agents that innovate too quickly, with a very high preference for novelty compared to the other groups, or with a very low preference for novelty compared to the other groups, and see how those are integrated into the social group. So for instance, if we model an agent with a very low preference for novelty, it won’t innovate very quickly; it will innovate very slowly and in fact will be ignored by the rest of the agents. If we have an agent that innovates very quickly, on the other hand, it will also get ignored by the other agents, because the work that it’s producing is so different from everything else that they simply cannot integrate it into their models. So its novelty is not interesting, even though it may be very novel. This builds on the work of Colin Martindale, who proposed a thought experiment called the Law of Novelty: you have to innovate in order to be recognized within a society that values creativity. But he really only looked at the case where you don’t innovate quickly enough. As we showed in simulations, the model extends to cases where you innovate too quickly, and I think that’s another interesting aspect of being able to produce these sorts of simulations.

And then more recently I have been looking at how language can be brought into these models. And language becomes a very interesting aspect of this, because then you’ve not only got the artifacts (the image, the design, whatever) but also an utterance that describes that work. And that becomes a reference. I mean, you have a way of ideally referencing not just the work but also properties of the work. So, if you have a world in which the agents have only ever produced blue squares and red triangles, and you’ve got a language that has evolved to describe blue and red and square and triangle, it then becomes possible for the agents to imagine a world which has blue triangles, even though that is something that they’ve never seen. And it becomes a way for agents such as these curious agents to produce interesting linguistic structures: to search purely in the language for something that is an interesting description, because it is a description that is similar but different to those that they’ve seen before, and then to set that as a goal for a search. This in itself is obviously very simple, but it is much like the way a lot of people think about design.

People think about design as imagining a world that is different, imagining how things could be different in the world, and then searching for ways to satisfy that imagined future. And so if we can do this with the evolution of a grounded language, which is building on the work of Luc Steels and Simon Kirby, then we can get into modeling not just a curiosity that’s driven by not finding things interesting at the moment, but one driven by the imagination of something that could be, of how things can be different. Which is what a lot of us think about when we think about being curious: we think about the curiosity of searching out potentials. I mean, maybe we think of cat-like curiosity, where we want to get into everything. Well, that maybe is more like diversive curiosity: there’s nothing interesting where I am, so I’m going to search everywhere else and see what’s around. But when we think about humans being curious we really think about this other type, specific curiosity, where we imagine a possibility and then that possibility becomes something that we can search for. And so that’s really what I’m working on with the software simulation side: trying to build a minimal but in some ways well-rounded model of creative systems. That is: creativity at the individual, the social and the cultural level. So trying to have all those things come together, and that’s really where I’ve been working. These are autonomous systems that work independently from humans, so what they produce is of very little value to us, but it’s of great value to the agents that are sharing it, and they determine what is valuable and what is novel. That’s just interesting to see from a modeling perspective.
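The blue-triangle example can be made concrete with a small sketch: because the vocabulary factors the world into colour words and shape words, an agent can compose a description of something never observed and set it as a goal for search. The generate-and-test “search” below is deliberately trivial, and all names are invented for the example.

```python
# Imagining with language: compose an unseen description from known words,
# then search for an artifact that satisfies it.
import random
from itertools import product

seen = {("blue", "square"), ("red", "triangle")}
colours = sorted({c for c, _ in seen})
shapes = sorted({s for _, s in seen})

# Descriptions the language can express that no agent has ever observed:
imaginable = [d for d in product(colours, shapes) if d not in seen]
goal = random.choice(imaginable)        # e.g. ("blue", "triangle")
print("imagined goal:", goal)

# A trivial generate-and-test search for an artifact matching the goal.
attempt = None
while attempt != goal:
    attempt = (random.choice(colours), random.choice(shapes))
print("produced:", attempt)
```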

More recently I’ve been getting into embodied models of curiosity, specifically trying to build curious robots; I suppose that is the easiest way of thinking about it. We also started a project called Curious Places, which is about trying to embody intelligence into an actual space, although in a lot of cases we’re building independent robotic units that are then within that space. So we developed quite a few different projects there; we developed rooms that would monitor the activity within them. We developed a curious research place, which was designed around our own seminar room here. Each of the PowerPoint presentations given in the room was uploaded to a server; the server then goes and searches for other material related to the presentations that happened over, say, a week or a month, and then produces a slide show of other material it found that it determined was interesting, based upon the material that had already been presented.

Now, the slide shows were presentations in their own right; it became a way for the room to augment the activities in that room. So that was an example of actually trying to change the room itself, such that it became an active participant in the activities and responded to the activities within the room. That’s sort of saying the whole room is an intelligent system, where the actuators were projection systems, the inputs were the computer where slides were uploaded, and the whole planning system was really just going out on the internet and bringing information back.
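The interview doesn’t describe how that system was implemented, so the following is only a hypothetical outline of such a pipeline; every function name is invented and the search step is stubbed out. The filtering step applies the same similar-but-different idea: keep candidates that overlap moderately with what has already been shown.

```python
# Hypothetical curious-room pipeline: mine keywords from uploaded
# presentations, fetch related material, keep moderately novel candidates.
from collections import Counter

def keywords(text, n=5):
    """Crude keyword extraction: the most frequent longer words."""
    words = [w.lower().strip(".,") for w in text.split() if len(w) > 4]
    return [w for w, _ in Counter(words).most_common(n)]

def search_related(terms):
    """Stub standing in for whatever web/library search the real room used."""
    return [f"document about {t}" for t in terms]

def similarity(a, b):
    """Word-overlap similarity between two snippets (Jaccard index)."""
    wa, wb = set(a.split()), set(b.split())
    return len(wa & wb) / max(len(wa | wb), 1)

presented = ["curious agents explore design spaces with novelty preferences"]
terms = keywords(" ".join(presented))
candidates = search_related(terms)
# Keep only material that is similar but different to what was presented.
slideshow = [c for c in candidates
             if 0.05 < max(similarity(c, p) for p in presented) < 0.8]
print("slideshow:", slideshow)
```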

We’ve also been playing with autonomous robotic systems; the Curious Whispers project was the most recent of those. An honors student of mine called Emma Chi developed some very nice robots that could play simple tunes, just very simple eight-note tunes, and they could listen to the tunes from each other. The real purpose of the first experiment was to look at how humans and robots interact within an environment. So the space, as a curious place, is a place you walk into where there are curious, creative, interesting robots that people can interact with. So we have these simple robots, which can play tunes, listen to tunes and compose new tunes based on what they’ve heard, and we also provide in the space a very simple synthesizer to allow people to enter into a conversation with these robots by playing tunes. So they become, in some ways, part of the robots’ society for a limited time. And that’s one of those things where we are very interested in how people understand the space they’ve walked into, how they then start interacting with the robots, how they understand what the robots are capable of doing without any real prompting from us. It’s a very playful environment, one where people come in and see three robots that are already interacting with each other, and where you’ve got a synthesizer that obviously can play notes; if you play notes, the robots stop and listen, and then maybe will close down to rehearse a new song and then open up and play variations of the songs that you played.
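One way that compose-a-variation behaviour could work is sketched below: a robot hears an eight-note tune, “rehearses” mutated variations, and plays the one its interest model scores highest. The note range, mutation scheme and curve parameters are illustrative guesses, not the actual Curious Whispers implementation.

```python
# Tune variation sketch: rehearse mutations of a heard tune and play the
# variation the robot's interest model likes best.
import math
import random

NOTES = list(range(8))                    # eight possible pitches

def interest(n, lo=0.2, hi=0.5, k=20.0):  # Wundt curve, as in earlier sketches
    s = lambda x: 1.0 / (1.0 + math.exp(-x))
    return s(k * (n - lo)) - s(k * (n - hi))

def distance(a, b):
    """Fraction of the eight positions where two tunes differ."""
    return sum(x != y for x, y in zip(a, b)) / len(a)

def novelty(tune, repertoire):
    return min(distance(tune, t) for t in repertoire)

def vary(tune):
    """Change one or two random notes: similar but different."""
    out = list(tune)
    for _ in range(random.choice([1, 2])):
        out[random.randrange(len(out))] = random.choice(NOTES)
    return out

repertoire = [[0, 2, 4, 5, 4, 2, 0, 0]]   # tunes heard so far
heard = [0, 2, 4, 5, 4, 2, 0, 0]          # the human's tune

# Rehearse: propose variations and keep the most interesting one.
candidates = [vary(heard) for _ in range(50)]
best = max(candidates, key=lambda t: interest(novelty(t, repertoire)))
print("plays:", best)
```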

Interestingly, some people came into the room and thought the power relationship was the other way round from the way they would normally imagine. So they would come in and think that they had to repeat the songs that the robots were playing, rather than introducing their own songs. And so it turned out that it was best to introduce people to the robots in isolation: one robot on its own that someone could interact with, and then you could put them into the group situation. So there was a more balanced power relationship between the person and the robot initially, and then they could get into the social situation, where they feel like they can become part of this ongoing conversation between the robots. So it’s obvious that there is an interaction between the robots, that it’s autonomous of you, but it’s also possible then to enter into that. It’s a bit like the problem of entering into a conversation at a cocktail party or something like that. You walk into a party and you’ve got to try and understand what’s going on, what the conversation between people is, and then you can join in. So you’ve got to understand the context first. And I think that’s a really interesting way of thinking about how we interact with robots and with technology.

ROBOTS: It is a conversation, more than the robot telling us stuff or us telling the robot stuff.

Rob: That’s right. And it’s about how you start that conversation, because I think that’s really the knack here. Because we’re not building humans, we’re building things that have a different perspective on the world. We can develop social conventions (which is really what a lot of user interfaces are: the social convention of master and slave), but there are other ways of communicating.

There’s a whole range of ways of communicating, and I’m particularly influenced by the work of the cybernetician Gordon Pask, who developed Conversation Theory, which has been applied in education and all sorts of different areas. I’ve been very influenced by this whole idea of having a conversation with technology, and by how we can develop systems that facilitate that conversation happening. If you build humanoid robots (there’s nothing wrong with building robots in the form of a human being) but don’t try to suggest that they have human capabilities, and instead open up the possibilities for exploring what the capabilities of that robot are and finding a middle ground where you can work together, that is where things get really exciting. But it’s also very natural for us: we are used to being surrounded by intelligences other than human and understanding what they can do. We’re masters of that in many cases; we have great relationships with many animals, and that’s really where we can look. So there are very different sorts of relationships, but the more sophisticated we can make those conversational abilities, the more successful we can be at producing technology that we can get to grips with fairly quickly. And in this respect, I think that what Luc Steels and others have done with robots learning languages through the playing of language games (I mean, it’s still a long way from being a practical way of entering into a casual conversation with a robot), the ability for a robot to learn some sort of shorthand for what an instruction might mean through a negotiated conversation, is going to be one of those things we’re going to get more and more used to. It will be a very gradual, natural change. But that’s the sort of way I can see us having a very successful relationship with technology: less about trying to hide the interface and more about trying to make the interface obvious and negotiable.
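A minimal naming game, in the spirit of the language games Luc Steels and colleagues have run on robots, is sketched below. This toy abstraction is invented here, not their implementation: two agents negotiate a shared shorthand for a few objects through repeated interactions, with no global coordination.

```python
# Minimal naming game: agents invent, adopt and prune words until they
# converge on a shared shorthand for each object.
import random

OBJECTS = ["cup", "lamp", "door"]

class Agent:
    def __init__(self):
        self.lexicon = {o: [] for o in OBJECTS}  # candidate words per object

    def speak(self, obj):
        if not self.lexicon[obj]:                # invent a word if none known
            self.lexicon[obj].append("w%04d" % random.randrange(10000))
        return self.lexicon[obj][0]

    def hear(self, obj, word):
        if word in self.lexicon[obj]:
            self.lexicon[obj] = [word]           # success: prune competitors
            return True
        self.lexicon[obj].append(word)           # failure: adopt the new word
        return False

a, b = Agent(), Agent()
for game in range(60):
    speaker, hearer = random.sample([a, b], 2)
    obj = random.choice(OBJECTS)
    word = speaker.speak(obj)
    if hearer.hear(obj, word):
        speaker.lexicon[obj] = [word]            # both settle on the winner

for o in OBJECTS:
    print(o, a.lexicon[o][:1], b.lexicon[o][:1])
# After enough games the agents typically agree on one word per object.
```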

Some people talk about a “seamful” interface: an interface that is clear, where you know where the interface is and you can see it. It’s not something that you’re trying to magically get rid of, it’s not something that’s hidden, but rather something that is obvious but familiar. And those sorts of interfaces are the ones where I think we can have a lot of success with bringing technology in. And I think there are some great examples of those sorts of things coming through. Especially in the world of toys there are some wonderful examples of things that have natural interfaces. The whole tangible computing area has a lot of wonderful stuff happening, and it is probably something that we’re going to see more of. Just as the previous generation grew up with computers, I think the current generation is growing up with intelligent hardware and growing up with robots, and that will probably make a huge difference in how we see robotics evolve in the next 20 to 30 years.

ROBOTS: How they communicate, how they have this conversation with robots, the way a computer professional has with the computer: they know that the computer has certain abilities and limitations, and they adapt to that and get the most out of this relationship with technology. I think that this is a very interesting aspect of this research: that we have a conversation, a relationship, to get the most of what we want.

Rob: Yes, exactly, I think that’s precisely the point: by having a generation grow up with the idea that they can build programmable hardware, we’re going to see an enormous increase in people understanding what that programmable hardware is capable of. And naturally that gets us into the robotics area, although it’s going to be less about the robot and much more about what we might call ubiquitous computing, or some other aspect of tangible computing, where it’s just natural to have hardware and software and work with them together. The next generation will just naturally work with them, in the same way that the current generation takes to the iPad without any fuss at all. I mean, it’s such a natural interface. Of course, they got rid of some of the more confusing aspects, such as the file system, and it’s probably going to be technology that you can negotiate with.

I mean, it’s probably going to evolve like that: we’re going to have some sort of clumsy first attempts and continue on until we can find a model where it’s easy for us to pick up this sort of robotic technology and use it for whatever purpose we want. Of course, things like the 3D printers and CNC machines that people are installing in their homes so they can build their own hardware are again becoming one of those skills, in the same way that when I grew up it was just natural to have a personal computer at home, and you just learned how to program in your spare time as a hobby and made your own games or whatever. Now it seems very natural for someone to have a 3D printer in their bedroom, to learn 3D modeling, and to be able to build whatever it is they need as they need it. And increasingly that’s coming with the ability to add a microprocessor, or even the latest Linux system, into the hardware. We’re well into the realm of building robots and all sorts of other interesting devices as we need them.

ROBOTS: And that will also develop that generation’s conversational skills with that same hardware.

Rob: Of course. I mean, I think there’s a competence there, but possibly, in terms of the conversation, the most important thing is understanding the context, understanding the capabilities of the other, the robotic other. I mean, understanding what their capabilities are and understanding how to negotiate. Now, that might be in the same way that a programmer negotiates with the computer, which is very much “I write the software and it runs on the computer”, but it could be in the sense of very naturally having an open system where there are standard ways of building software (and this is of course what I’m trying to develop here), software that is there, open, adaptable and capable of entering into a negotiated set of meanings which can be the basis of the conversation. If we can develop that sort of software base (which of course is being done in much of the work on the evolution of language), if we can develop those sorts of systems as something that anyone can just pick up and add into their piece of hardware, then we develop these sorts of open conversations. I think that’s a very exciting possibility, and one that we could see happening.

ROBOTS: Certainly this is very interesting, and we’re definitely going to try to keep up with this work. So thank you very much for being part of the podcast, and I hope to be back soon.

Rob: Thanks very much.
