From the moment the Jetsons’ robot maid, Rosie, smashed a pineapple upside-down cake over Mr. Spacely’s head, I’ve looked forward to the day when humanoid housekeepers would inhabit our homes. But interacting with humans is harder work than it seems, so we can forgive science for taking its time to develop artificial intelligence.
One of the most complicated aspects of artificial intelligence is getting machines to understand what we want. Language would seem to be the best way to direct a robot, but the truth is we’re not even sure how humans understand language.
Machines can learn language to some extent, but it is harder to program the context and experience we rely on to understand not only language, but also the things we encounter.
A program can enable a computer to see a picture of a bottle and identify what the picture shows. But Christian Smith, an assistant professor of computer science at KTH Royal Institute of Technology, says that it’s a “completely different thing to see a picture and understand what you can do with it.
“I can program a robot to pick up a bottle of water and pour water into a glass. But it might not be able to use that knowledge to pour gasoline from a container into a car. It’s basically the same kind of action, but we cannot yet get robots to generalize those kinds of things.”
The fact is, it won’t be able to pour from just any bottle either. A robot can only be programmed to pour from a specific bottle, which is fine if you want all the drinks in your home decanted from the same standard bottle (just be careful your robot’s not serving bourbon in place of your morning orange juice).
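To make that concrete, here is a minimal, invented sketch of why such a skill doesn’t transfer. Nothing below comes from Smith’s work; every name and number is hypothetical. The point is simply that the pour routine is keyed to one specific container’s geometry, so a new container (same kind of action, different object) is an outright failure rather than a near miss:

```python
from dataclasses import dataclass

@dataclass
class PourSkill:
    """A pour 'skill' hard-coded to one specific container.

    Every parameter is tied to the geometry of a single known
    bottle; none of it transfers to a new container.
    """
    object_id: str         # the one container this skill knows
    grasp_height_m: float  # where to grip, measured on THIS bottle
    tilt_angle_deg: float  # pour angle tuned for THIS bottle's spout
    flow_rate_ml_s: float  # calibrated for THIS bottle's neck width

    def pour(self, target_object: str, amount_ml: float) -> None:
        if target_object != self.object_id:
            # No notion of "bottle-like things": an unknown
            # container simply isn't covered by the program.
            raise ValueError(
                f"Skill only defined for {self.object_id!r}; "
                f"cannot generalize to {target_object!r}"
            )
        duration = amount_ml / self.flow_rate_ml_s
        print(f"Grasp at {self.grasp_height_m} m, "
              f"tilt {self.tilt_angle_deg} deg for {duration:.1f} s")

# Works for the bottle it was programmed for...
skill = PourSkill("water_bottle_01", 0.12, 95.0, 40.0)
skill.pour("water_bottle_01", 200)

# ...but the "same kind of action" on a new container fails outright.
try:
    skill.pour("gasoline_can_01", 200)
except ValueError as err:
    print(err)
```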
And while there are language-processing dialogue systems for computers, these do not necessarily work well in interactions between humans and robots.
How hard can it be? Smith points to the challenge a machine faces in understanding context. Take, for example, the demonstratives “this” and “that”. You’re sitting in the kitchen and there are two glasses on the table. “I have this glass, and over there we have that glass,” Smith says. “But geometrically speaking, where is the boundary line where a glass stops being this glass and starts becoming that glass? If I tell you to pick up that glass, which glass is that?”
Now, imagine you’re sitting at a table with another person. There are two glasses in front of you, at different distances, and you’re asked to pass “that glass”.
The one closest to you will be “this glass”, and the one farther away will be “that glass”, even though they’re both “that glass” to the person on the other side of the table.
“The glass that is that glass in one context becomes this in another context. This and that will point to completely different glasses just because of our spatial context.”
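As a toy illustration (my own assumption-laden sketch, not anything Smith describes), one naive rule a robot might use is to resolve “this” to the candidate object nearest the speaker and “that” to the one farthest away. The sketch below shows how the very same glasses swap labels when the speaker’s position changes, and its comments note everything the rule ignores:

```python
import math

def resolve_demonstrative(word, speaker_pos, objects):
    """Naive demonstrative resolution: 'this' -> object nearest
    the speaker, 'that' -> object farthest from the speaker.

    objects: mapping of name -> (x, y) position in metres.
    This ignores gaze, gesture, and dialogue history, which is
    exactly why real reference resolution is so hard.
    """
    by_distance = sorted(
        objects,
        key=lambda name: math.dist(speaker_pos, objects[name]),
    )
    return by_distance[0] if word == "this" else by_distance[-1]

glasses = {"glass_A": (0.3, 0.0), "glass_B": (1.2, 0.0)}

# From your side of the table, glass_A is "this" and glass_B is "that".
print(resolve_demonstrative("this", (0.0, 0.0), glasses))  # glass_A
print(resolve_demonstrative("that", (0.0, 0.0), glasses))  # glass_B

# From the other side, the same glasses swap labels.
print(resolve_demonstrative("this", (1.5, 0.0), glasses))  # glass_B
print(resolve_demonstrative("that", (1.5, 0.0), glasses))  # glass_A
```

Even this crude rule can’t capture the case above where both glasses are “that glass” to the person across the table; distance alone isn’t enough.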
Sounds like the premise for an Abbott and Costello routine, right?
How does that person know which glass you’re referring to? Usually we reach agreement with the person sitting at the table with us. Eye movements, gestures and body language help two people agree on which glass they’re talking about. Though I’ve been in situations with humans who have normally functioning brains (at least I assume they do) and still got it all mixed up, so I can’t help but imagine how difficult this could be for a robot.
If nothing else, it is becoming increasingly clear to me that we are a long way from having humanoid robot bartenders.
David Callahan
Watch a full discussion of the challenges and issues around artificial intelligence on Crosstalks TV, featuring Smith along with Jürgen Schmidhuber, Professor of Artificial Intelligence and Scientific Director at the Swiss AI Lab IDSIA; Theo Kanter, Professor of Computer Science at the Department of Computer and Systems Sciences, Stockholm University; and Kristina Nilsson Björkenstam, PhD in Computational Linguistics, Stockholm University.
http://talks.crosstalks.tv/the-promise-and-threat-of-artificial
Crosstalks is an academic web talk show where recognized researchers from two of Sweden’s top universities, KTH Royal Institute of Technology and Stockholm University, discuss global topics live with viewers worldwide. It is an international academic forum where the brightest minds share knowledge and insights on the basis of leading research.
When we make a computer that thinks (after we figure out what thinking is), it might not behave like us humans, IMHO. Humans evolved intelligence while dealing with problems computers don’t have, among them gathering food, avoiding predators, finding a mate, and raising children. These had a profound effect on how we think. The Turing test would be passed by a simulated human, but we probably won’t create one of those. Its thought processes will most likely be different from ours. We will be able to recognise it as intelligent, but it won’t be human.