The Robot - the Human's Best Friend


Research teams in Europe, Japan and the USA are currently working on creating robots that look like people and animals. The idea is that they will be helpers in our homes, or perhaps even our friends and conversation partners. They can be used for fun and luxury when we are young or for our care and company when we are old.

Robot seal instead of pet

A news report on Japanese TV in September 2006 shows an elderly Japanese couple with a small, soft, white baby seal. The woman holds the seal on her lap while she talks about the family’s newest member. We see the couple’s photographs of previous, now departed pets; they are all dogs. We even watch the couple enjoying beer and sushi at a breakfast restaurant, with the seal sitting on the counter, following their conversation with the chef who is preparing raw fish. The restaurant does not normally allow pets inside, but an exception is made for the seal. Its fur is synthetic, and beneath the fur lie an electromechanical body and a form of artificial intelligence that allow the robot seal to have the same kind of relationship with its owners as a real pet.

The news report continues on Takanori Shibata’s computer screen. Takanori Shibata is the creator of Paro, as the robot seal is called. He is in Denmark to exhibit Paro at a robot festival in Odense. The couple in the news report have lived with the seal for more than two years as part of an experiment. Shibata translates part of what the elderly woman says to the camera: she would not take as much care of the seal if she simply regarded it as a machine. Takanori Shibata tells us that even he is surprised at the degree of emotional bonding that has taken place between the couple and the robot seal.

Some robot researchers are investing in the idea that partnerships and relationships can be created between robots and humans. These could be emotional bonds like the one we are already seeing with the robot pet. But they can also be more practical partnerships in which humans can communicate with robots and get them to do various things in the home. It is an important challenge for science to achieve a natural coexistence between the human and the robot. On the way to solving this problem there are many difficulties, both technical and ethical.

Many difficulties

When Takanori Shibata chose, at the beginning of the 1990s, to devote himself to creating a robot pet, it was because he wanted to build a robot that did not have a particular practical function, unlike, say, a vacuum-cleaner robot designed to solve a specific problem. At the same time, he had no desire to spend his energy on developing a humanoid partner robot of the type that can carry out many different tasks in the home. Takanori Shibata considered that far too difficult. And he still believes that it will be the distant future before this type of humanoid robot can carry out anything but very specialised tasks on its own.

The difficulties are therefore one of the first things to consider when you look at research into human-like cognitive robots. Nonetheless, there are many research teams in Europe, Japan and the USA that are working full-time on developing robots that will be able to communicate and play with people in a natural way. There is intensive research into everything about robot partners. This includes their external appearance, their artificial intelligence, their senses, their movement apparatus, their learning abilities, their behaviour and much more.

Why is it difficult to build a robot partner? Briefly, the difficulties are inherent in the fact that the robot will be a partner to a human being and must be able to relate to the human world. The humanoid robot must be able to register its physical environment (stairs, doors, tables, chairs) and act in an environment that, compared with that of the industrial robot, is living and unpredictable.

And not only that, it must also be able to recognise people, communicate with them and interact with them in ways that fit in with humans’ social norms. During the last few decades of research into computer technology and artificial intelligence, it has been shown that this is a far more difficult task than building a computer that can beat the world’s best chess player.

Robots with body and environment

The term artificial intelligence was first used at a conference in 1956. Since then, much research into artificial intelligence has focused on developing systems that can carry out tasks involving mathematics and calculation. The highly acclaimed robot researcher Rodney Brooks writes in his book Flesh and Machines that intelligence was long equated with the activities that highly educated male scientists enjoyed, such as playing chess, proving mathematical theorems and solving integral calculus. According to Brooks, intelligence was not associated with the tasks that every child can carry out without blinking: telling the difference between a cup and a chair, walking around on two legs, finding the way from the bedroom to the living room, and so on.

The engineering challenge is of course massive when it comes to creating the robots’ bodies, movement apparatus and sensory apparatus. But it’s probably far more difficult to develop software that allows the robot to understand the meaning of the many sensory impressions and react to them independently.

In the 80s and 90s, a lot of time was spent on getting humanoid robots to walk on two legs, writes Rodney Brooks. After ten years of secret work, Honda unveiled its humanoid robot P2 in 1997. Its ability to walk, even on stairs, was impressive. But walking was the only thing the robot could do without remote control. Its direction was controlled with a joystick, and its head and arms were remote-controlled by a person through virtual reality. So it was really only the legs that were robotic. Since the millennium, however, countless research groups all over the world have begun adding autonomy and a form of environmental understanding to the creations that will perhaps become helpers, conversation partners, therapists or even friends to human beings.

Robots in social interaction

In recent years there has been rapid progress in the research and development of practical robots. In Japan, a few robot partners can already be bought as help for work in the home (see fact box with robots). Japan is a front runner when it comes to developing service robots and robot partners. The Japanese state invests billions of kroner in an area that is new to research and where it is still very uncertain whether good results can be achieved. This is, amongst other reasons, because Japan is a society with a low birth rate and high longevity. Who will ensure that the elderly can manage physically - and perhaps even psychologically - in the home or in the care home? In Japan, robot partners are a possible answer.

Until we perhaps succeed in creating an all-round personal helper of steel and algorithms, we can study the trend represented by Takanori Shibata's robot seal, Paro. After long-term research in care homes, Shibata has been able to document that Paro has a beneficial effect on dementia sufferers, and it is now marketed as a “therapeutic robot”. But it can also benefit healthy people psychologically. In a Japanese television programme, an elderly woman appeared who had had Paro for a couple of years. She had previously been active as a teacher of Japanese tea ceremonies. In her old age she had become a little lonely, and having Paro as a pet had helped her. Takanori Shibata says Paro is built for long-lasting relationships, and his company therefore feels obliged to stock spare parts for up to 50 years.

It is not only the Japanese, however, who see opportunities in cognitive robots. In Denmark, the Ministry of Science, Technology and Development has published a technological vision on cognition and robots. Here, the growing number of older people is one of the reasons why Denmark should invest in this area. Another reason is to strengthen the “experience” economy. In September 2006, active, intelligent robot toys were the theme of Denmark’s first robot festival, whose purpose was to promote attractive, instructive and motor skills-enhancing robot toys. The organisers of the festival call themselves Robocluster.

The European robot partner

However, the biggest research project of all on cognitive robot partners is taking place in Europe. The project is called COGNIRON and is led by robot scientist Raja Chatila, who, together with his institute in Toulouse, has been on the world map of robot research since the 70s. The purpose of the project is to develop the programmes and mechanisms that will make a robot capable of being a partner in the home. Three levels of cognition need to be developed. Firstly, the robot must be able to understand and recognise things and people in a home environment. Secondly, the robot must be able to interact with people socially and understand body language and eye movements, just as it must be able to focus attention with its eyes so that humans can discern what it wants. Thirdly, the robot must be able to learn new things through observation and practice. This means that humans must be able to teach a robot new actions simply by showing it how they are done, without a programmer being involved.



The COGNIRON research project has defined three main experiments in which the project’s various cognitive systems and robot mechanisms will be tested. For non-experts, the experiments say something about the current level of robot research when it comes to social interaction. In the first experiment, “the robot’s home tour”, a robot must learn its way around a home through a human pointing at objects and saying what the various things are called. In the second experiment, “Curious robot”, the robot must ask on its own initiative whether it should pick up an object that a human has dropped – and then do so if the human answers yes. In the third experiment, the robot must show that it can lay a table after a human has demonstrated how this is done.
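To get a feeling for how simple the decision-making in such an experiment is compared with the perception it presupposes, here is a minimal sketch of the “Curious robot” loop in Python. It is purely illustrative: the helper functions detect_dropped_object, ask_yes_no and pick_up are hypothetical placeholders standing in for vision, dialogue and manipulation, and nothing here is taken from the actual COGNIRON software.

# Illustrative sketch of the decision loop in the "Curious robot" experiment.
# The three helper functions are hypothetical placeholders for the hard parts
# (vision, dialogue and manipulation); they are not part of the COGNIRON software.
from __future__ import annotations

import time


def detect_dropped_object(scene: dict) -> str | None:
    """Pretend perception: return the name of an object a human has just dropped."""
    return scene.get("dropped_object")


def ask_yes_no(question: str) -> bool:
    """Pretend dialogue: put a question to the human and return True for 'yes'."""
    return input(question + " (yes/no) ").strip().lower() == "yes"


def pick_up(obj: str) -> None:
    """Pretend manipulation: grasp the object and hand it back."""
    print(f"Picking up the {obj} and returning it.")


def curious_robot_loop(get_scene) -> None:
    """The whole decision-making part of the experiment fits in a few lines."""
    while True:
        obj = detect_dropped_object(get_scene())
        if obj and ask_yes_no(f"You dropped a {obj}. Shall I pick it up?"):
            pick_up(obj)
        time.sleep(0.5)  # look at the scene a couple of times per second

Everything difficult (recognising that something has been dropped, understanding the spoken answer, actually grasping the object) is hidden inside the three placeholders, which is precisely why these apparently trivial experiments are a meaningful test of the underlying cognitive systems.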



In other words, these are very simple actions. But even these simple actions require an unbelievable amount of understanding. Building a robot partner for humans is therefore almost like constructing a theory of knowledge. For how do we humans convert a set of different sensory impressions, for example a pointing finger and the sound of a voice, into the message that “this is a bottle”? And how does one find markers that can always be seen, so that a robot can follow a person in motion under all types of lighting? These are only two of many questions that robot researchers must answer if robots are to function effectively.



Creating the illusion of life

The social robot “Kismet” was created by Cynthia Breazeal, who studied under Rodney Brooks. Kismet consists exclusively of a head and was built to study natural, social interaction with humans “face to face”. Cynthia Breazeal developed Kismet under the influence of studies in children’s cognitive development. She said that humans find it easier to interact with, and have feelings for, a robot that reminds them of a child rather than a robot that is large and masculine.

As part of the research, Kismet has regular contact with people of all ages who come in “off the street” to spend time with it. Rodney Brooks describes one such interaction in his book Flesh and Machines. A man points at his watch and tells Kismet that he would like to show it his wristwatch. Kismet focuses its eyes on the wristwatch with a natural movement. After a short time it directs its gaze back towards the man’s eyes.

In this way, the illusion is created that Kismet understands what the man says. But at that time Kismet had no language and did not understand a word. Kismet reacts to the unconscious social cue the man gives when he puts the watch into Kismet’s field of vision. Kismet has a system for visual attention. It notices three kinds of things: moving things, things with strong colours and skin-coloured things – in other words, people. If it hasn’t seen anything in a strong colour for some time, its coded scale for “boredom” increases, and it will want to direct its gaze at something brightly coloured. If it hasn’t seen anything skin-coloured for some time, its scale for “loneliness” increases, and it wants to direct its attention towards skin-coloured things it can see. And if it has looked at the same thing for some time, it will want to find something new to look at. This, roughly speaking, is the underlying set of pulleys and cogs that makes it look, on the surface, like a situation in which two beings interact and understand each other.
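The mechanism Brooks describes can be captured in a handful of lines. The sketch below is a loose reimplementation of the idea under stated assumptions: the class name, the drive variables and the numbers are all invented for illustration, and Kismet’s actual attention system is considerably more elaborate.

# A loose sketch of a Kismet-style visual attention mechanism. All names and
# numbers are invented for illustration; Kismet's real system is far more elaborate.
from __future__ import annotations

from dataclasses import dataclass


@dataclass
class AttentionState:
    boredom: float = 0.0      # rises while nothing brightly coloured is in view
    loneliness: float = 0.0   # rises while nothing skin-coloured (a person) is in view
    habituation: float = 0.0  # rises the longer the same target is looked at
    current_target: str | None = None


def choose_target(state: AttentionState, visible: dict[str, dict]) -> str | None:
    """Pick the most salient visible target.

    `visible` maps target names to feature flags, e.g.
    {"ball": {"moving": False, "bright": True, "skin": False}}.
    """
    def salience(name: str, features: dict) -> float:
        score = 0.0
        if features.get("moving"):
            score += 1.0
        if features.get("bright"):
            score += 1.0 + state.boredom     # boredom makes colour more attractive
        if features.get("skin"):
            score += 1.0 + state.loneliness  # loneliness makes people more attractive
        if name == state.current_target:
            score -= state.habituation       # staring at the same thing gets dull
        return score

    if not visible:
        return None
    return max(visible, key=lambda name: salience(name, visible[name]))


def update_drives(state: AttentionState, visible: dict[str, dict], target: str | None) -> None:
    """Raise or reset the three drives depending on what was seen this cycle."""
    saw_bright = any(f.get("bright") for f in visible.values())
    saw_skin = any(f.get("skin") for f in visible.values())
    state.boredom = 0.0 if saw_bright else state.boredom + 0.1
    state.loneliness = 0.0 if saw_skin else state.loneliness + 0.1
    state.habituation = state.habituation + 0.1 if target == state.current_target else 0.0
    state.current_target = target

In use, such a loop would run over each frame of camera detections, calling choose_target and then update_drives, and feed the chosen target to the eye motors. The point is simply that a few drive variables and a salience score are enough to produce behaviour that onlookers read as interest and attention.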

Kismet has no understanding of language. But it can react to the tone of a voice, irrespective of language. This is called prosody and is the equivalent of a child instinctively understanding praise or admonishment from the tone of a voice alone.
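To illustrate what reacting to prosody might look like, here is a deliberately crude sketch that assumes pitch and loudness have already been extracted from the audio. The categories (prohibition, approval, soothing, attention) are of the kind reported for Kismet, but the feature names and thresholds below are invented for illustration; the real recogniser was, as reported, trained on recorded speech samples rather than hand-written rules.

# Crude illustration of classifying intent from tone of voice alone (prosody).
# The feature names and thresholds are invented for illustration.

def classify_prosody(mean_pitch: float, pitch_range: float, energy: float) -> str:
    """Guess the speaker's intent without understanding a single word.

    mean_pitch and pitch_range are in Hz; energy is a 0..1 loudness estimate.
    """
    if energy > 0.7 and mean_pitch < 200 and pitch_range < 50:
        return "prohibition"  # loud, low and flat: an admonishing "No!"
    if mean_pitch > 300 and pitch_range > 150:
        return "approval"     # high pitch with exaggerated sweeps: praise
    if energy < 0.3 and pitch_range < 80:
        return "soothing"     # quiet, gentle contour
    return "attention"        # everything else is treated as a bid for attention


# Example: a quiet, low, flat utterance is heard as soothing.
print(classify_prosody(mean_pitch=180.0, pitch_range=40.0, energy=0.2))

The point of the sketch is only that the decision can be made from the melody and loudness of the voice, without any language understanding at all.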

Even though the revelation of Kismet’s very simple rules of behaviour (simple, at least, compared with human consciousness) can seem disappointing, make no mistake: it took two researchers two and a half years just to develop the programmes that are specific to Kismet. And the effect, in terms of the impression of an inner life, is astonishing. Rodney Brooks writes in Flesh and Machines that the students and scientists who work with Kismet in the laboratory become uneasy when one of them updates Kismet’s system with new functions. Then Kismet’s social behaviour has to be interrupted, and its head turns stiffly from side to side, almost like a printer changing ink cartridges. The impression of life is so strong that even scientists who know the mechanics inside out become uncomfortable when the illusion of social interaction is broken and Kismet reveals itself to be a lifeless machine.

The psychology of personification

In 2003, Batya Friedman, Peter Kahn and Jennifer Hagman from the University of Washington analysed posts to an internet discussion forum for owners of Sony’s robot dog AIBO (see fact box with robots). 3,119 contributions from 189 people were reviewed and classified according to whether the content said anything about AIBO’s technological properties, lifelike properties, mental states, social communication or moral status.

Friedman and her colleagues’ investigations provide several interesting results. Firstly, 75 per cent of the contributions confirm the understanding of AIBO as a technological gadget. But at the same time 60 per cent of the contributions confirm the understanding of AIBO as a being with an inner mental life. Friedman and her colleagues conclude that many owners of AIBO are fully aware that AIBO is a technological gadget, but that AIBO simultaneously awakens feelings in them as if AIBO were a living being.



But the investigation also shows that only a few contributions suggested that the owners felt AIBO had moral qualities and rights. In other words, the owners relate to AIBO very much as a living being when it comes to feelings and social interaction, yet they nonetheless regard it as a thing in the sense that it has no moral status.

The fact that the robot AIBO is not regarded as a being with a moral right to care and consideration is not primarily a problem for the robot. But Friedman and her colleagues point out that it could become a problem for relationships between people. Children could in future have emotional relationships with lifelike robot pets without this being linked to any feeling that such beings merit care. The researchers believe this could become an ethical problem if the pattern becomes part of children’s psychological development and is carried over to beings that really are alive, i.e. animals and humans.

The ethics of “as if”

It’s as if the robot seal Paro is a real pet. It’s as if Kismet understands the social world. What ethical dimensions does this “as if” contain? Is it a deception, or are cognitive robot partners really just harmless three-dimensional user interfaces that function most effectively if humans see them as acting, feeling and understanding beings?

Kerstin Dautenhahn is professor of artificial intelligence at the University of Hertfordshire. She researches what it is that makes beings social, and she uses this research to develop artificial intelligence for robots that are intended to function socially. Kerstin Dautenhahn and her institute are involved in the European project COGNIRON, and she is working on developing small, social doll-robots that can be used in the treatment of people with autism.

Kerstin Dautenhahn believes that nothing can replace contact with the real natural world and with real living beings. But she believes that robot pets can be useful, for example in places where real pets are not allowed. She also believes that cognitive robot partners will be able to raise the quality of life for her and many others, because a robot partner can take care of trivial household tasks. She would then “have more time to play with my daughter in the garden”, as she puts it. In the same way, she sees prospects of robot partners helping older people to remain in their own homes for longer, since robot partners could manage the physical tasks that older people are perhaps no longer able to perform.

According to Kerstin Dautenhahn, we have to differentiate between deception and pretence. The pretence of understanding between the humanoid robot and the human being is something purely practical that is also found in other user interfaces. She mentions, for example, GPS navigation in cars, where the apparatus reads out route instructions. Here there is no doubt that this is merely a pretence at communication. The same would apply to robots – both robot pets and robot partners.

On the other hand, Kerstin Dautenhahn believes that robot designers should take care not to step over the line between pretending and deceiving. This line is crossed if scientists consciously try to conceal the fact that something is actually a robot. She believes we should be extra careful in relation to children, who are more receptive to deception because, as we know, they cannot differentiate as categorically between fantasy and reality. Dautenhahn says that robot designers should avoid creating robots that are too similar to humans. As an example, she mentions Hiroshi Ishiguro, who is researching the creation of androids - very human-like robots - at Osaka University (see fact box with robots). Ishiguro’s androids are developed for the purpose of researching the technical, social and psychological aspects of robots that are very like humans.

Robot hit-parade

• Wakamaru is a personal robot partner that has been available on the Japanese market since September 2005. The robot is developed and sold by Mitsubishi. Wakamaru can recognise and communicate with its owners (two persons) and eight other people. It can carry out orders, but it is also capable of taking the initiative and asking questions on the basis of what it has learnt about its owners’ habits. Wakamaru is designed for communication and monitoring tasks in the home and cannot yet carry out physical tasks such as cleaning. On the other hand, it understands 10,000 words. The robot costs $14,000, with a monthly service charge of $900.

• iRobi is a Korean robot for the home from the company Yujin Robot. Like the Teletubbies, it has a screen on its stomach. It can be used, amongst other things, for monitoring and entertainment. iRobi has built-in children’s songs that it can dance to.

• Kaspar is a humanoid robot in the form of a little boy. It belongs to a laboratory at the University of Hertfordshire, where it is used to research cognitive development and interaction between robots and humans.

• Kismet is a famous robot at MIT. It is used to research how robots can be made to exhibit social behaviour suitable for communication with people. Kismet is just a bust: a head whose facial features mix human-like traits with those of a pet. Kismet uses facial expressions, eye movements and voice to simulate intention and feeling in communication with humans.

• Asimo is Honda’s humanoid robot and is on the market. Asimo has a very well developed movement apparatus and can receive instructions and react in an appropriate way. The latest version of Asimo can carry out tasks at a reception desk, receive objects such as a crate of drinks, and push a trolley in front of it. Asimo’s body language follows the norms of Japanese politeness, and its artificial intelligence makes it capable of recognising faces and understanding body postures and spoken instructions. Asimo is a research product, but the robot is also used for education and at scientific events. For example, Asimo can be used to teach children and young people about road safety.

• Papero is a small R2-D2-like robot from the Japanese electronics company NEC. Papero is on the market and can be used for entertainment and simple monitoring tasks. It can recognise faces and understands 650 words.

• AIBO is Sony’s famous pet-robot and is sold by the thousand around the world. It is created in the image of a dog and can express a number of feelings. It reacts to the actions of its owner and it moves rather like a puppy. AIBO is no longer in production but will be sold while stocks last.

• QRIO is a small humanoid robot and is Sony’s successor to AIBO. QRIO has well-developed motor skills: it can withstand knocks and get back on its feet if it falls. It can recognise faces, understand instructions and learn new words. According to Sony, it can eventually take part in conversations in which it chooses to talk about topics that interest its owner.

• Paro is a robot baby seal that is marketed as a therapeutic robot. Like other robot pets on the market, it has artificial intelligence that shapes its behaviour and its responses to the user’s behaviour. Paro has well-developed technology for sensing movement. According to its developer, Takanori Shibata, Paro has a positive effect on, for example, elderly people with dementia.

• Repliee is a research product at Osaka University. It is one of the most radical attempts to make a robot resemble a human being and is, as such, the spearhead of android science, as the scientist behind the initiative, Hiroshi Ishiguro, calls the field. The purpose of the android is to find out how to prevent humanoid robots from being rejected as a result of people’s fears.

• Fumio Hara’s robot face has no name, but like Kismet it is a robot bust that can simulate emotional and communicative facial expressions. Here, too, the purpose of the design is to make the face as human-like as possible. The thought behind this is that simulating the features of the human body is necessary in order to understand what human intelligence is.

• Are you and your robot experiencing problems getting along together? Then seek help from the world’s first, and perhaps only robot psychiatrist, Dr. Joanne Pransky: http://www.robot.md/



Read more

Robot 
Robots and the law
Should we have consideration for robots?

External links

Technological vision for cognition and robots
The vision is a report prepared by a working group under the Ministry for Science, Technology and Development.

Theme site on robots
The site is linked to Danmarks Radio’s Science programme “Knowledge about”

Website for COGNIRON
COGNIRON stands for The Cognitive Robot Companion.