In our own image

In this week's edition of Time Magazine, an article entitled One Small Step traces the recent development of humanoid robots. Written by Lev Grossman, the article asks why we have such an obsession with human-shaped machines. Exactly what is behind this anthropomorphic desire to design robots that move and look like their makers? We can see a purpose for self-driving cars, robot manufacturing and even robot vacuum cleaners for the home. But robots that actually walk and behave like us? This sounds like the narcissism depicted in the welter of Hollywood feature films released in recent years, such as Chappie, I, Robot and Ex Machina.

Each carries its own coded warning about what happens when we try to create machines in our own image, and each poses fundamental questions about the inevitable problems of relationships, ethical dilemmas and the threat to humankind. Add Artificial Intelligence into the mix and the possibilities become frightening. The creation may inevitably wish to emulate, or even usurp, its creator. This is a trope that has been with us for almost as long as we have been telling stories. From the malevolent Frankenstein's Creature to the more benign Pinocchio, and the sentient Star Trek character Data, we see a recurring narrative in which the created being yearns to become more human. It shapes the discourse surrounding our technological future, and lurks in the background of theories such as the technological singularity.

So why the need to design machines that are human-shaped? The answer given by the scientists who design humanoid robots is that the human being is the perfect form. The human body has exceptional mobility in comparison with most other animals, and can negotiate just about any terrain it encounters. But this versatility also becomes the downfall when we try to create robots that emulate human movement. Such movement is extremely difficult to achieve, not least because the robot has to carry its own power source with it, a constraint that can profoundly influence its design. It is not uncommon to see humanoid robots trundling around wearing huge backpacks.

Furthermore, what we take for granted - walking, balancing, sitting down and standing up - is extremely difficult to program into the frame of a humanoid robot. Movement requires a sophisticated arrangement of servo motors and hydraulic systems, and the robot also needs to sense where it is in relation to the objects and environment around it. Building a human-like machine is a very hard proposition.
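Just how hard becomes clear if you sketch even the simplest piece of the problem: staying upright. Below is a minimal illustration in Python, assuming a crude one-dimensional inverted pendulum as a stand-in for the robot's body; the model, the controller gains and all the numeric values are illustrative assumptions, not code from any real robot. Even this toy feedback loop only balances a single joint along a single axis - a real humanoid must do the equivalent in three dimensions, for dozens of joints, while walking.

    import math

    # A toy balance loop: PID control of a simulated inverted pendulum.
    # Everything here is illustrative - real humanoid controllers are
    # vastly more complex and run alongside vision, planning and more.
    G, L, DT = 9.81, 1.0, 0.01           # gravity, body "length", 100 Hz loop
    tilt, tilt_rate = 0.05, 0.0          # start slightly off vertical (radians)
    KP, KI, KD = 60.0, 1.0, 12.0         # controller gains, tuned by trial and error
    integral = 0.0
    prev_error = 0.0 - tilt              # avoid a derivative "kick" on the first step

    for step in range(500):              # simulate five seconds
        error = 0.0 - tilt               # we want zero tilt
        integral += error * DT
        derivative = (error - prev_error) / DT
        torque = KP * error + KI * integral + KD * derivative
        # simplified dynamics: gravity tips the body over, the torque pushes back
        tilt_accel = (G / L) * math.sin(tilt) + torque
        tilt_rate += tilt_accel * DT
        tilt += tilt_rate * DT
        prev_error = error

    print(f"tilt after five seconds: {tilt:.5f} radians")

Run it and the tilt settles back towards zero - but knock the gains out of tune, or make the sensor noisy, and the simulated body falls over, which is roughly what keeps happening to real robots.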

The teams that competed in the DARPA Robotics Challenge discovered this time and time again. Designing a robot that can walk or manipulate door handles is a frustrating process that takes months of programming, trials and tests, and ultimately yet another return to the drawing board to iron out more prosaic faults that threaten to ruin the entire enterprise.

[Embedded video: robots at the DARPA Robotics Challenge falling over while attempting everyday tasks]

This video of robots falling over while attempting minor tasks such as getting out of a car or walking up stairs is hilarious. It suggests we are a long way from needing to worry about humanoid machines taking over the world. But the video should also make us think - what happens if and when scientists eventually crack these problems? When we do have machines that can walk and talk and also possess an inbuilt intelligence, what happens next?

Image by Richard Greenhill and Marie De Ryck on Wikimedia Commons

In our own image by Steve Wheeler was written in Plymouth, England and is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.

Comments

Joseph Gliddon said…
I think what happens next depends on how we treat AI if/when it becomes aware, and what rights we afford it - see my blog "What is human" http://wp.me/p1gWAk-3t

We should also consider what type of intelligence we want to create and what example our actions will set for it.
I for one would be much more relaxed about an AI developed by a university vs one that grew out of a hedge fund algorithm...
Steve Wheeler said…
Yes, we should consider what machines we want, but in the final analysis, do we actually have much choice, when these decisions are in the hands of the scientists?
Joseph Gliddon said…
I think it's a conversation that humans need to have before it does become a pressing concern. We (as in society) should have the choice over who gets to make these decisions, and although there are hands that are potentially worse than the scientists (military?), I have a different group in mind who I think should have responsibility.

Assuming that true AI has awareness, ability in more than one narrow field and the capacity to learn, then that learning should be supported and guided.
The "child of our mind" deserves to be taught the skills, information, context, social and moral underpinnings that provide the best chance for them to become a happy, successful and productive member of the society that they are joining.
I would suggest that the decisions on this upbringing should be (at least in part) in the hands of "Educators" (Primary, Secondary, FE, HE, postgrad - possibly in quick succession!).

Of course that does raise the question - if the "child of our mind" deserves all that, then shouldn't every human child have the same opportunity?
