Lab Innovation
01/22/2025 | Digital Innovation
Putting aside for a moment the complex algorithms, the Cartesian-style co-ordinates and the way sensor data is amassed and analysed, there is a far more discernible, not to mention visual, way of recognizing the progress we have made since the days of W. Grey Walter’s early robots. Unlike those plodding, tortoise-shaped creatures, reminiscent of animals that were evolving as far back as Pangaea, today’s robots are, at a glance at least, hard to distinguish from the most active, mobile and, importantly, intuitive of quadrupeds. Or put more simply: if it looks like a pet we can relate to, we trust it.
The tortoises in question were Elmer and Elsie, who were (quite literally) the brainchildren of the American-born neuroscientist, who studied the brain’s electrical activity at the Burden Neurological Institute in the UK and created battery-powered robots to test his theory that a minimum number of brain cells can control complex behaviour and choice. Shortly after the war, he created the pair: basic, slow-moving, dome-shaped machines designed to explore their environment and react to it, aided by sight and touch in the form of a rotating photoelectric cell and a reactive external contact switch.
Anyone who has seen the video report by Cleo Abram of the tech show Huge If True will note the way she infuses her account of the four-legged 2022 AIRA challenge winner’s many advances by describing it as cute and durable and confessing to being unable to resist the urge to pet it. And in an interview, MIT Media Lab research specialist Kate Darling spoke about her book, in which she argues that it is far more useful to turn to our relationship with animals, rather than with other humans, in order to better understand robot-human interactions.
“We’re constantly subconsciously comparing robots to humans, and artificial intelligence to human intelligence,” she told Behavioural Scientist. “That’s never made a lot of sense to me, given that artificial intelligence works so differently from our own. It’s also made way more sense to me to look at the animal analogy because it changes so many conversations, and we’ve used animals as a partner in what we’re trying to achieve because their skillsets are so different [to ours].”
“It always bothered me that we are limiting ourselves and falling into this technological determinism that robots can and should replace people, and I just feel like animals are such a salient analogy that everyone gets. [An animal] is also this autonomous thing that can sense, think, make decisions, and learn that we’ve dealt with previously.”
But she did recognise cultural issues in anthropomorphizing animals and robots in similar ways, particularly in Western society, where there is a strong divide between things that “are alive and things that aren’t alive”, she said. “Then there are other cultures, like Japanese culture, with its long history of Shintoism, which don’t have such a strict divide. In Japan, people are more willing to view a robot as something fluid. One of the things that I discovered while researching the book is that we used to do this to animals too. Before pets were really a big deal, we acknowledged that animals were alive, but we thought it was silly to have emotions for them or to develop attachments or, God forbid, give them any rights. I wonder if history is repeating itself in that we’re starting to see that people treat robots like living things but we still view them as silly.”
Spot is a case in point: an uncannily realistic canine-bot typical of a trend that uses the natural world as inspiration. Human engineers increasingly look to living systems for clues to good design, given that they are so efficient and adaptable to new situations and conditions. Robot designers want their creations to be similarly capable. After all, the fundamental constraints that nature has been working with over billions of years still apply, whatever the purpose of the robots we create. This means that we create robots suited for a world designed by humans for humans.
As for those that look like humans, they are generally welcomed because of an innate belief that only humans strive to prove their competence, an intrinsic notion referred to as effectance motivation. Add to that confirmation bias, whereby we look for evidence to support our expectations, and it is no surprise we believe anything that looks like us should function well, notwithstanding the counter view that sees such robots as a threat to job security and even human identity. Leading robotics professor Masahiro Mori at the Tokyo Institute of Technology coined the term “uncanny valley” after he noticed an abrupt shift in human attitudes toward robots as anthropomorphism efforts accelerated.
While people responded warmly at first, as more humanlike features were added the response shifted from excitement to revulsion, with the robots increasingly viewed as a threat. Philosophy professor Mark Coeckelbergh, of the University of Twente in the Netherlands, said: “Insofar as robots are already part of us, we trust them as we are already related to them.
“As we are learning and as we are developing skills for dealing with these new entities, trust grows. In this sense, trusting robots is not science fiction but is already happening, and rational calculation is only one interpretation of how one’s relation to the technology takes shape.”