Robot Morality

Artificial Intelligence (AI) is arguably the most exciting, and more recently also the most controversial, field in robotics. In the years to come, AI will outperform humans in a growing range of fields.

The ultimate version of AI would be a recreation of the human thought process: man-made software (a bot) or a machine (a robot) with our intellectual abilities, including the ability to learn, reason, use language, and formulate original ideas.

Today, AI machines are able to replicate some specific elements of intellectual ability or understanding. As a consequence, humans and (ro)bots are working more closely together than ever before. Firstly, we’ll see increasingly collaborative robots, aka “cobots”, working hand-in-hand with humans on the factory floor. Secondly, computer programs that talk like humans, aka “bots” or “chatbots”, let you accomplish a task as if you were holding a conversation with another person. The future is machine and human.

This all relates to the subject of the Technological Singularity.* The singularity is expected to arrive once our technological creations exceed the computing power, or intelligence, of the human brain.

Singularitarians say that we simply cannot fathom what such a future would be like. However, an essential question then becomes: how can we teach the concept of ethics to (ro)bots, and how do we ensure AI behaves morally? This is what is referred to as “Good AI”.

From a neuroscientist’s perspective, (ro)bots should learn the way humans develop. For example, we teach children concepts of morality before we teach them more complex ideas like algebra. Once they are able to conduct themselves appropriately in social situations, we go on to teach them language skills and things that require more complex reasoning. We plan to follow the same pattern with AI. What we decide to create is up to us.

“We can’t retrofit morality and ethics. We need to focus on them first and build them into (ro)bots’ core. The real problem of (ro)bot morality is not the (ro)bots, but us. It starts with company ethics and people’s behavior. The greater the freedom of a man or machine, the more it needs and will need moral standards.”

Here at Robotise, we work to make the field of robotics, and specifically our robots, safe to interact with. The way our robots operate ensures that our customers can collaborate with them in many ways. In addition, all data is kept secure and private, meaning you own your personal data.

Oliver Stahl

* “Technological Singularity” is a term coined by the science fiction author Vernor Vinge in 1983. “We will soon create intelligences greater than our own,” he wrote. “When this happens, human history will have reached a kind of singularity, an intellectual transition as impenetrable as the knotted space-time at the center of a black hole, and the world will pass far beyond our understanding.”