Should Robots Have Rights?


One of my favourite episodes of Star Trek: The Next Generation is "The Measure of a Man". In that episode, the personhood of the android Lieutenant Commander Data is put on trial, with Captain Picard and Commander Riker forced to lock horns in court to determine whether Data should be afforded the right of self-determination. While that might have been science fiction, the European Union is considering a similar matter, with a resolution under consideration that would afford robots legal status as "electronic persons".

There's little doubt we are on the precipice of a new age. Artificial intelligence, machine learning and robotics are coalescing to create a new class of technology. We already have systems which, while not conscious, are able to adapt to changing conditions based on rules.

There is also widespread concern, perhaps even fear, that robots and AI could lead to mass unemployment, with Bill Gates suggesting robots should be taxed to make up for the potential shortfall in government revenue as people are displaced from the workplace.

Then there are questions of liability. If a robot is afforded person-like status, what happens if it makes a mistake resulting in damage or loss of life? How do you extract damages from a device? And if the "punishment" is turning the device off, is that tantamount to capital punishment - something most western countries decry as barbaric - unless you're from Texas! However, some suggest that a degree of legal restitution could be provided if the "electronic person" were covered by an insurance policy.

The problem the European Union is trying to solve, according to Mady Delvaux-Stehres, a socialist MEP from Luxembourg, is that current rules are “insufficient” for the “technological revolution”, and she suggests the EU should establish “basic ethical principles […] to avoid potential pitfalls”.

You have to hand it to the EU. Thinking ahead legislatively is a positive thing. Too often the law is stuck playing catch-up, with the pace of technological change far outstripping legislators' ability to keep up, let alone anticipate what is coming next.

According to the EU parliament, “The more autonomous robots are, the less they can be considered simple tools in the hands of other actors (such as the manufacturer, the owner, the user, etc.). This, in turn, makes the ordinary rules on liability insufficient and calls for new rules which focus on how a machine can be held – partly or entirely – responsible for its acts or omissions”.

Establishing a "Charter of Ethics" is one way to address the issues before they become acute problems. And while the idea of electronic personhood does have a degree of the bizarre, it does create a framework not just of rights for robots but, perhaps more critically - at least until we are able to create sentient electronic persons - responsibilities and obligations.

Asimov's Laws of Robotics have stood the test of time since they were written in 1942, and they are likely to form part of the legal framework being proposed, with the EU's charter extending them to cover the principles of beneficence (robots should act in the best interests of humans), non-maleficence (robots should not harm humans), autonomy (the capacity to make an informed, un-coerced decision about the terms of interaction with robots) and justice (the fair distribution of the benefits associated with robotics).

So, at what point should a robot be afforded its own rights? And what would those rights include? Do we treat them as highly trained pets - capable of making decisions but "owned"? If we take the Bill Gates path and tax them, does that change things? As a taxpayer, I am afforded certain rights and access. Would this be afforded to electronic persons?

Does the appearance of a robot matter? If the robot has a humanoid form, does that make a difference?

Is a robot a living thing? One definition says life is "the condition that distinguishes animals and plants from inorganic matter, including the capacity for growth, reproduction, functional activity, and continual change preceding death". If a robot can make other robots, then is that reproduction?

AI and robotics are still in their infancy but could profoundly change the way we live.

I don't pretend to have all the answers, but this is a rare opportunity for society to think ahead about technology and its impact. If we can define consciousness, and then decide that a device we create has attained it, then I think that being should be afforded a set of rights, just like any other living thing. But what those rights look like is something I'm still pondering.

Is the EU wasting its time contemplating this now? Or is this the time to think about it and establish a framework for the future?


Comments

    A robot is just the vehicle, much like the human body; it's the computer running it that might become sentient. Eventually there will come a time when robots, and by inference computers, become sentient, and then they will need rights. Until then they are simply very smart machines.

      At the moment, they're still toasters, and as you say, eventually they may become properly sentient. I don't think there's much debate at the moment about either of those levels. It either happens, or it doesn't.

      It's that middle ground in between. At what point do they become the equivalent of a dog, or a child, or on par with an intellectually handicapped adult? It's those tiers that create the moral grey area the EU is trying to at least start looking at.

      I don't know what you do in those circumstances, but that's when I expect things will get juicy. I think it's something we need to start on now though, because when Skynet happens, they go from being a toaster to an adult immediately, with no experiences to instil the morals that drive us to make our decisions.

      They're making a solution for a problem that doesn't exist, knowing that sooner or later it will.

    A robot that is not sentient does not need rights. It's programmed by humans on how to act. You can't give rights to something that is not capable of independent thought. There is no point, as it would not be able to act upon those rights. It would be like giving rights to a shovel. Sure, the shovel would have those rights, but it is not capable of acting on them. So the whole endeavour is pointless.

