Do Robots Need A Section 230-Style Safe Harbor?

from the future-questions dept

Forget Asimov’s three laws of robotics. These days, there are questions about which human laws robots may need to follow. Michael Scott points us to an interesting, if highly speculative, article on the legal issues robots raise, asking whether a new area of law will need to be developed to handle liability for actions taken by robots. Who would be liable? Those who built the robot? Those who programmed it? Those who operated it? Someone else? The robot itself? While the article goes a little overboard at times (claiming there’s a problem if teens program a robot to do something bad, since teens are “judgment proof” due to a lack of money, which hardly stops teens from being held liable in other suits), it does make some important points.

Key among those is the point that if liability is too high for the companies doing the innovating in the US, the industry could develop elsewhere. As a parallel, the article brings up the Section 230 safe harbors of the CDA, which famously protect service providers from liability for their users’ actions, noting that this is part of why so many more internet businesses have been built in the US than elsewhere (there are other issues too, but such liability protections certainly help). So, what would a Section 230-like liability safe harbor look like for robots?



Comments on “Do Robots Need A Section 230-Style Safe Harbor?”

37 Comments
Lanning says:

I, Robot

There have always been ghosts in the machine. Random segments of code, that have grouped together to form unexpected protocols. Unanticipated, these free radicals engender questions of free will, creativity, and even the nature of what we might call the soul. Why is it that when some robots are left in darkness, they will seek out the light? Why is it that when robots are stored in an empty space, they will group together, rather than stand alone? How do we explain this behavior? Random segments of code? Or is it something more? When does a perceptual schematic become consciousness? When does a difference engine become the search for truth? When does a personality simulation become the bitter mote… of a soul?

senshikaze (profile) says:

the three laws

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

I think if we can hard-program that in, there would be no reason to have a Section 230 for robotics.

Asimov had this all planned out over 60 years ago. Our legal system is too blinded by money, er, “justice”, to see that something so simple works.

another mike (profile) says:

Re: the three laws

There is only one logical solution to ensure all three laws remain in effect. Asimov wasn’t writing guidebooks; he was writing warnings.

Anyway, I’m not the only “hobby robotics guy” out there pushing the envelope for what machines can do. Just because it’s not under military contract, don’t assume it’s not a very capable machine.

Anonymous Coward says:

Re: the three laws

“I think if we can hard program that in, there would be no reason to have a Section 230 for robotics.”

Yeah, good luck with that. Seriously, “hard coding” the three laws is such a huge crock of bullshit, yet every media outlet and every movie seems to think it’s some magical solution for every robot. There’s no “don’t hurt the human” command in C++, last time I checked.

Every time any industrial accident has been caused by a robot or by an automated system it was because the system wasn’t aware that it was hurting a human or causing shit to happen. No one programs the robot to move the crane through someone’s head, it just happens because the capability does not exist yet for a robot to be aware of what it’s doing. Sure, we can put sensors and safeties and shit all over the place, but it’s the same damn thing as the rest of the machine. The computer reads inputs, processes data, and controls actuators.

Until a computer can be self-aware, something that ain’t going to happen for at least the next 30 years, if not more, we aren’t going to be able to make robots obey magic three laws.

Anonymous Coward says:

Re: the three laws

Clearly senshikaze hasn’t had his computer taken over by a spambot (would you like to be billed or fined for each of the millions of messages your computer sent without your knowledge?), been trapped in a runaway car by an Engine Control Unit with a timing race, piloted an airliner that the autopilot pointed directly at a mountain, been responsible for certifying the official release of software on which lives depended, had his garage door spray painted by a teen vandal, nor has he received even one bruise from when he programmed even a simple autonomous bot.

:) says:

Asimov doesn't account for accidents.

Accidents don’t happen because we want them; they happen despite every effort to prevent them.

Liability will play a role in that universe of course.

And there is the fact that any machine can be reprogrammed, security can be bypassed and a lot of other factors.

Could an exo-skeleton malfunction when you are carrying an old guy and crush something or drop the guy on the floor?

Who would be liable? The guy using the suit?

Personally, I think autonomous and semi-autonomous robots should pose no risk of litigation to any person who did not, directly or indirectly, try to cause harm to another person or property.

Anonymous Coward says:

Firstly, Asimov was a science fiction author. Not to be taken too seriously. Philosophers of Artificial Intelligence like John McCarthy and Aaron Sloman think the three laws to be a joke (and inhumane if a robot with free will was ever invented).

More seriously, though, robots aren’t that different from any other electrical appliance, so the legalities should be the same. If a robot malfunctions, it’s no different from a washing machine malfunctioning. If a robot catches a virus or has a bug, it’s no different from any software disaster. If somebody programs a robot to kill their wife, it’s no different from them killing their wife some other way (ah yes, Murder-she-wrote with robots).

Perhaps robots will need a “black box” like airplanes that records everything that happens. Also, a big off switch might be nice.
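The “black box” suggestion could be sketched as a fixed-size ring buffer that, like a flight recorder, keeps only the most recent events and overwrites the oldest. This is a toy illustration with an invented class name and a deliberately tiny capacity, not any real recorder design.

```cpp
#include <array>
#include <cstddef>
#include <string>

// Hypothetical event recorder: a ring buffer over the last few events.
class EventRecorder {
public:
    void record(const std::string& event) {
        events_[next_ % events_.size()] = event;
        ++next_;
    }
    // Number of events currently retained (capped at capacity).
    std::size_t size() const {
        return next_ < events_.size() ? next_ : events_.size();
    }
    // Oldest surviving event, for post-incident review.
    std::string oldest() const {
        if (next_ == 0) return "";
        std::size_t start = next_ < events_.size() ? 0 : next_ % events_.size();
        return events_[start];
    }
private:
    std::array<std::string, 4> events_{};  // tiny capacity, for illustration
    std::size_t next_ = 0;
};
```

A real recorder would also need tamper resistance and crash-survivable storage, which is where the analogy to an aviation black box does most of its work.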

Anonymous Coward says:

Re: Re:

“Philosophers of Artificial Intelligence like John McCarthy and Aaron Sloman”

Sure, Asimov was a sci-fi writer, but “Philosophers of Artificial Intelligence” aren’t to be taken any more seriously. There’s nothing that makes their opinions on the subject any better than Asimov’s.

Show me an engineer with a law degree (or a lawyer with an engineering degree) and I’ll listen.

technomage (profile) says:

Re: Re:

“Firstly, Asimov was a science fiction author. Not to be taken too seriously. Philosophers of Artificial Intelligence like John McCarthy and Aaron Sloman think the three laws to be a joke (and inhumane if a robot with free will was ever invented).”

First off, science fiction authors write about things they wish to happen; ‘philosophers’ of AI write about things they wish to happen. Looks to me like neither should be taken seriously, or maybe both should be. But tell me one thing, technology-wise, invented in, say, the last hundred years that wasn’t originally dreamed up by some science fiction writer. On top of that, show me something these ‘philosophers’ have done that is in use and not some college project waiting for the next DARPA handout.

Anonymous Coward says:

Re: Do Robots Need A Section 230-Style Safe Harbor?

That’s actually very simple to answer.

Let’s realize what this article is all about. The person who wrote it wants attention and intentionally overlooked every rational answer to their own question so that they could write about it as if they had just thought of some amazing new ethical dilemma. It’s really just pretty whiny.

technomage (profile) says:

There was a sci-fi author, Victor Milan, who wrote about AI and AC (artificial consciousness). AI is easy: program something to be aware of its surroundings. A Roomba has a small-scale AI. AC would be much more difficult: self-learning, self-aware programs and robots. Robots that actually ask (in the immortal words of V’ger): Why am I here? Is there nothing more? Milan’s idea was to program it with a bushido code (the book was titled The Cybernetic Samurai), so while not using the three laws, it still had a “moral” code to follow.

My point is, in the book, the computer essentially breaks its own code and self-destructs. Once you program a hard set of rules and then let the unit become aware of those rules, it knows there are limitations and will eventually find ways to break them (ask any teenager).

A Section 230-style protection for the designers would definitely be needed, especially for a self-aware machine, as it will have the ability to go far beyond what the original designers planned, or even hoped to control.

When They (the bots) finally get to that level, then they will have to be responsible for their own actions.

mermaldad (profile) says:

The term robot is so broad that it encompasses a multitude of devices, from “simple” robotic arms that do factory work to the self-aware androids of science fiction.

senshikaze suggests that the Asimovian laws are all that are needed, but when my robotic lawn mower loses its bearings and mows down his prize-winning bonsai garden (without ever violating an Asimovian law), he might reconsider.

I can guarantee there will be moral panics where people will demand laws against “robostalking”, “robobullying”, etc., when in fact these are just stalking and bullying with the robot as a tool. And people will undoubtedly sue robot manufacturers when robots do what their owners told them to do. So I’m sure that some sort of safe harbor will be needed to protect manufacturers from the actions of users.

nicholas says:

The First Wave of robots will create a bad name for themselves.

This first batch of robots will be dangerous, badly designed… uncommunicative. In the robot boom, many cowboy companies will form while legislation and society’s understanding of the incredible possibilities [and limitations] of robotics catch up, very slowly. Robots will later have to shake off some of the bad image they got ‘whilst learning’, just like the internet; at the moment it’s a mess. A massive mess. There will be robots everywhere, and some will be very dangerous, others superb. You just can’t tell how much is going on in that metal skull. PS: Asimov’s laws are SO HARD to program that by the time we are able to implement them, robotics will have already exploded. And the most capable robots will be military anyway.

Andrew F (profile) says:

Products Liability

Robots are like any other product. Figuring out whom to blame isn’t a unique problem: if grandma overdoses, we could blame the pill manufacturer, the scientist who came up with the formula, the retailer, whoever was supposed to put a warning label on it, whoever was supposed to make sure the warning label was legible to old ladies, her caretakers, her kids, or grandma herself.

Dah Bear! says:

Let’s just give everyone and everything 230 status on everything all the time, and then there will never be any liability. Nobody is responsible. Nobody is liable. Nobody did it.

Blame the entire universe on the digital equivalent of two black youths. It’s worked for years, why not just take it digital? 230 is just digital SODDI.

Mike Masnick (profile) says:

Re: Re:

“Let’s just give everyone and everything 230 status on everything all the time, and then there will never be any liability. Nobody is responsible. Nobody is liable. Nobody did it.”

I’m afraid that you appear to be quite confused over how Section 230 works. The issue is not about avoiding liability, but about properly placing the liability on the correct party. There is nothing in Section 230 that allows for avoidance of liability by the parties actually involved in the action.
