What Happens When Robots Behave Badly
Paula Roy for Shifted News
Is a bot that orders ecstasy pills on the dark web liable for its actions? Can a self-driving car be sued for killing a person? These are some of the questions that Andres Guadamuz tries to answer in his talk on legal responsibility in the age of artificial intelligence at re:publica 18. A Senior Lecturer in Intellectual Property Law at the University of Sussex, Dr. Guadamuz researches open licensing, software protection, digital copyright, and complexity in networks.
He says the fundamental challenge is to settle on a definition of what we call artificial intelligence. For most people, artificial intelligence (AI) covers all "the big things that we don't know yet", but the most important and most interesting area is actually where and how AI enters our everyday lives, like the vacuum cleaners sweeping our living rooms. That is why, for him, AI is better understood as an autonomous agent: the "smartness" of the system is a secondary aspect when we talk about legal responsibility. Liability usually attaches at one of three levels: the product, the service, or the user. AI adds a new level of liability, which makes it hard to find the right person to sue.
But do we need a new system? Or do we have to revisit Roman law on slavery, as some researchers suggest? Guadamuz argues that our existing laws still fit most purposes in most cases. We may only have to reconsider contract law, but on the scale of international law, to tackle these emerging issues: he points out that any smart contract on a blockchain touches international law. International law would therefore need to be revisited to bring clarity into this confusing area.