AI Lawyer John Weaver: Government Should Introduce AI Regulation That Reflects What the Nation Would Like for the Country Once AI Is Everywhere

Artificial intelligence has been both a dream and a fear for many of us ever since 2001: A Space Odyssey, if not earlier. The fear that it will take over humankind is often entwined with high hopes that it will catapult us to as-yet-unimaginable heights of technological prosperity. It is obvious, however, that technology has a long way to go before such things become possible. Still, the pace of progress is undeniable, and it may be reasonable to define the rules of the game long before the actual game commences.

To look deeper into this complex matter, which combines questions of law, moral philosophy, physics, engineering, and psychology, lawless.tech talked with John Weaver, an associate at McLane Middleton focused on AI, self-driving vehicles, and drones, and the author of numerous articles on the legal issues surrounding emerging technologies, as well as the book Robots Are People Too: How Siri, Google Car, and Artificial Intelligence Will Force Us to Change Our Laws.

lawless.tech: Could you tell us about yourself? How did you decide to become a lawyer? Were you influenced by pop culture or by something else?

John Weaver: I was very interested in the education field in college and in my early twenties. I taught middle schoolers and high schoolers, but realized that I would get burned out if I became a career teacher, which ultimately wouldn’t have done me or the students any good. I liked the idea of being involved in education policy and thought I should either pursue a grad degree or a law degree. I got a paralegal gig in a big firm in DC, thinking that if I could handle this job, I should pursue a law degree. Although I worked with the New Hampshire AG’s office in law school on school funding litigation, once I graduated, I discovered the emerging AI and autonomous tech sector and switched lanes. It was too cool and important not to get involved.

lawless.tech: Judging by the list of your articles, AI regulation seems to be your primary field of interest. Why AI, and not space, IoT, or other fascinating things?

John Weaver: I became fascinated with AI and autonomous technology because it upends a basic assumption that underlies almost all our laws: only human beings make decisions. When there’s a car accident, a human being was driving. When a new novel becomes a hit filled with valuable intellectual property, a human being wrote the book. When a child or senior citizen suffering from dementia requires a guardian or power of attorney, the designated individual who makes decisions on his or her behalf is a human being. But self-driving cars, creative AI, and caretaking robots disrupt the legal models and rules that we expect in those situations. Many of our laws either don’t function properly because of the new technology, or our laws do not even govern the new technology because they rely on a fundamental assumption that is no longer true. IoT, space exploration, augmented and virtual reality, etc. are amazing – I do a lot of work for clients in IoT infrastructure – but they don’t require the radical reimagining of one of the most fundamental tenets of our legal system. AI does.

lawless.tech: What do AI lawyers do today, when AI is hardly developed to the extent described in sci-fi? Which cases require their unique expertise? Are such services in great demand?

John Weaver: To be clear – AI will not be as developed as it is in sci-fi for a LONG time. But the issues that I consult on now concern best practices, expected regulations, existing regulations that can be applied to AI, risk mitigation, liability concerns, etc. Currently, I would classify it as a niche practice, but one that’s growing as the technology becomes more ubiquitous and governments begin to regulate different elements of it. For example, the EU’s GDPR contains the right to be free from “a decision based solely on automated processing… which produces legal effects concerning him or her or similarly significantly affects him or her,” as well as the right for individuals to receive “meaningful information about the logic involved” in those decisions. Those requirements are quite frankly beyond the capabilities of the vast majority of AI being used today, but clients worry about how they can show they are attempting to comply in order to avoid the potential penalties GDPR imposes. That’s where I come in.
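
To make the compliance challenge Weaver describes more concrete, here is a minimal, hypothetical sketch of one way a system could surface "meaningful information about the logic involved" in an automated decision: a transparent scoring rule that logs each factor's contribution. All factor names, weights, and thresholds below are invented for illustration; they come neither from the GDPR nor from Weaver, nor from any real lending system.

```python
# Hypothetical illustration only: a transparent, linear scoring rule whose
# per-factor contributions can be shown to the person affected. Nothing here
# is drawn from a real system; weights and threshold are invented.

from dataclasses import dataclass

WEIGHTS = {"income_kusd": 0.04, "years_employed": 0.10, "missed_payments": -0.50}
THRESHOLD = 1.0  # invented approval cutoff


@dataclass
class Decision:
    approved: bool
    score: float
    explanation: dict  # factor -> contribution: the "logic involved"


def decide(applicant: dict) -> Decision:
    """Score an applicant and keep a per-factor breakdown so the
    decision can be explained (and contested) after the fact."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    score = sum(contributions.values())
    return Decision(
        approved=score >= THRESHOLD,
        score=round(score, 2),
        explanation={f: round(c, 2) for f, c in contributions.items()},
    )


if __name__ == "__main__":
    result = decide({"income_kusd": 55, "years_employed": 3, "missed_payments": 2})
    print(result.approved, result.score)  # True 1.5
    print(result.explanation)             # per-factor contributions
```

The point of the sketch is that most modern machine-learning models do not decompose this cleanly, which is exactly why Weaver notes that the GDPR's requirements exceed the capabilities of the vast majority of AI in use today.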

lawless.tech: Earlier this year, a self-driving Uber killed a woman in Arizona, and the company suspended its testing altogether. That might signify one pole, where people are afraid of robots because they don't understand how they work. And there's another, exalted pole, where people expect unrealistic things from robots, again due to a lack of understanding. Which kind of regulation is more likely to prevail, in your opinion? How can we, as humankind, overcome this lack of understanding? And, after all, how should our legal system accommodate a brave new world with thinking machines?

John Weaver: One of the things that I advocate to the government officials I interact with on this topic is to regulate early and fix it as we go. What I mean is that it is important for governments to establish a vision of how they want the world to look with this technology in widespread use before the AI sector becomes a mature industry. History shows time and again that it is easier to impose regulatory controls on young industries than on mature ones. Two vice industries, tobacco and daily fantasy sports, are contrasting examples that prove the point. In 2015, the state of New York moved to regulate daily fantasy sports, a young industry with fairly limited capital and market presence. The two largest companies, DraftKings and FanDuel, came to terms with the state on regulatory action within a year. The tobacco industry, on the other hand, with its large historical, cultural, and economic presence, continues to oppose nearly all efforts at regulation.

As soon as possible, the federal government should introduce AI regulation that reflects what the nation collectively would like for the country once AI is everywhere. One of two things will happen: either researchers and companies will develop new AI applications with an eye toward those regulations, or we'll realize the regulations do not properly reflect the reality of AI and Washington will change them. That means that as we gain a better understanding of the technology, our regulations will get better. But we have to get the regulations in place first. Without early regulation, companies create their own practices and are reluctant to accept the costs of adopting new regulatory standards thereafter, even when the new standards better reflect the nation's overall well-being.

With regard to how our legal system should change, I think we should acknowledge that our base assumption that only human beings make decisions is starting to crack, and it will continue to deteriorate as AI becomes more prominent. We should grant certain AI limited legal personhood, in the form of well-defined and limited rights and obligations, in order to make sure that its benefits are spread as widely as possible. Additionally, legislation should be passed that imposes fiduciary duties on more of the parties involved in the AI design process. Finally, resources should be spent on education at all levels to emphasize both the technological skills needed to work with AI and the business and interpersonal skills (e.g., entrepreneurship) that will be useful as AI begins assuming more jobs once held by human beings.

lawless.tech: Elon Musk is reported to fear collective artificial intelligence, that sci-fi nightmare, the 'we are the Borg' thing. How can we avoid such a scenario? Is there any sense in legally limiting the options available to developers? Is there a chance that such limitations would stifle innovation instead of protecting people? Or is it, again, just fear of the unknown?

John Weaver: Musk's fear is not entirely illogical, but it is overblown. The sci-fi nightmare is not a legitimate concern anytime soon; there's a reason my column in The Journal of Robotics, Artificial Intelligence & Law is called "Everything Is Not Terminator." Apocalyptic AI is not realistic now, although I can understand the fear that it might somehow develop without proper education and regulation. Having said that, I do not believe in limiting the options available to developers. They should have a fairly broad range to explore, experiment, and create, although all of it within a regulatory framework, as I described before. Some of those regulations should address safety and testing precautions, but they should not necessarily limit what can be tested.

lawless.tech: It's probably too early to think about robot rights; however, sooner or later this question will pop up. It's more of a philosophical question, though: when does a robot cease to be just a machine and become enough of a personality to be legally recognized as one? Obviously it's not a discrete process, and it's impossible to pinpoint the exact moment when it happens. Still, there must be some criteria for it. What are they, in your opinion?

John Weaver: I am pretty confident right now that I will not have to contend with robots legitimately seeking civil rights in my professional career. What I think we will see, however, is something akin to the animal rights movement for robots, which will want laws in place to govern how we treat robots, the minimum standard of care they are entitled to, etc. There are two primary reasons. First, and most powerfully, there will be people who interact with their robots and begin to think of them as friends or family members. They will hate the idea that other people don’t see their robots the same way. Those people will band together to lobby for basic care requirements. Second, and more philosophically important if not politically, there are already a number of people who worry about how our treatment of AI applications will influence how we treat actual people. My wife and I talk a lot about how our children interact with Alexa – Are they polite enough? Should they be saying please and thank you more? In subsequent years, some of those people will want to impose a minimum standard of robot care because they believe that we will treat people better if we are forced to treat robots better.

lawless.tech: For the sake of convenience, let's say there are robots that have personalities and robots that do not. The former are somehow self-aware, while the latter are mere machines with no consciousness. Should they have different rights? And if a robot has rights, does it also have obligations? Say, if a medical robot like Da Vinci makes a mistake during surgery and effectively kills its patient, who should be held responsible?

John Weaver: The division you're talking about will play out differently at first, at least in terms of regulations and legal requirements. We'll look at high- and low-functioning weak AI. Low-functioning weak AI includes the interactive robots that some nursing homes are experimenting with to provide social interaction to lonely senior citizens, virtual algorithmic shopping assistants on e-commerce sites, and the personal assistants we have on our phones. High-functioning weak AI will include more sophisticated programs and devices that can take part in complex human, physical, and legal interactions: robots that we trust to provide for the physical and social well-being of a senior citizen, to act as a buying or selling agent in a competitive market, or to provide well-informed and well-researched personal advice and guidance in response to our closely held goals and secrets. High-functioning AI will require regulations that extend limited legal personhood to those types of AI. Low-functioning AI will need that treatment less often, although in some cases it will still be important, as with the intellectual property created by AI like Google's Magenta art and music applications or Narrative Science's autonomous writing program.

To use Da Vinci as an example, no new liability models are necessary because Da Vinci is not autonomous. If the doctor makes a mistake, he or she is liable; if the machine itself was poorly designed or built, the manufacturer is liable. When truly autonomous surgical devices eventually arrive, liability will likely be more complicated. Doctors will still be liable for their mistakes, and the companies that design and sell the devices will still be liable for production or design flaws. But if an injury or death occurs even though the machine was acting responsibly and as it should, my position for a while has been that the law should create a separate category of liable legal person, through which the autonomous surgical device (or autonomous car, autonomous construction equipment, etc.) is itself an insurable entity, with insurance maintained by another entity and limited to that device/legal person. Where the doctor and manufacturer have made no error, liability is limited to the value of the insurance.

This creates some problems, such as whether the highly technical nature of the devices makes it more difficult for victims to recover damages. One way I have explored to prevent this is to create a payment system for liable AI similar to workers' compensation. Victims would have to satisfy a lower standard of evidence – a showing of actual injury and reasonable proof that the AI caused it. In return for easier and faster payment, the awards would be lower than what might be won in court. This lets victims recover more quickly and easily while also letting AI developers and manufacturers plan for an established loss.
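
The structure of the scheme Weaver outlines can be summarized in a short, purely illustrative sketch. The function name, the discount factor, and all dollar figures below are invented assumptions, not details from Weaver's proposal: the cap models liability limited to the insurance value, and the discount models the workers'-comp-style trade of lower awards for faster payment.

```python
# Hypothetical sketch of the liability structure described above, with all
# numbers invented. If the doctor or manufacturer erred, ordinary tort
# liability applies; otherwise recovery comes from the device's own
# insurance, capped at the policy value, with a discount for faster payout.

def recovery(damages: float, policy_value: float,
             human_error: bool, discount: float = 0.7) -> float:
    """Return what the victim recovers under the sketched scheme."""
    if human_error:
        # Doctor or manufacturer at fault: traditional recovery,
        # not capped by the device's policy.
        return damages
    # Device acted as designed: faster, discounted payout from the
    # device's insurance, limited to the policy value.
    return min(damages * discount, policy_value)


if __name__ == "__main__":
    print(recovery(1_000_000, policy_value=500_000, human_error=False))  # 500000.0
    print(recovery(1_000_000, policy_value=500_000, human_error=True))   # 1000000.0
```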

lawless.tech: What is the future like, in your opinion? In a hundred years, there might be robots mining resources from asteroids, or nanobots editing your DNA, curing cancer, and doing other seemingly magical things. How could all of that be regulated? What's the reasonable approach to dealing with this brave new world in legal terms?

John Weaver: If we’re smart and lucky, we will see AI regulation early and often. The constantly evolving regulatory landscape will give guidance to developers as to what our expectations for AI are while also being responsive to the needs of the developers as they are able to articulate which expectations are reasonable and which ones are not. I believe some of those regulations will include creating limited legal personhood for some versions of AI. Despite my reticence above about sentient robots in the near future, in 100 years sentient AI and robots might be possible, and we’ll have to deal with the legal and ethical implications then. However, technology develops so fast that you could tell me anything you listed – plus a lot of other advancements – will be real in a century, and I’d say you could be right. Interstellar travel? Sure, why not. The singularity? I can get behind that.

lawless.tech: If we assume there will be a technological singularity and humans merge with robots in some way, what will it mean for human and robot rights? Will they merge as well? Or will they bring about something yet unimaginable?

John Weaver: It's an interesting question. Jefferson famously said that the earth belongs in usufruct to the living, meaning that we own it only while we're alive and able to profit from it. The idea means both that the earth is not meant to benefit our ancestors and that we cannot expect to burden our descendants' use of it. What happens if there are no longer deceased generations, but successively older generations who are still here? Presumably, the first such generation will try to amend the law to ensure they retain all or most of the rights they had before they merged with robots: property rights, voting rights, civil rights, etc. Does their retention of their wealth and influence mean that there will be less for subsequent generations to inherit and enjoy? I think there will be new legal models, institutions, and battles over the concept. I'm not sure we'll be able to imagine the changes such a singularity will introduce until it's almost upon us.


This post originally appeared at https://lawless.tech/ai-lawyer-john-weaver-government-should-introduce-ai-regulation-that-reflects-what-the-nation-would-like-for-the-country-once-ai-is-everywhere/

lawless.tech is an online magazine devoted to covering the ongoing regulatory attempts to oversee and control the newest technologies.

Join our Telegram channel, follow us on Twitter and Facebook to explore how regulations will impact the latest technological advances.
