Advances in Artificial Intelligence: A Primer and its Impact on Business and the Law

Today, Artificial Intelligence (AI) is no longer solely the domain of science fiction writers and Hollywood film studios. Unbeknownst to most consumers, AI has quietly made its way into many aspects of daily life, from depositing cheques through your phone to curated and targeted advertisements on social media platforms.

While many of these developments have the potential to increase our productivity and quality of life—or have already done so—our laws and the courts are playing catch-up. This article will provide a primer on AI and explore its applications in business and everyday life, as well as the potential legal issues that will arise as AI continues on its exponential growth trajectory.

What is AI?

Before exploring the legal implications of AI, it is helpful to clarify what AI is. Different sources provide different definitions, but in its simplest formulation, AI can be thought of as the ability of computers to accomplish tasks normally associated with humans acting intelligently. Most AI used today does not actually replicate or mimic human intelligence, but rather relies on a more sophisticated approach than traditional programming. In traditional computing, the programmer instructs the computer what to do in every possible scenario: the programmer supplies the intelligence, and the computer simply executes the task. In AI, the computer is taught to make decisions on its own by analyzing large data sets and drawing its own inferences and conclusions.

There are principally two types of AI: generalized AI and applied AI. Generalized AI refers to a machine or a system that can handle any task thrown at it. Applied (or narrow) AI refers to a machine or system that can perform a specific task in a manner that mimics a component (but not all) of human intelligence. While generalized AI remains elusive, numerous advances in applied AI have emerged in the past few years. One of these advances is the concept of machine learning (ML). Nvidia, a company at the forefront of ML development, describes ML as “the practice of using algorithms to parse data, learn from it, and then make a determination or prediction about something in the world. So rather than hand-coding software routines with a specific set of instructions to accomplish a particular task, the machine is “trained” using large amounts of data and algorithms that give it the ability to learn how to perform the task.”1
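
To make the contrast with traditional programming concrete, the short Python sketch below places a hand-coded rule beside a model that learns a comparable decision from labelled examples. It is purely illustrative: the lending scenario, the figures and the use of the scikit-learn library are assumptions made for demonstration and do not come from the sources cited in this article.

    from sklearn.tree import DecisionTreeClassifier

    # Traditional programming: the programmer supplies the intelligence
    # by writing out the decision logic explicitly.
    def approve_by_rule(income, debt):
        return income > 50_000 and debt / income < 0.4

    # Machine learning: the computer infers its own decision logic from data.
    # Hypothetical training examples: [income, debt] with past outcomes.
    past_applicants = [[80_000, 10_000], [30_000, 20_000],
                       [60_000, 5_000], [25_000, 15_000]]
    past_outcomes = [1, 0, 1, 0]  # 1 = repaid, 0 = defaulted

    model = DecisionTreeClassifier().fit(past_applicants, past_outcomes)

    # Both now make a decision for a new applicant, but only the second
    # "learned" its rule from the data it was trained on.
    print(approve_by_rule(55_000, 12_000))       # rule written by a person
    print(model.predict([[55_000, 12_000]])[0])  # rule inferred from data

The second approach can be retrained as new data arrives, which is the property Nvidia’s description captures when it says the machine is “trained” rather than hand-coded.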

In ML, as the system continues to complete its task, it learns from user input and becomes more and more intelligent. For example, when you accidentally type the wrong thing into a Google search bar, Google often asks if you meant to search for something else. If you click on Google’s suggestion, the system takes that as confirmation that its prediction was correct. It uses this user feedback to improve its suggestions going forward.
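
As a rough sketch of that feedback loop (a simplified Python illustration, not a description of Google’s actual algorithm), the snippet below keeps a score for each candidate suggestion, raising it when a user accepts the suggestion and lowering it when the user ignores it.

    from collections import defaultdict

    # Hypothetical suggestion scores, learned entirely from user feedback.
    scores = defaultdict(float)

    def record_feedback(query, suggestion, accepted):
        # Reward suggestions users click on; lightly penalize ones they ignore.
        scores[(query, suggestion)] += 1.0 if accepted else -0.2

    def best_suggestion(query, candidates):
        # Over time, the candidate users accept most often wins.
        return max(candidates, key=lambda s: scores[(query, s)])

    record_feedback("recieve", "receive", accepted=True)
    record_feedback("recieve", "relieve", accepted=False)
    print(best_suggestion("recieve", ["receive", "relieve"]))  # -> receive

Each click is, in effect, one more training example, which is why such a system’s predictions improve the more it is used.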

An example of machine learning: the more feedback Google’s predictive algorithm receives, the smarter it becomes at predicting.

AI in Practice

AI has also been deployed in banking, wealth management, insurance, autonomous vehicles, advertising and social media, to name a few other examples. In banking, AI is being used to make lending decisions based on vast amounts of data.2 In wealth management, automated financial advisers (sometimes called “robo advisers”) use AI to monitor news, stock prices and indicators of investor sentiment to make trades and balance portfolios.3 In insurance, AI can be used to predict the risk level of a particular customer, or to determine the expected repair cost of a vehicle involved in a collision using only a photo of the damage.4

Perhaps the best-known use of AI in consumer technology is the development of autonomous vehicles. Self-driving cars use AI to identify and categorize their surroundings, predict the movement of vehicles and other obstacles in their path, make lane position and routing decisions, and avoid collisions. These complex calculations occur thousands of times per second, allowing the vehicles to respond in real time, often faster than human reaction speeds.

Requirements for AI

Alan Turing predicted the development of AI back in 1950 in his paper “Computing Machinery and Intelligence.”5 It took another half century for AI to become commercially available. This considerable delay was due in part to a lack of computing power. AI, and ML in particular, relies on massive amounts of data, and ML algorithms need to store and crunch all of this data in real time. It wasn’t until recently that storage media and computer processors developed to a level that could sustain AI systems. Commercially available computer infrastructure can now handle certain AI tasks. For more complex or data-hungry tasks, businesses can rent computing capacity on IBM’s supercomputer, Watson.6 Furthermore, many AI-powered systems operate on cloud-based software-as-a-service models, thereby reducing the computing infrastructure required by end users.7

Legal Implications of AI

Tort

As AI is still in its infancy, the ramifications of its advancement are not yet fully understood. Nonetheless, certain jurisdictions have been proactive in developing frameworks for AI regulation.8 As the One Hundred Year Study on Artificial Intelligence report from Stanford University notes,

“as a transformative technology, AI has the potential to challenge any number of legal assumptions in the short, medium, and long term. Precisely how law and policy will adapt to advances in AI - and how AI will adapt to values reflected in law and policy - depends on a variety of social, cultural, economic, and other factors, and is likely to vary by jurisdiction.”9

One area that deserves particular attention is the self-driving car, as it presents a helpful vehicle (excuse the pun) for exploring a number of legal issues in the AI context. The use of AI in autonomous vehicles has ramifications in tort, contract, intellectual property, and insurance law, among others. In the context of personal injury, should the AI be programmed to protect the driver over a third party in an impending collision? If an injury does result from a collision with an autonomous vehicle, is the designer of the AI liable in tort?

The answer to these and similar questions will depend in part on the level of autonomy with which the vehicle operates. This autonomy can be viewed on a spectrum from full human control (hands on, eyes on) to partial human control (hands off, eyes on) to no human involvement at all (hands off, eyes off).10 Conventional driving is an example of full human control, Tesla’s Autopilot an example of partial control, and Google’s Waymo an example of no human involvement. Liability will likely depend on where on the spectrum the vehicle falls: the closer the vehicle is to the fully autonomous end, the greater the shift from driver liability to product (i.e., manufacturer) liability.

The shift from driver liability to product liability could also impact the insurance industry. As the risk and number of collisions decrease, perhaps there will be downward pressure on insurance premiums and changes to the structure of our insurance scheme as a whole. For example, will no-fault insurance make sense when autonomous vehicles collect all the information required to determine fault in a rapid and economical manner?

Intellectual Property

In the intellectual property sphere, questions arise as to the ownership of intellectual property generated by AI. In our autonomous vehicle example, consider a situation where the car’s AI learns a new method of predicting potential accidents from the data it collects while shuttling its owner around town. Assuming the new invention is patentable on its own, to whom does that IP belong? It is likely that for now, these issues will be explicitly spelled out in the contracts governing the relationship between driver and manufacturer, but legislative changes may ultimately be required to eliminate the uncertainty.

Privacy Law

Privacy concerns abound when considering the volume and source of data required by AI systems. An autonomous vehicle collects images of its surroundings containing personally identifiable information. The vehicle’s entire route history, including locations visited, is stored by the AI to improve future routing decisions. Networks of autonomous vehicles may communicate with each other to share information, adjust traffic patterns, and avoid collisions. How will the privacy of the vehicle owner and nearby pedestrians be protected? Privacy, data protection and cybersecurity regulations will need to address these and similar concerns.

Conclusion

The legal issues surrounding autonomous vehicles are just one example of the questions raised by the use of AI in the marketplace. As AI continues on its exponential growth trajectory and its applications broaden, the common law and legislation will have to evolve with it. Just as governments caught up during (or following) the industrial revolution to introduce laws and regulations governing mechanical systems, so too must they catch up during the AI revolution to introduce laws and regulations governing intelligent systems.

_________________________ 

1 See “What’s the Difference Between Artificial Intelligence, Machine Learning, and Deep Learning?,” NVIDIA, July 29, 2016

2 See “Banking Start-Ups Adopt New Tools for Lending,” The New York Times, January 19, 2015

3 See “New robo-advisor uses A.I. to take on active investing,” MoneySense, August 4, 2016

4 See “Top Issues - AI in Insurance: Hype or Reality?,” PwC, March 2016

5 See “Computing Machinery and Intelligence,” Loebner, April 20, 2004

6 See “U of T team takes second place in IBM Watson challenge,” UofT News, January 14, 2015

7 See, for example, “Uncover relevant information from contracts with Kira,” Kira Systems

8 See “Legal Aspects of Artificial Intelligence,” Kemp IT Law, November 2016

9 See “Artificial Intelligence and Life in 2030,” Stanford University, September 2016

10 See “Pathway to Driverless Cars: Proposals to support advanced driver assistance systems and automated vehicle technologies,” Centre for Connected & Autonomous Vehicles, July 2016
