Artificial Intelligence and the Law
In many ways, the evolution of artificial intelligence is exciting and inspiring. Using AI can make complex tasks simpler. It can help businesses, professionals, lawyers, and government agencies operate more efficiently. It also can provide consumers with advanced products and services. People benefit from AI every day, and it has become part of the fabric of modern life. If you came to this website through a Google search, AI defined the search results that you saw. If you were looking for a movie to watch on Netflix last night, AI offered suggestions that may have influenced your choice. If you were stuck in traffic on your commute, maybe you used the navigation software on your phone to plan a detour. That is also a form of AI.
The legal system often struggles to keep up with changes in technology. Sometimes legislatures may need to develop new laws to address certain applications of artificial intelligence. Other times, courts may need to apply existing laws to these situations. One key point to bear in mind is that AI does not create an exception to the law or exist outside the legal system. Below is an overview of some areas in which AI may raise distinctive legal questions.
Injuries and AI
If a robot harms someone, the victim cannot sue the robot as they would an ordinary person. For example, if a self-driving car strikes another vehicle, the AI operating the car cannot be held directly liable for its “negligence” as a human driver could be. Victims still have legal recourse for their injuries, though.
An area of personal injury law known as products liability provides that companies can be held liable for harm caused by their defective products. This means that a manufacturer of a robot may be held strictly liable for injuries caused by a defect in the robot. Defects may involve the manufacturing or design of the robot, or faulty instructions for its use.
Also, robots do not always operate independently. If a person using a robot did not act reasonably, or if their decision to use a robot was unreasonable, they could be held liable under a negligence theory for resulting injuries. For example, humans often still play a role in operating autonomous cars to varying extents. They may be able to override the AI in an emergency, and failing to do this may subject them to liability if an accident results.
Intellectual Property and AI
Issues at the intersection of intellectual property and AI generally involve either patents or copyrights. First, an inventor or another entity might seek a patent for an invention that incorporates artificial intelligence. This can get tricky under federal patent law, which holds that patents cannot protect abstract ideas unless they include an additional element that supplies some sort of inventive concept. Some AI concepts might not qualify for a patent under this abstractness test. That said, the U.S. Patent and Trademark Office (USPTO) has granted many patents based on technologies involving artificial intelligence. The USPTO continues to develop guidelines and solicit feedback on patents for AI technologies.
Meanwhile, federal law has established that a creator of a computer program can get a copyright for the program. This covers all of the original expression in the program. On the other hand, copyright does not cover the functional aspects of a computer program, including its algorithms, formatting, functions, and logic.
Consumer Rights and AI
Companies routinely collect data from consumers who use their services. Sometimes AI may play a direct role in collecting data, such as when people use a digital assistant like Siri. Often, AI helps companies review and use the information that they have gathered. It can process huge quantities of data more effectively than a human could, and it also has the ability to make decisions based on this data with little or no human intervention.
Concerns involving automated decision-making are relatively new, but several states have enacted laws that affect this issue. Recent state laws include the California Privacy Rights Act, the Colorado Privacy Act, the Connecticut Data Privacy Act, and the Virginia Consumer Data Protection Act. These laws require a data protection impact assessment for certain processing activities that pose a heightened risk of harm, such as processing personal information for targeted advertising or processing sensitive data. This means that a company must identify and weigh the risks and benefits of the processing, taking into account any protections that it implements to mitigate the risks.
Law Enforcement and AI
Algorithms may help police departments decide how to allocate their resources. They are only as useful as the information put into them, though. If a law enforcement agency inputs data that reflects flawed or improper practices, such as falsified police reports or planted evidence, the algorithm will produce similarly flawed results. Moreover, AI sometimes can reinforce or legitimize biases in police departments. If officers in a particular city tend to police a Hispanic neighborhood more aggressively due to personal prejudices, for example, an AI system evaluating data such as arrest rates might conclude that this is a high-crime area and encourage the police department to continue this pattern.
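To make this feedback loop concrete, the sketch below is a purely hypothetical simulation: the neighborhood names, crime rates, patrol shares, and allocation rule are all invented for illustration and do not describe any real policing system. Even though both areas are assumed to have identical underlying crime rates, the area that starts out more heavily patrolled generates more arrest records, and an allocation rule that looks only at those records keeps sending officers back to it.

```python
# Hypothetical illustration of the feedback loop described above.
# All names and numbers are invented; this is not any real system or data set.

true_crime_rate = {"Neighborhood A": 0.05, "Neighborhood B": 0.05}  # identical by assumption
patrol_share = {"Neighborhood A": 0.70, "Neighborhood B": 0.30}     # A starts out over-patrolled

for year in range(1, 6):
    # Recorded arrests depend on the (equal) underlying crime rate *and* on how
    # heavily each area is patrolled, so A generates more arrest records.
    recorded_arrests = {
        area: true_crime_rate[area] * patrol_share[area] * 10_000
        for area in patrol_share
    }
    # A naive allocation rule assigns next year's patrols in proportion to recorded
    # arrests, so the initial imbalance is never corrected: the data appear to
    # confirm that A is a "high-crime" area.
    total = sum(recorded_arrests.values())
    patrol_share = {area: n / total for area, n in recorded_arrests.items()}
    print(f"Year {year}: " + ", ".join(f"{a}: {s:.0%}" for a, s in patrol_share.items()))
```

Running this toy loop shows the patrol split staying at 70/30 year after year, even though the two neighborhoods are equally safe by construction. The output reflects where arrests were recorded, not where crime actually occurs, which is how flawed input data can appear to validate the very pattern that produced it.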
Meanwhile, courts have explored algorithms as a way to improve objectivity in sentencing. This might alleviate concerns over racial bias, which has played a well-documented role in sentencing defendants. However, some studies have suggested that biases persist in these AI systems, which has led to warnings that judges should not rely on them too heavily.
Employment Discrimination and AI
Some businesses have started to rely increasingly on AI in making decisions about their employees. These decisions may affect every part of the employment relationship, from recruitment and hiring to monitoring employees on the job to deciding whom to lay off or fire. While AI may seem inherently neutral, its use can produce unintended discriminatory effects depending on the data on which the system is trained. The federal Equal Employment Opportunity Commission has launched an AI and Algorithmic Fairness Initiative in an effort to ensure that the use of AI in employment decisions complies with civil rights laws.
For example, AI systems that monitor productivity may discriminate against people with certain disabilities or health conditions. If a system ranks employees based on keystrokes, it might give a poor ranking to an employee who is limited by arthritis. A system that evaluates employees based on the time that they spend at their workstations might not account for the breaks that a pregnant employee has received as an accommodation for her condition.
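As a purely illustrative sketch, the example below uses invented employee records and a hypothetical scoring rule. Each employee completes the same amount of work, but a metric that counts only keystrokes ranks the employee who types more slowly because of arthritis at the bottom.

```python
# Hypothetical records: each employee completed the same number of tasks,
# but one types fewer keystrokes per hour because of arthritis.
employees = [
    {"name": "Employee 1", "tasks_completed": 40, "keystrokes_per_hour": 9_000},
    {"name": "Employee 2", "tasks_completed": 40, "keystrokes_per_hour": 8_500},
    {"name": "Employee 3", "tasks_completed": 40, "keystrokes_per_hour": 4_200},  # limited by arthritis
]

# A naive "productivity" ranking that looks only at keystrokes, not at actual output.
ranked = sorted(employees, key=lambda e: e["keystrokes_per_hour"], reverse=True)

for rank, e in enumerate(ranked, start=1):
    print(f"{rank}. {e['name']}: {e['keystrokes_per_hour']} keystrokes/hr, "
          f"{e['tasks_completed']} tasks completed")
```

The disparity in the ranking comes entirely from the choice of metric rather than from any difference in the work produced, which is how a seemingly neutral monitoring system can end up disadvantaging employees with certain disabilities.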
Health Care and AI
Artificial intelligence often assists doctors with diagnosing or treating patients. This can create challenges in the area of health care regulation, overseen by the federal Food and Drug Administration. Companies that make medical devices must go through certain steps to get FDA approval. If a manufacturer of a medical device changes the device, it might need to get approval from the FDA again before putting the altered product on the market. When a medical device incorporates AI, though, machine learning software may change continuously in real time based on new data.
The FDA has acknowledged that standard regulatory frameworks may not work for these adaptive technologies. Responding to this concern, it has proposed a distinctive framework in which manufacturers submit a predetermined change control plan for pre-market review. This includes pre-specifications describing the expected and planned modifications for the device, as well as an algorithm change protocol in which the manufacturer explains what it will do to manage the risks posed by the modifications.