In many ways, the evolution of artificial intelligence is exciting and inspiring. Using AI can make complex tasks simpler. It can help businesses, professionals, lawyers, and government agencies operate more efficiently. It also can provide consumers with advanced products and services. People benefit from AI every day, and it has become part of the fabric of modern life. If you came to this website through a Google search, AI defined the search results that you saw. If you were looking for a movie to watch on Netflix last night, AI offered suggestions that may have influenced your choice. If you were stuck in traffic on your commute, maybe you used the navigation software on your phone to plan a detour. That is also a form of AI.
Errors, Biases, and AI
Advocates of AI claim that it can avoid issues like bias and human error. However, AI is not a magic bullet. People make the algorithms driving AI, and human error or bias may remain part of the process. Novel errors or biases also may arise, such as when AI is asked to make ethical choices.
The legal system often struggles to keep up with changes in technology. Sometimes legislatures may need to develop new laws to address certain applications of artificial intelligence. Other times, courts may need to apply existing laws to these situations. One key point to bear in mind is that AI does not create an exception to the law or exist outside the legal system. Below is an overview of some areas in which AI may raise distinctive legal questions.
Injuries and AI
If a robot harms someone, the victim cannot sue the robot as they would an ordinary person. For example, if a self-driving car strikes another vehicle, the AI operating the car cannot be held directly liable for its “negligence” as a human driver could be. Victims do not lack legal recourse for their injuries, though.
An area of personal injury law known as products liability provides that companies can be held liable for harm caused by their defective products. This means that a manufacturer of a robot may be held strictly liable for injuries caused by a defect in the robot. Defects may involve the manufacturing or design of the robot, or faulty instructions for its use.
Also, robots do not always operate independently. If a person using a robot did not act reasonably, or if their decision to use a robot was unreasonable, they could be held liable under a negligence theory for resulting injuries. For example, humans often still play a role in operating autonomous cars to varying extents. They may be able to override the AI in an emergency, and failing to do this may subject them to liability if an accident results.
Work Injuries and AI
If you were injured on the job, it may not matter much whether a human or a robot caused your injury. You likely would be entitled to workers’ compensation benefits regardless of the cause.
Intellectual Property and AI
Issues at the intersection of intellectual property and AI generally involve either patents or copyrights. First, an inventor or another entity might seek a patent for an invention that incorporates artificial intelligence. This can get tricky under federal patent law, which holds that patents cannot protect abstract ideas without an additional element that involves some sort of inventive concept. Some AI concepts might not qualify for a patent based on this abstractness test. That said, the U.S. Patent & Trademark Office has granted many patents based on technologies involving artificial intelligence. The USPTO continues to develop guidelines and solicit feedback on patents for AI technologies.
Meanwhile, federal law has established that a creator of a computer program can get a copyright for the program. This covers all of the original expression in the program. On the other hand, copyright does not cover the functional aspects of a computer program, including its algorithms, formatting, functions, and logic.
Copyrights and Patents for AI-Created Works
Sophisticated forms of artificial intelligence can write stories or songs and create art. Do these works get legal protection? The U.S. Copyright Office has stated that it will not register works produced by a machine or mechanical process that operates randomly or automatically without any creative input or intervention from a human author. Federal courts also have ruled that AI cannot be a named inventor on a patent.
Consumer Rights and AI
Companies routinely collect data from consumers who use their services. Sometimes AI may play a direct role in collecting data, such as when people use a digital assistant like Siri. Often, AI helps companies review and use the information that they have gathered. It can process huge quantities of data more effectively than a human could, and it also has the ability to make decisions based on this data with little or no human intervention.
Concerns involving automated decision-making are relatively new, but several states have enacted laws that affect this issue. Recent state laws include the California Privacy Rights Act, the Colorado Privacy Act, the Connecticut Data Privacy Act, and the Virginia Consumer Data Protection Act. These laws require a data protection impact assessment for certain processing activities that pose a heightened risk of harm, such as processing personal information for targeted advertising or processing sensitive data. This means that a company must identify and weigh the risks and benefits of the processing, taking into account any protections that it implements to mitigate the risks.
American Data Privacy and Protection Act
An expansive federal law known as the American Data Privacy and Protection Act has gained bipartisan support. If it takes effect, the ADPPA would require covered companies to conduct algorithm design evaluations and algorithm impact assessments, among other things. A Bureau of Privacy in the Federal Trade Commission would enforce this law.
Law Enforcement and AI
Algorithms may help police departments decide how to allocate their resources. They are only as useful as the information put into them, though. If a law enforcement agency inputs data that reflects flawed or improper practices, such as falsified police reports or planted evidence, the algorithm will produce similarly flawed results. Moreover, AI sometimes can reinforce or legitimize biases in police departments. If officers in a particular city tend to police a Hispanic neighborhood more aggressively due to personal prejudices, for example, an AI system evaluating data such as arrest rates might conclude that this is a high-crime area and encourage the police department to continue this pattern.
Meanwhile, courts have explored algorithms as a way to improve objectivity in sentencing. This might alleviate concerns over racial bias, which has played a well-documented role in sentencing defendants. However, some studies have suggested that biases remain in these AI systems, cautioning judges against relying on them too heavily.
When a judge decides whether a defendant should get bail, and how much it should be, they may use a bail algorithm that is designed to objectively assess flight risk and other key factors. The algorithm may produce a score or provide a recommendation for or against release.
Employment Discrimination and AI
Some businesses have started to rely increasingly on AI in making decisions related to their employees. These decisions may affect every part of the employment relationship, from the recruitment and hiring process to monitoring employees on the job to deciding whom to lay off or fire. While AI seems like it would be neutral, its use can cause unintended discriminatory effects depending on the data that the system is trained to use. The federal Equal Employment Opportunity Commission has launched an AI and Algorithmic Fairness Initiative in an effort to ensure that the use of AI in employment decisions complies with civil rights laws.
For example, AI systems that monitor productivity may discriminate against people with certain disabilities or health conditions. If a system ranks employees based on keystrokes, it might give a poor ranking to an employee who is limited by arthritis. A system that evaluates employees based on the time that they spend at their workstations might not account for the breaks that a pregnant employee has received as an accommodation for her condition.
Sex Discrimination at Amazon
Amazon once tried to use an AI system for hiring decisions, only to find out that it “taught itself” to discriminate against female job applicants for technical positions. Why would it do this? The model was trained to assess applicants based on patterns in resumes submitted to the company over a 10-year period. Most of these resumes had come from men.
Health Care and AI
Artificial intelligence often assists doctors with diagnosing or treating patients. This can create challenges in the area of health care regulation, overseen by the federal Food and Drug Administration. Companies that make medical devices must go through certain steps to get FDA approval. If a manufacturer of a medical device changes the device, it might need to get approval from the FDA again before putting the altered product on the market. When a medical device incorporates AI, though, machine learning software may change continuously in real time based on new data.
The FDA has acknowledged that standard regulatory frameworks may not work for these adaptive technologies. Responding to this concern, it has proposed a distinctive framework in which manufacturers submit a predetermined change control plan for pre-market review. This includes pre-specifications describing the expected and planned modifications for the device, as well as an algorithm change protocol in which the manufacturer explains what it will do to manage the risks posed by the modifications.
Medical Malpractice and AI
When a doctor fails to meet the professional standard of care, they may be liable to a patient for medical malpractice. In some cases, deciding not to use AI could indicate a failure to meet the applicable standard of care, but excessively relying on AI also could be grounds for a malpractice claim. Health care providers must weigh these decisions carefully.