The Law Around AI – What Do We Know?
Rapid technological advance often sees legislators and courts struggling to keep up, and AI is a prime example. The proliferation of AI-based tools offers manifold opportunities for businesses to streamline their operations and output, yet the law governing the development and use of such tools remains nascent.
The Wild West?
However, for employers adopting AI tools to streamline recruitment, boost productivity or cut overheads, the potential for falling foul of the law is clear. Businesses cannot simply shift liability onto the AI developers, and legal precedents are emerging from the Wild West of machine-generated products, services and business tools:
“A common-sense approach is slowly being developed by legal authorities, that when you use AI in business, you retain the liability for the outcomes,” says a member of the Acuity Law Employment team.
What are the risks of AI business tools?
- Inaccuracy
Large language models (LLMs) – the machine learning models, such as ChatGPT, that can generate human-like language – apply statistical probabilities derived from the data on which they were trained. In effect, the model tries to give the user the most likely continuation of the prompt it has been given, stringing together the words that most commonly appear together, as the simplified sketch below illustrates. At times, this can yield eccentric or incorrect results.
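To make this concrete, here is a deliberately simplified sketch (in Python) of what “most likely continuation” means. The prompt, the candidate words and the probabilities are all invented for illustration; a real LLM scores many thousands of candidate tokens using billions of learned parameters.

```python
# Toy illustration of next-word prediction. All values are invented;
# real LLMs compute probabilities over huge vocabularies.

prompt = "The case was decided by the"

# Hypothetical probabilities the model might assign to possible continuations.
next_word_probabilities = {
    "court": 0.42,
    "tribunal": 0.31,
    "regulator": 0.17,
    "committee": 0.10,
}

# The model outputs the statistically likeliest continuation of the prompt;
# it does not check that continuation against any source of truth.
most_likely = max(next_word_probabilities, key=next_word_probabilities.get)
print(prompt, most_likely)
```

The important point for businesses is that a plausible-sounding but false continuation can be just as “likely” as a true one, and the model will produce it with the same apparent confidence, which is where the problems described below arise.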
The accuracy of the content is also limited by the cut-off date of the data on which the model was trained, so the information it provides may not be up to date.
Most problematic of all, the predictive nature of the technology means that an LLM may fabricate, or “hallucinate”, information altogether. If that information is business critical, the consequences can be disastrous. Take the following example from the US.
Roberto Mata alleged he was “struck by a metal serving cart” on board a 2019 Avianca Airlines flight and suffered personal injuries. Mata’s lawyers, who had used ChatGPT to assist with their legal research, cited at least six cases as precedent, including Varghese v. China Southern Airlines and Shaboon v. Egypt Air. Unfortunately for Mata and his team, the court found that the cases did not exist and that the filings contained “bogus judicial decisions with bogus quotes and bogus internal citations.” A federal judge went on to consider sanctions against the lawyers.
In 2022, Air Canada’s chatbot promised passenger Jake Moffatt a discount that was not actually available: the bot assured him that he could book a full-fare flight for his grandmother’s funeral and then apply for a bereavement fare after the fact. Air Canada argued that the chatbot was a “separate legal entity that was responsible for its own actions”, but the British Columbia Civil Resolution Tribunal rejected that argument, ordering Air Canada to pay Moffatt $812.02 (£642.64) in damages and tribunal fees. The ruling established a common-sense principle: if you hand over part of your business to AI, you are responsible for what it does.
Applying human oversight and critical thinking to the output of an AI tool is essential not only to the accuracy of the immediate output but also to the maintenance of a competent workforce. Through over-reliance on AI, a workforce as a whole could lose skills, knowledge, motivation and the ability to think critically.
- Leaks of confidential information
What users of AI tools might not consider is that, in many cases, the data or queries they enter into the tool will be used to train it further and may inadvertently become public information.
In 2023, a Samsung engineer uploaded sensitive internal source code to ChatGPT to assist with a task, causing an accidental leak. Samsung swiftly banned the use of generative AI among its workers, and banks and other corporate giants, such as Amazon, have also heavily restricted or outright banned its use by employees.
Security risks become more concerning when we consider open-source AI tools (the leading models from OpenAI and Google DeepMind, by contrast, are closed-source). Because anyone has access to the code of an open-source application, AI tools can easily be created by bad actors or by well-meaning but inexperienced developers. Cyber security may, therefore, not be the primary focus of these tools, potentially leaving them more prone to cyber-attacks designed to steal or poison data.
- Intellectual Property infringement
IP laws exist to foster innovation and to ensure that those investing time and resources in creating art, literature, software and other copyrightable works are able to reap the benefit of that investment. However, AI tools draw on information from multiple sources and ingest vast amounts of material in the process.
The UK Government consulted on extending the existing exemption from copyright infringement, which allows data mining for non-commercial research purposes, to cover the training of AI applications. However, because the AI’s output could potentially be used for commercial purposes, the proposal was quickly withdrawn: it would have undermined the effort rights holders put into developing their creative works and their right to profit from them.
We see this issue playing out in the courts, as stock photo provider Getty Images has sued artificial intelligence company Stability AI Inc, accusing it of misusing more than 12 million Getty photos to train its Stable Diffusion AI image-generation system. The lawsuit, filed in Delaware federal court, follows a separate Getty case against Stability in the United Kingdom and a related class-action complaint filed by artists in California against Stability and other companies in the fast-growing field of generative AI.
For now, AI providers use their terms of service and usage policies to limit their liability as far as possible. OpenAI, for example, may use ChatGPT inputs for training but does not seek to own any output. It also alerts users that use of the material may infringe the copyright of third parties and that similar output may be generated for other users.
Can AI be an author for copyright purposes?
To qualify for copyright protection, a work must be original and must reflect the personality of its author as an expression of their free and creative choices. The author must have had substantial involvement in creating the work; merely conceiving the idea and directing someone else to execute it does not necessarily confer authorship. Currently, the law assumes that the author is a natural or legal person.
The question of whether AI can be an author has not yet been definitively tested by the courts, but the UK Supreme Court ruled in late 2023 that only a human can be the inventor of a patented invention. The UK also has a special protection for computer-generated works, under which the author is taken to be the person by whom the arrangements necessary for the creation of the work are undertaken. Arguably, this means that the owner of the AI tool itself, rather than the user, could be the ultimate owner of the copyright.
In practice, some AI tools will concede that the user owns the input, while others will say that the user does not own the output per se but is granted a licence to use it.
When we look at LLMs, we encounter the issue of AI being trained on a virtually immeasurable volume of material, much of which is subject to copyright. This creates a potential conflict between the original rights holder in the creative work, the AI platform that has mined the data for training purposes, and the user who has created something new using the copyrighted material, likely alongside other material.
- Discrimination
In 2021, researchers from New York University found that an AI model was able to determine a job applicant’s gender even where other machine learning models had attempted to strip their CVs of all gender indicators. Mouse movements and keystrokes alone can allow a trained AI to estimate probabilities about our age, race, gender, personality and even “agreeableness”. The fact is that AI knows more about us than we realise.
Employment, equality and data protection laws seek to safeguard employees, including protecting them from certain motives or decision-making processes. However, AI analytics present a significant risk of circumventing these protections by enabling employers to act on possible futures, such as the likelihood that an employee will become stressed or will resign, both of which AI can now estimate.
UK law protects individuals under the UK GDPR by giving them the right not to be subject to solely automated decisions that have legal or similarly significant effects. There are exceptions where the decision is necessary for a contract, is authorised by law, or is made with the individual’s explicit consent. Even where an exception applies, the individual generally retains the right to obtain human intervention on the part of the controller, to express his or her point of view and to contest the decision. Broadly speaking, there is a need for greater transparency on the use of AI tools for decision-making, including informing those affected that automated decision-making has taken place.
Where the decision-making involves especially sensitive or “Special Category” data (including racial or ethnic origin, religious or philosophical beliefs, trade union membership, biometric data and health data), obtaining explicit consent is essential. In practice, the need for explicit consent is difficult for AI tools to satisfy, as learned data may be difficult to unlearn. Nevertheless, transparency is one of the key principles of data protection law, so company privacy notices should be updated with “meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing for the data subject.” Of course, as technology continues to evolve and AI systems become fully autonomous or far removed from human decision-making, it will become more difficult to unravel the chain of logic that has been followed.
To protect themselves and their products, businesses using AI should apply critical reasoning to AI-generated work products as well as to internal decision-making. An overarching approach of fairness and reasonableness is a good start towards compliance with existing laws, but it is important to watch this space for bespoke laws introduced in response to particular AI-related risks.
Top tips for minimising liability when using AI business tools
- Fully investigate AI tools before use and ascertain how they:
  - verify outputs for accuracy;
  - make use of data entered into them; and
  - safeguard against copyright infringement.
- Risk assess AI tools before use and monitor trends. This includes understanding the potential discriminatory effects of the AI and ensuring that all reasonable steps to mitigate them have been taken.
- Consider relevant insurance, for example for intellectual property infringement.
- Review contracts and terms & conditions to clarify legal or factual uncertainty and who is liable financially – or call a lawyer to do so.
- If the business uses automated decision-making tools, ensure they are compliant with the UK GDPR. Depending on how the technology is used, employees may have the right not to be subject to such decisions, and where special category data is involved, explicit consent must be sought. You may need to update the company privacy policy to set out the logic of the AI tools and how they are used.
- Manage the performance and conduct of employees using AI, and survey current AI usage across the workforce.
- Provide training on how best to use AI tools, making sure that employees understand the risks, as well as their responsibilities for input and output.
- Create a policy that clarifies how AI is used in the business, including the consequences for employees who misuse AI.
- Monitor usage to make sure that employees are not becoming over-reliant on AI and deskilling as a result.
Next steps…
For more information about implementing AI tools in your business, contact our Commercial & Technology team.