AI: An automated workforce or… a very complicated calculator?


Adam McGlynn considers the legal risks posed by Artificial Intelligence in the workplace

Key Contact: Chris Aldridge

Can an algorithm be sexist?

Surely a piece of software, emotionless and programmed according to logic, would be the antithesis of discrimination?

Not so, according to Employment Associate Adam McGlynn, speaking at last week’s CIPD half-day conference on AI in the Workplace.

AI is designed to please, using the data it has been trained on to generate the outcome it thinks you want. If this data includes unfair decisions founded on biases, then the algorithm will give you more of the same.
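The dynamic described here can be sketched in a toy example (the data, groups and outcomes below are entirely hypothetical, not drawn from any real tool): a "model" that simply optimises for agreement with past decisions will faithfully reproduce whatever bias those decisions contain.

```python
# Toy illustration only: a "model" that learns to replicate historical
# hiring decisions. Hypothetical data, skewed against group "B".
from collections import defaultdict

history = [
    ("A", "hired"), ("A", "hired"), ("A", "rejected"),
    ("B", "rejected"), ("B", "rejected"), ("B", "hired"),
]

def train(records):
    """Learn, per group, the most common past outcome."""
    counts = defaultdict(lambda: defaultdict(int))
    for group, outcome in records:
        counts[group][outcome] += 1
    # Pick whichever outcome the group most often received in the past.
    return {g: max(c, key=c.get) for g, c in counts.items()}

model = train(history)
print(model)  # {'A': 'hired', 'B': 'rejected'} — past bias becomes future policy
```

Nothing in the code mentions a protected characteristic; the disadvantage emerges purely from the training data, which is why a facially neutral tool can still produce prima facie indirect discrimination.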

“AI can reflect the consensus of the present or replicate the mistakes of the past, but it struggles to envision a future of equality and inclusion,” says Adam.

“Where an AI tool endeavours to achieve a seemingly neutral objective, but has the effect of putting cohorts sharing protected characteristics at a disadvantage in the process, this can achieve the very definition of prima facie indirect discrimination.”

Uses of AI by HR teams

  1. Recruitment
  2. Onboarding
  3. Performance review
  4. Personalised learning and development platforms

As HR professionals increasingly turn to AI tools to assist in tasks like recruitment, onboarding, performance review and learning and development, the potential for unfair treatment rises.

“AI can calculate and predict how likely we are to join a union, how stressed we are at work, and even flag when we are likely to hand in our resignation. If employers act on the probability of future events, how can people’s rights be effectively safeguarded?” asks Adam.

Plus, as such tools become more sophisticated, unravelling their decision-making and maintaining transparency is likely to become more difficult.

Discrimination is just one of the legal risks presented by AI. For employers adopting AI tools to streamline recruitment, boost productivity or cut overheads, the potential for lawbreaking is clear – even as rapid technological advances leave legislators and courts struggling to keep up.

Legal Risks of Using AI in the Workplace

  1. Discrimination
  2. Inaccuracies and hallucinations
  3. Leak of proprietary company information
  4. Data breaches
  5. IP infringement

“Liability is a complex question, especially as fault can be introduced at the coding, training, implementation and usage stages, among others. However, legal authorities are slowly converging on a common-sense approach: when you use AI in your business, you retain liability for the outcomes.”

In his session, Adam reported real-life cases where AI tools had leaked confidential business information and created copyright-infringing content. In some cases, AI is simply patchy in its reliability and accuracy. From giving customers the wrong information to fabricating information outright, AI tools are imperfect – and can create negative consequences for businesses, both in service delivery and in law.

But the answer is not to run for the hills, says Adam. Businesses can manage the risks of AI while reaping the benefits by adopting an approach that applies critical reasoning to work products generated by AI.

“Maintaining meaningful human interaction with these tools is the best policy and goes beyond simply rubberstamping the results of an AI tool. We must make sure we are critically assessing the output of an AI tool for fairness and reasonableness – the overarching principles of most employment laws – to make sure we are comfortable,” says Adam.

“AI has incredible power and can empower workforces in many different ways – but it is just a very complicated calculator. Humans remain essential to using that tool.”

Adam’s Top Tips for Safely Using AI Tools in the Workplace

  1. Fully investigate AI tools before use and ascertain how they:
     - Verify outputs for accuracy
     - Make use of data entered into them
     - Safeguard against copyright infringement.
  2. Risk assess AI tools before use.
  3. Consider relevant insurance.
  4. Review contracts and terms & conditions – or call a lawyer to help you.
  5. If the business uses automated decision-making tools, ensure you are compliant with UK GDPR. Depending on how the technology is used, employees may have the right to not be subject to such decisions, and where special category data is involved, explicit consent must be sought. You may need to update the company privacy policy to set out the logic of the AI tools and how they are used.
  6. Manage employees using AI by:
     - Surveying current AI usage across the workforce.
     - Providing training on how best to use AI tools.
     - Making sure that employees understand both the risks and their responsibilities.
     - Creating a policy that clarifies how AI is used in the business, including consequences for employees who misuse AI. Contact Acuity’s Employment team for generic or bespoke policies.
     - Monitoring to make sure that employees are not becoming reliant on AI and deskilling as a result.

For advice on staying on the right side of the law when using AI tools in your workplace, contact our Employment team.



