
    Harnessing the Power of AI: Balancing Innovation & Legal Protection

    Post by Tahir Khan
    October 18, 2023

    Introduction

    Digital innovations and advances in AI have produced a range of novel talent identification and assessment tools. Many of these technologies promise to help organisations find the right person for the right job, and to screen out unsuitable candidates, faster and cheaper than ever before.

    These tools put unprecedented power in the hands of organisations to pursue data-based human capital decisions. They also have the potential to democratise feedback, giving millions of job candidates data-driven insights on their strengths, development needs, and potential career and organisational fit. We have seen rapid growth (and corresponding venture capital investment) in game-based assessments, bots for scraping social media postings, linguistic analysis of candidates’ writing samples, and video-based interviews that use algorithms to analyse speech content, tone of voice, emotional states, nonverbal behaviours, and temperamental cues. Over the years, AI has been heavily used in candidate profiling and automated decision-making, typically powered by machine learning. These are predominantly algorithmic or computational HR and management tools: they learn directly from data without relying on a predetermined equation as a model, and the algorithms adaptively improve their performance as the number of available samples increases. Technological advancements, coupled with a desire to optimise recruitment processes and to target higher-quality candidates, have led to an increase in the use of artificial intelligence by employers during the recruitment process.

    AI software is increasingly becoming mainstream and features in at least one stage of the recruitment process at many global organisations, including Vodafone, McDonald’s, and Unilever; at face value it is relatively easy to see its benefits. Those benefits are apparent from the experience of Unilever, which in 2019 reported saving 100,000 hours of interviewing time and nearly £1 million per year.

    While these novel tools are disrupting the recruitment and assessment space, they leave many as-yet-unanswered questions about their accuracy and the ethical, legal, and privacy implications they introduce. Debate over the use of AI in the workplace has occasionally grabbed headlines over the years, yet the technology has attracted little regulatory scrutiny. During Covid-19 the use of AI software accelerated into the mainstream, and it has since featured in work allocation programs, recruitment and hiring drives, and even in dismissal and management decision-making.

    During lockdown the benefits of incorporating AI were significant, providing many organisations with an undeniable lifeline by speeding up HR processes and saving time and expense. Unfortunately, there is far less information about the new generation of talent tools increasingly used in pre-hire assessment. One must consider whether these tools are scientifically validated; where they are not, they may fail to control for potentially discriminatory adverse impact, inviting blind reliance by employers.

    Technology is indeed having an impact on employment law in the UK, particularly in data protection, privacy, discrimination, and employee rights. The key areas where AI intersects with employment law include the following:

    Algorithmic Discrimination

    Direct Discrimination: AI algorithms must not be programmed or trained to discriminate against individuals based on protected characteristics such as race, gender, age, disability, religion, or sexual orientation. Care must be taken to ensure that the AI system does not result in unfair treatment or biased decisions. An example of the danger of biases embedded within AI software is a case involving Uber, which was subject to an employment tribunal claim brought by one of its former UK drivers on the basis that its facial recognition software did not work as well for people of colour and resulted in the claimant’s account being wrongly deactivated. Another example occurred in 2018, when a retailer scrapped an algorithm used for recruitment after it was discovered that the machine learning system showed a preference for male candidates; this bias was inadvertently introduced because the training data consisted primarily of male CVs. Similarly, in 2021, Facebook faced accusations of discrimination in its job advertisements after an experiment revealed that certain job ads were shown predominantly to one sex.

    AI was at the heart of a legal dispute between the make-up brand MAC (a subsidiary of Estée Lauder) and three of its former employees. In June 2020, as part of a redundancy exercise, MAC used software provided by the recruiting platform HireVue to conduct video interviews. Though not a recruitment exercise, the case serves as a cautionary tale about the adoption of AI tools. The HireVue software analysed the interviewees’ answers, language and facial expressions using an algorithm, and interview scores were considered alongside sales figures and employment records. Three employees appealed the decision to make them redundant, suspecting that the HireVue interview had been their downfall, and took legal action against Estée Lauder. According to the employees, they had not been informed of the nature of the redundancy assessment, and no explanation of the methodology behind the algorithmic evaluation was provided. An out-of-court settlement was reached earlier this year. The employees’ claims touch on the crux of the problem with AI applications: the lack of transparency around how they perform. Commercial confidentiality is often cited as the basis for withholding information, but there is a clear tension between this reticence and employers’ obligations under the Equality Act 2010 regime. Where employers cannot explain how and why decisions have been made, there is a risk of breaching employment legislation. Ultimately, employers are held responsible for the decisions made, not the technology.

    Indirect Discrimination

    Indirect discrimination can occur when an AI algorithm disproportionately disadvantages individuals with certain protected characteristics. It is crucial to regularly test and monitor AI systems for potential biases and take corrective measures to address any identified issues.
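
    As a concrete illustration, one widely used heuristic for detecting this kind of disparity in selection outcomes is the “four-fifths rule”: if one group’s selection rate falls below 80% of the highest group’s rate, the result is conventionally flagged for review. The Python sketch below applies the check to hypothetical screening data; the rule is a US-derived rule of thumb, UK law sets no fixed numerical threshold, and the group labels and figures here are invented, so this is illustrative only.

        # Illustrative adverse impact check using the "four-fifths rule" heuristic.
        # Data and group labels are hypothetical, not drawn from any real system.

        from collections import Counter

        def selection_rates(outcomes):
            """Selection rate per group from (group, was_selected) pairs."""
            totals, selected = Counter(), Counter()
            for group, was_selected in outcomes:
                totals[group] += 1
                if was_selected:
                    selected[group] += 1
            return {g: selected[g] / totals[g] for g in totals}

        def adverse_impact_ratios(rates):
            """Each group's selection rate relative to the highest rate.
            Ratios below 0.8 are conventionally flagged for review."""
            best = max(rates.values())
            return {g: r / best for g, r in rates.items()}

        # Hypothetical outcomes from an AI screening stage.
        outcomes = ([("A", True)] * 60 + [("A", False)] * 40
                    + [("B", True)] * 35 + [("B", False)] * 65)

        rates = selection_rates(outcomes)
        for group, ratio in adverse_impact_ratios(rates).items():
            flag = "REVIEW" if ratio < 0.8 else "ok"
            print(f"group {group}: rate={rates[group]:.2f}, ratio={ratio:.2f} [{flag}]")

    On this invented data, group B’s ratio comes out at roughly 0.58, well below the 0.8 threshold, which is exactly the kind of result that should trigger further investigation and corrective measures.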

    Training Data Bias

    AI algorithms learn from historical data, which may reflect biases present in past hiring decisions. If the training data is biased, it can perpetuate discriminatory outcomes. Employers should carefully select and pre-process training data to minimise bias and ensure fair representation.
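
    By way of illustration, the sketch below (using pandas, with hypothetical field names such as gender and hired) shows two simple pre-processing checks, group representation and per-group outcome rates, together with a basic reweighting step similar in spirit to the “reweighing” technique from the fairness literature.

        # Illustrative pre-processing checks on historical hiring data.
        # Field names ("gender", "hired") and the data itself are hypothetical.

        import pandas as pd

        df = pd.DataFrame({
            "gender": ["M"] * 70 + ["F"] * 30,
            "hired":  [1] * 40 + [0] * 30 + [1] * 10 + [0] * 20,
        })

        # Check 1: is any group under-represented in the training data?
        print(df["gender"].value_counts(normalize=True))

        # Check 2: do positive outcomes skew towards one group?
        print(df.groupby("gender")["hired"].mean())

        # Simple mitigation: weight rows so each (group, outcome) cell
        # contributes equally, weakening the correlation a model could
        # learn between the protected attribute and the label.
        cell_counts = df.groupby(["gender", "hired"]).size()
        df["weight"] = df.apply(
            lambda row: 1.0 / cell_counts[(row["gender"], row["hired"])], axis=1
        )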

    Data Protection

    At the start of any AI project, employers should consider whether a data protection impact assessment (DPIA) is required. A DPIA must be carried out where a type of processing is likely to result in a high risk to the rights and freedoms of individuals; it involves identifying privacy risks and considering what is necessary and proportionate. If algorithms are used, there should be transparency about how they are applied, in order to demonstrate accountability.

    Lawful Basis

    Employers must establish a lawful basis for processing personal data under the GDPR. Consent, legitimate interests, and contractual necessity are some of the potential lawful bases, depending on the circumstances. 

    Purpose Limitation

    Personal data collected during the recruitment process should only be used for the intended purposes. It is essential to clearly communicate these purposes to applicants and avoid using the data for unrelated activities.

    Data Minimisation

    Employers should collect and process only the necessary personal data required for the recruitment process. Unnecessary or excessive data collection can raise compliance issues. 
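
    In practice, data minimisation can be enforced at the point of collection, as in the minimal sketch below. The whitelist of required fields is hypothetical; which fields are genuinely “necessary” for a given recruitment process is a legal judgment, not a technical one.

        # Illustrative data minimisation at the point of collection.
        # REQUIRED_FIELDS is a hypothetical whitelist for a screening stage.

        REQUIRED_FIELDS = {"name", "email", "cv_text", "right_to_work"}

        def minimise(applicant_record: dict) -> dict:
            """Retain only the fields the recruitment process actually needs."""
            return {k: v for k, v in applicant_record.items() if k in REQUIRED_FIELDS}

        raw = {
            "name": "A. Candidate",
            "email": "a.candidate@example.com",
            "cv_text": "...",
            "right_to_work": True,
            "date_of_birth": "1990-01-01",  # not needed for screening: dropped
            "marital_status": "single",     # not needed: dropped
        }
        print(minimise(raw))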

    Intellectual Property and Ownership

    If employers use third-party AI systems or algorithms in their hiring processes, they must consider the ownership and licensing of the AI technology. Understanding the terms and conditions of usage, intellectual property rights, and potential data sharing is important to avoid any legal disputes or infringements.

    Human-Centric Approach

    While AI can assist in the hiring process, employers should maintain a human-centric approach. This means ensuring that human judgment and expertise are incorporated into decision-making, and that AI is used to augment and support human decision-making rather than replace it entirely. A balance between automation and human involvement should be struck to preserve ethical considerations. 

    Confidentiality

    Employers must safeguard the confidentiality of applicant data and ensure that access is limited to authorised individuals involved in the recruitment process.

    To address these legal pitfalls, employers should conduct thorough assessments of the AI systems used in recruitment, including their design, training data, and decision-making processes. They should also establish clear policies and procedures to address potential biases, ensure compliance with data protection laws, and provide transparency to applicants. It is advisable to seek legal counsel to ensure full compliance with all relevant laws and regulations.

    The Government's policy paper, "A pro-innovation approach to AI" (the Paper), published on 29 March 2023, outlines the UK's proposed new AI regulatory framework. The key message emerging from the proposals is that the approach is considered both proportionate and pro-innovation. The underlying aim is to create a regulatory landscape in which innovation can thrive, thereby enabling the UK to attract talent and host more high-skilled jobs. The Paper is open for consultation until 21 June 2023. The end of March saw an unfortunate clash of approaches on the question of regulation: on the same day as the UK government published its pro-innovation (read, regulation-lite) White Paper on AI, The Future of Life Institute published an open letter calling for the development of the most powerful AI systems to be paused to allow for the dramatic acceleration of robust AI governance systems. Although the authenticity of some signatures was subsequently challenged, signatories included tech figures such as Elon Musk and Steve Wozniak.

    Calls for stricter oversight of such developing technologies in the UK workplace have also recently been sounded by the TUC. The TUC argues that AI-powered technologies are now making “high risk, life changing” decisions about workers’ lives, such as decisions relating to performance management and termination. Unchecked, it cautions, the technology could lead to greater discrimination at work. The TUC is calling for a right of explainability, to ensure that workers can understand how technology is being used to make decisions about them, and for the introduction of a statutory duty on employers to consult before new AI is introduced. One of the TUC’s reports focused on the legal implications of AI systems in the post-pandemic workplace, bearing in mind that the use of AI and ADM (automated decision-making) to recruit, monitor, manage, reward and discipline staff had proliferated. The report identified the extent to which existing laws already regulate such use, along with what the TUC felt were significant deficiencies that need to be filled.

    For example, the common law duty of trust and confidence arguably requires employers to be able to explain their decisions, and for those decisions to be rational and made in good faith. In terms of statutory rights, protection against unfair dismissal, data protection rights and the prohibition of discrimination under the Equality Act (amongst other things) are all relevant to how this technology is used at work. However, the TUC’s 2021 report went on to identify 15 “gaps” that remain if AI systems in the workplace are to be regulated by existing laws, and made several specific recommendations for legislative changes to plug these perceived shortcomings. Among other things, it proposed a requirement that employers provide information on any high-risk use of AI and ADM in section 1 employment particulars. The approach taken by the government in the White Paper means that any such plugs are likely to be far from watertight.

    Conclusion

    Advancements in technology, particularly AI, big data, social media, and machine learning, have blurred the line between public and private aspects of individuals' lives. This grants employers greater access to personal information, which can improve the hiring process but also raises ethical and privacy concerns. Balancing these concerns requires legislation, ethical guidelines, technological safeguards, and public discussions.

    Laws and regulations should limit data collection and ensure transparency and protection against discrimination. Ethical guidelines can promote responsible data usage and accountability. Technological safeguards like anonymisation and encryption protect privacy while allowing data analysis.
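
    As a minimal illustration of one such safeguard, the sketch below pseudonymises a direct identifier with a keyed hash (HMAC) so that records can still be linked and analysed without exposing who they refer to. Pseudonymised data generally remains personal data under the UK GDPR, and the key must be stored separately and securely; the key shown here is a placeholder.

        # Illustrative pseudonymisation of a direct identifier with a keyed hash.
        # The key below is a placeholder; in practice it belongs in a key vault,
        # stored separately from the data it protects.

        import hashlib
        import hmac

        SECRET_KEY = b"placeholder-key-store-separately"

        def pseudonymise(identifier: str) -> str:
            """Map an identifier to a stable, non-reversible token."""
            return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()[:16]

        record = {"email": "a.candidate@example.com", "score": 0.82}
        record["email"] = pseudonymise(record["email"])
        print(record)  # the same input always yields the same token, so linkage survives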

    Transparent AI systems give individuals more control over their information. Public discussions shape privacy norms and involve diverse perspectives. It's crucial to find the right balance between the benefits of technology and privacy risks and engage in ethical and responsible data practices.
