Tahir Khan highlights the issues employers should be aware of when considering adopting technology in their employment processes. In this article, we focus on the potential repercussions of new technologies for the privacy of job candidates, as well as the implications for candidates' protections under the Equality Act 2010 and wider employment law. Employers recognise that they cannot, or should not, ask candidates about their family status or political orientation, or whether they are pregnant, straight, gay, sad, lonely, depressed, physically or mentally ill, drinking too much, abusing drugs, or sleeping too little. However, new technologies may already be able to discern many of these factors indirectly and without proper (or even any) consent.
Digital innovations and advances in AI have produced a range of novel talent identification and assessment tools. Many of these technologies promise to help organisations improve their ability to find the right person for the right job, and to screen out unsuitable candidates, faster and more cheaply than ever before.
These tools put unprecedented power in the hands of organisations to make data-driven human capital decisions. They also have the potential to democratise feedback, giving millions of job candidates data-driven insights into their strengths, development needs, and potential career and organisational fit.
We have seen rapid growth (and corresponding venture capital investment) in game-based assessments, bots that scrape social media postings, linguistic analysis of candidates' writing samples, and video-based interviews that use algorithms to analyse speech content, tone of voice, emotional states, nonverbal behaviours, and temperamental cues.
AI has over the years been heavily used in several types of decision-making: candidate profiling, automated decision-making, and machine learning-based assessment. These are predominantly algorithmic or computational HR and management tools. They learn directly from data without relying on a predetermined equation as a model, and they adaptively improve their performance as the number of samples available to them increases.
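To make the "learning from data" idea concrete, the sketch below trains a toy screening classifier on progressively larger samples and shows its accuracy improving. It is purely illustrative: the data is synthetic, the features are hypothetical, and scikit-learn is assumed to be available; it is not a real assessment tool.

```python
# A minimal sketch of "learning from data": a toy screening classifier whose
# accuracy improves as the training sample grows. Entirely illustrative;
# synthetic data and hypothetical features only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_candidates(n):
    # Two hypothetical numeric features, e.g. test score and years' experience
    X = rng.normal(size=(n, 2))
    # A synthetic "suitable" label driven by the features plus noise
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)
    return X, y

X_eval, y_eval = make_candidates(2000)       # held-out evaluation set
for n in (50, 500, 5000):
    X_train, y_train = make_candidates(n)    # progressively larger samples
    model = LogisticRegression().fit(X_train, y_train)
    print(n, round(accuracy_score(y_eval, model.predict(X_eval)), 3))
# No predetermined equation is hand-coded: the model's parameters are fitted
# from the data, and accuracy typically rises as more samples become available.
```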
Technological advancements, combined with the desire to optimise recruitment processes and target higher-quality candidates, have led to a significant increase in the use of AI by employers during hiring. AI software has become mainstream, featuring in at least one stage of the recruitment process at global organisations including Vodafone, McDonald’s, and Unilever, and at face value its benefits are easy to see: in 2019, Unilever reported saving 100,000 hours of interviewing time and nearly £1 million per year by using AI.
While these novel tools are disrupting the recruitment and assessment space, they leave many as-yet-unanswered questions about their accuracy and about the ethical, legal, and privacy implications they introduce.
Debate over the use of AI in the workplace has flared up over the years, occasionally grabbing headlines, but, unsurprisingly, with little scrutiny from a regulatory perspective. During Covid-19, the use of AI software accelerated into the mainstream, and it has since featured in work allocation programs, recruitment and hiring drives, and even dismissal and management decision-making. During lockdown, the benefits of incorporating AI were significant, providing many organisations with an undeniable lifeline by speeding up HR processes and saving time and expense. Unfortunately, there is far less information about the new generation of talent tools increasingly used in pre-hire assessment. Employers need to consider whether these tools are scientifically validated and whether they control for potential discriminatory adverse impact; if not, they risk placing blind reliance on them.
Technology is indeed having an impact on employment law in the UK, particularly in areas such as data protection, privacy, discrimination, and employee rights. Here are some key areas where AI intersects with employment law:
Direct Discrimination: AI algorithms must not be programmed or trained to discriminate against individuals based on protected characteristics such as race, gender, age, disability, religion, or sexual orientation. Care must be taken to ensure that the AI system does not result in unfair treatment or biased decisions. One example of the danger of biases embedded within AI software involves Uber, which has been subject to an employment tribunal claim brought by a former UK driver on the basis that its facial recognition software did not work as well for people of colour, resulting in the claimant’s account being wrongly deactivated. Another example occurred in 2018, when a retailer scrapped a recruitment algorithm after discovering that the machine learning system favoured male candidates; the bias had been introduced inadvertently because the training data consisted primarily of men's CVs. Similarly, in 2021, Facebook faced accusations of discrimination in its job advertisements after an experiment revealed biased outcomes, with job ads shown predominantly to one sex or the other. AI was also at the heart of a legal dispute between the make-up brand MAC (a subsidiary of Estée Lauder) and three of its former employees. In June 2020, as part of a redundancy exercise, MAC used software provided by the recruiting platform HireVue to conduct video interviews. Though not a recruitment exercise, the case serves as a cautionary tale about the adoption of AI tools.
The HireVue software analysed the interviewees’ answers, language and facial expressions using an algorithm. Interview scores were considered alongside sales figures and employment records. Three employees appealed the decision to make them redundant, suspecting that the HireVue interview had been their downfall, and took legal action against Estée Lauder.
According to the employees, they had not been informed of the nature of the redundancy assessment. They also alleged that no explanation of the methodology behind the algorithmic evaluation was provided. An out-of-court settlement was reached earlier this year. The employees’ claims touch on the crux of the problem with AI applications: the lack of transparency around how they perform. Commercial confidentiality is often cited as the basis for withholding information. However, there is a clear tension between this reticence and employers’ obligations under the Equality Act 2010 regime. Where employers cannot explain how and why decisions have been made, there is a risk of breaching employment legislation. Ultimately, it is employers who are held responsible for the decisions made, not the technology.
Indirect Discrimination: Indirect discrimination can occur when an AI algorithm disproportionately disadvantages individuals with certain protected characteristics. It is crucial to regularly test and monitor AI systems for potential biases and take corrective measures to address any identified issues.
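By way of illustration, the sketch below applies a simple selection-rate comparison across applicant groups. The 0.8 ("four-fifths") threshold used here is a US regulatory heuristic rather than a UK legal test, and all figures are hypothetical; real monitoring would need statistical testing and legal input.

```python
# A hedged sketch of a basic adverse-impact check across applicant groups.
# The 0.8 ("four-fifths") threshold is a US heuristic, not a UK legal test,
# and the (shortlisted, applied) figures below are invented.
outcomes = {
    "group_a": (48, 120),
    "group_b": (21, 90),
}

rates = {g: shortlisted / applied for g, (shortlisted, applied) in outcomes.items()}
benchmark = max(rates.values())  # compare each group against the highest rate

for group, rate in rates.items():
    ratio = rate / benchmark
    flag = "review" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.2f}, ratio {ratio:.2f} -> {flag}")
```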
Training Data Bias: AI algorithms learn from historical data, which may reflect biases present in past hiring decisions. If the training data is biased, it can perpetuate discriminatory outcomes. Employers should carefully select and preprocess training data to minimise bias and ensure fair representation.
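A first practical step is often a simple audit of group representation and historical outcomes in the training data itself. The sketch below assumes hypothetical column names ("gender", "hired") and invented figures; a real audit would cover every relevant protected characteristic.

```python
# An illustrative audit of group representation and historical outcomes in
# training data. Column names and figures are hypothetical.
import pandas as pd

training = pd.DataFrame({
    "gender": ["M"] * 70 + ["F"] * 30,
    "hired":  [1] * 40 + [0] * 30 + [1] * 10 + [0] * 20,
})

# "count" shows representation; "mean" shows the historical hire rate each
# group carries into training. Skews in either are early warning signs.
print(training.groupby("gender")["hired"].agg(["count", "mean"]))
```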
At the start of any AI project, employers should consider whether a data protection impact assessment (DPIA) is required. Employers must carry out a DPIA where a type of processing is likely to result in a high risk to the rights and freedoms of individuals. A DPIA involves identifying privacy risks and considering what is necessary and proportionate. If algorithms are used, there should be transparency about how they are applied, in order to demonstrate accountability.
Lawful Basis: Employers must establish a lawful basis for processing personal data under the GDPR. Consent, legitimate interests, and contractual necessity are some of the potential lawful bases, depending on the circumstances.
Purpose Limitation: Personal data collected during the recruitment process should only be used for the intended purposes. It is essential to clearly communicate these purposes to applicants and avoid using the data for unrelated activities.
Data Minimisation: Employers should collect and process only the necessary personal data required for the recruitment process. Unnecessary or excessive data collection can raise compliance issues.
Intellectual Property and Ownership: If employers use third-party AI systems or algorithms in their hiring processes, they must consider the ownership and licensing of the AI technology. Understanding the terms and conditions of usage, intellectual property rights, and potential data sharing is important to avoid any legal disputes or infringements.
Human-Centric Approach: While AI can assist in the hiring process, employers should maintain a human-centric approach. This means ensuring that human judgment and expertise are incorporated into decision-making, and that AI is used to augment and support human decision-making rather than replace it entirely. A balance between automation and human involvement should be struck to preserve ethical considerations.
Applicants' Rights: Job applicants have the right to be informed about the use of AI in the recruitment process, including the types of data collected, how it is processed, and the logic behind automated decisions.
Explainable AI: Ensuring that AI algorithms are explainable is important for transparency and accountability. Applicants should have the ability to understand and challenge decisions made by AI systems.
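One illustrative route to explainability is an inherently interpretable model whose signed weights can be read directly, as in the hedged sketch below. The feature names and data are invented; more complex models would typically need dedicated explanation tooling such as SHAP.

```python
# An illustrative route to explainability: a linear model whose signed
# weights can be read off directly. Feature names and data are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["test_score", "years_experience", "typing_speed"]
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))
y = (1.5 * X[:, 0] + X[:, 1] - 0.2 * X[:, 2]
     + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)
for name, coef in zip(feature_names, model.coef_[0]):
    # A positive weight pushed the score up; a negative one pushed it down.
    print(f"{name}: {coef:+.2f}")
```

Signed weights of this kind give a candidate-facing starting point: which inputs affected the score, in which direction, and roughly by how much.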
Fairness and Bias Mitigation: AI systems can inherit biases from historical data or reflect biases present in the recruitment process. Employers should actively work to identify and mitigate biases to ensure fair treatment of applicants. This may involve regularly monitoring and auditing AI systems, diversifying training data, and applying fairness tests to assess the impact of AI algorithms on different applicant groups.
Algorithmic Accountability: Employers should take responsibility for the decisions made by AI systems in hiring. This involves regular monitoring and auditing of AI algorithms to identify any unintended consequences or biases. Establishing mechanisms for addressing concerns, providing opportunities for redress, and continuously improving the fairness and performance of AI systems is crucial.
Profiling and Significant Effects: The GDPR imposes restrictions on solely automated decisions that have a significant impact on individuals. In most cases, employers should include human intervention in the decision-making process to ensure fairness and compliance with legal requirements.
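In code terms, one way to operationalise this is a routing rule that never issues an adverse outcome automatically. The sketch below is a minimal illustration with hypothetical names and thresholds, not a compliance recipe.

```python
# A minimal human-in-the-loop gate: no adverse outcome is issued on the
# model score alone. Names and the threshold are hypothetical.
from dataclasses import dataclass

@dataclass
class Decision:
    candidate_id: str
    model_score: float
    outcome: str  # "advance" or "human_review"; rejection only follows human review

def route(candidate_id: str, model_score: float, threshold: float = 0.5) -> Decision:
    if model_score >= threshold:
        return Decision(candidate_id, model_score, "advance")
    # Low scores are queued for a recruiter rather than auto-rejected,
    # preserving human intervention for decisions with legal or
    # similarly significant effects.
    return Decision(candidate_id, model_score, "human_review")

print(route("c-101", 0.72))
print(route("c-102", 0.31))  # routed to a human, not straight to rejection
```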
Right to Explanation: When AI systems make decisions that have legal or similarly significant effects on individuals, applicants have the right to receive an explanation of the decision-making process and the underlying logic.
Data Security: Adequate security measures should be in place to protect personal data from unauthorised access, loss, or disclosure. This includes ensuring secure storage, encryption, and regular security assessments.
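As a small illustration of encryption at rest, the sketch below uses the Fernet recipe from the widely used Python cryptography package; key management is deliberately simplified and would need proper treatment in production.

```python
# A sketch of encrypting applicant records at rest with Fernet
# (authenticated symmetric encryption) from the "cryptography" package.
# In production, the key belongs in a managed secrets store, never
# alongside the data it protects.
from cryptography.fernet import Fernet

key = Fernet.generate_key()             # store in a secrets manager
fernet = Fernet(key)

record = b"candidate: Jane Doe; assessment score: 72"
token = fernet.encrypt(record)          # ciphertext safe to persist
print(fernet.decrypt(token) == record)  # True: the record round-trips
```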
Confidentiality: Employers must safeguard the confidentiality of applicant data and ensure that access is limited to authorised individuals involved in the recruitment process.
To address these legal pitfalls, employers should conduct thorough assessments of the AI systems used in recruitment, including their design, training data, and decision-making processes. Employers should also establish clear policies and procedures to address potential biases, ensure compliance with data protection laws, and provide transparency to applicants. It is advisable to seek legal counsel to ensure full compliance with all relevant laws and regulations.
The Government's policy paper, "A pro-innovation approach to AI" (the Paper), published on 29 March 2023, outlines the UK's proposed new AI regulatory framework. The key message emerging from the proposals is that the approach is considered both proportionate and pro-innovation. The underlying aim is to create a regulatory landscape in which innovation can thrive, thereby enabling the UK to attract talent and host more high-skilled jobs. The Paper is open for consultation until 21 June 2023.
The end of March saw an unfortunate clash of approaches on the question of regulation: on the same day the UK government published its pro-innovation (read: regulation-lite) White Paper on AI, the Future of Life Institute published an open letter calling for the development of the most powerful AI systems to be paused to allow for the dramatic acceleration of robust AI governance systems. Signatories reportedly included technology figures such as Elon Musk and Steve Wozniak, although the authenticity of some signatures was subsequently challenged.
Calls for stricter oversight of such developing technologies in the UK workplace have also recently been sounded by the TUC. The TUC argues that AI-powered technologies are now making “high risk, life changing” decisions about workers’ lives – such as decisions relating to performance management and termination. Unchecked, it cautions that the technology could lead to greater discrimination at work. The TUC is calling for a right of explainability to ensure that workers can understand how technology is being used to make decisions about them, and the introduction of a statutory duty for employers to consult before new AI is introduced.
A focus of one of the TUC's reports, published in 2021, was an analysis of the legal implications of AI systems in the post-pandemic workplace, bearing in mind that the use of AI and automated decision-making (ADM) to recruit, monitor, manage, reward and discipline staff had proliferated. The report identified the extent to which existing laws already regulate such use, along with what the TUC felt were significant deficiencies that need to be filled.
For example, the common law duty of trust and confidence arguably requires employers to be able to explain their decisions and for those decisions to be rational and made in good faith. In terms of statutory rights, protection against unfair dismissal, data protection rights and the prohibition of discrimination under the Equality Act (amongst other things) are all relevant to how this technology is used at work. However, the 2021 report went on to identify 15 "gaps" in the ability of existing laws to regulate AI systems in the workplace, and made a number of specific recommendations for legislative change to plug these perceived shortcomings. For example, it proposed requiring employers to provide information on any high-risk use of AI and ADM in section 1 employment particulars. But, as we will go on to consider, the approach taken by the government in the White Paper means that any such plugs are likely to be far from watertight.
New technologies have blurred the boundaries between the public and private aspects of our lives, such as personal traits and states, and this trend is expected to continue. Through AI, big data, social media, and machine learning, employers will gain ever greater access to candidates' private lives, attributes, challenges, and mental states. While the resulting privacy concerns may have no straightforward solutions, we believe it is essential to engage in public discussion and debate on these matters.
In the digital era, vast amounts of information about us are readily accessible. Big data constantly tracks our online activities and compiles records that can be analysed by tools of previously unimaginable power. These tools have the potential to give future employers insights into our suitability, or lack thereof, for specific roles.