Artificial Intelligence (AI) is no longer the future of football; it is the present. From data-driven scouting tools to semi-automated officiating systems, AI is reshaping the way Premier League clubs operate both on and off the pitch. However, alongside these innovations comes a wave of legal and ethical challenges that clubs can no longer afford to ignore.
As clubs adopt AI to gain tactical, physical, and commercial advantages, they must also navigate complex frameworks of data protection law, employment rights, and issues of fairness and integrity in sport. Failure to do so could lead not only to litigation but to reputational damage and a loss of trust among fans and players.
AI in football relies heavily on data, especially biometric and performance-related data collected from players. This includes GPS tracking, injury diagnostics, heart-rate monitoring, and even predictive performance models. Under the UK General Data Protection Regulation (UK GDPR), most of this information is classified as “special category data” and requires heightened safeguards.
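To make the distinction concrete, here is a minimal sketch of how a club's data team might flag which player-data fields count as special category data and therefore need heightened safeguards. The field names are illustrative assumptions, not a real club schema, and the classification of any given field would need legal sign-off in practice.

```python
# Hypothetical sketch: separating "special category" player data
# (health and biometric data under UK GDPR Article 9) from other
# personal data. Field names are invented for illustration.

SPECIAL_CATEGORY_FIELDS = {
    "heart_rate",        # biometric / health data
    "injury_history",    # health data
    "blood_lactate",     # health data
    "sleep_quality",     # health data
}

def requires_heightened_safeguards(field_name: str) -> bool:
    """Return True if a field should be treated as special category data."""
    return field_name in SPECIAL_CATEGORY_FIELDS

def partition_record(record: dict) -> tuple[dict, dict]:
    """Split a player record into special-category and regular personal data."""
    special = {k: v for k, v in record.items() if requires_heightened_safeguards(k)}
    regular = {k: v for k, v in record.items() if not requires_heightened_safeguards(k)}
    return special, regular
```

Even the "regular" partition (GPS distance, sprint counts) remains personal data under the UK GDPR; the split only determines which records attract the stricter Article 9 conditions.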
In 2022, Everton FC implemented a GDPR compliance framework using SureCloud’s governance software, replacing manual spreadsheets with automated risk assessments and breach response protocols. This example shows the level of due diligence now expected of top-tier clubs.
Key obligations include:

- Establishing a lawful basis for processing, with explicit consent or another Article 9 condition where special category data is involved
- Conducting Data Protection Impact Assessments (DPIAs) before deploying high-risk tracking or profiling tools
- Applying data minimisation and purpose limitation, collecting only what a clearly stated purpose requires
- Keeping data secure and maintaining clear breach-response protocols
A club found lacking in these areas risks significant fines from the Information Commissioner’s Office (ICO), especially if data is shared with external vendors or used for unintended commercial purposes.
Perhaps the most prominent legal challenge of recent years is exemplified by Project Red Card, a legal action initiated by more than 800 current and former professional players. They claim their personal performance data has been harvested and sold to betting firms and data analytics companies without consent or compensation.
This dispute raises foundational questions:

- Who owns a player's performance data: the player, the club, or the company that collects it?
- Can that data lawfully be commercialised without the player's consent?
- Are players entitled to compensation when their data generates revenue for third parties such as betting firms?
Given the potential for long-term commercial value, especially in sectors like fantasy football, esports, and gambling, clubs must ensure that all data usage is contractually clarified, lawfully processed, and ethically managed.
Clubs are increasingly using AI to assist with scouting, talent identification, and contract negotiations. Tools like aiScout, which allows players to submit video and performance data remotely, democratise access to professional football. However, they also introduce the risk of bias.
AI models can inadvertently favour players with access to better training environments, more high-quality footage, or certain physical traits historically preferred in elite sport. This could unintentionally discriminate against players from underfunded academies or marginalised communities.
Under the Equality Act 2010, clubs can face liability for indirect discrimination, even when using technology developed by third parties. Without regular audits of these systems, clubs risk reinforcing systemic biases under the guise of “data-driven decisions.”
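One simple form such an audit could take is a disparate impact check, comparing the model's selection rates across groups. The sketch below uses the common "four-fifths" convention as a flag threshold; the group labels, numbers, and threshold are illustrative assumptions, not a legal test under the Equality Act 2010.

```python
# Hedged sketch of a basic fairness audit for an AI scouting model:
# compare shortlisting rates between two groups and flag when the
# ratio falls below the conventional 0.8 ("four-fifths") threshold.

def selection_rate(outcomes: list) -> float:
    """Fraction of candidates the model shortlisted (True = shortlisted)."""
    return sum(outcomes) / len(outcomes) if outcomes else 0.0

def disparate_impact_ratio(group_a: list, group_b: list) -> float:
    """Ratio of the lower selection rate to the higher one (1.0 = parity)."""
    rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
    if max(rate_a, rate_b) == 0:
        return 1.0
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Invented example: model decisions for players from well-funded vs
# underfunded academies.
funded = [True, True, True, False, True]         # 80% shortlisted
underfunded = [True, False, False, False, True]  # 40% shortlisted

ratio = disparate_impact_ratio(funded, underfunded)
if ratio < 0.8:
    print(f"Audit flag: disparate impact ratio {ratio:.2f} is below 0.8")
```

A flag like this does not prove discrimination, but it tells a club where to look before a "data-driven decision" hardens into a liability.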
Article 22 of the UK GDPR gives individuals the right not to be subject to decisions based solely on automated processing where those decisions produce legal or similarly significant effects, such as denying a contract extension or altering pay. As clubs increasingly turn to AI to flag fitness concerns or suggest contractual actions, they must ensure these recommendations remain advisory, subject to meaningful human review, rather than definitive.
A cautionary tale comes from the use of AI in other sectors, where flawed predictive models have resulted in discriminatory hiring practices and wrongful terminations. In football, the equivalent would be an algorithm wrongly predicting a player’s injury risk, leading to a premature release from the club.
To protect against this, decisions must remain transparent, contestable, and underpinned by human judgment. Otherwise, clubs could face wrongful dismissal or discrimination claims under UK employment law.
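In code terms, "advisory, not definitive" can be enforced structurally: an AI output is treated as a recommendation object that cannot be acted on until a named human reviewer records a decision and a rationale. The sketch below is a minimal illustration of that pattern; the class, field names, and example values are assumptions, not any club's actual system.

```python
# Minimal human-in-the-loop sketch: an AI recommendation only becomes
# actionable once a named reviewer records a decision and rationale,
# keeping the model's output advisory as UK GDPR Article 22 requires.

from dataclasses import dataclass
from typing import Optional

@dataclass
class AIRecommendation:
    player: str
    suggestion: str           # e.g. "flag elevated injury risk"
    model_confidence: float
    reviewer: Optional[str] = None
    human_decision: Optional[str] = None
    rationale: Optional[str] = None

    def finalise(self, reviewer: str, decision: str, rationale: str) -> None:
        """Record the human review that makes the recommendation effective."""
        self.reviewer = reviewer
        self.human_decision = decision
        self.rationale = rationale

    @property
    def actionable(self) -> bool:
        """Only human-reviewed recommendations may be acted upon."""
        return self.human_decision is not None

rec = AIRecommendation("Player A", "flag elevated injury risk", 0.72)
assert not rec.actionable  # advisory only until a human reviews it
rec.finalise("Head of Performance", "retain, adjust training load",
             "model flag not supported by recent medical screening")
assert rec.actionable
```

The audit trail this creates (who decided, and why) is also exactly what a club would need to defend a decision if it were later contested.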
Ethically, consent cannot just be a tick-box exercise. When a young academy player is asked to consent to biometric monitoring or to participate in AI-enhanced training regimes, the pressure to comply can be immense.
Given the imbalance of power between player and club, it’s essential that consent is freely given, fully informed, and capable of being withdrawn without consequence. Anything less undermines trust and could amount to ethical coercion, even if it remains legally compliant.
AI models often operate as “black boxes,” producing outputs without clear reasoning. For example, a club might use an AI-powered fitness tracker to assess who starts a match. If a player is benched due to AI predictions without a clear explanation, it could lead to resentment and undermine morale.
Transparency is essential, not only to meet legal standards but to preserve the team dynamic and uphold principles of sporting fairness. Clubs must be able to explain the rationale behind AI-assisted decisions and ensure players have recourse to challenge them.
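What "explaining the rationale" might look like in practice: if a readiness score is built from weighted inputs, each input's contribution can be surfaced to the player rather than presenting a single opaque number. The weights, feature names, and values below are invented for illustration; a club using a more complex model would need an equivalent per-decision breakdown.

```python
# Illustrative sketch of an explainable readiness score: a simple
# weighted sum whose per-feature contributions can be shown to a
# benched player. All weights and features are made up.

WEIGHTS = {
    "minutes_last_30_days": -0.002,  # heavy recent load lowers readiness
    "sleep_score": 0.01,
    "muscle_soreness": -0.05,
    "top_speed_delta": 0.03,
}

def readiness_score(features: dict) -> tuple:
    """Return the overall score plus a per-feature contribution breakdown."""
    contributions = {k: WEIGHTS[k] * v for k, v in features.items()}
    return sum(contributions.values()), contributions

score, breakdown = readiness_score({
    "minutes_last_30_days": 2400,
    "sleep_score": 78,
    "muscle_soreness": 6,
    "top_speed_delta": -1.2,
})

# Show the player what drove the decision, most negative factors first.
for feature, contribution in sorted(breakdown.items(), key=lambda kv: kv[1]):
    print(f"{feature}: {contribution:+.2f}")
```

A breakdown like this turns "the model benched you" into "your recent match load and soreness scores drove the call", which is both easier to accept and easier to challenge.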
AI is expensive. While top Premier League clubs like Liverpool have partnered with DeepMind to analyse thousands of corner-kick scenarios using TacticAI, smaller clubs lack the resources for such cutting-edge tools.
This could widen the competitive gap in the league, entrenching the dominance of financially powerful clubs and leaving others behind. To mitigate this, there is a growing argument for league-wide AI governance frameworks and shared access to core technologies, much like Financial Fair Play initiatives.
The Premier League’s decision to delay the rollout of semi-automated offside technology in 2024, after controversial VAR hold-ups, shows just how sensitive fans are to AI interfering with the rhythm and emotion of the game.
Technology must serve the spirit of football, not frustrate it. Poorly explained decisions, inconsistent application of tech, or opaque officiating systems can quickly erode public trust. Clubs and governing bodies need to invest in fan education and ensure transparency about how and when AI is used.
Artificial intelligence is transforming professional football, but it must be handled with care. For Premier League clubs, the challenge is not just staying competitive; it is staying compliant, transparent, and fair.
Those who embed legal and ethical principles into their AI strategies will not only avoid risk but also help shape a version of the game in which technology enhances human talent rather than replacing or distorting it.