Labour Research February 2022

Features

When AI rules the workplace

Artificial intelligence is increasingly being used as a management tool to monitor staff activity, both in the workplace and for remote workers.

The technological revolution workers have experienced over recent years includes the rapid development of Artificial Intelligence (AI) to carry out management functions — including decisions on recruitment, line management, monitoring and training.

The TUC says the impact of automation on functions such as the manufacture of goods and provision of retail services is well recognised — but that far less attention has been given to the rapid development of AI to manage people.

While AI has the potential to improve working lives, it also risks more inequality and discrimination, unsafe working conditions, and blurring of the boundaries between home and work.

Unions are demanding workers have a say in shaping this technology and how it is used in the world of work.

Growth in use of AI in the workplace

The Covid pandemic has spurred three-quarters of firms to adopt new technologies, according to recent research by the London School of Economics’ Centre for Economic Performance (CEP) and the CBI business confederation.

The joint research, The business response to Covid-19 one year on: findings from the second wave of the CEP-CBI survey on technology adoption, is based on a July 2021 survey of 425 UK firms.

The study reports that compared to other emerging technologies, relatively higher shares of businesses report one-year or five-year plans “for adoption of AI and digital automation in particular, suggesting that this is an area set to grow over the short to medium term”.

The recent inquiry into AI and surveillance in the workplace by the All Party Parliamentary Group for the Future of Work found these technologies are already transforming work and working lives. And it highlighted a growing body of evidence pointing to “significant negative impacts on conditions and quality of work”.

TUC AI lead Mary Towers told Labour Research: “Our view is that algorithmic management largely started as a phenomenon in the so-called gig economy, particularly in the context of platform work.

“But it’s now spread way beyond that and into the entirety of the labour market and into sectors that one wouldn’t necessarily anticipate.”

The TUC’s AI working group reflects this, with its members drawn from unions representing workers in media and communications, education and other public services, for example, as well as in the more obvious areas of retail, warehousing and delivery.

As part of its AI project, which began in spring 2020, the TUC carried out a survey of workers. This found an increase in both algorithmic management and monitoring and surveillance during the pandemic, driven by the rise in homeworking.

While many workers have not been able to work from home, those who have been homeworking have experienced a significant shift, with the widespread adoption of technology including remote management tools.

For example, recent polling by the Prospect specialists’ union found that by early November 2021, around one in three workers were being monitored at work, up from around a quarter six months previously. This included a doubling of the use of camera monitoring in people’s homes, with 13% of homeworkers being monitored by cameras compared to just 5% in April 2021.

Lack of transparency on the use of AI

The TUC research identified a range of problems for workers being managed by AI, beginning with a lack of awareness. Towers said employers are using technologies to provide a high level of monitoring and surveillance, often without workers being aware of this. When the TUC asked workers whether it was possible that AI-powered technologies were being used at their workplace, a shocking 89% responded either “yes” or “not sure”.

The research and policy officer for the Community trade union, Anna Mowbray, also points to a lack of awareness about data protection rights.

“Workers have the right to understand what data is being collected about them and they have the right to request that data by making a subject access request, but it doesn’t really seem to be happening,” she said.

And Towers said: “Even if a worker is aware of the use of these types of technologies, they might not understand how they are operating, both in a broad sense and in relation to them as an individual — how decisions being made relate to the individual’s terms and conditions at work, how those specific decisions are being made, what rationale is behind those decisions, what criteria are being used.”

Unfair decision-making

This lack of transparency and clarity means that AI decisions workers perceive to be unfair go unchallenged. In the TUC’s 2020 report, Technology managing people — the worker experience, a CWU communications union rep explained they couldn’t challenge such decisions because “algorithms are shrouded in mystery so neither the employer nor the union truly understands what’s happening”.

Mowbray says Community has seen groups with legally protected characteristics unfairly disadvantaged by technology that doesn’t consider their additional needs. For example, a call centre worker going through the menopause needed to take extra breaks, but call monitoring software penalised this.

While this could be challenged on the basis of gender, age and disability discrimination, Mowbray says managers often don’t have enough understanding of the technology and buy it “off the shelf”. In some cases, the tools they procure have been developed to comply with US equality laws, which use different concepts to UK legislation.

As well as unlawful discrimination, other forms of unfairness the TUC research uncovered include ratings that don’t make sense because the AI judges performance without taking account of the context. An example is where an algorithm assesses the quality of someone’s driving.

“They might be downgraded for quick acceleration or making short turns, but those actions might be determined by the landscape and environment that the individual is driving in,” Towers explained.
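To illustrate the point, here is a minimal, hypothetical sketch in Python of the kind of context-blind driving score Towers describes. The field names, thresholds and deductions are all invented for illustration and are not drawn from any real vendor’s product.

```python
# Hypothetical illustration of a context-blind "driving quality" score.
# None of the thresholds or field names come from a real system.

from dataclasses import dataclass

@dataclass
class TelematicsEvent:
    acceleration_ms2: float   # peak acceleration during the event (m/s^2)
    turn_rate_deg_s: float    # peak turn rate during the event (degrees/second)

def context_blind_score(events: list[TelematicsEvent]) -> float:
    """Start at 100 and deduct points for every 'harsh' event,
    regardless of why it happened (narrow streets, roundabouts,
    avoiding a hazard)."""
    score = 100.0
    for e in events:
        if e.acceleration_ms2 > 3.0:   # arbitrary 'harsh acceleration' threshold
            score -= 5
        if e.turn_rate_deg_s > 40.0:   # arbitrary 'sharp turn' threshold
            score -= 5
    return max(score, 0.0)

# A driver on tight urban routes generates many such events and is
# downgraded, even though the route, not the driver, is the cause.
urban_route = [TelematicsEvent(3.5, 55.0), TelematicsEvent(2.0, 48.0)]
print(context_blind_score(urban_route))  # 85.0
```

Because a score like this only counts raw events, a driver assigned to narrow urban streets or busy roundabouts will always look “worse” than one on open roads, which is exactly the missing-context problem described above.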

AI can also negatively affect health and safety. Unfairness impacts on workers’ mental health, while unreasonable and unrealistic productivity targets result in work intensification and can push people to work unsafely — for example, driving too fast or while tired.

Lack of autonomy and agency

The research also found links between poor mental health and the lack of human agency: the lack of freedom to decide how to do tasks, as well as the inability to challenge decisions.

In one example highlighted in Technology managing people, the CWU tried but failed to establish how an algorithm worked, and the difficulties resulted in significant levels of absence from work for mental health reasons.

Mowbray says Community has also dealt with cases where monitoring technology is flawed — for example, an AI tool used to detect smoking in a vehicle interpreted a pen as a cigarette.

“The combination of surveillance and AI is very concerning, particularly because clearly a number of technologies employers are using are not good enough and they make mistakes,” she said.

She is also concerned about the lack of autonomy where workers “are very tightly controlled by an algorithmic management system that impacts on human dignity.

“It’s really important that, as far as possible, people have control to decide what tasks they are going to do and in what order, and algorithmic management tools can prevent that. The Institute for the Future of Work put it well when it said it is turning people into machines.”

Union guidance

The TUC project aims to raise awareness of the “lived worker experience” of these types of technologies and ensure the impact on workers is considered in their development, procurement and application stages.

As well as carrying out research, the project has identified the existing legal tools available to reps and the gaps in current law and guidance. Dignity at work and the AI revolution — a TUC manifesto sets out a series of demands for change (see box). In December 2021, it published new guidance for reps, When AI is the boss.

Dignity at work and the AI revolution

Dignity at work and the AI revolution — a TUC manifesto sets out “a very pragmatic set of proposals”, said TUC AI lead Mary Towers. She said these are “intended to resolve as quickly as possible many of the issues that we identified with the use of AI at work”.

The proposals aim to fill the gaps in equality and data protection laws. The TUC says “high-risk” should be defined as broadly as possible without inhibiting harmless uses of artificial intelligence and automated decision-making (ADM) and should be focused on the worker impact.

The manifesto calls for sector-specific guidance on the meaning of high-risk AI/ADM, with full input from unions and civil society.

The proposals include:

• a statutory right to in-person engagement where important, high-risk decisions are being made about people at work;

• a comprehensive and universal right to human review of high-risk decisions made by AI;

• as employers collect and use worker data, a reciprocal right for workers to collect and use their own data;

• a statutory right for employees and workers to disconnect from work, to create “communication-free” time in their lives (see Labour Research, September 2021, pages 9-11);

• ensuring a worker has ready access to information about how AI and automated decision-making are being used in the workplace in ways which are high-risk, and an obligation on employers to provide this information within the statement of particulars required by Section 1 of the Employment Rights Act 1996; and

• an obligation on employers to maintain a register which contains this information, updated regularly.

Mowbray says AI has the potential to bring benefits for workers. For example, it could automate away routine tasks people don’t necessarily enjoy and leave them to focus on more satisfying relationship-building work. Improved productivity could also mean reduced hours, increased pay, or more flexible working. But this won’t happen by itself.

“Community’s work has focused on worker voice and consultation because we think the more that workers are involved in the development of these systems, the more they are able to understand them and challenge management decisions, the better able they are to shape them for their benefit,” she said.

Community’s new guide, Technology agreements: a partnership approach to use of technology at work, sets out a process to help reps plan how they will negotiate around technology and make sure it’s implemented in the interests of workers.

The union recommends developing technology forums and training technology reps. It also advocates the use of equalities, algorithmic and data protection impact assessments and has created a model agreement it aims to roll out with key employers over the coming months.

The TUC hopes to develop further AI guidance and training for reps, starting with guidance focusing on collective bargaining and consultation rights.

At present, not many collective agreements deal specifically with AI. However, the key principles framework agreement at Royal Mail, negotiated by the CWU, includes a commitment that “technology will not be used to de-humanise the workplace or operational decision making”.

It also includes a provision to maintain channels of communication between workers, managers and trade union representatives because “technology will not replace the need for consultation and negotiation”.

Towers said: “While this is a very, very new area from a policy perspective, from a rep’s perspective it is very much a mixture of the new and the old.

“Union reps can use established negotiation techniques and approaches they have used in previous phases of automation. Those are still applicable in the context of this phase — the automation of the management function.”

Influencing government policy

The TUC will also focus on two key opportunities to influence government policy this year. The long-awaited Employment Bill, if forthcoming, will be a “potentially suitable vehicle” for most of the proposals set out in the manifesto (see box).

The government is also due to publish an AI White Paper following its September 2021 National AI Strategy.

And the TUC will be campaigning to keep important rights under the UK General Data Protection Regulation as the government reforms the UK data protection regime.

Article 21 gives individuals the right to object to the processing of their personal data at any time. And Article 22 gives people the right not to be subject to “solely automated decisions”, including profiling, which have a legal or similarly significant effect on them.

European Union developments

While the UK is no longer part of the EU, European developments may impact on UK workers to some extent.

In December 2021, the European Commission, which develops laws for EU member states, published a draft directive on platform workers that includes a section on algorithmic management, transparency and explainability.

A European Union AI Act is also due to come into force over the next few years. TUC AI lead Mary Towers says this is a “hugely significant attempt to regulate the use of AI”.

She told Labour Research: “There is the potential for the Act to become the gold standard internationally.

“We do not have anything that is comparable to the Act or to the provisions on algorithmic management in the platform workers draft directive.” She added that, as the EU Act adopts a product management approach as opposed to a fundamental rights-based one, “standardisation will be used to operationalise the Act.

“In the circumstances, given the likely influence of the AI Act in the UK, albeit by operation of the market, it seems inevitable that standardisation of AI will also be of significance here in the UK. Unions should have a voice in this process.”

Unions have rightly identified AI as a “really, really important area as the new frontier of workers’ rights,” said Towers.

“We don’t want to resist technology because that won’t be effective and it’s not in the best interests of our members,” said Mowbray. “It’s about empowering workers to say what they want to get out of it and shape the future.”

All Party Parliamentary Group for the Future of Work, The New Frontier: Artificial Intelligence at Work (https://www.futureworkappg.org.uk/s/The-New-Frontier-Artificial-Intelligence-at-Work.pdf)

Centre for Economic Performance, The business response to Covid-19 one year on: findings from the second wave of the CEP-CBI survey on technology adoption (https://cep.lse.ac.uk/pubs/download/cepcovid-19-024.pdf)

Community, Technology agreements: a partnership approach to use of technology at work (https://community-tu.org/who-we-are/what-we-stand-for/workers-and-technology/)

TUC, Dignity at work and the AI revolution — a TUC manifesto (https://www.tuc.org.uk/sites/default/files/2021-03/The_AI_Revolution_20121_Manifesto_AW.pdf)

TUC, Technology managing people — the worker experience (https://www.tuc.org.uk/sites/default/files/2020-11/Technology_Managing_People_Report_2020_AW_Optimised.pdf)

TUC, When AI is the boss - an introduction for union reps (https://www.tuc.org.uk/sites/default/files/2021-12/When_AI_Is_The_Boss_2021_Reps_Guide_AW_Accessible.pdf)