November 14, 2024

Almost 60% of people would like to see the UK government regulate the use of generative AI technologies such as ChatGPT in the workplace to help safeguard jobs, according to a survey.

As leading figures in the tech industry call for restrictions on the rapid development of AI, research by the Prospect trade union suggests strong public support for regulation.

In a survey of more than 1,000 people last month, 58% agreed that “the government should set rules around the use of generative AI to protect workers’ jobs”. Just 12% said the government should not interfere because “the benefits are likely to outweigh any costs”.

Employers have used various forms of AI for some time – including in target-setting, and hiring and firing decisions – but the salience of the technologies has increased dramatically since the release of ChatGPT, which hit 100 million users within two months of launch.

Analysts at Goldman Sachs recently suggested AI could ultimately replace the equivalent of 300m full-time jobs – around a quarter of current work tasks in the US and Europe – though many displaced roles could be offset by new jobs created to work alongside the technology.

They identified administrative jobs as those most at risk, followed by those in law, architecture and engineering.

Prospect represents skilled workers such as scientists and engineers. Andrew Pakes, the deputy general secretary of the union, said many employees were already experiencing some form of AI through automated decision-making, often in conjunction with workplace surveillance.

“It’s the hidden decision-making behind surveillance software and many of the AI tools that leaves workers feeling uneasy about how decisions are being made,” he said.

“Rather than waiting until more problems occur before taking action, government must engage now with both employees and employers to draw up fair new rules for using this tech.”

The survey also showed that 71% of workers would be uncomfortable with having their movements tracked at work, and 59% with having their keyboard use monitored while they are working from home.

In a recent white paper, the government appeared to suggest it would take a laissez-faire approach to the development of AI, with a foreword by the science, innovation and technology secretary suggesting the technology had delivered “fantastic social and economic benefits for real people”.

But government sources have suggested that the prime minister has some concerns about the technology.

Pakes said it was important for regulation to tackle what he called the “here and now” of AI, as well as the potential for apocalyptic risks in the future. “The government can act today,” he said.

The TUC has called for limits on the way employers gather and use data about their staff, which can then be fed into automated decisions.

Mary Towers, who leads the TUC’s work on AI in the workplace, told a recent House of Lords select committee hearing: “Data is about control, data is about influence, data is the route that workers have to establish fair conditions at work.”

She warned that AI could be “used to intensify work to a level where it becomes unsustainable”. The TUC is calling for workers to be told what data is being gathered about them and how it is being used.
