Google DeepMind workers in UK vote to unionize amid deal with US military
2026-05-05 05:05
Workers developing Google's artificial intelligence products in the UK have voted to unionize, in part out of concerns about a deal between the company and the US military that was announced last week.
In a letter slated to go to management on Tuesday and shared exclusively with the Guardian, workers at Google DeepMind, the company's AI research laboratory, requested recognition of the Communication Workers Union and Unite the Union as joint representatives of the lab's UK-based staff.
DeepMind's UK workers voted to unionize in April. One of the workers said they were particularly driven by reports that Google was close to reaching a deal with the defense department and pointed to the US's "capricious Iran war" and the Trump administration's feud with Anthropic as indications that the department is "not a responsible partner". The deal was ultimately announced on Friday.
"I have joined the union due to concerns about AI being used to empower authoritarianism, whether through military or surveillance applications, both foreign and domestic," added the worker, who requested anonymity because of fear of retaliation. "By unionizing, we are taking the traditional route for workers to organize and have a say."
Another worker, who also requested anonymity, said that many at the company had struggled with what they had come to view as their complicity in Israel's war in Gaza. The company provided the Israeli military with increased access to its AI tools from the early days of the war in Gaza, the Washington Post reported last year, and in 2021, it signed, along with Amazon, a $1.2bn cloud-computing contract with the Israeli government.
"Our technology helped the IDF," said the second UK worker, referring to Israel's military. "I want AI to benefit humanity, not to facilitate a genocide."
Google did not respond to a request for comment.
Concerns by Google workers and investors have been mounting for years but have particularly escalated after the company last year dropped a pledge not to develop militarized AI. That development was a driving motivation for Google DeepMind workers' union in the UK, two of them said. While small groups of Google employees have unionized in the US before, the UK workers are the first in a "frontier" AI lab to seek union recognition, they said. Google DeepMind is headquartered in London but has about a dozen offices across North America and Europe. At least 1,000 workers will be represented if the company recognizes the union, according to union officials.
On Friday, the Pentagon confirmed it had reached agreements with seven leading AI companies, Google among them. Others included SpaceX, OpenAI, Nvidia, Reflection, Microsoft and Amazon Web Services. Anthropic, whose technology is in wide use by the US military but which has sparred with the Pentagon over future contracts, was notably absent from the group.
"These agreements accelerate the transformation toward establishing the United States military as an AI-first fighting force and will strengthen our warfighters' ability to maintain decision superiority across all domains of warfare," defense department officials said in a statement.
The Trump administration has pushed AI companies to make their tools available on classified networks without the standard restrictions they apply to users. Google's contract with the Pentagon reportedly includes language stating: "The parties agree that the AI System is not intended for, and should not be used for, domestic mass surveillance or autonomous weapons (including target selection) without appropriate human oversight and control." But that language is non-binding, and the agreement also says Google has no right to control or veto "lawful" government operational decision-making.
Workers who voted to join the union said they did so to raise pressure on Google to meet demands already made by other employees at the company, including that it commit not to develop technology "whose primary purpose is to cause harm or injury to people", establish an independent ethics oversight body, and grant workers the individual right to refuse to contribute to projects on moral grounds. Should the company refuse, they said, they are considering protests and "research strikes", in which staff would abstain from work expected to significantly improve core products such as Gemini, Google's AI chatbot, while avoiding detection by continuing to perform less significant updates.
Workers across Google have been increasingly vocal about their opposition to militarized applications of their technology. Last week, amid reports of the pending deal, more than 600 Google employees signed an open letter to the CEO, Sundar Pichai, demanding the company not make its AI systems available for classified use.
"We want to see AI benefit humanity; not to see it being used in inhumane or extremely harmful ways," they wrote. "Making the wrong call right now would cause irreparable damage to Google's reputation, business, and role in the world."
Tech workers have increasingly challenged management over the use of the technology they have helped develop. In 2024, Google fired 50 workers who had protested against Project Nimbus, the 2021 contract with the Israeli government. At Microsoft, which the Guardian revealed supplied Israel with cloud storage used in the mass surveillance of Palestinians, workers occupied a company campus with signs reading "No Labor for Genocide". (The company terminated the Israeli military's access to that technology after the Guardian's reporting.)
Investors have also raised concerns. A coalition of shareholders who own about $2.2bn of Alphabet's shares wrote a letter to Google's parent company last week demanding a meeting and greater transparency about Google Cloud and AI deployments in "high-risk" contexts. They cited concerns about the company providing services to US immigration authorities, as well as Project Nimbus, and raised questions about "the effectiveness of policy guardrails, internal escalation processes, and Board oversight of AI deployments in conflict-affected or security-sensitive environments".
In 2018, Google also dealt with widespread employee protests over a military contract known as Project Maven, in which the company agreed to build AI products for the Pentagon's analysis of drone footage. In response to the backlash, the company did not renew the contract in 2019 and published a set of principles for its work on AI that included the pledge, now dropped, not to design AI for weapons. Palantir took over Project Maven, which continues today.