Over 100 Google DeepMind employees write open letter, want Google to stop working on these contracts
About 200 employees at Google DeepMind, the company’s AI division, have signed an open letter urging the company to end its contracts with military organisations, Time Magazine reports. The letter, dated May 16, 2024, expresses concerns that DeepMind’s AI technology is being used for warfare purposes, potentially violating Google’s own AI principles.
The letter specifically references Google’s defence contract with the Israeli military, known as Project Nimbus, as well as reports of AI being used for mass surveillance and target selection in Gaza. While the letter states that it is not about any particular conflict, the signatories emphasise that their concerns centre on upholding ethical AI practices.
“Any involvement with military and weapon manufacturing impacts our position as leaders in ethical and responsible AI, and goes against our mission statement and stated AI Principles,” the letter states, according to Time.
When Google acquired DeepMind in 2014, it promised that the lab’s technology would never be used for military or surveillance purposes. DeepMind’s AI principles prohibit working on applications that cause “overall harm” or aid in building weapons or technology intended to cause injury. However, as DeepMind has been integrated more closely with Google’s core operations, this separation has become less distinct.
The letter calls on DeepMind’s leadership to investigate claims of military use of Google Cloud services, terminate military access to DeepMind technology, and establish a new governance body to prevent future military applications.
As of August 2024, Google has not substantially responded to the employees’ concerns, Time reports. “We have received no meaningful response from leadership,” one employee told the magazine, “and we are growing increasingly frustrated.”
A Google spokesperson defended the company’s practices, stating, “When developing AI technologies and making them available to customers, we comply with our AI Principles, which outline our commitment to developing technology responsibly.”
This internal dissent follows previous protests against Project Nimbus, which resulted in Google firing dozens of employees earlier this year.