Google lifts prohibition on weaponizing AI
Google has lifted its prohibition on utilizing AI for the development of weapons and surveillance technologies.
![Google lifts prohibition on weaponizing AI](https://mf.b37mrtl.ru/files/2025.02/thumbnail/67a39521203027209925a09b.jpg?#)
Google has made substantial changes to its artificial intelligence principles, lifting earlier bans on using the technology for weaponry and surveillance. The revision, announced on Tuesday, reverses the company’s previous stance against applications that might lead to "overall harm."
Originally, in 2018, Google implemented a set of AI principles in response to backlash regarding its participation in military projects, such as a U.S. Department of Defense initiative that utilized AI for data processing and target identification in combat. The initial guidelines clearly stated that Google would not create or deploy AI for use in weaponry or technologies that inflict or directly facilitate injury to individuals, nor for surveillance practices that breach internationally acknowledged standards.
However, the newer iteration of Google’s AI principles has removed these stipulations. Instead, Google DeepMind CEO Demis Hassabis and senior executive for technology and society James Manyika have introduced a fresh list of the company’s "core tenets" regarding AI use. These focus on innovation and collaboration, asserting that "democracies should lead in AI development, guided by core values like freedom, equality, and respect for human rights."
Margaret Mitchell, a former co-leader of Google’s ethical AI team, told Bloomberg that the removal of the "harm" clause might imply a shift toward "deploying technology directly that can kill people."
As reported by The Washington Post, Google has engaged with the Israeli military since the early days of the Gaza conflict, competing with Amazon to deliver artificial intelligence services. Following the Hamas attack on Israel in October 2023, Google’s cloud division reportedly assisted the Israel Defense Forces in accessing AI tools, contrary to the company’s public claims of restricting its involvement to civilian government agencies, according to internal documents cited by the publication last month.
Google’s policy reversal comes amid ongoing concerns about the potential dangers of AI for humanity. Geoffrey Hinton, a leading figure in AI and winner of the 2024 Nobel Prize in Physics, cautioned late last year that the technology could lead to human extinction within the next thirty years, estimating the risk at up to 20%.
Hinton has warned that AI systems could eventually exceed human intelligence, escape human oversight, and potentially inflict catastrophic damage. He has called for a significant allocation of resources toward ensuring AI safety and fostering the ethical application of the technology, emphasizing the need for proactive initiatives.
Camille Lefevre for TROIB News