'AI Could Assume Roles of Judge, Jury, and Executioner,' Global Risks Expert Tells RT

**Artificial Intelligence is Being Weaponized Throughout Society, with Google’s Recent Policy Shift Indicating Future Trends**

Last week, Google updated its artificial intelligence principles, dropping its previous commitments not to use AI for weapons development, for technologies that directly harm people, or for surveillance that violates internationally accepted norms.

Demis Hassabis, the head of Google DeepMind, commented that the guidelines were being revised in light of a changing world and emphasized that AI has a role in protecting “national security.”

RT spoke with Dr. Mathew Maavak, a senior consultant for Malaysia’s National Artificial Intelligence Roadmap 2021-2025, who specializes in global risks, geopolitics, strategic foresight, governance, and AI, about the implications of Google’s new policies.

**RT:** Does this mean that Google and other corporations will now start making AI-powered weapons?

**Dr. Mathew Maavak:** First and foremost, Google was largely a creation of the US national security apparatus, or, simply, the “deep state.” The origins of many, if not all, Big Tech entities today can be traced to groundbreaking research undertaken by the US Defense Advanced Research Projects Agency (DARPA) and its predecessor, the Advanced Research Projects Agency (ARPA). So the quasi-private entity called Google is inextricably beholden to its “national security” origins, as are other Big Tech entities. Weaponizing AI and creating AI-powered weapons is a natural progression for these entities. Microsoft has long established its own “military empire.”

Furthermore, Big Tech platforms have been extensively used for data and intelligence-gathering activities worldwide. This is a reason why China has banned many US Big Tech software and apps. A nation cannot be sovereign if it is beholden to Big Tech!

As for Google changing its guidelines on AI, this should not come as a surprise. Big Tech has been actively promoting universal AI governance models through various high-profile institutional shills, United Nations agencies, non-governmental organizations, think tanks, and national governments. Through my recent work in this field, it became abundantly clear that the US government sought to stifle the development of indigenous AI worldwide by promoting half-baked and turgid AI governance models riddled with contradictions. The gap between their lofty aspirations and longstanding realities is simply unbridgeable.

The same playbook was deployed to push Environmental, Social, and Governance schemes worldwide – imposing heavy costs on developing nations and corporations alike. Now, the US and Big Capital are ditching the very ESG schemes they had devised.

Unfortunately, many nations fell for these ploys, investing significant money and resources into building fanciful ESG and AI frameworks. These nations risk becoming permanently dependent on Big Tech under what I call “AI neo-colonialism”.

Alphabet’s Google and YouTube, Microsoft’s Bing, and Elon Musk’s X had weaponized their platforms long before this recent change in AI policy. Big Tech’s search algorithms have been used to erase dissenters and contrarian platforms from the digital landscape, effectively imposing a modern-day damnatio memoriae. I have to use the Russian search engine Yandex just to retrieve my own old articles.

**RT:** Why is this change being made now?

**Dr. Mathew Maavak:** All weapons systems increasingly rely on AI. The Russia-Ukraine conflict alone has seen AI used on the battlefield. The extensive use of drones, possibly with swarm-intelligence capabilities, is just one of many examples of AI deployment in Ukraine. You cannot create next-generation weapons and countermeasures without AI. As the old saying goes, you cannot bring a knife to a gunfight.

It must be noted that one of the most profitable and future-proof sectors, with guaranteed returns on investments, is the Military-Industrial Complex. Weaponizing AI and making AI-powered weapons is just a natural course of action for Big Tech.

It is also quite telling that the leaders of the two AI superpowers — the United States and China — skipped the recent Paris AI Summit. The event devolved into a scripted talk shop orchestrated by Big Tech. Alongside the United Kingdom, the United States also refused to sign the declaration on making AI “safe for all.” Clearly, this event was staged to suppress AI innovation in developing nations while legitimizing the weaponization of AI by major powers.

**RT:** Google’s ‘principles’ were, to begin with, set by Google itself; they are voluntary and not binding under any law. So, theoretically, nothing was preventing the company from pursuing any kind of AI research it wanted. Why did it feel the need to make this “official”?

**Dr. Mathew Maavak:** Google’s so-called “principles” were never determined by the company alone. They were a mere sop for public consumption, perfectly encapsulated by its laughably cynical motto: “Don’t be evil.”

Its parent company, Alphabet, is owned by the usual suspects from Big Capital, such as Vanguard, BlackRock, and State Street – all of which are private arms of the US deep state.

An entity like Google cannot conduct “any kind of AI research” as its activities have to conform to the diktats of its primary stakeholders. Google formalized its new weaponization policy because the public’s stake in its ownership pie is virtually nonexistent.

**RT:** Is it time to come up with international laws regarding military AI – like Google’s principles before the recent change, but enforceable?

**Dr. Mathew Maavak:** As I alluded to earlier, the various international AI governance models – all of which are pretty much facsimiles of one another – were surreptitiously formulated by the likes of Google, Microsoft, Amazon, and other members of the so-called Tech Bros. Nations were merely given the illusion of having a stake in this global AI legal and ethics matrix. Bureaucrats simply rubber-stamped whatever Big Tech promoted through various actors and avenues.

At the same time, dissenters to this travesty were systematically ostracized. They may however end up having the last laugh in a coming AI-linked SHTF event.

There are other vexing issues to consider here: How does one define “AI war crime” within an international legal framework? Is it even possible to come up with a universal consensus?

The operator of an armed drone responsible for wiping out scores of civilians might pin the disaster on an AI glitch. The software running the drone itself may have algorithms sourced from various private entities across the world. Who should shoulder the blame in the event of a war crime? The operator, the vendor responsible for software integration, or the entity whose algorithm was used or adapted for targeting? Realistically, it should be the antagonist nation but never bet the farm on restitution if the culprit happens to be the United States or a close ally like Israel.

Last but not least, governments worldwide acted as co-conspirators in Google’s use of AI to censor dissenting scientific viewpoints and contrarian research findings during the so-called COVID-19 pandemic. In doing so, they have effectively handed Big Tech permanent leverage to blackmail them.

Furthermore, what do you think facilitates a welter of bioweapons research across 400-odd US military-linked laboratories worldwide? Gain-of-function microbial experimentation is not possible without AI.

**RT:** AI tools in non-military areas of life, such as generating texts or images, are still far from perfect. Isn’t it a bit early to rely on them in warfare?

**Dr. Mathew Maavak:** The generation of AI texts and images can absolutely be used for warfare, and this is already a significant concern in modern conflict scenarios. Such content can be weaponized in many forms: propaganda texts, deepfaked images and video, fabricated intelligence, and spoofed communications, among others. The possibilities here are simply endless!

AI is evolving at an exponential rate. Yesterday’s science fiction is tomorrow’s reality!

**RT:** As the Washington Post recently reported, Google appears to have been providing AI tools to the Israel Defense Forces since the start of its Gaza campaign. Could the change in the company’s AI principles be linked to that?

**Dr. Mathew Maavak:** I highly doubt it. The IDF’s use of Google’s cloud computing services and related tools may arguably be portrayed as the canonical starting point for the weaponization of AI. But why would the IDF want a multinational civilian workforce based in the United States to have access to its military operations?

If Google provided AI tools to the IDF, it would have done so under directives from the US deep state. A nominally civilian entity cannot unilaterally supply sensitive AI tools for wartime use to any foreign power, allied or otherwise.

Logically speaking, Google’s participation in the Gazan carnage should result in a massive boycott by member states of the Organisation of Islamic Cooperation (OIC). But this will never happen, as too many politicians, “technocrats,” and academics in the OIC are beholden to US patronage. The guardrails of merit, bias mitigation, and non-discrimination – the very pillars of AI governance – are also virtually non-existent in the OIC bloc.

All in all, AI principles as they currently stand, whether in civilian or military spheres, are nothing more than a paper tiger.

**RT:** Again concerning the IDF, it has been revealed that a lot of the civilian deaths in Gaza were apparently not a result of poor AI tools, but of negligent human oversight. Perhaps military AI, when employed properly, could actually lead to more humane warfare?

**Dr. Mathew Maavak:** Honestly, I don’t think AI played a significant role in the genocidal war in Gaza. The use of AI would have led to a targeted military campaign, not a mad, blood-stained blunderbuss of terror. This was no “oversight”; this was intentional!

Compare Israel’s recent actions in Gaza with the relatively professional military campaign it conducted in the same area in 2014, when human intelligence and electronic intelligence played a bigger role vis-à-vis AI. Did AI dumb down the IDF, or is AI being used as a scapegoat for Israel’s war crimes?

The bigger question, however, is this: Why did the IDF’s AI-coordinated border security system fail to detect Hamas’ military activities in the lead-up to the cross-border attacks of October 7, 2023? The system is equipped with multiple sensors and detection tools across land, sea, air, and underground – making the failure all the more perplexing.

In the final analysis, AI is being weaponized across all facets of human life, including religion, and US Big Tech is leading the way. Ultimately, under certain circumstances in the future, AI may be used to act as the judge, jury, and executioner. It may decide who is worthy to live and who is not.

We are indeed living in interesting times.

Sophie Wagner for TROIB News