Biden's new AI policy aims high but creates political pitfalls
A recent memo places competition with China and national security at the forefront of AI policy, drawing immediate criticism from various quarters.
This focus on AI in security could pose challenges for Vice President Kamala Harris if she is elected, as civil rights organizations have already voiced concerns over the memo’s potential to empower security agencies and accelerate a surveillance state.
Moreover, the administration's strong stance on China may complicate former President Donald Trump's pledges to overturn Biden’s entire executive order on AI.
“It’s an incredibly difficult needle to thread,” noted David Wade, a former State Department chief of staff who sits on the board of the American Security Project, a nonpartisan national security think tank whose leadership includes industry figures and former government officials.
The national security memo represents the most detailed outline yet of President Joe Biden's 2023 AI executive order, which called on government agencies to tackle the new hurdles and opportunities presented by AI. The policies revealed on Thursday encourage the use of AI to bolster national security “in ways that align with democratic values,” aim to secure the AI chip supply chain, and designate the sector as “a top-tier intelligence priority” in the face of foreign competitors seeking to infiltrate U.S. industries.
However, this new policy also presents political vulnerabilities: Trump has expressed intentions to reverse the original order if he regains the presidency.
“We have seen numerous Republicans be very critical of the executive order … so that is a looming question,” stated Brandon Pugh, who leads the Cybersecurity and Emerging Threats team at the R Street Institute.
If Harris moves forward with Biden’s AI policies, she may face opposition, as civil rights advocates are already expressing concerns that the AI memo grants security agencies self-regulatory powers.
“National security agencies must not be left to police themselves as they increasingly subject people in the United States to powerful new technologies,” argued Patrick Toomey, deputy director of the ACLU’s National Security Project.
The high-profile unveiling and scope of the new policy signal that security considerations and competition with China are central to the Biden administration’s agenda, even as it attempts to incorporate other regulatory approaches for managing AI risks.
“There is probably no other technology that will be more critical to our national security in the years ahead,” said national security adviser Jake Sullivan during the memo’s announcement at the National Defense University.
The memo emphasizes AI as essential for countering Beijing, facilitating the influx of overseas workers with AI skills, establishing facilities to enhance U.S. research, and protecting the AI supply chain from foreign interference.
Regarding visas, the memo encourages agencies to streamline processing for applicants involved with sensitive technologies. Sullivan urged Congress “to get in the game with us — staple more green cards to STEM diplomas — as President Biden has been pushing to do for years.”
It instructs security-related agencies, including the Defense Department and the Office of the Director of National Intelligence, to form a working group focused on AI procurement issues. The memo also directs the State, Energy, and Commerce departments to collaborate with national security agencies in making public investments in AI technologies.
This national security memo enhances several of the Biden administration’s ongoing AI initiatives. Central to these efforts is a move to bolster institutions that could extend beyond Biden’s presidency. It formalizes and allocates additional resources to the Commerce Department’s AI Safety Institute, which has already established testing agreements with leading AI firms such as OpenAI and Anthropic.
Recognizing AI as a growing civil rights issue due to its potential to infringe on privacy and propagate bias, the memo makes a significant acknowledgment of rights protections. It mentions the phrase “human rights” 22 times and requires major federal agencies to monitor the risks associated with their AI usage.
However, some experts express concern that much of the reporting on this monitoring will remain inaccessible to the public.
“It's going to be very difficult for people outside to actually have even a modicum of understanding about the extent to which national security systems are using AI and how the government is mitigating risks in these systems,” remarked Faiza Patel, who oversees the liberty and national security program at the Brennan Center at New York University School of Law.
Despite potential tensions on both fronts, some analysts remain hopeful about the longevity of this policy framework into 2025.
“I don't think the politics of national security are tricky. I think national security is, and has been, a bipartisan issue,” stated Divyansh Kaushik, vice president at the consulting firm Beacon Global Strategies. “I think that's what is going to be reflected, regardless of whichever administration comes next year.”
Alejandro Jose Martinez for TROIB News