WASHINGTON: The Biden administration outlined plans on Thursday for the US government to develop and use artificial intelligence to advance national security while managing its risks.
A White House memo directed federal agencies “to improve the security and diversity of chip supply chains … with AI in mind.” It also prioritises the collection of information on other countries’ operations against the US AI sector and passing that intelligence along quickly to AI developers to help keep their products secure.
“We have to get this right, because there is probably no other technology that will be more critical to our national security in the years ahead,” White House national security adviser Jake Sullivan said in remarks at the National Defence University in Washington.
“We have to be faster in deploying AI in our national security enterprise than America’s rivals are in theirs,” he said. “If we don’t deploy AI more quickly and comprehensively to strengthen our national security, we risk squandering our hard-earned lead.”
The effort seeks to balance the need for fair competition and open markets with protecting privacy and human rights, while ensuring that AI systems do not undercut US national security, Sullivan added, even though competitors are not bound by the same principles as the United States.
The directive is the latest move by US President Joe Biden’s administration to address AI as Congress’ efforts to regulate the emerging technology have stalled.
Next month, the administration will convene a global AI safety summit in San Francisco. Biden last year signed an executive order aimed at limiting the risks that AI poses to consumers, workers, minority groups and national security.
Generative AI can create text, photos and videos in response to open-ended prompts, inspiring both excitement over its potential and fears that it could be misused, potentially overpowering humans with catastrophic effects.
The rapidly evolving technology has prompted governments worldwide to seek to regulate the AI industry, which is led by tech giants such as Microsoft-backed OpenAI, Alphabet’s Google and Amazon, and scores of start-ups.
While Thursday’s memo pressed for government use of AI, it also requires US agencies “to monitor, assess, and mitigate AI risks related to invasions of privacy, bias and discrimination, the safety of individuals and groups, and other human rights abuses.”
It also calls for a framework for Washington to work with allies to ensure AI “is developed and used in ways that adhere to international law while protecting human rights and fundamental freedoms.”