On the heels of Biden's executive order, agencies get White House directive on implementing AI

Vice President Kamala Harris speaks on AI policy at the U.S. Embassy in London, England. Carl Court/Getty Images

Agencies will be on the hook for naming chief AI officers, applying risk management practices like independent evaluations to their AI systems and more.

The White House released draft policy guidance on the government’s use of artificial intelligence Wednesday that it says will help agencies both manage the risks of the technology and harness its benefits, following the release of the administration’s AI executive order on Monday. 

Agencies can expect new requirements to test AI systems, designate chief AI officers and more.

“This memo specifically pertains to how federal agencies responsibly use AI, which is important both because of the fundamental missions in federal agencies and the way that those missions impact society [and] impact individuals,” Jason Miller, deputy director for management at the Office of Management and Budget, told Nextgov/FCW in an interview on the guidance. “It's important because the government as an enterprise can lead by example, both at home and abroad, in the responsible use of AI.”

Vice President Kamala Harris announced the draft guidance Wednesday morning ahead of the Global Summit on AI Safety in London in a speech meant to outline the administration’s vision for AI. 

“We reject the false choice that suggests we can either protect the public or advance innovation. We can — and we must — do both. And we must do so swiftly, as this technology rapidly advances,” her prepared remarks read.

Although the administration has touted the release of this executive order and guidance as a significant step forward, Harris also pointed to Capitol Hill, saying that “one important way to address these challenges — in addition to the work we have already done — is through legislation — legislation that strengthens AI safety without stifling innovation.”

The new policy from OMB Director Shalanda Young is currently in draft form. OMB is taking comments on the proposal through Dec. 5 and aims to finalize the guidance in early 2024, Miller said when asked whether the upcoming election could impact implementation. “In parallel, we’re executing with agencies now,” he said.

“This is not about setting us up for success for tomorrow, this is about setting us up for success now and enabling us to have the right architecture in place as things evolve,” he said, noting that the government already has more than 700 AI use cases.

All agencies outside of the intelligence community would be required to annually comb through their AI inventories and identify high-risk uses that impact rights or safety, such as AI used in the context of hazardous chemicals or in determining access to government benefits or services.

The guidance also includes a list of mandatory risk management practices for those high-impact AI uses, which all agencies outside the intelligence community would need to implement by Aug. 1, 2024. The memo doesn’t apply to national security systems that involve AI.

Agencies will be on the hook for conducting AI impact assessments, testing systems in real-world conditions and independently evaluating AI documentation. They will also be responsible for establishing ongoing monitoring and periodic human reviews, and they will be required to assess and mitigate disparities in performance, use representative data sets and continually monitor for and mitigate AI-enabled discrimination.

The guidance also instructs agencies to notify individuals meaningfully impacted by AI-influenced decisions, such as the denial of benefits, and offer a remedy process, like an option to appeal. The memo encourages agencies to give people the option to opt out “where possible.”

AI systems that don’t meet those requirements and haven’t received a waiver through the process detailed in the draft policy should not be used, it states.

More risk management practices could be on the way for agencies beyond Wednesday’s draft guidance, as the executive order itself directs the National Institute of Standards and Technology to develop guidelines, tools and practices for the risk management of AI in government.

NIST has already released an AI Risk Management Framework, intended to serve as a voluntary resource for organizations of all kinds to evaluate their AI systems. And in October 2022, the White House released a Blueprint for an AI Bill of Rights to establish principles for responsible AI use and development. Miller said that the new guidance “is fully consistent with them and translates them and prioritizes them in actionable ways for agencies.”

Even as the administration seeks to mitigate risks, the guidance also includes requirements for certain large agencies — those covered by the Chief Financial Officers Act — to develop a public strategy for “removing barriers to the use of AI and advancing AI maturity” within a year of the guidance’s release.

“We think AI, if used in a responsible way, can improve efficiency in how the federal government serves people,” said Miller.

Among the potential barriers to doing so is the need for government talent in AI, he said. The executive order includes a push for a new National AI Talent Surge to bring such workers into the federal government. 

The draft guidance does not include hard requirements for the federal contractors from which the government purchases AI systems, although it does offer recommendations for agencies buying such systems, including that they evaluate companies’ claims about their AI products and minimize vendor lock-in. It also teases that “OMB will also develop an initial means to ensure that AI contracts align with” the memo.

The draft also includes a series of to-do items on the governance of AI within agencies, including that agencies designate a chief AI officer. Some agencies have already established such positions independently. 

CFO Act agencies will also have to convene an AI governance board, and all agencies will have to publicly release a plan for aligning with the policy guidance within 180 days, as well as release an expanded version of their existing AI use case inventories annually.

Asked about OMB’s work aligning the budget with these new policies, Miller said “this is always a balancing act,” similar to other priorities like cybersecurity or government talent. It “requires focus and leadership and attention, but I think this is in the realm of what we … should be expected to deliver.”

“The president and vice president's leadership has ensured that our senior leaders across the administration are very focused on it,” he said. “Leadership attention is always the most important in terms of getting things done and we have that in place.”