7 tech companies agree to White House’s new trustworthy AI commitments

Executives from seven leading AI companies reached an agreement with the Biden administration to incorporate safety practices in the development of their technology.


While the Biden administration works to secure voluntary commitments from Silicon Valley, a planned executive order will codify the next steps in securing the emerging technology.

The Biden administration and several leading artificial intelligence companies have entered into an agreement to prioritize safety in their products, the latest step in President Joe Biden’s effort to bring some level of regulation to emerging technology in the private sector.

In commitments announced Friday, Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI agreed to three goals meant to ensure their AI products meet adequate safety levels before hitting the market.

The three commitments are ensuring a product is safe before it goes to market, incorporating security as an inherent design feature and earning public trust through public reporting on AI products’ use limitations.

“To make the most of AI's potential, the Biden-Harris administration is holding the industry to the highest standards to ensure that innovation doesn't come at the expense of Americans' rights and safety,” a White House official said during a Thursday press call.

A forthcoming executive order will further codify this “critical next step” in managing AI risk, while the administration works domestically to advance bipartisan legislation on AI regulation and engages in international discussions on standards for AI systems.

Much like the recently announced Cyber Labeling Program, the White House’s new agreement will rely on internal and external security validation and audit procedures to confirm whether participating companies are complying.

The commitments emphasize transparency about how AI products and models are intended to be used. In addition to being asked to formally publish their systems’ limitations and liabilities, companies are asked to document potential biases and societal risks associated with using their AI products.

Participating companies will also be asked to invest in cybersecurity best practices and insider threat safeguards. The agreement also requests that AI model weights, the numerical parameters a system learns during training, be secured and evaluated.

The administration also asked AI manufacturers to include watermarking in their systems to reduce deception from generative AI content. The forthcoming watermarking protocol will apply to both audio and visual content to distinguish AI-generated content from user-generated content. The administration official said there is “technical work to be done” on developing the system, but that it will ultimately be “both technically robust and sound, but also easy for users to see.”

The potential watermark would be embedded in the AI software and act as a signature for any content it creates.
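For illustration only, the snippet below is a minimal sketch of that signature idea under an assumed keyed-hash scheme: the generator signs its output so downstream tools can check provenance. The function names, key handling and library choices here are hypothetical and do not reflect the yet-to-be-developed watermarking protocol described by the administration.

```python
import hashlib
import hmac

# Hypothetical secret held by the AI provider; not part of any announced standard.
PROVIDER_KEY = b"example-provider-signing-key"

def sign_content(content: bytes) -> str:
    # Compute a keyed hash of the generated content so its origin can be checked later.
    return hmac.new(PROVIDER_KEY, content, hashlib.sha256).hexdigest()

def verify_content(content: bytes, signature: str) -> bool:
    # Recompute the signature and compare; a mismatch means the content was altered
    # or was not produced by this provider's system.
    expected = hmac.new(PROVIDER_KEY, content, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

if __name__ == "__main__":
    generated = b"An AI-generated caption."
    tag = sign_content(generated)
    print(verify_content(generated, tag))  # True for untampered content
```

A real watermarking scheme would embed the mark in the content itself rather than alongside it, but the verification flow is analogous.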

Despite the thoroughness of the new agreements, the White House is still pursuing more formal legislation to put binding regulations in place.

“I think we know that legislation is going to be critical to establish the legal and regulatory regime to make sure these technologies are safe,” the administration official said. “We're already engaged directly with members of Congress of both parties on these issues.”

Biden has been working to engage tech companies amid the rapid growth of generative AI software like ChatGPT, inviting several leading CEOs to the White House back in May to open talks on how to create safe AI systems. 

“We recognize that this is the next step that we are taking, but certainly not the last step,” the official said.

Industry trade groups like The Software Alliance, also known as BSA, reacted positively to the news, noting that generative AI systems hinge on public confidence.

“Today's announcement can help to form some of the architecture for regulatory guardrails around AI, along with proposals advanced by BSA to set rules for high-risk AI systems that involve concerns around bias and discrimination,” a spokesperson for The Software Alliance said in a statement. “Enterprise software companies look forward to working with the administration and Congress to enact legislation that addresses the risks associated with artificial intelligence and promote its benefits.”
