As concerns over how artificial intelligence (AI) is developed and put to use in the real world continue to mount, more technology companies are taking steps to manage the risks posed by this advanced technology. Defined as the simulation of human intelligence processes by computer systems, AI is being used in expert systems, natural language processing (NLP), speech recognition and machine vision, among other applications.
With AI being infused into applications like ChatGPT and Google’s Bard, more attention is being paid to the risks associated with the technology. The Biden-Harris Administration is taking steps to “advance responsible AI,” for example, and as part of that mission has secured voluntary commitments from multiple companies that have promised to “help advance the development of safe, secure and trustworthy AI.”
To date, 15 companies—including the latest additions of Adobe, Cohere, IBM, Nvidia, Palantir, Salesforce, Scale AI and Stability—have thrown their hats into the “responsible AI development” ring. The initial seven, which signed on in July, were Google, Microsoft, OpenAI, Amazon, Anthropic, Inflection and Meta; at that point, they agreed to the set of voluntary commitments for developing AI technology, CNBC reports.
“These commitments represent an important bridge to government action, and are just one part of the Biden-Harris Administration’s comprehensive approach to seizing the promise and managing the risks of AI,” the White House says. “The Administration is developing an executive order and will continue to pursue bipartisan legislation to help America lead the way in responsible AI development.”
What are the Commitments?
The commitments underscore three fundamental principles: safety, security and trust. “As the pace of innovation continues to accelerate, the Biden-Harris Administration will continue to take decisive action to keep Americans safe and protect their rights,” the White House says. The companies that have pledged to keep AI safe and secure have committed to:
- Ensuring products are safe before introducing them to the public, through internal and external security testing of their AI systems prior to release.
- Building systems that put security first by investing in cybersecurity and insider threat safeguards to protect proprietary and unreleased model weights.
- Earning the public’s trust by developing robust technical mechanisms to ensure that users know when content is AI-generated (such as a watermarking system); publicly reporting their AI systems’ capabilities, limitations and areas of appropriate and inappropriate use; and prioritizing research on the societal risks that AI systems can pose.
Grappling with AI has Become Paramount
One company that recently signed onto the effort is Scale, whose website states that the company is “on a mission to accelerate the development of AI applications.” In a September blog post, the company said the Biden-Harris Administration’s voluntary commitments are “critical to the future of AI.”
“The reality is that progress in frontier model capabilities must happen alongside progress in model evaluation and safety. This is not only the right thing to do, but practical,” the company added. The Scale team also commended the White House’s leadership in bringing together companies that are shaping the future of responsible AI development.
“America’s continued technological leadership hinges on our ability to build and embrace the most cutting-edge AI across all sectors of our economy and government,” Scale said. “This can only be achieved with collaboration between industry and government to create a broad understanding of the possibilities and limitations of the technology, so the benefits can be maximized and risks minimized.”
The New York Times says grappling with AI has become paramount since OpenAI released the powerful ChatGPT chatbot last year. “The technology has since been under scrutiny for affecting people’s jobs, spreading misinformation and potentially developing its own intelligence,” the publication adds. “As a result, lawmakers and regulators in Washington have increasingly debated about how to handle AI.”