Newsom Orders Independent AI Risk Assessment for State Contracts

California Governor Gavin Newsom has directed state government agencies to independently assess the risks associated with artificial intelligence (AI) companies, particularly those flagged by the federal government. This executive order, signed on Monday, comes after a dispute between AI startup Anthropic and the Department of Defense.

Federal Designation and California's Response

The Department of Defense recently designated San Francisco-based AI tools maker Anthropic as a supply chain risk. The designation effectively limited Anthropic’s ability to compete for certain military contracts and subcontracts; a judge has since issued a temporary injunction in the dispute. Governor Newsom’s order stipulates that California will review such federal designations and make its own determination about whether to do business with the companies involved.

Dispute Over Contract Clauses

The conflict between Anthropic and the Defense Department stemmed from contract clauses prohibiting the military from using Anthropic’s systems for domestic mass surveillance and fully autonomous weaponry. Newsom’s action signals that California intends to take the lead in assessing the risks posed by AI companies itself, rather than deferring to federal determinations.

Balancing Innovation and Safeguards

The broader aim of Newsom’s order is to establish safeguards for AI use by state employees while simultaneously encouraging the adoption of the technology. California is a leading hub for AI companies and regulations, making its approach particularly significant.

Key Mandates for State Agencies

The executive order requires state agencies to:

  • Develop recommendations for contract standards addressing AI’s potential to generate child sexual abuse material, violate civil liberties, and infringe upon legal protections.
  • Provide employees with access to “vetted GenAI tools.”
  • Update the State Digital Strategy to leverage generative AI for improved transparency, accountability, and accessibility of government services.
  • Develop generative AI solutions to enhance access to government services for Californians.
  • Issue guidance on watermarking AI-generated imagery and videos.

Current AI Initiatives in California

These mandates coincide with ongoing AI initiatives across more than 20 California departments and agencies, including the development of Poppy, a generative AI assistant for state employees. Several agencies are also testing AI applications to assist state employees, address homelessness, and support businesses. State courts and city governments are also increasing their use of the technology.

Federal Approach and California's Divergence

Newsom’s office contrasted California’s approach with that of the Trump administration, which it accused of rolling back protections and disregarding potential AI harms. The White House recently introduced an AI policy framework that favors a light-touch regulatory approach and omits specific provisions addressing bias, discrimination, or civil rights.

Previous Executive Order

This is Newsom’s second executive order addressing AI. A 2023 order focused specifically on generative AI, calling for increased adoption alongside the implementation of safety measures.

Political Considerations

Newsom’s handling of AI issues is being closely watched by union leaders, who seek stronger worker protections, and by technology donors who wield influence in California politics. The governor’s potential presidential ambitions add further weight to his stance on AI regulation.