This week, Silicon Valley leaders, including White House AI & Crypto Czar David Sacks and OpenAI Chief Strategy Officer Jason Kwon, sparked controversy with their comments about AI safety groups. Both suggested that some advocates may not be acting purely in the public interest, but rather on behalf of themselves or wealthy backers.
Silicon Valley Criticizes AI Safety Advocates
AI safety organizations told TechCrunch that these remarks feel like an attempt to intimidate critics. This is not the first such episode: in 2024, rumors circulated that a California AI safety bill, SB 1047, could send startup founders to jail. Though the bill was eventually vetoed, the controversy left many advocates worried, and several nonprofit leaders asked to stay anonymous to avoid retaliation.
The debate highlights a growing tension in Silicon Valley: building AI responsibly versus building it quickly for profit. David Sacks accused the AI lab Anthropic of fearmongering to pass laws that benefit itself. Anthropic had supported California's SB 53, which sets safety requirements for large AI companies.
Tensions Between AI Growth and Safety
Meanwhile, OpenAI sent subpoenas to AI safety nonprofits such as Encode, seeking documents about their support for SB 53 and their opposition to OpenAI's restructuring. OpenAI said it was checking for possible coordination among its critics, while some experts said the subpoenas looked like an effort to silence them.
At the same time, Americans' worries about AI center on job losses and deepfakes rather than catastrophic risks. That gap illustrates the movement's core challenge: protecting people from concrete harms while keeping the AI industry growing. Despite the pushback, the AI safety movement is gaining momentum as 2026 approaches, a sign that it is starting to make a real impact.