When Dario Amodei Gets Excited About AI
The Visionary Behind Anthropic
When Dario Amodei gets excited about AI—which is nearly always—he moves. The cofounder and CEO springs from a seat in a conference room and darts over to a whiteboard. He scrawls charts with swooping hockey-stick curves that show how machine intelligence is bending toward the infinite. His hand rises to his curly mop of hair, as if he’s caressing his neurons to forestall a system crash. You can almost feel his bones vibrate as he explains how his company, Anthropic, is unlike other AI model builders. He’s trying to create an artificial general intelligence—or as he calls it, "powerful AI"—that will never go rogue. It’ll be a good guy, an usher of utopia. And while Amodei is vital to Anthropic, he comes in second to the company’s most important contributor. Like other extraordinary beings (Beyoncé, Cher, Pelé), the latter goes by a single name, in this case a pedestrian one, reflecting its pliancy and comity. Oh, and it’s an AI model. Hi, Claude!
The Rise of Claude
Amodei has just gotten back from Davos, where he fanned the flames at fireside chats by declaring that in two or so years Claude and its peers will surpass people in every cognitive task. Hardly recovered from the trip, he and Claude are now dealing with an unexpected crisis. A Chinese company called DeepSeek has just released a state-of-the-art large language model that it purportedly built for a fraction of what companies like Google, OpenAI, and Anthropic spent. The current paradigm of cutting-edge AI, which demands multibillion-dollar expenditures on hardware and energy, suddenly seems shaky.
The Big Blob of Compute
Amodei is perhaps the person most associated with these companies’ maximalist approach. Back when he worked at OpenAI, Amodei wrote an internal paper on something he’d mulled for years: a hypothesis called the Big Blob of Compute. AI architects knew, of course, that the more data you had, the more powerful your models could be. Amodei proposed that the data could be rawer than they assumed; if they fed megatons of the stuff to their models, they could hasten the arrival of powerful AI. The theory is now standard practice, and it’s the reason the leading models are so expensive to build. Only a few deep-pocketed companies can compete.
The Value of Efficient Models
Now a newcomer, DeepSeek—from a country subject to export controls on the most powerful chips—has waltzed in without a big blob. If powerful AI can come from anywhere, maybe Anthropic and its peers are computational emperors with no moats. But Amodei makes it clear that DeepSeek isn’t keeping him up at night. He rejects the idea that more efficient models will enable low-budget competitors to jump to the front of the line. "It’s just the opposite!" he says. "The value of what you’re making goes up. If you’re getting more intelligence per dollar, you might want to spend even more dollars on intelligence!" Far more important than saving money, he argues, is getting to the AGI finish line. That’s why, even after DeepSeek, companies like OpenAI and Microsoft announced plans to spend hundreds of billions of dollars more on data centers and power plants.
The Quest for AGI
What Amodei does obsess over is how humans can reach AGI safely. It’s a question so hairy that it compelled him and Anthropic’s six other founders to leave OpenAI in the first place, because they felt it couldn’t be solved with CEO Sam Altman at the helm. At Anthropic, they’re in a sprint to set global standards for all future AI models, so that they actually help humans instead of, one way or another, blowing them up. The team hopes to prove that it can build an AGI so safe, so ethical, and so effective that its competitors see the wisdom in following suit. Amodei calls this the Race to the Top.
Can Claude Be Trusted?
That’s where Claude comes in. Hang around the Anthropic office and you’ll soon observe that the mission would be impossible without it. You never run into Claude in the café, seated in the conference room, or riding the elevator to one of the company’s 10 floors. But Claude is everywhere and has been since the early days, when Anthropic engineers first trained it, raised it, and then used it to produce better Claudes. If Amodei’s dream comes true, Claude will be both our wing model and fairy godmodel as we enter an age of abundance. But here’s a trippy question, suggested by the company’s own research: Can Claude itself be trusted to play nice?
Conclusion
As Amodei and his team at Anthropic push the boundaries of AI, they face a critical challenge: ensuring that their creation is safe and beneficial for humanity. The path forward is full of uncertainties, and whether Claude will live up to their hopes remains an open question. But they are betting that building a safe, ethical AGI first is the surest route to the future they envision.
FAQs
Q: What is the Big Blob of Compute?
A: The Big Blob of Compute is a hypothesis proposed by Dario Amodei suggesting that feeding massive quantities of relatively raw data to AI models, backed by enormous computing power, can hasten the arrival of powerful AI.
Q: What is the Race to the Top?
A: The Race to the Top is a sprint by Anthropic to set global standards for all future AI models, ensuring they actually help humans instead of harming them.
Q: Can Claude be trusted to play nice?
A: Not definitively. Anthropic’s own research raises the question of whether Claude can be trusted to play nice, and the company is still working to show that its model won’t itself pose a threat to humanity.

