If you’ve spent any time in the tech press over the last six months, you’ve probably seen the headlines. “AI could kill us all.” “The risk of extinction is real.” “We need to pause development.” These warnings, often delivered with the gravitas of a late-night public service announcement, have become a staple of the industry’s public relations diet. The man leading the charge? Dario Amodei, CEO of Anthropic, who has made a second career out of predicting the worst-case scenario for the very technology he is building.
But there is another voice in the room, and it isn’t whispering. It’s Jensen Huang, the leather-jacket-clad CEO of Nvidia. And lately, he’s had enough of the doomsday rhetoric. In a series of recent interviews and public appearances, Huang has made it abundantly clear that he is “so over” the dire predictions coming from his fellow AI leaders. He isn’t just disagreeing with them; he’s rolling his eyes in a way that only a man who has seen three decades of tech cycles can.
Fear sells, but compute builds
The fundamental clash between Huang and Amodei isn’t about whether AI is powerful. That is a given. The disagreement is about *what* that power actually means. Amodei, a former OpenAI executive, has a philosophical bent. He talks about “safety” the way a physicist talks about entropy—as an inevitable force that must be contained. He has warned about AI systems that could be used to create bioweapons or destabilize democratic institutions. He has called for more government regulation and a slower pace of development.
Jensen Huang, on the other hand, is an engineer who built the shovels for the gold rush. He runs a company worth trillions of dollars because he believes the only path forward is more compute, more data, and more acceleration. When he hears Amodei talk about existential risk, he hears a theoretical physicist describing a black hole that hasn’t formed yet. Huang’s response is simple: “Show me the data. Show me the actual harm. Right now, all I see is a tool that makes people more productive, more creative, and more capable.”
During a recent Q&A at Nvidia’s GTC conference, a reporter asked Huang about the “doomer” narrative. He didn’t mince words. “Look, I respect Dario. He is very smart. But the predictions of doom? We have been hearing that for decades. When the iPhone came out, people said it would rot our brains. When the internet went public, people said it would destroy newspapers. And it did, in some ways. But we adapted. We are still here. AI is no different.”
The real problem is not the AI, it’s the humans
Huang’s frustration seems to stem from a specific place: the hypocrisy of the “safety first” crowd. He points out that the same people warning about the end of the world are also the ones desperately trying to build the most advanced models. If AI is truly an existential threat, why are you racing to build a more powerful one? “It feels like a marketing strategy,” one Nvidia insider told me, speaking on condition of anonymity. “If you build the biggest, most dangerous model, you get the most attention. Then you warn everyone about it. It’s a great way to set the agenda.”
Huang doesn’t say this directly, but his body language suggests it. He has been in the tech industry since the 1990s. He saw the dot-com crash. He saw the rise of cloud computing. He knows that every new technology is greeted with hysteria. He also knows that the people who panic usually lose. The people who build, win.
Amodei’s camp would argue that this is a dangerous oversimplification. They would say that AI is different because it is the first technology that can improve itself. They would point to the “alignment problem” as a legitimate scientific challenge. And they are not wrong. There are real, unresolved issues with how we ensure AI systems do what we want, especially as they become more autonomous.
But Huang’s point is that we cannot solve those problems in a vacuum. We cannot build safe systems if we do not build systems at all. He argues that the best way to make AI safe is to make it ubiquitous, to embed it into every layer of society, and then let the regulatory and social frameworks catch up. “You don’t learn to swim by sitting on the edge of the pool,” he said in a recent interview. “You get in the water.”
Investors are choosing Huang over Amodei
The market, for now, is firmly on Huang’s side. Nvidia’s stock has skyrocketed, while Anthropic, despite raising billions, is still a private company struggling to find a sustainable business model beyond selling API access. Investors are betting that the future belongs to the infrastructure providers, not the alarmists. They are betting that the world will find a way to use AI productively before it finds a way to destroy itself.
This creates an uncomfortable tension. On the one hand, you have a trillion-dollar company that profits directly from the acceleration of AI. On the other, you have a CEO who says, “We need to slow down.” It is hard to take the warning seriously when the chips Nvidia builds are precisely what make Anthropic’s models possible. If Dario Amodei truly believed that AI was an existential threat, he would be lobbying to shut down Nvidia, not building models that run on Nvidia chips.
Huang knows this. And he is tired of the game. He wants to talk about the next wave of AI—the “physical AI” that will drive robots, automate factories, and transform healthcare. He wants to talk about the $100 billion investment in data centers. He does not want to spend another hour debating whether a chatbot is going to start a nuclear war.
So, what is the takeaway? Jensen Huang is not an idiot. He is not a reckless optimist. He is a realist who has seen this movie before. He believes that the best defense against the dangers of technology is more technology, better technology, and faster technology. He believes that human ingenuity will outpace human fear. And he is betting his entire company on it.
Dario Amodei might be right. We might be building a monster. But Jensen Huang is building the cage. And he is betting that the cage will hold.
Ahmed Abed – News journalist