A small Chinese startup just forced America’s biggest tech companies to rethink how they build artificial intelligence.
DeepSeek’s release of its R1 model, which reportedly matches or exceeds the capabilities of U.S.-built AI systems at a fraction of the cost, triggered a massive sell-off in tech stocks that erased nearly $600 billion from Nvidia’s market value alone.
The shockwaves hit the U.S. tech sector hard, with industry leaders scrambling to figure out how DeepSeek achieved such results.
Though open questions remain, the consensus after analysts dug into the open-source code is, for now, that Chinese developers are simply better at building efficient models. The tech titans of AI, meanwhile, put on their smiley faces and looked at the bright side, embracing the notion that any advance in AI is good for the industry.
OpenAI’s Sam Altman acknowledged the model’s impressive performance while promising to accelerate the release of “better models.”
look forward to bringing you all AGI and beyond.
— Sam Altman (@sama) January 28, 2025
Meta’s Mark Zuckerberg said his company had assembled multiple “war rooms” filled with engineers bent on analyzing DeepSeek’s technology and strategizing Meta’s response.
Meanwhile, President Donald Trump, never one to miss a news cycle, characterized DeepSeek’s breakthrough as both a “wake-up call” and a “positive” development for U.S. technology “because you don’t have to spend this much money.”
The Post-DeepSeek Era
OK, so let’s set aside what they’re saying and consider how they will most likely respond to the DeepSeek breakthrough.
It turns out that several big closed-source players are already sneaking DeepSeek’s methods into their playbooks—they just won’t make headlines about borrowing from the competition.
For instance, Perplexity has already integrated the model into its search engine, and Groq has made it available with record-fast inference speeds.
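For developers who want to try one of those hosted versions, here’s a minimal sketch of a request to Groq using the standard openai Python client. The base URL and the DeepSeek R1 distill model identifier are assumptions that may differ from what Groq currently lists, so treat them as placeholders.

```python
# Hedged sketch: querying a DeepSeek R1 distill hosted on Groq through an
# OpenAI-compatible endpoint. Base URL and model id are assumptions here;
# check Groq's own model list before relying on them.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_GROQ_API_KEY",               # placeholder credential
    base_url="https://api.groq.com/openai/v1", # assumed OpenAI-compatible endpoint
)

response = client.chat.completions.create(
    model="deepseek-r1-distill-llama-70b",     # assumed model identifier
    messages=[{"role": "user", "content": "Summarize why efficient models matter."}],
)
print(response.choices[0].message.content)
```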
Most of the big names in the American AI scene, including Meta, are either adapting to DeepSeek or thinking about ways to take advantage of its technology.
As the initial market panic subsides—Nvidia stock rebounded 9% today—technology leaders point to a counterintuitive economic principle suggesting that DeepSeek’s efficiency breakthrough might boost demand for AI hardware.
Known as Jevons’ Paradox, this concept explains why technological efficiency tends to expand usage rather than decrease consumption.
“As AI gets more efficient and accessible, we will see its use skyrocket, turning it into a commodity we just can’t get enough of,” said Satya Nadella, CEO of Microsoft, OpenAI’s largest investor.
Despite suffering Wall Street’s most significant single-day drop in market cap, Nvidia sees DeepSeek’s breakthrough as an opportunity.
“The pie just got much bigger, faster,” Nvidia Chief Researcher Jim Fan tweeted Monday. “We, as one humanity, are marching towards universal AGI sooner.”
An obvious, “we are so back” moment in the AI circle somehow turned into “it’s so over” in mainstream.
> unbelievable shortsightedness
> the power of o1 in the palm of every coder’s hand to study, explore, and iterate upon
> ideas compound
> the rate of compounding accelerates…
— Jim Fan (@DrJimFan) January 27, 2025
In other words, if Jevons’ paradox applies, DeepSeek’s demonstration that high-quality AI models can be built with minimal computational resources doesn’t mean we’ll use fewer GPUs overall. Instead, the big guys will get bigger.
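To make the paradox concrete, here is a toy back-of-the-envelope sketch with purely illustrative numbers (the 10x and 30x figures are assumptions for the example, not projections): when efficiency drives per-query costs down and usage up by a larger factor, aggregate compute demand still rises.

```python
# Toy illustration of Jevons' Paradox with made-up numbers (not forecasts):
# if efficiency gains cut the compute cost per query by 10x, but cheaper AI
# pulls in 30x more queries, total compute demand still triples.
compute_per_query_before = 1.0          # arbitrary GPU-hours per query
queries_before = 1_000_000

compute_per_query_after = compute_per_query_before / 10   # 10x efficiency gain
queries_after = queries_before * 30                        # usage explodes

total_before = compute_per_query_before * queries_before
total_after = compute_per_query_after * queries_after

print(total_after / total_before)  # 3.0 -> aggregate demand grows despite efficiency
```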
At the other end of the spectrum, as the barrier to entry drops, a surge of new developers and companies will jump into AI development.
The explosion in total projects will likely drive compute and chip demand to unprecedented levels. Of course, for AI, not all chips are alike, and the market has apparently decided that Apple silicon might have a leg up on Nvidia chips in this new world.
That’s why AAPL shot up 8% this week, despite its consumer-grade “Apple Intelligence” being derided as an oxymoron.
The argument is that Apple chips are more energy efficient, designed for localized use versus the big server farms that use Nvidia chips, and feature a “unified memory architecture,” meaning the CPU, GPU, and Neural Engine share a single pool of ultra-fast memory.
This eliminates the need to shuttle data between separate components, reducing latency and increasing efficiency for AI workloads. For models like DeepSeek’s, which rely on fast memory access for complex operations, UMA can supposedly deliver a significant performance boost.
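As a rough illustration of what unified memory means in code, here’s a minimal sketch using PyTorch’s MPS backend on Apple silicon; the tensors below are stand-ins rather than DeepSeek’s actual model, and the point is simply that CPU and GPU share one memory pool, so there’s no PCIe-style copy to pay for.

```python
# Minimal sketch: a toy workload on Apple silicon's unified memory via
# PyTorch's MPS backend. The "model" is a stand-in matrix multiply, not
# DeepSeek R1; the point is that CPU and GPU address the same memory pool.
import torch

# Use the Metal (MPS) device if available, otherwise fall back to CPU.
device = torch.device("mps") if torch.backends.mps.is_available() else torch.device("cpu")

weights = torch.randn(4096, 4096, device=device)   # stand-in model weights
activations = torch.randn(1, 4096, device=device)  # stand-in input

# On a discrete GPU, moving data to the device implies a copy over PCIe;
# with unified memory there is one shared pool, so that transfer cost
# largely disappears for workloads like this.
output = activations @ weights
print(output.shape)
```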
Clearly, in the throes of the Innovator’s Dilemma, Nvidia is unlikely to change its strategy: it remains the dominant supplier of AI hardware thanks to its proprietary CUDA platform, the key to running and developing most of the AI models currently available.
DeepSeek doesn’t challenge that dominance, but China is working on it, pushing adoption of Huawei’s Ascend lineup of chips.
As it stands, Microsoft doesn’t seem too worried about changing its business strategy as an infrastructure provider.
However, OpenAI did make one small move to meet users’ expectations and retain customers, giving Plus subscribers (those paying $20 a month) some features that were previously available only to Pro subscribers (those paying $200 a month).
ok we heard y’all.
*plus tier will get 100 o3-mini queries per DAY (!)
*we will bring operator to plus tier as soon as we can
*our next agent will launch with availability in the plus tier
enjoy 😊 https://t.co/w8sFsq6mI1
— Sam Altman (@sama) January 25, 2025
Another company with a lot of skin in the game is Meta, developer of Llama, the world’s largest and most popular family of open-source LLMs.
Meta has already committed to investing $65 billion in AI infrastructure this year.
The company’s chief AI scientist, Yann LeCun, also looked at the bright side of getting pantsed by a tiny startup in China: “To people who see the performance of DeepSeek and think: ‘China is surpassing the US in AI.’
“You are reading this wrong; the correct reading is: ‘Open source models are surpassing proprietary ones,’” LeCun posted on LinkedIn.
Don’t be surprised if Meta adopts DeepSeek’s methods to enhance Llama-4: “Because their work is published and open source, everyone can profit from it—that is the power of open research and open source,” LeCun wrote.
During its Q4 earnings call, CEO Mark Zuckerberg said Meta plans to put ten times more computing power behind training Llama-4 than it used for Llama-3.
The company could either cut its spending and apply DeepSeek’s techniques, or maintain that spending, apply those techniques anyway, and end up with an even more powerful model.
The Future of AI Might Not Depend on Who Has the Better AI
No matter how brilliant DeepSeek’s inference model is, in the end, AI still has a voracious appetite for two things: power (server farms) and data (to train and learn on).
Industry analysts project the demand for GPUs will spike 30% this year, and global AI computing costs could grow 10X in the next five years.
How those costs get passed on to businesses and consumers is still an open question.
In the meantime, open-source AI models such as DeepSeek’s are getting so good that people are questioning whether the premium prices charged by proprietary-model companies are justified.
Who wants to pay $20 a month for OpenAI’s consumer-grade offering—let alone $200 a month for its high-end tier—when comparable open models are free?
“More companies are building open-source alternatives to premium AI tools, creating competition that benefits [small and medium-sized enterprises],” Karan Sirdesai, CEO & Co-Founder of Mira, a decentralized network of AI models, told Decrypt. “This natural evolution toward accessible solutions mirrors how other technologies have become democratized through market dynamics rather than regulation.”
For Sirdesai, models like DeepSeek’s and other open-source initiatives push the industry forward by giving developers the tools to carve out a position in markets that otherwise look destined to be dominated by a handful of massive corporations.
It turns out, however, that “decentralized infrastructure and open-source development are already creating competitive alternatives to premium AI tools,” he said.
Atul Arya, CEO and founder of Blackstraw.ai, which develops AI implementation strategies for different businesses, said the larger benefit of open-source AI is that it will help the world avoid a potential gap between the AI-haves and the AI-have-nots.
“The difference between free and paid versions typically centers on speed and scale, rather than fundamental capabilities, ensuring that core functionality remains accessible to the broader public,” he told Decrypt.
Arya believes open-source developments like DeepSeek’s help level the playing field and create fairer conditions in a market as wild as the AI industry.
“The true driver of democratized access is the open-source community, which is rapidly catching up,” he said.
Edited by Sebastian Sinclair and Josh Quittner