Key Takeaways
- OpenAI runs its AI software on Nvidia’s graphics processing units, or GPUs.
- Nvidia’s gross profit margins on its AI chips have been estimated at about 80%, a premium sometimes referred to as the “Nvidia tax.”
- Google began investing in its own tensor processing units, or TPUs, in 2015, so it doesn’t have to buy hardware from another company.
- Analysts estimate that Google can operate its TPU capabilities for approximately 20% less than entities that rely on Nvidia GPUs.
Artificial intelligence is sweeping the modern world and has already revolutionized how we do business. This technology has opened the door for a high-stakes arms race among the leading companies in the industry.
In April 2025, OpenAI introduced its GPT-4.1 series and its o3 and o4-mini reasoning models. Soon after, in an almost immediate response, Google (GOOG) released an upgraded version of its Gemini 2.5 Pro model. These moves sent other companies in the AI space scrambling to choose the right platform for their needs. Here’s how the two giants are changing the shape of the AI industry.
The “Nvidia Tax” That’s Crushing OpenAI’s Profit Margins
The competition for AI dominance often comes down to differences in hardware.
OpenAI’s software runs on Nvidia’s (NVDA) graphics processing units, or GPUs. They’re top-notch, but expensive to buy and operate. OpenAI purchases its GPUs from Nvidia, whose gross profit margins on this technology have been estimated at a staggering 80%. The premium that OpenAI pays for Nvidia chips is aptly referred to as the “Nvidia tax.”
“Buyers pay many times their [chips’] manufacturing cost,” explains Chad Cummings, a tax attorney and CPA whose legal practice is devoted to assisting seed and Series A start-ups in the AI space. “A chip costing a few thousand dollars to produce often sells for over $20,000. This steep markup functions like a tax on each unit of computation for firms like OpenAI, inflating their cost of goods sold and squeezing margins.”
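The markup arithmetic behind the “Nvidia tax” can be sketched with illustrative numbers. The $20,000 selling price comes from Cummings’s quote above; the $4,000 production cost is an assumption chosen to be consistent with “a few thousand dollars” and the article’s roughly 80% gross-margin estimate:

```python
# Illustrative "Nvidia tax" arithmetic. The production cost is a
# hypothetical figure; the selling price echoes the quote above.
production_cost = 4_000   # assumed cost to produce one chip, USD
selling_price = 20_000    # "often sells for over $20,000"

gross_margin = (selling_price - production_cost) / selling_price
markup = selling_price / production_cost

print(f"Gross margin: {gross_margin:.0%}")         # 80%
print(f"Markup: {markup:.0f}x production cost")    # 5x
```

With these assumed inputs, the buyer pays five times the manufacturing cost, matching the roughly 80% gross margin cited in the article.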
Google has avoided this expense by designing its own tensor processing units, or TPUs. It began investing in TPUs in 2015, sidestepping the markup on GPU hardware.
OpenAI began diversifying away from Nvidia in June 2025, following Google’s lead. It now uses Google’s TPUs to help power ChatGPT, dodging a portion of the Nvidia tax and reducing its dependence on Nvidia.
Important
Google doesn’t share all of its technology with its competitors. It limits OpenAI’s access to its most advanced chips.
How Google’s TPU Strategy Bypasses the GPU Bottleneck
There’s no simple comparison between Google’s TPUs and Nvidia’s GPUs because they are very different technologies. For deep learning workloads, Google’s TPUs have reportedly delivered processing speeds 15 to 30 times faster than GPUs, along with 30 to 80 times more performance per watt.
GPUs, however, have the edge when it comes to versatility, and they are reportedly better suited for speech and image recognition.
Can OpenAI’s Cost Structure Compete With Google’s Vertical Integration?
Ultimately, success in the AI race will come down to dollars and cents.
“OpenAI faces high ongoing costs,” notes Cummings, “and a large share of its income goes to outside suppliers as its AI usage grows, limiting its efficiency gains. OpenAI is trying to blunt this disadvantage by pursuing big data center projects with partners and even exploring its own chip design, but these efforts require significant capital and time.”
A vertically integrated supply chain is more efficient than coordinating between separate suppliers. By producing its own processors, Google is less reliant on outside vendors and effectively dodges the Nvidia tax.
Analysts estimate that Google can operate its TPU infrastructure for approximately 20% less than companies that pay Nvidia for processing power. Nvidia’s fastest GPUs reportedly earn gross margins of up to 90%.
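The impact of that estimated 20% cost advantage grows with scale. A minimal sketch, using a hypothetical $100 million annual compute bill for a GPU-based operator (the dollar figure is an assumption for illustration; only the 20% savings rate comes from the analysts’ estimate above):

```python
# Hypothetical annual compute spend for an operator buying Nvidia GPU
# capacity. The $100M figure is illustrative, not from the article.
gpu_based_cost = 100_000_000  # USD per year, assumed

# Analysts' estimate: comparable TPU capacity runs ~20% cheaper.
tpu_cost = gpu_based_cost * (1 - 0.20)
savings = gpu_based_cost - tpu_cost

print(f"TPU-based cost: ${tpu_cost:,.0f}")   # $80,000,000
print(f"Annual savings: ${savings:,.0f}")    # $20,000,000
```

At this assumed scale, the gap amounts to $20 million a year that a vertically integrated operator keeps rather than paying out to a chip supplier.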
The Bottom Line
While OpenAI is at a momentary disadvantage, it isn’t doomed in the ever-changing AI landscape. The company has set a goal of bringing in $125 billion in revenue in 2029, a sizable increase from its $13 billion target for 2025. For consumers and companies that use AI, the best approach may be to wait and see.
