Khaberni - The AI revolution marked a turning point in Nvidia's business journey, transforming it from a company that merely manufactured graphics cards for gaming and other platforms, and occasionally for mining, into one of the world's largest tech companies and a member of the trillion-dollar club.
However, it seems that AI processors, as much as they contributed to Nvidia's prosperity, may also threaten its dominance, judging by the shift in the company's value after Google's announcement of its AI processing units.
Since rumors about Google's "TPU" AI processors spread, Nvidia has lost more than 4% of its value, while Alphabet, Google's parent company, has surged 12%.
But why such an immediate effect from a mere rumor, before the processors have even appeared or been officially launched on the market? And could Nvidia's dominance in this sector end at Google's gates?
Why did all this happen?
In recent days, Google launched the new generation of its AI model "Gemini 3.0" with advanced features that surpass many competing models.
However, Google's unveiling of the new model was unusual: while the company focused on describing the model's capabilities, it only briefly mentioned that it had trained the model on its own processors, the "TPU" chips Google had revealed earlier in April of this year.
A few days later, The Information published a report on Meta's intention to rely on Google's "TPU" processors to train its AI models, a move that challenges Nvidia's dominance in this sector.
It is worth noting that Meta is not the first company to use Google's "TPU" processors: according to a separate report by Barron's, Apple and Anthropic have done so previously.
What is the difference between Nvidia's processors and Google's new processors?
Both types of processor handle large and diverse datasets with ease and efficiency, thanks to their specialized designs. However, each handles data in a different way.
While Nvidia's processors rely on thousands of computing cores to process large amounts of data in parallel at the same time, Google's processors work sequentially: rather than processing all the data at once, they pass it through connected segments.
This difference stems from the core design of each processor. Nvidia's processors, known as GPUs, were built specifically to process images, video, and animated scenes, which requires them to handle large amounts of data simultaneously.
Google's "TPU" processors, short for Tensor Processing Units, are designed specifically for AI workloads, so they do not consume as much power or computing capacity as Nvidia's graphics processors.
Google's processors lack the high adaptability of Nvidia's chips, and they are less flexible and programmable than Nvidia's leading processors. On the other hand, they are cheaper and consume less energy than their Nvidia equivalents.
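The contrast described above can be sketched in a few lines of code. This is a rough conceptual illustration only, not a model of the real hardware: both styles compute the same multiply-accumulate result, but the "GPU style" conceptually performs all multiplications at once before summing, while the "TPU style" streams data through a chain of accumulating stages, the way a systolic array does.

```python
# Conceptual sketch (not real hardware). All names here are illustrative.

def gpu_style_dot(row, vec):
    # GPU style: conceptually, every multiplication happens at once
    # across many cores, and the partial products are then summed.
    products = [a * b for a, b in zip(row, vec)]  # "parallel" multiplies
    return sum(products)

def tpu_style_dot(row, vec):
    # TPU (systolic) style: data flows through connected
    # multiply-accumulate units, one segment at a time.
    acc = 0
    for a, b in zip(row, vec):
        acc += a * b  # one multiply-accumulate step per segment
    return acc

row = [1, 2, 3]
vec = [4, 5, 6]
print(gpu_style_dot(row, vec))  # 32
print(tpu_style_dot(row, vec))  # 32
```

Both approaches produce identical results; the difference lies in how the work is laid out across the silicon, which is what drives the power and cost differences described above.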
When did TPU processors first appear?
These processors first appeared in 2013 and were brought to market in 2015, when the company used them to run its search-engine algorithms, making the engine faster and more efficient. In 2018, the company began deploying them in its cloud servers.
Google then made its leading TPU processors available to any user with access to its cloud services. Later, its DeepMind unit used these chips to train AI models.
The new generation of TPU chips is named "Ironwood", a liquid-cooled processor designed specifically for AI workloads and related tasks, available in two configurations: a pod containing 256 chips or a pod containing 9,216 chips.
What is the risk to Nvidia from these chips?
Currently, the cost of building AI data centers, which handle both training and running models, is the largest expense in the AI sector.
This cost represents the bulk of spending for AI development companies, especially as the rapid advance of Nvidia's chips and the requirements of new models force companies to update their data centers frequently.
Thus, if companies can bring this cost down, they secure higher profits and greater sustainability in operating and training AI models.
Google's processors offer this through two options. The first is buying the chips outright and installing them in the companies' own data centers, an option Google is expected to offer in the future.
The more practical option, however, is renting processing capacity directly through Google's cloud services, which gives AI companies quick access to the latest and most powerful chips without bearing the cost and hassle of installing and operating them.
Bloomberg, in a report on the new processors, notes that while Google is not keen on completely replacing Nvidia, one of its largest suppliers, this could change in the future, according to Gartner analyst Gaurav Gupta.
Gupta told Bloomberg: "Graphics processing units are better suited to handling a wider range of workloads; Nvidia is generations ahead of its competitors."
For its part, Nvidia, through an official spokesperson speaking to Bloomberg, expressed its pleasure at the progress Google has achieved and its contributions to the AI sector, confirming that it will continue supplying Google with its processors for the foreseeable future.