A few weeks ago, Capital Flywheels explored how societal and technological developments can potentially be simplified down to two core concepts: energy and computation.
For long-term investors and thinkers, there is nothing more powerful than recognizing and observing the mega-trends that influence our lives not only over months and years but over decades, centuries, and perhaps millennia. Understanding these mega-trends allows us to make decisions and position ourselves for the future far in advance of the common consensus, as well as gauge the durability, duration, and sustainability of a prevailing narrative. A narrative can never exceed the lifespan of its underlying trend for long… A long narrative therefore needs a long mega-trend beneath it.
Energy and computation are two such core concepts that, whether human society has recognized it or not, have dominated all aspects of history in ways that we can only begin to comprehend. All of human history and all histories of human conflict are mostly just different combinations and permutations of the need for energy and computation set against our ability to access such energy and computation. When what a civilization needed in the form of energy (e.g. food and base resources) and computation (e.g. technology, knowledge) could only be found in another country, that was often cause enough for war.
In that post, Capital Flywheels offered two examples of highly interesting companies (but not necessarily interesting as stocks) today that currently embody a potent mix of energy and computation capabilities: Tesla (+ SpaceX) and Apple. These two companies appear to be pushing the boundaries of energy and / or computation applications to limits that few other companies dare explore. And in the process, both companies are achieving results that few other companies can match.
While those two companies are well understood by consumers given their positioning as consumer companies, another company that belongs on that list, but is far less well understood, is Nvidia.
Despite Nvidia’s growing importance to the future of computation, I suspect the popular understanding of Nvidia is still that of a niche supplier of semiconductors for games and for AI technologies that are still in research and years away from commercialization. In reality, Nvidia is already on the cusp of powering a decades-long shift in AI and computation adoption across all industries.
In the same way that people are realizing that the internet is going to find its way into every single industry because of COVID-19, AI and data-driven computation are likewise going to find their way into every industry imaginable. And Nvidia is going to power (almost) all of it.
Recently, Nvidia overtook Intel to become the largest semiconductor design company in the world by market capitalization. This was a major milestone many years in the making, but it will likely be dwarfed by the ultimate value creation that Nvidia achieves when all is said and done. Unless something derails Nvidia’s progress in the coming years, I fully expect Nvidia to be worth $1 trillion or more.

The computing paradigm that Intel ruled over is a centralized one. It assumed that computing is largely done through PCs (and, later, servers and datacenters). It assumed that computational power is always the factor to prioritize because computing machines are always plugged into a wall outlet, without recognizing that energy efficiency would become just as important a consideration as raw computation.
From an architecture perspective, Intel assumed that computation is best done through one or a few very powerful cores. It assumed that problems worth solving are large, singular problems that require the highest computational speeds semiconductors can offer.
However, most of that turned out to be a mistake. There are certainly many areas where Intel’s assumptions remain relevant, but increasingly the world is discovering that energy is an important consideration if you want computation everywhere (especially in devices that are not plugged into a wall outlet) and that computational problems worth solving are not all large, singular problems.
Many people tend to agree that Intel’s downfall began in 2005, when Intel refused to dedicate resources to make a power-efficient processor for Steve Jobs’ yet-to-be-revealed iPhone, a device that would shift the computing world toward devices that are not plugged into a wall. Certainly, Intel could not have known how dramatically the world would change with the iPhone (especially without having seen it)… but the decision opened the gates to a world where Intel’s core assumptions about computing no longer held true. The mega-trend in computing now is computing everywhere. And in order to allow computation everywhere, the world needed new tools and architectures that could meet ever-growing demands for computation within ever smaller devices and batteries.
The resulting paradigm is the one that we live in today – devices (mostly smartphones at the moment, but increasingly many other smart things like speakers, watches, surveillance cameras, doorbells, locks, lightbulbs, and, yes, eventually AR / VR glasses) with limited battery life powered by energy-efficient chips (based on ARM’s designs), with the vast majority of computation done in the cloud at datacenters that have effectively unlimited energy resources.
Intel’s world of prioritizing performance over energy efficiency, with computation done on-device, has now evolved into a world of ARM + cloud: ARM’s power-efficient designs enable computation in many, many more energy-constrained places, while the heavy computational lifting is done in the datacenter.
Not only are Intel’s assumptions about performance over energy efficiency struggling to remain relevant in the world we live in today, Intel’s assumptions about the nature of computing are also being challenged. An architecture centered around a few very powerful cores has become increasingly disconnected from the world we live in.
To borrow a math analogy, Intel’s assumption is that everyone on earth has Calculus problems and needs very complex integrations / differentiations done every day, and that our need (desire?) to solve more and more Calculus problems is growing every day.
But this is not the world we live in.
The world we live in is one where everyone on earth has basic math problems (e.g. algebra problems) that are growing in number every day. Intel’s solution is to give everyone a personal math PhD assistant (i.e. a few very powerful cores), but the talents of that PhD assistant are largely wasted on the basic problems that we all face in our daily lives. What we need is not a PhD assistant but hundreds or thousands of 8th graders who can help solve basic math.
While Intel’s CPUs still dominate datacenter computation workloads, increasingly the world is discovering that graphics processing units (GPUs) pioneered by Nvidia are much better suited for where the world is heading.
Unlike Intel’s CPUs, Nvidia’s GPUs have thousands of cores. Each individual core is much less capable than a single Intel CPU core. But unless the problem requires a very powerful core, thousands of weaker cores will usually compute faster than a few super-powerful ones. Returning to our analogy, is there any doubt that 1,000 8th graders would solve 1,000 algebra problems faster than 1 PhD attempting to do the same 1,000 algebra problems?
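To make the analogy concrete, here is a minimal CUDA sketch (purely illustrative, with a deliberately trivial per-item problem): a single kernel launch hands out roughly a million small, independent problems, one per GPU thread, instead of looping over them one at a time on a single powerful core.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Each GPU thread solves one small, independent problem -- the "8th grader"
// in the analogy. The "problem" here is a trivial per-element calculation;
// the point is the shape of the work, not the math itself.
__global__ void solve_many(const float* a, const float* b, float* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // this thread's problem index
    if (i < n)
        out[i] = 2.0f * a[i] + b[i];  // one tiny problem per thread
}

int main() {
    const int n = 1 << 20;  // ~1 million small problems
    float *a, *b, *out;
    cudaMallocManaged(&a, n * sizeof(float));
    cudaMallocManaged(&b, n * sizeof(float));
    cudaMallocManaged(&out, n * sizeof(float));
    for (int i = 0; i < n; i++) { a[i] = (float)i; b[i] = 1.0f; }

    // Launch enough 256-thread blocks to cover all n problems at once,
    // rather than iterating over them serially on one core.
    solve_many<<<(n + 255) / 256, 256>>>(a, b, out, n);
    cudaDeviceSynchronize();

    printf("out[42] = %f\n", out[42]);  // expect 2*42 + 1 = 85
    cudaFree(a); cudaFree(b); cudaFree(out);
    return 0;
}
```

The GPU’s hardware scheduler spreads those blocks across thousands of physical cores; no single core needs to be fast, because the work is wide rather than deep.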
What’s interesting is that the evolution of computing not only opened a window for Nvidia to become more relevant in the datacenter, but the emergence of AI has made Nvidia’s GPUs the most relevant general tool for computation. AI models are fundamentally structured and trained through computations that are more akin to pattern-matching than to solving grand mathematical equations. It’s as if the 1,000 8th graders are hired to solve not 1,000 algebra problems but something closer to 1,000 simple multiplication problems. Intel can continue to push the envelope of CPU performance (make their “PhD” smarter), but the fundamental problem is not that the cores are too slow; it’s that the CPU simply does not have enough of them for the problems the world faces today.
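As a rough sketch of why AI workloads map so naturally onto GPUs: the inner loop of a neural network is little more than multiply-and-accumulate repeated millions of times, with no dependency between outputs. A naive CUDA kernel for a single dense layer might look like the following (production systems use Nvidia’s far more optimized cuBLAS / cuDNN libraries, but the shape of the work is the same):

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// A dense neural-network layer is, at its core, a matrix-vector multiply:
// every output neuron is a sum of (weight * input) products. This naive
// kernel assigns one output neuron per thread -- thousands of simple
// multiplications running side by side.
__global__ void dense_layer(const float* W, const float* x, float* y,
                            int rows, int cols) {
    int r = blockIdx.x * blockDim.x + threadIdx.x;  // one thread per output neuron
    if (r < rows) {
        float acc = 0.0f;
        for (int c = 0; c < cols; c++)
            acc += W[r * cols + c] * x[c];  // multiply-accumulate, the core AI op
        y[r] = acc;
    }
}

int main() {
    const int rows = 4096, cols = 4096;
    float *W, *x, *y;
    cudaMallocManaged(&W, rows * cols * sizeof(float));
    cudaMallocManaged(&x, cols * sizeof(float));
    cudaMallocManaged(&y, rows * sizeof(float));
    for (int i = 0; i < rows * cols; i++) W[i] = 0.001f;
    for (int i = 0; i < cols; i++) x[i] = 1.0f;

    dense_layer<<<(rows + 255) / 256, 256>>>(W, x, y, rows, cols);
    cudaDeviceSynchronize();
    printf("y[0] = %f\n", y[0]);  // expect cols * 0.001 = ~4.096

    cudaFree(W); cudaFree(x); cudaFree(y);
    return 0;
}
```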
Almost 10 years ago, the computing world started to question whether Moore’s Law was coming to an end. For close to 40 years, Intel had consistently pushed transistor density forward as predicted by Moore’s Law, but then progress started to slow. What has become increasingly clear, however, is that Moore’s Law has not come to an end; rather, it can no longer be sustained through the CPU paradigm and architectures pioneered by Intel.
In fact, Moore’s Law continues to progress when measured through the computational gains achieved by GPUs:

[Chart: GPU computing performance continuing the trajectory of Moore’s Law. Source: Nvidia]
One of Capital Flywheels’ core beliefs is that humanity’s desire for energy and computation is limitless. Every generation works hard to find ways to increase energy output and computation output because humans are fundamentally lazy. The more energy and computation we have access to, the easier a life we can live. But each generation takes the prior generation’s progress for granted. No generation brought into this world has ever found, or will ever find, the level of energy and computation achieved by their parents to be sufficient. Every generation will always want more.
Every single smartphone in our pockets today has more computational power than the computers NASA used to send Neil Armstrong to the moon in 1969. If we could go back in time and tell the people of 1969 that we would all have that much computing power in our pockets just 50 years later, the people of 1969 would very likely ask: why? What’s the point?
But we are never satisfied. The people of 1969 did not know that we, the people of 2020, would need things like Instagram and TikTok to entertain ourselves. And we will likely continue to invent ever more ways to consume the growth in computation that we can achieve.
Although Nvidia’s eclipse of Intel’s market capitalization might seem like the kind of milestone that marks a top, Nvidia was never on Intel’s path. Nvidia’s path is a different and longer one, and it seems to begin only where Intel’s path ends.
Like the people of 1969, we, the people of 2020, cannot fathom at this very moment what the people of 2050 will need or want when it comes to computation. If Nvidia is able to continue to drive computational power forward according to Moore’s Law, the computation available in everyone’s pocket in 2050 will be roughly 1 million times greater than what we have in our pockets today. We cannot know what people will possibly want or need to do with so much power, but Capital Flywheels knows that humanity will find a way to consume that computational power because we are never satisfied. And if humanity does exactly what humanity has done before, the exponential growth in demand for computation will translate into extraordinary growth for Nvidia’s products. Intel powered a billion PCs, but Nvidia (along with ARM) may power tens of billions of smart devices. Increasingly, computation is diffusing into every industry, even industries that have historically resisted. And Nvidia is doing its part to ensure that it is a foundational layer of technology for industries that traditionally sit far away from “tech”.
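For reference, the math behind that 1-million-times figure, assuming the classic Moore’s Law cadence of a doubling roughly every 18 months: 30 years from 2020 to 2050 works out to 20 doublings, and 2^20 ≈ 1,048,576, or about a million-fold.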
The financial outcome of such a future should be quite bright from here.
Interestingly, the media recently reported that ARM may be for sale. ARM was taken private by SoftBank back in 2016 in a $32 billion deal. Despite ARM’s strategic importance for the future of computing, SoftBank’s financial position has been weakened by the collapse of WeWork, which is likely the impetus for a potential sale of ARM. According to the media, Nvidia has shown interest in potentially acquiring ARM. The regulatory hurdles would likely be high, and the direct synergies between Nvidia and ARM are debatable, but a union of Nvidia and ARM would very likely cement Nvidia as the most important semiconductor company of the next 30 years.
To close, I’d like to leave you with this recent demo of the upcoming Unreal Engine 5. Nvidia powers the gaming industry, and the quality of the images and universes now possible through games is likely to cross the chasm within the coming years. We may talk about the potential for Nvidia to bring the power of computation to all industries, but perhaps we may soon all be spending our waking moments in universes powered not by suns and stars, but by Nvidia GPUs. How much would that be worth financially? But if that were to happen, the more interesting question would be what life would be like and what world you would find me in.