I’m a college instructor. Last weekend, I built an AI tutoring app on a $2,900 Mac in my office. It works great. I am stunned by what this little machine sitting next to me right now can do. It runs LLMs with incredible speed (about a 5-second response time in my little app!), and I can use it to customize and fine-tune models. There’s not much I can think of right now that would need more than this machine. Meanwhile, the stock market is betting $5 trillion, the current market cap of Nvidia, on the idea that this level of AI performance isn’t good enough.
I recently upgraded from a three-year-old M1 Mac. The same app took 30 seconds to respond on that hardware; now it takes 5. That’s a 6x improvement in three years on consumer hardware. Go back a few years further, to when I was an electrical engineering student in 2004, and nobody was even dreaming about performance like we have today.
That $5 trillion represents a bet: a bet that AI can recursively self-improve without humans in the loop. Not just improve, self-improve. If that’s true, then this exponential ride we’re on with AI keeps going and the sky’s the limit. It’s an incredible bet to make! Where is the evidence to support it?
To put it another way, here’s the case investors must be betting on: that current AI can train next-gen AI to be 1.1 (or even 1.01) times better than itself. If that’s the case, then it is off to the races. Think of it like compound interest: 0.99x means you’re losing one dollar per hundred every period; 1.01x means you’re gaining one. Projected out, the difference between the two is profound. That 0.02 gap makes a huge difference when iterated hundreds of times. If it’s 1.01, then the valuations of these tech companies may be justified, and the next limit becomes power consumption instead of humans.
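The compounding argument is easy to check numerically. Here is a quick sketch; the 100- and 300-period horizons are purely illustrative choices, not claims about real model-release cycles:

```python
# Compare two self-improvement multipliers compounded over many generations.
gain, decay = 1.01, 0.99

for periods in (100, 300):
    up = gain ** periods     # slowly compounding improvement
    down = decay ** periods  # slowly compounding decay
    print(f"{periods} periods: 1.01x -> {up:.2f}, 0.99x -> {down:.3f}")

# After 300 periods the 1.01x path sits near 19.8x the starting level,
# while the 0.99x path has shrunk to about 0.05 of it -- a roughly 400x
# spread produced by a 0.02 difference per step.
```

The asymmetry is the whole point: just above 1.0 compounds toward the sky, just below it compounds toward zero.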
What happens if that’s not the case? What if the next generation of AI needs human input to get better? Then we’re reaching a plateau in the capabilities of AI. Yes, next-gen AI is better than current-gen AI, but that is largely because of the people working on it. They are developing new ideas, new architectures, and new designs to make things better. What they’ve accomplished is amazing. That’s why the engineers are so valuable.
Let’s take a look at the value of an AI engineer today. Take the market cap of Nvidia and divide it by the number of employees: you get something over $100 million per employee. Humans must be valuable. This also tells me that humans must still be in the loop. At least for today, I think we can safely say that current AI is not training next-gen AI at a 1.1x multiple.
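The back-of-the-envelope version, assuming a $5 trillion market cap and a headcount of roughly 36,000 (an approximation on my part, not an exact figure from Nvidia’s filings):

```python
# Rough market-cap-per-employee estimate; the headcount is an assumption.
market_cap = 5e12      # $5 trillion
employees = 36_000     # approximate Nvidia headcount

per_employee = market_cap / employees
print(f"${per_employee / 1e6:.0f} million per employee")
```

Even if the headcount is off by thousands in either direction, the result stays well north of $100 million per person.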
So, we have two paths ahead of us: either 1) this current generation of AI engineers designs AI that can iterate at a 1.1x rate, or 2) they can’t.
Every previous iteration of technology has depended on the engineers who designed it. About 80 years ago, a few people at Bell Labs figured out how to make a single transistor. Engineers at Shockley and Fairchild in California who followed made it better, inventing new ways to make it smaller and faster. Software designers and coders at Apple and Microsoft made breakthroughs that put the technology to better use. Engineers at Nvidia had the brilliant idea of using GPUs (graphics chips!) to train models that work with natural language. Humans have been involved every step of the way. But now, some people must think we can break out of that cycle. I’m not so sure.
Physics likes limits. S-curves. Push the accelerator on a car and what happens? It goes faster and faster until the engine can’t overcome air resistance. At some point, the limit is reached. This happens all the time in physics and in nature.
Just for the sake of argument, let’s say this isn’t the start of a plateau. Let’s say we continue on the upward path. Then what? I already mentioned power; that’s a real concern. Also, how much intelligence do we really need? I mean, most days I think the world would benefit from more. But seriously, maybe there is an upper limit to what is useful.
If AI does plateau, then what are we left with? A company that owns very little in the way of tangible assets, with employees valued at over $100 million apiece. For comparison, an employee at Airbus is worth about 100x less, and an employee at Ford several times less than that. There’s a long way for a company like Nvidia to fall. Look no further than Cisco 25 years ago; its stock still hasn’t regained its dot-com peak.
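Those ratios can be sanity-checked with rough numbers. The market caps and headcounts below are approximate snapshots I’m assuming for illustration; the exact ratios shift with whichever quarter you sample:

```python
# Approximate market cap (USD) and headcount; all figures are rough
# assumptions for a back-of-the-envelope comparison, not exact data.
companies = {
    "Nvidia": (5e12, 36_000),
    "Airbus": (1.5e11, 150_000),
    "Ford":   (4.5e10, 170_000),
}

value = {name: cap / staff for name, (cap, staff) in companies.items()}
for name, v in value.items():
    print(f"{name}: ${v / 1e6:,.1f}M per employee")

print(f"Nvidia/Airbus: {value['Nvidia'] / value['Airbus']:.0f}x")
print(f"Airbus/Ford:   {value['Airbus'] / value['Ford']:.1f}x")
```

Whatever snapshot you pick, the gap between Nvidia and the industrial giants is two orders of magnitude, and that gap is the distance a plateau would close.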
The only historical parallel I could find for market-cap-per-employee numbers like this is the East India Company. Some of the details are interesting and worth exploring: it had a monopoly to extract resources (spices) via trade routes it controlled, it had government backing, and it had an army. The result was war, eventual collapse, and finally the rise of competition.
I’m worried that there isn’t much support under the valuations of today’s tech companies. If they fall, everyone with a retirement account will feel the pain. Bubbles are fun on the way up, but some caution is in order.
Here’s the truth of where we are today: I can already build remarkable things with $2,900 of hardware. They’re helping me run a better calculus class. Three years ago, I couldn’t do that.
$5 trillion. $100 million-plus per employee. These are big numbers. Are they justified?