Public Markets

The $100 Billion Memory Race: Why Micron Is the Most Undervalued Play in AI Infrastructure

Nov 17, 2025

You know what’s funny about the AI revolution?
Everyone’s obsessed with the chips. Nvidia this, AMD that. Jensen Huang’s leather jacket gets more press coverage than most CEOs get in their entire careers. And don’t get me wrong—I get it. GPUs are sexy. They’re powerful. They’re the brains of AI.
But here’s what almost nobody is talking about...

Imagine you’re building the most sophisticated brain on the planet. You’ve got the world’s most advanced processor—let’s say it cost you $30,000. It can perform a quadrillion operations per second. It’s a marvel of human engineering.
But there’s a catch.

That brain needs to remember things. It needs to access information. Fast. Like, really fast. We’re talking trillions of data points flowing through it every single second.
Without memory—the right kind of memory—that $30,000 brain you just bought? It’s worthless. It’s like having a Ferrari with no gas tank. Looks great. Goes nowhere.
Welcome to the hidden crisis of artificial intelligence.

Let me paint you a picture of what’s happening right now in data centers across the world.
It’s 2 AM in a sprawling facility outside Portland, Oregon. Rows upon rows of servers stretch as far as you can see. Inside, Nvidia H100 GPUs—each one costing around $30,000—are training the latest large language model. The air conditioning is running at full blast because these things generate more heat than a commercial kitchen.
But here’s what the headlines miss: wrapped around each of those $30,000 GPUs is roughly $10,000 worth of something called High Bandwidth Memory. That’s HBM for short.
And without it? Those GPUs are just very expensive paperweights.
Think about that ratio for a second. For every dollar spent on the processor, you’re spending 33 cents on the memory. That’s not a rounding error. That’s a massive market that most investors are completely ignoring.
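If you want to sanity-check that ratio yourself, it's two lines of Python. (These are the rough, illustrative prices quoted above, not exact bill-of-materials figures.)

```python
# Rough per-accelerator numbers quoted above (illustrative, not exact).
gpu_cost = 30_000  # approximate price of one Nvidia H100
hbm_cost = 10_000  # approximate HBM content wrapped around it

# Memory spend per dollar of processor spend
ratio = hbm_cost / gpu_cost
print(f"Memory spend per processor dollar: {ratio:.0%}")  # → 33%
```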

Now, let me ask you something: When was the last time you heard anyone talking about memory stocks?
Exactly.
While everyone’s been losing their minds over Nvidia—which, fair enough, they’ve built an incredible business—there’s been this quiet revolution happening in memory technology that’s about to reshape the entire AI industry.
And there’s one company sitting at the absolute center of it.
They’re not in Silicon Valley. They’re in Boise, Idaho.
They’ve been around for 47 years, which in tech years makes them practically ancient.
Their stock is up 186% this year—which sounds impressive until you remember how much of the oxygen Nvidia takes up. So they’re not getting the attention.

But here’s the kicker: While Nvidia trades at a forward P/E of 30x and the Nasdaq-100 averages 26x, this company—Micron Technology—trades at a forward P/E of just 16.
Let me repeat that. Sixteen times forward earnings.
For a company that just reported revenue growth of 50% year-over-year to an all-time high of $37.5 billion.
For a company that’s the only American supplier of the most critical component in AI infrastructure.
For a company sitting on what might be the biggest supply-demand imbalance in the semiconductor industry.

Why Memory Is the Real Bottleneck
Okay, let’s get technical for a minute—but I promise I’ll make this interesting.
You know how everyone talks about AI models getting bigger? GPT-4 has 1.7 trillion parameters. The next generation will be even larger. We’re talking about models that need to access and manipulate billions—sometimes trillions—of data points simultaneously.
The problem isn’t computing power anymore. Modern GPUs can handle the math.
The problem is getting the data to the GPU fast enough.
It’s like having the world’s fastest chef, but you can only hand him one ingredient at a time. Doesn’t matter how skilled he is—he’s waiting around for the next carrot, the next onion, the next piece of chicken. The kitchen grinds to a halt.

That’s exactly what was happening with AI training until HBM came along.
Traditional memory—the stuff in your laptop—sits on the other side of the room, metaphorically speaking. The GPU has to reach out, grab some data, bring it back, process it, then go get more. It’s slow. It’s inefficient. And when you’re training trillion-parameter models, it’s completely unworkable.
High Bandwidth Memory changes everything. Micron’s HBM4 delivers speeds greater than 2.0 TB/s per memory stack—that’s a 60% improvement over the previous generation.
To put that in perspective: imagine upgrading from a garden hose to a fire hydrant. That’s the difference we’re talking about.
And Micron’s HBM3E consumes approximately 30% less power than competitive offerings. In data centers where electricity costs are measured in millions of dollars per month, that 30% power saving isn’t a nice-to-have—it’s a game-changer.

The Market Nobody’s Watching
Here’s where this gets really interesting.
Right now, as I’m writing this, the total addressable market for HBM is about $35 billion in 2025.
That’s already a massive market. But here’s the projection: By 2030, that market will hit $100 billion.
Let me say that again for the people in the back: $35 billion to $100 billion in five years.
That’s nearly a tripling in five years—a compound growth rate of roughly 23% a year. For a market this size, that’s a full-blown explosion.
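For the skeptics, here’s what that projection works out to as a compound annual growth rate, taking the $35 billion and $100 billion figures above at face value:

```python
# Implied compound annual growth rate (CAGR) for the HBM market,
# using the projection above: $35B in 2025 -> $100B in 2030.
start, end, years = 35e9, 100e9, 5
cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # → Implied CAGR: 23.4%
```

Call it roughly 23% compounded annually, every year, for five years.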

And why? Because every single AI accelerator—whether it’s Nvidia’s next-gen Blackwell, AMD’s MI350, or whatever Google and Microsoft are building in their secret labs—all of them need HBM. There’s no substitute. There’s no workaround. You either have HBM or you don’t have a competitive AI chip. Period.
Gartner predicts the market for DRAM could grow by as much as 28% in 2025 to $115.6 billion, driven by increased demand for HBM. We’re talking about a fundamental shift in the memory market—away from commodity DRAM for PCs and phones, toward specialized, high-margin memory for AI.

The Sixfold Growth Nobody Saw Coming
Now let’s talk about what’s actually happening at Micron right now.
In Micron’s fiscal Q3 2025, HBM revenues surged nearly 50% sequentially. That’s not year-over-year, that’s quarter-over-quarter. One quarter to the next—50% growth.

But it gets better.
For fiscal Q4 2025, HBM revenue exploded sixfold year-over-year to $2.1 billion.
Sixfold.
When was the last time you saw a multi-billion dollar revenue stream grow sixfold in one year?
By the end of fiscal 2025, Micron’s data center revenue hit $20.75 billion—up 137% year-over-year. And here’s the thing: data center now represents 56% of Micron’s total revenue, up from just 35% the year before.
This isn’t a company with an AI side hustle. This is a company that’s fundamentally transformed itself into an AI infrastructure play in the span of 18 months.
And Wall Street is still valuing it like it’s a cyclical memory commodity business.

The Economics Are Absurd
Let me tell you why this is such an incredible business.
HBM commands average selling prices that are 2 to 3 times higher than conventional DRAM.
Think about that. Same silicon. Same fundamental manufacturing process. But because HBM is stacked vertically and requires advanced packaging, Micron can charge two to three times as much.
It gets better.

Micron’s gross margin expanded by 17 percentage points to 41%. For a hardware company selling physical products, 41% gross margins are exceptional. Apple would be proud of those numbers.
And here’s the kicker: HBM capacity is completely sold out through 2025, and supply agreements are being locked in through 2026.
Sold. Out.
When was the last time you heard about a semiconductor company with sold-out capacity for the next two years? That doesn’t happen in cyclical businesses. That happens when you have genuine supply-demand imbalance driven by structural transformation.
Let me spell out what this means: Micron can essentially name its price right now. They’re not competing on cost. They’re competing on whether they can even deliver the product.

America’s Only Champion
Here’s the geopolitical angle that makes this even more compelling.
Micron is the only manufacturer of HBM in the United States.
Read that again. The. Only. One.
Samsung makes HBM—in South Korea. SK Hynix makes HBM—also in South Korea. Together, those two companies control roughly 90% of the HBM market, with some estimates putting their combined share as high as 97%.
Now, nothing against South Korea—they’re allies, they’re great at semiconductors. But if you’re the U.S. government, and you’re watching AI become the defining technology of the 21st century, the defining factor in economic competitiveness, potentially the defining factor in military supremacy...
Do you really want 97% of the most critical component manufactured within a few dozen miles of the North Korean border?

This is why Micron received $6.44 billion in CHIPS Act grants. This is why the company is investing $200 billion to build new fabs in Idaho, New York, and Virginia, with the goal of shifting 40% of global DRAM production to the U.S. by 2030.
This isn’t just a business story. This is national strategy.

Micron isn’t competing on a level playing field. They’re competing with the full backing of the U.S. government, which has decided that domestic memory production is a matter of national security.

The Design Wins That Matter
But here’s what really seals the deal for me.
Micron isn’t just selling memory chips. They’re co-designing the next generation of AI platforms with Nvidia and AMD—the two companies that control the AI accelerator market.
Micron’s HBM3E 36GB 12-high solution is integrated into AMD’s Instinct MI350 Series, delivering up to 8 TB/s of memory bandwidth. That platform can support AI models with up to 520 billion parameters on a single GPU.
Think about what that means. AMD didn’t just buy memory off the shelf. They worked with Micron to design the memory specifically for their next-gen architecture.

And it’s not just AMD.
In June 2025, Micron began shipping HBM4 36GB 12-high samples to multiple key customers. Micron plans to ramp HBM4 into volume production in calendar 2026, aligned with the launch of NVIDIA’s “Rubin” architecture.
That’s Nvidia’s next-generation platform after Blackwell. Micron is already shipping samples for chips that won’t launch until 2026.

This isn’t a vendor-customer relationship. This is a strategic partnership. And once you’re designed into a platform like that—once the engineers have optimized the entire system around your memory—it’s incredibly difficult for competitors to displace you.
Switching costs in semiconductor design are measured in years and hundreds of millions of dollars in engineering time.

The Vision
So let me paint you a picture of where this is going.
We’re at the very beginning of the AI infrastructure build-out. McKinsey believes data centers will require an eye-popping $6.7 trillion in global capital expenditures to keep pace with rising demand for AI workloads.
$6.7 trillion. That’s not a typo.
And every dollar spent on AI accelerators requires memory. Expensive, high-bandwidth, low-power memory.

Micron has stated its ambition to grow HBM market share to 20-25%—matching its overall DRAM share. Right now they’re at maybe 6%. So we’re talking about a 4x increase in market share.
In a market that’s growing from $35 billion to $100 billion.
Do the math: 20-25% of $100 billion = $20-25 billion in annual HBM revenue by 2030.
Micron’s total fiscal 2025 revenue was $37.5 billion. HBM alone could represent more than half of that in five years.
This isn’t a side business. This is becoming the business.
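Here’s that back-of-envelope math spelled out, assuming Micron hits its stated 20-25% share target and the $100 billion market projection holds:

```python
# 2030 HBM revenue scenarios, using the figures quoted above.
market_2030 = 100e9       # projected HBM market by 2030
fy2025_revenue = 37.5e9   # Micron's total fiscal 2025 revenue

for share in (0.20, 0.25):
    hbm_revenue = market_2030 * share
    vs_today = hbm_revenue / fy2025_revenue
    print(f"{share:.0%} share -> ${hbm_revenue / 1e9:.0f}B in annual "
          f"HBM revenue ({vs_today:.0%} of today's total revenue)")
```

At either end of that range, HBM alone would exceed half of what the entire company brings in today.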

Why This Matters to You
Look, I know what you’re thinking. “Okay, cool story. Memory chips. Sounds boring. Why should I care?”
Here’s why:
The entire AI revolution—everything you’re reading about, everything you’re excited about, everything that’s about to transform every industry on the planet—all of it runs on memory.
ChatGPT? Needs HBM.
Midjourney? Needs HBM.
Self-driving cars? Need HBM.
The AI-powered drug discovery that’s going to cure cancer? Needs HBM.
This isn’t optional. This isn’t a nice-to-have. This is foundational infrastructure.

And right now, you can buy shares in the only American company making this stuff at a forward multiple nearly 40% below the Nasdaq-100 average, while it’s growing revenue 50% a year.
Micron closed at $247 per share recently. Morgan Stanley just raised their price target to $325. Wells Fargo set their target at $300.
That’s roughly 20-30% upside according to Wall Street’s price targets. And honestly? I think Wall Street is being conservative.
Because they’re still modeling Micron like it’s a cyclical memory company. They’re not fully grasping that this has become a strategic infrastructure play with multi-year visibility and pricing power that hasn’t existed in the memory market in decades.
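If you want to check those targets against the recent price yourself (using the quotes above—they’ll have moved by the time you read this):

```python
# Implied upside from the analyst targets quoted above.
price = 247  # recent closing price
targets = {"Morgan Stanley": 325, "Wells Fargo": 300}

for firm, target in targets.items():
    upside = target / price - 1
    print(f"{firm}: {upside:.0%} upside")  # 32% and 21%, respectively
```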

So that’s the setup.
A 47-year-old company from Boise, Idaho that’s transforming into America’s strategic AI infrastructure play.
Growing HBM revenue sixfold year-over-year.
Sitting on a $100 billion opportunity by 2030.
Trading at a 50% discount to its semiconductor peers.
With sold-out capacity and the backing of the U.S. government.
This isn’t about making a quick trade. This is about positioning yourself in the infrastructure layer of the biggest technological transformation of our generation.
While everyone else is chasing the sexy AI stocks, the real opportunity might be hiding in plain sight—in the memory chips that make it all possible.

Let’s dive into the numbers...


What You’ll Discover in This Analysis

  1. The HBM Revolution — How a sixfold revenue surge in one product line is reshaping Micron’s entire business model

  2. The $100 Billion Opportunity — Why the addressable market for this technology will grow 6x by 2030

  3. America’s Strategic Moat — The geopolitical advantage that’s impossible to replicate

  4. The Numbers Don’t Lie — Financial metrics that reveal a company at an inflection point

  5. The Bear Case — What could derail this thesis (and why it probably won’t)

  6. Our Final Verdict — Should you buy, at what price, and what returns can you expect?
