TMTB: UBS Conference Key Quotes: NVDA, APP, DELL, WDC, AMAT, ANET, XYZ
NVDA: Colette Kress CFO
AI “bubble” vs. multi-decade transitions
“No, that’s not what we see. What we see is two to three different major transitions… first, the need to transition to accelerated computing… most work in the data center has been done with CPUs for years, but there’s not going to be any improvement that we can see using CPUs. By the end of the decade, $3 trillion to $4 trillion of total data center infrastructure… probably about half of that is focused on the transition itself. You’re seeing hyperscalers revise search, recommender engines, social media—this is a very big part of what we are seeing today… AI, including Agentic AI, is another transition that will continue to grow through the rest of the decade.”
Competitive position and the Grace Blackwell stack
“We’re very excited about our Grace Blackwell configurations—200 series, Ultra, and 300. You’re going to continue to see more models; those models are being built now and you’ll probably see them in about six months… Grace Blackwell was a full data center, rack-scale, extreme co-design—seven different chips working together. That’s very different from a fixed-function ASIC.”
“Everybody is on our platform. All models are on our platform, both in the cloud and on-prem. All workloads continue to be on our platform.”
Lead not shrinking; full-stack moat (CUDA and libraries)
“Absolutely not… Our platform is a full stack that incorporates hardware and the software transformation customers need. CUDA and the libraries are some of the best reasons people stay… it gets better over time. Enhancing our software can give you an ‘X factor’ improvement on top of the hardware… backwards and forwards compatible. You buy the compute and it gets stronger as we improve the software.”
Installed base dynamics (mostly additive builds)
“Most of the installed base still stays there… advanced new models want the latest generation… so they move the model to the newest architecture and keep the existing. To this date, most of what you’re seeing are brand-new builds throughout the U.S. and across the world.”
“We still see Ampere. We certainly see Hopper continuing to be used… backwards compatible, forwards compatible from the software.”
Inference profitability, reasoning models, and token flywheel
“Reasoning models—long thinking—are now coming to market, and you’ll see more on Blackwell. That drives more compute up front; the three scaling laws are still intact… then more token generation, plus more users. Users are now saying, ‘I would absolutely pay for this.’ Inference has moved to reasoning-type models and now there’s a margin that fuels more compute and more models… a flywheel for inferencing and token generation.”
“More compute, more tokens.”
Model builders’ cash vs. capacity (risk framing)
“Hyperscalers are continuing to buy compute for internal use and to transition to accelerated computing. Model makers need more compute, but they have to work through profitability and raising capital. Our focus in demand and supply is simple: do we have POs; do they have the ability to pay. Nothing has changed there.”
OpenAI framework agreement and current guideposts
“OpenAI is a very strong partnership… but our focus right now on our $0.5 trillion worth of Blackwell and Vera Rubin is really baked for OpenAI continuation through the CSPs. That $0.5 trillion doesn’t include any of the next part, a direct agreement with OpenAI… They want to go direct; we’re still working on a definitive agreement.”
Anthropic exposure
“We’re excited about our partnership with Anthropic… also focused on our platform. Through a CSP and working with Microsoft… they’re looking at a 1-gigawatt future as well. All of the model makers are focused on our platform and working with us.”
Vera Rubin status and expected uplift
“Vera Rubin has been taped out. We have the chips and are working feverishly to be ready for the second half of next year… Ultra’s transition was seamless and that’s what we wanted. You’re going to see an ‘X factor’ increase in performance with Vera Rubin.”
CPX and disaggregating inference/training paths
“There is a need for breaking down training and inferencing… CPX takes you to a different stage within the same infrastructure… all of it within a mixture-of-experts design. You’re breaking down the work, but not necessarily using a different type of compute; staying on the full system is probably the most efficient way to get it done.”
Why AI won’t “replace CUDA”
“CUDA is a longstanding development platform—now on our 13th version—with consistent libraries for different industries and workloads. People have talked for years about doing something similar; it hasn’t been successful. AI is moving fast—we keep updating techniques—and alternatives are always behind. It’s backwards and forwards compatible; Hopper has improved via software, and you’re starting to see it with Blackwell. A100/Hopper/Blackwell each get an ‘X factor’ improvement from software; for Blackwell, total increase versus last gen is 10x–15x, and within that you probably have about 2x just from software after going to market.”
Gross margin (mid-70s) through the Rubin ramp
“We fine-tuned cycle times, yields, and costs to move into the mid-70s. Blackwell Ultra’s seamless transition lets us focus more on efficiency. We are aware of supply prices—HBM and others—but with our scale, one more day of cycle-time efficiency and cost focus keeps us at about the mid-70s next year as Rubin ramps.”
Inventory, purchase commitments, and the $0.5T plan
“Inventory and purchase commitments growing is a good thing—it means we have supply for our demand outlook. Inventory is things being processed to ship within the current quarter; by early December most of that likely moved to customers. For the $0.5 trillion next year, we have to order a lot of supply—long-lead items, seven chips, complex systems. Managing supply and demand is day-to-day; but yes, you should view the step-up as growth.”
“The GTC slide going into next year doesn’t include potential incremental agreements; there is opportunity for that to increase.”
Capital allocation priorities
“First, ensure cash for internal needs—supply and capacity to build what we’re building. Second, shareholder returns—repurchases and dividends will always be part of what we do. Third, strategic investments to expand the ecosystem—smaller checks, learning from partners in critical parts of AI. We do both ecosystem investments and M&A where it fits; very large M&A is hard, but we’ll continue to pursue teams and technologies that strengthen the platform.”
APP: Adam Foroughi, CEO; Matt Stumpf, CFO