TMTB: Funda.AI + TMTB Slack AMA Recap
Thanks to Funda AI for making the first-ever TMTB Slack AMA a huge success. We covered wide-ranging topics, including frontier model updates (Mythos, Gemini 4), Anthropic's compute constraints and the opening they create for OpenAI, and the emerging CPU bottleneck in AI infrastructure. We also touched on memory/NAND pricing, optics supply chains, and specific names including APP, NET, ASTS, NVDA, and Alchip. You'll find a full summary below.
In addition to its Substack, Funda AI offers an Institutional Package that unlocks even more AI skills, tools, and features to empower your research. This tier includes additional software features, access to more in-depth reports, and regular meetings with their analysts. Reach out to sales@funda.ai for inquiries.
Upcoming AMAs / Q&As:
Apptopia: Wednesday, April 22nd, 11am ET
Doug O’Laughlin, Semianalysis: Friday, April 24th, 10am ET
LLM & Frontier Model Updates
On Mythos capabilities and timing vs. Opus / Google updates:
Mythos is still in a controlled Project Glasswing rollout. On Gemini 4 pre-training: directionally consistent that it kicked off around last month, but it remains in an early iterative phase, exploring architectural choices and training recipes rather than a fully locked production run. Google's near-term focus is heavily skewed toward closing the coding gap, with significant top talent redirected there. Gemini 4 is unlikely to be a near-term "shock release," but the direction of travel on harness and coding is constructive. No info on TPU v9 specs yet.
On RL as a differentiator beyond coding — what happens in domains without clean verification?