NSA Is Using Anthropic's Mythos Despite Pentagon "Supply Chain Risk" Designation

5-minute read, Daily Tech Brief

TECH BRIEF

Welcome back to Front Research. Here's what's moving in tech this morning.

  • The NSA is using Anthropic's Mythos Preview despite a February Pentagon order declaring the company a "supply chain risk," with reports the model is also in broad use inside DoD

  • Google is in talks with Marvell on two new AI chips, a memory processing unit and a dedicated inference TPU, adding a third design partner alongside Broadcom and MediaTek

  • Mark Gurman says the glowing "26" in Apple's WWDC invite teases a redesigned Siri in iOS 27, while the global DRAM shortage is pushing the new Mac Studio launch into October

  • Fermi America's CEO Toby Neugebauer abruptly departs as the Trump-linked 17 GW Texas AI campus runs into water, power, and tenant issues, sending FRMI down 31% in after-hours

  • Tesla launches unsupervised robotaxi service in Dallas and Houston, the first commercial rollout beyond Austin, with Musk targeting seven cities live by end of June

Let's get into it.

NSA Is Using Anthropic's Mythos Despite Pentagon "Supply Chain Risk" Designation

  • Axios reports the National Security Agency is using Anthropic's most capable model, Mythos Preview, and a second source says Mythos is also in broad use across the Department of Defense

  • The Pentagon moved in February to cut off Anthropic, declaring it a "supply chain risk" and ordering vendors to remove its software from military workflows, with active litigation still ongoing

  • The underlying dispute is about use policies: Anthropic refuses to allow Mythos to be used for mass surveillance or fully autonomous weapons, while DoD wants assurances for "all lawful purposes"

  • Anthropic CEO Dario Amodei met White House Chief of Staff Susie Wiles and Treasury Secretary Scott Bessent on April 17 to align non-DoD federal access to Mythos, with next steps focused on civilian agencies

  • Why it matters: Mythos is the frontier cyber-defense model Anthropic has held back from GA, and Washington's end-run around its own blacklist confirms the strategic value is real, even as the policy stance gets awkward

Axios reported on April 19 that the National Security Agency has deployed Anthropic's Mythos Preview, the restricted frontier model Anthropic has kept out of general availability and routed only to a small group of enterprise and government partners. A second source told Axios that use of Mythos extends beyond the NSA into the broader Department of Defense. The reporting explicitly frames the deployment as happening in parallel with ongoing litigation between Anthropic and the Pentagon, which in February classified the company as a "supply chain risk" and ordered its vendors to strip Anthropic software from military workflows. (Reuters, Business Today, RedState)

The core disagreement is contractual, not technical. Anthropic's published usage policy prohibits its models from being used for mass surveillance and for fully autonomous weapon systems. The Pentagon's position is that those carve-outs are too restrictive given that DoD needs flexibility to use AI "for all lawful purposes." Neither side has moved publicly since February, but the reporting now suggests that individual defense and intelligence agencies are using Mythos through adjacent procurement paths while the corporate-level standoff remains unresolved.

The political track also moved this week. Anthropic CEO Dario Amodei met White House Chief of Staff Susie Wiles and Treasury Secretary Scott Bessent on April 17, with the readouts focused on how non-DoD federal departments can engage with Mythos on terms Anthropic is willing to support. The implication is that Washington is trying to route around its own Pentagon order at the civilian-agency level, while leaving the military use case in legal limbo.

Why it matters: The disclosure confirms two things investors had been modeling on assumption. First, Mythos is a genuine technical lead, not just a marketing line; the NSA is not paying a premium for a second-tier tool. Second, the federal government is willing to split its procurement stance, with civilian and intelligence agencies adopting what the military is formally blocked from, which is how Anthropic's government revenue line keeps compounding even during a Pentagon dispute. For the trillion-dollar valuation chatter that surfaced in The Decoder over the weekend, this kind of headline is the demand side of the story: government appetite for frontier cyber-defense capability is real, expanding, and already translating into deployments. The risk is reputational and regulatory: if the Pentagon standoff becomes a presidential-level issue, Anthropic's positioning (safety-first, usage policy-driven) gets tested in public in a way the company has so far managed to avoid.

Google Pulls Marvell Into a Two-Chip TPU Plan, Broadening the Custom Inference Race

  • The Information reported on April 18 that Google is in talks with Marvell Technology to co-design two new AI chips, a memory processing unit and a dedicated inference-focused TPU

  • The memory processing unit is designed to work alongside existing Google TPUs, aiming to break the memory bandwidth bottleneck that constrains large-batch inference workloads

  • Design finalization on the MPU is targeted for as soon as next year, followed by test production; no signed contract yet, and Marvell would be Google's third TPU design partner alongside Broadcom and MediaTek

  • Custom ASIC sales are projected to grow roughly 45% in 2026, with the market size expanding toward $118 billion by 2033, per industry forecasts cited in the reporting

  • Why it matters: Inference is now the larger compute cost in the AI stack, and Google locking up a second-source custom-silicon partner is a direct hedge against Broadcom pricing power, with meaningful read-through to Marvell's AI revenue mix and to Nvidia's share of hyperscaler inference spend

Google is in talks with Marvell Technology to develop two new AI chips, a memory processing unit (MPU) intended to work alongside Google's existing Tensor Processing Units, and a new TPU designed specifically for inference workloads. The Information broke the story on April 18, with Reuters, TheNextWeb, and 24/7 Wall St. picking up the reporting. The MPU is the more interesting piece: it is aimed at addressing the memory-bandwidth bottleneck that shows up in large-batch inference, where model weights and KV caches cannot be held close enough to the compute to keep accelerators fed. (Reuters via Global Banking and Finance, Business Today, Startup Fortune)
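To put numbers on that bottleneck, here is a back-of-the-envelope sketch of KV-cache sizing in large-batch serving. Every model dimension below is an illustrative assumption chosen for round arithmetic, not a figure from the Google-Marvell reporting:

    # Rough KV-cache sizing for large-batch inference. All dimensions are
    # illustrative assumptions, not reported TPU or MPU specifications.
    BYTES_PER_VALUE = 2       # bf16
    N_LAYERS = 80             # hypothetical decoder depth
    N_KV_HEADS = 8            # grouped-query attention
    HEAD_DIM = 128
    CONTEXT_LEN = 32_768      # cached tokens per sequence
    BATCH_SIZE = 256          # sequences served concurrently

    # One K and one V tensor per layer, per cached token.
    kv_bytes_per_token = 2 * N_LAYERS * N_KV_HEADS * HEAD_DIM * BYTES_PER_VALUE
    total_kv_gb = kv_bytes_per_token * CONTEXT_LEN * BATCH_SIZE / 1e9

    print(f"KV cache per token: {kv_bytes_per_token / 1e3:.0f} KB")  # ~328 KB
    print(f"KV cache for the batch: {total_kv_gb:,.0f} GB")          # ~2,749 GB

Under these assumptions the batch's cache alone runs into the terabytes, and slices of it (plus the model weights) must stream through the memory system for every decoded token. Throughput saturates bandwidth long before it saturates compute, which is the gap a memory processing unit would aim at.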

The timeline is still early. The companies aim to finalize the MPU design as soon as 2027 before handing it off for test production, and the Google-Marvell talks are not yet a signed contract. Still, adding Marvell makes Google's ASIC program more resilient: Broadcom has been the lead partner on TPUs since TPU v4, MediaTek came in during the v6 generation, and a third seat at the table gives Google leverage on pricing, on capacity, and on architectural diversity. For Marvell specifically, winning MPU and inference-TPU sockets at Google would materially re-anchor the AI narrative around custom ASIC silicon rather than the connectivity business that has historically dominated the name.

The macro framing matters. Industry forecasts cited in the reporting put custom ASIC sales up roughly 45% in 2026 and the total addressable market at around $118 billion by 2033, with inference the dominant spend category as AI workloads move from training to serving. Google's public signal is consistent with that split: TPU v7e (the last pure-inference chip Google shipped) sold out into hyperscaler workloads on Vertex, and the new inference-specific TPU would extend that share play against Nvidia's Rubin and against AMD's MI400 line. (Influencer Magazine, Republic World)
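As a rough sanity check on the forecast's shape, the two cited data points can be joined with one assumption. The 2025 base below is a placeholder of ours; only the ~45% 2026 step and the $118 billion 2033 endpoint come from the forecasts cited in the reporting:

    # Implied growth path for custom-ASIC sales. The 2025 base is an
    # assumed placeholder; the 45% step and the $118B endpoint are the
    # figures cited in the reporting.
    base_2025 = 30.0                      # $B, assumption for illustration
    size_2026 = base_2025 * 1.45          # ~45% growth in 2026
    implied_cagr = (118.0 / size_2026) ** (1 / 7) - 1   # 2026 -> 2033

    print(f"2026 market size: ${size_2026:.1f}B")         # $43.5B
    print(f"Implied 2026-2033 CAGR: {implied_cagr:.1%}")  # ~15.3%

The takeaway from the sketch: even after the 2026 surge, the cited endpoint still implies mid-teens compounding for seven more years, which is the demand backdrop Marvell would be bidding into.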

Why it matters: The direction of travel is unmistakable: hyperscaler custom silicon is scaling faster than merchant GPU supply, and inference economics (not training) now drive compute mix. For Marvell, a Google MPU win would be the cleanest AI-ASIC narrative the company has ever had, and would pressure the sell-side to re-rate the data center segment. For Broadcom, the story is a reminder that even its flagship customer is actively diversifying its design bench; no long-dated hyperscaler chip program is fully captive. For Nvidia, the signal is that as inference volumes surge, the share Nvidia keeps at the chip layer depends on how aggressively it can push Rubin NVL72 and associated networking into inference fleets. For Google itself, the moves are of a piece with the Gemini 3.1 Pro cost curve: if you want to serve reasoning at scale at a margin that works, you need your own silicon, and you need at least two credible partners willing to build it.

Gurman: Apple's WWDC Teases a Revamped Siri, Mac Studio Slips to October on DRAM Shortage

  • Bloomberg's Mark Gurman writes that the glowing "26" in Apple's WWDC 2026 invite is a hint at a redesigned Siri experience in iOS 27, iPadOS 27, and macOS 27

  • The new Siri is expected to include a chatbot-style interface, a dedicated Siri app, improved multi-command handling, external AI agent support, and deeper use of on-screen and personal context

  • A worsening global DRAM shortage is pushing the new M5 Mac Studio from a mid-year launch to roughly October, with the touchscreen MacBook Pro slipping into late 2026 or early 2027

  • Mac mini and Mac Studio have already gone out of stock at Apple in multiple regions, with DRAM sell-through pressure driven by AI server demand showing up first in high-unified-memory Macs

  • Why it matters: WWDC is now effectively Apple's AI credibility event, and the memory shortage is the first time the hyperscaler DRAM cycle has visibly slipped an Apple consumer launch, a small but pointed signal that AI infrastructure demand is reaching into the consumer hardware supply chain

Mark Gurman reported in his Sunday Bloomberg column that the promotional graphic for WWDC 2026, featuring a glowing "26," is a deliberate tease for a redesigned Siri experience that Apple plans to unveil at the keynote. The redesign is expected to span iOS 27, iPadOS 27, and macOS 27, with a chatbot-style interface, a dedicated Siri app, better handling of multi-step instructions in a single request, support for external AI agents, and deeper use of on-screen and personal context to answer queries. (Bloomberg, MacRumors, Technobezz)

The Mac hardware roadmap got a more concrete, and less friendly, update in the same column. Gurman reports that the new M5 Mac Studio, which Apple had targeted for a mid-2026 refresh, is now looking at an October launch because of a worsening industry-wide memory shortage. The touchscreen MacBook Pro, once rumored for late 2026, is sliding into late 2026 to early 2027 territory for similar reasons. Consumer signals are already visible: the current Mac mini and Mac Studio have gone out of stock at Apple.com in several regions, with channel checks citing DRAM and LPDDR5 pricing pressure as the constraint. (iClarified, Macworld, TheNextWeb)

Apple's publicly stated position remains that the memory situation is manageable, but the shift in Mac Studio timing is telling. The Mac Studio is the most memory-intensive consumer product Apple ships (64 GB unified memory standard, configurable to 512 GB on M3 Ultra), which makes it the first consumer line to feel the squeeze when the hyperscaler bid for HBM and LPDDR is absorbing the supply. Microsoft flagged a similar dynamic last week when it raised Surface prices by up to $500, citing DRAM cost pass-through.

Why it matters: Two things line up here. First, Apple is now leaning on WWDC as the event where it has to demonstrate that its AI platform strategy (on-device plus private cloud compute plus third-party AI agent interoperability) actually works, after two years of the market grading Apple harshly relative to Google and Anthropic on assistant quality. The tease around Siri suggests the bar is deliberately being raised. Second, a memory-driven product delay at Apple is a leading indicator worth watching. It joins ASML's Q1 commentary on utilization, TSMC's 2026 guide, and Microsoft's Surface price hike as evidence that AI-server demand is absorbing enough DRAM and HBM to reprice consumer hardware. For Apple's own stock, the core question into WWDC is whether Siri 2.0 is credible enough to reset the Apple Intelligence narrative, because the hardware side of the story is about to face a visible supply headwind.

Fermi CEO Departs, FRMI -31% After-Hours, as Trump-Linked 17 GW AI Campus Stalls

  • Fermi Inc. (NASDAQ: FRMI) announced on April 17 that co-founder and CEO Toby Neugebauer departed effective immediately, with no CEO-in-waiting and no prior public signal

  • The board set up an interim Office of the CEO led by COO Jacobo Ortiz and board observer Anna Bofa, with a permanent search underway and further details promised on April 20

  • FRMI fell as much as 31% in after-hours trading; the stock had already lost roughly 75% over the prior six months before the announcement

  • Project Matador, the planned 17 GW Texas AI data center campus Fermi America has pitched as "the world's largest," is facing issues securing water, power, and anchor tenants

  • Why it matters: Fermi America, co-founded by former Energy Secretary Rick Perry and branded as a Trump-aligned AI infrastructure play, has been the poster child for the bull case that AI data center development can scale to 10 GW+ single-site campuses; the stall is a counter-signal to that thesis just as broader AI capex commitments approach $300B annualized

Fermi Inc., the publicly traded vehicle behind Fermi America's planned Texas AI campus, announced that co-founder and CEO Toby Neugebauer stepped down with immediate effect on April 17, 2026. The disclosure was abrupt: Neugebauer spoke publicly about the project the day before and gave no indication his departure was imminent. The company's board formed an interim Office of the CEO composed of COO Jacobo Ortiz and board observer Anna Bofa, added Miles Everson to the board, and said it would provide additional detail on April 20. (Bloomberg, TradingView, StreetInsider)

The market reaction was severe. FRMI fell as much as 31% in after-hours trading on April 17, on top of a roughly 75% decline over the prior six months. The Axios and Distilled reporting paints the underlying picture. The 17 GW Project Matador campus, which Fermi America has pitched as "the world's largest planned data center," is running into the three binding constraints every megasite eventually hits: water availability in a drought-prone part of the Texas Panhandle, power interconnect with ERCOT, and signed anchor tenants willing to commit compute orders at the scale required to justify the buildout. (Distilled, The Tech Capital, DatacenterDynamics)
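Those two reported moves compound. A quick sketch of the cumulative drawdown, treating the "as much as" figures as point estimates:

    # Cumulative drawdown implied by a ~75% six-month decline followed
    # by a 31% after-hours drop; both are the reported "as much as"
    # figures, treated here as point estimates.
    six_month_decline = 0.75
    after_hours_drop = 0.31

    remaining = (1 - six_month_decline) * (1 - after_hours_drop)
    print(f"Value remaining vs. six months ago: {remaining:.1%}")  # 17.3%
    print(f"Cumulative drawdown: {1 - remaining:.1%}")             # 82.8%

In other words, a dollar of FRMI from six months ago was worth roughly 17 cents after the April 17 session.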

The political context matters because Fermi America has marketed itself as the administration-aligned option in the AI infrastructure landscape. Co-founded by former Energy Secretary Rick Perry and positioned as a Trump-branded play on American AI dominance, the company has pitched a story where permitting, land, and power would move faster than for conventional developers. The operational reality is catching up with the narrative.

Why it matters: The single most optimistic part of the AI capex thesis over the last twelve months has been the willingness of new entrants to underwrite gigawatt-class sites with aggressive buildout timelines. Fermi's stall is a reminder that the difficulty of these projects is not about ambition or financing; it is about water rights, grid interconnect queues, and committed tenant demand, all of which compound slowly. For hyperscaler tenants (Microsoft, Google, Amazon, Meta, Oracle), the lesson is that signed leases will continue to skew toward operators who have already cleared these hurdles, reinforcing the Equinix, Digital Realty, QTS, and CoreWeave position. For the neocloud complex (CoreWeave, Nebius, Lambda, Crusoe), Fermi's trouble widens the moat of operators actually delivering megawatts rather than slide decks. And for the broader AI capital cycle, it is the first meaningful public crack in the "any gigawatt, anywhere" bull case, a reminder that the binding constraint in 2027 may not be chips; it may be sites.

Tesla Rolls Out Unsupervised Robotaxi in Dallas and Houston, First True Commercial Expansion

  • Tesla launched its first fully unsupervised, publicly available robotaxi service in Dallas and Houston on April 18, marking its first commercial expansion beyond Austin

  • Initial geofences are small: about 25 square miles in Houston focused on Jersey Village and Willowbrook, and 30 to 35 square miles in Dallas centered on Highland Park and the urban core

  • Musk said the service is "open to the public," a shift from the invitation-only Austin beta, with a stated goal of seven live cities by end of June 2026 (Dallas, Houston, Phoenix, Miami, Orlando, Tampa, Las Vegas)

  • Austin's geofence has expanded to roughly 245 square miles over about a year from an initial 20-square-mile footprint, suggesting the Dallas and Houston zones will scale on a similar trajectory

  • Why it matters: Dallas and Houston are the first unsupervised robotaxi cities Tesla has opened to the general public beyond Austin, validating that the Austin stack is portable and setting up Waymo's first credible multi-city challenger, with near-term read-through to Uber and Lyft's autonomous strategies

Tesla opened its unsupervised robotaxi service to the public in Dallas and Houston on April 18, the first time the company has extended commercial operations beyond its Austin launch site. Elon Musk confirmed on X that the service is "open to the public," a meaningful step from the invitation-only Austin beta that ran through most of 2025. The initial fleet totals 573 vehicles across the two cities, per Tech Insider's coverage. Tesla stock moved roughly 12% higher on the launch day before giving some of it back. (Electrek, Drive Tesla, Teslarati)

The initial geofences are deliberately narrow. Houston covers approximately 25 square miles in a triangle including Jersey Village and Willowbrook. Dallas covers roughly 30 to 35 square miles across Highland Park, the Park Cities, and parts of the urban core. For context, Austin started at about 20 square miles in mid-2025 and has expanded to roughly 245 square miles over the last twelve months, so Tesla has a clear operational template for iterative geofence expansion. Musk's public target remains seven cities live by end of June 2026, adding Phoenix, Miami, Orlando, Tampa, and Las Vegas. (notateslaapp, Eletric Vehicles)
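If the Austin trajectory really is the template, the implied expansion rate is easy to back out. A quick sketch, assuming smooth monthly compounding (the actual expansions come in discrete steps):

    import math

    # Austin template: ~20 sq mi at launch to ~245 sq mi in ~12 months.
    # Assumes smooth monthly compounding; real geofence growth is stepwise.
    austin_start, austin_now, months = 20.0, 245.0, 12
    monthly_growth = (austin_now / austin_start) ** (1 / months) - 1
    print(f"Implied monthly geofence growth: {monthly_growth:.0%}")  # ~23%

    # Time for Houston's ~25 sq mi zone to reach Austin's current scale
    # at the same rate (an illustrative projection, not a Tesla target).
    houston_start = 25.0
    months_to_245 = math.log(austin_now / houston_start) / math.log(1 + monthly_growth)
    print(f"Houston at 245 sq mi in ~{months_to_245:.0f} months")    # ~11

On that template, the new metros would approach Austin-scale coverage in about a year, which is the cadence sitting behind the seven-city June target.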

Competitive positioning is the real story. Waymo remains the category leader with commercial operations in Phoenix, San Francisco, Los Angeles, and Austin; Tesla's April launch now means Waymo faces a real second operator in multiple metros. The Tesla stack is vision-only, does not rely on prior-mapped HD geometry, and is software-first in a way that lets Tesla stamp out new cities with substantially less lead time than a sensor-heavy approach. Skeptics note that the geofences are small and the scaling question (density per square mile, incidents per million miles, cost per ride) remains unproven outside Austin.
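On the incidents-per-million-miles point, the statistical picture should resolve quickly at this fleet size. A rough sketch, where per-vehicle utilization is purely our assumption (Tesla has not disclosed it):

    # How fast a 573-vehicle fleet accumulates safety-relevant mileage.
    # The miles-per-day figure is an assumption; Tesla has not disclosed
    # per-vehicle utilization in Dallas or Houston.
    fleet_size = 573               # reported initial fleet across both metros
    miles_per_vehicle_day = 150    # assumed utilization
    fleet_miles_per_day = fleet_size * miles_per_vehicle_day

    days_per_million = 1_000_000 / fleet_miles_per_day
    print(f"Fleet miles per day: {fleet_miles_per_day:,}")    # 85,950
    print(f"Days per million miles: {days_per_million:.0f}")  # ~12

At that assumed utilization, the two-city fleet logs a million miles roughly every two weeks, so the incident-rate question the skeptics raise will be answerable one way or the other within a quarter.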

Why it matters: Tesla moving from one unsupervised robotaxi city to three is the clearest evidence yet that its autonomous stack is no longer a single-city science experiment. For the robotaxi category overall, a second credible operator validates that commercial AV services can scale and tightens the competitive window for Waymo's expansion schedule, Amazon's Zoox, Mobileye's robotaxi partners, and Chinese operators like Baidu and Pony.ai that are eyeing US entry. For Uber and Lyft, the near-term strategic question is whether ride-sharing networks end up as aggregators of AV fleets or as operators of their own (Uber has been hedging both ways). For Tesla equity, the bull case remains that robotaxi revenue is the re-rating catalyst, and each city that opens to the public is one more proof point toward that thesis, with the caveat that unit economics on a 25-to-35-square-mile geofence are structurally worse than on a scaled 245-square-mile deployment.
