
Hello, Human Guide

Today, we will talk about these three stories:

  • Big Tech is pouring $650 billion into AI infrastructure in 2026.

  • Nearly half of government AI projects are not integrated into real workflows.

  • AI labs are accusing rivals of secretly scraping proprietary data.

Ship the message as fast as you think

Founders spend too much time drafting the same kinds of messages. Wispr Flow turns spoken thinking into final-draft writing so you can record investor updates, product briefs, and run-of-the-mill status notes by voice. Use saved snippets for recurring intros, insert calendar links by voice, and keep comms consistent across the team. It preserves your tone, fixes punctuation, and formats lists so you send confident messages fast. Works on Mac, Windows, and iPhone. Try Wispr Flow for founders.

The $650 Billion AI Gamble No One Wants to Call a Bubble

The money is moving faster than the models.

Reuters reports U.S. Big Tech companies are expected to invest $650 billion in AI in 2026, up from roughly $410 billion last year. Bridgewater Associates ties much of U.S. growth expectations to AI infrastructure spending, from data centers to advanced chips, with companies like Nvidia acting as bellwethers for the entire trade.

What stands out is how much of this bet is built on forward belief, not current cash flow. Late at night, when dashboards refresh and server farms hum, this feels less like incremental innovation and more like an arms race where nobody wants to be the first to slow down. This is less about product-market fit and more about geopolitical positioning.

If returns do not materialize soon, capital will concentrate brutally. Everything else gets cut.

If $650 billion is built on projected dominance, the real question is what happens when the growth narrative flickers and the screens go dark.

Read the full Reuters report here.

Governments Bought AI. They Forgot to Use It.

Half the AI never made it into the workflow.

New research highlighted by Finviz shows nearly 50% of UK public sector AI initiatives are deployed as bolt-on or standalone tools rather than embedded into integrated systems. That means pilots, dashboards, and proofs of concept, but limited operational transformation inside actual government processes.

What bothers me is how familiar this pattern looks. Buying AI feels decisive at 9 a.m. under fluorescent office lights, but redesigning workflows, retraining staff, and restructuring incentives is slow, political, and uncomfortable. This is not a technology problem; it is an institutional one.

Spending without integration leads to declining usage and quiet resistance. The tools exist. The behavior has not changed.

If governments keep stacking AI on top of old systems, the real question is whether taxpayers are funding transformation or just expensive experiments.

Read the research summary here.

The AI Labs Are Secretly Training on Each Other

The training data wars just went public.

Tech industry reports reveal major AI labs are accusing competitors, including Chinese firms, of using fake accounts and scraping proprietary model outputs to improve their own systems. As frontier models grow more expensive to train, access to high-quality synthetic and proprietary data is becoming a strategic asset.

What struck me is how inevitable this feels. When models are trained on the internet, and the internet is increasingly filled with AI output, the line between public knowledge and proprietary intelligence blurs fast. Late at night, laptop open and Slack buzzing, the competitive pressure does not whisper; it roars.

This is less about copyright lawsuits and more about strategic leverage in a global AI race. Whoever controls the best data compounds faster.

If AI labs start feeding on each other’s outputs, the real question is whether innovation accelerates or collapses into recursive noise.

Read more on the data scraping allegations here.

Keep Reading