Editorial note
This morning’s thread is about two linked themes: the gap between spectacle and substance, and how messy engineering and economic choices can outweigh technical breakthroughs in their impact. Below: short updates on developer-facing changes, then two deeper looks at a model leak at Anthropic and a new report that says machine traffic is now the internet’s default.
In Brief
GitHub will use private-repo Copilot interactions unless you opt out
Why this matters now: GitHub’s change means private repositories may be included in training data for GitHub’s AI models unless users toggle the setting, affecting developer privacy and corporate IP control.
GitHub announced that interactions with Copilot in private repos can be used to “train and improve our AI models,” and users have until April 24 to opt out in Settings > Copilot > Features. For teams and individuals with proprietary code in private repos, that’s an explicit moment to check account and org-level privacy settings. The change is targeted at improving model quality, but it raises familiar questions about consent, IP ownership, and the difficulty of auditing what ends up in training corpora. See the original Reddit PSA for community reaction and practical opt‑out steps.
Claude costs spike — OpenClaw users scramble for cheaper routing
Why this matters now: Anthropic’s higher prices and stricter session limits for Claude are forcing OpenClaw users and other agent operators to reroute workloads and benchmark cheaper models to avoid unpredictable bills.
Users in agent ecosystems like OpenClaw report sudden cost and reliability pain: higher billing, peak-hour caps, and five-hour session windows that now hit more users. The pragmatic response from the community is multi‑model routing—reserve premium models for human-facing tasks, use cheaper or self‑hosted models for background jobs, and keep fallbacks to avoid single-vendor outages. That approach reduces costs but adds operational complexity and testing overhead for developers who rely on continuous or always-on agents.
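The routing pattern the community describes can be sketched in a few lines. A minimal sketch, assuming hypothetical model names and a `call_model` client placeholder for whatever providers a team actually uses:

```python
# Sketch of multi-model routing with fallback: premium models for
# human-facing work, cheaper models for background jobs.
# Model names and call_model() are hypothetical placeholders.

ROUTES = {
    "interactive": ["premium-model", "cheap-hosted", "local-fallback"],
    "background":  ["cheap-hosted", "local-fallback"],
}

def call_model(model: str, prompt: str) -> str:
    """Placeholder for a real provider client; raises on caps or outages."""
    raise NotImplementedError(f"no client wired up for {model}")

def route(task_kind: str, prompt: str) -> str:
    """Try each model in priority order; fall through on any failure."""
    last_error = None
    for model in ROUTES[task_kind]:
        try:
            return call_model(model, prompt)
        except Exception as exc:  # billing cap, rate limit, outage, ...
            last_error = exc
    raise RuntimeError(f"all models failed for {task_kind!r}") from last_error
```

The catch, as the thread notes, is that every extra route is another model whose outputs need testing, which is where the operational overhead comes from.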
Deep Dive
Anthropic accidentally exposed details of an unreleased model
Why this matters now: Anthropic’s public CMS leak revealed internal drafts and specifics about an unreleased model called Claude Mythos, potentially exposing sensitive technical and business details before the company intended.
Anthropic left nearly 3,000 unpublished files—draft blog posts, images and PDFs—in a public content-management system, and security researchers discovered the cache. According to the reporting, the exposure included descriptions of an unreleased model that Anthropic called Claude Mythos and language framing it as “the most capable model we’ve built to date.” Anthropic told reporters the leak was due to “human error in the CMS configuration” and that customer data and core infrastructure were not affected; Fortune alerted Anthropic and the company moved to lock down the assets.
“the most capable model we’ve built to date”
A misconfigured CMS is a simple-sounding problem with outsized consequences. Pre-release details can provide competitors tactical insights into architecture choices, training data or novel capabilities; they’re also a vector for attackers who could test adversarial prompts or craft exploits against specific model behaviors once those behaviors are known. For a company selling privacy and safety improvements as differentiators, the optics are also important—leaks undermine trust even when no customer data is exposed.
Operational takeaways are straightforward and worth repeating for engineering leaders: treat staging systems and CMSes as part of your threat model. That means least-privilege defaults, automation that prevents public toggles for draft assets, and routine audits that look for unintentionally exposed endpoints. On the market side, pre-release leaks can prompt regulatory attention and investor noise; firms should expect questions about configuration management and how they’ll prevent similar incidents as models grow more commercially sensitive.
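The audit piece of that checklist doesn’t require heavy tooling. A minimal sketch, assuming a list of draft-asset URLs exported from the CMS (the endpoints below are invented for illustration):

```python
# Sketch of a routine audit that flags draft CMS assets reachable
# without authentication. The endpoint list is hypothetical; in
# practice, pull it from your CMS's own API or asset inventory.
import urllib.error
import urllib.request

DRAFT_ENDPOINTS = [
    "https://cms.example.com/drafts/launch-post.html",
    "https://cms.example.com/assets/unreleased-model.pdf",
]

def publicly_reachable(url: str, timeout: float = 5.0) -> bool:
    """True if an unauthenticated GET succeeds: a policy violation for drafts."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 400
    except (urllib.error.URLError, OSError):
        # 4xx/5xx, DNS failures, and timeouts all count as "not public".
        return False

def audit(endpoints):
    """Return the subset of draft endpoints that leak to the open web."""
    return [url for url in endpoints if publicly_reachable(url)]
```

Wiring a check like this into CI or a scheduled job turns “someone noticed the cache” into an alert you see before researchers do.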
Finally, the leak surfaces a deeper cultural challenge: tooling and human workflows haven’t caught up with the sensitivity of ML artifacts. As models become strategic assets, simple editorial or marketing mistakes can produce outsized harm. Security teams should treat unreleased model details like any other high-risk intellectual property and apply the same controls they would to source code, binaries, and customer data.
Report: AI and bot traffic now outpace human web activity
Why this matters now: Human Security’s report says AI-driven and automated traffic surged in 2025 and now exceeds human-originated interactions—a shift that alters fraud risk, analytics baselines and how companies handle identity on the web.
Cybersecurity firm Human Security released a State of AI Traffic report that found AI-driven traffic rose 187% through 2025 and that agentic AI—systems that act on users’ behalf—increased nearly 8,000% last year. CEO Stu Solomon summarized the shift bluntly: “The internet as a whole was created with this very basic notion that there’s a human being on the other side of the computer screen, and that notion is very rapidly being replaced.” The company’s analysis is based on processing more than a quadrillion interactions on its platform.
“Machine-based traffic is effectively replacing humans as the dominant form of traffic on the other side of the internet.”
If these figures are directionally accurate, they change many assumptions that underlie product design, fraud detection, and web analytics. For example:
- Fraud systems tuned to human timing and click patterns will see higher false positives or miss automated abuse that mimics human behavior.
- Advertising attribution and engagement metrics grow harder to trust when a large fraction of "users" are bots or autonomous agents executing tasks.
- Rate-limiting and API pricing models must adapt to agent-driven loads that can be periodic, massively parallel, and persistent.
The report also calls out measurement difficulties—user-agent strings and other markers increasingly lie or are left at defaults—so the headline numbers should be read with caution. But even conservative interpretations indicate a substantive shift: businesses and platforms must distinguish between human intent and machine-driven activity and design authentication, consent, and billing models accordingly.
Practically, that means investing in richer behavioral telemetry and in provenance signals: cryptographic attestations, signed agent identities, and clearer metadata for agent actions. It also means policy and regulation will likely chase these technical changes—expect pressure for standards around how autonomous agents identify themselves and how platforms label automated interactions to end users.
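None of these provenance mechanisms is standardized yet. As one hypothetical illustration of a signed agent identity, a platform could require agents to sign each request with a registered secret; the header names and scheme below are invented for this sketch, not an existing standard:

```python
# Illustrative sketch of a signed agent-identity header. The X-Agent-*
# header format and the shared-secret setup are assumptions for this
# example; real deployments would negotiate keys out of band.
import hashlib
import hmac
import time

def sign_request(agent_id: str, method: str, path: str, secret: bytes) -> dict:
    """Agent side: produce headers attributing a request to a known agent."""
    timestamp = str(int(time.time()))
    message = f"{agent_id}\n{method}\n{path}\n{timestamp}".encode()
    signature = hmac.new(secret, message, hashlib.sha256).hexdigest()
    return {
        "X-Agent-Id": agent_id,
        "X-Agent-Timestamp": timestamp,
        "X-Agent-Signature": signature,
    }

def verify(headers: dict, method: str, path: str, secret: bytes,
           max_skew: int = 300) -> bool:
    """Platform side: recompute the MAC and reject stale timestamps (replay defense)."""
    if abs(time.time() - int(headers["X-Agent-Timestamp"])) > max_skew:
        return False
    message = (f'{headers["X-Agent-Id"]}\n{method}\n{path}\n'
               f'{headers["X-Agent-Timestamp"]}').encode()
    expected = hmac.new(secret, message, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, headers["X-Agent-Signature"])
```

Even a scheme this simple gives a platform something user-agent strings cannot: a verifiable link between a request and a registered agent identity, which is the raw material for labeling, rate-limiting, and billing automated traffic separately.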
Closing Thought
We’re living through a period where small operational mistakes and pricing decisions ripple faster than model breakthroughs. The Anthropic leak is a reminder: protecting ML IP is as much about process and defaults as it is about clever algorithms. The Human Security report warns that the fundamental audience of the web—the “other side of the screen”—is changing, and systems that assume a human by default will need rethinking. For engineers and product leaders, that points to two priorities this quarter: lock down your staging and CMS workflows, and start treating machine traffic as a first-class, measurable actor in your systems.
Sources
- Exclusive: Anthropic left details of an unreleased model, an upcoming exclusive CEO event, in a public database
- AI and bots have officially taken over the internet, report finds
- PSA: If you don't opt out by Apr 24 GitHub will train on your private repos
- Claude prices skyrocketed, what model are you using for OpenClaw now?