In Brief

IPv6 traffic crosses the 50% mark

Why this matters now: Google’s IPv6 adoption milestone shows that more than half of Google users now reach services over IPv6, signaling real-world deployment across carriers and clients.

Google’s public IPv6 dashboard reports the platform has crossed the symbolic 50% threshold for IPv6 traffic, a useful health check as operators and device vendors push the newer protocol, according to Google’s stats page. This isn’t a finish line — adoption is uneven across regions, enterprise networks and specific services — but when IPv6 overtakes IPv4 among a major provider’s users, engineering teams should be testing IPv6 paths and not treating the protocol as experimental.

“We are continuously measuring the availability of IPv6 connectivity among Google users,” the dashboard notes.

Key takeaway: IPv6 is mainstream enough that compatibility issues are now operational problems, not theoretical ones. If you run services, validate dual-stack behaviour and watch what your CDN, cloud provider, or load balancer does when IPv6 is preferred.
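As a starting point for that validation, a short Python sketch can resolve a host over both address families and attempt a TCP connection to each, trying IPv6 addresses first the way Happy Eyeballs clients do. The host and port are placeholders; this checks reachability only, not what your CDN or load balancer does behind the scenes:

```python
import itertools
import socket

def dual_stack_order(addrinfos):
    """Interleave IPv6 and IPv4 results, IPv6 first (Happy Eyeballs-style ordering)."""
    v6 = [ai for ai in addrinfos if ai[0] == socket.AF_INET6]
    v4 = [ai for ai in addrinfos if ai[0] == socket.AF_INET]
    ordered = []
    for pair in itertools.zip_longest(v6, v4):
        ordered.extend(ai for ai in pair if ai is not None)
    return ordered

def check_dual_stack(host, port=443, timeout=3.0):
    """Report which address families the host both resolves and accepts TCP on."""
    results = {}
    infos = socket.getaddrinfo(host, port, type=socket.SOCK_STREAM)
    for family, _, _, _, sockaddr in dual_stack_order(infos):
        label = "IPv6" if family == socket.AF_INET6 else "IPv4"
        try:
            # sockaddr[:2] is (address, port) for both families.
            with socket.create_connection(sockaddr[:2], timeout=timeout):
                ok = True
        except OSError:
            ok = False
        # Record success if any address in the family connects.
        results[label] = results.get(label) or ok
    return results
```

If `check_dual_stack("your-service.example")` comes back with IPv6 failing while IPv4 succeeds, clients that prefer IPv6 are paying a fallback penalty on every connection — exactly the kind of silent degradation worth catching before it becomes an operational problem.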

Cal.com is going closed source

Why this matters now: Cal.com’s move from open source to closed source exemplifies a growing corporate response to AI-driven vulnerability scanning, and it forces projects that handle user data to weigh transparency against perceived attack-surface reduction.

Cal.com has announced that its production codebase will become closed while it ships a community fork called Cal.diy, framing the decision as a security response to automated tools that can scan open repos for weaknesses; see Cal.com’s post. The announcement sparked debate: proponents see a company reasonably protecting sensitive auth and data paths, while critics point out that open source projects are collectively audited and hardened by their communities — and that secrecy can slow detection of real bugs.

Practical note: teams depending on third‑party open projects should track divergence and verify that any forked or commercial branches maintain audits and transparency where it matters (auth, encryption, third‑party integrations).

Darkbloom — private inference on idle Macs

Why this matters now: Darkbloom’s pitch to monetize idle Macs highlights a new edge-compute model with immediate trade-offs around device control, privacy, and user economics.

Darkbloom proposes a marketplace that runs model inference on otherwise-idle Macs, promising quick payback and local privacy guarantees; the project is described at Darkbloom’s site. Early testers reported a rough experience and raised red flags about device management privileges.

One commenter summarized the worry bluntly: “Basically that computer is theirs now.”

If you’re tempted, treat early installs like exposed endpoints: audit what management profiles are installed, consider network segmentation, and assume the revenue model will evolve once real utilization and costs (power, support) show up.
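For the profile-audit step, a thin Python wrapper around the macOS `profiles` tool can surface whether a device is under MDM control. The `profiles status -type enrollment` invocation is macOS-specific, and the "Field: value" output shape the parser assumes is typical of that command but may vary across macOS versions:

```python
import subprocess

def parse_enrollment(text):
    """Parse 'Field: value' lines (as emitted by `profiles status`) into a dict."""
    status = {}
    for line in text.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            status[key.strip()] = value.strip()
    return status

def mdm_enrollment_status():
    """Query macOS for device-management enrollment (macOS only)."""
    out = subprocess.run(
        ["profiles", "status", "-type", "enrollment"],
        capture_output=True, text=True, check=True,
    ).stdout
    return parse_enrollment(out)
```

Anything reporting MDM enrollment you didn’t knowingly set up is a sign the "idle compute" software has deeper control of the machine than advertised.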

Deep Dive

Google broke its promise to me – now ICE has my data

Why this matters now: Amandla Thomas‑Johnson says Google handed his account data to ICE via an administrative subpoena, raising urgent questions about platform notice, transparency, and whether enforcement tools are chilling political speech for non‑citizens.

Amandla Thomas‑Johnson, a dual UK/Trinidad and Tobago citizen and Ph.D. candidate, reports that in May 2025 Google produced identifying account logs to U.S. Immigration and Customs Enforcement (ICE) in response to an administrative subpoena — and that Google did not provide the advance notice it usually promises, according to the EFF’s writeup. The terse email he received read, “Google has received and responded to legal process from a law enforcement authority compelling the release of information related to your Google Account,” which is the kind of boilerplate users often see after a disclosure. EFF argues Google “bypassed” its notification promise and released logs (IP addresses, location data, session times) that allow invasive stitching of a user’s activity.

This matters on three levels. First, notification promises from companies are a practical safeguard: they let people challenge requests or take steps to protect contacts and data. If companies treat those promises as optional when law enforcement asks, the protection erodes. Second, the use of administrative subpoenas — a tool with lower judicial oversight than warrants — raises concerns about how migration enforcement intersects with protected political expression, especially for foreign nationals. And third, the incident is a reminder that metadata like IPs and session timestamps can be as revealing as content; platforms hold rich, linkable logs that governments can use for tracking.

Community reactions split on legal nuance: some argue companies are barred from notifying users when gagged; others point out EFF’s claim that the subpoena here did not include a gag order, suggesting corporate discretion played a role. The EFF has asked state attorneys general to investigate for deceptive trade practices, reframing this from a narrow disclosure dispute into a consumer‑protection issue that regulators might find comfortable addressing.

For practitioners and privacy-conscious users: review your threat model (especially if you're a non‑citizen involved in activism), minimize logging where feasible, and push for clearer platform controls around state requests. For lawyers and policymakers, the case crystallizes a question: should notification promises be enforceable or auditable, and how do we reconcile legal process rules with platform transparency commitments?


Cybersecurity looks like proof of work now

Why this matters now: The AI Security Institute’s tests suggest large generative models with vast token budgets can find multi‑step exploits efficiently, meaning security hardening may become a money race — defenders buying compute to scan, attackers buying compute to probe.

A third‑party analysis of Anthropic’s Mythos and related models, summarized in a developer’s writeup, shows models making steady progress on simulated 32‑step network‑takeover tasks when given massive token budgets (hundreds of millions of tokens per run). The key observation: “models continue making progress with increased token budgets,” implying that diminishing returns aren’t obvious at the tested scale. In plain terms, throwing more inference compute at a problem keeps yielding new discoveries.

That flips some long‑standing assumptions. Historically, defenders benefit from visibility into source and control of deployment; but if attackers can outsource exploration to powerful models, defenders may need to buy equivalent model scans to find and patch issues before they’re weaponized. The economics favor whichever side can afford sustained token spending or faster access to better models. This is why Cal.com’s pivot to closed source feels less like a licensing quarrel and more like a strategic posture in a token‑budgeted contest — hide the map, force attackers to spend more to rediscover it.

There are hopeful counters: defenders often have structural advantages — knowledge of intended behavior, access to full source, and the ability to focus scans on diffs rather than entire ecosystems. Open source also scales defensive audits if many independent parties run AI hardening against the same code. But the risk is a “dark forest” dynamic where exposing source code increases attack surface and attackers with big budgets can iterate faster.

What to do now? Security teams should:

  • Treat AI-enabled fuzzing and attack simulation as a procurement item: budget for runs and include them in release cycles.
  • Prioritize automation that targets changed code rather than blanket scans to get more defence per token.
  • Consider gated disclosure or additional runtime hardening (obfuscation, canaries, telemetry) where public code is unavoidable.
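The diff-targeting idea in the second bullet can be sketched in a few lines of Python: list the files changed since a base branch and feed only those to whatever scanner you run, so each release cycle spends its token budget on new attack surface. The base ref and extension allowlist below are placeholders to adjust for your stack:

```python
import subprocess

# Extensions worth scanning; adjust to your codebase (placeholder list).
CODE_EXTS = (".py", ".js", ".ts", ".go", ".c", ".rs")

def filter_code_files(paths):
    """Keep only paths that look like scannable source code."""
    return [p for p in paths if p.endswith(CODE_EXTS)]

def changed_code_files(base_ref="origin/main"):
    """List code files changed since base_ref -- the scan targets for this cycle."""
    out = subprocess.run(
        ["git", "diff", "--name-only", base_ref, "--"],
        capture_output=True, text=True, check=True,
    ).stdout
    return filter_code_files(out.splitlines())
```

Hooking `changed_code_files()` into CI and passing the result to an AI-assisted scanner keeps spend proportional to change volume rather than repository size, which is where the per-token defence economics argued above actually bite.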

This is a turning point: security is not just people and process any more — it’s compute economics. Organizations that recognize and budget for that shift will avoid being surprised when exploit discovery speeds up.

Closing Thought

Big models are changing two things at once: how quickly vulnerabilities are found, and the balance of power over surveillance data. Today’s stories — a privacy failure reportedly involving Google and ICE, and the idea that security is becoming a token budget contest — are connected. Companies and defenders need to rethink promises, transparency mechanisms, and where they place their bets: open collaboration and public audits, or tighter control and heavier investment in private hardening. Neither path is risk‑free, but pretending the rules haven’t changed will be a costly mistake.

Sources