Editorial intro

The themes today are governance and restraint: democratic friction over mass scanning of private messages, and technical pushback against the rush to automate software with agents. Both conversations cut the same way — how much do we centralize power (state or model), and where do humans stay in the loop?

In Brief

Running a Tesla Model 3 computer on my desk using parts from crashed cars

Why this matters now: Security researchers can recreate an actual Tesla infotainment and autopilot environment on a bench, making real-world vulnerability testing, and researcher access programs like Tesla's, more practical and visible.

A security write-up shows a researcher bought salvaged Tesla Model 3 components and, after a lot of wiring headaches, got the car's touchscreen and MCU booting on a desk, running the vehicle OS and exposing the services the car normally uses for diagnostics. The post documents practical obstacles that any motivated researcher can overcome: obscure connectors, a burned voltage regulator, and, ultimately, the purchase of a complete dashboard harness to finish the setup. It then highlights interesting access points such as a diagnostics API called ODIN and an SSH server that authenticates with Tesla-signed keys.

"SSH allowed: vehicle parked"

The write-up is useful both as a maker story and as a probe of Tesla’s security incentive model; the company’s “Root access program” gives permanent SSH certs to researchers who responsibly disclose a rooting bug, which some commenters compared to Apple’s controlled research devices — useful, but potentially creating perverse incentives. Read the hands-on account at the original Tesla write-up.
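Once a salvaged MCU is networked on a bench, the first step in this kind of research is simply finding out what is listening. A minimal sketch of that discovery step, in Python; the bench IP address and the port list here are illustrative assumptions, not values from the write-up:

```python
# Hypothetical sketch: probe a bench-mounted MCU for listening TCP services.
# The address and port list are assumptions for illustration only.
import socket

BENCH_IP = "192.168.90.100"            # assumed address of the MCU on the bench LAN
CANDIDATE_PORTS = [22, 80, 443, 8080]  # SSH plus a few common service ports

def probe(host: str, port: int, timeout: float = 0.5) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def scan(host: str, ports: list[int]) -> dict[int, bool]:
    """Map each candidate port to whether something answers there."""
    return {port: probe(host, port) for port in ports}

if __name__ == "__main__":
    for port, is_open in scan(BENCH_IP, CANDIDATE_PORTS).items():
        print(f"{BENCH_IP}:{port} -> {'open' if is_open else 'closed'}")
```

An open port 22 here would only be the start: per the write-up, the SSH server accepts Tesla-signed keys, so connecting still requires credentials from the root access program.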

Supreme Court sides with Cox in copyright fight over pirated music

Why this matters now: The Supreme Court’s unanimous decision reduces legal pressure on ISPs to police subscriber accounts and shifts the enforcement burden away from providers like Cox.

The Court ruled that Cox Communications cannot be held contributorily liable simply because subscribers used its service to pirate music, saying Cox did not "induce" infringement or offer a service tailored to it. Justice Thomas wrote the opinion, reversing a large judgment and narrowing a legal tool that record labels had used to push for aggressive takedowns and account terminations. This recalibrates expectations about ISPs acting as copyright enforcers and will ripple into rights-holder strategies and consumer privacy debates; more from the New York Times report.

"Cox neither induced its users’ infringement nor provided a service tailored to infringement."

90% of Claude-linked output going to GitHub repos with <2 stars

Why this matters now: Anthropic’s Claude Code appears widely used, but most public commits attributed to it land in tiny repos — a signal that AI coding is prolific yet concentrated in low‑visibility projects.

A large analysis of public GitHub activity tied to Anthropic’s Claude Code finds most generated commits end up in repositories with fewer than two stars, with millions of commits and tens of billions of lines touched. That looks alarming until you remember most public repos are low‑star by default; critics warned this could mean a glut of low-quality, hard-to-maintain code, while supporters said AI lowers the cost of useful one-off projects. The aggregate dataset and discussion are at the Claude Code analysis.
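The headline number is a straightforward aggregation over (repo stars, commit count) pairs. A minimal sketch of that computation, with invented sample rows standing in for the real dataset:

```python
# Sketch of the aggregation behind the headline statistic.
# The sample rows below are invented for illustration; the actual
# analysis ran over public GitHub activity at far larger scale.

def low_star_share(rows: list[tuple[int, int]], star_cutoff: int = 2) -> float:
    """Fraction of commits landing in repos with fewer than `star_cutoff` stars.

    Each row is (repo_stars, commit_count).
    """
    total = sum(commits for _, commits in rows)
    low = sum(commits for stars, commits in rows if stars < star_cutoff)
    return low / total if total else 0.0

sample = [
    (0, 120),   # hobby repo, no stars
    (1, 60),    # small personal project
    (15, 10),   # modestly popular library
    (900, 10),  # well-known project
]
print(f"{low_star_share(sample):.0%} of sample commits hit repos with <2 stars")
# → 90% of sample commits hit repos with <2 stars
```

The point the critics and supporters argue over is not the arithmetic but the denominator: when most public repos have under two stars anyway, concentration there is the expected baseline, not automatically a quality signal.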

Deep Dive

The EU still wants to scan your private messages and photos

Why this matters now: The European Parliament may be forced into a repeat vote on the Commission-backed "Chat Control," reopening the fight over whether platforms can scan everyone’s private communications by default.

After a March 11 vote that replaced blanket surveillance with a scheme for targeted monitoring requiring judicial oversight, activists say the European People’s Party (EPP) is attempting a procedural move to force a new vote. Opponents warn this is "a direct attack on democracy and blatant disregard for your right to privacy," and organizers behind Fight Chat Control are urging rapid constituent contact with MEPs to stop the measure.

"a direct attack on democracy and blatant disregard for your right to privacy."

Why the parliamentary maneuver matters: the European Commission can refile proposals, and legislative bodies have repeatedly repackaged similar surveillance powers. The current compromise — targeted scanning only under judicial authorization — moves away from a default, automated sweep of all private messages and photos. If that compromise is overturned, platforms could be legally required to deploy wide‑scale content scanning systems that fundamentally weaken end‑to‑end encryption and reshape platform privacy architectures.

For engineers and privacy-conscious citizens this is a crucial moment because the technical decisions required to comply — server-side scanning, client-side backdoors, or metadata-based heuristics — will lock in long-term architectural trade-offs. Activists are offering short, practical steps: email and call your MEP, and point them to the parliamentary vote record and the legal protections in Articles 7 and 8 of the EU Charter. The original advocacy page has the campaign details at Fight Chat Control.

Thoughts on slowing the fuck down (agents and code)

Why this matters now: As teams deploy coding agents at scale, senior engineers warn against treating agents as code-generators without human architectural oversight — the result can be brittle systems that break faster than they’re built.

A thoughtful post by an experienced developer argues bluntly: "Everything is broken." Agents produce large volumes of code quickly but don't learn from repeated mistakes, lack a global view of a codebase, and create maintenance debt when duplicated, inconsistent pieces pile up. The practical advice is counterintuitive: use agents for micro-tasks — scaffolding, suggestion, rubber-ducking — and keep humans responsible for architecture, ownership, and review.

"slowing the fuck down is the way to go."

This matters because the temptation to automate entire feature flows produces brittle, inscrutable systems that test suites and search tools can't reliably fix. The post recommends concrete safeguards: limit how much generated code you accept unreviewed, require small, reviewable pull requests, and treat agents as assistants rather than replaceable engineers. The author’s credibility (noted in discussions) and concrete trade-offs make this less hand-wringing and more of a playbook; read the full post at the author’s blog.
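The "small, reviewable pull requests" safeguard can be enforced mechanically in CI. A sketch of such a gate, assuming a unified-diff input; the 300-line threshold is an arbitrary illustration, not a value from the post:

```python
# Sketch of a pre-merge gate that flags oversized diffs, one way to
# enforce the small-PR safeguard for agent-generated code. The
# 300-line limit is an assumed, illustrative threshold.

MAX_CHANGED_LINES = 300

def changed_lines(unified_diff: str) -> int:
    """Count added/removed lines in a unified diff, ignoring file headers."""
    count = 0
    for line in unified_diff.splitlines():
        if line.startswith(("+++", "---")):
            continue  # file-header lines, not content changes
        if line.startswith(("+", "-")):
            count += 1
    return count

def review_gate(unified_diff: str, limit: int = MAX_CHANGED_LINES) -> bool:
    """Return True if the diff is small enough for normal review."""
    return changed_lines(unified_diff) <= limit

sample_diff = """\
--- a/app.py
+++ b/app.py
@@ -1,3 +1,4 @@
 import os
+import sys
-print(os.name)
+print(sys.platform)
"""
print("mergeable" if review_gate(sample_diff) else "needs splitting")
# → mergeable
```

A gate like this does not replace human review; it just keeps each agent-generated change small enough that a human review is actually possible, which is the post's core recommendation.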

Closing Thought

Two threads tie these pieces together: power and stewardship. Whether the power sits in governments scanning private messages, platforms wielding diagnostic access, or models generating production code, the right question is who keeps ultimate responsibility and how we build checks before systems become brittle or rights are eroded. Today’s sensible moves are procedural — insist on judicial checks, insist on human review, and insist on clear incentives for researchers and platforms.

Sources