Editorial: Two threads tie today’s stories together — tooling we treat as harmless (npm installs, “official” mobile apps) can suddenly be the most dangerous surfaces, and small operational choices (allowing lifecycle scripts, shipping native apps) have outsized privacy and safety consequences. Below are short takes and two deeper reads worth acting on.

In Brief

Claude Code's source code has been leaked via a source map in its npm package

Why this matters now: Anthropic’s Claude Code leak exposes internal source, feature flags, and anti‑distillation defenses that competitors or attackers can study and probe today.

A researcher found a source‑map‑linked archive in Anthropic’s npm package that revealed internal files, unreleased features (codenames like “kairos” and a Buddy System), and telemetry heuristics. The leak also exposed an “anti_distillation” trick intended to poison scraped training data. Commenters called out both the cultural lapse (shipping source artifacts) and the practical risk: leaked feature flags make it easier to emulate or probe product behavior. See the original discovery via the Twitter thread for details.

"Leaked feature flags let outsiders probe and even emulate upcoming product behavior."

Do your own writing

Why this matters now: Relying on LLMs to generate workplace documents risks outsourcing the very thinking those documents are meant to make explicit.

Alex H. argues that writing is thinking: a PRD should resolve product tradeoffs, and if you let an LLM write for you, you miss the cognitive work that clarifies intent. The post recommends using models for research, transcription, and fact‑checking — not for the core reasoning. The essay is a timely pushback as teams adopt drafts‑by‑AI for routine deliverables; read more at the original post.

"The goal of writing is not to have written. It is to have increased your understanding."

Ollama is now powered by MLX on Apple Silicon (preview)

Why this matters now: Ollama’s MLX backend significantly improves local inference speed on M‑series Macs, making heavier on‑device LLM workflows more practical.

Ollama rebuilt its Apple Silicon backend on Apple’s MLX framework to use unified memory and Neural Accelerators; their preview claims big speedups and closer parity with NVIDIA hardware via NVFP4 support. They also added smarter caching to lower memory pressure and reuse prefill work. If you run local models on a Mac with 32+ GB, this preview is worth testing; see Ollama’s post for benchmarks.

"0.19 will hit around '1851 token/s prefill and 134 token/s decode' when running with int4."

Deep Dive

Axios compromised on NPM – Malicious versions drop remote access trojan

Why this matters now: Developers and CI systems that ran npm install for the poisoned axios versions could have a cross‑platform RAT installed within seconds — and casual audits may miss it.

StepSecurity’s investigation shows this was a surgical supply‑chain attack. The attacker published two axios releases that themselves contained no malicious source — instead they added a fake dependency [email protected] whose postinstall lifecycle script immediately executed a dropper. That dropper contacted a command‑and‑control server, fetched platform‑specific payloads (macOS, Windows, Linux), and in some cases rewrote package.json to a clean stub so a quick look wouldn’t show anything suspicious.

"There are zero lines of malicious code inside axios itself, and that's exactly what makes this attack so dangerous." — StepSecurity

Why the technique is effective: npm lifecycle scripts run by default during install, and many CI pipelines and developer machines run npm install without sandboxing. The adversary exploited that default to achieve code execution at install time and then clean up traces, so tools like npm list or npm audit can be fooled.
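The install-time hooks the attacker abused are ordinary package.json script entries. As a rough sketch (the helper and sample manifests below are illustrative, not drawn from the actual axios incident), a dependency tree can be screened for packages that declare install-time lifecycle scripts:

```javascript
// Sketch: flag package manifests that declare install-time lifecycle
// scripts. `hasLifecycleScripts` and both sample manifests are
// hypothetical, for illustration only.
const LIFECYCLE_HOOKS = ["preinstall", "install", "postinstall"];

function hasLifecycleScripts(manifest) {
  const scripts = manifest.scripts || {};
  return LIFECYCLE_HOOKS.filter((hook) => hook in scripts);
}

// A benign package: no install-time hooks, nothing runs on npm install.
const clean = {
  name: "some-utility",
  version: "1.3.0",
  scripts: { test: "node test.js" },
};

// The pattern StepSecurity describes: a dependency whose postinstall
// script executes a dropper the moment it is installed.
const suspicious = {
  name: "example-malicious-dep", // hypothetical name
  version: "0.0.1",
  scripts: { postinstall: "node dropper.js" },
};

console.log(hasLifecycleScripts(clean));      // []
console.log(hasLifecycleScripts(suspicious)); // [ 'postinstall' ]
```

A lifecycle script is not proof of malice (many native modules legitimately compile on install), but in CI any newly appearing hook in a transitive dependency deserves a look before it is allowed to run.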

Immediate practical mitigations:

  • Use --ignore-scripts in CI to avoid running lifecycle scripts on untrusted installs.
  • Prefer package managers or runtimes that default to safer behavior (pnpm, bun, or lockfile policies).
  • Enable min‑release‑age where possible and inspect dependency trees before upgrading.
  • Sandbox installs (containerized builds, ephemeral runners) and monitor outbound DNS/HTTP during CI jobs.
  • Where feasible, reduce reliance on HTTP client libraries when native APIs suffice (Node now has fetch).
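On that last point: since Node 18 ships a global fetch, a one-off HTTP call no longer needs a third-party client at all. A minimal sketch (the URL and helper name are illustrative):

```javascript
// Sketch: replacing a one-off axios GET with Node's built-in fetch
// (global since Node 18), removing one third-party install surface.
async function getJSON(url) {
  const res = await fetch(url, { headers: { accept: "application/json" } });
  if (!res.ok) throw new Error(`HTTP ${res.status} for ${url}`);
  return res.json();
}

// Usage (any JSON endpoint; this URL is a placeholder):
// const data = await getJSON("https://api.example.com/status");

// fetch ships with the runtime: no npm install, no lifecycle scripts.
console.log(typeof fetch); // "function"
```

This doesn't cover every axios feature (interceptors, automatic retries), but for simple GET/POST calls it trades a supply-chain dependency for a runtime built-in.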

This attack is also a reminder that supply‑chain defenses must be layered: better package‑publishing hygiene, artifact signing, reproducible builds, and observability during ephemeral install phases. StepSecurity published IOCs and capture logs worth feeding into IDS and endpoint rules; check their analysis if you’re triaging installs in the last few days.

Fedware: Government apps that spy harder than the apps they ban

Why this matters now: Federal Android apps are requesting sensor and background access far beyond their user‑facing needs, centralizing sensitive biometrics and location at scale.

A deep audit catalogues troubling patterns across multiple federal apps: the White House app requests precise GPS, fingerprint access, and storage while embedding three trackers (including an SDK from a sanctioned vendor reportedly linked to Huawei); FEMA asks for 28 permissions just to show alerts; CBP retains faceprints for up to 75 years; and ICE tools collect biometrics and location tied into contractor databases. The pattern is clear: native apps expose sensors and background APIs that web pages cannot reach, and procurement choices and contractor toolchains are amplifying data collection.

"Every single one of these apps could be replaced by a web page."

The security and civil‑liberty implications are immediate. Native apps enable persistent background collection, long retention windows for biometric identifiers, and data sharing with contractors and brokers (the report cites Venntel’s massive location collection). Oversight appears weak: GAO recommendations remain largely unimplemented, and one audit found tracking SDKs where public trust demands restraint.

Practical steps for citizens and product teams:

  • For users: avoid installing government apps you don't need. Get alerts via web pages, RSS, or verified email if possible.
  • For procurement: require minimal permissions, transparent data flows, and strict vendor vetting before including analytics or third‑party SDKs.
  • For privacy advocates: push for audits, enforceable retention limits for biometrics, and default minimal‑permission web alternatives for public services.

Read the full analysis at the original post for the documentation and vendor names.

Closing Thought

Two takeaways: first, defaults matter. Whether it’s an npm lifecycle script that runs automatically or a mobile app that asks for every sensor, small decisions compound into large risk. Second, operational hygiene — instrumented installs, minimal permissions, and pre‑release artifact checks — is no longer optional for teams that want to stay in control. If you ship code or a public app, make your default posture one an adversary can’t exploit at scale.

Sources