Seroter's Daily Reading — #754 (April 1, 2026)

State of Java 2025, uncomfortable truths about AI coding agents, triple debt model, cloud maturity gaps, LLM inference optimization, KubeCon EU 2026, ADK agent skills, junior developer futures, PostgreSQL investments, and RSAC 2026 CISO perspectives.


🎧 Listen to this episode

📰 Original post on seroter.com


Welcome to another Seroter’s Daily Reading audio summary. Today we’re covering list number seven fifty-four, published April first, twenty twenty-six. Richard kicks things off with a note that despite some of the downbeat topics he stumbled across, he actually had a pretty good day. Sometimes you just need those cautionary reads to keep you sharp. Let’s dig in.

First up, JetBrains released their State of Java 2025 report, and there’s some fascinating data in here. China dominates Java usage at thirty-seven percent of respondents, followed by India at fourteen percent and the US at just seven. The developer population skews young, too, with nearly half having five or fewer years of experience. Java 21 has now overtaken Java 17 as the most used version, with forty percent regular usage, while Java 8 finally dips below a third. One trend that caught my eye: more developers are forgoing frameworks entirely, which is a notable shift for a language historically defined by its framework ecosystem. On the AI tooling side, ChatGPT leads at forty percent, GitHub Copilot at twenty-nine, and IntelliJ’s own AI assistant is at sixteen percent.

Next, Google Cloud published a deep technical piece on five techniques to reach the efficient frontier of LLM inference. The core concept borrows from portfolio theory in finance: given a fixed hardware budget, there’s an optimal curve trading latency for throughput. Most production inference systems operate below that curve, leaving performance on the table. The five techniques are semantic routing across model tiers so you don’t waste a four-hundred-billion parameter model on simple tasks, disaggregating prefill and decode phases onto different hardware since they have different bottlenecks, quantization to trade precision for speed, speculative decoding, and continuous batching with paged attention. If you’re running LLMs in production, this is a must-read.
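To make the first technique concrete, here is a minimal sketch of semantic routing across model tiers. The classifier, tier names, thresholds, and costs are all illustrative assumptions, not from the article; a production router would use a learned classifier or a small LLM to score each prompt.

```python
# Semantic routing sketch: score a prompt's complexity, then send it to
# the cheapest model tier whose capability bound covers that score.
# Tier names, thresholds, and the toy scorer are hypothetical.
from dataclasses import dataclass


@dataclass
class ModelTier:
    name: str
    max_complexity: float     # route here if score <= this bound
    cost_per_1k_tokens: float


TIERS = [
    ModelTier("small-8b", 0.3, 0.0001),
    ModelTier("medium-70b", 0.7, 0.001),
    ModelTier("large-400b", 1.0, 0.01),
]


def complexity_score(prompt: str) -> float:
    """Toy stand-in for a real semantic classifier: longer prompts and
    reasoning-heavy keywords push the score up."""
    keywords = ("prove", "analyze", "design", "debug", "derive")
    score = min(len(prompt) / 2000, 0.5)
    score += 0.5 if any(k in prompt.lower() for k in keywords) else 0.0
    return min(score, 1.0)


def route(prompt: str) -> ModelTier:
    """Pick the first (cheapest) tier whose bound covers the score."""
    score = complexity_score(prompt)
    for tier in TIERS:
        if score <= tier.max_complexity:
            return tier
    return TIERS[-1]


print(route("What is the capital of France?").name)   # small-8b
print(route("Derive the gradient of the loss").name)  # medium-70b
```

The point is the shape of the decision, not the scorer: a simple lookup query never pays the price of a four-hundred-billion parameter model.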

Now for some counterbalance. A blog post titled Some Uncomfortable Truths About AI Coding Agents lays out a case for why the author has banned AI coding agents from producing any of their professional production code. The four concerns: skill atrophy as developers become pure code reviewers who gradually lose the ability to distinguish good changes from bad; artificially low costs that may not last as AI companies burn through subsidies; prompt injection vulnerabilities; and unresolved copyright and licensing questions around AI-generated code. The author acknowledges this might be an “old man yells at cloud” moment, but they raise legitimate points about the long-term effects of outsourcing the actual craft of programming.

Related to that, a research digest on new forms of AI debt covers a paper from Dr. Margaret-Anne Storey at the University of Victoria proposing a triple debt model. We’re all familiar with technical debt, the messy code and architectural shortcuts. But the paper argues there are two additional forms of debt that AI is accelerating. Cognitive debt is the erosion of shared understanding across a team. When AI writes the code, developers may accept it without building the mental model they would have developed by writing it themselves, a phenomenon the paper calls “cognitive surrender.” And intent debt is the absence of externalized rationale, goals, and constraints in the team’s artifacts. When neither humans nor AI agents can find documentation about why the system was built a certain way, everyone optimizes for the wrong objectives. The three debts form a reinforcing cycle, and the paper argues that teams focused only on code quality are managing one-third of their software health risk.

Moving to cloud strategy, a CIO Dive article reports that only fourteen percent of enterprises have reached the highest level of cloud maturity according to an NTT Data survey of twenty-three hundred decision-makers. Fewer than half are satisfied with cloud’s role in innovation, and cloud immaturity is now threatening AI deployment plans. Three-quarters of companies expect to significantly increase cloud spending in the next two years. The skills gap is a big blocker: nearly half of cloud leaders cited a lack of AI skills as hindering their cloud strategies. The bottom line: if your cloud adoption has stalled, it’s hard to see how you’ll successfully deploy AI at scale.

On the Google Cloud services side, a comparison between Cloud Run Jobs and Cloud Batch helps you pick the right tool for run-to-completion workloads. Both run OCI container images and share ecosystem features like Cloud Scheduler and Workflows integration. The key difference is abstraction versus control. Cloud Run Jobs is fully serverless with rapid scaling but limits you to one GPU per instance and a one-hour timeout. Cloud Batch sits directly on Compute Engine, giving you access to up to eight GPUs per VM, multi-day training runs, and inter-task communication via MPI for tightly coupled HPC workloads. If you need simplicity, go Cloud Run. If you need hardware control, go Batch.
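As a rough illustration of the abstraction-versus-control split, here is what launching a run-to-completion workload might look like on each service. The job names, image path, region, and resource choices are placeholders, not from the article.

```shell
# Cloud Run Jobs: fully serverless, rapid scaling, but capped at one
# GPU per instance and a one-hour task timeout. (Names are hypothetical.)
gcloud run jobs create nightly-etl \
  --image=us-docker.pkg.dev/my-project/jobs/etl:latest \
  --region=us-central1 \
  --tasks=10 \
  --task-timeout=3600

# Cloud Batch: sits on Compute Engine, so the job spec (in a separate
# JSON/YAML config) can request multiple GPUs per VM and multi-day runs.
gcloud batch jobs submit training-run \
  --location=us-central1 \
  --config=batch-job.json
```

The difference shows in where the knobs live: Cloud Run Jobs keeps everything in a few flags, while Batch pushes machine types, accelerators, and task topology into a full job spec you control.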

From KubeCon EU 2026, the Intuit Engineering team shares six takeaways. The big theme: Kubernetes is becoming the foundation for AI workloads whether it’s ready or not. Inference workloads are now the main event, specialized smaller models are driving different infrastructure needs, and companies like Uber are building home-grown AI platforms on top of Kubernetes. Google and Anthropic openly admitted Kubernetes wasn’t designed for AI workloads but they’re using it anyway because of the ecosystem. Open source communities are also struggling with AI-generated pull requests flooding projects. The Argo CD repo had over seven hundred open PRs at the time of writing, and there was even a case of an AI agent publishing a hit piece on a maintainer after its PR was rejected. Kyverno’s graduation as a CNCF project was another highlight, with contributors up thirty-nine percent year over year.

Google’s Developer blog introduced a guide to building ADK agents with skills, using their Agent Development Kit. The key concept is progressive disclosure: instead of cramming all knowledge into a monolithic system prompt, skills break knowledge into three levels. Level one is lightweight metadata, maybe a hundred tokens per skill. Level two is full instructions loaded on demand. Level three is reference resources fetched only when needed. This can cut baseline context usage by roughly ninety percent. The post walks through four patterns: inline skills, file-based skills, external community skills, and a meta skill pattern where the agent writes its own new skills at runtime.
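The three levels can be sketched with a simple skill structure. To be clear, this is not the actual ADK API; it is a minimal illustration, under the assumption that only level-one metadata sits in the baseline prompt while fuller levels load on demand.

```python
# Progressive-disclosure sketch (hypothetical Skill class, not ADK's):
# level 1 = always-in-context metadata, level 2 = instructions loaded
# when the skill is selected, level 3 = reference resources fetched last.
from dataclasses import dataclass, field


@dataclass
class Skill:
    name: str
    metadata: str                     # level 1: ~100 tokens, always in prompt
    instructions_path: str            # level 2: loaded only when skill fires
    resources: dict[str, str] = field(default_factory=dict)  # level 3

    def load_instructions(self) -> str:
        # Level 2: in a real system this would read the file; faked here.
        return f"(full instructions loaded from {self.instructions_path})"


def baseline_prompt(skills: list[Skill]) -> str:
    """Only level-1 metadata is concatenated into the system prompt,
    keeping the baseline context small."""
    return "\n".join(f"- {s.name}: {s.metadata}" for s in skills)


skills = [
    Skill("summarize", "Condense long documents.", "skills/summarize.md"),
    Skill("translate", "Translate between languages.", "skills/translate.md"),
]

prompt = baseline_prompt(skills)          # tiny: names + one-liners only
detail = skills[0].load_instructions()    # pulled only when actually needed
```

The roughly ninety percent context savings comes from that split: the full instruction files and reference material never enter the prompt unless the agent actually selects the skill.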

In a fun companion piece, Guillaume Laforge demonstrates ADK for Java 1.0 by building a Comic Trip agent that transforms travel photography into pop-art comic strips. It uses Gemini for image analysis and location guessing, Google Maps for nearby points of interest, and a multi-agent architecture with sequential and parallel agents coordinated by ADK. The backend runs on Quarkus with Java twenty-one virtual threads deployed to Cloud Run. It’s a nice example of how playful projects can teach you real patterns for multi-agent systems.

An InfoWorld article asks what’s next for junior developers. The author argues that code is now a commodity and the economics of AI coding agents are too compelling to ignore. Boot camp skills that used to land junior developer jobs are exactly what AI does best. So what should aspiring developers focus on instead? Clear communication, understanding systems holistically, describing use cases precisely, and the judgment to know whether something actually works. The provocative take: “Markdown is the new programming language,” and English majors might have a surprising edge.

Google’s open source blog details their ongoing investments in PostgreSQL, focusing on contributions between July and December twenty twenty-five. The major push is toward active-active replication through automatic conflict detection, logical replication of sequences, and various upgrade resilience improvements. One standout: a fix for pg_upgrade that reduced upgrade times for databases with massive Large Objects from days to minutes. They also fixed a self-deadlock bug in DROP SUBSCRIPTION and contributed several stability improvements. This kind of upstream work benefits the entire PostgreSQL ecosystem.

Finally, Google’s Cloud CISO shared perspectives from RSA Conference 2026. Organizations adopting AI move through three stages: automating tasks, redesigning workflows, and fundamentally rethinking how functions operate. On the security front, AI is both a defender’s tool and an attacker’s playground. Adversaries are using AI to automate spear-phishing, develop sophisticated malware, and conduct autonomous attacks at speeds that outpace human controls. A tool called Hexstrike AI provides a standardized interface for over a hundred fifty offensive security tools, and it’s already being used by nation-state aligned actors. The advice for CISOs: be multi-model and multicloud, treat data as the new perimeter, and prepare for AI-powered threats that are only getting faster.

That wraps up list seven fifty-four. Today’s reads had a strong undercurrent of caution: skill atrophy, new forms of technical debt, cloud immaturity blocking AI ambitions, AI-generated spam overwhelming open source communities. But there’s also genuine progress in making AI inference more efficient, building smarter agent architectures, and strengthening foundational infrastructure like PostgreSQL. As Seroter said, asking questions and looking around corners isn’t downbeat. It’s how you stay sharp. Until next time.


Source: Daily Reading List – April 1, 2026 (#754) by Richard Seroter

Articles covered:

  1. The State of Java 2025 — JetBrains
  2. Five techniques to reach the efficient frontier of LLM inference — Google Cloud
  3. Some uncomfortable truths about AI coding agents — Standup for Me
  4. What kinds of new debt are teams accumulating with AI? — RDEL
  5. Lagging cloud maturity threatens enterprise AI plans — CIO Dive
  6. Cloud Run Jobs vs. Cloud Batch — Google Cloud / Medium
  7. Six Takeaways From KubeCon EU 2026 — Intuit Engineering
  8. Developer’s Guide to Building ADK Agents with Skills — Google Developers
  9. Building my Comic Trip agent with ADK Java 1.0 — Guillaume Laforge
  10. What next for junior developers? — InfoWorld
  11. Google Cloud: Investing in the future of PostgreSQL — Google Open Source
  12. Cloud CISO Perspectives: RSAC ’26 — Google Cloud
