
The Rhythmic Blog

AI and the Future of Software Engineering: The Broken Career Ladder (Part One)

December 4, 2025 · Cris Daniluk

In the beginning (of compute), the machine programmer and operator were one and the same, literally patching cables and toggling switches, holding the entire computation in their head because there was nowhere else for it to exist. The programmer was the computer’s nervous system, the missing layer between intent and electricity.

This was somewhat limiting to say the least. It became immediately clear that some abstraction was going to be required. The machine needed to exist with some level of autonomy from the code it ran. And thus the notion of hardware and software as distinct components was born. But as time progressed and abstraction layers nested like infinite matryoshka dolls, AI eventually fell out the other end. Thus began the era of software writing software.

Historical Backdrop: Programming Through the Looking Glass

The first layer of abstraction was assembly, a veneer over machine code that added human-readable syntax and some modicum of structure. Then came compiled languages, which let you think in problem concepts rather than machine concepts. High-level languages meant you could express intent in something resembling human logic. Frameworks saved you from rebuilding the same scaffolding for every project.

Each abstraction layer saved labor and opened the field to a wider workforce, compressing the lower layers into what was effectively a new baseline. Demand for the highly skilled operator-programmer dropped dramatically, because driving machine behavior no longer required machine-code expertise. The pattern repeated with assembly, COBOL, and even more modern languages like C and C++.

Each new baseline was a trip through the looking glass, largely subsuming but not entirely eliminating the layer underneath it. New jobs emerge and quickly feel like the real work, until the next layer arrives and sends us further through the glass. This is not a tragedy. It’s progress. Nobody mourns the switch-toggler. Especially not the switch-toggler.

The modern role we call “software engineer” emerged from this process as, fundamentally, translation work. Take a business need, turn it into machine instructions. Take a vague requirement, turn it into concrete implementation. The job title sounds precise and technical, but the actual work has always been about spanning the gap between what someone wants and what a computer can be told to do.

The more senior you got, the vaguer the inputs you could handle and the more complex the outputs you could produce. But it was still translation. Even software architecture was in service of translation, creating the optimal design to efficiently and sustainably maintain code and to meet technical requirements such as performance, scalability and security.

That translation—the core of what most software engineers do—is what’s now passing through the glass. Human-generated code is rapidly being subsumed by AI-generated code, creating the most significant paradigm shift since the original schism between hardware and software.

The Middle Rungs

By 2024, and unmistakably by 2025, a pattern emerged: the latest models had gotten good at the core translation work that makes up the bulk of mid-tier software engineering. Not perfect. Not autonomous. But good enough to change the economics—and we’re only in the early stages of that change working through the market.

To understand why mid-tier work is what’s passing through the glass, we have to be precise about what “mid-tier” means. It’s not a years-of-experience bracket or a title. It’s a type of work: reliable translation under well-specified conditions.

The mid-tier engineer is the person you hand a properly specified ticket and expect to return working code. They know the codebase well enough to navigate it, the language well enough to avoid obvious pitfalls. They can read documentation, follow patterns, debug common failure modes. What they’re doing is converting a spec into an implementation—human intent rendered in machine-executable form.

This is exactly what LLMs have become good at.

Give a capable model a clear prompt, sufficient context, and access to documentation, and it produces the same output as a mid-tier engineer. Often faster. Often with fewer bugs, because it doesn’t get tired or distracted. The model doesn’t understand what it’s doing in any deep sense, but understanding was never the value proposition of mid-tier work. The value proposition was reliable, consistent translation. That’s what’s being subsumed.

Most firms haven’t fully internalized this yet. They’re still hiring to old ratios, still structuring teams the way they always have. But the firms that have figured it out are already operating differently—running leaner, shipping faster, discovering that the bottleneck has shifted from “how many engineers” to “how well can we specify what we want.” The market will sort this out. It always does. We’re watching it happen in real time.

As this process unfolds, the emerging narrative focuses on the wrong place: “How will new graduates get their first job if AI can do entry-level work?” This confuses what entry-level work actually is with where LLMs actually have the strongest impact. The real output of a junior engineer is not code in any substantial capacity; it is a more capable engineer. AI isn’t going to change that.

In fact, an economic protection of sorts will begin to emerge for entry-level engineers. Companies invest in juniors because they need a pipeline of future seniors. You can’t automate your way to a senior engineer. You have to grow them.

The prototypical mid-career software engineer enjoys no such protection. The bulk of their output is already well within the capabilities of today’s best models when used properly. The path to senior engineer, to becoming someone who can operate AI properly, no longer passes through this rung of the career ladder.

The Statelessness Problem

Despite the ability of LLMs to operate in place of mid-tier engineers, few teams have successfully transitioned to this model. Even AI-forward teams are typically augmenting and accelerating traditional software development (“writing code”) rather than replacing it. The barrier isn’t skill or fortitude; it’s context.

Every codebase is a “beautiful” snowflake: a unique artifact of accumulated decisions, many undocumented, all context-dependent. The knowledge needed to work effectively isn’t in the code itself—it’s in the heads of people who’ve been around long enough to remember why things are the way they are. Tribal knowledge lives in Slack threads, in commit messages that assume you already know what they’re referring to, in the senior engineer who remembers the outage that explains why the code looks like this.

Humans can manage this paradigm. As new engineers ramp up, they build a mental map of the codebase, becoming increasingly productive, and less likely to blow up prod, as they learn. In other words, they become contextually aware. AI cannot operate this way, and trying to make it do so anyway is the root cause of many AI development disasters.

The problem is that AI models have no persistent mental map. Every session starts cold, without context. Each task requires constructing context from scratch. The AI can’t wander over to the senior engineer’s desk and ask why the authentication service has that weird retry logic. It has what you give it in the prompt, and nothing more.

Modern AI tooling can acquire context on your behalf by reading existing code, requirements, and guidance, but it needs precise direction to do so effectively, and that direction comes from rigid consistency and clear maps of how to proceed in the codebase itself.

Allowing AI to operate on a bespoke, idiosyncratic codebase will, at best, produce code that is technically valid but contextually uninformed, no different than inviting a stranger into your codebase and telling them to just go for it.

Given this, how do we move forward and use AI productively? Farming offers an instructive contrast.

Farms are also local and idiosyncratic. Soil varies field to field. Microclimates matter. The farmer’s expertise isn’t generic knowledge that transfers anywhere—it’s specific knowledge about this land, accumulated over years.

The fields don’t reset between tasks. Decisions persist in the land—where the irrigation runs, how the soil was amended, which cover crops worked and which didn’t. The farm is a physical artifact of accumulated judgment. Working on it means working with that judgment, not reconstructing it from scratch every morning.

When farms sell, the good ones come with documentation—not a rulebook, but a transfer of situational awareness. Here’s why this field drains poorly. Here’s what we tried that didn’t work. The new farmer can adapt gradually while preserving accumulated wisdom.

We idealize software codebases as working this way, but in reality they do not. Documentation is sparse or stale. Key contextual knowledge too often lives in the head of a long-departed engineer. Moving forward requires structuring our codebases to actually work this way, ending the beautiful-snowflake era, which was really just a product of our distinctly human ability to grok and work with a hot mess of code.

Arming AI with rich contextual information frees us to focus on what we are seeking to accomplish rather than being bogged down by writing code. But “rich contextual information” is easier to say than to build. It requires understanding how context actually works—for AI and for the humans who have to structure it.

Standardization and the Path Forward

Humans and AI have the same fundamental problem: limited context windows. But we experience it from opposite directions.

For humans, the challenge is recruiting information. We can barely hold anything in working memory—a few items at once, if that. Our skill is knowing where to look, what to trust, which details we can safely ignore. We navigate complex systems not by holding them in memory but by building mental maps and knowing how to query them efficiently.

For AI, the challenge is not drowning itself in context. Given access to tools, files, and documentation and left to its own devices, AI will tend to do just that. The context gets cluttered and signal disappears in the noise. The skill, for those orchestrating AI workflows, is knowing what to exclude—curating context so the model sees what it needs and nothing more.
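One way to picture this curation work is as selection under a token budget. The sketch below is illustrative, not any particular tool’s API: the relevance scores are assumed to come from some upstream ranking step (search, embeddings, a human), and the 4-characters-per-token estimate is a rough convention.

```python
# Sketch: curate context for an AI coding session under a token budget.
# The scoring and budget here are illustrative assumptions, not a real tool's API.

def estimate_tokens(text: str) -> int:
    """Rough token estimate: about four characters per token."""
    return max(1, len(text) // 4)

def curate_context(candidates: dict[str, str],
                   relevance: dict[str, float],
                   budget: int) -> list[str]:
    """Pick the most relevant files that fit the budget; exclude the rest.

    candidates: file path -> file contents
    relevance:  file path -> score from some upstream ranking step
    """
    chosen, used = [], 0
    # Highest relevance first: the skill is deciding what to leave out.
    for path in sorted(candidates, key=lambda p: relevance.get(p, 0.0), reverse=True):
        cost = estimate_tokens(candidates[path])
        if used + cost <= budget:
            chosen.append(path)
            used += cost
    return chosen

files = {
    "auth/retry.py": "def retry(): ...\n" * 50,
    "auth/service.py": "class AuthService: ...\n" * 200,
    "billing/invoices.py": "def invoice(): ...\n" * 400,
}
scores = {"auth/retry.py": 0.9, "auth/service.py": 0.7, "billing/invoices.py": 0.1}
print(curate_context(files, scores, budget=2000))
```

The point of the sketch is the exclusion step: the billing file is dropped not because it is wrong but because it is noise for this task, which is exactly the judgment a human navigator applies without thinking about it.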

These are not distinct problems but rather different sides of the same coin.

Over centuries, we have developed technologies for the human version of this problem. We call them codes—not software code, but codified knowledge. Standards. Rule systems that compress hard-won expertise into forms that can be followed without being fully understood.

Consider electrical work. The early electrician was part inventor, part improviser, wiring buildings by intuition and learning from what burned down. Then came the National Electrical Code: hundreds of pages that compress decades of fire investigations into prescriptive rules. An electrician drilling into a wall in a code-compliant building doesn’t have to think about where the wires are. The code specifies where they can and cannot run. One bit of information—code-compliant, yes or no—substitutes for thousands of bits about specific wire placements.

Drafting followed the same arc. The master draftsman’s irreplaceable knowledge—proportions, conventions, the grammar of technical drawing—got written down, then codified, then embedded into CAD software that won’t let you draw an invalid line.

This is what codes actually are: compression schemes for expertise. They take knowledge that would otherwise require extensive context and flatten it into rules that require minimal context.

Software has resisted codification for decades, and the arguments were reasonable: the problem space is vast, requirements change constantly, every business is a special case. So the industry built cultures instead of codes—style guides, design patterns, framework conventions. These helped but didn’t compress expertise the way trade codes do.

There’s also a deeper reason: ego. Engineers resisted standards because they felt like an insult to craft. AI has no such resistance. It has no aesthetic preferences, no identity investment in doing things its way. Given consistent context, it converges toward the most common patterns in its training data.

Unlike the trades, where standardization emerged top down from regulation and liability—buildings that burned down created externalities that demanded collective action—software standardization will spread bottom up through competitive selection. Teams that structure codebases for AI legibility will ship substantially faster and cheaper, and the differential is large enough to be decisive.

Greenfield projects have an enormous advantage: they can start standardized, while legacy codebases face painful migrations. Those new projects will set the competitive baseline, and the snowflakes will rapidly lose if they don’t convert. As patterns emerge, tooling will encode them. Scaffolding, linters, and CI pipelines will make following the standard easier than deviating from it. AI-generated code, for its part, has no reason to produce idiosyncratic output. This is how conventions spread without mandates: not by decree, but because the alternative is untenable.

The closer we get to this convergence, the faster cycle times and feedback loops become. AI will still miss, just as human engineers do, but human operators will catch it, intervene as necessary, and maintain velocities an order of magnitude beyond what humans alone can sustain today.

The documentation layer becomes load-bearing in this world.

If standard patterns cover most cases, then documented exceptions are where real engineering judgment lives. The codebase becomes two layers: a large body of standard code that any model can navigate, plus annotated exceptions explaining why this system deviates and when similar deviations might be appropriate.

Senior engineers become the judges. Their job isn’t writing routine code. It’s recognizing when standard patterns don’t apply, devising appropriate exceptions, and documenting them well enough that future engineers and future AI sessions understand the reasoning.

The undocumented deviation becomes the cardinal sin—a trap for the next person who encounters the code expecting it to follow the standard.
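A convention like this can be made mechanically checkable. The sketch below assumes a hypothetical `# DEVIATION:` comment marker and an upstream checker that flags nonstandard lines; both the marker name and the proximity rule are illustrative, not an existing linter.

```python
import re

# Sketch: flag deviations from the standard that lack a documented reason.
# The "DEVIATION:" marker and the proximity rule are hypothetical conventions.

DEVIATION_MARKER = re.compile(r"#\s*DEVIATION:\s*\S")

def undocumented_deviations(source: str, deviation_lines: set[int]) -> list[int]:
    """Return line numbers of flagged deviations with no DEVIATION: note.

    deviation_lines: 1-based lines some upstream checker marked as nonstandard.
    A deviation counts as documented if the marker appears on the same line
    or on the line directly above it.
    """
    lines = source.splitlines()
    bad = []
    for n in deviation_lines:
        here = lines[n - 1] if 0 < n <= len(lines) else ""
        above = lines[n - 2] if n >= 2 else ""
        if not (DEVIATION_MARKER.search(here) or DEVIATION_MARKER.search(above)):
            bad.append(n)
    return sorted(bad)

code = """\
timeout = 30
# DEVIATION: upstream auth drops idle connections early; see incident writeup
retries = 7
retries_other = 9
"""
# Suppose an upstream check flagged lines 3 and 4 as nonstandard:
print(undocumented_deviations(code, {3, 4}))
```

Line 3 passes because the annotation above it explains the deviation; line 4 is the cardinal sin the text describes, and a CI gate on this check would make it impossible to merge silently.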

This answers how we structure codebases for the AI era. But it raises a harder question: how do we develop the engineers capable of doing this work?

The traditional path ran through mid-tier translation work. You learned judgment by doing progressively harder translations over years. That path is broken. The work that taught people is being automated.

We need a new model for growing senior engineers. History suggests what it might look like.

In Part Two, we’ll explore the apprenticeship model that can rebuild the pipeline from junior to senior engineer, examine what the “master engineer” role actually looks like in practice, and consider what the profession becomes on the other side of this transformation.