


The Future of Software Engineering

From Junior Developer to Junior Architect

There’s a lot of discussion in the industry right now about what AI means for software engineering careers. Much of that conversation I won’t touch on here (the recent Tailwind layoffs are an interesting case study); instead I want to focus on some of the more philosophical narratives and where I believe we’re actually headed.

Read any popular software forum, blog, or podcast and you’ll hear plenty about the de-skilling of junior engineers: claims that there’s no more need for juniors, laments that engineers are now just code janitors, doubts about whether prompt engineering is really the same as software engineering, and worries that we won’t have senior engineers in 10 years because the juniors aren’t in the trenches.

Trust me, I get it. I’m a software engineer; it’s unsettling to see everything we’ve been trained to do be automated so fast. But software engineering is not dead. It’s not even dying. It’s just changing.

As usual, here’s a tl;dr:

  • The need for software is not going away anytime soon.
  • Engineers who are new to the industry aren’t experiencing atrophy; they’re experiencing abstraction.
  • Prompting is the interface, not the substance.
  • Engineers equipped with AI coding agents are neither janitors nor simply prompt engineers. They’re architects and auditors.
  • Prediction: “Junior Architect” as an emerging entry-level role?

Purpose Requires People

Software is more necessary than ever, in more fields than ever. In fact, many fields are only just starting to adopt software and automation; there is little risk that the need for software to solve new problems will go away.

Great, but won’t AI coding agents just build the solution themselves? No. Coding agents may generate the software, but software exists to solve human problems. Machines don’t understand those problems. Machines don’t understand our constraints. Our regulations. All of the things that determine how the software gets built. Someone has to define what the system should do, what it absolutely should not do, and then verify that it actually meets those requirements.

Code generation can be automated; intent cannot. The human role shifts from writing code to designing, architecting, and governing code.

Abstraction, Not Atrophy

Every generation of engineers has “lost” skills that became irrelevant at higher layers of abstraction. The earliest form of software engineering was machine code: directly writing the bits and bytes that the machine would execute. Assembly language appeared in 1949, gaining popularity as human-readable commands that could be automatically mapped to the processor’s machine code. In the 1950s came FORTRAN and COBOL, which let engineers describe programs more abstractly than assembly; these languages were translated into assembly, and an assembler then output machine code. A similar shift came again in the ’60s and ’70s, giving us C (and its inspiration, ALGOL). Just like FORTRAN, the abstraction provided by C hides layers of complexity: first it is translated into assembly, then assembled into machine code. Even languages such as Java, C++, and Python follow similar procedures. The engineer constructs a solution in relatively abstract terms, and the language tools take care of turning it into something the machine can actually execute.

In each of these shifts, were engineers “de-skilled”? Quite the contrary. The lower levels of tooling became so reliable that engineers could focus on a different layer of the problem they were trying to solve. The bonus? They could move faster without having to worry about those lower-level details. These engineers weren’t unable to do engineering; their engineering just looked different.

A similar narrative fits the AI coding agent revolution. Engineers coming up today are still aware of the peculiarities of various languages’ syntax, but they’re able to focus their problem-solving on something else. So what’s that something else? Defining the semantic layer: What should the code do? How should it work? What constraints must it adhere to?

We’re not watching skills degrade. We’re watching the abstraction layer shift upward. (Again.)

Prompting is the Interface, Not the Substance

Importantly, and in keeping with my notes above, I share the worry that a lack of understanding of core software concepts could be disastrous. Just as foundational computer science concepts were a critical through line in previous programming language revolutions, they’re still important now. That doesn’t change with the layers of abstraction.

Adopting AI coding agents today means prompting the agents in English to generate code that meets certain requirements. However, it’s reductive to say that the best AI-assisted builders are simply the best “prompters”; that misses what’s actually happening. Why do some prompts result in better software than others? Because an effective prompt conveys already-made decisions about structure, design, architecture, and constraints, i.e. all the things that require reasoning about tradeoffs. If these aren’t provided, the AI coding agent will make the decisions silently, and you’ll be left with software that accepted a tradeoff you never agreed to.

In other words, effectively leveraging AI coding agents requires understanding what decisions will need to be made about the system, deciding which ones must be made by you versus which can be delegated to the agent, and then making them. Computer science concepts.

Prompting is the interface; computer science is the substance.

Oversight, Not Cleanup Work

You’ve prompted your agent… now you spend days cleaning up what it generated… ugh! Are engineers just code janitors now?

No. Cleanup work is the result of either (a) poor or changing system design or (b) code that doesn’t adhere to styles and best practices. The first will shrink with effective prompting (read: computer science), and the second will fade as coding agents mature.

If your system design is ideal and your code is syntactically clean, what’s left? Making sure it actually behaves the way you want it to. The future isn’t cleaning up AI messes. It’s governing system behavior. Surfacing implicit assumptions. Verifying intent matches implementation.

If this is the new work of software engineers, and junior software developers are needed less, I wonder whether a “Junior Architect” or similar role will become commonplace in organizations: new engineers tasked with managing smaller slices of systems. (People never regret making predictions on the internet, right?!)

So…?

The real question isn’t “will developers survive AI?”

It’s: what does engineering look like when we move up one layer of abstraction? And perhaps more interestingly, what does the connection between this new layer of abstraction and our current paradigms look like? How do we effectively design, communicate, and verify our systems?

Engineers who thrive in this world will be the ones who embrace and experiment with new tooling and think through changes at the system level: tradeoffs, downstream and collateral effects, use cases, all bridging implementation with output.

Recommendations and Resources