
LLMs have changed the way many programmers approach their work, but how much have they really changed the overall process of engineering?

Picture of laptop showing a code editor, Cherry Blossom Tech logo, and title: "Engineering in the Age of AI"

Love it or hate it, LLMs have transformed engineering. We’ve seen skilled engineers build and maintain products and systems that would’ve previously taken an entire team, and we’ve even seen total amateurs create things that would’ve been tricky for even skilled experts to make alone in the past. AI support for coding isn’t nothing… but how much is it really?

As with any new tech, seeing through the hype can be tough. At this point, it's hard not to acknowledge that generative coding tools have their limitations. Vibe-coded horror stories abound of codebases that look solid at a glance but are riddled with vulnerabilities and reckless shortcuts. Data breaches and rookie mistakes with disastrous consequences are juxtaposed against the success stories of people mastering AI workflows to create something unprecedented and amazing.

LLM use is complicated. Not just in engineering, but perhaps most especially in engineering. Using AI to build infrastructure means building tools and systems that touch every part of culture, directly or indirectly. And frankly, culture can't quite seem to agree on how good (both in terms of ethics AND capability) AI tools are.

So how can we leverage AI tools to act as a force multiplier without losing the quality that human ingenuity brings to a project?

How do we learn to harness the power of AI without getting lost in the vibes?


For the sake of a focused discussion, this article will not discuss the ethical implications of AI and large-scale violations of IP rights that have been rampant throughout the world of LLMs.

We acknowledge that many popular LLMs have been trained on the work of billions of people who were often not even aware their work was being used for that process. We understand that this creates a conflict that will continue outside of the merits and capabilities of LLM tech itself. For this article, we will only be viewing LLM tech on those merits and will be disregarding this ethical conundrum. That does not mean we approve of unauthorized use of work, just that this is not the issue we wish to explore today.

AI, Context, and Intention

LLMs are undeniably powerful tools. The ability to apply algorithmic understanding to non-numerical languages opens up many possibilities.

As with any emerging tech, it’s been very easy for people to imagine everything AI could be applied to. When the applications of technology are not well understood, the intersection of marketing and hype is projection — ‘what does it do?’ ‘Well, what do you want it to do?’

For the companies producing the LLMs themselves, it’s been quite easy to promise us all that those dreams are actual, factual reality. The nature of content and bragging culture turns this into a feedback loop where marketers and consumers both exaggerate capabilities, and pretty soon we’re all working to make the LLM look better than it ever had the right to.

That’s not to say that large language models are all empty hype. They do fill needs and are the best tools for some applications. In reality, however, this version of AI is limited – it’s powerful, but it needs to be directed.

Its role in its consumer-facing incarnation is to mimic the work of human communication, be it natural language or programming language. It lacks, however, the complex cultural context our unique brains spend entire lifetimes building. It has no taste, intention, or expertise. It can't feel, empathize, celebrate, or cringe. It needs us to provide that spark of creativity and editorial agency… or what it produces is simply slop.

It’s Just a Tool

The AI as a tool argument is incredibly hackneyed at this point, but given the proper framing it is still quite relevant. I am a writer. If you gave me a high quality floor loom and asked me to make a rug, I would have no idea what to do. The loom would be of no assistance, and what I would make would hardly even qualify as a rug.

Give an expert weaver that same loom, and they’ll be able to create a quality rug that they would never have been able to create without the loom. Yes, there’s a difference between hand-woven and loom-woven, but a highly skilled weaver can produce distinct quality in both mediums. A rug produced on a loom will not be the same as one produced manually, but neither could the quality and consistency of the loom be duplicated by hand alone.

Consider that I — knowing nothing about weaving other than what I said above — probably will not be able to tell the difference in quality between hand-woven and loom-woven. That weaver, however, will know it at a glance. They will recognize right away when the loom is not operating smoothly. They will see mistakes in the pattern that someone like me would miss.

The loom doesn’t know its errors. The loom doesn’t self-edit. Expertise is still required to identify and refine quality.

That rug could be slop and I wouldn’t know.

Advancement Rather than Revolution

Engineering still requires three fundamental things: critical thinking, creative problem solving, and attention to detail. Beyond that, reasonable people could argue about what makes a software engineer great. Those, however, are the three non-negotiables.

…And AI can’t do a single one of them.

That’s not to say that LLMs don’t bring real value to the table. Sometimes while programming you’re looking things up, sometimes you’re referencing other code, and sometimes you’re just using trial and error — and there’s nothing wrong with that. AI makes all those things easier and potentially faster.

With AI tools, your knowledge of every obscure programming language isn’t as essential as those three skills we talked about above. The LLM has processed a lot of information about all kinds of code… it just doesn’t know how to intelligently apply it. That’s where you come in. Intelligent application. That’s what any type of engineering always was, and that’s what it still is.

Having a programming assistant with you at all times doesn’t change the way we think about programming or engineering. AI is the latest in reference materials, just like Google or Stack Overflow. You may have snagged a code snippet from Google at some point, but you didn’t expect Google to give you a fully built and tested app.

If you’re expecting an LLM to tell you how something should be built, build it for you, then troubleshoot away all the bugs… the results won’t live up to your expectations.

The Power of Human Ingenuity

AI in its current form is built on large data sets that it uses to predict how a human would likely respond to the query it is given. That data is, for the most part, created by humans. When aggregating all that data, the AI isn’t filtering for the highest-quality input. Some of the input it’s given will be nowhere near that quality, just by nature of the sheer amount of data being fed into it.

This means that when you ask AI to solve an engineering problem, it isn’t scraping through its data to find the best solution to the problem. It’s using all that data to predict what a correct solution to your problem should be. Some of the examples that have been fed into the LLM are surely very nuanced and intelligent. The best examples are outliers, however. AI will tend to produce mediocre solutions most often, because the vast majority of human work is overall mediocre. That’s just how bell curves work.
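The bell-curve intuition is easy to simulate: treat the "quality" of each human-written example in a training corpus as a draw from a normal distribution, then count how many land near the middle versus in the exceptional tail. (The distribution and thresholds here are illustrative assumptions, not measurements of any real corpus.)

```python
import random

random.seed(0)

# Model "solution quality" in a hypothetical corpus as a draw from a
# standard normal distribution (mean 0, standard deviation 1).
samples = [random.gauss(0, 1) for _ in range(100_000)]

# Fraction of solutions within one standard deviation of average,
# versus the fraction more than two standard deviations above it.
near_average = sum(abs(q) < 1 for q in samples) / len(samples)
exceptional = sum(q > 2 for q in samples) / len(samples)

print(f"near average: {near_average:.0%}, exceptional: {exceptional:.1%}")
```

Roughly two-thirds of the samples land within one standard deviation of the mean, while only a couple percent clear the "exceptional" bar. A predictor trained on that mix will, unsurprisingly, gravitate toward the middle.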

So… will LLMs without critical thinking, taste, or practical expertise take over all the engineering jobs with mediocre work? Probably not any time soon.

The only reason that over-dramatic claims like this abound is that AI developed fast over the last few years. That’s just how new tech works.

Google had a breakthrough with transformer technology, so a bunch of people rushed to apply it to everything they could. The competition frenzy took over and suddenly we have all these new models and product options… but are we improving as much as we were at the beginning?

Let’s look at a classic example of tech acceleration. Moore’s Law used to posit that the number of transistors on an integrated circuit doubles every two years. This observation was made when microchip technology was experiencing an unprecedented boom. We looked at the data from a few decades and decided that this must be a law… that there would be no upper limit and that microchips would continually get more and more powerful while transistors got smaller and smaller and smaller, blah blah blah…
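The arithmetic behind that expectation is simple to sketch. (The 1971 Intel 4004 count below is the commonly cited rough figure; the function is just the stated doubling rule, not a model of real fabrication.)

```python
# Moore's Law as stated: transistor count doubles every two years.
def projected_transistors(initial: int, years: float) -> float:
    return initial * 2 ** (years / 2)

# The 1971 Intel 4004 had roughly 2,300 transistors. Two decades of
# doubling every two years predicts about a 1,024x increase:
print(f"{projected_transistors(2300, 20):,.0f}")  # ~2,355,200
```

Extrapolate that curve another few decades and the numbers become astronomical, which is exactly why it felt like there was no upper limit.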

But that’s not how reality works. There were physical limitations. There were logistical limitations. There were material limitations. We were so high on our own sense of progress that we didn’t perceive the possibility of these things at the time, though they now seem obvious.

Processors didn’t continue to double in transistor count and halve in price. We could make advancements at a rapid pace, so we did. Then the advancements slowed, because reality set in.

This is already happening with LLMs. Expecting that LLMs will continue to increase in power at an exponential rate is just not logical. Remember those limitations that led to the death of Moore’s Law? Unless we want the entire planet to be one gigantic energy-sucking data center, those same bounds limit what we can do with LLMs.

There’s more to LLMs, though: they’re also limited by the total amount of data out there. There just aren’t enough examples for LLMs in their current state to continue improving indefinitely. They’ve already consumed a vast cross-section of all the data in human history.

Once the rapid advancement of model capabilities slows down, we’ll likely see less hollow hype and alarmism and more of our lives settling into practical and sustainable uses of AI. While that may transform the processes within some jobs, it’s unlikely to eliminate any experts entirely. As we discussed above, AI needs subject matter experts to work properly.

The Future of Engineering

With all the hype of AI tech over the past few years, at some point most of us have fallen victim to thinking that this changes everything. Does it though? The tools we use have changed, but the goal of engineering and the expertise required for quality work hasn’t changed.

Great engineering still comes from our uniquely human creativity and taste. We’re not only problem solvers, but continual editors. We’re analyzing our solutions in real time as we craft them and always asking ourselves if there are better ways to do things. That’s how real innovation happens, and LLMs don’t do that. They’re looking for a likely solution… not the ideal solution.

LLMs may have more of a role in filling in some blanks in codebases in the future. They may become more of a go-to reference than Stack Overflow or Google. That doesn’t mean that AI is doing our engineering for us going forward, and it surely doesn’t mean that engineers are redundant. The qualities that make a great engineer still lead to better innovation and solutions.

If you want to use AI to the fullest extent without reducing the human qualities that make your solutions great, stop thinking about it as being in control. Stop cramming it into parts of the process where it doesn’t help and stop outsourcing creativity to it.

AI isn’t an autonomous vehicle — it’s an electric bike. That bike can help increase how fast and how far you can go. It can change the way you commute to work and think about navigating your life…

But without you, it’s just a useless machine.