Three skills that matter when AI handles the coding
Writing code has always been the most time- and resource-intensive task in software development. AI is changing that, and faster than most engineering organizations are prepared for. Tools like Claude Code and Cursor are already handling significant parts of code construction, freeing developers to spend more time on requirements, architecture, and design.
But that shift creates a new challenge nobody is talking about enough. As AI takes on the heavy lifting, the skills that matter most are moving upstream: how to provide the right context for a prompt, how to evaluate what the model produces, and how to understand a problem deeply enough that you can’t be fooled by a confident but wrong answer.
This piece explores those three skills and why developers who master them will have a significant edge over those who don’t.
Software translation tools such as compilers and assemblers map a high-level description of code to a lower-level representation suitable for execution. Layering such tools led to the first dramatic improvements in coding productivity. AI prompt engineering represents the next generation of layered translation software that sits above the compiler and assembler. With AI code generation, the focus will move from writing good code to writing good prompts.
What constitutes a good prompt? The answer is good context. But what provides the best context? Most significantly, the developer must have a good understanding of the task the software must perform. Consider what’s required to write a typical software module that is part of a larger platform: the prompt should cover the module’s responsibility, its interfaces to the rest of the platform, and the constraints it must satisfy.
For new initiatives, the context for this module should be taken from a detailed platform design. The platform design is essentially the blueprint for the software, created by breaking down the overall design into smaller, separate parts called modules. Each of the modules is responsible for performing a specific function that the software needs to deliver. In microservice implementations, domain-driven design breaks the business requirements into distinct subdomains that can be mapped to microservices.
Good platform designs have a coherent architecture that provides a concept of operations: how the modules work together to meet the functional requirements. The best platform designs result when well-understood requirements are combined with the right architecture.
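The idea of folding a module’s slice of the platform design into a prompt can be sketched in a few lines. This is a minimal illustration, not a real tool: the `platform_design` structure and the `build_module_prompt` helper are hypothetical, chosen only to show the kind of context (responsibility, interfaces, constraints) a good prompt should carry.

```python
# Hypothetical platform design: modules, their responsibilities,
# the interfaces they may call, and their hard constraints.
platform_design = {
    "name": "order-platform",
    "modules": {
        "billing": {
            "responsibility": "compute invoices and apply tax rules",
            "interfaces": ["orders.get_order(order_id)", "ledger.post(entry)"],
            "constraints": ["idempotent retries", "money as integer cents"],
        },
    },
}

def build_module_prompt(design: dict, module: str) -> str:
    """Fold one module's slice of the platform design into prompt context."""
    spec = design["modules"][module]
    lines = [
        f"You are implementing the '{module}' module of {design['name']}.",
        f"Responsibility: {spec['responsibility']}.",
        "It may call only these interfaces:",
        *[f"  - {i}" for i in spec["interfaces"]],
        "Hard constraints:",
        *[f"  - {c}" for c in spec["constraints"]],
    ]
    return "\n".join(lines)

print(build_module_prompt(platform_design, "billing"))
```

The point is not the code but the discipline: every line of that prompt traces back to a decision already made in the platform design, which is why the design work has to come first.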
Working backwards from the context a prompt needs reveals the most significant phases in the development life cycle: requirements analysis (what the software has to do), and architecture and platform design (how it does it).
Although one design pass might work, developers will often need to iterate on their design to get the best outcome. This has been emphasized by many software experts over the years, but perhaps best put by the famous computer scientist Fred Brooks: “Plan to throw one away; you will, anyhow.”
Iterative life cycles like spiral and evolutionary prototyping build the “throw one away” part into the process. Throwing something away sounds wasteful, but each iteration builds a deeper understanding of the problem: user requirements, architecture limitations, risks, and opportunities. Learning from each iteration greatly reduces the cost and complexity of the final product.
AI translation tools have the potential to make us more productive, but they also introduce the risk that we will become lazy and dependent on them. A recent study found that LLM-assisted essay writing reduced users’ cognitive engagement with their work relative to those who wrote essays unassisted. This effect was termed “cognitive debt.”
I work with a strength trainer because modern life is too easy. It doesn’t require heavy lifting or strenuous activity. So we have to simulate it to improve both our strength and health. AI coding tools are like robots that do the heavy lifting of code generation for us. Without different challenges for us to overcome, we’ll get weaker.
We need to find ways to keep our brains working hard while using AI tools, so that we retain the capacity to think through the hard problems in our software design and development work.
Writing optimized assembly code is no longer considered a good use of anyone’s time because compilers are so good at it. But until recently, writing good code for a compiler or run-time engine in Java, Go, or Python has been an important skill. In fact, these skills will remain important even as LLMs take over code generation, because developers will still need to review the generated code and verify that the LLM output meets their standards. Experienced developers who have been writing code for years already have these skills. Both new and existing developers can expand their knowledge through interaction with LLM tools that expose them to new techniques and ideas.
We need to find the equivalent of strength training for coding that replaces some coding directly but retains understanding and judgement for the code the LLM produces. Where can we put our brains to work to avoid cognitive debt?
First, study and understand the code generated from your prompt. Then re-write your prompt to improve the generated code, or rewrite the generated code if it’s close enough to what you need. LLMs behave statistically, so the generated code might not meet design goals. LLM gaslighting is real: quite often what it generates won’t run or isn’t correct, but the LLM will insist confidently that all is well. Don’t trust. Always verify.
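“Don’t trust. Always verify” can be made mechanical: pin down your design goals as executable checks before accepting what the model hands you. In this sketch, `slugify` stands in for LLM-generated output (the function here is just an example candidate, not from any real model); the checks are what you write yourself.

```python
import re

# Stand-in for LLM output: a generated slugify() candidate under review.
def slugify(title: str) -> str:
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

# Our job: encode the design goals as checks, so confident-but-wrong
# output cannot slip through on the model's say-so.
checks = {
    "lowercases":     slugify("Hello World") == "hello-world",
    "collapses runs": slugify("a  --  b") == "a-b",
    "no edge dashes": slugify("  trim me  ") == "trim-me",
    "handles empty":  slugify("!!!") == "",
}
failed = [name for name, ok in checks.items() if not ok]
print("all checks passed" if not failed else f"failed: {failed}")
```

Writing the checks forces you to study the generated code closely, which is exactly the cognitive work that keeps your skills from atrophying.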
LLMs can generate alternative designs from the same or slightly different prompt. Many developers are already leveraging this capability to explore the design space. Make sure you put the effort into understanding and modifying the code generated, and you’ll retain your coding skills.
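Exploring the design space this way amounts to a generate-and-filter loop: produce several candidates from prompt variants, then run each against the same test suite and keep the survivors. In the sketch below, `llm_generate` is a placeholder for a real model call and returns canned candidates so the loop itself is runnable; the candidate code and prompt names are invented for illustration.

```python
def llm_generate(prompt: str) -> str:
    """Placeholder for a real LLM call; returns canned candidates."""
    canned = {
        "iterative": "def total(xs):\n    s = 0\n    for x in xs:\n        s += x\n    return s",
        "builtin":   "def total(xs):\n    return sum(xs)",
        "buggy":     "def total(xs):\n    return max(xs)",  # confidently wrong
    }
    return canned[prompt]

def passes_suite(source: str) -> bool:
    """One fixed test suite judges every candidate design."""
    ns = {}
    try:
        exec(source, ns)  # run the candidate in a scratch namespace
        total = ns["total"]
        return total([1, 2, 3]) == 6 and total([]) == 0
    except Exception:     # a crashing candidate fails, it doesn't abort the search
        return False

candidates = {p: llm_generate(p) for p in ("iterative", "builtin", "buggy")}
survivors = [p for p, src in candidates.items() if passes_suite(src)]
print("surviving designs:", survivors)
```

Because every candidate faces the same suite, the comparison stays honest; the effort you invest is in reading the survivors and choosing between them, which is where your judgment stays sharp.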
Second, the focus of prompt engineering is to provide context to an LLM. So the key becomes creating that context, and understanding and judging the code that is generated. In addition to retaining their existing language and coding skills, software professionals should focus on other life-cycle elements, especially requirements, architecture, and design, so they have high-quality context for prompts.
Third, learn new languages and data models, and understand where each one fits best.
Fourth, build an understanding of best practices in code construction and design, independent of languages, so you can judge generated code using best practices that work across many different languages.
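Some of those cross-language best practices can even be checked mechanically. As one hedged example, the rule “keep functions short and shallow” applies equally to Java, Go, or Python; below it is applied to Python via the standard `ast` module. The thresholds and the `review` helper are illustrative choices, not an established tool.

```python
import ast

def review(source: str, max_lines: int = 30, max_depth: int = 3) -> list[str]:
    """Flag functions that are too long or nested too deeply."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FunctionDef):
            length = node.end_lineno - node.lineno + 1
            if length > max_lines:
                findings.append(f"{node.name}: {length} lines (> {max_lines})")
            depth = _depth(node)
            if depth > max_depth:
                findings.append(f"{node.name}: nesting depth {depth} (> {max_depth})")
    return findings

def _depth(node, level=0):
    """Deepest chain of nested block statements under this node."""
    blocks = (ast.If, ast.For, ast.While, ast.Try, ast.With)
    children = [_depth(c, level + 1) if isinstance(c, blocks) else _depth(c, level)
                for c in ast.iter_child_nodes(node)]
    return max(children, default=level)

generated = (
    "def f(a):\n"
    "    if a:\n"
    "        for i in a:\n"
    "            while i:\n"
    "                if i > 2:\n"
    "                    return i\n"
    "    return 0\n"
)
print(review(generated))  # flags f for excessive nesting
```

The rule itself is language-independent; only the parser changes per language. That separation is what lets one set of standards judge generated code in many different languages.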
To stay competitive, you should understand that the bar will be rising. Historically, research has shown that the most productive individual developers are already about 10 times more effective than the least productive ones, and the best teams are about five times better than the weakest teams. AI tools could increase these differences by two or three times more, further widening the productivity gap. Many of these highly productive teams will work for your competitors.
AI will allow developers and teams that can crystallize requirements, architecture, and design to rapidly apply and evaluate different languages and data models for their project. AI will make iterative life cycles like spiral and evolutionary prototyping even more effective by allowing parallel development paths during each iteration. The key to success is leveraging AI in a way that allows you to focus on higher-level design issues while not losing control over code complexity. If you don’t learn these higher-level skills, developers and teams that do will be far more productive than you are.
[Figure: Iterative life cycle with parallel paths and feedback loops.]
Some have argued that AI will improve software productivity by orders of magnitude. They envision a future in which software developers need only write a few prompts and an LLM will produce software that can replace existing SaaS products. But as Fred Brooks argued in his famous 1986 paper, “No Silver Bullet,” this remains out of reach because of the two types of complexity he identified: accidental complexity and essential complexity.
Accidental complexity is not inherent in the problem itself but arises from the production process: the tools, languages, hardware limits, and implementation details we use to build the software. Historically, most productivity gains have come from reducing accidental complexity. AI can reduce it further, but it introduces challenges of its own, including hallucinations and poor-quality generated code that must be detected.
Essential complexity is the inherent, unavoidable complexity of the problem itself. It is the challenge of “fashioning the complex conceptual construct”: the abstract, interlocking ideas, data relationships, algorithms, and behaviors that accurately model the real-world problem the software must solve.
AI cannot be a silver bullet because of software’s essential complexity. Even if you could reduce the time for all the accidental tasks to zero, the essential tasks would still be your biggest challenge and would still consume most of your effort. Nevertheless, AI is a powerful tool. Used properly to manage complexity and explore the design space, it can significantly increase the productivity of teams and the quality of the software they develop.
New Tech Forum provides a venue for tech innovation leaders—including vendors and other outside contributors—to explore and discuss emerging enterprise tech innovation in unprecedented depth and breadth. The selection is subjective, based on our pick of the technologies we believe to be significant and of greatest interest to InfoWorld readers. InfoWorld does not accept marketing collateral for publication and reserves the right to edit all contributed content. Send all inquiries to doug_dineley@foundryco.com.