
Engineering

There Is No Such Thing as Prompt Engineering

Date: 12 May 2026
Read: 5 min
By: PM

Since 2022, the tech industry has been treating the act of typing instructions into a chatbot as if it were an engineering discipline.

It is not.

"Prompt engineering", the practice of carefully crafting inputs to AI language models for better results, quickly became the hot new skill in the AI world. Courses, bootcamps, and certifications appeared to teach it. People updated their LinkedIn profiles. A cottage industry of prompt templates, libraries, and marketplaces popped up, with some selling single text strings for hundreds of dollars.

But beneath all this was a simple, uncomfortable truth that few in the hype cycle wanted to admit:

Prompt engineering is simply writing, thinking, and communicating. Calling it engineering does not make it so.

What "Prompt Engineering" Actually Is

If you remove the jargon, prompt engineers write instructions for AI, try different ways of saying things to get better results, add context, specify formats, and provide examples.
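To make that concrete, here is roughly what such a carefully crafted prompt looks like. The persona, context, format rule, and example below are all invented for illustration; none of them is a canonical technique.

```python
# A hypothetical "engineered" prompt: persona, context, task, format, and an example,
# all made up for illustration. Any phrasing along these lines would do.

prompt_template = """You are a senior technical recruiter.
Context: we are hiring a backend engineer at a small fintech startup.
Task: rewrite the job description below so it is shorter and clearer.
Format: return exactly five bullet points.
Example of the tone we want: "You will own the payments service end to end."

Job description:
{job_description}"""

# Fill in the placeholder and send the result to whichever model you use.
print(prompt_template.format(job_description="<paste the original posting here>"))
```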

This is not engineering. Engineering means using scientific and mathematical principles to design and build systems. A civil engineer calculates how much weight a bridge can hold. A software engineer writes code that a machine runs the same way every time. A prompt engineer just rephrases a question until the chatbot gives a better answer.

Comparing prompt work to engineering is not just inaccurate; it is misleading. It suggests a level of rigor, consistency, and technical depth that simply is not there.

The Reproducibility Problem

Real engineering produces reproducible results. If a structural engineer designs a bridge, the bridge holds up. If a software engineer writes a function, it returns the same output for the same input every time.

Prompt engineering produces no such guarantees.

The same prompt can give different results even when used twice on the same model. A prompt that works well on one version might not work at all on the next.
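A minimal sketch of that non-determinism, assuming the OpenAI Python SDK and a placeholder model name; any hosted model queried at a non-zero sampling temperature behaves the same way.

```python
# Send the identical prompt twice and compare the answers.
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment;
# the model name is a placeholder, substitute whatever you have access to.
from openai import OpenAI

client = OpenAI()
prompt = "Summarize the trade-offs of microservices in one sentence."

answers = [
    client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
        temperature=1.0,  # ordinary sampling, not pinned to zero
    ).choices[0].message.content
    for _ in range(2)
]

print(answers[0])
print(answers[1])
print("identical:", answers[0] == answers[1])  # usually False
```

Even pinning the temperature to zero does not guarantee the output survives a model version change, which is the second half of the problem.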

There is no underlying theory or formal system. You cannot figure out the right prompt from basic rules. It is just trial and error presented as expertise.

When people discuss what makes a good prompt, they rely on observations and gut feelings, not rules. "Models seem to respond better when you assign a persona." "Adding examples helps." "Longer context windows change how you should structure information." These are not engineering principles. They are folk wisdom, gathered by anyone who uses a tool often and pays attention.

The Deprecation Cycle

Perhaps the most revealing evidence against prompt engineering as a genuine discipline is how quickly it becomes obsolete.

Real engineering disciplines accumulate. The knowledge a civil engineer learned twenty years ago is still largely valid today. The "prompt tricks" learned two years ago are largely deprecated.

If knowledge only lasts for a few months, it is not a real discipline. It is just a set of short-term fixes for the limits of new tools.

"But Better Prompts Do Get Better Results"

This is the main argument against my point, and it deserves a direct answer.

Yes, the way you phrase a request to an AI does affect the quality of the response. This is true and important to understand.

But the same is true of asking a question to a human expert, writing a clear brief for a contractor, or specifying requirements for any knowledge worker. The person who gives a vague, poorly considered brief gets worse work back than the person who thinks carefully about what they want and communicates it clearly.

We do not call that skill "brief engineering." We call it communication. We call it clarity. We call it thinking before you speak.

Good communication leads to better AI output not because AI needs special techniques, but because clear communication helps with any intelligent system, human or artificial. The only new thing with AI is that mistakes matter less since you can just try again, and you get feedback much faster.

Calling it "prompt engineering" takes a basic skill (clear, specific, purposeful communication) and makes it seem mysterious. It suggests there is secret knowledge or special techniques only experts know. This idea helps people selling courses or looking for jobs, but it does not match reality.

What You Should Actually Do

This does not mean that how you talk to AI does not matter. It just means that calling it "engineering" is wrong, and there is no hidden expertise to learn.

What actually helps when working with AI tools is neither secret nor novel:

Be specific. Know what you want before you ask. Vague questions get vague answers, whether from AI or people; the short sketch after these tips shows the difference.

Give context. AI models, like new coworkers, do better work when they understand the situation. Tell them who the audience is, what the purpose is, and any limits they should know about.

Try again if the first response is not right. This is not a special skill—it is just how conversation works.

Check the output. AI models can sound confident but still be wrong. Always check important facts. This advice applies to any source of information, not just AI.
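To see how much of this is just ordinary clarity, compare a vague request with a specific one. The product, date, audience, and constraints below are invented for illustration.

```python
# The same request, stated vaguely and then clearly. Nothing model-specific here;
# the details (product, date, audience, constraints) are made up for illustration.

vague_prompt = "Write something about our product launch."

clear_prompt = """Write a 150-word announcement for the 3 June launch of our project-management app.
Audience: newsletter subscribers already on the free tier.
Purpose: get them to join the paid beta.
Constraints: friendly tone, no pricing details, end with a sign-up link placeholder."""
```

Either string could be handed to a model or to a freelance copywriter; the second gets better work back from both.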

These are not prompt engineering tricks. They are habits of clear thinking that were useful long before AI and will still matter long after today's models are gone.

Conclusion

Prompt engineering is not a real discipline or profession. It is not a skill set that is truly different from being able to communicate clearly and think carefully about what you want.

The so-called techniques are just basic communication skills with new names, short-term workarounds for model limitations that are steadily disappearing, or folk wisdom that has never hardened into real expertise.

As AI tools improve, what will matter most is what always has: thinking clearly, communicating precisely, and checking results carefully. These skills do not need a new name, a certificate, or to be called engineering.