Prompt Engineering

In 1670s London, theaters employed a person called “the prompter” who stood just offstage, script in hand, ready to whisper the next line to any actor who froze. The job required no acting ability, no stage presence, no creative genius. Just the skill of knowing exactly what words to feed a performer to keep the show moving.

Three and a half centuries later, that’s essentially what you do every time you type a request into ChatGPT.

What Prompt Engineering Actually Means

A prompt is any instruction you give to an AI tool. “Write me a sonnet about my cat” is a prompt. So is “Summarize this chapter in three bullet points.” Prompt engineering is the practice of crafting those instructions deliberately, with an understanding of how the model responds to phrasing, structure, and context, so you get better results.

The “engineering” part might sound intimidating, like you need a computer science degree. You don’t. The core skill is the ability to write clear, specific English. In January 2023, Anthropic (the company behind Claude) posted a job listing for a “Prompt Engineer and Librarian” with a salary of $250,000 to $335,000 per year. The primary qualification wasn’t programming. It was exceptional written communication. Writers, it turns out, are unusually well-suited to this kind of work.

From Stage Whispers to Blinking Cursors

The word “prompt” entered English from Latin promptus, meaning “brought to light” or “ready.” By the 1400s, it had acquired its theatrical meaning: to supply a forgotten word to a performer. The person doing the supplying became the prompter, a fixture of theater for centuries.

When computers arrived, the metaphor migrated. The blinking cursor on a command-line terminal, waiting for you to type something, became known as “the prompt” by the late 1970s. The machine was the performer now, and the user was the one feeding it lines.

The AI version of the concept crystallized in 2020, when OpenAI released GPT-3. The model’s landmark paper, “Language Models are Few-Shot Learners,” demonstrated something surprising: you could get GPT-3 to perform tasks it had never been specifically trained for just by phrasing your input the right way. Include a few examples of what you want in your prompt, and the model catches on. The paper called this “in-context learning,” but the community of early users who began swapping tips about phrasing, formatting, and example selection needed a name for what they were doing. By 2021, “prompt engineering” had stuck.

After ChatGPT launched in November 2022, the term went fully mainstream. Oxford’s lexicographers placed “prompt” (in its AI sense) on the shortlist of four finalists for Word of the Year 2023. It lost to “rizz,” but the fact that a technical AI term was competing against slang for charisma tells you how fast this concept entered everyday language.

The Techniques That Actually Matter

You don’t need to memorize a taxonomy of prompt types, but a few techniques are genuinely useful.

Be specific, not vague. “Write a scene” gives an AI almost nothing to work with. “Write a tense confrontation between a burned-out detective and her estranged sister in a rain-soaked parking garage, using short declarative sentences and minimal dialogue tags” gives it rich patterns to draw from. The more context you provide about genre, voice, tone, and structure, the better the output.

Give examples. This is called few-shot prompting, and it’s one of the most powerful things you can do. Paste two or three paragraphs of your own prose, then ask the AI to continue in the same voice, rhythm, and sentence structure. The examples teach the model what you’re after far more effectively than adjectives like “literary” or “punchy” ever could.
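Under the hood, few-shot prompting is just deliberate string assembly. Here is a minimal Python sketch; the function name and the connecting phrases are illustrative, not any tool’s real API:

```python
def build_few_shot_prompt(samples: list[str], task: str) -> str:
    """Pair example passages with a task so the model can infer the voice."""
    parts = ["Here are examples of the voice and rhythm I want:"]
    for i, sample in enumerate(samples, start=1):
        parts.append(f"\nExample {i}:\n{sample}")
    parts.append(f"\nNow, matching that voice exactly: {task}")
    return "\n".join(parts)

prompt = build_few_shot_prompt(
    ["The rain had stopped. Nobody noticed.", "She counted exits, then doors."],
    "write the opening paragraph of the next chapter.",
)
```

Whether you build the string in code or paste it by hand into a chat window, the structure is the same: examples first, instruction last.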

Assign a role. Tell the AI who it is before asking it to do something. “You are a developmental editor who specializes in cozy mysteries. Read this chapter and flag any moments where the amateur sleuth’s deductions feel too convenient.” The persona shapes the model’s vocabulary, priorities, and frame of reference.
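Most chat-style APIs express this as a “system” message that precedes the user’s request, a convention shared by the OpenAI and Anthropic message formats. A sketch, with the helper name being my own invention:

```python
def with_role(persona: str, request: str) -> list[dict]:
    """Build a chat-message list where a system message sets the persona."""
    return [
        {"role": "system", "content": persona},
        {"role": "user", "content": request},
    ]

messages = with_role(
    "You are a developmental editor who specializes in cozy mysteries.",
    "Read this chapter and flag any deductions that feel too convenient.",
)
```

In a chat window you achieve the same thing by simply opening with the persona sentence before your request.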

Think in steps. For complex tasks, break the work into a sequence. Ask for an outline first, then use that outline as context for drafting, then use the draft as context for revision. Each step builds on the last, and the AI maintains coherence far better than if you’d asked for the final product in one shot.
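The step-wise approach can be sketched as a small pipeline where each call’s output becomes the next call’s context. Here `ask` is a stand-in placeholder, not a real model call:

```python
def ask(prompt: str) -> str:
    # Placeholder: in practice this would call an AI model.
    return f"[model response to: {prompt[:40]}...]"

def draft_in_steps(premise: str) -> str:
    """Outline, then draft, then revise, feeding each result forward."""
    outline = ask(f"Outline a chapter based on this premise:\n{premise}")
    draft = ask(f"Using this outline, write the chapter:\n{outline}")
    revision = ask(f"Revise this draft for pacing and clarity:\n{draft}")
    return revision

result = draft_in_steps("A detective confronts her estranged sister.")
```

The same pattern works manually: paste the outline the AI gave you into your next prompt, then paste the draft into the one after that.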

Iterate. Your first prompt is a rough draft, not a final manuscript. Read what the AI produces, identify what’s off, and refine your instruction. Prompt engineering is a conversation, not a command. The best results almost always come on the second or third attempt.

Why This Matters for Your Writing Life

Every AI tool you use as an author is shaped by prompts, whether you see them or not.

Sudowrite’s “Expand” and “Brainstorm” buttons are pre-engineered prompts the development team refined over months, packaged behind a clean interface. NovelCrafter takes a different approach, letting you view and edit the system prompts directly, giving you fine-grained control over how the AI responds to your writing. Claude’s Projects feature lets you upload character sheets, style guides, and plot bibles as persistent context, which is really just prompt engineering baked into the workflow.

Even Midjourney, which generates images rather than text, rewards prompt engineering. Authors creating cover concepts learn to structure prompts in a specific formula (subject, style, composition, lighting, mood) and use parameters like --ar 2:3 for book-cover proportions.
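The cover-prompt formula is simple enough to express as a template. This assembly function is my own sketch; Midjourney itself only ever sees the final string:

```python
def cover_prompt(subject: str, style: str, composition: str,
                 lighting: str, mood: str, aspect: str = "2:3") -> str:
    """Join the formula's parts and append an aspect-ratio parameter."""
    return f"{subject}, {style}, {composition}, {lighting}, {mood} --ar {aspect}"

p = cover_prompt(
    "lone lighthouse on a storm-battered cliff",
    "moody digital painting",
    "low-angle wide shot",
    "lightning backlight",
    "ominous and isolated",
)
```

The fixed ordering matters less than the habit: naming subject, style, composition, lighting, and mood every time keeps you from leaving the image to chance.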

The underlying principle is always the same: AI tools are performers, and you’re the prompter standing in the wings. The better your cue, the better the performance. The good news is that the skill you’ve spent years developing as a writer, the ability to say exactly what you mean in clear, precise language, is the same skill that makes someone good at prompt engineering. You’ve been training for this longer than you think.