Prompt

For centuries, the most important person in a theater was someone the audience never saw. The prompter sat hidden in a small booth at the edge of the stage, following along in a marked-up script called the prompt book, ready to whisper a forgotten line at exactly the right moment. In Elizabethan theater, they called this person the “Book-Holder.” In German opera houses, professional prompters (called Souffleure) still work every performance today. The job was simple but essential: give the performer just enough of a cue that they could take it from there.

That is, almost exactly, what you do every time you type something into ChatGPT.

What a Prompt Is

A prompt is the text you give an AI model to work with. It can be a question (“What’s a good name for a detective in 1920s Chicago?”), an instruction (“Rewrite this paragraph in a more conversational tone”), a chunk of your manuscript with a request attached, or even a single word. Whatever you type into the input box of ChatGPT, Claude, Sudowrite, or NovelCrafter, that’s your prompt.

The word comes from the Latin promptus, meaning “brought forth” or “made ready,” from promere (“to bring out”). And the term has been doing the same job across wildly different contexts for six hundred years: giving someone (or something) the cue it needs to produce a response.

A Word That Keeps Changing Direction

The history of “prompt” involves a quiet but fascinating reversal.

In theater, a human prompts a human. The prompter feeds a line to an actor, who takes it from there.

When computing adopted the word in the 1970s, the direction flipped. The machine started prompting you. That blinking cursor after C:\> or $ was the computer saying, “Your turn. What do you want me to do?” The word “prompt” in this context, first documented around 1977, referred to the computer’s signal that it was ready for input.

Then AI flipped it again. Now you prompt the machine. You write the input, and the AI generates the response. The human and the computer swapped roles entirely, and yet the word still fits, because in every case, something is being brought forth. The Latin doesn’t care who’s asking and who’s answering.

How Your Prompt Becomes Prose

When you type a prompt into an AI writing tool, the large language model behind it doesn’t read your words the way you do. It first breaks your text into small pieces called tokens (roughly three-quarters of a word each), converts them into numbers, and processes those numbers through layers of mathematical functions.
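The tokenization step can be pictured with a toy sketch. Real tokenizers (byte-pair encoding and its relatives) learn a vocabulary of tens of thousands of subword pieces from data; the tiny hand-built vocabulary below is purely illustrative, just to show the shape of the process: text goes in, a list of numbers comes out.

```python
# Toy vocabulary mapping text pieces to id numbers. This is a made-up
# illustration; real models use learned vocabularies of roughly
# 50,000 to 100,000 pieces, and common words are often one piece
# while rarer words split into several.
vocab = {"Write": 0, " a": 1, " tense": 2, " scene": 3, " re": 4, "union": 5}

def tokenize(text):
    """Greedily match the longest known piece at each position."""
    tokens = []
    while text:
        for piece in sorted(vocab, key=len, reverse=True):
            if text.startswith(piece):
                tokens.append(vocab[piece])
                text = text[len(piece):]
                break
        else:
            raise ValueError(f"no token for: {text!r}")
    return tokens

print(tokenize("Write a tense reunion"))  # → [0, 1, 2, 4, 5]
```

Notice that "reunion" isn't in the vocabulary, so it splits into two pieces (" re" and "union"), which is exactly why a token averages out to roughly three-quarters of a word.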

The model then does something deceptively simple: it predicts the most likely next token. Based on your prompt and everything it learned during training (patterns absorbed from billions of pages of text), it asks, “What word most probably comes next?” It picks one, adds it to the sequence, and repeats. Thousands of times. The result reads like thoughtful prose because the model has internalized the statistical rhythms of an enormous amount of thoughtful prose.
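The predict-and-append loop described above can be sketched in a few lines of Python. This is a toy, not a real language model: the "model" here is a hand-built table of next-word frequencies standing in for billions of learned parameters, and it only looks at the last word, where a real model conditions on your entire prompt.

```python
# Toy stand-in for a trained model: for each word, the relative
# frequency of words that tend to follow it. A real LLM learns these
# patterns (over subword tokens, not whole words) from billions of
# pages of text.
next_word_odds = {
    "the":       {"detective": 5, "rain": 3, "city": 2},
    "detective": {"walked": 4, "paused": 3, "smiled": 2},
    "walked":    {"into": 6, "away": 2},
    "into":      {"the": 8},
}

def predict_next(word):
    """Pick the most likely next word (greedy decoding)."""
    options = next_word_odds.get(word, {})
    return max(options, key=options.get) if options else None

def generate(prompt_words, max_new_words=4):
    """The core loop: predict the most likely next word, append it,
    repeat. A real model runs this thousands of times per response."""
    words = list(prompt_words)
    for _ in range(max_new_words):
        nxt = predict_next(words[-1])
        if nxt is None:
            break
        words.append(nxt)
    return " ".join(words)

print(generate(["the"]))  # → "the detective walked into the"
```

Run it a little longer and this toy starts circling ("the detective walked into the detective walked…"), because it only ever sees one word of context. Real models avoid that by attending to the whole sequence so far, which is what makes their output read as coherent prose rather than a loop.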

Your prompt is the starting condition for that entire chain of predictions. A vague prompt gives the model very little to work with, so it falls back on the most generic patterns it knows. A specific, detailed prompt activates richer, more relevant patterns. That’s why “Write a scene” produces something forgettable, while “Write a tense reunion between estranged siblings at their mother’s funeral, literary fiction, close third person” produces something you might actually use.

The Moment “Prompt” Became a Technical Term

The word had been floating around AI research casually for years, but 2020 was when it became a concept in its own right. OpenAI released GPT-3 that summer, and researchers discovered something unexpected: the model’s performance on a given task depended enormously on how you phrased your input. The same question, worded differently, could produce answers that ranged from brilliant to useless.

This was new. Previous AI systems did what they were trained to do regardless of how you asked. But GPT-3 was responsive to phrasing, structure, and even formatting in ways that nobody had designed or anticipated. Suddenly the input wasn’t just an input. It was a skill, a craft, a lever you could pull to dramatically change the output. It needed its own name, and “prompt” was already sitting right there.

By 2022, researchers had discovered that adding five words (“Let’s think step by step”) to an otherwise unchanged prompt could dramatically improve a model’s reasoning ability. The field of prompt engineering was born, dedicated to the art and science of writing inputs that bring out the best in these models.

Why This Matters for Your Writing Life

If you’ve been using AI tools and getting inconsistent results, the prompt is almost always where the problem lives. Understanding what a prompt actually is (and what it does mechanically) gives you real, practical advantages.

You’re already good at this. Prompting is fundamentally about clear communication in natural language. You’re a writer. Constructing precise, vivid, context-rich language is what you do. The authors who get the most out of AI tools aren’t the ones with computer science degrees. They’re the ones who understand that the quality of the output is directly shaped by the quality of the input, and who know how to provide context, set expectations, and be specific. That’s just good writing.

Context is everything. The more relevant detail you include in your prompt (your genre, your target audience, the tone you’re going for, examples of what you like), the better the model’s predictions become. Think of it like giving notes to a collaborator. “Make it better” is a useless note. “The pacing drags in the second paragraph, and the dialogue feels too formal for a sixteen-year-old character” is a note someone can actually act on.

Different tools use prompts differently. When you type directly into ChatGPT or Claude, you’re writing the prompt yourself. But tools like Sudowrite and NovelCrafter are also writing prompts on your behalf, behind the scenes, translating your clicks and selections into carefully engineered instructions that get sent to the underlying model. Understanding this helps you evaluate whether a tool is genuinely adding value or just wrapping a prompt around a prompt.
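What that behind-the-scenes translation might look like can be sketched in miniature. This is a hypothetical illustration, not the actual prompt any particular tool sends; the idea is simply that your clicks and menu selections become slots in a pre-written template before anything reaches the model.

```python
def build_prompt(passage, genre, tone, instruction):
    """Hypothetical sketch of a writing tool's hidden template:
    the user picks a genre and tone from menus, selects a passage,
    and the tool assembles an engineered prompt around those choices."""
    return (
        f"You are an experienced {genre} editor.\n"
        f"Rewrite the passage below in a {tone} tone. {instruction}\n\n"
        f"Passage:\n{passage}"
    )

# What the user sees: three clicks and a text selection.
# What the model sees:
print(build_prompt(
    passage="The rain had stopped, but nobody told the gutters.",
    genre="noir",
    tone="wry",
    instruction="Keep it under two sentences.",
))
```

Once you can picture this template, you can judge a tool on its merits: is its hidden prompt doing something you couldn't easily type yourself?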

The theatrical prompter’s whole job was to give an actor the smallest possible cue that would unlock the best possible performance. Six centuries later, you’re doing the same thing with a different kind of performer. The better your cue, the better the show.