Generative AI (GenAI)

In 1913, a Russian mathematician named Andrey Markov did something strange with a poem. He took Pushkin’s Eugene Onegin, one of the great works of Russian literature, and began counting. Not words, but patterns: which vowels followed consonants, which sounds tended to cluster together, how the structure of language obeyed hidden statistical rhythms. Markov wasn’t trying to write poetry. He was trying to prove a point about probability. But in doing so, he built the first mathematical framework for predicting what comes next in a sequence of text.

That idea, that you can learn the patterns in language well enough to generate something new, is the beating heart of generative AI. It just took us about a century and a few trillion dollars in computing power to make it truly work.

So What Is It?

Generative AI is artificial intelligence that creates new content rather than analyzing existing content. Text, images, music, code, even video. Where most AI before it was built to classify and sort (spam or not spam, cat or dog, five stars or one star), generative AI flips the equation. It doesn’t judge. It produces.

Think of it as the difference between a book critic and a novelist. A book critic reads your manuscript and tells you what it is. A generative AI reads thousands of manuscripts and learns to write one. Both understand language. Only one produces a new page.

When you ask ChatGPT to brainstorm plot twists for your thriller, or watch Midjourney conjure a book cover from a text description, or let Sudowrite draft the next paragraph in your voice, you’re using generative AI. It’s the technology behind the tools that have made AI feel, for the first time, genuinely creative.

Where the Term Came From

“Generative AI” isn’t the product of a single eureka moment. It stitched itself together from two older ideas: “generative model” (a statistical concept that’s been around for decades) and “artificial intelligence” (John McCarthy’s famous coinage from 1956). The compound phrase appeared in academic papers through the 2010s, but it didn’t enter everyday conversation until one very specific date.

November 30, 2022. That’s when OpenAI released ChatGPT to the public. It hit one million users in five days. Within two months, it had a hundred million, making it the fastest-growing consumer application in history. For comparison, TikTok took nine months to reach that number. Instagram took two and a half years.

But the breakthroughs that made ChatGPT possible started years earlier. In 2014, a researcher named Ian Goodfellow was at a bar in Montreal called Les 3 Brasseurs, celebrating a friend’s graduation. His colleagues were stuck on a problem: how to train a neural network to generate realistic images when the hardware simply wasn’t powerful enough for the existing approaches. Goodfellow had an idea. What if you pitted two neural networks against each other, one that generates fakes and one that tries to catch them? They’d push each other to improve, like a counterfeiter and a detective locked in an escalating arms race. He went home, coded the whole thing that same night, and it worked on the first try. He called it a Generative Adversarial Network (GAN), and it kicked open the door to practical AI image generation.

Three years later, in 2017, a team of eight researchers at Google published a paper with an unusually confident title: “Attention Is All You Need.” They’d invented the transformer, a new architecture for processing language that could understand relationships between words across an entire passage simultaneously rather than plodding through them one at a time. The paper was focused on the mundane task of machine translation. The authors had no idea they’d just built the engine that would power ChatGPT, Claude, and every other large language model that followed.

How It Works (The Short Version)

Every generative AI tool does a variation of the same thing: it studies enormous amounts of existing content, learns the patterns, and uses those patterns to produce something new.

For text (the kind in ChatGPT, Claude, Sudowrite, and NovelCrafter), the process is fundamentally about prediction. The model reads your prompt and asks itself, “Based on everything I’ve learned about language, what’s the most likely next word?” (Technically it predicts tokens, word fragments, but the idea is the same.) Then it picks one, feeds it back in, and asks the same question again. And again. Thousands of times. The result is a passage that reads like natural prose, because it was generated by a system that has internalized the statistical patterns of billions of pages of natural prose.
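That predict-append-repeat loop can be sketched in a few lines of Python. This is a toy, not a real language model: the `next_word_probs` table below is invented for illustration, standing in for the neural network that, in a real system, computes a probability for every possible next word from the full preceding text.

```python
import random

# Invented stand-in for a trained language model: maps the current word
# to a probability distribution over possible next words. A real model
# computes this with a neural network conditioned on the whole prompt.
next_word_probs = {
    "the":   {"cat": 0.6, "dog": 0.4},
    "cat":   {"sat": 0.7, "slept": 0.3},
    "dog":   {"ran": 1.0},
    "sat":   {"down": 1.0},
    "slept": {},
    "ran":   {},
    "down":  {},
}

def sample_next(word, rng):
    """Pick a next word in proportion to its predicted probability."""
    dist = next_word_probs.get(word, {})
    if not dist:
        return None
    words = list(dist)
    weights = [dist[w] for w in words]
    return rng.choices(words, weights=weights)[0]

def generate(prompt, max_words=10, seed=0):
    """The core loop: predict a word, append it, ask again."""
    rng = random.Random(seed)
    out = prompt.split()
    for _ in range(max_words):
        nxt = sample_next(out[-1], rng)
        if nxt is None:
            break
        out.append(nxt)
    return " ".join(out)

print(generate("the"))
```

Everything that distinguishes one model from another, in this framing, lives inside that probability table: a large language model is the same loop with billions of learned parameters replacing the seven hand-written entries.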

For images (Midjourney, DALL-E, Stable Diffusion), the process is different but equally elegant. These tools use something called a diffusion model. During training, the system takes real images and gradually adds random noise until they become pure static. Then it learns to reverse the process, to start with noise and sculpt it, step by step, into a coherent image. When you type a prompt like “oil painting of a lighthouse on a cliff at sunset, moody atmosphere,” the model starts with visual static and progressively removes noise, guided by your words, until an image emerges.

Both approaches share the same underlying principle: learn the patterns in existing human-created work, then use those patterns to generate something new. It’s pattern completion at a scale and sophistication that Andrey Markov could never have imagined.
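Markov’s original idea is in fact small enough to build yourself. The sketch below (a toy illustration, nothing like a modern model) does exactly what he did by hand: count which word follows which in a sample text, then generate by repeatedly sampling from those counts.

```python
import random
from collections import defaultdict

def build_chain(text):
    """Count, for each word, which words follow it and how often."""
    words = text.split()
    chain = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        chain[current].append(nxt)
    return chain

def generate(chain, start, length=8, seed=0):
    """Walk the chain: repeatedly pick one of the observed next words."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        followers = chain.get(out[-1])
        if not followers:
            break
        out.append(rng.choice(followers))
    return " ".join(out)

corpus = ("the cat sat on the mat and the cat saw the dog "
          "and the dog sat on the rug")
chain = build_chain(corpus)
print(generate(chain, "the"))
```

Trained on one sentence, the output is gibberish with a familiar rhythm. Trained on billions of pages, with a neural network instead of a lookup table, the same predict-the-next-word principle produces fluent prose.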

Why It Matters for Your Writing Life

Generative AI touches nearly every stage of an author’s workflow now, and understanding the term helps you see the common thread running through tools that look very different on the surface.

Writing and brainstorming. Sudowrite analyzes your prose style and generates passages that match your voice. NovelCrafter helps you plan and draft long-form fiction with AI that respects your creative direction. ChatGPT and Claude are general-purpose thinking partners for brainstorming plots, fleshing out characters, drafting query letters, and working through story problems.

Visual content. Midjourney and DALL-E turn text descriptions into cover concepts, promotional graphics, and character reference images. You’re not replacing a designer. You’re sketching with words before the designer starts.

Audio. Text-to-speech tools like ElevenLabs use generative AI to narrate audiobooks in natural-sounding voices, opening a format that was previously out of reach for many indie authors.

Marketing. Book descriptions, social media posts, email newsletters, author bios. The promotional writing that most authors would rather not do is exactly the kind of task generative AI handles well.

The key insight is that all of these tools, from the chatbot that helps you outline your novel to the image generator that drafts your cover, are doing the same fundamental thing. They learned patterns from an enormous body of human creative work, and they’re using those patterns to generate something new at your direction. Once you understand that, the whole landscape of AI tools starts to make a lot more sense. You stop seeing a dozen confusing products and start seeing one powerful idea, expressed in different ways, waiting for you to put it to work.