In the summer of 1956, a mathematician named John McCarthy convinced the Rockefeller Foundation to fund a two-month workshop at Dartmouth College. The proposed budget was $13,500. The goal, stated with breathtaking confidence, was to figure out how to make machines “use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves.” McCarthy and his collaborators believed a small group of brilliant people could make serious progress on all of this in a single summer.
They were wildly, gloriously wrong about the timeline. But the name McCarthy gave the project stuck: artificial intelligence.
What It Actually Means
Artificial intelligence is the broad field of building computer systems that can perform tasks we normally associate with human thinking. That includes understanding language, recognizing images, making decisions, solving problems, and (most relevant to you) generating text.
If you’ve ever asked ChatGPT to brainstorm plot ideas, used Grammarly to catch an awkward sentence, or watched Midjourney turn a text description into a book cover concept, you’ve used artificial intelligence. The term is an umbrella, not a specific technology. It covers everything from the spell-checker in your word processor to the large language model powering Claude.
A Name Nobody Liked
The origin story is too good not to tell properly. McCarthy was a Caltech-trained, Princeton-educated mathematician teaching at Dartmouth. He wanted to bring together the smartest people working on machine thinking for an intensive summer collaboration. The other organizers were Marvin Minsky from Harvard, Nathaniel Rochester from IBM, and Claude Shannon from Bell Labs (yes, the father of information theory).
The proposal they submitted opened with a conjecture that still defines the field: “Every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.”
Nobody else loved the name “artificial intelligence.” Shannon was unenthusiastic. Herbert Simon and Allen Newell, who showed up to the workshop with one of the first working AI programs (the Logic Theorist, which could prove mathematical theorems), preferred “complex information processing” for decades afterward. But McCarthy’s phrase had something the alternatives didn’t: it was vivid, provocative, and easy to remember. It stuck.
From Rules to Learning
For the first few decades, AI meant writing rules by hand. Programmers would encode human expertise into if-then logic: if the patient has a fever and a rash, suggest diagnosis X. These “expert systems” worked in narrow situations but crumbled the moment they encountered anything their programmers hadn’t anticipated. Two waves of disillusionment (the AI winters of the mid-1970s and late 1980s) nearly killed the field’s funding and reputation.
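To make that concrete, here is a minimal sketch, in Python, of what a hand-written expert system looks like. The symptoms and diagnoses are invented for illustration; the point is that a programmer has to anticipate every case, and anything the rules don’t cover falls through.

```python
# A hand-written "expert system": every rule is typed in by a programmer.
# The symptoms and diagnoses here are invented for illustration.

def diagnose(symptoms):
    if "fever" in symptoms and "rash" in symptoms:
        return "possible measles"        # rule 1
    if "fever" in symptoms and "cough" in symptoms:
        return "possible flu"            # rule 2
    if "headache" in symptoms and "stiff neck" in symptoms:
        return "refer to a specialist"   # rule 3
    return "no rule matches"             # anything unanticipated falls through

print(diagnose({"fever", "rash"}))   # -> possible measles
print(diagnose({"fatigue"}))         # -> no rule matches
```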
What revived it was a fundamental shift in approach. Instead of telling computers the rules, researchers started showing computers millions of examples and letting them figure out the patterns on their own. This is machine learning, and it changed everything. A spam filter trained on millions of emails learns what spam looks like without anyone writing a single rule about Nigerian princes.
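For contrast, here is a toy version of the learning approach: a spam filter that is never given a single rule, only labeled examples. This sketch uses the scikit-learn library; the six training emails are invented, and a real filter would learn from millions of messages rather than a handful.

```python
# A toy machine-learning spam filter: no hand-written rules,
# just labeled examples. Requires scikit-learn (pip install scikit-learn).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Invented training examples: 1 = spam, 0 = not spam.
emails = [
    "Claim your free prize now",
    "Exclusive offer, act now to win money",
    "You have been selected for a cash reward",
    "Lunch on Thursday still work for you?",
    "Here are my notes from the editorial meeting",
    "Draft of chapter three attached for review",
]
labels = [1, 1, 1, 0, 0, 0]

# Turn each email into word counts, then learn which words predict spam.
vectorizer = CountVectorizer()
features = vectorizer.fit_transform(emails)
model = MultinomialNB()
model.fit(features, labels)

# Classify a message the model has never seen.
new_email = ["Win a free cash prize today"]
print(model.predict(vectorizer.transform(new_email)))  # -> [1], i.e. spam
```

Nobody told the model which words matter; it inferred that from the labeled examples, which is the whole shift from rules to learning.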
The latest leap is generative AI, powered by neural networks called transformers. These systems train on enormous quantities of text (or images, or audio) and learn patterns so deeply that they can generate new content that’s coherent, contextually aware, and sometimes genuinely surprising. When you ask Claude to help you write a synopsis or ChatGPT to draft a query letter, you’re using generative AI, which is the most visible layer of the much larger AI stack.
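If you are curious what sits behind a chat window, here is a rough sketch of calling a generative model programmatically, using the Anthropic Python SDK. The model name and prompt are illustrative assumptions and will likely be out of date; check the current documentation before relying on the details.

```python
# A rough sketch of calling a generative model through the Anthropic
# Python SDK (pip install anthropic). The model name below is an
# assumption for illustration; consult the current docs.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

message = client.messages.create(
    model="claude-3-5-sonnet-latest",  # assumed model name
    max_tokens=300,
    messages=[{
        "role": "user",
        "content": "Draft a two-sentence synopsis of a cozy mystery "
                   "set in a seaside bookshop.",
    }],
)
print(message.content[0].text)
```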
Why Authors Should Care
AI isn’t one tool. It’s the underlying technology behind a whole ecosystem of tools that touch nearly every part of an author’s workflow:
Writing and brainstorming. Tools like Sudowrite, NovelCrafter, ChatGPT, and Claude help authors brainstorm story ideas, break through blocks, draft scenes, and revise prose. They’re collaborators, not replacements.
Editing and polishing. Grammarly and ProWritingAid use AI to catch grammar issues, flag pacing problems, spot overused words, and suggest clearer phrasing. They go far beyond traditional spell-check.
Cover design. Midjourney and DALL-E generate cover concepts from text descriptions, letting authors explore visual directions without hiring a designer for the brainstorming phase.
Audiobooks. ElevenLabs, Murf AI, and Play.ht use AI voice synthesis to produce narrated audiobooks at a fraction of the traditional cost and timeline, opening the format to authors who couldn’t otherwise afford it.
Marketing. From Amazon book descriptions to social media posts to email newsletters, AI assistants help authors handle the promotional writing that most of us would rather avoid.
Understanding the term “artificial intelligence” matters because it helps you see the common thread running through all these tools. They look different on the surface, but they all rely on the same core idea: systems that learn patterns from data and use those patterns to do useful things. When you understand that, you stop treating AI tools as magic (or as threats) and start treating them as what they are: powerful instruments you can learn to play well.
Seventy years after a mathematician thought a summer workshop could crack the code of human intelligence, we’re still working on it. But the tools that exist right now, imperfect and evolving as they are, can already make a real difference in your writing life. McCarthy would probably be amazed. He’d also probably say we’re still just getting started.