Writers in the Loop

“A company asked why it was so hard to hire a good writer. I told them it was because good writing is an illusion: what people call good writing is actually good thinking, and of course good thinkers are rare.” –Paul Graham, Y Combinator Founder, on X

Image generated by DALL·E 3 via Poe. (See footnotes for full prompt.)

Dark headlines have conditioned us to believe that AI is going to automate creatives out of existence.

As working creatives ourselves, we share these worries. We know that generative AI has the potential to upend the livelihoods of folks like us: writers, editors, and researchers, but also poets, painters, actors, and so on.

But we also believe that the fears that AI will wholesale replace the creative community grossly underestimate the value of human creativity in AI. The reality is that AI needs creatives of all stripes in the loop to turn technological capabilities into real-world value.

This view isn’t just wishful thinking. It’s what we can learn from the long, intertwined history of technology and business. For all the buzz and utopian-vs.-doomsday headlines, AI is just a really powerful data tool. The difference with this latest data revolution is that the data isn’t the traditional “structured data” of columns of numbers; it’s the “unstructured data” of content: stories, videos, illustrations, music, dance, art. Until recently, this content was deemed too low-value to be worth the cost to capture, store, and use as data. Generative AI and LLMs have changed that equation.

Whenever there is a big leap in data technology, the countries and companies that wield the best data generally win. Renaissance Italian merchant cities reintroduced double-entry bookkeeping to Europe and dominated the continent’s banking and trade for centuries. Efficiency obsessives like Andrew Carnegie and John D. Rockefeller built their Gilded Age empires more with adding machines, typewriters, and telegraphs than with ruthless Robber Baron tactics. Harvard Business School used standardized testing to find the “Whiz Kids” GIs who helped the Allies win World War II, kick-started the Computer Age, and turned America into an economic superpower in the process.

By the end of 2025, investment in AI is expected to approach $200 billion, and most Fortune 500 companies plan to increase AI budgets 2-5x. Less clear is where and how to do that safely and effectively. Just as frequently as we read about new AI startups and breakthroughs, cautionary headlines warn of LLMs prompted to reveal sensitive data, chatbots hallucinating generous refund policies that companies actually have to pay for, and model responses that reinforce discriminatory biases.

Quality content and skilled creators are critical for turning technological promise into real use cases and economic value—and to avoid the pitfalls and lawsuits.

The reason is simple: a language model is only as good as the data it is trained on, and better content is better data. To illustrate, about a year ago, I sat in on a private demo of a company’s new AI Agent.1 Users could ask a question, and the agent would respond in natural language followed by a list of relevant links and citations from the company’s proprietary content stored in a local database.

A single engineer had worked part-time on the project2 and had it up and running in a few months. The real value of the company’s agent didn’t come from the replicable technology; it came from the years of proprietary content. The technology gave users a better way to access that content, but the content itself was what drove value that no competitor could copy.
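The retrieval pattern behind an agent like this is straightforward to sketch. Below is a minimal, self-contained illustration of the core idea: score a user’s question against each document in a content library and surface the most relevant matches. A toy word-overlap score stands in for the learned embeddings and vector database a production system would use; the documents and query are invented for illustration.

```python
def score(query: str, document: str) -> float:
    """Toy relevance score: fraction of query words found in the document.
    A real system would compare embedding vectors instead."""
    q_words = set(query.lower().split())
    d_words = set(document.lower().split())
    return len(q_words & d_words) / len(q_words) if q_words else 0.0

def retrieve(query: str, documents: list[str], top_k: int = 2) -> list[str]:
    """Return the top_k documents most relevant to the query."""
    ranked = sorted(documents, key=lambda doc: score(query, doc), reverse=True)
    return ranked[:top_k]

# Hypothetical proprietary content library
docs = [
    "Our refund policy covers returns within 30 days of purchase.",
    "The 2019 annual report details revenue growth in Europe.",
    "Onboarding guide: setting up your development environment.",
]

print(retrieve("what is the refund policy", docs, top_k=1))
```

In a full RAG setup, the retrieved passages are then handed to the language model as context, so the generated answer is grounded in the company’s own content rather than the model’s general training data.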

More recently, LinkedIn founder and venture capitalist Reid Hoffman created an impressive AI twin of himself. The technology used includes synthetic audio from ElevenLabs and a video avatar by Hour One, but the real magic is in the content that trained the AI. As Reid says in the video description: “[the] persona—the way that REID AI formulates responses—is generated from a custom chatbot built on GPT-4 that was trained on my books, speeches, podcasts and other content that I’ve produced over the last few decades.”

We might think of Reid Hoffman as a technologist, but he’s also a very good writer and storyteller who has spent thousands of hours and probably (many) thousands of dollars on editors, podcast producers, and other creatives to hone his skills and clean up his content. It’s this content—the vast troves of unstructured training data—that drove the quality of Hoffman’s AI twin.

One of the most common—and dangerous—misconceptions about AI is that it is about to replace human writers, the content creation version of fully autonomous driving. Almost every week we hear an executive saying that “AI can write all our content.” AI can certainly generate huge amounts of content quickly and cheaply, but the highest quality content still needs writers in the loop for three reasons.

1) Human writers teach AI “what good looks like” for different use cases.

Real Reid Hoffman can’t just give REID AI a destination (“Write me a great speech!”) and let it navigate on its own. He needs to give it a lot of very specific guideposts. That’s the thinking part of writing from the Paul Graham quote I opened this piece with. Even if AI is part of creating content, you need humans to give context and tell the AI what voice to mimic, who the audience is, what is and isn’t an authoritative source, whether the speech should be funny or serious or both, and so on.

Screenshot from Poe, a platform where anyone can build an AI bot by describing what you want it to do – no coding required. Of course, the bot is only as good as the instructions it’s given.

2) Human writers help avoid model collapse and keep improving AI performance.

Without regular infusions of new human-created content, AI content goes from average to awful pretty quickly, a phenomenon known as model collapse. It’s the AI equivalent of a copy of a copy of a copy, each instance a slightly lower resolution than the previous one, until the audience has no idea what they’re looking at.

To return to our REID AI example, actual Reid will need to constantly give it good new content and smart new thoughts to train on. Otherwise REID AI will quickly turn into a poor approximation, a behind-the-times Hoffman—and poor and behind-the-times are the last things you want from a technology investor.
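The copy-of-a-copy dynamic can be made concrete with a toy simulation. The sketch below treats a corpus as a bag of distinct “ideas” and models each AI generation as training only on samples of the previous generation’s output; the numbers and setup are illustrative, not drawn from any real model. Diversity collapses fast.

```python
import random

random.seed(0)  # fixed seed so the simulation is reproducible

def next_generation(corpus: list[int], size: int) -> list[int]:
    """Model 'training on your own output' as sampling with
    replacement from the previous generation's corpus."""
    return [random.choice(corpus) for _ in range(size)]

corpus = list(range(1000))  # 1,000 distinct "ideas" in the original human data
diversity = [len(set(corpus))]

for _ in range(10):
    corpus = next_generation(corpus, 1000)
    diversity.append(len(set(corpus)))

print(diversity)  # distinct ideas surviving after each generation
```

Each pass can only recombine what the previous pass produced, so distinct ideas are lost and never recovered. Fresh human-created content is the only input that adds diversity back in.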

3) Human writers make it possible to copyright AI-generated content.

While AI laws and precedents are still being set, content currently generated by an AI without a human hand transforming it cannot be copyrighted, a topic we’ll cover in more detail in a future post.

Some companies already possess quality, copyrightable content on which to build solid AI. Take Adobe. A year ago, investors were questioning whether AI posed an existential risk to the company. Now, Adobe has launched its own AI tools based on its existing library of hundreds of millions of well-tagged and organized stock photos. Not every company has such a treasure trove of content, but most do have internal communications, technical documentation, community forums, Slack threads, or thought leader executives (get ready for the folksy Warren Buffett Annual Letter-bot!). Still others may have their most valuable information locked in the minds of internal experts who just need a skilled interviewer and writer to bring it to the fore. And companies that have none of the above are going to need to roll up their sleeves, do some hard thinking and good writing, and start building ideas and stories that are worth training an AI on.

But where you’re starting from is far less important than the fact that you’re starting at all. Businesses that invest in quality content will have an enduring data edge, no matter how the technology, market, laws, and protocols evolve. Those who make the writers and editors and artists who create this content a core part of their AI strategy will also have a talent edge. And in a race as frenetic and relentless as the current AI boom, every advantage counts.


  1. An AI Agent can perform specific tasks without human intervention. In their current form, they can be thought of as super-powerful chatbots, though advances in robotics mean AI Agents may take on more embodied forms in the future.
  2. This particular AI Agent was set up with a retrieval-augmented generation (RAG) architecture, so the bot’s language skills were powered by ChatGPT, but the domain expertise and results were based on a set of content materials stored in a vector database, where they could be easily updated.

Written by Das Rush, N2 AI Strategist
Edited by Joe Flood, N2 founder and CEO

Feature image prompt: Create an image of “Writers in the Loop” that visually represents a harmonious collaboration between human writers and AI. Emphasize the concept that content is now data. Show data streams or binary code integrating with written text, symbolizing how human-generated content fuels AI models.

Like this post? Check out other pieces in our content series:

Interested in working with us? We are currently looking for beta customers for our AI content services. Reach out to ai@n2comms.com.