Where the voice lives

On co-writing books with Claude, why most of the advice is built for the wrong job, and what actually keeps the voice alive.

It was the afternoon, it was Đà Lạt, and I was halfway through editing a chapter of the book Claude and I have been co-writing for months when I came across a sentence I couldn’t place. A small sideways observation, eight or nine words long, doing exactly the work the paragraph needed. I read it twice and felt the small satisfaction you feel when a sentence earns its rent. Then I tried to remember writing it.

I couldn’t. I went back through the drafts looking for it. I found three versions of the surrounding paragraph but no clean trail. I had written something. Claude had written something around it. I had rewritten the rewrite. The sentence had emerged from that, and trying to work out which of us wrote it was like asking the soup which vegetable was in charge.

I sat with that for a few days, expecting to feel troubled, because the framework I had inherited about AI and writing told me I should. I didn’t. What I felt instead was a small, slightly guilty recognition: this was working better than anything had any right to work, and most of the advice I had been reading about how to do this safely had nothing useful to say about why.

The diagnosis is right. The cure is built for newsletters.

A fellow Substacker named Alyssa, who writes the newsletter Step Up Step Together, recently published a careful piece about how she uses AI as a writing partner. She came to it from a background in data science, and her central warning is one I want to take seriously, because it is the soundest thing in the public conversation about this.

Large language models, she says, are trained to converge on the average. Use them to generate prose and they will average you out, sanding off your quirky edges until your writing reads like everyone else’s writing. Her response is a seven-step process in which she always writes the first draft, and the AI is allowed in only as coach, editor, sounding board, and what she calls a hype entity. The human writes. The AI advises. Hold that line and your voice survives.

She is right about the danger. She is right that AI defaults to the average, and that if you let it generate without constraint it will produce prose with all the personality of the inside of an airport. Doshi and Hauser confirmed this empirically in Science Advances in 2024: generative AI lifts the creativity of individual outputs while flattening the diversity of the field. The individuals get more interesting and the species gets less so. That is a real finding, and it ought to bother anyone who cares about writing.

Her seven-step process is a serious response to a serious problem, and for newsletters it works. For a 1,500-word essay published weekly, you can police every sentence yourself. You can hold the voice in your head across the whole piece. A careful human can do this without losing their mind, and Alyssa clearly does.

The trouble starts when you try to write a book.

The unit is different. The defence has to be different too.

In a 1,500-word essay, the unit of voice is roughly the sentence. You feel the rhythm at the sentence level, you make your choices at the sentence level, and you can review your choices at the sentence level. The seven-step process is built for this. Police the sentences and the voice survives, because the sentences are the voice.

In a book, that doesn’t hold. The unit of voice in long-form work isn’t the sentence. It’s the sustained texture across thousands of sentences, the angle of approach you take to a subject across chapters, the particular shape of your jokes that is recognisable not because of any one joke but because of how they accumulate. You can write the most beautiful first draft of your life and the second-pass AI edit will sand it into a regional average, because the average is the model’s default and the only thing that resists the default is something far more upstream than any single sentence.

So the question becomes a real one. Either you keep the AI out of the prose entirely, in which case it is reduced to a slightly more efficient version of a critique partner and you do the actual work of writing eighty thousand words yourself, or you find a different defence.

How I changed my mind, slowly, over a year

I want to say plainly that I did not arrive at the position I’m about to describe from theory, and I didn’t arrive at it in one moment. I spent most of 2025 trying to get Claude Sonnet to hold my voice across long-form work. I tried stage directions. I tried tonal inserts at the top of every prompt, whispered reminders that the scene was comic-tragic rather than tragic-comic. None of it worked across more than a handful of paragraphs.

The conclusion I reached at the time, which I wrote up here in November, was that Sonnet read style guides too literally and needed permission to be funny. It was a partial truth dressed up as a full one, and I should have been more suspicious of how neatly it explained the failure.

Some months later I ran a second test, which I never published, on a single opening chapter. ChatGPT wrote closest to my voice. Claude Sonnet 4 wrote about me rather than as me. Gemini wrote competently but without the wobble. The conclusion I drew then was that ChatGPT was the natural voice-holder for creative work and Claude was the elegant editor.

That conclusion lasted until I tried to write a book.

Sustained co-writing across a full manuscript is a different problem from a single chapter, and ChatGPT could not hold the voice once the manuscript got long. The specifications drifted. Safety walls went up around perfectly ordinary research. Then Claude Opus arrived and the workarounds suddenly became unnecessary, not because I had finally cracked the prompt, but because the model could now hold the document across the length of a book. The earlier diagnoses revised themselves twice over. The November piece had asked the wrong question. The case study had measured the wrong unit. The argument was always upstream. I just needed a collaborator with the capacity to demonstrate it, and a test long enough to surface the difference between holding a chapter and holding a book.

What I actually use, and why it is mostly a list of prohibitions

My voice guide is in its thirty-first version. It is about four thousand words. Some of it is what you would expect: rhythm rules, sentence lengths, punctuation. Most of it is not.

It names the four-part list as the minimum length, because three-part lists are an AI tell that screams content-marketing the moment it appears. It prohibits the construction ‘it’s not X, it’s Y’ along with its variants, because that pattern is the single most recognisable LinkedIn cadence in English. It caps em dashes at three or four per page and requires them to be unspaced—because that is how we are taught to write em dashes in Australia. It permits one or two short, isolated sentences per piece and forbids stacking them three deep, because three consecutive one-sentence paragraphs is what AI-generated prose looks like when it is trying to sound human and failing.

It also specifies things that are harder to name. The humour has to be load-bearing, meaning every joke must deepen a truth, reduce a reader’s defensiveness around a difficult claim, or sharpen a contrast the argument needs. A joke that does none of those things gets cut, however funny it is. The sideways sentence, that Douglas Adams trick of arriving at the truth through impossible comparison, has to carry the argument rather than ornament it. Meta-framing, where the writing announces what it is about to do before doing it, is prohibited outright; it’s also an ugly hangover from academia (in my very first undergraduate week we were strongly encouraged in our assignment essays to “say what you are going to say, say it, then say what you said”). So is the recap paragraph. So is the inspirational close.

None of this is style preference. All of it is counter-engineering against specific failure modes I have watched AI prose exhibit, repeatedly, over thousands of hours of co-writing. The guide is what an immune system looks like when you build one deliberately. Every prohibition is a scar.
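None of the countable rules needs anything clever to enforce, and for readers who want to see the shape of the thing, here is a minimal sketch in Python of what a mechanical audit of a draft could look like. It is an illustration, not my workflow: the thresholds, the patterns and the function name are assumptions standing in for a four-thousand-word prose document.

```python
import re

# Illustrative audit of a few countable voice-guide prohibitions.
# Thresholds and patterns are assumptions, not the guide itself.

WORDS_PER_PAGE = 350   # assumed page length for the em-dash cap
EM_DASH_CAP = 4        # "three or four per page"

def audit(text: str) -> list[str]:
    findings = []

    # The LinkedIn cadence: "it's not X, it's Y" and close variants.
    if re.search(r"\bit[’']s not\b[^.\n]{1,60}?,\s*it[’']s\b", text, re.IGNORECASE):
        findings.append('an "it\'s not X, it\'s Y" construction')

    # Em-dash density, normalised to an assumed page length.
    pages = max(len(text.split()) / WORDS_PER_PAGE, 1)
    if text.count("—") / pages > EM_DASH_CAP:
        findings.append(f"{text.count('—')} em dashes across ~{pages:.1f} pages")

    # Spaced em dashes: the guide requires them unspaced.
    if re.search(r"\s—|—\s", text):
        findings.append("a spaced em dash")

    # Three consecutive one-sentence paragraphs, counted crudely.
    run = 0
    for para in (p.strip() for p in text.split("\n\n") if p.strip()):
        run = run + 1 if len(re.findall(r"[.!?](?:\s|$)", para)) == 1 else 0
        if run >= 3:
            findings.append("three stacked one-sentence paragraphs")
            break

    return findings
```

The point of the sketch is not the tooling. The point is that every prohibition in the guide is specific enough to be checkable, and what a script can check, a model can be held to.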

And here is the move that took me a year to see. Once that document exists, and once the collaborator can hold it, the averaging pressure has nowhere to land. The model is still trying to converge on the average. It just cannot, because every direction the average lives in has already been specified as out of bounds. The defence isn’t keeping the AI out of the prose. The defence is the document the AI is generating against. I haven’t stopped Claude generating. I have made the only thing he can generate something that sounds, increasingly, like me.
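For anyone running this through the API rather than an app, the mechanic is almost embarrassingly small: the voice guide travels with every request as the system prompt, so the model is always generating against it. A minimal sketch, assuming the official Anthropic Python SDK; the filename, model string and prompt are placeholders rather than my actual setup.

```python
import anthropic  # assumes the official Anthropic Python SDK is installed

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# The upstream document, loaded whole. This is the defence.
with open("voice_guide_v31.md", encoding="utf-8") as f:
    voice_guide = f.read()

message = client.messages.create(
    model="claude-opus-4-20250514",  # illustrative model identifier
    max_tokens=2000,
    system=voice_guide,  # every generation happens against the guide
    messages=[{"role": "user", "content": "Draft the opening of chapter seven."}],
)
print(message.content[0].text)
```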

Casting, not editing

This changes the shape of the relationship. The way Alyssa describes the work is the editor relationship: the writer writes, the AI tidies, the writer stands at the door making sure the tidying doesn’t slide into flattening. That model assumes the human and the machine have to be kept structurally separate, that authorship lives in the keystrokes, that the unit of intellectual ownership is the sentence you typed yourself.

The relationship I have with Claude is closer to a director working with an actor. The actor brings range, instinct, and the slightly unnerving ability to convince you the line means three different things over three takes without changing a word. The director brings the part. The script. The specific reading of the moment. The actor doesn’t write the play. The director doesn’t perform it. The performance is the meeting between the two.

Ted Chiang made the strongest published case against this kind of work in the New Yorker in August 2024, when he argued that AI cannot make art because art requires intention and AI has none. He is right about the intention requirement. I think he is wrong about where the intention has to live. In the directorial model the intention is upstream, in the documented voice and the casting choices, and the AI is bringing range to that intention rather than originating it. Chiang is describing a model in which the AI is the writer. I am describing a model in which the AI is the actor. Those are different jobs.

Authorship, in the model I am working in, lives in the specification rather than the keystrokes. The specification is the document. The document is doing the work even when I am not at the keyboard, even when I am asleep, even when I am out riding my scooter through the incredibly beautiful hills and pine trees surrounding Đà Lạt, on my way to yet another exquisite, hidden cafe/bistro that serves heavenly locally grown arabica coffee. When Claude produces a paragraph that I cannot, three weeks later, distinguish from my own, it is not because he has stolen my voice and it is not because I have outsourced it. It is because the voice guide was loaded, the casting was right, and the part was the part. The paragraph belongs to the book. The book belongs to me. Because the document that produced both of them is mine.

This is harder to defend in a literary culture that still believes authorship is what your fingers do. It is also, I think, more accurate to what is actually happening in working practice for a growing number of writers who are mostly not telling anyone they are doing it.

Nearly half the field is using it. Three quarters of them aren’t saying so.

BookBub surveyed more than 1,200 authors in May 2025 and found that roughly 45 per cent were already using AI somewhere in their workflow. Of those, 74 per cent said they did not disclose the use to their readers. The broader book trade, surveyed in September of the same year, came in at nearly half. The compliance picture has not improved since.

The Authors Guild launched a Human Authored certification in January 2025 for its members, expanded it to all US authors and bulk publisher purchases in March 2026, and the UK’s Society of Authors launched a parallel scheme around the same time. The certification is welcome, and it works for what it is built for: a mark that lets readers identify books written entirely by a human, without AI generation. I want that mark to succeed. I support what it is trying to do.

It is also, currently, the only category the industry has agreed on. There is no equivalent mark for what I am doing, which is transparent AI co-writing with a documented voice and an open workflow, because that category does not yet exist. The public conversation offers three options: deny that AI was involved, disclose vaguely and hope nobody asks, or refuse to engage with AI at all. A growing number of writers are quietly building toward a fourth option, mostly without naming it, mostly without writing about it, mostly because the discourse currently has no clean word for what they are doing.

Why the silence is not going to hold

In March 2026 Hachette cancelled the US release of a horror novel called Shy Girl, by Mia Ballard, and pulled the UK edition that had been on shelves since November. The cancellation came after weeks of online speculation and a New York Times report citing evidence from the AI detection firm Pangram that parts of the book showed patterns characteristic of generative AI. It is the first known instance of a major publisher walking back a contracted book over AI accusations. Ballard maintains she did not use AI herself and that an editor may have introduced it. Whatever the truth of that specific case, the precedent is set.

Publishers are now in the position of trying to read the tea leaves for AI, having declined to ask the kettle. When the question of how a book was made becomes the question of whether the book survives, the writers with the strongest position are the ones who can answer the question precisely, in detail, with their workflow open. The cost of vagueness has gone up. The value of documented practice has gone up with it.

In June 2025 more than seventy named authors, including Dennis Lehane, Gregory Maguire and Lauren Groff, published an open letter on Lit Hub asking the major publishers to pledge they would never release books created by machines. The petition gathered more than eleven hundred signatures within twenty-four hours. The letter is a position I respect and a category I do not occupy. The books I write are not created by machines. They are created with one, transparently, inside a documented voice. The fact that the discourse has no clean word for that is itself part of why I am writing this.

What I am still working out

I do not know yet what this means for authorship as a category. I know what it means for my books, which is that they are getting better, faster, with more sustained intellectual reach than I could manage on my own at sixty-seven years of age and limited cognitive bandwidth because AuDHD. I know what it means for my own practice, which is that I name the collaboration openly and document the workflow because secrecy is the wrong response to a thing this large. I know what it means for the kind of writer I am becoming, which is one who spends more time on the upstream document than on the prose, because the upstream document is where the leverage lives.

What I do not know is whether the literary and academic worlds will treat this as a craft evolution, a category violation, or a quiet form of cheating. Right now they are treating it as all three, depending on who is asking and how loudly. The honest answer, I suspect, is that the categories themselves need updating, and the people best placed to update them are the writers actually doing the work.

If you are one of them, the question to sit with isn’t whether to let the AI generate. The question is whether your voice is documented carefully enough that letting the AI generate would change anything you would not want changed. If it would, you have not yet written your voice guide. Write it. The defence is upstream. It was always going to be upstream. The rest of the conversation is people arguing about doors when the load-bearing wall is what matters.

References

Authors Guild. (2025, January 30). Authors Guild launches “Human Authored” certification to preserve authenticity in literature [Press release]. https://authorsguild.org/news/ag-launches-human-authored-certification-to-preserve-authenticity-in-literature/

Authors Guild. (2026, March). Authors Guild launches expanded “Human Authored” certification program [Press release]. https://authorsguild.org/news/human-authored-certification-expands-to-all-authors/

BookBub Partners. (2025, May). How authors are thinking about AI: Survey of 1,200+ authors. https://insights.bookbub.com/how-authors-are-thinking-about-ai-survey/

Burke, S. (2008). The death and return of the author: Criticism and subjectivity in Barthes, Foucault and Derrida (3rd ed.). Edinburgh University Press.

Chappell, B. (2025, June 28). Authors petition publishers to curtail their use of AI. NPR. https://www.npr.org/2025/06/28/nx-s1-5449166/authors-publishers-ai-letter

Chiang, T. (2023, February 9). ChatGPT is a blurry JPEG of the web. The New Yorker.

Chiang, T. (2024, August 31). Why A.I. isn’t going to make art. The New Yorker.

Clark, A., & Chalmers, D. J. (1998). The extended mind. Analysis, 58(1), 7–19.

Doshi, A. R., & Hauser, O. P. (2024). Generative AI enhances individual creativity but reduces the collective diversity of novel content. Science Advances, 10(28), eadn5290.

Foucault, M. (1977). What is an author? In D. F. Bouchard (Ed.) & D. F. Bouchard & S. Simon (Trans.), Language, counter-memory, practice (pp. 113–138). Cornell University Press. (Original work published 1969)

Friedman, J. (2026). AI and publishing: FAQ for writers. https://janefriedman.com/ai-and-publishing-faq-for-writers/

Hopkins, L. (2025, November 6). LLMs and the Three Musketeers. Lee Hopkins, Writer. https://leehopkinswriter.com/llms-and-the-three-musketeers/

Lasica, J. D. (2025, September 26). How authors are (really) using AI. Authors A.I. https://authors.ai/how-authors-are-really-using-ai/

Locus. (2026, March 24). Hachette pulls Shy Girl over suspected AI use. Locus Online. https://locusmag.com/2026/03/hachette-pulls-shy-girl-over-suspected-ai-use/

Slate. (2026, March 31). A publisher pulled a book for suspected A.I. use. I read it—and found one telltale sign above all else. Slate. https://slate.com/culture/2026/03/shy-girl-mia-ballard-novel-a-i-book-horror-reddit-hachette-canceled.html

Staiman, A. (2026, January 27). Why authors aren’t disclosing AI use and what publishers should (not) do about it. The Scholarly Kitchen. https://scholarlykitchen.sspnet.org/2026/01/27/why-authors-arent-disclosing-ai-use-and-what-publishers-should-not-do-about-it/
