Author Analytics

State of author AI 2026.

How working authors are using AI in 2026. Adoption rates, what writers want, the consent and compensation fight, and where the lines are being drawn.

Published May 2, 2026

More than three years after ChatGPT broke out, "are you using AI?" is the most-asked question at every writing conference. The data is more nuanced than the discourse: most writers use AI for something, very few use it for everything, and the consensus on consent and compensation for training data is unusually strong. This page reconciles the major recent surveys.

Adoption: who is using what

Author AI usage by purpose (Authors Guild Dec 2023)
  • Grammar / spell check: 47%
  • Brainstorming / ideation: 33%
  • Marketing / blurb / ad copy: 26%
  • Structuring drafts: 13%

Authors Guild December 2023 AI Survey. Share of authors using AI for each task (of those who use AI at all). Categories overlap and respondents could select multiple. The single largest writing-adjacent use case is grammar; structured prose-drafting use is the smallest segment.

Two patterns stand out. First, AI usage is concentrated outside the prose itself. Grammar, brainstorming, and marketing dominate; structuring drafts sits at 13% of AI-using authors. Second, fiction and nonfiction split: nonfiction authors report higher AI usage for drafting-adjacent tasks than fiction authors in the same survey, partly for craft reasons (factual prose tolerates AI assistance better than scene-driven fiction) and partly because of reader expectations.

The consent and compensation fight

The Authors Guild May 2023 AI survey found one of the strongest consensus signals in any recent author survey: 90% of writers believe authors should be compensated for the use of their books in training generative AI. This is unusually high agreement for any policy question.

Related findings from the May 2023 and December 2023 Authors Guild surveys:

  • 91% believe readers should be notified when AI generates all or part of a work (May 2023).
  • 96% believe authors' consent should be required before their work is used in AI training (December 2023 follow-up).
  • 78% said they would not license their work for AI training, even if a licensing system existed, unless it included strong opt-out controls (December 2023).

The Authors Guild filed a class-action lawsuit against OpenAI in 2023 on these grounds, joined by other author groups in subsequent years. The legal landscape remains unsettled in 2026, with several major cases pending.

Tools authors actually use

No major recent author survey publishes a comprehensive tool-by-tool market share, so any specific percentage here would be a guess dressed up as data. What is consistently observable across surveys and industry commentary:

  • General-purpose chatbots (ChatGPT, Claude, Gemini) dominate because they are free or low-cost, accessible, and used for tasks unrelated to writing as well.
  • Editing assistants (Grammarly, ProWritingAid) have the longest pre-LLM history and continue to lead in their category.
  • Novel-specific AI tools (Sudowrite, NovelCrafter) sit at small but growing share among working authors. They are visible in indie-author communities and largely absent from traditional surveys.

See also the dedicated Authorlytica vs NovelCrafter comparison for one of these tools in detail.

How disclosure norms are shifting

The bigger 2025-2026 story is not adoption. It is the norm-formation around disclosure. Three signals:

Publisher submission requirements. Several major and mid-size publishers added AI disclosure clauses to their submission terms in 2024-2025. Most do not flat-out reject AI-assisted manuscripts, but they require disclosure and typically cap the percentage of AI-generated prose. Some imprints do reject any AI-generated prose outright.

Industry organizations. The Authors Guild added recommended AI disclosure language to its Model Trade Book Contract in 2024. SFWA (Science Fiction and Fantasy Writers of America) updated the Nebula Awards rules to disqualify any work that uses LLMs at any stage of the writing process.

Reader backlash. Several high-profile cases in 2024-2025 saw indie authors lose meaningful sales after readers identified AI-generated content that was not disclosed. The pattern has hardened a norm: AI in marketing and research, fine; AI in the prose, disclose or risk reputation.

Where the lines are being drawn

No single survey publishes a clean acceptance-rate breakdown by task, so specific percentages here would be a fabrication. Across the Authors Guild surveys, publisher policy statements, and the reader-backlash pattern, the qualitative line consistently sits in the same place:

  • AI for spell-check, grammar, and marketing copy is broadly accepted.
  • AI for cover concepts, brainstorming, and research is contested but widely tolerated, especially in indie publishing.
  • AI for outlining and for AI-suggested prose edits is mixed; norms are still forming.
  • AI for direct prose drafting is broadly rejected, especially undisclosed. This is where reader backlash, publisher submission rules, and SFWA's Nebula ban all converge.

The line is drawn around creative ownership of the actual prose. Authors broadly accept AI as a research and operations tool. They broadly reject AI as a substitute for the writing itself.

Indie vs traditional split

Three differences worth noting:

Adoption rates skew higher in indie. Indie authors report higher AI usage across most categories, particularly marketing, ad copy, and cover concepts. The reasons are practical: indie authors are the marketing team for their books and benefit more from operational AI assistance.

Disclosure norms are stricter in traditional. Big-five and major mid-size publishers have explicit policies. Indie publishing has community norms (Amazon KDP added an AI disclosure requirement in 2023) but no comparable contractual gates.

Reader backlash hits indies harder. Traditional authors have publisher PR cover; indies own the relationship with readers directly, so disclosure mistakes affect sales more visibly.

What this means for working writers

Three takeaways from the data:

Use AI for non-prose tasks freely. Brainstorming, research, marketing, cover concepts, ad copy. The acceptance is high, the productivity gain is real, and the reader-trust risk is low.

Be careful with AI in the prose itself. If you do use it (for editing, prose suggestions, or drafting), expect to disclose somewhere in your submission process and on your platforms. The norms are still forming, and the conservative move is to over-disclose.

Support consent for training data. The Authors Guild lawsuit and similar legal action concern whether your books can be used to train AI without your permission. Whatever your view of AI tools, the consent question is a separate fight, and 90% of writers are on the same side.

Authorlytica's position

Authorlytica does not generate prose, edit text, or process the content of your manuscripts. It tracks session metadata: word counts, dates, durations, mood tags. The tool you use to write does not need an AI disclosure for your tracker. We do not train on your writing because we do not see your writing.

The reason this article exists at all is not to take a side on AI tools, but to surface the data on what authors actually think and do. The numbers are clearer than the arguments.


Whatever you write with, the writing has to happen.

Authorlytica tracks the sessions, not the prose. Your words stay yours. Free forever plan, no setup.

Try Authorlytica Free