
Recently, the Authors Guild expanded its human certification program for books to non-members (and the UK’s Society of Authors has partnered on it as well). If you haven’t heard of this program, it was first launched in January 2025 for Authors Guild members only. The goal: to offer an official certification system that writers and publishers can use in their books and in marketing to indicate whether the text of a book was human written. The logo and name are pending registration with the US Patent and Trademark Office, and the program is supported by a registration system that creates a public database where anyone can verify a book’s human origins. (Disclosure: I am a member of the Authors Guild and have a marketing partnership with them for my industry newsletter, The Bottom Line.)
The Authors Guild certification does permit a small amount of AI-generated text, mainly to allow for AI-powered grammar and spell-check applications. Use of AI for research or brainstorming does not disqualify a book, as long as the text is human written.
When the certification was first announced more than a year ago, I couldn’t help but wonder if it would be a short-lived effort, given the pace of change surrounding AI technology and writers’ use of it. I have noticed a handful of competing efforts to certify human-written work, such as Verify My Writing. It uses Pangram, considered the industry leader in AI detection, to issue a human score (0–100) based on the likelihood and amount of human-written content. A “certified human written” seal is offered if the score is 95 or higher. When I asked the founder of Verify My Writing what motivates writers to pay for this certification (as it is not required by agents, publishers, or retailers), he said pride—proving that they did the work—and market differentiation.
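For readers curious what that threshold rule amounts to in practice, here is a minimal sketch of the decision as I understand it from Verify My Writing’s public description; the function name and score handling are my own illustration, not Pangram’s actual API or Verify My Writing’s implementation.

```python
# Hypothetical sketch of a "certified human written" seal decision.
# The 0-100 human score and the 95 cutoff come from Verify My Writing's
# public description; everything else here is assumed for illustration.

SEAL_THRESHOLD = 95  # a score of 95 or higher earns the seal

def seal_decision(human_score: float) -> str:
    """Map a 0-100 human score to a certification outcome."""
    if not 0 <= human_score <= 100:
        raise ValueError("score must be between 0 and 100")
    return ("certified human written"
            if human_score >= SEAL_THRESHOLD
            else "not certified")

print(seal_decision(97))  # certified human written
print(seal_decision(94))  # not certified
```

The point of the cutoff is that the seal is not all-or-nothing at the detection level: a manuscript can register small amounts of flagged text and still certify, so long as the overall score clears 95.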
Now that the Authors Guild has announced the expansion of its program, opening it to all authors for a $10 fee per certification, I find it problematic that certification is done on the honor system. That means the Authors Guild is not analyzing the manuscript for evidence of AI use but taking the author’s word for it. I wrote in my newsletter, The Bottom Line, “I’m sorry to say that certification without some kind of independent verification is not meaningful. Of course one might argue there is no way to know with 100 percent certainty whether AI has been used in a work, so this has to be on the honor system, but then … how can you really certify? This problem is not going away anytime soon.”
I heard from the Authors Guild’s executive director, Mary Rasenberger, who wrote me that no AI detection tool exists today that is sufficiently accurate for their purposes, but that if and when such a tool exists, the Guild will use it.
Her initial response really pushed me to reflect, though, on the deeper concerns I have with any human certification program. My initial commentary wasn’t intended to dismiss the initiative, but to point to the challenge the entire industry grapples with: the tension between honor systems and verifiable standards. As AI tools become more sophisticated, I’m not sure how much any certification will hold up under scrutiny.
Partly this is because there’s evidence authors are not entirely honest about their AI use despite the risks involved, which can be significant if you attest that your work is one thing and it is not. (See Mary’s full statement below about the risks.) Or authors don’t understand the risks involved, or they think their particular use is acceptable. Rather than worrying about scammers taking advantage of this Authors Guild certification, I worry about legitimate authors who may not be entirely honest about their AI use, because I don’t see them being honest in other contexts where repercussions also apply. After conversations at AWP earlier this month with small presses about the AI use they’re detecting (via Pangram) in their contracted authors’ manuscripts, my hunch is that the share of works containing some percentage of AI-generated or AI-assisted text is approaching 25 to 50 percent, for nonfiction in particular, and may soon be the majority. These works are going on to be copyrighted, I might add. (Despite some agents and publishers saying that authors using AI assistance cannot gain copyright protection for their work, that is not true. It’s complicated.)
I also don’t see evidence in the market that human authorship certification means anything (yet) to readers. Regardless of who’s certifying, there are no studies showing increased market value for books certified with human authorship; publishers don’t seem eager to certify books, either. This leads to the question: What is the purpose of this type of certification?
Personally, I believe this desire for certification says more about authors who want to broadcast a strong anti-AI stance or who don’t feel secure in the current market. I worry this certification plays on people’s feelings of anger and professional insecurity during a confusing time. And while the program may give writers a sense of fighting back, it may be a sign of weakness, not strength, to feel you must signpost your humanness. And I think the signpost will lose value quickly even if it does in fact offer value today.
I also have concerns about false accusations by authors against other authors they don’t like (it’s already a well-worn insult on social media to accuse writers, or anyone, of using AI), and I have been consistent in my critiques around this issue. When the SFWA came out with a strict anti-AI policy for its Nebula Awards, I commented that it’s not a real policy if you don’t enforce it with third-party AI detection tools.
Meanwhile, other people have argued that the Authors Guild has this certification system backward: that it’s AI-created work that should be labeled as such. One author wrote, “What if I step into a bookstore and I see some books with the Authors Guild’s Human Authored Certification Mark on them; will that mean that all those other books that don’t have the mark mean they were created by AI? Will libraries have to go through their shelves and retrospectively mark all books…?”
I voiced all of these concerns to the Authors Guild last week.
Here is the full response from Mary Rasenberger at the Authors Guild.
Jane, I appreciate your sharing your thoughts and those of others with us. We created this program because our members asked for it. Despite our efforts to ensure that AI-generated content is labeled so that consumers have full disclosure, there are no such requirements yet, and even Amazon, which we tried to persuade to provide such disclosure, still does not make it public. So when some members came to us and asked if we could instead offer a way for them to indicate to potential readers that they did not use AI, we agreed. In a world where so many are using AI, some authors who are not want a way to stand behind their work.
We did an enormous amount of research in 2024, receiving excellent guidance from diverse companies that must deal with scammers in the book marketplace, as well as from those who might be interested in the program, including Amazon, publishers, and others. We have a certification mark registration pending with the US Patent and Trademark Office—like the types of marks you find on various products, including food and electronics (think Gluten-Free, Fair Trade, Woman-Owned, Energy Star, and so on). It is a licensed trademark program—not a logo that anyone can freely use.
We have protected against scammers by (1) requiring non-members to be verified through a well-known, third-party identity verification system; (2) limiting the number of books that any one person can certify in a year without reaching out to us for an exception; (3) charging a small fee, which will discourage scammers; and (4) providing a unique registration number for each title, with a publicly searchable database of registered titles, so that any use of a certification can be checked.
We also prevent fraudulent certification by requiring all users to sign a license agreement for each title registered that makes the user represent and warrant that the title was “Human Authored”—meaning the text of the work itself (excluding the table of contents or index) was written by one or more humans (excepting de minimis use for spell-check and editing). As we clearly state in the contract, if a licensee-author registers a book that the author knows contains AI-generated text, they will be liable for breach of contract, trademark infringement, and likely consumer fraud under various laws. We believe that should deter authors who use AI to generate text from registering their books. The Human Authored certification is not a requirement, after all—it is only for those who wish to distinguish their work. In addition, we continue to look at AI detection services and may introduce one in the future. We will likely have to raise fees, though, if we do.
I know you find the notion of “self-certification” meaningless because people will not be honest. But we do not agree. First, in our experience, most authors are honest and also are concerned about liability. Few will want to risk liability of hundreds of thousands of dollars to fraudulently obtain a Human Authored certificate. Moreover, it is not uncommon for certification to be enforced through agreements, like ours, rather than through testing. License agreements are the standard way to enforce a trademark; license agreements have covenants, representations, and warranties, such as the promise that the work is Human Authored in ours. Indeed, contracts are how much of our economy functions—through enforceable promises and consequences for breach backed by a legal system.
As to the question of why we certify Human Authored instead of labeling AI-generated work: the latter is of course preferable, and we have fought hard to get requirements for labeling AI-generated content. But Congress has not yet acted (surprise, surprise), and Amazon did start requiring disclosure but still refuses to make the info public. We are still trying for AI disclosure but meanwhile offer this solution as well.
We do not expect to break even on this project, much less earn money from it. We are currently charging members $10. The fee covers:
- Verification: the out-of-pocket cost of identity verification. We need to verify that people are who they say they are. Members are verified when they join, so this cost, around $2 a person, applies only to non-members.
- Enforcement: The logo is a trademark certification mark that must be enforced to retain its trademark status. We need staff who constantly monitor for fraudulent uses. We may also start using AI detection software once we find one that is reliable and won’t incorrectly flag human-written text as AI-generated, and that will cost money. Then we need to take legal action against any unauthorized uses. Some of this we can do with staff—such as sending cease-and-desist letters and working with Amazon to get books that use the mark fraudulently taken down—though it may require bringing in additional staff if the workload is great. In some cases, we will inevitably have to bring lawsuits to enforce the mark against illegal uses. Any lawsuit costs six figures at a minimum. We are a not-for-profit without a litigation war chest, so we will set aside the fees, minus out-of-pocket costs, in an account to use when we need to hire outside lawyers.
- Technology: We built an online platform for registration and a searchable database, and we are currently building a portal for publishers. Like any technology, these will have ongoing updating and maintenance costs.
I appreciate the Authors Guild taking the time to send this explanation. It’s a well-intentioned program meant to serve the author community, not a cash grab. While I still believe this certification has negligible value in the market, my copyeditor, Nicole Klungle—an avid fiction reader and Kindle Unlimited user—wrote me, “As a reader, I would absolutely value some kind of ‘entirely human authored’ signal that is more than ‘passed the captcha.’” At this point, she says she can even tell which books have been written by ChatGPT versus an Asian AI model.

