Editors and AI, Part IV: Beyond "Just Say No"—A Nuanced Approach to Generative AI in Editing
In my previous posts, I explored what artificial intelligence actually means, which editorial tools use AI features, and how generative AI really works. Today, I want to tackle something I'm seeing more and more in the editorial community: the "just say no" stance toward AI. Let's take a closer look.
The Resistance Is Real (and Understandable)
When the topic of artificial intelligence comes up in editorial circles, the responses often range from cautious skepticism to outright rejection. And it makes total sense. As editors, we've built our careers on a foundation of expertise, attention to detail, and deep understanding of language. The idea that a machine could replicate any part of our work feels not just threatening, but somehow wrong.
I've spent thousands of hours testing AI tools, taking classes, developing custom solutions, and having conversations with fellow editors about what these technologies mean for our profession. Through all this, I've developed a deep appreciation for why many editors take strong ethical stances against AI.
These concerns aren't just knee-jerk reactions to new technology—they reflect legitimate worries about things like:
Quality
We've all seen examples of AI-generated content that looks fine on the surface but falls apart under scrutiny. As professionals dedicated to excellence, we're genuinely concerned by the idea of automated tools producing subpar work that passes as "good enough." A perfect example is what's happening with digital libraries like Hoopla, where thousands of AI-generated "books" with generic covers and factual errors are flooding the system. This flood of AI slop doesn't just waste readers' time; it actively diminishes their trust in published content and hurts legitimate authors whose work gets lost in a sea of AI-generated garbage.
Accuracy and Trust
When AI tools fabricate citations or present false information as fact, it undermines everything we stand for as editors. Many academic copyeditors I know have spent hours trying to track down references in studies that had already been peer-reviewed, only to find that the references didn't exist—they were AI-generated. This isn't just about quality; it's about maintaining the integrity of published work.
Creative Theft and Copyright Violations
Many editors and authors object to AI systems trained on creative works without permission or compensation. While the landscape is evolving—publishers like HarperCollins are now offering authors opt-in licensing agreements with compensation—the issue remains contentious. Numerous lawsuits from creators against AI companies highlight that these ethical concerns are far from resolved. As editors, we need to consider where we stand on this complex ethical issue.
Privacy and Client Confidentiality
How do these tools handle our clients' content? When we input a document into a generative AI system, what happens to that data? Will it become part of the training set for future versions? Could sensitive details from unpublished works end up informing responses to other users? How secure are these systems? These are all critical questions that professional editors need to address as we develop industry-standard practices and ethical frameworks to prioritize our clients' privacy and intellectual property rights.
Environmental Impact
The carbon footprint of training and running AI models is enormous. For example, a standard Google search consumes about 0.3 watt-hours of electricity, while an LLM-based AI search can use roughly ten times as much energy per query (around 3 watt-hours). That might not seem like much, but scaled up to millions or billions of searches, the difference is staggering. As professionals who care about sustainability, many editors question whether the benefits of these tools justify their significant environmental costs.
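To put that difference in perspective, here's a quick back-of-envelope sketch. The per-query figures are the estimates cited above; the daily query volume is a hypothetical assumption for illustration, not a measured figure.

```python
# Rough sketch: scaling per-query energy estimates to a large query volume.
# Per-query figures are the estimates cited above; the query volume is
# a hypothetical assumption for illustration.

CONVENTIONAL_WH = 0.3            # watt-hours per standard web search
LLM_WH = 3.0                     # watt-hours per LLM-based search (~10x)
QUERIES_PER_DAY = 1_000_000_000  # assumed daily query volume

def daily_mwh(wh_per_query: float, queries: int) -> float:
    """Convert per-query watt-hours to total megawatt-hours per day."""
    return wh_per_query * queries / 1_000_000

conventional = daily_mwh(CONVENTIONAL_WH, QUERIES_PER_DAY)
llm = daily_mwh(LLM_WH, QUERIES_PER_DAY)

print(f"Conventional search: {conventional:,.0f} MWh/day")       # 300 MWh/day
print(f"LLM-based search:    {llm:,.0f} MWh/day")                # 3,000 MWh/day
print(f"Extra energy:        {llm - conventional:,.0f} MWh/day") # 2,700 MWh/day
```

Under those assumptions, the extra 2,700 megawatt-hours per day works out to a continuous draw of more than 100 megawatts, which is real power-plant territory.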
The Human Element
Our work involves preserving an author's unique voice, understanding context, and making nuanced judgment calls. When AI tries to replicate these skills, it often misses crucial subtleties that human editors catch. The empathy and collaboration that characterize the best editor-author relationships simply can't be reproduced by a machine.
All these concerns reflect deep commitments to professional excellence and ethical practice. They shouldn't be dismissed or minimized.
Why Understanding AI Matters—Even If You Choose Not to Use It
While I deeply respect the viewpoints and ethics behind many "just say no" stances, I believe that completely avoiding the AI conversation leaves us less prepared to advocate for our profession's future.
After extensively testing various AI tools (and I've gone deep into this rabbit hole), I've discovered something fascinating: The technical aspects of copyediting—all that nitty-gritty mechanical stuff—aren't easy to automate. In fact, as I explained in my previous post about how AI really works, traditional rule-based tools are far more reliable than generative AI for many editing tasks.
But we won't discover these insights if we're not willing to look. And more importantly, we're doing our clients and colleagues a disservice by shaming them when they experiment with AI rather than joining them in learning its limitations and appropriate uses. As Brené Brown writes in I Thought It Was Just Me (But It Isn't), "You cannot shame or belittle people into changing their behavior."
Here's what often happens when editors take an all-or-nothing approach:
We miss opportunities to educate clients.
When clients ask about AI tools, they're usually looking for guidance, not judgment. If we can speak knowledgeably about AI's capabilities and limitations, we can help them make informed decisions that align with their goals.
We fail to understand what we're actually arguing against.
As I discussed in part I of this series, "AI" is an umbrella term covering everything from simple machine-learning tools to complex generative systems. When we reject "AI" wholesale, we're not being precise about which specific technologies or practices we object to. Do you use the built-in grammar and spelling checker in Word for Microsoft 365, also known as "Editor"? Then you're using AI. Do you use Grammarly? Then you're using AI.
We lose credibility with tech-savvy clients.
Some clients are genuinely excited about the possibilities of new technology. If we dismiss their interest without demonstrating a nuanced understanding of the tools, we risk being seen as resistant to change rather than as thoughtful advocates for quality.
We damage professional relationships.
Shaming colleagues for exploring AI tools creates an environment of fear rather than collaborative learning. Our profession thrives on open dialogue and shared insights. Instead of positioning ourselves as technology gatekeepers, we can become trusted guides, helping colleagues navigate AI's complexities with nuance and empathy.
We give AI more power than it deserves.
By treating AI as a mysterious force that will destroy our profession, we're actually attributing more capability to it than it currently has—and distracting ourselves from addressing the very real limitations of these tools.
A Framework for Thoughtful Evaluation
After years of helping editors build sustainable businesses, I've learned that innovation in our industry isn't inherently good or bad—it's how we approach it that matters. Just as a developmental editor helps authors see their work from new angles, we need to examine AI's role in our profession with both critical thinking and clear professional standards.
When I evaluate any new tool or technology for my business, I start with these questions:
- Does this genuinely enhance my ability to serve clients?
- Can I maintain my professional standards while using it?
- Do I understand its limitations well enough to use it responsibly?
- Does it align with my values and ethics?
Notice what isn't on this list: speed, efficiency, or competitive advantage. While these benefits might be tempting (and I get it—who doesn't want to work more efficiently?), they can't be our primary drivers.
When we lead with speed and efficiency, we risk compromising the very things that make us valuable to our clients. I've watched colleagues rush to adopt AI tools mainly because they're afraid of falling behind or losing business to faster, cheaper competitors. But here's what often happens: They end up spending way more time double-checking AI output than they would have spent doing the work traditionally. Or worse, they miss major errors because they're moving too quickly or placing too much trust in the AI.
Instead, let's flip the script. What if we approached AI evaluation by first asking how it could help us deliver better service to our clients? How could it free up our time and mental energy for the deep, analytical work that only the human brain can do?
Finding Your Balanced Approach
While it's easy to fall into the trap of all-or-nothing thinking, I'm a big believer that our conversations around generative AI would benefit from a lot more nuance. We are all living in a gray area, and it's time to get comfortable with that, play devil's advocate, and do our best to look at this issue from all sides. That's the only way we'll be able to think critically about the future of the editorial profession and have the conversations that matter as we develop our own AI policies.
Here's what this might look like in practice:
- If you choose not to use AI tools in your work: Develop a clear rationale for this choice based on your research and/or testing of these tools. Be prepared to explain your stance to clients without shaming those who are curious about these technologies. Stay informed about AI developments so you can provide guidance when asked.
- If you're curious but cautious: Try using AI for non-client work first, such as getting feedback on your blog post drafts, researching a complex topic you've been meaning to learn more about, or generating ideas for your business. Pay close attention to where the technology excels and where it fails. Develop clear boundaries and ethical standards before bringing these tools into client work.
- If you're already integrating AI into your workflow: Be transparent with clients about how and when you use these tools. Develop clear ethical guidelines that prioritize quality, privacy, intellectual property rights, and the preservation of the author's voice. Stay vigilant about the limitations of AI and maintain rigorous quality control processes.
Regardless of your approach, remember that your role as a professional editor isn't diminished by technological change. You're making conscious choices about how to maintain high standards and serve your clients effectively in an evolving industry. The key is making those choices based on your professional judgment and expertise rather than fear, shame, or peer pressure.
A Challenge for the "Just Say No" Camp
So here's my challenge to you: If you've been firmly in the "just say no" camp, consider spending some time exploring how these tools actually work. You might be surprised to find that understanding AI's limitations actually helps you articulate your value more clearly to clients.
Because at the end of the day, you're not just an editor—you're a trusted guide in an increasingly complex publishing landscape. Every choice you make about AI should support that mission, whether that choice is thoughtful adoption or informed rejection.
In my next post, I'll explore whether generative AI will replace human copyeditors (spoiler alert: it won't), and I'll go into more detail about what AI is good at and what it's (very) bad at when it comes to editorial work. See you then!
Recommended Reading
- "Public Library Ebook Service To Cull AI Slop After 404 Media Investigation." 404 Media. February 20, 2025. Accessed February 22, 2025.
- "Like It or Not, Publishers Are Licensing Material for AI Training (And Using AI Themselves)." Jane Friedman. July 17, 2024. Updated February 19, 2025. Accessed February 22, 2025.
- "AI already uses as much energy as a small country. It’s only the beginning." By Brian Calvert. Vox. March 28, 2024. Accessed January 12, 2025.
Previous Posts in My "AI and Editors" Series
- Editors and AI, Part I: What Is AI? A Primer for Editorial Professionals
- Editors and AI, Part II: AI in Editorial Software—Which Editing Tools Use AI and Which Don't
- Editors and AI, Part III: How Generative AI Really Works—What Editors Need to Know
This post was published on February 25, 2025.