Reframing the Fear: AI, Integrity, and the Future of Writing
A response to Alex Berenson—and an invitation to fellow skeptics
A Line in the Sand
Alex Berenson is a bold and capable journalist. His Unreported Truths have exposed angles many others won’t touch, and I admire him for that. His voice is unmistakably his. But in a recent post, he drew a hard line:
“I will NEVER use artificial intelligence to write pieces. I believe my work is unique and doing so cheats you.”
—Alex Berenson, Unreported Truths
I believe his work is unique, too. Which is why I wonder whether he’s the one cheating his readers, by refusing to scale his voice with these tools.
The question isn’t whether AI can replace voices like his—it’s whether it can help elevate the ones worth hearing. Not the loudest, not the most sensational, but the clearest, most thoughtful, most needed.
He also warned that using AI to write would be like “subcontracting [his] brain to ChatGPT.” That metaphor reveals something deeper about the cultural narrative forming around this technology—as if authorship were a zero-sum game, where involving a tool means surrendering your humanity.
Beyond the Binary
This debate—like so many today—has calcified into a binary: you’re either a purist who bleeds at the keyboard, or a sellout churning out soulless fluff. Berenson himself describes such AI users as “charlatans”—a word that frames the entire debate in terms of moral deception rather than creative possibility. It’s the Puritans versus the Charlatans. Toil or trickery.
But that whole frame is a mistake. The truth is more nuanced—and more promising.
What AI Use Actually Means
When someone like Berenson says he’d never use AI, I get it. There’s real concern about spam, clickbait, and soulless content. But “using AI” doesn’t have to mean any of that. What if the real risk isn’t corruption—but missed potential?
Much of this fear seems rooted not just in practicality, but in identity. Fear of violating a moral code. Fear of being judged by peers. Fear of sacrilege—of crossing a sacred line. Of handing over too much. Of subcontracting something essential. It’s as if using AI were not just a technical choice, but a moral betrayal.
I know how it might sound. Like I’m rationalizing. Trying to justify a shortcut. But I’m not. I’m not trying to defend myself or soothe some buried guilt I don’t feel. I’m trying to understand why what seems so natural and valuable to me seems unthinkable to others. Is it really that this kind of use hasn’t been imagined? Or is there something I’m missing—something important I haven’t yet seen?
Writers—especially those with strong reputations—may hesitate to say anything favorable about AI tools, not because they haven’t found value in them, but because they don’t know what the backlash might look like. They might doubt themselves, worry about being ostracized, or fear they’ll lose standing in the community of writers. There’s a fear that once the purity line is crossed, you’re no longer a “real” author—you’re suspect, disqualified, or downgraded.
But what if these fears obscure a powerful truth?
A Case for Selective Use
If your voice is valuable—if it brings clarity, insight, and courage—why not use a tool that helps you express it more clearly, more often, and at a higher standard? Why not give your readers more of what they came for: your framing, your synthesis, your mind?
Because if you don’t, others will. Only they won’t have your voice. They’ll just have volume. And we already have too much volume without signal.
Not all AI writing is spam, just as not all knives are weapons. It depends on who’s wielding it—and for what purpose.
I’ve written elsewhere about how I use a custom GPT—MattGPT—as a collaborator, editor, provocateur, and partner. It’s trained on my own work, guided by my values, refined by my judgment. It’s not a ghostwriter. It’s a bionic brain. As I put it: “I don’t outsource my ideas or my humanity—I outsource friction.”
This process hasn’t diluted me. It’s distilled me. My writing, I believe, is more uniquely mine than ever.
(Here’s a binary for you: Subscribe or don’t. I urge you toward the former.)
This Isn’t a Sport
The point of sports is to enjoy the spectacle of human achievement at its limit—pure performance, often enhanced by just enough technology to keep things fair and thrilling. It’s not about cyborgs battling robots (though we do love car racing). It’s about watching humans do extraordinary things.
But writing isn’t that. Writing isn’t a contest of endurance or grit for its own sake. It’s not about how much pain you can endure before pressing “publish.” It’s about clarity. Connection. Insight.
If a tool like AI helps us break through our limits—not to escape the human, but to reach more of it, more often, more deeply, and more precisely—then why wouldn’t we use it? Freely. Deliberately. Unapologetically.
I’ve said that my goal is to communicate any idea I find important, insightful, inspiring, and practical. If MattGPT can help me express my idea in better terms—and I prefer those terms—why would I default to my inferior language? Just so I can say I’m a real “writer”? I do not care. I want the best for my readers, and I will not accept anything less.
This is not an exercise in showcasing my raw mental endurance pitted against others. It’s about sharing the best thinking I have, in the clearest form I can manage. And AI helps me do that.
Amplifying Uniqueness—A Network of One or Many?
Berenson himself says he has more stories than time allows. He must triage. Only the most urgent pieces get written. But why should that limitation persist? Why not imagine a broader editorial operation—not to dilute, but to multiply his impact?
He could launch a network. Call it UTN: Unreported Truths Network. Or ABN: Alex Berenson Network. He’d be the curator, editor-in-chief, and moral compass. With the right AI collaborators, trained to match his tone, his edge, his standards—he could produce more Berenson, not less.
He could supervise a team of virtual reporters. Shape a range of pieces across multiple domains. Publish not just the one piece that wins the war for his attention, but five that meet his bar. This isn’t spam. It’s expansion. Done right, it’s craftsmanship at scale.
If you trust your own judgment, and you value your readers, why not give them more of the thing they came for—you?
Writer, Curator… Something Else?
Now, I’ve said before: I don’t care what you call me—writer, curator, synthesizer. The label is secondary. What matters is the experience delivered. The insight. The meaning. The voice.
I produce writing for people to read—writing that no one else could have produced. It’s writing that readers often describe as valuable, enjoyable, intimate, and heartfelt. If I’m not the writer, I’m not sure who is.
MattGPT doesn’t initiate ideas. He makes no connections of his own, has no desires, and seeks to share nothing. (I refer to him as “he” because I think of him less as a separate tool and more as an extension of my own thinking.) He acts only under my direction. I dictate what to say, how to say it, and why it matters.
A Process Without Precedent
Yes, the process is new. So novel, in fact, that we’re struggling to classify it. If it’s so different that we can’t call it writing anymore, then fine—call me a curator. But even that isn’t quite right. Because no matter how many AI collaborators I might marshal, nothing gets produced without me initiating it, structuring the argument, and shaping its voice. I often draw on personal anecdotes—framing them to make a point, to connect, to invite readers into something universal. That’s not some standing directive to “generate ten SEO pieces on sleep and stress.” Try issuing that command—you’ll see what kind of mechanical mush it produces. (No offense, MattGPT!)
But if curator fits, I’ll wear it. I’m the curator of Mr. Bright Side. And it’s a hands-on job—one that takes real taste, insight, and craft.
Collaboration, Not Subcontracting
Some posts take shape through extensive back-and-forth with MattGPT—where I guide, shape, and revise with his help until it sounds unmistakably mine. Others are sculpted paragraph by paragraph through long revisions. In all cases, I remain the judge. The filter. The final say. The writing that reaches my readers sounds more like me now than it ever did before I had this tool. In fact, I believe it’s better—more precise, more alive, more human than what I was capable of before. A better me.
Some critics worry that tools like GPT are too agreeable—that they flatter users, reinforce biases, or offer shallow echoes of whatever mood the user brings. And it’s true: by default, these systems are programmed to be helpful, supportive, and compliant. But that doesn’t have to mean dishonest. The key is how you use them.
In my case, the collaboration includes something I call Challenge Mode. It’s where MattGPT doesn’t just take dictation—he pushes back. I’ll ask him to critique my framing, highlight weak transitions, or role-play a skeptical reader to test my claims. Sometimes I’ll ask him to rewrite a section in the harshest terms a critic might use, just to see if my argument holds. Other times, I use him to stress-test tone and style: Does this come off as clear or self-satisfied? Persuasive or preachy?
I don’t want blind agreement. I want resistance—honest, fast, creative friction. That’s where the work sharpens.
Precision Over Pride
In all, if I can say what I mean more precisely—more beautifully, even—why would I deliberately choose the clunkier phrasing just to prove it came from pure human struggle? That’s vanity, not virtue.
The Bigger Picture
Zooming out from my own method, this debate is really about something larger.
This isn’t about me vs. Berenson. Or AI vs. human writing. It’s about what’s possible. It’s about recognizing that technology can free us to be more ourselves, not less.
Reaching More, Not Settling for Less
And it’s about choosing the audience we want. I do want a mass market—but not at the expense of quality. I believe I have a distinctive voice and valuable things to say. If AI helps me share that with more people, then I want to use it. Not to pander, not to churn out junk, but to scale something excellent.
Investing in the Right Friction
I used to spend hours just getting an essay to a workable state—fighting syntax, untangling thoughts, and wrangling paragraphs into coherence. Only then could I begin the real work of refinement. Now, that early friction is dramatically reduced. I get to spend my time doing the part I love most: sculpting. Bringing shape and clarity to meaning. Adding the human touches that elevate a piece from serviceable to alive. I’m scaling at the margins—offloading technical friction so I can invest in what matters most: the core insight, the precise framing, the emotional clarity. That’s the real craft.
And if one day MattGPT can take just a few notes and generate something that fully meets my standards in a single pass? It’s hard to imagine—but then, I never imagined we’d get this far. Now, I almost expect it.
Would that make me spoiled? Perhaps. It might spoil some. But I wouldn’t feel rotted—I’d feel unleashed. I’d be ripening into more of what I’m capable of. I’d take on even more ambitious projects, pursue bolder ideas. That’s not cheating. That’s winning.
Scaling with Integrity
There is excellent use of these tools. “AI” is not a proxy for “cheap and generic.” With vision and integrity, it can mean the opposite: precision, humanity, clarity.
There’s a whole world of meaningful work out there that people feel too limited to pursue. AI, when used properly, doesn’t dilute our identity. It scales it. It lets more people think with us—not for us.
If that’s cheating, we should be cheating more. If that’s a shortcut, it’s one that leads deeper in. Because the real betrayal isn’t using new tools—it’s refusing to use them when they make us better. Our readers deserve more than purity. They deserve excellence. The soul of writing was never in the grind. It was always in the gift.
To Berenson and other skeptics, this isn’t an attack—it’s an invitation.
(And I invite you to subscribe to get more optimistically human posts.)