The Unspeakable Middle

on living with contradiction in the age of AI

She asks what I do for a living, and my body answers before I do, forcing one of those half-laugh, half-groans that bubble up from intestinal anxiety.

It's a Sunday night at a distillery in Indy. Not a tech event or a networking thing—just a bunch of queer folks like me standing around a long wooden table with goofy martinis and popcorn, a male-gazey lesbian romance playing on a projector that nobody's really watching. And I like these people. I'm here because I missed being around people who aren't optimizing anything.

"So what do you do?"

I work in developer relations. I help people build cool shit with software. This is true and also the verbal equivalent of saying, "welp..." and putting on a coat. I've gotten rather good at coats and ego death.

"Oh cool, so, like... AI stuff?"

And there it is. The faces change. Sometimes it's suspicion. Sometimes it's the polite, othering smirk Midwesterners have mastered to say, "Now, hon, I don't know about all that, but you do you." Sometimes it's genuine curiosity, but even then, there's a charge to the question that didn't exist three years ago. AI has become one of those words. Like politics at Thanksgiving. Like crypto in 2021.

I could say: yes, sort of, but it's complicated, and actually the thing I find most interesting about AI is also the thing that makes me furious, but I have real excitement about the technology alongside real horror about how it's being deployed, and I don't think those two things cancel each other out, I think they coexist, and—

But that's not the party answer.

So I say something vague with a little hand wave. Change the subject. Make fun of the movie, which, truly, is horrifying in its absolute blandness. But I walk away later with that hollow feeling lodged against my ribs, one I've learned to identify well, that tells me silence is the problem.

During the day, at work, I live the other half of this dissonance. The tech genuinely thrills me. There are these moments, building with AI, where the possibilities feel like when I first learned to code—that stomach-dropping jolt of oh! this changes what's possible. My colleagues feel it, too; you can see it in Slack, this palpable buzz when a demo works and we believe, for even a second, we're building the future.

But there's a low hum underneath. We've stopped asking hard questions because the stock is up and the capabilities are intoxicating. There's this quiet deal: don't look too hard at how you're building, don't think too much about who it's for, and for fuck's sake, don't bring your doubts to the party.


There are, best I can tell, two loud camps in AI right now, and they agree on almost nothing except sheer volume.

Inside tech, the true believers. Sam Altman told Forbes in February that OpenAI has "essentially built AGI, or [is] very close to it," and that they're "progressing towards a system capable of innovating independently." A month later, at the BlackRock Infrastructure Summit, he put it more plainly: "We want to flood the world with intelligence. We want people to just use it for everything." Other alleged human beings, like Elon Musk and Dario Amodei, have said AI systems will outsmart people this year. Keynotes promise abundance, turbocharging, elevation of humanity.

It's a salvation story in a Patagonia vest.

Outside tech, the backlash. My former professor on Threads who posts anti-AI screeds daily, quote-tweeting anyone who admits to using ChatGPT with the chaotic energy of a medieval inquisitor. The artists who treat the technology itself like it's the war crime instead of the people who wield it. Jessica Grose, writing in the New York Times in January: "Without more public shaming, what seems to be the implacable forward march of A.I. is unstoppable." The reaction is understandable. The cringe marketing earned it. But avoidance isn't the same as skepticism, and opting out of the conversation doesn't mean the conversation stops. The deepfakes targeting their kids, the hallucinated legal filings deciding their rights, the algorithms quietly jacking up their insurance rates—none of that cares whether they've muted the discourse.

What both camps share is certainty. They know what they think. They've picked a side, because it's too hard to see past the binary. And their certainty has made the space between them uninhabitable.


That space, the silent middle, is enormous. And the data on it is eerily consistent.

Gallup's Q4 2025 tracking found that 49% of American workers say they never use AI. But among people whose jobs could be done remotely (desk workers, knowledge workers, the people most likely to be reading this), 66% are using it, and 40% are using it frequently. The gap between "never" and "frequently" has been narrowing all year, which means millions of people crossed the line and didn't tell anyone.

They have good reason to stay quiet. A WalkMe survey published last summer found that 48.8% of employees hide their AI use at work to avoid judgment. The discomfort is worst at the top: 53.4% of C-suite leaders conceal their AI habits, despite being the heaviest users. And Gen Z, the generation supposedly built for this moment, is living the contradiction most acutely. 62.6% admit to completing work using AI and pretending it was entirely their own. More than half have faked understanding of AI in meetings. (And I mean, to be fair, I'm sure many of them were also faking understanding of the pointless meetings.)

When Harvard Business Review ran an experiment giving 1,026 engineers identical code to evaluate, the only variable being whether the engineer supposedly used AI to write it, reviewers rated the "AI-assisted" engineer's competence 9% lower. Same code. Same output. Lower respect. The penalty was even steeper for women and older workers.

So the hiding isn't paranoia. It's rational. A KPMG survey of 30,000 workers found 57% had used AI in non-transparent ways. Gusto called it a "shadow economy of productivity improvements"—millions of people quietly paying for AI tools out of their own pockets, using them to get through the workday, and never breathing a word about it.

And it's not just employees. MetLife's annual workforce study, released just last week, found that 67% of employers acknowledge AI is creating new friction or mistrust with their workers, a number that climbed five points in a single year. Both sides of the desk can feel the strain. Neither side really knows what to say.

According to Pew, 50% of American adults feel more concerned than excited about AI's growing role in daily life. Only 10% feel more excited than concerned. The rest, 38%, are "equally concerned and excited."

Equally concerned and excited, in a lot more words, is how I describe to my friends that situationship I'm utterly convinced I can make work.

Cue the Midwestern smirk.


A close friend of mine, a fellow inhabitant of these American flatlands, worked at a tech company until recently. She saw what AI could do, and she couldn't stomach the cognitive dissonance of building with it daily, knowing what it cost. She didn't refuse AI out of ignorance. She just couldn't keep pretending it was uncomplicated.

Her company read this as a performance problem. She's out of tech now.

I think about her a lot, because her story is becoming a pattern of quiet erosion. The people with the most nuanced understanding, the ones actually sitting with the contradictions, are the ones the industry has the least room for. Their thoughtfulness reads as friction. Their doubt reads as disloyalty.

And then there's everyone else, the vast majority outside tech using AI in boring ways that subvert the loudest voices on both sides. Teachers rightsizing lesson plans for more individualized learning. Lawyers synthesizing obscure precedent into custom arguments. Small business owners automating invoices. Writers using AI to get unstuck on a character motivation, then going back to obsess over craft. These aren't utopians or Luddites. They're people being ground down by capitalism who found a tool that makes Thursday slightly more survivable, and they're not talking about it, because the only available scripts are zealotry or shame.

And also because it feels like a cheat code. A glitch. Milk it as long as possible before someone finds out.

I am, for the record, in this middle, too. I'm genuinely excited about this tech. I'm genuinely worried by its rapid onset. My brain is fried, not because I'm confused, but because the truth is contradictory, and now, more than ever, the world refuses to make space for slow thinking. I have, at different points in the same afternoon, felt like I'm building the future and like I'm building a bomb. AI is powerful and it's being deployed recklessly. It's useful and it's being marketed dishonestly. It's helping people and it's hurting people, sometimes in the same product, sometimes in the same interaction. Holding all of that at once, every day, with no one to talk to about it honestly: that's exhausting.


So, this series (more on that coming soon) is my small attempt to stop changing the subject.

It's not an anti-AI project. It's not a pro-AI project. It's an attempt to say the complicated things out loud, in the open, for as long as it takes to get them right.

The questions I keep coming back to aren't will AI save us or destroy us? Those are for the keynotes and the protest signs. The questions that haunt me are smaller and harder.

  • What is this technology actually doing to us, right now, today?
  • What are we feeling as we collide with synthetic minds for the first time in human history?
  • Who's responsible when things go wrong, and who decides what "wrong" means?
  • What do we lose when we optimize away the parts of life that were inefficient but human?

These questions need engagement and skepticism. They require knowing how the technology works and caring about what it does to people. They don't fit in a tweet or a self-deprecating joke.

If you've been using AI and feeling weird about it, this is for you. If you've been avoiding it and feeling righteous about it, this is for you, too. If you've been sitting in the middle, unable to say what you actually think because every available script is too loud, too certain, too much: welcome. There's room here.

And if you've been championing AI as the savior of our mortal universe, I mean, you gotta go get some therapy, but sure, you can be here, too. Just try and be a little quieter.


It's late when I leave the distillery. The drive home takes me through the kind of Midwest night that makes you forget the internet exists for a second, past gas stations and churches and neighborhoods that look exactly the same as they did twenty years ago except for the Ring cameras on every porch.

My head is replaying conversations. So, like... AI stuff?

Yeah. Like AI stuff.

I didn't answer honestly back there, and I know it, and I'm gonna think about it the whole way home because we're human, and we always think about the thing we should have said. We reflect long after being prompted for an answer.

Except I don't know what I should have said. That's the problem. All my honesty is too long-winded and too full of contradictions and, really, just shitty vibes for a distillery on a Sunday night where all we want, for a few hours, is to get off our phones and have a little eye contact. It's the kind of answer that needs a whole series of dialogues and probably a stiff drink and the patience of someone who's willing to sit with you and figure it out together in real time.

So here's what I should have said:

I don't know. I really don't know. But I'm gonna keep figuring it out, and I'm gonna do it out loud, inefficiently. And I want you to come with me. You and all your fears.