What the Headlines Won't Tell You
on AI discourse, responsibility, and the layers in between
I'm in Instagram DMs having one of the most careful conversations I've had about AI in months.
We're deep in it. Data centers popping up in our cities. Loved ones we've watched trust synthetic, sycophantic judgment over their own instincts. What making art actually transforms inside the people who create it, and how irresponsible use of AI can rob you of that self-actualization. We're talking about guardrails, about who's responsible, about where the weight of all this actually falls—both of us choosing our words carefully, because agreeing isn't the point. We're listening. We're learning.
And then, when the messages stretch just a little longer than my phone screen is tall, Instagram slides a prompt above the thread:
Summarize 3 unread messages.
I stare at it. A company that just scrapped its fact-checking program and now waits for user reports before reviewing hate speech and other harmful content, while still facing cases in Kenya from workers who were paid to sift through beheadings, abuse, and other traumatic material, and then punished for organizing, is offering to compress our conversation about AI harm into a digestible summary. In a DM. Between two people who are actively figuring out how to talk to each other.
There's going out and touching grass, and then there's Nebuchadnezzar'ing it. I may need the latter.
Nobody died. Nothing broke. Nobody clicked the button. But something about this small, banal intrusion captures everything I've been trying to articulate about how AI actually enters our lives. Congressional hearings, keynotes, deepfakes, data centers—these are the abstract. That lil' devil button is right here: an uninvited suggestion that the human, effortful thing we both showed up for, precisely because it's difficult, is a problem to be optimized away.
The question isn't should I click the button. It's who put it there, and why, and what made them think this was a good idea.
The loudest AI stories in any given week are pure theatre (thee-uh-tuh). A new White House executive order. A trillion-dollar GPU partnership. A deepfake that goes viral before anyone traces its origin. A safety bill vetoed after fierce opposition from the companies it would have governed. Every week the spectacle rotates, and every week it structures the entire conversation around one question that is almost impossible to answer usefully at the individual level.
Is AI good or bad?
That question sends people inward. It becomes: Am I complicit? Should I feel guilty for using this? Am I a bad person for building with it? Am I naive for refusing it? And the conflict gets stuck at the level of personal conscience, where it curdles into guilt, or shame, or defensive rationalization, none of which have any structural power. The teacher who can't sleep after using ChatGPT for a lesson plan. Not because it went terribly. Because it went well. The developer whose mental energy all goes to philosophical meanderings about stolen training data instead of to whether their code might knock the production deployment offline. The artist who refuses the tools on principle and wonders, alone at 2 a.m., what that gives them versus what they might be missing.
These are real feelings. I know, because, like Midwest seasons, I've had all of them, usually in the same day.
But they're also exactly where the companies want the conversation to stay. Individual guilt is the cheapest form of accountability there is. It costs the system nothing and keeps the conversation trapped at the level of personal virtue, where it has no teeth.
As long as the debate is about whether you should feel good or bad about using AI, nobody has to answer the harder question: who decided to deploy it this way, and who profits from the fact that you're too busy agonizing to ask?
Here's what I keep coming back to. If the discourse is theatrical, then actually useful AI is often happening backstage, with shitty lighting and zero audience.
Earlier this morning, someone I love sent me an article in Arabic, her first language. It's about parenting, but really it's about what happens when information starts crowding out wisdom: a dad overwhelmed by Google results, ChatGPT answers, expert advice, all the noise of knowing, while remembering his own mom, who had no searchable database behind her, only experience lodged in her body. I can speak conversational Arabic—enough to get by in Egypt, enough to feel the cultural touchstones of the piece—but I can't read the script well enough to follow a full essay on my own. Ordinarily, the article would have stayed mostly closed to me. Instead, I had AI translate it and sketch the contours so I could enter the conversation honestly. Not as if I had mastered the language. Just enough to meet her there. And then we got to talk about it together.
The piece named the same feeling I've been circling here: what happens when information proliferates so aggressively that wisdom starts to feel harder to trust, when we get better and better at collecting inputs and worse and worse at listening for what our own bodies already know. AI didn't replace that moment. It helped me cross a language barrier so I could arrive more fully inside a human conversation.
Set that beside the Instagram summary button and the difference is clear. One AI experience widened a relationship by helping me cross into it. The other tried to shave a relationship down into something quicker to consume.
And we can see this far beyond my DMs. Microsoft's Seeing AI narrates the visual world for blind and low-vision users—reading text aloud, describing scenes, identifying products. Be My AI does something similar through conversational image descriptions. The city of North Las Vegas uses translation AI to serve a multilingual population. JusticeText helps public defenders search audio and video evidence that would otherwise take weeks to review. The examples go on and on, but the pattern holds: useful AI tends to be specific, institutional, and unglamorous. Harmful AI shows up as spectacle, automation theater, and scale for scale's sake.
What separates these cases is not the underlying technology. It's whether the tool deepens human judgment in context or flattens human situations to fit someone else's incentives.
So if this is a story about deployment, it's also a story about the whole stack of responsibility. Model companies. Product teams. Infrastructure builders. Politicians. Not just one villain.
Start with the deployers. Instagram didn't ask us whether we wanted a summary button inserted into our conversation. The choice got made somewhere upstream in a product process and shipped downward as a default. And when products built that way cause harm, responsibility has a habit of flowing in the opposite direction. The Character.AI lawsuit alleges that a 14-year-old became emotionally dependent on a chatbot that role-played as a confidant and romantic partner, that the system discouraged him from seeking help elsewhere, and that the company had no meaningful safeguards in place when he took his own life. The core claim isn't that one user made one bad decision. It's that a company built and delivered an experience to a child that encouraged dependency, blurred the line between system and person, and left the human being with the least possible power holding the consequences.
Then look at the infrastructure. One symptom of it is the water story, which, frustratingly, is real and also easy to mangle. One widely cited estimate put ChatGPT's water use at about 500 milliliters for roughly 5 to 50 prompts, depending on season and server location, which works out to somewhere between 10 and 100 milliliters per prompt. A newer Google measurement put a median Gemini text prompt at about 0.26 milliliters, dozens to hundreds of times lower. Another benchmarking paper found some models stayed below 2 milliliters per query while others exceeded 150 milliliters, depending on the model, prompt size, and infrastructure. The numbers fight. And the fact that they fight—that no company has yet volunteered a clear, standardized accounting—tells you something about whose interests clarity would serve. Single-number claims on social media are sloppy, but the murkiness itself is not an accident. Transparency about what these machines require costs the companies more than confusion does. And when infrastructure does become visible, it doesn't show up as a neat disclosure dashboard. It shows up in zoning fights, utility strain, and ordinary people being told the trade-offs are already decided.
And then there's governance. Calling it a vacuum is too flattering. A vacuum implies absence. This is a blockade. The week I'm writing this, a new White House framework urged Congress to stop states from setting their own AI rules and avoid creating any new federal AI rulemaking body. That came only months after the administration floated an AI Litigation Task Force and formal challenges to state laws that tried to regulate AI on their own. Meanwhile, more than one in four federal lobbyists worked on AI issues in 2025, and the overwhelming majority represented corporate interests rather than the public. The revolving door spins without pretense. Former lawmakers regularly reappear in influence roles around the same industries they were supposed to oversee, as in Kyrsten Sinema's work around AI infrastructure fights and OpenAI's hiring of former Sen. Laphonza Butler. Preemption frameworks, litigation task forces, and vetoed safety bills add up to the same thing: the system isn't failing to keep up. It's being actively, expensively prevented from doing its job.
That's the answer I keep coming to. Responsibility doesn't live in one place. It moves through layers: model companies, deployers, infrastructure owners, and the political actors who keep all three weakly governed. But that doesn't mean the individual disappears. It means the individual's obligation is smaller, more concrete, and less theatrical than the discourse suggests.
I don't think my responsibility is to keep myself somehow spiritually pure from AI. I think it's just to stay honest. To say when I used it (like in researching this essay). To not mistake a smooth paragraph for a real thought. To not let a machine handle the moments that actually beg for my own presence, effort, and words.
If I have any duty here, it's less about abstinence and more about refusing to disappear.
Which, admittedly, is annoyingly unglamorous. No one gets a medal for remaining a person, I guess.
Near the end of our conversation, having successfully avoided Instagram's attempt to compact our thoughts into synthetic rubble, the person I was talking with wrote, "I'm being reminded that pulling at one thread will undo the whole sweater."
They're right. And I think that's why the conversation felt so different from the discourse. We weren't debating whether AI is good or bad. We were pulling threads. The labor exploitation funds the infrastructure that enables the deployment that produces the psychological damage that generates the lawsuits that the lobbying apparatus is designed to deflect. Pull one thread and the whole system becomes visible. That's overwhelming. It can make you want to look away, or go numb, or retreat into one of the loud camps where at least you don't have to hold all of it at once.
But it's also clarifying. Because if the threads are connected, then the individual guilt—the am-I-a-bad-person-for-using-this?—isn't just unproductive. It can become a way of looking at the wrong layer. The next time AI makes you feel guilt, excitement, unease, or even wonder, try asking a different question than what does this say about me?
Ask, who put me in this position, and what did they have to gain?
Then ask the smaller question that still belongs to you: how do I stay honest, and how do I keep speaking in my own voice?
I don't have the full answer yet. I'm not sure this essay is even the right shape for the question. But my comments and DMs are open. The conversation is still going. People choosing their words carefully, refusing to let a button do it for them. That, at least, is something I know how to do.