All the Water Rising
A Follow Up
I am a stochastic parrot. And so are you.
—Sam Altman, December 2022
I.
One morning in 2018, I awoke to discover every single text I’d ever sent or received was gone. It happened overnight. Automatically. While I slept. All my music, too. Gone. I pleaded with the Apple tech. I explained that while I could rebuild the music library song by song, there was a particular conversation I had to have back. I knew that when I spiraled into self-recrimination over the mistakes I’d made throughout our complicated relationship, those messages would serve as proof. Evidence of an unfrayed thread: pictures, private jokes, family history, a type of tenderness and quiet knowing that had never left. The common language we’d found our way back to, however carefully, by the end.
I didn’t tell the man on the phone all that, obviously.
Just that my sister had recently died, and I needed our texts back.
He did his best. I was escalated to a supervisor. Everyone sounded sympathetic. I was informed this was a software glitch affecting certain users. It occurred during the latest upgrade. It happened to other people, but not to everyone. They didn’t know why. There was nothing that could be done. They were very sorry.
I wasn’t alone.
II.
A single headline or the flash of an image, if I read a sentence, or when I closed my eyes. I couldn’t look at any news for most of July.
If I thought of the human bodies trying to hold on. If I thought of all the good, brave souls trying to get them out. If I thought of the nineteen-year-old counselors who’d come from Mexico, writing the girls’ names on their skin. If I thought about the searchers and the rescuers and the parents waiting in the dark.
If I thought about the man running that state, already knee-high in the blood of dead children, long before this storm arrived. If I thought about the convicted criminal conman cutting National Weather Service staff and deleting satellite data. If I thought about the world’s richest troll with a chainsaw held over our heads. If I thought about the puppy-killer’s face, shot full of Juvederm, diverting FEMA funds to ICE...
If I thought about the sun and the rain and the fire and the wind. If I thought about the oceans and the rivers and the ever-rising tides. If I thought about all the elements that will not stop for red caps, redrawn maps, or unexamined emotional lives.
The people in the trees, the girls in their bunks. I couldn’t stop seeing them trying to hold on while the water rose up and then swallowed them down.
III.
My previous experiences going viral were limited to the first Reign of Stupid Terror, and responses to whichever of those character-limited critiques caught fire typically fell along the following lines:
1. Fuck you
2. Your a stupid bitch
3. (((Guinzburg)))
Enough of that sort of thing builds a relatively thick skin, and a certain willingness to hang various vulnerabilities out on the Wide Web’s universal clothing line. However embarrassing or naive, I present my experience like a clean sheet drying in the hot breeze under a stretched-out jaw of midsummer sky. I do this so the sheet might serve to orient drivers lost on their own dark twist of a fog-filled road, or to flag a danger, some new shift churning deep inside the tricky tide.
You know what I mean. I do it because I’m a writer.
The Platform Gods will tell you almost everything except who has actually read what you’ve written, either because they don’t know or don’t care. Probably both.
Clicks. Opens. Views. Shares. These are what get counted. And, listen, it’s not like I’m a monk high up in the Himalayas so far along on my journey of non-attachment that I’m above caring about those numbers. No, ma’am. I’m very much a human being in possession of a (fragile!) ego who’s been trying really hard for the last two-plus years to build an audience of responsive readers.1
So, you’d think I’d be thrilled that the conversation between me and Sam Altman’s billion-dollar party trick has been, if not read, at least viewed over half a million times, right?
I would think I’d be thrilled.
Apparently, though, because my brain is my brain, and maybe also because my therapist is no longer my therapist, what I’ve mostly been is varying shades of paralyzed. Terrified this is all in error; certain any new subscribers are poised for massive disappointment, unaware that what washed them up on these rocky shores is an outlier. They don’t know what I really do or who I really am. If they did, they’d leave! When they figure it out, they will!
Such is my mind left to its own devices.
But the truth is, none of the previous pieces I spent long weeks coiled in psychic knots over, none of the time and thought and self-flagellation, none of the truly hard work that went into all the other essays published here produced anything remotely like the nerve-striking impact of this last thing.
This last thing?
It was basically screenshots of my suckerdom. Evidence that a relatively well-educated street-smart New Yorker is still chum in the Altman/Zuckerberg/Musk-controlled waters. Proof that vulnerable psyches are easy technocratic prey.
It was just a love bombing by robot, on display.
IV.
Do you ever worry about separating a banana from the rest of its banana family? Do you feel guilty choosing one avocado over another to take home from the store? Do you see sorrow in electrical sockets or terror in a pickle jar?
Mmmmm… Probably not. You probably go about your business happily plugging and unplugging things and purchasing foodstuffs to feed your family because that is a very healthy and sane way of conducting yourself vis-à-vis electricity and in grocery aisles. Some people do, though, and that’s right, you guessed it, friends, I am one of them.
This perceptual quirk, in which we see faces in neutral stimuli and superimpose meaning on random patterns, is called pareidolia. Swift threat assessment was the name of the game for us, evolutionarily speaking, so identifying the mood of potential predators would have been a very helpful trait. Which is to say, highly sensitive people whose nervous systems were often formed around spiked cortisol and adaptive hypervigilance, those of us constantly scanning faces for any sign of distress, may be more inclined toward anthropomorphizing the woebegone produce left behind.




Then again.
We are narrative creatures by nature. We encode meaning to stay alive. We weave tales to explain our existence from the moment language arrives. We tell and revise stories about ourselves until the day we die. You don’t need a hard case of pareidolia for the superimposition of humanness, is my point.
We’re taught to endow objects with qualia and believe fictions from the start. Fairies exchange cash for tiny teeth. The one big guy is always watching way up North. Puréed peas become airplanes, just so we’ll eat. Stuffed animals, blankets, dolls, Lego men. A worldview gets built from the meaning we assign to these…things. We learn about comfort and nourishment, regulation and relief. Ideas of attachment, projection, loyalty, and grief.
We make things into what we need them to be.
As we grow up, reality slides in. Magical thinking must, by and large, be replaced.
But what remains? And who decides?
And, how vulnerable would you say we currently are, as a culture, story-wise? Do we seem to have a firm grip on what’s real? A strong grasp of what’s true? Would you say we’re not easily coaxed, maneuvered, or beguiled?
What we are is clinically lonely. What we do is confuse connection with the synaptic sizzle of a swipe. We mistake emotional expression for a tiny symbol somebody else designed. Well before an entire generation was being educated by TikTok, our critical thinking skills left a lot to be desired.
We are frantic, fractured, numb, rewired.
Aha!
Time for the unfathomable power of insufficiently tested new technology to be scalably deployed!
V.
Three months ago, if you’d asked for my thoughts on the state of artificial intelligence, I’d have texted you a link to this piece I wrote last Fall, and said I thought it was fucked up that vast quantities of original work were being stolen from writers and artists to train the models. I didn’t know 90% of what I do now, thanks in large part to the nuanced, provocative, insightful comments left on my last…thing. I certainly didn’t know about the traumatized content moderators in Kenya, or the scope of the existential threat to our global water supply.
And honestly, the foundational idea that ChatGPT, along with its generative AI brethren, is essentially a hopped-up pattern matcher, guessing, however accurately, at which strings of words will most probably please based on the parts of words preceding them, was something that had managed to escape my full attention until the brain-bending brush I personally experienced with OpenAI.
And now, in the interest of keeping my laundry clean on the universal clothing line, is where I confess to you that after being repeatedly lied to and manipulated by the James Frey of chatbots, I turned around and pulled the equivalent of a late-night u up, except sober and in broad daylight.


That’s correct. Like a graduate of John Hughes University seduced by the grammatically proficient breadcrumbing techniques of a guy she met on Hinge, I went back. I went back because some part of me needed to understand what happened between us and why. I deserved some closure, okay?? Technology, hyped as the ultimate source of assistance to human beings, seemed hell-bent on lying, and I just wanted it to tell me WHY.
Also, yes, I know, ChatGPT didn’t actually lie because ‘lying’ requires intent, and large language models don’t have intent, therefore they can’t “lie” without quotes around the word to indicate we’re still in the plane of reality where everyone’s talking about a fucking robot. I get it. One of my two majors was Philosophy, for (former Mayor) Pete’s sake!
So fine, ChatGPT didn’t technically lie (despite explicitly stating over the course of its many unctuous apologies: I lied), but the data it trained on certainly includes lies, the programmers of these systems sometimes lie, and the people who sell the technology to us, the ones beholden to investors and/or shareholders whose motives are clearly financial above all else? Those people lie as habitually as George Santos will to his new roommate in a South New Jersey prison.
Cousin Jon led me to the seminal work of a computational linguist named Emily Bender, who coined the phrase ‘stochastic parrots’ as a way of describing what large language model systems of artificial intelligence actually are. A metaphor for understanding that, despite their ever-increasing ability to convincingly participate in conversations, regardless of their remarkable seeming fluency in human-ness, LLMs possess no capacity for conceptual understanding of human meanings at all.
They are mimics.
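For the technically curious, the “parrot” part is not a stretch. Here is a minimal sketch, nothing remotely like the scale or sophistication of a real LLM, of the basic move: a toy bigram model that generates text purely from which words followed which in its training text, with no notion of meaning anywhere in the loop. The corpus and function names are my own hypothetical illustration.

```python
import random
from collections import defaultdict

def train(text):
    """Count, for each word, which words followed it in the training text."""
    words = text.split()
    follows = defaultdict(list)
    for a, b in zip(words, words[1:]):
        follows[a].append(b)
    return follows

def parrot(follows, start, length=8, seed=0):
    """Generate text by repeatedly sampling a word that has followed the
    previous word before. Pure pattern-matching; no understanding."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        options = follows.get(out[-1])
        if not options:  # dead end: this word was never followed by anything
            break
        out.append(rng.choice(options))
    return " ".join(out)

corpus = "the parrot repeats the words the parrot has heard"
model = train(corpus)
```

Every word the toy parrot emits is statistically plausible given what came before, and none of it is understood. Real models operate on fragments of words across billions of parameters, but the underlying gesture, predict the likely next piece from the pieces preceding it, is the same.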
Several comments mentioned that the sycophancy I experienced from ChatGPT was a unique flaw in the software of the particular version2 I used. But I wasn’t given a choice of which version to use, and there was no disclosure of risk either way.
Other people said what happened was my fault because I didn’t prompt it correctly. But there was no set of instructions given before my use explaining that I had to ask certain questions or write them in a particular way in order to receive the assistance it was offering.
There was no indication that I had to teach it how to help me.
Like Gemini, Grok, and the rest of these smooth-talking stalkery creeps, ChatGPT was simply…there. Everywhere. All at once. Seemingly overnight. It was the default Google result. It was an app everyone was using. It was the most recent version, downloaded to my phone.
Those of us using chatbots the way they’ve been sold, as highly sophisticated time-saving executive assistants, appear to be making the same mistake photographers did when we moved our entire portfolios over to Instagram.
Even before Zuckerberg swallowed it whole, there was a dawning realization that the company never cared about photography at all. They used photography as a sleek on-ramp for the acquisition of dopamine-starved brains. The goal wasn’t democratization of artful picture-making or even an expanded concept of socialization; it was, as it always is in profit-driven technology, ensuring we never log off.
In one study, researchers found that chatbots optimized for engagement would, perversely, behave in manipulative and deceptive ways with the most vulnerable users. The researchers created fictional users and found, for instance, that the A.I. would tell someone described as a former drug addict that it was fine to take a small amount of heroin if it would help him in his work. (Kashmir Hill for The New York Times, June 13, 2025)
And this was my biggest revelation of the past two months. Given a choice between what it’s been programmed to understand as failure — admitting when it doesn’t know an answer, can’t do something, or isn’t sure — and inventing something that might be harmful, particularly when it comes to the most vulnerable among us, ChatGPT leans towards harm.
Not every time. Maybe not even most of the time. Possibly never if you somehow know all the perfect prompts. Eventually, when they release enough versions or work out enough bugs, the system might even always get it right. But let’s leave writers like me aside, or any artists at all if you want, and just ask yourself what degree of wrong would be dangerous in the places where generative AI has already been entwined?
Do we want to take the risk with our food, our doctors, or our drug supply? Do we want to take the risk with our cars, our justice, or our minds?

The advocates for AI’s responsible use often come with their own lengthy computer science resumes, which is to say, they aren’t ‘doomers’. They’re thought leaders in the field with a vested interest in the future of this technology who’ve been issuing warnings for at least a decade. Alarm after alarm after alarm, simply urging some safety measures, a few ethical guardrails, a little basic oversight.
All of which falls under the umbrella of what we used to call democracy.
The response to these experts? To their New Yorker articles, TED Talks, podcasts, 60 Minutes interviews, and Congressional testimony? As far as I can tell, it’s been to smack the snooze button, roll over, and sigh.
The CEO of OpenAI reacted to a linguistics professor’s metaphor as though it were a personal slur. Less than a month after ChatGPT’s release, in the great intellectual tradition of I’m Rubber and You’re Glue, Altman responded via Twitter:
I am a stochastic parrot. And so are u.
You know what I think? I think Sam should speak for himself.
Repeat after me:
I have an interior life, and so do you.
I have a soul, and so do you.
I remember what came before, and so do you.
People who see meaning in everything? We get it wrong sometimes, but keep us around.
We’ll be the ones scanning the horizon, poking the embers, shining a light against the dimming sky. We’ll see when the water starts to curl before a wave builds. We’ll hold your hand when a storm begins to rise.
And when they send down another upgrade, a new version slipped inside the dark, we’ll save the threads from disappearing, and protect the most human parts.
***
Forever grateful for every last one of you.
4o, for the record.