we keep building the same machine
from vegas to facebook to your favorite ai and the culture of addiction
recognizing the pattern
there's a kind of reward schedule that produces more pulling than any other. a pigeon that gets a treat every time it pulls a lever stops pulling once it's full. a pigeon that gets a treat on a fixed schedule learns the timing and waits between pulls. a pigeon that gets a treat randomly, sometimes a small one, sometimes a jackpot, sometimes nothing at all, will pull the lever until it dies.
b.f. skinner documented this in the 1950s. he called it variable ratio reinforcement. it produces the highest response rates and the most extinction-resistant behavior of any schedule we have ever measured.
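if you want to feel the difference rather than take the textbook's word for it, here's a toy simulation, my sketch rather than skinner's actual protocol. the agent quits once its current dry streak looks clearly longer than anything it saw while the rewards were still flowing. under a fixed schedule that happens almost immediately. under a variable one, it keeps pulling long after the payouts stop.

```python
import random

def pulls_before_quitting(schedule, mean_ratio=10, training_pulls=500, patience=2, seed=0):
    """toy model: the agent gives up once its dry streak exceeds `patience`
    times the longest dry streak it experienced while rewards were flowing."""
    rng = random.Random(seed)
    longest_dry, since_reward = 0, 0
    for _ in range(training_pulls):
        since_reward += 1
        if schedule == "fixed":
            rewarded = since_reward == mean_ratio      # every Nth pull pays, like clockwork
        else:
            rewarded = rng.random() < 1 / mean_ratio   # each pull pays with probability 1/N
        if rewarded:
            longest_dry = max(longest_dry, since_reward)
            since_reward = 0
    # extinction: the rewards stop entirely. how many more pulls until the silence
    # is clearly longer than anything the agent has seen before?
    return patience * longest_dry

print("fixed ratio   :", pulls_before_quitting("fixed"), "pulls into extinction")
print("variable ratio:", pulls_before_quitting("variable"), "pulls into extinction")
```

the fixed-schedule agent knows exactly what a payout rhythm looks like, so it notices the moment the rhythm breaks. the variable-schedule agent has already survived long droughts that ended in jackpots, so it has no way to tell extinction apart from bad luck. that's the whole trick.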
what i want to walk you through is how that one finding has been turned into three different industries over the past fifty years. each one looks new. each one is the same machine.
first, the casino refined it
the modern slot machine is not a gambling device. it is an engineered behavioral product, optimized for one outcome: time-on-device.
natasha dow schüll, an mit anthropologist, spent fifteen years embedded in the las vegas gambling industry to write *addiction by design* (princeton, 2012). her finding: the people designing slot machines aren't trying to take your money. they're trying to put you into what they call the machine zone, a trance state where time stops mattering and the only thing that matters is the next pull. once you're in the zone, you don't play to win. you play to stay in the zone.
every part of the machine is engineered for this. the ergonomics of the cabinet. the precise pitch of the win-sound. the layered randomization. the near miss, the technical term for when two cherries land on the line and the third just barely misses, fires the same dopamine pathways as an actual win, sometimes harder. casinos design for near misses on purpose. that's the science. that's the product.
then social media learned
forty years after vegas figured this out, silicon valley noticed.
sean parker, the founding president of facebook, said the quiet part out loud in a 2017 axios interview:
*the thought process that went into building these applications, facebook being the first of them, was all about: how do we consume as much of your time and conscious attention as possible. and that means that we need to sort of give you a little dopamine hit every once in a while because someone liked or commented on a photo or a post or whatever. it's a social-validation feedback loop, exactly the kind of thing a hacker like myself would come up with, because you're exploiting a vulnerability in human psychology.*
then he added the line that haunts the rest of the decade. *the inventors, creators understood this consciously. and we did it anyway.*
tristan harris, formerly google's design ethicist, made the same point about the broader pattern. pull-to-refresh on your inbox is a slot machine. the red notification badge is a slot machine. the algorithmic feed is a slot machine, except that the casino now knows everything about you and personalizes the reward schedule in real time.
adam alter documented the consumer-side experience in *irresistible* (2017). he calls these patterns ludic loops, cycles of behavior that keep users engaged without satisfaction. the same mechanism schüll documented in the casino, repeated at the scale of the entire internet.
then tiktok perfected it. the for-you page is a personalized variable-ratio reinforcement schedule with a model trained to maximize the variability of your reward. it is, mechanically, the most refined slot machine ever built. the difference from a vegas floor is that the chips you spend are minutes of your life.
now llms are the third installment
i think we're in the middle of the third major application of the same mechanism. and i think most people building it haven't quite noticed.
every prompt is a pull. the response is the payout. the payouts are not predictable. sometimes you get a brilliant draft. sometimes you get a brilliant draft of the wrong thing. sometimes you get a confident hallucination. sometimes you get exactly what you needed, which trains you to keep pulling. variable ratio reinforcement, applied to the creation side this time instead of the consumption side.
as someone who has been stuck there myself: debugging hell, specifically, is the new machine zone.
there is a particular feeling that any developer who's used an ai coding assistant for more than a week will recognize. it's 2 a.m. you've been at it for six hours. the response from the model is *almost* what you wanted. the code compiles but does the wrong thing. you write a follow-up prompt. the next response is closer. you write another. the next one is *so close*. you write another.
that "so close" is the near miss. it's the same dopamine pathway schüll documented in the casino floor. it doesn't matter that the model isn't trying to keep you in the chair. you're in the chair anyway. and the longer you stay, the more committed you become to the bet that the next pull will be the one. the developer-author quentin rousseau recently named this directly, calling agentic coding *one more prompt: the dopamine trap*. he is right. it's the same trap, with a different chair.
the tokenomics are the third tell. casino chips. crypto tokens. llm tokens. three industries that converged on the same word for the same operation: convert money into some intermediate currency that's easier to spend in small amounts than dollars are. the chip on the table is friction-free. the next pull doesn't feel like spending. it feels like playing.
what's different (the honest version)
i don't want to overstate this. casinos are extractive. the long-run expected value of a slot machine is negative; you cannot win. social media is mostly extractive too: the long-run expected value of a tiktok session is, for most people, lost time you don't get back.
llms are different. they have real positive expected value when used well. some prompts produce work that compounds. some debugging sessions actually solve the bug. unlike a casino, where the house edge guarantees you lose, the long-run expected value of a deliberate, well-targeted llm session is positive.
but only when it's deliberate. an unfocused prompt loop, the "let me try one more thing" pattern at 2 a.m., has the same expected value as a slot machine. the mechanism doesn't care whether you're working or playing. it just keeps you pulling.
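to make that difference concrete, here's the arithmetic. the numbers are invented; the shape of the comparison is the point, not the figures.

```python
def expected_value(p_win, payout, cost_per_pull):
    """long-run average return per pull: win probability times payout, minus the stake."""
    return p_win * payout - cost_per_pull

# a slot machine: the house edge makes every pull slightly negative, by design.
print(expected_value(p_win=0.08, payout=11.0, cost_per_pull=1.0))   # ≈ -0.12 dollars per pull

# a deliberate llm session: p_win is the chance a prompt produces work you keep,
# payout is the value of that work in minutes saved, cost is the minutes spent
# writing the prompt and reading the response. invented numbers, plausible shape.
print(expected_value(p_win=0.30, payout=20.0, cost_per_pull=3.0))   # ≈ +3.0 minutes per prompt

# the 2 a.m. loop: same machine, but p_win has collapsed and the cost hasn't.
print(expected_value(p_win=0.05, payout=20.0, cost_per_pull=3.0))   # ≈ -2.0 minutes per prompt
```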
what i'm taking from this
i'm not going to give you rules. i don't know yours.
what i think is worth sitting with: in fifty years, our species has industrialized the same psychological vulnerability three times. we built the casino. we built the newsfeed. we are building the prompt loop. each time, the people building it understood what they were doing, and they did it anyway, because it works. each time, the rest of us were on the receiving end before we had the language for the experience.
we have the language now. that's the only difference. the machine works on us either way. but knowing the shape of the machine is the first thing that lets you decide, sometimes, to walk away from the chair.
if you've been pulling all day, that's data about your day, not a verdict on the machine. notice it.
links for the lab:
addiction by design (natasha dow schüll, princeton university press, 2012): the foundational ethnography of variable-reward design in las vegas.
sean parker on facebook exploiting "human vulnerability" (axios, 2017): the "we did it anyway" quote in full.
tristan harris on the slot machine in your pocket: the canonical breakdown of newsfeed-as-slot-machine.
one more prompt: the dopamine trap of agentic coding (rousseau, 2026): the same argument, applied to ai-assisted dev.
building with grace is a daily-ish newsletter about ai, building, and the chaos in between.


