Discussion
Landmark L.A. jury verdict finds Instagram, YouTube were designed to addict kids
yacin: this has to be the first of many, right? Fingers crossed this leads to some meaningful change.
2OEH8eoCRo0: It's a huge deal because it was the bellwether case for over 1,000 other similar cases.
onlyrealcuzzo: How is any app/website that 1) appeals to kids, 2) sells attention, 3) does A/B testing NOT guilty of this?
guzfip: It probably helps when you suppress research that shows you’re harming children and allow human traffickers to fester on your platform with 17 warnings or whatever.
ramesh31: I've heard about "landmark" cases against these companies over and over again for the last decade. There seems to be at least one every couple of years. And yet literally nothing has ever happened or changed.
sampullman: I think there's a little more nuance than that, but it seems roughly correct. Wouldn't it be better if apps/websites targeting kids didn't use A/B testing to be more addictive?
embedding-shape: I guess ultimately it depends on whether the app/website authors do so "negligently" or not.

> Jurors were charged with determining whether the companies acted negligently in designing their products and failed to warn her of the dangers.

So if you do so while providing warnings and controls for people, that might make it OK in the eyes of the law?
SirFatty: "algorithm" would be the key word, I think.
steve-atx-7600: How’s this different than tv that a kid might see that has ads and programming targeting kids?

I watched 80s horror movies when I was in elementary school and had nightmares for years. Should I sue now?

How about parents being held responsible for how they care for their kids or not? Maybe a culture that judged parents more strongly for how they let their kids spend their time would be an improvement.
everdrive: Being able to find some basis for comparison between two things does not render them equivalent, and this is an extremely frequent fallacy I see with regard to technology discussion on HN.
everdrive: Correct, selling attention inevitably leads to harm.
roxolotl: Both things can be true. Parents can share responsibility. But it is also the case that Facebook actively suppressed research that showed that children using their platforms experience emotional harms. It is also the case that around the time you were in elementary school, discussions about children’s programming had been ongoing for years, and eventually regulations were put in place [0].

[0]: https://en.wikipedia.org/wiki/Regulations_on_children's_tele...
schmidtleonard: > more nuance

Not enough to diffuse liability. 15 years ago, when recommender algorithms were the new hotness, I saw every single group of students introduced to the idea immediately grasp the implication that the endgame would involve pandering to base instincts. If someone didn't understand this, it's because:

> It is difficult to get a man to understand something, when his salary depends on his not understanding it. - Upton Sinclair
Hobadee: Is the addictiveness of social media great? No. But the blame shouldn't be placed squarely on the companies either. What happened to personal responsibility? I was addicted to Facebook, I realized it, and I disconnected from it. I had withdrawals for a while (pulling out my phone and trying to open the app I had deleted without really thinking about what I was doing), but I quit. I know I am addicted to YouTube Shorts, so I stay away from them. Occasionally I'll go on a bender and a few hours will slip by without me realizing, but while I know YouTube designs them to be addictive, I blame myself for falling for it.

There are plenty of things in life that can be addictive: drugs, sex, money, power, adrenaline, entertainment, technology... the list goes on. If we remove everything addictive from life, you better believe something else will rise up to take its place.

The solution therefore isn't to remove everything addictive from life, but rather to raise everyone with the forethought to know what might be addictive, the self-awareness to realize when they are addicted to something, and the self-control (and support systems, if and when necessary) to stop.
imiric: On the one hand: sure.

On the other, it's very different when companies explicitly design their products to be as addictive as possible.

We've been through this with Big Tobacco already. Nicotine and other tobacco substances are addictive on their own, but tobacco companies were prosecuted for deliberately making cigarettes as addictive as possible, in addition to marketing to children. The parallels with Big Tech and social media are undeniable.
pearlsontheroad: Everyone should at least be a conscientious junkie.
nkrisc: Yes, personal responsibility is important. That doesn't mean we need to allow companies to attempt to addict as many people as they can.

The question we should be asking: are these technologies a net positive to society?
paulkon: Just needs a health warning label, like on alcohol or cigarettes. Then on to the high-sugar products, and a quarter of the grocery store.
ddoolin: If we want to compare it to alcohol/cigarettes, then kids shouldn't be allowed to use this either.
simonh: The problem is that internal communications inside these companies raised concerns about the manipulativeness, and even deceptiveness of the algorithms and tactics they were using.They weren't just consciously creating an attractive platform, they were consciously creating a manipulative platform.
parpfish: When it comes down to it, I’m not sure how you differentiate an “addictive” product from a well-made product that I choose to keep using.

When people say that Tetris and Civilization are “addictive” they aren’t implying anything malicious about the development; it’s more of a compliment about the game (and maybe a little lament about staying up too late).

But the addictive nature of social media feels different, and I can’t figure out what that distinction is.
everdrive: I think this represents a strong misunderstanding of what addiction is and how it works. I mean this respectfully, and not combatively -- I expect you have never had problems with addiction.

When it comes to behavioral psychology research, there is a strong understanding of reinforcement schedules: fixed-ratio, fixed-interval, and variable-ratio rewards. People have a very clear understanding of what sort of stimulus is and is not prone to addiction. You can get a mouse in a cage hopelessly addicted to pressing a lever for a reward depending on what reward schedule you use, and this does not happen with a mouse that can just get the reward at a regular interval (or perhaps merely a less-addictive one). The mouse in the cage pressing a button on a variable-ratio reward schedule is equivalent to an old person using a slot machine, in a very literal and direct way. This also translates to social media with infinite scrolling: so many of the stories are duds, but the variable schedule means the next one just might be the extremely enticing (or enraging) one.
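The distinction everdrive draws can be made concrete in a few lines. This is a minimal illustrative sketch (the function names and the payout ratio are made up, not taken from any study): a fixed-ratio schedule pays out predictably, while a variable-ratio schedule pays out at the same average rate but unpredictably, the pattern slot machines and endless feeds share.

```python
import random

def fixed_ratio(presses, n=5):
    """Reward on exactly every n-th press: fully predictable."""
    return [i % n == 0 for i in range(1, presses + 1)]

def variable_ratio(presses, n=5, seed=0):
    """Reward each press with probability 1/n: the same average
    payout rate, but the next reward is never predictable."""
    rng = random.Random(seed)
    return [rng.random() < 1 / n for _ in range(presses)]

# Over many presses both schedules pay out at roughly the same rate;
# only the pattern differs, and the unpredictable pattern is the one
# that keeps the lever getting pressed.
fixed = sum(fixed_ratio(1000))      # exactly 200
variable = sum(variable_ratio(1000))
```

The point of the sketch: the total reward is essentially identical; addictiveness lives entirely in the schedule.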
GardenLetter27: Mandatory age verification is coming.
bogdanoff_2: The solution to this would be a law forcing these sites to allow third-party suggestion algorithms, so that you can choose who suggests content to you, and how.

It could perhaps be as simple as allowing third-party websites and apps for watching YouTube on your phone. And it's okay if this were a premium paid feature, so there's no counter-argument that "it costs them money to host videos".

This is not an entirely new idea, either. Before Spotify became popular, people would integrate Last.FM into their media players to get music recommendations based on their listening history, and you could listen to music via YouTube directly on the last.fm website.
dmbche: Or algorithms have to be submitted and approved by a government body before being allowed to be implemented and are frequently audited
card_zero: People will now say "the algorithm" and "dopamine", explaining nothing. You see, social media is truly addictive because it's been honed to be addictive in some way that isn't specified or known or actually true.
steve-atx-7600: I understand what you’re saying, I personally don’t like or use social media, but I don’t agree that these companies are at fault after reading this article and others. I’d rather be wrong and learn something than think I’m right, so I welcome further criticism.
F7F7F7: They strategically use patterns that directly trigger the release of dopamine in the brain.

They've created algorithms with slot-machine-like experiences that keep kids hooked to the screen.

These algorithms feed users barely moderated content that plays to their worst instincts, with almost surgical precision when they want to elicit engagement.

Then, when research shows them the harm they're causing, they bury it, hire lobbyists, and double down.

Switch out a few words up there and you have the Big Tobacco playbook.
ddoolin: Maybe this applies more towards adults, but I don't think the correct answer for kids is only "just have self-control," something kids are notorious for not having. Certainly there's a lot of parental responsibility here but we can simultaneously hold companies responsible for their part too.
ValentinPearce: It also is a situation where the ubiquity of these companies makes it exceptionally difficult for parents to regulate access.
CarVac: We can't raise other people. We can prohibit the addictive things, like newsfeed-driven Facebook.
ValentinPearce: If they are liable for making the thing addictive, it does mean it is their fault. In this case, it specifically says it's designed to be addictive to children, of whom personal responsibility is probably not expected.
mrbluecoat: I feel the same way. They're just going to appeal the case until they find a layer of the legal system where they have leverage.
data-ottawa: How do you prevent a Cambridge Analytica exfiltration situation with third-party algorithms?

And how does this prevent addictive algorithms, which will win through social selection?
DavidMcLaughlin: A/B testing is very, very different from handing over control of your content to a reward function that optimizes for time spent over all other criteria.

We had 10+ years of products like Facebook, Twitter, YouTube, hell, even LinkedIn, with a basic content model of "you build your own graph of people who you pull content from"; their job was to show it to you and put ads in there to fund the whole enterprise. If I decided to follow harmful content? That was a pact between me and the content creator, and YouTube was nothing more than a pipe the content flowed through. They were able to build multi-billion-dollar businesses off of this. That's really important: this was enormously profitable. But then the problem arose that people's graphs weren't interesting enough, and sometimes they'd go on the thing and there were no new posts from people they followed, and this was leaving money on the table. So they took care of that problem by handing over control of the feed to the reward function.

More accurately, especially for Meta products: they completely took control away from you. You didn't even have the option to retain the old, chronological social-graph feed anymore. And it was ludicrously profitable. So now the laws of capitalism dictate that everyone else has to follow suit. I now have extensions on my browser for Instagram and YouTube to disable content from anything I don't follow, because I still find these apps useful for that one original purpose they had when they blew up and became mainstream. Why are these browser extensions? Why can't I choose to not see this stuff in their apps? That's the major regulation hole that led to this lawsuit, imo.

It's the same thing you see with people blaming smartphones for brainrot. We've had 15 to 20 years of smartphones with more or less the same capabilities they have today, and for the vast majority of that time my phone didn't make books less interesting or make me struggle to do chores or manage my time. For a full decade or more I saw my phone as a net positive in my life, was proud to work for Twitter, and generally saw technology like the Louis CK bit about the miracle of using a smartphone connected to WiFi on an airplane. But in the last five years or so, things have noticeably and increasingly gone to shit. Brainrot is a thing. All my real-life friends who are the opposite of terminally online or technical are talking about it. I don't use TikTok, but it seems like that is absolutely annihilating attention spans. The topic of conversation over drinks is how we've collectively self-diagnosed with ADHD and struggle with all kinds of executive function... but also are old enough to remember a time when none of this existed. Complete normies are reading Dopamine Nation and listening to Andrew Huberman trying to free themselves.

I don't know what the exact solution is, but there's at least a simpler time we can point to, when we all had smartphones and we were all connected via platforms and we all posted and consumed stupid pictures of each other, and it wasn't... _this_.
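The contrast drawn above, a follow-graph feed versus a reward-function feed, can be sketched in a few lines. This is purely illustrative: the post records and the `predicted_watch_time` score standing in for the reward function are made up, not any platform's actual ranking.

```python
# Hypothetical post records; predicted_watch_time stands in for
# the reward function's output.
posts = [
    {"author": "friend_a", "followed": True,  "ts": 3, "predicted_watch_time": 10},
    {"author": "stranger", "followed": False, "ts": 1, "predicted_watch_time": 95},
    {"author": "friend_b", "followed": True,  "ts": 2, "predicted_watch_time": 30},
]

def chronological_feed(posts):
    """The old model: only people you follow, newest first."""
    return sorted((p for p in posts if p["followed"]),
                  key=lambda p: p["ts"], reverse=True)

def engagement_feed(posts):
    """The reward-function model: anything goes, ranked purely
    by predicted time spent."""
    return sorted(posts, key=lambda p: p["predicted_watch_time"], reverse=True)
```

Same inventory of posts; under the second ranking, the unfollowed stranger with the highest predicted watch time jumps to the top, which is the whole complaint.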
onlyrealcuzzo: Great point RE the self-learning algorithms. That's what I intended originally, but didn't communicate clearly.
robinanil: I didn't see this thread, so I started something new: https://news.ycombinator.com/item?id=47530367

I have a somewhat unusual vantage point on this: I'm a former Google engineer, now running a children's mental health startup (Emora Health), and my toddler is already on YouTube Kids. So this verdict hits on every axis for me.

I wrote up my full take here [1], but the short version: I don't think the "Big Tobacco moment" framing that NYT is pushing actually holds up. Litigation is negative reinforcement, and if you've ever tried telling a toddler "no" you know how well that works long-term.

The families in this case absolutely deserve to be heard. The harm is real. But courts can only punish; they can't redesign a recommendation algorithm. The change has to come from people who understand these systems building better ones. Haidt has been saying for years what this verdict just confirmed. The evidence was never the bottleneck. The will to design differently was.

I'll give you a simple example. Try blocking Blippi on YouTube Kids; man, it's crazy. Even if you block the main Blippi and Moonbug channels, hundreds of channels have Blippi content cross-posted, and it keeps popping up. I know it's easy to build, using AI, a Blippi-block feature that blocks across channels.

That's the kind of solution we need. We have the tools; we just need intent and purpose.

[1] https://www.emorahealth.com/clinical-insights/social-media-v...
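The core of the cross-channel block feature robinanil describes really is small. Here is a hypothetical sketch, assuming you can observe each video's title and channel name from some feed; YouTube Kids exposes no such API to parents today, which is the point.

```python
# Hypothetical cross-channel keyword blocking. "blippi" is the
# example blocklist term; video records are assumed to carry a
# title and a channel name.
BLOCKLIST = {"blippi"}

def is_blocked(video):
    """True if any blocklist term appears in the title or channel name."""
    text = (video.get("title", "") + " " + video.get("channel", "")).lower()
    return any(term in text for term in BLOCKLIST)

def filter_feed(videos):
    """Drop blocked videos regardless of which channel posted them."""
    return [v for v in videos if not is_blocked(v)]
```

Blocking by channel alone misses re-uploads; matching on the title text catches the same content wherever it is cross-posted, which is exactly the gap the per-channel block button leaves open.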
KaiserPro: I think addiction is a red herring. Pokemon is addictive; computer games are addictive. It's whether they are knowingly causing harm, and/or avoiding attempts to stop that harm.
bknight1983: When you put something out there, there's a question of ownership for how people end up using it.

- Some think that "if you use it incorrectly, it's your fault" and probably agree with the statement that Palantir is not evil software and that one must "change the administration".
- Some think that "if you use it incorrectly, it's the creator's fault" and then you have safety labels on everything (see Prop 65).

It's a spectrum of risk between the user and the creator. My opinion is that there's enough scientific evidence to show that social media has a negative impact on kids and teenagers, as their brains are still developing. I think a social media ban for kids is a good thing (similar to a driver's license or a drinking age).
KaiserPro: I think there is a fourth portion that is probably more important: actively ignoring harm caused by your product.

TV/radio has sold attention, but there were pretty strict rules on what you can/can't broadcast, and to whom (ignoring cable for the moment). It's the same for services: things that knowingly encourage damaging behaviours are liable for prosecution.
_kidlike: my thoughts exactly... this "verdict" came with very suspicious timing.
heyitsaamir: Bluesky does this. In fact, the For You algorithm is a community built algorithm and way more popular than the native Discover algo.
SecretDreams: Because most are just nowhere near as good and effective at ruining a kid's mind as Meta. If others were as good as Meta at destroying whole generations of cognitive development, they'd probably also be liable.
ceejayoz: > How’s this different than tv that a kid might see that has ads and programming targeting kids?Those ads didn't adjust themselves on a per-child basis to their exact interests.