A California jury found Meta (parent of Facebook and Instagram) and Google (parent of YouTube) liable for negligence and failure to warn in a landmark social media addiction case, awarding a plaintiff identified in court documents as "Kaley" $6 million in damages: $3 million compensatory and $3 million punitive, with Meta bearing 70 percent of the total liability. The verdict is significant not for its dollar amount, which is modest next to the companies' market capitalizations, but for the legal theory the jury accepted: that social media platforms can be held liable as manufacturers of defective products based on the design of their addictive features, rather than being shielded by Section 230's traditional protection against liability for user-generated content. NPR called the verdict a "turning point for Silicon Valley" in its Up First newsletter, and Fox News featured it prominently on its tech page, where legal analyst Jonathan Turley called it "clearly challengeable."

The features characterized as "defective products" at trial include infinite scroll, the design that eliminates natural stopping points in content feeds and deprives users of the completion cues that would prompt them to put a device down; algorithmic recommendations that prioritize engagement-maximizing content regardless of its psychological effect on users; and beauty filters on Instagram and Facebook, which research has linked to body image disorders in adolescent girls. The plaintiff's legal team argued these features were not passive conduits for user-generated content but active product design choices that knowingly created compulsive engagement patterns, making them product liability questions rather than speech liability questions, and thus beyond the reach of Section 230's shield.

Section 230 of the Communications Decency Act generally shields online platforms from liability for content posted by their users, a protection that has allowed the internet to develop without every platform being held responsible for every user post. The California jury's verdict, if upheld on appeal, would establish that platforms can be held liable for their own design choices that cause harm even where Section 230 bars content-based claims. Legal experts across the political spectrum consider the distinction significant: Justice Clarence Thomas has written separately in prior cases questioning whether Section 230 has been interpreted more broadly than Congress intended, and both progressive advocates for platform accountability and conservative critics of social media's cultural effects have called for narrowing the shield. Turley's counterargument, made on Fox News, is that the verdict is "clearly challengeable" precisely because the line between design and content is blurry: an algorithm that recommends content is at once a design feature and a content curation decision.

On the same day, a separate New Mexico jury ordered Meta to pay $375 million for inadequate child safety protections, a verdict arising from different legal theories but reinforcing the same trend: growing jury willingness to hold platforms financially accountable for harms to users. Together, the two verdicts in a single day represent the most significant legal setback for social media companies since Section 230 was enacted. Congress, meanwhile, is debating revisions to Section 230 and considering legislation that would require algorithmic disclosure and design safeguards for platforms serving minors. With the left concerned about platform harm to mental health and the right concerned about platform power, social media liability has become a rare area of genuine bipartisan policy interest.