Social media companies have spent a decade arguing they’re not publishers — courts are now asking whether they’re something more dangerous: product designers whose choices cause measurable harm

  • Tension: Tech companies built their legal identity around being neutral conduits for speech — courts are now forcing them to reckon with the architecture they built beneath that speech.
  • Noise: Section 230 has been treated as settled law, an immovable ceiling above which tech accountability could not reach — a consensus that obscured a growing crack in the foundation.
  • Direct Message: The same argument that shielded social media for thirty years — “we don’t make the content” — has become the argument that exposes them: but you did make the machine.

To learn more about our editorial approach, explore The Direct Message methodology.

On March 25, 2026, a Los Angeles jury did something that lawyers, regulators, and advocates had been trying to engineer for years. It found Meta and Google liable for the depression of a young woman who said she became addicted to Instagram and YouTube as a child. The combined damages: $6 million. The day before, a New Mexico jury had ordered Meta to pay $375 million for misleading users about product safety and enabling the exploitation of children on its platforms. Two juries, two states, one week. The legal architecture that has protected the social media industry for three decades did not collapse — but it cracked in ways that now require explanation.

The question these verdicts raise is not really about Meta or Google. It is about a much older and more consequential argument: what, exactly, is a social media platform? Is it a publisher? A utility? A town square? The industry has spent years insisting on the answer that suited it best. Courts are now asking a different question entirely. Not what these platforms publish — but what they build.

The Identity They Chose, and the One Being Assigned

For most of its history, the social media industry has operated under a carefully maintained self-conception: it is a neutral host. A pipeline, not a producer. It carries the speech of its users; it does not generate it. This identity was not only commercially convenient — it was legally essential. Section 230 of the Communications Decency Act, passed in 1996 when the web was in its adolescence, generally protects online platforms from liability over content posted by their users. For thirty years, that protection functioned as a near-total liability shield, and the industry defended it with the fervor of companies who understood, correctly, that without it their business model would face existential exposure.

The legal strategy worked, repeatedly. Lawsuits brought by parents of children harmed by content on social platforms were routinely dismissed at the threshold stage, with courts finding Section 230 foreclosed the claims. The shield held. Then plaintiff attorneys noticed something. The shield protected platforms from liability for what users posted. It said nothing, explicitly, about how platforms were designed.

In the Los Angeles case, lawyers argued that features like infinite scroll, constant notifications, autoplaying videos, and algorithmic beauty filters made apps like Instagram and YouTube function as a kind of digital casino — environments engineered to be compulsive, especially for young users. The argument reframed the entire lawsuit. This was not a case about what a stranger posted. It was a case about what the company built, and why it was built that way. By shifting the target from content to architecture, the plaintiffs found a route around the Section 230 barrier, and the judges let both cases proceed to trial.

What emerged in the courtroom was a portrait of companies whose internal research, according to evidence presented at trial, documented the risks their platforms posed to younger users — and who proceeded anyway. When Meta CEO Mark Zuckerberg testified, he told the jury that protecting young users has always been a company priority. The jury did not find his account persuasive.

When I look at how media narratives around tech accountability have evolved over the past decade, what strikes me is how successfully the industry managed to keep the policy conversation focused on content moderation — on what gets removed, what stays up, who decides — while the more structurally significant question, about how these systems were engineered to capture and hold attention, stayed largely out of public frame. These verdicts are, in part, a correction to that distortion.

The Shield That Stopped Being Invisible

Section 230 is one of the most consequential laws most people have never heard of. Its defenders, and they are numerous and often credible, argue that without it the open internet as it currently exists would be impossible to operate: no platform could afford to host user content if it faced unlimited liability for whatever users posted. This is not a trivial argument. It reflects a genuine tension between accountability and the practical conditions of operating at scale.

But the conventional wisdom that Section 230 represents settled, stable, and broadly legitimate protection has been quietly eroding for years — a process these verdicts now accelerate. According to Reuters, legal experts have noted that courts have been moving toward a narrower reading of the law’s liability shield. Several lower courts have already ruled that platform design choices fall outside Section 230’s protections — though no appellate court has yet issued a binding ruling to that effect.

That distinction matters. Trial court decisions, including jury verdicts, do not bind other courts. What these verdicts do is validate a legal theory: that companies can be held liable not for what users say on their platforms, but for how the platforms are constructed. Gregory Dickinson, an assistant professor at the University of Nebraska College of Law who studies the intersection of technology and law, described the emerging pattern clearly: courts are distinguishing claims about platform functionality from claims that would simply impose liability for third-party speech. The former is increasingly understood as a corporate design decision. The latter is what Section 230 was always meant to address.

The broader numbers are striking. More than 2,400 cases against social media companies have been centralized before a single federal judge in California, while thousands more are consolidated in California state court. Meta, Google, Snap, and TikTok parent ByteDance are all named in litigation arguing that their design choices produced a measurable mental health crisis among teenagers and young adults. The Los Angeles and New Mexico cases are the first to reach a jury verdict — described in legal circles as bellwether trials, test cases whose outcomes signal what may lie ahead.

Eric Goldman, co-director of the High Tech Law Institute at Santa Clara University School of Law, offered a framing that extends the stakes even further: “I think the internet is on trial, not social media.” His point is that the design-defect theory, if upheld by appellate courts, would reach well beyond Instagram and YouTube. More than 130 lawsuits are already pending in federal court against Roblox, the gaming platform, over alleged failures to protect child users from exploitation.

The Argument That Cuts Both Ways

The legal defense that protected social media for thirty years — “we are not responsible for what users create” — has become the precise ground on which the companies are now being held responsible. It was never the content. It was always the design.

There is something almost structurally elegant about the legal logic that has finally found traction in American courts. Social media companies spent years arguing, successfully, that they were not publishers. They did not make editorial decisions about content; users did. They were the envelope, not the letter. That argument was designed to deflect liability. What it inadvertently conceded was agency over everything else — over the architecture, the algorithms, the behavioral levers built into the product at the engineering level. If you are not a publisher, you are something else. And that something else, courts are now exploring, might be a manufacturer.

The manufacturer analogy is not new — it was developed deliberately by plaintiff attorneys as a strategic framework, and it draws explicitly on the legal history of tobacco litigation. In the 1990s, a similar reframing shifted the target from individual cigarette smokers to the companies that designed addictive products and withheld knowledge of their risks. The social media litigation is not identical, but the structural logic rhymes: the harm is not incidental to the product; it may be intrinsic to the design.

Both Meta and Google have said they will appeal. Their appeals will almost certainly center on Section 230, and the outcomes will be decided at the appellate level — courts whose rulings actually bind other courts and jurisdictions. The U.S. Supreme Court, which heard a Section 230 challenge involving YouTube in 2023 and ultimately declined to rule on the core question, may eventually be pulled back in. Two conservative justices, Clarence Thomas and Neil Gorsuch, dissented when the Court declined a related case in 2024, writing that social media platforms had increasingly used Section 230 as a “get-out-of-jail-free card.” That language — from the Court’s own bench — suggests the legal settlement around Section 230 is less stable than the industry has wanted to believe.

What the verdicts of late March 2026 represent, pending those appeals, is a recalibration of where the argument lives. For decades, the question was whether platforms could be held accountable for content. That question has largely been answered in the negative, and the legal machinery of Section 230 made it so. The new question — whether platforms can be held accountable for the products they built, the choices they made in engineering those products, and the harms those choices foreseeably produced — is now, formally, open.

Social media companies chose their identity carefully. They chose to be infrastructure, not media. Neutral, not editorial. They built that identity into their legal strategy, their lobbying, and their public communications for thirty years. What courts are now examining is whether that identity was ever the full picture — or whether it was, at least in part, a design decision of its own.
