A jury just found Meta liable on every count. Here's why this one is different.
By Daily Direct Team · 25 March 2026
Juries have found against tech companies before. Fines have been levied, settlements reached, apologies issued. The companies have paid, updated their terms of service, and continued operating largely as before.
Tuesday's verdict in New Mexico may be different — not because of the dollar amount, though $375 million is significant, but because of the structure of the finding.
The jury found Meta liable on every single count. They identified 75,000 individual violations of state consumer protection law, and assessed the maximum $5,000 penalty for each one — $375 million in total. They found that Meta had "unconscionably" exploited children's vulnerabilities — not that the company had been careless, or that its systems had been misused, but that it had deliberately designed products in ways it knew caused harm, and done so anyway.
That is a different kind of verdict. That is a jury saying: you knew, and you chose.
What the case was actually about
The New Mexico lawsuit centred on Meta's platforms — primarily Instagram — and the ways their design choices systematically targeted, retained, and exposed minors to harmful content and predatory contact.
The state's argument was not that bad things happened on Meta's platforms despite the company's best efforts. It was that Meta's best efforts were directed at keeping children on the platform as long as possible, and that the company's own internal research — research it suppressed — confirmed those efforts were causing measurable psychological harm. The argument was that Meta's products were not neutral tools that could be misused; they were engineered for engagement in ways that were specifically harmful to developing minds.
The jury agreed. Every count. Maximum penalty.
This matters beyond New Mexico because it establishes a legal frame — deliberate exploitation, not negligent oversight — that other states and other courts can now build on. There are active lawsuits against Meta in dozens of jurisdictions. This verdict gives them a template.
The Section 230 wall is cracking
For most of the internet's existence, technology companies have been shielded from liability for content on their platforms by Section 230 of the Communications Decency Act. The law was written in 1996, when the web was a novelty and its architects couldn't have imagined what would eventually run on it. It protected platforms from being treated as publishers, and that protection allowed the modern internet to develop without constant litigation.
The New Mexico case did not pierce Section 230 directly. It went around it. By framing the lawsuit as a consumer protection case — Meta engaged in unconscionable trade practices by misleading users about product safety — the state found a legal pathway that doesn't require overturning federal statute.
This is the approach that may ultimately reshape how tech companies are held accountable. Not a frontal assault on Section 230, which has repeatedly failed in Congress, but state-level consumer protection claims that treat harmful design choices as deceptive business practices. If other states follow New Mexico's framework, the liability exposure for tech companies gets dramatically larger.
The same week OpenAI killed Sora
The Meta verdict didn't land in isolation. Tuesday also brought the news that OpenAI is shutting down Sora, its AI video generation app — a product that was positioned just months ago as a flagship demonstration of what generative AI could do.
The official reason is strategic focus: OpenAI is pulling back to concentrate on business and productivity tools. But the timing and the subtext matter. Sora was pulled partly because of deepfake concerns — the same category of harm that the Meta verdict is forcing the industry to confront more seriously. Building tools that can generate convincing synthetic video at scale, in a legal environment that is increasingly asking who is responsible for the damage that causes, is becoming a different kind of risk calculation than it was a year ago.
The legal environment is catching up to the technology, slowly and imperfectly. But it is catching up.
Amazon bought a company that makes child-sized robots
Also on Tuesday: Amazon acquired Fauna Robotics, a startup that builds child-sized humanoid robots. It is the company's second robotics acquisition this month.
This detail sits slightly uncomfortably alongside the Meta verdict, though the connection is indirect. What the pairing illustrates is the breadth of the technological transformation underway and the relative infancy of the accountability frameworks governing it.
Meta's platforms were designed and deployed over two decades before a single jury found the company liable for the harm they caused to children. The legal and regulatory infrastructure for AI-generated content, humanoid robotics, and whatever comes next is being built — if it is being built at all — while the technology is already in deployment. The lesson of the Meta verdict is that the gap between what technology can do and what accountability structures exist to govern it tends to close painfully, slowly, and in courtrooms rather than legislatures.
What $375 million actually means for Meta
To put the number in context: Meta generated approximately $165 billion in revenue last year. The $375 million verdict represents roughly 0.2% of that figure — less than a rounding error at the scale the company operates.
If the verdict stands and is not reduced on appeal, it will not fundamentally alter Meta's economics. What it changes is the risk calculus. This verdict will be cited in every state-level lawsuit now in progress. It will influence how those cases are framed, how juries are instructed, and what plaintiffs' lawyers believe they can win. The multiplication effect of dozens of similar cases, each referencing Tuesday's finding, is where the real financial exposure lies.
Meta has already indicated it will appeal. The company disputes the finding and the damages calculation. This will take years to fully resolve.
But in the meantime, a jury of ordinary citizens looked at the evidence of what Meta built, how it was built, and who it was built for — and found the company liable on every count.
That is a sentence that did not exist before Tuesday. It exists now.
The social media ban wave and where it's heading
Tuesday's verdict arrives as governments around the world are independently reaching similar conclusions through the legislative pathway.
Australia's under-16 social media ban is in effect and being actively debated. The UK government this week announced trials of social media bans and digital curfews for teenagers. Canada's Liberal Party is putting minimum age restrictions for social media and AI chatbots on its national convention agenda. New Mexico's jury verdict is the courtroom expression of a political consensus that has been building for years.
The question is whether legal liability and legislative restriction will actually change how these platforms are designed — or whether they will produce compliance theatre while the core engagement mechanics that drove the harm remain intact. The cynical read of Meta's history is that the company has paid fines, promised change, and continued doing what works for revenue. The optimistic read is that eventually the liability exposure becomes large enough to make the calculus change.
Tuesday moved that dial. Not far enough to be definitive. But in the right direction.
Daily Direct covers the stories that connect — across technology, health, policy, and more — every morning. Subscribe here.