INV Group

Paul Zimmerman

INV Group Chief Communications Officer

26 March 2026

This article was originally posted on LinkedIn on 26 March 2026.

Image: Ellen Roome, pictured head and shoulders during a BBC Breakfast interview, with the screens of the BBC newsroom behind her.

Yesterday’s news that Meta and Google’s YouTube were found liable by a California jury for designing platforms in ways that harmed a 20-year-old woman’s mental health when she was a child marks a watershed moment in safety and ethics for big tech. It followed the previous day’s news that Meta had been hit with a $375 million penalty by a New Mexico jury over findings that its platforms endangered children and exposed them to sexual exploitation risks. These are not minor legal bumps. They feel much more like the early signs of a broader reckoning.

This is just the beginning. On BBC Breakfast this morning, Ellen Roome MBE – an eloquent and passionate campaigner on this subject, who tragically lost her only child when he became deeply immersed in the dark side of social media as a teenager – was asked whether taking on big tech companies is a daunting challenge.

She replied: “Not when you’ve lost a child, no. You put a whole group of bereaved parents in front of people, and we don’t care. We’ve lost the most important person in our world and we want to stop that happening to other people. No, it doesn’t scare me at all. I’m not terrified, I’m not nervous. Bring it on.”

Big tech has a lot to answer for here. Those of us who have spent time around product development know that, for years, technology firms have wanted to make their apps and services “sticky”. In most cases, that has been a good thing. Better engagement often means better usability, better discovery and better outcomes for users.

But over time, it has become increasingly apparent that some firms have gone well beyond designing for usefulness or habit. Too often, the incentives have drifted towards maximising attention, ad revenue and shareholder value, even where that has meant exposing vulnerable or impressionable users to genuine harm. That is where the line gets crossed. [Full disclosure: I have been a long-term investor in both Meta and Alphabet (the parent company of Google), and will likely remain so.]

There is a very good book on this subject, Hooked, written in 2014 by Nir Eyal. It is often cited in product circles because it lays out, quite openly, how products can be designed to change behaviour and build habitual use. In Hooked, Eyal wrote:

You may be asking, “When is it wrong to manipulate users?” Manipulation is an experience crafted to change behaviour — we all know what it feels like. We’re uncomfortable when we sense someone is trying to make us do something we wouldn’t do otherwise, like when sitting through a car salesman’s spiel or hearing a timeshare presentation. Yet, manipulation doesn’t always have a negative connotation. If it did, how could we explain the numerous multi-billion-dollar industries that rely heavily on users being willingly manipulated?

Eyal’s important point is that behavioural design is not inherently bad. Sometimes it can be used for clearly positive ends. He provides the example of Weight Watchers: a service that encourages behaviours people actively want to adopt, for outcomes they themselves value.

That distinction matters. Designing for habit is not automatically unethical. Designing without sufficient regard for harm is.

And that is why these jury decisions matter so much. They suggest that courts are becoming more willing to look past the old assumptions of tech exceptionalism and ask more basic questions: what exactly were these systems designed to do, what did the companies know about the consequences, and what did they choose not to change? In both recent cases, the legal scrutiny turned in large part on design, safety and corporate knowledge rather than simply on user-generated content. That is a very important shift.

Now bring AI into the picture, and the stakes rise again.

We are moving into a world where AI will influence, filter, rank, recommend and generate more of what people see. Increasingly, agentic systems will not just suggest content, but take actions, make decisions, escalate tasks and shape user journeys on our behalf. That creates huge opportunities, but also much greater responsibility.

This is why safety, ethics and governance have to be front and centre in the design of AI and agentic systems. It is also why firms such as Anthropic have been so vocal about model safety and guardrails, and why public bodies such as the UK AI Security Institute exist to evaluate frontier models and test for dangerous capabilities, including misuse risks in areas such as cyber and chemical or biological harm.

The lesson from social media should be clear by now: when powerful technologies are designed without enough regard for safety, governance and human consequences, the damage does not stay theoretical for long. As we enter the age of AI and agentic systems, we have a choice. We can repeat those mistakes at greater speed and scale, or we can build differently this time, with safety, ethics and governance embedded from day one. That should not be optional. It should be the standard.