Flagging “bad” content cannot make social media safer

Social media platforms get in public hot water these days for the way they recommend material to their users. Ysabel Gerrard and Tarleton Gillespie wrote an article in Wired called “When Algorithms Think You Want to Die.” They described how Pinterest and other platforms, through their recommendations, amplify people’s exposure to potentially dangerous material. The mechanism is straightforward. When a person views some information about self-harm, for example, the platform recommends more of the same. The platform doesn’t know whether the viewer is depressed, or a social worker or guidance counselor trying to get educated. It just favors information on the same topic.
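
To make the mechanism concrete, here is a minimal sketch, in Python, of that kind of context-blind, topic-only recommender. The catalog, topic tags, and engagement scores are all invented for illustration; real platforms use far richer signals, but the blind spot is the same: nothing in the loop asks who the viewer is.

```python
# Hypothetical catalog: topic tags and engagement scores are made up.
ITEMS = [
    {"id": 1, "topic": "self-harm", "engagement": 0.92},
    {"id": 2, "topic": "self-harm", "engagement": 0.85},
    {"id": 3, "topic": "gardening", "engagement": 0.40},
]

def recommend(viewed_item, catalog, k=2):
    """Return the k most 'engaging' items on the same topic.

    Note what is missing: nothing here asks who the viewer is or why
    they looked at the item in the first place.
    """
    same_topic = [item for item in catalog
                  if item["topic"] == viewed_item["topic"]
                  and item["id"] != viewed_item["id"]]
    return sorted(same_topic, key=lambda item: item["engagement"], reverse=True)[:k]

print(recommend(ITEMS[0], ITEMS))  # more self-harm content, regardless of viewer
```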

Why is getting these recommendations right so difficult? Because making a recommendation is a moral decision. (Wait, stick around, let me explain what I mean.)

We make moral decisions in our lives based on our personal ethical frameworks. Our frameworks can be modeled roughly as a combination of three ways of making decisions — of three ethical stances.

  1. rule-following (do not show images of self-harm to anybody ever). This is the easy one.
  2. utilitarianism (the greatest good for the greatest number). This requires some sort of global utility function.
  3. responsibility. This implies a relationship between the parties to the moral decision.

My baby is hungry, I need to feed him. Should I steal this food? I know there’s a rule against stealing.  I know the supermarket would go broke if everybody stole, then everybody would be hungry. But, hungry baby.  We all make moral decisions without first hiring consulting philosophers to give us advice. We have our own personal ethical frameworks, built into our minds by our upbringing and experience. Each of us can, faced with such a hard moral decision, do the most right (least wrong) thing, according to our personal ethical frameworks.

In most commercial social media, utilitarianism is the default ethical framework. The utility function is “engagement” — the amount of time we spend using the network. Engagement in turn drives up advertising revenue. Where money is involved, people game the utility function. Social media platforms spark outrage when the utility function runs amok and amplifies particular users’ exposure to bad material. When the material is images of teenagers harming themselves and the user is a vulnerable teenager it’s easy to assign blame. Bad utility function! Bad social media platform! Bad! Bad!
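
Here is a toy rendering of that utility function, under my own assumption that “engagement” is simply predicted minutes spent; none of the names or numbers come from any platform. The point is that the objective sums attention across everyone and contains no notion of benefit or harm to any particular viewer.

```python
# Hypothetical sketch: rank by predicted engagement, measure success in minutes.

def rank_feed(candidate_posts, predicted_minutes):
    """Order posts by predicted engagement, highest first."""
    return sorted(candidate_posts, key=lambda p: predicted_minutes[p], reverse=True)

def platform_utility(sessions):
    """'The greatest good for the greatest number,' measured only in minutes watched."""
    return sum(minutes for _user, minutes in sessions)

print(rank_feed(["post_a", "post_b"], {"post_a": 3.0, "post_b": 11.0}))
# ['post_b', 'post_a'] -- whatever holds attention longest comes first

print(platform_utility([("alice", 34.0), ("bob", 7.5), ("carol", 120.0)]))
# 161.5 -- the single number the platform optimizes
```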

The problem is this: the harm is a matter of responsibility ethics. It depends on knowledge about the particular user. What would happen if this user went to a public library and asked a librarian to recommend books on suicide? The librarian would assess whether the user was a social worker, a worried parent, or an at-risk teenager, and respond accordingly. But most social media recommendations don’t work that way.

The platform companies use rule-following approaches to try to fix the problem. “Ban offending content!” They’re driven to do that partly because it’s the cheapest solution, and partly because public disapproval gets expressed as “there oughta be a law” rather than “be nice to people.” So they refine their classification algorithms to flag certain content. Inevitably that doesn’t work very well. Facing outrage, they hire up so they can brag about how many people they’re throwing at their rule-following efforts. They try to detect “outlawed” material, and then ban or demonetize it.
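
Here is a sketch of what that rule-following fix looks like in practice, with a placeholder classifier and a threshold of my own invention: content that trips the rule is removed for everyone, whether the person looking for it is an at-risk teenager, a worried parent, or a social worker doing research.

```python
# Hypothetical rule-following moderation: classify, compare to a threshold, act.
BANNED_TOPICS = {"self-harm", "graphic-violence"}
THRESHOLD = 0.8

def classify(post_text):
    """Stand-in for a real content classifier; returns (label, confidence)."""
    if "self-harm" in post_text.lower():
        return "self-harm", 0.9
    return "other", 0.1

def moderate(post_text):
    label, confidence = classify(post_text)
    if label in BANNED_TOPICS and confidence >= THRESHOLD:
        return "remove"  # same outcome for every viewer, whatever their reason
    return "allow"

print(moderate("Resources for parents worried about self-harm"))  # "remove"
```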

But they’re trying to fix a mass number of relationship-based ethical problems with a rule-following approach. That Doesn’t Work™. Hence the ongoing problems and frustrations. Diagnosing the systemic problem by critiquing the utility function (“greed for ad money”) is just true enough to make us lazy. The deeper problem stems from using the wrong ethical framework to address the problem.

In 1964, in the halcyon days before US Supreme Court judges were political hacks, Justice Potter Stewart famously wrote

I shall not today attempt further to define the kinds of material I understand to be embraced within that shorthand description [“hard-core pornography”]; and perhaps I could never succeed in intelligibly doing so. But I know it when I see it, and the motion picture involved in this case is not that.

That is a relationship-based solution to an ethical problem of undesirable material. It’s disguised as a rule-following solution. It concisely states the issue and implicitly shows its complexity. Justice Stewart made a good decision for himself and possibly the members of his own family. And he surely did his best to follow Kant’s rule of making personal ethical decisions by trying to consider whether they would work for everybody. That was his job as a Supreme Court judge. But, in reality everybody who sees any motion picture must exercise the same judgement. We all must be able to recognize bad material and resist it.

But when there are 10^9 (a billion) people on social media, there are potentially on the order of 10^18 relationships, each of which could be helpful or harmful. That is too many potential relationships to consider algorithmically. Even if the number of relationships that matter is closer to O(n log n) than O(n^2), it is still prohibitively large. At such scale, ethical decision making must necessarily be distributed, not centralized. People as smart as Silicon Valley bigshots and Supreme Court judges simply cannot make those decisions in a centralized way for us all.
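
A quick back-of-the-envelope check on those numbers: with a billion users, the count of unordered pairs n(n-1)/2 is about 5 × 10^17, order 10^18, and even a much sparser O(n log n) interaction graph still has roughly thirty billion edges.

```python
import math

n = 1_000_000_000               # ~10^9 users
pairs = n * (n - 1) // 2        # every possible one-to-one relationship
sparse = int(n * math.log2(n))  # a far sparser n log n interaction graph

print(f"{pairs:.2e}")   # ~5.00e+17, i.e. on the order of 10^18
print(f"{sparse:.2e}")  # ~2.99e+10, still about thirty billion edges
```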

My point is this: rule-following about undesirable material does not, and can never, solve the problem of amplification of undesirable material. It takes relationship-building. Good recommendations require a sense of responsibility to the viewer.

Jeff Atwood’s and Joel Spolsky’s Stack Exchange social media platform has an instructive approach to the relationship-building problem. Every interaction starts by creating or reinforcing an empathic relationship. Users ask questions, and other users answer them. Of course some of the questions and answers are low quality or harmful. Stack Exchange addresses that problem by crowdsourcing the curation and moderation of its content (with upvotes, downvotes, comments, and flags). But reciprocal relationships and mutual responsibility are core motivators for the platform’s users. Even their monetization scheme involves relationship-building: they get paid when their users find jobs with their sponsors. In 2018 they responded to a proliferation of toxic behavior by strengthening their longstanding “be nice” policy into a formal Code of Conduct. That code encourages empathy and relationship-building.
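
As an illustration only (the thresholds and fields below are made up, not Stack Exchange’s actual rules), crowdsourced curation can be sketched like this: visibility and ranking are decided by the peers in the conversation rather than by a central classifier.

```python
from dataclasses import dataclass

@dataclass
class Post:
    body: str
    upvotes: int = 0
    downvotes: int = 0
    flags: int = 0

    @property
    def score(self):
        return self.upvotes - self.downvotes

def visible(post, flag_threshold=6):
    """A post disappears only when enough peers flag it, not when a rule fires."""
    return post.flags < flag_threshold

def ranked(posts):
    """Answers rise or fall on the judgment of the people they were written for."""
    return sorted((p for p in posts if visible(p)), key=lambda p: p.score, reverse=True)

posts = [Post("helpful answer", upvotes=12, downvotes=1),
         Post("cruel pile-on", downvotes=8, flags=7)]
print([p.body for p in ranked(posts)])  # ['helpful answer'] -- the flagged post is hidden
```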

Facebook and Pinterest must in-source their regulation of unwanted material. They must make it responsible, even though that is hard in their business, because their platforms don’t encourage relationships between posters and viewers. I post stuff, and you all look at it. The more of you who look at my stuff, the more the platform gets paid. The platform owns the relationships. Stack Overflow starts with empathic one-to-one relationships: I ask for help, and you help me. Others may eavesdrop on our conversation and learn from it.

Major world religions promote healthy relationship ethics. Judaism and Christianity teach “Love your neighbor as yourself.” Buddhism teaches the “Right View,” that actions have consequences. That’s no accident: ethical frameworks based on relationships have stood the test of deep time. To survive, social media platforms need to get with the program.

(Please, please, if you are considering hurting yourself, call the Suicide Prevention Lifeline at +1.800.273.8255. Somebody will listen to you.)
