

Why We Should Treat Algorithms As Authors

Should social media companies continue to enjoy near-blanket protection from liability for content on their sites? It is an increasingly controversial question, and it exposes a classic legal mismatch: Section 230 of the U.S. Communications Decency Act of 1996, which grants those protections, was never written with social networks in mind. Meanwhile, we need to hold the social networks accountable, and there may be a creative way to amend this law to do it.

What Section 230 Didn’t See Coming

Section 230 was born in the era of chat rooms and America Online, and it was designed in large part to shield AOL from liability if people said bad things in chats. The act's authors never anticipated algorithms built to promote content consumption and to elevate content based on whether it drives engagement. But that is largely how sites like Facebook and Twitter work today. This new reality requires a mental shift in how we view social media companies and how the law might apply to them.

Few deny that social media algorithms can fan hatred and promote extremist views, even contributing to tragedies like the one we saw at the U.S. Capitol on January 6. Yet Facebook and Twitter are not held responsible for the content and intent expressed on their platforms. Anyone who posts there remains liable for anything they say that is illegal or libelous, but the platform itself is immune, because of Section 230.

But today, the social networks can and should be viewed as the de facto creators of this content. That is because they write the algorithms that curate users' feeds and determine what we see. They often elevate the most inflammatory views to give them maximum exposure; their central business goal is to generate page views and user attention. If the algorithms of the social media companies determine what a user sees, I believe the companies should be treated as the authors and editors of the news feed's content, and held responsible for that curation.

The Early Success of AOL

AOL may have defined the consumer internet of its day. The reality, however, is that most of what people did there was communicate: sending email, exchanging instant messages, and posting on message boards and in chat rooms. Communication was so dominant that at one point AOL founder Steve Case thought the company's entire future was as a communications company, and he joined the board of directors of a telecom to learn more.

After the passage of Section 230, AOL could not be sued or charged if someone misbehaved in a chat room. One of the key ways AOL won the battle with its well-heeled competitor Prodigy was by allowing more topics in chat rooms. 

AOL had an army of volunteer chat room moderators alongside a paid staff, and the volunteers' only compensation was free service. Because they were volunteers, they could not be tightly managed, so another part of Section 230, written with the company in mind, said that AOL or any other provider could not be sued or charged for failures of moderation. Straightforward. Difficult to oppose.


Are Algorithms Authors?

With almost three billion users, Facebook has a wealth of content to organize. In the beginning, Adam D'Angelo, a brilliant friend and then-colleague of CEO Mark Zuckerberg, figured out how to make vast amounts of content simultaneously available to users. But that created a problem: How do you choose what each user should see?

The solution was an algorithm: rules implemented in software that choose which content gets in front of which user. Algorithms have no soul. If told to maximize clicks, they will maximize clicks as effectively as they can. Algorithms are amoral. Balance and reason are not generally favored by an algorithm, though someone could try to program one that way.

As many observers have noted, fear and anger are what most keep social media users engaged. Emotion matters. In addition, research suggests that angry people show less ability to determine truth from fiction.

An algorithm itself will never be emotional. Instead, it will rationally evaluate what makes you angry and emotional, if that is what keeps you engaged and engagement is the goal it was programmed to pursue.
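To make the point concrete, here is a minimal sketch, in Python, of what an engagement-driven ranking rule can look like. Everything in it is a hypothetical stand-in rather than Facebook's actual system: the Post fields and the predicted_engagement score are invented for illustration. What it shows is that a single sort criterion is the only editorial judgment the code makes.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str
    predicted_engagement: float  # hypothetical model score: expected clicks/reactions

def rank_feed(posts: list[Post]) -> list[Post]:
    """Order posts so the content most likely to provoke a reaction comes first."""
    # This sort is the entire "editorial" judgment: nothing here weighs
    # accuracy, balance, or harm -- only the objective the code was given.
    return sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)

if __name__ == "__main__":
    feed = rank_feed([
        Post("calm_take", "A measured policy analysis", predicted_engagement=0.12),
        Post("outrage_now", "You won't BELIEVE what they did!", predicted_engagement=0.87),
    ])
    for post in feed:
        print(f"{post.predicted_engagement:.2f}  {post.author}: {post.text}")
```

Swap the sort key for something else, such as recency or accuracy, and this "author" makes entirely different choices about what its readers see.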

Under the law, though, what is an algorithm, especially a sophisticated one like those Facebook uses to order its news feed? I believe the answer is simple: An algorithm is an author and should be treated as one. Indeed, in other venues, the big tech companies argue that what algorithms produce is protected by copyright, which of course suggests that the algorithm, or its creator, is the author of the resulting content.

Treating algorithms as authors could have a profound effect, because authors are responsible for what they create. Section 230 does not protect them. And recently, U.S. Supreme Court Justice Clarence Thomas questioned in an official Supreme Court publication whether the courts had gone too far in extending the protections afforded by Section 230. 

If it were liable for the choices made by its algorithms, a company like Facebook would have to operate differently. If it could be sued when someone's news feed surfaced an incendiary piece of content, such as an incitement to violence or certain kinds of illegal hate speech, the company would have no choice but to find ways to avoid displaying that content.

If the U.S. Congress or the courts declared that provider-programmed algorithms were authors and thus accountable for their content, that would resolve many of the problems that plague social media. 

A version of this piece originally appeared in Techonomy.

William Raduchel

Strategic Advisor @rnomad

William J. Raduchel is a strategic advisor on technology and media, an independent director, and an angel investor. He has taught corporate strategy at the McDonough School of Business at Georgetown University for six years.
