
Content Moderation Through Design

Every day, urban design impacts our lives almost imperceptibly. While some design approaches are more obvious, such as speed bumps that slow drivers down and security cameras that discourage theft, many design choices affect human behavior without conscious awareness. For example, restaurant chairs are often deliberately uncomfortable to encourage faster dining, and sidewalk tiling can influence pedestrian walking patterns.

Similarly, digital design shapes our daily interactions with consumer technology, such as smartphones, mobile applications, and social media. However, as many lawmakers have noted over the past year, this isn’t always a good thing. Our smartphones are habit-forming, and social media platforms are under fire for harming mental health and spreading misinformation.

While these concerns often lead to calls for improved content moderation, many of the same design principles that make technology widely adopted can also improve safety at scale. In addition to employing content moderation teams, companies can use design to limit the creation of inaccurate, misleading, fake, or offensive content and to give users tools that help them navigate such content once it is already in circulation.

This type of design, often referred to as integrity design, already informs many of the platforms we know and interact with. In 2021, Twitter launched a feature that detects when a reply may be offensive or unkind and encourages the user to reconsider before posting by displaying a prompt asking, “Want to review this before Tweeting?” According to Twitter, 34% of people who see the prompt revise their initial reply or decide not to send it at all.
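To make the mechanism concrete, below is a minimal sketch of a prompt-before-posting flow. The `looks_unkind` check is a hypothetical stand-in; Twitter’s actual classifier, thresholds, and prompt logic are not public.

```python
def looks_unkind(text: str) -> bool:
    """Hypothetical stand-in for a toxicity classifier.
    Real platforms use trained models whose details are not published."""
    flagged_terms = {"idiot", "stupid", "loser"}
    return any(term in text.lower() for term in flagged_terms)


def submit_reply(text: str, confirm) -> bool:
    """Post a reply, but first ask the user to reconsider if it looks unkind.

    `confirm` is a callback that displays the prompt and returns True if the
    user still wants to post.
    """
    if looks_unkind(text) and not confirm("Want to review this before Tweeting?"):
        return False  # user chose to withdraw or rewrite the reply
    # ... send the reply to the platform here ...
    return True
```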

Building on such interventions, the NUDGE Act, introduced last February, would require the Federal Trade Commission to identify interventions that reduce the harm of algorithm-driven amplification. Unlike Twitter’s feature, however, these interventions would be content-agnostic: for example, a pop-up asking users whether they want to read an article before sharing it, regardless of what the article says.

Rather than using pop-ups, many forums rely on reputation systems that encourage users to monitor themselves. On Reddit, for example, any user can upvote or downvote posts, allowing authors whose posts receive upvotes to accumulate “karma.” That karma score, visible to anyone who clicks the author’s username, reminds the individual to behave respectfully and make quality contributions.
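As an illustration only, a reputation tally of this kind reduces to a running sum of votes tied to a public profile. Reddit’s real scoring is more involved and includes anti-abuse weighting, so the sketch below is not its actual algorithm.

```python
from collections import defaultdict


class KarmaTracker:
    """Toy reputation tally: each upvote adds a point, each downvote removes one.
    Real systems apply additional weighting and anti-manipulation logic."""

    def __init__(self) -> None:
        self.karma = defaultdict(int)  # author -> accumulated score

    def vote(self, author: str, upvote: bool) -> None:
        self.karma[author] += 1 if upvote else -1

    def profile_karma(self, author: str) -> int:
        # Shown on the author's public profile, visible to anyone who clicks it.
        return self.karma[author]


tracker = KarmaTracker()
tracker.vote("example_user", upvote=True)
tracker.vote("example_user", upvote=True)
tracker.vote("example_user", upvote=False)
print(tracker.profile_karma("example_user"))  # 1
```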

Most well-known social media platforms that allow unmoderated posts (e.g., Twitter, Facebook, Instagram) also enforce “rate limits.” These limits discourage behaviors like spamming the same comment into multiple groups or sending an excessive number of direct messages by cutting off access to a feature once a user reaches a preset threshold.
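One simple way to picture a rate limit is a sliding-window counter: once a user exceeds a set number of actions within a given window, further attempts are rejected until older actions age out. The limits and window below are illustrative; the thresholds real platforms use vary by feature and are not published in full.

```python
import time
from collections import defaultdict, deque


class RateLimiter:
    """Sliding-window rate limiter: allow at most `max_actions` per user
    within any `window_seconds` span (illustrative values only)."""

    def __init__(self, max_actions: int = 20, window_seconds: float = 60.0) -> None:
        self.max_actions = max_actions
        self.window = window_seconds
        self.history = defaultdict(deque)  # user -> timestamps of recent actions

    def allow(self, user: str) -> bool:
        now = time.monotonic()
        events = self.history[user]
        # Drop actions that have aged out of the window.
        while events and now - events[0] > self.window:
            events.popleft()
        if len(events) >= self.max_actions:
            return False  # user has hit the limit; block this action
        events.append(now)
        return True


limiter = RateLimiter(max_actions=5, window_seconds=60)
for attempt in range(7):
    print(attempt, limiter.allow("spam_account"))  # attempts 5 and 6 are blocked
```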

In these examples, companies curb the supply of low-quality content by encouraging good behavior. Unfortunately, however, incentives do little when the information ecosystem rewards bad behavior.

To help users navigate misinformation and other low-quality posts, Meta began applying labels in Facebook’s News Feed. These labels help people identify whether a post comes from an official officeholder or from a fan or satire page, so users can make better decisions about the types of content they’d like to consume. Similarly, Meta applies informative overlays and labels across Facebook and Instagram that help reduce the risks of misinformation and graphic content. Nearly all social media companies also provide reporting functions that let users help enforce community guidelines.

Companies are also experimenting with letting users customize their experience by choosing which areas of an application they want to engage with. In 2018, Snapchat created a “Friends” page, where users see content from people they follow or are friends with, and a “Discover” page, which serves vetted publisher stories and content from influencers. Technology that flexibly allows for a customized experience is increasingly important for parents who need to set permissions for what their children can see on such sites. Draft legislation recognizes this need as well: the Kids Online Safety Act of 2022 would require any app used by kids 16 and younger to implement new controls for minors and their parents.

These are just a few common examples, and they barely scratch the surface of how design can improve self- and community-driven moderation on social media platforms. Facebook has over 2.9 billion users, and YouTube users upload roughly 500 hours of content every minute; the sheer volume makes moderating all inappropriate content nearly impossible. Through intentional design, companies have many tools to create and enforce community norms, improve user incentives, and limit the presence of malevolent actors to ensure safety.
