Social media companies still need to improve transparency around the content moderation requests they receive from governments in order to better protect users’ speech online, according to a Republican senator who has focused on technology policy issues during her tenure.

Sen. Cynthia Lummis, R-Wyo.—who sits on the influential Senate Commerce, Science and Transportation Committee—said during a virtual event hosted by the R Street Institute on Tuesday that it was “a time without precedent in history” for free speech and how governments regulate public speech, especially given “the government’s real-time interactions with companies that provide platforms for speech on the internet.”

Many Republican lawmakers have claimed in recent years that social media companies censor conservative users on their platforms, with government demands for moderation becoming a focal point of their concerns.

These complaints were further amplified by the recent release of the Twitter Files, internal documents from the social media giant that purported to show government involvement in its content moderation decisions. While the document releases have been highlighted in conservative circles as cause for concern, critics have widely derided them as proof of the banality and frequency of the content moderation requests that social media platforms such as Twitter receive daily.

To combat perceived online censorship of conservative voices, Republicans in Congress, and even in some states, have directed much of their anger at Section 230, the part of the 1996 Communications Decency Act that allows online platforms to host and moderate third-party content without being held responsible for what their users post or share. Section 230 has become a political flashpoint for both parties, with many Democrats worried it doesn’t do enough to limit the spread of harmful online content, and Republicans contending that it stifles freedom of expression.

Given the disparate political views around Section 230, Lummis seemed to advocate a more targeted approach to addressing concerns about content moderation requests from government entities. She said Congress currently has two options: “We can reform Section 230, which is the subject of much discussion here on Capitol Hill, or create stricter transparency requirements regarding government takedown demands.”

“And, at this point, increasing transparency around government requests for content moderation seems like the best move,” she added, saying Twitter users should know if and when the White House, for example, requests that specific content be removed from the platform.

Lummis cited the PRESERVE Online Speech Act, a bill she cosponsored in 2021, as a way to improve transparency around these types of moderation requests. The bill would require social media platforms “to post a public disclosure containing specified information related to a request or recommendation by a government entity that the service moderate content on its platform.” House Republicans introduced similar legislation at the start of the 118th Congress earlier this month.

“The government must be very careful about how it interferes in the regulation of social media platforms, so as not to stifle free speech,” Lummis added.

Some social media platforms are already working to publicly disclose the types of content moderation requests they receive from governments around the world. Meta, the parent company of Facebook, Instagram and WhatsApp, maintains a transparency report on government requests for user data, which shows that governments have collectively filed hundreds of thousands of moderation requests with the company each year, including more than 237,000 requests from January to June 2022 alone.

Kaitlin Sullivan, director of content policy at Meta and a panelist at the R Street Institute event, said the company strives to notify users “in almost all cases when their content is removed for violating our community standards,” as well as when “their content has been removed or restricted in a particular jurisdiction based on an official government report, which is generally that such content violates local law.”

But Sullivan said Meta is sometimes blocked from disclosing these types of moderation requests, including for FISA orders, for national security reasons and in some countries “that give us legal orders and then have gag orders that come with them, where companies can’t disclose to the user or the public what the request was, who it came from or why.”
