Social Media
and Inciting Violence


Protesters rally at the Capitol, interrupting the peaceful transfer of power, January 6th, 2021.

The evidence social media companies gathered in the fallout of the 2016 election was sufficient to drive action four years later. According to a Washington Post article, Facebook’s Civic Integrity team established expansive election-season measures to restrict misinformation in the months leading up to the 2020 election. However, Facebook’s policy actions suggest that the company considered disinformation a heightened concern only around the election itself and began to remove much of that regulation and surveillance afterward. The Washington Post, drawing on internal documents and former employees, described the aftermath of election season this way:


“Facebook rolled back many of the dozens of election-season measures that it had used to suppress hateful, deceptive content. A ban the company had imposed on the original Stop the Steal group stopped short of addressing dozens of look-alikes that popped up in what an internal Facebook after-action report, first reported by BuzzFeed News, called “coordinated” and “meteoric” growth. Meanwhile, the company’s Civic Integrity team was largely disbanded by a management that had grown weary of the team’s criticisms of the company, according to former employees.” (Timberg, Craig, et al. “Inside Facebook, Jan. 6 Violence Fueled Anger, Regret over Missed Warning Signs.” The Washington Post, WP Company, 29 Oct. 2021.)

Although it is important to credit Facebook for its commitment to regulation during the election season—it established the Civic Integrity team in 2018—we cannot ignore its almost immediate pullback, dismantling the team in early December of 2020. As a public company, Facebook has no vested interest in protecting the rights of the citizens of the United States; its interest lies in protecting its shareholders. It has not done, and is unlikely to do, what is right without great scrutiny or government intervention.


Frances Haugen testifies to Congress. 

Political demonstrations are not new to American society; in fact, they are an integral part of it. Demonstrations are a physical representation of our freedom of speech, yet not all are protected. Citizens often assume that all speech is protected, especially speech of a political nature. 


Yet, according to the US Supreme Court: “[t]hat speech is used as a tool for political ends does not automatically bring it under the protective mantle of the Constitution. For the use of the known lie as a tool is at once at odds with the premises of democratic government and with the orderly manner in which economic, social, or political change is to be effected. … Hence the knowingly false statement and the false statement made with reckless disregard of the truth, do not enjoy constitutional protection” Garrison v. Louisiana, 379 U.S. 64, 75 (1964). 


Instances of the “knowingly false statement” have been amplified and promoted by the very platforms designed to give us an equal voice. The rise of disinformation and its amplification have paved the way for riots and violent political speech. 


Most notably, the January 6, 2021 assault on the U.S. Capitol was the culmination of a rising movement of violence principally organized on social media. This attack on the democratic process was not a tragic anomaly; it was a preventable, expected outcome.


It is not simply Facebook’s inaction, but its informed inaction, that drives the need for government intervention. In October of 2021, Frances Haugen—a former product manager on Facebook’s Civic Integrity team turned whistleblower—provided Congress with extensive internal documents, built on data collected during her time on the team, detailing what Facebook knew about the effects of its platform. As the Washington Post reported, Facebook has an intimate understanding of the dangers present on its platform.


“The documents also provide ample evidence that the company’s internal research over several years had identified ways to diminish the spread of political polarization, conspiracy theories and incitements to violence but that in many instances, executives had declined to implement those steps.” (“Inside Facebook, Jan. 6 violence fueled anger, regret over missed warning signs,” Washington Post).


Not only has Facebook withdrawn much of its focus on disinformation, but it has done so while acknowledging the danger. How can Facebook in good conscience say it must remove prominent disinformation groups such as “Stop the Steal” (designed to delegitimize the election of Joe Biden in the event of a Donald Trump loss) yet ignore the subsequent duplicates? It is clear that Facebook cares only about optics. Facebook will not do its duty to protect its users from the danger that it has created.


Given this understanding, Facebook should and must continue to act to protect its users from being manipulated or endangered. Facebook leaders often attempt to separate themselves from their creation, but to put it simply, they have created a product, and like any other company they ought to be liable for that product. Yes, Section 230 of the Communications Decency Act (CDA) shields internet companies from liability for users’ posts. However, Facebook built the platform and the algorithms that give disinformation its reach; without them, public discourse would not present such opportunities for disinformation.


Social media platforms must be held accountable and must monitor their platforms for disinformation. Facebook’s record as a case study suggests that companies will blatantly ignore evidence of disinformation and act only when pushed. This points to the need for government intervention and legal enforcement to compel companies toward necessary action.

If we do nothing, our democracy is at stake.
