Chapter 1 - Social Media and Our Political and Economic Lives
The Capitol riot in Washington, D.C. on January 6, 2021 was an infamous and historic moment for the United States, as it was only the second time in the life of the nation that the Capitol had been breached. The first was on August 24, 1814, when British forces set fire to the Capitol building during the War of 1812, while Congress was in recess. In January of 2021, though, it was some of the country’s own citizens who stormed the Capitol while Congress was in session to certify the results of a presidential election, and many of the insurrectionists posted images and livestreamed video of the violence on their social media accounts as it unfolded.
Not only did social media play a role in documenting the events of January 6, 2021; they were arguably the primary platform for the unfounded claims of election fraud and the calls to action that precipitated the rioters’ march into the Capitol. Then-president Donald Trump, a lame-duck incumbent who had lost his re-election bid to Democratic rival Joseph Biden, refused to concede defeat. On his Twitter account he steadily pushed baseless claims of election fraud and promoted the rally that ultimately set the stage for the mayhem, tweeting on December 19, 2020: “big protest in DC on January 6th,” adding, “Be there, will be wild.”1 The U.S. House of Representatives later impeached Trump for incitement of insurrection, and his Twitter messages before, during, and after the Capitol riot were used as evidence against him during the impeachment trial. Throughout the day on January 6, Trump tweeted inflammatory messages, including two that were flagged by Twitter and later deleted, before the social media company permanently suspended his account two days later.
The Capitol riot and the permanent suspension of a U.S. president from Twitter were, perhaps, a strange coda to the story that had been unfolding about the role social media played in the Breonna Taylor and George Floyd protests during the summer of 2020 and in the social justice activities taking shape on social media since Ferguson in 2014. How did social media, which evolved through message boards, personal spaces on the web, and connecting with high school friends on Facebook, become so integral to our social, political, and economic lives? Why are digital platforms designed for recreation so useful to both the causes of social justice and right-wing authoritarianism? What does the front line of popular politics look like on social media?
Our initial approach to these questions was to combine political economic theory and network analysis techniques to further our understanding of social movements and political action. What does a “social movement” (conventionally defined in terms of strikes, protest marches, or sit-ins) look like through network visualizations, and how might these visualizations of movements taking place on Twitter reshape our understanding of how political action takes place in the digital era? We aim here to contribute to a more comprehensive understanding of how social media may empower and hinder social justice activity.
We explored these questions through a series of data-based case studies of Twitter activity: tweets during the Ferguson demonstrations in 2014, when the hashtag #BLM trended; the 2016 presidential election season, when Donald Trump’s #MAGA hashtag rose to prominence; and the summer of 2020, which featured both nationwide protests around the #BlackLivesMatter movement and another presidential campaign season. While the goals of social justice advocates and political groups may differ, we are curious about how they might intersect on Twitter, especially as social justice is often linked to popular politics. The examination presented here relies on both data analytics and qualitative analysis: we provide political economic context for the most used and impactful hashtags in the immediate aftermath of Ferguson, and we describe how hashtags went viral during the 2016 election season and throughout the summer of 2020. We also address the meanings and implications of these activities and hashtags. Our work here is decidedly large-scale in method and yet critically informed, as we consider the scope of social justice activity on social media and examine it within the broader context of the political economy of misinformation, disinformation, and so-called fake news.
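To make the hashtag-level analytics concrete, the sketch below shows one simple way such counts might be produced from an archive of tweets: it tallies the most used hashtags and tracks the daily volume of a single hashtag over time. This is a minimal illustration rather than our actual pipeline; the file name (tweets.csv) and the column names (created_at, text) are hypothetical.

```python
# Minimal sketch of hashtag frequency analysis over a tweet archive.
# The file and column names are illustrative assumptions, not the
# dataset used in this book.
import pandas as pd

tweets = pd.read_csv("tweets.csv", parse_dates=["created_at"])

# Pull every hashtag out of each tweet's text.
tweets["hashtags"] = tweets["text"].str.findall(r"#\w+")

# One row per (timestamp, hashtag) pair, lowercased for consistent counts.
exploded = tweets.explode("hashtags").dropna(subset=["hashtags"]).copy()
exploded["hashtags"] = exploded["hashtags"].str.lower()

# Most used hashtags across the whole archive.
print(exploded["hashtags"].value_counts().head(20))

# Daily volume for one hashtag of interest, e.g. #blacklivesmatter.
daily = (
    exploded[exploded["hashtags"] == "#blacklivesmatter"]
    .set_index("created_at")
    .resample("D")
    .size()
)
print(daily)
```

A time series like the one produced at the end is the kind of evidence that lets us say when a hashtag “went viral” rather than relying on impressions alone.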
One of the key features of our work is the use of machine learning methodologies to develop a unique and engaging look at social movements on social media. We combine data visualization and computational text analysis to parse the semantic discourse within Twitter archives, and we provide digitization, imaging, and 3D modeling of networks. Our analysis also engages critical political economic theory and network analysis to create an interactive look at the role of social media activity, such as Twitter posts, in social justice and political campaigns.
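As a concrete, if simplified, illustration of this kind of network analysis, the sketch below links hashtags that co-occur in the same tweet and exports the resulting graph for interactive visualization. The co-occurrence approach, the networkx library, and the GEXF export are illustrative assumptions rather than a description of the exact methods used in later chapters, and the hypothetical tweets.csv archive is the same one assumed in the previous sketch.

```python
# Sketch of a hashtag co-occurrence network, one common way to visualize
# discourse networks on Twitter. Assumes the same hypothetical archive as
# the previous sketch.
from itertools import combinations

import networkx as nx
import pandas as pd

tweets = pd.read_csv("tweets.csv")
tweets["hashtags"] = tweets["text"].str.findall(r"#\w+")

G = nx.Graph()
for tags in tweets["hashtags"]:
    # Link every pair of hashtags that appear together in a single tweet,
    # weighting the edge by how many tweets they share.
    for a, b in combinations(sorted({t.lower() for t in tags}), 2):
        if G.has_edge(a, b):
            G[a][b]["weight"] += 1
        else:
            G.add_edge(a, b, weight=1)

# Export for interactive, force-directed layout in a tool such as Gephi.
nx.write_gexf(G, "hashtag_network.gexf")
```

In a force-directed layout of such a graph, heavily co-used hashtags pull together into clusters, which is one way the network visualizations discussed above can make distinct discourse communities visible.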
Before we examine the social justice and political activities on Twitter in the current age of fake news and post-truth, as well as network manipulation by bots, the influence of commercial interests, troll farms, and clever memes that shape public discourse, we must first return to our original question: How did social media, which evolved through message boards, personal spaces on the web, and connecting with high school friends on Facebook, become so integral to our social, political, and economic lives?
The rise of social media platforms
Social media platforms are a byproduct of Internet development, which began in the 1960s and culminated in an early internet prototype created by the U.S. Department of Defense and university researchers, known as the Advanced Research Projects Agency Network (ARPANET), in 1969.2 A commercial network modeled on ARPANET, Telenet, was created in 1974, and the first “major development toward social media sites” came about in 1978 with an online bulletin board system in Chicago, which included announcements, meetings, and other information posted by users.3 With the growth of home computing and modems in the 1980s, online service providers such as Prodigy, founded in 1984, created relay chats and news sharing for their users, while America Online (AOL) featured member profiles organized into communities.4 With the development of the World Wide Web and graphical browsers such as Mosaic, providers like Prodigy and AOL began offering their users access to the web in the early 1990s, and its popularity grew in a matter of years: in 1993 there were just over 200 web servers online, just over 1,500 in 1994, and over a million by 1997.5 While no website of the mid-1990s was specifically referred to as “social media,” the concept has existed at least since then, as many sites featured elements of today’s social media. For instance, several websites allowed users to post running personal commentary, in what later came to be called “web logs” or “blogs.” AOL’s instant messenger and “chat rooms” were popularized in the 1998 film You’ve Got Mail, starring Tom Hanks and Meg Ryan. Other websites created in the mid- to late 1990s, such as Classmates, SixDegrees, BlackPlanet, AsianAvenue, and MiGente, allowed users to create personal profiles, form groups, and identify friends – all features of what is now commonly known as “social media.”
Danah Boyd and Nicole Ellison defined social media as “web-based services that allow individuals to (1) construct a public or semi-public profile within a bounded system, (2) articulate a list of other users with whom they share a connection, and (3) view and traverse their list of connections and those made by others within the system.”6 In the early 2000s, other social media sites, such as Friendster and MySpace, came along, but it was not until Facebook was opened to the general public in 2006 that social media grew even further in popularity and developed more permanent features, such as the “like” button, which has since been adopted by other social media platforms and apps. Also in 2006, another social media mainstay, Twitter, was launched, allowing its users to “follow” each other. Additionally, Twitter featured cross-platform connectivity, so that users could more easily share other online content to and from Twitter, as well as across their other social media channels. Twitter is an open network in which people can “follow” any account, while Facebook is built on a more closed “friend” structure. Both Facebook and Twitter now provide instant communication (including images and videos) to large numbers of friends/followers, while also affording one-to-one communication, similar to AOL’s early instant messenger service. YouTube also emerged in the mid-2000s as a platform specifically for sharing user-generated video content. Users have their own “channels” and can post comments on videos posted by other users. YouTube and Facebook, along with Twitter and now others, also feature cross-platform connectivity, which allowed users to quickly disperse content across an array of online media channels during the Arab Spring in 2011 and Ferguson in 2014.
The development of smartphones and tablets, such as the iPad in 2010, has made social media more accessible and popular. Today, social media applications are most often used on mobile devices running either iOS or Android operating systems, allowing users to post images and livestream video from their phone’s camera. These features proved critical for protesters, journalists, and other social media users during the Breonna Taylor and George Floyd protests throughout the summer of 2020.
Social media and hate groups
Hate groups have also employed early internet services, such as online bulletin board systems and newsgroups, and later social media platforms, to find, build, and coordinate like-minded networks. In fact, hate groups were organized online well before those interested in social justice.7 While specific counts varied, by 2001 there were somewhere between 800 and 2,200 hate group websites, newsgroups, clubs, and other types of communities organized online.8 More recently, a network of racist, white nationalist, and other hate-based groups used Facebook to organize the infamous “Unite the Right” rally in Charlottesville, Virginia.9
The Southern Poverty Law Center (SPLC) defines a hate group as an organization that through “official statements or principles . . . or its activities” has “beliefs or practices that attack or malign an entire class of people, typically for their immutable characteristics.”10 Hate groups denigrate and attempt to inflame public opinion against certain groups of people based on skin color, race, religion, ethnic origin, age, gender, or sexual orientation. The targets of hate groups are often blamed for an array of social, economic, or political ills. Internet-based platforms, including social media, have provided a useful means of networking, organization, and communication for these groups, as they are easy to access, inexpensive, and provide anonymity if necessary. Online platforms and social media can also be used to bring in revenue through merchandising and donations. More critically, it is difficult for the government to interdict the activities of hate groups online, due in part to Section 230 of the Communications Decency Act of 1996, which provides broad immunity to interactive computer service operators, such as social media outlets, for content posted on their services by third-party users.
Furthermore, blanket statements of hatred toward ethnic, racial, religious, or other groups are protected in the U.S. by the First Amendment to its Constitution. Only threats and intimidation (which may be based on racial, ethnic, gender, or other animus) directed at specific individuals fall outside the First Amendment’s protection. Therefore, the regulation of hate speech online is at the discretion of internet service providers, social media outlets, and individual websites. Some of these interactive service operators prohibit hate speech or certain groups through their “terms of service” statements. For instance, Facebook removed several pages used by racist and white nationalist groups from its service after the deadly “Unite the Right” rally in 2017. YouTube banned Atomwaffen’s channel in 2018 after the group praised the killing of Blaze Bernstein, a gay Jewish college student; Atomwaffen’s videos had featured armed group members yelling slogans about killing Jews. And in 2021, Twitter suspended more than 70,000 accounts linked to the Capitol riot, including then-president Trump’s account and those of other users linked to the QAnon conspiracy theory movement.11 This minimal form of industry self-regulation is effectively the only limitation on hate groups’ networking online and through social media.
With limited government interdiction and minimal industry self-regulation, social media networks have been useful platforms for hate groups to organize events such as the “Unite the Right” rally or to spread harmful conspiracy theories. For instance, David Duke, a former Grand Wizard of the Ku Klux Klan and a key figure in the “Unite the Right” rally in Charlottesville, sometimes tweeted 30 times a day to nearly 50,000 followers.12 Moreover, social media outlets, including the mainstream platforms Twitter and Facebook, have been used to spread QAnon-based conspiracy theories.13 QAnon is an unproven far-right conspiracy theory, named for its anonymous source “Q,” which holds that a group of Hollywood elites and Democratic politicians are engaged in pedophilia, child sex-trafficking, and Satan worship, among other outrageous claims. While QAnon originated on a 4chan message board, its theories are spread by users on more popular social media outlets.14
The social media landscape leading up to Ferguson
Social media have become a dominant force in many of our social, political, and economic lives. By 2013 Facebook and YouTube each had more than one billion users worldwide, while Twitter boasted more than 500 million, and the average American between the ages of 18 and 64 was spending over three hours per day on social media.15 Social media have become a significant source of interpersonal connection with friends and family, information gathering and sharing, political organizing, and, with sites such as LinkedIn, job networking. Whether one sees this level of consumption and engagement with social platforms as good or bad, we can at least agree that they have a pervasive presence in our lives and that this presence alone demands critical attention. Beyond connecting and sharing for entertainment, diversion, and escape, social media use has become a way that we acquire all sorts of social, cultural, and political knowledge, including knowledge of social movements on both the left (e.g., Black Lives Matter, Ferguson, George Floyd) and the right (e.g., MAGA, Unite the Right, Stop the Steal).
With social media, essentially anyone can be a storyteller. Social media and mobile streaming applications have demonstrated that the relationship between news media and the public is subject to change in significant ways, as virtually everyone now has the potential to document and livestream events to a global audience. To say the least, social media have become a primary venue for public commentary about current events, disrupting the gatekeeping power once held by national news outlets.
While we survey analyses of a range of social platforms, we concentrate on Twitter in the chapters that follow. Twitter was not only the social medium of choice for Donald Trump and for those behind the historic events of January 6, 2021; it has also been the primary social medium for engagement between professional and citizen journalists covering social justice movements16 and is the most “normalized” social medium for journalistic activity.17
Accordingly, our analysis will begin by examining how social media empower and influence social justice movements, focusing on Black Lives Matter and Twitter. We will also look at how social media platforms shaped discourse about social justice during Ferguson in 2014 and a year later in Cincinnati, when @BlackLivesCincy played a role in the aftermath of the Sam DuBose shooting. From there we analyze conversations taking place on Twitter to understand how networks of discourse affect social and political movements, including an analysis of tweets from the 2016 U.S. presidential election cycle, when Trump’s signature hashtag (#MAGA) emerged. Finally, we consider how misinformation and disinformation on Twitter complicate analyses of social media and social justice movements. Amidst social justice groups working from the bottom up and political forces pushing from the top down, there are also commercial interests generating unnatural networks and connections between people, creating the complex political economic struggle over online networks that is the subject of this book.