“Chapter 8 - The Political Economy of Social Media Networks, Social Justice, and Truth” in “Social Media, Social Justice, and the Political Economy of Online Networks”
This chapter examines social justice and political discourse from a critical political economic perspective, and endeavors to contribute to an ongoing critical-cultural examination of the interplay between online social networks, political economics, and social justice. That body of work includes Christian Fuchs’s Culture and Economy in the Age of Social Media, which applied critical cultural theory to the culture and economy of social media, and Zizi Papacharissi’s Affective Publics: Sentiment, Technology, and Politics, which provided an insightful analysis of the affective nature of Twitter streams within political debates.1 From our analysis, the political economic struggle taking place throughout online networks can be seen as two opposing forces: social justice efforts from the bottom up, and social propaganda from the top down, alongside other artificially created communities in which social networks’ business models generate links between people.
We examine the political economy of social media networks by applying Edward Herman and Noam Chomsky’s classic propaganda model to the digital marketplace of ideas and find that too often seemingly grassroots movements can be manufactured from the top down via social media networks, confounding social justice movements and confusing epistemic validity within political discourse.2
Mobile telecommunication, the internet, and the development of social media platforms have become integral not only to our social lives but also to how we get news and information. For instance, a Pew Research study in 2016 showed that 67 percent of Americans report getting their news from social media, up from 62 percent the year before. While this may be a good indication of the popularity of social media, it also presents a problem for the institutional effectiveness of journalism in the United States and, more important, for the quality of information within political discourse.
Knowing the difference between real and fake news, as well as truth and falsity, has become increasingly problematic on social media and in this so-called age of post-truth.3 On open social media platforms anyone can share practically anything with anyone else without any quality or fact-checking filters. As a result, fake news and misinformation distributed through social media can confuse political discourse, making it difficult to discern credible news outlets from specious ones. Moreover, “bots” allow users to automate hundreds of posts from a single social media account within a day and spread false information from fake news sites at levels that make it appear legitimate. The conundrum deepens when legitimate actors unwittingly share misinformation or engage in public discourse on subjects about which they are insufficiently informed, as happened with the spread of misinformation about the efficacy of hydroxychloroquine as a treatment for coronavirus during the early stages of the COVID-19 pandemic in 2020.4
The political economic analysis presented in this chapter will examine the impact of fake news via social media by applying Edward Herman and Noam Chomsky’s famous “propaganda model,” which described how ownership, advertising, sourcing, flak, and anti-communism shape media behavior and content.5 In applying each of these factors to the present social media environment, which can be manipulated through the use of bots and muddled by doublespeak about what is and is not fake news, it appears that the production of fake news may do more than “manufacture consent”; it reconstructs the very perception of truth itself. Accordingly, this analysis further considers the limits of the oft-cited marketplace of ideas metaphor for discerning truth, and addresses the epistemic problem confronting the U.S. institution of journalism and its news audiences.
The political economy of news media: a propaganda model
A political economic analysis of fake news on social media, specifically using Herman and Chomsky’s propaganda model, will be a valuable addition to the critical study of how digital technologies have been used to transform journalism. As McChesney described, the political economy of media “is a field that endeavors to connect how media and communication systems and content are shaped by ownership, market structures, commercial support, technologies, labor practices, and government policies.”6 In this sense, political economy can be thought of as a study of media business, but from a decidedly moral and philosophical perspective.7
One of the most famous political economic analyses of the commercial influence on U.S. news media, Edward S. Herman and Noam Chomsky’s book Manufacturing Consent: The Political Economy of the Mass Media, has celebrated its thirtieth anniversary. Herman and Chomsky followed seminal research by Walter Lippmann and Harold Lasswell in positing a propaganda model of media in which content is shaped by five filters: ownership, advertising, sourcing, flak, and anti-communism.8 These five filters represent the overall constraints of market pressures, ownership and organizational structures, and political power on media performance.
The “ownership” filter is set by the expectations of the “large, profit-seeking corporations” that are owned and “controlled by very wealthy people or by managers who are subject to sharp constraints by owners and other market-profit-oriented forces, and they are closely interlocked, and have important common interests with other major corporations, banks and government.”9 Accordingly, there are inherent conflicts of interest built into the media system based on ownership and the demands for increasing wealth, which tends to constrain news content in the interest of profits, as well as concern for other investments.
Closely related to ownership pressures is the advertising filter. As described by Herman and Chomsky, advertising can function as a “de facto licensing authority”10 that limits content based on the interests of advertisers, but also limits access to content that is produced by advertising-supported news outlets.11 As Herman and Chomsky explained, news media that lack advertising are put at a “serious disadvantage” in the marketplace because their subscription prices will be “high, curtailing sales, and they will have less surplus to invest in improving the salability” of their product, such as “features, attractive format, promotion, etc.”12 Furthermore, “an advertising-based system will tend to drive out of existence or into marginality the media companies and types that depend on revenue from sales alone.”13 The significance of advertising in the news business has also contributed to the increasing concentration of media ownership, as outlets with greater horizontal and vertical reach have more market range and can compete more aggressively for national advertising, while leaving competitors without such interlocking interests in the margins.
Separate from the demands of ownership and advertising is the “sourcing” filter of the propaganda model described by Herman and Chomsky, which refers to what may often become the symbiotic relationship between news media and powerful sources of information, and their mutual need for one another. News media are reliant on credible sources of information on such a regular basis to fill the daily schedules of newspapers and television outlets, and government and corporate sources are “credible for their status and prestige.”14 At the same time, these sources are aware of their power and may leverage it through threats to cut off access to news media or inundate the media with “a particular line and frame” for its stories.15 Sourcing as a news filter in this sense can be summarized as the media’s reliance on particular powerful sources of information, as well as the status conferred upon them as being credible ones.
The “flak” filter is described as political spin, as well as the negative responses to news reporting that powerful interests find unfavorable. As Herman and Chomsky explained, those with influence can “work on the media indirectly by complaining to their own constituencies” and generate advertising that does the same, as well as “funding right-wing monitoring or think-tank operations designed to attack the media.”16 Perhaps most notably, Herman and Chomsky described governmental entities as a major producer of flak, “regularly assailing, threatening and ‘correcting’ the media.”17
The fifth and final filter described by Herman and Chomsky is “anti-communism” as an ideology practiced and propagated by those in power, as communism “threatens the very root of their class position and superior status.”18 Herman and Chomsky explained further that this kind of pro-Western capitalist ideology “helps mobilize the populace against an enemy, and because the concept is fuzzy it can be used against anybody advocating policies that threaten property interests.”19 Moreover, because any triumph of communism is perceived as “the worst imaginable result” in conflicts, especially abroad, “the support of fascism” can be viewed as a “lesser evil.”20 Although Herman and Chomsky’s work was published before the collapse of the Soviet Union, in a broader sense the anti-communism filter can be seen as a pro-capitalism ideology wary of the growth of socialistic ideas.
Fourteen years after their 1988 book, Herman and Chomsky revisited their model and acknowledged that the internet and developing media platforms appeared to be breaking up some of “the corporate stranglehold on journalism and opening an unprecedented era of interactive democratic media.”21 However, Herman and Chomsky also noted that despite the internet’s value as an additional platform for “dissidents and protesters,” it is limited as a tool for meeting the critical information needs of many in the public, who lack the access and knowledge for effective use.22 Furthermore, the privatization, commercialization, and concentration of control over internet hardware and platforms “threaten to limit any future prospects of the Internet as a democratic media vehicle.”23
While Herman and Chomsky acknowledged in 2002 that internet-based media operate differently than legacy news media, and duly noted that online communication has not lived up to its early emancipatory expectations, it could be logically re-asserted that corporate media continue to play a central role in the production of news and information as the primary providers of mass-distributed content. However, we might ask, in the two decades following Herman and Chomsky’s 2002 revisit of their propaganda model: has user-generated content via social media outlets, such as Twitter and Facebook, changed this dynamic? While Twitter and Facebook are another iteration of the corporate form, does the proliferation of user-generated content sharing on their networks disrupt the centrality of their institutional power?
These questions call for further analysis of the online media ecology to understand and perhaps reformulate Herman and Chomsky’s original five filters to address the ways digital intermediaries, particularly social media, and user-generated content have affected traditional information value chains, such as the process of content production, discovery, and distribution, as well as how that process is monetized by advertising and data vending. Accordingly, a political economic analysis of fake news on social media, applying (and reforming) Herman and Chomsky’s original propaganda model, will be a valuable addition to the critical study of how digital technologies have been used to transform journalism.
The 2016 presidential election and the rise of fake news, occurring another fourteen years after Herman and Chomsky revisited and republished their model, provide a ripe opportunity to re-apply the model and critically examine how new online media platforms have affected the delivery of news and information. Similar to the original 1988 propaganda model, the analysis to be presented in this chapter will focus on “media structure and performance, not the effects of the media on the public” and does not imply that fake news, or other elements of the model, are always effective.24 Moreover, in their republished edition, Herman and Chomsky noted several limitations of internet media – primarily the growth and prominence of brand names and commercial organization.25 As Herman and Chomsky put it: “People watch and read in good part on the basis of what is readily available and intensively promoted.”26 This assertion is ripe for further political economic analysis in the current fake news and post-truth environment.
Applying the propaganda model to fake news: a political economic critique
In applying Herman and Chomsky’s classic propaganda model to the new terrain of fake news on social media, this chapter employs a political economic critique of social media activity, focusing on the U.S. presidential election contest between Republican Donald Trump and Democrat Hillary Clinton in 2016, as well as its aftermath. Herman and Chomsky explained that their propaganda model was an analytic framework to interpret U.S. media performance based upon the “institutional structures and relationships within which they operate” that showed how media tend to serve “the powerful societal interests that control and finance them.”27 While Herman and Chomsky were looking at the entire U.S. media system, this analysis applies that model to contemporary social media platforms.
While a shared reality exists on social media that can be measured in terms of the number of “likes,” “shares,” or “followers” that an account has, it is more challenging to understand how technology, business, and other forces under the surface shape these measures. However, political economy’s critical realism is ideally suited to expose these kinds of elusive structural processes.28
In addressing the structural processes of social media, the analysis presented here follows Thomas Corrigan’s suggestion to further utilize trade press and popular reporting in data research.29 This study applies Herman and Chomsky’s 1988 propaganda model to the contemporary social media environment using data from social media companies, an array of legal documents, including Congressional testimony and grand jury indictments, as well as examples drawn from social media content and trade press reporting.
Ownership and advertising: the business of fake news
As noted earlier, “bots” are automated software programs that operate on social media platforms. They perform specific tasks, like making posts, and give the illusion of representing a real person interacting and engaging on social media. For instance, two of the most popular conservative Twitter pundits during the 2016 U.S. Presidential election cycle, each with tens of thousands of followers, were outed as Russian trolls.30 As Samuel Woolley and Douglas Guilbeault showed in their social network analysis, bots can be used to “manufacture consensus by giving the illusion of significant online popularity in order to build real political support” for particular candidates or perspectives.31
Woolley and Guilbeault do not suggest that there is some form of direct effect at play here. Nonetheless, bot-generated messages may represent a new form of “agenda-setting” function: the tweets and memes do not necessarily cause social media users to think differently about the issues referenced in the messages, but the repetition of similar themes and messages appearing in social media feeds likely means that we are thinking about those particular issues, and those issues become salient ones.32 Throughout the campaign season, bot-generated posts flooded feeds with particular framings of candidates and issues. A BuzzFeed study showed that fake news on Facebook generated more engagement than regular news sources during the last three months of the 2016 presidential campaign.33 Notably, the “20 top-performing false election stories from hoax sites and hyper-partisan blogs generated 8,711,000 shares, reactions, and comments on Facebook.”
As discussed in a previous chapter, Bloomberg News reported that an engineering manager at Twitter discovered a horde of spam accounts, which happened to be from Russia and Ukraine, in 2015, but the company did not delete them because doing so would have diminished its growth numbers. The accounts (fake or not) were counted as part of its total universe of users, and thus helped inflate Twitter’s company value. Twitter, with its 300 million users, is competing with Facebook and its 2 billion users, and shaving off spam accounts would hurt their stock prices, as investors are constantly looking for and rewarding growth.34 In Congressional testimony, the estimated number of fake accounts from Russia was put at over 36,000. According to data obtained by the New York Times from social media companies, Russian propaganda reached over 126 million Facebook users, published over 130,000 messages on Twitter, and uploaded over 1,000 videos on YouTube.35
While it is difficult to directly quantify the impact of those ads, propaganda and misinformation, Herman and Chomsky limited their analysis to “media structure and performance, not the effects of the media on the public.” Furthermore, “the propaganda model describes forces that shape what the media does; it does not imply that any propaganda emanating from the media is always effective.”36
Still, the manufactured repetition of themes and messages is problematic, especially as social media, like most legacy media, is dominated by a small group of powerful, far-reaching platforms, such as Facebook (which also owns Instagram), YouTube, and Twitter. Herman and Chomsky noted in their updated propaganda model the continuing “centralization and concentration” of media ownership, and a similar observation can be made about a social media oligopoly that concentrates power within a handful of the most popular platforms.37
In Herman and Chomsky’s analysis of the “advertising filter” in news, they noted that when news is cheaper, or in recent cases “free,” those news outlets tend to be more accessible to the public, and thus more popular.38 Legacy news media, particularly the old newspaper model that was heavily subsidized by advertising (especially classified advertising), have been steadily waning since the growth of free news sources online. Classified advertising dollars left newspapers as job listing services, apartment rentals, and other types of listings moved to more specialized services online. In order to stay in business, most newspapers had to increase their subscription costs online. In contrast, fake news propagated through social media sites is free and easier to access. Perhaps confusing things further, traditional news media also use social media platforms to share legitimate news content.
To make matters even more vexing, there are businesses such as 500views.com that sell views, likes, and dislikes.39 In other words, social media users pay for services to “like” their posts, and thus increase their popularity (albeit artificially). One of these companies, Devumi.com, sold more than 196 million YouTube views over a three-year period from 2014 to 2017, which included the U.S. presidential campaign season.
Devumi’s customers included an employee of RT, a media organization funded by the Russian government, and an employee of Al Jazeera English, another state-backed company. Other buyers were a filmmaker working for Americans for Prosperity, a conservative political advocacy group, and the head of video at The New York Post.40
The system uses bot-generated traffic, as well as pop-under videos, on the computers of unsuspecting viewers. The owner-operator of 500views.com claimed that by 2014 “his website was on the first page of Google search results for buying YouTube views” and was selling between 150 and 200 orders a day, “bringing in more than $30,000 a month.”
While these examples bring some light to the concentration of sources and artificiality of news and information presented through social media, the “sourcing” and “flak” filters described by Herman and Chomsky need to be applied to further understand the impact of these ownership and advertising mechanisms, particularly in the context of the 2016 U.S. presidential election cycle.
Sourcing and flak: understanding fake news and doublespeak
While Herman and Chomsky’s analysis viewed “sourcing” as an activity performed by news media in their selection of stories and sources, when examining social media this function works in two ways.41 As described in the previous section, a sourcing function occurs in the manipulation of reach and popularity through bots and the vending of likes and dislikes. For instance, a study of over 10 million tweets from 70,000 Twitter accounts found that 6.6 million tweets linked to fake news and conspiracy news publishers in the month before the November 2016 election, and that 65 percent of the fake news links went to a group of just 10 sites.42
However, a more dynamic form of sourcing occurs through social media in the form of individual users exercising their personal discretion to like, re-tweet, and post similar themes and messages on their own. A data study reported in FiveThirtyEight examined why Americans shared millions of tweets from a single well-funded Russian troll factory, known as the Internet Research Agency (IRA), which ran a sophisticated campaign to “sow disinformation and discord into American politics via social media.”43 FiveThirtyEight showed that IRA troll activity peaked on October 6, 2016, right before WikiLeaks released Hillary Clinton’s campaign emails.44 Moreover, Darren Linvill and Patrick Warren’s analysis provided more insight into why the deluge of troll activity was so effective, as the IRA managed to mimic an array of entities across the political spectrum, including Black Lives Matter activists, the Democratic Party, Trump supporters expressing virulent anti-immigration sentiments, and local American news outlets.45 Ironically, Americans who were attempting to source different political perspectives on social media (whether intentionally or not) were most likely exposed to the work product of a single Russian-based troll factory. To further the illusion, the IRA worked across different online platforms to acquire and repurpose abandoned social media accounts (once created by real people) because these kinds of accounts look more real.46
While it appears that market forces (including ownership and advertising) and sourcing are still significant elements in the application of Herman and Chomsky’s propaganda model, what they described as “flak” may have become the most momentous factor in the contemporary manufacture of not only consent, but perhaps, the constitution of truth itself.47 Flak can be seen as more than just political spin, but fake news itself, and the form of doublespeak Trump engages in when talking about what is (and is not) fake news, as well as his denigrating attacks on journalists and journalism. President Trump regularly stoked hatred and distrust of news media by frequently describing the press and reporters as “dishonest,” “scum,” “horrible people,” and “sleaze.”48 These slurs clearly resonated with his supporters during campaign rallies, but they became particularly troubling in blurring the lines of truth, as Trump has used the phrase “fake news” as a form of doublespeak to refer to professional news outlets that produce stories unfavorable to his campaign and administration. Together, fake news and Trumpian doublespeak have created a treacherous post-truth environment for the institution of journalism.
Facebook itself has also engaged in a form of flak that sought to discredit critics of the social media network for being manipulated by foreign agents in Russia during the 2016 presidential election. Facebook hired “Definers,” a Washington, D.C.–based firm, to monitor news coverage of the social media network and produce information to counter its critics’ claims. “Definers pressed reporters to explore financial connections” between its critics and “Color of Change, an online racial justice organization,” although no grants had been made by the group to support the campaign against Facebook.49 That Facebook responded to criticism over its platform being used as a conduit for Russian interference in the 2016 presidential election, as well as Cambridge Analytica’s appropriation of its consumer database, with a flak campaign of its own to “divert attention to critics and competitors” raises questions about its ability to fairly mediate discourse.50
Interestingly, in contrast to Twitter and Facebook’s manipulation during the 2016 presidential election cycle, and contrary to the findings presented earlier in this book, President Trump has asserted the opposite – that social media is biased against conservative perspectives.51
Sophistry, the ideology of post-truth, and epistemic crisis
Herman and Chomsky explained the force of “anti-communism” ideology in their 1988 propaganda model, and of pro-capitalism in their 2002 iteration. However, in the realm of social media, it seems that the sophistic nature of fake news has created another kind of ideological problem by frustrating the notion of truth and the validity of knowledge.
Douglas Kellner described “postmodern sophistry” as a form of discourse that occurs in a relativist cultural environment, which accepts that all discourse is laden with biases – thus, one proposition is no more or less credible or valid than another.52 Everything is relative, and thus there can be no right or wrong, fact or falsity, truth or lie. Hence, postmodern sophistry thrives in a post-truth environment.
The problem presented by post-truth is, perhaps, further vexed in a society that values free expression, as provided under the First Amendment to the U.S. Constitution:
Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the Government for a redress of grievances.53
Only in extremely rare circumstances, such as blackmail, fraud, child pornography, harassment, or incitement can criminal penalties be imposed on speech. Moreover, political speech in particular is at the heart of what the First Amendment is supposed to protect, and political speech is often opinion, or belief in something that cannot necessarily be proven true or false. Thus, speech that includes false information or lies is most often protected.
Rather, U.S. jurisprudence has become fond of the “marketplace of ideas” metaphor, the self-righting process, and the belief that truth will always win out over false ideas. Justice Brandeis and courts have long said that the way to address falsehood, fallacies, and lies is with more speech, speech that is true. However, the concern presented here is that fake news may have exposed a fatal flaw in the marketplace of ideas metaphor – the truth may not always emerge in a social media environment prone to sophistic manipulation.
Justice Brandeis himself said that education is the key to making the marketplace of ideas metaphor work. People need to be educated, and willing and able to engage in critical discourse. Moreover, Brandeis asserted that the news media is a key to fostering an educated and well-informed public. If it is, then the public needs to be able to tell the difference between what is real and what is fake, what is true and what is false. While there are professional codes of ethics for journalists, such as those provided by the Society of Professional Journalists and Radio Television Digital News Association, there are none for everyday social media users or social media platforms. If only journalists who subscribe to professional codes of ethics are accountable for reporting the truth, it will matter little if audiences are not listening.
Re-examining Herman and Chomsky’s famous propaganda model in light of the fake news era prompted an urgent analysis of the impact of fake news on the epistemic reputation of journalism. From this political economic analysis of fake news sites, bot technology, and the Twitter rhetoric of Donald Trump during his presidential election campaign and presidency, it appears that market forces (including ownership and advertising) and self-censorship (e.g., sourcing and anti-communism) are still significant elements in the application of Herman and Chomsky’s propaganda model. Whether or not there was malicious political intent at play in the production of fake news stories to sway voters’ opinions (e.g., reports about Russians micro-targeting key voting districts in Wisconsin, Michigan, Ohio, and Pennsylvania with fake news stories on Facebook), there was nonetheless a commercial incentive to produce stories (fake or not) with salacious headlines that would cater to confirmation bias and, more importantly, draw “clicks” and, consequently, advertising dollars for the producers of such content. The clicks may help measure cash, but not quality or truth.
Fake news, bots, and doublespeak about what is and isn’t “fake news” may have exposed a fatal flaw in the metaphor, as the truth doesn’t necessarily emerge in a bot-generated barrage of sophistic tweets, posts, and memes. Furthermore, Brandeis asserted that the news media is key to fostering an educated and well-informed public to make the marketplace of ideas work. In the post-truth world, though, journalism’s epistemic status is diminished, as a news audience with appropriate critical literacy skills is needed before journalism can perform its epistemic function.
Sue Robinson has suggested that journalists expand their productive space beyond newsrooms by “building presence in all of the citizen-dominated spaces of the Web; instead of using Twitter or Facebook to merely link back to homepages, move into these spaces as new homes to create fully operational news realms outside of the traditional singular home page.”54 This approach is also problematic, as it too easily invites the notion that “we’re all journalists now,” and therefore, every so-called journalist is as equally credible as any other. Thinking that because we all have access to social media we are all journalists is likely to beget more confusion about what is and is not journalism.
Just because one has the tools to tell stories, does not necessarily mean that one can effectively practice the craft. We posit here that we need to reclaim journalism as an epistemic community. Good investigative journalism will allow us to derive the meaning of what we are reading from what otherwise might seem to be isolated bits of information.
We also need a news audience that appreciates and supports journalism, and values accurate and credible information. Audiences need to know their sources of information and take responsibility for what they communicate to others, whether it is a share, a like, or a re-tweet. Bots, algorithms, corporate powers, and political interests manufacture versions of reality as truth, and we too often swim through these uncritically. For instance, Sam Wineburg and colleagues found in a study of students ranging from middle school through college that participants were typically unable to understand the differences in credibility among online information sources or tell the difference between advertisements and news stories.55 Alison Head and colleagues found that nearly half of college students did not feel comfortable telling the difference between real and fake news on social media; and even more concerning, 36 percent distrusted all media because of the possibility of misinformation.56
As Herman and Chomsky noted, one of the strengths of the U.S. media system is that there is space for dissent from popular governmental narratives, although it is relegated to the “back pages of the newspapers,” so to speak, and discoverable only to the most “diligent and skeptical researcher.”57 Still, there is capacity within the system for the volume of facts to expand. The problem, of course, is that this will matter very little unless facts are given proper attention in terms of “placement, tone and repetition,” as well as appropriate context within the media system. Moreover, there has to be a critically minded and journalistically literate news audience to interpret and digest the meaning of those facts.
Too often society seems to look for technological solutions to the problems technology creates. For instance, it has been suggested that algorithms may be created to detect fake news. However, as professors Gary Marcus and Ernest Davis explained in a recent New York Times op-ed, the idea that artificial intelligence platforms would be able to detect fake news “would require a number of major advances in A.I., taking us far beyond what has so far been invented.”58 Moreover, algorithms that would be employed to detect fake news are imperfect when it comes to the detection of context and nuance, resulting in “panoptic missorts” with troubling results.59
Social media, social problems, and social justice
Given the political economic limitations of the digital marketplace of ideas described in this chapter, particularly related to national politics, questions must be raised about how social media is prone to manipulation around social problems and social justice efforts. Just as fake news played a critical role in the 2016 presidential election, similar concerns are present in the realm of social justice: the largest Black Lives Matter page on Facebook was found to be fake.60
In January 2019 a fake account on Twitter inflamed controversy around a Catholic high school in Covington, Kentucky, with a viral video showing students wearing “Make America Great Again” hats while confronting a Native American man in Washington, D.C.61 The initial tweet went out from an account that appeared to belong to a San Francisco-area schoolteacher named “Talia” with the handle @2020fight.62 The tweet stated, “This MAGA loser gleefully bothering a Native American protestor at the Indigenous Peoples March,” and included a tightly edited video of the confrontation. That infamous tweet received at least 2.5 million views, over 27,000 likes, and over 14,000 retweets, according to a USA Today analysis of over 3 million tweets and thousands of Facebook posts that went out in the moments after the original video.63 While the account purportedly belonged to a California teacher, Twitter later determined that it actually originated in Brazil and had an inordinate number of followers (approximately 40,000) for a non-celebrity, similar to the fake “Jenna Abrams” account described earlier.
Another suspected fake account (this one on Facebook) also helped to fan the flames of the same Covington Catholic controversy. In the early morning hours of January 19, a Facebook page called “Real Mexican Problems” posted the same video and attracted over a million views, as well as over 20,000 shares.64 This Facebook page was created in 2013 with a self-described mission to “abolish white supremacy.” However, the contact information for the page listed a phone number for the White Knights of the Ku Klux Klan, an obvious red flag regarding its authenticity.
Later, another (and much longer) video emerged from a group of Black Hebrew Israelites, who were present during the incident on the National Mall in Washington, and it provided more details, context, and nuance than the original video that went viral. This longer video, which emerged after national outrage had focused on the Covington Catholic High School students, contradicted claims made in the original viral post: the Black Hebrew Israelites were taunting both the Native Americans and the Covington students before the Native American man (later identified as Nathan Phillips) was seen marching into the crowd of Covington students.
Another way to look at the incident that took place on January 18, 2019 is that there were three different groups of people (high school students from a conservative Catholic high school in Kentucky, Native Americans, and Black Hebrew Israelites) from widely different cultures, with contrasting worldviews, exercising their First Amendment rights on the National Mall in ways that (in at least two cases) were patently offensive. Absent social media and the original viral video, this would not have been a news story.
With social media, however, a fake account was able to ignite national outrage over the incident by presenting it within a narrow frame of cultural politics. When the later video emerged showing that the students were not the only aggressors in the incident, it did little to quash widespread anger at them, as individual perceptions of the “truth” of the event had already been manufactured through social media feeds.
Accordingly, the next chapter examines the manipulation of social justice activities on social media, and its engagement with national cultural politics throughout the summer of 2020 leading up to the U.S. Capitol riot on January 6, 2021.