“Age of Disenlightenment” — Fake News, Echo Chambers, Deepfakes & The Infocalypse

TobiasMJ
Published in DataDrivenInvestor
8 min read · Dec 30, 2021


At a moment of rampant disinformation and conspiracy theories juiced by algorithms, we can no longer turn a blind eye to a theory of technology that says all engagement is good engagement.

- Apple CEO Tim Cook[1]

Introduction

The Age of Enlightenment was a cultural and philosophical movement that spread through Europe and later North America in the 17th and 18th centuries. The movement grew out of the scientific revolution and was led by thinkers such as Voltaire, Immanuel Kant, John Locke, and Isaac Newton. Among its core ideas were:

  • all people can think for themselves,
  • everybody in a society should enjoy the same rights, and
  • a society should be founded on freedom, democracy, and reason.

I argue that we have now entered a new era, “the age of disenlightenment”: an era where the lines between fact and fiction, reality and fantasy, are becoming increasingly blurred. Essentially, we are heading towards a future where information can no longer be trusted.

Policymakers are having a hard time identifying and tackling the novel dangers posed by new information technologies, AI, and machine learning. Bigger forces are at play than humanity is used to. However, awareness is growing. For example, the World Economic Forum has listed “Critical Information Infrastructure Breakdown” as a global risk[2], and the European Commission has proposed a new framework to regulate the risks of artificial intelligence.[3]

It’s impossible to cover the scope of these problems in a single blog post, or even a series of blog posts. That is why I have dedicated a substantial amount of time to researching and writing about these topics, and will continue to do so in the coming years. In this post, I will go over some of the fundamental threats we see right now.

The Fundamental Threats

Freedom of expression, the right to receive and impart information, and the right to privacy are constitutional pillars of any democracy. Without these rights, the liberal idea that “each man is the architect of his own fortune” is thrown out with the bathwater. Individuals and groups cannot thrive in a society where they cannot speak freely, where they cannot receive and impart information without interference from authorities, or where they are put under surveillance.

Historically, human rights laws were designed to protect individuals from unjust interference in their lives by public authorities. Nowadays, in the developed part of the world, the biggest threat is posed not by tyrannical rulers but by big technology companies that de facto serve the role of governments. Because companies like Google, Facebook, Amazon, Microsoft, and Apple are able to extract and use colossal amounts of user data, and to some extent control the flow of information, they have amassed far more power than anyone would have predicted.

In addition, the universal dependency on social media, the subtle spread of misinformation and polarization, and new technologies developing in the fields of AI and machine learning could incrementally eat away at the foundations of democracy.

Fake News

In ancient times, local rumors travelled through villages from mouth to ear. Although human evolution pretty much stopped 10,000 years ago[4], our means of communication have drastically changed. Thanks to the internet, a rumor, whether false or true, can now reach thousands or millions of people instantaneously.

The vast majority of young Europeans and Americans rely on social media as their primary or only news outlet.[5] That is problematic, since much of the information on social media ranges from unintentionally inaccurate (misinformation) to deliberately misleading (disinformation). The younger generation especially, brought up with social media, is in danger of becoming incrementally misinformed and confused about the world they live in.

I have already written about fake news and how false rumors have been shown to travel faster and farther than true rumors on a platform like Twitter. I believe there is a deep-seated psychological trait in humans that causes us to gravitate towards information that invokes strong feelings in us, rather than towards factually accurate but less interesting information. In many instances, social media algorithms intentionally magnify this trait by nudging us to like, share, comment on, or click provocative or shocking content.

As we know, there is no barrier to entry to becoming a creator on popular social media platforms. Realistically, there is no functional or ethically sustainable way for the platforms, even with sophisticated deep-learning algorithms, to make detailed fact-checks or value judgements of the vast amount of user-generated data. Therefore, the value of information on social media is mainly based on its level of exposure (number of followers and views), not on the content creator’s personal or professional background (qualifications, competencies, authority, or track record), as is often the case when we decide whether to listen to someone in real life. Machine learning algorithms decide which posts are shown to you first, simply by ordering them based on popularity and your estimated interest in them.[6] Essentially, the loudest, most popular voices often dictate the discourse on social media, regardless of their merit.
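To make this concrete, here is a minimal sketch of what engagement-based feed ranking might look like. Everything in it is illustrative: real platforms use learned models over thousands of signals, and the field names, weights, and data below are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    shares: int
    comments: int
    topic: str

def rank_feed(posts, user_interests):
    """Order posts by popularity weighted by estimated user interest.

    Illustrative only: a hand-tuned stand-in for the learned ranking
    models that platforms actually use.
    """
    def score(post):
        popularity = post.likes + 2 * post.shares + 3 * post.comments
        interest = user_interests.get(post.topic, 0.1)  # affinity in [0, 1]
        return popularity * interest

    return sorted(posts, key=score, reverse=True)

feed = rank_feed(
    [Post("Shocking claim!", 900, 400, 250, "politics"),
     Post("Careful fact-check", 40, 5, 3, "politics")],
    user_interests={"politics": 0.9},
)
print([p.text for p in feed])  # the provocative post ranks first
```

Even in this toy version, the provocative post with high engagement outranks the sober fact-check, although both match the user’s interests equally; nothing in the score rewards accuracy.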

Echo Chambers

We’ve already seen how social media algorithms can open the floodgates for a rapid spread of misinformation. Conspiracy theories, such as COVID-19 being a government-developed biological weapon or the 2020 US election being stolen[7], are good examples of how disproven rumors can continue to grow and flourish on social media. How can it be that debunked rumors continue to gain traction so persistently?

The answer is found in the formation of epistemic bubbles and echo chambers on social media.

Epistemic bubbles stem from the “cognitive filter” that helps each of us in daily life to seek out and select information. For example, we read articles that friends re-post on Facebook, watch clips from YouTubers we like, and stay in touch with like-minded peers. Filtering information in accordance with our own opinions and beliefs is completely necessary; otherwise, our brains would be flooded with all kinds of useless information. The problem arises when search engines and social media algorithms track personal information about each user and adapt the user’s online experience to fit their interests.[8] Humans’ healthy cognitive “spam filter” becomes magnified and partly controlled by the algorithms. Consequently, users are exposed to only a one-sided perspective of the world, and relevant viewpoints to the contrary may be hidden from them behind algorithmically selected content.
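A toy model, with numbers invented for the example, shows how this feedback loop can narrow a feed over time: each click reinforces one interest while all others decay, until contrary content falls below the threshold at which it is ever shown.

```python
def update_profile(profile, clicked_topic, lr=0.2):
    """Reinforce the clicked topic's affinity; decay all others.

    Toy stand-in for the interest tracking that personalization
    systems perform; the update rule and rate are invented.
    """
    return {
        topic: min(1.0, affinity + lr) if topic == clicked_topic
        else affinity * (1 - lr)
        for topic, affinity in profile.items()
    }

profile = {"gun control": 0.5, "opposing views": 0.5}
for _ in range(10):  # the user keeps clicking the same kind of content
    profile = update_profile(profile, "gun control")

print(profile)  # ~{'gun control': 1.0, 'opposing views': 0.05}
visible = [t for t, a in profile.items() if a >= 0.3]  # what still gets shown
print(visible)  # ['gun control']: the contrary viewpoint has vanished
```

After ten identical clicks, the bubble has formed: the user never chose to hide opposing views, but the filter no longer surfaces them.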

The good thing about epistemic bubbles is that they can easily burst when a member is presented with relevant information or convincing arguments they have missed out on. Echo chambers, on the other hand, have a more robust, cult-like structure. In an echo chamber, the members of the group share strong beliefs about a particular topic. It could be gun control, anti-vaxxing, CrossFit, or a certain diet plan. Unlike an epistemic bubble, the group actively discredits other relevant sources[9]. Typically, echo chambers develop a private language, full of familiar terms and new jargon, along with counter-explanations for all contrary viewpoints.[10] The private language is used to strengthen the community and deepen its separateness, while the counter-explanations are rehearsed by group members to attack and undermine any opposing fact or opinion they are confronted with.

Epistemic bubbles and echo chambers have been observed to dominate interactions on Facebook and Twitter.[11] The Pizzagate conspiracy theory and the QAnon movement are more extreme examples of how derailed echo chambers can become. Echo chambers in particular can lead to political polarization and extremism in society.

The Infocalypse & Deepfakes

“The Infocalypse” refers to an envisioned point in time where our information ecosystems collapse. US technologist Aviv Ovadya coined the phrase as the title of a presentation he gave to fellow technology experts in the San Francisco Bay Area a few weeks prior to the 2016 US election.[12] Ovadya was one of the first people to address how bad information was overwhelming society, and he asked whether there is a critical threshold at which society will no longer be able to cope.[13]

Misinformation now takes on a whole new dimension with the emergence of “deepfakes”. The term ‘deepfake’ refers to a piece of synthetic media, an image, video, or audio clip, that is either manipulated or wholly generated with deep learning technology.[14] The concept was introduced to the world by, and named after, an anonymous Reddit user who in November 2017 started a subreddit dedicated to posting fake porn videos of celebrities.[15] The same open-source code he used to “face-swap” the faces of celebrities onto the bodies of porn stars is now readily available to anyone with a bit of technical knowledge via software platforms such as DeepFaceLab or Faceswap.
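For readers curious about the mechanics, here is a minimal, untrained sketch of the shared-encoder/two-decoder idea behind such face-swap tools. The layer sizes are purely illustrative; real tools use deep convolutional networks and extensive training on footage of both faces.

```python
import torch
import torch.nn as nn

# One encoder is SHARED between both identities, so its latent
# "face code" captures pose, lighting, and expression. Each identity
# then gets its OWN decoder that renders that code as a face.

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 64 * 3, 512), nn.ReLU(),
            nn.Linear(512, 128),  # shared latent face code
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(128, 512), nn.ReLU(),
            nn.Linear(512, 64 * 64 * 3), nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(z).view(-1, 3, 64, 64)

encoder = Encoder()
decoder_a = Decoder()  # would be trained to reconstruct person A
decoder_b = Decoder()  # would be trained to reconstruct person B

# Training (omitted) reconstructs A through decoder_a and B through
# decoder_b, both via the shared encoder. The swap happens at
# inference: encode a frame of person A, decode with B's decoder.
frame_of_a = torch.rand(1, 3, 64, 64)  # stand-in for a video frame
fake_b_face = decoder_b(encoder(frame_of_a))
print(fake_b_face.shape)  # torch.Size([1, 3, 64, 64])
```

Because the shared encoder learns a common representation rather than identity-specific detail, swapping decoders transfers one person’s pose and expression onto the other’s likeness, which is why these tools need large sets of matching faces rather than working “from scratch”.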

There is no shortage of satirical, rather disturbing deepfake videos online. Usually, it’s relatively easy to tell they are fake, as the lip-synchronization and facial expressions often seem a bit off, or there is a flicker around the edges of the person. One of the better demonstrations of deepfakes is “Sassy Justice”, a series of skits made by South Park creators Trey Parker and Matt Stone with actor Peter Serafinowicz in the run-up to the 2020 US election. Another convincing example is the deepfake version of Tom Cruise on the popular TikTok channel deeptomcruise.

Despite the devastating implications deepfake videos could potentially have for society, they have not evolved much since the original r/deepfakes subreddit saw the light of day. It is still not possible to create synthetic videos from scratch without “body donors” and similar faces.[16] However, as machine learning methods evolve and gain momentum, the picture could look different in a few years (pun not intended). We have already seen how fake journalist personas with AI-generated profile images are used to defame public figures and spread fake news. Language models like GPT-3 are able to generate texts that are indistinguishable from human writing. Without the right precautions, and with new quantum leaps in machine learning, the infocalypse could be near…

[1] https://www.reuters.com/article/us-apple-facebook/apples-tim-cook-criticizes-social-media-practices-intensifying-facebook-conflict-idUSKBN29X2NB (20–12–2021).

[2] See http://reports.weforum.org/global-risks-2018/global-risks-landscape-2018/#landscape (28–12–2021).

[3] See https://eur-lex.europa.eu/resource.html?uri=cellar:e0649735-a372-11eb-9585-01aa75ed71a1.0001.02/DOC_1&format=PDF (28–12–2021).

[4] https://www.psychologytoday.com/us/blog/the-scientific-fundamentalist/200810/why-human-evolution-pretty-much-stopped-about-10000-years (29–12–2021).

[5] See https://www.pewresearch.org/journalism/2018/10/30/younger-europeans-are-far-more-likely-to-get-news-from-social-media/ (EU) and https://www.statista.com/statistics/1124119/gen-z-news-consumption-us/ (US).

[6] Noah Giansiracusa (2021), How algorithms create and prevent fake news — exploring the impact of social media, deepfakes, GPT-3, and more, pg. 177.

[7] Ibid.

[8] C. Thi Nguyen (2020), Echo Chambers and Epistemic Bubbles. Episteme, 17(2), 141–161.

[9] Ibid.

[10] Ibid.

[11] Ibid.

[12] Charlie Warzel (2018) Believable: The Terrifying Future Of Fake News -> https://www.buzzfeednews.com/article/charliewarzel/the-terrifying-future-of-fake-news#.nxgZrozEx (20–12–2021).

[13] Nina Schick (2020), “Deep Fakes and the Infocalypse: What You Urgently Need To Know”, pg. 10.

[14] Lisa Onyeak (2021), Deepfakes and Their Relationship With Law and Politics -> https://theradius.eu/deepfakes-and-their-relationship/ (29–12–2021).

[15] Schick (2020), pg. 35.

[16] Martin Anderson, The limited future of deepfakes -> https://rossdawson.com/futurist/implications-of-ai/the-limited-future-of-deepfakes/#_edn77 (30–12–2021).
