Digital World

Where Online Meets Offline

Social Media's Double-edged Sword

Young people have grown up alongside social media platforms, and many have never known a world without them. Teens and young adults are often highly intentional about what they post on social media and how they present themselves on each platform. Social media allows users to carefully curate content that follows a certain narrative and builds an online identity. However, this on-screen identity can differ from the one users portray offline.

In the digital world, people can choose to use their real names and have public profiles, or they can remain anonymous, posting content while hiding behind a screen. Anonymity and crafted online identities can make people feel more empowered to speak their minds and be themselves when they do not feel comfortable doing so in real life. This is what cyberpsychology experts call the online disinhibition effect: the lack of restraint people feel when interacting online as opposed to in person.

Most people’s online disinhibition is benign. Social media users benefit from the anonymity the internet offers and use it to express themselves more openly and form interpersonal relationships. People who are shy or have social anxiety find it much easier to make friends and be more vocal on the internet, where they can reveal parts of themselves without fear of judgment. The most popular social media platforms among Gen Z are the ones that serve precisely this purpose, providing a space for young people to express themselves in formats that are quick, simple, and visual.

TikTok teens

A 2022 Pew Research Center survey of teenagers’ social media usage showed 95% of U.S. teens use YouTube, 67% TikTok, and 62% Instagram. By comparison, only 32% of them use Facebook and 23% Twitter. In the U.S., 80% of TikTok’s users are between the ages of 16 and 34, and globally that figure is almost 70%. What initially made TikTok so popular among younger audiences was its short 30-60 second video format, which Instagram later emulated with Instagram Reels in order to compete with the app. TikTok videos can now be up to 10 minutes long, but an internal survey showed that shorter videos are still favored, as users found content lasting longer than a minute stressful. The short format makes content more accessible and easy to consume, while allowing creators to quickly produce videos in response to cultural trends and global news, ensuring the content stays fresh and relevant.

The other feature that contributes to TikTok’s success is its highly personalized algorithm, which serves people with shared interests the same types of content. This filtering process creates subcommunities within the app. These subcommunities, ranging from BookTok to DisabilityTok to MentalHealthTok, are places where users can connect with others they relate to and find support. In short, the subcommunities have become a safe space for many users, particularly those from underrepresented and marginalized communities. For example, TikTok user @shinanova uses her platform to talk about Indigenous Inuit culture and spread awareness about issues that are prevalent in her community. Another user, @abbeysmom17, joined the platform to help her daughter share her experiences as an adult on the spectrum. Using the hashtag #everydayautism, Abbey posts videos in which she talks about her special interests and how she overcomes daily struggles. In one video, Abbey says she likes being on the app because she feels her videos can help a lot of people. In the comments, an anonymous user writes, “I am autistic and a bit different than Abbey but her ways of communicating have helped me a lot and I feel more confident being myself.”

What is posted online can take root offline.

A double-edged sword

But anonymity on the internet can be harmful when users wield it with more sinister motives. This is what cyberpsychologists call toxic online disinhibition. Its effects range from cyberbullying and harassment to the formation of online communities around beliefs that are not widely accepted by the rest of society. Anonymous online forums, such as Reddit, Telegram, and BitChute, have become popular among fringe groups that come together to share controversial opinions and beliefs, knowing they can remain invisible behind their screens. The conversations on these platforms range from hateful remarks about various social groups to anti-establishment and anti-vaxxer rhetoric to conspiracy theories.

One example is the incel subculture, an online community of predominantly young men who self-identify as involuntary celibates and blame women for their inability to attract a romantic or sexual partner. These men use social media platforms to find anonymous support for their deeply rooted belief that women hold too much power in relationships. In doing so, many of them express views that are hostile, and even violent, toward women. The biggest issue with these online groups is that what is posted online can take root offline: beliefs reinforced in the incel community have motivated deadly attacks against women. In 2021, a man named Tres Genco, who self-identified as an incel, was charged by a federal grand jury with plotting a mass shooting of women at a sorority in Ohio. Genco was a frequent poster on popular incel websites and compared his planned attack to that of Elliot Rodger, a 22-year-old man who killed six people and injured 14 others in a 2014 attack that targeted a University of California, Santa Barbara sorority house. Right before the shooting, Rodger uploaded a video to YouTube detailing his plans and motives, explaining that he wanted to punish women for not being attracted to him and sexually active men for “living a better life” than him.

Another example comes from the far-right online communities and election conspiracy theorists who were convinced that the 2020 presidential election was stolen from Donald Trump. The disinformation that conservative television networks broadcast leading up to and in the days following the election was amplified on social media, where unfounded stories of voter fraud were shared across platforms. The disinformation campaign was so effective that, according to MIT professor David Rand, 77% of Trump voters believed in widespread voter fraud. Just as online incel communities can lead to real-life crimes against women, rampant election disinformation on social media contributed significantly to the January 6 Capitol insurrection.

Regulating online lives

It seems that social media is part of our lives for good. There are admittedly many benefits to these platforms and their ability to bring people together, but how can policymakers effectively mitigate the harmful impact of toxic online disinhibition? The U.S. has made some progress on protecting children online, holding several Congressional hearings in the past two years to better understand how to protect children’s privacy and shield them from content that is inappropriate for their age and harmful to their mental health. At the time of writing, Congress has not yet found a workable solution for regulating hate speech and illegal content more broadly, nor for preventing toxic online groups from forming or from committing crimes in the physical world.

Even the European Union, a leader in online content regulation, has struggled to police anonymous and encrypted social media platforms. In Germany, one of the EU member states with the strictest internet censorship laws, the government is struggling to limit the reach of Telegram, an online platform popular among far-right extremists, terrorist groups, and conspiracy theorists. In 2020, COVID-19 misinformation and conspiracy theories shared on Telegram sparked mass protests across Germany. Over time, according to openDemocracy, the COVID-19 conspiracy movement has become more radicalized and has turned toward the far right, which is cause for concern for German lawmakers.

The EU’s Digital Services Act (DSA), passed in 2022, is the closest thing Europe has to cohesive online platform regulation. The legislation makes what is illegal offline, such as terrorist content or hate speech, illegal online as well. The DSA aims to limit the spread of disinformation and harmful content by requiring online platforms to conduct yearly risk assessments of the content hosted and circulated on their services, as well as provide data about how their algorithms are used. The legislation is seen as a game changer for internet regulation, but there is skepticism about the DSA’s implementation and enforcement. There are still details to hash out regarding how the platforms will conduct the risk assessments, who will be responsible for auditing, and, since the platforms are the ones paying the auditors, how to avoid conflicts of interest.

Internet regulation is complex, and there is no quick fix to this multi-layered issue. It remains difficult to determine what exactly qualifies as harmful content. Another challenge is drawing the line between online discussions that are concerning and potentially harmful in the offline world, and those that are outright illegal. This is especially a topic of debate in the U.S., where free speech is protected under the First Amendment and social media companies cannot be held liable for user-generated content circulating on their platforms, thanks to the protections provided by Section 230 of the Communications Decency Act.

“In the U.S., social media companies cannot be held liable for user-generated content circulating on their platforms.”

The future of digital life

Even with new legislation like the DSA, it is doubtful that regulation alone can curb the spread of online disinformation and the creation of toxic online communities. This will take a comprehensive societal approach, in which governments, social media companies, and civil society work together to find workable solutions. Policymakers and their staffers, particularly in the U.S., need to understand the different types of risk associated with online platforms, not just the national security threats. During the recent Congressional hearing with TikTok CEO Shou Zi Chew, lawmakers focused primarily on TikTok’s Chinese parent company and the national security risks posed by foreign technology products. The risks the app poses to teenagers’ mental health, through the number of hours they spend on it and their access to potentially harmful content, were far less prominent in the discussion.

In order for strong internet regulation to be drafted in the U.S., Members of Congress need to develop a deeper understanding of how anonymous groups operate on social media platforms, the role algorithms play in feeding users harmful content, and the new risks posed by AI, starting with the large language models behind ChatGPT. Legislation like the DSA, despite its skeptics, can serve as a benchmark for U.S. lawmakers as they draft federal online content regulation. A second step would be for Congress to reform Section 230 to address the unchecked spread of illegal or toxic online content. There should also be an incentive mechanism for social media companies to improve their content moderation and removal practices, especially on platforms that are popular among fringe groups and thrive on online anonymity. These actionable steps could mark the beginning of a brighter future online. Social media is here to stay, but that does not mean we cannot develop a healthier relationship with it.

Originally published
in Transponder Issue #4: Identity
Jun 20, 2023

Daniela Rojas Medina

Research Analyst
Bertelsmann Foundation

daniela.medina@bfna.org