How the Current Social Media Legislative Landscape Impacts Youth Mental Health


Social media content moderation: a brief for clinicians.


Social media apps hold massive power and influence. Their effects on youth mental health are complex, multiple, and individual-dependent. While these platforms can be great places for connection and self-expression, they can also be places where youth encounter harmful content. Questions have recently surfaced ranging from who can exert content moderation, to who has access to the data of billions of users, to whether the apps should be banned altogether. The Supreme Court cases involving Florida's and Texas's attempts to restrict content moderation by the platforms are highly relevant to child psychiatrists because of the complex relationship between social media use and youth mental health. We will attempt to provide historical context for this very layered topic.

What Is the Current State of Social Media Content Moderation?

Instagram recently introduced new features that let users change the level of “fact-checking” applied to the content they are presented with. For a user who chooses the highest level of “fact-checking,” content that has been deemed “false, partly false, altered or content with missing context” will appear lower in that person’s feed. Similarly, users can control the amount of “sensitive” content they are presented with in their feeds. These new features were added on top of Meta’s existing community standards.1 Social media platforms delineate in their site policies what types of content will be taken down. Meta’s site policies indicate that threats, “content that promotes, encourages, coordinates, or provides instructions for suicide, self injury or eating disorders,” hate speech, misinformation that leads to violence, harmful health misinformation, and misinformation that interferes with a person’s ability to vote (among other things) violate its community standards and will be taken down. Additionally, in February 2024, Instagram announced “we…don’t want to proactively recommend political content from accounts you don’t follow,” and indicated that it “won’t proactively recommend content about politics on recommendation surfaces across Instagram and Threads”; however, users can opt back in to receiving political content if they so choose.2 These changes generated a great deal of public conversation.

How Are These Community Standards Applied and Whose Content Gets Removed?

A 2021 study identified 3 groups of social media users who experienced content and account removals most often: politically conservative participants, transgender participants, and Black participants.3 The researchers noted that “conservative content removals in the dataset were more likely to represent true positives: content that violated site policies, and thus was correctly removed,” whereas marginalized social media users were more likely to experience incorrect removals of content that expressed their marginalized identities without violating site policies. The harmful phenomenon of users flagging content for removal because it violates their personal, prejudiced norms leads to removals that privilege majority identities and experiences, worsening systemic inequities.

Social media platforms still have a long way to go in this regard. When it comes to censoring or not censoring information, it is important to learn from the history of entertainment media censorship. That history shows that both the absence and the imposition of censorship have seriously harmed historically, persistently, or systemically marginalized (HPSM) groups. In 1915, the National Board of Censorship approved “The Birth of a Nation”; the lack of censorship in this case spread hate and racism. In 1930, the Motion Picture Production Code created 12 categories of censorable subjects, which included interracial and LGBTQ+ relationships; the imposition of censorship in this case spread hate and racism.

Content moderation is important, but it must be ethical and just. Social media companies are under pressure from their consumers to create safe online spaces. It is paramount to elevate HPSM groups in this work to ensure that they feel safe and have opportunities in these spaces.

Why Does This Matter for Youth Mental Health?

If content promoting suicide, self-harm, or eating disorders, along with racist content and hate speech, is allowed to remain on these platforms, youth mental health will suffer. Youth themselves expressed concerns about this type of content in a 2022 study.4 When asked about their social media experiences, here are a few examples of what they had to say:

  • “People were being hateful and I could not take it.”
  • “There is a lot of hate and bigotry online and I can only take so much”
  • “I had to delete my Facebook because of bigoted relatives and feeling like I was surrounded by homophobia online”
  • “do not fall into toxic traps! there is so much pro-eating disorder, pro-self-harm content out there”
  • “Listen, I know that parental supervision does not sound cool, but I grew up with zero parental supervision on the internet at a young age and it really messed me up. I saw a lot of things and experienced things that I wish I did not.”

These are all quotes from youth who experienced social media in its present form, with the existing content moderation in place. Furthermore, there is variability in youths’ insight into how social media affects them, as well as in their knowledge of potential changes to make. There is also great variability in youths’ ability to discern true information from mis- or disinformation. The National Academies of Sciences, Engineering, and Medicine Consensus Study Report on Social Media and Adolescent Health5 includes Recommendation 6-1, which states: “Media literacy education currently suffers from scarce funding, uneven content, poorly qualified instructors, and a lack of reference standard. All four of these issues should be addressed in a comprehensive national media literacy education program prioritized for all children in grades K through 12.” Access to media literacy education for a child in 2024 is paramount. Such education could have large impacts on both public mental and physical health, and it should be available equally to all youth. Unequal distribution of media literacy education could widen disparities and lead to a more polarized society.

What Are the Impacts of Social Media-Based Racism and Discrimination on Youth Mental Health?

While many different types of social media content can be harmful to youth mental health, the literature shows specific negative effects of social media-based racism and discrimination. Exposure to individual and vicarious discrimination on social media has been found to exacerbate depressive symptoms and substance use problems in youth, with particular impact on adolescents of color.6 While HPSM adolescents may go online to seek connections with peers, especially when such connections are inaccessible in person, they may also encounter negative stereotypes, prejudice, and potentially traumatizing experiences. In effect, young HPSM individuals face an additional venue for exposure to racism, discrimination, and bullying.

The harms associated with social media exposure for specific youth groups are highlighted in the 2023 US Surgeon General’s Advisory on Social Media and Youth Mental Health.7 The report points to the relationship between social media-based cyberbullying and depression among children and adolescents. In particular, adolescent girls, sexual minority youth, and transgender youth are disproportionately impacted by online harassment, cyberbullying, and abuse, which are associated with negative emotional impacts including sadness, worrying, and anxiety. A study of exposure to online hate messaging revealed associations between these online experiences and depression and social anxiety across adolescents.8 Adolescents of color experience online racial discrimination at high rates, and this discrimination is associated with depression, anxiety, and trauma-related symptoms.9 Another study demonstrated significant mental wellness disparities among sexual minority adolescents, including depressive symptoms, loneliness, and self-harm, as well as significantly more exposure to self-harm content in their peers’ posts compared with heterosexual participants.10 Adolescent girls are particularly vulnerable to the negative consequences of social media use, including sleep disturbance, poor body image, disordered eating, low self-esteem, and depressive symptoms.

Why Does This Matter for Society?

Appropriate, ethical, and just content moderation leads to a safer community.

When conspiracy theories are allowed to spiral out of control, the risk of real physical harm is only a post of a home address away. This is what happened to the Benassis, who were falsely accused of starting the COVID-19 pandemic in the spring of 2020.11 Similarly, when dangerous social media challenges12 are not taken down by the platforms, youth who participate in them may be seriously injured, develop health problems, or die.

While some incorrect information is shared accidentally online (misinformation), other incorrect information is spread purposefully (disinformation), and some conspiracy theories are intentionally created. Published as part of Harvard’s Technology and Social Change Project, “The Media Manipulation Casebook”13 describes the life cycle of intentionally created conspiracy theories. These conspiracy theories are crafted to gain political power at great cost to the general public, including those who fall victim to believing the lies. The cases published on this website are eye-opening and show the extent of harm generated by conspiracy theories, especially those created in the context of the COVID-19 public health emergency.14

Censorship, or the lack of it, can have very real effects on youth and society at large. Social media companies must do better with their own content moderation, but a complete absence of content moderation on these platforms would have very detrimental effects for youth and society.

How Can Social Media Companies Do Better?

As we have tried to demonstrate, the topic of social media content moderation is incredibly layered and complex. The history of entertainment media censorship can serve as a guide: we must not perpetuate a cycle in which censoring, or declining to censor, content disadvantages or harms HPSM groups. Organizations such as the Center for Countering Digital Hate have recommended that social media companies be required to conduct regular risk assessments of their products, policies, and processes, paired with greater transparency for the general public.15

What Can We Do as Clinicians?

Sometimes we become aware of concerning trends, challenges, content, or hashtags when patients or families tell us about them. Such content likely violates the app’s policies and would be taken down if reported. We can help prevent future harm by contacting these companies so they can remove the content. Reporting directly to the platform (either through the app’s reporting feature or via the contacts listed on the platform’s website) may be preferable to raising awareness about a concerning trend or hashtag through news media, because youth consume that same news media and may seek out the harmful content as a result of the exposure.

For tips related to exploring social media use with patients in the clinical setting, consider reading “Social Media Use and Youth Mental Health: A Guide for Clinicians Based on Youth Voices.”

Dr Harness is a clinical assistant professor in the Department of Psychiatry at the University of Michigan. Dr Anam is an associate professor of Psychiatry and Behavioral Neuroscience at the University of Chicago.

References

1. Facebook Community Standards. Meta. Accessed June 6, 2024. https://transparency.fb.com/policies/community-standards

2. Continuing our approach to political content on Instagram and Threads. Instagram. February 9, 2024. Accessed June 6, 2024. https://about.instagram.com/blog/announcements/continuing-our-approach-to-political-content-on-instagram-and-threads/

3. Haimson OL, Delmonaco D, Nie P, Wegner A. Disproportionate removals and differing content moderation experiences for conservative, transgender, and Black social media users: marginalization and moderation gray areas. Proc ACM Hum-Comput Interact. 2021;5(CSCW2):1-35.

4. Harness J, Fitzgerald K, Sullivan H, Selkie E. Youth insight about social media effects on well/ill-being and self-modulating efforts. J Adolesc Health. 2022;71(3):324-333.

5. Social Media and Adolescent Health Consensus Study Report Highlights. National Academies of Sciences, Engineering, and Medicine. 2023. Accessed April 2, 2024. https://nap.nationalacademies.org/resource/27396/Highlights_for_Social_Media_and_Adolescent_Health.pdf

6. Tao X, Fisher CB. Exposure to social media racial discrimination and mental health among adolescents of color. J Youth Adolesc. 2022;51(1):30-44.

7. Social Media and Youth Mental Health: The U.S. Surgeon General’s Advisory. US Department of Health and Human Services; 2023. Accessed June 6, 2024. https://www.hhs.gov/sites/default/files/sg-youth-mental-health-social-media-advisory.pdf

8. Hernandez JM, Charmaraman L, Schaefer HS. Conceptualizing the role of racial–ethnic identity in U.S. adolescent social technology use and well-being. Transl Issues Psychol Sci. 2023;9(3):199-215.

9. Tynes BM, Willis HA, Stewart AM, Hamilton MW. Race-related traumatic events online and mental health among adolescents of color. J Adolesc Health. 2019;65(3):371-377.

10. Charmaraman L, Hodes R, Richer AM. Young sexual minority adolescent experiences of self-expression and isolation on social media: cross-sectional survey study. JMIR Ment Health. 2021;8(9):e26207.

11. O’Sullivan D. Exclusive: she’s been falsely accused of starting the pandemic. Her life has been turned upside down. CNN. April 27, 2020. Accessed June 6, 2024. https://www.cnn.com/2020/04/27/tech/coronavirus-conspiracy-theory/index.html

12. Dangerous social media challenges: understanding their appeal to kids. HealthyChildren.org, American Academy of Pediatrics. 2023. Accessed June 6, 2024. https://www.healthychildren.org/English/family-life/Media/Pages/Dangerous-Internet-Challenges.aspx

13. Media Manipulation Casebook Methods. Harvard Kennedy School Shorenstein Center on Media, Politics, and Public Policy. Accessed June 6, 2024. https://mediamanipulation.org/methods

14. Nilsen J. Distributed amplification: the plandemic documentary. Media Manipulation Casebook. October 7, 2020. Accessed June 6, 2024. https://mediamanipulation.org/case-studies/distributed-amplification-plandemic-documentary

15. Deadly by Design. Center for Countering Digital Hate. 2022. Accessed June 6, 2024. https://counterhate.com/wp-content/uploads/2022/12/CCDH-Deadly-by-Design_120922.pdf
