Will AI be the opioid crisis of social media?

How can companies promote the use of AI in ways that are safe, effective and responsible? How can they build trust with their key stakeholders and the broader public?

Greg Wallig

6/20/2024 · 3 min read

It was the apology heard around the world.

In a hearing on Capitol Hill, Mark Zuckerberg stood before a group of parents and said he was sorry for what their children experienced on Meta platforms. This week the U.S. surgeon general called on Congress to require warning labels on social media platforms, similar to those now mandatory on cigarette packaging.

If you’ve seen Pain Hustlers, the based-on-a-true-story Netflix movie starring Emily Blunt, you know that in the case of the opioid crisis, government action lagged far behind the wild euphoria that accompanied the discovery, distribution and eventual demise of the opioid industry.

How are these threads connected?

AI moves at lightning speed, and it’s quickly permeating ever more aspects of business and daily life. From government contracting to life sciences, companies in every industry are debating how best to adopt this powerful technology. Likewise, everyone from parents to business leaders (and people who, like me, are both) is understandably concerned about children’s safety.

I was in the Senate Judiciary Committee hearing room that day; I witnessed the painful apology. And in the weeks since, I’ve found myself thinking about what the advance of tools like AI will mean for families, kids and companies.

Will AI be the opioid crisis of social media?

How can companies promote the use of AI in ways that are safe, effective and responsible? How can they build trust with their key stakeholders and the broader public?

As part of my Executive MBA studies at Georgetown University’s McDonough School of Business, my colleagues Anagha Mhatre and Swathi Young and I created an AI governance symposium that tackled these questions. My roles as parent, student and executive met at a crucial intersection, and the path forward was clear:

Proactive governance and education through public-private collaboration.

Here are the key takeaways from our work:

  1. Although use of AI is rapidly increasing, governance fundamentals still apply. The National Association of Corporate Directors (NACD) AI Essentials series outlines the boardroom’s role in AI governance and helps directors become conversant in the basics of the underlying technologies. It then introduces the operational and regulatory matters that can shape an AI oversight plan.

  2. The Global Fund for Children’s Funder Safeguarding Collaborative offers a governance model, a collaboration platform, and tools and resources that funding organizations can use to mandate safeguarding for children in the physical world. We believe this model could be expanded to encompass the digital world.

  3. Economist Impact’s Out of the Shadows Index, supported by World Childhood Foundation USA, was designed to shine a spotlight on how states act, or fail to act, against child sexual exploitation and abuse (CSEA). The Index scores countries on how well they address CSEA. We believe this concept could be applied to social media companies and their use of AI.

  4. Organizations like Thorn, Polaris and the National Center for Missing and Exploited Children have extensive experience using technology to identify perpetrators and victims of human trafficking. We believe these automated tools could be leveraged to systematically flag scenarios where AI is used to accelerate harms to children, and to refer perpetrators to law enforcement.

  5. In 2017, Scouts BSA developed a comprehensive Cyber Chip program to train Scouts in online safety. The training has been incorporated into the Scouting program for grades 1–12, with age-appropriate content at each grade level. We believe this material could serve as foundational educational content administered to all children.

  6. Innovators like Jonathan Bertrand, founder of the Social Media Research Institute, have already pioneered the concept of a Social Media First Aid Kit and related seminars, which investigate social media’s impact on mental health and offer users guidance toward a healthier, more responsible social media culture. With additional awareness and funding, these tools could be incorporated into standard classroom education.

Interested in this topic? Let’s continue the conversation: gwallig@gmail.com. I’ll also use LinkedIn to keep you updated on the AI symposium my fellow researchers and I are developing in collaboration with Georgetown University.