Oral

Position: Generative AI Regulation Can Learn from Social Media Regulation

Ruth Elisabeth Appel

West Ballroom A
Oral 2B Positions: AI Regulation and Safety
Tue 15 Jul 3:30 p.m. — 3:45 p.m. PDT

Abstract:

There is strong agreement that generative AI should be regulated, but strong disagreement on how to approach regulation. While some argue that AI regulation should mostly rely on extensions of existing laws, others argue that entirely new laws and regulations are needed to ensure that generative AI benefits society. In this position paper, we argue that the debates on generative AI regulation can be informed by evidence on social media regulation. For example, AI companies have faced allegations of political bias which resemble the allegations social media companies have faced. First, we compare and contrast the affordances of generative AI and social media to highlight their similarities and differences. Then, we discuss four specific policy recommendations based on the evolution of social media and their regulation: (1) counter bias and perceptions thereof (e.g., via transparency, oversight boards, researcher access, democratic input), (2) address specific regulatory concerns (e.g., youth wellbeing, election integrity) and invest in trust and safety, (3) promote computational social science research, and (4) take on a more global perspective. Applying lessons learnt from social media regulation to generative AI regulation can save effort and time, and prevent avoidable mistakes.
