The Political Interplay of AI, Democracy, and Creativity
NEHA JAMPALA: On April 9, 2024, the Massive Data Institute and the Tech & Public Policy program at Georgetown University’s McCourt School of Public Policy hosted the final installment of the “AI & Me” series. The series culminated in a panel of experts discussing the nuances of artificial intelligence in creativity and content. The event, part of Tech & Society Week, offered insights into how AI is reshaping the landscape of creative industries and daily content consumption.
Dr. Soyica Diggs Colbert, Vice President for Interdisciplinary Initiatives at Georgetown, moderated the panel, which featured Jacqueline Assar, a Spatial Computing & AI Innovator; Dr. Sarah Adel Bargal, Assistant Professor of Computer Science; Dr. Laura DeNardis, Professor and Endowed Chair in Technology, Ethics, and Society; and Henry Lee, MFA, an Interdisciplinary Designer from Parsons School of Design.
Each panelist offered insights into how AI is reshaping journalism, art, and, more broadly, content creation. Dr. Laura DeNardis expressed concerns about the proprietary nature of AI technologies and their opaque algorithms, emphasizing the need for transparency to ensure they serve democratic values rather than undermine them.
“We can’t tell the House of Representatives, for example, to use a large language model or a database that is not completely transparent in its provenance... So I really worry about this, like, do we need to create a large language model in democracies that is sort of the frontier model for democracies that competes with the private sector side?”
Jacqueline Assar highlighted AI’s potential as a collaborative tool in creative work, opening new possibilities for personalized learning and artistic expression in educational and professional settings. She pointed out that AI could democratize creative opportunities by lowering barriers to entry for people across socio-economic groups, helping to bridge the knowledge gaps that accompany income inequality.
Dr. DeNardis also touched on the regulatory and ethical challenges posed by AI-generated content, including deepfakes, and the need for robust legal frameworks to manage these issues effectively. She suggested that cybersecurity measures could play a crucial role in safeguarding democratic processes from manipulation through AI technologies.
“Cybersecurity applied to that has a lot of promise, you know, using different kinds of public key cryptography applied to this, and other approaches. And so that’s why I always say cybersecurity is the great human rights issue of our time,” Dr. DeNardis said.
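For readers curious what “public key cryptography applied to this” can look like in practice, the sketch below (not from the panel, and purely illustrative) shows how a publisher might digitally sign a piece of content so that anyone holding the matching public key can detect tampering. The Python cryptography library, the sample message, and the key handling are all assumptions made for the sake of the example.

```python
# Illustrative sketch: signing content so its provenance can be verified later.
# Assumes the `cryptography` package (pip install cryptography); the keys,
# message, and workflow are hypothetical examples, not the panel's proposal.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The publisher generates a key pair once and shares only the public key.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

# Sign the content before distributing it.
content = b"Official statement released April 9, 2024."
signature = private_key.sign(content)

# Anyone holding the public key can confirm the content was not altered.
try:
    public_key.verify(signature, content)
    print("Signature valid: content matches what the publisher signed.")
except InvalidSignature:
    print("Signature check failed: content may have been altered.")
```

In a provenance scheme along these lines, the hard problems are less about the mathematics than about distribution: who issues and vouches for the public keys, and how ordinary readers encounter the verification result.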
Continuing the conversation on the ethical considerations of deploying AI tools, Dr. Sarah Adel Bargal focused on the inherent biases that AI content generation can carry.
“There are different ways to mitigate bias... One of the most important aspects of bias is having technology teams that reflect diversity, as opposed to, you know, more homogeneous teams,” said Bargal.
Henry Lee reflected on the integration of AI into creative processes, assessing it as better suited to being a creative partner than a substitute. “One of the things that I’ve learned about AI is it doesn’t yet know how to create for humans, and a lot of creativity has to do with relatability.”
The speakers concurred that while AI presents unprecedented opportunities for innovation and efficiency in creative industries, it also demands careful attention to ethical standards and regulatory measures so that it enhances rather than detracts from human creativity and societal values. The discussion marked a significant step in ongoing conversations about the future of AI in public life, emphasizing the importance of inclusivity and transparency as society forges ahead into uncharted and rapidly advancing technological terrain.
Reflecting on the discussions, I find myself deeply concerned yet cautiously optimistic. AI’s potential to enhance creative processes is undeniable; Assar’s vision of AI as a tool for democratizing access to creativity resonates with me. She articulated how AI could act as an equalizer in education and professional fields by providing personalized learning opportunities and support: “If you had AI be able to catch exactly where a student is making a mistake and then give them in real time that advice and that correction in a supportive way... that could make a massive difference.”
However, the political ramifications of AI, particularly around issues of transparency and accountability, loom large. I am particularly concerned about AI tools becoming integral to our societal infrastructure without robust governance frameworks in place. The opacity of AI algorithms and their proprietary underpinnings could erode the values of transparency and accountability on which democracies rely.
Dr. DeNardis particularly emphasized the broader social implications of AI technology: “It's an entire ecosystem. It’s geopolitical as with other technologies… and arrangements of technical architecture are also arrangements of power.”
This panel served as a potent reminder of the dual nature of AI. As we marvel at AI’s capabilities to push creative boundaries, we must also critically engage with its societal implications. It is imperative that we, as a community, advocate for policies that not only foster innovation but also safeguard our democratic values against the encroaching shadows of opaque technological advances. The path forward must involve transparent, inclusive, and robust policy frameworks that hold AI development accountable and ensure it serves the public good, not just private interests.
Neha Jampala is a freshman studying Political Economy in the College and edits for OTR’s Campus section. She is interested in global trade and the intersections of economic policy and healthcare equity.