Reflections on the Frontline: Navigating the Policy Maze of Generative AI

Civil society advocates and policymakers discussing ethical regulations for Generative AI to protect public interest and fundamental rights at TACD 2024 annual meeting

In June, TACD hosted a roundtable in Washington, D.C., as part of our annual gathering of the clans. It was one of those rare “multi-stakeholder” events where you feel genuinely inspired, surrounded by people who are equally passionate about steering the future of technology in a people-centered direction. The focus of our discussion? Generative Artificial Intelligence (GAI) – a force of innovation that is raising serious concerns amongst civil society groups.

As we gathered under the protection of the Chatham House Rule, it was clear that everyone around the table was ready to speak candidly. Our mission was to confront the growing array of harms posed by GAI, dissect current policies on both sides of the Atlantic, and brainstorm how we, as a collective, could forge solutions that protect consumers and citizens – particularly the most vulnerable.

The dark side of Generative AI: Unmasking hidden dangers

From the outset, it was impossible to ignore the mounting evidence of GAI’s potential to wreak havoc. The discussion began with a stark enumeration of the risks: economic harms from scams and voice cloning, the psychological toll of AI companions, the abuse of personal data, the spread of misinformation, and the concentration of power in a few hands. These aren’t just theoretical dangers – they’re real and present threats that are already affecting people’s lives. What struck me most was the breadth of these issues.

We weren’t just talking about isolated incidents; we were grappling with systemic problems that could reshape society in ways we barely understand.

As one participant aptly put it, we’re in a race against time – a race to ensure that the technology doesn’t outpace our ability to regulate it effectively.

Legal Limbo: Current policies are falling behind AI advancements

One of the most interesting parts of the conversation revolved around the existing legal frameworks in the U.S. and the EU. On the U.S. side, there’s a patchwork of laws that can address some GAI harms, like the Fair Credit Reporting Act and the FTC’s consumer protection authority. Yet, there’s a glaring absence of a comprehensive federal privacy law, leaving state laws fragmented and insufficient to tackle the challenges posed by GAI.

The EU, with its General Data Protection Regulation (GDPR) and upcoming AI Act, seems to have a more robust framework. But even here, participants highlighted significant gaps. The AI Act, for instance, may not be future-proof enough to keep pace with the rapid evolution of the technology, and its risk-based approach leaves many consumer-facing harms unaddressed.

This policy conundrum raises the question: are both sides doing enough, quickly enough, to protect people? Sure, as people around the table pointed out, we already have enough laws and regulations on both sides to act now rather than wait for more. But more importantly, going forward, are we asking the right questions? Shouldn’t we be considering whether some of these technologies should exist at all, and instead define the basis for their legitimacy?

From profits to people: changing the AI conversation

The current narrative is overwhelmingly focused on the innovation potential of AI – on how it can drive efficiency, boost profits, and add trillions to the economy. But this narrative often sidelines the human cost.

The sentiment in the room was that we need to reframe the conversation, and I agree! It’s not just about what GAI can do, but what it should do. We need to spotlight the real stories of harm – how facial recognition technology affects minority communities, or how AI-generated misinformation can sway public opinion, with effects on democracy itself. This way, we can push for a market that prioritizes the public good over profit margins.

Fighting for fair AI: where civil society steps up

As the conversation drew to a close, the focus shifted to what we, as civil society, can do. It was clear that our role is crucial – not just in advocating for stronger regulations, but in acting as watchdogs, the eyes and ears on the ground: bringing evidence of real cases of harm to the attention of authorities, feeding investigations and even litigation, and pushing the authorities to act. We need to be vocal, relentless, and strategic.

One point that resonated strongly around the table was the need for closer and better collaboration across the Atlantic.

The same tech giants operate on both sides, so why not share intelligence and strategies, and even coordinate actions where they can have the most impact? We’re all fighting the same battle, after all.

Act now or pay later: urgent action is needed

As we left the roundtable and networked over lunch, there was a mix of urgency and cautious optimism. The challenges ahead are daunting, but our discussions gave me hope that, together, we can push for the changes needed to safeguard people’s fundamental rights. The road ahead will require persistent advocacy, smarter regulations, and a collective commitment to ensuring that GAI serves the public interest.

One thing is certain: the time to act is now. We can’t afford to wait until the harms of Generative AI are too entrenched to reverse. It’s up to us – civil society, regulators, and policymakers – to craft a future where technology enhances our lives rather than undermines them.