Generative AI threatens consumer rights

Over the last few months there has been an explosion of services driven by generative artificial intelligence. Consumer organizations in the EU and the U.S. are calling for consumer rights to be at the center of its development and implementation.

Generative artificial intelligence (AI) raises serious concerns for consumers’ rights and safety. The use of this technology creates consumer challenges related to privacy, manipulation, personal integrity, scams, disinformation, and more. These services are also very resource-demanding, which has serious repercussions for the climate and environment.

– We must ensure that the development and use of generative AI is safe, reliable, and fair. Unfortunately, history has shown that we cannot trust the big tech companies to fix this on their own, says Finn Myrstad, Director of Digital Policy at the Norwegian Consumer Council.

Calling on policymakers and enforcement agencies

The Norwegian Consumer Council today published a detailed report, “Ghost in the machine – Addressing the consumer harms of generative AI”, outlining the harms, legal frameworks, and possible ways forward. In conjunction with this launch, the Norwegian Consumer Council and 15 consumer organizations from across the EU and the U.S. demand that policymakers and regulators act.

– The risks from generative AI for consumers are well documented in this report by the Norwegian Consumer Council. Until the EU’s AI Act becomes applicable, authorities need to investigate where new generative AI-driven products and services may be harming consumers and enforce existing data protection, safety, and consumer protection legislation. Companies cannot be absolved from the EU’s existing regulations, nor should consumers be manipulated or misled, just because this technology is new, says Ursula Pachl, Deputy Director General of the European Consumer Organisation (BEUC).

– Technology is not some uncontrollable force, but must be adapted and formed by fundamental rights, regulations, and societal values. We are in the driver’s seat if we choose to be. Many of the challenges we are facing can be tackled using the laws we already have, says Myrstad.

Hand in hand with this, relevant enforcement agencies must have the necessary resources and competence to follow the technological development and to take enforcement action against companies that use generative AI without complying with the law.

Therefore, the report also calls for:

  • Strengthened consumer protections to make the technology safe, reliable, and fair so that consumers are not used as laboratory animals for new technologies.
  • An overarching AI strategy which takes into account recent developments, is centered on fundamental rights, and offers strict guidelines for the use of generative AI in the public sector.
  • Suitable future-proof regulations in instances where existing laws fall short.

The national authorities must act

Creating the best possible international framework will take several years, but this work is essential, and efforts are already underway. For example, on the 14th of June the European Parliament agreed on its position on a proposal for regulating artificial intelligence, the AI Act. The regulation is now progressing through negotiations with the European Commission and the Council.

– This regulation will have a significant impact on the development and use of artificial intelligence. However, people are using this technology already, and the harms are occurring now. Therefore, it is important that national authorities fill the gap with robust enforcement in the meantime, says Myrstad.

In the U.S., TACD members wrote to President Joe Biden asking that existing laws be enforced wherever they apply, and that new regulations be passed to specifically address the serious risks and gaps in protection identified.

About generative AI

Generative AI models are trained on large amounts of data to identify patterns and structures, allowing them to generate new content. This can be text, images, audio, or video, which may resemble human-made content.

ChatGPT, Midjourney, Stable Diffusion, and DALL-E are among the best-known generative AI-driven services. Companies such as Microsoft, Google, and Snapchat have already implemented the technology in their services, including search engines, word processing, and chat.

Generative AI is not self-aware and cannot “learn” new skills by itself, but it can simulate some of the things that humans can do by using machine learning.

More TACD work on Artificial Intelligence:

  • Policy priorities from European and U.S. consumer and digital groups (May 2023)
  • The Consumer perspective on the joint EU-U.S. roadmap on Artificial Intelligence (February 2023)