The question of how to regulate Artificial Intelligence (AI) is at the centre of many current policy debates in the European Union (EU).
From the consumer perspective, it is imperative to enact an AI regulation that strengthens consumer trust through a high degree of transparency and accountability.
Consumers are currently at the receiving end of Artificial Intelligence: AI applications can bring them many benefits, but they can also influence consumer behaviour and transform entire consumer markets through the personalisation of offers. As beneficial and useful as many AI applications are, they can also harm consumers: think of virtual personal assistants that could personalise prices based on willingness to pay or gender, or booking platforms that could exclude consumers based on an analysis of personal traits.
As many AI applications are supplied cross-border, the implications of trade commitments for the EU’s ability to regulate should also be examined.
For this reason, the Federation of German Consumer Organisations (vzbv) commissioned a study from Kristina Irion (University of Amsterdam).
The study takes a closer look at the intersection of the internal debate in the EU about AI transparency and its regulation on the one hand, and the EU’s proposal on source code in the negotiations on electronic commerce in the World Trade Organisation (WTO) on the other.
This intersection is of particular importance for ensuring that EU policies remain compatible with trade commitments that bind the EU under international law. This is especially true as AI technologies are not (yet) properly regulated, and the understanding of their risks is still nascent and will likely evolve in the years to come.
A modular approach to regulating AI
From the consumer perspective, a modular approach to regulating AI is needed to safeguard consumer rights. Unlike the approach taken in the EU White Paper on AI, not only “high-risk” but also “medium-risk” AI applications have to be regulated to ensure a high level of consumer protection. The aim is to ensure that consumers are not discriminated against and are protected from (systemic) flaws, for example those caused by incorrect databases.
Such a modular approach would consist of different regulatory instruments, ranging from auditing the source code (the “white-box” method) and auditing the inputs and outputs of an AI system via interfaces (the “black-box” method) all the way to ex ante third-party certification of specific AI applications; the sketch below illustrates the black-box method.
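To make the distinction concrete, here is a minimal, purely illustrative sketch of a black-box audit: paired consumer profiles that differ only in one protected attribute are submitted through the system’s interface, and the quoted prices are compared. Every name in it (`query_price`, the profile fields, the tolerance) is a hypothetical stand-in, not taken from the study or from any existing audit framework.

```python
from statistics import mean

def query_price(profile: dict) -> float:
    # Hypothetical stub for the AI system under audit; a real black-box
    # audit would call the system's external interface (e.g. an HTTP API)
    # instead of a local function. This stub deliberately personalises
    # prices by gender so the audit below has something to detect.
    base = 100.0
    return base * (1.10 if profile["gender"] == "f" else 1.0)

def audit_price_parity(profiles: list, attribute: str, tolerance: float = 0.01) -> bool:
    """Return True if mean quoted prices across values of `attribute`
    differ by no more than `tolerance` (relative difference)."""
    groups = {}
    for profile in profiles:
        groups.setdefault(profile[attribute], []).append(query_price(profile))
    means = [mean(prices) for prices in groups.values()]
    return (max(means) - min(means)) / min(means) <= tolerance

# Paired test profiles that differ only in the protected attribute.
profiles = [
    {"gender": "f", "age": 34}, {"gender": "m", "age": 34},
    {"gender": "f", "age": 52}, {"gender": "m", "age": 52},
]
print("price parity holds:", audit_price_parity(profiles, "gender"))
```

The point of the example is that such testing needs access only to the system’s inputs and outputs via an interface, not to the source code itself.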
The EU’s source code proposal and its link with AI regulation
The European Union is currently participating in plurilateral negotiations on an agreement on electronic commerce under the auspices of the WTO. In these negotiations, a clause on the non-disclosure of the source code of software is being discussed with the aim of stopping forced technology transfers, which some states demand from companies that want to supply services cross-border.
While forced technology transfers are indeed a relevant problem, there is as yet no experience with such a trade-law discipline on the source code of software, and there has been insufficient analysis of its scope, application, and effects on the EU’s autonomy to regulate.
vzbv’s legal study finds that the scope of the current EU proposal on source code would not only cover computer and machine-learning algorithms but also protect the interfaces of an AI system against third-party access.
This would also bring mechanisms for black-box testing of AI within the scope of the trade clause.
A number of instruments to effectively regulate AI would most likely be inconsistent with the current EU proposal on source code, leaving it to the general exceptions under trade law to justify such measures. Justifying public-interest measures under trade law is very cumbersome, especially in areas such as AI where no international standards and very few domestic rules on algorithmic accountability and external audits exist.
EU policy options risk breaching trade law commitments
Several policy options currently discussed in the field of AI governance risk being inconsistent with a clause on source code unless they can be justified under the general exceptions in trade law. These include the following:
- The White Paper on AI proposes the introduction of prior conformity assessment of high-risk AI applications by certified testing centres;
- Germany’s Data Ethics Commission recommends “always-on” regulatory oversight of algorithmic systems with a high potential for harm through a live interface; or
- The Digital Services Act proposal requires very large online platforms to enable vetted researchers to study systemic risks by accessing data via interfaces (APIs), as illustrated in the sketch after this list.
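As a rough illustration of the interface-based access envisaged in the last point, the following sketch computes a simple exposure metric over recommendation logs obtained from a platform API. The endpoint, field names, and metric are invented for the example; the Digital Services Act proposal does not prescribe any particular API schema.

```python
import json
from collections import Counter

def fetch_recommendation_logs() -> list:
    # Hypothetical stand-in for an authenticated call to a platform's
    # research API (e.g. a vetted-researcher token against a REST
    # endpoint); here it simply returns toy data.
    return [
        {"user_group": "teens", "item_category": "health_misinfo"},
        {"user_group": "teens", "item_category": "sports"},
        {"user_group": "adults", "item_category": "news"},
        {"user_group": "teens", "item_category": "health_misinfo"},
    ]

def exposure_rates(logs: list, category: str) -> dict:
    """Share of recommendations per user group that fall into `category`."""
    totals, hits = Counter(), Counter()
    for entry in logs:
        totals[entry["user_group"]] += 1
        if entry["item_category"] == category:
            hits[entry["user_group"]] += 1
    return {group: hits[group] / totals[group] for group in totals}

print(json.dumps(exposure_rates(fetch_recommendation_logs(), "health_misinfo"), indent=2))
```

Again, this kind of analysis relies entirely on data access via an interface, which is why a broad source code clause covering interfaces would matter for it.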
Main recommendations of the study
While preventing forced technology transfers is a relevant policy goal, the European Union and its trading partners need to ensure that future public policies on AI do not become collateral damage.
- The EU needs to ensure that its trade law commitments are compatible with its internal policies, now and in the future.
- The principles of foresight, precaution and protection of the weaker party should be paramount in any elaboration of trade commitments, in order to preserve a sufficient margin of manoeuvre to respond to evolving risks – especially in AI technology – and to ensure a high level of consumer protection in the Union.
- The EU should limit the scope of its source code provision to forced technology transfers for dishonest commercial practices, or clearly carve out measures on algorithmic accountability. This would be prudent and would provide time to develop domestic policy for accountable AI and international standards on AI auditing (see page 81 of the study for a textual proposal).
- Finally, similar to its internal policy processes, the European Commission needs to ensure that a broad debate takes place before it submits proposals that can have a wide-ranging impact on the EU’s internal policies.
Related readings:
- A fact sheet summarising the main findings of the study can be found here.
- Joint statement: WTO trade talks must safeguard privacy, 42 organisations urge.
- TACD’s Response to the European Commission consultation on the Trade policy review