Position On EU “Trustworthy AI” Debate

Developers Alliance welcomes the debate at EU level and the efforts of the High-Level Expert Group on AI (AI HLG) to develop and refine the “Ethics Guidelines for Trustworthy AI” and the “Policy and Investment Recommendations for Trustworthy AI.”

Developers Alliance is registered to participate in the development of the Ethics Guidelines, in order to bring the developer community’s contributions to this exercise.

The EU needs AI technology solutions to achieve its sustainability, growth and competitiveness objectives. Furthermore, we agree that the development of AI solutions could be a great opportunity for Europe to lead the global race to achieve the SDGs (see: Artificial Intelligence: Ethics, Governance and Policy Challenges (Report of a CEPS Task Force) by Andrea Renda, Part II, 5.6 “Can Europe be the champion of ‘AI for Good’?”).

We strongly support the need for increased knowledge and awareness of AI technology and for ensuring the workforce has the right skills. Fostering and scaling AI technology in the EU requires the right ecosystem, which means:

  • fostering research and innovation,

  • developing a suitable infrastructure,

  • enabling a proper investment environment,

  • promoting effective skilling measures,

  • establishing high-quality datasets at the EU level (for testing and training AI, helping ensure that AI solutions do not create or reinforce unfair bias),

  • and most importantly, ensuring an appropriate legal framework.

Adaptive, principle-based regulation is needed to ensure that Europe’s economy and society at large enjoy the benefits of AI. Policy-making, at both EU and national level, should take a future-proof approach, and any regulatory response should be based on a consistent impact assessment and stakeholder involvement. The precautionary principle should go hand in hand with the innovation principle. Making full use of Better Regulation tools, including experimental policy-making (such as regulatory sandboxes, as recommended in the Competitiveness Council’s Conclusions of 27 May 2019 on “A new level of ambition for a competitive Single Market”), is the optimal way to create an innovation-friendly legal framework while also protecting the public interest.

Stakeholder consultations in policy-making should be effective, involving all categories of relevant parties. We welcome the multi-stakeholder approach proposed by the AI HLG. Consulting the developer community has particular added value, as it provides a “reality check” for political debate and decision-making. It can help policy-makers understand the specifics of AI development and assess whether the proposed regulatory solutions are technically feasible to implement in practice. Moreover, any regulatory solution envisaged must consider the whole range of business models and sizes. A discussion on data-access regimes or liability, for instance, should consider not only large companies but also SMEs and startups developing or using AI.

Knowledge and awareness are also needed at the policy-making level, to avoid regulatory responses made in haste on the basis of purely political impulses. When addressing technology, especially AI, but also more generally, the EU’s Better Regulation policy should continue to serve one of its main objectives: to ensure that “EU actions are based on evidence and understanding of the impacts” (see: Better Regulation: Why and How), and not the other way around, with evidence marshalled to support and justify policy options decided ex ante. Competition policy should likewise remain independent, free from strong political interference.

Developers need a coherent legal framework that provides legal certainty and predictability. It is extremely difficult for developers to operate when forced to follow incoherent or even divergent rules. We support the AI HLG’s recommendation to avoid fragmentation of rules at Member State level, but also encourage awareness and consideration of global regulatory approaches.

While the Commission’s Communication on AI refers to AI “made in Europe”, the AI HLG Guidelines explicitly state that they aim “to encompass not only those AI systems made in Europe, but also those developed elsewhere and deployed or used in Europe”. A pragmatic approach is recommended, taking into account the differences between perspectives on AI ethics at the global level. Moreover, one should consider the different existing national legal frameworks relevant to the development and implementation of AI technology in Europe. The same pragmatic approach is recommended when assessing the need for standardisation of AI systems at EU level (see: Chapter II, 30.7 of the “Policy and Investment Recommendations for Trustworthy AI”, p. 43).

It should also be noted that the AI HLG Recommendations encourage a protectionist policy approach (see, e.g., Chapter II, 19.2 of the “Policy and Investment Recommendations for Trustworthy AI”, p. 30), which risks ignoring the ecosystem that allows AI solutions to emerge and develop as they do today. We underline that AI developers’ work is intrinsically global (the free flow of data is essential), and any constraints in this respect would have undesired consequences, limiting the ability of EU developers to compete globally for both customers and investment. Instead, policy measures should focus on continuing and fostering cooperation between countries that share the same vision (see: OECD Principles on AI).

Developers Alliance is committed to staying active in the debate, offering the developer community’s expertise and experience and helping improve regulatory quality by ensuring developers’ concerns are taken into account.