Our Standpoints on the AI HLEG Trustworthy AI Assessment List

The standpoints of our contribution to the piloting process of the Trustworthy AI Assessment List developed by the AI HLEG.

Overall, there is no clear evidence of the relevance and feasibility of implementing the Assessment List, and its added value is uncertain. Below are some examples that support this view.

The Assessment List, now subject to the piloting process, seems to have been drafted mainly with complex systems in mind, systems that involve extended developer or academic research teams, while ignoring the large spectrum of the developer community working with AI (small companies, start-ups, very small teams of developers, and even individual developers working as freelancers).

Many questions start from the premise that software engineers should have solid knowledge of fields other than Computer and Information Science, such as the human sciences. While it is a salutary idea to involve specialists other than software developers in the deployment of AI, the design and development of AI solutions are, and should remain, within the remit of people who know programming languages and understand computation.

There are criteria that are based on vague concepts (e.g. “adequate definition of fairness”, “broader societal impact”) or that are unclear (see the questions on Stakeholder participation or on Documenting trade-offs). This is highly problematic for the implementation of the Assessment List, as it is extremely difficult for developers to apply without a proper definition of all the concepts and terms it relies on.

Some questions (Social impact) are designed from a narrow perspective, reluctant toward and even opposed to the development of Artificial Empathy, which can provide useful technological solutions, for example in the area of Social and Assistive Robotics. AI is driving a shift in skills and jobs, but the criteria contained in Q52 disregard one of the main objectives of machine learning solutions: enhancing productivity and providing cost efficiency.

In relation to some questions on Explainability, it should be noted that, although there are developments in Explainable Artificial Intelligence (e.g. research projects such as OpenAI and Google’s Activation Atlases, Google’s DeepDream, or DARPA’s XAI Program), it is implausible that this principle could be applied as a standard to every AI solution.

A reasonable approach is recommended, one that also takes into account the limitations of the technology. Just as certain categories of approved drugs have a mechanism of action that is unknown or unclear with respect to their therapeutic effects, it is reasonable to expect that complete explanations of how AI systems produce their outputs will not always be possible. Moreover, even human decision-making processes are not fully understood.

Some criteria could be considered too intrusive, raising concerns about the protection of commercial interests and trade secrets (e.g. Stakeholder participation).
