Last week, the second edition of the German Standardization Roadmap Artificial Intelligence (AI) was handed over to Vice Chancellor and Federal Minister for Economic Affairs and Climate Action Robert Habeck at the Digital Summit in Berlin. What may initially have seemed a peripheral issue for summit participants turned out, in fact, to be a headline topic throughout the week of digital policy events in the capital.

Standards determine the credibility of trustworthy AI design

In Berlin’s digital policy circles and across Europe, the EU’s proposed AI Act is widely viewed as a milestone in the design of trustworthy AI. Beginning in 2023, the Council of the EU is set to enter into negotiations (the “trilogue”) with the European Parliament and the European Commission to agree on the details of the legislation. It is already clear that the negotiation process is likely to run into complications. Still, the outcome of these negotiations alone will not determine the success of the AI Act, as its key legal requirements will have to be specified in harmonized European standards. European and national standardization bodies will thus be key to translating European requirements into effective and practical guidance. Standards, in short, are effective instruments in the design of trustworthy AI.

Get civil society on board

Issues of transparency and human oversight are particularly important when it comes to establishing confidence in the use of AI systems. Unfortunately, the AI Act remains rather vague on both points. Standardization serves to specify requirements in these areas, translating abstract principles into actionable practice.

This process of standardization then provides companies with a reliable foundation upon which they can credibly address the risks posed by algorithmic systems. If standards are to actually serve this purpose, we need to integrate sociotechnical – rather than exclusively technical – considerations into their development. Doing so, however, poses a challenge to existing standards committees, which have, to date, been composed primarily of technical experts. In other words, if sociotechnical standardization is to be effective, it must include input from civil society stakeholders. They are the ones with the know-how to grasp and interpret the profound ethical, legal and social issues raised by the development and use of AI.

When innovation meets the common good

Standardization processes, which are generally complex in nature, have so far been dominated by commercial enterprises, some of which dedicate entire staff positions to the field. And while all stakeholders are in theory allowed to participate in standards committees, very few civil society organizations have actually been involved in such processes. This is particularly disconcerting given that civil society stakeholders, with their relevant expertise, have painstakingly sought to have sociotechnical perspectives integrated more thoroughly into the design of technology. We therefore need unconventional coalitions built on strengthened cooperation between standards organizations, companies and civil society. Standardization processes offer the perfect opportunity to forge such coalitions: as processes by which abstract principles are translated into actionable specifics for the design of trustworthy AI, they make it possible to link the will to innovate with a deep regard for the common good.

Forging unconventional coalitions

Incorporating diverse perspectives on the design and use of AI into standardization begins with industry and civil society stakeholders coming to the table. Unfortunately, civil society stakeholders have generally lacked the time and financial resources to participate in such activities. We therefore need to provide them with greater structural and financial support. Established as part of the second edition of the Standardization Roadmap, the “Sociotechnical Systems” working group has gotten the ball rolling in this regard. Creating a level playing field is the next step. In addition to funding, those involved in the process will need support in translating sociotechnical requirements into the language of standards. Experienced experts from standardization bodies must therefore work in tandem with civil society actors.

Policymakers and standardization organizations are now called upon to create a structural and financial environment that allows civil society stakeholders to engage as equals in standardization processes. Doing so would give standardization the social status it deserves when it comes to implementing and enforcing the complex legal regulations governing the use of AI.


Over the past several months, Lajla Fetic has headed the “Sociotechnical Systems” working group under the auspices of the Standardization Roadmap AI together with Rosmarie Steininger and Patricia Stock. Originally published in German, this article appeared in Tagesspiegel Background on December 9, 2022.