{"id":4225,"title":"From principles to practice: How can we make AI ethics measurable?","link":"https:\/\/www.reframetech.de\/en\/2020\/04\/02\/from-principles-to-practice-how-can-we-make-ai-ethics-measurable\/","date":"04\/02\/2020","date_unix":1585796400,"date_modified_unix":1707753016,"date_iso":"2020-04-02T03:00:00+00:00","content":"<p><em>Discussions about the societal consequences of algorithmic decision-making systems are omnipresent. A growing number of guidelines for the ethical development of so-called artificial intelligence (AI) have been put forward by stakeholders from the private sector, civil society, and the scientific and policymaking spheres. The Bertelsmann Stiftung\u2019s Algo.Rules are among this body of proposals. However, it remains unclear how organizations that develop and deploy AI systems should implement precepts of this kind. In cooperation with the nonprofit VDE standards-setting organization, we are seeking to bridge this gap with a new\u00a0<strong><a href=\"https:\/\/www.bertelsmann-stiftung.de\/de\/publikationen\/publikation\/did\/from-principles-to-practice-wie-wir-ki-ethik-messbar-machen-koennen?\" target=\"_blank\" aria-label=\"Opens in a new tab\" rel=\"noopener noreferrer\">working paper that demonstrates how AI ethics principles can be put into practice<\/a><\/strong>.<\/em><\/p>\n<p><!--more--><\/p>\n<p>With the increasing use of algorithmic decision-making (ADM) systems in all areas of life, discussions about a \u201cEuropean approach to AI\u201d are becoming more urgent. 
Political stakeholders at the German and European levels describe this approach using ideas such as <a href=\"https:\/\/www.bmas.de\/DE\/Presse\/Pressemitteilungen\/2020\/eroeffnung-ki-observatorium.html\" target=\"_blank\" aria-label=\"Opens in a new tab\" rel=\"noopener noreferrer\"><strong>&#8220;human-centered&#8221;<\/strong><\/a> systems and <strong><a href=\"http:\/\/www.rfi.fr\/en\/science-and-technology\/20200219-eu-promotes-trustworthy-artificial-intelligence-new-digital-roadmap\" target=\"_blank\" aria-label=\"Opens in a new tab\" rel=\"noopener noreferrer\">&#8220;trustworthy AI&#8221;<\/a><\/strong>. A large number of ethical guidelines for the design of algorithmic systems have been published, with most agreeing on the importance of values such as privacy, justice and transparency.<\/p>\n<p>In April 2019, the Bertelsmann Stiftung published its Algo.Rules, a set of nine principles for the ethical development and use of algorithmic systems. We argued that these criteria should be integrated from the start when developing any system, enabling them to be implemented by design. However, <strong><a href=\"https:\/\/www.bertelsmann-stiftung.de\/de\/unsere-projekte\/ethik-der-algorithmen\/projektnachrichten\/voneinander-lernen-internationaler-austausch-zu-algorithmenethik-guidelines-angestossen\" target=\"_blank\" aria-label=\"Opens in a new tab\" rel=\"noopener noreferrer\">like many initiatives, we are currently facing the challenge of making sure that our principles are actually put into practice.<\/a><\/strong><\/p>\n<p><strong>Making general ethical principles measurable<\/strong><\/p>\n<p>To take this next step, general ethical principles need to be made measurable. Currently, values such as transparency and justice are understood in many different ways by different people. 
This leads to uncertainty within the organizations developing AI systems, and impedes the work of oversight bodies and watchdog organizations. The lack of specific and verifiable principles thereby undermines the effectiveness of ethical guidelines.<\/p>\n<p>In response, the Bertelsmann Stiftung has created the interdisciplinary <strong>AI Ethics Impact Group<\/strong> in cooperation with the nonprofit VDE standards-setting organization. With our joint working paper \u201cAI Ethics: From Principles to Practice \u2013 An Interdisciplinary Framework to Operationalize AI Ethics,\u201d we seek to bridge this current gap by explaining how AI ethics principles could be operationalized and put into practice on a European scale. The AI Ethics Impact Group includes experts from a broad range of fields, including computer science, philosophy, technology impact assessment, physics, engineering and the social sciences. The working paper was co-authored by scientists from the <strong><a href=\"http:\/\/aalab.informatik.uni-kl.de\/\" target=\"_blank\" aria-label=\"Opens in a new tab\"  target=\"_blank\" rel=\"noopener noreferrer\">Algorithmic Accountability Lab of the TU Kaiserslautern<\/a><\/strong>, the <strong><a href=\"https:\/\/www.hlrs.de\/home\/\" target=\"_blank\" aria-label=\"Opens in a new tab\"  target=\"_blank\" rel=\"noopener noreferrer\">High-Performance Computing Center Stuttgart (HLRS)<\/a><\/strong>, the <strong><a href=\"https:\/\/www.itas.kit.edu\/index.php\" target=\"_blank\" aria-label=\"Opens in a new tab\"  target=\"_blank\" rel=\"noopener noreferrer\">Institute of Technology Assessment and Systems Analysis (ITAS)<\/a> in Karlsruhe<\/strong>, the <strong><a href=\"https:\/\/www.philosophie.tu-darmstadt.de\/institut_phil\/willkommen_phil\/index.de.jsp\" target=\"_blank\" aria-label=\"Opens in a new tab\"  target=\"_blank\" rel=\"noopener noreferrer\">Institute for Philosophy of the Technical University Darmstadt<\/a><\/strong>, the <strong><a 
href=\"https:\/\/uni-tuebingen.de\/en\/facilities\/central-institutions\/international-center-for-ethics-in-the-sciences-and-humanities\/the-izew\/\" target=\"_blank\" aria-label=\"Opens in a new tab\"  target=\"_blank\" rel=\"noopener noreferrer\">International Center for Ethics in the Sciences and Humanities (IZEW)<\/a><\/strong> at the University of T\u00fcbingen, and the Thinktank <strong><a href=\"https:\/\/irights-lab.de\/\" target=\"_blank\" aria-label=\"Opens in a new tab\"  target=\"_blank\" rel=\"noopener noreferrer\">iRights.Lab<\/a><\/strong>, among other institutions.<\/p>\n<p><strong>A proposal for an AI ethics label<\/strong><\/p>\n<p>At its core, our working paper proposes the creation of an ethics label for AI systems. In a manner similar to the European Union\u2019s energy-efficiency label for household appliances, such a label could be used by AI developers to communicate the quality of their products. For consumers and organizations planning to use AI, such a label could enhance comparability between available products, allowing quick assessments to be made as to whether a certain system fulfilled the necessary ethical requirements for a given application. Through these mechanisms, the approach could incentivize the ethical development of AI beyond the requirements currently enshrined in law. Based on a meta-analysis of more than 100 AI ethics guidelines, the working paper proposes that transparency, accountability, privacy, justice, reliability and environmental sustainability be established as the six key values receiving ratings under the label system.<\/p>\n<p>The proposed label would not be a simple yes\/no seal of quality. 
Rather, it would provide nuanced ratings of the AI system\u2019s relevant criteria, as illustrated in the graphic below.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-4292 aligncenter\" src=\"https:\/\/www.reframetech.de\/en\/wp-content\/uploads\/sites\/23\/2020\/04\/WKIO_2020_Fig1_ohne.jpg\" alt=\"\" width=\"215\" height=\"538\" srcset=\"https:\/\/www.reframetech.de\/wp-content\/uploads\/sites\/23\/2020\/04\/WKIO_2020_Fig1_ohne.jpg 827w, https:\/\/www.reframetech.de\/wp-content\/uploads\/sites\/23\/2020\/04\/WKIO_2020_Fig1_ohne-768x1925.jpg 768w, https:\/\/www.reframetech.de\/wp-content\/uploads\/sites\/23\/2020\/04\/WKIO_2020_Fig1_ohne-600x1504.jpg 600w, https:\/\/www.reframetech.de\/wp-content\/uploads\/sites\/23\/2020\/04\/WKIO_2020_Fig1_ohne-613x1536.jpg 613w, https:\/\/www.reframetech.de\/wp-content\/uploads\/sites\/23\/2020\/04\/WKIO_2020_Fig1_ohne-817x2048.jpg 817w, https:\/\/www.reframetech.de\/wp-content\/uploads\/sites\/23\/2020\/04\/WKIO_2020_Fig1_ohne-311x780.jpg 311w\" sizes=\"auto, (max-width: 215px) 100vw, 215px\" \/><\/p>\n<p><strong>The VCIO model concretizes general values<\/strong><\/p>\n<p>The so-called VCIO model (referring to values, criteria, indicators and observables) can be used to define the requirements needed to achieve a certain rating. As the scientific basis for the AI Ethics Impact Group\u2019s proposal, the model can help concretize general values by breaking them down into criteria, indicators and measurable observables. This in turn allows their implementation to be measured and evaluated. 
The paper describes how the model might function for the values of transparency, accountability and justice.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-4296 aligncenter\" src=\"https:\/\/www.reframetech.de\/en\/wp-content\/uploads\/sites\/23\/2020\/04\/WKIO_2020_Fig2_ohne.jpg\" alt=\"The VCIO model with its layers: values, criteria, indicators and observables\" width=\"748\" height=\"559\" srcset=\"https:\/\/www.reframetech.de\/wp-content\/uploads\/sites\/23\/2020\/04\/WKIO_2020_Fig2_ohne.jpg 1713w, https:\/\/www.reframetech.de\/wp-content\/uploads\/sites\/23\/2020\/04\/WKIO_2020_Fig2_ohne-768x574.jpg 768w, https:\/\/www.reframetech.de\/wp-content\/uploads\/sites\/23\/2020\/04\/WKIO_2020_Fig2_ohne-600x449.jpg 600w, https:\/\/www.reframetech.de\/wp-content\/uploads\/sites\/23\/2020\/04\/WKIO_2020_Fig2_ohne-1536x1149.jpg 1536w, https:\/\/www.reframetech.de\/wp-content\/uploads\/sites\/23\/2020\/04\/WKIO_2020_Fig2_ohne-396x295.jpg 396w, https:\/\/www.reframetech.de\/wp-content\/uploads\/sites\/23\/2020\/04\/WKIO_2020_Fig2_ohne-780x583.jpg 780w\" sizes=\"auto, (max-width: 748px) 100vw, 748px\" \/><\/p>\n<p style=\"text-align: center\"><em>How the VCIO model functions across its different layers \u2013 values, criteria, indicators and observables.<\/em><\/p>\n<p>The ultimate impact of any algorithmic system will necessarily be influenced both by its technical features and by the way the technology is organizationally embedded. Therefore, requirements should be defined that relate both to the technical system itself (system standards) and to the processes associated with its development and use (process standards). 
For the value of transparency, for example, requirements could include: 1) use of the simplest and most intelligible algorithmic system, taking into account the issues of efficiency and accuracy; 2) provision of information to all affected parties regarding the AI system\u2019s use; and 3) provision of information that is sufficiently oriented toward the needs of the target group.<\/p>\n<p><strong>The risk matrix helps classify AI application scenarios<\/strong><\/p>\n<p>The decision as to which levels on the ethics label should be considered ethically acceptable would necessarily vary across application cases. For example, the use of an AI system in an industrial process might be subject to lower transparency requirements than if the same system were applied in a medical procedure requiring the processing of personal data.<\/p>\n<p>To help in such decisions, the working paper proposes the use of a risk matrix to guide the classification of different application cases. Instead of a binary classification into high-risk and non-high-risk cases (as outlined in the <strong><a href=\"https:\/\/ec.europa.eu\/info\/sites\/info\/files\/commission-white-paper-artificial-intelligence-feb2020_en.pdf\" target=\"_blank\" aria-label=\"Opens in a new tab\" rel=\"noopener noreferrer\">European Commission\u2019s White Paper on Artificial Intelligence<\/a><\/strong>), our risk matrix uses a two-dimensional approach that does more justice to the diversity of application cases. The horizontal x-axis represents the intensity of potential harm. The vertical y-axis represents the degree to which affected persons depend on the decision. The assessment of the intensity of potential harm should reflect any potential impact on fundamental rights, the number of people affected and any risks for society as a whole, for instance if democratic processes are affected. 
The degree of dependence on the outcome of the decision reflects issues such as an affected individual\u2019s ability to avoid exposure, switch to another system or challenge a decision.<\/p>\n<p>We further recommend a division into five risk classes (see diagram). Systems that do not require any regulation whatsoever would fall into class 0. Class 4, reflecting the greatest risk, would describe situations in which AI systems should not be applied at all. For application cases falling between these two extremes, system requirements would need to be defined through use of the VCIO model.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-4319 aligncenter\" src=\"https:\/\/www.reframetech.de\/en\/wp-content\/uploads\/sites\/23\/2020\/04\/WKIO_2020_Fig7_ohne-2.jpg\" alt=\"Risk matrix plotting intensity of potential harm against dependence on the decision, divided into five risk classes\" width=\"2067\" height=\"1329\" srcset=\"https:\/\/www.reframetech.de\/wp-content\/uploads\/sites\/23\/2020\/04\/WKIO_2020_Fig7_ohne-2.jpg 2067w, https:\/\/www.reframetech.de\/wp-content\/uploads\/sites\/23\/2020\/04\/WKIO_2020_Fig7_ohne-2-768x494.jpg 768w, https:\/\/www.reframetech.de\/wp-content\/uploads\/sites\/23\/2020\/04\/WKIO_2020_Fig7_ohne-2-600x386.jpg 600w, https:\/\/www.reframetech.de\/wp-content\/uploads\/sites\/23\/2020\/04\/WKIO_2020_Fig7_ohne-2-1536x988.jpg 1536w, https:\/\/www.reframetech.de\/wp-content\/uploads\/sites\/23\/2020\/04\/WKIO_2020_Fig7_ohne-2-2048x1317.jpg 2048w, https:\/\/www.reframetech.de\/wp-content\/uploads\/sites\/23\/2020\/04\/WKIO_2020_Fig7_ohne-2-780x502.jpg 780w\" sizes=\"auto, (max-width: 2067px) 100vw, 2067px\" \/><\/p>\n<p style=\"text-align: center\"><em>Illustration of the two-dimensional risk-classification approach, along with the proposed division into five risk classes (numbered 0<\/em> 
<em>\u2013 4).<\/em><\/p>\n<p>This systematic approach could be adopted by policymakers and oversight bodies as a means of concretizing requirements for AI systems and ensuring effective control. Organizations intending to implement AI systems in their working processes could use the model to define requirements for their purchasing and procurement processes.<\/p>\n<p><strong>A toolset for further work on AI ethics<\/strong><\/p>\n<p>The solutions presented in the working paper \u2013 from the ethics label to the VCIO model and the risk matrix \u2013 have yet to be tested in practice, and need to be developed further. However, we believe that the proposals may help to advance the much-needed debate on how AI ethics principles can be put into practice effectively. Policymakers, regulators and standards-setting organizations should regard the working paper as a toolset to be further developed and reflected upon in an interdisciplinary and participatory manner.<\/p>\n<p>We look forward to continuing the conversation!<\/p>\n<div class=\"postContentEmbed\">\n<div class=\"embedContainer embedContainer--video\"><iframe loading=\"lazy\" title=\"Ethics of Algorithms \u2013 How do we bring AI from principles to practice?\" width=\"500\" height=\"281\" src=\"https:\/\/www.youtube-nocookie.com\/embed\/ydbXgCWK-0M?feature=oembed\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" referrerpolicy=\"strict-origin-when-cross-origin\" allowfullscreen><\/iframe><\/div>\n<\/div>\n<hr \/>\n<p>This text is licensed under a\u202f<a href=\"http:\/\/creativecommons.org\/licenses\/by\/4.0\/\" target=\"_blank\" aria-label=\"Opens in a new tab\" rel=\"noopener noreferrer\"><strong>Creative Commons Attribution 4.0 International 
License<\/strong><\/a><\/p>\n","excerpt":"<p>Discussions about the societal consequences of algorithmic decision-making systems are omnipresent. A growing number of guidelines for the ethical development [&hellip;]<\/p>\n","thumbnail":"https:\/\/www.reframetech.de\/wp-content\/uploads\/sites\/23\/2020\/03\/william-warby-WahfNoqbYnM-unsplash-780x373.jpg","thumbnailsquare":"https:\/\/www.reframetech.de\/wp-content\/uploads\/sites\/23\/2020\/03\/william-warby-WahfNoqbYnM-unsplash-370x370.jpg","authors":[{"id":715,"name":"Carla Hustedt","link":"https:\/\/www.reframetech.de\/en\/blogger\/carla-hustedt-2\/"},{"id":3277,"name":"Lajla Fetic","link":"https:\/\/www.reframetech.de\/en\/blogger\/lajla-fetic-2\/"}],"categories":[{"id":698,"name":"Political decision-makers","link":"https:\/\/www.reframetech.de\/en\/category\/political-decision-makers\/"}],"tags":[{"id":640,"name":"Algo.Rules","link":"https:\/\/www.reframetech.de\/en\/tag\/algo-rules-en\/"},{"id":301,"name":"Ethics of Algorithms","link":"https:\/\/www.reframetech.de\/en\/tag\/ethics-of-algorithms\/"},{"id":639,"name":"Publications","link":"https:\/\/www.reframetech.de\/en\/tag\/publications\/"},{"id":349,"name":"Regulation","link":"https:\/\/www.reframetech.de\/en\/tag\/regulation\/"}]}