{"id":13052,"title":"Why AI foundation models can pose risks to the common good","link":"https:\/\/www.reframetech.de\/en\/2025\/09\/25\/why-ai-foundation-models-can-pose-risks-to-the-common-good\/","date":"09\/25\/2025","date_unix":1758780671,"date_modified_unix":1759330632,"date_iso":"2025-09-25T06:11:11+00:00","content":"<p><em>Foundation models form the backbone of generative artificial intelligence \u2014 and thus of numerous digital tools such as ChatGPT and Gemini. However, their use carries risks, whether due to randomly compiled training data, profit-driven business models, or limited transparency. A new report by the Bertelsmann Stiftung highlights what mission driven organizations in particular should pay attention to and explores more responsible alternatives.\u00a0<\/em><\/p>\n<p><span data-contrast=\"auto\">Creating a presentation with ChatGPT, translating a text with Gemini, or planning the next event with Copilot: artificial intelligence (AI) is becoming increasingly common in everyday (working) life. It is designed to provide quick answers, give food for thought, or take over routine tasks, freeing up more time for more complex work. These digital assistants rely on foundation models \u2013 large-scale AI systems trained on massive datasets that are adaptable to a wide range of uses. Whether an AI system produces accurate or misleading, balanced or biased results depends directly on the training data of its underlying <span tabindex='0' class='glossary-item-container'>foundation model<span class='glossary-item-hidden-content'><span class='glossary-item-header'>Basismodell<\/span> <span class='glossary-item-description'>Ein <strong>gro\u00dfes, auf umfangreichen Datens\u00e4tzen trainiertes KI-Modell<\/strong>, das als Grundlage f\u00fcr verschiedene spezifische Anwendungen dient. 
Foundation Models\/Basismodelle k\u00f6nnen f\u00fcr eine Vielzahl von Aufgaben in verschiedenen Anwendungsgebieten feinabgestimmt werden.<\/span><\/span><\/span>.<\/span><span data-ccp-props=\"{}\">\u00a0<\/span><\/p>\n<p><span data-contrast=\"auto\">The Bertelsmann Stiftung has examined the potential disadvantages of these models in its new study, <\/span><i><span data-contrast=\"auto\">Fragile Foundations: The Hidden Risks of Generative AI. <\/span><\/i><span data-contrast=\"auto\">Author Anne L. Washington, Associate Professor of Technology Policy at Duke University&#8217;s Sanford School of Public Policy, analyzes the systemic weaknesses of current foundation models. Drawing on expert interviews, comparative model analysis, and recent research, the study shows that many problems stem not only from the applications but from the models themselves.<\/span><span data-ccp-props=\"{}\">\u00a0<\/span><\/p>\n<h2><b><span data-contrast=\"auto\">Significant risks for vulnerable groups<\/span><\/b><span data-ccp-props=\"{}\">\u00a0<\/span><\/h2>\n<p><span data-contrast=\"auto\">Although marketed as \u201cgeneral purpose models,\u201d foundation models often prove unsuitable for broad use. For organizations with a social mission \u2013 whether in social work, environmental protection, or support for disadvantaged groups \u2013 the risks can be particularly serious. Precisely in contexts where people in vulnerable situations rely on assistance, faulty or discriminatory AI outputs can cause real harm.<\/span><span data-ccp-props=\"{}\">\u00a0<\/span><\/p>\n<p><span data-contrast=\"auto\">These risks are not hypothetical. 
A <\/span><a href=\"https:\/\/www.nytimes.com\/2023\/06\/08\/us\/ai-chatbot-tessa-eating-disorders-association.html\" target=\"_blank\" aria-label=\"Opens in a new tab\"  target=\"_blank\" rel=\"noopener\"><span data-contrast=\"none\">chatbot designed for people with eating disorders<\/span><\/a><span data-contrast=\"auto\"> offered dieting tips, a <\/span><a href=\"https:\/\/netzpolitik.org\/2024\/diskriminierung-ams-erntet-hohn-mit-neuem-ki-chatbot\/\" target=\"_blank\" aria-label=\"Opens in a new tab\"  target=\"_blank\" rel=\"noopener\"><span data-contrast=\"none\">system used by the Austrian employment agency<\/span><\/a><span data-contrast=\"auto\"> recommended cooking or nursing jobs to women and IT jobs to men, and in California, a <\/span><a href=\"https:\/\/www.theguardian.com\/technology\/2024\/oct\/23\/character-ai-chatbot-sewell-setzer-death\" target=\"_blank\" aria-label=\"Opens in a new tab\"  target=\"_blank\" rel=\"noopener\"><span data-contrast=\"none\">chatbot encouraged suicidal thoughts<\/span><\/a><span data-contrast=\"auto\"> in a teenager instead of pointing him to sources of help. Such cases show how the supposed neutrality of AI can prove illusory. Organizations that rely on AI that produces distorted or discriminatory results risk eroding the trust on which their work depends.<\/span><span data-ccp-props=\"{}\">\u00a0<\/span><\/p>\n<h2><b><span data-contrast=\"auto\">Critically questioning the fundamentals of AI<\/span><\/b><span data-ccp-props=\"{}\">\u00a0<\/span><\/h2>\n<p><span data-contrast=\"auto\">\u201cAI foundation models carry the risk of exacerbating existing injustices. Our report invites us to critically question and rethink the fundamentals of generative AI. 
Mission-driven organizations in particular should make a conscious decision about whether to use a foundation model and, if so, for what purpose,\u201d says Teresa Staiger, digital expert at the Bertelsmann Stiftung.<\/span><span data-ccp-props=\"{}\">\u00a0<\/span><\/p>\n<p><span data-contrast=\"auto\">A key concern is the reliance on uncurated training data \u2013 datasets gathered largely through automated web scraping. Such data often reflect historical <span tabindex='0' class='glossary-item-container'>bias<span class='glossary-item-hidden-content'><span class='glossary-item-header'>Bias<\/span> <span class='glossary-item-description'>In AI, <strong>bias refers to distortions<\/strong> in models or datasets. There are two kinds:\r\r<strong><em>Ethical bias<\/em>:<\/strong> systematic prejudice that leads to unfair or discriminatory results, based on factors such as gender, ethnicity, or age.\r\r<strong><em>Mathematical bias<\/em>:<\/strong> a technical deviation in statistical models that can lead to inaccuracies but does not necessarily cause ethical problems.<\/span><\/span><\/span> and overlook the perspectives of certain social groups. The business models of AI providers also play a role: When profit maximization overrides data quality, external evaluation of the data becomes impossible and accountability requirements are absent. In addition, systemic conditions further compound the problem: without consistent tools for systematic evaluation and review, errors in foundation models ripple into downstream applications.<\/span><span data-ccp-props=\"{}\">\u00a0<\/span><\/p>\n<h2><b><span data-contrast=\"auto\">Other paths are possible<\/span><\/b><span data-ccp-props=\"{}\">\u00a0<\/span><\/h2>\n<p><span data-contrast=\"auto\">In addition to identifying structural weaknesses, the study outlines potential alternatives. 
These include technical, participatory, data-related, and collaborative approaches. Key recommendations include:<\/span><span data-ccp-props=\"{}\">\u00a0<\/span><\/p>\n<ul>\n<li data-leveltext=\"-\" data-font=\"Aptos\" data-listid=\"1\" data-list-defn-props=\"{&quot;134224900&quot;:true,&quot;335551671&quot;:0,&quot;335552541&quot;:1,&quot;335559685&quot;:720,&quot;335559991&quot;:360,&quot;469769226&quot;:&quot;Aptos&quot;,&quot;469769242&quot;:[8226],&quot;469777803&quot;:&quot;left&quot;,&quot;469777804&quot;:&quot;-&quot;,&quot;469777815&quot;:&quot;hybridMultilevel&quot;}\" data-aria-posinset=\"0\" data-aria-level=\"1\"><b><span data-contrast=\"auto\">Avoid monocultures in training data:<\/span><\/b><span data-contrast=\"auto\"> Foundation models should rely on carefully curated datasets that have been deliberately assembled and reliably verified. This is the only way to avoid distortions and reflect diverse perspectives.<\/span><span data-ccp-props=\"{}\">\u00a0<\/span><\/li>\n<\/ul>\n<ul>\n<li data-leveltext=\"-\" data-font=\"Aptos\" data-listid=\"1\" data-list-defn-props=\"{&quot;134224900&quot;:true,&quot;335551671&quot;:0,&quot;335552541&quot;:1,&quot;335559685&quot;:720,&quot;335559991&quot;:360,&quot;469769226&quot;:&quot;Aptos&quot;,&quot;469769242&quot;:[8226],&quot;469777803&quot;:&quot;left&quot;,&quot;469777804&quot;:&quot;-&quot;,&quot;469777815&quot;:&quot;hybridMultilevel&quot;}\" data-aria-posinset=\"1\" data-aria-level=\"1\"><b><span data-contrast=\"auto\">Focus on transparency and feedback:<\/span><\/b><span data-contrast=\"auto\"> Foundation models should be open to ongoing evaluation and external review, allowing errors to be identified and corrected while building trust.<\/span><span data-ccp-props=\"{}\">\u00a0<\/span><\/li>\n<\/ul>\n<ul>\n<li data-leveltext=\"-\" data-font=\"Aptos\" data-listid=\"1\" 
data-list-defn-props=\"{&quot;134224900&quot;:true,&quot;335551671&quot;:0,&quot;335552541&quot;:1,&quot;335559685&quot;:720,&quot;335559991&quot;:360,&quot;469769226&quot;:&quot;Aptos&quot;,&quot;469769242&quot;:[8226],&quot;469777803&quot;:&quot;left&quot;,&quot;469777804&quot;:&quot;-&quot;,&quot;469777815&quot;:&quot;hybridMultilevel&quot;}\" data-aria-posinset=\"2\" data-aria-level=\"1\"><b><span data-contrast=\"auto\">Adopt a library model:<\/span><\/b><span data-contrast=\"auto\"> Like a (national) library, foundation models should preserve knowledge over the long term while ensuring broad and equitable access. Training data must be balanced and representative.<\/span><span data-ccp-props=\"{}\">\u00a0<\/span><\/li>\n<\/ul>\n<p><span data-contrast=\"auto\">\u201cOur study clearly shows that if foundation models are to serve the common good, they must be developed, evaluated, and operated differently than they are today,\u201d emphasizes Teresa Staiger.<\/span><span data-ccp-props=\"{}\">\u00a0<\/span><\/p>\n<h2><b><span data-contrast=\"auto\">A starting point and impetus for mission-driven organizations<\/span><\/b><span data-ccp-props=\"{}\">\u00a0<\/span><\/h2>\n<p><span data-contrast=\"auto\">This report illustrates why it is critical to scrutinize foundation models. It thus offers a starting point and impetus for decision-makers and practitioners in mission-driven organizations, as well as anyone committed to a responsible digital future. Using generative AI meaningfully in the service of the common good requires a clear understanding of the technology\u2019s foundations \u2013 and of the questions these raise for mission-driven organizations.<\/span><span data-ccp-props=\"{}\">\u00a0<\/span><\/p>\n<p><span data-contrast=\"auto\">The race to design responsible AI infrastructures is still open. With better datasets, more reliable evaluation, and broader access to expertise, foundation models can be made safer and more oriented toward the common good. 
A digital infrastructure modeled on libraries is one conceivable path: safeguarding knowledge for the long term, expanding access, and aligning it with the common good.<\/span><span data-ccp-props=\"{}\">\u00a0<\/span><\/p>\n<h2><b><span data-contrast=\"auto\">Additional information:<\/span><\/b><span data-ccp-props=\"{}\">\u00a0<\/span><\/h2>\n<p><span data-contrast=\"auto\">Our <\/span><a href=\"https:\/\/www.reframetech.de\/2024\/11\/26\/neue-wissensseite-das-oekosystem-der-ki-basismodelle\/\" target=\"_blank\" aria-label=\"Opens in a new tab\" rel=\"noopener\"><span data-contrast=\"none\">knowledge page on AI foundation models<\/span><\/a><span data-contrast=\"auto\"> (in German) provides an overview of how they work, the types of models, and the mechanisms that make the technology possible \u2013 including resource consumption, training data, and the invisible human labor underpinning automation.<\/span><span data-ccp-props=\"{}\">\u00a0<\/span><\/p>\n<hr \/>\n<p><span data-teams=\"true\"><i>This text is licensed under a \u202f<\/i><a id=\"menurlnv\" title=\"http:\/\/creativecommons.org\/licenses\/by\/4.0\/\" href=\"http:\/\/creativecommons.org\/licenses\/by\/4.0\/\" target=\"_blank\" rel=\"noreferrer noopener\" aria-label=\"Creative Commons Attribution 4.0 International License\"><i><strong>Creative Commons Attribution 4.0 International License<\/strong><\/i><\/a><\/span><\/p>\n","excerpt":"<p>Foundation models form the backbone of generative artificial intelligence \u2014 and thus of numerous digital tools such as ChatGPT and Gemini. 
However, their use carries risks, whether due to randomly compiled training data, profit-driven business models, or limited transparency. A new report by the Bertelsmann Stiftung highlights what mission-driven organizations in particular should pay attention to and explores more responsible alternatives.\u00a0<\/p>\n","thumbnail":"https:\/\/www.reframetech.de\/wp-content\/uploads\/sites\/23\/2025\/09\/250922_Cover_Quer.png","thumbnailsquare":"https:\/\/www.reframetech.de\/wp-content\/uploads\/sites\/23\/2025\/09\/250922_Cover_Quer.png","authors":[{"id":7882,"name":"Teresa Staiger","link":"https:\/\/www.reframetech.de\/en\/blogger\/teresa-staiger\/"}],"categories":[{"id":700,"name":"Public interest organizations","link":"https:\/\/www.reframetech.de\/en\/category\/public-interest-organizations\/"}],"tags":[{"id":719,"name":"Latest Publications","link":"https:\/\/www.reframetech.de\/en\/tag\/latest-publications\/"},{"id":639,"name":"Publications","link":"https:\/\/www.reframetech.de\/en\/tag\/publications\/"}]}