{"$schema":"https://www.lobbyregister.bundestag.de/json-schemas/R2.22/Lobbyregister-Registereintrag-schema-R2.22.json","source":"Deutscher Bundestag, Lobbyregister für die Interessenvertretung gegenüber dem Deutschen Bundestag und der Bundesregierung","sourceUrl":"https://www.lobbyregister.bundestag.de","sourceDate":"2026-04-17T00:28:36.110+02:00","jsonDocumentationUrl":"https://www.lobbyregister.bundestag.de/informationen-und-hilfe/open-data-1049716","registerNumber":"R002153","registerEntryDetails":{"registerEntryId":51527,"legislation":"GL2024","version":24,"detailsPageUrl":"https://www.lobbyregister.bundestag.de/suche/R002153/51527","pdfUrl":"https://www.lobbyregister.bundestag.de/media/75/84/489046/Lobbyregister-Registereintraege-Detailansicht-R002153-2025-03-10_17-32-22.pdf","validFromDate":"2025-03-10T17:32:22.000+01:00","validUntilDate":"2025-04-08T18:01:52.000+02:00","fiscalYearUpdate":{"updateMissing":false,"lastFiscalYearUpdate":"2025-01-31T09:11:59.000+01:00"}},"accountDetails":{"activeLobbyist":true,"activeDateRanges":[{"fromDate":"2024-07-12T14:11:07.000+02:00"}],"firstPublicationDate":"2022-02-28T14:42:45.000+01:00","lastUpdateDate":"2025-03-10T17:32:22.000+01:00","registerEntryVersions":[{"registerEntryId":51527,"jsonDetailUrl":"https://www.lobbyregister.bundestag.de/sucheJson/R002153/51527","version":24,"legislation":"GL2024","validFromDate":"2025-03-10T17:32:22.000+01:00","validUntilDate":"2025-04-08T18:01:52.000+02:00","versionActiveLobbyist":true},{"registerEntryId":50418,"jsonDetailUrl":"https://www.lobbyregister.bundestag.de/sucheJson/R002153/50418","version":23,"legislation":"GL2024","validFromDate":"2025-02-12T11:44:13.000+01:00","validUntilDate":"2025-03-10T17:32:22.000+01:00","versionActiveLobbyist":true},{"registerEntryId":49017,"jsonDetailUrl":"https://www.lobbyregister.bundestag.de/sucheJson/R002153/49017","version":22,"legislation":"GL2024","validFromDate":"2025-01-31T09:11:59.000+01:00","validUntilDate":"2025-02-12T11:44:13.000+01:00","versionActiveLobbyist":true},{"registerEntryId":48506,"jsonDetailUrl":"https://www.lobbyregister.bundestag.de/sucheJson/R002153/48506","version":21,"legislation":"GL2024","validFromDate":"2024-12-23T11:55:09.000+01:00","validUntilDate":"2025-01-31T09:11:59.000+01:00","versionActiveLobbyist":true},{"registerEntryId":41462,"jsonDetailUrl":"https://www.lobbyregister.bundestag.de/sucheJson/R002153/41462","version":20,"legislation":"GL2024","validFromDate":"2024-07-12T14:11:07.000+02:00","validUntilDate":"2024-12-23T11:55:09.000+01:00","versionActiveLobbyist":true}],"accountHasCodexViolations":false},"lobbyistIdentity":{"identity":"ORGANIZATION","name":"Microsoft Deutschland GmbH","legalFormType":{"code":"JURISTIC_PERSON","de":"Juristische Person","en":"Legal person"},"legalForm":{"code":"LF_GMBH","de":"Gesellschaft mit beschränkter Haftung (GmbH)","en":"Limited liability company (GmbH)"},"contactDetails":{"phoneNumber":"+491806672255","emails":[{"email":"msftber@microsoft.com"}],"websites":[{"website":"https://www.microsoft.com/de-de/"}]},"address":{"type":"NATIONAL","street":"Walter-Gropius-Straße ","streetNumber":"5","zipCode":"80807","city":"München ","country":{"code":"DE","de":"Deutschland","en":"Germany"}},"capitalCityRepresentationPresent":true,"capitalCityRepresentation":{"address":{"type":"NATIONAL","nationalAdditional1":"Niederlassung Berlin","street":"Unter den Linden","streetNumber":"17","zipCode":"10117","city":"Berlin"},"contactDetails":{"phoneNumber":"+4930390970","email":"msftber@microsoft.com"}},"legalRepresentatives":[{"lastName":"Heftberger","firstName":"Agnes","function":"Vorsitzende der Geschäftsführung ","recentGovernmentFunctionPresent":false,"entrustedPerson":true,"contactDetails":{}},{"lastName":"Deter","firstName":"Florian","function":"Geschäftsführer","recentGovernmentFunctionPresent":false,"entrustedPerson":false,"contactDetails":{}},{"lastName":"Orndorff","firstName":"Benjamin","function":"Geschäftsführer","recentGovernmentFunctionPresent":false,"entrustedPerson":false,"contactDetails":{}},{"lastName":"Dolliver","firstName":"Keith","function":"Geschäftsführer","recentGovernmentFunctionPresent":false,"entrustedPerson":false,"contactDetails":{}},{"lastName":"Smith","firstName":"Bradford","function":"\"Vice Chair and President\" Microsoft Corporation","recentGovernmentFunctionPresent":false,"entrustedPerson":true,"contactDetails":{}}],"entrustedPersonsPresent":true,"entrustedPersons":[{"academicDegreeBefore":"Dr.","lastName":"Brinkel","firstName":"Guido","recentGovernmentFunctionPresent":false},{"lastName":"Reicherts","firstName":"Joana","recentGovernmentFunctionPresent":false},{"lastName":"Heftberger","firstName":"Agnes","recentGovernmentFunctionPresent":false},{"lastName":"Langkabel","firstName":"Thomas","recentGovernmentFunctionPresent":false},{"lastName":"Bettzuege","firstName":"Maximilian","recentGovernmentFunctionPresent":false},{"lastName":"Wigand","firstName":"Ralf","recentGovernmentFunctionPresent":false},{"academicDegreeBefore":"Dr.","lastName":"Pernau","firstName":"Jennifer","recentGovernmentFunctionPresent":false},{"academicDegreeAfter":"LL.M.","lastName":"Weiss","firstName":"Rebekka","recentGovernmentFunctionPresent":false},{"lastName":"Smith","firstName":"Bradford","recentGovernmentFunctionPresent":false}],"membersPresent":false,"membershipsPresent":true,"memberships":[{"membership":"American Chamber of Commerce in Germany e. V."},{"membership":"Atlantik-Brücke e.V."},{"membership":"Bitkom e.V."},{"membership":"Bundesverband Deutscher Startups e.V."},{"membership":"eco - Verband der Internetwirtschaft e.V."},{"membership":"Förderkreis der Deutschen Industrie e.V."},{"membership":"game - Verband der deutschen Games-Branche e.V. "},{"membership":"Grüner Wirtschaftsdialog e.V."},{"membership":"Initiative D21 e.V."},{"membership":"International Data Spaces e. V."},{"membership":"Verband der Automobilindustrie e. V. (VDA)"},{"membership":"Verband kommunaler Unternehmen e. V. (VKU)"},{"membership":"VDMA e. V."},{"membership":"Wirtschaftsforum der SPD e.V."},{"membership":"Wirtschaftsrat der CDU e.V."},{"membership":"Freiwillige Selbstkontrolle Multimedia-Diensteanbieter e.V. (FSM)"},{"membership":"Freiwillige Selbstkontrolle Unterhaltungssoftware GmbH (USK)"},{"membership":"Netzwerk „Wirtschaftskoalition Daten & Digitales“ "},{"membership":"Netzwerk \"Allianz der Chancen\""},{"membership":"Netzwerk \"Collegium\" "},{"membership":"Bundesvereinigung Logistik (BVL) e. V."},{"membership":"AFCEA Bonn e.V. "},{"membership":"Berlin Partner"},{"membership":"Bundesverband Digitale Wirtschaft (BVDW) e.V."},{"membership":"Bundesverband Materialwirtschaft, Einkauf und Logistik e.V. (BME)"},{"membership":"Bundeswirtschaftssenat (BVMW)"},{"membership":"Bündnis für Bildung e.V. "},{"membership":"Unternehmerverband Deutschlands e.V."},{"membership":"Catena-X Automotive Network e.V."},{"membership":"Marktoffensive erneuerbare Energien der dena"},{"membership":"Deutscher Feuerwehrverband"},{"membership":"Deutscher Städte- und Gemeindebund e.V."},{"membership":"Deutschsprachige SAP Anwendergruppe e.V."},{"membership":"Deutscher Industrie- und Handelskammertag (DIHK) e.V."},{"membership":"EHI Retail Institute GmbH"},{"membership":"EnOcean Alliance Inc."},{"membership":"FGE - der Forschungsgesellschaft Energie "},{"membership":"FINSOZ e.V. Fachverband Informationstechnologie in Sozialwirtschaft und Sozialverwaltung"},{"membership":"GovTech Campus e.V."},{"membership":"GS1 Germany GmbH"},{"membership":"H2Hub"},{"membership":"Hamburg @ Work e.V. "},{"membership":"Handelsverband Deutschland - HDE e.V."},{"membership":"Institut für Digitalisierung im Steuerrecht e.V."},{"membership":"Max-Planck-Gesellschaft zur Förderung der Wissenschaften e.V."},{"membership":"Münchner Kreis e.V. "},{"membership":"Selbstregulierung Informationswirtschaft e.V. (SRIW)"},{"membership":"SIBB - Verband der Software-, Informations- und Kommunikations-Industrie in Berlin und Brandenburg e.V."},{"membership":"Unternehmensnetzwerk Klimaschutz - DIHK Service GmbH"},{"membership":"Zentren für Kommunikation und Informationsverarbeitung in Lehre und Forschung e.V. (ZKI)"}]},"activitiesAndInterests":{"activity":{"code":"ACT_ORGANIZATION_V2","de":"Sonstiges Unternehmen","en":"Other company"},"typesOfExercisingLobbyWork":[{"code":"SELF_OPERATED_OWN_INTEREST","de":"Die Interessenvertretung wird in eigenem Interesse selbst wahrgenommen","en":"Interest representation is self-performed in its own interest"}],"fieldsOfInterest":[{"code":"FOI_EP_OTHER","de":"Sonstiges im Bereich \"Bildung und Erziehung\"","en":"Other in the field of \"Education and parenting\""},{"code":"FOI_MEDIA_COMMUNICATION","de":"Kommunikations- und Informationstechnik","en":"Communication and information technology"},{"code":"FOI_MEDIA_INTERNET_POLICY","de":"Internetpolitik","en":"Internet policy"},{"code":"FOI_MEDIA_COPYRIGHT","de":"Urheberrecht","en":"Copyright"},{"code":"FOI_EU_OTHER","de":"Sonstiges im Bereich \"Europapolitik und Europäische Union\"","en":"Other in the field of \"European politics and the EU\""},{"code":"FOI_SA_OTHER","de":"Sonstiges im Bereich \"Staat und Verwaltung\"","en":"Other in the field of \"Government and administration\""},{"code":"FOI_ENERGY_OVERALL","de":"Allgemeine Energiepolitik","en":"Energy policy in general"},{"code":"FOI_HEALTH_OTHER","de":"Sonstiges im Bereich \"Gesundheit\"","en":"Other in the field of \"Health\""},{"code":"FOI_FOREIGN_TRADE","de":"Außenwirtschaft","en":"Foreign trade"},{"code":"FOI_FA_OTHER","de":"Sonstiges im Bereich \"Außenpolitik und internationale Beziehungen\"","en":"Other in the field of \"Foreign policy and international relations\""},{"code":"FOI_IS_CYBER","de":"Cybersicherheit","en":"Cyber security"},{"code":"FOI_ENERGY_OTHER","de":"Sonstiges im Bereich \"Energie\"","en":"Other in the field of \"Energy\""},{"code":"FOI_ECONOMY_COMPETITION_LAW","de":"Wettbewerbsrecht","en":"Competition law"},{"code":"FOI_RPI_INTEGRATION","de":"Integration","en":"Integration"},{"code":"FOI_ENVIRONMENT_SUSTAINABILITY","de":"Nachhaltigkeit und Ressourcenschutz","en":"Sustainability and resource protection"},{"code":"FOI_MEDIA_DIGITALIZATION","de":"Digitalisierung","en":"Digitalization"},{"code":"FOI_ENVIRONMENT_CLIMATE","de":"Klimaschutz","en":"Climate protection"},{"code":"FOI_MEDIA_PRIVACY","de":"Datenschutz und Informationssicherheit","en":"Data protection and information security"},{"code":"FOI_MEDIA_ADVERTISEMENT","de":"Werbung","en":"Advertising"},{"code":"FOI_EU_DOMESTIC_MARKET","de":"EU-Binnenmarkt","en":"EU internal market"},{"code":"FOI_WORK_OTHER","de":"Sonstiges im Bereich \"Arbeit und Beschäftigung\"","en":"Other in the field of \"Work and employment\""},{"code":"FOI_DEFENSE_OTHER","de":"Sonstiges im Bereich \"Verteidigung\"","en":"Other in the field of \"Defense\""},{"code":"FOI_ENVIRONMENT_OTHER","de":"Sonstiges im Bereich \"Umwelt\"","en":"Other in the field of \"Environment\""},{"code":"FOI_FA_INTERNATIONAL","de":"Internationale Beziehungen","en":"International relations"},{"code":"FOI_IS_OTHER","de":"Sonstiges im Bereich \"Innere Sicherheit\"","en":"Other in the field of \"Internal security\""},{"code":"FOI_ECONOMY_INDUSTRIAL","de":"Industriepolitik","en":"Industrial policy"},{"code":"FOI_EU_LAWS","de":"EU-Gesetzgebung","en":"EU legislation"},{"code":"FOI_MEDIA_OTHER","de":"Sonstiges im Bereich \"Medien, Kommunikation und Informationstechnik\"","en":"Other in the field of \"Media, communication and information technology\""}],"activityDescription":"Microsoft ein globaler Anbieter von Cloud-Computing-Diensten, Computersoftware, KI-Lösungen, Videospielen, Computer- und Spielehardware, Suchdiensten und anderen Online-Diensten. Die Microsoft Deutschland GmbH beschäftigt mehr als 3.000 Mitarbeiter*innen in der Firmenzentrale in München sowie in sechs weiteren Regionalbüros bundesweit.\r\n\r\nDie Interessen Microsofts im Sinne des Lobbyregistergesetzes werden durch das Politik-Team in der Hauptstadtrepräsentanz vertreten. Zum Zwecke der Interessenvertretung werden Gespräche mit Vertreterinnen und Vertretern des Bundeskanzleramtes, der Bundesministerien, mit Mitgliedern und Mitarbeitern des Deutschen Bundestages sowie mit Vertretern von Behörden in Bezug auf die aufgeführten Themenfelder und Regelungsvorhaben geführt. Zudem nimmt Microsoft an Anhörungen und politischen Fachveranstaltungen teil und beteiligt sich mit Stellungnahmen an politischen und legislativen Entscheidungsprozessen.\r\n\r\nIm Rahmen der aufgeführten Verbands-Mitgliedschaften beteiligt sich Microsoft an der Erstellung politischer Positionspapiere der verschiedenen Verbände und nimmt an politischen Fachveranstaltungen teil, um mit Entscheidungsträger*innen in Kontakt zu treten. Darüber hinaus sprechen Microsoft-Vertreter*innen regelmäßig im Rahmen von politischen- und Fachkonferenzen.  \r\n\r\nIn seiner Hauptstadtrepräsentanz richtet Microsoft Veranstaltungen wie Konferenzen, Workshops und Diskussionsrunden zu politischen Themen aus – hierzu laden wir regelmäßig Politik, Zivilgesellschaft, Wissenschaft und Wirtschaft ein.  "},"employeesInvolvedInLobbying":{"relatedFiscalYearFinished":true,"relatedFiscalYearStart":"2023-07-01","relatedFiscalYearEnd":"2024-06-30","employeeFTE":1.91},"financialExpenses":{"relatedFiscalYearFinished":true,"relatedFiscalYearStart":"2023-07-01","relatedFiscalYearEnd":"2024-06-30","financialExpensesEuro":{"from":1660001,"to":1670000}},"mainFundingSources":{"relatedFiscalYearFinished":true,"relatedFiscalYearStart":"2023-07-01","relatedFiscalYearEnd":"2024-06-30","mainFundingSources":[{"code":"MFS_ECONOMIC_ACTIVITY","de":"Wirtschaftliche Tätigkeit","en":"Economic activity"}]},"publicAllowances":{"publicAllowancesPresent":false,"relatedFiscalYearFinished":true,"relatedFiscalYearStart":"2023-07-01","relatedFiscalYearEnd":"2024-06-30"},"donators":{"relatedFiscalYearFinished":true,"relatedFiscalYearStart":"2023-07-01","relatedFiscalYearEnd":"2024-06-30","totalDonationsEuro":{"from":0,"to":0}},"membershipFees":{"relatedFiscalYearFinished":true,"relatedFiscalYearStart":"2023-07-01","relatedFiscalYearEnd":"2024-06-30","totalMembershipFees":{"from":0,"to":0},"individualContributorsPresent":false,"individualContributors":[]},"annualReports":{"annualReportLastFiscalYearExists":true,"lastFiscalYearStart":"2023-07-01","lastFiscalYearEnd":"2024-06-30","annualReportPdfUrl":"https://www.lobbyregister.bundestag.de/media/3f/29/489042/2024_Annual_Report-2.pdf"},"regulatoryProjects":{"regulatoryProjectsPresent":true,"regulatoryProjectsCount":12,"regulatoryProjects":[{"regulatoryProjectNumber":"RV0006766","title":"EU Verordnung zur Festlegung harmonisierter Vorschriften für künstliche Intelligenz (KI Verordnung)","printedMattersPresent":false,"printedMatters":[],"draftBillPresent":false,"description":"Begleitung des EU-Gesetzgebungsvorhabens auf nationaler Ebene in Deutschland (insbesondere im Rahmen des Trilog-Verfahrens) mit der Zielsetzung der Sicherstellung eines risikobasierten Ansatzes und einer sinnvollen Zuteilung der regulatorischen Verantwortlichkeiten entlang des KI Technologie-Stacks sowie einer sinnvollen Zuweisung der Verantwortlichkeiten zwischen Anbietern von KI-Systemen, Anwendern solcher Systeme sowie den Anbietern von Foundation Models. Weitere Zielsetzungen: 1. Sicherstellung der Festlegung sinnvoller Anforderungen für die Anbieter von Foundation Models 2. Praktikable Vorgaben für sogenanntes \"Watermarking\". 3. Klare Definitionen zur Bestimmung des Anwendungsbereichs der VO","affectedLawsPresent":false,"affectedLaws":[],"fieldsOfInterest":[{"code":"FOI_EU_LAWS","de":"EU-Gesetzgebung","en":"EU legislation"},{"code":"FOI_MEDIA_DIGITALIZATION","de":"Digitalisierung","en":"Digitalization"},{"code":"FOI_MEDIA_INTERNET_POLICY","de":"Internetpolitik","en":"Internet policy"},{"code":"FOI_MEDIA_COMMUNICATION","de":"Kommunikations- und Informationstechnik","en":"Communication and information technology"}]},{"regulatoryProjectNumber":"RV0006767","title":"EU Data Act","printedMattersPresent":false,"printedMatters":[],"draftBillPresent":false,"description":"Begleitung des EU-Gesetzgebungsvorhabens auf nationaler Ebene in Deutschland und Begleitung der umsetzenden Überlegungen auf nationaler Ebene (insb. Aufsichtszuständigkeiten). Zielstellungen: Förderung der Zielstellung, die gemeinsame Nutzung und Verwendung von Daten im Datenökosystem zu fördern. Hinsichtlich möglicher Auswirkungen auf Geschäftsgeheimnisse setzen wir uns für funktionierende Safeguards, auch im Interesse unserer Kunden, ein. Als Cloud-Anbieter unterstützen wir die Maßnahmen des Data Act zur Erleichterung von Switching und Datenportabilität und setzen uns diesbezüglich für praktikable Vorschriften ein. ","affectedLawsPresent":false,"affectedLaws":[],"fieldsOfInterest":[{"code":"FOI_ECONOMY_INDUSTRIAL","de":"Industriepolitik","en":"Industrial policy"},{"code":"FOI_MEDIA_DIGITALIZATION","de":"Digitalisierung","en":"Digitalization"},{"code":"FOI_EU_LAWS","de":"EU-Gesetzgebung","en":"EU legislation"}]},{"regulatoryProjectNumber":"RV0006768","title":"European Cloud Certifictaion Scheme (EUCS)","printedMattersPresent":false,"printedMatters":[],"draftBillPresent":false,"description":"Microsoft setzt sich für eine Harmonisierung der technischen Zertifizierungsanforderungen für Cloud Service Provider auf EU-Ebene ein und unterstützt in diesem Sinne den grundsätzlichen Ansatz, einen solchen Rahmen im Wege des EUCS zu schaffen. Nichttechnische Anforderungen (sog. immunity requirements) sollten aus Sicht von Microsoft dagegen nicht zum Gegenstand eines EU-Zertifizierungsrahmens gemacht werden, sondern den Mitgliedstaaten vorbehalten bleiben.  ","affectedLawsPresent":false,"affectedLaws":[],"fieldsOfInterest":[{"code":"FOI_IS_CYBER","de":"Cybersicherheit","en":"Cyber security"},{"code":"FOI_MEDIA_COMMUNICATION","de":"Kommunikations- und Informationstechnik","en":"Communication and information technology"},{"code":"FOI_MEDIA_INTERNET_POLICY","de":"Internetpolitik","en":"Internet policy"},{"code":"FOI_MEDIA_DIGITALIZATION","de":"Digitalisierung","en":"Digitalization"},{"code":"FOI_MEDIA_PRIVACY","de":"Datenschutz und Informationssicherheit","en":"Data protection and information security"}]},{"regulatoryProjectNumber":"RV0006769","title":"Deutsche Verwaltungscloud Strategie (DVS)","printedMattersPresent":false,"printedMatters":[],"draftBillPresent":false,"description":"Microsoft unterstützt die Umsetzung der Deutschen Verwaltungscloud-Strategie (DVS) im Sinne einer Dachstrategie zur Cloudifizierung in der öffentlichen Verwaltung in Deutschland. Wir treten für eine praxisnahe Umsetzung des verfolgten Multi-Cloud-Ansatzes ein, im Rahmen dessen klare allgemeingültige Anforderungen für alle Marktteilnehmer definiert werden.  ","affectedLawsPresent":false,"affectedLaws":[],"fieldsOfInterest":[{"code":"FOI_MEDIA_COMMUNICATION","de":"Kommunikations- und Informationstechnik","en":"Communication and information technology"},{"code":"FOI_MEDIA_DIGITALIZATION","de":"Digitalisierung","en":"Digitalization"}]},{"regulatoryProjectNumber":"RV0006770","title":"Gesetz zur Umsetzung der EU NIS2 Richtlinie und zur Stärkung der Cybersicherheit - NIS2UmsuCG","printedMattersPresent":true,"printedMatters":[{"title":"Entwurf eines Gesetzes zur Umsetzung der NIS-2-Richtlinie und zur Regelung wesentlicher Grundzüge des Informationssicherheitsmanagements in der Bundesverwaltung (NIS-2-Umsetzungs- und Cybersicherheitsstärkungsgesetz)","printingNumber":"380/24","issuer":"BR","documentUrl":"https://dserver.bundestag.de/brd/2024/0380-24.pdf","projectUrl":"https://dip.bundestag.de/vorgang/gesetz-zur-umsetzung-der-nis-2-richtlinie-und-zur-regelung/314976","leadingMinistries":[{"title":"Bundesministerium des Innern und für Heimat","shortTitle":"BMI","electionPeriod":20,"url":"https://www.bmi.bund.de/DE/startseite/startseite-node.html"}],"migratedDraftBill":{"title":"Entwurf eines NIS-2-Umsetzungs- und Cybersicherheitsstärkungsgesetzes","publicationDate":"2024-05-07","leadingMinistries":[{"title":"Bundesministerium des Innern und für Heimat","shortTitle":"BMI","electionPeriod":20,"url":"https://www.bmi.bund.de/DE/startseite/startseite-node.html","draftBillDocumentUrl":"https://www.bmi.bund.de/SharedDocs/gesetzgebungsverfahren/DE/Downloads/referentenentwuerfe/CI1/NIS-2-RefE.pdf?__blob=publicationFile&v=5","draftBillProjectUrl":"https://www.bmi.bund.de/SharedDocs/gesetzgebungsverfahren/DE/nis2umsucg.html"}]}},{"title":"Entwurf eines Gesetzes zur Umsetzung der NIS-2-Richtlinie und zur Regelung wesentlicher Grundzüge des Informationssicherheitsmanagements in der Bundesverwaltung (NIS-2-Umsetzungs- und Cybersicherheitsstärkungsgesetz)","printingNumber":"20/13184","issuer":"BT","documentUrl":"https://dserver.bundestag.de/btd/20/131/2013184.pdf","projectUrl":"https://dip.bundestag.de/vorgang/gesetz-zur-umsetzung-der-nis-2-richtlinie-und-zur-regelung/314976","leadingMinistries":[{"title":"Bundesministerium des Innern und für Heimat","shortTitle":"BMI","electionPeriod":20,"url":"https://www.bmi.bund.de/DE/startseite/startseite-node.html"}],"migratedDraftBill":{"title":"Entwurf eines NIS-2-Umsetzungs- und Cybersicherheitsstärkungsgesetzes","publicationDate":"2024-05-07","leadingMinistries":[{"title":"Bundesministerium des Innern und für Heimat","shortTitle":"BMI","electionPeriod":20,"url":"https://www.bmi.bund.de/DE/startseite/startseite-node.html","draftBillDocumentUrl":"https://www.bmi.bund.de/SharedDocs/gesetzgebungsverfahren/DE/Downloads/referentenentwuerfe/CI1/NIS-2-RefE.pdf?__blob=publicationFile&v=5","draftBillProjectUrl":"https://www.bmi.bund.de/SharedDocs/gesetzgebungsverfahren/DE/nis2umsucg.html"}]}}],"draftBillPresent":false,"description":"Microsoft setzt sich für eine richtliniennahe Umsetzung der NIS2-Richtline der EU im Rahmen der nationalen Umsetzung ein. Wir plädieren für die Vermeidung von Doppelzuständigen im Rahmen von Meldepflichten sowie eine klare Definition des materiellen und territorialen Anwendungsbereichs, um die Binnenmarktpotentiale der NIS2-Richtlinie zu heben.  ","affectedLawsPresent":true,"affectedLaws":[{"title":"Gesetz über das Bundesamt für Sicherheit in der Informationstechnik","shortTitle":"BSIG 2009","url":"https://www.gesetze-im-internet.de/bsig_2009"},{"title":"Verordnung zur Bestimmung kritischer Anlagen nach dem BSI-Gesetz","shortTitle":"BSI-KritisV","url":"https://www.gesetze-im-internet.de/bsi-kritisv"}],"fieldsOfInterest":[{"code":"FOI_MEDIA_DIGITALIZATION","de":"Digitalisierung","en":"Digitalization"},{"code":"FOI_MEDIA_PRIVACY","de":"Datenschutz und Informationssicherheit","en":"Data protection and information security"},{"code":"FOI_MEDIA_INTERNET_POLICY","de":"Internetpolitik","en":"Internet policy"},{"code":"FOI_IS_CYBER","de":"Cybersicherheit","en":"Cyber security"},{"code":"FOI_EU_LAWS","de":"EU-Gesetzgebung","en":"EU legislation"}]},{"regulatoryProjectNumber":"RV0006771","title":"EU Kommission Weißbuch Digitale Infrastrukturen - How to master Europe’s digital infrastructure needs?","printedMattersPresent":false,"printedMatters":[],"draftBillPresent":false,"description":"Microsoft beteiligt sich am Weißbuchprozess \"How to master Europe’s digital infrastructure needs?\" der EU-Kommission und begleitet in diesem Rahmen den Weißbuchprozess auch auf nationaler Ebene in Deutschland. Mit dem Weißbuchprozess will die EU-Kommission eine Debatte über die strategische Ausrichtung ihrer Infrastrukturpolitik anstoßen, was auch die Möglichkeit gesetzlicher oder regulatorischer Maßnahmen beinhaltet. \r\n\r\nMicrosoft unterstützt den Weißbuchprozess und hat im Rahmen der Konsultation der EU eine Stellungnahme eingereicht. Microsoft betrachtet hierbei den bestehenden gesetzlichen Rahmen als ausreichend flexibel, um auf die dynamischen Marktentwicklungen reagieren zu können. ","affectedLawsPresent":false,"affectedLaws":[],"fieldsOfInterest":[{"code":"FOI_EU_LAWS","de":"EU-Gesetzgebung","en":"EU legislation"},{"code":"FOI_EU_DOMESTIC_MARKET","de":"EU-Binnenmarkt","en":"EU internal market"},{"code":"FOI_MEDIA_INTERNET_POLICY","de":"Internetpolitik","en":"Internet policy"},{"code":"FOI_ECONOMY_INDUSTRIAL","de":"Industriepolitik","en":"Industrial policy"},{"code":"FOI_MEDIA_COMMUNICATION","de":"Kommunikations- und Informationstechnik","en":"Communication and information technology"},{"code":"FOI_MEDIA_DIGITALIZATION","de":"Digitalisierung","en":"Digitalization"}]},{"regulatoryProjectNumber":"RV0006772","title":"Anpassung Energieeffizienzgesetz (EnEfG) zur Umsetzung der Neufassung der Energieeffizienzrichtlinie","printedMattersPresent":true,"printedMatters":[{"title":"Entwurf eines Gesetzes zur Änderung des Gesetzes über Energiedienstleistungen und andere Effizienzmaßnahmen, zur Änderung des Energieeffizienzgesetzes und zur Änderung des Energieverbrauchskennzeichnungsgesetzes","printingNumber":"20/11852","issuer":"BT","documentUrl":"https://dserver.bundestag.de/btd/20/118/2011852.pdf","projectUrl":"https://dip.bundestag.de/vorgang/gesetz-zur-%C3%A4nderung-des-gesetzes-%C3%BCber-energiedienstleistungen-und-andere-effizienzma%C3%9Fnahmen/312312","leadingMinistries":[{"title":"Bundesministerium für Wirtschaft und Klimaschutz","shortTitle":"BMWK","electionPeriod":20,"url":"https://www.bmwk.de/Navigation/DE/Home/home.html"}]}],"draftBillPresent":false,"description":"Begleitung der Anpassung des Energieeffizienzgesetzes (EnEfG) an den delegierten Rechtsakt der EU auf nationaler Ebene in Deutschland, um Rechtskonformität und fairen Wettbewerb sicherzustellen. Microsoft begrüßt, dass die delegierte EU-Verordnung Klarheit darüber schafft, welche Informationen von Rechenzentren in aggregierter Form veröffentlicht werden sollen. Ergänzend schlagen wir vor, dass für die Grenzwerte von Rechenzentren Faktoren wie Verfügbarkeit, Auslastung und Kühlung berücksichtigt werden. ","affectedLawsPresent":true,"affectedLaws":[{"title":"Gesetz zur Steigerung der Energieeffizienz in Deutschland","shortTitle":"EnEfG","url":"https://www.gesetze-im-internet.de/enefg"}],"fieldsOfInterest":[{"code":"FOI_MEDIA_DIGITALIZATION","de":"Digitalisierung","en":"Digitalization"},{"code":"FOI_MEDIA_COMMUNICATION","de":"Kommunikations- und Informationstechnik","en":"Communication and information technology"},{"code":"FOI_ENERGY_OVERALL","de":"Allgemeine Energiepolitik","en":"Energy policy in general"},{"code":"FOI_ENVIRONMENT_SUSTAINABILITY","de":"Nachhaltigkeit und Ressourcenschutz","en":"Sustainability and resource protection"}]},{"regulatoryProjectNumber":"RV0006773","title":"Anpassung der Green Claims Richtlinie","printedMattersPresent":false,"printedMatters":[],"draftBillPresent":false,"description":"Begleitung des EU-Gesetzgebungsvorhabens zur Anpassung der Green Claims Directive bzgl. ausdrücklicher Umweltaussagen auf nationaler Ebene in Deutschland mit der Zielsetzung, die Überprüfung von Umweltaussagen mit dem Verfahren der Konformitätsvermutung und der Selbstbewertung auf Grundlage anerkannter Methoden und Standards zu vollziehen. Microsoft begrüßt das aus EU-Recht zur Produktsicherheit bewährte Verfahren, Umweltaussagen und die angewandte Methodik transparent und überprüfbar darzustellen, über das Unternehmen in die volle Verantwortung für die von ihnen gemachten Aussagen genommen werden. Zusammen mit der jüngsten Revision der Richtlinie über unlautere Geschäftspraktiken würde die Green Claims Richtlinie so umfassend zur Verhinderung von „Greenwashing“ beitragen.","affectedLawsPresent":false,"affectedLaws":[],"fieldsOfInterest":[{"code":"FOI_EU_LAWS","de":"EU-Gesetzgebung","en":"EU legislation"},{"code":"FOI_ENVIRONMENT_SUSTAINABILITY","de":"Nachhaltigkeit und Ressourcenschutz","en":"Sustainability and resource protection"},{"code":"FOI_ENVIRONMENT_CLIMATE","de":"Klimaschutz","en":"Climate protection"},{"code":"FOI_MEDIA_ADVERTISEMENT","de":"Werbung","en":"Advertising"}]},{"regulatoryProjectNumber":"RV0008851","title":"EU GDPR Enforcement Review","printedMattersPresent":false,"printedMatters":[],"draftBillPresent":false,"description":"Begleitung des EU-Gesetzgebungsvorhabens auf nationaler Ebene in Deutschland. Zielstellungen: Größere Änderungen an der Datenschutz-Grundverordnung sind nicht erforderlich. EDSA und die Datenschutzbehörden sollten jedoch Leitlinien in Schlüsselbereichen bereitstellen, die die Verbraucher schützen und die Sicherheit für Unternehmen erhöhen. Zu den spezifischen Bereichen, in denen Leitlinien erstellt werden könnten, gehören die zusätzliche Bestätigung, dass alle Grundlagen für die Verarbeitung personenbezogener Daten gleich behandelt werden sollten und die Aktualisierung der Leitlinien der Artikel-29-Arbeitsgruppe aus dem Jahr 2014 zur Verwendung anonymer und pseudonymer Daten. \r\nUnterstützung für die zentrale Anlaufstelle und Verbesserung der Verfahrensregeln.\r\n","affectedLawsPresent":false,"affectedLaws":[],"fieldsOfInterest":[{"code":"FOI_MEDIA_PRIVACY","de":"Datenschutz und Informationssicherheit","en":"Data protection and information security"}]},{"regulatoryProjectNumber":"RV0008852","title":"12. GWB Novelle ","printedMattersPresent":false,"printedMatters":[],"draftBillPresent":false,"description":"Mit der 12. GWB Novelle (dem Wettbewerbsmaßnahmenpaket) sollen Effizienzen im Wettbewerbsrecht durch Anpassungen des GWB vorangebracht werden. Zielstellung: Rechtsklarheit, insb. bei Aufgreifschwellen und einer der Sektoruntersuchung nachgelagerten Möglichkeit des Bundeskartellamts, Maßnahmen zu ergreifen. ","affectedLawsPresent":true,"affectedLaws":[{"title":"Gesetz gegen Wettbewerbsbeschränkungen","shortTitle":"GWB","url":"https://www.gesetze-im-internet.de/gwb"}],"fieldsOfInterest":[{"code":"FOI_ECONOMY_COMPETITION_LAW","de":"Wettbewerbsrecht","en":"Competition law"}]},{"regulatoryProjectNumber":"RV0011022","title":"Internationale KI Governance und Aufsichtsstrukturen","printedMattersPresent":false,"printedMatters":[],"draftBillPresent":false,"description":"Microsoft beteiligt sich an der Diskussion zu Governance-Prozessen zum Thema KI, insbesondere zum Thema internationaler Governance und Aufsichtsstrukturen. Microsoft setzt sich insb. für folgende Ziele ein: 1. Globale Risikosteuerung verbessern: Global bedeutsame und Sicherheitsrisiken, die uns alle betreffen, wie z. B. die KI-gestützte Beschleunigung der Entwicklung chemischer oder Entwicklung biologischer Waffen oder der Einsatz zunehmend autonomer Systeme, müssen global adressiert werden. 2. Regulatorische Interoperabilität voranbringen: Kohärenz und Interoperabilität der nationalen Politik und Regulierung über Grenzen sicherstellen. 3. Integrativer Fortschritt: Zugang zu den Vorteilen der KI sicherstellen.","affectedLawsPresent":false,"affectedLaws":[],"fieldsOfInterest":[{"code":"FOI_MEDIA_COMMUNICATION","de":"Kommunikations- und Informationstechnik","en":"Communication and information technology"},{"code":"FOI_MEDIA_DIGITALIZATION","de":"Digitalisierung","en":"Digitalization"},{"code":"FOI_FA_INTERNATIONAL","de":"Internationale Beziehungen","en":"International relations"},{"code":"FOI_EU_LAWS","de":"EU-Gesetzgebung","en":"EU legislation"},{"code":"FOI_MEDIA_INTERNET_POLICY","de":"Internetpolitik","en":"Internet policy"}]},{"regulatoryProjectNumber":"RV0014996","title":"EU CSAM Verordnung","printedMattersPresent":false,"printedMatters":[],"draftBillPresent":false,"description":"Der Verordnungsvorschlag adressiert \"child sexual abuse material\" (CSAM) und den Missbrauch einschlägiger Dienste der Informationsgesellschaft für den sexuellen Kindesmissbrauch im Internet. Wir begrüßen den risikobasierten Ansatz des Vorschlags von 2022, sind jedoch besorgt darüber, dass sowohl die Kommission als auch das Parlament einen ausschließlich obligatorischen Ansatz für Aufdeckungsanordnungen vorschlagen, der die Fähigkeit von Unternehmen, den Schaden durch sexuellen Missbrauch und sexuelle Ausbeutung von Kindern zu verhindern, übermäßig einschränken würde.","affectedLawsPresent":false,"affectedLaws":[],"fieldsOfInterest":[{"code":"FOI_EU_LAWS","de":"EU-Gesetzgebung","en":"EU legislation"}]}]},"statements":{"statementsPresent":true,"statementsCount":7,"statements":[{"regulatoryProjectNumber":"RV0006766","regulatoryProjectTitle":"EU Verordnung zur Festlegung harmonisierter Vorschriften für künstliche Intelligenz (KI Verordnung)","pdfUrl":"https://www.lobbyregister.bundestag.de/media/02/87/389741/Stellungnahme-Gutachten-SG2412230019.pdf","pdfPageCount":4,"text":{"copyrightAcknowledgement":"Die grundlegenden Stellungnahmen und Gutachten können urheberrechtlich geschützte Werke enthalten. Eine Nutzung ist nur im urheberrechtlich zulässigen Rahmen erlaubt.","text":"EU AI Act: Key opportunities for improvement in the first draft of the Code of Practice for GPAI model providers\r\nThe implementation phase of the EU AI Act and corresponding secondary legislation present an important opportunity to provide the clarity needed for an expanding European AI ecosystem and the safety and innovation-friendly framework the Act has as its goal. The Code of Practice for GPAI model providers (“Code”) exemplifies this opportunity in the immediate term. Microsoft describes below key substantive concerns regarding the initial draft version of the Code as well as recommended ways to address them. We recognize the significant work already invested in rapidly developing a high-quality draft, and we appreciate opportunities to contribute to the Code’s improvement.\r\nThe Code should align with the letter of the AI Act, which was designed in accordance with the EU’s goals of advancing safety and fostering innovation, by ensuring that future versions address the initial draft’s significant divergences from the legal text. For example, the draft Code sets transparency expectations that go beyond the Act, implicating trade secrets without articulating additional safety value; puts forward a need for pre-deployment testing by the AI Office and independent third parties, despite a lack of ecosystem readiness to deliver consistently valid evaluations; and requires reporting of “near misses” (versus serious confirmed incidents).\r\nThe first draft of the Code introduces requirements for technical documentation and information-sharing that go beyond the scope of the AI Act and raise concerns around confidentiality, trade secret protection, and information hazards.\r\n•\r\nMeasures 1 and 2 on technical documentation go beyond the letter of the Act and include concerning expectations on the types of information that providers will need to make available to the AI Office (upon request) and to downstream providers, such as the proportions of data sources used for training, testing, and validation, and specifics on model architecture (e.g., number and types of layers), which go beyond the scope of Annexes XI and XII. Expectations to document both computational resources used for inference and energy use also go beyond the AI Act’s Annex XI, which focuses on computational resources used for model training and allows for reporting estimated model-level energy consumption; such requirements also pose challenges due to the lack of standards for tracking and reporting energy use.\r\n•\r\nSub-Measure 13.6 implies an expectation for model providers to share detailed information about safety and security testing that could compromise the value of such tests for future models and risk assessments.\r\nRecommendation: The Code should align with Annex XI and XII’s defined scope in the AI Act legal text. Any additional information categories under the Code should be framed as optional implementation approaches rather than mandatory requirements. The Code should also provide clear context on the regulatory objectives for requesting such information so that model providers can propose alternatives that may be less sensitive but still responsive to the regulatory rationale behind requesting those categories of information. Sub-Measure 13.6 should explicitly acknowledge that the information provided in the “Safety and Security Report” (SSR) will not allow for independent assessment of any results, evidence, or analysis, but rather only enable assessment of the methodology itself, i.e., the risk assessment and mitigation process outlined in the “Safety and Security Framework” (SSF) and implemented in the SSR.\r\n2\r\nThe draft Code would also effectively establish a pre-market authorization regime, exceeding the scope of the AI Act and contrasting with its emphasis on post-market monitoring and risk management at the model level:\r\n•\r\nSub-Measure 17.1 calls for ensuring sufficient independent expert testing before model deployment, e.g., by the AI Office and third-party evaluators.\r\n•\r\nSub-Measure 14.3 would similarly require signatories to detail in their SSF when development and deployment decisions will have input or require external authorization from external actors, including relevant regulators such as the AI Office.\r\n•\r\nSub-Measure 10.3 further implies mandatory third-party validation of all evaluation results, contradicting AI Act recital 114, which allows providers to conduct evaluations internally or externally as appropriate.\r\nRecommendation: The Code should explicitly enable model providers the flexibility to perform pre-deployment evaluations with high scientific rigor and to validate evaluation results with internal and/or third-party experts, in line with recital 114. While Art. 92 empowers the AI Office to appoint independent experts to carry out evaluations on its behalf, this is limited to evaluations carried out in the context of investigatory actions. Clarifying this flexibility would ensure the Code does not go beyond the scope of the AI Act nor overly burden the EU market without clear risk management value, especially considering methods and processes for confirming the quality of third-party evaluation services will take time to put in place.\r\nThe draft Code, in the context of serious incident reporting in Sub-Measure 18.1, implies a requirement to also report near-misses. This goes beyond the scope of the AI Act, where reporting is only required for (confirmed) serious incidents, in line with existing practices under EU cybersecurity legislation, which is important for providing clarity and focusing risk management resources. In addition, the AI Act’s legal text only defines serious incidents at the system level; the Code should identify criteria for model-level serious incidents.\r\nRecommendation: The Code should remove reference to “near misses” and clarify that the scope of reporting is limited to “confirmed serious incidents”, bringing it in line with the AI Act’s requirement to report serious incidents.\r\nThe Code should distinguish between relevant systemic risks, and appropriate mitigations, at the model versus system level.\r\nSub-Measure 6.1 identifies categories of systemic risk, such as “persuasion and manipulation” and “large-scale discrimination”, which are contextual and/or heavily influenced by system-level deployment decisions, and therefore especially difficult to measure at the model layer. Sub-Measure 6.3.3 lists several socio-technical factors beyond model capabilities and propensities, such as the potential for downstream users to remove guardrails, that are more typically associated with functionality and usability enhancements that emerge once a model is integrated into a system and are difficult to evaluate at the model level.\r\nRecommendation: The Code should explicitly set expectations regarding how assessment and mitigation of systemic risks must work in concert with system-level risk assessment and mitigation, already covered by the AI Act’s requirements for high-risk systems. Model providers should be given flexibility in determining what factors to consider and address as nature (Sub-Measure 6.2) and sources (Sub-Measure 6.3) of systemic risk.\r\n3\r\nSub-Measure 11.4 on post-deployment monitoring, as currently conceived, would implicate platform- and/or system-level capabilities like monitoring production metrics (e.g., how often platform-level classifiers are triggered) or system-level capabilities like identifying where system outputs are not aligned with intended behavior—and thus assumes that model providers are also platform and/or system providers or that platform and/or system providers or system deployers report to model providers. Such monitoring also presents privacy challenges and conflicts with requirements for highly regulated global customers.\r\nRecommendation: The Code should limit post-deployment monitoring requirements at the model level to receiving and investigating reports from system providers and deployers and actioning those reports as appropriate.\r\nSub-Measure 10.5 would require evaluation of a model’s capabilities and limitations for all existing and future deployment scenarios. This requirement fails to acknowledge that model and system evaluations differ. Mandating system-level evals at the model layer imposes an unreasonable burden on model providers to anticipate and evaluate use cases they do not build for, fully have insight on, or have appropriate data or tools to assess.\r\nRecommendation: The Code should remove this sub-measure, recognizing that the AI Act already addresses system-level risks through the application of the AI Act’s high-risk AI system provisions. Model providers can support these efforts by providing tools and best practices, as outlined in Sub-Measure 10.8, but should not be responsible for conducting or reporting on system-level evaluations.\r\nThe Code should be reviewed end-to-end to make sure that all Measures and Sub-Measures are necessary, consistent, and add value towards achieving regulatory outcomes.\r\nMeasures overlap or interconnect, but these relationships are inconsistently acknowledged. For example, Measure 12 appears to rely on the \"intolerable level\" defined by model providers in Sub-Measure 9.3, though this link is not explicitly stated, and the connection between Sub-Measure 13.7 and Measure 14 is unaddressed.\r\nRecommendation: The Code should draw out these and other intersections, which may also surface opportunities for streamlining and delivering greater clarity.\r\nMeasures 6-22 also apply unevenly across model and/or deployment scenarios.\r\nRecommendation: The Code should allow model providers the flexibility to apply measures as appropriate to assess and mitigate identified risks. This flexibility should be clarified with explicit language in the Code’s sections on governance.\r\nThe Code should reinforce the AI Act’s risk-based approach by applying systemic risk requirements to the most advanced models demonstrating significant risks.\r\nRegulatory efforts should be directed toward mitigating significant risks, applying to providers based on the risks posed by their models rather than on their size, consistent with the Act’s risk-based approach. While onerous requirements applied to a wide range of models could affect what is made available on the EU market, today’s most powerful models have been evaluated for dangerous misuse capabilities, and the findings from those evaluations have not identified unacceptable risks (e.g., see pre-deployment testing of Anthropic’s upgraded Claude Sonnet 3.5 model, jointly conducted by the U.S. and the UK AI Safety Institutes). Rather than\r\n4\r\nestablishing a framework that exempts model providers based solely on their size, regardless of the risks associated with their models, a risk-based approach would consider risks of today’s most powerful models and focus in on a scope that addresses significant risks of concern. A focus on models that pose significant risks would also help alleviate concerns voiced by European start-ups and SMEs about the burdensome nature of the AI Act1 while maintaining a principled risk-based and safety-oriented approach.\r\nRecommendation: The Code should prioritize the application of risk assessment, testing, reporting, and notification measures for the most advanced GPAI models with systemic risk, defined as models that are trained with compute power over 1026 FLOPs and that demonstrate leading indicators of high-impact capabilities.\r\nThe AI Office should clarify the scope of the application of the Code to bring clarity to downstream entities.\r\nThe scope of application of the Code will depend on further guidance on what constitutes substantial fine-tuning. Clarifying this term will be crucial for downstream providers to understand whether and when they could be considered GPAI model providers and thus subject to the Code. Depending on this definition, additional entities may be brought into the scope of the Code without the opportunity to properly contribute to the drafting process. Further clarity will also be crucial to ensure legal certainty for companies across the AI value chain.\r\nRecommendation: The AI Office should provide a further detailed definition for fine-tuning and/or thresholds for substantial fine-tuning as soon as possible through guidelines, welcoming input from model providers and deployers."},"recipientGroups":[{"recipients":{"parliament":[],"federalGovernment":[{"department":{"title":"Bundesministerium für Wirtschaft und Klimaschutz (BMWK) (20. WP)","shortTitle":"BMWK (20. WP)","url":"https://www.bmwk.de/Navigation/DE/Home/home.html","electionPeriod":20}}]},"sendingDate":"2024-12-13"}]},{"regulatoryProjectNumber":"RV0006766","regulatoryProjectTitle":"EU Verordnung zur Festlegung harmonisierter Vorschriften für künstliche Intelligenz (KI Verordnung)","pdfUrl":"https://www.lobbyregister.bundestag.de/media/63/54/455619/Stellungnahme-Gutachten-SG2502120008.pdf","pdfPageCount":10,"text":{"copyrightAcknowledgement":"Die grundlegenden Stellungnahmen und Gutachten können urheberrechtlich geschützte Werke enthalten. Eine Nutzung ist nur im urheberrechtlich zulässigen Rahmen erlaubt.","text":"1\r\nKey opportunities for improvement in the EU AI Act’s\r\nsecond draft of the Code of Practice for GPAI model providers\r\nWhile the second draft Code of Practice (“Code”) contains several welcome improvements and clarifications, material revisions are needed to bring it into alignment with the safety and innovation-friendly framework that the AI Act has as its goal. Future drafts of the Code should more closely align with the letter of the Act, avoiding expansions or divergences from the legal text, and define more flexible and outcome-oriented Measures.\r\nIn their current form, the draft transparency and copyright Commitments applicable to all GPAI models go beyond the letter of the AI Act and include Measures that would undermine innovation without adding clear safety or other regulatory value. Similarly, among the draft Commitments applicable to GPAI models with systemic risk, the level of detail and prescription risks locking in rapidly evolving safety and security practices. Moreover, several draft Commitments go beyond the Code’s intended scope of GPAI models and implicate AI systems.\r\nFurther context is included below on key substantive concerns and recommended ways to address them. With substantial revisions, the Code could provide the clarity and streamlined approach needed to support an expanding European AI ecosystem, exemplifying the opportunity ahead during the Act’s implementation phase.\r\nRecommendations on Commitments for all GPAI Models\r\nAs with the first draft of the Code, the second draft’s transparency expectations (Commitment 1 and Measure 1.1) go beyond the letter of the AI Act’s Annexes XI and XII and raise concerns around confidentiality, trade secret protection, information hazards, and impacts to innovation—without articulating additional safety or other regulatory value. Below are key examples of concerning draft expectations and recommended alternative approaches, also summarized in the table in Appendix A.\r\nMeasure 1.1 includes overly prescriptive expectations to provide information that is not called for in the Act and that risks confusing downstream AI system developers, disrupting innovation. This includes “a list of the types of high-risk AI systems in which the model can be integrated” (emphasis added). For general-purpose technology that is upstream of many use cases for which high-risk AI systems may be developed, an expectation to comprehensively describe types of high-risk systems in a manner akin to an “allowlist”, even in advance of their development, is impractical. Meanwhile, maintaining an expectation for public documentation that is likely to inadvertently exclude types of high-risk systems that could be AI Act compliant would undermine legal clarity for downstream AI system developers. In addition, expectations to maintain a list of “restricted tasks” at the model layer confusingly diverge from the Act’s focus on “prohibited practices” at the system layer and are especially inappropriate to apply to open source.\r\nRecommendation: Instead of listing “the types of high-risk AI systems in which the model can be integrated” (emphasis added), providers of closed-source models should\r\n2\r\nbe expected to list intended uses and, in acceptable use policies, explicitly allow or prohibit use of a model in high-risk AI systems. In acceptable use policies, providers of closed-source models should also include a list of prohibited uses, including those prohibited consistent with the Act’s Art. 5 and any others, if applicable—rather than a “list of restricted tasks”, the scoping of which is unclear. Providers of open-source models could be expected to include guidance around intended or anticipated uses and to prohibit use of models only for practices in scope for Art. 5. This approach would be consistent with the letter of the AI Act (i.e., requirements to provide information about a) “the tasks that the model is intended to perform and the type and nature of AI systems into which it can be integrated” and b) “the acceptable use policies applicable”) and address concerns about lack of clarity and impracticality, with downstream negative impacts to innovation.\r\nMeasure 1.1 also includes several overly prescriptive expectations to provide especially sensitive trade secret information not required by the Act, impacting both innovation and safety. This includes “a description of how the model architecture departs from standard model architecture practices”; “the sequences of steps or stages involved in the training process,” “a description of the objective and optimisation method for each step or stage in the training process,” and “a general description for why each step or stage is implemented, along with any key assumptions”; “the fraction of the training, testing, and validation (TTV) data corresponding to each of the data acquisition methods and sources, in number of data points for each modality”; and information about “the number and type of hardware units used to train the model” as well as hardware ownership and location (emphasis added). Each of these expectations implicates information that model providers are substantially investing in and protecting to drive innovation, and any risk of exposure also carries globally significant safety and security concerns.\r\nRecommendation: Expectations for disclosure of especially sensitive trade secrets not called for in the Act should be removed or calibrated in line with Act. Regarding calibration, expectations to provide information about model architecture should be limited to a general description. Expectations to provide information about model design and training should be limited to “a detailed description” of “key design choices,” consistent with the Act—rather than expecting information about “each step or stage” of model training. Expectations for information about TTV data should be limited to the fraction of data corresponding to data acquisition methods (i.e., open web, synthetic, first-party proprietary, and third-party licensed), in number of data points for each modality, consistent with the Act’s call for information on “the type and provenance of data…the number of data points, their scope and main characteristics.” For example, this could be implemented as: Text = 10 trillion tokens made up of 50% open web, 20% synthetic, 20% first-party proprietary, and 10% third-party licensed data; and Audio = 10K hours made up of 100% third-party licensed data. To the extent the Code does not expect but invites further information, including on such elements as novel model architecture or data sources, it should provide clearer context on expected regulatory outcomes (e.g., for safety or copyright interests) rather than just describing what the information sought is intended to provide clarity on.\r\n3\r\nMeasure 1.1 also includes overly prescriptive expectations to provide information that is not required under the Act and that, given a lack of standards for tracking, describing, and/or reporting, risks misinterpretation or even improper reliance. This includes “the number of parameters that are active during inference”; “computational resources for model inference”; known or estimated energy mixture and carbon emissions for model training; a description of the methodology used for estimating energy cost, consumption, and/or emissions for model training; and a description of any methods implemented in data acquisition or processing to address the prevalence of CSAM, NCII, copyrighted materials, personal data, identifiable biases, or other potentially harmful data or legality concerns in TTV data (emphasis added).\r\nThe Act’s transparency requirements do not extend to model inferencing, and current approaches to inference-time calculations are flexible and dynamic, not well capturing variability in conditions like latency. Likewise, the Act’s energy-related requirements concern its use rather than its cost or related emissions, and approaches to tracking such energy information vary, with work to develop consistent standards ongoing. Curation methods to improve data quality and manage bias are rapidly evolving and similarly lack standardized ways to measure impact across the identified categories of potentially harmful data. In advance of more standardized practices being available, overly prescriptive expectations to provide information not called for in the Act are burdensome to model providers that have to make interpretive calls, resulting in a drag on innovation, and, more importantly, pose cascading risks related to potential misinterpretations or instances of improper reliance.\r\nRecommendation: Expectations for disclosure of information that is not called for in the Act and that poses risks of misinterpretation and improper reliance should be removed or recalibrated in line with the Act.\r\n•\r\nExpectations to provide information about energy should be limited to known or estimated consumption of model training, and if energy consumption is unknown, then estimated energy consumption may be based on information about computational resources used, consistent with the Act.\r\n•\r\nExpectations to provide information about acquisition or processing methods to address risks of concern in TTV data should be limited to reporting on methodologies in general and as applied in the context of reasonably identifiable biases, consistent with the Act.\r\nMeasure 1.1 also explicitly expects that transparency documentation will be updated to reflect “any changes” to GPAI models. Today, models are regularly improved, including to address safety and performance issues, and an expectation to provide updated documentation alongside “any changes” risks slowing or otherwise impeding more granular updates that do not significantly impact regulatory interests in models. The Code should instead acknowledge the need for a threshold for significant model updates that would trigger disclosure updates (such as a “substantial fine tuning” threshold being defined by the AI Office).\r\n4\r\nCopyright expectations (Commitment 2) should adhere more strictly to the Act and be refocused on Measures that are designed to support model providers putting in place appropriate policies and procedures for, rather than proactively proving, compliance.\r\nThe preamble presents significant challenges, as it combines existing legal requirements and interpretations in a way that seeks to bind signatories to obligations that are not clearly grounded in legislative provisions. For example, Part C references Recital 105 of the AI Act and states that where a reservation of rights has been expressly reserved in an appropriate manner, providers of GPAI models need to obtain authorisation from rightsholders if they want to carry out text and data mining over such works, ignoring other exceptions that may apply. In Part D, Recital 106 is combined with reference to Article 53(1)(c). Recitals are intended to provide interpretative context rather than impose binding obligations, which are established in the articles of the legislation. This conflation of recitals with enforceable rules, in addition to the including of further legal interpretation, risks creating ambiguity and overextending the scope of Signatories' responsibilities. Additionally, embedding legal interpretations of both the AI Act and Directive (EU) 2019/790 into the preamble creates legal uncertainty.\r\nRecommendation: Amend the preamble to focus on clear obligations as set out in the articles of the AI Act, including, for example, the provisions in Article 53 1(c).\r\nMeasures 2.3 and 2.4 mandate developers to demonstrate proof of compliance, creating a significant administrative burden that could stifle innovation. This approach would also be inconsistent with other regulatory frameworks, which generally rely on policies being in place and compliance being enforced as necessary, rather than imposing an upfront burden of proof. Furthermore, it reverses the burden of proof in relation to copyright law, potentially placing developers in a position where they must prove their systems do not infringe, rather than requiring any allegations of infringement to be substantiated. Such a reversal would undermine fundamental legal principles and stifle innovation.\r\nRecommendation: Remove Measures 2.3 and 2.4, which require Signatories to proactively prove compliance, in favor of Measures that more clearly relate to putting in place a policy.\r\nRecommendations on Commitments for GPAI Models with Systemic Risk\r\nThe Code should clearly scope its expectations for systemic risk evaluation and mitigation to practices that can be implemented exclusively at the model layer, rather than expanding commitments to cover the downstream systems layer. This will improve clarity and appropriately apply value chain responsibilities, supporting innovation and achievement of desired regulatory outcomes.\r\nMeasure 3.2 identifies as a systemic risk “large-scale, illegal discrimination”, which is not specific to a model’s high-impact capabilities and is contextual and/or heavily influenced by system-level deployment decisions—and therefore especially difficult to measure at the model layer. Measures 3.3 and 3.4 identify several socio-technical factors beyond model capabilities and propensities, such as the potential for downstream users to remove guardrails, that are difficult to evaluate at the model level as they are more typically associated with functionality and usability enhancements that emerge once a model is integrated into a system.\r\n5\r\nRecommendation: The Code should explicitly set expectations regarding how assessment and mitigation of systemic risks at the model layer must work in concert with system-level risk assessment and mitigation, already covered by the AI Act’s requirements for high-risk systems. Providers of GPAI models with systemic risk should have flexibility in determining which factors under Measures 3.3 and 3.4 they “commit to consider” – such factors could be further clarified as voluntary considerations in risk assessments.\r\nMeasure 10.5 would require evaluation of a model’s capabilities and limitations for all existing and future system-level deployment scenarios “relevant” to a risk being assessed. This requirement fails to acknowledge appropriate distinctions between obligations for model providers to conduct model-level evaluations, and obligations for high-risk AI system providers and deployers to conduct system-level evaluations via an approach otherwise defined by the AI Act and its implementation. Expectations related to system-level evaluations in the model provider Code of Practice impose an unreasonable burden on model providers to anticipate and evaluate use cases they do not necessarily build for, have full insight into the details of, or have appropriate data or tools to assess. They impose system-level obligations on model providers that will deploy proprietary models in first-party systems, as referred to in the preamble, or license terms on model providers that are not also system providers. Moreover, they risk duplication and/or inconsistency with the AI Act’s high-risk system requirements defined and implemented via other secondary measures.\r\nRecommendation: Measure 10.5 should be removed, recognizing that the AI Act already addresses system-level risks through obligations impacting high-risk AI systems and their providers and deployers. Model providers can support these efforts by providing tools and best practices, as outlined in Measure 10.8, but should not be responsible for conducting, mandating, or reporting on system-level evaluations.\r\nMeasure 10.12 on post-deployment monitoring, as currently conceived, requires model providers that deploy AI systems to monitor such models as part of first-party systems, effectively imposing system-level requirements that may overlap or conflict with system-level requirements defined and implemented elsewhere as part of the AI Act. It also suggests that providers of proprietary models with systemic risk should monitor downstream use of such models via lightweight telemetry (logging) and data analysis. Such expectations implicate platform- and/or system-level capabilities like monitoring production metrics (e.g., how often platform-level classifiers are triggered) or system-level capabilities like identifying where system outputs are not aligned with intended behavior. In some circumstances, they can also present significant privacy challenges and conflict with requirements for highly regulated global customers.\r\nRecommendation: Expectations for post-deployment monitoring requirements that directly implicate the system level should be limited to receiving and investigating reports from system providers and deployers and actioning those reports as appropriate.\r\n6\r\nThe Code should be reviewed with the goal of ensuring that all Measures and proposed KPIs are as streamlined and outcome oriented as possible, strengthening safety and innovation outcomes by applying risk management resources to the highest value practices and providing flexibility as safety practices rapidly evolve and improve.\r\nMeasures 10.4 and 10.7 propose benchmarks for engineering hours to be dedicated to model elicitation and exploratory safety research, exceeding the expectation set by the AI Act’s recital 114, which gives model providers the flexibility to perform evaluations with high scientific rigor, without prescribing in detail how such evaluations should be performed. Engineering and research hours should ideally go down over time as solutions are discovered or tools are built to enable greater efficiency and coverage. Rigor and automation should be incentivized, particularly for challenges of scale, such as red teaming or model elicitation. Moreover, in establishing as a point of comparison hours on “the largest internal non-safety project,” Measure 10.4 also demonstrates the challenges of relying on non-standardized quantification approaches. For example, to apply such a point of comparison, the Measure would need to define “non-safety project,” which could still be applied differently across organizational and operational contexts, and recognize disparities in how the number of engineering hours dedicated to “non-safety projects” may vary (e.g., where organizations integrate safety work across non-safety projects, they may in effect be penalized for a larger overall number of engineering hours on the project).\r\nRecommendation: Instead of requiring model providers to commit to a specific percentage of engineering or research hours, less prescriptive KPIs would allow model providers to select the most suitable approach to prioritizing safety depending on the model and internal context. Such KPIs could include the following commitments, for example: (1) releasing research outputs on new benchmarks or evaluation techniques to an extent proportionate to their release of GPAI models with systemic risk; (2) onboarding new benchmarks that meet minimum criteria for quality and validity; and/or (3) requiring internal staff directly involved in development of models with systemic risk to complete trainings on evaluations.\r\nMeasure 10.9 would set expectations for model providers to provide external evaluators with sufficient time, engineering support, compute budgets, and access, including grey- and white-box access, to GPAI models with systemic risk. Measure 16.2 would also set requirements for providers of proprietary models to facilitate “secure and non-restrictive access to deployed models for independent model evaluation, subject to adherence to established rules of engagement and safeguards to prevent misuse.” Moreover, proposed KPIs for Measure 16.2 would require that independent external assessors “receive access to deployed models within 30 days of a formal request, barring exceptional circumstances”, and that Signatories permit external assessors to publish responsibly disclosed findings 60 days after notification.\r\nThese Measures and KPIs go far beyond the still-evolving norms of providing select assessors with access to models pre- and/or post-deployment under strict guidelines and with security and confidentiality restrictions. Providing white-box access at scale to all external evaluators as implied under Measure 10.9 would raise serious concerns. The expectation to allow assessors to publish findings 60 days after notification to model providers is also at odds with the decades-long development of Coordinated Vulnerability Disclosure, which has generally avoided hard deadlines as an industry norm, allowing prioritization of resources toward the\r\n7\r\ngreatest risks. Moreover, where timelines have been centered in the conversation given different practices among industry, the expectation for a fix or disclosure has rather been set at 90 days.\r\nRecommendation: Expectations that, after a formal request, model providers facilitate “non-restrictive access” to deployed models among assessors that adhere to rules of engagement should be removed—on any timeline. The Code should instead recognize a variety of approaches to meeting post-deployment external assessment expectations, including approaches that are not dependent on non-restrictive access, such as via bug bounty programs, as already contemplated in Measure 16.2. The Code should likewise remove prescriptive expectations for how providers of GPAI models with systemic risk interact with external evaluators with which they might contract or otherwise agree on terms for evaluation activities. Specifically, neither resourcing expectations nor deadlines for providing access or allowing for public disclosure should be defined by the Code. Measure 16.1 limits the scope of involvement for external assessors to cases where the model poses novel risks compared to models with systemic risk already on the market, and/or the model provider has insufficient internal expertise to perform risk assessment on the systemic risks posed by the model. Where applicable, Measure 16.2 should be aligned with these conditions set under Measure 16.1.\r\nMeasures 12.3 and 12.4 set prescriptive expectations for security mitigations, including how model providers would need to protect stored model weights and related assets as well as harden software interfaces and control access to model weights. While many of the proposed implementing measures are relevant to protecting model weights, their level of detail results in an overly prescriptive approach. Whereas more prescriptive approaches establish a “ceiling” that locks in current best practice, more outcome-oriented approaches establish a “floor” for ongoing investments and improvements. For example, Measure 12.3 requires “access control and monitoring of access on all devices storing model weights, with alerts on copying to non-controlled devices.” Future capabilities may allow for automated blocking of copying to non-controlled devices, subject to manual override, but the current language could be interpreted as locking in “alerts” as a monitoring versus a prevention capability.\r\nMore prescriptive approaches also undermine an intention to be risk based, rigidly applying controls to technologies or assets where security investments are appropriately differentiated. For example, unreleased model weights and \"associated assets such as unreleased algorithmic insights\" may have different levels of sensitivity; for example, \"algorithmic insights\" may include an internal performance analysis which, while not public, does not need to be treated as strictly as unreleased model weights for a GPAI model with systemic risk. In such cases, application of differentiated controls is consistent with a risk-based approach. Likewise, the Code treats “all devices” without distinction, whereas different classes of hardware may allow for different risk mitigations. For example, a registry of all devices and locations might help ensure that drives are disposed of safely, but with protections like drive-level encryption in place, that level of tracking may not be necessary to meet a desired security outcome.\r\nRecommendation: The Code should include more outcome-oriented security measures and defer to relevant technical standards for further implementation detail, resulting in a more flexible, risk-based, and future-proof approach. KPIs could align with adherence to relevant technical standards, such as ISO 27001, ISO 27017, ISO 29147, ISO 30111, Trusted\r\n8\r\nComputing Group (TCG) standards, NIST SP 800-53, NIST SP 800-171, and INCITS 359-2004. Overly prescriptive requirements, especially included in Measure 12.3, should also be removed or recalibrated.\r\nCommitment 15 would require model providers to conduct (1) an assessment of adherence to Safety and Security Model Reports (SSMRs) every six months after placing GPAI models with systemic risk on the market; and (2) adequacy assessments of their Safety and Security Framework (SSF) within four weeks of notifying the AI Office that they have or will meet the criteria for systemic risk, or every six months, whichever comes sooner. Undertaking such exercises bi-annually is a resource-intensive endeavor without any clear added safety value. Such requirements are not risk-based and will divert AI research and engineering resources towards producing superfluous documentation.\r\nRecommendation: Expectations for regular adherence and adequacy assessments should be removed, in favor of obligations to report on updates to SSFs and to provide updates to SSRMs as deemed necessary based on major shifts in the risk profile of a model.\r\n9\r\nAppendix A\r\nRelevant sections\r\nProposed language from the second draft\r\nRecommended revisions Annex XI §1 1.(a) and (b) and Annex XII 1.(a) and (b): Intended tasks and type and nature of AI systems in which it can be integrated and acceptable use policies A list of the types of high-risk AI systems (within the meaning of Article 6 AI Act in conjunction with Annex I and III AI Act), if any, in which the model can be integrated A list of the restricted tasks with a description of the associated restrictions, including the prohibited uses beyond those prohibited by Article 5 AI Act, if any Revise this category as per following:\r\n• for providers of closed-source models: list intended uses and, in acceptable use policies, explicitly allow or prohibit use of a model in high-risk AI systems and list any prohibited uses.\r\n• for providers of open-source models: include guidance around intended or anticipated uses and prohibit use of models only for practices included in Art. 5.\r\nAnnex XI §1 1.(d) and Annex XII 1.(f): Architecture and number of parameters\r\na description of how the model architecture departs from standard model architecture practices\r\na general description of the architecture and number of parameters Annex XI §1 2.(b): Design specifi-cations of the model and training process the sequences of steps or stages involved in the training process a detailed description of the design specifications of the model and training process, including training methodologies and techniques a description of the objective and optimisation method for each step or stage in the training process a detailed description of what the model is designed to optimise for and the relevance of the different parameters, as applicable a general description for why each step or stage is implemented, along with any key assumptions a detailed description of the key design choices including the rationale and assumptions made\r\nAnnex XI §1 2.(c) and Annex XII 2.(c):\r\nInformation on data used for training, testing and validation\r\nthe fraction of the training, testing, and validation (TTV) data corresponding to each of the data acquisition methods and sources, in number of data points for each modality\r\nExpectations for information about TTV data should be limited to the fraction of data corresponding to data acquisition methods (i.e., open web, synthetic, first-party proprietary, and third-party licensed), in number of data points for each modality, consistent with the Act’s call for information on “the type and provenance of data…the number of data points, their scope and main characteristics.”\r\nFor example, this could be implemented as: Text = 10 trillion tokens made up of 50% open web, 20% synthetic, 20% first-party proprietary, and 10% third-party licensed data; and Audio = 10K hours made up of 100% third-party licensed data. Annex XI §1 2.(e): Known or estimated energy consumption The owner(s) of the hardware used in model training Remove – limit to “known or estimated energy consumption of model training (reported in MWh). If the energy consumption is unknown, the energy consumption may be based on information about computational resources used.” The location(s) of the hardware used in model training\r\nAnnex XI §1 1.(d) & Annex XII 1.(f)\r\nThe number of parameters that are active during inference\r\nRemove and replace with “number of parameters”\r\n10\r\nAnnex XI §1 2.(c) and Annex XII 2.(c): Information on data used for training, testing and validation A description of any methods implemented in data acquisition or processing, if any, to address the prevalence of:\r\n- child sexual abuse material (CSAM) or non-consensual intimate imagery (NCII) in the training, testing, and validation data\r\n- copyrighted materials in the training, testing, and validation data\r\n- personal data in the training, testing, and validation data, where relevant and applicable\r\n- identifiable biases in the training, testing, and validation data\r\n- other types of potentially harmful data in the training, testing, and validation data\r\n- other types of legality concerns in the training, testing, and validation data Remove and replace with “information on the data used for training, testing and validation, where applicable, including the type and provenance of data and curation methodologies (e.g. cleaning, filtering, etc.), the number of data points, their scope and main characteristics; how the data was obtained and selected as well as all other measures to detect the unsuitability of data sources and methods to detect identifiable biases, where applicable”\r\nAnnex XI §1 2.(d): Computational resources\r\n-\r\nThe number and type of hardware units used to train the model\r\n-\r\nThe duration of model training measured in wall clock time (reported in units of days) and hardware time (reported in units of hardware hours, e.g. GPU hours)\r\n-\r\nThe compute for a fixed computation (e.g. generating 1000 words for a model capable of text generation) used during model inference (reported in units of integer or floating-point operations)\r\nRemove and replace with “the compute used during model training (reported in units of integer or floating-point operations)” Commitment 1 Table, Annex XI §1 2.(e): Known or estimated energy consumption\r\n- The owner(s) of the hardware used in model training\r\n- The location(s) of the hardware used in model training\r\n- The known or estimated energy mixture for energy used to perform computation on the hardware used in model training\r\n- The known or estimated emissions associated with model training (reported in tCO2eq)\r\n- A description of the methodology for measuring or estimating energy cost, consumption and/or emissions for model training Remove and replace with “the known or estimated energy consumption of model training (reported in MWh). If the energy consumption is unknown, the energy consumption may be based on information about computational resources used”"},"recipientGroups":[{"recipients":{"parliament":[],"federalGovernment":[{"department":{"title":"Bundesministerium für Wirtschaft und Klimaschutz (BMWK) (20. WP)","shortTitle":"BMWK (20. WP)","url":"https://www.bmwk.de/Navigation/DE/Home/home.html","electionPeriod":20}}]},"sendingDate":"2025-02-10"}]},{"regulatoryProjectNumber":"RV0006766","regulatoryProjectTitle":"EU Verordnung zur Festlegung harmonisierter Vorschriften für künstliche Intelligenz (KI Verordnung)","pdfUrl":"https://www.lobbyregister.bundestag.de/media/03/0a/455621/Stellungnahme-Gutachten-SG2502120009.pdf","pdfPageCount":2,"text":{"copyrightAcknowledgement":"Die grundlegenden Stellungnahmen und Gutachten können urheberrechtlich geschützte Werke enthalten. Eine Nutzung ist nur im urheberrechtlich zulässigen Rahmen erlaubt.","text":"Key opportunities for improvement in the EU AI Act’s\r\nsecond draft of the Code of Practice for GPAI model providers\r\nWhile the second draft Code of Practice (“Code”) contains some welcome improvements and clarifications, substantial revisions are necessary to bring the Code into alignment with the safety and innovation-friendly framework that the AI Act has as its goal. Future drafts of the Code should more closely align with the scope of the AI Act, avoiding significant expansions or divergences from the legal text, and define more outcome-oriented and less prescriptive Measures. With substantial revisions, the Code could provide the clarity and streamlined approach needed to support effective compliance with the AI Act for an expanding European AI ecosystem, while ensuring transparent and responsible development and deployment of GPAI models.\r\nThe current draft transparency and copyright Commitments applicable to all GPAI models go beyond the AI Act’s scope and include Measures that would undermine innovation without adding clear safety or other regulatory value.\r\n•\r\nOverly detailed technical documentation expectations raise significant concerns around confidentiality, trade secret protection, information hazards, and impact on innovation. We recommend removing and aligning with the AI Act’s legal text, disclosure of information that could:\r\no\r\nConstitute trade secrets (e.g., description of how the model architecture departs from standard practices; information about “each step or stage” of model training);\r\no\r\nPose risks of misinterpretation given a lack of standardized methods for tracking and reporting (e.g., computational resources for model inference; detailed information about acquisition or processing methods to address risks of concern in training, testing and validation data); and\r\no\r\nRisk confusing downstream AI system providers, therefore disrupting innovation (e.g., a list of allowed types of high-risk systems or “restricted tasks” at the model layer).\r\n•\r\nCopyright provisions should adhere more strictly to the Act and be refocused on Measures that are designed to support model providers in putting in place appropriate policies and procedures for, rather than proactively proving, compliance.\r\no\r\nAmend the Preamble to focus on clear obligations as set out in the articles of the AI Act, including, for example, the provisions in Article 53 1(c). Remove requirements based on interpretations of the legal text, that are not clearly grounded in legislative provisions and conflate recitals of the AI Act with enforceable obligations in the articles of the Act. Such requirements risk creating ambiguity and overextending the scope of Signatories' responsibilities under the Code.\r\no\r\nRemove Measures 2.3 and 2.4 to avoid requiring model providers to proactively demonstrate proof of compliance and reversing the burden of proof, which would impose a significant administrative burden and potentially stifle innovation.\r\nA significant portion of the draft Commitments applicable to GPAI models with systemic risk are overly prescriptive, locking in rapidly evolving safety and security practices, or duplicative, resulting in redundant expectations that risk diverting safety resources. The Code should be reviewed with the goal of ensuring that all Measures and proposed KPIs are as streamlined and outcome oriented as possible, strengthening safety and innovation outcomes by enabling appropriate flexibility and prioritization.\r\n•\r\nOverly prescriptive expectations for model providers to commit to a specific percentage of engineering hours risk locking in current approaches to risk assessment rather than incentivizing the development of rigorous new techniques and automation tools for challenges of scale, such as red teaming. More outcome-oriented approaches would also allow model providers to select the most suitable methods to advance safety depending on the model and internal context.\r\n•\r\nOverly prescriptive expectations for security mitigations, including how model providers would need to protect stored model weights and related assets, should be replaced with more risk-based and outcome-oriented security measures, including to reflect distinctions between the risk profile of covered\r\nassets. Where feasible, deferring to relevant technical standards1 for further implementation detail will support a more flexible, risk-based, and future-proof approach.\r\n•\r\nExpectations to report on adherence and adequacy assessments every six months are not risk-based and will divert AI safety engineering and research resources towards producing redundant documentation. They should be removed in favor of risk-based commitments to report on updates to Safety and Security Frameworks and to provide updates to Safety and Security Model Reports, as deemed necessary based on major shifts in a model’s risk profile.\r\n•\r\nExpectations to provide, upon request, external evaluators with sufficient time, engineering support, compute budgets, and “non-restrictive” access to models go far beyond the still-evolving norms of providing select assessors with access to models pre- and/or post-deployment under strict guidelines and with security and confidentiality restrictions in place. The Code should instead recognize a variety of approaches to leveraging external assessments post-deployment, including via approaches that are not dependent on non-restrictive access, such as external research, responsible disclosure, and bug bounty programs.\r\nSeveral draft Commitments go beyond the AI Act’s and thereby the Code’s intended scope of GPAI models by implicating AI systems. The Code should clearly scope its expectations for systemic risk evaluation and mitigation to practices that can be implemented exclusively at the model layer, rather than expanding commitments to cover the downstream systems layer. This will improve clarity and appropriately apply value chain responsibilities, supporting innovation and achievement of desired regulatory outcomes.\r\n•\r\nRemove provisions for model providers to evaluate a model’s capabilities and limitations for all existing and future system-level deployment scenarios “relevant” to a risk being assessed. The Code should instead explicitly set expectations on how model-level risk assessment and mitigation, must work in concert with system-level risk assessment and mitigation, already covered by the AI Act’s requirements for high-risk systems.\r\n•\r\nLimit provisions on model-level post-deployment monitoring that directly implicate the system level and can also present significant privacy challenges, as well as conflict with requirements for highly regulated global customers. Receiving and investigating reports from system providers and deployers and actioning those reports as appropriate would suffice.\r\n1 For example: ISO 27001, ISO 27017, ISO 29147, ISO 30111, Trusted Computing Group (TCG) standards, NIST SP 800-53, NIST SP 800-171, and INCITS 359-2004."},"recipientGroups":[{"recipients":{"parliament":[],"federalGovernment":[{"department":{"title":"Bundesministerium für Wirtschaft und Klimaschutz (BMWK) (20. WP)","shortTitle":"BMWK (20. WP)","url":"https://www.bmwk.de/Navigation/DE/Home/home.html","electionPeriod":20}}]},"sendingDate":"2025-02-10"}]},{"regulatoryProjectNumber":"RV0006767","regulatoryProjectTitle":"EU Data Act","pdfUrl":"https://www.lobbyregister.bundestag.de/media/92/8d/322766/Stellungnahme-Gutachten-SG2406280060.pdf","pdfPageCount":20,"text":{"copyrightAcknowledgement":"Die grundlegenden Stellungnahmen und Gutachten können urheberrechtlich geschützte Werke enthalten. Eine Nutzung ist nur im urheberrechtlich zulässigen Rahmen erlaubt.","text":"„Innovative Datenpolitik: Potenziale und Herausforderungen“\r\nSchriftliche Stellungnahme von Rebekka Weiß, Senior Manager Government Affairs, Microsoft Deutschland GmbH zur öffentlichen Anhörung des Ausschusses für Digitales des Deutschen Bundestags am Mittwoch, 26. Juni 2024\r\n\r\n\r\nVorbemerkungen\r\nDie EU hat in den vergangenen zwei Legislaturperioden zahlreiche Rechtsakte erlassen, die die Datenpolitik berühren und damit einen umfassenden Regulierungsrahmen für Datenverarbeitung, Datensicherheit, Dateninfrastrukturen, Datenzugang und -austausch geschaffen. \r\nDie Datenschutzgrundverordnung (DS-GVO) war dabei lange Dreh- und Angelpunkt der datenpolitischen Debatte. Dies hat den Diskurs stark geprägt und den Datenschutz fest in allen Unternehmen verankert. Es hat die datenpolitische Debatte aber zT auch auf datenschutzrelevante Fragestellungen verknappt UND Bereiche der Datenökonomie bisher daher unterbelichtet gelassen – diese Aspekte müssen daher nun viel stärker in den Mittelpunkt gerückt werden, um einerseits die verschiedenen Regulierungsinstrumente einzubeziehen und andererseits eine neue Balance und einen ausgewogenen Interessensausgleich zu ermöglichen. \r\nEs ist daher ausdrücklich zu begrüßen, dass die Anhörung zum Thema „Innovative Datenpolitik“ vor allem auch den EU Data Act (DA), den EU Data Governance Act (DGA), den EU Digital Services Act (DSA) und den EU AI Act sowie Fragen der Infrastrukturebene direkt in Bezug nimmt. Damit ist ein wichtiger Schritt getan und das Narrativ für eine innovativere Datenpolitik geöffnet.\r\nWir befinden uns zudem im Zeitalter der Künstlichen Intelligenz. Zukunfts- und innovationsoffene Entwicklung und Interpretation von Regulierungen wird entscheidend dafür sein, dass wir die Potenziale einer wirklich innovativen Datenpolitik so nutzen können, dass Politik, Unternehmen und die Gesellschaft vom enormen Potenzial der KI nachhaltig profitieren können. \r\nDer nationale Gesetzgeber muss sich zeitnah zudem mit der Ausgestaltung der nationalen Aufsichtsstrukturen befassen. Benötigt wird ein sowohl nach innen (innerhalb Deutschlands bezüglich der beaufsichtigten Stellen) als auch nach außen (Abstimmung und Harmonisierung innerhalb der EU und international) funktionierendes System. Klare Zuständigkeitszuweisungen helfen dabei, den Unternehmen Sicherheit zu vermitteln, wer die richtigen Ansprechpartner sind. Nicht (nur) für Aufsicht und Kontrolle, sondern vor allem auch für Beratung und die gemeinsame Entwicklung von Interpretationen der verschiedenen Regulierungsinstrumente und ihres Zusammenspiels. Keine einzelne Behörde wird dabei alle notwendigen Kompetenzen und die notwendige rechtliche und technische Expertise allein abdecken können – es wird auf Kooperation und Abstimmungen und die Einbeziehung von Fachbehörden und Experten aus der Praxis ankommen. Eine Bündelung dieser Koordinierungsfunktion bei der Bundesnetzagentur (BNetzA) als zuständige Aufsicht für DSA, DGA, DA und AI Act scheint der sinnvollste Weg zu sein, um Klarheit, Rechtssicherheit und Entscheidungsfähigkeit sicherzustellen.\r\nEin weiterer wichtiger Baustein für innovative Datenpolitik sind verstärkte (Förder-)Maßnahmen im Bereich der Standardisierung sowie der Entwicklung von (branchenspezifischen) Codes of Conducts bzw. Codes of Practices, die die Umsetzung und Implementierung der zahlreichen und verzahnten Regulierungen erleichtern. Hier sollten Politik, Wissenschaft und Unternehmen in den Entwicklungsgremien zusammenwirken, um praktikable, internationale anschlussfähige und für die Unternehmen umsetzbare Lösungen zu entwickeln.\r\nDie nachfolgende Stellungnahme führt hierzu einige Details aus und bezieht sich auf den Fragenkatalog vom 7. Juni 2024, der insgesamt 18 Fragen umfasst. \r\n\r\nBerlin, den 24. Juni 2024\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\nFragenkatalog und Antworten:\r\n\r\n1)\tMit dem Data Act und dem Data Governance Act (und weiteren Rechtsakten) wurde ein wegweisender europäischer Datenraum geschaffen. Welche Spielräume hat der deutsche Gesetzgeber bei der Umsetzung der Vorgaben, die er für eine innovative Datenpolitik nutzen sollte und welche Maßnahmen sehen Sie bei der Umsetzung - etwa in der Bündelung der Aufsicht für die digitalpolitischen Dossiers – als besonders wichtig an?\r\nFür eine innovative Datenpolitik ist angesichts der bereits bestehenden umfassenden Regulierung vor allem eine moderne, sichere und vertrauenswürdige Infrastruktur notwendig, die datengetriebene Geschäftsmodelle und Künstliche Intelligenz befähigt sowie Datenaustausch zwischen Industriepartnern und öffentlicher Hand ermöglicht. \r\nDer Rechtsrahmen aus Data Governance Act und Data Act hat zahlreiche Möglichkeiten geschaffen, den Datenaustausch zu befördern. \r\nDer deutsche Gesetzgeber sollte sich nun bei der Umsetzung weniger auf zusätzliche regulierende Vorschriften und stärker auf die Schaffung einer leistungsfähigen Dateninfrastruktur und die Förderung der digitalen Kompetenz fokussieren. Investitionsoffenheit und ein stabiler Rechtsrahmen sowie das klare Signal an Industrie und Wissenschaft, dass Dateninnovationen gefördert werden, sind dafür essentiell. \r\nDie Bündelung der Aufsicht für digitalpolitische Dossiers sollte mit der Benennung einer zuständigen Stelle beginnen. Die BNetzA scheint hierfür geeignet und hat bezüglich der notwendigen Abstimmungsprozesse mit weiteren einzubeziehenden Behörden wichtige Vorerfahrungen. Zudem erfüllt sie bereits die Rolle als Digital Services Coordinator, was sich gut mit der Rolle der Daten- und KI-Aufsicht verzahnen lässt. Für Abstimmungsprozesse mit Datenschutzaufsichtsbehörden und sektoral zuständigen Aufsichtsbehörden wie Bafin, KBA oder auch dem BSI für Fragen der Informationssicherheit kann die BNetzA Anleihen an die Abstimmungsprozesse aus ihrer Rolle als DSC ziehen. \r\nEine Bündelung der Zuständigkeit muss dabei vor allem drei wesentliche Zwecke erfüllen: \r\n•\tKlarheit für beaufsichtigten Unternehmen, wer zuständige Behörde und Ansprechpartner ist \r\n•\tSektor-, Rechts- und Technologieexpertise anderer Behörden in den Bewertungsprozess einbeziehen\r\n•\teinheitliche Bewertungen inländischer Sachverhalte herbeiführen und Entwicklung als zentrale Stelle für europäische Abstimmungsprozesse\r\n\r\n2)\tFür eine innovative Datenpolitik bedarf es einer innovativen, modernen aber auch sicheren und vertrauenswürdigen Infrastruktur. Was sind zentrale Elemente dieser Infrastruktur, wie muss diese ausgestaltet sein, um eine innovative Datenpolitik zu ermöglichen und wie weit sind wir beim Aufbau einer solchen Infrastruktur und welche Bedeutung kommt hier einer souveränen europäischen Cloudinfrastruktur zu? \r\nDie zentralen Elemente einer innovativen, modernen, sicheren und vertrauenswürdigen Infrastruktur für eine innovative Datenpolitik umfassen die den Ausbau von Rechenzentren, Cloud-Technologie, IT-Sicherheitsstandards und Datenschutz. Eine ausgebaute Rechenzentren- und Cloudinfrastruktur ist von besonderer Bedeutung, da sie die Grundlage für effiziente und moderne Organisation von Unternehmen und Geschäftsmodellen und einer modernen Verwaltung bildet. \r\nInnovative Datenpolitik basiert auf der Möglichkeit des dezentralen Arbeitens, flexibler Skalierungsfähigkeit und Vernetzung sowie Datenauswertungen mittels Methoden der Künstlichen Intelligenz. Die Cloud ist dafür faktisch und praktisch der sinnvollste Distributionskanal für IT-Services und KI – und damit auch für datengetriebene Verwaltung und Politikgestaltung.\r\nUm insbesondere die öffentliche Verwaltung zukunftsfest und innovativ zu gestalten, bedarf es einer umfassenden Cloudifizierung der öffentlichen Verwaltung. Souveränitätsüberlegungen spielen hierbei eine wichtige Rolle. Souveränität muss dabei zum einen als mehrdimensionale und zum anderen als gestufte Anforderung verstanden werden, um Anforderungen gerecht zu werden und Skalierbarkeit sicherzustellen. Konkret heißt das:\r\n•\tNicht alle Cloud-Workloads sind gleich sensibel, daher ist es wichtig zu definieren, welcher Souveränitäts- und Sicherheitsgrad für welche Art von Workloads erforderlich ist.\r\n•\tEin entsprechender Rahmen für die Bewertung bieten die Cloud Platform Requirements des BSI, der einen Rahmen bis in hoch sensible Inhalte definiert. \r\n•\tSouveränität, ob mit Blick auf Cloud- oder Datennutzung, muss in klare technisch implementierbare Vorgaben übersetzt werden. Dies ermöglicht die Gestaltung entsprechender Geschäftsmodelle und Angebote für alle Marktteilnehmer.\r\n\r\n\r\n\r\n\r\n3)\tOft wird Datenschutz als Hemmnis für innovative Datenpolitik vorgeschoben oder werden Datenpolitik und Datenschutz gegeneinander in Stellung gebracht. Wie sehen Sie die Rolle des Datenschutzes für eine innovative Datenpolitik, welche Instrumente wie beispielsweise Datentreuhänder können welchen Beitrag leisten, um Datenschutz und innovative Datenpolitik zusammenzudenken und sehen Sie es auch als Wettbewerbsvorteil an, innovative Datenpolitik unter Wahrung des Datenschutzes made in EU sicherzustellen? \r\nDatenschutz ist integraler Bestandteil einer innovativen Datenpolitik. Datenschutz und innovative Datenpolitik sollten daher nicht als Gegensätze betrachtet werden, sondern als komplementäre Elemente, die gemeinsam einen Rahmen für verantwortungsvolle Innovation schaffen. Akzeptanz und Vertrauen in digitale Innovationen hängen dabei sowohl vom Datenschutz als auch von Praktikabilität und Nutzerfreundlichkeit der Dienste ab. \r\nEs ist unerlässlich, eine Balance zwischen Datenschutz und Datennutzung zu finden. Dies erfordert ein chancenorientiertes Narrativ, das die Vorteile der Datennutzung hervorhebt, ohne die Bedeutung des Datenschutzes zu mindern. Der Reflex, bei jeder Innovation zuerst zu hinterfragen, ob diese Datenschutzvorgaben einhält, ist einer innovativen Datenkultur abträglich. \r\nDie Abwägung von Interessen erfordert zudem vor allem Kenntnis über und Berücksichtigung von Rechtsgütern auch außerhalb des Datenschutzes. Dies gilt insbesondere im Kontext neuer technologischer Entwicklungen, die gesamtgesellschaftliche Mehrwerte bringen. Beispielhaft seien folgende Szenarien gebildet: Wenn wir beispielsweise dem Fachkräftemangel mit effizienzsteigernden KI-Anwendungen begegnen wollen, muss dieser Mehrwert auch bei der Datennutzung berücksichtigt werden. Bei der Erhebung und Nutzung von Trainingsdaten für die Weiterentwicklung des autonomen Fahrens müssen selbstverständlich auch ausreichend Datensätze aller Altersgruppen sowie von Menschen mit besonderen Mobilitätsanforderungen einbezogen werden können. Die Sicherheit aller Straßenverkehrsteilnehmenden und die Sicherheitsgewinne durch Fahrassistenzsysteme und autonomes Fahren müssen in eine Balance zu den Datenschutzinteressen gebracht werden. Gleiches gilt für Innovationen im Bereich des Gesundheitswesens, sowohl mit Blick auf Fortschritte in der Forschung als auch in der Digitalisierung des Gesundheitswesens als Ganzes. Nicht ohne Grund hat bereits das Gutachten der Datenethikkommission 2019 festgehalten, dass nicht nur die Nutzung sondern auch die Nichtnutzung von Daten ethisch zu verantworten ist. \r\nEine der jeweiligen Anwendung und Datennutzung angemessene Datensicherheit und Privacy Enhancing Technologies (PETs) werden in der Praxis bereits zur Anwendung gebracht (siehe hierzu auch Frage 16). Datentreuhandmodelle bzw. Datenplattformen für Datenräume können ebenfalls zum Einsatz gebracht werden – sie müssen jedoch skalierbar, international anschlussfähig und praktikabel ausgestaltet sein, um insbesondere Unternehmen dabei zu unterstützen Daten zu teilen. Es ist zu begrüßen, dass diesbezüglich auch verschiedentlich Forschungsprojekte gefördert werden, um branchenbezogene Lösungen zu finden.  \r\nInsbesondere falsch verstandener Datenschutz hat in der Vergangenheit dennoch zu viel Verunsicherung geführt. Eine innovative Datenpolitik muss hier ansetzen, ein neues Narrativ etablieren und der Komplexitäten von Dateninnovationen Rechnung tragen. Auch den Datenschutzaufsichtsbehörden kommt hier eine wichtige Rolle zu. Die Entwicklung einer harmonisierteren Auslegung der Datenschutzregeln und die Leistung eines Beitrags zu einer chancenorientierten Datenpolitik liegt auch in ihrer Hand. Dies gilt umso mehr, weil die Datenschutzaufsichtsbehörden, ganz unabhängig von der Strukturfrage, zukünftig auch in die Auslegung von Vorschriften vieler weiterer „Digitalgesetze“ einbezogen werden müssen.\r\nIn den vergangenen Jahren hat die Vielstimmigkeit im Datenschutzdiskurs in Deutschland teilweise dazu geführt, dass Dateninnovationen mit Skepsis begegnet wird und das Risikonarrativ das Chancennarrativ überlagert. Dies hat dazu geführt, dass innovative Geschäftsmodelle verzögert oder auch gar nicht durchgeführt wurden.  Es ist an der Zeit, diesen Trend umzukehren und einen positiven Diskurs über Dateninnovation zu fördern. Das beginnt auch bei der Digitalisierung der Politik selbst, die eine Vorreiterstellung bei der Nutzung innovativer Dienste einnehmen sollte. Daneben ist es Aufgabe der Unternehmen, innovative Datenprodukte und -services zu entwickeln, die für KundInnen und Partnerunternehmen attraktiv, skalierbar und nutzerfreundlich sind. Zusätzlicher Wettbewerb entsteht so auch durch den Aufbau lokaler Ressourcen, der innovative Geschäftsmodelle voranbringt, durch zusätzliche Absicherungen im Bereich der internationalen Datentransfers nach den Regeln der DS-GVO  oder die Weiterentwicklung von Privacy Enhancing Technologies (PETs) . \r\n\r\n\r\n\r\n\r\n\r\n4)\tWelche Elemente fehlen in Deutschland auf dem Weg zu innovativer Datenpolitik, wie können weitere Anreize für das Teilen von Daten in wechselseitigem Interesse weiter ausgebaut werden und welche Bedeutung – Stichwort Open Data, Datenlabore und Transparenzgesetz – kommen dem Staat und der öffentlichen Verwaltung zu und werden diese dieser gerecht? \r\nDatenlabore und der Austausch zwischen den verschiedenen Stellen der öffentlichen Hand können wichtige Bausteine für eine innovative Datenpolitik sein. Die Finanzierung solcher Ansätze sollte sowohl auf Bundes- als auch auf Landes- und Kommunalebene sichergestellt sein. Innovative Datenpolitik darf nicht als Projekt einer Legislatur verstanden werden, sondern benötigt eine dauerhafte Struktur (und Finanzierung). Nur durch die Beständigkeit entsprechender Strukturen kann der notwendige Kulturwandel, der Kompetenzaufbau und die notwendige Data Literacy aller Verwaltungsangestellter dauerhaft in der öffentlichen Hand umgesetzt werden.\r\n\r\n5)\tHaben Forschung, Zivilgesellschaft und öffentliche Stellen ausreichend Datenzugang zu den Daten sehr großer Online-Plattformen (VLOPs) und anderen datenhaltenden Unternehmen, um gemeinwohlorientierte Fragestellungen zu Themen wie beispielsweise Klimaschutz, sozialer Gerechtigkeit oder effizienter Verwaltung zu bearbeiten bzw. gibt es weitere Ansatzpunkte im nationalen und EU-Recht, um einen solchen Datenzugang zu gewährleisten und welchen Regelungsbedarf sehen Sie insoweit für die Zukunft? \r\nEs gibt sowohl auf nationaler als auch auf EU-Ebene Ansätze, um den Datenzugang zu gewährleisten. Neben den Regelungen des DSA bieten beispielsweise auch der Data Act, das geplante Forschungsdatengesetz und Vorschriften im Gesundheitsbereich Hebel, um einen Datenzugang zu ermöglichen. Weiterer Regulierungen bedarf es daneben nicht zwangsläufig. Bessere und passgenauere Lösungen können stattdessen Codes of Conduct bieten. \r\nZudem haben kürzliche Fälle mit Empfehlungen der Kommission gezeigt, dass Datenzugänge auch neben gesetzlichen Verpflichtungen gewährt werden. Kooperative Ansätze, die es Unternehmen ermöglichen Datenzugänge unter Berücksichtigung ihrer eigenen Geschäftsinteressen und auch der Interessen ihrer KundInnen und NutzerInnen einzurichten, sollten daher gefördert werden. Über solche Lösungen kann anlassbezogen und ausgerichtet am Forschungsinteresse unter Mitwirkung der Unternehmen eine Balance zwischen Innovation, Forschungs- aber auch Datenschutzinteressen sowie Investitionsschutz und Geschäftsgeheimnisschutz gefunden werden. \r\n\r\n6)\tWelchen Effekt haben neue Formate der Datenpolitik wie das von BMWK und BMI vorangetriebene Dateninstitut für eine innovative Datenpolitik und braucht es weitere Maßnahmen, um eine breite Nutzung von Daten für das Wohl der Gesellschaft zu ermöglichen? \r\nUm eine breite Nutzung von Daten und eine innovative Datenpolitik zu fördern, braucht es vor allem eine skalierbare Infrastruktur und die Förderung des digitalen Skillsets in der gesamten Bevölkerung. Ob in schulischen oder weiterführenden Bildungszweigen oder auch in beruflicher Fortbildung: Data Literacy, Medien- und Digitalkompetenz sowie Cybersecurity Trainings sind die wichtigen Bausteine für innovative Datenpolitik. Hier kann und sollte die Politik auf allen Ebenen Unterstützung und Finanzierung bewerkstelligen und Partnerschaften knüpfen, um den wachsenden Fortbildungsbedarf zu decken.  \r\nDurch Kooperationen zwischen Schulungsdienstleistern, Instituten, Universitäten, gemeinnützigen Organisationen und Unternehmen können Ansätze skaliert werden, sodass die gesamte Gesellschaft erreicht wird. \r\nBeispielshaft sei die Initiative „IT-Fitness“ erwähnt, die Microsoft gemeinsam mit dem Förderverein für Jugend und Sozialarbeit e.V. (fjs) entwickelt hat. Diese bieten beispielsweise kostenlose und einsteigerfreundliche Lernerfahrungen in den Bereichen KI, Cybersecurity und Green Digital Skills in Deutschland an – mit dem Ziel, mehr als 550.000 Menschen zu erreichen. Auch die Initiative „BoostYourSkills“ ist aus einer Industriepartnerschaft in Zusammenarbeit mit Unternehmen wie Schaeffler und DHL Group entstanden. Sie ist auf einen erfolgreichen Berufseinstieg ausgerichtet und soll einen Karrierestart in Bereichen erleichtern, in dem digitale Fähigkeiten immer wichtiger werden. Auch die ReDI School of Digital Integration stellt digitale Skills in den Mittelpunkt: Sie hilft geflüchteten Menschen und marginalisierten Gruppen, einen Arbeitsplatz in der deutschen IT-Branche zu finden. \r\nJede politische Einheit, jede Verwaltungsstelle (und auch jede Organisation aus Wissenschaft und Unternehmen) sollte in der Zeitplanung ihrer Mitarbeitenden ein Stundenkontigent einplanen, das dezidiert für den Aufbau dieser neuen Kompetenzen verwendet werden kann. \r\n\r\n7)\t[Welche Form der Zusammenarbeit ist auf internationaler Ebene notwendig, um eine innovative Datenpolitik proaktiv und menschenzentriert voranzutreiben und welche Bedeutung kommt dabei dem „globalen Süden“ zu?]\r\n\r\n8)\tWelche Möglichkeiten gibt es, mithilfe von datenbasierten Anwendungen der Klimakrise zu begegnen und welche datenpolitischen Maßnahmen sind notwendig, um das Potential für eine nachhaltige Digitalisierung sowie für einen innovativen Klimaschutz voll auszuschöpfen? \r\nUmweltschutz und Nachhaltigkeit sind von zentraler Bedeutung für Unternehmen und Politik. Datenbasierte Prognosen können und müssen dafür eingesetzt werden, um die bevorstehenden Herausforderungen konkret abschätzen zu können. So können wir auf der Grundlage von Daten Antwortstrategien entwickeln und bewerten. Datengetriebene Technologien und Künstliche Intelligenz sind daher zweifellos Teil der Lösung.  KI ist dabei nicht die Lösung an sich, sondern ein entscheidendes Werkzeug, das Wissenschaftler, Forscher, Unternehmen und die Politik nutzen können, um präziser zu arbeiten und effizienter mit maßgeschneiderten Methoden reagieren zu können.\r\nKI kann zudem dazu beitragen, nachhaltige Energie zu fördern und den Klimawandel zu bekämpfen. Beispielsweise kann sie den Menschen dabei helfen, ihren Energieverbrauch zu optimieren, das Energieangebot und die Nachfrage besser vorherzusagen und darauf zu reagieren. Neue Analysen können zudem Wetterextreme genauer vorhersagen oder auch die Wiederaufforstung beschleunigen, indem sie die besten Gebiete für die Bepflanzung ermittelt. Zugleich muss selbstverständlich daran gearbeitet werden, moderne Technologien so energiesparsam und nachhaltig wie möglich zu gestalten.\r\nDie Wettbewerbsfähigkeit in Deutschland wird jedoch bisher zum Teil durch hohe Energiekosten, Fachkräftemangel und aufwändige bürokratische Genehmigungsprozesse beeinträchtigt. Jedoch zeigen aktuelle Projekte, dass die Verzahnung von Nachhaltigkeit, Nutzung regenerativer Energien und Ansiedlung innovativer Datenstrukturen möglich ist.  Innovative Datenpolitik und der Einsatz von modernen Technologien zur Bekämpfung des Klimawandels muss daher auch die Bedeutung von Rechenzentren in den Mittelpunkt stellen. \r\n\r\n9)\tWie beurteilen Sie das Zusammenwirken der zahlreichen Dateninitiativen (z.B. Dateninstitut, MISSION KI, Gaia-X Hub, Förderprojekte, Datenraumvereine, Data Spaces Support Center, Gaia-X etc.) auf deutscher und europäischer Ebene im Hinblick auf Ihre Kohärenz und Zielerfüllung? Wie bewerten Sie ihr Einzahlen auf die Erfüllung von Compliance-Pflichten durch die Wirtschaft, das Ausnutzen von unternehmerischen Effizienzreserven und der Schaffung von Schlüsselinnovationen in Europa, die das Potential haben, ganz neue Märkte zu schaffen? \r\nGrundsätzlich ist zu begrüßen, dass in verschiedenen Projekten und Initiativen Schlüsselfragen der Datenökonomie analysiert und bearbeitet werden. Forschungsprojekte z.B. im Bereich der Datenräume sind wichtige Ergänzungen, um vor allem zukunftsfähige Datennutzungen zu erarbeiten. \r\nDie Verzahnung der einzelnen Initiativen untereinander und auch die inhaltliche Verzahnung bzw. inhaltliche Erweiterung zwischen Datennutzung und KI sind jedoch ausbaufähig. Die Zielerfüllung aller Initiativen muss sich in jedem Fall an ihrer Praktikabilität messen lassen. Die Nutzbarkeit der Ergebnisse hängt dabei unter anderem davon ab, wie gut die Einbeziehung der Praxis im Entwicklungsprozess umgesetzt wird. Aufgrund der Vielzahl an Initiativen ist es jedoch für viele Unternehmen aus Kapazitätsgründen kaum möglich, sich einzubringen. Zudem wird in den verschiedenen Projekten, Instituten und Förderprogrammen häufig parallel an ähnlichen Fragen gearbeitet. Die Dopplung führt zu zusätzlichen Aufwänden. Bei der Evaluierung der Projekte sollte daher auch eine Zusammenlegung und vor allem der Anschluss an internationale Dateninitiativen und -projekte erwogen werden. \r\n\r\n10)\tDie Bundesregierung hat im Jahr 2023 eine überarbeitete Datenstrategie veröffentlicht (https://www.bundesregierung.de/breg-de/themen/digitalisierung/datenstrategie-2023- 2216620). Wie beurteilen Sie diese in ihrer Machart und Zielsetzung und in ihrer bisherigen Umsetzung? \r\nDie 2023 veröffentlichte Datenstrategie betont grundsätzlich die richtigen Zielstellungen: die Nutzung und Verwaltung von Daten, um Innovationen und Fortschritt zu fördern. Positiv ist außerdem die klare Betonung der Bedeutung einer neuer Datenkultur.\r\nDie bisherige Umsetzung der Strategie zeigt Fortschritte, jedoch bleibt entscheidend, dass klare Aufträge und Umsetzungszeiträume festgelegt werden, um die Effektivität und Transparenz der Maßnahmen zu gewährleisten. Dies war bereits eine Stärke der früheren Datenstrategie und sollte beibehalten werden.  Zudem sollten Priorisierungen klar definiert und in die Strategie eingearbeitet werden, um sicherzustellen, dass die wichtigsten Maßnahmen zuerst umgesetzt werden.\r\nZudem ist es von großer Bedeutung, dass die Datenstrategie von einer chancenorientierten Kommunikationsstrategie flankiert wird. Eine solche Strategie sollte die Vorteile der Datennutzung hervorheben und dabei vor allem bei innovativen Datennutzungsmodellen der öffentlichen Hand auf die Fragen von Bürgerinnen und Bürger eingehen. Investitionen in die digitale Infrastruktur und die Schaffung von Rechtssicherheit für Unternehmen sind ebenfalls wichtige Bestandteile, um die Zukunftsfähigkeit der deutschen Datenpolitik zu sichern. Diese Bausteine sollten in der Datenstrategie zukünftig noch stärker hervorgehoben werden.\r\n\r\n11)\tWie sollte, vorangestellt die Zielparameter einer verbesserten Datenverfügbarkeit- und Nutzbarkeit, eine grundlegende Neuordnung der Datenschutzaufsicht in Deutschland aussehen, wo genau sollte eine Reform der DSGVO ansetzen und welche möglichen Restriktionen sehen Sie hierbei? \r\nFür eine innovative Datenpolitik in Deutschland ist weniger die Struktur der Datenschutzaufsicht allein entscheidend, als eine zukunftsorientierte Einbeziehung von verschiedenen Behörden für die Auslegung und Umsetzung des gesamten digitalpolitischen Regelungsrahmens. Datenverfügbarkeit und -nutzbarkeit sind angesichts der Vielzahl an neuen Regulierungsinstrumenten und auch der technologischen Entwicklung keine Fragen mehr, die sich nur nach Datenschutzrecht entscheiden lässt. \r\nDie DS-GVO wird durch zahlreiche datenrechtliche Regulierungsinstrumente (insb. Data Act, Data Governance Act, AI Act, EHDS) und digitale Rechtsakte (bspw. Digital Services Act, Digital Markets Act) ergänzt, die allesamt datenbezogene Regelungen und Rechte enthalten. \r\nDie Weiterentwicklung der deutschen Aufsichtslandschaft muss daher umfangreicher angegangen und funktionierende Konsultationsmechanismen entwickelt werden. Eine wichtige Weichenstellung kann dabei dadurch erreicht werden, dass die Aufsicht in Aufstellung, Funktionsweise und Finanzierung so ausgestattet wird, dass Kapazitäten für Beratung und Begleitung innovativer Datenprojekte frei sind. Die Aufsichtsbündelung bei der BNetzA kann eine wichtige Weichenstellung für eine zukunftsfähige Aufsichtsstruktur sein. Dies würde auch den Wechsel von Risiko- zum Chancennarrativ nachhaltig fördern und den beaufsichtigten Unternehmen die benötigte Rechtssicherheit geben.\r\nEine grundlegende Reform der DS-GVO selbst braucht es für eine innovative Datenpolitik zum jetzigen Zeitpunkt nicht. Um konkrete Datennutzungen zukünftig zu erleichtern, scheint es eher angezeigt, neue Erlaubnistatbestände in EU-Verordnungen zu etablieren. Die Evaluierung der EU-Kommission zur Frage der DS-GVO Reform hat ebenfalls bestätigt, dass die europäische Industrie wenig Vorteile in einer Überarbeitung der DS-GVO sieht. Die Gründe hierfür sind vielfältig: die voraussichtlich lange Zeit und der Aufwand für die Überarbeitung, Investitionsschutz für getätigte Umsetzungs- und Compliancestrecken, erlangtes Vertrauen durch den derzeitigen Rechtsrahmen und ein insgesamt hoher Implementierungsaufwand durch die Vielzahl an Regulierungen. \r\nZudem sind die neueren Digitalrechtsakte alle mit der DS-GVO verzahnt und arbeiten mit Pauschalverweisen auf deren Geltung. Eine nun erfolgende Änderung der DS-GVO könnte daher neue Fragen auch hinsichtlich der weiteren datenbezogenen Rechtsakte aufwerfen und die Rechtssicherheit für Unternehmen eher senken als erhöhen.\r\nZwar sind größere Änderungen an der DS-GVO im Moment nicht notwendig, doch könnten der EDSA und die Datenschutzbehörden der Mitgliedstaaten in wichtigen Bereichen neue Leitlinien bereitstellen, die die Verbraucher schützen, die Sicherheit für Unternehmen erhöhen und die neuen Rechtsakte sowie technologische Entwicklungen mit berücksichtigen. In den vergangenen Jahren hat sich der Datenschutzdiskurs allzu häufig auf die Einwilligung als Rechtsgrundlage der Datenverarbeitung fokussiert. Die Gleichrangigkeit der verschiedenen Erlaubnistatbestände und insb. die Bedeutung der Datenverarbeitung nach Art. 6 Abs. 1 lit. f DS-GVO (berechtigtes Interesse) für die Datenökonomie könnten in einer aktualisierten Leitlinie der Aufsicht betont werden. Daneben wäre die Aktualisierung der Leitlinien der Artikel-29-Arbeitsgruppe aus dem Jahr 2014 zur Verwendung anonymer und pseudonymer Daten angezeigt. Letzteres wäre besonders wichtig, um die Entwicklung neuer Technologien, einschließlich (generativer) KI, zu unterstützen. \r\n\r\n12)\tWie kann die Umsetzung von Data Act und AI Act, gerade was die Ermöglichung von KI angeht, durch Standardisierungsarbeiten, Codes of Conducts und Codes of Practices erleichtert werden, insbesondere mit Bedeutung von Transparenz und Kontrolle über Daten? \r\nDie Umsetzung des Data Act und des AI Act kann durch Standardisierungsarbeiten, Codes of Conduct und Codes of Practices in mehrfacher Hinsicht erleichtert werden. Diese Instrumente können dazu beitragen, klare und einheitliche Rahmenbedingungen zu schaffen, die sowohl die Entwicklung als auch die Anwendung von KI-Technologien fördern und gleichzeitig Transparenz und Kontrolle über Daten gewährleisten. Insbesondere die Arbeit in internationalen Standardisierungsgremien ist hier von essentieller Bedeutung, da sie die Anschlussfähigkeit der Lösungen und Rechtssicherheit auch bei übergreifenden Geschäftsmodellen sicherstellt. Es ist zu begrüßen, dass bereits Standardisierungsvorhaben durch die EU Kommission an das Joint Technical Committee bei CEN/CENELEC (JTC21)  gegeben wurden, die Transparenz, Sicherheit, Robustheit uvm. in den Bearbeitungsmittelpunkt rücken und die Konformitätsbewertung unter dem AI Act erleichtern werden. Bestehende Standards, wie ISO 42001 und ISO 23894 können und sollten zudem nutzbar gemacht werden für den AI Act, da diese sowohl Interoperabilität als auch Qualitätssicherung und Risikobewertungen in den Mittelpunkt stellen. Dies sind Bewertungsaspekte, die auch für den AI Act zentral sind. Damit werden wichtige Fragen adressiert, die alle Unternehmen bei der Umsetzung des AI Acts beantworten (können) müssen.  \r\nDaneben sind Codes of Practices ein weiterer Baustein für praxistaugliche Umsetzung der Verpflichtungen, insbesondere mit Blick auf Transparenzvorgaben. Diese sind auch ausdrücklich vom AI Act vorgesehen.  \r\nAus den Erfahrungen mit Codes of Conduct unter der DS-GVO können wichtige Lehren gezogen werden und etablierte Mechanismen zur Erarbeitung und zum Monitoring fortgesetzt bzw. an diese angeknüpft werden. Für Schnittstellenbereiche zwischen DS-GVO und Data Act könnten CoCs vor allem bei Fragen zu technischen Maßnahmen, Rollen der Datennutzenden und Kontrolle über die verarbeiteten Daten wichtige Ergänzungen bieten und die Umsetzung erleichtern. Der auf CoC Ebene geführte praxisnahe Dialog und Austausch innerhalb von Sektoren und Industriebereichen erleichtert den beteiligten Unternehmen und Organisationen die Umsetzung und leistet einen wichtigen Beitrag zu Etablierung von Industriestandards. Auch in diesem Zusammenhang sind die sachgerechte Ausstattung und Ausgestaltung der Aufsicht entscheidend. Bisherige CoC Erarbeitungen im Bereich der DS-GVO haben (leider) gezeigt, dass die Genehmigungen zu langwierig sind und zT an Zuständigkeitsstreitigkeiten zwischen den Aufsichten in Europa scheitert. \r\nDennoch bieten alle Instrumente (Standardisierung, CoCs und CoPs) wichtige Potenziale für die Umsetzung der Rechtsakte.\r\nZusammenfassend können diese Instrumente dazu beitragen, die rechtlichen Anforderungen des Data Act und des AI Act zu operationalisieren und somit ihre Umsetzung zu erleichtern. Sie können auch die Akzeptanz von KI-Technologien erhöhen, indem sie Vertrauen schaffen und sicherstellen, dass KI im Einklang mit gesellschaftlichen Werten und Normen entwickelt und eingesetzt wird.\r\nDie Bedeutung von internationalen Standards ist dabei für die Umsetzung des EU AI Act von großer Relevanz. Internationale Standards können dazu beitragen, die Anforderungen des AI Act zu harmonisieren und die Einhaltung zu erleichtern, indem sie klare Richtlinien für die Entwicklung und den Einsatz von KI-Systemen bieten. Sie fördern die Interoperabilität und Kompatibilität von KI-Systemen über Grenzen hinweg und unterstützen somit die Schaffung eines einheitlichen Marktes für KI-Produkte und -Dienstleistungen innerhalb der EU.\r\n\r\n13)\tWelche Maßnahmen sind aus Ihrer Sicht prioritär, um eine starke Datenökonomie und ein innovatives Daten-Ökosystem mit Rechen- und Datenzentren in Deutschland und Europa aufzubauen und die Ansiedlung von Daten-getriebenen Unternehmen zu erleichtern? \r\nDie Schaffung einer modernen Rechenzentren-Infrastruktur ist für ein innovatives Daten-Ökosystem unerlässlich. Vor allem Hyperscale-Rechenzentren  sind wichtige Säulen für eine sich schnell entwickelnde Datenwirtschaft. \r\nFür die Bereitstellung dieser Infrastruktur ist vor allem ein vorhersehbares und effizientes Genehmigungssystem für Rechenzentren wichtig. \r\nEin Beispiel aus Finnland zeigt, dass dort eine Gesetzgebung für \"grüne Übergangsprojekte\" eingeführt wurde, die sich positiv auf viele Rechenzentrenprojekte auswirkt, die den grünen Wandel Finnlands unterstützen sollen und daher von einem beschleunigten Genehmigungsverfahren profitieren. Darüber hinaus ist die finnische Regierung dabei, einen \"One-Stop-Shop\"-Mechanismus einzuführen, um eine reibungslosere und schnellere Umweltgenehmigung zu fördern. Derartige Maßnahmen können ein entscheidender Wettbewerb für Märkte sein, die umweltfreundliche Investitionen und Arbeitsplätze anziehen und durch den Bau von Rechenzentren nachhaltige Standortvorteile für die Datenwirtschaft schaffen. Bei allen regulatorischen Maßnahmen, die sich auf den Bau von Rechenzentren auswirken oder direkt beziehen ist daneben vor allem wichtig, dass eine frühzeitige Information über entsprechende Gesetzesvorhaben erfolgt. Der Bau von Rechenzentren ist regelmäßig ein mehrere Jahre umspannender Planungs- und Aufbauprozess. Planungssicherheit und die Einbeziehung zukünftiger Anforderung bereits in der Planungsphase sind daher entscheidend, um Vorhersehbarkeit sicherzustellen.\r\nUm eine starke Datenökonomie und ein innovatives Daten-Ökosystem mit Rechen- und Datenzentren in Deutschland und Europa aufzubauen und die Ansiedlung von datengetriebenen Unternehmen zu erleichtern, sind daher folgende Maßnahmen prioritär:\r\n•\tInvestitionen in digitale Infrastruktur: Der Aufbau und die Erweiterung von hochleistungsfähigen Rechenzentren und Dateninfrastrukturen sind grundlegend, um die Datenverarbeitung und -speicherung zu unterstützen. Dies beinhaltet auch die Förderung von Cloud-Infrastrukturen und Diensten, die eine sichere und effiziente Datenverarbeitung ermöglichen. Ein stabiles Investitionsklima und vorhersehbare Genehmigungsprozesse sind entscheidende Weichenstellungen.\r\n•\tRechtliche und technische Rahmenbedingungen: Es ist wichtig, rechtliche und technische \"Brücken\" zu schaffen, die sichere Datenkooperationen und Industriepartnerschaften, auch zur Entwicklung des KI-Ökosystems, ermöglichen. Die Entwicklung von internationalen Standards muss durch Industrieexperten und Politik priorisiert werden, um anschlussfähige Technologien voranzubringen. Ebenso sollten Labelling Schemes und Zertifizierungen auf europäischer Ebene entwickelt werden. Es ist zu begrüßen, dass beispielsweise bereits ein „Labelling Scheme for Data Center Sustainability“ auf europäischer Ebene angestoßen wurde. Solche europäischen und internationalen Zertifizierungen und Labels können eine wichtige Brücke bilden, da sie Vertrauen in die Dateninfrastruktur stärken und die Einhaltung der zahlreichen datenbezogene Regulierungsinstrumenten unterstützen.\r\n•\tFörderung von Datenkompetenz: Aufbau von Fähigkeiten im Bereich der Datenkompetenz brauchen Förderung und Finanzierung. \r\n\r\n14)\tWie müssten ideale Leitlinien für die rechtssichere Anonymisierung von Daten im Rahmen der DSGVO und des Data Acts aus Ihrer Sicht ausgestaltet sein? Wie wird die Anonymisierung in anderen EU-Mitgliedsstaaten gehandhabt, und welche Maßnahmen sind erforderlich, damit Deutschland in diesem Bereich endlich Fortschritte erzielt? \r\nIn der Praxis werden bereits seit vielen Jahren erfolgreich Anonymisierungsverfahren eingesetzt und Datenbestände angelegt, die personenbezogene und nicht-personenbezogene Daten zu separieren. \r\nRechtliche Unsicherheiten bzw. jahrelange Diskussionen in Deutschland über (z.T. rein theoretische) Re-Identifizierungs-Risiken haben indes zu Verunsicherungen geführt. Dabei haben erfolgreiche Anonymisierungsverfahren, die auch in Absprache mit den Aufsichtsbehörden entwickelt wurden, gezeigt, dass Unternehmen vor allem bei umfangreichen Datenauswertungsprozessen oder Datenkooperation auch mit anonymisierten Datensätzen arbeiten können und wollen. \r\nDie Abgrenzung zwischen personenbezogenen und nicht-personenbezogenen Daten wird vor dem Hintergrund des EU Data Acts noch zusätzliche Bedeutung erlangen. Der EU Data Act erfasst sowohl personenbezogene als auch nicht personenbezogene Daten, lässt aber zugleich die DS-GVO unberührt, sodass für die vom Data Act erfassten personenbezogenen Daten auch die DS-GVO mit allen Rechten und Verpflichtungen gilt.\r\nZukünftige Leitlinien in diesem Bereich sollten daher vor allem folgende Elemente beachten/beinhalten:\r\n•\tBranchenunterschiede berücksichtigen\r\n•\tGeänderte rechtliche Rahmenbedingungen einbeziehen (Anonymisierung ist kein alleiniges DS-GVO-Thema mehr)\r\n•\tRisikobasierte Ansätze abbilden, die die Wahrscheinlich einer Re-Identifizierung nach erfolgter Anonymisierung angemessen berücksichtigen\r\n•\tTechnische Standards (technikneutral und zukunftsoffen) einbeziehen, um praktische Anleitungen zu geben, wie Anonymisierung erfolgreich durchgeführt werden kann und den Fokus auf Chancen und Potenziale zu lenken\r\n\r\n15)\tInwiefern sind die Zweifel an der Rechtssicherheit des Data Protection Agreements zwischen den USA und der EU, das auf zwei vorhergehend aufgehobene Agreements nach dem Schrems I- und Schrems II-Urteil des EuGH folgte, berechtigt und außerdem eine Bremse für Innovationen in Europa und welche Regulierung bräuchte es, um nachhaltig für Rechtssicherheit zu sorgen? \r\nRechtsunsicherheiten bei Fragen des internationalen Datentransfers wirken sich stets auf Innovationen und Geschäftsmodelle aus, die auf Transfers und internationale Kooperation ausgerichtet sind. Die Umsetzung der Vorgaben und ein datenschutzkonformer Datentransfer sind selbstverständlich möglich, aber in der Praxis natürlich mit Aufwänden und verschiedenen Risikoassessments verbunden.  Rechtssicherheit und Verlässlichkeit von Transfermechanismen und auch internationalen Abkommen über Zugriffe auf Daten sind daher für Unternehmen entscheidend. \r\nDas gilt umso mehr, da auch der EU Data Act in Kapitel VII Vorgaben zu Übermittlungen, Zugriffen und Transfers im internationalen Umfeld enthält und Kooperationen und Partnerschaften für die gemeinsame und Weiterentwicklung von KI-Services und -produkten in Zukunft zusätzliche internationale Kooperationen und internationalen Datentransfers erfordern wird.\r\nDas für den EU-US Transfer im Juli erlassene Data Privacy Framework (DPF)  bildet zur Zeit den relevanten Rahmen zur Absicherung des Transfers nach der DS-GVO und unterscheidet sich insofern von seinen Vorgängerabkommen, dass die dem DPF vorangegangene Executive Order zusätzliche Mechanismen in das US-Rahmenwerk eingearbeitet hat.  Auf z.T. geäußerte Bedenken hinsichtlich der zugrundeliegenden Executive Order und der Überprüfungsgerichte sei nachfolgend eingegangen.\r\nDie Executive Order selbst entfaltet volle Gesetzeskraft und ist für die Executive verbindlich, einschließlich der US-Geheimdienste und Strafverfolgungsbehörden. Der Erlass ist nicht nur wirksam, sondern auch als dauerhafter Mechanismus anzusehen. Frühere Executive Orders, die sich auf nationale Sicherheit und Überwachung bezogen, erhielten parteienübergreifende Unterstützung und wurden auch von nachfolgenden Regierungen anerkannt.  Und selbstverständlich ist die EU in der Lage, die dauerhafte Wirkung der Executive Order sicherzustellen, indem sie ihre Entscheidung über die Angemessenheit und den Fortbestand des DPF von der Existenz der Executive Order (oder von ähnlichen zukünftigen Bestimmungen) abhängig macht.  \r\nDer sogenannte Data Protection Review Court (DPRC) wird zum Teil als nicht unabhängig genug angesehen, weil er durch einen Exekutivakt eingerichtet wurde und nicht Teil der Gerichtsbarkeit nach Art. 47 der Charta und der US-Verfassung sei, sondern ein Gremium innerhalb der Exekutive der US-Regierung. Er sei daher keine ausreichende Verbesserung gegenüber dem früheren „Ombudsmann“-System unter dem Privacy Shield. Der Ombudsperson nach dem Privacy Shield fehlte jedoch die Unabhängigkeit bei der Ernennung, Überwachung und Abberufung der eingesetzten Entscheidungsträger. Das US-Justizministerium hat den DPRC nun durch entsprechende Verordnungen eingerichtet und operationalisiert. Der DPRC-Mechanismus garantiert, dass die Entscheidungsträger unabhängig sind, obwohl sie in der Exekutive untergebracht sind. Weder Generalstaatsanwalt noch andere Exekutivorgane dürfen die unabhängige Arbeitsweise des DPRC beeinflussen. Die vom DPRC erlassenen zukünftigen Entscheidungen in Beschwerdefällen sind zudem für alle US-Nachrichtendienste verbindlich, da dem DPRC die vollen Befugnisse eines Generalstaatsanwalts übertragen wurden.\r\nDer Themenkomplex internationale Datentransfers ist insgesamt komplex und vielschichtig. So wie das Risiko von Datenverarbeitungen von verschiedenen Faktoren abhängt (und dies auch durch den risikobasierten Ansatz der DS-GVO gewürdigt wird), sind auch Risikoabwägungen bei internationalen Transfers notwendig. Verknappte Darstellungen und Pauschalantworten (sämtliche Transfers sind unmöglich bzw. sämtliche Transfers in Drittstaaten sind unproblematisch) beschädigen das Vertrauen in eine globale Datenwirtschaft in der Industriepartner und Partnernationen gemeinsame wirtschaftliche und gesellschaftliche Potenziale durch Datennutzung heben (wollen). \r\nIm Spannungsfeld zwischen Datenschutz, Sicherheitsinteressen und unter Berücksichtigung verschiedener Risikokonstellationen, die von zahlreichen Faktoren wie verarbeiteten Daten, eingesetzten Datensicherheitstechnologien, dem tatsächlichen Zugriffsrisiko etc. abhängig sind, ist der größte Beitrag für Rechtssicherheit neben dem Bestand von internationalen Abkommen wie dem DPF und den weiteren erlassenen Adäquanzentscheidungen der EU Kommission ein faktenbasierter und lösungsorientierter Dialog - sowohl zwischen Politik, Zivilgesellschaft und Unternehmen.\r\n\r\n16)\tWie können Innovationen sowohl im Bereich digitaler Dienste als auch im Bereich Regulierung für mehr Datenschutz und Einhaltung der Grundrechte sorgen und welche guten Beispiele kennen Sie dafür?\r\nBesonderes Potenzial für Innovationen im Bereich des Datenschutzes bieten Entwicklungen von Privacy Enhancing Technologies (PETs). PETs umfassen verschiedene technische Ansätze wie z.B. Anonymisierungstechniken, die Entwicklung synthetischer Datensätze und K-Anonymität. Trotz der zunehmenden Anerkennung ihres Potenzials zur Erleichterung einer verantwortungsvollen und die Privatsphäre schützenden Datennutzung stehen PETs aus einer Reihe von Gründen weiterhin Hindernisse für eine breitere Akzeptanz entgegen:   \r\n•\tDer Bekanntheitsgrad von PETs ist nach wie vor begrenzt. Potenzielle Nutzer von PETs müssen weiter darüber aufgeklärt werden, was diese bewirken können, welche Möglichkeiten und Grenzen sie haben, welche Vorteile und Risiken sie bieten und wann sie eingesetzt werden sollten.   \r\n•\tDie Entwicklung und Umsetzung von PETs kann schwierig sein. Viele Technologien sind relativ neu und erfordern Fachwissen; nicht alle sind gleich gut entwickelt, und viele sind ressourcenintensiv, so dass ihre Einführung in manchen Fällen relativ teuer ist.  Nicht immer sind ein hoher Reifegrad oder Standards vorhanden.  \r\n•\tDie politischen und rechtlichen Rahmenbedingungen für PETs sind noch weitgehend unentwickelt. Nur wenige Länder haben bisher regulatorische Vorgaben für PETs gemacht, geschweige denn deren Einsatz aktiv gefördert. Viele Regulierungsbehörden sind noch dabei, ihr Verständnis für diese Technologie zu entwickeln. In Fällen, in denen sich die Regulierungsbehörden nicht direkt mit diesen Technologien und ihrer Anwendbarkeit auf die Einhaltung von Rechtsvorschriften befasst haben, wird die Rechtsunsicherheit in Bezug auf die Einführung von Technologien zum Schutz der Privatsphäre eine Herausforderung bleiben und deren Potenzial daher nicht ausreichend genutzt.\r\nEin aktueller Vorschlag der Kommission für eine Verordnung über europäische Statistiken zu Bevölkerung und Wohnen  zielt darauf ab, die Konsistenz aller auf Personen und Haushalten basierenden EU-Sozialstatistiken zu verbessern, indem die Rechtsgrundlage gestärkt und die Entwicklung innovativer Lösungen für den Datenaustausch zwischen den EU-Mitgliedstaaten gefördert wird. In Bezug auf \"innovative Lösungen zur Ermöglichung des Datenaustauschs\" bezieht sich der Vorschlag ausdrücklich auf PETs, um den Datenaustausch in Übereinstimmung mit den EU-Rechtsvorschriften zum Schutz personenbezogener Daten durchzuführen.  Um eine wirksame gemeinsame Nutzung von Daten zu Qualitätszwecken im Einklang mit DS-GVO zu ermöglichen, fordert dieser Vorschlag die Erprobung und Nutzung von Technologien zur Verbesserung des Datenschutzes, die die Datenminimierung durch Technik umsetzen. Solche regulativ fördernden Mechanismen sollten ausgebaut werden, um die Entwicklung, den Einsatz und die Akzeptanz von PETs zu steigern.\r\nAufbauend auf solchen und ähnlichen vielversprechenden Initiativen sollten folgende Fördermaßnahmen ergriffen werden, um die weitere Entwicklung und Annahme von PETs voranzutreiben:  \r\n•\tRegulierungsbehörden und politische Entscheidungsträger sollten Anreize für die Verwendung von PETs schaffen. Um die Einführung von PETs zu fördern, sind klare regulatorische Vorgaben erforderlich. Die Unternehmen benötigen eine Anleitung, wie die Aufsichtsbehörden den Einsatz der Technologie zur Erfüllung gesetzlicher und behördlicher Verpflichtungen interpretieren und wie PETs bei Überlegungen zur Durchsetzung berücksichtigt werden. Wenn Regulierungs- und Durchsetzungsbehörden beispielsweise Parteien, die PETs einsetzen, Erleichterungen zur Verfügung stellen und die Verwendung von PETs als potenziell mildernde Faktoren in Betracht ziehen, sollten diese Erleichterung explizit in Leitlinien oder Verordnungen festgehalten werden. Die Regulierungsbehörden könnten ebenfalls klarstellen, ob die Nichtverwendung leicht verfügbarer PETs als potenziell erschwerender Faktor interpretiert wird.  \r\n•\tEntwickler und Anbieter von PETs, Forschungseinrichtungen, Aufsichtsbehörden und Regierungen sollten Maßnahmen ergreifen, um die Aufklärung zu PETs auszubauen.  Um eine breite Akzeptanz zu erreichen, brauchen potenzielle Nutzer konkrete Nachweise und Erläuterungen zum Wert von PETs und dafür, wie sie zur verantwortungsvollen Datennutzung beitragen. Fallstudien über den Einsatz von Technologien sind zu diesem Zweck besonders zielführend. \r\n•\tDurch Forschung, Experimentierräume und Diskussionen müssen die Beteiligten, darunter Regulierungsbehörden, Branchenexperten, Forscher und Datenschutzbeauftragte, Leitlinien und bewährte Verfahren für PETs entwickeln. Das Fehlen von Normen für PETs ist derzeit ein Hindernis für eine breitere Akzeptanz. Normen sind wichtig, um die Interoperabilität zu erleichtern, wenn verschiedene Technologien zum Schutz der Privatsphäre gemeinsam und länderübergreifend eingesetzt werden. Durch die Schaffung gemeinsamer Rahmenbedingungen können verschiedene PETs problemlos miteinander kommunizieren und zusammenarbeiten, indem Kompatibilität und Konsistenz hergestellt werden. Normen fördern auch das Vertrauen in diese Technologien, stellen sicher, dass PETs auf hohem technischen Niveau entwickelt werden, und fördern die Umsetzung, indem sie ein hohes Maß an Sicherheit bieten.       \r\n\r\n17)\t[Was kann und sollte Ihrer Auffassung nach der Staat tun, damit die Datenbestände, über die er selbst auf Bundes-, Landes- und kommunaler Ebene verfügt, nicht weiterhin unberührt in Silos schlummern, sondern von der Gesellschaft insgesamt besser genutzt werden können, etwa zum Bürokratieabbau, zu mehr Sicherheit und Komfort beim Nutzen staatlicher Leistungen? Wäre vor diesem Hintergrund das Zusammenlegen einzelner Datenbanken zu einem großen Register ein vernünftiger Weg, und falls ja, wie ließe sich dieser verfassungsfest im Sinne des Föderalismus beschreiten?]\r\n\r\n18)\tDie großen Digitalkonzerne zeigen es: Maschinenlesbare Daten haben einen Wert, mit ihrer Monetarisierung werden die zahlreichen Dienste, die unseren Alltag prägen, finanziert. Sollten Ihrer Auffassung nach digitale Daten, die die Menschen alltäglich erzeugen und die gleichsam als Blut der Gesellschaft zirkulieren, auch offiziell einen Wert und damit einen Preis bekommen, und wenn ja, wie ließe sich eine solche Datenökonomie im Wortsinn aufbauen und regulieren? Wie ließe sich die griffige Formel vom „Eigentum an den eigenen Daten“ real umsetzen?\r\nMit dem EU Data Act wurde der Datenaustausch und die Datennutzung im wirtschaftlichen Austauschverhältnis EU-weit geregelt. Der Rechtsrahmen für eine regulierte Datenökonomie besteht damit bereits. Darüber hinaus sind im Zivilrecht Verträge mit Daten als Gegenleistung Teil des bürgerlichen Rechts geworden (§§ 327 ff. BGB), was die rechtliche Anerkennung des Werts von Daten weiter und die Möglichkeit des Austauschs von Daten und Gegenleistung unterstreicht.\r\n\r\n"},"recipientGroups":[{"recipients":{"parliament":[{"code":"RG_BT_FRACTIONS_GROUPS","de":"Fraktionen/Gruppen","en":"Parliamentary parties/groups"},{"code":"RG_BT_COMMITTEES","de":"Gremien","en":"Committees"}],"federalGovernment":[]},"sendingDate":"2024-06-24"}]},{"regulatoryProjectNumber":"RV0006771","regulatoryProjectTitle":"EU Kommission Weißbuch Digitale Infrastrukturen - How to master Europe’s digital infrastructure needs?","pdfUrl":"https://www.lobbyregister.bundestag.de/media/c8/43/316742/Stellungnahme-Gutachten-SG2406240003.pdf","pdfPageCount":1,"text":{"copyrightAcknowledgement":"Die grundlegenden Stellungnahmen und Gutachten können urheberrechtlich geschützte Werke enthalten. Eine Nutzung ist nur im urheberrechtlich zulässigen Rahmen erlaubt.","text":"RESPONSE OF MICROSOFT CORPORATION \r\nEuropean Commission’s public consultation on the White Paper \r\n“How to master Europe’s digital infrastructure needs?” \r\nJune 2024 \r\nMicrosoft has with interest taken note of the European Commission’s White Paper “How to master Europe’s digital infrastructure needs?”1 and welcomes the opportunity to participate in the consultation and provide feedback on its contents. Coming after two years of intense discussions on the future of EU connectivity policy, the White Paper provides the opportunity to consider possible directions of the EU connectivity sector, covering a range of issues. \r\nIntroduction \r\nThe connectivity sector is a cornerstone of Europe's ambition for a green and digital future. To achieve this vision, it is essential to align industrial capacities with the goals of reducing regulatory burdens, fostering innovation, and promoting seamless integration of green and digital initiatives. We recognise the challenge but also the importance of expanding high-performance digital infrastructures. \r\nMicrosoft has more than 40 years of presence and experience in Europe and is committed to sustaining its cloud and broader digital infrastructure investments in the region to support the 2030 Digital Decade targets. Worldwide, Microsoft has over 20k peering connections and over 350k kilometers of terrestrial and subsea cables, not to mention significant CDN installments across Europe. Microsoft continues to invest billions of Euros in internet infrastructure and digital infrastructure in Europe. This comprises 17 countries in Europe with local datacenters that are built or under construction. In addition, Microsoft is a defender of the global internet through its global cybersecurity operations that contribute to the resilience of the internet. We have invested more than $1 billion in cloud security each year and announced in 2021 to quadruple the amount to $20 billion over 5 years. These extraordinary figures still do not represent the totality of Microsoft’s investments, which are often not broken down by region and which cover for example substantial research and development in technologies such as artificial intelligence, quantum computing, and many other software elements. Microsoft is therefore a substantial contributor to the global internet infrastructure and a major capital investor in Europe’s digital future, with continuous long-term commitments in the continent. \r\nMicrosoft partners heavily with telecom network operators across Europe2 and looks forward to increasing that collaboration to facilitate and support their contributions to European digital transformation. It has been our longstanding view that telecoms infrastructure providers and content and application providers (CAPs) enjoy a symbiotic relationship. Providers of connectivity services allow consumers to enjoy innovative applications and engaging content. At the same time, CAPs, by investing in said applications and content, are “demand creators” for connectivity services, encouraging, in turn, increased take-up of broadband connectivity; and making investment in improved infrastructure possible. The White Paper recognises this reality, noting that “Profitability [for investors] depends on the take-up of enhanced fixed and mobile networks, which is itself linked to the development and increased take-up of data intensive applications and use cases, e.g., based on edge computing, AI, and IoT”3. Any policy change should take due consideration of this complementary relation and safeguard existing incentives that have allowed both sectors to mutually benefit from their services. \r\nFurthermore, it is important to recognize that there are a great number of stakeholders involved in the value chain, each with their own specific contributions and interdependencies. The EU Digital Decade objectives are broad and cover a wide range of issues. In addition to connectivity ambitions, there are goals around skills, quantum, usage of cloud, big data and AI, and digitalization of the public sector. We applaud the fact that these targets cover a broad range of digital aspects, which are all necessary to ensure digital growth for European businesses and consumers. Europe’s digital transformation can only be accomplished based on efforts from all players within the internet value chain, each investing in their own operated segments within that value chain. \r\nSimilarly, it is positive to see a recognition of multiple solutions available to achieve high speed connectivity across Europe. For example, the White Paper notes that satellite broadband can bring high speed connectivity “to very rural and remote areas, where no very high-capacity networks are available”. Such innovative connectivity ideas, beyond 5G and FTTH, should be explored further, if we are to efficiently reach Europe’s connectivity goals, for example by promoting cost-efficient technologies like Wi-Fi. It goes without saying that EU and member state regulators have a societal duty to opt for connectivity choices that are reasonable and efficient. While the Digital Decade targets for connectivity set laudable aspirations, society is best served with efficient, ubiquitous and affordable connectivity. This requires efficient and technology neutral choices that make the best usage of all connectivity technologies, whether fibre, 5G, Wi-Fi, satellite or others, depending on the needs at hand, and does not impose disproportionate costs or unreasonable coverage obligations. \r\nThe EU is on a successful pathway and, in the absence of a demonstrated market failure, should be wary of extreme changes in direction that could interfere with continued progress without an exhaustive discussion of the implications of such a move. We believe that the EU’s continued path to success will best be accelerated by market-driven investments and collaborative partnerships among cross-industry participants. The Commission should champion the market-based innovation that has served Europe so well, and that will continue to benefit Europeans and prepare their networks for the future. \r\nHereby Microsoft would like to share more specific comments and observations on the Commission’s White Paper. Our reflections, set out in this document, are structured around the following seven main considerations:\r\n\r\n1) The scope of the current regulatory framework for electronic communications is sufficient: considerations of preemptive regulation should be approached with great caution \r\n2) Telco and cloud are different services, their convergence is arguably overestimated and assimilating them can create adverse unintended consequences \r\n3) Regulatory simplification is the right way forward: incentivize competitiveness and introduce investment conducive policies \r\n4) Avoid introducing mandatory ‘network fees’ mechanisms \r\n5) Resiliency and security of submarine cable infrastructure is best achieved by increasing redundancy of submarine cables \r\n6) EU needs a strategy on Post-Quantum Cryptography \r\n7) Promote transatlantic cooperation to achieve digital decade goals \r\n1) The scope of the current regulatory framework for electronic communications is sufficient: considerations of preemptive regulation should be approached with great caution \r\nIn pillar II “completing a digital single market” the Commission discusses the objectives and challenges of the European Electronic Communications Code (EECC) in promoting connectivity and investment in high-capacity networks. Despite the efforts to streamline regulations, the Commission deems the results to be less than satisfactory, notably due to delays in implementation and complexity and lack of harmonization/consolidation. The EECC presently aims to promote but also to balance the overarching goals of investment and competition. The Commission suggests broadening its scope by incorporating sustainability, competitiveness, and economic security into the policy framework, while also ensuring that end-user protection remains a focal point, aligned with the European Declaration on Digital Rights and Principles. \r\nWhile re-evaluation of regulatory frameworks at regular intervals is relevant and prudent, the stability of the regulatory framework holds paramount importance as it underpins investment security. In recent years, the internet industry in Europe has developed into a prosperous internet ecosystem. This internet ecosystem is characterised by a division of labour, diversity of services and low-threshold access options. This internet ecosystem should be preserved in Europe. Therefore, we remain apprehensive regarding indications in the Commission's White Paper suggesting a preference for broadening the scope of the current regulatory framework. Such policy scenarios must be considered with caution as they risk distorting the carefully balanced market dynamics, disrupting competition and thus risking end user welfare. \r\nOverall, considerations of preemptive regulation should be approached with great caution because of the burden it can place on European innovation and digital transformation.\r\n2) Telco and cloud are different services, their convergence is arguably overestimated and assimilating them can create adverse unintended consequences \r\nThe Commission assumes that there is a convergence taking place between telecommunications and cloud markets necessitating a common regulatory regime. Such an assumption is arguably overestimated. While cloud and telecommunication markets complement each other, the notion of a convergence taking place to such a large extent that regulatory uniformity would be warranted would need to be substantiated much more specifically than is presently the case in the White Paper. \r\nThere has not been a convergence between telecommunications service providers and IT companies providing cloud-based services in terms of the relevant underlying technologies which remain distinct and should be regulated distinctly. Cloud providers are to be seen as suppliers to telecommunications providers, in the same way as network equipment vendors or tower companies are suppliers to them. Therefore, aiming to regulate cloud via the EECC would be as inappropriate as applying the EECC to regulate traditional network equipment vendors serving the telecommunications sector. In addition, the underlying core telecommunications network infrastructure, particularly the last-mile, remains necessary for complementary innovations such as cloud or edge-based computing services to function. Therefore, the existing regulatory regime should remain intact for the purposes of regulating the core telecommunications services which are its focus and should not be extended to regulating distinct underlying technologies just because they help extend network functionality or services. \r\nApplying an imprecise concept of convergence can create several adverse unintended consequences, such as different layers of legislative complexity, impact on competitiveness, and fragmentation. Therefore, we do not agree with describing an application layer service as “converged” with an infrastructure service, especially from a regulatory lens. Information technology has been regulated from different angles in the last few years and adding another layer of legislation that overlaps with telecommunications could cause a trickle-down effect that will generate an overregulated business ecosystem, make the cost of application layer services rise and ultimately impact European businesses and consumers. \r\nThe cloud service layers are very diverse in nature4 and they are offered and used in a very broad range of sectors. Hence, they are not constrained to the telecommunications sector alone, but are used practically in every sector, e.g. financial services, manufacturing, public sector, media, automotive, retail, healthcare, etc. Given that cloud services are used by a great variety of industry sectors, and these services perform non-telecommunications related functions across industry sectors, they should not be regulated vertically via sectoral legislation. \r\nIn fact, cloud computing services are already regulated by other horizontally applicable instruments, as is confirmed by the BEREC Report on Cloud and Edge Computing Services5. Overall, application layer platforms are already subject to a range of legislative initiatives, creating a need to understand their interaction in practice. These regulations cover various aspects such as data subject and controller obligations under GDPR, unfair commercial practices, product liability, Data Act/interoperability, DMA, and security through acts like NIS2, EUCS, Cyber Resilience Act, and the AI Act. Additionally, the Digital Services Act addresses consumer obligations. Given the novelty of many of these laws, focusing on their implementation and assessing their impact on the cloud services market before further regulatory intervention would be a sensible approach. Sectoral regulators will likely have to find effective means and methods (within existing regulatory frameworks or through amendments, wherever necessary) to deal with the challenge of regulating new and emerging technologies. In the above-mentioned report, BEREC appropriately acknowledges the intricacies involved in the interaction among various new EU regulations, emphasizing the need for meticulous consideration to ensure their effective implementation and legal clarity, while also preventing the imposition of unnecessary bureaucracy on users and providers. \r\nRegulation necessarily intervenes where markets fail. Hence, additional regulation of the cloud market would require evidence of market failure. This is not apparent in the Commission’s argumentation. Additional regulation of cloud services market is not warranted, particularly given that there has been insufficient time to evaluate the efficacy of the considerable regulation that has only just been imposed.\r\n3) Regulatory simplification is the right way forward: incentivize competitiveness and introduce investment conducive policies \r\nThe European connectivity model is successful, rather than gloomy. Further growth of the telecom sector requires continued data growth, innovation based on IT based technologies, and release of unlicensed spectrum. \r\nRegulatory simplification, proportionality, and improved legislative harmonisation across Member States must be facilitated. In addition, legal coherence and certainty should be a top priority. Specifically with regards to the EECC, implementation has been late in several Member States and has led to materially different interpretations (or significant additional burdens) at the national level, degrading the utility of a single market. \r\nCollective resources should be focused on enforcing, implementing and reviewing the effectiveness of existing legislation, including at national level, rather than creating new frameworks. New initiatives should only be created and prioritized if they accrue directly to the benefit of European businesses and consumers and are based on transparent and fact-based consultation processes and impact assessments. \r\nIMT licensed spectrum should not be prioritized over WiFi use by default, as sufficient WiFi is necessary to complement the FTTH investments in-house and ensure take-off of fiber networks. Instead, most cost-efficient sustainable and more green solutions should be always considered. \r\nThe transition from copper to fibre networks ought, in a market environment, to be largely led by users themselves and not by regulatory targets. Some Member States with reasonable levels of FTTH deployment nonetheless have relatively low adoption levels whilst others have comparatively high levels. This has implications for the financial returns generated by those assets and expectations for returns on additional investments. Greater understanding of the large variances in adoption rates between Member States would be useful and ought to inform strategies to lower barriers to adopting new technologies. This might include the use of State Aid to provide users with demand side incentives to move to fiber. Finally, accelerating the retirement of old technologies such as copper networks should also contribute to sustainability objectives.\r\n4) Avoid introducing mandatory ‘network fees’ mechanisms \r\nThe European Commission has stated that they are interested to look into private networks and interconnection. The White Paper (p. 26) notes that changes in this market have resulted “to a very direct and cooperative interaction between CAPs and ISPs as they have to agree on technical and commercial conditions for transit and peering bilaterally”. The paper goes on to note that the existing market “generally functions well and so do the markets for transit and peering”. At the paper’s own admission, there are very few cases of intervention (and the European Commission has subsequently noted that most examples of such intervention come from outside the EU). European telecommunications regulators (BEREC) also clearly concluded that regulatory intervention is not justified by confirming that “the IP-IC ecosystem is still driven by competitive forces which are functioning without regulatory intervention”6. We strongly caution against any move to intervene in a market that admittedly works well; and would like to recall that regulatory intervention in the telecoms market is meant to address market failures. \r\nNevertheless, the White Paper touches on the concept of arbitration / dispute resolution, which amounts to a network fees mechanism through the back door (in the form of an interconnection payment). Although the paper offers the caveat that action will be considered if disputes increase, such an approach would send a clear market signal and is likely to encourage certain parties to actively seek disputes. It is worth recalling that such a move would: undermine the principle of net neutrality; increase the cost of delivering content to consumers, resulting in higher prices; lead to a deterioration in the quality of service and inefficient traffic routing; and have a detrimental effect for a range of actors, such as European SMEs, European cloud providers, local administrations, hospitals, universities. These points were demonstrated by a range of participants during the Exploratory Consultation the European Commission conducted in 2023. \r\nMicrosoft and other cloud providers have an interest in ensuring that European networks do not become congested. It is commonly understood that last-mile facilities are not the portion of the network where capacity constraints would occur, if they were to occur, because they carry low traffic volume – dedicated to a single business or household. Rather, the network elements that aggregate traffic (e.g., middle-mile and long-haul) are those that are most susceptible to capacity constraints. European middle-mile and long-haul facilities are not suffering. European telecom network operators have been responsibly increasing network capacity. Similarly, cloud providers like Microsoft have constructed – and continue to construct - robust parallel networks across Europe and beyond to manage the additional capacity that digital transformation demands. It is in their economic interest to do so. In addition to the construction of physical networks, Microsoft and other cloud providers employ efficient traffic management. \r\nUltimately, the current system of commercially negotiated peering arrangements serves the digital economy goals efficiently, adapting dynamically to technological and consumer demands. Mandated interconnection fees would distort market incentives and inflate consumer costs. Any changes to the telecoms regulatory regime should be carefully weighed to avoid unintended consequences, such as overregulation, especially in areas of the ecosystem where there is no obvious need for intervention; and undermining mechanisms such as interconnection points that play a crucial role in the functioning of the internet.\r\n\r\n5) Resiliency and security of submarine cable infrastructure is best achieved by increasing redundancy of submarine cables \r\nIt is important to ensure strong and secure connectivity by establishing a regime that maximally promotes investment in submarine cable infrastructure. It is equally important to reinforce maintenance and repair capacity at EU level, which would mitigate the impact of any attempts to sabotage submarine cable infrastructure. \r\nThe European Commission’s considerations on international submarine connectivity provide a holistic approach, which is welcomed. This exercise comes at a very timely moment, given the increased attention from policymakers, regulators, governments, and defense sector to the importance of submarine connectivity, its security in Europe and across the globe, as well as the starting discussions around the policies that could strengthen security and resilience of this critical part of connectivity infrastructure. The Ministerial Declaration on European Data Gateways7 highlights that Europe’s digital sovereignty and global competitiveness depend on strong and secure internal and external connectivity, as a precondition for the European Union to become “the most attractive, most secure and most dynamic data-agile economy in the world”. In the area of submarine cables, we believe that such strong and secure connectivity will be best achieved by a regime that maximally promotes investment in submarine cable infrastructure and flexible deployment of its landing zones. \r\nRedundancy of subsea cable infrastructure is one of the best measures to ensure resilience of communications and offers the best protection against the impact of cable damage incidents8. Moreover, repair ships that are responsible for maintenance and repair of submarine cable infrastructure are scarce resources and therefore reinforcement of such maintenance and repair capacity at the EU level is required to ensure security of existing infrastructure. \r\nExisting national approaches to subsea cable landing have been successful in facilitating redundancy. Imposing an EU layer of regulation on top of a successful subsea cable landing model is not necessary and could be harmful to achieving the foundations of cable security: redundancy and resiliency. \r\nAlthough the portion of subsea cables that are on land – such as the subsea cable landing stations – are well-protected by landing parties and have not traditionally been subject to damage or intentional sabotage, the underwater portion of the cables are more physically vulnerable. The EU could play a uniquely effective role in enhanced naval protection of the physical security of the underwater portion of subsea cables by encouraging cooperation and coordination with NATO protective forces. Marine protection is ideally suited to military naval efforts, and coordination with NATO forces would deter potential adversaries from physically damaging and jeopardizing this critical infrastructure. \r\nWe are living in the era of intelligent connectivity, which is driven by massive technological shifts and emerging technologies. These technological advances are shaping the next phase of innovation, which in turn requires increased capacity of geographically diverse networks. While we do not identify any major regulatory blockers to subsea cable landing in Europe, the threat and security landscape is shifting constantly, and it is important to undertake pan-European measures to encourage and facilitate subsea cable landings and overall physical security of subsea cable infrastructure.\r\n\r\n6) The EU needs a strategy on Post-Quantum Cryptography \r\nWe welcome the fact that the European Commission discusses quantum and post-quantum technologies for secure communication. It is recommended to define a phased migration towards quantum-safe networking including timelines at least for network infrastructure carrying sensitive data. \r\nWhile quantum cryptography is at an early stage of development, we encourage the European Commission to accelerate the transition to post-quantum cryptography (PQC) - not because of the threat, but because of the enormity of the task. In that regard, we welcome the Commission’s Recommendation9 for member states on this very topic and hope this momentum continues. To achieve this, it is paramount for the European Union to work with allied governments to (1) attract and retain the world's best talent today; and (2) to build the talent base and technical expertise required for the future. With regards to research and investment activities, the European Commission is already actively working on several areas related to quantum cryptography, led by various European agencies, funded by European funds, and with the involvement of national governments. The White Paper mentions the important work of ENISA, and the EuroQCI initiative. There is also the EU-funded Quantum Technologies Flagship, and in particular related to the Scenario in the White Paper, the Horizon 2020 OPENQKD project. \r\nWe stress that international policy coordination is necessary to achieve secure and harmonised PQC solutions globally. The EU-US Trade and Technology Council (TTC) is particularly important in this respect. Coordinated efforts with diverse stakeholders are needed, including industry players from like-minded countries beyond the EU, to ensure comprehensive engagement and effective deployment of quantum-resistant technologies. \r\nTechnical standardisation must be conducted in an open, multi-stakeholder environment to ensure high-quality outcomes endorsed by industry and the technical community, and to maximise the adoption and benefits of standardised PQC solutions for users globally.\r\n\r\n7) Promote transatlantic cooperation to achieve digital decade goals \r\nEurope is an attractive partner for collaboration, and it will need to maintain strong international digital partnerships. Europe’s starting point is an open digital economy based on the flow of investment and innovation. Indeed, the EU should strengthen partnerships with like-minded global partners such as the United States, as openness and collaboration are key drivers of prosperity. \r\nTransatlantic cooperation, as we have seen under the Trade and Technology Council (TTC) dialogues, can reinforce Europe’s ambitions in a geopolitically uncertain world. We encourage Europe to actively pursue alliances with like-minded partners to enable common standards. If we focus instead on EU-centric solutions in the digital space, this may have an adverse effect. \r\nBoth public and private transatlantic partnerships can help provide momentum to reach shared policy goals, such as the creation of technical standards for secure and interoperable telecommunications equipment and services.10 Such partnerships would also prevent fragmentation and would underpin the creation of consensus-based international standards, paving the way for broad industry adoption. \r\nFurther cooperation under the TTC, as well as other fora, should be encouraged. The TTC has played a pivotal role in driving dialogue and expediting coordination and quick responses to trade and technology related developments. We do encourage the EU to explore the continuation and/or deepening of the TTC dialogues in the future.\r\nConclusion \r\nThe connectivity sector is a cornerstone of Europe's ambition for a green and digital future. To achieve this vision, it is essential to align industrial capacities with the goals of reducing regulatory burdens, fostering innovation, and promoting seamless integration of green and digital initiatives. \r\nPolicymakers must focus on enforcing and potentially strengthening existing legislation effectively and remain cautious when envisaging new frameworks. Supporting entrepreneurs through regulatory stability is also critical for fostering innovation and growth. In addition, simplifying and harmonising regulations across Member States is essential to create a competitive market. Encouraging innovation and sustainability without over-prescribing technical solutions will be key to allowing market dynamics to drive progress and to maximise connectivity sector’s potential. Finally, enhanced transatlantic cooperation can help establish common standards, prevent market fragmentation, and leverage Europe's educational and research strengths. \r\nBy implementing these strategies, Europe can continue building a resilient, innovative, and sustainable economy that is fit for the future.\r\n"},"recipientGroups":[{"recipients":{"parliament":[],"federalGovernment":[{"department":{"title":"Bundesministerium für Digitales und Verkehr (BMDV) (20. WP)","shortTitle":"BMDV (20. WP)","url":"https://bmdv.bund.de/DE/Home/home.html","electionPeriod":20}}]},"sendingDate":"2024-06-24"}]},{"regulatoryProjectNumber":"RV0011022","regulatoryProjectTitle":"Internationale KI Governance und Aufsichtsstrukturen","pdfUrl":"https://www.lobbyregister.bundestag.de/media/7f/4c/334033/Stellungnahme-Gutachten-SG2407120017.pdf","pdfPageCount":99,"text":{"copyrightAcknowledgement":"Die grundlegenden Stellungnahmen und Gutachten können urheberrechtlich geschützte Werke enthalten. Eine Nutzung ist nur im urheberrechtlich zulässigen Rahmen erlaubt.","text":"Goals and\r\nGlobal Governance:\r\nLessons for AI\r\n2024\r\nGlobal Governance: Goals and Lessons for AI 2\r\nContents Foreword 3\r\n1 Frameworks and Outcomes\r\nfor International AI Governance 7\r\n2 The Building Blocks of Global Governance:\r\nA Comparative Exploration with Lessons for AI 36\r\n3 Institutional Analogies\r\nfor Governing AI Globally 46\r\n3.1 The International Civil Aviation Organization (ICAO) 48\r\n3.2 The European Organization for Nuclear Research (CERN) 56\r\n3.3 The International Atomic Energy Agency (IAEA) 62\r\n3.4 The Intergovernmental Panel on Climate Change (IPCC) 69\r\n3.5 The Bank for International Settlements (BIS), Basel, the Financial\r\nStability Board (FBS), and the Financial Action Task Force (FATF) 82\r\n4 Looking Back to Look Ahead 91\r\n5 Recent Multilateral Developments in AI 94\r\nGlobal Governance: Goals and Lessons for AI • Foreword 3\r\nForeword As AI policy conversations expanded last year, they started to be\r\npunctuated by unexpected abbreviations. Not the usual short names for\r\nnew AI models or machine learning jargon, but acronyms for the different\r\ninternational institutions that today govern civil aviation, nuclear power,\r\nand global capital flows. ICAO, IAEA, FATF, and FSB were in the mix,\r\nalongside IPCC and CERN, two institutions that facilitate critical scientific\r\nresearch across borders.\r\nThis piqued our curiosity. We wanted to learn more about how approaches\r\nto governing civil aviation might apply to a set of digital technologies that\r\nwould never be assembled in a hangar or guided by air traffic control\r\nofficers. And we were eager to learn about nuclear commitments that\r\nemerged in an entirely different geopolitical era to regulate technology\r\nthat showed promise as a tool but had only been used as a weapon.\r\nOur curiosity set us on a journey to learn more about international\r\nanalogies for AI governance. Through research and referrals from\r\ncolleagues, we identified a global group of experts who had studied\r\nrelevant international institutions or participated in them directly. We\r\nfocused on a range of institutions, including: the International Civil\r\nAviation Organization (ICAO), the European Organization for Nuclear\r\nResearch (CERN), the International Atomic Energy Agency (IAEA), the\r\nIntergovernmental Panel on Climate Change (IPCC), the Financial Action\r\nTask Force (FATF), and the Financial Stability Board (FSB).\r\nIn October, we had the pleasure of hosting the group at our Redmond\r\ncampus for a day-long workshop. In a wide-ranging discussion that\r\ntraversed history, politics, economics, and the law, we covered the\r\nmissions, functions, and evolutions of these institutions, highlighting the\r\nlessons they offer for the global governance of AI. We came away with a\r\nrich set of insights and lots of follow-up questions that we subsequently\r\ndug in on.\r\nThis publication pulls together the product of our learning journey so far:\r\nan institutional case study or governance theory chapter from each of our\r\nexperts, as well as our own reflections on directions for AI governance at\r\nthe global level. We offer it as a resource to share our learnings with the\r\nbroader AI policy community and to spur further reflection and discussion\r\nabout goals and lessons for governing AI globally.\r\nA key takeaway for us has been that our question should be less about\r\nwhich institutional analogy is most apt for global AI governance and more\r\nabout the multiple governance functions that apply to AI.\r\nGlobal Governance: Goals and Lessons for AI • Foreword 4\r\nThrough this lens, each institution we studied has relevance for\r\ninternational AI governance. Defining global standards, as ICAO does;\r\ndriving international scientific consensus, as IPCC does; and managing\r\nemergent global stability risks, as the FSB does, are all important functions\r\nfor AI.\r\nAs we recognized the relevance of multiple institutional functions, we\r\nsought to zoom out and put them in a wider governance context. We\r\ndefined three desired outcomes of international AI governance:\r\n1. Globally significant risk governance: We must manage globally\r\nsignificant safety and security risks that affect us all and on\r\nwhich there’s broadly shared agreement regarding the need for\r\ncoordinated action, such as AI-powered acceleration of chemical or\r\nbiological weapons development or the deployment of increasingly\r\nautonomous systems.\r\n2. Regulatory interoperability: We must build international\r\nframeworks that help to facilitate and strengthen the coherence\r\nand interoperability of domestic policies and regulation across\r\nborders.\r\n3. Inclusive progress: We must ensure broad access to AI’s benefits,\r\nfostered through an inclusive global community that contributes to\r\nAI research, development, and deployment.\r\nKey desired\r\ninternational\r\nAI governance\r\noutcomes\r\n1. Globally significant risk governance\r\nInternational collaboration to monitor for and respond to\r\nglobally significant safety and security risks\r\n2. Regulatory interoperability\r\nInternational framework to facilitate and strengthen the\r\ninteroperability of domestic policies and regulation\r\n3. Inclusive progress\r\nInternational network to broaden access to infrastructure\r\nand skilling for inclusive AI research and development and\r\ntechnology benefits\r\nGlobal Governance: Goals and Lessons for AI • Foreword 5\r\nIt became apparent that the governance functions we distilled from our\r\nanalysis of existing international institutions could help secure multiple\r\ninternational AI governance outcomes. For instance, defining and\r\nfacilitating consistent implementation of standards or codes of conduct is\r\na governance function pursued by ICAO, IAEA, FATF, and the FSB. Having\r\ncommon standards and codes of conduct, in turn, is a key enabler of\r\nglobally significant risk governance and regulatory interoperability.\r\nThis web of functions and outcomes especially matters for the current\r\nhistoric moment. When ICAO was formed as World War II came to\r\na close, formal, treaty-based commitments were more likely to gain\r\ntraction. Today, “regime complexes” of formal and informal international\r\norganizations “coordinating and competing over policy space” define 21st\r\ncentury global governance.i A web of institutions and initiatives, pursuing\r\noverlapping and intersecting functions and outcomes, will continue to play\r\nkey roles in AI governance.\r\nTo get the most traction out of this international AI governance system, we\r\nneed common frameworks and clear areas of focus to track our progress\r\ntoward shared goals. We need clarity on where we are today in pursuing\r\nthese shared goals and where there are gaps that will benefit from\r\ncoordinated investment and further thinking.\r\nSince we hosted our workshop last October, governments have made\r\ntremendous progress. The Hiroshima AI Process defined an International\r\nCode of Conduct for Developers of Advanced AI Systems (Code of Conduct);\r\nthe United Nations General Assembly voiced support for many elements\r\nof the Code of Conduct; and the Organization for Economic Co-operation\r\nand Development (OECD) initiated a process to develop a mechanism\r\nto monitor the application of the Hiroshima Code of Conduct by\r\norganizations that choose to adopt it. The UK hosted the inaugural AI\r\nSafety Summit; multiple governments have created AI safety institutes;\r\nand the US and UK AI Safety Institutes announced a Memorandum of\r\nUnderstanding to work together on AI research, standards, and testing.ii\r\nBut we are still in the early days of our AI governance project. To achieve\r\nthe international AI governance outcomes that we’ve offered here, more\r\nwork is required, including on developing common frameworks that will\r\nact as durable guides for an evolving system. What follows is our further\r\nreflections on those frameworks, leveraging what we learned through\r\ndialogues with the experts whose insights are captured in case studies, as\r\nwell as our ideas for concrete next steps to advance further along the path\r\ntowards those outcomes.\r\nGlobal Governance: Goals and Lessons for AI • Foreword 6\r\nAs we continue to develop, implement, and continuously improve AI\r\nguardrails, we remain committed to learning about and contributing\r\nideas on AI governance. Most of all, we are excited about what effective\r\ngovernance of our emerging AI economyiii will mean for people,\r\norganizations, and our shared humanity. History tells us that, if we get\r\ngovernance right, a powerful new technology could fundamentally\r\nimprove countless lives around the world—in ways we can anticipate\r\ntoday and ways that we may later look to with wonder.\r\nBrad Smith\r\nVice Chair and President\r\nNatasha Crampton\r\nChief Responsible AI Officer\r\n1Frameworks and Outcomes for\r\nInternational AI Governance\r\nGlobal Governance: Goals and Lessons for AI • Frameworks and Outcomes for International AI Governance 8\r\n1\r\nFew leaps forward in technology and policy\r\ninnovation compare to what the world has\r\nrecently experienced. Artificial intelligence (AI)\r\nmodels have proliferated, their capabilities rapidly\r\nprogressing.iv Global policy has likewise developed\r\napace, with AI’s promise and peril animating\r\ndiscussions in cities ranging from Brussels to DC,\r\nDelhi, London, Santiago, Tokyo, Verona, and many\r\nplaces in between. One thing has become clear:\r\nThere is widespread determination to act, both to\r\ngovern how AI is developed and deployed and to\r\napply recent lessons about technology’s power as\r\na tool and a weapon.\r\nBut act how, where, and toward what more specific\r\noutcomes? These questions are more perplexing\r\nthan they may appear at first glance, and parallels\r\nbetween technology and policy innovation\r\ncontinue to be instructive in understanding our\r\nprogress with them.\r\nIf 2023 was the year of exploration and framing,\r\nthen 2024 is shaping up to be the year where\r\nmany new efforts are brought to ground as we\r\nfurther understand the practical application\r\nand implementation of technology and policy\r\nframeworks. Users are asking more tactical and\r\noperational questions about when and how\r\nthey can put AI technologies to work. Likewise,\r\ndevelopers and implementers of AI policy are\r\ntesting how higher-level objectives can be\r\nrealized in practice.\r\nGlobal AI governance discussions fit this\r\npattern as well. Last year was one of highlevel\r\ninstitutional analogies, with the roles of\r\nthe International Civil Aviation Organization\r\n(ICAO), the European Organization for Nuclear\r\nResearch (CERN), the International Atomic Energy\r\nAgency (IAEA), the Intergovernmental Panel on\r\nClimate Change (IPCC), the Financial Action Task\r\nForce (FATF), and the Financial Stability Board\r\n(FSB) all referenced in the context of global AI\r\ngovernance needs. This year, the United Nations\r\n(UN) High-Level Advisory Body (HLAB) on AI,\r\na group to which Microsoft’s Chief Responsible\r\nAI Officer contributes in her personal capacity,\r\nis considering key questions surrounding the\r\nopportunities and enablers of AI, the risks and\r\nchallenges of AI, and the international governance\r\nof AI, including the governance functions needed\r\nand the institutional arrangements for carrying\r\nthem out.v\r\nGlobal Governance: Goals and Lessons for AI • Frameworks and Outcomes for International AI Governance 9\r\nKey AI technology and policy moments\r\nDecember 2022\r\nChatGPT reaches\r\nover 1 million\r\nusers in less than\r\n1 week\r\nFebruary 2023\r\nRelease\r\nof Bing Chat\r\nMarch 2023\r\nRelease\r\nof GPT-4\r\nMay 2023\r\nJapan initiates\r\nHiroshima AI\r\nProcess (HAIP)\r\nat G7\r\nAugust 2023\r\nChina implements\r\nInterim Measures for\r\nthe Management\r\nof Generative AI\r\nServices\r\nJuly 2023\r\nRelease of Llama 2; US organizes voluntary\r\ncommitments from AI companies\r\nJune 2023\r\nHugging Face\r\nadds 100,000\r\nAI models since\r\nJanuary\r\nOctober 2023\r\nRelease of DALL-E 3; Chile and UNESCO\r\nhost Ministerial on the Ethics of AI; US\r\nreleases AI Executive Order; G7 agrees to\r\nHAIP Code of Conduct\r\nNovember 2023\r\nUK hosts AI Safety Summit; release of\r\nChatGPT Plus and Microsoft 365 Copilot\r\nJanuary 2024\r\nSwiss Call for Trust\r\nand Transparency in\r\nAI Action 1 launches\r\nDecember 2023\r\nEU agrees on the AI Act; India hosts Global Partnership on AI (GPAI)\r\nSummit; UN High-Level Advisory Body on AI releases interim report\r\nMarch 2024\r\nG7 calls upon Organization for Economic Co-operation\r\nand Development (OECD) to support HAIP Code of Conduct\r\nmonitoring at Italian Ministerial; UN adopts AI resolution, “Seizing\r\nthe opportunities of safe, secure, and trustworthy AI systems for\r\nsustainable development”\r\nApril 2024\r\nUK and US\r\nannounce AI\r\nsafety Memorandum\r\nof Understanding\r\n(MoU)\r\nGlobal Governance: Goals and Lessons for AI • Frameworks and Outcomes for International AI Governance 10\r\nBringing higher-level policy frameworks to ground\r\nrequires sorting out where, how, and by whom\r\nvarious objectives are pursued. While international\r\ninstitutions are a key part of pursuing safe, secure,\r\nand trustworthy AI, other actors have important\r\nand complementary roles for realizing those\r\nobjectives as well.\r\nApplying learnings from other domains, and\r\nin particular civil aviation, nuclear power,\r\nand global capital flows, three interrelated\r\nlayers of AI governance are needed: industry\r\nstandards, domestic regulation, and international\r\ngovernance. For AI, this is how we see these three\r\nlayers fitting together:\r\n• First, industry standards and specifications\r\nfor AI safety, security, and trust support\r\npolicy implementation, bringing together\r\nstate-of-the-art practices and guardrails\r\nbased on operational learnings. Industry\r\ncontributions are important because, like\r\nmany other 21st century technologies, AI\r\nis being pioneered by the private sector.\r\nCivil society and academia also influence\r\nspecifications and standards, contributing\r\nresearch and insights on how proposed\r\nmeasures or controls achieve objectives.\r\n• Second, domestic regulation builds\r\non consensus-based standards and\r\nspecifications. Domestic policies and\r\nregulation may be focused on a specific\r\nsector, domain, issue, or layer of the AI\r\neconomy (e.g., health, privacy, provenance,\r\nor AI applications)—or they may be more\r\nhorizontal and comprehensive, taking on\r\na broader set of interests, risks, desired\r\noutcomes, and AI economy actors.\r\n• Third, international governance also\r\nbuilds from consensus-based standards\r\nor specifications and complements\r\ndomestic regulation. Bilateral or multilateral\r\nagreements and governance institutions\r\ntake on issues that particularly demand or\r\nbenefit from cross-border collaboration,\r\nincluding safety or security imperatives or\r\nopportunities to facilitate global innovation\r\nand economic development.\r\nRecognizing these three overlapping layers\r\nallows us to home in on the distinct and\r\ncomplementary roles of international agreements\r\nand institutions as part of a broader governance\r\nstructure. It allows us to consider the issues that\r\nparticularly demand or benefit from cross-border\r\ncollaboration, driving a need for international AI\r\ngovernance.\r\nWhat AI governance outcomes\r\nare critical at the international\r\nlevel?\r\nLike many modern-day scientific, industrial, and\r\ncommercial breakthroughs that came before it,\r\nAI is the product of cross-border collaboration\r\nthat it also stands to strengthen. “Top-tier\r\nAI researchers” live, work, and collaborate\r\nacross regions, with the proportion of “elite\r\nAI researchers” working in different countries\r\ngrowing more diverse between 2019 and 2022.vi\r\nThe AI economy is also international; AI systems\r\nare often built with components sourced from\r\ndifferent countries and then, via the global\r\nconnectivity offered by the internet, made\r\navailable to customers around the world.\r\nGlobal interconnection underpins AI governance\r\nopportunities. Across borders, we share\r\na common interest in defining safety and\r\nsecurity rules that are impermeable.vii People\r\nand organizations around the world benefit\r\nfrom accessing the best AI technologies and\r\ncomponents without significant technical or\r\ncompliance barriers. We also stand to benefit\r\nboth nationally and across humanity if consistent\r\nnorms and guardrails help accelerate responsible\r\ninnovation that hastens sustainability and\r\nhealthcare solutions.\r\nGlobal Governance: Goals and Lessons for AI • Frameworks and Outcomes for International AI Governance 11\r\nThe cross-border nature of AI technology also\r\nchallenges governance. National-level technical\r\nor compliance barriers may develop for a variety\r\nof reasons, including value differences that are\r\ndifficult to reconcile and more minor discrepancies\r\nthat are nonetheless burdensome to coordinate.\r\nIn addition, as with other technologies, AI risks\r\ntranscend borders; an AI system developed in\r\none country could be misused by someone based\r\nelsewhere to cause harm in a third country—\r\nor even in multiple countries simultaneously,\r\nfor example, via cyberattack. Aligning and\r\nconsistently enforcing rules is critical to managing\r\nsuch risks effectively.\r\nKey international AI governance outcomes should\r\nbe defined in response to these opportunities and\r\nchallenges and how international governance fits\r\ninto a more holistic AI governance framework.\r\nFrom our vantage point, three high-level\r\noutcomes are important to pursue at the\r\ninternational level:\r\n1. Globally significant risk governance,\r\nfocusing on the most severe safety and\r\nsecurity risks that affect us all and on which\r\nthere’s broadly shared agreement regarding\r\nthe need for coordinated action, such as\r\nAI-powered acceleration of chemical or\r\nbiological weapons development or the\r\ndeployment of increasingly autonomous\r\nsystems;\r\n2. Regulatory interoperability, leveraging\r\ninternational frameworks that help to\r\nfacilitate and strengthen the interoperability\r\nof domestic policies and regulation; and\r\n3. Inclusive progress, ensuring broad access\r\nto AI’s benefits, fostered through an inclusive\r\nglobal community that contributes to AI\r\nresearch, development, and deployment.\r\nAchieving these international AI governance\r\noutcomes will require progress across a mix\r\nof efforts, including bilateral and multilateral\r\nagreements, and existing and new processes and\r\ninstitutions. To secure this future, we need clarity\r\nas to the core set of enabling functions that will\r\nmake it possible.\r\nKey desired\r\ninternational\r\nAI governance\r\noutcomes\r\n1. Globally significant risk governance\r\nInternational collaboration to monitor for and respond to\r\nglobally significant safety and security risks\r\n2. Regulatory interoperability\r\nInternational framework to facilitate and strengthen the\r\ninteroperability of domestic policies and regulation\r\n3. Inclusive progress\r\nInternational network to broaden access to infrastructure\r\nand skilling for inclusive AI research and development and\r\ntechnology benefits\r\nGlobal Governance: Goals and Lessons for AI • Frameworks and Outcomes for International AI Governance 12\r\nWhich international AI\r\ngovernance functions are\r\nnecessary, drawing upon\r\nlessons from the past?\r\nJust as global interconnection both underpins\r\nand complicates international AI governance, so\r\ntoo have similar opportunities and challenges\r\nexisted in other domains. In the decades after\r\nWorld War II came to a close, greater international\r\ninterconnection boosted research, invention, and\r\ncommerce, helping to reduce global poverty\r\nand enrich many lives. But it also accelerated\r\nthe spread of weapons and amplified crossborder\r\ncriminal activity, leading to safe havens\r\nfor bad actors and facilitating their access to\r\nenabling resources. Governments responded\r\nby collaborating to define shared expectations,\r\nenforce rules, and share resources.\r\nAI is in many ways unique, and the task of\r\nfurther developing an international governance\r\nsystem for a technology that will continue to\r\nrapidly evolve is formidable—but history holds\r\nmany lessons. In contemplating the international\r\ngovernance functions AI compels, there are\r\nuseful parallels with institutions and systems\r\ncreated during the post-World War II period to\r\naddress scientific, industrial, and commercial\r\nbreakthroughs. This includes ICAO, CERN,\r\nIAEA, IPCC, FATF, and the Bank for International\r\nSettlements (BIS), which hosts the Basel\r\nCommittee for Banking Supervision and the FSB.\r\nAs the relevance of these institutions for\r\ninternational AI governance has been referenced\r\nover the past year, at Microsoft, we’ve sought\r\nto learn more from experts who have studied or\r\nparticipated in them directly. We invited these\r\nexperts to campus for a workshop discussion,\r\nduring which we sought to more deeply\r\nunderstand why the institutions were created\r\nand what their impact has been—as well as\r\ncontemplate broader international governance\r\ntrends. To help share our learnings with the\r\nbroader AI policy community, we invited each\r\nexpert to submit an institutional case study or\r\ngovernance theory chapter.\r\nThis publication brings together these\r\nsubmissions, which we see as offering context\r\nand analogies for international AI governance.\r\nDr. Julia Morse provides an historical overview\r\nand analysis of international governance, which is\r\nfollowed by five case studies on:\r\n• ICAO, authored by David Heffernan and\r\nRachel Schwartz;\r\n• CERN, authored by Professor Sir Christopher\r\nLlewellyn Smith;\r\n• IAEA, authored by Dr. Trevor Findlay;\r\n• IPCC, authored by Diana Liverman and\r\nYouba Sokona; and\r\n• FATF, BIS, Basel, and the FSB, authored by\r\nChristina Parajon Skinner.\r\nThe workshop discussion and expert submissions\r\nhelped distill for us that our question should be less\r\nabout which institution is most apt for international\r\nAI governance and rather more about how multiple\r\ngovernance functions and institutional purposes might\r\nbe relevant to AI and our key desired international\r\nAI governance outcomes.\r\nInternational AI governance functions\r\nFrom the authors of each institutional or domain\r\narea case study, we learned about the core\r\ngovernance functions pursued in each context.\r\nWe defined four functions that are not only\r\nrepresented in what each institution was designed\r\nor evolved to pursue but also consistent with\r\nGlobal Governance: Goals and Lessons for AI • Frameworks and Outcomes for International AI Governance 13\r\ngovernance needs that exist for AI. We also\r\nrecognized how each of these international AI\r\ngovernance functions act as enablers for our\r\ndesired international AI governance outcomes.\r\nThis section unpacks that analysis, highlighting\r\nhow the most relevant international institutions\r\nfrom different domains pursued similar functions.\r\nInternational AI governance function\r\n1. Monitoring for\r\nand managing\r\nglobally\r\nsignificant AI\r\nsafety and\r\nsecurity risks\r\n2. Setting and\r\nfacilitating\r\nconsistent\r\nimplementation\r\nof common\r\nstandards\r\nand codes of\r\nconduct for AI\r\ngovernance\r\n3. Building\r\ntechnical\r\nunderstanding\r\nand scientific\r\nconsensus on\r\nAI risks and\r\neffective safety\r\npractices\r\n4. Strengthening\r\naccess to\r\nresources needed\r\nfor inclusive AI\r\nresearch and\r\ndevelopment\r\nand technology\r\nbenefits\r\nInternational AI governance outcome\r\n1.\r\n2.\r\n3.\r\nGlobally significant\r\nrisk governance\r\n2.\r\n3.\r\n4.\r\nRegulatory\r\ninteroperability\r\n2.\r\n3.\r\n4.\r\nInclusive\r\nprogress\r\nMultiple international AI governance functions, which build upon lessons learned from other\r\ndomains, could help secure multiple international AI governance outcomes.\r\nGlobal Governance: Goals and Lessons for AI • Frameworks and Outcomes for International AI Governance 14\r\nMonitoring for and managing globally\r\nsignificant AI safety and security risks\r\nThis function is closely tied to our desired\r\noutcome of globally significant risk governance.\r\nEven if interoperable domestic regulations,\r\nimplemented through well-crafted international\r\nstandards, enable effective management of\r\nmany AI risks, technologies and threats will\r\ncontinue to evolve, prompting a need for\r\ninternational coordination on emergent risks\r\nwith global significance. For example, even as\r\nglobal financial regulators defined general risk\r\nmitigation standards, as with Basel, and standards\r\nfor a specific area of risk, as with FATF, the global\r\nfinancial crisis still transpired—prompting a\r\nneed for the FSB to both facilitate a response\r\nand improve its monitoring of and readiness to\r\nmitigate emergent risks.\r\nThe FSB and IAEA case studies detail two models\r\nfor managing globally significant risks. The FSB\r\nconducts monitoring or early warning work,\r\nidentifying emerging financial stability risk and\r\npublishing research and working papers that\r\nurge attention to certain areas; it also drives\r\nforward collective problem solving in areas of\r\nhigh concern to the G20. IAEA requires Members\r\nto implement nuclear safeguards whereby states\r\ndeclare the types, amounts, and locations of\r\nnuclear materials in their possession. IAEA applies\r\nseveral layers of safeguard measures to ensure\r\nthat state declarations are correct, including\r\ninspections, sample analysis, video monitoring,\r\nand satellite imagery. Non-conformance with\r\nconstraint requirements can trigger UN Security\r\nCouncil action.\r\nSetting and facilitating consistent\r\nimplementation of common standards and\r\ncodes of conduct for AI governance\r\nInternational standards, ranging from technical\r\nspecifications to sets of practices or control\r\nframeworks against which third parties can\r\ncertify conformance, will be key to regulatory\r\ninteroperability and globally significant risk\r\ngovernance outcomes. International standards\r\ncan also help enable inclusive progress by\r\nfacilitating the interoperability that enables a\r\nglobal community to access and integrate with\r\nglobal technologies and supply chains.\r\nStandards have a long history of formalizing\r\nand advancing best practice and providing\r\nimplementation details for government-led policy,\r\nnot only for tangible products growing out of the\r\nIndustrial Revolution but also for digital services\r\nof the current era. An ecosystem of international\r\nstandards forms the backbone of governance\r\nin many sectors, effectively addressing global\r\nconcerns through a consensus-based mechanism\r\nto advance a common approach and reduce\r\nbarriers to trade and market access.\r\nAs Dr. Morse raises and the case studies\r\ndemonstrate, how institutions develop and\r\nimplement standards varies. Across ICAO,\r\nIAEA, Basel, FATF, and FSB, some institutions\r\nfocus broadly on an entire industry or sector of\r\nthe economy,viii such as civil aviation, whereas\r\nothers address a specific issue, such as money\r\nlaundering. However, institutions commonly have\r\ngovernance processes whereby areas of practice\r\nin which standards are needed may be identified\r\nby governments; then, technical experts, in some\r\ncases including stakeholders from academia,\r\ncivil society, and industry, are convened to\r\ndevelop technical standards or more detailed\r\nimplementation practices.\r\nThere are more marked differences in how\r\nadherence to these standards is encouraged or\r\nenforced. IAEA, the Basel Committee, and FSB\r\nencourage adherence to safety and security\r\nGlobal Governance: Goals and Lessons for AI • Frameworks and Outcomes for International AI Governance 15\r\nor financial governance standards through\r\nnormative expectation setting and reputational\r\nnon-compliance costs. Alternatively, FATF and\r\nICAO oversee more intensive monitoring and\r\nenforcement regimes—though FATF is not\r\nlegally established by treaty, and ICAO’s impact\r\nalso depends upon bilateral agreements and\r\ndomestic monitoring and enforcement. FATF’s\r\npeer review system of cooperative monitoring\r\nhas proven nimble and effective at advancing\r\nstandards adoption, especially when coupled\r\nwith commercial and reputational costs for noncompliance.\r\nICAO conducts safety audits but\r\ndoes not have a direct enforcement role; Member\r\nStates also audit other states’ compliance with\r\nstandards and, importantly, manage any market\r\naccess restrictions based on a finding of deficient\r\ncompliance. Outside of the role of international\r\ninstitutions, there are also processes for mutually\r\nrecognizing conformance with product safety\r\nand security standards. Mutual recognition\r\nagreements (MRAs) assert that certification of\r\na product in one country that is party to the\r\nagreement is sufficient for that product to be sold\r\nacross other jurisdictions that are party to the\r\nagreement. They have proven popular across a\r\nrange of product areas; the EU, for example, has\r\nMRAs in place for machinery, medical devices,\r\nand marine equipment.ix\r\nBuilding technical understanding and\r\nscientific consensus on AI risks and\r\neffective safety practices\r\nThis function is a key enabler of all three\r\ninternational AI governance outcomes:\r\nglobally significant risk governance, regulatory\r\ninteroperability, and inclusive progress. Building\r\ntechnical and scientific consensus on responses\r\nto questions of foundational significance, such as\r\nhow to measure AI capabilities and risks, means\r\nmore effective use of resources, more consistently\r\nunderstood and applied safety practices, and\r\nmore aligned interpretations of globally significant\r\nsafety and security risks.\r\nAs the case study details, the IPCC is an exemplary\r\nmodel for this governance function. It leverages\r\nvolunteers from the scientific community, largely\r\nacademics, to develop research reports that are\r\npeer reviewed, reflect global consensus, and are\r\npolicy relevant. It works best when research is\r\ndirected by UN Framework Convention on Climate\r\nChange (UNFCCC) questions, lending greater\r\ncredibility to direct the broader research agenda\r\nof climate scientists and build from their work.\r\nStrengthening access to resources needed\r\nfor inclusive AI research and development\r\nand technology benefits\r\nBroad and appropriate access to AI technology\r\nand skilling resources is foundational to inclusive\r\nprogress in a healthy global ecosystem as well as\r\nan enabler of regulatory interoperability. Global\r\nand local innovation are most impactful when\r\npaired together, ensuring that local context\r\nhelps bridge powerful platform technologies and\r\nthe needs of diverse communities. In addition,\r\nthe broader the community that’s familiar and\r\ninteracting with AI technology, the broader our\r\nthinking and more inclusive our processes will\r\nbe for defining and implementing responsible\r\npractices. We need individuals and organizations\r\nall over the world to be working on responsible AI\r\ndevelopment, deployment, use, and evaluation,\r\nand that broad community needs foundational AI\r\nskills to contribute to AI safety practices.\r\nCERN and IAEA offer two models for facilitating\r\naccess to AI technologies and skills. CERN\r\nprovides shared infrastructure funded by Member\r\nStates and Associated Members based on recent\r\nnet national income; it also requires publication\r\nof research findings and welcomes commercial\r\nspinoffs. Most CERN Member States are European,\r\nand CERN’s formation was in part motivated by\r\nan intention to build bridges across states recently\r\nin conflict.x Alternatively, IAEA’s membership is\r\nglobal. As part of its “bargain” with states for\r\ncomplying with nuclear safeguards, IAEA provides\r\nGlobal Governance: Goals and Lessons for AI • Frameworks and Outcomes for International AI Governance 16\r\ntechnical assistance to support use of nuclear\r\npower, as funded by contributions by Member\r\nStates according to their Gross Domestic\r\nProduct (GDP).xi\r\nInternational institution purposes\r\nIn defining the governance functions described\r\nabove, we applied an AI lens to understand what\r\nICAO, CERN, IAEA, IPCC, FATF, BIS, and the FSB\r\nset out or evolved to pursue. However, there’s\r\nanother layer of depth to unpack with regard to\r\nthe purposes that these and other international\r\ngovernance institutions have historically served.\r\nAs further described by Dr. Morse, political\r\nscientist Robert Keohane has theorized that there\r\nare three purposes for international institutions:\r\nfacilitating the flow of information; intensifying the\r\nconsequences of rule breaking; and lowering the\r\ncosts of cooperation. These purposes cut across\r\na much broader array of international institutions\r\nthan those highlighted above, surfacing the\r\nfoundational challenges that international\r\ninstitutions consistently address.\r\nThis is an instructive layer to add to our\r\ninternational AI governance framework because\r\nit allows us to more directly ask: what kind of\r\nproblem or opportunity do we need a new or\r\nevolving international institution or system of\r\ninstitutions to help address?\r\n• Is there a need to help resolve uncertainty\r\nby facilitating information flow;\r\n• Is there a collective action problem that\r\nwould benefit from more consequences for\r\nrule breaking; or\r\n• Are there high transaction costs that necessitate\r\neasing or lowering the costs of cooperation?xii\r\nImagining pursuit of our international AI governance\r\nfunctions, we can anticipate such challenges or\r\nopportunities. For example, if we want to build\r\nscientific consensus, we can imagine the need\r\nto resolve uncertainty about how the scientific\r\ncommunity will prioritize research questions for\r\nwhich there’s the most pressing policy need for\r\nconsensus—or the need to structure a process that\r\nreduces the potentially high transaction costs of\r\ncoordination across a broad global community. The\r\nIPCC case study provides experts’ perspectives on\r\nhow this prioritization and coordination works in\r\npractice.\r\nUltimately, each of Keohane’s purposes for\r\ninternational institutions overlaps conceptually\r\nwith the governance functions introduced above\r\nand pursued by the institutions from different\r\ndomains. Studying this overlap helps to illuminate\r\nthe range of challenges and opportunities that\r\nsit beneath each governance function and that\r\nmotivate the creation or evolution of international\r\ninstitutions.\r\nGlobal Governance: Goals and Lessons for AI • Frameworks and Outcomes for International AI Governance 17\r\nStrengthening\r\ncross-border\r\naccess to\r\nresources or\r\nassistance needed\r\nfor inclusive R&D\r\nor benefits\r\nCERN • IAEA\r\nBuilding technical\r\nunderstanding of\r\nand/or scientific\r\nconsensus on\r\nresearch key to\r\ncross-border\r\nissues\r\nIPCC\r\nSetting and\r\nimplementing\r\nstandards and\r\ncontributing to\r\nconsequences for\r\nnon-compliance\r\nICAO • IAEA •\r\nBasel • FSB •\r\nFATF\r\nMonitoring for\r\nand managing\r\nglobally significant\r\nsafety, security, or\r\nstability risks\r\nIAEA • FSB\r\nFacilitating\r\nthe flow of\r\ninformation\r\nX X X X\r\nIntensifying the\r\nconsequences\r\nof rule\r\nbreaking\r\nX X\r\nLowering\r\nthe costs of\r\ncooperation X X X X\r\nGlobal Governance: Goals and Lessons for AI • Frameworks and Outcomes for International AI Governance 18\r\nStudying this overlap also helps to draw out the\r\nconnections among the governance functions\r\nthemselves. For example, building scientific\r\nconsensus may seem a relatively discrete function,\r\nwith only IPCC being a clear candidate for being\r\ndedicated to that function. However, the function\r\ncould also be considered an enabler of not\r\nonly every desired international AI governance\r\noutcome but also embedded in other functions.\r\nCommon scientific understanding could support\r\nthe development and implementation of common\r\nstandards by facilitating the flow of information\r\nunderpinning them and lowering the costs of\r\nconsensus building.\r\nAs an analytical tool, Keohane’s framework\r\nalso helps surface two different paths toward a\r\ncoherent governance system that benefits from\r\nthese reinforcing functions and purposes. In one\r\npath, more common earlier in our post-World\r\nWar II era, individual institutions may evolve to\r\noperate multiple distinct functions, growing their\r\nexpertise and influence in addressing international\r\ncooperation challenges; IAEA epitomizes this\r\napproach. Or, in another path, more common\r\nlater in our post-World War II era, interconnected\r\nfunctions and purposes might be pursued by a\r\nsystem of more and less formally coordinating\r\ninstitutions that help enable and complement\r\neach other; the array of institutions that\r\ncontribute to governance of our global financial\r\nsystem epitomize this approach.\r\nThis context also underlines the need for a\r\nnetworked web of institutions and initiatives\r\nto work well together, leveraging common\r\nframeworks and orientating around key\r\ngovernance functions and outcomes. Durable\r\nframeworks for understanding the foundational\r\npurposes that international institutions have\r\nplayed can help direct more coordinated\r\ninvestments in the complementary and\r\nreinforcing functions and outcomes needed.\r\nToward international AI\r\ngovernance outcomes in 2024\r\nand beyond\r\nThis first half of this chapter has set forth\r\nframeworks to put international AI governance\r\nefforts in context. It has offered a high-level\r\nframework for AI governance, recognizing\r\ncomplementary roles for industry standards,\r\ndomestic regulation, and international\r\ngovernance. It has then overlaid an international\r\nAI governance framework, proposing desired\r\noutcomes and functions and weaving in political\r\nscience theory on international institutional\r\npurposes. Working in concert, these frameworks\r\nprovide breadth and depth to a perspective on\r\nwhy and how we are collectively acting.\r\nAnd acting we are. As acknowledged at the\r\noutset, 2023 was an active year in the realm of\r\nAI, and 2024 is at pace to carry forward that\r\nmomentum. Leveraging the desired international\r\nAI governance outcomes we’ve defined, this final\r\nsection reflects on recent progress, challenges,\r\nand opportunities. It proposes next steps and\r\noffers ideas about where energy might be\r\ndirected in the longer term.\r\nGlobally significant risk governance\r\nThe world is closer to the start of its AI journey\r\nthan the end. Given the impressive innovation\r\nwe have seen over the last 18 months, it is\r\neasy to forget that AI is a set of relatively new\r\ntechnologies. In the same way that other generalpurpose\r\ntechnologies like the printing press,\r\nelectricity, and the combustion engine have gone\r\nthrough many iterations, it’s likely the bulk of AI\r\ndevelopment and innovation is still ahead of us.\r\nGlobal Governance: Goals and Lessons for AI • Frameworks and Outcomes for International AI Governance 19\r\nIncreasingly capable AI will offer significant\r\nopportunity, accelerating scientific discovery\r\nand addressing major challenges; it may also\r\npose increased safety and security risks. A bad\r\nactor might intentionally misuse powerful AI\r\ntools as weapons to develop a new pathogen or\r\nperpetrate a cyberattack. As more capable models\r\nare applied ever more broadly across society, the\r\nrisk of significant accidental damage may also\r\nincrease. AI to help manage critical infrastructure,\r\nfor example, could pose significant harm if not\r\nequipped with safety brakes and operated by\r\nappropriately trained individuals.\r\nSome of the most serious safety and security\r\nrisks of highly capable AI will transcend and\r\nmanifest across borders.xiii As with other domains\r\npresenting globally significant safety, security,\r\nor stability risks, such as aviation and financial\r\nservices, a framework for addressing these risks\r\nmust therefore be global. Below, we set out\r\nideas about how to build upon efforts already in\r\nmotion to develop a framework for managing\r\nglobally significant safety and security risks at\r\nthe international level, grounded in the following\r\nareas of action:\r\n1. Developing international safety and security\r\nstandards through a global network of AI\r\nsafety institutes and partners\r\n2. Requiring notification of highly capable\r\nAI model development and advancing an\r\ninternational agreement for government-togovernment\r\ninformation sharing\r\n3. Licensing compute providers to validate their\r\noperation of secure infrastructure and verify\r\ndevelopers meet international safety and\r\nsecurity standards\r\nDeveloping international safety and\r\nsecurity standards through a global\r\nnetwork of AI safety institutes and partners\r\nEarly efforts to develop a network of global\r\nAI safety institutes are underway. Following\r\nthe establishment of the UK and US AI Safety\r\nInstitutes (AISIs) late last year, Japan, Singapore,\r\nCanada, and others have been in the process\r\nof creating safety institutes, and the EU has\r\nmeanwhile been building out its AI Office. In April,\r\nthe US and UK announced a Memorandum of\r\nUnderstanding (MoU) to work together via AISIs\r\non AI research, standards, and testing.xiv\r\nCollaboration among AISIs and their partners,\r\nincluding various government structures and\r\nmultistakeholder initiatives that might play\r\ncomplementary or contributing roles, is critical\r\nto building common understanding and\r\nexpectations around what risks are most globally\r\nsignificant and ripe for international governance.\r\nRisks that AI could facilitate the creation of\r\nchemical, biological, radiological, or nuclear\r\n(CBRN) or cyber weapons have been especially\r\ntop of mind, along with the importance of\r\nensuring AI remains under human control.\r\nThe need for a scientific approach to measuring\r\nhighly capable AI dovetails with concerns\r\nacross these areas. Thresholds based on the\r\ncomputational power or “compute”xv used to train\r\na model will likely serve to help identify more\r\ncapable models to which greater governance\r\nscrutiny should be applied,xvi consistent the US\r\nAI Executive Order and EU AI Act.xvii However,\r\ncompute-based thresholds do not directly\r\nindicate risky capabilities and will likely require\r\nrevision as algorithmic efficiency improves. As\r\nwork on direct capability evaluations accelerates\r\nand methods and instruments with greater\r\ndemonstrable validity are available, they may\r\nact in concert with or replace compute-based\r\nthresholds.\r\nGlobal Governance: Goals and Lessons for AI • Frameworks and Outcomes for International AI Governance 20\r\nEven as regulatory approaches that leverage\r\ncompute-based thresholds as capability proxies\r\nare nascent, they have already diverged,\r\ndemonstrating the need for greater research\r\nconsensus and coordination on defining\r\nthresholds. While the US government requires\r\ndevelopers training models with more than 10^26\r\nfloating point operations (FLOPs) to provide\r\nnotification and report evaluations, via the AI Act,\r\nthe EU has defined a threshold of 10^25 FLOPs\r\nfor identifying general-purpose AI models with\r\nsystemic risk.xviii\r\nA global network of AISIs and partners\r\ncoordinating on capability evaluations could help\r\ndrive forward research on scientific validity as\r\nwell as standardize approaches to defining risk\r\nthresholds and conducting safety tests. This work\r\ncould be informed by efforts to build consensus\r\non AI safety research, levering multistakeholder\r\ngroups like the US AISI Consortium or building on\r\n“State of the Science”-type reports.xix As with IPCC\r\nand UNFCCC, a global network of AISIs could even\r\ncontribute to directing the research questions on\r\nwhich a broader community of experts offers a\r\nconsensus view.\r\nBecause evaluating highly capable AI for\r\ndangerous or concerning capabilities involves\r\nensuring that thresholds for severe safety and\r\nsecurity risks are not exceeded after guardrails\r\nare applied, collaboration on defining guardrails\r\nis also needed. Building on the model of ICAO’s\r\ntechnical panels, AISIs could work closely with\r\nexperts in civil society, academia, and industry\r\nto define effective guardrails. Over time, a\r\nglobal network of AISIs and partners could\r\nalso standardize a broader set of evaluation\r\nframeworks and metrics, including for testing the\r\nrigor of safety and security guardrails.\r\nDeliberately structuring collaboration among a\r\nglobal network of AISIs and partners will facilitate\r\nprogress. A regular cadence of working group\r\nefforts should be punctuated by annual or\r\nbiannual convenings. Leveraging the AI summit\r\nseries initiated by the UK in 2023 and carried\r\nforward by the Korean and French governments\r\nthis and next year would help reinforce momentum\r\nand alignment at not only the researcher and\r\npractitioner but also the political level.\r\nRequiring notification of highly capable\r\nmodel development and advancing an\r\ninternational agreement for governmentto-\r\ngovernment information sharing\r\nCountries collaborating through an AISI and\r\npartner network could also implement a domestic\r\nnotification regime for highly capable model\r\ntraining and agree to share high-level information\r\nabout where such models are being developed in\r\ntheir jurisdictions, helping advance understanding\r\nof, and visibility into, emerging risks.\r\nBilateral agreements, such as the MoU\r\nbetween the UK and US to collaborate\r\nvia AISIs on AI research, standards, and\r\ntesting, serve as the foundation for broader\r\ncooperation and governance in domains\r\nbeyond AI. For instance, “123 Agreements,”\r\nwhich precede significant transfers of\r\nnuclear material from the US to partners,\r\nalso facilitate technical exchanges, scientific\r\nresearch, and safeguards discussions,\r\nincluding via IAEA. Likewise, bilateral\r\nagreements are critical to enforcing domestic\r\nimplementation of safety and security\r\nstandards defined through ICAO.xx\r\nTo effectively govern highly capable AI,\r\ngovernments need visibility into where it is\r\nbeing developed and used. Just as aircraft\r\nmust be registered with domestic authorities,\r\nwhen models hit a high capability threshold—\r\ndefined through collaboration among the AISI\r\nGlobal Governance: Goals and Lessons for AI • Frameworks and Outcomes for International AI Governance 21\r\nnetwork—developers could be required to\r\nprovide notification to their home governments,\r\nalong with information about risk assessment\r\nand mitigation measures. Notably, the US and EU\r\ngovernments have taken steps toward this end,\r\nwith their FLOPs-based thresholds that trigger\r\nregulatory reporting.xxi\r\nGovernments could also require AI compute\r\nproviders to help verify that highly capable model\r\ndevelopers provide appropriate notification. Just\r\nas anti-money laundering and terrorist financing\r\nregulation uses “know your customer” (KYC)\r\nrequirements to ensure banks track customers\r\nengaging in large transactions, KYC obligations\r\ncould ensure AI compute providers track when\r\nmodel developers use a very large amount of\r\ncompute, indicating that they are training highly\r\ncapable models.xxii Moreover, as part of a broader\r\nnotification framework, AI compute providers\r\ncould then be required to report to their home\r\ngovernment that a model developer is using a\r\nvery large amount of compute.\r\nCountries collaborating through an AISI and partner\r\nnetwork could also work to develop information\r\nsharing infrastructure and processes so that\r\nvisibility into highly capable model development\r\ncan be shared across jurisdictions. One option\r\nis to advance an international agreement, with\r\ncountries committing to collectively requiring\r\nthat model developers headquartered in their\r\njurisdictions notify them prior to training a highly\r\ncapable model.xxiii Governments could then share\r\ninformation with each other to ensure broad\r\nvisibility of highly capable model development\r\nwhile also addressing issues of confidentiality and\r\nsovereignty.xxiv The recent US-UK MoU could serve\r\nas the foundation from which to build a broader\r\ninformation sharing network.xxv\r\nUltimately, model developers in jurisdictions\r\nthat are party to an international agreement\r\ncould be required to provide notification to their\r\nhome governments, while compute providers\r\ncould be required to ensure customers provide\r\nproof of such notification. As an additional layer\r\nof verification, compute providers in jurisdictions\r\nthat are party to the agreement could be required\r\nto notify their home government when they grant\r\ncustomers access to very large amounts of compute\r\nfor highly capable model training. Governments\r\ncould then communicate across the framework to\r\nmatch a model developer’s training notification\r\nwith the reporting from the compute provider.\r\nModel developers based outside of AISI-networked\r\ncountries involved in government-to-government\r\ninformation sharing could have the option of\r\nproviding notification to a participating country.\r\nThis framework could also support a deeper\r\nexchange across countries on any emerging\r\nchallenges with highly capable AI, underpinning\r\nthe coordinated, rapid response capacity that the\r\nFSB provides for global financial stability.\r\nLicensing compute providers to validate\r\ntheir operation of secure infrastructure and\r\nverify developers meet international safety\r\nand security standards\r\nOver time, as scientific understanding and best\r\npractice progress—and standards for measuring\r\nand mitigating globally significant safety and\r\nsecurity risks are defined—this framework\r\ncan act as a foundation to advance additional\r\ngovernance safeguards. Beyond a notification\r\nregime, governments could require highly capable\r\nmodel developers or providers in their jurisdiction\r\nto apply safety and security measures prior to\r\ndeveloping or placing such a model on the\r\nmarket.xxvi Highly capable model providers could\r\nalso be required to undergo safety tests, ensuring\r\nthat the model does not present CBRN, cyber, or\r\nother serious risks.\r\nWhile safety standards and testing methods\r\nwould be developed globally via networked\r\nAISIs and their partners, those developing and\r\nproviding highly capable models would be\r\ndirectly accountable to their home government.\r\nApproval from a home government or an\r\nGlobal Governance: Goals and Lessons for AI • Frameworks and Outcomes for International AI Governance 22\r\nAISI-accredited third-party assessor could\r\nbe recognized by others, setting a high bar\r\nfor safety and security while providing for\r\nstreamlined regulatory enforcement.\r\nCompute providers could serve as another\r\nimportant governance node. In addition to\r\nrequiring that customers training and providing\r\nhighly capable AI models show they have been\r\napproved by their home government against the\r\nAISI-developed global safety standards, compute\r\nproviders could also be obligated to implement\r\nglobal security standards to guard against AI\r\ninfrastructure being compromised. Meeting\r\nKYC and security requirements could serve as\r\ncore elements of a licensing regime, whereby a\r\nlicense would need to be granted by the compute\r\nprovider’s home government before it’s legally\r\nauthorized to provide compute for highly capable\r\nAI model development or hosting.\r\nThis framework draws from other models of\r\nglobal governance outlined in later sections.\r\nIt builds on key concepts from ICAO and\r\nFATF, including the way in which globally\r\ndeveloped standards are implemented locally\r\nin an internationally coherent manner. Ensuring\r\nhighly capable model developers and providers\r\nare subject to direct oversight by their home\r\ngovernments will likely address concerns many\r\ncountries may have about excessive regulatory\r\ninspection risking leakage of sensitive information.\r\nAs model capabilities continue to improve, it\r\nmay also play a role in limiting the unintentional\r\nproliferation of highly capable models that\r\ncould be intentionally misused to cause harm.\r\nUltimately, such a framework would build on key\r\nefforts already in motion by governments across\r\nthe globe to advance a durable and effective\r\ngovernance framework capable of responding\r\nto emerging and globally significant safety and\r\nsecurity risks.\r\nGlobal Governance: Goals and Lessons for AI • Frameworks and Outcomes for International AI Governance 23\r\nCountries in international\r\nagreement\r\n• Exchange information\r\non highly capable model\r\ndevelopment\r\n• Contribute to\r\ndevelopment of global\r\nstandards via AISI\r\n• Regulate domestic\r\ndevelopers and compute\r\nproviders against AISI\r\nGlobal network of AI standards\r\nsafety institutes\r\n• Formed of AISIs\r\nfrom countries\r\nin international\r\nagreement\r\n• Develop global\r\nsafety standards via\r\nAISI network\r\n• Certify third-party\r\nevaluators\r\nProviders of highly\r\ncapable models\r\n• Accountable to home\r\ngovernments for\r\nnotification a nd safety\r\ncertification\r\n• Ensure compute\r\nproviders they use are\r\nlicensed\r\nCompute\r\nproviders\r\n• Verify model\r\ndevelopers have\r\nproof of notification\r\nand safety\r\ncertification\r\n• Provide secure\r\ninfrastructure\r\n• Licensed by home\r\ngovernment\r\nGlobal Governance: Goals and Lessons for AI • Frameworks and Outcomes for International AI Governance 24\r\nRegulatory interoperability\r\nWhile a seamless global framework is especially\r\nimportant for managing significant safety and\r\nsecurity risks about which governments share\r\nconcern, interoperability among a broader set\r\nof policy activities is likewise valuable for global\r\ntechnologies and markets. AI has the ability to\r\nhelp people and organizations around the world\r\nachieve more, for their businesses, public sector\r\nprojects, and the planet—but the degree to which\r\nit can do so depends on a globally interconnected\r\necosystem with minimal unnecessary friction. It\r\ndepends on regulatory interoperability, where\r\nthere are consistent rules and standards applied\r\nto address common expectations for safety,\r\nsecurity, rights protection, and trust.\r\nInteroperability has economic, safety, and\r\nsocietal benefits. It enables global organizations\r\nto operate more efficiently, directing resources\r\ntoward rigorous safety and societal risk\r\ngovernance rather than navigation of redundant\r\nor inconsistent obligations. It also enables small\r\nbusinesses to access cross-border markets,\r\nintegrate with global supply chains, and drive\r\ninnovation. When we say AI has the power to\r\naddress the world’s greatest challenges, we often\r\nthink of the big breakthroughs underway—but\r\nthe impact of innovative startups can cascade\r\nacross industry sectors and parts of the world,\r\ncatalyzing transformation one developer and one\r\ndeployer at a time.\r\nTake BeeOdiversity, a Belgian startup.\r\nCofounder Bach Kim Nguyen invented a\r\nsystem that knocks a tiny bit of pollen off\r\nworker bees as they return to the hive.\r\nUsing laboratory analysis and AI models,\r\nBeeOdiversity can identify more than 500\r\npesticides and heavy metals as well as plants.\r\nOnce they analyze the data, BeeOdiversity\r\nscientists make recommendations—including\r\nto farmers in Oregon, public water utilities\r\nin Europe, and beverage giant AB InBev for\r\nits operations in South Africa. In the end,\r\ntheir recommendations not only improve\r\nthe clients’ operations but also the overall\r\nenvironment—and help save bees, which\r\npollinate over 70% of crops that provide the\r\nvast majority of food worldwide.xxvi\r\nFacilitating and strengthening the interoperability\r\nof domestic policies and regulations, which help\r\nenable small and large businesses alike access\r\nmarkets, grow, and innovate, benefit from three\r\ninterrelated areas of focus:\r\n1. Defining global principles, policy\r\nframeworks, and codes of conduct\r\n2. Supporting consistent implementation\r\nthrough common standards and\r\nexpectations for artifacts\r\n3. Establishing a process to facilitate ongoing\r\ncollaboration and iterative improvements\r\nGlobal Governance: Goals and Lessons for AI • Frameworks and Outcomes for International AI Governance 25\r\nDefining global principles, policy\r\nframeworks, and codes of conduct\r\nGlobal principles, policy frameworks, and\r\ncodes of conduct act as a sturdy foundation for\r\ninteroperable global regulation. Global principles\r\ndefine common priorities and desired policy\r\noutcomes; global policy frameworks define\r\nthe roles of various stakeholders and areas of\r\npotential policy action; and global codes of\r\nconduct define sets of more specific common\r\nactions that align with areas of focus and accrue\r\nto principles.\r\nIterations of these building blocks have been\r\nput in place for international AI governance\r\nby existing global institutions. In 2019, the\r\nOrganization for Economic Co-operation and\r\nDevelopment (OECD) adopted AI Principlesxxvii\r\nthat domestic governments and global\r\norganizations, including the G20, have endorsed\r\nor leveraged to promote responsible AI.xxviii In\r\n2021, UNESCO adopted its Recommendation on\r\nthe Ethics of Artificial Intelligence, endorsed by 193\r\nMember States, providing a framework of values,\r\nprinciples, and areas of action to link higherlevel\r\nobjectives with more practical application\r\napproaches.xxix\r\nLast October, the Group of Seven (G7) Hiroshima\r\nAI Process (HAIP) agreed to an International Code\r\nof Conduct for Advanced AI Systems, defining a set\r\nof actions for responsible AI development and\r\ndeployment.xxx In March, the UN General Assembly\r\nResolution on Seizing the opportunities of safe,\r\nsecure, and trustworthy artificial intelligence systems\r\nfor sustainable development (UNGA AI Resolution),\r\nadopted by consensus of all UN member states,\r\nextended support for the HAIP Code of Conduct\r\napproach, broadening a shared commitment to\r\nconsistent policies and actions to promote safe,\r\nsecure, and trustworthy AI.xxxi\r\nWhile the progress made on defining common\r\nprinciples, frameworks, and codes of conduct\r\nand growing support for them has been critical,\r\nultimately, realizing their value is dependent\r\nupon taking further steps. Clear expectations for\r\nhow governments and industry can consistently\r\nimplement globally interoperable measures are\r\nneeded.\r\nSupporting consistent implementation\r\nthrough common standards and\r\nexpectations for artifacts\r\nWhile leveraging common principles, policy\r\nframeworks, and codes of conduct as reference\r\npoints for domestic AI regulation is a critical step\r\ntowards interoperability, if jurisdictions adopt highlevel\r\nactions or provisions but miss opportunities\r\nto coordinate further, then they risk creating\r\nunnecessary barriers for global commerce and\r\nAI safety. As efforts shift toward more detailed\r\nimplementation, as they already are with the\r\nEU’s AI Act and US AI Executive Order, significant\r\nquestions can emerge, and divergences in how\r\ncountries respond to them, even if unintended, can\r\nmeaningfully disrupt interoperability.\r\nThe HAIP Code of Conduct’s actions and the\r\nUNGA Resolution’s provisions offer an up-to-date\r\nand focused set of priorities, but they need to be\r\nfurther defined through explanatory guidance and\r\nmore clearly enumerated expectations regarding\r\nimplementation. Such guidance and expectations\r\ncould help direct the efforts of AI developers and\r\ndeployers and ensure that jurisdictions intending\r\nto align to the Code of Conduct or Resolution are\r\ninterpreting actions and provisions consistently.\r\nThe important role of standards in supporting\r\naligned implementation of higher-level policy\r\nis well recognized. For example, the EU and US\r\nhave emphasized a “shared interest in supporting\r\ninternational standardization efforts” in their\r\njoint Roadmap for Trustworthy AI and Risk\r\nManagement.xxxii\r\nGlobal Governance: Goals and Lessons for AI • Frameworks and Outcomes for International AI Governance 26\r\nThe HAIP Code of Conduct also encourages\r\norganizations to “contribute to the development…\r\nand use of international technical standards,”xxxiii\r\nand the UNGA AI Resolution stresses the\r\nurgency of cooperating on “internationally\r\ninteroperable safeguards, practices and standards\r\nthat promote innovation…”xxxiv\r\nWithin the International Organization\r\nfor Standardization and the International\r\nElectrotechnical Commission (ISO/IEC), a joint\r\ntechnical committee on AI is developing or has\r\npublished numerous standards. ISO/IEC 42001,\r\nArtificial Intelligence Management System (AIMS),\r\nwas published earlier this year, providing an\r\ninternational standard that can help organizations\r\nimplement responsible AI processes and\r\nprocedures and provide a basis for third-party\r\ncertification.xxxv ISO/IEC 42005 will also provide\r\ndetailed guidelines on implementing and\r\nconducting an AI system impact assessment.xxxvi\r\nWhile international standards should play an\r\nimportant role in defining implementation details\r\nand expectations, other reference points can\r\nalso be valuable, especially in the near term.\r\nStandards for implementing all of the HAIP Code\r\nof Conduct actions and UNGA AI Resolution\r\nprovisions do not yet exist; moreover, the\r\ndevelopment of standards through a consensusbased\r\nprocess is time intensive. The parallel\r\ndevelopment of other reference points, such\r\nas best practice implementation guidance, not\r\nonly addresses near-term gaps but also informs\r\npotential future standards.\r\nNext steps with the HAIP offer an especially\r\npromising path forward with implementation\r\nof the Code of Conduct. In March, the G7\r\nDigital Ministers tasked the OECD to develop\r\na mechanism to monitor the application of\r\nthe Code of Conduct by organizations that\r\ncommit to its actions on a voluntary basis,xxxvii\r\nalso recognizing a potential role for other\r\nstakeholders, such as the Global Partnership on\r\nAI (GPAI) and UNESCO. In April, the OECD kicked\r\noff an effort to develop an initial approach to a\r\nreporting framework that could be reviewed in\r\nJune and further built out throughout the year.\r\nThe OECD is working collaboratively with partners\r\nto align with existing, interoperable frameworks,\r\nand define an approach whereby organizations\r\ncan voluntarily report on their implementation\r\nof the Code of Conduct actions, enabling a\r\nmonitoring mechanism.\r\nA reporting framework that supports\r\norganizations in demonstrating implementation of\r\nCode of Conduct actions can help to ensure more\r\ninteroperable regulation globally—especially if\r\nsuch a process is undertaken in advance of or in\r\nparallel to domestic regulatory implementation\r\nefforts. It can serve as a reference point for\r\ndomestic efforts, providing greater clarity on key\r\nterms, explanatory guidance that bridges from\r\npolicy objectives to implementation details, and\r\ndescriptions of potential artifacts through which\r\norganizations can demostrate implementation,\r\nsuch as templates for documentation. While\r\nmanaging international and domestic efforts\r\nproceeding in parallel is complex, as with\r\nimplementation of the HAIP Code of Conduct,\r\nthe EU AI Act, and US AI Executive Order, trying\r\nto retrofit domestic policy to align with global\r\napproaches is more arduous than leveraging\r\ncommon reference points from the outset.\r\nEstablishing a process to facilitate ongoing\r\ncollaboration and iterative improvements\r\nFostering regulatory interoperability is an iterative\r\nprocess, building common reference points,\r\nbroadening participation and feedback loops,\r\nand considering new ways to support consistent\r\nimplementation. Codes of conduct themselves\r\nshould not be static over prolonged periods of\r\ntime, especially in areas as dynamic as AI. The\r\nprocess of implementation, especially among a\r\ndiverse group, is also likely to regularly surface\r\nareas for potential elaboration or improvement.\r\nGlobal Governance: Goals and Lessons for AI • Frameworks and Outcomes for International AI Governance 27\r\nThe value of developing international\r\nimplementation reference points to support\r\ndomestic regulatory implementation underpins\r\nthe work now underway to develop the\r\nHAIP Code of Conduct reporting framework.\r\nInstantiating an iterative pilot this year offers\r\ntwo key advantages. First, it allows the reporting\r\nframework to be built out in time to function as a\r\nconsistent reference point for implementation of\r\nthe EU AI Act, US AI Executive Order, and other\r\nglobal regulatory activity. Second, it allows for a\r\nprocess whereby governments, the OECD, and\r\npartners can surface and discuss challenges and\r\nwhat might be needed to maximize collective\r\ninvestments in a well-regarded reference point.\r\nBroadened collaboration is also needed. The HAIP\r\nFriends Group, launched on May 2 as an initial set\r\nof 49 countries supporting the process, represents\r\na critical step, bringing together a diverse group\r\nfrom Asia, Europe, the Middle East, and North\r\nand South America.xxxviii Going forward, the OECD,\r\nworking with partners such as GPAI and the HAIP\r\nFriends Group, could welcome implementation\r\nprojects related to the HAIP Code of Conduct and\r\nreporting framework. The OECD could also bring\r\ntogether a broader set of partners in a sustained\r\neffort, such as an inclusive framework, to leverage\r\nand contribute to regularly improving the Code\r\nof Conduct reporting framework. Alignment with\r\nthe OECD’s AI principles and other policies and\r\ntools, such as draft due diligence guidance for\r\nresponsible AIxxxix —along with coordination with\r\nrelated efforts to advance accountability and\r\ngovernance, such as the AI summit series initiated\r\nby the UK at Bletchley Park—could also broaden\r\ncollaboration and build consensus.\r\nThrough an iterative and expansive effort, the\r\nOECD could work with partners to evolve the\r\nHAIP Code of Conduct as needed, including to\r\naddress known gaps. For example, evaluations of\r\nAI products are likely to be an important part of\r\na governance regime, including at the domestic\r\nand international levels—consistent with the HAIP\r\nCode of Conduct, EU AI Act, and US AI Executive\r\nOrder calls for evaluation of advanced AI models\r\nand high-risk systems, as well as the remits of\r\nglobal AI safety institutes. However, AI evaluation\r\nscience is today unsettled; measurement\r\ntechniques and instruments are rapidly evolving,\r\nand the need for scientifically valid measurement\r\ninstruments is increasingly in focus. As AI safety\r\ninstitutes and the EU AI Office are expected to\r\nwork with others, including industry and other\r\nexperts, to progress the scientific rigor of AI\r\nevaluations and the development of effective\r\ntechniques and instruments, there will likely\r\nbe value in refining the HAIP Code of Conduct\r\nexplanatory guidance and expectations regarding\r\nartifacts that committed organizations should\r\nshare to demonstrate implementation.\r\nApproaches for promoting interoperable\r\nimplementation of consistent policy could also\r\nbe refined over time. For example, as a reporting\r\nframework for demonstrating implementation\r\nof the HAIP Code of Conduct is developed by\r\nthe OECD and its partners, it could also serve\r\nto support domestic regulatory efforts toward\r\nmutual recognition. As a leading domestic\r\napproach for comprehensive AI legislation, the\r\nEU AI Act sets an important precedent for such\r\nan approach, acknowledging a role for mutual\r\nrecognition where “conformity assessment bodies\r\nestablished under the law of a third country meet\r\nthe applicable requirements of [the Act] and\r\nthe Union has concluded an agreement to that\r\nextent.”xl\r\nInclusive progress\r\nAt the heart of our AI journey is opportunity.\r\nBut amidst excitement in domains like AI and\r\nsustainabilityxli or AI4Sciencexlii—and recent\r\nprogress towards protecting the Amazon\r\nrainforest,xliii improving cancer care and research,xliv\r\nor developing new drugs for global infectious\r\nGlobal Governance: Goals and Lessons for AI • Frameworks and Outcomes for International AI Governance 28\r\ndiseasesxlv — there is the critical question:\r\nopportunity for whom? How do we make sure this\r\nAI revolution not only enables the transformations\r\nwe need for our shared futures, but also helps\r\nraise everyone up?\r\nWe appropriately look to technologies like AI that\r\ncould help put us on the right course where we’ve\r\nfallen behind on the UN Sustainable Development\r\nGoals (SDGs) and otherwise accelerate our\r\nprogress, but those results are not a given.xlvi We\r\nmust focus on making them happen, investing\r\nin AI development and deployment that is\r\ninclusivexlvii so that AI technologies can most\r\neffectively benefit everyone.\r\nWe reflect below on the progress that’s been\r\nmade thus far and where we can and need to go,\r\nrecognizing the role of investments by industry,\r\nnational governments, and cross-border or global\r\ninstitutions. We consider three areas of focus:\r\n1. Investing in greater access to infrastructure\r\nand models\r\n2. Enhancing AI skills by strengthening and\r\namplifying available resources\r\n3. Promoting and facilitating AI for good\r\nInvesting in greater access to infrastructure\r\nand models\r\nBroad and appropriate access to AI technologies\r\nis needed to empower people and organizations\r\naround the world to develop and use AI in ways\r\nthat will serve the public good. Just like other\r\ngeneral-purpose technologies in the past, AI is\r\ncreating a new sector of the economy, with many\r\ndifferent technology components—from chips to\r\ndatacenters, data, models, tooling, applications,\r\nand distribution channels—offering entry points\r\nfor innovation.xlviii\r\nTo achieve democratic access to AI, Internet\r\nconnectivity is essential.xlix Appropriate access to\r\nAI infrastructure is likewise critical, particularly for\r\nresearch communities that foster economic growth\r\nand public accountability by analyzing the behavior\r\nof models and more broadly advancing our\r\nunderstanding of AI.l Appropriate access to models\r\nis also important for not only researcher but\r\nalso developer communities that have a greater\r\nunderstanding of their local challenges and ways AI\r\napplications may help solve them.\r\nAs discussed above in the context of AI\r\ngovernance functions, broadened global access to\r\ninfrastructure and models would also enable other\r\ninternational AI governance outcomes, including\r\nregulatory interoperability. It would accelerate\r\nexisting efforts to foster globally interoperable\r\napproaches to risk evaluation and other required\r\nsafety practices.\r\nHowever, strengthening access to AI\r\ninfrastructure is a formidable challenge. The high\r\ncost of compute resources for the training of\r\nlarge-scale AI models has been a barrier for many\r\nhigher education and nonprofit communities.\r\nIn addition, there’s a rising consensus among\r\nkey stakeholders that, at the frontiers of model\r\ncapability, careful release strategies may be\r\nnecessary until marginal safety and security risks\r\nare effectively addressed. This need for careful\r\nrelease strategies underscores why inclusive\r\nprogress needs to be nested within a broader\r\ninternational governance framework.\r\nNational governments, including the US, UK,\r\nand Canada, are making significant investments\r\nto address gaps.li Private sector companies are\r\nalso making investments to support research\r\ncommunities and the broader ecosystem.\r\nMicrosoft has expanded our AI research grants\r\nprogram;lii announced investments of over\r\n$17.5 billion in new AI and hyperscale cloud\r\ninfrastructure in Australia, the UK, Europe, and\r\nJapan along with new partnerships with Mistral\r\nGlobal Governance: Goals and Lessons for AI • Frameworks and Outcomes for International AI Governance 29\r\nAI and G42;liii and committed to our AI Access\r\nPrinciples, including broader programs to promote\r\ninnovation and competition than ever before.liv\r\nMultilateral investments are also needed. One\r\nexample of potential regional coordination\r\non shared AI infrastructure got underway\r\nin September 2023, with the Multilateral\r\nCooperation Center for Development Finance\r\nannouncing a grant to support a Development\r\nBank of Latin American and the Caribbean\r\nproject toward creating a network of highperformance\r\ncomputing centers for AI growth,\r\nstarting in Chile and the Dominican Republic.lv\r\nThe UNGA AI Resolution also provides a strong\r\nfoundation for collaboration, calling upon\r\nMember States and inviting other stakeholders\r\nto provide assistance to developing countries\r\nby enhancing digital infrastructure connectivity;\r\nenhancing access to technology that facilitates\r\ndeveloping country participation throughout the\r\nlifecycle of AI systems; and enabling innovationbased\r\nenvironments to enhance the ability\r\nof developing countries to develop technical\r\nexpertise and capacities and harness data and\r\ncompute resources.lvi\r\nEnhancing AI skills by strengthening and\r\namplifying available resources\r\nTo build with and use AI technologies most\r\neffectively, digital and AI skills are critical. As\r\nwith infrastructure and model access, effectively\r\nenhancing AI skills would also have compounding\r\npositive effects on broader international AI\r\ngovernance functions and desired outcomes,\r\nincluding by driving inclusive innovation as well as\r\nsupporting global readiness to implement a more\r\nseamless approach to consistent AI guardrails.\r\nBut, also as with infrastructure access, the scale\r\nof the challenge is substantial. Many different\r\nlearning paths may be helpful for people and\r\norganizations with different starting points with\r\ntechnology and different anticipated scenarios for\r\ninteracting with AI—across industries, countries,\r\nand languages. The demand for baseline digital\r\nskills and specific domain areas, like cybersecurity,\r\nalso is massive and continues to grow.lvii\r\nExisting international institutions and private\r\nsector partners are actively working on skilling\r\nresources. For example, UNESCO is developing\r\nresources,lviii and UNESCO’s AI Business Council\r\nhas also developed a skilling inventory.lix Last\r\nJune, Microsoft launched an AI Skills Initiative,\r\nthrough which we have already reached more\r\nthan 80 million people worldwide.lx Microsoft\r\nhas also invested in AI training programs in\r\nAustralia, the United Kingdom, Germany, and\r\nSpain and via partnership with the American\r\nFederation of Labor and Congress of Industrial\r\nOrganizations (AFL-CIO).lxi\r\nThe UNGA AI Resolution, in calling upon\r\nMember States and inviting other stakeholders\r\nto “provide assistance to developing countries…”,\r\nalso underlines skilling.lxii Specifically, it calls for:\r\nincreasing digital literacy; capacity building and\r\nknowledge sharing related to AI; and providing\r\ntechnical assistance to developing countries\r\nrelated to AI systems.lxiii\r\nIn addition to recognizing a broad need to\r\nincrease digital literacy and build AI capacity,\r\nwe anticipate value in more in-depth technical\r\nassistance in support of other international AI\r\ngovernance functions and outcomes, in particular\r\nrelated to managing globally significant safety\r\nand security risks. The emerging network of\r\nAISIs and partners discussed above offers a\r\nmechanism by which technical assistance could be\r\nenhanced, including among new AISIs or similarly\r\nfunctioning government structures ramping up\r\ncapacity. Such a network, coordinating formally\r\nor informally, would benefit from strengthened\r\nglobal readiness to not only monitor for risks but\r\nalso reinforce consistent monitoring of guardrails.\r\nGlobal Governance: Goals and Lessons for AI • Frameworks and Outcomes for International AI Governance 30\r\nPromoting and facilitating AI for good\r\nRaising up real-world examples of the use of\r\nAI to benefit humanity and bringing together\r\nmultistakeholder research and development efforts\r\nusing AI to address some of humanity’s greatest\r\nchallenges are critical to realizing the potential of\r\nthis new technology. Leveraging these examples\r\nand efforts, individuals and organizations driving\r\nprogress can learn from and build on each\r\nother’s successes, and institutions and effective\r\ngovernance can provide the infrastructure needed\r\nto help lower barriers to their cooperation.\r\nFor instance, the ITU manages AI for Good, an\r\ninclusive UN platform that aims to identify practical\r\napplication of AI to advance the SDGs and scale\r\nthose solutions for global impact.lxiv Microsoft’s AI\r\nfor Good Lab likewise functions as a research hub,\r\nleveraging big data, our cloud technology, and\r\ncollaboration with our partners to address global\r\nchallenges.lxv\r\nRecent multilateral efforts underline the\r\nimportance of ongoing efforts. In October 2023,\r\nthe HAIP Code of Conduct, building on the White\r\nHouse Voluntary Commitments from Leading AI\r\nCompanies to Manage the Risks Posed by AI, called\r\nupon organizations to “prioritize the development\r\nof advanced AI systems to address the world’s\r\ngreatest challenges, notably but not limited to\r\nthe climate crisis, global health and education.”lxvi\r\nThe March UNGA AI Resolution also calls upon\r\nMember States and invites others to “accelerate\r\nthe inclusive and positive contribution” of AI to the\r\nSDGs. Likewise, in March 2024, the G7 recognized\r\nthe need for new multistakeholder partnerships to\r\nstrengthen AI ecosystems in developing countries,\r\nincluding by democratizing compute power and\r\ndeveloping open and secure data models.lxvii\r\nWe see opportunities for globally coordinated\r\ninvestments in AI for good to be more integrated\r\nwith those towards investments in skills and\r\ninfrastructure. For instance, Microsoft’s AI for\r\nGood Lab works at the intersection of AI for good\r\nand digital and AI skills, creating AI tools that can\r\nhelp illuminate gaps in broadband availability\r\nat a more granular level. The International\r\nComputation and AI Network (ICAIN), an effort\r\nlaunched earlier this year at the World Economic\r\nForum (WEF), likewise aims to work at the\r\nintersection of AI for good and appropriate\r\naccess to infrastructure, envisioning pooling\r\nexpert knowledge and computing resources “to\r\npromote the development of interdisciplinary,\r\ninnovative research and expertise for large-scale\r\nAI models that serve society and the achievement\r\nof the [SDGs]”.lxviii\r\nGlobal Governance: Goals and Lessons for AI • Frameworks and Outcomes for International AI Governance 31\r\nOrienting for what’s next\r\nOver recent months, the pace and scope of\r\ninternational AI governance activities have been\r\nencouraging but also dizzying. This chapter has\r\noffered frameworks to orient those activities in\r\nthe broader context of AI governance as well as\r\nthe longer history of international governance. It\r\nhas proposed three international AI governance\r\noutcomes, highlighting how they relate to efforts\r\nat the domestic level and among industry as\r\nwell as how international governance functions\r\nrelevant across other domains can help enable\r\nthose outcomes.\r\nAs a vast and multifaceted project, international\r\nAI governance will continue to involve multiple\r\ninstitutions and processes, building from today’s\r\nefforts by the UN, G7, G20, OECD, GPAI, and\r\nother organizations and initiatives. Together, this\r\nAI governance system will fill in gaps but also\r\nleverage the global governance infrastructure and\r\nmore than 400 formal and informal international\r\norganizations referenced in Chapter Two,\r\nsupporting critical progress on our common\r\nobjectives for AI safety, security, and trust.\r\nDeepening our understanding of some of these\r\ninstitutions, the global governance systems\r\nthey’ve helped form, and their purposes and\r\nfunctions will help us further orient toward our\r\ndesired outcomes and anticipate the challenges\r\nand opportunities ahead. The chapters that follow\r\nthus elaborate on historical context, conceptual\r\nframeworks, and institutional analogies relevant\r\nto the international AI governance project\r\non which the world is now embarking. At this\r\ncritical moment in 2024, when we need to\r\nmaintain momentum as we shift to the difficult\r\nwork of implementing and iteratively refining,\r\nthese reflections can help inform our efforts to\r\ndefine common language and frameworks that\r\nreinforce a set of common expectations for how,\r\nwhere, and toward what specific AI governance\r\noutcomes we are collectively acting—as well as\r\nthe most valuable next steps we can take toward\r\nthose outcomes.\r\nGlobal Governance: Goals and Lessons for AI • Frameworks and Outcomes for International AI Governance 32\r\ni. See Chapter 2.\r\nii. “UK & United States announce partnership on science of AI safety,” UK Government, April 2, 2024, https://www.gov.uk/government/\r\nnews/uk-united-states-announce-partnership-on-science-of-ai-safety.\r\niii. Brad Smith, “Microsoft’s AI Access Principles: Our commitments to promote innovation and competition in the new AI economy,”\r\nMicrosoft On the Issues, February 26, 2024, https://blogs.microsoft.com/on-the-issues/2024/02/26/microsoft-ai-access-principlesresponsible-\r\nmobile-world-congress/.\r\niv. Millions of people began using natural language and generative AI systems to create text, code, and audio-visual media, helping\r\npeople achieve better outcomes in education, energy, healthcare, and more. “NTNU helps K-12 students master English faster with\r\ninnovative learning platform powered by Azure OpenAI Service,” Microsoft Customer Stories, April 19, 2023, https://ms-f1-sites-\r\n02-we.azurewebsites.net/en-sg/story/1623125651482069014-national-taiwan-normal-university-higher-education-azure-openaiservice;\r\n“EDP: powering the global energy transition using Microsoft Intelligent Data Platform, AI and IoT technology,” Microsoft\r\nCustomer Stories, June 30, 2023, https://ms-f1-sites-02-we.azurewebsites.net/en-us/story/1651196286774740054-edp-energyazure-\r\nen-portugal; “Healthcare for All with Kry using Azure Open AI Service,” Microsoft Customer Stories, October 24, 2023, https://\r\nms-f1-sites-03-ea.azurewebsites.net/en-us/story/1693712644049090392-kry-azure-open-ai-service-sweden; “When everyone\r\nspeaks the same language: Boehringer Ingelheim speeds up knowledge sharing with Azure OpenAI Service,” Microsoft Customer\r\nStories, October 24, 2023, https://ms-f1-sites-03-ea.azurewebsites.net/EN-IN/story/1693653851333576209-boerhingeringelheimazureopenaiservices-\r\nen; “Providence uses Azure OpenAI Service to decrease clinician burnout and expedite patient care,” Microsoft\r\nCustomer Stories, November 10, 2023, https://ms-f1-sites-03-ea.azurewebsites.net/ja-jp/story/1701699654934326459-providenceazure-\r\nopenai-service-united-states.\r\nv. Interim Report: Governing AI for Humanity, United Nations, December 2023: 22, https://www.un.org/sites/un2.un.org/files/un_ai_\r\nadvisory_body_governing_ai_for_humanity_interim_report.pdf. Microsoft’s Chief Responsible AI Officer is Natasha Crampton, included\r\namong other members listed here.\r\nvi. “The Global AI Talent Tracker 2.0,” MacroPolo, https://macropolo.org/digital-projects/the-global-ai-talent-tracker/.\r\nvii. Describing “impermeability” as one of five guiding principles. Ian Bremmer and Mustafa Suleyman, “Building Blocks for AI\r\nGovernance,” International Monetary Fund, December 2023, https://www.imf.org/en/Publications/fandd/issues/2023/12/POVbuilding-\r\nblocks-for-AI-governance-Bremmer-Suleyman.\r\nviii. For example, ICAO develops standards that span a wide range of safety and security issues in the civil aviation sector. This includes\r\nsetting out standardized specifications for critical pieces of safety equipment, such as the collision avoidance systems, through pilot\r\nand crew licensing, evaluation of aircraft worthiness, and airport regulation.\r\nix. “Mutual Recognition Agreements,” European Commission, https://single-market-economy.ec.europa.eu/single-market/goods/\r\ninternational-aspects-single-market/mutual-recognition-agreements.\r\nx. See Chapter 3.\r\nxi. See Chapter 3.\r\nxii. Discussing these “problem” questions in connection with the purposes for international institutions Keohane defined. Robert\r\nKeohane, After Hegemony: Cooperation and Discord in the World Political Economy, Second Edition. (New Jersey: Princeton University\r\nPress, 2005).\r\nxiii. Trager, et al., “International Governance of Civilian AI: A Jurisdictional Certification Approach,” September 11, 2023, https://arxiv.org/\r\nabs/2308.15514.\r\nxiv. “UK & United States announce partnership on science of AI safety.”\r\nxv. Compute refers to both computational infrastructure—the hardware necessary to develop and deploy an AI system—and\r\ncomputational power, the performance of the AI chip commonly measured in integer or floating-point operations. Highly capable AI\r\ntraining requires large amounts of AI chips performing a large number of computations. See Pistillo and Heim 2024 (forthcoming).\r\nxvi. This is due to the mathematical relationship between the amount of compute used to train a model and model performance, with\r\nso called “scaling laws” describing how fundamental measures of model performance continue to improve as training compute is\r\nincreased. Kaplan, et al., “Scaling Laws for Neural Language Models,” January 23, 2020, https://arxiv.org/abs/2001.08361.\r\nxvii. “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” The White House, October\r\n30, 2023, https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-andtrustworthy-\r\ndevelopment-and-use-of-artificial-intelligence/; “Artificial Intelligence Act,” European Parliament, March 13, 2024, https://\r\nwww.europarl.europa.eu/doceo/document/TA-9-2024-0138_EN.html.\r\nGlobal Governance: Goals and Lessons for AI • Frameworks and Outcomes for International AI Governance 33\r\nxviii. Id.\r\nxix. Following the UK AI Safety Summit, Yoshua Bengio is leading a process to develop an International AI Safety (previously “State of the\r\nScience”) Report.\r\nxx. 123 Agreements for Peaceful Cooperation,” National Nuclear Security administration, US Department of Energy, https://www.energy.\r\ngov/nnsa/123-agreements-peaceful-cooperation.\r\nxxi. “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence”; “Artificial Intelligence Act”.\r\nxxii. The US Department of Commerce has proposed a rule that would apply KYC requirements to compute providers, with a view to\r\nensuring visibility into where companies are using US compute providers to train the most capable models. “Taking Additional Steps\r\nTo Address the National Emergency With Respect to Significant Malicious Cyber-Enabled Activities,” Proposed Rule by the Commerce\r\nDepartment, September 24, 2021, https://www.federalregister.gov/documents/2021/09/24/2021-20430/taking-additional-steps-toaddress-\r\nthe-national-emergency-with-respect-to-significant-malicious.\r\nxxiii. As the US Department of State reports, the US enters into more than 200 treaties and international agreements each year, and EU\r\nlaws like the General Data Protection Regulation (GDPR) and Data Act provide for the development of international agreements\r\nas part of supporting international data flows in a way that aligns with EU legislation. “Treaties and International Agreements,” US\r\nDepartment of State, https://www.state.gov/policy-issues/treaties-and-international-agreements/; Article 27 of the Data Act and\r\nArticle 48 of the GDPR state that international transfers of non-personal and personal data, respectively, may be provided for by an\r\n“international agreement…in force between the requesting third country and the Union or a Member State.” “General Data Protection\r\nRegulation,” https://gdpr-info.eu/.\r\nxxiv. While notification and KYC requirements could help provide visibility into the development of highly capable AI technology, they\r\nalso require careful calibration given questions of customer privacy and concerns from governments around the world about the\r\nimplications of AI for sovereignty and national competitiveness. Heim, et al., ”Governing Through the Cloud: The Intermediary Role of\r\nCompute Providers in AI Regulation,” March 13, 2024, https://www.oxfordmartin.ox.ac.uk/publications/governing-through-the-cloudthe-\r\nintermediary-role-of-compute-providers-in-ai-regulation.\r\nxxv. “UK & United States announce partnership on science of AI safety.”\r\nxxvi. The organization providing the model, for example, could be required to demonstrate that they have a suitable risk management\r\nplan in place. This could include a process for performing risk identification and mitigation, implementing cybersecurity\r\nrequirements, and monitoring for and responding to significant incidents.\r\nxxvii. Chris Welsch, “Assisted by AI, a workforce of bees tracks pollution and boosts biodiversity,” Microsoft News Centre Europe,\r\nSeptember 18, 2023, https://news.microsoft.com/source/emea/features/assisted-by-ai-a-workforce-of-bees-tracks-pollution-andboosts-\r\nbiodiversity/.\r\nxxviii. “OECD AI Principles Overview,” OECD, https://oecd.ai/en/ai-principles.\r\nxxix. “G20 Ministerial Statement on Trade and Digital Economy,” Ministry of Foreign Affairs of Japan, https://www.mofa.go.jp/\r\nfiles/000486596.pdf; “The state of implementation of the OECD AI Principles four years on,” OECD, October 27, 2023, https://www.\r\noecd.org/publications/the-state-of-implementation-of-the-oecd-ai-principles-four-years-on-835641c9-en.htm.\r\nxxx. “Recommendation on the Ethics of Artificial Intelligence,” UNESCO, May 16, 2023, https://www.unesco.org/en/articles/\r\nrecommendation-ethics-artificial-intelligence.\r\nxxxi. “Hiroshima Process International Code of Conduct for Organizations Developing Advanced AI Systems,” G7 Hiroshima Summit 2023,\r\nhttps://www.mofa.go.jp/files/100573473.pdf\r\nxxxii. “Seizing the opportunities of safe, secure and trustworthy artificial intelligence systems for sustainable development,”\r\nUnited Nations General Assembly, March 11, 2024, https://documents.un.org/doc/undoc/ltd/n24/065/92/pdf/n2406592.\r\npdf?token=sq74ZVdohESsTzyJEM&fe=true.\r\nxxxiii. “TTC Joint Roadmap on Evaluation and Measurement Tools for Trustworthy AI and Risk Management,” National Institute of Standards\r\nand Technology, December 1, 2022, https://www.nist.gov/system/files/documents/2022/12/04/Joint_TTC_Roadmap_Dec2022_Final.\r\npdf.\r\nxxxiv. “Hiroshima Process International Code of Conduct.”\r\nxxxv. In doing so, it will provide an AI equivalent to ISO/IEC 27001 and ISO/IEC 27701, foundational standards in the cybersecurity and\r\nprivacy domains.\r\nxxxvi. “Information technology — Artificial intelligence — AI system impact assessment,” International Organization for Standardization,\r\nhttps://www.iso.org/standard/44545.html.\r\nxxxvii. “G7 Industry, Technology and Digital Ministerial Meeting Ministerial Declaration,” G7 Italia, March 15, 2024, https://www.g7italy.it/wpcontent/\r\nuploads/G7-Industry-Tech-and-Digital-Ministerial-Declaration-Annexes-1.pdf.\r\nGlobal Governance: Goals and Lessons for AI • Frameworks and Outcomes for International AI Governance 34\r\nxxxviii. “Member countries of the Hiroshima AI Process Friends Group,” https://www.soumu.go.jp/hiroshimaaiprocess/en/supporters.html.\r\nxxxix. Describing existing guidance and international instruments with which the draft guidelines will dock in, see OECD Due Diligence\r\nGuidance for Responsible Business Conduct, OECD, 2018, https://mneguidelines.oecd.org/OECD-Due-Diligence-Guidance-for-\r\nResponsible-Business-Conduct.pdf.\r\nxl. See Recital 127: “In line with Union commitments under the World Trade Organization Agreement on Technical Barriers to Trade, it\r\nis adequate to facilitate the mutual recognition of conformity assessment results produced by competent conformity assessment\r\nbodies, independent of the territory in which they are established, provided that those conformity assessment bodies established\r\nunder the law of a third country meet the applicable requirements of this Regulation and the Union has concluded an agreement\r\nto that extent. In this context, the Commission should actively explore possible international instruments for that purpose and in\r\nparticular pursue the conclusion of mutual recognition agreements with third countries.”\r\nxli. Brad Smith and Melanie Nakagaw, “Accelerating Sustainability with AI: A Playbook,” Microsoft On the Issues, November 16, 2023,\r\nhttps://blogs.microsoft.com/on-the-issues/2023/11/16/accelerating-sustainability-ai-playbook/.\r\nxlii. Microsoft Research AI for Science, https://www.microsoft.com/en-us/research/lab/microsoft-research-ai4science/.\r\nxliii. Elliott Smith, “AI may hold a key to the preservation of the Amazon rainforest,” Microsoft Source LATAM, September 6, 2023, https://\r\nnews.microsoft.com/source/latam/features/ai/amazon-ai-rainforest-deforestation/?lang=en.\r\nxliv. “Research Forum Episode 2: Transforming health care and the natural sciences, AI and society, and the evolution of foundational\r\nAI technologies,” Microsoft Research, March 6, 2024, https://www.microsoft.com/en-us/research/blog/research-forum-episode-2-\r\ntransforming-health-care-and-the-natural-sciences-ai-and-society-and-the-evolution-of-foundational-ai-technologies/.\r\nxlv. Tie-Yan Liu and Tao Qin, “GHDDI and Microsoft Research use AI technology to achieve significant progress in discovering new drugs\r\nto treat global infectious diseases,” Microsoft Research, January 16, 2024, https://www.microsoft.com/en-us/research/blog/ghddiand-\r\nmicrosoft-research-use-ai-technology-to-achieve-significant-progress-in-discovering-new-drugs-to-treat-global-infectiousdiseases/.\r\nxlvi. “The 17 Goals,” United Nations, https://sdgs.un.org/goals.\r\nxlvii. Inclusiveness is one of Microsoft’s foundational AI principles. “Principles and Approach,” Microsoft AI/Responsible AI, https://www.\r\nmicrosoft.com/en-us/ai/principles-and-approach.\r\nxlviii. Smith, “Microsoft’s Access Principles.”\r\nxlix. Many international organizations continue to focus and make progress on broadband infrastructure, including the International\r\nTelecommunications Union (ITU) and the UN Educational, Scientific and Cultural Organization (UNESCO). “Universal Broadband\r\nConnectivity,” International Telecommunication Union, https://www.itu.int/en/action/broadband. Microsoft also recognizes\r\nthis imperative and is actively working to bridge gaps in these areas, including via our Airband Initiative, through which we’ve\r\nconnected 51 million people to high-speed internet since 2017 and set the goal to connect 250 million people, including 100 million\r\nAfricans, by 2025. “The ITU/UNESCO Broadband Commission for Sustainable Development,” Broadband Commission, https://www.\r\nbroadbandcommission.org/.\r\nl. Basic research, perhaps especially at universities, is of fundamental importance to countries’ economic and strategic success; the past\r\nfew decades have seen huge swaths of research in almost every field propelled by growing compute resources and data science.\r\nGoverning AI: A Blueprint for the Future, Microsoft, May 25, 2023, https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/\r\nRW14Gtw.\r\nli. Last October, the US AI Executive Order tasked the National Science Foundation with launching a pilot program implementing the\r\nNational AI Research Resource (NAIRR), pursuing the infrastructure, governance mechanisms, and user interfaces to make available\r\ncomputations, data, models, and training resources to support AI-related research and development. “Executive Order on the Safe,\r\nSecure, and Trustworthy Development and Use of Artificial Intelligence.” In November, the UK announced that it will build and\r\nconnect two supercomputers, giving researchers access to resources with more than 30 times the capacity of the UK’s current largest\r\npublic AI computing tools. “Technology Secretary announces investment boost making British AI supercomputing 30 times more\r\npowerful,” UK Government, November 1, 2023, https://www.gov.uk/government/news/technology-secretary-announces-investmentboost-\r\nmaking-british-ai-supercomputing-30-times-more-powerful. In April, Canada announced a $2.4 billion investment, including\r\nto build and provide access to AI infrastructure for local researchers, start-ups, and scale-ups; help bring new technologies to market\r\nand support deployment among critical sectors and small businesses; support workers; strengthen enforcement of a proposed law;\r\nand create a new Canadian AI Safety Institute. “Securing Canada’s AI advantage,” Prime Minister of Canada, April 7, 2024, https://\r\nwww.pm.gc.ca/en/news/news-releases/2024/04/07/securing-canadas-ai.\r\nlii. Through Accelerate Foundation Models Research (AFMR), we’ve selected 125 new projects from 75 institutions across 13 countries.\r\nAFMR is a research grant program through which we aim to facilitate interdisciplinary research on aligning AI with human goals,\r\nvalues, and preferences; improving human interactions via sociotechnical research; and accelerating scientistic discovery in natural\r\nsciences. After managing a pilot phase that launched earlier in 2023, we expanded the program and have now selected 125 new\r\nGlobal Governance: Goals and Lessons for AI • Frameworks and Outcomes for International AI Governance 35\r\nprojects from 75 institutions across 13 countries. The focus of our first open call for proposals was on aligning AI systems with human\r\ngoals and preferences; advancing beneficial applications of AI; and accelerating scientific discovery in the natural and life sciences.\r\nAs we continue to expand the breadth of our reach with academic partnerships, we will also continue to expand the depth of our\r\nresearch, including in areas like AI evaluation and measurement. “Microsoft’s AI Safety Policies,” Microsoft On the Issues, October 26,\r\n2023, https://blogs.microsoft.com/on-the-issues/2023/10/26/microsofts-ai-safety-policies/.\r\nliii. “Microsoft announces a $5 billion investment in computing capacity and capability to help Australia seize the AI era,” Microsoft\r\nAustralia News Centre, October 24, 2023, https://news.microsoft.com/en-au/features/microsoft-announces-a5-billion-investmentin-\r\ncomputing-capacity-and-capability-to-help-australia-seize-the-ai-era/; Brad Smith, “Our investment in AI infrastructure, skills\r\nand security to boost the UK’s AI potential,” Microsoft On the Issues, November 30, 2023, https://blogs.microsoft.com/on-theissues/\r\n2023/11/30/uk-ai-skilling-security-datacenters-investment; Smith, “Microsoft’s AI Access Principles”; “Microsoft to invest US$2.9\r\nbillion in AI and cloud infrastructure in Japan while boosting the nation’s skills, research and cybersecurity,” Microsoft Stories Asia,\r\nApril 10, 2024, https://news.microsoft.com/apac/2024/04/10/microsoft-to-invest-us2-9-billion-in-ai-and-cloud-infrastructure-injapan-\r\nwhile-boosting-the-nations-skills-research-and-cybersecurity/.\r\nliv. Smith, “Microsoft’s AI Access Principles.”\r\nlv. MCDF highlighted that its investment aims to enhance regional “research, development, and application capabilities of AI solutions\r\nthrough the HPC network,” starting in Chile and the Dominican Republic. The MCDF grant will cover a series of feasibility studies,\r\nincluding an HPC supply and demand analysis, a roadmap for constructing HPC centers in Chile and the Dominican Republic, and a\r\nproposal based on technical, legal, environmental, institutional, and other assessments. “MCDF Grant to Open Door to Ai Computing\r\nNetwork in Chile and Dominican Republic,” Multilateral Cooperation Center for Development Finance, September 21, 2023, https://\r\nwww.themcdf.org/en/news-activities/news/2023/MCDF-Grant-to-Open-Door-to-Ai-Computing-Network-in-Chile-and-Dominican-\r\nRepublic.html.\r\nlvi. “Seizing the opportunities of safe, secure and trustworthy artificial intelligence systems for sustainable development.”\r\nlvii. Miriam Brady, “Microsoft Launches New AI Skills Training and Resources as part of Skill for Jobs Initiative,” Microsoft Nonprofit\r\nCommunity Blog, October 24, 2023, https://techcommunity.microsoft.com/t5/nonprofit-community-blog/microsoft-launches-new-aiskills-\r\ntraining-and-resources-as-part/ba-p/3963189.\r\nlviii. “Artificial intelligence and the Futures of Learning,” UNESCO, September 12, 2023, https://www.unesco.org/en/digital-education/aifuture-\r\nlearning.\r\nlix. Business Council for Ethics of AI,” UNESCO, https://www.unesco.org/en/artificial-intelligence/business-council.\r\nlx. The Initiative puts forward new, free coursework developed with LinkedIn, including the first Professional Certificate on Generative\r\nAI in the online learning market, as well as a new open global grant challenge in coordination with data.org to uncover new ways\r\nof training workers on generative AI; it also advances greater access to free digital learning events and resources for everyone to\r\nimprove their AI fluency. There are also resources dedicated to schools and educators and pathways for specific roles, including\r\nengineering. “AI Skills,” Microsoft Corporate Social Responsibility, https://www.microsoft.com/en-us/corporate-responsibility/ai-skillsresources;\r\n“Microsoft Azure AI Fundamentals: Generative AI,” Microsoft Build, https://learn.microsoft.com/en-us/training/paths/\r\nintroduction-generative-ai/.\r\nlxi. Smith, “Microsoft’s AI Access Principles.”\r\nlxii. “Seizing the opportunities of safe, secure and trustworthy artificial intelligence systems for sustainable development.”\r\nlxiii. Id.\r\nlxiv. AI for Good consists of an online program and annual in-person summit, co-convened with Switzerland and 40 other UN agencies. The\r\nITU also facilitates Focus Groups on AI for Natural Disaster Management, for Digital Agriculture, and for Health; these groups are helping\r\nassist with data collection and handling, improving the precision and sustainability of farming techniques, and evaluating AI-based\r\nhealth methods. “Artificial Intelligence,” International Telecommunication Union, https://www.itu.int/en/ITU-T/AI/Pages/default.aspx.\r\nlxv. “AI For Good Lab, Microsoft Research, https://www.microsoft.com/en-us/research/group/ai-for-good-research-lab/.\r\nlxvi. “Hiroshima Process International Code of Conduct for Organizations Developing Advanced AI Systems,” G7 Hiroshima Summit 2023,\r\nhttps://www.mofa.go.jp/files/100573473.pdf; “Biden-Harris Administration Secures Voluntary Commitments from Leading Artificial\r\nIntelligence Companies to Manage the Risks Posed by AI,” The White House.\r\nlxvii. “Seizing the opportunities of safe, secure and trustworthy artificial intelligence systems for sustainable development.”\r\nlxviii. “G7 consensus reached on advancing AI for sustainable development,” United Nations Development Programme, March 15, 2024,\r\nhttps://www.undp.org/news/g7-consensus-reached-advancing-ai-sustainable-development.\r\nlxix. “Rebalancing the Global AI Landscape,” ICAIN, https://icain.ch/.\r\n2The Building Blocks of Global Governance:\r\nA Comparative Exploration with Lessons for AI\r\nAuthored by Julia C. Morse\r\nGlobal Governance: Goals and Lessons for AI • The Building Blocks of Global Governance 37\r\n2\r\nThe modern era is characterized by\r\nunprecedented levels of global cooperation.\r\nInternational organizations (IOs) organize\r\nstate behavior across numerous issue areas,\r\ncovering everything from high-stakes security\r\nconcerns like nuclear proliferation and terrorism\r\nto complex, technocratic topics like sanitation\r\nand food safety. Whereas global governance\r\nwas once rare and known primarily for idealistic\r\nfailures like the League of Nations, today\r\nmore than 300 formal and 150 informal bodies\r\npromote cooperation across states. These IOs\r\nvary in mandate, membership, and authority,\r\nyet each is part of the complex architecture that\r\ngoverns life in the 21st century.\r\nHow did we get to this highly institutionalized\r\nworld? And what lessons do existing IOs hold\r\nfor incipient AI governance? This chapter links\r\npast and present with an eye to the future.\r\nSection One begins by recounting the origin\r\nstory of modern global governance. Despite its\r\ncooperative orientation, it was war, not peace,\r\nthat gave birth to the United Nations and many\r\nother well-known IOs. The political tensions of\r\nthe Cold War spawned additional growth, and\r\nthe number of treaty-based-IOs more than\r\ndoubled during this period. Over the last thirty\r\nyears, however, cooperation has shifted toward\r\nmore informal bodies, as states seek flexible and\r\nadaptive solutions to new types of challenges.\r\nAs a result of these trends, cooperation has\r\nfragmented—even a single-issue area might have\r\nten or more IOs that make relevant policy.\r\nThe five domain areas and related IOs examined\r\nin this report are best understood within the\r\nbroader context of this historical trajectory. Each\r\nIO is a product of a historical moment when\r\nstate goals and geopolitical interests aligned and\r\nresulted in a specific mandate, structure, and\r\noperations. The objectives and operations of each\r\nIO thus offer insights into possible roles for future\r\nAI regimes.\r\nTo compare and contrast these objectives, Section\r\nTwo draws on political scientist Robert Keohane’s\r\nfoundational insights into institutionalized\r\ncooperation and applies these arguments to the\r\nIOs included in this report. First, IOs facilitate the\r\nflow of information across cooperating states.\r\nThey create shared understanding of problems,\r\ndevelop standards for acceptable behavior, and\r\nmonitor state conformity with the standards.\r\nSecond, IOs intensify the consequences for rule\r\nbreaking through reputational mechanisms,\r\nexternal enforcement, and even occasionally\r\ninstitutionalized enforcement. Third, IOs lower\r\nthe “costs of doing business” so that states and\r\nnon-state actors can exchange information and\r\ndevelop expertise, provide technical assistance,\r\nand even transfer technologies across borders.\r\nComparing the cases along these key dimensions\r\nreveals both variation and commonalities across\r\ngovernance models.\r\nSection Three extracts policy lessons from the\r\ncomparative case analysis. The cases illustrate the\r\nimportance of strong leadership, particularly from\r\nactors with the technical expertise to develop\r\nstandards and the market power to enforce them.\r\nHistorically, this leadership has come most often\r\nfrom the United States. The cases also highlight\r\nthe importance of defining a clear purpose for a\r\nnew IO. Not all objectives can be accomplished\r\nat once, and states may need to make tradeoffs\r\nbetween different goals. Additional lessons\r\nhighlight how first steps at cooperation may be\r\nreinforced over time, as IOs evolve and often\r\nstrengthen through external processes. Overall,\r\nthe cases highlight the urgent need to identify\r\ncommon objectives and initiate preliminary\r\ngovernance; many of the fine-grained details will\r\nlogically follow.\r\nGlobal Governance: Goals and Lessons for AI • The Building Blocks of Global Governance 38\r\nGlobal governance from 1945\r\nto today\r\nModern global governance has its roots in war\r\nand conflict. Amid the pronounced desperation\r\nand fear of the early World War II period, allied\r\ncountries became convinced that the only hope\r\nfor establishing a lasting peace lay in the creation\r\nof an international organization that would unite\r\ncountries. In August 1941, US President Franklin\r\nD. Roosevelt and UK Prime Minister Winston\r\nChurchill laid the foundation for such a body,\r\nforging an agreement that affirmed common\r\nprinciples like respect for sovereignty, trade\r\nopenness, and abandonment of the use of force.i\r\nFive months later, twenty-six countries, all at\r\nwar with the Axis powers, subscribed to these\r\ncommon principles in the “Declaration by United\r\nNations.” This was the first time that the term\r\n“United Nations” was used, and it stipulated a\r\nclear vision for a post-war world.\r\nThe next three years saw intense negotiations\r\nover the structure and membership of the United\r\nNations, with the US taking a leadership role.\r\nRoosevelt wanted to build a strong post-war\r\norder where political disputes could be routed\r\nthrough international institutions rather than\r\nspilling into military battles. Given the League of\r\nNations’ failure to prevent the outbreak of war,\r\nRoosevelt was convinced that any new institution\r\nneeded the power to enforce its decisions and\r\nthat US involvement was essential. He promoted\r\na framework where core countries like the United\r\nStates, the Soviet Union, the United Kingdom, and\r\nChina would provide institutional leadership, and\r\nworked to reach compromises that would balance\r\nthe need for widespread participation with the\r\nprotection of US interests.ii The UN’s eventual\r\nbicameral structure, where enforcement power\r\nresides within the 15-member Security Council but\r\nbudgetary power lies with the inclusive General\r\nAssembly, reflects this balance.\r\nThe post-war period saw tremendous growth in\r\nglobal governance. The creation of the United\r\nNations in 1945 launched a new trend in which\r\nstates sought to institutionalize cooperation.\r\nIn the economic arena, organizations like the\r\nWorld Bank and the International Monetary\r\nFund became key to development and monetary\r\nefforts. Security cooperation expanded through\r\nregional organizations like the North Atlantic\r\nTreaty Organization and oversight bodies like\r\nthe International Atomic Energy Agency (IAEA).\r\nAcross issue areas and policy domains, states\r\nincreasingly turned to IOs. Between 1945 and\r\nthe end of the Cold War, the number of treatybased-\r\nIOs more than tripled, growing from 66 to\r\n313 in a little more than four decades.iii The US\r\ndesire to institutionalize its leadership position,\r\nthe rise of shared global norms, and the increased\r\nnumber of countries in the global system all likely\r\ncontributed to this trend.iv\r\nThe post-Cold War period heralded another\r\nshift, this time in how states designed new\r\nglobal governance bodies. Formal, treaty-based\r\ncommitments were poorly suited to address more\r\nspecialized challenges like combating money\r\nlaundering, intelligence cooperation after terrorist\r\nattacks, and private security during armed conflict.\r\nModern threats required more technocratic and\r\nflexible approaches, often with a smaller group\r\nof likeminded countries. While legally binding\r\ntreaties provided stability and policy reassurance,\r\nthey also took years to negotiate and involved\r\nvaried coalitions. States turned to creating task\r\nforces, clubs, networks, and forums; informal IOs\r\nsurged as formal IOs stagnated. Today, there are\r\nnearly 150 informal IOs—more than double the\r\nnumber at the end of the Cold War.v\r\nInformal global governance is one of the defining\r\nfeatures of the 21st century. Such organizations\r\nhave no legal status, often a small or non-existent\r\nsecretariat, and fewer members than formal\r\nIOs, yet they make decisions with wide-ranging\r\nrepercussions for states. Informal forums like the G7\r\nGlobal Governance: Goals and Lessons for AI • The Building Blocks of Global Governance 39\r\nand G20 allow states to cooperate and coordinate\r\npolicy while protecting autonomy. They are also\r\nremarkably durable, as states adapt or expand IO\r\nmissions to address new challenges or increase\r\ntheir authority over time. The Financial Action Task\r\nForce (FATF) began in 1989 as a G7 initiative to\r\ncoordinate anti-money laundering policy, but today\r\nthe FATF has 39 members plus a vast network of\r\nassociate countries, and designs standards that\r\ncover additional topics like combating the financing\r\nof terrorism and proliferation.vi\r\nYet while countries turn to informal IOs to address\r\nnew challenges, the post-war institutional order\r\ncontinues to be the foundation for cooperation.\r\nFormal and informal IOs sit alongside each\r\nother, coordinating and competing over policy\r\nspace. These “regime complexes” of multiple\r\nIOs that work on a single issue can reinforce\r\neach other’s actions, as has occurred in the\r\nglobal counter-terrorism arena. States have\r\ninserted FATF recommendations on combating\r\nterrorist financing into UN Security Council\r\nresolutions, lending additional legal clout to\r\n“soft law” standards.vii They may also compete\r\nwith each other, challenging established rules or\r\ninternational law.viii As institutions proliferate, the\r\neffects of a single IO on policy outcomes become\r\nchallenging to disentangle from larger patterns of\r\nglobal governance.\r\nComparative analysis of cases\r\nThe five case studies in this report reflect many\r\nof the historical trends described above. Treatybased\r\norganizations like the International Civil\r\nAviation Organization (ICAO), CERN, and the IAEA\r\nwere established in the two decades immediately\r\nfollowing World War II, when states viewed\r\nmultilateral solutions as integral to preventing\r\nthe outbreak of another war. Indeed, even CERN,\r\nan IO centered around research and scientific\r\ncollaboration, was also intended to foster\r\ncooperation between people recently in conflict.\r\nLater cooperative efforts were more varied and\r\nencountered different geopolitical challenges.\r\nThe Intergovernmental Panel on Climate Change\r\n(IPCC) was an outgrowth of a formal IO, the\r\nUnited Nations, and has worked to achieve\r\nscientific consensus around climate change to\r\nsupport the creation of new legally binding\r\nclimate change treaties. Yet such efforts have\r\nproceeded in fits and starts, as formalized\r\ncooperation appears increasingly difficult to\r\nachieve in the post-Cold War era. Meanwhile,\r\nfinancial governance has expanded significantly\r\nover the last fifty years, all the while relying on\r\ninformal IOs staffed with government bureaucrats.\r\nHow can we make sense of such varied\r\ninstitutions with quite different origin stories?\r\nRenowned political scientist Robert Keohane\r\ntheorizes that states create IOs to serve three\r\npurposes: facilitating the flow of information,\r\nintensifying the consequences of rule breaking,\r\nand lowering the costs of cooperation.ix This\r\ntheoretical framework sheds light on the\r\nachievements and challenges of each global\r\ngovernance example.\r\nImproving information\r\nAll IOs exist in part to facilitate the flow of\r\ninformation across states. One common way\r\nthat information promotes cooperation is when\r\nIOs work to build consensus around problem\r\ndefinitions. When states have varied threat\r\nperceptions, this task is crucial: how can states\r\nwork together to solve a problem if they fail to\r\nunderstand it in the same way? IOs can help\r\nstates define the nature of a challenge, which\r\nis often a necessary first step before moving\r\nforward with a solution.\r\nNearly all IOs in this report take on this\r\nproblem-defining role, but none as important\r\nas the IPCC. When scientists and policymakers\r\nconvened in Toronto, Canada, in 1988 to call\r\nfor the establishment of an intergovernmental\r\npanel on climate change, there was significant\r\nuncertainty about the process surrounding\r\nGlobal Governance: Goals and Lessons for AI • The Building Blocks of Global Governance 40\r\nglobal warming, including its attribution to\r\nhuman activities. Each consecutive IPCC report\r\nenhanced intergovernmental consensus on the\r\nnature of the risk at hand. Because IPCC reports\r\nare made public, they also promoted a shared\r\nunderstanding across citizens and private actors.\r\nIn addition to defining problems, IOs can help\r\ncoordinate state expectations around acceptable\r\nbehavior and best practices. ICAO was explicitly\r\nestablished for this standard-setting purpose:\r\ncountries needed to develop a single set\r\nof expectations around topics like airspace\r\nsovereignty, overfly rules, and air navigation.\r\nStates also anticipated the challenges that would\r\nbe posed by differing approaches to airline\r\nsafety, and the concomitant need to set clear\r\nguidelines. Given its influence on industry, ICAO\r\nconsults heavily with private sector experts\r\nwhen formulating standards, but member states\r\napprove the final decisions.\r\nFinancial governance institutions are also oriented\r\nprimarily around improving information, in this\r\ncase through adaptable standard setting and\r\nmonitoring. The advantages of such an approach\r\ncan be seen through the lens of crisis response.\r\nThe 2008 financial crisis led G20 states to pay\r\nrenewed attention to topics like financial risk\r\nmanagement. In the wake of the crisis, the G20\r\ncreated the Financial Stability Board and provided\r\nexisting IOs with core tasks related to enhancing\r\nsound regulation in the financial sector and\r\npromoting integrity in financial markets.x Financial\r\ngovernance institutions responded quickly to this\r\nrequest. Because the Basel Committee’s standards\r\nare not tied to a specific treaty, finance ministers\r\nwere able to integrate new information and\r\nupdate the accords, publishing Basel III in 2011.\r\nFATF similarly updated standards and intensified\r\nits monitoring of state compliance with its\r\nstandards. FATF’s approach, wherein it regularly\r\nupdates its recommendations and conducts indepth,\r\npeer evaluations of member state policy,\r\nis emblematic of the advantages of informal IOs.\r\nWithout the force of international law, states\r\nare more willing to revise standards and subject\r\nthemselves to intensive monitoring.\r\nFinally, of all the case studies included in this\r\nreport, the IAEA has perhaps the most important\r\ninformational role of all: monitoring civilian\r\nnuclear programs in an effort to detect diversions\r\nfor weapons purposes. The IAEA’s safeguards\r\nregime is one of the most intrusive monitoring\r\nregimes in international politics, and it is a product\r\nof both its time and the alignment of geopolitical\r\ninterests on this particular issue. When the IAEA\r\nwas created in 1957, and when its role shifted to\r\nmandatory safeguards with the entry into force\r\nof the Nuclear Non-Proliferation Treaty in 1970,\r\ntreaty-based cooperation was the norm and on\r\nthis rare issue, US and Soviet interests aligned.\r\nMoreover, non-nuclear states were told that to\r\ngain access to these technologies, they had to\r\nsubmit to the IAEA’s procedures, including nuclear\r\nmaterial accountancy, on-site inspections, remote\r\nvideo monitoring, and sample analysis. The IAEA’s\r\nmonitoring powers are thus intrinsically tied to\r\nthe context of this issue: states agreed to a strong,\r\nlegally binding monitoring regime because they\r\ngained access to technologies that otherwise\r\nwould be unavailable.\r\nOrganizing an IO around information provision\r\ninvolves making tradeoffs between different\r\ngoals. If states are interested in reaching a\r\nshared understanding of a threat, then broad\r\nparticipation across both governments and nonstate\r\nactors will add legitimacy to the effort and\r\nmake the final outcome more impactful. But this\r\ntype of widespread information-gathering effort\r\nmay also slow progress on policy action, as it\r\nallows countries to deflect cooperation by saying\r\nthey are waiting for a final consensus. Informal\r\nstandards, on the other hand, can be established\r\nin a timelier fashion, and may incentivize quicker\r\npolicy action through monitoring. Yet this\r\napproach often works best with smaller groups\r\nof likeminded states and so policymakers will\r\nhave to work harder to achieve global legitimacy.\r\nAdditionally, if states anticipate that countries may\r\nbe unwilling to follow global standards, a robust\r\nand widespread monitoring apparatus will be\r\nessential to policy impact.\r\nGlobal Governance: Goals and Lessons for AI • The Building Blocks of Global Governance 41\r\nIntensifying the consequences\r\nof rule breaking\r\nA second objective of institutionalized\r\ncooperation is to intensify the consequences\r\nfor rule breakers. International politics has no\r\noverarching authority or global policeman, yet the\r\nexistence of IOs makes it more costly for countries\r\nto break agreements or violate established\r\nnorms. States incur varying degrees of reputation\r\ndamage for failing to follow through on their\r\ncommitments. As a result, IO monitoring reports\r\nthat highlight non-compliance can be a powerful\r\nway of incentivizing behavior change.\r\nBoth the financial governance institutions and\r\nICAO lean into these reputational mechanisms.\r\nSuch governance models are built around the\r\nassumption that states prefer to have positive\r\nreputations in these arenas and will therefore work\r\nto modify their behavior to avoid bad publicity.\r\nIn the case of financial governance, governments\r\nwant to attract private capital and cross-border\r\ninvestments, and therefore strong financial\r\nincentives exist to maintain a positive reputation.\r\nIn the case of ICAO, governments could face\r\nreputational fallout from both citizens and industry\r\nif they fall far below international standards.\r\nReputational mechanisms may reach into the\r\nrealm of outside enforcement. While most\r\nIOs lack formal enforcement powers, states\r\nsometimes step in to punish other countries\r\nthat fail to follow the rules. The ICAO case study\r\nprovides such an example. The United States\r\nand European Union have audit systems based\r\non ICAO standards and may restrict air travel\r\nto their jurisdictions if countries receive poor\r\nratings. Given the size of these economies,\r\nsuch ramifications can be extremely costly for a\r\ncountry’s airline industry.\r\nFATF takes this outside enforcement a step farther.\r\nSince 2010, the organization has maintained\r\n“black” and “grey” lists of countries that are\r\nfailing to comply with FATF standards. This list\r\nis publicized in triannual announcements, and\r\nalthough it is officially not coercive, it has market\r\nrepercussions. Banks in other countries typically\r\nsubject clients from listed countries to higher\r\ncosts and transaction delays, thus imposing direct\r\npenalties on the banking sector in listed countries.\r\nThis market enforcement process has been\r\nextremely effective in incentivizing countries to\r\nimprove their compliance with FATF standards.xi\r\nUnsurprisingly given the importance of its\r\nmandate, the IAEA has the strongest incentive\r\nstructure to encourage states to follow\r\ninternational rules. If IAEA inspectors detect noncompliance\r\nwith nuclear safeguards, the IAEA\r\ncan report a state to the UN Security Council. In\r\nFebruary 1993, for example, the IAEA Director\r\nGeneral referred North Korea to the UN Security\r\nCouncil after it failed to grant IAEA permission\r\nfor a special inspection.xii The Security Council\r\nthen called upon North Korea to comply with\r\nthe agreement but refrained from undertaking\r\nany significant punitive action until establishing\r\nsanctions in 1996.\r\nIn cases where some states are likely to ignore\r\ninternational standards or take actions that\r\nundermine global cooperation, an IO’s ability to\r\ncreate consequences for rule breaking is essential\r\nto institutional success. But the optimal system for\r\nincentivizing behavior is likely to vary. Reputation\r\ncan be a powerful mechanism when states share\r\nsimilar priorities, but it may fall short if interests\r\nsignificantly diverge. Outsourcing enforcement\r\nto other actors, whether they be states or\r\nmarkets, can be powerful, but it assumes that\r\nthese actors have clear incentives to punish noncompliant\r\nbehavior. Finally, creating a strong legal\r\nenforcement regime as exists in the IAEA example\r\nmay be an effective deterrent, but it is unlikely\r\nto override core security concerns, particularly\r\nwhen punishment requires widespread agreement\r\namong states.\r\nGlobal Governance: Goals and Lessons for AI • The Building Blocks of Global Governance 42\r\nLowering the costs of cooperation\r\nFinally, IOs may also be designed to lower the\r\ncosts of ongoing cooperation. Many cooperation\r\nproblems require ongoing engagement from\r\nstates. The creation of an IO, particularly one that\r\nmaintains regularly scheduled meetings where\r\ncountries are represented by the same delegates\r\nyear after year, allows countries to engage with\r\neach other in a more efficient manner. Even when\r\nthe full membership of an IO meets less frequently,\r\nIOs typically have subsidiary bodies like ICAO’s\r\n36-member Council that are tasked with the\r\nmore technical aspects of cooperation and adopt\r\nprocedures for routine meetings and discussions.\r\nFormal IOs may have secretariats that facilitate\r\nsuch processes, providing even basic services like\r\nthe UN’s Blue Book where diplomats can easily\r\nfind the contact information for their counterparts\r\nin other countries. Secretariats may also house\r\nexperts with specialized knowledge. The IAEA,\r\nfor example, not only monitors safeguards but\r\nalso assists developing countries with nuclear\r\ntechnology. Its technical cooperation program\r\nprovides transfer assistance, helps states identify\r\nenergy needs, and assists with radiation and\r\nnuclear safety. Such ongoing assistance is an\r\nimportant part of the nuclear bargain whereby\r\nstates are willing to submit to intrusive monitoring.\r\nLowering the costs of research and scientific\r\ncollaboration are also common benefits of IOs.\r\nCERN has been quite effective in this regard. The\r\nexistence of a shared space where scientists can\r\nconverge to focus on a narrow set of topics has\r\nled to significant advances in research, and the\r\nfacility has become a focal point for physicists\r\nfrom all over the world. The IPCC has also ensured\r\nongoing cross-country scientific exchange, both\r\nby convening IPCC panels and also by producing\r\nrigorously researched reports.\r\nAmong informal IOs, ongoing cooperation is\r\ntypically facilitated through transnational networks\r\nof bureaucrats.xiii Financial governance institutions\r\nlike the Basel Committee are staffed with regulators\r\n(typically central bankers and finance officials);\r\nFATF meetings are attended by finance officials,\r\ncentral bankers, foreign affairs, and sometimes\r\nlaw enforcement officials. In FATF’s case, these\r\nbureaucrats are also directly involved in evaluating\r\nthe policies of peer countries. The meetings and\r\nmonitoring processes build relationships and make\r\nit easier for these officials to engage with each\r\nother on relevant policies.\r\nEach IO promotes ongoing cooperation in\r\ndifferent ways, and each approach has its own\r\nstrengths and weaknesses. Concentrating\r\nknowledge in a secretariat can build expertise\r\nand provide direct points of contact for states\r\nseeking technical assistance, yet over time,\r\nIO bureaucrats may increasingly expand their\r\nauthority and operate in ways unanticipated by\r\nstates.xiv Scientific and research collaboration may\r\npromote great leaps forward in understanding\r\nand knowledge, yet states have no obligation\r\nto integrate such advances into their\r\ndecision-making or to act in response to such\r\ndevelopments. Finally, bureaucratic networks\r\nintensify policy investment in participating states,\r\nyet they may operate like clubs that concentrate\r\nknowledge in the hands of developed countries\r\nand exclude developing economies.\r\nGlobal Governance: Goals and Lessons for AI • The Building Blocks of Global Governance 43\r\nLessons for AI governance\r\nUnderstanding the history, politics, and operations\r\nof existing global governance regimes illuminates\r\nfive core lessons for AI. First, and most essential,\r\nany conversations around IO creation must start\r\nwith establishing clear objectives. A new IO for\r\nAI could create shared expectations around AI risks\r\nand develop clear conceptualizations of safety\r\nand security, or it could be more action-oriented,\r\nfocused on standard setting and incentivizing state\r\ncooperation. Each approach would necessitate\r\ndifferent design choices in terms of membership,\r\ngovernance, and operations. Governments must\r\nstart by asking themselves: what is the most urgent\r\ncooperation problem? If states do not agree that\r\nAI poses significant risks and need to build out\r\nbaseline knowledge before taking additional steps,\r\nthen perhaps the IPCC model is best. If risks are\r\nclear but can be circumvented by establishing\r\nbest practices, then a standard setting model like\r\nBasel might work. Finally, if some risks are clear but\r\nstates anticipate an unwillingness of some parties\r\nto follow established standards, an approach\r\nthat involves standard setting and outsourced\r\nenforcement, such as with FATF or ICAO, might be\r\nthe best way forward.\r\nSecond, political leadership will be paramount\r\nto achieving any action in a timely fashion.\r\nStronger IOs require more engagement from\r\npolitically powerful actors. The policy success of\r\nbodies like the IAEA, ICAO, and the FATF is directly\r\nlinked to support from countries like the United\r\nStates. The rapid pace of AI developments means\r\nthat countries need to act quickly, and rapid\r\npolicy response is most possible when powerful\r\ncountries are at the forefront of policy action.\r\nNotably, leadership on AI governance also has\r\nsignificant strategic advantages, as first movers\r\nwill have more influence. It is easier to shape\r\nincipient norms than to disrupt established ones.\r\nAny early IOs in this area will have prolonged\r\neffects on the evolution of AI global governance.\r\nThird, while early governance endeavors set\r\nthe tone for future cooperation, they should\r\nnot be viewed as final products. Most IOs\r\ndeepen their authority and expand their mandates\r\nacross time. Even formal IOs like the IAEA have\r\nadopted new agreements to address gaps in\r\nmonitoring and enforcement. In the IAEA’s case,\r\nit has also expanded its governance to include\r\nnuclear safety and security. Mandate expansion\r\nis particularly common in informal IOs like Basel\r\nand FATF. States should not aim to create a full\r\ncooperative agreement regulating all aspects of\r\nAI, particularly given the rapidly changing nature\r\nof the threat. Instead, incremental cooperation\r\nmay be the best path forward. Focusing on topics\r\nthat have broad geopolitical consensus, such\r\nas preventing the use of AI for the creation of\r\nbiological weapons, may be one path forward;\r\npolicymakers may want to delay negotiations\r\non more controversial subjects, such as AI and\r\nmilitary technology.\r\nEven amid disagreement about core principles,\r\ncommon ground is still possible if cooperation is\r\noriented around practical applications. Cooperation\r\nto combat terrorism is one notable such example.\r\nCountries have negotiated 13 international\r\nconventions and protocols related to preventing\r\nspecific types of terrorism, yet no consensus\r\ndefinition exists around the term “terrorism”.\r\nIndeed, after 9/11, the United Nations Security\r\nCouncil adopted a far-reaching resolution requiring\r\nstates to take legislative action on terrorism\r\nwithout ever specifying the definition of the\r\nterm. In contrast to the Council’s quick response,\r\ncountries have been negotiating a comprehensive\r\nterrorism convention for more than 20 years\r\nthrough the General Assembly and have yet to\r\nreach consensus. If member states had waited for a\r\nshared definition of “terrorism,” policy action would\r\nhave been significantly delayed.\r\nGlobal Governance: Goals and Lessons for AI • The Building Blocks of Global Governance 44\r\nFourth, formal legal authority does not equate\r\nto strong policy impact, just as informal status\r\ndoes not mean an IO is ineffectual. States have\r\nincreasingly turned to informal governance in\r\nrecent years because it is adaptable and effective\r\nin many policy domains. The financial governance\r\ninstitutions in this report have had significant\r\nimpacts on regulatory policy and the day-to-day\r\npractices of global banks. In the FATF case, the\r\norganization has diffused its recommendations\r\nacross 200 economies, despite lacking any legal\r\nstatus. And while this report highlights informal IOs\r\nin the financial space, this mode of cooperation is\r\nmost common in the security realm.xv\r\nThe distinction between formal and informal IOs\r\nalso does not equate to enforcement. An IO may\r\nofficially have a strong legal enforcement regime,\r\nbut the existence of such a mechanism does not\r\nmean that states are willing to use it. The UN\r\nSecurity Council has the ability to authorize the\r\nuse of force—the strongest possible enforcement\r\nof international law that exists in international\r\npolitics—yet the Council rarely deploys this\r\npunishment, even amid significant rule violations.\r\nIn contrast, both FATF and ICAO have relied\r\non external actors like the private sector and\r\nindividual governments to enforce compliance\r\nwith their standards.\r\nFinally, any new cooperative efforts on AI will\r\nneed to be integrated into the existing global\r\ngovernance infrastructure. More than 400\r\nformal and informal IOs exist today. Within each\r\nissue area, a host of different IOs coordinate and\r\ncompete over policy influence. Even though AI is\r\na new issue area, new IOs will bump up against\r\nother policy domains. AI global governance could\r\ntouch on security, development, climate change,\r\nand human rights. Strategic policymakers may be\r\nable to leverage the existence of longstanding\r\ninstitutions to reinforce AI governance efforts,\r\nusing bodies like the Security Council and the\r\nGeneral Assembly to endorse new standards.\r\nBut to the extent AI governance touches on other\r\npolicy domains, governments should anticipate\r\ncalls for inclusion and potential pushback from\r\nexisting IOs and relevant actors.\r\nConclusion\r\nThe world is at a pivotal moment when it comes\r\nto AI. This technology will transform modern\r\nsociety in a myriad of ways, and policymakers\r\nhave a unique opportunity to shape this\r\ntransformation. Global governance initiatives\r\nare already in incipient stages; now is the time\r\nto make crucial decisions about core objectives.\r\nIOs are designed to solve specific cooperation\r\nproblems, and therefore all institutional design\r\nproposals should be contingent upon first\r\nidentifying top priorities. Importantly, global\r\ngovernance can proceed on several fronts at\r\nonce. It is possible to create one body to assess\r\noverall risks, another to set standards and\r\naddress core security threats, and still another to\r\npromote technology transfer. Yet the most urgent\r\npriorities are to identify common objectives with\r\nlikeminded partners and begin to build out a\r\nmultilateral framework. What starts as a small AI\r\nagreement may rapidly expand to become a core\r\nfeature of 21st century global governance.\r\nGlobal Governance: Goals and Lessons for AI • The Building Blocks of Global Governance 45\r\ni. David Brazier, “The Atlantic Charter: Revitalizing the Spirit of the Founding United Nations Over Seventy Years Past,” United Nations\r\nChronicle, https://www.un.org/en/chronicle/article/atlantic-charter-revitalizing-spirit-founding-united-nations-over-seventy-years-past.\r\nii. Notably, Roosevelt also understood the need for domestic political buy in. To build up political support for this new institution, he brought\r\nboth administration and elected officials in negotiations, and sought to ensure Congress was supportive throughout the endeavor. For more\r\non this point, see “The United States and the Founding of the United Nations, August 1941 – October 1945,” Office of the Historian, Bureau of\r\nPublic Affairs, US Department of State, https://2001-2009.state.gov/r/pa/ho/pubs/fs/55407.htm.\r\niii. IO numbers drawn from Jon C.W. Pevehouse, Timothy Nordstron, Roseanne W McManus, and Anne Spencer Jamison, “Tracking\r\nOrganizations in the World: The Correlates of War IGO Version 3.0 datasets.” Journal of Peace Research 57, no. 3 (2020).\r\niv. For a discussion of US leadership and Western commitment to a rule-based order, see G. John Ikenberry. 2019. After Victory: Institutions,\r\nStrategic Restraint, & The Rebuilding of Order After Major Wars (New Edition). Princeton: Princeton University Press, and G. John Ikenberry.\r\n2009. “Liberal Internationalism 3.0: America and the Dilemmas of Liberal World.” Perspectives of Politics 7(1): 71-87. On the rise of global\r\nnorms, see G. John Ikenberry, After Victory: Institutions, Strategic Restraint, & The Rebuilding of Order After Major Wars (New Edition), (Princeton:\r\nPrinceton University Press, 2019) and G. John Ikenberry, “Liberal Internationalism 3.0: America and the Dilemmas of Liberal World,”\r\nPerspectives of Politics 7, no. 1 (2009): 71-87. On the rise of global norms, see Michael Barnett and Martha Finnemore, Rules for the World:\r\nInternational Organizations and Global Politics, (New York: Cornell University Press, 2004). On the link between the number of countries and\r\nthe number of IOs, see Pevehouse, Nordstrom, McManus, and Jamison, “Tracking Organizations in the World.”\r\nv. Informal IO numbers drawn from Felicity Vabulas and Duncan Snidal, “Cooperation under autonomy: Building and analysing the Informal\r\nIntergovernmental Organizations 2.0,” Journal of Peace Research 54, no. 4 (2021): 859-869.\r\nvi. Julia C. Morse, The Bankers’ Blacklist: Unofficial Market Enforcement and the Global Fight Against Illicit Financing. (New York: Cornell University\r\nPress, 2022).\r\nvii. See Tyler Pratt, “Deference and Hierarchy in International Regime Complexes,” International Organization 72,3 (2018): 561-590 for work on\r\npatterns of institutional deference within regime complexes.\r\nviii. Julia C. Morse and Robert O. Keohane, “Contested Multilateralism.” The Review of International Organizations 9 (2014): 385-412.\r\nix. In Keohane’s book After Hegemony, he describes these three advantages as reducing information asymmetries, establishing legal liability,\r\nand reducing transaction costs. I have modified these terms to make them more accessible to a general audience. For a more detailed\r\ndescription, see Robert O. Keohane, After Hegemony. (Princeton, NJ: Princeton University Press, 1984).\r\nx. http://www.g20.utoronto.ca/2008/2008declaration1115.html\r\nxi. Morse, The Banker’s Blacklist.\r\nxii. “Factsheet on DPRK Nuclear Safeguards,” IAEA, https://www.iaea.org/newscenter/focus/dprk/fact-sheet-on-dprk-nuclear-safeguards.\r\nxiii. Anne Marie Slaughter, A New World Order. (Princeton, NJ: Princeton University Press, 2005).\r\nxiv. Barnett and Finnemore, Rules for the World.\r\nxv. Vabulas and Snidal, “Cooperation under autonomy.”\r\n3Institutional Analogies for Governing\r\nAI Globally\r\nGlobal Governance: Goals and Lessons for AI • Institutional Analogies for Governing AI Globally 47\r\nBuilding on the comparative exploration offered in the previous\r\nchapter, we delve more deeply into the emergence, evolution, and\r\nfunctions of institutions and governance systems that offer analogies\r\nand lessons for international AI governance, including:\r\n• The International Civil Aviation Organization (ICAO);\r\n• The European Organization for Nuclear Research (CERN);\r\n• The International Atomic Energy Agency (IAEA);\r\n• The Intergovernmental Panel on Climate Change (IPCC); and\r\n• The Financial Action Task Force (FATF), Basel, and the Financial\r\nStability Board (FSB).\r\nGlobal Governance: Goals and Lessons for AI • Institutional Analogies for Governing AI Globally 48\r\n3.1\r\nThe International\r\nCivil Aviation\r\nOrganization\r\n(ICAO)\r\nAuthored by David Heffernan\r\nand Rachel Schwartz\r\nPurpose\r\nInternational commercial air transport is a\r\ncomplex and constantly evolving industry, the\r\nsuccess and vitality of which are attributable in\r\nsignificant part to the role of the International Civil\r\nAviation Organization (ICAO), a United Nations\r\n(UN) body. The complex and high-stakes nature\r\nof safely moving people and goods around the\r\nworld requires a robust international governance\r\nsystem that provides legal and operational\r\nstability and predictability. Since its conception,\r\nICAO has served the civil aviation sector as the\r\nindustry’s global standard-setting agency and\r\nfacilitator of cooperation among nations in\r\nfurtherance of a coordinated approach to the\r\nfundamental issue of air safety.\r\nHistory\r\nThe Chicago Convention\r\nICAO is the product of an extraordinary World\r\nWar II era initiative that led to the signing of\r\nthe Chicago Convention, an international treaty\r\ngoverning civil aviation. In September 1944,\r\n52 nations represented by over 950 delegates\r\nconvened in Chicago to negotiate the scope\r\nand terms of such a treaty. The conference’s\r\npurpose was to “make arrangements for the\r\nimmediate establishment of provisional world\r\nair routes and services” and “to set up an\r\ninterim council to collect, record and study\r\ndata concerning international aviation and to\r\nmake recommendations for its improvement.”i\r\nOn December 7, 1944, the Chicago Convention\r\nwas signed and opened for ratification Member\r\nStates. Today, 193 nations are Member States of\r\nthe Convention.ii\r\nGlobal Governance: Goals and Lessons for AI • Institutional Analogies for Governing AI Globally 49\r\nThe Chicago Convention specifically envisioned\r\nan immediate post-war era in which civil aviation\r\nwould play an essential role in forging a new global\r\neconomic and trade order, including between\r\nnations formerly at war. It was the essence of that\r\ntransition from devastating war to a peaceful and\r\nprosperous future that weapons of war (aircraft)\r\ncould be repurposed for the movement of people\r\nand goods around the world based on an orderly,\r\nglobally accepted system of rules, reciprocal\r\nrecognition, and mutual accommodations among\r\nnations. As the Convention’s preamble states: “the\r\nfuture development of international civil aviation\r\ncan greatly help to create and preserve friendship\r\nand understanding among nations…to avoid friction\r\nand to promote the cooperation between nations…\r\nupon which the peace of the world depends.”iii\r\nThe Chicago Convention covers a wide range of\r\ntopics, including the sovereignty of States over\r\ntheir own airspace and the rights of aircraft of\r\none State to overfly the territory of other States,\r\nto make technical stops in other States, and to\r\ntake on and discharge passengers and cargo on\r\na charter basis at airports in other States. The\r\nConvention also addresses regulation of aircraft\r\nby nationality (the State in which it is registered),\r\nair navigation, licensing and certification of aircraft\r\nand crew, the development of safety standards\r\nand practices, and the settlement of disputes\r\nbetween States.\r\nICAO\r\nThe Chicago Convention established ICAO as an\r\ninternational governing body for civil aviation.\r\nICAO’s main functions include (i) developing and\r\nrevising matter-specific Annexes to the Convention\r\nthat establish Standards and Recommended\r\nPractices (SARPs) for aviation safety and security, (ii)\r\naddressing issues of access to airspace and airports\r\nin other countries, (iii) serving as a clearinghouse\r\nfor cooperation and discussion on civil aviation\r\nissues, and (iv) providing a forum and procedures\r\nfor resolution of disputes between States.\r\nEvolution\r\nOver time, ICAO has sought to implement the\r\nChicago Convention’s commitment to create a\r\nunified post-war era civil aviation sector, with a\r\nprimary focus on aviation safety and security. As\r\ndescribed below, ICAO has had important successes\r\nbut has also struggled with significant challenges.\r\nICAO’s main achievements\r\nOver the past nearly 80 years, ICAO has proven\r\nits durability. Its greatest successes have been\r\nin aviation safety. ICAO’s status as a UN body\r\nunderscores its authority to bring Member\r\nStates together to address often-complex safety\r\nproblems. ICAO has developed a modus operandi\r\nwhereby Member States can participate at a high\r\nlevel in initially establishing policy objectives\r\nand ultimately approving specific measures for\r\nglobal implementation, while leaving the technical\r\n“sausage making” of SARP developments to\r\nindustry experts who work on the details in a\r\nless politicized (but never entirely apolitical)\r\nenvironment. ICAO’s workings are relatively\r\ntransparent and based on cooperation among\r\nMember States, all of whom have a vested interest\r\nin global aviation safety and the relatively free\r\nmovement of aircraft.\r\nThe following are examples of SARPs that\r\nMember States have implemented:\r\n• The establishment of standards for an airborne\r\ntraffic alert and collision avoidance system that\r\ninterrogates air traffic control transponders in\r\nnearby aircraft and uses computer processing\r\nto identify and display potential and predicted\r\ncollision threats (i.e., the automated system\r\nthat alerts a pilot in flight to “pull up” in\r\nresponse to a risk of collision).iv\r\n• The development of standards for Flight\r\nData Recorders (FDRs), which provide critical\r\ninformation for investigators in understanding\r\nwhy an aircraft crash may have occurred.v\r\nGlobal Governance: Goals and Lessons for AI • Institutional Analogies for Governing AI Globally 50\r\nMember States, which often cooperate\r\non accident investigations, have a strong\r\ncommon interest in the gathering and\r\npreservation of FDR data in the event of an\r\naccident, so the establishment of uniform\r\nFDR standards continues to be of great\r\nimportance for ICAO.\r\n• The creation of principles and instructions\r\ngoverning the international transport of\r\ndangerous goods by air, such as the now\r\nubiquitous transport of highly flammable\r\nlithium batteries onboard civil aircraft.vi\r\n• The creation of the Safety Management\r\nSystem (SMS)/State Safety Program (SSP),vii\r\nwhich set forth comprehensive, systematic,\r\nand cohesive approaches to managing safety\r\n(i.e., structures, accountabilities, policies, and\r\nprocedures). The FAA and other Member\r\nState regulators now require SMS compliance\r\nfor all large commercial air carriers.\r\n• The development of aircraft noise standards,\r\nwhich provide maximums for the noise\r\nlevels that civil aviation aircraft may emit.\r\nThese standards have been adopted by the\r\nFAA for the new type certification of jet and\r\nturboprop aircraft.viii\r\nICAO’s main challenges\r\nThe challenges ICAO faces include the inherently\r\npolitical nature of governance, deliberation,\r\nand compromise among 193 nations. Because\r\nICAO lacks enforcement authority, it relies on\r\nMember States to comply with the technical\r\nguidelines it produces. In practice, enforcement\r\noccurs bilaterally and multilaterally between\r\nand among Member States. ICAO’s processes\r\ncan be hamstrung by bureaucracy as well as\r\nintergovernmental politics. This impedes ICAO’s\r\nability to respond more nimbly and effectively\r\nto urgent aviation safety problems. For example,\r\nit falls to individual Member States to “ground”\r\naircraft in response to safety problems (e.g., the\r\nBoeing 737 MAX)ix or impose specific retaliatory\r\nor restrictive measures on a Member State (e.g.,\r\nthe response to Russia’s invasion of Ukraine).x\r\nICAO also has struggled (but arguably has achieved\r\nsome success based on international compromise)\r\nto develop a global approach to commercial\r\naircraft emissions, which account for about 2.5%\r\nof global carbon emissions. After the EU grew\r\nimpatient with the pace of progress to address\r\nthe issue at ICAO, it developed its own initiative,\r\nan Emissions Trading Scheme (ETS), that would\r\napply to aircraft of non-EU Member States.xi ICAO’s\r\ncompromise, the so-called Carbon Offsetting\r\nand Reduction Scheme for International Aviation\r\n(CORSIA) provides for a multi-year, phased\r\nprocess for Member States to meet certain limits\r\non aircraft carbon dioxide emissions, culminating\r\nin net-zero emissions by 2050.xii\r\nCORSIA remains controversial, however, with the\r\nEU threatening to reinstate the ETS if CORSIA is\r\nnot implemented on schedule.xiii China and Russia,\r\nby contrast, have refused to commit to participate\r\nin Phase One of CORSIA (which will run through\r\n2026 and for which participation is voluntary),\r\nwhile maintaining that they will participate in\r\nPhase Two (which will begin in 2027 and for which\r\nparticipation will be mandatory).xiv China and Russia\r\nargue that a requirement to meet certain targets\r\nwithin CORSIA’s timeframes would unfairly penalize\r\ndeveloping countries.xv China’s refusal to fully\r\nparticipate in CORSIA could make it more difficult\r\nto ensure the participation of other countries.\r\nWhile ICAO has ultimately achieved an effective\r\nrole in safety regulation, it lacks a similar role in\r\nthe areas of economic/trade and security relations\r\namong nations relating to air transportation.\r\nNations generally negotiate bilaterally to\r\nexchange scheduled air service “traffic rights,”\r\nwhich has produced a system that lacks uniformity\r\nand arguably is excessively protectionist (e.g.,\r\nthe airline industry remains subject to varying\r\nrestrictions on foreign or cross-border ownership,\r\nwhich do not apply to most other global\r\nindustries). Nations have also adopted a more\r\nunilateral approach to aviation security, with the\r\nevents of September 11, 2001, having accelerated\r\nthat trend.\r\nGlobal Governance: Goals and Lessons for AI • Institutional Analogies for Governing AI Globally 51\r\nFor example, the United States has established\r\nits own specific requirements for passenger and\r\ncargo security screening. If an airline of a foreign\r\ncountry that is also a Member State wishes to fly\r\npassengers to the United States, it must gather and\r\ntransmit specific passenger data to US authorities\r\nin advance of the flight and submit the aircraft\r\nand its passengers to US screening requirements.\r\nIf a foreign airline or its government refuses to\r\ncomply, the United States may refuse entry to that\r\nairline—regardless of the Convention’s provisions\r\non providing access to airspace and airports. Other\r\nMember States have established their own security\r\nscreening and entry requirements.\r\nGovernance\r\nICAO’s governance structure\r\nICAO has three main bodies that serve to carry\r\nout its mission and purpose: the Assembly, the\r\nCouncil, and the Secretariat.\r\n• The Assembly is ICAO’s supreme body and\r\nis composed of delegations from ICAO’s 193\r\nMember States. The Assembly meets every\r\nthree years to set ICAO’s agenda, vote on\r\nmajor policy initiatives, and elect Member\r\nState representatives to the Council. Industry\r\nand civil society groups, along with various\r\nregional and international organizations, also\r\nparticipate in these events in their capacity\r\nas “Invited Organizations.”\r\n• The Council is ICAO’s governing body,\r\ncomprising of representatives from 36\r\nMember States appointed by the Assembly\r\nto serve three-year terms. After the Assembly\r\napproves a policy initiative, the Council\r\nconvenes expert panels and working groups\r\nto develop a SARP. These industry experts\r\nmay be recommended by Member States\r\nbut do not represent the interests of any\r\nparticular State; rather, they provide objective\r\ntechnical expertise and recommendations on\r\nhow best to address a particular safety issue.\r\nAny new SARP recommended by an expert\r\npanel is subject to review by the Secretariat\r\n(see below) and approval by the Council and\r\nultimately the Member States through the\r\nAssembly. In recent years, the Council also has\r\ndeveloped aircraft CO2 emissions reductions\r\nmeasures, at the request of the Assembly.\r\n• The Secretariat is ICAO’s professionally\r\nstaffed executive body. It is led ICAO’s\r\nSecretary General and is responsible for\r\nmanaging ICAO’s day-to-day operations.\r\nSARPs\r\nSARPs are the primary tool for implementation of\r\nICAO-approved safety standards and practices.\r\n“Standards” are presumptively mandatory:\r\nspecifications “the uniform application of which\r\nis recognized as necessary for the safety or\r\nregularity of international air navigation and to\r\nwhich…States will conform in accordance with\r\nthe Convention.”xvi “Recommended practices,”\r\nmeanwhile, are hortatory: specifications “the\r\nuniform application of which is recognized as\r\ndesirable in the interest of safety, regularity or\r\nefficiency of international air navigation, and to\r\nwhich…States should endeavor to conform in\r\naccordance with the Convention.”xvii\r\nSARPs may address the full range of subjects\r\ncovered by the ICAO Annexes, including pilot and\r\ncrew licensing, rules of the air, meteorological\r\nservices, air navigation and air traffic control\r\nservices, safety management, aircraft operations,\r\naircraft airworthiness, aircraft nationality and\r\nregistration, search and rescue, accident and\r\nincident investigation, airport regulation, the\r\ntransport of dangerous goods by air, and\r\nenvironmental protection and security issues.\r\nThe ICAO Council, which meets three times\r\nannually, may propose a safety issue for review.\r\n(Such a proposal may also originate in the ICAO\r\nAssembly, which may direct the proposal to the\r\nCouncil.) The Council then refers a proposal\r\nto ICAO’s Air Navigation Commission (ANC).\r\nGlobal Governance: Goals and Lessons for AI • Institutional Analogies for Governing AI Globally 52\r\nThe ANC is comprised of 19 members who are\r\nnominated by Member States and appointed\r\nby the Council. The ANC has 17 technical panels\r\nwith specific subject-matter expertise (e.g., safety\r\nmanagement, remotely piloted aircraft systems,\r\ndangerous goods). The relevant ANC technical\r\npanel will then conduct research as a basis for\r\npotentially drafting a SARP for the ANC’s review. If\r\nthe ANC decides that the SARP is warranted, the\r\nANC will finalize the SARP, consulting informally\r\nwith the Secretariat (while the Secretariat’s approval\r\nof a SARP is not required, the Secretariat provides\r\ntechnical, legal, and administrative support). The\r\nANC then submits the proposed SARP to the\r\nCouncil where adoption requires the approval of\r\ntwo-thirds of the Council’s members. Thereafter,\r\nthe SARP is distributed to the Member States,\r\nwhich have three months in which to approve or\r\ndisapprove the SARP.\r\nUnless a majority of Member States register their\r\ndisapproval, the SARP becomes effective four\r\nmonths after its adoption by the Council. Member\r\nStates may lodge “differences” with ICAO (i.e., the\r\nintention of a Member State to deviate from some\r\naspect of the SARP), however, practically speaking\r\na Member State that has notified a difference is\r\nmotivated to eventually harmonize its national\r\nregulations, as one State’s failure to conform to\r\na particular standard may form a basis for other\r\nStates to eventually withhold approvals for the\r\nnon-conforming State’s aircraft operators. After\r\nICAO adopts a SARP, Member States are charged\r\nwith implementing it into their national laws and\r\nregulations. This process varies from State to\r\nState. In the United States, the FAA (or another\r\nfederal agency, as may be applicable) generally\r\nincorporates SARPs directly into its regulations. For\r\nexample, after ICAO adopted a SARP regarding\r\naircraft engine emissions, the Environmental\r\nProtection Agency (EPA), which regulates engine\r\nemissions, conducted a rulemaking to incorporate\r\nthe SARP into its regulations. US legal and policy\r\nrequirements pertaining to agency rulemaking\r\n(e.g., public notice and comment requirements)\r\nmay delay full US implementation of a SARP.\r\nMember States also pursue uniformity of SARP\r\nadoption and implementation via bilateral\r\nand multilateral (e.g., regional) aviation safety\r\nagreements.\r\nBroader global governance landscape\r\nBilateral aviation safety agreements\r\nThe United States and other Member States\r\nhave entered into bilateral aviation safety\r\nagreements (BASAs) in an effort to achieve: 1)\r\nbroader compliance with ICAO Annexes and\r\nSARPs; and 2) as a related matter, a greater\r\ndegree of consistency between the safety\r\nregulations of Member States. BASAs provide\r\nfor bilateral cooperation in a wide variety\r\nof safety areas, including aircraft and crew\r\nlicensing, air navigation, aircraft maintenance,\r\nand flight operations. BASAs often reference and\r\nincorporate SARPs or, more generally, adherence\r\nto ICAO standards. The United States and other\r\nMember States use BASAs as a way to harmonize\r\ntheir respective safety regulatory frameworks. In\r\nsome cases, such as between the United States\r\nand the European Union, each Party may defer\r\nto the other’s licensing, compliance, and other\r\nsafety determinations. As Article 5 of the USEU\r\nBASA states: “[T]he Parties agree that each\r\nParty’s civil aviation standards, rules, practices\r\nand procedures are sufficiently compatible to\r\npermit reciprocal acceptance of approvals and\r\nfindings of compliance…”xviii\r\nDispute resolution\r\nUnder chapter XVIII of the Chicago Convention,\r\nthe ICAO Council provides a forum for the\r\nresolution of disputes between Member States\r\nrelating to the interpretation or application of\r\nthe Convention and its Annexes. In practice,\r\nhowever, such disputes are rarely brought to\r\nICAO and are even more rarely adjudicated.\r\nGlobal Governance: Goals and Lessons for AI • Institutional Analogies for Governing AI Globally 53\r\nThis is because bilateral air transport agreements\r\nbetween Member States generally include\r\nrights and procedures both informal (e.g.,\r\nintergovernmental consultations) and formal\r\n(e.g., arbitration) that offer a more direct and\r\nefficient path to dispute resolution.\r\nUnder ICAO dispute resolution procedures,\r\nMember States must first attempt to resolve a\r\ndispute by direct negotiation. Only after failed\r\nnegotiations may a Member State seek resolution\r\nby a decision of the ICAO Council. A Member\r\nState may appeal the Council’s decision to an ad\r\nhoc arbitral tribunal or the Permanent Court of\r\nInternational Justice. The ICAO dispute resolution\r\nprocess is protracted and slow moving. In most\r\ncases, Member States resolve a dispute before\r\nthe Council renders a decision, but in some cases\r\na Member State may submit a dispute to ICAO in\r\nan effort to apply additional pressure on another\r\nMember State to resolve the matter.\r\nICAO does not have direct authority to impose\r\nsanctions regarding the specific subject matter of\r\na dispute, but individual Member States may use\r\na Council decision as a basis for refusing access to\r\nits airspace or territory. The ICAO Assembly may\r\nsuspend the voting rights of a Member State in\r\nthe Assembly following a Council decision that\r\nthe Member State is in “default” of its obligations\r\nunder the Convention.\r\nCompliance and enforcement\r\nICAO does not directly enforce SARPs; rather, it\r\nfalls to Member States, individually and via bilateral\r\nand multilateral agreements, to ensure compliance.\r\nICAO, however, plays a role in “assisting” Member\r\nStates to comply with ICAO’s Annexes and SARPs,\r\nincluding by conducting safety audits of Member\r\nStates. ICAO’s auditors examine Member States’\r\nlegislation and regulations for compliance with ICAO\r\nAnnexes and SARPs. ICAO’s audit reports, which are\r\npublished on ICAO’s website, identify any significant\r\nsafety concerns. ICAO does not conduct audits of\r\nairlines or airports; such regulation falls to the civil\r\naviation authorities of individual Member States.\r\nAlthough ICAO does not have authority to\r\nenforce compliance with its Annexes and SARPs,\r\nMember States may use information and findings\r\ncontained in ICAO audit reports to improve their\r\nsafety oversight regimes. Some Member States\r\nalso audit other states’ compliance with ICAO\r\nstandards and impose restrictions on access to\r\nnational airports and air service markets based on\r\na finding of deficient compliance. The United States\r\nand the EU have adopted different approaches to\r\nauditing Member States’ compliance with ICAO\r\nstandards. The FAA has established an International\r\nAviation Safety Assessment (IASA) program\r\nunder which it audits and then assigns ratings to\r\nother Member States, either a Category 1 rating\r\n(complies with ICAO standards) or Category 2\r\nrating (non-compliant). The EU, by contrast, asks\r\ncountries to audit themselves to confirm their\r\ncompliance with ICAO standards. The EU maintains\r\na blacklist of airlines determined to have serious\r\nsafety deficiencies, prohibiting those airlines from\r\noperating to or within the EU.\r\nThe FAA’s IASA program’s audits and country\r\nratings have a significant impact on international\r\ncommercial air transportation because the United\r\nStates is the world’s largest air service market.\r\nFor example, in May 2021, the FAA downgraded\r\nMexico from a Category 1 to Category 2 rating\r\nfollowing an FAA audit finding that Mexico\r\nwas not in compliance with ICAO standards.\r\nConsequently, the FAA prohibited Mexican\r\nairlines from introducing new services to the\r\nUnited States or engaging in codesharing with US\r\nairlines, where a US airline would sell tickets for\r\ntravel on a Mexican airline under the US airline’s\r\ntwo-letter code. The FAA allowed Mexican airlines\r\nto continue operating services to/from the United\r\nStates that were already in place at the time of\r\nthe downgrade. In September 2023, the FAA\r\nrestored Mexico to Category 1 status. In doing so,\r\nthe FAA noted that “[w]ith a return to Category\r\n1 status, [Mexican airlines] can add new service\r\nand routes to the US, and US airlines can resume\r\nmarketing and selling tickets with their names and\r\ndesignator codes on Mexican-operated flights.”xix\r\nGlobal Governance: Goals and Lessons for AI • Institutional Analogies for Governing AI Globally 54\r\nThe FAA, in announcing the restoration of\r\nMexico’s Category 1 rating, emphasized how\r\nthe FAA had made its “expertise and resources”\r\navailable to provide “technical assistance” to\r\nenable Mexico’s civil aviation authority to achieve\r\ncompliance with ICAO standards.\r\nConclusion\r\nTo paraphrase Winston Churchill, ICAO, like\r\ndemocracy, is the worst possible governance\r\nsystem—except for all of the alternatives. Although\r\nimperfect and limited, particularly in non-safety\r\nareas, the ICAO regulatory scheme enabled the\r\npost-World War II development of a global air\r\ntransport industry in which weapons of war (aircraft)\r\nwere converted into vehicles for the safe global\r\nmovement of people and goods, for the greater\r\neconomic and social benefit of the world.\r\nIn some respects, ICAO’s greatest success is its\r\nendurance. It has survived for nearly 80 years\r\nand there is no discussion about replacing or\r\nabandoning it. ICAO will likely endure and continue\r\nto provide leadership in the essential area of aviation\r\nsafety for the foreseeable future. In other areas,\r\nhowever, nations are likely to forge ahead based\r\non unilateral action (e.g., security) or initiatives\r\nthat are the product of regional coordination or\r\nunderstandings between nations (e.g., the exchange\r\nof air traffic rights and the related issue of rules\r\ngoverning the ownership and control of airlines).\r\nThe environment may prove to be a bellwether\r\nof ICAO’s future. While ICAO has touted CORSIA\r\nas “the first time that a single industry sector has\r\nagreed to a global market-based measure in the\r\nclimate change field,” it represents an uneasy\r\ncompromise between nations that want to move\r\nmore quickly or slowly to address aircraft emissions.\r\nIf that compromise does not hold on what has\r\nbecome one of the most challenging points of\r\ncontroversy in international aviation, Member States\r\nmay revert to unilateral approaches, which in turn\r\ncould undermine ICAO’s authority and effectiveness\r\nas an aviation safety regulator.xx\r\nGlobal Governance: Goals and Lessons for AI • Institutional Analogies for Governing AI Globally 55\r\ni. See ICAO, ”Chicago Conference Introduction,” International Civil Aviation Organization, https://www.icao.int/ChicagoConference/Pages/\r\nchicago-conference-introduction.\r\nii. See “About ICAO,” International Civil Aviation Organization, https://www.icao.int/about-icao/Pages/default.aspx.\r\niii. Convention on International Civil Aviation, ”Preamble,” International Civil Aviation Organization, https://www.icao.int/publications/\r\ndocuments/7300_orig.pdf.\r\niv. See FAA Advisory Circular No. 20-151C.\r\nv. See 81 Fed. Reg. 96,572 (Dec. 30, 2016).\r\nvi. See “Department of Transportation Pipeline of Hazardous Materials Safety Administration,” ICAO, https://www.phmsa.dot.gov/internationalprogram/\r\ninternational-civil-aviation-organization.\r\nvii. See “Safety Management System Frequently Asked Questions,” Federal Aviation Administration, https://www.faa.gov/about/initiatives/sms/faq.\r\nviii. See “Details on FAA Noise Levels, Stages, and Phaseouts,” Federal Aviation Administration, https://www.faa.gov/about/office_org/\r\nheadquarters_ offices/apl/noise_emissions/airport_aircraft_noise_issues/levels.\r\nix. Ken German, “2 Years After Being Grounded, the Boeing 737 Max is Flying Again,” CNET, June 19, 2021, https://www.cnet.com/tech/techindustry/\r\nboeing-737-max-8-all-about-the-aircraft-flight-ban-and-investigations/.\r\nx. Allison Lampert, “Russia Loses U.N. Aviation Council Seat in Rebuke,” Reuters, October 1, 2022, https://www.reuters.com/world/europe/\r\nrussia-notre-elected-un-aviation-agencys-36-member-council-2022-10-01/.\r\nxi. “Reducing Emissions From Aviation,” European Commission, https://climate.ec.europa.eu/eu-action/transport/reducing-emissions-aviation_en.\r\nxii. “CORSIA Fact Sheet,” International Air Transport Association, https://www.iata.org/en/iata-repository/pressroom/fact-sheets/fact-sheet---\r\ncorsia/.\r\nxiii. Rafael Schvartzman, “EU ETS Reform Destabilizes International Consensus for Aviation Carbon Reductions,” International Air Transport\r\nAssociation, April 18, 2023, https://www.iata.org/en/about/worldwide/europe/blog/eu-ets-reform-destabilizes-international-consensus-foraviation-\r\ncarbon-reductions/.\r\nxiv. Allison Martell and Allison Lampert, “China Denounces U.N. Aviation Emissions Plan in Blow to Industry Efforts,” Reuters, September 24,\r\n2019, https://www.reuters.com/article/us-un-aviation-china/china-denounces-u-n-aviation-emissions-plan-in-blow-to-industry-effortsidUSKBN1W938W.\r\nxv. Id.\r\nxvi. “ICAO Annex Forward: SARPs Definition and Actions,” International Civil Aviation Organization, https://www.icao.int/Meetings/AMC/MA/\r\nEleventh%20Air%20Navigation%20Conference%20(ANConf11)/anconf11_wp142_app_en.pdf.\r\nxvii. Id.\r\nxviii. “U.S. – European Union Safety Agreement,” Federal Aviation Administration, https://www.faa.gov/aircraft/air_cert/international/bilateral_\r\nagreements/eu.\r\nxix. “Federal Aviation Administration Returns Mexico to Highest Aviation Safety Status,” Federal Aviation Administration, September 14, 2023,\r\nhttps://www.faa.gov/newsroom/federal-aviation-administration-returns-mexico-highest-aviation-safety-status.\r\nxx. Id.\r\nGlobal Governance: Goals and Lessons for AI • Institutional Analogies for Governing AI Globally 56\r\n3.2\r\nThe European\r\nOrganization for\r\nNuclear Research\r\n(CERN)\r\nAuthored by Professor Sir\r\nChristopher Llewellyn Smith\r\nPurpose\r\nAn organization with 23 Member States (22\r\nEuropean and Israel), CERN seeks to advance\r\nthe boundaries of human knowledge through\r\nresearch in particle physics.i Originally an acronym\r\nfor Conseil Européen pour la Recherche Nucléaire,\r\nCERN now styles itself the European Laboratory\r\nfor Particle Physics. CERN constructs and operates\r\nfacilities that are used by over 13,000 physicists\r\nfrom around the world (the “users”) and employs\r\naround 3,390 fellows and permanent staff. Many\r\nof the components of CERN’s large particle\r\ndetectors are largely built in the users’ home\r\ninstitutions and then transported to CERN.\r\nCERN hosts the Large Hadron Collider, the largest\r\nand highest-energy particle collider in the world.\r\nThe laboratory has made major contributions to\r\ncurrent understanding of the structure of matter\r\nand invented, developed and pioneered the use\r\nof a wide range of technologies, the best-known\r\nexamples being the discovery of the Higgs boson\r\nand the invention of the World Wide Web.\r\nCERN was conceived in the late 1940s with\r\nthe dual aims of enabling the construction\r\nof facilities beyond the means of individual\r\ncountries—thereby allowing European physicists\r\nto compete with their peers in the USA, where\r\nlarge accelerators were being built—and fostering\r\ncooperation between peoples recently in conflict.\r\nFrom the outset, CERN intended its findings\r\nto be widely accessible. CERN’s equivalent of\r\na constitution, its Convention, stipulates that\r\n“the Organisation shall have no concern with\r\nwork for military requirements” and that “the\r\nresults of its experimental and theoretical\r\nwork shall be published or otherwise made\r\ngenerally available”. CERN shares its technology\r\nand knowledge with companies and research\r\ninstitutes, and its experts frequently consult with\r\nbusinesses. CERN encourages the creation of\r\nnew companies based on its technologies and\r\ngrants licenses to commercial and academic\r\npartners for the use of its technologies.\r\nGlobal Governance: Goals and Lessons for AI • Institutional Analogies for Governing AI Globally 57\r\nPatents are only filed if doing so makes\r\ntechnologies more attractive to companies\r\ninterested in using them.\r\nHistory\r\nAt a meeting of the United Nations Educational,\r\nScientific and Cultural Organization (UNESCO) in\r\nParis in 1951, 12 European governments adopted a\r\nresolution establishing CERN (CERN is not part of\r\nthe UN system, and although UNESCO has been\r\nan Observer since the beginning, it did not send\r\nrepresentatives to meetings of the CERN Council\r\nfor many years). Two months later, an agreement\r\ncreated CERN’s Provisional Council, which\r\ndrafted the Convention that governs CERN. The\r\nConvention was signed by the original 12 Member\r\nStates in June 1953, and CERN formally came into\r\nexistence on September 29, 1954 when it had\r\nbeen ratified by all twelve Members.\r\nIn 1952, the Swiss, Dutch, French, and German\r\ngovernments submitted proposals to host the\r\nCERN laboratory. Geneva was ultimately chosen\r\ndue to its central location and Switzerland’s\r\nneutrality in World War II. While technical\r\nfactors (such as the availability of large amounts\r\nof electrical power) can be helpful in making\r\nshortlists of potential sites for international\r\norganizations, the experience of CERN and other\r\nsimilar organizations indicates that political and\r\neconomic factors tend to dominate.ii Factors to be\r\nconsidered, apart from money, when selecting a\r\nsite include: logistical ease of access, openness to\r\nvisitors, accommodation, and schooling.iii\r\nThroughout CERN’s history, collaboration\r\nhas created connections that cross political\r\nand cultural divides and foster better\r\ninternational understanding. CERN was the first\r\nintergovernmental organization that Germany\r\njoined after World War II. During the Cold War,\r\nCERN maintained links with scientists behind the\r\nIron Curtain. In the 1980s, CERN became one\r\nof the first European scientific organizations to\r\nwelcome significant numbers of Chinese scientists.\r\nEvolution\r\nCERN’s facilities have grown enormously over\r\nthe years, and today it is the world’s pre-eminent\r\nlaboratory for particle physics. The number of\r\nusers has also grown, although they currently\r\nlook set to decline following the CERN Council’s\r\nannouncement that cooperation with Russia and\r\nBelorussia will come to an end when the current\r\nagreements expire in 2024. While CERN has\r\ngrown spectacularly, the individual Members’\r\ncontributions to the budget have remained\r\nroughly constant or even declined in real terms.\r\nSince its inception, CERN has also grown from\r\n12 to 23 Member States, mainly because of the\r\naccession of formerly communist countries. In\r\nCERN’s early years, Observers (which included\r\nboth organizations, such as UNESCO and the EU,\r\nand non-member countries) received invitations\r\nto attend public sessions of the Council. While not\r\nentitled to speak, Observers may be invited by\r\nthe President to do so.\r\nDuring the Large Hadron Collider (LHC)\r\nconstruction era, non-European countries\r\ncontributing 15 million Swiss Francs or more to its\r\nconstruction were granted Observer status, which\r\ncame with the right to contribute to the LHC\r\ndecision-making process. This “Observership with\r\nspecial rights” was granted to four states (Israel,\r\nJapan, Russia, and the United States). In 2010, this\r\nstatus was replaced by a new Associate status,\r\nand it was decided that the status of Observer\r\nshould be granted only to organizations.\r\nToday there are nine Associate Member States,\r\nincluding three (Cyprus, Estonia, and Slovenia)\r\nin a pre-stage to membership. Their annual\r\ncontributions are set at a level that is high\r\nenough to have a tangible impact on the CERN\r\nbudget without discouraging applications.\r\nAssociate Member States are granted the right\r\nto attend the Council’s open and restricted\r\n(but not closed) sessions and can send\r\nrepresentatives to finance committee meetings.\r\nGlobal Governance: Goals and Lessons for AI • Institutional Analogies for Governing AI Globally 58\r\nThey cannot vote in the Council and its\r\ncommittees but can ask for the floor and make\r\nstatements without having been invited to do so.\r\nGovernance\r\nCERN is an intergovernmental organization,\r\nestablished by a treaty, that possesses its own\r\ninternational legal personality. Changing the\r\nConvention, which provides the framework for the\r\norganization’s governance, is difficult. It requires\r\nunanimity and ratification by all Members, which\r\ntypically involves approval by their national\r\nlegislative bodies. This has proved to be a source\r\nof stability. CERN has only revised its convention\r\nonce, in 1971, when it established a substantial\r\npresence in France in addition to Switzerland.\r\nThe CERN Convention, which has served CERN\r\nwell for nearly 60 years despite significant\r\nchanges in its size and nature, reflects the longterm\r\nvision of CERN’s founders and grants the\r\nCouncil powers that have provided important\r\nflexibility. It has, for example, allowed Israel\r\nto become a Member State, despite the word\r\n“European” appearing in CERN’s official title.\r\nThe Convention’s flexibility is one of the pillars\r\non which CERN’s success rests. The other is the\r\ntrust that Member States have in the laboratory’s\r\nmanagement and technical judgements. There\r\nhas only been one major review of CERN’s\r\nmanagement, which was carried out in the 1980s\r\nas a condition for the UK’s continued membership\r\nafter it had considered withdrawing.iv In contrast,\r\nhistorians of the US Superconducting Super\r\nCollider attribute its demise partly to almost\r\ncontinuous management and technical reviews by\r\nthe Department of Energy.\r\nAlternatives to treaties\r\nSigning onto international treaties generally\r\nrequires legislative or parliamentary approval.\r\nIn the case of the US, joining international\r\norganizations or collaborations not established\r\nby treaty is normally “subject to the annual\r\navailability of funding”, which creates unease\r\namong other parties that have made longterm\r\ncommitments. There are several examples\r\nof international scientific organizations with\r\nalternative structures.\r\n• The Institut Laue–Langevin (ILL) in Grenoble,\r\nwhich houses a high flux nuclear reactor\r\nthat is used to study materials on shortdistance\r\nscales, is a private company\r\nunder French law that is jointly owned\r\nand governed by French, German, and UK\r\nscientific organizations. They work closely\r\nwith the ILL’s 11 European “Scientific Member\r\ncountries”, who together contribute some\r\n20% of the annual budget.\r\n• Similarly, a nonprofit limited liability\r\ncompany owned by participating countries\r\nis responsible for constructing and operating\r\nthe European Xray Free Electron Laser\r\n(XFEL), based at the DESY laboratory\r\nin Hamburg. Likewise, the Facility for\r\nAntiproton and Ion Research (FAIR), which\r\nis an international center and one of the\r\nworld’s largest research projects, is being\r\nbuilt by a private company at GSI. Both DESY\r\nand GSI are large, established laboratories\r\nonto which XFEL and FAIR are being grafted.\r\n• The Joint European Torus (JET) at Culham\r\nin the UK provides another model. About\r\n350 scientists from EU countries and other\r\ncountries from around the globe participate\r\nin JET experiments each year under the\r\nscientific direction of a leader appointed\r\nby Eurofusion. The Culham Centre for\r\nFusion Energy (CCFE) is responsible for\r\nmaintaining and upgrading JET, under a\r\ncontract between the European Commission\r\nand the UK Atomic Energy Authority (CCFE’s\r\noperator). This funds around 400 engineers\r\nand technical staff who are responsible for\r\noperating and maintaining JET.\r\nGlobal Governance: Goals and Lessons for AI • Institutional Analogies for Governing AI Globally 59\r\nStructure and leadership\r\nCERN’s governing body, the Council, is composed\r\nof two delegates from each Member State.\r\nTypically, one of the delegates is a government\r\nrepresentative (often from a Ministry of Science,\r\nor in some cases, the country’s ambassador\r\nto the UN organizations in Geneva) and the\r\nother is a scientist. This combination of political\r\nand technical representation has served CERN\r\nwell. The Council elects a President and two\r\nvice Presidents from among the Delegates and\r\nappoints the Director-General, who is the chief\r\nexecutive officer of the Organization and its legal\r\nrepresentative. The Convention stipulates that in\r\nthe discharge of his or her duties, the Director\r\nGeneral “shall not seek or receive instructions\r\nfrom any government or from any authority\r\nexternal to the Organization”.\r\nThe Convention established a Scientific Policy\r\nCommittee (SPC) and a Finance Committee (FC).\r\nThe SPC’s mandate includes setting research\r\npriorities, measuring CERN’s achievements\r\nagainst annual goals, and overseeing senior\r\nstaff appointments. Its members include\r\nindividuals of the highest standing in the\r\nscientific community, who are appointed\r\nby the Council, and the Chair s of various\r\nadvisory committees. All act as individuals,\r\nnot as national representatives (the members\r\ninclude nationals of non-member states) or as\r\nrepresentatives of the bodies they chair.\r\nThe FC provides the Council with advice on\r\nfinancial matters, approves large-scale contracts\r\nand staff regulations, and recommends staff rules\r\nto the Council.\r\nThe “President’s group”, which includes the\r\nDirector General, the two Vice Presidents, and\r\nthe chairs of the finance and scientific policy\r\ncommittee, helps the President prepare for\r\nCouncil sessions.\r\nThere is a tradition that during its meetings,\r\nwhich normally take place over dinner between\r\nmeetings of the FC and of the Council, the\r\nDelegates take off their hats as national\r\nrepresentatives and discuss how to conduct the\r\nbusiness in what they consider to be the best\r\ninterest of CERN.\r\nVoting\r\nWhile unanimity is required for changes to the\r\nConvention, admission of new members, and\r\napproval of major projects, almost all other\r\nmatters are in principle decided by a two-thirds\r\nmajority (some international organizations require\r\nunanimity for most decisions, which is known to\r\nhave led to difficulties in some cases). However,\r\nCERN has a long tradition of reaching consensus\r\non difficult issues through diplomatic means,\r\nsuch as informal negotiations between delegates,\r\nrather than formal voting, and has effectively\r\nabandoned the two-thirds majority rule for major\r\nfinancial issues.\r\nIn its first decades, when CERN had 12 Member\r\nStates, there was a tacit understanding that\r\ncountries that made relatively small financial\r\ncontributions would not outvote a majority of\r\nmembers that made major financial contributions\r\non important financial issues. In 1991, when\r\nthere were 16 Member States, it was decided\r\nthat the FC’s recommendations to the Council\r\nshould be backed by 55% of the annual financial\r\ncontributions of the Member States, in addition\r\nto the majority required by the Convention. This\r\nnumber was later increased to 70%.\r\nCERN’s plans include the possible construction of\r\na 90-km circumference Future Circular Collider\r\nby a large global collaboration of partners. A\r\nspecially constituted Council Working Group on\r\nthe Governance of CERN is currently considering\r\nhow such a project might be governed.\r\nGlobal Governance: Goals and Lessons for AI • Institutional Analogies for Governing AI Globally 60\r\nFunding\r\nWhile non-member States have contributed to\r\nthe construction of the LHC in-kind, Members’\r\nregular contributions to the budget of CERN are all\r\nin cash. Cash contributions with open bidding for\r\ncontracts leads to lower costs. At the International\r\nThermonuclear Experimental Reactor (ITER), where\r\nthe major contributions are in-kind, construction\r\nof some large components has been split between\r\nsuppliers in different member states. This has\r\nproduced technical issues and raised costs, as each\r\nsupplier incurs their own set-up costs.\r\nThe CERN Members’ contributions are\r\ncalculated as a percentage of their average net\r\nnational incomes for the preceding three years.\r\nUntil the late 1980s, “average” was interpreted\r\nas a simple average, and Members’ payments\r\nreflected their past—rather than their current—\r\neconomic strength. Since then, CERN has used\r\nweighted averages that account for trends in\r\nrelative economic strengths and changes in\r\nexchange rates.\r\nThe original Convention set a maximum\r\npercentage for the contribution of any Member.\r\nThis was removed when the Convention was\r\nrevised in 1971, but the Council subsequently set a\r\nmaximum. This protects the biggest contributors\r\nfrom feeling that they are carrying the main\r\nburden without receiving more influence. The\r\nCouncil can also take into account a Member\r\nState’s situation and temporarily reduce its\r\ncontribution, as it is currently doing with Ukraine,\r\nwhich is an Associate Member.\r\nCountries that host international organizations\r\nbenefit from staff salaries being spent locally, as\r\nwell as the placement of most small and service\r\ncontacts. Consequently, the hosts of some\r\norganizations are required to pay a “host state\r\npremium”. France and Switzerland, CERN’s two\r\nhost nations, have made additional voluntary\r\ncontributions, some of which were in-kind.\r\nProcurement\r\nCERN calculates a return coefficient, which is the\r\nratio between a Member state’s percentage share\r\nof the value of all contracts and its percentage\r\ncontribution to the CERN budget. Members\r\nare said to be “poorly balanced” if their return\r\ncoefficient is less than 1.0, and “well balanced” if\r\nit is greater than or equal to 1.0. When awarding\r\nnew contracts, consideration is given to whether\r\nthe lowest bidders are well or poorly balanced. If\r\nthe lowest bid is from a manufacturer in a wellbalanced\r\ncountry, then the two lowest bidders\r\nfrom poorly balanced countries are offered the\r\nopportunity to adjust their bids to match the\r\nlowest bid, as long as their bids were within 20%\r\nof that bid.\r\nConclusion\r\nIn its mission of advancing the boundaries of\r\nhuman knowledge through research in particle\r\nphysics, CERN has been a success. Analysis of the\r\nway that CERN and other international scientific\r\norganizations referenced above work leads to\r\na list of issues that will have to be addressed in\r\nestablishing new international organizations,\r\nincluding their legal status, voting procedures,\r\nthe basis for calculating contributions, the\r\nconstitution of advisory bodes, and site selection.\r\nHow best to deal with these issues will depend on\r\nan organization’s mandate. Issues that deserve\r\nparticular attention in establishing an organization\r\ncharged with governing AI (which, unlike CERN,\r\nwill presumably not be a user organization,\r\nand will not require infrastructure that will take\r\ndecades to construct) include:\r\nIntellectual property, openness and independence.\r\nCERN’s core tenet of separation from military\r\nendeavors and the accessibility of its scientific\r\nresearch has been central to its mission, as\r\nhas the stipulation that the Director General\r\nof CERN’s laboratory operates independently\r\nof any government or outside institution.\r\nGlobal Governance: Goals and Lessons for AI • Institutional Analogies for Governing AI Globally 61\r\nQuestions of independence and accessibility\r\nwill be similarly critical with regard to an AI\r\ngovernance organization.\r\nWhether to graft a new organization onto an\r\nexisting body. The core of an AI organization\r\nwill likely be its staff, supported by computing\r\npower which could presumably be acquired\r\nrelatively quickly compared to constructing a\r\nnew fusion device or accelerator. Grafting such\r\nan organization onto an existing body (as XFEL,\r\nFAIR, and JET have done) would allow it to rely\r\nlargely on pre-existing administrative support and\r\nservices and get off to a rapid start.\r\nThe possible involvement of private companies.\r\nCreating an international organization to which\r\nprivate companies belong alongside countries\r\nwould raise novel governance issues. If private\r\ncompanies are formally involved in an AI\r\ngovernance organization, these issues might be\r\nfinessed by making these companies Observers or\r\ngiving them some sort of associate status.\r\nThe possible involvement of a politically neutral\r\n“parent body” such as UNESCO, to which all\r\npotential members already belong, lowering political\r\nbarriers to joining. An example is provided by\r\nSESAME (Synchrotron-light for Experimental\r\nScience and Applications in the Middle East),\r\nwhose Members include Iran, Israel and Palestine,\r\nwhich (like CERN) was founded after UNESCO\r\nsummoned a meeting of potentially interested\r\nparties (in contrast to CERN, UNESCO continues\r\nto play a role in SESAME). UNESCO’s involvement\r\nmade it easier for some countries to participate\r\nthan it might otherwise have been.\r\nEvaluation of the benefits of cash and in-kind\r\ncontributions from member states. In the case\r\nof an AI institute, the major purchases will\r\npresumably be of computing resources. In this\r\ncase, the organization could be funded by cash\r\ncontributions, which would purchase equipment\r\nor services on the basis of open tender. While\r\nusing fewer vendors or even a single vendor would\r\nimprove technological compatibility, this would\r\nraise the issue of industrial returns to members.\r\nThe development of a new international\r\ngovernance organization offers the opportunity\r\nto learn from the challenges faced and obstacles\r\novercome by CERN. These lessons have the\r\npotential to help accelerate science and advance\r\nhuman potential in the field of AI and beyond.\r\ni. CERN’s stated mission is to: perform world-class research in fundamental physics; provide a unique range of particle accelerator facilities\r\nthat enable research at the forefront of human knowledge, in an environmentally responsible and sustainable way; unite people from all\r\nover the world to push the frontiers of science and technology, for the benefit of all; and train new generations of physicists, engineers, and\r\ntechnicians, and engage all citizens in research and in the values of science.\r\nii. For instance, the UK was chosen over Germany to house the Joint European Torus (JET) due to a hijacking in Mogadishu. Europe was\r\nchosen to house the International Thermonuclear Experimental Reactor (ITER) rather than Japan because it offered to make a much larger\r\ncontribution.\r\niii. Access: CERN has benefited from being next door to an international airport, whereas arguably the International Thermonuclear\r\nExperimental Reactor (ITER) has suffered from being an hour’s drive from Marseille airport; openness to visitors: The host should be\r\nable to provide access to visiting scientists, although the host country generally reserves the right to deny access on good grounds;\r\naccommodation: If ample short-term accommodation is not available locally, centers that anticipate large numbers of visitors often construct\r\nhostels, thereby allowing visitors to make best use of their time and facilitating collaboration; and schooling: If the organization employs\r\nsignificant numbers of staff, it may be necessary to provide access to education in various languages. This is available in some cities, such as\r\nGeneva and London, which have large international populations. In other cases, special schools have been built, e.g., close to ITER, where\r\nteaching is available in six languages.\r\niv. This author served as the scientific adviser to this External Review Committee. In the early 1990s, in the run-up to the approval of the Large\r\nHadron Collider (LHC), which relies on what was then a very novel design of superconducting magnets, the author set up an external review\r\nof the design in order to reassure the Council, although they had not asked for such a review.\r\nGlobal Governance: Goals and Lessons for AI • Institutional Analogies for Governing AI Globally 62\r\n3.3\r\nThe International\r\nAtomic Energy\r\nAgency (IAEA)\r\nAuthored by\r\nDr. Trevor Findlay\r\nPurpose\r\nThe International Atomic Energy Agency (IAEA)\r\nis a multilateral, intergovernmental organization\r\nthat pursues a variety of interrelated governancei\r\nmissions, including nuclear safeguards, nuclear\r\nsafety and security, and technical assistance with\r\nnuclear technology. Established in 1957 in Vienna\r\nto promote and govern the peaceful uses of\r\nnuclear energy worldwide, the IAEA is best known\r\nfor the nuclear safeguards system later put in place\r\nand its unparalleled monitoring, verification and\r\ncompliance capacities.\r\nThe IAEA’s safeguards system represents the most\r\nradical impingement on national sovereignty yet\r\ndevised: safeguards are legally binding for most\r\nstates, they encompass extensive monitoring and\r\nverification measures (including notably intrusive,\r\nmandatory on-site inspections), and the Agency has\r\ndirect access to the United Nations Security Council\r\nto request enforcement measures. IAEA also defines\r\nsafety and security standards for handling nuclear\r\ntechnology and helps developing countries identify\r\nenergy needs and use nuclear technology.\r\nThe success that the IAEA has helped achieve in\r\navoiding nuclear catastrophe on a global scale\r\noffers lessons that may be applicable to the\r\ncreation of a new governance organization. For\r\nexample, the IAEA encourages states to accept\r\nimpingements on their sovereignty in return for an\r\norderly regime that benefits all states. It also offers\r\nassistance to states regarding the peaceful uses of\r\nnuclear technology to enhance this “bargain”.\r\nWhen it comes to constructing a new\r\ninternational regime, the bargain struck between\r\ndeveloped and developing countries can mean\r\nthe difference between success and failure. Such\r\na bargain may involve enhanced regulation,\r\nmonitoring, verification, and compliance\r\nmechanisms in exchange for development\r\nassistance and technical cooperation.\r\nGlobal Governance: Goals and Lessons for AI • Institutional Analogies for Governing AI Globally 63\r\nHistory\r\nThe objective of the IAEA, as set by its statute, is\r\nas follows:\r\n… to accelerate and enlarge\r\nthe contribution of atomic\r\nenergy to peace, health\r\nand prosperity throughout\r\nthe world. It shall ensure,\r\nas far as it is able, that\r\nassistance provided by it\r\nor at its request or under\r\nits supervision or control is\r\nnot used in such a way as to\r\nfurther any military purpose.\r\nThe establishment of the IAEA stemmed from\r\nUS President Dwight D. Eisenhower’s “Atoms for\r\nPeace” speech at the UN General Assembly on\r\nDecember 8, 1953. Eisenhower suggested creating\r\nan agency that would receive nuclear material\r\nfrom “advanced” nuclear nations and provide\r\nthis material to member states for peaceful use\r\nin medicine, agriculture, science, and power\r\ngeneration. The hope was that this clearinghouse\r\narrangement would not only decrease the stocks\r\nof nuclear material available for nuclear weapons,\r\nbut also head off aspirations by additional states\r\nto acquire such weapons.\r\nFollowing secret talks between the US and the\r\nSoviet Union, a select group of states convened in\r\nWashington, DC to negotiate a draft statute. This\r\nstatute was subsequently amended and adopted\r\nby the UN General Assembly in 1956, and the\r\nAgency was established the following year.\r\nIn many respects, the IAEA was an American\r\nproject—initiated, developed, and funded\r\ngenerously by successive US administrations until\r\nit took on a life of its own. When creating a new\r\ngovernance organization, it is often helpful if a\r\npolicy leader (or a coalition of them) emerges\r\nquickly to drive the process, as a negotiating\r\nfree-for-all will likely not produce the necessary\r\ncoherence and effectiveness.\r\nEvolution\r\nThe original concept for the IAEA as a nuclear\r\nmaterial clearinghouse never eventuated, partly\r\nbecause more states began to start their own\r\nnuclear programs. The US also began to directly\r\nsupply other countries with nuclear assistance,\r\nunder bilateral US safeguards agreements to\r\nprevent misuse for weapons purposes. The\r\nSoviets soon followed with their own program.\r\nThe IAEA instead became the “nuclear watchdog”,\r\nestablishing a nuclear safeguards system to\r\ndeter states without nuclear weapons from\r\nmanufacturing them. In the 70 years since\r\nits inception, the IAEA has also adopted new\r\ngovernance roles in nuclear safety (preventing\r\nnuclear accidents) and nuclear security\r\n(preventing nuclear terrorism). In addition, the\r\nIAEA provides technical assistance to member\r\nstates in a manner that vastly exceeds what was\r\nenvisaged in the statute.\r\nGlobal Governance: Goals and Lessons for AI • Institutional Analogies for Governing AI Globally 64\r\nThe nuclear safeguards system\r\nThe IAEA’s regime for detecting the diversion of\r\npeaceful nuclear materials to weapons purposes is\r\nknown, confusingly to outsiders, as “safeguards”.\r\nThe safeguards system involves states declaring\r\nto the Agency the types, amounts, and locations\r\nof nuclear materials in their possession. The\r\nmost sensitive materials are enriched uranium\r\nand plutonium, both of which may be used for\r\nnuclear weapons, and both of which also feature\r\nin a sophisticated nuclear fuel cycle designed for\r\npeaceful purposes.\r\nThe Agency applies several layers of safeguards\r\nmeasures to ensure that state declarations are\r\ncorrect, including:\r\n• nuclear material accountancy;\r\n• on-site inspections (the Agency employs\r\nroughly 200 inspectors to carry out on-site\r\nactivities, as well as a cadre of information\r\nanalysts and technical support staff);\r\n• seals to ensure that material is not tampered\r\nwith between inspections;\r\n• sample analysis;\r\n• remote video monitoring;\r\n• satellite imagery;\r\n• open-source information analysis; and\r\n• in extreme circumstances, the analysis\r\nof intelligence information provided by\r\nmember states.\r\nIn theory, at least, the consequences for a state\r\ncaught in non-compliance are serious. Once the\r\nIAEA director general reports a non-compliant\r\nstate to the UN Security Council, the Council\r\nis empowered to punish such violators with\r\nsanctions, including economic sanctions, and\r\nultimately, the use of military force.\r\nThe system has been subject to almost continuous\r\ntechnical improvement since being established\r\nin the late 1950s. Originally, safeguards were\r\npurely voluntary, imposed as states offered\r\nnuclear materials to others and wished to have\r\nreassurance that such material would not be\r\nmisused. A major shift occurred in 1970 with\r\nthe entry into force of the 1968 Nuclear Non-\r\nProliferation Treaty (NPT). The NPT made IAEA\r\nsafeguards mandatory and legally binding for\r\nstates without nuclear weapons, but not for the\r\nfive official nuclear weapon states―China, France,\r\nthe Soviet Union/Russia, the United Kingdom and\r\nthe US.\r\nNon-nuclear weapon states were obliged to sign\r\nbilateral agreements with the IAEA establishing\r\nthe scope and nature of their safeguards\r\nobligations, which varied depending on national\r\ncircumstances. The NPT vastly increased the\r\nimportance and technical capacities of the IAEA\r\nand its safeguards system.\r\nAfter the discovery in 1991 of an illicit Iraqi\r\nnuclear weapons program, the IAEA further\r\nstrengthened and modernized nuclear safeguards\r\nby negotiating an Additional Protocol (AP) for\r\nbilateral safeguards agreements between states\r\nand the IAEA. The adoption of an AP by states\r\nis voluntary, although a substantial majority of\r\nstates have chosen to adopt one.\r\nThe safeguards regime, both by design and by\r\naccidents of history, creates different obligations\r\nfor different states, which has led to charges of\r\ninequity and discrimination:\r\n• as the IAEA was established two decades\r\nbefore the NPT for a different purpose,\r\nnot all IAEA member states (notably India,\r\nPakistan, and Israel) are party to the NPT,\r\nyet these states may still be elected to the\r\nboard of governors and sit in judgement\r\non other states violating their NPT\r\nsafeguards obligations;\r\nGlobal Governance: Goals and Lessons for AI • Institutional Analogies for Governing AI Globally 65\r\n• not all IAEA member states or parties to\r\nthe NPT are required to have safeguards\r\nagreements (the nuclear weapon states\r\nare only encouraged to adopt “voluntary”\r\nagreements); and\r\n• not all states with safeguards agreements\r\nhave concluded a voluntary Additional\r\nProtocol, the highest level and most intrusive\r\nform of safeguards (notably Argentina,\r\nBrazil, Egypt, Iran, Saudi Arabia, and Syria).\r\nEnsuring that the establishment of an international\r\nagency flows directly from its foundational treaty\r\nis one way to avoid such complexities. This is the\r\nmodel followed by more recent examples, such\r\nas the Comprehensive Nuclear Test Ban Treaty\r\nOrganization, established pursuant to the 1996\r\nComprehensive Nuclear Test Ban Treaty, and\r\nthe Organization for the Prohibition of Chemical\r\nWeapons, established pursuant to the 1993\r\nChemical Weapons Convention.\r\nThe nuclear safety and security regimes\r\nWhile the nuclear safety and security regimes\r\nare also based on legally binding treaties, they\r\nare not subject to the legally binding reporting,\r\nmonitoring, verification, and compliance processes\r\nof the safeguards regime. Often referred to as\r\n“incentive regimes”, the treaties only commit\r\nstates to making their best efforts to achieve\r\nsafety and security. The measures applied include\r\nvoluntary reporting, recommended standards and\r\npractices, assessment missions, periodic review\r\nconferences, and technical assistance.\r\nThe rationale behind the safety and security\r\nregimes is that states themselves should have\r\nprimary responsibility for the safety and security\r\nof their nuclear enterprises, and the IAEA should\r\nonly advise and assist them in carrying out such\r\ntasks.\r\nLike safeguards, these regimes have become\r\nmore extensive and sophisticated in response\r\nto clarifying events, including the accidents at\r\nChernobyl (1996) and Fukushima (2011). None of\r\nthe innovations that followed these crises included\r\nintrusive measures, such as on-site inspections.\r\nThe IAEA has found that the development of\r\nagreed standards and codes of conduct, even if\r\nnot mandatory, can have a normative effect. The\r\ndownside of these measures is that agreement on\r\nstandards and recommendations tends to devolve\r\nto the lowest common denominator. Additionally,\r\nthe IAEA uses visiting missions, comprising both\r\nIAEA and national representatives, to assess\r\nimplementation and make recommendations.\r\nOver the years, this has led to improvements in\r\nstate performance.\r\nGovernance\r\nAny UN member state may join the IAEA. As of\r\nSeptember 2023, the Agency had 178 members\r\nout of 193 UN member states, making it close\r\nto universal (mostly small island states are\r\nunrepresented). All states possessing nuclear\r\nweapons or with significant peaceful nuclear\r\nactivities are members, with the stark exception\r\nof North Korea, which withdrew in 1994―the only\r\nstate ever to have done so.\r\nAchievements and challenges\r\nThe IAEA confronts the classic dilemma of all\r\ninternational organizations―it is both empowered\r\nand hindered by its member states. The Agency\r\nis crucially dependent on states to carry out its\r\nmandate on their behalf. This means that the\r\ndirector general and secretariat can only act with\r\nthe approval and support of member states,\r\nespecially the most powerful. The United States,\r\nfor instance, provides up to 25% of the IAEA’s\r\nregular budget, in addition to generous voluntary\r\ncontributions and technical assistance.\r\nGlobal Governance: Goals and Lessons for AI • Institutional Analogies for Governing AI Globally 66\r\nChina, Russia, the European Union, and\r\ndeveloping countries collectively have also\r\nbecome key players. Such power dynamics are\r\nespecially prominent in determining Agency action\r\nagainst states that have violated their safeguards\r\nobligations. They also arise when the secretariat\r\nattempts to further strengthen safeguards.\r\nOn the other hand, the IAEA, like other\r\ninternational organizations, has carved out\r\na certain autonomy in the nuclear field. The\r\nincreasing complexity of the nuclear enterprise, the\r\nnumber of industrial players that have emerged,\r\nand the expansion of IAEA membership means\r\nthat only a handful of states can keep track of\r\nall the IAEA’s activities and acquire the same\r\nfamiliarity with global nuclear governance as the\r\nAgency itself. In carrying out its mission, the IAEA\r\nhas also attempted to portray itself—not always\r\nsuccessfully, given the political issues at stake—as\r\na science and technology-based institution that is\r\nimpartial, autonomous, and non-discriminatory in\r\nits dealings with its member states.\r\nThe IAEA levers its accumulated experience and\r\nexpertise to establish and reinforce good behavior\r\nin all areas of its mandate. It can produce\r\ncompromises among its member states by\r\ntrading off their competing interests against each\r\nother. A recent example is the “7 Pillars of Nuclear\r\nSafety and Security” that the current Director\r\nGeneral, Rafael Grossi, issued immediately\r\nfollowing Russia’s invasion of the Chernobyl and\r\nZaporizhzhia nuclear power plant sites.\r\nThe role of industry\r\nFrom the beginning, the IAEA has kept the\r\nindustry it was supposed to be governing at arm’s\r\nlength, a flaw that has long been apparent but\r\nonly recently addressed. This is due in part to\r\nthe fact that the IAEA’s establishment was driven\r\nby the concerns of national leaders about the\r\ndangers of nuclear weapons proliferation.\r\nThe impetus did not come from the nuclear\r\nindustry, which barely existed in the 1950s and\r\nwas almost exclusively operated by governments.\r\nThe IAEA has historically handled nuclear\r\ngovernance matters via either member states’\r\npermanent diplomatic representatives in Vienna\r\nor foreign offices in member states’ capitals. While\r\nsome delegations, notably those of China, Russia,\r\nand the US, include nuclear experts, these are\r\nmostly from national nuclear bureaucracies, such\r\nas the US Department of Energy; governmentrun\r\nnuclear laboratories, such as Sandia in the\r\nUS; or national regulators, such as the US Nuclear\r\nRegulatory Commission. Nuclear industry has not\r\nbeen invited to join national delegations to IAEA\r\nconferences, the theory being that companies can\r\ninteract directly with their national governments\r\nto protect their interests.\r\nFor their part, private companies in the nuclear\r\nfield have also tended to keep their distance from\r\nthe international regime. They almost invariably\r\nregard the IAEA and governments as seeking\r\nto intrude on their commercial operations and\r\nsee instruments such as the NPT as “political”\r\ndocuments of no concern to them. From the\r\noutset, the privately owned uranium mining\r\nindustry pressured governments whose territory\r\ncontained large uranium deposits (such as\r\nAustralia, Canada, and Belgium) to exempt natural\r\nuranium from IAEA safeguards.\r\nToday, industry is more involved in the nuclear\r\nsecurity issue, presumably due to the commercial\r\nimplications of a catastrophic nuclear terrorism\r\nincident. This has led, for instance, to industryorganized\r\nsummits that coincided with the stateled\r\nNuclear Security Summits held at US initiative\r\nfrom 2010 to 2016.\r\nGlobal Governance: Goals and Lessons for AI • Institutional Analogies for Governing AI Globally 67\r\nStructure\r\nThe IAEA is a member of the United Nations\r\nfamily of organizations and shares much of\r\nthe UN’s structures, processes, and culture. It is\r\nlocated at the Vienna International Centre along\r\nwith other Vienna-based UN organizations.\r\nThough it reports annually to the United Nations\r\nGeneral Assembly and, on request, to the UN\r\nSecurity Council, the IAEA is not a UN-specialized\r\nagency like UNESCO or the World Health\r\nOrganization. Rather, the IAEA is an autonomous\r\norganization governed by its member states\r\nthrough a general conference, in which all member\r\nstates are represented, and a 35-member board\r\nof governors. In theory, the general conference,\r\nwhich convenes annually, sets broad policy that\r\nguides the board of governors. In reality, power\r\nat the IAEA is concentrated in the board, both by\r\ndesign and evolved practice. The board comprises\r\nsemi-permanent members repeatedly elected due\r\nto their importance to the peaceful uses of nuclear\r\nenergy, along with non-permanent members\r\nelected for two-year terms on a regional basis. This\r\nallows every member state to be represented on\r\nthe board at some point.\r\nThe board holds at least six sessions per year\r\nand may also meet in emergency situations. It\r\nconsiders membership applications, establishes the\r\nAgency’s work program and budget, and approves\r\nall agreements with member states, safety and\r\nsecurity standards, major infrastructure, and special\r\nprojects. The board has the right to declare a state\r\nin violation of its safeguards obligations and to\r\nreport it to the UN Security Council for possible\r\nenforcement action, which it has done with respect\r\nto Iraq, North Korea, and Iran.\r\nAll five of the “official” nuclear weapon states\r\n(according to the NPT) and the states most\r\nadvanced in nuclear energy in each region\r\nof the world are awarded virtual permanent\r\nmembership on the board. Unlike the UN Security\r\nCouncil, no member state has veto power.\r\nWhile approval of the Agency’s program and\r\nbudget requires a two-thirds majority, only a\r\nsimple majority is required for all other matters.\r\nApart from its headquarters in Vienna, the Agency\r\nhas regional offices in Tokyo and Toronto and\r\nresearch laboratories in Seibersdorf, Austria,\r\nand Monaco. The staff of the Agency, known\r\nas the secretariat, comprises approximately\r\n2,560 multidisciplinary professional and support\r\nstaff from more than 100 countries. All are\r\ninternational civil servants recruited according\r\nto UN regulations, with consideration given to\r\ngeographical (and more recently, gender) balance.\r\nThe Agency is headed by a director general who\r\nis appointed by the board of governors, with the\r\napproval of the general conference, for a four-year\r\nterm, which is often renewed.\r\nFunding\r\nThe Agency is funded by assessed contributions\r\nfrom each member roughly according to its GDP\r\nas calculated by the UN, with significant discounts\r\nfor developing countries. The Agency also relies\r\non voluntary contributions from wealthier member\r\nstates. The IAEA’s total regular budget in 2022-23\r\nwas approximately $419.8 million. The Agency,\r\nalong with all other UN bodies, has operated at\r\nzero real budgetary growth since 1985.\r\nConclusion\r\nThe IAEA aims to use international cooperation\r\nto promote the benefits of a powerful technology\r\nwhile also limiting the harm it can do to humanity.\r\nIn this purpose, it has much in common with a\r\npotential AI regime. Because IAEA similarly deals\r\nwith a highly sensitive technology that, if misused,\r\ncan pose an existential threat, the Agency’s\r\nexperience suggests several lessons for the\r\ninternational governance of AI.\r\nGlobal Governance: Goals and Lessons for AI • Institutional Analogies for Governing AI Globally 68\r\nDo not underestimate the potential pervasiveness\r\nof a given technology. In the earliest years of the\r\n“nuclear age”, it appeared that only the most\r\nsophisticated countries could pursue nuclear\r\ntechnology. This was soon proven false. The\r\nsame is even more likely to apply to the spread\r\nof AI capabilities, where the misuse of AI could\r\nbe perpetrated not only by any state but by any\r\ncitizen of any state. Even if universal participation\r\nin a governance organization is unachievable\r\nat the outset, it will be important to have all\r\nthe major players involved in negotiating and\r\ninitiating implementation of an AI regime.\r\nPromotional efforts to achieve universality could\r\nfollow, as was the case with the IAEA. Additionally,\r\nthe division of states into permanent “haves” and\r\n“have nots” as in the IAEA should be avoided.\r\nAvoid giving veto power to a single member\r\nin the quest for universality. In the case of the\r\nComprehensive Nuclear Test Ban Treaty,\r\nnegotiators wanted to lock in all states that had\r\ntested nuclear weapons in the past or could so\r\nin the future. This gave states like India, Egypt,\r\nIran, North Korea, and Pakistan, which are on\r\nthe “essential country” list, an effective veto over\r\nentry into force of the treaty. As a result, the\r\nimplementation organization for the treaty can\r\noperate only in provisional mode, which prevents\r\nactivity such as on-site inspection in case of a\r\nsuspected nuclear test. This situation should be\r\navoided by an AI regime: better to have most of\r\nthe key players involved than to hold out for all\r\nof them.\r\nIncorporate industry from the beginning. In the\r\nearly nuclear industry, governments were the\r\nprincipal drivers of technological innovation and\r\ncalls for governance. In the AI industry, newly\r\nemerging expertise is confined to a relatively small\r\nnumber of corporations and countries. Given\r\nthe difference in who is driving technological\r\ninnovation and the lessons learned by IAEA,\r\nfiguring out how to bring both companies and\r\ngovernments to the table at the earliest stages\r\nwill be critical. An alternative to the IAEA model is\r\nthe International Labor Organization, a “tripartite”\r\nUN body where industry and trade unions are\r\nrepresented along with governments.\r\nWhile both nuclear radiation and AI are\r\nintangible to the average person, nuclear\r\nmaterial is a physical artifact that must be dug\r\nout of the ground, processed, refined, enriched,\r\nshaped into the form of an explosive, mounted\r\non a delivery vehicle, and launched. Only then\r\ndoes it become deadly. Obtaining weaponsgrade\r\nnuclear material is a high bar to anyone\r\ncontemplating using nuclear weapons. The\r\nrapid evolution of AI, its seeming malleability,\r\nand its potential for misuse by a wide variety\r\nof actors with as yet unknown effects make AI\r\nfundamentally different and potentially even\r\nharder to govern than nuclear. At the same time,\r\nthe success that international organizations like\r\nthe IAEA have achieved in avoiding catastrophe\r\nand encouraging collaboration between nations\r\nin the nuclear realm offers hope for achieving\r\nsimilar outcomes in other fields.\r\ni. This paper uses the term global governance to refer to international treaties, organizations, arrangements and procedures agreed between\r\ngovernments to bring order and predictability to a particular realm of human activity. Although this might also be termed international\r\nregulation, it differs from national regulation. Governments have political and legal jurisdiction over their people and territory, not least\r\na monopoly on the use of force. International organizations do not have such characteristics. While they may monitor and verify state\r\ncompliance, they are only able to enforce their decisions in extreme circumstances through the intercession of the United Nations Security\r\nCouncil. There is no standing international police force. At the international level then, the term governance is preferable to regulation\r\nas it suggests a collective, collaborative endeavour to establish a normative umbrella over national behaviour that involves nurturing and\r\npromoting norms, standards and recommendations, codes of conduct, best practice and incentives for compliance, such as economic\r\nsupport and technical assistance. Monitoring and verification are increasingly possible, not least due to technological advances, but\r\nenforcement is always confronted by the doctrine of state sovereignty.\r\nGlobal Governance: Goals and Lessons for AI • Institutional Analogies for Governing AI Globally 69\r\n3.4\r\nThe\r\nIntergovernmental\r\nPanel on Climate\r\nChange (IPCC)\r\nAuthored by Diana Liverman\r\nand Youba Sokona\r\nPurpose\r\nThe Intergovernmental Panel on Climate Change\r\n(IPCC) was established by the United Nations in\r\n1988 to provide regular scientific assessments\r\nof climate change—including climate science,\r\nimpacts and responses—that would reduce the\r\nrisks and “consider uncertainties and gaps in\r\nknowledge about climate change and information\r\nneeded for responses and policies”.i\r\nThe international scientific community, and some\r\ngovernments, pushed for the establishment\r\nof IPCC because of the potential risks that\r\nincreasing atmospheric carbon dioxide and socalled\r\ngreenhouse gases produced by human\r\nactivity would lead to the warming of the planet\r\nand other climatic changes that would endanger\r\npeople and ecosystems. Scientists saw the need\r\nfor careful review of emerging published research\r\non climate change and options for mitigation (the\r\nreduction or removal of greenhouse gases) and\r\nadaptation (processes of adjusting the climate\r\nchanges that have occurred or could occur).\r\nThe assessments published by IPCC are\r\nintended to provide policy makers, especially\r\nUN member governments, with information to\r\ndevelop climate policy as well as to inform the\r\nnegotiations and agreements developed under\r\nthe United Nations Framework Convention on\r\nClimate Change (UNFCCC).\r\nThe first set of IPCC reports was published in 1990.\r\nIPCC’s first report laid out the risks of climate\r\nchange and the need for international cooperation.\r\nThis report played a key role in the creation of the\r\nUNFCCC, which was approved by 154 nations at\r\nthe 1992 UN Rio Conference on Environment and\r\nDevelopment/Earth Summit. The UNFCCC set out\r\nto prevent “dangerous human interference with the\r\nclimate system” with an initial focus on stabilizing\r\nand reducing greenhouse gas concentrations in\r\nthe atmosphere.\r\nSince then, IPCC reports written by scientific\r\nexperts from around the world have been\r\nGlobal Governance: Goals and Lessons for AI • Institutional Analogies for Governing AI Globally 70\r\npublished every five to seven years, with the\r\nlatest 6th Assessment cycle releasing their final\r\nreports from 2021 to 2023. Report outlines and\r\nsummaries are approved by an intergovernmental\r\npanel of delegates at IPCC plenaries. Reports are\r\nusually acknowledged, discussed and accepted by\r\nthe UNFCCC. The UNFCCC is the United Nations\r\ninstitution of political governance, whereas the\r\nIPCC is a scientific governance process.\r\nSince its founding, the IPCC has influenced\r\nresearch and the UN negotiations process,\r\nas well as informing decision making by local\r\nand national governments, citizens, and the\r\nprivate sector. The IPCC, by identifying gaps in\r\nknowledge, has also influenced the research\r\nagendas and funding of thousands of scientists,\r\nas well as governments and foundations.\r\nSpecifically designed to inform international\r\npolicy and treaties on climate change, the IPCC\r\nis, for the most part, admired and respected for\r\nproviding critical scientific and technical input for\r\nthe governance of climate change through the\r\nUNFCCC, governments and other international\r\nenvironmental treaties.\r\nThe impact of the IPCC received worldwide\r\nrecognition in 2007 when it was awarded the\r\nNobel Peace Prize jointly with Al Gore for “their\r\nefforts to build up and disseminate greater\r\nknowledge about man-made climate change,\r\nand to lay the foundations for the measures that\r\nare needed to counteract such change”.\r\nThe IPCC’s most significant impact on global\r\ngovernance has been its engagement with the\r\nUNFCCC and its decisions. Perhaps the IPCC’s\r\nmost influential report, requested by the UNFCCC\r\nwith the 2015 Paris Agreement and published in\r\n2018, examined potential impact of limiting global\r\nwarming to 1.5°C and the pathways to achieving\r\nthis goal. The report’s main conclusions were that\r\nimpacts of warming increased significantly from\r\n1.5°C to 2°C, and that to have a good chance of\r\nkeeping warming under 1.5°C, the world needed\r\nto cut emissions in half by 2030 and reach net\r\nzero by 2050. Countries, cities, corporations,\r\nand citizens regularly refer to this report in their\r\nclimate commitments.\r\nDespite the efforts of IPCC, and nations’ efforts\r\nto agree on responses through the UNFCCC,\r\nhuman-generated greenhouse gas emissions\r\nhave continued to increase, albeit at a slower rate.\r\nGlobal temperatures are the highest on record,\r\nwith serious impacts on human wellbeing and the\r\nnatural world. When IPCC was founded in 1988,\r\nthe concentration of CO2 in the atmosphere was\r\nabout 350 ppm. This has continued to increase\r\neach year, reaching 420ppm in 2023. In 1988,\r\nglobal emissions of greenhouse gases were\r\nequivalent to about 38 gigatons per year. By 2022,\r\ngreenhouse gas emissions had reached almost\r\n55 gigatons. The global average temperature\r\nhas risen by 1.2°C since 1880, and the 10 warmest\r\nyears on record have occurred since 2010.\r\nHistory\r\nThe risk that carbon dioxide emissions would\r\nwarm the planet has been known for more than\r\n50 years. Since the 1970s, key scientific papers\r\nhave projected a doubling of carbon dioxide\r\nconcentrations in the atmosphere associated\r\nwith fossil fuel burning and deforestation and\r\noutlined potential impacts on temperatures, crop\r\nyields and sea levels.ii The Keeling curve, which\r\nshowed the steady rise of CO2 in the atmosphere\r\nat Mauna Loa, became an early emblem of a\r\npending climate crisis.\r\nInternational cooperation on climate research and\r\napplications can be traced to the establishment\r\nof the International Meteorological Organization\r\n(IMO) in 1873 to share weather data and forecasts.\r\nThe formation of this organization acknowledged\r\nthat weather transcends national boundaries and\r\nglobal observations are important. IMO later\r\nbecame World Meteorological Organization\r\n(WMO). In 1972, the pivotal UN Stockholm\r\nEnvironment conference included discussion of\r\nrising CO2 levels and the risks of global warming.\r\nGlobal Governance: Goals and Lessons for AI • Institutional Analogies for Governing AI Globally 71\r\nIn 1979, the first World Climate Conference\r\nincluded presentations on climate change, the\r\natmosphere as a common concern of humanity,\r\nthe need for international agreements on weather\r\nmodification, and the increase in carbon dioxide\r\nassociated with fossil fuel use. The conference\r\nrecommendations include the need for\r\ninternational assessments of future global climate.iii\r\nIn the opening keynote for the 1979 World\r\nClimate Conference,iv Bob White (of the US\r\nNational Academy of Sciences Climate Research\r\nBoard) made a farsighted comment:\r\nYou may ask, ‘Why should the climate\r\ncommunity extend its concern so\r\nfar beyond scientific and technical\r\nmatters into the realm of economics\r\nand social structure?’ The answer\r\nis clear: Our task is to identify not\r\njust what it is that science should\r\ndo, but what it is that governments\r\nshould know. Unless there is a better\r\ncomprehension of the chain of events\r\nand the complex interactions that\r\ntake place, governmental decisions\r\nto mitigate the economic, social, and\r\nother effects of climatic impacts may\r\nvery well provide the wrong remedies.\r\nThe Conference Declaration was clear about the\r\nrisks of climate change driven by human activities:\r\nNevertheless, we can say with some\r\nconfidence that the burning of fossil\r\nfuels, deforestation, and changes of\r\nland use have increased the amount\r\nof carbon dioxide in the atmosphere\r\nby about 15% during the last century\r\nand it is at present increasing by\r\nabout 4% per year. It is likely that an\r\nincrease will continue in the future.\r\nCarbon dioxide plays a fundamental\r\nrole in determining the temperature\r\nof the Earth’s atmosphere, and it\r\nappears plausible that an increased\r\namount of carbon dioxide in the\r\natmosphere can contribute to\r\na gradual warming of the lower\r\natmosphere, especially at high\r\nlatitudes. Patterns of change would\r\nbe likely to affect the distribution\r\nof temperature, rainfall and other\r\nmeteorological parameters, but the\r\ndetails of the changes are still poorly\r\nunderstood.\r\nThe origins of the IPCC can be traced to a series\r\nof events and meetings in the 1980s. A UN\r\nworkshop in Villach, Austria, in 1985 convened\r\nexperts from 29 countries to assess the impacts\r\nof rising CO2.v The Villach conference statement\r\ncalled for periodic assessments of the state\r\nof scientific understanding and its practical\r\nimplications, and proposed a global convention,\r\nperhaps inspired by the successful negotiation of\r\nthe 1985 Vienna Convention on the ozone layer.\r\nGlobal Governance: Goals and Lessons for AI • Institutional Analogies for Governing AI Globally 72\r\nIn 1987, the influential Brundtland Commission\r\nhighlighted the Villach report, and the World\r\nMeteorological Congress recommended that\r\nthere be periodic assessments of climate risks\r\nunder the overall guidance of governments. The\r\nWMO had also initiated a series of assessments\r\nfocused on the risks of atmospheric ozone\r\ndepletion with reports in 1985, 1988, and 1989.iv\r\nBob Watson of NASA chaired these assessments,\r\nwhich included many scientists who later\r\nbecame IPCC authors. The ozone assessments\r\nunderpinned the 1987 United Nations Montreal\r\nProtocol on Substances that Deplete the Ozone\r\nLayer, presaging the 1992 UNFCCC.\r\nIn 1988, 300-plus scientists and policy\r\nmakers gathered in Toronto, Canada, for the\r\n“Conference on the Changing Atmosphere”.\r\nThose gathered called for the establishment of\r\nan Intergovernmental Panel on Climate Change\r\nand a comprehensive global convention to\r\nprotect the atmosphere.vi They argued for a\r\n20% reduction in carbon dioxide levels by 2005.\r\nThe Toronto conference received widespread\r\nmedia attention—in the US, the media was\r\nsimultaneously responding to a heatwave and\r\nNASA scientist Jim Hansen’s recent congressional\r\ntestimony about the serious risks of global\r\nwarming. High-level political attention in Europe\r\nincluded a major speech by British Prime Minister\r\nMargaret Thatcher about global warming as a\r\nmassive experiment on the planet. In November\r\n1988, the UN established the IPCC at a session of\r\nthe WMO, and it was then endorsed at the UN\r\nGeneral Assembly.\r\nThe creation of IPCC was based on international\r\nnetwork of scientists who saw the systemic\r\nrisk and need for response to anthropogenic\r\nclimate change. It became an intergovernmental\r\norganization partly because countries such as the\r\nUS wanted some control over the assessments.\r\nWhile it was initially challenging to develop IPCC’s\r\nprinciples, the organization’s core principles have\r\nendured. These include that assessments should:\r\n• Be based on scientific expertise and a\r\nbalanced and comprehensive analysis of the\r\nstate of knowledge;\r\n• Be based, to the extent possible, on peerreviewed\r\nscientific literature;\r\n• Go through review by other scientists and by\r\ngovernments;\r\n• Seek consensus; and\r\n• Be policy relevant but not policy prescriptive.vii\r\nIn some cases, governments and climate skeptics\r\nsaw any discussion of solutions as political and\r\npolicy prescriptive, and challenged the scientists\r\nin plenary sessions and the media. A principle of\r\nconsensus has endured, with scientists working\r\nincredibly hard to agree upon their conclusions\r\nand attaching careful statements about confidence,\r\nuncertainty, likelihood and lines of evidence. This\r\neffort at consensus is also evident in the conduct\r\nof government delegates at the Summary for\r\nPolicy Makers report approval sessions.\r\nEvolution\r\nThe first (1990) and second (1995) assessment\r\nreports of the IPCC, as well as specific technical\r\nreports on assessing emissions, regional impacts,\r\nsea level rise, and potential climate scenarios,\r\nwere particularly important in identifying trends\r\nin different greenhouse gases—notably adding\r\nmethane to the well-known trends in carbon\r\ndioxide—as well as the human activities that\r\nproduced them, especially fossil fuel use and\r\nland use change.viii IPCC also synthesized what\r\nwas known about climate trends and developed\r\nglossaries that defined key terms such as\r\nmitigation and adaptation. IPCC assessed the\r\nresults of complex models that analyze the\r\nfuture impacts of greenhouse gases on global\r\ntemperature and the earth system as well as a\r\nset of socioeconomic scenarios to project the\r\nemissions associated with different demographic,\r\ntechnological, and policy futures.\r\nGlobal Governance: Goals and Lessons for AI • Institutional Analogies for Governing AI Globally 73\r\nJust as the suite of material released by IPCC\r\nhas evolved over time, IPCC connections to the\r\nUNFCCC have also evolved. At COP3 in 1997, the\r\nUNFCCC, recognizing the risks identified by IPCC,\r\nadopted the Kyoto Protocol, in which developed\r\ncountries made binding commitments to reduce\r\nemissions and carbon trading mechanisms were\r\nestablished to increase flexibility. The IPCC Task\r\nForce on National Greenhouse Gas Inventories\r\nsupported national reporting to the convention\r\non emissions.\r\nThe third assessment report, released in 2001, was\r\nnotable for its focus on vulnerability to climate\r\nchange—particularly the disproportionate impacts\r\non polar regions, small islands, and Africa—as well\r\nas the importance of adaptation. The literature\r\ncited and the conclusions of the assessment\r\nhave underpinned the negotiation positions of\r\nUNFCCC, national groups of common interests\r\nsuch as the Association of Small Island States\r\n(AOSIS), and the Climate Vulnerable Forum.\r\nThe fourth assessment (2007) laid the ground for\r\nthe target of limiting warming to 2°C. Debates at\r\nCOP 15 in Copenhagen and the fifth assessment\r\nreport (2013/14) underpinned the pivotal COP 21\r\nParis Agreement in 2015. UNFCCC has also made\r\nseveral requests for special reports from the IPCC,\r\nincluding one on regional vulnerability (1997),\r\ntechnology transfer (2000), and land use and\r\nforestry (2000).\r\nOne of the contentious issues for the UNFCCC\r\nclimate negotiations has been identifying what\r\nlevel of global temperature rise constitutes\r\n“dangerous” interference with the climate system.\r\nThe EU had identified 2°C as a potential target,\r\nwhile vulnerable countries have called for lower\r\ntargets. This debate emerged in the tense\r\nnegotiations at COP 15 over the Paris Agreement,\r\nwhich included the goal of keeping global\r\ntemperature rise below 2°C and eventually 1.5°C.\r\nThe agreement included a request to the IPCC to\r\nassess these goals.\r\nThe impact of the 1.5°C report, published in 2018,\r\nwent far beyond interested scientists and the\r\nclimate negotiations.ix Climate activists, including\r\nyouth movements, took to the streets to put\r\npressure on policy makers. Major corporations\r\nmade pledges to halve emissions by 2030 and\r\nreach net zero by 2050. Congress in the US\r\nand other governments around the world held\r\nhearings and made emission reduction pledges.\r\nThe EU revised policies to reduce emissions by\r\n55% below 1990 levels by 2030 and to be climate\r\nneutral by 2050. After US President Joe Biden was\r\nelected in 2020, his administration aligned with\r\nthe 1.5°C target by aiming to reduce emissions\r\n52% below 2005 levels by 2030.\r\nAchievements and challenges\r\nIPCC’s significant scientific achievements include\r\nreducing uncertainty in understanding how\r\ngreenhouse gas emissions drive climate change\r\nand compiling the evidence that changes are\r\noccurring and can be attributed to human\r\nactivities. In each successive report, the scientific\r\nevidence for observed climate change and\r\nfuture projections has become more robust and\r\nconfident. While the 1990 report highlighted many\r\nuncertainties about ongoing and future climate\r\nchange, the most recent report concludes, with\r\nhigh confidence, that “human activities, principally\r\nthrough emissions of greenhouse gases, have\r\nunequivocally caused global warming”.\r\nDespite the large research literature now available\r\nfor assessment by IPCC, there are significant gaps\r\nin the science and literature required to cover\r\nthe full scope of issues and make confidence\r\nstatements and to serve the needs of international\r\nand local climate governance.\r\nDetailed analyses of national responsibility for\r\nemissions are critical to negotiations about who\r\nshould reduce emissions and who should pay\r\nfor reductions by others or compensate them\r\nfor impacts. Because greenhouse gases remain\r\nin the atmosphere for long periods, historical\r\nGlobal Governance: Goals and Lessons for AI • Institutional Analogies for Governing AI Globally 74\r\nresponsibilities are an important element of\r\nnegotiations. IPCC avoids pointing fingers at\r\nspecific nations or regions because it will cause\r\nproblems with member countries. IPCC also avoids\r\nassessing the responsibility of specific companies—\r\nsuch as fossil fuel majors—and aggregates\r\nemissions by sector. The UNFCCC convention\r\nincluded recognition of differing responsibilities\r\nand capacities of developed and developing\r\ncountries in “common but differentiated”\r\nresponsibilities and respective capabilities. The\r\nrecent focus on the relationships between climate\r\nand sustainable development is a step towards\r\nacknowledging vastly different levels\r\nof development.\r\nSome impacts of climate change are understudied\r\nin the literature and cannot be confidently assessed\r\nby IPCC. These include, for example, impacts on\r\ncertain regions, on the manufacturing and service\r\nsectors, on workers, on culture, and on supply\r\nchains and trade. Earlier gaps in research on\r\nclimate impacts on health, cities, and food systems\r\nare now better addressed. Another related change\r\nis that scientific literature on climate change\r\nimpacts is often reliant on case studies rather than\r\ncomparative or aggregated assessments, in part\r\nbecause local governments such as cities often\r\nbenefit most from case studies focused on their\r\nparticular risks and solutions.\r\nAssessment of the economic costs and impacts of\r\nclimate change are limited, with an overreliance\r\non integrated and aggregated models as well\r\nas controversial assumptions about discounting\r\nand non-market values. During the approval\r\nplenary for the 1.5°C report, several developing\r\ncountries expressed disappointment at the lack\r\nof quantitative or economic data on costs and\r\nclimate impacts on their regions and economies.\r\nAdditionally, a number of fossil fuel-producing\r\ncountries wanted information on how energy\r\ntransitions could damage their economies.\r\nThese issues are of heightened importance given\r\nthe new UNFCCC negotiating track on “loss and\r\ndamage”.\r\nGeoengineering solutions, which focus on removing\r\ngreenhouse gases from the atmosphere or reducing\r\nsolar radiation inputs, have not been fully analyzed\r\nby IPCC or addressed by global governance.\r\nOn the other hand, greenhouse gas removal\r\nthrough land use, especially through protecting\r\nand restoring forests, has been a strong focus\r\nof IPCC from the beginning. In recent years,\r\nassessment has broadened to look at the role of\r\nother land uses, including coastal ecosystems, for\r\ncapturing carbon. Technological solutions that\r\ninvolve capturing CO2 at power stations or from\r\nthe air have been assessed by IPCC as not yet\r\neconomically feasible or scalable. Solar radiation\r\nmanagement, which would compensate for\r\nwarming by reflecting incoming solar radiation\r\nthrough putting sulfur or other particles into the\r\natmosphere, is now briefly mentioned in IPCC\r\nreports. However, solar radiation management is\r\nnot mentioned as a mitigation option, and reports\r\ninclude cautions about the risks of unanticipated\r\nand unequal consequences of implementing or\r\nhalting the technologies.\r\nGovernance\r\nIPCC operates within the United Nations system\r\nunder the auspices of the WMO and the United\r\nNations Environment Programme (UNEP). As\r\nan intergovernmental institution, it is managed\r\nby the IPCC Plenary, which discusses plans and\r\nbudgets, approves the Summaries for Policy\r\nMakers of major reports, and elects the scientific\r\nmembers of the IPCC Bureau. The Plenary meets\r\nat least once per year in different host countries\r\nand is attended by government delegates, some\r\nscientists, and some observer organizations.\r\nGovernment delegates vote on key issues,\r\nwith one vote per country and a tradition of\r\nconsensus approval of reports. Most countries\r\nare represented by government-designated\r\n“focal points” who may be from foreign affairs or\r\nenvironment departments, or from meteorological\r\nGlobal Governance: Goals and Lessons for AI • Institutional Analogies for Governing AI Globally 75\r\nresearch institutes. Representatives may or\r\nmay not be well informed about climate issues,\r\nespecially if they are from weather bureaus, but\r\nare increasingly trained in diplomatic negotiations\r\nand are often laser focused on the wording of\r\nthe IPCC summaries. IPCC does not have a legal\r\npersonality or engage in treaty making.\r\nOrganizations with observer status can send\r\nsomeone to plenaries to attend but not speak\r\n(in principle, although they will often interact\r\nwith scientists and delegates outside the\r\nroom). These include other international and\r\nregional organizations (such as the World Bank,\r\nInterAmerican Institute for Global Change\r\nResearch, and African Union Commission) and\r\nNGOs such as the C40 Cities climate leadership\r\ngroup, Greenpeace, Oxfam, and the Stockholm\r\nEnvironment Institute.\r\nInitial funding for IPCC was set up in 1989 through\r\nthe IPCC Trust fund, with contributions from\r\nWMO, UNEP and member countries. The trust\r\nfund has accumulated a balance of about $20\r\nmillion, with the largest country contributions\r\nsince 1989 coming from France, Germany, Japan,\r\nthe UK, and the US. In 2022, contributions totaled\r\nabout $2.5 million. The annual budget for IPCC\r\nis about $8.5 million, with about $4 million for\r\nexpenses incurred by the Secretariat, including\r\npublications, IT, and communications; $2.8 million\r\nfor the plenary and governing meetings; and $1\r\nmillion for author meetings.\r\nIPCC is staffed by a small, 14-person secretariat\r\nbased at WMO in Geneva and includes legal,\r\nlogistical, and communications staff. Scientific work,\r\nincluding major assessment reports, is managed\r\nby a Bureau of 34 scientists elected by the member\r\ncountries and includes the chair and vice-chairs of\r\nthe overall IPCC and its Working Groups.\r\nThere are three major scientific Working Groups:\r\n• Working Group I: Assesses the physical\r\nscience of the climate system and climate\r\nchange. This includes the understanding of\r\nclimate processes, observations of climate\r\nchange, and climate modeling.\r\n• Working Group II: Focuses on impacts,\r\nadaptation, and vulnerability. This group\r\nassesses the impacts of climate change on\r\nnatural and human systems, the capacity of\r\nsocieties to adapt, and options for reducing\r\nvulnerability.\r\n• Working Group III: Addresses the mitigation\r\nof climate change. This includes options\r\nfor reducing greenhouse gas emissions,\r\neconomic and technological issues, and\r\nactivities that remove greenhouse gases\r\nfrom the atmosphere.\r\nEach Working Group is coordinated by a small\r\nTechnical Support Unit that organizes logistics,\r\nreviews, and other everyday needs of the\r\nworking groups. Some of the co-chairs from\r\nthe Global South receive support for technical\r\nassistance in their work. The IPCC also has a Task\r\nForce on National Greenhouse Gas Inventories\r\n(TFI), which develops and assesses methods for\r\ninventorying emissions.\r\nThe UNFCCC provides the main conduit for\r\nIPCC to influence global governance, with the\r\ntechnical arm of the UNFCCC serving as the\r\nprimary formal link between IPCC and UNFCCC.\r\nThe annual UNFCCC Conference of Parties\r\n(COP), hosted by a different country each year,\r\nis the main venue for discussion, assessment of\r\nprogress, and negotiation, with a more technical\r\nmeeting held at the UNFCCC secretariat in Bonn\r\neach summer.\r\nGlobal Governance: Goals and Lessons for AI • Institutional Analogies for Governing AI Globally 76\r\nIPCC is also asked to present at every COP. The\r\nUNFCCC regularly expresses appreciation for\r\nIPCC’s reports, invites their presentation at COPs,\r\nand provides some funding to IPCC.\r\nIPCC operations and governance\r\nIPCC Plenary IPCC Secretariat\r\nIPCC Bureau\r\nExecutive Committee\r\nWorking\r\nGroup I\r\nThe Physical\r\nScience Basis\r\nTechnical\r\nSupport Unit\r\nWorking\r\nGroup II\r\nImpacts,\r\nAdaptation,and\r\nVulnerability\r\nTechnical\r\nSupport Unit\r\nWorking\r\nGroup III\r\nMitigation of\r\nClimate Change\r\nTechnical\r\nSupport Unit\r\nTask Force\r\non National\r\nGreenhouse Gas\r\nInventories\r\nTechnical\r\nSupport Unit\r\nAuthors, Contributors, Reviewers\r\nSource: IPCC\r\nGlobal Governance: Goals and Lessons for AI • Institutional Analogies for Governing AI Globally 77\r\nThe IPCC report process\r\nSince the first comprehensive assessment in\r\n1990, the working groups write separate reports\r\n— though special reports, such as 1.5°C, have\r\nbeen developed jointly. The final synthesis report\r\nis also written by scientists selected from all three\r\nworking groups. The preparation and release\r\nof the major assessment reports is staged with\r\nthe release of the Climate Science (WG1) report\r\nseveral months before the Impacts (WG2) and\r\nthen the Mitigation (WG3) reports. The Working\r\nGroups try to connect and coordinate their\r\nmessages, but this is not always successful.\r\nEach report starts with a scoping meeting of\r\nexperts nominated by member governments\r\nand the IPCC Bureau; experts prepare an outline,\r\nwhich becomes the approved outline for the\r\nreport. This is followed by a call to nominate\r\nauthors, mostly through national governments\r\nbut also through the Bureau. Authors are\r\nselected based on their expertise related to the\r\napproved outline, but increasingly to ensure a\r\nbalance of gender, geography, and disciplines\r\nand to ensure there are some contributors with\r\nprior experience as IPCC authors, especially\r\nin selecting who will lead the chapters as\r\nCoordinating Lead Authors. Each chapter of a\r\nreport has Coordinating Lead Authors (2-3) and\r\n10-20 Lead authors, with some scientists invited\r\nto write small sections as Contributing Authors.\r\nEach chapter has a chapter scientist, most often\r\na younger scholar and two review editors who\r\nhave prior experience with IPCC and will ensure\r\nthat review comments are addressed.\r\nThe evolution of the IPCC has seen six rounds\r\nof assessment reports so far, beginning with the\r\n1990 First Assessment and with the latest Sixth\r\nAssessment—which ran from 2015 to 2023.\r\nEach round includes a set of comprehensive\r\nassessment reports produced by the working\r\ngroups as well as special reports on specific topics.\r\nSpecial reports for the Sixth Assessment included\r\nreports on Climate Change and Land, the Ocean\r\nand Cryosphere, and Global Warming of 1.5°C.\r\nAuthors for each Working Group then convene\r\nseveral times to draft the report with oversight\r\nfrom the Working Group chairs and vice chairs\r\nwho are members of the IPCC Bureau. Each\r\nWorking Group has a Technical Support Unit with\r\na small number of paid staff to coordinate the\r\nreport preparation. Pre-COVID, meetings lasted\r\nabout one week, with about four meetings for\r\neach report. At the initial meetings, the chapter\r\ngroup decides how to implement their chapter\r\noutline and starts to compile the relevant peer\r\nreviewed literature, asking authors to start\r\nwriting in their areas of expertise. In some cases,\r\nthe Coordinating Authors dominate the action;\r\nother chapters work more collectively. A first\r\ndraft usually emerges about half way through\r\nthe process and is made available for expert\r\nreview by fellow scientists. Almost anyone can\r\napply to be a reviewer and will be given access\r\nto the draft, although they are expected to have\r\nsome expertise and not to leak the report. Some\r\nchapters receive thousands of review comments.\r\nChapters are then rewritten in response to\r\nreview, and a second order draft is prepared\r\nand opened for government review. A final\r\ndraft is then prepared, responding to reviews,\r\nupdating literature, and polishing conclusions,\r\nand submitted to the Bureau. Around this time,\r\na subset of authors, led by the Working Group\r\nCo-chairs, starts to prepare the Summary for\r\nPolicy Makers (SPM). This is the most important\r\npart of every IPCC report and is what receives\r\nmost media, political and public attention. It\r\nsummarizes the conclusions of the report and\r\nmust be approved, sentence by sentence, by\r\nmember governments at an IPCC plenary.\r\nApproval of the SPM is eagerly awaited by the\r\nmedia and science community, and IPCC now\r\nemploys a sophisticated communications team\r\nto manage press inquiries and train scientists to\r\ntalk about the reports.\r\nGlobal Governance: Goals and Lessons for AI • Institutional Analogies for Governing AI Globally 78\r\nChallenges in governance for IPCC\r\nThere have been important governance challenges\r\nfor the IPCC over the years. These include:\r\n1. Political interference with IPCC processes:\r\nPolitics has inevitably entered into the\r\noperations of the IPCC. The process of\r\nelecting the bureau has become very\r\npolitical, with countries vying to have their\r\nscientists elected and doing side deals to\r\ngain support for their candidates. Requests\r\nfor input from the UNFCCC often reflect\r\ntensions between countries, such as\r\ndecisions about temperature targets, who is\r\nmost vulnerable, and the technologies and\r\nfunding of responses. Government reviews\r\nof report drafts and approval sessions of\r\nthe Summary for Policy Makers also tend to\r\nreflect international politics. Some countries\r\nprepare extensive comments from several\r\ngovernment agencies that are mostly\r\nconstructive but are clearly trying to limit\r\nconclusions. Some governments object to\r\nany discussion of equity and justice, claiming\r\nit to be normative rather than objectively\r\nscientific. In some cases, conclusions\r\nare toned down through scientists’ selfcensorship\r\nor government changes to the\r\nSummary for Policy Makers.\r\n2. Efforts to delegitimize IPCC reports: Every\r\nIPCC report receives criticism from climate\r\nskeptics. Perhaps the most notable example\r\nof this was “Climategate” in 2009, where the\r\nhacking of servers at the University of East\r\nAnglia resulted in the release of hundreds\r\nof emails between IPCC authors as they\r\nprepared reports. Critics interpreted several\r\nemails to suggest scientists were biased in\r\ntheir assessments of temperature trends\r\nand impacts. Despite robust responses,\r\nthe scandal partly derailed COP 15 in\r\nCopenhagen.x Such attacks have made\r\nIPCC careful to avoid leaks of report drafts,\r\nthough they still occur, and to check every\r\nstatement and line of evidence.\r\nIPCC governance has also developed a\r\nprotocol to investigate and address alleged\r\nerrors in reports as well as potential conflicts\r\nof interest.\r\n3. Author burn-out: IPCC authors are not\r\npaid by IPCC. While a few may be released\r\nfrom their regular job duties (usually those\r\nworking for government research groups),\r\nthe majority are volunteers working in their\r\nspare time. Assessment reports take up to\r\nfour years to prepare, with special reports\r\noperating on shorter timelines. Additionally,\r\nthe amount of scientific literature that needs\r\nto be reviewed has grown exponentially\r\nsince the first report. Authors of the latest\r\nWorking Group II report on impacts cited\r\nan overwhelming 34,000 articles and\r\nresponded to 62,000 review comments.\r\nThe IPCC plenaries, where the Summary for\r\nPolicy Makers from reports are approved,\r\ninvolve negotiators and scientists working\r\naround the clock for days. IPCC authors\r\nalso experience great frustration and even\r\nclimate grief when they see the results of\r\ntheir reports attacked, ignored, or producing\r\ninadequate policy responses.\r\n4. Bias in author and bureau selection:\r\nAuthor selection has been criticized for\r\noverlooking women, people from the Global\r\nSouth, representatives from the private\r\nsector, and NGO and social science experts,\r\nas well as indigenous and younger scholars.\r\nFor example, early reports included very\r\nfew female authors, even when there were\r\nsenior women available. Even now, women\r\ncomprise only one-third of the authors. Early\r\nreports also lacked representation from\r\ndeveloping countries. Even as representation\r\nis broadened to include more female or\r\ndeveloping world scholars, surveys find that\r\nindividuals from these communities often do\r\nnot feel as though their voices are heard. An\r\nofficial IPCC gender task force is attempting\r\nto improve the situation.\r\nGlobal Governance: Goals and Lessons for AI • Institutional Analogies for Governing AI Globally 79\r\n5. Weak policy responses: Despite progress\r\nin areas such as electric vehicles (EVs) and\r\nrenewables, temperatures on land are now\r\n1.74°C warmer than they were in 1850. IPCC\r\npreviously estimated that emissions needed\r\nto drop about 5% per year from 2018 to 2030\r\nto keep global temperature increase under\r\n1.5°C. IPCC also estimated that our remaining\r\ncarbon budget was about 500 gigatons (GT).\r\nNow, given the delay in action, we need\r\nto achieve a steep drop in emissions every\r\nyear to 2030, and our remaining carbon\r\nbudget is only 250 GT remaining. Net-zero\r\ngoals and carbon neutrality promises are\r\nunrealistic if they rely on technologies that\r\nare not yet economically feasible or scalable.\r\nRecent research found that most countries,\r\ncorporations, and cities that have made net\r\nzero pledges are making assumptions about\r\nnegative emissions or carbon offsets without\r\nviable strategies.xi This lack of progress has\r\nbeen linked to global geopolitics such as\r\nthe Ukraine war, the voluntary nature of the\r\nParis agreement, continued subsidies for\r\nfossil fuels, climate obstruction by fossil fuel\r\ncompanies, the rebound of aviation and\r\nconsumption post COVID, countries with high\r\nhistorical emissions refusing to make deeper\r\ncuts, and extreme weather/AC demand.\r\n6. Communication challenges: In its efforts\r\nto provide comprehensive assessments of\r\nclimate change, the IPCC has produced\r\never longer assessment reports. Whereas\r\nthe first set of reports was about 1,000\r\npages, the latest set reached over 10,000\r\npages. Efforts to better communicate results\r\nof the assessments include a carefully\r\ncurated web site, shorter Summaries\r\nfor Policy Makers (35-50 pages for each\r\nWorking Group), translations into various\r\nUN languages (Arabic, Chinese, French,\r\nRussian, and Spanish), a set of brief headline\r\nstatements, technical summaries, fact\r\nsheets, downloadable figures and slide\r\npresentations, and social media campaigns.\r\nNevertheless, there are many calls for\r\nmuch shorter reports, more frequent brief\r\nupdates on the state of science, combining\r\nof assessments into a single working group,\r\ngreater focus on key policy questions, and\r\nimproved graphics.\r\nIPCC itself, and the scholarly community, have\r\nproposed reforms of IPCC processes over the\r\nyears.x These include reducing political influence,\r\nincreasing diversity of voices, writing much shorter\r\nreports, merging or reorganizing the working\r\ngroups, softening the emphasis on consensus,\r\nreducing reliance on computer models, including\r\nmore social sciences and humanities, adding more\r\nstakeholder authors, and giving more attention to\r\nlocal knowledge and solutions.\r\nHow IPCC has influenced other\r\ngovernance processes\r\nGlobal environmental governance has seen\r\nextensive cooperation on a set of important\r\ntreaties and conventions, including the UN\r\nconventions on long-range air pollution (1979),\r\nthe ozone layer (1985), and desertification\r\n(1994). Most of these assessments have been\r\nunderpinned by scientific assessments with similar\r\nprocesses to IPCC.\r\nPerhaps the most important siblings to the IPCC\r\nand UNFCCC are the UN Convention on Biological\r\nDiversity and the Intergovernmental Platform on\r\nBiodiversity and Ecosystem Services (IPBES), both\r\nof which aim to protect biodiversity.xi IPBES was\r\nmodeled on IPCC. While supported by UNEP, IPBES\r\nis not an official intergovernmental body of the UN\r\nand has a stronger focus on local knowledge than\r\nIPCC. In 2021, IPCC and IPBES issued a joint report\r\non biodiversity and climate change.\r\nThere have been calls for IPCC-like scientific\r\nassessments for health, AI, and geoengineering,\r\nbut these sometimes idealize or misunderstand\r\nthe IPCC, the challenges it has faced, and its\r\nimportant relationship with the UNFCCC.\r\nGlobal Governance: Goals and Lessons for AI • Institutional Analogies for Governing AI Globally 80\r\nConclusion\r\nThe IPCC represents an important model for the\r\nglobal governance of systemic risks that also\r\nseeks to inform policy. IPCC emerged due to\r\ngrowing concern about climate change promoted\r\nby an epistemic community of scientists who\r\nvolunteered to assess peer-reviewed literature.\r\nIPCC reports have influenced international\r\nnegotiations as well as actions and awareness\r\nof national and local governments, nongovernmental\r\norganizations, businesses, and\r\nthe general public, and have shaped scientific\r\nresearch agendas. But IPCC is not without its\r\ncritics, limitations, and gaps in knowledge. New\r\nscience-based assessment proposals should pay\r\nclose attention to these challenges.\r\nProposals for an IPCC-type assessment\r\nprocess for AI should take into account key\r\naspects of IPCC governance, including the way\r\nintergovernmental status both benefits and\r\npoliticizes IPCC assessments and policy impact,\r\nthe vital connection between IPCC and the UN\r\nclimate convention (UNFCCC), and the challenges\r\nof including all countries and stakeholders in\r\nthe assessments.xii Does the most important\r\nknowledge on the risks and possibilities of AI exist\r\nin an open, peer-reviewed literature? How can\r\nprivate and defense sector insights become part\r\nof such assessments without conflict of interest,\r\ncompetitive issues, or security risks? Should a\r\nscientific assessment of AI be closely linked to UN\r\nor other multilateral agreements on AI safety?\r\nCould such an assessment rely on consensus\r\nbetween authors and unanimous government\r\nagreement to approve reports?\r\nAbove all, a key lesson from the experience\r\nof IPCC is that, despite decades of warnings\r\nabout climate change, action has been delayed\r\nand limited, and the risks are still existential\r\nand immediate. It is vitally important that an\r\nintergovernmental organization related to AI\r\nnot only deepens knowledge but also hastens\r\nsolutions, rather than underestimate risks and\r\ndistract from action.\r\nGlobal Governance: Goals and Lessons for AI • Institutional Analogies for Governing AI Globally 81\r\ni. “Organization History,” Intergovernmental Panel on Climate Change, https://archive.ipcc.ch/organization/organization_history.shtml.\r\nii. For example: Rasool, S. I, and S.H. Schneider, “Atmospheric carbon dioxide and aerosols: Effects of large increases on global climate,” Science\r\n173, no. 3992 (1971): 138-141; Schneider, SH, “On the carbon dioxide–climate confusion” Journal of Atmospheric Sciences 32, no. 11 (1975):\r\n2060-2066; Hansen, J, et al, “Climate impact of increasing atmospheric carbon dioxide,” Science 213, no. 4511 (1981): 957-966. Even earlier\r\nwork by Eunice Foote (1856), John Tyndall (1860s) and Svante Arrhenius (1896) identified the potential of the greenhouse effect.\r\niii. As a Masters student at the University of Toronto Diana Liverman helped Canadian climatologist Ken Hare, one of the organizers, prepare for\r\nthe conference and then studied for her PhD under climate scientist Steve Schneider who was instrumental in creation of IPCC.\r\niv. Proceedings of the World Climate Conference: A Conference of Experts on Climate and Mankind. World Meoteorlogical Organization 537 (1979).\r\nv. “International Assessment of the Role of Carbon Dioxide and of Other Greenhouse Gases in Climate Variations and Associated Impacts,”\r\nWorld Meteorological Organisation,” (Villach, Austria, 1985) 537.\r\nvi. “Scientific Assesment Panel,” UN Environment Program, https://ozone.unep.org/science/assessment/sap.\r\nvii. The Changing Atmosphere: Implications for Global Security. Conference Proceedings. World Meteorlogical Organization 710 (1988).\r\nviii. “IPCC Procedures,” Intergovernmental Panel on Climate Change, https://www.ipcc.ch/documentation/procedures/.\r\nix. All IPCC reports are available online at: https://www.ipcc.ch/reports/.\r\nx. Boykoff, Maxwell, and Olivia Pearman, “Now or Never: How Media Coverage of the IPCC Special Report on 1.5 C Shaped Climate-Action\r\nDeadlines,” One Earth 1, no. 3 (2018): 285–88. Doran, Rouven, Charles A. Ogunbode, Gisela Böhm, and Thea Gregersen, “Exposure to and\r\nLearning from the IPCC Special Report on 1.5 C Global Warming, and Public Support for Climate Protests and Mitigation Policies,” Npj Climate\r\nAction 2 (2023). Livingston, Jasmine E., and Markku Rummukainen, “Taking Science by Surprise: The Knowledge Politics of the IPCC Special\r\nReport on 1.5 Degrees,” Environmental Science & Policy 112 (2020): 10–16. Ogunbode, Charles A., Rouven Doran, and Gisela Böhm, “Exposure\r\nto the IPCC Special Report on 1.5 °C Global Warming Is Linked to Perceived Threat and Increased Concern about Climate Change,” Climatic\r\nChange 158, no.3–4 (2020): 361–75.\r\nxi. Anderegg, William RL, and Gregory R. Goldsmith, “Public Interest in Climate Change over the Past Decade and the Effects of the\r\n‘Climategate’ Media Event,” Environmental Research Letters 9, no. 5 (2014): 054005. Maibach, Edward, Anthony Leiserowitz, Sara Cobb,\r\nMichael Shank, Kim M. Cobb, and Jay Gulledge, “The Legacy of Climategate: Undermining or Revitalizing Climate Science and Policy?” WIREs\r\nClimate Change 3, no. 3 (2012): 289–95. Shapiro, Harold T, Roseanne Diab, Carlos Henrique de Brito Cruz, Maureen Cropper, Jingyun Fang,\r\nLouise O Fresco, Syukuro Manabe, et al., “Climate Change Assessments Review of the Processes and Procedures of the IPCC: Committee to\r\nReview the Intergovernmental Panel on InterAcademy Council,” InterAcademy Council (2010): 103. Emails from our colleagues were among\r\nthose whose emails were released and we spent days reading every email released, working with university attorneys, and explaining that\r\ncomments were innocuous and consistent with peer review.\r\nxii. Allen, Myles R., Pierre Friedlingstein, Cécile A.J. Girardin, Stuart Jenkins, Yadvinder Malhi, Eli Mitchell-Larson, Glen P. Peters, and Lavanya\r\nRajamani, “Net Zero: Science, Origins, and Implications,” Annual Review of Environment and Resources 47, no. 1 (2022): 849–87. Hale, Thomas,\r\nStephen M. Smith, Richard Black, Kate Cullen, Byron Fay, John Lang, and Saba Mahmood, “Assessing the Rapidly-Emerging Landscape of Net\r\nZero Targets.” Climate Policy 22, no. 1 (2022): 18–29.\r\nGlobal Governance: Goals and Lessons for AI • Institutional Analogies for Governing AI Globally 82\r\n3.5\r\nThe Bank for\r\nInternational\r\nSettlements (BIS),\r\nBasel, the Financial\r\nStability Board\r\n(FSB), and the\r\nFinancial Action\r\nTask Force (FATF)\r\nAuthored by Christina\r\nParajon Skinner\r\nIntroduction\r\nAfter World War II, many Western states\r\nexpressed commitment to global economic\r\ncooperation as a means of ensuring lasting\r\npeace.i By the 1980s, the arena of international\r\nfinance came to be increasingly governed by\r\nsoft-law institutions, which consist of networks\r\nof financial regulators. This chapter explains the\r\narchitecture of that system with an aim to provide\r\nlessons, inspirations, and cautionary tales for\r\na possible global framework to govern—more\r\nspecifically, set safety standards around—the risks\r\npresented by artificial intelligence (AI).\r\nToday, a number of international financial\r\nregulatory bodies set international standards\r\nfor globally active banks and other financial\r\ninstitutions. These bodies identify possible risks\r\nthat these institutions could present to the\r\nglobal economy (or, conversely, risks presented\r\nto these institutions’ safety and soundness), and\r\nshare information across borders. Most of these\r\norganizations arose in response to economic\r\ncrises or gaps in public international law in the\r\nrealm of financial supervision and risk.\r\nThese bodies include, most notably, the Bank for\r\nInternational Settlements (BIS), which formally\r\nhosts both the Basel Committee for Banking\r\nSupervision (Basel or BCBS) and the Financial\r\nStability Board (FSB). These institutions mainly\r\nfocus on risks relevant to banks and systemically\r\nimportant market-based credit intermediators, like\r\nmoney market funds.ii\r\nThe Financial Action Task Force (FATF), meanwhile,\r\nis a global standard-setting body with a focus\r\nthat’s risk-specific rather than industry-specific.\r\nFATF seeks to address the global challenges\r\npresented by money laundering, the financing\r\nof terrorism, and the proliferation of weapons of\r\nmass destruction. Often, the financial system is\r\nat the center of these problems, but increasingly\r\nmoney is laundered for these illicit ends through\r\nGlobal Governance: Goals and Lessons for AI • Institutional Analogies for Governing AI Globally 83\r\ncrypto assets, real estate, art, and other nonfinancial\r\ncompanies.\r\nPurpose\r\nThese organizations exist to serve four main\r\npurposes:\r\n1. To address the risk of regulatory\r\narbitrage. Certain risks are global in nature\r\nand thus cannot be mitigated by any one (or\r\nhandful of) jurisdictions. A porous system\r\nallows for what is known as regulatory\r\narbitrage, whereby risky or unlawful\r\nbehaviors simply shift from more regulated\r\nto relatively laxer geographies. These\r\ninternational institutions thus endeavor\r\nto address the possibility of arbitrage by\r\nworking toward the harmonization of basic\r\nstandards.\r\n2. To minimize informational blind-spots.\r\nHigh quality and complete information is\r\nessential to early or preventative action.\r\nHowever, without cooperation among\r\nnational regulators, blind-spots emerge.\r\nAccordingly, much of these institutions’\r\nwork is geared toward sharing knowledge,\r\ninformation, and best practices across\r\njurisdictions.\r\n3. To advance international comity. In a crisis,\r\ncooperation among regulators is crucial.\r\nEstablishing sound working relationships\r\nbuilds good will and trust across staff\r\nand can smooth crisis-time interventions.\r\nAccordingly, these institutions form and\r\nmaintain networks between national\r\nregulators and supervisors from a wide\r\nrange of jurisdictions.\r\n4. To apply moral suasion. Given the risk of\r\nregulatory arbitrage, lax enforcement or\r\nnon-compliance by a handful of jurisdictions\r\ncan undermine the efforts of others to plug\r\ngaps in legal and supervisory frameworks.\r\nAccordingly, these bodies have developed\r\na system of peer monitoring, review, and\r\nfeedback that is ultimately meant to pressure\r\njurisdictions into cooperating and publicly\r\n“naming-and-shaming” them if they do not.\r\nOver the past century, this system for the global\r\ngovernance of finance has evolved to take on\r\nan increasing number of tasks and promulgate\r\nan intricate set of standards. The lessons of this\r\nexperiment are, however, somewhat mixed. On\r\nthe one hand, the system has worked relatively\r\nwell at coordinating principles and ideas. On the\r\nother hand, it suffers from perennial challenges\r\nto its legitimacy—which will only grow along with\r\nthe scope and mission of this framework— a lack\r\nof transparency, and the tendency to occasionally\r\ndistort outcomes on the national level.\r\nThe most notable legal feature of the BIS, Basel,\r\nFSB, and FATF is that they have no formal legal\r\nstatus. Unlike formal international economic\r\ninstitutions that are constituted and governed\r\nby treaty—like the WTO, the World Bank,\r\nand the International Monetary Fund—these\r\ninternational financial regulatory organizations\r\nexist only pursuant to “soft law”.iii They exist\r\nbecause regulators decided to agree, amongst\r\nthemselves, to form these networks and crossborder\r\nassociation.iv While these regulators have\r\ncome to agreement about chartering provisions\r\nand governance procedures, these organizations’\r\nexistence has not been authorized by relevant\r\nnational legislatures. It bears emphasis that these\r\ncentral bankers and other bank regulators are not,\r\nthemselves, elected or democratically responsive.\r\nThe soft law nature of these networks of\r\nregulators poses interesting and often overlooked\r\nquestions about the force of their prescriptions.\r\nEach of these institutions formally acknowledges\r\ntheir informal, soft law status. The Basel\r\nCommittee, for example, states in its charter that\r\n“Its conclusions do not have, and were never\r\nintended to have, legal force. Rather, it formulates\r\nGlobal Governance: Goals and Lessons for AI • Institutional Analogies for Governing AI Globally 84\r\nbroad supervisory standards and guidelines\r\nand recommends statements of best practice in\r\nthe expectation that individual authorities will\r\ntake steps to implement them through detailed\r\narrangements – statutory or otherwise – which\r\nare best suited to their own national systems.”v\r\nThe FSB’s charter, in similar spirit, acknowledges\r\nin its Article 23: “This Mandate is not intended\r\nto create any legal rights or obligations.”vi Using\r\nidentical language, Article 48 of the FATF Charter\r\nnotes that “This Mandate is not intended to create\r\nany legal rights or obligations.”vii Of course it must\r\nbe stated so—short of a formally ratified treaty,\r\nno institution that exists in international law can\r\nimpose binding obligations on national sovereign\r\nstates.\r\nStill, their respective charters also require\r\nmembers to commit to implementing their\r\nstandards. Basel committee members “agree\r\nto implement fully Basel standards for their\r\ninternationally active banks. These standards\r\nconstitute minimum requirements and BCBS\r\nmembers may decide to go beyond them.”viii FATF\r\nmembers must likewise “endorse and implement\r\nthe FATF Recommendations for combating money\r\nlaundering and the financing of terrorism and\r\nproliferation, using where appropriate guidance\r\nand other policy endorsed by the FATF.”ix The\r\nBasel and FATF charters also require members\r\nto commit to peer review process, which will be\r\ndiscussed in further depth.\r\nHistory\r\nIn some form or another, war motivated\r\ninternational cooperation in the banking space.\r\nThe first of these efforts was the creation of the\r\nBIS, formed in 1930 at the Hague Conference.\r\nIts initial job was the complicated task of\r\nsettling, in as neutral a fashion as possible, the\r\nreparation payments that were imposed on\r\nGermany after World War I. Specifically, the BIS\r\nmanaged the collection and then administration\r\nand distribution of the annuities payable as\r\nreparations. Later, it would facilitate the issuance\r\nof German bonds through the Dawes and Young\r\nprograms.x\r\nThe story of the BIS is one of evolution and\r\nadaption. After the cessation of reparation payments\r\nin the 1930s, the BIS evolved its role to promote\r\ntechnical cooperation between central banks,\r\nincluding on matters involving reserve management,\r\nforeign exchange transactions, gold deposits, and\r\nswap facilities; it also convened and provided a\r\nforum for meetings of central bankers.xi After the\r\nabandonment of the Bretton Woods Agreement—\r\nthe essence of the international gold standard—\r\nthe BIS evolved yet again to focus principally on its\r\ncoordinating role and to a lesser known extent, the\r\nprovision of financial service for central banks.xii\r\nThe BIS is owned by national central banks;\r\n63 different central banks own its shares. By\r\naccepting currency and gold deposits, and\r\ninvesting through proceeds to earn a profit,\r\nits balance sheet resembles a national central\r\nbank.xiii As such, the BIS functions somewhat like\r\nan international central bank, albeit without the\r\nability to set anything like global monetary policy.\r\nIt does, however, indirectly influence national\r\ncentral banking policy by hosting two distinct\r\ninternational regulatory institutions, Basel and the\r\nFSB.\r\nThe Basel Committee was established by the\r\ncentral bank Governors of the G10 countries at\r\nthe end of 1974.xiv Though it was not formed in\r\nthe heat of war, per se, an increasing number of\r\ndisturbances in the international currency and\r\nbanking markets prompted reflection on how\r\nbest to close gaps between the supervision and\r\nregulation among increasingly internationally\r\nactive banks.xv Today, 45 institutions from 28\r\ndifferent jurisdictions are members of the Basel\r\nCommittee. Members are generally central banks\r\nor authorities for prudential (that is, safety and\r\nGlobal Governance: Goals and Lessons for AI • Institutional Analogies for Governing AI Globally 85\r\nsoundness) supervision in their country.xvi\r\nThe FSB was established much later, in the wake\r\nof the global financial crisis of 2008, though\r\nit had its origins in a separate, now defunct,\r\nbody, the Financial Stability Forum (FSF). The\r\nFSF was launched in 1999 in response to the\r\nAsian financial crisis, with a goal of analyzing\r\nrisks that, if materialized, could propagate and\r\nadversely affect wide swaths of regional or global\r\neconomies—known as “systemic risk”.xvii\r\nAfter the 2008 crisis, the G20xviii met in Cannes\r\nand decided to transform the FSF into the\r\nFSB and bolster its mandate and capacity.xix In\r\nparticular, it was agreed that the FSB would gain\r\nan “enduring organisation footing, strengthening\r\nits coordination role vis-à-vis other standardsetting\r\nbodies on policy development and\r\nimplementation monitoring, and reconstitution of\r\nthe FSB’s Steering Committee.”xx Although the G20\r\ntechnically sits atop the FSB, the BIS hosts the FSB\r\nphysically and provides for its Secretariat. Perhaps\r\nfor this reason, the FSB has focused its attention\r\non many central banking prudential matters and\r\ntends to work closely with Basel.\r\nFATF is a reaction to the global war on drugs\r\nand terror. It was initially developed by the\r\nG7 in the 1980s, in response to the drug trade\r\nbeing financed by money laundered through\r\nthe global banking system.xxi For the leaders in\r\nthese nations, it was “clear that there needed to\r\nbe a coordinated response. No country could\r\nfight money laundering on its own.”xxii Money\r\nlaundering is a global problem that is difficult\r\nto mitigate. The UN Office on Drugs and Crime\r\nestimates that the amount of money laundered\r\nglobally each year is about 2-5% of global GDP—\r\naccording to IMF estimates, this is about $1.6-\r\n$4 trillion annually.xxiii The internationally active\r\nbanks that form correspondent banking networks\r\nremain a key battleground of governments’ fight\r\nagainst it. Accordingly, financial policy makers\r\nconvene globally to set international standards of\r\nbest practices for combatting money laundering,\r\ncorruption, and terrorist financing through the\r\nFATF. FATF now has nearly 40 members.\r\nEvolution\r\nIn domestic law, all administrative agencies—\r\nwhich include financial regulators—have\r\nmandates and responsibilities set out in statute.\r\nThese organizations also have mandates and\r\nobjectives set out in their charters. Again,\r\nhowever, these mandates have been developed\r\nby the institutions’ members—not by any\r\ndomestic or supranational legislature. These\r\nmandates have been framed quite broadly, which\r\nhas over time supported the expansion of these\r\ninstitutions’ scope and functions.\r\nThe FSB’s mandate is to “promote global financial\r\nstability by coordinating the development of\r\nregulatory, supervisory and other financial sector\r\npolicies and conducts outreach to non-member\r\ncountries.”xxiv To accomplish that objective,\r\nthe FSB has two main functions. The first is a\r\nstandard setting role. It has, since 2012, developed\r\nstandards and principles that have cross-sectoral\r\nimplications for multiple jurisdictions. Examples\r\ninclude Key Attributes of Effective Resolution\r\nRegimes for Financial Institutionsxxv and a set of\r\npolicy recommendations for dealing with the risks\r\npresented by non-bank credit intermediation.xxvi\r\nThe FSB also engages in monitoring or early\r\nwarning work. It studies what it identifies as\r\nemerging financial stability risk and publishes\r\nresearch and working papers that direct attention\r\nto certain areas. It is difficult to say what the impact\r\nof this work product is. The FSB also has some\r\nability to drive forward collective problem-solving\r\nin areas of high concern to the G20. The FSB’s\r\nongoing work to tackle the efficiency of crossborder\r\npayments is a current case in point.xxvii\r\nGlobal Governance: Goals and Lessons for AI • Institutional Analogies for Governing AI Globally 86\r\nThe output of this type of work can take the\r\nform of something like a “roadmap” for national\r\njurisdictions to follow toward the collective goal.\r\nThese outputs are less formal than standards—\r\nwhich carry the expectation of implementation—\r\nyet more concrete than papers identifying\r\nemergent risks.\r\nThe Basel committee’s mandate is also set out\r\nin its charter, which stipulates: “The BCBS is the\r\nprimary global standard setter for the prudential\r\nregulation of banks and provides a forum for\r\ncooperation on banking supervisory matters.\r\nIts mandate is to strengthen the regulation,\r\nsupervision and practices of banks worldwide\r\nwith the purpose of enhancing financial\r\nstability.”xxviii Its function in pursuit of that goal\r\nas evolved over time. In its early days, Basel was\r\nprincipally focused on coordinating supervisory\r\nstandards. Today it is best known for its setting of\r\ninternationally agreed capital adequacy standards\r\nof banks in various Basel Accords.\r\nThe Third Basel Accord, adopted in 2010 following\r\nthe 2008 financial crisis, has involved multiple\r\ninter-linking layers, including capital adequacy\r\nstandards, liquidity standards, stable funding\r\nstandards, and new guidance for supervisory\r\nstress testing.xxix Most advanced economies have\r\nover the years been diligent in implementing\r\nthe Basel agreements; the US is currently in the\r\nlast phases of implementing Basel III, colloquially\r\nreferred to as the Basel “endgame.”\r\nFATF’s objective is to set anti-money laundering\r\n(AML) and combatting the financing of terrorism\r\n(CFT) standards. More specifically:\r\nThe objectives of the FATF are\r\nto set standards and to promote\r\neffective implementation of legal,\r\nregulatory and operational measures\r\nfor combating money laundering,\r\nterrorist financing and other related\r\nthreats to the integrity of the\r\ninternational financial system. In\r\ncollaboration with other international\r\nstakeholders, the FATF also works to\r\nidentify national-level vulnerabilities\r\nwith the aim of protecting the\r\ninternational financial system from\r\nmisuse.xxx\r\nAccordingly, the chief function of FATF is the\r\ndevelopment of its recommendations, standards\r\nfor the effective detection of money laundering\r\nand other forms of illicit finance. Naturally, the\r\nrecommendations have iterated over the years\r\nto account for new ways that criminals launder\r\nmoney; most recently, the cecommendations have\r\nbeen updated to address digital assets.xxxi\r\nHistorically, FATF has faced more difficulty than\r\nBasel in securing widespread compliance. Part\r\nof this is because money laundering happens\r\nin a wider range of jurisdictions than those that\r\nare home to large, internationally active banks.\r\nThis leads to broad differences in capacity\r\nand willpower to enforce. Even to secure the\r\nminimum level of buy-in, in order to coordinate\r\nthe recommendations, FATF has had to face the\r\nGlobal Governance: Goals and Lessons for AI • Institutional Analogies for Governing AI Globally 87\r\npolitical and practical reality that each country’s\r\ncircumstances differ, and participation will only\r\nbe maximized with the offer of flexibility. As such,\r\nFATF subscribes to a risk-based approach, in which\r\n“countries, as well as private sector, identify, assess\r\nand understand the risks they are exposed to and\r\nfocus their resources on areas where the risks are\r\nhighest.”xxxii\r\nGovernance\r\nThe global governance framework for\r\ninternational finance is highly unique in\r\nthe history of public international law and\r\ninternational economic coordination. These\r\ninstitutions have considerable authority to set\r\nrules of the game for private institutions—most of\r\nwhich are tremendously costly and restrictive to\r\ntheir business models—but, as noted, no formal\r\nbasis in law and little political accountability. This\r\ndistinct legal and institutional design is both a\r\nstrength and weakness of this governance model.\r\nGovernance at each institution is generally set\r\nup to enable these bodies to “remain a flexible,\r\nresponsive, member-driven, multi-institutional\r\nand multidisciplinary institution”.xxxiii Nominally, the\r\nG20 sits atop of most of it, although the funding\r\ncomes mostly from the central banks.\r\nThe FSB’s main body is the “Plenary” of the entire\r\nmembership. The Plenary is populated by senior\r\npolicymakers from ministries of finance, central\r\nbanks, and supervisory and regulatory bodies\r\nform the G20 countries. It is led by a Chair that\r\nrotates among senior officials from the members\r\nfor a three-year term.xxxiv However, most of the\r\nwork is done at the steering committee and\r\nstanding committee levels. The Plenary appoints\r\nmembers to both the steering committee,\r\nwhich drives forward the plenary agenda, and\r\nthe standing committees, which move forward\r\nthe various workstreams of the FSB. Standing\r\ncommittees are led by senior level officials\r\nfrom the member states.xxxv The FSB also has a\r\nSecretariat, which is directed by the Secretary\r\nGeneral who is appointed by the Plenary.\r\nTechnically, these employees contract directly\r\nwith BIS.\r\nThe Basel Committee reports to the central bank\r\ngovernors and heads of bank supervisors from\r\nthe G10—the group is referred to as the Group of\r\nCentral Bank Governors and Heads of Supervision\r\n(GHOS). GHOS acts like an oversight body and\r\nappoints the Basel Committee Chair, who serves\r\nas the external face of the Committee. Basel also\r\nhas a Secretariat funded and hosted by the BIS.\r\nThe FATF follows a similar governance structure,\r\nwith a Plenary that is responsible for agendasetting,\r\nwhich in turn creates working groups\r\npopulated with members chosen based on\r\nthe Plenary President’s recommendation.\r\nThese working groups and steering groups are\r\nresponsible for “taking forward, in consultation\r\nwith the plenary, any other work necessary for the\r\nFATF to fulfill its mandate”.xxxvi The IMF and World\r\nBank also play a significant role in solidifying\r\nFATF’s work by conducting country assessments\r\nand providing technical assistance and capacity\r\nbuilding in the anti-money laundering and\r\ncounter-terrorism financing spaces.\r\nThe primary implication of these bodies’ soft\r\nlaw status is that the standards they create do\r\nnot have any binding force in law. They must\r\nbe implemented into domestic law pursuant to\r\nthe jurisdiction’s usual process for promulgating\r\nrules and other kinds of public law. In the United\r\nStates, this means that whatever standards\r\ncentral bankers or Treasury officials might agree\r\nto internationally, they must be re-written to be\r\nspecific to the US financial and economic system\r\nand will only become binding once successfully\r\nfinalized through the notice-and-comment\r\nrulemaking process required by the Administrative\r\nProcedure Act.xxxvii\r\nGlobal Governance: Goals and Lessons for AI • Institutional Analogies for Governing AI Globally 88\r\nThis leads to questions of enforcement. Even\r\ntreaty-based public international law struggles\r\nto enforce its rules—and those institutions have\r\nthe political support, and in theory might, of the\r\nstate behind them. How, then, do these softlaw,\r\nnon-binding, network-focused institutions\r\ncompel compliance with their standards,\r\nrecommendations, and roadmaps?\r\nThe short answer to this question is that formally\r\nthey cannot. Still, each of these three institutions\r\nengages in soft enforcement in the form of peer\r\nreview. The FSB uses two types of peer reviews:\r\nthematic reviews and country reviews.xxxviii Thematic\r\nreviews consider how effectively members are\r\nimplementing FSB standards.xxxix Thematic reviews\r\ncan also address other areas important for global\r\nfinancial stability where international standards\r\nor policies do not yet exist. The reviews are\r\nmeant to “encourage” implementation and make\r\nrecommendations to members about how they\r\nmight fill in identified gaps.xl\r\nCountry reviews, on the other hand, are\r\nconnected to the IMF-World Bank Financial Sector\r\nAssessment Program (FSAP) and Reports on the\r\nObservance of Standards and Codes (ROSCs)\r\nrecommendations on financial regulation and\r\nsupervision. Beyond the FSAP compliance, these\r\ncountry reviews can also focus on regulatory,\r\nsupervisory, or other financial sector policy issues\r\n“that are timely and topical for the jurisdiction\r\nitself and for the broader FSB membership”.xli\r\nThe Basel Committee takes a similar-in-spirit\r\napproach. It monitors implementation of its\r\nstandards through its Regulatory Assessment\r\nProgramme (RCAP), established in 2012. RCAP has\r\ntwo main elements: monitoring and assessment.\r\nBy compiling information periodically submitted by\r\nmembers, BCBS maintains a monitoring dashboard\r\nthat is publicly available; assessments, meanwhile,\r\ninvolve the constitution of a cross-jurisdictional\r\nevaluative team and result in the formal publication\r\nof a graded report card of sorts.xlii\r\nLike the FSB, the FATF uses peer reviews,\r\ncalled Mutual Evaluations, to diagnose\r\nproblems and evaluate implementation of FATF\r\nRecommendations.xliii Mutual Evaluations are\r\nframed around both effectiveness and technical\r\ncompliance. An effectiveness assessment entails\r\na visit from an assessment team—the assessed\r\ncountry will have to demonstrate evidence that\r\ntheir measures are working.xliv The technical\r\ncompliance aspect of a Mutual Evaluation entails\r\nthe assessed country providing “information on the\r\nlaws, regulations and any other legal instruments it\r\nhas in place to combat money laundering and the\r\nfinancing of terrorism and proliferation”.xlv These\r\nevaluations are performed regularly, and all reports\r\nare published by the FATF.xlvi\r\nConclusion\r\nThe standards set by the financial governance\r\norganizations discussed in this chapter have had\r\nconsiderable influence in domestic law. Still, the\r\nreader should recognize that this governance\r\nparadigm has its limits, many of which have not\r\nyet been fully tested.\r\nThe first of these concerns the lack of political\r\naccountability. These institutions are not\r\nresponsive to their members’ domestic\r\nlegislatures and yet their standards often\r\nultimately become imported into law.xlvii\r\nOccasionally, this oddity grabs the attention of\r\nlawmakers and shines an unpleasant light on their\r\nwork.xlviii The more stringent the standard, the\r\nmore likely it will be to raise questions about the\r\nlegitimacy of the soft-law process.\r\nAlthough these bodies do engage in public\r\nconsultation, and sometimes include academics\r\nand private sector representatives in their working\r\nGlobal Governance: Goals and Lessons for AI • Institutional Analogies for Governing AI Globally 89\r\ngroups, ultimately, they alone decide the content\r\nof any standards set. It is difficult for the public\r\nto know which individuals, exactly, populate\r\nthese working level groups. So, the public cannot\r\nknow these participants’ interests, objectives, or\r\nincentives. Relatedly, because these institutions’\r\ninner workings are opaque, it is easy for them\r\nto become captured by the special interests of\r\noutside groups.\r\nUltimately, there are plenty of reasons why soft law\r\nregimes are more agile, innovative, and adaptable\r\nthan formal institutions. For that reason, a realist\r\nmight say that any ability to secure coordination\r\nis superior to none. But because broad-based\r\ndomestic political buy-in is essential to the longterm\r\nviability of this tenuous framework, they can\r\nonly go so far in pushing outside the bounds of\r\nthe public and lawmakers’ reasonable expectations\r\nof transparency and accountability.\r\ni. These include the General Agreement on Trade and Tariffs in 1947, the Bretton Woods Agreement 1944, and later the establishment\r\nof the World Trade Organization (“WTO”) in 1995.\r\nii. In the capital markets space, the International Organization of Securities Commissioners (“IOSCO”) coordinates securities regulators. I\r\ntreat this organization as just beyond the scope of this Chapter, but the reader should nevertheless be aware of its presence.\r\niii. See Alan Boyle, “The Choice of a Treaty: Hard Law versus Soft Law”, In Oxford Handbook of the United Nations (2019).\r\niv. Anne-Marie Slaughter & David Zaring, “Networking Goes International: An Update”, Annual Review of Law and Science 2 (2006).\r\nv. BIS, supra note xv.\r\nvi. FSB Charter, available at https://www.fsb.org/wp-content/uploads/FSB-Charter-with-revised-Annex-FINAL.pdf. The FSB is technically\r\na Swiss Law nonprofit. As noted, it is hosted by the BIS under a five-year renewable service agreement.\r\nvii. Art. 48, FATF Charter, available at https://www.fatf-gafi.org/content/dam/fatf-gafi/FATF/FINAL%20FATF%20MANDATE%202012-2020.\r\npdf.coredownload.pdf.\r\nviii. Art. 5, Basel Charter; see also History of the Basel Committee, supra note xiv.\r\nix. Art. 5, FATF Charter.\r\nx. “BIS History - Overview,” Bank for International Settlements, https://www.bis.org/about/history_newarrow.htm.\r\nxi. Id.\r\nxii. “History - the BIS going global,” The Bank for International Settlements, https://www.bis.org/about/history_4global.htm.\r\nxiii. See “Annual Report,” the Bank for International Settlements, https://www.bis.org/about/areport/areport2023.pdf#bal_sheet.\r\nxiv. “History of the Basel Committee,” the Bank for International Settlements, https://www.bis.org/bcbs/history.htm.\r\nxv. “History of the Basel Committee and Its Membership,” the Bank for International Settlements, March 2001, https://www.bis.org/\r\npubl/bcbsc101.pdf. The failure of Bankhaus Herstatt in West Germany was, specifically, the catalyst for the formation of the Basel\r\nCommittee.\r\nxvi. Basel Committee, Charter, Section 4, available at https://www.bis.org/bcbs/charter.htm.\r\nxvii. See Chris Brummer, “Introductory Note to the Financial Stability Board Charter,” International Legal Materials 51, no. 4 (2012): 828.\r\nxviii. The G20 is itself an informal organization, the convenes leaders from the world’s 20 largest economies. As the Council on Foreign\r\nRelations describes it, “by gathering so many leaders together, G20 summits offer rare opportunities to develop such relationships and\r\nrecast bilateral ties.” James McBride, Anshu Siripurapu, and Noah Berman, “What Does the G20 Do?”, Council on Foreign Relations,\r\nOctober 11, 2023, https://www.cfr.org/backgrounder/what-does-g20-do.\r\nxix. “Report to the G20 Los Cabos Summit on Strengthening FSB Capacity, Resources and Governance,” Financial Stability Board, June 12,\r\n2012, https://www.fsb.org/wp-content/uploads/r_120619c.pdf.\r\nGlobal Governance: Goals and Lessons for AI • Institutional Analogies for Governing AI Globally 90\r\nxx. Id.\r\nxxi. “Financial Action Task Force - 30 Years,” Financial Action Task Force, https://www.fatf-gafi.org/en/publications/Fatfgeneral/Fatf-30.html.\r\nxxii. Id.\r\nxxiii. “IMF, Countries are Advancing Efforts to Stop Criminals from Laundering Their Trillions,” International Monetary Fund, https://www.\r\nimf.org/en/Publications/fandd/ issues/2018/12/imf-anti-money-laundering-and-economic-stability-straight.\r\nxxiv. “Work of the FSB,” Financial Stability Board, https://www.fsb.org/work-of-the-fsb. See Art. 2, FSB Charter.\r\nxxv. See “Key Attributes of Effective Resolution Regimes for Financial Institutions,” Financial Stability Board, Oct. 15, 2014, https://www.fsb.\r\norg/ wp-content/uploads/r_141015.pdf.\r\nxxvi. “An Overview of Policy Recommendations for Shadow Banking,” Financial Stability Board, August 29, 2013, https://www.fsb.\r\norg/2013/08/an-overview-of-policy-recommendations-for-shadow-banking/\r\nxxvii. See “Annual Progress Report on Meeting the Targets for Cross-Border Payments, Financial Stability Board, Oct. 9, 2023, https://www.\r\nfsb.org/ wp-content/uploads/P091023-1.pdf.\r\nxxviii. “Basel Committee Charter Article 1,” the Bank for International Settlements, June 5, 2018, https://www.bis.org/bcbs/charter.htm.\r\nxxix. “Basel III: International Regulatory Framework for Banks,” the Bank for International Settlements, https://www.bis.org/bcbs/basel3.htm.\r\nxxx. “Financial Action Task Force Mandate,” Financial Action Task Force, April 20, 2012, https://www.fatf-gafi.org/content/dam/fatf-gafi/\r\nFATF/FINAL%20FATF%20 MANDATE%202012-2020.pdf.coredownload.pdf.\r\nxxxi. “International Standards on Combatting Money Laundering, the Financing of Terrorism & Proliferation: The FATA Recommendations,”\r\nFinancial Action Task Force, February 2023, https://www.fatf-gafi.org/content/dam/fatf-gafi/recommendations/FATF%20\r\nRecommendations%202012.pdf.coredownload.inline.pdf.\r\nxxxii. Id.\r\nxxxiii. “Report to the G20 Los Cabos Summit on Strengthening FSB Capacity, Resources and Governance,” Financial Stability Board, June 12,\r\n2012, https://www.fsb.org/ wp-content/uploads/r_120619c.pdf.\r\nxxxiv. Art. 21, FSB Charter.\r\nxxxv. The four committees including a Standing Committee on the Assessment of Vulnerabilities, which monitors and assesses\r\nvulnerabilities in the global financial system, and is chaired by Nellie Liang, US Assistant Secretary for Domestic Finance; the Standing\r\nCommittee on Supervisory and Regulatory Cooperation (SRC), which develops policy to address key financial stability risks and\r\ncoordinates issues that arise among supervisors and regulators and is Chaired by Bank of England Governor Andrew Bailey; Standing\r\nCommittee on Standards Implementation (SCSI) undertakes FSB peer reviews of its members (which FSB members have committed\r\nto undergo, chaired by Bank of Japan Deputy Governor, Ryozo Himino; and the Committee on Budget and Resources (SCBR), which\r\nassesses the resource needs of the FSB Secretariat and reviews the annual and medium-term budget of the FSB and is chaired by the\r\nChairman of Governing Board Swiss National Bank, Thomas Jordan.\r\nxxxvi. Art. 42, FATF Charter.\r\nxxxvii. Administrative Procedure Act, § 553.\r\nxxxviii. “Peer Reviews,” Financial Stability Board, https://www.fsb.org/work-of-the-fsb/implementation-monitoring/peer_reviews/.\r\nxxxix. Id.\r\nxl. Id.\r\nxli. Id.\r\nxlii. “Basel II Implementation,” Financial Stability Board, https://www.fsb.org/work-of-the-fsb/implementation-monitoring/monitoring-ofpriority-\r\nareas/basel-iii/.\r\nxliii. “Mutual Evaluations,” Financial Action Task Force, https://www.fatf-gafi.org/en/topics/mutual-evaluations.html.\r\nxliv. Id.\r\nxlv. Id.\r\nxlvi. Id.\r\nxlvii. Although the FSB is formally headed beneath the G20, as it explains, “the FSB is not run by the G20—[its] membership is somewhat\r\nwider, and the FSB comes to independent policy views on issues.” “Work of the FSB,” Financial Stability Board, https://www.fsb.org/\r\nwork-of-the-fsb/.\r\nxlviii. See, e.g., Peter J. Wallison, Transparency on FSOC Designations and its Relations with the FSB (Mar. 25, 2015),\r\nhttps://www.aei.org/wp-content/uploads/2015/05/Senate-Banking-Testimony-3-25- 15-FSB-FSOC-2.pdf [https://perma.cc/W3DAAEHT]\r\n(written statement submitted to the US Senate Committee on Banking, Housing, and Urban Affairs).\r\n4Looking Back to Look Ahead\r\nGlobal Governance: Goals and Lessons for AI • Looking Back to Look Ahead 92\r\n4\r\nGoverning AI is a vast, multidimensional, and\r\niterative project. AI will affect how we live and\r\nwork; how every major industry operates;\r\nhow governments serve their citizens; how\r\ncriminals act nefariously; and how conflicts\r\nare waged. Around the world, private sector\r\ncompanies, governments, civil society, and\r\nacademia will contribute to understanding AI’s\r\nopportunities and risks and defining and ensuring\r\nimplementation of effective guardrails.\r\nBut as was clear from the knowledge shared\r\nwith us by our group of experts, AI is not the\r\nfirst domain to require complex and ever\r\nevolving global governance. Take civil aviation,\r\nnuclear power, and global capital flows. At\r\nthe dawn of the 20th century, early flight\r\nexperimentation set the stage for decades of\r\nchange to war, commerce, and culture. Enrico\r\nFermi’s 1934 discovery that neutrons could split\r\natoms entwined devastating weapons with the\r\nemergence of a potentially pivotal global energy\r\nindustry. And our modern financial system was\r\ndisrupted by the Great Depression and two\r\nworld wars before it was then revived, enabling\r\ninnovation while creating global systemic risk.\r\nCivil aviation, nuclear power, and global capital\r\nflows have prompted governance by industry,\r\ndomestic authorities, and international institutions.\r\nThe balance across each layer of regulation has\r\nvaried, with public-private partnerships playing a\r\nstronger role for civil aviation and global capital\r\nflows and international institutions being granted\r\nmore authority with nuclear power. These variations\r\nreflect differences in the technologies and risks\r\nbeing governed as well as the historic moments\r\nin which these governance systems emerged and\r\nbegan to evolve.\r\nToday, governments, industry, and civil society are\r\nactively advancing industry standards, domestic\r\nregulation, and international governance for AI,\r\ncreating an opportunity to build in interoperability\r\nand cohesion across a broad constellation of\r\ninitiatives from the start.\r\nAs the international community commits to\r\nbuilding a more robust system of AI governance,\r\nwe see value in developing frameworks that help\r\nreinforce a coherent direction and coordinated\r\naction among a proliferation of tremendously\r\nuseful initiatives at the global and domestic levels.\r\nWe see value in reflecting on other historical\r\nmoments that have called upon global leaders to\r\ncreate durable institutions as well as the ways in\r\nwhich time and circumstance have tested them,\r\nmotivated their evolution, and demonstrated\r\ntheir impact. And we see value in continued\r\nexchange among diverse experts, including those\r\nwe’ve welcomed the opportunity to learn from\r\nthrough their contributions to our thinking and\r\nthis publication. Given the rapid advancements\r\nin technology that we’re witnessing and the\r\nmomentum of AI policy discussions, learning from\r\nother domains can help ground efforts to build\r\nout a framework and agenda for international AI\r\ngovernance.\r\nUltimately, though, we need to use this context\r\nto look ahead. Recognizing the many efforts\r\nat play and many interests at stake—and the\r\nresulting imperative of collaboration—is at the\r\nfoundation of this AI governance project. We\r\nneed collaboration to help weave together\r\nmutually reinforcing initiatives, reducing their\r\npermeability by reinforcing their seams—or to\r\nbuild from a common cloth, adding local color\r\nand details as we bring together the resources\r\nand capabilities more inherent to a global system.\r\nLearning from the decades of experience that\r\nhave defined our modern, highly interconnected\r\nworld, international initiatives and institutions\r\nare likely to play a critical role in facilitating this\r\nGlobal Governance: Goals and Lessons for AI • Looking Back to Look Ahead 93\r\ncollaboration. Institutions can bring focus to new\r\nor evolving functions and grow expertise needed\r\nto take on more complex and multi-faceted\r\ngovernance projects, enabling collaboration\r\ntoward a shared vision even in complex\r\ngeopolitical environments.\r\nInternational institutional purposes and functions\r\nwill also be key to realizing the three international\r\nAI governance outcomes that Chapter One\r\nproposed: globally significant risk governance,\r\nregulatory interoperability, and inclusive progress.\r\nHow we act—and toward what outcomes\r\nand with what impact—matters not just for AI\r\ntechnology but also and much more importantly\r\nfor the social, environmental, economic, and\r\npolitical futures that are interwoven with it. The\r\nconsequences of AI governance thus reverberate\r\nfor organizations, communities, and people\r\neverywhere. They call upon us to be inclusive\r\nand collaborative, representing the many\r\ninterests at stake and the many efforts that will\r\nultimately accrue to effective, interoperable\r\nglobal AI governance.\r\nKey lessons for AI\r\nfrom existing domains\r\nof global governance\r\n• Domains presenting global\r\nchallenges and opportunities require\r\nglobal governance.\r\n• Policymaking is more effective if\r\ngrounded in scientific or technical\r\nresearch and a deep understanding\r\nof the challenges to be addressed.\r\n• Effective governance frameworks\r\ndefine core functions and desired\r\noutcomes.\r\n• For global frameworks to succeed,\r\nproactive and strategic leaders\r\nplay a critical role in building and\r\nbroadening support.\r\n• Multistakeholder collaboration at\r\nthe technical and political levels\r\nis important to develop robust\r\nglobal standards and allow for rapid\r\nresponse to emergent risks.\r\n• Successful governance systems\r\nevolve over time, with old and\r\nnew institutions taking on a mix\r\nof interconnected functions and\r\nobjectives.\r\n5Recent Multilateral Developments in AI\r\nGlobal Governance: Goals and Lessons for AI • Recent Multilateral Developments in AI 95\r\n5\r\nThis section provides an overview of the variety of\r\ndevelopments in the area of AI governance at the\r\ninternational level over the last 12 months.\r\nUN initiatives\r\nIn the past year, there have been several UN\r\ninitiatives to address AI, some of which stemmed\r\nfrom proposals of the UN Secretary-General and\r\nothers driven by UN organizations and processes.\r\nIn March 2024, the UN General Assembly\r\nadopted a resolution to promote safe, secure,\r\nand trustworthy AI systems for sustainable\r\ndevelopment. It was adopted by consensus\r\nand co-sponsored by more than 120 countries.\r\nThe resolution highlighted the need to respect,\r\nprotect and promote human rights in the design,\r\ndevelopment, deployment, and use of AI, and\r\nalso recognized the potential of AI to accelerate\r\nand enable progress towards reaching the UN\r\nSustainable Development Goals.\r\nSeparately, UN Member States will develop a UN\r\nGlobal Digital Compact (GDC) to be adopted\r\nas part of the Summit of the Future in September\r\n2024. The GDC is expected to “outline shared\r\nprinciples for an open, free and secure digital\r\nfuture for all”, including on AI. Throughout\r\n2023, AI was featured in many of the GDC\r\nconsultation stakeholder submissions (including\r\nfrom Microsoft), as well as the Secretary-General’s\r\nown GDC policy proposal. The co-facilitators\r\nof the GDC process noted in an Issues Paper\r\nthat AI is emerging as a key issue for the GDC.\r\nThe process has included input from a wide\r\narray of stakeholders including UN member\r\nstates, industry, civil society, academia, nongovernmental\r\norganizations, and youth.\r\nA key input to the GDC and the Summit of the\r\nFuture will be the work of a new UN High-\r\nLevel Advisory Body on AI. The 39-member\r\nmultistakeholder and interdisciplinary body (which\r\nincludes Microsoft’s Chief Responsible AI Officer,\r\nNatasha Crampton, in her personal capacity)\r\npublished an interim report in December, and\r\nwill make final recommendations in summer\r\n2024 in three areas: international governance of\r\nAI, understanding AI’s risks and challenges, and\r\nopportunities to leverage AI to deliver the UN\r\nSustainable Development Goals.\r\nThe United Nations Educational, Scientific and\r\nCultural Organization (UNESCO) continues\r\nits work to support implementation of its 2021\r\nRecommendation on the Ethics of AI. Its February\r\n2024 Global Forum on the Ethics of AI focused\r\non the changing landscape of AI governance. In\r\n2023, it launched a UNESCO Business Council for\r\nEthics of AI to help ensure that AI is developed\r\nand utilized in a manner that respects human\r\nrights and upholds ethical standards. The AI\r\nBusiness Council (of which Microsoft is a co-chair)\r\nis committed to strengthening technical capacities\r\nin ethics and AI, designing and implementing the\r\nEthical Impact Assessment tool mandated by the\r\nUNESCO Recommendation, and contributing to\r\nthe development of regional regulations.\r\nThe UN Office of the High Commissioner\r\nfor Human Rights (OHCHR) and its B-Tech\r\nCommunity of Practice launched a Generative\r\nAI Human Rights Due Diligence Project in May\r\n2023. The project looks at how the UN Guiding\r\nPrinciples on Business and Human Rights\r\n(UNGPs) can guide more effective understanding,\r\nmitigation, and governance of the risks of\r\ngenerative AI.\r\nGlobal Governance: Goals and Lessons for AI • Recent Multilateral Developments in AI 96\r\nThe International Telecommunication Union\r\n(ITU) works with stakeholders to build a\r\ncommon understanding of the capabilities of\r\nAI technologies to facilitate the trusted, safe,\r\nand inclusive development of AI technologies,\r\nand equitable access to their benefits. Its AI for\r\nGood platform promotes AI to advance health,\r\nclimate, gender, inclusive prosperity, sustainable\r\ninfrastructure, and other global development\r\npriorities. In July, the ITU’s annual AI for Good\r\nGlobal Summit included discussions about the\r\nneed for guardrails and global AI governance\r\nframeworks. The 2024 Summit in May 2024 will\r\nfor the first time include an AI Governance Day.\r\nIn July 2023, the UN Security Council convened\r\na session on “Artificial Intelligence: Opportunities\r\nand Risks for International Peace and Security”.\r\nThe session, led by the UK during its presidency\r\nof the Security Council, was the Council’s first-ever\r\ndiscussion on AI.\r\nAI was also a major topic of discussion at the\r\nannual meeting of the UN Internet Governance\r\nForum (IGF) in October 2023. The overview of\r\nthe topics discussed included views on several\r\nelements of AI policy: global cooperation,\r\ngovernance, human rights and development,\r\nand generative AI. An IGF Policy Network on AI\r\nalso made recommendations in a report entitled\r\nStrengthening multistakeholder approach to global\r\nAI governance, protecting the environment and\r\nhuman rights in the era of generative AI.\r\nIn October 2023, the United Nations Institute\r\nfor Disarmament Research (UNIDIR) launched\r\na report on “AI and International Security:\r\nUnderstanding the Risks and Paving the Path\r\nfor Confidence-Building Measures”. This report\r\ncreates a taxonomy of the risks of AI in the\r\ncontext of international peace and security and\r\nprovides a comprehensive overview of these\r\nrisks and how they may impact global security.\r\nMultistakeholder discussions on Confidence-\r\nBuilding Measures for AI are expected to\r\ncommence in early 2024.\r\nIn October 2023, the World Health Organization\r\n(WHO) released a new publication listing key\r\nregulatory considerations on AI for health. It\r\noutlines key principles that governments and\r\nregulatory authorities can follow to develop new\r\nguidance or adapt existing guidance on AI. It\r\nemphasizes the importance of establishing the\r\nsafety and effectiveness of AI systems, rapidly\r\nmaking appropriate systems available to those\r\nwho need them, and fostering dialogue among\r\nstakeholders, including developers, regulators,\r\nmanufacturers, health workers, and patients.\r\nIn October 2023, the United Nations Third\r\nCommittee (focused on Social, Humanitarian and\r\nCultural Issues) in New York started discussions\r\non a draft resolution on the “Promotion and\r\nprotection of human rights in the context of\r\ndigital technologies”. The draft notes that AI can\r\ncontribute to the promotion and protection of\r\nhuman rights and has the potential to transform\r\ngovernments and societies, economic sectors,\r\nand the world of work. It calls upon the private\r\nsector and all relevant stakeholders to ensure that\r\nrespect for human rights is incorporated into the\r\nconception, design, development, deployment,\r\noperation, use, evaluation, and regulation of all\r\nnew and emerging digital technologies.\r\nIntergovernmental initiatives\r\nAlongside these developments at the global level\r\nthrough UN bodies, there are also discussions and\r\ninitiatives taken by smaller groups of governments.\r\nIn 2023, the G7 produced a Hiroshima AI Process\r\nComprehensive Policy Framework, which included\r\na Code of Conduct for organizations developing\r\nGlobal Governance: Goals and Lessons for AI • Recent Multilateral Developments in AI 97\r\nadvanced AI systems and Guiding Principles\r\nfor all AI actors. The 2024 G7 Digital Ministerial\r\nDeclaration committed to working with the OECD\r\non tools and mechanisms to monitor application\r\nof the Code of Conduct, and to broaden the\r\ninvolvement of key partners and organizations.\r\nThe G20 2023 Leaders Declaration in September\r\nreaffirmed a commitment to the G20 AI Principles\r\n(2019) and the pursuit of a “pro-innovation\r\nregulatory/governance approach that maximizes\r\nthe benefits and takes into account the risks\r\nassociated with the use of AI” and promotes\r\n“responsible AI for achieving SDGs”.\r\nIn November 2023, the UK hosted the AI Safety\r\nSummit, attended by 27 governments, the EU,\r\nUN, and tech companies, including Microsoft,\r\nDeepMind, Meta, and OpenAI. The summit had\r\na number of outcomes: the Bletchley Declaration\r\nsigned by all attending governments and the\r\nEU, a commitment to a “State of the Science”\r\nreport on the capabilities and risks of frontier AI,\r\na partnership between the UK and US AI Safety\r\nInstitutes, and a Chair’s statement on safety.\r\nAdditional AI safety summits will take place in\r\nSouth Korea and France in the coming year.\r\nOECD, GPAI, and other initiatives\r\nIt is also important to consider the work of the\r\nOECD and others where governments work with\r\nstakeholders to incorporate technical expertise\r\ninto policy analysis to advance thinking on various\r\naspects of AI governance.\r\nThe 38-country Organisation for Economic\r\nCooperation and Development (OECD)\r\ncontinued its wide range of work on AI—a\r\nWorking Party on AI Governance leads work on\r\nAI policy while a separate AI Network of Experts\r\nprovide technical, academic, and business expert\r\ninput. Outputs in 2023 included Initial policy\r\nconsiderations for generative AI and a report\r\non AI and Jobs that explored future skills needs.\r\nThe OECD also updated the definition of an AI\r\nsystem within its 2019 AI Principles to reflect the\r\nemergence of generative AI; a full review of the\r\nOECD AI Principles will be undertaken in the first\r\nhalf of 2024.\r\nThe Global Partnership on AI, a multistakeholder\r\ninitiative which provides a mechanism for sharing\r\nmultidisciplinary research and identifying key\r\nissues among AI practitioners, released a policy\r\nbrief on Generative AI, Jobs, and Policy Response\r\nand a report on AI Foundation Models &\r\nDetection Mechanisms.\r\nUNESCO, the OECD, GPAI, and other partner\r\norganizations launched a Global Challenge to\r\nBuild Trust in the Age of Generative AI. Over the\r\nnext two years, it will surface and test innovative\r\nideas to promote trust and counter the spread\r\nof disinformation.\r\nGlobal Governance: Goals and Lessons for AI • Recent Multilateral Developments in AI 98\r\nAcknowledgements\r\nThank you to the following experts, who contributed their time and knowledge to this report.\r\nChristina Parajon Skinner\r\nAssistant Professor - The Wharton School,\r\nUniversity of Pennsylvania\r\nSir Christopher Llewellyn Smith\r\nEmeritus Professor of Theoretical Physics -\r\nUniversity of Oxford\r\nFormer Director General of CERN\r\n(1994-1998)\r\nDavid Heffernan\r\nChair, Transportation & Trade Practice Group\r\n- Cozen O’Connor\r\nDiana Liverman\r\nRegents Professor of Geography - University\r\nof Arizona, Tucson\r\nJulia C. Morse\r\nAssistant Professor of Political Science -\r\nUniversity of California, Santa Barbara\r\nRachel Schwartz\r\nAssociate, Cozen O’Connor\r\nTrevor Findlay\r\nPrincipal Fellow - University of Melbourne\r\nYouba Sokona\r\nVice Chair, Intergovernmental Panel on\r\nClimate Change\r\n© 2024 Microsoft Corporation. All rights reserved. Global Governance: Goals and Lessons for AI is for informational\r\npurposes only. MICROSOFT MAKES NO WARRANTIES, EXPRESS, IMPLIED OR STATUTORY, AS TO THE INFORMATION\r\nIN THIS DOCUMENT. This document is provided “as is.” Information and views expressed in this document, including\r\nURL and other Internet website references, may change without notice. You bear the risk of using it. This document\r\ndoes not provide you with any legal rights to any intellectual property in any Microsoft product."},"recipientGroups":[{"recipients":{"parliament":[],"federalGovernment":[{"department":{"title":"Bundeskanzleramt (BKAmt)","shortTitle":"BKAmt","url":"https://www.bundeskanzler.de/bk-de","electionPeriod":20}},{"department":{"title":"Bundesministerium für Digitales und Verkehr (BMDV) (20. WP)","shortTitle":"BMDV (20. WP)","url":"https://bmdv.bund.de/DE/Home/home.html","electionPeriod":20}}]},"sendingDate":"2024-07-10"},{"recipients":{"parliament":[],"federalGovernment":[{"department":{"title":"Bundesministerium für wirtschaftliche Zusammenarbeit und Entwicklung (BMZ)","shortTitle":"BMZ","url":"https://www.bmz.de/de","electionPeriod":20}}]},"sendingDate":"2024-12-23"}]},{"regulatoryProjectNumber":"RV0014996","regulatoryProjectTitle":"EU CSAM Verordnung","pdfUrl":"https://www.lobbyregister.bundestag.de/media/b4/44/489043/Stellungnahme-Gutachten-SG2503100017.pdf","pdfPageCount":2,"text":{"copyrightAcknowledgement":"Die grundlegenden Stellungnahmen und Gutachten können urheberrechtlich geschützte Werke enthalten. Eine Nutzung ist nur im urheberrechtlich zulässigen Rahmen erlaubt.","text":"Microsoft Recommendations on the EU Proposed Regulation Laying Down Rules to Prevent and Combat Child Sexual Abuse\r\nFebruary 2024\r\nMicrosoft has a long-standing commitment to online child safety and recognizes the responsibility online service providers have to prevent harm while respecting human rights including privacy and freedom of expression. As such, we welcome the Commission’s Proposal for a Regulation laying down rules to prevent and combat child sexual abuse (‘The Proposal’).\r\nWhile we welcome the 2022 Proposal’s risk-based approach we remain concerned that both the Commission’s and Parliament’s positions propose a mandatory-only approach to detection orders that would unduly restrict companies’ ability to prevent harm of child sexual abuse and exploitation.\r\nIn this context, Microsoft welcomes the Polish Presidency’s compromise text (28/01/25) and its approach to voluntary detection, for three reasons:\r\n1.\r\nThe Polish Presidency’s permanent extension of the ePrivacy derogation will ensure that Interpersonal Communication Services (ICS) providers can continue to deploy tried and tested detection technologies that are central to the prevention and detection of child sexual abuse and exploitation.\r\nIn 2009, Microsoft created in partnership with Dartmouth University, PhotoDNA, a robust hash-matching technology that identifies duplicates of known child sexual abuse imagery in order to detect, remove, and report this heinous content. PhotoDNA, as well other technological means of detecting child sexual abuse material within ICS, account for the tens of millions of reports of child sexual abuse material (CSAM) made to authorities every year. At Microsoft, 99% of the child sexual abuse and exploitation imagery actioned on our services was detected through the voluntary application of detection technologies. Technology is critical to address this harm at scale. Putting the ePrivacy derogation through a long-term legal regime will offer welcome legal clarity to providers that deploy technology to reduce the harm on their services. Microsoft therefore welcomes the Polish Presidency’s proposal to enable continuing voluntary detection measures.\r\n2.\r\nThe additional privacy and transparency safeguards and conditions will ensure that voluntary detection is not deployed at the expense of the fundamental right to privacy.\r\nMicrosoft also welcomes the proposed additional safeguards and accountability required for providers to take voluntary steps to protect their services. The additional data protection requirements, including a data protection impact assessment, consultations with stakeholders, and transparency measures will all serve to facilitate trust and confidence in industry practices.\r\n3.\r\nVoluntary detection, in tandem with additional safeguards, will serve to promote Trust & Safety innovation.\r\n2\r\nBy maintaining the legal basis for voluntary detection of known CSAM, new CSAM and the solicitation of children, the Polish Presidency text will also support an environment that promotes continued technological innovation. The nature of this harm means that perpetrators are always seeking to circumvent tooling designed to protect children.\r\nMoreover, we are in an environment where artificial intelligence is driving significant advancements in a range of fields, including trust and safety. Ensuring that the voluntary framework allows for innovation, by including requirements for high-risk services to contribute to the development of detection technologies, will ultimately promote innovation in this field, and raise the bar for child protection across the eco-system.\r\nMicrosoft recommends, where possible, that the processes around risk categorization are made as efficient as possible. As written, the process of risk categorization (and possibly, recategorization) is lengthy and creates significant red-tape for service providers, the EU Centre, and the Coordinating Authority. In the event a service labels itself or is labelled high-risk, it may be subject to additional prevention measures, inform its users of the risk, undertake engineering cycles to showcase reduced risk and report on such, and contribute to the operational, financial and technical development of detection tooling.\r\nNot only that, but the Centre and Coordinating Authorities will also have to remain available to comb through thousands of risk assessments, the possibility for new categorizations, privacy impact assessments, and the success of any new mitigation measures introduced. To optimize the time of all stakeholders involved, and ensure the best result is driven from these procedures, we recommend that the risk assessments be mandated only to ICS providers, and made optional for hosting providers.\r\nWe recognize that in a voluntary-only regime, checks and balances must be implemented to ensure all players are measured against the same bar. However, we recommend considering ways in which the process can be streamlined for providers undertaking good faith efforts to address CSAM risks on their services.\r\nWe also recommend that provisions related to age verification are put aside for this specific regulation and consulted on separately. While age assurance remains one of the many ways in which children can be better protected only, this technical solution warrants to be examined specifically, following extensive multi-stakeholder consultation. A variety of workstreams are also seeking to develop a harmonized EU approach to age verification, notably through the Digital Services Act’s Article 28 – which take into account its technical overlap with the eIDAS Regulation. We recommend that, to avoid conflicting or repetitive rules, the CSAM Regulation only propose age assurance, or verification, as possible mitigation measure – as opposed to an obligatory provision specific to app stores.\r\nIn conclusion, in the context of this continued deadlock in Council, and the pressing deadline for the expiration of the ePrivacy derogation (April 2026) Microsoft recommends that co-legislators strongly consider the approach put forth by the Polish Presidency. The proposal advances privacy and safety protections for Europeans by providing a clear legal framework and ensures the benefits of time-tested voluntary detection remains in companies’ toolkits. Microsoft welcomes the opportunity to provide feedback to this important topic. We remain ever committed to the whole-of-society fight against child sexual exploitation and abuse and available to discuss any questions you may have."},"recipientGroups":[{"recipients":{"parliament":[],"federalGovernment":[{"department":{"title":"Bundesministerium für Digitales und Verkehr (BMDV) (20. WP)","shortTitle":"BMDV (20. WP)","url":"https://bmdv.bund.de/DE/Home/home.html","electionPeriod":20}}]},"sendingDate":"2025-03-10"}]}]},"contracts":{"contractsPresent":false,"contractsCount":0,"contracts":[]},"codeOfConduct":{"ownCodeOfConduct":false}}