The European Commission has published the first draft of a code of conduct for providers of general-purpose AI (GPAI) models. The code is intended to facilitate the application of the EU AI Act's rules to such models, and the Commission can approve it and give it general validity across the EU. The draft, prepared by independent experts, contains strict requirements for GPAI models with so-called systemic risk, meaning models trained with more than 10^25 FLOPs of computing power (a threshold that, to our knowledge, GPT-4 has already exceeded). According to the current draft, such models would have to be reported to the EU two weeks before the start of training.
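To put the 10^25 FLOP threshold into perspective, training compute is often estimated with the rough rule of thumb of about 6 FLOPs per parameter per training token. The sketch below applies that approximation to a few hypothetical model sizes and token counts; these figures are illustrative assumptions, not numbers from the draft code or the AI Act.

```python
# Rough training-compute estimate using the common "6 * N * D" rule of thumb
# (about 6 FLOPs per parameter per training token). The example models below
# are hypothetical and not taken from the draft code or the AI Act.

THRESHOLD_FLOPS = 1e25  # systemic-risk threshold named in the draft code

def training_flops(params: float, tokens: float) -> float:
    """Approximate total training compute in FLOPs."""
    return 6 * params * tokens

examples = {
    "hypothetical 7B model, 2T tokens":   training_flops(7e9, 2e12),
    "hypothetical 70B model, 15T tokens": training_flops(70e9, 15e12),
    "hypothetical 200B model, 10T tokens": training_flops(200e9, 10e12),
}

for name, flops in examples.items():
    status = "above" if flops >= THRESHOLD_FLOPS else "below"
    print(f"{name}: ~{flops:.1e} FLOPs ({status} the 1e25 threshold)")
```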
The code provides for two central documents: the "Safety and Security Framework" (SSF) and the "Safety and Security Report" (SSR). The SSF is the overarching framework that defines the basic guidelines for risk management and comprises four main components.

The SSR, on the other hand, is the concrete documentation tool for each individual model.
Both documents are closely linked: the SSF provides the framework and guidelines according to which the SSRs are created, while the SSRs document the concrete implementation and yield insights that feed into updates of the SSF. This interplay is intended to ensure continuous improvement of the safety measures.
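The relationship can be pictured as a simple one-to-many structure: one framework governing many model-specific reports, whose findings flow back into framework revisions. The following sketch is purely illustrative; the class and field names are assumptions, as the draft code does not prescribe any schema.

```python
# Illustrative sketch of the SSF/SSR relationship described above.
# All names and fields are hypothetical; the draft code defines no data model.
from dataclasses import dataclass, field

@dataclass
class SafetySecurityReport:          # SSR: documentation for one specific model
    model_name: str
    findings: list[str] = field(default_factory=list)

@dataclass
class SafetySecurityFramework:       # SSF: overarching risk-management guidelines
    version: int = 1
    guidelines: list[str] = field(default_factory=list)
    reports: list[SafetySecurityReport] = field(default_factory=list)

    def incorporate_findings(self) -> None:
        """Feed insights from the model-specific SSRs back into the framework."""
        new_lessons = [f for r in self.reports for f in r.findings]
        if new_lessons:
            self.guidelines.extend(new_lessons)
            self.version += 1        # the continuous-improvement cycle

ssf = SafetySecurityFramework(guidelines=["identify systemic risks"])
ssf.reports.append(SafetySecurityReport("model-x", findings=["add red-team evals"]))
ssf.incorporate_findings()
print(ssf.version, ssf.guidelines)
```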
A new element in the draft code is external testing: GPAI models with systemic risk are to be examined by the AI Office and by third parties. The text stipulates that providers must ensure sufficient independent expert testing before deploying such models, in order to assess risks and mitigation measures more accurately and to give external actors assurance. This can also include a review of the evidence collected by the provider. Such external audits are not currently provided for in the AI Act. This raises the question of who would be capable of testing and evaluating the most complex AI models, and whether the AI Office has the necessary expertise; the draft code leaves this open for the time being. The proposal is also controversial because intensive testing, or releasing complex models for such testing, involves disclosing extensive technical details of the models under examination. Testing organizations would need the expertise to evaluate cutting-edge technology while keeping the results of the tests confidential.
The requirement for external tests in the code could have far-reaching consequences. The EU Commission can declare the code of conduct binding EU-wide through an implementing act. This would also give the external tests provided for therein legal force. Alternatively, the Commission could enact its own rules to implement the obligations if the code is not finalized in time or is deemed unsuitable by the AI Office. External audits could thus become mandatory either through the code or directly through a Commission decision. This would be a significant tightening compared to the original AI Act. However, the preamble also provides that providers can demonstrate compliance through "appropriate alternative means" if they do not want to rely on the code. The practical implementation of this option, however, remains unclear.
Another focus of the code is copyright. Providers must establish a policy for copyright compliance, which includes respecting the reservations of rights holders who do not want to release their content for the training of AI models. As a technical means for this, providers should support the industry standard robots.txt, which allows website operators to specify which content may be accessed by crawlers. Providers should also take measures to exclude piracy websites from their crawling activities, for example on the basis of the EU Commission's "Counterfeit and Piracy Watch List".
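As an illustration of how such reservations can be expressed and respected technically, the sketch below parses an example robots.txt with Python's standard urllib.robotparser and checks whether a crawler may fetch a given page. The "ExampleAIBot" user agent and the robots.txt contents are assumptions for illustration, not requirements from the draft code.

```python
# Minimal sketch: honoring robots.txt reservations before crawling.
# The robots.txt content and the "ExampleAIBot" user agent are hypothetical.
from urllib.robotparser import RobotFileParser

example_robots_txt = """\
User-agent: ExampleAIBot
Disallow: /articles/

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(example_robots_txt.splitlines())

for url in ("https://example.com/articles/story.html",
            "https://example.com/about.html"):
    allowed = parser.can_fetch("ExampleAIBot", url)
    print(f"{url}: {'allowed' if allowed else 'disallowed'} for ExampleAIBot")
```

A provider's crawler would perform a check of this kind for each target URL before fetching content for training purposes.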
In the next steps, the draft code will be discussed with around 1,000 stakeholders in four thematic working groups. Based on the feedback, the experts will further develop and refine the code.