EU harmonised rules on Artificial Intelligence
Introduction
The EU AI Act has been published and enters into force on August 1st, 2024.
Naturally, we were eager to understand what impact this new set of harmonised rules across the EU will have on us as an AI-focused company. We started by diving into the regulation document… but quickly realised how challenging it is to get through lengthy legal texts. How easy do you find it to read through such documents? To help us understand the new regulation, we turned to our trusty friend ChatGPT. Crafting the right prompt was crucial to ensure we got the information we needed. For more insights on prompt engineering and how to effectively communicate with AI, check out this blog post.
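Prompts like the one described here can also be assembled programmatically before being sent to a chat model. The sketch below is purely our own illustration of that idea: the function name, wording, and structure are assumptions, not the actual prompt used for this post.

```python
# Illustrative sketch only: the instruction wording and function name are
# assumptions, not the exact prompt used in this post.

def build_prompt(document_excerpt: str, focus: str) -> str:
    """Combine a role instruction, a focus question, and the source text
    into a single prompt string for a chat model."""
    return (
        "You are a legal analyst. Summarise the regulation below.\n"
        f"Focus on: {focus}\n"
        "Quote the relevant passages verbatim where possible.\n\n"
        f"--- REGULATION TEXT ---\n{document_excerpt}"
    )

prompt = build_prompt(
    document_excerpt="Article 10: Training, validation and testing data sets...",
    focus="rules for processing data to train AI models",
)
print(prompt)
```

Separating the instruction, the focus question, and the source text this way makes it easy to reuse the same template for different questions about the regulation.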
Here’s the prompt we used to get the answer we were looking for:
Answer - summary of the new EU AI Act
The new rules for processing data to train AI models introduced in the legislation include several key changes and requirements that differ from previous regulations. Here are the most important points along with actual quotes from the document:
- High-Quality and Representative Datasets:
- AI models must be trained using datasets that are representative and of high quality to avoid biases and inaccuracies.
- Quote: “The datasets used by AI systems, particularly those categorised as high-risk, should be of high quality, in terms of accuracy, completeness, and representativity to minimize the risk of bias and ensure the AI system’s proper functioning”.
- Data Minimization and Purpose Limitation:
- Data collected for training AI models must be minimized and should only be used for specified purposes.
- Quote: “The principle of data minimisation should be strictly adhered to, ensuring that only the data necessary for the specific AI training purpose is collected and processed”.
- Transparency and Explainability:
- AI systems should be transparent, and their decision-making processes should be explainable to users.
- Quote: “Providers of high-risk AI systems are required to ensure transparency and provide clear and comprehensible information about the AI system, including its capabilities and limitations”.
- Bias Mitigation and Risk Management:
- Providers must implement measures to mitigate biases and manage risks associated with AI systems.
- Quote: “The risk-management system should consist of a continuous, iterative process that is planned and run throughout the entire lifecycle of a high-risk AI system, aimed at identifying and mitigating the relevant risks of AI systems on health, safety and fundamental rights”.
- Post-Market Monitoring:
- Continuous monitoring and evaluation of AI systems after deployment are required to ensure ongoing compliance and performance.
- Quote: “Providers shall establish and document a post-market monitoring system… to evaluate the continuous compliance of AI systems with the requirements set out”.
- Explicit User Consent and Rights:
- Users must provide explicit consent for their data to be used in training AI models, and they must be informed of their rights.
- Quote: “Explicit consent from the users whose data is being used must be obtained, ensuring they are fully aware of their rights and how their data will be used”.
- AI Regulatory Sandbox:
- A controlled environment (sandbox) will be established to allow for the development, testing, and validation of AI systems under regulatory supervision.
- Quote: “AI regulatory sandboxes shall provide for a controlled environment that fosters innovation and facilitates the development, training, testing and validation of innovative AI systems”.
Differences from previous legislation on Artificial Intelligence
- More Rigorous Data Standards: The new rules emphasize the need for high-quality and representative datasets, a step beyond previous general data quality requirements.
- Greater Emphasis on Transparency: There is a stronger focus on the transparency and explainability of AI systems, making it mandatory for providers to disclose more detailed information about their AI models.
- Enhanced Risk Management: The introduction of continuous risk management throughout the AI lifecycle is more comprehensive than previous periodic review requirements.
- Post-Market Monitoring: Continuous and systematic post-market monitoring is now required, whereas earlier regulations often focused only on pre-market assessments.
- Regulatory Sandboxes: The creation of AI regulatory sandboxes is a new concept, designed to facilitate innovation while ensuring regulatory compliance in a controlled environment.
Impact on companies
- Increased Compliance Costs: Companies will face higher costs due to the need for more rigorous data collection, transparency measures, and continuous monitoring.
- Innovation within Regulation: The regulatory sandboxes provide a space for innovation while ensuring compliance, potentially speeding up the development and deployment process.
- Higher Standards for Data Quality: Companies must invest in obtaining high-quality and representative datasets, potentially increasing the costs and time required for data preparation.
- Enhanced Accountability: With explicit user consent requirements and post-market monitoring, companies are more accountable for the performance and impact of their AI systems.
These new rules set a higher bar for the responsible development and deployment of AI, aiming to balance innovation with the protection of fundamental rights and public safety.
Next steps
As indicated, the new regulation enters into force on August 1st, 2024. Is your company ready?
If you want to read the whole regulation, or to create your own prompt to extract what you are interested in, you can find the full text here.