Until now, there has been no legal framework for artificial intelligence (AI). That is precisely what the European Commission (EC) has now proposed. In doing so, it is seeking a coordinated approach with the Member States and has set this out in a plan.
The legal framework rests on two pillars: safety and innovation. On the one hand, people and companies should be able to trust that their safety and fundamental rights are protected when dealing with AI. On the other hand, investment and innovation are to be strengthened across the EU. New safety rules for machinery are also intended to bolster the confidence of AI users.
Categories according to risk
AI systems with an unacceptable risk, meaning those that pose a clear threat to people's safety and rights, such as public mass surveillance, are to be banned outright. High-risk systems may be used only to a limited extent and under strict conditions, for example in critical infrastructure, in education when grading school exams, or in law enforcement. Low-risk AI systems, such as chatbots, are subject to specific transparency obligations, while minimal-risk systems (video games or spam filters) may be used freely.
Governance in the Member States
National market surveillance authorities are to monitor compliance with the rules. In addition, a European Artificial Intelligence Board is to be established to support implementation and promote the development of standards in the field of AI.
The Commission's proposals for a European approach to artificial intelligence and for a Machinery Regulation must now be adopted by the European Parliament and the Member States under the ordinary legislative procedure. Once adopted, the Regulations will apply directly across the EU.