As part of its digital strategy, the EU wants to regulate artificial intelligence (AI) to ensure better conditions for the development and use of this innovative technology. AI can create many benefits, such as better healthcare, safer and cleaner transport, more efficient manufacturing, and cheaper and more sustainable energy.
In April 2021, the European Commission proposed the first EU regulatory framework for AI. Under the framework, AI systems that can be used in different applications are analysed and classified according to the risk they pose to users. The different risk levels will mean more or less regulation. Once approved, these will be the world’s first rules on AI.
What Parliament wants in AI legislation
Parliament’s priority is to make sure that AI systems used in the EU are safe, transparent, traceable, non-discriminatory and environmentally friendly. AI systems should be overseen by people, rather than by automation, to prevent harmful outcomes.
Parliament also wants to establish a technology-neutral, uniform definition for AI that could be applied to future AI systems.
AI Act: different rules for different risk levels
The new rules establish obligations for providers and users depending on the level of risk from artificial intelligence. While many AI systems pose minimal risk, they need to be assessed.
Unacceptable risk
Unacceptable-risk AI systems are those considered a threat to people and will be banned. They include:
- Cognitive behavioural manipulation of people or specific vulnerable groups: for example, voice-activated toys that encourage dangerous behaviour in children
- Social scoring: classifying people based on behaviour, socio-economic status or personal characteristics
- Real-time and remote biometric identification systems, such as facial recognition
Some exceptions may be allowed: for instance, “post” remote biometric identification systems, where identification occurs after a significant delay, will be allowed to prosecute serious crimes, but only after court approval.
High risk
AI systems that negatively affect safety or fundamental rights will be considered high risk and will be divided into two categories:
1) AI systems that are used in products falling under the EU’s product safety legislation. This includes toys, aviation, cars, medical devices and lifts.
2) AI systems falling into eight specific areas that will have to be registered in an EU database:
- Biometric identification and categorisation of natural persons
- Management and operation of critical infrastructure
- Education and vocational training
- Employment, worker management and access to self-employment
- Access to and enjoyment of essential private services and public services and benefits
- Law enforcement
- Migration, asylum and border control management
- Assistance in legal interpretation and application of the law
All high-risk AI systems will be assessed before being put on the market and also throughout their lifecycle.
Generative AI
Generative AI, like ChatGPT, would have to comply with transparency requirements:
- Disclosing that the content was generated by AI
- Designing the model to tướng prevent it from generating illegal content
- Publishing summaries of copyrighted data used for training
Limited risk
Limited-risk AI systems should comply with minimal transparency requirements that would allow users to make informed decisions. After interacting with the applications, the user can then decide whether they want to continue using them. Users should be made aware when they are interacting with AI. This includes AI systems that generate or manipulate image, audio or video content, for example deepfakes.