The European Commission has finalized the text of the Artificial Intelligence Act (AIA), which pushes for a human-centric approach and trustworthiness in the development and use of AI medical devices.
Published in the Official Journal on 12 July, the AI Act takes effect on August 2, with the requirements for high-risk medical devices applying from August 2, 2026.
The AI Act, which was adopted by the European Parliament on 13 March, establishes a legal framework for the uptake of medical technologies such as in-vitro diagnostic devices (IVDs), medical devices, and similar products, with humans playing an integral role.
The Act Prohibits Certain AI Practices, Pushes for Transparency
According to a report published by Regulatory Focus (RF), the Act outlaws certain AI practices and adds specific requirements for high-risk AI systems. However, the regulation does not apply to medical devices and systems still under research or testing prior to being placed on the market.
High-risk devices are those classified as Class IIa or higher under the Medical Devices Regulation. Manufacturers of such devices are required to establish risk management throughout the product's entire lifecycle, ensure the product is free from errors, and provide technical documentation demonstrating that the product complies with the Act.
Additionally, manufacturers must provide user manuals or guides and establish a quality management system in compliance with the Act. The regulation also requires such high-risk systems to be designed and developed to ensure transparency and allow deployers to interpret the systems' output.