The rapid evolution of Artificial Intelligence (AI) is transforming the financial services and insurance sector, offering innovative solutions ranging from fraud detection to personal credit scoring. However, with these advancements comes a pressing need for comprehensive regulations to protect consumer rights and ensure ethical use.
Central to this regulatory landscape is the European Union's AI Act (Regulation (EU) 2024/1689), whose implementation in Malta is being driven by the Malta Digital Innovation Authority (MDIA). This legislation aims to foster human-centric and trustworthy AI while prioritising health, safety and fundamental rights.
A key aspect of the AI Act is its classification of AI systems, particularly identifying ‘high-risk’ applications. These systems, which pose significant potential harm to individuals’ rights or well-being, are subject to stringent regulatory oversight.
In the financial sector, including insurance services, several AI applications have been identified as high-risk:
• AI systems intended to be used to evaluate the creditworthiness of natural persons or establish their credit score, with the exception of AI systems used for the purpose of detecting financial fraud.
• AI systems intended to be used for risk assessment and pricing in relation to natural persons in the case of life and health insurance.
• AI systems intended to be used for the recruitment or selection of natural persons, in particular to place targeted job advertisements, to analyse and filter job applications, and to evaluate candidates.
• AI systems intended to be used to make decisions affecting terms of work-related relationships, the promotion or termination of work-related contractual relationships, to allocate tasks based on individual behaviour or personal traits or characteristics or to monitor and evaluate the performance and behaviour of persons in such relationships.
The risks associated with these applications necessitate robust governance and compliance mechanisms. Obligations for high-risk systems include conducting conformity assessments and maintaining comprehensive documentation and quality management systems. The AI Act's requirements for high-risk systems will come into effect on August 2, 2026, allowing a two-year grace period for compliance, with an additional year for AI systems embedded in regulated products.
As a prospective market surveillance authority, the MDIA will play a crucial role in ensuring compliance. This includes conducting inspections and investigations, addressing non-compliance issues, and enforcing administrative penalties when necessary. To support innovation while ensuring regulatory adherence, the MDIA has established a Sandbox, providing a controlled environment for developing and testing AI solutions.
The AI Act forms part of a wider legislative framework focused on digital innovation, and it interfaces with other legislation such as the Digital Operational Resilience Act and the Cyber Resilience Act. To this end, the MDIA has built a strong relationship with other competent authorities, such as the MFSA, in view of a shared commitment to ensuring a secure and trustworthy digital environment.
The MDIA adopts a proactive approach, emphasising collaboration with economic operators to assess the impact of the legislation. By providing information, resources and support, the Authority aims to enhance compliance and foster a thriving ecosystem for digital innovation.
In an era where AI’s influence in finance continues to grow, establishing a robust regulatory framework is essential. The MDIA’s initiatives are positioning Malta as a leader in digital innovation while safeguarding consumer rights and promoting responsible AI use. As financial services increasingly integrate AI technologies, this comprehensive approach ensures a sustainable and ethically sound industry future.
Authors: Neil Micallef, AI Supervision & Market Surveillance Manager, and Dr Annalise Vassallo Seguna, Managing Legal Counsel Professional Officer