Artificial Intelligence
The Malta Digital Innovation Authority (MDIA) leads Malta’s implementation of the European Union Artificial Intelligence Act (EU AI Act), the first comprehensive legal framework for Artificial Intelligence (AI). The MDIA is committed to supporting the development of an ecosystem that is conducive to AI leadership while ensuring the safe, trustworthy, and human-centric use of AI.
The new rules:
- address risks specifically created by AI applications;
- prohibit AI practices that pose unacceptable risks;
- determine a list of high-risk applications;
- set clear requirements for AI systems for high-risk applications;
- define specific obligations for deployers and providers of high-risk AI applications;
- require, where stipulated by the Act, that a conformity assessment be carried out before a given high-risk AI system is put into service or placed on the market;
- put enforcement in place after a given AI system is placed on the market; and, amongst other measures,
- establish a governance structure at European and national level.
The AI Regulatory Framework
The AI Act defines four levels of risk for AI systems: prohibited (unacceptable) risk; high risk; AI systems with specific transparency obligations; and AI systems permitted with no restrictions, which can adopt voluntary codes of conduct.
The Act also introduces requirements for general-purpose AI models.
Requirements for conformity assessment and timelines associated with the implementation of the EU AI Act are further explained in this section.
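For organisations taking stock of their AI systems, the four risk levels can be thought of as a simple triage structure. The sketch below is provided for illustration only: the tier labels and the inventory fields ("banned_practice", "high_risk_use_case", "interacts_with_people") are assumptions made for this example, not part of the Act or of MDIA guidance, and any real classification must be based on the Act itself.

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative labels for the four risk levels described above (not official terminology)."""
    PROHIBITED = "prohibited"      # banned AI practices
    HIGH_RISK = "high-risk"        # subject to strict requirements and conformity assessment
    TRANSPARENCY = "transparency"  # specific transparency obligations apply
    MINIMAL = "minimal"            # permitted with no restrictions; voluntary codes of conduct

def triage(record: dict) -> RiskTier:
    """First-pass triage of an internal AI inventory record.

    The keys used here are assumptions made for this sketch; a real
    assessment must follow the AI Act and, where needed, professional advice.
    """
    if record.get("banned_practice"):
        return RiskTier.PROHIBITED
    if record.get("high_risk_use_case"):
        return RiskTier.HIGH_RISK
    if record.get("interacts_with_people"):
        return RiskTier.TRANSPARENCY
    return RiskTier.MINIMAL

# Example: a customer-facing chatbot with no high-risk use case
print(triage({"interacts_with_people": True}))  # RiskTier.TRANSPARENCY
```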

The AI Act bans certain AI applications that negatively impact individuals. These include the following:
- Subliminal or manipulative techniques, or exploitation of vulnerabilities – to manipulate people in harmful ways
- Social scoring – for public and private purposes leading to detrimental or unfavourable treatment
- Individual predictive policing – assessing or predicting the risk of a natural person committing a criminal offence based solely on profiling or the assessment of personality traits, without objective and verifiable facts
- Untargeted scraping of facial images – AI systems creating or expanding facial recognition databases from the internet or CCTV footage
- Emotion recognition – AI systems that infer emotions in the workplace or educational institutions, unless for medical or safety reasons
- Biometric categorisation – to deduce or infer race, political opinions, religious or philosophical beliefs, or sexual orientation, except for the labelling or filtering of lawfully acquired biometric datasets in the area of law enforcement
- Real-time remote biometric identification – in publicly accessible spaces for law enforcement purposes, subject to narrow exceptions and prior authorisation by a judicial or independent administrative authority
Please refer to the AI Act for the complete list of prohibited AI Systems, including any applicable exceptions.
Key Entities and Obligations
The MDIA is leading Malta’s implementation of the EU AI Act and is working with key stakeholders to ensure an effective and supportive regulatory framework.
The AI Act requires the identification or designation of various bodies and authorities.
The main authorities are the Notifying Authority and the Market Surveillance Authorities. Furthermore, authorities or bodies responsible for protecting fundamental rights have been identified in line with the requirements of the EU AI Act.
There are six main operators: the provider, deployer, importer, distributor, authorised representative, and product manufacturer.

The list of the Authorities protecting Fundamental Rights under Article 77 of the AI Act is available below:
Office of the Information and Data Protection Commissioner
Malta Competition and Consumer Affairs Authority
National Commission for the Promotion of Equality
Commission for the Rights of Persons with Disability
The Office of the Ombudsman
Department for Industrial and Employment Relations
JobsPlus Malta
Malta Broadcasting Authority
Director for the Protection of Minors
Electoral Commission Malta
The MDIA AI Self-Assessment Toolbox
The MDIA EU AI Act self-assessment tools are designed to provide general guidance to:
- determine whether your system is likely an AI System as defined in the EU AI Act;
- understand the potential classification of the AI System; and
- identify your potential role as a Provider, Importer, Distributor, or Deployer.
These tools offer a quick and informative way to navigate the EU AI Act’s requirements and serve as a first step in identifying the high-level obligations you may need to comply with. Users should not rely solely on any recommendation provided by the tools; please seek professional advice for a more accurate assessment. An illustrative sketch of the kind of questions these tools cover is provided after the list of tools below.
AI Tool #1: AI Classification Guide V1
AI Tool #2: AI Compliance Role Finder (Providers, Deployers, Distributors, and Importers) V1
AI Tool #3: Building Trust in AI through a Cyber Risk-Based Approach
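For illustration only, the sketch below shows how the high-level questions behind a role finder of this kind might be expressed in code. The question keys and the simplified mapping are assumptions made for this example; they do not reproduce the MDIA tools’ actual logic, and only the tools themselves and the AI Act are authoritative.

```python
# Hypothetical, simplified role-finder sketch; it does not reproduce the MDIA tools' logic.
def compliance_role(answers: dict) -> str:
    """Map yes/no answers to a rough EU AI Act operator role.

    The answer keys are assumptions for illustration:
      develops_and_markets_system            - you develop the system (or have it developed) and
                                               place it on the market under your own name or trademark
      first_places_non_eu_system_on_market   - you place on the EU market a system from a provider
                                               established outside the EU
      makes_system_available_in_supply_chain - you make the system available on the EU market
                                               without being the provider or importer
      uses_system_under_own_authority        - you use the system under your own authority
    """
    if answers.get("develops_and_markets_system"):
        return "Provider"
    if answers.get("first_places_non_eu_system_on_market"):
        return "Importer"
    if answers.get("makes_system_available_in_supply_chain"):
        return "Distributor"
    if answers.get("uses_system_under_own_authority"):
        return "Deployer"
    return "Unclear - use the MDIA tools and seek professional advice"

# Example: an organisation using a third-party AI system under its own authority
print(compliance_role({"uses_system_under_own_authority": True}))  # Deployer
```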

Measures in Support of Innovation
The AI Act encourages innovation through various measures, including the establishment of AI regulatory sandboxes. Article 57 of the AI Act provides that AI regulatory sandboxes shall establish a controlled environment that fosters innovation and facilitates the development, training, testing, and validation of innovative AI systems. Providers participating in a sandbox remain liable for damages but are protected from administrative fines if they act in good faith.
The MDIA provides guidance on compliance and risk mitigation, collaborating with DiHubMT, Malta’s European Digital Innovation Hub, which offers support to small and medium-sized enterprises (SMEs) and public sector organisations to foster innovation and best practices.
More information on the MDIA sandbox is available here.
