Law and Practice
1. General Legal Framework
1.1 General Legal Background
As at the time of writing (May 2024), Malta has not specifically legislated to cater for the legal revolution AI is creating.
Malta’s legal system is a ‘mixed’ one, where its civil, commercial and criminal laws are mainly based on civil law, while its main source of public and administrative law is common law. These legal traditions remain influential in the interpretation of Maltese laws, and it is expected that decisions of the Italian, French and British courts with regard to AI will influence the interpretation of Maltese civil, commercial and public laws.
Contractual and Tortious Liability
The general principles of contract and tort law will continue to apply to the use of artificial intelligence in Malta. They are governed by the Civil Code and the Commercial Code (Chapters 16 and 13 of the Laws of Malta, respectively).
Acting in good faith (as a bonus paterfamilias) is one of the underlying principles of both contract and tort law. AI is generally seen as a tool, and the user of such a ‘tool’ remains ultimately responsible for the damage caused by it or through its use. The principle of fault under Article 1033 of the Civil Code, whereby ‘any person who, with or without intent to injure, voluntarily or through negligence, imprudence, or want of attention, is guilty of any act or omission constituting a breach of the duty imposed by law, shall be liable for any damage resulting therefrom’, is particularly relevant for damages resulting from the use of AI. Like any technology, the use of AI carries a duty of care towards others. This applies both where the technology is used privately and where it is used in a professional context. The user cannot rely on ignorance of the effects of using the technology or on the ‘black box’ phenomenon.
IP, Data Protection and Consumer Affairs
In addition to its domestic laws, as an EU member state, Malta’s laws adopt harmonised EU legislation in many of the areas that are relevant to AI, be it copyright and IP, data protection, the use of medical devices, product safety or consumer protection law. Domestic laws transposing EU Directives or supporting EU Regulations in these areas, notably the Copyright Act (Chapter 415 of the Laws of Malta), the Data Protection Act (Chapter 586 of the Laws of Malta), the Consumer Affairs Act (Chapter 378 of the Laws of Malta) and the Medical Devices Regulations (Subsidiary Legislation 427.44), have not been modified to cater for the specificities of AI. Nor has Transport Malta (the Authority for Transport in Malta) updated its Highway Code or introduced any specific provisions related to the use of automated vehicles in Malta.
One piece of legislation that has been enacted and that may have a considerable impact on the development of AI solutions in the health sector is the Processing of Personal Data (Secondary Processing) (Health Sector) Regulations (Subsidiary Legislation 528.10). Under this law, where the use of health data by health providers for purposes other than the originally intended use (which purposes are listed in the law) may lead to benefits for the health system in Malta, such use may be permitted, subject to the use of anonymisation techniques or approval by an established Ethics Committee. This permitted secondary use of health data should lead to advances in AI in the Maltese health sector.
In summary, all relevant public authorities and bodies are keeping an eye on developments in their areas of interest while, at the same time, awaiting more concrete signs of the need to change the status quo of the legal frameworks for which they are responsible. Of course, the discussions taking place at pan-European level and among supervisory authorities will determine how the responsible authorities and the legislature behave going forward.
Maltese regulators
The Malta Digital Innovation Authority (MDIA) was set up as a public authority in 2018 to lead and advise the government on developments and initiatives in the innovative technology space, including AI. It has developed and is revising a national AI Strategy for Malta and is also leading a legislative change allowing for proper regulation, in line with the EU AI Act.
Back in 2019, the MDIA launched what it described as ‘the world’s first national AI certification programme aiming to develop AI solutions in an ethically aligned, transparent and socially responsible way’. The AI-ITA scheme established a certification programme similar to that found in today’s EU AI Act under which, depending on the risks envisaged in the use of the technology, developers and deployers could obtain certification through an MDIA-licensed technology systems auditor certifying that the technology met pre-established objectives and criteria.
Inevitably, Malta’s regulators, in particular the Malta Financial Services Authority (MFSA) and the Malta Gaming Authority (MGA), have been following and commenting on developments in the use of technology, including AI, in their focus sectors. Other legislation being harmonised at EU level will also have an impact on the use of AI in certain sectors. In this vein, the MFSA issued for public consultation Guidelines on DORA (the EU Digital Operational Resilience Act), updating its Guidelines on Technology Arrangements, ICT and Security Risk Management, and Outsourcing Arrangements. This is another key aspect affecting and regulating the use of artificial intelligence within financial services.
EU regulators
Guidance from the European Central Bank (ECB) and the European supervisory authorities – the European Banking Authority (EBA), the European Insurance and Occupational Pensions Authority (EIOPA) and the European Securities and Markets Authority (ESMA) – on the use of AI, cyber risk and digital resilience, will continue to be key to developments in Malta regulating the use of technology, including AI, in the financial sector, where, except for harmonized standards at EU level, regulation is expected to come in the form of directives issued by sectoral regulators. This approach is likely to be experienced across all sectors, including transport, health and education.
2. Commercial Use of AI and Machine Learning
2.1 Industrial Use
AI is widespread in the industries that form the basis of Malta’s economic activity. In particular, the large-scale use of AI is known to take place in the financial services (banking, insurance and investments), gaming (both i-gaming and video gaming) and health sectors, among others. Uses range from predictive AI (for example, in risk and creditworthiness controls, as well as prognostic medicine) to generative AI (in content and software development, as well as customer support and compliance).
Transport
In the public sector, the government has expressed the need to turn to AI to solve Malta’s traffic problems. It appears from published press releases that the government and relevant authorities are actively investing in smart management systems. A pilot project was launched, under the leadership of Transport Malta, with the following objectives:
- to reduce congestion and emissions;
- to identify trends in transport behaviour;
- to provide knowledge to enable intelligent journey planning and public transport scheduling;
- to create a smart private journey route (alongside third-party applications); and
- to assist with monitoring, policing and enforcement.
Health and Education
The health sector is also relying on AI to assist in the procurement and effective management of medicines. The Central Procurement and Supplies Unit (CPSU) has launched a pilot project for a forecasting application that will be a decision-making tool used by the CPSU to help with budgeting, planning the procurement process (tenders, quotes, etc.) and order planning. It seeks to predict future results based on past events and management knowledge. The application should provide CPSU management and procurement staff with knowledge and the underlying tools and techniques to help them better manage and respond to fluctuations in demand.
In education, the Ministry of Education is reportedly working on a pilot project that will develop an AI-powered adaptive learning system to help students achieve better educational outcomes through personalised learning programmes based on students’ performance, ambitions and needs. The pilot will also help teachers to build more formative assessments of students’ capabilities.
Tourism and Utilities
The Malta Tourism Authority is also reported to be launching a Digital Tourism Platform to enable more meaningful use of tourist data.
In a pilot project owned by the Ministry for Energy, Enterprise and Sustainable Development, AI algorithms will be used to collect, organise and analyse current data to discover patterns and other useful information related to water and energy use. The solution will use large-scale analytics and machine learning on customer data to help utility companies maximise resources and subsequently provide responsive customer service management in real time. At the same time, they can make real-time adjustments to achieve optimised generation efficiency.
Forecast maintenance models and scenarios will also be developed.
This project is expected to drive better efficiency, resilience and stability in Malta’s energy and water networks, laying the foundation for the next evolution of its smart grid network.
2.2 Involvement of Governments in AI Innovation
Malta AI Strategy and Vision 2030
Malta AI 2030 Strategy and Vision contains 22 action points in its education and workforce section, six dealing with legal and ethical issues, and 11 in the part focusing on ecosystem infrastructure. These are being disseminated by the MDIA together with other public entities.
The objectives in the education and workforce space are:
- understand and plan for the impact of technology and automation on the Maltese labour market;
- equip the workforce with stronger digital competences and new skills;
- build awareness among the general population of what AI is and why it matters;
- build awareness of AI among students and parents;
- foster and embrace the uptake of AI in education;
- develop teachers’ knowledge and understanding of AI in education;
- equip all students enrolled in higher education programmes in Malta with AI skills; and
- increase the number of graduates and postgraduates with AI-related degrees.
The legal and ethical objectives are:
- establish an ethical AI framework towards trustworthy AI;
- launch the world’s first national AI certification framework;
- appoint an advisory committee on technology regulation to advise on legal matters; and
- set up a regulatory sandbox for AI and a data sandbox for AI.
The objectives related to ecosystem infrastructure are:
- invest in Maltese language resources;
- incentivise further investment in data centres;
- establish a digital innovation hub (DIH) with a focus on AI;
- increase the range of open data availability to support AI use cases;
- provide cost-effective access to computing capacity;
- expand Malta’s data economy through 5G and IoT; and
- identify best practices to ensure national AI solutions.
Other Initiatives
In addition to the above, the MDIA, together with the Ministry of Economy and other constituted bodies, such as TechMT (an industry/public partnership), is playing a central role in promoting AI initiatives. From the launch of sandboxes (such as the MDIA technology assurance sandbox) to the setting up of business incubators (such as the DIH) and the making available of digital innovation grants, AI research grants and start-up funds, this network of bodies has been supporting technology development and innovation, including the development and uptake of AI.
In addition, under a project to be funded by the EU, the MDIA, the Malta Council for Economic and Social Development (MCESD) and the University of Malta have created a hub (Malta – EDIH) where the full set of European Digital Innovation Hub services is provided on an open, transparent and non-discriminatory basis, targeting SMEs, small mid-cap companies and public sector organisations. Public workshops are organised within the Hub to facilitate a two-way dialogue between AI experts and industry.
3. AI Specific Legislation and Directives
3.1 General Approach to AI Specific Legislation
The MDIA was set up in 2018 through the Malta Digital Innovation Authority Act (Chapter 591 of the Laws of Malta) with the aim of regulating innovative technology through the issuance of conformity certificates (both mandatory and voluntary). Its mandate was further defined by the Innovative Technology Arrangements and Services Act (Chapter 592 of the Laws of Malta). Originally focused mainly on regulating distributed ledger technology (DLT), its mandate was quickly extended to other forms of innovative technology, including AI.
Initially, Malta took a proactive and innovative approach to regulating AI within its jurisdiction. In October 2019, Malta released the Strategy and Vision for Artificial Intelligence in Malta 2030. This strategy described the policy the country would adopt in the following years in order to ‘achieve a strategic competitive advantage in the global economy as a leader in the field of AI’. The overall vision of the strategy is threefold. First, it focuses on building infrastructure that promotes investment in AI applications and R&D. Second, it explores how these AI applications can be deployed in the private sector and, third, it promotes the uptake of AI in the public sector in order to maximise the overall benefit that can be derived from this innovative technology. The strategy is constantly being updated, and a review taking into account various recent developments is expected to be issued soon.
From a regulatory perspective, the strategy included an ethical AI framework (see 3.3 Jurisdictional Directives) as well as a national AI certification programme. A Technology Regulation Advisory Committee was also set up to act as a reference point for issues related to AI laws and regulation, as well as to help create regulatory and data sandboxes.
The AI Sandbox programme, which ensures that AI systems are developed in line with technology-driven control objectives, is one of the cornerstones of the 2030 vision.
The laws governing the functions and scope of the MDIA are also currently being revised to better equip the Authority to meet its obligations and objectives going forward. In particular, the revisions make room for the introduction of local legislation needed to complement the AI Act once it enters into force.
To date, the regulatory approach remains optional: developers are encouraged to make use of regulatory sandboxes to test whether their technology will withstand the scrutiny of mandatory regulation once it enters into force in the form of harmonised EU laws and standards.
Apart from those legislative developments mentioned elsewhere in this chapter, to date, no specific, local laws on AI have been drafted, nor have laws related to intellectual property, data protection or other areas that are central to AI been amended to meet the challenges posed by technology. Regulatory authorities are expected to lead developments in this space, in particular in the area of financial services and insurance.
3.2 Jurisdictional Law
No AI-specific legislation has been enacted in Malta. Legislative preparatory work is ongoing to enable the introduction of the AI Act, which will have direct effect in Malta.
3.3 Jurisdictional Directives
Back in October 2019, an ethical AI framework for the development of secure and trustworthy AI was published as part of the Strategy and Vision for AI in Malta 2030. This non-binding AI framework was essentially a set of AI governance and control practices that were based on four guiding principles. First, AI systems should allow humans to maintain full autonomy while using them. Second, AI systems should not harm humans, the natural environment, or any other living beings. Third, the development, deployment and use of AI systems should always be in line with the principle of fairness. Finally, one must be able to understand and challenge the operations and outputs of AI systems.
This AI framework reflected the aspirations of Maltese policymakers to strike a balance between endorsing the use of AI technology, while also ensuring its safe use in relevant industries.
3.4 EU Law
3.4.1 Jurisdictional Communities
To date, all AI-specific legislation is in draft form. Malta has not yet legislated to allow the transposition of AI-related directives and to cater for those measures in the EU AI Act that require national regulation and coordination between authorities. This is because draft national laws are currently being discussed and consulted on between ministries and interested public bodies, although they have not been made public. Enabling legislation (the Malta Digital Innovation Authority (Amendment) Act), which will allow the entry into force of subsidiary legislation to regulate such issues, is currently undergoing Parliament’s second reading. It will be enacted once the third reading is completed in the second half of 2024, and artificial intelligence-related regulations are expected to be enacted shortly afterwards.
Under current legislation (the Innovative Technology Arrangements and Services Act (Chapter 592 of the Laws of Malta)), developers of AI solutions can voluntarily obtain certification of their technology, whereby the MDIA certifies that the technology meets predetermined control objectives. Under the Technology Assurance Assessment Framework, applicants must appoint a systems auditor from a list of auditors certified by the MDIA as competent to audit AI systems, who verifies whether the system meets the published criteria. This system, originally designed for auditing DLT systems and adapted for other forms of innovative technology, is similar in concept and process to that set out in the AI Act, and the control objectives are expected to be similar to those that will be set under that framework.
3.4.2 Jurisdictional Conflicts
Upon the entry into force of the AI Act and any other piece of EU legislation aimed at regulating AI, Maltese laws, initiatives (such as the above-mentioned technology certification) and processes inconsistent with these harmonised rules will be disapplied.
There would appear to be no laws that would cease to exist once the AI Act enters into force. As mentioned in 3.4.1 Jurisdictional Communities, laws are currently being amended to allow for lighter and less burdensome legislative processes to enact new laws and amend current ones that may be required to supplement the aforesaid AI Act.
3.5 US State Law
This is not applicable in Malta.
3.6 Data, Information or Content Laws
Upon entry into force of the AI Act and any other EU legislation aimed at regulating AI, Maltese laws, initiatives (similar to that of technology certification referred to in 3.4.1 Jurisdictional Communities) and processes that are inconsistent with these harmonised rules will be disapplied.
There would appear to be no laws that would cease to exist once the AI Act enters into force. The laws are currently being amended to allow for lighter and less burdensome legislative processes to enact new laws and amend current ones that may be needed to supplement the aforementioned AI Act.
3.7 Proposed AI-Specific Legislation and Regulations
As mentioned in 3.4.1 Jurisdictional Communities, draft laws to amend the Malta Digital Innovation Authority Act are currently being put before Parliament and are expected to allow subsidiary legislation to be introduced to remove any inconsistencies in the law that may hinder the proper operation of the AI Act and any other technology-specific EU legislation. As an EU member state, Malta will adopt all other EU laws that may have an impact on the use of AI. Currently, under the Innovative Technology Arrangements and Services Act, one can apply for technology assurance certification, including in relation to AI solutions. Of course, this scheme will be limited to areas that do not coincide with the certification requirements provided for by the AI Act. That said, there seems to be little incongruence between the said scheme and the AI Act requirements, and the MDIA has positioned the scheme, which is a voluntary one, as a means for developers of AI solutions to test their solutions from a regulatory perspective in preparation for obligations that may arise under the AI Act.
With the growing relevance of generative AI, it is also possible that IP laws will be amended to allow for the creation of certain ownership rights in AI-generated works. This would be particularly relevant for the i-gaming and game development sectors, which are important to Malta’s economy. Although there have been discussions and proposals in this regard, it is too early to say which position the government will adopt.
4. Judicial Decisions
4.1 Judicial Decisions
The Maltese courts have not had the opportunity to address the legal challenges posed by AI, particularly with regard to intellectual property rights and harms arising from the use of AI solutions. Decisions of foreign courts in those jurisdictions on whose laws Maltese law is modelled would be of significant importance and offer guidance to Maltese courts when deciding these unexplored issues. Therefore, UK court judgments on intellectual property rights and judgments of the Italian and French courts in relation to contractual fault and damages arising from the use of AI would be of interest to the courts in Malta.
4.2 Definitions of Technology
The Maltese courts have not had to grapple with definitions of AI and have not shed any light on what should constitute AI. It is understood that the definition of AI may change depending on the legislation being applied. It is important to note that the Maltese legal system does not expressly recognise the principle of judicial precedent, although court judgments, especially those of the Court of Appeal and the Superior Courts, act as a source of interpretation of the law. The Maltese courts tend to adopt a positivist approach to the application of the law and do not seek to provide interpretations that go beyond what is found in the particular law they are applying. Therefore, a universal interpretation of AI that applies to all legal instruments cannot be expected. The legislature and the regulatory authorities, acting within their given powers, are free to adopt different meanings of the term ‘artificial intelligence’ and may define the term in the laws, directives, decisions and policies that they draw up. The courts will then follow such definitions when applying those laws, directives, decisions and policies, depending on the subject matter of the case before them.
5. AI Regulatory Oversight
5.1 Regulatory Agencies
The MDIA has been entrusted by the government of Malta to lead AI initiatives and policies. It acts as an advisor to the government on all matters related to AI and cooperates with other public authorities and bodies that have a role to play in regulating this technology in the sectors for which they are responsible.
The MDIA formulated Malta’s AI strategy (see 2.2 Involvement of Governments in AI Innovation for more details) and is currently implementing the various action points in cooperation with other stakeholders.
The MFSA and MGA are also expected to play a key role in shaping the use of AI in the financial and gaming sectors, which are key industries in Malta. The Ministry of Health and Active Ageing, acting through various units tasked with coordinating and leading projects for the said Ministry, will also have an important role to play. Transport Malta will equally be instrumental in regulating the use of AI-enabled autonomous vehicles and means of transport, including drones.
The Office of the Information and Data Protection Commissioner (IDPC) will continue to monitor developments related to the use of personal data in and by AI and will regulate these issues according to coordinated positions at the level of the European Data Protection Board (EDPB).
5.2 Definitions of Technology
With greater harmonisation at EU level, it is expected that all legal instruments will converge on the definition of AI provided in the AI Act. This does not mean that applications falling outside that definition will not be governed by technology-agnostic rules, in the same way as applications classified as AI under the definition. The AI Act, the Cybersecurity Act, the Digital Operational Resilience Act (DORA) and the Network and Information Systems Directive (NIS 2), among other instruments, all regulate, to varying degrees and from different angles, the use of technology, including AI. The same approach will be reflected at national level for all industries and sectors.
Therefore, it is not foreseen that there will be conflicts in the definition of AI that could lead to conflicting obligations stemming from different regulatory frameworks. However, AI deployers will need to carry out a 360° evaluation of all legal obligations applying to the use of such technology in the given sector and in the circumstances in which they operate.
The fact that, despite greater harmonisation at EU level, different laws may be applied by different jurisdictions leads to greater complexity for AI deployers, who often provide services across jurisdictions and even continents. This is of particular relevance for Malta, where several developers and deployers established in the jurisdiction provide their services to clients in other jurisdictions.
The EU AI Act, which will be directly applicable in Malta, defines an “AI system” in Article 3(1) as “a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments”.
So far, the only attempt by the MDIA to define AI was made in the AI-ITA Guidelines, which stated that AI can be considered an Innovative Technology Arrangement (ITA) if it consists of software, the logic of which is based on underlying datasets, and which exhibits one or more of the following functions:
- the ability to use the acquired knowledge flexibly in order to perform specific tasks and/or achieve specific targets;
- evolution, adaptation and/or production of results based on interpretation and processing of data;
- systems logic based on the process of knowledge acquisition, learning, reasoning, problem solving, and/or planning; and
- forecasting and/or approximation of results for inputs not previously encountered.
The same guidelines further specify that AI can be recognised as an ITA by the MDIA if one or more of the following techniques and/or algorithms apply:
- machine learning and variations thereof (e.g., deep learning);
- neural networks or variations thereof (e.g., convolutional neural networks (CNNs) or recurrent neural networks);
- pattern recognition (e.g., computer vision);
- natural language processing (NLP);
- forecasting systems;
- fuzzy systems;
- expert systems;
- optimisation algorithms (e.g., evolutionary and/or gradient algorithms);
- probabilistic classifiers (e.g., naïve Bayes); and
- cluster analysis algorithms (e.g., k-means clustering).
This voluntary certification scheme will be superseded by the framework of the AI Act and, at best, is expected to be refined to serve the needs of applications that do not require certification under the AI Act.
5.3 MDIA Regulatory Objectives
In fulfilling its mandate, the MDIA seeks, inter alia, to promote:
- governmental policies favouring the deployment of ITAs within the public administration;
- ethical and legitimate criteria in the design and use of ITAs to ensure quality of service and security;
- transparency and auditing in the use of ITAs;
- fair competition and consumer choice; and
- the overall advancement and uptake of ITAs.
The MDIA also seeks to prevent:
- misuse of ITAs by ensuring that ITA standards meet consumers’ legitimate expectations;
- violation of the data protection rights of users, consumers and the public at large;
- the use of ITAs for money laundering and terrorist financing purposes; and
- using ITAs in a way that could damage Malta’s reputation.
MFSA
The MFSA regulates banks, financial institutions, payment institutions, insurance companies and insurance intermediaries, investment services companies and collective investment schemes, securities markets, recognized investment exchanges, trust management companies, company service providers and pension schemes. Its mission is to safeguard the integrity of markets and maintain stability in the financial sector for the benefit and protection of consumers. The MFSA collaborates with other local and foreign bodies, government departments, international organisations, ESMA, EBA, EIOPA, colleges of supervisors, the European Systemic Risk Board (ESRB), the ECB, the Single Resolution Board (SRB) and other entities exercising regulatory, supervisory, registration or licensing functions and powers under any law in Malta or abroad.
Other regulators
As the regulator for the gaming industry in Malta, the MGA seeks to promote and ensure gaming is fair and transparent, prevents crime, and protects minors and vulnerable players.
The IDPC is the national data protection authority. Its role is to monitor and ensure that the necessary levels of data protection are implemented in Malta, while investigating and taking corrective measures against those entities that fail to adhere to their obligations.
5.4 Enforcement Actions
While, to date, in view of its mandate, the MDIA is not known to have imposed fines or taken enforcement action, the MFSA, MGA and IDPC have all taken corrective measures, including suspending or cancelling licences, and have imposed fines for breaches of the law. However, no fines appear to have been imposed, nor action taken, against industry actors as a result of their use of AI solutions.
6. Standardisation Bodies
6.1 National Standardisation Bodies
Although the various government authorities discussed in this chapter set standards for the sectors they oversee, to date no standards have been imposed specifically in relation to the use of AI. Nor do professional representative bodies appear to have set standards for the use of AI in their professions.
6.2 International Standardisation Bodies
Until standards are harmonised across jurisdictions, it is expected that standards applied in one jurisdiction will not automatically be accepted by regulatory bodies in other jurisdictions. This said, regulatory authorities within the EU collaborate closely within their pan-European bodies of regulators, such as EIOPA, the EDPB, ESMA and the EBA. It would be expected that the standards set by these authorities would find equal application in Malta.
7. Government Use of AI
7.1 Government Use of AI
As discussed in 2.1 Industrial Use, the government has embarked on a number of pilot projects in which the use of AI for the purposes mentioned therein is being tested. Apart from these, no other use of AI by the government has been publicised.
7.2 Judicial Decisions
No decision relating to the use of AI by the government has been given by the Maltese courts.
7.3 National Security
The use of AI in matters of national security has not been publicised.
8. Generative AI
8.1 Emerging Issues in Generative AI
To date, Maltese law, regulators and courts have not dealt with the complex legal issues surrounding generative AI. It is expected that, under general principles of contract law, the courts will uphold the limitations embedded in licences and in the terms and conditions for the use of generative AI solutions.
Copyright and Generative AI
In cases where the use of generative AI is not bound by licensing conditions, whether copyright can arise in works generated depends on the originality of the works generated and the level of human intervention in the generation of the works.
If the AI-generated work constitutes a substantial copy of an original work and is used by the entity that, through its design, generated the work using a third-party model, that entity infringes the copyright of the author of the original work, irrespective of the entity’s knowledge or intention in creating a copy of the original work. The only exception is where one of the exhaustive exceptions to copyright protection contained in Article 9 of the Copyright Act (Chapter 415 of the Laws of Malta) applies. These include acts of reproduction of literary works by public libraries which are not for economic advantage, the reproduction of works for teaching or illustration purposes without compensation, and the reproduction or translation of works to make them accessible to the disabled without compensation.
Similarly, if a model is trained on copyrighted works, without the authorisation of the copyright owner, developers are liable for copyright infringement. This may result in the copyright owner prohibiting the commercial use and/or deployment of the AI model.
Where a work which would normally qualify for copyright protection is created entirely by an autonomous process without significant human intervention, copyright does not arise. This is because copyright arises where the author, or one of the joint authors, of an artistic, literary or audiovisual work qualifying for copyright protection is a national of, or is domiciled or permanently resident in, or, in the case of a body of persons, is established in, Malta or a state in which copyright is protected under an international agreement to which Malta is also a party. The term ‘author’ is defined as ‘the natural person or group of natural persons who created the work eligible for copyright’. The creation of a work by an autonomous process therefore removes the ‘author’ and, consequently, copyright cannot arise in it.
Personal Data and Generative AI
Another risk posed by generative AI relates to the use of personal data both in training the model and in interacting with it at the prompt stage. The use of personal data in the training of a model must comply with one of the legitimate grounds under Article 6 GDPR. This is often not the case. The situation is further aggravated if special categories of data are used in the training of the model. It is with this in mind that the Processing of Personal Data (Secondary Processing) (Health Sector) Regulations (Subsidiary Legislation 528.10) were enacted. Under this law, where the use of health data by public health providers for purposes other than the original intended use, which purposes are listed in the law, may lead to benefits to the health system in Malta, such use may be permitted, subject to the use of anonymisation techniques or approval by an established ethics committee.
It is also important to note that Maltese law makes no exception to Article 22 GDPR. Under this provision, the data subject may object to the fully automated processing of his or her data, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her. Nor does Maltese law make any exception to the data subject’s rights of access to, rectification of and erasure of his or her personal data used in the training of AI models.
The risks of using personal data – and even more so, data that is covered by professional secrecy or legal privilege – when using generative AI, cannot be ignored. No guidance in this regard has yet been issued by Maltese regulators or professional representative bodies, although it remains the responsibility of professionals to ensure that protected or privileged information is not disclosed or violated through the use of technology.
8.2 IP and Generative AI
AI systems, being computer programs and algorithms, are eligible for copyright protection. Under Article 2 of the Copyright Act, a computer program is defined as a literary work and, subject to its original character, enjoys copyright protection until 70 years after the end of the year in which the author dies.
Data collected for the purpose of training an AI model may also enjoy sui generis protection rights related to databases. Under Section 25 of the Copyright Act, ‘the maker of a database who can demonstrate that there has been qualitatively or quantitatively a substantial investment in either the obtaining, verification or presentation of the contents of the database shall have, irrespective of the eligibility of that database or its contents for protection by copyright or other rights, the right to authorise or prohibit acts of extraction or re-utilisation of its contents, in whole or in a substantial part, evaluated qualitatively or quantitatively’.
As generative AI models become more accurate, the way in which a user prompts the model becomes a valuable asset that the user may wish to protect. This protection can be achieved by treating prompts as trade secrets under the Trade Secrets Act (Chapter 589 of the Laws of Malta). A trade secret is defined as information that:
is secret in the sense that it is not, as a body or in the precise configuration and assembly of its components, generally known among or readily accessible to persons within the circles normally dealing with the kind of information in question;
has commercial value because it is secret; and
has been subject to reasonable steps under the circumstances, by the person lawfully in control of the information, to keep it secret.
8.3 Data Protection and Generative AI
The Maltese Data Protection Act (Chapter 586 of the Laws of Malta) and the subsidiary legislation made thereunder do not specifically address the rights of data subjects in an AI context. Nor do they create any notable exceptions to the position under the GDPR. The principles of data minimisation, purpose limitation and legitimate grounds for processing under Articles 6 and 9 GDPR, as well as the rights of data subjects under Articles 12-22 GDPR, all need to be carefully considered by developers involved in the training of models and by those operating AI systems alike. Human oversight and the ability to fulfil the controller’s obligations in relation to data subjects’ requests are principles that need to be followed at all stages of AI development and deployment. Anonymisation techniques are equally important measures to be considered, as promoted, inter alia, in the piece of Maltese legislation dealing with the use of personal data (medical records) for, among other things, the training of AI models: the Processing of Personal Data (Secondary Processing) (Health Sector) Regulations (Subsidiary Legislation 528.10).
9. Legal Tech
9.1 AI in the Legal Profession and Ethical Considerations
Legal technology is on the agenda of legal professionals. This brings ethical considerations, including the impact on professional secrecy and legal privilege when interacting with generative AI. The UK Bar Council guidance on generative AI captures these issues well. The Maltese regulator for lawyers (the Committee for Advocates and Legal Procurators within the Commission for the Administration of Justice) and the representative body of lawyers – the Bar Association – have not issued any guidance, although it is expected that they will do so soon. Until then, AI should be seen as a useful tool that comes with its own dangers and challenges and does not change the level of responsibility of lawyers to act ethically under the Code of Ethics governing the profession and their legal obligations arising from, among other pieces of legislation, the Professional Secrecy Act (Chapter 377 of the Laws of Malta) and the Code of Organisation and Civil Procedure (Chapter 12 of the Laws of Malta).
10. Liability for AI
10.1 Liability Theories
As mentioned in 1.1 General Legal Background, liability in respect of the use of AI will continue to be governed by tort principles and contract law under the Civil Code and the Commercial Code. The concepts of acting in good faith as a bonus paterfamilias and of fault under Article 1033 of the Civil Code will apply to the deployment of AI.
Under Maltese law, technology itself does not have legal personality. It is therefore the user or the developer who is ultimately liable for the harm caused by the use of AI. The determining factor is the cause of the injury suffered by the injured party: whether it was the result of the misuse of the technology or of a defect in the technology itself. Where the harm has been suffered by a third party, the latter may choose to act against the user of the technology that directly caused the harm or against the technology developer. Unless the developer is sued by the claimant directly, it is up to the user to turn to the developer to recover the damages that the user may have to pay the injured party.
In addition, the Product Liability Directive, which has been transposed into the Maltese Consumer Affairs Act, provides for a concept of strict liability whereby the producer (and in certain cases the seller) of the AI system can be held liable for damage caused by a defect in their product, provided that the injured party proves the damage, the defect and the causal link between the two. The European Commission has, however, identified issues with the application of the Product Liability Directive to AI systems and, for this reason, has been working on an AI Liability Directive, while political agreement has been reached on amendments to the Product Liability Directive. Through these amendments, AI, as “software”, has been definitively included in the scope of this piece of legislation. AI system providers will therefore potentially be liable for any defective AI systems placed on the market. Manufacturers of AI systems will also be liable for defects in the free and open-source software they integrate into their systems.
10.2 Regulatory
There are currently no proposed amendments to the liability regime for the development and deployment of AI. However, we expect the legal provisions necessary to implement the amendments to the Product Liability Directive to be drafted and discussed in Parliament in the coming months.
11. Legal Issues With Predictive and Generative AI
11.1 Algorithmic Bias
Algorithmic bias is one of the well-identified and documented risks of AI. Although no standards have been mandated by Maltese regulators and/or law to avoid the risk of algorithmic bias, AI developers are guided by industry best practice. The obligations of explainability, transparency and auditing imposed through the AI Act will act to minimise these risks in a harmonised way.
Algorithmic bias can be particularly relevant in the areas of employment, creditworthiness and insurance assessment, among others. Where bias in the algorithm causes damage, the abovementioned liability principles apply.
11.2 Data Protection and Privacy
The combination of legal frameworks dealing, directly or indirectly, with AI is intended to work together to provide comprehensive protection to persons (natural or legal) who are the subjects of the deployment of AI systems. Among these, data protection laws and principles remain of paramount importance. The transparency and explainability provisions in the AI Act, together with the information obligations in the GDPR, should empower data subjects to make a conscious decision as to whether or not to allow the use of their personal data in particular circumstances and for the purposes explained.
Article 22 GDPR, which empowers the data subject to object to the fully automated processing of his or her data, including profiling, where this may give rise to legal effects concerning him or her or similarly significantly affect the data subject, is significant.
The extent to which AI systems are being integrated into every aspect of life and within different sectors necessarily brings with it the need to impose a greater focus on the ‘by design’ uptake of processes and procedures ensuring that the rights of data subjects are respected throughout the entire AI deployment lifecycle. In the absence of legislative amendments to explain these obligations more clearly in an AI context, harmonised guidance from the EDPB and other such bodies is expected to help shape the future of the way data is processed in an AI environment. The relevance of such guidance can be seen, for example, in the use of personal data of AI users captured and used by generative AI models.
As data, including personal data, is at the centre of AI and with AI so pervasive, the legislative frameworks dealing with network resilience and cybersecurity gain critical importance. DORA and the NIS 2 Directive are among the EU legal frameworks complementing the Cyber Resilience Act and the Cybersecurity Act in this area.
11.3 Facial recognition and Biometry
The use of AI for facial recognition and biometrics is known to be one of the most sensitive uses of this technology and poses inherent risks to the privacy of individuals. Article 9 GDPR imposes a high standard of care on the use of biometric data, which is treated as a special category of personal data.
The AI Act also deals to a large extent with the use of facial recognition, listing a number of uses of such techniques as prohibited, including real-time facial recognition in public places (with certain exceptions), predictive policing, internet scraping of facial images to create databases, and emotion inference at work or school. When not prohibited, facial recognition and biometrics are considered high-risk uses under Annex III.
Given the jurisdictional scope of the AI Act, similar to that of the GDPR, together with the level of fines that can be imposed in cases of infringement, it is expected that the use of biometrics and facial recognition will be regulated and harmonised to a large extent.
In addition to these specific laws, the use of facial recognition and biometrics engages the fundamental human right to respect for private and family life (Article 8 of the European Convention on Human Rights). The State has the obligation to ensure that this right is safeguarded, and if the police or any other State institution violates it, the State will be held liable in damages towards the individual whose rights have been violated.
11.4 Automated Decision-Making
As mentioned in 8.1 Emerging Issues in Generative AI and 11.2 Data Protection and Privacy, the use of fully automated decision-making, including profiling, needs to be clearly explained to data subjects, and they have the right to object to it under Article 22 GDPR where it may produce legal effects concerning them or similarly significantly affect them. In addition, a data subject has the right to know how the data has been used and how the results were produced. The ‘black box’ risk associated with full automation is therefore one that cannot be underestimated by AI deployers, who remain responsible for the results produced by the system and the faults that may result from it.
Risks related to automated decision-making arise not only where personal data is involved. Automated algorithmic trading, creditworthiness or insurance decisions are equally risk-prone and can lead to AI deployers bearing responsibility for the wrong decisions taken by the AI system. As mentioned above, the principles of fault and of negligence in the performance of contractual obligations may apply.
Paradoxically, it is in riskier areas such as health, education, finance and mobility that the greatest benefits of automation are likely to appear. Until the technology becomes fully reliable, with built-in auditable checks and balances that cannot be overridden and that control the use of the technology itself, human oversight remains of paramount importance and the technology should not be allowed to replace the professional. It is this human oversight, and the ability of the human professional to make the final decision, that aligns automation in AI with the professional ethics and regulatory requirements of regulated professions.
11.5 Transparency
Transparency obligations underpin the professional use of AI in all sectors. This results from the patchwork of laws regulating the industrial use of technology, whether the AI Act, the GDPR or sector-specific regulation. The use of chatbots and other technologies providing services that are generally provided by natural persons is no different. Users should be made aware that they are interacting with AI technology and should be given the opportunity to stop the communication or to request to interact directly with a natural person.
11.6 Anti-competitive Conduct
AI deployers are responsible for actions taken by them on the back of the technology used. If the use of AI leads to anti-competitive behaviour, whether abuse of a dominant position or collusion, the deployer is liable for that behaviour. Competition law does not distinguish or make exceptions for anti-competitive behaviour resulting from automated functions in a technology. Even in this scenario, therefore, human oversight remains imperative.
12. AI Procurement
12.1 AI Technology Procurement
AI deployers are ultimately responsible for the use of the technology in their business practice. They should therefore ensure that the various obligations to which they are subject are reflected back-to-back in the procurement agreement with the AI provider. In this way they ensure that they can turn to the supplier if they are obliged to pay damages resulting from their use of the technology. In addition, certain sector-specific laws and regulatory directives may impose obligations on licensed entities in relation to the outsourcing agreements they have with third parties, including AI providers. This is the case, for example, with DORA and the ‘Guidance on Technology Arrangements, ICT and Security Risk Management Arrangements and Outsourcing’ issued by the MFSA (which is based on the EBA Guidelines) with regard to licensed financial service providers, where certain obligations need to be inserted in outsourcing agreements.
13. AI in Employment
13.1 Hiring and Termination Practices
Automation in the field of employment is one of the areas where Article 22 GDPR, relating to automated decision-making, is of critical importance. Fully automated processes that lead to the selection of candidates for a job are legally risky and can give rise to discrimination, challenge and, ultimately, damages payable by employers.
13.2 Employee Evaluation and Monitoring
The same concerns that arise in relation to hiring and termination practices may also apply to the analysis and monitoring of employee performance. In addition, the use of AI tools to make inferences about an employee’s emotions at work is prohibited under the AI Act.
14. AI in Industry Sectors
14.1 Digital Platform Companies
The use of AI in digital platforms is pervasive in today’s world. Digital platforms thrive on the data they obtain from their users. Consequently, data protection legislation and enforcement remain essential to curb abuse. Other EU instruments of note that will help shape the future of this industry are the Digital Markets Act and the Data Act, which, in their own ways and from their own angles, seek to mitigate the conglomeration and control of data by gatekeepers.
14.2 Financial Services
The financial services industry is one of the largest net beneficiaries of AI, and the use of technology is widespread in the sector, whether in the provision of services, for marketing purposes or internally for risk management.
This highly regulated industry is governed by a mix of laws and regulations that address and contain the risks of using technology, including AI, from different angles. The main risks identified by the MFSA in the ‘Artificial Intelligence’ edition of its ‘FinSights: Enabling Technologies’ knowledge series are: accountability, black box algorithms and lack of transparency, data quality, (restricted) competition, (inconsistency in and fragmentation of) regulation, and discrimination.
The AI Act itself deals with a number of these issues, mandating transparency, explainability and auditing to varying degrees depending on the level of risk posed by the use of the technology. It also classifies creditworthiness and life insurance assessments as high-risk uses, to which greater scrutiny and more onerous obligations apply.
In addition, DORA obligations, including appropriate risk management, incident response preparedness (including resilience testing), incident reporting obligations and ICT third-party risk management, will apply, as will the MFSA Guidance on Technology Arrangements, ICT and Security Risk Management (subject to modification to supplement the DORA obligations).
Similarly, the GDPR obligations of transparency, explainability, data minimisation and purpose limitation, together with the rights of the data subject, including the right to object to the use of their data by fully automated systems which may produce legal effects or significantly affect the data subject, also apply to the use of AI.
Confidentiality and professional secrecy considerations have an impact on the interaction of licensed providers with generative AI and large language models, while the Data Act obligations relating to the control rights of the data owner, where the IoT is being deployed, may also apply.
To this end, given the complexity of regulation in this industry, actors in the sector are advised to take a 360° view of the regulatory implications resulting from their use of AI.
14.3 Healthcare
Healthcare is known to be another high-risk scenario for AI use. Patient rights, professional liability, the risks of negligence, ethical considerations, professional secrecy and the use of highly sensitive health data are all issues that must be carefully considered when healthcare professionals interact with AI. In this regard, Malta has recently enacted the Processing of Personal Data (Secondary Processing) (Health Sector) Regulations (Subsidiary Legislation 528.10) to enable the exploitation of health data through technology in a controlled environment. See 3.6 Data, Information or Content Laws.
14.4 Autonomous Vehicles
It is still early days for the testing of autonomous vehicles on Maltese roads, despite reports of intended tests in the field of public transport and of an AI-led traffic management system. Transport Malta does not appear to have proposed any changes to the Highway Code or to the laws requiring vehicles to be driven by persons who hold a licence issued in accordance with the law.
14.5 Manufacturing
Product safety requirements apply regardless of the use of AI made by the manufacturer.
14.6 Professional Services
As mentioned in 9.1 AI in the Legal Profession and Ethical Considerations, issues of professional secrecy, confidentiality and, in the case of lawyers, legal privilege, are among the legal and ethical challenges that would need to be carefully considered by professionals when interacting with and using AI. It is expected that professional representative bodies will establish standards to be followed.
15. Intellectual Property
15.1 Applicability of Patent and Copyright Law
As mentioned in 8.1 Emerging Issues in Generative AI, under the Maltese Copyright Act, for copyright protection to arise, the “author” needs to be a natural person. Consequently, works generated by AI do not qualify for copyright protection unless a natural person can prove, if challenged, that he or she substantially participated in the creation process. There is currently no Maltese court ruling on this matter.
A similar interpretation would apply to the concept of inventor under the Patents and Designs Act (Chapter 417 of the Laws of Malta) where the right to a patent applies to the ‘inventor’ and only ‘a natural person or legal entity may file a patent application’ (Article 9).
15.2 Applicability of Trade Secrecy and Similar Protection
As mentioned in 8.2 IP and Generative AI, instructions used in the generation of works through AI can be protected as trade secrets.
15.3 AI-Generated Works of Art and Copyright
Although there are still discussions on the need to grant protection to AI-generated works that do not infringe third party rights, to date no legislative steps have been taken in this direction by Maltese legislators.
15.4 OpenAI
The use of OpenAI tools to create works and products means that the user cannot know whether the created work infringes third-party rights in works used in the machine-learning process. The use of such an infringing work exposes the user to potential liability for infringement of third-party rights despite his or her ignorance of the fact. In addition, the use of the generated work must comply with any licence conditions attached to the use of the OpenAI tools.
16. Advising Corporate Boards
16.1 Advising Directors
Despite the way in which AI is changing how we operate and live, from a legal point of view AI, like any technology, is seen as a tool in the hands of those who choose to use it. In this sense, the traditional principles of our law, which place the responsibility for using a tool on the professional or person using it, remain the basis of liability considerations.
In addition, one must be aware of the complex regulatory reality in which industries operate. Regulation does not operate in silos; a holistic approach should be taken to the regulatory obligations arising under the various legal frameworks and instruments, and these should be taken into careful consideration when using AI.
17. AI Compliance
17.1 AI Best Practice Compliance Strategies
A holistic legal and regulatory due diligence/impact assessment, regularly reviewed in light of changes in operations and/or the law, is essential in the complex world of interaction with AI. This leads to a full understanding of the obligations expected of technology users and will help establish processes and procedures to ensure that those obligations are honoured. An appropriate compliance culture must then be fostered in the organisation through training and awareness programmes.
Source: European Digital Skills & Jobs Platform