A good contract for the implementation of an AI system is one that is tailored to the selected project management methodology.

Due to the non-deterministic nature of machine learning (“ML”) models, including large language models (“LLMs”), it is difficult to determine, before the start of a project, the qualitative parameters that an AI system must meet (e.g., how well an AI-supported chatbot will help customers, or whether its answers will be accurate always or only 90% of the time). Additionally, as is often the case where business and technology meet, the final form of a solution is not predetermined.

Therefore, it is recommended to follow a process for creating an AI system that includes prototyping, building a minimum viable product (MVP), testing, and only then scaling the solution. To achieve this, agile methodologies are used as they allow for an iterative approach to the project and solution testing.


Time & Material contract

The use of agile methodologies therefore determines the shape of the contract. A suitable solution for agile methodologies is a Time & Material contract. This contract corresponds to a contract for the provision of services under the provisions of the Polish Civil Code regarding a contract of mandate.

An alternative is a framework agreement, which defines the general framework of cooperation, in particular the method of commissioning individual tasks and the terms of payment. This type of contract organises the process of concluding implementing agreements, under which individual tasks (e.g. adding a specific feature) are commissioned for performance.

In the case of a fixed price contract (contract for a specific work under the Polish Civil Code), which provides for the achievement of a predetermined result and price, the result has to be precisely defined, which is difficult in the case of AI implementation projects – at least for some projects and for the time being. A recommended element of such contracts is the change request procedure.

A proof of concept (PoC) can be a helpful tool in describing the subject-matter of the contract, especially in the case of AI systems. A PoC involves creating a simplified, preliminary version of a solution, often in the form of a prototype or pilot implementation, with a view to checking whether the solution works as intended.



When selecting the right type of contract, the AI project’s nature, purpose, scope, level of predictability, and solution development stage should be taken into account.


What issues should be addressed in an AI system implementation contract?

AI implementation contracts are essentially similar to other implementation contracts. However, several crucial issues are specific to AI system implementation contracts.

1. Which contract to choose?

The implementation of an AI system requires a flexible approach to both project management and the form of legal cooperation. Due to the non-deterministic nature of AI models and the difficulty in precisely determining the expected business result, it is recommended to use agile methodologies and conclude Time & Material contracts which provide for remuneration depending on the actual time spent on the implementation of tasks.

Alternatively, a framework agreement can be used, governing the general terms and conditions of cooperation, or – in more predictable projects – a fixed price contract. The latter, however, requires a very precise specification of the final result. In this context, mechanisms such as change request and proof of concept (PoC) become important as they allow for testing a solution before its full implementation.

2. Regulatory issues

Depending on the purpose of the AI system being implemented, the provider and the user may have additional regulatory obligations (e.g. obligations under the DORA regulation for providers to the financial sector). The contract should specify which legal requirements apply to the software to be provided – this will help avoid doubts as to whether the provider should take those requirements into account when creating the solution.

First of all, it is necessary to determine into which category of AI systems under the AI Act the system to be implemented falls. AI is used in systems serving very different purposes, and the requirements for each category (prohibited, high risk, limited risk, minimal risk) vary. An assessment of the assumptions of the system to be implemented will make it possible to determine the requirements it must meet. If, during a project carried out, for example, in the Time & Material formula, the assumptions for the system change significantly, it will be necessary to re-assess them.

3. Data protection issues

Contracts for the implementation of a dedicated system differ significantly from contracts for the use of AI tools in the SaaS model.

With SaaS, the creator and provider of a solution must independently acquire the data to train the model. Under a contract for the implementation of a dedicated solution, the client can provide their own data for model training. Accordingly, the responsibility for data collection under the GDPR rests with different entities. In this context, it is necessary to determine the data sources for training and testing the model[1] and what their legal status will be after the training and testing are completed.

In the SaaS model, almost the entire burden of ensuring the security of data processed in the system rests with the provider. Where a model is implemented on the client’s infrastructure, the client is responsible for the security of that infrastructure and, depending on the arrangements, the provider may be able to shift part of the responsibility for system security to the client. In any event, when implementing an AI system for a client, it is important to conclude at least a data processing agreement and to ensure that appropriate security measures are in place for the client’s data that will be used.

4. Model-related issues

A key question is which model or models will be used in the system. The contract should, at least, specify whether a proprietary model will be developed or an existing one used. If an existing model is to be used, the terms and method of its use must be clearly defined — for instance, whether an open-source model will be installed on the client’s infrastructure, or whether access will be provided via the model provider’s API.
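To illustrate why the contract should pin this choice down, the minimal Python sketch below shows one way an implemented system can abstract over the two approaches: an open-source model hosted on the client’s infrastructure versus a model reached through a provider’s API. All class and parameter names here are hypothetical and purely illustrative; they do not correspond to any particular runtime or provider.

```python
from abc import ABC, abstractmethod


class ModelBackend(ABC):
    """Common interface so the rest of the system is independent of the model approach."""

    @abstractmethod
    def generate(self, prompt: str) -> str:
        ...


class LocalModelBackend(ModelBackend):
    """Hypothetical: an open-source model installed on the client's own infrastructure."""

    def __init__(self, weights_path: str):
        self.weights_path = weights_path  # model weights stay on-premises

    def generate(self, prompt: str) -> str:
        # Stub: a real system would call the client's on-premises inference runtime here.
        return f"[local model answer to: {prompt}]"


class ApiModelBackend(ModelBackend):
    """Hypothetical: a model accessed via an external provider's API."""

    def __init__(self, endpoint: str, api_key: str):
        self.endpoint = endpoint  # terms of use governed by the provider's licence
        self.api_key = api_key

    def generate(self, prompt: str) -> str:
        # Stub: a real system would make an authenticated HTTP request here.
        return f"[API answer from {self.endpoint} to: {prompt}]"
```

Keeping the rest of the system behind such an interface makes it easier to change the model approach later without renegotiating the entire architecture.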

The costs of the available solutions must also be estimated, as initial and operating costs vary considerably depending on the approach.

5. Third-party models

When models are accessed via API, the issue of responsibility for the operation of the AI system becomes more complex. If an external model provider experiences issues and the API becomes unavailable, the implemented system may also fail to function properly (unless a contingency plan and a backup provider are in place). Providers implementing AI systems should seek to contractually exclude liability for such cases.
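As a non-authoritative sketch of what such a contingency plan can look like technically, the Python snippet below tries a sequence of model providers in order and falls back to the next one when a call fails; the provider callables and the error handling are assumptions made for the example.

```python
from typing import Callable, Sequence


def generate_with_fallback(prompt: str, providers: Sequence[Callable[[str], str]]) -> str:
    """Try each model provider in turn; fall back to the next one when a call fails."""
    last_error = None
    for call_model in providers:
        try:
            return call_model(prompt)
        except Exception as exc:  # e.g. a timeout or an unavailable API
            last_error = exc  # remember the failure and try the backup provider
    raise RuntimeError("all model providers failed") from last_error


# Hypothetical usage: an external API first, an on-premises backup model second.
# answer = generate_with_fallback(prompt, [primary_api.generate, local_backup.generate])
```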

If a third-party model is used, it is necessary to determine whether the client has the right to use it, which requires checking the licence. The contract for the AI system implementation should therefore require that any models used by the provider in the implemented AI system be made available under open licences (preferably non-viral ones).

The chosen approach also determines how an adequate level of personal data protection must be ensured. If an AI system accesses the model via API, the client must verify the model provider’s level of security, conclude a data processing agreement, and ensure that its data is not used for model training.
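Alongside those contractual steps, a simple technical safeguard is to pseudonymise prompts before they leave the client’s infrastructure. The Python sketch below is a deliberately minimal illustration assuming naive regular expressions; a production deployment would rely on dedicated PII-detection tooling (note, for instance, that the example does not catch personal names).

```python
import re

# Deliberately naive patterns, for illustration only; real deployments should use
# dedicated PII-detection tooling covering many more identifier types.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\+?\d[\d \-]{7,}\d")


def pseudonymise(prompt: str) -> str:
    """Replace obvious personal identifiers before a prompt is sent to a third-party API."""
    prompt = EMAIL.sub("[EMAIL]", prompt)
    prompt = PHONE.sub("[PHONE]", prompt)
    return prompt


print(pseudonymise("Write to jan.kowalski@example.com or call +48 123 456 789"))
# -> "Write to [EMAIL] or call [PHONE]"
```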

6. Dedicated models

In the case of building a proprietary model, the contract should specify on whose infrastructure the model will be trained, as this entails significant costs.

In addition, it must be determined who will hold the rights to the model.

As mentioned above, it is also necessary to establish the data sources for training and testing the model and what will happen to them after the training and testing are completed.

What is the provider responsible for? Assessing the quality of the solution and the limits of liability

A major problem is measuring the results of an AI system and determining the causes of errors or suboptimal outcomes. This affects the limits of the contracting parties’ liability for the operation of the AI system.

There can be many reasons for a malfunction, such as infrastructure errors, model imperfections, or incorrect data. Therefore, transparency of the system’s operation and tools that make it possible to check why the system is not working properly are crucial. For high-risk systems, this is a requirement under the AI Act.
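By way of illustration, such transparency tooling often starts with structured logging of every model call, so that a later malfunction can be attributed to the infrastructure, the model, or the data. The Python sketch below is an assumption: the field names are invented for the example, and a real audit schema would follow the project’s own requirements (and, for high-risk systems, the logging obligations under the AI Act).

```python
import json
import logging
import time
import uuid

logger = logging.getLogger("ai_audit")
logging.basicConfig(level=logging.INFO)


def logged_call(model_name: str, model_version: str, call_model, prompt: str) -> str:
    """Wrap a model call with a structured audit record for later diagnosis."""
    record = {
        "request_id": str(uuid.uuid4()),
        "model": model_name,
        "model_version": model_version,
        # Log the prompt size rather than its content if prompts may contain personal data.
        "prompt_chars": len(prompt),
    }
    start = time.monotonic()
    try:
        output = call_model(prompt)
        record.update(status="ok", output_chars=len(output))
        return output
    except Exception as exc:
        record.update(status="error", error=repr(exc))
        raise
    finally:
        record["latency_s"] = round(time.monotonic() - start, 3)
        logger.info(json.dumps(record))
```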


A contract for AI system implementation should include provisions concerning, in particular: the contract model (Time & Material, framework agreement or fixed price), the regulatory requirements applicable to the system (including its classification under the AI Act), personal data protection, the model or models to be used and the rights to them, and the limits of the parties’ liability.

It is recommended to use a PoC and an MVP as tools to mitigate the risk of a mismatch between the solution and expectations. A well-prepared contract increases the chances of a successful implementation and minimises the risk of disputes between the parties.


Contact

Michał Pietrzyk – radca prawny (Attorney-at-law) | Senior Associate in the Transactional Team, the German Desk Team, the IP/IT Team and the Competition and Consumer Protection Team.


[1] In the case of training or fine-tuning the model.