AI system: There are many definitions of artificial intelligence. We have opted to use the definition given in the European AI Regulation: “‘AI system’ means a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments” [16].
AI model: The term ‘AI model’ refers to software that has been developed by training on large data sets to learn patterns and relations that are often complex. After this training, the model can generalise and make predictions or classifications on new, unknown data. An AI model will normally be a component of an AI system. An AI model is also sometimes called an algorithm (which is somewhat imprecise, as all computer programs are algorithms) or a machine learning model.
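As an illustration of this train-then-generalise idea, the sketch below trains a small model in Python with the scikit-learn library (chosen only for illustration; it is not referenced in this report) and then lets it classify data it did not see during training.

```python
# Minimal illustrative sketch: train a model on known data, then let it
# classify new, unseen data. The data set and model choice are arbitrary.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)

# Hold back 20% of the data to stand in for "new, unknown data".
X_train, X_new, y_train, y_new = train_test_split(
    X, y, test_size=0.2, random_state=0)

model = RandomForestClassifier(random_state=0)
model.fit(X_train, y_train)            # learn patterns from the training set

predictions = model.predict(X_new)     # classify data the model has never seen
print(f"Accuracy on unseen data: {model.score(X_new, y_new):.2f}")
```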
Quality assurance: Quality assurance consists of planned and systematic activities that are carried out to ensure that a product or service will meet quality requirements. Quality assurance is part of an organisation’s quality management [17]. Before an AI model can be adopted, it must undergo quality assurance, which involves validation, among other things.
Validation: In this report, validation means confirming that an AI model performs as intended for a specific intended purpose [18]. If an AI model is intended to diagnose fractures on X-ray images of bones, that is precisely what it should be validated for. When validating AI models, it is therefore important that the “intended purpose” is clearly defined.
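As a sketch of what such validation can look like in practice, the Python fragment below checks a hypothetical fracture-detection model against a performance requirement on test data assumed to be representative of the intended purpose; the model, data and threshold are illustrative placeholders, not taken from any standard or from this report.

```python
# Illustrative sketch only: the model, test data and acceptance threshold
# are hypothetical placeholders for a fracture-detection use case.
from sklearn.metrics import recall_score


def validate_for_intended_purpose(model, X_test, y_test, min_recall=0.95):
    """Confirm, with objective evidence, that the model detects the target
    condition (e.g. fractures) well enough on data representative of the
    clearly defined intended purpose."""
    y_pred = model.predict(X_test)
    recall = recall_score(y_test, y_pred)   # share of actual fractures found
    return recall >= min_recall, recall
```

The essential point is that the requirement and the test data are tied to the clearly defined intended purpose, not to performance in general.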
Generative artificial intelligence: Machine learning models that can generate unique content based on the information they have been trained on. This content could consist of text, images, audio and video. Although the results are often impressive, the systems are not creative in the human sense. Examples include ChatGPT and Bard, which generate text, and Dall-E and Midjourney, which generate images [19].
Fine-tuning: A technique where a pre-trained language model is further trained on a narrower or task-specific data set, either to perform a particular task or to adapt it to an environment with specific professional language. This could, for example, involve training the model on judgments to summarise case law and provide legal advice, or on health data to identify patterns in health records or make diagnoses [20].
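To make the concept concrete, the sketch below shows what fine-tuning can look like in code, using the Hugging Face Transformers library; the small general-purpose model and the public data set are stand-ins for the kinds of domain-specific data (judgments, health records) mentioned above and are chosen purely for illustration.

```python
# Illustrative sketch of fine-tuning with the Hugging Face libraries.
# The base model and data set are generic stand-ins, not domain-specific data.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "distilbert-base-uncased"   # small, pre-trained general model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

dataset = load_dataset("imdb")           # stand-in for a task-specific corpus

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length")

tokenized = dataset.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetuned-model", num_train_epochs=1),
    train_dataset=tokenized["train"].shuffle(seed=0).select(range(2000)),
)
trainer.train()   # further training adapts the pre-trained weights to the new data
```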
Continuous machine learning: Continuous learning is closer to how people learn. Every time we experience, see or learn something new, we adapt the models we use to interpret the world around us. Several techniques are available to ensure that machine learning models can continually learn from new data. The first is to include the new data in the training data set, rerun the entire learning algorithm, and update the model in use. This is unproblematic with small data sets, but with today’s large data volumes it can require considerable computing capacity, costing time, money and energy. Other methods use the new data to adjust the existing model. Although this is much easier, it has in some cases given the new data a disproportionate amount of weight, with the result that the model “forgets” things it had actually already learned [21].
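The difference between the two approaches can be sketched with scikit-learn, which offers an incremental-learning interface (partial_fit) in addition to ordinary retraining; the data below is synthetic and only illustrates the mechanics, not the “forgetting” problem itself.

```python
# Synthetic sketch contrasting the two approaches described above.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
X_old, y_old = rng.normal(size=(1000, 5)), rng.integers(0, 2, 1000)  # original data
X_new, y_new = rng.normal(size=(100, 5)), rng.integers(0, 2, 100)    # newly collected data

# Approach 1: add the new data to the training set and rerun the whole
# learning algorithm (robust, but costly for very large data volumes).
retrained = SGDClassifier(random_state=0)
retrained.fit(np.vstack([X_old, X_new]), np.concatenate([y_old, y_new]))

# Approach 2: adjust the existing model using only the new data (cheaper,
# but the new batch can be given disproportionate weight).
incremental = SGDClassifier(random_state=0)
incremental.partial_fit(X_old, y_old, classes=np.array([0, 1]))  # existing model
incremental.partial_fit(X_new, y_new)                            # update with new data only
```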
Bias: Bias arises when the data sets used to train AI systems are affected by historical biases, contain errors or are incomplete. This can lead to systematic errors in the model’s predictions [22].
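A small synthetic example of how an incomplete training set can produce systematic errors: in the sketch below, one group is barely represented in the training data, and a model trained on that data performs noticeably worse for that group. The groups, data and figures are entirely artificial.

```python
# Illustrative sketch: an unrepresentative training set gives systematic
# errors for the under-represented group. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    X = rng.normal(loc=shift, size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 2 * shift).astype(int)  # group-specific pattern
    return X, y

# Group A dominates the training data; group B is barely represented.
X_a, y_a = make_group(1000, shift=0.0)
X_b, y_b = make_group(20, shift=2.0)
model = LogisticRegression().fit(np.vstack([X_a, X_b]), np.concatenate([y_a, y_b]))

# Evaluate on fresh data from each group: accuracy is markedly lower for B.
X_a_test, y_a_test = make_group(500, shift=0.0)
X_b_test, y_b_test = make_group(500, shift=2.0)
print("Group A accuracy:", model.score(X_a_test, y_a_test))
print("Group B accuracy:", model.score(X_b_test, y_b_test))
```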
[16] Article 3(1) of the AI Act: https://data.consilium.europa.eu/doc/document/PE-24-2024-INIT/en/pdf
[18] The standard ISO 9000:2015 “Quality management systems – Fundamentals and vocabulary” defines “validation” as follows: “Confirmation, through the provision of objective evidence, that the requirements for a specific intended use or application have been fulfilled”: https://www.iso.org/obp/ui/#iso:std:iso:9000:ed-4:v1:en
[21] Taken from page 64 of: https://media.wpd.digital/teknologiradet/uploads/2022/02/Kunstig-intelligens-i-klinikken.pdf