
Inputs and discussions about the plan

Summarised input from the health and care sector

The sector provided input on an early version of the joint AI plan at a consultation meeting held on 23 May. It also provided written input on the same version.

The consultation meeting held on 23 May

More than 130 participants from a broad cross-section of the health and care sector attended the consultation meeting on 23 May in person to discuss the plan, along with more than 140 digital participants [14]. During the meeting, participants were specifically asked to provide input on the chapters on the Use of large language models (Chapter 5), Boosting competence concerning AI (Chapter 6) and the Framework for the use of AI in the health and care services (Chapter 4).

In summary, the following input was provided:

The use of language models must include risk assessments tailored to their respective applications. It was recommended that the “low-hanging fruit” be picked first. It is important to facilitate use under Norwegian conditions and within the Norwegian health service. The groups also pointed out that language models can be put to a wide variety of uses, with varying degrees of complexity and risk. A forum should be established for sharing experiences. The appropriate use of language models requires sufficient competence, and this must be incorporated into the health service, health courses and health management courses.

Competence relating to AI in the field of health covers managers, employees in the health service and measures aimed at the general population. Competence-building must be linked to relevant applications, and preferably to cost-benefit assessments. Multidisciplinarity is important. It is often better to start with simpler tasks, build on them and gradually increase the complexity. Competence needs will change continuously as innovative solutions are developed and implemented.

Framework for the use of AI in the health and care services: in this area, the exchange of experience should be facilitated nationally, regionally and locally, as well as across sectors. A number of suggestions were put forward for addressing the “low-hanging fruit” first, with a particular focus on what happens after an AI system has been adopted. Reference was also made to an urgent need for (national) coordination concerning standards, infrastructure, access to data and various types of data, rights and validation. There are many challenges ahead, so prioritisation will be necessary. Input was also received on assessing and realising benefit potential and on funding solutions.

Written input from the sector

The sector has provided 20 written contributions to the plan (deadline 31 May). A number of these overlap with the feedback from the Prioritisation Committee (NUIT) and the consultation meeting. Among other things, this includes the need for funding, arenas for collaboration and the exchange of experience, the differences between administrative and clinical tasks, and procurement versus in-house development.

The importance of involving users of the solutions throughout development and implementation processes was noted by a number of stakeholders. This includes both patient and employee perceptions and experiences. It was explicitly noted that employees must be involved in selecting the work processes that are to be supported by AI, and in the trialling, testing and validation of AI tools. Very specific proposals involving the regional health authorities were also put forward.

Summarised input from the National council model for eHealth

The same focus areas as above were discussed by the Prioritisation Committee (NUIT) and the National Council for eHealth, both of which are part of the National council model for eHealth [15].

Input from the Prioritisation Committee (NUIT)

An early version of the plan was considered by NUIT on 15 May 2024.

NUIT generally supports the proposed initiative areas but believes that they are not sufficient in themselves. It was pointed out that a somewhat more proactive approach could be adopted to identify more appropriate applications and new opportunities. At the same time, the need to prioritise and sharpen the focus was highlighted, often in connection with the assignments that have been given. It was also stated that there are many legal challenges associated with the use of AI in clinical practice. It was therefore proposed that a closer look be taken at the use of AI for administrative tasks, a step that would also free up time for health personnel.

There is a strong need to further develop the established guidance service. The need for close user involvement from citizens and patient associations was underlined. An increasing number of healthcare technology solutions have AI built in. Many municipalities are small, and common approaches to benefits, risks, procurement and so on will therefore be important. In this regard, established structures for sharing experience, such as the Digi networks, can play a role. Finally, it was noted that AI must not be used at the expense of essential medical prudence, testing, validation and so on.

Input from the National Council for eHealth

An early version of the plan was presented and discussed at the meeting of the National Council for eHealth held on 13 June 2024.

Many of those present highlighted ethics, risk, privacy, training and the implementation of AI in the healthcare sector as important areas. One challenge raised by several participants was the need to clarify the uncertainties surrounding AI and the importance of establishing a safe framework for its introduction and use. This may help to maintain the trust of the population, especially among children and young people, who are concerned about their privacy. It was emphasised that ethical questions must be handled properly.

The need to conduct risk analyses and provide guidance services was highlighted as pressing. It was also noted that measures aimed at the general population will be important in addressing the knowledge gap concerning AI. Digital security was mentioned as an area that creates uncertainty and should have been discussed in more detail.

Some advocated developing a Norwegian language model tailored to the health and care sector. Training citizens, employees and managers in AI was also considered necessary. Research and risk analyses are essential to ensure that AI solutions are of sufficiently high quality. A health technology assessment at the national level was called for to ensure a solid knowledge base, and it was pointed out that a standardised method is needed for the approval of AI.

Overall, meeting delegates reflected on the ethical aspects, risks and privacy challenges associated with AI, and underlined a strong need for clear policies, legislation and guidance to ensure the safe use of AI in the healthcare sector. Collaboration across sectors and the establishment of robust language models were considered essential for the effective and safe implementation of AI technology.

Last updated: 18 February 2025