Paving the way for safe and innovative use of AI in Norway
Press release | Date: 26/03/2025 | Ministry of Digitalisation and Public Governance
The Government is preparing Norway for the implementation and enforcement of new rules on artificial intelligence. KI-Norge (literally “AI Norway”), a new national arena for innovative and responsible use of AI, will be one component in the national governance system now being established. The EU’s Regulation on artificial intelligence, the AI Act, defines the framework within which businesses and the public sector can use this technology in innovative and ethically responsible ways.

Minister of Digitalisation and Public Governance Karianne Tung announced that a draft Act will be circulated for comment before the summer, with a view to the Norwegian Act coming into force from late summer 2026.
“The Government is now making sure that Norway can exploit the opportunities afforded by the development and use of artificial intelligence, and we are on the same starting line as the rest of the EU. At the same time, we want to ensure public confidence in connection with the use of this technology. It is therefore important that Norway has a robust national governance structure for enforcement of the AI rules,” says the Minister of Digitalisation and Public Governance.
Artificial intelligence is already in widespread use, and the technology can provide us with tools to resolve many of the major challenges we currently face in areas such as health, industry and education. At the same time, this technology has potential for misuse, especially in these times of global uncertainty and unrest.
“The EU’s AI Act will make it easier for Norwegian companies to compete, as we will be following the same rules as the rest of Europe. It will also ensure both innovation and accountability, which are essential for building expertise and trust among customers and businesses alike,” adds Minister of Trade and Industry Cecilie Myrseth.
Establishing AI Norway – a national arena for innovative and responsible artificial intelligence
The Government is strengthening Norway’s national efforts in the area of artificial intelligence by creating AI Norway – a national arena for innovative and responsible artificial intelligence. AI Norway will be established as a new, expanded expert environment within the Norwegian Digitalisation Agency (Digdir). It will act as a driving force and advisory service, as well as a link between, and partner to, key AI players in the public sector, trade and industry, the research sector and academia.
“The establishment of AI Norway marks a change of gear in the use and development of artificial intelligence. A central tool in this work will be the AI Sandbox, where Norwegian businesses can experiment, develop and train AI systems in a safe environment. The goal is increased competitiveness and greater opportunities for Norwegian AI systems. This will be especially beneficial for start-ups and small and medium-sized enterprises,” says Tung.
Tung further emphasises that the establishment of AI Norway and the collaboration between the Norwegian Digitalisation Agency (Digdir), the Norwegian Communications Authority (Nkom) and the Norwegian Data Protection Authority (Datatilsynet) will strengthen and streamline the national work on artificial intelligence.
Norsk akkreditering will be the national accreditation body pursuant to the EU regulations
Norsk akkreditering (NA) is Norway’s national body for technical accreditation and has now also been assigned the equivalent role under the EU’s AI Act. Assigning this role to NA builds on the system already in use in the Norwegian public administration today.
The Norwegian Communications Authority (Nkom) will have supervisory responsibility
The Government has decided that the Norwegian Communications Authority (Nkom) will be the national coordinating supervisory authority for AI, with responsibility for ensuring that the EU’s new rules on artificial intelligence are followed up in a uniform manner in Norway. Together with the various responsible sector authorities, Nkom will monitor that AI systems in use on the Norwegian market are safe, secure and responsible to use.
Nkom will also be the national single point of contact with EU bodies and other countries, and will help ensure that the regulations are understood and practised harmoniously across national borders.
“The Norwegian Communications Authority (Nkom) has been given an important role in overseeing compliance with the EU rules on artificial intelligence in Norway. It is important that society as a whole can be confident that artificial intelligence is developed in line with our shared European values and rules,” says Tung.
Key points of the EU’s AI Act
- The EU’s AI Act is the world’s first comprehensive regulation in the field of artificial intelligence. It ensures harmonisation across the Member States, and makes it easier to know what is required to ensure that the development and use of this technology is safe and ethically sound. It is also intended to promote innovation.
- The Act encourages safe, ethical AI that safeguards health, safety, fundamental rights, democracy, the rule of law and environmental sustainability. This will in turn help increase public confidence and trust in this kind of technology.
- The Act sets different requirements for AI systems based on the level of risk they entail.
- Some AI systems pose unacceptable risks and were banned in the EU from 2 February 2025. These systems violate fundamental values and human rights, for example through manipulation, by exploiting people’s vulnerabilities, or categorising people in ways that may have negative consequences for them.
- High-risk AI systems are subject to strict requirements. These kinds of systems can potentially have a significant adverse impact on people’s fundamental human rights. Relatively few systems fall into this category, so the general scope for innovation in society will not be significantly affected.
- Limited-risk AI systems must comply with transparency obligations. The purpose of these obligations is to ensure that users realise they are interacting with AI.
- General-purpose AI models must comply with a number of special requirements, such as transparency obligations and compliance with copyright rules. Generative large language models are one example of general-purpose AI models.
- Most AI systems pose minimal risk, and the Act does not set any obligations for these kinds of systems. This leaves room for a high degree of innovation across multiple sectors, particularly for systems that simplify and streamline processes.