February 16, 2026
The 2026 International AI Safety Report has been published, assessing the progress and present capabilities of artificial intelligence, together with emerging risks and how they might be managed.
The Report is the work of an international Expert Advisory Panel, led by one of the so-called ‘godfathers of AI’, Yoshua Bengio, and formed following the AI Safety Summit in November 2023. It is the second full report from the panel. We commented on the first, published in January 2025, here.
Much has changed in the world of AI in the year since the first report’s publication. The panel notes significant technical developments and improvements in performance across many AI models, as well as heavy investment in agentic AI. However, it describes the capabilities of AI systems as ‘jagged’ – excelling in many complex domains while still struggling with seemingly straightforward tasks. How AI will progress in the near future remains uncertain. The Report states that “between now and 2030, it is plausible that progress could slow or plateau (e.g. due to bottlenecks in data or energy), continue at current rates, or accelerate dramatically (e.g. if AI systems begin to speed up AI research itself)”.
Having assessed the state of technology, the principal focus of the Report is on the risks that AI poses, as well as those that may emerge as it develops. Three themes are identified: (1) the risks of misuse, such as harmful AI-generated content and cyberattacks; (2) the risks of malfunctions, such as hallucinations or providing incorrect advice; and (3) systemic risks, such as the effects on the labour market and wider society.
One of the fundamental challenges for policymakers is what the Report calls the ‘evidence dilemma’: while the technology advances rapidly, evidence about new risks and mitigation strategies emerges slowly. Policymakers are therefore faced with an unattractive choice: either act quickly on limited evidence, which could inhibit innovation, or wait for sufficient evidence to emerge and risk intervening too late.
While it is outside the scope of the Report to recommend specific policy initiatives, it discusses various global approaches to risk management, assessing their effectiveness and limitations. It also stresses the importance of building so-called ‘societal resilience’, which it says should “complement technical safeguards by preparing societies for AI-related disruptions”.
To read the Report in full, click here.