April 15, 2019
Building on the work of the group of independent experts appointed in June 2018, the Commission has launched a pilot phase to ensure that the ethical guidelines for Artificial Intelligence (AI) development and use can be implemented in practice. The Commission invites industry, research institutes and public authorities to test the detailed assessment list drafted by the High-Level Expert Group, which complements the guidelines.
The Commission’s plans follow its AI strategy of April 2018, which aims at increasing public and private investments to at least €20 billion annually over the next decade, making more data available, fostering talent and ensuring trust.
The Commission is taking a three-step approach: setting out the key requirements for trustworthy AI; launching a large-scale pilot phase to gather feedback from stakeholders; and working to build international consensus for human-centric AI.
The Commission says that trustworthy AI should respect all applicable laws and regulations, as well as seven key requirements; specific assessment lists aim to help verify the application of each:
- human agency and oversight: AI systems should enable equitable societies by supporting human agency and fundamental rights, and not decrease, limit or misguide human autonomy;
- robustness and safety: trustworthy AI requires algorithms to be secure, reliable and robust enough to deal with errors or inconsistencies during all life cycle phases of AI systems;
- privacy and data governance: citizens should have full control over their own data, while data concerning them should not be used to harm or discriminate against them;
- transparency: the traceability of AI systems should be ensured;
- diversity, non-discrimination and fairness: AI systems should consider the whole range of human abilities, skills and requirements, and ensure accessibility;
- societal and environmental well-being: AI systems should be used to drive positive social change and enhance sustainability and ecological responsibility; and
- accountability: mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes.
In summer 2019 the Commission will launch a pilot phase involving a wide range of stakeholders. Companies, public administrations and organisations can already sign up to the European AI Alliance and receive notification when the pilot starts.
The Commission will strengthen cooperation with like-minded partners such as Japan, Canada and Singapore, and continue to play an active role in international discussions and initiatives, including the G7 and G20. The pilot phase will also involve companies from other countries and international organisations.
Following the pilot phase, in early 2020 the AI expert group will review the assessment lists for the key requirements, incorporating the feedback received. Drawing on this review, the Commission will evaluate the outcome and propose any next steps.
By autumn 2019 the Commission will also: launch a set of networks of AI research excellence centres; begin setting up networks of digital innovation hubs; and, together with Member States and stakeholders, start discussions to develop and implement a model for data sharing and making best use of common data spaces.