Our quick guide to the 6 ways we can regulate AI
Everyone from tech company CEOs to US senators and leaders at the G7 meeting has called in recent weeks for international standards and stronger guardrails for AI technology. The good news? Policymakers don’t have to start from scratch.
We’ve analyzed six different international attempts to regulate artificial intelligence, set out the pros and cons of each, and given them a rough score indicating how influential we think they are.
A legally binding AI treaty
The Council of Europe, a human rights organization that counts 46 countries as its members, is finalizing a legally binding treaty for artificial intelligence. The treaty requires signatories to take steps to ensure that AI is designed, developed, and applied in a way that protects human rights, democracy, and the rule of law. It could also include moratoriums on technologies that pose a risk to human rights, such as facial recognition.
If all goes according to plan, the organization could finish drafting the text by November, says Nathalie Smuha, a legal scholar and philosopher at the KU Leuven Faculty of Law who advises the council.
Pros: The Council of Europe includes many non-EU countries, including the UK and Ukraine, and has invited others such as the US, Canada, Israel, Mexico, and Japan to the negotiating table. “It’s a strong signal,” says Smuha.
Cons: Each country has to individually ratify the treaty and then implement it in national law, which could take years. There’s also a possibility that countries will be able to opt out of certain elements that they don’t like, such as stringent rules or moratoriums. The negotiating team is trying to find a balance between strengthening protection and getting as many countries as possible to sign, says Smuha.
Influence rating: 3/5
The OECD AI principles
In 2019, countries that belong to the Organisation for Economic Co-operation and Development (OECD) agreed to adopt a set of nonbinding principles laying out the values that should underpin AI development. Under these principles, AI systems should be transparent and explainable; should function in a robust, secure, and safe way; should have accountability mechanisms; and should be designed in a way that respects the rule of law, human rights, democratic values, and diversity. The principles also state that AI should contribute to economic growth.