A coalition of countries dedicated to promoting democracy and economic development has announced a set of five principles for the development and deployment of artificial intelligence, writes Will Knight.
The announcement came at the Organization for Economic Co-operation and Development (OECD) Forum in Paris. The OECD does not include China, and the principles outlined by the group seem to contrast with the way AI is being deployed there, especially for face recognition and the surveillance of ethnic groups associated with political dissent, he adds.
The OECD Principles on AI read as follows:
1. AI should benefit people and the planet by driving inclusive growth, sustainable development and well-being.
2. AI systems should be designed in a way that respects the rule of law, human rights, democratic values and diversity, and they should include appropriate safeguards—for example, enabling human intervention where necessary—to ensure a fair and just society.
3. There should be transparency and responsible disclosure around AI systems to ensure that people understand AI-based outcomes and can challenge them.
4. AI systems must function in a robust, secure and safe way throughout their life cycles, and potential risks should be continually assessed and managed.
5. Organizations and individuals developing, deploying or operating AI systems should be held accountable for their proper functioning in line with the above principles.
Democracies have been failing to counter China’s push to dominate cyberspace and AI, some observers suggest.
It is imperative that artificial intelligence evolve in ways that respect human rights, say analysts Eileen Donahoe and Megan MacDuffee Metzger. Happily, standards found in landmark UN documents can help with the task of making AI serve rather than subjugate human beings, they write for the NED’s Journal of Democracy.