
UNESCO on AI: China also votes against social scoring and mass surveillance

Artificial intelligence (AI) systems must be proportionate and must not cause harm. This is the first of ten principles in the recommendation on the ethical use of the key technology, which the UNESCO General Conference adopted on Tuesday. According to it, AI systems should above all not be used for social scoring or for "purposes of mass surveillance".

Surprisingly, China, as one of the 193 member countries of UNESCO, also voted for the agreed 28-page document. The Middle Kingdom is itself working on a "social credit system" including a citizen rating in the form of a "Citizen Score". Social scoring is meant to let authorities track exactly what the population is doing. By awarding or deducting points, they could then also restrict access to travel, for example.

In the West, China is also increasingly associated with the technologically assisted oppression of the Muslim Uyghur minority in the autonomous region of Xinjiang and with the crackdown on the democracy movement in Hong Kong. Surveillance cameras with biometric face recognition are part of the streetscape in the People's Republic.

Nevertheless, the UN specialized agency, whose mission is to contribute to the preservation of peace and security by promoting international cooperation in education, science, culture and communication, managed to bring China on board with the ethics initiative. One reason is probably that the recommendation is voluntary: the participating states are not obliged to implement it. Even so, the pressure to adhere to the adopted principles and values is likely to grow.

"The chosen AI method should not violate the basic values set out in this document," the guidelines say. "In particular, its use must not violate or abuse human rights." The AI method should also be "appropriate to the context and based on strict scientific principles". In scenarios where decisions have irreversible or hard-to-undo effects, or where life and death are at stake, it is ultimately humans who should make the final call.

Another requirement is that unwanted harm, such as security risks and an increased susceptibility to IT attacks, must be avoided and "addressed, prevented and eliminated during the entire life cycle of AI systems". AI actors should promote social justice and "ensure fairness and non-discrimination of any kind in accordance with international law".

Other principles include sustainability, data protection, human control, transparency and explainability, and responsibility and accountability. UNESCO also recommends cooperation in AI development with the involvement of all actors according to the multi-stakeholder model.

Part of the recommendation is that AI developers conduct ethical impact assessments. Governments should “put in place strong enforcement mechanisms and remedial measures to ensure that human rights, fundamental freedoms and the rule of law are respected in the digital and physical world”.

It also contains demands on specific topics such as gender, education, culture and the environment. For example, countries should provide public funding to promote diversity in technology, protect indigenous communities, and monitor the CO2 footprint of AI technologies such as large language models.

According to the German UNESCO Commission, it is "the first globally negotiated text of international law in the area of AI ethics". It was developed "in a two-year, intensive and sometimes controversial intergovernmental negotiation process". The framework "translates" human rights and values such as the precautionary principle into "concrete political design tasks". These also relate to fields such as the environment and health.

Whenever developers and decision-makers have to assume that developing certain AI applications could have negative effects, they should refrain from it, Gabriela Ramos, UNESCO Assistant Director-General for Social and Human Sciences, told the online magazine Politico, describing the basic approach. She did not want to speculate on whether Beijing, to date the strongest proponent of social scoring, would follow this principle. But she saw it as a good sign that Russia and China are on board at all.

The USA, which is home to the world's largest AI companies, is not a UNESCO member and has not signed the new recommendation. Ramos nevertheless hopes the other countries will exert peer pressure on the United States. She sees the paper as "code to change the business model of the AI industry". It will certainly also influence the negotiations on the planned European AI rules, which are intended to prohibit state social scoring, for example. There is still contention over an express ban on automated face recognition in public spaces.

(mho)
