
UN human rights chief: AI must be inclusive and accountable
The UN High Commissioner for Human Rights is convinced that without urgent protection mechanisms in place, artificial intelligence has the potential to increase inequality and bias across the planet. On the sidelines of the India AI Impact Summit 2026 in New Delhi, Volker Türk told UN News Service that technology must be governed through a rights-based approach that ensures transparency, accountability and inclusiveness.
Volker Türk: Artificial intelligence is a technological tool, and its development should be grounded in risk assessment. There must be rules within which AI is designed, developed and used, and this is where human rights should be emphasized.
UN News Service: What do you see as the biggest risks to human rights from the rapid spread of AI?
Volker Türk: There is a huge problem of inequality. That is why I am glad that this summit is taking place in India. It is important that such tools are created and used everywhere.
There is also the issue of bias and discrimination. If data is only collected in one part of the world, or if AI is developed exclusively by men, then unconscious bias is inevitably built into the system. We believe it is critical to consider the interests of vulnerable groups and minorities because they are often excluded from AI development processes. It’s about active participation and a vision of a better world. Human rights provide such a vision.
UN News Service: Generative AI is developing faster than regulation. What protective measures should governments and companies implement urgently?
Volker Türk: Take the pharmaceutical industry, for example: testing [of new drugs] usually takes a very long time because you need to make sure that all risks associated with a new product are identified before it enters the market.
When it comes to AI tools, we must require companies to conduct human rights impact assessments during product development, implementation and promotion.
We see that the budgets of some companies exceed those of small states. If you control technology not only in your own country but throughout the world, you have power. This power can be used for good – such as improving health care, education and sustainable development. But it can also be used for evil – to create lethal autonomous weapons, or to spread disinformation, hatred and aggressive misogyny.
UN News Service: What AI governance mechanisms are needed to prevent increased bias and inequality?
Volker Türk: I have had the chance to talk with people who create artificial intelligence systems. It amazes me how superficial their understanding of the fundamental principles often is when they begin development. It is like Frankenstein's monster: you create something you cannot control from the start, and eventually the genie is out of the bottle.
If you do not take into account the risks and potential threats, you can cause enormous harm. We saw this in Myanmar, where hatred against the Rohingya was spread on social media.
It is extremely important to take into account the interests of all groups in society, especially women and young people, and to remember that our minds develop differently. We do not want to create addictions that poison the mind and soul. We also need to understand how destructive disinformation can be: it eats away at the social fabric, creating division and polarization in which everyone begins to live in their own bubble.
We also see a lot of misogyny. Many women politicians tell me that they are considering leaving politics because of what they encounter on social media.
UN News Service: What do you think the responsible use of AI will look like in five years?
Volker Türk: I hope that we will come to an inclusive development of artificial intelligence, where power will no longer be concentrated in the hands of a few companies in North America, and AI development will take into account the richness and diversity of all communities.
I also hope for an inclusive and meaningful approach that will help us solve the many problems of the modern world. The climate crisis, access to healthcare, education for all – AI can be an incredible tool for achieving these goals. But unless we offer a vision of a better future, the world could become even more polarized, and wars could spin entirely out of human control. That would be extremely dangerous.