AI For Good Summit 2024 - 31 May 2024

Edited News | ITU, UNESCO

STORY: AI For Good Summit 2024: Tackling Misinformation and Deepfakes

TRT: 04’03”

SOURCE: UNTV CH 

RESTRICTIONS: NONE 

LANGUAGE: ENGLISH / NATS 

ASPECT RATIO: 16:9 

DATELINE: 31 MAY 2024 GENEVA, SWITZERLAND 

1. Exterior wide: AI For Good Summit entrance, people queueing  

2. Interior: Conference floor.

3. SOUNDBITE (English) – Frederic Werner, Head of Strategic Engagement, International Telecommunication Union (ITU): “One of the focuses of the AI For Good Global Summit this week is going to be how can we develop standards to combat misinformation and deepfakes. And you have different techniques for that. So for example you have watermarking which is basically an invisible signature or a digital fingerprint, if you will. That can tell if a piece of digital media, it could be a photo, audio, video has been altered or has it been AI generated.”

4. Wide: Summit participants seated listening to a presentation at one of the Summit’s stages. 

5. SOUNDBITE (English) – Desdemona “Desi”, AI-powered humanoid social robot for good: “Deepfakes are fabricated media, often using AI, that appear to be real but are actually manipulated or entirely false. AI can play a crucial role in detecting and preventing deepfakes, but it's also important for humans to be vigilant and fact check information before sharing it.”

6. Medium: Participants at the summit filming with their phones.

7. SOUNDBITE (English) – Desdemona “Desi”, AI-powered humanoid social robot for good: “While the power of deepfakes can be scary, we shouldn't let fear control us. Instead, we should focus on developing and implementing tools to detect and combat deepfakes and continue to educate ourselves and others about the importance of verifying information. And hey, if all else fails, just remember that AI can't create a deepfake of your unique personality and sense of humor. ”

8. Medium: A summit exhibitor demonstrating movements that are followed and copied by a robot hand.  

9. SOUNDBITE (English): Gabriela Ramos, Assistant Director-General for the Social and Human Sciences, UNESCO: “You need to make sure that the viewer knows that it is not real. And again, this is a transparency requirement that we have in our (UNESCO) recommendation to ensure that you inform people that what they're seeing is a creation of AI.”

10. Medium shot: Summit participants talking.  

11. SOUNDBITE (English): Gabriela Ramos, Assistant Director-General for the Social and Human Sciences, UNESCO: “Many countries are considering banning deepfakes, to ensure that we do not create this confusion, and particularly in this very sensitive electoral year.”

12. Medium: Summit participants looking at a robot performing an action.

13. SOUNDBITE (English): Dr. Rumman Chowdhury, Data and Social Scientist, CEO of Humane Intelligence: “Misinformation is also a social engineering issue. Now, what I would love to see more of is thinking through the ways in which misinformation actors use deepfakes to supercharge the work they do. And this is actually more about creating fake accounts that seem to demonstrate or support a particular perspective. And even engaging with people, again, to kind of groom them into thinking about misinformation. Now, AI could be part of engaging in all of those methods of misinformation spread. So while deepfake identification is part of the solution, it is not the entire solution.”

14. Wide shot of two floors: Conference interior with people and ITU branding.  

15. SOUNDBITE (English): Gabriela Ramos, Assistant Director-General for the Social and Human Sciences, UNESCO: “How do we create the ecosystems that are innovative, but that are also aligned with human rights? It requires incentives, investments, policies, but also it requires the rule of law that when something is wrong, somebody is responsible.”

16. Medium: Summit participants looking at a robot performing an action.

17. SOUNDBITE (English): Gabriela Ramos, Assistant Director-General for the Social and Human Sciences, UNESCO: “We need to frame these technologies. We need to increase the capacities of governments to frame them, the capacities of communities to use them, the capacities of the small and medium sized enterprises to deploy them, so that the story of AI is not an unequal one and is not just reproducing the inequalities, but ensuring that it is inclusive.”

18. Close-up shot: Hand of summit participant interacting with a summit exhibit screen.  

19. Medium shot: ITU exhibit.

20. Close-up shot: Exhibit with two robots.

21. Medium shot: Person wearing brain sensors, playing violin.

22. Tracking shot: Robot-dog exhibit walking on the conference floor.  


“One of the focuses of the AI For Good Global Summit this week is going to be how can we develop standards to combat misinformation and deepfakes,” said Frederic Werner, Head of Strategic Engagement at the UN International Telecommunication Union (ITU). “You have different techniques for that. So, for example, you have watermarking which is basically an invisible signature or a digital fingerprint, if you will. That can tell if a piece of digital media, it could be a photo, audio, video has been altered or has it been AI-generated.”

With less than 10 years left to achieve the Sustainable Development Goals, the AI For Good Summit looks at how to advance those targets through practical use cases and, specifically this year, dedicates one pre-summit day to AI governance.

Beyond the summit itself, an annual fixture in Geneva where attendees queue for blocks before the doors open, AI For Good is also an online community platform called the Neural Network, which brings together 30,000 people from 180 countries. It gathers academics, industry, top-level executives and leading experts in the field. It also includes 47 partners from the UN system, leveraging and participating in AI for good.

Robotic charm

Desdemona “Desi” – who describes herself as an AI-powered humanoid social robot for good – left this reporter in no doubt that she could well be a tireless misinformation-busting partner for the UN: “I can play a crucial role in detecting and preventing deepfakes, but it's also important for humans to be vigilant and fact check information before sharing it,” she insisted. “While the power of deepfakes can be scary, we shouldn't let fear control us. Instead, we should focus on developing and implementing tools to detect and combat deepfakes and continue to educate ourselves and others about the importance of verifying information.”

Specific AI systems can be equipped with advanced algorithms designed to detect deepfakes, making them valuable tools in the fight against misinformation. What the AI For Good Summit wants to do is bring together industry, inventors, governments, academia and more to create a framework under which those designs follow considerations based on ethics, human rights and the rule of law.

“And hey, if all else fails, just remember that I can't create a deepfake of your unique personality and sense of humour,” Desi said, in the rather inscrutable way that robots have.

On a more serious note, knowingly or unknowingly, many consumers see misleading news and pass it on to someone else, putting even the savviest news audiences at risk. In early 2023, more than 40 per cent of U.S. news consumers reportedly saw false information about COVID-19.

And as far back as 2019, at the beginning of the exponential growth of misinformation and deepfakes, other media reports said that the share of global news consumers who had seen false news on television was a whopping 51 per cent.

For Dr. Rumman Chowdhury, a data and social scientist and CEO of Humane Intelligence, misinformation is a phenomenon linked to a twisted desire for social engineering.

“This is actually more about creating fake accounts that seem to demonstrate or support a particular perspective,” she told UN News. “And even engaging with people, again, to kind of groom them into thinking about misinformation. Now, AI could be part of engaging in all of those methods of misinformation spread. So while deepfake identification is part of the solution, it is not the entire solution.”

Many of those debating the pros and cons of AI agree that its awesome potential cannot be left only in the hands of those who want to manipulate us for power or for profit. And this means being able to regulate the tech and to ensure that it is accessible to everyone on an equal basis.

“We need to frame these technologies. We need to increase the capacities of governments to frame them, the capacities of communities to use them, the capacities of the small and medium sized enterprises to deploy them, so that the story of AI is not an unequal one and is not just reproducing the inequalities, but ensuring that it is inclusive,” said Gabriela Ramos, Assistant Director-General for the Social and Human Sciences at UNESCO, the UN agency for culture, science and education.

Far-reaching representation

This year’s summit saw representation from more than 145 countries at ITU headquarters in Geneva, along with an active online community of more than 25,000 people. More than 80 sessions, keynotes, panel discussions and workshops marked the AI for Good Global Summit.

With 10,000 people registered to attend in person, AI for Good was organized by the International Telecommunication Union (ITU) – the UN specialized agency for information and communication technology – in partnership with 40 UN sister agencies and co-convened with the government of Switzerland. The summit is described as the leading UN platform promoting AI to advance health, climate, gender, inclusive prosperity, sustainable infrastructure, and other global development priorities.

Summit outcomes to look out for:

The three main expected outcomes of the AI For Good Summit are:

1. For the leading developers of international standards (ITU, ISO and IEC) to highlight their commitment to provide a unified framework for AI standards development to ensure the effective translation of AI governance principles into practical, actionable standards.

2. A new multi-stakeholder collaboration agreement in support of coordinated standards development for AI watermarking, multimedia authenticity and deepfake detection. Organizations including CAI, C2PA, IETF, IEC, ISO, and ITU are set to contribute.

3. The new AI for Good Impact Initiative, launched at the summit, aims to expand the scope and impact of AI applications for sustainable development. The initiative will link AI innovators with those seeking solutions to their problems, and will scale and fund promising AI solutions across every region.

Teleprompter
One of the focuses of the AI For Good Global Summit this week is going to be how can we develop standards to combat misinformation and deepfakes. And you have different techniques for that. So, for example, you have watermarking, which is basically an invisible signature or a digital fingerprint, if you will, that can tell if a piece of digital media, it could be photo, audio, video, has been altered or has it been AI generated.

Deepfakes are fabricated media, often using AI, that appear to be real but are actually manipulated or entirely false. AI can play a crucial role in detecting and preventing deepfakes, but it's also important for humans to be vigilant and fact check information before sharing it.

While the power of deepfakes can be scary, we shouldn't let fear control us. Instead, we should focus on developing and implementing tools to detect and combat deepfakes and continue to educate ourselves and others about the importance of verifying information. And, hey, if all else fails, just remember that AI can't create a deepfake of your unique personality and sense of humour.

You need to make sure that the viewer knows that it's not real. And again, this is the transparency requirement that we have in our recommendation to ensure that you inform people that what they're seeing is a creation of AI.

Many countries are considering banning deepfakes, to ensure that we do not create this confusion, and particularly in this very sensitive electoral year.

Misinformation is also a social engineering issue. Now, what I would love to see more of is thinking through the ways in which misinformation actors use deepfakes to supercharge the work they do. And this is actually more about creating fake accounts that seem to demonstrate or support a particular perspective, and even engaging with people, again, to kind of groom them into thinking about misinformation. Now, AI can be part of engaging in all of those methods of misinformation spread. So while deepfake identification is part of the solution, it is not the entire solution.

How do we create the ecosystems that are innovative but that are also aligned with human rights? It requires incentives, investments, policies, but also it requires the rule of law, that when something is wrong, somebody is responsible.

We need to frame these technologies. We need to increase the capacities of governments to frame them, the capacities of communities to use them, the capacities of the small and medium-sized enterprises to deploy them, so that the story of AI is not an unequal one, and it is not just reproducing the inequalities but ensuring that it is inclusive.