
Experts fear AI deepfakes can deceive voters in Pakistan, India and other Asian nations

Divyendra Singh Jadoun was busy making artificial intelligence-based visual effects and voice clones for film and television in India when he began getting calls from politicians: could he create AI videos, or deepfakes, for their election campaigns?

With a hotly contested local election in his home state of Rajasthan last November, and a national election due by May this year, the opportunity for his company, The Indian Deepfaker, is tremendous. But Jadoun was reluctant.

“The technology to create deepfakes is so good now, it can be done almost instantaneously, with very little effort – and people cannot tell if it’s real or fake,” said Jadoun, 30.

“There are no guidelines on deepfakes, and that’s worrying, as it has the potential to influence how a person votes,” he told the Thomson Reuters Foundation.

Instagram reels of Indian Prime Minister Narendra Modi singing in regional languages have gone viral recently, as have TikTok videos of Indonesian presidential candidates Prabowo Subianto and Anies Baswedan speaking in fluent Arabic.

But they were all created with AI, and posted with no label.

With elections due in India, Indonesia, Bangladesh and Pakistan in the coming weeks, misinformation is rife on social media platforms, with deepfakes – video or audio made using AI and broadcast as authentic – being particularly concerning, say tech experts and authorities.

In India, where more than 900 million people are eligible to vote, Modi has said deepfake videos are a “big concern”, and authorities have warned social media platforms they could lose their safe-harbour status that protects them from liability for third-party content posted on their sites if they do not act.

In Indonesia – where more than 200 million voters will go to the polls on Feb 14 – deepfakes of all three presidential candidates and their running mates are circulating online, and have the potential to influence election outcomes, said Nuurrianti Jalli, who studies misinformation on social media.

“From microtargeting of voters with disinformation to spreading false narratives at a scale and speed unachievable by human actors alone, these AI tools can significantly influence voter perceptions and behaviour,” she said.

“In environments where misinformation is already prevalent, AI-generated content can further skew public perception and influence voting behaviour,” added Jalli, an assistant professor at Oklahoma State University’s media school.

‘Political propaganda’

Deepfake images and videos churned out by generative AI tools such as Midjourney, Stable Diffusion and OpenAI’s Dall-E popped up ahead of elections from New Zealand to Turkey and Argentina last year, with growing concerns about their impact on US presidential polls in November.

AI makes the creation and spread of disinformation faster, cheaper and more effective, the US non-profit Freedom House said in a recent report.

In Bangladesh — where Prime Minister Sheikh Hasina is set for her fourth straight term after polls on Jan 7 — deepfake videos of female opposition politicians Rumin Farhana in a bikini and Nipun Roy in a swimming pool have emerged.

While they were debunked quickly, they are still being circulated, and even poor-quality deepfake content is misleading people, said Sayeed Al-Zaman, an assistant professor of journalism at Bangladesh’s Jahangirnagar University, who studies social media.

“Given the low levels of information and digital literacy in Bangladesh, deepfakes can be potent carriers of political propaganda if crafted and deployed effectively,” he said.

“But the government does not appear concerned.”

The ministry of information did not respond to a request for comment.

In Pakistan, where an election is scheduled for Feb 8, Imran Khan, who is in prison in an official secrets act case after being ousted as prime minister last year, used an AI-generated image and voice clone to address an online election rally in December. The broadcast drew more than 1.4 million views on YouTube and was watched live by tens of thousands.

While Pakistan has drafted an AI law, digital rights activists have criticised its lack of guardrails against disinformation and of protections for vulnerable communities, including women.

“The threat that disinformation poses to elections and the overall democratic process in Pakistan cannot be stressed upon enough,” said Nighat Dad, co-founder of the non-profit Digital Rights Foundation.

“In the past, disinformation on online platforms has managed to sway voting behaviour, party support, and even influenced legislation change. Synthetic media will make this easier to do,” she added.

‘Dangerous sign’

At least 500,000 video and voice deepfakes were shared on social media sites globally in 2023, estimated DeepMedia, a company developing tools to detect synthetic media.

Platforms have struggled to keep up.

Meta, which owns Facebook, Instagram and WhatsApp, said it aims to remove synthetic media when the “manipulation is not apparent and could mislead, particularly in the case of video content”.

Google, which owns YouTube, said in November that the video sharing platform requires “creators to disclose altered or synthetic content that is realistic, including using AI tools, and we’ll inform viewers about such content through labels”.

But countries including India, Indonesia and Bangladesh have recently passed laws to more closely police online content and penalise social media sites for content deemed misinformation, so platforms are “holding their punches”, said Raman Jit Singh Chima, Asia policy director at advocacy group Access Now.

In these countries, “this election cycle is actually worse than the last cycle – platforms are not set up to handle problems, and they are not being responsive and proactive enough. And that’s a very dangerous sign,” he said.

“There is a danger that the world’s attention is only on the US election, but the standards being applied there, the effort being made there should be duplicated everywhere,” he added.

In India, where Modi is widely forecast to win a third term, Jadoun — who had declined to make deepfake campaign videos for the state elections — is gearing up to make them for the general election.

These will be personalised video messages from politicians to party workers, not voters, which can be sent on WhatsApp.

“They can really have an impact, because there are hundreds of thousands of party workers and they will, in turn, forward them to their friends and family,” he said.

“But we will add a watermark to show that it is made with AI, so there is no misunderstanding. That’s important.”

Pakistan’s CERT warns of security risks from AI chatbots

The National Computer Emergency Response Team (CERT) has released a security advisory about the growing use of artificial intelligence (AI) chatbots, warning of potential hazards related to the exposure of private data.

The advisory acknowledges that AI chatbots such as ChatGPT have become popular for personal and professional tasks because they improve productivity and engagement. Nonetheless, CERT cautions that these systems frequently retain sensitive information, posing a risk of data breaches.

Conversations with AI chatbots may include sensitive information, such as corporate strategy, personal exchanges or confidential correspondence, which could be compromised if inadequately safeguarded. The advisory stresses the need for a comprehensive cybersecurity framework to mitigate the risks associated with chatbot use.

Users are advised not to enter critical information into AI chatbots and to disable any chat-saving features to reduce the risk of unauthorized data access. CERT also recommends routine system security checks and monitoring tools to detect anomalous chatbot behavior.
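
As an illustration of the first recommendation, sensitive details can be masked on the user's side before a prompt is ever sent to a chatbot. The short Python sketch below is not part of the CERT advisory; its patterns and names are illustrative assumptions, and a real deployment would need far broader coverage.

import re

# Illustrative patterns only; real deployments would need much broader coverage.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact(prompt: str) -> str:
    """Mask common identifiers before a prompt is sent to any chatbot."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

print(redact("Email me at ali@example.com or call +92 300 1234567."))
# Email me at [EMAIL REDACTED] or call [PHONE REDACTED].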

Organizations are urged to adopt rigorous security protocols to safeguard against possible data breaches resulting from AI-driven interactions.

Unlawful VPNs: Terrorists Utilize Unregistered VPNs to Disseminate Propaganda

Terrorists are using illegal, unregistered VPNs to spread propaganda and misinformation while concealing their identities, and several such accounts have been uncovered and are under investigation. Spreading false information is easier through unregistered VPNs, which is why they are being exploited.

Blocking unregistered VPNs is also considered crucial for the national economy. The PTA is blocking illegal URLs and websites that spread objectionable material once they are verified, and forensic investigations of these websites have yielded startling results.

Air University Holds Cybersecurity Challenge for Students

Air Marshal (R) Asad Lodhi has praised the Pakistan Cyber Security Challenge, an initiative of Air University and the Higher Education Commission that aims to train future cybersecurity experts. He was the chief guest at the opening ceremony of the two-day event, held at Air University in Islamabad, which comprises showdown challenges, the Ideas Cup and the Pakistan Crypt Challenge.

Air Marshal (R) Abdul Moeed Khan, Vice Chancellor of Air University, praised the cybersecurity abilities of Pakistani youth, saying they are among the best in the world and will help the country face cyber threats. He described the Pakistan Cyber Security Challenge 2024 as a haven for cybersecurity excellence and commended Air University for its innovative and outstanding work in the area.

Dr. Zia Ul Qayyum, executive director of the Higher Education Commission, also addressed the inaugural ceremony, discussing how the HEC has created a supportive environment and helped facilitate initiatives such as the challenge.

An MoU was also signed as part of the inauguration, and guests and participants were given souvenirs as the opening ceremony came to a close.
