Disinformation is expected to be a major cyber risk for elections in 2024. Cybersecurity experts who spoke to CNBC anticipate that Britain will face state-backed cyber attacks and disinformation campaigns as it heads to the polls, with local elections on May 2 and a general election expected later this year taking place against a backdrop of a cost-of-living crisis and divisive debates over immigration and asylum. Todd McKinnon, CEO of identity security firm Okta, believes most cybersecurity risks will emerge in the months leading up to election day itself, since the majority of U.K. citizens vote in person at polling stations.
The threat of disinformation and cyber attacks during elections is not new. Past episodes, such as the disinformation spread on social media around the 2016 U.S. presidential election and the U.K.'s Brexit referendum, have raised alarm, and state actors have routinely launched attacks aimed at swaying election outcomes in other countries. More recently, the U.K. accused a Chinese state-affiliated hacking group of attempting to access U.K. lawmakers' email accounts, and imposed sanctions on Chinese individuals and a technology firm in Wuhan in response.
Cybersecurity experts anticipate that malicious actors will use artificial intelligence (AI) to interfere in the upcoming elections, chiefly by spreading disinformation. AI-generated synthetic images, video, and audio, known as "deepfakes," are expected to be widespread because the tools to create such content are now readily accessible. The rise of AI-powered identity-based attacks, misinformation, and bot-driven content poses a significant risk to the integrity of elections worldwide, and experts say heightened awareness and international cooperation are needed to address this growing threat.
Adam Meyers of cybersecurity firm CrowdStrike identified AI-powered disinformation as a top risk for the 2024 elections, highlighting the misuse of generative AI by hostile nation states such as China, Russia, and Iran to conduct disinformation operations. The democratic process is fragile, and AI lowers the barrier for criminals seeking to exploit individuals online: scam emails and personalized attacks are already being crafted with AI tools, posing a significant challenge for election security. The prospect of ever more convincing deepfakes is a growing concern among cybersecurity experts as the U.K. approaches its elections.
The advancement of deepfake technology has prompted a race among tech companies to combat it. Detecting and mitigating deepfakes with AI has become a focal point for many firms, but as synthetic content becomes harder to identify, experts stress the importance of verifying the authenticity of content before sharing it. This ongoing contest between the AI used to create deepfakes and the AI used to detect them illustrates how quickly the cybersecurity threat landscape around elections is evolving.
These AI-powered attacks are expected to target not only voters but also politicians, campaign staff, and election-related institutions. Fake AI-generated audio clips have already been used to spread misinformation, deepening concerns among cybersecurity experts about the role deepfakes could play in future votes.
As the U.K. prepares for its upcoming elections, the threat of cyber attacks and disinformation campaigns remains a significant concern. Governments, tech companies, and the public will need to stay vigilant and proactive in addressing these risks to protect the integrity of the democratic process, with greater awareness, collaboration, and technological countermeasures key to mitigating the impact of malicious AI-powered attacks on future elections.