Disinformation and misinformation have become significant cyber threats in recent years. The rise of social media and the increasing availability of digital technologies have made it easier for bad actors to spread false information and manipulate public opinion.
Disinformation and misinformation are considered cyber threats because they can cause harm, spread malware, interfere with political processes, be difficult to detect, and incite violence. It’s important for individuals and businesses to be aware of these risks and take steps to protect themselves from these threats.
Areas of convergence
EU DisinfoLab, an independent non-profit organization focused on tackling sophisticated disinformation campaigns, highlights four areas of convergence between disinformation and cybersecurity:
- the “terrain” on which disinformation is distributed (the social web and the internet stack, networking infrastructure, routing services),
- the “tactics” that increasingly combine disinformation as part of the cyberattack delivery package,
- the “targets”, leading to victims of cyberattacks simultaneously being victims of disinformation,
- the “temptation”, i.e. the lucrative potential of both disinformation campaigns and cyberattacks.
What we have come to realize is that fake news requires three different elements to succeed: the tools and services to manipulate and spread a message, the social networks that serve as its medium, and the motivation behind it. These collectively represent the “fake news triangle”: without any one of these factors, fake news is unable to spread and reach its target audience.
Yet, disinformation campaigns are a distributed phenomenon that extends beyond social media platforms. They make use of networking infrastructure and routing services at various levels of the internet stack. Recent investigations have shown that disinformation websites often use social media platforms as gateways and amplifiers. Therefore, disinformation and cybersecurity involve many of the same private sector members and the internet technical community.
Disinformation has become an increasingly common component of cyber-attacks, used to deliver malware by exploiting people’s emotions and fears. For instance, attackers have deployed “fearware,” a subset of phishing lures that rely on anxieties and misinformation, which became more prevalent during the pandemic. The ongoing rise of hack-and-leak operations and the coordination of hybrid tactics further illustrate the convergence of disinformation and cyber-attacks. Additionally, there is a significant overlap between disinformation campaigns and cybercrime tactics, including illegal transactions on the dark web, the use of illegally obtained documents, and various forms of fraud.
Disinformation campaigns and cyberattacks can both cause similar types of harm and may even be combined to target the same victims. While a data breach can compromise information security by exposing sensitive information, the manipulation of data can also have similar effects.
Hacking, cybercrime, and influence operations have become lucrative enterprises, frequently outsourced to skilled professionals. While individuals and businesses have heightened their defenses against ransomware attacks, disinformation strategies, such as defamation and extortion, are now being employed to inflict reputational damage and generate profits. These activities offer strong financial incentives and currently face insufficient consequences, due to the difficulties of attribution and the lack of adequate dissuasive or restrictive measures.
Focusing on business environments
Misleading information can pose significant cyber risks for businesses. Common types of misleading information and associated cyber risks that businesses should be aware of include:
- phishing emails, i.e. emails that appear to come from a trusted source in order to trick the recipient into clicking on a malicious link or downloading malware. Users can also be tricked into clicking on malicious links, downloading malware, or revealing sensitive information via social media posts or other types of misleading content. Businesses can mitigate this risk by implementing strong email security measures, such as spam filters, and by training employees to identify and avoid phishing emails.
- malware and ransomware, which are types of malicious software that can infect a business’s network or devices. Once installed, the malware can steal data or lock down the business’s systems in exchange for a ransom payment. Fake websites or social media accounts that host malicious software, as well as misleading ads or pop-ups, are also used to deceive users into downloading malware onto their devices. To counter this, businesses should implement strong cybersecurity protocols, such as two-factor authentication, access restrictions, firewalls, anti-virus software, regular software updates, and employee training on cybersecurity best practices. Similar precautions help guard against cyber espionage, where attackers use false information to gather intelligence or compromise a target’s security.
- social engineering attacks, i.e. manipulating users into divulging sensitive information, such as login credentials or financial data, which can lead to data breaches and identity theft. This can be done through fake phone calls, fake login pages, phishing emails, or social media messages that appear to come from a trusted source. Financial fraud is a common example: fake investment schemes or cryptocurrency scams lure users into sending money or cryptocurrency to fraudulent accounts. To address this, businesses should provide regular employee training on how to identify and avoid such attacks.
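The spam-filtering and training measures above can be complemented by simple automated screening. Below is a minimal sketch of a heuristic phishing check; the indicator phrases, the trusted-domain list (a hypothetical company domain), and the scoring threshold are illustrative assumptions, not a production filter:

```python
import re

# Illustrative indicators -- a real filter would use far richer signals
URGENT_PHRASES = ["verify your account", "urgent action required",
                  "password expires", "confirm your identity"]
TRUSTED_DOMAINS = {"example-corp.com"}  # hypothetical company domain


def phishing_score(sender: str, subject: str, body: str) -> int:
    """Return a rough suspicion score; higher means more suspicious."""
    score = 0
    domain = sender.rsplit("@", 1)[-1].lower()
    if domain not in TRUSTED_DOMAINS:
        score += 1                                   # unfamiliar sender domain
    text = (subject + " " + body).lower()
    score += sum(p in text for p in URGENT_PHRASES)  # urgency language
    if re.search(r"https?://\d{1,3}(\.\d{1,3}){3}", body):
        score += 2                                   # raw-IP links are a classic red flag
    return score


def is_suspicious(sender: str, subject: str, body: str, threshold: int = 2) -> bool:
    return phishing_score(sender, subject, body) >= threshold
```

In practice such a heuristic would only be one signal fed into a proper email security gateway, alongside sender authentication (SPF/DKIM/DMARC) and attachment scanning.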
Cognitive hacking is a type of social engineering attack that uses techniques from psychology, neuroscience, and other cognitive sciences to manipulate human perception, cognition, and decision-making in order to gain access to sensitive information or systems.
Typical examples include:
- phishing emails (usually containing emotional appeals or urgent language to induce recipients to click on a link or open an attachment);
- manipulating visual cues in user interfaces to deceive users into clicking on a button or entering information in a form;
- exploiting human biases and heuristics, such as the tendency to trust authority figures or follow the crowd, to influence decision-making;
- using social engineering tactics, such as pretexting or impersonation, to gain access to sensitive information or systems.
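Impersonation and manipulated visual cues often rely on look-alike domains that exploit how we read. The sketch below shows one defensive check; the homoglyph substitutions covered and the example domain are illustrative assumptions, a tiny subset of what real brand-protection tooling handles:

```python
def normalize(domain: str) -> str:
    """Map common homoglyphs back to the letters they imitate (illustrative subset)."""
    d = domain.lower().translate(str.maketrans("0135", "oles"))  # digit look-alikes
    return d.replace("rn", "m")   # 'rn' visually resembles 'm' in many fonts


def looks_like(candidate: str, legitimate: str) -> bool:
    """True if candidate differs from the real domain but normalizes to it."""
    return (candidate.lower() != legitimate.lower()
            and normalize(candidate) == normalize(legitimate))
```

A check like this could run over the sender domains of inbound mail or over newly registered domains resembling the company's own, flagging candidates for human review.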
The corporate reputation aspect
The deliberate spread of false information with the intent of manipulating public opinion or causing harm, usually described as disinformation campaigns, can damage a business’s reputation or even cause financial harm. To minimize this risk, businesses should monitor their online presence and respond quickly to any false or damaging information that appears online.
Detecting disinformation targeting a company can be a challenging task, but there are some steps that a company can take to identify potential disinformation campaigns:
- monitor social media and news sources for any signs of false information being spread about the company or its products. This can involve setting up alerts for specific keywords and phrases that may be associated with disinformation.
- check for unusual patterns, such as sudden spikes in negative or false information about the company.
- verify the source of any information that is being spread about the company. This can involve checking the credibility of the source and examining any evidence that is presented to support the claims being made.
- engage with their stakeholders, including customers, employees, and partners, to identify any false information that is being spread and to address any concerns or questions that may arise.
- conduct a risk assessment to evaluate the potential impact of a disinformation campaign on their reputation, brand, and business operations. This can help to inform the company’s response to the campaign.
- work with experts in cybersecurity, public relations, and crisis management to effectively detect and respond to a disinformation campaign. These experts can provide valuable guidance and support in managing the risks posed by disinformation.
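The "unusual patterns" step above can be partly automated by comparing daily mention volumes against a trailing baseline. A minimal sketch follows; the window size, the z-score threshold, and the sigma floor are illustrative assumptions to be tuned against a company's real monitoring data:

```python
from statistics import mean, stdev


def spike_days(daily_mentions: list[int], window: int = 7, z: float = 3.0) -> list[int]:
    """Return indices of days whose mention count far exceeds the trailing baseline."""
    flagged = []
    for i in range(window, len(daily_mentions)):
        base = daily_mentions[i - window:i]          # trailing window of counts
        mu, sigma = mean(base), stdev(base)
        # Floor sigma so quiet periods don't make every small change a "spike"
        if daily_mentions[i] > mu + z * max(sigma, 1.0):
            flagged.append(i)
    return flagged
```

A flagged day is only a prompt for investigation: analysts would then apply the source-verification and stakeholder-engagement steps listed above to judge whether the spike reflects a disinformation campaign or legitimate news.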
In summary, detecting disinformation targeting a company requires a proactive approach that involves monitoring social media and news sources, checking for unusual patterns, verifying the source of information, engaging with stakeholders, conducting a risk assessment, and working with experts. By taking these steps, companies can identify potential disinformation campaigns and take appropriate action to mitigate the risks.
- Gu, Lion, Vladimir Kropotov, and Fyodor Yarochkin (2017). The Fake News Machine: How Propagandists Abuse the Internet and Manipulate the Public. Trend Micro Forward-Looking Threat Research (FTR).
- Petratos, Pythagoras N. (2021). “Misinformation, disinformation, and fake news: Cyber risks to business.” Business Horizons, Elsevier, vol. 64(6), pp. 763–774.