2020 has become the year not only of the Covid-19 pandemic but also of the infodemic. Stuck at home, we have been spending more of our time online, and this shift has provided fertile ground for viral misinformation to spread.
Misinformation is any false information, regardless of intent, and includes honest misunderstandings of the facts. Disinformation, on the other hand, refers to misinformation created and spread intentionally, as a way to mislead and confuse. Falsehoods, conspiracy theories, political manipulation, fraud – these are all parts of the infodemic, the global spread of inaccurate information. By some estimates, adults are deceived by fake news 75% of the time, and Facebook appears to be a major vector.
To learn more about the spread and the purpose of fake news, we sat down with Beka Kaplanishvili, Chief Advertising Officer at Noxtton, and Nino Turkia, Digital Copywriter.
“When we talk about fake news, the very first question that pops into our heads is: who stands behind all of this? And the answer is simple: the foot soldiers in this infodemic war are bots and trolls,” says Beka Kaplanishvili.
As he explains further, in the social media context, bots are autonomous programs that can run accounts and spread content without human involvement. They are designed to resemble actual users and can influence a real person’s decision-making.
Our respondents explain that there are different reasons for creating and spreading false information: some actors have political motives, some groups or organizations want to make money, and some people create bots to satirize the media or to spread their conspiracy theories.
“It is hard to track down who is in charge of the fake information. This mechanism works flawlessly. Roughly speaking, a true story reaches only 1,000 users, while false stories reach over 10,000 people,” says Nino Turkia.
Fake news is often spread through fake websites and shared by bots and trolls, which have a huge and often unrecognized influence on social media.
“These fake accounts, called sockpuppet accounts, are created to influence conversations for commercial or political reasons. One notable example of troll interference came in 2016, when Russian-sponsored Twitter trolls aggressively exploited social media to influence the U.S. presidential election and caused a huge resonance on a global level,” says Nino Turkia.
Our respondents suggest that looking at an account’s history is one way to identify these deceptive accounts. In some cases, however, interested groups invest heavily in making fake accounts seem real.
“Have you heard about Jenna Abrams? A xenophobic, far-right opinionist, quoted by The New York Times in 2017. She was controlled by the Internet Research Agency, known as a Russian government-funded troll farm. Her account had 70,000 followers, and no one suspected that she was not even a real person,” says Nino Turkia.
As Beka explains, bots and trolls aim to create division and distrust between internet users.
“Facebook, Twitter, and Instagram started out as platforms connecting friends, family, and people of interest. But today, they are increasingly divisive virtual spaces. We now use these spaces for business, political campaigns, and spreading our ideas, and the presence of bots and trolls causes trust levels to drop significantly,” states Beka Kaplanishvili.
As our respondents suggest, to spot bots, users should pay attention to an account’s patterns of speech and look at its posts. Additionally, if the username or handle contains random numbers and letters, it is a good bet the user is not real.
“While bots can be spotted fairly easily, identifying trolls is more difficult. One tell is off-topic comments: trolls deliberately post irrelevant remarks to annoy and disrupt others. Ignoring hard facts is another sign of a troll, since they are not looking to draw any conclusions from their ‘argument’,” says Beka Kaplanishvili.
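The username heuristic our respondents describe – random numbers and letters in a handle – can be illustrated with a small sketch. This is a hypothetical example, not any platform's actual detection logic; the function name and thresholds are our own illustrative choices:

```python
import re

def looks_bot_like(handle: str) -> bool:
    """Rough heuristic: flag handles with auto-generated-looking
    patterns, e.g. 'user84729183'. Thresholds are illustrative only."""
    handle = handle.lstrip("@")
    if not handle:
        return False
    # A long trailing run of digits is a common auto-generated pattern
    trailing_digits = re.search(r"\d{5,}$", handle) is not None
    # A handle that is mostly digits also suggests a generated name
    digit_ratio = sum(ch.isdigit() for ch in handle) / len(handle)
    return trailing_digits or digit_ratio > 0.4

print(looks_bot_like("jenna_abrams"))   # False
print(looks_bot_like("user84729183"))   # True
```

A real detector would of course weigh many more signals – posting frequency, account age, and content patterns – but the sketch shows how even one weak signal can be automated.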
Another tool of fake information worth mentioning is the deepfake: synthetic media in which a person in an existing image or video is replaced with someone else’s likeness.
“Deepfakes are a new concept in faking content. They use machine learning and artificial intelligence to manipulate and create visual or audio content. As a result, a deepfake can produce a non-existent person, voice, or footage,” Kaplanishvili explains.
In 2020, Facebook, Twitter, and TikTok banned deepfakes that might mislead users. However, they remain an inseparable part of the internet. A few signs help determine whether a video is real or fake: unnatural eye movement, odd facial expressions, lack of emotion, unnatural body movement, and inconsistent noise or audio.
“Whom to trust online? This is the question we ask ourselves constantly while scrolling through the internet. Because the illusion of having information is far more dangerous than ignorance,” says Turkia.
They suggest that when deciding whom to trust, every person should ask whether they have enough information about the four traits of trustworthiness: competence, reliability, integrity, and benevolence.
“In the war on misinformation, various tools have been created to help internet users identify fake accounts. There are several Georgian apps and Chrome extensions that have identified several hundred fake accounts,” says Kaplanishvili.
Our respondents suggest that to free our feeds from misleading information and fake accounts, we should check who we add to our friend lists and which posts or pages we follow and comment on, analyze suspicious account activity, use identifying extensions, and block or report every sockpuppet account along the way.
“We ought to use social media more deliberately. We should not outsource our capacity for decision-making to an algorithm altogether. Rather than scrolling through our default feed, it is better to control and manage what we see, which content appears first, and which posts are amplified. Instead of blindly trusting and sharing posts, we have to check the sources and the reliability of the account that shared the information,” says Turkia.
The answer to the question of whom to trust online is not a straightforward one. Our respondents suggest the “test before you trust” approach, which appears to be the only way to get trustworthy information in this digital age.
To fight misinformation, internet users must share the responsibility. Before sharing, we should examine information sources, check the accounts that shared and engaged with a post, and look into an account’s bio, photos, and activity. With joined forces, we can push back against online disinformation.