This consultation is now closed

Read the Summary Report - Better Information Ecosystem.

Many thanks to all contributors from over 50 countries for sharing your valuable knowledge, experience and perspectives in UNDP and UNESCO's global online consultation on the impact of, and responses to, disinformation. The contributions from over 150 UN colleagues and other experts in this field will help to inform and sharpen UNDP and UNESCO's responses to disinformation going forward.  If you missed the opportunity, you can still participate by submitting your written contribution to [email protected] on or before 13th November 2020.

With much gratitude to our excellent team of moderators.

Based on the results of this e-discussion, we have continued to sharpen our thinking through focused consultations with key private sector actors, donors, UN and civil society organisations. As a result, a summary report from the e-discussion and consultations has now been compiled and is available on this page. The report summarises key points raised by the consultation participants. The views and opinions in the report are those of the contributors and do not necessarily reflect those of UNDP and UNESCO.

Thank you to all contributors for your great support. 

 

Welcome to engagement room 2!

While there is much focus on how to stem disinformation flows online and offline, there may also be pre-existing factors which enable disinformation to spread more easily in different contexts.  We want to identify those factors, social, political, and otherwise, that determine just what kind of foothold disinformation can get in a country, and understand how addressing those factors might build resilience and reduce the impact of disinformation. We also want to know how we can effectively monitor and anticipate waves of disinformation in order to address it preemptively.

In this room, we would like to explore the contextual enablers that we should be paying attention to, and hear about how you have effectively monitored disinformation.

As a reminder, disinformation is “false, manipulated or misleading content, created and spread unintentionally or intentionally, and which can cause potential harm to peace, human rights and sustainable development”.
 


Please answer any of the below questions (including the question numbers in your response).  Feel free to introduce yourself if you wish. We look forward to hearing from you.  

  1. In your country/community, what are the primary sources and motivations driving the creation and sharing of disinformation? Who are the “super spreaders” of disinformation, those who have sufficient influence and following to amplify on and offline?
     
  2. What are considered trusted news sources among different groups? What are the criteria for trusting these sources?
     
  3. What are the factors that make the public vulnerable to disinformation? What are the key normative, technology, governance and social cohesion enablers of disinformation?
     
  4. To what extent do Internet companies’ algorithms actively push disinformation, and what is the role of closed communications networks in amplifying the problem?
     
  5. What kind of monitoring can provide effective early warning of risks of potentially harmful disinformation?
     
  6. What examples have you seen of disinformation mapping feeding effectively into national or local policy decisions, including institutional codes of conduct, and/or programmes?
     
  7. What roles do civil society organisations, government, media and Internet companies play in terms of monitoring drivers and enablers of disinformation as a contribution towards social preparedness?

 


We commit to protect the identities of those who require it. To comment anonymously, please select "Comment anonymously" before you submit your contribution. Alternatively, send your contribution by email to [email protected] requesting that you remain anonymous.

Comments (69)

Niamh Hanafin Moderator

Week Four Summary

 

What a wonderfully rich final week in discussion Room 2! Many thanks to those who took the time to join us and contribute!

This last week we had some interesting insights from Olivia Sohr and Matias Di Santi of Chequeado on the importance of identifying the root sources of disinformation, as well as the factors which influence the spread of disinformation in different contexts. Matias mentioned the need for tools that provide more nuanced insights into how disinformation is shared, to enable more focused, effective fact-checking.

Israel Araujo noted that in his community social media is a significant vector for disinformation, and that access to quality public information is one way to combat this and promote transparency. Orna Young reminded us of the challenge of influential individuals acting as super spreaders of disinformation by sharing it with their constituents, who tend to accept information more readily from someone they admire, and of the need to work through trusted organisations to reach hard-to-reach groups.

Miroslava Sawiris's extensive contribution from the Alliance for Healthy Infosphere raised important points about the diversity of disinformation spreaders, from influencers to political actors to pseudo-news sites. All this is facilitated by social media algorithms which bring fringe sites and ideas to the fore.

Notable is the way in which advertising revenue has been diverted from reputable news sources to unreliable and dubious information sources. Transparency of ownership and labelling of online news sites would be one way to help the public assess their news sources. A range of factors, from the curating of news by social media to the lack of public digital skills, all contribute to an easier spread of disinformation. Major question marks remain over the influence of closed groups on this spread, though as Louise Shaxson reminds us, research suggests that information echo chambers and bubbles are potentially not as harmful as we assume.

At the systemic level, more coordinated monitoring and independent oversight emerge as key early warning solutions. Regional or even international regulatory solutions may be needed to avoid fragmented and ineffective national regulation. However, that doesn't negate the need for a coordinated response involving other stakeholders, including national authorities, civil society groups, media and internet companies themselves.

Paola Forgione of the International Committee of the Red Cross underscores the role of inconsistent and contradictory official communication in driving people towards alternative information sources, which package content in a more compelling way. Also highlighted was the need for early warning systems, monitoring content pre-emptively for potential virality.

Finally, Dhruv Ghulati of Factmata explained their tool for detecting disinformation based on language and tone markers, used to reduce the visibility of disinformation in newsfeeds and to support content moderation. Concerted efforts by multiple developers would be needed to create an effective and transparent system.

Thank you again for a diverse and fascinating discussion!

--

Week Three Summary (Room 2) by Moderator Ruth Canagarajah.
Week Two Summary (Room 2) by Moderator Rachel Pollack.
Week One Summary (Room 2) by Moderator Louise Shaxson.

Louise Shaxson Moderator

Hello, and a very warm welcome to this UNDP-UNESCO consultation on the globally important topic of disinformation.  I'm Louise Shaxson, Director of the Digital Societies Programme at ODI, a London-based think tank, and I'm delighted to be moderating the consultation with Simon Finley from UNDP for this week.  Next week we'll hand over the moderating job to other colleagues, but we will still remain very engaged.  As a reminder, the consultation runs for three weeks, so there's plenty of time to get involved in some fascinating and in-depth discussions with a truly worldwide group of people!  

When you comment, please let us know which particular question or questions you are responding to; it helps other people anchor themselves in the conversation. 

And now it's over to you - how would you answer the questions set out at the top of this page?

We look forward very much to hearing from you all,

Louise & Simon

 

Yves

Hello everyone,

I am happy to be part of this very interesting group, whose initiative I applaud. I work in the eastern part of the Democratic Republic of the Congo as media coordinator for the international organisation Search for Common Ground. In recent months we have faced a phenomenal and unprecedented surge in fake news and other manipulation of opinion on social media during the Ebola epidemic and the COVID-19 pandemic, all in an extremely tense security context.

At the end of 2018, while the country was in the grip of election fever, a tenth Ebola epidemic was declared in Beni, North Kivu, in eastern DRC, an area considered an opposition stronghold. Beni and its surroundings were excluded from the electoral process to prevent the rapid spread of the epidemic to the rest of the province, which provoked a revolt by the opposition parties and by pressure groups such as youth and women's movements and other civil society structures. Parliamentary candidates in the cancelled elections were among the first to launch hate speech and encourage the circulation of fake news on social media and, above all, through traditional media, including radio and television.

Many incendiary messages were relayed against the Ebola response medical teams and the humanitarian agencies, who were accused of having been bought by the government in power to inoculate residents with the Ebola virus so that this part of the country, hostile to the government, could not take part in the elections. Manipulated images, doctored audio and false testimonies circulated throughout the early period of the epidemic, which led to attacks against treatment centres that caused the deaths of several health workers in the Beni and Lubero territories, where several armed rebel groups remain active.

With our teams, we worked in a rather particular context, with journalists who were themselves already convinced by the rumours and political manipulation. We were able to organise some important activities, including a mapping of influential groups on Facebook and WhatsApp, which led to the identification of the main group administrators, who were then trained in the basics of fact-checking and rumour management. Radio journalists, for their part, were brought together to form a special joint newsroom, which launched a special news bulletin broadcast simultaneously by all the radio stations.

The battle against fake news and hate speech has not been won 100%, but debates about the veracity of certain information, prompted by questions raised by members of groups on the various social networks, are already a positive sign.

Currently, the big challenge remains the rumours around COVID-19 in this volatile security zone, where residents believe that wearing face coverings and masks facilitates the entry of foreign rebel groups into their villages, since the disease is not supposed to exist in Africa.

Ssanyu Rebecca

Qn1: In my country, Uganda, the primary sources of disinformation are politicians and some people who are closely connected to the powers that be, in both political and technocratic circles. Often, the disinformation arises from rumours about issues that are being discussed at high levels of decision making but have not yet been concluded or sanctioned for dissemination. Insiders leak the half-baked information and, as it does its rounds on various social media platforms, particularly WhatsApp and Facebook, it becomes distorted and sensationalised. By implication, the motivations driving especially the sharing of disinformation include the apparent fun that comes from sensationalism and the desire to tarnish the names of key personalities such as opposition politicians (by those in government) or high-level technocrats (by their workplace rivals), among others. It would be difficult to pinpoint the so-called "super spreaders" of disinformation. It is important to note, however, that redundancy is a major problem. When many people, especially the youth, lack employment, they have ample time to engage in unhelpful communication, including the spread of disinformation both on and offline. The extent to which disinformation becomes influential depends on the subject matter. Political propaganda against opposition politicians tends to negatively influence public opinion and often creates hostile sentiments and responses.

Louise Shaxson Moderator

Hi Rebecca, thanks very much for your comment.  I'm interested to know what the role of the mainstream media is in all of this: do they provide any checks and balances on the spread of disinformation?  

Ssanyu Rebecca

Louise Shaxson The country has laws and regulations for the media, including electronic media. But these only affect traditional electronic media and are not linked in any way to social media. The only instances where social media content becomes subject to the policy and legal framework are when the state perceives it as offensive, especially towards high-level political personalities.

Simon Alexis Finley Moderator

Very interesting. As lockdowns continue across the world, we see over one billion students spending more and more time online, unable to participate in face-to-face activities that have important benefits for mental health and well-being. It would be interesting to hear if anyone has ideas on risk mitigation for this enormous and potentially at-risk demographic.

 

Edem Agbe

I agree with your posting. Recently in Ghana, during the compilation of the new voters register for the upcoming election in December, social media (particularly WhatsApp and Facebook) was awash with various forms of disinformation created by politicians to stir public displeasure with the government and the electoral commission. Disinformation is thus becoming a threat to our democracy and participatory governance.

Ben Schonveld

Simon Alexis Finley this is hugely important... prior research suggests that online exposure on its own may not lead to offline impact. But in today's very particular context, and an external context that is unresearched (people stuck at home while jobs and more disappear), the outcomes are unknown. Similarly, I think it's important that we recognise that the message and means of disinformation are understudied. The model is not that the ideology actually has a point, but rather that the message seeks to confuse - leaving the individual baffled as to what truth looks like and hence disempowered. We have no real research on how that plays out.

Simon Alexis Finley Moderator

Ben Schonveld Good point. From the Preventing Violent Extremism work we know that the link between online and offline is often cited but with a non-existent evidence base. Further research on how disinformation persuades, or doesn't, in the current environment would be enlightening.

 

Edem Agbe

Disinformation has become quite a big issue in Ghana. From my experience in the social policy and development space, there are three main drivers of disinformation: (i) Politics - politicians and their surrogates intentionally create fake news to discredit their opponents or government interventions; citizens then believe some of this fake news and it affects their confidence in government or otherwise. (ii) Bloggers - many bloggers depend on followers to earn money, and they believe fake and sensational news attracts followers, so they create fake news to gain followers and get paid based on the number of followers they have. (iii) The advent of social media has also become a primary driver of disinformation. Posting and sharing information on social media is not much regulated, and although the Ministry of Communication and the Ministry of Information have warned the public about fake news, they are unable to effectively regulate the use of social media and the production of social media content.

Louise Shaxson Moderator

Thanks Edem - that's a great contribution.  Both you and Rebecca have highlighted the issue of sensationalism.  What role do newspapers and tv stations play in Ghana in countering some of this sensationalism?  Do they ever call it out and give more evidence-based reports, or do they just report it (and by doing that, give it more airtime)?  

Edem Agbe

Louise Shaxson Sections of the media in Ghana are becoming more critical, but the bigger challenge is that politicians own some of the media houses, so those outlets must follow the dictates of the owner. Civil Society Organisations are making efforts to combat the spread of fake news. Recently there is a project by the Media Foundation for West Africa (MfWA) dedicated to combating fake news and disinformation in Ghana during COVID-19 and the elections. They have developed a fact-checking system that checks the accuracy of statements made by politicians on political podiums and of information on COVID-19. The organisation then disseminates the accurate information to the public in local Ghanaian languages using radio and online platforms.

The challenge is that disinformation spreads faster than accurate information.

Melody Azinim

I agree with Edem's submission and would also add that in Ghana one of the drivers of disinformation is that some media houses want to be seen as the first to report breaking news; because of that, they do not take the time to verify information before sharing it. Over the last week, there was a media report on the shooting of a gentleman in one of the regions in the northern part of the country. I decided to follow up with one of the institutions mentioned in the report, only to realise that the media report was not accurate; unfortunately the news had already spread widely. To mitigate this issue, I believe that such media houses and individuals need to be called out and made to retract such stories using the same platforms.

Rachel Pollack Moderator

Thank you, Melody Azinim. Very interesting example.

Do you think increased capacity building for journalists to enable them to follow professional standards would help address this phenomenon?

Daniel Barraez

Hello, everyone.
Disinformation and its implications are crucial issues for all societies, and their discussion should involve as many stakeholders as possible. This consultation is a concrete way to make the debate open and broad. Congratulations to UNDP/UNESCO for this initiative!

I am the Human Development, Multidimensional Progress, and SDG Center Manager in UNDP-Venezuela. 

Regarding Q1, the primary source of disinformation in Venezuela is political confrontation. In our case, political hyperpolarization affects almost every topic of social discussion and deepens the deterioration of social cohesion. The traditional national media are more prudent about information pollution, but social networks are the preferred field for disinformation disseminators. Hyperpartisan behavior is strongly correlated with disinformation on social networks.

It is not easy to imagine the volume of misinformation being reduced in this context, but I believe one way to address information pollution is to identify and support trusted sources.

Ema M Fong

Aloha Daniel,

Thank you for your post and the analysis of what is happening in Venezuela. I am a moderator from room one (1).  Please could you share more about the key stakeholders that you spoke of - the trusted sources? Who are they, and have they earned the trust of the people or the government or both? What roles do they play, what kind of power do they have, social capital, informational power, expert power, political power, etc.? Could you please also identify other key stakeholders and whether they are allies or drivers of violence or shadows? Can you share what role civil society organizations, government media, and internet companies play in monitoring drivers and enablers of disinformation as a contribution toward social preparedness?

Thank you and warmest aloha, Ema

Daniel Barraez

Hi Ema,

Thank you for your interest in the Venezuela case. Regarding trusted sources, there are already fact-checking units mentioned by poynter.org (https://www.poynter.org/fact-checking/2019/against-all-odds-fact-checki…): Espaja.com, Cotejo, Efecto Cocuyo, Observatorio Venezolano de Fake News, Cazadores de Fake News, and Observatorio Venezolano de Desinformación (on Twitter). These range from specialised fact-checking websites to news portals. It is unclear whether fact-checking has increased its credibility, but this is a significant advance over the recent past.

In hyperpolarized contexts like Venezuela, there is a tendency to confuse reliable sources of information with neutrality in the political conflict. Neutrality and trustworthiness are two different concepts that don't necessarily come together. It is essential to promote the idea that it is possible to defend a political position without resorting to disinformation. That may seem obvious, but it is easy to lose sight of these basic ideas in the middle of hyperpolarization.

 

The media, governmental and non-governmental, are very polarized and frequently contribute to polarization. Young professionals and academia are potential allies in addressing disinformation.

Simon Alexis Finley Moderator

Thanks Daniel! The political motivations and drivers for disinformation are extremely important and can sometimes get lost as people focus on how social media spreads polarizing information. If the political climate is a driver, does anyone have positive experiences in addressing this in their own context?

 

Ruth Stewart

Hi Everyone, I'm wondering if you've tapped into the fact checking community? They have so much experience in this area.... I recommend you approach them if you can.

Simon Alexis Finley Moderator

Yes! They are doing some great work once the disinformation is out there. It would be great to hear from some of them in this consultation.

 

Daniel Barraez

Thanks, Simon, for your comments.

We are also working on disinformation about gender issues in Panama during the pandemic. It is a work in progress. As Simon has pointed out, political motivations are extremely important in disinformation, and Panama is no exception. Political issues are in second place as a source of gender-related information pollution on social networks in this country; the clear first place goes to the confrontation between secular and progressive values. Clashes over abortion, gender diversity, and gender equality can reach a high level of aggressiveness, even resorting to misinformation.

It is worth noting that on Twitter, bots play a significant role in spreading political disinformation. But in the disinformation around these values confrontations, we couldn't detect bots. People seem to have no problem publicly using their true identity to disqualify their adversaries. Disqualifying people, rather than their arguments, seems to be a frequent feature of misinformation on hot social issues.

Regarding user communities, the spread of disinformation by users outside the main communities is significant. In general, communities are careful, but there are also a few communities prone to spreading disinformation. It would be interesting to engage the influencers of these disinformation-prone communities to improve information quality.

Niamh Hanafin Moderator

Hi Ruth, thanks for the suggestion, we are indeed in contact with IFCN and many of its members.  Baybars Orsek we'd love to hear from you and your community for this important perspective!

Daniel Barraez

Thanks for your suggestion. We have identified the main communities, and this is very informative for understanding disinformation. We found that, in this context, all user communities carry some kind of information pollution, but the "super spreaders" of disinformation almost always belong to the hyperpartisan communities. Hyperpolarization drives disinformation.
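For readers who want to picture the community-mapping step Daniel describes, here is a minimal, hypothetical sketch in Python; the account names, edges and ranking heuristic are invented for illustration and do not describe UNDP Venezuela's actual method. It detects communities in a small interaction graph and lists each community's most connected accounts, one common way to shortlist candidate amplifiers for closer (human) review.

import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Toy interaction graph: an edge means one account retweeted or replied to another.
edges = [
    ("ana", "berta"), ("ana", "carlos"), ("berta", "carlos"),   # cluster 1
    ("dani", "eva"), ("dani", "fito"), ("eva", "fito"),         # cluster 2
    ("carlos", "dani"),                                         # weak bridge between clusters
]
G = nx.Graph(edges)

# Group accounts into communities by greedy modularity maximisation.
communities = greedy_modularity_communities(G)

# Within each community, the most connected accounts are candidates for closer review.
for i, members in enumerate(communities, start=1):
    ranked = sorted(members, key=G.degree, reverse=True)
    print(f"community {i}: {ranked} (most connected: {ranked[0]})")

In real data the graph would be far larger, and community membership or connectedness says nothing about intent; it only narrows down where to look.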

 

 

Ludwwin

1. In Colombia, disinformation comes mainly from political figures, organised strategies using fake accounts that push trends on Twitter, and organised strategies in WhatsApp groups that spread content at great speed.

2. Traditional media outlets, or those with a long track record.

3. Lack of knowledge of information verification methods, and the lack of exemplary cases showing the implications of sharing disinformation.

6. UNESCO's document on the disinfodemic gathers several local examples of cases and initiatives: https://en.unesco.org/covid19/disinfodemic

Fact-checking organisations are fundamental in giving civil society and the media tools to combat disinformation.

Louise Shaxson Moderator

Week One Summary

Hi all

Thanks for all your thoughtful comments - this is developing into a very rich discussion.  For my own benefit I tried to summarise what I think has been said, and thought I would share it with you - please let me know whether I'm on the right track!

Where does it come from?  Several factors give rise to it: political partisanship came through quite strongly in Rebecca, Edem and Daniel's points.  Disinformation helps this partisanship to turn into hyperpolarisation, as Ludwwin pointed out, when people actively strategise to spread disinformation at high speed.  But it's not just partisanship: the desire to achieve political influence is obviously important, but there's also a desire for sensationalism and excitement, particularly from people who are simply bored and who have time to 'stoke the rumour mill'.  Edem made the point that bloggers who rely on numbers of eyeballs for their revenue are more likely to write sensationalist content - we shouldn't forget that Facebook already knows that our brains are hardwired to be attracted to divisiveness and immediate experiences INSERT LINK.  And even as far back as 2009, we knew that social media was leading to 'an inability to empathise and a shaky sense of identity' (see  https://www.theguardian.com/uk/2009/feb/24/social-networking-site-chang… - who remembers Bebo??).

What is disinformation trying to achieve?   Polarisation, disempowerment, confusion and discrediting others all came through from the contributions - all of which came through in Yves' writing about the situation in DRC.  If the aim is disempowerment and confusion then it seems to me that what might be driving it is the combination of a 'shaky sense of identity' and an inability to empathise with others.  Obviously the question of how strong your sense of identity is isn't just about being online - there are long histories of questioning the identity of marginalised groups in order to disempower them.  But the way that social media changes our ability to empathise does seem to be a particular facet of the online world and I wonder whether there's a sort of flow of lack of ability to empathise --> shaky sense of identity --> feeling of disempowerment.  And boredom probably doesn't help with our sense of identity either.

Then I started looking for the factors that don't help slow the spread of disinfo (or actively encourage it to spread).  Politicians owning traditional media, existing tensions & conflicts, and the perception of being overlooked in decision making processes all stood out for me, and it was interesting that they all seemed to come from the offline space.  Which took me to Ben's point, and how he highlighted that this online/offline question hasn't been very well researched.

A lot for me to think about!  Do let me know what you think about my summary - it's just what I highlighted as I was reading the above.  And please bring your friends, colleagues, Twitter followers etc into the discussions - it would be great to hear from different parts of the world.  

Louise Shaxson Moderator

I'm not sure the Guardian link came through properly: it's an article entitled "Facebook and Bebo risk 'infantilising' the human mind" from 24th Feb 2009 on guardian.com, if anyone would like to look it up.  

Daniel Barraez

Hi Louise,
Thank you very much for the Guardian reference on social networks and for your summary of the discussion.

I want to share a comment about "the factors that don't help slow the spread of disinfo (or actively encourage it to spread)" that you pointed out in your summary.

In the gender disinformation in Panamá during the pandemic, we have seen that in almost all misinformation messages the intention to harm takes the form of murder accusations, no matter the subject (gender or politics).

I believe this way of harming is related to two factors:
- the anonymity of users who create and disseminate disinformation;
- the fact that a murder accusation (or another form of harm) has no consequences for the spreader. In many countries, false allegations in the media may have severe implications, such as lawsuits in court, but on social networks this is not the case. This can explain why social networks spread more disinformation than traditional media.

Minimal regulation of social networks could help address both factors, as well as others such as bots.

Juan Pablo Miranda

From Chile, we would like to contribute these reflections:

1.- In Chile, sharing false or biased information is associated with digital activism. One reason is that these groups tend to share more information on social networks. Another reason is the way information pollution works on social networks. When a piece of information coincides with our worldview, we tend to believe it more easily, since it confirms the visions and values that we previously held. This is called confirmation bias.

3.- At least two factors can be mentioned. First, the lack of awareness about how social networks work and how the algorithms used by platforms such as Twitter or Facebook filter and bias the information to which we are exposed based on our values, political views and interests. Second, there is no culture of checking the information to which we are exposed, and there is little awareness of the magnitude of the circulating information that could be false.

4.- One of the main problems with information pollution in social networks is the way in which the algorithms of platforms such as Twitter segregate users. In general, users on social networks interact with other users based on common interests and worldviews, which results in ideologically and socially closed social networks. The existence of these informative "bubbles" makes it difficult to contrast information and views on different topics.

7.- Awareness and pedagogy campaigns. Build alliances that allow breaking the information bubble.

Rachel Pollack Moderator

Hi Juan Pablo, thank you for this interesting perspective from Chile.

You mention that much of the false and biased information shared in Chile is associated with digital activism. Does this false content relate to specific types of issues? What are the individuals or organizations spreading this false information trying to achieve?

Regarding Question 3, you identify a lack of awareness about how social networks work, as well as an absence of a culture of checking information, as factors that make the public vulnerable. Do you think that media and information literacy could help to counter these?

Juan Pablo Miranda

Rachel Pollack 

Hello. Many political organizations use misleading or exaggerated information to make a point. However, most people do not know when they are sharing disinformation on social platforms. The problem is that people tend to accept as true information or news that matches their beliefs or political points of view. That is why political activism is related to the spread of disinformation on social media. Currently, fact-checking organizations have detected a lot of misleading or false information about the constitutional process that Chile is experiencing.

Regarding the second question, I agree that media and information literacy could play an important role, especially on social platforms. For example, social media companies should actively explain how their algorithms work.

 

Rachel Pollack Moderator

Hello! Welcome to Week 2 of the UNDP-UNESCO consultation on disinformation.  I'm Rachel Pollack, and I work in UNESCO's Section for Freedom of Expression and Safety of Journalists.

I'll be serving as moderator of Room 2 this week, taking over from Louise Shaxson and Simon Finley.

This Room covers topics surrounding the drivers, enablers and mitigation mechanisms of disinformation such as: the primary sources of disinformation; the factors that make the public vulnerable to it; the extent to which internet companies' algorithms push disinformation; and the role of various stakeholders in monitoring disinformation. 

Please answer any of the questions at the top of the page, whether just one or all seven. It will help if you can please indicate the question numbers in your response. Feel free to introduce yourself if you wish. 

Your contributions will help shape UNESCO and UNDP's actions to counter disinformation around the world.

Looking forward to your input!

Rachel

 

Ruth Canagarajah Moderator

What are the factors that make the public vulnerable to disinformation? What are the key normative, technology, governance and social cohesion enablers of disinformation?
 

An interesting finding from MIT is that fake political news, aside from just getting consumed quickly, gets shared up to 3x as quickly as factual news. Moreover, "controlling for many factors, false news was 70% more likely to be retweeted than the truth." This suggests to me two things: 1) there's something very intrinsic to the content itself (wording/language, "novelty" in ideas, formatting and headlines) that draws people in; and/or 2) the content has been targeted well to the individual, whether through alignment in socio-political beliefs or involvement in an active social network that drives the spread of misinformation via information cascades. Regarding Point 1, I would be super keen to see the use of Natural Language Processing via sentiment analysis to explore if similar linguistic patterns emerge in identifying misinformation. Regarding Point 2, this is what Busara has been largely occupied by in recent projects.

At the center is the capacity of fake news to override System 2 (deliberate) thinking by drawing upon reflexive human biases. This includes limited attention, the need for group belonging/identification (which occurs on a spectrum but is quite tied to Louise's idea of a shaky sense of identity that needs to be validated), and the comparative advantage that mis/disinformation is oftentimes more evocative and/or novel than run-of-the-mill news (well, depending on what country we're talking about these days).
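As an illustration of the linguistic-pattern idea in Point 1 above, here is a minimal, hypothetical sketch in Python (not a tool discussed in this consultation) that uses the off-the-shelf VADER sentiment lexicon to flag unusually emotive wording for human review. The headlines and the threshold are invented, and strong sentiment is at best a weak signal, not a detector of misinformation.

import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-off download of the VADER lexicon

def flag_emotive_items(items, threshold=0.6):
    """Return (text, score) pairs whose emotional intensity exceeds an illustrative threshold."""
    analyzer = SentimentIntensityAnalyzer()
    flagged = []
    for text in items:
        compound = analyzer.polarity_scores(text)["compound"]  # -1 (very negative) to +1 (very positive)
        if abs(compound) >= threshold:
            flagged.append((text, compound))
    return flagged

# Invented headlines, used only to show the screening step.
headlines = [
    "Ministry publishes routine update on regional vaccination schedule",
    "SHOCKING betrayal! Officials caught poisoning the nation's children!!!",
]
for text, score in flag_emotive_items(headlines):
    print(f"review candidate ({score:+.2f}): {text}")

Research along these lines would combine many more linguistic features (novelty, readability, emotive vocabulary) with network and source signals, and would keep human fact-checkers in the loop; the sketch only shows how the first screening pass might be prototyped.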

Dina Mansour-Ille

A very interesting follow-up to the earlier discussion. I would say nowadays, it is very difficult to pinpoint one primary source or specific motivations driving the creation and sharing of disinformation (Q1). It very much depends not only on the country/community, but also very much on the topic, the medium and people most influential in relation to that particular topic. That said, however, I believe that social media is by most accounts a powerful 'super spreader' of disinformation, because quite simply anyone can share and sell information as facts that then grows and mushrooms into new shapes and forms. 

That leads me to Q2: nowadays, I would never trust any news or information shared on social media. Unfortunately, even the most neutral information sources today, such as the BBC, do get influenced by the super spreading of false information. I therefore only trust the 'law of 3', i.e. if I see the same piece of information or news shared in neutral, more traditional news mediums more than three times, then I classify it as credible. I do not, however, think there is any one source of news or information that can be trusted nowadays, as information has become increasingly fluid.

In relation to Q3, I wouldn't say that the public is vulnerable to disinformation in general. It very much depends on who is sharing that information – i.e. if you are a supporter of a particular political figure and he/she shares a particular piece of news or information as a fact, you are simply more likely to believe it. In a sense, the name, credibility and your interest in some public figures can make you vulnerable to disinformation. In addition, you could also be more likely to believe disinformation if it supports your position – it is simply human nature, especially when you are not sufficiently informed about the topic. We have witnessed this quite strongly with the coronavirus pandemic. Ample disinformation has been shared as 'facts' supporting different positions on the virus, and these different – and sometimes even contradictory – 'facts' have been super-spread in a hysterical way. This leads me to Q4.

In relation to the virus, algorithms did indeed play a role in the spread of disinformation. I do not know, however, to what extent. I think this differs by topic, company, community and country, but it no doubt amplifies the problem. I am, however, conflicted over Q5. In relation to this question, I am not entirely sure if monitoring works. Monitoring can be, and in many cases is, biased, or it follows algorithms that might censor information that is not necessarily inaccurate. So personally, I am not sure monitoring is an effective solution – self-censorship combined with awareness-raising tools/campaigns might be more effective in my opinion. In relation to the coronavirus, FB, for example, has been actively monitoring content for a while now and its policy hasn't been effective. If anything, it has been counter-productive, as the more content was deleted and banned, the more those who didn't believe in the severity of the virus were convinced that there is some sort of conspiracy happening. I have witnessed this first hand within my own network – friends whose content has been deleted on FB ended up believing in some sort of conspiracy around the coronavirus.

I am not sure about Q6, but I definitely think that CSOs, governments and media and internet companies have a serious role to play in controlling the spread of disinformation (Q7). But this shouldn’t be about banning content, but rather about monitoring the flow of information and identifying drivers and enablers of disinformation and sharing more accurate and informed knowledge and information to combat disinformation. Banning content on one platform adds to the drivers of disinformation – as we have seen with the coronavirus pandemic. Also banned content finds an audience elsewhere. I think educating people, sharing accurate and informed information and re-directing drivers and enablers would help and might prove to be more effective.

Rachel Pollack Moderator

Dear Dina,

Thank you for this thoughtful and comprehensive response, which underscores the nuance in understanding the drivers and spread of disinformation.

As an academic, could you point us to some research that addresses some of the issues you raised? Are there specific examples or studies that we could consult to learn more?

Best wishes,

Rachel

Dina Mansour-Ille

Rachel Pollack Apologies for missing your earlier response. Here is some literature to consider on the topic.

- David J. Rothkopf, ‘The Disinformation Age’, Foreign Policy, no. 114 (1999), pp. 83–96;

- Yochai Benkler, Robert Faris, and Hal Roberts. Network Propaganda: Manipulation, Disinformation, and Radicalization in American Politics. Oxford: Oxford University Press (2018);

- R. Kelly Garrett, ‘The “echo chamber” distraction: Disinformation campaigns are the problem, not audience fragmentation’, Journal of Applied Research in Memory and Cognition, 6(4):370–376 (2017);

- Erik C. Nisbet and Olga Kamenchuk, ‘The Psychology of State-Sponsored Disinformation Campaigns and Implications for Public Diplomacy’, Hague Journal of Diplomacy, 14(1-2):65-82 (2019);

- Gregory Asmolov, ‘The Disconnective Power of Disinformation Campaigns’, Journal of International Affairs, 71(1):69-76 (2018);

- Stephan Lewandowsky, Ullrich K. H. Ecker, Colleen M. Seifert, Norbert Schwarz, and John Cook, ‘Misinformation and Its Correction: Continued Influence and Successful Debiasing’, Psychological Science in the Public Interest, 13(3):106-131 (2012);

- Jonah Berger, Katherine L. Milkman, ‘What makes online content viral?’ Journal of Marketing Research, 49 (2):192-205 (2012);

- L. John Martin, ‘Disinformation: An instrumentality in the propaganda arsenal’, Political Communication, 2(1):47-64 (1982);

- Thomas Rid, Active Measures: The Secret History of Disinformation and Political Warfare (New York: Farrar, Straus and Giroux, 2020); 

- Ulises A Mejias and Nikolai E Vokuev, ‘Disinformation and the media: the case of Russia and Ukraine’, Media, Culture & Society 39(7):1027-1042 (2017);

- Yevgeniy Golovchenko, Mareike Hartmann, Rebecca Adler-Nissen, ‘State, media and civil society in the information warfare over Ukraine: citizen curators of digital disinformation’, International Affairs, 94(5):975-994 (September 2018);

- Richard Fletcher, Alessio Cornia, Lucas Graves, and Rasmus Kleis Nielsen, ‘Measuring the reach of ‘fake news’ and online disinformation in Europe’ Reuters Institute, University of Oxford, 2018. https://reutersinstitute.politics.ox.ac.uk/sites/default/files/2018-02/Measuring%20the%20reach%20of%20fake%20news%20and%20online%20distribution%20in%20Europe%20CORRECT%20FLAG.pdf.

- House of Commons Culture, Media and Sport Select Committee, 'Disinformation and ‘fake news’: Interim Report' (July 2018); 

- W. Lance Bennett and Steven Livingston, ‘The disinformation order: Disruptive communication and the decline of democratic institutions’, European Journal of Communication, 33(2):122-139 (2018); 

- Susan Morgan, ‘Fake news, disinformation, manipulation and online tactics to undermine democracy’, Journal of Cyber Policy, 3(1):39-43 (2018); 

- Ladislav Bittman, ‘The use of disinformation by democracies’, International Journal of Intelligence and CounterIntelligence, 4(2):243-261 (1990);

- Russ Hoyle, Going to War: How Misinformation, Disinformation, and Arrogance Led America into Iraq (New York: St. Martin’s Press, 2008).

- Amy MacKinnon, ‘Russia Knows Just Who to Blame for Coronavirus: America’, Foreign Policy, February 14, 2020. https://foreignpolicy.com/2020/02/14/russia-blame-america-coronavirus-conspiracy-theories-disinformation/;

- Jessica Glenza, ‘Coronavirus: US says Russia behind disinformation campaign’, The Guardian, February 22, 2020. https://www.theguardian.com/world/2020/feb/22/coronavirus-russia-disinf…;

Olena Borodyna

Evening, everyone!

Just a few reflections from me on fake news in Ukraine.

In Ukraine, the enablers of disinformation from Russia and from other sources, such as domestic TV channels and newspapers with ties to oligarchs, include many factors: exposure, political attitudes and receptivity to foreign media influence. While there has been some development of an independent media ecosystem, Ukraine and many other former Soviet countries have continued consuming (and producing) content from and for Russian and Russian-speaking audiences. Such was (and still is) the case in many parts of Ukraine. In many cases people will continue consuming such content despite being aware that it is biased. When confronted about false or misleading facts in these news sources, some also dismiss the challenge as false allegations by someone from the other end of the political spectrum. I would add that understanding the politics of disinformation in Ukraine requires getting acquainted with the interaction of the domestic political and media landscapes, as media companies are owned by oligarchs and are generally perceived to be biased in favour of, or against, whoever the owner seeks to support or undermine.

In Ukraine, civil society plays a crucial role in monitoring disinformation and providing training to journalists and other representatives of civil society to enhance early warning capacity. However, as is the case in many countries, media literacy among the population is generally quite low, and fact-checking is limited to a small community of civil society organisations and engaged citizens. I should add that raising media literacy alone won't help tackle disinformation. In the case of Ukraine (as in many post-Soviet countries), trust in government and political institutions is weak, and in many cases people will trust official sources no more than other media channels. In the post-Soviet era, many countries of the former Soviet bloc, including Ukraine, struggled with nation-building (precarious economic conditions, a lack of unifying political leadership and ethnic divisions are some of the reasons for that). The lack of national unity and the absence of impartial news sources also drive people to engage with media channels that often spread false or misleading information, creating siloed communities where polarising opinions can flourish.

Rachel Pollack Moderator

Dear Olena, 

Thank you for this interesting perspective. 

Going to Question 4, do you see a specific impact of the internet, and specifically social media platforms, in the spread of disinformation?

Best wishes,

Rachel

Jim Della-Giacoma

A few brief reflections from an analyst who works on fragility and conflict in South and Southeast Asia, but who lives in the United States.

  • In your country/community, what are the primary sources and motivations driving the creation and sharing of disinformation? Who are the “super spreaders” of disinformation, those who have sufficient influence and following to amplify on and offline?

For the angry and aggrieved, social media are an immediate, unfiltered, and powerful tool of expression. Their views go unchallenged by facts, laws, or historical complexity. They blend into the daily flood of information online. It can be some time before the extreme and often ill-informed viewpoints are seen and assessed as a threat by those who are in a position to counter or check them. Being challenged by peers or members of their own community is an important first step towards compromise, developing a common understanding, or narrowing the gap between extreme viewpoints.

  • What are considered trusted news sources among different groups? What are the criteria for trusting these sources?

I am not sure the traditional "fact-checking by media" model is working. There will never be enough resources. It is too slow and often deliberately discredited by those with a political interest in spreading disinformation.

My sense is that countering deliberate or misinformed disinformation, particularly that which prompts violent conflict, needs to be done at a much more local level by activating allies that have their own sources of community legitimacy. Community by community, however we define this, we need to understand who the trusted sources of information are, which is not always the same as a "news source".

  • What are the factors that make the public vulnerable to disinformation? What are the key normative, technology, governance and social cohesion enablers of disinformation?

In some places, I see the distrust of and dysfunction in representative government as a distinct problem, especially when it is not representative. Weak relationships between communities and those who govern them aid disinformation. A lack of transparency in decision making does not help. Corruption undermines legitimacy. A lack of understanding of how and why decisions are made is part of this problem. The way decisions involving communities are made and communicated to them is important in countering those inclined to use disinformation as an unconscious or deliberate tool. These weak relationships undermine the ability of concerned or active citizens, who are a small group, to quickly go to the source to cross-check disinformation and potentially counter it. These active citizens are the go-to people for many others in a community as sources of information when a "hot" issue appears. Making direct source information more easily available or increasing its prominence online, including on social media, could be a way to counter disinformation. These should not just be understood as mass tools, but as tools for the key active citizens or activists to use.

  • To what extent do Internet companies’ algorithms actively push disinformation, and what is the role of closed communications networks in amplifying the problem?

The algorithms reward anger, passion, and frenetic levels of activity. They are easily gamed. How do we add speed bumps into these social media tools to slow things down or slow down reaction time? How do we create a culture of "stop, think, check, before you respond" in communities of concern? How do we create or reinforce trusted people or places to check-in before responding? How do we reinforce or encourage placing more value on information from named, certified, or known sources versus the anonymous, pseudonymous, and unverified?

Ruth Canagarajah Moderator

Hello all, and thanks for joining us for the third and final week of the UNDP-UNESCO consultation on disinformation. I’m Ruth Canagarajah and I currently work as a Senior Associate at the Busara Center for Behavioral Economics in Nairobi. 

I’ll be your moderator for Room 2 and, given that it is our final week, I am especially keen to hear your remaining insights and experiences in relation to disinformation and its drivers, enablers, and risk mitigation approaches. Along with continuing to flesh out ideas on the seven guiding questions found at the top of this page, I encourage you to interact with the other commenters on this forum. This is a wonderful time in our consultation process for more dynamic conversations and informal back-and-forths to take place on this hub for idea exchange. Additionally, I encourage you to explore Rooms 1 and 3 for comments, questions, and cross-pollination.

 

I look forward to the final round of contributions, no matter how brief or in-depth the inputs!

 

Warm regards,

Ruth

Rachel Pollack Moderator

Week Two Summary

Dear All,

Thank you for your thought-provoking contributions to the discussion in Room 2.

For those just joining, and as the moderation of Room 2 is skillfully taken up by Ruth, here are a few highlights from last week and over the weekend.

Ruth kicked off the conversation by sharing a study by MIT that found that false political news gets shared up to 3x as quickly as factual news. She pointed out that this reveals information about both the content itself and the way it targets specific audiences.

The sources and motivations driving the creation of disinformation vary significantly depending on the country/community, topic and medium, pointed out Dina. She stated that CSOs, government, media and internet companies have an important role to play in monitoring and sharing accurate and informed knowledge, and she warned that banning content has proven ineffective.

We received an insightful perspective on the situation in Ukraine from Olena. Civil society plays a crucial role in monitoring disinformation and providing training to journalists and others, Olena pointed out. She underlined the need for greater media and information literacy, while also pointing to challenges related to wider social and political divisions.

Jim reflected on the importance of community in countering disinformation, starting with challenging the false information spread by peers. It is also at the community level that trust in sources and greater transparency in political decision-making can happen, he noted. Jim raised a series of questions, including how we can “create a culture of ‘stop, think, check, before you respond’”.

A common thread across these contributions was the need for sharing reliable information and for fostering greater media and information literacy.

With great thanks again to all for these insightful contributions, I wish you an engaging discussion up ahead! Please do feel free to comment again and invite your colleagues and partners to join the discussion as well.

Every contribution—no matter the length, no matter if it addresses one question or all—will be valuable for shaping our collective work on countering disinformation.

Rachel

--

Week One Summary (Room 2) by Moderator Louise Shaxson.

Louise Shaxson Moderator

Hello everyone, if you haven't seen this report in today's news, it is very relevant to what we are discussing here - how disinformation is combined with other methods of cyber warfare to destabilise.  It makes chilling reading. 

Separately, I've commented in Room 3 on a report that has come out in the UK advocating for setting up an independent agency to monitor platforms (do go over there and have a look).  I wonder, though, how much that would achieve without a supporting structure that specifically looks at the issues of identity and community that we are discussing here.  And does it all come back to governance?  If, as Jim says, the traditional fact-checking model isn't working, is part of the problem that we haven't been nearly as imaginative as we need to be about how our governance structures need to change to handle our very rapidly shifting sense of identity and community?  Has anyone who's posted here had any experience of working with citizens' juries, for example?  Or with anything similar or more innovative?

Jamie Hitchen

I wanted to share a few thoughts based on my experience researching, primarily the use of WhatsApp, during recent elections in West Africa (Sierra Leone, Nigeria and The Gambia).

  1. Given the nature of my work, it is perhaps not surprising that I see political actors as the main sharers of falsehoods. This extends to government officials and those who are affiliated with political actors and parties, that are not always formally part of the party, but who work with it to advance its electoral chances (with the aim of a political appointment if the party/candidate is successful).
  2. Yes, a strong, existing, presence on all social media platforms is key (content is often shared on one platform - say Facebook - and then copy and pasted to Twitter and WhatsApp for example. In northern Nigeria several social commentators who had built up a following over several years online and offline were able to use, sell that captive audience to political bidders. I think here the idea of offline credibility can help with how information you share online is received (and vice-versa). Pastors and other religious/traditional leaders in Nigeria can be super-spreaders, as they are trusted arbiters of information among their congregation so if they take a piece of false information from online (deliberately or not) and share it with followers (offline), people will be much more inclined to believe it because of their standing in society.
  3. The source [the person sharing it] of news, matters more in some cases, than the content itself on platforms like WhatsApp. I think there is some good Afrobarometer data on this for multiple African countries, which puts radio as the most trusted source of mainstream media information (and the most listened to) and this was backed up my some of the small survey work done in Nigeria, where radio remains a source to verify content. 
  4. In Nigeria, there is a lack of trust in the government's ability to provide credible information, and that provides a space for falsehoods to flourish, and to be more easily believed, when they align with existing biases and divisions. The most effective electoral disinformation in Nigeria draws on/out these divisions. There is also a lack of digital/civic literacy, particularly among older users but also more generally, which means people struggle to discern what is true and what is false. This is also limited by people's access to the internet. In Sierra Leone, many young people I spoke with 'managed 25MB a day', which allowed them to use WhatsApp but not to download videos or PDFs or to fact-check away from the application. In many more rural parts of Nigeria this also applies. This also links to Facebook Basics and its potential issues in making its platforms the only source of online 'news' for some users. Others have researched/written about this more extensively - http://democracyinafrica.org/facebook_scramble_africa/
  5. Certainly closed communication networks like WhatsApp can amplify the risks of disinformation. First, by being more private they make it easier for people to hide their identity when creating content (people in Nigeria create WhatsApp groups of 2 or 3 people and share into them first, so that the content leaves the group labelled 'forwarded', making it hard to know the original source). Even if you do know the source, all you know is their phone number; names and a profile are not required as they are on Facebook. Instead, users often rely on who shares the information with them on WhatsApp, as the intimacy of the platform lends itself to this. So they don't always look to fact-check or verify online (though many do), but will judge its veracity based on what they know, who shares the content with them and how many times they receive it.
  6. This is difficult. A story can go viral on social media in 1-2 hours and it's not always clear what that story will be, so early warning is quite difficult to do. Catching up with disinformation once it starts circulating is also difficult, something that limits the impact fact-checking can have (by the time you do a thorough fact-check and present the findings, the disinformation can be everywhere online and already feeding into people's biases). Flagging accounts or groups that are known to have previously spread disinformation is one possibility, but when it comes to politics, for example, this will always lead to accusations of taking sides politically; for things like health disinformation it is perhaps more plausible to identify accounts where disinformation is shared regularly and then communicate this information to citizens.

Overall I think understanding the context in which disinformation is shared is fundamental to providing responses that have shared principles but are tailored to meet the specific reality of a country, or even of states within a country. One thing that our Nigeria research (https://www.researchgate.net/publication/334736880_WHATSAPP_AND_NIGERIA…) showed was that localised pieces of disinformation (those focused on a state-level event, rather than a national happening) were more likely to have been seen/engaged with.

Ruth Canagarajah Moderator

Thanks so much, Jamie, for these really interesting contextual examples from West Africa. Your inputs nicely describe how mis/disinformation converges from multiple starting points. Whereas the social media/online landscape almost simplifies dynamics of how disinformation spreads, it’s important to recognise that what spurs this content forward can oftentimes be from relatively “untrackable” content (i.e. information on the ground from religious leaders, community leaders, radio stations, rural contexts with limited access to online content, et cetera). The starting point for misinformation, then, can often be “Hydra-headed” and occur long before it becomes salient online.

 

With this in mind, have you heard of strategies to 1) identify how information on the ground gains traction through influential sources (I’m thinking of more “preventative” strategies rather than reactive ones here, like using Social Network Analysis approaches to identify the central actor nodes, i.e. trusted arbiters, be they “positive” or “negative” information spreaders in both offline and online spheres, that typically influence communities); and 2) assess how similar actors can prevent the spread of disinformation via early warning systems? A tough question but one worth exploring.
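Purely as an illustration (not something raised by contributors here), a minimal sketch of the kind of Social Network Analysis mentioned above, assuming a hypothetical "who forwarded content to whom" edge list and using simple centrality measures as rough proxies for influence:

```python
# Illustrative sketch only: surfacing potential "trusted arbiter" nodes
# in a sharing network. The edge list below is entirely hypothetical.
import networkx as nx

edges = [
    ("radio_host", "listener_1"), ("radio_host", "listener_2"),
    ("pastor", "congregant_1"), ("pastor", "congregant_2"),
    ("congregant_1", "family_group"), ("listener_2", "family_group"),
]
G = nx.DiGraph(edges)

# High out-degree = reaches many people directly;
# high betweenness = bridges otherwise separate communities.
influence = sorted(G.out_degree(), key=lambda x: x[1], reverse=True)
bridges = nx.betweenness_centrality(G)

print("Most connected spreaders:", influence[:3])
print("Key bridging nodes:", sorted(bridges, key=bridges.get, reverse=True)[:3])
```

In practice the hard part is building the edge list itself, especially for offline sharing, which is exactly the "untrackable" content problem raised above.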

Jamie Hitchen

Ruth Canagarajah Yes, and equally, disinformation can start online and then penetrate into these offline spaces, which then makes it harder to track!

I am sure there must be some social network examples but I am not aware of them in the countries where I have been working. But I agree that this kind of approach could potentially be useful, and it has been a recommendation of some of our reports to identify trusted arbiters of information and to target them with digital/civic literacy training that will allow them to better assess the information they receive before sharing it, with potential multiplier effects (say, if it's a religious leader speaking to his/her congregation).

Katie Burnham

I found your description very interesting and quite similar to what our organization, Farm Radio, has seen too. I particularly found your note about religious leaders as trusted sources of information to be interesting. We also found that religious leaders, in Burkina Faso  for example, were setting a good example for their communities --  we published a story about this and shared it with our radio network in an effort to encourage other religious leaders elsewhere in Africa to do the same. 

As you say, such "trusted" sources of information can be a bit risky if they are able to share inaccurate information in a way that lends it credibility. Often radio broadcasters don't feel comfortable interrupting or contradicting such authorities on air, which can lead to disinformation spreading. We have tried to avert this in two ways: by sharing lists of good interview sources, including public health experts during this COVID-19 crisis, and by creating training materials on how to interview experts, with tips on politely managing such situations. But I am sure there are other ways we can support media to fight the spread of disinformation.

We also manage WhatsApp groups with our partners, and did a lot of fact-checking of fake news that was posted into these groups over the past few months. Additionally, we've shared a training guide for identifying fake news and linked broadcasters to Africa Check, but it's evident why media literacy skills are so important for media members and the general public. 

Ruth Canagarajah Moderator

Katie Burnham - thanks for sharing Farm Radio’s experience with the centrality of religious leaders! I also appreciate the note re: the double-edged sword of “trusted” community members and not being in the position to push back against disinformation, especially on air. I'm very keen to know a bit more on how you’ve trained broadcasters to manage these situations of on-air/in-the-moment identification of disinformation.

 

I’m also very curious, from your inputs, about if/how you’ve seen radio stations play a part in “early warning” monitoring and identification of disinformation. For instance, after disinformation is called out on WhatsApp or identified through Farm Radio’s training guide, what are usually the next steps for broadcasters? Is it that they avoid sharing the information, or do they use their platform to address the source and its issues?

Jamie Hitchen

Just to add on the question of how government institutions can participate in and monitor the information landscape. In politically polarised environments it's hard to establish and sustain credible independent bodies (how will they be appointed, etc.), and that's even before you discuss whether they should exist. You can perhaps look at the model used for journalism as one possible solution, whereby members are represented by a body that sets standards they must adhere to, but this can only apply to bloggers or online influencers, as otherwise you'd have every citizen being part of this association. I think the more viable shorter-term solution is around building codes of conduct for online engagement that are enforced by other users. So with WhatsApp you empower group admins to be better able to control what is discussed and shared in a group (perhaps here the platforms can also do more to support this). Though this might be a lot of work for the group admins!

Generally I think social media platforms need to do a lot more in the African context in terms of getting content moderators who speak local languages. Hausa has c.50 million speakers, so even if only 2% are online that's 1 million people who might use the language to converse (Hausa-language Facebook is very vibrant in Nigeria). But I don't know if Facebook or Twitter have many such moderators, if any. There are also questions about whether the user terms are available in these languages (https://www.vice.com/en/article/xg897a/hate-speech-on-facebook-is-pushi…) and, if they are not, users can argue they didn't sign up to the platform's rules for use (hypothetically).

 

I think platforms, working with media and civil society, can also do more to educate users about features of their applications that might not be well known. For example on WhatsApp how to change your group settings so you aren't automatically added to groups but that you receive an invite and then decide if you want to join. 

 

Louise Shaxson Moderator

Hi Jamie, thanks so much for your detailed contribution. I particularly like your comments about the inability of platforms to do effective content moderation in local languages, and the important point about whether user terms are available in those languages. Something we haven't really talked about in this conversation yet is platforms' business models, which came through in your point that "several social commentators who had built up a following over several years online and offline were able to use, sell that captive audience to political bidders." The advertising industry says that 'money follows eyeballs', which combines with what we well know - that platforms encourage us to share content that generates strong emotions. This isn't just an online thing - tabloid newspapers sell copies with front-page headlines designed to outrage - but as has been noted above, it's the speed and spread on social media that's the problem. But I wonder if anyone has a deeper analysis to share on platforms' business models?

Louise Shaxson Moderator

One of the things I was thinking about was that we need to be involving young people in this conversation (not just in this consultation but more generally) - or at least to be aware of their efforts.  Has anyone else come across this by MSc student Abbie Richards?  I've invited her to take part, it would be great to have her energy and enthusiasm.  Does anyone know of any other similar initiatives happening around the world?    

Ruth Canagarajah Moderator

Louise, that's quite an entertaining initiative! It's useful to have this thinking around how to conceptualize and define the complex boundaries of disinformation. I can imagine it's quite an undertaking given how vast the ecosystem is. I've found a youth-led initiative that specifically focuses on the taxonomy of disinformation during Covid19, led by three Harvard students using the ABC (Actor, Behavior, Content) mapping and analysis approach. Their visualization via axes of targets and motivations is an interesting alternative to Abbie's approach. I'll see if I can reach out as well.

Aaron Sugarman

Hello everyone, I would like to answer questions 1, 2, 3, and 7 on behalf of the Global Disinformation Index (GDI), a global not-for-profit organisation whose mission is to reduce disinformation and its harms. We do this by providing neutral, independent risk assessments of news sites based on their risk of carrying disinformation. Our risk assessments use a combination of human review and artificial intelligence.

One of the motivations for actors to create disinformation is financial: content that triggers strong negative emotions (hatred, greed, envy, etc.) tends to generate the most clicks. We have documented significant advertising from well-known brands, from Amazon to Volvo, which provides a funding stream to sites peddling disinformation. Weekly updates of popular brands advertising beside disinformation can also be found at https://disinformationindex.org/research/

GDI has developed a methodology for assessing the disinformation-risk of news sites with the criteria of a domain’s content, operations, and context. The content score is based on an anonymised review of 10 of the top-shared articles on a domain that have been randomly selected. The review is done by a researcher and the source of the articles is not disclosed to them. The operations score is based upon the underlying policies, standards and rules that domains abide by to establish trust and reliability. The context score assesses the reputational practices, reliability, and trustworthiness of a news domain. These disinformation flags are assessed by an independent expert survey of respondents from across the political spectrum. More information can be found at: https://disinformationindex.org/wp-content/uploads/2019/12/GDI_Index-Methodology_Report_Dec2019.pdf
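As an illustrative aside only (the weights and threshold below are hypothetical placeholders, not GDI's published formula), a minimal sketch of how three pillar scores of the kind described above might be combined into a single risk rating:

```python
# Minimal sketch, assuming each pillar score is normalised to 0-100
# (higher = lower risk) and combined with hypothetical weights.

def disinformation_risk_rating(content: float, operations: float, context: float,
                               weights=(0.4, 0.3, 0.3)) -> float:
    """Return an overall 0-100 rating as a weighted sum of the pillar scores."""
    return sum(w * p for w, p in zip(weights, (content, operations, context)))

rating = disinformation_risk_rating(content=35.0, operations=60.0, context=50.0)
# 0.4*35 + 0.3*60 + 0.3*50 = 47.0
print(f"Overall rating: {rating:.1f}", "(higher risk)" if rating < 50 else "(lower risk)")
```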

These criteria for trusting news sources have already been piloted in South Africa and the UK, with the findings available at: https://disinformationindex.org/research/

GDI’s technology has also uncovered some of the worst offenders in disinformation: 519 English-language sites which publish the highest volumes of divisive, polarising content. Over one-third of those 519 sites carry ads: 189 domains. Among this group GDI has identified a few sites that have “high narrative density”; these sites could be considered “super spreaders” because they carry the highest amount of disinforming content as a proportion of all their output. For more information on this see: https://disinformationindex.org/2020/10/how-can-advertisers-disrupt-disinformation-dont-fund-it/

Solutions:

The solution is to track and block advertising on disinformation sites to remove the financial incentives of spreading disinformation. Brands, ad tech companies and platforms can reduce traffic on coronavirus disinformation sites and help shift the focus to credible journalism by: 

  1. Adopting a common disinformation risk rating for news sites in order to guarantee transparency for consumers and ad placement companies. 
  2. Using real-time lists of high-risk disinformation sites to steer ads from disinformation content and to quality, trusted news sites. For example, GDI produces real-time lists of risk-rated domains for use by brands, ad exchanges, ad networks and brand safety companies.
  3. Aligning brand and platform policies for high-risk content. There is a notable policy gap between content policies and ad policies for platforms and brands. This needs to be closed now — only they can do it.
  4. Ensuring high-risk disinformation sites are not being promoted via algorithms on platforms. GDI’s risk ratings offer the ability for platforms to set ‘tolerance thresholds’ for which domains to promote in feeds and searches.

Ruth Canagarajah Moderator

Hi Aaron Sugarman - thanks for sharing the rich body of work that GDI is doing. Thanks especially for bringing the financial incentives perspective to the table. It’s one important motivator that we’ve yet to fully explore on this forum. The amount which these sites earn (and Google’s centrality in all of this) is interesting: “On just three topics – COVID-19 conspiracies, white supremacy and Anti-Semitism – the GDI estimates that all 189 sites will collectively earn nearly US$350K each month from ads served on them.” Given it’s an approximation for all of the sites you’re analysing, would you say that, especially for sites like Breitbart, financial incentives may stand as secondary or convenient side-effects of their work rather than the primary motivation?

 

I’m intrigued by GDI’s methodology that combines both AI and human review. May I ask what specific machine learning approach you’re using to identify adversarial content (NLP sentiment analysis?) I think Olivia Sohr and Matias Di Santi would perhaps benefit from this insight for their own work with Chequeado. Additionally, how does GDI go about down-ranking disinformation sites? I’m curious to hear more about how GDI disrupts this complex ecosystem, where power is held in the hands of a few but which simultaneously survives because of the mass public.

Aaron Sugarman

Ruth Canagarajah 

Financial incentives are still a primary motivator for many disinformation domains even though our numbers for their ad-profit are approximations. Ad-revenue for these sites is not public, but by using Alexa rankings to estimate the views per month we can get a reliable estimate of how much these platforms are making from ad-revenue (for more information on how GDI calculates this see: https://disinformationindex.org/wp-content/uploads/2019/09/GDI_Ad-tech_Report_Screen_AW16.pdf). GDI’s estimates are also conservative, so the actual money generated by these ads on these sites could be much higher than our numbers suggest.
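For readers unfamiliar with this kind of estimate, a back-of-the-envelope sketch of the arithmetic involved (the traffic figure, ads-per-page and CPM below are placeholders for illustration, not GDI's actual inputs):

```python
# Illustrative ad-revenue estimate from traffic-ranking-derived pageviews.
monthly_pageviews = 2_000_000   # hypothetical estimate from traffic rankings
ads_per_page = 3                # hypothetical average ad slots served per page
cpm_usd = 1.50                  # hypothetical revenue per 1,000 ad impressions

ad_impressions = monthly_pageviews * ads_per_page
estimated_monthly_revenue = ad_impressions / 1000 * cpm_usd
print(f"Estimated ad revenue: ${estimated_monthly_revenue:,.0f} per month")  # -> $9,000
```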

The machine learning approach we use for identifying adversarial content is based on utilizing topic models. These topic models can distinguish between credible news coverage of a topic and highly divisive disinformation on the same topic with over 90% accuracy. Our instruments can help governments, platforms, brands, and health officials see which sorts of narratives are gaining the most attention and hence what countermeasures should be launched. This approach is more nuanced than current blunt keyword-based blocklists and results in fewer false positives.
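By way of illustration only (this is a generic sketch in the spirit of the approach described above, not GDI's actual system), topic-model outputs can be used as features for a classifier that separates credible coverage from divisive framings of the same topic; the tiny training set here is made up:

```python
# Toy sketch: topic mixtures as features for a credible-vs-divisive classifier.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "health officials report new trial results for the vaccine",
    "they are hiding the truth about the vaccine from you",
    "election commission publishes turnout figures by region",
    "the election was stolen by a shadowy global elite",
]
labels = [0, 1, 0, 1]  # 0 = credible framing, 1 = divisive/disinforming framing

model = make_pipeline(
    CountVectorizer(),
    LatentDirichletAllocation(n_components=2, random_state=0),  # doc-topic mixtures
    LogisticRegression(),
)
model.fit(texts, labels)
print(model.predict(["regulator releases safety data for the vaccine"]))
```

A real system would of course need far larger, carefully labelled corpora and evaluation across languages and topics.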

GDI accomplishes its mission of down-ranking disinformation sites through what was outlined in solution 4: ensuring high-risk disinformation sites are not being promoted via algorithms on platforms. For example, our risk ratings give a search engine like Google the ability not to promote known disinformation vendors in its searches, thereby down-ranking the publicity of domains peddling harmful content.


 

Gideon

What are considered trusted news sources among different groups? What are the criteria for trusting these sources?

+ Recent research in Nairobi, Kenya by Busara identified that trust in a particular platform is often driven by the level of trust in the source of information behind that platform. The communities we engaged with, in predominantly low-income, underserved neighbourhoods, perceive information delivered through TV as more trustworthy than other platforms because of the perception that it is well researched, especially COVID-19 information. Additionally, participants who demonstrated low levels of trust in the Kenyan government as an entity preferred not to access information from sources that they believe the government uses to pass information to its citizens, including television.

 

In your country/community, what are the primary sources and motivations driving the creation and sharing of disinformation? Who are the “super spreaders” of disinformation, those who have sufficient influence and following to amplify on and offline?

+ Older populations are less likely to regularly use digital platforms to access news, compared to the youth (<35 years old). Therefore, in the instances where this demographic comes across information online, they are not only more likely to strongly believe that piece of information, but also more likely to disseminate it widely within their networks. For a significant segment of this population, trust in information is driven by their high levels of trust in digital platforms such as Facebook, Twitter, online news sites, but especially WhatsApp.

 

 

What are the factors that make the public vulnerable to disinformation? What are the key normative, technology, governance and social cohesion enablers of disinformation?

+ Any successful effort to identify and flag misinformation depends on audiences’ ability to effectively and critically evaluate the information they come across. It is, however, well established in behavioral science research that this process of critically evaluating information is “cognitively expensive”, particularly on digital information platforms such as social media tools, where users are often bombarded by a barrage of information every time they refresh their home pages. This explains why it is easy for most people to fall for false information: the human mind can only process, hold and critically evaluate so much information in a short period. By understanding how different types of audiences evaluate information, stakeholders keen to address mis- and disinformation can begin to identify effective strategies and interventions to encourage audiences to consume and critically evaluate information better.

 

 

+ Finally, Busara recently conducted a small experimental study to answer the question: 

Are people better able to classify news as true or false if the subject takes place in their country of residence?

Find out more through this Medium post:

https://medium.com/busara-center-blog/off-the-record-15-warning-fake-ne…

Simon Alexis Finley Moderator

Many thanks Gideon. Very interesting, especially the parts where you bring age demographics into the piece. Across many areas of work there is a general assumption that "young people" are the most vulnerable, whether that's to extremist ideologies or disinformation. Interestingly, a US study released by researchers from New York and Princeton Universities in 2019 showed that older adults were seven times more likely to share disinformation than younger ones, regardless of education, sex, race or income. However, given how context is key to understanding these issues, what might this actually mean for understanding our own assumptions about what role demographics play in spreading disinformation? A quick link here for more information on the article: https://www.technologyreview.com/2020/05/26/1002243/misinformation-olde…

Olivia Sohr

One of the problems we have identified from Chequeado, based in Argentina, in the disinformation landscape is that there is very little research on who the actors behind it are, especially outside of the USA and Europe. This is why we are working with LatamChequea, a network of Latin American fact checkers, to do in-depth investigation into who the actors are and what the dynamics are by which disinformation circulates within countries and among different countries. We believe this will help better understand the phenomenon and allow us to better expose it and stop it.

 

Niamh Hanafin Moderator

Olivia Sohr Indeed, this production and creation area is one where we don't have a lot of overview of effective countering strategies. Certainly researching and investigating the sources and their motivations is a critical first step to understanding where and why disinformation emerges into the public sphere. Have you considered what your strategies might look like once you have that data? What kind of interventions are you exploring to deal with the source of the problem?

Matias Di Santi

Monitoring social media platforms to identify disinformation and to know how viral it is before we act on it takes effort and time. At Chequeado we work to make this more efficient, and we, like other fact checkers, could greatly benefit from more tools that are specifically designed for fact checkers (since a lot of what we use is designed for marketing or other uses), and specifically tools that would give us more insight into the specific groups that share a certain piece of disinformation, so we can focus on them when we debunk it and the correct information can reach as many as possible of the people who saw the disinformation in the first place.

Niamh Hanafin Moderator

Matias Di Santi focussing on groups which are particularly prone to consume or disseminate disinformation would be strategic indeed. And of course the challenge of ensuring that the fact-checked version of a piece of content reaches those same groups would seem critical to raising awareness about the risks inherent in the information ecosystem and how to navigate them as users. What you're describing seems to be an expanded role for fact-checking entities. Have you seen this work effectively elsewhere? And what kind of strategic partnerships would make sense to allow fact-checkers to be more targeted, focussed and effective? And, bringing in Olivia Sohr's comment above, I wonder how you would imagine a more holistic approach which identifies and exposes the sources of disinformation and efficiently reaches groups and individuals more vulnerable to disinformation or more active in spreading it further?

Ruth Canagarajah Moderator

Week Three Summary

 

Dear all,

We had a week full of insightful comments and discussions, thanks to your inputs. Themes that emerged this week relate to the multiple “Hydra-headed” avenues through which disinformation is driven forward (i.e. the tension between offline versus online resources, and the difficulty of early warning systems in identifying them before they gain traction); organizational methodologies and research insights from GDI, Busara, Farm Radio, and Chequeado in navigating the social, financial, and psychological drivers of disinformation, as well as some experiences with innovative fact-checking models; and youth-led typologies for defining the complex boundaries and modes of disinformation.

Louise kicked off last week’s content by sharing an article on disinformation + cyber warfare, and posing the need for more imaginative structures (especially those inclusive of issues of identity/community) to monitor platforms. Aligned with this, Aaron shared the work of the Global Disinformation Index (GDI), which combines artificial intelligence and human review to reduce disinformation. One can learn more about their index scoring here and their broader methodology here, but the GDI approach offers insights into both the financial drivers of disinformation and a process that runs from identifying disinformation all the way to down-ranking it (in other words, promoting a closed cycle of activity). Moreover, Aaron’s inputs give specific insight into platform business models and the financial “benefits” that divisive, polarising sources of disinformation accrue.

In the theme of insights into super spreaders and how disinformation gets consumed, Jamie Hitchen gave context to his work in West Africa, specifically the offline, on-the-ground super spreaders of disinformation (pastors, religious leaders, social commentators), and how the lack of digital/civic literacy, especially amongst older users, means a more pronounced struggle to discern what is true and false. Katie Burnham also related Farm Radio’s experiences with “trusted” sources of information that spread disinformation, specifically religious leaders, and how Farm Radio trains its radio broadcasters to identify and manage situations of on-air/live sharing of disinformation. Gideon Too of Busara contributed on the Kenyan dynamics of misinformation, showing that trust in the source of information differs by segment of society (i.e. poorer, under-served neighbourhoods consider messages shared through TV the most trustworthy) and echoing that older generations with low digital/civic literacy are especially likely to strongly believe and widely share disinformation. Gideon’s inputs highlight the need to understand populations’ consumption of and behaviour around disinformation through a segmented lens (rural vs urban; young vs old; educated vs uneducated; perhaps even male vs female), instead of assuming the general population will react to and engage with disinformation in the same way. Olivia and Matias of Chequeado explained how their group of fact checkers and use of specific tools is being used to identify disinformation before it spreads and to increase the research base on who the primary spreaders of disinformation are. In the theme of early warnings, Jamie mentioned how difficult it can be for independent bodies to take on this position, especially in contexts of political polarisation, and the need to highlight more localised processes first (i.e. group admins of WhatsApp groups and social media platforms). Language is an important consideration for effective content moderation.

Finally, Louise introduced an interesting “pyramid” typology of disinformation created by Abbie Richards, and another was found by some Harvard students in mapping disinformation on the axes of targets and motivations—both interesting ways of conceptualising “infodemics”.

 

With the extension of this consultation for one more week, I leave the moderator position in the hands of UNDP’s own Niamh Hanafin. We look forward to reading your remaining contributions!

 

Best,

Ruth

--

Week Two Summary (Room 2) by Moderator Rachel Pollack.
Week One Summary (Room 2) by Moderator Louise Shaxson.

Niamh Hanafin Moderator

Many thanks to Ruth Canagarajah for a great week of moderation and this comprehensive summary. Sets us up wonderfully for this week's discussion - looking forward to it!

Israel Araujo

Q1. In my community, the main motivation that drives the potential creation and sharing of disinformation is the lack of verification by users, mainly in the field of social networks. Without a doubt, social networks have proven to be both a great disseminator of information and of disinformation. The context of the health crisis has favored the spread of false news. Therefore, and as part of a proactive transparency strategy, strategies have been promoted for the generation, verification, publication and dissemination of quality public information, which allows citizens to reduce information asymmetries and optimize decision-making. In this sense, deepening similar actions could help us anticipate and better prepare ourselves for the spread of disinformation, since having timely information, in the current health emergency framework, counteracts the spread of disinformation.

Niamh Hanafin Moderator

Israel Araujo increasing reliable sources of accurate information has certainly been one of the key responses of WHO, national governments and others to the huge volumes of disinformation.  I think there are some factors that need to be taken into consideration and I'd be interested to hear your thoughts:

1. Trust - how to ensure that the public is trusting the right sources of information, especially in this world of "internet bubbles", closed messaging groups, and variable degrees of trust in government institutions.

2. Volume - WHO describes the infodemic as an overwhelming amount of content (both accurate and false) which causes confusion amongst the public as they try to navigate.  Do you feel this problem of too much of everything is contributing? 

3. Quality - sometimes accurate information doesn't compete well with false information, which isn't constrained by the limits of truth and fact, and can often be spun into compelling narratives and stories which are far more engaging and easier to remember.  Have you seen any examples of information campaigns which successfully engage their target audiences in this way?

Louise Shaxson Moderator

Niamh Hanafin there's some interesting research from the Reuters Institute in Oxford on filter bubbles which suggests that they might not be quite as harmful as has been thought. We do naturally filter the information we search for (what they call self-selected personalisation), in both the offline and online worlds. I'm very attached to the particular set of newspapers and magazines I read, for example - and get tied into subscriptions so don't bother to read around very much! But when we're online, the issue of pre-selected personalisation - when an algorithm makes the selection for you - apparently isn't as much of an issue as we might think. See here for an article on it, but also have a listen to the episode "The News on The News" on the podcast Government versus the Robots with Jonathan Tanner. (Disclosure: I work with Jonathan, but I do think the whole podcast is well worth listening to - this season is devoted to disinformation.) Having said that, I think the evidence for this point about filter bubbles is from the UK, and it would be well worth checking whether this also holds true in other countries.

Orna Young

hi everyone

Just a few things on specific questions: 

In your country/community, what are the primary sources and motivations driving the creation and sharing of disinformation? Who are the “super spreaders” of disinformation, those who have sufficient influence and following to amplify on and offline?
While we have not identified local sources of disinformation (i.e. content created with malintent), notable local politicians and celebrities have been content to uncritically repeat debunked misinformation. As evidenced elsewhere globally, it remains a challenge to persuade people to think critically in cases where loyalty to an individual or a group is involved.

 

What roles of civil society organisations, government, media and Internet companies play in terms of monitoring drivers and enablers of disinformation as a contribution towards social preparedness?
We had a positive experience working with a public health oriented CSO. In this partnership, we supplied fact checks and explainer articles that were disseminated to the CSO’s network of users—a segment of the community we may not have otherwise reached. The partnership maintained our editorial independence, while bringing mutual benefits.
 

Miroslava Sawiris

This feedback is submitted by 10 organizations and civil society initiatives (GLOBSEC, nelez.cz, PSSI, CSD, Res Publica, Semantic Visions, Global Focus, Political Capital, Eastern Europe Studies Centre, DISI) from 6 European countries (Bulgaria, Hungary, Czechia, Lithuania, Slovakia and Romania) joined in the Alliance for Healthy Infosphere.

  1. In your country/community, what are the primary sources and motivations driving the creation and sharing of disinformation? Who are the “super spreaders” of disinformation, those who have sufficient influence and following to amplify on and offline?
  • Well-established disinformation outlets & influencers whose content is re-shared on social media
  • Growing number of non-state actors motivated by either economic or ideological reasons use disinformation to achieve their goals
  • Growing number of domestic political representatives, including representatives of far right or far left political parties, have been spreading various disinformation and hoaxes and voicing them during political debates. Thus, fringe disinformation has increasingly become part of the mainstream public debate
  • Disinformation is peddled by websites imitating good quality journalistic outlets, often with unclear ownership and editorial board
  • Disinformation is spread through manipulation of algorithmic processes on digital platforms. This represents a serious problem given that Facebook has become the primary source of information for many young people in the V4 countries, according to the latest NDI research (https://www.ndi.org/sites/default/files/NDI_Youth2020_V4.pdf )
  • Economically motivated actors who profit from disinformation ecosystems generate vast incomes through advertising
  • Medical disinformation channels, once a fringe aspect of the disinformation scene, have been catapulted to the centre of the information space and thus of public debate
  • Anti-mask movements and 5G conspiracy theories proponents cause radicalization and polarization, particularly at the time of the pandemic
  • Report documenting political disinformation: Slovak Parliamentary Election 2020: Liberalism as a Threat, Facebook as a Battlefield report (https://www.globsec.org/publications/slovak-parliamentary-election-2020/ )
     
  2. What are considered trusted news sources among different groups? What are the criteria for trusting these sources?
  • This differs across different groups because social media have completely overhauled the traditional media model as news becomes accessed from digital platforms. Not only are reliable sources of information packaged in the same way as personal content from users and disinformation sources, the income generated from ad revenues no longer goes toward financing quality journalism – an essential pillar for any functional democracy.
  • However, the following criteria would help: Easy access to information about media ownership to help the consumers identify bias of the given media and increase the transparency of the media landscape. Also, the obligation to publish the ownership information would help uncover the owners of the outlets and websites promoting harmful content, including disinformation and conspiracy theories, who often hide their identity. Revelation about media-ownership would help prevent the attempts to monopolise the media landscape.
  • Good ranking would motivate the outlets to promote their evaluation on their websites / platforms. These labels should be enforced at search engines and social media platforms.
     
  3. What are the factors that make the public vulnerable to disinformation? What are the key normative, technology, governance and social cohesion enablers of disinformation?
  • Consumption of information from social media: Exposure to disinformation spread by manipulating algorithmic processes on online platforms
  • Political actors deploying disinformation as part of their communication strategy with their key audiences
  • Lack of digital skills and no notion of digital citizenship
  • Lack of efficient strategic communication at all levels of the state
  • Unregulated digital space
  • Disinformation economy which turns disinformation into a profitable business
  4. To what extent do Internet companies’ algorithms actively push disinformation, and what is the role of closed communications networks in amplifying the problem?
    • Disinformation is spread by manipulating algorithmic processes on online platforms to a great extent, as algorithms are designed to motivate the user to stay on the platform for as long as possible. Polarizing, hateful and manipulative content ensures this better than any other content.
    • Due to lack of transparency, it is difficult to evaluate the extent to which closed groups are responsible for disinformation dissemination. However, the likely probability is that it is another important factor contributing to information chaos.
       
  5. What kind of monitoring can provide effective early warning of risks of potentially harmful disinformation?
    • Strengthened sharing of information and monitoring activity across borders and between authorities
    • Independent oversight bodies for digital platforms to investigate reporting irregularities
    • Enhanced reporting obligations on online advertising during electoral campaigns
    • Increased transparency measures adopted by digital platforms in the context of electoral campaigns and algorithms’ deployment
    • Monitoring of accounts which repeatedly violate community standards
       
  6. What examples have you seen of disinformation mapping feeding effectively into national or local policy decisions, including institutional codes of conduct, and/or programmes?
  • The European Commission has addressed the spread of disinformation through a self-regulatory approach, which resulted in a Code of Practice on Disinformation being subscribed to by major online platforms and trade associations representing the advertising industry.
  • However, the Alliance for Healthy Infosphere’s stance is that even though the Code of Practice on Disinformation (CoP) represents an important step in the right direction, a number of evaluations conclude that it does not go far enough. Without a common European approach to the digital environment, member states are creating their own rules (NetzDG in Germany, for example), which leads to further fragmentation of the Single Market, while the problem with illegal content and disinformation online persists because of a lack of regulatory oversight and transparency.
  • Digital platforms operate internationally, and hence the best approach to forming policy would be based on international alliances as well.
     
  7. What roles do civil society organisations, government, media and Internet companies play in terms of monitoring drivers and enablers of disinformation as a contribution towards social preparedness?
  • Social media platforms should regularly publish comprehensive databases about political advertisements, as well as reported and deleted content, including bots and fake accounts taken down in a particular country to provide governments and civil society with accurate information based on which policy recommendations and measures could be adopted
  • National authorities often lack resources, expertise or access to tools which would enable them to effectively monitor in real time implementation of electoral legislation and take corrective measures. By pooling resources on EU level or internationally, sharing access to information, and adopting obligations for advertising service providers, national authorities would have a much better chance to actually implement these rules in the digital space. 
  • Civil society plays an important role in shaping the agenda in fact-checking, research, trainings, and media literacy programmes, activities which should be further supported. Resources need to be dedicated to encouraging multi-stakeholder discussions to bridge the divide between citizens and elites and address the lack of trust between them.
  • Good quality journalism plays an important role in acting as an antidote to disinformation campaigns. However, journalists face intimidation and sometimes even bodily harm (investigative journalist Jan Kuciak and his fiancée were brutally murdered in Slovakia in 2018). Comments below the public profiles of journalists, or of media that publish the journalists’ articles on online platforms, are full of hate speech directed at the journalists concerned. Whether they are trolls or real people, they often shape the discussion and divert attention from the issue discussed in the article. Hate speech, threats and bullying continue in private messages on online platforms and by email. Physical threats are present in the online space as well as the offline space, as the case of Slovakia suggests.
  • Social media have completely overhauled the traditional media model as news becomes accessed from digital platforms. Not only are reliable sources of information packaged in the same way as personal content from users and disinformation sources, the income generated from ad revenues no longer goes toward financing quality journalism – an essential pillar for any functional democracy. Digital platforms need to be meaningfully regulated and taxed. Income from such taxation should go towards new models of quality journalism funding. Media outlets ranked as good quality journalism by independent ranking institutions should not be packaged on digital platforms in the same way as all other content.

 

Paola Forgione

Dear all,

Thank you for this brilliant initiative and for developing this really engaging platform. I work at the International Committee of the Red Cross in Geneva and I conduct research in the area of violence against healthcare. Please find below my contribution to some questions:

3. What are the factors that make the public vulnerable to disinformation? What are the key normative, technology, governance and social cohesion enablers of disinformation?

Confusing, discordant and even conflicting directives from public authorities generate puzzlement among the public, who may therefore seek clarity through other sources, including unreliable ones. We are currently witnessing this phenomenon with regard to Covid-19 and health disinformation. Social media fulfill the public’s need for answers very well, as they speak the language of “emotions” and “gut feelings”, rather than of scientific evidence. This language appeals to the average social media user much more than scientific content featuring medical information that s/he could hardly grasp.

6. What kind of monitoring can provide effective early warning of risks of potentially harmful disinformation?

An “early-warning process” should be put in place. In particular, monitoring comments and posts on social media before disinformation spreads could help identify, understand and respond to people’s concerns and questions before a rumor develops to answer them. Social media offer a great opportunity to “take the temperature” among people on social, economic and political issues, but it is crucial to do it at an early stage, preventing the rumor from developing or limiting its reach.

 

Dhruv Ghulati

Hi there, I am the founder of Factmata, a startup developing tools to detect misinformation using the linguistic markers that define it. We think about this in 3 ways.

  1. Algorithms to score how untrustworthy a piece of content is based on known signals that are highly correlated e.g. detecting clickbait language, hyperpartisan language, hateful content, one-sided content, agenda-driven content as well.
  2. Algorithms to score how trustworthy a piece of content is via signals that denote credibility e.g. balanced arguments, reasoned tone, claims which do not contain farfetched connections between people, places and things (different to what is out there on the internet).

These two systems can then be combined to produce credibility scores for content, that then can aggregate to the website level or journalist level if needed.

Note, these algorithms may be imperfect but do not use anything apart from the content of what is being said to judge credibility (rather than judging the author or site, which we believe introduces a lack of fairness).

The way we want to implement this "credibility score for any piece of content" is the following:

  1. Change how newsfeeds are ranked. Embed this signal into any newsfeeds which rank information for users, and deprioritise the focus on clicks/shares/likes
  2. Remove harmful content via more effective content moderation of the grey area.

By using this supply ranking system you can improve how advertising networks are kept clean (preventing monetisation of misinformation by bad actors), improve social networks, publishing networks, news aggregators and much more.
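A minimal sketch of the re-ranking idea described above (my own illustration under stated assumptions, not Factmata's implementation; the blending weight and the engagement/credibility scores are hypothetical):

```python
# Illustrative feed re-ranking: blend an engagement score with a content
# credibility score so low-credibility items are deprioritised.

def rank_feed(items, credibility_weight=0.7):
    """items: list of dicts with 'engagement' and 'credibility' scores in [0, 1]."""
    def score(item):
        return (credibility_weight * item["credibility"]
                + (1 - credibility_weight) * item["engagement"])
    return sorted(items, key=score, reverse=True)

feed = [
    {"id": "viral_rumour", "engagement": 0.95, "credibility": 0.10},
    {"id": "wire_report", "engagement": 0.40, "credibility": 0.90},
    {"id": "local_news", "engagement": 0.55, "credibility": 0.75},
]
for item in rank_feed(feed):
    print(item["id"])  # wire_report, local_news, viral_rumour
```

The same blended score could in principle feed ad-placement decisions as well as newsfeed ordering, which is the "supply ranking" point made above.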

The system needs to be built not by one company like us but many different players all providing signals (no one entity will have the capital to build such a sophisticated system well). These signals need to have open training and test data, be fair, and have clear visibility into who judged the credibility and their backgrounds and potential expertise in the subject they are labelling.