Truth be told: how social listening helps brands tackle misinformation

4th November 2025

The escalating threat: from fringe issue to business-critical risk

The digital landscape has undergone a fundamental transformation, where the flow of information is no longer confined to traditional media channels. Disinformation, fake news, and misinformation have evolved from a fringe nuisance into a mainstream business risk. As outlined by Harvard Business School research, this directly impacts brand reputation, consumer trust, and long-term business viability. The speed and scale at which false narratives can spread on social platforms, blogs, and forums create an environment where a single unverified claim can ignite a firestorm, affecting public perception, stock valuations, and brand loyalty.

To effectively combat this, it is crucial to understand that misinformation is often a symptom of a deeper cultural phenomenon. The problem is not merely the misinformation itself, but a growing audience distrust of traditional institutions. This includes governments, established media, and corporations. Misinformation does not create this vacuum of distrust; it exploits it.

A proactive defense strategy must therefore move beyond simple fact-checking and instead focus on identifying these underlying cultural anxieties. Social listening provides this critical diagnostic capability. By analyzing the nuanced, emotionally loaded conversations that precede a crisis, a brand can pinpoint the root causes of audience frustration and address them with transparent, empathetic communication before a false narrative has the chance to take hold.

The business case for vigilance: brand at stake

In today's volatile digital environment, the risk of misinformation is not merely a public relations challenge; it is a business-critical issue that can directly impact a company's financial health and market position. To justify investment in this area, strategists must move beyond abstract discussions of "fake news" and present a data-driven case that quantifies the threat.

Pulsar’s Brand Misinformation Risk Index (BMRI), a collaboration with NewsGuard, provided a powerful framework for this. Built on Pulsar’s NewsGuard integration, which shows which news sites are most at risk of spreading misinformation, the BMRI quantifies a brand's exposure to misinformation by tracking the frequency and reach of its mentions on unreliable news and information websites. The score is calculated by combining Pulsar's Visibility scores, which measure a piece of content's audience size and engagement, with NewsGuard's Reliability Ratings, which assess the trustworthiness of a source against NewsGuard’s nine journalistic standards. The resulting score, on a scale of 0 to 10, provides a clear, comparable metric for understanding which brands are at the highest risk.
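To make the shape of such a metric concrete, here is an illustrative sketch (not Pulsar's actual formula, whose weightings are proprietary) of how a 0-10 risk score can combine per-mention visibility with a 0-100 source reliability rating, where the score rises as more of a brand's visibility-weighted exposure comes from unreliable sources:

```python
from dataclasses import dataclass

@dataclass
class Mention:
    visibility: float   # audience size / engagement score, assumed 0-100 scale
    reliability: float  # source trust rating, 0-100 (NewsGuard treats 60+ as generally trustworthy)

def brand_misinformation_risk(mentions: list[Mention]) -> float:
    """Illustrative 0-10 risk score: the visibility-weighted share of a
    brand's exposure that comes from unreliable sources."""
    if not mentions:
        return 0.0
    total_exposure = sum(m.visibility for m in mentions)
    # Weight each mention by how unreliable its source is (0 = fully reliable).
    risky_exposure = sum(m.visibility * (1 - m.reliability / 100) for m in mentions)
    return round(10 * risky_exposure / total_exposure, 1)

mentions = [
    Mention(visibility=80, reliability=95),  # high-reach story on a trustworthy outlet
    Mention(visibility=40, reliability=20),  # viral post on an unreliable site
]
print(brand_misinformation_risk(mentions))  # → 3.0
```

The key design property is that a single low-reliability mention barely moves the score unless it also has high visibility, which mirrors the BMRI's emphasis on both frequency and reach.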

[Image: preview of the Brand Misinformation Risk Index]

The data revealed that exposure to misinformation is a tangible risk for some of the world’s most valuable brands. The BMRI table provided a snapshot of this risk, making a compelling case for a proactive, data-driven defense strategy. Social monitoring is an essential tool for reputation management, acting as a brand's always-open eyes and ears, alert and responsive to customer queries, complaints, and feedback. It is a form of digital housekeeping that addresses the immediate symptoms of online conversation, preventing them from spilling over into revenue-affecting narratives.

 

Mapping the misinformation landscape: use cases, insights, and benefits

Uncovering the anxiety - understanding the cultural & behavioral underpinnings of misinformation

Effective misinformation defense begins with a deep understanding of the narratives that resonate with audiences. A study conducted by Pulsar alongside UN Climate Change provides a model for this approach. We analyzed public discourse around the renewable energy transition and found conversations that were emotionally charged and contradictory, filled with suspicion about who profits from the transition and frustration over who benefits.

The analysis of public discourse around renewable energy revealed that conversations in this arena harbour a complex interplay of government and energy provider distrust, audience emotion, and climate anxiety. The study identified three key narratives that dominate the discussion: affordability and bills, jobs and economic security, and reliability.

[Chart: climate misinformation morphs into celestial conspiracy]

Our research found that misinformation often latches onto and amplifies these pre-existing anxieties. For example, while concerns about the cost of living and rising energy bills are valid and widespread, these grievances were often framed as a "political grift" on social platforms. This politicization of a legitimate economic concern then provided a fertile ground for the spread of outright falsehoods, such as the "Magnetic Field Theorizing" narrative, which claimed that a blackout in the Iberian Peninsula was "proof" of celestial conspiracy theories.

This process illustrates a critical pattern: misinformation is not a standalone event but often simply the latest stage of a narrative's escalation. A legitimate public concern (e.g., rising bills) evolves into a political grievance, which in turn becomes a platform for pure disinformation. By using social listening to track this entire chain, from the initial, valid grievance to the final, fabricated claim, a brand can understand the root emotional triggers. This allows for a proactive and strategic intervention with clear, transparent communication to fill the knowledge void before misinformation takes hold.

 

What’s the source? Proactively quantifying and mitigating brand reputation risk areas

Pulsar’s NewsGuard integration goes beyond a theoretical understanding of risk and provides a quantifiable metric for reputation management. By combining Pulsar’s Visibility scores with NewsGuard’s Reliability Ratings, the BMRI allows brands to measure their exposure to untrustworthy sources. The analysis of this data reveals that risk is often a function of proximity and association, not just direct claims.

For example, Tesla has frequently ranked highly in our investigations into brands with high exposure to misinformation - in part due to narratives surrounding its ESG rating, electric vehicles and the Inflation Reduction Act. These discussions, often found on untrustworthy sites, have created an environment ripe for misinformation. Similarly, American Express saw a significant surge in its risk score after adopting a new merchant category to track gun sales. While the initial criticism was political, it quickly escalated into exaggerated claims on unreliable websites, framing the decision as an infringement on civil liberties. These cases demonstrate that a brand can be exposed to risk simply by being mentioned alongside or in proximity to polarizing topics. Coca-Cola and Disney, for instance, have seen spikes in discussions related to "anti-ESG" or "wokeism" narratives, even when no direct falsehood is being circulated about them.

Understanding which arenas are at risk of fake news and misinformation exposure means that a comprehensive social listening strategy must extend beyond brand-specific keywords to track the broader narrative ecosystems that could co-opt or be associated with a brand's identity.

 

Unveiling new risks: AI and misinformation of the future

The rapid advancement of artificial intelligence has created a new frontier for misinformation - both AI-generated disinformation and misinformation spread about AI and its uses. In a collaborative piece of research between Pulsar and The Office of Superintendent of Public Instruction (OSPI), the body in charge of K-12 education for Washington state, we looked into narratives surrounding AI use in education. Here, we see that AI conversations are an area in which misinformation is prevalent.

[Chart: sources in the AI x education conversation, categorised by credibility and misinformation risk]

False claims about UK schools installing AI cameras in bathrooms and reports of "Chinese AI bases in schools" have been identified on non-credible sources, which account for over 5% of the total online conversation about AI in education. Though 5% may seem low, it is a much higher share of unreliable coverage than we would expect to see in such a conversation - even at levels as low as 1%, a drop of misinformation in the pool of news coverage can have dramatic effects on online narratives.

Other falsehoods, such as claims that Google’s AI Gemini is "pro-liberal" and a threat, illustrate how the playbook of political bias and foreign influence is being directly applied to new technologies. This dynamic illustrates that the misinformation tactics and narratives are not new; they are re-skinned versions of older, pre-existing cultural anxieties. Social listening allows brands and organizations to identify these patterns and apply lessons from past crises to the emerging AI landscape. The speed of AI development creates a vast and fast-moving information vacuum, and the delay between a technological breakthrough and official communication can be measured in hours, not weeks. This makes real-time social listening a critical component of a proactive strategy to prevent and combat AI-related misinformation.

 

Case study: mapping misinformation at King Charles III’s Coronation

When the UK Department for Culture, Media & Sport (DCMS) monitored conversation around King Charles III’s coronation, the challenge wasn’t just scale - over 2.3 million mentions - but the risk of misinformation. Using Pulsar TRAC and its integrations with NewsGuard credibility ratings, the team could cut through the noise and identify how false narratives were surfacing and spreading.

"We wanted to understand the spread of misinformation, both across socials and on news platforms, and Pulsar was really helpful in enabling us to get a fuller picture of this." - DCMS case study

The findings were revealing: misinformation made up only a small share of the total conversation, yet content from unreliable outlets carried disproportionate visibility. By analysing who was amplifying these stories, DCMS could distinguish satire and parody from more dangerous narratives, while also noting the outsized role of American outlets and communities in shaping UK perceptions.

The lesson was clear. Misinformation doesn’t need to dominate in volume to shape opinion; its impact comes from visibility and amplification. By tracing how these narratives travel across borders and communities, DCMS gained the foresight to adapt communications in real time and maintain public trust during a moment of national significance.

 

Misinformation infiltrating unexpected arenas

Misinformation is not limited to high-tech or politically charged sectors; it can infiltrate even the most traditional industries. Our study on the global conversation around olive oil demonstrates this. The analysis found that the conversation had shifted dramatically, from focusing on the product's health benefits and recipes to being dominated by economic factors like inflation, climate change, and drought. This has led to conversations full of anxiety and worry - misinformation can easily spread into this once-neutral conversation topic.

This shift in consumer behavior, driven by economic anxiety, has opened the door for conflicting information and misinformation to enter the conversation. The olive oil example proves that when a fundamental economic concern is not addressed by an industry or brand, audiences will seek alternative explanations - fertile ground for the creation of misinformation.

Posts of this kind attempt to disseminate misinformation about olive oil online. This demonstrates that a brand's health is inextricably linked to socioeconomic factors, and a robust social listening strategy must be attuned to these external pressures to anticipate and neutralize narratives that may lead to the spread of misinformation.

 

The Pulsar x NewsGuard integration: How does it work?

We’ve made many references so far to the Pulsar x NewsGuard integration, but how does it actually work? A social listening strategy for misinformation requires more than just tracking keywords and mentions. It demands an understanding of source credibility, a crucial layer of context that the Pulsar NewsGuard integration provides. We bring in the data, which is enriched and analyzed in multiple ways - including the application of NewsGuard ratings. This partnership allows Pulsar users to layer trusted journalistic criteria onto their data trackers, enabling them to analyze conversations with a vital dimension of credibility.

This powerful integration provides significant benefits for brand and communications professionals:

  • Assessing Misinformation Risk: Users can assess the misinformation risk linked to any online conversation before it escalates
  • Identifying Misinformation Sources: It helps pinpoint which news sites and blogs are originating and spreading false information
  • Analyzing Engagement: Users can see who is engaging with these narratives and how they are being disseminated to new communities

The ability to differentiate between a negative comment on a credible news site and a viral but fringe conspiracy theory on an unreliable blog is paramount. The NewsGuard integration, in tandem with Pulsar's Visibility score, provides the crucial context of source credibility. This allows for a more efficient and targeted response, ensuring that resources are allocated to address the most dangerous and visible threats, thereby optimizing a brand's defense strategy and its return on investment.
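This triage logic can be sketched as a simple rule combining the two signals; the thresholds and labels below are illustrative assumptions, not Pulsar's actual scoring:

```python
def triage(visibility: float, reliability: float) -> str:
    """Illustrative response triage for a single mention.
    visibility: 0-100 reach/engagement score (assumed scale);
    reliability: 0-100 source rating (NewsGuard treats 60+ as generally trustworthy).
    Thresholds are assumptions for demonstration only."""
    credible = reliability >= 60
    viral = visibility >= 50
    if not credible and viral:
        return "urgent: high-reach misinformation risk"
    if credible and viral:
        return "respond: visible criticism on a trusted outlet"
    if not credible:
        return "monitor: fringe source, low reach"
    return "routine: low-reach mention on a credible source"

# A viral conspiracy post on an unreliable blog outranks routine criticism.
print(triage(visibility=70, reliability=15))  # → urgent: high-reach misinformation risk
print(triage(visibility=70, reliability=90))  # → respond: visible criticism on a trusted outlet
```

The point of separating the two axes is exactly the one made above: a fringe falsehood only becomes urgent when amplification gives it reach.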

 

The framework for misinformation resilience: a practical guide to the Listen–Map–Activate model

Step 1: Listen – setup & discovery

The first and most critical step in a social listening strategy is to move from a simple search to a strategic diagnostic. Before collecting any data, a brand must first define its objectives, such as boosting brand awareness or improving customer loyalty, and then identify the key questions its analysis will seek to answer. The UN Climate Change study provides an excellent example of this, where the researchers began by defining the "lenses" through which they would analyze the conversation: affordability, jobs, and reliability. A brand facing a misinformation crisis must first ask, "What are the core audience anxieties that this false narrative is leveraging?"

With the right questions in place, the listening setup can begin. This involves using tools like Pulsar TRAC to go beyond basic brand mentions and build comprehensive queries that include competitor, industry, and misinformation-specific terms (e.g., "green scam," "AI hoax"). It is also essential to segment the audience, identifying distinct communities and the platforms they frequent to understand the different narratives that may be circulating.
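As a sketch of what such a query setup can look like, the snippet below assembles brand, industry, and misinformation-specific terms into a boolean search string. The brand name, industry terms, and exact syntax are illustrative assumptions; real query languages vary by tool:

```python
# Illustrative keyword groups for a misinformation-aware listening query.
# "Acme Energy" is a hypothetical brand; "green scam" and "AI hoax" are
# example misinformation-specific terms.
brand_terms = ['"Acme Energy"', "@AcmeEnergy"]
industry_terms = ['"energy transition"', '"renewable energy"', '"energy bills"']
misinfo_terms = ['"green scam"', '"AI hoax"', '"energy hoax"']

def or_group(terms: list[str]) -> str:
    """Join a list of terms into a parenthesised OR clause."""
    return "(" + " OR ".join(terms) + ")"

# Brand or industry mentions that co-occur with misinformation-flavoured language.
query = f"{or_group(brand_terms + industry_terms)} AND {or_group(misinfo_terms)}"
print(query)
```

Separate queries per audience segment (one per community or platform, as the paragraph above suggests) then let the same term groups be recombined without rewriting each search by hand.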

 

Step 2: Map – analyze & understand

The true value of a social listening tool is not in the data points it collects, but in the narratives it reveals. Misinformation is, by its very nature, a powerful narrative, and the ability to map these narratives is what separates a generic tool from a strategic asset. This step involves moving beyond simple keyword clouds to visualize the complex relationships and sub-narratives within a conversation using tools like Pulsar Narratives.

A critical component of this phase is applying the Pulsar NewsGuard integration to analyze conversations based on the trustworthiness of the source. This allows a brand to identify the sources and figures originating and amplifying false content, providing a vital layer of context. This process of dissection and interpretation, which includes sentiment and trend analysis, is designed to understand the story the audience is telling themselves, even if that story is false. The goal is to comprehend the landscape of a conversation fully before attempting to intervene.

 

Step 3: Activate – action & strategy

The final and most crucial step is activation. This involves applying the insights gained from social listening to inform marketing strategies and decision-making. A key takeaway from the UN Climate Change study is that direct rebuttal of a conspiracy theory is often ineffective. The most impactful strategy can be to address the underlying emotional need or question that led to the belief in the first place.

By using social listening data, a brand can develop and deploy timely, transparent communication to fill knowledge vacuums, as was demonstrated in the Iberian blackout example above. Activation also involves tailoring messaging to specific audience segments and their underlying anxieties. For example, a brand could directly address cost concerns with solar panels in one community while focusing on job creation in another. Social listening data also serves as a critical early warning system, allowing a company to prepare for and address potential crises before they escalate, safeguarding its reputation and its relationship with its audience.

 

Key takeaways

In the face of an ever-present and growing threat of disinformation, brands must fundamentally shift their approach from reactive to resilient. A passive, reactive stance leaves a brand vulnerable to narratives that can quickly spiral out of control. By embracing a proactive social listening strategy, a brand transforms from a passive reactor to an active shaper of its own narrative. This approach allows a company to build lasting brand trust, strengthen its reputation, and foster a deeper sense of loyalty with its audience.

In an increasingly fragmented and polarized digital landscape, the ability to understand and navigate misinformation is not just a defensive tactic but a key competitive advantage. Brands that invest in sophisticated tools like Pulsar TRAC and Pulsar Narratives will not only protect their reputation but also gain a deeper, more nuanced understanding of their audience's behaviors and the cultural undercurrents that shape them. These insights lead to better-informed marketing, more effective crisis communication, and a more robust overall brand strategy that is attuned to the realities of the modern digital world.

 

Listen now: The Audience of Misinformation

Want more insight into misinformation and social listening? The complexity and nuance of this landscape were explored in The Audiences Podcast episode 'The Audience of Misinformation'. Hosted by Pulsar CEO Francesco D’Orazio, the episode features a discussion with Sarah Brandt, Executive Vice President of Partnerships at NewsGuard, about how misinformation has ramped up in recent years, driven by growing audience distrust and exploited by new technologies like generative AI.

Sarah tells us that:

"One thing we're really interested in is the motivation behind misinformation. And one of the biggest motivators is money. And what I mean by that is ad revenue. So a lot of websites that spread misinformation and disinformation, they run programmatic ads. If they have really outrageous, engaging content that gets shared on social media, they get more visits to their site, they get more ad revenue. So it's a pretty lucrative business for a lot of them. And we quantified the market size, essentially, of disinformation that's supported by advertising, with a company called Comscore, and we were able to estimate that the global disinformation machine generates $2.6 billion in ad revenue every single year." - Sarah Brandt, NewsGuard

A core insight from the discussion is that misinformation is a cultural phenomenon, not just a simple falsehood; it preys on pre-existing emotions and anxieties, with the most powerful driver being outrage, whether audiences are sharing content they believe in or are outraged by it. This understanding of the emotional context, rather than just the content itself, is what allows for a more proactive and effective defense against false narratives. The podcast also illustrates how misinformation is more akin to a festering, long-term cultural trend than a flash-in-the-pan news event, and that even the most seemingly innocuous brands can be unexpectedly swept up in false narratives.

2026 Outlook: Building resilience in the misinformation era

Heading into 2026, misinformation is set to become an ongoing challenge across the UK and global media landscape. AI-generated content will continue to blur the line between authentic and synthetic information, while new UK and EU regulations such as the Digital Services Act, the Online Safety Act and transparency rules on political advertising aim to rebuild public trust. For brands and communications teams, success will depend on combining real-time social intelligence with evidence-based storytelling to identify risks early and respond clearly. Organisations that embed misinformation resilience into their communications and reputation strategy will strengthen consumer trust, protect brand value, and stay ahead in a fast-changing digital environment.


Ready to transform your misinformation strategy with deep audience insight?

Explore the power of Pulsar TRAC, Pulsar CORE and Pulsar Narratives to start uncovering the insights that matter most, and follow the Pulsar newsletter for more expert guides and industry insights.

To learn more, sign up to our newsletter below: