A deluge of misinformation online about back-to-back hurricanes in the US has been fuelled by a social media universe that rewards engagement over truth.
The scale and speed of false rumours about Hurricane Helene and Hurricane Milton have set them apart from many of the online frenzies I have investigated before.
The posts that have gone viral range from seemingly innocent questions about forecasts and rescue operations to misleading claims, repeated time and again by Donald Trump, that money intended for hurricane relief is instead being used to pay migrants in the US illegally.
Some circulated fake images of the wreckage, including computer-generated (CGI) videos, old footage of different storms and pictures, created with artificial intelligence (AI), of children fleeing the destruction. And then there were those who shared false, evidence-free conspiracy theories that the government was manipulating, or “geo-engineering”, the weather.
Congresswoman Marjorie Taylor Greene wrote on X last week: “Yes, they can control the weather.”
Much of the misinformation that has gone viral has come from blue-tick accounts that routinely push conspiracy theories. Several of the accounts spreading falsehoods about Hurricane Milton this week had previously shared posts suggesting that real-world events, including elections, political violence, pandemics and wars, were staged or manipulated.
I messaged several of the accounts that shared false and misleading content about the hurricanes on X. Their posts appear to have gone viral partly because of changes made to the platform since Elon Musk took over as its owner. Blue ticks, once granted only to accounts that had been verified and vetted, can now be bought by any user, and the algorithm then boosts the prominence of their posts. They can also earn money from those posts, regardless of whether they are true.
X’s revenue-sharing policy allows blue-tick users to receive a share of the money generated by the advertisements in their replies. On 9 October, the site announced that “payouts are increasing” and that accounts would now be paid based on engagement from other users who pay for Premium, rather than on the ads in their replies.
This has encouraged some users to share whatever goes viral, even if it is false. Several of the people I messaged told me they benefited from the engagement their posts received and from sharing content they knew people wanted to see.
It is true that most social media companies allow users to make money from views. But Facebook, Instagram, TikTok and YouTube say in their policies that they label posts containing misleading content, and that they can demonetise or suspend accounts sharing false material. X does not have the same rules on false information. Although it has policies against faked AI content and “Community Notes” that add context to posts, it has removed a feature that previously allowed users to report misleading information.
X did not respond to the BBC’s request for comment.
Misleading posts that go viral on X also find their way into the comment sections of videos on other sites, an example of how an idea shared on one platform can spread across the wider social media ecosystem.
“Wild Mother”, a social media influencer who regularly shares unproven theories across different sites, said that four years ago her comments were filled with “people calling me names, denying it”.
“And now, I was surprised to see that nearly every comment is in agreement,” she said, referring to a recent post about geo-engineering conspiracy theories and the hurricanes.
Misinformation like this can erode public trust in the authorities, in this case during the difficult rescue and recovery efforts that followed Hurricane Milton.
False information has always circulated during natural disasters, but this storm has been different. The falsehoods are reaching a far wider audience: according to the Institute for Strategic Dialogue think tank, fewer than three dozen false or abusive posts were viewed 160 million times on X.
And with the US presidential election approaching, they have taken on a sharper political edge.