Meta ditches fact-checking on its platforms. Marketers weigh in.
When social media platforms change their content moderation policies, as Meta did this week, there’s often a downstream impact on marketers. After founder and CEO Mark Zuckerberg announced that the company was ditching fact-checking in favor of so-called Community Notes, it was inevitable that ad tech marketers would have something to say about the shift.
“We’re going to get back to our roots and focus on reducing mistakes, simplifying our policies and restoring free expression on our platforms,” Zuckerberg said in a video that dropped on Tuesday. That meant that instead of fact-checkers, users would get to flag posts they perceive as inaccurate (or worse).
One blogger on Facebook, Rachel Hurley, quipped that dropping the fact-checkers was akin to “replacing firefighters with a suggestion box for what to do when things catch fire.”
For the marketing community it was a case of déjà vu: Meta’s move mirrors the removal of fact-checkers at Elon Musk’s X, a platform that has seen an exodus of advertisers concerned about brand safety over the past year.
Would something like that happen on Facebook and Instagram, which are of course major destinations for many brands? The Current asked marketers and analysts across the ad tech community for their takes. Here’s what they shared with us and on social media.
Kelsey Chickering, principal analyst, Forrester
“It’s tempting to compare Meta’s pivot on content moderation to X’s ‘free speech,’ anti-moderation transformation story, along with its subsequent advertiser exodus. But Meta isn’t X. Advertisers are too entrenched in its highly efficient paid media ecosystem — a one-stop shop for target audiences. Meta’s apps are — and will remain — a core part of most companies’ media mixes. And Meta’s position is only strengthened by the uncertainty around TikTok’s future.”
Josh von Scheiner, CEO and founder, A Different Story
“Will Meta become ‘not safe’ for brands? Unlikely. With other content-moderation tools Meta has at its disposal to address bullying, violent imagery, bad actors, etc., Meta will most likely not become an anything-goes Wild West. The real risk is that without fact-checkers muting content, users may be turned off by an increase in disinformation and as a result, turn their attention elsewhere.”
Chris Finnegan, SVP of media, Cornett
“While it doesn’t necessarily surprise me, this is a major step backwards for Meta when it comes to brand safety. Hopefully, it doesn’t turn into the cesspool that X has become from a content and algorithm standpoint.
“Between this announcement and the recent one around Meta introducing AI avatars as ‘users,’ this platform is creating a major challenge for advertisers when it comes to establishing brand safety and ensuring that impressions are being shown to actual consumers and not some bot or AI creation,” Finnegan wrote on LinkedIn.
Craig Elimeliah, chief creative officer, Code and Theory
“Wow… The illusion of neutrality is over. Brands must now navigate an openly political and culturally fractured media ecosystem. I’ve said this before: brand safety must be redefined as authenticity, transparency and resonance with human experiences.
“Brands will no longer survive by avoiding risk. They must embrace it by aligning with values, owning their narratives and being willing to stand in the tension of polarizing spaces,” Elimeliah wrote on LinkedIn.
Jennifer K., vice president and general manager of trust and safety, Upwork
“Mark Zuckerberg’s recent announcement about changes to Meta’s content moderation approach is raising eyebrows — and for good reason. These decisions appear to explicitly pander to political pressures, particularly from right-wing voices in the U.S., echoing a troubling trend we’ve seen elsewhere, most notably on Twitter/X.
“Content moderation isn’t just a technical or operational challenge — it’s a moral and social imperative. Platforms like Meta hold immense power in shaping the digital public square. When they dilute their commitment to safety under the guise of ‘free expression,’ they risk eroding trust, enabling harm and silencing marginalized communities,” Jennifer K. wrote on LinkedIn.
Jon Bond, founder, Bond World
“It’s not like anyone thought Meta content was really fact-checked to begin with!”
Linda Yaccarino, CEO, X
“Fact-checking and moderation doesn’t belong in the hands of a few select gatekeepers who can easily inject their bias into decisions. It’s a democratic process that belongs in the hands of many. And as we’ve seen on X since Community Notes debuted, it’s profoundly successful while keeping freedom of speech sacred. It’s a smart move by Zuck and something I expect other platforms will follow now that X has shown how powerful it is. Bravo!” Yaccarino posted on X.
Vanessa Otero, CEO and founder, Ad Fontes Media
“Mark Zuckerberg asserts that Meta shouldn’t be the ‘arbiter of truth’ as if they are being humble, but that’s disingenuous because they have already made themselves the arbiter of what is most distributed. It’s not humble to make that choice for the public at all — it’s the wielding of enormous power. The choice of what gets most distributed is based on what is most engaging, and what’s most engaging is often false or misleading.
“This has the potential to impact advertisers who may see Meta platforms as a risky bet, especially if they actually do devolve into places where misinformation and/or content that expresses hate toward minority groups becomes noticeably more rampant like X. The combo of ‘bringing back civic content’ PLUS reducing the filters on what counts as policy violations (i.e., disparaging comments toward women, LGBTQ people, immigrants) PLUS getting rid of fact-checking, all at the same time, makes this a real possibility.”