People searching on Facebook for footage of Saturday’s racist shooting rampage in Buffalo, NY, may have come across posts with footage of the attack or links to websites promising the gunman’s full video. Interspersed between those posts, they may have also seen a variety of ads.
The social network has sometimes served ads next to posts offering clips of the video, which a gunman live streamed on the video platform Twitch as he killed 10 people. For the past six days, recordings of that livestream have circulated across the internet, including on Facebook, Twitter, and fringe and extremist message boards and sites, despite some companies’ efforts to remove the content.
The pace at which an 18-year-old gunman’s ephemeral livestream morphed into a rapidly proliferating, permanent recording shows the challenges large tech platforms face in policing their sites for violent content.
Facebook and its parent company, Meta, rely on a combination of artificial intelligence, user reports and human moderators to track and remove shooting videos like the Buffalo one. But in some search results, Facebook is surfacing the violent video or links to websites hosting the clip next to ads.
It is not clear how many times ads have appeared next to posts with the videos. In tests run by The New York Times and the Tech Transparency Project, an industry watchdog group, searches for terms associated with footage of the shooting were accompanied by ads for a horror film, clothing companies and video streaming services. In some cases, Facebook recommended certain search terms about the Buffalo gunman’s video, noting that they were “popular now” on the platform.
In one search, the platform surfaced an ad for a video game company two posts below a clip of the shooting uploaded to Facebook that was described as “very graphic….Buffalo Shooter.” The Times is not disclosing the exact terms or phrases used to search on Facebook.
Augustine Fou, a cybersecurity and ad fraud researcher, said that large tech platforms have the ability to demonetize searches around tragic events. “It’s that easy technically,” he said. “If you choose to do it, one person could easily demonetize these terms.”
“Our aim is to protect people using our services from seeing this horrific content even as bad actors are dead-set on calling attention to it,” Andy Stone, a Meta spokesman, said in a statement. He did not address the Facebook ads.
Facebook also has the ability to monitor searches on its platform. Searches for terms like “ISIS” and “massacre” lead to graphic content warnings that users must click through before viewing the results.
While searches for similar terms about the Buffalo video on Google did not result in any ads, Mr. Fou said there was an inherent difference between the search platform and Facebook. On Google, advertisers can pick which keywords they want to show their ads against, he said. Facebook, on the other hand, places ads in a user’s news feed or search results that it believes are relevant to that user, based on the user’s Facebook interests and web activity.
Michael Aciman, a Google spokesman, said that the company had designated the Buffalo shooting as a “sensitive event,” which means that ads cannot be served against searches related to it. “We don’t allow ads to run against related keywords,” he said.
Facebook has come under fire in the past for ads appearing next to right-wing extremist content. After the Jan. 6, 2021, riot at the US Capitol, BuzzFeed News found that the platform was surfacing ads for military gear and gun accessories next to posts about the insurrection.
Following that report, the company temporarily halted ads for gun accessories and military gear through the presidential inauguration that month.