The terrorist attack on two mosques in New Zealand on March 15 played out across social media, reaching users around the globe and intensifying the debate over First Amendment freedom of speech and its regulation.
The massacre was live-streamed on Facebook, reposted on YouTube and shared across multiple social media accounts, reaching thousands of people despite major platforms' attempts to stop its spread.
Jack Shock, distinguished professor of communication, teaches a number of courses including communication law.
“I think we are in uncharted water with some of the most toxic, unintended consequences of social media platforms,” Shock said. “The original intent of a social media platform was obviously to facilitate communication and connection. Somewhere along the way [it] went bad, and evil intent replaced original intent. Now the problem is, how do you regulate that?”
According to a CNN report, the shooter streamed the killings as if they were happening in a video game. The tie to the video-gaming world became even more apparent when the shooter encouraged viewers to subscribe to PewDiePie, a YouTube gamer known for making anti-Semitic remarks in his videos. The YouTuber, whose channel has over 90 million subscribers, quickly disavowed the shooting.
“It’s not the media,” Shock said. “It’s what people do with the media that is evil.”
With social media, content once seen only by small circles in law enforcement can now be released and accessed worldwide in a matter of seconds. According to the CNN report, the horror was previewed before the attack in a post on the anonymous message board 8chan — "a particularly lawless forum that often features racist and extremist posts."
“The only thing I’m trying to say is media itself are not bad,” Shock said. “Because bad people sit behind a keyboard and do bad things.”
The CNN report states the post linked to the shooter's manifesto and directed people to the live-stream on Facebook. Facebook took down the video, and Twitter deleted the alleged shooter's profile, but not before multiple versions of the content had spread like wildfire across social media platforms.
Jim Miller, associate professor and chair of the communication department, said the issue lies in deciding who gets to set the rules and regulations for the broadcasting of crime.
“We record crimes all the time. We have hidden cameras with that sort of purpose — recording crime, so we can prosecute it,” Miller said.
Drew Harwell with the Washington Post, who covers artificial intelligence and algorithms, tweeted, “[The video] was re-uploaded 1.5 million times to Facebook within the first 24 hours; 300,000 made it through.”
Those pushing the videos, according to the Washington Post, altered the original footage of the shooting in ways that "were enough to evade detection by artificial-intelligence systems designed by some of the world's most technologically advanced companies to block such content."
According to the Washington Post, YouTube's Chief Product Officer Neal Mohan assembled a group of executives who handle crises such as footage of a shooting spreading online. The team eventually took unprecedented steps: disabling search functions and cutting off human review.
Miller said he becomes a little uneasy at any level of speech regulation because he holds the First Amendment so near and dear.
“Should people be nice and kind and polite and not broadcast murders on Facebook? Yes. Do they do it? Yes, they do,” Shock said. “Now, what we do with it, I don’t know yet.”