Travelers from outside the United States have been cautioned by their own governments about travel to America because of the risk of gun violence. Inside the United States, the government is warning its own citizens, particularly those who are Jewish, members of migrant communities, and members of the LGBTQ community, to take caution because of threats of violence.
The warning took on new urgency after a gunman opened fire in an LGBTQ nightclub in Colorado Springs, Colo., on Nov. 19, 2022. Five people were killed in that rampage and 17 were injured, including the club patrons who subdued and incapacitated the gunman. The attack reopened the painful scars left by the 2016 shooting at Pulse Nightclub in Orlando, Fla., which killed 49 people and injured 53.
With the rise of religious extremism and the unchecked resurgence of anti-government conspiracy groups under the reign of Donald Trump, the United States has become a volatile and hateful place for those who are not white, cisgender, heterosexual Christians. Though hatred has long been an underlying thread throughout American history, how did it reach this point? One could argue that the assault on the nation’s capital on Jan. 6, 2021, was the flash point.
In a 128-page report, the Senate Homeland Security Committee alleged that federal agencies, including the FBI and the Department of Homeland Security, as well as the major social media platforms, are not doing enough to address the threat of domestic terrorism from white supremacists and anti-government extremists.
“Unfortunately, our counterterrorism agencies have not effectively tracked the data that you need to measure this threat,” Sen. Gary Peters, D-Mich., who chairs the Senate Homeland Security and Governmental Affairs Committee, said. “If they’re not tracking it, it’s likely they are not prioritizing our counterterrorism resources to effectively counter this threat.”
The FBI countered with its own statement, reading in part that it is “agile” and adjusting resources to meet threats. The DHS said, “addressing domestic violent extremism is a top priority.”
The FBI itself came under scrutiny when it emerged that the Colorado Springs gunman had been on the agency’s radar before the shooting, after he was arrested for threatening to kill family members. Agents closed the case just a few weeks later. It was not the first time someone came to the FBI’s attention, was cleared, and later carried out a mass shooting. The agency had received information about the gunman who killed 17 at a Florida high school, as well as the Pulse gunman and a man who set off bombs in the streets of New York City. All three had been scrutinized by federal agents who determined that those individuals did not warrant further attention from law enforcement.
The FBI does operate within guidelines meant to protect civil liberties, which restrict what agents can do during the assessment phase. During this phase, agents may analyze information from government systems, Internet searches, and interviews. More than 10,000 assessments are opened every year, and many are closed within days or weeks when the FBI decides that there is no criminal or national security threat and that continued surveillance is not warranted. The guidelines are meant to ensure that someone who has not broken the law doesn’t remain under a microscope without cause.
Both the FBI and DHS define “homegrown violent extremists” as those radicalized and inspired by foreign ideologies. The report pointed out that the men accused of killing 23 people at a Walmart in El Paso, Texas, and 10 at a Buffalo, N.Y., supermarket were not given this designation, even though both claimed to be inspired by a shooter who killed 51 people at two Christchurch, New Zealand, mosques, as well as by other racist and antisemitic ideologies. The way the FBI categorizes domestic terrorism can also obscure the scope of the problem. In 2017, the agency created a category called “Black Identity Extremists” but later stopped using it. In 2019, all forms of racially motivated extremism, including “White Supremacist Violence,” were lumped into a single category called “Racially Motivated Violent Extremists.”
“This change obscures the full scope of white supremacist terrorist attacks, and it has prevented the federal government from accurately measuring domestic terrorism threats,” the report said.
The report also criticized both agencies for how they gather threat intelligence, especially intelligence posted publicly on social media. One glaring example was the flood of information leading up to the Jan. 6 attack, which the FBI deemed not specific enough to prompt action.
On the social media front, the major platforms, including Meta (formerly known as Facebook), TikTok, and YouTube, have each released statements. A Meta spokesperson pointed to the company’s Community Standards Enforcement Report, which showed a low prevalence of terror and organized hate content, and Nick Clegg, a top executive, made a similar defense. Last year, Clegg stated, “The reality is, it’s not in Facebook’s interest — financially or reputationally — to continually turn up the temperature and push users towards ever more extreme content.”
In 2021, an investigation by a researcher employed by Facebook found that the platform’s algorithms pushed users into so-called rabbit holes. The researcher created fictitious user accounts in 2019 and 2020 to study how the platform’s algorithms contributed to misinformation and the polarization of users. The algorithms suggest groups and people that can lead users toward radicalization. Though that radicalization affects only a small fraction of Facebook’s total user base, given the platform’s scale, it could mean millions of people.
That researcher’s findings were compiled in a report called “Carol’s Journey to QAnon” and were included among documents presented to the Securities and Exchange Commission. They were also provided to Congress in redacted form by attorneys for Frances Haugen, a Facebook product manager turned whistleblower. Haugen has filed several complaints alleging that the social media giant puts profits over public safety, and she testified before a Senate subcommittee in the fall of 2021. Though Facebook has community standards, users have complained that material and accounts reported for violating those standards are not properly policed or restricted. Instead, the users doing the reporting sometimes find their own accounts restricted or suspended.
The Committee’s investigation found that the four major social media companies — Meta, TikTok, Twitter, and YouTube — use the same business model of maximizing user engagement, growth, and profits, effectively incentivizing extremists and their content.
“These companies point to the voluminous amount of violative content they remove from their platforms, but the investigation found that their own recommendations, algorithms and other features and products play [a role] in the proliferation of that content in the first place,” the report said. “Absent new incentives or regulation, extremist content will continue to proliferate on these platforms and companies’ content moderation efforts will continue to be inadequate to stop its spread.”
The report went on to add, “Although Meta, TikTok, Twitter, and YouTube have a range of policies aimed at addressing extremist and hateful content on their platforms…extreme content is still prevalent across these platforms.”