Officials from Facebook and Google went to Capitol Hill to discuss their companies’ efforts to tackle extremism and the rise of white nationalism on their platforms

Congress (Credit: Pixabay)

Representatives from Facebook and Google appeared before Congress today over the role of online platforms in promoting hate crimes, extremism and white nationalism.

The House Judiciary Committee hearing follows the mass shooting that targeted two mosques in Christchurch, New Zealand, resulting in the deaths of 50 people.

The terrorist live-streamed the attack on Facebook and videos of the shooting were shared on the Google-owned video platform YouTube.

Although both technology companies claim to remove any material linked to terrorism, they struggled to censor the footage as it was continually re-uploaded.

Spread of white nationalism and extremism online on the rise

In an opening statement, Judiciary Committee chairman Jerrold Nadler highlighted the 20% rise in US hate crimes last year and the role that online platforms have played in spreading hateful speech.

Judiciary Committee chair Jerrold Nadler (Credit: Wikimedia Commons)

 

He said: “In the age of instant communication with worldwide reach, white nationalist groups target communities of colour and religious minorities through social media platforms – some of which are well-known to all Americans and some of which operate in hidden corners of the web.

“These platforms are used as conduits to send vitriolic hate messages into every home and country.

“Efforts by media companies to counter this surge have fallen short and social network platforms continue to be used as ready avenues to spread dangerous white nationalist speech.”

The Counter Extremism Project, a non-profit group that aims to tackle extremism, followed 35 Facebook pages belonging to groups that support white supremacism or neo-Nazism for two months and found they collectively grew by 2,366 likes.

YouTube’s recommendations algorithm has also been accused of “radicalising” viewers by leading them to increasingly extreme content.

 

What are Google and Facebook doing to tackle white nationalism and extremism?

Both representatives of the tech giants condemned the New Zealand massacre and other incidents of far-right terrorism.

Neil Potts, Facebook’s director of public policy, speaking at the committee hearing

Neil Potts, Facebook’s director of public policy, said: “All of us at Facebook stand with the victims, their families and everyone affected by the horrific terrorist attack in New Zealand.

“I want to be clear, there is no place for terrorism or hate on Facebook.

“We remove any content that incites violence, bullies, harasses or threatens others and that’s why we’ve had long-standing policies against terrorism and hate and have invested so heavily in safety and security in the last few years.”

The company claims to employ 30,000 people across the world who are focused on safety and security.

Mr Potts claimed that Facebook has made “significant investments” in AI to try to detect extremist content before it is seen or reported, so that it can be removed more swiftly.

He added: “Human reviewers and automated technologies work in concert to keep violent, hateful and dangerous content from our platform in the first instance and to remove it quickly when it manages to get by our first line of defence.

“Our rules have always been clear that white supremacists are not allowed on the platform under any circumstances and we have banned more than 200 white supremacist organisations under our dangerous organisations policy.

“Last month we extended that ban to all praise, support and representation of white nationalism and white separatism.”

He also said that videos found to be in violation of the platform’s policies can be shared across the social media industry, enabling other companies to take quicker action.

Alexandria Walden, global policy lead for human rights and free expression for Google, speaking at the Judiciary Committee hearing

Alexandria Walden, global policy lead for human rights and free expression for Google, said: “Over the past two years, we’ve invested heavily in people and machines to quickly identify and remove content that violates our policies in regards to incitement of violence and hate speech.

“In the fourth quarter of 2018, over 70% of the eight million videos removed and reviewed were first flagged by a machine – the majority of which were removed before a single view was received.”

Google also has an in-house team which “proactively looks for new trends and content that may violate its policies”.

 

YouTube comments disabled during live stream

Other methods used to cut out extremism on YouTube include promoting videos with alternative viewpoints, making videos ineligible for ads and blocking comments.

The final method was seen in action during the live stream of the Judiciary Committee hearing, as the comments section was flooded with white nationalist and anti-Semitic messages.

For 30 minutes, comments which said “end Jewish supremacy”, “Jews are lizard people” and “every white country is being turned non-white” were openly shared on the platform.

YouTube’s comment section during the House Judiciary Committee live stream

Clown emojis and comments saying “Honk Honk” – an online meme that has been used by far-right social media users – were also prevalent.

Racist memes were also used by the terrorist behind the Christchurch massacre as a way of linking his attack to previous events.

 

Tech giants struggling to stop terrorist messages spreading on their platforms

The YouTube comment section on the Congressional live-stream exemplified the difficulties that social media platforms have when trying to stamp out hate speech.

Both Facebook and Google claimed the sheer volume of content shared on their platforms made preventing the spread of extremist messages difficult.

Mr Potts said: “Determining what should and should not be removed from our site is not always simple given the amount of content we have on our platform.

“We know we don’t, and we won’t always get it right but we’ve improved significantly.”

Google representative Ms Walden said that two billion people visit YouTube every month and that 500 hours of video are uploaded every minute.

She added: “Often we’ve found that content sits in a grey area that comes right up against the line – it might be offensive but it does not violate YouTube’s policies against incitement to violence and hate speech.

“Google builds its products for all users from all political stripes around the world.

“The success of our business is directly related to our ability to earn and maintain the trust of our users.

“We have a natural and long-term incentive to make sure that our products work for users of all viewpoints.

“We believe that we have developed a responsible approach to address the evolving and complex issues that manifest on our platform.”

One congressman noted that videos of the Christchurch shooting were still spreading freely on WhatsApp three days after the terrorist attack, claiming that, by design, the app has no way of tracking or preventing the spread of such videos.

Mr Potts claimed Facebook was able to remove videos of the New Zealand shooting within 10 minutes and prevented 1.2 million additional uploads.

He added: “WhatsApp has its own policies on content and they are committed to working with law enforcement.”