King of social media Mark Zuckerberg has revealed plans to develop Artificial Intelligence (AI) software capable of reviewing suspect content on Facebook.

In a letter just short of 5,000 words, Zuckerberg outlines ambitions to design algorithms that could detect and prevent terrorism, bullying or even suicide.

Like other social media platforms, Facebook has faced criticism over repeated failures to identify misuse of its network and act on it.

One such example occurred after the killing of Lee Rigby, when it emerged that one of the culprits had spoken of the plot on Facebook months before the attack.


In his letter, Zuckerberg responds to criticism by admitting, ‘the complexity of the issues we’ve seen has outstripped our existing processes for governing the community’.

He continues: ‘We are researching systems that can read text and look at photos and videos to understand if anything dangerous may be happening.

‘Right now, we’re starting to explore ways to use AI to tell the difference between news stories about terrorism and actual terrorist propaganda.’

The concept is not new – with both governments and corporations urgently awaiting progress, AI in the field of counter-terrorism is a fast-growing area.

Intelligence agencies receive masses of information, but without the ability to process such large quantities, many potential leads go unnoticed.

AI software capable of processing collected data will be key to staying on top of threats posed by terrorist activity, enabling agencies to take preventative rather than reactive measures.

Coupled with AI software able to understand visual surveillance, there is real potential for AI to totally change the face of the war on terror.

However, Zuckerberg notes that the effectiveness of AI software on the social networking site will not be immediate.

He writes: ‘It’s worth noting that major advances in AI are required to understand text, photos and videos to judge whether they contain hate speech, graphic violence, sexually explicit content, and more.

‘At our current pace of research, we hope to begin handling some of these cases in 2017, but others will not be possible for many years.’

With experts warning against becoming too reliant on technology in the fight against terrorism, it seems AI will be used to bolster, rather than replace, current human-driven intelligence methods.