Ever since the Tor browser was released 15 years ago, letting people surf the internet anonymously, the dark web has become a breeding ground for nefarious activities such as child sexual abuse and the distribution of child abuse imagery. Now Google has developed a new artificial intelligence (AI) tool to check the spread of content involving child sexual abuse.
Google said the tool uses cutting-edge deep neural networks for image processing to help discover and identify child sexual abuse material (CSAM) online. The new tool will be made available free of charge to non-governmental organizations (NGOs) and other “industry partners”.
Google lead engineer Nikola Todorovic and product manager Abhi Chaudhuri wrote in the company’s official blog post:
“Using the Internet as a means to spread content that sexually exploits children is one of the worst abuses imaginable. Quick identification of new images means that children who are being sexually abused today are much more likely to be identified and protected from further abuse.”
The new tool will help human moderators sort and flag CSAM photos and videos. The software prioritises the content most likely to fall into this category, speeding up the review process so that offending images can be taken down faster. Reviewing abusive and disturbing images is a harrowing job, and many reviewers reportedly suffer from post-traumatic stress disorder (PTSD) after viewing large volumes of CSAM, terrorism and bestiality imagery.
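Google has not published details of its system, but the prioritisation step described above — surfacing the items a classifier is most confident about so human moderators review them first — can be sketched generically. Everything below is a hypothetical illustration: the item IDs, the scores, and the idea of feeding classifier confidences into a priority queue are assumptions, not Google’s actual implementation.

```python
import heapq

def triage(scored_items):
    """Yield item IDs in descending order of classifier confidence,
    so that human moderators see the most likely violations first.

    scored_items: iterable of (item_id, score) pairs, where `score` is
    a hypothetical classifier's confidence that the item violates policy.
    """
    # Negate scores because heapq implements a min-heap: the most
    # negative entry (i.e. the highest original score) pops first.
    heap = [(-score, item_id) for item_id, score in scored_items]
    heapq.heapify(heap)
    while heap:
        _, item_id = heapq.heappop(heap)
        yield item_id

# Usage: the item the classifier is most confident about comes out first.
queue = list(triage([("img_a", 0.12), ("img_b", 0.97), ("img_c", 0.55)]))
# queue == ["img_b", "img_c", "img_a"]
```

A priority queue like this is one natural fit here because new items can be pushed in as the classifier scores them, without re-sorting the whole backlog.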
One of Facebook’s 7,500 reviewers told The Guardian last year:
“There was literally nothing enjoyable about the job. You’d go into work at 9am every morning, turn on your computer and watch someone have their head cut off. Every day, every minute, that’s what you see. Heads being cut off.”
According to Google, the new tool can speed up the identification of CSAM images by 700%. Last year, a similar tool flagged 80,000 such images.