London: Computer scientists have launched an app, 'Bullstop', that uses novel artificial intelligence (AI) algorithms to combat trolling and bullying online.
The downloadable app is the only anti-cyberbullying app that integrates directly with social media platforms to protect users from bullies and trolls messaging them directly, according to the scientists from Aston University in the UK.
Bullstop is available for free and can now be downloaded from Google Play.
"This application differs from other apps because the use of artificial intelligence to detect cyberbullying is unique in itself," said Semiu Salawu, who designed Bullstop.
"Other anti-cyberbullying apps, in comparison, use keywords to detect instances of bullying, inappropriate or threatening language," Salawu added.
According to the developer, the detection AI has been trained on over 60,000 tweets to recognise not only abusive and offensive language, but also the use of subtle means such as sarcasm and exclusion to bully, which are otherwise difficult to detect using keywords.
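As an illustration of the difference, the sketch below trains a small statistical text classifier on labelled example messages rather than matching keywords. The tiny dataset, the scikit-learn pipeline and the labels are purely hypothetical and are not Bullstop's actual model or training data.

```python
# Minimal sketch: a classifier learned from labelled examples, as opposed to
# simple keyword matching. Illustrative only, not Bullstop's training setup.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled examples (1 = bullying/offensive, 0 = benign).
# "nice one, genius" is only abusive in a sarcastic context, which a
# keyword list alone cannot capture.
messages = [
    "you are worthless and everyone knows it",
    "nobody wants you in this group chat",
    "nice one, genius, you ruined it again",
    "great job on the presentation today",
    "see you at practice tomorrow",
    "thanks for helping me with my homework",
]
labels = [1, 1, 1, 0, 0, 0]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(messages, labels)

print(model.predict(["wow, real smart move, genius"]))  # likely flagged as offensive
```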
"It uses a distributed cloud-based architecture that makes it possible for 'classifiers' to be swapped in and out. Therefore, as better artificial intelligence algorithms become available, they can be easily integrated to improve the app," Salawu explained.
The team revealed that Bullstop is unique in that it monitors a user's social media profile, scanning not only incoming messages to ensure the user is not subjected to abuse, but also the user's own outgoing messages.
This works via an algorithm designed to understand written language. It analyses messages and flags offensive content, such as instances of cyberbullying, abusive, insulting or threatening language, pornography and spam.
Offensive messages can be immediately deleted from the user's inbox.
A copy of deleted messages is, however, retained should the user wish to review them. The app can also automatically block contacts who repeatedly send offensive messages.
Bullstop is highly configurable, allowing the user to determine how comprehensively the app removes inappropriate messages.
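The sketch below illustrates how such a configurable filter might behave: a flagged message is removed from the inbox but a copy is retained for review, and a sender is auto-blocked after repeated offences. The settings, field names and thresholds are assumptions for illustration, not taken from the app itself.

```python
# Minimal sketch of the moderation flow described above. The classifier
# decision (is_offensive) would come from the AI model in the real app.
from dataclasses import dataclass, field

@dataclass
class FilterConfig:
    delete_flagged: bool = True   # remove flagged messages from the inbox
    block_after: int = 3          # auto-block after this many flagged messages

@dataclass
class Mailbox:
    inbox: list = field(default_factory=list)
    quarantine: list = field(default_factory=list)   # retained copies for review
    blocked: set = field(default_factory=set)
    offence_counts: dict = field(default_factory=dict)

def handle_incoming(mailbox, config, sender, text, is_offensive):
    if sender in mailbox.blocked:
        return
    if is_offensive:
        mailbox.quarantine.append((sender, text))    # keep a reviewable copy
        if not config.delete_flagged:
            mailbox.inbox.append((sender, text))
        mailbox.offence_counts[sender] = mailbox.offence_counts.get(sender, 0) + 1
        if mailbox.offence_counts[sender] >= config.block_after:
            mailbox.blocked.add(sender)
    else:
        mailbox.inbox.append((sender, text))

# Example: after three flagged messages, the sender is blocked automatically.
cfg, box = FilterConfig(), Mailbox()
for _ in range(3):
    handle_incoming(box, cfg, "troll_account", "you are worthless", is_offensive=True)
print("troll_account" in box.blocked)  # True
```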
The app currently supports Twitter, with support for text messages planned for the next stage of the rollout.
The researchers hope that, with continued usage and good results, other social media platforms such as Facebook and Instagram will come on board, allowing their users to benefit from the app.
The app is currently in beta testing, meaning the researchers are inviting users to provide feedback so they can make improvements.
"It has already been tested by a number of young people and professionals, including teachers, police officers and psychologists," the authors wrote.