Before we were so connected by technology, bullying was most frequently identified in school settings. Remember Scut Farkus in “A Christmas Story” or Brian Johnson (Anthony Michael Hall’s character) in “The Breakfast Club”? But bullying is not just for kids. Harassment and hate speech exist in the workplace and socially among adults. So, what’s changed, and what can be done about it?
Online activity, such as social media, texting, and gaming, provides constant contact and further reach with our peers, colleagues, friends, and even strangers. Masked behind a keypad or keyboard, some people find the courage to publish disparaging and hateful posts that they would never deliver in a face-to-face situation. And the problem for many young victims is that they do not report such incidents.
What does this have to do with artificial intelligence (AI)? There are some brilliant applications being developed and implemented to identify and curtail these actions.
AI Processes on Social Media
Instagram and Facebook use AI to identify users at high risk of suicide by recognizing word patterns in a post and then within related comments. This is processed along with other information, such as the time(s) the posts were made. Once an item is flagged, it’s sent to Facebook’s 1,000-plus Community Operations team for further review. Their 27-page Community Standards guideline explains what the algorithm and the team look for: among other things, hate speech, terrorist propaganda, violent imagery, and harmful threats.
DeepText is the algorithm Facebook uses to identify and understand textual content with “near human” accuracy. It analyzes content to filter out offensive posts. Think of your email spam detector: identifying malicious words and phrases in subject lines and message bodies works in a similar way. A huge difference is that DeepText processes several thousand posts per second in 20 languages! In October 2018, Instagram (whose parent company is Facebook) jumped on board and announced that it uses machine learning technology to automatically detect bullying in photos, videos, and their captions.
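To make the spam-filter analogy concrete, here is a minimal sketch of keyword-based flagging in Python. This is purely illustrative: DeepText and similar systems use learned models over enormous datasets, not hand-picked word lists, and the terms, weights, and threshold below are invented for the example.

```python
# Illustrative keyword scoring, loosely analogous to a spam filter.
# Real moderation systems learn these signals from data instead of
# using a fixed list; every term and weight here is a placeholder.
FLAGGED_TERMS = {"loser": 2, "hate": 1, "stupid": 1}
REVIEW_THRESHOLD = 2  # score at which a post is routed to human review

def score_post(text: str) -> int:
    """Sum the weights of any flagged terms appearing in the post."""
    return sum(FLAGGED_TERMS.get(word, 0) for word in text.lower().split())

def should_review(text: str) -> bool:
    """Return True if the post scores high enough to warrant review."""
    return score_post(text) >= REVIEW_THRESHOLD

print(should_review("you are such a loser"))  # True
print(should_review("have a great day"))      # False
```

A rule like this is brittle, which is exactly why the platforms moved to machine learning: the model, rather than a human, decides which words and patterns matter.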
Another product working to eradicate cyberbullying is Guardio. This free service was developed as a not-for-profit startup that uses AI technology to identify problematic social media activity and send the flagged messages to the child’s parents. Guardio uses IBM Watson technology, which provides natural language processing (NLP) and natural language classifiers (NLC), to decipher, label, and categorize words and messages.
In these products, once an algorithm is created, it relies on machine learning: the process of applying the algorithm’s rules to categorize or classify new content. Taking machine learning a step further, deep learning improves accuracy through the ongoing process of learning from new data as it becomes available.
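The “learn from labeled examples, then classify new content” step can be sketched with a toy Naive Bayes text classifier. This is a from-scratch teaching example, not how Facebook or Watson actually work; the training posts and labels are invented, and real systems train neural models on millions of examples.

```python
import math
from collections import Counter

class TinyClassifier:
    """Toy Naive Bayes classifier: learns word frequencies per label."""

    def __init__(self):
        self.word_counts = {}        # label -> Counter of word frequencies
        self.doc_counts = Counter()  # label -> number of training posts

    def train(self, text: str, label: str) -> None:
        self.doc_counts[label] += 1
        self.word_counts.setdefault(label, Counter()).update(text.lower().split())

    def classify(self, text: str) -> str:
        vocab = {w for counts in self.word_counts.values() for w in counts}
        total_docs = sum(self.doc_counts.values())
        scores = {}
        for label, counts in self.word_counts.items():
            total_words = sum(counts.values())
            # Log prior plus add-one-smoothed log likelihood of each word
            score = math.log(self.doc_counts[label] / total_docs)
            for word in text.lower().split():
                score += math.log((counts[word] + 1) / (total_words + len(vocab)))
            scores[label] = score
        return max(scores, key=scores.get)

clf = TinyClassifier()
clf.train("you are a loser", "bullying")
clf.train("nobody likes you loser", "bullying")
clf.train("great game today", "ok")
clf.train("see you at practice", "ok")

print(clf.classify("what a loser"))  # bullying
print(clf.classify("good game"))     # ok
```

The key difference from the keyword approach: nobody told the classifier which words are hurtful; it inferred that from the labeled examples, which is what lets these systems improve as more data arrives.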
But still, how can an algorithm successfully evaluate and censor content that is written with sarcasm or wit? As with most technology solutions throughout history, AI anti-bullying programs will improve with time. As AI evolves, we can count on technologies such as deep learning to drive that improvement.
It’s all about the user experience and the safety of children. But to what end: free expression at all costs, or machine censorship for our safety and well-being?
Here are some great resources if you’d like to read more on this important topic.