

Chat Bots of the Near Future Could Help Police Online Channels for Illegal Activity

June 30, 2016



While digital messaging media have been a boon to humanity in general, from personal communications to customer support, it’s also important to recognize that these channels are sometimes abused, particularly when it comes to kids on social media. A relatively anonymous format makes it easy to send a threatening, bullying or terrifying message, and while it takes a perpetrator only a second to hit “send,” the ramifications for the recipient can last a long time.


Parents may think they’re tech savvy, and they may be monitoring obvious channels like Facebook Messenger or email. What they might be missing, however, are communication channels in less obvious places, such as inside online gaming experiences, where threatening behavior is common and pedophiles may even be prowling for victims.

While many social media and gaming sites strive to stay ahead of these threats, some learning the hard way (Twitter finally beefed up its reporting function after several high-profile cases of online bullying and threatening behavior), they can’t catch everything. This is another area in which automated chat bots, or artificial intelligence-driven automated chat windows, may prove highly useful, according to a recent blog post by Aspect’s Lisa Michaud. As chat bots get smarter, they can begin to understand the intent of human language and tell the difference between ordinary chat and dangerous chat.

“Research happening right now among my fellow computational linguists may be one way to start making the online streets safer,” wrote Michaud. “A significant effort has been made toward processing natural language not just to derive its meaning, but also for determining the intentions, personality, or even the age of the speaker. This year, I spoke with someone doing work on determining the age of a writer and also with one of the authors of a paper on detecting when a speaker is practicing deception. This second researcher and I discussed how her work has fantastic implications for the safety of online channels.”

Michaud wrote that it’s not uncommon today for law enforcement groups to seek ways to police online communities and catch criminals before they can act, but these groups are small and cannot monitor everything. This is where chat bots can come in.

“A ‘bot,’ however, whose purpose is not to converse but just to listen, could potentially process ALL of the text communicated via a messaging platform in a way no humans could possibly do,” according to Michaud. “A learned model trained on the indicators of predatory behavior could flag potential predators for review by human staff, hopefully enabling interventions quickly.”
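To make that idea concrete, here is a minimal sketch of such a “listening” classifier in Python using scikit-learn. The toy training data, feature choices and review threshold are hypothetical, not Michaud’s:

    # A learned model trained on labeled chat messages, where label 1
    # marks examples of known predatory behavior (toy data shown).
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    messages = ["want to meet up alone? don't tell anyone",
                "gg, nice match, rematch tomorrow?"]
    labels = [1, 0]

    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                          LogisticRegression())
    model.fit(messages, labels)

    def screen(message, threshold=0.8):
        """Queue a message for human review if its risk score is high."""
        risk = model.predict_proba([message])[0][1]
        return risk >= threshold  # True means "flag for human staff"

Notably, a bot like this would only flag messages for human staff to review, never act on its own, which matches Michaud’s framing.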

Smart language processing could, for example, spot mismatches between the words used and the demographics of the person writing them. Someone claiming to be 13 but using slang common to older generations could be misrepresenting himself. Chats that target younger people and demand in-person meetings or personal photographs could also be flagged for human attention, as could any chats that use threatening language.
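Those signals could also drive a simple rule-based first pass ahead of any learned model. The slang list and patterns below are invented purely for illustration:

    import re

    # Illustrative heuristics mirroring the signals above; the slang
    # list and regular expressions are invented for this example.
    OLDER_SLANG = ("groovy", "far out", "swell")
    MEETUP_OR_PHOTOS = re.compile(
        r"\b(meet (up|me)|in person|send (a )?(pic|photo)s?)\b", re.I)
    THREATS = re.compile(r"\b(hurt you|find you|or else)\b", re.I)

    def flag_reasons(message, claimed_age):
        """Return the reasons, if any, a message deserves human attention."""
        reasons = []
        text = message.lower()
        if claimed_age <= 16 and any(slang in text for slang in OLDER_SLANG):
            reasons.append("slang inconsistent with claimed age")
        if MEETUP_OR_PHOTOS.search(message):
            reasons.append("asks for an in-person meeting or photos")
        if THREATS.search(message):
            reasons.append("threatening language")
        return reasons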

“We know that millennials and my kids’ generation are taking to online life like an otter in a river,” wrote Michaud. “Anything we can do to keep that river free of crocodiles will make our burden as parents just a little bit easier.”




Edited by Maurice Nagle