Social media has sharpened humans’ age-old appetite for public shaming, providing a stage and unlimited seating for a seemingly unending stream of immorality plays. Those who share even the simplest identifying details about themselves are vulnerable to being pushed into the glare of the spotlight.
The anonymity the Internet provides frees many individuals from the consequences they might face offline for abusing other people. Though they may appear to friends, family and colleagues as ordinary people in the real world, these Jekyll-and-Hyde netizens transform into trolls to carry out their online assaults.
Anonymity has been a hot-button issue for just about the entire life of the Internet, and although there is no 100 percent solution in sight, the situation is not entirely hopeless, according to Charles King, principal analyst at Pund-IT.
“So long as public sites enable user anonymity, pathological behavior will continue, because it thrives in the shadows,” he told TechNewsWorld. “Forcing abusers into the sunlight may be difficult or impossible — but changes in rules, laws and enforcement practices could make their lives more complicated and less comfortable.”
Deep Dive Into Dirt
We know what the problem looks like, thanks to big data and analytics.
A recent analysis identified more than 17,000 tweets related to body shaming, for example, and ranked the most common terms Twitter users lobbed at others to shame them for their weight.
Artificial intelligence soon might be able to catch and moderate cruel posts mere moments after publication, suggested a University of Lisbon team of researchers who have leveraged machine learning to teach AI to suss out sarcasm.
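To give a rough sense of how such automated moderation works under the hood, here is a minimal sketch of a bag-of-words Naive Bayes text classifier, written in plain Python. This is an illustration only: the tiny hand-made training set and the function names are hypothetical, and the University of Lisbon team's actual model is far more sophisticated.

```python
import math
from collections import Counter, defaultdict

# Hypothetical, tiny training set -- real moderation systems learn from
# millions of labeled posts, not a handful of made-up examples.
TRAIN = [
    ("you are a pathetic loser", "abusive"),
    ("nobody likes you just quit", "abusive"),
    ("what an idiot take delete this", "abusive"),
    ("great point thanks for sharing", "ok"),
    ("congrats on the launch well done", "ok"),
    ("interesting article i learned a lot", "ok"),
]

def train(examples):
    """Count word frequencies per label for a Naive Bayes classifier."""
    word_counts = defaultdict(Counter)
    label_counts = Counter()
    for text, label in examples:
        label_counts[label] += 1
        word_counts[label].update(text.lower().split())
    return word_counts, label_counts

def classify(text, word_counts, label_counts):
    """Pick the most likely label using log-probabilities with
    add-one (Laplace) smoothing to handle unseen words."""
    vocab = {w for counts in word_counts.values() for w in counts}
    total = sum(label_counts.values())
    scores = {}
    for label, count in label_counts.items():
        score = math.log(count / total)
        denom = sum(word_counts[label].values()) + len(vocab)
        for word in text.lower().split():
            score += math.log((word_counts[label][word] + 1) / denom)
        scores[label] = score
    return max(scores, key=scores.get)

wc, lc = train(TRAIN)
print(classify("you are such a loser", wc, lc))  # -> abusive
```

A deployed system would flag a post the instant it scores as abusive, routing it to human review or holding it back before the target ever sees it, which is exactly the moment-after-publication moderation the researchers envision.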
For now, the moderation and reporting tools available aren’t set up to prevent or discourage online abuse, said Rob Enderle, principal analyst at the Enderle Group.
“Reputation protection services can be used, but that doesn’t scale well — they target one person at a time — and it can be really expensive if you have to litigate and your attacker has no money,” he told TechNewsWorld.
What to Do?
Reddit currently has the best system in place, in Enderle's view: its shadow-blocking tools shield users from whomever they wish to block, while allowing offenders to keep their accounts. Offenders are none the wiser, barring some detective work.
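The core idea of shadow blocking can be sketched in a few lines: the blocked author keeps posting and still sees their own posts, but the blocking user's feed silently omits them. The names below (`Post`, `visible_feed`) are illustrative, not Reddit's actual API.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str

def visible_feed(viewer, posts, blocks):
    """Return the posts `viewer` should see.

    A blocked author's own view is unaffected -- they still see their
    posts everywhere they look, so they get no signal that anyone
    has blocked them.
    """
    blocked = blocks.get(viewer, set())
    return [p for p in posts
            if p.author not in blocked or p.author == viewer]

posts = [Post("troll", "insult"), Post("friend", "hello")]
blocks = {"alice": {"troll"}}

print([p.author for p in visible_feed("alice", posts, blocks)])  # ['friend']
print([p.author for p in visible_feed("troll", posts, blocks)])  # ['troll', 'friend']
```

The asymmetry is the point: because the offender's view of the site never changes, there is no visible punishment to rage against, which is why uncovering a shadow block takes deliberate detective work.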
“Of course, publicizing shamers so they lose their jobs, gym memberships, and get attacked themselves does work,” he acknowledged, “and if it is done enough, that should change behavior.”
However, that approach so far hasn’t been used enough to make a difference, Enderle said.
That could change if social media sites and other forums were willing to make some changes.
They could take proactive steps that might make a difference, noted King, who pointed to a list of suggestions for Twitter, posted online by Randi Lee Harper, founder of the Online Abuse Prevention Initiative.
Those changes might result in a significant decrease in the prevalence of abuse on Twitter, but what will it take to inspire websites and their parent companies to intercede?
“Many, if not most, technology vendors bend over backward to avoid favoritism and maintain level playing fields for users of all stripes,” King pointed out. “I respect that attitude, but it’s often subject to being gamed by some users — and in some circumstances has resulted in online environments that amplify abusive behavior.”
Machine learning tools one day might be capable of rejecting abusive comments before their intended targets ever see them. However, even if the companies running social networks work strenuously to stomp out…