Twitter Finally Seeing the Light…We Hope: Attacking Abusive Behavior
If there is one thing not in short supply in the good old USA these days, it is hatred, along with venues in which to spew whatever vitriol you choose as your elected poison. While I use Twitter to promote my content and that of others, as well as to have conversations with fans and other writers in the biz, quite frankly, it is only a shade better policed than Reddit. Twitter's primary problem is a set of regulatory responses about as confusing as the Apple App Store at its finest. In the wake of its recent failure to curry sufficient favor with any investor to nab a sale of its assets, Twitter appears to be cleaning house before putting itself on the market again.
Two years ago, Twitter's then-CEO, Dick Costolo, publicly admitted the company's failure to properly police trolls, griefers, haters, and just plain old bigotry. There have been several high-profile cases of abuse in recent history, most notably the racially motivated harassment of Ghostbusters (2016) star Leslie Jones. And let me not even speak of GamerGate. Twitter's responses in those cases were more lame duck than any Congress we've experienced in recent history.
Minorities suffer a statistically higher frequency of abuse on the platform. While Costolo has long since stepped down, current CEO and co-founder Jack Dorsey's regime has seen little in the way of improvement. While some Twitter members have attained the vaunted "Verified" status, even with the expansion of that qualification, the badge remains reserved for the privileged, awarded, seemingly, by a subjective popularity contest rather than by any quantified need. If you are attacked 100 times and I am attacked 1,000, I do not move ahead of you in the "Verified" vetting queue, nor do I win approval at any higher rate, unless I am more popular than you; and even that rubric and its application are shrouded in mystery.
The key elements of the new implementation center on enhanced functionality around Twitter's "Mute" feature. Currently, you can mute accounts, theoretically those that have issued abusive comments in your direction. The problem is that Twitter's army of professional trolls will simply spawn frivolous accounts. In an operational profile analogous to email spam, trolls often groom these accounts for a short period, building them up to look legitimate so the offensive lasts longer once they flip the switch and start using them to be abusive. Because the accounts may appear legitimate, Twitter's bans are delayed: it takes human analysts longer to establish that an account is being used solely for abuse. These bad actors often keep several accounts registered, marinating the clones in infrequent banter designed to obfuscate the true intent of each account until they wake the sleeper agent and vector it toward a target of opportunity.
Once the new features are implemented, Twitter users will be able to preemptively mute keywords, phrases, and whole conversation threads they do not want to see. Applying a mute to a chosen set of terms prevents them from ever appearing in that user's feed. Twitter is also increasing the training it gives moderators on its policies and their enforcement. Supposedly, reporting abuse will also become easier, although I have not seen specific details on that improvement.
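To picture how keyword muting works, here is a minimal sketch of the idea: a filter that drops any item containing a muted term before it reaches the timeline. This is purely illustrative; it is not Twitter's actual implementation, and every name in it is hypothetical.

```python
# Illustrative sketch only: a simplified keyword/phrase mute filter.
# Not Twitter's real code; all function names here are hypothetical.

def build_mute_filter(muted_terms):
    """Return a predicate that is True only for tweets with no muted term."""
    # Case-insensitive matching so "Spoiler" and "spoiler" behave the same.
    terms = [t.lower() for t in muted_terms]

    def is_visible(tweet_text):
        text = tweet_text.lower()
        return not any(term in text for term in terms)

    return is_visible

def filter_feed(tweets, muted_terms):
    """Drop muted tweets before they ever appear in the user's feed."""
    visible = build_mute_filter(muted_terms)
    return [t for t in tweets if visible(t)]

feed = [
    "Great game last night!",
    "SPOILER: the ending was wild",
    "New episode drops Friday",
]
# Prints the feed with the spoiler tweet removed.
print(filter_feed(feed, ["spoiler"]))
```

The point of the preemptive design is that the filtering happens before display, so the user never sees the offending content at all, rather than reacting to it after the fact.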
Inserting these safe-social aids is timely and prudent. I'll give Twitter kudos for recognizing that the recent election results are likely to spark a spike in heated 140-character barbs aimed at anyone who tweets naughty statements about either contender for the US Presidency. That argument is going to go on for a very long time, perhaps until the next election. Here's hoping Twitter has struck on a mechanism that can be applied and policed consistently, at least more consistently than it has managed since the platform became as wildly popular as it is today.