At Forbes, Kalev Leetaru reports on Friday’s introduction of “hate speech” filtering on Twitter:
Earlier this morning, social media and the tech press lit up with reports of users across Twitter receiving half-day suspensions en masse as the platform abruptly rolled out its decade-overdue hate speech filter. The company has refused to provide details on specifically how the new system works, but using a combination of behavioral and keyword indicators, the filter flags posts it deems to be violations of Twitter’s acceptable speech policy and issues users half-day suspensions during which they cannot post new tweets and their existing tweets are visible only to followers. From the platform that once called itself “the free speech wing of the free speech party,” these new tools mark an incredible turn of events for a company that just two years ago famously wrote Congress to say it would do everything in its power to uphold the right of terrorists to post freely to its platform. What does Twitter’s new interest in hate speech tell us about the future of free speech online?
It was just a year ago that I wrote on these very pages about Twitter’s evolution from bastion of free speech to global censor, as its utopian dreams collided with the realities of running a commercial company. Yet, even after changing its official written policy on acceptable speech and touting that it would do more to fight abuse, little has changed over the past year. Indeed, from its inception a decade ago, Twitter has done little to address the problem of hateful and abusive speech on its platform.
[…] the concern here is that Twitter has thus far refused to provide further detail on even the broad contours of the indicators it is using, especially the particular linguistic cues it is concerned with. While offering too much detail might give the upper hand to those who would try to work around the new system, it is important for the broader community to have at least some understanding of the kinds of language flagged by Twitter’s new tool, so that they can offer more informed feedback to help shape it, given that both algorithms and people are far from infallible. Simply rolling out a new tool that begins suspending users without warning or recourse, and without any visibility into how those decisions are being made, is a textbook example of how not to introduce such a feature to a user community: the tool instantly becomes confrontational rather than educational.
Moreover, it is unclear why Twitter chose not to permit users to contest what they believe to be a wrongful suspension. The company did not respond to a request for comment on why suspended users are not given a button to appeal a suspension they believe stems from algorithmic or human error, or from a lack of contextual understanding. Given that the feature is brand new and bound to encounter plenty of unforeseen contexts in which it could yield a wrong result, it is surprising that Twitter chose not to provide a recovery mechanism that could catch such errors before they become news.
H/T to Peter Grant for the link.