Collaborative Blocking

From P2P Foundation


Description

Glenn Fleishman:

"With collaborative blocking, a group of people create a list of accounts to block or, in some cases, mute. The list is propagated through a Web-based app that allows people to opt-in with a Twitter account, authorizing the app to carry out certain behavior on their behalf. Twitter allows clients and specialized apps to block, mute, and unfollow, among other actions.

The services add blocked or muted accounts on a continuous basis to each subscribed account, throttled against Twitter’s rules for frequency of updates. Twitter declined to comment generally on harassment policies and related issues for this article, but confirmed that third-party apps like these are valid uses of its developer tools.
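
An illustrative sketch of the mechanics described above: a service, authorized by a subscriber during opt-in, pushes a shared list through Twitter’s v1.1 REST API endpoints for blocking and muting, pacing the requests to respect rate limits. The shared-list source, credentials, and pacing values here are assumptions, not any particular service’s internals (Python):

 # Minimal sketch: apply a shared blocklist to one opted-in subscriber.
 import time
 import requests
 from requests_oauthlib import OAuth1  # pip install requests-oauthlib
 
 BLOCK_URL = "https://api.twitter.com/1.1/blocks/create.json"
 MUTE_URL = "https://api.twitter.com/1.1/mutes/users/create.json"
 
 def apply_shared_list(subscriber_auth, shared_user_ids, mute_only=False,
                       pause_seconds=5.0):
     """Block (or mute) each listed account on the subscriber's behalf.
 
     subscriber_auth is an OAuth1 object built from the tokens the
     subscriber granted when opting in; pause_seconds stands in for
     whatever throttling a real service uses to stay inside Twitter's
     rate and automation rules.
     """
     url = MUTE_URL if mute_only else BLOCK_URL
     for user_id in shared_user_ids:
         resp = requests.post(url, params={"user_id": user_id},
                              auth=subscriber_auth)
         if resp.status_code == 429:  # rate limited: back off and skip;
             time.sleep(15 * 60)      # a real service would retry later
             continue
         resp.raise_for_status()
         time.sleep(pause_seconds)    # spread the requests out over time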

(A brief aside on Twitter terminology. When you block on Twitter, the blocked account is removed from your followers if it was among them, and can still see your timeline, but cannot use Twitter’s built-in retweet feature or favorite your tweets; Twitter also suppresses that account’s @ mentions from appearing directly in your feed. Blocking does not equate to a spam report, which is a separate feature. A muted Twitter user doesn’t lose follow, RT, or mention privileges, but the account that has muted the user doesn’t see that user’s tweets in the timeline. Some third-party apps have their own mute options, such as duration-based suppression, which edit the timeline as displayed in the app.)
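
The duration-based mute mentioned above lives entirely in the client: the app simply hides a muted account’s tweets from the timeline it renders until the mute expires. A hypothetical sketch, with the tweet shape and names purely illustrative:

 import time
 
 class TimelineFilter:
     """Client-side, duration-based mute (no server-side effect at all)."""
 
     def __init__(self):
         self._muted_until = {}  # user_id -> unix time the mute expires
 
     def mute(self, user_id, duration_seconds):
         self._muted_until[user_id] = time.time() + duration_seconds
 
     def visible(self, tweets):
         # Drop tweets from currently muted accounts; expired mutes lapse.
         now = time.time()
         return [t for t in tweets
                 if self._muted_until.get(t["user_id"], 0) <= now]
 
 # Example: hide an account for a week; its follow, RT, and mention
 # privileges on Twitter's side are untouched.
 f = TimelineFilter()
 f.mute(12345, 7 * 24 * 3600)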

The fact is that, as with most egregious behavior, the Vaguely Unpleasant offenders are a minority of the people who may engage in abuse, but through the sheer volume of their interactions, their recidivism, and their typical targeting of many individuals (sometimes in the same tweet), they are also relatively easy for a group to spot, mark, and block.

The asymmetrical benefit thus comes from a way to knock those identified as fitting the profile of a block-worthy account out of circulation quickly enough that subscribers to a list don’t experience that person’s tweets; or, because the accounts are blocked, those tweets are automatically removed from a subscriber’s timeline. This reduces the effectiveness of any given piece of abusive text, as it may not reach intended victims and, even if it does, it doesn’t persist.

It can also provide an effective counter to the trolls and villains who create endless numbers of accounts to speak their maledictions. No one person has to find and kill all these accounts; the load (and thus psychological toll) is distributed among all those who maintain the list. The thrill of knocking out abusers may counter some of the aggravation, too.

Of course, the devil is in the details, as he always is: the people who mark accounts as abusive for a particular blocking tool are as human as the rest of us. Whatever criteria are used to mark accounts as unsavory to a particular group or service, the party being blocked will disagree with the action, as might people who have opted into the group list.

Do these tools constitute censorship or an abuse of free speech? Maybe, but most likely not. Only to the same extent as any consensual, collaborative exclusionary process run for the benefit of members who wish to receive only the speech they choose to hear. Collaborative blocking is as much censorship as is Spamhaus, the service that uses a variety of methods to prevent some of the bazillions of unsolicited commercial emails from reaching their destinations.

People opt into collaborative blocks. The people excluded may still use Twitter; they may even still read the timelines of the accounts that have excluded them. What they can’t do is force people to hear what they have to say. And that enrages people who believe they have a right to speak in every forum. To paraphrase Stephanie Zvan, quoted later, people challenge exclusions when they think that all spaces in a given realm must also be their spaces. This is as true with street harassment as it is with Twitter." (http://enki2.tumblr.com/post/94572139249/how-collaborative-social-blocking-could-bring-sanity-to)


Example

The Block Bot

URL = http://www.theblockbot.com/

Glenn Fleishman:

"The Block Bot seems to be the first or first widely used collaborative blocking tool, and its original developer, James Billingham, says that he had to rewrite it last September to better conform with Twitter’s app rules. Block Bot’s administrative governance remains tied to the Atheist+ movement, a group attempting to accommodate diversity in “organized atheism,” as Stephanie Zvandescribes it. The five administrators are part of the A+ forum. Admins can add blockers who don’t need to be part of that group, and there are 33 such now. Accounts authorized by Block Bot are monitored for hashtag-based commands, such as “+ #AddBlocker”.

But Block Bot’s utility has shifted well beyond facilitating discussions about atheism, with a broad inclusion of people committed to ideologies around men’s rights (MRAs), anti-feminism, and exclusionary feminism (opposition to trans people and sex workers). These are all terribly loaded terms, and I’ll get to how that’s dealt with in a moment.

Block Bot users can opt in at Level 1, 2, or 3; Level 2 includes Level 1 blocks; Level 3, both 1 and 2. Level 1 blocks those sorts of people that most sensible people would agree were prima facie abusive; it doesn’t take a strong ideological association to identify that sort of threat or abuse, or to recognize stalking and fraudulent accounts. Level 2 adds ideologically identified people who may or may not be per se abusive. Level 3 could be defined as the clueless and irritating.
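
The cumulative levels reduce to a union over every level up to the one subscribed to. A small sketch, with placeholder account names standing in for the curated lists:

 LEVEL_LISTS = {
     1: {"abusive_account_a", "stalker_b"},  # prima facie abusive
     2: {"ideologue_c"},                     # ideologically identified
     3: {"clueless_d"},                      # clueless and irritating
 }
 
 def blocks_for(subscribed_level):
     """Union of every level up to and including the subscribed one."""
     accounts = set()
     for level in range(1, subscribed_level + 1):
         accounts |= LEVEL_LISTS[level]
     return accounts
 
 assert blocks_for(1) <= blocks_for(2) <= blocks_for(3)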

To meet Twitter’s rules, Block Bot provides a public list of the accounts currently present in each level. Billingham notes, “It never blocks someone you follow; also if a user unblocks someone it never reblocks them.” To avoid triggering Twitter’s spam-reporting algorithm, blocks are added in small waves. Block Bot has a tool, built partly at Twitter’s request, to remove blocks by level (or even all blocks) on an account when one leaves the service. On August 8, the Block Bot account tweeted that the service had applied about 320,000 blocks to its subscribed accounts over the course of seven days.
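
Those safeguards amount to a filtering pass before each wave of blocks is applied. A sketch, with the wave size and the data sources assumed:

 def plan_wave(candidates, following, user_unblocked, already_blocked,
               wave_size=50):
     """Pick the next small batch of blocks for one subscriber."""
     eligible = [u for u in candidates
                 if u not in following        # never block someone you follow
                 and u not in user_unblocked  # never reblock a manual unblock
                 and u not in already_blocked]
     # Small waves keep the service from tripping Twitter's spam heuristics.
     return eligible[:wave_size]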

Block Bot is currently hosted on GitHub as an open-source project for non-commercial use, and Billingham says he and others now helping with the project plan a code rewrite to make it work in a more distributed fashion." (http://enki2.tumblr.com/post/94572139249/how-collaborative-social-blocking-could-bring-sanity-to)