<3 English version <3
Three things to make twitter a safe(r) space.
Twitter is a rather special place, even by social media standards. While it can quickly mobilize a great deal of activism and commitment – as in the #YesAllWomen campaign – it also gives rise to gale-force shitstorms and mass harassment – see #Gamergate. What makes a shitstorm so overwhelming is the sheer amount of abuse raining down on one person. As soon as one abusive account is blocked, another one is created, just to keep drowning someone in hatred. Twitter’s own broken abuse report system isn’t likely to see improvements anytime soon. The company has so far seemed to be quite unimpressed by calls for a system that could offer users better protection from threats, insults, stalking and violence. After all, it’s about freedom of expression – and thus, the requirements to delete accounts must be quite high. What remains unsaid is that the company simply lacks the internal structures and wo_manpower to stem the flood of abuse and harassment that becomes apparent on Twitter daily.
Well, if it’s not about the users per se, maybe the issue could be tackled in a different way? Here are a few suggestions I just came up with off the top of my head – no feasibility check done. I’ll leave that task to those who get paid to do it. Alternatively, one could develop a new social network based on these ideas. I’d call it “Ohai” and I’d like to get at least 50 per cent of the shares.
So, here come some of the core issues on Twitter and how they could be solved:
1. Mention flooding
During a shitstorm, the mentions get overrun with hatred, harassment and threats. When an account is blocked, new ones spring up immediately – the German campaign #aufschrei, for example, saw an abuser incessantly signing up accounts under the same name followed by a rising number, forcing his targets to block each new account every single time. The same tactic is used on a large scale under the banner of #GamerGate: a majority of the accounts sending tens of thousands of replies to women like Anita Sarkeesian or Zoe Quinn were created solely for this purpose.
Solution: Introduce the option to not receive mentions from accounts created less than X days ago (choose from 1 day, 30 days, 90 days, or enter a custom value). Users could turn this filter on and off freely whenever the need arises. The wave of hatred would still be there, but it would vanish from the targets’ immediate attention. Accounts created purposely for this end would lose part of their appeal.
2. The individual burden
For many activists it’s a sad reality: there are accounts that, even though they themselves are blocked, still show up in the mentions, because they draw followers into discussions and some part of one’s own Twitter network keeps communicating with them. There are cases of fake identities built up to gain the trust of a network. Some accounts move from activist to activist to harass – and meet people who aren’t aware of who they are dealing with. Furthermore, the burden of blocking rests on every single individual, even if several others have blocked the account already. An exhausting task, if you have to face it every day.
Solution: Every account can appoint Trusted Users – people who belong to one’s inner circle. This is important because it lets each user build an effective, individual filter bubble. One feature of these Trusted Users could be to automatically block/mute accounts that Trusted Users have already blocked/muted (this feature could be turned on/off on demand – voluntarily, not automatically for everybody). Furthermore, Trusted Users could mark accounts as problematic. Before following/replying to/retweeting such an account, a warning could be shown – this could stop unaware followers from dragging stalked users into conversations with their stalkers, or from retweeting tweets from questionable sources.
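The block-sharing part of this could be sketched like so – again a hypothetical Python sketch, assuming a `User` object with a personal blocklist and a list of trusted accounts; the opt-in flag mirrors the "voluntarily, not automatically" point above.

```python
class User:
    """Hypothetical user with a personal blocklist and Trusted Users."""

    def __init__(self, handle):
        self.handle = handle
        self.blocked = set()        # handles this user blocked themselves
        self.trusted = set()        # other User objects in the inner circle
        self.share_blocks = False   # opt-in: inherit blocks from Trusted Users

    def effective_blocklist(self):
        """Own blocks, plus Trusted Users' blocks if the feature is on."""
        blocked = set(self.blocked)
        if self.share_blocks:
            for trusted_user in self.trusted:
                blocked |= trusted_user.blocked
        return blocked

    def sees_mention_from(self, handle):
        return handle not in self.effective_blocklist()
```

The point of the opt-in flag is that nobody's filter bubble grows without their consent: a harasser blocked by one person in the circle disappears for everyone who chose to trust that person's judgment, and for no one else.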
3. Loss of control
Even though you can block, mute and filter accounts, the shit that gets posted in reply to your own tweets sticks. You can’t moderate or delete replies – even though Twitter, as a microblogging tool, constitutes a form of blogging, and comment moderation is a basic feature of every blog. Especially problematic are defamatory replies and commenting retweets.
Solution: Users regain power over their replies, similar to Facebook or G+, and are able to remove (not delete) insulting or defamatory comments, or at least to mark them as problematic. Person A wouldn’t delete the problematic comments of account B, but simply keep them from being shown when a third party visits A’s profile to read A’s tweets. They would still be visible on the profile page of B.
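The remove-but-not-delete distinction can be made precise in a few lines. A hypothetical sketch: hiding a reply only affects the view rendered for A’s profile, while the tweet object itself stays untouched on B’s side.

```python
class Tweet:
    """Hypothetical tweet; reply_to names the account being replied to."""

    def __init__(self, author, text, reply_to=None):
        self.author = author
        self.text = text
        self.reply_to = reply_to

class Profile:
    """Hypothetical profile view with per-profile reply moderation."""

    def __init__(self, handle):
        self.handle = handle
        self.hidden_replies = set()  # removed from *this* profile view only

    def hide_reply(self, tweet):
        self.hidden_replies.add(tweet)

    def visible_replies(self, all_tweets):
        """Replies a third party would see when visiting this profile."""
        return [t for t in all_tweets
                if t.reply_to == self.handle and t not in self.hidden_replies]
```

Nothing is deleted anywhere: B’s tweet remains in B’s own timeline and anywhere else it was shown, which keeps the free-speech objection out of the picture while still letting A curate their own space.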
What do you think? What could be implemented, what ideas do you have?
Many thanks to tante for his feedback and ideas, as well as Jo and xinitrc for helping me translate! (and if you do find grammar mistakes, just let me know, I’m happy to edit)