Rilke used to say that no poet would mind going to gaol, since he would at least have time to explore the treasure house of his memory. In many respects Rilke was a prick. – Clive James
So a particularly nasty bout of threatening, possibly illegal, abuse against Caroline Criado-Perez triggered a petition asking for a report abuse button. Brooke Magnanti counters with examples of how this, and a Twitter boycott, may be unproductive; it's insightful in itself, and as the former Belle de Jour she does have an interesting angle on pseudonymity and publishing.
So this is society’s pathology, mediated by technology, and because Twitter is pretty neat, mediated in real time and connecting strangers at massive scale. It’s Larry Niven’s Flash Crowd of course, taken to its fastest immaterial instantiation. There are slow, hard things to change about human society to make it less awful. The petition is right, though, in that technology got us into this specific version of the problem, and there are surely smarter technical things to limit it; but it’s worth noting that right now no-one actually knows what they are. So here are a few design assumptions and speculations, in the hope they spark ideas in others.
Parameters / Assumptions
Manual abuse reporting is a deliberate usability choice. It makes you think about the accusation of abuse, and will place a premium on a coherent case. Abuse reporting is judicial and needs due process. It’s probably also rational laziness by Twitter: at small scales this is the cheapest solution to implement.
Adding structure is adding due process, but it’s also institutionalising abuse. At uni, I broke my right arm in a soccer game. I had a lecturer in rationality at the time who noted that soccer had incorporated a whole system of breaking the rules into the game itself, with yellow / red cards. That then motivates the entire diving substructure (pretending to be injured or fouled to get advantage). As in soccer, so in Twitter: all systems will be gamed, especially judicial ones. This effect manifests right down to the amount of structure you put on the report abuse form. Each element narrows the likely scope of human judgement; an abuse form also describes the sort of thing that might be considered abuse.
Human review is needed – with tools that scale. I don’t know any Twitter employees, so this is speculation, but it sounds like it is just reading emails and kicking individuals at this point.
The criminal justice system is needed, and shouldn’t be outsourced to a corporation. This part will be slow. Write to your government, but also keep in mind a certain slowness is a side effect of due process.
Use data visualization to analyse abuse events rapidly and at scale. Using new data views to augment human judgement is a digital humanities problem. Require one example tweet in the form submission. The abuse support person needs to be able to rapidly see the extent and intensity of the abuse. To facilitate this, when they open an abuse ticket, they should be able to see the offending tweet, the conversation it happened in, and all of the reported user's and reporter's Twitter networks. This consists of followers, people followed, people replied to, people mentioned, people mentioning, people using the same hashtag. They can view much of this as a literal graph. All this can be pre-calculated and shown as soon as the ticket is opened, without any automated intervention in the tweets themselves. Show ngrams of word frequencies in reported tweets. In the recent example, they aren’t subtle. Allow filtering by time window.
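A minimal sketch of the word-frequency-with-time-window idea. All the tweet data, field layout, and function names here are hypothetical illustrations, not Twitter's actual data model:

```python
# Sketch: word-frequency view of reported tweets inside a time window.
# Tweet contents, timestamps, and structure are invented for illustration.
from collections import Counter
from datetime import datetime, timedelta

reported = [
    (datetime(2013, 7, 28, 10, 0), "you are awful awful"),
    (datetime(2013, 7, 28, 10, 5), "awful person leave now"),
    (datetime(2013, 7, 29, 9, 0), "unrelated old tweet"),
]

def word_frequencies(tweets, start, end):
    """Count words in reported tweets falling inside [start, end)."""
    counts = Counter()
    for ts, text in tweets:
        if start <= ts < end:
            counts.update(text.lower().split())
    return counts

window_start = datetime(2013, 7, 28, 9, 0)
freqs = word_frequencies(reported, window_start,
                         window_start + timedelta(hours=2))
```

Even this crude view surfaces the pattern: within the two-hour window, one word dominates and the unrelated tweet is filtered out, which is the kind of at-a-glance signal the support person needs before any graph rendering.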
Rank tickets in an automated way and relate to other abuse tickets. The time of the abuse team is limited, but the worst events are flash mobs. Make it easy to see when network-related abuse events are occurring by showing and linking abuse reports in the graph visualization above. Identify cliques implicated in abuse events, in the social and graph-theoretic senses. Probably once an abuse mechanism is established, there will be events where both sides are reporting abuse: make it easy to see that. And yes, show when identified users are involved – but don’t ditch pseudonymity as an account option.
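One way to sketch the ranking-and-linking step. The ticket shape and the scoring rule (more reporters and more reported accounts suggest a flash-mob event) are assumptions of mine, not a known Twitter heuristic:

```python
# Sketch: rank open abuse tickets and link related ones via shared
# reported accounts. Ticket structure and scoring are hypothetical.
from itertools import combinations

tickets = {
    "T1": {"reported": {"troll_a", "troll_b"}, "reporters": {"u1", "u2", "u3"}},
    "T2": {"reported": {"troll_b", "troll_c"}, "reporters": {"u4"}},
    "T3": {"reported": {"other"},              "reporters": {"u5"}},
}

def score(ticket):
    # Crude proxy for a network-related event: many reporters,
    # many reported accounts.
    return len(ticket["reporters"]) + len(ticket["reported"])

def linked_tickets(tickets):
    """Pairs of tickets sharing at least one reported account."""
    links = []
    for a, b in combinations(sorted(tickets), 2):
        if tickets[a]["reported"] & tickets[b]["reported"]:
            links.append((a, b))
    return links

ranked = sorted(tickets, key=lambda t: score(tickets[t]), reverse=True)
links = linked_tickets(tickets)
```

Linking by shared reported accounts also exposes the both-sides-reporting case: two tickets pointing at each other's reporters would show up as a link in the same view.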
Allow action on a subgraph, slowdowns and freezes. Up until now we have just described read-only tools. Through the same graphical view, identify subgraphs to be acted on. Allow operators to enforce slowdowns in tweeting – the tweet is still sent, but after a number of minutes or hours. The advantage of being able to set a short delay, say one minute, is that it makes an ongoing investigation less obvious. A freeze is a halt on posting until further notice. The operator can choose to freeze or slow down any dimension of the graph – e.g. a hashtag, or all people who posted on that tag, or all people in a clique replying to certain users with a certain word. This is similar to a stock exchange trading halt. This has to be a manual action because it's based on human judgement and linguistic interpretation. Finally, allow account deletion, but not as a mass action.
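The slowdown/freeze mechanics can be sketched as a held-tweet queue. Everything here – class name, per-account delays, the release loop – is an assumed design, not anything Twitter is known to run:

```python
# Sketch of per-account slowdowns and freezes: tweets are accepted but
# held until their release time; frozen accounts are held indefinitely.
# Account names, delays, and the API shape are hypothetical.
import heapq

class ModerationQueue:
    def __init__(self):
        self.delays = {}     # account -> delay in seconds
        self.frozen = set()  # accounts halted until further notice
        self.held = []       # min-heap of (release_time, account, text)

    def slowdown(self, account, seconds):
        self.delays[account] = seconds

    def freeze(self, account):
        self.frozen.add(account)

    def submit(self, now, account, text):
        if account in self.frozen:
            return "held"    # halted until an operator lifts the freeze
        delay = self.delays.get(account, 0)
        if delay:
            heapq.heappush(self.held, (now + delay, account, text))
            return "delayed"
        return "published"

    def release_due(self, now):
        """Publish every held tweet whose release time has passed."""
        out = []
        while self.held and self.held[0][0] <= now:
            _, account, text = heapq.heappop(self.held)
            out.append((account, text))
        return out

q = ModerationQueue()
q.slowdown("mob_member", 60)   # one-minute delay: subtle, hard to notice
q.freeze("worst_offender")
```

The trading-halt analogy holds: submission never errors, the tweet is simply released later (or not at all), so the person under a one-minute slowdown has little to tip them off that an investigation is running.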
Capture and export all this data for use by a law enforcement agency you are willing to collaborate with.
Open the API and share at least some of the toolset source, so people can get perspective on the shape of an attack when it happens. And of course, don’t do all this at once – start with simple read-only monitoring and iterate rapidly. Remember that the system will be gamed. Keep the poets out of gaol.