Social media creepiness assessments

Email is one of the first social Internet technologies. Its vulnerabilities to spam and abuse are so well understood that this spam solutions form letter was developed to explain why a given anti-spam proposal would not work.

Email was designed to place almost all of the costs on the recipient: as soon as an email server replies 250 OK to a message, that server is on the hook for delivering it; the final email server has to store the message until it's read; and the recipient then has to read through it.
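
To make that cost asymmetry concrete, here's a toy sketch of what accepting a message commits a receiving server to. The class and method names are mine, purely for illustration, not any real mail server's API:

```python
# Toy model of SMTP's cost asymmetry: once the receiving server replies
# "250 OK", the sender's costs end and the recipient's costs begin.
# All names here are illustrative, not a real mail server's interface.

class ToyMailServer:
    def __init__(self):
        self.queue = []  # messages we are now obligated to store and deliver

    def receive(self, message):
        # In real SMTP, replying 250 after DATA means we now own delivery:
        # storing the message, retrying, and holding it until it's read.
        self.queue.append(message)
        return "250 OK"

server = ToyMailServer()
reply = server.receive("V1AGRA!!! (spam the sender paid ~nothing to send)")
print(reply)              # the sender's involvement ends here
print(len(server.queue))  # the recipient's storage burden begins here
```

The sender gets to walk away the moment that 250 comes back; everything after that point is the recipient's problem.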

Social media platforms have inherent design decisions that affect how much of the cost of a message is borne by the recipient. As examples:

  • Twitter: defaults to showing you comments from randos whom you have never said you want to interact with.
  • Facebook: profile pictures are often public and profiles must have real-ish names.
  • Instagram: offers only two levels of privacy: public and private.
  • Google+: you can see random people's comments on public posts. (I had to get this in there while Google+ technically still exists.)
  • Slack: follows the benevolent dictator model of IRC but without the benefit of ban-bots.
  • Discourse: a community forum tool with trust levels designed to reward positive contributions from members.

Notably, Slack and Discourse are examples of platforms with many independent instances. That is, if you're banned from one instance, you're banned only from that particular community, not from every instance everywhere.

I don't claim to be an expert on any of these tools; I'm just trying to list examples of how the tool's inherent design leads to certain privacy and cost trade-offs for its users.

All that said, I'd like to think about the tests we could develop to measure how open a given social media platform is to abuse. Here's what I've got so far.

The Questions

This isn't a complete list of questions, but I feel like it's an OK start and gives you a sense of what I'm talking about.

New accounts

If someone creates a new account on your service…

  1. …how easily can they impersonate another account already on that service?
  2. …how easily can they contact other people on that service, for example to send a private message?
  3. …how easily can they publicly interact, for example to comment on a public post and have other people see that comment?
  4. …for the above, how easily can they send multimedia (e.g. pictures) in addition to text?

Unsolicited communication from strangers

For accounts on your service interacting with people they are not "connected to" and have not otherwise indicated they know, …

  1. …how easily can they initiate communications on the first attempt?
  2. …does the recipient have to agree to receive the communication?
  3. …how easily can they initiate communications on subsequent attempts?
  4. …how easily can they be blocked by the recipient, so that they cannot interact with the recipient again?

Changing relationships

When two accounts are "connected to" one another on the service, …

  1. …how easily can one party disconnect from the other (e.g. by unfriending)?
  2. …how easily can one party silently disconnect?
  3. …how easily can one party filter what they share/how they communicate with the other party, e.g. to mark someone as a "close friend" or "acquaintance"?
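
One way to reason about these questions is to model connection state explicitly. Here's a minimal sketch; the states, levels, and names are my invention, not any platform's actual data model:

```python
# Sketch of per-connection audience levels, including a "silent" disconnect
# that never notifies the other party. Purely illustrative.

AUDIENCE_LEVELS = ("close_friend", "acquaintance", "muted")

class Connection:
    def __init__(self, other):
        self.other = other
        self.level = "acquaintance"
        self.active = True

    def set_level(self, level):
        # Filtering what you share is just a per-connection attribute...
        if level not in AUDIENCE_LEVELS:
            raise ValueError(f"unknown audience level: {level}")
        self.level = level

    def disconnect(self, silent=False):
        # ...and a silent disconnect simply stops sharing without telling
        # the other side, while a "loud" one might notify them.
        self.active = False
        return None if silent else f"notified {self.other}"

conn = Connection("alice")
conn.set_level("close_friend")
print(conn.disconnect(silent=True))  # None: alice gets no notification
```

Whether a platform exposes knobs like these at all, and how many clicks they cost, is exactly what questions 1–3 above are probing.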

Privacy from the public

For people without an account, …

  1. …what identifying information (such as real name or picture) can they gather from the platform?
  2. …what comments or other interactions can they see about accounts? ("Primary Colors" attack)
  3. …what account relationship information can they see?

Terms of service and enforcement

For people who have accounts on your service, …

  1. …how clear are your terms of service, notably with respect to privacy and trolling/abuse?
  2. …how are those terms of service enforced?
  3. …how much weight do other users' reports carry vs. the platform's own employees' judgments when reviewing potential terms of service violations?
  4. …how automated is enforcement, e.g. auto-bans or cool-off periods?