It’s time for social media interoperability

Social media is too important to be owned by a few private companies — or governments

Alex Van de Sande
16 min read · Aug 6, 2021

Social media is already an integral part of our society, but it's also ripping us apart. Unfortunately, both the diagnosis of what's wrong and the proposed fixes are increasingly divided along tribal lines, like every other issue debated on social media. Some "solutions" on the table might even worsen the problem by solidifying monopolies and giving them police powers.

This essay proposes to sidestep the issue by separating social media itself — the act of citizens forming communities to share ideas — from social media companies, via a common interoperable standard in which messages generated on one app can be transported and read in any other application, thereby breaking the monopoly on social media publication. Doing so would create a freer, more competitive market for the companies that provide access to it, allowing multiple implementations of feed sorting, content filtering, and contextualization.

The Problem

Internet issues should no longer be seen as purely digital problems; they are the traditional human rights translated to a new online medium. The right to talk to other people online encompasses many rights: free speech, peaceful assembly, freedom of religion, freedom of the press, the right to privacy from one's government, and the right to one's tradition, language, and culture. These are all ideals with a long history as essential pillars of our societies. Now that our world is moving more and more into the digital realm, activism needs to push hard to maintain these freedoms.

On the other hand, these universal rights need to be balanced against newer challenges. How can we combat extremist content without falling into the pitfalls of censorship (whether by states or private industry)? How can we contextualize the information we see without empowering companies as the "guardians of truth"? Is a lie always more viral than the truth, or does it depend on the medium?

Falsehood flies, and the Truth comes limping after it — Jonathan Swift

How NOT to fix it

Popular pressure on politicians can lead to new laws that superficially seem to tackle the problem while creating more profound issues. A good example is legislation that requires social media companies to use filtering algorithms to identify and root out terrorist content, graphic violence, and copyright infringement. Politicians tend to propose goal-oriented laws because it allows them to show their constituents that they are fighting the specific issues that matter to their electors ("Facebook has a problem with A, so we will pass a new law forcing them to do B"). But these new laws can create second-order effects that make the problem much worse:

It creates slippery slopes: the definitions of pornography, violent content, and hate speech are subjective and vary across jurisdictions and cultures. Even restrictions that seem straightforward, like "no nudity," are more challenging in practice. Facebook learned this early on when nursing and breastmilk advocacy groups complained that breastfeeding photos were being removed as "porn." It started fine-tuning the rules but kept finding edge cases, like "adult erotic breastfeeding" (porn) and "human-animal breastfeeding" (a non-erotic cultural practice in some drought-affected rural areas). How can you create a rule for "no underage nudity" that doesn't accidentally also block historically significant war photos or photos of Yanomami children playing? It's a fool's errand to try to develop a universal set of machine-enforceable rules for all contexts.

It creates collateral damage: even the most atrocious content needs to be kept in some contexts. For instance, when YouTube deleted ISIS propaganda videos, it also unwittingly deleted war-crime evidence needed to prosecute captured ISIS fighters. While the Mexican government was trying to cover up a spike in cartel crimes, citizen journalists shared execution videos as proof to the contrary. Suppose someone in power says something that could be considered hate speech. Should the content be deleted to avoid further harm, or kept as a public and historical record?

It deputizes private companies as extensions of law enforcement: when your video is taken off YouTube for copyright violation (even if it's your own work or a performance of a public-domain song), your only recourse is YouTube's own "court system," an opaque process accountable to no one but Google management. For most people, especially non-American citizens, being unfairly accused means having very little recourse. This might feel entirely inconsequential while we are merely talking about a simple music performance. Still, like everything else in society, as more of our lives become digital, these private courts will unavoidably take on bigger and bigger responsibilities. Newer legislation that forces social media companies to combat terrorist content (however that is defined) will inevitably make the tech giants an extension of law enforcement. This symbiotic relationship usually goes both ways: as governments ask private companies to surveil their users to enforce specific laws, those tools typically make their way back into the state surveillance apparatus.

It creates a barrier to entry for new startups and enshrines monopolies: if every website that holds user content must hire an army of moderators or train a complex AI filter, that sets a floor on how small a new social app can be. The more stringent the rules for social media companies, the harder it is to innovate and create new apps that could dethrone the big tech giants. We often accept tech giants' inevitability, but this flies in the face of their own (relatively short) history. Facebook and Google didn't exist 20 years ago; they reached success by launching a slightly better version of something that already existed (Myspace, Altavista, etc.). Regulation that puts too heavy a burden on smaller companies might ensure the incumbents are never dethroned.

The Right to Speech vs Right to Reach

“Street preacher” by TimWilson is licensed under CC BY 2.0

The right of a private business to ban certain types of speech or people from its platforms is predicated on the fact that it is not a government agency and that these people still have the right to speak on other platforms they create themselves. But that justification weakens as the walls between governments and private corporations grow blurrier and monopolies threaten to remove every other option for public spaces.

In short: Twitter should have the right to stop a given politician from posting on its website, but not from shouting outside their own window or on a street corner.

Social media posts are a natural evolution of websites and blogs, a slow 20-year tale in which these tools became easier to use and reached more people. In the late '90s, having a website meant building everything on your own. By the mid-2000s, blogs simplified this by making it easy to post updates to your own website, which could be read by any application that supported RSS. But by the mid-2010s, most social media apps were all-in-one: you downloaded a single app that let you both read other people's posts and write to the application's website.

We gained users, but lost control over our own content.

Wait, don’t we have social media interoperability already?

Not this kind of interoperability

No. There are, of course, many ways in which social media posts already break the limits of their walled gardens: official methods, like media embeds, and user-generated approaches, usually screenshots or copy-pasting content from one site to another. Both have their particular problems, but the sheer amount of content from one social media site found on another via either method is excellent proof of how much user demand exists for such a capability.

The problem with the first approach is that it depends heavily on what the provider wants to do with it. If politicians post something that years later they feel might embarrass them, they can delete it, and this removes the content from every site that embedded it. On the other hand, when things are posted via screenshots or simply pasted, any authenticity is stripped away. Is that a fake parody quote, or did that person really say it? Is this a quote from someone else, now misattributed to another person for more viral impact?

Reframing the problem

We started by talking about hate speech and the future of online democracy, so the "chain of custody" of social media posts may seem a relatively innocuous and trivial issue. But bear with me as we rephrase the problem, and you'll see how such a small technicality can have huge implications for the future of online speech (and, therefore, all speech). The issue at hand is:

How can we allow multiple parties to archive social media posts while guaranteeing their authenticity?

Once we reframe the question like this, it becomes evident that there is a solution you have been using in your life for many, many years: cryptographically signing messages. Before installing a new software update, how does your device verify that nothing went wrong downloading it? Your phone comes out of the factory with some known hardcoded keys, and before doing any update, it checks the new software's signature against the ones it knows from the software makers. When you go to a website and it shows a "secure connection"? Your browser checks the site's certificate against the keys of known certificate authorities. Signatures are also used in mission-critical financial transactions, both in banks and in cryptocurrencies. When a bitcoin node operator receives an order to move a million dollars from one account to another, it doesn't care how that request arrived, how many people handled it, or which software version generated it: all it needs to do is verify that the message's signature matches the known public key of the account that holds the funds. In fact, in the cryptocurrency world, that's the only thing it needs to know about that account!
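To make that concrete, here is a minimal sketch of the primitive in Python, using the PyNaCl library's Ed25519 signatures (the library choice, key handling, and variable names are my own illustration, not part of any proposed standard):

    # Minimal sign/verify sketch using PyNaCl (pip install pynacl).
    # Any standard signature scheme would serve the same purpose.
    from nacl.signing import SigningKey
    from nacl.exceptions import BadSignatureError

    # The author generates a key pair once; only the public half is shared.
    signing_key = SigningKey.generate()
    verify_key = signing_key.verify_key

    post = b"Falsehood flies, and the Truth comes limping after it"
    signed = signing_key.sign(post)  # the post plus a 64-byte signature

    # Anyone holding only the public key can check authenticity,
    # no matter how the message reached them.
    try:
        verify_key.verify(signed)
        print("Authentic: this key really signed this post.")
    except BadSignatureError:
        print("Forged or tampered with.")

Note that the verification step needs no network call at all: the signed post could have arrived by email, by forward, or on a stranger's USB stick, and the check works exactly the same.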

“Wind-up birds illustration” by Danny PiG is licensed under CC BY-SA 2.0

Cryptographically verified messages: the new RSS

So what exactly would the future of social media look like? Not that different from the past. Similar to the RSS protocol, every social media site would publish, for each user, a file with the content of their latest posts. But it would have some significant differences:

  1. Beyond the usual metadata, each post would come with two extra fields: a cryptographic signature of the post and the public key that signed it
  2. Publishers would also make available the history of all public keys associated with every account and republish it periodically as new keys were added and old ones revoked
  3. These messages would not be hosted only in a single file on the publisher's site but also posted to an open distributed network on which anyone could set up new listeners or publishers (a sketch of one such signed entry follows this list)
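Purely as an illustration (none of these field names come from a real standard), a single signed feed entry could be as small as this Python sketch, reusing PyNaCl from before:

    # Hypothetical shape of one signed feed entry; every field name
    # here is an illustrative assumption, not a proposed standard.
    import json
    from nacl.signing import SigningKey

    signing_key = SigningKey.generate()

    body = {
        "author": "@bobwy1997",
        "posted_at": "2021-08-06T12:00:00Z",
        "text": "gm, from whichever client can read this feed",
    }
    payload = json.dumps(body, sort_keys=True).encode()

    entry = {
        **body,
        # The two extra fields from item 1 above:
        "signature": signing_key.sign(payload).signature.hex(),
        "public_key": signing_key.verify_key.encode().hex(),
    }
    print(json.dumps(entry, indent=2))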

Let’s unpack that.

A signature associated with each media post: this would guarantee the authenticity of each message. More importantly, it would let you separate how you received a message from whether it is genuine. These posts could travel via multiple websites, email, or messaging-app forwards — you would not need to go to a third party to check their authenticity. Of course, this doesn't judge the veracity of what is being said, just that you can verify who said it, directly from your device, without checking with a third party. It's more than "you don't need to check Twitter to see if it was posted there"; it's "it doesn't even matter IF it came from Twitter or Facebook, you know it came from this person."

This already exists elsewhere: when you send a transaction on Bitcoin or Ethereum, there are popular sites on which, given a hash, you can check information about it. But those sites are not the sources of the information, just popular archives of it. If one of them goes away or deletes the data, you can verify the same authenticity elsewhere.

Publishers would also periodically publish a list of active and revoked keys for accounts. Social media publishers' critical role would still be to verify the account itself and link it to a meatspace persona. These publishers wouldn't hold the keys themselves but would have a standard process for verifying that a key belongs to a user, using passwords, emails, or other methods, just as they already do when authenticating a user on their website. The process of authenticating a new key should be open to any app and would only tie a public key to an account on that website, not necessarily outside of it. This isn't a process of verifying whether the account holds a government ID or controls the brand rights to the name it uses (though websites could offer those services). It's also cross-compatible, so you could use the same device key to validate your account on Facebook, Twitter, and YouTube alike.

Validation can range from the most basic "the user BobWY1997 has these three keys currently associated with him," to "we verified that these keys belong to the same owner associated with the email jennytut@gmail.com and the number +(201) 867–5309," to the more advanced "this is the official account for the White House spokespersons." These lists would be published and updated periodically: keys could be revoked and replaced as users changed their devices or new presidents were elected.
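A publisher's key list could be as simple as a small, periodically updated document like the one sketched below (again, every field name is a made-up illustration, not a standard):

    # Hypothetical key-history record a publisher might serve for one
    # account. All field names are illustrative assumptions.
    key_history = {
        "account": "BobWY1997",
        "verified_by": "example-social.com",
        "updated_at": "2021-08-06T12:00:00Z",
        "keys": [
            {"public_key": "ab12...", "status": "active",
             "added_at": "2020-01-15"},
            {"public_key": "cd34...", "status": "revoked",
             "added_at": "2018-03-02", "revoked_at": "2020-01-15"},
        ],
    }

    def acceptable_keys(history):
        # Keys a client should currently accept signatures from.
        return [k["public_key"] for k in history["keys"]
                if k["status"] == "active"]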

Messages would be published to an open, permissionless network: while RSS provided a convenient way to get the contents posted to a blog, you still needed to host your own feed, meaning that if your site went down, so did the content. In this scenario, all messages would be relayed through one or more networks that rebroadcast messages to their peers, similar to email servers. Not all nodes need to be equal, and none has any obligation to carry, archive, and rebroadcast messages it isn't interested in, which could open opportunities for niche services for a geographical area or a given interest.
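The relaying logic itself can stay trivially simple: a node only needs to check a signature before passing a message along, roughly as in this sketch (peer discovery and transport are hand-waved away, and the peer.send call is hypothetical):

    # Sketch of a relay node's core behavior: verify, filter, rebroadcast.
    from nacl.signing import VerifyKey
    from nacl.exceptions import BadSignatureError

    def relay(entry, payload, peers, interested):
        # Rebroadcast a feed entry if its signature checks out and this
        # node cares about it (say, a niche topic or geographic area).
        key = VerifyKey(bytes.fromhex(entry["public_key"]))
        try:
            key.verify(payload, bytes.fromhex(entry["signature"]))
        except BadSignatureError:
            return  # drop forged or tampered messages
        if interested(entry):
            for peer in peers:
                peer.send(entry)  # hypothetical transport API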

Is that this blockchain stuff again?

No! While P2P networks and signed transactions are widely used (and necessary) for cryptocurrencies and other blockchain-based technology, blockchains themselves are not needed for this use case. A blockchain is a technology whose primary goal is to reach global consensus on the exact ordering of messages. If messages A and B are broadcast within a few milliseconds of each other on different sides of the world, each witness in the network might receive them in a different order. While that difference can determine who gets some extra millions of dollars in machine-run financial applications, it's mostly irrelevant when we are talking about media meant to be consumed by humans. There are edge cases in which a user might want to tie their message to a blockchain (to prove they posted it on a given date), but for the most part, this proposal relies on cryptography, not cryptocurrency.

How would that fix things?

It won’t fix everything, but it can make a big difference in how competition works. In practice, this would make social media an internet primitive, a shared public good that is not owned by any single company. It would also allow a vibrant, competitive market for social media app makers.

Disinformation and fake news: no single app would control how your feed is displayed and filtered, meaning any enterprising developer could build their own filters to spot inauthentic content and prevent the spread of misinformation.

Advertisement abuse: with the ad market fragmented, no single company could abuse ads to target minorities or spread misinformation. If you don't like the ads you see, or feel your feed is being used to influence your habits, just download another client. Your friends and content will port over!

Bias in media: On the other hand, this also means that if you fear that your app is biased towards a political cause you disagree with or is unfairly editing or even censoring some voices you support, switch your app!

State censorship: This would also make State censorship much harder since even if they block some specific apps, a user can use an unofficial app to get to the same content.

“bird cages” by HeedingtheMuses is licensed under CC BY-NC-ND 2.0

Beyond Social media

These examples are all about social media, our current usage of Twitter, Facebook, etc., but the technology could extend to many other areas of the internet.

Portable user reviews: many things make a few marketplaces near-monopolies, and one of these lock-in mechanisms is user product reviews. If you are an Amazon reseller who feels Amazon is eating your profits, you can sell your goods elsewhere, but you will lose all your user ratings and reviews. If these were all made as compatible, portable, signed records, sellers could take their reputation with them wherever they went, encouraging better marketplaces.
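Under the same assumptions as the earlier sketches, a portable review would be just one more signed record, for example:

    # Hypothetical portable review: the same signed-record pattern,
    # so reputation can move between marketplaces. Names illustrative.
    import json
    from nacl.signing import SigningKey

    buyer_key = SigningKey.generate()
    review = {
        "seller": "acme-resales",
        "product": "hand-carved birdhouse",
        "rating": 5,
        "text": "Arrived early, exactly as described.",
    }
    payload = json.dumps(review, sort_keys=True).encode()
    signed_review = {
        **review,
        "signature": buyer_key.sign(payload).signature.hex(),
        "public_key": buyer_key.verify_key.encode().hex(),
    }

The same pattern covers the news-article bundles described next: a whole article packed with its signature loads instantly and still proves who published it.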

Instant news articles: one severely underestimated factor in the dissemination of false information is that when an item is shared in a messaging or social media app, the article's headline and main image load right in the client, no action necessary. If someone wants to read the content for more details or context, the site often takes a long time to load (usually due to a pile of third-party user-tracking code), is closed behind a paywall, or provides very little extra content. This makes clickbait headlines go viral even when the article's content is not that enraging or absurd. If whole pieces were packed into signed bundles, they would load as fast as an image while retaining proof that they were written by an entity you trust.

Wouldn’t that make banning hate groups impossible?

The dark side of the coin that makes it impossible for the Chinese government to censor conversations among Uyghurs, or for Russia to ban groups that promote LGBT rights, is that it would also make it harder for Western governments to combat groups that disseminate neo-nazi propaganda, racist and genocidal ideas, and other hate speech. But an argument can be made that the real issue in our current society is not that these ideas are new (they existed long before); it's that they are now mainstream.

Before the internet, you could always print your letter supporting scientific racism or promoting your local cross-burning KKK group, but you would never get that letter published in the NYT or on TV (unless you threatened to bomb people via the post). Even before social media, sites that promoted despicable ideas often had a hard time finding providers to host them once they got enough attention.

But when half a dozen websites host a billion people's content, hate groups can not only flourish; they can also use the platforms' own internal marketing features to promote their groups to other like-minded antisemitic conspiracy fans. YouTube and Facebook have been called radicalization machines, and their defense is that they have too much content to moderate it all.

If they are too big to moderate their content, then maybe they are too big to exist. In a world where social media content is interoperable and can be posted across networks, any app is free to use its own filters to decide what goes in and what goes out, what is acceptable and what is not. If a significant company finds your content unacceptable, you can always move to a smaller, niche platform. If you are banned from even those smaller niches, you can host the content and handle distribution yourself. And if you can't do that, then maybe your ideas deserve to be forgotten. As in a working free society, harmful thoughts are not mandated away; instead, they slowly die out as individuals force them off the main streets and back into the gutter, where they can be collected by researchers working on vaccines that require gut bacteria samples.

Nice idea, but how do you expect it to happen?

Don't underestimate the power of the web to change itself; it has been doing so nonstop for many years. This goal can be achieved in several complementary ways: voluntary cooperation, adversarial interoperability, or governmental mandate.

Voluntary cooperation: of course, Facebook has no reason to voluntarily break itself up and allow the most valuable part of its business, the social graph, to be taken by anyone who wants it. But while Facebook is the biggest, there are many competitors, and more are popping up every day. If Twitter and a few smaller sites voluntarily supported such standards, they could benefit from a range of new clients developed by third parties, which could force the rest of the market to adopt them.

Adversarial interoperability: not all standards emerge from cooperation. During the browser wars, browser vendors would often reverse-engineer innovations created in Internet Explorer that were meant to lock users in, and make them available on their own platforms. OpenOffice spent an enormous amount of energy learning to correctly read a closed-source office document format to make sure it was the most compatible tool. These are two examples in which a monopolistic power (Microsoft) ended up ceding and adopting an open standard.

Government mandate: there are many proposals to regulate social media. As discussed previously, many of them might make the problem worse by creating such a burden of requirements for hosting user-generated content that only big players can meet it. But suppose there is an emerging compatibility standard for social media that can break the current monopolies. In that case, governments worldwide can point to it and say, "follow this industry standard, or you'll force us to create our own."

"Via Appiah Antica" by eriktorner, CC BY-NC-SA 2.0

Communities that last centuries

Today, you can still walk on roads that were laid thousands of years ago. But you can also visit places that existed for a long time and were suddenly dismantled in a short period: communities where people were born, lived, and fell in love, which disappeared due to a single decision from a board somewhere or a sudden market movement.

The difference in a structure that lasts centuries is whether the people who live in that community can maintain, repair, clean, and own it, long after those who originally built it are gone.

Our digital communities are where relationships are created, fights are fought, ideas are debated, new births are announced, deaths are mourned. And yet, they can all be deleted or disappear in an instant, with the unplugging of a server.

We can build digital communities so they outlast their builders. So we must.
