Can we can Spam?

Status Messenger has progressed from version 1.0 to 1.7 over the last seven months, narrowing the feature-parity gap with leading messengers while taking a privacy-centric approach and bundling a secure crypto wallet and a web3 browser.

As we roll out new features and attract new users to our platform, we see increasing engagement, some of which is challenging the current limitations of the product. One recent incident was a spamming attempt on some of our public chat channels, as announced in this post, which outlined the high-level problem, the challenges and the anticipated mitigation timelines.

This deep-dive article begins with some historical context, which provides important perspective on what spam actually is. We then review spam mitigations on some relevant platforms, explain why addressing spam is challenging in the context of Status principles and architecture, describe our current countermeasures and finally categorize the various proposals discussed for future mitigations, along with some of their pros and cons.

https://twitter.com/VitalikButerin/status/1303140765275623424

Spam is not a new phenomenon. For as long as we have communicated with each other, there have been situations where someone found what was being said loud, repetitive, irrelevant, distracting, annoying, hurtful, hateful or even harmful, to the point where it perhaps prevented them from having their voice heard on that platform, affecting its perceived utility.

While we might have experienced an analog version of this in the real world, the problem of spam gets significantly amplified in the digital world where this spam behavior can be programmed, automated and executed at scale with minimal effort.

All of us using email are surely familiar with the concept of spam. Email clients even have a folder dedicated to it. Although awareness of email spam is most common today, spam is not limited to email: it is equally present on other communication platforms such as phone calls, instant messaging (where it is called spim), SMS, blogs, wikis and social media.

So let’s start by looking at the historical context of spam.

History & Context

The origin of the term “spam” in this context is attributed to a 1970s Monty Python skit about a restaurant that serves Spam, a canned meat product, in almost every dish. At many points in this skit, a group of actors sings loudly about Spam, drowning out other conversations. Users on early Internet forums apparently mimicked this by flooding those forums with the word “spam,” taking inspiration from the skit. Ever since, whenever someone floods a communications channel with unsolicited or unwanted messages, disrupting the channel's usability for other users, they are referred to as spammers and the content as spam.

Spam Predates the Internet

The first recorded instance of spam-like behavior apparently dates to May 1864, when some British politicians received an unsolicited telegram advertising a dentist. Until then, no one had used the telegraph to send unsolicited commercial advertisements, and doing so was not considered acceptable etiquette.

Gary Thuerk is alleged to have sent the first spam message, advertising a new DEC computer system to ARPANET users in 1978. In 1994, the lawyers Canter and Siegel were accused of spamming Usenet groups with advertisements for their immigration services, in what is popularly referred to as the “Green Card spam.”

Spam is Tricky to Define

While the origins of spam lie in commercial advertising, its intent quickly spread to non-commercial advertising (e.g. religious or political), distributing malware, phishing or simply disrupting a service (i.e. Denial-of-Service, or DoS). Spam spans the spectrum from subjective annoyance to objective threat, and it is ultimately predicated on intent.

The typical definitions of spam include phrases such as unsolicited, unwanted, undesirable, misleading or fraudulent, all of which are in some ways subjective characterizations made by the receiver and may differ greatly based on the receiver's context. The commercial, religious or political intent of the sender may also be subjectively contested. What if something were unsolicited but actually turned out to be desirable? How can we expect the sender to know what the receiver wants or desires? So, except in cases where malware or phishing is involved, defining spam objectively based on content is a challenge. In practice, whatever content a typical receiver may complain about is assumed to be spam.

Defining spam objectively based on the sender is equally hard. Email, phone, instant messengers and other communication technologies were meant to connect people, and hence their identifiers were advertised publicly on websites, business cards, directories etc. So does this mean that every unsolicited email, phone call or message from a hitherto unknown person is spam? Or does this have to be combined with the sender's perceived intent to somehow determine if it is spam? What about messages from a known person that are unsolicited and have unwanted or undesirable content? In practice, whichever sender a typical receiver may complain about is assumed to be a spammer.

Another classic definition of spam is unsolicited bulk messages, such as mass emails to thousands of recipients or messages posted on public groups/channels. This, however, doesn't address unsolicited 1:1 messages with the aforementioned spam intents.

Therefore, classifying a standalone message objectively as spam based on the content, sender, number of recipients or other characteristics is not straightforward. While some generic rules of thumb may work most of the time for certain categories of spam, there will be situations where some arbitration may be required.

Difficulties of accurately classifying spam aside, the impact of spam is real. Processing messages to identify and filter out spam consumes significant network bandwidth, compute resources and human judgement, leading to increased costs, lost productivity and potentially even censorship. This is considered a worthwhile trade-off compared to the alternative of end-users sifting through all the messages to get to the relevant ones.

To get a sense of the magnitude: spam emails accounted for 53.95 percent of email traffic in March 2020, according to one study. Twitter, which has nearly 300 million monthly active users, reportedly challenged more than 17 million potential spam accounts in December 2019.

Spam is an Asymmetric Challenge

If a spammer can easily and cheaply create accounts and send messages on communication platforms (i.e. a Sybil attack), the burden falls on the receiver. A spammer only needs a few of their messages to get the anticipated results (e.g. clicks, responses, purchases) to achieve the desired ROI, but the platform or its recipients have to block every such spam message from every spammer to ensure a spam-free experience. While the spammer's effort is easy, cheap and fast, spam resistance is very expensive: an asymmetric challenge.

While this continues to be true for email, other platforms such as instant messengers have started requiring the compulsory association of a unique phone number to create/use an account. The premise here is that acquiring a new phone number is much harder (it requires money, effort and, in certain places, Personally Identifiable Information), so most users will create only one account. This Sybil resistance makes it much harder for spammers to create hundreds of temporary accounts for spamming, with the risk of having many of those accounts blocked once they are classified as spam accounts. The trade-off is reduced openness and privacy, from requiring phone numbers or other personal information (as we discussed here), in exchange for improved spam mitigation.

An interesting attempt at reducing this asymmetry has been the exploration of using proof-of-work (PoW) at the sender’s end to introduce some computational cost to sending an email. The premise here is that while this computational work is insignificant for sending emails for an average user, it becomes computationally (and hence economically) significant for spammers sending thousands of emails. While PoW was adopted into Bitcoin, Ethereum and other blockchains for deciding which miner gets to append the next block and claim rewards, its origins (1, 2, 3) were in fact to prevent email spam.
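
To make the idea concrete, here is a minimal hashcash-style sketch in Go (our illustration, not code from any production system): the sender must find a nonce such that the hash of the message and nonce has a required number of leading zero bits. Producing such nonces in bulk is costly, while verifying one takes a single hash.

```go
package main

import (
	"crypto/sha256"
	"encoding/binary"
	"fmt"
	"math/bits"
)

// leadingZeroBits counts the leading zero bits of a hash.
func leadingZeroBits(h [32]byte) int {
	n := 0
	for _, b := range h {
		if b == 0 {
			n += 8
			continue
		}
		n += bits.LeadingZeros8(b)
		break
	}
	return n
}

// mintStamp searches for a nonce such that SHA-256(msg || nonce) has
// at least `difficulty` leading zero bits. On average this takes
// about 2^difficulty hash attempts, which is the sender's "postage".
func mintStamp(msg []byte, difficulty int) uint64 {
	buf := make([]byte, len(msg)+8)
	copy(buf, msg)
	for nonce := uint64(0); ; nonce++ {
		binary.BigEndian.PutUint64(buf[len(msg):], nonce)
		if leadingZeroBits(sha256.Sum256(buf)) >= difficulty {
			return nonce
		}
	}
}

func main() {
	// Cheap for one message, expensive for a million of them.
	fmt.Println("nonce:", mintStamp([]byte("hello"), 20))
}
```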

Spam on Other Platforms

Different platforms have different policies and measures to manage spam, using a combination of automated and manual techniques. None of them is foolproof, and a balanced approach needs to take into account both false positives (legitimate messages treated as spam) and false negatives (spam treated as legitimate messages). Let's take a look at spam mitigation techniques on some of the widely used platforms.

Email: As email continues to be a widely used communication medium for personal and business purposes, spam emails continue to target users with commercial advertisements, fraudulent propositions and phishing attempts, constituting more than 50 percent of all emails sent, as mentioned earlier.

Anti-spam techniques for email include:

  1. User-side measures such as responsible sharing of email addresses, address munging and reporting.
  2. Email-administrator measures such as authentication protocols (SPF, DKIM, DMARC) to prevent spoofed addresses, and various filtering techniques based on block lists and statistical rules/analysis applied to content.
  3. Email-sender measures such as CAPTCHAs to prevent bots, egress filtering, SMTP port 25 blocking and rate limiting.

Social Media: Facebook says: “Spam involves contacting people with unwanted content or requests. This includes sending bulk messages, excessively posting links or images to people's timelines and sending friend requests to people you don't know personally. Spam is sometimes spread through clicking on bad links or installing malicious software. On other occasions, scammers gain access to people's Facebook accounts, which are then used to send out spam” and claims to use different tools and dedicated teams to mitigate spam. Concerns here also extend to dealing with the adjacent and more serious problems of fake accounts, fake news and content moderation.

Twitter says: “Spam is a form of platform manipulation. We consider platform manipulation to be activity that is intended to negatively impact the experience of people on Twitter. This includes unsolicited or repeated actions. Spam can include malicious automation and other forms of platform manipulation such as fake accounts. Some examples of behaviours which would violate Twitter’s Rules against spam include: consistently Tweeting or DMing links only, without any commentary, posting duplicate or very similar content across multiple accounts, posting multiple, duplicate updates on one account, creating duplicate or very similar accounts; or creating fake accounts, impressions or account interactions (such as followers, Retweets, likes, etc.), posting multiple updates in an attempt to manipulate or undermine Twitter trends, sending large numbers of unsolicited replies or mentions, purchasing or attempting to artificially inflate account interactions (such as followers, Retweets, likes, etc.), using or promoting third-party services or apps that claim to get you more followers, Retweets, or likes; or that claim to be able to get topics to trend” and claims to use custom-built tools and processes such as phone verification and reCAPTCHAs to mitigate spam. Their transparency report has more details.

Instant Messengers: Telegram says: “For private messages, it really doesn't matter what you send, as long as the receivers find it unwelcome. It could have been a photo, an invite link or a simple ‘hello’. Please only send messages when you are sure people won't mind getting them. As a general rule, people do mind getting unsolicited advertisements, links, invite links to groups or channels, random photos and, above all, anything related to commerce or online popularity. If you send them something like this, you will be blocked — and everybody else will be happy. When users press the ‘Report spam’ button in a chat, they forward these messages to our team of moderators for review. If the moderators decide that the messages deserved this, the account becomes limited temporarily. This means that if you have been sending unwanted messages to random strangers or posting spam in groups, you lose the ability to do so.”

Signal, one of the most widely used privacy-centric messengers, doesn't appear to have such a specific spam description or policy, but continues to add privacy-enhancing features such as message requests, which make it harder to spam on its platform.

Conclusion: Every platform is slightly different in how it allows its users to discover, connect and communicate with each other (1:1, private groups, public groups), the permissions required to do so, the content that is generally permissible, the content and context that define spam, the combination of manual and automated spam mitigation tools and techniques, and the consequences for user accounts identified as spammers.

What's common is that spam is a prevalent and growing problem on all platforms. Both the platform and its users assume shared responsibility for identifying and defending against it. Given the subjectivity in even defining spam, both automated techniques and manual interventions fall short. There is no technological silver bullet, period. Even legally, spam legislation (e.g. the CAN-SPAM Act of the United States) varies greatly across countries and doesn't appear to have had much impact.

But fundamentally, the lower the friction required to create accounts on a platform, the fewer the permissions required to communicate with other users, and the lighter the moderation of accounts and content by the platform provider, the greater the possibility of spam on that platform.

Challenges of Handling Spam on Status Messenger

The foundational aspects of spam get significantly amplified on Status Messenger, where we aim to provide the most privacy-centric messenger platform, built using open-source, decentralized, peer-to-peer (P2P) protocols, with the ultimate goals of surveillance-free and censorship-resistant communication. Most of the spam mitigation techniques discussed above may be perceived to be in direct conflict with Status principles.

Chat Identifiers: As justified in our earlier article, ultimate privacy is the reason Status Messenger relies on a randomly generated cryptographic key pair as the user's identity. A user on Status is identified only by their chat key — a 65-byte uncompressed ECDSA secp256k1 public key. Onboarding does not require any personally identifiable information (PII) such as a name, email address or phone number.

As Status identities are easily generated without any connection to the user's real identity, they provide a high level of pseudonymity. The user is pseudonymous as far as Status is concerned and can generate as many chat keys as desired. This means that blocking spammers by their chat keys (e.g. by recipients) will not be effective for long, because spammers can generate new ones quickly and with little effort. Sybil attacks are easy.

Open Source: The entire software stack of the messenger (as described in this article), including the client and the P2P protocols, is open source. This means that, unlike other platforms, we cannot deploy secretive or proprietary spam-filtering rules or algorithms on our messenger. There is no `security by obscurity` on our platform, by design. It is a level playing field for everyone, including spammers.

P2P Network: As described in our earlier article, unlike the client-server paradigm — used by popular messengers including Signal and Telegram — where clients communicate via servers, Status Messenger is architected as a P2P network, which has no concept of servers and clients. The motivation is the removal of single points of failure (the servers) and increased resistance to censorship via policies implemented at servers or the banning of servers altogether.

Every node is a peer. Messages don't take just two hops — from the source client to the server and then to the destination client — as in client-server architectures. They instead hop across multiple peers and continue hopping even after reaching their intended recipient, because peers do not know who the intended recipient is. The motivation here is to provide privacy at the transport layer, achieved via Waku topics, where nodes can achieve plausible deniability of having received messages meant for them.

What this means is that any spam policy will have to be implemented in the messenger clients (at the senders and receivers) or at the relay and history nodes as part of the P2P protocols. By design, there are no centralized so-called “choke points” where spam may be identified or controlled. Therefore, conventional spam mitigation techniques relying on message content or metadata based filtering are not applicable here.

Decentralization: While the P2P network helps us achieve architectural decentralization, the goal is to also achieve political decentralization. As Vitalik breaks down these concepts in his article on “The Meaning of Decentralization,” political decentralization refers to “how many individuals or organizations ultimately control the computers that the system is made up of?”

For now, Status develops the majority of the software (while encouraging external contributions) and maintains the infrastructure underlying Status Messenger. Aligned with our stated principles of Transparency, Openness, Decentralization, Inclusivity and Continuance, we envision a state where Status is only one of many contributors and maintainers of the software and infrastructure, with significant participation from others.

What this permissionless participation entails in the spam context is that Status cannot be the only entity entrusted with the responsibility of defining, monitoring and mitigating spam. Going forward, it will be a collective responsibility of all the users and maintainers of the platform.

Censorship Resistance: The fundamental question here is whether spam is an expression of free speech and, if so, whether any effort to identify and prevent it is censorship. As discussed earlier, what constitutes spam spans a broad spectrum of subjective criteria (except perhaps messages carrying malware, which are generally agreed upon in security circles) and spills over into content moderation in many scenarios.

Given this and our stated principle of Censorship Resistance: “We enable free flow of information. No content is under surveillance. We abide by the cryptoeconomic design principle of censorship resistance. Even stronger, Status is an agnostic platform for information.”, spam proves to be a very tricky ethical challenge.

One perspective to consider: if the so-called spammer is not prevented from drowning recipients in a flood of spam, does that amount to the platform effectively censoring the recipients, who may no longer be able or willing to use the platform? In such a scenario, should the platform empower all users (via capabilities on the platform) to deal with spam themselves, or should it mediate/arbitrate on behalf of some of them?

Status believes in empowering users (and maintainers) of the platform with core capabilities across the layers of the stack (i.e. application to protocol) to mitigate spam, without being the metaphorical man-in-the-middle. Users may opt in to any of them and may even add new capabilities themselves. Markets and communities may form around such capabilities. Different communities on the platform may build and adopt different sets of capabilities as suits their goals and members, and these may evolve over time.

Aligned with our stated principle of Liberty: “We believe in the sovereignty of individuals. As a platform that stands for the cause of personal liberty, we aim to maximize social, political, and economic freedoms. This includes being coercion-resistant.”, everyone should have the liberty to choose their platform features without being coerced.

While some of our short-term trade-offs might appear to contradict some of these stances, this is where we’re headed in the longer-term.

Current Spam Mitigations

Status Messenger implements three variants of chat: 1:1 chats for chatting individually with another user, group chats for chatting privately with multiple users on an invite-only basis, and public chats where anyone is free to join a topic channel. We currently implement the following four spam mitigations:

1. Proof-of-Work (PoW): Status inherits PoW from the Whisper implementation, which says:

The purpose of PoW is spam prevention, and also reducing the burden on the network. The cost of computing PoW can be regarded as the price you pay for allocated resources if you want the network to store your message for a specific time (TTL). 

After creating the Envelope, its Nonce should be repeatedly incremented, and then its hash should be calculated.
...
PoW is defined as average number of iterations, required to find the current BestBit (the number of leading zero bits in the hash), divided by message size and Time-to-Live (TTL): PoW = (2^BestBit) / (size * TTL).
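
Translating that quoted definition into code, any node can verify an envelope's PoW with a single hash. The sketch below is ours, not the actual go-ethereum implementation; it computes the PoW value from the envelope hash, size and TTL:

```go
package main

import (
	"fmt"
	"math"
	"math/bits"
)

// bestBit returns the number of leading zero bits in the envelope
// hash, i.e. Whisper's BestBit.
func bestBit(h [32]byte) int {
	n := 0
	for _, b := range h {
		if b == 0 {
			n += 8
			continue
		}
		n += bits.LeadingZeros8(b)
		break
	}
	return n
}

// powValue implements PoW = 2^BestBit / (size * TTL): size is the
// envelope size in bytes and ttl its time-to-live in seconds.
func powValue(hash [32]byte, size, ttl uint64) float64 {
	return math.Pow(2, float64(bestBit(hash))) / float64(size*ttl)
}

func main() {
	var h [32]byte
	h[0], h[1] = 0x00, 0x1f // example hash with 11 leading zero bits
	// A relaying node would drop the envelope if this value fell
	// below its configured minimum PoW.
	fmt.Printf("PoW = %.6f\n", powValue(h, 512, 50))
}
```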

Status Messenger is currently aimed at mobile platforms, where performing a meaningful amount of PoW (from spam's perspective) leads to increased CPU usage and battery drain, which significantly affect the UX. So we've currently configured PoW to a very low value, which doesn't impact the UX but also doesn't meaningfully protect against spam.

We are debating whether this is directionally an effective mitigation for spim (spam in instant messaging): by its very nature, instant messaging happens in bursts, and forcing a delay between messages (because of PoW) may significantly degrade the UX to the point where it's unusable.

Furthermore, as we extend our messenger capabilities to desktop platforms, spammers can leverage the increased compute power of such devices to easily beat any PoW threshold. Depending on the magnitude of spammers' resolve and the extent of their compute (or economic) capability, PoW may add only a minor bump to their attacks while remaining a constant annoyance for all other platform users.

2. Rate Limiting: Limiting the rate at which messages can be sent is a commonly used technique in many networking contexts. Status Messenger currently implements rate limiting of messages at relay/history nodes based on the IP address and the peer identifier of the sender. We presently set this limit to five messages per second.
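
As an illustration of the idea (a sketch of a generic token bucket, not the actual status-go code), a relay node could enforce such a limit per (IP address, peer identifier) pair:

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// bucket tracks the remaining tokens for one sender.
type bucket struct {
	tokens float64
	last   time.Time
}

// RateLimiter enforces a per-sender message rate, keyed by the
// sender's IP address and peer identifier.
type RateLimiter struct {
	mu       sync.Mutex
	rate     float64 // tokens refilled per second
	capacity float64 // maximum burst size
	buckets  map[string]*bucket
}

func NewRateLimiter(rate, capacity float64) *RateLimiter {
	return &RateLimiter{rate: rate, capacity: capacity, buckets: make(map[string]*bucket)}
}

// Allow reports whether a message from (ip, peerID) is within the
// limit and, if so, consumes one token.
func (rl *RateLimiter) Allow(ip, peerID string) bool {
	rl.mu.Lock()
	defer rl.mu.Unlock()
	key := ip + "|" + peerID
	now := time.Now()
	b, ok := rl.buckets[key]
	if !ok {
		b = &bucket{tokens: rl.capacity, last: now}
		rl.buckets[key] = b
	}
	// Refill tokens for the time elapsed since the last message.
	b.tokens += now.Sub(b.last).Seconds() * rl.rate
	if b.tokens > rl.capacity {
		b.tokens = rl.capacity
	}
	b.last = now
	if b.tokens < 1 {
		return false // over the limit: drop or delay the message
	}
	b.tokens--
	return true
}

func main() {
	rl := NewRateLimiter(5, 5) // five messages per second, burst of five
	fmt.Println(rl.Allow("203.0.113.7", "peer-abc")) // true until the bucket drains
}
```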

We recognize that this limit is not low enough to prevent a spammer from flooding a public chat and causing a nuisance, and we might therefore apply a tighter limit in the near future. We may also consider rate limiting in the messenger client itself, which would reduce the flooding currently possible at relay/history nodes.

However, a determined spammer may change IP addresses and peer identifiers to circumvent this mitigation.

3. Drop messages with text longer than 4096 characters: We recently added a simple capability for clients to drop messages whose text is longer than 4096 characters, both when sending and receiving.

This does not prevent a spammer from using scripts (instead of the messenger app) to inject longer messages directly at the node/network level, which will still be received by the intended recipient nodes. However, it does prevent the recipient from having to see them in the app's interface.

So, while such filtering improves the UX in some cases, it does not deter a determined spammer from flooding network nodes and launching a DoS attack. The spammer may also use non-textual content such as emojis, images or audio messages for spamming to circumvent this measure.

4. Block user: Users have always had the capability to block (and unblock) specific users in their contacts or chats by their identifiers. This prevents new messages from blocked users from being shown and also deletes their messages from any of the previous chats across 1:1, group and public chats.

This is convenient for keeping specific users' messages out of the user interface, but it does not prevent blocked users from flooding the network or specific nodes, because the filtering happens at the application level. Blocked users may also very easily create new chat identifiers with their apps to circumvent such denylists.

Given the aforementioned long-term challenges of mitigating spam on Status Messenger, the short-term mitigations currently implemented are admittedly limited. We have thus far focused on prioritizing core messenger features and underlying protocols, with the goal of architecting a solid foundation while reducing the parity gap with other leading messengers.

Future Spam Mitigations

Status has long anticipated the problem of spam and has considered many potential anti-spam measures, both internally and in community discussions (1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12), spanning a wide range of categories. While a deeper-dive into individual proposals is outside the scope of this article, this section attempts to provide a summarized view, categorize these proposals and describe their pros and cons.

Approaching from the first principles of What/Who/Where/How, spam mitigations in Status Messenger may be considered under the following broad categories or dimensions. These are neither exhaustive nor mutually exclusive — mitigations may fall into two or more of them.

1. What: Based on message content

Content-based filtering may rely on keywords, patterns, length or other attributes of message content. These attributes may be pre-defined, configured by users/moderators/maintainers, or derived using machine learning (ML) models trained on historical data.
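
As a minimal sketch of the pre-defined/user-configured flavor (the rules below are made up for illustration, not a shipped feature), a client-side filter over keywords, patterns and length might look like this:

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// SpamFilter holds content rules that could be pre-defined or
// configured by users, moderators or maintainers.
type SpamFilter struct {
	keywords  []string         // case-insensitive substrings to flag
	patterns  []*regexp.Regexp // e.g. suspicious link shapes
	maxLength int              // drop overly long messages
}

// IsSpam applies the rules to a message body. In an E2EE P2P setting
// this can only run at the sender or receiver client; relay nodes
// cannot read the content.
func (f *SpamFilter) IsSpam(body string) bool {
	if len(body) > f.maxLength {
		return true
	}
	lower := strings.ToLower(body)
	for _, kw := range f.keywords {
		if strings.Contains(lower, kw) {
			return true
		}
	}
	for _, re := range f.patterns {
		if re.MatchString(body) {
			return true
		}
	}
	return false
}

func main() {
	f := &SpamFilter{
		keywords:  []string{"free tokens", "airdrop now"},
		patterns:  []*regexp.Regexp{regexp.MustCompile(`(?i)https?://\S+\.(tk|top)\b`)},
		maxLength: 4096, // mirrors the current client-side cut-off
	}
	fmt.Println(f.IsSpam("Claim your FREE TOKENS at http://scam.tk now!")) // true
}
```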

Pros:

  1. Familiar concept to understand and implement.
  2. Deployed by most platforms today using both manual and automated ML techniques.

Cons:

  1. In end-to-end-encrypted (E2EE) protocols, especially P2P, this can only be done at the sender or receiver clients. Nodes that relay messages have no visibility into the content they are relaying.
  2. If a sender is able to post messages to the network bypassing the client somehow, the burden falls on the receiver at which point both the network and receiver’s client have already processed the spam message.
  3. Spammers may circumvent such measures with the knowledge of filtering attributes in an open-source platform.
  4. Deep-learning models such as GPT-3 have recently demonstrated impressive capability to generate human-like text which makes spam-bot detection very challenging.
  5. Efficacy of ML techniques depends on the amount and quality of training data combined with feature selection challenges in supervised learning. Models trained on other platforms (e.g. email) might not readily apply to Status Messenger. Training data on Status will be limited to public chats or have to be generated locally by individual clients at the edges.

2. What: Based on message sender

Sender-based filtering avoids having to analyze content and depends on the sender being identifiable and accountable somehow. Mitigations in this category may be based on techniques that leverage chat keys or ENS names as identifiers, denylists/allowlists, associated reputations, rate limiting or token staking.
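
As a simple illustration of the denylist/allowlist idea (hypothetical types, not part of the Status protocol), a client could apply community-curated lists keyed by chat key, with the user's own allowlist taking precedence:

```go
package main

import "fmt"

// ChatKey stands in for a user's 65-byte uncompressed secp256k1
// public key, hex-encoded.
type ChatKey string

// SenderPolicy combines a personal allowlist with subscribed
// community denylists; the personal allowlist wins on conflict.
type SenderPolicy struct {
	allow map[ChatKey]bool // contacts the user always accepts
	deny  map[ChatKey]bool // union of subscribed community denylists
}

// Accept decides whether to show a message from the given sender.
func (p *SenderPolicy) Accept(sender ChatKey) bool {
	if p.allow[sender] {
		return true
	}
	return !p.deny[sender]
}

func main() {
	p := &SenderPolicy{
		allow: map[ChatKey]bool{"0x04aa...": true}, // hypothetical keys
		deny:  map[ChatKey]bool{"0x04bb...": true},
	}
	fmt.Println(p.Accept("0x04bb...")) // false: denied by a community list
}
```

Since spammers can rotate chat keys at will, such lists are meaningful mainly in combination with costlier identifiers such as ENS names or staked tokens, as the pros and cons below reflect.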

Pros:

  1. Avoids having to analyze any content.
  2. Community-generated denylists/allowlists harness collective wisdom of crowds.
  3. Use of ENS names (already an option on Status) takes time, effort and money (registration with ETH/SNT), so spammers cannot generate them as easily as key pairs if recipients or the platform mandate them in certain circumstances, e.g. public chats.
  4. Use of token staking (via SNT) is on the planned roadmap as Tribute-to-talk.
  5. Rate-limiting may be possible at the protocol level in a privacy-preserving manner and using token staking/slashing with the concept of semaphores.
  6. Allows imposing time constraints on the age of the sender's account within the network to prevent spammers/bots from spawning hundreds of accounts on-demand.

Cons:

  1. Chat keypair based approaches won’t be effective if spammers can generate and use them on-demand.
  2. Decentralized approaches to community-generated denylists/allowlists are effectively token-curated registries which have their challenges.
  3. Reputation-based approaches are susceptible to gaming and trading.
  4. Requires changes to protocol if filtering needs to be done by relay nodes.
  5. Use of ENS or staking introduces an economic barrier to participation and may require the use of Layer 2 scaling solutions to be feasible.

3. Who: Enforced by end-user, chat moderator (if present), network maintainer or protocol

Who decides what spam is, and which mitigations get enforced and how, is at least as important as the mitigations themselves in the context of a decentralized, censorship-resistant, self-sovereign communication layer.

Pros:

  1. Users will have a choice to opt-in to mitigations or protocol versions that suit their preferences.
  2. Allows moderated versions of public chat where membership and permissions (read-only or read/write) are determined by the moderators who themselves may be appointed using different mechanisms including staking, DAOs, etc.
  3. Allows those running Status nodes to enforce mitigations on their sub-networks that users may join based on their preferences.
  4. Allows versioning of protocols with different embedded spam mitigations.

Cons:

  1. Might lead to fragmentation of the Status Messenger UX.
  2. Adds complexity to the design and development of the protocol and products.
  3. “Who watches the watchmen” is an issue if enforcement is delegated to an intermediary such as a moderator, maintainer or protocol designer.

4. Where: Enforced at the end-points (i.e. sender/recipient) and/or at the intermediate P2P nodes (i.e. relay/history nodes)

Where in the P2P network spam mitigation is enforced determines its efficacy to a great extent.

Pros:

  1. Enforcement at the edges decentralizes decision-making while providing opt-in capabilities to end-users.
  2. Enforcement at the intermediate P2P nodes pushes decision-making upstream towards the spam source and limits impact on the network.
  3. Node reputation via peer scoring is already part of gossipsub v1.1 in libp2p, which is to be included in Waku v2.

Cons:

  1. Spammers are incentivized to bypass any sender-side enforcements.
  2. Receiver-side enforcements do not address spam impact on network traffic or receiver nodes i.e. DoS.
  3. Intermediate spam filtering only works in a P2P network if everyone does it. A single relay node dropping messages doesn’t necessarily prevent a message from being received.

5. Where: Enforced at different layers of the protocol stack: P2P Overlay (devp2p/libp2p), Transport Privacy (Whisper/Waku), Secure Transport, Data Sync and Data & Payloads (1:1 chat, group chat, public chat)

The layer of the protocol stack at which spam mitigation is enforced also impacts its efficacy and is closely related to the previous category. Mitigations in the lower P2P Overlay and Transport Privacy layers apply at all P2P nodes, while those in the topmost Data & Payloads layer apply only at the end-points.

The Pros and Cons here are similar to the previous category.

6. How: Enforced via conventional technical measures or cryptoeconomic aspects

While Status Messenger itself does not interface with the Ethereum blockchain today, the app bundles a crypto wallet and web3 browser that do. This lets us consider spam mitigations that can complement conventional technical measures with decentralized cryptoeconomic concepts such as Tribute-to-talk, token staking, DAO-based moderation, ENS names, etc.

Pros:

  1. Cryptoeconomic countermeasures have the potential to address the core asymmetry of spam by effectively enabling micropayments and penalties.
  2. Blockchains enable decentralized moderation and decision-making.

Cons:

  1. Conventional technical measures have generally been ineffective or subjective, required privacy-invading content inspection for moderation, or been centralized in policy/design, leading to forced enforcement, censorship and abuse of power.
  2. Cryptoeconomics could be a barrier to participation and may require the use of Layer 2 scaling solutions to be feasible.

Summary

Spam is neither new nor unique to Status Messenger. It is an inherent challenge on any communication platform but amplified significantly on open, permissionless and privacy-centric ones. Traditional anti-spam measures as implemented on popular messengers have fundamental conflicts with Status principles and its decentralized P2P architecture.

In this article, we provided historical context to spam and reviewed mitigation approaches on related platforms. We then described the challenges of addressing spam conventionally within the framework of Status principles. Finally, we described currently implemented short-term countermeasures and concluded by discussing categories of future proposals.

We anticipate implementing some of these proposals in upcoming releases and look forward to continued engagement with the community in addressing spam.

(Thanks to Hester Bruikman, Tobias Heldt, Corey Petty, Mamy Ratsimbazafy, Jacek Sieka, Oskar Thoren and Jonathan Zerah for reviewing drafts of this article and providing helpful feedback. Thanks to Alex Howell for the thoughtful illustration. And thanks to everyone at Status and from the community who contributed to all the ideas consolidated in this article.)
