Vitalik, Zoë and Glen just published a paper in which they propose a mechanism and philosophy for a self-organising decentralised ecosystem of public goods. It's a wild ride, and I'll go over the key points to encourage further discussion in Status.


Free(riding) Radicals

Basically, the question they're trying to answer is: "How do we fund public goods optimally?" This is notoriously hard to do in economic theory because of something called the "free-rider" or "public goods" problem, where people benefit more from the public goods they use than they contribute back to them.

Traditionally, this has been solved two ways: simple private contributory systems, which lead "to the under-provision of public goods that benefit many people because of the free-rider problem", or membership systems based on the idea of 1 person, 1 vote (1p1v) which "cannot reflect how important various goods are to individuals and will tend to suppress smaller organisations of great value."

The goal, therefore, is "to create a system that is as flexible and responsive as the market, but avoids free-rider problems".

The authors argue that individuals should "make public goods contributions to projects of value to them. The amount [then] received by the project is (proportional to) the square of the sum of the square roots of contributions received." Don't be fooled by the maths: the key idea in Liberal Radicalism is that, if individuals contribute to the public goods they use AND the "funding principle underlying the market" (i.e. all the square root stuff) is nonlinear, you can make sure that "small contributions are heavily subsidised (as these are the most likely to be distorted by free-riding incentives) while large ones are least subsidised, as these are more like private goods."
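In code, that funding principle is a one-liner. Here's a minimal Python sketch (the function name `lr_funding` is mine, not the paper's):

```python
import math

def lr_funding(contributions):
    """LR rule: a project receives the square of the sum of the
    square roots of the individual contributions made to it."""
    return sum(math.sqrt(c) for c in contributions) ** 2

# Ten people giving 1 unit each attract far more funding than one
# person giving 10 units - small contributions are heavily subsidised:
many_small = lr_funding([1] * 10)  # (10 * sqrt(1))^2 = 100
one_large = lr_funding([10])       # (sqrt(10))^2, roughly 10
```

Note how a lone contributor gets back essentially what they put in: for a single citizen, the square and the square root cancel, so the mechanism behaves like a private purchase.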

Put differently, Liberal Radicalism (LR) is a systematic design for what we might call "matching programs" like those for charitable donations made by employees, or New York City electoral campaigns, or even government tax deductions. These all "confer a greater degree of influence on individuals in determining ultimate funding allocations", but LR is a "coherent design that captures their central motivation in a mechanism that is (approximately) optimal from the perspective of economic theory."

The key takeaway here is that decisions about which public goods to fund, and how to fund them, are actually questions of optimal collective decision making. John Maynard Keynes probably sums up their overall project best:


Decisions, Decisions

Classical capitalism deals poorly with the public goods problem because "each individual, if she acts selfishly, only accounts for the benefits she receives and not the benefits to all other individuals, [so] funding levels will not scale with the number of individual beneficiaries as would be desirable." As above, private contributory systems and 1p1v have big limitations, are costly to set up, and tend to be quite fragile and/or easy to manipulate.

Recently, however, more practical mechanisms have emerged for near-optimal collective decision making. Groves and Ledyard (1977) and Hylland and Zeckhauser (1979) both suggested a quadratic mechanism for determining the level of continuous public goods, but their methods require either a centralised iterative process or depend heavily on an unrealistic assumption of complete information. However, the basic insight of quadratic pricing of collective choices reemerged in Weyl (2012)’s proposal for what he called “Quadratic Vote Buying”.

In particular, he proposed allowing individuals to buy votes, paying the square of the votes they buy. He argued, and Lalley and Weyl (2018) rigorously proved, that under standard assumptions (similar to those we use below) in large populations this leads to approximately optimal decisions on public goods.

This idea is broadly referred to as Quadratic Voting (QV) and is very interesting, but it "doesn't solve the problem of flexibility" which is also one of the limitations of the alternatives available to us in classical capitalism. To get that flexibility, we need some hardcore maths, so strap in...


Maths and Models

I don't want to rehash this too much: if you're interested, you can read §3 of the paper yourself. However, there are some assumptions I'd like to bring up for debate:

We assume all public goods generate independent value to citizens (no interactions across public goods) and that citizens have quasi-linear utility denominated in units of currency. Our interest here is in maximization of dollar-equivalent value rather than achieving an equitable distribution of value.

The idea of no interactions across public goods needs more justification in my opinion, as does quasi-linear utility (which becomes important later). Even so, maximising dollar-equivalent value rather than equitable distribution is interesting to discuss. As I argued in a tech talk on using bonding curves to rank DApps: "equitable distribution" or egalitarianism is a great success metric (precisely because there are so many different ways to measure it), but it is not a valid design principle.

Admittedly, I'm not really up to speed on the special case of Samuelson's analysis under quasi-linear utility, which is used, along with the choice of concave functions, to achieve some cool maths that is:

  1. Better than capitalism, which provably leads to the situation where, if "a large set of citizens benefit significantly from a public good, this will typically lead to severe underfunding. For example, if all individuals are homogeneous, this is equivalent to [...] setting the total marginal utility of the good to N times the level it should be at. When citizens have heterogeneous preferences, matters are even worse, at least from a distributive perspective: only the single citizen who cares most on the margin about the good has any influence on its provision. Matters are even more pessimistic if citizens can make negative contributions (privatize public goods), as then the lowest valuation citizen determines the provision level."

  2. More optimal than what they call "(1p1v) democracy", which funds public goods through taxes. Taxes, however, end up tracking the median contributor's preference rather than the mean, and the mean is more optimal because it better reflects aggregate welfare (intense minority preferences move the mean, but not the median).

Even if you think that democracy is the worst form of government, except for all the others, and that this proposal sounds far-fetched, they emphasise that:

"Bergstrom (1979a,b) argued, if there is some reasonable proxy for which individuals will benefit most from a good and we can tax them for it, 1p1v democracy may yield reasonable outcomes as everyone will then agree on whether a given good is desirable. But this begs the question: in this case any consensual mechanism will agree. Our goal is to find the appropriate funding level without assuming such prior centralised knowledge." (emphasis added)

We Are Designed By Our Designing

We then get into some Design and Analysis and the truly magical maths. Again, if interested, it's best to read this yourself. Exactly why is summed up neatly by the authors themselves:

"This result may seem somewhat magical, though less so for those familiar with the theory of QV. Drawing on a similar logic to QV, however, why this works out (and why, as we discuss in §5.4 below, other formulae would not), becomes clearer."

There are some real gems though, if you persevere, like "All must be treated interchangeably as we care only about aggregate welfare" - another interesting assumption with ethical implications worth discussing (idea jam, anyone?)

It does, however, allow the authors to achieve their goal, which is:

"to find a formula where a citizen’s influence is proportional to her marginal value for the public good. This will only occur if the marginal cost of influence is proportional to the citizen’s current influence, as the citizen will choose influence at the point where the marginal cost of additional influence is equal to the marginal benefit of influence (more of the public good)."

Woah, woah, woah! Slow down there with all the influence, right? Basically, we're still trying to make sure that getting (or using up) more of some public good costs as much as the benefit you actually gain from that good, now just with maths to prove exactly why. The psychological dimensions of the choice they mention - look at what a success rational actor theory turned out to be - are also worth discussing more.


The interplanetary maths continues, because this LR mechanism is "homogeneous of degree one", which is a fancy way of saying that:

• Changing currencies makes no difference to the mechanism.
• Groups can gain nothing by splitting or combining projects with the same group of participants.
• It matters little precisely how frequently the mechanism is run, whether donations are aggregated at the monthly, daily or yearly level, unless the pattern of donations is temporally uneven in an important way.
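The first two properties are easy to check numerically. A sketch (re-deriving the funding rule from the formula quoted earlier; `lr_funding` is my name for it):

```python
import math

def lr_funding(contributions):
    """The LR rule: square of the sum of square roots."""
    return sum(math.sqrt(c) for c in contributions) ** 2

contributions = [4.0, 9.0, 25.0]
rate = 1.3  # hypothetical exchange rate into another currency

# Currency invariance: re-denominating every contribution scales the
# funding by exactly the same factor (degree-one homogeneity).
in_new_currency = lr_funding([rate * c for c in contributions])
scaled_directly = rate * lr_funding(contributions)

# Split invariance: dividing one project in two, with each contributor
# splitting their contribution in half, leaves total funding unchanged.
whole = lr_funding(contributions)
halves = 2 * lr_funding([c / 2 for c in contributions])
```

Both follow from `sqrt(k * c) == sqrt(k) * sqrt(c)`: scaling every contribution by k scales the sum of roots by sqrt(k) and the squared sum by k.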

Such a design also means that "holding fixed the amount of the contribution, the funding received grows as the square of the community size", which kicks capitalism's ass, especially when it comes to funding goods on the margin.

More importantly, it means:

"that a community that splits in half with roughly similar contribution profiles will receive half the aggregate funding of the total community: both halves will receive one quarter. This is a clear deterrent against fragmentation and atomisation, and is the core reason why the LR mechanism can solve the public goods problem."
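The split deterrent can be checked with a toy population (the sizes are mine; `lr_funding` is my name for the rule quoted earlier):

```python
import math

def lr_funding(contributions):
    """The LR rule: square of the sum of square roots."""
    return sum(math.sqrt(c) for c in contributions) ** 2

# 10 identical contributors of 1 unit each: funding grows as the
# square of community size, (10 * sqrt(1))^2 = 100.
together = lr_funding([1] * 10)

# Split into two communities of 5 with the same contribution profile:
# each half receives (5 * sqrt(1))^2 = 25, so jointly only half
# the original funding.
split = lr_funding([1] * 5) + lr_funding([1] * 5)
```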

LR does not prejudge the optimal size of societies or communities, but instead "offers a mechanism that creates truly neutral incentives among social organisation of different sizes". It also has the property that it "reverts to a standard private good in the case that a single citizen attempts to use the mechanism for her own enrichment."

"To summarise, the mechanism provides much greater funding to many small contributions than to a few large ones. This is not for any reason of equity or distributive justice, though there may be good reasons from those perspectives to admire the outcome it delivers."

Neat. This is all followed by a call to designers - "ETHMagicians and Conflux, we're looking at you", basically:

"The precise look and feel of the system requires much more thought and even might affect the formal rules in some way. None of us are designers so we are far from expert in these questions. We look forward to seeing what specific designs those more expert in this area come up with."

§4 then finishes in a realm even intrepid readers struggle to follow. If you incorporate the citizens' impact on the deficit, then "moon maths" is not even close to what follows. But, whatevs, because "while considering incentives to affect deficits creates some complications and potential deviations from perfect optimality, the impact is small and often irrelevant. Therefore, we omitted it from our analysis above."


Who Will Build The Roads?

What about collusion and fraud, though? If we're gonna decide how to fund public goods that require individual contributions based on the benefit you're receiving and are controlled by all this crazy maths, how can we be sure that the systems we design are robust and secure?

First, some definitions: "Collusion takes place when multiple agents act in their mutual interest to the detriment of other participants. Fraud takes place when a single citizen misrepresents herself as many." After detailing threats and harms, the authors thankfully get to a broader point:

"If perfect harmonization of interests is possible, Capitalism leads to optimal outcomes. LR is intended to overcome such lack of harmonization and falls prey to manipulation when it wrongly assumes harmonization is difficult."

That is, we know it's not actually possible to harmonise everyone's interests, so LR makes some different assumptions. This does mean that collusion, especially, must be guarded against when designing such mechanisms. Fraud can be fairly well taken care of by identity and disincentives; collusion - though - is much more subtle.

Thereafter follows a 2 page discussion on electronic voting systems and coercion resistance, which is great if you're interested, but not too relevant to this summary.

Negative Nancies

There are some other cases to consider closely, too, in particular the point that "not all public projects bring benefits alone; some may harm certain citizens by creating negative externalities such as pollution or offense such as hate speech". While these can often be dealt with by legislation, actually allowing negative contributions can sometimes also serve to deter collusion, in specific contexts.

"More broadly, negative contributions may be a quite powerful way to deter collusive schemes as they offer a way for any citizen to be a “vigilante enforcer” against fraud and abuse. The downside of this benefit, however, is obviously that, in some cases, absolute free speech and other protections may lead us to distrust such vigilantism."

Show Me The Money

I relegated §5.1 to the end for further reading only, but in arguing for a notion the authors call "Capital-constrained Liberal Radicalism" (CLR) they claimed that "funding for LR is likely to come from philanthropists or some dedicated government appropriation rather than from unlimited tax revenue" in the short term.

However, this is likely a bit wishful, so we require some history and the help of that old stalwart Henry George and his theorem. Confusingly, it was Joseph Stiglitz who proved in 1977 that beneficial investments in public goods will increase aggregate land rents by at least as much as the investments cost. This then became known as the Henry George Theorem, because it showed that George's "single tax" on land values is not only efficient, it is also the only tax necessary to finance public expenditures.

HGT is a pretty profound economic argument, when you sit with it for a while. As the authors point out: "it is always possible to fund optimal provision of public goods based on taxing away the rents of declining returns."

"The question this obviously raises is how the increasing-returns activities that are worth undertaking can be identified and how the rents associated with the decreasing returns to scale activities can be identified and taxed without damaging the economy. LR provides a fairly comprehensive answer to the first question, but does not answer the second one. Nonetheless, examples of decreasing returns goods are not too difficult to enumerate and there are existing taxes explicitly set up to tax them. Congestion pricing on heavily trafficked roads, air traffic rights, taxes on land and corporate profits and spectrum policy are all examples of policies designed to tax decreasing returns activities."

Luckily for us, there may even be a specific way of identifying decreasing returns activities and their associated rents: implementing "a self-assessed tax on capital with a right of compulsory purchase for any buyer willing to pay the possessor’s self-assessed price" - i.e. Harberger's tax.
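As a rough sketch of how a self-assessed (Harberger) tax works, here is an illustration in code. The class, the tax rate, and the numbers are all mine, not from the paper:

```python
class HarbergerAsset:
    """An asset whose possessor must publicly self-assess its price,
    pay tax on that price, and sell to anyone willing to pay it."""

    def __init__(self, owner, self_assessed_price):
        self.owner = owner
        self.price = self_assessed_price

    def annual_tax(self, rate):
        # Taxing the self-assessed value: posting a high valuation is
        # costly to hold, a low one invites a forced sale. Possessors
        # therefore have an incentive to post honest prices.
        return rate * self.price

    def compulsory_purchase(self, buyer, new_price):
        # Anyone may buy at the posted price; the buyer then posts
        # their own self-assessment and inherits the tax obligation.
        paid = self.price
        self.owner = buyer
        self.price = new_price
        return paid

plot = HarbergerAsset("alice", 100.0)
tax = plot.annual_tax(0.07)                     # roughly 7.0 per year
paid = plot.compulsory_purchase("bob", 150.0)   # bob pays alice's price
```

The "overcomes the monopoly problem" line in the quote below is visible here: alice cannot hold out for more than her posted price, because posting a higher price costs her more tax.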

"Because this tax overcomes the monopoly problem of sellers holding out for higher payments for their assets, it can alleviate much of the misallocation of assets, increasing aggregate wealth, at the same time as it taxes away rents associated with fixed assets [... Therefore, ] LR paired with Harberger’s tax offers an intriguing vision of a new society with efficient public goods provision funded by efficiency-enhancing taxes."

Have you drunk enough koolaid yet?


Apply With Care

Phew, we finally made it to the "applications" of all of this stuff. Take a moment, just for you.

OK, back to work. Applications include:

  1. Campaign financing, because "LR for campaign finance is joined to matching funds proposals in spirit, while offering substantial improvements over these existing proposals in practice."
  2. News media financing: news, especially high-quality, investigative journalism is a great example of a public good, the underfunding of which leads to severe problems in the mechanisms underlying society itself (#FakeNews bro).
  3. Municipal projects and public works: "While urbanists have long recognised the importance of community-level decision-making in cities, cities often lack mechanisms that allow goods valued in communities to emerge. LR, as applied to urban public funding decisions, could allow communities at all scales to fund projects that would struggle to get funding under centralised systems." Bring on them smart cities!!
  4. Open source software communities, where code is considered a public good and coding is an increasing returns activity. These communities can face potentially large upfront development costs that render capitalist solutions "wildly inefficient and undesirable", and they move too quickly for institution- or government-driven provision, so they too can apply LR to great effect.

    However, Jaron Lanier is ever-present to point out that "Many if not most of the benefits of such systems have increasingly been captured by 'central nodes' or what Lanier calls 'siren servers' such as Google and Facebook, and the systems do not seem to serve the broad public well." There are two potential solutions to this: crowdfunding and cryptocurrencies. No prizes for guessing which one is discussed more...

It's important to pay attention to how the authors chose to describe the current distribution of value in crypto, given some obvious biases. Let's just say I personally think this is likely quite accurate, though I also know that it could be interpreted in radically different and less idealistic ways:

"several community members who have grown personally wealthy through the bubbles created by speculation on the success of these technologies have a philanthropic, and possibly also speculative, interest in seeing the communities thrive and are thus willing to contribute to this out of their personal holdings."

Hmm... Either way, the key point is that "such a hierarchical structure seems both poorly attuned to the needs of the communities and, perhaps more importantly, antithetical to the principles of decentralised authority on which they were founded." LR could help address some of these imbalances that currently exist, and new currencies could even serve as "exciting test grounds" for the philosophy, politics, maths, and economics above.


Where To Now, Alice?

If we cast Liberalism as a tradition that "opposes arbitrary or historically-derived centralised authority [and] favours—to the maximum extent consistent with social order—social systems that are neutral across reasonable competing conceptions of the good life held by individual citizens", then the two driving desires of Liberalism can be described as:

  1. Facilitation of individuals’ pursuit of reasonable conceptions of the good.
  2. Neutrality across reasonable conceptions of the good.

There's endless debate about what things like "reasonable conceptions of the good" actually mean, and it all needs to be read particularly against the "Basic Communitarian Critique" of Liberalism, which comes from the question of how to sustain any kind of collective organisation once you've committed to allowing individuals to follow their own conception of the good. In other words, "A state cannot sustain the communities valued by individuals while remaining neutral among their competing conceptions of the good life." The critique therefore adds a third drive to the list above:

  3. Fostering of community.

Which leads us through some history to the fact that 1p1v was created, in part, to answer the need to better represent communities, but is itself exposed to the "Basic Multiculturalist Critique". This states that "Standard democracy leads to systematic biases against minority groups and communities." That is, capitalism underfunds goods at the margin, and 1p1v suppresses minority voices. Therefore:

"Liberal Radicalism, a political philosophy based in the LR mechanism, resolves an important aspect of the abiding disagreement between liberals and communitarians. Our formal analysis is grounded in positions advanced by liberal thinkers who were attentive to communitarian concerns throughout the history of liberalism. Notably, Tocqueville’s political theory, stemming from his own resolution of the frictions he observed between liberalism and democracy, is a historical precedent for LR. Tocqueville argued that localised collective organisation is necessary, but if society becomes so localised so as to be atomistic, liberal resistance to central authority is impossible".

And resist we must! Vive la révolution! Especially when such a revolution includes "Kwame Anthony Appiah [who] has sought to build a liberal philosophy that is deeply engaged with the richness of human identity and community and yet which resists essentialist or eternal claims of such groups to allow for a liberal, free-flowing and evolving definition of identities that serve individuals."

There follows a bunch of stuff on LR versus Liberal Individualism and various forms of Communitarianism, if you're into it. The key point, though, is that "LR, insofar as it solves the liberal-communitarian debate, is not a fully-articulated ideology but rather a framework for further political theorising. We have presented formal rules for designing neutral societies that demand refinement and translation into other modes of communication before guiding the designs of actual institutions".


Some Final, Philosophical Nuggets

Obviously, the very title and name of the mechanism itself "is at least a quintuple entendre", because why cover moon maths and crazy economic theory if you can't make puns while doing so? Anyway,

LR is, at its core, "a form of liberalism that seeks to achieve liberal ends in a fundamentally social world. In this sense it is also 'radical' in the original meaning: it gets to the roots of what liberalism is about, in our account, namely an antiauthoritarian commitment to neutrality across ways of living and valuing". (emphasis added)

That's a genuinely profound thought. I'll give you some space to absorb it.

Tristram Shandy, anyone?

The authors also point out that the LR mechanism is a model, but ask us "what is the precise role of the model in designing, justifying, and explaining new proposals or philosophical positions?" This is not a new problem, but definitely one due for discussion at #CryptoLife in Prague and our Philosopher's Salon.

"Connecting mechanism design more explicitly with political philosophy can help guide the powerful theoretical apparatus presented here into its most socially valued uses. At the same time, the introduction of complex justificatory tools that only a few experts can understand has its hazards."

Like that Keynes quote that started us off, oh so long ago, what all this asks of us is collaboration across radically different fields, views, and backgrounds. This is because LR ultimately "enables a vision of a different social organisation that plays out the fundamental ideas of liberalism much more completely."

Critically, we need to ask "Is it possible to make wide publics largely unfamiliar with mathematics comfortable with the LR mechanism?" because that's something we can all participate in.

So, whether you're a musician or a mathematician, or just some moron who heard about crypto in college - it's time to dive deeper...


Post Script - Mixing It Up

In §5.1, the authors prove that CLR is "consistent with existing within a broader capitalist society, not just in terms of funding but also in terms of getting people to “play ball” with the mechanism (what economists call 'individual rationality')."

As if all the maths at this point were not enough, there's also some deep economic theory at play in why CLR is a better mechanism than capitalism: "The basic idea of Atkinson-Stiglitz taxation is that, when considering commodity taxation, it is optimal to distort the consumption of all goods equally, so that the marginal rate of substitution across all goods is the same." Using this, they can prove CLR is "optimally allocated across goods if α is chosen to just exhaust the budget" (where α is the relative weighting of LR and Capitalism in CLR).
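As I read §5.1, CLR blends the LR match with straight private contributions, weighting them by α and (1 − α) respectively, and α is chosen so the matching subsidy just exhausts the available budget. A sketch of that calculation (function names and example numbers are mine):

```python
import math

def lr_amount(contributions):
    """Full LR funding: square of the sum of square roots."""
    return sum(math.sqrt(c) for c in contributions) ** 2

def clr_funding(contributions, alpha):
    """CLR funding as a convex combination of LR and plain
    private contributions, weighted by alpha."""
    return alpha * lr_amount(contributions) + (1 - alpha) * sum(contributions)

def clr_alpha(projects, budget):
    """Largest alpha <= 1 such that the total subsidy (funding beyond
    what was privately contributed) just fits the matching budget."""
    full_deficit = sum(lr_amount(cs) - sum(cs) for cs in projects)
    if full_deficit <= budget:
        return 1.0  # budget covers full LR matching
    return budget / full_deficit

# Two hypothetical projects: ten 1-unit donors, and two 4-unit donors.
projects = [[1.0] * 10, [4.0, 4.0]]
alpha = clr_alpha(projects, budget=30.0)
```

The subsidy is linear in α, so exhausting the budget is a one-line division: with the numbers above the full-LR deficit is (100 − 10) + (16 − 8) = 98, and α = 30/98 spends exactly the 30-unit budget.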

You can also see some variations of this functional form in §5.4.