First, a confession: I wrote this blog post because I am lazy.
When talking to people about Q for the first time, I am often asked questions that are aimed at poking holes in the system. This is a natural and healthy reaction — I do exactly the same when delving into protocols for the first time. Any system that provides security-critical infrastructure should constantly be challenged.
However, in those conversations, the same questions come up again and again. And while I love talking to people about Q, I recently have begun to feel a bit like a broken record that keeps repeating the same answers over and over again. So I took the time to write down some of my answers, hoping that in the future I will get away with just lazily referring to this blog post when asked.
To be sure: I do enjoy personal discussions — especially the ones that really go to the core of an issue. So by all means, don’t be shy and keep the questions coming!
A caveat: This blog post assumes that you are already familiar with the basics of the Q Protocol. It has the character of “explainer notes” to specific questions around its governance and cryptoeconomic architecture. If you are not familiar with the Q Protocol yet, I recommend checking out the about-section on the website, the White Paper, the documentation or other introductory sources before diving into the questions below.
So without further ado, here are the questions (in no particular order) and my answers to them:
1) Why do we need a decentralized governance system if we already have functioning legal systems?
Nation-state-based legal systems are ill-suited for the digital world for various reasons:
- They do not work well across borders, whereas the digital world is by its very nature borderless. There are various reasons for this: Firstly, local legal systems have evolved to reflect local culture and traditions and are optimized for a specific environment, not for a global scale. Secondly, if you are doing cross-border business, which jurisdiction do you choose? This is often a contentious topic between parties. Ideally, you want some “neutral ground”. All of this is evidenced by the fact that large corporations regularly choose to opt out of local law and opt into private arbitration arrangements when doing cross-border business. However, at the non-corporate or consumer level, this has not been practical up until now.
- In practice, most legal systems are lagging in adopting adequate policies for new technologies, and even once they do, it takes a long time for the legal complex — not least judges — to develop the required knowledge and skill-set to deal with cases regarding new technology effectively and efficiently.
- The quality of legal systems varies greatly between countries, with many jurisdictions around the world effectively failing to provide meaningful legal protection. In crypto, we talk a lot about the “unbanked”, but the share of the global population that does not have adequate access to basic legal protection is at least as concerning.
- More generally, there is always friction when moving back-and-forth between systems — including between decentralized on-chain governance systems and centralized off-chain legal systems. For example: How do you implement court rulings on-chain? There is simply no easy enforcement mechanism based on “old-world” legal systems. A solution will require extra effort, time and money and likely still be subject to error in the end.
For these very practical reasons alone, I believe that having functioning decentralized governance systems is of great value.
If you want a slightly more philosophical and long-term-oriented answer:
On the one hand, building decentralized governance systems can be regarded as a protective move. We are witnessing an era in which the world around us is changing rapidly and trust in public institutions erodes. We need to take the opportunity to build and experiment with new, alternative systems that promise to fulfill the “old-world” functions of institutions if they deteriorate further. Blockchain offers a solution, but so far only partially — it only offers transactional security. If we truly want the freedom to choose between alternative systems, we need to add decentralized governance security. Building decentralized governance systems is therefore a way to insure against the potential failure of existing systems. Even though we hope those systems will not fail, it is wise to protect against the worst case scenario.
On the other hand, it is also playing on the offensive: If we can improve upon existing systems even just by a small margin, the positive effects on wealth creation will be massive. Good governance is the key to value creation, as has been shown many times. On a macro level, improving governance is maybe the highest-value activity to engage in. And there is what I like to call the “double whammy” effect: As competition for governance infrastructure is introduced, this will likely have an invigorating effect on existing systems as well. It is generally accepted that competition improves the quality of products or services offered in a market, and I strongly believe that the market for governance infrastructure is no exception.
And lastly: The same question could be asked for blockchains in general. Why do we need Bitcoin if we have the US-Dollar? Why do we need DeFi if we have TradFi? The generic answer to those kinds of generic questions is that new technologies always present new opportunities and areas for innovation. Even if we do not know precisely where the journey will take us, we are likely to discover exciting and value-creating new things along the way. LFG!
2) Isn’t the whole point of crypto / blockchain that it provides an objective truth, and introducing governance at the Layer 1 level weakens this proposition?
In answering this, I will dodge the bigger underlying question of whether such a thing as an “objective truth” even exists. Let it suffice to say that this is a matter that philosophers and logicians could spend lifetimes discussing and still wouldn’t come to an agreement.
Here is a more practical answer:
Firstly, it is important to note that there is no such thing as “no governance”. If you dig down to the bottom of things, every protocol is run and supported by humans, and those humans coordinate in some way to determine what happens within the protocol. This is governance. So the real question is not whether a protocol “has governance”, but whether that governance is implicit or explicit. I believe framing the question this way is much more useful in deciding which type of governance is best suited for a particular protocol.
Implicit governance is often called “social consensus” — a term that is not very well understood and often misused. In essence, it means that the people who determine the definition of the protocol (e.g. by running nodes) coordinate informally, purely through their actions, the assumption being that there is a tacit agreement of what should be done. This works best when the protocol is relatively simple and ossified — i.e. when no big decisions need to be made on a day-to-day basis and there are no more material changes to the protocol. It works worst if the development of the protocol is still ongoing and there are frequent material changes necessary. Arguably, no crypto protocol has reached the stage yet where implicit governance works best — a possible exception being Bitcoin. But even there it is not entirely clear — there are Bitcoiners who argue passionately for making fundamental changes to the protocol.
For explicit governance, it is exactly the other way round: In a distributed protocol without a centralized control structure, it is very hard to coordinate informally without having a structured process of doing so. This is why for many protocols with governance based on “social consensus”, there is one entity (e.g. a foundation or “labs” company) or a very small number of key people (e.g. founders or core developers) who factually drive change.
The design choice to have an explicit governance framework for Q is therefore driven by the expectation that the Q Protocol will keep innovating and evolving. Q is not an ossified protocol that is released once and never changed again, but a dynamic one which can be adapted and improved, e.g. based on new developments in technology or our evolving understanding of cryptoeconomic principles. However, it is important to note that not all things can be changed easily or “at a whim”. Explicit governance per se does not make it easier to change a protocol. The same can be said for protocols building on Q that use Q’s open governance framework. For protocols that are expected to evolve, having an explicit governance framework makes sense.
Secondly, having explicit governance does not limit the protocol’s capacity to generate “objective truth”. What this objective truth is can be defined within the protocol, independently of the way it is governed. Practically speaking, what people typically mean with “objective truth” in the context of blockchains is one definite state of the distributed ledger; or even more plainly, an impossibility to double-spend (although this is greatly simplifying things — for example, what about censoring of transactions?). In Q, the Constitution clearly states that transactions cannot be reversed, and changing this rule would be extremely hard (and in any case only forward-looking even if it were to be changed). When assessing how robust this truth-generating capacity of the protocol is, one needs to dig down to the details of the governance structure: I.e., what would need to happen for “the truth” to be compromised (practically speaking: launching a successful double spend attack)? The short answer is that within Q, an attack on the system would face multiple hurdles, each of which would be extremely difficult to take on its own. The multi-layered governance architecture of Q means that not only one, but several elements of the system would need to be compromised simultaneously for an attack to succeed. While quantifying this (i.e. estimating a cost of an attack) is the subject for future research, it seems clear that Q’s ability to withstand an attack on the system is — compared to many other Layer 1 protocols — extremely strong since it adds security layers on top of the standard PoS architecture. This means that the capacity of the protocol to “generate truth” is very high.
3) If root nodes’ identities are known, isn’t the system permissioned?
The short answer: No. Access to Q is not permissioned. Anyone can join and contribute. At the validator level, participation can be anonymous, just as in other public permissionless proof-of-stake blockchains. At the root node level, participation is dependent on being elected by QGOV token holders on the basis of specific criteria that aim to ensure that the root nodes are independent, resistant to censorship and competent to handle their tasks as defined in the constitution. Anyone who gathers a sufficient number of votes can contribute to Q’s security as root node. There is no external gatekeeper or permissioning entity.
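To make the “no external gatekeeper” point concrete, here is a minimal, purely illustrative sketch of vote-based panel admission. All names (`Candidate`, `elect_root_nodes`, `meets_criteria`, the panel size) are hypothetical and invented for this example; Q’s actual election logic lives on-chain and differs in detail.

```python
# Illustrative sketch of vote-based root node election.
# NOT Q's actual implementation; names and structure are hypothetical.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    votes: int            # QGOV-weighted votes received
    meets_criteria: bool  # independence / competence criteria (constitutionally defined)

def elect_root_nodes(candidates, panel_size):
    """Top vote-getters among eligible candidates join the panel.
    Admission depends only on criteria plus votes -- no external gatekeeper."""
    eligible = [c for c in candidates if c.meets_criteria]
    ranked = sorted(eligible, key=lambda c: c.votes, reverse=True)
    return [c.name for c in ranked[:panel_size]]

candidates = [
    Candidate("alice", votes=900, meets_criteria=True),
    Candidate("bob", votes=1200, meets_criteria=True),
    Candidate("mallory", votes=1500, meets_criteria=False),  # fails the criteria
]
print(elect_root_nodes(candidates, panel_size=2))  # ['bob', 'alice']
```

The point of the sketch is the shape of the rule, not its parameters: whoever satisfies the published criteria and gathers enough votes gets in, and no single party can veto that.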
If you were looking for a longer and more nuanced answer, it’s also “no”, but longer and more nuanced. Here we go:
Let’s dive into the question of what “permissionless” actually means, as this term is often ill-understood.
Interpreted literally, permissionless means that you can participate in a system without requiring the consent of any person or entity. This is true for blockchains like Bitcoin and Ethereum and is also true for Q: you can use Q tokens (QGOV) within the network, become a validator node or participate in governance without needing to ask anyone first. The notable exception is joining the root node panel or an expert panel: Here you need to be voted in by QGOV token holders. You could argue that this is a form of permissioning, since token holder voting is dependent on the consent of other human beings and not administered by code alone. However, I would argue that this is a purely semantic difference: Firstly, since access to the pool of humans that vote for or against root nodes is permissionless, as a second-order effect access to the root node panel is permissionless as well. Yes there is an extra step, but when push comes to shove, permissionless access to voting rights on Q will ensure that unwarranted gatekeeping to the root node panel cannot be upheld over longer periods of time. Secondly, the distinction between “human” permissioning and “technical” permissioning is a fallacy, since at the core, every public permissionless protocol is maintained by humans that need to consent to the rules of the game; either explicitly as in Q, or implicitly — this is what we typically call “social consensus”. So even if access to the validator set of a proof-of-stake protocol is based on technical parameters, there is still implicit human permissioning since social consensus could lead to a forking out of validators if the majority of the community (of humans) decides so.
Interpreted more broadly, permissionless means that the hurdles to access a protocol are as low as possible. This seems to me to be a more useful definition, since it guides us towards the goals we should have in mind: A protocol should be inclusive and not arbitrarily restrict people from participating. Maybe even more importantly, permissionlessness ensures that attributes such as credible neutrality and censorship resistance can be upheld. In this regard, I would argue that Q is “more permissionless” than many other protocols since the hurdles to participate are low: Factors that play into this are how much you actually have to spend at the minimum to be able to participate in the consensus protocol and governance and whether you can gain influence through merit rather than money. On all of those dimensions, Q scores highly compared to most other protocols.
4) Isn’t the ICC factually controlling Q?
Again, the answer is no. The International Chamber of Commerce’s Court of Arbitration has a specific function within Q’s dispute resolution process which is defined in the Q Constitution. It is a venue for dispute resolution if, and only if, two parties within Q face a dispute about a question defined in the Q Constitution. There is no possibility for the ICC to engage with the Q Protocol proactively; it can only act reactively if a valid request is made by a Q Stakeholder that is a party to a dispute within the Q Protocol.
Of course, it is true that if those conditions are met, the ICC can have the final say on matters that are of great importance for specific stakeholders of Q or even the Q Protocol at large. But in doing so, the role of the ICC is strictly limited to interpreting the rules of the Q Constitution, which are transparent, known and agreed-upon ex-ante to everyone participating in Q. This is different to the role of courts of law in many legal systems, which often have a much broader scope and competence. Therefore the risk of arbitrary legal overreach is greatly reduced compared to a situation where the dispute resolution process is not or only vaguely defined.
Nevertheless, let’s assume that there is a dispute that results in an ICC ruling that is generally regarded by the Q community as unfair. The remedy for this would be for Q Token Holders to propose and vote for a change of the Q Constitution to limit the role of the ICC in the dispute resolution process or even remove it entirely. Of course this would take some time, but so does a “civil war” as we have seen in other decentralized communities. And in contrast to a civil-war-type situation, changes to Q would follow an orderly process and hopefully come without the casualties. In other words, there is a system of checks-and-balances in place to ensure that no single stakeholder or entity can hijack the Q Protocol.
Lastly, it is instructive to compare the dispute resolution process in Q with the situation that other, non-constitution-based, protocols face in similar situations. Firstly, it is important to acknowledge that disputes are unavoidable. Whether you like it or not, any successful protocol will see disputes among its participants. Secondly, whenever there is a dispute, there WILL be an outcome of that dispute. This can either be achieved through an orderly and structured process, or through an unstructured process (the less sophisticated term for this would be “law of the jungle”). The nature of an unstructured process is that the outcome is difficult to predict — this greatly increases the uncertainty and hence the “risk premium” that participants are likely to attach to their participation in such a system. All other things equal, more risk means less value creation. Having a well-defined and functioning dispute resolution process therefore has a positive impact on value creation potential within a system.
5) Does constitutional slashing not introduce an attack vector because of its subjectivity?
In proof-of-stake networks with slashing, there are two classes of slashing penalties: Those that can be administered in an automated way (i.e. without the need for human input) and those that require human input or judgement and cannot be automated.
An example for the first class is double signing of blocks; an example for the second one is censorship (i.e. failing to include transactions in a block although they fulfill the requirements to be included).
Typically, proof-of-stake networks have formalized slashing rules only for the first class; the second class requires “social slashing”, whereby the community needs to coordinate off-chain and come to an agreement on the respective slashing (or forking) decision.
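The two classes above can be sketched in code. This is an illustrative toy model only, not Q’s (or any network’s) actual slashing logic; all names (`is_double_sign`, `route_offence`, the evidence format) are hypothetical.

```python
# Toy model of the two slashing classes: machine-checkable offences vs.
# offences requiring human judgement. Purely illustrative; not any
# network's real implementation.

def is_double_sign(evidence):
    """Automatable: two distinct signed blocks by the same validator at the
    same height are objective, machine-checkable proof of misbehaviour."""
    a, b = evidence["block_a"], evidence["block_b"]
    return (a["validator"] == b["validator"]
            and a["height"] == b["height"]
            and a["hash"] != b["hash"])

def route_offence(offence, evidence):
    """Apply an automated penalty only where proof is machine-checkable.
    Everything else (e.g. alleged censorship) needs human adjudication,
    because intent and circumstances cannot be read off the chain."""
    if offence == "double_sign" and is_double_sign(evidence):
        return "automated_slash"
    return "human_adjudication"

evidence = {
    "block_a": {"validator": "val1", "height": 100, "hash": "0xaa"},
    "block_b": {"validator": "val1", "height": 100, "hash": "0xbb"},
}
print(route_offence("double_sign", evidence))  # automated_slash
print(route_offence("censorship", {}))         # human_adjudication
```

The design question discussed next is where to draw the routing line: fully automated regimes push as much as possible into the first branch, while constitutional slashing can route even automatable cases through the second.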
Let me start with the second class because I think the answer for this one is easy: Clearly, constitutional slashing as implemented in Q reduces risk and limits attack vectors because it provides a clear framework on how such slashings are administered. It provides a higher degree of objectivity than “social slashing”, which by its very definition is not based on objective rules that are known ex-ante. Stated bluntly, social slashing is a form of “mob rule” — I think Nic Carter’s article titled “If Ethereum starts slashing, it burns” does a fantastic job in highlighting the dangers of social slashing, so I will not repeat all the arguments here.
The first class is more tricky and while I personally believe that constitutional slashing is the superior solution, I acknowledge that there are different views on this question. In the following, I will attempt to present both sides’ arguments dispassionately so that you can make up your own mind:
The argument for automated slashing is the assumption that protocol robustness is maximized if misbehaviour is always penalized, irrespective of the reason for the misbehaviour. So for example if a validator node is offline, the fact that it is offline is bad for the network and therefore should be penalized — even if it is for reasons completely outside the control of the respective node’s operator. From an incentives perspective, knowing that misbehaviour will be penalized no matter what also minimizes the occurrence of misbehaviour. For example, if a potential node operator is situated in an area where electricity supply or internet access is sketchy, they will think twice before running a node there, since loss of the node’s connection will result in a penalty. This leads to a situation with maximal robustness of the node network.
The argument for constitutional slashing — even for slashing penalties that could in principle be automated — is twofold: Firstly, there are situations that are truly outside of the control of node operators and hence should not be penalized. An extreme example would be a node going offline due to a loss of electricity supply caused by a bombing of the region’s electric infrastructure — a scenario that tragically is not only theoretical, as the war in Ukraine has confirmed. This is not only a question of fairness, but also one of decentralization: Having nodes as diversified as possible — including in areas that may have less-than-perfect infrastructure — increases decentralization and censorship resistance. Importantly, this may be true on an aggregate level even if it results in a less stable validator base at the individual level. Constitutional slashing allows for this, whereas fully automated slashing does not. Secondly, constitutional slashing is an effective protection against bugs. As we have seen on the Ethereum testnet, simple bugs can lead to non-reversible slashing events, which can have a negative impact not only on validator economics but also on network security.
To put things into perspective: I believe that in practice, for automatable slashing decisions, the difference between fully automated and constitutional slashing will be small. The big difference between the two slashing regimes lies in (potential) slashing penalties that cannot be automated, such as slashing of censoring validators.
To conclude, my personal view is that constitutional slashing is vastly superior to other slashing regimes and has the potential to increase network security. I do acknowledge though that this is a fundamental design choice on which there are bound to be different camps — I encourage everyone to dive into the topic and make up their own mind.
6) There is already a “social contract” underlying crypto — why do we need a constitution?
Social contract, schmocial contract.
Forgive me for getting worked up about this, but “social contract” is a term I would like to get rid of completely.
First of all, a social “contract” is not a contract. In a legal sense, a contract is a mutual agreement between two or more parties that specifies certain rights and obligations which can be enforced. None of these elements are present in what people consider social contracts in crypto.
But semantics aside, my main criticism of the idea of a social contract is that no one can actually spell out what the content of such an alleged “contract” is. Even for the oldest, largest and most popular protocols, there is no agreement on very basic properties that should define the protocol. As Hasu and Nic Carter have shown convincingly in 2018, this has been true for Bitcoin, and I still think it is true today (just check the heated debates about Ordinals that took place this year). Similarly, for Ethereum there is no agreement in the community on some fundamental questions, e.g. what would be done if the protocol were attacked. Just check the current debate about Lido’s role, or last year’s discussion about slashing in the case of censorship, where prominent figures argued passionately both for and against social slashing.
So, to be clear: There is no such thing as social contracts. In the best case, they are a mirage, an illusion, a deception. In the worst case, they are a euphemism for mob rule.
In my answers I am trying to be nuanced, provide context and perspective, give space to alternative points of view. But when it comes to the idea of a social contract, let me be clear: Just give it up. Social contracts do not exist.
I am a little more lenient on the term “social consensus”, although I’m not a big fan either. It is just too easily abused. Whenever someone invokes a “social consensus” to justify some action, my reflex is to assume that he or she is just talking about his or her own opinion. Nevertheless, if used correctly, there is a big difference between the two terms: Social consensus is something that emerges, an outcome. It can only be known ex-post. The term social contract, on the other hand, implies that it is known and agreed upon ex-ante. This is the great fallacy — and therefore the term should not be used.
7) Does it not increase the liability risk for participants in Q if the rules are written down? Why would they take the risk of being sued if they violate these rules?
As with any legal question, a disclaimer first: I am not a lawyer, this is not legal advice, just my personal opinion, and of course you should do your own research and potentially seek legal advice before engaging in any risky activities.
With this out of the way, my personal view and conviction is that establishing explicit rules actually has the opposite effect: Explicit rules clarify expectations by spelling out agreements between stakeholders that would otherwise be implicit (and therefore unclear). The result: A reduced scope for potential liability claims.
To put it another way: Not writing down an agreement does not mean that there is no agreement. Of course every legal system has its own peculiarities, but I think this is a pretty universal principle. The attitude “I don’t write down the rules, therefore there are no rules” reminds me of a two-year-old who covers his eyes and thinks this makes him invisible to others. As a general rule, ignoring a legal or regulatory problem doesn’t make it go away.
If there is no explicit agreement, it is easier for anyone to allege that there was one, or even simpler, just assume a certain legal structure or construct that comes with certain obligations and liability traps for its participants. The Ooki DAO case is a perfect example of this: Because there was no explicit agreement, the judge looked at facts and circumstances and put the case into the category that he thought fitted best. This turned out to be a General Partnership, with potentially very bad consequences for participants in the DAO (namely, unlimited joint and several liability).
Now we can scream and shout that this is unfair, but it is just a reality of life: If there is no explicit agreement, a judge that is confronted with a claim does not have another choice but look at facts and circumstances and try to fit them into one of the “boxes” that the respective jurisdiction provides. This can result in unpredictable and unintended outcomes. And then — what? How do you defend yourself in the absence of any documentation of what was intended? This is very hard.
Conversely, if there is an explicit rule-set that is transparent to all stakeholders (and ideally accepted explicitly), the opposite is true: Someone making a liability claim would either have to refer to the agreed-upon rules, or argue convincingly that those rules do not apply. In most jurisdictions, a judge will have a hard time not honouring agreements concluded and documented between the parties as long as they are not in conflict with existing law. Oftentimes, the only effective way to limit liability is via an explicit agreement.
This may be counter-intuitive to some, but opting-in to a defined legal regime reduces liability risks. Because if you opt in to a certain regime, you make your intention clear that you opt out of everything else at the same time. The first question that a judge typically needs to answer when confronted with a claim is: Does my court have jurisdiction? If the parties to a dispute have agreed that they want to opt in to another legal regime or resolve their disputes via private arbitration, the answer to this question will often be “no”.
For the participants involved, having explicit rules reduces the risk. They know what they sign up to and can make an informed decision as to the level of risk they are willing to take. As a side effect, having rules explicitly spelled out typically makes people think more critically about what they are getting themselves into. And I’m pretty sure that’s not a bad thing, in crypto as in life.
Summing up: Explicit rules, if they are effectively agreed between the affected parties, greatly reduce the risk of arbitrary legal overreach.
— — —
Closing remarks
This post has become much longer than I anticipated, yet it still feels like I have barely scratched the surface. Each question would have probably warranted its own blog post, and there are more I have not even tackled yet. Therefore, I have decided to turn this into a “Part 1” version. Stay tuned for the sequel.
The good news is that there is plenty of room for further discussion and research. Building a system for shared governance security is not an easy task and cannot be accomplished without critical thinking and input from many.
I am looking forward to feedback and discussion. If you have questions, comments or ideas around Q, please reach out!
A big thank you to Nicolas Biagosch for input to and review of this article.