AI and blockchain are a perfect match: combining the two technologies opens up a whole new design space for applications.
For example, blockchains can serve as the base layer for multi-sided markets in decentralized compute and training data, for provenance and data verification, and for protection against deepfakes. Conversely, AI can make blockchains more efficient and unlock new functionality at both the infrastructure and the application layer.
Perhaps more importantly, the decentralized nature of blockchain counterbalances the centralizing tendencies of AI. “Decentralized AI” may be a buzzword, but it can genuinely be part of the solution to some of the most pressing problems related to AI.
As a full-scope governance operating system for the digital world, the focus of the Q Protocol in relation to AI is clear: AI governance. The following provides a brief overview of the challenges and the role we expect the Q Protocol to play in decentralizing AI governance, with a special focus on autonomous AI agents.
How do we ensure that AI aligns with human interests?
AI should serve humans. This simple statement should not be controversial. But the big question is: How do we ensure that it does? AI gone rogue is the stuff of dystopian sci-fi movies, and we all have pictures in our heads of what the world could look like if things go wrong.
Merging AI with blockchain exacerbates the threat: imagine AI agents that act autonomously while having direct access to your wallet. This brings us one step closer to an AI Armageddon scenario in which AI simply takes control via economic means. But even on a much smaller scale, there are many scenarios with unintended consequences. A simple example is a mis-spending decision that – due to the immutable nature of blockchains – cannot be contested or reversed.
Enter Q
Q is a blockchain-based protocol that provides a full-scope governance operating system for the digital world. Its architecture is geared towards enabling governance frameworks for non-trivial use cases that serve the interest of humans.
It is based on a legal framework between all participants – the Q Constitution – whose preamble explicitly states that Q is here to serve humanity.
This clause was included in anticipation of a future where there will be conflicts between humans and machines. It not only sets the tone; it has a binding effect on everything that happens in the Q Protocol.
The Q Protocol has been in production for two years. Currently, its governance framework is used to govern the protocol and applications deployed on Q, with cross-chain functionality under development.
Risks and opportunities of autonomous AI agents
A concrete use case for decentralized governance in an AI context is the governance of autonomous AI agents, i.e. agents that act independently of human input. Imagine a world where your personal AI agent trades stocks for you, does your taxes and squeezes every last cent out of your household spending. This is a huge opportunity.
But when we deploy autonomous agents in a blockchain environment, we basically put them on steroids. First, they gain direct access to economic value in the form of tokens. Second, the immutable nature of blockchains makes it difficult, if not impossible, to intercept or reverse transactions. The lack of intermediaries in crypto is a virtue in many cases, but getting rid of them also means losing the firewall function they can provide.
Blockchain can be a huge lever for the effectiveness of autonomous agents, but it also comes with significant risks: unintended consequences can occur, in some cases with disastrous results. We need ways to mitigate these risks.
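To make this concrete, here is a minimal TypeScript sketch of one possible mitigation: a spending guard that caps what an agent may move autonomously and escalates anything larger to a human. All names (SpendGuard, requestSpend) and figures are hypothetical and not part of any existing protocol or library:

```typescript
// Hypothetical guard between an autonomous agent and its wallet.
// The agent only signs a transfer if the guard approves it.

type SpendRequest = { to: string; amountWei: bigint; memo: string };

class SpendGuard {
  private spentToday = 0n;

  constructor(
    private readonly perTxCapWei: bigint, // hard cap per transaction
    private readonly dailyCapWei: bigint, // hard cap per rolling day
  ) {}

  /** Returns true if the agent may execute the transfer autonomously. */
  requestSpend(req: SpendRequest): boolean {
    if (req.amountWei > this.perTxCapWei) return false; // too large: escalate to a human
    if (this.spentToday + req.amountWei > this.dailyCapWei) return false; // budget exhausted
    this.spentToday += req.amountWei;
    return true;
  }

  /** Called once per day (e.g. by a keeper job) to reset the budget. */
  resetDay(): void {
    this.spentToday = 0n;
  }
}

// Usage: 0.1 ETH per transaction, 0.5 ETH per day.
const guard = new SpendGuard(10n ** 17n, 5n * 10n ** 17n);
if (!guard.requestSpend({ to: "0xRecipient", amountWei: 2n * 10n ** 17n, memo: "api credits" })) {
  console.log("Escalating to human review: amount exceeds autonomous spending cap");
}
```

The specific caps are beside the point; what matters is that the agent’s economic reach is bounded by rules it cannot change itself.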
Mitigating risks while safeguarding opportunities
How can we control autonomous agents using Q’s governance architecture?
Let’s first look at the rationale for the existence of human-based governance in a smart contract environment:
The strength of blockchains and immutable smart contracts is that they cannot be tampered with by humans. Immutability and autonomy are their defining features. However, there are limits to their usefulness: code can be buggy, leading to fatal flaws; the environment may change, necessitating changes to the code’s functionality; and human error may make it desirable to prevent certain outcomes. Lastly, many if not most decision types require judgment and interpretation, which cannot be automated and encoded in smart contracts. Governance by humans becomes inevitable as soon as use cases grow a little more sophisticated.
Similarly, we want autonomous AI agents to act independently and without human interference in the “normal course of business”. That is the rationale for having them in the first place. But there are many circumstances – some known, some unknown and some resulting from “unknown unknowns” – where we will want to limit autonomous agents’ independence. For AI agents to be safe, we need human oversight.
Both problem sets – governing blockchain-based applications and governing autonomous AI agents – are similar in nature. Consequently, the solution to both problems is similar: We need the ability to intercept the autonomy of the technical system via decisions based on human judgment and interpretation.
However, this interception needs to follow objective guidelines that are transparent, fair and credibly neutral. If they are opaque, biased or tainted by particular interests, the whole point of relying on the technical system is lost.
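As a sketch of what such rule-based interception could look like – all names, thresholds and windows below are illustrative, not Q Protocol code – routine actions execute autonomously, while actions flagged by a published rule wait in a transparent queue for human review and expire unless explicitly approved:

```typescript
// Hypothetical interception layer: routine actions pass, flagged ones
// queue for human review under rules that are public up front.

type Action = { id: number; description: string; riskScore: number };
type Decision = "approved" | "vetoed" | "pending";

const RISK_THRESHOLD = 0.7;                   // published rule: what counts as "flagged"
const REVIEW_WINDOW_MS = 24 * 60 * 60 * 1000; // published rule: length of the review period

class InterceptableExecutor {
  private queue = new Map<number, { action: Action; queuedAt: number; decision: Decision }>();

  submit(action: Action, now: number): "executed" | "queued" {
    if (action.riskScore < RISK_THRESHOLD) {
      this.execute(action); // normal course of business: no human in the loop
      return "executed";
    }
    this.queue.set(action.id, { action, queuedAt: now, decision: "pending" });
    return "queued"; // visible to everyone, reviewable by humans
  }

  /** A designated human reviewer approves or vetoes a queued action. */
  review(actionId: number, decision: "approved" | "vetoed"): void {
    const entry = this.queue.get(actionId);
    if (entry && entry.decision === "pending") entry.decision = decision;
  }

  /** After the window closes, only explicitly approved actions run. */
  settle(actionId: number, now: number): void {
    const entry = this.queue.get(actionId);
    if (!entry || now - entry.queuedAt < REVIEW_WINDOW_MS) return;
    if (entry.decision === "approved") this.execute(entry.action);
    this.queue.delete(actionId); // vetoed or unreviewed actions expire
  }

  private execute(action: Action): void {
    console.log(`executing: ${action.description}`);
  }
}
```

Because the threshold and review window are published constants rather than ad-hoc decisions, interception remains predictable and credibly neutral.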
Due to agency conflicts, the responsibility for human-based interception typically cannot be given to specific stakeholders or a specific stakeholder group of the system itself. Naïve token holder voting schemes as seen in many DAOs today are not fit for the task. To provide the desired security, governance decisions and enforcement need to be anchored to an authority outside of the system. This authority needs to be credibly fair and credibly neutral, just as a good nation-state legal system should be fair and neutral.
Leaving governance decisions to corporates does not fulfill the fairness and neutrality criterion. At the same time, nation-state-based governance will not work, since AI is by its very nature borderless. Using existing legal systems to control AI is going to be ineffective at best; more likely, it will stifle innovation without achieving the desired effect.
We need decentralized authorities to control AI. To be effective, these need to fulfill a basic condition: for an authority to be safe and resilient to capture, the cost of corrupting it must exceed the benefit of doing so. We have to assume that autonomous AI agents themselves will come up with strategies to corrupt both inside and outside authorities, so this principle is of critical importance for the safety of the system.
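This capture-resistance condition can be written as a simple inequality: an authority is resilient only if cost_of_corruption > extractable_value. A toy TypeScript check, with purely illustrative figures:

```typescript
// Toy model of the capture-resistance condition described above.
// An authority is resilient only if corrupting it costs more than it yields.

interface Authority {
  name: string;
  corruptionCost: number;   // e.g. stake that would be slashed, legal and reputational cost
  extractableValue: number; // maximum value an attacker could gain from capture
}

const isCaptureResistant = (a: Authority): boolean =>
  a.corruptionCost > a.extractableValue;

// Purely illustrative figures:
const authorities: Authority[] = [
  { name: "well-staked authority", corruptionCost: 50_000_000, extractableValue: 10_000_000 },
  { name: "single underfunded oracle", corruptionCost: 100_000, extractableValue: 5_000_000 },
];

for (const a of authorities) {
  console.log(`${a.name}: ${isCaptureResistant(a) ? "resilient" : "vulnerable to capture"}`);
}
```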
For this reason, the likely path we envision is for governance security to be shared between various AI systems, with only a few key systems eventually emerging as security anchors for AI governance.
Sneak preview: How it can be done on Q
Q is designed as a credibly fair and neutral governance operating system. The protocol has the basic tools in place that can be adapted for AI governance. If done right, AI governance tools built with Q will fulfill the requirements for effective and equitable control of autonomous AI agents as described above.
The basic elements of the system are:
- Rule setting: Q has a legal framework in place allowing for the definition of rules that go beyond “code is law” – i.e. rules that require subjective judgment and interpretation;
- Enforcement: In Q, rules can be enforced effectively based on a number of mechanisms, with Root Nodes being a set of stakeholders in charge of ensuring compliance with agreed-upon rules;
- Dispute resolution: An integrated dispute resolution mechanism involving the ICC’s Court of Arbitration is in place to ensure that any disputes can be resolved within the protocol.
Based on this foundation, different mechanisms for governance enforcement can be deployed: slashing rules, veto rights, gating mechanisms, circuit breakers and more. How these come into play will depend on the concrete use cases and AI implementations. However, the Q architecture is modular and adaptable, meaning that new mechanisms and designs can be added if needed. As an example, different avenues for dispute resolution could easily be integrated into the existing framework.
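To illustrate one of these mechanisms, here is a sketch of a circuit breaker in which a quorum of Root-Node-style vetoers can halt an agent. The interface is hypothetical and not the actual Q implementation:

```typescript
// Hypothetical circuit breaker in the spirit of the mechanisms listed above.

class CircuitBreaker {
  private halted = false;
  private vetoes = new Set<string>();

  constructor(
    private readonly rootNodes: ReadonlySet<string>, // identities allowed to veto
    private readonly quorum: number,                 // vetoes needed to halt the agent
  ) {}

  /** A root node flags the agent; reaching quorum trips the breaker. */
  veto(rootNodeId: string): void {
    if (!this.rootNodes.has(rootNodeId)) throw new Error("not a root node");
    this.vetoes.add(rootNodeId);
    if (this.vetoes.size >= this.quorum) this.halted = true;
  }

  /** The agent must call this before every action it takes. */
  assertOperational(): void {
    if (this.halted) throw new Error("agent halted by governance circuit breaker");
  }
}

// Usage: three root nodes, any two of which can halt the agent.
const breaker = new CircuitBreaker(new Set(["rn-1", "rn-2", "rn-3"]), 2);
breaker.veto("rn-1");
breaker.veto("rn-3");
try {
  breaker.assertOperational(); // throws: the breaker has tripped
} catch (e) {
  console.log((e as Error).message);
}
```

A tripped breaker does not reverse anything on-chain; it simply removes the agent’s ability to act further until governance decides otherwise.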
We are still at the beginning of the journey for Q to become a fair and credibly neutral governance anchor for autonomous AI agents. Stay tuned for news and updates on this topic. If you are a builder working on decentralized AI governance systems, or work with a project where AI governance is an issue, we’d love to hear from you – please reach out and share what you are working on.