For at least a decade prior to Bitcoin, there was an underground movement to solve decentralized money. The principles of asymmetric cryptography had made their way from mathematical theory into useful tools at the disposal of software engineers. It seemed as though decentralized money should finally be possible, but it was not. One specific issue remained unsolved: the double spend problem. Blockchains and proof-of-work were designed together to solve this specific problem, and thus Bitcoin was born.
Now, zooming out from money alone, we have generalized blockchains like Ethereum. With much more than ‘spending’ going on, I feel it is important to define what we are using a blockchain to solve. In this field most of us have a general sense that there is ‘added security’ of some kind, and we know from experience that blockchains can ‘carry value’, but these are merely observed behaviors. Given the tremendous drawbacks that blockchains have, we must be able to define the fundamentals in order to evaluate pros and cons from first principles. The key problem solved* by a proof-of-work blockchain is as follows:
The disagreement over the chronology of events separated by distance.
In centralized systems the chronology is determined by whichever message reaches the central computer first. In a decentralized system there is no single point of truth, and determining ‘what came first’ is non-trivial. It is in fact ingrained in the nature of physics that this ordering is merely an opinion** based on the observer. With many observers come many opinions. Note that this general problem distills to the ‘double spend’ problem when mapped to the narrow context of money transfers.
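To see how an ordering disagreement becomes a double spend, here is a minimal sketch. The names and the transaction structure are purely illustrative, not any real wire format:

```python
def apply(balance, txs):
    """Apply transactions in the order observed, rejecting overspends."""
    accepted = []
    for amount, recipient in txs:
        if amount <= balance:
            balance -= amount
            accepted.append(recipient)
    return accepted

# Alice holds 10 coins and broadcasts two conflicting 10-coin payments.
tx_to_bob = (10, "bob")
tx_to_carol = (10, "carol")

# Observer A hears the Bob payment first; observer B hears Carol's first.
# Each observer's ledger is internally consistent, yet they disagree.
print(apply(10, [tx_to_bob, tx_to_carol]))  # ['bob']
print(apply(10, [tx_to_carol, tx_to_bob]))  # ['carol']
```

Both observers followed the same rules; only their opinion of the ordering differed.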
As new technologies become available, new solutions emerge between them which are hard to imagine until each piece begins to solidify. Here I outline an architecture combining consortium blockchains, a public PoW chain, and Truebit, with scalability as the primary benefit.
Truebit is a very early stage project which is currently being built for Ethereum. It allows for large jobs to be registered on Ethereum but then computed outside of Ethereum while still assuring correctness. The consortium blockchain application outlined here is an ideal case for Truebit’s scalability benefits because it bundles large amounts of computation into a single job.
I’ll assume we all know how the Ethereum blockchain works. Currently ConsenSys is building mini “ethereum-compatible” blockchains for companies around the world who hope to find money-saving solutions or more honest record keeping for their businesses.
These experiments use consortium blockchains mostly because the main Ethereum blockchain is seen as “unscalable”. It’s a real issue: the Ethereum blockchain can only handle limited data throughput, so as more computation is needed, the price for that computation will surely rise. If an engineering team doesn’t have the foresight to notice these scalability flaws, the finance department will eventually see the blockchain ‘solution’ become wildly too expensive.
What is a Consortium Chain?
Let us define a consortium chain as a blockchain such that anyone can run a read-only full node, but only a chosen few can write to it (create a block), where creating a block requires signatures from a threshold of predetermined, semi-trusted participants.
The semi-trusted parties create a block whenever they want by signing it and sending it to the other nodes. If a threshold of the other nodes also sign it, it becomes ‘final’.
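A minimal sketch of this threshold-finality rule, using keyed HMACs as a stand-in for real asymmetric signatures. The signer names, keys, and the 2-of-3 threshold are assumptions for illustration only:

```python
import hashlib
import hmac

# Predetermined, semi-trusted signers and the finality threshold (2-of-3).
SIGNERS = {"org_a": b"key_a", "org_b": b"key_b", "org_c": b"key_c"}
THRESHOLD = 2

def sign(signer, block_hash):
    """HMAC stand-in for a real signature over the block hash."""
    return hmac.new(SIGNERS[signer], block_hash, hashlib.sha256).hexdigest()

def is_final(block_hash, signatures):
    """A block is 'final' once a threshold of known signers have signed it."""
    valid = sum(
        1 for name, sig in signatures.items()
        if name in SIGNERS and hmac.compare_digest(sig, sign(name, block_hash))
    )
    return valid >= THRESHOLD

block_hash = hashlib.sha256(b"block 42 contents").digest()
sigs = {"org_a": sign("org_a", block_hash), "org_c": sign("org_c", block_hash)}
print(is_final(block_hash, sigs))  # True
```

Note that nothing here prevents the same signers from later signing a conflicting block at the same height, which is exactly the attack discussed next.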
However, this construction alone has not solved the timing attack. A few adversarial parties can secretly re-sign past blocks arbitrarily, creating completely different branches in almost any way that they choose. The ‘valid’ branch is the one that was signed ‘first’, but PoW is our only proven tool for establishing that. Again, the chronology of events separated over a distance is merely an opinion. We need a way to decide which branch came first, and a protocol that adheres to it.
But what if we could leverage the chronology solutions from the public (PoW) chain, and add the benefits of the consortium chain?
Let’s imagine the following partial solution: the major public blockchains like Bitcoin have the ability to prove*** that a piece of data existed before a specific time. So let’s devise a protocol in which we take the merkle root of each consortium block and embed it into Bitcoin. That root commits to all the information of the consortium chain at a certain state, including all the signatures of those who signed it. Now what happens if the semi-trusted parties try to rewrite blocks and collude to re-sign things in arbitrarily different ways? This time we can actually differentiate between the two branches: we look to the public chain to see which merkle root was embedded earlier, and our consortium chain protocol is designed to follow that branch.
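A sketch of this ‘earliest embedding wins’ rule, assuming a hypothetical anchor registry that records the public-chain height at which each root first appeared (the leaf contents and heights are made up for illustration):

```python
import hashlib

def merkle_root(leaves):
    """Compute a simple binary merkle root over raw leaf bytes."""
    layer = [hashlib.sha256(leaf).digest() for leaf in leaves]
    while len(layer) > 1:
        if len(layer) % 2:
            layer.append(layer[-1])  # duplicate the odd node out
        layer = [hashlib.sha256(layer[i] + layer[i + 1]).digest()
                 for i in range(0, len(layer), 2)]
    return layer[0]

# Hypothetical anchor registry: embedded root -> public-chain height.
anchors = {}

def embed(root, public_height):
    anchors.setdefault(root, public_height)  # first embedding wins

def earlier_branch(root_a, root_b):
    """Follow the branch whose root hit the public chain first."""
    return root_a if anchors[root_a] < anchors[root_b] else root_b

honest = merkle_root([b"tx1", b"tx2", b"sig_a", b"sig_b"])
rewrite = merkle_root([b"tx1'", b"tx2'", b"sig_a'", b"sig_b'"])
embed(honest, 500_000)
embed(rewrite, 500_123)  # the colluders' re-signed branch lands later
print(earlier_branch(honest, rewrite) == honest)  # True
```

The public chain acts only as an ordering oracle here; it never sees the consortium data itself, just the roots.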
With a sufficient network, this system would be quite secure against our main attack vector. Now let’s take a look at how it scales.
To achieve four orders of magnitude of increased scale, let’s raise our block-gas-limit by a factor of 10,000 and let the price per unit of computation drop by the same factor. This will not increase our use of the mainnet, but it will have several effects: only a few participants can be expected to run full nodes, and nearly everyone else will have to rely on light clients.
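The back-of-envelope arithmetic, using an illustrative mainnet gas limit (the figure is an assumption, not a live value):

```python
SCALE = 10_000  # four orders of magnitude
mainnet_gas_limit = 8_000_000  # illustrative figure only

consortium_gas_limit = mainnet_gas_limit * SCALE
relative_price = 1 / SCALE  # price per unit of computation vs. mainnet

print(f"{consortium_gas_limit:,} gas per consortium block")
print(f"each gas unit costs {relative_price:.2%} of the mainnet price")
```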
Unfortunately, light clients do not validate. They simply follow the protocol rules for the longest chain. If the semi-trusted parties do something invalid, the few full nodes will catch it, but they have no way to inform the light clients of the breach.
In a consortium chain, validity should be guaranteed by more than trust in the semi-trusted parties. It’s too easy for them to suddenly decide to change the rules.
This is where Truebit comes in
Amend the protocol above to embed the merkle root of each consortium block into Ethereum (instead of Bitcoin), and specifically into the Truebit contract. Now imagine our consortium chain is a couple of petabytes, with a dozen or so semi-trusted nodes and only a dozen watchdog groups running full nodes. If even a single full node sees an invalid transaction show up, it can rectify it (via the Truebit challenge protocol) in a reasonable amount of time.
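The watchdog role can be sketched roughly as follows. This is not Truebit’s actual interface: the challenge mechanism is abstracted into a single callback, and re-executing a block is simplified to a plain hash:

```python
import hashlib

def compute_root(block):
    """Stand-in for re-executing a block and deriving its merkle root."""
    return hashlib.sha256(block).digest()

def watchdog(blocks, claimed_roots, raise_challenge):
    """Re-check each anchored block; challenge any mismatched claim."""
    for height, block in enumerate(blocks):
        if compute_root(block) != claimed_roots[height]:
            raise_challenge(height)  # dispute resolved via the challenge game

blocks = [b"valid block 0", b"valid block 1"]
claimed = [compute_root(blocks[0]), compute_root(b"forged block 1")]

challenged = []
watchdog(blocks, claimed, challenged.append)
print(challenged)  # [1]
```

The economic point is that one honest watchdog suffices: a successful challenge invalidates the bad anchor for everyone.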
The light clients would receive blockhashes and proofs (of the latest embedded consortium merkle-root) from mainnet. Now even they will automatically switch to the honest chain. In practice, this system provides both validation and protection against reordering (inherited from the proof-of-work blockchain).
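What a light client checks can be sketched as a standard merkle inclusion proof. The proof encoding here is an assumption for illustration, not Truebit’s or Ethereum’s actual format:

```python
import hashlib

def sha256(data):
    return hashlib.sha256(data).digest()

def verify_proof(leaf, proof, root):
    """Verify a merkle inclusion proof.

    proof is a list of (sibling_hash, sibling_is_right) pairs walking
    from the leaf up to the root."""
    node = sha256(leaf)
    for sibling, sibling_is_right in proof:
        node = sha256(node + sibling) if sibling_is_right else sha256(sibling + node)
    return node == root

# Tiny two-leaf tree: prove that b"a" is committed to by the root.
left, right = sha256(b"a"), sha256(b"b")
root = sha256(left + right)
print(verify_proof(b"a", [(right, True)], root))   # True
print(verify_proof(b"evil", [(right, True)], root))  # False
```

The client only needs the root (fetched from mainnet) and a logarithmically sized proof, never the petabytes of underlying state.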
So there’s a problem with this construction, one which the public blockchains solve behind the scenes in a subtle way. In the system above we have to assume the vast majority of users rely on light clients. But now what happens if a blockhash is added, but the data which created it is not made available? In a pure PoW system the miner must send the validation data to everyone if they want to see their block adopted. If they hesitate to do so, their block will be orphaned and they will lose the reward. In our system above, we don’t have this mechanism. There is no ‘reward’, and there’s no way to ‘orphan’ the block without a Truebit proof of fraudulence. Without the data, such a proof cannot be constructed. It’s also difficult to prove (in any way) that the data was not made available.
A possible solution to these problems may exist within erasure coding research: ways to ask for random pieces of data such that if no one in the network responds in a reasonable time, fraud is likely. The linked paper focuses on light clients in a different context. In our system, there would probably need to be a consensus mechanism layered on top of this piece: some way to bump the bad block after availability issues have surfaced.
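The statistical intuition behind this kind of sampling fits in one line. The model is deliberately simplified; in real schemes the erasure coding is what forces a withholding producer to hide a large fraction of chunks rather than a single critical one:

```python
def detection_probability(withheld_fraction, samples):
    """Chance that at least one randomly sampled chunk turns out missing."""
    return 1 - (1 - withheld_fraction) ** samples

# Withholding even a quarter of the chunks is caught almost surely
# after a handful of random queries:
print(round(detection_probability(0.25, 16), 3))  # 0.99
```

Each light client only does a few queries, but collectively the network samples the whole block, which is what makes the approach attractive at petabyte scale.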
* not exactly solved, but creates a ‘good enough’ solution for certain applications.
** ‘opinion’, better defined as a truth which is different for different people.
*** ‘proved’ only in the practical sense. not the strict mathematical sense.