Ethereum’s Gas Mechanism: Details and Rationale from First Principles

Ethereum is essentially Bitcoin with a gas mechanism added in order to enable it to run a Turing-complete virtual machine.

In that sense, this gas mechanism was Ethereum’s biggest advancement. There. I said it. It is fairly complex because there are multiple different values to think about. These values are especially confusing when you don’t know their rationale. This paper outlines the thought process that logically arrives at its design (which is exactly as complex as it needed to be, and no more).

There are 4 variables involved in understanding gas:

  • gas-cost
  • gas-price
  • gas-limit
  • block-gas-limit

 

Design Rationale

First let’s understand Bitcoin’s problem and solution, then observe why Ethereum introduces new issues, then see how the gas-based solution solves them.

Bitcoin

Problem: Public blockchains are open networks. Therefore, anyone can DOS attack the whole network by sending millions of transactions at once.

Solution: To mitigate this, let’s require the transaction sender to attach a fee to the transaction.

Problem: Each block only has limited space. With a fixed fee, a block can still become “full”, leaving no room for more transactions. The rest of the pending transactions will have to wait an arbitrarily long time.

Solution: What we really want is a market, so the user can offer a competitive fee, and the miner will prioritize by highest payment. Users can attach larger fees if they don’t want to wait in line.

Problem: The transaction data unfortunately can vary in size. So even with a fee, someone can still DOS attack the network by sending a couple of huge transactions.

Solution: To solve this, the miner first looks at the tx inputs and outputs, and subtracts them to find its fee, then it divides by the total length of the tx data to prioritize the transaction based on its price-per-byte of data. An attacker now must pay a price roughly proportional to the amount of computation the miner will be performing, and the miner can verify that fact. There is also a limit on the amount of bytes that can be included in a single block, so the miner is interested mostly in maximizing this price-per-byte unit specifically.

Ethereum

Problem: The scripting language is Turing complete. This means that a transaction script can have jumps and loops. Therefore the amount of computation done on it is no longer related to the size of the code in bytes, so we can no longer rely on the price-per-byte metric as fair payment. An attacker can create a very small piece of code that pays a significant price-per-byte but causes the virtual machine to loop a million times before completing! There is no way for a miner to know this until it actually executes the million loops.

Solution: Enter – Gas. The concept that as the virtual machine executes a program, it tallies how much computation it is doing. Therefore a million loops would “use up” a million times more gas than one loop. Specifically each operation of the virtual machine uses a certain amount of gas whenever it is run: ADD costs 3 gas, MUL costs 5 gas, JUMPI costs 10 gas etc. These values are referred to as gas-cost and are generally static global values defined in a table in the yellow paper.
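As a minimal sketch of the metering idea (the gas-costs for ADD, MUL, and JUMPI are the yellow-paper values quoted above; the “program” representation here is a toy of my own, not the real EVM):

```python
# Toy gas metering: tally a fixed gas-cost per opcode as a program runs.
GAS_COST = {"ADD": 3, "MUL": 5, "JUMPI": 10}

def gas_used(program):
    """Total gas a list of opcodes would consume."""
    return sum(GAS_COST[op] for op in program)

one_loop = ["ADD", "MUL", "JUMPI"]        # 3 + 5 + 10 = 18 gas
print(gas_used(one_loop))                 # 18
# A million iterations uses a million times the gas:
print(gas_used(one_loop * 1_000_000))     # 18000000
```

The point is that the cost is tallied per operation executed, not per byte of code, so a tiny looping program can no longer hide its true cost.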

Instead of specifying a fee amount (as in Bitcoin), the sender specifies a gas-price with his transaction. As the transaction is processed, he will be charged this gas-price for each unit of gas used. This amount goes to the miner. Without both gas-cost and gas-price we cannot have a market between miners and senders; we would have the same problem described for Bitcoin where a block becomes full and there is nothing to do but wait in line.
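In miniature, the fee works out like this (a plain ether transfer uses 21,000 gas; the 20 gwei price is an arbitrary example, not a value from the text):

```python
# The sender names a gas-price; the miner's fee is the gas actually
# used multiplied by that price.
GWEI = 10**9  # wei per gwei

def miner_fee_wei(gas_used, gas_price_wei):
    return gas_used * gas_price_wei

fee = miner_fee_wei(21_000, 20 * GWEI)
print(fee)   # 420000000000000 wei (0.00042 ether)
```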

And this is about how much people usually know about gas. However, we can’t stop there, because there are still issues. Let’s see what they are.

Problem: We also need a limit on the amount of computation done per block, because blocks need to be processed in a timely manner. Bitcoin solved this by capping the combined bytes of all the transactions in a block (the so-called block-size), but this would not be sufficient in a Turing-complete environment, for the same reasons described above.

Solution: And for those same reasons, we limit block computation using gas, defining another value: the “block-gas-limit”. This is a cap on the cumulative gas used by all the transactions of a single block. This value is not tied to a specific transaction; it is a global value associated with the whole network (as an aside, Ethereum’s block-gas-limit is somewhat dynamic, as opposed to Bitcoin’s block-size, which is hard-coded).

Problem: We have unfortunately just created another issue. This is the part that people rarely understand. As the miner assembles transactions into a block, the cumulative gas counter approaches the block-gas-limit. As they pick each transaction to include (prioritized by highest gas-price), they have less room left before reaching the limit. They don’t actually know, however, how much gas the next transaction will use until they process it. If it overshoots the remaining room, that work was wasted, and the sender of the transaction was at no fault either. No one is to blame, but unusable computation was executed.

Solution: Another value is defined by the sender: the “gas-limit” (confusingly referred to as simply “gas” in the RPC interface). This is a hard cap on the amount of gas the sender is willing to have the transaction consume before it is halted. It protects the sender from spending more on the transaction than expected, but its main purpose is to give the miner an upper bound on the gas the transaction can consume, before processing it. This way, the miner first prioritizes by gas-price, then processes transactions one by one, skipping any whose gas-limit exceeds the remaining room in the current block.

The miner will also check that the sender has enough Ether to pay for the gas-limit that they specified before processing.

If the transaction does hit its gas-limit, everything in the VM is reverted, but the payment is still made from the sender to the miner. This is important because the miner could not have known the transaction would halt, and must be compensated for processing it. The sender, however, can run the transaction locally beforehand and (depending on the contract) ensure it would execute as desired. He must take special care though: some contract calls may halt depending on state which, because the function is public, may be updated by someone else in the meantime.
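Putting the miner’s side together, the selection loop described above might look like this sketch (the field names and data structures are my own; real clients differ):

```python
# Greedy block assembly: sort by gas-price, skip transactions whose
# gas-limit exceeds the gas remaining in the block, and skip senders
# who cannot cover the worst case of gas_limit * gas_price.
def assemble_block(mempool, balances, block_gas_limit):
    remaining = block_gas_limit
    block = []
    for tx in sorted(mempool, key=lambda t: t["gas_price"], reverse=True):
        if tx["gas_limit"] > remaining:
            continue  # might overflow the block; don't risk wasted work
        if balances.get(tx["sender"], 0) < tx["gas_limit"] * tx["gas_price"]:
            continue  # sender can't pay the worst case
        block.append(tx)
        remaining -= tx["gas_limit"]  # reserve the upper bound
    return block
```

In reality it is the gas actually used, not the limit, that counts toward the block, but the gas-limit is the only bound the miner has before executing.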

Summary

The sender of the transaction creates it and decides how much they are willing to pay per unit of computation (gas-price). They also specify an upper bound on how much computation the transaction may do before halting (gas-limit), which caps their possible ether cost at the product of the two. They can usually pre-calculate exactly how much computation is needed, and will send “a little extra”. Only gas that is used gets charged (so the gas-limit only affects the sender as an upper cap).

The miner must assemble blocks within the block-gas-limit. They maximize profits by sorting their mempool by highest gas-price, then processing those transactions one by one, first checking that each one’s gas-limit is less than the room remaining before the block-gas-limit is reached, and verifying that the sender has sufficient funds for the gas-limit × gas-price that they themselves specified.

All of this combines to create an economic system where users and miners are able to participate in a market and where promises are made up front, before work has begun. It may seem complicated, but I hope I’ve proven that it’s exactly what is required (and nothing more) to go from Bitcoin Script to the more versatile Ethereum Virtual Machine.

Ethereum is simply Bitcoin with a gas mechanism added in order to enable it to run a Turing-complete virtual machine.

The 6 Confirmation Bias

The more confirmations, the lower the risk of a transaction getting re-ordered, and specifically, re-ordered such that it produces a different result. In Bitcoin this different result is that you don’t receive the money because the sender’s balance lacks the necessary funds. With smart contracts the effect could be anything really – just a different outcome of the transaction.

I’ve been researching layer 2 solutions to PoW blockchains and I find this need for finality systemic. Lightning networks, state channels, sidechains: they all have issues with finality and they basically all “solve” the problem by defining values for timelocks and block amounts needed before the next stage can move forward. This creates multiple problems of its own (slows things down, may be insecure during outages).

I believe there are more fundamental or natural ways to approach these issues, and I will try to enumerate some here.

The first prerequisite is to convince yourself of this FACT: an objective chronological ordering of two events that took place in different inertial reference frames is not possible.

This statement is not domain specific. It’s a simple consequence of Einstein’s Relativity. We therefore cannot strictly solve the double spend (bitcoin), or the ordering of state transitions (ethereum). The idea of what came first is impossible to solve, so as engineers often do, Satoshi relaxed the constraints. Instead of proving which of two events happened first, we merely aim to achieve a consensus as to which did.

Ruminate on that for a moment.

As it turns out, the relaxed constraint, and the Nakamoto Consensus used to achieve it, have been mostly sufficient for humans to engage in commerce. I don’t care which of two transactions came first as long as I can have a definitive answer to that question within a reasonable wait period. The longer I wait, the more confident I am that the matter is settled.

But let’s not fool ourselves. There is nothing magic about 6 confirmations. There is no inherent finality to this system, and as we engineer constructions atop Bitcoin, imposed finality assumptions tend to break their structural integrity.

There is another way to force ordering as needed for these constructions, one much stronger than PoW consensus, but often overlooked:

hash-pointers

If my transaction data contains a hash-pointer to a previous piece of data, we know which came first (with the only assumption being the cryptographic integrity of the hash). Disagreement over the ordering is then practically impossible, and we can reconcile Einstein’s Relativity by observing that we now have a natural constraint: “information” (in the Einsteinian sense) must first travel from the first event’s reference frame to the second’s in order for the hash to become embedded in the second transaction. This imposes a delay between Tx1 and Tx2 that is precisely long enough that all inertial reference frames observing the events universally agree as to which came first (even absent this hash proof). But enough physics for now. There are real uses that could/should exist for layer 2. The Bitcoin Lightning Network can and does use them; Ethereum currently cannot.

Why? Because Ethereum deals with errors at the VM level, and Bitcoin deals with them locally. Stated differently: in Ethereum, script errors are propagated on-chain, whereas in Bitcoin they are not propagated past a client node.
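The hash-pointer ordering claim above can be demonstrated in a toy form (the transaction encoding is invented purely for illustration):

```python
# If tx2 embeds the hash of tx1, then tx1's data must have existed
# before tx2 was formed (assuming only the hash function's preimage
# resistance).
import hashlib

def h(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

tx1 = b"pay alice 5"
tx2 = h(tx1).encode() + b"|pay bob 3"   # tx2 points back at tx1

def provably_after(later: bytes, earlier: bytes) -> bool:
    """True if `later` embeds the hash of `earlier`."""
    return h(earlier).encode() in later

print(provably_after(tx2, tx1))   # True
```

No observer, in any reference frame, can construct tx2 before tx1 exists, which is exactly the ordering guarantee PoW confirmations can only approximate.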

Coming soon: How to use these properties, and how to fix Ethereum so it can use them too.

 

 

Proof-of-Work Based Block Size Limits

The block size debate rages on past the effective Bitcoin Cash fork. As with many arguments, there are more than just the two sides people think there are.

Instead of detailing the holistic political philosophies behind each, I will outline here the main drawbacks of each idea, and show another construction which has some key benefits.

With Bitcoin we have the hard 1 MB block limit. The main drawback of this is simply transaction throughput. We can only have a few TXs per second, and therefore the market for these TXs will become more expensive as the bitcoin network grows more important in society.

Bitcoin Cash has taken the approach of allowing block size to be voted on by miners, with a hard cap at 32 MB. This solves the current problem of throughput, but makes no promises about the future. As far as historical narratives are important to communities, I would imagine that when the 32 MB becomes a problem again, another hard fork will gain significant consensus to move the limit higher. The problem here is that full nodes are the only way to really audit a blockchain, and as blocks get larger, running one (which is not directly incentivized) becomes much more costly. Without a real incentive mechanism, Bitcoin has remained healthy largely because running these full nodes is nearly free (the cost of a 200 GB hard drive and an internet connection).

Ethereum has a block gas-limit which is effectively a block size AND computation limit in one. It uses a voting mechanism of the miners to determine its value, and historically has followed direct suggestions from Vitalik. It does not have any hard limit, and therefore can grow as big as the miners decide is most profitable for themselves collectively.

All three blockchains can and will receive pressures to raise limits as time goes on. This will never stop. One can attribute the Bitcoin Cash fork to this pressure without any doubt.

Security as a function of block size (figure)

I myself certainly am against arbitrarily increasing block size, but it’s important to note that the reason is this subtle but real loss in security.

As computers become more powerful, the cost of running a full node will drop. A drop in computing price can increase security as more users decide to run full nodes, hardening the peer-to-peer network.

Security as a function of hardware cost


It is quite probable that a 1 MB block limit today is more secure than a 500 KB block would have been in 2009.

My proposal is to programmatically combine the two concepts above so that neither miner voting nor community hard forks are used to determine block size. Instead the advent of increased hardware ability itself can be used as a more secure and predictable way to calculate this “decision”.

Current difficulty at the last hash (Dn) multiplied by some constant (K) could be used to calculate the size limit of the next block (Bn+1).

This system is imperfect, because it is possible that hashing speed and currency price could advance far faster than state storage and internet speeds.

To leave large conservative margins, let’s use the square root of the difficulty (or possibly its log). This would ensure security only grows with hardware capabilities: block size would grow at a rate slower than hardware advances. Security would improve and so would blockchain throughput, but neither would decrease in sacrifice for the other.

Bn+1 = K √(Dn)

By solving K against the existing blockchain data, we can allow block size to grow while the cost of running a full node also falls.
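Numerically, the proposal looks like this (the difficulty figure is invented purely to show the shape of the curve; only the formula itself comes from the text):

```python
# B(n+1) = K * sqrt(D(n)): derive the next block-size limit from the
# current difficulty. K is solved once against existing chain data.
import math

def next_block_size(difficulty, k):
    return k * math.sqrt(difficulty)

D_today = 2 * 10**12                    # hypothetical current difficulty
K = 1_000_000 / math.sqrt(D_today)      # pin today's limit at 1 MB

# A 100x jump in difficulty only grows the limit 10x:
print(round(next_block_size(100 * D_today, K)))   # 10000000
```

The square root is what provides the conservative margin: hashing power must grow one hundredfold before the blocks miners must store and relay grow tenfold.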


Verifiable Public Permissioned Blockchains (Consortium Chains) for scaling ~4 orders of magnitude

For at least a decade prior to Bitcoin, there was an underground movement to solve decentralized money. The principles of asymmetric cryptography had made their way from mathematical theory into useful tools at the disposal of software engineers. It seemed as though it should finally be possible, but it was not. There was one specific issue which remained unsolved: the double-spend problem. Blockchains and proof-of-work were designed together to solve this specific problem. And of course Bitcoin was born.

Now, zooming out from money alone, we have generalized blockchains like Ethereum. With much more than ‘spending’ going on, I feel that it is important to define what we are using a blockchain to solve. In this field most of us have a general sense that there is ‘added security’ of some kind, and we know from experience blockchains can ‘carry value’, but these are merely observed behaviors. With the tremendous drawbacks that blockchains have, we must be able to define the fundamentals in order to evaluate pros and cons from first principles. The key principle solved* by a proof-of-work blockchain is as follows:

The disagreement of chronology of events separated over a distance.

In centralized systems the chronology is determined by whichever message makes it to the central computer first. In a decentralized system there is no single point of truth, and determining ‘what came first’ is non-trivial. It is in fact a problem ingrained in the nature of physics that this ordering is merely an opinion** based on the observer. With many observers come many opinions. Note that this general problem distills to the ‘double spend’ problem when mapped to the narrow context of money transfers.

As new technologies become available, new solutions emerge between them which are hard to imagine until each piece begins to solidify. Here I outline an architecture between consortium blockchains, a public PoW chain, and Truebit for the primary benefit of scalability.

Truebit is a very early stage project which is currently being built for Ethereum. It allows for large jobs to be registered on Ethereum but then computed outside of Ethereum while still assuring correctness. The consortium blockchain application outlined here is an ideal case for Truebit’s scalability benefits because it bundles large amounts of computation into a single job.

I’ll assume we all know how the Ethereum blockchain works. Currently ConsenSys is building mini “ethereum-compatible” blockchains for companies around the world who hope to find money-saving solutions or more honest record keeping for their businesses.

These experiments use consortium blockchains mostly because the main Ethereum blockchain is seen as “unscalable”. It’s a real issue. The Ethereum blockchain can only handle limited data throughput, so as more computation is needed, the price for that computation will surely begin to rise. If an engineering team doesn’t have the foresight to notice these scalability flaws, the finance department will eventually see the blockchain ‘solution’ become wildly too expensive.

What is a Consortium Chain?

Let us define a consortium chain as a blockchain such that anyone can run a read-only full node, but only a chosen few can write to it (create a block), where creating a block involves having it signed by a threshold of predetermined, semi-trusted participants.

The trusted parties create a block whenever they want by signing it and sending it to the other nodes. If a threshold of the other nodes sign it, then it becomes ‘final’.
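The threshold rule in miniature (signer names and the threshold value are invented, and real signature verification is stubbed out entirely):

```python
# A block is 'final' once at least THRESHOLD of the predetermined,
# semi-trusted signers have signed it.
AUTHORIZED = {"alice", "bob", "carol", "dave", "erin"}
THRESHOLD = 3

def is_final(signers):
    """Count only signatures from the authorized set."""
    return len(set(signers) & AUTHORIZED) >= THRESHOLD

print(is_final(["alice", "bob", "carol"]))     # True
print(is_final(["alice", "mallory", "bob"]))   # False
```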

However, this construction alone has not solved the timing attack. A few adversarial parties can secretly re-sign past blocks arbitrarily, creating completely different branches in almost any way that they choose. The ‘valid’ branch is the one that was signed ‘first’, but PoW is our only proven tool for that task. Again, the chronology of events separated over a distance is merely an opinion. We need a way to decide which branch came first, and a protocol to adhere to it.

But what if we could leverage the chronology solutions from the public (PoW) chain, and add the benefits of the consortium chain?

Let’s imagine the following partial solution: the major public blockchains like Bitcoin have the ability to prove*** that a piece of data existed before a specific time. So let’s devise a protocol in which we take the merkle root of each consortium block and embed it into Bitcoin. That root could embed all the information of the consortium chain at a certain state, including all the signatures of those who signed it. Now what happens if the semi-trusted parties try to rewrite blocks and collude to re-sign things in arbitrarily different ways? This time we can actually differentiate between the two branches. We look to the public chain to see which merkle root was embedded earlier, and our consortium chain protocol is designed to follow this branch.
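That fork-choice rule can be sketched as follows (the (height, root) pairs standing in for the public chain are invented for illustration):

```python
# Between two competing consortium branches, follow the one whose
# merkle root was embedded in the public chain at the earlier height.
def embed_height(public_chain, root):
    for height, embedded in public_chain:
        if embedded == root:
            return height
    return float("inf")   # never embedded: never preferred

def choose_branch(public_chain, root_a, root_b):
    a = embed_height(public_chain, root_a)
    b = embed_height(public_chain, root_b)
    return root_a if a <= b else root_b

public = [(100, "root-honest"), (250, "root-rewritten")]
print(choose_branch(public, "root-rewritten", "root-honest"))  # root-honest
```

The re-signed branch loses automatically: no matter how many signatures the colluders gather, they cannot embed their root into the public chain at an earlier height than the original.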

With a sufficient network, this system would be quite secure from our main attack vector. Now let’s take a look at how it scales.

Public Validation

To achieve 4 orders of magnitude in increased scaling, let’s allow our block-gas-limit to increase, and accept a price decrease, each by a factor of 10,000. This will not increase our use of the mainnet, but it will have several effects: only a few participants can be expected to run full nodes. Nearly everyone else will have to rely on light clients.

Unfortunately light clients do not validate. They simply follow the protocol rules for longest chain. If the semi-trusted parties do something invalid, the few full nodes will catch it, but they have no way to inform the light clients of the breach.

In a consortium chain, there should be more than trust in the semi-trusted parties to guarantee validity. It’s too easy for them to suddenly decide to change the rules.

This is where Truebit comes in

Amend the protocol above to embed the merkle root of each consortium block into Ethereum (instead of Bitcoin), and put them into the Truebit contract specifically. Now imagine our consortium chain is a couple of petabytes. Imagine we have a dozen or so semi-trusted nodes and only a dozen watchdog groups running full nodes. If just one single full node sees an invalid transaction show up, it can rectify it (via the Truebit challenge protocol) in a reasonable amount of time.

The light clients would be receiving blockhashes and proofs (of the latest embedded consortium merkle-root) from mainnet. Now even they will automatically switch to the honest chain. In practice, this system will provide both validation, as well as protection against reordering (inherited from the proof-of-work blockchain).

Drawbacks

So there’s a problem with this construction. One which the public blockchains are solving behind the scenes, but in a subtle way. In the system above we have to assume the vast majority of users must rely on light clients. But now what happens if a blockhash is added, but the data which created it is not made available? Of course in a pure PoW system the miner must send the validation data to everyone if they want to see their block become adopted. If they hesitate to do so, their block will become orphaned and they will lose the reward. In our system above, we don’t have this mechanism. There is no ‘reward’, and there’s no way to ‘orphan’ the block without a Truebit proof of fraudulence. Without the data, such a proof cannot be constructed. It’s also difficult to prove (in any way) that the data was not made available.

A possible solution to these problems may exist within erasure coding research: basically, ways to ask for pieces of data such that if no one in the network responds in a reasonable time, fraud is likely. The linked paper focuses on light clients in a different context. In our system, there would probably need to be a consensus mechanism layered into this piece; some way to bump the bad block after availability issues have surfaced.

 

* not exactly solved, but creates a ‘good enough’ solution for certain applications.

** ‘opinion’, better defined as a truth which is different for different people.

*** ‘proved’ only in the practical sense. not the strict mathematical sense.

 

KarmaCoin

Experimental cryptocurrency of Burning Man.

I’d like to create a token for Burning Man that is inspired by one of its core tenets: the gifting economy. The ethereum-based token will only be gifted, never bought or sold. Of course, as with Burning Man, this is an honor-based system, but I think it’s likely the coin will largely adhere to this tenet; after all, it has no other utilitarian value. So you can send someone a KarmaCoin the same way you’d send a bitcoin, but simply as a gesture of appreciation. It will have no monetary value, but it may be seen as somewhat of an honor to receive.

Creation: One of the first natural questions is how the initial supply and creation of the token comes into existence. To ensure a strong asset, it will have a fixed supply that is locked after the creation phase. But how much to create? How to initially distribute, and why? This is the interesting part: we have interested people pin actual paper US dollars to a piece of art on the playa. Then we burn the US dollars, and for every 1 dollar burned, 1 KarmaCoin is created.
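The creation rule in miniature (the real token would be an ERC20 contract; this is only the accounting logic, with method names I made up):

```python
# One KarmaCoin minted per dollar burned, then the supply locks forever.
class KarmaCoin:
    def __init__(self):
        self.balances = {}
        self.locked = False

    def mint_for_burned_dollars(self, addr, dollars):
        if self.locked:
            raise RuntimeError("creation phase is over")
        self.balances[addr] = self.balances.get(addr, 0) + dollars

    def lock_supply(self):
        self.locked = True   # fixed supply from here on

    def gift(self, sender, receiver, amount):
        if self.balances.get(sender, 0) < amount:
            raise ValueError("insufficient karma")
        self.balances[sender] -= amount
        self.balances[receiver] = self.balances.get(receiver, 0) + amount
```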

specifics:
I’ll be sitting on the playa sporting a 1920s accountant getup, registering everyone’s contributions.

I’ll either be:
1) handing out cards with addresses and private keys (scratch-off) on them. The private keys will be made securely by me (trust me!), and I’ll post all details involved. OR
2) I’ll hand out worksheets and dice. They literally follow the directions on the page and, with my supervision, roll the dice to create random numbers as appropriate, until they have a 12-word seed/derived address made by themselves. This is cool because it will be educational and fun! Unfortunately we will need a computer to find the public key, but we can remove the reliance on an RNG, which is a huge win (and I can destroy the computer at the end).

Either way, they will go home in a week and import the seed to any ERC20-compatible wallet to find their KarmaCoins waiting for them.

I still need to decide what the sculpture/art-piece will be, but the idea is, some kind of steel frame that you can pin dollar bills to, and when lit, it burns nicely. Maybe just a 3d ethereum logo.

I will also not be the person to initiate the arson. I will possibly give out Guy Fawkes masks to each contributor, and let them know the dollars are ‘intended to be burned’, but I should not do it myself for legal reasons (see trump presidency).

Along with this, I’ll write an extensive blog article on the definition of money and its role in an economy. Mostly it will be to educate people and defend the experiment. Many will be upset that “the money could have been put to good use instead of wasted”. However, this is because most people do not understand money. If they did, they would see that no actual *Value* is being destroyed in this experiment, and that the deflation directly makes all other money more valuable. The exercise is mathematically equivalent to donating the money equally to all current USD holders in the world (weighted by how much they currently hold).

The Big Theory

In one sentence, you could simply say that I am a skeptic. It’s true. I don’t believe what I’m told. I’m skeptical of authorities, and I’m skeptical of conspiracies as well. Sometimes simply saying “I don’t know” can get you into trouble. If I say I’m unsure about the effects humans have on climate change, you may think I align with some particular political affiliation.

Recently, one of my most controversial skepticisms is The Big Bang Theory. People’s understanding of science goes like this: there are experts, they accept the theory; I’m not an expert, therefore I accept what the experts say. I don’t really have a problem with this line of reasoning; it’s usually correct, except when it’s not.

Before 1920 the established scientific community around the world had plenty of theories that are now known to have been wrong. Edwin Hubble, for instance, was the first to observe that some of the nebulous clouds thought to lie within the Milky Way were, in fact, galaxies of their own. Before that moment, the Milky Way galaxy was considered to be the entire universe. We now know that there are billions of galaxies and we are in just one of them.

So here was a case where the established physics of the day turned out to be wrong. Dead wrong. But let’s please take another second to appreciate just how wrong they were. There are plenty of galaxies just like the Milky Way, many of them bigger. The established understanding of the entire universe was off by more than a few percent, not only a few factors, not even just a couple orders of magnitude. No, we were all wrong by a factor of millions about the fundamental size of our world.

The point being that science is often wrong, but that’s how it grows. I just try to point out the overreach in places I suspect it. We know Newtonian physics simply works. We’ve been building bridges and buildings with it for hundreds of years. You can slam yourself into a brick wall if you’d like a first-hand understanding that every action has an equal and opposite reaction. You’ll get some very conclusive data points. The equations of Maxwell have been used over and over to build our electrical systems throughout the world, and regardless of how strange time dilation seems, Einstein’s contributions to relativity can be shown over and over in labs with great predictability. But other areas of physics, some currently seen as cutting edge, maybe string theory or the multiverse, will one day be usurped by something more provable.

The difficulty is determining how ‘sure’ experts really are. Let me use a more contrived example. Let’s say I put a dollar down at a roulette table on black. Roulette has 38 numbered spots that you can land on: 0, 00, and 1 through 36. If the marble lands on red, or on 0 or 00, the house will win. While the payout is double, my chance of winning is only 9/19 (slightly less than 50%).

If I ask a statistician who should win, they would tell me “the house should win”. Its odds here are 10/19, a bit more than mine. Now here is the problem. What if I ask another statistician who should win? They will say again that it’s the house. If I poll 1000 statisticians, at least 999 will agree that the house should win. The issue is that my poll showing 99.9% of experts agree does not accurately reflect the degree of certainty of the result. Nearly every ‘expert’ opinion we take as status quo has a statistical element of how sure we really are about it. However, the degree of certainty is usually lost by the time a message gets to the general public. After all, the media can’t be expected to dive into a tangent on Descartes to explain how one can never truly ‘know’ anything.
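The odds are easy to check directly: 38 pockets, 18 of them black.

```python
# American roulette: 18 black pockets out of 38 total.
from fractions import Fraction

p_black = Fraction(18, 38)    # my chance on a black bet
p_house = 1 - p_black         # red, 0, or 00

print(p_black)   # 9/19
print(p_house)   # 10/19
```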

So, getting to The Big Bang Theory specifically: why don’t I believe in it, and how sure am I? My issue is that it reeks of knowledge overreach, and that in its attempt to rather intuitively explain a single observation, it requires breaking many other laws of physics.

Many years ago it was observed that the further into the universe one looked, the ‘redder’ things got. The wavelengths of light were stretched. This is known as the Doppler effect, and it happens when things are moving relative to you. It happens with sound waves too, and is very well understood. The deeper into the universe astronomers look, the faster things seem to be moving away. This implies the same physical nature as an explosion: the outermost pieces move the fastest, and everything moves away from everything else.

Everything from the above paragraph is just observable fact, but using this Doppler effect, physicists went on to calculate speeds and decided to back-date our universe (15 billion years?) to when all these particles would have initially started the explosion from a point. It’s a great theory, it’s simple, and slowly mainstream science began to overwhelmingly accept it.

Now that the scientists know it’s fact, they went on to calculate the details. This is where an interesting type of exercise takes place: how can we design an equation to yield our known results? The math showed that particles had to have traveled faster than the speed of light. Now, this alone should be enough to throw away your idea and move forward, but the big bang was already accepted; now we are just drawing mathematical conclusions from it. In order to describe the observation of stretched light hitting us, we’ve decided to break Einstein’s law of relativity (something that can be experimentally reproduced in any lab and is wildly more provable).

It goes further. More recently, astrophysicists have observed that each galaxy is not only moving away from us, but that this movement is accelerating. This observation is counterintuitive to the big bang theory: it shows that the outward movement does not reflect that of an explosion at all. Something else is going on here. The current explanation involves something to do with the idea that ‘space itself is growing between them’ (whatever that means). I tend to doubt that, had this been discovered at the same time as the Doppler shift, we would have ended up with The Big Bang Theory as accepted science at all.

Someday the skeptics will live and breathe with the rest of us, and reveal their criticisms out loud without succumbing to academic and political pressure to conform. Until then I’ll quietly disagree.

Chain Games – Ethereum

I want to start off by saying that a lot of money, and therefore work, effort, and people’s livelihoods, are invested in this stuff, and I want to be somewhat sensitive to that fact. I’ll say some things that may hurt, especially if they are true and you are currently getting burned by them.

My goal is a successful Ethereum ecosystem, first and foremost. No one can claim to care for this technology more than myself – I fell in love with the idea, and changed the course of my life to become part of it. To me, a healthy Ethereum means ONE Ethereum network, on top of which we can all run our separate applications that can talk directly to one another without technical limitation.

OK, but a disagreement has taken place. This, apparently, is something that can happen. Voice and exit should be acceptable responses to disagreements in a free system, so I fundamentally agree with the idea that Ethereum Classic is viable.

I didn’t fully predict what’s happening now, and obviously most people didn’t. Ethereum is bigger than us. We don’t have the controls that many of us thought we did. This is OK. We are actually experiencing the raw power of a freedom engine that the world has never seen before. The blockchain plays no favorites. It simply offers a set of mathematical guarantees, whereas our current system of rules/laws/regulations is complex and interpreted by mere humans. As such, I believe Ethereum may someday create a type of economic stability, a backbone, that our future societies can count on.

In human controlled systems, we will always have different interpretations of what is corruption. For example: Some have said that the creators of TheDAO were corrupt. More blamed the DAO hacker as the corrupt one. In response to this corruption, Ethereum was hard-forked (which quite literally required corrupting the database). The decision seemed to come from the most notable faces of leadership in the space, but it did indeed have a majority of users on board.  You will now hear voices saying The Foundation was corrupt in bailing out the DAO creators, investors, and themselves.

So corruption begets corruption begets corruption. Who is right and wrong here?

…I contend that this is simply the wrong question. The only question I’m interested in, and have ever been interested in is: How can we create the most value and prosperity for society?

Of course, the answer to this is a complex one, and we are bound to disagree about it as well. The behavior of “money” itself is one of the most difficult things to understand in all of economics. It’s true that only we can give it value, but there are, and have been, many different viewpoints on how best to handle its creation and distribution. Generally these decisions have been made by people in power, or, better yet, by democratic majority vote.

This is where I actually get very excited. For the first time in history, a minority has effectively chosen its own monetary policy in a completely free, opt-in currency system. Instead of being kicked along by popular vote, the minority was able to take a tiny chunk and already improve its value. This value is based on the implications of such a currency: the small minority who see value in it are surprising the world by how much value they see.

But let’s get back on track. The real question is how to create our best future from here. I have a solution that might enable the networks to merge.

OK, but what about the few people who believe it’s a good thing that there are now multiple networks? Why do they think that? Because they believe, philosophically, that the two chains have different visions and are better apart. Mostly these are Classic supporters who want a truly immutable blockchain. Well, I would argue that ETH supporters actually want that too. The difference is that they were willing to make a compromise: they simply thought it was worth breaking the social part of the contract in this specific circumstance. The logical ones can admit in hindsight that this was a mistake (umm… it nearly destroyed Ethereum). My solution is based on the idea that the coins, ETH vs ETC, could be up for debate, while forfeiting the protocol debate to Classic.

In a future release of Ethereum, the token itself is defined simply by a contract just like any other. Miners can accept gas payment in any currency of their choosing. In that ecosystem this whole thing could have played out very similarly, but on one chain. Here’s how: the DAO hacker steals 5% of funds, and locks the rest up. Vitalik and the leadership ask us to do a currency swap like so:

  1. A new token, ETHN, is created.
  2. In order to create those tokens, you have to deposit ETH at a ratio of 1 ETH : 1 ETHN.
  3. ETHN can, at any time, be traded back through the contract to release the ETH.
  4. DAO tokens are also accepted into the contract at their pertinent ratio of 100 DAO : 1 ETHN.
  5. Before the darkDAO funds unlock, the window is closed in this direction.
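To make the mechanics concrete, the five steps above can be modeled as a toy Python class (purely illustrative: the name `EthnSwap` and all method names are my invention, and real on-chain behavior such as backing DAO-minted ETHN with recovered darkDAO funds is glossed over):

```python
class EthnSwap:
    """Toy model of the proposed ETH/DAO -> ETHN swap contract."""
    ETH_RATIO = 1     # step 2: 1 ETH  -> 1 ETHN
    DAO_RATIO = 100   # step 4: 100 DAO -> 1 ETHN

    def __init__(self):
        self.eth_reserve = 0        # ETH locked in the contract
        self.ethn_supply = 0        # ETHN minted so far
        self.deposits_open = True   # step 5: closed before darkDAO unlocks

    def deposit_eth(self, amount):
        """Lock ETH, mint ETHN 1:1 (step 2)."""
        assert self.deposits_open, "deposit window closed"
        self.eth_reserve += amount
        minted = amount // self.ETH_RATIO
        self.ethn_supply += minted
        return minted

    def deposit_dao(self, amount):
        """Accept DAO tokens at 100:1 (step 4)."""
        assert self.deposits_open, "deposit window closed"
        minted = amount // self.DAO_RATIO
        self.ethn_supply += minted
        return minted

    def redeem_ethn(self, amount):
        """Step 3: ETHN can always be traded back to release ETH.
        (DAO-minted ETHN is only fully backed once darkDAO funds
        are recovered, so redemption is capped by the reserve.)"""
        assert amount <= self.ethn_supply
        released = min(amount, self.eth_reserve)
        self.eth_reserve -= released
        self.ethn_supply -= amount
        return released

    def close_window(self):
        """Step 5: one-way from here on."""
        self.deposits_open = False
```

The point of the sketch is just that the swap is an ordinary contract: nothing here needs protocol-level support once the gas token is itself a contract.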

The result is that the 85% who were pro-fork would begin using a token distributed precisely how ETH would have been sans the DAO attack. Another 5% or so, myself included, who were against a fork, would have actually gone for this. After the darkDAO became free, the original token would most likely still fetch a good price, although never exceeding ETHN’s. This is based on game theory similar to what’s playing out now with Classic. Maybe most of the ETHN holders would funnel their funds back into ETH. Most likely this model would play out as total ETH being worth ~15% of total ETHN (the amount lost in TheDAO), but the point is that we could choose our currency and monetary policy without having to choose our platform, and the network could live on, agnostic to our regularly overplayed political monetary disputes…

So is this possible to fix retroactively? Short answer: no. The current version of Ethereum only allows the chosen token, Ether, to be used as payment for gas.

However:

  • It is already possible to create a 2-way peg between the networks, where locking a coin on one chain can unlock a coin on the other. This means you could move ETH into the Classic chain and vice-versa…
  • The plan to turn ETH/ETC into a standardTokenContract is already in the pipeline (meaning Ether will have no special privileges). Classic will most likely accept this fork.
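The 2-way peg in the first bullet can be reduced to a simple invariant: coins locked on one chain equal pegged coins circulating on the other. A toy Python model (names are my invention; a real peg also needs SPV proofs or a relay to verify the lock across chains, which is omitted here):

```python
class TwoWayPeg:
    """Toy 2-way peg: locking a coin on one chain releases a
    pegged coin on the other, and redeeming reverses it."""

    def __init__(self):
        self.locked = {"ETH": 0, "ETC": 0}  # originals locked on each chain
        self.pegged = {"ETH": 0, "ETC": 0}  # pegged coins on each chain

    def cross(self, src, amount):
        """Lock `amount` on chain `src`, mint pegged coins on the other."""
        dst = "ETC" if src == "ETH" else "ETH"
        self.locked[src] += amount
        self.pegged[dst] += amount
        return dst

    def redeem(self, chain, amount):
        """Burn pegged coins on `chain` to unlock the originals."""
        src = "ETC" if chain == "ETH" else "ETH"
        assert self.pegged[chain] >= amount
        self.pegged[chain] -= amount
        self.locked[src] -= amount
```

Combined with the standardTokenContract fork in the second bullet, this is the raw material for incentivizing a merge: value could flow freely between the chains while each keeps its own protocol rules.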

I’m currently researching the possibility of combining these concepts to incentivize a natural merge. I am still conceptualizing, so please help spitball. There are no wrong answers.