Ethereum’s Gas Mechanism: Details and Rationale from First Principles.

Ethereum is essentially Bitcoin with a gas mechanism added in order to enable it to run a Turing-complete virtual machine.

In that sense, this gas mechanism was Ethereum’s biggest advancement. There. I said it. It is fairly complex because there are multiple different values to think about. These values are especially confusing when you don’t know their rationale. This paper outlines the thought process that logically arrives at its design (which is exactly as complex as it needed to be, and no more).

There are 4 variables involved in understanding gas:

  • gas-cost
  • gas-price
  • gas-limit
  • block-gas-limit


Design Rationale

First let’s understand Bitcoin’s problem and solution, then observe why Ethereum introduces new issues, then see how the gas-based solution solves them.


Problem: Public blockchains are open networks. Therefore, anyone can DOS attack the whole network by sending millions of transactions at once.

Solution: To mitigate this, let’s require the transaction sender to attach a fee to the transaction.

Problem: Each block only has limited space. With a fixed fee, a block can still become “full”, with no room left for more transactions. The rest of the pending transactions will have to wait an arbitrarily long amount of time.

Solution: What we really want is a market, so the user can offer a competitive fee, and the miner will prioritize by highest payment. Users can attach larger fees if they don’t want to wait in line.

Problem: The transaction data unfortunately can vary in size. So even with a fee, someone can still DOS attack the network by sending a couple of huge transactions.

Solution: To solve this, the miner first looks at the tx inputs and outputs and subtracts them to find its fee, then divides by the total length of the tx data to prioritize the transaction by its price-per-byte of data. An attacker now must pay a price roughly proportional to the amount of work the miner will be performing, and the miner can verify that fact. There is also a limit on the number of bytes that can be included in a single block, so the miner is mostly interested in maximizing this price-per-byte unit specifically.
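The fee-per-byte prioritization above can be sketched in a few lines. This is a minimal illustration, not real Bitcoin code: the transaction fields and the amounts are invented placeholders, and the implicit fee is simply inputs minus outputs as the text describes.

```python
# Sketch of Bitcoin-style fee-per-byte prioritization.
# Transaction fields and values here are hypothetical placeholders.

def fee_per_byte(tx):
    """Fee is implicit: sum(inputs) - sum(outputs), divided by serialized size."""
    fee = sum(tx["inputs"]) - sum(tx["outputs"])
    return fee / tx["size_bytes"]

def fill_block(mempool, max_block_bytes):
    """Greedily pick the highest fee-per-byte transactions that still fit."""
    block, used = [], 0
    for tx in sorted(mempool, key=fee_per_byte, reverse=True):
        if used + tx["size_bytes"] <= max_block_bytes:
            block.append(tx)
            used += tx["size_bytes"]
    return block

mempool = [
    {"inputs": [100], "outputs": [90], "size_bytes": 250},  # 0.040 per byte
    {"inputs": [50],  "outputs": [49], "size_bytes": 500},  # 0.002 per byte
    {"inputs": [20],  "outputs": [10], "size_bytes": 200},  # 0.050 per byte
]
block = fill_block(mempool, max_block_bytes=600)
```

Note how a huge transaction paying a tiny implicit fee loses out to small transactions paying well per byte, which is exactly the DOS defense described above.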


Problem: The scripting language is Turing complete. This means that a transaction script can have jumps and loops, so the amount of computation it triggers is no longer related to the size of the code in bytes. We can no longer rely on the price-per-byte metric to be fair payment. An attacker can create a very small piece of code that pays a significant price-per-byte but causes the virtual machine to loop a million times before completing! There is no way for a miner to know this until it actually executes the million loops.

Solution: Enter gas, the concept that as the virtual machine executes a program, it tallies how much computation it is doing. A million loops would therefore “use up” a million times more gas than one loop. Specifically, each operation of the virtual machine uses a certain amount of gas whenever it runs: ADD costs 3 gas, MUL costs 5 gas, JUMPI costs 10 gas, etc. These values are referred to as gas-costs and are generally static global values defined in a table in the yellow paper.
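Here is a toy gas meter to make the tally concrete. The ADD/MUL/JUMPI costs match the yellow paper’s schedule, but the rest of this “VM” (a flat list of opcodes, no stack, no jumps actually taken) is invented purely for illustration.

```python
# Toy gas meter: each opcode debits its gas-cost as it executes.
# ADD/MUL/JUMPI costs follow the yellow paper; the "program" format is invented.

GAS_COST = {"ADD": 3, "MUL": 5, "JUMPI": 10}

def execute(program, gas_limit):
    """Run a list of opcodes, halting once the supplied gas is exhausted."""
    gas_used = 0
    for op in program:
        if gas_used + GAS_COST[op] > gas_limit:
            return ("out of gas", gas_limit)  # all provided gas is consumed
        gas_used += GAS_COST[op]
    return ("ok", gas_used)

loop_body = ["ADD", "MUL", "JUMPI"]                 # 18 gas per iteration
execute(loop_body, gas_limit=100)                   # completes: ('ok', 18)
execute(loop_body * 1_000_000, gas_limit=100_000)   # halts long before finishing
```

The million-iteration program is only three opcodes of “code”, yet the meter stops it cold, which is the whole point: cost tracks computation, not bytes.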

Instead of specifying a fee amount (as in Bitcoin), the sender specifies a gas-price with his transaction. As the transaction is processed, he is charged this gas-price for each unit of gas used, and this amount goes to the miner. Without both gas-cost and gas-price we cannot have a market between miners and senders; we would have the same problem described for Bitcoin, where a block becomes full and there is nothing to do but wait in line.

And this is about how much people usually know about gas. However, we can’t stop there, because there are still issues. Let’s see what they are.

Problem: We also need a limit on the amount of computation done per block, because blocks need to be processed in a timely manner. Bitcoin solved this by capping the combined bytes of all the transactions in a block (the so-called block size), but this would not be sufficient in a Turing-complete environment, for the same reasons described above.

Solution: And for those same reasons, we limit block computation using gas by defining another value, the “block-gas-limit”. This is a cap on the cumulative gas used by all the transactions of a single block. This value is not tied to a specific transaction; it is a global value associated with the whole network. (As an aside, Ethereum’s block-gas-limit is somewhat dynamic, as opposed to Bitcoin’s block size, which is hard-coded.)

Problem: We have unfortunately just created another issue. This is the part that people rarely understand. As the miner assembles transactions into a block, the cumulative gas counter approaches the block-gas-limit. As they pick each transaction to include (prioritized by highest gas-price), they have less and less room left before reaching the limit. However, they don’t actually know how much gas the next transaction will use until they process it; if it turns out to push the block past the limit, that work is wasted. The sender of the transaction is not at fault either. No one is to blame, yet unusable computation was executed.

Solution: Another value is defined by the sender: the “gas-limit” (confusingly referred to as simply “gas” in the RPC interface). This is a hard cap on the gas the transaction sender is willing to use before execution should halt. It protects the sender from spending more on the transaction than expected, but its main purpose is to give the miner an upper bound on the gas the transaction can consume before processing it. This way, the miner first prioritizes by gas-price, then processes transactions one by one, skipping any whose gas-limit exceeds the remaining space in the current block.

The miner will also check, before processing, that the sender has enough ether to pay for the full gas-limit * gas-price that they specified.

If the transaction does reach the gas-limit, everything in the VM is reverted but the payment is still made from the sender to the miner. This is important because the miner could not have known the transaction would halt, and must be compensated for processing it. The sender, however, can run the transaction locally beforehand and (depending on the contract) ensure it would execute as desired. He must take special care though: some contract calls may halt depending on state that someone else may update in the meantime, because the function is public.


The sender of the transaction creates it and decides how much they are willing to pay per unit of computation (gas-price). They also specify an upper bound on how much computation the transaction may do before halting (gas-limit), which limits their possible ether cost to the product of the two. They can usually pre-calculate almost exactly how much computation is needed, and will send “a little extra”. Only gas that is actually used gets charged (so the gas-limit only affects the sender as an upper cap).
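The sender’s arithmetic is simple enough to write out. All numbers below are hypothetical, chosen only to show the relationship between the upper cap and the actual charge.

```python
# Sender-side cost arithmetic (all values hypothetical).
gas_price = 20_000_000_000   # wei offered per unit of gas (20 gwei)
gas_limit = 60_000           # upper bound, padded with "a little extra"
gas_used  = 51_234           # what execution actually consumed

max_cost    = gas_limit * gas_price  # the most the sender can possibly pay
actual_cost = gas_used * gas_price   # only used gas is charged
unspent     = max_cost - actual_cost # the padding costs nothing if unused
```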

The miner must assemble blocks within the block-gas-limit. They maximize profits by sorting their mempool by highest gas-price and processing those transactions one by one, first checking that each one’s gas-limit is less than the space remaining before the block-gas-limit is reached, and also verifying that the sender has sufficient funds for the gas-limit * gas-price that they themselves specified.
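The miner’s side can be sketched the same way. This is an illustration of the two checks just described, not real client code: field names, balances, and the decision to conservatively reserve each transaction’s full declared gas-limit are all my own simplifications (a real miner subtracts the gas actually used after execution).

```python
# Sketch of the miner's selection loop: sort by gas-price, skip transactions
# whose declared gas-limit no longer fits, skip senders who can't cover
# gas_limit * gas_price. All fields and numbers are hypothetical.

def assemble_block(mempool, balances, block_gas_limit):
    block, gas_remaining = [], block_gas_limit
    for tx in sorted(mempool, key=lambda t: t["gas_price"], reverse=True):
        if tx["gas_limit"] > gas_remaining:
            continue  # would overflow the block-gas-limit; skip, don't stop
        if balances.get(tx["sender"], 0) < tx["gas_limit"] * tx["gas_price"]:
            continue  # sender can't pay for the limit they declared
        block.append(tx)
        gas_remaining -= tx["gas_limit"]  # conservatively reserve the bound
    return block

mempool = [
    {"sender": "a", "gas_price": 5, "gas_limit": 50_000},
    {"sender": "b", "gas_price": 9, "gas_limit": 900_000},  # high fee, no funds
    {"sender": "c", "gas_price": 7, "gas_limit": 200_000},
]
balances = {"a": 10**9, "b": 1, "c": 10**9}
chosen = assemble_block(mempool, balances, block_gas_limit=1_000_000)
# "b" offers the highest gas-price but is skipped for insufficient balance;
# "c" then "a" are included.
```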

All of this combines to create an incentive system where users and miners are able to participate in a market and where promises are made up front, before work has begun. It may seem complicated, but I hope I’ve shown that it’s exactly what is required (and nothing more) to go from Bitcoin Script to the more versatile Ethereum Virtual Machine.

Ethereum is simply Bitcoin with a gas mechanism added in order to enable it to run a Turing-complete virtual machine.

The 6 Confirmation Bias

The more confirmations, the lower the risk of a transaction being re-ordered, and specifically re-ordered such that it produces a different result. In Bitcoin, this different result is that you don’t receive the money because the sender’s balance lacks the necessary funds. With smart contracts, the effect could be anything really: just a different outcome of the transaction.

I’ve been researching layer 2 solutions for PoW blockchains and I find this need for finality systemic. Lightning networks, state channels, sidechains: they all have issues with finality, and they basically all “solve” the problem by defining values for timelocks and the number of blocks needed before the next stage can move forward. This creates multiple problems of its own (it slows things down, and may be insecure during outages).

I believe there are more fundamental or natural ways to approach these issues, and I will try to enumerate some here.

The first prerequisite is to convince yourself of this FACT: an objective chronological ordering of two spacelike-separated events (events far enough apart in space that light could not travel from one to the other in the time between them) is not possible.

This statement is not domain specific. It’s a simple consequence of Einstein’s relativity. We therefore cannot strictly solve the double spend (Bitcoin) or the ordering of state transitions (Ethereum). The question of what came first is impossible to answer, so, as engineers often do, Satoshi relaxed the constraints. Instead of proving which of two events happened first, we merely aim to achieve a consensus as to which did.

Ruminate on that for a moment.

As it turns out, the relaxed constraint, and the Nakamoto Consensus used to achieve it, have been mostly sufficient for humans to engage in commerce. I don’t care which of two transactions came first as long as I can have a definitive answer to that question within a reasonable wait period. The longer I wait, the more confident I am that the matter is settled.

But let’s not fool ourselves. There is nothing magic about 6 confirmations. There is no inherent finality to this system, and as we engineer constructions atop Bitcoin, imposing finality assumptions tends to break their structural integrity.

There is another way to force ordering, as needed for these constructions, that is much stronger than PoW consensus but is often overlooked:


If my transaction data contains a hash-pointer to a previous piece of data, we know which came first (with the only assumption being the cryptographic integrity of the hash). Disputing the ordering is then practically impossible, and we can reconcile this with Einstein’s relativity by observing that we now have a natural constraint: “information” (in the Einsteinian sense) must first travel from the first event’s reference frame to the second’s in order for the hash to become embedded in the second transaction. This imposes a minimum delay between Tx1 and Tx2, precisely long enough that all inertial reference frames observing the events universally agree as to which came first (even absent this hash proof). But enough physics for now. There are real uses that could and should exist for layer 2. The Bitcoin Lightning Network can and does use these properties; Ethereum currently cannot.
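The hash-pointer argument can be shown in a few lines. This is a minimal sketch: the “transactions” are just byte strings I made up, and the only real assumption is the preimage resistance of SHA-256.

```python
import hashlib

# If Tx2 embeds the hash of Tx1, then Tx1's data must have existed before
# Tx2 was formed (assuming only the hash function's preimage resistance).

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

tx1 = b"Tx1: some earlier transaction data"
tx2 = b"Tx2: later data, hash-pointer=" + sha256_hex(tx1).encode()

# Any observer can verify the pointer, and therefore the ordering:
assert sha256_hex(tx1).encode() in tx2
```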

Why? Because Ethereum deals with errors at the VM level, and Bitcoin deals with them locally. Stated differently: in Ethereum, script errors are propagated on-chain, whereas in Bitcoin, they are not propagated past a Bitcoin client node.

Coming soon: How to use these properties, and how to fix Ethereum so it can use them too.



Proof-of-Work Based Block Size Limits

The block size debate rages forward past the effective Bitcoin Cash fork. As with many arguments, there are more than just the two sides people think there are.

Instead of detailing the holistic political philosophies behind each, I will outline here the main drawbacks of each idea, and show another construction which has some key benefits.

With Bitcoin we have the hard 1 MB block limit. The main drawback of this is simply transaction throughput. We can only have a few TXs per second, and therefore the market for these TXs will become more expensive as the Bitcoin network grows more important in society.

Bitcoin Cash has taken the approach of allowing block size to be voted on by miners, with a hard cap at 32 MB. This solves the current problem of throughput, but makes no promises about the future. As much as historical narratives matter to communities, I would imagine that when the 32 MB limit becomes a problem again, another hard fork will gain significant consensus to move it higher. The problem here is that full nodes are the only way to really audit a blockchain, and as blocks get larger, the cost of running one (which is not really incentivized) grows. Without a real incentive mechanism, Bitcoin has remained healthy largely because running these full nodes is nearly free (the cost of a 200 GB hard drive and an internet connection).

Ethereum has a block-gas-limit, which is effectively a block size AND computation limit in one. It uses a miner voting mechanism to determine its value, and historically it has followed direct suggestions from Vitalik. It does not have any hard limit, and therefore can grow as big as the miners decide is most profitable for themselves collectively.

All three blockchains can and will face pressure to raise limits as time goes on. This will never stop. One can attribute the Bitcoin Cash fork to this pressure without any doubt.

Security as a function of block size

I myself certainly am against arbitrarily increasing block size, but it’s important to note that the reason is this subtle but real loss in security.

As computers become more powerful, the cost of running a full node will drop. A drop in computing price can increase security as more users decide to run full nodes, hardening the peer-to-peer network.

Security as a function of hardware cost


It is quite probable that a 1 MB block limit today is more secure than a 500 KB block would have been in 2009.

My proposal is to programmatically combine the two concepts above so that neither miner voting nor community hard forks is used to determine block size. Instead, increased hardware capability itself can be used as a more secure and predictable way to calculate this “decision”.

The difficulty at the last hash (Dn), multiplied by some constant (K), could be used to calculate the size limit of the next block (Bn+1).

This system is imperfect, because it is possible that hashing speed and currency price could advance far faster than state storage and internet speeds.

To leave large conservative margins, let’s use the square root of the difficulty (or possibly its log). This would ensure security only grows with hardware capabilities: block size would grow at a rate slower than hardware advances. Security would improve and so would blockchain throughput, but neither would decrease in sacrifice for the other.

Bn+1 = K √(Dn)

By solving for K against the existing blockchain data, we can allow block size to grow while the cost of running a full node also falls.
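Calibrating K and projecting the limit forward can be sketched as follows. The difficulty figure below is a placeholder I chose for illustration, not real network data; the point is only the square-root relationship.

```python
import math

# Solve for K from the current chain state, then project the limit forward.
# The difficulty value is hypothetical, not real network data.

current_difficulty = 1.93e12      # D_n (placeholder)
current_block_size = 1_000_000    # B_n: the familiar 1 MB, in bytes

K = current_block_size / math.sqrt(current_difficulty)

def next_block_limit(difficulty):
    """B_{n+1} = K * sqrt(D_n): the limit grows with the root of difficulty."""
    return K * math.sqrt(difficulty)

# A 4x jump in difficulty only doubles the permitted block size:
quadrupled = next_block_limit(4 * current_difficulty)
```

This is the conservative margin the formula is after: hardware (as measured by difficulty) must improve fourfold before the validation burden per block is allowed to double.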


Verifiable Public Permissioned Blockchains (Consortium Chains) for scaling ~4 orders of magnitude

For at least a decade prior to Bitcoin, there was an underground movement to solve decentralized money. The principles of asymmetric cryptography had made their way from mathematical theory into useful tools at the disposal of software engineers. It seemed as though it should finally be possible, but it was not. One specific issue remained unsolved: the double-spend problem. Blockchains and proof-of-work were designed together to solve this specific problem, and of course Bitcoin was born.

Now, zooming out from money alone, we have generalized blockchains like Ethereum. With much more than ‘spending’ going on, I feel it is important to define what we are using a blockchain to solve. In this field most of us have a general sense that there is ‘added security’ of some kind, and we know from experience that blockchains can ‘carry value’, but these are merely observed behaviors. Given the tremendous drawbacks that blockchains have, we must be able to define the fundamentals in order to evaluate pros and cons from first principles. The key principle solved* by a proof-of-work blockchain is as follows:

Disagreement about the chronology of events separated over a distance.

In centralized systems the chronology is determined by whichever message makes it to the central computer first. In a decentralized system there is no single point of truth, and determining ‘what came first’ is non-trivial. It is in fact a problem ingrained in the nature of physics that this ordering is merely an opinion** based on the observer. With many observers come many opinions. Note that this general problem distills to the ‘double spend’ problem when mapped to the narrow context of money transfers.

As new technologies become available, new solutions emerge between them which are hard to imagine until each piece begins to solidify. Here I outline an architecture between consortium blockchains, a public PoW chain, and Truebit for the primary benefit of scalability.

Truebit is a very early stage project which is currently being built for Ethereum. It allows for large jobs to be registered on Ethereum but then computed outside of Ethereum while still assuring correctness. The consortium blockchain application outlined here is an ideal case for Truebit’s scalability benefits because it bundles large amounts of computation into a single job.

I’ll assume we all know how the Ethereum blockchain works. Currently ConsenSys is building mini “ethereum-compatible” blockchains for companies around the world who hope to find money-saving solutions or more honest record keeping for their businesses.

These experiments use consortium blockchains mostly because the main Ethereum blockchain is seen as “unscalable”. It’s a real issue. The Ethereum blockchain can only handle limited data throughput, so as more computation is needed, the price for that computation will surely begin to rise. If an engineering team doesn’t have the foresight to notice these scalability flaws, the finance department will eventually see the blockchain ‘solution’ become wildly too expensive.

What is a Consortium Chain?

Let us define a consortium chain as a blockchain where anyone can run a read-only full node, but only a chosen few can write to it (create a block), and where creating a block involves having it signed by a threshold of predetermined, semi-trusted participants.

The trusted parties create a block whenever they want by signing it and sending it to the other nodes. If a threshold of the other nodes sign it, it becomes ‘final’.

However, this construction alone has not solved the timing attack. A few adversarial parties can secretly re-sign past blocks arbitrarily, creating completely different branches in almost any way they choose. The ‘valid’ branch is the one that was signed ‘first’, but PoW is our only proven tool for that determination. Again, the chronology of events separated over a distance is merely an opinion. We need a way to decide which branch came first, and a protocol to adhere to it.

But what if we could leverage the chronology solutions from the public (PoW) chain, and add the benefits of the consortium chain?

Let’s imagine the following partial solution: major public blockchains like Bitcoin have the ability to prove*** that a piece of data existed before a specific time. So let’s devise a protocol in which we take the merkle root of each consortium block and embed it into Bitcoin. That root commits to all the information of the consortium chain at a certain state, including all the signatures of those who signed it. Now what happens if the semi-trusted parties try to rewrite blocks and collude to re-sign things in arbitrarily different ways? This time we can actually differentiate between the two branches. We look to the public chain to see which merkle root was embedded earlier, and our consortium chain protocol is designed to follow that branch.
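The branch-selection rule can be sketched as follows. This is an illustration only: the merkle construction is a minimal textbook version, and the anchor dictionary stands in for actually scanning public-chain blocks for embedded roots; the heights and leaf data are hypothetical.

```python
import hashlib

# Sketch: commit to a consortium block's contents with a merkle root, then
# follow whichever conflicting root was anchored earlier on the public chain.

def merkle_root(leaves):
    """Minimal merkle root; duplicates the last node on odd-sized levels."""
    level = [hashlib.sha256(leaf).digest() for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [hashlib.sha256(a + b).digest()
                 for a, b in zip(level[0::2], level[1::2])]
    return level[0].hex()

def earlier_branch(root_a, root_b, anchor_heights):
    """Protocol rule: follow the root embedded at the lower chain height."""
    return min((root_a, root_b), key=lambda r: anchor_heights[r])

honest = merkle_root([b"tx1", b"tx2", b"tx3"])
forged = merkle_root([b"tx1", b"tx2-rewritten", b"tx3"])  # colluders' branch
anchors = {honest: 500_000, forged: 500_123}  # stand-in for public-chain scan
assert earlier_branch(honest, forged, anchors) == honest
```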

With a sufficient network, this system would be quite secure from our main attack vector. Now let’s take a look at how it scales.

Public Validation

To achieve four orders of magnitude of increased scaling, let’s increase our block-gas-limit by a factor of 10,000 and accept a price decrease of the same factor. This will not increase our use of the mainnet, but it will have several effects: only a few participants can be expected to run full nodes, and nearly everyone else will have to rely on light clients.

Unfortunately, light clients do not validate. They simply follow the protocol rules for the longest chain. If the semi-trusted parties do something invalid, the few full nodes will catch it, but they have no way to inform the light clients of the breach.

In a consortium chain, there should be more than trust in the semi-trusted parties to guarantee validity. It’s too easy for them to suddenly decide to change the rules.

This is where Truebit comes in

Amend the protocol above to embed the merkle root of each consortium block into Ethereum (instead of Bitcoin), placing them specifically into the Truebit contract. Now imagine our consortium chain is a couple of petabytes, with a dozen or so semi-trusted nodes and only a dozen watchdog groups running full nodes. If just one single full node sees an invalid transaction show up, it can rectify it (via the Truebit challenge protocol) in a reasonable amount of time.

The light clients would be receiving block hashes and proofs (of the latest embedded consortium merkle root) from mainnet, so even they will automatically switch to the honest chain. In practice, this system provides both validation and protection against reordering (inherited from the proof-of-work blockchain).


So there’s a problem with this construction, one which the public blockchains are solving behind the scenes in a subtle way. In the system above we have to assume the vast majority of users rely on light clients. Now what happens if a blockhash is added, but the data which created it is not made available? In a pure PoW system the miner must send the validation data to everyone if they want to see their block adopted; if they hesitate to do so, their block will be orphaned and they will lose the reward. In our system above, we don’t have this mechanism. There is no ‘reward’, and there’s no way to ‘orphan’ the block without a Truebit proof of fraudulence. Without the data, such a proof cannot be constructed. It’s also difficult to prove (in any way) that the data was not made available.

A possible solution to these problems may exist within erasure coding research: basically, ways to ask for pieces of data such that if no one in the network responds in a reasonable time, fraud is likely. The linked paper focuses on light clients in a different context. In our system, there would probably need to be a consensus mechanism layered into this piece: some way to bump the bad block after availability issues have surfaced.


* Not exactly solved, but a ‘good enough’ solution for certain applications.

** ‘Opinion’, better defined as a truth which is different for different observers.

*** ‘Proven’ only in the practical sense, not the strict mathematical sense.



Experimental cryptocurrency of Burning Man.

I’d like to create a token for Burning Man inspired by one of its core tenets: the gifting economy. The Ethereum-based token will only be gifted, never bought or sold. Of course, as with Burning Man, this is an honor-based system, but I think it’s likely the coin will largely adhere to this tenet; after all, it has no other utilitarian value. So you can send someone a KarmaCoin the same way you’d send a bitcoin, but simply as a gesture of appreciation. It will have no monetary value, but it may be seen as somewhat of an honor to receive.

Creation: One of the first natural questions is how the initial supply of the token comes into existence. To ensure a strong asset, it will have a fixed supply that is locked after the creation phase. But how much to create? How to distribute it initially, and why? This is the interesting part: we have interested people pin actual paper US dollars to a piece of art on the playa. Then we burn the US dollars, and for every 1 dollar burned, 1 KarmaCoin is created.

I’ll be sitting in the playa sporting 1920s accountant getup, registering everyone’s contributions.

I’ll either be:
1) handing out cards with addresses and private keys (scratch-off) on them. The private keys will be made securely by me (trust me!), and I’ll post all details involved. OR
2) handing out worksheets and dice. Participants literally follow the directions on the page and, with my supervision, roll the dice to create random numbers as appropriate, until they have a 12-word seed and derived address made by themselves. This is cool because it will be educational and fun! Unfortunately we will need a computer to find the public key, but we can remove the reliance on an RNG, which is a huge win (and I can destroy the computer at the end).

Either way, they will go home in a week and import the seed into any ERC20-compatible wallet to find their KarmaCoins waiting for them.

I still need to decide what the sculpture/art piece will be, but the idea is some kind of steel frame that you can pin dollar bills to, and that burns nicely when lit. Maybe just a 3D Ethereum logo.

I will also not be the person to initiate the arson. I will possibly give out Guy Fawkes masks to each contributor and let them know the dollars are ‘intended to be burned’, but I should not do it myself for legal reasons (see Trump presidency).

Along with this, I’ll write an extensive blog article on the definition of money and its role in an economy. Mostly it will be to educate people and defend the experiment. Many will be upset that “the money could have been put to good use instead of wasted”. However, this is because most people do not understand money. If they did, they would see that no actual *value* is being destroyed in this experiment, and that the deflation directly makes all other money more valuable. The exercise is mathematically equivalent to donating the money to all current USD holders in the world (weighted by how much they currently hold).

The Big Theory

In one sentence, you could simply say that I am a skeptic. It’s true. I don’t believe what I’m told. I’m skeptical of authorities, and I’m skeptical of conspiracies as well. Sometimes simply saying “I don’t know” can get you into trouble. If I say I’m unsure about the effects humans have on climate change, you may think I align with some particular political affiliation.

Recently, one of my most controversial skepticisms is The Big Bang Theory. People’s understanding of science goes like this: there are experts, and they accept the theory; I’m not an expert, therefore I accept what the experts say. I don’t really have a problem with this line of reasoning; it’s usually correct, except when it’s not.

Before the 1920s, the established scientific community around the world held plenty of theories that are now known to have been wrong. Edwin Hubble, for instance, was the first to observe that some of the nebulous clouds within the Milky Way were actually, in fact, galaxies of their own. Before that moment, the Milky Way galaxy was considered to be the entire universe. We now know that there are billions of galaxies and we are in just one of them.

So here was a case where the established physics of the day turned out to be wrong. Dead wrong. But let’s take another second to appreciate just how wrong. There are plenty of galaxies just like the Milky Way, many of them bigger. The established understanding of the size of the entire universe was off by more than a few percent, more than a few factors, more than even a couple orders of magnitude. No, we were all wrong by a factor of millions about the fundamental size of our world.

The point is that science is often wrong, but that’s how it grows. I just try to point out the overreach in places where I suspect it. We know Newtonian physics simply works. We’ve been building bridges and buildings with it for hundreds of years. You can slam yourself into a brick wall if you’d like a firsthand understanding that every action has an equal and opposite reaction; you’ll get some very conclusive data points. Maxwell’s equations have been used over and over to build our electrical systems throughout the world, and regardless of how strange time dilation seems, Einstein’s contributions to relativity can be shown over and over in labs with great predictability. But other areas of physics, some currently seen as cutting edge (maybe string theory, or the multiverse), will one day be usurped by something more provable.

The difficulty is determining how ‘sure’ experts really are. Let me use a contrived example. Say I put a dollar down at a roulette table on black. Roulette has 38 numbered spots the ball can land on: 0, 00, and 1 through 36. If the marble lands on 0 or 00, the house wins. While the payout is double, my chance of winning is only 9/19 (slightly less than 50%).

If I ask a statistician who should win, they would tell me “the house should win”. The house’s odds are 10/19, a bit more than mine. Now here is the problem: what if I ask another statistician who should win? They will again say it’s the house. If I poll 1000 statisticians, at least 999 will agree that the house should win. The issue is that my poll showing 99.9% of experts agree does not accurately reflect the degree of certainty of the result. Nearly every ‘expert’ opinion we take as status quo has a statistical element of how sure we really are about it. However, the degree of certainty is usually lost by the time a message gets to the general public. After all, the media can’t be expected to dive into a tangent on Descartes to explain how one can never truly ‘know’ anything.
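For the record, the roulette arithmetic works out exactly, and it is worth seeing how thin the house’s certainty really is:

```python
from fractions import Fraction

# Roulette: 18 black pockets out of 38 total (0, 00, and 1-36).
p_black = Fraction(18, 38)   # = 9/19, slightly under 50%
p_house = 1 - p_black        # = 10/19, the house's edge on this bet

# Expected value of a 1-dollar even-money bet on black:
ev = p_black * 1 + p_house * (-1)   # = -1/19, about -5.3 cents per dollar
```

The house “should win”, yet each individual spin is barely better than a coin flip, which is exactly the gap between expert agreement and certainty.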

So, getting to The Big Bang Theory specifically: why don’t I believe in it, and how sure am I? My issue is that it reeks of knowledge overreach, and that in its attempt to rather intuitively explain a single observation, it requires breaking many other laws of physics.

Many years ago it was observed that the further into the universe one looked, the ‘redder’ things got: the wavelengths of light were stretched. This is known as the Doppler effect, and it happens when things are moving relative to you. It happens with sound waves too, and is very well understood. The deeper into the universe astronomers look, the faster things seem to be moving away. This implies the same physical nature as an explosion: the outermost pieces move the fastest, and everything moves away from everything else.

Everything in the above paragraph is just observable fact, but using this Doppler effect, physicists went on to calculate speeds and to back-date our universe (15 billion years?) to when all these particles would have started the explosion from a single point. It’s a great theory, it’s simple, and slowly mainstream science began to overwhelmingly accept it.

Now that scientists “knew” it was fact, they went on to calculate the details. This is where an interesting type of exercise takes place: how can we design an equation to yield our known results? The math showed that particles had to have traveled faster than the speed of light. This alone should be enough to throw away an idea and move forward, but the big bang was already accepted, and now we were just drawing mathematical conclusions from it. In order to describe the observation of stretched light hitting us, we’ve decided to break Einstein’s relativity (something that can be experimentally reproduced in any lab and is wildly more provable).

It goes further. More recently, astrophysicists have observed that each galaxy is not only moving away from us, but that this movement is accelerating. This observation is counterintuitive to the big bang theory: it shows that the outward movement does not reflect that of an explosion at all. Something else is going on here. The current explanation involves something to do with the idea that ‘space itself is growing between them’ (whatever that means). I tend to doubt that if this had been discovered at the same time as the Doppler shift, we would have even ended up with The Big Bang Theory as accepted science at all.

Someday the skeptics will live and breathe with the rest of us, and reveal out loud their criticisms without succumbing to academic and political pressure to conform. Until then I’ll quietly disagree.



Chain Games – Ethereum

I want to start off by saying that a lot of money, and therefore work, effort, and people’s livelihoods are invested in this stuff, and I want to be somewhat sensitive to that fact. I’ll say some things that may hurt, especially if they are true and you are currently getting burned by them.

My goal is a successful Ethereum ecosystem, first and foremost. No one can claim to care for this technology more than I do – I fell in love with the idea and changed the course of my life to become part of it. To me, a healthy Ethereum means ONE Ethereum network on top of which we can all run our separate applications that can directly talk to one another without technical limitation.

Ok, but a disagreement has taken place. This, apparently, is something that can happen. Voice and exit should be acceptable responses to disagreements in a free system, so I fundamentally agree with the idea that Ethereum Classic is viable.

I didn’t fully predict what’s happening now, and obviously most people didn’t. Ethereum is bigger than us. We don’t have the controls that many of us thought we did. This is OK. We are actually experiencing the raw power of a freedom engine that the world has never seen before. The blockchain plays no favorites. It simply offers a set of mathematical guarantees, whereas our current system of rules/laws/regulations is complex and interpreted by mere humans. As such, I believe Ethereum may someday create a type of economic stability, a backbone, that our future societies can count on.

In human controlled systems, we will always have different interpretations of what is corruption. For example: Some have said that the creators of TheDAO were corrupt. More blamed the DAO hacker as the corrupt one. In response to this corruption, Ethereum was hard-forked (which quite literally required corrupting the database). The decision seemed to come from the most notable faces of leadership in the space, but it did indeed have a majority of users on board.  You will now hear voices saying The Foundation was corrupt in bailing out the DAO creators, investors, and themselves.

So corruption begets corruption begets corruption. Who is right and wrong here?

…I contend that this is simply the wrong question. The only question I’m interested in, and have ever been interested in is: How can we create the most value and prosperity for society?

Of course, the answer to this is a complex one, and we are bound to disagree about it as well. The behavior of “money” itself is one of the most difficult things to understand in all of economics. It’s true that only we can give it value, but there are and have been many different viewpoints on what steps might best handle its creation and distribution. Generally these decisions have been made by people in power, or better yet, by democratic majority vote.

This is where I actually get very excited. For the first time in history, a minority has effectively chosen its own monetary policy in a completely free, opt-in currency system. Instead of the minority being kicked along by popular vote, they were able to take a tiny chunk and already improve its value. This value is based on the implications of such a currency. The small minority that sees value in it is surprising the world by how much value they see.

But let’s get back on track. The real question is how to create our best future from here. I have a solution that might enable the networks to merge.

OK, but what about the few people who believe it’s a good thing that there are now multiple networks? Why do they think that? Because they believe philosophically that the two chains have different visions and are better apart. Mostly these are Classic supporters who want a truly immutable blockchain. Well, I would argue that ETH supporters actually want that too. The difference is that they were willing to make a compromise: they simply thought it was worth breaking the social part of the contract in this specific circumstance. The logical ones can admit in hindsight that this was a mistake (umm… it nearly destroyed Ethereum). My solution is based on the idea that the coins, ETH vs ETC, could be up for debate, while forfeiting the protocol debate to Classic.

In a future release of Ethereum, the token itself is defined simply by a contract like any other, and miners can accept gas payment in any currency of their choosing. In that ecosystem this whole thing could have played out very similarly, but on one chain. Here’s how: the DAO hacker steals 5% of the funds and locks the rest up. Vitalik and the leadership ask us to do a currency swap like so:

  1. A new token, ETHN, is created.
  2. To create those tokens, you submit ETH at a ratio of 1 ETH : 1 ETHN.
  3. ETHN can, at any time, be traded back through the contract to release ETH.
  4. DAO tokens are also accepted into the contract at their pertinent ratio, 100 DAO : 1 ETHN.
  5. Before the darkDAO funds unlock, the window is closed in this direction.
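The arithmetic of the steps above can be sketched as a toy model (purely hypothetical, with amounts in whole tokens; this is not real contract code, and the names are mine):

```cpp
#include <algorithm>
#include <cstdint>

// Hypothetical model of the ETHN swap contract described above.
// Amounts are whole tokens for simplicity; ratios are the ones listed.
struct SwapContract {
    uint64_t lockedEth = 0;   // ETH held by the contract, backing ETHN 1:1
    bool windowOpen = true;   // closed before the darkDAO funds unlock (step 5)

    // Step 2: deposit ETH at 1 ETH : 1 ETHN; returns ETHN minted.
    uint64_t depositEth(uint64_t eth) {
        if (!windowOpen) return 0;
        lockedEth += eth;
        return eth;
    }
    // Step 4: deposit DAO at 100 DAO : 1 ETHN; returns ETHN minted.
    uint64_t depositDao(uint64_t dao) {
        if (!windowOpen) return 0;
        return dao / 100;
    }
    // Step 3: trade ETHN back at any time to release ETH (never closes).
    uint64_t redeem(uint64_t ethn) {
        uint64_t eth = std::min(ethn, lockedEth);
        lockedEth -= eth;
        return eth;
    }
};
```

Note that step 3 stays open even after the deposit window closes, which is what lets holders exit back to the original token whenever they choose.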

The result is that the 85% who were pro-fork will begin using a token which is distributed precisely how ETH would be sans DAO attack. Another 5% or so, myself included, who were against a fork, would have actually gone for this. After the darkDAO became free, the original token would most likely still fetch a good price, though never exceeding ETHN’s. This is based on game theory similar to what’s playing out now with Classic. Maybe most of the ETHN holders would funnel their funds back into ETH. Most likely this model would play out with total ETH being worth ~15% of total ETHN (the amount lost in TheDAO), but the point is that we can choose our currency and monetary policy without having to choose our platform, and the network can live on agnostic to our regularly overplayed political monetary disputes…

So is this possible to fix retroactively? Short answer: no. The current version of Ethereum only allows the chosen token, Ether, to be used as payment for gas. But consider:


  • It is already possible to create a 2-way peg between the networks, where sending a coin on one chain can unlock a coin on the other. This means you could move ETH into the Classic chain and vice-versa…
  • The plan to turn ETH/ETC into a standardTokenContract is already in the pipeline (meaning Ether will have no special privileges). Classic will most likely accept this fork.

I’m currently researching the possibility of combining these concepts to incentivize a natural merge. I am still conceptualizing, so please help spitball. There are no wrong answers.



Crisis Escalated, 2 Ethereums Forever

Unfortunately, I think we’ve made the DAO hack worse by escalating the situation to an Ethereum hard fork. Vitalik estimated months prior that at LEAST 90% of the community would have to be on board for a hard fork to be successful. I don’t know where we lost sight of that and thought we were simply doing a majority vote.

I have to admit I didn’t fully see this future (two living forks), but now that it’s here, there will be some interesting consequences I’d like to share:

So in order to fully protect their Ether, every exchange now has to ‘split’. They can accomplish this by creating a transaction which outputs their Ether into two different addresses depending on which fork the transaction is being run on. This one transaction is sent to both networks, and on each it has a different result (Ether sent to different addresses).

Now you have your ETC in one address and ETH in another. Why do this? Essentially, if you don’t, some confusing things can happen which may lead to assets on one of the networks being stolen. Here’s how:

Let’s say I want to buy some ETH from someone. He has some stored away from last year, and sends it to my wallet. But now I can take his signed transaction and replay it on the Ethereum Classic chain, and boom: I have stolen ETC. You may say that this doesn’t matter, maybe he doesn’t care about that chain. Well, unfortunately that’s ridiculous, because it’s worth real money (anyone who has no desire for ETC should send theirs here: 0x82ab1649f370ccf9f2a5006130c4fca28db2587e ;) ). Or maybe you say fine, that ETC just comes with it. Great, but that should factor into the price.
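The mechanics of a replay attack can be modeled in miniature: if the signed data doesn’t commit to which chain the transaction is for, the same signature verifies on both chains. Here is a toy sketch (the names are mine, and a plain hash stands in for a real signature; Ethereum later standardized a fix along these lines as EIP-155, mixing a chain id into the signed payload):

```cpp
#include <string>
#include <functional>

// Toy model of transaction replay. A "signature" here is just a hash of
// whatever data the sender committed to -- not real cryptography.
struct Tx { std::string payload; long chainId; };

// Pre-fork style: the chain id is not part of the signed data,
// so the same signature is valid on both ETH and ETC.
size_t sign(const Tx& tx) {
    return std::hash<std::string>{}(tx.payload);
}
// EIP-155-style fix: the chain id is mixed into the signed data,
// so the transaction is valid on exactly one chain.
size_t signWithChainId(const Tx& tx) {
    return std::hash<std::string>{}(tx.payload + "#" + std::to_string(tx.chainId));
}
// A node on `chainId` accepts the tx if the signature matches what a
// sender on that chain would have signed.
bool validOn(long chainId, const Tx& tx, size_t sig, bool chainAware) {
    Tx local{tx.payload, chainId};
    return sig == (chainAware ? signWithChainId(local) : sign(local));
}
```

With the legacy scheme, a transaction signed for one chain verifies on the other too, which is exactly the theft scenario above.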

Now that you understand replay attacks, let’s look at some of their other implications, because it seems to me like virtually all digital assets on the network should go through various forms of this splitting. Let’s say you have a digital land registry. Land rights are tied to addresses and can be sent to other addresses. If I send you my land right, I could split it as explained above, and then later sell someone the land right which is only viable on one chain. I hope it’s the ‘right one’. I can keep my land right on the other, and you may not even notice. Which version of Ethereum is the ‘official’ one for this land rights contract anyway? It will probably come down to the history of the specific app, and which chain it decided to use at conception.

We may all have to remember which version of Ethereum a particular service is meant to be hosted on. GOD DAMN IT WHY CANT THESE ANTI-FORKERS JUST LET ETH-CLASSIC DIE! Well, here is one reason: when I choose which fork to run my service on, wouldn’t I want the one with cheaper gas prices for me and my clients? Because the price of ETC is significantly lower, so are its transaction fees. Also, when I make a decision on where to register the digital assets that my clients will hold for decades to come, which chain might be tempted to destroy or redistribute them, and which chain can I trust to be immutable?

The problem is that this will happen again. Let’s not kid ourselves: Ethereum is only going to get bigger and more valuable. Next time it might be 500 million in assets; maybe that will only represent 1% of all Ether. While we perform that hard fork, who knows if the US government may require extras be thrown in, like redistributing funds held by known terrorists. What about the hard fork after that? Governments might start to order hard forks in certain countries. Maybe China would end up with its own forced fork that never really gets picked up internationally; they could make it illegal to interact with the old fork. In a future with all these forks out there, how do you pick one? At that point the only one that stands out is Classic – the only network clean of all (impromptu) forks.

Yes, it looks like there will be more forks around. I don’t know how long before it happens again; maybe a whole decade. But blockchains are forever, and I hate to see communities fizzle apart, ending up on different chains, for all of the technical reasons above and many more network effects, but also because… it’s confusing!

Will you be for or against the next hard fork? Should we ever bail out another bad contract (let’s say it’s a billion-dollar hack)? What if the purpose of the fork is to destroy an assassination market?


My Arduino Project: “Arduometer” (Bike Odometer and Calculator)

[image]

The Finished Product

[image]

The original goal of the project

The idea of a bike computer seemed like a great way to experiment with the power of an Arduino. I could easily program the Arduino to calculate a multitude of useful information using the wheel motion as input. I decided to use a sensor at the front fork that could detect every time the wheel made one rotation. Using those detections and some simple math, it was easy to calculate many useful data values such as distance traveled, speed, and acceleration. I also included the following features: trip time, average trip speed, and the ability to store data for multiple trips. Another included value was the TOTAL distance traveled in the lifetime of the bike (a value that persists even when the device is turned off, using the chip’s nonvolatile memory).

Breaking the project up

It became clear early on that this project, although seemingly simple, would require multiple disciplines that were vastly different from each other and could easily be considered projects of their own. The main breakdown for this, and probably many other Arduino projects, was:

  • Electronic circuitry

This alone included multiple steps and parts, and it was probably where most of the time got spent: breadboard prototyping, drawing schematics, board layout, board manufacturing, and soldering.

  • Software

This was the most fun part for me. Working in a high-level language for the Arduino, yet knowing that you are controlling individual pins and voltages directly, is a powerful thing. Yes, the display board has libraries and functions that do a lot behind the scenes, but most of the work was dealing with inputs/outputs and making calculations.

  • Hardware

    [image]

In this case, I actually mean all hardware other than electronic hardware. This included attaching a bike mount to the project and mounting the sensor to the fork/spokes, and it usually would have necessitated an enclosure, but in this case I got around that.

Step By Step Process (concept to completion):

The main software implications of an Arduino odometer seemed simple enough, so one of the first questions became: what to use as a sensor? I didn’t want to buy an expensive part, because that would dilute my bottom-up approach, and I realized one of the most basic principles of electricity would provide a cheap and easy solution: pushing a magnet past a solenoid creates a voltage. I would put a magnet on a wheel spoke and attach a solenoid to the side of the front fork. Thus, every time the wheel made one rotation, the magnet would pass the solenoid, causing a “blip”, or voltage transient, across its two terminals. The terminals would then be wired to the Arduino’s analog input, where a main software loop would constantly poll the input, triggering a set of actions every time a voltage threshold was reached.

I first made a solenoid with a sewing bobbin wrapped in magnet wire, and tested it by swinging the magnet past the solenoid while monitoring the voltage on my friend’s oscilloscope. We were getting voltage spikes of maybe only a millivolt or two, so I decided to try an off-the-shelf 100mH inductor instead of the homemade solenoid. The voltage spikes were much higher, I think about 10-20 mV. This made the project even simpler, because you can just buy an inductor for about 20 cents.

For the software portion of the input, I used a polling loop that looked for voltage spikes above a certain threshold. I also applied a capacitor in a low-pass-filter configuration and a pull-down resistor (see schematic) to eliminate spurious input. When a blip was triggered, the code would not look for another one for a few milliseconds (to discourage multiple triggers during a single pass).
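The threshold-plus-lockout logic can be sketched as a pure function, testable off the board (the threshold and lockout values here are illustrative, not the ones from the actual sketch):

```cpp
// Sketch of the blip-detection logic from the main polling loop.
// Values are illustrative, not the real Arduometer constants.
const int kThresholdMv = 15;          // reading that counts as a blip
const unsigned long kLockoutMs = 50;  // ignore re-triggers during one magnet pass

struct Detector {
    unsigned long lastBlipMs = 0;
    bool firstBlip = true;

    // Returns true when this sample should count as a new wheel rotation.
    bool sample(int milliVolts, unsigned long nowMs) {
        if (milliVolts < kThresholdMv) return false;          // below threshold
        if (!firstBlip && nowMs - lastBlipMs < kLockoutMs)
            return false;                                     // same pass, ignore
        firstBlip = false;
        lastBlipMs = nowMs;
        return true;
    }
};
```

On the board, `sample()` would be fed `analogRead()` and `millis()` values inside the main loop.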

At this point all I had was an Arduino and the one circuit specified, on a breadboard. I was coding the main loop with magnet in hand, temporarily monitoring results through a serial-out window on my MacBook.

Once I was satisfied with the stability and accuracy of the main polling loop, I set out to calculate the statistics. Distance came first: simply the total number of blips times the wheel circumference. Next came speed: the wheel circumference divided by the time between the last two blips. For some of the calculations I did some fancier stuff, taking the average of the last few seconds rather than just using the most recent data; I thought this would improve accuracy and readability slightly.

[image]

Then came time to add the display… back to hardware. I pulled up the Arduino Hitachi display example and began plugging it all in on the breadboard. After successfully uploading the example code, I began copying it over into my odometer project. Then it was time to display some info of my own, on the actual display instead of the computer screen. The Hitachi libraries make it easy to display numbers, letters, and symbols, but there’s not much you can do to change the size of the text or draw pixel by pixel. I’m sure you can do both of these things, but the main problem is that the display itself has gaps in its pixel matrix between the spots where each symbol pretty much has to go (the spaces between the dark rectangles on the bottom left exist because there are no pixels there).

I constructed a few different display “modes” in software: one for speed and distance, another for acceleration and trip time, and another for wheel diameter. Then I realized I wanted buttons to switch between the modes and to change the wheel diameter (for use on different bikes). For the buttons, I chose three momentary ones, and I used the interrupt pin to trigger a button press. I’ll get into the wiring challenges of this later, but first I’d like to discuss a problem I came across at this point pertaining to multitasking, which I’m sure is universal to many software challenges.

The idea of multitasking quickly became apparent with this project because the fundamental polling loop was set to check the analog input constantly, waiting for a transient. The problem was that the second I wanted to use the Arduino for any other task, I couldn’t. If I took program flow away from the blip monitor for even 5 milliseconds, I could potentially miss a rotation.

All I can say is that I have since found solutions for my specific project, which I will lay out, but I still have no real understanding of the general subject of multitasking, or of any “universal” principles used to tackle these types of challenges, which I’m sure must be very common.

[image]
This circuit is extremely useful for any custom Arduino project. It’s basically a standalone Arduino, and it can be made with $4 in parts.

The solution I was able to use was to make all calculations happen right after a blip is triggered, so by the time the wheel comes back around for the second rotation, the calculations have long since finished and the main polling loop is back in play.

Next I added the buttons. There were more buttons than interrupt pins, so I attached each button to the same interrupt pin (through a logic gate that essentially caused all three to be “OR”ed with each other), as well as to its own unique digital input. The interrupt would trigger a function that checked the three inputs to determine which button exactly was pressed. Each button also needed an electrolytic capacitor to filter out noise and a resistor to prevent floating inputs. The buttons have a little bit of software debounce as well.
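The decode step inside the interrupt handler amounts to something like this (the button names and priority order are mine, purely illustrative; on the board the three booleans would come from `digitalRead()` calls):

```cpp
// All three buttons share one interrupt (OR'ed in hardware); the handler
// reads each button's own digital input to work out which one fired.
enum Button { NONE = 0, MODE = 1, HOME = 2, ACTION = 3 };

Button decodeButton(bool modePin, bool homePin, bool actionPin) {
    if (modePin)   return MODE;
    if (homePin)   return HOME;
    if (actionPin) return ACTION;
    return NONE;   // spurious interrupt, e.g. bounce that already settled
}
```

Returning `NONE` for the no-pin case is one way to tolerate a bounce-triggered interrupt that fires after the button has already been released.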

The next software hurdle was to enable a display of the “total distance” ever traveled on the bike. To do this I needed access to permanent storage (that could keep data while turned off). Luckily, the Arduino’s ATMEGA328 chip has permanent storage: its EEPROM. The only problem is, it’s a little awkward to use, since it stores one byte at a time, and I needed a value that could go from 0.1 miles to 10,000.0 miles. I had to create a special function that did this. I found a great Arduino chat room where some friendly people helped me out (connection details – freenode Web IRC). Another notable challenge with this permanent storage is that you have to write some code that sets the value of “total distance” to zero, but then, before you complete the final product, you have to erase that line of code (or it will reset the value to zero every time you turn it on). Lastly, per the chip’s data specs, you should not write to this permanent storage more than 100,000 times.
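The byte-splitting trick can be sketched like this (a mock array stands in for the real one-byte-at-a-time storage calls, and the function names are mine; the distance is kept in tenths of a mile, so 10,000.0 miles is 100,000 tenths, which needs more than two bytes):

```cpp
#include <cstdint>

// Sketch of packing a lifetime-distance counter (in tenths of a mile,
// 1 .. 100,000) across four one-byte storage cells. fakeEeprom stands in
// for the chip's real byte-at-a-time permanent storage.
uint8_t fakeEeprom[4];
void byteWrite(int addr, uint8_t v) { fakeEeprom[addr] = v; }
uint8_t byteRead(int addr)          { return fakeEeprom[addr]; }

void storeTenths(uint32_t tenths) {
    for (int i = 0; i < 4; ++i)
        byteWrite(i, (tenths >> (8 * i)) & 0xFF);   // little-endian byte split
}

uint32_t loadTenths() {
    uint32_t tenths = 0;
    for (int i = 0; i < 4; ++i)
        tenths |= uint32_t(byteRead(i)) << (8 * i); // reassemble the bytes
    return tenths;
}
```

On the real chip the two helper functions would be the byte read/write calls of the Arduino EEPROM library.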

[image]
Another unbelievably useful picture. Be careful when dealing with an ATmega chip: its pin mapping and the Arduino pin mapping are completely different!

Now comes a very important step that applies to many, many Arduino projects: switching to just the pre-programmed ATMEGA chip and a battery. There are a few more parts needed to do this, but you don’t want the whole Arduino development board. You don’t need the headers, the USB connection, or the power jack. But what do you need for an Arduino to stand alone? Basically everything needed to power the chip from a 9-volt battery is explained here: Arduino – Setting up an Arduino on a breadboard. When I got to the USB stuff, I was done, because I used the Arduino board to program the chip instead of their expensive breakout board. The pieces I had to order online were the 16MHz crystal ($1), a narrow 28-pin socket ($.19), and a bootloaded ATMEGA328 chip ($3). The rest of the parts were available at Fry’s Electronics.

Once I had this breadboard Arduino, I just had to incorporate my project so far with it. Instead of rewiring everything, I went directly to the schematic, including all the connections from both breadboards. When working with the chip directly, it was important to reference the pin layout (seen above), which has a very different numbering scheme than the Arduino does.


Board Layout


[image]

[image]
The free Eagle CAD software had quite a steep learning curve.

The next step was to build all of this on a permanent board. I had the option to use a protoboard, but the wiring was rather extensive and it would have gotten very messy, so I decided to make my own printed circuit board. I did this with the help of a friend, using the toner-transfer method. In order to do this, the board first had to be designed, which meant downloading a program called Eagle CAD and rebuilding the schematic within it (this also required importing a specific Arduino library into the program). Below is a picture of that digitized schematic.

Once the schematic is complete, you set dimensions and place the parts around the board. There is an “autoroute” feature that creates all the traces and pads in a format that can be sent out or printed. To print this board, I bought double-sided “copper clad boards” online.

[image]

The board layout is printed on glossy paper and then ironed onto the copper board, transferring the toner to the board. Next, the board is dipped in an acid mixture, and the copper covering dissolves everywhere the toner is not. The toner is then removed using acetone, revealing the copper traces and pads underneath. The board then had to be drilled using a Dremel tool mounted in a drill press, with tiny (very fragile) bits. I had real problems with this method, especially during the ironing step, because the toner wasn’t transferring perfectly, and a single hairline break in any trace makes that trace useless. I would recommend spending slightly more and using the UV copper board method instead.

Next it was time to solder the pieces on. This went pretty well, but all the breaks in the traces caused the finished product not to even turn on. About 6 more hours of debugging and it finally worked!

Mounting and Hardware:

I already explained about the magnet on the wheel and the inductor at the fork of my bike. I glued the wires up toward the handlebars and put a plug connector at the end which could be attached to the computer board. The computer board had to be mounted securely on the handlebars of my bike, but also be removable for when I left the bike locked up or in the rain. To do this I used a piece of hardware from an old bike light: a handlebar mount with a slotted groove that the light would slide into. The next step involved some creativity. I put plastic wrap over the slotted groove and mushed epoxy putty into it. Then, once it was starting to harden, I pulled it out and basically had the detachable slide piece I needed. I removed the plastic wrap and glued the new piece to the back of my board. It paid off. Now I can mount the board, plug in the inductor, turn on the switch, and I’m good to go!


The final product has three buttons: one that switches between display modes, one that brings you back to the main display mode (for ultra-simple use), and one that, when held down for 1.5 seconds, performs rarely used tasks (depending on which display you are on). There is info for a trip A and a trip B; holding down the third button resets whichever trip you are viewing. There is a display mode for acceleration and RPMs, and another for wheel diameter. By holding down button three while viewing wheel diameter, you enter a mode where you can change the diameter. There is one display for lifetime distance. I’ve gotten the bike up to about 26 miles per hour, and the distance tracker seems to agree well with Google Maps.

Issues with the Final Product:

I should have picked a screen that lights up (I know one of the pictures shows a lit-up screen, but mine didn’t have that feature); the device is useless after dusk. I would have loved an e-paper display, but they are not yet cheaply available to hobbyist programmers. Another problem is the position of the main switch: it occasionally gets switched off by my knee while riding. It’s rare, but on a very long trip, when it gets turned off halfway through, it’s disappointing because you want to know exactly how far your long journey has been. Granted, this has only happened twice ever, and I have since moved the position of the mounting bracket.