Warren Buffett’s Ground Rules (2016) is a study of the investment strategy of one of the most successful investors of all time. By analysing the semi-annual letters Buffett sent to partners in the fund he managed from 1956 to 1970, author Jeremy Miller isolates key strategies that investors can use to play the stock market to their financial advantage. Compiled for the first time, and with Buffett’s permission, the letters spotlight his contrarian diversification strategy, his almost religious celebration of compound interest, his preference for conservative rather than conventional decision making, and his goal and tactics for beating the market by at least 10 percent annually.


Miller shows how these letters offer a rare look into Buffett’s mind, along with accessible lessons in control and discipline – effective in bull and bear markets alike, and in all types of investing climates – that are the bedrock of his success. The author, Jeremy Miller, is a New York-based investment analyst.


Key Learnings

Playing the stock market is not easy. Success won’t happen overnight, and it won’t happen at all if you don’t take the process seriously. It requires careful measurement, consistency and, most importantly, patience.


Be patient. Careful investment, rather than frenetic speculation, is more likely to create value.

There’s a basic rule Wall Street types don’t want us to know. It’s a secret that has helped Warren Buffett amass an $88.9 billion fortune. Investing isn’t rocket science, but there’s a catch: people frequently mistake speculation for investment. There’s a difference. Speculators obsessively follow unpredictable market fluctuations, buying and selling stocks in the hope of getting rich quick. Investors, on the other hand, buy businesses based on a careful assessment of their inherent value.

The well-known billionaire Warren Buffett is an investor. He attended business school in New York, but he hails from the Midwest, and his methodical, straight-talking approach characterises both his letters and his overall investment philosophy. Inspired by his mentor Ben Graham, Buffett reasoned that the prices of most financial assets, like stocks, eventually fall in line with their intrinsic values. When you buy a stock, you’re buying a tiny fraction of a business. Over time, a stock’s price changes to reflect how the business is doing. If profits are good, the business’s value grows and the share price increases. But if the business loses value – after a big scandal, for example – the share price falls. Sometimes the stock price doesn’t accurately reflect the value of a business. Investors who buy shares in undervalued companies, then patiently wait for the market to correct itself, can’t help but make money.

The key, though, is to focus on what the market should do, not when it should do it. If you trust that the market price will eventually reflect the actual value of a business, you can expect to eventually make a profit. That trust helps you avoid selling just because the market dips. And your patience is rewarded with compound interest, the key driver of value in long-term investments. Compound interest is the process of continuously reinvesting gains so that every new cent begins earning its own returns. Einstein is often credited with calling compound interest the eighth wonder of the world, remarking that “people who understand it earn it, and people who don’t understand it pay it.”

Buffett’s favourite story illustrating the power of compound interest involves the French government’s purchase of the Mona Lisa. King Francis I paid the equivalent of $20,000 for the painting in 1540. If he had instead invested the money at a 6 percent compound interest rate, France would have had $1 quadrillion by 1964.
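The arithmetic behind that claim is simple exponential growth. Here’s a minimal sketch in Python, using only the amount, rate and dates quoted above, as a rough check on the figure:

```python
# Rough check of the Mona Lisa example: $20,000 compounding at 6 percent a year
# from 1540 to 1964 (the figures quoted above).
principal = 20_000
rate = 0.06
years = 1964 - 1540  # 424 years

value = principal * (1 + rate) ** years
print(f"Value after {years} years: ${value:,.0f}")
# Prints a little over $1 quadrillion, in line with Buffett's point.
```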


Successful investors all have one thing in common – they compulsively measure.

Warren Buffett has always been a supremely confident investor. Even when he was a relatively inexperienced young fund manager, he saw his main competition as the Dow Jones Industrial Average – the famous New York stock index. His one job was to grow his fund at a faster rate than the market. It wasn’t easy. We all know the stress of checking your bank balance after a big weekend or stepping on the scale when trying to lose weight. For a lot of people, the anxiety of failure might be too much to handle. But to be a successful investor in the mold of Warren Buffett, you’re going to have to get over those anxieties. Careful measurement, clear-eyed analysis, and a steady hand even when you’re down are the only ways to succeed as an investor. 

The difficult truth is that most people aren’t shrewd enough investors to beat the market. Even for Buffett, delivering returns greater than 7 percent annually was a serious achievement. But the miracle of compound interest means that you only have to do a little better than the market to create the potential for serious financial gains. Knowing what to measure – and then measuring it properly – is the only way to know if you’re on the right track. You need to compulsively measure: monitor your investments every day, keep track of how they’re doing relative to past performance, and stay patient when your chips are down. It takes energy, commitment and honesty. In short, you’ve got to know when to hold ‘em and when to fold ‘em.

You’re not just measuring your results against past performance, though. Each year’s results should also be measured against the market. That means if the market is down and you’re slightly less down, it still counts as a win. When Buffett was a young investor, simply keeping pace with the market was a lot harder than it is now. It’s easier today thanks to index funds. Pioneered in 1975, index funds combine slices of many different companies on a given stock exchange, so their returns broadly match the gains and losses of the overall market. Buffett advises those who don’t have the time or energy to devote to their investments to buy the index. Otherwise, compulsive measuring is the only way to determine how you’re doing.
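To make that kind of relative measurement concrete, here’s a minimal sketch with invented yearly figures: what matters is the gap between your return and the index’s, not the raw number.

```python
# Hypothetical yearly returns (invented numbers) for a portfolio and an index.
# The point: a down year still counts as a win if you lost less than the market.
portfolio = {1960: 0.12, 1961: -0.04, 1962: 0.18}
index     = {1960: 0.09, 1961: -0.10, 1962: 0.15}

for year in portfolio:
    edge = portfolio[year] - index[year]
    verdict = "beat the market" if edge > 0 else "lagged the market"
    print(f"{year}: portfolio {portfolio[year]:+.1%}, index {index[year]:+.1%} "
          f"-> {verdict} by {abs(edge):.1%}")
```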


Young investors should focus on buying shares in undervalued companies, which Buffett calls Generals.

Once you’ve got the measuring part down, you can start developing your personal investing style. Each investor is a unique snowflake: your investing style should reflect your personality, goals, funds and, especially, your competence set. If you’re an alpaca rancher, for example, you shouldn’t try to get rich off computer chips. And if you’re a new investor with less money, you actually have an advantage over investors managing huge funds, because you can invest in small companies not listed on the stock exchange and make big percentage gains. Once you’re managing more money, you need much bigger deals to move the needle on your overall results.

When Warren Buffett started his fund in 1956, he had just over $100,000 to play with. By 1960, his fund had ballooned to $1,900,000. He attributed this incredible rate of return to his focus on small, relatively unimpressive investments. Along with his patient temperament, Buffett’s best asset as an investor is his skill at determining the value of a company. In the early years, he favoured buying Generals, which he defined as “fair businesses at wonderful prices.” This means that the companies were of middling quality, but, for some reason, priced under market value. Once again, Buffett’s patience paid off. Most of the Generals he bought stayed in his portfolio for years.

Buffett also liked buying shares in companies that were worth more dead – that is, in liquidation – than alive. That way, if the business started failing, he could liquidate it and not lose money. This type of business is called a net-net. Ultra-cheap stocks and net-nets are not glamorous; in fact, Buffett referred to them as his “cigar butts.” But 12 years into his career as an investor, Buffett looked back and determined that this category of investment had delivered the best average returns.

As his success grew, Buffett’s definition of value changed. He began looking beyond cheap stock prices toward the quality of a business and whether its earnings were sustainable. With experience, he transitioned from buying fair businesses at wonderful prices to buying wonderful businesses at fair prices. And once you have more experience as an investor, you might want to get involved in the management of one of your investments.


Assuming more risk in markets you know well can yield even more reward potential.

As a kid, Warren Buffett would buy a 25-cent six-pack of Coca-Cola from his grandfather’s store and sell the individual bottles to his pals for a nickel each. There was certainly a risk involved: if the neighbourhood kids weren’t thirsty that day, he’d be left with bottles he couldn’t move. But on a good day, he earned 20 percent on every six-pack. Buffett didn’t know it, but with the 25-cent Coca-Cola deal he’d done his first arbitrage – the simultaneous purchase and sale of the same asset in different markets in order to profit from small differences in its price. He was capitalising on the price difference for one product – his Coca-Cola – in two different markets: the store and the neighbourhood kids.

Arbitrage is a way to bet on what you think a company will be worth in the near future. Returns on arbitrage bets can be very attractive, but to get them right you have to know the businesses, and their respective markets, intimately. When the product in question is a piece of a company, the play is called merger arbitrage. Merger arbs were one of Buffett’s specialties during his early years as an investor: he would buy stock in a company at one price, betting that it would be worth more once the company merged with another. Returns on merger arbs may be enticing, but the risk can be great. That’s why arbitrage is usually tricky for the average investor. Unless the deal is in your specialised field and you’ve studied it inside and out, it’s probably best to leave it alone.
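Still, the underlying arithmetic is worth seeing. Here’s a small sketch with invented prices and timing: you buy below the announced deal price, and your return depends on the deal closing and on how long that takes.

```python
# Illustrative merger-arb arithmetic (all figures invented): a stock trades at
# $47 after a $50-per-share takeover is announced.
buy_price = 47.0        # market price today
deal_price = 50.0       # price paid per share if the merger closes
months_to_close = 6

gross_return = (deal_price - buy_price) / buy_price
annualised = (1 + gross_return) ** (12 / months_to_close) - 1

print(f"Return if the deal closes: {gross_return:.1%}")               # 6.4%
print(f"Annualised over {months_to_close} months: {annualised:.1%}")  # 13.2%
# If the deal falls through, the stock may drop well below $47 --
# which is why the risk can be great.
```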

Experienced investors who don’t want to mess with merger arbs can exert influence another way, through what Buffett aptly referred to as Controls: buying a large enough piece of a company listed on the public stock exchange that you gain the right to influence how it’s run. As you might imagine, this type of deal can lead to stressful confrontations between company owners and new board members who may demand drastic operational changes. Buffett was vilified for these deals early in his career, even though he saw himself as saving companies by removing inefficiencies. As he matured, Buffett stopped getting involved in Controls, which could turn messy and uncomfortable, with layoffs or firings. His core investment principles, though, have never changed.


Your methods may change with the market, but your core principles should stay the same.

Following the crowd can be an effective strategy. If everyone’s running away from something you can’t see, it’s probably a good idea to join them. But when it comes to investing, it can be problematic. By definition, the majority can’t do better than the average. So to be a successful investor, you have to train yourself to go against the crowd. Warren Buffett’s investment style reveals that there’s only one instance in which you should put your money on the line: when you totally understand the whole picture and the best course of action. In all other cases, you should pass. Even if everyone else is making money.

Buffett has always been a cautious investor. When he began his career as a professional investor in 1956, the stock market was generally considered to be too high. But instead of correcting itself, the market continued to creep up. Buffett not only stayed true to his strategy, he doubled down on his ultra-conservative approach. He knew a correction was coming; he just didn’t know when. Meanwhile, other hotshot investors were making big money. In New York, Jerry Tsai had invented a new kind of investing that took advantage of the general public’s new appetite for speculation. Tsai’s approach was the opposite of Buffett’s: he’d jump in and out of stocks at the drop of a hat. It worked, for a while. Tsai earned fabulous sums for his firm, even as his fund lost and gained wildly with market swings. But Buffett remained convinced that it wouldn’t last.

When the market reached a new high in 1966, Buffett finally acted. He announced that he wouldn’t be accepting new partners and halved his performance goal. Remarkably, his fund continued to do very well: 1968 was its best year, with a 58.8 percent return. But Buffett knew when to fold his hand. He was done risking his fortune on a market that was bound to crash. Tsai saw the end coming too, and sold his fund at just the right moment in 1968. In the early 1970s, the Dow suffered its most spectacular crash since the Great Depression. Buffett’s net worth was unaffected, because he had taken his money off the market. Tsai himself dodged defeat, but his investors lost 90 percent of their portfolio value. Buffett’s courage of conviction is a worthy goal for all investors, if not for people more generally.





This book is an easy-to-digest introduction to blockchain, written by Stephen Williams. It explains what blockchain is, how it works and what the implications are for the future of our world. It makes clear why the technology is so important, showing how it can help with everything from voting rights to economic inequality, corporate transparency and climate change. Williams translates the complexity into digestible, straightforward descriptions for readers who don’t know tech, and explains all of blockchain’s most important aspects: why this so-called digital ledger is unhackable and unchangeable; how its distributed nature may transfer power from central entities like banks, governments and corporations to ordinary citizens around the world; and what its widespread use would mean for society as a whole.

Stephen Williams is a journalist and author. He has written business and health columns for The New York Times and Newsweek. He heads a sustainable fashion startup called Wm. Williams, which uses blockchain technology to manage distributed manufacturing. Williams has an MBA in Sustainability from Bard College and an MA in Communications from Stanford University.


Key Learnings

Blockchain is so much more than Bitcoin or any other cryptocurrency. Computer scientists are pushing this unhackable, distributed digital-ledger technology to do everything from revolutionising business to tackling climate change. While the actual uses of the technology will depend on the groups and individuals who adopt it, it intrinsically encourages an unprecedented degree of transparency and challenges the inequalities inherent in traditional hierarchical systems.

These days, everyone from finance gurus to conceptual artists is talking about blockchain. Initially devised as a platform for Bitcoin, blockchain went largely unnoticed for years, until technologists realised that its potential far exceeds the realm of cryptocurrencies. Now the hype around this new technology has grown so great that some philosophers are calling it the next enlightenment. What exactly is blockchain? Simply put, it is digital-ledger software that is unhackable and unchangeable. Its distributed design offers an unprecedented degree of transparency and accountability, which poses a threat to traditional intermediary authorities such as banks, businesses and even governments.


The blockchain is a new, revolutionary kind of ledger.

Blockchain is a digital ledger, or book in which accounts or monetary transactions are recorded. Ledgers are the foundations of civilisation. Without them, we wouldn’t have been able to build cities or efficient markets. They are the means by which we do everything from keeping track of our finances and demonstrating ownership of a house to verifying our status as citizens. 

For hundreds of years, the world economy has been based on a ledger system called double-entry bookkeeping. These ledgers have two columns for information: debit and credit. As long as the credit and the debit for a transaction match in both the buyer’s and the seller’s books, that transaction is error-free. In order to establish trust in the system and ensure that a transaction is true and accurate, double-entry accounting requires middlemen. Brokers, bankers or other intermediaries get a fee to certify the legitimacy of transactions.

However, history has shown us that this system isn’t always reliable. Corporate collapses such as Enron’s, and later the 2008 financial crisis, revealed that many large firms – Lehman Brothers among them – had effectively been keeping extra sets of books, which they used to conceal the true nature of their financial operations. For years, these companies were able to manipulate the system to hide vast debts and losses. Ever since the dawn of the Internet, many have hoped that it would bring an end to these kinds of transgressions. But until now, the Internet’s susceptibility to hackers has posed security issues when it comes to large financial transactions.

However, the blockchain might change all that. The blockchain was originally created as a platform for the cryptocurrency Bitcoin. By tracking every purchase or sale, the blockchain ensures that a digital coin can never be spent twice. Transactions are all online for everyone to see; all you need to join is an Internet connection.

In addition to credit and debit, the blockchain has a third column in its digital ledger: verification. This eliminates the need for intermediaries; instead, trust is built into the very system. Blockchain technology is already being developed in ways that might revolutionise everything from how artists certify the provenance of their work to how we value currencies such as the U.S. dollar. It could even eliminate the possibility of tampering or lost ballots when it comes to voting. But how can we be sure that this technology is so trustworthy? Because, unlike other ledgers, the blockchain is unhackable and unalterable.


Blockchain technology is unhackable and unchangeable.

Physically, a blockchain looks like a network of phones, computers and other devices that together form a kind of supercomputer running the ledger. The system is secured through the construction of linked blocks of information. Say you want to record the number of rubber trees in the Amazon rainforest. The data would be entered into a digital collection called a “block.” Once a block reaches its capacity, it’s ready to be added to the chain of linked blocks – the “blockchain.” Before it can be added, however, the new block must be approved by every node – meaning every device linked to the chain – in a process known as the protocol. That’s why blockchain is called a distributed technology: everyone on the chain has equal decision-making power.

There are various protocol methods, but the most common one is called “proof of work.” With proof of work, every new block comes with a complex mathematical problem, which must be solved before the block can be added to the chain. This is done by special nodes called miners that compete to solve the problem in order to win Bitcoins. Thanks to proof of work, adding blocks to the chain requires substantial computational resources, which helps prevent malicious actors from manipulating the blockchain.
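To picture what the miners are competing over, here’s a toy proof-of-work loop in Python. It is not real Bitcoin code – the block data is invented – but it shows the idea: the “problem” is to find a nonce that makes the block’s hash start with a set number of zeros, and the difficulty controls how much computation that takes.

```python
import hashlib

def mine(block_data: str, difficulty: int = 4):
    """Toy proof of work: find a nonce so that the SHA-256 hash of
    (block_data + nonce) starts with `difficulty` zero characters."""
    nonce = 0
    target = "0" * difficulty
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest
        nonce += 1

nonce, digest = mine("rubber trees in the Amazon: 390,000,000,000")
print(f"nonce={nonce}, hash={digest}")
# Each extra zero of difficulty multiplies the expected work roughly 16-fold,
# which is what makes tampering computationally expensive.
```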

Part of solving the problem generates a cryptographic hash – a code made up of a long string of numbers and letters. In addition to its own hash, each block contains a timestamp as well as the hash of the previous block, tying it to the rest of the blockchain. So why is it possible to mine Bitcoins but not to hack into the blockchain and change a block’s information? Suppose you wanted to claim that someone else’s Bitcoin actually belongs to you: changing a single block’s information would throw the entire chain’s hashes out of sync and automatically signal a break-in. Not only would hackers have to change the hashes for every block in the chain, they would also have to do it on every single node, since the blockchain is copied onto every node’s device. The computing power required to achieve such a task increases exponentially with every node added to the chain. In other words, the more nodes there are on a chain, the stronger the chain becomes.
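To see why tampering is self-defeating, here’s a minimal, hypothetical sketch: each block stores the previous block’s hash, so altering an old block throws every later link out of sync and the chain no longer validates.

```python
import hashlib

def block_hash(index: int, data: str, prev_hash: str) -> str:
    return hashlib.sha256(f"{index}|{data}|{prev_hash}".encode()).hexdigest()

# Build a tiny chain: each block records the hash of the block before it.
chain, prev = [], "0" * 64  # placeholder hash for the first block
for i, data in enumerate(["tx: A pays B 1 coin", "tx: B pays C 2 coins"]):
    h = block_hash(i, data, prev)
    chain.append({"index": i, "data": data, "prev_hash": prev, "hash": h})
    prev = h

def is_valid(chain) -> bool:
    for i, block in enumerate(chain):
        if block["hash"] != block_hash(block["index"], block["data"], block["prev_hash"]):
            return False                      # block contents were altered
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False                      # links between blocks are broken
    return True

print(is_valid(chain))                     # True
chain[0]["data"] = "tx: A pays ME 1 coin"  # tamper with an old block
print(is_valid(chain))                     # False: the break-in is obvious
```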



In theory and practice, distributed applications built on the blockchain have radical potential.

Unless you’re a computer programmer, you’ll probably never come across blockchain code itself. What you will see and interact with is what blockchain enables – namely, distributed applications, or “dapps.” A dapp is a decentralised application that runs on a decentralised computing system. Dapps look and function much like normal apps; the key difference is that they run on a peer-to-peer network, i.e. a blockchain. The potential for these dapps is endless.

One example is smart contracts. A smart contract is essentially an automated contract on a blockchain with terms agreed upon by both parties. Once the terms of the contract are carried out, an algorithm delivers the payment in cryptocurrency and documents the transaction on the blockchain. Smart contracts thus automate bureaucracy, eliminating the need for centralised authorities to verify the transaction. Many dapps already use smart contracts. On Ethereum, a public blockchain that supports smart contracts and uses a cryptocurrency called Ether, you can find dapps that use smart contracts to do things like record the origin of a work of art or time-stamp an idea you have for a film, creating an official document of your intellectual property.
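As a rough illustration of the logic being automated – this is plain Python with invented names and conditions, not Solidity or a real Ethereum contract – a smart contract is essentially escrow plus an automatic check of the agreed terms:

```python
# Toy escrow logic of the kind a smart contract automates (hypothetical sketch;
# a real contract would run on a blockchain such as Ethereum).
class SmartContract:
    def __init__(self, buyer: str, seller: str, amount: float, condition):
        self.buyer, self.seller, self.amount = buyer, seller, amount
        self.condition = condition   # the agreed terms, checked by code
        self.settled = False

    def settle(self, evidence) -> str:
        if self.settled:
            return "already settled"
        if self.condition(evidence):
            self.settled = True      # payment released and recorded on the ledger
            return f"{self.amount} ETH paid to {self.seller}"
        return "condition not met; funds stay in escrow"

# Example: pay automatically once a delivery is confirmed.
contract = SmartContract("alice", "bob", 1.5, condition=lambda e: e == "delivered")
print(contract.settle("in transit"))  # condition not met; funds stay in escrow
print(contract.settle("delivered"))   # 1.5 ETH paid to bob
```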

Other possibilities for smart contracts require only that we use our imagination. Currently, Uber uses a centralised app to connect drivers with customers and ensure payment. A blockchain dapp equivalent could use smart contracts to connect taxis to customers directly, bypassing any centralised intermediary. You can take that one step further. Say that at some point in the near future you buy a self-driving car. Smart contracts could allow you to set the car up to run itself as a taxi 24/7. If the car is low on gas, a smart contract between you and the car would be activated, and the car would drive itself to get its tank filled. Similarly, if the car had a flat tire, another smart contract would kick in and the car would drive itself to get the tire fixed. Eventually, the car would have earned enough money to purchase another self-driving car, which might in turn earn enough to buy another car all by itself. This process could continue until there was an entire fleet of autonomous taxis functioning without owners. Such a business model is called a DAO, or decentralised autonomous organisation. We can’t be sure whether such a model could be successfully realised, but the mere idea of it shows the radical potential blockchain offers.


Distributed technology signals a paradigm shift away from centralised hierarchies.

The hype around blockchain might sound familiar to anyone who remembers the utopian ideas surrounding the Internet in the 1990s. Although the Internet did make people more connected, it failed to bring about the egalitarian society that many had hoped for, as corporations like Facebook were quick to monopolise the network for their own growth. With blockchain, however, things may be different. That’s because distributed systems intrinsically shift accessibility and power to the masses.

Think of investing. For most of history, it has been exclusive to the elite. Banking fees, credit histories and limited access have undermined the ability of the lower classes to participate in this lucrative activity. But anyone can join a blockchain like Bitcoin or Ethereum, meaning anyone can invest with them. Some even envision cryptocurrencies overtaking the central banking system. Many people dismiss that idea, but since the US dollar no longer stands for a tangible asset such as gold, cryptocurrencies are really no less valid than dollars – or, in fact, any other currency. As more people start valuing cryptocurrencies, the vision of a future free of big banks might not be such a stretch after all.

For the time being, the United Nations, the World Economic Forum and the Rockefeller Foundation are all developing ways for blockchain technology to empower disadvantaged farmers, disenfranchised voters and underprivileged people without banking access. Blockchain technology itself, of course, has no moral agenda. But the distributed system it represents encourages a rethinking of hierarchies, creating a more equitable version of capitalism. In the same way that smart contracts could eliminate the need for Uber, blockchain dapps have the potential to enable other kinds of trading, such as renting out accommodation. Instead of paying fees to Airbnb as a central arbitrator, you, as the owner of a property, could set up a smart contract with a renter that would run the venture for you. Depending on your agreement, if guests overstayed their visit, the smart contract would either lock them out of the house or automatically charge them for a longer stay. This peer-to-peer system would mean a true sharing economy.


Blockchains champion an unprecedented degree of transparency while enabling privacy.

In the future, you might be able to check your turkey’s journey from farm to table on your phone. Cargill, the company behind the Honeysuckle White brand, has already tested a blockchain dapp that lets people track exactly where their Thanksgiving turkeys came from. And this is not the only way in which blockchain is paving the way for a future of unprecedented transparency. Fura, Everledger and De Beers are three companies devising blockchains to avert blood-diamond trafficking. Once the certification for a conflict-free diamond is entered onto the blockchain, it follows that diamond all the way up the supply chain, its location updated at each step. Not only would this mean that buyers could recognise and refuse blood diamonds at the point of purchase, it would also enable diamond miners to track where their stones end up, and even give them a chance to communicate with people at the top of the supply chain. In such a system, the diamond miner would be just as important as the buyer at the top of the supply chain, and would have a real voice in how the system is run.

At the same time as making transparency possible, blockchains also enable unparalleled privacy. Intimate is a dapp that enables pornography vendors and sex workers to offer their services using cryptocurrency and anonymous addresses. The dapp lets users keep their identities private while making their reputations recognisable across the platform, making conditions safer for all participants. Up until this point, we’ve been discussing public blockchains, but some blockchains are private, invitation-only platforms. This kind of maximum privacy is essential for businesses, such as health-care operators, that deal with confidential information, but it’s also being embraced by businesses more generally. Computer behemoth IBM has already deployed private blockchains for major business operations using the Hyperledger Fabric framework, and other corporations are sure to follow. Still, the fact that corporations might opt out of transparency won’t undermine blockchain’s utopian promise; the public chains will remain under the control of the public.


Environmentally friendly solutions for blockchain are underway.

As of yet, blockchain networks require an exorbitant amount of energy. This is especially true of Bitcoin’s proof-of-work protocol. When Bitcoin was first released, you could mine coins from your desktop computer. Today, there is so much competition on the Bitcoin blockchain that mining requires assemblies of computers drawing large amounts of energy. The business is so lucrative that professional Bitcoin mining farms have been set up around the world. All this means that on some days the Bitcoin network requires as much energy as the entire country of Denmark.

That being said, for most blockchain applications, proof of work isn’t the most efficient way of authenticating new blocks. The Ethereum blockchain, for example, has been experimenting with a “proof of stake” protocol, which does away with mining entirely. Instead, nodes called validators place a stake – a bet that they will be given the next block to validate. If they are chosen to validate the next block, they earn a financial reward. The hope is that such alternatives will reduce blockchain energy consumption as well as make transactions faster.
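Here’s a toy sketch of that idea, with invented validator names and stakes: the next validator is chosen with probability proportional to the stake it has put up, so no energy-hungry mining race is needed.

```python
import random

# Hypothetical validators and the stakes they have put up (invented numbers).
stakes = {"validator_a": 32, "validator_b": 96, "validator_c": 16}

def pick_validator(stakes: dict) -> str:
    """Choose the next block's validator with probability
    proportional to each validator's stake."""
    names = list(stakes)
    return random.choices(names, weights=[stakes[n] for n in names], k=1)[0]

chosen = pick_validator(stakes)
print(f"{chosen} validates the next block and earns the reward")
# Unlike proof of work, no computational race is involved, which is why
# proof of stake is expected to use far less energy.
```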

Looking at the problem from another angle, there are many ways in which blockchain technology could become a tool in the quest for climate change solutions. One environmental application for blockchain is in the carbon-trading market. By turning carbon emissions into a tradable good, carbon-trading markets provide a financial incentive to offset air pollution. Beijing-based Energy Blockchain Lab, working with IBM, has already used the open-source Hyperledger blockchain to develop a new, more efficient platform for carbon asset trading in China.

Another green use of blockchains could be the use of a reliable and transparent ledger to track greenhouse gas emissions. This would be essential to monitoring the progress made by nations pledged to the carbon reduction targets of the 2016 Paris Agreement. Blockchain could also be used to track endangered species, make the ultimate destination of donated funds more transparent, and certify land ownership to counter deforestation. The possibilities are endless. It’s up to us to embrace this new technology and continue to think of creative ways in which blockchain could make the world a better place.



 


This book is about the nature and future of money—whose evolution may play a deciding role in the future success and prosperity of our species. Money is one of mankind’s earliest inventions. Its history appears to be as old as that of writing, and the two are closely connected—some of the oldest written artifacts in existence are 5,000-year-old clay tablets from Mesopotamia that were used to record grain deposits. This book begins with the debt tablets of Mesopotamia and follows with the development of coin money in ancient Greece and Rome, gold-backed currencies in medieval Europe, and monetary economics in Victorian England. The book ends in the digital era, with the cryptocurrencies and service providers that are making the most of money's virtual side and that suggest a tectonic shift in what we call money. By building this time line, The Evolution of Money helps us anticipate money's next, transformative role.

The Evolution of Money is written by David Orrell and Roman Chlupatý. David Orrell is a writer and an applied mathematician. He is the author of The Future of Everything: The Science of Prediction (2007), Economyths: How the Science of Complex Systems Is Transforming Economic Thought (2010), and Truth or Beauty: Science and the Quest for Order (2012). Roman Chlupatý is a journalist, lecturer, and consultant specialising in the global economy and politics. He is the coauthor, with David Orrell, of The Twilight of Homo Economicus (2012).



Key Learnings

It’s trite but true: money makes the world go round. Throughout history, money has taken on a number of different forms, but some things never change – those with money wield immense power, and the strength of a civilisation’s economy may spell its success or demise.


Contrary to popular belief, money wasn’t invented to replace the barter system.

One old and popular theory holds that money was “invented” as societies outgrew the barter system. In fact, this theory dates back to Aristotle. Though it was essentially pure speculation, it gained traction with many great thinkers who followed, including the influential eighteenth-century economist Adam Smith. They all believed that money was an outgrowth of commercial trading – where, for example, some valuable bit of property such as cattle could be traded for a certain number of slaves. But this isn’t very efficient; things like cattle aren’t easy to transport, whereas coins are. Plus, the precious metals in coins were seen as valuable just about anywhere, while other goods might not be in demand in some areas and were therefore less valuable.

This idea of money evolving from bartering may sound plausible, but it’s actually been debunked. In 1913, Alfred Mitchell-Innes, a British economist, published his own findings, noting that there was no evidence in commercial history to suggest that a barter-only system ever existed. And Mitchell-Innes has yet to be proven wrong – in fact, historians have only gone on to find more evidence of ancient civilisations using old forms of money in addition to bartering.

Around 5,000 years ago, in Sumer, one of Mesopotamia’s earliest urban civilisations, commercial transactions were recorded on clay tablets, which show us that salt, beads and bars of precious metal were all used as early forms of money. The truth is, we don’t know exactly how or when money came to be used, but we do know that the first coins began appearing in the seventh century BC in the Anatolian kingdom of Lydia. And by the sixth century BC, Greek city-states were minting their own coins as a demonstration of power and independence.


Determining the value of money reveals its complex nature.

Most people know that Sir Isaac Newton is one of history’s most influential physicists, but few know he also had a huge impact on our currency. Newton helped cement the relationship between a currency and a fixed weight of precious metal. After suffering a nervous breakdown, he took a job in 1696 as warden of London’s Royal Mint, and from there he helped move England onto the gold standard – a system that fixes the value of the currency to a set weight of gold. The name of the British “pound,” incidentally, comes from an older convention: the currency was originally worth one pound weight of silver.

But when it comes to money, it’s important to consider both its tangible and intangible properties. Money is, of course, a real and tangible object, like the coins and bills in your pocket. But money also represents intangible things, such as the number denoting its value. Some people consider the dual nature of money – what it physically is versus what it theoretically represents – to be similar to a quantum object, like a photon, which has the characteristics of both a particle and a wave. And like some quantum objects, money can change from one moment to the next. A one-dollar bill’s worth is determined by a trusted authority, such as the Federal Reserve. But once we begin using it to buy goods or services, that value can change with market rates. Today a dollar might get you a bottle of water; tomorrow conditions could change, and you might be able to sell that same bottle of water for two dollars. This complex nature has been confounding and enchanting economists for centuries.


Banking and international trade flourished after the invention of debt.

No one enjoys being in debt, but debt itself is a necessary part of a functioning economy. Debt can’t exist without negative numbers. And the first person to demonstrate the use of negative numbers was Brahmagupta, an Indian mathematician who explained their purpose in his seventh-century book, The Opening of the Universe. 

From this point on, businesses could use bookkeeping and the double-entry system, which records two kinds of transactions: negative debits and positive credits. This system made it a whole lot easier to spot when a transaction error had occurred and to evaluate how profitable a business was. Once transactions were recorded in a ledger, the practice of money lending began to emerge, introducing intangible concepts such as interest.
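As a minimal illustration of how such records keep themselves honest – the entries below are invented – a double-entry ledger records each transaction twice, and the books only balance when total debits equal total credits:

```python
# Toy double-entry ledger (invented entries): every transaction appears twice,
# once as a debit and once as a credit of the same amount.
ledger = [
    {"account": "inventory", "debit": 100, "credit": 0},   # goods come in
    {"account": "cash",      "debit": 0,   "credit": 100}, # cash goes out
    {"account": "cash",      "debit": 40,  "credit": 0},   # a sale brings cash in
    {"account": "sales",     "debit": 0,   "credit": 40},
]

total_debits = sum(entry["debit"] for entry in ledger)
total_credits = sum(entry["credit"] for entry in ledger)
print(f"debits={total_debits}, credits={total_credits}, "
      f"balanced={total_debits == total_credits}")
# An imbalance signals that some transaction was recorded incorrectly.
```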

In seventh-century Mesopotamia, promissory notes known as sakk were introduced – an unconditional promise by the issuer to pay an agreed sum of money to a specified person at a fixed or determinable future time. Around the same time, Islam officially forbade usury, the practice of lending money at high interest rates, though it did allow fees to be accepted in exchange for loans.

Loans proved to be useful in the Middle Ages, as European towns used them to build churches, which was seen as justifiable since it was in service of God. With economies becoming more complex in the Middle Ages, an international banking system began to emerge. It began with tradesmen forming associations that led to companies, prompting moneylenders to require a more formal system of finance. Port cities like Venice and Florence began trading with Asia, which led to their becoming major financial centers. Moneychangers then formed their own guild called Arte del Cambio at the turn of the thirteenth century, making them the earliest version of modern bankers.

Eventually, international traders realised that heavy coins weren’t ideal, which is how bills became popular. At first they were simply letters, instructing a banker or a foreign agent to make certain payments on the writer’s behalf. This dramatically boosted the efficiency of international trade. Now a merchant in Venice could purchase goods from a French supplier using a bill valued at an agreed exchange rate.


The gold and silver riches of the New World significantly changed the world economy.

The discovery of the Americas was a big deal for folks in the Old World, partly because of the economic impact it had. After Hernán Cortés conquered Mexico in 1521, Spain soon learned about the financial chaos that too much of a good thing can cause.

When Cortés landed in Mexico, he found that the Aztecs were rich in gold and silver, which they used for jewelry and decoration. For money, they used other things, such as cacao beans. And even though the Aztec emperor, Moctezuma II, offered the Spaniards gifts of silver and gold, they chose instead to conquer and take as much of the Aztec riches as they could. As a result, Spain was flooded with more precious metal than it could have imagined. Between 1500 and 1800, around 150,000 tons of silver and 2,800 tons of gold were produced. This influx led to a new problem – inflation – as the value of the precious metals declined.

Prices now had to be adjusted, and as Spanish goods became more expensive, massive debt caused Spain to default on its loans a whopping 14 times between 1500 and 1700. However, this surplus of gold and silver allowed more European nations to mint coins. Even the lower classes now had access to them.

The newfound riches also led to the development of mercantilist nations, such as Great Britain, which extended its military reach to secure as much of these precious metals as possible. Such nations operated under the mercantilist theory, which assumes that there is a fixed amount of resources in the world and that a nation’s wealth depends on how much precious metal it possesses. So, for one to gain, another must lose.

With very few gold and silver mines of its own, Great Britain pursued expansion eagerly, granting a royal charter to the East India Company at the beginning of the seventeenth century. This charter allowed the company to mint its own coins and spread England’s power to India, where the silver rupee became the standard.


The ability to print money led to certain problems, but a stable economy eventually emerged.

Paper money has come a long way; it’s now a complex mix of numbers, watermarks and even holograms. The history of banknotes goes back to the early eighteenth century, when France was facing some tough economic times. To help straighten things out, the economist John Law convinced France to let him start his own bank and to use banknotes as currency. This led to the nationalised Banque Royale in 1718 and a similar bank in the settlement of New France, in what is now Mississippi.

Banknotes were attractive since they could be produced cheaply and in huge amounts without using expensive metals. But, once again, people soon discovered the dangers of having too much currency. Coins were in short supply in the New World, so people relied on old-fashioned commodity trading and foreign coins. But this wasn’t ideal for the ongoing military campaigns in the area, and colonial governments were forced to issue bills, which again led to inflation.

To fix this problem and ensure that the supply of physical money didn’t exceed the economy’s needs, Pennsylvania came up with a brilliant solution in 1723. Supported by Benjamin Franklin, the colony tied its supply of bills to measurable assets, like land and future taxes, meaning that more bills were issued only in relation to growth in those assets. And, sure enough, the economy stabilised and grew.

But a stable system needs a stable relationship between banks, which is a concern that Abraham Lincoln had to deal with. He wasn’t happy with the power struggle that was going on between the private and the federal banks, both of which could issue money. Eventually, the Federal Reserve in the United States ended up providing reliable supervision and regulation of private banks, which has enabled a relatively stable and robust money system, even when the economy has faltered. It’s worth noting that when the economic crisis of 2007 unfolded, it was mostly the private banks that were abusing their power, not the federal ones.


Economic theory has changed over the last few centuries, and it’s come to include psychological aspects.

If you’ve taken a class in economics, one of the first names you’ll have encountered is that of the eighteenth-century philosopher, Adam Smith. In his desire to create a universal theory of finance, he gave birth to the science of economics.

Smith laid a very solid economic foundation, but our understanding of the relationship between money and its value has certainly evolved since then. Smith typically determined the value of something by considering the labor required to obtain it. So, for instance, the value of gold should reflect the amount of work it takes to unearth it. But this relationship isn’t always clear-cut: what’s the value of the labor when a company uses unpaid slaves to do the work? This is partly why, over a century later, the economist Irving Fisher developed the quantity theory of money, which became the prevailing philosophy of the twentieth century. Fisher, arguing that an active economy is the healthiest, felt that monetary value isn’t as important to an economy as the momentum, or flow, of money. So an ideal economy is one where people are constantly investing and buying, not hiding their money under mattresses or in piggy banks.

Another important factor that is now being reexamined is the psychology of consumerism. Most economists up until the latter half of the twentieth century assumed that our economic decisions were rational. More recently, economists like Daniel Kahneman and Amos Tversky have shown that that’s not the case, and we are in fact profoundly irrational when it comes to money. They created a new field called behavioural economics to help explain why we make biased, irrational and emotionally charged decisions about how we spend and save money. For example, behavioural economics illustrated that we place a higher value on money that we can have now rather than in the future.
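To make the “now versus later” point concrete, here’s a small sketch of standard discounting arithmetic with an arbitrary 5 percent annual rate; behavioural economists find that people discount the near future even more steeply than a calculation like this suggests.

```python
# Present value of a future payment at an (arbitrary) 5 percent annual discount rate.
def present_value(amount: float, years: float, rate: float = 0.05) -> float:
    return amount / (1 + rate) ** years

for years in (0, 1, 5, 10):
    print(f"$100 received in {years:>2} years is worth "
          f"${present_value(100, years):.2f} today")
# The further away the payment, the less it is worth to us right now.
```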


Economists and politicians have discussed and tried various methods to deal with monetary crises.

A number of economists believe that providing people with extra spending money is a realistic way of helping economies bounce back from recessions, like the one that followed the 2007 crisis. In December 2008, Australia did just that, giving every taxpayer $900 to encourage spending. And unlike many other countries, Australia did not experience a recession after the crash.

Another strategy is called quantitative easing (QE), which involves a central bank providing extra money and boosting reserves by buying assets from private banks. While some think this could stimulate the economy by making loans more readily available, critics think this is too close to printing money and could lead to inflation. Yet a QE plan has recently been put in place in Iceland after the nation experienced a banking collapse, and it has since proved successful.

A third option for resolving an economic crisis is to change the currency altogether. Ever since the gold standard ended in 1971, the International Monetary Fund has reported around ten systemic financial crises every year. Economists think that one solution to this recurring problem would be to simplify economies by using a universal currency. And solving a problem by changing a nation’s currency does have a history: In 1922, after Russia faced a crisis with the ruble, it reintroduced gold chervonets, a move that did help stabilise the monetary system.

However, if currency is in short supply, introducing negative interest rates can help stimulate spending. During the Great Depression, for instance, stamp scrips were introduced: notes that would lose their validity unless, every week, you bought a one-cent stamp and stuck it onto the scrip. This incentivised people to spend them quickly.


Bitcoin has changed the present status of monetary systems – and the future has many challenges.

Things have been changing fast in the new millennium, and it’s not over yet. So far, Bitcoin is the closest we’ve come to a truly universal currency, and it’s also a threat to the traditional banking system. Bitcoin was created in 2008 as an electronic currency unconnected to any banking system. It can also be seen as a response to rising distrust of the global financial system after the 2007 banking crisis.

Unlike traditional money, new Bitcoins aren’t issued by a central bank on a government’s order; they’re created as a kind of reward whenever someone with a powerful computer, or a network of powerful computers, solves a difficult math problem. This is part of a process called mining, and as more Bitcoins are put into circulation, the problems become more difficult.

At first, Bitcoins were seen as part of a game, but once people began buying real things with them, they quickly became legitimate. One of the first purchases with the new currency was of two pizzas, bought by a computer programmer in Florida for 10,000 Bitcoins. By 2016, that amount of Bitcoin was worth millions of dollars. This kind of innovation may actually be good for the economy, which is currently facing some big problems.

Right now we’re in a largely deregulated capitalist system that aims for unlimited growth by exploiting more and more of nature’s very limited resources. Every level of the environment is being damaged; we release massive amounts of carbon dioxide into the atmosphere, we exploit the land in search of metals and minerals and we overfish the already polluted seas. Meanwhile, income inequality is at an all-time high. An average CEO in the United States earns 354 times more than an unskilled worker. It’s no wonder that we’re facing immense tensions due to social conflict. While it’s difficult to predict how all this will ultimately resolve itself, it’s not unreasonable to think that an economic revolution might play a role.






This is the epic story of the universe and our place in it, from 13.8 billion years ago to the remote future. David Christian tells it as a mind-expanding cosmological detective story on the grandest possible scale. He traces how, at eight key thresholds, the right conditions allowed new forms of complexity to arise: stars and galaxies, Earth and Homo sapiens, agriculture and fossil fuels. This last mega-innovation gave us an energy bonanza that brought huge benefits to mankind, yet also threatens to shake apart everything we have created. David Christian is a historian and scholar of Russian history who has become notable for teaching and promoting the emerging discipline of Big History.




Key Learnings

At the core of our origin story is a tale of increasing complexity. For billions of years, increasingly complex things – stars, life, humans, modernity – have emerged out of a universe that is, for the most part, cold, dark space. In the last few hundred years, the pace of change has been accelerating rapidly, and today we live in a society of such great complexity that we have the ability to change the direction of our earth’s future.


The Big Bang created the Universe 13.8 billion years ago, the first of a series of key events in our history. 

The tale of our origins is told through thresholds – key transition points when more complex things appeared. These moments happen under what’s known as goldilocks conditions – when things are not too hot or too cold, but just right. For most of the thresholds in our story, we can explain what those goldilocks conditions were, and why the threshold was reached. But what about the Big Bang? We simply don’t know the conditions that allowed our universe to emerge. Perhaps the best way to explain what happened is to use the words of science fiction author Terry Pratchett: “In the beginning, there was nothing, which exploded.” What we do know is that the Big Bang created the universe 13.8 billion years ago – the first of a series of key events in our history. And we know what happened next, a fraction of a billionth of a second after that moment. At this point, the universe was smaller than an atom. 

It’s hard for human brains to comprehend the size of things like atoms, but you could comfortably fit a million of them into the dot of this “i.” To begin with, we only had energy, which quickly split into different forces, such as gravity and electromagnetism. Within a second, simple matter emerged and was followed by more complex structures, while protons and neutrons – extremely tiny particles – teamed up to become nuclei. All this happened within minutes, but as the universe cooled things slowed down a bit. 380,000 years later, electrons became trapped in orbit around protons, pulled together by electromagnetic forces, and the first atoms of helium and hydrogen were formed. The universe began as something unimaginably small, with all the energy and matter present in the universe today packed into it, and it’s been growing ever since.


The appearance of stars 12 billion years ago – and the way they die – were important steps forward for the universe.

Looking at the night sky, it’s easy to think of stars as having always existed. But stars only came into being a hundred million years after the Big Bang, when gravity and matter provided the goldilocks conditions for stars to form. At this point, the universe was a bit like a mist made up of tiny pieces of matter. In some areas – you could think of them as particularly cloudy areas – the volume of matter was denser than elsewhere. Here, gravity pulled atoms together, making them collide and speed up, raising the temperature. Over time, these clouds of matter grew denser and hotter. When a cloud of matter’s core hits 10 million degrees, trillions of protons will fuse together to form helium nuclei. In this fusion, huge amounts of energy are released – the same process that occurs in a hydrogen bomb explosion. A furnace is created, releasing vast energy that will burn as long as there are still protons to fuse together. The structure stabilises and will last for millions, even billions of years. We have a star.

Actually, we now have many stars, bound together in galaxies – kind of like star cities. Our galaxy, the Milky Way, contains hundreds of billions of stars. But it’s not just the birth of a star, but also their death that represented an important step forward for our universe, and eventually, for us. When a large star dies, gravity smashes the star’s core together with extreme force, and the star explodes with, for an instant, as much energy as an entire galaxy. In just a few moments, this explosion manufactures most of the elements we find in the periodic table and sends them flying out into space. Star deaths fertilised and enriched our universe, ultimately enabling the formation of our earth in a form that would eventually support life.


The earth was formed by the accumulation of debris about 4.5 billion years ago.

We have a lot to thank the sun for: heat, light and energy for a start. We also have it to thank for the earth’s creation. The formation of planets is a messy by-product of star creation, which takes place in areas of space rich in clouds of chemicals. After the star at the center of our solar system – our sun – was formed, a mass of debris made up of gas, dust and particles of ice was left over, while lighter elements such as hydrogen and helium were blasted away by violent bursts from the sun. That’s why the outer planets in our solar system are formed mainly from these elements. But closer to the sun, where rocky planets like Earth, Venus and Mars were formed, was an area rich in chemicals like oxygen, aluminum and iron.

Over time, particles of matter stuck together as they collided in orbit. Larger objects such as meteors eventually emerged, massive enough that their gravity sucked up the surrounding debris, and this process ultimately led to the formation of planets. The signs of it remain visible today. The slightly strange tilt of Uranus and its rings is most likely the result of a violent collision with another large body, while our moon was probably created by a collision between Earth and a Mars-sized protoplanet (a kind of early, pre-planet). That collision sent vast quantities of matter into a circular orbit around Earth, like the rings of Saturn, before it eventually came together to form the moon.

For a long time, humans knew only of our own solar system – the collection of planets, moons and debris orbiting the sun. But in the last 30 years, we’ve learned that most stars have planets. There could be many billions of different kinds of planets in the universe. Studies by astronomers will, in time, reveal how many could support life. But what conditions enable life on a planet? In the next section, we’ll consider what allowed life to emerge.


Earth had the right conditions to allow life to flourish.

Life is built out of billions of tiny molecular machines working inside protected bubbles, or cells. It can tap into energy, adapt to its environment, reproduce and evolve. In the right conditions, the molecules from which life is built can emerge spontaneously. In 1953, Stanley Miller, from the University of Chicago, put hydrogen, methane, water and ammonia in a closed system. He heated and electrified it (imagine volcanoes and electric storms), and within days a slurry of amino acids – simple organic molecules that are the basis for all proteins – emerged. We now know that the early atmosphere wasn’t methane and hydrogen, but the results still stand. Under the right circumstances, the basic building blocks of life can emerge. And Earth had those circumstances – the right combination of temperature and chemicals – to allow for the emergence of life. 

Temperature was important for life’s creation, but also for its maintenance. Moderate temperatures are essential to life, and Earth has built-in systems that maintain them. But how? Falling rain contains carbon, which eventually makes its way into the earth’s mantle, where it’s stored for millions of years. Volcanoes periodically spew some of this carbon back into the atmosphere. Less carbon means less carbon dioxide and that means colder temperatures. When it’s cold, it rains less. Less rain means that less carbon is stored away. Carbon dioxide levels build up and things get warmer. If it gets too warm, it rains more, which means more carbon is stored away and things cool down again. This self-regulation offers remarkable stability given that the sun’s warmth has been increasing for over four billion years. Our earth has been able to cope, but other planets haven’t. Venus, for instance, contains huge amounts of carbon dioxide and has a surface so hot it could melt lead. For life, Earth was just right. So what were the earliest life-forms like, and how did they evolve?


Photosynthesis was an energy bonanza for early, single-celled life that helped spark a biological revolution.

Early life-forms, known as prokaryotes, were single-celled organisms that emerged in chemically rich volcanic vents on the ocean floor. Prokaryotes are tiny – a punctuation mark could hold a few hundred thousand of them. But they are still able to detect information, such as heat, and respond to it. So how did we get from these fairly simple creatures to more complex forms of life? The evolutionary innovation of photosynthesis heralded the first energy boom in the history of life. Photosynthesis is the conversion of sunlight into biological energy. Suddenly, energy was almost limitless, and prokaryotes were able to spread and proliferate. The amount of life in the early oceans increased to around 10 percent of today’s levels.

Three billion years ago, a form of photosynthesis evolved that produced oxygen, with dramatic impacts on the atmosphere. Two and a half billion years ago, levels of atmospheric oxygen increased dramatically. Oxygen atoms began to form what we now call the ozone layer – protecting the earth’s surface from solar radiation and enabling algae to start growing on land for the first time. Up until this point, the earth’s surface had been pretty much sterile. The newly oxygenated atmosphere was bad news for most prokaryotes, as it was poisonous to them. An “oxygen holocaust” occurred, and the prokaryotes that survived retreated to the deep ocean. Meanwhile, oxygen caused lower temperatures, and for a hundred million years, Earth was covered in ice.

This doesn’t sound like a great outcome. But Earth’s self-regulation kept things in check while getting a helping hand from eukaryotes – new organisms that could suck oxygen out of the air – which helped to raise and stabilise the atmospheric temperature. Eukaryotes were special for another reason: sex. Up until then, organisms had simply copied themselves, but eukaryotes mixed their genetic material with that of a “partner.” This was hugely important because it meant that small genetic variations were guaranteed in each generation. With more variation to play with, evolution suddenly had more options. Suddenly, things were speeding up.


Evolution and the extinction of the dinosaurs helped develop the complex forms of life that would eventually lead to humanity.

With the right conditions, as well as the energy boost of photosynthesis and the ability to deal with oxygen, single-celled organisms were able to evolve into much more complex, multi-celled beings. Plants, fungi and eventually animals developed and spread from the oceans onto land. The emergence of photosynthesising plants on land – which consumed vast amounts of carbon dioxide and released oxygen – created essentially the high-oxygen atmosphere that we live and breathe in today.

The emergence of life on land impacted evolution. Gravity isn’t a problem in water, but on land, plants needed to be able to stand up. They required rigid materials and internal plumbing systems to move liquids against gravity through their bodies. In a similar way, animals developed pumps – like our hearts – to circulate nutrients. Life also slowly became more intelligent as a result of evolution. Natural selection promoted information processing because information – like knowing whether another creature is a threat, or whether a plant is safe to eat – is key to success. An antelope that snuggles up with a lion isn’t going to be around long enough to pass its genes on. But it wasn’t just evolution that enabled major steps forward for the forms of life that would eventually lead to humans; the extinction of the dinosaurs was also great news for mammals.

Time ran out for the dinosaurs in a matter of hours when, 66 million years ago, a large asteroid hit the Yucatán Peninsula, in what is now Mexico. The asteroid generated dust clouds that blocked out the sun, creating a nuclear winter and producing deadly acid rain. Half of all plant and animal species died out, and larger creatures such as dinosaurs suffered most, probably because they required more energy to survive and that energy was now so much harder to get. Why was this good for mammals? Mammals tended to be small, rodent-like creatures, and unlike the large dinosaurs, they survived. With the dinosaurs gone, they were able to flourish. And one group of mammals that thrived was primates.


Humans evolved from primates and made a major breakthrough with the development of language.

How old are we as a species? By the standards of the universe, we’re extremely young. In just the last six million years (remembering that the universe is 13.8 billion years old, and the first large living organisms arrived 600 million years ago), we humans have gone our own way, diverging from our closest primate relatives. The first difference was that early humans walked on two legs – a change from our knuckle-dragging predecessors that had multiple effects on our development. Walking on two legs required narrower hips, for example, which meant that early humans often birthed babies not yet capable of surviving on their own. That encouraged parenting and sociability.

Early humans also continued to evolve. Two million years ago, Homo erectus learned how to use tools and control fire. Cooking food meant less digestive work. Our guts shrank, and we had more energy available for our brains. But the really spectacular changes came with Homo sapiens, just a few hundred thousand years ago. What makes Homo sapiens – us – radically different? The answer is simple: language. Of course, other animals can communicate. In experiments, chimps have even learned a few hundred words. But this communication is very limited – an animal may be able to warn another of danger in the immediate vicinity, but it can’t warn of a lion pride five miles to the south.

Language enabled a complexity and precision of information sharing that proved to be a game-changer because it permitted collective learning – the accumulation and passing on of knowledge from human to human and generation to generation. This unleashed a wealth of new information, allowing for breakthroughs in the efficient use of energy and resources as well as more advanced forms of leisure. Knowledge accumulated through language enabled better use of resources and therefore population growth. 30,000 years ago, there were around 500,000 humans. 10,000 years ago, there were five to six million. That represents a 12-fold increase in population and a 12-fold increase in total human energy consumption. By this point in our history, humans were spread across the globe. From Siberia to Australia, small communities enjoyed varied diets, decent health, storytelling, relaxing, dancing and painting. We were about to pass a new threshold in the story of our development.


Farming was a transformative innovation for human life.

We’ve seen that certain huge innovations, such as photosynthesis, have had a major impact on the development of life. Now we get to the next innovation, farming, which evolved in response to population pressures. Consider the Natufians – communities of humans who lived in villages of a few hundred people on the shores of the eastern Mediterranean. They were initially foragers, but population pressures meant they needed more resources. With plenty of neighbouring villages around, they couldn’t use a larger area of land. Instead, they had to use whatever techniques they could to increase the productivity of the land they already had.

Initially, humans were reluctant farmers. Farming was hard work – the bones of Natufian women show wear from many hours spent kneeling to grind grain. But necessity led them to persist, and over time, farming started to change human life, resulting in a huge leap forward in humanity’s mastery of energy and resources. For example, while a farmer himself can only generate about 75 watts of power, a horse can deliver ten times that figure, meaning a horse can plow deeper and carry more goods than a human alone. As populations continued to grow, fuelled by this new energy, human life began to change.

As village communities became the normal way of life, societies had to develop new rules and behaviours, and humans began to work together more. In what is now modern-day Iraq, there was almost no rainfall, but there were two mighty rivers: the Tigris and the Euphrates. Early farmers dug small ditches to channel river water, but over time, communities built complex systems of canals, in some cases requiring thousands of workers and considerable coordination from their leaders. Two thousand years ago there were 200 million humans, living in ever more complex communities. Change was starting to accelerate.


As farming improved, it generated surpluses which enabled the development of more complex agrarian societies.

Today, most of us take for granted that we don’t have to spend our days producing food. But that’s the product of a great change in human society. As the productivity of farming improved over time, farmers began to generate significant surpluses – more food and goods than they needed for day-to-day survival. Surplus produce from farming meant that there was a surplus of people with time on their hands because not everyone needed to work the land. And when people don’t need to spend all their time farming, they have time to, for example, make and sell pots.

We can trace this process through archaeology. The earliest pots from Mesopotamia – a historical region in what is now Iraq – were simple and individual. But starting around 6,000 years ago, there is evidence of specialised pottery workshops. Potters produced standardised bowls and plates in large quantities, which were sold far and wide. As surpluses grew, specialisations increased. 5,000 years ago in Uruk, a city in Mesopotamia, a list of all the standard professions was compiled. The list included kings and courtiers, as well as priests, tax collectors, silver workers and even snake charmers.

As surpluses and populations grew, so did the size and interconnectivity of communities. Rulers built roads to enable trade, like the Royal Road from Persia to the Mediterranean. Built in the fifth century BC, the road was 2,700 km long and could be traveled in just seven days by couriers using a relay system of fresh horses – a huge advance on the walking time of 90 days. Humans were becoming more and more accustomed to moving, sharing, exchanging and trading with one another. Fast forward a few centuries and this exchange would shape our world dramatically.


The exchange of ideas and discovery of fossil fuels accelerated the advance of human progress.

In 1492, Christopher Columbus became one of the first Europeans to cross the Atlantic Ocean. Farming had taken 10,000 years to spread around the planet. Now, in just a few hundred years, humans made vast leaps forward as information and ideas traveled over oceans and were exchanged more rapidly than ever before. When Sir Isaac Newton developed his theories of gravity in the seventeenth century, he was helped by access to information – such as comparisons of how pendulums swing – from Paris, the Americas and Africa. Never before had scientists been able to test ideas so widely. This accelerated the learning and development process, leading to another critical discovery: fossil fuel energy.

Fossil fuels gave societies far more energy than farming could provide, and this revolutionised human life again. England was the first country to benefit from fossil fuels, getting half its energy from coal, instead of wood, by 1700. In the 1770s, the engineer James Watt developed his improved steam engine, allowing industry to be powered efficiently by steam. Steam engines also allowed access to deeper mines, meaning that the amount of coal extracted increased 55-fold between 1800 and 1900. Coal changed the shape of the world. For instance, England’s steam-powered gunships could suddenly defeat Chinese ships, winning Britain control of Chinese ports in 1842. The discovery of electricity, and the ability to turn coal into electricity, powered another wave of innovations by revolutionising communication. At the start of the nineteenth century, the fastest way to communicate was via horse messenger. In 1837, with the invention of the telegraph, communication became as fast as the speed of light.


The earth has entered a new age: the era of humans.

For the first time in the history of the planet, one species – humans – had become the dominant force, changing the earth’s environment forever. Without always knowing what we’re doing, we have found ourselves in the planetary driving seat. Since the Second World War, we’ve experienced the greatest burst of economic growth in history, driven mainly by the exploitation of fossil fuels and technological innovation. This is the dawn of the Anthropocene – the era of humans.

Take the field of agriculture. The introduction of artificial, nitrogen-based fertilisers dramatically raised the productivity of agriculture, making it possible to feed several billion more humans. In 1950, when the author was a child, the world’s population was two-and-a-half billion. In the course of his lifetime, it has increased by an additional five billion people. Economic growth means the human experience is now completely different to that of our ancestors. Activities that had dominated human life for centuries – tending to crops, milking cows, or gathering fuel for fires – are now largely absent from our lives. Many of us live in cities that are almost totally shaped by humans rather than the natural environment. However great the benefits, the Anthropocene has also brought about some major negatives.

One flipside of economic progress is vast inequality, demonstrated most starkly in the fact that, even today, 45 million people live as slaves. And the environmental impact of the Anthropocene has been huge. Biodiversity is in freefall, with rates of extinction now hundreds of times faster than in the last few million years. We’ve driven our closest relatives, the other primates, to the edge of extinction. Perhaps most worryingly, we’re dramatically disturbing the processes that keep our environment stable by generating huge quantities of carbon dioxide. Current scientific models predict that within 20 years or so, a warmer world caused by greenhouse gas emissions will cause coastal cities to flood, make agriculture harder, and drive extreme weather patterns.


The future is ours to make.

What will eventually happen to Earth? Well, in the really long term – billions of years from now – Earth will become sterile and will eventually be swallowed by the sun. On a more human timeline, though, the future remains in our hands. The story of humans is in large part a story of acceleration. Things are now happening so fast that our actions over the coming decades will have huge consequences for both us and Earth for thousands of years. The Stockholm Resilience Centre has for many years modeled “planetary boundaries” – lines which, if crossed, will endanger our future. Two of them, biodiversity and climate change, are particularly critical for a sustainable planet. The bad news? Researchers say that we have already surpassed the boundary for biodiversity and are getting closer to the boundary for climate change.

What could a better future look like? The nineteenth-century economist John Stuart Mill favoured the idea of a future without continuous growth. He argued it would be a pleasant contrast to the frenetic world of the industrial revolution, a world in which “the normal state of human beings is that of struggling to get on.” Instead, he suggested, it would be better to reach a state of balance in which “no one desires to be richer.” Could we be on the verge of a sustainable world? A world in which humanity has achieved a new level of complexity and stability that allows us to self-regulate, just as the earth itself does?

Many of the conditions are already here. There is now a clear scientific consensus on humans’ impact on the planet, reflected in documents like the Paris climate accord. What is lacking is determination. Many are skeptical about the warning signs in front of us. Few governments have the luxury of thinking beyond electoral cycles and short-term needs. All governments face pressure to prioritise their nation over the needs of the world. But achieving a sustainable world is a goal worth aiming for. It would mean human societies could be around for thousands, maybe hundreds of thousands, of years to come.



 


Superintelligence: Paths, Dangers, Strategies is a 2014 book by the Swedish philosopher Nick Bostrom from the University of Oxford. Bostrom argues that if machine brains surpass human brains in general intelligence, then this new superintelligence could replace humans as the dominant lifeform on Earth. Sufficiently intelligent machines could improve their own capabilities faster than human computer scientists, and the outcome could be an existential catastrophe for humans.





Key Learnings

The key message in this book:
Inventing a superintelligent machine capable of things far beyond the ability of a human is both a tantalising prospect and a precarious path. In order to ensure such technology develops in a safe, responsible manner, we need to prioritise safety over unchecked technological advancement. The fate of our species depends on it.


History shows that superintelligence – a technology more intelligent than any human being – is fast approaching.

The main difference between human beings and animals is our capacity for abstract thinking paired with the ability to communicate and accumulate information. In essence, our superior intelligence propelled us to the top. So what would the emergence of a new species, intellectually superior to humans, mean for the world? First we’ll need to review a bit of history. For instance, did you know that the pace of major revolutions in technology has been increasing over time? At the snail’s pace of improvement seen a few hundred thousand years ago, human technology would have needed a million years to become economically productive enough to sustain the lives of an additional million people. During the Agricultural Revolution, around 5,000 BC, this dropped to two centuries. And in our post-Industrial Revolution era it has shrunk to a mere 90 minutes.

A technological advancement like the advent of superintelligent machines would mean radical change for the world as we know it. But where does technology stand at present? We have already been able to create machines that have the capacity to learn and reason using information that’s been plugged in by humans. Consider, for example, the automated spam filters that keep our inboxes free from annoying mass emails while saving important messages. However, this is far from the kind of “general intelligence” humans possess, which has been the goal of AI research for decades. As for a superintelligent machine that can learn and act without the guiding hand of a human, that may still be decades away. But advancements in the field are happening quickly, so it could be upon us faster than we think. Such a machine would have a lot of power over our lives, and its intelligence could even be dangerous, since it would be too smart for us to disable in the event of an emergency.


The history of machine intelligence over the past several decades has had its ups and downs.

Since the invention of computers in the 1940s, scientists have been working to build a machine that can think. One major strand of this work is Artificial Intelligence (or AI): man-made machines that mimic our own intelligence. The story begins with the 1956 Dartmouth Summer Project, which endeavoured to build intelligent machines that could do what humans do. Some machines could solve calculus problems, while others could write music and even drive cars. But there was a roadblock: inventors realised that the more complex the task, the more information the AI needed to process, and hardware capable of taking on such difficult functions was unavailable.
By the mid-1970s, interest in AI had faded. But in the early ‘80s, Japan developed expert systems – rule-based programs that helped decision-makers by generating inferences based on data. However, this technology also encountered a problem: the huge banks of information required proved difficult to maintain, and interest dropped once again. 

The ‘90s witnessed a new trend: machines that mimicked human biology by using technology to copy our neural and genetic structures. This process brings us up to the present day. Today, AI is present in everything from robots that conduct surgeries to smartphones to a simple Google search. The technology has improved to the point where it can beat the best human players at chess, Scrabble and Jeopardy. But even our modern technology has issues: such AIs can only be programmed for one game, and there’s no single AI capable of mastering every game. However, our children may see something much more advanced – the advent of superintelligence (or SI). In fact, according to a survey of international experts at the Second Conference on Artificial General Intelligence, held at the University of Memphis in 2009, most experts think that machines as intelligent as humans will exist by 2075 and that superintelligence will follow within another 30 years.


Superintelligence is likely to emerge in two different ways.

It’s clear that imitating human intelligence is an effective way to build technology, but imitation comes in many forms. So, while some scientists are in favour of synthetically designing a machine that simulates humans (through AI, for instance), others stand by an exact imitation of human biology, a strategy that could be accomplished with techniques like Whole Brain Emulation (or WBE). 

So what are the differences between the two? AI mimics the way humans learn and think by calculating probability. Basically, AI uses logic to find simpler ways of imitating the complex abilities of humans. For instance, an AI programmed to play chess chooses the optimal move by first determining all possible moves and then picking the one with the highest probability of winning the game. But this strategy relies on a data bank that holds every possible chess move. Therefore, an AI that does more than just play chess would need to access and process huge amounts of real world information. The problem is that present computers just can’t process the necessary amount of data fast enough. But are there ways around this? One potential solution is to build what the computer scientist Alan Turing called “the child machine,” a computer that comes with basic information and is designed to learn from experience. 
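As a rough illustration of the move-selection idea described above – enumerate the legal moves, estimate how likely each one is to lead to a win, and pick the best – here is a minimal Python sketch. It is not how a real chess engine works; the toy win-probability table stands in for the enormous data bank (or deep search) the text says such an AI would need, and all the names here are hypothetical.

```python
from typing import Dict, List

def estimate_win_probability(board: str, move: str) -> float:
    # Hypothetical stand-in for a real evaluation: a genuine engine would
    # search millions of positions or use a learned evaluation function.
    toy_scores: Dict[str, float] = {"e4": 0.54, "d4": 0.53, "Nf3": 0.52, "h4": 0.38}
    return toy_scores.get(move, 0.50)

def choose_move(board: str, legal_moves: List[str]) -> str:
    # "Determine all possible moves, then pick the one with the highest
    # probability of winning" – the strategy sketched in the text.
    return max(legal_moves, key=lambda m: estimate_win_probability(board, m))

print(choose_move("starting position", ["e4", "d4", "Nf3", "h4"]))  # -> e4
```

The sketch also hints at the scaling problem the text raises: the hard part isn’t picking the maximum, it’s producing a trustworthy win estimate for every possible move in a messy, real-world situation.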

Another option is WBE, which works by replicating the entire neural structure of the human brain to imitate its function. One advantage this method has over AI is that it doesn’t require a complete understanding of the processes behind the human brain – only the ability to duplicate its parts and the connections between them. For instance, a scientist could take a stabilised brain from a corpse, fully scan it, then translate that information into code. But we’ll have to wait. The technology necessary for this process – high-precision brain scans, for instance – likely won’t be developed anytime soon. But, someday, it will.


Superintelligence will either emerge quickly via strategic dominance or as a result of long collaborative efforts.

Most of the great discoveries of humanity were achieved either by a single scientist who reached a goal before others got there or through huge international collaborations. So, what would each route mean for the development of SI? Well, if a single group of scientists were to rapidly find solutions to the issues preventing AI and WBE, it’s most likely their results would produce a superintelligent machine. That’s because the field’s competitive nature might force such a group to work in secrecy.

Consider the Manhattan Project, the group that developed the atom bomb. The group’s activities were kept secret because the U.S. government feared that the USSR would use their research to build nuclear weapons of their own. If SI developed like this, the first superintelligent machine would have a strategic advantage over all others. The danger is that a single SI might fall into nefarious hands and be used as a weapon of mass destruction. Or if a machine malfunctioned and tried to do something terrible – kill all humans, say – we’d have neither the intelligence nor the tools necessary to defend ourselves. 

However, if multiple groups of scientists collaborated, sharing advances in technology, humankind would gradually build SI. A team effort like this might involve many scientists checking every step of the process, ensuring that the best choices have been made. A good precedent for such collaboration is the Human Genome Project, an effort that brought together scientists from multiple countries to map human DNA. Another good technique would be public oversight – instating government safety regulations and funding stipulations that deter scientists from working independently. So, while the rapid development of a single SI could still occur during such a slow collaborative process, an open team effort would be more likely to have safety protocols in place.


We can prevent unintended catastrophes by programming superintelligence to learn human values.

You’ve probably heard it a million times, but there is some wisdom in being careful what you wish for. While we may be striving to attain superintelligence, how can we ensure that the technology doesn’t misunderstand its purpose and cause unspeakable devastation? The key to this problem lies in how we program the motivations that drive an SI to accomplish its human-given goals.

Say we designed an SI to make paper clips; it seems benign, but what’s to prevent the machine from taking its task to an extreme and sucking up all the world’s resources to manufacture a mountain of office supplies? This is tricky, because while AI is only motivated to achieve the goal for which it has been programmed, an SI would likely go beyond its programmed objectives in ways that our inferior minds couldn’t predict. But there are solutions to this problem. 

For instance, superintelligence, whether AI or WBE, could be programmed to learn human values on its own. For example, an SI could be taught to determine whether an action is in line with a core human value. In this way we could program an SI to pursue goals like “minimise unnecessary suffering” or “maximise returns.” Then, before acting, the machine would calculate whether a proposed action is in line with that goal. With experience, the AI would develop a sense of which actions are in compliance and which aren’t. But there’s another option: we could also program an AI to infer our intentions based on the majority values of human beings.

Here’s how:
The AI would watch human behaviour and determine normative standards for human desires. The machine would essentially be programmed to program itself. For instance, while each culture has its own culinary tradition, all people agree that poisonous foods should not be eaten. By constantly learning through observation, the SI could self-correct, changing its standards to correspond to changes in the world over time.
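Here is a toy sketch of that observation-based approach – my own illustration, not a method from the book. The hypothetical agent tallies which actions humans endorse, treats the majority view as a provisional norm, and checks any proposed action against it before acting.

```python
# Toy "learn norms by observation" sketch (illustrative; all names are hypothetical).
from collections import Counter
from typing import Iterable, Tuple

class NormLearner:
    def __init__(self) -> None:
        self.votes: Counter = Counter()

    def observe(self, observations: Iterable[Tuple[str, bool]]) -> None:
        # Each observation pairs an action with whether humans endorsed it.
        for action, endorsed in observations:
            self.votes[(action, endorsed)] += 1

    def is_acceptable(self, action: str) -> bool:
        # Majority rule: act only if endorsements outnumber objections.
        return self.votes[(action, True)] > self.votes[(action, False)]

learner = NormLearner()
learner.observe([
    ("serve_spicy_food", True), ("serve_spicy_food", True),
    ("serve_poisonous_food", False), ("serve_poisonous_food", False),
])
print(learner.is_acceptable("serve_spicy_food"))      # True
print(learner.is_acceptable("serve_poisonous_food"))  # False
```

Because the tallies keep updating with new observations, the agent’s standards can shift as human behaviour shifts – the kind of self-correction the text describes.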


Intelligent machines will probably replace the entire human workforce.

But enough about decimation and total destruction. Before panicking about the impending machine-led apocalypse, let’s take a look at how general intelligence technology can be developed and put to productive use. It’s likely that the increasing availability and decreasing cost of technology will lead to the cheap mass production of machines capable of doing jobs that currently require the hands and mind of a human. This means that machines will not only replace the entire human workforce but will also be easily replaceable.

For instance, if a WBE worker needs a break, just like a real human would, it can simply be replaced with a fresh unit and no productive time needs to be sacrificed. In fact, it would be easy to do this by programming a template WBE that thinks it just got back from vacation. This template could then be used to make infinite copies of new workers. But clearly this amounts to mechanical slavery and raises important moral issues. For example, if a machine became aware that it would die at the end of the day, we could simply program it to embrace death. But is that ethical? Should these artificial employees be treated like sentient beings or inert tools?

Work isn’t the only thing that SI machines could take over; they could also be in charge of various mundane tasks in our personal lives. As the minds of these machines come increasingly closer to resembling those of human beings, we could use them to optimise our lives; for instance, we could design a digital program that verbally articulates our thoughts or that achieves our personal goals better than we could alone. The result of such advances would mean a human existence that is largely automated, low-risk, devoid of adventure and, frankly, too perfect. And where would that leave us? How would we occupy ourselves in such a future?



In the superintelligent future, the average human will be impoverished or reliant on investments; the rich will be buying new luxuries.

It’s clear that an entirely robotic workforce would completely transform the economy, as well as our lifestyles and desires. As machine labor becomes the new, cheaper norm, the pay of workers will drop so low that no human will be able to live off a paycheck, while the few employers of the mechanical workforce will accrue a lot of money. But this brings us back to an earlier point, because where that money ends up also depends on whether SI is designed by a single exclusive group or is the result of a slow collaborative process. If the former turns out to be true, most people would be left with few options for income generation, likely renting housing to other humans or relying on their life savings and pensions. And the people who don’t have property or savings? They would be destitute. Their only options would be to use their remaining money to upload themselves into a digital life form, if such technology exists, or to rely on charity from the hyper-wealthy.

And the rich? They’ll lose interest in what we today consider highly desirable luxuries. That’s because with machines doing all the work, anything made or offered by a human will become a highly valued rarity, much like artisanal products are in our time. While today it might be wine or cheese, in the future it could be something as simple as a handmade key chain.

But the new mode of production would also make possible an unimaginable variety of technological products – maybe even the ability to live forever or regain youth. So instead of buying yachts and private islands, the wealthy might use their money to upload themselves into digital brains or virtually indestructible humanoid bodies. However, this scenario assumes that the superintelligent worker robots will not rebel and try to destroy human society. Therefore, whatever route we follow with SI, safety will always be key.



Safety must be a top priority before superintelligence is developed.

It’s clear that the development of SI comes with a variety of safety issues and, in the worst-case scenario, could lead to the destruction of humankind. While we can take some precautions by considering the motivation of the SI we build, that alone won’t suffice. So what will? Considering every potential scenario before we bring a hyper-powerful force like SI into the world. For instance, imagine that some sparrows adopted a baby owl. Having a loyal owl around might be highly advantageous; the more powerful bird could guard the young, search for food and do any number of other tasks. But these great benefits come with a great risk: the owl might realise it’s an owl and eat all the sparrows. The logical approach, then, would be for the sparrows to design an excellent plan for teaching the owl to love sparrows, while also considering every outcome in which the owl could become a negative force.

So how can we teach our robotic, superintelligent baby owl to love humans? As we already know, we can make safety a priority through long-term international collaboration. But why would a competitive rush to design the first SI be a safety threat? Because scientists would forgo safety to speed up their progress and wouldn’t share their work with others. That means that if an SI project went horribly wrong and threatened humanity with extinction, too few people would understand the machine’s design well enough to stop it. On the other hand, if governments, institutions and research groups joined together, they could slowly build a safe and highly beneficial SI, because groups could share their ideas for safety measures and provide thorough oversight at each phase of the design. Not only that, but an international superintelligence project would promote peace through its universal benefits. Just consider the International Space Station, an endeavour that helped stabilise relations between the US and Russia.

© Christine Calo 2021. Please do not reproduce without the expressed consent of Christine Calo.