'How Innovation Works' by Matt Ridley offers a systematic examination of an incredibly important but poorly understood phenomenon. What is innovation? According to Ridley, it is much more than invention (something often mistaken for innovation). Invention is the act of discovering an idea; it is merely a beginning, an inception point. Innovation is much more: it is what happens when an invention is taken into the world and made practical. It is the creative reconfiguration of multiple inventions and ideas into something new, and the work of fully exploring the consequences of that new thing, disseminating it, and integrating it into society and general use.

Matt Ridley argues that we need to see innovation as an incremental, bottom-up, fortuitous process that happens as a direct result of the human habit of exchange, rather than an orderly, top-down process developing according to a plan. Innovation is crucially different from invention, because it is the turning of inventions into things of practical and affordable use to people. It speeds up in some sectors and slows down in others. It is always a collective, collaborative phenomenon, involving trial and error, not a matter of lonely genius. It still cannot be modelled properly by economists, but it can easily be discouraged by politicians. Far from there being too much innovation, we may be on the brink of an innovation famine.

Matt Ridley, 5th Viscount Ridley, is a British science writer, journalist and businessman. He is best known for his writings on science, the environment, and economics.


Key learnings

Innovation isn’t an instantaneous creative act practiced by lone geniuses. It’s actually a long, messy, and complicated process. Innovation occurs when chance encounters and serendipitous insights are shared, remixed, and built upon by countless individuals. New inventions are slowly and incrementally improved over time as people find practical uses for novel ideas. If we want more innovation in the future, we need to foster the open exchange of knowledge and take big risks as individuals, organizations, and nations.


Innovation is a complex, messy, and collective process

The Industrial Revolution – the giant leap in productivity that kicked off the modern era – began when humans first harnessed the power of steam to automate work. To do this, they used a new machine called the atmospheric steam engine. So, who do we thank for this astounding achievement? A man named Denis Papin. Or, wait, maybe we should thank Thomas Savery. Or, hold on, maybe a fellow called Thomas Newcomen deserves our praise? The truth is, all three men deserve some credit, but none of them can claim all of it. That’s because, around 1700, Papin, Savery, and Newcomen all produced their own working models of the atmospheric engine. To this day, it’s unclear who was truly first or how much each inventor influenced the others.

We often associate a new invention with a single creator. However, that’s an oversimplification of how innovation operates. Even the most creative people don’t work in a vacuum. They’re always influenced by the tools, technologies, ideas, and social structures that surround them. This often means multiple forces contribute to an innovation, even when one person takes the credit. Let’s consider the case of the atmospheric steam engine. This relatively simple device heats and cools water in a metal cylinder. The changing pressure caused by steam creates movement that can be used for work, like pumping water out of mines. Could Papin, Savery, or Newcomen have invented this completely on their own? Not really. The basic ideas behind the device were already hot topics of discussion in scientific circles at the time. Papin and Savery, both educated men, refined their thinking by exchanging letters and papers with other inventors. Moreover, Newcomen, who built the most successful version of the engine, relied on previous advances in blacksmithing technology to complete his machine. Thus, each man’s invention was also a product of their backgrounds and influences.

This principle applies to all innovation. While Thomas Edison gets credit for inventing the light bulb in 1879, the truth is, more than 20 other creators patented similar contraptions in earlier decades. All these thinkers were responding to ideas and technologies circulating at the time. Of course, some of these attempts were better than others, but none of these innovations happened in complete isolation.


Medical innovations offer high risks and even higher rewards

While the atmospheric steam engine kicked off the Industrial Revolution, medicine evolved with its own innovative procedures, like the following: Step one: Find someone recovering from smallpox. Carefully scrape some pus off one of the many open lesions caused by the disease. Step two: Using a knife or needle, cut an open wound into your own skin. Not too deep, but deep enough to draw blood. Step three: Rub the infected pus into your wound.

This technique is called engraftment. In most cases, it’ll make you immune to smallpox. If it seems gross and dangerous now, just imagine how it appeared to a European in the 1700s. They didn’t have a scientific understanding of why it worked, yet it did work. So, as the century progressed, the practice caught on. It saved countless lives and eventually led to the discovery of modern-day vaccines.

An interesting fact about innovation is that the biggest revelations don’t always come from deliberate discovery or sound scientific theory. Instead, they develop piecemeal over time through random chance, as well as trial and error, as people look for practical solutions to their problems. In the medical field, this is a particularly risky process, but it has resulted in many life-saving practices.

Consider Jersey City’s water supply. In 1908, rapid industrial development tainted the city’s water with unsanitary runoff. The result was serious outbreaks of cholera and other diseases. In a rush to fix the problem, Dr. John Leal added chloride of lime, a disinfectant, to the water. At the time, adding chemicals to drinking water was considered repulsive. Local citizens were outraged. But Leal had heard rumors of it working in European cities, so he tried it anyway. Within months, the experiment paid off, and disease rates plummeted. Soon, communities all around the country were following Jersey City’s example.

Are such open-ended experiments occurring today? Of course. Take the example of electronic cigarettes, also known as vaping. For many, picking up a vaping habit is the first step toward quitting smoking. Since tobacco use is a major cause of death, this could save many lives. Yet, we don’t fully know the health effects of vaping, so the practice remains controversial. In some countries, like the United Kingdom, government agencies encourage it. In contrast, other countries, like Australia, have banned it. Which country takes the right stance on this innovation? That remains to be seen.


Travel innovation is all about incremental improvements

The Salamanca, the Puffing Billy, the Sans Pareil. These names sound silly now, but in the early 1800s, each represented a small step toward improving the way we move. You see, at the dawn of the nineteenth century, the horse was the king of transportation. However, inventors believed a machine, the steam-powered locomotive, could take its place. The tricky part was figuring out how to build one. So, engineers tried a great many different designs, giving each new prototype a bold new name. Not every device succeeded, but the best of them delivered incremental improvements in speed, safety, or reliability. By 1829, the Rocket, a locomotive built by Robert Stephenson, could haul 13 tons of cargo at 30 miles an hour – and the world was on its way to a railway boom.

Throughout history, humans have always looked for faster, more reliable ways to travel. However, no new mode of transportation has ever emerged fully formed. The sleek, efficient machines that carry us around today are the result of countless individuals making innumerable small design improvements over time.

Look at the evolution of today’s automobiles. Most rely on the internal-combustion engine for power. Isaac de Rivaz, a Franco-Swiss artillery officer, built this machine’s earliest ancestor back in 1807. It ran on hydrogen and oxygen and was loud, clunky, and prone to explosions. In 1860, a Belgian-born engineer named Jean Joseph Lenoir updated the design to run on petroleum. This was a step up, but the device was still very inefficient.

Next, in 1876, Nikolaus Otto, a grocery salesman, refined the machine by adding a four-step cycle of compression and ignition. Dubbed the four-stroke engine, this model allowed for smoother operation. The design was adopted by the German inventor Karl Benz. In 1885, he amped up the engine’s power and used it to drive a three-wheeled machine called the Motorwagen.

While the Motorwagen was a hit with the rich, it remained a novelty. It took another innovator, Henry Ford, to bring the automobile to the masses. His Model T, launched in 1908, and the moving assembly line he introduced a few years later made the car affordable to more people. Soon, cars were one of the most popular forms of transportation around. It took decades of slow, steady improvement, but the engine had finally conquered the horse.


Some innovations aren’t solid things but simply good ideas

The humble potato is the basis for so many popular snacks and dishes we love today, but this wasn’t always the case. At least, it wasn’t in Europe. That took some innovation. First cultivated more than 8,000 years ago in the Andes Mountains of South America, the potato didn’t arrive in the Old World until the mid-1500s. However, for decades, Europeans regarded potatoes with suspicion. The church in England banned them. People in France believed they caused leprosy. Still, slowly, people learned to love this robust, nutrient-rich crop. The idea of eating potatoes first caught on in Belgium. Then the idea spread throughout the entire continent. By the 1800s, most European countries had made the potato a new staple of their cuisine.

Often, the concept of innovation gets reduced to invention. That is, we think of innovation as the process of making new tangible items like labor-saving machines or electronic gizmos. However, some of the most influential innovations of all time aren’t objects at all. Instead, they’re ideas that open up new ways to approach the world or solve problems.

One intangible innovation you use every day is the Arabic numeral system – better known simply as numbers.
Yes, even the idea of using 1s, 2s, and 3s was once revolutionary. This counting system was first developed by Indian scholars around 500 AD. It was then adopted by Arab traders in the ninth century, and finally found a foothold in Europe in the 1200s thanks to an Italian author known as Fibonacci. Fibonacci advocated for using Arabic numerals because they were more practical than the Roman numerals popular at the time. Their key advantage was their positional system. While the Roman numeral V always means five, the Arabic five changes value based on its position in a sequence. So, a five followed by a zero means 50, a value ten times greater.

This seems like a small change, but it opened up a whole new world for mathematics. With Arabic numerals, it became far easier to do advanced calculations like multiplication, division, and algebra, and to keep financial records and do accounting. Adopting Arabic numerals was an innovation essential to launching Europe into a new age of trade, commerce, and scientific discovery.
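To make the positional idea concrete, here is a minimal Python sketch – my own illustration, not something from the book – of how a digit’s value depends on where it sits:

```python
# Minimal sketch of positional notation: the same digit is worth ten times
# more for every place it moves to the left.

def positional_value(digits: str) -> int:
    """Evaluate a string of base-10 digits the way Arabic numerals work."""
    total = 0
    for d in digits:
        total = total * 10 + int(d)  # shift the running total left, then add the digit
    return total

print(positional_value("5"))    # 5
print(positional_value("50"))   # 50  -- the same '5', now worth ten times more
print(positional_value("505"))  # 505 -- here the two 5s contribute 500 and 5
```

A Roman numeral has no equivalent trick: V means five wherever it appears, which is part of why calculation and bookkeeping were so cumbersome before the new system arrived.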



Our desire to communicate drives rapid innovation

Baltimore, Maryland, 1844. The Whig Party holds a convention and nominates Henry Clay for president. It’s big news, and usually, it would take a train more than an hour to deliver the results to Washington, DC. But, this year, the message arrives in seconds. How? Thanks to the telegraph, a brand-new invention installed by Samuel Morse. It transmits information by sending electrical signals through a suspended wire. It’s the first practical innovation in the emerging field of electrified communication technology. The telephone arrives three decades later, in 1876. Wireless radio follows in the 1890s.

By the turn of the century, distant people are more connected than ever before. However, this is just the beginning. Over the following decades, innovations in communication and information technology will revolutionize the world. Before Morse tapped out the first dots and dashes through a telegraph, communication was either conducted face-to-face or through physical objects like letters and books. Ideas spread more slowly, and accessing information depended on which printed materials you could actually get your hands on. However, the advent of electronic communication like the telegraph, telephone, and eventually computers changed everything – and fast.

It’s hard to overstate how quickly new communication technology was adopted. The first telegraph line was completed in 1844. By 1855, there were 42,000 miles of lines in the United States alone. By the end of the 1870s, telegraph cables stretched across the Atlantic and spanned much of the globe. Broadcast radio followed a similar trajectory: from a handful of experimental stations at the start of the twentieth century, it grew to be the dominant form of public communication by the 1930s.

Computers also became an essential part of daily life at an astounding rate. This is partly due to how quickly computer technology improved and miniaturized. A computer’s processing ability is largely determined by how many transistors its chips contain. Incremental improvements make transistors smaller and easier to produce, steadily allowing more of them to be housed in less space. This steady doubling of transistor counts is known as Moore’s Law. In 1975, a state-of-the-art chip held about 65,000 transistors. Today, that number is in the billions, and the chips are far cheaper.
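As a rough back-of-the-envelope illustration of how quickly that compounding adds up – my own arithmetic, not a figure from the book, with the “billions today” value assumed to be ten billion:

```python
import math

# Back-of-the-envelope sketch: how often would transistor counts need to double
# to get from the 1975 figure to a modern chip? The "today" values below are
# assumptions chosen for illustration, not figures from the book.

transistors_1975 = 65_000
transistors_today = 10_000_000_000   # assumed: roughly ten billion on a modern chip
years_elapsed = 2020 - 1975          # assumed endpoint for the calculation

growth = transistors_today / transistors_1975
doublings = math.log2(growth)

print(f"Growth factor: {growth:,.0f}x")                                   # ~150,000x
print(f"Doublings needed: {doublings:.1f}")                               # ~17
print(f"Implied doubling period: {years_elapsed / doublings:.1f} years")  # ~2.6
```

Under those assumptions, the count doubles roughly every two to three years – the pattern popularly known as Moore’s Law.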

With the internet now connecting all the world’s computers, sharing information is easier than ever. This innovation has changed the political landscape by giving a huge amount of power to those who control communication technology. Now, the world’s most influential companies are search engines like Google and social media empires like Facebook.


Innovation relies on chance, collaboration, and recombination

The non-stick pans in your kitchen, the Gore-Tex coats worn in extreme environments, the fluorine gas chambers in the first atomic bombs. What do these all have in common? They’re all innovations based on polytetrafluoroethylene or PTFE. PTFE was first synthesized in 1938 – by accident. A scientist researching refrigerants stored tetrafluoroethylene gas at sub-zero temperatures. The chemical solidified into a hard substance that was unusually stable and heat-resistant. It didn’t work as a refrigerant, but other scientists found that in other contexts, it could be used for so much more. This story of PTFE is useful because it demonstrates the complex way innovation actually works.

Every story of innovation is different, but if you look closely, you’ll see they often follow a similar pattern. Many of the greatest innovations begin with a bit of serendipity: someone has a lucky break, an unusual insight, or a random occurrence. Then, others pick up on the discovery and apply it to new situations. Through trial and error, they test the new idea or invention in different contexts until they find a practical use.

Consider the modern practice of using DNA as forensic evidence in criminal cases. No one set out specifically to create this innovation. Instead, it began in 1977, when Alec Jeffreys, a scientist at Leicester University, tried to develop a method using DNA to diagnose diseases. While collecting samples, he saw that DNA was a lot like fingerprints – that is, everyone’s genetic code was different. A chance discovery.

Meanwhile, the local police were struggling to solve a grisly murder. They wondered if Jeffreys’s discovery could help crack the case. So, the scientist and the police worked together, collecting and analyzing more than 5,000 genetic samples from local suspects and comparing them to DNA found at the crime scene. Eventually, they found a match. Case closed.

Because so much innovation follows this same pattern, it’s possible to identify conditions where it’s more likely to happen. Innovation thrives when people can cross paths, mingle, and exchange ideas. That’s why, throughout history, universities, trading hubs, and major cities have consistently produced novel innovations. By bringing different people with different expertise, perspectives, and cultures together in one place, these contexts foster the type of interactions that push innovation forward.


Innovation doesn’t always come from the top down

In 1924, the British government wanted to build a civilian airship capable of traveling across oceans. This raised the question: should the task be handled by the government or by private industry? They decided to try both approaches. Parliament contracted a government lab and a private firm, Vickers, to build two ships. How did this experiment play out? Well, by 1930, Vickers had designed the R100, a light, fast, and efficient aircraft. It traveled to Canada and back with no problems. Meanwhile, the government lab built the R101, a heavier, more costly ship. On its maiden voyage to Karachi, in what was then British India, it only made it to France before crashing, killing 48 people on board.

These two very different outcomes illustrate an important point. When it comes to innovation, direct government oversight and control isn't always the answer. There’s a popular notion that innovation requires guidance and funding directly from the state. This argument posits that private industry, in a constant quest for easy profits, will avoid the costly research and development necessary to create truly new ideas. Instead, firms will hoard their patents and simply rehash old products. But is this true? Not exactly. While it’s true that government-directed research makes great discoveries, it often takes the ingenuity of private enterprise to turn them into practical innovations. Consider the internet. The basic components of computer networking were created by the Defense Advanced Research Projects Agency, an American government agency. However, the internet didn’t take off as a household necessity until private firms like Cisco began to experiment with the technology in the 1980s and 1990s.

This dynamic occurs because big government projects often aren’t sensitive to the needs or desires of everyday people. Additionally, they can be slow to adopt new, outside-the-box ideas. However, big companies can also suffer this tendency. That’s why even giant firms are sometimes usurped by plucky start-ups.

Remember Kodak? This company was once the undisputed master of the photography industry. Film cameras were their flagship product. So, in 1975, when one of their scientists built an early version of a digital camera, his innovation was ignored. The higher-ups just didn’t see the potential in his bulky, electronic gizmo. Yet, smaller companies did. And they developed their own products, which took over the market. Thus, Kodak missed the digital photography revolution and filed for bankruptcy in 2012.


Innovation will always face resistance

Take a stroll through your grocery store’s dairy aisle. Here, you’ll see a wide selection of butters and margarines sitting side by side in perfect harmony. The choice of spread is up to you. This wasn’t always the case. When margarine was invented in 1869, it caused an uproar. The oily spread was both cheaper and more stable than butter. The dairy industry, fearing the competition, launched a vicious campaign against it. The National Dairy Council even faked studies showing it was dangerous. By the 1940s, two-thirds of American states had banned this innocuous staple.

Of course, the fervor eventually subsided, and margarine became an accepted foodstuff. Yet, this butter battle shows that even harmless new creations can stoke controversy. When a truly novel idea or invention arrives on the scene, it will often be rejected. This is because everyday people often fear change. Also, established industries don’t want to risk losing their supremacy. This is why horse breeders fought against tractors, ice-harvesters tried to stifle refrigeration technology, and some musicians initially wanted to ban radio stations from playing recorded music.

One way interest groups try to slow innovation is by sowing fears about safety and security. Consider the case of genetically modified organisms or GMOs. GMOs, such as vitamin-A-enriched golden rice, have the potential to bring cheaper nutrition to people all over the world. Yet, groups ideologically opposed to genetic modification, like Greenpeace, lobby hard against their production, citing sometimes flimsy evidence that these foods are dangerous.

Another way innovation is held back is through the overly aggressive application of intellectual property laws. When applied correctly, these laws, such as copyrights and patents, incentivize innovation by giving creators exclusive use of their ideas for a short time. This allows the original innovators to profit.

Yet, as we know, innovation requires sharing ideas and building on the work of others. Unfortunately, copyright terms have been steadily extended, making this process more difficult. In the United States, a copyright originally lasted 14 years. In 1976, the term was extended to the life of the author plus 50 years, and in 1998 it was stretched again to life plus 70. Terms that run decades beyond an author’s death do nothing for the original creator, but they do keep good ideas locked away from potential new uses.


Innovation is lacking in the West but booming elsewhere

In much of the world, the past few centuries have been filled with astounding leaps in innovation. In mere generations, Western countries have gone from largely agrarian economies to electrified, industrialized powerhouses. Even now, every day seems to bring novel developments in sectors like communications and computer technology.

Yet, alongside these changes, other sectors are strangely stagnant. In the realm of transportation, not much has changed. In 1958, the newest commercial jets cruised at around 600 miles per hour. Today’s airliners move at roughly the same speed. There have been upgrades around the margins, in areas like fuel efficiency, but the fundamentals are untouched.

The business world, too, is less dynamic. In the United States, new businesses made up 12 percent of the economy in 1980; by 2010, they accounted for only 8 percent. Across the Atlantic, things are even staler: of Europe’s 100 most valuable businesses, only two are younger than 40 years old. Most industries seem more focused on protecting current profits than on pursuing bold new ideas.

So, where is innovation occurring? Mostly in rising nations like China. For the past few decades, this country has poured resources and manpower into urbanization and new technologies. Now Chinese firms like Tencent and Alibaba are at the forefront of growth industries like social media and financial services. Moreover, Chinese universities are making huge strides in fields like gene editing and artificial intelligence.

Can the Western world keep up? Maybe. It would require a renewed spirit of innovation. Companies will need to take more risks, workers will need to put in more hours, and governments will need to foster the free and open exchange of ideas that fueled past booms. All that, plus a little luck, will put innovation back on the agenda.


Image - Bright spark: detail of an 1879 illustration depicting Thomas Edison's lightbulb 
CREDIT: Getty Images



'Stolen Focus', by the British journalist Johann Hari, takes a close look at what has happened – and what is still happening – to our collective attention. Hari argues that we’re all becoming lost in our own lives, which feel more and more like a parade of diversions. And it seems to be getting worse every year.

In Stolen Focus, Hari sets out on a global investigative journey into our shortening attention spans. Drawing on more than two hundred interviews with the world's leading experts on attention problems, the book moves beyond individual solutions towards a collective understanding of a problem facing us all. Only by first solving the attention crisis, Hari argues, can we turn to fixing the issues we care about most and sustain the attention needed to build a better society.

Johann Hari is a writer and journalist. He has written for the New York Times, Le Monde, the Guardian, and other newspapers. He was a columnist for The Independent and the Huffington Post, and has won awards for his war reporting.


Key learnings

Our attention spans are shrinking as a result of our accelerated pace of life and speed of communication. The internet, especially the rise of apps and platforms that prey on our focus, has supercharged this attention drain. And it’s not due to a personal flaw or individual weakness. Most of these attention-grabbing methods are intentional; they’re elaborately designed for the very purpose of keeping you distracted. To combat them, we need large-scale, systemic change – on an individual level, as well as from the tech designers who invented these systems in the first place.


It’s not just you – everyone is struggling to focus

Unless you’re living off the grid, you’ve probably noticed that it’s getting increasingly difficult to focus. You’re busy all the time, yet you struggle to actually get anything done.

In 2016, Sune Lehmann was having these exact problems. His capacity for deep focus was dwindling, and he was more susceptible to distractions than ever before. Lehmann is a professor at the Technical University of Denmark – so he didn’t simply dismiss the nagging feeling that his concentration was waning. Instead, he spearheaded a study to find out whether there was actually evidence to back up his suspicion.

By analyzing various metrics across online platforms, Lehmann and his team discovered something interesting: in 2013, conversation topics trended on Twitter for an average of 17.5 hours before people lost interest and moved on. By 2016, that figure had dwindled to 11.6 hours – a drop of nearly six hours in only three years. The study recorded similar results across platforms like Google and Reddit. In short, the more time we’ve spent in online spaces, the shorter our attention spans have become.

So, is it really just the internet that’s eroding our focus? Well, yes. But also no. See, Lehmann also analyzed every book that’s been uploaded to Google Books between the 1880s and today. And he found that this phenomenon actually predates the internet. With every passing decade, trending topics appear and fade with increasing speed. Lehmann’s study is indicative rather than comprehensive, of course. And measuring these metrics isn’t a definitive way to map our evolving attention spans. But, if we accept the premise that our concentration is suffering, the next question is: Why?

It’s difficult to pinpoint precisely, but a good jumping-off point is what think-tank director Robert Colvile calls “The Great Acceleration.” Essentially, the way we receive information is speeding up. In the nineteenth century, for example, news could take days to travel from place to place. Then, technologies like the telegraph, radio, and television sped up the spread of information. On top of this, our information inputs – the different modes through which we receive information – have multiplied. In 1986, the average Westerner ingested the equivalent of 40 newspapers a day through the various available information inputs. By 2004, that figure had risen to an astonishing 174 newspapers worth of information. Today, that figure is almost certainly much higher. The internet has undeniably supercharged this acceleration. Now, information is not only available to us all the time; it actually intrudes on our lives through the ceaseless pings and notifications coming from our laptops and smartphones.

And our brains just haven’t caught up with this acceleration. Research suggests they never will. The study of attention is still an emerging field, but research on speed-reading suggests that there’s a finite limit to how quickly we can process information. And, as neuroscientists point out, the cognitive capacity of the human brain has not significantly changed in the last 40,000 years. The amount of information we pour into our brains has, however, increased stratospherically. It’s really no wonder we sometimes find it difficult to focus.


Apps and online platforms are addictive by design, not by accident

Facebook, Instagram, Twitter – the fact that these apps and other online platforms suck up so much of your time isn’t a design flaw. They’re supposed to be addictive. After all, there’s a reason Silicon Valley calls its customers “users.” And where did this design originate? At the Persuasive Technology Lab at Stanford University. In the early 2000s, the lab asked whether the theories of influential behavioral psychologists could be incorporated into computer code – in other words, whether tech can change human behavior. The answer was yes: tech can change human behavior.

One of the psychologists studied in the lab was B. F. Skinner. Skinner was famous for the experiments he conducted on rats. He’d present a rat with a meaningless task, like pushing a button. But the rat showed no interest in doing this. So Skinner modified the task. Now, every time the rat pressed the button, it would be rewarded with a pellet of food. Rewards would motivate animals, Skinner found, to carry out tasks that had no intrinsic meaning to them. Skinner inspired the creation of other buttons you might recognize: like buttons, share buttons, and comment buttons. Those little hearts and emojis and retweet buttons aren’t design quirks; they’re programming us to use social media in addictive ways by rewarding us for the time we spend on the platforms.

These buttons keep us engaging longer. But they’re only one of the many design elements geared at keeping us online. Here’s another one: the infinite scroll. Back in the early days of the internet, web pages were just that: pages. Sites often comprised multiple pages; when you got to the bottom of one, you clicked through to the next. The bottom of each page offered a built-in pause. If you wanted to keep browsing, you had to actively decide to click ahead.

That is, until Aza Raskin stepped in. Raskin invented the infinite scroll – the endlessly refreshing feed of content that now features on the interface of nearly every social media platform, giving the impression that there is a never-ending supply of content. If likes and shares encourage users to stay online longer, the infinite scroll encourages users to stay online in perpetuity.
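As a rough sketch of the difference – a toy illustration of my own, not Raskin’s actual code – compare a paginated feed, which ends and waits to be asked for more, with an infinite one, which simply never stops:

```python
import itertools

# Toy illustration: a paginated feed ends and waits for an explicit request for
# more, while an "infinite scroll" feed never runs out. A sketch of the idea,
# not any platform's real code.

def paginated_feed(posts, page_size=10):
    """Old-style pages: each one ends, creating a natural pause."""
    for start in range(0, len(posts), page_size):
        yield posts[start:start + page_size]   # continuing requires a deliberate choice

def infinite_feed(posts):
    """Infinite scroll: content simply keeps coming."""
    for post in itertools.cycle(posts):        # recycles and refreshes forever
        yield post                             # no page boundary, no built-in pause

posts = [f"post {i}" for i in range(30)]

pages = paginated_feed(posts)
print(next(pages))                             # one page, then a stopping point

endless = infinite_feed(posts)
print([next(endless) for _ in range(5)])       # could keep going indefinitely
```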

Raskin, however, has come to regret his invention. At first, he thought the infinite scroll was elegant and efficient. But he became troubled when he noticed how it was changing online habits – including his own. Noticing that he was spending longer and longer on social media, Raskin started to do the math. He estimates that the infinite scroll induces the average user to spend 50 percent more time on platforms like Facebook and Twitter.

The business model of most of these platforms is predicated on time – or, as they call it, engagement. This refers to how much time a user spends interacting with a product. That’s the metric tech companies use to measure their success – not money, but minutes. But money does play a part, too. Because the longer you spend “engaging,” the more chances the companies have to sell advertisements. The more you engage, the more companies track your behavior and build a profile uniquely designed to target you with specific ads. We don’t pay for platforms like Facebook and Instagram with our money. But we do pay with another precious, finite commodity: our attention.

In Silicon Valley, time equals money. The money is theirs. And the time – the attention – is yours.



Algorithms privilege outrage over community

Online platforms erode our focus and exploit one of our most precious resources – our attention – for their own financial gain. But these same platforms can be a force for good, strengthening community and driving collective action.

To better understand this potential, let’s travel to the Complexo do Alemão favela in Rio de Janeiro, Brazil. The Brazilian government takes a militant approach to this crowded, low-income area, routinely sending in tanks to suppress unrest. And it’s an open secret that the police shoot to kill. When innocent kids get in the way of their bullets, the police plant drugs or weapons on them and claim self-defense.
Raull Santiago lives in Alemão. He also runs the Facebook page “Coletivo Papo Reto,” which collects and disseminates videos of the police shooting innocent people. The page has galvanized many favela-dwellers to rally against their treatment. And it has shifted the tide of public opinion in Brazil, where favelas like Alemão are often reviled. But the situation in Alemão has only gotten worse since the election of Brazil’s far-right president, Jair Bolsonaro. And here’s the thing: Bolsonaro’s victory, like Coletivo Papo Reto’s success, can also be partly attributed to Facebook. Bolsonaro’s campaign inundated social media with clickbaity, fear-mongering campaigns – and he ended up getting elected.

What connects us can also divide us. Lately, it feels like online platforms have been much more intent on dividing than connecting. And it all has to do with algorithms. The content you see in your infinitely refreshing feed isn’t ordered chronologically. It’s arranged by an algorithm that is programmed to serve up whatever keeps us scrolling longer. It’s easy to disengage from calm, positive content. But if something strikes us as outrageous or controversial, we tend to keep looking. This is part of a psychological phenomenon called negativity bias – negative experiences impact us more than positive ones. So it’s in social media’s interest to literally provoke its users.
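To make the mechanics concrete, here is a deliberately simplified Python sketch of engagement-based ranking. The fields and weights are invented for illustration – it is a toy model of the dynamic described above, not any platform’s actual algorithm:

```python
from dataclasses import dataclass

# Toy model of engagement-based ranking. The scoring rule and fields are
# invented for illustration; this is not any real platform's algorithm.

@dataclass
class Post:
    text: str
    predicted_dwell_seconds: float   # how long the model expects you to look at it
    outrage_score: float             # 0.0 (calm) to 1.0 (highly provocative)

def rank_feed(posts: list[Post]) -> list[Post]:
    """Order posts by expected engagement rather than by when they were posted."""
    def engagement(post: Post) -> float:
        # Negativity bias in miniature: provocative content gets a boost
        # because it tends to hold attention longer.
        return post.predicted_dwell_seconds * (1.0 + post.outrage_score)
    return sorted(posts, key=engagement, reverse=True)

feed = rank_feed([
    Post("Neighbourhood bake sale this weekend", 8.0, 0.1),
    Post("You won't BELIEVE what they just banned", 8.0, 0.9),
])
for post in feed:
    print(post.text)   # the provocative post comes out on top
```

Even in this toy version, the provocative post wins the top slot, simply because provocation is a reliable way to hold attention.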

The algorithm has no ethics. It doesn’t condone or condemn; it just codes. But the people watching it feel, believe, and judge. For some, the more they’re exposed to misinformation, the more normal – even credible – it seems. A 2018 study that analyzed extreme right-wing militants in the US found that the majority of them were initially radicalized on YouTube.

You may not engage with misinformation online. You might put down your phone or close your laptop when you feel outraged by what you see online. You may choose not to spend your attention on provocative content. But this still affects you. See, when online platforms privilege divisive, shocking content, they also corrode our power for collective attention – our ability, as a society, to focus on issues that affect us.

Back in the 1970s, scientists discovered that a hole was opening in the ozone layer. It was being created by a group of chemicals called CFCs, which were then commonly used in hairsprays. The scientists issued a warning: if the hole in the ozone layer grew, we would lose a crucial layer of protection against the sun’s rays. Life on earth as we knew it was at risk. Activists campaigned against the use of CFCs. They persuaded their fellow citizens to join the cause. Eventually, they put enough pressure on governments that the use of CFCs was banned. This is an environmental success story. But the outcome might have been different if we hadn’t focused our collective attention – first on the science, then on the arguments of our fellow citizens, and finally on the group effort of lobbying governments for a total ban on CFCs. Would we be able to train our collective focus on a similar issue today? Climate change poses a real and present danger to life on earth. But as a species, we can’t seem to absorb the science – or even agree on whether we should be listening to scientists in the first place.

Social media can be a powerful force for good. But rather than harness this force, platforms like Facebook are intent on exploiting our attention – and, as a consequence, they’re sowing division and controversy.

Recently, Facebook conducted an internal investigation called “Common Ground.” Its aim was to uncover whether the company’s algorithms really did promote controversy and misinformation to keep users engaged. According to the report, the findings were very clear: “Our algorithms exploit the human brain’s attraction to divisiveness.” Facebook hasn’t done very much about this disturbing finding, however. And neither have we. We’re too busy infinitely scrolling.


Ditch multitasking – recovering focus is about finding flow

It’s easy – and not inaccurate – to blame our shrinking attention spans on our devices and the easy access they offer to an attention-sucking online world. But, like an artfully cropped Instagram snap, that’s not the whole picture. See, there’s a fundamental flaw in the way we frame “focus.”

We live in an accelerating, consumerist society – one that values speed and output. And in this climate, we’re encouraged to “quantify” our attention in terms of what immediate results it yields. Our focus is a resource that allows us to produce, to earn, to tick items off our to-do lists. And that’s where multitasking comes in. The more we can simultaneously achieve, the better our focus is spent. So why not distribute our attention across several tasks at once? Well, because, as it turns out, humans are really bad at multitasking. The word “multitask” was coined by computer scientists in the ’60s to describe the function of computers with multiple processors. It was never meant to be applied to humans. After all, we only have one processor: our brain.

When we multitask, we’re not simultaneously performing several tasks at once. We’re switching between them at hyperspeed. And every switch incurs what’s called a “switch-cost” effect. When you switch between tasks – or when you’re interrupted mid-task – your brain needs to recalibrate, which decreases your mental performance. A study commissioned by Hewlett Packard compared a group who worked on a task uninterrupted with a group that was distracted during the course of their task. The study found that members of the distracted group temporarily dropped an average of ten IQ points while they were completing their task.

In a work climate that values multitasking as a sign of peak productivity, distraction is practically encouraged. We’re constantly answering emails, participating in multiple conversations about multiple projects, and working across three or four different computer screens. In fact, in the US, the average white-collar worker spends 40 percent of their time engaging in so-called multitasking.

Luckily, there is an antidote to multitasking – a way of approaching tasks that cultivates deep focus. The psychologist Mihaly Csikszentmihalyi first identified this state, which he called “flow.” You find your flow, Csikszentmihalyi theorized, when you become so absorbed by a task that you lose all sense of your surroundings and are able to access a deep well of internal focus. In flow, your focus becomes deeper and better, and you’re far less susceptible to distractions.

The good news, according to Csikszentmihalyi, is that everyone can access flow – as long as they meet a few key conditions.
  1. First, the task you’re tackling needs to be intrinsically rewarding; when you’re in flow, it’s the process rather than the product that engages you.
  2. Second, the task should be challenging enough to demand your full attention – but not so difficult that you’re tempted to give up on it.
  3. Finally, monotasking is essential. To tap into that wellspring of focus, you need to direct all your mental energy toward a single task.

High-performing individuals like athletes, musicians, and scientists often attribute their achievements to their ability to access flow states. But in a society that has decided multitasking is a virtue – and that values speed and output over deep focus – the average person is finding it harder and harder to achieve flow.



We can get our attention back

In a world obsessed with multitasking, making room for other forms of focus, like flow, is a radical act. And it’s possible – but it’s not as simple as slowing down and switching off. Activating airplane mode won’t do much as long as you live and work in a system that encourages you to multitask, privileges productivity at all costs, and encourages you to spend increasing amounts of time in online spaces designed to sap your focus. It’s the system itself that needs to change.

Luckily, change may be on the horizon in Silicon Valley, where disillusioned designers are beginning to push back against our attention crisis. Former Google engineer Tristan Harris, as well as Aza Raskin – yes, the same Aza who designed the infinite scroll – want to see a non-predatory social media rise from the ashes of our current attention spans. Social media was designed to steal our attention. But Harris and Raskin are certain it could be redesigned to give our attention back. What would this new social media landscape look like? They have a few ideas.

The infinite scroll would be turned off, for one thing. All those little “rewards” like hearts and likes and shares might be turned off, too. You could instead receive a daily roundup of what’s happened on your feed, designed to discourage you from checking multiple times a day. And technology’s power to influence human behavior could be used for good. You could tell the platform how much time you wanted to spend online, and it could work with you to achieve that goal. It could help you achieve other goals, too. Want to try going vegan? The platform could connect you with online groups that share vegan recipes. Concerned about climate change? The platform could link you up with local activist groups, both on and offline. 

Around the globe, real pushback against our collective attention crisis is seeing inspiring results. Perpetual Guardian, a New Zealand company, instituted a four-day work week. Employees have since reported a better work-life balance, the ability to focus deeper for longer, and decreased susceptibility to distractions. And it’s not just employees who are reaping the benefits. Shorter workdays and workweeks enable deep focus instead of performative multitasking, and they encourage workers to avoid workplace distractions – like sneaking a scroll through social media when the boss isn’t looking. In fact, when a Toyota factory in Gothenburg cut its workday by two hours, workers actually produced at 114 percent of their previous capacity, and the factory reported 25 percent more profit.

In France, the escalating demands on our focus are seen for what they are: a health crisis. French doctors grew concerned about the rising number of patients experiencing “le burnout” and took those concerns to the government. Now, companies with over 50 employees have to formally agree on the limits of their workweek – meaning it may actually be illegal for a French boss to send their employees emails over the weekend.

In the big picture, these are all small changes. But they should leave us feeling optimistic. They show that there are solutions to this collective attention crisis. We can reclaim our attention . . . if only we can focus on the task at hand.


Actionable advice

Don’t focus harder on your task – instead, let your mind wander.

Doing nothing is actually a valuable form of focus because it facilitates creativity, which arises when you make unexpected mental connections and associations. The longer you can let your thoughts drift, the more unexpected associations your mind can create – which just might help you reclaim some of your stolen focus.


© Christine Calo 2021. Please do not reproduce without the expressed consent of Christine Calo.