Innovation is slowing down
Innovation is a critical driver of economic growth, but major breakthroughs have been decreasing for decades.
We Need to Run Faster Just to Stay in Place. Illustration by author
Bill Gates once remarked that “The idea that innovation is slowing down is one of the stupidest things anybody ever said.” But today, I want to convince you of the exact opposite. That not only is technological innovation slowing down, but it has been doing so for decades.
It’s only fair to be sceptical — after all, many of history’s greatest thinkers have been proven wrong when predicting the future of technology. For example, Nobel Prize-winning physicist Lord Kelvin declared in 1900 that “Nothing really new remains to be discovered in physics.” Just over a century later, we have quantum physics and nuclear weapons.
As I look at the world around us, I can’t help but be underwhelmed by the advancements of the past 60 years. Computers in our pockets, Swiss army knife apps and AI-driven cars are all impressive, but they don’t really seem to have changed our lives all that much.
Our cars, planes, public infrastructure, factories, food supply, and antibiotics — all of them are improved versions of what we had in 1960. Almost every leading energy generation technology was created over a century ago. We’ve had the combustion turbine since 1791, the fuel cell since 1842, hydroelectric turbines since 1878, and we’ve been harnessing the power of photovoltaic cells since 1883. Man, even every major invention that makes up the internet was created decades ago.
Looking at the half-century from 1970 to 2020, computing and genetic engineering aside, one can’t help but notice a lack of revolutionary breakthroughs compared with previous eras.
The counter-argument might be: Wait, what do you mean, “other than computing?” How can you just ignore the one area where we have seen revolutionary progress? Computing has delivered orders-of-magnitude gains in capacity, performance, and cost; it has revolutionised all communications, connected nearly every human being on Earth, and put the world’s knowledge and culture in every pocket.
The rebuttal to the counter-argument is simple: Computing is just one area. But we used to have revolutionary changes happening in multiple areas simultaneously.
Putting this into perspective, from 1870 to 1920, we saw some incredible inventions and advances. We got the electric generator, electric motor, light bulbs; telephone, wireless, phonograph, and film; the first automobiles and aeroplanes; and the assembly lines to build them. We also saw the first synthetic plastic (Bakelite), the Panama Canal, the Haber-Bosch process, the germ theory, and its applications to public health.
The period between 1920 and 1970 also brought incredible changes and technological advances. Radio and television were invented, radar and computers were developed, and plastics went into mass production. Mass manufacturing led to an explosion of consumer goods, and penicillin ushered in the golden age of antibiotics. Norman Borlaug’s Green Revolution transformed how we grow food, and nuclear power changed how we generate energy. The interstate highway system was built, jet aircraft were developed, and humankind landed on the Moon.
All of these inventions and advances have shaped our world today and made a massive impact on society. By comparison, it makes the last 50 years look mediocre.
When comparing the progress of different eras, finding a meaningful way to do so is surprisingly tricky. Part of the reason is how hard it is to measure the impact of any given scientific discovery.
However, technological innovation tends to correlate strongly with economic growth. That’s why looking at real GDP growth in advanced and global economies is so concerning: since the 1970s, growth has declined decade after decade, with little sign of reversing.
Vannevar Bush, an American engineer who was one of the first to conceptualise a search engine, famously stated that “Mendel’s concept of the laws of genetics was lost to the world for a generation because his publication did not reach the few who were capable of grasping and extending it; and this sort of catastrophe is undoubtedly being repeated all about us, as truly significant attainments become lost in the mass of the inconsequential.”
I can’t help but feel Vannevar was onto something here. Climate change is a perfect example; it’s the biggest threat to our planet, and there are plenty of technologies we could be using to combat it, but we’re doing surprisingly little. We have all the pieces to understand and solve the problem, yet we have a limited capacity to optimally discover and apply that knowledge.
One of the reasons I left my career as a doctor was because I wanted to help find ways of advancing medical research and its practical applications. Now, I spend my days building AI tools and search engines that help researchers navigate the vast repositories of academic information and identify opportunities to apply their knowledge.
I want to live in a world where we’re constantly taking risks and innovating new technology — a world where we’re always ready to meet society’s next big challenge. The thought that our tools meant a researcher missed the opportunity to develop a novel solution that could’ve potentially changed the world is devastating.
In this article, I explore why technological innovation in society has declined over the last century and what can be done to get us back on track.
Evidence for innovation slowing down
Decreasing original and combinational patents
What triggered my concern about slowing innovation was this single chart of the US patent record.
So, what does the chart above tell us?
Well, since 1970, our patent activity seems to have focused primarily on gradual improvements to existing technologies. The blue line represents our focus on iterative improvements and steady gains. The other lines represent our ability to invent new ideas and discover new science and technology domains.
Reflecting on this data, I can only ask myself, “What happened?”
Well, from a capitalist perspective, it makes sense. Highly profitable or societally impactful technological change often derives from refinements of existing technological capabilities rather than developing wholly new technologies (Abernathy and Utterback, 1978, Rosenbloom and Christensen, 1994).
Using logarithms to make the data more comparable (above), it’s clear that the number of originations and technologically novel combinations has been declining for a while now, with an even more severe dip recently.
This is really concerning. Many researchers who study invention agree that combinations of new and existing technology capabilities are a principal source of inventive novelty.
Technological novelty is essential to radical innovations, like the turbojet engine. The turbojet introduced a new way of generating thrust: expelling a high-velocity exhaust jet and, by reaction, pushing the aeroplane forward. This was a significant departure from the piston-driven propeller engines of the day. Over the following decades, incremental improvements refined this new approach until it delivered unprecedented performance, which in turn led to tremendous growth in the aviation industry and beyond.
Safe R&D spending
I can only conclude this is because most of our R&D efforts since 1980 have focused on short-term profit-motivated refinements rather than the development of novel ideas that would substantially improve our lives.
I believe this impact is already being felt across the global economy. As mentioned before, the link between technological innovation and economic growth is strong, and according to the World Bank, real GDP growth (often used as an indicator of the general health of an economy) has been decreasing across major world economies since the 1960s.
Another indicator is agriculture. Agriculture, it turns out, is a surprisingly good proxy for technological progress in the broader economy. It makes sense: agricultural products themselves haven’t changed much; corn yields are measured the same way now as they were 150 years ago.
It’s a pretty simple equation: agricultural yield is the rate at which one input (land) is transformed into an output (for example, bushels of corn), and technological progress is the growth rate of how efficiently this happens.
To illustrate this, the figure above plots average US corn yields on the left and the twenty-year growth rate of those yields on the right. As with GDP growth and novel and combinational patents, we can see a dramatic slowdown in the growth rate of crop yields since the 1960s.
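That growth-rate measure is simple to compute. Here is a minimal sketch in Python, using hypothetical yield figures rather than actual USDA data:

```python
# Twenty-year trailing growth rate of crop yields, the measure
# plotted in the figure above. Yield numbers are hypothetical
# (bushels of corn per acre), not USDA data.

def trailing_growth_rate(yields: dict, year: int, window: int = 20) -> float:
    """Annualised growth of yield over the preceding `window` years."""
    start, end = yields[year - window], yields[year]
    return (end / start) ** (1 / window) - 1

yields = {1950: 38.0, 1970: 72.0, 2000: 100.0, 2020: 130.0}
print(f"1950-70: {trailing_growth_rate(yields, 1970):.1%}/yr")  # boom era
print(f"2000-20: {trailing_growth_rate(yields, 2020):.1%}/yr")  # slowdown
```

With these illustrative numbers, the boom era grows at roughly 3% a year and the recent period at about 1%, the same qualitative pattern the chart shows.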
Further linking technology and agriculture is evidence that agricultural technology heavily relies on the novel patents and ideas developed outside of agriculture.
As the charts above indicate, our land use has stayed roughly flat while our labor force has fallen dramatically. That suggests that it’s technology that is accounting for our significant upswing in production from 1940 to 1970.
The stagnant food yields from the 1970s onwards are concerning, considering the global population recently tipped over 7 billion. Without more technological novelty, the world will not be able to sustain so many people. I mean, most of the reason life was so horrible for people before 1870 was that the human population was too large relative to our ability to feed everyone.
Another stagnation indicator is the cost of infrastructure.
One might expect innovation to make it steadily easier to build the same things, but this isn’t necessarily true. In much of the Western world, especially in the United States, infrastructure is more expensive to build than it was fifty years ago.
For example, spending per mile on Interstate construction in the US more than tripled from the 1960s to the 1980s. Research by the New York Federal Reserve Bank and Brown University found that even after ruling out “reasonable explanations” such as higher wages or more expensive highway materials, the cost to construct a “lane mile of interstate increased five-fold” between 1990 and 2008.
Taken together, slowing GDP growth, a declining number of new and innovative ideas in the patent record, stagnating crop yields, and the rising cost of infrastructure suggest that something has been fundamentally wrong with how we approach human progress for decades, and the consequences may be rapidly catching up with us.
I won’t lie; it’s a grim picture, one likely to push up the cost of living through a mixture of rising food prices, economic stagnation, domestic fuel bills and housing payments. Record-low consumer confidence scores in both the United Kingdom and the US make me feel that many of us share similar concerns about how these trends could affect our future economic prospects.
The question now becomes: How did this slowdown happen? And what needs to change to get society back on track?
Why innovation is slowing down
Now, here, you see, it takes all the running you can do to keep in the same place. If you want to get somewhere else, you must run at least twice as fast as that! — Through the Looking-Glass, Lewis Carroll
What made innovation fast to begin with?
As an initial point, “why has progress been slow?” might be approaching things backwards — maybe it’s better to puzzle over “why is it ever fast?” or “why does it exist at all?”.
To understand this, we need to go back to 1870, when being a human (especially a childbearing one) sucked. It was a world in which our technology could not extract enough resources to sustain the human population living within it.
In such a world, about one woman in three was left without surviving sons. Hence the drive to reproduce more — even if you already had living sons, to have another as insurance — was immense.
For eight thousand years and up until the nineteenth century, poverty, patriarchal systems, and slow technological progress kept humanity in the grip of the Malthusian trap, with nearly all of the potential benefits of better technology being eaten up by population growth and resulting resource scarcity.
The rate of technological advance was slow. Ideas that flourished at this time were not focused on our ability to be more productive but rather part of a system of force, fraud, exploitation and extraction.
We take a lot of things for granted today, but many things had to go right for our world today to exist. At the start of the 1870s, three pivotal events occurred that helped bring this about — the development of modern science and the industrial research lab to discover and develop valuable technologies, the development of the modern corporation to build and deploy the technologies, and the development of the global market economy to deploy those insights and technologies worldwide.
Without the backing of any organised systems, innovations only came about unplanned and inefficiently. Without corporations to deploy technology on a large scale, the work that was done could not have a significant effect at any location other than where it was initially conducted. And without global trade and communication, inventions would only have a local impact.
Once these three engines started working together, humanity’s technological progress roughly doubled every generation from 1870 onwards, accompanied by a healthy 2–5% annual increase in economic growth.
What followed was a development boom known as the Great Acceleration of the mid-twentieth century: the era of nuclear warfare dawned; resource extraction, population growth, carbon emissions, species invasions and extinctions increased massively; and the production and discard of vast quantities of metals, concrete and plastics boomed.
In the span of a mere 150 years, more technological progress occurred than in the previous 10,000. Practical ideas were discovered, developed, deployed, and then diffused throughout the global economy to an unprecedented degree — sometimes at a rate so fast people couldn’t fathom it.
It’s nothing short of a monumental feat that, as a species, we have successfully switched our societal focus from exploitation to productivity. This change has enabled us to develop technologies and make discoveries that have helped lift most people out of absolute poverty since 1800.
Ideas are getting harder to find
So what happened? What’s changed since 1970? In short, things have become more challenging to discover, and we are less willing to take risks when there are easy profits to make.
One thing to note about the early 20th century is the large-scale deployment of many powerful general-purpose technologies: electricity, the internal-combustion engine, radio, telephones, air travel, the assembly line, fertilizer etc. These novel inventions were typically not created by large teams of researchers but rather by motivated, curious individuals.
One could argue that the best and biggest ideas have already been discovered simply because it was easier to discover them.
Economists Bruce Weinberg and Benjamin Jones examined how old scientists are when they make discoveries that win them the Nobel Prize. They found that in the early days of the Prize, scientists were an average of 37 years old when they made their prizewinning discovery. But in recent times, the average age has risen to 47, an increase of approximately a quarter of a scientist’s working career.
Gone are the days, it seems, when one could simply observe the world, run a current through some gas or play around with some X-rays, watch what happened, and then change the world.
As time has progressed, we have accumulated so much knowledge that no single individual can master enough of it to push the frontier alone. A deeper understanding of the world has required more people, more time, and more money and resources just to sustain our current level of economic growth.
This insight can be nicely explained with the following simple equation highlighting the economic growth that emerges from idea-based growth models.
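A standard formulation, following the decomposition used in “Are Ideas Getting Harder to Find?” (Bloom et al.), is:

$$
\underbrace{\frac{\dot{A}}{A}}_{\text{economic growth}}
\;=\;
\underbrace{\frac{\dot{A}/A}{S}}_{\text{research productivity}}
\;\times\;
\underbrace{S}_{\text{number of researchers}}
$$

where $A$ is the stock of ideas and $S$ is the number of researchers. To keep growth constant while research productivity falls, the number of researchers must rise in proportion.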
Such an equation is interesting considering a recent study found that research productivity in the US has decreased by a factor of 41 since the 1930s, an average decline of more than 5% per year. At the same time, aggregate economic growth rates have remained relatively stable (though declining over decades, as described previously), while the number of researchers has risen enormously.
An excellent example of this is Moore’s Law. The number of researchers required today to achieve the famous doubling every two years of the density of computer chips is more than 18 times larger than the number required in the early 1970s. In fact, research teams have nearly quadrupled in size over the 20th century, and that growth continues today.
Our most significant opportunities simply require more skill, expensive equipment and researchers in ever-larger teams to make impactful progress.
To maintain exponential growth, we find ourselves in a never-ending race like the Red Queen’s in Through the Looking-Glass: running as fast as we can just to stay in the same place. At the current rate of decline in research productivity, we have to double our efforts roughly every 13 years simply to offset the increasing difficulty of finding new ideas.
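The arithmetic behind the factor-of-41 decline and the 13-year doubling time is easy to check. A quick sketch, treating the roughly 5% annual decline as continuous decay (the 5.1% rate and ~73-year span are approximations of the study’s figures):

```python
import math

# Headline figures from the research-productivity study, treated
# here as continuous decay for a back-of-envelope check.
decline_rate = 0.051   # ~5% annual fall in research productivity
years = 73             # approximate span from the 1930s

cumulative_decline = math.exp(decline_rate * years)   # about 41x
doubling_time = math.log(2) / decline_rate            # about 13.6 years

print(f"cumulative decline: ~{cumulative_decline:.0f}x")
print(f"research effort must double every ~{doubling_time:.1f} years")
```

A ~5% yearly loss compounds to the reported ~41x decline, and offsetting it means doubling research effort roughly every 13 to 14 years.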
Declining public funding and defensive private funding
Why invest time, money, and people into developing new ideas when I can guarantee economic returns and competitive advantages in the short term?
It's easy to put all the blame on the market economy for this situation, but it’s not all the industrialist’s fault. That’s the private market doing what it is designed to do — provide us with the innovation we demand — which now mainly revolves around smartphones.
The public funding domain doesn’t seem to be faring much better. For the first time in the post–World War II era, the majority of basic research in the United States is now funded by the private sector.
Data from ongoing surveys by the National Science Foundation (NSF) show that federal agencies provided only 44% of the $86 billion spent on basic research in 2015. The federal share, which topped 70% throughout the 1960s and ’70s, stood at 61% as recently as 2004 before falling below 50% in 2013.
Coming back to agriculture as an example, one of the primary reasons cited for the current stagnation in agricultural innovation is flat R&D investment since the 1980s.
The data suggest that low R&D investment tends to produce smaller and smaller gains in innovation. Unsurprisingly, a stagnant R&D budget has likely contributed to our current stagnant agricultural yield growth.
Another concerning indicator is our decreasing willingness to invest in risky ideas.
This is best represented by how long it takes for venture-backed startups to receive funding: from 2006 to 2020, the median age of a startup in the seed-round funding stage increased from 0.9 years to 2.5 years. The median age of a late-stage startup rose from 6.8 years to 8.1 years in that same period.
Among firms that were acquired, the average time from first financing to acquisition tripled, from a little over two years in 2000 to 6.1 years in 2021.
All these factors contribute to lower economic and competitive dynamism. They undermine one of Friedrich von Hayek’s key ingredients, the one that spurred our recent technological progress: the market economy. That hurts business, government, and society as a whole.
Why should we care
If you’re still with me, you might be thinking: Ok, I get it. It sounds like innovation is slowing down, but so what? Maybe we have enough innovation for now, anyway. It’s probably best we don’t create more things to destroy the planet or ourselves until we better understand the stuff we already have. Besides, I’m not sure if more innovation will improve my life anyway.
Well, you have a point there.
I keep insisting that productive human output must keep doubling, but perhaps the question should be: why must it?
Over the last generation, we have solved tons of well-defined problems. We eradicated smallpox and polio. We landed on the Moon. We built better cars, refrigerators, and televisions. We even got ~15 IQ points smarter! And how did our incredible success make us feel?
It seems that despite most of us now living above the poverty line, society is still unhappy. If solving well-defined problems didn’t make our predecessors happy, it’s unlikely that solving ill-defined ones will yield better results.
I get it. But even if we wanted to slow down, we can’t afford to.
The world faces many urgent challenges. Climate change, loss of topsoil, and diseases ranging from multi-drug-resistant bacteria to the growing burden of neurodegenerative disease are just some of them.
Tackling things like climate change requires more exponential technology, not less. Only technological innovation can solve problems like how to deliver good-quality healthcare, education, sanitation and power to the poorest billions on the planet.
We need new ideas that allow people to experience abundance and prosperity in sustainable and moral ways. If we don’t, we’re not just gambling with the biosphere but with future generations who deserve the best possible chance.
So, if ideas are getting more complicated and expensive to develop, and the market only demands incremental changes, what needs to change to ensure our future prosperity?
Reversing the trend of declining innovation
“To you, Baldrick, the Renaissance was just something that happened to other people, wasn’t it?” — Blackadder II, Richard Curtis
Reversing a trend of technological stagnation starts with acknowledging that the stagnation exists and ends with believing change is possible.
That may sound simplistic, but remember, the concept of “having new ideas” is, historically, a relatively new thing. And even now, it’s practised by so few people that society hasn’t yet internalised that generating new ideas should be our actual destination, with intelligence merely a means to that end.
Accept the decline and look for the big ideas
First, we need to believe that science is an endless frontier if we want to continue making progress. Our recent focus on refinements within established research fields suggests that we may be losing sight of this goal. Simple, incremental research does not create much uncertainty, whereas innovative research can lead to entirely new fields of study with their own fundamental questions.
For example, if you are a chip designer, geneticist, or AI researcher, you can imagine your next step in your research with a fair amount of certainty.
However, moving to a more innovative model requires rewarding people for working on seemingly wacky, uncertain ideas with a high chance of failure. Most research funders won’t be happy about this, and government funding isn’t going to make up the difference; organisations are already not spending enough on novel research. One option would be legislation requiring organisations above a certain size to invest part of their R&D budget in novel research.
Build a team of diverse experts
“Technological progress requires above all tolerance toward the unfamiliar and the eccentric.” — Joel Mokyr, The Lever of Riches
Next, we need to tackle the problem of an increasing burden of knowledge. As more problems are solved, we require additional knowledge to solve the remaining ones. This creates a vicious cycle — the more knowledge we need, the more difficult it becomes to solve problems.
The current response to the burden of knowledge is to develop increasingly specialised fields with increasingly specialised experts to occupy them. In fact, I wouldn’t be surprised if this is one of the biggest causes of the decrease in technological novelty. In many cases, it’s the combination of ideas from different disciplines that leads to novel breakthroughs. For example, some of the West’s most celebrated innovators, like Leonardo da Vinci and, more recently, Steve Jobs, were known for combining art and science to create astonishing new inventions.
Specialisation is the antithesis of the cross-disciplinary approach, but it is, unfortunately, a necessary evil of the modern world. Even at my job, I’m often told that I must focus on doing one or two things well to be successful — to hell with all the other stuff. I’m not saying my bosses are wrong — in fact, it’s damn good promotion advice — but it’s awful at the societal level.
Instead of focusing on one discipline at a time, we need to focus on patterns that can be applied from one knowledge domain to another. This will enable us to have a more accurate view of the world and better navigate its opportunities.
One solution to this problem is not to hire new employees with ever more similar expertise but instead to assemble a group of specialists with a common goal. Because no individual can have all the skills needed to solve every problem, teams are essential to any business’s success.
An interdisciplinary team is ideal because it brings varied perspectives and abilities together. The solutions that emerge when different skills cross-pollinate are far more potent than anything a single discipline can produce.
If you want to be more novel in your work, hire an artist or philosopher to join your team of engineers. They foster creativity by exposing the team to different perspectives; even when those perspectives are wrong, they can still sharpen your thinking. Authentic dissent can be difficult to encourage, but it is always invigorating, and it has an excellent way of waking up your brain cells.
Become a more interesting person
Diverse teams certainly increase innovation, but they don’t go far enough. As Mark Twain said, “I have never let school get in the way of my education.” We can’t always rely on others to feed us new perspectives on the world, so we must also help ourselves. To do that, we must be generally interested in whatever topic crosses our path.
When we’re curious, we tend to question the world around us more. This can be as easy as looking at our surroundings and asking ourselves, “What must have been true in the world for this thing to exist?” If you want more guidance on how to do this, I highly recommend Rob Walker’s book The Art of Noticing.
Let's get closer together
Another factor we should consider is being physically closer to the people we work with. Isaac Kohane, a researcher at Harvard Medical School, published a study which analysed over 35,000 peer-reviewed papers and found that papers written by coauthors who were physically closer to each other tended to be of higher quality.
In a world where we’re increasingly communicating remotely, it’s worrying to hear that the best research is consistently produced by scientists working within ten metres of each other, such as the significant scientific advances made at places like MIT’s Building 20.
Building 20 is a legendary place of collaborative innovation. In the postwar decades, scientists working there pioneered a stunning list of breakthroughs, from advances in high-speed photography to developing the physics behind microwaves. Building 20 served as an incubator for the Bose Corporation, gave rise to the first video game and to Chomskyan linguistics.
The point is that Building 20 is an example of how throwing together individuals from different disciplines, often with little knowledge of each other's work, created an unusual, creative, chaotic dynamic that produced a raft of innovations. It shows us that when the group composition is right, with enough people of different perspectives bumping into one another in unpredictable ways, the group dynamics take care of themselves, and so does the innovation.
It's probably one of the reasons Steve Jobs demanded that the new Apple spaceship campus be designed to encourage chance in-person encounters. It's also probably why Apple is pushing so hard for its staff to return to its California offices.
Do not fear the machine
Next, we need to embrace artificial intelligence in our jobs.
I mean, I wish intelligent machines had enough capacity to take over my job, perform knowledge work or invent new things, but it seems like we are nowhere close to that.
I’ll put this in perspective. Let’s say you travelled back to ancient Greece, booted up a super-intelligent AI, fed it all human knowledge, and asked it how to land on the Moon. The AI would respond, “You can’t land on the Moon. The Moon is a god floating in the sky.” The best answer you could probably hope for is instructions to build a temple and start praying.
My point is that an AI will always be limited by only what it knows — a machine is unlikely to start an airline when no airlines exist.
However, there is something reassuring about AI’s incompetence: it reminds us that there is something uniquely special about the human creative process that science, or at least today’s AI, can’t quite capture.
We should not fear AI; instead, we should use it to accelerate our creative process.
If innovating requires us to be at multiple frontiers of knowledge, but the overload of information means we are constantly missing opportunities, could artificial intelligence help us fill that gap?
For example, what if you could load up a language model trained on the greatest modern-day physicists, artists and philosophers and then ask it to look over your ideas and work? This would be an AI system that blends the best of both worlds: humans giving AI information and AI giving humans ideas in return, with the whole process automated and streamlined on an industrial scale. Think of the opportunities.
AI agents may not be able to make us super-intelligent, but they could pass knowledge on to us from the lessons of history — a sort of super-history. Chess is an excellent example of this: some young adults are now playing the game not the way our ancestors did but in a style that has been influenced by AI agents. Players now follow odd and unconventional patterns of play where human tradition would suggest conceding.
On the more creative side, image-generation tools such as MidJourney have enabled us to try new ideas and explore new forms of creativity because the friction of getting a decent visualisation has decreased so dramatically. Now I can ask for an engine made from jelly, and boom, there it is: an incredible gleaming 12-cylinder jelly monstrosity.
Who knows what creativity will be unleashed when such tools enable humanity to create videos, music, 3D models or even simulations in less time than it takes to brew tea.
Rise of the lone genius 2.0
Finally, we should consider encouraging highly motivated individuals to explore their curiosity and find ways of applying their outputs more effectively.
Joel Mokyr argues that “invention occurs at the level of the individual, and we should address the factors that determine individual creativity.” Decades of research have consistently shown that brainstorming groups think of far fewer ideas than the same number of people who work alone and later pool their ideas. Keith Sawyer, a psychologist at Washington University, has summarised the science: “Serendipitous thought, the kind that makes critical breakthroughs and solves challenging problems, does not occur in groups.”
Novel ideas pooled from individual ideation are like new tools: they let us see old problems in entirely new ways. The more counterintuitive, the better; it means we as individuals have a better chance of identifying novel opportunities.
We need tools to help motivated individuals explore their curiosity and become more creative. When asked about the 10,000 attempts it took him to develop a commercially viable lightbulb, Edison purportedly responded, “I haven’t failed 10,000 times — I’ve successfully found 10,000 ways that don’t work.” I want AI tools that enable us to simulate, draw and explore an idea 10,000 times, ten times faster, without needing to spend years of our lives validating it.
Genius starts with individual brilliance, and a singular vision can spark an innovation, but it takes teamwork to carry that creativity through.
Closing thoughts and becoming protopian
I mentioned at the beginning how Lord Kelvin was reported to have claimed that physics had reached a natural end; in 1900, his prediction was received with mockery and even contempt. I am fully aware of the irony in making a similar statement, but I hope to be as wrong as Lord Kelvin was then.
I hope that by acknowledging and understanding the reasons behind the slowdown in innovation, we can start to implement systems that encourage riskier and bigger bets across all disciplines and industries. Only then can we hope to maintain our current standard of living while also making it more sustainable.
I believe that we can achieve continuous innovation if we are willing to take risks and learn from our failures. We need to learn to be unsatisfied with predictable incremental progress, and that begins at the level of the individual.
This is challenging, I know. It can be difficult to see how our individual actions can make a difference in the world. But it’s important to remember that such optimism is neither naive nor utopian; it’s protopian: a slow march of incremental betterment. Our actions do matter, even when it doesn’t seem like it.
If we want to keep making the world a better place, we must stay hopeful that there are always ways to improve things. The key is not genius or luck; it begins with ambition and a willingness to try to change things.
We need to stand on the shoulders of giants — not simply look up and admire them.
Here is wishing you many happy future inventions.
On being asked what he thought of modern civilisation: “That would be a good idea” — Mahatma Gandhi