The Software Engineer is Dead, Long Live the Software Engineer!

[Image composed with DALL·E]

An undeniable killer use case for large language models is code generation. It’s already helping software developers write code up to twice as fast and with more delight.

As LLMs improve with each release, they can write better software and handle more complex architectures at a fraction of the cost of older-generation models. Take this trajectory to its most freakish conclusion and you’ll have a future where AI writes all of the world’s code. In that case, we won’t need software engineers, right?

This future is far more distant than sensationalists would have you believe. The genAI hype obscures the reality of what’s likely to happen over the next few years. I know this because even though I’m not a professional software developer, I write code with AI assistance every week, and my day job is to find and invest in talented tech entrepreneurs. I’ve seen first-hand and heard from other developers what makes LLMs brilliant and the areas where AI falls short.

[An example of code autocomplete in Python]

For example, try building a full-stack app with dozens of files and multiple API services. You’ll find that LLMs often get lost in the sauce when you have thousands of lines of code. No doubt, longer context windows (how much you can stuff into a model’s prompt) will solve some of this, but how about code maintenance?

If an AI writes most (or all) of the code, you still need someone who can interrogate and navigate the codebase. Why? Well, if the features of some arcane library get deprecated, for example, and the AI lacks sufficient training on what’s new, you’ll need a human to step in and help.

Or consider the seemingly simple task of understanding what users want or need. Software engineers don’t just write code. Knowing the syntax of a computer language is the easiest part of the job. You must also talk to people, understand their needs, distil those requirements into something more precise and achievable, and then balance the costs and benefits of various tools and approaches. AI will get good at this someday, but the world is too complex for LLMs to eliminate all software engineering jobs in the near term.

Some software engineers are already being replaced, though. People who use AI to compose better software are starting to replace those who don’t. And these AI-augmented engineers will have more work than ever. This is because things that were previously too expensive to automate will now be cheaper to codify. Just think about all the weird manual processes you tolerate because there’s no good software for it. AI-augmented software developers and hobbyists will be able to tackle these areas with greater speed and ease.

In my own life, I’ve composed code with AI to complete various mundane tasks. For example, I wrote a Python script to export my Spotify playlists to another streaming platform instead of paying for some obscure software. I hacked together a full stack app to help me find meeting venues. I built an app to summarise YouTube videos instead of waiting for Google to make the feature. I’ve also experimented with AI agents that conduct market research, among other things.
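To give a sense of how small these scripts can be, here’s a minimal, purely illustrative sketch of the playlist-export idea (not my exact script). It assumes the open-source spotipy library and Spotify API credentials set in the usual environment variables, and it only reads the playlists out as plain text; importing them into the other platform is a separate step.

```python
# Illustrative sketch: print every playlist and its tracks via the Spotify Web API.
# Assumes the `spotipy` library and Spotify developer credentials
# (set via the standard SPOTIPY_* environment variables).
import spotipy
from spotipy.oauth2 import SpotifyOAuth

sp = spotipy.Spotify(auth_manager=SpotifyOAuth(scope="playlist-read-private"))

for playlist in sp.current_user_playlists()["items"]:
    print(f"\n# {playlist['name']}")
    for item in sp.playlist_items(playlist["id"])["items"]:
        track = item["track"]
        artists = ", ".join(artist["name"] for artist in track["artists"])
        print(f"{artists} - {track['name']}")
```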

The software engineer of the pre-LLM era is clearly dead. But long live the software engineer who augments their craft with AI. Just as high-level programming languages eliminated the need to write tedious low-level machine code, AI will free up software engineers to focus on more creativity and innovation—two things the world needs now more than ever.

Coffee, Code, and ChatGPT: Lessons in Automation

(Photo by DALL·E)

I’ve been doing a lot of coffee meetings recently, and I often need to find venues—mostly cafes—that are convenient for both parties. Using Google Maps for this can be cumbersome, so I thought: why not ask ChatGPT-4 to write a simple program to solve the problem? The app could take two locations and automatically identify a list of coffee shops located roughly halfway between the two addresses.

To my surprise, after less than 30 minutes of working with GPT and Python, I had working code. Here’s the output of that initial process.
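To give a flavour of what such a script involves, here’s a rough sketch of the approach (not the exact code GPT produced). It assumes the googlemaps Python client and a Maps/Places API key, and it takes the simple coordinate midpoint of the two addresses, which is good enough at city scale.

```python
# Rough sketch: find cafes roughly halfway between two addresses.
# Assumes the `googlemaps` Python client and a valid Maps/Places API key.
import googlemaps

gmaps = googlemaps.Client(key="YOUR_API_KEY")  # placeholder key

def cafes_between(address_a, address_b, radius_m=1000):
    # Geocode both addresses to latitude/longitude.
    loc_a = gmaps.geocode(address_a)[0]["geometry"]["location"]
    loc_b = gmaps.geocode(address_b)[0]["geometry"]["location"]

    # Take the simple midpoint of the two coordinates.
    midpoint = ((loc_a["lat"] + loc_b["lat"]) / 2, (loc_a["lng"] + loc_b["lng"]) / 2)

    # Ask the Places API for cafes near that midpoint.
    nearby = gmaps.places_nearby(location=midpoint, radius=radius_m, type="cafe")
    return [place["name"] for place in nearby.get("results", [])]

if __name__ == "__main__":
    for name in cafes_between("King's Cross, London", "Canary Wharf, London"):
        print(name)
```

The web app version is essentially a function like this sitting behind a simple form for the two addresses.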

While this code was enough for me, it wasn’t user-friendly for non-technical people. So I went back and forth with ChatGPT-4 through a bunch of queries until I cobbled together a web app that anyone could use. You can see how that turned out in this demo video.

Feel free to try out the app here, but only for a short while before my Google Maps API budget is depleted.

What did I learn from this?

Going through this process of iteration and collaboration with AI was fun, but it also drove home a point that most tech-savvy people are already familiar with: AI can write code that works, but it’s not a full-on substitute for a good software developer. (This means now is still a good time to learn to code!)

Deploying even the simplest of apps involves a maze of tools and systems. It’s not just about the code. In my case, I had to set up a Google developer account to be able to use their maps technology. (This involved going through their documentation when the GPT-written code turned out to be out of date!) I also had to research and debate the merits of various hosting providers for the app before deciding which one to use. Additionally, I had to buy a domain name and link it to my servers. And then of course, I couldn’t forget the basics, like setting up analytics and regularly backing up the app code on GitHub, among other steps.

Of course, people who do this work daily find it technically trivial. However, even seasoned software developers grumble about how time-consuming it is to get all these tools and platforms working together for a public-facing app.

Simply put, you can’t fully automate the process of building things that will be used in the real world by real people. We’re not there yet. But what tools like GPT can do is speed up your prototyping process. Furthermore, if you have a touch of technical know-how, you can quickly automate a variety of personal tasks that don’t need to be public or require a full-fledged app. To me, that’s enough reason to be optimistic about how generative AI will meaningfully impact global productivity in the years to come.

The Personal Brand Delusion

(Photo by Jonas Stolle on Unsplash)

Excessive focus on a personal brand is terrible. 

Personal branding — assuming brand can even be applied to a complex human being in the same way it can to a commodity — is mostly a by-product of something else: making valuable and meaningful contributions in your areas of interest.

People who tirelessly and directly work on their personal brand often do so at the expense of other activities that matter. And at the extreme end, there are some who go as far as fraud just to build a name for themselves. (See this list of fraudulent Forbes 30 Under 30 candidates, for example.)

Paradoxically, and as this paper about the ‘Best-New-Artist Grammy Nomination Curse’ puts it, if you seek recognition directly, you probably won’t do your best work. That’s because people-pleasing antics and unrealistic versions of success are a distraction from what really matters. In contrast, if you care less about public opinion and awards, you may find yourself producing better and more original work. 

For these reasons I’m not sold on the idea of spending lots of time on a “personal brand”, especially if it precedes any meaningful contributions from an individual. Attempts to establish a personal brand without genuine achievement are, at best, fruitless busy work and, at worst, delusional. 

Only a select few can pull off a personal brand, and it emerges in its strongest form after incredible achievement. Beyoncé, Michael Jordan, Steve Jobs: each of these individuals has a great personal brand. But in every case their mastery of craft preceded mastery of image.

Chances are, you won’t tread the same path as Bey, Mike, or Steve. That’s okay. It means you can skip the personal branding frenzy. Instead, focus on doing exceptional work and contribute meaningfully in your areas of interest. Your reputation will naturally grow and you won’t have to rely on a contrived personal image to open doors. Let the superstars have their personal branding. For the rest of us, there are more impactful ways to invest our time.

How I Built a Venture Capitalist AI Bot and What I’ve Learned From It

(Photo by Markus Spiske on Unsplash)

I’ve been reading Fred Wilson’s blog for almost a decade and his writing inspired my move into venture capital some years back. He’s a seasoned early-stage investor and a co-founder of Union Square Ventures, one of the best venture capital firms of all time. (They invested in the likes of Stripe, Twitter, and Coinbase). 

I’m still reading through each and every one of Fred’s blog posts because there’s a ton of early-stage investing knowledge in them, and his writing is such a delight to read. But as a fan of the blog I also wanted to create an interactive way of traversing his knowledge base.

ChatVC: An AI bot that uses avc.com blog posts.

Enter GPT, LangChain, and Chroma. These tools helped me quickly build a prototype chatbot that I can ask questions about early-stage investing. Below are some GIF examples of the answers I get from the bot. I also tweeted about it here.

How did I build it?

There were broadly three steps. First, I used an open-source Python library called BeautifulSoup to extract all the text from https://avc.com/. Second, I created an AI-native database of that text using Chroma. This part is really cool because Chroma stores the text as embeddings in a high-dimensional space where related concepts sit close together and can be retrieved easily. Finally, I used LangChain to connect to OpenAI’s GPT model for chat and language capabilities. (The user interface is powered by Gradio.)
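For the technically curious, here’s a rough sketch of how those three steps fit together, written against the 2023-era LangChain interfaces (the library has since reorganised its packages, so treat this as illustrative rather than the exact code behind ChatVC). A single page fetch stands in for the full crawl of the blog archive, and an OpenAI API key is assumed to be set in the environment.

```python
# Illustrative sketch of the three steps: scrape, embed into Chroma, chat via LangChain.
import requests
from bs4 import BeautifulSoup
from langchain.text_splitter import CharacterTextSplitter
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Chroma
from langchain.llms import OpenAI
from langchain.chains import RetrievalQA

# 1. Extract text with BeautifulSoup (one page shown; the real bot crawled the archive).
html = requests.get("https://avc.com/").text
text = BeautifulSoup(html, "html.parser").get_text(separator="\n")

# 2. Split the text into chunks, embed them, and store them in a Chroma vector database.
chunks = CharacterTextSplitter(chunk_size=1000, chunk_overlap=100).split_text(text)
db = Chroma.from_texts(chunks, embedding=OpenAIEmbeddings(), persist_directory="./avc_db")

# 3. Connect a GPT model to the database so answers are grounded in the blog posts.
qa = RetrievalQA.from_chain_type(llm=OpenAI(), retriever=db.as_retriever())
print(qa.run("What does Fred Wilson look for in early-stage startups?"))
```

The Gradio interface mentioned above simply sits on top of a chain like this.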

All of this may sound super technical, but I’m not a software engineer. I’ve completed online coding classes before and I know a few basic programming principles, but it’s not my area of expertise. I built this VC AI by using open-source tools and asking ChatGPT for a lot of help!

Implications for Entrepreneurs

If I can build something like this over a few evenings, and the technology that enables it is getting simpler and more accessible, we’re going to see a massive rise in the number of people who can create software tools and apps. One of the things I’m excited about is that entrepreneurs who previously weren’t technical enough to start a tech company can now do so without the hurdle of finding a technical co-founder. That partner could be an AI.

Implications for Investors

I’ve built a prototype investor chatbot for personal use and learning but I don’t believe it can displace any seasoned investors, yet. Coincidentally, just as I was finishing the prototype, Fred Wilson wrote about this same topic yesterday and he makes a number of points that I agree with.

The thing is, there are facts, there’s knowledge, and then there’s wisdom. Everyone has cheap access to facts. Wikipedia does a wonderful job of that. Many people also accumulate knowledge with time and some expense – i.e. the know-how and nuance of applying different facts and ideas in a specialist area. AI is now getting really good at this. 

The final element is wisdom, and that’s hard-won. It’s very human. It involves living through an experience and internalising the whys and counterfactuals of what happened and what could have happened. I can’t just read a blog post from Fred Wilson or ask an AI about how to deal with a situation and all of a sudden become a wise person. That takes significant time, reflection, and tangible practical experience. 

AI might become wise some day, but from what I’ve built and learnt about large language models so far, it will be exceptionally difficult to replicate that. I remain open to the idea that this could change quickly if some new innovation in AI emerges. For now though, I don’t think investor jobs are going to be displaced by bots. Investing will, however, be augmented by AI.

Acceleration and Uncertainty of Artificial Intelligence

The excitement that surrounded the personal computer revolution in the 1980s and the advent of the world wide web in the 1990s has surely been eclipsed by what’s going on in artificial intelligence today.

Bill Gates – who was around for both eras – believes that the “development of AI is as fundamental as the creation of the microprocessor, the personal computer, the Internet, and the mobile phone.”

The Economist and others go further. They liken what’s coming with AI to the impact the printing press ignited 600 years ago, when a new general purpose technology led to an explosion of knowledge and productivity, as well as widespread upheavals and disruptive social change.

ChatGPT – the fastest-growing internet app ever – and the large language model that powers it are the fuel behind the excitement. The app is unlike anything we’ve ever seen. It’s so good at answering a wide variety of questions that it feels as if you’re chatting to the collective knowledge of humanity (or at least the publicly available text the AI was trained on.)

Note: ChatGPT is the fastest-growing app of all time.

I’ve worked in tech as a “non-techy” person on the investment and operational side of things for several years now, and I get to meet technologists often. I also often seek out and synthesize non-mainstream content about technology from a diverse group of friends, Twitter accounts, forums, and niche blogs. Never in this time have I felt a greater sense of technology acceleration than now.

From AI researchers highlighting that “2023 has already seen more advances in AI than any other year,” to software developers telling me how thrilled they are to build on this new technology, and how some are terrified of it (the potential near-term challenges and long-run risks should not be ignored), the path ahead is going to be full of surprises.

Note: Computers that train AI models are doubling in power every 6 months. If this trajectory continues, that’s roughly ten doublings over 5 years, or about a 1000x increase in power (2^10 ≈ 1,024). [Chart by Sevilla et al. 2022, adapted by Korinek 2023.]

How should we prepare? I’m not entirely sure since I’m also just coming to terms with what’s happening. However, I’ve adopted the technology early (I first spoke about GPT on a BBC show back in 2020); I’m exploring how I can invest in entrepreneurs who are building AI tools that will get us to the future more safely; and I want to learn and do more of what makes us positively and uniquely human.

My hope is that we end up in a future where AI helps us solve some of the world’s biggest problems rather than make them worse. But for that to happen, a lot more of the public will need to engage with the topic today.


Note: This was written entirely by a human.

Is Going to Uni Worth it? Here are 3 Ways to Find Out

University has never been more expensive, so is it still worth it if it will cost new students £100,000 in graduate debt and take 40 years to repay?

Recently we’ve seen the cost of everyday items skyrocket, from pasta prices to fuel and energy bills hitting record highs. But while the cost-of-living crisis is making headlines, a lesser-told story is that the average graduate today ends up with almost three times as much student debt as graduates did 10 years ago.

In addition, this cost-of-education crisis is expected to balloon further. The upcoming changes to the student loan system could see graduates pay back £100,000 of debt and interest over their working lives.

With such a huge price tag, you would expect university to still be the superior option. However, the research I conducted for my book, Is Going to Uni Worth it?, showed that there are many cases where an alternative, such as an apprenticeship, could be a better option.

How can you determine what’s likely to work for you? There are five key areas detailed in the book, but answering the three questions below can act as a starting guide.

1. What do you want to do in the future?

If you know what you’d like to do in the future and that path requires a degree, university is an obvious choice. For example, an aspiring astronomer or biologist must complete a degree. In contrast, you don’t need a degree to work as a journalist, accountant, or banker; these careers can be pursued via an apprenticeship.

2. What’s your learning preference?

University emphasises academics (i.e. lectures and reading assignments), while an apprenticeship focuses on the practical application of knowledge. Practical learners are therefore better served by an apprenticeship, while conventional academic study is better pursued at university.

3. What’s your affordability consideration?

Some university courses can be expensive. For example, just 4% of doctors come from a working-class background, and part of the reason for this is that many graduates of medicine accumulate £80,000 or more of student debt and often require additional financial support from their family to complete their training.

Thankfully, from 2023 it will be possible to train and qualify as a doctor by taking a degree apprenticeship, which doesn’t come with the student debt of a traditional path. So if you wished to be a doctor and the cost was out of reach, a degree apprenticeship would be a no-brainer.

These three questions are just the start when it comes to figuring out whether university is for you or not. But — as is the case with all interesting decisions — there’s no perfect answer, just one that’s good enough and hopefully worth it for you.


Is Going To Uni Worth It? by Michael Tefula (Trotman, £12.99) is available now at Amazon and all good bookshops.

A version of this article was originally published in the Autumn 2022 edition of the WhatLive.co.uk magazine (page 33.)

(Photo in the blog by Susan Q Yin on Unsplash)

Highlights from the New Bill Gates Book on Climate Change

‘How to Avoid a Climate Disaster’ is a fantastic read for people who are new to climate change. To my surprise, Bill Gates is also relatively new to the space (although he’s more than a decade ahead of most people!). He changed his mind on the topic in 2006, and I believe his newness to it led to a more accessible climate book. In the introduction he writes:

“Things changed for me in late 2006 when I met with two former Microsoft colleagues who were starting nonprofits focused on energy and climate. They brought along two climate scientists who were well versed in the issues, and the four of them showed me the data connecting greenhouse gas emissions to climate change.

I knew that greenhouse gases were making the temperature rise, but I had assumed that there were cyclical variations or other factors that would naturally prevent a true climate disaster. And it was hard to accept that as long as humans kept emitting any amount of greenhouse gases, temperatures would keep going up.

I went back to the group several times with follow-up questions. Eventually it sank in. The world needs to provide more energy so the poorest can thrive, but we need to provide that energy without releasing any more greenhouse gases.”

How to Avoid a Climate Disaster (Hardback version, Page 7)

Bill Gates does well to avoid technical jargon in the book, and balances pessimistic realism with reasons to be optimistic. This makes the book an easy and encouraging read.

Below are some of the highlights I made in the book but the full title is worth reading if you’d like to learn more about climate change and what we can do about it.


  • We add 51 billion tonnes of greenhouse gases to the atmosphere annually. That’s about 1 billion a week. The warming impact is equivalent to detonating 1 Hiroshima-sized nuke every second of every day, all year round.
  • Greenhouse gases are roughly composed of CO2 (76%), methane (16%), and nitrous oxide (6%). But even though methane and nitrous oxide are a smaller portion of the total, their warming impact is significantly larger: over a 100-year period, methane causes 28 times more warming per molecule than CO2, and nitrous oxide causes 265 times more. (See Box 3.2, Table 1 here.)
  • The fossil fuel energy industry is huge and will be resistant to change. It generates $2-3 trillion of revenues a year. That’s like the GDP of a rich country. For example it’s more than the GDP of Canada or Italy, and close to that of the UK.
  • The energy sector needs to urgently invest in the research and development of clean energy technologies given the climate emergency we face. However, it only invests around 0.3% to 0.4% of its revenue in R&D. The electronics and pharmaceutical industries do over 30 times that amount, with 10% or more of revenues invested.
  • A small rise in global warming seems harmless but it isn’t. The Paris Agreement aims to keep it below 2.0°C relative to pre-industrial levels and ideally below 1.5°C. But if we go from 1.5 to 2.0 it won’t just be 33% worse. The damage could be 100% worse or more since climate is a non-linear system. (On a related note, researchers estimate that based “on current trends, the probability of staying below 2°C of warming is only 5%.” The temperature scale below from a book called ‘The Madhouse Effect’ shows what the impact of this could be.)
  • Oil is so cheap that it costs less than a soft drink. In 2020 a barrel of oil cost around $40, and with roughly 159 litres in a barrel that works out at about 25 cents per litre. In pound sterling today that’s 18 pence, while a litre of Coke at Tesco costs £1.45. (Notice that even though oil is cheap today, the price doesn’t reflect the potentially irreversible costs of climate change.)
  • Around 50% of global CO2 emissions come from just 15% of the world’s countries. These are China (27%), USA (15%), EU and the UK (10%). (Bill Gates framed this slightly differently in his book, noting that “nearly 40% of the world’s emissions are produced by the richest 16% of the population.”)
  • The major sources of greenhouse gases are listed below (from page 55 of the hardback version of the book).
  • Cement is especially worth highlighting. The following statistic isn’t from Bill Gates’ book but it’s telling nonetheless: “If the cement industry were a country, it would be the third largest emitter in the world,” according to Carbon Brief.
  • If we transition to clean energy for all our electricity, we’d have to pay a bit more in the short term. For Americans this would be around 15% (a Green Premium of $18/month for the average home) and around 20% for Europeans.
  • Transmission and distribution make up a considerable portion of the cost of electricity. Bill Gates cites that this is more than a third of the final cost. The number I found for the UK is around a fifth.
  • Although solar cells are almost 10 times cheaper than they were in 2010 (which is fantastic progress), they are starting to reach their efficiency limits. The best solar panels convert less than 25% of sunlight into electricity, and the theoretical limit for current technology is around 33%.
  • Lithium-ion batteries might also be reaching their peak in terms of how long they last and the number of charge-discharge cycles they can go through. Bill Gates believes we can probably make batteries 3 times better, but we’re unlikely to get a 50x improvement.
  • Nuclear energy sounds scary. People often cite accidents like Fukushima (2011) and Chernobyl (1986) as reasons why we shouldn’t use it for electricity. However, deaths per terawatt-hour (“TWh”) are far lower for nuclear (0.07 people) than for coal (25 people) or oil (18 people). (Ps. The UK uses around 1 TWh in a day.)
  • The UK is currently the biggest user of offshore wind energy but China will likely take that position within a decade. (Ps. Offshore wind powers around 10% of the UK’s electricity and could be cheaper than gas by 2023.)
  • Carbon capture is an exciting new technology that can remove CO2 from the air directly. However, it’s technically challenging and expensive. This is because CO2 makes up just 1 molecule for every 2,500 molecules in the atmosphere (or 3.8 per 10,000).
  • Since 1990 the world has lost an area of forest that’s almost 4 times the size of Germany. That’s around 1.3 million square kilometres of forest cover.
  • The poorest countries will suffer the most from climate change yet they are the least responsible. For example, Africa accounts for just 2-3% of global emissions but it will be the hardest hit by climate change. This BBC article covers why this is the case.

Dear Overachiever

(Photo by Greg Rakozy on Unsplash)

Do more, we’re told. You’ll be happier when you do more. This is because when you achieve more, you’ll be more. ‘More of what?’ you say. ‘More worthy,’ they say.

People who achieve lots are admired by society. People who do little, well, we forget about them, mostly.

We equate doing more with being more worthy. Do well at school so you can be more. Do well at your job so you can be more. Achieve mastery and you’ll be more. Help lots of people and you can be even more. Of course, you can also make more money so you can buy more things and then you’ll be really more—more worthy than everyone else.

But there’s something deeply wrong with this kind of more, because how much more is enough? How much more must you do to be more enough to be enough? How much money makes you enough? How much achievement makes you enough? How much of helping others and the world makes you enough? Is enough even possible? Not with the kind of ‘do more to be more worthy’ we’ve grown accustomed to. That kind of more is a social construct with a reliably moving goalpost.

Want to know a secret that will set you free from this trap of more? It’s simple. Recognise that you are already more than more could ever be. You are the universe manifesting itself. Think about it. All your atoms—and the rest of the universe—came from a single point of infinite density 13.8 billion years ago. This singularity turned into an explosion of atoms, some of which organised into complex, sentient, thinking, feeling beings. That alone makes your life—and all lives for that matter—both infinitesimally rare and infinitely worthy. You are already more than any more can ever be.

But maybe you still think doing more makes you more worthy and that when you achieve no more you are worth—less. Well how about this: think about your best friend (or partner or kids). Do you love them more when they achieve more success? And when they do less do you love them no more? In the best relationships none of this matters. The person is infinitely worthy regardless of their achievements. Now if you can have that kind of relationship with others, why not have it with yourself too?

If you were your best friend you’d be worthy whether you did more or less. Of course you’d want your best friend to be happy, and some of that comes with doing more to be enough materially. But doing more to be more worthy doesn’t get you to happiness. Recognising you are already enough is what gets you there.

So do less, I say. Try it, even if for a short while, and see how it feels. The worse it feels the more trapped you are in doing more—in which case reading this couldn’t have come at a better time.


Afterword

This was a note I drafted to myself in 3 bullet points initially. Then I played with it some more by turning it into stylistic prose without the confines of formality or tightness of argument. Doing this more freely meant the first draft came out effortlessly, with the second and final drafts needing just a few minor tweaks.

The note is inspired by themes I’ve long thought about regarding the need for overachievement, and by the works of Michael Singer (see “The Premise” chapter in “The Surrender Experiment”) and David Burns (see the “Your work is not your worth” chapter in “Feeling Good”).

How to Think Better (In One Step)

Photo by Mark Fletcher-Brown on Unsplash

This is a slight revision of an idea I blogged about here.

Here’s something they don’t teach at school: the simplest way to become a better thinker is to take whatever you think is right and ask if it could be wrong.

Put another way, if you’d like to be a better thinker, form a habit of asking, “if this might be true, what would make it wrong?” Entertaining this question as often as you can will make you a more critical and better thinker.

Here are 3 simple examples of this technique.

On Success

Consensus: You need lots of success to be happy.

Critical view: Can you be happy without lots of success?

On Choice

Consensus: More choice is good.

Critical view: Can less choice be better?

On Hard work

Consensus: The more hours you put in, the better you’ll do.

Critical view: Can working smarter beat working harder?

Here are more debatable examples:

On Technology

Consensus: Technology progress is accelerating.

Critical view: Could technology progress be slowing down? For example, what recent technology has automated away many hours of laborious work for everyone? The washing machine did this over 100 years ago by cutting 4 hours of manual work down to a few minutes. What recent examples are just as impactful?

On the Economy

Consensus: Governments must cut public spending and increase taxes to help their economies recover from a recession.

Critical view: Can more public debt be worth it if it’s used to invest in technology and education that will help the economy recover faster? Is it possible that increasing taxes makes people spend less, thereby limiting the growth of the overall economy?

This type of thinking takes a bit more work, but with a simple question (“if this might be true, what would make it wrong?”) you open up branches of thinking that you would otherwise have missed. I also find this process quite fun, especially if taken lightly and without much ego.

That said, thinking more critically in this fashion doesn’t mean you should always reject what is thought to be true. In fact, if many people believe something is true, there are usually valid reasons why that is the case.

However, to be a better thinker you should always be able to entertain the idea that what you think is obviously right could be non-obviously wrong.
