How Computers Work – Part 1: “Bits”

Source: Safar Safarov (@codestorm / Unsplash)

Have you ever wondered how computers work? What’s really going on inside your smartphone, tablet, or personal computer? What are the high-level concepts and ideas that power one of humanity’s greatest inventions?

As a venture capitalist, I often spend time thinking about technology trends and entrepreneurship, but I’ve always felt like a bit of an imposter for not knowing the basics of how a computer actually works.

This series is my attempt to grasp the basic principles of modern computation. Through these blog posts, I will share notes that may someday be turned into a short PDF for non-technical people. But for now, consider this a work-in-progress that will continue to be revised for some time.

The primary sources I will use in the series include this excellent book, this YouTube series, Wikipedia, Google, and conversations with friends and colleagues.

I hope you find the series of some use and look forward to collecting feedback and discussing the topics as I go along.

Part 1 starts with the most basic unit of computation: the binary digit, or “bit” for short.


Bits

Everything a computer does is based on two basic ingredients: the elementary states “on” and “off”. We commonly represent these states with 1 for “on” and 0 for “off”. Each unit of 1 or 0 is called a bit.

With enough ones and zeroes, you can hold many bits of information (hence the name “bit”1) that, in aggregate, can express substantial complexity. This simple binary system is the foundation on which all modern digital computation is built.
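
To make that concrete, here is a small Python sketch (my own illustration, not taken from the sources above) showing how quickly the number of representable values grows: every extra bit doubles it.

```python
# Every additional bit doubles the number of distinct values you can represent.
for n_bits in [1, 2, 4, 8, 16]:
    print(f"{n_bits} bit(s) -> {2 ** n_bits:,} distinct values")

# The number 5, written as bits, is 101 ("on, off, on").
print(format(5, "b"))  # -> 101
```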

Using a binary system to represent more complicated information predates modern computers. In the 1800s, for example, navies used signal lamps, with flashes of light and intervals of darkness (the equivalent of today’s 1s and 0s), to communicate between ships.2 Then of course there is Morse code, which uses dashes and dots for messaging.

In today’s computers, instead of using flashing lights to convey and manipulate information, we use the flow of electricity (or the lack of it) at a microscopic level to represent 1s and 0s. If electricity is flowing in an area on a computer chip, that is a 1. If there is no electricity flowing in an area, that is a 0.

Electricity flow on computer chips is controlled by billions of tiny switches called transistors (more on this in the footnotes).3 These switches are a bit like a light switch, except that in computers the switches turn on and off extremely fast and operate at a scale far smaller than the width of a human hair.

My iPhone XS, for example, has 6.9 billion transistors on its A12 Bionic chip, and many modern computer chips pack more than 100 million of these microscopic switches per square millimetre.

Illustrative scale of the A12 chip in the iPhone XS Max, which has 6.9 billion transistors

Thanks to innovations in transistor technology, we now have machines with the capacity to manipulate several billion bits. And given enough bits, you can compute almost anything. The table below highlights some of the common terminology you might recognise from computer memory and storage capacities.

Computing Terminology    No. of Bits
Byte                     8 bits
Kilobyte (KB)            8,192 bits
Megabyte (MB)            8.4 million bits
Gigabyte (GB)            8.6 billion bits
Terabyte (TB)            8.8 trillion bits
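
These figures follow the binary (1,024-based) convention: each unit is 1,024 times the previous one, and every byte holds 8 bits. As a rough illustration (my own sketch, not from the sources above), the arithmetic behind the table can be reproduced in a few lines of Python:

```python
# Reproduce the table above: each unit is 1,024x the previous, at 8 bits per byte.
BITS_PER_BYTE = 8

units = ["Byte", "Kilobyte (KB)", "Megabyte (MB)", "Gigabyte (GB)", "Terabyte (TB)"]
for power, name in enumerate(units):
    bits = (1024 ** power) * BITS_PER_BYTE
    print(f"{name}: {bits:,} bits")

# Byte: 8 bits
# Kilobyte (KB): 8,192 bits
# Megabyte (MB): 8,388,608 bits          (~8.4 million)
# Gigabyte (GB): 8,589,934,592 bits      (~8.6 billion)
# Terabyte (TB): 8,796,093,022,208 bits  (~8.8 trillion)
```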

How do we use bits exactly? We can encode information with 1s and 0s thanks to a number of international standards. For instance, the American Standard Code for Information Interchange (shortened to ASCII) stipulates which numbers, and ultimately which series of bits, correspond to English-language characters.

Each ASCII character is stored in 8 bits (1 byte). Examples include the lowercase character ‘a’, represented by the number 97 and encoded in binary as ‘0110 0001’, and the uppercase character ‘W’, represented by the number 87 and encoded in binary as ‘0101 0111’.4
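
As a rough illustration (my own sketch using Python’s built-in ord and format functions, not a tool from the sources above), you can look up these codes and their 8-bit binary patterns directly:

```python
# Look up the ASCII code and the 8-bit binary pattern for a few characters.
for ch in ["a", "W", "A"]:
    code = ord(ch)              # numeric code, e.g. 'a' -> 97
    bits = format(code, "08b")  # the same number as an 8-bit binary string
    print(f"'{ch}' -> {code} -> {bits}")

# 'a' -> 97 -> 01100001
# 'W' -> 87 -> 01010111
# 'A' -> 65 -> 01000001
```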

Example of binary code to text conversion

Encoding methods have been established for other types of data too (e.g. pictures, audio, and video), and today we can compute many things with just 1s and 0s. What is just as incredible is that you never have to deal with the unwieldiness of working with billions of 1s and 0s yourself, since that work is abstracted away by special hardware, encoding standards, and smart software that can turn things in the world into bits.

This ‘abstraction’ truly is the beauty of computer science. Once a low-level problem like turning an ‘a’ into ‘0110 0001’ is solved (thanks to transistors, specialised electronic circuits, and software), you and I never have to worry about that process again.

That complexity is essentially abstracted away, and other types of higher-level software can be written on top of it, such as a word processor, which in turn can empower someone to type up a best-selling novel without ever having to understand how bits work.

Example of higher-level work, which is the result of lower-level complexity being abstracted away

In future parts, this series on how computers work will move further up this ladder of abstraction.

This week we started with bits. In the next part, we will move up to what engineers call logic gates: combinations of transistors that take multiple input bits of 1s and 0s and produce a single output bit. These are an essential building block for arithmetic computation.
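
As a small preview (my own sketch, ahead of the next part), an AND gate can be modelled in Python as a function that takes two input bits and returns one output bit:

```python
# A software model of an AND gate: the output is 1 only when both inputs are 1.
def and_gate(a: int, b: int) -> int:
    return 1 if (a == 1 and b == 1) else 0

for a in (0, 1):
    for b in (0, 1):
        print(f"{a} AND {b} -> {and_gate(a, b)}")

# 0 AND 0 -> 0
# 0 AND 1 -> 0
# 1 AND 0 -> 0
# 1 AND 1 -> 1
```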


Notes

[1]

The term “bit” was reportedly first used in a computational context in 1936 by the American engineer Vannevar Bush, in a paper titled “Instrumental Analysis” published in the Bulletin of the American Mathematical Society.

[2]

One such system was invented by Philip Howard Colomb, a Royal Navy officer and inventor. He patented the Flashing Light Signal system in 1862, which you can read about in this newspaper article from the time.

[3]

Transistors used to be bulky, discrete components that you would have to laboriously wire up to make more complicated circuits and computers. This website has a good summary of innovation in this area. In short, control of electricity was once achieved with vacuum tube technology (one bit required a space the size of a thumb); then transistors came along (one bit could fit on a fingernail); then integrated circuits (thousands of bits in the space of a hand); and more recently, silicon-based computer chips (many millions of bits in the space of a fingernail). This video explains a modern approach to transistors.

[4]

You can experiment with binary to text code here. A full table of ASCII codes is available here.