We have seen that with binary code we represent all quantities using only two symbols.

These two symbols represent the MINIMUM possible amount of information: either there is voltage on the basic component, or there is not.

This unit of information is called a BIT: BInary digiT.

A single bit can then take the values 0 or 1.

TWO bits can take 4 different values (2^2 = 4): 00, 01, 10, 11, which represent the numbers 0, 1, 2, 3.

EIGHT bits can take 256 different values (2^8 = 256): from 00000000 to 11111111, representing the numbers from 0 to 255.
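As a quick check, the "n bits give 2^n values" rule can be verified in a few lines of Python (an illustrative sketch; the language choice is mine, not the text's):

```python
# Number of distinct values representable with n bits is 2**n.
for n in (1, 2, 8):
    print(n, "bits ->", 2 ** n, "different values")

# All the combinations for TWO bits, written in binary:
two_bit_values = [format(v, "02b") for v in range(2 ** 2)]
print(two_bit_values)  # ['00', '01', '10', '11']

# EIGHT bits span the numbers 0 to 255:
print(format(0, "08b"), "=", 0)      # 00000000 = 0
print(format(255, "08b"), "=", 255)  # 11111111 = 255
```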

A set of eight bits is called a BYTE; a byte is therefore another unit of measurement, one that can take 256 different values.

By convention, these values are used in a PC to represent the digits from 0 to 9, all the letters (uppercase and lowercase), all the punctuation marks, and other symbols.

The correspondence between byte values and the symbols they represent is collected in the so-called standard ASCII table (American Standard Code for Information Interchange).
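Python exposes this byte-to-symbol correspondence directly through the built-in `ord()` and `chr()` functions, so you can explore the ASCII table yourself (a small sketch of my own, not taken from the text):

```python
# ord() gives the ASCII code of a character, chr() goes the other way.
print(ord("A"))  # 65
print(ord("a"))  # 97
print(ord("0"))  # 48
print(chr(66))   # B

# One byte is more than enough: standard ASCII uses codes 0 to 127.
for ch in "Hello!":
    print(ch, "->", ord(ch), "->", format(ord(ch), "08b"))
```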

Bytes are the building blocks of the words, the instructions, and the programs that the PC is able to interpret, to do... whatever we want!! (Almost always... he he he!!)

Keeping in mind that binary code lies at the base of everything, and therefore that we are reasoning in base 2, let us now look at other units of measurement:

1 Kilobyte = 1 KB = 1024 bytes (1024 because 2^10 = 1024)

1 Megabyte = 1 MB = 1024 KB = 1,048,576 bytes

1 Gigabyte = 1 GB = 1024 MB = 1,048,576 KB = 1,073,741,824 bytes
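Since we are reasoning in base 2, each step up is a multiplication by 1024, which Python can confirm for us (a throwaway sketch, just to check the arithmetic above):

```python
# Each unit is 1024 (= 2**10) times the previous one.
KB = 2 ** 10   # 1 Kilobyte in bytes
MB = 1024 * KB  # 1 Megabyte in bytes
GB = 1024 * MB  # 1 Gigabyte in bytes

print(KB)  # 1024
print(MB)  # 1048576
print(GB)  # 1073741824
```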

As a curiosity, I should mention that a set of 4 bits is called a "Nibble". Today it is almost unused, but there was a time when this unit of measurement had its own precise motivation: just think that 4-bit CPUs once existed.