Bit – Concept, uses and how to calculate it

We explain what a bit is, its different uses, and the methods by which this computing unit can be calculated.

A bit is the smallest unit of information that computing uses.

What is a bit?

In computing, a bit (a contraction of the English term binary digit) is a value from the binary numbering system. This system is so named because it comprises only two base values, 1 and 0, with which an unlimited number of binary conditions can be represented: on and off, true and false, present and absent, and so on.

A bit is, then, the minimum unit of information used in computing, whose systems all rest on this binary code. Each bit of information represents one specific value, 1 or 0, but by combining several bits many more values can be obtained. For example:

2-bit model (4 combinations):

00 – Both off

01 – First off, second on

10 – First on, second off

11 – Both on

With these two bits we can represent four distinct values. Now suppose we have 8 bits (one octet, which in most systems equals one byte): we obtain 256 different values.
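The combinations above can be enumerated directly; as a minimal sketch (the function name is ours, not from the article), each added bit doubles the number of patterns:

```python
from itertools import product

def bit_combinations(n):
    """Return every on/off pattern that n bits can represent."""
    return ["".join(bits) for bits in product("01", repeat=n)]

# 2 bits give 2**2 = 4 combinations, matching the list above.
print(bit_combinations(2))        # ['00', '01', '10', '11']

# 8 bits (one octet) give 2**8 = 256 distinct values.
print(len(bit_combinations(8)))   # 256
```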

In this way, the binary system operates by weighting each bit (1 or 0) according to its position in the represented string: each position further to the left is worth double, and each position further to the right is worth half. For instance:

To represent the number 20 in binary:

Binary value: 10100

Numeric value per position: 16 8 4 2 1

Result: 16 + 0 + 4 + 0 + 0 = 20
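The place-value sum above can be sketched in code (the function name is ours, chosen for illustration): each bit that is on contributes the power of two for its position.

```python
def binary_to_decimal(binary):
    """Sum the place values of the bits that are on (1)."""
    total = 0
    # Walk the string right to left: the rightmost bit is worth 2**0.
    for position, bit in enumerate(reversed(binary)):
        if bit == "1":
            total += 2 ** position
    return total

# 10100 -> 16 + 0 + 4 + 0 + 0 = 20
print(binary_to_decimal("10100"))  # 20
```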

Another example: to represent the number 2.75 in binary, placing the binary point in the middle of the figure:

Binary value: 010.11

Numeric value per position: 4 2 1 0.5 0.25

Result: 0 + 2 + 0 + 0.5 + 0.25 = 2.75
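The fractional case works the same way, except that positions to the right of the binary point are worth negative powers of two. A minimal sketch (again with a function name of our choosing):

```python
def fractional_binary_to_decimal(binary):
    """Sum place values around a binary point:
    left of it the positions are worth 1, 2, 4, ...
    right of it they are worth 0.5, 0.25, 0.125, ..."""
    integer_part, _, fraction_part = binary.partition(".")
    total = 0.0
    for position, bit in enumerate(reversed(integer_part)):
        if bit == "1":
            total += 2 ** position
    for position, bit in enumerate(fraction_part, start=1):
        if bit == "1":
            total += 2 ** -position
    return total

# 010.11 -> 0 + 2 + 0 + 0.5 + 0.25 = 2.75
print(fractional_binary_to_decimal("010.11"))  # 2.75
```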

The bits with value 0 (off) are not counted; only those with value 1 (on) are, and each is given its numerical equivalent based on its position in the string. This forms a representation mechanism that is later applied to alphanumeric characters through encodings such as ASCII.
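In ASCII, each character is assigned a numeric code that fits in one byte, so every letter reduces to a string of bits. A short illustration:

```python
# Each ASCII character has a numeric code that fits in 8 bits (one byte).
for char in "Bit":
    code = ord(char)                    # the character's ASCII code
    print(char, code, format(code, "08b"))  # code shown as 8 bits

# B 66 01000010
# i 105 01101001
# t 116 01110100
```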

In this way, the operations of computer microprocessors are recorded: there are 4-, 8-, 16-, 32- and 64-bit architectures. This means the microprocessor's internal registers are that many bits wide, which determines the calculation capacity of its Arithmetic-Logic Unit.
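As a quick numeric sketch of what register width implies (this arithmetic is standard, not taken from the article), an n-bit register can hold 2**n distinct values, so its largest unsigned value is 2**n − 1:

```python
# Largest unsigned value an n-bit register can hold: 2**n - 1.
for width in (4, 8, 16, 32, 64):
    print(f"{width}-bit register: 0 .. {2 ** width - 1}")
```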

For example, the first computers in the x86 series (the Intel 8086 and Intel 8088) had 16-bit processors, and the noticeable difference between their speeds had less to do with their processing power than with their external buses: 16 bits on the 8086 and 8 bits on the 8088, respectively.

Similarly, bits are used to measure the storage capacity of digital memory.
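As a minimal sketch of how such measures relate (the conversion is standard, and the function name is ours): storage sizes are usually quoted in bytes, each of which groups 8 bits.

```python
BITS_PER_BYTE = 8

def bits_to_bytes(bits):
    """Convert a count of bits to whole bytes (8 bits per byte)."""
    return bits // BITS_PER_BYTE

print(bits_to_bytes(8))    # 1 byte
print(bits_to_bytes(256))  # 32 bytes
```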