Understanding Bits and Bytes

A bit is short for binary digit: a digit in the binary numeral system, which is base 2 (i.e. there are only two possible values, 0 or 1).

This means that the number 10010111 is 8 bits long. A bit is thus a variable or computed quantity that can take only two possible values. These two values are interpreted as binary digits and are usually denoted by the Arabic numerals 0 and 1. The bit is the basic unit of information storage and communication in digital computing and digital information theory. It is also a unit of measurement: the information capacity of one binary digit.
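As a quick sketch in Python, the following shows how the 8-bit pattern 10010111 from the example above can be read as an unsigned integer by summing powers of two:

    # Interpreting the 8-bit pattern 10010111 as an unsigned integer.
    bits = "10010111"

    # Each position contributes digit * 2**position, counting from the right.
    value = sum(int(d) * 2**i for i, d in enumerate(reversed(bits)))

    print(len(bits))     # 8   -> the pattern is 8 bits long
    print(value)         # 151
    print(int(bits, 2))  # 151 -> Python's built-in base-2 conversion agrees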

There are several units of information defined as multiples of bits, such as the byte (8 bits) and the kilobit (either 1000 or 2^10 = 1024 bits).

A kilobyte is a unit of information or computer storage equal to either 1024 or 1000 bytes. It is commonly abbreviated as kB, KB, or Kbyte. The term "kilobyte" was first used to refer to a value of 1024 bytes (2^10), because the binary nature of digital computers lends itself to quantities that are powers of two, and 2^10 is roughly one thousand. This reuse of the SI prefix spread from the slang of computer professionals into the mainstream lexicon, creating much confusion between the decimal kilo (1000) and the binary kilobyte (1024 bytes).
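A short Python sketch makes the two interpretations concrete; the byte count here is just an illustrative value:

    # Comparing the decimal (SI) and binary interpretations of "kilobyte".
    n_bytes = 65_536              # an illustrative size; any byte count works

    decimal_kb = n_bytes / 1000   # SI kilobyte: 1 kB = 1000 bytes
    binary_kib = n_bytes / 1024   # binary kilobyte (kibibyte): 1 KiB = 2**10 bytes

    print(decimal_kb)  # 65.536
    print(binary_kib)  # 64.0

The same byte count therefore reports as two different "kilobyte" figures depending on which divisor is used.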

A megabyte is a unit of information or computer storage equal to approximately one million bytes. It is commonly abbreviated MB, while Mb is used for megabits. A gigabyte (derived from the SI prefix giga-) is a unit of information or computer storage equal to one billion (that is, a thousand million) bytes.

In binary terms, a gigabyte is 1,073,741,824 bytes, equal to 1024^3 or 2^30 bytes. This is the definition used for computer memory sizes, and it is the one most often used in computer engineering, computer science, and most aspects of computer operating systems. Because this binary usage conflicts with the SI definition used for bus speeds and the like, the IEC recommends that the binary unit instead be called a gibibyte (abbreviated GiB).
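This gap between the decimal gigabyte and the binary gibibyte is why a drive advertised in GB shows up as a smaller number of GiB. A minimal Python sketch, using an assumed 500 GB drive purely as an example:

    # Why a drive sold as "500 GB" (decimal) appears as roughly 465.7 GiB (binary).
    advertised_bytes = 500 * 10**9   # 500 gigabytes, SI definition
    gib = advertised_bytes / 2**30   # convert to gibibytes (1 GiB = 1,073,741,824 bytes)

    print(2**30)          # 1073741824 -> one gibibyte in bytes
    print(round(gib, 1))  # 465.7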
