Wednesday 5 December 2012

BIT (Computing)

A bit (a contraction of binary digit) is the basic unit of information in computing and telecommunications; a bit can take only one of two values, 1 or 0 (one or zero). These values may be implemented, in a variety of systems, by means of a two-state device.

In computing, a bit can be defined as a variable or computed quantity that can have only two possible values. These two values are often interpreted as binary digits and are usually denoted by the numerical digits 0 and 1. The two values can also be interpreted as logical values (true/false, yes/no), algebraic signs (+/−), activation states (on/off), or any other two-valued attribute. The correspondence between these values and the physical states of the underlying storage or device is a matter of convention, and different assignments may be used even within the same device or program. The length of a binary number may be referred to as its "bit-length".
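
As a minimal sketch of these ideas (plain Python, chosen here only for illustration), the snippet below reads the same stored bit under several of the conventions mentioned above and uses the built-in int.bit_length() to report the bit-length of a number:

    bit = 1  # the same stored value can be read under several conventions

    as_logic = bool(bit)                 # logical value: True / False
    as_switch = "on" if bit else "off"   # activation state: on / off
    as_sign = "+" if bit else "-"        # algebraic sign: + / -
    print(as_logic, as_switch, as_sign)  # True on +

    # "Bit-length" of a binary number: how many binary digits are needed to write it.
    n = 1000
    print(bin(n))          # 0b1111101000
    print(n.bit_length())  # 10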

In information theory, one bit is typically defined as the uncertainty of a binary random variable that is 0 or 1 with equal probability, or the information that is gained when the value of such a variable becomes known.
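
As a worked illustration of that definition (a small Python sketch, not part of the original text), the function below computes the Shannon entropy of a binary random variable in bits; it equals exactly 1 bit when the two values are equally likely and less when the variable is biased:

    import math

    def binary_entropy(p):
        """Shannon entropy, in bits, of a variable that is 1 with probability p."""
        if p in (0.0, 1.0):
            return 0.0  # no uncertainty, so no information is gained
        return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

    print(binary_entropy(0.5))  # 1.0   -> exactly one bit of information
    print(binary_entropy(0.9))  # ~0.47 -> a biased bit carries less information
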
In quantum computing, a quantum bit or qubit is a quantum system that can exist in a superposition of the two bit values, conventionally written |0⟩ and |1⟩.
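
As a very rough numerical sketch (ordinary Python with plain arithmetic, not a real quantum library; the names alpha and beta are illustrative), a qubit state can be written as a pair of amplitudes with |alpha|² + |beta|² = 1, where the squared magnitudes give the probabilities of measuring the bit values 0 and 1:

    import math

    # |psi> = alpha|0> + beta|1>; this only simulates the bookkeeping classically.
    alpha = 1 / math.sqrt(2)
    beta = 1 / math.sqrt(2)

    prob_zero = abs(alpha) ** 2  # probability of measuring 0
    prob_one = abs(beta) ** 2    # probability of measuring 1

    print(prob_zero, prob_one)                      # 0.5 0.5 for an equal superposition
    print(math.isclose(prob_zero + prob_one, 1.0))  # True: the state is normalized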

The symbol for the bit, as a unit of information, is either simply "bit" (recommended by the ISO/IEC 80000-13:2008 standard) or the lowercase letter "b" (recommended by IEEE 1541-2002).
