A bit: a logical true or false, a switch on or off, an answer yes or no.


A bit
Each 1 or 0 used in the representation of computer data can also be used to present information to you, so that you may learn it, especially the syntax and forms of the English language; then, when you post questions to global forums, where computers likewise interpret the 1s and 0s to present them to people, your question will be understood by the experts whose assistance you so plainly require.

Wiki User

7y ago

More answers

Although binary digits (bits) are the smallest unit of information we can store in a computer, the smallest addressable unit is a byte, which is at least 8 bits in length. To access information at the bit level we must use bitwise logic, which is many times slower than accessing the bytes that actually contain those bits.
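A minimal C sketch of that bitwise access (get_bit and set_bit are illustrative helpers, not a standard API):

/* The machine can only address whole bytes, so to read or write a
   single bit we load the containing byte and apply a bitwise mask. */
#include <stdio.h>

/* Return the bit at position pos (0 = least significant) of byte b. */
unsigned get_bit(unsigned char b, unsigned pos)
{
    return (b >> pos) & 1u;
}

/* Return byte b with the bit at position pos forced to 1. */
unsigned char set_bit(unsigned char b, unsigned pos)
{
    return (unsigned char)(b | (1u << pos));
}

int main(void)
{
    unsigned char b = 0x2A;  /* 00101010 in binary */
    printf("bit 1 of 0x%02X is %u\n", b, get_bit(b, 1));      /* 1 */
    printf("bit 2 of 0x%02X is %u\n", b, get_bit(b, 2));      /* 0 */
    printf("after setting bit 0: 0x%02X\n", set_bit(b, 0));   /* 0x2B */
    return 0;
}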

Wiki User

7y ago

You likely mean a bit. A system of ones and zeroes is known as the binary system. Each 1 or 0 is a digit. Those digits are known as bits. Eight bits form a byte. Sixteen bits form a word. Thirty-two bits form a doubleword.
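As a hedged aside, C99's <stdint.h> fixed-width types line up with this byte/word/doubleword terminology (which follows Intel x86 usage; other architectures define "word" differently). A minimal sketch:

/* Fixed-width types matching the sizes named above. */
#include <limits.h>  /* CHAR_BIT */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint8_t  byte_val  = 0xFF;        /* 8 bits  (byte)       */
    uint16_t word_val  = 0xFFFF;      /* 16 bits (word)       */
    uint32_t dword_val = 0xFFFFFFFF;  /* 32 bits (doubleword) */

    printf("byte:       %zu bits\n", sizeof byte_val  * CHAR_BIT);
    printf("word:       %zu bits\n", sizeof word_val  * CHAR_BIT);
    printf("doubleword: %zu bits\n", sizeof dword_val * CHAR_BIT);
    return 0;
}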

Wiki User

8y ago

A binary digit, or bit. When the value of a bit is zero, the bit is said to be unset; when the value is one, the bit is said to be set. Bits are addressed in groups called bytes. The length of a byte (in bits) is machine-dependent but is always at least 8 bits (the equivalent of a char data type in C). Bytes may be combined up to the word length of the machine (typically 32 or 64 bits). The order of the bytes within a word is architecture-dependent: Intel 8086-based computers use little-endian notation, where the least-significant byte is stored at the lowest address. However, when we write a multi-byte value down, we conventionally write it in big-endian notation (most-significant byte first), regardless of how it is stored. Whatever the byte order, the order of bits within any one byte does not change.
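As a sketch of the point about byte order (variable names are illustrative), the following C program views a 32-bit value one byte at a time to reveal the byte order of the machine it runs on:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint32_t value = 0x11223344;
    unsigned char *p = (unsigned char *)&value;  /* view the raw bytes */

    /* On a little-endian machine (e.g. Intel 8086 descendants) the
       least-significant byte 0x44 sits at the lowest address, so this
       prints "44 33 22 11" even though we write the value 0x11223344. */
    printf("bytes in memory order: %02X %02X %02X %02X\n",
           p[0], p[1], p[2], p[3]);
    printf("this machine is %s-endian\n",
           p[0] == 0x44 ? "little" : "big");
    return 0;
}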

We can test the state of individual bits within a word using the bitwise logical operators: AND, OR and XOR. The position of a bit within a word denotes its place value, an increasing power of two starting from the least-significant bit, conventionally numbered bit 0 (representing 2^0). Unset bits contribute no value; a zero bit is simply a placeholder, just as a zero digit is a placeholder in decimal. The sum of the place values of the set bits gives the overall word value in decimal. What that value actually represents is defined by the programmer.
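A minimal C sketch of that idea, using the common mask-and-shift pattern: each bit is tested with AND, and the place values of the set bits are summed, recovering the word's value.

#include <inttypes.h>
#include <stdio.h>

int main(void)
{
    uint32_t word = 0x2D;  /* binary 101101 = 32 + 8 + 4 + 1 = 45 */
    uint32_t sum = 0;

    for (unsigned pos = 0; pos < 32; ++pos) {
        if (word & (UINT32_C(1) << pos))  /* is bit `pos` set? */
            sum += UINT32_C(1) << pos;    /* add its place value, 2^pos */
    }
    printf("sum of set bits = %" PRIu32 " (word value = %" PRIu32 ")\n",
           sum, word);
    return 0;
}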

To represent values beyond the range of a single machine word, we can use any combination of arrays, data structures or classes. The word length merely dictates the maximum number of bits that can be physically accessed in a single machine instruction; consecutive reads allow us to access additional words.
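As an illustrative sketch (the add128 helper is hypothetical, not a library function), here is how two 64-bit words can represent a 128-bit value, with the carry between words propagated by hand:

#include <stdint.h>
#include <stdio.h>

/* result = a + b; each operand is a two-word (128-bit) array with the
   least-significant word first. Overflow of the high word is ignored. */
void add128(uint64_t result[2], const uint64_t a[2], const uint64_t b[2])
{
    result[0] = a[0] + b[0];
    uint64_t carry = (result[0] < a[0]);  /* low word wrapped around? */
    result[1] = a[1] + b[1] + carry;
}

int main(void)
{
    uint64_t a[2] = { UINT64_MAX, 0 };  /* 2^64 - 1 */
    uint64_t b[2] = { 1, 0 };
    uint64_t r[2];

    add128(r, a, b);  /* expect 2^64: low word 0, high word 1 */
    printf("low = %llu, high = %llu\n",
           (unsigned long long)r[0], (unsigned long long)r[1]);
    return 0;
}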

Wiki User

8y ago

The ones and zeros are referred to together as binary data, and each one and zero is individually referred to as a bit. Each bit can be set (1) or clear (0).

Wiki User

17y ago

Computers are designed to understand only the digits 0 and 1. This makes it easier for users around the world to understand and use computers without getting confused.

Wiki User

12y ago

They represent the binary number system.

Wiki User

11y ago

bit

Wiki User

13y ago

Q: Why do computers understand 0 and 1?