A bit. A logical true or false. A switch on or off. An answer of yes or no.
A bit
Each 1 or 0 used in the representation of computer data can be used to present information to you so that you may learn it, especially the syntax and forms of the English language. That way, when you post questions to global forums, where computers also interpret the 1's and 0's to present them to people, your question will be understood by those experts whose assistance you so plainly require.
Although binary digits (bits) are the smallest unit of information we can store in a computer, the smallest addressable unit is a byte, which is at least 8 bits long. To access information at the bit level we must use bitwise logic, which is many times slower than accessing the bytes that actually contain those bits.
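For illustration, here is a minimal C sketch of that bitwise logic (the helper names set_bit, clear_bit and test_bit are my own, not standard library calls), showing how OR, AND and shifts read and modify individual bits inside a byte:

#include <stdio.h>

/* Hypothetical helpers for illustration: each operates on one unsigned char. */
static unsigned char set_bit(unsigned char byte, int pos)   { return byte | (1u << pos); }
static unsigned char clear_bit(unsigned char byte, int pos) { return byte & ~(1u << pos); }
static int           test_bit(unsigned char byte, int pos)  { return (byte >> pos) & 1u; }

int main(void)
{
    unsigned char b = 0;                          /* 0000 0000 */
    b = set_bit(b, 0);                            /* 0000 0001 */
    b = set_bit(b, 3);                            /* 0000 1001 */
    printf("bit 3 is %s\n", test_bit(b, 3) ? "set" : "unset");
    b = clear_bit(b, 3);                          /* 0000 0001 */
    printf("value is now %u\n", (unsigned)b);     /* prints 1 */
    return 0;
}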
A binary digit, or bit. When the value of a bit is zero, that bit is said to be unset. When the value is one, the bit is said to be set. Bits are addressed in groups called bytes. The length of a byte (in bits) is machine-dependent but is always at least 8 bits (the equivalent of a char data type in C). Bytes may be combined up to the word length of the machine (typically 32-bit or 64-bit). The order of the bytes within a word is architecture-dependent. Intel 8086-based computers use little-endian notation, where the least-significant byte is at the lowest address. However, when a multi-byte value is written out, it is still written in big-endian notation (most-significant byte first), regardless of how it is stored. And regardless of byte order, the order of bits within any one byte does not change.
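As a small example of the byte-order point, this C snippet inspects the first byte of a 32-bit value to see whether the machine stores the least-significant byte at the lowest address:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint32_t word = 0x01020304;             /* most-significant byte is 0x01 */
    unsigned char *p = (unsigned char *)&word;

    /* On a little-endian machine (e.g. x86) the byte at the lowest
       address is the least-significant one, 0x04. */
    if (p[0] == 0x04)
        printf("little-endian\n");
    else if (p[0] == 0x01)
        printf("big-endian\n");
    return 0;
}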
We can test the state of individual bits within a word using the bitwise logical operators AND, OR and XOR. The position of a bit within a word (written in big-endian notation) denotes its value, an increasing power of two from the least-significant bit, conventionally denoted as bit 0 (representing 2^0). Unset bits have no value; a zero bit is simply a placeholder (just as a zero digit is a placeholder in decimal). The sum of the set bits gives the overall word value in decimal. What that value actually represents is defined by the programmer.
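A minimal sketch of that positional idea in C, assuming an 8-bit byte: each set bit contributes its power of two to the total, and unset bits contribute nothing.

#include <stdio.h>

int main(void)
{
    unsigned char b = 0x2A;           /* 0010 1010 */
    unsigned int value = 0;

    /* Bit 0 is worth 2^0, bit 1 is worth 2^1, and so on;
       only the set bits contribute to the total. */
    for (int pos = 0; pos < 8; pos++)
        if ((b >> pos) & 1u)
            value += 1u << pos;       /* add 2^pos */

    printf("%u\n", value);            /* prints 42 */
    return 0;
}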
To represent values beyond the range of the maximum word length, we can use any combination of arrays, data structures or classes. The word length merely dictates the maximum number of bits that can be physically accessed in a single machine instruction. Consecutive reads allow us to access additional words.
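As a rough sketch of that idea (the u128 type and u128_add helper are assumptions for illustration, not an established API), two 64-bit words can be chained with a carry to hold a value wider than one machine word:

#include <stdio.h>
#include <stdint.h>

/* A 128-bit value built from two 64-bit words. */
typedef struct { uint64_t lo, hi; } u128;

static u128 u128_add(u128 a, u128 b)
{
    u128 r;
    r.lo = a.lo + b.lo;
    /* If the low word overflowed, carry one into the high word. */
    r.hi = a.hi + b.hi + (r.lo < a.lo);
    return r;
}

int main(void)
{
    u128 a = { UINT64_MAX, 0 };   /* largest value one 64-bit word can hold */
    u128 b = { 1, 0 };
    u128 c = u128_add(a, b);      /* carries into the second word */
    printf("hi=%llu lo=%llu\n",
           (unsigned long long)c.hi, (unsigned long long)c.lo);
    return 0;                     /* prints hi=1 lo=0 */
}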
0 and 1
Everything a computer does originates in binary code, which is a 1 or a 0 (on or off). These are called bits, and they form into bytes, like 101011001010. A computer can understand anything and everything you are able to put into it. What matters is whether it knows where to put the data and what to do with it. That is what applications and device software are for.
decode
Truth table for adding two bits:

a b | out carry
0 0 |  0    0
0 1 |  1    0
1 0 |  1    0
1 1 |  0    1

It may be noted that out can be achieved with an XOR gate and carry with an AND gate. For a two-bit adder you need twice the hardware. If you can add in decimal with pencil and paper, perhaps binary will be easier to understand.
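A small C sketch of that half-adder truth table, using the ^ (XOR) and & (AND) operators to produce out and carry:

#include <stdio.h>

int main(void)
{
    /* Half adder: out = a XOR b, carry = a AND b,
       matching the truth table above. */
    for (int a = 0; a <= 1; a++) {
        for (int b = 0; b <= 1; b++) {
            int out   = a ^ b;
            int carry = a & b;
            printf("%d %d -> out=%d carry=%d\n", a, b, out, carry);
        }
    }
    return 0;
}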
Yes, 1 bit can represent either on ("1") or off ("0").
Computers understand binary, in which 0 is "off" and 1 is "on."
Computers only understand binary, in which 0 is "off" and 1 is "on."
The "1's and 0's" are referred to as binary. Binary is actually the only language a computer can understand. Everything else has to be translated to binary for it to understand it. 1 is conidered closed and 0 is open.
Because binary (0 or 1) is the only format that the computer can understand. A transistor is either off or on. There is no other state.
Binary is computer, or machine, language. The best way to explain it is to show an example:

128 64 32 16  8  4  2  1
  1  1  1  1  1  1  1  1  = 255
  1  0  1  0  1  0  1  0  = 170
  0  0  0  0  0  0  1  0  = 2

There are 10 types of people that understand binary. Get it?
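A minimal C illustration of the same place values, using the standard strtol function with base 2 (nothing beyond the standard library is assumed):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* strtol with base 2 interprets the digits using the place values
       shown above: 128, 64, 32, 16, 8, 4, 2, 1. */
    printf("%ld\n", strtol("11111111", NULL, 2));  /* 255 */
    printf("%ld\n", strtol("10101010", NULL, 2));  /* 170 */
    printf("%ld\n", strtol("00000010", NULL, 2));  /* 2   */
    return 0;
}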
A computer doesn't actually understand any language; it just processes binary numbers.
Humans understand natural numbers (1, 2, 3, etc.), but computers only understand binary (0, 1). Computers understand only 0 as "off" and 1 as "on."
Because internally it uses circuits, it can only understand ON/OFF (1 or 0) for an open or closed circuit.
A computer is a very simple machine that can only understand 1's and 0's. We just put simple building blocks together to make it quite large. We must convert everything we want a computer to do into 1's and 0's; by convention, we use what's called ASCII to convert text.
Computers have zero IQ. A computer can only understand, or feel, "high voltage" or "low voltage", or you could say on and off. Computers use '0' for low voltage and '1' for high voltage. By using combinations of '0' and '1', all numbers and characters are encoded. For example, if you have to write 'A', it is represented by the ASCII code assigned to it and then converted to binary.
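A short C example of that last point: it prints the ASCII code of 'A' and the binary pattern that the code converts to (assuming 8-bit chars and ASCII encoding):

#include <stdio.h>

int main(void)
{
    char c = 'A';                       /* ASCII code 65 */

    /* Print the 8 bits of the character, most-significant first. */
    printf("'%c' = %d = ", c, c);
    for (int pos = 7; pos >= 0; pos--)
        putchar(((c >> pos) & 1) ? '1' : '0');
    putchar('\n');                      /* prints 'A' = 65 = 01000001 */
    return 0;
}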
0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1, this kind.