The 1s and 0s that a computer works with are referred to as?
Computers don't literally work with 1s and 0s; those symbols are just a
human-readable notation for the two-state (binary) values the machine
actually stores and manipulates. We call them binary digits, or simply
bits.
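As a rough illustration (using Python here purely for convenience, since the answer itself names no language), the letter 'A' is stored as the 8-bit pattern 01000001; the 1s and 0s printed below are only how we write that pattern for human eyes:

```python
# The letter 'A' corresponds to code point 65 in ASCII/Unicode.
code_point = ord("A")

# Print the bit pattern the way humans write it: as 1s and 0s.
# '08b' pads the binary form to 8 digits (one byte).
print(format(code_point, "08b"))   # 01000001

# The machine never sees the characters '0' and '1'; it only holds
# eight two-state cells that we choose to read as this pattern.
```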
Inside a computer, bits are physically represented in a variety of ways:
a high or low voltage on a capacitor, regions magnetized in one of two
directions on a magnetic disk or tape, or microscopic pits burned into
an optical disc. Anything that can switch between two possible states
and hold that state (temporarily or permanently) can be used to encode
binary information. We write bits as 1s and 0s because that is the most
convenient notation for binary arithmetic and logic, directly mirroring
the operations the machine performs.
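To make that "mirroring the machine" point concrete, here is a small sketch (again in Python, chosen only for illustration) of logic and arithmetic written directly in 1s-and-0s notation; the operators act on the same bit patterns the hardware's gates and adders do:

```python
a = 0b1100  # 12, written directly in binary notation
b = 0b1010  # 10

print(format(a & b, "04b"))   # 1000  -> bitwise AND, like the machine's AND gates
print(format(a | b, "04b"))   # 1110  -> bitwise OR
print(format(a ^ b, "04b"))   # 0110  -> bitwise XOR
print(format(a + b, "05b"))   # 10110 -> binary addition (12 + 10 = 22)
```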
We also use other, more concise notations, including hexadecimal (where
each hex digit represents 4 binary digits) and octal (where each octal
digit
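A quick sketch (Python once more, purely as an illustration) of how the same bit pattern reads in each notation; note how each hex digit lines up with 4 bits and each octal digit with 3:

```python
value = 0b11101101  # one byte written out in binary: 237

print(format(value, "b"))   # 11101101  (8 binary digits)
print(format(value, "x"))   # ed        (2 hex digits: e=1110, d=1101)
print(format(value, "o"))   # 355       (3 octal digits: 3=11, 5=101, 5=101)
print(format(value, "d"))   # 237       (the familiar decimal form)
```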
represents 3 binary digits). The computer doesn't understand these
notations any more than it knows the difference between a 1 and a 0,
but we can program it to convert all of these human-readable notations
into the binary data it actually operates on. We can also program it to
convert decimal notation to and from binary, which is convenient when
we're working with real-world quantities such as currency, length,
temperature, or speed.
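As a final sketch of that last point, here is one way to convert a decimal integer to binary notation by repeated division by two, the same procedure used by hand; the function name is my own, and Python's built-in bin() does the equivalent job:

```python
def decimal_to_binary(n: int) -> str:
    """Convert a non-negative integer to a string of binary digits
    by repeated division by two (the classic by-hand method)."""
    if n == 0:
        return "0"
    digits = []
    while n > 0:
        digits.append(str(n % 2))  # the remainder is the next bit, least significant first
        n //= 2
    return "".join(reversed(digits))

print(decimal_to_binary(237))  # 11101101
print(bin(237))                # 0b11101101 -- the built-in equivalent
```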