A normal byte consists of 8 bits, that is, eight 0s or 1s, so it can be anything from 00000000 to 11111111. These are just binary numbers: binary 00000000 is ordinary (decimal) 0, and 11111111 is decimal 255. So one byte can hold any number from 0 to 255. If you don't understand binary, please look at http://en.wikipedia.org/wiki/Binary_numeral_system
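As a quick illustration (a minimal Python sketch, not part of the original answer), you can convert between the binary and decimal forms of a byte:

```python
# Convert an 8-bit binary string to its decimal value.
print(int("00000000", 2))   # 0
print(int("11111111", 2))   # 255

# And back: show the 8-bit binary form of a decimal value.
print(format(65, "08b"))    # 01000001
print(format(255, "08b"))   # 11111111
```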
ASCII is a simple (and increasingly obsolete) code which maps characters to numbers in the 0..127 range (extended variants use 0..255). Thus, any phrase expressed as a series of these characters can be expressed as a series of bytes with the corresponding numeric values, one byte per character. For example, the letter A is represented by a byte with the decimal value 65. It is characteristic of the ASCII code that it supports a limited alphabet of at most 256 different characters. While this might seem plenty given that 26 characters cover the A-Z alphabet, codes must be assigned to lower-case and upper-case letters, digits, punctuation marks, a range of simple symbols, and (in the extended variants) a range of 'foreign' characters. With today's demands for localized software and support for local alphabets, the ASCII code is increasingly obsolete because it cannot support a great number of non-English alphabets.
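To see the character-to-number mapping in practice, here is a small Python sketch (the sample phrase is my own example):

```python
# Each ASCII character maps to a small integer, one byte per character.
print(ord("A"))    # 65
print(chr(65))     # A

# A whole phrase becomes a series of bytes with the corresponding values.
print(list("Hi A".encode("ascii")))   # [72, 105, 32, 65]
```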
ASCII only has 128 standard character codes (0 to 127) and only supports the English alphabet. While you can use extended ASCII to provide a set of 256 characters and thus support other languages, there's no guarantee that other systems will use the same code page, so the characters will not display correctly across all systems (the characters you see will depend on which code page is currently in use). Moreover, some languages, particularly Chinese, have thousands of symbols that simply cannot be encoded in ASCII. Unicode supports all languages, and its first 128 code points are the same as ASCII, so those characters appear the same across all systems. UTF-8 is the most common Unicode encoding in use today because it uses one byte per character for the first 128 characters and is therefore fully compatible with non-extended ASCII. If the most-significant bit of a byte is set, the character is represented by 2 or more bytes, the combination of which maps to a Unicode code point.
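The point about the most-significant bit can be checked directly by looking at the raw bytes (a Python sketch; the example characters are my own choice):

```python
# ASCII characters encode to a single byte with the most-significant bit clear.
print("A".encode("utf-8"))     # b'A' -> one byte, value 65 (below 128)

# Non-ASCII characters become multi-byte sequences in which every byte has
# the most-significant bit set (value 128 or above).
print("é".encode("utf-8"))     # b'\xc3\xa9'     -> two bytes
print("中".encode("utf-8"))    # b'\xe4\xb8\xad' -> three bytes
print(all(b >= 128 for b in "中".encode("utf-8")))   # True
```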
A byte offset, typically used to index into a string or file, is a zero-based number of bytes. For example, in the string "this is a test", the byte offset of "this" is 0, of "is" is 5, of "a" is 8, and of "test" is 10. Note that this is not always the same as the "character offset". Some characters, such as Chinese ideograms, require two or more bytes to represent. Using ASCII characters only will ensure that the byte offset is always equal to the character offset.
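A short Python sketch of byte offset versus character offset (the mixed-script string is my own example, not from the original answer):

```python
text = "this is a test"
# For pure ASCII, byte offsets and character offsets are identical.
print(text.find("test"))                    # 10
print(text.encode("ascii").find(b"test"))   # 10

# With a multi-byte character they diverge: '中' takes 3 bytes in UTF-8.
mixed = "中 test"
print(mixed.find("test"))                   # character offset 2
print(mixed.encode("utf-8").find(b"test"))  # byte offset 4
```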
8 bits form a byte, which is enough to store an ASCII character, for example. Other language encodings need more bytes per character, e.g., those for Asian languages. A single bit is of course a 0 or 1, meaning a base-2 system. Hence 8 bits, or one byte, can represent 2 to the power of 8 (256) combinations.
In ASCII, EBCDIC, FIELDATA, etc., yes. However, Unicode characters can be composed of multiple bytes.
An extended ASCII byte (like all bytes) contains 8 bits, or binary digits.
If you're referring to a kilobyte, then it contains 1024 bytes, and if the characters come from the standard ASCII character set, where 1 character is 1 byte, then a kilobyte would hold 1024 characters.
The letter S uses 1 byte of memory, as do all the other ASCII characters.
A char is already an integer, so there is no conversion required. A character is simply an integer that maps to a glyph in the current code page. ASCII characters are 1 byte long and have a value in the range 0 to 127, while extended ASCII characters are in the 128 to 255 range. Wide (UTF-16 Unicode) characters are two bytes long and cover the range 0 to 65,535, where 0 to 127 map to the standard ASCII character set. UTF-8 Unicode characters are variable width (1 to 4 bytes in length), where 0 to 127 are single-byte characters mapping to the standard ASCII set.
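Here is the same idea sketched in Python (which has no separate char type, so one-character strings and integers stand in for it; the specific characters are my own examples):

```python
# A character is just a number: ord() gives the code, chr() gives the character.
print(ord("A"))         # 65
print(chr(8364))        # € (a code well outside the 0-255 range)

# UTF-16 code units are two bytes each ('utf-16-le' avoids the byte-order mark).
print(len("A".encode("utf-16-le")))   # 2
print(len("€".encode("utf-16-le")))   # 2
```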
UTF-8, the most commonly used standard for representing text, uses a varying number of bytes per character. The Latin alphabet and digits, as well as commonly used characters such as (but not limited to) <, >, -, /, \, $, !, %, @, &, ^, (, ), and *, take one byte each. Characters beyond that, however, such as accented characters and other language scripts, are usually represented as 2 or 3 bytes. The most a character can use is 4 bytes.
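The 1-to-4-byte range can be confirmed with a quick Python check (the sample characters are my own):

```python
# UTF-8 widths range from 1 to 4 bytes per character.
for ch in ["A", "é", "中", "😀"]:    # 1, 2, 3 and 4 bytes respectively
    print(ch, len(ch.encode("utf-8")), "byte(s)")
```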
1. Byte-organised memory (every byte of the memory has to be accessible).
2. Support for all ASCII characters (0-127).
ASCII (American Standard Code for Information Interchange) is a character-encoding scheme that was standardised in 1963. No special encoder is required to create ASCII text; every machine supports it as standard, although some implement it via Unicode. The only difference is in the number of bytes used to represent each character. The default is one byte per character, yielding 128 standard codes that map exactly to the first 128 characters of the Unicode encoding.
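A short Python check of that compatibility (the sample text is my own example):

```python
text = "Hello, world!"
# Pure ASCII text produces identical bytes whether encoded as ASCII or UTF-8.
print(text.encode("ascii") == text.encode("utf-8"))   # True
```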