Chapter 4
Number Systems
Introduction
It is very common in data recovery programming, or any other disk troubleshooting programming, to handle different number systems simultaneously while performing a single task, or even a very small piece of work such as calculating the locations of Extended MBR(s) in terms of CHS (Cylinders, Heads, and Sectors); these locations guide the programmer throughout the operation.
Most beginning programmers encounter problems or confusion when converting between the different number systems, especially when attempting to learn assembly language based system-level programming, where the use of the binary and hexadecimal number systems is a must.
In this chapter we shall discuss many important concepts, including the binary, decimal, and hexadecimal numbering systems, as well as binary data organization (bits, nibbles, bytes, words, double words, and so on) and many other related topics of number systems.
Most modern computer systems do not represent numeric values using the decimal system; they generally use a binary or two's complement numbering system.
There are four number bases commonly used in programming: binary, octal, decimal, and hexadecimal. Most of the time, however, we shall meet the binary, decimal, and hexadecimal number systems. These number systems are differentiated according to their base numbers.
Every numbering system has its own base number and representation symbol. These four systems are presented in the following table:
| Name of Number System | Base Number | Symbol Used for Representation |
|---|---|---|
| Binary | 2 | B |
| Octal | 8 | Q or O |
| Decimal | 10 | D or None |
| Hexadecimal | 16 | H |
Decimal Number System
The decimal number system uses base 10 and includes the digits 0 through 9. Don't get confused: it is the common number system that we use in our daily life to calculate things. The weighted value of each position is a power of 10, as follows:

| (Base)power | 10⁵ | 10⁴ | 10³ | 10² | 10¹ | 10⁰ |
|---|---|---|---|---|---|---|
| Value | 100000 | 10000 | 1000 | 100 | 10 | 1 |
In this way, if I have the decimal number 218 and want to represent it in the above manner, the number 218 will be expanded as follows:
2 × 10² + 1 × 10¹ + 8 × 10⁰
= 2 × 100 + 1 × 10 + 8 × 1
= 200 + 10 + 8
= 218
Now let us take an example of a fractional decimal number, say 821.128. Each digit appearing to the left of the decimal point represents a value between zero and nine, weighted by the power of ten given by its position in the number (starting from 0).
Digits appearing to the right of the decimal point represent a value between zero and nine times an increasing negative power of ten. Let us see how:
8 × 10² + 2 × 10¹ + 1 × 10⁰ + 1 × 10⁻¹ + 2 × 10⁻² + 8 × 10⁻³
= 8 × 100 + 2 × 10 + 1 × 1 + 1 × 0.1 + 2 × 0.01 + 8 × 0.001
= 800 + 20 + 1 + 0.1 + 0.02 + 0.008
= 821.128
Binary Number System
Today most modern computer systems operate using binary logic. The computer represents values using two voltage levels, OFF and ON, denoted by 0 and 1. For example, 0 V is usually represented by logic 0, and either +3.3 V or +5 V by logic 1. Thus with two levels we can represent exactly two different values; these could be any two values, but by convention we use 0 and 1.
Since there is a correspondence between the logic levels used by the computer and the two digits used in the binary numbering system, it should come as no surprise that computers employ the binary system.
The binary number system works like the decimal number system, except that it uses base 2 and includes only the digits 0 and 1; the use of any other digit would make the number an invalid binary number.
The weighted value of each position is a power of 2, as follows:

| (Base)power | 2⁷ | 2⁶ | 2⁵ | 2⁴ | 2³ | 2² | 2¹ | 2⁰ |
|---|---|---|---|---|---|---|---|---|
| Value | 128 | 64 | 32 | 16 | 8 | 4 | 2 | 1 |
The following table shows the representation of binary numbers against the decimal numbers 0 through 15:
| Decimal Number | Binary Number Representation |
|---|---|
| 0 | 0000 |
| 1 | 0001 |
| 2 | 0010 |
| 3 | 0011 |
| 4 | 0100 |
| 5 | 0101 |
| 6 | 0110 |
| 7 | 0111 |
| 8 | 1000 |
| 9 | 1001 |
| 10 | 1010 |
| 11 | 1011 |
| 12 | 1100 |
| 13 | 1101 |
| 14 | 1110 |
| 15 | 1111 |
Usually, in decimal numbers, every three digits are separated with a comma to make larger numbers easier to read; for example, it is much easier to read 840,349,823 than 840349823.
Taking inspiration from the same idea, there is a similar convention for binary numbers, except that we add a space every four digits, starting from the least significant digit and working to the left of the binary point.
For example if the binary value is 1010011001101011, it will be written as 1010 0110 0110 1011.
Binary to Decimal number Conversion
To convert a binary number to a decimal number, we multiply each digit by its weighted position and add the weighted values together. For example, the binary value 1011 0101 represents:
1 × 2⁷ + 0 × 2⁶ + 1 × 2⁵ + 1 × 2⁴ + 0 × 2³ + 1 × 2² + 0 × 2¹ + 1 × 2⁰
= 1 × 128 + 0 × 64 + 1 × 32 + 1 × 16 + 0 × 8 + 1 × 4 + 0 × 2 + 1 × 1
= 128 + 0 + 32 + 16 + 0 + 4 + 0 + 1
= 181
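This weighted-sum rule translates directly into code. The following is a minimal C sketch (the helper name bin_to_dec is our own, not a standard routine) that reads a string of 0s and 1s and accumulates value = value × 2 + digit, which is arithmetically the same as summing the weighted positions:

```c
#include <stdio.h>

/* Convert a string of '0'/'1' digits to its decimal value.
   Multiplying the running total by 2 before adding each new bit
   is equivalent to summing digit * 2^position over all bits. */
unsigned long bin_to_dec(const char *bits)
{
    unsigned long value = 0;
    for (; *bits == '0' || *bits == '1'; bits++)
        value = value * 2 + (unsigned long)(*bits - '0');
    return value;
}

int main(void)
{
    printf("%lu\n", bin_to_dec("10110101"));  /* prints 181 */
    return 0;
}
```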
Decimal to Binary number Conversion
To convert a decimal number to binary, the general method is repeated division by 2: divide the decimal number by 2 and write down the remainder, a 0 or a 1.
This process is continued by dividing each quotient by 2 and recording the remainder until the quotient is 0. The remainders represent the binary equivalent of the decimal number; they are written beginning at the least significant digit (the right), and each new digit is written in the next more significant position (to the left) of the previous digit.
Let us take an example. Consider the number 2671. The binary conversion for the number 2671 has been given in the following table.
| Division | Quotient | Remainder | Binary Number |
|---|---|---|---|
| 2671 / 2 | 1335 | 1 | 1 |
| 1335 / 2 | 667 | 1 | 11 |
| 667 / 2 | 333 | 1 | 111 |
| 333 / 2 | 166 | 1 | 1111 |
| 166 / 2 | 83 | 0 | 0 1111 |
| 83 / 2 | 41 | 1 | 10 1111 |
| 41 / 2 | 20 | 1 | 110 1111 |
| 20 / 2 | 10 | 0 | 0110 1111 |
| 10 / 2 | 5 | 0 | 0 0110 1111 |
| 5 / 2 | 2 | 1 | 10 0110 1111 |
| 2 / 2 | 1 | 0 | 010 0110 1111 |
| 1 / 2 | 0 | 1 | 1010 0110 1111 |
This table spells out every step of the conversion; in practice, for ease and speed, you can simply write the successive divisions in a column and read the remainders off afterwards.
Let 1980 be a decimal number to be converted into its binary equivalent. Following the method given in the table, and arranging the remainders from the last one obtained to the first (most significant digit to least significant), we get the binary equivalent of the decimal number 1980 = 0111 1011 1100.
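The repeated-division method is just as easy to code. Below is a minimal C sketch of our own (not a library routine): the recursion prints the remainders in reverse order of their discovery, which is exactly the most-significant-digit-first arrangement described above:

```c
#include <stdio.h>

/* Print n in binary using repeated division by 2: recurse on the
   quotient first so the remainders appear most significant first. */
void print_binary(unsigned int n)
{
    if (n / 2)
        print_binary(n / 2);
    putchar('0' + (int)(n % 2));
}

int main(void)
{
    print_binary(1980);   /* prints 11110111100, i.e. 0111 1011 1100 */
    putchar('\n');
    return 0;
}
```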
Binary Number Formats
Typically we write binary numbers as a sequence of bits ("bits" is short for "binary digits"). There are defined format boundaries for these bits, represented in the following table:
| Name | Size in bits | Example |
|---|---|---|
| Bit | 1 | 1 |
| Nibble | 4 | 0101 |
| Byte | 8 | 0000 0101 |
| Word | 16 | 0000 0000 0000 0101 |
| Double Word | 32 | 0000 0000 0000 0000 0000 0000 0000 0101 |
In any number base we may add as many leading zeroes as we wish without changing the value; with binary numbers, we normally add leading zeroes to pad the number out to a desired size boundary.
For example, the number 7 can be represented at each boundary as shown in the table:
| Bit position | 15 | 14 | 13 | 12 | 11 | 10 | 9 | 8 | 7 | 6 | 5 | 4 | 3 | 2 | 1 | 0 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Bit | | | | | | | | | | | | | | 1 | 1 | 1 |
| Nibble | | | | | | | | | | | | | 0 | 1 | 1 | 1 |
| Byte | | | | | | | | | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 |
| Word | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 |
The rightmost bit in a binary number is bit position zero, and each bit to the left is given the next successive bit number, as shown in the table above.
Bit zero is usually referred to as the Least Significant Bit, or LSB, and the leftmost bit is typically called the Most Significant Bit, or MSB. Let us look at each of these formats:
The Bit
A bit is the smallest unit of data on a binary computer. A single bit can represent only one of two values, 0 or 1. If you are using a bit to represent a Boolean (true/false) value, then that bit represents either true or false.
The Nibble
The nibble becomes particularly interesting when we are talking about number systems, BCD (Binary-Coded Decimal), or hexadecimal (base 16) numbers.
A nibble is a collection of bits on a 4-bit boundary, and it takes four bits to represent a single BCD or hexadecimal digit. With a nibble we can represent up to 16 distinct values.
In the case of hexadecimal numbers, the four bits hold one of the sixteen values 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, C, D, E, and F. BCD uses only the ten digits 0 through 9 but still requires four bits.
In fact, any sixteen distinct values can be represented with a nibble, but hexadecimal and BCD digits are the primary items we represent with a single nibble. At the bit level a nibble is simply four bits, numbered b3 (most significant) down to b0 (least significant).
The Byte
The byte is the most important data structure used by the 80x86 microprocessor. A byte consists of eight bits and is the smallest addressable data item in the microprocessor: main memory and I/O addresses in the computer are all byte addresses, so the smallest item that can be individually accessed by an 80x86 program is an 8-bit value.
To access anything smaller, you must read the byte containing the data and mask out the unwanted bits. We shall write programs to do this in the following chapters.
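As a preview of that masking technique, here is a minimal C sketch (our own illustration, not the later chapters' code) that isolates the nibbles and a single bit of a byte using AND masks and shifts:

```c
#include <stdio.h>

int main(void)
{
    unsigned int byte = 0xB5;                /* 1011 0101 */

    unsigned int low  = byte & 0x0F;         /* low nibble:  0101 = 5 */
    unsigned int high = (byte >> 4) & 0x0F;  /* high nibble: 1011 = B */
    unsigned int b2   = (byte >> 2) & 1;     /* isolate single bit b2 = 1 */

    printf("high=%X low=%X b2=%u\n", high, low, b2);
    return 0;
}
```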
The most important use for a byte is holding a character code. The bits in a byte are numbered from bit zero (b0) through bit seven (b7).
Bit 0 (b0) is the low order bit or least significant bit and bit 7 (b7) is the high order bit or most significant bit of the byte.
A byte contains exactly two nibbles: bits b0 through b3 comprise the low-order nibble and bits b4 through b7 form the high-order nibble.
Since a byte contains exactly two nibbles, byte values require two hexadecimal digits.
As the traditional modern computer is a byte-addressable machine, it turns out to be more efficient to manipulate a whole byte than an individual bit or nibble.
This is the reason most programmers use a whole byte to represent data types that require no more than 256 items.
Since a byte contains eight bits, it can represent 2⁸, or 256, different values: the maximum 8-bit binary number is 1111 1111, which is 255 decimal, so a byte covers the range 0 through 255. A byte is therefore generally used to represent:
- unsigned numeric values in the range 0 to 255
- signed numbers in the range -128 to +127
- ASCII character codes
- other special data types requiring no more than 256 different values; many data types have fewer than 256 items, so eight bits is usually sufficient
The Word
A word is a group of 16 bits. Traditionally, the boundary for a word is defined as either 16 bits or the size of the data bus of the processor, and a double word is two words; therefore a word and a double word are not of fixed size but vary from system to system depending on the processor. For conceptual reading, however, we will define a word as two bytes.
At the bit level, the bits in a word are numbered from bit zero (b0) through bit fifteen (b15), where bit 0 is the LSB (Least Significant Bit) and bit 15 is the MSB (Most Significant Bit). When we need to refer to any other bit in a word, we use its bit position number.
In this way a word contains exactly two bytes: bits b0 through b7 form the low-order byte and bits b8 through b15 form the high-order byte. With a 16-bit word we can represent 2¹⁶ (65,536) different values. These may be:
- unsigned numeric values in the range 0 to 65,535
- signed numeric values in the range -32,768 to +32,767
- any data type with no more than 65,536 values

In this way, words are mostly used for:
- 16-bit integer data values
- 16-bit memory addresses
- Any number system requiring 16 bits or less
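Splitting a word into its two bytes is the same masking idea one level up. A minimal C sketch (our own illustration):

```c
#include <stdio.h>

int main(void)
{
    unsigned int word = 0x4ED3;              /* 0100 1110 1101 0011 */

    unsigned int low  = word & 0xFF;         /* low-order byte,  b0..b7:  0xD3 */
    unsigned int high = (word >> 8) & 0xFF;  /* high-order byte, b8..b15: 0x4E */

    printf("word=%04X high=%02X low=%02X\n", word, high, low);
    return 0;
}
```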
The Double Word
A double word is exactly what its name implies: two words. A double word quantity is therefore 32 bits. A double word can also be divided into a high-order word and a low-order word, or four bytes, or eight nibbles.
In this way the double word can represent many kinds of data. It may be any of the following:
- an unsigned double word in the range 0 to 4,294,967,295
- a signed double word in the range -2,147,483,648 to +2,147,483,647
- a 32-bit floating-point value
- any other data that requires 32 bits or less
Octal Number System
The octal number system was popular in older computer systems but is very rarely used today. However, we shall take a brief look at the octal system for general knowledge.
The octal system is based on the binary system with a 3-bit boundary. The octal number system uses base 8 and includes only the digits 0 through 7; any other digit would make the number an invalid octal number.
The weighted values for each position are shown in the table:

| (Base)power | 8⁵ | 8⁴ | 8³ | 8² | 8¹ | 8⁰ |
|---|---|---|---|---|---|---|
| Value | 32768 | 4096 | 512 | 64 | 8 | 1 |
Binary to Octal Conversion
To convert an integer binary number to octal, we follow two steps: first break the binary number into 3-bit sections from the LSB to the MSB (padding with leading zeros if needed), then convert each 3-bit section to its octal equivalent. Let us take an example to better understand it: to convert the binary number 11001011010001 into the octal number system, we pad it to 011 001 011 010 001 and apply the two steps as follows:
| 3-bit Section of Binary Number | 011 | 001 | 011 | 010 | 001 |
|---|---|---|---|---|---|
| Equivalent octal digit | 3 | 1 | 3 | 2 | 1 |
Thus the octal number equivalent to the binary number 11001011010001 is 31321Q.
Octal to Binary Conversion
To convert an integer octal number to its corresponding binary number, we follow two steps: first convert each octal digit to its 3-bit binary equivalent, then combine the 3-bit sections by removing the spaces. Let us take an example: to convert the octal number 31321Q into its corresponding binary number, we apply these two steps as follows:
| Octal digit | 3 | 1 | 3 | 2 | 1 |
|---|---|---|---|---|---|
| 3-bit Section of Binary Number | 011 | 001 | 011 | 010 | 001 |
Thus the binary equivalent of the octal number 31321Q is 011 0010 1101 0001.
Octal to Decimal Conversion
To convert an octal number to decimal, we multiply the value in each position by its octal weight and add the values.
Let us take an example to better understand this: converting the octal number 31321Q into its corresponding decimal number proceeds as follows:
3 × 8⁴ + 1 × 8³ + 3 × 8² + 2 × 8¹ + 1 × 8⁰
= 3 × 4096 + 1 × 512 + 3 × 64 + 2 × 8 + 1 × 1
= 12288 + 512 + 192 + 16 + 1
= 13009
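In C, the standard library already implements this weighted-sum rule for any base from 2 to 36, so the same conversion can be checked with the standard function strtol, used here with base 8:

```c
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* strtol with base 8 applies the octal weighted-sum rule above. */
    long value = strtol("31321", NULL, 8);
    printf("%ld\n", value);   /* prints 13009 */
    return 0;
}
```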
Decimal to Octal Conversion
Converting decimal to octal is slightly more difficult. The typical method is repeated division by 8: we divide the decimal number by 8 and write the remainder on the side as the least significant digit. This process is continued by dividing each quotient by 8 and writing down the remainder until the quotient is 0.
The remainders represent the octal equivalent of the decimal number; they are written beginning at the least significant digit (the right), and each new digit is written in the next more significant position (to the left) of the previous digit.
Let us better understand it with an example. Take the decimal number 13009 (the result of the previous example, so converting it back to octal also checks that example). The method is described in the following table:
| Division | Quotient | Remainder | Octal Number |
|---|---|---|---|
| 13009 / 8 | 1626 | 1 | 1 |
| 1626 / 8 | 203 | 2 | 21 |
| 203 / 8 | 25 | 3 | 321 |
| 25 / 8 | 3 | 1 | 1321 |
| 3 / 8 | 0 | 3 | 31321 |
As you can see, we are back to the original number, which is what we should expect. This table was given to explain the procedure; in practice, to work quickly, you simply write the successive divisions in a column and read the remainders off afterwards. Both are the same thing in fact.
Arranging the remainders from the last one obtained to the first, we get the octal number 31321Q, which we were expecting.
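Again the standard C library can do the repeated division for us: the %o conversion of printf prints an unsigned value in octal, which makes for a quick check of the result:

```c
#include <stdio.h>

int main(void)
{
    /* %o performs the decimal-to-octal conversion internally. */
    printf("%o\n", 13009u);   /* prints 31321 */
    return 0;
}
```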
Hexadecimal Number System
Hexadecimal numbers are the most commonly used in data recovery and other kinds of disk troubleshooting or disk analysis programming, because they offer two features, as follows:
Hexadecimal numbers are very compact, and it is easy to convert from hex to binary and from binary to hex. When we calculate many important things, such as the number of cylinders, heads, and sectors of a hard disk, or use hard disk editor programs to analyze different characteristics and problems, we shall need a good knowledge of the hex system. The hexadecimal system is based on the binary system, using a nibble (4-bit) boundary.
The hexadecimal number system uses base 16 and includes the digits 0 through 9 and the letters A, B, C, D, E, and F. We use H with a number to denote hexadecimal. The following table shows the representation of the various number systems side by side:
| Binary | Octal | Decimal | Hex |
|---|---|---|---|
| 0000B | 00Q | 00 | 00H |
| 0001B | 01Q | 01 | 01H |
| 0010B | 02Q | 02 | 02H |
| 0011B | 03Q | 03 | 03H |
| 0100B | 04Q | 04 | 04H |
| 0101B | 05Q | 05 | 05H |
| 0110B | 06Q | 06 | 06H |
| 0111B | 07Q | 07 | 07H |
| 1000B | 10Q | 08 | 08H |
| 1001B | 11Q | 09 | 09H |
| 1010B | 12Q | 10 | 0AH |
| 1011B | 13Q | 11 | 0BH |
| 1100B | 14Q | 12 | 0CH |
| 1101B | 15Q | 13 | 0DH |
| 1110B | 16Q | 14 | 0EH |
| 1111B | 17Q | 15 | 0FH |
| 1 0000B | 20Q | 16 | 10H |
This table provides all the information that you may ever need to convert from one number base into another for the decimal values from 0 to 16.
The weighted values for each position for hexadecimal numbers have been shown in the following table:
| (Base)power | 16³ | 16² | 16¹ | 16⁰ |
|---|---|---|---|---|
| Value | 4096 | 256 | 16 | 1 |
Binary to Hexadecimal Conversion
To convert a binary number into hexadecimal format, first pad the binary number with leading zeros on the left to make sure it contains a multiple of four bits. Then follow two steps: break the binary number into 4-bit sections from the LSB to the MSB, and convert each 4-bit section to its hex equivalent. For example, to convert the binary number 100 1110 1101 0011 into its corresponding hexadecimal number, we pad it to 0100 1110 1101 0011 and apply the two steps as shown below:
| 4-bit binary number section | 0100 | 1110 | 1101 | 0011 |
|---|---|---|---|---|
| Hexadecimal value | 4 | E | D | 3 |
Thus the hexadecimal value corresponding to the binary number 100 1110 1101 0011 is 4ED3H.
Hexadecimal to Binary Conversion
To convert a hexadecimal number into a binary number, we follow two steps: first convert each hexadecimal digit to its 4-bit binary equivalent, then combine the 4-bit sections by removing the spaces. To better understand the procedure, let us take the hexadecimal number from above, 4ED3H, and apply these two steps to it as follows:
| Hexadecimal value | 4 | E | D | 3 |
|---|---|---|---|---|
| 4-bit binary number section | 0100 | 1110 | 1101 | 0011 |
Thus for the hexadecimal number 4ED3H we get the corresponding binary number 0100 1110 1101 0011, which is the expected answer.
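Because every hex digit maps to exactly one 4-bit pattern, the whole conversion is a per-digit table lookup. A minimal C sketch of our own (it assumes the input uses only the digits 0-9 and uppercase A-F):

```c
#include <stdio.h>

/* One 4-bit pattern per hex digit, indexed by the digit's value. */
static const char *nibble[16] = {
    "0000", "0001", "0010", "0011", "0100", "0101", "0110", "0111",
    "1000", "1001", "1010", "1011", "1100", "1101", "1110", "1111"
};

void hex_to_binary(const char *hex)
{
    for (; *hex; hex++) {
        int d = (*hex >= 'A') ? *hex - 'A' + 10 : *hex - '0';
        printf("%s ", nibble[d]);
    }
    putchar('\n');
}

int main(void)
{
    hex_to_binary("4ED3");   /* prints 0100 1110 1101 0011 */
    return 0;
}
```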
Hexadecimal to Decimal Conversion
To convert from hexadecimal to decimal, we multiply the value in each position by its hex weight and add the values. Let us take an example to better understand the procedure. Assume we have the hexadecimal number 3ABEH to be converted to its equivalent decimal number. The procedure is as follows:
3 × 16³ + A × 16² + B × 16¹ + E × 16⁰
= 3 × 4096 + 10 × 256 + 11 × 16 + 14 × 1
= 12288 + 2560 + 176 + 14
= 15038
Thus the equivalent decimal number for the hexadecimal number 3ABEH is 15038.
Decimal to Hexadecimal Conversion
To convert decimal to hexadecimal, the typical method is repeated division by 16: we divide the decimal number by 16 and write the remainder on the side as the least significant digit.
This process is continued by dividing each quotient by 16 and writing down the remainder until the quotient is 0. The remainders represent the hex equivalent of the decimal number; they are written beginning at the least significant digit (the right), and each new digit is written in the next more significant position (to the left) of the previous digit.
Let us learn it with an example. We take the decimal number 15038, which we obtained from the conversion above; this also lets us check that conversion, and vice versa.
| Division | Quotient | Remainder | Hex Number |
|---|---|---|---|
| 15038 / 16 | 939 | 14 (EH) | E |
| 939 / 16 | 58 | 11 (BH) | BE |
| 58 / 16 | 3 | 10 (AH) | ABE |
| 3 / 16 | 0 | 3 (3H) | 3ABE |
Thus we get the hexadecimal number 3ABEH, equivalent to the decimal number 15038, and in this way we are back to the original number. That is what we should expect.
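The table's procedure, written as code, is the same recursive remainder trick used earlier for binary, only with divisor 16 and a digit string that includes A-F. A minimal C sketch (our own illustration):

```c
#include <stdio.h>

/* Repeated division by 16: recurse on the quotient first so the
   remainders print most significant digit first. */
void print_hex(unsigned int n)
{
    if (n / 16)
        print_hex(n / 16);
    putchar("0123456789ABCDEF"[n % 16]);
}

int main(void)
{
    print_hex(15038);   /* prints 3ABE */
    putchar('\n');
    return 0;
}
```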
The table given next provides a quick lookup for hexadecimal-to-decimal conversion, and vice versa, over the range of decimal numbers 0 to 255.
In this square table there are 16 rows, labelled 0 to F, and 16 columns, also labelled 0 to F. From the table you can find the decimal value of any hexadecimal number in the range 00H to FFH, that is, any decimal value in the range 0 to 255.
- Finding the decimal value for a hexadecimal number from the table: the rows represent the first hexadecimal digit (the left digit) and the columns represent the second hexadecimal digit (the right digit) of the hexadecimal number.
For example, to convert the hexadecimal number ACH into its equivalent decimal number, we look at the value in column C of row A of the table and find the decimal value 172, which is the decimal equivalent of ACH.
- Finding the hexadecimal value for a decimal number from the table: since the rows give the first (left) hexadecimal digit and the columns the second (right) hexadecimal digit, to convert a decimal number into its equivalent hexadecimal number, search for the number in the body of the table and read off the result as follows:

Hex value for the decimal number = (Row Number)(Column Number)

For example, to find the equivalent hexadecimal value for the decimal number 154, see the location of the number in the table: it lies in row 9 and column A. Thus the equivalent hexadecimal value for the decimal number 154 is 9AH.
ASCII Code
The abbreviation ASCII stands for American Standard Code for Information Interchange. It is a coding standard for characters, numbers, and symbols. The IBM PC character set matches ASCII for its first 128 characters; the remaining 128 characters, usually called special or extended ASCII characters, were defined by IBM.
The first 32 characters, ASCII codes 0 through 1FH, form a special set of non-printing characters. They are called the control characters because they perform various printer and display control operations rather than displaying symbols. They are listed in the ASCII character table given later in this chapter and have the following meanings:
NUL (Null):
No character. It is used for filling in time or filling space on a storage medium (such as the surface of a platter) where there is no data. We shall use this character when we write data wipers (both destructive and non-destructive) to wipe out the unallocated space so that deleted data cannot be recovered by anyone or by any program.
SOH (Start Of Heading):
This character is used to indicate the start of heading, which may contain address or routing information.
STX (Start of Text):
This character is used to indicate the start of the text, and thus also the end of the heading.
ETX (End of Text):
This character is used to terminate the text that was started with STX.
EOT (End Of Transmission):
This character indicates the end of the transmission, which may have included one or more "texts" with their headings.
ENQ (Enquiry):
It is a request for a response from a remote station. It is a request for a station to identify itself.
ACK (Acknowledge):
It is a character transmitted by a receiving device as an affirmative response to a sender. It is used as a positive response to polling messages.
BEL (Bell):
It is used when there is a need to call for human attention. It may control alarm or attention devices. You can hear a bell tone from the speaker attached to your computer when you type this character at the command prompt as given below:
C:\> Echo ^G
Here ^G is typed with the Ctrl + G key combination.
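In C, the same control character is written with the escape '\a' (alert), which is ASCII BEL (07H). A one-line sketch; whether you actually hear a tone depends on your terminal and speaker:

```c
#include <stdio.h>

int main(void)
{
    putchar('\a');   /* '\a' is ASCII BEL (07H); many terminals beep */
    return 0;
}
```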
BS (Backspace):
This character indicates the movement of the printing mechanism or display cursor backward by one position.
HT (Horizontal Tab):
It indicates the movement of the printing mechanism or display cursor forward to the next preassigned "tab" or stopping position.
LF (Line Feed):
It indicates the movement of the printing mechanism or display cursor to the start of the next line.
VT (Vertical Tab):
It indicates the movement of the printing mechanism or display cursor to the next of a series of preassigned printing lines.
FF (Form Feed):
It indicates the movement of the printing mechanism or display cursor to the starting position of the next page, form, or screen.
CR (Carriage Return):
It indicates the movement of the printing mechanism or display cursor to the starting position of the same line.
SO (Shift Out):
It indicates that the code combinations that follow shall be interpreted as outside of the standard character set until a Shift In character is reached.
SI (Shift In):
It indicates that the code combinations that follow shall be interpreted according to the standard character set.
DLE (Data Link Escape):
It is a character that shall change the meaning of one or more contiguously following characters. It can provide supplementary control, or permits the sending of data characters having any bit combination.
DC1, DC2, DC3 and DC4 (Device Controls):
These are the characters for the control of ancillary devices or special terminal features.
NAK (Negative Acknowledgement):
It is a character transmitted by a receiving device as a negative response to a sender. It is used as a negative response to polling message.
SYN (Synchronous Idle):
It is used by a synchronous transmission system to achieve synchronization. When no data is being sent, a synchronous transmission system may send SYN characters continuously.
ETB (End of Transmission Block):
This character indicates the end of a block of data for communication purpose. It is used for blocking data, where the block structure is not necessarily related to the processing format.
CAN (Cancel): It indicates that the data that precedes it in a message or block should be disregarded, usually because an error has been detected.
EM (End of Medium): It indicates the physical end of a tape, surface (usually of a disk's platter), or other medium, or the end of the used (or required) portion of the medium.
SUB (Substitute): It is a substitute for a character that is found to be erroneous or invalid.
ESC (Escape): It is a character intended to provide code extension, in that it gives a specified number of contiguously following characters an alternate meaning.
FS (File Separator): This character is used as a file separator character.
GS (Group Separator): It is used as a group separator character.
RS (Record Separator): It is used as a record separator character.
US (Unit Separator): It is used as a unit separator character.
The second group of 32 ASCII character codes contains various punctuation symbols, special characters, and the numeric digits. The most notable characters in this group include the following:
- the space character (ASCII code 20H)
- the numeric digits 0 through 9 (ASCII codes 30H through 39H)
- mathematical and logical symbols
SP (Space):
It is a non-printing character used to separate words, or to move the printing mechanism or display cursor forward by one position.
The third group of 32 ASCII characters is the group of upper case alphabetic characters. The ASCII codes for the characters A through Z lie in the range 41H through 5AH. Since there are only 26 different alphabetic characters, the remaining six codes hold various special symbols.
The fourth group of 32 ASCII character codes is the group of lowercase alphabetic symbols, five additional special symbols, and one more control character, delete.
DEL (Delete):
It is used to obliterate, or in other words delete, unwanted characters.
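Since C stores a character as its ASCII code in a byte, you can print any character alongside its decimal and hexadecimal codes directly; a minimal sketch:

```c
#include <stdio.h>

int main(void)
{
    /* A char is just its ASCII code, so it can be printed as a
       character (%c), in decimal (%d), or in hex (%X). */
    printf("'%c' = %d = %02XH\n", 'A', 'A', 'A');   /* 'A' = 65 = 41H */
    printf("'%c' = %d = %02XH\n", 'a', 'a', 'a');   /* 'a' = 97 = 61H */
    return 0;
}
```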
Two tables are shown next, representing the ASCII codes and the extended characters. The first table represents all four groups of characters described above. It is the data representation and ASCII table, as shown next:
Data Representation & ASCII Code Table:
| HEX | DEC | CHR | CTRL |
|---|---|---|---|
| 00 | 0 | NUL | ^@ |
| 01 | 1 | SOH | ^A |
| 02 | 2 | STX | ^B |
| 03 | 3 | ETX | ^C |
| 04 | 4 | EOT | ^D |
| 05 | 5 | ENQ | ^E |
| 06 | 6 | ACK | ^F |
| 07 | 7 | BEL | ^G |
| 08 | 8 | BS | ^H |
| 09 | 9 | HT | ^I |
| 0A | 10 | LF | ^J |
| 0B | 11 | VT | ^K |
| 0C | 12 | FF | ^L |
| 0D | 13 | CR | ^M |
| 0E | 14 | SO | ^N |
| 0F | 15 | SI | ^O |
| 10 | 16 | DLE | ^P |
| 11 | 17 | DC1 | ^Q |
| 12 | 18 | DC2 | ^R |
| 13 | 19 | DC3 | ^S |
| 14 | 20 | DC4 | ^T |
| 15 | 21 | NAK | ^U |
| 16 | 22 | SYN | ^V |
| 17 | 23 | ETB | ^W |
| 18 | 24 | CAN | ^X |
| 19 | 25 | EM | ^Y |
| 1A | 26 | SUB | ^Z |
| 1B | 27 | ESC | |
| 1C | 28 | FS | |
| 1D | 29 | GS | |
| 1E | 30 | RS | |
| 1F | 31 | US | |

| HEX | DEC | CHR |
|---|---|---|
| 20 | 32 | SP |
| 21 | 33 | ! |
| 22 | 34 | " |
| 23 | 35 | # |
| 24 | 36 | $ |
| 25 | 37 | % |
| 26 | 38 | & |
| 27 | 39 | ' |
| 28 | 40 | ( |
| 29 | 41 | ) |
| 2A | 42 | * |
| 2B | 43 | + |
| 2C | 44 | , |
| 2D | 45 | - |
| 2E | 46 | . |
| 2F | 47 | / |
| 30 | 48 | 0 |
| 31 | 49 | 1 |
| 32 | 50 | 2 |
| 33 | 51 | 3 |
| 34 | 52 | 4 |
| 35 | 53 | 5 |
| 36 | 54 | 6 |
| 37 | 55 | 7 |
| 38 | 56 | 8 |
| 39 | 57 | 9 |
| 3A | 58 | : |
| 3B | 59 | ; |
| 3C | 60 | < |
| 3D | 61 | = |
| 3E | 62 | > |
| 3F | 63 | ? |

| HEX | DEC | CHR |
|---|---|---|
| 40 | 64 | @ |
| 41 | 65 | A |
| 42 | 66 | B |
| 43 | 67 | C |
| 44 | 68 | D |
| 45 | 69 | E |
| 46 | 70 | F |
| 47 | 71 | G |
| 48 | 72 | H |
| 49 | 73 | I |
| 4A | 74 | J |
| 4B | 75 | K |
| 4C | 76 | L |
| 4D | 77 | M |
| 4E | 78 | N |
| 4F | 79 | O |
| 50 | 80 | P |
| 51 | 81 | Q |
| 52 | 82 | R |
| 53 | 83 | S |
| 54 | 84 | T |
| 55 | 85 | U |
| 56 | 86 | V |
| 57 | 87 | W |
| 58 | 88 | X |
| 59 | 89 | Y |
| 5A | 90 | Z |
| 5B | 91 | [ |
| 5C | 92 | \ |
| 5D | 93 | ] |
| 5E | 94 | ^ |
| 5F | 95 | _ |

| HEX | DEC | CHR |
|---|---|---|
| 60 | 96 | ` |
| 61 | 97 | a |
| 62 | 98 | b |
| 63 | 99 | c |
| 64 | 100 | d |
| 65 | 101 | e |
| 66 | 102 | f |
| 67 | 103 | g |
| 68 | 104 | h |
| 69 | 105 | i |
| 6A | 106 | j |
| 6B | 107 | k |
| 6C | 108 | l |
| 6D | 109 | m |
| 6E | 110 | n |
| 6F | 111 | o |
| 70 | 112 | p |
| 71 | 113 | q |
| 72 | 114 | r |
| 73 | 115 | s |
| 74 | 116 | t |
| 75 | 117 | u |
| 76 | 118 | v |
| 77 | 119 | w |
| 78 | 120 | x |
| 79 | 121 | y |
| 7A | 122 | z |
| 7B | 123 | { |
| 7C | 124 | \| |
| 7D | 125 | } |
| 7E | 126 | ~ |
| 7F | 127 | DEL |
The next table shows the set of 128 special ASCII characters, often called the extended ASCII characters:
| HEX | DEC | CHR |
|---|---|---|
| 80 | 128 | Ç |
| 81 | 129 | ü |
| 82 | 130 | é |
| 83 | 131 | â |
| 84 | 132 | ä |
| 85 | 133 | à |
| 86 | 134 | å |
| 87 | 135 | ç |
| 88 | 136 | ê |
| 89 | 137 | ë |
| 8A | 138 | è |
| 8B | 139 | ï |
| 8C | 140 | î |
| 8D | 141 | ì |
| 8E | 142 | Ä |
| 8F | 143 | Å |
| 90 | 144 | É |
| 91 | 145 | æ |
| 92 | 146 | Æ |
| 93 | 147 | ô |
| 94 | 148 | ö |
| 95 | 149 | ò |
| 96 | 150 | û |
| 97 | 151 | ù |
| 98 | 152 | ÿ |
| 99 | 153 | Ö |
| 9A | 154 | Ü |
| 9B | 155 | ¢ |
| 9C | 156 | £ |
| 9D | 157 | ¥ |
| 9E | 158 | ₧ |
| 9F | 159 | ƒ |
| A0 | 160 | á |
| A1 | 161 | í |
| A2 | 162 | ó |
| A3 | 163 | ú |
| A4 | 164 | ñ |

| HEX | DEC | CHR |
|---|---|---|
| A5 | 165 | Ñ |
| A6 | 166 | ª |
| A7 | 167 | º |
| A8 | 168 | ¿ |
| A9 | 169 | ⌐ |
| AA | 170 | ¬ |
| AB | 171 | ½ |
| AC | 172 | ¼ |
| AD | 173 | ¡ |
| AE | 174 | « |
| AF | 175 | » |
| B0 | 176 | ░ |
| B1 | 177 | ▒ |
| B2 | 178 | ▓ |
| B3 | 179 | │ |
| B4 | 180 | ┤ |
| B5 | 181 | ╡ |
| B6 | 182 | ╢ |
| B7 | 183 | ╖ |
| B8 | 184 | ╕ |
| B9 | 185 | ╣ |
| BA | 186 | ║ |
| BB | 187 | ╗ |
| BC | 188 | ╝ |
| BD | 189 | ╜ |
| BE | 190 | ╛ |
| BF | 191 | ┐ |
| C0 | 192 | └ |
| C1 | 193 | ┴ |
| C2 | 194 | ┬ |
| C3 | 195 | ├ |
| C4 | 196 | ─ |
| C5 | 197 | ┼ |
| C6 | 198 | ╞ |
| C7 | 199 | ╟ |
| C8 | 200 | ╚ |
| C9 | 201 | ╔ |

| HEX | DEC | CHR |
|---|---|---|
| CA | 202 | ╩ |
| CB | 203 | ╦ |
| CC | 204 | ╠ |
| CD | 205 | ═ |
| CE | 206 | ╬ |
| CF | 207 | ╧ |
| D0 | 208 | ╨ |
| D1 | 209 | ╤ |
| D2 | 210 | ╥ |
| D3 | 211 | ╙ |
| D4 | 212 | ╘ |
| D5 | 213 | ╒ |
| D6 | 214 | ╓ |
| D7 | 215 | ╫ |
| D8 | 216 | ╪ |
| D9 | 217 | ┘ |
| DA | 218 | ┌ |
| DB | 219 | █ |
| DC | 220 | ▄ |
| DD | 221 | ▌ |
| DE | 222 | ▐ |
| DF | 223 | ▀ |
| E0 | 224 | α |
| E1 | 225 | ß |
| E2 | 226 | Γ |
| E3 | 227 | π |
| E4 | 228 | Σ |
| E5 | 229 | σ |
| E6 | 230 | µ |
| E7 | 231 | τ |
| E8 | 232 | Φ |
| E9 | 233 | Θ |
| EA | 234 | Ω |
| EB | 235 | δ |
| EC | 236 | ∞ |
| ED | 237 | φ |
| EE | 238 | Ε |

| HEX | DEC | CHR |
|---|---|---|
| EF | 239 | ∩ |
| F0 | 240 | ≡ |
| F1 | 241 | ± |
| F2 | 242 | ≥ |
| F3 | 243 | ≤ |
| F4 | 244 | ⌠ |
| F5 | 245 | ⌡ |
| F6 | 246 | ÷ |
| F7 | 247 | ≈ |
| F8 | 248 | ° |
| F9 | 249 | ∙ |
| FA | 250 | · |
| FB | 251 | √ |
| FC | 252 | ⁿ |
| FD | 253 | ² |
| FE | 254 | ■ |
| FF | 255 | |
Some important number system terms, often used for data and data storage representation
The terms given below describe the various prefixes which are used as magnifying prefixes for data storage:
Byte:
The most important use for a byte is holding a character code. We have discussed it earlier.
Kilobyte
Technically a kilobyte is 1024 bytes, but the term is often used loosely as a synonym for 1000 bytes. In decimal systems, kilo stands for 1000, but in binary systems a kilo is 1024 (2¹⁰).
A kilobyte is usually represented by K or KB. To distinguish between a decimal K (1000) and a binary K (1024), the IEEE (Institute of Electrical and Electronics Engineers) has suggested the convention of using a small k for a decimal kilo and a capital K for a binary kilo, but this convention is by no means strictly followed.
Megabyte
Megabyte describes data storage of 1,048,576 (2²⁰) bytes, but when used to describe data transfer rates, as in MBps, it refers to one million bytes. Megabyte is usually abbreviated as M or MB.
Gigabyte
Gigabyte describes storage of 1,073,741,824 (2³⁰) bytes; one gigabyte is equal to 1,024 megabytes. Gigabyte is usually abbreviated as G or GB.
Terabyte
A terabyte is 1,099,511,627,776 (2⁴⁰) bytes, which is approximately 1 trillion bytes. Terabyte is sometimes taken as 10¹² (1,000,000,000,000) bytes, which is exactly one trillion.
Petabyte
A petabyte is 1,125,899,906,842,624 (2⁵⁰) bytes. A petabyte is equal to 1,024 terabytes.
Exabyte
An exabyte is 1,152,921,504,606,846,976 (2⁶⁰) bytes. An exabyte is equal to 1,024 petabytes.
Zettabyte
A zettabyte is 1,180,591,620,717,411,303,424 (2⁷⁰) bytes, which is approximately 10²¹ (1,000,000,000,000,000,000,000) bytes. A zettabyte is equal to 1,024 exabytes.
Yottabyte
A yottabyte is 1,208,925,819,614,629,174,706,176 (2⁸⁰) bytes, which is approximately 10²⁴ (1,000,000,000,000,000,000,000,000) bytes. A yottabyte is equal to 1,024 zettabytes.
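Because each binary prefix is a power of 2¹⁰, these values can be generated with simple left shifts. A minimal C sketch (our own illustration):

```c
#include <stdio.h>

int main(void)
{
    /* Each binary storage prefix is the previous one shifted up by 10 bits. */
    unsigned long long kb = 1ULL << 10;   /* 1,024 */
    unsigned long long mb = 1ULL << 20;   /* 1,048,576 */
    unsigned long long gb = 1ULL << 30;   /* 1,073,741,824 */
    unsigned long long tb = 1ULL << 40;   /* 1,099,511,627,776 */

    printf("KB=%llu MB=%llu GB=%llu TB=%llu\n", kb, mb, gb, tb);
    return 0;
}
```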
Common Data Storage Terms
Various names are used to refer to the groupings of bits of data described above. The most commonly used are listed in the following table:
| Term | Number of Bits |
|---|---|
| Bit / Digit / Flag | 1 |
| Nibble / Nybble | 4 |
| Byte / Character | 8 |
| Word | 16 |
| Double Word / Long Word | 32 |
| Very Long Word | 64 |