The decimal number 0.1 looks simple, but in binary it becomes an infinitely repeating fraction: 0.0001100110011.... If a computer stores 0.1 in binary, it must cut this repeating expansion off at some position, so an error is unavoidable. For many applications that is unacceptable. Consider interest computed at a rate set by the Fed: even a tiny error can have a significant impact on the US and world economy.

There are two ways to attack this problem. The first is to increase the number of bits the computer uses, making the error so small that it can be ignored. The drawback is that an error still remains in the end, and, more importantly, the extra bits increase the storage the data requires and reduce the utilization of the computer's storage space.

The second is to use BCD (binary-coded decimal), that is, to represent each decimal digit with a group of four binary digits. In BCD, 0.1 is written as

    0    .  1
    0000 . 0001

and 18.56 as

    1    8    .  5    6
    0001 1000 . 0101 0110

As you can see, this representation eliminates the error completely. But it has clear disadvantages. First, four binary digits can represent 16 values, yet BCD uses only 10 of them (the decimal digits 0 through 9), which wastes storage. Second, BCD arithmetic requires special handling; in practice, many computers have provided BCD arithmetic instructions at the hardware level. And with the continuous growth of memory capacity and chip integration brought by Moore's law, BCD's storage overhead matters less, so BCD has found wide application.

Now let us think further: why can't binary express the decimal 0.1 exactly? This is a question about real numbers and number bases. Our discussion is limited to the real numbers; imaginary numbers are not considered. In fact, in any base, binary or decimal alike, the real numbers can all be fully expressed: each base can write arbitrarily small numbers (0.0000...1, with as many zeros as you like), and every real number can be written as a possibly infinite expansion built from such fractions. The decimal 0.1 is a real number, so it certainly has a binary representation.
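The two points above can be checked directly in Python: inspecting the exact value a float literal actually stores shows the binary rounding error, and a small digit-by-digit encoder (the `to_bcd` helper is illustrative, not a standard API) shows how BCD represents each decimal digit exactly.

```python
from decimal import Decimal

# The float literal 0.1 is rounded to the nearest representable binary
# fraction, so the value actually stored is not exactly one tenth.
print(Decimal(0.1))       # prints 0.1000000000000000055511151231257827...
print(0.1 + 0.2 == 0.3)   # False: the rounding errors do not cancel

# Minimal BCD sketch: each decimal digit becomes its own 4-bit group,
# so any string of decimal digits is represented exactly, with no rounding.
def to_bcd(digits: str) -> str:
    return " ".join(format(int(d), "04b") for d in digits if d.isdigit())

print(to_bcd("18.56"))    # prints "0001 1000 0101 0110"
```

Note that `Decimal(0.1)` is deliberately given a float, not the string `"0.1"`, so that it exposes the binary value the float already holds.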
The problem, however, is that this corresponding binary number is an infinitely repeating fraction, which a computer cannot write out explicitly (unless some special notation is introduced). So the statement "a computer cannot use binary to represent 0.1" is correct in this sense: for a computer, a number that is not a finite-length fraction cannot be represented exactly, because the number left after rounding is no longer the original number.

Now think in the other direction: is there a number the computer can express in binary that cannot be expressed in decimal? That is, a number with a finite binary representation whose decimal representation is necessarily infinite? The answer is no. Every finite binary fraction can be represented by a corresponding finite decimal fraction: a finite binary fraction is m/2^k for some integers m and k, and multiplying numerator and denominator by 5^k turns it into n/10^k, a decimal that terminates after k digits.

We know that the real numbers are continuous on the number line, and around each of them lie endlessly many neighboring numbers. The decimals can be divided into two classes, infinite and finite: an infinite decimal is one whose fractional part never terminates, and a finite decimal is the opposite.
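The asymmetry described above can be made concrete with exact rational arithmetic. The sketch below (the function name is my own, not a library API) converts any finite binary fraction to an exact decimal string by multiplying through by 5^k, confirming that the result always terminates; applied to a truncation of 0.1's binary expansion, it also shows why cutting the expansion off yields a nearby but different number.

```python
from fractions import Fraction

def binary_fraction_to_decimal(bits: str) -> str:
    """Convert a finite binary fraction like '0.101' to an exact decimal string."""
    int_part, frac_part = bits.split(".")
    k = len(frac_part)
    value = Fraction(int(int_part + frac_part, 2), 2 ** k)  # m / 2**k
    scaled = value * 10 ** k          # denominator 2**k divides 10**k, so...
    assert scaled.denominator == 1    # ...this is always an exact integer
    digits = str(scaled.numerator).rjust(k + 1, "0")
    return digits[:-k] + "." + digits[-k:]

print(binary_fraction_to_decimal("0.101"))
# prints "0.625" -- finite binary always gives finite decimal

print(binary_fraction_to_decimal("0.00011001100110011"))
# prints "0.09999847412109375" -- a truncation of 0.1's binary
# expansion is exact as a decimal, but it is no longer 0.1
```

The key step is that 10^k = 2^k * 5^k, so scaling by 10^k always clears a denominator of 2^k; the reverse fails because 2^k is never a multiple of 5, which is exactly why 1/10 has no finite binary form.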