A Brief Account of the Theory of Computation

zhaozj · 2021-02-16

In the 1960s, a college student who did not know basic farm work, who could not, say, tell leek seedlings from wheat seedlings, might well be laughed at. After all, as the saying goes, people learn at different times and each trade has its own specialty; demanding that a person in one field master the knowledge of another is asking too much. Today, however, a master's or doctoral student in computer science who does not know what undecidability is, what the halting problem is and why it cannot be solved, or what the P = NP problem is, may likewise be laughed at. These questions are so basic and so important to computer science that they all belong to a discipline called the theory of computation, a piece of foundational knowledge that every computer-science researcher should have.

The theory of computation is the mathematical theory of computing machinery itself. Before the twentieth century, computing machinery always "computed" other objects and rarely "computed" itself. In the 1930s, in order to settle a fundamental question, namely whether undecidable problems exist, mathematicians gave several different definitions of the algorithm (later proved equivalent to one another), thereby founding computability theory. When scientists made the machine "compute" itself, a miracle appeared: Turing's diagonal method encodes the Turing machine itself and turns it into its own object of computation, proving that the halting problem is undecidable. The power of a computer (or program) is thus limited in principle.

In the 1930s, K. Gödel, S.C. Kleene, and others established the theory of recursive functions, characterizing the algorithmically computable number-theoretic functions as exactly the recursive functions. In the mid-1930s, A.M. Turing and E.L. Post independently proposed the concept of an ideal computer, characterizing the algorithmic solvability of a problem as its solvability on a rigorously defined ideal machine. The algorithm theory developed in the 1930s influenced the design of stored-program computers in the late 1940s: one of the ideal computers Turing proposed (now called the Turing machine), the universal machine, is itself of the stored-program type.
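The diagonal argument mentioned above can be sketched in a few lines of code. This is an illustrative sketch only: the function `halts` below is the hypothetical decider whose existence the argument refutes, not something that can actually be implemented.

```python
# A sketch of Turing's diagonal argument. `halts(program, data)` is the
# HYPOTHETICAL total decider for the halting problem -- assumed here only
# so the contradiction can be exhibited; it cannot actually exist.

def halts(program, data):
    """Hypothetical halting decider (assumed, not implementable)."""
    raise NotImplementedError("no total halting decider can exist")

def diagonal(program):
    """Feed a program its own text, then do the opposite of what
    halts() predicts."""
    if halts(program, program):
        while True:          # loop forever when halts() says "halts"
            pass
    else:
        return               # halt when halts() says "loops forever"

# Running diagonal on its own source text would contradict halts()
# whichever answer halts() gives, so no such halts() can exist.
```

Asking whether `diagonal` halts on its own source yields a contradiction either way, which is precisely how the halting problem's undecidability is proved.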

The main contents of the theory of computation are: automata theory and formal language theory; the theory of programs (including proofs of program correctness, program verification, and so on); formal semantics; and algorithm analysis and computational complexity theory. Automata theory and formal language theory developed in the 1950s; the history of the former can be traced back to the 1930s, because the Turing machine is itself a class of automaton (an infinite automaton). From the 1950s onward, some scholars began to consider ideal computers closer to real-world computers; J. von Neumann proposed the concept of a computer with a self-reproducing function in the early 1950s.

In the mid-1950s, Hao Wang put forward a variant of the Turing machine that is closer to a real machine than the original. He also proposed a machine that cannot erase symbols on its storage tape, and proved that this machine is equivalent to the Turing machine.

In the early 1960s, computers with random-access memory (RAM machines) and multi-tape Turing machines were proposed. Formal language theory originated from Noam Chomsky's work in mathematical linguistics. In this theory, formal languages are divided into four types: type-0 languages, type-1 languages, type-2 languages, and type-3 languages, with correspondingly defined type-0, type-1, type-2, and type-3 grammars. Type-1 languages are also called context-sensitive languages, type-2 languages context-free languages, and type-3 languages regular languages; the type-2 (context-free) languages have received the most attention. In the mid-1960s, the correspondence between the four classes of languages and four classes of automata was established (see the relation between formal languages and automata): each class of languages is exactly the class that the corresponding class of automata can recognize (see formal language theory). The theory of programming includes proofs of program correctness and program verification; some of its basic concepts and methods were proposed by von Neumann, Turing, and others during the 1940s. Von Neumann and his colleagues, in one paper, proposed a method of verifying the correctness of a program by means of proof.
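Two levels of the hierarchy just described can be illustrated by minimal recognizers. The sketch below (illustrative, not from the original text) contrasts a regular language, which a finite automaton with a fixed number of states can recognize, with the classic context-free language a^n b^n, which needs unbounded memory in the form of a stack or counter.

```python
# Minimal recognizers for two levels of the Chomsky hierarchy (a sketch).

def even_as(s):
    """Finite automaton: accepts strings over {a, b} with an even
    number of a's. A regular language -- one parity bit of state."""
    state = 0                      # 0 = even number of a's seen so far
    for ch in s:
        if ch == 'a':
            state ^= 1
    return state == 0

def is_anbn(s):
    """Counter-based (pushdown-style) check for the context-free
    language a^n b^n, n >= 0. No finite automaton can do this, because
    the count of unmatched a's is unbounded."""
    count = 0
    seen_b = False
    for ch in s:
        if ch == 'a':
            if seen_b:
                return False       # an 'a' after any 'b' is rejected
            count += 1
        elif ch == 'b':
            seen_b = True
            count -= 1
            if count < 0:
                return False       # more b's than a's so far
        else:
            return False
    return count == 0
```

The counter in `is_anbn` plays the role of the pushdown automaton's stack, matching the language/automaton correspondence stated in the text.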

In the late 1970s, J.T. Schwartz and M. Davis proposed a software technique they called "correct-program technology". The method first selects thousands of basic program modules and ensures the correctness of these basic programs with the known verification methods (including formal proofs of program correctness); a set of program-combining rules that preserve correctness is then provided, so that a wide variety of correct programs can be generated by repeated combination. It has been pointed out that this technique develops the role played in program-correctness proofs by the "loop invariant", that is, the predicate attached to an entry or exit point of a program, called an "inductive assertion" in some of the literature, which can be used in the synthesis of programs. In other words, instead of, as in the past, finding a few loop invariants for a given program and then using them to prove that program correct, one first finds a few loop invariants from the requirements of the program and then generates the program from these invariants. The idea of automatic programming has likewise been put forward since the 1940s.
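The loop invariant mentioned above can be made concrete with a small worked example (a sketch of my own, not taken from the original). The invariant holds when the loop is entered, is preserved by every iteration, and at exit combines with the loop's termination condition to give the desired postcondition.

```python
# A worked loop-invariant argument for summing a list.
# Invariant: total == sum(xs[:i]) at the top of every iteration.

def array_sum(xs):
    total = 0
    i = 0
    # Invariant holds on entry: total == sum(xs[:0]) == 0.
    while i < len(xs):
        total += xs[i]   # re-establishes the invariant for i + 1
        i += 1
    # At exit: i == len(xs), so the invariant gives
    # total == sum(xs[:len(xs)]) == sum(xs), the postcondition.
    return total
```

Read in the synthesis direction described in the text, one would start from the postcondition `total == sum(xs)`, choose the invariant `total == sum(xs[:i])`, and derive the loop body from it.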

In 1969, several people put forward this idea independently. The formal grammar of programming languages has developed greatly since the mid-1950s. Research on formal semantics has, since the 1960s, produced several different semantic theories, chiefly operational semantics, denotational semantics, and algebraic semantics; but no formal semantics has yet won general acceptance in software technology, so semantic theories better suited to actual computation still need to be proposed. Program logic, which is applied in program-correctness proofs and in formal semantics, is a development of the 1960s.

It is an extension of predicate logic. The original predicate logic has no notion of time; the inference relations it considers are relations holding at a single moment. A program, however, is a process: the logical relation between a program's input predicate and its output predicate is not a same-moment relation. Therefore, in reasoning about programs, the original predicate logic does not suffice, and new logics are needed. At the end of the 1960s, E. Engeler and others founded algorithmic logic. C.A.R. Hoare also created a program logic, obtained by adding program operators to the original predicate logic.
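Hoare's logic relates a precondition, a program fragment, and a postcondition in a triple {P} C {Q}: if P holds before C runs and C terminates, then Q holds afterward. The sketch below (my illustration, with the proof obligations rendered as runtime `assert`s rather than the static proofs Hoare logic actually uses) shows the shape of such a triple.

```python
# A Hoare triple {P} C {Q} rendered as comments plus runtime checks.
# (Hoare logic proves these implications statically; the asserts here
# merely illustrate the precondition/postcondition roles.)

def abs_plus_one(x):
    # Precondition P: x is an integer.
    assert isinstance(x, int)
    # Command C:
    if x < 0:
        x = -x
    x = x + 1
    # Postcondition Q: the result is an integer >= 1.
    assert x >= 1
    return x
```

The "program operator" added to predicate logic is exactly what lets a formula talk about the state after C, relative to the state before it.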

Algorithm analysis and computational complexity theory study the complexity of algorithms. The terminology in this area has been disputed; it is generally agreed that the study of the complexity of various specific algorithms is called algorithm analysis, while the study of algorithmic complexity in general is called computational complexity theory. Computational complexity theory was originally a branch of computability theory, taking as its object of study the computational complexity (in early work, the "computational difficulty") of the computable functions, that is, the recursive functions. Computability divides into two kinds: theoretical computability and practical computability. Complexity theory as part of computability theory concerns the former; complexity theory as a field of computer science concerns the latter.

The basic problem of this branch is to clarify the structure and properties of the class of practically computable functions. Practical computability is an intuitive concept; how to characterize it precisely is a foundational question. Since the mid-1960s, researchers have generally taken the functions computable in polynomial time to be the practically computable functions. This is in effect a thesis, rather than a proposition that can be proved or refuted mathematically. It has been pointed out that when the degree of the polynomial is high, it is hard to say the function is practically computable.
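A small numerical comparison (my illustration) shows both why polynomial time is taken as the boundary of practicality and the caveat just raised: a polynomial of high degree is formally "practical" yet astronomically expensive.

```python
# Step counts for three growth rates at a given input size n (a sketch).
# Polynomial time is the usual formal stand-in for "practically
# computable", but n**100, though polynomial, is hardly practical.

def step_counts(n):
    return {
        "n^2": n ** 2,        # low-degree polynomial: genuinely practical
        "n^100": n ** 100,    # polynomial, yet astronomically large
        "2^n": 2 ** n,        # exponential: impractical even for small n
    }

# At n = 50, n^2 is only 2500 steps, while 2^n already exceeds 10**15.
```

This is why "polynomial time = practically computable" is a working thesis rather than a theorem: the boundary it draws is convenient and robust, not a literal description of feasibility.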

Another fundamental problem concerns the comparison of deterministic and non-deterministic machines on decision problems. It has long been known that deterministic and non-deterministic Turing machines are equivalent in power. A non-deterministic machine is at least as powerful as a deterministic one; and if computation time is unbounded, a deterministic machine can always simulate a non-deterministic machine by exhaustive search. Hence the problem-solving ability of the two is the same. Whether they remain equal in power under a polynomial time bound, however, is the famous P = NP problem. Concerning computation and algorithms (including programs), serial computation has been studied extensively, while research on parallel computation remains insufficient (especially on asynchronous parallel computation); research on parallel computation is therefore likely to be a focus of theoretical computer science. For a decision problem, if there is a program such that, given any instance in the domain as input, execution terminates and outputs "yes" whenever the answer is affirmative, and does not terminate otherwise, the decision problem is said to be semi-decidable. A decidable problem is always semi-decidable. A set is recursively enumerable if and only if membership in it is a semi-decidable problem. Turing proved in 1936 that the halting problem for Turing machines is undecidable: no Turing machine can decide, for an arbitrary Turing machine and input, whether that machine halts on that input. The halting problem is, however, semi-decidable. The undecidability of the halting problem is very important: from it the undecidability of many problems in computer science, mathematics, and logic can be derived.
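The semi-decision procedure described above can be sketched directly (my illustration, modeling "programs" as Python generators so that single steps can be observed): run the program step by step and answer "yes" if it ever halts; on a non-halting input the procedure itself never terminates, which is exactly what semi-decidability permits.

```python
# A semi-decision procedure for halting (a sketch). Programs are
# modeled as generator functions: each yield is one computation step.

def semi_decide_halts(program):
    """Terminates with "yes" iff `program` halts; otherwise runs
    forever -- the defining behavior of a semi-decision procedure."""
    for _ in program():        # step the program until it finishes
        pass
    return "yes"

def halting_program():
    for i in range(3):         # halts after three steps
        yield i

def looping_program():
    while True:                # never halts; semi_decide_halts on this
        yield                  # would itself never return

# semi_decide_halts(halting_program) returns "yes";
# semi_decide_halts(looping_program) would loop forever.
```

Turing's theorem says this one-sided behavior cannot be improved to a full decider: no procedure also terminates with "no" on every non-halting input.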

Please credit the original source when reprinting: https://www.9cbs.com/read-19995.html
