A Preliminary Discussion of Artificial Intelligence
Xia Jieyun
(Guangzhou Engineering and Technology Vocational College)
Abstract: Imitating human behavior with machines has a long history. As a scientific frontier and cross-discipline, artificial intelligence has a development history closely tied to the history of computer science and technology. This article describes the definition and development history of artificial intelligence and offers a bold forecast for its future.
Keywords: artificial intelligence; evolution; automatic theorem proving; search; knowledge engineering
1 Introduction to artificial intelligence
Artificial intelligence has made very significant contributions to society, and its role is felt in many fields, especially computing, where its applications are most prominent. It can be said that wherever computers are applied, artificial intelligence is applied; wherever automation or semi-automation is needed, the theories, methods and techniques of artificial intelligence are needed. At present, the main areas of artificial intelligence application coincide with the main areas of computer application.
Artificial intelligence studies the mechanisms of human intelligence and how to simulate human intelligence with machines. In the latter sense, artificial intelligence is also known as "machine intelligence" or "intelligence simulation". Artificial intelligence developed after the appearance of the modern electronic computer; on the one hand it has become an extension of human intelligence, and on the other hand it provides new theories and research methods for exploring the mechanisms of human intelligence.
One of the main purposes of artificial intelligence is to make machines competent at complex work that usually requires human intelligence. However, in different eras, different people understand this "complex work" differently. For example, heavy scientific and engineering calculation was originally work for the human brain; now the electronic computer can not only complete such calculation, but do it faster and more accurately than the human brain, so people today no longer regard such calculation as a "complex task that needs human intelligence". It can be seen that the definition of complex work changes as the times develop and technology improves, so the specific goals of artificial intelligence as a science naturally develop with the times: on the one hand it continues to make new progress, and on the other hand it turns to more meaningful, more difficult goals.
The development history of artificial intelligence is connected with the history of computer science and technology. In addition to computer science, artificial intelligence also involves information theory, cybernetics, automation, bionics, biology, psychology, mathematical logic, linguistics, medicine and philosophy.
From an international perspective, there are three pathways of artificial intelligence research. The first is the physiological pathway, which uses bionic methods to simulate the sense organs and the brain structure and function of animals and humans, building neuron models and brain models. The second is the psychological pathway, which applies the methods of experimental psychology to summarize the laws of human thinking activity and simulates them psychologically with electronic computers. The third is the engineering pathway, which studies how to use electronic computers to simulate intelligence functionally. At present the third approach has developed fastest; it also absorbs new ideas from the other two pathways, relying on their insights to expand its results.
2 The development of artificial intelligence
It is generally believed that the ideological sprouts of artificial intelligence can be traced back to the famous German mathematician and philosopher Leibniz (1646-1716). His idea was to establish a universal symbolic language, express "thought content" with the symbols of this language, and express the logical relations between "thought contents" with formal relations between symbols. This idea of "mechanizing" thinking within a "universal language" can be seen as the earliest description of artificial intelligence.
Turing, the founder of computer science, is considered "the father of artificial intelligence". He focused on what conditions a computer should satisfy in order to be called "intelligent". In 1950 he proposed the famous "Turing test": a person and a computer are placed in two rooms, linked to the outside world only through keyboards and printers. A human judge puts questions to the person and the computer in the rooms (such as "Are you a machine or a person?" or "Are you a man or a woman?"), and judges from the answers which room holds the person and which holds the computer. Turing held that if a judge of average ability cannot tell them apart correctly, the computer can be called intelligent. The Turing test gave a clear definition of a standard for intelligence. Interestingly, although some computers have since passed the Turing test, people still do not acknowledge that these computers are intelligent. This reflects the fact that people's understanding of the standard of intelligence has deepened, and the requirements placed on artificial intelligence have become higher. At almost the same time as the above work, von Neumann studied artificial intelligence from a biological perspective. From the viewpoint of biology, intelligence is the result of evolution, and one of the basic conditions of evolution is "reproduction". To this end, von Neumann constructed the "self-reproducing automaton", a mathematical model with the capability of "reproduction". Von Neumann's analysis showed that the internal structure of the self-reproducing automaton is sufficient and necessary for "reproduction". He then conjectured that this structure must be present in living cells. Five years later, Crick and Watson's major discovery of the structure of DNA fully confirmed von Neumann's conjecture: several functional modules of the self-reproducing automaton have biological counterparts.
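As a rough illustration (not Turing's own formulation), the imitation-game protocol can be sketched as a text-only exchange in which a judge sees replies from two respondents hidden behind neutral labels; all names below (`imitation_game`, the toy respondents, the naive judge) are hypothetical.

```python
def imitation_game(judge, respondent_a, respondent_b, questions):
    """Sketch of Turing's imitation game: the judge sees only text
    replies from two respondents hidden behind the labels X and Y,
    and must guess which label is the machine."""
    respondents = {"X": respondent_a, "Y": respondent_b}
    transcript = {"X": [], "Y": []}
    for q in questions:
        for label, respond in respondents.items():
            transcript[label].append((q, respond(q)))
    return judge(transcript)  # the judge's guess: "X" or "Y"

# Toy respondents with canned answers (both, of course, claim humanity).
machine = lambda q: "I am certainly a person."
person = lambda q: "Of course I'm human!"

# A naive judge that always guesses the first label; a judge who can do
# no better than chance is exactly the threshold Turing proposed.
naive_judge = lambda transcript: "X"

print(imitation_game(naive_judge, machine, person, ["Are you a machine?"]))  # X
```

The point of the sketch is the information structure, not the toy answers: the judge's verdict may depend only on the transcript, never on the hidden identities.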
Among them, module A corresponds to the ribosomes; B corresponds to RNA polymerase and DNA polymerase; D corresponds to RNA and DNA; and E corresponds to repressor and derepressor control molecules. Von Neumann's work provided an important basis for a later research route in artificial intelligence (artificial life).
The above work of Turing and von Neumann, together with McCulloch and Pitts' study of mathematical models of neurons, constitutes the initial stage of artificial intelligence.
The Dartmouth workshop held in the summer of 1956 is considered the sign that artificial intelligence was formally born as an independent discipline. This workshop gathered leaders from mathematics, information science, psychology, neurophysiology and computer science, including Minsky, Rochester, Simon, Shannon and McCarthy. Among them, Minsky, McCarthy, Newell and Simon were later considered the "four leaders" of American artificial intelligence. The participants explored, from different angles, the pathways and methods for achieving intelligence, and decided to name this new research direction with the term "artificial intelligence". The Dartmouth workshop opened the first development period of artificial intelligence, in which researchers launched a series of pioneering works and achieved impressive results.
Shortly afterwards, Newell, Shaw and Simon completed a computer program, Logic Theorist, that automatically proved mathematical theorems (earlier, Martin Davis had written an arithmetic proof program, but it was not published). It proved 38 theorems of Chapter 2 of Principia Mathematica, creating the branch of artificial intelligence known as "automatic theorem proving".
In 1958 the American logician Wang Hao made important progress in automatic theorem proving. His program took less than 5 minutes on an IBM 704 computer to prove the theorems of the propositional calculus in Principia Mathematica. In 1959 Wang Hao's improved program proved most of the 220 theorems of propositional and predicate calculus mentioned above.
In 1983 the American Mathematical Society awarded Wang Hao the first "Milestone Prize" for automatic theorem proving to commend his outstanding contribution (the Milestone Prize for automatic theorem proving is awarded only once every 25 years, from which its weight can be seen). Encouraged by Wang Hao's work, research on automatic theorem proving boomed. For example, Slagle's symbolic integration program SAINT was tested to reach the level of a college student, and Moses' program SIN was about three times as efficient as SAINT and was considered to reach expert level. The theoretical value and range of application of automatic theorem proving are not limited to mathematics; in fact, many problems can be converted into theorem proving, or are related to it. The core issue of automatic theorem proving is automatic reasoning, and reasoning plays an important role in human intelligent behavior, so a general problem solver that does not rely on a specific domain was a matter worth exploring. Starting in 1957, Newell, Shaw, Simon and others developed such a general problem-solving program, called GPS, on the basis of Logic Theorist. Although later practice showed that GPS, as an independent problem solver, was limited in ability, the techniques developed in GPS were important for the development of artificial intelligence.
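Wang Hao's actual method was a sophisticated proof procedure; as a much cruder stand-in to show what "machine-checkable propositional theorems" means, the sketch below decides theoremhood by brute-force truth-table enumeration. The formulas and names are illustrative only.

```python
from itertools import product

def tautology(formula, variables):
    """Decide whether a propositional formula holds under every truth
    assignment -- a brute-force decision procedure for the
    propositional calculus.  `formula` maps a dict of truth values
    (one per variable name) to a bool."""
    return all(
        formula(dict(zip(variables, values)))
        for values in product([False, True], repeat=len(variables))
    )

# Material implication a -> b, written as (not a) or b.
implies = lambda a, b: (not a) or b

# The law of identity p -> p is a theorem:
print(tautology(lambda v: implies(v["p"], v["p"]), ["p"]))  # True
# A modus-ponens-style tautology (p and (p -> q)) -> q:
print(tautology(lambda v: implies(v["p"] and implies(v["p"], v["q"]), v["q"]),
                ["p", "q"]))  # True
# p -> q is not a theorem (fails when p is true and q is false):
print(tautology(lambda v: implies(v["p"], v["q"]), ["p", "q"]))  # False
```

Truth-table enumeration is exponential in the number of variables, which is precisely why the efficient proof procedures of Wang Hao and his successors mattered.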
Another deep impression left by early artificial intelligence research is the checkers program Samuel developed in 1956. The program's initial playing strength was very low, far from a match for Samuel himself. But it had learning ability: it could learn from game records and improve through practice. After three years of "learning", the program defeated Samuel in 1959; three years after that, it defeated a US champion player. It is worth noting that although checkers is only a pastime and a game-playing program may seem to be just a game, the significance of Samuel's work is great: it simultaneously opened up "search" and "machine learning", two directions important for the development of artificial intelligence.
Just as the significance of automatic theorem proving is not limited to mathematics, the significance of search is not limited to games. According to the viewpoint of cognitive psychology, a large part of the human thinking process can be abstracted as the process of moving from an initial state of a problem to a terminal state, so it can be converted into a search problem and completed automatically by a machine. Consider, for example, the "planning" problem. Imagine a robot required to complete a complex task that contains many different subtasks, where some subtasks can only be carried out after other subtasks have been completed. The robot needs to "envisage" a feasible action scheme in advance, so that by acting in accordance with the scheme it can successfully complete the task. "Planning" means finding such a feasible action scheme, which can be implemented as a search in the state space formed by the completion states of the subtasks.
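The planning-as-search idea above can be sketched directly: each state is the set of subtasks completed so far, and breadth-first search finds an ordering that respects the prerequisites. The robot task and its subtask names are invented for illustration.

```python
from collections import deque

def plan(tasks, prereqs):
    """Find an order in which all tasks can be completed, where
    prereqs[t] lists the subtasks that must be finished before t.
    Cast as breadth-first search from the initial state (nothing
    done) to the goal state (everything done)."""
    start = frozenset()
    goal = frozenset(tasks)
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        done, order = queue.popleft()
        if done == goal:
            return order
        for t in tasks:
            # A subtask is applicable if not yet done and its
            # prerequisites are all satisfied.
            if t not in done and all(p in done for p in prereqs.get(t, [])):
                nxt = done | {t}
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, order + [t]))
    return None  # no feasible plan exists

# Hypothetical robot task: fetch a part, machine it, then deliver it.
steps = ["fetch", "machine", "deliver"]
deps = {"machine": ["fetch"], "deliver": ["machine"]}
print(plan(steps, deps))  # ['fetch', 'machine', 'deliver']
```

Because states are sets of completed subtasks, the same search also handles tasks whose subtasks can interleave in several valid orders; it simply returns one feasible ordering.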
Early studies of artificial intelligence also included natural language understanding, computer vision and robotics. A large number of studies showed that universal methods such as automatic reasoning and search are far from enough on their own. The cognitive-psychology studies of Newell, Simon and others showed that experts in various fields exhibit extraordinary ability in their professional fields mainly because they possess rich expertise (domain knowledge and experience). In the mid-1970s, Feigenbaum proposed the concept of knowledge engineering, marking the second development period of artificial intelligence. Knowledge engineering emphasizes the role of knowledge in problem solving; accordingly, its research content divides into three aspects: knowledge acquisition, knowledge representation and knowledge utilization. Knowledge acquisition studies how to obtain expert knowledge effectively; knowledge representation studies how to express expert knowledge in forms that are easy to store and use in the computer; knowledge utilization studies how to use the appropriate expertise to solve problems in a specific field. The main technical means of knowledge engineering developed on the basis of the early results; knowledge utilization in particular relies mainly on the technical results of automatic reasoning and search. In knowledge representation, in addition to the logical and procedural representations of the early work, the semantic network representation proposed in research on associative memory and natural language was further developed, leading in turn to frame representations, conceptual dependency and script representations, and production-rule representations, among others. Unlike the early research, knowledge engineering emphasizes practical application, and its main application results are the various expert systems. The core components of an expert system include: (a) a knowledge base containing expert knowledge and other knowledge;
(b) an inference engine that uses the knowledge to solve problems.
The development cycle of a large expert system is often more than 10 years, and the main reason lies in knowledge acquisition. Although a domain expert can solve problems very well, the expert is often unable to say clearly how a problem is solved and what knowledge is used. This makes it difficult for the knowledge engineers responsible for collecting expert knowledge to complete the knowledge-acquisition task effectively. This situation greatly stimulated the in-depth development of research on automatic knowledge acquisition, that is, machine learning. Machine learning methods that have received relatively much attention include inductive learning, analogical learning, explanation-based learning, reinforcement learning and evolutionary learning. The research objective of machine learning is to let the machine obtain relevant knowledge and skills from its own or "others'" experience of solving problems, thereby improving its ability to solve problems.
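The knowledge-base-plus-inference-engine structure described above can be sketched minimally as forward chaining over production rules, assuming a toy rule format of (premises, conclusion) pairs; the diagnostic rules below are invented purely for illustration.

```python
def forward_chain(facts, rules):
    """Tiny inference engine: repeatedly fire any rule whose premises
    are all established facts, adding its conclusion, until nothing
    new can be derived.  `rules` is a list of (premises, conclusion)
    pairs forming the knowledge base."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and all(p in facts for p in premises):
                facts.add(conclusion)
                changed = True
    return facts

# A hypothetical diagnostic knowledge base (illustrative only).
kb = [
    (["fever", "cough"], "flu-suspected"),
    (["flu-suspected", "short-of-breath"], "see-doctor"),
]
derived = forward_chain(["fever", "cough", "short-of-breath"], kb)
print(sorted(derived))
```

The separation visible even in this sketch (the generic `forward_chain` loop versus the domain-specific `kb`) is exactly the division of labor that knowledge engineering advocates: the inference engine is reusable, and only the knowledge base changes between domains.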
3 Artificial intelligence today
Since the 1980s, with the spread of computer networks and especially the emergence of the Internet, the wide application of various computer technologies, including artificial intelligence technology, has driven major changes in the human-machine relationship. According to the predictions of Japanese futurologists, the human-machine relationship is rapidly shifting from the traditional model in which people adapt to machines to a new model in which machines adapt to people. This transformation will cause huge changes in modes of social production and ways of life, and it also poses new topics for artificial intelligence and even the whole of information technology. This has prompted artificial intelligence to enter its third development period.
In this new development period, artificial intelligence faces a series of new application needs.
The first is to provide strong technical means to support distributed collaborative ways of working. Modern production is socialized production: workers of different specialties, at different or the same times and places, engage in different subtasks of the same task. This requires the computer not only to provide assistance and support for each subtask, but also to provide assistance and support for the coordination between subtasks. Since each subtask can be carried out relatively independently, the relations between subtasks inevitably present dynamic changes that are difficult to predict. Thus coordination between subtasks (that is, support for distributed collaboration) poses a huge challenge to artificial intelligence, to the whole of information technology, and to its basic theory.
Second, networking advances informatization, enabling originally dispersed databases to form an interconnected whole, namely a common information space. Although existing browsers and search engines help users find information online, this help is far from enough, so that "information overload" and "information loss" grow ever more severe. More powerful intelligent information-service tools have become an urgent need of users. On the other hand, the value of the information space lies not only in individual information entries (for example, a manufacturer's information about a new product), but even more in the universal knowledge hidden in large classes of information (for example, the changing trend of supply and demand in an industry). Therefore knowledge discovery in data has also become an urgent research topic.

Robots have always been an urgent need of modern industry. With the development of robotics, research now focuses on autonomous robots that can work independently in dynamic, unpredictable environments, and on robots that can collaborate with other robots (and with people). Obviously, cooperation among such robots can be seen as distributed collaborative work in the physical world, and thus involves the same theoretical and technical issues.

It can be seen that the outstanding feature of the third development period of artificial intelligence is the pursuit of systems that can operate autonomously and coordinate with one another in dynamic, unpredictable environments; such a system is called an agent. Currently, research revolves around agent theory, agent architectures and agent languages, and has produced a series of important new ideas, theories, methods and technologies. In this research, artificial intelligence shows a trend of merging with software engineering, distributed computing and communication technology. The applications of agent research are not limited to production and work, but extend into people's learning and entertainment.
For example, virtual training systems combined with virtual reality can enable trainees to learn the basic skills of flying without actually operating an aircraft; similarly, they can let customers "experience" a product before actually purchasing it.
Looking back over the development history of artificial intelligence, it can be seen that it has always followed certain basic ideas. The first is the emphasis on realizing human intelligence artificially, rather than simply simulating it, so that it can serve people as much as possible. The second is the emphasis on multidisciplinary intersection: more and more disciplines, including mathematics, information science, biology, psychology, physiology, ecology and nonlinear science, are being integrated into artificial intelligence research.
At present, three hotspots of artificial intelligence research are: intelligent interfaces, data mining, and agents and multi-agent systems.
Intelligent interface technology studies how to enable people to communicate with computers conveniently and naturally. To achieve this goal, the computer is required to read text, understand speech, converse, and even translate between different languages; the realization of these functions depends on knowledge representation. Therefore, research on intelligent interface technology has both enormous application value and basic theoretical significance. At present, intelligent interface technology has achieved significant results: text recognition, speech recognition, speech synthesis, image recognition, machine translation and natural language understanding have all begun to be put into practical use.
Data mining is the process of extracting, from large amounts of incomplete, noisy, fuzzy and random practical application data, information and knowledge that is not known in advance but is potentially useful. Research on data mining and knowledge discovery now rests on three powerful technical pillars: databases, artificial intelligence and mathematical statistics. The main research topics include basic theory, discovery algorithms, data warehouses, visualization techniques, models for interchanging qualitative and quantitative information, knowledge representation, the maintenance and reuse of discovered knowledge, semi-structured and unstructured data, and online data mining.
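One concrete instance of the "discovery algorithms" mentioned above is frequent-pattern counting, the step underlying association-rule miners such as Apriori. The sketch below, with invented purchase data, finds item pairs that co-occur in enough transactions to be considered "frequent".

```python
from itertools import combinations
from collections import Counter

def frequent_pairs(transactions, min_support):
    """Find pairs of items that co-occur in at least `min_support`
    transactions -- the support-counting step behind association-rule
    mining.  Items within a transaction are deduplicated and sorted
    so each pair is counted once, in canonical order."""
    counts = Counter()
    for t in transactions:
        for pair in combinations(sorted(set(t)), 2):
            counts[pair] += 1
    return {pair: n for pair, n in counts.items() if n >= min_support}

# Hypothetical purchase records for a hardware supplier.
baskets = [
    ["bolts", "nuts", "washers"],
    ["bolts", "nuts"],
    ["nuts", "washers"],
    ["bolts", "nuts", "grease"],
]
print(frequent_pairs(baskets, min_support=3))  # {('bolts', 'nuts'): 3}
```

A pair with support 3 out of 4 baskets is precisely the kind of "universal knowledge hidden in large classes of information" the section describes: no single record states that bolts and nuts sell together, yet the pattern emerges from counting.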
An agent is an entity with mental states such as beliefs, desires, intentions, capabilities, choices and commitments; it is larger-grained than an object, more intelligent, and has a certain autonomy. An agent tries to complete tasks autonomously and independently; it can interact with its environment, communicate with other agents, and achieve goals through planning. Multi-agent systems mainly study how to coordinate intelligent behavior among multiple logically or physically separate agents so as ultimately to solve problems. At present, research on agents and multi-agent systems concentrates mainly on agent and multi-agent theories, agent architectures and organization, agent languages, cooperation and coordination among agents, communication and interaction techniques, multi-agent learning, and applications of multi-agent systems.
4 The future of artificial intelligence
Artificial intelligence may develop in the following directions: fuzzy processing, parallelization, neural networks and machine emotion. At present the reasoning function of artificial intelligence has seen breakthroughs, and learning and association functions are being studied; the next step is to imitate the fuzzy processing function of the human right brain and the parallel processing of the whole brain. The artificial neural network is a new field of future artificial intelligence application, and the future intelligent computer may be a combination of a von Neumann machine as the host and an artificial neural network as an intelligent peripheral. Studies have shown that emotion is part of intelligence rather than separate from it, so the next breakthrough of artificial intelligence may be to give computers emotional capability. Emotional capability is critical to natural communication between computers and people.
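As a minimal illustration of the neural-network direction (a single artificial neuron, not a full network), the sketch below trains a perceptron with the classic error-correction rule on the AND truth table; all names and parameter values are illustrative.

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    """Minimal artificial neuron: weighted sum plus threshold, trained
    with the classic perceptron error-correction rule on a toy
    linearly separable task.  Returns the trained classifier."""
    w0, w1, bias = 0.0, 0.0, 0.0
    for _ in range(epochs):
        for (x0, x1), target in samples:
            out = 1 if w0 * x0 + w1 * x1 + bias > 0 else 0
            # Nudge the weights in the direction that reduces the error.
            err = target - out
            w0 += lr * err * x0
            w1 += lr * err * x1
            bias += lr * err
    return lambda x0, x1: 1 if w0 * x0 + w1 * x1 + bias > 0 else 0

# Learn logical AND from its truth table.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
net = train_perceptron(data)
print([net(a, b) for (a, b), _ in data])  # [0, 0, 0, 1]
```

The perceptron converges here because AND is linearly separable; the limits of single neurons on non-separable functions are exactly what motivated the multi-layer networks the section anticipates.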
Artificial intelligence has always stood at the forefront of computer technology, and the theories and discoveries of artificial intelligence research will, to a large extent, determine the direction of development of computer technology.
Because the miniaturization of computer chips is approaching its limit, people increasingly hope that new computer technologies will drive artificial intelligence forward. At least three technologies may trigger a new revolution: photonic computers, quantum computers and biological computers.
By some estimates, future photonic computers may be 1,000 to 10,000 times faster than today's supercomputers. A quantum computer with around 5,000 quantum bits could solve, within about 30 seconds, problems that are intractable for traditional supercomputers. Relatively speaking, research on biological computers is closer to reality: the University of Wisconsin-Madison has developed a fairly complex DNA computer. It is reported that the amount of information DNA can store may be comparable to a trillion CDs. If these three technologies mature, they will play a decisive role in the development of artificial intelligence.
5 Conclusion
Many scientists assert that the wisdom of machines will quickly exceed the combined wisdom of Albert Einstein and Stephen Hawking. The famous physicist Stephen Hawking believes that, just as humans can design computers that surpass them in numerical calculation, intelligent machines will create machines of still better performance. Sooner or later, and quite possibly sooner than expected, the intelligence of computers may exceed human intelligence.