What is a neural network (repost)
Because the neural network is the product of many disciplines, each discipline views it from its own perspective, and many different definitions exist in the scientific community. The most widely used is T. Kohonen's: "A neural network is a massively parallel, interconnected network of simple adaptive units whose organization is able to simulate the way the biological nervous system interacts with real-world objects."

If we compare the way the human brain processes information with the working mode of the von Neumann computer, the brain shows the following distinct characteristics:

1. Massive parallelism. In a von Neumann machine, information processing is centralized and serial: every program instruction must be fetched into the CPU before it is executed. When a person recognizes an image or makes a decision, the many pieces of knowledge and experience stored in the brain work simultaneously to produce a quick answer. Research suggests the human brain contains on the order of 10^10 to 10^11 neurons, each with on the order of 10^3 connections. This provides enormous storage capacity and allows judgments to be made at very high speed.

2. Information processing and storage are combined. In a von Neumann machine, stored content and storage addresses are separate: the address of a memory cell must be found before its content can be retrieved, and a hardware failure in the memory destroys all the information it holds. Neurons in the human brain have both processing capability and storage capability, so recall does not rely on addresses, and complete memories can be reconstructed from partial content. When a "hardware" failure occurs (such as a head injury), not all stored information becomes invalid; only the part that is most seriously damaged is lost.

3. Self-organization and self-learning. A von Neumann machine has no active learning or adaptive ability; it can only compute step by step according to a program that a person has prepared in advance. The human brain, by contrast, continually adapts to the external environment through internal self-organization and self-learning, so it can effectively handle analog, fuzzy, or random problems.

The development of neural network research can be roughly divided into four phases.

1. The first phase runs up to the mid-1950s. The Spanish anatomist Cajal founded the neuron doctrine at the end of the 19th century. It holds that the neuron is polarized: the cell body and dendrites receive impulses from other neurons, while the axon carries signals away from the cell body. The staining techniques and microelectrode techniques invented after him continued to reveal the main structural features of neurons and their electrical properties. In 1943, the American psychologist W.S. McCulloch and the mathematician W.A. Pitts proposed a very simple neuron model, the M-P model, in their paper "A Logical Calculus of the Ideas Immanent in Nervous Activity". This model treats the neuron as a threshold logic device, and thereby opened the theoretical study of neural network models. In 1949, the psychologist D. Hebb published "The Organization of Behavior", which proposed a rule for the change of connection strength between neurons, the so-called Hebb learning rule. Hebb wrote: "When an axon of cell A is near enough to excite a cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A's efficiency, as one of the cells firing B, is increased."
Put simply: if two neurons fire at the same time, the strength of the synaptic connection between them is increased.
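The two ideas above can be sketched in a few lines of code. This is a minimal illustration, not any historical implementation: the weights, threshold, and learning rate are arbitrary values chosen for the example.

```python
# Illustrative sketch of an M-P (McCulloch-Pitts) threshold neuron and a
# Hebbian weight update; all parameters here are made up for demonstration.

def mp_neuron(inputs, weights, threshold):
    """M-P unit: fire (output 1) iff the weighted input sum reaches the threshold."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

def hebb_update(weight, pre, post, lr=0.1):
    """Hebb's rule: strengthen the connection only when both units are active."""
    return weight + lr * pre * post

# With unit weights and threshold 2, the M-P neuron computes logical AND.
assert mp_neuron([1, 1], [1, 1], threshold=2) == 1
assert mp_neuron([1, 0], [1, 1], threshold=2) == 0

# Repeated co-activation strengthens the synapse, as Hebb described.
w = 0.5
for _ in range(3):
    w = hebb_update(w, pre=1, post=1)
print(round(w, 2))  # prints 0.8
```

Note that the M-P neuron is purely a fixed logic device; the Hebb rule is what adds the ability to change connection strengths with experience.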
In the early 1950s, the physiologists Hodgkin and Huxley, studying the migration of ions across the nerve cell membrane, established the famous Hodgkin-Huxley equations, which model the membrane with variable sodium and potassium conductances. The work of these pioneers inspired many scholars to enter this area and laid the foundation for neural computation.

2. The second phase runs from the late 1950s to the late 1960s. In 1958, F. Rosenblatt and his colleagues built the first pattern recognition device in history with the learning ability characteristic of neural networks, the Mark I Perceptron, which marks the entry of neural network research into its second stage. For the simplest perceptron, with no intermediate layer, Rosenblatt proved the convergence of the learning algorithm: by iteration the network can be made to perform the expected computation. Later, B. Widrow and others created a different type of neural network processing unit, the adaptive linear element (Adaline), and found a powerful learning rule for it; that rule is still in wide use today. Widrow also founded the first neurocomputer hardware company, which actually produced commercial neurocomputers and neural circuits in the mid-1960s. Besides Rosenblatt and Widrow, many others made great contributions to the structures and implementation ideas of neural computation. For example, K. Steinbuch studied a binary associative network structure called the learning matrix and its hardware implementation. N. Nilsson's book "Learning Machines", published in 1965, summarized the achievements of this period.

3. The third phase runs from the late 1960s to the early 1980s. Its starting mark is the publication of the book "Perceptrons" by M. Minsky and S. Papert in 1969.
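The iterative learning rule described above can be sketched as follows. This is a minimal illustration of the perceptron error-correction idea, not Rosenblatt's actual Mark I procedure; the training data and learning rate are chosen for the example.

```python
# Sketch of the perceptron learning rule: nudge the weights toward each
# misclassified example until the unit separates the classes.

def train_perceptron(samples, lr=1.0, epochs=20):
    """Train a single threshold unit on 2-input samples of ((x1, x2), target)."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b >= 0 else 0
            err = target - out          # -1, 0, or +1
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Logical AND is linearly separable, so the rule converges to a separating line.
and_data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(and_data)
for (x1, x2), target in and_data:
    assert (1 if w[0] * x1 + w[1] * x2 + b >= 0 else 0) == target
```

Rosenblatt's convergence proof guarantees that, for linearly separable data like this, the loop above reaches a correct set of weights in a finite number of updates.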
The book gave a mathematical analysis of single-layer neural networks and proved that their capabilities are limited: they cannot solve even a simple logical problem such as XOR ("exclusive or"). The authors also found that while many patterns cannot be learned by a single-layer network, multi-layer networks are in principle capable of representing them. Because M. Minsky carried enormous prestige in the field of artificial intelligence, the book poured a pot of cold water on neural network research just as it was gathering momentum. After "Perceptrons" appeared, US federal funding did not support neural network research for some 15 years, and the former Soviet Union also canceled several promising research programs. Even in this low tide, however, some researchers continued to work on neural networks, among them S. Grossberg of Boston University in the United States, T. Kohonen of the Helsinki University of Technology in Finland, and Shun-ichi Amari of the University of Tokyo in Japan. Their persistent work prepared the ground for the revival of neural network research.

4. The fourth phase began in the early 1980s. In 1982, J.J. Hopfield, a biophysicist at the California Institute of Technology, proposed a mutually interconnected (recurrent) neural network model, which was successfully applied to the NP-complete Travelling Salesman Problem (TSP). This breakthrough marked the entry of neural network research into its fourth, booming stage. After the Hopfield model was proposed, many researchers worked to extend it and bring it closer to the functional characteristics of the human brain. In 1983, T. Sejnowski and G. Hinton proposed the concept of "hidden units" and developed the Boltzmann machine. In Japan, Kunihiko Fukushima, building on Rosenblatt's perceptron by adding hidden-layer units, constructed a "cognitron" capable of associative learning.
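The XOR limitation, and the multi-layer way around it, can be made concrete. No single threshold unit can compute XOR, because its two output classes are not linearly separable, but two layers of the same units can. The weights below are hand-picked for illustration, not learned.

```python
# A hand-built two-layer threshold network that computes XOR, the function
# Minsky and Papert proved impossible for any single-layer perceptron.

def step(total):
    """Hard threshold activation used by perceptron-style units."""
    return 1 if total >= 0 else 0

def xor_two_layer(x1, x2):
    """Hidden unit h1 computes OR, h2 computes NAND; their AND is XOR."""
    h1 = step(x1 + x2 - 1)       # OR:   fires unless both inputs are 0
    h2 = step(-x1 - x2 + 1.5)    # NAND: fires unless both inputs are 1
    return step(h1 + h2 - 2)     # AND of the two hidden units

for a in (0, 1):
    for b in (0, 1):
        assert xor_two_layer(a, b) == (a ^ b)
```

The catch in 1969 was that no learning algorithm was then known for training the hidden layer; such weights had to be designed by hand, which is part of why the book's critique was so damaging.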
Kohonen constructed a neural network of 3,000 threshold units and used it to realize associative learning on a two-dimensional network.
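The associative recall running through this fourth phase can be sketched with a toy Hopfield-style network: a pattern is stored with a Hebbian (outer-product) rule, and the network then recovers it from a corrupted probe. The pattern and network size here are invented for the example and are far smaller than the systems described above.

```python
# Toy Hopfield-style associative memory over +/-1 units.

def train_hopfield(patterns):
    """Hebbian outer-product weights; no self-connections."""
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j]
    return w

def recall(w, state, steps=5):
    """Synchronously update all units until the state stops changing."""
    n = len(state)
    for _ in range(steps):
        new = [1 if sum(w[i][j] * state[j] for j in range(n)) >= 0 else -1
               for i in range(n)]
        if new == state:
            break
        state = new
    return state

stored = [1, 1, -1, -1, 1, -1]
w = train_hopfield([stored])
noisy = [1, -1, -1, -1, 1, -1]   # one unit flipped
assert recall(w, noisy) == stored
```

This content-addressable behavior, retrieving a whole memory from a damaged fragment without any storage address, is exactly the brain-like property contrasted with the von Neumann machine at the start of this article.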