Analysis in software

xiaoxiao, 2021-03-06

The discrete form of analysis is divide and conquer. The importance of this idea in software design is self-evident: a large system is decomposed into small systems, a small system into modules, a module into objects, an object into functions, and a function further into verbs and the collections or individuals they act upon (additions, deletions, modifications, and so on), recursively. Many of the "best practices" of software amount to checklists for this decomposition process: high cohesion, low coupling, and the like. But why are exactly these concepts emphasized? Who can guarantee the checklist is complete? How do you apply it in concrete practice? Can we step outside the circle of software and re-examine this idea in non-software terms?

After a large system is broken into many small ones, each piece is generally easier to handle simply because it is smaller. But is that all there is to analysis? There is a little story: someone asked a scientist, if human civilization were about to be destroyed and only one sentence could be passed on to future generations, what would he most want to tell them? The scientist replied: the universe is built from atoms. The philosophical foundation of analysis is reductionism, and atomism is arguably its most brilliant victory in thousands of years. Atomism also plainly reveals the secret of analysis: the endlessly varied things we see are only surface appearance; underneath, they are constructed from homogeneous primitives. As decomposition proceeds, the scale shrinks while the number of problems appears to grow, but then the problem space "collapses": the decomposed subproblems overlap heavily, and the overall complexity genuinely decreases. The FFT and dynamic-programming algorithms follow exactly this route from the heterogeneous to the homogeneous.
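As a small, concrete illustration of how overlapping subproblems make the problem space "collapse" (a sketch of my own, not from the original text): naive recursion on the Fibonacci numbers spawns an exponential number of subproblems, yet only `n` distinct ones exist, so caching them, as dynamic programming does, reduces the work to linear.

```java
import java.util.HashMap;
import java.util.Map;

public class Collapse {
    static long calls = 0;
    static final Map<Integer, Long> memo = new HashMap<>();

    // Decomposition produces a huge number of subproblems, but they overlap
    // heavily: fib(n) and fib(n-1) share almost all their sub-subproblems.
    // Caching each distinct subproblem once collapses the problem space.
    static long fib(int n) {
        calls++;
        if (n < 2) return n;
        Long cached = memo.get(n);
        if (cached != null) return cached;
        long value = fib(n - 1) + fib(n - 2);
        memo.put(n, value);
        return value;
    }

    public static void main(String[] args) {
        System.out.println(fib(40));   // 102334155
        System.out.println(calls);     // linear in n, not exponential
    }
}
```

Without the memo, `fib(40)` would require over a billion calls; with it, a few dozen.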
We want the coupling between the decomposed subsystems to be low, ideally with no relation between them at all; we want the cohesion within each subsystem to be high, ideally so that it cannot be divided further. In mathematics this is called orthogonality. Its most perfect home is the linear world: linear algebra (or, more broadly, group theory) tells us that a linear system is completely characterized by the basis formed by its orthogonal eigenvectors. Translated into software, a system should generally be built from a small number of reusable modules. Linear algebra also tells us that the choice of eigenvectors is not unique, and that the different choices are completely equivalent; likewise a software system can be decomposed in many different ways, most of which are hard to rank against one another. Linear algebra further says that the number of eigenvectors is determined solely by the dimension of the system (a measure of its complexity); correspondingly, a software system has a lower bound on its complexity, and an over-simple architecture can only support over-simple applications. What linear algebra does not say outright, but implies, is that the eigenvectors have equal status. So on top of high cohesion and low coupling, software decomposition has at least one more guideline: symmetry, to keep the overall structure of the system balanced.

Unfortunately, ever more nonlinear phenomena have been discovered in the real world, so much so that nonlinear research has become a discipline of its own. Yet the ancient teaching still holds: decomposition can help us recover the linearity of a system. In the limiting process depicted by calculus, an external force produces an acceleration, the acceleration in turn changes the velocity, and in this way the effects are separated out. (Some say that the success of renormalization in the microscopic world comes precisely from the failure of calculus in extremely entangled critical situations; perhaps there is some truth in that.)
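The linear-algebra analogy above can be stated concretely (standard textbook material, added here for illustration): for a symmetric linear operator $A$ with orthogonal eigenvectors $v_1,\dots,v_n$, any state decomposes into independent, non-interacting components, which is exactly the "low coupling" the paragraph describes.

```latex
% A v_i = \lambda_i v_i, \quad v_i \cdot v_j = 0 \;\; (i \neq j)
%
% The eigenvectors form a basis, so every state x splits into
% independent components, and A acts on each one separately:
x = \sum_{i=1}^{n} c_i v_i,
\qquad c_i = \frac{x \cdot v_i}{v_i \cdot v_i},
\qquad A x = \sum_{i=1}^{n} \lambda_i c_i v_i .
```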

To carry out analysis in software we need some technical means. First we need a naming mechanism that lets us pin down the concepts in our minds and begin modeling; the so-called object is exactly such a mechanism. Its place can be understood through the following hierarchy:

1. A high-level language assigns types to data, so that different blocks of memory can be given different data types and thereby distinguished conceptually.
2. As programs grow more complex, the struct provided by C lets us create a new data type: a group of related data is put together and given a name. Without structs, that correlation cannot be expressed directly in the program; it has to be recorded in documentation or carried in the programmer's head.
3. The object is a naming mechanism more powerful than the struct: it puts a group of related data and functions together and gives them a name. Moreover, through encapsulation and virtual functions, an object type represents not only the concept itself but also the characteristics of its derived classes; that is, an object expresses a family of concepts rather than a single one.
4. In still more complex programs, the interactions between objects exhibit recognizable regularities, and design patterns appear.
5. What is the next step up in this hierarchy?

There is nothing mysterious about objectification; it simply gives us a tool for expression. Sometimes objects even get in the way, because we are quite likely to name things wrongly.
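Step 3 above can be sketched in a few lines of Java (the names `Shape` and `Circle` are my own illustrative choices): one name covers both related data and related behavior, and through overriding it also stands for the whole family of its subtypes.

```java
// An object type names related data and behavior together, and via
// virtual dispatch it represents a family of concepts, not just one.
class Shape {
    double x, y;                          // related data, grouped under one name
    String describe() { return "shape at (" + x + "," + y + ")"; }
}

class Circle extends Shape {
    double radius;
    @Override
    String describe() { return "circle of radius " + radius; }
}

public class Naming {
    public static void main(String[] args) {
        Circle c = new Circle();
        c.radius = 2.0;
        Shape s = c;                      // "Shape" names the whole family
        System.out.println(s.describe()); // dispatches to Circle's behavior
    }
}
```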

In the days before objects, we could not name groups of data and functions, and some concepts could not be expressed naturally in a software design because they had no name in the world of the program! Once every concept in the system can be named, a door opens, and a large number of possibilities are discovered; together they form today's object-oriented technology. The most important among them are the techniques of orthogonal decomposition in software. The first is inheritance. Early C programs often contained code like this:

```c
if (a)      a_work_1();
else if (b) b_work_1();
/* ... */
if (a)      a_work_2();
else if (b) b_work_2();
```

Through inheritance we can capture the correlation between these fragments and rewrite the code as:

```
x = a or b;
x.work_1();
...
x.work_2();
```
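The same refactoring, fleshed out as a minimal runnable Java sketch (the names `Worker`, `A`, `B` are hypothetical stand-ins for the pseudocode above): the type-dispatching if/else chains disappear, replaced by a single virtual call.

```java
// Replacing type-dispatch if/else chains with polymorphic dispatch.
abstract class Worker {
    abstract String work1();
    abstract String work2();
}

class A extends Worker {
    String work1() { return "A.work1"; }
    String work2() { return "A.work2"; }
}

class B extends Worker {
    String work1() { return "B.work1"; }
    String work2() { return "B.work2"; }
}

public class Dispatch {
    public static void main(String[] args) {
        Worker x = new A();              // x = a or b
        System.out.println(x.work1());   // no if/else chain needed
        System.out.println(x.work2());
    }
}
```

Each new variant now only requires a new subclass; the call sites stay untouched.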

However, inheritance, the dominant object-oriented technique of the early days, was soon overloaded as a concept. Through inheritance, all the relationships in a system are organized into a tree. As the tree grows deeper, the whole structure becomes ever more unstable: a small change in a base class can trigger an avalanche of effects. And because an object can only be reused as a whole, objects become increasingly difficult to reuse.

At this point the interface was born. In simple terms, an interface can be seen as an orthogonal decomposition of an object. With inheritance we would write:

```java
class CHuman {
    public void eat()   { /* humans eat */ }
    public void sleep() { /* humans sleep */ }
}

class CManager extends CHuman {
    public void fireEmployee() { /* a manager fires an employee */ }
}

class CEmployee extends CHuman { /* ... */ }
```

Public inheritance generally corresponds to an "is a" relationship, a subsumption relationship, called a partial order in mathematics. The base class is logically implied in any reasoning about the derived class: knowing that someone is a manager (CManager), we can infer that he is a human, and therefore that he can eat. Unfortunately, this subtle information leak may not be what we want; after all, the board of directors hires a professional manager to manage, not to eat. Applying component technology, we write instead:

```java
interface IHuman {
    boolean eat();
    boolean sleep();
}

interface IManager {
    boolean fireEmployee();
}

class Manager implements IHuman, IManager { /* ... */ }
// Manager = IHuman + IManager
```

The interface breaks the rigid tree structure that inheritance builds and promotes a flexible mesh structure instead, flattening the whole system and making the granularity of decomposition finer. Now that we have interfaces, should we forget inheritance? No: the subsumption relationship is still important; we just should not abuse it.
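To see why the finer granularity helps, the fragment above can be fleshed out so that a client depends on just one facet of `Manager` (the method bodies and the `Board` client are my own illustrative additions): the board sees only `IManager`, and the "he can eat" information no longer leaks.

```java
interface IHuman   { String eat(); }
interface IManager { String fireEmployee(); }

class Manager implements IHuman, IManager {
    public String eat()          { return "eating"; }
    public String fireEmployee() { return "fired"; }
}

public class Board {
    // The board depends only on the IManager facet; IHuman stays hidden.
    public static String review(IManager m) {
        return m.fireEmployee();
    }

    public static void main(String[] args) {
        System.out.println(review(new Manager()));
    }
}
```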

In recent years, aspect-oriented programming (AOP) has gradually emerged. Viewed as a decomposition technique, it represents a new direction: the orthogonal decomposition of adjectives and verbs. For example, suppose we need to save data within a transaction. Implementing the save itself is easy to write, while the modifier "within a transaction" is abstracted into an aspect and implemented separately. With AOP, the aspect and the action are then woven together to produce the required feature.
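A minimal sketch of this weaving using a JDK dynamic proxy (the `Repository`/`TransactionAspect` names and the string-based "transaction" are hypothetical; real AOP frameworks such as AspectJ or Spring AOP do the weaving for you): the save logic contains no transaction code, and the aspect wraps every call in begin/commit.

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

// The plain "verb": saving data, with no transaction logic inside it.
interface Repository {
    String save(String record);
}

class SimpleRepository implements Repository {
    public String save(String record) { return "saved:" + record; }
}

// The "adjective" as an aspect: wraps every call in begin/commit/rollback.
class TransactionAspect implements InvocationHandler {
    private final Object target;
    final StringBuilder log = new StringBuilder();

    TransactionAspect(Object target) { this.target = target; }

    public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
        log.append("begin;");
        try {
            Object result = method.invoke(target, args);
            log.append("commit;");
            return result;
        } catch (Throwable t) {
            log.append("rollback;");
            throw t;
        }
    }
}

public class AopSketch {
    // Weave the aspect and the verb together behind the same interface.
    static Repository withTransaction(TransactionAspect aspect) {
        return (Repository) Proxy.newProxyInstance(
                Repository.class.getClassLoader(),
                new Class<?>[] { Repository.class },
                aspect);
    }

    public static void main(String[] args) {
        TransactionAspect aspect = new TransactionAspect(new SimpleRepository());
        Repository repo = withTransaction(aspect);
        System.out.println(repo.save("x"));  // saved:x
        System.out.println(aspect.log);      // begin;commit;
    }
}
```

The design point is that neither side knows about the other: `SimpleRepository` can be tested without transactions, and `TransactionAspect` can wrap any interface.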

Finally, a word about the concept of reusability. Software design is a series of model mappings from the problem domain down to the technical implementation, and at every level there are many orthogonal ways to decompose. Since the purpose of building software is to satisfy requirements, the whole mapping process should lean toward the application layer. There is a saying, "Object Oriented to User", which I heard from Chen Yuxi of Catai; I take it, too, as a guideline for choosing among the many possible decompositions. That an object is reusable means it is more likely to be one of the "feature primitives" from which the system is built, while its usability implies that it expresses meaning for users at the application layer. So reusability is a more essential concept than objectification or encapsulation.

Please credit the original article when reposting: https://www.9cbs.com/read-42232.html
