Chapter 1: Introducing the CLI Component Model
Programmers in the twenty-first century face many challenges.
First, good software is more complicated than it used to be. A simple terminal-based command prompt or a character-mode interface is no longer acceptable; today's users expect rich graphical interfaces with a wide range of sophisticated features. The data that software manipulates rarely has a structure that fits neatly into ordinary files on the local system; users have come to depend on the query and reporting capabilities their software provides, and satisfying those needs usually requires a relational database. Constantly changing business conditions demand frequent modification of long-lived data, which also argues for a relational database. Applications once ran comfortably on standalone machines, sharing data through files or the clipboard; now most of the computers on the planet are connected to a network, and software deployed on them must not only be network-aware but must also adapt to a changing network environment. In short, software development is no longer a task that a single technical wizard can complete alone; it has become a collective effort built on a fairly complex underlying infrastructure.
Today's programmers cannot afford the luxury of building a complete software system from the ground up with tools that sit close to the processor, such as an assembler or a C compiler. Very few have the time or the patience to write the frameworks of the middle layers; even seemingly simple jobs, such as implementing the HTTP protocol or writing an XML parser, are rarely undertaken from scratch, and fewer still have the skills needed to tune the underlying infrastructure for system-level performance and quality. The emphasis in software development has shifted to reusable code and reusable components. An operating system plus a handful of development libraries can no longer satisfy the demands of software development. So, like it or not, today's programmer must rely on code from many different sources to support an application, and all of that code must work together correctly and reliably.
A new development approach has emerged to meet these trends: component-based software development, in which an application is created by combining multiple independently built code modules. By assembling components from many sources, applications can be created quickly and efficiently. This technique, however, places new demands on programming tools and on the development process itself. For example, because components may come from developers who are untrusted or unknown, strict control over program execution and verification of code at runtime become basic requirements. In our networked era, complex component-based software is often updated dynamically, without any action on the client's part, and sometimes those updates are malicious. Ask a virus victim how she keeps her machine and data clean, or talk to an inexperienced computer user about the mysterious instability that creeps in after installing and uninstalling applications, and you will find that component-based software causes nearly as many problems as it delivers benefits.
For years, the commercial hype around component-based software and the efficiencies it promised were offset by the difficulty of combining components from different sources. Over the last decade, however, virtual execution environments that host managed components have proven to be a commercial success. Managed components are simply software components that can be developed and deployed independently, yet coexist safely within a single application. They are called "managed" because they require a virtual execution environment to provide loading and execution services at runtime. To serve the needs of components, these environments concentrate on providing a model that guarantees secure cooperation and collaboration among components, rather than simply exposing the physical resources of the underlying processor and operating system.
As described in Figure 1-1, virtual execution environments and managed components offer significant benefits to three distinct groups: programmers, the developers of programming tools and libraries, and the administrators of software that runs within the virtual execution environment. For programmers who use managed components to build complex applications, the availability of development tools and libraries reduces the time spent integrating and managing components, and so improves productivity. For tool builders, such as compiler writers, a supporting framework and a clearly and carefully specified virtual machine reduce the time spent building infrastructure and solving interoperability problems, leaving more time for the tools themselves. Finally, administrators and computer users benefit from a single runtime infrastructure and a single packaging model, and therefore gain better control over their computers, all in a way that is independent of any particular processor or operating system.
Figure 1-1: When hosted in a virtual execution environment, components can work together safely.
The CLI Virtual Execution Environment
The ECMA Common Language Infrastructure (CLI) is a standard specification for a virtual execution environment. It describes a data-driven architecture in which blocks of language-agnostic data are brought to life as self-assembling, type-safe software systems. The data that drives this process, called metadata, is produced by development tools and describes both the behavior of the software and its characteristics in memory at runtime. The CLI execution engine uses this metadata to load and combine components from many different sources. Each CLI component lives under strict control and supervision, yet components can interact with one another and can directly access the resources they need to share. The CLI is a model that balances control with flexibility.
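To make the idea of metadata concrete, here is a minimal C# sketch (the Greeter type is invented for illustration) that uses the reflection API to read, at runtime, the metadata describing a type:

```csharp
using System;
using System.Reflection;

// A small, invented component whose structure is described entirely by metadata.
public class Greeter
{
    private string name = "world";

    public string Name
    {
        get { return name; }
        set { name = value; }
    }

    public string SayHello()
    {
        return "Hello, " + name + "!";
    }
}

public class MetadataDemo
{
    public static void Main()
    {
        // The execution engine and tools can discover everything about Greeter
        // from its metadata, without any prior knowledge of the type.
        Type t = typeof(Greeter);
        Console.WriteLine("Type: " + t.FullName + " in assembly " + t.Assembly.GetName().Name);
        foreach (MemberInfo m in t.GetMembers())
        {
            Console.WriteLine("  " + m.MemberType + ": " + m.Name);
        }
    }
}
```

The same metadata that this program enumerates is what the execution engine itself consults when it loads, verifies, and compiles the type.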
ECMA, the European Computer Manufacturers Association, is a standards body with a long history. Besides publishing its own standards, ECMA has a close relationship with ISO (the International Organization for Standardization). Through that relationship, the CLI standard was adopted as ISO/IEC 23271:2003, accompanied by a technical report published as ISO/IEC 23272:2003. The C# standard was adopted the same way and became ISO/IEC 23270:2003.
The CLI standard can be found at the web site mentioned in this book, and it is also included on the book's companion disc. It consists of five partitions, plus documentation for its accompanying libraries. When the CLI was standardized, a programming language called C# was standardized alongside it as part of the same effort. C# exercises most of the features of the CLI, and it is an approachable object-oriented language, so we have chosen it for the vast majority of the example programs in this book. Formally, the C# and CLI standards are independent (although the C# standard does refer to the CLI standard), but in practice the two are thoroughly intertwined, and many people regard C# as the canonical language for building CLI components.
Virtual execution in the CLI takes place under the control of its execution engine, which hosts components by interpreting the metadata that describes them at runtime; the same is true even for code that was not written in a component-based style. Code built this way is usually referred to as managed code, and it is produced by tools and programming languages that emit CLI-compatible executables. A detailed chain of events loads metadata from packaging units called assemblies and converts it into executable code suitable for the underlying processor and operating system. Figure 1-2 shows a simplified version of this chain of events, which will serve as the backbone for the rest of this book. The first partition of the CLI standard also describes this chain in detail (Partition I of the standard describes the Common Type System and the Virtual Execution System, both of which make very good background reading).
Figure 1-2: Each step in the CLI loading sequence is driven by metadata annotations produced by the previous stage.
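As a rough illustration of the loading stage of this chain, the following sketch (the assembly and type names are placeholders, not real libraries) asks the execution engine to resolve an assembly by name, bind to a type symbolically, and only then create an instance:

```csharp
using System;
using System.Reflection;

public class LoaderDemo
{
    public static void Main()
    {
        // Ask the execution engine to locate and load an assembly by name.
        // "MyComponents" is hypothetical; binding is driven by the metadata
        // stored inside the assembly itself.
        Assembly asm = Assembly.Load("MyComponents");

        // Resolve a type symbolically, by name, rather than by a precomputed address.
        Type widgetType = asm.GetType("MyComponents.Widget");

        // Only now is the type laid out in memory and its code prepared
        // for the underlying processor and operating system.
        object widget = Activator.CreateInstance(widgetType);
        Console.WriteLine("Loaded: " + widget.GetType().FullName);
    }
}
```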
In some respects, the CLI execution engine is very much like an operating system, because it too is privileged code that provides services (such as loading, isolation, and scheduling) and manages resources (such as memory and I/O). In both the CLI and an operating system, services can be requested explicitly by a program, or they can be ambient, supplied as part of the execution model's environment. (Ambient services are services that run continuously within the execution environment; they are important because they define much of a system's runtime computational model.)
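A minimal sketch of this distinction: garbage collection is an ambient service that normally operates without being asked, but the same service can also be requested explicitly through the runtime's API:

```csharp
using System;

public class ServicesDemo
{
    public static void Main()
    {
        // Ambient service: memory for this object is managed automatically;
        // the garbage collector will reclaim it without any explicit request.
        object scratch = new byte[1024 * 1024];
        scratch = null;

        // Explicit request: the program asks the runtime for the same service directly.
        GC.Collect();
        GC.WaitForPendingFinalizers();
        Console.WriteLine("Heap bytes after collection: " + GC.GetTotalMemory(false));
    }
}
```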
In other respects, the CLI resembles a traditional toolchain of compiler, linker, and loader, because it also handles memory layout, compilation, and symbol resolution. The CLI standard goes to great lengths not only to describe how managed software works, but also to describe how unmanaged software can coexist safely with managed software, so that computing resources and responsibilities can be shared seamlessly. This combination of execution system and tool infrastructure makes the CLI a uniquely powerful technology for building component-based software.
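One concrete way in which managed and unmanaged code coexist is platform invoke. The following sketch assumes a Windows machine with the standard C runtime (msvcrt.dll) available; it calls an unmanaged function directly from managed code, with the execution engine marshaling the arguments:

```csharp
using System;
using System.Runtime.InteropServices;

public class InteropDemo
{
    // Declare an unmanaged entry point; metadata tells the execution engine
    // how to marshal arguments between the managed and unmanaged worlds.
    [DllImport("msvcrt.dll", CharSet = CharSet.Ansi, CallingConvention = CallingConvention.Cdecl)]
    private static extern int puts(string message);

    public static void Main()
    {
        puts("Hello from unmanaged code!");
    }
}
```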
Fundamental Concepts of the CLI Standard
A set of core ideas lies behind the CLI standard and its execution model. These ideas are woven into the CLI's design in the form of both abstractions and concrete techniques that help developers organize their code. Let's consider them as a series of design principles:
- Expose every programmable item through a unified type system (illustrated in the sketch after this list).
- Package types into fully self-describing, portable units.
- Load types at runtime in a way that isolates them from one another, yet lets them share resources.
- Resolve types at runtime with a flexible binding mechanism that can take versions, culture-specific settings (such as calendars or character encodings), and administrative policy into account.
- Represent type behavior in a way that can be verified as type-safe, but do not require that all programs be type-safe.
- Perform processor-specific tasks, such as memory layout and compilation, as late as possible, but accommodate tools that choose to perform them earlier.
- Execute code under the control of a privileged execution engine that is responsible for enforcing runtime policy.
- Design runtime services to be driven by an extensible metadata format, so that they adapt easily to new conditions and future changes.
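To illustrate the first of these principles, here is a small sketch showing how even a primitive integer participates in the unified type system rooted at System.Object:

```csharp
using System;

public class TypeSystemDemo
{
    public static void Main()
    {
        int i = 42;

        // A value type can be treated as an object; the runtime boxes it,
        // so one unified type system describes every programmable item.
        object boxed = i;

        Console.WriteLine(boxed.GetType().FullName);           // System.Int32
        Console.WriteLine(boxed.GetType().BaseType.FullName);   // System.ValueType
        Console.WriteLine(typeof(object).FullName);             // System.Object
    }
}
```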
We will touch on some of the most important of these concepts here, and return to examine each of them in detail throughout the rest of the book.
Types
The CLI divides the world into types, and programmers use types to organize the structure and behavior of the code they write. The component model's description of a type is quite simple: a type describes fields and properties that hold data, along with methods and events that define its behavior (all of which are discussed in detail in Chapter 3). State and behavior can exist at the instance level, where every instance shares the same structure but carries its own data, or at the type level, where a single copy of the data or the method-dispatch information is shared by all instances within an isolation boundary. Finally, the component model supports the standard object-oriented constructs, such as inheritance, interface-based polymorphism, and constructors.

The structure of a type is represented by metadata, which is useful to the execution engine, to programmers, and to other types. Metadata matters because it allows types built by different people, in different places, and on different platforms to coexist while remaining independent. By default, the CLI loads a type only when it is needed; linkages are evaluated, addresses determined, and code compiled only on demand. All references from one type to another are symbolic, which means they take the form of names that are resolved at runtime, rather than precomputed addresses and offsets. With symbolic references, a sophisticated versioning mechanism can be built, and each type can be versioned independently within the binding logic of the execution engine.
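The following sketch (the Counter type is invented for illustration) shows instance-level state, which varies from object to object, alongside type-level state, which is shared by every instance:

```csharp
using System;

public class Counter
{
    // Type-level (static) state: a single copy shared by every instance.
    private static int totalCreated = 0;

    // Instance-level state: each Counter carries its own value.
    private int count = 0;

    public Counter()
    {
        totalCreated++;
    }

    public void Increment()
    {
        count++;
    }

    public int Count
    {
        get { return count; }
    }

    public static int TotalCreated
    {
        get { return totalCreated; }
    }
}

public class CounterDemo
{
    public static void Main()
    {
        Counter a = new Counter();
        Counter b = new Counter();
        a.Increment();
        // Prints "1 0 2": a and b have separate counts, but share the total.
        Console.WriteLine(a.Count + " " + b.Count + " " + Counter.TotalCreated);
    }
}
```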
Using classic object-oriented single inheritance, a type can inherit both structure and behavior from another type. The definition of a derived type includes all of the methods and fields of its base type, and an instance of the derived type can be substituted wherever an instance of its base type is expected. Although a type may have only one base type, it may implement any number of interfaces. All types derive, directly or indirectly through their parent classes, from the root type System.Object.
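A brief sketch of these mechanics (all type names are invented): a derived type substitutes for its base type, a type may implement any number of interfaces, and every type ultimately derives from System.Object:

```csharp
using System;

public interface IShippable
{
    double Weight { get; }
}

public class Product
{
    public virtual string Describe() { return "a product"; }
}

// Single inheritance from Product, plus any number of interfaces.
public class Book : Product, IShippable
{
    public override string Describe() { return "a book"; }
    public double Weight { get { return 0.5; } }
}

public class InheritanceDemo
{
    public static void Main()
    {
        Product p = new Book();          // a Book can stand in for a Product
        IShippable s = (IShippable)p;    // and can be used through its interface
        Console.WriteLine(p.Describe() + ", " + s.Weight + " kg");

        // Every type inherits, directly or indirectly, from System.Object.
        Console.WriteLine(typeof(Product).BaseType);   // System.Object
    }
}
```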
The CLI component model exposes two higher-level constructs to programmers, properties and events, which extend the notions of fields and methods. Properties allow a type to expose data whose values are retrieved and set through code, rather than through direct memory access. From the compiler's point of view, properties are little more than syntactic sugar, since internally they are represented as pairs of methods; but from a semantic point of view, properties are first-class elements of a type's metadata, which leads to more consistent APIs and better development tools.
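A small sketch of a property in use (the Thermostat type is invented): the accessors give the type a chance to validate values, while callers still use simple assignment syntax:

```csharp
using System;

public class Thermostat
{
    private int temperature;

    // A property exposes data through accessor methods rather than direct
    // field access; the compiler emits get_Temperature and set_Temperature
    // along with metadata marking them as a property.
    public int Temperature
    {
        get { return temperature; }
        set
        {
            if (value < -40 || value > 60)
                throw new ArgumentOutOfRangeException("value");
            temperature = value;
        }
    }
}

public class PropertyDemo
{
    public static void Main()
    {
        Thermostat t = new Thermostat();
        t.Temperature = 21;                 // compiles to a call to set_Temperature(21)
        Console.WriteLine(t.Temperature);   // compiles to a call to get_Temperature()
    }
}
```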
Types Use events to notify the external observer to interested in the execution (for example, the notification data is available, or the internal state changes). In order to enable external observers to register the events they are interested in, the CLI commission will execute the information package necessary for a callback. When registering an event callback, programmers can create one of two delegates: one is a static delegation of a pointer to a static method of the pointing type; the other is a reference to the object and a method (object will An instance of the reinforcement is linked to this method. The delegate generally passes the event registration method in the form of parameters; when the type needs to trigger an event, it is just a callback to the delegate registered on the type.
Types use fields to store data and methods to represent behavior; reduced to its essentials, this is the hierarchical way of organizing programming modules described above. On top of this simple foundation, interfaces, properties, events, and other constructs provide additional structure, and it is from these structures that shared libraries and the runtime services that distinguish the CLI are built.
COM and CLI
Software designers have long believed that standardized component packaging and runtime interoperability are fundamental, in much the way that the punched cards of an earlier era served as reusable libraries of computing functions. Microsoft developed the Component Object Model (COM) to achieve both goals: uniform packaging and fine-grained interoperability. The result, a binary component-packaging approach built around "interfaces," was used successfully to deploy APIs and to modularize sprawling code bases. Unlike the CLI, COM is a component model based almost entirely on shared conventions rather than on a shared execution engine; each COM component carries its own runtime plumbing and cooperates with other components by following the rules. This approach can be very effective, particularly in environments where computing resources are tight and programmers must squeeze performance out of every byte, or where a huge existing code base needs to expose component interfaces. Once COM's shared rules were adopted, fine-grained binary interoperability between components became commonplace in software running on the Windows operating system. COM was widely successful, both as a way for applications to expose their internals and as a standard way to publish APIs for programmability. Some of Windows' own system functionality is exposed through COM interfaces, and many third-party vendors sell their software in the form of reusable COM components.
The COM approach, however, has a significant downside. In this model, developers are responsible for every detail of the runtime conventions and must take great care to follow complex interoperability protocols if their components are to work together correctly. Because those protocols are hard to get right, the resulting code is not only complicated but also prone to bugs.
Much of COM's complexity could be eliminated by providing shared underlying services for component developers, just as an operating system provides shared services for all programs that use the computer's resources. (Garbage-collected memory management, for example, is one such service that fundamentally simplifies cooperation between components.) In 1997, a proposal was made to build such a set of services for COM: it would give COM programmers a class model as well as common runtime services, improving productivity (programmers would no longer need to rebuild the same supporting machinery over and over) while also delivering better security, efficiency, and stability. The initial name for this runtime was the Component Object Runtime (COR), a name that can still be found in some of the function names of the Shared Source CLI.
Microsoft's original goal for COR was simply to provide a runtime for COM, but Microsoft chose not to stop there and decided instead to build a general-purpose virtual execution environment. That effort ultimately became the CLI standard.