Release Date: 4/1/2004 | Update Date: 4/1/2004
Summary: This article provides an overview of the basic features of the Microsoft .NET Framework security system, including its support for dynamically downloaded and even remotely executed code, which can be confined to run within tight, administrator-defined security constraints.
Dr. Damien Watkins, Project 42
Sebastian Lange, Microsoft Corporation
January 2002
On this page

Introduction
Execution Overview
Verification
Code Access Security
Permissions
Evidence
Security Policy
Stack Walks
Declarative and Imperative Security
Recommendations
Role-Based Security
Summary
Acknowledgements
Appendix A: Mscorcfg.msc
Introduction
In the traditional model of program development, administrators typically install software to a fixed location on a local disk. As development moves toward dynamically downloaded and executed code, and even remote execution, security becomes one of the most important elements to consider. To support this model, the Microsoft .NET Framework provides a strong security system that can confine code to run within tight, administrator-defined security contexts. This article examines some of the fundamental security features of the .NET Framework.
Many security models associate security with users and their groups (or roles). This means that code running on behalf of those users is either allowed or not allowed to operate on protected resources. Security in most operating systems is built on this model. The .NET Framework provides a developer-definable security model of this kind, called role-based security, which operates along similar lines. The central abstractions of role-based security are principals and identities.
In addition, the .NET Framework also provides security on code itself, called code access security (also known as evidence-based security). With code access security, a user may be trusted to access a resource, but if the code the user executes is not trusted, access to the resource will be denied. Security based on the code, rather than on specific users, is the fundamental tool that makes it possible to secure mobile code. Mobile code may be downloaded and executed by any number of users, none of whom are known at development time. Code access security centers on a few core abstractions: evidence, policy, and permissions.
Both role-based security and code access security are exposed as types in the .NET Framework class library and are user-extensible. Two notable challenges were exposing the types of the security models to a variety of programming languages in a consistent and coherent manner, and protecting the resources exposed by the .NET Framework class library while still making them usable.
The .NET Framework security system operates on top of traditional operating system security, adding a layer of more expressive and more extensible security over what the operating system provides. The two levels of security complement each other. (An operating system security system can also delegate some responsibilities to the common language runtime security system for managed code, since the runtime security system is finer-grained and more configurable than traditional operating system security.)
This article provides an overview of the .NET Framework security system, describing in particular role-based security, verification, code access security, and stack walks, and uses some small programming examples to illustrate the concepts. It does not discuss other runtime security features such as cryptography and isolated storage.
Incidentally, this article generally describes the default behavior of these security features. The .NET Framework security system, however, is highly configurable and extremely extensible. This is one of the system's major strengths, but unfortunately it cannot be covered in detail in this conceptual article.
Execution Overview
The runtime executes both managed and unmanaged code. Managed code executes under the control of the runtime and therefore has access to services provided by the runtime, such as memory management, just-in-time (JIT) compilation and, most importantly for this article, security services such as the security policy system and verification. Unmanaged code is code compiled to run on a specific hardware platform and cannot directly use these services.
When a language compiler generates managed code, however, the output of the compiler is expressed in Microsoft Intermediate Language (MSIL). MSIL is often described as an object-oriented pseudo-assembly language for an abstract, stack-based machine. It is object-oriented because it has instructions that support object-oriented concepts, such as object allocation (newobj) and virtual function calls (callvirt). It is an abstract machine because MSIL does not depend on any specific platform; that is, it makes no assumptions about the hardware on which it will run. It is stack-based because, in essence, MSIL operates by pushing and popping values on a stack and by calling methods. MSIL is normally compiled to native code just before execution. (MSIL can also be compiled to native code before the code is run, which helps shorten the startup time of an assembly, but MSIL is usually JIT-compiled at the method level.)
Verification
Two forms of verification take place in the runtime: MSIL verification and assembly metadata validation. All types in the runtime specify the contracts they will implement, and this information is persisted as metadata alongside the MSIL in the managed PE/COFF file. For example, when a type specifies that it inherits from another class or interface, and therefore will implement a number of methods, this is a contract. A contract can also relate to visibility; for example, a type may or may not be declared as public (exported) from its assembly. Type safety is a property of code whereby the code accesses types only in accordance with their contracts. MSIL can be verified to prove that it is type safe. Verification is a fundamental building block in the .NET Framework security system, and it is currently performed only on managed code. Because unmanaged code cannot be verified by the runtime, unmanaged code executed by the runtime must be fully trusted.
To understand MSIL verification, it is essential to understand how MSIL is classified. MSIL falls into the following categories: invalid MSIL, valid MSIL, type-safe MSIL, and verifiable MSIL.
Note   The following definitions are more informal than the standard ones. For more precise definitions, see other documents, such as the ECMA standards:
• Invalid MSIL is MSIL for which the JIT compiler cannot produce a native representation. For example, MSIL containing an invalid operation code cannot be translated into native code. Another example is a jump instruction whose target is the address of an operand rather than an opcode.
• Valid MSIL can be considered all MSIL that satisfies the MSIL grammar and therefore can be represented in native code. This classification includes MSIL that uses non-type-safe forms of pointer arithmetic to gain access to members of a type.
• Type-safe MSIL interacts with types only through their publicly exposed contracts. MSIL that attempts to access a private member of one type from another type is not type safe.
• Verifiable MSIL is MSIL that can be proven type safe by a verification algorithm. The verification algorithm is conservative, so some type-safe MSIL may not pass verification. Verifiable MSIL is, of course, also type safe, valid, and not invalid.
A short C# sketch below illustrates the difference between code that is merely valid and code that is also verifiable.
In addition to type-safety checks, the MSIL verification algorithm in the runtime also checks for stack overflow and underflow, correct use of the exception-handling facilities, and proper object initialization.
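The following is a minimal C# sketch (not from the original article) that makes the classification above concrete. Both methods compile to valid MSIL, but the first uses raw pointer arithmetic, which cannot be proven type safe, so it is unverifiable; compiling it requires the C# /unsafe compiler option, and running it requires the right to skip verification.

class VerificationDemo
{
   // Valid MSIL, but not verifiable: raw pointer arithmetic cannot be
   // proven type safe, so the containing assembly must be trusted to
   // skip verification before this method can run.
   static unsafe int ReadFourBytesIn(byte* buffer)
   {
      return *(buffer + 4);
   }

   // Verifiable MSIL: array access is bounds-checked and uses only the
   // publicly exposed contract of the array type.
   static int Sum(int[] values)
   {
      int total = 0;
      foreach (int v in values)
         total += v;
      return total;
   }
}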
For code loaded from disk, verification is part of JIT compilation and occurs intermittently within it; verification and JIT compilation are not performed as two separate passes. If unverifiable MSIL is found in an assembly, the security system checks whether the assembly is trusted enough to skip verification. For example, under the default settings of the security policy this may be the case if the assembly is loaded from the local hard disk. If the assembly is trusted to skip verification, the MSIL is translated into native code. If the assembly is not sufficiently trusted to skip verification, the offending MSIL is replaced with a stub that throws an exception if that execution path is ever taken. A common question is: "Why not check whether the assembly is allowed to skip verification before verifying it?" Because verification is normally performed as part of JIT compilation, it is usually faster to verify the code than to check whether the assembly is allowed to skip verification. (The decision to skip verification is made more intelligently than described here; for example, some results of previous verification attempts can be cached to provide a fast lookup.)
In addition to MSIL verification, assembly metadata is also validated. In fact, type safety depends on these metadata checks, because MSIL verification assumes that the metadata tokens it uses are correct. Assembly metadata is validated either when an assembly is loaded into the Global Assembly Cache (GAC) or the download cache, or, if it is not inserted into the GAC, when it is read from disk. (The GAC is a central store for assemblies used by several programs; the download cache holds assemblies downloaded from other locations, such as the Internet.) Metadata validation includes checking metadata tokens and eliminating buffer overruns: the former ensures that tokens index correctly into the tables they refer to, and the latter ensures, for example, that indexes into the string table do not point at strings longer than the buffers that should hold them. The elimination of non-type-safe code through MSIL verification and metadata validation is the first part of the runtime's security story.
Code Access Security
In essence, code access security assigns permissions to assemblies based on evidence. When deciding which resources code should be allowed to access, code access security uses the location from which the executable code was obtained, together with other information about the identity of the code, as primary factors. This information about an assembly's identity is called evidence. Whenever an assembly is loaded into the runtime for execution, the hosting environment attaches evidence to it. The code access security system in the runtime is responsible for mapping that evidence to a set of permissions, which determine what access the code has to protected resources such as the registry or the file system. This mapping is driven by administrable security policy.
For most managed-code applications, the default code access security policy is both secure and sufficient. Code executing on the local computer that comes from less-than-fully-trusted or untrusted locations (such as the Internet or the local intranet) is strictly limited in what it can do. The default code access security policy therefore represents a workable approach to security: resources are secure by default, and an administrator must take explicit action to make the system less secure.
Why is another security scheme needed? In contrast to security centered on user identity, code access security is centered on the identity of code. This allows code to run at any number of trust levels within a single user context. For example, even if the operating system user context allows full access to all system resources, code downloaded from the Internet can still be confined to a tightly defined security boundary.
Now let's look at the principal inputs and outputs of the code access security system: evidence and permissions.
Permissions
Permissions represent authorization to perform an operation. Typically, these operations involve access to a particular resource, such as files, the registry, the network, the user interface, or the execution environment. An example of a permission that does not involve an actual resource is the ability to skip verification.
Note   The System.Security.Permissions.SecurityPermission class contains a flag that determines whether the recipient of an instance of that permission is allowed to skip verification. The SecurityPermission class contains other similar permissions that cover core runtime technologies; those technologies are protected by demanding that callers have been granted a SecurityPermission instance with the appropriate flags set.
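As a minimal sketch (not from the original article), the SkipVerification flag just mentioned can be constructed and demanded imperatively like this:

using System.Security.Permissions;

class SkipVerificationDemo
{
   // Demands the right to skip verification; the call throws a
   // SecurityException if any caller's assembly has not been granted it.
   static void RequireSkipVerification()
   {
      SecurityPermission p =
         new SecurityPermission(SecurityPermissionFlag.SkipVerification);
      p.Demand();
   }
}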
The basic abstraction for permissions is the IPermission interface, which requires a particular permission type to implement a set of standard permission operations, such as forming the union or intersection of, or checking subset relationships with, other instances of the same permission type.
Permissions can be collected into permission sets, which represent a statement of access to a variety of resources. The System.Security.PermissionSet class represents such a collection of permissions. This class includes Intersect and Union methods, which take another PermissionSet as a parameter and return a PermissionSet representing either the union or the intersection of all permissions in the two collections. (Permission sets in the runtime are represented as simple, unordered collections.) With these facilities, the security system can manipulate grants of permissions without having to understand the semantics of each individual permission. This allows developers to extend the permission hierarchy without having to modify the security engine.
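Here is a minimal, hedged sketch of the Union and Intersect operations just described; the path and the choice of permissions are illustrative only.

using System;
using System.Security;
using System.Security.Permissions;

class PermissionSetDemo
{
   static void Main()
   {
      // Set A: read access to one directory.
      PermissionSet a = new PermissionSet(PermissionState.None);
      a.AddPermission(new FileIOPermission(FileIOPermissionAccess.Read, @"C:\data"));

      // Set B: the same read access plus unrestricted user-interface access.
      PermissionSet b = new PermissionSet(PermissionState.None);
      b.AddPermission(new FileIOPermission(FileIOPermissionAccess.Read, @"C:\data"));
      b.AddPermission(new UIPermission(PermissionState.Unrestricted));

      PermissionSet union = a.Union(b);             // read C:\data and full UI
      PermissionSet intersection = a.Intersect(b);  // read C:\data only

      Console.WriteLine(union);          // permission sets print in XML form
      Console.WriteLine(intersection);
   }
}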
Note   Each permission type must implement the standard permission operations, such as the union, intersection, subset, and demand methods, but it is free to implement the semantics appropriate to the state it contains. For example, a permission containing a file name will implement intersection differently than a permission containing a simple Boolean state. When permission set A is intersected with permission set B, and A and B contain different instances of the same permission type X, the PermissionSet class calls the Intersect method on the instances of X without having to know anything about the semantics of X.
Based on the evidence presented to the security system when an assembly is loaded, the security system grants the assembly a permission set, which represents its authorization to access various protected resources. Resources, in turn, are protected by permission demands, which trigger a security check to verify that a specific permission has been granted to all callers of the resource; if the check fails, an exception is thrown. (There is a specific form of security check, known as a link demand, that checks only the immediate caller; normally the entire call stack is checked.)
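The following minimal sketch (not from the original article) contrasts the two kinds of check just mentioned, using the declarative attribute syntax described later in this article; the file path is illustrative.

using System.Security.Permissions;

class ProtectedOperations
{
   // Full demand: a stack walk checks every caller for write access to the
   // path before the method body runs.
   [FileIOPermission(SecurityAction.Demand, Write = @"C:\data\report.txt")]
   public static void WriteReport()
   {
      // ... write the report ...
   }

   // Link demand: only the immediate caller is checked, at JIT time.
   [SecurityPermission(SecurityAction.LinkDemand,
      Flags = SecurityPermissionFlag.UnmanagedCode)]
   public static void CallIntoNativeCode()
   {
      // ... call unmanaged code ...
   }
}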
Evidence
Whenever an assembly is loaded into the runtime, the hosting environment presents the security system with evidence about that assembly. Evidence constitutes the input to the code access security policy system, which determines which permissions the assembly receives.
The .NET Framework ships with a number of classes that the security system uses as forms of evidence, including:
• Zone: the same concept as the zones used in Internet Explorer.
• URL: a specific URL or file location that identifies a particular resource, such as http://www.microsoft.com/test.
• Hash: the hash value of the assembly, generated with a hash algorithm such as SHA1.
• Strong name: the strong-name signature of the assembly. A strong name represents a versioned, cryptographically strengthened way to refer to and identify an assembly (or all assemblies) from a particular signing party. For more information, see the .NET Framework SDK.
• Site: the site the code came from. A URL is more specific than the concept of a site; for example, www.microsoft.com is a site.
• Application directory: the directory from which the code is loaded.
• Publisher certificate: the Authenticode digital signature of the assembly.
Note   In principle, any managed object can constitute evidence. The types above simply have corresponding membership conditions in the .NET Framework, so they can be integrated into security policy without writing custom security objects. For more information on security policy and code groups, see below. A short sketch after this note shows how a few of these evidence objects can be constructed in code.
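As a minimal sketch (not from the original article), a few of the evidence types listed above can be created by hand; a host normally supplies such objects when it loads an assembly, and the values below are illustrative.

using System;
using System.Security;
using System.Security.Policy;

class EvidenceDemo
{
   static void Main()
   {
      Evidence evidence = new Evidence();
      evidence.AddHost(new Zone(SecurityZone.Internet));          // zone evidence
      evidence.AddHost(new Url("http://www.microsoft.com/test")); // URL evidence
      evidence.AddHost(new Site("www.microsoft.com"));            // site evidence

      foreach (object item in evidence)
         Console.WriteLine(item);
   }
}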
The following program is a simple example of how evidence is passed to the runtime security system when an assembly is loaded. In this example, mscorlib is the assembly being loaded; it contains many runtime types, such as Object and String.
using System;
using System.Collections;
using System.Reflection;
using System.Security.Policy;
namespace AssemblyEvidence
{
   class Class1
   {
      static void Main(string[] args)
      {
         Type t = Type.GetType("System.String");
         Assembly a = Assembly.GetAssembly(t);
         Evidence e = a.Evidence;
         IEnumerator i = e.GetEnumerator();
         while (i.MoveNext())
            Console.WriteLine(i.Current);
      }
   }
}
The output of the program shows what evidence was passed to the security system for this assembly. The output below has been edited for brevity. The security system takes this evidence and, based on the security policy set by the administrator, produces a permission set for the assembly.
<Url>file:///C:/WINNT/Microsoft.NET/Framework/v1.0.2728/mscorlib.dll</Url>
<StrongName Key="000000000000000400000000000000"
            Name="mscorlib"
            Version="1.0.2411.0"/>
<RawData>...

Security Policy
Administrable security policy determines the mapping between the evidence about an assembly supplied by the hosting environment and the permission set granted to that assembly. The System.Security.SecurityManager class implements this mapping. The code access security policy system can therefore be viewed as a function with two input variables (evidence and administrable security policy) and a specific permission set as its output value. This section focuses on the administrable security policy system.
The security manager recognizes several configurable policy levels:
• Enterprise policy level
• Machine policy level
• User policy level
• Application domain policy level
The enterprise, machine, and user policy levels can be configured by security policy administrators; the application domain policy level can be configured programmatically by a host. When the security manager needs to determine what permissions the policy levels grant an assembly, it starts at the enterprise policy level. The evidence for the assembly is presented to that policy level, which yields the permission set granted by that level. The security manager then typically continues in the same way, collecting the permission sets of the policy levels below the enterprise level. These permission sets are intersected to produce the granted permission set of the assembly. Every policy level must allow a particular permission for it to make its way into the set granted to the assembly. For example, if the enterprise policy level does not grant a specific permission when evaluating an assembly, that permission will not be granted, regardless of what permissions the other levels specify.
Note   There are special situations in which a policy level, such as the enterprise level, may contain an instruction to disregard all policy levels below it, such as the machine and user policy levels. In that case the machine and user levels do not produce permission sets, and those two levels are not considered in the calculation of the assembly's grant set.
The developer of an assembly can also influence the runtime's permission calculation for that assembly. Although an assembly cannot simply acquire whatever permissions it would like, it can declare a minimum set of required permissions or refuse certain permissions. The security manager ensures that an assembly runs only if its required permissions are part of the permission set yielded by the policy level structure. Conversely, the security manager also ensures that the assembly never receives any permission it has refused. Assembly developers can place minimum, refused, or optional permission requests on an assembly using security custom attributes; for more information, see the Declarative and Imperative Security section below or the .NET Framework SDK.
The process of determining the actual permission set granted to an assembly therefore involves three steps:
1. Each policy level evaluates the evidence of the assembly and produces a level-specific grant set.
2. The permission sets calculated for each policy level are intersected with each other.
3. The resulting permission set is compared with the permission sets the assembly has declared as required or refused, and the grant set is modified accordingly.
Figure 1. Calculating the grant set of an assembly
Figure 1 shows this general calculation process.
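Programmatically, this calculation can be exercised through the SecurityManager class. The following is a minimal sketch (not from the original article) that feeds the evidence of a loaded assembly through the configured policy levels and prints the resulting grant set; it reuses mscorlib from the earlier evidence example.

using System;
using System.Reflection;
using System.Security;
using System.Security.Policy;

class ResolveDemo
{
   static void Main()
   {
      // Take the evidence of a loaded assembly and ask the policy system
      // what permission set the configured policy levels grant it.
      Assembly a = Assembly.GetAssembly(typeof(string));
      Evidence e = a.Evidence;
      PermissionSet granted = SecurityManager.ResolvePolicy(e);
      Console.WriteLine(granted);   // the grant set, printed in XML form
   }
}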
The host of the runtime supplies evidence about the assembly, which is one input to the permission set calculation. The administrable security policy (enterprise, machine, and user policy), shown in Figure 1 as the security settings, is the second input that determines the assembly's permission set. The security policy code (contained in the SecurityManager class) evaluates the assembly's evidence against each policy level's settings and produces the granted permission set, which represents the assembly's authorization to access protected resources.
How is each policy level administered? A policy level represents an independent, configurable unit of security policy: each level maps assembly evidence to a permission set. All policy levels have a similar structure. Each consists of three parts that together represent the configuration state of that level:
• A code group tree
• A list of named permission sets
• A list of policy assemblies
Let us now look at each of these components of a policy level in more detail.
Code groups
At the heart of every policy level is the code group tree, which represents the configuration state of the policy level. A code group is essentially a conditional expression paired with a permission set: if an assembly satisfies the conditional expression, it is granted the permission set. The code groups at each policy level are arranged as a tree. Whenever a conditional expression evaluates to true, the corresponding permission set is granted and traversal continues into that branch; whenever a condition is not met, the permission set is not granted and that branch is not examined further. For example, consider a code group tree that operates as shown below.
Figure 2. A policy level's code group tree
Note   We discuss only the code group semantics of the code group type used to implement the default security policy. Custom code groups can be written whose semantics differ completely from those described here; once again, the security system is fully extensible, so there are unlimited possibilities for introducing new policy evaluation semantics.
Suppose we have an assembly with the following evidence: it comes from www.monash.edu.au and, because it comes from Monash University's M-Commerce Centre, it carries the M-Commerce strong name. The code group traversal proceeds as follows. The root node has the condition "All Code", which any code satisfies, so the "All Code" code group grants our assembly its permission set. This permission set is called Nothing, and it grants the code no permissions at all. The next code group checked requires that the code come from My Computer. Because this condition is not satisfied, no permission set is granted and none of the children of that node are examined. We then return to the previous successfully matched code group (in this example, All Code) and continue with its other children. The next code group is Zone: Internet. Because our code was downloaded from the Internet, it satisfies this condition, so its permission set (perhaps the Internet permission set) is granted, and we continue checking the child code groups of this branch. The next code group has a URL condition stating that the code must come from www.microsoft.com. Since our code comes from www.monash.edu.au, this condition is not met. At this point we return to the Zone: Internet code group and look at its other children. We find the node with the condition URL: www.monash.edu.au; this condition is met, so we are granted the MonashPSet permission set. Next we find the node with the condition Strong Name: M-Commerce; this condition is also met, so we are granted the M-CommercePSet permission set. Because there are no code groups below this level, we return to the last code group whose condition was satisfied and that still has unexamined children, and continue from there.
In the end, the conditions satisfied and the permission sets granted at this policy level are:
• Condition: All Code, permission set: Nothing
• Condition: Zone: Internet, permission set: Internet
• Condition: URL: www.monash.edu.au, permission set: MonashPSet
• Condition: Strong Name: M-Commerce, permission set: M-CommercePSet
The permission sets found at a policy level are normally combined to produce the total permission set granted by that policy level. It is easy to inspect the code group tree of a policy level: Appendix A describes a Microsoft Management Console snap-in that provides a hierarchical view for examining and modifying code groups (and all the other configurable components of a policy level, discussed below).
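Code group trees can also be built programmatically. The following is a minimal, hedged sketch (not from the original article) of a tree with roughly the shape just described; the URL, the permission sets, and the stand-in for MonashPSet are illustrative only.

using System.Security;
using System.Security.Permissions;
using System.Security.Policy;

class CodeGroupDemo
{
   static CodeGroup BuildTree()
   {
      // Root: "All Code" -> Nothing (an empty permission set).
      CodeGroup root = new UnionCodeGroup(
         new AllMembershipCondition(),
         new PolicyStatement(new PermissionSet(PermissionState.None)));

      // Child: Zone: Internet -> a small, illustrative grant set.
      PermissionSet internetLike = new PermissionSet(PermissionState.None);
      internetLike.AddPermission(
         new SecurityPermission(SecurityPermissionFlag.Execution));
      CodeGroup internetGroup = new UnionCodeGroup(
         new ZoneMembershipCondition(SecurityZone.Internet),
         new PolicyStatement(internetLike));

      // Grandchild: URL: www.monash.edu.au -> a stand-in for MonashPSet.
      CodeGroup monashGroup = new UnionCodeGroup(
         new UrlMembershipCondition("http://www.monash.edu.au/*"),
         new PolicyStatement(new PermissionSet(PermissionState.Unrestricted)));

      internetGroup.AddChild(monashGroup);
      root.AddChild(internetGroup);
      return root;
   }
}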
Named permission sets
A policy level contains a list of named permission sets. Each permission set represents a statement of trust to access a variety of protected resources. Named permission sets are the sets that code groups refer to by name: if the condition of a code group is met, the referenced named permission set is granted (as in the example above). Here are some of the predefined named permission sets:
• FullTrust: allows unrestricted access to system resources.
• SkipVerification: allows an assembly to skip verification.
• Execution: allows code to execute.
• Nothing: grants no permissions. Not granting the permission to execute effectively stops code from running.
• Internet: the permission set appropriate for code from the Internet. The code receives no access to the file system or the registry, but can perform some limited user interface operations and can use a safe file system called isolated storage.
To view the permission sets of a policy level, simply open the node for that policy level in the GUI tool mentioned in Appendix A and then open the permission sets folder. The following small sample program lists all the named permission sets known at every policy level. The application is a C# program run from a local disk, so under the default policy settings it receives quite a powerful permission set.
using System;
using System.Collections;
using System.Security;
using System.Security.Policy;
namespace SecurityResolver
{
   class Sample
   {
      static void Main(string[] args)
      {
         IEnumerator i = SecurityManager.PolicyHierarchy();
         while (i.MoveNext())
         {
            PolicyLevel p = (PolicyLevel)i.Current;
            Console.WriteLine(p.Label);

            IEnumerator np = p.NamedPermissionSets.GetEnumerator();
            while (np.MoveNext())
            {
               NamedPermissionSet pset = (NamedPermissionSet)np.Current;
               Console.WriteLine("\tPermission Set:\n\t\tName: {0}\n\t\tDescription: {1}",
                  pset.Name, pset.Description);
            }
         }
      }
   }
}
The output of the program follows; it has been edited for brevity and clarity.
Enterprise
   Permission Set:
      Name: FullTrust
      Description: Allows full access to all resources
   Permission Set:
      Name: LocalIntranet
      Description: Default rights given to applications on your local intranet
   ...
Machine
   Permission Set:
      Name: Nothing
      Description: Denies all resources, including the right to execute
   ...
User
   ...
   Permission Set:
      Name: SkipVerification
      Description: Grants right to bypass the verification
   Permission Set:
      Name: Execution
      Description: Permits execution
   ...
Policy assemblies
During policy evaluation, it may be necessary to load other assemblies for use in the policy calculation. For example, an assembly may contain a user-defined permission class that is part of a permission set granted by a code group. Of course, the assembly containing the custom permission must itself be evaluated against security policy. If that assembly were granted a permission set containing its own custom permission, a circular dependency would result. To avoid this, each policy level contains a list of assemblies that are trusted for use during policy evaluation. This list of required assemblies is, naturally enough, called the "policy assemblies" list, and it contains the transitive closure of all assemblies needed by the security policy at that policy level. Policy evaluation for all assemblies in this list is short-circuited to avoid circular dependencies. The list can be modified with the GUI administration tool mentioned in Appendix A.
This completes the examination of the configurable components of each policy level: the code group tree, the named permission set list, and the policy assemblies list. Note that so far we have only discussed how an assembly comes to receive a grant set; the permissions derived from the security policy state have no effect unless some infrastructure requires certain permissions before allowing access to a resource. The technology that actually makes security enforcement possible is the security stack walk.
Stack Walks
Stack walks are an essential part of the security system. A stack walk operates in the following manner. Every time a method is called, a new activation record is pushed onto the stack. This record contains the parameters passed to the method (if any), the address to return to when the function completes, and any local variables. As the program executes, the stack grows and shrinks as functions are called and return. At some point during execution, a thread may need to access a system resource, such as the file system. Before this access to a protected resource is allowed, a stack walk may be required to verify that all methods in the call chain have permission to access the resource. At that point a stack walk is performed, and each activation record is checked, in turn, to see whether the callers do indeed have the required permission. As an alternative to the full stack walk, the CAS system also allows developers to annotate resources with link-time checks; such a check examines only the immediate caller.
Modifying the stack walk
At any stage during execution, a function may need its callers' permissions to be checked before it accesses a particular resource. At that point, the function can demand a security check for a particular permission or permission set. This triggers a stack walk, with one of two results: either all callers have been granted the demanded permission(s) and the function continues to execute, or a caller without the required permission is found and an exception is thrown. The following figure illustrates this process.
Figure 3. Example of a stack walk
A function can choose to modify the stack walk, and several mechanisms exist to accomplish this. First, a function may need to vouch for the functions it calls. In this case, it can assert a particular permission. When a stack walk looking for that permission reaches the activation record of the asserting function, the check succeeds at that point if the function holds the permission it asserts, and the stack walk terminates. Asserting is itself a protected operation, because it opens access to a protected resource to all of the asserting function's callers. The runtime security system therefore checks that the assembly containing the asserting function has been granted the permission it attempts to assert.
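The following is a minimal, hedged sketch (not from the original article) of vouching for callers with Assert; the logging helper and its path are hypothetical.

using System.IO;
using System.Security;
using System.Security.Permissions;

class TrustedLogger
{
   // The stack walk for FileIOPermission stops at this frame, so
   // less-trusted callers higher up the stack are not checked.
   public static void WriteEntry(string message)
   {
      FileIOPermission p = new FileIOPermission(
         FileIOPermissionAccess.Append, @"C:\logs\app.log");
      p.Assert();
      try
      {
         using (StreamWriter w = File.AppendText(@"C:\logs\app.log"))
            w.WriteLine(message);
      }
      finally
      {
         CodeAccessPermission.RevertAssert();   // remove the assertion
      }
   }
}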
Another way to modify a stack walk is for a function to deny permissions. This might be done when a function knows that it should not access a resource and wants to refuse the permission. PermitOnly provides functionality similar to Deny, in that it can cause a stack walk to fail: Deny specifies the permission set that will cause the stack walk to fail, whereas PermitOnly specifies the only permissions for which a stack walk can continue.
Note   Be very cautious when using the Deny stack walk modifier. If an earlier stack frame contains an assertion, the Deny modifier is ignored. In addition, denying path-based permissions is quite difficult, because there are often many different path strings that actually point to the same location; denying one particular path expression can still leave other paths open.
One last point worth knowing: at any time, a stack frame can have only one Deny, one PermitOnly, and one Assert active. For example, a developer who needs to assert several permissions should create a PermissionSet representing that collection and make a single assertion on it. There are also methods that remove the PermissionSet currently registered for a given stack walk modifier so that another permission set can be registered; one example is System.Security.CodeAccessPermission.RevertPermitOnly. The following example illustrates the stack-modification techniques described so far:
using System;
using System.Security;
using System.Security.Permissions;
namespace PermissionDemand
{
   class EntryPoint
   {
      static void Main(string[] args)
      {
         string f = @"C:\System Volume Information";
         FileIOPermission p = new FileIOPermission(FileIOPermissionAccess.Write, f);
         p.Demand();
         p.Deny();
         p.Demand();
         CheckDeny(p);
         p.Assert();
         CheckDeny(p);
      }
      static void CheckDeny(FileIOPermission p)
      {
         try
         {
            p.Demand();
         }
         catch (SecurityException)
         {
            Console.WriteLine("Demand failed");
         }
      }
   }
}
The program produces the following output, which may not be what you expect:
Demand failed
Demand failed
In the code above, the first Demand succeeds even though it refers to a restricted system directory. Remember that the runtime security system sits on top of the underlying operating system settings: it is quite possible for the runtime to grant this access right and for the managed code to cause an operating system access violation if it actually tries to use it. The Demand immediately after the Deny also succeeds. When a Demand executes, the activation record of the demanding function itself is not checked, only those of its callers; so although the function has denied the access, its own Demand does not detect the Deny. The call to CheckDeny, and the Demand inside it, does fail: the Deny now sits in a caller's stack frame, so it is checked. Back in Main, we then Assert the same permission that has been denied in that stack frame and call CheckDeny again; the Demand once more throws an exception. Why? In essence, the Deny overrides the Assert, because the denied permission set is always checked before the asserted permission set.
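The example above exercises Demand, Deny, and Assert. PermitOnly, described earlier, can be sketched in a similar, hedged way (not from the original article; the directory is illustrative):

using System.Security;
using System.Security.Permissions;

class PermitOnlyDemo
{
   // After PermitOnly, demands that reach this frame succeed only for
   // read access to C:\temp; everything else fails here.
   static void ProcessUntrustedInput()
   {
      FileIOPermission readTempOnly = new FileIOPermission(
         FileIOPermissionAccess.Read, @"C:\temp");
      readTempOnly.PermitOnly();
      try
      {
         // ... work that should at most read files under C:\temp ...
      }
      finally
      {
         CodeAccessPermission.RevertPermitOnly();   // restore normal checks
      }
   }
}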
To summarize: the security stack walk is the runtime security system's mechanism for protecting managed resources. Functions exposing protected resources trigger stack walks that demand permissions. The granted permission set is what an assembly receives from the policy evaluation performed at each policy level, and the demanded permissions are what a resource requires of its callers. If the latter forms a subset of the former, access to the protected resource is allowed. This subset check is performed for every caller of the managed resource in the call chain, unless the stack walk has been modified as described above. The security stack walk therefore brings together the two halves of the runtime security system: (1) the configurable mapping between evidence and permissions, and (2) the protection of resources by requiring that all callers hold a certain level of permissions.
There are two distinct ways to express stack walk demands and stack walk modifications programmatically: declarative security and imperative security.
Declarative and Imperative Security
The .NET Framework allows developers to express security constraints in two ways. Expressing security constraints declaratively means using the custom attribute syntax. These annotations are persisted in the metadata for the type, so they are effectively evaluated at compile time. Here is an example of declarative security:
[PrincipalPermissionAttribute(SecurityAction.Demand, Name = @"CULEX\Damien")]
Expressing security constraints imperatively means creating instances of permission objects at run time and calling methods on them. Here is an example of imperative security, taken from an earlier example in this article:
FileIOPermission p = new FileIOPermission(FileIOPermissionAccess.Write, f);
p.Demand();
Why choose one style rather than the other? All security actions can be expressed declaratively; the same is not true of the imperative approach. On the other hand, the declarative approach requires that all security constraints be expressed at compile time and applies only at the granularity of a whole method, class, or assembly. Imperative security is more flexible, because it allows constraints to be decided at run time and applied, for example, to only some of the execution paths within a method. A side effect of declarative security requirements being persisted in metadata is that tools can extract this information and provide features based on it; for example, a tool could display the set of declarative security attributes applied to an assembly. With imperative security this is not possible. Developers need to understand, and become familiar with, both styles.
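The granularity difference just described can be sketched as follows (a minimal example, not from the original article; the path and method names are illustrative): the declarative form protects the whole method, while the imperative form can protect just one execution path.

using System.Security.Permissions;

class GranularityDemo
{
   // Declarative: every call to this method demands write access.
   [FileIOPermission(SecurityAction.Demand, Write = @"C:\data\audit.log")]
   public static void AlwaysAudited()
   {
      // ... write to the audit log ...
   }

   // Imperative: the demand happens only when the audit branch is taken.
   public static void SometimesAudited(bool audit)
   {
      if (audit)
      {
         FileIOPermission p = new FileIOPermission(
            FileIOPermissionAccess.Write, @"C:\data\audit.log");
         p.Demand();
         // ... write to the audit log ...
      }
      // ... normal work requires no file permission ...
   }
}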
Recommendations
Because code executed from the local hard disk is trusted significantly more than code executed from anywhere else, storing code on disk and then executing it has very different semantics from executing the same code from a remote location. With earlier systems this distinction was not very apparent; a browser, for example, might choose to download code rather than execute it remotely, on the assumption that downloaded code would be examined before execution, for instance by a virus scanner. With code access security the situation is quite different: under the default security policy, executing code as remote code significantly increases security. It may, however, increase the burden on the system or the user, because they need to be aware of the difference between managed and unmanaged code.
Finally, there is one more aspect of the runtime security system to cover, and it should be the most familiar to users of older security systems because it is based on user identity: role-based security.
Role-Based Security
The code access security system introduced so far is centered on the identity of code rather than that of a user or role. Nevertheless, there is still a need to express security settings in terms of user identity, so the runtime security system also includes role-based security features. Role-based security builds on the concepts of users and roles, similar to the security implementations of many operating systems. The two core abstractions of role-based security are identities and principals. An identity represents the user on whose behalf the code is executing. Remember that this may be a logical user as defined by the application or the developer, not necessarily a user visible to the operating system. A principal represents the abstraction of a user together with that user's roles. Classes that represent user identities implement the IIdentity interface; in the .NET Framework, the generic class providing a default implementation of this interface is GenericIdentity. Classes that represent principals implement the IPrincipal interface; in the .NET Framework, the generic class providing a default implementation of this interface is GenericPrincipal. In the runtime, each thread has exactly one current principal object associated with it, and code can, subject to security checks, access and change this object. Each principal has exactly one identity object. Logically, the structure of these objects resembles the following:
Figure 4. Role-based security structure
The following program illustrates how a developer can use these generic classes. In this example, the developer is providing the security model: the name "Damien" and the roles "Lecturer" and "Examiner" are independent of any users and roles known to the underlying operating system.
using System;
using System.Threading;
using System.Security;
using System.Security.Principal;
namespace RoleBasedSecurity
{
   class Sample
   {
      static void Main(string[] args)
      {
         string[] roles = {"Lecturer", "Examiner"};
         GenericIdentity i = new GenericIdentity("Damien");
         GenericPrincipal g = new GenericPrincipal(i, roles);
         Thread.CurrentPrincipal = g;

         if (Thread.CurrentPrincipal.Identity.Name == "Damien")
            Console.WriteLine("Hello Damien");
         if (Thread.CurrentPrincipal.IsInRole("Examiner"))
            Console.WriteLine("Hello Examiner");
         if (Thread.CurrentPrincipal.IsInRole("Employee"))
            Console.WriteLine("Hello Employee");
      }
   }
}
The program produces the following output:
Hello Damien
Hello Examiner
Developers can also use the Microsoft® Windows® security model if they wish. In that case, users and roles are tied to the users and groups on the host machine, so it may be necessary to create those accounts on the host system. The following example uses user accounts on the local computer. In this example, a little syntactic sugar is provided by the PrincipalPermissionAttribute class in the .NET Framework, which effectively encapsulates calls such as IsInRole so that developers can use a simplified syntax.
using System;
using System.Security;
using System.Security.Permissions;
using System.Security.Principal;
namespace RoleBased
{
   class Sample
   {
      [PrincipalPermissionAttribute(SecurityAction.Demand, Name = @"CULEX\Damien")]
      public static void UserDemandDamien()
      {
         Console.WriteLine("Hello Damien!");
      }
      [PrincipalPermissionAttribute(SecurityAction.Demand, Name = @"CULEX\Dean")]
      public static void UserDemandDean()
      {
         Console.WriteLine("Hello Dean!");
      }
      static void Main(string[] args)
      {
         AppDomain.CurrentDomain.SetPrincipalPolicy(PrincipalPolicy.WindowsPrincipal);
         try
         {
            UserDemandDamien();
            UserDemandDean();
         }
         catch (Exception)
         {
            Console.WriteLine("Exception thrown");
         }
      }
   }
}
The PrincipalPermissionAttribute ensures that a runtime check is made every time the UserDemandDamien and UserDemandDean methods are called. Of course, the program may be run by Dean, by Damien, or by someone else entirely, so at least one of the security checks on these two methods must fail. The first line of Main sets the principal policy to the Windows principal of the user executing the example. When the user "CULEX\Damien" runs the program, the following output is produced:
Hello Damien!
Exception thrown
Summary
Security is a fundamental, built-in feature of the .NET Framework. This article has provided an overview of the security system. Some of the main points to take away are:
• The security system is extensible. Because so many of its concepts are expressed as types in the .NET Framework, developers can extend and modify them to suit their own needs.
• The security system provides different kinds of security models, specifically a role-based model and an evidence-based model (code access security). These models address different needs and are complementary.
• Code access security is centered on the identity of the code, so even if the operating system user context grants broad access to the machine, code can still be executed in a less-than-fully-trusted security context.
This article has not discussed parts of the security system such as cryptography in any detail. Please consult the white papers on those topics to learn about details not covered here.
Acknowledgements
Brian Pratt, Loren Kohnfelder, and Matt Lyons provided invaluable help and support during the writing of this article, for which the authors thank them.
Appendix A: Mscorcfg.msc
A Microsoft Management Console snap-in is provided that allows the code access security policy to be viewed and edited. The figure below shows the snap-in's interface, which highlights some of the concepts discussed in this article.
Figure 5. The Microsoft Management Console snap-in interface
You can access the tool from the Control Panel by clicking the Microsoft .NET Framework Configuration shortcut.