An Overview of Security in the .NET Framework
Release Date: 7/28/2004 | Update Date: 7/28/2004
Dr. Demien Watkins, Project 42
Sebastian Lange, Microsoft Corporation
Summary: This article outlines the basic features of the Microsoft .NET Framework security system, including its support for dynamic download-and-execute and remote execution scenarios, achieved by restricting code to run within tightly constrained, administrator-defined security contexts.
On This Page
Introduction
Execution Overview
Verification
Code Access Security
Permissions
Evidence
Security Policy
Code Groups
Named Permission Sets
Policy Assemblies
Stack Walks
Declarative and Imperative Style
Role-Based Security
Summary
Appendix A: Mscorcfg.msc
Introduction
In traditional models of program development, an administrator typically installs software to a fixed location on a local disk. As the industry moves away from this model toward environments that support dynamic downloading and even remote execution, security becomes one of the most important considerations. To support this model, the Microsoft .NET Framework provides a strong security system that can confine code to run within tightly constrained, administrator-defined security contexts. This article examines some of the fundamental security features of the .NET Framework.
Many security models associate security with users and their groups (or roles). Under such models, all code running on behalf of a given user is either allowed or denied access to sensitive resources; security in most operating systems is built on this model. The .NET Framework provides a developer-facing security model of this kind, called role-based security, which operates within a similar structure; its principal abstractions are Principal and Identity. In addition, the .NET Framework provides security on a per-code basis, called code access security (also known as evidence-based security). With code access security, a user may be trusted to access a resource, but if the code the user executes is not trusted, access to the resource will be denied. Security based on code, as opposed to specific users, is a fundamental tool for enforcing security on mobile code, which may be downloaded and executed by any number of users who are unknown at development time. Code access security centers on a few core abstractions: evidence, policy, and permissions. Both role-based security and code access security are represented by types in the .NET Framework class library and are user-extensible. Two benefits are worth noting: the security model is exposed to all programming languages in a consistent and coherent manner, and both user-defined types and the public .NET Framework class library types (whose misuse could undermine security) can be protected.
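As a brief illustration of the role-based model described above (a minimal sketch, not taken from the original article; the class and role names come from the standard System.Security.Principal types), a principal wraps an identity and answers role-membership questions:

```csharp
using System;
using System.Security.Principal;

class RoleDemo
{
    static void Main()
    {
        // Obtain the identity of the current Windows user and wrap it in a
        // principal, which associates the identity with its roles (groups).
        WindowsIdentity identity = WindowsIdentity.GetCurrent();
        WindowsPrincipal principal = new WindowsPrincipal(identity);

        Console.WriteLine("Name: " + identity.Name);
        // Role checks like this one drive authorization decisions
        // in role-based security.
        Console.WriteLine("Administrator? " +
            principal.IsInRole(WindowsBuiltInRole.Administrator));
    }
}
```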
The .NET Framework security system operates on top of traditional operating system security. It adds a layer of security that is more expressive and extensible than what the operating system provides, and the two levels complement each other. (An operating system can also delegate some responsibility for managed code to the common language runtime security system, since the runtime security system is finer-grained and more configurable than traditional operating system security.)
This article provides an overview of .NET Framework security, focusing on role-based security, verification, code access security, and stack walks, and uses small programming examples to illustrate the concepts. It does not cover other runtime security features, such as cryptography and isolated storage.
Note that this article generally describes the default behavior of these security features. The .NET Framework security system is, however, highly configurable and extensible. This is a major strength of the system, but unfortunately one that cannot be discussed in detail in this conceptual overview.
Back to top
Execution Overview
The runtime executes both managed and unmanaged code. Managed code executes under the control of the runtime and therefore has access to the services the runtime provides, such as memory management, just-in-time (JIT) compilation, and, most importantly for this article, security services such as the security policy system and verification. Unmanaged code is compiled to run on a particular hardware platform and cannot directly use these services. When a language compiler produces managed code, however, the compiler's output is expressed in Microsoft Intermediate Language (MSIL). MSIL is often described as an object-oriented pseudo-assembly language for an abstract, stack-based machine. It is object-oriented because it has instructions that support object-oriented concepts, such as object allocation (newobj) and virtual function calls (callvirt). It is an abstract machine because MSIL does not depend on any specific platform; that is, it makes no assumptions about the hardware it runs on. It is stack-based because, in essence, MSIL executes by pushing and popping values on a stack and by calling methods. MSIL is normally compiled to native code before execution. (MSIL can also be compiled to native code ahead of time, before the code runs. This helps shorten an assembly's startup time, but MSIL is usually JIT-compiled at the method level.)
Back to top
Verification
The runtime performs two forms of verification: MSIL verification and assembly metadata validation. All types in the runtime specify the contracts they implement, and this information is persisted as metadata alongside the MSIL in the managed PE/COFF file. For example, when a type specifies that it inherits from another class or interface, thereby promising to implement a number of methods, that is a contract. A contract can also relate to visibility; for example, a type may or may not be declared public (exported) from its assembly. Type safety is the property of code that it accesses types only in accordance with their contracts. MSIL can be verified to prove that it is type safe. Verification is a fundamental building block of the .NET Framework security system, and it is currently performed only on managed code. Because unmanaged code cannot be verified by the runtime, unmanaged code executed by the runtime must be fully trusted.
To understand MSIL verification, it is important to understand how MSIL is classified. MSIL falls into the following categories: invalid MSIL, valid MSIL, type-safe MSIL, and verifiable MSIL.
Note that the following definitions provide more of an intuition than a rigorous specification. For more precise definitions, see other documents, such as the ECMA standard:
• Invalid MSIL is MSIL for which the JIT compiler cannot produce a native representation. For example, MSIL containing an invalid opcode cannot be translated into native code. Another example is a jump instruction whose target is the address of an operand rather than an opcode.
• Valid MSIL is all MSIL that satisfies the MSIL grammar and can therefore be represented in native code. This classification includes MSIL that uses non-type-safe pointer arithmetic to gain access to members of a type.
• Type-safe MSIL interacts with types only through their publicly exposed contracts. MSIL that attempts to access a private member of one type from another type is not type safe.
• Verifiable MSIL is MSIL whose type safety can be proved by the verification algorithm. The algorithm is conservative, so some type-safe MSIL may fail verification. Verifiable MSIL is, of course, both type safe and valid, and certainly not invalid.
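For a concrete feel for the valid-but-unverifiable category, consider the following sketch (my own illustration, not from the original article): C# compiled with the /unsafe option emits MSIL that the JIT can compile, yet that fails verification because it uses pointer arithmetic.

```csharp
public class Unverifiable
{
    // Compiling this requires the /unsafe compiler option. The resulting
    // MSIL is valid (it can be translated to native code) but not
    // verifiable: pointer arithmetic bypasses the type system's contracts,
    // so the verification algorithm cannot prove type safety.
    public static unsafe int SecondElement(int[] data)
    {
        fixed (int* p = data)
        {
            return *(p + 1); // raw pointer access into the array
        }
    }
}
```

Under default policy, such code runs only if the assembly is trusted enough to be granted the right to skip verification.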
In addition to type-safety checks, the MSIL verification algorithm in the runtime also checks for stack overflow/underflow, correct use of the exception handling facilities, and proper object initialization.
For code loaded from disk, verification is part of JIT compilation and occurs intermittently within it; verification and JIT compilation are not performed as two separate passes. If an assembly is found to contain unverifiable MSIL, the security system checks whether the assembly is trusted enough to skip verification. This may be the case, for example, under the default security policy settings for an assembly loaded from the local hard disk. If the assembly may skip verification, the MSIL is simply translated to native code. If the assembly is not trusted enough to skip verification, the offending MSIL is replaced with a stub that throws an exception if that execution path is ever reached. A common question is: why not check whether verification can be skipped before verifying the assembly? Because verification is generally performed as part of JIT compilation, it is often faster to verify the code than to check whether the assembly is allowed to skip verification. (The decision to skip verification is actually made more intelligently than the process described here; for example, some results of previous verification attempts can be cached to provide a fast lookup.)
In addition to MSIL verification, assembly metadata is also validated. In fact, type safety depends on these metadata checks, because MSIL verification assumes that the metadata tokens it uses are correct. Assembly metadata is validated either when an assembly is loaded into the Global Assembly Cache (GAC) or the download cache, or, if it is not inserted into the GAC, when it is read from disk. (The GAC is a central store for assemblies used by several programs; the download cache holds assemblies downloaded from other locations, such as the Internet.)
Metadata validation involves checking metadata tokens for correctness and eliminating buffer overruns: the former ensures that tokens index correctly into the tables they access, and that indexes into the string table do not point at strings longer than the buffers that should hold them. Eliminating non-type-safe code through MSIL verification and metadata validation is the first part of runtime security.
Back to top
Code Access Security
In essence, code access security assigns permissions to assemblies based on evidence. When determining which resources code should be able to access, code access security uses the location from which the executable code was obtained, along with other information about the code's identity, as primary factors. This information about an assembly's identity is called evidence. Whenever an assembly is loaded into the runtime for execution, the hosting environment attaches a number of pieces of evidence to it. It is the job of the code access security system in the runtime to map this evidence to a set of permissions, which determine the code's access to various resources, such as the registry or the file system. This mapping is based on administrable security policy.
For most managed-code applications, the default code access security policy is both safe and sufficient. It strictly limits what code obtained from less-than-fully-trusted or untrusted locations (such as the Internet or the local intranet) can do when executed on the local machine. The default code access security policy model thus represents an opt-in approach to security: by default, resources are secure, and administrators must take explicit action to make the system less secure.
Why is another security system needed? Unlike user-identity-based security, code access security centers on the identity of code. This allows code to run at any number of trust levels within a single user context. For example, code originating from the Internet can run within a constrained security boundary even when the operating system user context would allow full access to all system resources.
Let us now look at the principal inputs and outputs of the code access security system: evidence and permissions.
Back to top
Permissions
A permission represents the authorization to perform a protected operation. Such operations typically involve access to a particular resource, such as files, the registry, the network, the user interface, or the execution environment. An example of a permission that does not involve an actual resource is the ability to skip verification.
Note The System.Security.Permissions.SecurityPermission class contains a flag that determines whether the holder of the permission instance is allowed to skip verification. The SecurityPermission class contains other, similar permissions covering core runtime technologies that could open security holes if used incorrectly (such as the ability to control the evidence supplied for a particular application domain). These core runtime technologies are protected by demanding that callers have been granted the appropriate SecurityPermission.
The fundamental abstraction for permissions is the IPermission interface, which requires each specific permission type to implement a set of standard permission operations, such as returning the union with, or determining whether it is a subset of, other permissions of the same type.
Permissions can be grouped into permission sets, which represent a statement about access to a variety of resources. The System.Security.PermissionSet class represents such a collection of permissions. Its methods include Intersect and Union; these methods take another PermissionSet as a parameter and return a PermissionSet that is either the combination of all permissions in the two sets or the intersection of the permissions in the two sets. (Permission sets in the runtime are expressed as simple, unordered collections.) With these facilities, the security system can operate on permissions without having to understand the semantics of each individual permission. This allows developers to extend the permission hierarchy without modifying the security system.
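The Union and Intersect operations can be sketched as follows (my own hedged example; the path is illustrative, not from the article):

```csharp
using System.Security;
using System.Security.Permissions;

class PermissionSetDemo
{
    static void Main()
    {
        // A set granting read access to a (hypothetical) directory.
        PermissionSet a = new PermissionSet(PermissionState.None);
        a.AddPermission(new FileIOPermission(
            FileIOPermissionAccess.Read, @"C:\data"));

        // A set granting read and write access to the same directory.
        PermissionSet b = new PermissionSet(PermissionState.None);
        b.AddPermission(new FileIOPermission(
            FileIOPermissionAccess.Read | FileIOPermissionAccess.Write,
            @"C:\data"));

        // Union: everything granted by either set (read + write on C:\data).
        PermissionSet union = a.Union(b);

        // Intersect: only what both sets grant (read on C:\data).
        // This is the operation policy levels use to combine their grants.
        PermissionSet common = a.Intersect(b);
    }
}
```

Note that PermissionSet itself never interprets what "read" or "write" means; each call is delegated to the FileIOPermission instances, which implement the type-specific semantics.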
Note Each permission type must derive from the IPermission interface, which requires every permission type to implement the standard permission operations, such as the union, intersection, subset, and demand methods. Each permission type is responsible for implementing the semantics specific to it. For example, intersecting permissions that contain file names produces quite different results from intersecting permissions that contain a simple Boolean state. When permission set A is intersected with permission set B, and A and B contain different instances of the same permission type X, the PermissionSet class simply calls the intersect method on the X instances without having to know anything about X's semantics.
Based on the evidence supplied to the security system at assembly load time, the security system grants a permission set that represents the assembly's authority to access various protected resources. Conversely, resources are protected by permission demands, which trigger a security check ensuring that a specific permission has been granted to all callers of the resource; if the check fails, an exception is thrown. (There is a specific security check known as a link demand that checks only the immediate caller, but security checks usually examine the entire call stack.)
Back to top
Evidence
Whenever an assembly is loaded into the runtime, the hosting environment presents the security system with evidence about the assembly. Evidence constitutes the input to the code access security policy system, which determines the permissions the assembly receives.
The .NET Framework ships with a number of classes that the security system uses as evidence:
• Zone: the same concept as the zones used in Internet Explorer.
• URL: a specific URL or file location identifying a specific resource, such as http://www.microsoft.com/test.
• Hash: the hash value of an assembly, generated with a hash algorithm such as SHA1.
• Strong name: the strong name signature of the assembly. A strong name represents a versioned, cryptographically strong way to refer to and identify an assembly (or all assemblies) of a particular signing party. For more information, see the .NET Framework SDK.
• Site: the site from which the code originates. A URL is more specific than the concept of a site; for example, www.microsoft.com is a site.
• Application directory: the directory from which the code is loaded.
• Publisher certificate: the Authenticode digital signature of the assembly.
Note In principle, any managed object can constitute evidence. The classes above simply have corresponding membership conditions in the .NET Framework, so they can be integrated into security policy without writing custom security objects. For more information on security policy and code groups, see below.
The following program is a simple example in which evidence is passed to the runtime security system as an assembly is loaded. In this example, the loaded assembly is Mscorlib, the assembly that contains many of the runtime types, such as Object and String.
using System;
using System.Collections;
using System.Reflection;
using System.Security.Policy;

namespace AssemblyEvidence
{
    class Class1
    {
        static void Main(string[] args)
        {
            Type t = Type.GetType("System.String");
            Assembly a = Assembly.GetAssembly(t);
            Evidence e = a.Evidence;
            IEnumerator i = e.GetEnumerator();
            while (i.MoveNext())
                Console.WriteLine(i.Current);
        }
    }
}
The output of the program shows what evidence was passed to the security system for this assembly. The output below has been edited for brevity. The security system takes this evidence and, based on the security policy set by the administrator, produces a permission set.
MyComputer
file:///C:/winnt/microsoft.net/framework/v1.0.2728/mscorlib.dll
4D5A90000300000004000000FFFFF0000B8000000000000 ...
0000000000000000000000000000000000000000000000000000000000000000000000000
Back to top
Security Policy
Administrable security policy determines the mapping between the evidence a hosting environment supplies about an assembly and the permission set granted to that assembly. The System.Security.SecurityManager class implements this mapping. The code access security policy system can therefore be viewed as a function with two input variables (evidence and administrable security policy) whose output value is a specific permission set. This section focuses on the administrable security policy system.
The security manager recognizes several configurable policy levels:
• Enterprise policy level
• Machine policy level
• User policy level
• Application domain policy level
The enterprise, machine, and user policy levels can be configured by security policy administrators; the application domain policy level can be configured programmatically by a host. When the security manager needs to determine what permissions security policy grants an assembly, it starts at the enterprise policy level. The assembly's evidence is presented to that policy level, which yields the permission set granted by that level. The security manager then typically proceeds in the same way through the policy levels below the enterprise level, collecting their permission sets. These permission sets are then intersected to produce the permission set granted by the policy system as a whole. Every policy level must allow a particular permission for it to appear in the grant set of an assembly. For example, if the enterprise policy level does not grant a particular permission when evaluating an assembly, that permission is not granted regardless of what permissions the other levels specify.
Note There are special cases in which a policy level (such as the enterprise level) may contain an instruction that short-circuits all policy levels below it, such as the machine and user policy levels. In that case, the machine and user policy levels do not produce permission sets, and those levels are not considered in calculating the assembly's grant.
Developers of an assembly can influence the runtime's permission calculation for that assembly. Although an assembly cannot simply obtain whatever permissions it needs to run, it can declare a minimum set of required permissions or refuse certain permissions. The security manager ensures that the assembly runs only if the policy level structure grants the permission (or permissions) it requires. Conversely, the security manager ensures that the assembly never receives any permission it has refused. Assembly developers express minimum required, refused, or optional permissions using security custom attributes. For more information, see the Declarative and Imperative Style section below, or the .NET Framework SDK.
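These declarative assembly-level requests can be sketched as follows (a hedged illustration of the standard SecurityAction attribute targets; the path is hypothetical, not taken from the article):

```csharp
using System.Security.Permissions;

// RequestMinimum: the assembly refuses to load at all unless policy grants
// read access to its (hypothetical) configuration file.
[assembly: FileIOPermission(SecurityAction.RequestMinimum,
    Read = @"C:\MyApp\app.config")]

// RequestRefuse: the assembly never accepts permission to call unmanaged
// code, even if security policy would have granted it.
[assembly: SecurityPermission(SecurityAction.RequestRefuse,
    UnmanagedCode = true)]
```

Refusing permissions an assembly does not need is a defense-in-depth measure: it shrinks the damage the assembly can do if it is ever subverted.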
The process of determining the actual permission set granted to an assembly thus involves three steps:
1. Each policy level evaluates the assembly's evidence and produces a level-specific grant permission set.
2. The permission sets calculated for the individual policy levels are intersected with each other.
3. The resulting permission set is compared against the permissions the assembly has declared it requires or refuses, and the grant is modified accordingly.
Figure 1. Calculating the permission grant set of an assembly
Figure 1 shows the general calculation process. The host of the runtime supplies evidence about the assembly, which is one input to the calculation of the permission set the assembly receives. Administrable security policy (enterprise, machine, and user policy), referred to above as security settings, is the second input determining the assembly's permission set. The security policy code (contained in the SecurityManager class) runs the supplied evidence through the settings of each policy level and produces the resulting permission set, which represents the assembly's authority to access protected resources.
How is each policy level administered? A policy level represents an independent, configurable unit of security policy: each level maps assembly evidence to a permission set, and each has the same architecture. Every policy level consists of three parts, which together represent the configuration state of that level:
• A code group tree
• A list of named permission sets
• A list of policy assemblies
Let us now look at each of these constituents of every policy level in detail.
Back to top
Code Groups
At the heart of every policy level lies the code group tree, which encodes the configuration state of that policy level. A code group is essentially a conditional expression paired with a permission set: if an assembly satisfies the conditional expression, it is granted the permission set. The code groups of each policy level are arranged in the form of a tree. Whenever a conditional expression evaluates to true, the permission set is granted and traversal continues into that branch; whenever a condition is not met, the permission set is not granted and that branch is not examined further. For example, suppose a code group tree like the one below is evaluated.
Figure 2. A policy level's code group tree
Note We discuss only the semantics of the code group type used to implement the default security policy. Policy levels can also contain custom-written code groups whose semantics differ completely from those described here; again, the security system is fully extensible, leaving unlimited room for introducing new policy evaluation semantics. Suppose there is an assembly with the following evidence: it comes from www.monash.edu.au, and, because it comes from Monash University's M-Commerce Centre, it carries the strong name MCommerce.
The code group traversal proceeds as follows:
The root node has the condition All Code, which any code satisfies, so our assembly is granted the All Code permission set. It is called Nothing, and it grants the code no permissions at all. The next code group checked requires that the code come from My Computer. This condition is not satisfied, so the permission set is not granted, and none of this node's children are examined. We then return to the last code group whose condition was met (here, All Code) and continue with its children. The next code group is Zone: Internet. Because our code was downloaded from the Internet, the condition is satisfied, the permission set (presumably an Internet permission set) is granted, and we may continue checking the child code groups of this branch. The next code group has a URL condition stating that the code must come from www.microsoft.com. Since the code comes from www.monash.edu.au, this condition is not met. At this point we return to the Zone: Internet code group and look for its other children. We find the node with the condition URL: www.monash.edu.au; the condition is met, so we are granted the MonashPSet permission set. Next we find the node with the condition Strong Name: M-Commerce; this condition is also met, so we are granted the M-CommercePSet permission set. Because there are no code groups below this level, we return to the previous code group that matched and has children, and continue.
In the end, the conditions satisfied, and the permission sets granted, at this policy level are:
• Condition: All Code; permission set: Nothing
• Condition: Zone: Internet; permission set: Internet
• Condition: URL: www.monash.edu.au; permission set: MonashPSet
• Condition: Strong Name: M-Commerce; permission set: M-CommercePSet
All the permission sets matched for a particular assembly at a policy level are normally merged to produce the total permission set granted by that policy level.
Examining the code group tree of a policy level is straightforward. Appendix A describes a Microsoft Management Console snap-in that provides a graphical way to view and modify code groups (and all the other configurable components of a policy level; see below).
Named Permission Sets
A policy level contains a list of named permission sets. Each permission set represents a statement of trust to access a variety of protected resources. Named permission sets are the sets that code groups reference by name: if the condition of a code group is met, the named permission set it references is granted (see the example above). The following are some of the predefined named permission sets:
• FullTrust: allows unrestricted access to system resources.
• SkipVerification: allows an assembly to skip verification.
• Execution: allows code to execute.
• Nothing: grants no permissions. Withholding even the permission to execute effectively stops code from running.
• Internet: a permission set appropriate for code originating from the Internet. The code receives no access to the file system or registry, but can perform some limited user interface operations and can use a safe file system called isolated storage.
To view the named permission sets of a policy level, simply open the policy level node in the GUI tool mentioned in Appendix A, then open the permission sets folder.
Here is a small sample program that lists the named permission sets known at each policy level. The application is a C# program run from a local disk, so under the default policy settings it receives a fairly generous permission set.
using System;
using System.Collections;
using System.Security;
using System.Security.Policy;

namespace SecurityResolver
{
    class Sample
    {
        static void Main(string[] args)
        {
            IEnumerator i = SecurityManager.PolicyHierarchy();
            while (i.MoveNext())
            {
                PolicyLevel p = (PolicyLevel)i.Current;
                Console.WriteLine(p.Label);

                IEnumerator np = p.NamedPermissionSets.GetEnumerator();
                while (np.MoveNext())
                {
                    NamedPermissionSet pset = (NamedPermissionSet)np.Current;
                    Console.WriteLine("\tPermission set:\n\t\tName: {0}" +
                        "\n\t\tDescription: {1}",
                        pset.Name, pset.Description);
                }
            }
        }
    }
}
The program produces the output below, which has been edited for brevity and clarity.
Enterprise
    Permission set:
        Name: FullTrust
        Description: Allows full access to all resources
    Permission set:
        Name: LocalIntranet
        Description: Default rights given to applications
                     on your local intranet
...
Machine
    Permission set:
        Name: Nothing
        Description: Denies all resources, including the right to execute
...
User
...
        Name: SkipVerification
        Description: Grants right to bypass the verification
    Permission set:
        Name: Execution
        Description: Permits execution
...
Policy Assemblies
During security evaluation, it may be necessary to load other assemblies for use in the policy evaluation process. An assembly might, for example, contain a user-defined permission class that is part of a permission set referenced by a code group. Of course, the assembly containing the custom permission must itself be evaluated by security policy. If that assembly were granted a permission set that contains its own custom permission, a circular dependency would arise. To avoid this, each policy level contains a list of the assemblies trusted for use during policy evaluation. This list of required assemblies is, naturally, called the policy assemblies list, and it contains the transitive closure of all assemblies needed for security policy evaluation at that policy level. Policy evaluation for the assemblies on this list is short-circuited in order to avoid the circular dependency. The list can be modified with the GUI administration tool mentioned in Appendix A.
This completes our examination of the configurable components of each policy level: the code group tree, the named permission set list, and the policy assemblies list. Let us now look at how the permission grants derived from the security policy state, as instantiated in each policy level's configuration, are connected to the enforcement of security. In other words, so far we have only discussed how an assembly receives its grant set; if no infrastructure demanded that callers hold certain permissions, the security system would never come into play. The technique that actually makes security enforcement possible is the security stack walk.
Back to top
Stack Walks
Stack walks are an essential part of the security system. A stack walk operates as follows. Every time a method is called, a new activation record is pushed onto the stack. This record contains the parameters passed to the method (if any), the address to return to when the function completes, and any local variables. As the program runs, the stack grows and shrinks as functions are called and return. At some point during execution, a thread may need to access a system resource, such as the file system. Before access to the protected resource is allowed, a stack walk may be required to verify that every method in the calling chain has permission to access the resource. At that point the stack walk is performed, and each activation record is typically examined to check that its caller has indeed been granted the required permission. Besides full stack walks, the CAS system also allows developers to annotate resources with link-time checks, which examine only the immediate caller.
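A link-time check can be sketched declaratively as follows (a hedged illustration using the standard LinkDemand security action; the class and method names are hypothetical):

```csharp
using System.Security.Permissions;

public class NativeGateway
{
    // A link demand checks only the immediate caller, and does so once,
    // when the calling method is JIT-compiled, rather than walking the
    // whole stack on every invocation. It is cheaper than a full demand,
    // but offers correspondingly weaker protection: an untrusted caller
    // could reach this method indirectly through a trusted intermediary.
    [SecurityPermission(SecurityAction.LinkDemand, UnmanagedCode = true)]
    public static void CallIntoNativeLibrary()
    {
        // ... wrapper around unmanaged functionality ...
    }
}
```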
Modifying a stack walk
At any stage of execution, a function may need the permissions of its callers to be checked before it accesses a particular resource. At that point, the function can demand a security check for a particular permission or permission set. This triggers the stack walk, with the following result: if all callers have been granted the demanded permissions, execution continues; if any caller lacks the required permission (or permissions), an exception is thrown. The figure below illustrates this process.
Figure 3. Example of a stack walk
A function can choose to modify the stack walk, and several mechanisms exist to accomplish this. First, a function may want to vouch for the functions that call it. In this case, it can assert a particular permission: if a stack walk looking for the asserted permission occurs, then, when the function's activation record is examined for the permission, the check succeeds and the stack walk terminates, provided the function itself has the permission it asserts. Assert is itself a protected operation, because it opens access to a protected resource to all callers further up the stack. The runtime security system therefore checks that the assembly containing the asserting function has been granted the permission it attempts to assert.
Another way to modify a stack walk is for a method to deny a permission. This can be appropriate when a method knows that it should not access a resource and refuses the corresponding permission. PermitOnly provides functionality similar to Deny, in that it can cause a stack walk to fail. The difference is that Deny specifies the permission set that causes the stack walk to fail, whereas PermitOnly specifies the only permission set that allows the stack walk to continue.
Note   Use the Deny stack modifier with care. If an earlier stack frame performs an assertion, the Deny modifier is ignored. In addition, denying path-based permissions is quite difficult, because there are often many different path strings that actually refer to the same location; denying one particular path expression still leaves the other paths open. One final point to be aware of: at any time, a stack frame can have only one Deny, one PermitOnly, and one Assert active. For example, if developers need to assert several permissions, they should create a PermissionSet representing the collection and issue a single assertion. There are also methods that remove the current PermissionSet registered for a given stack-walk modifier so that another permission set can be registered; an example of such a method is System.Security.CodeAccessPermission.RevertPermitOnly.
The following example illustrates the stack-modification techniques described above:
using System;
using System.Security;
using System.Security.Permissions;

namespace PermissionDemand
{
    class EntryPoint
    {
        static void Main(string[] args)
        {
            string f = @"C:\System Volume Information";
            FileIOPermission p =
                new FileIOPermission(
                    FileIOPermissionAccess.Write, f);
            p.Demand();
            p.Deny();
            p.Demand();
            CheckDeny(p);
            p.Assert();
            CheckDeny(p);
        }
        static void CheckDeny(FileIOPermission p)
        {
            try
            {
                p.Demand();
            }
            catch (SecurityException)
            {
                Console.WriteLine("Demand failed");
            }
        }
    }
}
The preceding program produces the following output, which may seem counterintuitive at first:
Demand failed
Demand failed
In the preceding code example, the first Demand succeeds even though the path being accessed is a restricted system directory. Keep in mind that the runtime security system operates on top of the underlying operating system's security settings. The runtime security policy can therefore grant access to certain directories, and yet when managed code actually attempts to access those directories, the operating system raises an access violation. The next Demand, issued directly after the Deny, also succeeds: when a Demand executes, the activation record of the demanding method itself is not checked, only those of its callers. So although the method has denied access to the resource, its own Deny is not detected by its own Demand. The call to CheckDeny and the Demand inside it then fail, because the Deny in the calling method now sits in a caller's stack frame and is checked. Next, control returns to Main and an Assert is issued. At this point, the same stack frame holds both an Assert and a Deny for the permission. When CheckDeny is entered, the Demand once again throws an exception. Why? In essence, the Deny overrides the Assert: the denied permission set is always checked before the asserted permission set.
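The ordering rules just described can be illustrated with a toy stack-walk simulator. This is only a sketch of the semantics, not the runtime's actual implementation: the `Frame` and `StackWalk` types and the `"FileIO"` permission name are hypothetical, and permissions are modeled as plain strings.

```csharp
using System;
using System.Collections.Generic;

// Toy model of a security stack walk. Each frame carries a granted permission
// set plus optional Deny and Assert sets. Hypothetical types, for illustration.
class Frame
{
    public HashSet<string> Granted = new HashSet<string>();
    public HashSet<string> Denied = new HashSet<string>();
    public HashSet<string> Asserted = new HashSet<string>();
}

static class StackWalk
{
    // Walk from the demanding method's caller toward the root of the call chain.
    public static bool Demand(IList<Frame> stack, string permission)
    {
        foreach (var frame in stack)
        {
            if (frame.Denied.Contains(permission))   return false; // Deny is checked...
            if (frame.Asserted.Contains(permission)) return true;  // ...before Assert
            if (!frame.Granted.Contains(permission)) return false; // subset check
        }
        return true; // reached the root: every caller held the permission
    }
}

class Program
{
    static void Main()
    {
        var trusted   = new Frame { Granted = { "FileIO" } };
        var asserting = new Frame { Granted = { "FileIO" }, Asserted = { "FileIO" } };
        var untrusted = new Frame(); // granted nothing

        // An untrusted caller anywhere in the chain fails the walk.
        Console.WriteLine(StackWalk.Demand(new[] { trusted, untrusted }, "FileIO"));   // False

        // An Assert below the untrusted caller terminates the walk successfully.
        Console.WriteLine(StackWalk.Demand(new[] { asserting, untrusted }, "FileIO")); // True

        // A Deny in the same frame overrides the Assert.
        asserting.Denied.Add("FileIO");
        Console.WriteLine(StackWalk.Demand(new[] { asserting, untrusted }, "FileIO")); // False
    }
}
```

The key line is the order of the first two checks inside the loop: because Deny is consulted before Assert, a frame that both denies and asserts a permission still fails the demand, which is exactly the behavior the example program above exhibits.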
Briefly, demanding a security stack walk is the runtime security system's method of protecting managed resources. The permission set granted to an assembly is the result of the policy evaluation performed at each policy level, and the demanded permissions are checked against that granted set. If the former is a subset of the latter, access to the protected resource can proceed. This subset check is performed for every caller of the managed resource in the call chain, unless the stack walk is modified as described above. The security stack walk thus combines two aspects of the runtime security system: 1) the configurable mapping between evidence and permissions; and 2) the protection of resources by requiring that all callers hold a certain level of permission. There are two distinct ways to express stack-walk demands and stack modifications programmatically: declarative security and imperative security.
Declarative and imperative security
The .NET framework allows developers to express security constraints in two ways. Expressing security constraints declaratively means using custom attribute syntax. These annotations are persisted in the type's metadata, so they are effectively resolved at compile time. Below is an example of declarative security:
[PrincipalPermissionAttribute(SecurityAction.Demand,
    Name = @"CULEX\Damien")]
Expressing security requirements imperatively means creating instances of permission objects at run time and calling methods on them. Below is the imperative equivalent of the security demand in the example above:
FileIOPermission p = new FileIOPermission(
    FileIOPermissionAccess.Write, f);
p.Demand();
Why would you choose one style over the other? First, all security actions can be expressed declaratively, but not all of them can be expressed imperatively. On the other hand, the declarative style requires that all security constraints be fixed at compile time, and only allows annotations at the scope of a whole method, class, or assembly. The imperative style is more flexible, because it allows constraints to be determined at run time, for example on only some execution paths within a method. A useful side effect of persisting declarative security requirements in metadata is that tools can extract this information and provide functionality based on it: a tool can, for instance, display the set of declarative security attributes on an assembly. This is not possible with imperative security. Developers need to understand and become familiar with both styles.
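The point about tooling can be sketched with reflection. Note that real declarative security attributes are stored in a dedicated metadata table rather than as ordinary custom attributes, so the sketch below uses a hypothetical `RequiresRoleAttribute` as a stand-in to show the general principle: annotations persisted in metadata can be enumerated without ever running the annotated code.

```csharp
using System;
using System.Reflection;

// Hypothetical stand-in for a declarative security annotation.
[AttributeUsage(AttributeTargets.Method)]
class RequiresRoleAttribute : Attribute
{
    public string Role { get; }
    public RequiresRoleAttribute(string role) { Role = role; }
}

class Service
{
    [RequiresRole("Examiner")]
    public void Grade() { }

    public void Browse() { }
}

class Program
{
    static void Main()
    {
        // A tool can list these demands statically. An equivalent check written
        // imperatively inside a method body would be invisible to such a tool.
        foreach (var m in typeof(Service).GetMethods(
            BindingFlags.Public | BindingFlags.Instance | BindingFlags.DeclaredOnly))
        {
            var attr = m.GetCustomAttribute<RequiresRoleAttribute>();
            Console.WriteLine(m.Name + ": " +
                (attr == null ? "no demand" : "demands role " + attr.Role));
        }
    }
}
```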
Considerations
Because code executed from the local hard disk is generally granted significantly more trust than code executed from any other location, downloading code to disk and then executing it has very different semantics from executing it remotely. With earlier systems this distinction was not very visible, for example when a browser chose to download code rather than execute it remotely; the assumption was that the code would be checked before execution, for instance by a virus scanner. With code access security the situation is quite different: under the default security policy, executing code as remote code significantly tightens security. This may, however, increase the burden on the system or the user, who needs to understand the difference between managed and unmanaged code. Finally, there is one more aspect of the runtime security system. Users of older security systems will find it the most familiar, because it is based on user identity: role-based security.
Role-based security
The code access security system presented so far is centered on the identity of code, rather than of users or roles. Nevertheless, there is still a need to express security settings in terms of user identity. The runtime security system therefore also includes role-based security features.
Role-based security uses the concepts of users and roles, similar to the security implementations of many operating systems. The two core abstractions in role-based security are identity and principal. An identity represents the user on whose behalf code is executing. Keep in mind that this may be a logical user defined by the application or developer, not necessarily a user visible to the operating system. A principal represents the abstraction of a user together with the roles that user is in. Classes that represent a user identity implement the IIdentity interface; in the .NET Framework, the generic class providing a default implementation of this interface is GenericIdentity. Classes that represent principals implement the IPrincipal interface; in the .NET Framework, the generic class providing a default implementation of this interface is GenericPrincipal. At run time, each thread has exactly one current principal object associated with it, and code can access and change this object, subject to security requirements. Each principal has exactly one identity object. Logically, the runtime structure of these objects resembles the following:
Figure 4. Role-based security structure
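For applications whose user model does not fit the generic classes, the two interfaces can also be implemented directly. The sketch below is a minimal, hypothetical implementation pair (the `AppIdentity` and `AppPrincipal` names are invented for illustration); GenericIdentity and GenericPrincipal already provide equivalent behavior out of the box.

```csharp
using System;
using System.Security.Principal;

// Minimal custom identity: the name is an application-level concept,
// not an operating-system account. Hypothetical type, for illustration.
class AppIdentity : IIdentity
{
    public AppIdentity(string name) { Name = name; }
    public string Name { get; }
    public string AuthenticationType => "Custom";
    public bool IsAuthenticated => true;
}

// Minimal custom principal: pairs one identity with a set of roles.
class AppPrincipal : IPrincipal
{
    private readonly string[] roles;
    public AppPrincipal(IIdentity identity, string[] roles)
    {
        Identity = identity;
        this.roles = roles;
    }
    public IIdentity Identity { get; }
    public bool IsInRole(string role) => Array.IndexOf(roles, role) >= 0;
}

class Program
{
    static void Main()
    {
        IPrincipal p = new AppPrincipal(new AppIdentity("Damien"),
                                        new[] { "Lecturer", "Examiner" });
        Console.WriteLine(p.Identity.Name);        // Damien
        Console.WriteLine(p.IsInRole("Examiner")); // True
        Console.WriteLine(p.IsInRole("Employee")); // False
    }
}
```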
The following program illustrates how developers can use these generic classes. In this example, the developer supplies the security model: the name "Damien" and the roles "Lecturer" and "Examiner" are not related to any user or role known to the operating system.
using System;
using System.Threading;
using System.Security;
using System.Security.Principal;

namespace RoleBasedSecurity
{
    class Sample
    {
        static void Main(string[] args)
        {
            string[] roles = { "Lecturer", "Examiner" };
            GenericIdentity i = new GenericIdentity("Damien");
            GenericPrincipal g = new GenericPrincipal(i, roles);
            Thread.CurrentPrincipal = g;
            if (Thread.CurrentPrincipal.Identity.Name == "Damien")
                Console.WriteLine("Hello Damien");
            if (Thread.CurrentPrincipal.IsInRole("Examiner"))
                Console.WriteLine("Hello Examiner");
            if (Thread.CurrentPrincipal.IsInRole("Employee"))
                Console.WriteLine("Hello Employee");
        }
    }
}
This program produces the following output:
Hello Damien
Hello Examiner
Depending on the developer's needs, the Microsoft(R) Windows(R) security model can also be used. In that case, users and roles are tied to the users and groups on the host computer, so it may be necessary to create these accounts on the host system. The following example uses a user account on the local computer. It also employs some syntactic sugar: the PrincipalPermissionAttribute class in the .NET Framework effectively encapsulates calls to methods such as IsInRole, allowing developers to use a simplified syntax.
using System;
using System.Security.Permissions;
using System.Security.Principal;

namespace RoleBased
{
    class Sample
    {
        [PrincipalPermissionAttribute(SecurityAction.Demand,
            Name = @"CULEX\Damien")]
        public static void UserDemandDamien()
        {
            Console.WriteLine("Hello Damien!");
        }
        [PrincipalPermissionAttribute(SecurityAction.Demand,
            Name = @"CULEX\Dean")]
        public static void UserDemandDean()
        {
            Console.WriteLine("Hello Dean!");
        }
        static void Main(string[] args)
        {
            AppDomain.CurrentDomain.SetPrincipalPolicy(
                PrincipalPolicy.WindowsPrincipal);
            try
            {
                UserDemandDamien();
                UserDemandDean();
            }
            catch (Exception)
            {
                Console.WriteLine("Exception thrown");
            }
        }
    }
}
PrincipalPermissionAttribute ensures that the runtime checks each call to the UserDemandDamien and UserDemandDean methods. Of course, the program may be executed by Dean, by Damien, or by someone else, so at least one of these two security checks will fail, if not both. The first line of Main sets the principal policy of the application domain to the Windows principal policy, which is required for this example to work. When the user "CULEX\Damien" executes the program, the following output is produced:
Hello Damien!
Exception thrown
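The role check that the attribute encapsulates can also be written by hand. The sketch below is an approximation of that imperative equivalent, not the attribute's actual implementation: it uses GenericPrincipal instead of a Windows principal so that it does not depend on real machine accounts, and it throws UnauthorizedAccessException rather than the SecurityException the runtime would raise.

```csharp
using System;
using System.Security.Principal;
using System.Threading;

class Program
{
    // Rough imperative equivalent of
    // [PrincipalPermissionAttribute(SecurityAction.Demand, Name = ...)]:
    // inspect Thread.CurrentPrincipal by hand and throw if the demand fails.
    public static void DemandName(string name)
    {
        var principal = Thread.CurrentPrincipal;
        if (principal == null || principal.Identity.Name != name)
            throw new UnauthorizedAccessException("Demand for " + name + " failed");
        Console.WriteLine("Hello " + name + "!");
    }

    static void Main()
    {
        // GenericPrincipal stands in for the Windows principal of the original
        // example, so no real user accounts are required.
        Thread.CurrentPrincipal =
            new GenericPrincipal(new GenericIdentity("Damien"), new string[0]);
        try
        {
            DemandName("Damien"); // succeeds: matches the current identity
            DemandName("Dean");   // fails: the current identity is "Damien"
        }
        catch (UnauthorizedAccessException)
        {
            Console.WriteLine("Exception thrown");
        }
    }
}
```

Run under the "Damien" principal, this prints "Hello Damien!" followed by "Exception thrown", mirroring the attribute-based program's output.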
Summary
Security is a fundamental, built-in feature of the .NET framework. This article has presented an overview of the security system. Some of the main concepts to take away are:
• The security system is extensible: many of its concepts are expressed as types in the .NET framework, so developers can extend and modify them to suit their needs.
• The security system provides distinct kinds of security models, in particular role-based security and evidence-based (code access) security. These models address different needs and are complementary.
• Code access security is centered on the identity of the code, so even when the operating system user context grants broad access to the computer, code can still be executed in a less than fully trusted security context.
This article has not discussed some parts of the security system, such as cryptography, in any detail. Please refer to the white papers on those topics to learn about the details not covered here.
Thank you
Brian Pratt, Loren Kohnfelder, and Matt Lyons provided great help and support during the writing of this article, for which I would like to thank them.
Appendix A: MSCORCFG.MSC
A Microsoft Management Console snap-in is available that lets you view and edit code access security policy. The figure below shows the snap-in's interface, highlighting some of the concepts discussed in this article.
Figure 5. The Microsoft Management Console snap-in interface
You can open the tool by clicking Administrative Tools in Control Panel, and then clicking the Microsoft .NET Framework Configuration shortcut.