Abstract: Component-based software is easy to attack. The large number of loosely controlled DLLs that make up a typical application are a major source of the problem. The code access security model of the Microsoft .NET Framework's common language runtime (CLR) is designed to address this class of vulnerability. In this model, the CLR acts as a traffic cop for assemblies, determining where an assembly came from and what security restrictions should be placed on it. Another part of the solution is the set of classes with built-in security that ship with the .NET Framework. These are the classes that are called when .NET code performs dangerous operations (reading and writing files, displaying dialog boxes, and so on). Of course, if a component calls into unmanaged code, it can bypass code access security measures. This article discusses these measures and other security issues.
COM components can be very useful. They can also be very dangerous. Building an application on Windows increasingly means buying third-party COM components (or even traditional C-interface DLLs) and aggregating them into a single process. Careful use of this modular approach can improve reuse, reduce coupling, and bring other long-term benefits to the software development process, but it often also introduces glaring security holes.
In the July 2000 issue of MSDN Magazine, I published the second part of a two-part series in which I discussed a classic software attack: exploiting a buffer overflow. A silly but very common bug somewhere in a DLL can allow a determined attacker not only to crash the host process, but also to hijack its security context. Building a process from a wide assortment of third-party DLLs makes the problem even worse, because the attacker has that much more code in which to hunt for exploitable security holes, and plenty of time to prepare the attack from a safe location before ever launching it.
Authenticode was supposed to help solve this problem, and while it is better than nothing, it takes a punitive rather than a preventive approach. When an attacker uses your email client to send pornographic Web site advertisements to all of your close friends and relatives, with your return address on the messages, it is small comfort to know the identity of the attacker. Typical users will happily install any DLL that promises to enhance their online experience, and even when authors legally sign those DLLs, how can a non-programmer tell which DLL actually caused the damage? What if the attacking DLL deletes or corrupts the very hard drive it is installed on? What happens to the evidence? How do you prove the attacker's guilt?
Clearly you need more than someone to hold responsible after the fact; you also need a way to restrict what code can do in the first place (especially mobile code such as ActiveX components that you rely on). When an administrator runs a process built from components, he should have some assurance that his security context will not be abused by a malicious component. Verification of managed code addresses the buffer overflow problem. Code access security (while not a silver bullet) is an important second step, and it is the focus of this article.
Note that this article is based on a technology preview of the common language runtime (CLR), and the CLR may change before its final release.
Overview
First, I will sketch the basic workings of code access security. Please take this section as an overview; the details come later. Those who are familiar with the Java 2 security model will find that the CLR uses a similar one.
Typically, when you need to perform a security-sensitive operation (reading or writing a file, changing an environment variable, accessing the clipboard, displaying a dialog box, and so on), you perform it through a system-provided class. These classes were written with security demands built in; a demand tells the system what is being requested, giving the system the chance to approve or deny the request. If the system denies the request, it does so by throwing an exception of type SecurityException. How does the system decide whether to approve or deny each request? It consults a security policy that can be customized machine by machine and user by user. Security policy in the CLR is really quite simple at a conceptual level. Permissions are granted to an assembly when it is loaded, based on the answers to two general questions (each of which may be asked in several slightly different ways, as I'll describe later): Where did this assembly come from? Who authored this assembly?
Security policy maps the various answers to these questions onto particular permission sets. For example, you can say "Allow code from https://www.foobar.com/baz to read files under c:\quux, but do not allow it to display dialog boxes." In this case, the question being asked is "What URL did this assembly come from?", which is just another way of asking the first question. At a high level, that summarizes code access security in the CLR, but things get much more interesting when you drill down a bit. Let's do that now.
Demanding Permissions
Imagine you are writing the following class, which performs simple file read and write operations (implementation omitted):
public class MyFileAccessor {
    public MyFileAccessor(string path,
                          bool readOnly) {}
    public void Close() {}
    public string ReadString() {}
    public void WriteString(string stringToWrite) {}
}
Here is a typical way to use this class:
class Test {
    public static void Main() {
        MyFileAccessor fa = new
            MyFileAccessor("c:\\foo.txt", false);
        fa.WriteString("Hello");
        fa.WriteString("World");
        fa.Close(); // flush the file
    }
}
Given this model, the most obvious place to implement a security check is in the constructor. The constructor's parameters tell you exactly which file is being accessed and whether it will be opened for read or read/write access. If you feed this information to the underlying security policy and the result is a SecurityException, you can simply let that exception propagate back to the caller. In this case, since the constructor never completes, no instance of your class is ever handed to the caller, so the caller cannot call any of the other nonstatic member functions. Centralizing the check this way greatly simplifies the class, which is a good thing from a programmer's perspective as well as from a security perspective. The less access-checking logic you have to write, the less chance you have of getting it wrong.
The drawback of this approach is that if a client constructs an instance of your class (satisfying the initial demand in the constructor) and then hands that instance to another client (perhaps in another assembly), the new client will not be subject to the constructor-centric code access demand. This is similar to the way handles to kernel objects work in Windows 2000, where the access check is performed when the handle is opened. Here again you can see the tension between performance and security.
In practice, you would not write a class like MyFileAccessor to access files. Instead, you would use a system-provided class (System.IO.FileStream, for instance) that has been designed to perform the appropriate security checks automatically. Typically these checks are performed in the constructor, with all of the performance and security implications I discussed earlier. Like much of the Microsoft .NET architecture, for the sake of extensibility, the security demands that the FileStream class makes are available for direct use by your own components. Figure 1 shows how you could add the same security checks to the MyFileAccessor constructor (in case you couldn't rely on a system-provided class to perform the checks on your behalf). The code in Figure 1 performs the security check in two steps. First it creates an object representing the permission in question, here the permission to access a file. Then it demands that permission. This causes the system to look at the permissions granted to the caller; if the grant does not include the requested permission, Demand throws a SecurityException. In fact, Demand performs a somewhat more sophisticated check than this, as I'll discuss shortly.
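Figure 1 is not reproduced here, but a constructor-time demand of this kind would look something like the following sketch (the class shape mirrors MyFileAccessor; the exact code in the figure may differ):

```csharp
using System.Security.Permissions;

public class MyFileAccessor {
    public MyFileAccessor(string path, bool readOnly) {
        // Step 1: build an object representing the permission in question:
        // read-only or read/write access to the specified file.
        FileIOPermission perm = new FileIOPermission(
            readOnly ? FileIOPermissionAccess.Read
                     : FileIOPermissionAccess.Read |
                       FileIOPermissionAccess.Write,
            path);

        // Step 2: demand it. The CLR checks the callers' granted
        // permissions and throws SecurityException on failure.
        perm.Demand();

        // ... open the file ...
    }
}
```

If the demand fails, the constructor never completes, so the caller never obtains a usable instance.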
Figure 2 summarizes the code access permissions exposed by the runtime as of this writing. Be aware that there is also a set of code identity permissions, which directly test the answers to the various identity questions mentioned in the overview. However, to avoid hardcoding security policy decisions into components, code designed for general use does not often use the identity permissions. I'll discuss security policy in more detail later.
It is worth noting at this point that even once code has successfully passed the .NET security checks, the underlying operating system will still perform its own access checks. (For instance, Windows 2000 and Windows NT restrict access to files on NTFS partitions via access control lists.) So even if the .NET security system would grant an assembly unrestricted access to the local file system, if the component is hosted in a process running as Alice, it will only be able to open the files that Alice can open according to the underlying operating system's security policy.
Luring Attacks
Recall the two fundamental questions asked to determine what permissions should be granted to an assembly: "Where did the assembly come from?" and "Who authored the assembly?" The answers to these questions determine the basic set of permissions granted to the assembly.
Figure 3 MyFileAccessor
Imagine implementing MyFileAccessor in terms of a FileStream object, as shown in Figure 3. The code would look something like this:
using System.IO;

public class MyFileAccessor {
    private FileStream m_fs;
    public MyFileAccessor(string path,
                          bool readOnly) {
        m_fs = new FileStream(path, FileMode.Open,
                              readOnly ? FileAccess.Read
                                       : FileAccess.ReadWrite);
    }
    // ...
}
Say you implemented MyFileAccessor this way and gave it to your friend Alice, who installed it on her local hard drive. What code access permissions does the MyFileAccessor assembly have now? Looking at my own local security policy, I see that it grants local components a permission set named FullTrust, which includes unrestricted access to the file system (remember that the underlying operating system may further restrict this). Figure 4 shows how MyFileAccessor, installed on Alice's local hard drive, might be used by various sorts of components. A trusted local component can use MyFileAccessor to access files, and that is fine. But if Alice happens to point her browser at a rogue Web site, a .NET component downloaded from that site could try to use MyFileAccessor for malicious purposes. This is an example of a luring attack: MyFileAccessor could be lured into doing evil (for instance, opening a private file that Alice would rather not disclose).
Figure 4 Example of a Luring Attack
To thwart this sort of abuse, the CLR does not force every intermediate component to perform its own access checks; instead, the CLR verifies that every caller in the call chain has the permission that FileStream demands. In Figure 4, when a local component called NotepadEx uses MyFileAccessor to open a file, it is granted unrestricted access, because the entire call chain consists of locally installed assemblies. But when RogueComponent tries to use MyFileAccessor to open a file, the stack walk that the CLR performs when FileStream calls Demand discovers that one of the callers in the chain does not have the required permission, so Demand throws a SecurityException.
Implied Permissions
Before going further, note that some permissions imply others. For example, if you are granted all access to the directory c:\temp, you are implicitly granted all access to its subdirectories, their subdirectories, and so on. This relationship is discoverable via the IsSubsetOf method present on all code access permission objects. For example, if you run the code in Figure 5, you will get the following output:
p2 is a subset of p1
The existence of implied permissions makes administration considerably easier, but remember to be as specific as possible when demanding permissions. The CLR automatically compares your demand against the granted permissions to determine whether it is a subset.
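Figure 5 is not reproduced here, but a comparison of this kind would look something like the following sketch (the paths are placeholders of my choosing):

```csharp
using System;
using System.Security.Permissions;

class SubsetDemo {
    static void Main() {
        // All access to c:\temp and everything below it...
        FileIOPermission p1 = new FileIOPermission(
            FileIOPermissionAccess.AllAccess, @"c:\temp");

        // ...implies read access to a file in a subdirectory.
        FileIOPermission p2 = new FileIOPermission(
            FileIOPermissionAccess.Read, @"c:\temp\logs\app.log");

        if (p2.IsSubsetOf(p1))
            Console.WriteLine("p2 is a subset of p1");
    }
}
```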
Protecting Yourself
An assembly is granted a basic set of permissions when it is loaded. The CLR and its host discover these permissions by presenting questions about the assembly to security policy (which converts the answers into permissions). But just because an assembly has been granted a set of permissions by policy does not mean that all of those permissions need to be in effect at all times.
If an assembly is installed locally, it will have broad, perhaps completely unrestricted, permissions, at least as far as code access security is concerned. Imagine that such a highly trusted assembly calls a script provided by the user. Depending on what the script is supposed to do, the assembly making the call may want to restrict its effective permissions before the call (see Figure 6).
This code places additional restrictions on the current stack frame. That means that if Calculate tries to access environment variables, read the contents of the clipboard, or touch any files under the two sensitive directories named in the code, or if any component that Calculate uses tries to perform any of those operations, the stack walk the CLR performs to check access will notice that this stack frame has explicitly denied those permissions, and it will refuse the request. Notice that in this case I combined several permissions into a single PermissionSet and denied the entire set. That is because each stack frame can hold only one denied permission set, and each call to the Deny function replaces the current stack frame's old denied set. This means that the following code does not do what you might expect:

FileIOPermission p1 = new FileIOPermission(
    FileIOPermissionAccess.AllAccess,
    @"c:\sensitiveStuff");
FileIOPermission p2 = new FileIOPermission(
    FileIOPermissionAccess.AllAccess,
    @"c:\moreSensitiveStuff");
p1.Deny(); // p1 is denied
p2.Deny(); // now p2 is denied (not p1)

In this code, the second call to Deny effectively overwrites the first, so only p2 ends up being denied. Using a permission set allows multiple permissions to be denied at once. Calling the static RevertDeny function on the CodeAccessPermission class clears the denied set for the current stack frame.
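A sketch of the combined-PermissionSet approach might look like this (Figure 6 is not reproduced here; the directory names and the Calculate call are placeholders):

```csharp
using System.Security;
using System.Security.Permissions;

class Restrictor {
    static void CallScriptSafely() {
        // Combine everything to be denied into a single set, since each
        // stack frame holds only one denied permission set at a time.
        PermissionSet ps = new PermissionSet(PermissionState.None);
        ps.AddPermission(
            new EnvironmentPermission(PermissionState.Unrestricted));
        ps.AddPermission(new FileIOPermission(
            FileIOPermissionAccess.AllAccess, @"c:\sensitiveStuff"));
        ps.AddPermission(new FileIOPermission(
            FileIOPermissionAccess.AllAccess, @"c:\moreSensitiveStuff"));
        ps.Deny();
        try {
            // Calculate();  // the user-supplied script entry point
        }
        finally {
            // Clear the denied set for this stack frame.
            CodeAccessPermission.RevertDeny();
        }
    }
}
```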
If you find yourself needing to deny lots of individual permissions, you may find it easier to take the opposite approach: instead of Deny and RevertDeny, use PermitOnly and RevertPermitOnly. This works well when you know exactly which permissions you want to allow.
Asserting Your Own Permissions
The stack-walk mechanism protects classes that can be used in many different ways (FileStream and MyFileAccessor, for instance). FileStream can be used to log errors to a well-defined log file on the user's hard drive; it could also be used maliciously to overwrite the local security policy file. The stack-walk mechanism is designed to ensure that no matter how many intermediate assemblies are traversed before the FileStream object is instantiated, all of those assemblies must satisfy the demand for file system access. This thwarts luring attacks like the one described earlier.
For all the benefits the stack-walk mechanism provides, sometimes it just gets in your way. Figure 7 shows a class called ErrorLogger that provides the well-defined error logging service I mentioned earlier. If the ErrorLogger class is installed on Alice's local hard drive, the assembly housing the class will be granted full access to the file system under code access security on Alice's machine (as of this writing, the default security policy grants local components unrestricted access to the file system). But what if the class was designed to provide its services to other assemblies, some of which are not authorized to write to the local file system?
Clearly, ErrorLogger is much safer than MyFileAccessor, which was designed to be used by any component to access any file the client specified. ErrorLogger is a simple class that can only be used to append strings to one well-defined file. However, because the stack-walk mechanism does not know this, when the FileStream constructor demands permissions of its callers, the demand will fail unless every caller in the chain has been granted FileIOPermission. If this gets in your way, you can have ErrorLogger assert its own permissions so that clients can still write to the log file. Figure 8 shows the new implementation. The new version of the ErrorLogger class (installed as a local component) will still be granted full access to the file system. In this case, it asserts file I/O permission before using FileStream to actually open the file. Note that you can only assert permissions that your assembly has actually been granted.
Each stack frame can potentially hold an asserted permission set, and when a stack walk reaches such a frame, any permissions asserted there are considered satisfied. Unless the demand includes permissions beyond those asserted, the stack walk does not even continue. Note that I didn't bother calling RevertAssert. In this case RevertAssert is unnecessary, because the assertion can simply stay in place until the call to Log returns, at which point the stack frame (including its asserted permission set) is destroyed. The same applies to Deny and PermitOnly.
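Figure 8 is not reproduced here, but an assertion-based logger would look something like this sketch (the class name echoes the ErrorLogger2 discussed below; the log path is a placeholder):

```csharp
using System.IO;
using System.Security.Permissions;

public class ErrorLogger2 {
    private const string LogPath = @"c:\logs\errors.log"; // placeholder

    public void Log(string message) {
        FileIOPermission perm = new FileIOPermission(
            FileIOPermissionAccess.Append, LogPath);
        // Stop the stack walk at this frame: callers need not hold
        // FileIOPermission themselves to append to this one file.
        perm.Assert();

        using (StreamWriter sw = File.AppendText(LogPath)) {
            sw.WriteLine(message);
        }
        // No RevertAssert needed; the assertion dies with this stack frame.
    }
}
```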
Who Has the Right to Assert?
Clearly, the ability to assert permissions could be abused. A local component, being fully trusted once installed, could simply assert all permissions and do whatever it liked, no matter who its callers were. That would obviously be a terrible idea, but how do you know that a component won't do this once it is installed on your machine? Because assertion is such a powerful feature that is open to abuse, its use is itself controlled by a permission class, SecurityPermission. This class actually represents several distinct permissions used to control how security-related classes and policies may be used; most of them act as meta-permissions governing the security infrastructure itself.
Consider the ErrorLogger2 class shown in Figure 8. By asserting its own permission to write data to a single well-known file, does it undermine the system's security policy? What sort of attack would be possible? A malicious component could inject bogus error messages to confuse the user. It could also send very large strings in an attempt to fill up the user's hard drive. So while ErrorLogger2 is certainly safer than a more general-purpose class (like MyFileAccessor) that asserts its own permissions, the very presence of the assertion still leaves some room for attack.
Should you avoid assertions just because of problems like these? As with most questions about security, there is no clear-cut answer. Assertions certainly complicate the security model, so a good rule of thumb is to have any use of this feature reviewed by your peers. Also be aware that your assertions may be refused by security policy, since many administrators will want to deny the use of assertion to all but the most trusted local components. If you rely on assertions throughout your code, such a refusal could bring your application to a halt. You may find it helpful to catch the SecurityException produced by a call to Assert, and attempt to get your work done with a full stack walk when asserting fails.
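That fallback could be sketched like this (a minimal illustration; the permission being asserted and the work being done are placeholders):

```csharp
using System.Security;
using System.Security.Permissions;

class AssertWithFallback {
    static void DoWork(FileIOPermission perm) {
        try {
            // Try to stop the stack walk at this frame...
            perm.Assert();
        }
        catch (SecurityException) {
            // ...but if policy refuses us the right to assert, fall
            // through and let demands propagate up the stack as usual.
        }
        // Perform the file I/O here; if no assertion is in place, every
        // caller on the stack must satisfy the demanded permission.
    }
}
```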
One place where assertion is a must is when crossing the boundary from managed to unmanaged code. Take the system-provided FileStream class as an example. Obviously, in order to actually open, close, read, and write files, this class needs to call into the underlying operating system, which is implemented in unmanaged code. When these calls are made, the interop layer demands SecurityPermission, specifically, the UnmanagedCode privilege. If this demand were allowed to propagate up the stack, all code would be prohibited from opening files unless it had also been granted permission to call unmanaged code. So the FileStream class converts this very general demand into a finer-grained one, specifically a demand for FileIOPermission. It achieves this by demanding FileIOPermission in its constructor. If that demand succeeds, the FileStream object can safely assert the UnmanagedCode permission before actually calling into the operating system. The unmanaged calls that FileStream makes are not arbitrary calls into unmanaged code, but calls that accomplish a specific goal, the one authorized by the demand in the constructor. The mscorlib assembly that houses FileStream and other trusted components is trusted to perform these sorts of policy conversions, and is therefore granted the right to assert. Before you grant any other assembly the right to assert, you should be confident that it will not compromise your security policy. This is a high level of trust.
Declarative Attributes
If you plan to use Deny, PermitOnly, or Assert in your component, you should know that you can do so not only programmatically but also declaratively. For example, Figure 9 shows a third implementation of the error logger, this one using a declarative attribute (here, FileIOPermissionAttribute). Remember that in C# the Attribute suffix can be omitted for brevity when applying an attribute.
There are two benefits to this approach. First, it is easier to type. Second, and more importantly, declarative attributes become part of the component's metadata, where they are easily discovered through reflection. This allows a tool to scan an assembly and find, for instance, the methods and classes that assert various permissions. A tool could also discover potential conflicts with security policy; remember that assertion will often be forbidden, especially for components not installed on the local hard drive.
The main drawback of this approach is that if the request made by the attribute is refused, there is no way for the method to catch the resulting exception. This particular drawback only matters for the Assertion permission; if you are simply using declarative attributes to restrict permissions, you will never have this problem.
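Figure 9 is not reproduced here, but a declarative version of the logger would look something like this sketch (the path is a placeholder, and the Attribute suffix is omitted as the text describes):

```csharp
using System.IO;
using System.Security.Permissions;

public class ErrorLogger3 {
    // The declarative form of the assertion: it applies for the
    // duration of the method, and it lives in the metadata where
    // tools can discover it via reflection.
    [FileIOPermission(SecurityAction.Assert,
                      Append = @"c:\logs\errors.log")]
    public void Log(string message) {
        using (StreamWriter sw = File.AppendText(@"c:\logs\errors.log")) {
            sw.WriteLine(message);
        }
    }
}
```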
The SecurityAction enumeration, used with the declarative permission attributes, has several options you can use to fine-tune the permissions available to your code, as well as to place demands on your clients at load time or runtime. Figure 10, taken from the .NET Framework SDK documentation, lists these options and classifies them. For example, compare these two attribute declarations:
[SecurityPermission(SecurityAction.Demand,
                    UnmanagedCode = true)]
[SecurityPermission(SecurityAction.LinkDemand,
                    UnmanagedCode = true)]
If the first of these two declarations is applied to a method, a normal stack walk occurs each time the method is called at runtime. If the second declaration is used instead, a single check occurs per reference, at just-in-time (JIT) compile time. Moreover, the second declaration only checks the permissions of the immediate caller; LinkDemand does not perform a full stack walk. I will come back to some of the other options in this list when I discuss security policy.
Attacks on Code Access Security
Having spent some time with the security infrastructure, I want to look at other attacks you need to watch for. Two that should be apparent by now are abuse of the Assertion and UnmanagedCode security permissions. I have already discussed the dangers of assertion, but calling unmanaged code is another thorny problem.
If an assembly is allowed to call unmanaged code, it can bypass virtually all code access security. For example, if an assembly is granted no access to the local file system but is allowed to call unmanaged code, it can simply call the Win32 file system APIs directly to do its evil deeds. I mentioned earlier that these calls are subject to whatever operating system security checks are in effect, but that is often cold comfort, especially if the attacker's code ends up loaded into a privileged process (a browser run by an administrator, for example, or a daemon process running in the System logon session).
From an administrator's logon session, it is easy to imagine an attacker using the Win32 file APIs to simply rewrite the security policy for the local machine (currently stored in an XML file writable by administrators). Or, for that matter, an attacker could use the same Win32 file APIs to replace the CLR execution engine itself. If an attacker manages to execute unmanaged code with administrative privileges, you have lost. Clearly these attacks can be thwarted through careful administration of security policy, which I will discuss shortly.
Another obvious attack involves the elevation of permissions that results from moving an assembly. An assembly used from an Internet URL will normally have far fewer permissions than the same assembly installed locally. One of an attacker's primary goals will be to trick a victim into installing a copy of an assembly onto the local hard drive, where its permissions will be elevated. Since many users will happily install ActiveX controls from Internet sites without a second thought, this promises to be a sticky problem. For a discussion of potential attacks and countermeasures, see my Web site (http://www.develop.com/kbrown).
Security Policy
Throughout this article I have implied the existence of a security policy that assigns code access permissions. This policy can become quite complex, but once you grasp the basics it is easy to understand how it works. First, note that permissions are granted on a per-assembly basis. I break the process of discovering these permissions into three basic steps:
• Collect evidence
• Present the evidence to security policy and discover the assigned permission set
• Fine-tune the permission set based on the assembly's requests
Evidence
When I first started experimenting with the CLR, I thought "evidence" was a strange term. It sounds more like a security infrastructure designed by a group of lawyers than by computer scientists. But after spending some time with the CLR, I find the name really does fit. In court, evidence is used to provide information that helps answer the questions posed to the jury: "Was this the murder weapon?", or "Who signed this contract?"
In the case of the CLR, evidence is a collection of answers to the questions posed by security policy. Based on these answers, security policy can grant permissions to code. Here are the questions the policy asks as of this writing:
• What site did the assembly come from?
• What URL did the assembly come from?
• What zone did the assembly come from?
• What is the strong name of the assembly?
• Who signed the assembly?
The first three questions are simply different ways of asking where the assembly came from, while the last two focus on the assembly's author.
In court, evidence is submitted by one party and may be rebutted by the opposing party, with the jury helping to determine whether the evidence is well founded. In the case of the CLR, two entities may gather evidence: the CLR itself and the host of the application domain. Since this is an automated system with no jury, whoever submits evidence to be evaluated by policy must be trusted not to submit false evidence. This is why a special security permission, ControlEvidence, is required. The CLR itself is naturally trusted to provide evidence, since you must already trust it to enforce security policy at all. The ControlEvidence security permission therefore applies to hosts. As of this writing, three hosts are provided by default: Microsoft Internet Explorer, ASP.NET, and the shell host (for launching CLR applications from the shell).
To make this more concrete, consider the following function found in the System.AppDomain class:
public int ExecuteAssembly(
    string fileName,
    Evidence assemblySecurity
);
Even though the browser may have downloaded an assembly into a cache on the local file system, it can provide evidence of the assembly's actual origin through the second parameter.
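A host granted ControlEvidence might supply origin evidence along these lines (a hedged sketch; the URL and cache path are placeholders):

```csharp
using System;
using System.Security.Policy;

class HostSketch {
    static void RunDownloadedAssembly(AppDomain domain, string cachedPath) {
        // Even though the file now sits in a local cache, tell policy
        // where the assembly really came from.
        Evidence evidence = new Evidence();
        evidence.AddHost(new Url("https://www.foobar.com/baz/some.exe"));
        evidence.AddHost(new Zone(SecurityZone.Internet));

        domain.ExecuteAssembly(cachedPath, evidence);
    }
}
```

Policy will then evaluate the assembly as Internet-zone code, regardless of its cached location on disk.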
Evaluating Security Policy
Once the host and the CLR have gathered all the evidence, it is submitted to security policy as a set of objects (wrapped in a collection object of type Evidence). The type of each object in the collection indicates the kind of evidence it represents, and there is a corresponding class for each of the questions I listed earlier:
Site
Url
ApplicationDirectory
Zone
StrongName
Publisher
Security policy consists of three distinct levels, each of which is a collection of serialized objects. Each of these objects is called a code group, and represents a question to ask of the assembly along with a reference to the permission set that should result if the evidence confirms the answer. The question is technically stated as a membership condition, and the permission sets are named so they can be reused. Figure 11 shows a set of membership conditions and the corresponding named permission sets.
Figure 11 Code Groups with Membership Conditions
I have always found the term "code group" confusing, but since I haven't come up with a better term, I usually just think of a code group as a node in the graph that makes up a security policy level. Figure 12 shows how a set of code groups (or nodes) forms a hierarchy with a single root. Remember that each node in the policy hierarchy represents a membership condition plus a reference to a permission set, so by taking the collected evidence and matching it against the nodes in the hierarchy, the CLR can ultimately discover the combination of permissions granted by the policy this hierarchy represents. Since the root node is really just the starting point of the traversal, it matches all code and, by default, references the permission set named Nothing (which, as you might guess, contains no permissions at all).
Figure 12 Security Policy Level Graph
The actual traversal of the graph is governed by two rules. First, if a parent node doesn't match, none of its child nodes are tested for matches. This allows the graph to express something like the logical AND and OR operators. Second, each node can have attributes that control the traversal. One applicable attribute is Exclusive: if a node with the Exclusive attribute is matched, only that particular node's permission set will be used. Naturally, it makes no sense for two matching nodes in a policy level to both have this attribute, and this is considered an error. You need to ensure this doesn't happen; if it does, the system throws a PolicyException and the assembly is not loaded.
Figure 13 Traversing the Policy Level
Figure 13 shows an example for an assembly downloaded from https://q.com/downloads/foobar.dll (signed by ACME Corporation). Note how four nodes in the graph match, and how only one Publisher node is tested during the traversal. The top half of the graph illustrates two logical AND relationships for code from ACME Corporation. It says, "Code published by ACME Corporation and downloaded from the Internet gets the permissions Bar and Baz, while code published by ACME Corporation and installed on the local machine gets Foo and Gimp."
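To make the two traversal rules concrete, here is a rough sketch in Python, not the CLR's actual data structures: hypothetical `CodeGroup` nodes whose membership conditions are predicates over an evidence dictionary, with the Figure 13 example (ACME publisher, Internet vs. local machine zones, and the made-up permission names Foo, Gimp, Bar, Baz) wired up as the sample hierarchy.

```python
# Illustrative model of resolving one policy level. All class and
# function names here are invented for this sketch; they are not CLR APIs.

class CodeGroup:
    def __init__(self, condition, permissions, children=(), exclusive=False):
        self.condition = condition                 # predicate over evidence
        self.permissions = frozenset(permissions)  # the named permission set
        self.children = list(children)
        self.exclusive = exclusive

def matching_nodes(node, evidence):
    # Rule 1: if a parent doesn't match, its entire subtree is skipped.
    if not node.condition(evidence):
        return []
    matched = [node]
    for child in node.children:
        matched += matching_nodes(child, evidence)
    return matched

def resolve_level(root, evidence):
    matched = matching_nodes(root, evidence)
    exclusive = [n for n in matched if n.exclusive]
    if len(exclusive) > 1:
        # Two matching Exclusive nodes is a policy error.
        raise RuntimeError("PolicyException: two Exclusive nodes matched")
    if exclusive:
        # Rule 2: a matching Exclusive node's permission set is used alone.
        return exclusive[0].permissions
    # Otherwise the level grants the union of all matching nodes' sets.
    return frozenset().union(*(n.permissions for n in matched))

# The hierarchy from Figure 13: root matches all code and grants Nothing.
root = CodeGroup(lambda e: True, [], children=[
    CodeGroup(lambda e: e["publisher"] == "ACME", [], children=[
        CodeGroup(lambda e: e["zone"] == "Internet",   ["Bar", "Baz"]),
        CodeGroup(lambda e: e["zone"] == "MyComputer", ["Foo", "Gimp"]),
    ]),
])

internet_grant = resolve_level(root, {"publisher": "ACME", "zone": "Internet"})
# internet_grant is the union along the matching path: {"Bar", "Baz"}
```

Running the same resolution with `zone` set to `"MyComputer"` would instead yield Foo and Gimp, which is exactly the AND relationship the figure describes.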
At this point, you may be wondering why I keep talking about policy levels. The reason is that there are actually three possible policy levels, each containing a graph of nodes like the one shown in Figure 12. There is a machine policy level, a user policy level, and an application domain policy level, and they are evaluated in that order. The resulting permission set is the intersection of the permission sets discovered at each of the three policy levels.
Figure 14 The three policy levels
The application domain policy level is technically optional and is provided dynamically by the host. The most obvious example of this feature is a web browser, which might want to apply a restrictive policy to its application domains. Figure 14 shows how I picture the policy levels. There is another attribute you can use on a node to short-circuit the traversal of policy levels: LevelFinal. If this attribute is found on a matching node, no lower policy levels are traversed. For example, this allows a domain administrator to make a statement at the machine policy level that an individual user cannot override by editing the user-level policy.
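The two ideas just described, intersecting the grants from the machine, user, and application domain levels, and letting a LevelFinal match stop the walk down to lower levels, can be sketched as follows. This is a simplified model under my own naming, not CLR code: each level is reduced to a pre-resolved grant plus a flag saying whether a LevelFinal node matched there.

```python
# Hypothetical sketch: combining per-level grants. "levels" is a list of
# (grant, level_final_matched) pairs in evaluation order:
# machine, then user, then application domain.

def combine_levels(levels):
    grant = None
    for level_grant, level_final in levels:
        # The overall grant is the intersection of the levels consulted.
        grant = level_grant if grant is None else grant & level_grant
        if level_final:
            break  # LevelFinal: lower levels are not consulted at all
    return grant

machine = (frozenset({"Foo", "Bar", "Baz"}), False)
user    = (frozenset({"Foo", "Bar"}), False)
domain  = (frozenset({"Foo"}), False)

final_grant = combine_levels([machine, user, domain])
# final_grant is the three-way intersection: {"Foo"}
```

If the machine-level tuple instead carried `True` for its LevelFinal flag, the user and domain levels would never be consulted, which is how an administrator's machine-level statement trumps a user's own policy edits.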
Fine-Tuning the Permission Grant
Once the CLR has collected a permission set from the three policy levels, the final step gives the assembly itself a say in the matter. Recall that code can imperatively or declaratively fine-tune its available permission set at run time by denying or asserting permissions. Similarly, an assembly can fine-tune the grant it receives from policy by careful use of the following three elements (also shown in Figure 10):
SecurityAction.RequestMinimum
SecurityAction.RequestOptional
SecurityAction.RequestRefuse
The names of these elements clearly indicate their roles. If policy does not grant the minimum permission set requested by the assembly, the assembly will not run. Used carefully, this particular feature lets you make assumptions about your environment, which makes programming easier. Used carelessly, however, it can have adverse consequences. For example, if you use RequestMinimum to request every permission you might conceivably need, your assembly will fail to load in far more environments than necessary. It may also encourage an administrator to relax his security policy to a somewhat broader level than necessary just to allow your component to run.
RequestRefuse, on the other hand, seems like a tool worth using liberally (at least in these early days). It simply lets you refuse permissions that policy would otherwise grant you. You should refuse any permission set that you know your assembly can safely do without. Finally, RequestOptional lets you specify permissions that you can live without but can put to good use if granted. It is useful if your assembly exposes optional features that require a few extra permissions.
Given a permission set derived from policy plus the minimum, optional, and refused permission sets, here is the formula described in the CLR documentation for determining the permissions granted to an assembly:
G = M + (O ∩ P) − R
where G = granted permissions, M = minimum request, O = optional request, P = policy-derived permissions, and R = refused permissions.
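The formula falls out naturally as set arithmetic. Here is a small sketch using Python sets; the permission names and the `granted` function are made up for illustration, and it also models the earlier rule that an unsatisfied minimum request prevents the assembly from running at all.

```python
# Sketch of the documented grant formula G = M + (O ∩ P) - R.

def granted(policy, minimum, optional, refused):
    # If policy doesn't cover the minimum request, the assembly won't run.
    if not minimum <= policy:
        raise RuntimeError("PolicyException: minimum request not satisfied")
    # The grant is the minimum, plus whatever optional requests policy
    # actually allows, minus anything the assembly explicitly refused.
    return (minimum | (optional & policy)) - refused

P = {"FileIO", "UI", "Registry", "UnmanagedCode"}  # from the three levels
M = {"UI"}                   # must have this, or don't load at all
O = {"FileIO", "Printing"}   # nice to have; policy doesn't grant Printing
R = {"UnmanagedCode"}        # never wanted, even though policy allows it

final = granted(P, M, O, R)
# final is {"UI", "FileIO"}: Printing was never granted by policy,
# and UnmanagedCode was explicitly refused.
```

Note how RequestRefuse acts as a pure subtraction at the end, which is why it is such a cheap way to shed permissions you know you don't need.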
Viewing and Editing Security Policy
If you want to explore the details of security policy, use Caspol.exe, the Code Access Security Policy tool. Here are a few of my favorite command lines to get you started:
Caspol -a -listgroups
Caspol -a -resolvegroup c:\inetpub\wwwroot\bar.dll
Caspol -a -resolveperm c:\inetpub\wwwroot\bar.dll
The first example lists the code groups for the machine and user policy levels. If you look closely, you'll see the hierarchy of nodes, each showing its membership condition followed by the name of its permission set. The second example asks for the list of code groups that match a particular assembly, while the third actually resolves the permission grant for that assembly.
For example, see how things change when you refer to the assembly via HTTP:
Caspol -a -resolvegroup http://localhost/foo.dll
Although Caspol.exe can be used to edit security policy, unless I'm doing something very simple, I prefer to just fire up Emacs and edit the policy file by hand (it's an XML document). If you decide to try this yourself, back up the original file first. As of this writing, you can find the machine policy file at %systemroot%\complus\v2000.14.1812\security.cfg. Your version number may differ from mine, so adjust accordingly. The user security policy is stored under the same path beneath the user profile directory. As of this writing, the default user policy grants all code FullTrust, which effectively means that security policy is controlled entirely by the machine policy.
Summary
Code access security, combined with code verification in the CLR, is a big step forward from earlier platforms, where the only reasonably secure programs were large, self-contained applications, and loose, promiscuously loaded DLLs were tremendously popular.
Code access security acknowledges the fact that today's applications are built from components. By taking the origin of those components into account and making security policy decisions that are preventive rather than punitive, it promises to dramatically improve the security of many emerging classes of mobile-code applications.
Code access security is by no means a panacea. It introduces a whole new set of complexities, each of which presents an administrative challenge. Without trained administrators who are willing to invest the time to understand the feature, it may amount to little more than a speed bump for many new kinds of attacks. It is worth studying the history of Java security (which has been dealing with mobile code for several years now, with varying degrees of success) and evaluating this new architecture with that historical perspective in mind. Please visit my web site, where I gather the latest .NET security news and sample code and list existing books on mobile-code security. Finally, please share your thoughts and opinions on the CLR security architecture. For related articles, see:
Code Access Security
.NET Framework Developer Center
Keith Brown works at DevelopMentor, where his mission is to research, write, teach, and promote security awareness among programmers. Keith wrote Programming Windows Security (Addison-Wesley, 2000). He co-authored Effective COM, and is currently working on a book about .NET security.