Secure Coding Guidelines for the .NET Framework


Summary: The common language runtime and the Microsoft .NET Framework enforce evidence-based security on all managed-code applications. Most code rarely, or never, needs to work with security explicitly. This paper briefly describes the security system, discusses the security issues you may need to consider in your code, and provides a classification guide for components so that you understand which problems must be solved to keep code secure.

Prerequisites: Readers should be familiar with the common language runtime and the Microsoft(R) .NET Framework, and have a basic knowledge of evidence-based security and code access security.

This page

Evidence-based security and code access security

Secure coding goals

Approaches to secure coding

Secure coding best practices

Securing state data

Securing method access

Wrapper code

Unmanaged code

User input

Remoting considerations

Protected objects

Serialization

Application domain crossing issues

Asserting permissions

Other security technologies

Evidence-based security and code access security

Two separate technologies combine to protect managed code:

Evidence-based security, which determines what permissions to grant to code.

Code access security, which checks that all code on the stack has the permissions required to perform an operation.

Permissions bind these two technologies together: a permission is the right to perform a specific protected operation. For example, "read C:\Temp" is a file permission; "connect to www.msn.com" is a network permission.

Evidence-based security determines what permissions to grant to code. Evidence is the information about an assembly (the unit to which permissions are granted) that serves as input to the security policy mechanism. Given the evidence as input, the system evaluates the security policy set by the administrator to decide which permissions the code may receive. The code itself can influence the permissions it is granted by making permission requests, expressed as assembly-level declarative security using custom attribute syntax. However, code can never obtain more permissions than the policy system allows; requests can only reduce the grant. Permissions are granted once, and apply to all the code in the assembly. To view or edit security policy, use the .NET Framework Configuration tool (Mscorcfg.msc).

The following table lists some common evidence types used to grant permissions to code. In addition to the standard evidence listed here (supplied by the security system), user-defined types can extend the set of evidence.

Evidence      Description

Hash          The hash value of the assembly

Publisher     The Authenticode(R) signature

Strong name   The public key + name + version

Site          The Web site the code came from

URL           The URL the code came from

Zone          The Internet Explorer zone

Code access security handles the security checks that enforce permissions. What makes these checks distinctive is that they examine not only the code attempting the protected operation, but every caller up the stack. For the check to succeed, all of the checked code must have the required permission (overrides are possible).

These checks are valuable because they prevent luring attacks, in which unauthorized code calls your code and lures it into performing operations on its behalf. Suppose, for example, that security policy grants your application code permission to read a file. Because all your application code has the permission, it passes the code access security check. However, if malicious code without file access calls your code in some way, the check fails, because the untrusted code that called your code is visible on the stack.
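The stack walk described above can also be triggered imperatively, before the resource is touched. The following minimal sketch is not from the original article; the class, method, and path are illustrative, and it uses the legacy code access security types discussed throughout this article:

```csharp
using System.IO;
using System.Security.Permissions;

public static class ConfigReader
{
    public static string ReadConfig()
    {
        // Demand walks the entire call stack: if any caller lacks read
        // access to this path, a SecurityException is thrown and the
        // file is never opened.
        new FileIOPermission(FileIOPermissionAccess.Read, @"C:\Temp\app.cfg").Demand();
        return File.ReadAllText(@"C:\Temp\app.cfg");
    }
}
```

Because the demand checks every frame, malicious code that lures this method into running still fails the check from its place on the stack.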

Note that all of this security is based on what code is allowed to do. Authorization of the logged-on user is a completely independent matter handled by the underlying operating system. Think of these two security systems as layered defenses: to access a file, for example, both the code-based and the user-based authorization must succeed. Although user-based security is important to the many applications that rely on user logon credentials or other user credentials, it is not the focus of this article.

Back to top

Secure coding goals

We assume that security policy is correct, and that potentially malicious code is not granted the permissions entrusted to highly trusted code, which allow trusted code to safely perform more powerful operations. (Under any other assumption, no code could be isolated from any other code, and the problems discussed here could not be solved.) Using the permissions of the .NET Framework, you must erect barriers that prevent malicious code from obtaining information you do not want it to have, or from performing other harmful operations. In addition, in all expected scenarios, a balance must be struck between the security and the usability of the code.

Evidence-based security policy and code access security provide very powerful, explicit security mechanisms. Most application code needs only the infrastructure implemented by the .NET Framework. In some cases, additional application-specific security is required, built either by extending the security system or by using new ad hoc methods.

Back to top

Approaches to secure coding

One advantage of these security technologies is that you can usually forget about them. If your code is granted the permissions it needs to perform its task, it simply works (and, at the same time, resists potential abuse, such as the luring attacks described above). However, in certain specific cases you must deal with security explicitly; those cases are described in the sections below. Even if the sections do not apply to you directly, an understanding of these security issues is always useful.

Security-neutral code

Security-neutral code does nothing explicit with the security system; it simply runs with whatever permissions it receives. Although failing to catch security exceptions (from using files, networking, and so on) can produce a poor user experience (an exception full of details that are meaningless to most users), this approach takes full advantage of the security technologies, because even highly trusted code cannot weaken the protection. The worst that can happen is that callers will need many permissions, or will otherwise be blocked by the security mechanism.

A security-neutral library has special characteristics you should understand. Suppose your library provides API elements that use files or call unmanaged code. If your code does not have the corresponding permissions, it will not run as described. But even when it does, any application code that calls it must also have those permissions in order to work; if the calling code lacks the correct permissions, a security exception results from the code access security stack walk. If it is acceptable to require that all callers have permissions for every operation the library performs, this is the simplest way to implement security, and it is safe because no risky security overrides are involved. If, however, you want application code that calls your library to be unaffected by the requirement for very powerful permissions, you must understand the model for libraries that work with protected resources, described in the 'Publicly protected resources' section of this article.

Application code that is not a reusable component

If your code is part of an application that is not called by other code, security is simple and special coding may not be required. Remember, however, that malicious code can still call your code. Although code access security stops malicious code from accessing resources, such code can still read the values of your fields or properties, which might contain sensitive information.

In addition, if your code accepts input from the Internet or from other unreliable sources, you must be careful about that input. For more information, see the 'Securing state data' and 'User input' sections of this article.

Managed wrapper to native code implementation

Typically in this scenario, some useful functionality is implemented in native code, and you want to make it available to managed code without rewriting it. Managed wrappers are easy to write using either platform invoke or COM interop. However, if you do this, callers of the wrappers must have unmanaged code rights in order to succeed. Under default policy, this means that code downloaded from an intranet or the Internet will not work with the wrappers.

Rather than granting unmanaged code rights to all applications that use the wrappers, it is better to grant these rights only to the wrapper code. If the underlying functionality is safe (exposes no resources) and the implementation is also safe, the wrapper only needs to assert its rights, which enables any code to call through it. When resources are involved, security coding should follow the model for library code described in the next section. Because the wrapper potentially exposes callers to these resources, careful verification of the safety of the native code is necessary, and is the wrapper's responsibility.

For more information, see the 'Unmanaged code' and 'Asserting permissions' sections of this article.

Publicly protected resources

This is the most powerful, and potentially the most dangerous (if done incorrectly), approach to secure coding: your library serves as an interface through which other code accesses certain resources that are not otherwise available, just as the .NET Framework classes do for the resources they use. Wherever it exposes a resource, your code must first demand the permission appropriate to the resource (that is, perform a security check) and then typically assert its own rights to perform the actual operation.
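The demand-then-assert pattern described above can be sketched as follows. This is a minimal, hypothetical example, not from the original article; the log directory, class, and method names are illustrative:

```csharp
using System.IO;
using System.Security;
using System.Security.Permissions;

public static class LogReader
{
    public static string ReadLog()
    {
        // 1. Demand a permission that models the exposed resource:
        //    every caller on the stack must be allowed to read the
        //    log directory.
        new FileIOPermission(FileIOPermissionAccess.Read, @"C:\LogFiles\").Demand();

        // 2. Assert the broader right the implementation actually uses,
        //    so the stack walk for the underlying operation stops here
        //    instead of requiring it of every caller.
        new FileIOPermission(PermissionState.Unrestricted).Assert();
        try
        {
            return File.ReadAllText(@"C:\LogFiles\current.log");
        }
        finally
        {
            // Always undo the assert as soon as the operation completes.
            CodeAccessPermission.RevertAssert();
        }
    }
}
```

The demand keeps unauthorized callers out; the assert keeps the library's own powerful permission from being required of every caller.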

For more information, see the 'Unmanaged code' and 'Asserting permissions' sections of this article.

Back to top

Secure coding best practices

Note: unless otherwise specified, the code examples are written in C#.

Permission requests are a good way to make your code security-aware. They let you do two things:

Request the minimum permissions your code needs in order to run.

Ensure that your code receives no more permissions than it actually needs.

For example:

[assembly: FileIOPermissionAttribute(SecurityAction.RequestMinimum, Write = "C:\\test.tmp")]
[assembly: PermissionSetAttribute(SecurityAction.RequestOptional, Unrestricted = false)]
... SecurityAction.RequestRefuse ...

This example tells the system that the code should not run unless it receives permission to write to C:\test.tmp. If the code encounters a security policy that does not grant this permission, a PolicyException is raised and the code does not run. You can be sure the code is granted this permission, and need not worry about errors caused by having too few permissions.

The example also tells the system that no additional permissions are wanted. Absent this, the code is granted whatever permissions policy chooses to give it. Although extra permissions cause no direct harm, if a security bug exists somewhere, holding fewer permissions leaves fewer vulnerabilities open to exploitation. Carrying permissions that the code does not need invites security problems.

Another way to limit the code to the minimum permissions is to list specific permissions to refuse (SecurityAction.RequestRefuse). Permissions are likewise refused when all permissions are requested as optional and specific ones are excluded from the request.
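As a sketch of refusing a permission, an assembly that never needs to call native code could declare the following. This is a hypothetical example, not from the original article:

```csharp
using System.Security.Permissions;

// Refuse unmanaged code rights outright: even if policy would grant
// them, this assembly cannot use them, so a bug elsewhere in the
// assembly cannot be leveraged into a call into native code.
[assembly: SecurityPermissionAttribute(SecurityAction.RequestRefuse,
    UnmanagedCode = true)]
```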

Back to top

Securing state data

Applications that handle sensitive data, or that make any kind of security decision, need to keep that data under their own control, and must not allow other, potentially malicious code to access the data directly. The best way to protect data in memory is to declare it as private or internal (with scope limited to the same assembly) variables. Even this data, however, is subject to kinds of access you should be aware of:

Under reflection, highly trusted code that can reference your object can get and set private members.

Under serialization, highly trusted code can effectively get and set private members if it can access the corresponding data in the serialized form of the object.

Under debugging, this data can be read.

Make sure that none of your own methods or properties exposes these values unintentionally.

In some cases, data can be declared 'protected', with access limited to the class and its derived classes. However, because of the possibility of additional exposure, you should also take the following precautions:

Control which code is allowed to derive from your class, by restricting derivation to the same assembly or by using declarative security to require certain identity or permissions of any code attempting to derive from your class (see the 'Securing method access' section).

Ensure that all derived classes implement similar protection, or are sealed.

Boxed value types

Boxed value types can sometimes be modified when you believe you have handed out a copy that cannot change your original. When you return a boxed value type, you return a reference to the value type, not a reference to a copy of it, which allows the code that calls your code to modify the value of your variable.

The following C# code example shows how boxed value types can be modified through a reference.

using System;
using System.Reflection;
using System.Reflection.Emit;
using System.Threading;
using System.Collections;

class Bug {
    // Suppose you have an API element that exposes a
    // field through a property with only a get accessor.
    public object m_Property;
    public object Property {
        get { return m_Property; }
        set { m_Property = value; } // (if applicable)
    }

    // You can modify the value of this by calling
    // a byref method with this signature.
    public static void M1(ref int j) {
        j = Int32.MaxValue;
    }
    public static void M2(ref ArrayList j) {
        j = new ArrayList();
    }

    public static void Main(String[] args) {
        Console.WriteLine("// Doing this with a value type.");
        {
            Bug b = new Bug();
            b.m_Property = 4;
            object[] objArr = new object[] { b.Property };
            Console.WriteLine(b.m_Property);
            typeof(Bug).GetMethod("M1").Invoke(null, objArr);
            // Note that the property changed.
            Console.WriteLine(b.m_Property);
            Console.WriteLine(objArr[0]);
        }
        Console.WriteLine("// Doing this with a normal type.");
        {
            Bug b = new Bug();
            ArrayList al = new ArrayList();
            al.Add("elem");
            b.m_Property = al;
            object[] objArr = new object[] { b.Property };
            Console.WriteLine(((ArrayList)b.m_Property).Count);
            typeof(Bug).GetMethod("M2").Invoke(null, objArr);
            // Note that the property does not change.
            Console.WriteLine(((ArrayList)b.m_Property).Count);
            Console.WriteLine(((ArrayList)objArr[0]).Count);
        }
    }
}

Back to top

Securing method access

Some methods may not be suitable for arbitrary untrusted code to call. Such methods pose several risks: the method might provide restricted information; it might believe any information passed to it; it might not check its parameters; or, given bad parameters, it might malfunction or perform some harmful operation. You should be aware of these cases and take appropriate action to secure the method.

In some cases, you may need to restrict a method that is not intended for public use but must still be public. For example, you might have an interface that needs to be called across your own DLLs, so it must be public, but you do not want to expose it publicly, either to keep customers from using it or to keep malicious code from using it as an entry point into your component. Another common reason to restrict a method not intended for public use (but which must be public) is to avoid having to document and support a highly internal interface.

Managed code offers several ways to restrict access to a method:

Limit the scope of accessibility to the class, the assembly, or derived classes (if they can be trusted). This is the simplest way to limit method access. Note that, in general, derived classes can be less trustworthy than the class they derive from, although in some cases they share the superclass's identity. In particular, do not infer trust from the keyword 'protected', which is not necessarily used in a security context.

Limit method access to callers with a specified identity (essentially, any particular evidence you choose).

Limit method access to callers that have the permissions you select.

Similarly, declarative security allows you to control inheritance of classes. You can use InheritanceDemand to do the following:

Require that derived classes have a specified identity or permission.

Require that derived classes that override particular methods have a specified identity or permission.

Example: Protect access to class or method

The following example shows how to secure a public method by limiting access to it.

The Sn -k command creates a new private/public key pair. The private key is needed to sign code with a strong name, and is kept securely by the code's publisher. (If it leaks, anyone can fake your signature on their own code, and the protection it provides is lost.)

Securing a method through strong-name identity:

sn -k keypair.dat
csc /r:App1.dll /a.keyfile:keypair.dat App1.cs
sn -p keypair.dat public.dat
sn -tp public.dat > publichex.txt

[StrongNameIdentityPermissionAttribute(SecurityAction.LinkDemand,
    PublicKey = "<hex>", Name = "App1",
    Version = "0.0.0.0")]
public class Class1

The csc command compiles and signs App1, the code being granted access to the protected method.

The two sn commands extract the public key portion of the key pair and format it as hexadecimal.

The lower part of the example is the source code of the protected class. The custom attribute defines the strong-name identity, specifying the public key of the key pair by inserting the hexadecimal data obtained from sn as the PublicKey property.

At run time, callers from App1 are allowed access to Class1 because App1 has the required strong-name signature.

This example uses LinkDemand to protect the API element; see the 'Demand and LinkDemand' section of this article for important information about using LinkDemand.

Preventing untrusted code from using classes and methods

Use the following declarations to prevent partially trusted code from using your classes and methods (including properties and events). Applying these declarations to a class applies them to all of its methods, properties, and events; note, however, that field access is not affected by declarative security. Note also that link demands only protect against the immediate caller and may still be subject to luring attacks (see the 'Evidence-based security and code access security' section of this article).

Assemblies with strong names have an implicit LinkDemand for full trust applied to all publicly accessible methods, properties, and events, so only fully trusted callers can use them, unless the assembly explicitly opts in to partially trusted callers by applying the AllowPartiallyTrustedCallers attribute. Therefore, explicitly marking classes to exclude untrusted callers is necessary only for unsigned assemblies, for assemblies that carry this attribute, and for the subset of types in them not intended for untrusted callers. For further details, see the documentation on security changes in the first release of the Microsoft .NET Framework.

For public non-sealed classes:

[System.Security.Permissions.PermissionSetAttribute(
    System.Security.Permissions.SecurityAction.InheritanceDemand, Name = "FullTrust")]
[System.Security.Permissions.PermissionSetAttribute(
    System.Security.Permissions.SecurityAction.LinkDemand, Name = "FullTrust")]
public class CanDeriveFromMe

For public sealed classes:

[System.Security.Permissions.PermissionSetAttribute(
    System.Security.Permissions.SecurityAction.LinkDemand, Name = "FullTrust")]
public sealed class CannotDeriveFromMe

For public abstract classes:

[System.Security.Permissions.PermissionSetAttribute(
    System.Security.Permissions.SecurityAction.InheritanceDemand, Name = "FullTrust")]
[System.Security.Permissions.PermissionSetAttribute(
    System.Security.Permissions.SecurityAction.LinkDemand, Name = "FullTrust")]
public abstract class CannotCreateInstanceOfMe_CanCastToMe

For public virtual functions:

class Base {
    [System.Security.Permissions.PermissionSetAttribute(
        System.Security.Permissions.SecurityAction.InheritanceDemand, Name = "FullTrust")]
    [System.Security.Permissions.PermissionSetAttribute(
        System.Security.Permissions.SecurityAction.LinkDemand, Name = "FullTrust")]
    public virtual void CanOverrideOrCallMe() { ... }

For public abstract functions:

class Base {
    [System.Security.Permissions.PermissionSetAttribute(
        System.Security.Permissions.SecurityAction.InheritanceDemand, Name = "FullTrust")]
    [System.Security.Permissions.PermissionSetAttribute(
        System.Security.Permissions.SecurityAction.LinkDemand, Name = "FullTrust")]
    public abstract void CanOverrideMe();

For public override functions where the base function has no full-trust demand:

class Derived {
    [System.Security.Permissions.PermissionSetAttribute(
        System.Security.Permissions.SecurityAction.Demand, Name = "FullTrust")]
    public override void CanOverrideOrCallMe() { ... }

For public override functions where the base function demands full trust:

class Derived {
    [System.Security.Permissions.PermissionSetAttribute(
        System.Security.Permissions.SecurityAction.LinkDemand, Name = "FullTrust")]
    public override void CanOverrideOrCallMe() { ... }

For public interfaces:

[System.Security.Permissions.PermissionSetAttribute(
    System.Security.Permissions.SecurityAction.InheritanceDemand, Name = "FullTrust")]
[System.Security.Permissions.PermissionSetAttribute(
    System.Security.Permissions.SecurityAction.LinkDemand, Name = "FullTrust")]
public interface CanCastToMe

Demand and LinkDemand

Declarative security offers two kinds of security checks that look similar but perform very different tests. It is worth the time to understand both forms, because the wrong choice can result in vulnerabilities or in performance loss. This section is not intended to describe these features exhaustively; see the product documentation for complete details.

Declarative security provides the following security checks:

Demand specifies a code access security stack walk: all callers on the stack must have the required permission or identity for the check to pass. The demand occurs on every call, because the stack may contain different callers each time; if a method is called repeatedly, the security check runs each time. Demands are strongly resistant to luring attacks: they catch unauthorized code attempting to pass through.

LinkDemand occurs at just-in-time (JIT) compile time (in the previous example, when the App1 code that references Class1 is about to execute), and it checks only the immediate caller. It does not check that caller's callers. Once the check passes, there is no further security overhead no matter how many times the code is called. However, a link demand provides no defense against luring attacks. With a LinkDemand, your interface is secure, but any authorized code that passes the check and can be induced to call on behalf of malicious code can potentially break security. Therefore, do not use LinkDemand unless all the possible weaknesses can be avoided.
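The difference between the two checks can be sketched declaratively. This is a minimal illustration, not from the original article; the class, methods, and path are hypothetical:

```csharp
using System.Security.Permissions;

public class DataAccess
{
    // Demand: a full stack walk on every call; every caller on the
    // stack must hold read access to the path.
    [FileIOPermission(SecurityAction.Demand, Read = @"C:\Data\")]
    public void ReadData() { /* ... */ }

    // LinkDemand: checked once, at JIT time, against the immediate
    // caller only; much cheaper, but open to luring attacks.
    [FileIOPermission(SecurityAction.LinkDemand, Read = @"C:\Data\")]
    public void ReadDataFast() { /* ... */ }
}
```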

The additional precautions that LinkDemand requires must be implemented by hand (the security system cannot help), and any mistake creates a security vulnerability. All authorized code that uses your code is responsible for implementing additional security by doing the following:

Restricting the calling code's access to the class or assembly.

Placing the same security checks on that code and obliging its callers to do the same. For example, if you write code that calls a method protected by a LinkDemand for SecurityPermission.UnmanagedCode, your method should also make a LinkDemand (or Demand, which is stronger) for this permission. The exception is if your code uses the LinkDemand-protected code in a limited way that is always safe, or that you decide is safe given other security mechanisms (for example, demands) in your code. In this exceptional case, the caller takes responsibility for weakening the security protection of the underlying code.

Ensuring that its callers cannot trick it into calling the protected code on their behalf (that is, callers cannot force the authorized code to pass particular parameters to the protected code, or to obtain results from it).

Interfaces and LinkDemands

If a virtual method, property, or event with a LinkDemand overrides a base class member, the base class member must carry the same LinkDemand for the overriding member to be secure; otherwise, malicious code could cast to the base type and call the base class's member. Note also that link demands for full trust are added implicitly to assemblies that do not carry the assembly-level AllowPartiallyTrustedCallersAttribute.

A good practice is to protect method implementations with LinkDemands when the corresponding interface methods also carry LinkDemands. Note the following about using LinkDemands with interfaces:

The AllowPartiallyTrustedCallers attribute affects interfaces.

LinkDemands can be placed on interfaces to selectively exclude certain interfaces from use by partially trusted code (for example, when the AllowPartiallyTrustedCallers attribute is used).

If an interface is defined in an assembly that does not contain the AllowPartiallyTrustedCallers attribute, you can still implement that interface on a partially trusted class.

If you place a LinkDemand on a public method of a class that implements an interface method, the LinkDemand is not enforced when the caller casts to the interface and calls the method through the interface; in that case, only a LinkDemand on the interface itself is checked, because the link is made against the interface.

Review the following for security issues:

Explicit link demands on interface methods. Make sure these link demands provide the expected protection. Determine whether malicious code can use a cast to bypass the link demands, as described above.

Virtual methods with link demands.

Types, and the interfaces they implement, should use LinkDemands consistently.

Virtual internal overrides

There is a subtlety of the type system's accessibility rules to be aware of when confirming that your code is unavailable to other assemblies. A method that is declared both virtual and internal can override a base class vtable entry, and can be used only from within the same assembly because it is internal. However, overridability is determined by the virtual keyword, and the method can be overridden from another assembly as long as that code has access to the class itself. If the possibility of such an override presents a problem, use declarative security to address it, or remove the virtual keyword if it is not strictly required.

Back to top

Wrapper code

Wrapper code, especially where the wrapper has higher trust than the code using it, can open a unique set of security weaknesses. Anything done on behalf of a caller, where the caller's limited permissions are not included in the appropriate security check, represents a potential vulnerability waiting to be exploited.

Never enable, through a wrapper, something that the caller could not do itself. Performing operations guarded by a limited security check, as opposed to a full stack-walk demand, is especially risky. When single-level checks are involved, interposing the wrapper code between the real caller and the API element can easily cause the security check to succeed when it should not, thereby weakening security.

Delegates

Whenever your code accepts delegates from less trusted code that may call it, make sure that you are not enabling the less trusted code to escalate its permissions. If your code takes a delegate and uses it later, the code that created the delegate is not on the call stack, and its permissions are not tested if the delegate (or code it calls) attempts a protected operation. If your code and the delegate code hold higher privileges than the caller, this allows the caller to arrange a call path it could not use directly, without being part of the call stack.

To address this issue, you can either limit your callers (for example, by requiring a permission of them) or limit the permissions under which the delegate executes (for example, by using a Deny or PermitOnly stack override).

LinkDemands and wrappers

Special protections for link demands have been strengthened in the security infrastructure, but vulnerabilities are still possible in your own code.

If fully trusted code calls a property, event, or method protected by a LinkDemand, the call succeeds provided the LinkDemand check against that code passes. In addition, if the fully trusted code exposes an API that takes the name of a property and uses reflection to call that property's get accessor, the call to the accessor succeeds even if the user code has no right to access the property. This is because the LinkDemand checks only the immediate caller, and the immediate caller is the fully trusted code. In effect, the fully trusted code makes an authorized call on behalf of user code without ensuring that the user code has the right to make it. If you wrap reflection functionality, see the documentation on security changes in the first release of the Microsoft .NET Framework for details. To prevent inadvertent security holes such as the one described above, any reflective use (instance creation, method invocation, property set or get) of a method, constructor, property, or event protected by a LinkDemand escalates that link demand into a full stack-walk demand. This protection carries some performance cost (the single-level LinkDemand is faster) and changes the semantics of the check: the full stack-walk demand may fail where the single-level check would have passed.

Loader wrappers

Several of the methods used to load managed code, including Assembly.Load(byte[]), load assemblies with the evidence of the caller. If you wrap any of these methods, the security system can load the assembly using the permission grant of your code, rather than that of the wrapper's caller. Obviously, you do not want less trusted code to be able to make you load an assembly with permissions higher than those of the wrapper's caller.

Any code with full trust, or with trust considerably higher than that of its potential callers (including callers at the Internet permission level), can weaken security this way. If your code has a public method that takes a byte array and passes it to Assembly.Load(byte[]), it may break security.

This issue arises with the following API elements:

System.AppDomain.DefineDynamicAssembly

System.Reflection.Assembly.LoadFrom

System.Reflection.Assembly.Load
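One way to close this hole, sketched here under assumptions (the class and method names are hypothetical, and full trust stands in for whatever grant the load would confer), is to demand that the wrapper's callers themselves hold that trust before loading:

```csharp
using System.Reflection;
using System.Security;
using System.Security.Permissions;

public static class PluginLoader
{
    public static Assembly LoadPlugin(byte[] image)
    {
        // Demand full trust of every caller before loading: this keeps
        // a less trusted caller from borrowing this assembly's evidence
        // to get code loaded with an elevated permission grant.
        new PermissionSet(PermissionState.Unrestricted).Demand();
        return Assembly.Load(image);
    }
}
```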

Exception handling

A filter expression further up the stack runs before any finally statement; the catch block associated with that filter runs after the finally statement. Consider the following pseudocode:

void Main() {
    try {
        Sub();
    } except (Filter()) {
        Console.WriteLine("catch");
    }
}

bool Filter() {
    Console.WriteLine("filter");
    return true;
}

void Sub() {
    try {
        Console.WriteLine("throw");
        throw new Exception();
    } finally {
        Console.WriteLine("finally");
    }
}

This code will print the following:

Throw

Filter

Finally

Catch

Because the filter runs before the finally statement, other code can act while your state changes are still in effect, and any state-changing operation can therefore open a security problem. For example:

try {
    Alter_Security_State();
    // This means changing anything (state variables, switching
    // unmanaged context, impersonation, and so on) that could be
    // exploitable if malicious code ran before state is restored.
    Do_Some_Work();
}
finally {
    Restore_Security_State();
    // This simply restores the state change above.
}

This pseudocode allows a filter higher up the stack to run arbitrary code while the security state is altered. Other examples of operations with a similar effect are temporarily impersonating another identity, setting an internal flag that bypasses some security check, or changing the culture associated with the thread.

The recommended solution is to introduce an exception handler that isolates your code's changes to thread state from callers' filter blocks. It is important that the handler be introduced properly, however, or this problem will not be fixed. The following example, whose caller is written in Microsoft Visual Basic(R), switches the UI culture, but any kind of thread state change could be exposed in the same way.

YourObject.YourMethod()
{
    CultureInfo saveCulture = Thread.CurrentThread.CurrentUICulture;
    try {
        Thread.CurrentThread.CurrentUICulture = new CultureInfo("de-DE");
        // Do something that throws an exception.
    }
    finally {
        Thread.CurrentThread.CurrentUICulture = saveCulture;
    }
}

Public Class UserCode
    Public Shared Sub Main()
        Try
            Dim obj As YourObject = New YourObject
            obj.YourMethod()
        Catch e As Exception When FilterFunc()
            Console.WriteLine("An error occurred: '{0}'", e)
            Console.WriteLine("Current culture: {0}", _
                Thread.CurrentThread.CurrentUICulture)
        End Try
    End Sub

    Public Shared Function FilterFunc() As Boolean
        Console.WriteLine("Current culture: {0}", _
            Thread.CurrentThread.CurrentUICulture)
        Return True
    End Function
End Class

In this case, the correct fix is to wrap the existing try/finally block in a try/catch block. Simply introducing a catch-rethrow clause into the existing try/finally block does not solve the problem:

YourObject.YourMethod()
{
    CultureInfo saveCulture = Thread.CurrentThread.CurrentUICulture;
    try {
        Thread.CurrentThread.CurrentUICulture = new CultureInfo("de-DE");
        // Do something that throws an exception.
    }
    catch { throw; }
    finally {
        Thread.CurrentThread.CurrentUICulture = saveCulture;
    }
}

This does not solve the problem, because the finally statement has not run before FilterFunc gets control.

The following example fixes the problem by ensuring that the finally clause has executed before an exception is offered up to the callers' exception filter blocks:

YourObject.YourMethod()
{
    CultureInfo saveCulture = Thread.CurrentThread.CurrentUICulture;
    try {
        try {
            Thread.CurrentThread.CurrentUICulture = new CultureInfo("de-DE");
            // Do something that throws an exception.
        }
        finally {
            Thread.CurrentThread.CurrentUICulture = saveCulture;
        }
    }
    catch { throw; }
}


Unmanaged code

Some library code needs to call unmanaged code (for example, native code APIs such as Win32). Because this crosses the security perimeter for managed code, it calls for due caution. If your code is security-neutral (see the security-neutral code section of this article), both your code and any code that calls it must have unmanaged code permission (SecurityPermission.UnmanagedCode).

However, it is often unreasonable for your callers to have such powerful permissions. In such cases, your trusted code can act as an intermediary, similar to the managed wrapper or library code described previously. If the underlying unmanaged code functionality is completely safe, it can be exposed directly; otherwise, a suitable permission check (a demand) is required first.

When your code calls into unmanaged code but you do not want your callers to need that permission, you must assert the right. An assertion blocks the stack walk at your frame. You must be careful not to create a security hole in the process. Usually this means that you must demand a suitable, narrower permission of your callers and then use unmanaged code to perform only what that permission allows, and nothing more. In some cases (for example, a get-time-of-day function), unmanaged code can be directly exposed to callers without any security checks. In any case, any code that asserts must take responsibility for security.
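The demand-then-assert pattern described above can be sketched as follows. This is a sketch against the legacy .NET Framework code access security model; SensorWrapper, NativeMethods.GetTemperature, and the demanded path are hypothetical names:

```csharp
using System.Runtime.InteropServices;
using System.Security;
using System.Security.Permissions;

internal static class NativeMethods
{
    // Hypothetical native entry point.
    [DllImport("sensor.dll")]
    internal static extern int GetTemperature();
}

public static class SensorWrapper
{
    public static int ReadTemperature()
    {
        // 1. Demand a narrower permission of every caller on the stack,
        //    so only code allowed to use the underlying resource gets through.
        new FileIOPermission(FileIOPermissionAccess.Read,
                             @"C:\Temp").Demand();

        // 2. Assert unmanaged code permission so callers do not need it
        //    themselves; the stack walk stops at this frame.
        new SecurityPermission(SecurityPermissionFlag.UnmanagedCode).Assert();
        try
        {
            return NativeMethods.GetTemperature();
        }
        finally
        {
            // 3. Limit the assert's lifetime to this one call.
            CodeAccessPermission.RevertAssert();
        }
    }
}
```

Reverting the assert in a finally block keeps the elevated state from leaking past the single unmanaged call it was meant to cover.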

Because any managed code that provides a code path into native code is a potential target for malicious code, determining which unmanaged code can be safely used, and how it must be exposed, requires great care. In general, unmanaged code should never be directly exposed to partially trusted callers (see the next section). There are two primary considerations when evaluating the safety of unmanaged code use in libraries that are callable by partially trusted code:

Functionality. Does the unmanaged API provide safe functionality; that is, does it disallow callers from performing potentially dangerous operations? Code access security uses permissions to enforce access to resources, so consider whether the API uses files, a user interface, or threading, or whether it exposes protected information. If it does, the managed code wrapping it must demand the necessary permissions before allowing entry. In addition, although memory access is not protected by a permission, security requires that it be confined to strict type safety.

Parameter checking. A common attack passes unexpected parameters to exposed unmanaged code API methods in an attempt to make them misbehave. Buffer overruns, using out-of-range index or offset values, are one common example; another is any parameter that might exploit a bug in the underlying code. Thus, even if an unmanaged code API is functionally safe for partially trusted callers (after the necessary demands), managed code must also check parameter validity exhaustively to ensure that no unintended calls can be issued by malicious code through the managed wrapper layer.

There is a performance trade-off between using SuppressUnmanagedCodeSecurity and asserting before calling unmanaged code. For every such call, the security system automatically demands unmanaged code permission, which causes a stack walk each time. If you assert and then immediately call unmanaged code, the stack walk is meaningless: it consists only of your assert frame and the unmanaged code call.

A custom attribute named SuppressUnmanagedCodeSecurity can be applied to unmanaged code entry points to disable the normal security check that demands SecurityPermission.UnmanagedCode. Great caution must always be taken when doing this, because it creates an open door into unmanaged code with no runtime security checks. Note that even when SuppressUnmanagedCodeSecurity is applied, a one-time security check still occurs at JIT time to ensure that the immediate caller has permission to call unmanaged code.

If you use the SuppressUnmanagedCodeSecurity attribute, check the following:

Make the unmanaged code entry point inaccessible outside your code (for example, declare it internal).

Any place you call into unmanaged code is a potential security hole. Make sure your code is not a portal by which malicious code can indirectly call into unmanaged code and avoid a security check. Demand permissions where appropriate.

Use the naming convention described in the next section to explicitly identify when you create a dangerous path into unmanaged code.

Naming convention for unmanaged code methods

A useful and highly recommended convention has been established for naming unmanaged code methods. All unmanaged code methods are divided into three categories: safe, native, and unsafe. These keywords are used as class names within which the various kinds of unmanaged code entry points are defined. In source code, the keyword appears as part of the class name; for example, Safe.GetTimeOfDay, Native.Xyz, or Unsafe.DangerousAPI. Each category should send a strong message to the developers using it, as described in the following table.

Keyword

Security considerations

Safe

Completely harmless for any code, even malicious code, to call. Can be used just like other managed code. Example: a function that gets the time of day.

Native

Security-neutral; that is, unmanaged code permission is required to call it. The permission is checked, which stops unauthorized callers.

Unsafe

A potentially dangerous unmanaged code entry point with the security check suppressed. Developers should use the greatest caution when using such unsafe code, making sure that other protections are in place to avoid a security vulnerability. Developers must take full responsibility, because this keyword overrides the security system.
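As a hedged illustration of this convention, the following sketch defines one entry point of each category. The class layout is illustrative (and the legacy CAS attribute applies only to the .NET Framework), though the Win32 signatures shown are real:

```csharp
using System;
using System.Runtime.InteropServices;
using System.Security;

// Safe: harmless for anyone to call; no permission check needed.
internal static class Safe
{
    [DllImport("kernel32.dll")]
    internal static extern uint GetTickCount();
}

// Native: security-neutral; the normal UnmanagedCode demand still runs,
// so only callers holding unmanaged code permission get through.
internal static class Native
{
    [DllImport("kernel32.dll", SetLastError = true)]
    internal static extern bool Beep(uint frequency, uint duration);
}

// Unsafe: the runtime security check is suppressed, so every call site
// is responsible for its own demands and parameter validation.
[SuppressUnmanagedCodeSecurity]
internal static class Unsafe
{
    [DllImport("kernel32.dll", SetLastError = true)]
    internal static extern bool TerminateProcess(IntPtr hProcess,
                                                 uint exitCode);
}
```

Keeping all three classes internal follows the checklist above: the entry points stay invisible outside the assembly, and the class name alone tells a reviewer how much caution each call site requires.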


User input

User data is any kind of input (data from a web request or URL, input to a Microsoft Windows Forms application, and so on) that can adversely influence code, because the data is often used directly as parameters when calling other code. This situation is analogous to malicious code calling your code with strange parameters, and the same precautions should be taken. User input is actually harder to make safe, because there is no stack frame to trace the presence of potentially untrusted data. These are among the subtlest and hardest security bugs to find: although they can exist in code that appears unrelated to security, that code is a gateway through which bad data passes to other code. To find these bugs, follow every kind of input data, imagine the full range of possible values, and consider whether the code that sees this data can handle all of those cases. You can fix these bugs by range-checking the input and rejecting anything the code cannot handle.
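A minimal sketch of this range-checking discipline (OrderInput, ParseQuantity, and the chosen limits are hypothetical):

```csharp
using System;

public static class OrderInput
{
    // Follow the input's possible values and reject anything the rest
    // of the code is not prepared to handle.
    public static int ParseQuantity(string userText)
    {
        int quantity;
        if (!int.TryParse(userText, out quantity))
            throw new ArgumentException("Quantity must be a whole number.");
        if (quantity < 1 || quantity > 100)
            throw new ArgumentOutOfRangeException("userText",
                "Quantity must be between 1 and 100.");
        return quantity;
    }
}
```

Rejecting bad input at the boundary keeps downstream code from ever seeing values outside the range it was written for.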

Some common errors involving user data include:

On the client, user data included in a server response runs in the context of the server's site. If the web server takes user data and inserts it into the returned web page, it might include