How to find security vulnerabilities



By dspman

A bug in a program usually only matters in a few special situations. Often you can simply avoid those situations, so the failure never happens; you can even knowingly leave such small "bugs" in your program.

However, some programs sit on a security boundary: they accept input from other parties who do not themselves have the access the program has.

Some common examples: your mail reader takes mail sent to you by anybody and displays it on your screen, even though those senders have no other right to touch your machine. A TCP/IP stack attached to the Internet accepts input from anybody on the Internet who can reach your computer, even though those strangers should not get the kind of access your local network neighbors have.

Any program in this position must be careful. The smallest error in it can let anyone, including completely unauthorized people, do whatever they want. A small "bug" of this kind is called a "vulnerability", or more formally a "weakness".

There are some common features of vulnerabilities.

Psychological problem

When you write the normal parts of a piece of software, your goal is that things work when the user does the right thing. When you write the security-sensitive parts, you must ensure that untrusted users cannot accomplish certain things no matter what they do. This means your program has to behave correctly in a far larger number of cases.

Programmers who write cryptographic and real-time code are used to this. Most programmers, though, carry their everyday working habits over, never consider security at all, and as a result their software is insecure.

Changed-role vulnerabilities

Many vulnerabilities appear when a program is run in a context different from the one it was written for. There, even a tiny or perfectly ordinary mistake can become a security hole.

For example, suppose you originally planned to preview your own documents with a PostScript interpreter before printing them. The interpreter is not a security-sensitive program; if it misbehaves, that is nobody's problem but yours. But once you use it to view files you received from other people, and one of those people turns out, without your knowing it, not to be trustworthy, you can be in a lot of trouble: someone can send you a document that deletes all your files, or copies them to somewhere the sender can get at them.

This is the root of the vulnerabilities in most UNIX TCP/IP stacks: they were written on the assumption that everyone on the network can be trusted, and then used in an environment nowhere near as safe as the one they were designed for.

This is also the root of Sendmail's problems. Until its code was thoroughly reviewed, it was the source of a great many vulnerabilities.

More generally, a function can be perfectly safe when used within its intended limits, and cause unimaginable disasters when it is not.

The best example is gets(). If you use gets() in a program only you run, and you type in a line longer than the buffer you declared, you have only yourself to blame; the best fix is simply not to do that, or to declare a bigger buffer and recompile.

But when the data comes from an untrusted source, gets() can overflow the buffer in a way that lets the program be made to do anything at all. A crash is the most common result, but with the right arrangement the input data can often be made to execute as code.
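
As a minimal illustration (my own sketch, not from the original article), the code below contrasts gets() with fgets(), which is told the buffer size and therefore cannot write past the end of the array:

    /* Minimal sketch: why gets() is dangerous and one safer replacement. */
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        char buf[64];

        /* gets() has no idea how big buf is; any line longer than 63
         * characters writes past the end of the array. */
        /* gets(buf);   -- never do this with untrusted input */

        /* fgets() takes the buffer size and stops before overflowing. */
        if (fgets(buf, sizeof buf, stdin) == NULL)
            return 1;
        buf[strcspn(buf, "\n")] = '\0';   /* strip trailing newline, if any */

        printf("read: %s\n", buf);
        return 0;
    }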

Which brings us to ...

Buffer overflow vulnerability

A buffer overflow occurs when you write a string into an array and go past the array's boundaries.

Several kinds of buffer overflow can cause security problems:

1. Reading input directly into the buffer;

2. Copying from a larger buffer into a smaller one;

3. Doing other processing on an input buffer.

If the input is trusted, this is not a security vulnerability, though it is still a latent hazard. The problem is especially acute in most UNIX environments: if the array is a local variable of some function, the function's return address is likely to sit on the stack right next to those local variables, which makes the hole easy to exploit. There have been countless vulnerabilities of this kind in the past few years. Sometimes overflows in other kinds of buffers create security holes too, particularly when the buffer sits next to a function pointer.
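
The following sketch illustrates the function-pointer case just mentioned; the struct, its layout, and the names are invented for illustration, and actual memory layout depends on the compiler and platform:

    /* Illustrative sketch only: shows the shape of the problem, not a
     * guaranteed layout. */
    #include <stdio.h>
    #include <string.h>

    struct handler {
        char name[16];            /* attacker-controlled string goes here */
        void (*callback)(void);   /* overflowing name[] can overwrite this */
    };

    static void safe_action(void) { puts("expected action"); }

    static void set_name(struct handler *h, const char *input)
    {
        /* No bounds check: input longer than 15 bytes spills past name[]. */
        strcpy(h->name, input);
    }

    int main(void)
    {
        struct handler h = { "", safe_action };
        set_name(&h, "short name");   /* fine */
        h.callback();                 /* with a long, crafted input this
                                         could jump to a chosen address */
        return 0;
    }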

Things to look for:

Dangerous functions without any bounds checking: strcpy(), strlen(), strcat(), sprintf(), gets();

Dangerous functions that do bounds checking:

strncpy() and snprintf() - when these functions omit the terminating NUL, later code will often copy other (possibly sensitive) data out of the buffer along with the string, or otherwise corrupt the program. strncat() does not have this problem, and I am not sure about snprintf(), but strncpy() definitely does. Used incorrectly, strncat() can also write a NUL byte just past the end of the array;
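
A small sketch of the two pitfalls just described; the buffer names and sizes are mine, chosen only to make the behavior visible:

    /* Sketch of the strncpy()/strncat() pitfalls described above. */
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        char dst[8];

        /* Pitfall 1: strncpy() does NOT null-terminate when the source
         * fills the buffer, so later code may read past the end of dst. */
        strncpy(dst, "0123456789", sizeof dst);
        dst[sizeof dst - 1] = '\0';            /* terminate explicitly */

        /* Pitfall 2: strncat()'s size argument is the number of bytes to
         * append, not the size of dst, and it always adds a trailing '\0'.
         * Leave room for the existing contents and the terminator. */
        char out[16] = "id=";
        strncat(out, dst, sizeof out - strlen(out) - 1);

        printf("%s\n", out);   /* prints: id=0123456 */
        return 0;
    }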

Crashes of security-sensitive programs - almost every crash comes from a pointer error, and most pointer errors come from buffer overflows.

Try giving security-sensitive programs very large inputs - in environment variables (if the environment is untrusted), in command-line arguments (if those are untrusted), over untrusted network connections, and in untrusted files they read. If they parse those inputs into smaller pieces, try making the individual pieces huge as well. What you are after is a crash of the program or of the system; if you do get one, check whether the crash happened somewhere influenced by the data you entered.
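
One way to produce such inputs is a trivial generator like the following (my own sketch, not part of the original text); its output can be fed to the target through stdin, an argument, or an environment variable:

    /* Prints N copies of a filler byte (default 65536) for feeding to a
     * program under test. */
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        long n = (argc > 1) ? strtol(argv[1], NULL, 10) : 65536;

        for (long i = 0; i < n; i++)
            putchar('A');
        return 0;
    }

For example, something like BIGVAR=$(./gen 100000) ./target stuffs a 100,000-byte value into an environment variable; the variable and program names here are placeholders.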

Badly placed bounds checking. If the bounds checks are spread across a few hundred lines of code instead of being concentrated in two or three places, the chance of an error is much higher.

A more comprehensive solution is to recompile every security-sensitive program with a compiler system that does bounds checking.

As far as I know, the bounds-checking GCC written by Richard W. M. Jones and Paul Kelly was the first such project. It can be found at http://www.doc.ic.ac.uk/~phjk/boundschecking.html.

Greg McGary (gkm@eng.ascend.com) has done some other work:

http://www.cygnus.com/ml/egcs/1998-may/0073.html

Richard Jones and Herman Ten Brugge have done further work:

http://www.cygnus.com/ml/egcs/1998-may/0557.html

The different implementations are compared at http://www.cygnus.com/ml/egcs/1998-may/0559.html.

Confused deputies

When you ask an ordinary program to open a file, the program asks the operating system to open it on your behalf. Since the program runs with your permissions, the system refuses the request if you do not have permission to open that file.

But when a security-sensitive program does the opening - a CGI script, a setuid program, a setgid program, any network server - it cannot rely on the system's built-in protection, because it can do things you cannot. In the case of a web server, the extra things it can do may be few, but at the very least it can read the private information in some files.

Many such programs do perform some kind of check on the data they receive. But the checks frequently have one of several weaknesses:

* Their checks are time-dependent, so there is a race condition. If a program first uses stat() to check whether you have write permission on a file and then (assuming you do) opens it, you may be able to swap in a different file in between, so that you end up changing a file you had no permission to touch. (A workable fix, sketched in the code after this list, is to stat() the path, open the file in a non-destructive mode, fstat() the open file handle, and then compare the two results to make sure they describe the same file.)

* They check by analyzing the file name, but their analysis differs from the operating system's. This is a big problem for web servers on several Microsoft operating systems, which are not at all strict about how file names map to files. The web server decides what to do with a request by looking at the file name; typically there are certain files you may execute (judged by the name) but not read. If the default action lets you read a file, you can sometimes change the name so that the server thinks it is a different type of file while the operating system still resolves it to the original one, and so read a file you were only meant to execute.

* They check in a complicated way, but because of something the designer did not know, the method has a hole.

* They only perform fairly superficial checks.

* Their checks are too simple. For example, many old UNIX web servers let you download anything under a user's public_html directory unless the operating system forbids it. If you make a symbolic link or hard link into someone else's directory, you may be able to download that person's files whenever the web server itself is allowed to read them.
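
Here is the fix sketched in the first item above: stat() the path, open the file non-destructively, fstat() the descriptor, and compare the results. The function name and the specific policy checks are my own illustrative assumptions, not a complete solution:

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/stat.h>
    #include <unistd.h>

    /* open_checked() is an invented name: examine the path first, open it
     * in a non-destructive mode, fstat() the open descriptor, and make
     * sure both calls describe the same file, so a swap in between the
     * two calls is detected. */
    int open_checked(const char *path)
    {
        struct stat before, after;

        if (lstat(path, &before) == -1)
            return -1;
        if (!S_ISREG(before.st_mode))  /* refuse symlinks, devices, ... */
            return -1;

        int fd = open(path, O_RDONLY); /* non-destructive open mode */
        if (fd == -1)
            return -1;

        if (fstat(fd, &after) == -1 ||
            before.st_dev != after.st_dev ||
            before.st_ino != after.st_ino) {
            close(fd);                 /* not the file we examined */
            return -1;
        }
        return fd;
    }

    int main(void)
    {
        int fd = open_checked("/tmp/example.dat");  /* placeholder path */
        if (fd == -1) {
            fprintf(stderr, "refusing to use /tmp/example.dat\n");
            return 1;
        }
        close(fd);
        return 0;
    }

On systems that support it, passing O_NOFOLLOW to open() is another way to refuse symbolic links outright.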

In any case, when an operation should succeed only if you could perform it yourself, the program can use setfsuid() or setreuid() to take on your permissions and let the kernel do the checking.
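
A hedged sketch of that idea, using seteuid() (one of the related calls): a setuid program temporarily takes on the real user's identity so the kernel performs the permission check, then restores its own privileges. The helper name is invented, and the exact calls and ordering vary by platform:

    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    /* open_as_real_user() is an invented helper: act with the invoking
     * user's effective uid so the kernel checks that user's permissions,
     * then restore the program's own privileges. */
    int open_as_real_user(const char *path)
    {
        uid_t real = getuid();
        uid_t privileged = geteuid();

        if (seteuid(real) == -1)          /* drop to the caller's identity */
            return -1;

        int fd = open(path, O_RDONLY);    /* checked with the caller's rights */
        int saved_errno = errno;

        if (seteuid(privileged) == -1) {  /* restore; give up if we cannot */
            if (fd != -1)
                close(fd);
            abort();
        }
        errno = saved_errno;
        return fd;
    }

    int main(int argc, char **argv)
    {
        if (argc < 2)
            return 1;
        int fd = open_as_real_user(argv[1]);
        if (fd == -1) {
            perror("open_as_real_user");
            return 1;
        }
        close(fd);
        return 0;
    }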

Another problem is that the standard library routines often consult environment variables when opening files, and they do not change their permissions while doing so (in fact, they cannot). So we are forced back to analyzing file names to decide whether a request is reasonable.

Some operating systems write core dumps with the wrong permissions; if you can make a setuid program crash, you may be able to overwrite files that only the program's owner should be able to write.

Fail-Openness

Even the most carefully written security-sensitive systems will fail to do the right thing under some conditions. They can fail in two different ways:

* They can fail by permitting actions that should have been rejected; this is called failing open.

* They can fail by rejecting actions that should have been permitted; this is called failing closed.

CGI scripts typically hand the user data passed to them on to another program. To keep the shell from interpreting that data as commands, the CGI script strips out special characters such as '<', '|', '"' and others. The fail-open way to do this is to filter against a list of "bad" characters; if even one is missing from the list, you have a security hole. The fail-closed way is to keep only characters on a list of "good" ones; then forgetting one is merely a small inconvenience, not a security hole. An example (in Perl):

http://www.geek-girl.com/bugtraq/1997_3/0013.html
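
The linked example is in Perl; below is a hedged C sketch of the same fail-closed idea, keeping only characters on an explicit "good" list and dropping everything else. The particular allowed set here is my assumption; adjust it to what the downstream program actually needs:

    #include <ctype.h>
    #include <stdio.h>
    #include <string.h>

    /* Copy src to dst, keeping only characters known to be harmless when
     * the result is later passed to another program. */
    void sanitize(char *dst, size_t dstsize, const char *src)
    {
        size_t j = 0;
        for (size_t i = 0; src[i] != '\0' && j + 1 < dstsize; i++) {
            unsigned char c = (unsigned char)src[i];
            if (isalnum(c) || c == '@' || c == '.' || c == '-' || c == '_')
                dst[j++] = (char)c;
        }
        dst[j] = '\0';
    }

    int main(void)
    {
        char clean[128];
        sanitize(clean, sizeof clean, "user@example.com; rm -rf /");
        printf("%s\n", clean);   /* prints: user@example.comrm-rf */
        return 0;
    }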

Fail-closed systems are less convenient than fail-open ones, especially when failures are frequent, but they are much more secure.

Basically every scheme I have seen for protecting a desktop computer running MacOS or a Microsoft operating system fails open: if you can stop the protecting program, you get full control of the computer. By contrast, if you stop the UNIX 'login' program from working, nobody gets control of the computer at all.

Resource exhaustion

Many programs are written on the assumption that system resources will always be sufficient (see the psychological problem above). Many programs never consider what happens when resources run out, so they misbehave when that happens.

Things to check regularly (a short sketch of the first and third items follows this list):

* Calls to malloc() or new return a null pointer when memory runs out or allocation otherwise fails;

* Whether an untrusted user can use up some system resource (this is a denial of service, but it is also a problem common to many programs);

* Whether file handles can still be opened - open() returns -1 on failure;

* Whether the program can fail to fork() or start a subprocess during initialization because resources have been exhausted.
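
A minimal sketch of the first and third checks above; the file name and sizes are arbitrary placeholders:

    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(void)
    {
        char *buf = malloc(1 << 20);       /* 1 MiB; size is arbitrary */
        if (buf == NULL) {                 /* malloc returns NULL on failure */
            fprintf(stderr, "out of memory\n");
            return 1;
        }

        int fd = open("/etc/hostname", O_RDONLY);  /* placeholder file */
        if (fd == -1) {                    /* open returns -1 on failure */
            perror("open");
            free(buf);
            return 1;
        }

        close(fd);
        free(buf);
        return 0;
    }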

Trusting untrustworthy channels

If you send a cleartext password over an Ethernet segment shared with untrusted people, or you read data from a world-writable file, or you create files in /tmp with O_TRUNC instead of O_EXCL - in short, if you do your work over an untrustworthy transmission medium - then an attacker who can tamper with that channel can change the data in it without your knowing. (The worst case is a symbolic link planted in /tmp that points at a file you are trusted to write, so that instead of creating your temporary file you clobber a permission-protected file; GCC had a vulnerability of exactly this kind, which could be used to insert arbitrary code into the programs you compiled.) Even if attackers cannot tamper with anything, they may still be able to read data that permissions were supposed to protect.
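
A hedged sketch of the /tmp point: O_CREAT|O_EXCL makes open() fail rather than follow a symbolic link planted at a predictable name, and mkstemp() is the usual higher-level way to get the same effect. The path names are placeholders:

    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(void)
    {
        /* Unsafe pattern: open("/tmp/myapp.tmp", O_WRONLY|O_CREAT|O_TRUNC, ...)
         * on a predictable name would happily follow a symbolic link someone
         * planted there and truncate whatever file it points at. */

        /* Safer: O_EXCL makes the call fail if the name already exists,
         * symlink or not. */
        int fd = open("/tmp/myapp.tmp", O_CREAT | O_EXCL | O_WRONLY, 0600);
        if (fd == -1) {
            perror("open");
            return 1;
        }
        /* ... write the temporary data ... */
        close(fd);
        unlink("/tmp/myapp.tmp");

        /* Better still: let mkstemp() pick an unpredictable name atomically. */
        char name[] = "/tmp/myapp.XXXXXX";
        int fd2 = mkstemp(name);
        if (fd2 == -1) {
            perror("mkstemp");
            return 1;
        }
        close(fd2);
        unlink(name);
        return 0;
    }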

Bad default settings

Default settings that are insecure but not obviously so are easy to overlook. For example, if unpacking an RPM creates some world-writable configuration files, you will not notice unless you are examining the system very carefully for security holes. This means people will leave security holes on their systems simply by installing the package.

Large interface

If the security interface is small, the whole system will be more secure than if the interface is large; keeping it small matters a great deal. This is easy to understand: if my house has only one door through which people can enter, I can remember to lock it when I go to sleep. If my house has five doors to the outside in different places, I am quite likely to forget one of them.

This is why a web server looks more secure than a setuid program. A setuid program accepts information from all kinds of untrusted sources - environment variables, file descriptors, its virtual memory image, command-line arguments, and assorted file input. A web server takes input only from a network socket (and perhaps from files).

qmail is an example of a small security interface. Only a small part of qmail - a program of a dozen or so lines, no longer than messages I have posted to the linux-security-audit mailing list - runs with root permissions. The rest runs either as a dedicated qmail user or with the permissions of the mail recipient.

Inside qmail, the buffer-overflow checking is concentrated in two small functions, and everything else that modifies strings calls those two functions. This is another example of a small security interface: when only a small piece of code does the checking, the chance of getting it wrong is smaller.
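
This is not qmail's actual code, but a minimal sketch of the same idea: every string modification goes through one small function where the bounds checking lives:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    struct sbuf {
        char  *data;
        size_t len;
        size_t cap;
    };

    /* The single place where bounds are checked and the buffer grows. */
    int sbuf_append(struct sbuf *b, const char *s, size_t n)
    {
        if (b->len + n + 1 > b->cap) {
            size_t newcap = (b->len + n + 1) * 2;
            char *p = realloc(b->data, newcap);
            if (p == NULL)
                return -1;             /* caller must handle the failure */
            b->data = p;
            b->cap = newcap;
        }
        memcpy(b->data + b->len, s, n);
        b->len += n;
        b->data[b->len] = '\0';
        return 0;
    }

    int main(void)
    {
        struct sbuf b = { NULL, 0, 0 };
        if (sbuf_append(&b, "Hello, ", 7) == -1 ||
            sbuf_append(&b, "world", 5) == -1)
            return 1;
        printf("%s\n", b.data);
        free(b.data);
        return 0;
    }

The point is not this particular data structure, but that there is exactly one place where the length arithmetic can be wrong.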

The more network daemons you run, the larger the security interface between the network and your machine becomes.

If you put an Internet firewall in place, the security interface between your network and the Internet shrinks to a single machine.

There is also a difference in security interface between an untrusted HTML page and an untrusted JavaScript script: the interface exposed by a JavaScript interpreter is much larger than that of an HTML renderer.

Frequently broken programs

Programs that have been broken repeatedly in the past are likely to have holes in later versions as well, so replace them where you can. Replacing /bin/mail on BSD systems was done for exactly this reason.

Auditing their code is a good idea, but a better approach is to rewrite them or stop using them altogether.

Weak security components

Any secure system can be divided into security components. For example, my Linux system divides into "users", "the kernel", and "the network", and "the network" divides further into sub-components such as individual network connections. Installation and authentication establish trust relationships between those components. (For example, my user, kragen, will trust a network connection after my password has been sent over it.)

The trust relationships between all the security components need careful scrutiny. If you run library terminals, you probably want them to access only the library's catalog database (and read-only at that); you do not want to give them a UNIX shell.

Mirabilis ICQ trusts users on the Internet, which is obviously unsafe.

tcp_wrappers, on the other hand, trusts the result of a reverse DNS lookup and passes it to a shell. And when Netscape Communicator uses Squid as a proxy, the user's FTP password ends up inside URLs in the history list, where JavaScript programs and other web servers can see it.

Overlooked cases

Do not trust untested logic. Because they are hard to verify, if-else and switch-case statements are dangerous. If you can find a code branch that is never executed, it very likely contains an error. Likewise, if you can find untested data flows - for example, two statements that each do two things, where an output of the first feeds an input of the second - the code is likely to contain a hole.

Look at the else branches of if statements and the default cases of switch statements, and make sure they fail closed.
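
A minimal sketch of that advice: the default branch of an access decision denies, so any case the programmer did not anticipate fails closed. The roles and function name are invented for illustration:

    #include <stdio.h>

    enum role { ROLE_ADMIN, ROLE_STAFF, ROLE_GUEST, ROLE_UNKNOWN };

    int may_delete(enum role r)
    {
        switch (r) {
        case ROLE_ADMIN:
            return 1;                  /* explicitly allowed */
        case ROLE_STAFF:
        case ROLE_GUEST:
            return 0;                  /* explicitly denied */
        default:
            return 0;                  /* anything unanticipated: deny */
        }
    }

    int main(void)
    {
        printf("admin: %d, unknown: %d\n",
               may_delete(ROLE_ADMIN), may_delete(ROLE_UNKNOWN));
        return 0;
    }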

gcc -pg -a makes the program produce a bb.out file, which helps you determine whether every branch in the code has been executed. I think untested branches of this kind are at the root of the recent IP denial-of-service attacks.

Low-level error

Many people trust code that only a few people have reviewed. If a piece of software is read by only a few people, it will contain many errors, and if the code is security-relevant, those errors become security holes. 3Com's recent embarrassment is an excellent example: all of their CoreBuilder and SuperStack II hubs were found to contain a secret universal password, put there to handle customers' emergencies.

