How to find security holes (reposted from Sheng'an Technology)
description:
How to find security holes
detailed:
Remarks: I haven't personally found many security holes, so take this article with a grain of salt. I welcome suggestions for better organization and grammar; reports of any errors are urgently requested.
If a bug shows up only in unusual circumstances, it is normally just a minor problem: you simply avoid the circumstances that trigger it, and the bug stops mattering. If you want, you can even reproduce the bug's effects deliberately by ordinary programming.
Sometimes, however, a program sits on a security boundary: it reads input from sources that do not have the same privileges it does. Some examples: your mail reader reads messages from senders who normally cannot read your files, which are private to you; the TCP/IP stack of any computer connected to the Internet reads packets from everyone on the Internet, most of whom have no rights on that machine at all.
Programs that do this must be careful. Any bug in them potentially allows someone on the other end to do things with your privileges that they could not otherwise do. A bug like that is called a "security hole" or, more formally, a "vulnerability".
Here are some of the usual vulnerabilities.
Structure:
When you write ordinary software, your goal is that certain things be possible for a user who operates it correctly. When you write security-sensitive software, you must also make certain things impossible, no matter what an untrusted party does. This means that parts of your program must behave correctly in a very wide range of environments.
Changes in environment:
A great many vulnerabilities come from programs being run in environments they were not written for. Something that started out as a small problem, or even a convenience, eventually becomes a vulnerability.
For example, suppose you have a script interpreter designed to preview your documents before printing. By itself this is not security-sensitive: the interpreter has no powers you do not have. But if you start using it to view documents from people you do not know or do not trust, the same interpreter suddenly becomes a threat. Someone can send you a document that destroys your files, or copies parts of them to somewhere the sender can read.
This is the root of the problem with the TCP/IP stacks in most Unix systems: they were developed in an environment where people trusted each other, but they are now used on a network full of users who cannot be trusted.
This was also Sendmail's problem. It was a steady source of vulnerabilities until it was audited.
On a more subtle level: routines that are perfectly safe as long as they never cross a trust boundary can become disastrous once they do. The gets() function is an excellent example. When you control the input, gets() is fine: you just provide a buffer bigger than you expect to need, and if you accidentally type too much, the worst that happens is that the buffer overflows and the program crashes.
When the data comes from an untrusted source, however, an overflowing gets() buffer can make the program do almost anything. A crash is the most common result, but carefully crafted input can often make the program jump into executable code supplied by the attacker.
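A minimal sketch of the difference (the buffer size and variable names here are mine, purely for illustration): gets() has no way of knowing how large the buffer is, while fgets() is told the size and therefore cannot write past it.

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        char buf[64];

        /* Dangerous: gets() does not know buf holds only 64 bytes, so
         * untrusted input longer than that overwrites whatever follows
         * buf on the stack.  (gets() was removed from C11 for this.)
         *
         *     gets(buf);
         */

        /* Safer: fgets() is told the buffer size and stops there. */
        if (fgets(buf, sizeof buf, stdin) == NULL)
            return 1;                    /* fail closed on EOF or error */
        buf[strcspn(buf, "\n")] = '\0';  /* drop the trailing newline */

        printf("read: %s\n", buf);
        return 0;
    }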
This brings us to buffer overflows.
When you write a string into an array and keep writing past the end of the array, whatever lies beyond the array gets overwritten; that is a buffer overflow.
Buffer overflows can become security problems in situations like these:
* when input is read directly into a buffer;
* when data is copied from a large buffer into a smaller one;
* when other processing of the input is done into a string buffer.
Remember: if the input is trusted, this is not a security hole, only a potential one.
In most Unix environments, buffer overflows are especially nasty. If the array is a local variable in a function, the function's return address sits just beyond it on the stack, so overwriting the array lets an attacker redirect execution. This appears to be the most widely exploited kind of vulnerability; thousands of holes of this nature have been discovered in the past few years.
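A minimal sketch of the classic case (the functions, names, and sizes are mine, not from any particular program): a fixed-size local array sits on the stack near the saved return address, and an unchecked strcpy() of attacker-controlled data can run over it.

    #include <string.h>

    /* Classic stack overflow: 'name' comes from an untrusted source.
     * If it is longer than 15 characters plus the terminator, strcpy()
     * keeps writing past 'buf' and eventually clobbers the saved return
     * address, so the function "returns" wherever the attacker chose. */
    void greet_unsafe(const char *name)
    {
        char buf[16];
        strcpy(buf, name);          /* no bounds check at all */
    }

    /* The check belongs at the trust boundary: refuse anything that
     * will not fit, and keep the copy terminated. */
    int greet_safe(const char *name)
    {
        char buf[16];
        if (strlen(name) >= sizeof buf)
            return -1;              /* fail closed on oversized input */
        strcpy(buf, name);          /* now known to fit */
        return 0;
    }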
Overflowing other kinds of buffers can also cause security holes, especially when what follows them is a function pointer or security-critical data. Things to look for include:
* Dangerous functions that do no bounds checking at all: strcpy(), strlen(), strcat(), sprintf(), gets().
* Dangerous functions that do bounds checking with awkward semantics: strncpy(), snprintf(). Some functions of this class neglect to null-terminate the string when it hits the limit, which can lead to other data being copied along with it later; such data is often sensitive, or makes the program crash. strncat() does not have this problem; I do not know whether snprintf() does, but strncpy() certainly does. (A short sketch of this pitfall follows the bounds-checking references below.)
* Misuse of strncat(), which can write a null byte just past the end of the array.
* Crashes in security-sensitive programs. Almost any crash comes from a pointer bug, and pointer bugs in production code mostly come from buffer overflows.
* Feeding very large inputs to security-sensitive programs: in environment variables (if they are untrusted), on the command line (if it is untrusted), in the untrusted files they read, and over untrusted network connections.
Watch for crashes; if you get one, check whether the crash address looks like your input (for example, 0x41414141 if the input was a run of capital 'A's).
* Incorrect bounds checking. If the bounds checks are scattered across hundreds of places in the code, rather than concentrated in two or three, the chance of an error is large. A safer solution is to compile all security-sensitive programs with a bounds-checking compiler.
The first bounds checker for GCC that I know of was written by Richard W. M. Jones and Paul Kelly; it is located at:
http://www.doc.ic.ac.uk/~phjk/bounds checking.html /
Greg McGary (gkm@eng.ascend.com) has done other work on this, described at http://www.cygnus.com/ml/egcs/1998-may/0073.html. Richard Jones and Herman ten Brugge have done further work, described at http://cygnus.com/ml/egcs/1998-may/1557.html. Greg compares the two approaches at http://www.cygnus.com/ml/egcs/1998-may/0559.html
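To illustrate the strncpy() pitfall mentioned above, here is a minimal sketch (the buffer sizes are arbitrary): strncpy() stops at the limit but does not add a terminator, so the caller has to add one, or use a routine that always terminates.

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        char src[] = "0123456789";      /* longer than dst */
        char dst[8];

        /* strncpy() stops after sizeof dst bytes, but when it stops
         * early it does NOT write a terminating '\0'.  Anything that
         * later treats dst as a C string runs off the end of it. */
        strncpy(dst, src, sizeof dst);
        dst[sizeof dst - 1] = '\0';     /* terminate explicitly */

        /* C99 snprintf() always terminates when the size is nonzero. */
        snprintf(dst, sizeof dst, "%s", src);

        printf("%s\n", dst);
        return 0;
    }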
Confused deputies:
When you tell a program to open a file, the program asks the operating system to open it. Since the program runs with your permissions, if you have no right to open that file, the operating system refuses the request. So far, everything is fine.
But when you hand a file name to a security-sensitive program - a CGI script (CGI is the Common Gateway Interface), a setuid program, any web server - it cannot simply rely on the automatic protection built into the operating system, because it can do things you have no right to do. A web server, for example, may not be able to do much that you cannot, but it can probably at least read some files containing private information.
Most such programs do check the data they are given, but they often fall into one of these traps:
* They check at a time when you can still change things. If a program stat()s or lstat()s a file and only then open()s it, there is a window in which you can swap the file for another one. It is safer to open the file (in a mode that cannot modify it) and then fstat() the open file descriptor, so that the checks apply to the file you actually got (see the sketch after this list).
* They parse the file name differently than the operating system does. This has been a problem for many web servers on Microsoft operating systems: the operating system does sophisticated parsing of the file name to find the file it actually refers to, while the web server looks at the file name to decide whether you are allowed to see it. If the server would not let you read a file, you can alter the name so that the web server thinks it is a different file while the operating system still resolves it to the same one, and the server lets you read it. This is the problem of parsing the name twice, and it is also a form of failing open (see below).
* They check the file in a fragile, overly complicated way, which invites mistakes.
* They don't check it at all, which is very common.
* They check in a buggy way. For example, many old Unix web servers let you download any file in someone's public web directory (unless the operating system stops them). But if someone makes a symbolic link or hard link from that directory to a private file, and the web server has the right to read it, you can download it.
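Here is the sketch promised in the first item: a minimal illustration of the check-then-open race and of the open-then-fstat() alternative. The helper names are mine, and O_NOFOLLOW is assumed to be available (it is on modern Unix systems).

    #include <fcntl.h>
    #include <sys/stat.h>
    #include <unistd.h>

    /* Racy: between the lstat() and the open(), an attacker who controls
     * the directory can swap the name to point at something private. */
    int open_checked_racy(const char *path)
    {
        struct stat st;
        if (lstat(path, &st) < 0 || !S_ISREG(st.st_mode))
            return -1;
        return open(path, O_RDONLY);    /* may no longer be that file */
    }

    /* Safer: open first (refusing to follow a symlink), then fstat()
     * the descriptor you actually got.  The checks now apply to the
     * object you will really read, so there is nothing left to race. */
    int open_checked(const char *path)
    {
        int fd = open(path, O_RDONLY | O_NOFOLLOW);
        if (fd < 0)
            return -1;

        struct stat st;
        if (fstat(fd, &st) < 0 || !S_ISREG(st.st_mode)) {
            close(fd);
            return -1;                  /* fail closed */
        }
        return fd;
    }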
Another problem: standard libraries sometimes open files named in environment variables while the program still holds its privileges, without dropping them first, so you are forced to check whether that is reasonable.
Also, if you can make a setuid program crash and dump core, you may be able to overwrite a file that only the program's owner could normally write. (Dumping core with the program's privileges also frequently puts data into the core file that the user could not normally read.)
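Since the deputy can do things the invoking user cannot, one common defense (a sketch under my own naming, not a complete recipe) is for a setuid program to temporarily take on the real user's identity before opening a file that user named, so the operating system's ordinary access checks apply.

    #include <fcntl.h>
    #include <unistd.h>

    /* In a setuid program, open a file named by the (untrusted) invoking
     * user with that user's own privileges, so the kernel's normal
     * permission checks apply instead of our elevated ones. */
    int open_as_invoker(const char *path)
    {
        uid_t saved = geteuid();

        if (seteuid(getuid()) < 0)      /* become the real user */
            return -1;

        int fd = open(path, O_RDONLY);

        if (seteuid(saved) < 0) {       /* restore our privileges */
            if (fd >= 0)
                close(fd);
            return -1;                  /* fail closed */
        }
        return fd;
    }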
Failing open:
Under some circumstances, most security systems fail to do the right thing. They can fail in one of two ways:
They can allow things that should not be allowed; this is failing open.
They can refuse things that should be allowed; this is failing closed.
For example, a door lock that is held shut by an electrically powered solenoid fails open: when the power goes out, the door is easy to open. A lock with a spring-loaded latch, which retracts only when the solenoid is energized, fails closed: when the solenoid is unpowered, the door cannot be opened.
CGI scripts typically run other programs and pass user-supplied data to them on the command line. To keep the shell from treating that data as commands or redirections to other programs or files, the CGI script strips out special characters like "<" and "|". Doing this by enumerating the bad characters to remove fails open: forget one and you have a hole. Doing it by enumerating the good characters to keep fails closed: forget one and you merely have an inconvenience rather than a trap. An example in Perl: http://www.geek-girl.com/bugtraq/1997/0013.html.
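A minimal sketch of the two approaches in C (the character sets and function names are mine, chosen only for illustration): the blacklist version fails open, the whitelist version fails closed.

    #include <string.h>

    /* Fails open: any shell metacharacter the author forgot to list
     * ("`", "$", newline, ...) slips straight through to the shell. */
    int looks_safe_blacklist(const char *s)
    {
        return strpbrk(s, "<>|&;") == NULL;
    }

    /* Fails closed: only characters explicitly listed as harmless are
     * accepted; forgetting one is an inconvenience, not a hole. */
    int looks_safe_whitelist(const char *s)
    {
        static const char ok[] =
            "abcdefghijklmnopqrstuvwxyz"
            "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
            "0123456789._-@";
        return s[strspn(s, ok)] == '\0';
    }

Better still is to avoid the shell entirely and pass the data to the other program as an argument vector via fork() and execve().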
If failures are frequent, failing closed is more inconvenient than failing open, but it is also far safer.
The fail-open behaviour I see most often is in the desktop security of Microsoft and Mac operating systems: if you can crash the restricting program to any extent, you get full control of the computer. By contrast, if you crash the Unix login program, you simply cannot use the computer at all.
Running out of resources:
Most programming assumes that sufficient resources are available (see "Structure" above). Many programs never even consider what happens when there are not enough resources, and some do the wrong thing when they cannot do anything at all.
A few examples (a fail-closed sketch follows this list):
* When there is not enough memory and an allocation fails, malloc() returns a null pointer, and so did new in older C++ implementations.
* If you cannot trust everyone who can consume system resources, this becomes a denial-of-service problem: an attacker can exhaust a resource deliberately, even if the program handles the failure itself. This is a serious problem for most software.
* What happens if the program runs out of file descriptors? The open() function will return -1.
* What happens if fork() fails, or if resources are exhausted while the program is starting up?
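A minimal sketch of noticing these failures and failing closed (the function and its parameters are mine, purely illustrative):

    #include <stdlib.h>
    #include <fcntl.h>
    #include <unistd.h>

    /* Every resource acquisition can fail; a security-sensitive program
     * has to notice and fail closed, rather than press on with a NULL
     * pointer or a -1 descriptor. */
    int load_config(const char *path, char **out, size_t size)
    {
        char *buf = malloc(size);
        if (buf == NULL)                /* out of memory */
            return -1;

        int fd = open(path, O_RDONLY);
        if (fd < 0) {                   /* out of descriptors, or worse */
            free(buf);
            return -1;
        }

        ssize_t n = read(fd, buf, size - 1);
        close(fd);
        if (n < 0) {
            free(buf);
            return -1;
        }
        buf[n] = '\0';
        *out = buf;
        return 0;
    }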
Trusting the untrustworthy:
If you send a password in the clear over an Ethernet shared with untrusted users; if you create a file that others can change and then read data back from it later; if you create a file in /tmp with O_TRUNC without making sure it was not already there - then you are trusting an intermediary that is under no obligation to do what you want. If attackers can subvert the untrusted channel, they may be able to deny service to you by corrupting the channel's data, or change the data without your noticing, which can lead to something much worse: if the attacker has linked your /tmp file to a file you trust, you will destroy the contents of that file instead of just a temporary one. GCC has had bugs of this kind, which could let an attacker insert code into the files you compile. Even when attackers cannot do any of these things, they may still be able to read data they should not.
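A minimal sketch of the /tmp case (the file name is mine, for illustration only): O_EXCL makes open() refuse to use a name that already exists, including a pre-planted symlink, so nothing else gets truncated; mkstemp() packages the same idea with an unpredictable name.

    #include <fcntl.h>
    #include <unistd.h>

    /* Creating a file in a world-writable directory such as /tmp means
     * trusting a channel an attacker can also write to.
     *
     * Dangerous: O_CREAT | O_TRUNC follows a symlink planted at this
     * name and truncates whatever it points to, with our privileges:
     *
     *     int fd = open("/tmp/myapp.tmp", O_WRONLY | O_CREAT | O_TRUNC, 0600);
     */
    int create_private_tmp(void)
    {
        /* Safer: O_EXCL fails if the name already exists at all. */
        int fd = open("/tmp/myapp.tmp", O_WRONLY | O_CREAT | O_EXCL, 0600);
        if (fd < 0)
            return -1;                  /* fail closed */
        return fd;
    }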
Unreasonable defaults:
A default that is obviously a security hole is easy to overlook. For example, if unpacking an RPM package creates some world-writable configuration files, you probably will not notice unless you actively go looking for vulnerabilities. That means most of the people who unpack the package will leave a hole on their own systems.
Large interfaces:
A small security interface is easier to secure than a large one; this is common sense. If my house has one door through which people can enter, I should remember to lock it before I go to bed; but if it has five doors, all leading outside, I may well forget one of them.
For this reason, a web server is easier to secure than a setuid program. A setuid program takes input from many separate untrusted sources - environment variables, file descriptors, virtual memory mappings, command-line arguments, file input, and so on. A web server takes input only from a network socket (and possibly from file input).
qmail is an example of a small security interface. Only a small part of qmail runs with privileges (although it is more than the ten lines mentioned on the Linux security-audit mailing list); the rest runs as a dedicated qmail user or as the mail's recipient.
Inside qmail, the buffer overflow checks are concentrated in a couple of small functions, and every routine that manipulates strings goes through them. That is another example of a small security interface: there are only a few places where a bounds error could hide.
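A sketch of that idea (my own types and names, not qmail's actual code): one tiny routine performs the only bounds check, and every string operation is funnelled through it.

    #include <string.h>

    /* A counted string whose only growth path is sbuf_append(), so the
     * bounds check lives in exactly one place. */
    struct sbuf {
        char   *data;
        size_t  len;    /* bytes used, excluding the terminator */
        size_t  cap;    /* total bytes available in data */
    };

    /* Append n bytes of s; refuse anything that will not fit. */
    int sbuf_append(struct sbuf *b, const char *s, size_t n)
    {
        if (n >= b->cap - b->len)       /* keep room for the '\0' */
            return -1;                  /* fail closed */
        memcpy(b->data + b->len, s, n);
        b->len += n;
        b->data[b->len] = '\0';
        return 0;
    }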
The more network daemons you run, the larger the security interface between you and the Internet. If you have a firewall, the security interface between you and the Internet is reduced to a single machine.
Part of the difference between viewing untrusted HTML pages and running untrusted JavaScript is also a matter of interface size: the routines in the latter's interpreter are far larger and more complicated than an HTML renderer.
Frequently exploited programs:
A program that has often been exploited is likely to have more vulnerabilities in the future, and some such programs should simply be replaced. On this basis, /bin/mail in BSD was replaced by mail.local.
If you are auditing, a thorough audit of such programs is a very good idea, but sometimes rewriting them, or not using them in the first place, is better.
Imperfectly defined security domains:
A secure system is divided into security domains. For example, my Linux system has a number of domains called users, one called the kernel, and a network domain that is divided into subdomains for each network interface. Between these domains there should be well-defined trust relationships and well-defined ways of granting trust. (For example, my user kragen trusts my network interface once it has supplied the correct password.)
The trust relationships at the interfaces between security domains have to be enforced. If you are running a library catalogue terminal, you probably intend the terminal to have only read access to the database; you do not want it handing out a Unix shell. I am not sure how best to accomplish this, but I am sure you can see what I am getting at.
Mirabilis ICQ trusts the entire network to report the sender's user ID correctly; obviously, this is not safe.
At one point, tcp_wrappers trusted the data it obtained from reverse DNS lookups enough to pass it to the shell (it no longer does). When Squid is used as a proxy server, the web browser sometimes puts the FTP password the user typed into the URL, and JavaScript programs and other web servers can then read that URL.
Overlooked cases:
Untested branches of if-else and switch-case statements are dangerous, because errors in them are hard to detect. If you find a branch that has never been exercised, it may well be wrong. Look for combinations of logic and data flow as well: for example, if there are two independent two-way branches, where the output of the first feeds the second, there are four combinations to test, and an untested combination can hide a hole.
Check the else clauses and the default cases of switch statements, and make sure they fail closed. The command gcc -pg -a makes the program generate a bb.out file, which helps you determine which branches your tests actually exercised.
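A minimal illustration of a fail-closed default (the enum and the policy are mine, purely for the sake of example):

    /* The default: branch is the one nobody tests -- make it deny. */
    enum access { ACCESS_READ, ACCESS_WRITE, ACCESS_ADMIN };

    int is_allowed(enum access request, int is_owner)
    {
        switch (request) {
        case ACCESS_READ:
            return 1;               /* anyone may read */
        case ACCESS_WRITE:
            return is_owner;        /* only the owner may write */
        default:
            return 0;               /* anything unexpected: fail closed */
        }
    }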
I believe untested cases like these are the root cause of many of the IP denial-of-service problems.
Just a small oversight:
Much trusted code has been examined by only a few people. Code that only a few people have checked may contain overlooked vulnerabilities; and if the code has strict security requirements, those few people can also quietly subvert them. The recent 3Com case is an excellent example: all of their CoreBuilder and SuperStack II hubs were found to contain a secret backdoor password, meant to be given to customers only in extreme circumstances.
For Linux security audits, this should not be a major focus.
Problems with this document:
Several of the categories overlap. The list was not assembled in any systematic way, so the emphasis falls on the problems I happen to know, and I may well have left out some important things. Some of the content is also too thin. Even so, I think this document is a useful tool for people who are not yet very experienced with Linux security auditing.
Information for those interested in writing secure software:
SunWorld Online has an article about designing secure software. Although Sun is not the most renowned security company in the world, the article is still well worth reading.
Bugtraq carries detailed daily reports of Unix security vulnerabilities, and geek-girl.com keeps archives going back to 1993. It is a very useful place to learn about new security holes or to look up old ones, but it is not an indexed reference book of vulnerabilities.
Adam Shostack has posted some good code review guidelines at http://www.homeport.org/~adam/review.html (guidelines some companies use for reviewing code that goes on their firewalls).
COPS includes a setuid(7) manual page containing a catalogue of ways to find and avoid insecurities in setuid programs; it is posted at http://www.homeport.org/~adam/setuid.7.html.
John Cochran of EDS pointed me to the AUSCERT secure programming checklist:
ftp://ftp.auscert.org.au/pub/auscert/paper/secure programming cheklist