Recovering accidentally deleted files on the ext2 filesystem


Sender: sinbad

Title: Recovering accidentally deleted files on the ext2 filesystem

Posted: Fri Sep 27 10:54:25 2002

--------------------------------------------------------------------------------

Author: thhsieh

Reposted from: http://www.linuxaid.com.cn

Summary

Recovering accidentally deleted files on the ext2 filesystem

--------------------------------------------------------------------------------

Our department's BBS system has really been through a lot (well... actually it was my own negligence that brought on all these disasters). On the day in question the system clock went wrong, which caused many users' IDs to be deleted by mistake, and then, because of a configuration problem, an important backup file of the BBS was wiped out as well. A junior classmate discovered this and told me; when I logged on and read his mail, I wanted to cry and very nearly banged my head against the wall.

It was already about 11:00 on Saturday morning. While I was composing an explanation to tell everyone that the old mail and part of the settings could not be restored, I was still wondering whether the situation could be saved. Everyone knows that on a UNIX-like system, unlike on an M$-like system, undelete is very hard to do; all the old hands have warned us again and again: be careful! be careful! think twice before you delete, because once it is done regret is useless. Although I had gradually braced myself, the mistake was, after all, made by the system in the background. By the time I found the cause, the file had already been gone for over an hour. I vaguely remembered seeing discussions on the net about the possibility of undelete on the Linux ext2 filesystem, and most of the answers I had seen were negative — but since it had really happened to me, I had to try. The first thing was to immediately mount the partition holding the file read-only, forbidding any write action — not for fear of more files being deleted (there was nothing left to lose), but for fear that newly written files would overwrite the disk blocks that used to hold the old data. Our only hope was to locate the blocks the original file had occupied and, if the old data on those blocks was still intact, string them back together into a file. And finally I found it!! A technical document on exactly this topic existed on my own system :-))

/usr/doc/howto/mini/ext2fs-undeletion.gz

So I followed the document's instructions step by step and in the end recovered about 99% of an 8 MB compressed file, plus another 1.1 MB compressed file recovered in full. Thank God, thank the Linux designers, thank the author of that document and everyone who has discussed this technique, and thank the excellent ext2 filesystem of Linux for giving me the chance to salvage the past. Now let me write up my rescue steps as a reference for anyone who runs into the same situation (oh! no — best of all would be for nobody to ever need the steps below :-))))

Let me state this clearly!! The purpose of this article is to give people who have accidentally deleted files a chance at recovery; it does not mean we can delete carelessly and count on getting everything back — think twice before every deletion. As mentioned above, one of my files could not be 100% recovered; in fact, saving 99% of an 8 MB file was sheer luck — in the general case, rescuing 70%-80% is already something to be happy about. So do not expect undelete to save everything. Prevention is better than cure! Please develop good habits, and think twice before you delete anything!!!

Theoretical analysis

How much can we recover? In the kernel-2.0.x series (this site runs kernel 2.0.33), it depends on the following two points:

Have the disk blocks originally occupied by the file been overwritten?

Is the file completely contiguous on disk?

On the first point we can race against time: as soon as a file is found to have been deleted by mistake, umount the filesystem as fast as possible, or remount it read-only. In my case, the deletion was only discovered an hour after it happened, but because writes to that filesystem are very rare (I am almost certain it is written only once a day, for backups), the first point was not a problem.

The second point is truly in the hands of heaven: with the kernel used on this site, a long file can be completely saved only on the assumption that all of its blocks are contiguous! A block is 1024 bytes, so a file as long as 8 MB has more than 8,000 blocks. On a rarely written filesystem a long file has a good chance of being contiguous, but on our system that seemed a bit much to hope for. Even so, ext2 is a sophisticated enough filesystem that the first 7950 blocks turned out to be contiguous — not bad at all.

OK, let me now walk through the steps I took.

Rescue step I - mount the filesystem read-only

The deleted file was originally located under /var/hda/backup/home/bbs, and our system's filesystem layout is:

root@bbs:/home/ftp/rescue# df
Filesystem   1024-blocks    Used  Available  Capacity  Mounted on
/dev/sda1         396500  312769      63250       83%  /
/dev/sda3         777410  537633     199615       73%  /home
/dev/hda1         199047   36927     151840       20%  /var/hda
/dev/hda2        1029023  490998     485710       50%  /home/ftp

So the /var/hda filesystem should immediately be remounted read-only (do this as root):

mount -o remount,ro /var/hda

Of course, you could also umount it outright, but sometimes a process is still working under that filesystem, so you may not be able to umount it directly; that is why I chose to remount read-only. Alternatively, you can run:

fuser -v -m /usr

to see which processes are currently using the filesystem, kill them one by one, and then umount it.
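Put together, the whole of step I looks like this. This is only a sketch — the commands are printed rather than executed, since the real ones need root and act on a live mount; /var/hda is the mount point from the df listing above:

```shell
# Step I in one place -- printed for inspection, NOT run directly:
# these commands need root and act on a live mount.
FS=/var/hda

echo "mount -o remount,ro $FS"   # option 1: freeze the filesystem in place
echo "fuser -v -m $FS"           # option 2: list processes using it ...
echo "fuser -k -m $FS"           # ... kill them ...
echo "umount $FS"                # ... then unmount entirely
```

Remounting read-only is the gentler option; killing processes and unmounting is only needed if you want the device fully offline.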

Rescue step II

Run:

echo lsdel | debugfs /dev/hda1 | less

to look at the recently deleted inodes (why /dev/hda1? see the df listing above). This lists key information about each deleted file, such as its size, deletion time, and attributes. For our system it looked like this:

debugfs: 92 deleted inodes found.

Inode  Owner  Mode    Size     Blocks  Time deleted

..................................................................

29771      0  100644  1255337  14/14   Sat Jan 30 22:37:10 1999
29772      0  100644  5161017  14/14   Sat Jan 30 22:37:10 1999
29773      0  100644  8220922  14/14   Sat Jan 30 22:37:10 1999
29774      0  100644     5431   6/6    Sat Jan 30 22:37:10 1999

Please note! We must use the file size, deletion time, and other fields to judge which of these is the file we want to rescue. Here, the one to recover is inode 29773.

Rescue step III

Run:

echo "stat <29773>" | debugfs /dev/hda1

to list all the information about that inode, as follows:

debugfs: stat <29773>

Inode: 29773   Type: regular   Mode: 0644   Flags: 0x0   Version: 1
User: 0   Group: 0   Size: 8220922
File ACL: 0   Directory ACL: 0
Links: 0   Blockcount: 16124
Fragment: Address: 0   Number: 0   Size: 0
ctime: 0x36b31916 -- Sat Jan 30 22:37:10 1999
atime: 0x36aebee4 -- Wed Jan 27 15:23:16 1999
mtime: 0x36adec25 -- Wed Jan 27 00:24:05 1999
dtime: 0x36b31916 -- Sat Jan 30 22:37:10 1999

BLOCKS:

123134 123136 123137 123138 123140 131404 131405 131406
131407 131408 131409 131410 131411 131668

TOTAL: 14

The key point now is that we must retrieve every block this inode refers to. Just these 14 blocks? No! There should be more than 8,000! Here is the thing: the first 12 blocks listed above are blocks that really hold the file's data — they are called direct blocks. The 13th is a first-order indirect block, and the 14th is a second-order indirect block. What does that mean? Block 131411 (the 13th) and block 131668 (the 14th) hold not data but indexes: each points to the positions of further blocks. Since a block is 1024 bytes and a 32-bit int takes 4 bytes, one block can record 256 block numbers. Taking block 131411 as an example, before the file was deleted it recorded:

131412 131413 131414 .... 131667 (256 entries)

and these 256 blocks really hold file data — that is why 131411 is called first-order. Similarly, the second-order index has two layers: block 131668 might record:

131669 131926 132183 .... (up to 256 entries)

and block 131669 in turn records:

131670 131671 131672 .... 131925 (256 entries)

and it is those 256 blocks that actually store the file's data. What we want are precisely these data blocks. In theory, we would only need to read out all of the index blocks and then, following the indexes, read out every data block, and we could recover the file 100% (assuming none of the blocks had been overwritten by new files). The job is big, but feasible. Unfortunately, in kernel 2.0.33 the design is such that when a file is deleted, these index blocks are zeroed out, so what I actually read back was:

0 0 0 0 0 ..... (256 zeros in total)

Wow! There is no way to learn the true locations of the data blocks. So here we make a big assumption: that all the blocks of the file are contiguous — as in my example above. This is why only a file whose blocks are contiguous (not counting the indirect index blocks) can be rescued completely, and that part is up to heaven.
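The arithmetic behind this layout can be sanity-checked with a few lines of shell — a sketch using the numbers of the 8220922-byte file discussed above:

```shell
# Block layout on a 1024-byte-block ext2 filesystem, applied to the
# 8220922-byte file from the text. Each index block holds 256 entries.
SIZE=8220922
BLOCK=1024
PER_INDEX=256

DATA_BLOCKS=$(( (SIZE + BLOCK - 1) / BLOCK ))        # blocks of real data
DIRECT=12                                            # held directly in the inode
VIA_DOUBLE=$(( DATA_BLOCKS - DIRECT - PER_INDEX ))   # rest go via the 2nd-order index

echo "data blocks:         $DATA_BLOCKS"             # more than 8,000, as stated
echo "via double indirect: $VIA_DOUBLE"
# Under the contiguity assumption, each 256-block data run is followed by
# the next index block, so successive runs start 257 blocks apart:
# 131670, 131927, 132184, ...
echo "run stride:          $(( PER_INDEX + 1 ))"
```

This is why the fsgrab starting offsets in step IV advance by 257 rather than 256.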

Rescue step IV

OK, from here on we assume that the whole file sits in contiguous blocks. Now please use an archie server such as http://archie.ncu.edu.tw to find this tool: fsgrab-1.2.tar.gz, and install it. The steps are simple, so I will not go into them here. We will use it to grab all the blocks we need. Its usage is:

fsgrab -c count -s skip device

where count is how many (contiguous) blocks to read, and skip is the block to start reading from. For example, to read 256 blocks starting from block 131670, the command is:

fsgrab -c 256 -s 131670 /dev/hda1 > recover
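If fsgrab cannot be found nowadays, plain dd can do the same block extraction. A sketch, with an ordinary file disk.img standing in for /dev/hda1 and illustrative file names:

```shell
# Equivalent of "fsgrab -c COUNT -s SKIP DEVICE" using dd:
# read COUNT 1024-byte blocks starting at block SKIP.
dd if=/dev/urandom of=disk.img bs=1024 count=8 2>/dev/null   # fake "device"

# "fsgrab -c 3 -s 2 disk.img > recover" becomes:
dd if=disk.img of=recover bs=1024 skip=2 count=3 2>/dev/null

wc -c recover    # 3 blocks = 3072 bytes
```

The bs/skip/count operands map directly onto fsgrab's block size, -s, and -c.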

Now let's start rescuing the file! From the inode listing above, we grab the 12 direct blocks with the following commands (note that the first 12 blocks are not completely contiguous!!!):

fsgrab -c 1 -s 123134 /dev/hda1 > recover

fsgrab -c 3 -s 123136 /dev/hda1 >> recover

fsgrab -c 1 -s 123140 /dev/hda1 >> recover

fsgrab -c 7 -s 131404 /dev/hda1 >> recover

That takes care of the 12 blocks at the start. As for the first-order indirect area, the data there is, happily, contiguous :-))

fsgrab -c 256 -s 131412 /dev/hda1 >> recover

Note that we skip block 131411, because it is an index block. As for the second-order indirect area, we *assume* it is entirely contiguous:

fsgrab -c 256 -s 131670 /dev/hda1 >> recover

fsgrab -c 256 -s 131927 /dev/hda1 >> recover

fsgrab -c 256 -s 132184 /dev/hda1 >> recover

..........................................

and so on, until the size of recover exceeds the size of the file we are rescuing (8220922 bytes). Be careful to skip the index blocks along the way (such as 131668, 131669, 131926, 132183, ....).
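The repetitive part can be generated with a small loop. This sketch only prints the fsgrab commands so they can be inspected before running; the block numbers and sizes are those of the example above:

```shell
TARGET=8220922                     # size of the file being rescued
GRABBED=$(( (12 + 256) * 1024 ))   # bytes already saved: direct + 1st-order runs
S=131670                           # first data run in the 2nd-order area

while [ "$GRABBED" -lt "$TARGET" ]; do
    echo "fsgrab -c 256 -s $S /dev/hda1 >> recover"
    GRABBED=$(( GRABBED + 256 * 1024 ))
    S=$(( S + 257 ))               # skip over the next index block
done
```

Advancing the offset by 257 (256 data blocks plus one index block) is exactly the manual skipping described above.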

Rescue step V

The last step is to trim the file to size and see how much we managed to save. After repeating the steps above, our recover file had reached 8294400 bytes, while the size we want is 8220922, so the command is:

split -b 8220922 recover rec

This produces two files: recaa, of size 8220922, and recab, holding the remainder. The latter is garbage; throw it away. Now we can check whether this file really is the complete file that was deleted. Since our file is in .tar.gz format, the check goes like this:

mv recaa recaa.tar.gz

zcat recaa.tar.gz > recaa.tar

If no error message appears, you have succeeded: the file is completely recovered. Unfortunately, we were not so lucky: when we uncompressed the renamed recaa.tar.gz and compared it against the original, we found just under 1% of the content missing. The judgment is that some blocks near the end, pointed to by one of the last index blocks, were not contiguous (or had been overwritten by newly written files). That part was simply bad luck.

Postscript

As for undelete's *required* assumption that all blocks be contiguous, the HOWTO says that Linus and the other kernel developers were researching whether this difficulty could be overcome — that is, whether the index blocks could be left intact when a file is deleted. I just tried it in a kernel-2.2.0 environment and found that they have done it!! Below is the inode data of a deleted file (read with debugfs):

debugfs:

Inode: 36154   Type: regular   Mode: 0600   Flags: 0x0   Version: 1
User: 0   Group: 0   Size: 2165945
File ACL: 0   Directory ACL: 0
Links: 0   Blockcount: 4252
Fragment: Address: 0   Number: 0   Size: 0
ctime: 0x36b54c3b -- Mon Feb 1 14:39:55 1999
atime: 0x36b54c30 -- Mon Feb 1 14:39:44 1999
mtime: 0x36b54c30 -- Mon Feb 1 14:39:44 1999
dtime: 0x36b54c3b -- Mon Feb 1 14:39:55 1999

BLOCKS:

147740 147741 147742 147743 147744 147745 147746 147747 147748 147769

147770 157642 157643 157644 157645 157646 157647 157648 157649 157650

157651 157652 157653 157654 157655 157656 157657 157658 157659 157660

157661 157662 157663 157664 157665 157666 157667 157668 157669 157670

157671 157672 157673 157674 157675 157676 157677 157678 157679 157680

157681 157682 157683 157684 157685 157686 157687 1 ...................

......... 9745 159746 159747 159748 159749 159750 159751 159752 159753

159754 159755 159756

Total: 2126

Perfect!! This means that in a kernel-2.2.x environment we no longer have to assume that all the blocks are contiguous, and 100% of the deleted blocks can be found! The second risk described above therefore no longer exists.

Reference: ext2fs-undeletion mini HOWTO

When reposting, please credit the original: https://www.9cbs.com/read-120263.html
