Why Recovering a Deleted Ext3 File Is Difficult . . .

We have all done it before. You accidentally type the wrong argument to rm or select the wrong file for deletion. As you press Enter, you notice your mistake and your stomach drops. You reach for the system backup and realize that there isn't one.

There are many undelete tools for FAT and NTFS file systems, but there are few for Ext3, which is currently the default file system for most Linux distributions. This is because of the way that Ext3 files are deleted. Crucial information that stores where the file content is located is cleared during the deletion process.

In this article, we take a low-level look at why recovery is difficult and look at some approaches that are sometimes effective. We will use some open source tools for the recovery, but the techniques are not completely automated.

What Is a File?
Before we can see how to recover files, we need to look at how files are stored. Typically, file systems are located inside a disk partition. The partition is usually organized into 512-byte sectors. When the partition is formatted as Ext3, consecutive sectors are grouped into blocks, whose size can range from 1,024 to 4,096 bytes. The blocks are in turn grouped into block groups, each tens of thousands of blocks in size. Each file has data stored in three major locations: blocks, inodes, and directory entries. The file content is stored in blocks, which are allocated for the exclusive use of the file. A file is allocated as many blocks as it needs. Ideally, the file will be allocated consecutive blocks, but this is not always possible.

The metadata for the file is stored in an inode structure, which is located in an inode table at the beginning of a block group. There are a finite number of inodes and each is assigned to a block group. File metadata includes the temporal data such as the last modified, last accessed, last changed, and deleted times. Metadata also includes the file size, user ID, group ID, permissions, and block addresses where the file content is stored.

The addresses of the first 12 blocks are saved in the inode and additional addresses are stored externally in blocks, called indirect blocks. If the file requires many blocks and not all of the addresses can fit into one indirect block, a double indirect block is used whose address is given in the inode. The double indirect block contains addresses of single indirect blocks, which contain addresses of blocks with file content. There is also a triple indirect address in the inode that adds one more layer of pointers.
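The pointer levels described above determine how large a file can grow. A minimal sketch of the arithmetic, assuming 4,096-byte blocks and 4-byte block addresses (both typical for Ext3; other block sizes change the numbers):

```python
# How many blocks each pointer level can address, assuming 4,096-byte
# blocks and 4-byte block addresses (typical Ext3 values).
BLOCK_SIZE = 4096
ADDRS_PER_BLOCK = BLOCK_SIZE // 4   # 1,024 addresses fit in one indirect block

direct = 12                          # addresses stored directly in the inode
single = ADDRS_PER_BLOCK             # one indirect block of addresses
double = ADDRS_PER_BLOCK ** 2        # indirect blocks of indirect blocks
triple = ADDRS_PER_BLOCK ** 3        # one more layer of pointers

total_blocks = direct + single + double + triple
print(f"direct blocks: {direct}")
print(f"single indirect adds: {single} blocks")
print(f"double indirect adds: {double} blocks")
print(f"maximum file size: {total_blocks * BLOCK_SIZE / 2**40:.2f} TiB")
```

With these parameters the triple indirect layer dominates, which is why small files never need indirect blocks at all.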

Last, the file's name is stored in a directory entry structure, which is located in a block allocated to the file's parent directory. An Ext3 directory is similar to a file and its blocks contain a list of directory entry structures, each containing the name of a file and the inode address where the file metadata is stored. When you use the ls -i command, you can see the inode address that corresponds to each file name. We can see the relationship between the directory entry, the inode, and the blocks in Figure 1.
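To make the directory entry structure concrete, here is a sketch that builds and walks a fake directory block. The on-disk layout used (a little-endian 4-byte inode number, 2-byte record length, 1-byte name length, 1-byte file type, then the name) is the standard Ext2/Ext3 format; the inode numbers and names are taken from the article's example:

```python
import struct

def parse_dirents(block: bytes):
    """Walk Ext3 directory entries in one directory block.
    Layout (little-endian): inode u32, rec_len u16, name_len u8,
    file_type u8, then the name (not NUL-terminated)."""
    off = 0
    entries = []
    while off + 8 <= len(block):
        inode, rec_len, name_len, ftype = struct.unpack_from("<IHBB", block, off)
        if rec_len < 8:          # corrupt or zeroed entry; stop walking
            break
        name = block[off + 8 : off + 8 + name_len].decode("latin-1")
        entries.append((inode, name))
        off += rec_len           # rec_len points to the next entry

    return entries

def make_dirent(inode, name, rec_len):
    """Build one entry, padded out to rec_len bytes (file_type 1 = regular)."""
    name_b = name.encode()
    return (struct.pack("<IHBB", inode, rec_len, len(name_b), 1)
            + name_b + b"\0" * (rec_len - 8 - len(name_b)))

# A fake 1 KiB directory block holding "." and the article's oops.dat:
block = make_dirent(415848, ".", 12) + make_dirent(415926, "oops.dat", 1012)
print(parse_dirents(block))   # [(415848, '.'), (415926, 'oops.dat')]
```

Because the name and inode number linger in the block after deletion, a tool that walks entries this way (as debugfs does for ls -d) can still report them.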

When a new file is created, the operating system (OS) gets to choose which blocks and inode it will allocate for the file. Linux will try to allocate the blocks and inode in the same block group as its parent directory. This causes files in the same directory to be close together. Later we'll use this fact to restrict where we search for deleted data.

The Ext3 file system has a journal that records updates to the file system metadata before the update occurs. In case of a system crash, the OS reads the journal and will either reprocess or roll back the transactions in the journal, which makes recovery faster than the old approach of examining every metadata structure. Example metadata structures include the directory entries that store file names and the inodes that store file metadata. The journal contains the full block that is being updated, not just the value being changed. When a new file is created, the journal should contain the updated version of the blocks containing the directory entry and the inode.

Deletion Process
Several things occur when an Ext3 file is deleted from Linux. Keep in mind that the OS gets to choose exactly what occurs when a file is deleted and this article assumes a general Linux system.

At a minimum, the OS must mark each of the blocks, the inode, and the directory entry as unallocated so that later files can use them. This minimal approach is what occurred several years ago with the Ext2 file system. In this case, the recovery process was relatively simple because the inode still contained the block addresses for the file content and tools such as debugfs and e2undel could easily re-create the file. This worked as long as the blocks had not been allocated to a new file and the original content was not overwritten.

With Ext3, there is an additional step that makes recovery much more difficult. When the blocks are unallocated, the file size and block addresses in the inode are cleared; therefore we can no longer determine where the file content was located. We can see the relationship between the directory entry, the inode, and the blocks of an unallocated file in Figure 2.

Recovery Approaches
Now that we know the components involved with files and which ones are cleared during deletion, we can examine two approaches to file recovery (besides using a backup). The first approach uses the application type of the deleted file and the second approach uses data in the journal. Regardless of the approach, you should stop using the file system because you could create a file that overwrites the data you are trying to recover. You can power the system off and put the drive in another Linux computer as a slave drive or boot from a Linux CD.

The first step for both techniques is to determine the deleted file's inode address, which we can find with debugfs or The Sleuth Kit (TSK). I'll give the debugfs method here. debugfs is a file system debugger and comes with most Linux distributions. To start debugfs, you'll need to know the device name for the partition that contains the deleted file. In my example, I have booted from a CD and the file is located on /dev/hda5:

# debugfs /dev/hda5
debugfs 1.37 (21-Mar-2005)
debugfs:

We can then use the cd command to change to the directory of the deleted file:

debugfs: cd /home/carrier/

The ls -d command will list the allocated and deleted files in the directory. Remember that the directory entry structure stores the name and the inode of the file and this listing will give us both values because neither is cleared during the deletion process. The deleted files have their inode address surrounded by "<" and ">":

debugfs: ls -d
415848 (12) . 376097 (12) .. 415864 (16) .bashrc
[...]
<415926> (28) oops.dat

The file we are trying to recover is /home/carrier/oops.dat, and we can see that it was previously allocated inode 415,926. The "(28)" shows us that the directory entry structure is 28 bytes long, but we don't care about that.

File Carving Recovery
The first recovery technique, called file carving, uses signatures from the deleted file. Many file types have standard values in the first bytes of the file header, and this recovery technique looks for the header value of the deleted file to determine where the file may have started. For example, JPEG files start with 0xffd8 and end with 0xffd9. To recover a deleted JPEG file, we would examine the first two bytes of each block, looking for one that begins with 0xffd8. When we find such a block, we then search forward for a block containing 0xffd9. The data in between are assumed to be the file. Unfortunately, not all file types have a standard footer signature, so determining where to end is difficult. An example of an open source tool that does file carving is foremost and there are several commercial options as well.
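The core of the technique can be sketched in a few lines. This is a deliberately naive carver for illustration only; real tools such as foremost add maximum-size caps, smarter footer matching, and support for many file types:

```python
def carve_jpegs(data: bytes, block_size: int = 4096):
    """Naive file carver: a JPEG candidate starts at a block whose first
    two bytes are the SOI marker (0xFFD8) and ends at the next EOI marker
    (0xFFD9). Illustration only; real carvers are far more careful."""
    found = []
    for start in range(0, len(data), block_size):
        if data[start:start + 2] != b"\xff\xd8":
            continue                         # block doesn't start a JPEG
        end = data.find(b"\xff\xd9", start + 2)
        if end != -1:                        # footer found; keep the span
            found.append(data[start:end + 2])
    return found

# Toy data: one fake 13-byte "JPEG" sitting at the start of the second
# 4 KiB block, padded with zero bytes on both sides.
blob = b"\x00" * 4096 + b"\xff\xd8" + b"jpeg body" + b"\xff\xd9"
blob += b"\x00" * (4096 - (len(blob) % 4096))
print([len(j) for j in carve_jpegs(blob)])   # prints [13]
```

Note that the carved span could still contain indirect blocks or blocks from other files, which is exactly why tools like foremost try to detect and skip indirect blocks.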

We can run a tool like foremost on the full file system, but we'll probably end up with way too many files, including allocated ones. We therefore want to run it on as little data as possible. The first way we can restrict the data size is to examine only the block group where the file was located. Remember that inodes and blocks for a file are allocated to the same block group, if there is room. In our case, we know which inode the file used and therefore we can examine only the blocks in the same group. The imap command in debugfs will tell us to which block group an inode belongs:

debugfs: imap <415926>
Inode 415926 is part of block group 25
    located at block 819426, offset 0x0a80

The output of the fsstat command in TSK would also tell us this:

# fsstat /dev/hda5
[...]
Group: 25:
   Inode Range: 408801 - 425152
   Block Range: 819200 - 851967

We next need to determine the blocks that are in the block group of the deleted file. We can see them in the previous fsstat output, but if we're using debugfs, we'll need to calculate the range. The stats command gives us the number of blocks in each group:

debugfs: stats
[...]
Blocks per group: 32768
[...]

Since we are looking at block group 25, the block range is from 819,200 (25 * 32,768) to 851,967 (26 * 32,768 - 1). By focusing on only these blocks, we are looking at 128MB instead of the full file system. If we can't find the file in these blocks, though, we'll still need to search the full file system.

The next step to reduce the data we analyze is to extract the unallocated blocks from the file system because that is where our deleted file will be located. debugfs does not currently allow us to extract the unallocated space from only a specific block group, so we will need to use the dls tool from TSK.

# dls /dev/hda5 819200-851967 > /mnt/unalloc.dat

The above command will save the unallocated blocks in block group 25 to a file named /mnt/unalloc.dat. Make sure that this file is on a different file system because otherwise you may end up overwriting your deleted file.

Now we can run the foremost tool on the unallocated data. foremost can recover only file types for which it has been configured. If foremost doesn't have the header signature for the type of the deleted file, you'll need to examine some similar files and customize the configuration file. We can run it as follows:

# foremost -d -i /mnt/unalloc.dat -o /mnt/output/

The -d option will try to detect which blocks are indirect blocks and won't include them in the final output file. The /mnt/output/ directory will contain the files that could be recovered. If your file is not in there, you can expand your search to all unallocated blocks in the file system instead of only the blocks in the block group.

Journal-Based Recovery
The second method for trying to recover the files is to use the journal. We already saw that inode updates are first recorded in the journal, but the important concept here is that the entire block in which an inode is located is recorded in the journal. Therefore, when one inode is updated, the journal will contain copies of other inodes stored in the same block. The previous version of our deleted file's inode may exist in the journal because another file was updated before the deletion.

The easiest way to look for previous versions of the inode is by using the logdump -i command in debugfs:

debugfs: logdump -i <415926>
Inode 415926 is at group 25, block 819426, offset 2688
Journal starts at block 1, transaction 104588
  FS block 819426 logged at sequence 104940, journal block 2687
   (inode block for inode 415926):
   Inode: 415926 Type: regular Mode: 0664 Flags: 0x0
   User: 500 Group: 500 Size: 2048000
   [...]
   Blocks: (0+12): 843274 (IND): 843286
[...]

In this case, we found a previous copy of the inode, and the file content blocks are listed on the last line. That line shows that the first block of the file is 843,274 and that the 12 direct blocks occupy consecutive blocks in the file system. The file is large and requires an indirect block, which is located in block 843,286. So far, all blocks are consecutive and there was no fragmentation. Block 843,286 contains the rest of the block addresses, so we should try to look at a previous version to learn where the rest of the file is located. We can see if there is a copy in the journal using logdump -b:

debugfs: logdump -b 843286 -c

Unfortunately, we don't find a copy of the block that contains the original list of block pointers so, if we want to recover the file, we need to assume that the remaining file content is stored in block 843,287 and onward. A more advanced approach would also consider which blocks are currently allocated and skip over those. The data can be extracted with tools such as dd or the Linux Disk Editor. The journal can also be searched using the jls and jcat tools from TSK.
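The block arithmetic behind that assumption, and a dd-style extraction routine, can be sketched as follows. The read_blocks helper and the contiguity assumption for the missing indirect pointers are illustrative, not part of any tool; the block numbers and the 2,048,000-byte size come from the journal output above:

```python
def read_blocks(image_path: str, blocks, block_size: int = 4096) -> bytes:
    """Read the listed blocks from a raw file-system image and return the
    concatenated content (dd-style seek-and-read per block)."""
    out = bytearray()
    with open(image_path, "rb") as f:
        for b in blocks:
            f.seek(b * block_size)
            out += f.read(block_size)
    return bytes(out)

# From the recovered inode copy: 12 direct blocks starting at 843274 and
# an indirect block at 843286. Assuming no fragmentation, the remaining
# content continues at block 843287 (skipping the indirect block itself).
file_size = 2048000
direct = list(range(843274, 843274 + 12))
remaining = -(-(file_size - 12 * 4096) // 4096)   # ceiling division
rest = list(range(843287, 843287 + remaining))
blocks = direct + rest
print(len(blocks))   # 500 blocks for a 2,048,000-byte file
```

After reading the blocks, the result would be truncated to file_size bytes, since the last block is only partially used in general. A more advanced version would consult the block allocation bitmap and skip blocks that are currently allocated to other files.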

Conclusion
File recovery with Ext3 is not a trivial matter, which reinforces the concept of making backups of important files. If the file was not fragmented, then searching for its header signature can be useful, but the tool needs to know to ignore the indirect blocks and where to stop copying (not all files have a standard footer signature). Restricting the search to the local block group can help save time. The journal could be useful if files near the deleted file were recently updated and a previous version of the inode existed, but this is not always guaranteed and the file's indirect block may not exist.


© 2008 SYS-CON Media Inc.