Why don't we need to defrag UNIX filesystems?

Hi

I am wondering why there is no need to defragment file systems in UNIX and Linux, while in Windows I must defragment them?

Try searching this forum for "UNIX defrag".

1 Like

Mostly because NTFS fragments like hell.

On a disk that's not full, defragmenting a file is as easy as creating a copy of it and then replacing the original with the copy.
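
A minimal Python sketch of that copy-and-replace idea (illustrative only; it assumes nothing else has the file open and that losing hard links doesn't matter):

    # Rewrite a file by copying it and renaming the copy over the original,
    # so the filesystem allocator can pick a fresh (ideally contiguous) spot.
    # Illustrative only: assumes nothing has the file open; drops hard links.
    import os
    import shutil
    import sys
    import tempfile

    def rewrite_file(path):
        directory = os.path.dirname(os.path.abspath(path))
        fd, tmp_path = tempfile.mkstemp(dir=directory)  # same fs, so the rename works in place
        try:
            with os.fdopen(fd, "wb") as tmp, open(path, "rb") as original:
                shutil.copyfileobj(original, tmp)
            shutil.copystat(path, tmp_path)   # keep permissions and timestamps
            os.replace(tmp_path, path)        # swap the new copy in over the old file
        except BaseException:
            os.unlink(tmp_path)
            raise

    if __name__ == "__main__":
        for name in sys.argv[1:]:
            rewrite_file(name)

The point is simply that the allocator lays the fresh copy out in whatever free space it can find, which is what a defragmenter does one file at a time.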

1 Like

Depends on the filesystem type in both unix and Windows.

Microsoft "fat32" is notorious for fragmentation, but so is unix "ufs".

Microsoft "ntfs" filesystems are broadly comparable to unix "vxfs" filesystems and are less liable to fragment badly unless you let them fill up.
Both can be defragmented with system tools or dump/load.

1 Like

I've seen NTFS fragment terribly when only 50% full. It's not as bad as FAT but still not great.

1 Like

MS must have made an absolute hash of it, then, because, as I've said, I've seen it fragment terribly on drives only 50% full. It certainly doesn't seem to make any effort to obey your ideal of squashing everything at the head of the drive, either.

I use a distro that keeps 1.9 gigs of metadata in a tree of 100,000 tiny files with frequent replacement. I've seen ReiserFS fragment badly on that (the files didn't fragment, but the directories themselves did, leading to very slow ls), but not the more common Linux filesystems.

You've got an odd idea of fragmentation. It doesn't mean "all files in one giant clump at the start of the drive", which is a recipe for fragmentation -- growing files will have no room to expand, and get scattered in pieces when they do.

And there are certainly better alternatives to dumping the entire filesystem, like shake.

1 Like

Dear Corona688, you are confusing the binary tree algorithm used for file lookup with the actual allocation of inodes on the physical drive. The gaps left behind when files are removed, later filled with parts of other files, are what cause fragmentation. When a file is opened for reading, the system follows its chain of blocks, and because the parts of the file sit in areas that are not compactly located, the drive has to perform extra rotations so that the perpendicularly moving heads arrive on time to read or write each area. Together with the disk's interleaving, this makes drive access very slow. That is exactly what fragmentation is.

"shake" is one of many options available, but again, it is not universal and it cannot defragment every kind of filesystem used in UNIX land. "dump"/"restore" has been the standard, universal way of dealing with the problem on enterprise-level systems, at least for the last 20 years.
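
If you want to see that block scattering for yourself, here is a rough Python sketch (assuming Linux, root privileges, and a filesystem such as ext2/3/4 that supports the FIBMAP ioctl) which maps each logical block of a file to its physical block number and counts the discontiguous runs:

    # Rough fragmentation check using the Linux FIBMAP ioctl.
    # Assumptions: Linux, root (FIBMAP needs CAP_SYS_RAWIO), and a filesystem
    # that implements bmap (ext2/3/4 do). An illustration, not a portable tool.
    import fcntl
    import os
    import struct
    import sys

    FIBMAP = 1    # from <linux/fs.h>: logical block in, physical block out
    FIGETBSZ = 2  # from <linux/fs.h>: filesystem block size used by FIBMAP

    def physical_blocks(path):
        """Physical block number of each logical block of the file (0 = hole)."""
        fd = os.open(path, os.O_RDONLY)
        try:
            raw = fcntl.ioctl(fd, FIGETBSZ, struct.pack("i", 0))
            block_size = struct.unpack("i", raw)[0]
            size = os.fstat(fd).st_size
            nblocks = (size + block_size - 1) // block_size
            blocks = []
            for logical in range(nblocks):
                mapped = fcntl.ioctl(fd, FIBMAP, struct.pack("i", logical))
                blocks.append(struct.unpack("i", mapped)[0])
            return blocks
        finally:
            os.close(fd)

    def count_fragments(blocks):
        """Count runs of physically contiguous blocks, skipping unallocated holes."""
        fragments = 0
        previous = None
        for block in blocks:
            if block == 0:          # sparse hole: nothing on disk here
                previous = None
                continue
            if previous is None or block != previous + 1:
                fragments += 1
            previous = block
        return fragments

    if __name__ == "__main__":
        for name in sys.argv[1:]:
            blocks = physical_blocks(name)
            print("%s: %d blocks in %d fragment(s)" % (name, len(blocks), count_fragments(blocks)))

The stock filefrag utility from e2fsprogs reports the same kind of information via the newer FIEMAP interface.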

1 Like

Me too.
After cleaning up a full 40 GB NTFS drive down to 60% full, I had the M$ Windows XP defrag take 30 hours, despite the machine having 2 GB of memory fitted. After that the system ran normally.

Similarly, a large unix UFS partition mounted as /tmp, which briefly contained 400,000 files and directories (after a programming accident), ran really slowly afterwards until it was re-created from scratch.

With large database systems, pre-allocating the segments means that you do not get disc fragmentation, but the database engine then needs to handle fragmentation itself.
I've seen databases left to expand dynamically which suffered severe slow running due to disc fragmentation, even though the database engine reported no fragmentation. A database dump/load into preallocated segments on a properly tuned filesystem cured the problem.
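
A small Python sketch of the pre-allocation idea (the segment name and size below are made up for the example; os.posix_fallocate is only available on Unix, Python 3.3+):

    # Pre-allocate a database-style segment file so later writes land in space
    # reserved up front, instead of the file growing a piece at a time.
    # Sketch only: the segment name and size are hypothetical.
    import os

    def preallocate(path, size_bytes):
        fd = os.open(path, os.O_RDWR | os.O_CREAT, 0o600)
        try:
            os.posix_fallocate(fd, 0, size_bytes)  # ask the fs to reserve the blocks now
        finally:
            os.close(fd)

    if __name__ == "__main__":
        preallocate("segment01.dbf", 2 * 1024 * 1024 * 1024)  # hypothetical 2 GiB segment

Whether the reserved space ends up contiguous still depends on the filesystem, but it avoids the repeated small extensions that scatter a dynamically growing file across the disc.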

2 Likes