The hard drive is the bottleneck of the computer. I have studied how the Linux kernel accesses data from the HDD. When a process makes a request for a resource (such as a hard drive), it gets put into a queue. If the drive is not being used, the request is serviced almost right away; if it is busy (most of the time), the process must wait for its turn to come up. Other processes in the task list get scheduled in the meantime until the resource is free.
That's a little off topic, but the point is that when a program wants to get information off a hard drive, it must wait. You can have a 3 GHz quad-core machine and that means nothing if the hard drive is busy.
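To picture that waiting, here is a toy model of a disk request queue. It is purely an illustration (the process names are invented, and the real kernel uses smarter schedulers that reorder requests to cut seek time), but it shows the basic idea: every request waits behind the ones ahead of it.

```python
from collections import deque

# Toy FIFO request queue: requests are served strictly in arrival order,
# so each one waits for everything queued before it.
queue = deque(["proc_a: read", "proc_b: read", "proc_c: write"])

served = []
while queue:
    # The drive finishes one request before starting the next.
    served.append(queue.popleft())

print(served)
```

The real block layer reorders and merges requests rather than serving them strictly FIFO, but the waiting is the same: a fast CPU sits idle while its request is stuck in line.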
A good defragmenting tool does two things:
1) Puts information that belongs together, together. Simple, right?
2) Makes things block-aligned.
Maybe an explanation for #2 is called for: hard drives are arranged in blocks, and it is quicker for the drive to fetch data when it all sits in one block. So let's say the data for a file is contiguous, but it overflows past the end of the block it starts in. Now the drive has to pull two blocks instead of one, slowing it down.
So what does this mean? A good defrag utility may want to leave gaps at the end of blocks when it cannot find any data that fits there without overflowing. There is no algorithm for 100% optimization (this is essentially the bin-packing problem), and it is unlikely there will ever be one. So each company produces its best guess at what is best.
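To make the block math concrete, here is a tiny sketch. The 4096-byte block size and the file sizes are made up for illustration:

```python
def blocks_needed(file_size, block_size=4096):
    """Number of blocks a file spans when it starts at a block boundary."""
    return -(-file_size // block_size)  # ceiling division

# A 5000-byte file overflows a 4096-byte block, so the drive reads two
# blocks; a file that fits exactly costs only one.
print(blocks_needed(5000))  # 2
print(blocks_needed(4096))  # 1
```

That one extra block per misaligned file is exactly what block alignment tries to avoid.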
One algorithm that I think would work (I don't know whether anyone actually uses it) is:
1. Find the largest file and place it in the first block (if it fits), starting at the beginning of the block.
2. Find the next largest file and see if it fits after the end of the first file without overflowing; if not, write it to the next empty block.
3. Check the third largest file and place it in a gap at the end of one of the files already written, without overflowing; if not, go to the next block.
4. And so on and so forth.
If a file takes up more than one block, write it to adjacent blocks and accept the overflow (sometimes you just can't avoid it).
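The steps above amount to a first-fit-decreasing packing. Here is a minimal sketch of them; the block size, file sizes, and the flat free-space bookkeeping are my own simplifications for illustration, not how any real defragmenter stores its state:

```python
def defrag_layout(file_sizes, block_size=4096):
    """Greedy first-fit-decreasing placement: largest files first, each
    into the first block with enough leftover space; a file bigger than
    one block gets its own run of adjacent blocks (the overflow rule)."""
    free = []    # bytes still free in each block, in order
    layout = {}  # file index -> block number where the file starts

    # Step 1: consider files from largest to smallest.
    for idx in sorted(range(len(file_sizes)), key=lambda i: -file_sizes[i]):
        size = file_sizes[idx]
        if size > block_size:
            # Oversized file: write it across adjacent fresh blocks and
            # (for simplicity) treat all of them as fully used.
            layout[idx] = len(free)
            free.extend([0] * -(-size // block_size))
            continue
        # Steps 2-3: first existing block whose gap fits the file.
        for b, gap in enumerate(free):
            if gap >= size:
                layout[idx] = b
                free[b] -= size
                break
        else:
            # No gap fits without overflowing: start a fresh block.
            layout[idx] = len(free)
            free.append(block_size - size)
    return layout

# The 4000-byte file claims block 0; 3000 won't fit in the 96-byte gap,
# so it starts block 1; 1000 then fits into block 1's leftover space.
print(defrag_layout([1000, 4000, 3000]))  # {1: 0, 2: 1, 0: 1}
```

Sorting largest-first matters: small files are the ones most likely to fit into leftover gaps, so placing big files first leaves the gaps for them.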
Some file systems are said to be "immune" to fragmentation. I'm not so sure about that, but I trust that when writing files they use an allocation strategy similar to the one above, which does help reduce it. One thing my approach fails at is accommodating files that grow after they are written, which is a huge oversight.
But I have babbled on enough. To answer your question: most definitely. As I said before, you can have a computer with a fast CPU and a lot of really fast RAM, and all of that amounts to nothing if the hard drive is busy. The best way to reduce hard drive seek time is to keep the drive nice and clean.