How Often Disk Defrag



J. P. Gilliver (John)

BillW50 <[email protected]> said:
J. P. Gilliver (John) said:
In message <[email protected]>, Stefan Patric
However, what would really be ideal is a new filesystem where performance is less (or not at all) inexorably linked to fragmentation. NTFS with
[]
Unless you are talking of one which prevents fragmentation in the
first place, I don't see how it can be possible to have one where
performance isn't affected by fragmentation, to some extent at least.
Is that so? How about the damn I/O bus can't handle the speed of even a
fragmented hard drive? Yes that is right! Do the stupid experiments and
you will find that a fragmented hard drive isn't the bottleneck. It is
the damn bus. I can't believe how clueless most people are! Seriously!
Does it *really* take an engineering degree to see this stuff or what?
I have an engineering degree, thank you. When you said "a new
filesystem", and then go on about NTFS, most people would assume you're
talking of something that occupies the same place in the
hardware/software hierarchy as NTFS. If you're going to start to bring
in the speed of the buffer/bus/whatever, then that's not the filesystem,
it's the hardware. Sure, a filesystem may be designed to optimise
certain aspects of a particular hardware architecture, but that isn't
the filesystem.
--
J. P. Gilliver. UMRA: 1960/<1985 MB++G.5AL-IS-P--Ch++(p)[email protected]+Sh0!:`)DNAf

... but it's princess Leia in /Star Wars/ who retains the throne in terms of
abiding iconography. Ask any teenage boy, including the grown-up ones.
- Andrew Collins, RT 16-22 April 2011
 


BillW50

J. P. Gilliver (John) said:
[]
I have an engineering degree, thank you. When you said "a new filesystem", and then go on about NTFS, most people would assume you're talking of something that occupies the same place in the hardware/software hierarchy as NTFS. If you're going to start to bring in the speed of the buffer/bus/whatever, then that's not the filesystem, it's the hardware. Sure, a filesystem may be designed to optimise certain aspects of a particular hardware architecture, but that isn't the filesystem.
Good, I love chatting to supposedly intelligent individuals. ;-) Back in the 80's, when we were using MFM drives, defragging was a huge improvement (and a big deal). I remember hard drive speed would often double; I can't recall a case of tripling, but in some cases maybe. Defragging MFM drives was a big deal. Microsoft had no utilities for defragging back then, so we used third-party utilities.

I am not sure why you are focusing on NTFS, but that's OK, I'll go with it. NTFS is supposed to be smart enough to write into contiguous free sectors, so it wouldn't purposely fragment files. Well, I suppose it could work better, since fragmentation still happens from time to time.

But what I am saying is, what difference does it make really? I wait until my drives get 40 to 60% fragmented (which takes about two years), and the best improvement I have recorded was like 1 or 2%. I don't know about you, but a 1 or 2% performance boost is flat peanuts! It isn't even worth the electrons to make it happen.
 

Char Jackson

J. P. Gilliver (John) said:
[]
Unless you are talking of one which prevents fragmentation in the first place, I don't see how it can be possible to have one where performance isn't affected by fragmentation, to some extent at least.
I thought the eventual migration to solid state drives would eliminate
the fragmentation concerns.
 

Paul

BillW50 said:
[]
Is that so? How about the damn I/O bus can't handle the speed of even a fragmented hard drive? Yes that is right! Do the stupid experiments and you will find that a fragmented hard drive isn't the bottleneck. It is the damn bus.
[]
And the best I have recorded was like 1 or 2% improvement. I don't know about you, but 1 or 2% performance boost is flat peanuts! And it isn't even worth the electrons to make it happen.
And how did you make this unique discovery about how buses work ?

Inquiring minds, and all that...

*******

The current SATA III storage devices, quote around 500MB/sec transfer
rate, which you can verify with HDTune. This device happens to be
around 450MB/sec. It would take me a while to search enough HDTune
results, to find the very best one today.

http://www.legitreviews.com/images/reviews/1760/hdtune-sata3-read.jpg

Now, the best hard drive I know of is a 15K RPM Seagate drive, with a 180MB/sec transfer rate. It costs $499. ($50 SATA drives are around 125MB/sec.) Transfer rates on hard drives (sustained transfer rate) are limited by the head-to-platter interface, the read amplifier, and the data encoding technique (things like PRML, partial response maximum likelihood). You'll notice that 180MB/sec isn't even remotely close to the 450MB/sec proven result. The 450MB/sec bus can handle that, no problem at all.

http://en.wikipedia.org/wiki/PRML

Amazing stuff! Works in the presence of ISI.

http://www.guzik.com/solutions_chapter9.asp

Techniques like that, eventually run out of noise
margin, and you can't get the desired error rate
performance if you go much faster. I expect now, the
amplifier can probably go faster, but the signal the
head sends back is the limitation. (At one time, it
would have been hard to make good amplifiers for
inside the HDA.)
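Paul's point that the drive, not the bus, is the bottleneck can be reduced to a one-line ratio. A minimal sketch (the 180 and 500 MB/sec figures are from the post; the function name is mine):

```python
# Rough model of SATA bus occupancy: the bus only carries data while the
# drive is actually streaming, so occupancy is the ratio of the drive's
# sustained transfer rate to the bus's rate.

def bus_occupancy(drive_rate_mb_s, bus_rate_mb_s):
    """Fraction of time the bus is busy while the drive streams continuously."""
    return drive_rate_mb_s / bus_rate_mb_s

# 15K RPM drive at 180 MB/sec on a roughly 500 MB/sec SATA III link:
occ = bus_occupancy(180, 500)
print(f"bus occupied {occ:.0%} of the time")  # about a third of the time
```

Any drive slower than the link leaves the bus idle the rest of the time, which is why a fragmented 125 MB/sec drive cannot be "bus limited".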

Paul
 

Chris S.

Paul said:
[]
Thank heaven for Paul! ;)
 

Paul

Char said:
[]
I thought the eventual migration to solid state drives would eliminate
the fragmentation concerns.
There are some file systems, where a bit of thought was put into
reducing the probability of fragmentation. The question I'd have
about this approach, is where the data is stored while the allocation
is delayed. It might still be on disk, in a buffer area, but that
would mean writing the data twice. And if the data is stored in
RAM, it's the old "power protected" problem (system needs a UPS).

At one time, Windows had a "power protected" feature, where you
could tell the OS that you had a UPS, and thus much more data
could safely sit in RAM, without endangering file system integrity.
If you could do that today, your computer would "fly" in terms
of performance. It would be up to the UPS, to send advanced power
fail, so the computer could be safely flushed to disk and shut down.

http://en.wikipedia.org/wiki/File_system_fragmentation

"A relatively recent technique is delayed allocation in XFS, HFS+ and ZFS;
the same technique is also called allocate-on-flush in reiser4 and ext4.
When the file system is being written to, file system blocks are reserved,
but the locations of specific files are not laid down yet. Later, when the
file system is forced to flush changes as a result of memory pressure or
a transaction commit, the allocator will have much better knowledge of
the files' characteristics."

An SSD is eventually limited by IOPS and SATA latency, if you push it
hard enough. The more fragmentation, the more ops it takes to complete
the transaction. But the SSD would have to be pretty grossly fragmented,
for that to happen. The SSD is likely to wear out, before it gets
that bad.
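The delayed-allocation idea in that Wikipedia passage can be illustrated with a toy allocator (entirely my own sketch, not real XFS/ext4 code): allocating each block the moment it is written interleaves two files that grow in parallel, while buffering until flush time lets each file land in one contiguous run.

```python
# Toy illustration of delayed allocation (allocate-on-flush).
# Two files grow in parallel; blocks are appended to a shared "disk" list.

def eager_layout(appends):
    """Allocate each block the moment it is written -> files interleave."""
    disk = []
    for name in appends:
        disk.append(name)
    return disk

def delayed_layout(appends):
    """Buffer writes, then allocate per-file at flush time -> contiguous runs."""
    pending = {}
    for name in appends:
        pending[name] = pending.get(name, 0) + 1
    disk = []
    for name, count in pending.items():
        disk.extend([name] * count)   # each file gets one contiguous run
    return disk

def fragments(disk):
    """Count contiguous runs per file; one run per file = no fragmentation."""
    runs = {}
    prev = None
    for name in disk:
        if name != prev:
            runs[name] = runs.get(name, 0) + 1
        prev = name
    return runs

writes = ["A", "B", "A", "B", "A", "B"]   # two files written in parallel
print(fragments(eager_layout(writes)))    # {'A': 3, 'B': 3} - fragmented
print(fragments(delayed_layout(writes)))  # {'A': 1, 'B': 1} - contiguous
```

The buffering is exactly where Paul's question bites: those pending writes have to live somewhere (RAM or a journal) until the flush.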

Paul
 


BillW50

Paul said:
[]
Paul... use your head! Take a hard drive that is 60% fragmented and use
it. Time how long it boots, time how long it takes to open your favorite
applications, search through newsgroups, etc. Now clone that drive and
defrag the cloned drive. Now what happens Paul? How much of a
performance difference do you have? Now be honest Paul!
 

BillW50

Chris S. said:
[]
Thank heaven for Paul! ;)
If Paul can get more than a 2% improvement over a defragged IDE drive, I'll be very impressed, as nobody I have talked to has done it yet. Say, have you done it yet, Chris? And do you know why there are buffers (aka cache) on the hard drive, Chris? No, I didn't think so. It caches everything the bus can't handle, so the drive doesn't have to wait for it. If the I/O could keep up, there would be no reason for drive caches at all. But you knew that, right? ;-)
 

Char Jackson

Paul... use your head! Take a hard drive that is 60% fragmented and use
it. Time how long it boots, time how long it takes to open your favorite
applications, search through newsgroups, etc. Now clone that drive and
defrag the cloned drive. Now what happens Paul? How much of a
performance difference do you have? Now be honest Paul!
Over use of a person's name while talking to them can sometimes make
the speaker seem a little crazy. ;-)
 

BillW50

Char Jackson said:
Over use of a person's name while talking to them can sometimes make
the speaker seem a little crazy. ;-)
Oh sorry! I just get tired of all of this hearsay without proof or
evidence to back it up. If there is any, I want to see it. ;-)
 

Chris S.

BillW50 said:
Oh sorry! I just get tired of all of this hearsay without proof or
evidence to back it up. If there is any, I want to see it. ;-)
It was just your statement:

"Is that so? How about the damn I/O bus can't handle the speed of even a
fragmented hard drive? Yes that is right! Do the stupid experiments and
you will find that a fragmented hard drive isn't the bottleneck. It is
the damn bus. I can't believe how clueless most people are! Seriously!
Does it *really* take an engineering degree to see this stuff or what?"

And my BSEE degree is from Purdue, 1962.

Chris
 


Paul

BillW50 said:
Paul... use your head! Take a hard drive that is 60% fragmented and use
it. Time how long it boots, time how long it takes to open your favorite
applications, search through newsgroups, etc. Now clone that drive and
defrag the cloned drive. Now what happens Paul? How much of a
performance difference do you have? Now be honest Paul!
Your first statement above is the non sequitur that bothers me.
It's not the fault of the bus. You haven't characterized a bus; you're
looking at a bottleneck caused by the head movement of a disk drive.

The bus has absolutely nothing to do with it. The bus is dumb.
It has a percent occupancy. If the hard drive is doing seeks,
there is nothing going across the bus. If the hard drive is pulling
data off the drive, the bus is still not fully occupied.

This is a 500MB/sec bus carrying 180MB/sec sustained transfer
rate data (like on that Seagate 15K hard drive). The bus is
occupied about 1/3rd of the time. The bus is not impacting performance.
        ______                 ______
       |      |               |      |
 ______|      |_______________|      |_______________

If the disk is not busy, the cache has drained, and a command
comes along, the bus can "burst" until the cache is filled. In
HDTune, they attempt to measure the "burst" performance.
(Note that several of the benchmark utilities, have needed
continuous code adjustments. It's actually pretty hard to
measure these things accurately. Many times, you see
results that don't make sense. You can't always trust
the results in a benchmark tool as being gospel.)

This is my bus, if the disk is idle, and we're filling the cache.
Once the cache is full, we're head limited again (back to the
sustained pattern above).

 |<--- 8MB of data fills 8MB cache --->|
  _____________________________________               ______
 |                                     |             |      |
_|                                     |_____________|      |__

"bus" and "fragmentation" should not be used in the same sentence !!!

Fragmentation in a file system, requires additional head movement, to
locate fragments of data. The issue is the time it takes the head to move
from A to B. When the head is moving, no data can be transferred. The
head must be stationary above the track, the embedded servo detected
to prove that is the case, the sector header located (if one is
present) and so on. That takes milliseconds, during which the
bus has nothing to do.

None of that has anything to do with busses or the theoretical
maximum transfer rate a bus can provide.
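The argument above can be put in rough numbers with a toy timing model (the 10 ms seek and 125 MB/sec rate are illustrative assumptions of mine, not measurements): total read time is head movement per fragment plus streaming time, and only the streaming term ever touches the bus.

```python
# Toy model: time to read a file = seek time per fragment + streaming time.
# The streaming (bus-visible) term is fixed; the seek term grows with fragments.

def read_time_s(file_mb, fragments, seek_ms=10.0, drive_mb_s=125.0):
    """Seconds to read file_mb of data split into `fragments` pieces."""
    seeks = fragments * (seek_ms / 1000.0)   # head movement: the bus is idle here
    stream = file_mb / drive_mb_s            # actual data transfer over the bus
    return seeks + stream

contiguous = read_time_s(100, fragments=1)
shredded = read_time_s(100, fragments=200)
print(f"1 fragment:    {contiguous:.2f} s")   # 0.81 s
print(f"200 fragments: {shredded:.2f} s")     # 2.80 s
```

The badly fragmented read takes over three times as long, yet the bytes crossing the bus, and the bus time spent carrying them, are identical in both cases.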

SSDs come closer to sustaining near-bus-rate transfers,
because there is no head movement. There is still
quantization at the SSD, because flash memory is arranged
in blocks or pages, and certain operations work at a larger
size than some other operations. But if you look at the results,
like that flat HDTune graph running at 450MB/sec, they're hiding
any internal details pretty well. A flash memory does need
time to locate your data, but the delay is pretty well hidden.

The fact that the natural storage size of the SSD doesn't exactly
align with a 512-byte sector becomes apparent when you do a
large number of small transfers to the SSD. The results
can seem pathetic, except when you compare them to a hard
drive, which couldn't even get close to the same performance
level (due to head movement). If the orientation in the flash
better aligned with sectors, it might go faster, but at the
expense of being a less-dense chip. You only get the 450MB/sec
if you do blocks of 512KB or larger (in this example). The
flash page size might be 128KB, but I haven't checked
a datasheet lately to see how that has changed. (Every generation
of flash is going to need some dimensional tweaking, or
additional ECC code bits, and so on.)

http://www.legitreviews.com/images/reviews/1760/cdm-sata3.jpg
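The small-transfer effect Paul describes fits a simple latency-plus-bandwidth model (a sketch of mine; the 100 µs per-operation cost and 450 MB/sec peak are illustrative assumptions, not datasheet figures):

```python
# Toy model: each I/O pays a fixed per-operation cost plus streaming time,
# so small transfers never come close to the device's peak sequential rate.

def effective_mb_s(block_kb, op_latency_us=100.0, peak_mb_s=450.0):
    """Throughput achieved when reading in block_kb-sized operations."""
    block_mb = block_kb / 1024.0
    seconds = op_latency_us / 1e6 + block_mb / peak_mb_s
    return block_mb / seconds

print(f"4 KB ops:   {effective_mb_s(4):6.1f} MB/s")    # tens of MB/s
print(f"512 KB ops: {effective_mb_s(512):6.1f} MB/s")  # close to the peak
```

With 4KB operations the fixed cost dominates and throughput collapses; at 512KB the streaming term dominates and you approach the 450MB/sec figure, which is the shape of the CrystalDiskMark results linked above.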

People who design buses, take these insults personally...
Lay the blame, at what is inside the HDA and how it works.

Paul
 

Chris S.

Paul said:
[]
100% correct, Paul. Thank you.

Chris
 

BillW50

Paul said:
Your first statement you made above, is the non sequitur in logic that
bothers me.
It's not the fault of the bus. You haven't characterized a bus -
you're
looking at a bottleneck caused by the head movement of a disk drive.

The bus has absolutely nothing to do with it. The bus is dumb.
It has a percent occupancy. If the hard drive is doing seeks,
there is nothing going across the bus. If the hard drive is pulling
data off the drive, the bus is still not fully occupied.

This is a 500MB/sec bus carrying 180MB/sec sustained transfer
rate data (like on that Seagate 15K hard drive). The bus is
occupied about 1/3rd of the time. The bus is not impacting
performance.
______ ______
| | | |
______| |_______________| |_______________
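The "occupied about 1/3rd of the time" figure can be checked with a one-line calculation (the two rates are the example numbers from the post, not measurements):

```python
# A 180 MB/s sustained drive on a 500 MB/s bus keeps the bus busy
# only a fraction of the time; the rest of the time the bus idles.
BUS_RATE_MB_S = 500.0    # example bus rate from the post
DRIVE_RATE_MB_S = 180.0  # example sustained media rate

occupancy = DRIVE_RATE_MB_S / BUS_RATE_MB_S
print(f"bus occupancy: {occupancy:.0%}")  # about one third
```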

If the disk is not busy, the cache has drained, and a command
comes along, the bus can "burst" until the cache is filled. In
HDTune, they attempt to measure the "burst" performance.
(Note that several of the benchmark utilities, have needed
continuous code adjustments. It's actually pretty hard to
measure these things accurately. Many times, you see
results that don't make sense. You can't always trust
the results in a benchmark tool as being gospel.)

This is my bus, if the disk is idle, and we're filling the cache.
Once the cache is full, we're head limited again (back to the
sustained pattern above).

|<--- 8MB of data fills 8MB cache --->
_____________________________________ ______
| | | |
______| |____________| |__

"bus" and "fragmentation" should not be used in the same sentence !!!

Fragmentation in a file system, requires additional head movement, to
locate fragments of data. The issue is the time it takes the head to
move
from A to B. When the head is moving, no data can be transferred. The
head must be stationary above the track, the embedded servo detected
to prove that is the case, the sector header located (if one is
present) and so on. That takes milliseconds, during which the
bus has nothing to do.

None of that has anything to do with busses or the theoretical
maximum transfer rate a bus can provide.

SSDs come closer to sustaining near-bus-rate transfers,
because there is no head movement. There is still
quantization at the SSD, because flash memory is arranged
in blocks or pages, and certain operations work at a larger
size than some other operations. But if you look at the results,
like that flat HDTune graph running at 450MB/sec, they're hiding
any internal details pretty well. A flash memory does need
time to locate your data, but the delay is pretty well hidden.

The fact that the natural storage size of the SSD doesn't exactly
align with a 512-byte sector becomes apparent when you do a
large number of small transfers to the SSD. The results
can seem pathetic, except when you compare them to a hard
drive which couldn't even get close to the same performance
level (due to head movement). If the orientation in the flash
better aligned with sectors, it might go faster, but at the
expense of being a less-dense chip. You only get the 450MB/sec
if you do blocks 512KB or larger (in this example). The
flash page size might be 128KB, but I haven't checked
a datasheet lately to see how that has changed. (Every generation
of flash, is going to need some dimensional tweaking or
additional ECC code bits and so on.)

http://www.legitreviews.com/images/reviews/1760/cdm-sata3.jpg

People who design buses take these insults personally...
Lay the blame at what is inside the HDA and how it works.

Paul
Paul, I too have HDTune, and while it is a nifty utility and all, it
doesn't test fragmentation! And that is what this thread is all about.

And I have a bunch of SSDs too. And I hear all of the claims and all...
but I am not seeing it. Nor am I seeing a big advantage of defragging
hard drives either.

Yes, I know there is a delay while the head needs to move and position
itself to read the next chunk. This sounds really bad and all. But have
you taken a hard drive apart and actually watched it work? You need a
high-speed camera to keep up with how fast the head flies around on a
really badly fragmented drive. Even a hummingbird would be impressed.

Okay, let's accept for argument's sake that the head movement slows it
down. Then how come defragging does little to nothing in the way of
improvement? Sure, lots of people talk about how well it works, but
virtually nobody offers any evidence whatsoever that it really helps by
much. I have done the tests, and 2% is the best improvement I have
found. I don't know about you, but there are lots of things I can do to
improve performance by way over 2%. And 2% is not even noticeable to
the user (nor would I care) anyway.

So what I am asking you (or anybody who cares) is this: I totally get
the technical side and all (when I was younger, I ate that all up)...
but nowadays I want to see the results just like the average user would.
If they can't see it, then I am not impressed.
 

BillW50

Chris S. said:
It was just your statement:

"Is that so? How about the damn I/O bus can't handle the speed of even
a
fragmented hard drive? Yes that is right! Do the stupid experiments
and
you will find that a fragmented hard drive isn't the bottleneck. It is
the damn bus. I can't believe how clueless most people are! Seriously!
Does it *really* take an engineering degree to see this stuff or
what?"
Yeah, so? When I tested un-fragmented against super-fragmented, I never
saw much of a difference in performance. I have run into some souls who
claim it makes a huge difference. Well, great! I want to see it. And for
decades I haven't seen it. And I am still waiting.
And my BSEE degree is from Purdue, 1962.
Millington, '76... I'd have to look at my military records to see what
it says exactly. But I graduated top of my class, with the highest
test scores they had seen in 5 years. I ended up in black ops, and it
was amazing that the consumer market saw nothing similar for at least
30-plus years later.
 

Paul

BillW50 said:
Paul, I too have HDTune, and while it is a nifty utility and all, it
doesn't test fragmentation! And that is what this thread is all about.

And I have a bunch of SSDs too. And I hear all of the claims and all...
but I am not seeing it. Nor am I seeing a big advantage of defragging
hard drives either.

Yes, I know there is a delay while the head needs to move and position
itself to read the next chunk. This sounds really bad and all. But have
you taken a hard drive apart and actually watched it work? You need a
high-speed camera to keep up with how fast the head flies around on a
really badly fragmented drive. Even a hummingbird would be impressed.

Okay, let's accept for argument's sake that the head movement slows it
down. Then how come defragging does little to nothing in the way of
improvement? Sure, lots of people talk about how well it works, but
virtually nobody offers any evidence whatsoever that it really helps by
much. I have done the tests, and 2% is the best improvement I have
found. I don't know about you, but there are lots of things I can do to
improve performance by way over 2%. And 2% is not even noticeable to
the user (nor would I care) anyway.

So what I am asking you (or anybody who cares) is this: I totally get
the technical side and all (when I was younger, I ate that all up)...
but nowadays I want to see the results just like the average user would.
If they can't see it, then I am not impressed.
If you want to see the improvement, you can use nfi.exe, which
lists the position of the data for each file on an NTFS file
system only. nfi.exe doesn't do FAT32.

I mentioned nfi.exe already in this thread.

http://al.howardknight.net/msgid.cgi?STYPE=msgid&A=0&MSGI=<[email protected]>

This is an example of a fragmented file on my laptop. nfi.exe can
list the entire drive in a relatively short time. It will also show
directory entries, and even a directory has a structure.

*******
File 110660
\Program Files\Canon\Easy-PhotoPrint EX\Template\frameM004_2L_L.bmp
$STANDARD_INFORMATION (resident)
$FILE_NAME (resident)
$FILE_NAME (resident)
$DATA (nonresident)
logical sectors 5441600-5441631 (0x530840-0x53085f)
logical sectors 58199592-58199719 (0x3780e28-0x3780ea7)
logical sectors 58066792-58067079 (0x3760768-0x3760887)
logical sectors 58723640-58724231 (0x3800d38-0x3800f87)
logical sectors 61788208-61788831 (0x3aed030-0x3aed29f)
logical sectors 61768528-61768911 (0x3ae8350-0x3ae84cf)
logical sectors 61792480-61793103 (0x3aee0e0-0x3aee34f)
logical sectors 61775416-61775815 (0x3ae9e38-0x3ae9fc7)
logical sectors 57766616-57767287 (0x37172d8-0x3717577)
logical sectors 58825952-58826303 (0x3819ce0-0x3819e3f)
logical sectors 58877712-58878407 (0x3826710-0x38269c7)
logical sectors 58695904-58696231 (0x37fa0e0-0x37fa227)
logical sectors 58740320-58741023 (0x3804e60-0x380511f)
logical sectors 58833808-58834127 (0x381bb90-0x381bccf)
logical sectors 56225112-56225847 (0x359ed58-0x359f037)
logical sectors 58221816-58222103 (0x37864f8-0x3786617)
logical sectors 58811336-58811743 (0x38163c8-0x381655f)

File 110661
*******

I don't know all the steps involved in finding a file on the disk, or
how many accesses are required before the file system has
a pointer to sector 5441600. It could be that the process
of getting there is still a significant component.

I think this is the entry for the directory holding that file.

File 108633

\Program Files\Canon\Easy-PhotoPrint EX\Template
$STANDARD_INFORMATION (resident)
$FILE_NAME (resident)
$INDEX_ROOT $I30 (resident)
$INDEX_ALLOCATION $I30 (nonresident)
logical sectors 58623864-58623871 (0x37e8778-0x37e877f)
logical sectors 58624128-58624135 (0x37e8880-0x37e8887)
logical sectors 58625072-58625079 (0x37e8c30-0x37e8c37)
logical sectors 58625160-58625167 (0x37e8c88-0x37e8c8f)
logical sectors 58625344-58625567 (0x37e8d40-0x37e8e1f)
logical sectors 58632840-58632903 (0x37eaa88-0x37eaac7)
logical sectors 53160392-53160455 (0x32b29c8-0x32b2a07)
logical sectors 58721888-58722015 (0x3800660-0x38006df)
logical sectors 5441760-5441887 (0x5308e0-0x53095f)
logical sectors 29425368-29425495 (0x1c0fed8-0x1c0ff57)
logical sectors 38306720-38306975 (0x24883a0-0x248849f)
logical sectors 14448264-14448519 (0xdc7688-0xdc7787)
logical sectors 61786600-61786855 (0x3aec9e8-0x3aecae7)
$BITMAP $I30 (resident)

That directory has 1678 items in it. And they should have been
installed, when the printer software was installed.

Paul
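If anyone wants to turn nfi.exe listings like the one above into a fragment count per file, a small script along these lines would do it (the parsing assumes the exact line layout shown above):

```python
import re

# Count contiguous runs per file in nfi.exe-style output: each
# "logical sectors A-B" line is one run; more than one run for
# a file's $DATA means the file is fragmented.
SAMPLE = """File 110660
\\Program Files\\Canon\\Easy-PhotoPrint EX\\Template\\frameM004_2L_L.bmp
    $DATA (nonresident)
        logical sectors 5441600-5441631 (0x530840-0x53085f)
        logical sectors 58199592-58199719 (0x3780e28-0x3780ea7)
File 110661
"""

def fragment_counts(text):
    counts = {}
    current = None
    for line in text.splitlines():
        m = re.match(r"File (\d+)$", line.strip())
        if m:
            current = m.group(1)
            counts[current] = 0
        elif current is not None and "logical sectors" in line:
            counts[current] += 1
    return counts

print(fragment_counts(SAMPLE))
```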
 


J. P. Gilliver (John)

Char Jackson said:
In message <[email protected]>, Stefan Patric
However, what would really be ideal is a new filesystem where performance
is less (or not at all) inexorably linked to fragmentation. NTFS with
[]
Unless you are talking of one which prevents fragmentation in the first
place, I don't see how it can be possible to have one where performance
isn't affected by fragmentation, to some extent at least.
I thought the eventual migration to solid state drives would eliminate
the fragmentation concerns.
Well, in theory not eliminate it/them, but in practice reduce them below
other concerns.
 

Char Jackson

Char Jackson said:
In message <[email protected]>, Stefan Patric
[]
However, what would really be ideal is a new filesystem where performance
is less (or not at all) inexorably linked to fragmentation. NTFS with
[]
Unless you are talking of one which prevents fragmentation in the first
place, I don't see how it can be possible to have one where performance
isn't affected by fragmentation, to some extent at least.
I thought the eventual migration to solid state drives would eliminate
the fragmentation concerns.
Well, in theory not eliminate it/them, but in practice reduce them below
other concerns.
I'll admit that I don't know why file fragmentation is a concern with
an SSD. Quite a while ago I read somewhere that it takes the same amount
of time to read data from <here> as it does to read it from <there>,
and I guess I sort of took it as gospel. I think what you're telling
me is that, even with a SSD, it's still faster to read contiguous
memory locations than it is to read scattered locations. If so, I'll
try to Kelly Bundy the first gospel and replace it with this one.

<http://www.urbandictionary.com/define.php?term=Kelly Bundy>
 

Paul

Char said:
Char Jackson said:
On Fri, 23 Mar 2012 22:24:08 +0000, "J. P. Gilliver (John)"

In message <[email protected]>, Stefan Patric
[]
However, what would really be ideal is a new filesystem where performance
is less (or not at all) inexorably linked to fragmentation. NTFS with
[]
Unless you are talking of one which prevents fragmentation in the first
place, I don't see how it can be possible to have one where performance
isn't affected by fragmentation, to some extent at least.
I thought the eventual migration to solid state drives would eliminate
the fragmentation concerns.
Well, in theory not eliminate it/them, but in practice reduce them below
other concerns.
I'll admit that I don't know why file fragmentation is a concern with
an SSD. Quite a while ago I read somewhere that it takes the same amount
of time to read data from <here> as it does to read it from <there>,
and I guess I sort of took it as gospel. I think what you're telling
me is that, even with a SSD, it's still faster to read contiguous
memory locations than it is to read scattered locations. If so, I'll
try to Kelly Bundy the first gospel and replace it with this one.

<http://www.urbandictionary.com/define.php?term=Kelly Bundy>
I guess it's easiest to give an Atto benchmark. Try the rightmost
one in the review here. There's a transfer-rate penalty when you
do small block sizes. (I'm thinking a fragmented SSD would do that...)
What is surprising to me, is writes aren't more affected.

http://www.pcgameware.co.uk/kingston-hyperx-240gb-review/

There are more results for the same device type (OCZ Vertex 3 128GB)
on this page. It's interesting how the Crystal results don't agree
with the HDTune benchmark. HDTune seems to see better write results
(400MB/sec HDTune versus 177MB/sec in Crystal).

http://www.legitreviews.com/article/1760/11/

Paul
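The small-block penalty Atto shows falls out of simple arithmetic: throughput is just operations per second times block size. The IOPS figures below are assumed ballpark values, not numbers from either review:

```python
# Throughput = IOPS * block size. Small blocks mean the per-operation
# overhead (seek on a HDD, command handling on an SSD) dominates.
def throughput_mb_s(iops, block_kb):
    return iops * block_kb / 1024.0

# Assumed ballpark figures: an SSD does tens of thousands of 4KB
# reads per second; a seek-bound HDD manages roughly a hundred.
print(f"SSD 4KB random: {throughput_mb_s(20000, 4):.1f} MB/s")
print(f"HDD 4KB random: {throughput_mb_s(100, 4):.2f} MB/s")
```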
 


J. P. Gilliver (John)

Char Jackson said:
I'll admit that I don't know why file fragmentation is a concern with
an SSD. Quite a while ago I read somewhere that it takes the same amount
of time to read data from <here> as it does to read it from <there>,
and I guess I sort of took it as gospel. I think what you're telling
The difference (settling time of buses, etc.) is probably within the
clocking time of the devices, so no _practical_ difference ...
me is that, even with a SSD, it's still faster to read contiguous
memory locations than it is to read scattered locations. If so, I'll
try to Kelly Bundy the first gospel and replace it with this one.

<http://www.urbandictionary.com/define.php?term=Kelly Bundy>
... though Paul's point about block sizing may have a small effect. Sort
of in effect doing two or more block reads possibly taking more time
than one block read, though whether that's a function of the device or
the operating system is arguable. It's going to be a very small amount
of delay, if any, and probably not at all significant, at least until
SSDs are the norm and we're in a whole new mindset, possibly involving
different ways of accessing (possibly asynchronously?).
 
