Yes, exactly.
I see an opportunity for real progress in I/O speed. This opportunity
will vanish when SSDs become so cheap that ordinary folks like me can
afford to put ALL of our data on SSDs.
An SSD connected directly to the motherboard via SATA presents a
unique opportunity. The operating system could assume that it is a
secure, permanent backing store. This is in contrast to a USB storage
device, which could disappear at any time.
A device driver (or operating system) could take advantage of this
fast yet persistent storage by allowing the disk cache to remain
valid even when the computer is power-cycled. In this scheme, I/O to
the mechanical disk(s) acting as backing store would become rare.
Most I/O would occur between DRAM and the caching SSD.
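To make the idea concrete, here is a minimal sketch (in Python,
purely illustrative - a real implementation would be a kernel driver
working on raw block devices) of a block cache whose index and data
both live on the SSD, so cached blocks stay valid across a power
cycle. The paths and file names here are hypothetical.

    import json, os

    SSD_DIR = "/ssd_cache"        # hypothetical mount point of the caching SSD
    INDEX_FILE = os.path.join(SSD_DIR, "index.json")

    class PersistentCache:
        def __init__(self):
            # Reload the index written before the last shutdown, so
            # cached blocks remain valid across a power cycle.
            if os.path.exists(INDEX_FILE):
                with open(INDEX_FILE) as f:
                    self.index = {int(k): v for k, v in json.load(f).items()}
            else:
                self.index = {}   # block number -> cache file name

        def read_block(self, blockno, read_from_disk):
            if blockno in self.index:       # hit: touch only the SSD
                path = os.path.join(SSD_DIR, self.index[blockno])
                with open(path, "rb") as f:
                    return f.read()
            data = read_from_disk(blockno)  # miss: go to the mechanical disk
            self.write_block(blockno, data)
            return data

        def write_block(self, blockno, data):
            name = "blk%d" % blockno
            with open(os.path.join(SSD_DIR, name), "wb") as f:
                f.write(data)
                f.flush()
                os.fsync(f.fileno())        # make the block durable
            self.index[blockno] = name
            with open(INDEX_FILE, "w") as f:  # persist the index too
                json.dump(self.index, f)

Because both the index and the blocks are synced to the SSD, a reboot
only means re-reading index.json; the mechanical disk is touched only
on a genuine cache miss.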
I suspect that some big server farms (Google?) are already using such
schemes.
I'm not sure what you are referring to. You say an SSD connected via
SATA, which, as far as I know, is how an SSD IS connected. Or are you
talking about something connected directly to the memory bus - like a
current DIMM - with the CPU writing directly to it? In that case it
wouldn't be SATA.
I can see some problems with that. Do present SSDs accept reads and
writes at bus speeds? I doubt it.
A cursory look at Google says that SSDs seem to have transfer speeds
of between 200 and 400 MB/s, while the SATA-3 spec allows a 600 MB/s
transfer speed. That suggests the SATA link is not going to be the
bottleneck; the storage device itself is.
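For what it's worth, the 600 MB/s figure falls straight out of the
SATA-3 line rate once you account for 8b/10b encoding (which SATA
uses), so a back-of-the-envelope check supports that reading:

    # SATA-3 signals at 6 Gbit/s, but 8b/10b encoding means only 8 of
    # every 10 bits on the wire are payload.
    line_rate_bits = 6e9
    payload_bytes_per_sec = line_rate_bits * 0.8 / 8
    print(payload_bytes_per_sec / 1e6)   # -> 600.0 MB/s, matching the spec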
As always, there are two types of memory: fast memory, which today
largely lives in core memory (the DIMMs), and long-term storage
(database files).
Given the addressing ability of a 64-bit address register, it would
seem likely that the solution is to expand core memory to the maximum
and simply write anything needing long-term storage out to disk at
the end of the session.
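For scale, a 64-bit address register can in principle address 2^64
bytes, about 16 exbibytes - far beyond any DIMM bank (actual CPUs
decode fewer physical address bits, but the point stands that the
limit is DRAM capacity and cost, not addressing):

    # What a 64-bit address register could reach in principle:
    addressable = 2 ** 64
    print(addressable)              # 18446744073709551616 bytes
    print(addressable // 2 ** 60)   # 16 (exbibytes)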
John B. Slocomb
(johnbslocombatgmaildotcom)