Robin said:
It's based on a Gigabyte GA-X79-UD3 m/b but I think that's irrelevant.
It's got two separate SATA controllers: one with internal connections
that I use and another with rear connections for quite complex RAID
setups, which I don't use. The two have separate drivers.
When I installed W7U I had not discovered this group and only had the
M/B guide. For some reason the AHCI drivers gave a hex return code that
was meaningless so I reset BIOS to IDE and it ran.
Then, later, I found that with two registry tweaks one could implement
AHCI. So I did, it found the new drivers and ran.
Now, I have the device manager entries I listed in my previous post. It
*appears* that the drivers for IDE and AHCI are all installed at the
same time, and the mode can be set merely by changing the BIOS. That sounds
weird to me. Hard Disk Monitor (product) assures me that the disks are
using AHCI.
I just wondered if I could remove the unused drivers from device
manager.
Some drivers are built into the OS.
IDE (Compatible and Native). Compatible means using I/O-space addressing
and the legacy IRQ 14/15 interrupts. Native means sitting in PCI address
space and using PCI interrupts. Hardware register definitions are standardized.
There is an IDE driver built into the OS. (Support for both versions,
Compatible and Native, might have appeared late in WinXP. Compatible
has been around the longest. At one time, you had to install your
own PCI space driver, if that's where the storage controller was mapped.)
AHCI is a relatively new standard, supporting native command queueing,
for out-of-order completion of disk commands. The "MSAHCI" driver
is built-in. Command queueing existed previously in the SCSI world,
so the concept isn't really new. Just new to the IDE disk world.
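The win from queueing is easy to picture: the drive is free to complete queued commands in whatever order minimizes head movement, rather than strict arrival order. A toy sketch (hypothetical LBAs, straight-line seek cost, not a real NCQ implementation):

```python
# Toy illustration of why out-of-order completion helps: servicing
# queued requests in sorted LBA order reduces total head travel,
# versus servicing them in FIFO arrival order.
def head_travel(lbas, start=0):
    pos, total = start, 0
    for lba in lbas:
        total += abs(lba - pos)   # straight-line "seek distance"
        pos = lba
    return total

queue = [900, 100, 850, 50, 800]      # arrival (FIFO) order
fifo_cost = head_travel(queue)        # strict in-order completion
ncq_cost = head_travel(sorted(queue)) # drive reorders the queue
print(fifo_cost, ncq_cost)            # reordered is much cheaper
```

The real drive firmware uses rotational position as well as seek distance, but the principle is the same.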
RAID is usually a custom driver. Certain drivers were shipped
with your installed OS, including "IASTORV" for Intel. That allows
Intel RAID arrays to come up, without a separate driver. If a
person installs the Matrix Storage or RST driver from Intel, it
will be given a separate name, such as "IASTOR". The "V" on the
built-in driver, stands for Vista, and indicates that the practice
of including that driver, started with Vista. The Intel RAID
driver at least, is a combined AHCI/RAID, as they're both
in the same txtsetup.oem based package. A person is not
precluded, from installing the Intel RST driver, over top
of the OS built-in. It just complicates matters later, when
doing driver re-arming.
For all the built-in drivers, you wouldn't delete them. They're
not hurting anything. And if you use the Regedit based
"driver re-arming" procedure, you'll want those built-in
drivers to remain as install candidates.
If you want to remove drivers, you'd look in the
equivalent of "Add/Remove", the Programs and Features
thing, and see what you've installed there. Most likely, a
person has idly installed the Intel RST package, and has
perhaps ended up with a second solution for RAID or AHCI.
*******
The references to SCSI, point out the two paths a driver
writer has, when writing a driver.
If a driver writer uses a "direct" style of driver, then
the driver writer has to deal with each kind of OS API
storage call (whatever they are). If the OS changes,
perhaps the model changes. The advantage of doing it
this way, is there's only one entry in Device Manager.
But Windows also has a SCSI driver stack, which is a second
means to communicate with hardware. Whereas the "internal"
storage path, would have calls specific to the OS design,
the SCSI stack deals in standard commands in the form
of a SCSI CDB (Command Descriptor Block). The SCSI standard
would define those. They're honored by hardware devices
directly (say a SCSI or SAS hard drive perhaps). A driver
writer can write a "CDB interpreter", accept CDBs from
the OS, and translate them into one or more disk hardware
specific commands. Sometimes, you'll see two driver entries
in Device Manager, one of which might make reference to
SCSI. Seeing that tells you the driver listens
for CDBs from the OS, and that the OS has been informed the subsystem
is "SCSI". The whole notion is referred to sometimes as
"pseudo-SCSI", because the hardware isn't really SCSI hardware,
but the driver hides the details. As an example, the SCSI stack
would be a good way to hook a hardware RAID to the OS, giving
the impression there is "one volume" when in fact an array
of disks in RAID is delivering the data. The OS just
gets the impression data is coming from "one virtual disk".
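The CDB idea is concrete enough to sketch. A SCSI READ(10) command, for instance, is a ten-byte block: opcode 0x28, a four-byte big-endian LBA, a two-byte big-endian transfer length, with flag/group/control bytes around them. A pseudo-SCSI driver receives blocks like this from the OS and translates them into whatever the real hardware wants (layout per the SCSI block command standard; reserved bytes left zero for simplicity):

```python
import struct

# Build a SCSI READ(10) CDB: opcode 0x28, flags byte, 4-byte
# big-endian LBA, group byte, 2-byte big-endian transfer length
# (in blocks), control byte. Ten bytes total.
def read10_cdb(lba, blocks):
    return struct.pack(">BBIBHB", 0x28, 0, lba, 0, blocks, 0)

cdb = read10_cdb(lba=2048, blocks=8)
assert len(cdb) == 10 and cdb[0] == 0x28
```

A pseudo-SCSI driver's job is the reverse direction: parse a block like this and issue the equivalent controller-specific commands.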
In any case, evidence of what's happened, would be
sitting in your INF folder, and in the setupapi.* files.
The INF folder is not straightforward, because when
a driver is installed, the vendor INF file is renamed.
Perhaps "mystor.inf" from the original installer,
becomes "oem23.inf" in the INF folder. If the driver writer
is smart, in the "mystor.inf" file, the original file
name appears at the top of the file. Doing a text
search for "mystor", against the INF folder, will locate
the "oem23.inf" file as being the renamed candidate.
That's how you'd figure out where it went. Other options
for finding it, include searching for the VEN/DEV numbers
of the associated installed hardware. That is for times,
when the mystor.inf doesn't have a self-referential
line of text near the top of the file.
The setupapi.log file in WinXP, used to be a very reliable
source of information, as it logs every twist and turn
in the driver story. If a user has been "flipping their
BIOS settings", dated entries in the log file, tell you
precisely which days the user did the flipping. Now, in
Vista or later, I'm not at all sure how to get the same
level of good information. There is certainly a set of files
by that name (now more than one setupapi.* file), but
I don't recollect being as impressed with the information
inside. The old setupapi.log file was "one stop shopping"
on WinXP, and I could learn a lot from what's in there,
as to what went wrong. I can't really be as positive,
about the ability to trace how the system got that way,
on Windows 7.
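If you do want to mine the newer logs, the dated entries are at least still grep-able. A sketch that pulls yyyy/mm/dd stamps out of log text, so you can see on which days installs happened (the sample lines are hypothetical, loosely modeled on "Section start" entries; check your own setupapi.* files for the actual format):

```python
import re

# Extract the distinct dates appearing in setupapi-style log text,
# in order of first appearance, so you can see on which days driver
# installs (e.g. BIOS-mode flips) occurred.
DATE_RE = re.compile(r"\b(\d{4}/\d{2}/\d{2})\b")

def install_dates(log_text):
    seen = []
    for date in DATE_RE.findall(log_text):
        if date not in seen:
            seen.append(date)
    return seen

# Hypothetical sample, not copied from a real log:
sample = (
    ">>>  [Device Install (Hardware initiated) - pci\\ven_8086]\n"
    ">>>  Section start 2012/03/05 09:14:22.101\n"
    ">>>  Section start 2012/03/07 18:02:45.334\n"
)
print(install_dates(sample))
```

It's a blunt instrument compared to reading the old setupapi.log properly, but it answers the "which days did the flipping happen" question.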
You only uninstall drivers, if you know there'll be
a mechanism for the OS to "come back up" on the next
boot. While the OS has things like a "last known good"
configuration, one level of driver rollback and the like,
to fix up minor problems, you don't really want an OS
that can't boot. So if you're going to tear out a
driver via the equivalent of "Add/Remove", you want
to make sure there is a driver to take its place, if
the OS needs that driver to boot with. The reason
you can remove your video card driver, is because
the OS has a built-in VESA driver, which people rely
on so they can see their screen. That's an example
of knowing you have an alternative ready-to-go. In
the storage area, you have to be a little bit careful
you don't "cut the legs off your OS". It needs a leg
to stand on. Built-in IDE, AHCI, and a couple RAID
drivers, are examples of those legs. And on the
latest OSes, there is the notion of using registry
settings for "driver re-arming", so the OS will
consider all the candidates on the next boot.
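For Windows 7, the re-arming tweaks usually cited just set the Start value of the built-in storage services (msahci, and iaStorV for Intel) to 0, meaning "boot start", so the OS will load them on the next boot. A sketch that writes those two tweaks out as a .reg file you could inspect before merging (standard service key paths; verify against your own registry, and apply at your own risk):

```python
# Sketch: generate the .reg text for the commonly cited Windows 7
# AHCI "re-arming" tweak. Start=0 (boot start) on msahci and iaStorV
# makes those built-in drivers install candidates on the next boot.
SERVICES = ["msahci", "iaStorV"]

def rearm_reg_text(services=SERVICES):
    lines = ["Windows Registry Editor Version 5.00", ""]
    for svc in services:
        lines.append(
            r"[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\%s]" % svc)
        lines.append('"Start"=dword:00000000')
        lines.append("")
    return "\n".join(lines)

if __name__ == "__main__":
    print(rearm_reg_text())
```

Those appear to be the same two tweaks Robin used to switch from IDE to AHCI in the first place.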
Paul