My Windows 7 SP1 installation nightmare!

Why Windows is and always will be a flawed OS

Let's be honest: there's no such thing as bug-free software. Initial versions of programs may occasionally crash, fail to de-allocate memory, or encounter untested conditions. Developers may overlook security holes, users may do things nobody thought of, and not all systems are identical. Software developers are only human, and they make mistakes now and then. It happens. But of all major software vendors Microsoft has the worst record by far when it comes to the quality of their products in general.

Microsoft boasts a rather extensive product range, but in fact there's less here than meets the eye. Microsoft has forever been selling essentially the same software over and over again, in a variety of colorful new wrappers.
Microsoft products can be divided into three categories: applications, operating systems, and additional server products. The applications include the Microsoft Office suite, but also Internet Explorer, Media Player, Visio, FrontPage, etc. The operating systems comprise desktop and server versions of Windows. On the desktop we find Windows 9x/ME, NT Workstation, Windows 2000, XP and Vista, and at the server end we have Windows NT Server, Windows Server 2003 and varieties such as Datacenter. The additional server products, e.g. Internet Information Server (IIS) and SQL Server, run on top of one of the Windows server products. They add services (e.g. webserver or database server functions) to the basic file, print and authentication services that the Windows server platform provides.
Two different Windows families

Windows on the desktop comes in two flavors: the Windows 9x/ME product line, and the Windows NT/2000/XP/Vista product line. The different versions within one product line are made to look a bit different, but the difference is in the details only; they are essentially the same. Windows '95, '98 and ME are descended from DOS and Windows 3.x, and contain significant portions of old 16-bit legacy code. These Windows versions are essentially DOS-based, with 32-bit extensions. Process and resource management, memory protection and security were added as an afterthought and are rudimentary at best. This Windows product line is totally unsuited for applications where security and reliability are an issue. It is completely insecure, e.g. it may ask for a password but it won't mind if you don't supply one. There is no way to prevent the user or the applications from accessing and possibly corrupting the entire system (including the file system), and each user can alter the system's configuration, either by mistake or deliberately. The Windows 9x/ME line primarily targets consumers (although Windows '95 marketing was aimed at corporate users as well). Although this entire product line was retired upon the release of Windows XP, computers running Windows '98 or (to a lesser degree) Windows ME are still common.
The other Windows product line includes Windows NT, 2000, XP and Vista, and the server products. This Windows family is better than the 9x/ME line and at least runs new (i.e. post-DOS) 32-bit code. Memory protection, resource management and security are a bit more serious than in Windows 9x/ME, and they even have some support for access restrictions and a secure filesystem. That doesn't mean that this Windows family is anywhere near as reliable and secure as Redmond's marketeers claim, but compared to Windows 9x/ME its additional features at least have the advantage of being there at all. But even this Windows line contains a certain amount of 16-bit legacy code, and the entire 16-bit subsystem is a direct legacy from Microsoft's OS/2 days with IBM. In short, all 16-bit applications share one 16-bit subsystem (just as with OS/2). There's no internal memory protection, so one 16-bit application may crash all the others and the entire 16-bit subsystem as well. This may create persistent locks from the crashed 16-bit code on 32-bit resources, and eventually bring Windows to a halt. Fortunately this isn't much of a problem anymore now that 16-bit applications have all but died out.
While Windows has seen a lot of development over the years, relatively little has really improved. The new features in new versions of Windows all show the same half-baked, patchy approach. For each fixed problem, at least one new problem is introduced (and often more than one). Windows XP for example comes loaded with more applications and features than ever before. While this may seem convenient at first sight, the included features aren't as good as those provided by external software. For example, XP insists on supporting DSL ("broadband Internet") networking, scanners and other peripherals with the built-in Microsoft code instead of requiring third-party code. So you end up with things like DSL networking that uses incorrect settings (and no convenient way to change that), scanner support that won't let you use your scanner's photocopy feature, or a digital camera interface that lets you download images from the camera but won't let you use its webcam function. Wireless (WiFi) network cards are even more of a problem: where manufacturers could include their own drivers and client manager software in previous versions of Windows, users are now reduced to using XP's native WiFi support. Unfortunately XP's WiFi support is full of problems that may cause wireless PCs to lose their connection to the wireless access point with frustrating regularity. XP's native WiFi support also lacks extra functions (such as advanced multiple-profile management) that manufacturers used to include in their client software.
Even basic services are affected. Windows 2000 and later have built-in DNS (Domain Name System) caching. DNS is the mechanism that resolves Internet host and domain names (e.g. www.microsoft.com) into the numerical IP addresses used by computers (e.g. 194.134.0.67). Windows' DNS caching basically remembers resolved hostnames for faster access and reduced DNS lookups. This would be a nice feature, if it weren't for the blunder that failed DNS lookups get cached by default as well. When a DNS lookup fails (due to temporary DNS problems) Windows caches the unsuccessful DNS query, and continues to fail to connect to a host regardless of the fact that the DNS server might be responding properly a few seconds later. And of course applications (such as Internet Explorer and Outlook) have been integrated in the operating system more tightly than ever before, and more (formerly separate) products have been bundled with the operating system.
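For what it's worth, the negative caching can at least be switched off through the registry. The sketch below (plain C against the standard Win32 registry API, linked with advapi32.lib) sets the MaxNegativeCacheTtl value of the DNS Client service to zero. Treat it as an illustration rather than a recipe: the value name is the one commonly documented for Windows 2000/XP and may differ on other versions, it needs administrative rights, and the change only takes effect after a cache flush or a restart of the DNS Client service.
    #include <windows.h>
    #include <stdio.h>

    /* Minimal sketch: disable caching of failed DNS lookups by setting
     * MaxNegativeCacheTtl (value name as documented for Windows 2000/XP;
     * verify for your version) to 0 seconds. Requires admin rights; run
     * "ipconfig /flushdns" or restart the DNS Client service afterwards. */
    int main(void)
    {
        HKEY  hKey;
        DWORD ttl = 0;   /* cache failed lookups for 0 seconds */
        LONG  rc;

        rc = RegOpenKeyExA(HKEY_LOCAL_MACHINE,
                "SYSTEM\\CurrentControlSet\\Services\\Dnscache\\Parameters",
                0, KEY_SET_VALUE, &hKey);
        if (rc != ERROR_SUCCESS) {
            fprintf(stderr, "RegOpenKeyExA failed: %ld\n", rc);
            return 1;
        }

        rc = RegSetValueExA(hKey, "MaxNegativeCacheTtl", 0, REG_DWORD,
                (const BYTE *)&ttl, sizeof(ttl));
        if (rc != ERROR_SUCCESS)
            fprintf(stderr, "RegSetValueExA failed: %ld\n", rc);
        else
            printf("Negative DNS caching disabled (TTL = 0).\n");

        RegCloseKey(hKey);
        return rc == ERROR_SUCCESS ? 0 : 1;
    }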
Design flaws common to all Windows versions

All versions of Windows share a number of structural design flaws. Application installation procedures, user errors and runaway applications may easily corrupt the operating system beyond repair. Networking support is poorly implemented. Inefficient code leads to sub-standard performance, and both scalability and manageability leave a lot to be desired. (See also appendix A.) In fact, NT and its successors (or any version of Windows) simply cannot match the functionality, robustness or performance that the UNIX community has been used to for decades. They may work well, or they may not. On one system Windows will run for weeks on end, on another it may crash quite frequently. I've attended training courses at a Microsoft Authorized Education Center, and I was told: "We are now going to install Windows on the servers. The installation will probably fail on one or two systems [they had ten identical systems in the classroom] but that always happens - we don't know why and neither does Microsoft." I repeat: this from a Microsoft Authorized Partner.
Be that as it may... Even without any installation problems or serious crashes (the kind that require restore operations or reinstallations) Windows doesn't do the job very well. Many users think it does, but they generally haven't experienced any alternatives. In fact Windows' unreliability has become commonplace and even proverbial. The dreaded blue screen (popularly known as the Blue Screen of Death or BSOD for short) has featured in cartoons, screen savers and on t-shirts; it has appeared at airports and on buildings, and there has even been a Star Trek episode in which a malfunctioning space ship had to be switched off and back on in order to get it going.
Windows is quite fragile, and the operating system can get corrupted quite easily. This happens most often during the installation of updates, service packs, drivers or application software, and the problem exists in all versions of Windows so far. The heart of the problem lies in the fact that Windows can't (or rather, is designed not to) separate application and operating system code and settings. Code gets mixed up when applications install portions of themselves between files that belong to the operating system, occasionally replacing them in the process. Settings are written to a central registry that also stores vital OS settings. The registry database is basically insecure, and settings that are vital to the OS or to other applications are easily corrupted.
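To make the point concrete, here is a hypothetical installer fragment in C. The key path and value names are invented for illustration, but the mechanism is real: any process with administrative rights (the norm on Windows 9x/ME and on most NT/2000/XP desktops of the era) can open HKEY_LOCAL_MACHINE and silently overwrite a machine-wide setting that another product, or Windows itself, depends on. The registry keeps no history, no per-application namespace and no version information.
    #include <windows.h>
    #include <string.h>
    #include <stdio.h>

    /* Hypothetical installer fragment (key path and value names are
     * invented for illustration). It clobbers a shared, machine-wide
     * setting; nothing records who changed it or what the old value was. */
    int main(void)
    {
        HKEY hKey;
        const char *newValue = "C:\\MyApp\\mytwain.dll";  /* hypothetical */

        if (RegCreateKeyExA(HKEY_LOCAL_MACHINE,
                "SOFTWARE\\Shared\\Imaging",      /* hypothetical shared key */
                0, NULL, 0, KEY_SET_VALUE, NULL, &hKey, NULL) != ERROR_SUCCESS)
            return 1;

        /* Silently overwrites whatever driver path was registered before. */
        RegSetValueExA(hKey, "TwainDriver", 0, REG_SZ,
                (const BYTE *)newValue, (DWORD)strlen(newValue) + 1);

        RegCloseKey(hKey);
        printf("Shared setting overwritten; no backup, no versioning.\n");
        return 0;
    }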
Even more problems are caused by the limitations of Windows' DLL subsystem. A good multi-tasking and/or multi-user OS utilizes a principle called code sharing. Code sharing means that if an application is running n times at once, the code segment that contains the program code (which is called the static segment) is loaded into memory only once, to be used by n different processes which are therefore instances of the same application. Apparently Microsoft had heard about something called code sharing, but obviously didn't really understand the concept and the benefits, or they didn't bother with the whole idea. Whatever the reason, they went and used DLLs instead. DLL files contain Dynamic Link Libraries and are intended to contain library functions only. Windows doesn't share the static (code) segment - if you run 10 instances of Word, the bulk of the code will be loaded into memory 10 times. Only a fraction of the code, e.g. library functions, has been moved to DLLs and may be shared.
The main problem with DLL support is that the OS keeps track of DLLs by name only. There is no adequate signature system to keep track of different DLL versions. In other words, Windows cannot see the difference between one WHATSIT.DLL and another DLL with the same name, although they may contain entirely different code. Once a DLL in the Windows directory has been overwritten by another one, there's no way back. Also, the order in which applications are started (and DLLs are loaded) determines which DLL will become active, and how the system will eventually crash. There is no distinction between different versions of the same DLL, or between DLLs that come with Windows and those that come with application software. An application may put its own DLLs in the same directory as the Windows DLLs during installation, and may overwrite DLLs by the same name if they exist.
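A small sketch in C shows how little Windows actually knows about the DLLs it loads; "whatsit.dll" is just the placeholder name used above. LoadLibrary resolves the name along the DLL search path and hands back whichever file it finds first, and GetModuleFileName is the only way to discover afterwards which copy won.
    #include <windows.h>
    #include <stdio.h>

    /* Sketch: Windows resolves a DLL purely by name, walking the DLL
     * search path (application directory, system directories, PATH, ...).
     * Whichever WHATSIT.DLL is found first is the one every caller gets;
     * there is no version or signature check at load time. */
    int main(void)
    {
        char    path[MAX_PATH];
        HMODULE hLib = LoadLibraryA("whatsit.dll");

        if (hLib == NULL) {
            fprintf(stderr, "whatsit.dll not found on the search path\n");
            return 1;
        }

        /* Show which copy of the DLL actually got loaded. */
        if (GetModuleFileNameA(hLib, path, MAX_PATH))
            printf("Loaded: %s\n", path);

        FreeLibrary(hLib);
        return 0;
    }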
What it boils down to is that the application may add portions of itself to the operating system. (This is one of the reasons why Windows needs to be rebooted after an application has been installed or changed.) That means that the installation procedure introduces third-party code (read: uncertified code) into the operating system and into other applications that load the affected DLLs. Furthermore, because there is no real distinction between system level code and user level code, the software in DLLs that has been provided by application programmers or the user may now run at system level. This corrupts the integrity of the operating system and other applications. A rather effective demonstration was provided by Bill Gates himself who, during a Comdex presentation of the Windows 98 USB Plug-and-Play features, connected a scanner to a PC and caused it to crash into a Blue Screen. "Moving right along," said Gates, "I guess this is why we're not shipping it yet." Nice try, Mr. Gates, but of course the release versions of Windows '98 and ME were just as unstable, and in Windows 2000 and its successors new problems have been introduced. These versions of Windows use a firmware revision number to recognize devices, so an update of a peripheral's firmware may cause that device to be 'lost' to PnP.
Another, less harmful but most annoying, side-effect of code confusion is that different language versions of software may get mixed up. A foreign language version of an application may add to or partially overwrite Windows' list of dialog messages. This may cause a dialog window to prompt "Are you sure?" in English, followed by two buttons marked, say, "Da" and "Nyet".
Peripheral drivers also use a rather weak signature system and suffer from problems similar to those of DLLs, albeit to a lesser degree. For example, it's quite possible to replace a printer driver with a similar driver from another language version of Windows and mess up the page format as a result. Printer drivers from different language versions of Windows sometimes contain entirely different code that generates different printer output, but Windows is unaware of the difference. This problem has been addressed somewhat with the release of Windows 2000, but it's still far from perfect.
Mixing up OS and application code: why bundling is bad

Designing an OS to deliberately mix up system and application code fits Microsoft's strategy of product bundling and integration. The results are obvious: each file operation that involves executable code essentially puts the entire OS and its applications at risk, and application errors often mean OS errors (and crashes) as well. This leads to ridiculous "issues" such as Outlook Express crashing Windows if it's a "high encryption" version with the locale set to France. Replying to an e-mail message may crash the entire system, a problem which has been traced to one of the DLLs that came with Outlook. (Are you still with me?)
In a well-designed and robustly coded OS something like this could never happen. The first design criterion for any OS is that the system, the applications, and (in a multi-user environment) the users all be separated and protected from each other. Not only does no version of Windows do that by default, it actively prevents you from setting things up that way. The DLL fiasco is just the tip of the iceberg. You can't maintain or adequately restore OS integrity, you can't maintain version control, and you can't prevent applications and users from interfering with each other and the system, either by accident or on purpose.
Integrating applications into the OS is also not a good idea for very practical reasons. Imagine that, when you buy your van, it comes with a number of large parcels already in it. These parcels are welded in place so that it is all but impossible to remove them without damaging your van. You have to live with them, drive them around to wherever you go and burn up extra fuel to do so, and quite often these built-in parcels get in the way of the items you wanted to deliver when you bought your van for that purpose. Even worse: when one of the parcels is damaged, quite often your van has to be serviced in order to fix the problem! But when you complain about it, the manufacturer of the van tells you that this actually makes it a much better delivery van. Ridiculous? Yes, of course. It's just as ridiculous as Windows using up valuable system resources for bundled applications that more often than not get in the way of what you need to do, and add points-of-failure to boot.
The second practical issue, related to the first one, is control, from the user's point of view. A basic operating system allows the user to install, configure and run applications as required, and to choose the right application based on its features, its performance and the user's preference. Bundling applications in Windows moves their functions from the application level (where the user has control over them) into the operating system (where the user has none). For example, a user application such as a web browser can be installed, configured and uninstalled as necessary, but Internet Explorer is all but impossible to remove without dynamite. The point here is not that Internet Explorer should not be provided (on the contrary; having a web browser available is a convenience much appreciated by most users) but that it should be available as an optional application, to be installed or uninstalled as per the user's preference without any consequence for how the rest of Windows will function.
Beyond repair

Then there's Windows' lack of an adequate repair or maintenance mode. If anything goes wrong and a minor error or corruption occurs in one of the (literally) thousands of files that make up Windows, often the only real solution is a large-scale restore operation or even to reinstall the OS. Yes, you read correctly. If your OS suddenly, or gradually, stops working properly and the components you need to repair are unknown or locked by Windows, the standard approach to the problem (as recommended by Microsoft) is to do a complete reinstallation. There's no such thing as a single-user or maintenance mode to do repairs, nor is there a good way to find out which component has been corrupted in the first place, let alone to repair the damage. (The so-called 'safe mode' merely swaps configurations and does not offer sufficient control for serious system repairs.)
Windows has often been criticized for the many problems that occur while installing it on a random PC, which may be an A-brand or clone system in any possible configuration. This criticism is not entirely justified; after all it's not practically feasible to foresee all the possible hardware configurations that may occur at the user end. But that's not the point. The point is that these problems are often impossible to fix or even properly diagnose, because most of the Windows operating system is beyond the users' or administrators' control. This is of course less true for Windows 9x/ME. Because these are essentially DOS products, you can reboot the system using DOS and do manual repairs to a certain degree. With Windows NT and its successors this is generally impossible. Windows 2000, XP and Vista come with an external repair console utility on the CD that allows you some access to the file system of a damaged Windows installation. But that's about it.
The inability to make repairs has been addressed, to a certain degree, in Windows XP. XP comes with a 'System Restore' feature that tracks changes to the OS, so that administrators can 'roll back' the system to a state prior to the problem. Also, the 'System File Check' feature attempts to make sure that some 1,000 system files are the ones that were originally installed. If a "major" system file is replaced by another version (for example if a Windows XP DLL file is overwritten by a Windows '95 DLL with the same name) the original version will be restored. (Of course this also prevents you from removing things like Outlook Express or Progman.exe, since the specification of what is an important system file is rather sloppy.) Windows Vista takes these features even further, by incorporating transaction-based principles. This improves the chances of successfully rolling back changes that have not yet been permanently committed.
Most of these workarounds are largely beyond the user's control. While some of them may have adverse effects (e.g. System File Check may undo necessary modifications), their effectiveness is limited by nature. Many possible fault conditions prevent the automated repair features from working correctly in the first place. When Windows breaks, the automated features to recover from that fault generally break as well. Also, the number of faults that the automated repair options can deal with is limited. At some point manual intervention is the only option, but that requires an adequate maintenance mode, which Windows doesn't have. The inability of a commercial OS to allow for its own maintenance is a good demonstration of its immaturity.
Even so, the added options for system restore in XP and Vista are an improvement over the previous situation, in that at least a certain amount of recovery is now possible. On the other hand, this illustrates Microsoft's kludgy approach to a very serious problem: instead of implementing changes in the architecture to prevent OS corruption, they perpetuate the basic design flaw and try to deal with the damage after the fact. They don't fix the hole in your roof, they sell you a bucket to put under it instead. When the bucket overflows (i.e. the system recovery features are insufficient to solve a problem) you're still left with a mess.
Wasted resources, wasted investments

The slipshod design of Windows is not only reflected in its flawed architecture. The general quality of its code leaves a lot to be desired as well. This translates not only into a disproportionately large number of bugs, but also into a lot of inefficiency. Microsoft software needs at least three or four times as much hardware to deliver the same performance that other operating systems (e.g. Unix) deliver on much less. Likewise, on similar hardware competing products perform much better, or will even run well on hardware that does not meet Microsoft's minimum system requirements.
Inefficient code is not the only problem. Another issue is that most bells and whistles in Microsoft products are expensive in terms of additional hardware requirements and maintenance, but do not increase productivity at all. Given the fact that ICT investments are expected to pay off in increased productivity, reduced cost or both, this means that most "improvements" in Microsoft products over the past decades have been a waste of time from a Return On Investment standpoint. Typical office tasks (e.g. accounting, data processing, correspondence) have not essentially changed, and still take as much time and personpower as they did in the MS-DOS era. However the required hardware, software and ICT staff have increased manifold. Very few of these investments have resulted in proportional increases in profit.
Only 32 kilobytes of RAM in the Apollo capsules' computers was enough to put men on the moon and safely get them back to Earth. The Voyager deep space probes that sent us a wealth of images and scientific data from the outer reaches of the solar system (and still continue to do so from interstellar space) have on-board computers based on a 4-bit CPU. An 80C85 CPU with 176 kilobytes of ROM and 576 kilobytes of RAM was all that controlled the Sojourner robot that drove across the surface of Mars and delivered geological data and high-resolution images in full-color stereo. But when I have an 800MHz Pentium III with 256 Megabytes of RAM and 40 Gigabytes of disk space, and I try to type a letter to my grandmother using Windows XP and Office XP, the job will take me forever because my computer is underpowered! And of course Windows Vista won't even run on such a machine...
Server-based or network-based computing is no solution either, mainly because Windows doesn't have any real code sharing capability. If you were to shift the workload of ten workstations to an application server (using Windows Terminal Server, Citrix Server or another ASP-like solution) the server would in theory need ten times the system resources of each of the workstations it replaced in order to maintain the same performance, not counting the inevitable overhead, which could easily add another 10 or 20 percent.
Then there's the incredible amount of inefficient, or even completely unnecessary code in the Windows file set. Take the 3D Pinball game in Windows 2000 Professional and XP Professional, for example. This game (you'll find it under \Program Files\Windows NT\Pinball) is installed with Windows and takes up a few megabytes of disk space. But most users will never know that it's sitting there, wasting storage and doing nothing productive at all. It doesn't appear in the program menu or control panel, and no shortcuts point to it. The user isn't asked any questions about it during installation. In fact its only conceivable purpose would be to illustrate Microsoft's definition of 'professional'. No wonder Windows has needed more and more resources over the years. A few megabytes doesn't seem much, perhaps, but that's only because we've become used to the enormous footprints of Windows and Windows applications. Besides, if Microsoft installs an entire pinball game that most users neither need nor want, they obviously don't care about conserving resources (which are paid for by the user community). What does that tell you about the resource-efficiency of the rest of their code? Let me give you a hint: results published in PC Magazine in April 2002 show that the latest Samba software surpasses the performance of Windows 2000 by about 100 percent under benchmark tests. In terms of scalability, the results show that Unix and Samba can handle four times as many client systems as Windows 2000 before performance begins to drop off.
Another example is what happened when one of my own clients switched from Unix to Windows (the reason for this move being the need to run a web-based accounting package with BackOffice connectivity on the server). Their first server ran Unix, Apache, PHP and MySQL and did everything it had to do with the engine barely idling. On the same system they then installed Windows Server 2003, IIS, PHP and MySQL, after which even the simplest of PHP scripts (e.g. a basic 100-line form generator) would abort when the 30 second execution timeout was exceeded.
Paradoxically, though, the fact that Microsoft products need humongous piles of hardware in order to perform decently has contributed to their commercial success. Many integrators and resellers push Microsoft software because it enables them to prescribe the latest and heaviest hardware platforms in the market. Unix and Netware can deliver the same or better performance on much less. Windows 2000 and XP however need bigger and faster systems, and are often incompatible with older hardware and firmware versions (especially the BIOS). This, and the fact that hardware manufacturers discontinue support for older hardware and BIOSes, forces the user to purchase expensive hardware with no significant increase in return on investment. This boosts hardware sales, at the expense of the "dear, valued customer". Resellers make more money when they push Microsoft products. It's as simple as that.
Many small flaws make a big one

Apart from the above (and other) major flaws there's also a staggering amount of minor flaws. In fact there are so many minor flaws that their sheer number can be classified as a major flaw. In short, the general quality of Microsoft's entire set of program code is sub-standard. Unchecked buffers, unverified I/O operations, race conditions, incorrectly implemented protocols, failures to deallocate resources, failures to check environmental parameters, et cetera ad nauseam... You name it, it's in there. Microsoft products contain some extremely sloppy code and bad coding practices that would give an undergraduate some well-deserved bad marks. As a result of their lack of quality control, Microsoft products and especially Windows are riddled with literally thousands and thousands of bugs and glitches. Even many of the error messages are incorrect!
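For readers who have never seen an "unchecked buffer", here is a generic C illustration (not Microsoft's code, obviously): a fixed-size buffer filled by strcpy() with no length check, next to the bounded version that should have been written instead. This single pattern accounts for a large share of the buffer-overflow vulnerabilities that keep turning up.
    #include <string.h>
    #include <stdio.h>

    /* Generic illustration of an "unchecked buffer": strcpy() copies
     * until it hits a terminating NUL, so any input longer than 15
     * characters overruns the stack buffer and corrupts adjacent memory. */
    static void vulnerable(const char *input)
    {
        char buffer[16];
        strcpy(buffer, input);          /* no length check at all */
        printf("Got: %s\n", buffer);
    }

    static void safer(const char *input)
    {
        char buffer[16];
        /* Bounded copy with guaranteed NUL termination. */
        strncpy(buffer, input, sizeof(buffer) - 1);
        buffer[sizeof(buffer) - 1] = '\0';
        printf("Got: %s\n", buffer);
    }

    int main(void)
    {
        vulnerable("short");  /* fine only because the input happens to fit */
        safer("this string is far too long for the buffer");
        return 0;
    }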
Some of these blunders can be classified as clumsy design rather than as mere sloppiness. A good example is Windows' long filename support. In an attempt to allow for long filenames in Windows '9x/ME, Microsoft deliberately broke the FAT file system. They stored the extended filename information in deliberately cross-linked directory entries, which is probably one of their dirtiest kludges ever. And if that wasn't enough, they made it legal for filenames to contain whitespace. Because this was incompatible with Windows' own command line parsing (Windows still expects the old FAT notation) another kludge was needed, and whitespace had to be enclosed in quotation marks. This confused (and broke) many programs, including many of Microsoft's own that came with Windows.
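The quoting problem is easy to demonstrate. The C sketch below uses hypothetical paths; with the unquoted command line the parser cannot tell where the program name ends and the arguments begin (with no explicit application name, CreateProcess will happily try "C:\Program.exe" first), while the quoted form is unambiguous.
    #include <windows.h>
    #include <stdio.h>

    /* Sketch of the quoting problem; the paths are hypothetical, so on a
     * real machine this call will simply report "file not found". The
     * point is the parsing: unquoted whitespace makes the command line
     * ambiguous, quoted whitespace is part of the path. */
    int main(void)
    {
        STARTUPINFOA        si = { sizeof(si) };
        PROCESS_INFORMATION pi;

        /* Ambiguous: which part is the executable? */
        char bad[]  = "C:\\Program Files\\My App\\app.exe /install";

        /* Unambiguous: whitespace inside the quotes belongs to the path. */
        char good[] = "\"C:\\Program Files\\My App\\app.exe\" /install";

        if (!CreateProcessA(NULL, good, NULL, NULL, FALSE, 0, NULL, NULL,
                            &si, &pi)) {
            fprintf(stderr, "CreateProcess failed: %lu\n", GetLastError());
            return 1;
        }

        (void)bad;  /* kept only to show the broken form */
        CloseHandle(pi.hThread);
        CloseHandle(pi.hProcess);
        return 0;
    }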
Another good example is Windows' apparent case-sensitivity. Windows seems to make a distinction between upper and lower case when handling filenames, but the underlying software layers are still case-insensitive. So Windows only changes the case of the files and directories as they are presented to the user. The names of the actual files and directories may be stored in uppercase, lowercase or mixed case, while they are still presented as capitalized lower case file names. Of course this discrepancy causes no problems in a Windows-only environment. Since the underlying code is essentially case-insensitive, case is not critical to Windows' operation. However as soon as you want to incorporate Unix-based services (e.g. a Unix-based webserver instead of IIS) you discover that Windows has messed up the case of filenames and directories.
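You can see this for yourself with a few lines of C. On Windows the second fopen() below succeeds even though the case doesn't match; copy the same file to a case-sensitive Unix filesystem behind a Unix web server and the mismatch turns into a broken link.
    #include <stdio.h>

    /* Small demonstration of the case issue. On Windows (FAT or NTFS
     * with default settings) filename lookups ignore case, so the second
     * fopen() finds the file; on a case-sensitive Unix filesystem
     * "readme.txt" and "README.TXT" are two different names. */
    int main(void)
    {
        FILE *f = fopen("README.TXT", "w");
        if (f == NULL)
            return 1;
        fputs("hello\n", f);
        fclose(f);

        f = fopen("readme.txt", "r");
        printf("fopen(\"readme.txt\"): %s\n", f ? "opened" : "not found");
        if (f)
            fclose(f);
        return 0;
    }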
But most of Windows' errors and glitches are just the result of sloppy work. Of course there is no such thing as bug-free software, but the amount of bugs found in Windows is, to put it mildly, disproportionate. For example, Service Pack 4 for Windows NT 4.0 attempted to fix some 1200 bugs (yes, one thousand two hundred). But there had already been three previous service packs at the time! Microsoft shamelessly admitted this, and even boasted about having "improved" NT on 1200 points. Then they had to release several more subsequent service packs in the months that followed, to fix remaining issues and of course the additional problems that had been introduced by the service packs themselves.
An internal memo among Microsoft developers mentioned 63,000 (yes: sixty-three thousand) known defects in the initial Windows 2000 release. Keith White, Windows Marketing Director, did not deny the existence of the document, but claimed that the statements therein were made in order to "motivate the Windows development team". He went on to state that "Windows 2000 is the most reliable Windows so far." Yes, that's what he said. A product with 63,000 known defects (mind you, that's only the known defects) and he admits it's the best they can do. Ye gods.
And the story continues: Windows XP Service Pack 2 was touted to address a large number of security issues and make computing safer. Instead it breaks many things (mostly products made by Microsoft's competitors, but of course that is merely coincidence) but does not really fix any security flaws. The first major security hole in XP-SP2 was described by security experts as "not a hole but rather a crater" and allowed downloadable code to spoof firewall information. Only days after XP-SP2 was released the first Internet Explorer vulnerability of the SP2-era was discovered. Furthermore SP2 leaves many unnecessary networking components enabled, bungles permissions, leaves IE and OE open to malicious scripts, and installs a packet filter that lacks a capacity for egress filtering. It also makes it more difficult for third-party products (especially multimedia plugins) to access the ActiveX controls, which in turn prevents the installation of quite a bit of multimedia software made by Microsoft's competitors. XP-SP2's most noticeable effect (apart from broken application compatibility) is frequent popups that give the user a sense of security. Apart from this placebo effect the long-awaited and much-touted XP-SP2 doesn't really fix very much.
In the summer of 2005 Jim Allchin, then group VP in charge of Windows, finally went and admitted all this. In a rare display of corporate honesty, he told the Wall Street Journal that the first version of Longhorn (then the code name for Windows Vista) had to be entirely scrapped because the quality of the program code had deteriorated too far. The root of the problem, said Allchin, was Microsoft's historical approach to developing software (the so-called "spaghetti code culture") where the company's thousands of programmers would each develop their own piece of code and it would then all be stitched together at the end. Allchin also said he had faced opposition to his call for a completely new development approach, first from Gates himself and then from the company's engineers.
MS developers: "We are morons"

Allchin's revelations came as no great surprise. Part of the source code to Windows 2000 had been leaked onto the Internet before, and pretty it was not. Microsoft's flagship product turned out to be a vast sprawl of spaghetti in Assembly, C and C++, all held together with sticky tape and paper clips. The source code files contained many now-infamous comments including "We are morons" and "If you change tabs to spaces, you will be killed! Doing so f***s the build process".
There were many references to idiots and morons, some aimed at external parties but most at Microsoft itself. For example:
  • In the file private\ntos\rtl\heap.c, which dates from 1989:
    // The specific idiot in this case is Office95, which likes
    // to free a random pointer when you start Word95 from a desktop
    // shortcut.
  • In the file private\ntos\w32\ntuser\kernel\swp.c from 11-Jul-1991:
    // for idiots like MS-Access 2.0 who SetWindowPos( SWP_BOZO )
    // and blow away themselves on the shell, then lets
    // just ignore their plea to be removed from the tray.
  • Morons are also to be found in the file private\genx\shell\inc\prsht.w:
    // We are such morons. Wiz97 underwent a redesign between IE4 and IE5
  • And in private\shell\shdoc401\unicpp\desktop.cpp:
    // We are morons. We changed the IDeskTray interface between IE4
  • In private\shell\browseui\itbar.cpp:
    // should be fixed in the apps themselves. Morons!
  • As well in private\shell\ext\ftp\ftpdrop.cpp:
    We have to do this only because Exchange is a moron.
Microsoft programmers also take seriously their duty to warn fellow developers against unsavory practices, which are apparently committed on a regular basis. There are over 4,000 references to "hacks". These include:
  • In the file private\inet\mshtml\src\core\cdbase\baseprop.cxx:
    // HACK! HACK! HACK! (MohanB) In order to fix #64710
    // at this very late date
  • In private\inet\mshtml\src\core\cdutil\genutil.cxx:
    // HACK HACK HACK. REMOVE THIS ONCE MARLETT IS AROUND
  • In private\inet\mshtml\src\site\layout\flowlyt.cxx:
    // God, I hate this hack ...
  • In private\inet\wininet\urlcache\cachecfg.cxx:
    // Dumb hack for back compatibility. *sigh*
  • In private\ispu\pkitrust\trustui\acuictl.cpp:
    // ACHTUNG! HACK ON TOP OF HACK ALERT:
    // Believe it or not there is no way to get current height
  • In private\ntos\udfs\devctrl.c:
    // Add to the hack-o-rama to fix formats.
  • In private\shell\shdoc401\unicpp\sendto.cpp:
    // Mondo hackitude-o-rama.
  • In private\ntos\w32\ntcon\server\link.c:
    // HUGE, HUGE hack-o-rama to get NTSD started on this process!
  • In private\ntos\w32\ntuser\client\dlgmgr.c:
    // HACK OF DEATH!!
  • In private\shell\lib\util.cpp:
    // TERRIBLE HORRIBLE NO GOOD VERY BAD HACK
  • In private\ntos\w32\ntuser\client\nt6\user.h:
    // The magnitude of this hack compares favorably with that
    // of the national debt.
The most worrying aspect here is not just how these bad practices persist and even find their way into release builds in large numbers. After all, few things are as permanent as a "temporary" solution. Nor is it surprising how much ancient code still exists in the most recent versions of Windows (although it is somewhat unsettling to see how decades-old mistakes continue to be a problem). No, the most frightening thing is that Microsoft's developers obviously know they are doing terrible things that seriously undermine the quality of the end product, but are apparently unable to remedy the known bad quality of their own code.
As you may remember, Windows XP was already out by the time that the above source code got leaked. In fact, back in 2004, Microsoft had been talking about Longhorn (Windows Vista) for three years. Just a few months after the source code leaked out, it was announced that WinFS, touted as Microsoft's flagship Relational File System Of The Future, would not ship with Vista after all. The reason isn't hard to guess: Windows' program code has become increasingly unmaintainable and irreparable over the years.
In the long years since XP was launched, Apple has come out with five major upgrades to OS X, upgrades which (dare I say it?) install with about as much effort as it takes to brush your teeth in the morning. No nightmare calls to tech-support, no sudden hardware incompatibilities, no hassle. Yet Microsoft has failed to keep up, and the above example of the state of their program code clearly demonstrates why.
Unreliable servers

All these blunders have of course their effects on Windows' reliability and availability. Depending on application and system load, most Windows systems tend to need frequent rebooting, either to fix problems or on a regular basis to prevent performance degradation as a result of Windows' shaky resource management.
On the desktop this is bad enough, but the same flaws exist in the Windows server products. Servers are much more likely to be used for mission-critical applications than workstations are, so Windows' limited availability and its impact on business become a major issue. The uptimes of typical Windows-based servers in serious applications (i.e. more than just file and print services for a few workstations) tend to be limited to a few weeks at most. One or two server crashes (read: interruptions of business and loss of data) every few months are not uncommon. As a server OS, Windows clearly lacks reliability.
Windows server products aren't even really server OSes. Their architecture is no different from the workstation versions. The server and workstation kernels in NT are identical, and changing two registry keys is enough to convince a workstation that it's a server. Networking capabilities are still largely based on the peer-to-peer method that was part of Windows for Workgroups 3.11 and that Microsoft copied, after it had been successfully pioneered by Apple and others in the mid-eighties. Of course some code in the server products has been extended or optimized for performance, and domain-based authentication has been added, but that doesn't make it a true server platform. Neither does the fact that NT Server costs almost three times as much as NT Workstation. In fact we're talking about little more than Windows for Workgroups on steroids.
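The best known of those registry settings is the ProductType value under HKLM\SYSTEM\CurrentControlSet\Control\ProductOptions, which reads "WinNT" on a workstation and "ServerNT" or "LanmanNT" on the server variants. The C sketch below only reads the value; later service packs actively guard it, and changing it violates the license terms, so take this as an illustration of how thin the difference is, not as a how-to.
    #include <windows.h>
    #include <stdio.h>

    /* Sketch: read the ProductType value that NT-family Windows uses to
     * tell workstation and server apart ("WinNT" = workstation,
     * "ServerNT" / "LanmanNT" = server variants). Read-only on purpose. */
    int main(void)
    {
        HKEY  hKey;
        char  type[64];
        DWORD size = sizeof(type);

        if (RegOpenKeyExA(HKEY_LOCAL_MACHINE,
                "SYSTEM\\CurrentControlSet\\Control\\ProductOptions",
                0, KEY_QUERY_VALUE, &hKey) != ERROR_SUCCESS)
            return 1;

        if (RegQueryValueExA(hKey, "ProductType", NULL, NULL,
                (LPBYTE)type, &size) == ERROR_SUCCESS)
            printf("ProductType: %s\n", type);

        RegCloseKey(hKey);
        return 0;
    }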
In November 1999, Sm@rt Reseller's Steven J. Vaughan-Nichols ran a test to compare the stability of Windows NT Server (presumably running Microsoft Internet Information Server) with that of the Open Source Linux operating system (running Samba and Apache). He wrote:
Conventional wisdom says Linux is incredibly stable. Always skeptical, we decided to put that claim to the test over a 10-month period. In our test, we ran Caldera Systems OpenLinux, Red Hat Linux, and Windows NT Server 4.0 with Service Pack 3 on duplicate 100MHz Pentium systems with 64MB of memory. Ever since we first booted up our test systems in January, network requests have been sent to each server in parallel for standard Internet, file and print services. The results were quite revealing. Our NT server crashed an average of once every six weeks. Each failure took roughly 30 minutes to fix. That's not so bad, until you consider that neither Linux server ever went down.
Interesting: a crash that takes 30 minutes to fix means that something critical has been damaged and needs to be repaired or restored. At least it takes more than just a reboot. This happens once every six weeks on a server, and that's considered "not so bad"... Think about it. Also note that most other Unix flavors such as Solaris, BSD or AIX are just as reliable as Linux.
But the gap between Windows and real uptime figures is even greater than Vaughan-Nichols describes above. Compare that half hour downtime per six weeks to that of Netware, in the following article from Techweb on 9 April 2001:
Server 54, Where Are You?
The University of North Carolina has finally found a network server that, although missing for four years, hasn't missed a packet in all that time. Try as they might, university administrators couldn't find the server. Working with Novell Inc. (stock: NOVL), IT workers tracked it down by meticulously following cable until they literally ran into a wall. The server had been mistakenly sealed behind drywall by maintenance workers.
Although there is some doubt as to the actual truth of this story, it's a known fact that Netware servers are capable of years of uninterrupted service. Shortly before I wrote this, I brought down a Netware server at our head office. This was a Netware 5.0 server that also ran software to act as the corporate SMTP/POP3 server, fax server and main virus protection for the network, in addition to providing regular file and print services for the whole company. It had been up and running without a single glitch for more than a year, and the only reason we shut it down was because it had to be physically moved to another building. Had the move not been necessary, it could have run on as long as the mains power held out. There's simply no reason why its performance should be affected, as long as nobody pulls the plug or rashly loads untested software. The uptimes of our Linux and Solaris servers (mission-critical web servers, database servers and mail servers, or just basic file and print servers) are measured in months as well. Uptimes in excess of a year are not uncommon for Netware and Unix platforms, and uptimes of more than two years are not unheard of either. Most OS updates short of a kernel replacement do not require a Unix server to be rebooted, as opposed to Windows, which expects a complete server reboot whenever a DLL in some subsystem is updated. But see for yourself: check the Netcraft Uptime statistics and compare the uptimes of Windows servers to those of Unix servers. The figures speak for themselves.