Windows 7/8 with multi-core CPUs


cameo

Is Windows really designed to take advantage of multi-core processors,
or are we just led to believe it is?
 

Ed Cryer

cameo said:
Is Windows really designed to take advantage of multi-core processors,
or are we just led to believe it is?
I'm wondering the same. I can at least say that it recognises my 4 CPUs
in Task Manager, because I've seen them there. But then it must use
resources to monitor them; so maybe that's it - it actually wastes
resources!
:)

What would be nice would be some program to produce usage statistics
over a given period of time. That should settle the issue.
Anybody know of one?
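
Something quick and dirty could even be rolled by hand. Here's a
minimal sketch in C (assuming a Windows compiler such as MinGW gcc,
and the Win32 GetSystemTimes() call, which I believe is available
from XP SP1 onward) that samples overall CPU usage once a second;
redirect its output to a file and you have a crude log:

    #include <windows.h>
    #include <stdio.h>

    /* Convert a FILETIME (100ns units) to a 64-bit integer. */
    static unsigned long long ft64(FILETIME ft)
    {
        return ((unsigned long long)ft.dwHighDateTime << 32) |
               ft.dwLowDateTime;
    }

    int main(void)
    {
        FILETIME idle1, kern1, user1, idle2, kern2, user2;

        GetSystemTimes(&idle1, &kern1, &user1);
        for (;;) {
            Sleep(1000);
            GetSystemTimes(&idle2, &kern2, &user2);

            /* Kernel time includes idle time, so total = kernel + user. */
            unsigned long long total = (ft64(kern2) - ft64(kern1)) +
                                       (ft64(user2) - ft64(user1));
            unsigned long long idle  =  ft64(idle2) - ft64(idle1);

            if (total > 0)
                printf("CPU usage: %5.1f%%\n",
                       100.0 * (double)(total - idle) / (double)total);

            idle1 = idle2; kern1 = kern2; user1 = user2;
        }
        return 0;
    }

It only shows the machine-wide figure, not per-core or per-program
usage, but it would settle whether the CPUs are busy over a period.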

Ed
 

Paul

cameo said:
Is Windows really designed to take advantage of multi-core processors,
or are we just led to believe it is?
Yes. It's been doing that since Win2K. (Maybe even WinNT, but I
don't have that on any home machine.)

I don't think my copy of Win98 recognizes the second
core on my Core2 processor.

And the Win2K license prevents "too many cores" from being
recognized (my desktop copy is limited to two cores, so only
half a Q6600 would be recognized). The first OS with a "decent"
licensing scheme was WinXP. The WinXP license was socket based
rather than being based on core count. So if you had an
eight-core processor, it would be accepted and fully used
on WinXP Home or Pro, whereas on my old Win2K desktop copy
only two of the eight cores would work.

There is still an upper bound on the number of cores that can be
licensed, but the odds of you running into that
limitation are slim. To see what that limit is,
go to Task Manager, right-click a process and
select "Set Affinity". There you can see WinXP has
a limit of 32 tick boxes for selecting which cores
are candidates for running a program. The "Affinity"
capability is how you prevent a multi-threaded program
from "hogging" the whole CPU. In this example of
setting affinity, the program is only allowed to run
on Core 0. You can prevent programs from "bumping heads"
by segregating them this way.

http://www.lanpartyguide.com/images/setaffinity.jpg

I generally use "Set Affinity" when programs are
poorly written and won't behave themselves. If
the machine is no longer responsive to keyboard input, say,
then it's time for a bit of Set Affinity work.
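
The same thing can be done programmatically, for anyone inclined.
A rough sketch, assuming you've read the target PID out of Task
Manager's PID column (error handling kept minimal):

    #include <windows.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char *argv[])
    {
        if (argc != 3) {
            fprintf(stderr, "usage: %s <pid> <hex-mask>\n", argv[0]);
            return 1;
        }

        DWORD pid = (DWORD)strtoul(argv[1], NULL, 10);
        DWORD_PTR mask = (DWORD_PTR)strtoul(argv[2], NULL, 16);

        HANDLE h = OpenProcess(PROCESS_SET_INFORMATION, FALSE, pid);
        if (h == NULL) {
            fprintf(stderr, "OpenProcess failed (error %lu)\n",
                    (unsigned long)GetLastError());
            return 1;
        }

        /* Pin the process to the cores whose bits are set: mask 1
         * allows only Core 0, mask 3 allows Cores 0 and 1, and so on.
         * This is the same 32-box grid, expressed as a bitmask. */
        if (!SetProcessAffinityMask(h, mask))
            fprintf(stderr, "SetProcessAffinityMask failed (error %lu)\n",
                    (unsigned long)GetLastError());

        CloseHandle(h);
        return 0;
    }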

Paul
 

Paul in Houston TX

Paul said:
<snip>
How interesting!
Thanks Paul. I never knew that.
 

Jason

cameo said:
Is Windows really designed to take advantage of multi-core processors,
or are we just led to believe it is?

Paul said:
<snip>

I watched an interesting video interview with Mark Russinovich of
Sysinternals fame (now part of MS) a couple of years ago. Part of the
interview dealt with just this subject. Mark explained that until Vista
(as I recall - might have been Win 7) there were some bottlenecks -
namely the global dispatcher lock. An MS Fellow single-handedly undertook
fixing that and made code changes in hundreds of places. As Mark said,
the performance hit without that change was hardly noticeable on dual-core
machines, but becomes an issue with as few as four cores.
 

Yousuf Khan

cameo said:
Is Windows really designed to take advantage of multi-core processors,
or are we just led to believe it is?
Yes, Windows has been multi-core capable at least since Windows NT,
since NT was designed from the outset to be a server operating system
with access to more than one processor at a time. The Windows NT
code then evolved into Windows 2000, XP, Vista, 7 and finally 8. The
code for handling multiple cores has evolved with the times, and
improved as AMD & Intel introduced new architectures that changed
performance characteristics from simple to more complex.

For example, before there were multi-cores there were only
multi-processors, meaning fully separate chips plugged into the same
motherboard. Then, as designs miniaturized, came the advent of
dual-core and multi-core. Twists were added too, such as
Intel's Hyper-Threading, and AMD's core "module" concept, which makes
one real core look like two virtual cores. These upgrades to the
multi-core concept required that MS's Windows code get updated too,
to take proper advantage of the performance characteristics of these
processors without bogging them down too much.

So the processors and the operating system are continuously evolving
together, generation by generation.

Yousuf Khan
 

Yousuf Khan

Paul said:
<snip>

I generally use "Set Affinity" when programs are
poorly written and won't behave themselves. If
the machine is no longer responsive to keyboard input, say,
then it's time for a bit of Set Affinity work.
I don't think there is all that much that Set Affinity can fix in
these cases where a program is unresponsive. Often the reason a
program is unresponsive is that it's just waiting on I/O
from the hard disks or the network, for example. No matter how
multithreaded a program is, if it's waiting on its weakest link it's
going to get stuck, and oftentimes the weakest link isn't
the processor. The processor can't really make a peripheral hurry up
all that much.

Yousuf Khan
 

Paul

Yousuf said:
<snip>
The point is, you can experiment with Set Affinity as you see fit.
I don't think I've run into a situation to date where I could not
undo a Set Affinity change after I applied it.

For example, if I had a quad-core processor and wanted to run
a video transcode in the background while playing
a first-person shooter (FPS) game in the foreground, I could
assign two cores to each of those processes. And by using
that kind of "firewall", the FPS game would feel as
snappy as if the video transcode wasn't running.
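
(A one-shot way to launch something pre-pinned, rather than fiddling
in Task Manager afterwards: cmd's "start" command has an affinity
switch - in Win7 at least, if I remember right. Here "transcode.exe"
is just a stand-in name, and mask 0x3 means Cores 0 and 1:

    start "" /affinity 0x3 transcode.exe
)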

Naturally, YMMV.

Paul
 

Peter Jason

Ed Cryer said:
<snip>
What would be nice would be some program to produce usage statistics
over a given period of time. That should settle the issue.
Anybody know of one?
There's a desktop gadget for Win7 I have that
shows all my 12 cores in the form of a small set
of bars. According to this, the cores are used
off & on all the time.
 

Gene E. Bloch

Peter Jason said:
There's a desktop gadget for Win7 I have that
shows all my 12 cores in the form of a small set
of bars. According to this, the cores are used
off & on all the time.
That won't tell you if any *one* application is using more than one
core.
 

Peter Jason

Gene E. Bloch said:
That won't tell you if any *one* application is using more than one
core.
Yes it does. But I get the same info from the
Windows Task Manager where all the cores are
working with some programs.
 

cameo

Paul said:
<snip>
But a good scheduler should be able to do that without the user's
intervention.
 

Peter Jason

How does it do that?
Computers confuse me; however, the gadget is called
"All CPU Meter" and is installed from the "Gadget
Gallery" by right-clicking the desktop.
 

Gene E. Bloch

Peter Jason said:
Computers confuse me; however, the gadget is called
"All CPU Meter" and is installed from the "Gadget
Gallery" by right-clicking the desktop.
OK, reading back over the thread, I think I misinterpreted what you were
saying.

I thought you were saying, in the context of earlier posts in the
thread, that you thought the gadget could tell you whether a given app
was taking advantage of multiple cores.

Now I'm not so sure I was reading you correctly.

What I was saying is this: All these CPU usage gadgets tell you is the
usage of each core, not which application is using which cores.

Therefore they don't (can't!) tell you how many cores a *particular* app
is using.

But if you weren't talking about that, I was out in left field sending
my complaints to the infield :)

No excuses, just a brain quake.
 

John Williamson

Gene E. Bloch said:
What I was saying is this: All these CPU usage gadgets tell you is the
usage of each core, not which application is using which cores.
That particular gadget can be misleading, too. It tells me that both
cores on my Atom-powered netbook are being used by various programs,
but this Atom is a single core with hyperthreading; the gadget counts
a hyperthreaded single core as two cores. It's still a good bet,
though, that if all the CPU meters peg at 100% or at the same load
factor as soon as you start a particular application, the application
is using them all.
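
If you want to check the physical-versus-logical split directly,
rather than trusting a gadget, the GetLogicalProcessorInformation()
call can separate them (XP SP3 and later, as far as I know). A rough
sketch:

    #include <windows.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        DWORD len = 0;
        GetLogicalProcessorInformation(NULL, &len);  /* get buffer size */

        SYSTEM_LOGICAL_PROCESSOR_INFORMATION *info = malloc(len);
        if (!info || !GetLogicalProcessorInformation(info, &len))
            return 1;

        int cores = 0, logical = 0;
        DWORD n = len / sizeof(*info);
        for (DWORD i = 0; i < n; i++) {
            if (info[i].Relationship == RelationProcessorCore) {
                cores++;
                /* Each set bit in ProcessorMask is one logical
                 * processor belonging to this physical core. */
                for (ULONG_PTR m = info[i].ProcessorMask; m; m >>= 1)
                    logical += (int)(m & 1);
            }
        }
        printf("%d physical core(s), %d logical processor(s)\n",
               cores, logical);
        free(info);
        return 0;
    }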
Gene E. Bloch said:
Therefore they don't (can't!) tell you how many cores a *particular* app
is using.
You'd need to get deeper into the OS kernel code than MS will easily let
you to find that out, I suspect.
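
That said, a crude user-mode test is possible without kernel access:
compare how much CPU time a process accumulates against wall-clock
time. If its CPU time grows faster than real time, threads of that
process must be running on more than one core at once. A sketch,
assuming you feed it a PID:

    #include <windows.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Convert a FILETIME (100ns units) to a 64-bit integer. */
    static unsigned long long ft64(FILETIME ft)
    {
        return ((unsigned long long)ft.dwHighDateTime << 32) |
               ft.dwLowDateTime;
    }

    int main(int argc, char *argv[])
    {
        if (argc != 2) {
            fprintf(stderr, "usage: %s <pid>\n", argv[0]);
            return 1;
        }

        HANDLE h = OpenProcess(PROCESS_QUERY_INFORMATION, FALSE,
                               (DWORD)strtoul(argv[1], NULL, 10));
        if (h == NULL)
            return 1;

        FILETIME create, quit, kern1, user1, kern2, user2;
        GetProcessTimes(h, &create, &quit, &kern1, &user1);
        Sleep(5000);   /* watch the process for 5 s of wall time */
        GetProcessTimes(h, &create, &quit, &kern2, &user2);

        /* FILETIME is in 100ns units; 5 s = 50,000,000 units. */
        double cpu = (double)((ft64(kern2) - ft64(kern1)) +
                              (ft64(user2) - ft64(user1)));
        printf("average cores busy: %.2f\n", cpu / 50000000.0);

        CloseHandle(h);
        return 0;
    }

An average comfortably above 1.0 means the program really was spread
across more than one core during the interval.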
 

Darklight

Ed Cryer said:
<snip>
What would be nice would be some program to produce usage statistics
over a given period of time. That should settle the issue.
Anybody know of one?
GKrellM is worth a look.
 

Ed Cryer

Darklight said:
GKrellM is worth a look.
That's a pretty nice package of system-monitoring tools.
It shows all cores in use most of the time. And the graph display helps,
but I can't find anything in it to do these:
1. Show which programs are using which cores.
2. Save a log of usage statistics.

Its display is a bit better than both the All CPU Meter gadget and
the Task Manager.

I started it, ran a chkdsk on C:, and CPU 0 got very active while the
others were jumping around a bit. But it's guesswork as to whether
chkdsk was using more than one.

Ed
 

Paul

Ed said:
<snip>
For some background, you can try this doc. It reviews
various approaches to measuring what is going on, but the
tools are likely intended for developers, which is why they
"zoom in" far too closely.

http://demandtech.com/wp-content/uploads/2012/04/Measuring-Processor-Utilization-in-Windows.pdf

I couldn't figure out why that person wrote the paper, but it's still
interesting as a (relatively long) review of the topic.

*******

Your selection of CHKDSK couldn't be worse :) I say that
because modern CHKDSK is just about the most ill-behaved
system utility ever written.

1) It should be I/O bound. It should be constantly asking for
disk. The evaluation of what it reads shouldn't take that
much time. I have a hard time believing it could be CPU bound.
What would it be doing with 3 billion cycles in a second?
2) The developers, for whatever reason, decided that it was OK
to use all of system memory as a file cache. (And I'm not
talking about using the existing system file cache either - they
just grab memory for themselves and do their own.) I managed to
use all system memory running CHKDSK on my Windows 7 laptop.
People with 16GB machines have reported CHKDSK using 15GB
of system memory. When the system is under memory pressure,
it starts to page out, at about 100 page events per second. So
it's not even as if the application was engineered to use only
"easily available" memory. It actually causes other applications
to start paging out, once it gets down to the last GB of
available RAM.

So it's disk intensive, and could be I/O busy on two disks at
the same time: the system partition, due to paging, and the
partition under test, due to trying to read everything.

And, as far as I know, this is not even the flavor of CHKDSK
run where it reads every sector. This is still CHKDSK running
purely structural checks.

I would be very surprised if it can keep a core busy, given
the pounding on the disk. Imagine running CHKDSK on your C:
drive while CHKDSK squeezes the crap out of available memory,
starts writing the pagefile, and at the same time attempts
head movement and accesses to read yet more structure.

I await your analysis :)

*******

By the way, for anyone who cares: if you run the 32 bit version
of CHKDSK on your 64 bit W7 machine, that will stop CHKDSK
from using all the RAM on your system. (Not if you have a puny
amount of RAM, that is - the 32 bit limit is no help there.)
If you had a 16GB system, I would guess it'll
stop after it hogs around 2GB. It would stop at 3GB if in a
/3GB and large_address_aware setup. The idea is, by being a
32 bit application, there is some limit to how far it can
address the memory it is attempting to hog.

I don't really have enough RAM on my W7 laptop to test this
well, but using the 32 bit CHKDSK seemed to work.
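
If the layout on a stock 64 bit install is as I remember it, the
32 bit system binaries live under C:\Windows\SysWOW64, so something
like

    C:\Windows\SysWOW64\chkdsk.exe D:

would be a way to get the 32 bit CHKDSK without an ISO. Treat that
path as an educated guess, though.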

This is why I keep DVD downloads for both the 32 bit and 64 bit W7.
When I need the 32 bit version of some program, I know I'm guaranteed
to find it on one of my two ISO9660 files. For the 32 bit ISO,
I can do a dummy install in VPC2007, and then I gain access to
plenty of 32 bit applications. My laptop has the 64 bit version
installed, and that's where I find the 64 bit ones.

Have fun,
Paul
 

Ed Cryer

Paul said:
<snip>
There's a column heading in Task Manager Processes for "Threads". You
get to it from Processes, View, Select Columns.
Well, every single process (maybe I should have written "every
individual process") has several threads; not a one below 2.

At present the highest is NT Kernel & System, with 137; then Firefox
with 45; Tbird with 29; and lastly Windows
Session Manager with 3.
I just opened another tab in Firefox, went to a big site, and the
thread count went up before my eyes to 48; opened two more tabs plus
data, up to 49. Then I cut the tabs down to just the original one,
and lo! it stayed at 49.

Does this mean that every individual program there is hard-coded to
handle multi-threading? Or is Win7 itself handling it, in the same way
it handles paging to swapfiles? Firefox isn't releasing memory too well.
Who should I blame, MS or the FF people?
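
For what it's worth, that same "Threads" number can be read
programmatically with the Toolhelp snapshot API, if anyone wants to
log it over time. A small sketch, assuming a PID to watch:

    #include <windows.h>
    #include <tlhelp32.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char *argv[])
    {
        if (argc != 2) {
            fprintf(stderr, "usage: %s <pid>\n", argv[0]);
            return 1;
        }
        DWORD pid = (DWORD)strtoul(argv[1], NULL, 10);

        /* Snapshot every thread in the system, then count the ones
         * that belong to the process we care about. */
        HANDLE snap = CreateToolhelp32Snapshot(TH32CS_SNAPTHREAD, 0);
        if (snap == INVALID_HANDLE_VALUE)
            return 1;

        THREADENTRY32 te;
        te.dwSize = sizeof(te);
        int count = 0;
        if (Thread32First(snap, &te)) {
            do {
                if (te.th32OwnerProcessID == pid)
                    count++;
            } while (Thread32Next(snap, &te));
        }
        printf("pid %lu has %d thread(s)\n", (unsigned long)pid, count);

        CloseHandle(snap);
        return 0;
    }

From what I understand, a high thread count mostly reflects libraries
and the OS spinning up worker threads on a program's behalf, so it
doesn't by itself prove the code was written to spread work across
cores.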

Ed
 
