Joe Landman wrote:
??? Flat memory is non-segmented by definition. Would you care to
point out the flat memory addressing mode on x86 which can access all
4 GB of RAM? I am sure I missed it.
I'll be happy to withdraw this comment.
Ok, here is where I guess I don't understand your point in posting this
to an (obviously) HPC list then. Is it oversold in the gamer market? In
the DB market?
I meant in general, although as far as I know, there aren't
any 64-bit games for Windows PCs. There are definitely DBs and
other server products (e.g. Microsoft Exchange) that
require a 64-bit version of Windows.
... and your point is .... that the 64 appendage makes no difference
when you are running the chip in 32 bit mode (e.g. Windows)?
Right.
ok. I might suggest avoiding conflating marketing with technology. Also
note that Athlon64 units do run noticeably faster than Athlon units of
similar clock speed in 32 bit mode. The desktop I am typing this on now
is a 1.6 GHz Athlon 64 currently running a Windows XP install, and is
noticeably (significantly) faster than the system it replaced (a 2 GHz
Athlon). The minimal benchmarks I have performed indicate a 30% ballpark
improvement on almost everything, with some operations tending towards 2x faster.
I don't doubt this, but this is because the version of Athlon you're
running, that has the magic number "64" on it, is just a better
processor than the one you had before. The same would have been
true if they had called it the Super Mega Athlon 2 Turbo++.
case for them, especially if they're using a modern version of
Windows, which is what the original posting was about. These days you
also see "X2" which is a different kettle of fish and is, if anything,
being undermarketed.
Undermarketed? Not the way I see it (see the Intel ads on TV)
I was thinking of Microsoft.
You set up an argument for the sole purpose of knocking it down. "no
32bit address needed for instruction text" ... "a real limit in the
complexity" ...
Of course, no program I have seen is ever *just* instruction text. There
are symbols, data sections, and other sections. Whether or not ld can link
anything that large is not something I am directly aware of, not having
done it myself.
My point was that the text segment is created by a human
and is limited in size by human abilities.
Except for constants in the program, the data segment is
synthetic: it is created algorithmically,
or in some other way that doesn't grow in complexity
as the size of the data grows.
And I still claim that your assertion is bold and difficult to support
or contradict.
Right. I'd be happy to label it a conjecture. Like any conjecture,
all it takes is one counterexample to prove it false.
Way back in the good old days of DOS, we regularly
generated binaries that were 700+ KB in size. Had to use overlays and
linker tricks to deal with it. This was around the time of DOS 6. OS/2
was a breath of fresh air; my codes could hit 2-3 MB (binary size) without
too much pain. Had 16 MB on the machine, so it was "plenty of
room". I was not, however, of the opinion that 16 MB was all I would ever
need.
That's because you, and many other people, could handle the complexity
of creating a program with 2-3 MB, or even 16 MB, of text. But
there's a wall up there somewhere that you, and everybody else, will
hit long before you get to 4 GB of text.
Cordially,
Jon Forrest
Unix Computing Support
College of Chemistry
Univ. of Cal. Berkeley
173 Tan Hall
Berkeley, CA
94720-1460
510-643-1032
[EMAIL PROTECTED]
_______________________________________________
Beowulf mailing list, Beowulf@beowulf.org
To change your subscription (digest mode or unsubscribe) visit
http://www.beowulf.org/mailman/listinfo/beowulf