> Yes - and yet they still have horrible problems every time you have
> a conditional branch instruction.  That's because they are trying

Not really.  The Pentium 4 has a very efficient branch prediction unit.
Most of the time it guesses the correct branch to take.  When the actual
branch outcome is computed, it stores that result for later.  The next time
the branch is encountered, it bases its prediction on the stored history.
Conditional branches are much less of a problem now.
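
To illustrate (this is just a rough sketch of my own, not a real benchmark),
the cost of a branch depends almost entirely on whether the predictor can
learn its pattern.  Compare a branch that always goes the same way against
one that is essentially a coin flip:

    /* Sketch: predictable vs. unpredictable branches.  The array size
       and threshold are arbitrary choices. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define N 10000000

    int main(void)
    {
        static int data[N];
        long sum = 0;
        clock_t start;
        int i;

        for (i = 0; i < N; i++)
            data[i] = rand() & 0xFF;        /* values 0..255 */

        /* Predictable: the branch goes the same way every iteration,
           so the predictor learns it almost immediately. */
        start = clock();
        for (i = 0; i < N; i++)
            if (data[i] >= 0)               /* always taken */
                sum += data[i];
        printf("predictable:   %ld ticks (sum=%ld)\n",
               (long)(clock() - start), sum);

        /* Unpredictable: roughly a coin flip each iteration, so the
           predictor mispredicts about half the time. */
        sum = 0;
        start = clock();
        for (i = 0; i < N; i++)
            if (data[i] >= 128)             /* ~50/50 */
                sum += data[i];
        printf("unpredictable: %ld ticks (sum=%ld)\n",
               (long)(clock() - start), sum);

        return 0;
    }

You would expect the first loop to run noticeably faster, since the second
mispredicts roughly half the time and pays a pipeline flush for each miss.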


> >  Most of the processing power of today's CPUs go completely unused.
> > It is possible to create optimized implementations using
> > Single-Instruction-Multiple-Data (SIMD) instructions of efficient
> > algorithms.
>
> Which is a way of saying "Yes, you could do fast graphics on the CPU
> if you put the GPU circuitry onto the CPU chip and pretend that it's
> now part of the core CPU".

What does this have to do with adding GPU hardware to the CPU?  These SIMD
instructions are already present on modern processors in the form of SSE and
SSE2.
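
For example (a minimal sketch, nothing more), adding two arrays of floats
four elements at a time with SSE looks like this:

    /* Minimal SSE sketch: four single-precision additions per instruction.
       The array contents are arbitrary. */
    #include <stdio.h>
    #include <xmmintrin.h>              /* SSE intrinsics */

    int main(void)
    {
        float a[8] = {1, 2, 3, 4, 5, 6, 7, 8};
        float b[8] = {8, 7, 6, 5, 4, 3, 2, 1};
        float c[8];
        int i;

        for (i = 0; i < 8; i += 4) {
            __m128 va = _mm_loadu_ps(&a[i]);    /* load 4 floats */
            __m128 vb = _mm_loadu_ps(&b[i]);
            __m128 vc = _mm_add_ps(va, vb);     /* 4 adds at once */
            _mm_storeu_ps(&c[i], vc);           /* store 4 results */
        }

        for (i = 0; i < 8; i++)
            printf("%g ", c[i]);
        printf("\n");
        return 0;
    }

SSE2 adds the same style of operations for double-precision floats and
packed integers, which is exactly the kind of data-parallel work a
rasterizer spends its time on.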


> > We have gone from approximately 200MB/s of memory bandwidth (PC66 EDO RAM)
> > to over 3.2GB/s (dual 16-bit RDRAM channels) in the last 5 years.  We have
> > over 16 times the memory bandwidth available today than we did just 5 years
> > ago.  Available memory bandwidth has been growing more quickly than
> > processor clockspeed lately, and I do not foresee an end to this any time
> > soon.
>
> OK - so a factor 70 in CPU growth and a factor of 16 in RAM speed.

No, in this 5 year period, processor clockspeed has moved from approximately
200MHz to over 2GHz.  This is a factor of 10 in CPU growth and 16 in memory
bandwidth.  Memory bandwidth is growing more quickly than processor
clockspeed now.


> > Overutilised in my opinion.  The amount of overdraw performed by today's
> > video cards on modern games and applications is incredible.  Immediate mode
> > rendering is an inefficient algorithm.  Video cards tend to have extremely
> > well optimized implementations of this inefficient algorithm.
>
> That's because games *NEED* to do lots of overdraw.  They are actually

The games perform overdraw, sure.  But I am talking about overdraw at the
pixel level.  A scene-capture algorithm performs zero overdraw, regardless
of what the game sends it.  This greatly reduces fillrate requirements.
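
To make the idea concrete, here is a deliberately simplified sketch of the
scene-capture approach (the Surface type and the rectangle test are
stand-ins for real triangle data, not anything a Kyro actually does):
collect the whole scene first, resolve visibility per pixel, and shade each
pixel exactly once.

    /* Simplified scene-capture sketch: visibility is resolved before any
       shading, so each pixel is written exactly once. */
    #include <stdio.h>
    #include <float.h>

    typedef struct {
        int x0, y0, x1, y1;   /* screen-space extent (stand-in for a triangle) */
        float depth;
        unsigned color;
    } Surface;

    /* Return the color of the nearest surface covering (x, y).  In a real
       renderer this is where the single shading pass would happen. */
    static unsigned shade_pixel(const Surface *s, int n, int x, int y)
    {
        float nearest = FLT_MAX;
        unsigned color = '.';   /* background */
        int i;

        for (i = 0; i < n; i++) {
            if (x >= s[i].x0 && x < s[i].x1 &&
                y >= s[i].y0 && y < s[i].y1 &&
                s[i].depth < nearest) {
                nearest = s[i].depth;
                color = s[i].color;     /* keep only the nearest surface */
            }
        }
        return color;
    }

    int main(void)
    {
        /* Two overlapping surfaces.  An immediate-mode renderer would draw
           both and overwrite the overlap; this shades the overlap once. */
        Surface scene[2] = {
            { 0, 0, 8,  8,  5.0f, 'A' },
            { 4, 4, 12, 12, 2.0f, 'B' },
        };
        int x, y;

        for (y = 0; y < 12; y++) {
            for (x = 0; x < 12; x++)
                putchar((int)shade_pixel(scene, 2, x, y));
            putchar('\n');
        }
        return 0;
    }

No matter how many surfaces the game submits for a pixel, only the visible
one is ever shaded, which is where the fillrate savings come from.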


> > Kyro-based video cards perform quite well.  They are not quite up to the
> > level of nVidia's latest cards...
>
> Not *quite*!!! Their best card is significantly slower than
> a GeForce 2MX - that's four generations of nVidia technology
> ago.

This has nothing to do with the algorithm itself.  It merely has to do with
the company's ability to scale its hardware.  A software implementation
would not be limited in this manner.  It could take advantage of the
processor manufacturer's ability to scale speeds much more easily.


> Also, in order to use scene capture, you are reliant on the underlying
> graphics API to be supportive of this technique.  Neither OpenGL nor
> Direct3D are terribly helpful.

Kyro-based 'scene-capture' video cards support both Direct3D and OpenGL.
Any game you can play using an nVidia card you can also play using a
Kyro-based card.


> > > Everything that is speeding up the main CPU is also speeding up
> > > the graphics processor - faster silicon, faster busses and faster
> > > RAM all help the graphics just as much as they help the CPU.
> >
> > Everything starts out in hardware and eventually moves to software.
>
> That's odd - I see the reverse happening.  First we had software

The move from hardware to software is an industry-wide pattern for all
technology.  It saves money.  3D video cards have been implementing new
technologies that were never used in software before.  Once the main
processor is able to handle these things, they will be moved into software.
This is just a fact of life in the computing industry.  Take a look at what
they did with "Winmodems".  They removed hardware and wrote drivers to
perform the tasks.  The same thing will eventually happen in the 3D card
industry.


> As CPU's get faster, graphics cards get *MUCH* faster.

This has mostly to do with memory bandwidth.  The processors on the video
cards are not all that impressive by themselves.  Memory bandwidth available
to the CPU is increasing rapidly.


> CPU's aren't "catching up" - they are getting left behind.

I disagree.  What the CPU lacks in hardware units it makes up with sheer
clockspeed.  A video card may be able to perform 10 times as many operations
per clock cycle as a CPU.  But if that CPU is operating at over 10 times the
clockspeed, who cares?  It will eventually be faster.  Video card
manufacturers cannot scale clockspeed anywhere near as well as Intel.
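
Back-of-the-envelope, using the hypothetical numbers above:

    /* Hypothetical throughput comparison from the argument above; these
       are illustrative numbers, not measurements. */
    #include <stdio.h>

    int main(void)
    {
        double gpu = 10.0 * 200e6;      /* 10 ops/clock at 200MHz */
        double cpu =  1.0 * 2e9;        /*  1 op/clock  at 2GHz   */

        printf("GPU: %.1f Gops/s\n", gpu / 1e9);    /* 2.0 */
        printf("CPU: %.1f Gops/s\n", cpu / 1e9);    /* 2.0 */
        return 0;
    }

The per-clock advantage washes out once the clockspeeds diverge far enough.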


> They are adding in steps to the graphics processing that are programmable.

And this introduces the same problems that the main CPU is much better at
dealing with.  Branch prediction and other software issues have been highly
optimized in the main processor.  Video cards cannot deal with software
issues like this anywhere near as well.


> > Intel is capable of
> > pushing microprocessor technology more quickly than nVidia or ATI,
> > regardless of how much nVidia wants their technology to be at the center of
> > the chipset.
>
> So how come Intel CPU's have only doubled in speed over the last 18
> months when nVidia's GPU's have speeded up by a factor of four or so
> in the same interval?

nVidia's GPUs have not increased in clockspeed by a factor of 4.  Increased
memory bandwidth may have allowed performance increases of this level, but
the same is going to happen as memory bandwidth to the main CPU continues to
increase.


> > Fill rate is just memory bandwidth.  It is not hard to offer more memory
> > channels.  In fact, a dual-channel DDR chipset is coming soon for the
> > Pentium 4.  In May the Pentium 4 will have access to 4.3GB/s of memory
> > bandwidth.  Future generations will offer considerably more.
>
> But all of those benefits are also available to graphics chips - you
> have to get a 100-fold speedup from *somewhere* - RAM bandwidth *could*
> possibly get you that - but then graphics cards will also have a 100x
> RAM bandwidth speedup - so the relative performances will remain.

Unlikely.  It is more likely that the main CPU will catch up to the memory
technology used by video cards.  nVidia *just* introduced a video card with
4 DDR SDRAM channels for 10.4GB/s of memory bandwidth.  Before that they
were using dual channels.  The Pentium 4 will have access to about 4.3GB/s
of memory bandwidth next month.  We hardly need a 100x memory bandwidth
speedup.  It is closer to 2.5x: 10.4GB/s divided by 4.3GB/s is only about 2.4.

-Raystonn

