On Fri, Feb 06, 2004 at 06:01:04PM -0800, Ian Romanick wrote:
> 
> I seem to recall those boards were basically two Rage128 chips and a PCI 
> bridge.

I don't see a PCI bridge.  It seems more likely to be a master/slave
configuration of some sort, unless there is a bridge built into one of
the Rage128 chips(?).

http://www.xbitlabs.com/images/video/ati-furymaxx/fury-maxx-small.jpg

Judging from the board layout, you are probably right that the memory
banks are private to each chip.

> Each chip would take turns rendering frames.  When the driver 
> got a SwapBuffer command, it would queue the swap, and start issuing 
> drawing commands to the other chip.

So instead of SLI, it is a frame-based interleave.  I guess this could
_only_ be used effectively with double buffering.  Otherwise the other
chip sits idle while your program waits for the chip it sent command
buffers to before it can send another buffer.  Without double
buffering, only one chip at a time would see any action, no matter
which chip actually receives the commands.
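
Just to make sure I have the idea straight, here's a minimal sketch of
what the dispatch side might look like.  Everything below is made up
by me, not taken from any docs; chip_emit() and chip_queue_swap() are
hypothetical stand-ins for however you'd actually feed each chip:

    #include <stdio.h>
    #include <stddef.h>

    /* Hypothetical stand-ins for submitting work to one chip. */
    static void chip_emit(int chip, const void *buf, size_t len)
    {
        (void)buf;  /* a real driver would DMA this somewhere */
        printf("chip %d: %zu bytes of drawing commands\n", chip, len);
    }

    static void chip_queue_swap(int chip)
    {
        printf("chip %d: queue buffer swap\n", chip);
    }

    /* Alternate-frame rendering: all drawing for one frame goes to a
     * single chip; SwapBuffers flips which chip gets the next frame. */
    static int current_chip = 0;

    static void maxx_emit_cmds(const void *buf, size_t len)
    {
        chip_emit(current_chip, buf, len);
    }

    static void maxx_swap_buffers(void)
    {
        /* Queue the swap on the chip that just drew this frame... */
        chip_queue_swap(current_chip);
        /* ...and start feeding the next frame to the other chip
         * without waiting for the first one to finish. */
        current_chip ^= 1;
    }

With a single-buffered visual nothing ever calls maxx_swap_buffers(),
so current_chip never flips and the second chip just sits there.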

> What might be more interesting is to have half of the GL contexts render 
> with one chip and the other half render with the other chip.  It would 
> basically operate (from the driver's perspective) like you had two 
> separate cards in the system.  This may or may not work depending on how 
> buffer swaps work on that card.  If one chip always acts as the display, 
> and the other chip copies to the display chip's memory, it would work. 

That is interesting; I hadn't thought about that.  It would be more
like an SMP video card: instead of clients having to fight over a
single whole-card lock to run in interleaved mode, you'd have one lock
per chip, or something like that.  Depending on typical usage, the
user could configure the card for simultaneous rendering mode (if he
typically has >=2 GL apps going at a time) or interleaved mode (for
==1 GL app).  Would there be any performance advantage, though?
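
Roughly what I'm picturing, with pthread mutexes standing in for
whatever the real per-chip hardware lock would be (again, all made-up
names, just a sketch):

    #include <pthread.h>

    /* One lock per chip instead of one lock for the whole card, so
     * two GL clients assigned to different chips never contend. */
    static pthread_mutex_t chip_lock[2] = {
        PTHREAD_MUTEX_INITIALIZER,
        PTHREAD_MUTEX_INITIALIZER
    };

    /* Naive policy: assign contexts to chips round-robin at create
     * time. */
    static int chip_for_context(int context_id)
    {
        return context_id & 1;
    }

    static void maxx_lock(int context_id)
    {
        pthread_mutex_lock(&chip_lock[chip_for_context(context_id)]);
    }

    static void maxx_unlock(int context_id)
    {
        pthread_mutex_unlock(&chip_lock[chip_for_context(context_id)]);
    }

Whether that buys anything probably depends on how independently the
two chips can really run, which comes back to the swap question below.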

Of course you'd have a security question too: how do you keep clients
from stomping on each other's resources when you have two chips on the
same _card_?

I wonder if there is a frame buffer in both chips' local memory, and a
buffer swap does some strange CRTC swap or something.  Otherwise it
would seem that the slave chip spends a lot of time copying its
private frame buffer to the real location.  That would also leave the
slave with more texture memory than the master, which is strange too,
since the memory would be wasted: the master and slave (theoretically)
need to store the same textures.
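
To make the two possibilities concrete (pure guesswork on my part, all
names invented):

    #include <stdio.h>

    /* Two speculative ways the card could present the slave's frame. */
    enum swap_strategy { SWAP_CRTC_FLIP, SWAP_COPY_TO_MASTER };

    static void present_slave_frame(enum swap_strategy how)
    {
        switch (how) {
        case SWAP_CRTC_FLIP:
            /* Both chips keep a full frame buffer locally and the
             * CRTC alternates which one it scans out; no copy, but
             * both chips lose the same memory to the frame buffer. */
            printf("flip scanout to the slave's local frame buffer\n");
            break;
        case SWAP_COPY_TO_MASTER:
            /* Only the master's memory is ever scanned out, so the
             * slave blits its private back buffer across every frame;
             * the slave ends up with more free memory for textures,
             * but both chips still need the same textures anyway. */
            printf("blit slave back buffer into master memory\n");
            break;
        }
    }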

Well, lots of speculation and no facts.

> Now that I think about it, that's probably how the card works.  That 
> would explain some of the problems & limitations the Windows drivers 
> had.  Hmpf.  Still, it might be a fun pet project.

It'd be great fun...  if only someone could coax a little info out of
ATI. :)  I think only people under NDA might have any luck there,
though, and unfortunately probably none of them have the hardware
anyway.  I'd definitely want to play around with it once I clear out
my current mga stuff, in case someone knows who to contact to rustle
up some docs.

-- 
Ryan Underwood, <[EMAIL PROTECTED]>
