Steve Herborn wrote:
There are disciplines like EDA which generate large programs today.
Computer generated, of course. Companies like Intel and AMD use these
programs to design microprocessors. So yes, their architects are well aware
of this issue.

In particular, compiled simulators, which use very long runs of straight-line code to evaluate what the gates are doing, in the correct order, every cycle. The simple ones evaluate every gate every cycle; the cleverer ones figure out which blocks of gates can be skipped. These things really stress i-stream bandwidth.
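For anyone who hasn't seen one, here is a tiny hand-written sketch of the kind of code such a simulator emits (the net names and gates are made up, not from any real tool): one statement per gate, in topological order, no branches.

    /* Hypothetical sketch of compiled-simulator output: one statement per
     * gate, emitted in topological order so each net is computed before
     * it is used.  Real netlists run to hundreds of millions of lines. */
    typedef unsigned char net_t;

    net_t n_clk, n_a, n_b, n_sel;            /* primary inputs */
    net_t n_17, n_18, n_19, n_20;            /* internal nets  */

    void eval_one_cycle(void)
    {
        /* every gate, every cycle, straight-line code, no branches */
        n_17 = n_a & n_b;                     /* AND gate */
        n_18 = (net_t)(~(n_17 | n_sel)) & 1;  /* NOR gate */
        n_19 = n_17 ^ n_18;                   /* XOR gate */
        n_20 = n_19 & n_clk;                  /* AND gate */
        /* ...and so on for the whole netlist... */
    }

The smarter simulators wrap chunks of this in a test of whether any input net changed, so whole blocks can be skipped.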

I've not heard of multicore or cluster versions of these, but I'm not paying much attention to the area. Most design groups have so many test cases to run that many copies of the simulator are just as useful as one faster one.

Compiled sims generate very large text segments. At one instruction per gate, plus some overhead (loads and stores are "overhead"!), it is pretty easy to get to gigabyte text. The counterpressure is performance: with 4 GB/s of i-stream bandwidth, it takes on the order of a second per simulated cycle just to run through the code.
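To put rough numbers on it (these are illustrative round figures, not from any particular design):

    250 M gates  x ~4 instructions per gate (load, load, op, store)
                 =  1 G instructions
    1 G instructions x ~4 bytes each        ~= 4 GB of text
    4 GB of text / 4 GB/s i-stream          ~= 1 second per simulated cycle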

The point is that this class of large program does not have unmanageable complexity.

OT - I was once writing programs to generate programs with very large basic blocks, maybe 50K instructions. It is a Big Mistake to try gcc -O3 on these; the optimizer goes into brain freeze for 15 minutes or so...
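A minimal sketch of that kind of generator, if you want something to feed the compiler yourself (the file name and constants here are arbitrary, not what I actually used):

    /* Hypothetical generator: writes bigblock.c containing a single basic
     * block of ~50K dependent arithmetic statements, with no branches. */
    #include <stdio.h>

    int main(void)
    {
        const int n = 50000;
        FILE *f = fopen("bigblock.c", "w");
        if (f == NULL)
            return 1;
        fprintf(f, "volatile int seed = 1;\n");
        fprintf(f, "int main(void)\n{\n    int a = seed;\n");
        for (int i = 0; i < n; i++)
            fprintf(f, "    a = a * 3 + %d;\n", i);  /* one long dependence chain */
        fprintf(f, "    return a;\n}\n");
        fclose(f);
        return 0;
    }

Compiling the result with -O0 or -O1 is quick; with -O3 the optimizer gets a single 50K-statement block to chew on, which is where the brain freeze comes from.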

-Larry
Sector IX
