On Fri, Sep 24, 2010 at 6:21 AM, Matt Hurd <[email protected]> wrote:
> I'm associated with a somewhat stealthy start-up.  Only teaser product
> with some details out so far is a type of packet replicator.
>
> Designed 24-port ones, but settled on 16- and 48-port 1RU designs, as
> this seemed to reflect users' needs better.
>
> This was not designed for HPC but for low-latency trading, as it beats
> a switch in terms of speed.  Primarily focused on low-latency
> distribution of market data to multiple users; the port-to-port
> latency is in the range of 5-7 nanoseconds, since it is a pretty
> passive device with optical foo at the core.  No rocket science here,
> just convenient opto-electrical foo.
>
> One user has suggested using them for their cluster but, as they are
> secretive about what they do, I don't understand their use case.  They
> suggested interest in bigger port counts and mentioned >1000 ports.
>
> Hmmm, we could build such a thing at about 8-9 ns latency, but I don't
> quite get the point, being used to embarrassingly parallel stuff
> myself.  Would have thought this opticast thing doesn't replace an
> existing switch framework and would just be an additional cost rather
> than helping too much.  If it has a use, maybe we should build one
> with a lot of ports, though 1024 ports seems a bit too big.
>
> Any ideas on the list about use of low latency broadcast for specific
> applications in HPC?  Are there codes that would benefit?
>
> Regards,
>
> Matt.

Maybe they're doing a Monte Carlo forecast based on real-time market
data: broadcasting the data to 1000+ processes, where each process uses
a different random seed to generate independent points in phase-space.
Of course, they would then have to send the updated phase-space
somewhere in order to update their likelihoods and issue a reaction. I
suppose if communication were the primary bottleneck, a doubling of
performance would be an upper limit.
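If it helps picture the pattern, here's a rough sketch of what I have
in mind. This is purely illustrative (all names, parameters, and the
toy price model are made up, not anything from a real trading system):
one tick is fanned out to many workers, each with its own seed so the
Monte Carlo draws are independent, and the results are gathered back
into a single estimate.

```python
import random
import statistics

def worker(seed, tick_price, n_paths=1000, horizon=10, vol=0.01):
    """Simulate independent price paths from the broadcast tick.

    A distinct seed per worker gives each process an independent
    slice of phase-space.
    """
    rng = random.Random(seed)
    finals = []
    for _ in range(n_paths):
        p = tick_price
        for _ in range(horizon):
            # Toy multiplicative random walk; stands in for any model.
            p *= 1.0 + rng.gauss(0.0, vol)
        finals.append(p)
    return statistics.mean(finals)

def broadcast_and_gather(tick_price, n_workers=8):
    """The fan-out step a hardware replicator would do in ~ns:
    every worker sees the same tick at (effectively) the same time.
    The gather step then combines the estimates into one forecast."""
    estimates = [worker(seed, tick_price) for seed in range(n_workers)]
    return statistics.mean(estimates)

print(broadcast_and_gather(100.0))
```

The serial loop over workers here is just for clarity; the whole point
of the hardware would be that the fan-out is simultaneous, leaving only
the per-worker compute and the gather on the critical path.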


-Kevin


> _________________
> www.zeptonics.com
> _______________________________________________
> Beowulf mailing list, [email protected] sponsored by Penguin Computing
> To change your subscription (digest mode or unsubscribe) visit 
> http://www.beowulf.org/mailman/listinfo/beowulf
>



-- 
Kevin Van Workum, PhD
Sabalcore Computing Inc.
Run your code on 500 processors.
Sign up for a free trial account.
www.sabalcore.com
877-492-8027 ext. 11

