Michael Di Domenico wrote:
> Let's see if I can clarify
>
> assuming there are two clusters - clusterA and clusterB
>
> Each cluster is 32 nodes and has 50TB of storage attached
Attached how? Is the 50TB sitting on one file server on each cluster,
or is it distributed across the cluster? We nee
All,
I am augmenting a DDR-switched SGI ICE system with
one that is largely network-separate (a few 4x DDR links
connect them) and QDR-switched. The QDR "half" also
includes GPUs (one per socket). Has anyone configured
PBS to manage these kinds of natural divisions as a single
cluster? Some p
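One common way to express this kind of division in PBS/Torque is to tag each fabric with a node property in the server's nodes file and let jobs request the property. This is a sketch only; the hostnames, core counts, and property names below are hypothetical:

```
# $PBSHOME/server_priv/nodes -- hypothetical layout
ice001  np=8 ddr
ice002  np=8 ddr
qdr001  np=8 qdr gpus=2
qdr002  np=8 qdr gpus=2
```

A job then lands entirely on one fabric by requesting the property, e.g. `qsub -l nodes=4:ppn=8:qdr`, so the scheduler never mixes DDR and QDR nodes within a single job.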
On 03/05/2010 01:10 PM, Bill Broadley wrote:
Grid-ftp? http://www.globus.org/toolkit/docs/3.2/gridftp/key/index.html
If you don't already have the globus framework set up on both ends,
getting it installed just for gridftp is a huge amount of work;
especially since the advantage of gridftp d
On 3/5/2010 10:05 AM, Hearns, John wrote:
My recommendation also would be to use an external storage device - a
USB drive would be useful, and I have been involved in a couple of
industrial projects where data has been brought to a cluster on an
external USB drive. It is as people say quite a
___
Beowulf mailing list, Beowulf@beowulf.org sponsored by Penguin Computing
To change your subscription (digest mode or unsubscribe) visit
http://www.beowulf.org/mailman/listinfo
These might be the boxes used by post production/animation:
http://www.rocketstream.com/company/overview/default.aspx
>
> I'd like to parallelize that across multiple nodes to drive the aggregate
> up
>
> I was hoping someone would pop up and say, hey, use this magical piece of
> software (which I'm unable to locate)...
>
My recommendation also would be to use an external storage device - a
USB drive would be usefu
As I expect from the smartest sysadmins on the planet, everyone has
over-analyzed the issue... :)
Let's see if I can clarify.
Assuming there are two clusters - clusterA and clusterB.
Each cluster is 32 nodes and has 50TB of storage attached.
The aggregate network bandwidth between the clusters is 80
Umm, you have your network guys pull a fiber run (or two) from your
cluster's file server over to the other cluster's core network switch?
Alternately, you unbolt and pull the shelf of FC disks out of the rack, put
them on a cart and wheel them over to the other cluster's filer.
(1/2 :-)
It's an
Hi Mark,
Many thanks to you.
Regards,
Rigved
On Thu, Mar 4, 2010 at 1:35 AM, Mark Hahn wrote:
> we are not getting the latest free download version of mpiJava for Linux for
>>
>
> this is version 1.2.5, circa Jan 2003, right? Right away this should set
> off some alarms, since any maintained packag
On Fri, 05 Mar 2010 11:22:14 -0500, Mike Davis wrote:
> Michael Di Domenico wrote:
>> How does one copy large (20TB) amounts of data from one cluster to
>> another?
>>
>> Assuming that each node in the cluster can only do about 30MB/sec
>> between clusters and I want to preserve the uid/gid/timest
kyron wrote:
Given I haven't seen single 20TB drives out there yet, I doubt that's the
case. I wouldn't throw in NFS as a limiting factor (just yet), as I have
I was commenting on the 30 MB/s figure, not whether or not he had 20TB
attached to it (though if he did... that would be painful).
Michael Di Domenico wrote:
How does one copy large (20TB) amounts of data from one cluster to another?
Assuming that each node in the cluster can only do about 30MB/sec
between clusters and I want to preserve the uid/gid/timestamps, etc
If the clusters are co-lo I wouldn't copy; I would use sh
On Fri, 05 Mar 2010 11:00:03 -0500, Joe Landman
wrote:
> Michael Di Domenico wrote:
>> How does one copy large (20TB) amounts of data from one cluster to
>> another?
>>
>> Assuming that each node in the cluster can only do about 30MB/sec
>> between clusters and I want to preserve the uid/gid/time
Michael Di Domenico wrote:
How does one copy large (20TB) amounts of data from one cluster to another?
Assuming that each node in the cluster can only do about 30MB/sec
between clusters and I want to preserve the uid/gid/timestamps, etc
I know how I do it, but I'm curious what methods other peo
How does one copy large (20TB) amounts of data from one cluster to another?
Assuming that each node in the cluster can only do about 30MB/sec
between clusters and I want to preserve the uid/gid/timestamps, etc
I know how I do it, but I'm curious what methods other people use...
Just a general su
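For reference, the two usual single-stream answers, assuming root on both ends so ownership can actually be preserved (hostnames are hypothetical):

```shell
# Option 1: rsync -- -a preserves permissions, timestamps, owner and group;
# -H keeps hard links; --numeric-ids avoids uid/gid remapping between clusters.
rsync -aH --numeric-ids /data/ clusterB:/data/

# Option 2: tar over ssh -- often faster than rsync for a first full copy,
# since there is no per-file comparison pass; -p preserves permissions on extract.
tar -C /data -cf - . | ssh clusterB 'tar -C /data -xpf -'

# At 30 MB/s, a single stream moves 20 TB in roughly
# 20e12 / 30e6 s = ~667,000 s = ~7.7 days -- hence the wish to parallelize.
```

Either way, the uid/gid mapping only survives if both clusters share the same numeric ids (or the copy runs with --numeric-ids and matching passwd files).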