Hello Orion,
On Wednesday, 15 July 2009, you wrote:
OP> On 07/15/2009 11:23 AM, Joe Landman wrote:
>> Orion Poplawski wrote:
>>> I'm thinking about building a moderately fast network storage server
>>> using a striped array of SSD disks. I'm not sure how much such a
>>> device would benefit from b
Hello Orion,
On Wednesday, 15 July 2009, you wrote:
>> Can you give a bit more info?
>> Network Storage Server = NFS?
OP> NFS, though I'm open to trying other options.
Well... NFS is a good start, if 100 MB/s is enough for your purposes.
NFS does not work very well at higher speeds - the pro
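That 100 MB/s figure is roughly what a single GbE link delivers (my gloss, not part of the original mail):

\[
\frac{1\ \text{Gbit/s}}{8\ \text{bit/B}} = 125\ \text{MB/s theoretical}, \qquad \approx 110\text{-}118\ \text{MB/s after TCP/NFS overhead}.
\]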
Hello Orion,
On Wednesday, 15 July 2009, you wrote:
OP> I'm thinking about building a moderately fast network storage server
OP> using a striped array of SSD disks. I'm not sure how much such a device
OP> would benefit from being dual processor and whether the latest Nehalem
OP> chips would be b
Hello Tiago,
On Sunday, 17 May 2009, you wrote:
On Sat, May 16, 2009 at 11:56 PM, Rahul Nabar wrote:
On Sat, May 16, 2009 at 2:34 PM, Tiago Marques wrote:
> One of the codes, VASP, is very bandwidth-limited and loves to run on a
> number of cores that is a multiple of 3. The 5400s are also very bandwi
Hello Dr,
On Wednesday, 13 May 2009, you wrote:
I have a cluster of identical computers. We are planning to add more nodes
later. I was thinking whether I should go the diskless nodes way or not?
Diskless nodes seem like a really exciting, interesting and good option; however,
when I did it I ne
Hello Rahul,
where are you "based"? If you are in Europe we (ClusterVision) can definitely
help you out with benchmarks and quotes...
Go to www.clustervision.com for further information...
Jan
On Tuesday, 12 May 2009, you wrote:
RN> I'm currently shopping around for a cluster-expansion and
Hello Mark,
On Wednesday, 13 May 2009, you wrote:
>> I'm currently shopping around for a cluster-expansion and was shopping
>> for options. Anybody out here who's bought new hardware in the recent
>> past? Any suggestions? Any horror stories?
MH> I think the answer depends on time-frame. If you
Hello Richard,
On Friday, 10 April 2009, you wrote:
> Kilian CAVALOTTI wrote:
>> On Wednesday 08 April 2009 19:48:30 Joe Landman wrote:
>>> As an FYI, Beowulf veteran Jeff Layton wrote up a nice article on
>>> memory
>>> configuration issues for Nehalem (I had seen some discussion on this
>>>
Hello Peter,
On Wednesday, 8 April 2009, you wrote:
PK> On Wednesday 08 April 2009, Joe Landman wrote:
>> As an FYI, Beowulf veteran Jeff Layton wrote up a nice article on memory
>> configuration issues for Nehalem (I had seen some discussion on this
>> previously).
PK> Can anyone confirm that mi
Hello Jeff,
On Wednesday, 8 April 2009, you wrote:
JL> Peter Kjellstrom wrote:
>> On Wednesday 08 April 2009, Joe Landman wrote:
>>
>>> As an FYI, Beowulf veteran Jeff Layton wrote up a nice article on memory
>>> configuration issues for Nehalem (I had seen some discussion on this
>>> previousl
Hello Rahul,
On Tuesday, 7 April 2009, you wrote:
RN> On Tue, Apr 7, 2009 at 12:08 PM, Gerry Creager
wrote:
>> Yep. When the hardware arrives, and is installed (at $5K or more/rack!),
>> the bill is due. Acceptance testing? No way. And if we choose to run
>> something silly like Rocks/Cent
Hello Prentice,
On Monday, 6 April 2009, you wrote:
PB> Mark Hahn wrote:
>> buying an extended warranty might help. buying a shrink-wrapped cluster
>> might help too.
PB> Not really. My cluster was a "shrink-wrapped" cluster from Dell. Turns
PB> out Dell hired someone from a 3rd-party to actual
Hello Rahul,
On Monday, 6 April 2009, you wrote:
RN> Just making the point that CentOS (IMHO) is as good or bad a choice
RN> and I wish the vendors let me peacefully live with it! :) To each his
RN> own Distro (within reasonable bounds).
Well - the point is that a vendor can't debug a distro. Y
Hello Rahul,
On Monday, 6 April 2009, you wrote:
RN> On Mon, Apr 6, 2009 at 7:46 AM, Chris Samuel wrote:
>> Even though we could reproduce it on 64-bit Debian
>> and 32-bit CentOS they wouldn't escalate the issue
>> until we could reproduce it on RHEL5 - which we did
>> today.
RN> Thanks for s
Hello Joe,
On Saturday, 4 April 2009, you wrote:
JL> Mark Hahn wrote:
>>> Not every cluster FS we see is Lustre.
>> so what are the non-lustre scalable options for cluster FS?
JL> Good performance:
JL> --
JL> GlusterFS
JL> PVFS2
JL> Low performance:
JL> --
JL> G
Hello John,
On Friday, 3 April 2009, you wrote:
JH> 2009/4/3 Mike Davis :
>> > My experience is the polar opposite. SGI quotes took forever. And that
>> situation goes back to the mid 90's. I can usually have a Sun quote in
>> hours.
JH> This should not turn into a slanging match, but I'll back
Hello Greg,
On Thursday, 2 April 2009, you wrote:
GL> On Thu, Apr 02, 2009 at 10:32:22AM +0200, Kilian CAVALOTTI wrote:
>> Anyway, I was thinking about companies like Penguin or Scalable, designing
>> their own hardware products, like Relions or JackRabbits, and not just
>> reselling Supermi
Hello Geoff,
On Thursday, 2 April 2009, you wrote:
>> Also it's not like big discounts don't exist in the corporate world.
>> Back when I worked for $multinational our group rates with Dell were
>> astonishingly cheap.
GG> There was one university I worked for that Dell sold a medium-size
Hello Kilian,
On Thursday, 2 April 2009, you wrote:
KC> On Wednesday 01 April 2009 16:19:57 John Hearns wrote:
>> > There are a few services/integration HPC
>> > companies in the EU, but not any that I'm aware of selling their own
>> > hardware, as Scalable or Penguin do.
>> ? I know that
Hello Mikhail,
On Tuesday, 31 March 2009, you wrote:
MK> In message from Kilian CAVALOTTI
MK> (Tue, 31 Mar 2009 10:27:55 +0200):
>> ...
>>Any other numbers, people?
MK> I believe there are also some other important numbers - prices for
MK> Xeon 55XX and system boards
Don't forget DDR3-ECC
Hello Ivan,
Since this list is read by people in many countries, and vendors are normally
active in certain geographical areas, you should specify where the cluster will
be located...
Jan
On Tuesday, 9 December 2008, you wrote:
I know that many readers of this forum work for cluster vendors
Hello Bogdan,
On Wednesday, 3 December 2008, you wrote:
BC> On Wed, 3 Dec 2008, Mark Hahn wrote:
>> I don't know whether there would be any problem putting a real
>> interconnect card (10G, IB, etc) into one of these - some are
>> designed for GPU cards, so would have 8 or 16x pcie slots.
BC>
Hello Franz,
On Friday, 21 November 2008, you wrote:
FM> That's simply not true. Every newer card from NVidia (that is, every
FM> G200-based card, right now, GTX260, GTX260-216 and GTX280) supports DP,
FM> and nothing indicates that NV will remove support in future cards, quite
FM> the contrary.
Hello Mark,
On Thursday, 20 November 2008, you wrote:
>> [shameless plug]
>> A project I have spent some time with is showing 117x on a 3-GPU machine
>> over
>> a single core of a host machine (3.0 GHz Opteron). The code is
>> mpihmmer, and the GPU version of it. See http://www.mpihm
Hello Mikhail,
On Wednesday, 22 October 2008, you wrote:
MK> In message from "Ivan Oleynik" <[EMAIL PROTECTED]> (Tue, 21 Oct 2008
MK> 18:15:49 -0400):
>>I have heard that AMD Shanghai will be available in Nov 2008. Does
>>someone
>>know the pricing and performance info and how is it compared with
Hello Matt,
On Wednesday, 15 October 2008, you wrote:
ML> Also, I recommend staying away from the MD3000 storage unit. It has
ML> issues such as not being able to present LUNs larger than 2TB, not
ML> supporting RAID6 and
The next firmware release will fix those issues. Don't know when it will
Hello Prentice,
On Wednesday, 1 October 2008, you wrote:
PB> And what is the status of DDR? Are people still using it, or has it
PB> already been replaced by QDR in the marketplace?
DDR is still the most important InfiniBand flavour on the market. DDR provides enough
bandwidth for most applications at t
Hello Greg,
On Thursday, 25 September 2008, you wrote:
Glen,
I have had great success with the *right* 10GbE NIC and NFS. The important
things to consider are:
I have to say my experience was different.
How much bandwidth will your backend storage provide? 2 x FC 4 I'm guessing
best
Hello Bernd,
On Wednesday, 10 September 2008, you wrote:
BS> No, you can do active/active with several systems:
BS>       Raid1
BS>      /     \
BS>   OSS1     OSS2
BS>      \     /
BS>       Raid2
BS> (Raid1 and Raid2 are hardware RAID systems).
BS> Now OSS1 will primarily serve Raid1 and O
Hello Joe,
On Tuesday, 26 August 2008, you wrote:
>> Questions:
>> 1) What is the cost-effective yet efficient way to connect this cluster
>> with IB?
JL> Understand that cost-effective is not necessarily the highest
JL> performance. This is where over-subscription comes in.
>> 2)
Hello Henning,
On Friday, 8 August 2008, you wrote:
HF> Hi everybody,
HF> One basically needs a daemon which handles copying requests and establishes
HF> the connection to the next node in the chain.
Why a daemon? Just use MPI, which starts up the processes on the remote nodes
during program startup.
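A minimal sketch of that idea (my code, not from the original thread; the chunk size and the assumption that rank 0 owns the data are made up): every rank receives the data from its predecessor and forwards it to its successor, so the MPI launcher itself replaces the copy daemon.

#include <mpi.h>
#include <string.h>

/* Chain copy without a daemon: rank 0 holds the data, every other rank
 * receives it from rank-1 and forwards it to rank+1. mpirun already
 * started all of these processes on the remote nodes. */
int main(int argc, char **argv)
{
    int rank, size;
    static char buf[1 << 20];              /* 1 MB chunk, arbitrary size */

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0 && size > 1) {
        memset(buf, 'x', sizeof(buf));     /* stand-in for data read from disk */
        MPI_Send(buf, sizeof(buf), MPI_CHAR, 1, 0, MPI_COMM_WORLD);
    } else if (rank > 0) {
        MPI_Recv(buf, sizeof(buf), MPI_CHAR, rank - 1, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        if (rank + 1 < size)               /* forward down the chain */
            MPI_Send(buf, sizeof(buf), MPI_CHAR, rank + 1, 0, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}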
Hello Daniel,
On Thursday, 24 July 2008, you wrote:
[network configurations]
I have to say I am not sure that all the configs you sketched really work. I
have never seen anybody create loops in an IB fabric.
DP> Since I am not network expert I would be glad if somebody explains
DP> why the fir
Hello Dan,
On Tuesday, 1 July 2008, you wrote:
>>Hi Jon,
>>We have our own stack which we stick on top of the customers favourite
>>red hat clone. Usually Scientific Linux.
>>Here is a bit more about it.
>>http://www.clustervision.com/products_os.php
>>We sell as a standalone product and it
certified hardware and software. You also get long update cycles and product stability.
Regards,
Jan Heichler
Hello Mark,
On Wednesday, 25 June 2008, you wrote:
>>> so the switch fabric would be a 'leaf' layer with 12 up and
>>> 12 down, and a top layer with 24 down, right? so 3000 nodes
>>> means 250 leaves and 125 tops, 9000 total ports so 4500 cables.
>> The number of switch layers, or tiers, depends
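To make the quoted arithmetic explicit (my reconstruction, not part of the original mail), with 24-port switches split 12 down / 12 up at the leaf level:

\[
\text{leaves} = \frac{3000\ \text{nodes}}{12} = 250, \qquad
\text{top switches} = \frac{250 \times 12\ \text{uplinks}}{24} = 125,
\]
\[
\text{switch ports} = (250 + 125) \times 24 = 9000 .
\]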
Hello Ramiro,
On Friday, 13 June 2008, you wrote:
RAQ> On Fri, 2008-06-13 at 17:55 +0200, Jan Heichler wrote:
>> You can use the 24-port switches to create a full bisectional
>> bandwidth network if you want that. Since all the big switches are
>> based on the 24-po
Hello Ramiro,
On Friday, 13 June 2008, you wrote:
RAQ> By the way:
RAQ> a) How many hops does a Flextronics 10U 144 Port Modular do?
3
RAQ> b) And the others?
3 too.
RAQ> c) How much latency am I losing in each hop? (In the case of Voltaire
RAQ> switches: ISR 9024 - 24 Ports: 140 ns ; IS
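Putting those two answers together (my arithmetic, using the 140 ns per-hop figure quoted above):

\[
3\ \text{hops} \times 140\ \text{ns} = 420\ \text{ns} \approx 0.42\ \mu\text{s},
\]

which is small compared to the few-microsecond end-to-end MPI latencies typical of DDR InfiniBand at the time.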
Hello Gilad,
On Thursday, 12 June 2008, you wrote:
>
What is the chipset that you have?
MCP55 by Nvidia.
OFED 1.3 and MVAPICH2 1.0.3 and 1.0.2 tested.
Regards,
Jan
>
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Jan Heichler
Sent: Thursday, J
t it be different on every run? And: how can I move the process? numactl or taskset only works on the local process, I assume. How can I move the "remote process" on the other host?
Regards,
Jan
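One way around that (my sketch, not from the thread): let every MPI rank pin itself at startup, so the placement also happens on the remote hosts that taskset cannot reach. The rank-to-core mapping below (4 cores per node, one rank per core) is an assumption for illustration.

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <mpi.h>

/* Each rank pins itself instead of being moved from outside.
 * Mapping rank -> core via "rank % 4" assumes 4 cores per node;
 * adjust for the real node layout. */
int main(int argc, char **argv)
{
    int rank;
    cpu_set_t set;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    CPU_ZERO(&set);
    CPU_SET(rank % 4, &set);
    if (sched_setaffinity(0, sizeof(set), &set) != 0)
        perror("sched_setaffinity");       /* pin the calling process */

    /* ... application work runs on the chosen core from here on ... */

    MPI_Finalize();
    return 0;
}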
>
-Tom
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Jan Heichler
Dear all!
I found this
http://mvapich.cse.ohio-state.edu/performance/mvapich2/opteron/MVAPICH2-opteron-gen2-DDR.shtml
as a reference value for the MPI latency of InfiniBand. I am trying to reproduce
those numbers at the moment but I'm stuck with
# OSU MPI Latency Test v3.0
# Size          Latency (us)
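If someone wants to cross-check without the OSU suite, here is a minimal ping-pong sketch (my code, not the OSU benchmark; message size and iteration count are arbitrary):

#include <mpi.h>
#include <stdio.h>

/* Minimal ping-pong: rank 0 and rank 1 bounce an 8-byte message;
 * one-way latency is half the averaged round-trip time. Run with
 * two ranks, one per node. */
int main(int argc, char **argv)
{
    const int iters = 10000;
    char buf[8];
    int rank, i;
    double t0, t1;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Barrier(MPI_COMM_WORLD);
    t0 = MPI_Wtime();
    for (i = 0; i < iters; i++) {
        if (rank == 0) {
            MPI_Send(buf, sizeof(buf), MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, sizeof(buf), MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, sizeof(buf), MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(buf, sizeof(buf), MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    t1 = MPI_Wtime();

    if (rank == 0)
        printf("one-way latency: %.2f us\n",
               (t1 - t0) / iters / 2.0 * 1e6);

    MPI_Finalize();
    return 0;
}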
ions that compute results? Or software
to manage a Beowulf cluster?
Regards, Jan Heichler
Hello Joshua,
On Friday, 9 May 2008, you wrote:
Jma> If you had a 2.3 GHz at 2.0 GHz NB you would get 17.5 GB/sec.
2.0 GHz what? What does NB mean?
Cheers,
Jan
Hello Tom,
On Wednesday, 7 May 2008, you wrote:
>> Has there been a recent (i.e., conducted this year with
>> shipping silicon) performance study/benchmark of the
>> "Harpertwon" family of Intel Xeon processors running with an
>> external clock of 1600 MHz against a suitable high-end member
>>
Hello Jaime,
On Monday, 5 May 2008, you wrote:
JP> Hello,
JP> Just a small question, does anybody have experience with many-core
JP> (16) nodes and InfiniBand? Since we have some users that need
JP> shared memory but also we want to build a normal cluster for
JP> mpi apps, we think that this coul
Hello Gilad,
On Monday, 5 May 2008, you wrote:
>> Bonding (or multi-rail) does not make sense with "standard IB" in PCIe
GS> x8 since the PCIe connection limits the transfer rate of a single
GS> IB-Link already.
>> My hint would be to go for InfiniPath from QLogic or the new ConnectX
GS> from
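To put numbers on that claim (my arithmetic, not from the original mail, assuming PCIe 1.1): an x8 slot carries

\[
8 \times 2.5\ \text{Gbit/s} \times \frac{8}{10} = 16\ \text{Gbit/s} = 2\ \text{GB/s}
\]

per direction before protocol overhead, which already equals the 16 Gbit/s data rate of a single DDR IB link (20 Gbit/s signalling, 8b/10b coded) - so a second bonded link has no host bandwidth left to use.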
Hello Jaime,
On Monday, 5 May 2008, you wrote:
JP> Hello,
JP> Just a small question, does anybody have experience with many-core
JP> (16) nodes and InfiniBand? Since we have some users that need
JP> shared memory but also we want to build a normal cluster for
JP> mpi apps, we think that this coul
Hello Håkon,
On Friday, 25 April 2008, you wrote:
HB> Hi Jan,
HB> At Wed, 23 Apr 2008 20:37:06 +0200, Jan Heichler <[EMAIL PROTECTED]> wrote:
>> From what I saw, OpenMPI has several advantages:
>> - better performance on multi-core systems
>>   because of good sha
Hello Mark,
On Wednesday, 23 April 2008, you wrote:
>> Can anyone give me a quick comparison of OpenMPI vs. MPICH? I've always
>> used MPICH (I never liked lam), and just recently heard about OpenMPI.
>> Anyone here using it?
MH> openmpi is a genetic descendant of lam (with significant other cont
Hello Jan,
On Saturday, 19 April 2008, you wrote:
>
KC> That's a pretty amusing blog, actually. See
KC> http://terboven.spaces.live.com/blog/cns!EA3D3C756483FECB!255.entry
KC> (wonderful permalinks, btw).
Read this article to the end:
"Finally, lets take a quick look at some perfo
Hello Kilian,
On Saturday, 19 April 2008, you wrote:
KC> On Wednesday 09 April 2008 11:48:19 am Greg Lindahl wrote:
>> p.s. did anyone see this blog posting claiming that it takes Linux
>> clusters several minutes to start a 2048 core job?
>> http://terboven.spaces.live.com/Blog/cns!EA3D3C756483F
Hello Bogdan,
On Friday, 18 April 2008, you wrote:
BC> Sorry to divert a bit the thread towards its initial subject and away
BC> from the security issues currently discussed...
BC> I've just seen a presentation from a University (which shall remain
BC> unnamed) which partnered with Microsoft
Hello Carsten,
On Tuesday, 18 March 2008, you wrote:
CA> Hi John,
CA> John Leidel wrote:
>> This should be an option in your kernel. Check to see if you have this
>> enabled [assuming you are running some sort of Linux variant]
CA> Sorry, I came up with this question because when compiling 2.
Hello Joe,
On Thursday, 28 February 2008, you wrote:
JL> Have a few simple setups that I am trying to figure out if I have a
JL> problem, or if what I am seeing is normal.
JL> Two 10 GbE cards, connected with a CX4 cable. Server is one of our
JL> JackRabbits, with 750+ MB/s direct IO and
Hello Joe,
On Thursday, 28 February 2008, you wrote:
>> The best I saw for NFS over 10 GbE was about 350-400 MB/s write and about 450
>> MB/s read.
>> Single server to 8 simultaneously accessing clients (aggregated performance).
The clients had a 1 gig uplink...
JL> Hi Jan:
JL> Ok. Thank
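For context on those aggregate numbers (my arithmetic, not from the thread): eight clients on 1 Gbit/s uplinks can together pull at most

\[
8 \times \frac{1\ \text{Gbit/s}}{8\ \text{bit/B}} = 1000\ \text{MB/s theoretical}
\]

(roughly 940 MB/s after TCP overhead), so the 350-450 MB/s observed above was limited by NFS and the server rather than by the client links.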
Hello Herbert,
On Wednesday, 17 October 2007, you wrote:
HF> We are planning to extend an existing cluster (dual-core Opterons) with
HF> some Barcelona nodes. What the salesman tells me is that you can get
HF> 1.9GHz chips now, up to 2.3GHz in December and 2.4/2.6GHz in January.
HF> Does this soun
Hello Henning,
On Friday, 12 October 2007, you wrote:
HF> Hello Greg,
HF> On Thu, Oct 11, 2007 at 03:13:16PM -0700, Greg Lindahl wrote:
>> I'm thinking about using "balance-alb" channel bonding on a
>> medium-to-large Linux cluster; does anyone have experience with this?
>> It seems that it mig
Hello Barnet,
On Monday, 8 October 2007, you wrote:
BW> I'm moving towards setting up a small cluster (my first), and am
BW> thinking about using Intel quad core processors. However, I'm a little
BW> concerned about memory contention. I'm (tentatively) going to have one
BW> processor per node