Hi David,
Can you give us more information about what you are doing? I'm getting
curious about what problem you are working with that requires these
conditions.
Steve
> We have not discussed election results (votes per candidate), but those
> are, ironically, really unsuitable for this, even
>> came up with, we hope, information that will prove to HP
>> that the card is under warranty. It was ordered through Synnex and the
>> order numbers are 24927045 and 26715052. They shipped from Synnex on
>> March 20, 2008.
>>
>> Part# 414129-B21 (they ordered a qty of 3)
>>
On Tue, 30 Mar 2010, Eric W. Biederman wrote:
Steve Cousins writes:
I have a couple of 10 GbE cards from HP (NC510F, NetXen/QLogic) and I have been
trying to get them to work in non-HP Linux systems. After failing to
compile/install the nx_nic drivers on Fedora 9 and 12 (it checks
equipment? Any tips on getting an RMA
for this? I'm thinking of trying to track down an HP server on campus to
work with just to get the RMA but I doubt if there are too many of these
that aren't in use and still under warranty.
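For reference, I assume the check in question comes down to the PCI
subsystem vendor ID. A minimal sketch in Python that lists which devices
report HP's vendor ID (0x103c) in their subsystem field; the sysfs layout
is standard, but what the driver actually looks at is my assumption:

    #!/usr/bin/env python
    # Minimal sketch: list PCI devices whose subsystem vendor ID is HP's
    # (0x103c). Presumably this is roughly what the nx_nic build check
    # looks at -- that part is an assumption on my part.
    import glob, os

    HP_SUBSYS_VENDOR = 0x103c   # HP's PCI vendor ID

    for dev in glob.glob('/sys/bus/pci/devices/*'):
        try:
            with open(os.path.join(dev, 'subsystem_vendor')) as f:
                vendor = int(f.read().strip(), 16)
        except (IOError, ValueError):
            continue
        if vendor == HP_SUBSYS_VENDOR:
            print('%s reports an HP subsystem vendor ID' % os.path.basename(dev))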
Thanks,
Steve
___
Gus Correa wrote:
Dear Beowulfers
Did anybody ever get Gigabit Ethernet NICs to work on
the Tyan Tiger S2466-4M motherboards under Linux?
Hi Gus,
I have an S2466 Tiger MPX board that has been running for years with a
copper Intel Pro/1000 MT NIC (82540EM) without trouble. The onboard 3Com
On Mon, 28 Sep 2009, Rahul Nabar wrote:
On Fri, Sep 25, 2009 at 5:47 PM, Steve Cousins wrote:
One thing to try with bonnie++ is to run multiple instances at the same
time. For our tests, a single instance of bonnie showed 560 MB/sec writes
and 524 MB/sec reads. Going to 4 instances at the
Hi Rahul,
I went through a fair amount of work with this sort of thing (specifying
performance and then getting the vendor to bring it up to expectations
when performance didn't come close) and I was happiest with Bonnie++ in
terms of simplicity of use and the range of stats you get.
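A minimal sketch of driving the parallel instances, assuming bonnie++ is on
the PATH and that the per-instance scratch directories live on the
filesystem under test (both my assumptions):

    #!/usr/bin/env python
    # Minimal sketch: run N bonnie++ instances in parallel and wait for
    # all of them. The /scratch path and the 8 GB file size are
    # placeholders -- the size should exceed RAM to beat the page cache.
    import os, subprocess, sys

    N = int(sys.argv[1]) if len(sys.argv) > 1 else 4
    procs = []
    for i in range(N):
        d = '/scratch/bonnie.%d' % i
        if not os.path.isdir(d):
            os.makedirs(d)
        # -d: test directory, -s: total file size in MB
        procs.append(subprocess.Popen(['bonnie++', '-d', d, '-s', '8192']))
    for p in procs:
        p.wait()   # the aggregate rate is the sum across instances

If several runs together show much more throughput than one run alone, the
single stream, not the array, was the limit.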
I haven't seen anybody here talking about the 6-core AMD CPUs yet. Is
anybody trying these out? Anybody have real-world comparisons (say, WRF) of
scalability of a 12-core system vs. a 16-thread Nehalem system?
Thanks,
Steve
___
- "Chris Samuel" wrote:
The compute nodes are SuperMicro H8DM8-2 based, with 32GB of ECC RAM.
Hi Chris,
I had MCE crashes on a Supermicro system (quad Xeon quad-core 2.4 GHz)
that was driving me nuts for quite a while. It would take a couple of
months to crash, which doesn't sound
David Mathog wrote:
Do you really need a UPS for the whole cluster?
In many instances it is good enough to put a UPS on the master node and
just use surge suppressors on the compute nodes. The up side being that
only a small and relatively inexpensive UPS is required. The down side
being of
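For the master-node-only approach, the usual trick is to have the master
watch the UPS and shut itself down cleanly when the power goes out. A
minimal sketch using NUT's upsc, assuming NUT is installed and a UPS named
"ups" is configured locally (my assumptions; in practice upsmon does this
job for you):

    #!/usr/bin/env python
    # Minimal sketch: poll a NUT-managed UPS from the master node and
    # start a clean shutdown when it goes on battery.
    import subprocess, time

    while True:
        p = subprocess.Popen(['upsc', 'ups@localhost', 'ups.status'],
                             stdout=subprocess.PIPE)
        status = p.communicate()[0].decode()
        if 'OB' in status:                           # OB = on battery
            subprocess.call(['shutdown', '-h', '+5'])  # 5 minutes of grace
            break
        time.sleep(30)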
On Tue, 27 Jan 2009 10:02:22 -0600, Gerry Creager wrote:
I've had some first-hand experience with 1.5TB drives. I'll not go down
that path again for some time. Seagate hasn't yet figured out how to
make them reliable.
Hi Gerry,
What problems did you have? I have 16 of these in an array (Areca
From: "Jon Aquilina"
This is slightly off topic, but I'm just wondering: why spend thousands of
dollars when you can just set up another server and back up everything to a
RAID hard drive array?
Another RAID system helps, but only if it is located somewhere else. The
main reason we back up is for
On Wed, 2 Jul 2008, David Mathog wrote:
Steve Cousins <[EMAIL PROTECTED]> wrote:
Do different LTO-3 drives have different maximum tape write speeds?
I don't know. I've always heard 80 MB/sec. lto.org shows:
http://www.lto.org/technology/ugen.php?section=0&subsec=ug
On Wed, 2 Jul 2008, David Mathog wrote:
Rats.
I wonder what the difference is now? If you don't already have it,
please grab a copy of Exabyte's ltoTool from here:
http://www.exabyte.com/support/online/downloads/downloads.cfm?did=1344&prod_id=581
% /usr/local/src/ltotool/ltoTool -C 1 /dev/n
) copied, 69.2487 seconds, 77.5 MB/s
I used a 512K block size because that is what I use with our backups and
it has given optimal performance since the DLT-7000 days.
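The timing run itself is nothing fancy; a minimal sketch of the equivalent
in Python, assuming a no-rewind tape device at /dev/nst0 (the device name
is my assumption, adjust for your drive):

    #!/usr/bin/env python
    # Minimal sketch: time streaming writes to tape with a 512 KiB block
    # size, the equivalent of: dd if=/dev/zero of=/dev/nst0 bs=512k count=10000
    import os, time

    BS = 512 * 1024     # 512 KiB blocks
    COUNT = 10000       # ~5.2 GB total
    buf = b'\0' * BS

    fd = os.open('/dev/nst0', os.O_WRONLY)
    t0 = time.time()
    for _ in range(COUNT):
        os.write(fd, buf)
    os.close(fd)
    dt = time.time() - t0
    print('%.1f MB/s' % (BS * COUNT / dt / 1e6))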
Good luck,
Steve
__
Steve Cousins, Ocean Modeling Group
On Tue, 27 May 2008, Mark Hahn wrote:
Does anyone have any experience with SiCortex machines? Any thoughts? They
look cool and they don't use much power but I wonder how they compare to
blade type systems.
eh? blade systems are just tweaks for packaging and cable-management.
I don't believe
with a customized version of Vis5D.
Steve
______
Steve Cousins, Ocean Modeling Group    Email: [EMAIL PROTECTED]
Marine Sciences, 452 Aubert Hall       http://rocky.umeoce.maine.edu
Univ. of Maine, Orono, ME 04469        Phone: (207) 581-4302
Thanks,
Steve
--
__
Steve Cousins, Ocean Modeling Group    Email: [EMAIL PROTECTED]
Marine Sciences, 452 Aubert Hall       http://rocky.umeoce.maine.edu
Univ. of Maine, Orono, ME 04469        Phone: (207) 581-4302
Thanks, Bill. This is really helpful.
On Wed, 13 Dec 2006, Bill Broadley wrote:
What do you expect the I/O's to look like? Large file read/writes? Zillions
of small reads/writes? To one file or directory or maybe to a file or
directory per compute node?
We are basing our specs on large file read/writes.
Bill Broadley wrote:
Sorry all, my math was a bit off (thanks Mike and others).
To be clearer:
* $24k or so per pair of servers and a pair of 15*750GB arrays
* The pair has 16.4TB usable (without the normal 5% reserved by the
filesystem)
* The pair has 20.5TB raw (counting disks used for spares)
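Those figures are consistent with RAID-6 plus one hot spare per 15-drive
shelf, with capacities reported in binary TB (TiB) -- both assumptions on
my part, but the arithmetic lines up:

    # Sanity check of the capacity figures, assuming RAID-6 + 1 hot spare
    # per 15-drive shelf and binary (TiB) reporting -- my assumptions.
    DRIVE = 750e9            # 750 GB drive, decimal bytes
    TIB = 2.0**40

    raw  = 2 * 15 * DRIVE              # two 15-drive shelves
    data = 2 * (15 - 2 - 1) * DRIVE    # minus 2 parity + 1 spare per shelf
    print('raw:    %.1f TiB' % (raw / TIB))    # -> 20.5
    print('usable: %.1f TiB' % (data / TIB))   # -> 16.4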
Thanks,
Steve
__
Steve Cousins, Ocean Modeling Group    Email: [EMAIL PROTECTED]
Marine Sciences, 452 Aubert Hall       http://rocky.umeoce.maine.edu
Univ. of Maine, Orono, ME 04469        Phone: (207) 581-4302
___
them. We
have two with 400GB Hitachi drives and one with 750GB Seagate drives.
Partners Data is very easy to work with and they have very good prices.
Steve
--
______
Steve Cousins, Ocean Modeling Group    Email: [EMAIL PROTECTED]
Marine Sciences, 452 Aubert Hall       http://rocky.umeoce.maine.edu
Univ. of Maine, Orono, ME 04469        Phone: (207) 581-4302
(of course).
Good luck,
Steve
______
Steve Cousins, Ocean Modeling Group    Email: [EMAIL PROTECTED]
Marine Sciences, 452 Aubert Hall       http://rocky.umeoce.maine.edu
Univ. of Maine, Orono, ME 04469        Phone: (207) 581-4302