Lest there be any confusion or doubt, the BASH is located at:
Clark Planetarium, 110 South 400 West
Two blocks from the convention center
Time: 9PM
See you there!
BTW R-HPC is sponsoring the Hitchhiker towels. They are one of the funniest
trade show giveaways I have ever seen. They are bound to be
On 11/13/12 6:11 AM, Vincent Diepeveen wrote:
> hi Duke,
>
> What do you need a parallel file system for if you have just 1 fileserver
> in total?
Vincent, our purpose is to build a cluster and data center which are
flexible and expandable. Based on my own experience in genome research,
we will soo
On 11/12/12 6:53 PM, Hearns, John wrote:
> Hi folks,
>
> We are still in the process of building our first small cluster. Right now
> we have 16 diskless nodes with 8GB RAM on X7DWT-INF and a master also on
>
>
> Also, if you are rolling your own cluster, learn how to configure the IPMI
> cards on the motherboards
hi Duke,
What do you need a parallel file system for if you have just 1 fileserver
in total?
On Nov 12, 2012, at 12:26 PM, Duke Nguyen wrote:
> On 11/12/12 4:42 PM, Tim Cutts wrote:
>> On 12 Nov 2012, at 03:50, Duke Nguyen wrote:
>>
>>> On 11/9/12 7:26 PM, Bogdan Costescu wrote:
On 11/12/12 4:42 PM, Tim Cutts wrote:
> On 12 Nov 2012, at 03:50, Duke Nguyen wrote:
>
>> On 11/9/12 7:26 PM, Bogdan Costescu wrote:
>>> On Fri, Nov 9, 2012 at 7:19 AM, Christopher Samuel
>>> wrote:
So JBODs with LVM on top and XFS on top of that could be resized on
the fly. You can do the same with ext[34] as well (from memory).
On 11/12/12 7:05 PM, Vincent Diepeveen wrote:
> You can boot over PXE using normal network cables and boot over the
> gigabit ethernet using a cheap ethernet
> router.
Thanks Vincent. Yes, we are going to try that with a gigabit hub. Just
wondering if anyone has other experience, for example with PXE boot on this
Hi folks,
We are still in the process of building our first small cluster. Right now
we have 16 diskless nodes with 8GB RAM on X7DWT-INF and a master also on
Also, if you are rolling your own cluster, learn how to configure the IPMI
cards on the motherboards (also referred to as BMCs).
You use IPMI/BMC
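To make that suggestion concrete, here is a rough sketch of BMC setup with the common `ipmitool` utility (the channel number, the 192.168.1.101 address, and the ADMIN credentials are illustrative assumptions, not anything from the thread):

```shell
# Load the IPMI kernel drivers so ipmitool can reach the local BMC
modprobe ipmi_devintf
modprobe ipmi_si

# Give the BMC a static address on the management network
# (channel 1 is typical but board-dependent)
ipmitool lan set 1 ipsrc static
ipmitool lan set 1 ipaddr 192.168.1.101
ipmitool lan set 1 netmask 255.255.255.0
ipmitool lan print 1   # verify the settings

# From the head node, power-cycle a compute node remotely
ipmitool -I lanplus -H 192.168.1.101 -U ADMIN -P ADMIN chassis power cycle
```

Once every node's BMC is reachable this way, you can reset hung nodes and watch sensor readings without ever walking to the rack.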
I stumbled upon this:
http://www.amd.com/us/products/workstation/graphics/firepro-remote-graphics/S1/Pages/S1.aspx#2
1.5 Tflop double precision card.
It says it has ECC, yet in the past few weeks AMD has started announcing
a lot of things.
Is it because they're nearly bankrupt (chapter 11) w
OR
put a USB thumb drive in each node. It would be somewhat simpler to set
up :)
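If you go that route, writing a prepared boot image to each stick is nearly a one-liner (the image file name `node-boot.img` and the device `/dev/sdb` are illustrative; verify the device name first, since dd overwrites it):

```shell
# Check which device the thumb drive is before writing anything
lsblk

# Write the boot image to the stick (destroys its current contents)
dd if=node-boot.img of=/dev/sdb bs=4M conv=fsync status=progress
```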
2012/11/12 Andrew Holway
>
> 2012/11/12 Vincent Diepeveen
>
>> Problem is not the infiniband NICs.
>
> Yes it is. You have to flash the device firmware in order for the BIOS
> to recognize it as a bootable device.
You can boot over PXE using normal network cables and boot over the
gigabit ethernet using a cheap ethernet
router.
On Nov 12, 2012, at 12:33 PM, Duke Nguyen wrote:
> Hi folks,
>
> We are still on the way of building our first small-cluster. Right now
> we have 16 diskless nodes 8GB RAM on X7DW
2012/11/12 Vincent Diepeveen
> Problem is not the infiniband NICs.
Yes it is. You have to flash the device firmware in order for the BIOS
to recognize it as a bootable device.
Yes, you're a bit screwed for booting over IB, but booting over 1GE is
perfectly acceptable. 1GE bit rate is not 1000 mil
Hi Duke
I cannot answer the first question, but I'll try to give some input on
the second one.
I would have thought you can boot via the gigabit network and do the NFS mount
via the IB network? You can run TCP over IB, so the existing infrastructure
should work IMHO. My idea is to mount say /usr v
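A sketch of that idea with IP-over-InfiniBand (the interface name ib0 and the 192.168.2.x addresses are illustrative; the master is assumed to export /export/usr over NFS):

```shell
# On each node: bring up IP-over-InfiniBand on the IB port
modprobe ib_ipoib
ip addr add 192.168.2.11/24 dev ib0
ip link set ib0 up

# Mount the shared tree from the master over the IB network (NFS over TCP)
mount -t nfs -o vers=3,proto=tcp 192.168.2.1:/export/usr /usr
```

The node still PXE-boots over gigabit; only the bulk NFS traffic moves to the faster IB fabric.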
> * if a PXE server with Infiniband is impossible, then is it OK with a
> gigabit connection? Or should we go for 16 disks for these 16 clients
booting is rare and almost trivial, IO-wise. there's no reason you
shouldn't be able to boot several hundred clients over Gb.
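For scale: the entire DHCP/TFTP side of PXE can be served by a single dnsmasq instance; a minimal config might look like the following (the interface name, address range, and paths are illustrative, and `pxelinux.0` comes from the syslinux package):

```ini
# /etc/dnsmasq.conf -- minimal DHCP + TFTP for PXE-booting diskless nodes
interface=eth1
dhcp-range=192.168.1.100,192.168.1.200,12h
dhcp-boot=pxelinux.0
enable-tftp
tftp-root=/srv/tftpboot
```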
Maybe there is a simpler thing going on here.
Have you looked at the Supermicro website to see whether your motherboard
can boot over infiniband?
If so, you might want to check the BIOS version of the motherboards and
flash them to a newer version that does support booting infiniband over PXE.
Probably nee
The problem is not the infiniband NICs. You need a motherboard with a
BIOS fixed to boot infiniband over PXE, that's all.
So you need to contact Supermicro and ask them to fix the BIOS if it
doesn't boot them, if they would be willing to do that.
On Nov 12, 2012, at 1:56 PM, Duke Nguyen wrote:
Hi folks,
We are still in the process of building our first small cluster. Right now
we have 16 diskless nodes with 8GB RAM on X7DWT-INF and a master, also on
X7DWT-INF, with a 120GB disk and 16GB RAM. These boards (X7DWT-INF) have a
built-in Infiniband card (Infiniband MT25204 20Gbps Controller), and I
hoped
On 12 Nov 2012, at 03:50, Duke Nguyen wrote:
> On 11/9/12 7:26 PM, Bogdan Costescu wrote:
>> On Fri, Nov 9, 2012 at 7:19 AM, Christopher Samuel
>> wrote:
>>> So JBODs with LVM on top and XFS on top of that could be resized on
>>> the fly. You can do the same with ext[34] as well (from memory).
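The grow-on-the-fly sequence being described there is short (the volume group vg0, logical volume data, and mount point /data are illustrative names):

```shell
# Grow the logical volume by 100G (the VG must have free extents)
lvextend -L +100G /dev/vg0/data

# XFS is grown online, addressed by its mount point
xfs_growfs /data

# ext3/ext4 can likewise be grown while mounted:
# resize2fs /dev/vg0/data
```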
On Fri, Nov 9, 2012 at 7:39 AM, Joe Landman wrote:
> Hmmm ... I thought it was an interpreter throughout. Perl and Perl6 are
> compiled, Ruby is purely interpreted. Java is compiled (in the same way
> Perl is). Python's performance is much closer to Ruby than Java/Perl.
>
This is also incorrect.