I recently went to a FLOSS UK meeting.
Heard an excellent talk on MogileFS.
Just the ticket for you, I think!
On 11/13/12 19:00, Bill Broadley wrote:
>
> If you need an object store and not a file system I'd consider hadoop.
Eeek -- .5 MB to 10 MB files are anathema for Hadoop. As much as I
love Hadoop, there's a tool for every job and I'm not sure this one
quite fits for those file sizes. If you had
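A rough back-of-envelope shows why (the ~150 bytes of NameNode heap per
object is a commonly quoted estimate, not a spec, and the 5 MB average is
an assumption from the stated 0.5-10 MB range):

# NameNode heap needed to track 1 PB of small files, roughly.
PB = 10**15
avg_file_size = 5 * 10**6      # assumed average file size, bytes
bytes_per_object = 150         # rough NameNode heap cost per object

files = PB // avg_file_size                   # ~200 million files
objects = files * 2                           # one file + one block object each
heap_gb = objects * bytes_per_object / 10**9
print(f"{files:,} files -> ~{heap_gb:.0f} GB of NameNode heap")

That's on the order of 60 GB of heap just for metadata, before replication
bookkeeping, which is exactly the small-files problem.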
Hello,
I've been asked to look at how we would provide a PB, growing 50%/year, of
storage for objects between 0.5 and 10 MB per file.
It will need some kind of RESTful interface (I'm only just understanding what
this means, but it seems to me mostly "is there an HTTP server in front of it").
Gluster seems to do th
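For what it's worth, "RESTful" here really does come down to objects named
by URL and manipulated with HTTP verbs. A minimal sketch of that interface
(standard library only; flat namespace, no auth or replication -- none of
this is any particular product's API):

# Toy REST-style object store: PUT stores a blob, GET returns it.
from http.server import BaseHTTPRequestHandler, HTTPServer
from pathlib import Path

ROOT = Path("/tmp/objects")            # hypothetical data directory
ROOT.mkdir(exist_ok=True)

def obj_path(url_path):
    # flatten "/bucket/key" into a single safe filename
    return ROOT / url_path.strip("/").replace("/", "_")

class ObjectHandler(BaseHTTPRequestHandler):
    def do_PUT(self):
        body = self.rfile.read(int(self.headers["Content-Length"]))
        obj_path(self.path).write_bytes(body)
        self.send_response(201)
        self.end_headers()

    def do_GET(self):
        p = obj_path(self.path)
        if not p.exists():
            self.send_response(404)
            self.end_headers()
            return
        data = p.read_bytes()
        self.send_response(200)
        self.send_header("Content-Length", str(len(data)))
        self.end_headers()
        self.wfile.write(data)

HTTPServer(("", 8080), ObjectHandler).serve_forever()

With that running, "curl -T report.pdf localhost:8080/docs/report.pdf" stores
an object and "curl localhost:8080/docs/report.pdf" fetches it back.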
From: Joe Landman
That's not the issue with glusterfs. Its distributed metadata architecture is
a double-edged sword: very good for distributed data, very very bad for
metadata-heavy ops.
That and the XFS attributes haven't been slow in years, though some folks like
bringing up the old behavior pre-2.6.26 as
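"Metadata heavy" is the key phrase: something as mundane as "ls -l" stats
every directory entry, and on a distributed-metadata design each stat can
mean network round trips. A quick way to feel the difference (an
illustrative sketch, not a rigorous benchmark -- point it at a local disk
and then at a Gluster mount):

# Time a stat() storm over one directory -- the classic
# metadata-heavy workload that hurts distributed metadata.
import os, sys, time

target = sys.argv[1] if len(sys.argv) > 1 else "."
t0 = time.perf_counter()
n = 0
for entry in os.scandir(target):
    entry.stat()               # one metadata lookup per entry
    n += 1
dt = time.perf_counter() - t0
print(f"stat'd {n} entries in {dt:.3f}s ({n/dt:.0f}/s)")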
I just picked up my copy. Never actually read the previous version (the
yellow book), but I'll read this one cover to cover. I swear.
On Nov 10, 2012 11:45 AM, "Eugen Leitl" wrote:
> ----- Forwarded message from Rolf Rabenseifner -----
>
> From: Rolf Rabenseifner
> Date: Sat, 10 Nov 2012 19:0
On Mon, Nov 12, 2012 at 09:52:48AM -0500, Hearns, John wrote:
>
>> * if a PXE server with Infiniband is impossible, then is it OK with a
>> gigabit connection? Or should we go for 16 disks for these 16 clients
>
>booting is rare and almost trivial, IO-wise. there's no reason you
>shouldn't be able to boot several hundred clients over Gb.
> * if a PXE server with Infiniband is impossible, then is it OK with a
> gigabit connection? Or should we go for 16 disks for these 16 clients
booting is rare and almost trivial, IO-wise. there's no reason you
shouldn't be able to boot several hundred clients over Gb.
True.
I never implemen
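The arithmetic backs that up. With some assumed numbers (a ~200 MB
compressed boot image per node and realistic GbE payload throughput --
both guesses, adjust for your image):

# Booting hundreds of clients over one GbE uplink, roughly.
image_mb = 200          # assumed kernel+initrd+image size per node
clients = 300
link_mb_s = 110         # ~practical GbE payload throughput, MB/s

total_mb = image_mb * clients
minutes = total_mb / link_mb_s / 60
print(f"{total_mb/1000:.0f} GB total, ~{minutes:.0f} min serialized "
      f"over one GbE link")

Ten minutes or so for 300 nodes, fully serialized -- and in practice the
transfers overlap, so boot really is almost trivial, IO-wise.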
On 13/11/12 10:04, Jonathan Barber wrote:
> Gluster makes extensive use of extended attributes, and XFS extended
> attribute performance in kernel versions < 2.6.39 is very poor. [1]
> This makes XFS a poor choice in environments where files are smaller
> than ~500MB
that's interesting... do you have
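Xattr cost is easy to measure for yourself. A small sketch (Linux-only;
run it with the target directory on an XFS mount to see the per-file
cost Gluster pays for its metadata):

# Micro-benchmark: set + get one extended attribute on many files.
import os, sys, tempfile, time

base = sys.argv[1] if len(sys.argv) > 1 else None
with tempfile.TemporaryDirectory(dir=base) as d:
    paths = []
    for i in range(1000):
        p = os.path.join(d, f"f{i}")
        open(p, "w").close()
        paths.append(p)

    t0 = time.perf_counter()
    for p in paths:
        os.setxattr(p, "user.test", b"gfid-placeholder")
        os.getxattr(p, "user.test")
    dt = time.perf_counter() - t0
    print(f"{len(paths)} set+get xattr pairs in {dt*1000:.1f} ms")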
Tom's Hardware has a report on the Intel presentation:
http://www.tomshardware.com/reviews/xeon-phi-larrabee-stampede-hpc,3342.html
On Nov 12, 2012, at 12:41 PM, Hearns, John wrote:
> My questions are:
>
> * anybody using the same (or similar) main boards who was able to
> boot using PXE?
I have Supermicro X7DWE motherboards here.
PXE works, yet the BIOS of the motherboard doesn't allow Mellanox QDR
cards to be booted
Interesting article.
Regrettably the writer is a technical noob, which is clear from the German
he writes.
He confuses MB with GB, so it's not clear how accurate what he writes is.
Well, what can you expect from Heise.de in that sense...
Let's assume that the majority of what he wrote down is OK.
Th
My questions are:
* anybody using the same (or similar) main boards who was able to
boot using PXE?
* if a PXE server with Infiniband is impossible, then is it OK with a
gigabit connection? Or should we go for 16 disks for these 16 clients
and not care much about booting over IB or IP? (more mon
http://www.heise.de/newsticker/meldung/SC12-Intel-bringt-Coprozessor-Xeon-Phi-offiziell-heraus-1747942.html
http://translate.google.com/translate?sl=auto&tl=en&js=n&prev=_t&hl=en&ie=UTF-8&layout=2&eotf=1&u=http%3A%2F%2Fwww.heise.de%2Fnewsticker%2Fmeldung%2FSC12-Intel-bringt-Coprozessor-Xeon-Phi-o
The RAID software (MegaRAID) also states that new disks can be added on
the fly, but I have no idea if the new disks can also be formatted and
made ready together with the existing storage. Everything is still very new
to me.
That is at a lower level than having storage formatted and available to
My honest advice to you is not to do any of this.
There are lots of reliable, knowledgeable companies out there who will only
be too willing to partner with you and construct a cluster, plus expandable
storage, for you.
I suggest that you start looking on various sites, e.g. Clustermonkey and Hpcwi
NVIDIA Tesla K20 family reintroduced as world’s most powerful GPU --
http://www.slashgear.com/nvidia-tesla-k20-family-reintroduced-as-worlds-most-powerful-gpu-12256504/
Titan steals No. 1 spot on Top500 supercomputer list --
http://news.cnet.com/8301-11386_3-57547952-76/titan-steals-no-1-spot-on-t
On 12 November 2012 11:26, Duke Nguyen wrote:
> On 11/12/12 4:42 PM, Tim Cutts wrote:
>> On 12 Nov 2012, at 03:50, Duke Nguyen wrote:
>>
>>> On 11/9/12 7:26 PM, Bogdan Costescu wrote:
On Fri, Nov 9, 2012 at 7:19 AM, Christopher Samuel
wrote:
> So JBODs with LVM on top and XFS on top
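For anyone following along, that stack is only a few commands. A sketch
with hypothetical device names (destructive -- illustration only):

# JBOD -> LVM -> XFS, scripted. Device names are assumptions.
import subprocess

disks = ["/dev/sdb", "/dev/sdc", "/dev/sdd"]     # the JBOD members

def run(*cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

for d in disks:
    run("pvcreate", d)                   # mark each disk for LVM
run("vgcreate", "vg_data", *disks)       # one volume group across all
run("lvcreate", "-l", "100%FREE", "-n", "lv_data", "vg_data")
run("mkfs.xfs", "/dev/vg_data/lv_data")  # XFS on top
run("mount", "/dev/vg_data/lv_data", "/data")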
Hi Duke!
On 13.11.12 04:45, Duke Nguyen wrote:
> Besides the fact that this
> solution is cheaper than putting disks into the nodes, I prefer this option
> so that we do not have to manually configure and install each node (with
> disks inside, or a USB stick, or an external CD drive).
I usually use the PXE
How are you planning to boot your nodes?
I have used perceus (http://www.perceus.org/) and was happy with it.
There is also Warewulf (http://warewulf.lbl.gov or
http://hpc.admin-magazine.com/Articles/Warewulf-Cluster-Manager-Master-and-Compute-Nodes)
which I haven't used.
Anyone who has compa
Thanks for all the suggestions and comments. Looks like we will go for a
gigabit switch and boot nodes over Gb. Besides the fact that this
solution is cheaper than putting disks into the nodes, I prefer this option
so that we do not have to manually configure and install each node (with
disks inside or
Hi Duke!
On 12.11.12 12:33, Duke Nguyen wrote:
> Unfortunately after reading, I found out that our built-in card is too old
> for Mellanox FlexBoot
"PXE Boot" is a nice container (buzzword? :-) for a hand full of simple
steps. First the PXE Boot ROM asks for a DHCP address, second it will
load PXEl
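The payoff of those steps is that per-node configuration becomes a file
you generate rather than an install you perform. A sketch, assuming the
stock pxelinux layout (every path, MAC, and kernel name here is a
placeholder):

# Generate per-node pxelinux.cfg files from a MAC list.
from pathlib import Path

CFG_DIR = Path("/srv/tftp/pxelinux.cfg")       # hypothetical TFTP root
TEMPLATE = """default linux
label linux
  kernel vmlinuz
  append initrd=initrd.img root=/dev/nfs nfsroot=10.0.0.1:/srv/node ip=dhcp
"""

macs = ["00:25:90:ab:cd:01", "00:25:90:ab:cd:02"]    # example node MACs

CFG_DIR.mkdir(parents=True, exist_ok=True)
for mac in macs:
    # pxelinux looks for "01-" + MAC with dashes, then falls back to "default"
    (CFG_DIR / ("01-" + mac.replace(":", "-"))).write_text(TEMPLATE)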