Re: [Beowulf] Small files

2014-06-13 Thread Kilian Cavalotti
On Fri, Jun 13, 2014 at 11:23 AM, Brian Dobbins wrote: > Just a quick addition to this thread (for now), here's the presentation I > think Kilian refers to: > http://www.opensfs.org/wp-content/uploads/2014/04/D2_S16_PLFSandLustrePerformanceComparison.pdf Damn, I was sure I wrote that, the link i

Re: [Beowulf] Small files

2014-06-13 Thread Lux, Jim (337C)
On 6/13/14, 7:03 AM, "Ellis H. Wilson III" wrote: >On 06/13/2014 09:31 AM, Joe Landman wrote: >> On 06/13/2014 09:17 AM, Skylar Thompson wrote: >>> We've recently implemented a quota of 1 million files per 1TB of >>> filesystem space. And yes, we had to clean up a number of groups' and >>> indi

Re: [Beowulf] HPC with CUDA

2014-06-13 Thread Adam DeConinck
-BEGIN PGP SIGNED MESSAGE- Hash: SHA512 On Fri, Jun 13, 2014 at 10:34:05PM +0700, "C. Bergström" wrote: > On 06/13/14 09:16 PM, Greg Keller wrote: > >Supermicro has a board that provides Eight x16 slots, but I understand > >it's wired so that 2 slots are effectively sharing 16 lanes. Let
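For boards like that, the quickest sanity check is to read back the negotiated PCIe link width per GPU. A minimal sketch, assuming nvidia-smi is on the PATH and the driver supports the pcie.link.* query fields (the script name and layout are illustrative, not from the thread):

# pcie_width_check.py -- print negotiated vs. maximum PCIe link width per GPU,
# so a slot that is quietly sharing lanes shows up as e.g. x8 of x16.
import subprocess

def pcie_report():
    fields = "index,name,pcie.link.width.current,pcie.link.width.max"
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=" + fields, "--format=csv,noheader"],
        universal_newlines=True)
    for line in out.strip().splitlines():
        idx, name, w_cur, w_max = [f.strip() for f in line.split(",")]
        flag = "" if w_cur == w_max else "   <-- below max width"
        print("GPU %s (%s): x%s of x%s%s" % (idx, name, w_cur, w_max, flag))

if __name__ == "__main__":
    pcie_report()

If the slots are bifurcated rather than switched, the shared pair will typically train at x8; behind a PCIe switch each GPU can still report x16, so this only catches the former case.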

Re: [Beowulf] Small files

2014-06-13 Thread Lux, Jim (337C)
On 6/13/14, 6:17 AM, "Skylar Thompson" wrote: >We've recently implemented a quota of 1 million files per 1TB of >filesystem space. So you're penalizing people with files smaller than 1 Mbyte? >And yes, we had to clean up a number of groups' and >individuals' spaces before implementing that
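The arithmetic behind Jim's quip: 1 TB spread over 1 million inodes is an average of about 1 MB per file, so any workload with a smaller mean file size runs out of inodes before it runs out of space. A toy sketch of that trade-off (the function and the example workloads are illustrative, not from the thread):

# Which limit bites first under a "1 million files per 1 TB" style quota?
def limiting_factor(total_bytes, total_files,
                    bytes_per_unit=10**12, files_per_unit=10**6):
    space_units = float(total_bytes) / bytes_per_unit
    inode_units = float(total_files) / files_per_unit
    return "inodes" if inode_units > space_units else "space"

# 2 million 100 KB files: only ~200 GB of data, but double the inode allowance.
print(limiting_factor(2 * 10**6 * 10**5, 2 * 10**6))   # -> inodes
# 500 files of 10 GB each: 5 TB of data, a handful of inodes.
print(limiting_factor(500 * 10**10, 500))              # -> space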

Re: [Beowulf] Small files

2014-06-13 Thread Brian Dobbins
Just a quick addition to this thread (for now), here's the presentation I think Kilian refers to: http://www.opensfs.org/wp-content/uploads/2014/04/D2_S16_PLFSandLustrePerformanceComparison.pdf And here's *all* the slides from LUG 2014: http://opensfs.org/lug-2014-presos/ Some of them are incred

Re: [Beowulf] Small files

2014-06-13 Thread Kilian Cavalotti
On Fri, Jun 13, 2014 at 7:03 AM, Ellis H. Wilson III wrote: > a) Fix it transparently with automatic policies/FS's in the back-end. (I > know of at least one FS that packs small files with metadata transparently > on SSDs to expedite small file IOPS, but message me off-list for that as I > start w

Re: [Beowulf] HPC with CUDA

2014-06-13 Thread C. Bergström
On 06/13/14 09:16 PM, Greg Keller wrote: From: "Raphael Verdugo P." <raphael.verd...@gmail.com> To: beowulf@beowulf.org Subject: [Beowulf] HPC with CUDA I need to install 5 GPUs (GeForce GTX 780s) in one server and 1 Tesla Kepler K40 in another.

Re: [Beowulf] HPC with CUDA

2014-06-13 Thread Greg Keller
> From: "Raphael Verdugo P." > To: beowulf@beowulf.org > Subject: [Beowulf] HPC with CUDA > > I need to install 5 GPUs (GeForce GTX 780s) in one server and 1 Tesla > Kepler K40 in another. > > Do you have any recommendation for a server, HP or Dell? Processor? RAM? > We have considered the easily

Re: [Beowulf] Small files

2014-06-13 Thread Ellis H. Wilson III
On 06/13/2014 09:31 AM, Joe Landman wrote: On 06/13/2014 09:17 AM, Skylar Thompson wrote: We've recently implemented a quota of 1 million files per 1TB of filesystem space. And yes, we had to clean up a number of groups' and individuals' spaces before implementing that. There seems to be a trend

Re: [Beowulf] Small files

2014-06-13 Thread Joe Landman
On 06/13/2014 09:17 AM, Skylar Thompson wrote: We've recently implemented a quota of 1 million files per 1TB of filesystem space. And yes, we had to clean up a number of groups' and individuals' spaces before implementing that. There seems to be a trend in the bioinformatics community for using t

Re: [Beowulf] Small files

2014-06-13 Thread Skylar Thompson
We've recently implemented a quota of 1 million files per 1TB of filesystem space. And yes, we had to clean up a number of groups' and individuals' spaces before implementing that. There seems to be a trend in the bioinformatics community for using the filesystem as a database. I think it's enabled
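For the cleanup step, a per-directory survey of file count and mean file size is usually enough to find the trees that would blow such a quota. A minimal sketch, assuming plain POSIX directory walking (no quota tooling involved; the script name is mine):

# smallfile_survey.py -- per-top-level-directory file count and mean file size,
# to spot small-file-heavy trees before an inode quota lands on them.
import os
import sys

def survey(root):
    for entry in sorted(os.listdir(root)):
        top = os.path.join(root, entry)
        if not os.path.isdir(top):
            continue
        count, total = 0, 0
        for dirpath, _dirnames, filenames in os.walk(top):
            for name in filenames:
                try:
                    total += os.lstat(os.path.join(dirpath, name)).st_size
                    count += 1
                except OSError:
                    pass  # vanished or unreadable; skip it
        avg_kb = (float(total) / count / 1024) if count else 0.0
        print("%-30s %10d files  avg %10.1f KB" % (entry, count, avg_kb))

if __name__ == "__main__":
    survey(sys.argv[1] if len(sys.argv) > 1 else ".")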

Re: [Beowulf] Small files

2014-06-13 Thread Guy Coates
Hi Tom, > I want to ask this general question: how does your shop deal with the > general problem of > small files in filesystems on (beowulf) compute clusters? We have this workload in spades. As others have mentioned, good user education is the key. We use inode quotas on Lustre (typically 15
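For monitoring once the quotas are in place, something as small as wrapping lfs quota does the job. A sketch, assuming a Lustre client with the lfs tool in PATH; the output format differs between Lustre versions, so it is printed verbatim rather than parsed:

# lfs_quota_report.py -- dump block and inode quota usage for a list of users.
import subprocess
import sys

def show_quotas(mountpoint, users):
    for user in users:
        print("=== %s ===" % user)
        # 'lfs quota -u USER MOUNTPOINT' reports block and inode usage/limits.
        subprocess.call(["lfs", "quota", "-u", user, mountpoint])

if __name__ == "__main__":
    # e.g.  python lfs_quota_report.py /lustre alice bob
    show_quotas(sys.argv[1], sys.argv[2:])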