Re: [Beowulf] Xi3 computers

2013-04-24 Thread Mark Hahn
> hrrm i wonder where they got the design for the x5a from? > https://www.google.com/search?q=sgi+tezro&tbm=isch I don't think it's that great, but suspect it was an honest "reinvention". I'd also argue that their shape is somewhat constrained by the nature of extruded housings, the symmetric place

[Beowulf] Linux page cache and pdflush

2013-04-24 Thread Max R. Dechantsreiter
Given recent discussion, I thought this might be of general interest: http://www.westnet.com/~gsmith/content/linux-pdflush.htm If anyone knows of a more recent version of the RHEL paper referenced therein: http://people.redhat.com/nhorman/papers/rhel4_vm.pdf I would be very interested.
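For reference, a minimal sketch of how to inspect the writeback tunables that the pdflush article discusses (these /proc/sys/vm entries are standard on Linux, but names and defaults vary by kernel version):

    #!/usr/bin/env python
    # Print the dirty-page writeback tunables covered in the pdflush article.
    # A sketch only; behaviour and defaults differ between kernel versions.
    import os

    TUNABLES = [
        "dirty_background_ratio",    # % of memory dirty before background writeback starts
        "dirty_ratio",               # % of memory dirty before writers are forced to block
        "dirty_expire_centisecs",    # age at which dirty data must be written out
        "dirty_writeback_centisecs", # how often the writeback threads wake up
    ]

    for name in TUNABLES:
        with open(os.path.join("/proc/sys/vm", name)) as f:
            print("%s = %s" % (name, f.read().strip()))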

Re: [Beowulf] Nobody ever got fired for using Hadoop on a cluster

2013-04-24 Thread Joe Landman
On 04/24/2013 04:00 PM, Adam DeConinck wrote: [...] > "However, evidence suggests that the majority of analytics jobs do not > process huge data sets. For example, as we will discuss in more detail > later, at least two analytics production clusters (at Microsoft and > Yahoo) have median job inpu

Re: [Beowulf] Xi3 computers

2013-04-24 Thread Joe Landman
On 04/24/2013 03:35 PM, Prentice Bisbal wrote: > Ha! What's old is new again! Old machines in newer miniatures? No http://www.chrisfenton.com/homebrew-cray-1a/ -- Joseph Landman, Ph.D Founder and CEO Scalable Informatics, Inc. email: land...@scalableinformatics.com web : http://scalablei

[Beowulf] Nobody ever got fired for using Hadoop on a cluster

2013-04-24 Thread Adam DeConinck
http://research.microsoft.com/pubs/163083/hotcbp12%20final.pdf An interesting paper from Microsoft Research on the feasibility of using single large-memory servers as a more cost-effective replacement for Hadoop clusters. Especially since "Big Data"

Re: [Beowulf] Definition of HPC

2013-04-24 Thread Max R. Dechantsreiter
John, > I may be wrong, but it is not that the cached pages are needed by this job or > the last job. > It is that the last job has caused buffer cache to be used, which fills up > memory on one or more NUMA nodes. > When the new job starts it allocates memory - if it finds memory is full on >
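A minimal sketch of how to check whether a previous job's buffer cache is sitting on a particular NUMA node, by reading the per-node meminfo files (assumes the standard /sys/devices/system/node layout):

    #!/usr/bin/env python
    # Report free memory and page-cache pages per NUMA node, to see whether
    # an earlier job's buffer cache has filled one node. A sketch assuming
    # the standard /sys/devices/system/node layout.
    import glob
    import re

    for path in sorted(glob.glob("/sys/devices/system/node/node*/meminfo")):
        node = re.search(r"node(\d+)", path).group(1)
        stats = {}
        with open(path) as f:
            for line in f:
                # lines look like: "Node 0 MemFree:  12345678 kB"
                fields = line.split()
                stats[fields[2].rstrip(":")] = int(fields[3])
        print("node %s: MemFree %d kB, FilePages %d kB"
              % (node, stats.get("MemFree", 0), stats.get("FilePages", 0)))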

Re: [Beowulf] Xi3 computers

2013-04-24 Thread Prentice Bisbal
Ha! What's old is new again! On 04/24/2013 03:06 PM, Sabuj Pattanayek wrote: > hrrm i wonder where they got the design for the x5a from? > > https://www.google.com/search?q=sgi+tezro&tbm=isch > > On Wed, Apr 24, 2013 at 8:47 AM, Prentice Bisbal > wrote: >> Beowulfers, >> >> Have any of you seen t

Re: [Beowulf] Xi3 computers

2013-04-24 Thread Sabuj Pattanayek
hrrm i wonder where they got the design for the x5a from? https://www.google.com/search?q=sgi+tezro&tbm=isch On Wed, Apr 24, 2013 at 8:47 AM, Prentice Bisbal wrote: > Beowulfers, > > Have any of you seen these computers from Xi3 yet? I've been getting > e-mails from them for some time now. I'm a

Re: [Beowulf] Xi3 computers

2013-04-24 Thread Alex Chekholko
On Wed, Apr 24, 2013 at 6:47 AM, Prentice Bisbal wrote: > Thoughts? Discussion? Like any other SFF design, you're trading off some things for size. E.g. you can't use a high-power CPU because you can't dissipate that much heat because your heatsink is really small. You can't get big storage bec

Re: [Beowulf] physical memory

2013-04-24 Thread Greg Lindahl
On Wed, Apr 24, 2013 at 10:30:14AM -0400, Lawrence Stewart wrote: > Does linux recombine physical memory into contiguous regions? See: https://lwn.net/Articles/368869/ We find that it's awfully expensive when it's on with our search engine/nosql workload. In an HPC setting, you could explicitly
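For anyone wanting to check whether transparent hugepages are active on their nodes, a minimal sketch (the sysfs path below is the mainline one; some distribution kernels of that era use redhat_transparent_hugepage instead):

    #!/usr/bin/env python
    # Check, and optionally disable, transparent hugepages. A sketch; changing
    # the setting requires root, and the path is the mainline location --
    # some distro kernels use /sys/kernel/mm/redhat_transparent_hugepage.
    THP = "/sys/kernel/mm/transparent_hugepage/enabled"

    with open(THP) as f:
        # output looks like: "[always] madvise never"
        print("current setting: " + f.read().strip())

    # To turn it off, uncomment (as root):
    # with open(THP, "w") as f:
    #     f.write("never")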

Re: [Beowulf] physical memory

2013-04-24 Thread Mark Hahn
> Does linux recombine physical memory into contiguous regions? > > My impression has been "no". Somewhere down in the guts of the kernel there > is [...] viewed through the mosaic of user VM, one normally can't tell. but hugepage support is the only place I can easily imagine noticing. (and in variou

[Beowulf] A belated Happy Birthday

2013-04-24 Thread Hearns, John
http://www.theinquirer.net/inquirer/feature/2262881/amd-brought-64bit-to-x86-10-years-ago-today AMD Opteron ten years old. Now - on the count of 3 we are all going to sing Happy Birthday. All of us. Cache wars will be suspended for a Christmas truce. 0..1..2..3 Happy Birthday to you! Happy Birt

Re: [Beowulf] Definition of HPC

2013-04-24 Thread Hearns, John
> nodes. > Do you run a scheduler? Any user ought to be able to specify exclusivity. Chaps, play nice please. I am sure Mark Hahn is well aware of what batch schedulers can do. Me, I place myself in the camp of one job / one node (or one job / lots of nodes) Jobs may run wild and trigger
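As an illustration of the exclusivity point, a sketch of asking the scheduler for whole nodes, assuming SLURM (Torque/PBS and other schedulers have equivalent directives); the job script name is made up:

    #!/usr/bin/env python
    # Submit a job that gets its nodes exclusively, so nothing else shares
    # their memory or cores. Assumes SLURM's sbatch is on the PATH; the job
    # script name below is hypothetical.
    import subprocess

    subprocess.check_call(["sbatch", "--exclusive", "--nodes=4", "myjob.sh"])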

Re: [Beowulf] Definition of HPC

2013-04-24 Thread Joe Landman
>> are you just heckling, or do you have some measurements to contribute? > > You show me yours, and maybe I'd show you mine. Gents ... please, this is a family-oriented supercomputing list ... There'll be no pulling out your top500 results in public. There are perfectly reasonable venues for ben

Re: [Beowulf] Definition of HPC

2013-04-24 Thread Hearns, John
> Sure, it's important - WITHIN a given job. Why should a > new job's performance depend on what ran before? (And in I can't see why anyone would want to throw away a performance improvement. > most cases, the impact is negative, because the cached > pages are not the ones needed by the new jo

Re: [Beowulf] Definition of HPC

2013-04-24 Thread Max R. Dechantsreiter
On Wed, 24 Apr 2013, Mark Hahn wrote: >> Sure, it's important - WITHIN a given job. Why should a >> new job's performance depend on what ran before? (And in > > I can't see why anyone would want to throw away a performance improvement. > >> most cases, the impact is negative, because the cach

Re: [Beowulf] Definition of HPC

2013-04-24 Thread Mark Hahn
> Sure, it's important - WITHIN a given job. Why should a > new job's performance depend on what ran before? (And in I can't see why anyone would want to throw away a performance improvement. > most cases, the impact is negative, because the cached > pages are not the ones needed by the new job

Re: [Beowulf] Definition of HPC

2013-04-24 Thread Max R. Dechantsreiter
Mark, > it's very simple: pagecache and VM balancing is a very important part of the > kernel, and has received a lot of quite productive attention over the years. > I question the assumption that "rebooting the pagecache" is a sensible way to > deal with memory-tuning problems. it seems very pas
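For the record, what "rebooting the pagecache" usually amounts to in practice is a job-prologue or epilogue step along these lines (a sketch; drop_caches has been in mainline since 2.6.16 and needs root):

    #!/usr/bin/env python
    # Drop clean page cache, dentries and inodes between jobs. A sketch of
    # the usual "reboot the pagecache" epilogue; needs root, and only clean
    # pages are freed, hence the sync first.
    import subprocess

    subprocess.check_call(["sync"])  # flush dirty pages so they become droppable

    with open("/proc/sys/vm/drop_caches", "w") as f:
        f.write("3\n")  # 1 = pagecache, 2 = dentries+inodes, 3 = both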

Re: [Beowulf] physical memory

2013-04-24 Thread Hearns, John
> Does linux recombine physical memory into contiguous regions? > My impression has been "no". Somewhere down in the guts of the kernel there > is the "slab" allocator, which maintains a data structure of free memory in > power-of-two sizes. As memory > is used, the chunks get broken up and

[Beowulf] physical memory

2013-04-24 Thread Lawrence Stewart
Does linux recombine physical memory into contiguous regions? My impression has been "no". Somewhere down in the guts of the kernel there is the "slab" allocator, which maintains a data structure of free memory in power-of-two sizes. As memory is used, the chunks get broken up and naturally mig
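A minimal sketch of one way to see how fragmented free physical memory actually is: /proc/buddyinfo lists, per zone, the number of free blocks of 2^0, 2^1, ... pages:

    #!/usr/bin/env python
    # Show free physical memory by block size (order), per node and zone.
    # /proc/buddyinfo columns are counts of free 2^0, 2^1, ... page blocks;
    # a sketch -- zone names and the number of orders vary by machine.
    with open("/proc/buddyinfo") as f:
        for line in f:
            # e.g.: "Node 0, zone   Normal   120  45  10  3  1  0 ..."
            fields = line.split()
            node, zone, counts = fields[1].rstrip(","), fields[3], fields[4:]
            print("node %s zone %-8s: %s" % (node, zone, " ".join(counts)))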

Re: [Beowulf] Definition of HPC

2013-04-24 Thread Hearns, John
Sorry folks. Quoting is ABYSMAL in Outlook. For those of you who quite understandably could not be bothered to scroll through my last message, here is a relevant graph. Look at pages 96-98 of this set of slides (same graph as in the book) http://www.multicore-challenge.org/resources/FMCCII_Tutor

Re: [Beowulf] Definition of HPC

2013-04-24 Thread Hearns, John
>> Because it stopped the random out-of-memory conditions that we were having. > > aha, so basically "rebooting windows resolves my performance problems" ;) in other words, a workaround. I think it's important to note when behavior is a workaround, so that it doesn't get ossified into SOP. > Mark,

[Beowulf] Xi3 computers

2013-04-24 Thread Prentice Bisbal
Beowulfers, Have any of you seen these computers from Xi3 yet? I've been getting e-mails from them for some time now. I'm assuming they got my e-mail address from SC12. Anyway, instead of just blindly deleting the one I received yesterday, I decided to actually check out their products. They'r