Hi folks:
Quick post for the day job. AMD (my employer) is looking for expert
systems administrators for a mix of our internal HPC systems and for
helping customers stand up their AI and HPC clusters.
AMD systems include a small version of Frontier, some El Cap-adjacent
nodes, and a vari
Hi fellow beowulfers
I don't know if it's bad form to post job adverts here. Day job
(@AMD) is looking for lots of HPC (and AI) folks: think
debugging, support, etc. Happy to talk with anyone about this.
Regards
Joe
but it's certainly not going to be cheap or
easy. What are you thinking/doing about this?
--
Prentice
y was he an over-the-hill curmudgeon afraid of
new technology, there was also a pretty clear conflict of interest for
him to be pushing SGI, even though I'm sure our small purchase did
nothing to improve SGI stock value.
On 3/23/23 2:58 PM, Joe Landman wrote:
They had laid off all the good
--
Joe Landman
e: joe.land...@gmail.com
t: @hpcjoe
w: https://scalability.org
g: https://github.com/joelandman
l: https://www.linkedin.com/in/joelandman
ap clflushopt clwb sha_ni xsaveopt
xsavec xgetbv1 cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local
clzero irperf xsaveerptr arat npt lbrv svm_lock nrip_save tsc_scale
vmcb_clean flushbyasid decodeassists pausefilter pfthreshold
v_vmsave_vmload vgif umip pku ospke vaes vpclmulqdq overflow_recov
succ
Prentice
Pid1 game ... heh!
On 11/9/21 2:13 PM, Douglas Eadline wrote:
Here is the Hybrid Beowulf Bash info
https://beowulfbash.com/
'll get a great HPC compiler for C/Fortran.
especially ones waiting on
(slow) RAM and (slower) IO. Make the RAM and IO faster (lower latency,
higher bandwidth), and the system will be far more performant.
most recently on
large supers over the past few months.
Thanks,
David Mathog
t support the POWER architecture anymore
because they no longer have access to POWER hardware. Most of this
information comes from the Julia GitHub or Julia Discourse conversations.
FWIW, have a look at ScaleMatrix rack enclosures. I saw them last week.
They can get to 50 kW, as far as I understand.
Disclosure: I met with them last week as part of the day job. No financial
relationship with them. Just interesting tech.
On October 21, 2019 11:30:16 AM Michael Di Domenico wrote:
) in my
group as well. More standard "cloudy" things there (yes, $dayjob does
cloud!).
Please ping me on my email in .sig or at $dayjob. Email there is my
first initial + last name at cray dot com. Thanks, and back to your
regularly scheduled cluster/super ... :D
the hard
problem in the mix. Not technically hard, but hard from a cost/time
perspective.
r nodes. Then put a BeeGFS file system atop
those. Stage in the images. Run.
This is cheap compared to building the storage you actually need for
this workload.
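For anyone wanting to try this, a minimal sketch of standing BeeGFS up on
scratch nodes, assuming the BeeGFS 7.x packages and hypothetical hosts
mgmt01 (management + metadata) and stor01..storNN (storage targets):

  # on mgmt01: management and metadata services
  /opt/beegfs/sbin/beegfs-setup-mgmtd -p /scratch/beegfs/mgmtd
  /opt/beegfs/sbin/beegfs-setup-meta -p /scratch/beegfs/meta -s 1 -m mgmt01
  systemctl start beegfs-mgmtd beegfs-meta
  # on each storage node: unique service id (-s) and target id (-i)
  /opt/beegfs/sbin/beegfs-setup-storage -p /scratch/beegfs/storage -s 1 -i 101 -m mgmt01
  systemctl start beegfs-storage
  # on the compute nodes: client, mounts /mnt/beegfs by default
  /opt/beegfs/sbin/beegfs-setup-client -m mgmt01
  systemctl start beegfs-helperd beegfs-client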
at Cray as Director of Cloud
Services and DevOps!
Thanks!
Joe
See https://docs.lfortran.org/. Figured Jeff Layton would like this :D
ou in a particular direction versus
working with you to design what you need (the smaller shops do this).
If you want to do this yourself atop your existing kit, go for it. It's
not hard to set up/configure.
data frame packages.
R, Julia, and I think Python can all handle this without too much pain.
[1] https://gssc.esa.int/navipedia/index.php/Relativistic_Clock_Correction
[2] http://www.astronomy.ohio-state.edu/~pogge/Ast162/Unit5/gps.html
oks like a nail" view as much as possible.
On 2/27/19 9:08 PM, David Mathog wrote:
Joe Landman wrote:
[...]
I'm about 98% of the way there now, with a mashup of parts from boel
and CentOS 7.
The initrd is pretty large though.
Wasted most of a day on a mysterious issue with "sh" (busybox) not
responding to the
oot-init.d
file, of the form 'grep -q option= /proc/cmdline'.
I use this for doing all my booting of immutable images. Just need the
kernel, and the initramfs. I can build one for you if you want, and you
can play with it.
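For reference, the kernel command line test in question is plain
POSIX/busybox sh; a minimal sketch, with a hypothetical image= option:

  #!/bin/sh
  # in the initramfs: act on a key=value option from the kernel cmdline
  if grep -q 'image=' /proc/cmdline ; then
      # extract the value (busybox sed is sufficient here)
      img=$(sed -n 's/.*image=\([^ ]*\).*/\1/p' /proc/cmdline)
      echo "staging image: ${img}"
  fi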
and, or most drivers.
make sure one does not ignore the vapor, or potential heat-induced
reaction products of the vapor. Fluorinert has some issues:
https://en.wikipedia.org/wiki/Fluorinert#Toxicity if you overcook it ...
It feels weird attending SC18, and not being an exhibitor. Definitely
looking forward to it.
Beobash will (of course) be fun ... and I'm looking forward to (finally)
being able to attend talks, poster sessions, and panels.
ts.
I'll paraphrase Churchill here: Systemd is the worst, except for all
the rest.
nother point of mine. And Greg K @Sylabs is
getting free exposure here :D
e that Ubuntu non-LTS releases are
potentially broken (bleeding edge).
t
in Debian, and significant effort/pain in RH/CentOS, usually employing
modules or a similar construct.
Glad to see that!
[...]
Robert G. Brown http://www.phy.duke.edu/~rgb/
Duke University Dept. of Physics, Box 90305
Durham, N.C. 27708-0305
Phone: 1-919-660-2567 Fax: 919-660-2525 email: r...@phy.duke.edu
e worked with some ARM product builders in the
past, and have been burned by the misalignment between reality and rhetoric.
/20181011005476/en/Australia’s-DownUnder-GeoSolutions-Selects-Skybox-Datacenters-Houston
0-0xbfff)
--8< snip snip 8<--
All the best!
Chris
emble output. SGI turned that into a product.
On 07/25/2018 04:36 PM, Prentice Bisbal wrote:
Paging Dr. Joe Landman, paging Dr. Landman...
My response was
"I'd seen/helped build/benchmarked some very nice/fast CephFS based
storage systems in $dayjob-1. While it is a neat system, if you are
focused on availability, scalab
s of about 16TiB last I
checked. If you need more, replace minio with another system (igneous,
ceph, etc.). Ping me offline if you want to talk more.
[...]
ystem, if you are
focused on availability, scalability, and performance, it's pretty hard
to beat BeeGFS. We'd ($dayjob-1) deployed several very large/fast file
systems with it on our spinning rust, SSD, and NVMe units.
On 6/19/18 2:47 PM, Prentice Bisbal wrote:
On 06/13/2018 10:32 PM, Joe Landman wrote:
I'm curious about your next gen plans, given Phi's roadmap.
On 6/13/18 9:17 PM, Stu Midgley wrote:
low level HPC means... lots of things. BUT we are a huge Xeon Phi
shop and need low-level p
Midgley
sdm...@gmail.com
so ... suddenly discovering that the neat little hole
in the pipe enables this highly conductive ionic fluid to short ...
somewhere between 1 V and 12 V DC, at tens to hundreds of thousands of amps. I
wouldn't wanna be anywhere near that when it lets go.
typed on a phone, so auto-co-wrecked ... VNC
On 06/06/2018 05:56 PM, David Mathog wrote:
Thanks for all the responses.
On 06-Jun-2018 14:40, Joe Landman wrote:
When I absolutely need a gui for something like this, I'll light up
BBC over ssh session. Performance has been good even cro
Wait ... nedit? I wrote my thesis with that (LaTeX) some (mumble) decades
ago ...
On June 6, 2018 5:28:30 PM David Mathog wrote:
Off Topic.
I need to do some work on a system 3000 miles away. No problem
connecting to it with ssh or setting X11 forwarding, but the delays are
such that my us
When I absolutely need a gui for something like this, I'll light up BBC
over ssh session. Performance has been good even crossing the big pond.
This said, vim handles this nicely as well.
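For the record, the usual VNC-over-ssh recipe is short; a sketch with a
hypothetical remote host (display :1 listens on TCP port 5901):

  # start a persistent desktop on the remote machine
  ssh user@far.example.org vncserver :1
  # tunnel the VNC port over ssh, then connect through the tunnel
  ssh -f -N -L 5901:localhost:5901 user@far.example.org
  vncviewer localhost:1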
On June 6, 2018 5:28:30 PM David Mathog wrote:
Off Topic.
I need to do some work on a system 3000 mile
em with my OpenBLAS build.
ough tuning per
use case mattered significantly.
I don't want this to be a discussion of what could be wrong at this
point; we will get to that in future posts, I assure you!
part (the
expectation of Intel fixing it in their newer HW) is all the more
reason I'm inclined to believe the fix will be delivered as a tunable.
Best,
ellis
On 12/23/2017 05:49 PM, Jeffrey Layton wrote:
I tried it but it doesn't come up as the job scheduler - just
capabilities of a company. Hmm..
FYI: https://soylentnews.org/article.pl?sid=16/11/06/0254233
u have a fixed-size resource
bandwidth contention issue you are fighting. The question is what.
y idea why
this is only occurring with RHEL 6 w/ NFS root OS?
.0°C)
Core 2: +33.0°C (high = +82.0°C, crit = +92.0°C)
Core 3: +34.0°C (high = +82.0°C, crit = +92.0°C)
...
on this system which
have no match on the other.
Regards,
David Mathog
mat...@caltech.edu
Manager, Sequence Analysis Facility, Biology Division, Caltech
that does this for
you ...
https://github.com/joelandman/pcilist
:D
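Roughly the same information is available straight from lspci, run as root:

  # compare negotiated (LnkSta) against capable (LnkCap) width/speed
  lspci -vv | grep -E '^[0-9a-f]{2}:|LnkCap:|LnkSta:'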
On Thu, Aug 17, 2017 at 12:35 PM, Joe Landman wrote:
On 08/17/2017 12:00 PM, Faraz Hussain wrote:
I noticed an mpi job was taking 5X longer to run whenever it
enable ipoib and then rerun
test? It would then show ~40GB/sec I assume.
No. 9 GB/s is about 80 Gb/s; InfiniBand is working. Looks like you
might have a dual-rail IB setup, or you were doing a bidirectional/full
duplex test.
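The arithmetic, for anyone following along:

  9 GB/s  x 8 bits/byte = 72 Gb/s   (decimal GB)
  9 GiB/s x 8 bits/byte ~ 77.3 Gb/s (if the tool reported GiB/s)

Either way, that's the neighborhood of two 40 Gb/s rails, not one.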
level activity that the
subnet manager (OpenSM or a switch-level version) enables.
For OpenMPI, my recollection is that they expect the IB ports to have
ethernet addresses as well (and will switch to RDMA after initialization).
What does
ifconfig -a
report?
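On newer systems the equivalents would be, assuming an IPoIB interface
named ib0:

  ip -br addr show ib0   # brief address/state listing
  ibstat                 # port state, rate, GUIDs (infiniband-diags)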
Hi folks
I am trying to find contacts at Broadcom to speak with about NIC
drivers. All my networking contacts seem to have moved on. Does anyone
have a recommendation as to someone to speak with?
Thanks!
Joe
.
All the best,
Chris
Yeah, they should make very sweet storage units (single-socket SKU).
Dual socket is also nice, as you'll have 64 lanes of fabric between
sockets, as well as 64 from each socket to peripherals.
I'd love to see the QPI contention issue just go away. This looks like
it pushes back the problem
others, but ... wow ... losing Ph.D. project data.
On 03/24/2017 12:02 PM, C Bergström wrote:
On Fri, Mar 24, 2017 at 11:48 PM, Joe Landman wrote:
On 3/23/17 5:27 PM, C Bergström wrote:
[...]
No issue, and I am sorry to see this happen. I enjoyed my time using the
PathScale compilers.
It's sad that an ecosystem chooses not to support
On 3/23/17 5:27 PM, C Bergström wrote:
Tiz the season for HPC software to die?
https://www.hpcwire.com/2017/03/23/hpc-compiler-company-pathscale-seeks-life-raft/
(sorry I don't mean to hijack your thread, but timing of both
announcements is quite overlapping)
No issue, and I am sorry to see th
For those who I've not talked with yet ...
http://insidehpc.com/2017/03/scalable-informatics-closes-shop/
https://scalability.org/2016/03/not-even-breaking-a-sweat-10gbs-write-to-single-node-forte-unit-over-100gb-net-realhyperconverged-hpc-storage/
Excellent performance and ease of configuration are what you should
expect from BeeGFS.
et up some infrastructure with this before,
and it was relatively painless to use. Think of it as a predecessor to
CoreOS, RancherOS, and others.
On 10/26/2016 10:20 AM, Prentice Bisbal wrote:
How so? By only having a single seat or node-locked license?
Either ... for licensed code this is a non-starter. Which is a shame
that we still are talking about node-locked/single-seat licenses in 2016.
Licensing might impede this ... Usually does.
On 10/26/2016 09:50 AM, Prentice Bisbal wrote:
There is a amazing beauty in this simplicity.
Prentice
On 10/25/2016 02:46 PM, Gavin W. Burris wrote:
Hi, Michael.
What if the same job ran on two separate nodes, with IO to local
scratch? What a
On 10/25/2016 02:24 PM, Michael Di Domenico wrote:
here's an interesting thought exercise and a real problem i have to tackle.
i have researchers that want to run magma codes for three weeks or
so at a time. the process is unfortunately sequential in nature and
magma doesn't support check poi
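Not from the thread, but for sequential codes without native
checkpointing an external checkpointer sometimes works; a sketch with
DMTCP, assuming the code tolerates it (run_magma.sh is hypothetical):

  # launch under the checkpointer, checkpoint every 6 hours
  dmtcp_launch -i 21600 ./run_magma.sh
  # after a crash, resume from the generated restart script
  ./dmtcp_restart_script.sh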
On 09/09/2016 07:20 AM, Tim Cutts wrote:
2. Surely, we heat up
the oceans regardless of whether it's directly by cooling with the
sea or indirectly by cooling in air, and atmospheric warming slowly
warming the oceans. Ultimately it will all come to equilibrium (with
possible disastrous consequen
On 08/24/2016 09:51 AM, Prentice Bisbal wrote:
This is an old article, but it's relevant to the recent discussion on
programming for Xeon Phis, 'code modernization', and the speedups
'code modernization' can provide.
https://www.hpcwire.com/2015/08/24/cosmos-team-achieves-100x-speedup-on-co
On 08/23/2016 10:01 AM, Peter St. John wrote:
HPE is in the process of being bought by CSC.
???
On the scale of 12 months
you will be contracting with CSC.
I thought they were spinning out their services organization to CSC ...
not the whole kit and kaboodle ...
http://www.csc.com/invest
Erp ...
On 08/23/2016 09:58 AM, Prentice Bisbal wrote:
How much power does that system use at full-tilt? I'm guessing about
2250 - 2500 kW.
Prentice
On 08/22/2016 07:40 PM, Stu Midgley wrote:
I measured the power draw of our 2RU 8 phi nodes with and without
fans... the fans draw about 20% pow
On 08/17/2016 11:50 AM, Kilian Cavalotti wrote:
On Wed, Aug 17, 2016 at 7:10 AM, Prentice Bisbal wrote:
When Intel first started marketing the Xeon Phi, they emphasized that you
wouldn't need to rewrite your code to use the Xeon Phi. This was a marketing
move to differentiate the Xeon Phi fro
On 08/12/2016 10:46 AM, Douglas Eadline wrote:
I remember when the old HP bought Convex. More like 1 + 1 = .2
in that case. And, then in recent years many of the old
Convex crew emerged as Convey which was then bought by
Micron last year.
Maybe I am biased, but I see actual (strong) value in
On 08/11/2016 07:22 PM, Christopher Samuel wrote:
So SGI is getting bought (yet again), this time by HP Enterprise.
http://investors.sgi.com/releasedetail.cfm?ReleaseID=984160
Go off to class for a few hours and stuff happens ...
For their in-memory analytics machines. Go figure.
I am working on extracting meaningful data on various components for our
monitoring tools, and realized that I don't have a good writeup anywhere
(other than the source) of what the fields are. Anyone have or know of
such a writeup?
For example:
root@n01:/sys/devices/pci0000:00/0000:00:03.0/
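As a starting point, many of those fields are single-value text files; a
minimal sketch that walks the standard sysfs PCI layout:

  #!/bin/sh
  # vendor/device/class are hex IDs; numa_node reads -1 when unknown
  for d in /sys/bus/pci/devices/*; do
      printf '%s vendor=%s device=%s class=%s numa=%s\n' \
          "${d##*/}" \
          "$(cat $d/vendor)" "$(cat $d/device)" \
          "$(cat $d/class)" "$(cat $d/numa_node)"
  done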