> On Dec 9, 2018, at 7:26 AM, Gerald Henriksen wrote:
>
>> On Fri, 7 Dec 2018 16:19:30 +0100, you wrote:
>>
>> Perhaps for another thread:
>> Actually I went to the AWS User Group in the UK on Wednesday. Very
>> impressive, and there are the new Lustre filesystems and MPI networking.
>> I guess
> but for now just expecting to get something good without effort is
> probably premature.
Nothing good ever came easy.
Who said that? My Mum. And she was a very wise woman.
On Sun, 9 Dec 2018 at 21:36, INKozin via Beowulf
wrote:
While I agree with many points made so far I want to add that one aspect
which used to separate a typical HPC setup from some IT infrastructure is
complexity. And I don't mean technological complexity (because
technologically HPC can be fairly complex) but the diversity and the
interrelationships b
On Fri, 7 Dec 2018 16:19:30 +0100, you wrote:
>Perhaps for another thread:
>Actually I went to the AWS User Group in the UK on Wednesday. Very
>impressive, and there are the new Lustre filesystems and MPI networking.
>I guess the HPC World will see the same philosophy of building your setup
>using t
Monolithic static binaries - better have a fat pipe to the server to load the
container on your target.
On 12/7/18, 10:47 AM, "Beowulf on behalf of Jan Wender"
wrote:
> On 07.12.2018 at 17:34, John Hanks wrote:
> In my view containers are little more than incredibly complex static binaries
Thanks for this! I was wondering if I am the only one thinking it.
- Jan
On 12/7/18, 8:46 AM, "Beowulf on behalf of Michael Di Domenico"
wrote:
On Fri, Dec 7, 2018 at 11:35 AM John Hanks wrote:
>
> But, putting it in a container wouldn't make my life any easier and
> would, in fact, just add yet another layer of something to keep up to date.
On Fri, Dec 7, 2018 at 7:20 AM John Hearns via Beowulf
wrote:
> Good points regarding packages shipped with distributions.
> One of my pet peeves (only one? Editor) is being on mailing lists for HPC
> software such as OpenMPI and Slurm and seeing many requests along the lines
> of
> "I installed
On Fri, Dec 7, 2018 at 11:35 AM John Hanks wrote:
>
> But, putting it in a container wouldn't make my life any easier and would,
> in fact, just add yet another layer of something to keep up to date.
i think the theory behind this is that containers allow the sysadmins
to kick the can down the r
On Fri, Dec 7, 2018 at 7:04 AM Gerald Henriksen wrote:
> On Wed, 5 Dec 2018 09:35:07 -0800, you wrote:
>
> Now obviously you could do what for example Java does with a jar file,
> and simply throw everything into a single rpm/deb and ignore the
> packaging guidelines, but then you are back to in
Good points regarding packages shipped with distributions.
One of my pet peeves (only one? Editor) is being on mailing lists for HPC
software such as OpenMPI and Slurm and seeing many requests along the lines
of
"I installed PackageX on my cluster" and then finding fromt he replies that
the versii
On Wed, 5 Dec 2018 09:35:07 -0800, you wrote:
>Certainly the inability of distros to find the person-hours to package
>everything plays a role as well, your cause and effect chain there is
>pretty accurate. Where I begin to branch is at the idea of software that is
>unable to be packaged in an rpm
I think you do a better job explaining the underpinnings of my frustration
with it all, but then arrive at a slightly different set of conclusions.
I'd be the last to say autotools isn't complex; in fact, pretty much all
build systems eventually reach an astounding level of complexity. But I'm
not s
On Tue, 2018-12-04 at 11:20 -0500, Prentice Bisbal via Beowulf wrote:
> On 12/3/18 2:44 PM, Michael Di Domenico wrote:
> > On Mon, Dec 3, 2018 at 1:13 PM John Hanks
> > wrote:
> > > From the perspective of the software being containerized, I'm
> > > even more skeptical. In my world (bioinformati
On Mon, 3 Dec 2018 10:12:10 -0800, you wrote:
> And then I realized that I was seeing
>software which was "easier to containerize" and that "easier to
>containerize" really meant "written by people who can't figure out
>'./configure; make; make install' and who build on a sand-like foundation
>of
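The sequence being referred to, for anyone who has not met it; a minimal
sketch with a hypothetical package name and install prefix:

    tar xf mytool-1.0.tar.gz && cd mytool-1.0
    ./configure --prefix=$HOME/sw/mytool/1.0   # choose where it lands
    make -j4                                   # compile
    make install                               # copy into the prefix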
On 12/3/18 2:44 PM, Michael Di Domenico wrote:
On Mon, Dec 3, 2018 at 1:13 PM John Hanks wrote:
From the perspective of the software being containerized, I'm even more skeptical. In my world
(bioinformatics) I install a lot of crappy software. We're talking stuff resulting from "I
read the
On Mon, Dec 3, 2018 at 1:13 PM John Hanks wrote:
>
> From the perspective of the software being containerized, I'm even more
> skeptical. In my world (bioinformatics) I install a lot of crappy software.
> We're talking stuff resulting from "I read the first three days of 'learn
> python in 21 d
On Fri, Nov 30, 2018 at 9:44 PM John Hearns via Beowulf
wrote:
> John, your reply makes so many points which could start a whole series of
> debates.
>
I would not deny partaking of the occasional round of trolling.
> > Best use of our time now may well be to 'rm -rf SLURM' and figure out
> h
On Sat, 1 Dec 2018 06:43:05 +0100, you wrote:
>My own thoughts on HPC for a tightly coupled, on premise setup is that we
>need a lightweight OS on the nodes, which does the bare minimum. No general
>purpose utilities, no GUIs, nothing but network and storage. And container
>support.
One of the la
Ho ho. Yes, there is rarely anything completely new. Old ideas get dusted
off, polished up, and packaged slightly differently. At the end of the day, a
Dockerfile is just a script to build your environment, but it has the advantage
now of doing it in a reasonably standard way, rather than wha
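And that script really is all there is to it; a minimal sketch, with a
made-up package name and a base image chosen only for illustration:

    cat > Dockerfile <<'EOF'
    FROM ubuntu:18.04
    RUN apt-get update && apt-get install -y build-essential
    COPY mytool-1.0.tar.gz /src/
    RUN cd /src && tar xf mytool-1.0.tar.gz && cd mytool-1.0 && \
        ./configure && make && make install
    EOF
    docker build -t mytool:1.0 .   # the "reasonably standard way" to run it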
For me personally I just assume it's my lack of vision that is the problem.
I was submitting VMs as jobs using SGE well over 10 years ago. Job scripts
that build the software stack if it's not found? 15 or more years ago. Never
occurred to me to call it "cloud" or "containerized", it was just a few stupid
sc
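A rough sketch of such a job script under SGE, with invented paths and
names; the real ones were doubtless messier:

    #!/bin/bash
    #$ -N myjob
    #$ -cwd
    SW=$HOME/sw/mytool/1.0
    if [ ! -x "$SW/bin/mytool" ]; then
        # first run on this cluster: build the stack before using it
        cd "$TMPDIR" && tar xf "$HOME/src/mytool-1.0.tar.gz"
        cd mytool-1.0 && ./configure --prefix="$SW" && make && make install
    fi
    "$SW/bin/mytool" input.dat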
Yeah, I often think some people are using the letters HPC as in 'high
profile computing' nowadays. The diluting effect I mentioned a few posts
ago.
Actually a LOT of HPC admin folks I know are scientists, scientifically active
and tightly coupled to scientists in groups, and they were doing DevOps
even
On 11/27/2018 4:51 AM, Michael Di Domenico wrote:
this seems a bit too stringent of a statement for me. i don't dismiss
or disagree with your premise, but i don't entirely agree that HPC
"must" change in order to compete.
I agree completely. There is and always will be a need for what I call
"p
You can probably fork from a central repo.
On Wed, 28 Nov 2018 13:51:05 +0100, you wrote:
>Now I am all for connecting diverse and flexible workflows to true HPC systems
>and grids that feel different if not experienced
>with (otherwise what is the use of a computer if there are no users making use
>of it?), but do not make the mistake of
Those interested in providing user-friendly HPC might want to take a
look at Open OnDemand. I'm not affiliated with this project, but wanted
to make sure it got a plug. I've heard good things so far.
http://openondemand.org/
Eliot
On 11/26/18 10:26, John Hearns via Beowulf wrote:
This may
On Wed, 28 Nov 2018 at 11:33, Bogdan Costescu wrote:
> On Mon, Nov 26, 2018 at 4:27 PM John Hearns via Beowulf <
> beowulf@beowulf.org> wrote:
>
>> I have come across this question in a few locations. Being specific, I am
>> a fan of the Julia language. On the Julia forum a respected developer
>>
>
> If HPC doesn't make it easy for these users to transfer their workflow
> to the cluster, and the cloud providers do, then the users will move
> to using the cloud even if it costs them 10%, 20% more because at the
> end of the day it is about getting the job done and not about spending
> time t
As a follow up note on workflows,
we also have used 'sshfs-like constructs' to help non-technical users to
compute things on local clusters, the actual CERN grid
infrastructure and on (national) super computers. We built some middleware
suitable for that many moons ago:
http://lgi.tc.lic.leiden
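The sshfs end of such a construct is nearly a one-liner; hostnames and
paths below are placeholders:

    mkdir -p ~/cluster
    sshfs user@login.example.org:/home/user ~/cluster -o reconnect
    # the remote home now behaves like a local directory
    cp results/*.dat ~/cluster/project/
    fusermount -u ~/cluster   # unmount when finished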
Mark, again I do not have time to do your answer justice today.
However, as you are in NL, can you send me some oliebollen please? I am a
terrible addict.
On Wed, 28 Nov 2018 at 13:52, mark somers
wrote:
> Well, please be careful in naming things:
>
> http://cloudscaling.com/blog/cloud-comput
Well, please be careful in naming things:
http://cloudscaling.com/blog/cloud-computing/grid-cloud-hpc-whats-the-diff/
(note: the guy has only heard about MPI and does not consider SMP-based codes
using e.g. OpenMP, but he did understand that there are
different things being talked about).
Now I am all f
Bogdan, Igor, thank you very much for your thoughtful answers. I do not
have much time today to do your replies the justice of a proper answer.
Regarding the ssh filesystem, the scenario was that I was working for a
well known company.
We were running CFD simulations on remote academic HPC setups.
On Mon, Nov 26, 2018 at 4:27 PM John Hearns via Beowulf
wrote:
> I have come across this question in a few locations. Being specific, I am
> a fan of the Julia language. On the Julia forum a respected developer
> recently asked what the options were for keeping code developed on a laptop
> in sync
Julia packaging https://docs.julialang.org/en/v1/stdlib/Pkg/index.html
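Roughly, the Pkg workflow for keeping a laptop and a cluster in sync,
assuming the project lives in a git repository:

    # on the laptop: adding a package records it in Project.toml/Manifest.toml
    julia --project=. -e 'using Pkg; Pkg.add("Example")'
    git add Project.toml Manifest.toml && git commit -m "pin environment"
    # on the cluster: reproduce exactly the pinned versions
    git pull
    julia --project=. -e 'using Pkg; Pkg.instantiate()'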
On Wed, 28 Nov 2018 at 01:42, Gerald Henriksen wrote:
> On Tue, 27 Nov 2018 07:51:06 -0500, you wrote:
>
> >On Mon, Nov 26, 2018 at 9:50 PM Gerald Henriksen
> wrote:
> >> On Mon, 26 Nov 2018 16:26:42 +0100, you wrote:
> >>
> * - note that HPC isn't unique in this regard. The Linux distributions
> are facing their own version of this, where much of the software is no
> longer packageable in the traditional sense as it instead relies on
> language specific packaging systems and languages that don't lend
> themselves to
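Concretely, the language-specific packaging meant here resolves dependencies
entirely outside the distro's package manager, e.g.:

    pip install --user biopython    # Python pulls its own dependency tree
    cargo install ripgrep           # Rust builds from crates.io, not rpm/deb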
On Tue, 27 Nov 2018 07:51:06 -0500, you wrote:
>On Mon, Nov 26, 2018 at 9:50 PM Gerald Henriksen wrote:
>> On Mon, 26 Nov 2018 16:26:42 +0100, you wrote:
>> If on premise HPC doesn't change to reflect the way the software is
>> developed today then the users will in the future prefer cloud HPC.
>
On Mon, Nov 26, 2018 at 9:50 PM Gerald Henriksen wrote:
> On Mon, 26 Nov 2018 16:26:42 +0100, you wrote:
> If on premise HPC doesn't change to reflect the way the software is
> developed today then the users will in the future prefer cloud HPC.
>
> I guess it is a brave new world for on premise HP
On Mon, 26 Nov 2018 16:26:42 +0100, you wrote:
>This leads me to ask - should we be presenting HPC services as a 'cloud'
>service, no matter that it is a non-virtualised on-premise setup?
>In which case the way to deploy software would be via downloading from
>Repos.
>I guess this is actually more
This may not be the best place to discuss this - please suggest a better
forum if you have one.
I have come across this question in a few locations. Being specific, I am a
fan of the Julia language. On the Julia forum a respected developer recently
asked what the options were for keeping code develo