That might be exactly it. Thanks.
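
For the archives, here's a minimal, untested sketch of what I think the
ipyparallel version of my loop would look like. It assumes engines have
already been started (e.g. with "ipcluster start -n 4"), and simulation()
below is just a stand-in for the real model:

    import ipyparallel as ipp

    def simulation(parametervalue):
        # placeholder for the real model run on each node
        return parametervalue ** 2

    parametervaluelist = range(100)

    rc = ipp.Client()                # connect to the running engines
    view = rc.load_balanced_view()   # hand tasks to whichever engine is free
    # map_sync blocks until everything comes back, like the serial loop
    results = view.map_sync(simulation, parametervaluelist)

That keeps the "farm out, gather back" shape without any explicit
subprocess or pdsh plumbing.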

On 7/27/18, 2:17 PM, "Beowulf on behalf of Fred Youhanaie" 
<beowulf-boun...@beowulf.org on behalf of f...@anydata.co.uk> wrote:

    Jim
    
    I'm not a Jupyter user yet, but out of curiosity I just googled for
    what I think you're looking for. Is this any good?
    
    https://ipyparallel.readthedocs.io/en/stable/
    
    I have now bookmarked it for my own future use!
    
    Cheers,
    Fred
    
    On 27/07/18 21:56, Lux, Jim (337K) wrote:
    > 
    > -----Original Message-----
    > From: Beowulf [mailto:beowulf-boun...@beowulf.org] On Behalf Of Joe Landman
    > Sent: Friday, July 27, 2018 11:54 AM
    > To: beowulf@beowulf.org
    > Subject: Re: [Beowulf] Jupyter and EP HPC
    > 
    > 
    > 
    > On 07/27/2018 02:47 PM, Lux, Jim (337K) wrote:
    >>
    >> I’ve just started using Jupyter to organize my Pythonic ramblings.
    >>
    >> What would be kind of cool is a high-level way to do some
    >> embarrassingly parallel Python stuff. I’m sure it’s been done, but
    >> my google skills appear to be lacking (for all I know there’s someone
    >> at JPL already doing this, among the 6000 people here).
    >>
    >> What I’m thinking is this:
    >>
    >> I have a high-level Python script that iterates through a set of
    >> values for some model parameter, farms out running the model to
    >> nodes on a cluster, and then gathers the results back.
    >>
    >> So, I’d have N copies of the Python model script on the nodes.
    >>
    >> Almost like a Pythonic version of pdsh.
    >>
    >> Yeah, I’m sure I could use lots of subprocess() and execute() stuff
    >> (heck, I could shell out to pdsh), but as with all things Python,
    >> someone has probably already done it before and has all the nice
    >> hooks into the IPython kernel.
    >>
    > 
    > I didn't do this with IPython or Python ... but this was effectively
    > the way I parallelized NCBI BLAST in 1998-1999 or so. I wrote a Perl
    > script to parse args, construct jobs, move data, submit/manage jobs,
    > recover results, and reassemble output. SGI turned that into a product.
    > 
    > 
    > -- Yes, but I was hoping someone had done that for Jupyter.
    > 
    > >>> results = []
    > >>> for parametervalue in parametervaluelist:
    > ...     result = simulation(parametervalue)
    > ...     results.append(result)
    > 
    > 
    > 

_______________________________________________
Beowulf mailing list, Beowulf@beowulf.org sponsored by Penguin Computing
To change your subscription (digest mode or unsubscribe) visit 
http://www.beowulf.org/mailman/listinfo/beowulf
