On Wed, Sep 09, 2015 at 04:33:49PM -0400, Trent Nelson wrote:
[...] PyObjects, loads a huge NumPy array, and has a WSS of ~11GB.
[...]
I've done a couple of consultancy projects now that were very data
science oriented (with huge data sets), so I really gained an
appreciation for how common the situation [...]
>
> I haven't tried getting the SciPy stack running with PyParallel yet.
That would be essential for my use. I would assume a lot of potential
PyParallel users are in the same boat.
Thanks for the info about PyPy limits. You have a really interesting project.
--
Gary Robinson
gary...@me.com
On Wed, Sep 09, 2015 at 04:52:39PM -0400, Gary Robinson wrote:
> I’m going to seriously consider installing Windows or using a
> dedicated hosted windows box next time I have this problem so that I
> can try your solution. It does seem pretty ideal, although the STM
> branch of PyPy (using http://codespeak.net/execnet/ to access SciPy)
> might also work at this point. [...]
I’m going to seriously consider installing Windows or using a dedicated hosted
windows box next time I have this problem so that I can try your solution. It
does seem pretty ideal, although the STM branch of PyPy (using
http://codespeak.net/execnet/ to access SciPy) might also work at this point. [...]
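For anyone curious what the execnet route would look like in practice: the
gateway API is tiny. A minimal sketch, assuming a CPython with NumPy installed
is on the PATH as "python"; the mean() call is only a stand-in for real SciPy
work:

    import execnet

    # Spawn a plain CPython subprocess; the NumPy/SciPy work happens there,
    # while the calling interpreter (e.g. PyPy-STM) stays on this side.
    gw = execnet.makegateway("popen//python=python")

    channel = gw.remote_exec("""
        import numpy
        while True:
            chunk = channel.receive()
            if chunk is None:
                break
            channel.send(float(numpy.mean(chunk)))
    """)

    channel.send([1.0, 2.0, 3.0])   # only plain builtins cross the channel
    print(channel.receive())        # -> 2.0
    channel.send(None)
    gw.exit()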
On Wed, Sep 09, 2015 at 01:43:19PM -0700, Ethan Furman wrote:
> On 09/09/2015 01:33 PM, Trent Nelson wrote:
>
> >This problem is *exactly* the type of thing that PyParallel excels at [...]
>
> Sorry if I missed it, but is PyParallel still Windows only?
Yeah, still Windows only. Still based off [...]
On 09/09/2015 01:33 PM, Trent Nelson wrote:
This problem is *exactly* the type of thing that PyParallel excels at [...]
Sorry if I missed it, but is PyParallel still Windows only?
--
~Ethan~
On Tue, Sep 08, 2015 at 10:12:37AM -0400, Gary Robinson wrote:
> There was a huge data structure that all the analysis needed to
> access. Using a database would have slowed things down too much.
> Ideally, I needed to access this same structure from many cores at
> once. On a Power8 system, for example [...]
Hi Gary,
On Tue, Sep 8, 2015 at 4:12 PM, Gary Robinson wrote:
> 1) Move the reference counts away from data structures, so copy-on-write
> isn’t an issue.
A general note about PyPy --- sorry, it probably doesn't help your use
case because SciPy is not supported right now...
Right now, PyPy hits [...]
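For readers who have not run into it, the reason suggestion 1) matters is that
ob_refcnt lives inside every object's header: a forked worker dirties the
parent's pages merely by reading the data, so copy-on-write ends up copying
them anyway. A small Unix-only illustration (the data size is arbitrary):

    import os

    # Parent builds ~1 million small PyObjects before forking.
    big = [str(i) for i in range(1000000)]

    pid = os.fork()
    if pid == 0:
        # The child never mutates `big`, but summing over it bumps each
        # element's refcount, which writes into the object headers and
        # forces the kernel to copy those pages.
        total = sum(len(s) for s in big)
        os._exit(0)
    else:
        os.waitpid(pid, 0)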
On 9/8/2015 2:08 PM, Stephen J. Turnbull wrote:
R. David Murray writes:
> On Tue, 08 Sep 2015 10:12:37 -0400, Gary Robinson wrote:
> > 2) Have a mode where a particular data structure is not reference
> > counted or garbage collected.
>
> This sounds kind of like what Trent did in PyParallel (in a more generic
> way). [...]
On 8 September 2015 at 11:07, Gary Robinson wrote:
>> I guess a third possible solution, although it would probably have
>> meant developing something for yourself which would have hit the same
>> "programmer time is critical" issue that you noted originally, would
>> be to create a module that managed the data structure in shared memory,
>> and then use [...]
Maybe you just have a job for Cap'n'proto?
https://capnproto.org/
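To make that concrete: with the pycapnp bindings the data lives in a flat
Cap'n Proto message on disk, and readers can traverse it without unpacking
per-element PyObjects first. A rough sketch; the schema, field names and file
names are invented for illustration:

    import capnp

    # Hypothetical schema describing the shared structure.
    SCHEMA = """
    @0xbf5147cbbecf40c1;
    struct PointList {
      xs @0 :List(Float64);
      ys @1 :List(Float64);
    }
    """
    with open("points.capnp", "w") as f:
        f.write(SCHEMA)
    points_capnp = capnp.load("points.capnp")

    # Build the structure once and write it to disk.
    msg = points_capnp.PointList.new_message()
    xs = msg.init("xs", 3)
    xs[0], xs[1], xs[2] = 1.0, 2.0, 3.0
    with open("points.bin", "wb") as f:
        msg.write(f)

    # Any number of worker processes can then open the file; there is no
    # deserialization step, so nothing gets expanded into a Python object
    # per element.
    with open("points.bin", "rb") as f:
        pts = points_capnp.PointList.read(f)
        print(sum(pts.xs))   # -> 6.0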
On 8 September 2015 at 11:12, Gary Robinson wrote:
> Folks,
>
> If it’s out of line in some way for me to make this comment on this list, let
> me know and I’ll stop! But I do feel strongly about one issue and think it’s
> worth mentioning, so here goes.
>
> Trent seems to be on to something that requires only a bit of a tilt
> ;-), and despite the caveat above, I agree with David, check it out:
I emailed with Trent a couple years ago about this very topic. The biggest
issue for me was that it was Windows-only, but it sounds like that restriction [...]
R. David Murray writes:
> On Tue, 08 Sep 2015 10:12:37 -0400, Gary Robinson wrote:
> > 2) Have a mode where a particular data structure is not reference
> > counted or garbage collected.
>
> This sounds kind of like what Trent did in PyParallel (in a more generic
> way).
Except Gary has a [...]
On 08.09.2015 19:17, R. David Murray wrote:
On Tue, 08 Sep 2015 10:12:37 -0400, Gary Robinson wrote:
2) Have a mode where a particular data structure is not reference
counted or garbage collected.
This sounds kind of like what Trent did in PyParallel (in a more generic
way).
Yes, I can recall [...]
On Tue, 08 Sep 2015 10:12:37 -0400, Gary Robinson wrote:
> 2) Have a mode where a particular data structure is not reference
> counted or garbage collected.
This sounds kind of like what Trent did in PyParallel (in a more generic
way).
--David
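A poor man's approximation of 2) that works today is to keep the bulk data
outside the refcounted heap altogether, e.g. in a single file-backed NumPy
memmap: only the one wrapper object is refcounted, and any number of worker
processes can map the same pages read-only. Sketch only; the file name, dtype
and size are made up:

    import numpy as np

    N = 1000000

    # Build step (run once): create the file-backed array.
    data = np.memmap("big.dat", dtype="float64", mode="w+", shape=(N,))
    data[:] = np.arange(N)
    data.flush()

    # Worker processes: map the same file read-only.  The array contents are
    # never refcounted or GC-scanned; only the single memmap wrapper is.
    view = np.memmap("big.dat", dtype="float64", mode="r", shape=(N,))
    print(view.sum())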
> I guess a third possible solution, although it would probably have
> meant developing something for yourself which would have hit the same
> "programmer time is critical" issue that you noted originally, would
> be to create a module that managed the data structure in shared
> memory, and then use [...]
On 8 September 2015 at 15:12, Gary Robinson wrote:
> So, one thing I am hoping comes out of any effort in the “A better story”
> direction would be a way to share large data structures between processes.
> Two possible solutions:
>
> 1) Move the reference counts away from data structures, so copy-on-write
> isn’t an issue. [...]
Folks,
If it’s out of line in some way for me to make this comment on this list, let
me know and I’ll stop! But I do feel strongly about one issue and think it’s
worth mentioning, so here goes.
I read the "A better story for multi-core Python” with great interest because
the GIL has actually been [...]