[Python-Dev] Re: Sponsoring Python development via the PSF

2019-06-26 Thread Pau Freixes
Hi,

Why not the other way around? With a clear goal and a speculative budget laid
out from the very beginning, companies would have visibility into the end
goal of their donations. Otherwise it is a bit of a leap of faith.

On Wed, Jun 26, 2019, 02:15 Brett Cannon  wrote:

> Victor Stinner wrote:
> > That's great!
> > "The PSF will use Python development fundraising to support CPython
> > development and maintenance."
> > Does someone have more info about that? Who will get the money, how,
> etc.?
>
> No because we have to see how much money we get first. ;) Without knowing
> how much we will get then you can probably assume it will go towards core
> dev sprint sponsorships like the page states. After that probably the PM
> role we have been talking about on the steering council to help sunset
> Python 2, then a PM-like role to help with GitHub issue migration. That's
> as far as the steering council has discussed things with the PSF in terms
> of short-term, concrete asks for money and staffing.
>
> After that it's my personal speculation, but I would hope something like
> the Django fellowship program. If we could get enough money to fund like 4
> people to work on Python full-time it would be amazing and help us stay on
> top of things like pull requests.
>
> But all of this depends on companies actually giving money in the proper
> amounts in order to be able to do anything like this. I've been told by
> companies they have been wanting to donate money for ages but wanted to
> make sure it was helping Python's development. Well, this is their chance
> to prove they actually meant it. ;)
>
> > Victor
> > On Monday, June 24, 2019, Brett Cannon br...@python.org...
> > wrote:
> > > We now have https://www.python.org/psf/donations/python-dev/...
> > > for anyone
> > > to make donations via the PSF for directly supporting Python
> development
> > > (and we have the Sponsor button at https://github.com/python/cpython.
> ..
> > > pointing to this link).
> > > --
> > Night gathers, and now my watch begins. It shall not end until my death.


[Python-Dev] Object deallocation during the finalization of Python program

2020-01-09 Thread Pau Freixes
Hi,

Recently I've been facing a really weird bug where a Python program
was randomly segfaulting during finalization; the program was using
some C extensions via Cython.

Debugging the issue, I realized that during the deallocation of one of
the Python objects the deallocation function was trying to release a
pointer that was, surprisingly, NULL. The pointer was at the same time
held by another Python object that was an attribute of the object that
had the deallocation function, something like this:

cdef class Foo:
    cdef my_type *value

cdef class Bar:
    cdef Foo _foo

    def __cinit__(self):
        self._foo = Foo()
        self._foo.value = initialize()

    def __dealloc__(self):
        destroy(self._foo.value)

It seems that, randomly, the instance of Foo held by the Bar object was
deallocated by the CPython interpreter before Bar's own deallocation,
so by the time `destroy(self._foo.value)` ran - after the memory of the
Foo instance had already been released and zeroed - it was in fact
passed a NULL address, raising a segfault.

It was a surprise to me; if I'm not missing something, the deallocation
of the Foo instance happened even though there was still an active
reference to it held by the Bar object.

As a kind of double check, I changed the program to make an explicit
`gc.collect()` call before the last line of the Python program. As a
result, I couldn't reproduce the segfault, which would theoretically
mean that objects were then deallocated "in order".
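
Concretely, the double check was roughly this (`run_workload()` is just a
placeholder name for the code that creates the Bar objects, not the real
function in our program):

    import gc


    def run_workload():
        # placeholder for the code that creates and uses the Bar/Foo objects
        pass


    if __name__ == "__main__":
        run_workload()
        # Force a collection while the interpreter is still fully alive,
        # before finalization starts tearing modules and objects down.
        gc.collect()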

So my question would be, could CPython deallocate the objects during
the finalization step without considering the dependencies between
objects?

If this is not the right list for this kind of question, just let me
know where the best place to ask would be.

Thanks in advance,
-- 
--pau


[Python-Dev] Re: Object deallocation during the finalization of Python program

2020-01-11 Thread Pau Freixes
Hi,

Thanks for the comments. [1] is a really interesting case; I've only
skimmed it, but it seems similar to the bug that I finally found in our
program.

Basically, the GC was clearing all of the attributes before the
deallocation in order to break an indirect reference cycle, which later
resulted in access to an invalid address during the deallocation. This
is something the Cython documentation already warns about.


[1] https://bugs.python.org/issue38006
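
Just to illustrate the point, a minimal defensive variant of the earlier
sketch (assuming `my_type`, `initialize()` and `destroy()` come from some
`cdef extern` block, as in our real code - this is not the actual fix we
shipped): keep the C pointer directly on the type that owns it, so that
`__dealloc__` only touches C data and never a Python attribute the GC may
already have cleared.

    cdef class Bar:
        # C pointer lives on the owning extension type itself.
        cdef my_type *value

        def __cinit__(self):
            self.value = initialize()

        def __dealloc__(self):
            # Only touch C data here; guard against double frees.
            if self.value != NULL:
                destroy(self.value)
                self.value = NULL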

On Sat, Jan 11, 2020 at 1:36 PM Armin Rigo  wrote:
>
> Hi Pau,
>
> Also, the Cython documentation warns against doing this kind of thing
> (here, accessing the Python object stored in ``foo``).  From
> https://cython.readthedocs.io/en/latest/src/userguide/special_methods.html:
>
> You need to be careful what you do in a __dealloc__() method.
> By the time your __dealloc__() method is called, the object
> may already have been partially destroyed and may not be
> in a valid state as far as Python is concerned, so you should
> avoid invoking any Python operations which might touch the
> object. In particular, don’t call any other methods of the object
> or do anything which might cause the object to be resurrected.
> It’s best if you stick to just deallocating C data.
>
>
> Armin



-- 
--pau


[Python-Dev] Different CPU usage of the same Python code between embedded mode and standalone mode

2008-06-07 Thread Pau Freixes
Hi list,

First, hello to all. I have a serious problem understanding some results
when comparing CPU usage of the same Python code in embedded mode and
standalone mode (python name_script.py).

These last months I have been writing a program in C, similar to mod_python,
that embeds Python; it's a middleware for dispatching and executing Python
batch programs on several nodes. Now I'm writing some Python programs to
test how this scales across several nodes and comparing the results with
"standalone" performance.

I found a very strange problem with one application named md5challenge.
This application tries to compute as many MD5 digests as possible in a
given number of seconds; md5challenge uses a simple SIGALRM alarm to stop
the program when the time has passed. This is the code of the Python script:

import signal
import md5

_nrdigest = 0
_const_b = 20
_f = None
_signal = False


def handler_alrm(signum, frame):
    # SIGALRM handler: just flag the worker loop so it stops.
    global _signal

    _signal = True


def try_me():
    global _nrdigest
    global _f
    global _signal

    _f = open("/dev/urandom", "r")
    while _signal is not True:
        buff = _f.read(_const_b)
        md5.md5(buff).hexdigest()
        _nrdigest = _nrdigest + 1

    if _f is not None:
        _f.close()


# Define entry point with one input variable.
# req is an instance of the Request object; useful members of this object are:
#   req.input is a dictionary with input.xml variables
#   req.constants is a dictionary with constants defined in signature.xml
#   req.output is an empty dictionary to fill with output variables
#   req.config is a dictionary with config values taken from the namespace
#   req.apdn_pid is the pid of the application


def main(req):
    global _nrdigest

    signal.signal(signal.SIGALRM, handler_alrm)
    signal.alarm(req.input['time'])

    try_me()

    req.output['count'] = _nrdigest

    return req.OK


if __name__ == "__main__":

    # test code
    class test_req:
        pass

    req = test_req()
    req.input = {'time': 10}
    req.output = {'ret': 0, 'count': 0}
    req.OK = 1

    main(req)

    print "Reached %d digests" % req.output['count']


When I run this program standalone on my Pentium Dual Core, md5challenge
reaches about 1,000,000 keys in 10 seconds, but when I run the same code in
embedded mode md5challenge reaches about 200,000 more keys! I repeated this
test many times and embedded mode always wins.
What is happening?

I also tried removing the read dependency on /dev/urandom and computing all
keys from the same buffer. In this case embedded mode also always wins, and
the difference is even bigger!

The alarm expires after 10 seconds in both cases.

Thanks to all; can anybody help me understand these results?


-- 
Pau Freixes
Linux GNU/User


[Python-Dev] GIL CPU usage problem, please confirm

2008-06-08 Thread Pau Freixes
Hi List,

Surely this is a recurring theme in the Python dev world, but I need your
help to confirm whether the following image is really accurate:

http://www.milnou.net/~pfreixes/img/cpu_usage_gil_problem.png

I'm writing a brief article for my blog and I need to be sure about the
current problem with the GIL in multi-core environments. The picture tries
to illustrate the problem of scheduling multiple threads of the same
interpreter, all running Python code, across multiple CPU cores. Can anyone
confirm this picture for me?
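
For reference, this is the kind of minimal experiment I would use to show
the effect the picture describes (my own sketch, not an official benchmark):
a pure-Python CPU-bound loop run twice sequentially and then in two threads.
On CPython the threaded version should take roughly as long as the
sequential one (or even a bit longer), because only the thread holding the
GIL executes bytecode.

import threading
import time

def burn(n):
    # Pure-Python CPU-bound loop: the thread must hold the GIL to
    # execute this bytecode.
    x = 0
    for i in xrange(n):
        x = x + i
    return x

def run_sequential(n):
    burn(n)
    burn(n)

def run_threaded(n):
    # Two threads doing the same CPU-bound work; on CPython they take
    # turns holding the GIL instead of running on two cores at once.
    t1 = threading.Thread(target=burn, args=(n,))
    t2 = threading.Thread(target=burn, args=(n,))
    t1.start()
    t2.start()
    t1.join()
    t2.join()

if __name__ == "__main__":
    n = 5000000

    start = time.time()
    run_sequential(n)
    print "sequential: %.2f seconds" % (time.time() - start)

    start = time.time()
    run_threaded(n)
    print "threaded:   %.2f seconds" % (time.time() - start)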

And if it's possible to answer these two questions I will be happy :/

1) When this situation occurs in a single-core environment, what happens
when the threading library or the OS switches context to another Python
thread that does not hold the GIL?

2) Is there some PEP or plan to change this and run multiple Python threads
of the same interpreter at the same time? Perhaps for Python 3000?

Thanks, and excuse the interruption.


-- 
Pau Freixes
Linux GNU/User