Re: [Tutor] create 1000 000 variables
--- <[EMAIL PROTECTED]> wrote:
> Suppose I need to create 1000 000 variables
> var_1, var_2, var_100
>
> How to do this using for?
> (something like
> for i in range(100):
> ___var_

Have you considered NOT creating 100,000 variables? Answer that question and you should be close to knowing the answer to your own question.

HTH,
Erob

--
Etienne Robillard <[EMAIL PROTECTED]>
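A minimal sketch of the alternative the reply hints at: instead of a million separate names, keep the values in a list (or a dict, if you want name-like keys) and index into it. The placeholder value 0 is just an assumption for illustration.

    # one list instead of var_0 ... var_999999
    values = [0] * 1000000
    values[42] = 7          # what "var_42 = 7" would have been

    # or a dict keyed by the names you had in mind
    named = {'var_%d' % i: 0 for i in range(1000000)}
    named['var_42'] = 7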
Re: [Tutor] (*args, **kwargs)
--- Matt Williams <[EMAIL PROTECTED]> wrote:
> Dear All,
>
> I have learnt to do bits of Python, but one of the things I cannot get
> my head around is the *args, **kwargs syntax. I have tried reading
> stuff on the web, and I have a copy of the Python Cookbook (which uses
> it as a recipe early on) but I still don't understand it.
>
> Please could someone explain _very_ slowly?

Here's how I've learned it so far:

First, there's a convention for passing a dictionary of keyword arguments, which is simply to use the name 'kw' (I prefer that one) or 'kwargs' with '**' in front of it. But first you need to initialize your data structure properly, thus:

    kw = {}  # an empty dictionary

Then you may want to add stuff inside that kw object:

    kw['foo'] = "bar"

Now consider the following class:

    class Foo:
        def __init__(self, **kw):
            # do something with the kw dict here.
            pass

Then here's how to pass the whole kw dict as keyword arguments:

    Foo(**kw)

HTH,
Etienne

> Apologies for the gross stupidity,

P.S. - There's no such thing here, sorry. :-)

> Matt
>
> --
> http://acl.icnet.uk/~mw
> http://adhominem.blogsome.com/
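For completeness, a short hedged sketch (the function name is made up): '*' collects extra positional arguments into a tuple, '**' collects extra keyword arguments into a dict, and the same symbols unpack them again at the call site.

    def show(*args, **kwargs):
        # args is a tuple, kwargs a dict
        print("%r %r" % (args, kwargs))

    show(1, 2, x=3)        # prints: (1, 2) {'x': 3}

    items = (1, 2)
    opts = {'x': 3}
    show(*items, **opts)   # the same call, unpacked from a tuple and a dict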
[Tutor] asyncio and wsgiref problem
Hi,

I'm trying to use the asyncio.coroutine decorator in my WSGI app. Here's my server code:

    class AsyncIOController(WSGIController):

        def __init__(self, settings=None, executor=None, loop=None):
            super(AsyncIOController, self).__init__(settings)
            # asyncio config.
            self._executor = executor
            self._loop = loop or get_event_loop()

        #@asyncio.coroutine
        def get_response(self, request=None, method='GET', data={}):
            response = super(AsyncIOController, self).get_response(request)
            return response

        @asyncio.coroutine
        def application(self, environ, start_response):
            with sessionmanager(environ):
                request.environ.update(environ)
                response = self.get_response(request=request)(environ, start_response)
                #assert isinstance(response, bytes), type(response)
            return response

        @asyncio.coroutine
        def __call__(self, environ, start_response, exc_info=None):
            result = self.application(environ, start_response)
            return result

My test script:

    @asyncio.coroutine
    def app(environ, start_response):
        try:
            result = (yield from AsyncIOController().application(environ, start_response))
            return result
        except:
            raise

    if __name__ == '__main__':
        server = simple_server.make_server('127.0.0.1', 8000, app)
        server.serve_forever()

Should I decorate the AsyncIOController.get_response method with the asyncio.coroutine decorator? Any ideas how to fix this script?

Thank you in advance,
Etienne
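A hedged diagnostic sketch (standalone, names made up) of why this fails under wsgiref: calling a function decorated with @asyncio.coroutine only creates a coroutine/generator object; nothing in wsgiref ever runs it, so the server ends up iterating the coroutine object itself as if it were the response body.

    import asyncio

    @asyncio.coroutine
    def application(environ, start_response):
        start_response('200 OK', [('Content-Type', 'text/plain')])
        return [b'hello']

    result = application({}, lambda status, headers: None)
    print(type(result))  # a generator object -- the body never ran,
                         # and start_response was never even called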
Re: [Tutor] asyncio and wsgiref problem
This code is compatible with PEP- on Python 3.5.3:

    @asyncio.coroutine
    def app(environ, start_response):
        try:
            result = (yield from AsyncIOController().application(environ, start_response))
        except:
            raise
        else:
            #XXX result is a generator. this should not be needed?
            yield from result

I updated my server code like so:

    class AsyncIOController(WSGIController):

        def __init__(self, settings=None, executor=None, loop=None):
            super(AsyncIOController, self).__init__(settings)
            # asyncio config.
            self._executor = executor
            self._loop = loop or get_event_loop()

        #@asyncio.coroutine
        def get_response(self, request=None, method='GET', data={}):
            response = super(AsyncIOController, self).get_response(request)
            return response

        @asyncio.coroutine
        def application(self, environ, start_response):
            with sessionmanager(environ):
                request.environ.update(environ)
                response = self.get_response(request=request)
                #assert isinstance(response, bytes), type(response)
            return response(environ, start_response)

        @asyncio.coroutine
        def __call__(self, environ, start_response, exc_info=None):
            result = self.application(environ, start_response)
            return result

How can I avoid calling "yield from" twice in my test script?

Thank you,
Etienne
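A hedged sketch of one way to avoid the second "yield from", assuming the goal is only to serve the app with wsgiref during development: keep the outer callable synchronous and drive the coroutine to completion yourself, so wsgiref receives a plain body iterable.

    import asyncio
    from wsgiref import simple_server

    def sync_app(environ, start_response):
        # run the coroutine to completion; its return value is the body
        loop = asyncio.get_event_loop()
        return loop.run_until_complete(
            AsyncIOController().application(environ, start_response))

    if __name__ == '__main__':
        server = simple_server.make_server('127.0.0.1', 8000, sync_app)
        server.serve_forever()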
Re: [Tutor] asyncio and wsgiref problem
OK, I figured out that wsgiref (PEP-) is currently incompatible with asyncio. I'm not sure about the motivation for this incompatibility. Would it be so hard to make a "wsgiref.asyncio_server" extension for people who want to use wsgiref for development, without having to use a third-party extension?

Anyways, I'm now using uWSGI for development and testing.

Etienne
[Tutor] How to debug a memory leak in a wsgi application?
Hi,

I think my WSGI application is leaking memory and I would like to debug it. What is the best way to profile memory usage in a running WSGI app?

Best regards,
Etienne
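Before reaching for a full profiler, one low-tech option (a hedged sketch; the class name is made up) is a WSGI middleware that logs the process's peak RSS after each request, using only the standard library's resource module:

    import resource

    class MemoryLogMiddleware(object):
        """Log peak resident set size after each request.

        On Linux, ru_maxrss is reported in kilobytes."""

        def __init__(self, app):
            self.app = app

        def __call__(self, environ, start_response):
            result = self.app(environ, start_response)
            rss_kb = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
            print('peak RSS: %d KB after %s' % (rss_kb, environ.get('PATH_INFO')))
            return result

    # usage: application = MemoryLogMiddleware(application)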
Re: [Tutor] How to debug a memory leak in a wsgi application?
Hi Alan,

Thanks for the reply. I use Debian 9 with 2G of RAM and a precompiled Python 2.7 built with pymalloc. I don't know if debugging was enabled for this build, and whether I should enable it to allow memory profiling with guppy...

My problem is that guppy won't show the heap stats for the uWSGI master process. However, I have partially resolved this issue by enabling --reload-on-rss 200 for the uwsgi process. Previously, the htop utility indicated 42.7% RSS memory usage for 2 uWSGI processes. I have restarted the worker processes with the SIGINT signal.

Now my uwsgi command line looks like:

    % uwsgi --reload-on-rss 200 --gevent 100 --socket localhost:8000 --with-file /path/to/file.uwsgi --threads 2 --processes 4 --master --daemonize /var/log/uwsgi.log

My framework is Django with django-hotsauce 0.8.2 and werkzeug. The web server is nginx using uWSGI with the gevent pooling handler.

Etienne

On 2017-12-06 at 10:00, Alan Gauld via Tutor wrote:
> On 06/12/17 09:21, Etienne Robillard wrote:
>> Hi I think my wsgi application is leaking and I would like to debug it.
>> What is the best way to profile memory usage in a running wsgi app?
>
> This is probably a bit advanced for the tutor list, you might get a
> better response on the main Python list. But to get a sensible answer
> you need to provide more data:
>
> What OS and Python version?
> What toolset/framework are you using?
> What measurements lead you to suspect a memory leak?
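For the worker processes guppy can see, a hedged sketch of taking an in-process heap snapshot on Python 2.7 (assuming guppy is installed; it only inspects the Python heap of the process it runs in, which is consistent with it not showing the master's stats):

    from guppy import hpy

    h = hpy()
    h.setrelheap()    # measure growth relative to this point
    # ... handle some requests ...
    print h.heap()    # per-type breakdown of objects allocated since setrelheap()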
Re: [Tutor] How to debug a memory leak in a wsgi application?
Hi James,

Thanks for your reply. Are you suggesting that under Linux the malloc() glibc library call is more memory efficient than using pymalloc?

Best regards,
Etienne

On 2017-12-13 at 12:27, James Chapman wrote:
> Why pymalloc? I presume this means you're using ctypes, which means I
> have more questions. If you're allocating your own blocks of memory
> then you need to free them too. I.e., does each call to pymalloc have
> a corresponding call to pyfree? Is the overhead of Python's built-in
> malloc really a problem? Are you changing pointers before you've freed
> the corresponding block of memory?
>
> There are many ways to create a memory leak, all of them eliminated by
> letting Python handle your memory allocations.
>
> But, back to your original question, check out "valgrind".
>
> HTH
>
> --
> James
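A pure-Python check that complements the valgrind suggestion and needs no external tools (a hedged sketch): the standard gc module can report objects the collector finds but cannot free, which catches one common class of leaks in Python 2, reference cycles involving __del__.

    import gc

    gc.set_debug(gc.DEBUG_LEAK)  # make the collector report leaked objects
    # ... exercise the suspected code path ...
    gc.collect()
    for obj in gc.garbage:       # objects Python could not reclaim
        print type(obj)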
Re: [Tutor] How to debug a memory leak in a wsgi application?
Hi again James,

On 2017-12-14 at 04:44, James Chapman wrote:
> No, I'm saying you shouldn't need to make any kind of malloc calls
> manually. Python handles memory allocation and deallocation on your
> behalf.

All I did was install a precompiled Python 2.7 build from the Debian repository. I believe it was built with pymalloc and debugging.

> Why do you need to call pymalloc?

I have not yet taken the time to manually compile Python without pymalloc.

> Are you using ctypes?

No.

> And if you are I presume this is then to make C-calls into a shared
> library?

I use Cython instead of ctypes. I'm guessing the memory leak was not caused by the Cython-generated C code, but by the uWSGI backend.

Cheers,
Etienne
[Tutor] How to create a python extension module from a shared library?
Hi all,

I want to build a CPython extension module for the libuwsgi.so shared library included in uWSGI. My objective is to allow unconditional access to this shared library using the standard Python interpreter, for introspection purposes and for extending uWSGI.

I have investigated several ways to do this:

1. ctypes

Probably the most straightforward way to load the shared library:

    >>> from ctypes import CDLL
    >>> lib = CDLL('./libuwsgi.so')

However, this method does not properly reflect C functions and variables to their respective Python attributes...

2. CFFI

Next I tried to parse the uwsgi.h file with CFFI:

    >>> from cffi import FFI
    >>> ffi = FFI()
    >>> lib = ffi.cdef(open('./uwsgi.h').read())

However, since preprocessor directives are not supported in CFFI's cdef(), it's not possible to parse the header file.

3. pycparser/clang

For this experiment, I wanted to parse the C header file to generate an Abstract Syntax Tree (AST), using clang as the preprocessor:

    >>> from pycparser import parse_file
    >>> ast = parse_file('./uwsgi.h', use_cpp=True, cpp_path='clang')
    >>> ast.show()
    FileAST: (at None)

Ideally, it would be really cool if CFFI could parse a C header and generate an AST with libclang. Another possibility I haven't explored yet is to use cppyy.

So, could you please advise on the most robust approach to reflect a shared library into a Python extension module?

Regards,
Etienne
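On the ctypes route: ctypes never reflects signatures on its own; a CDLL resolves symbol names lazily and assumes every function returns int. A hedged sketch of annotating one function by hand (uwsgi_log here is an assumption about libuwsgi.so's exports; substitute a symbol your build actually provides):

    from ctypes import CDLL, c_char_p

    lib = CDLL('./libuwsgi.so')

    # declare the real signature by hand, otherwise arguments and return
    # values may be passed or interpreted incorrectly
    lib.uwsgi_log.argtypes = [c_char_p]  # assumed signature
    lib.uwsgi_log.restype = None

    lib.uwsgi_log(b"hello from ctypes")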