pygame and music

2005-02-09 Thread maxime
Hi, I am trying to develop a game in Python with pygame.
In my game I play music (a .mid file, with pygame.mixer.music), but
sometimes I need to speed it up, and I don't see how to do that with
pygame. Is it possible? If not, do you know another Python music
library that can do that?
Thanks a lot
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: cache-like structure

2008-08-07 Thread maxime


konstantin wrote:
> Hi,
Hi
>...
> - are there better ways to do this (or any ready recipes)


Did you consider using the sched module (not threaded)?
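
For instance, a minimal sketch of the idea (the dict cache, the put()
helper and the 30-second TTL are all illustrative):

import sched
import time

cache = {}
scheduler = sched.scheduler(time.time, time.sleep)

def put(key, value, ttl=30):
    cache[key] = value
    # evict the key after ttl seconds; pop with a default so a repeated
    # eviction of the same key is harmless
    scheduler.enter(ttl, 1, cache.pop, (key, None))

put('spam', 42, ttl=5)
scheduler.run()  # single-threaded: blocks until the event queue is empty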

Cheers,

Maxime
--
http://mail.python.org/mailman/listinfo/python-list


Re: cache-like structure

2008-08-07 Thread maxime
On Aug 7, 7:00 pm, maxime <[EMAIL PROTECTED]> wrote:
> konstantin wrote:
> > Hi,
> Hi
> >...
> > - are there better ways to do this (or any ready recipes)
>
> Did you consider using the sched module (not threaded)?
>
> Cheers,
>
> Maxime

If you need threading, what I would do is implement a _remove method
like:

def _remove(self, rec):
    # avoid a __-prefixed name here (name mangling), almost always an error
    self.records.remove(rec)

the Timer would become:

Timer(ttl, self._remove, [(t, item)])  # t is the time that you got before

And if you change the deque for a Queue.Queue, you don't even need a
Lock...
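
Putting the pieces together, a minimal sketch of the threaded variant
(the class layout and names are my guesses at your structure):

import threading
import time
from collections import deque

class TTLCache(object):
    def __init__(self, ttl):
        self.ttl = ttl
        self.records = deque()
        self.lock = threading.Lock()

    def add(self, item):
        rec = (time.time(), item)
        self.lock.acquire()
        try:
            self.records.append(rec)
        finally:
            self.lock.release()
        # remove the record again after ttl seconds
        threading.Timer(self.ttl, self._remove, [rec]).start()

    def _remove(self, rec):
        self.lock.acquire()
        try:
            self.records.remove(rec)
        finally:
            self.lock.release()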

Cheers,

Maxime
--
http://mail.python.org/mailman/listinfo/python-list


Re: Why no '|' operator for dict?

2018-02-05 Thread Maxime S
2018-02-05 9:14 GMT+01:00 Ian Kelly :
> On Mon, Feb 5, 2018 at 12:35 AM, Frank Millman  wrote:
>> 2. Is there a better way to do what I want?
>
> The dict.items() view is explicitly set-like and can be unioned, so
> you can do this:
>
> py> dict(d1.items() | d2.items())
>
> As to the question of which value will appear in the union in the case
> of duplicate keys, it will be whichever one arbitrarily appears later
> in the iteration order of the intermediate set.

Since Python 3.5, it is also possible to use PEP 448 generalized unpacking:

dict([*d1.items(), *d2.items()])

In that case the value that appears for duplicate keys is better
defined: it is the one from the last dictionary.
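
For example:

d1 = {'a': 1, 'b': 2}
d2 = {'b': 3, 'c': 4}
dict([*d1.items(), *d2.items()])
# -> {'a': 1, 'b': 3, 'c': 4}  ('b' comes from d2, the last dictionary)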
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: comparing 2 sequences of DNA output the silent and non mutations

2016-10-30 Thread Maxime S
2016-10-29 21:38 GMT+02:00 :
>
> Code:
>
> [...]
>
> for i in range (len(protein) & len(seq1)) :
>
> if protein[i] != mutantPRO[i] :
>print (protein[i] + str(i) + mutantPRO[i])
>A+= 1
> else:
> if seq1[i:i+3] != mutant[i:i+3]:
>  print(protein[i] + str(i) + mutantPRO[i] + ' Silent mutation ')
>  print(seq1[i:i+3] + mutant[i:i+3])
>  B+= 1

Hi,

The problem here is that you are trying to use a single variable for two
different indexes (one into the protein, one into the DNA). Instead, you
need to do something like this:

# i indexes the protein, j indexes the DNA
for i in range(len(protein)):
    j = i * 3
    if protein[i] != mutantPRO[i]:
        print(protein[i] + str(i) + mutantPRO[i])
        A += 1
    else:
        if seq1[j:j+3] != mutant[j:j+3]:
            print(protein[i] + str(i) + mutantPRO[i] + ' Silent mutation ')
            print(seq1[j:j+3] + mutant[j:j+3])
            B += 1
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: UserList - which methods needs to be overriden?

2016-06-10 Thread Maxime S
2016-06-10 10:37 GMT+02:00 Peter Otten <[email protected]>:
>
> Nagy László Zsolt wrote:
>
> > I'm not sure wich one is the best. Peter wrote that UserList was left in
> > collections only for backward compatiblity. This might be a point
>
> I'll take that back. I looked around and found no evidence for my claim.
> Only MutableString was removed during the transition to Python 3.
>
> Sorry for the confusion.
>
>

Up to Python 2.6, the docs of UserList had a note stating that:

This module is available for backward compatibility only. If you are
writing code that does not need to work with versions of Python earlier
than Python 2.2, please consider subclassing directly from the built-in
list type.

This was changed in Python 2.7 to:

When Python 2.2 was released, many of the use cases for this class were
subsumed by the ability to subclass list directly. However, a handful of
use cases remain.

So I think the intention was that the ability to subclass list would remove
the need for UserList, but it was later realised that this is not entirely
the case.
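
One of the remaining use cases is that list's own operators return plain
lists, while UserList preserves the subclass. A quick illustration:

from collections import UserList

class MyList(list):
    pass

class MyUserList(UserList):
    pass

print(type(MyList([1]) + MyList([2])))          # <class 'list'>
print(type(MyUserList([1]) + MyUserList([2])))  # <class '__main__.MyUserList'>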
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: psutil.boot_time() ... doesn't ?

2019-11-06 Thread Maxime S
Hello,

You may want to read PEP 418, which nicely summarizes the different clocks
available on each platform and their limitations.

It looks like CLOCK_BOOTTIME is what you want, but it is only available on
Linux.
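
For example, on Linux with Python 3.7+ you can read it directly (a minimal
sketch):

import time

# seconds since boot, including time spent suspended; unaffected by
# changes to the system date
uptime = time.clock_gettime(time.CLOCK_BOOTTIME)
print("up for %.0f seconds" % uptime)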

Regards,

Maxime.

On Wed, Nov 6, 2019 at 18:23, R.Wieser wrote:

> Hello all,
>
> I was doing a "let's print some time-related data", and also displayed the
> result of "psutil.boot_time()".
>
> Somewhere while doing that I saw that my clock was off, so I used the
> "date"
> command to rectify it.
>
> The thing is, after that the result of "psutil.boot_time()" was changed -
> and that I did (and do) not expect. :-(
> (Remark: the difference was exactly the same as the change I made with the
> "date" command).
>
> Question: Is there a way to retrieve the "last boot" time /without/ it
> getting changed by ... whatever ?
>
> Regards,
> Rudy Wieser
>
>
>
> --
> https://mail.python.org/mailman/listinfo/python-list
>
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Fun with IO

2020-01-21 Thread Maxime S
Hi,

On Fri, Jan 17, 2020 at 20:11, Frank Millman wrote:


> It works perfectly. However, some pdf's can be large, and there could be
> concurrent requests, so I wanted to minimise the memory footprint. So I
> tried passing the client_writer directly to the handler -
>
>  await pdf_handler(client_writer)
>  client_writer.write(b'\r\n')
>
> It works! ReportLab accepts client_writer as a file-like object, and
> writes to it directly. I cannot use chunking, so I just let it do its
> thing.
>
> Can anyone see any problem with this?
>
>
If the socket is slower than the PDF generation (which is probably always
the case, unless you have a very fast network), the data will still have to
be buffered in memory (in this case in the writer's buffer). Since
writer.write() is non-blocking but is not a coroutine, it has to buffer.
There is an interesting blog post about this that I recommend reading:
https://lucumr.pocoo.org/2020/1/1/async-pressure/

Unfortunately, there is no way to avoid buffering the entire pdf in memory
without modifying reportlab to make it async-aware.

This version is still better than the one with BytesIO, though, because in
that version the pdf was buffered twice: once in the BytesIO and once in
the writer. You can fix that by calling await writer.drain() after each
write, and then the two versions are essentially equivalent.
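
For reference, the drain pattern looks like this (a minimal sketch;
send_chunks and its arguments are illustrative):

async def send_chunks(writer, chunks):
    for chunk in chunks:
        writer.write(chunk)
        await writer.drain()  # wait here if the peer is reading slowly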

Regards,

Maxime.
-- 
https://mail.python.org/mailman/listinfo/python-list


Interpreter Python 3.8 not there to select from PyCharm

2020-02-17 Thread Maxime Albi
I'm very new to Python and want to learn the basics.
I downloaded and installed Python 3.8 and PyCharm on my Windows 10 machine.
All good.

I launched PyCharm, started a new project, and tried to select an
interpreter such as Python 3.8, but no interpreter was available for me to
select from the drop-down menu.

Are there any settings I need to configure?

Thanks,
Maxime

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Is there a log file that tracks every statement that is being executed when a program is running?

2020-10-25 Thread Maxime S
Hi,

You can use the trace module for that:
https://docs.python.org/3.8/library/trace.html
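
For example (main() here is just a stand-in for your own entry point):

import trace

def main():
    total = 0
    for i in range(3):
        total += i
    print(total)

tracer = trace.Trace(trace=True, count=False)
tracer.run('main()')  # echoes every executed line to stdout

The same effect is available from the command line with
"python -m trace --trace yourscript.py".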

Personally, I tend to put print statements at strategic places instead; I
find that easier to analyse than a full trace, but YMMV.

Maxime


On Sun, Oct 25, 2020 at 01:25, Steve wrote:

> This would seriously help troubleshooting for me.  I updated a data file
> and now my main program is choking on it.  When the program encounters an
> error, it dumps a bit of information to the screen for a few steps before
> the error but that is not enough.
>
>
>
>
> Footnote:
> English speakers on a roller coaster: "W"
> Spanish speakers on a rollercoaster:  "Nosotros"
>
> -Original Message-
> From: Python-list  On
> Behalf Of shrimp_banana
> Sent: Saturday, October 17, 2020 9:47 PM
> To: [email protected]
> Subject: Re: File Name issue
>
> On 10/17/20 4:12 PM, Steve wrote:
>  > The line:
>  > with open("HOURLYLOG.txt", 'r') as infile:
>  > works but, when I rename the file, the line:
>  > with open("HOURLY-LOG.txt", 'r') as infile:
>  > does not.  The complaint is: Cannot Assign to operator
>  > However, I have:
>  > BPM_O2s=open("BPM-O2-Readings.txt","a")
>  > And it works.
>  >
>  > At first, I thought the issue was due to having the - in the filename.
>  >
>  > Is there a fix or explanation for this?
>  > Steve
>
> I am unsure if this will help but you could try putting an r in front of
> the quotes to make it take raw data only.
> ie.
>
> with open(r"HOURLY-LOG.txt", 'r') as infile
> --
> https://mail.python.org/mailman/listinfo/python-list
>
> --
> https://mail.python.org/mailman/listinfo/python-list
>
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Question about asyncio and blocking operations

2016-01-28 Thread Maxime S
2016-01-28 17:53 GMT+01:00 Ian Kelly :

> On Thu, Jan 28, 2016 at 9:40 AM, Frank Millman  wrote:
>
> > The caller requests some data from the database like this.
> >
> >return_queue = asyncio.Queue()
> >sql = 'SELECT ...'
> >request_queue.put((return_queue, sql))
>
> Note that since this is a queue.Queue, the put call has the potential
> to block your entire event loop.
>
>
Actually, I don't think you need an asyncio.Queue.

You could use a simple deque as a buffer and call fetchmany() when it is
empty, like this (untested):

import asyncio
from collections import deque

class AsyncCursor:
    """Wraps a DB cursor and provides async methods for blocking operations"""
    def __init__(self, cur, loop=None):
        if loop is None:
            loop = asyncio.get_event_loop()
        # set internals via object.__setattr__, because our own
        # __setattr__ delegates to the wrapped cursor
        object.__setattr__(self, '_loop', loop)
        object.__setattr__(self, '_cur', cur)
        object.__setattr__(self, '_queue', deque())

    def __getattr__(self, attr):
        return getattr(self._cur, attr)

    def __setattr__(self, attr, value):
        setattr(self._cur, attr, value)

    async def execute(self, operation, params):
        # run_in_executor(executor, func, *args); None selects the
        # default executor
        return await self._loop.run_in_executor(
            None, self._cur.execute, operation, params)

    async def fetchall(self):
        return await self._loop.run_in_executor(None, self._cur.fetchall)

    async def fetchone(self):
        return await self._loop.run_in_executor(None, self._cur.fetchone)

    async def fetchmany(self, size=None):
        return await self._loop.run_in_executor(
            None, self._cur.fetchmany, size)

    def __aiter__(self):
        return self

    async def __anext__(self):
        if not self._queue:  # a deque has no .empty() method
            rows = await self.fetchmany()
            if not rows:
                raise StopAsyncIteration()
            self._queue.extend(rows)
        return self._queue.popleft()
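
It could then be used like this (a sketch; conn is a DB-API connection and
the table name is a placeholder):

async def dump_table(conn):
    cur = AsyncCursor(conn.cursor())
    await cur.execute('SELECT * FROM sometable', ())
    async for row in cur:
        print(row)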
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Question about asyncio and blocking operations

2016-01-29 Thread Maxime Steisel
On Jan 28, 2016 at 22:52, "Ian Kelly" wrote:
>
> On Thu, Jan 28, 2016 at 2:23 PM, Maxime S  wrote:
> >
> > 2016-01-28 17:53 GMT+01:00 Ian Kelly :
> >>
> >> On Thu, Jan 28, 2016 at 9:40 AM, Frank Millman wrote:
> >>
> >> > The caller requests some data from the database like this.
> >> >
> >> >return_queue = asyncio.Queue()
> >> >sql = 'SELECT ...'
> >> >request_queue.put((return_queue, sql))
> >>
> >> Note that since this is a queue.Queue, the put call has the potential
> >> to block your entire event loop.
> >>
> >
> > Actually, I don't think you actually need an asyncio.Queue.
> >
> > You could use a simple deque as a buffer, and call fetchmany() when it
> > is empty, like this (untested):
>
> True. The asyncio Queue is really just a wrapper around a deque with
> an interface designed for use with the producer-consumer pattern. If
> the producer isn't a coroutine then it may not be appropriate.
>
> This seems like a nice suggestion. Caution is advised if multiple
> cursor methods are executed concurrently since they would be in
> different threads and the underlying cursor may not be thread-safe.
> --
> https://mail.python.org/mailman/listinfo/python-list

Indeed, the run_in_executor call should probably be protected by an
asyncio.Lock.

But it is a pretty strange idea to call two fetch*() methods concurrently
anyway.
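
A minimal sketch of that idea (names are illustrative): serialize every
executor call that touches the shared cursor, since DB-API cursors are
generally not thread-safe.

import asyncio

class LockedCursor:
    def __init__(self, cur, loop):
        self._cur = cur
        self._loop = loop
        self._lock = asyncio.Lock()

    async def fetchone(self):
        async with self._lock:
            return await self._loop.run_in_executor(None, self._cur.fetchone)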
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Cannot step through asynchronous iterator manually

2016-01-30 Thread Maxime S
2016-01-30 11:51 GMT+01:00 Frank Millman :

> "Chris Angelico"  wrote in message
> news:CAPTjJmoAmVNTCKq7QYaDRNQ67Gcg9TxSXYXCrY==s9djjna...@mail.gmail.com...
>
>
>> On Sat, Jan 30, 2016 at 7:22 PM, Frank Millman 
>> wrote:
>> > We had a recent discussion about the best way to do this, and ChrisA
>> > suggested the following, which I liked -
>> >
>> >cur.execute('SELECT ...')
>> >try:
>> >    row = next(cur)
>> >except StopIteration:
>> >    # row does not exist
>> >else:
>> >    try:
>> >        next_row = next(cur)
>> >    except StopIteration:
>> >        # row does exist
>> >    else:
>> >        # raise exception
>> >
>> > Now that I have gone async, I want to do the same with an asynchronous
>> > iterator.
>>
>
>
I might be a bit off-topic, but why don't you simply use cursor.rowcount?

For a pure iterator-based solution, I would do something like this
(admittedly a bit cryptic, but iterator-based solutions often are :-) ):

async def get_unique(ait):
    # NotEnoughRows / TooManyRows would be user-defined exceptions
    async for row in ait:
        break
    else:
        raise NotEnoughRows()
    async for _ in ait:
        raise TooManyRows()
    return row
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Broken IF statement

2015-01-13 Thread Maxime S
2015-01-12 22:19 GMT+01:00 :
>
> https://bpaste.net/show/93be9e15634b <--- Line 19 through 22
>
> At all times, my program is assigning the object priority of 0, even if
> one already exists in the database with a priority of 0 (it's supposed to
> be assigning it a priority of 1 in those cases).
>
> I'm a non developer trying to fix a freelancer's code. Would anybody be
> able to suggest changes to the IF logic that might be able to fix it,
> assuming the statements in the code provided look flawed?
>
> Thanks...
> --
> https://mail.python.org/mailman/listinfo/python-list


This line:

obj, created = SocialAccount.objects.get_or_create(...)

suggests you are using Django. If that is the case, you have to call
obj.save() after changing the priority to send the new value to the DB.
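
Something along these lines (a guess at the intended logic; the lookup
fields are placeholders):

obj, created = SocialAccount.objects.get_or_create(uid=uid, provider=provider)
if not created:
    # a record already existed, so this one gets priority 1
    obj.priority = 1
obj.save()  # without this, the new priority never reaches the database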

Best,

Maxime
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Function decorator having arguments is complicated

2015-04-27 Thread Maxime S
On Mon, Apr 27, 2015 at 04:39, Makoto Kuwata wrote:
>
> If function decorator notation could take arguments,
> decorator definition would be more simple:
>
>   def multiply(func, n):
> def newfunc(*args, **kwargs):
>   return n * func(*args, **kwargs)
> return newfunc
>
>   @multiply 4  # ex: @decorator arg1, arg2, arg3
>   def f1(x, y):
> return x+y
>
>
> How do you think about this idea?
>

David Beazley has a nice trick [1] to allow optional arguments in decorators:

import logging
from functools import partial, wraps

log = logging.getLogger(__name__)

def logged(func=None, level=logging.DEBUG, message=None):
    if func is None:
        # called with arguments: return a decorator awaiting the function
        return partial(logged, level=level, message=message)

    @wraps(func)
    def wrapper(*args, **kwargs):
        # default the message to the decorated function's name
        log.log(level, message if message is not None else func.__name__)
        return func(*args, **kwargs)
    return wrapper

I think that solves your problem nicely, and it is quite readable.
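
For example, both the bare and the parameterized forms then work:

@logged                    # no arguments: logs at DEBUG
def add(x, y):
    return x + y

@logged(level=logging.INFO, message='spam was called')
def spam():
    print('Spam!')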

[1] Amongst a heap of other cool tricks, in his Python Cookbook

Regards,

Maxime
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Anything better than asyncio.as_completed() and asyncio.wait() to manage execution of large amount of tasks?

2014-07-16 Thread Maxime Steisel
2014-07-15 14:20 GMT+02:00 Valery Khamenya :
> Hi,
>
> both asyncio.as_completed() and asyncio.wait() work with lists only. No
> generators are accepted. Is there anything similar to those functions that
> pulls Tasks/Futures/coroutines one-by-one and processes them in a limited
> task pool?


Something like this (adapted from as_completed) should do the job:

import asyncio
from concurrent import futures

def parallelize(tasks, *, loop=None, max_workers=5, timeout=None):
    loop = loop if loop is not None else asyncio.get_event_loop()
    workers = []
    pending = set()
    done = asyncio.Queue(maxsize=max_workers)
    exhausted = False
    timeout_handle = None

    @asyncio.coroutine
    def _worker():
        nonlocal exhausted
        while not exhausted:
            try:
                # pull the next item lazily and wrap it in a Task, so at
                # most max_workers of them are running at any given time
                t = asyncio.async(next(tasks))
                pending.add(t)
                yield from t
                yield from done.put(t)
                pending.remove(t)
            except StopIteration:
                exhausted = True

    def _on_timeout():
        for f in workers:
            f.cancel()
        workers.clear()
        # wake up _wait_for_one()
        done.put_nowait(None)

    @asyncio.coroutine
    def _wait_for_one():
        f = yield from done.get()
        if f is None:
            raise futures.TimeoutError()
        return f.result()

    workers = [asyncio.async(_worker()) for i in range(max_workers)]

    if workers and timeout is not None:
        timeout_handle = loop.call_later(timeout, _on_timeout)

    while not exhausted or pending or not done.empty():
        yield _wait_for_one()

    if timeout_handle is not None:  # only set when a timeout was given
        timeout_handle.cancel()
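
A hypothetical usage sketch (fetch() and urls are placeholders; coroutines
are pulled lazily from the generator, so at most max_workers run at once):

@asyncio.coroutine
def fetch(url):
    yield from asyncio.sleep(0.1)  # stand-in for real I/O
    return url

@asyncio.coroutine
def crawl(urls):
    results = []
    for f in parallelize(fetch(u) for u in urls):
        results.append((yield from f))
    return results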
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: asyncio with map&reduce flavor and without flooding the event loop

2014-08-06 Thread Maxime Steisel
2014-08-03 16:01 GMT+02:00 Valery Khamenya :
> Hi all
>
> [snip]
>
> Consider a task like crawling the web starting from some web-sites. Each
> site leads to generation of new downloading tasks in exponential(!)
> progression. However we don't want neither to flood the event loop nor to
> overload our network. We'd like to control the task flow. This is what I
> achieve well with modification of nice Maxime's solution proposed here:
> https://mail.python.org/pipermail/python-list/2014-July/675048.html
>
> Well, but I'd need as well a very natural thing, kind of map() & reduce() or
> functools.reduce() if we are on python3 already. That is, I'd need to call a
> "summarizing" function for all the downloading tasks completed on links from
> a page. This is where i fail :(

Hi Valery,

With the modified as_completed, you can write map and reduce
primitives quite naturally.

It could look like this:



def async_map(corofunc, *iterables):
    """
    Equivalent to map(corofunc, *iterables), except that corofunc must be
    a coroutine function and is executed asynchronously.

    This is not a coroutine, just a normal generator yielding Task
    instances.
    """
    for args in zip(*iterables):
        yield asyncio.async(corofunc(*args))

@asyncio.coroutine
def async_reduce(corofunc, futures, initial=0):
    """
    Equivalent to functools.reduce(corofunc, [f.result() for f in futures]),
    except that corofunc must be a coroutine function and future results
    can be evaluated out-of-order.

    This function is a coroutine.
    """
    result = initial
    # as_completed here is the modified version from the July post linked
    # above, which accepts a generator and a max_workers limit
    for f in as_completed(futures, max_workers=50):
        new_value = (yield from f)
        result = (yield from corofunc(result, new_value))
    return result

===
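
A hypothetical usage, summing the sizes of many pages while downloading at
most 50 at a time (fetch_size() and urls are placeholders):

@asyncio.coroutine
def fetch_size(url):
    yield from asyncio.sleep(0.1)  # stand-in for a real download
    return len(url)

@asyncio.coroutine
def add(acc, value):
    return acc + value

@asyncio.coroutine
def total_size(urls):
    return (yield from async_reduce(add, async_map(fetch_size, urls)))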

Best,

Maxime
-- 
https://mail.python.org/mailman/listinfo/python-list


Seg fault when launching my module through my C/C++ application

2010-04-13 Thread Maxime Boure
Hello everyone,

I made a small Python module to control NetworkManager and get some signals
from it.

My script works fine on its own, but when I launch it from my C/C++ program
it crashes in a certain function (here iface.GetDevices). Below is the
function that crashes, followed by my gdb backtrace:

def get_device_path_by_type(type):
    proxy = bus.get_object('org.freedesktop.NetworkManager',
                           '/org/freedesktop/NetworkManager')
    iface = dbus.Interface(proxy,
                           dbus_interface='org.freedesktop.NetworkManager')
    print "iface : %s" % iface
    for d in iface.GetDevices():
        print "--> after getdevices"
        proxy = bus.get_object('org.freedesktop.NetworkManager', d)
        iface = dbus.Interface(proxy,
                               dbus_interface='org.freedesktop.DBus.Properties')
        devtype = iface.Get('org.freedesktop.NetworkManager.Device',
                            'DeviceType')
        print "type : %d" % devtype
        if devtype == type:
            print "%s" % d
            return d
    print "return none"
    return None
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
iface : <Interface <ProxyObject wrapping <...> :1.0
/org/freedesktop/NetworkManager at 0x3fae2710> implementing
'org.freedesktop.NetworkManager' at 0x3fae27d0>

Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x4035a460 (LWP 2214)]
0x3b8af144 in sem_post@@GLIBC_2.4 () from /lib/libpthread.so.0
(gdb) bt
#0  0x3b8af144 in sem_post@@GLIBC_2.4 () from /lib/libpthread.so.0
#1  0x3b2ca9b4 in PyThread_release_lock () from /usr/lib/libpython2.6.so.1.0
#2  0x3b2987c4 in PyEval_ReleaseLock () from /usr/lib/libpython2.6.so.1.0
#3  0x3b2bd518 in PyThreadState_DeleteCurrent ()
   from /usr/lib/libpython2.6.so.1.0
#4  0x3f9bce50 in ?? () from /usr/lib/pyshared/python2.6/_dbus_bindings.so

Just so you know, I use other functions from that script that work well
with my C/C++ application. Do you have any idea why there would be a
threading problem? And why a seg fault?

Thank you for your input

Regards

Maxime

-- 
http://mail.python.org/mailman/listinfo/python-list