list classes in package

2008-01-16 Thread Dmitry
Hi All,

I'm trying to develop a Python application, and
need to solve one problem. I need to list all classes defined in a
package (not a module!).
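
For context, a sketch of one possible approach (pkgutil + inspect; it assumes
the package object is already imported):

import inspect
import pkgutil

def list_classes(package):
    classes = []
    for loader, name, is_pkg in pkgutil.iter_modules(package.__path__):
        module = __import__('%s.%s' % (package.__name__, name), fromlist=[name])
        for attr_name, obj in inspect.getmembers(module, inspect.isclass):
            if obj.__module__ == module.__name__:   # skip classes merely imported
                classes.append(obj)
    return classes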

Could anybody please show me a more convenient (correct) way to
implement this?

Thanks,
Dmitry
-- 
http://mail.python.org/mailman/listinfo/python-list


[ANN] ErlPort (library to connect Erlang to Python) 1.0.0alpha released

2013-06-11 Thread Dmitry Vasilev

Hi all,

I've just released ErlPort version 1.0.0alpha. ErlPort is a library for 
Erlang which helps connect Erlang to a number of other programming 
languages (currently Python and Ruby are supported). Apart from using 
ErlPort as a tool to call Python/Ruby functions from Erlang, it can also 
be used as middleware for Python/Ruby instances.


Check http://erlport.org for more details.

The new version of ErlPort is basically a complete redesign and rewrite 
of the project. The main changes in this version are:


- Redesigned as Erlang application
- Added support for all recent (2.5 and higher) versions of Python
- Added support for all recent (1.8.6 and higher) versions of Ruby
- Added support for custom data types

Why can ErlPort be interesting for Python developers?
-----------------------------------------------------

Apart from calling Python functions from Erlang or Erlang functions from 
Python, ErlPort also allows you to use Erlang as middleware for Python. 
The following is a small inter-process recursion example (place it in a 
roll.py file). Actually this example will be even simpler with ErlPort 
1.0.0.


from os import getpid

from erlport.erlterms import Atom
from erlport.erlang import call, self

def roll(n, procs=None):
    if procs is None:
        procs = []
    procs.append(self())
    print "Hello from %s" % getpid()
    if n > 1:
        status, pid = call(Atom("python"), Atom("start"), [])
        if status == "ok":
            procs = call(Atom("python"), Atom("call"),
                [pid, Atom("roll"), Atom("roll"), [n - 1, procs]])
    return procs

And it can be used with Erlang shell and ErlPort like this:

1> {ok, P} = python:start().
{ok,<0.34.0>}
2> python:call(P, roll, roll, [5]).
Hello from 7747
Hello from 7749
Hello from 7751
Hello from 7753
Hello from 7755
[<0.34.0>,<0.37.0>,<0.40.0>,<0.43.0>,<0.46.0>]

--
Dmitry Vasiliev 
http://hlabs.org
http://twitter.com/hdima
--
http://mail.python.org/mailman/listinfo/python-list


python+libxml2+scrapy AttributeError: 'module' object has no attribute 'HTML_PARSE_RECOVER'

2012-08-15 Thread Dmitry Arsentiev
Hello.

Has anybody already met a problem like this? -
AttributeError: 'module' object has no attribute 'HTML_PARSE_RECOVER'

When I run scrapy, I get

  File "/usr/local/lib/python2.7/site-packages/scrapy/selector/factories.py",
line 14, in 
libxml2.HTML_PARSE_NOERROR + \
AttributeError: 'module' object has no attribute 'HTML_PARSE_RECOVER'


When I run
 python -c 'import libxml2; libxml2.HTML_PARSE_RECOVER'

I get
Traceback (most recent call last):
  File "<string>", line 1, in <module>
AttributeError: 'module' object has no attribute 'HTML_PARSE_RECOVER'

How can I cure it?

Python 2.7
libxml2-python 2.6.9
2.6.11-gentoo-r6
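
A hedged diagnostic sketch (to see which binding is actually imported and
which HTML_* constants it exposes; very old libxml2 builds may simply predate
these constants, in which case upgrading libxml2/libxml2-python is the fix):

import libxml2
print libxml2.__file__                                     # which module got imported
print [n for n in dir(libxml2) if n.startswith('HTML_')]   # available HTML_* options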


I will be grateful for any help.

DETAILS:

scrapy crawl lgz -o items.json -t json
Traceback (most recent call last):
  File "/usr/local/bin/scrapy", line 4, in <module>
    execute()
  File "/usr/local/lib/python2.7/site-packages/scrapy/cmdline.py", line 112, in execute
    cmds = _get_commands_dict(inproject)
  File "/usr/local/lib/python2.7/site-packages/scrapy/cmdline.py", line 37, in _get_commands_dict
    cmds = _get_commands_from_module('scrapy.commands', inproject)
  File "/usr/local/lib/python2.7/site-packages/scrapy/cmdline.py", line 30, in _get_commands_from_module
    for cmd in _iter_command_classes(module):
  File "/usr/local/lib/python2.7/site-packages/scrapy/cmdline.py", line 21, in _iter_command_classes
    for module in walk_modules(module_name):
  File "/usr/local/lib/python2.7/site-packages/scrapy/utils/misc.py", line 65, in walk_modules
    submod = __import__(fullpath, {}, {}, [''])
  File "/usr/local/lib/python2.7/site-packages/scrapy/commands/shell.py", line 8, in <module>
    from scrapy.shell import Shell
  File "/usr/local/lib/python2.7/site-packages/scrapy/shell.py", line 14, in <module>
    from scrapy.selector import XPathSelector, XmlXPathSelector, HtmlXPathSelector
  File "/usr/local/lib/python2.7/site-packages/scrapy/selector/__init__.py", line 30, in <module>
    from scrapy.selector.libxml2sel import *
  File "/usr/local/lib/python2.7/site-packages/scrapy/selector/libxml2sel.py", line 12, in <module>
    from .factories import xmlDoc_from_html, xmlDoc_from_xml
  File "/usr/local/lib/python2.7/site-packages/scrapy/selector/factories.py", line 14, in <module>
    libxml2.HTML_PARSE_NOERROR + \
AttributeError: 'module' object has no attribute 'HTML_PARSE_RECOVER'


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: dash/underscore on name of package uploaded on pypi

2022-06-01 Thread Dmitry Labazkin
On Friday, March 1, 2019 at 12:08:00 AM UTC+3, Terry Reedy wrote:
> On 2/28/2019 11:09 AM, ast wrote: 
> > Hello 
> > 
> > I just uploaded a package on pypi, whose name is "arith_lib" 
> > 
> > The strange thing is that on pypi the package is renamed "arith-lib". 
> > The underscore is substituted with a dash 
> > 
> > If we search for this package: 
> > 
> > pip search arith 
> > 
> > arith-lib (2.0.0) - A set of functions for miscellaneous arithmetic 
> > (so a dash) 
> > 
> > For installation both: 
> > 
> > pip install -U arith_lib 
> > pip install -U arith-lib 
> > 
> > are working well 
> > 
> > and in both cases I got a directory with an underscore 
> > 
> > C:\Program Files\Python36-32\Lib\site-packages 
> > 
> > 28/02/2019  16:57  arith_lib 
> > 28/02/2019  16:57  arith_lib-2.0.0.dist-info 
> > 
> > What happens ?
> To expand on Paul's answer. 
> 
> English uses '-' both as a connector for compound names and as a 
> subtraction operator. Context usually makes the choice obvious. But 
> context-free parsers must choose just one, and for computation, 
> subtraction wins. 'arith-lib' is parsed as (arith) - (lib). Many 
> algorithm languages use '_' instead of '-' as the compounder for 
> identifiers (object names). 
> 
> In addition, Python uses filenames minus the '.py' as identifiers for 
> imported modules. So if the repository allows '-' in package names, 
> installers must convert '-' to '_'. But if the repository allows 
> 'arith_lib' and 'arith-lib' to be distinct names for different packages, 
> both would be installed with the same file name. So the repository 
> standardizes on one form, and it went with English instead of Pythonese. 
> 
> -- 
> Terry Jan Reedy

Hi, 

Recently I wrote an article about packages vs distributions and also explored 
naming and normalization:
https://labdmitriy.github.io/blog/distributions-vs-packages/#additional-experiments

In the section “Additional experiments” I got the expected normalization 
results, but in “Open questions” I found that for another package the URL is 
not normalized (the original URL is used), and the name is only partially 
normalized (the underscore is replaced by a hyphen but the dot remains 
unchanged).

Could you please explain this behavior?
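
For reference, the normalization rule PEP 503 specifies for index names
collapses runs of '-', '_' and '.' to a single '-' and lowercases:

import re

def normalize(name):
    return re.sub(r"[-_.]+", "-", name).lower()

print(normalize("arith_lib"))     # arith-lib
print(normalize("My.Weird_Pkg"))  # my-weird-pkg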

Thank you.
-- 
https://mail.python.org/mailman/listinfo/python-list


SmallTalk-like interactive environment on base of cPython 2.7.x + wx

2016-08-14 Thread Dmitry Ponyatov
Can anybody recommend some links to tutorials on making custom dynamic 
languages or object systems on top of CPython 2?

I want some interactive dynamic object environment with a Smalltalk look&feel but 
with Python syntax.

Other tutorials I'm interested in cover reflection, dynamic bytecode 
(de)compilation, using the internal Python parser (for syntax highlighting, for 
example), metacompilation (?) and making dynamic object VMs on top of CPython.
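
As a small aside on the parser point, a sketch of driving the stdlib tokenizer
for syntax highlighting (Python 2.7; token names such as NAME/OP/NUMBER can be
mapped to colors):

import token
import tokenize
from StringIO import StringIO

source = "x = 42 + y  # comment"
for tok_type, tok_str, start, end, line in tokenize.generate_tokens(StringIO(source).readline):
    print token.tok_name[tok_type], repr(tok_str)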
-- 
https://mail.python.org/mailman/listinfo/python-list


some question about tp_basicsize

2005-02-10 Thread Dmitry Belous
Hi, All

I use C++ to create new types (inherited from PyTypeObject)
and objects (inherited from PyObject), and a virtual
destructor to destroy objects. sizeof() is different
for different objects, and therefore I don't know what I must do
with tp_basicsize.

Will the following source code work?
Must I set tp_basicsize to the right size? (Can I use zero
for tp_basicsize?)

static void do_instance_dealloc(PyObject* obj) {
  if(obj->ob_type == &mytype_base)
    delete static_cast<myobject_base*>(obj);
}

PyTypeObject mytype_base = {
...
  0, /*tp_basicsize*/ /*i don't know size of object*/
...
  &do_instance_dealloc, /*tp_dealloc*/
...
};

class myobject_base : public PyObject {
public:
  myobject_base() : ob_type(mytype_base) {}
  virtual ~myobject_base() {}
};

class myobject_specific : public myobject_base {
public:
  std::string name;
  myobject_specific() : myobject_base() {}
};

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: brand new to python

2005-03-13 Thread Dmitry A.Lyakhovets
>   continue
> part = string.strip(part)
> if not part: continue
> 
> record = {}
> 
> lines = string.split(part, "\n")
> m = re.search("(.+) \((.+)\) ([EMAIL PROTECTED])", lines[0])
> if m:
>   record['name'] = string.strip(m.group(1))
>   record['handle'] = string.strip(m.group(2))
>   record['email'] = string.lower(string.strip(m.group(3)))
> 
>   flag = 0
>   addresslines = []
>   phonelines = []
>   for line in lines[1:]:
>     line = string.strip(line)
>     if not line:
>       flag = 1
>       continue
>     if flag == 0:
>       addresslines.append(line)
>     else:
>       phonelines.append(line)
>   record['phone'] = string.join(phonelines, "\n")
>   record['address'] = string.join(addresslines, "\n")
> 
>   for contacttype in contacttypes:
>     contacttype = string.lower(string.strip(contacttype))
>     contacttype = string.replace(contacttype, " contact", "")
>     contactDict[contacttype] = record
> 
> return contactDict
> 
> def ParseWhois_NetworkSolutions(page):
>   m = re.search("Domain Name: (.+)", page)
>   domain = m.group(1)
>   rec = DomainRecord(domain)
> 
>   m = re.search("Record last updated on (.+)\.", page)
>   if m: rec.lastupdated = m.group(1)
> 
>   m = re.search("Record created on (.+)\.", page)
>   if m: rec.created = m.group(1)
> 
>   m = re.search("Database last updated on (.+)\.", page)
>   if m: rec.databaseupdated = m.group(1)
> 
>   m = re.search("Registrant:", page)
>   if m:
>     i = m.end()
>     m = re.search("\n\n", page[i:])
>     j = m.start()
>     registrant = string.strip(page[i:i+j])
>     lines = string.split(registrant, "\n")
>     registrant = []
>     for line in lines:
>       line = string.strip(line)
>       if not line: continue
>       registrant.append(line)
>     rec.registrant = registrant[0]
>     rec.registrant_address = string.join(registrant[1:], "\n")
> 
>     m = re.search("(.+) \((.+)\)$", rec.registrant)
>     if m:
>       rec.registrant = m.group(1)
>       rec.domainid = m.group(2)
> 
>   m = re.search("Domain servers in listed order:\n\n", page)
>   if m:
>     i = m.end()
>     m = re.search("\n\n", page[i:])
>     j = m.start()
>     servers = string.strip(page[i:i+j])
>     lines = string.split(servers, "\n")
>     servers = []
>     for line in lines:
>       parts = string.split(string.strip(line))
>       if not parts: continue
>       servers.append((parts[0], parts[1]))
>     rec.servers = servers
> 
>   m = re.search("((?:(?:Administrative|Billing|Technical|Zone) Contact,?[ ]*)+:)\n", page)
>   if m:
>     i = m.start()
>     m = re.search("Record last updated on", page)
>     j = m.start()
>     contacts = string.strip(page[i:j])
> 
>     rec.contacts = _ParseContacts_NetworkSolutions(contacts)
> 
>   return rec
> 
> ## ---------------------------------------------------------------- ##
> 
> def ParseWhois(page):
>   if string.find(page, "Registrar..: Register.com (http://www.register.com)") != -1:
>     return ParseWhois_RegisterCOM(page)
>   else:
>     return ParseWhois_NetworkSolutions(page)
> 
> ## ---------------------------------------------------------------- ##
> 
> def usage(progname):
>   version = _version
>   print __doc__ % vars()
> 
> def main(argv, stdout, environ):
>   progname = argv[0]
>   list, args = getopt.getopt(argv[1:], "", ["help", "version", "test"])
> 
>   for (field, val) in list:
>     if field == "--help":
>       usage(progname)
>       return
>     elif field == "--version":
>       print progname, _version
>       return
>     elif field == "--test":
>       test()
>       return
> 
>   for domain in args:
>     try:
>       page = whois(domain)
>       print page
>     except NoSuchDomain, reason:
>       print "ERROR: no such domain %s" % domain
> 
> if __name__ == "__main__":
>   main(sys.argv, sys.stdout, os.environ)
> 
> -- 
> http://mail.python.org/mailman/listinfo/python-list
> 


-- 
Best regards,
Dmitry A. Lyakhovets  mailto:[EMAIL PROTECTED]
visit http://www.aikido-groups.ru -- Aikido Dojo site
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python CGI discussion Group

2005-03-15 Thread Dmitry A.Lyakhovets
On 15 Mar 2005 07:05:59 -0800
"Fuzzyman" <[EMAIL PROTECTED]> wrote:

> There is a `web design` group over on google-groups.
> http://groups-beta.google.com/group/wd
> 
> Its brief is for ``Discussion of web design (html, php, flash,
> wysiwig, cgi, perl, python, css, design concepts, etc.).``, but it's
> very quiet. I'd love to see it become a discussion forum for Python
> CGIs and associated issues (web design, the http protocol etc).

I'd also be happy to see that. Maybe somebody knows of an active discussion 
group on Python CGIs?

> 
> Regards,
> 
> Fuzzy
> http://www.voidspace.org.uk/python/cgi.shtml
> 
> -- 
> http://mail.python.org/mailman/listinfo/python-list
> 

-- 
Best regards,
Dmitry A. Lyakhovets  mailto:[EMAIL PROTECTED]
visit http://www.aikido-groups.ru -- Aikido Dojo site
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: MP3 - VBR - Frame length in time

2004-12-08 Thread Dmitry Borisov
"Ivo Woltring" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]
> Dear Pythoneers,
>
> I have this problem with the time calculations of a VBR (variable bit
> rate) encoded MP3.
> I want to make a daisy writer for blind people. To do this I have to know
> exactly what the length in time of an mp3 is. With CBR encoded files I
> have no real problems (at least with version 1 and 2), but with VBR
> encoded I get into trouble.
> I noticed that players like WinAMP and Windows Media player have this
> problem too.
> I don't mind having to read the whole mp3 to calculate, because
> performance is not a real issue with my app.
>
> Can anyone help me? I am really interested in technical information on VBR
> and MP3.
> Can anybody tell me the length in time of the different bitrates in all
> the versions of mp3 and all the layers.
>
> Tooling or links also really welcome. My own googling has helped me some
> but on the subject of VBR I get stuck.

Try mmpython.
It has something to deal with the VBR tags (Xing header).
Dmitry/
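
For the curious, a hedged sketch of the Xing/Info header trick itself (not
mmpython's API; it assumes the header sits in the first 4KB and MPEG-1
Layer III framing, i.e. 1152 samples per frame):

import struct

def vbr_duration(path, sample_rate=44100):
    data = open(path, 'rb').read(4096)
    for magic in ('Xing', 'Info'):
        i = data.find(magic)
        if i < 0:
            continue
        flags = struct.unpack('>I', data[i+4:i+8])[0]
        if flags & 1:   # frame-count field present
            frames = struct.unpack('>I', data[i+8:i+12])[0]
            return frames * 1152.0 / sample_rate
    return None   # no VBR header found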


-- 
http://mail.python.org/mailman/listinfo/python-list


A strange statement in the bisect documentation?

2015-03-06 Thread Dmitry Chichkov
I was looking over the documentation of the bisect module and encountered the 
following very strange statement there:

>From https://docs.python.org/2/library/bisect.html

...it does not make sense for the bisect() functions to have key or reversed 
arguments because that would lead to an inefficient design (successive calls to 
bisect functions would not "remember" all of the previous key lookups).

Instead, it is better to search a list of precomputed keys to find the index of 
the record in question...


Is it right that the documentation encourages us to use an O(N) algorithm [by 
making a copy of all keys] instead of using O(log(N)) bisect with 
key=attrgetter?  And claims that O(N) is more efficient than O(log(N))?  

Thanks
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: A strange statement in the bisect documentation?

2015-03-09 Thread Dmitry Chichkov
Steven,

I somewhat agree with you regarding the general analysis, yet I'm not 
quite sure about your analysis of the repeated bisect call code. In particular, 
in the sample that you've given:

data = [i/100 for i in range(1, 701, 7)]
data.sort(key=str)
keyed_data = [str(x) for x in data]

for x in many_x_values:
    key = str(x)
    pos = bisect.bisect(keyed_data, key)
    keyed_data.insert(pos, key)
    data.insert(pos, x)


As per the Python documentation, "doing inserts or pops from the beginning of a 
list is slow (because all of the other elements have to be shifted by one)", so 
the complexity here would be O(M*N)  [+ O(M*log(N))].

So this particular example looks like micro-optimization around bisect() while 
losing sight of the big picture. Inserts into a list are O(N) and in general 
more expensive than O(log(N)) bisect calls, so doing two inserts (into 
keyed_data and data) is worse than doing one, and storing and maintaining an 
extra list of keys is relatively expensive, and sometimes plain impossible.

The following statement, which agrees with the documentation, also doesn't seem 
very sound: "If you call bisect less than (10% of N) times, then it doesn't 
matter, because that's comparatively a small number and will be fast enough 
either way."

If N is large (say 10**10 records), even if you need to call bisect 
only once or twice, the speed may still matter a lot. Take the example of a 
container over a timestamped 1TB log file, where you need to navigate to a 
particular record. It would be a very slow operation indeed to create a copy 
of all keys, as the documentation suggests. Yet it would be very fast to 
just bisect to the right element, taking just O(log2(10**10)) ~ 34 reads.

So to me, it looks like the documentation's argument, "it does not make sense 
for the bisect() functions to have key or reversed arguments because that 
would lead to an inefficient design", is incorrect in general, because it 
suggests doing an O(N) operation instead of an O(log(N)) one in order to 
optimize a hypothetical scenario.

To give examples, note how it led you to an inefficient design in your sample 
code, doing *two* .insert() calls rather than just one. And I can easily see 
people writing something like .bisect([el.time for el in container]) just 
because they don't expect the container to ever become large. 

On Friday, March 6, 2015 at 6:24:24 PM UTC-8, Steven D'Aprano wrote:
> Dmitry Chichkov wrote:
> 
> > I was looking over documentation of the bisect module and encountered the
> > following very strange statement there:
> > 
> > From https://docs.python.org/2/library/bisect.html
> > 
> > ...it does not make sense for the bisect() functions to have key or
> > reversed arguments because that would lead to an inefficient design
> > (successive calls to bisect functions would not "remember" all of the
> > previous key lookups).
> > 
> > Instead, it is better to search a list of precomputed keys to find the
> > index of the record in question...
> > 
> > 
> > Is it right that the documentation encourages us to use an O(N) algorithm [by
> > making a copy of all keys] instead of using O(log(N)) bisect with
> > key=attrgetter?  And claims that O(N) is more efficient than O(log(N))?
> 
> Apparently :-)
> 
> 
> The documentation may not be completely clear, but what it is arguing is
> this:
> 
> If you are making repeated bisect calls, then using a key function is
> inefficient because the key gets lost after each call and has to be
> recalculated over and over again. A concrete example:
> 
> data = [i/100 for i in range(1, 701, 7)]
> data.sort(key=str)
> 
> for x in many_x_values:
>     bisect.insort(data, x, key=str)  # Pretend this works.
> 
> 
> The first time you call insort, the key function (str in this case) will be
> called O(log N) times. The second time you call insort, the key function
> must be called again, even for the same data points, because the keys
> str(x) are thrown away and lost after each call to insort.
> 
> After M calls to insort, there will have been on average M*(log(N)+1) calls
> to the key function. (The +1 is because the x gets str'ed as well, and
> there are M loops.)
> 
> 
> 
> As an alternative, suppose we do this:
> 
> data = [i/100 for i in range(1, 701, 7)]
> data.sort(key=str)
> keyed_data = [str(x) for x in data]
> 
> for x in many_x_values:
>     key = str(x)
>     pos = bisect.bisect(keyed_data, key)
>     keyed_data.insert(pos, key)
>     data.insert(pos, x)
> 
> 
> This costs N calls to str in preparing the keyed_data list, plus M calls to
> str in the loop. The documentation suggests that we consider this second
> approach instead.

Bugfixing python 3.5 asyncio

2015-11-04 Thread Dmitry Panteleev
Hello,

There is a bug in asyncio.Queue
(https://github.com/python/asyncio/issues/268), which makes it
unusable for us. It is fixed in master now. What is the easiest way to
patch the asyncio bundled with python if I have to distribute it among
5 colleagues? It is used in our private module.

I can think of:
1. Copy asyncio.queues into our package so it has a different name
2. Override sys.path

Both look really strange. Is there anything else?
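
A hedged sketch of option 1 without renaming imports everywhere: install the
patched copy under asyncio's name before anything imports asyncio
(ourpkg/_queues.py is assumed to be the fixed copy of asyncio/queues.py with
its relative imports adjusted):

import sys
import ourpkg._queues as _patched

sys.modules['asyncio.queues'] = _patched   # must run before 'import asyncio'
import asyncio
asyncio.queues = _patched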

Thanks,
Dmitry Panteleev
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Bugfixing python 3.5 asyncio

2015-11-04 Thread Dmitry Panteleev
Hello,

Yes, the fix has been merged. If 3.5.1 gets released in a month, it
should not be a problem. And it looks like it depends on
http://bugs.python.org/issue25446 .

Thank you for the information.
Dmitry

On Wed, Nov 4, 2015 at 5:52 PM, Terry Reedy  wrote:
> On 11/3/2015 8:24 PM, Dmitry Panteleev wrote:
>>
>> Hello,
>>
>> There is a bug in asyncio.Queue
>> (https://github.com/python/asyncio/issues/268), which makes it
>> unusable for us. It is fixed in master now.
>
>
> I presume that the fix has been merged into the CPython repository (you
> could check).  If so, it should be included when 3.5.1 is released in about
> a month+.
>
>> What is the easiest way to
>> patch the asyncio bundled with python if I have to distribute it among
>> 5 colleagues? It is used in our private module.
>
>
> The cleanest way -- in my opinion -- would be to patch your 3.5.0
> installations, making them '3.5.0+'.  For 5 installations,
>
>> I can think of:
>> 1. Copy asyncio.queues into our package so it has a different name
>
>
> this would be easiest, but then I would retest and presumably revise and
> redistribute after 3.5.1 is out
>
>> 2. Override sys.path
>>
>> Both look really strange. Is there anything else?
>
>
> What I said above.
>
> --
> Terry Jan Reedy
>
> --
> https://mail.python.org/mailman/listinfo/python-list



-- 
Best Regards,
Dmitry Panteleev
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Basic question

2007-05-12 Thread Dmitry Dzhus

> "j=j+2" inside IF does not change the loop
> counter ("j")

You might not be truly catching the sequence nature of Python's `for`
statement. It seems that <http://docs.python.org/ref/for.html>
will make things quite clear.

> The suite may assign to the variable(s) in the target list; this
> does not affect the next item assigned to it.

In C you do not specify all the values the "looping" variable will be
assigned to, whereas (in the simplest case) in Python you do.
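
A two-line illustration of the quoted rule:

for j in range(5):
    print j      # prints 0 1 2 3 4: the next item overwrites the change below
    j = j + 2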

-- 
Happy Hacking.

Dmitry "Sphinx" Dzhus
http://sphinx.net.ru
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Basic question

2007-05-12 Thread Dmitry Dzhus

> Actually I'm trying to convert a string to a list of float numbers:
> str = '53,20,4,2' to L = [53.0, 20.0, 4.0, 2.0]

str="53,20,4,2"
map(lambda s: float(s), str.split(','))

Last expression returns: [53.0, 20.0, 4.0, 2.0]
-- 
Happy Hacking.

Dmitry "Sphinx" Dzhus
http://sphinx.net.ru
-- 
http://mail.python.org/mailman/listinfo/python-list


os.popen and lengthy operations

2007-09-19 Thread Dmitry Teslenko
Hello!
I'm using os.popen to perform a lengthy operation such as building some
project from source.
It looks like this:
import os

def execute_and_save_output(command, out_file, err_file):
    (i, o, e) = os.popen3(command)
    try:
        for line in o:
            out_file.write(line)

        for line in e:
            err_file.write(line)
    finally:
        i.close()
        o.close()
        e.close()

...
execute_and_save_output('', out_file, err_file)

The problem is that the script hangs on operations that take long to execute
and have lots of output, such as build scripts.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: [ANN] Release 0.65.1 of Task Coach

2007-09-25 Thread Dmitry Balabanov
2007/9/24, Frank Niessink <[EMAIL PROTECTED]>:
> Hi,
>
> I'm happy to announce release 0.65.1 of Task Coach. This release fixes
> one critical bug and two minor bugs. Since the critical bug may lead
> to data loss, I recommend users of release 0.65.0 to upgrade.
>
> Bugs fixed:
>
> * Saving a task file after adding attachments via the 'add attachment'
> menu or context menu fails.
> * Tooltip windows steals keyboard focus on some platforms.
> * Taskbar icon is not transparent on Linux.
>
>
> What is Task Coach?
>
> Task Coach is a simple task manager that allows for hierarchical
> tasks, i.e. tasks in tasks. Task Coach is open source (GPL) and is
> developed using Python and wxPython. You can download Task Coach from:
>
> http://www.taskcoach.org
>
> In addition to the source distribution, packaged distributions are
> available for Windows XP, Mac OSX, and Linux (Debian and RPM format).
>
> Note that Task Coach is alpha software, meaning that it is wise to back
> up your task file regularly, and especially when upgrading to a new
> release.
>
> Cheers, Frank
> --
> http://mail.python.org/mailman/listinfo/python-announce-list
>
> Support the Python Software Foundation:
> http://www.python.org/psf/donations.html
>


-- 
Regards, Dmitry.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: os.popen and lengthy operations

2007-09-26 Thread Dmitry Teslenko
Hello!

On 24/09/2007, Tommy Nordgren <[EMAIL PROTECTED]> wrote:
> Your problem is that you are not reading the standard output and
> standard error streams in the correct way.
> You need to do the reading of standard out and standard err in
> parallel rather than sequentially.
> The called process can't proceed when its output buffers are full.
> Pipes are limited by small buffers (for example 512 bytes
> on Mac OS X).
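
For illustration, a minimal sketch of that parallel draining with one extra
thread (using subprocess rather than os.popen3; command, out_file and
err_file are as in the original snippet):

import subprocess
import threading

def drain(stream, sink):
    for line in stream:
        sink.write(line)

p = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE,
                     stderr=subprocess.PIPE)
t = threading.Thread(target=drain, args=(p.stderr, err_file))
t.start()
drain(p.stdout, out_file)   # read stdout in this thread meanwhile
t.join()
p.wait()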

Thanks to all for your comprehensive explanation!
-- 
http://mail.python.org/mailman/listinfo/python-list


python 2.5 scripting in vim on windows: subprocess problem

2007-10-22 Thread Dmitry Teslenko
Hello!
I'm using subprocess.Popen in a python script in vim.
It is called this way:
def some_func():
    p = subprocess.Popen(command, stdout = subprocess.PIPE,
        stderr = subprocess.STDOUT)
    while True:
        s = p.stdout.readline()
        if not s:
            break
        self.__output('... %s' % s)
    return p.wait()

It filters the command's output and re-outputs it to stdout.
Called from the console, it works fine.
Called from vim with :python some_func(), it says:

  File "...subprocess.py", line 586, in __init__
  ...
  File "...subprocess.py", line 699, in _get_handles
  ...
  File "...subprocess.py", line 744, in _make_inheritable
    DUPLICATE_SAME_ACCESS)
WindowsError: [Error 6]
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: python 2.5 scripting in vim on windows: subprocess problem

2007-10-24 Thread Dmitry Teslenko
On 22/10/2007, Andy Kittner <[EMAIL PROTECTED]> wrote:
> >> Are you running this on vim or gvim? If you are running on gvim, my
> >> guess is that the handles that you are passing are not valid. In
> >> either case, try creating explicit handles that are valid (such as for
> >> /dev/null) and create the process with these handles.
> Just as a side note: my vim was a gvim-7.1.140 with dynamic python
> support, so it doesn't look like a general problem.
I've also tried *-1-140 from cream's sourceforge website and it works
just like my custom-built one.

> > I'm passing handles that I get from subprocess.Popen. Just passing
> > command to Popen constructor and using its handles to read data. No
> > other handle manipulations.
> When exactly does it throw the exception? Directly on creation of the
> Popen object, or when you try to read stdout?
It throws the exception on subprocess.Popen object instantiation.
os.system() works fine, but I want Popen functionality.
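
A hedged workaround sketch (an assumption, not verified against this exact
gvim build): a GUI process may have no valid standard input handle for the
child to inherit, so give Popen explicit handles for all three streams:

p = subprocess.Popen(command,
                     stdin=subprocess.PIPE,     # explicit handle, nothing inherited
                     stdout=subprocess.PIPE,
                     stderr=subprocess.STDOUT)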
-- 
http://mail.python.org/mailman/listinfo/python-list


[win32] spawn background process and detach it w/o problems

2007-11-08 Thread Dmitry Teslenko
Hello!
How do I write a portable (win32, unix) script that launches another
program and continues its execution?

I've looked at spawn*(), but it doesn't look in the PATH dirs on windows, so
it's totally unusable when you don't know where exactly the program is.

I've looked at fork() way but there's no fork for windows.

My current solution is
thread.start_new(os.system, (command,))

It's ugly, and there's one big unpleasant peculiarity:
if there were any os.chdir()-s between the beginning of script
execution and that thread.start_new(), then the new thread starts in the
original directory, not in the current directory at the moment of
thread.start_new().
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: spawn background process and detach it w/o problems

2007-11-09 Thread Dmitry Teslenko
Hello!

On 08/11/2007, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
> Take a look at the subprocess module.

Big thanks!

It's interesting what happens with the subprocess.Popen instance after
it has been instantiated and the script's main thread exits, leaving the
Popen'ed application open.
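
For reference, a minimal sketch of the suggested subprocess approach (the
command here is hypothetical):

import subprocess

# Popen searches PATH, returns immediately, and the child keeps running
# on its own while the launcher goes on to do other work.
p = subprocess.Popen(['someprogram', 'arg1'])
# ... script continues here without waiting for the child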
-- 
http://mail.python.org/mailman/listinfo/python-list


Cookie expiration time

2007-12-17 Thread dmitry . ema
Hi All !
  I'm using the httplib2 library in my python script for interactions with a
remote web-server. The remote server responds with cookies that have a set
expiration time. I'd like to extend this time. Does anybody know how
I can do it using this library?
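
A hedged sketch of one way to do it: httplib2 doesn't manage a cookie jar
itself, so the Set-Cookie header can be grabbed and its attributes rewritten
(or dropped) before the cookie is sent back; the URLs here are placeholders:

import httplib2

h = httplib2.Http()
resp, content = h.request('http://example.com/', 'GET')
cookie = resp.get('set-cookie', '')
# keep only the name=value pair, discarding the server-chosen expires
bare = cookie.split(';')[0]
resp2, content2 = h.request('http://example.com/next', 'GET',
                            headers={'Cookie': bare})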
-- 
http://mail.python.org/mailman/listinfo/python-list


Need help with "with-statement-compatible" object

2007-12-19 Thread Dmitry Teslenko
Hello!
I've made a class that can be used with the "with" statement. It looks this way:

class chdir_to_file(object):
    ...
    def __enter__(self):
        ...

    def __exit__(self, type, val, tb):
        ...

def get_chdir_to_file(file_path):
    return chdir_to_file(file_path)
...

Snippet with object instantiation looks like this:

for s in sys.argv[1:]:
    c = chdir_to_file(s)
    with c:
        print 'Current directory is %s' % os.path.realpath(os.curdir)

That works fine. I want to be able to use it in a more elegant way:

for s in ...:
    with get_chdir_to_file(s) as c:
        c.do_something()

But python complains that c is of NoneType and has no "do_something()". Am
I missing something?
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Need help with "with-statement-compatible" object

2007-12-19 Thread Dmitry Teslenko
On Dec 19, 2007 12:14 PM, Peter Otten <[EMAIL PROTECTED]> wrote:
> def __enter__(self):
> # ...
> return self
>
> should help.

That helps. Thanks!
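
For completeness, a minimal sketch of such a context manager with the fix
applied (the directory handling is an assumption about the original class):

import os

class chdir_to_file(object):
    def __init__(self, file_path):
        self.__dir = os.path.dirname(os.path.abspath(file_path))
        self.__saved = None

    def __enter__(self):
        self.__saved = os.getcwd()
        os.chdir(self.__dir)
        return self          # the fix: "with ... as c" binds this value

    def __exit__(self, type, val, tb):
        os.chdir(self.__saved)
        return False         # don't suppress exceptions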
-- 
http://mail.python.org/mailman/listinfo/python-list


xml-filter with XMLFilterBase() and XMLGenerator() shuffles attributes

2007-12-20 Thread Dmitry Teslenko
Hello!
I've made a trivial xml filter to modify some attributes on-the-fly:

...
from __future__ import with_statement
import os
import sys

from xml import sax
from xml.sax import saxutils

class ReIdFilter(saxutils.XMLFilterBase):
    def __init__(self, upstream, downstream):
        saxutils.XMLFilterBase.__init__(self, upstream)
        self.__downstream = downstream
        return

    def startElement(self, name, attrs):
        self.__downstream.startElement(name, attrs)
        return

    def startElementNS(self, name, qname, attrs):
        self.__downstream.startElementNS(name, qname, attrs)
        return

    def endElement(self, name):
        self.__downstream.endElement(name)
        return

    def endElementNS(self, name, qname):
        self.__downstream.endElementNS(name, qname)
        return

    def processingInstruction(self, target, body):
        self.__downstream.processingInstruction(target, body)
        return

    def comment(self, body):
        self.__downstream.comment(body)
        return

    def characters(self, text):
        self.__downstream.characters(text)
        return

    def ignorableWhitespace(self, ws):
        self.__downstream.ignorableWhitespace(ws)
        return

...
with open(some_file_path, 'w') as f:
    parser = sax.make_parser()
    downstream_handler = saxutils.XMLGenerator(f, 'cp1251')
    filter_handler = ReIdFilter(parser, downstream_handler)
    filter_handler.parse(file_path)

I want to prevent it from shuffling attributes, i.e. preserve the original
file's attribute order. Are there any ContentHandler.features*
responsible for that?
-- 
http://mail.python.org/mailman/listinfo/python-list


draining pipes simultaneously

2008-03-05 Thread Dmitry Teslenko
Hello!
Here's my implementation of a function that executes some command and
drains stdout/stderr, invoking other functions for every line of
command output:

def __execute2_drain_pipe(queue, pipe):
    for line in pipe:
        queue.put(line)
    return

def execute2(command, out_filter = None, err_filter = None):
    p = subprocess.Popen(command, shell=True, stdin = subprocess.PIPE, \
        stdout = subprocess.PIPE, stderr = subprocess.PIPE, \
        env = os.environ)

    qo = Queue.Queue()
    qe = Queue.Queue()

    to = threading.Thread(target = __execute2_drain_pipe, \
        args = (qo, p.stdout))
    to.start()
    time.sleep(0)
    te = threading.Thread(target = __execute2_drain_pipe, \
        args = (qe, p.stderr))
    te.start()

    while to.isAlive() or te.isAlive():
        try:
            line = qo.get()
            if out_filter:
                out_filter(line)
            qo.task_done()
        except Queue.Empty:
            pass

        try:
            line = qe.get()
            if err_filter:
                err_filter(line)
            qe.task_done()
        except Queue.Empty:
            pass

    to.join()
    te.join()
    return p.wait()

The problem is that my implementation is buggy and the function hangs when
stdout/stderr is empty. Can I have your feedback?
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: draining pipes simultaneously

2008-03-05 Thread Dmitry Teslenko
On Wed, Mar 5, 2008 at 1:34 PM,  <[EMAIL PROTECTED]> wrote:
>  The Queue.get method by default is blocking. The documentation is not
>  100% clear about that (maybe it should report
>  the full python definition of the function parameters, which makes
>  self-evident the default value) but if you do
>  help(Queue.Queue) in a python shell you will see it.

>  Hence, try using a timeout or a non-blocking get (but in case of a non
>  blocking get you should add a delay in the
>  loop, or you will poll the queues at maximum speed and maybe prevent
>  the other threads from accessing them).

Thanks for advice! Finally I came up to following loop:

while to.isAlive() or te.isAlive():
    try:
        while True:
            line = qo.get(False)
            if out_filter:
                out_filter(line)
    except Queue.Empty:
        pass

    try:
        while True:
            line = qe.get(False)
            if err_filter:
                err_filter(line)
    except Queue.Empty:
        pass

Inserting a delay at the beginning of the loop makes the command feel slow
to start, and a delay at the end of the loop may cause data loss when both
threads become inactive during the delay.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: draining pipes simultaneously

2008-03-05 Thread Dmitry Teslenko
On Wed, Mar 5, 2008 at 3:39 PM,  <[EMAIL PROTECTED]> wrote:
>  time.sleep() pauses ony the thread that executes it, not the
>  others. And queue objects can hold large amount of data (if you have
>  the RAM),
>  so unless your subprocess is outputting data very fast, you should not
>  have data loss.
>  Anyway, if it works for you ... :-)

After some testing I'll agree :) Without time.sleep() in the main thread
python eats up all available processor time
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: PyGTK localisation on Win32

2008-03-27 Thread Sukhov Dmitry
>
> I had no problem with using standard gettext way of doing i18n on
> Windows with PyGTK an Glade, apart some quirks with LANG environment
> variable. Basically, the code that works looks like this:
>
> import gettext, locale
> locale.setlocale(locale.LC_ALL, '')
> if os.name == 'nt':
>     # windows hack for locale setting
>     lang = os.getenv('LANG')
>     if lang is None:
>         defaultLang, defaultEnc = locale.getdefaultlocale()
>         if defaultLang:
>             lang = defaultLang
>     if lang:
>         os.environ['LANG'] = lang
> gtk.glade.bindtextdomain(appname, translation_dir)
> gtk.glade.textdomain(appname)
> gettext.install(appname, translation_dir, unicode=True)
>
> Be aware, that you can not change the locale setting from the command
> line like you do on Linux.
>

I have the same problem. I did everything as you wrote. gettext translations
do work fine, but translations in glade do not work.

The only way to turn them on is to set the environment variable LANG
explicitly before running the program:
set LANG=ru_RU
python test.py


-- 
http://mail.python.org/mailman/listinfo/python-list


file locked for writing

2008-05-13 Thread Dmitry Teslenko
Hello!
I use a script in python 2.5 from the vim editor (it has python
bindings) that updates some file
and then launches another program (MS Visual Studio, for example) to
do something with the updated file.
I faced a problem where the updated file is locked for writing until the vim
editor is closed.

launch vim -> update file -> launch msvc -> file locked
launch vim -> update file -> launch msvc -> close vim -> file locked
launch vim -> update file -> -> close vim -> launch msvc -> file okay

Update code is something like this:

backup_file_name = ''
with open(backup_file_name, 'w') as backup_file:
    input = sax.make_parser()
    output = saxutils.XMLGenerator(backup_file, 'cp1252')
    filter = __vcproj_config_filter('', input, output)
    filter.parse('')
shutil.copyfile(backup_file_name, '')
os.remove(backup_file_name)

__vcproj_config_filter is a descendant of XMLFilterBase; it substitutes
some attributes in the xml file and that's all.
It must be noted that the __vcproj_config_filter instance holds a reference to
the output (SAX XML generator) object.
--
http://mail.python.org/mailman/listinfo/python-list


Re: file locked for writing

2008-05-14 Thread Dmitry Teslenko
On Wed, May 14, 2008 at 8:18 AM, Gabriel Genellina
<[EMAIL PROTECTED]> wrote:
> En Tue, 13 May 2008 11:57:03 -0300, Dmitry Teslenko <[EMAIL PROTECTED]>
>  Is the code above contained in a function? So all references are released
> upon function exit?

Yes, it's a function

>  If not, you could try using: del input, output, filter
>  That should release all remaining references to the output file, I presume.
> Or locate the inner reference to the output file (filter.something perhaps?)
> and explicitely close it.

That doesn't help either.
When I rewrite the code to something like this:

with open(backup_file_name, 'w') as backup_file:
    .
    filter.parse('')
    del input, output, filter
os.remove(project.get_vcproj())
os.rename(backup_file_name, project.get_vcproj())

It triggers WindowsError on os.remove()
--
http://mail.python.org/mailman/listinfo/python-list


Re: file locked for writing

2008-05-14 Thread Dmitry Teslenko
On Wed, May 14, 2008 at 11:04 AM, Dmitry Teslenko <[EMAIL PROTECTED]> wrote:
>  When I've rewrite code something like that:
>with open(backup_file_name, 'w') as backup_file:
>.
>
>filter.parse('')
>del input, output, filter
> os.remove(project.get_vcproj())
> os.rename(backup_file_name, project.get_vcproj())
>
>  It triggers WindowsError on os.remove()

Using "programming by permutation" pattern I've finally solved that
thing: filter
for some reason doesn't close file after XMLFilterBase.parse();
even after del filter; Workaround for this is to pass 
instead of  to XMLFilterBase.parse() and then explicitly close
file or put this call in with-block.
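
A minimal sketch of that workaround (source_path stands in for the real
project file name):

with open(source_path) as source:
    filter.parse(source)     # parse from an open file object we control
# 'source' is guaranteed closed here, so the file can be removed/renamed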
--
http://mail.python.org/mailman/listinfo/python-list


pygtk + threading.Timer

2008-04-14 Thread Dmitry Teslenko
Hello!
I have a simple chat application with a pygtk UI. I want some event (for
example, updating the user list) to take place every n seconds.
What's the best way to achieve it?
I tried threading.Timer, but the result is the following: all events wait till
exit of the gtk main loop and only then do they occur.
Thanks in advance
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: pygtk + threading.Timer

2008-04-14 Thread Dmitry Teslenko
2008/4/14 Jarek Zgoda <[EMAIL PROTECTED]>:
>  > I have simple chat application with pygtk UI. I want some event (for
>  > example update user list) to have place every n seconds.
>  > What's the best way to archive it?
>  > I tried threading.Timer but result is following: all events wait till
>  > exit of gtk main loop and only then they occur.
>  > Thanks in advance
>
>  See gobject.timeout_add documentation in pygtk reference

Thanks. That's exactly what I need.
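
For reference, a minimal sketch of that suggestion: gobject.timeout_add runs
the callback inside the GTK main loop every n milliseconds, and returning
True keeps the timer alive:

import gobject

def update_user_list():
    # ... refresh the list here ...
    return True              # returning False would cancel the timer

gobject.timeout_add(5000, update_user_list)   # every 5 seconds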
-- 
http://mail.python.org/mailman/listinfo/python-list


Why python doesn't use syntax like function(, , x) for default parameters?

2006-03-10 Thread Dmitry Anikin
I mean, it's very convenient when default parameters
can be in any position, like
def a_func(x = 2, y = 1, z):
...
(that defaults must go last is really a C++ quirk which
is needed for overload resolution, isn't it?)

and when calling, just omit a parameter when you want to
use the default:
a_func(, , 3)

There are often situations when a function has independent
parameters, all having reasonable defaults, and I want to
provide just several of them. In fact, I can do it using
keyword parameters, but that's rather long and you have to
remember/look up the names of the parameters.
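
For comparison, the closest existing idiom is a sentinel default plus keyword
arguments (a sketch with made-up names), so any subset of parameters can be
set by name:

_missing = object()

def a_func(x=_missing, y=_missing, z=_missing):
    if x is _missing: x = 2
    if y is _missing: y = 1
    if z is _missing: z = 0
    return (x, y, z)

a_func(z=3)   # today's spelling of the proposed a_func(, , 3)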

Is there some contradiction in python syntax which disallows
an easy implementation of this feature, or has just nobody bothered
with it? If the former is the case, please show me why, because
I badly need this feature in an embedded python app (for
compatibility with another language that uses such syntax) and might
venture to implement it myself, so I don't want to waste time
if it's going to break something.
Or maybe it might be an idea for an enhancement proposal?

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Why python doesn't use syntax like function(, , x) for default parameters?

2006-03-10 Thread Dmitry Anikin
Some example (from real life).
def ChooseItems(StartDate, EndDate, Filter):
    #function returns a set of some items in chronological order
    #from required interval, possibly using a filter

ChooseItems() #get everything
ChooseItems('01.01.2000', ,SomeFilter) #get everything after a date using filter
ChooseItems(, '01.01.2000') #get everything before a date
ChooseItems(, , SomeFilter) #get everything using filter

Now compare this to something which (I hope) is rather pythonian

Seq[:] #get everything
Seq[2::3] #get everything after an index using filter (filter every third value)
Seq[:3] #get everything before an index
Seq[::4] #get everything using a filter

Do you see any significant difference?

I understand that many do not need such a syntax; I don't understand
why someone would be AGAINST it. I don't propose to CHANGE anything
in python (right now this syntax is an error anyway). What I propose is just to
ADD another way of calling a function with keyword parameters, but using
POSITIONS instead of NAMES. And sometimes a position is easier to
remember than a name. Anyway, who wants names, let them use names.
Who wants positions, let them use positions. But to have a CHOICE is
always good. As long as the choice itself doesn't damage anything,
and I don't think that mine does.

I think that if we compare
ChooseItems('01.01.2000', ,SomeFilter)
and
ChooseItems(StartDate='01.01.2000', Filter=SomeFilter)
the first one is more readable, 'cos you see
what is meant right away. In the second one you have to
actually READ the keyword names to understand.
It's not the common case, of course, but still, why
not have a choice to use it?

Some other examples which might benefit
SetDate(year, month, day)
SetDate(, month+1) # set next month, leaving year and day
SetDate(, , 31) # set to end of month, not changing year
#(wrong date adjusted automatically, of course)

FormatFloat(Float, Length, Precision, FormatFlags)
You might want precision, leaving length default, or just use FormatFlags

In fact, I became so used to convenience of such syntax that
it was a disappointment not to find it in python.

Please, don't try to scare me with 25-parameter functions.
This is not for them. But remembering the positions of two to
five parameters is actually easier (if their order has some
logic) than remembering what their names are: startDate? beginDate?
firstDate? openDate? Date1?

The same approach can be used with tuples:
(, , z) = func() # returning three element tuple()
You think
z = func()[2]
is actually more clear? - By the way, I want THIRD value,
not SECOND. And tuples don't have keyword names, do they?
And what about
(a, , b)  = func()
...well, maybe I got carried away a little...

Finally, if the syntax
func(None, None, 10)
seems natural to you, I propose to make it even more
natural: I don't want some "None" passed as an argument,
I don't want anything at all passed, so I just use empty space:
func( , , 10)
And the called func doesn't have to bother with checking
None for EACH argument but will happily use the defaults instead.
-- 
http://mail.python.org/mailman/listinfo/python-list


cmp() on integers - is there guarantee of returning only +-1 or 0?

2006-03-19 Thread Dmitry Anikin
The doc says that it must be > 0 or < 0, but it seems that
it returns +1 or -1. Can it be reliably used to get the sign of x,
cmp(x, 0), like Pascal's Sign() function does? I mean, I'm
pretty sure that it can be used, but is it mentioned somewhere
in the language spec, or is it implementation defined?
If so, are there any other simple means of _reliably_ getting the sign?
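
For what it's worth, a sign() that doesn't depend on cmp() returning
exactly +-1 is a one-liner:

def sign(x):
    return (x > 0) - (x < 0)   # -1, 0 or 1; bools behave as ints here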
-- 
http://mail.python.org/mailman/listinfo/python-list


Can't detect EOF from stdin on windows console

2006-04-03 Thread Dmitry Anikin
I want to read stdin in chunks of fixed size until EOF.
I want to be able (also) to supply data interactively in a console
window and then hit Ctrl+Z when finished.
So what I do is:

while True:
    s = sys.stdin.read(chunk_size)
    if not s:
        break
    # do something with s

If stdin is standard console input (on windows xp), here is what happens:
(suppose, chunk_size = 3)
input: 123^Z
--- s gets "123" but reading doesn't end
input: ^Z
--- now s is empty and loop breaks,
so you have to press Ctrl-Z  TWICE to end the loop
worse still:
input: 12^Z
--- EOF is there, but there's only TWO chars instead of requested THREE,
so stdin.read() doesn't even return yet
input: ^Z
--- s gets "12" but reading doesn't end
input: ^Z
--- only now loop breaks
so you have to press Ctrl-Z  THRICE to end the loop

I haven't discovered any EOF function in python which could tell me
if EOF was encountered. As you see, testing for an empty string or for
len(s) == chunk_size doesn't improve the situation; can anyone
suggest a workaround?

Also I think the latter case is a straightaway bug; the doc says:

  read([size])
    Read at most size bytes from the file (less if the read hits EOF before
    obtaining size bytes).

According to that, stdin.read(3), when supplied with "12^Z", should return
immediately with a two-character string instead of waiting for a third
character after EOF.
By the way, if I enter something between ^Z's, that will be treated as valid
input and the ^Z's will be completely ignored.
-- 
http://mail.python.org/mailman/listinfo/python-list


call to pypcap in separate thread blocks other threads

2009-12-30 Thread Dmitry Teslenko
Hello!
I'm making a gui gtk application. I'm using pypcap
(http://code.google.com/p/pypcap/) to sniff some network packets.
To avoid gui freezing I put the pcap call in another thread.
The pypcap call looks like:

pc = pcap.pcap()
pc.setfilter('tcp')
for ts, pkt in pc:
    spkt = str(pkt)
    ...

Sadly, this call in another thread blocks the gtk gui thread anyway.
If I substitute the pcap call with something else, the separate thread
doesn't block the gui thread.

Using another process instead of a thread isn't appropriate.

Thread initialization looks like this and takes place before gtk.main():

self.__pcap_thread = threading.Thread(target = self.get_city_from_pcap)
self.__pcap_thread.start()

-- 
A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?
A: Top-posting.
Q: What is the most annoying thing in e-mail?
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: call to pypcap in separate thread blocks other threads

2009-12-30 Thread Dmitry Teslenko
On Wed, Dec 30, 2009 at 16:18, aspineux  wrote:
> On Dec 30, 1:34 pm, Dmitry Teslenko  wrote:
>> Hello!
>> I'm making gui gtk application. I'm using pypcap
>> (http://code.google.com/p/pypcap/) to sniff some network packets.
>> To avoid gui freezing I put pcap call to another thread.
>> Pypcap call looks like:
>>
>> pc = pcap.pcap()
>> pc.setfilter('tcp')
>> for ts, pkt in pc:
>>         spkt = str(pkt)
>>         ...
>>
>> Sadly, but this call in another thread blocks gtk gui thread anyway.
>> If I substitute pcap call with something else separate thread don't block
>> gui thread.
>>
>> Using another process instead of thead isn't appropriate.
>>
>> Thread initialization looks like and takes place before gtk.main():
>>
>> self.__pcap_thread = threading.Thread(target = self.get_city_from_pcap)
>> self.__pcap_thread.start()
>
> Did you try using build-in gtk thread ?
>
> Regards
>
> Alain Spineux                         |  aspineux gmail com
> Your email 100% available             |  http://www.emailgency.com
> NTBackup frontend sending mail report |  http://www.magikmon.com/mkbackup
>
>>
>> --
>> A: Because it messes up the order in which people normally read text.
>> Q: Why is top-posting such a bad thing?
>> A: Top-posting.
>> Q: What is the most annoying thing in e-mail?
> --
> http://mail.python.org/mailman/listinfo/python-list
>

No. What makes it different?


-- 
A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?
A: Top-posting.
Q: What is the most annoying thing in e-mail?
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: call to pypcap in separate thread blocks other threads

2009-12-30 Thread Dmitry Teslenko
On Wed, Dec 30, 2009 at 18:25, aspineux  wrote:
> On Dec 30, 3:07 pm, Dmitry Teslenko  wrote:
>> On Wed, Dec 30, 2009 at 16:18, aspineux  wrote:
>> > On Dec 30, 1:34 pm, Dmitry Teslenko  wrote:
>> >> Hello!
>> >> I'm making gui gtk application. I'm using pypcap
>> >> (http://code.google.com/p/pypcap/) to sniff some network packets.
>> >> To avoid gui freezing I put pcap call to another thread.
>> >> Pypcap call looks like:
>>
>> >> pc = pcap.pcap()
>> >> pc.setfilter('tcp')
>> >> for ts, pkt in pc:
>> >>         spkt = str(pkt)
>> >>         ...
>>
>> >> Sadly, but this call in another thread blocks gtk gui thread anyway.
>> >> If I substitute pcap call with something else separate thread don't block
>> >> gui thread.
>>
>> >> Using another process instead of thead isn't appropriate.
>>
>> >> Thread initialization looks like and takes place before gtk.main():
>>
>> >> self.__pcap_thread = threading.Thread(target = self.get_city_from_pcap)
>> >> self.__pcap_thread.start()
>>
>> > Did you try using build-in gtk thread ?
>>
>> > Regards
>>
>> > Alain Spineux                         |  aspineux gmail com
>> > Your email 100% available             |  http://www.emailgency.com
>> > NTBackup frontend sending mail report |  http://www.magikmon.com/mkbackup
>>
>> >> --
>> >> A: Because it messes up the order in which people normally read text.
>> >> Q: Why is top-posting such a bad thing?
>> >> A: Top-posting.
>> >> Q: What is the most annoying thing in e-mail?
>> > --
>> >http://mail.python.org/mailman/listinfo/python-list
>>
>> No. What makes it different?
>
> It allows you to access your GTK widgets from your threads (you have
> rules to follow to do that).
>
> What are you doing from inside your "pcap" thread ?
>
>
>
>>
>> --
>> A: Because it messes up the order in which people normally read text.
>> Q: Why is top-posting such a bad thing?
>> A: Top-posting.
>> Q: What is the most annoying thing in e-mail?
>
> --
> http://mail.python.org/mailman/listinfo/python-list
>

I do this thing:

>> >> pc = pcap.pcap()
>> >> pc.setfilter('tcp')
>> >> for ts, pkt in pc:
>> >>         spkt = str(pkt)
>> >>         ...

pc.next() blocks other threads. I visited the project page and people mention
similar issues. Now I think it's a pypcap-related problem and the project
page is the right place to discuss this issue.
Thanks for support.

-- 
A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?
A: Top-posting.
Q: What is the most annoying thing in e-mail?
-- 
http://mail.python.org/mailman/listinfo/python-list


[gtk+thread] Why worker thread never wakes from time.sleep()?

2010-01-03 Thread Dmitry Teslenko
Hello!
I have a simple gui gtk app. It has a worker thread that populates a list
with strings and a gtk window with a main loop which pops strings
from this list and shows them in a TreeView.

The thread runs get_data_from_pcap to populate the list with strings.
The gtk app calls update_store() with gobject.timeout_add every second.

If I comment out time.sleep() in update_store(),
the worker thread never wakes up from its time.sleep().
Why?

Here's runnable example:

#!/usr/bin/python
# -*- coding: utf-8 -*-

import pygtk
pygtk.require('2.0')
import gtk
import gobject

import pcap

import sys
import threading
import time

CONSOLE_ENCODING = 'utf-8'
if sys.platform == 'win32':
    CONSOLE_ENCODING = 'cp866'

global_pcap_queue = []
global_pcap_lock = threading.Lock()
global_pcap_stop_event = threading.Event()

class CityGame:
def __init__(self):
window = gtk.Window(gtk.WINDOW_TOPLEVEL)
window.connect('delete_event', self.delete_event)
window.connect('destroy', self.destroy)

store = gtk.ListStore(gobject.TYPE_STRING)

view = gtk.TreeView(store)

#name
col = gtk.TreeViewColumn('Data')
cell = gtk.CellRendererText()
view.append_column(col)
col.pack_start(cell, True)
col.add_attribute(cell, 'text', 0)

vb = gtk.VBox()
vb.pack_start(view)

window.add(vb)
window.set_size_request(400, 300)
window.show_all()

self.__store = store


def main(self):
gobject.timeout_add(1000, self.update_store)
gtk.main()

def update_store(self):
data = None

global_pcap_lock.acquire()
if len(global_pcap_queue):
data = global_pcap_queue.pop(0)
print 'Update'
global_pcap_lock.release()

time.sleep(0.01)

if data:
self.__store.append([data])

return True

def delete_event(self, widget, event, data = None):
dlg = gtk.MessageDialog(flags = gtk.DIALOG_MODAL, type =
gtk.MESSAGE_QUESTION,
buttons = gtk.BUTTONS_YES_NO,
message_format = 'Are you sure you want to quit?')
dlg.set_title('CityGame')
result = dlg.run()
dlg.destroy()
return (result != gtk.RESPONSE_YES)

def destroy(self, widget, data = None):
gtk.main_quit()

def main(args):
cg = CityGame()
cg.main()

def get_data_from_pcap():
#while True:
while not global_pcap_stop_event.isSet():
global_pcap_lock.acquire()
global_pcap_queue.append(str(time.time()))
print 'Thread'
global_pcap_lock.release()
time.sleep(0.01)
return

if __name__ == '__main__':
global_pcap_stop_event.clear()
pcap_thread = threading.Thread(target = get_data_from_pcap)
pcap_thread.start()
main(sys.argv[1:])
global_pcap_stop_event.set()
pcap_thread.join()
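
(A hedged aside: the usual PyGTK explanation for a starved worker thread is
a missing gobject.threads_init() call; a minimal sketch of that setup,
assuming it applies to this program:)

import gobject
import gtk

gobject.threads_init()  # make the glib main loop cooperate with Python threads

# ... start the worker thread and build the window as above ...
gtk.main()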

-- 
A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?
A: Top-posting.
Q: What is the most annoying thing in e-mail?
-- 
http://mail.python.org/mailman/listinfo/python-list


Some syntactic sugar proposals

2010-11-14 Thread Dmitry Groshev
Here are some proposals. They are quite useful in my opinion and I'm
interested in suggestions. It's all about some common patterns.
First of all: how many times do you write something like
t = foo()
t = t if pred(t) else default_value
? Of course we can write it as
t = foo() if pred(foo()) else default_value
but here we have 2 foo() calls instead of one. Why can't we write just
something like this:
t = foo() if pred(it) else default_value
where "it" means "foo() value"?
Second, I saw a lot of questions about using dot notation for a
"object-like" dictionaries and a lot of solutions like this:
class dotdict(dict):
    def __getattr__(self, attr):
        return self.get(attr, None)
    __setattr__ = dict.__setitem__
    __delattr__ = dict.__delitem__
why isn't there something like this in the standard library?
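
(For illustration, a quick sketch of how such a dotdict behaves:)

d = dotdict(a=1, b=2)
print d.a        # 1  -- attribute reads go through __getattr__
d.c = 3          # attribute writes are plain item assignments
print d['c']     # 3
print d.missing  # None -- the get() default, instead of an AttributeError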
And the third. The more I use python the more I see how "natural" it
can be. By "natural" I mean the statements like this:
[x.strip() for x in reversed(foo)]
which looks almost like a natural language. But there is some
pitfalls:
if x in range(a, b): #wrong!
it feels so natural to check it that way, but we have to write
if a <= x <= b
I understand that it's not a big deal, but it would be awesome to have
some optimisations - it's clearly possible to detect cases like that
"wrong" one and fix them in the bytecode.

x in range optimisation
dot dict access
foo() if foo() else bar()
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Some syntactic sugar proposals

2010-11-14 Thread Dmitry Groshev
On Nov 15, 9:39 am, Dmitry Groshev  wrote:
> Here are some proposals. They are quite useful at my opinion and I'm
> interested for suggestions. It's all about some common patterns.
> First of all: how many times do you write something like
>     t = foo()
>     t = t if pred(t) else default_value
> ? Of course we can write it as
>     t = foo() if pred(foo()) else default_value
> but here we have 2 foo() calls instead of one. Why can't we write just
> something like this:
>     t = foo() if pred(it) else default_value
> where "it" means "foo() value"?
> Second, I saw a lot of questions about using dot notation for a
> "object-like" dictionaries and a lot of solutions like this:
>     class dotdict(dict):
>         def __getattr__(self, attr):
>             return self.get(attr, None)
>         __setattr__= dict.__setitem__
>         __delattr__= dict.__delitem__
> why isn't there something like this in the standard library?
> And the third. The more I use python the more I see how "natural" it
> can be. By "natural" I mean the statements like this:
>     [x.strip() for x in reversed(foo)]
> which looks almost like a natural language. But there is some
> pitfalls:
>     if x in range(a, b): #wrong!
> it feels so natural to check it that way, but we have to write
>     if a <= x <= b
> I understand that it's not a big deal, but it would be awesome to have
> some optimisations - it's clearly possible to detect things like that
> "wrong" one and fix it in a bytecode.
>
> x in range optimisation
> dot dict access
> foo() if foo() else bar()

Oh, I'm sorry. I forgot to delete my little notes at the bottom of the
message.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Some syntactic sugar proposals

2010-11-14 Thread Dmitry Groshev
On Nov 15, 9:48 am, Chris Rebert  wrote:
> On Sun, Nov 14, 2010 at 10:39 PM, Dmitry Groshev  
> wrote:
> > Here are some proposals. They are quite useful at my opinion and I'm
> > interested for suggestions. It's all about some common patterns.
> 
> > Second, I saw a lot of questions about using dot notation for a
> > "object-like" dictionaries and a lot of solutions like this:
> >    class dotdict(dict):
> >        def __getattr__(self, attr):
> >            return self.get(attr, None)
> >        __setattr__= dict.__setitem__
> >        __delattr__= dict.__delitem__
> > why isn't there something like this in the standard library?
>
> There 
> is:http://docs.python.org/library/collections.html#collections.namedtuple
>
> The "bunch" recipe is also fairly well-known; I suppose one could
> argue whether it's 
> std-lib-worthy:http://code.activestate.com/recipes/52308-the-simple-but-handy-collec...
>
> Cheers,
> Chris

namedtuple is not a "drop-in" replacement like this "dotdict" thing -
you first need to declare a new namedtuple type. As for me it's a
bit too complicated.
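
(To make the contrast concrete, a quick sketch of the namedtuple route:)

from collections import namedtuple

Point = namedtuple('Point', 'x y')  # the type must be declared up front
p = Point(1, 2)
print p.x  # 1
# p.x = 3 would raise AttributeError: namedtuples are immutable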
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Some syntactic sugar proposals

2010-11-14 Thread Dmitry Groshev
On Nov 15, 10:30 am, alex23  wrote:
> On Nov 15, 4:39 pm, Dmitry Groshev  wrote:
>
> > First of all: how many times do you write something like
> >     t = foo()
> >     t = t if pred(t) else default_value
> > ? Of course we can write it as
> >     t = foo() if pred(foo()) else default_value
> > but here we have 2 foo() calls instead of one. Why can't we write just
> > something like this:
> >     t = foo() if pred(it) else default_value
> > where "it" means "foo() value"?
>
> Could you provide an actual use case for this. This seems weird to me:
> you're creating an object, testing the object, then possibly throwing
> it away and using a default instead. Are you sure you can't
> restructure your code as such:
>
>    t = foo(x) if  else default

Sure. Let's pretend you have some string foo and compiled regular
expression bar.
Naive code:
t = bar.findall(foo)
if len(t) < 3:
    t = []
Code with proposed syntactic sugar:
t = bar.findall(foo) if len(it) > 2 else []

> > Second, I saw a lot of questions about using dot notation for a
> > "object-like" dictionaries and a lot of solutions like this:
> >     class dotdict(dict):
> >         def __getattr__(self, attr):
> >             return self.get(attr, None)
> >         __setattr__= dict.__setitem__
> >         __delattr__= dict.__delitem__
> > why isn't there something like this in the standard library?
>
> Personally, I like keeping object attribute references separate from
> dictionary item references.
Your Python doesn't - dot notation is just sugar for a __dict__ lookup
with the default metaclass.

> This seems more like a pessimisation to me: your range version
> constructs a list just to do a single container check. That's a _lot_
> more cumbersome than two simple comparisons chained together.
By "wrong" I meant exactly this. I was talking about a "compiler"
optimisation of statements like this, so that it would not construct a list.
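
(As a small aside sketch: xrange at least avoids materializing the list,
though membership still iterates in Python 2:)

# no list is built, but 'in' still walks the range, so it stays O(n)
print 999999 in xrange(0, 10 ** 6)  # True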
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Some syntactic sugar proposals

2010-11-15 Thread Dmitry Groshev
On Nov 15, 12:03 pm, alex23  wrote:
> On Nov 15, 5:50 pm, Dmitry Groshev  wrote:
>
> > On Nov 15, 10:30 am, alex23  wrote:
> > > Personally, I like keeping object attribute references separate from
> > > dictionary item references.
>
> > Your Python doesn't - dot notation is just a sugar for __dict__ lookup
> > with default metaclass.
>
> That's a gross oversimplification that tends towards wrong:
>
> >>> class C(object):
> ...   def __init__(self):
> ...     self._x = None
> ...   @property
> ...   def x(self): return self._x
> ...   @x.setter
> ...   def x(self, val): self._x = val
> ...
> >>> c = C()
> >>> c.x = 1
> >>> c.x
> 1
> >>> c.__dict__['x']
> Traceback (most recent call last):
>   File "<stdin>", line 1, in <module>
> KeyError: 'x'
>
> But my concern has _nothing_ to do with the implementation detail of
> how objects hold attributes, it's solely over the clarity that comes
> from being able to visually tell that something is an object vs a
> dictionary.

Oh, now I understand you. But this "dotdict" (or "bunch") things don't
break anything. You still need to use it explicitly and it is very
useful if you need to serialize some JSON data about some entities.
s = """[{"name": "John Doe", "age": "12"}, {"name": "Alice",
"age": "23"}]"""
t = map(dotdict, json.loads(s))
t[0] #{'age': '12', 'name': 'John Doe'}
t[0].age #'12'
Of course you can do this with namedtuple, but in fact this isn't a
tuple at all. It's a list of entities.

> > > This seems more like a pessimisation to me: your range version
> > > constructs a list just to do a single container check. That's a _lot_
> > > more cumbersome than two simple comparisons chained together.
>
> > By "wrong" I meant exactly this. I told about "compiler" optimisation
> > of statements like this so it would not construct a list.
>
> So you want this:
>
>   if x in range(1,10):
>
> ...to effectively emit the same bytecode as a chained comparison while
> this:
>
>   for x in range(1,10):
>
> ...produces a list/generator?
>
> Never going to happen. "Special cases aren't special enough to break
> the rules." The standard form for chained comparisons can handle far
> more complex expressions which your 'in' version could not: 0 <= min
> <= max <= 100

I know about chained comparisons, thanks. It's not about them. It's
about bytecode optimisation. But maybe you are right about "Special
cases aren't special enough to break the rules". I kinda forgot that :)
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Some syntactic sugar proposals

2010-12-01 Thread Dmitry Groshev
On Nov 22, 2:21 pm, Andreas Löscher  wrote:
> >     if x in range(a, b): #wrong!
> > it feels so natural to check it that way, but we have to write
> >     if a <= x <= b
> > I understand that it's not a big deal, but it would be awesome to have
> > some optimisations - it's clearly possible to detect things like that
> > "wrong" one and fix it in a bytecode.
>
> You can implement it yourself:
>
> class between(object):
>         def __init__(self, a, b):
>                 super(between, self).__init__()
>                 self.a = a
>                 self.b = b
>         def __contains__(self, value):
>                 return self.a <= value <= self.b
>
> >>> 12.45 in between(-100,100)
>
> True
>
> But do you need
>
> a <  x <  b
> a <= x <  b
> a <= x <= b or
> a <  x <= b ?
>
> Sure, you could set a new parameter for this, but the normal way is not
> broken at all.
>
> Best

Of course there are better ways to do this - your "between", standard
comparisons and so on - but expressing it as "i in range(a, b)" is just
intuitive and declarative.
Here is a fresh example of what I meant by my first proposal. You need
to build a matrix like this:
2 1 0 ...
1 2 1 ...
0 1 2 ...
...
... 1 2 1
... 0 1 2
You could do this by one-liner:
[[(2 - abs(x - y)) if it > 0 else 0 for x in xrange(8)] for y in
xrange(8)]
...but in reality you should write something like this:
[[(lambda t: t if t > 0 else 0)(2 - abs(x - y)) for x in xrange(8)]
for y in xrange(8)]
or this
[[(2 - abs(x - y)) if (2 - abs(x - y)) > 0 else 0 for x in xrange(8)]
for y in xrange(8)]
or even this
def foo(x, y):
    if abs(x - y) == 0:
        return 2
    elif abs(x - y) == 1:
        return 1
    else:
        return 0
[[foo(x, y) for x in xrange(8)] for y in xrange(8)]
It doesn't matter THAT much, but it's just about readability and shortness
in some cases.
-- 
http://mail.python.org/mailman/listinfo/python-list


True lists in python?

2010-12-18 Thread Dmitry Groshev
Is there any way to use true lists (with O(1) insertion/deletion and
O(n) search) in python? For example, to make things like reversing
part of the list constant-time.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: True lists in python?

2010-12-18 Thread Dmitry Groshev
On Dec 19, 9:18 am, Dmitry Groshev  wrote:
> Is there any way to use a true lists (with O(c) insertion/deletion and
> O(n) search) in python? For example, to make things like reversing
> part of the list with a constant time.

I forgot to mention that I mean *fast* lists. It's trivial to do
things like this with objects, but it will be slow.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: True lists in python?

2010-12-18 Thread Dmitry Groshev
On Dec 19, 9:48 am, Vito 'ZeD' De Tullio 
wrote:
> Dmitry Groshev wrote:
> > Is there any way to use a true lists (with O(c) insertion/deletion and
> > O(n) search) in python? For example, to make things like reversing
> > part of the list with a constant time.
>
> if you're interested just in "reverse" a collection maybe you can take a
> look at the deque[0] module.
>
> If you want "true lists" (um... "linked list"?) there are is this recipe[1]
> you might look.
>
> [0]http://docs.python.org/library/collections.html#collections.deque
> [1]http://code.activestate.com/recipes/577355-python-27-linked-list-vs-
> list/
>
> --
> By ZeD

- I can't find any information about reverse's complexity in the python
docs, but it seems that deque is a linked list. Maybe this is the one
I need - see the sketch below.
- Yes, I meant linked lists - sorry for the misunderstanding.
- A linked list built on objects is too slow. It must be a C extension for
proper speed.
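
(For reference, a quick sketch of what deque already gives out of the box -
O(1) appends and pops at both ends plus rotation; internally it is a
doubly-linked list of blocks rather than a plain linked list:)

from collections import deque

d = deque([1, 2, 3, 4, 5])
d.appendleft(0)  # O(1) at either end
d.append(6)
d.rotate(2)      # rotate right by two positions
print d          # deque([5, 6, 0, 1, 2, 3, 4])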
-- 
http://mail.python.org/mailman/listinfo/python-list


HOW TO build object graph or get superclasses list for self.__class__ ?

2010-04-20 Thread Dmitry Ponyatov
Hello

Help please with such problem:

I need to build program object graph (data structure) with additional
parameters for nodes and edges:

import nxgraph  # data structure module, allows any py objects for node/edge id
# (nxgraph ignores adding the same node/edge twice, thus no checking is
# needed at node/edge adding)

OBJ_TREE = nxgraph.DiGraph() # directed graph

OBJ_TREE.add_node(object,color='red') # root object

class A(object):
    def __init__(self):
        object.__init__(self)  # (1) maybe map(lambda s: s.__init__(self), self.__super__) better if I have __super__
        OBJ_TREE.add_node(self.__class__, color='red')  # (2) add self class as new node
        for SUP in self.__super__:
            OBJ_TREE.add_edge(SUP, self.__class__, color='red')  # (3) add super class
                                                                 # to self class red arrow (directed edge)
        OBJ_TREE.add_node(self, color='blue')  # (4) add self object instance node
        OBJ_TREE.add_edge(self.__class__, self, color='blue')  # (5) add object producing from class edge
        OBJ_TREE.plot(sys.argv[0] + '.objtree.png')  # dump obj tree as picture to .png file

class B(A):
    pass

class C(A):
    pass

Using Python 2.5 I can't realize line (3) in class A, but lines (2), (4)
and (5) work well.
How can I write some function get_super(self) giving the list of
superclasses for self.__class__?
Preferable using py2.5
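
(A minimal sketch of such a get_super, using machinery that 2.5 new-style
classes already expose - __bases__ for the direct superclasses and __mro__
for the full chain:)

def get_super(obj):
    # direct superclasses only; obj.__class__.__mro__ gives the whole chain
    return obj.__class__.__bases__

class A(object): pass
class B(A): pass

print get_super(B())  # (<class '__main__.A'>,)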

These days I must reimplement __init__ by dumb copying from parent class to
child class like

class B(A):
    def __init__(self):
        A.__init__(self)
        OBJ_TREE.add_node(A, B) ; OBJ_TREE.plot('OBJ_TREE.png')

class C(A):
    def __init__(self):
        A.__init__(self)
        OBJ_TREE.add_node(A, C) ; OBJ_TREE.plot('OBJ_TREE.png')

class D(B, C):
    def __init__(self):
        B.__init__(self) ; OBJ_TREE.add_node(B, D)
        C.__init__(self) ; OBJ_TREE.add_node(C, D)
        OBJ_TREE.plot('OBJ_TREE.png')

This is not good -- a lot of dumb code with plenty of chances to make mistakes



-- 
http://mail.python.org/mailman/listinfo/python-list


Functional composition in python

2010-08-28 Thread Dmitry Groshev
Hello all. Some time ago I wrote a little library:
http://github.com/si14/python-functional-composition/ , inspired by
modern functional languages like F#. In my opinion it is quite useful
now, but I would like to discuss it.
An example of usage:

import os
from pyfuncomp import composable, c, _

def comment_cutter(s):
    t = s.find("#")
    return s if t < 0 else s[0:t].strip()

@composable  # one can use a decorator to make a composable function
def empty_tester(x):
    return len(x) > 0 and x[0] != "#"

path_prefix = "test"

config_parser = (c(open) >>               # or use a transformer function
                 c(str.strip).map >>      # "map" acts like a function modifier
                 c(comment_cutter).map >>
                 empty_tester.filter >>   # so does "filter"
                 c(os.path.join)[path_prefix, _].map)  # f[a, _, b] is used to make a partial;
                                                       # f[a, foo:bar, baz:_] is also correct

print config_parser("test.txt")
print (c("[x ** %s for x in %s]")[2, _] << c(lambda x: x * 2).map)([1, 2, 3])

Any suggestions are appreciated.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Functional composition in python

2010-08-28 Thread Dmitry Groshev
On Aug 29, 5:14 am, Steven D'Aprano  wrote:
> On Sat, 28 Aug 2010 21:30:39 +0400, Dmitry Groshev wrote:
> > Hello all. Some time ago I wrote a little library:
> >http://github.com/si14/python-functional-composition/, inspired by
> > modern functional languages like F#. In my opinion it is quite useful
> > now, but I would like to discuss it.
> > An example of usage:
>
> > import os
> > from pyfuncomp import composable, c, _
>
> > def comment_cutter(s):
> >     t = s.find("#")
> >     return s if t < 0 else s[0:t].strip()
>
> > @composable #one can use a decorator to make a composable function
> > def empty_tester(x):
> >     return len(x) > 0 and x[0] != "#"
>
> Why do you need a decorator to make a composable function? Surely all
> functions are composable -- it is the nature of functions that you can
> call one function with the output of another function.
>
>
>
> > path_prefix = "test"
>
> > config_parser = (c(open) >>  #or use a transformer function
> >              c(str.strip).map >> #"map" acts like a function modifier
> >              c(comment_cutter).map >>
> >              empty_tester.filter >> #so does "filter"
> >              c(os.path.join)[path_prefix, _].map) #f[a, _, b] is
> > used to make a partial.
> >                                                     #f[a, foo:bar,
> > baz:_] is also correct
>
> > print config_parser("test.txt")
> > print (c("[x ** %s for x in %s]")[2, _] << c(lambda x: x * 2).map)([1,
> > 2, 3])
>
> > Any suggestions are appreciated.
>
> Did you expect us to guess what the above code would do? Without showing
> the output, the above is just line noise.
>
> What does c() do? What does compose() do that ordinary function
> composition doesn't do? You say that "map" acts as a function modifier,
> but don't tell us *what* it modifies or in what way. Same for filter.
>
> So anyone not familiar with C syntax, the use of << is just line noise.
> You need to at say what you're using it for.
>
> --
> Steven

Yep, my mistake. I thought this syntax was quite intuitive. Here
are some explanations in code:

@composable
def f1(x):
    return x * 2

@composable
def f2(x):
    return x + 3

@composable
def f3(x):
    return (-1) * x

@composable
def f4(a):
    return a + [0]

@composable
def sqrsum(x, y):
    return x ** 2 + y ** 2

print f1(2) #4
print f2(2) #5
print (f1 << f2 << f1)(2) #14
print (f3 >> f2)(2) #1
print (f2 >> f3)(2) #-5
print (c(float) << f1 << f2)(4) #14.0
print (sqrsum[_, 1] << f1)(2) #17
print (sqrsum[_, _].map)([1, 2, 3, 4, 5]) #[2, 8, 18, 32, 50]
print (c(lambda x: x * 2).map >> c("[x * %s for x in %s]")[3, _])([1, 2, 3]) #[6, 12, 18]

Generally, f1 >> f2 means "lambda x: f2(f1(x))", or "pass the result of
f1 to f2". But in python a function can return only one value, so a
composable function should be a function of one argument. So some form
of making partials is needed, and here comes the
f[a, b, _] notation, which means "substitute the 3rd argument of f; the
first two are a and b". Finally, we need some syntactic sugar for
this: c(map)[c(f), _], so we have a "map" modifier, which transforms a
function F into an isomorphism or mapping between lists. For example,
c(lambda x: x * 2).map is equal to lambda x: map(lambda y: y * 2, x).
The "filter" modifier is the same thing for boolean functions.

>What does c() do? What does compose() do that ordinary function
>composition doesn't do?
I need c() or composable() to make objects with overloaded
operators.

All in all, this stuff is just syntactic sugar for nested
functions, maps and filters, which brings new semantics to old
operators (so one can call it an EDSL).
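
(Restated as a plain-Python sketch, this is essentially all that the >>
composition above desugars to:)

def compose(f, g):
    # compose(f, g)(x) == g(f(x)), i.e. what f >> g denotes here
    return lambda x: g(f(x))

double = lambda x: x * 2
add3 = lambda x: x + 3
print compose(double, add3)(2)  # 7, same as (c(double) >> c(add3))(2)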
-- 
http://mail.python.org/mailman/listinfo/python-list


Selecting k smallest or largest elements from a large list in python; (benchmarking)

2010-09-01 Thread Dmitry Chichkov
Given: a large list (10,000,000) of floating point numbers;
Task: fastest python code that finds k (small, e.g. 10) smallest
items, preferably with item indexes;
Limitations: in python, using only standard libraries (numpy & scipy
is Ok);

I've tried several methods. With N = 10,000,000 and K = 10, the fastest so
far (without item indexes) was the pure python implementation
nsmallest_slott_bisect (using bisect/insort). And with indexes,
nargsmallest_numpy_argmin (argmin() on the numpy array k times).

Anyone up to the challenge beating my code with some clever selection
algorithm?

Current Table:
1.66864395142 mins_heapq(items, n):
0.946580886841 nsmallest_slott_bisect(items, n):
1.38014793396 nargsmallest(items, n):
10.0732769966 sorted(items)[:n]:
3.17916202545 nargsmallest_numpy_argsort(items, n):
1.31794500351 nargsmallest_numpy_argmin(items, n):
2.37499308586 nargsmallest_numpy_array_argsort(items, n):
0.524670124054 nargsmallest_numpy_array_argmin(items, n):

0.0525538921356 numpy argmin(items): 1892997
0.364673852921 min(items): 10.026786


Code:

import heapq
from random import randint, random
import time
from bisect import insort
from itertools import islice
from operator import itemgetter

def mins_heapq(items, n):
    nlesser_items = heapq.nsmallest(n, items)
    return nlesser_items

def nsmallest_slott_bisect(iterable, n, insort=insort):
    it   = iter(iterable)
    mins = sorted(islice(it, n))
    for el in it:
        if el <= mins[-1]: #NOTE: equal sign is to preserve duplicates
            insort(mins, el)
            mins.pop()
    return mins

def nargsmallest(iterable, n, insort=insort):
    it   = enumerate(iterable)
    mins = sorted(islice(it, n), key = itemgetter(1))
    loser = mins[-1][1] # largest of smallest
    for el in it:
        if el[1] <= loser: # NOTE: equal sign is to preserve dupl
            mins.append(el)
            mins.sort(key = itemgetter(1))
            mins.pop()
            loser = mins[-1][1]
    return mins

def nargsmallest_numpy_argsort(iter, k):
    distances = N.asarray(iter)
    return [(i, distances[i]) for i in distances.argsort()[0:k]]

def nargsmallest_numpy_array_argsort(array, k):
    return [(i, array[i]) for i in array.argsort()[0:k]]

def nargsmallest_numpy_argmin(iter, k):
    distances = N.asarray(iter)
    mins = []

def nargsmallest_numpy_array_argmin(distances, k):
    mins = []
    for i in xrange(k):
        j = distances.argmin()
        mins.append((j, distances[j]))
        distances[j] = float('inf')

    return mins


test_data = [randint(10, 50) + random() for i in range(1000)]
K = 10

init = time.time()
mins = mins_heapq(test_data, K)
print time.time() - init, 'mins_heapq(items, n):', mins[:2]

init = time.time()
mins = nsmallest_slott_bisect(test_data, K)
print time.time() - init, 'nsmallest_slott_bisect(items, n):', mins[:2]

init = time.time()
mins = nargsmallest(test_data, K)
print time.time() - init, 'nargsmallest(items, n):', mins[:2]

init = time.time()
mins = sorted(test_data)[:K]
print time.time() - init, 'sorted(items)[:n]:', time.time() - init, mins[:2]

import numpy as N
init = time.time()
mins = nargsmallest_numpy_argsort(test_data, K)
print time.time() - init, 'nargsmallest_numpy_argsort(items, n):', mins[:2]

init = time.time()
mins = nargsmallest_numpy_argmin(test_data, K)
print time.time() - init, 'nargsmallest_numpy_argmin(items, n):', mins[:2]


print
init = time.time()
mins = array.argmin()
print time.time() - init, 'numpy argmin(items):', mins

init = time.time()
mins = min(test_data)
print time.time() - init, 'min(items):', mins
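
(One more candidate worth timing - a sketch that gets indexes for free by
feeding enumerate() pairs to heapq.nsmallest; untimed here, so treat it as
another baseline rather than a claimed winner:)

import heapq
from operator import itemgetter

def nargsmallest_heapq(items, n):
    # smallest n (index, value) pairs, ordered by value; key= needs Python 2.5+
    return heapq.nsmallest(n, enumerate(items), key=itemgetter(1))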

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Selecting k smallest or largest elements from a large list in python; (benchmarking)

2010-09-02 Thread Dmitry Chichkov
Uh, I'm sorry about the confusion. The last three items are just O(N)
baselines: Python min(), Numpy argmin(), Numpy asarray().
I'll update the code. Thanks!

> A lot of the following doesn't run or returns incorrect results.
> To give but one example:
>
> > def nargsmallest_numpy_argmin(iter, k):
> >     distances = N.asarray(iter)
> >     mins = []
>
> Could you please provide an up-to-date version?
>
> Peter
>
> PS: for an easy way to ensure consistency see timeit/time_all in my previous
> post.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Selecting k smallest or largest elements from a large list in python; (benchmarking)

2010-09-02 Thread Dmitry Chichkov
By the way, improving n-ARG-smallest (that returns indexes as well as
values) is actually more desirable than just regular n-smallest:

== Result ==
1.38639092445 nargsmallest
3.1569879055 nargsmallest_numpy_argsort
1.29344892502 nargsmallest_numpy_argmin

Note that numpy array constructor eats around 0.789.


== Code ==
from operator import itemgetter
from random import randint, random
from itertools import islice
from time import time

def nargsmallest(iterable, n):
    it   = enumerate(iterable)
    mins = sorted(islice(it, n), key = itemgetter(1))
    loser = mins[-1][1] # largest of smallest
    for el in it:
        if el[1] <= loser:  # NOTE: preserve dups
            mins.append(el)
            mins.sort(key = itemgetter(1))
            mins.pop()
            loser = mins[-1][1]
    return mins

def nargsmallest_numpy_argsort(iter, k):
    distances = N.asarray(iter)
    return [(i, distances[i]) for i in distances.argsort()[0:k]]

def nargsmallest_numpy_argmin(iter, k):
    mins = []
    distances = N.asarray(iter)
    for i in xrange(k):
        j = distances.argmin()
        mins.append((j, distances[j]))
        distances[j] = float('inf')
    return mins

test_data = [randint(10, 50) + random() for i in range(1000)]
K = 10

init = time()
mins = nargsmallest(test_data, K)
print time() - init, 'nargsmallest:', mins[:2]

import numpy as N
init = time()
mins = nargsmallest_numpy_argsort(test_data, K)
print time() - init, 'nargsmallest_numpy_argsort:', mins[:2]

init = time()
mins = nargsmallest_numpy_argmin(test_data, K)
print time() - init, 'nargsmallest_numpy_argmin:', mins[:2]
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Selecting k smallest or largest elements from a large list in python; (benchmarking)

2010-09-02 Thread Dmitry Chichkov
Yes, you are right of course. But it is not really a contest. And if
you can improve the algorithm or implementation on "your Python version
running under your OS on your hardware", it may well improve
performance for other people under other OS's too.


On Sep 2, 3:14 pm, Terry Reedy  wrote:
> On 9/1/2010 9:08 PM, Dmitry Chichkov wrote:
>
>
> Your problem is underspecified;-).
> Detailed timing comparisons are only valid for a particular Python
> version running under a particular OS on particular hardware. So, to
> actually run a contest, you would have to specify a version and OS and
> offer to run entries on your machine, with as much else as possible
> turned off, or else enlist a neutral judge to do so. And the timing
> method should also be specified.
>
> --
> Terry Jan Reedy

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: merits of Lisp vs Python

2006-12-11 Thread Dmitry V. Gorbatovsky
[EMAIL PROTECTED] wrote:

>I challenge anyone making psychological
> claims about language X (for any X) being easier to either
> read/learn/use than language Y (for any Y) to come up with any valid
> evidence whatever. 
Put aside that I don't understand the meaning of the term
"psychological claim" in that context.
It is obvious that different media of information exchange
provide different levels of readability for humans,
and accordingly are easier or harder to learn and use.

Regards, Dmitry
-- 
http://mail.python.org/mailman/listinfo/python-list


is decorator the right thing to use?

2008-09-24 Thread Dmitry S. Makovey
Hi,

after hearing a lot about decorators and never actually using one I have
decided to give it a try. My particular usecase is that I have a class that
acts as a proxy to other classes (i.e. passes messages along to those
classes); however, hand-coding this type of class is rather tedious, so I
decided to use a decorator for that. Can somebody tell me if what I'm doing
is a potential shot-in-the-foot, or am I on the right track? (Note: it's a
rather rudimentary proof-of-concept implementation and not the final
solution I'm about to employ, so there are no optimizations or
signature-preserving code there yet, just the idea).

Here's the code:

class A:
    b=None
    def __init__(self,b):
        self.val='aval'
        self.b=b
        b.val='aval'

    def mymethod(self,a):
        print "A::mymethod, ",a

    def mymethod2(self,a):
        print "A::another method, ",a


def Aproxy(fn):
    def delegate(*args,**kw):
        print "%s::%s" % (args[0].__class__.__name__,fn.__name__)
        args=list(args)
        b=getattr(args[0],'b')
        fnew=getattr(b,fn.__name__)
        # get rid of original object reference
        del args[0]
        fnew(*args,**kw)
    setattr(A,fn.__name__,delegate)
    return fn

class B:
    def __init__(self):
        self.val='bval'

    @Aproxy
    def bmethod(self,a):
        print "B::bmethod"
        print a, self.val

    @Aproxy
    def bmethod2(self,a):
        print "B::bmethod2"
        print a, self.val

b=B()
b.bmethod('foo')
a=A(b)
b=B()
b.val='newval'
a.bmethod('bar')
a.bmethod2('zam')


--
http://mail.python.org/mailman/listinfo/python-list


Re: is decorator the right thing to use?

2008-09-24 Thread Dmitry S. Makovey
[EMAIL PROTECTED] wrote:
> Your code below is very abstract, so it's kind of hard to figure out
> what problem you're trying to solve, but it seems to me that you're
> using the B proxy class to decorate the A target class, which means
> you want one of these options:

Sorry for the unclarities in the original post. Basically A aggregates an
object of class B (example with no decorators; again, it's oversimplified):

class A:
    b=None
    def __init__(self,b):
        self.b=b

    def amethod(self,a):
        print "A::amethod ", a

    def bmethod(self,a):
        print "A::bmethod ",a
        return self.b.bmethod(a)

    def bmethod2(self,a,z):
        print "A::bmethod2 ",a,z
        return self.b.bmethod2(a,z)


class B:
    def __init__(self):
        self.val='bval'

    def bmethod(self,a):
        print "B::bmethod ",a

    def bmethod2(self,a,z):
        print "B::bmethod2 ",a,z


b=B()
a=A(b)
a.bmethod('foo')
a.bmethod2('bar','baz')

In my real-life case A is a proxy to B, C and D instances/objects, not just
one. If you look at the above code - whenever I write a new method in B, C
or D I have to modify A, and even when I modify a signature (say, add a
parameter x to bmethod) in B, C or D I have to make sure A is synchronized.
I was hoping to use a decorator to do it automatically for me. Since the
resulting code is virtually the same for all those proxy methods, it
seems to be a good place for automation. Or am I wrong assuming that?
(since it is my first time using decorators I honestly don't know)

The above code illustrates what I am doing right now. My original post
is an attempt to make things more automated/foolproof.

--
http://mail.python.org/mailman/listinfo/python-list


Re: is decorator the right thing to use?

2008-09-24 Thread Dmitry S. Makovey
Aaron "Castironpi" Brady wrote:
 
> It might help to tell us the order of events that you want in your
> program.  You're not using 'mymethod' or 'mymethod2', and you probably
> want 'return fnew' for the future.  Something dynamic with __getattr__
> might work.  Any method call to A, that is an A instance, tries to
> look up a method of the same name in the B instance it was initialized
> with.

well, 'mymethod' and 'mymethod2' were there just to show that A doesn't
function as a pure proxy - it has methods of its own. See my response to
Steve - I proxy messages to more than one aggregated object. Going over
them in __getattr__ to look up methods just doesn't seem very
efficient to me (I might be wrong though). Decorators seemed to present
a good opportunity to simplify the code (well, except for the decorator
function itself :) ), make the code a bit more "fool-proofed" (and give me
the opportunity to test decorators in real life, he-he).

So decorators inside of B just identify that those methods will be proxied
by A. On one hand, from a logical standpoint it's kind of weird to tell a
class that it is going to be proxied by another class, but the declaration
would be really close to the original function definition, which helps to
identify where it is used.

Note that my decorator doesn't change the original function - it's a
subversion of decorators to a certain degree, as I'm just hooking into
python machinery to add methods to A upon their declaration in B (or so I
think).


--
http://mail.python.org/mailman/listinfo/python-list


Re: is decorator the right thing to use?

2008-09-24 Thread Dmitry S. Makovey
Dmitry S. Makovey wrote:
> In my real-life case A is a proxy to B, C and D instances/objects, not
> just one. 

forgot to mention that the above would mean that I need to have more than
one decorator function, like AproxyB, AproxyC and AproxyD, or make Aproxy
smarter about which property of A holds an instance of which class, etc.

Unless I'm totally "out for lunch" and there are better ways of implementing
this (other than copy-pasting stuff whenever anything in B, C or D
changes).
--
http://mail.python.org/mailman/listinfo/python-list


Re: is decorator the right thing to use?

2008-09-25 Thread Dmitry S. Makovey
Diez B. Roggisch wrote:

> Dmitry S. Makovey schrieb:
>> Dmitry S. Makovey wrote:
>>> In my real-life case A is a proxy to B, C and D instances/objects, not
>>> just one.
>> 
>> forgot to mention that above would mean that I need to have more than one
>> decorator function like AproxyB, AproxyC and AproxyD or make Aproxy
>> smarter about which property of A has instance of which class etc.
> 
> __getattr__?

see, in your code you're assuming that there's only 1 property ('b')
inside of A that needs proxying. In reality I have several, so in your code
self._delegate should be at least a tuple or a list. Plus, what you're
doing is promiscuously passing any method not found in Proxy to
self._delegate, which is not what I need - I need to pass only a subset of
calls. So now your code needs to acquire a dictionary of "allowed" calls and
go over all self._delegates to find which one has it, which is not
efficient, since there IS a 1:1 mapping of A::method -> B::method, so
lookups shouldn't be necessary IMO (for performance reasons).

> class Proxy(object):
>
>     def __init__(self, delegate):
>         self._delegate = delegate
>
>     def __getattr__(self, attr):
>         v = getattr(self._delegate, attr)
>         if callable(v):
>             class CallInterceptor(object):
>                 def __init__(self, f):
>                     self._f = f
>
>                 def __call__(self, *args, **kwargs):
>                     print "Called " + str(self._f) + " with " + str(args) + str(kwargs)
>                     return self._f(*args, **kwargs)
>             return CallInterceptor(v)
>         return v

> Decorators have *nothing* to do with this. They are syntactic sugar for
> def foo(...):
>  ...
> foo = a_decorator(foo)

exactly. and in my case they would've simplified code reading/maintenance.
However, the introduced "tight coupling" (A knows about B, and B should know
about A) is something that worries me, and I'm trying to figure out if there
is another way to use decorators for my scenario, or another way of
achieving the same thing without decorators and without bloating up
the code with an alternative solution.

Another way could be to use a metaclass to populate the class with methods
upon declaration, but that presents quite a bit of "special" cruft, which is
more than I have to do with decorators :) (but maybe it's all *necessary*?)
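
(For the record, a rough sketch of that metaclass route; proxy_targets is a
made-up naming convention here, not an established API:)

class ProxyMeta(type):
    def __init__(cls, name, bases, ns):
        super(ProxyMeta, cls).__init__(name, bases, ns)
        # hypothetical convention: proxy_targets maps method name -> attribute
        for meth_name, attr in ns.get('proxy_targets', {}).items():
            def make_delegate(meth_name=meth_name, attr=attr):
                def delegate(self, *args, **kw):
                    return getattr(getattr(self, attr), meth_name)(*args, **kw)
                return delegate
            setattr(cls, meth_name, make_delegate())

class A(object):
    __metaclass__ = ProxyMeta
    proxy_targets = {'bmethod': 'b'}  # proxy A.bmethod to self.b.bmethod
    def __init__(self, b):
        self.b = b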

--
http://mail.python.org/mailman/listinfo/python-list


Re: is decorator the right thing to use?

2008-09-25 Thread Dmitry S. Makovey
Thanks Bruno,

your comments were really helpful (so was the "improved" version of code). 

My replies below:

Bruno Desthuilliers wrote:
>> So decorators inside of B just identify that those methods will be
>> proxied by A. On one hand from logical standpoint it's kind of weird to
>> tell class that it is going to be proxied by another class,
> 
> Indeed - usually, proxied objects shouldn't have to be aware of the
> fact. That doesn't mean your variation on the proxy pattern is
> necessarily bad design (hard to tell without lot of context anyway...),
> but still there's some alarm bell ringing here IMHO - IOW : possibly the
> right thing to do, but needs to be double-checked.

I'm kind of looking at options and not dead-set on decorators, but I can't
find any other "elegant enough" solution which wouldn't lead to such tight
coupling. The problem I'm trying to solve is not much more complicated than
what I have already described, so if anybody can suggest a better approach -
I'm all for it.

> Now I'm not sure I really like your implementation. Here's a possible
> rewrite using a custom descriptor:

yeah, that was going to be my next step - I was just aiming for
proof-of-concept more then efficient code :)

> class Proxymaker(object):
>     def __init__(self, attrname):
>         self.attrname = attrname
>
>     def __get__(self, instance, cls):
>         def _proxied(fn):
>             fn_name = fn.__name__
>             def delegate(inst, *args, **kw):
>                 target = getattr(inst, self.attrname)
>                 #return fn(target, *args,**kw)
>                 method = getattr(target, fn_name)
>                 return method(*args, **kw)
>
>             delegate.__name__ = "%s_%s_delegate" % \
>                 (self.attrname, fn_name)
>
>             setattr(cls, fn_name, delegate)
>             return fn
>
>         return _proxied
>
> class A(object):
>     def __init__(self,b):
>         self.val='aval'
>         self.b=b
>         b.val='aval'
>
>     proxy2b = Proxymaker('b')
>
>     def mymethod(self,a):
>         print "A::mymethod, ",a
>
>     def mymethod2(self,a):
>         print "A::another method, ",a
>
> class B(object):
>     def __init__(self):
>         self.val='bval'
>
>     @A.proxy2b
>     def bmethod(self,a):
>         print "B::bmethod"
>         print a, self.val
>
>     @A.proxy2b
>     def bmethod2(self,a):
>         print "B::bmethod2"
>         print a, self.val

> My point is that:
> 1/ you shouldn't have to rewrite a decorator function - with basically
> the same code - for each possible proxy class / attribute name pair combo
> 2/ making the decorator an attribute of the proxy class makes
> dependencies clearer (well, IMHO at least).

agreed on all points

> I'm still a bit uneasy wrt/ high coupling between A and B, and if I was
> to end up with such a design, I'd probably take some times to be sure
> it's really ok.

that is the question that troubles me at this point - thus my original post
(read the subject line ;) ). I like the clarity decorators bring to the
code, and the fact that it's a solution pretty much "out-of-the-box" without
the need to create something really custom, but I'm worried about the tight
coupling and somewhat backward logic that they would introduce (the way I
envisioned them).


--
http://mail.python.org/mailman/listinfo/python-list


Re: is decorator the right thing to use?

2008-09-25 Thread Dmitry S. Makovey
Aaron "Castironpi" Brady wrote:
> You should write it like this:
> 
> class B(object):
>  @A.proxy
>  def bmethod(self,a):
> 
> Making 'proxy' a class method on A.  

makes sense.

> In case different A instances (do 
> you have more than one BTW?) 

yep. I have multiple instances of class A, each one has properties (one per
class) of classes B, C and D:

class A:
    b=None
    c=None
    d=None
    def __init__(self,b,c,d):
        self.b=b
        self.c=c
        self.d=d

    # ...magic with proxying methods goes here...

class B:
    def bmethod(self,x): pass  # we proxy this method from A
    def bmethod2(self,x): pass # this is not proxied
class C:
    def cmethod(self,x): pass  # we proxy this method from A
class D:
    def dmethod(self,x): pass  # we proxy this method from A

a=A(B(),C(),D())
x='foo'
a.bmethod(x)
a.cmethod(x)
a.dmethod(x)
a.bmethod2(x) # raises error as we shouldn't proxy bmethod2

above is the ideal scenario. 

> What you've said implies that you only have one B instance, or only
> one per A instance.  Is this correct?

yes. as per above code.

> I agree that __setattr__ is the canonical solution to proxy, but you
> have stated that you want each proxied method to be a member in the
> proxy class.

well. kind of. if I can make it transparent to the consumer so that he
shouldn't do:

a.b.bmethod(x)

but rather:

a.bmethod(x)

As I'm trying to keep b, c and d as private properties and would like to
filter which calls are allowed to those. Plus proxied methods in either one
always expect certain parameters like:

class B:
    def bmethod(self,c,x): pass

and A encapsulates 'c' already and can fill in that blank automagically:

class A:
    c=None
    b=None
    def bmethod(self,c,x):
        if not c:
            c=self.c
        self.b.bmethod(c,x)

I kept this part of the problem out of this discussion as I'm pretty sure I
can fill those in once I figure out the basic problem of auto-population of
proxy methods, since for each class/method those are going to be nearly
identical. If I can autogenerate those on-the-fly, I'm pretty sure I can add
some extra logic to them as well, including a signature change where
A::bmethod(self,c,x) would become A::bmethod(self,x), etc.


--
http://mail.python.org/mailman/listinfo/python-list


Re: is decorator the right thing to use?

2008-09-25 Thread Dmitry S. Makovey
Aaron "Castironpi" Brady wrote:
>> I kept this part of the problem out of this discussion as I'm pretty sure
>> I can fill those in once I figure out the basic problem of
>> auto-population of proxy methods since for each class/method those are
>> going to be nearly identical. If I can autogenerate those on-the-fly I'm
>> pretty sure I can add some extra-logic to them as well including
>> signature change where A::bmethod(self,c,x) would become
>> A::bmethod(self,x) etc.
> 
> Do you want to couple instances or classes together?

It would be nice to have objects of B, C and D classes not knowing that they
are proxied (as they are used on their own too, not only inside of A
objects). 

> If A always proxies for B, C, and D, then the wrapper solution isn't
> bad. 

the whole purpose of A is pretty much to proxy and filter. It's got some
extra logic to combine and manipulate b, c and d objects inside of A class
objects.

> If you're going to be doing any instance magic, that can change 
> the solution a little bit.
> 
> There's also a revision of the first implementation of Aproxy you
> posted, which could stand alone as you have it, or work as a
> classmethod or staticmethod.
> 
> def Aproxy(fn):
>     def delegate(self,*args,**kw):
>         print "%s::%s" % (self.__class__.__name__,fn.__name__)
>         fnew=getattr(self.b,fn.__name__)
>         return fnew(*args,**kw)
>     setattr(A,fn.__name__,delegate)
>     return fn

yep, that does look nicer/cleaner :)
--
http://mail.python.org/mailman/listinfo/python-list


Re: Eggs, VirtualEnv, and Apt - best practices?

2008-09-25 Thread Dmitry S. Makovey
Scott Sharkey wrote:
> Any insight into the best way to have a consistent, repeatable,
> controllable development and production environment would be much
> appreciated.

you have just described OS package building ;)

I can't speak for everybody, but supporting multiple platforms (PHP, Perl,
Python, Java) we found that the only way to stay consistent is to use the
OS-native packaging tools (in your case apt and .deb), and if you're missing
something - roll your own package. After a while you accumulate plenty of
templates to choose from when you need yet another library not available
upstream in your preferred package format. Remember that some python tools
might depend on non-python packages, so the only way to make sure all that
is consistent across environments is to use unified package management.

Sorry, no specific pointers though, as we're a redhat shop and debs are not
our everyday business.
--
http://mail.python.org/mailman/listinfo/python-list


Re: is decorator the right thing to use?

2008-09-25 Thread Dmitry S. Makovey
George Sakkis wrote:

> I'm not sure if the approach below deals with all the issues, but one
> thing it does is decouple completely the proxied objects from the
> proxy:



> class _ProxyMeta(type):



It smelled to me more and more like a metaclass too; I was just trying to
avoid them :)

Your code looks awfully close to what I'm trying to do, except it looks a
bit heavier than decorators. Seems like decorators are not going to happen
in this part of the project for me anyway; however, the whole discussion
gave me a lot to think about. Thank you Bruno, Aaron, Diez and George.

Thanks for the concrete code with the metaclass. I'm going to study it
thoroughly to see if I can spot caveats/issues for my usecases; however, it
seems to put me on the right track. I never used metaclasses before, and
decorators seemed a bit more straight-forward to me :) ..oh well.

--
http://mail.python.org/mailman/listinfo/python-list


Re: Eggs, VirtualEnv, and Apt - best practices?

2008-09-25 Thread Dmitry S. Makovey
Diez B. Roggisch wrote:

>   - different OS. I for one don't know about a package management tool
> for windows. And while our servers use Linux (and I as developer as
> well), all the rest of our people use windows. No use telling them to
> apt-get install python-imaging.

that is a very valid point, but it seemed that Scott has a homogeneous
environment (Debian/Ubuntu), so my post was relative to the original
request. I agree that when you throw Windows/MacOS into the mix things
become "interesting". But then it's better when your developers develop on
the server/platform they are going to be using, with the same stack they are
going to face in production, etc. It all depends on the requirements and
current practices in the company.

>   - keeping track of recent developments. In the Python webframework
> world for example (which the OP seems to be working with), things move
> fast. Or extremly slow, regarding releases. Take Django - until 2 month
> ago, there hasn't been a stable release for *years*. Virtually everybody
> was working with trunk. And given the rather strict packaging policies
> of debian and consorts, you'd be cut off of recent developments as well
> as of bugfixes.

that definitely becomes tricky, however not impossible to track. You need
a common snapshot for all developers to use anyway - so why not just
package it up?

Note: I do agree that depending on environment/development
practices/policies/etc my statement might become invalid or useless.
However, when you're dealing with a homogeneous environment, or you require
development and testing to be done on your servers running the targeted
application stack - things become much easier to manage :)

--
http://mail.python.org/mailman/listinfo/python-list


Re: Eggs, VirtualEnv, and Apt - best practices?

2008-09-25 Thread Dmitry S. Makovey
Diez B. Roggisch wrote:
> Well, you certainly want a desktop-orientied Linux for users, so you
> chose ubuntu - but then on the server you go with a more stable debian
> system. Even though the both have the same technical and even package
> management-base, they are still incompatible wrt to package versions for
> python.
> 
> And other constraints such as Photoshop not being available for Linux
> can complicate things further.

actually I had in mind X11 sessions forwarded from server to desktop - all
development tools and libraries are on server, and all unrelated packages
(like Photoshop etc.) are on desktop.

>> that definitely becomes tricky however not impossible to track. You do
>> need a common snapshot for all developers to use anyway - so why not just
>> package it up?
> 
> I do, but based on Python eggs. They are platform independent (at
> ultimo, you can use the source distribution, albeit that sux for windows
> most of the time), and as I explained in my other post - things are
> moving in the right direction.


/I'll play devil's advocate here even though I see your point/

how do you deal with non-pythonic dependencies then? Surely you don't
package ImageMagick into an egg ;)

> Don't get me wrong - I love .deb-based systems. But if using them for my
> development means that I have to essentially create a full zoo of
> various packages *nobody else* uses - I rather stick with what's working
> for me.

Looks like if you packaged those and made them available, you'd have quite
a few people using them. I've seen people looking for pre-packaged python
libs just to stick to OS package management tools. :)

Eggs and debs are not a silver bullet for *any* scenario, so you have to
weigh what you can get out of either one against what you are going to
sacrifice. In my case I know all our systems (servers) run the same OS, but
developers don't. So I provide them with an environment on devel/testing
servers that they can use as a primary development environment, or they can
develop on their own boxes (which means they are on their own hunting
dependencies/packages/etc.), but before moving forward they still have to
test on a "certified" server. And I don't suggest that everybody should run
*this* type of environment - it just works better in our case.


--
http://mail.python.org/mailman/listinfo/python-list


Re: is decorator the right thing to use?

2008-09-26 Thread Dmitry S. Makovey
Paul McGuire wrote:
>> see, in your code you're assuming that there's only 1 property ( 'b' )
>> inside of A that needs proxying. In reality I have several. 

> 
> No, really, Diez has posted the canonical Proxy form in Python, using
> __getattr__ on the proxy, and then redirecting to the contained
> delegate object.  This code does *not* assume that only one property
> ('b'? where did that come from?) is being redirected - __getattr__
> will intercept all attribute lookups and redirect them to the
> delegate.
> 
> If you need to get fancier and support this single-proxy-to-multiple-
> delegates form, then yes, you will need some kind of map that says
> which method should delegate to which object.  Or, if it is just a
> matter of precedence (try A, then try B, then...), then use hasattr to
> see if the first delegate has the given attribute, and if not, move on
> to the next.

that is what I didn't like about it - I have to iterate over delegates,
when I can build a direct mapping once and for all and tie it to the class
definition ;)

> Your original question was "is decorator the right thing to use?"  For
> this application, the answer is "no".  

yeah. seems that way. in the other fork of this thread you'll find my
conclusion which agrees with that :)

> It sounds like you are trying 
> to force this particular to solution to your problem, but you are
> probably better off giving __getattr__ intercepting another look.

__getattr__ implies constant lookups and checks (for filtering purposes) - I
want to do them once, attach the generated methods as native methods, and be
done with it. That is why I do not like __getattr__ in this particular
case. Otherwise - you're right.
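
(Still, for completeness, a middle-ground sketch: do the __getattr__ lookup
once and cache the bound method on the instance, so the per-call cost
disappears after the first hit; the _allowed set is an assumption standing
in for the real filter:)

class CachingProxy(object):
    _allowed = frozenset(['bmethod', 'bmethod2'])  # assumed filter set

    def __init__(self, b):
        self.b = b

    def __getattr__(self, name):
        # runs only on the first miss; the bound method is then cached
        # in the instance dict, so later calls never re-enter __getattr__
        if name not in self._allowed:
            raise AttributeError(name)
        method = getattr(self.b, name)
        setattr(self, name, method)
        return method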

--
http://mail.python.org/mailman/listinfo/python-list


Re: is decorator the right thing to use?

2008-09-26 Thread Dmitry S. Makovey
Bruno Desthuilliers wrote:
> Hem... I'm afraid you don't really take Python's dynamic nature into
> account here. Do you know that even the __class__ attribute of an
> instance can be rebound at runtime ? What about 'once and for all' then ?

must've been wrong wording on my part. Dynamic nature is exactly what I
wanted to use :) except that I do not expect clients to take advantage of
it while using my classes ;)

>>> Your original question was "is decorator the right thing to use?"  For
>>> this application, the answer is "no".
>> 
>> yeah. seems that way. in the other fork of this thread you'll find my
>> conclusion which agrees with that :)
>> 
>>> It sounds like you are trying
>>> to force this particular to solution to your problem, but you are
>>> probably better off giving __getattr__ intercepting another look.
>> 
>> __getattr__ implies constant lookups and checks (for filtering purposes)
> 
> Unless you cache the lookups results...

sure

>> - I
>> want to do them once, attach generated methods as native methods
> 
> What is a "native method" ? You might not be aware of the fact that
> method objects are usually built anew from functions on each method
> call...

again, wrong wording on my part. By native I meant: make use as much as
possible of the existing machinery, and override default behavior only when
it's absolutely necessary. (hopefully my wording is not off this time ;) )

>> and be
>> done with it. That is why I do not like __getattr__ in this particular
>> case.
> 
> There's indeed an additional penalty using __getattr__, which is that
> it's only called as a last resort. Now remember that premature
> optimization is the root of evil... Depending on effective use (ie : how
> often a same 'proxied' method is called on a given Proxy instance, on
> average), using __getattr__ to retrieve the appropriate bound method on
> the delegate then adding it to the proxy instance *as an instance
> attribute* might be a way better (and simpler) optimization.

I actually ended up rewriting things (loosely based on George's suggested
code) with descriptors, not using metaclasses or decorators (so much for
my desire to use them).

With the following implementation (unpolished at this stage but already
functional) I can have several instances of B objects inside an A object
and proxy certain methods to one or another object (I might have a
case where I have A.b1 and A.b2, passing some methods to b1 and others to
b2, both of the same class B, maybe even multiplexing). This one
seems fairly light as well, without the need to scan instances (well,
except for one getattr, but I couldn't get around it). Maybe I didn't
account for some shoot-in-the-foot scenarios, but I can't come up with any.
Last time I played with __getattr__ I shot myself in the foot quite well
BTW :)

class ProxyMethod(object):

    def __init__(self,ob_name,meth):
        self.ob_name=ob_name
        self.meth=meth

    def my_call(self,instance,*argv,**kw):
        ob=getattr(instance,self.ob_name)
        cls=self.meth.im_class
        return self.meth.__get__(ob,cls)(*argv,**kw)

    def __get__(self,instance,owner):
        if not instance:
            return self.my_call
        ob=getattr(instance,self.ob_name)
        cls=self.meth.im_class
        return self.meth.__get__(ob,cls)

class B:
    def __init__(self):
        self.val='bval'

    def bmethod(self,a):
        print "B::bmethod",
        print a, self.val

class A:
    b=None

    def __init__(self,b=None):
        self.val='aval'
        self.b=b
        b.val='aval-b'

    def mymethod(self,a):
        print "A::mymethod, ",a

    bmethod=ProxyMethod('b',B.bmethod)

b=B()
b.bmethod('foo')
a=A(b)
b=B()
b.val='newval'
a.mymethod('baz')
a.bmethod('bar')
A.bmethod(a,'zoom')

--
http://mail.python.org/mailman/listinfo/python-list


Re: is decorator the right thing to use?

2008-09-26 Thread Dmitry S. Makovey
Aaron "Castironpi" Brady wrote:
> That prohibits using a descriptor in the proxied classes, or at least
> the proxied functions, since you break descriptor protocol and only
> call __get__ once.  Better to cache and get by name.  It's only slower
> by the normal amount, and technically saves space, strings vs.
> instancemethod objects (except for really, really long function names).

that is an interesting point, since I didn't think about having descriptors
in proxied classes. My reworked code clearly breaks when descriptors are
thrown at it. It will break with methods in proxied objects that are
implemented as objects too. Now I've adjusted the constructor a bit to
account for that (I just can't figure out a case when I'd be proxying
descriptors, unless they return a function, but then I don't see the benefit
of using a descriptor for that - probably because I haven't used them much).

class ProxyMethod(object):
    def __init__(self,ob_name,meth):
        self.ob_name=ob_name
        if not hasattr(meth,'im_class'):
            if hasattr(meth,'__call__'):
                self.meth=getattr(meth,'__call__')
            else:
                raise ValueError("Method should be either a class method or a callable class")
        else:
            self.meth=meth


--
http://mail.python.org/mailman/listinfo/python-list


Re: is decorator the right thing to use?

2008-09-26 Thread Dmitry S. Makovey
George Sakkis wrote:
> You seem to enjoy pulling the rug from under our feet by changing the
> requirements all the time :)

but that's half the fun! ;)

Bit more seriously - I didn't know I had those requirements until now :) I'm
kind of exploring where I can get with those ideas. The initial post was
based exactly on what I had in my code, with a desire to make it more
automated/{typo,fool}-proof/robust, elegant and possibly faster. Hopefully
such behavior is not off-topic on this list (if it is - let me know and
I'll go exploring solo).

> Although this works, the second argument to ProxyMethod shouldn't be
> necessary, it's semantically redundant;  ideally you would like to 
> write it as "bmethod = ProxyMethod('b')".

since I'm already on the exploratory trail (what about that rug being pulled
from under your feet?), with my code I can do really dirty tricks like this
(not that I'm necessarily going to):

class B_base:
    def bmethod(self):
        print 'B_base'

class B(B_base):
    def bmethod(self):
        print 'B'

class A:
    bmethod = ProxyMethod('b', B_base.bmethod)

> As before, I don't think
> that's doable without metaclasses (or worse, stack frame hacking).
> Below is the update of my original recipe; interestingly, it's
> (slightly) simpler than before:

Out of curiosity (and trying to understand): why do you insist on
dictionaries with string contents ({'bmethod': 'b1'}) rather than
something more explicit? Again, I can see that your code is working and I
can even understand what it's doing, just trying to explore alternatives :)

I guess my bias is towards more explicit declarations, thus

 bmethod=ProxyMethod('b',B.bmethod)

looks more attractive to me, but I stand to be corrected/educated as to why
that is not the right thing to do.

Another thing that turns me away from string dictionaries is that they are
the ones causing me the most trouble when hunting down typos. Maybe it's just
"my thing", so I'm not going to insist on it. I'm open to arguments against
that theory.

One argument I can bring in defence of more explicit declarations is IDE
parsing: autocompletion for B.bme... pops up (suggesting bmethod and
bmethod2), while with 'b':'bmethod' it never happens. However, that's not a
good enough reason to stick to it if it introduces other problems. What kind
of problems could those be?

P.S.
so far I find this discussion quite educational BTW. I am more of a "weekend
programmer" with Python - using the basics of the language for most things I
need - but I decided to explore further to educate myself and to write more
efficient code.


--
http://mail.python.org/mailman/listinfo/python-list


Re: is decorator the right thing to use?

2008-09-27 Thread Dmitry S. Makovey
George Sakkis wrote:
> It's funny how often you come with a better solution a few moments
> after htting send! The snippet above can (ab)use the decorator syntax
> so that it becomes:
> 
> class A(Proxy):
> 
> @ProxyMethod
> def bmethod(self):
> return self.b1
> 
> @ProxyMethod
> def bmethod2(self):
> return self.b2

That is outstanding! This code looks very clean to me (just a touch cryptic
around the declarations in A, but that was unavoidable anyway). It seems the
right way to read it is bottom-up (or is it just my mind that's so
perverted?). By the looks of it, it does exactly what I needed, with a great
number of possibilities behind it, and it is very lightweight and transparent.
Now I regret not coming up with it myself :-D
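
Since the body of that recipe isn't quoted above, here is a minimal sketch of
a descriptor compatible with the usage shown - an assumption on my part,
George's actual implementation may well differ:

class proxymethod(object):
    def __init__(self, func):
        self._func = func   # func returns the object to forward to, e.g. self.b1

    def __get__(self, instance, owner):
        if instance is None:
            return self
        # look up the attribute of the same name on the target object
        return getattr(self._func(instance), self._func.__name__)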

George, at this point I'm out of rugs - so no more rug pulling from under
your feet for me. 

Now I'm going to apply all this knowledge to my code, see how that goes and
come back with more questions later.

Thank you (all) very much for a great discussion. This thread educated me
quite a bit on descriptors and why one would need them, and decorators -
just as the subject line suggested - were not forgotten.


--
http://mail.python.org/mailman/listinfo/python-list


Re: is decorator the right thing to use?

2008-09-28 Thread Dmitry S. Makovey
George Sakkis wrote:
> FYI, in case you missed it the final version doesn't need a Proxy base
> class, just inherit from object. Also lowercased ProxyMethod to look
> similar to staticmethod/classmethod:

I caught that, just quoted the wrong one :)

> class A(object):
> 
> def __init__(self, b1, b2):
> self.b1 = b1
> self.b2 = b2
> 
> @proxymethod
> def bmethod(self):
> return self.b1
> 
> @proxymethod
> def bmethod2(self):
> return self.b2
> 
> George

--
http://mail.python.org/mailman/listinfo/python-list


Re: Advice for a replacement for plone.

2008-10-02 Thread Dmitry S. Makovey

disclaimer: I'm not affiliated with any company selling Plone services ;)
I also have nothing against Django and such. 

Ken Seehart wrote:

> I want a new python based CMS.  ... One that won't keep me up all night
> 
> 
> I've been fooling around with zope and plone, and I like plone for some
> things, such as a repository for online project documentation.  However
> for general-purpose web development it is too monolithic.  

did you actually try to strip off all the main_template stuff and create your
own from scratch? That gets rid of the "monolithic" part but requires some
discipline from you, while giving you the benefits of Plone (skins etc.)

I think you've got misconceptions about what a CMS is. A CMS doesn't preclude
any look-and-feel; it just happens that Plone comes with an out-of-the-box
default that suits 80% of the population. If you don't fall into the 80% -
you're free to customize the heck out of it. I know I did :)

I haven't seriously touched Plone in a while (since the 2.1.x series) but the
above suggestion comes from the times when I was involved in Plone-related
development. It's an excellent tool running on a pretty stable and scalable
platform; I wouldn't give it up that easily if I were you. see below for
clarifications.

> Is there 
> anything out there with the following properties?
> 
> 0. Web page being developed is a typical small business home page (i.e.
> not a newspaper or a blog).

easy with customized Plone

> 1. Page owner can edit pages on line with no expertise (plone does great
> here).

well, you've said it ;)

> 2. Main page does not look plone-like.  For an example of a main page
> that does not look plone-like, see http://www.arbitrary.com/  Note the
> lack of CMS style navigation widgets.

yep. get rid of main_template and start from scratch if that's what you're
after. otherwise it's not too hard to adapt the skin to look like whatever you
want while keeping all those widgets under-the-hood, so when you log in -
it all "springs up" for the editor.

> 3. Item 2 should be reachable with nearly no effort (plone fails utterly
> here).

not exactly (about plone fails). see above ;)

> 4. Target viewer (not the owner), should /not/ see something that looks
> at all like a CMS system, but rather see exactly what the page owner
> wants the page to look like.

not sure what you mean here. sounds exactly like #2 and #3

> 5. Page owner should be able to view and navigate the tree of contents,
> and select pages to edit with a wysiwyg editor (plone does great here)...

see my comment for #2

> 6. ... But the target viewer should not see this tree.  See items 2 and 4.

comment for #2

> 7. Potential to add python scripted pages of various kinds.

absolutely!

>  There are a couple different design approaches to making a
> development environment.  You can let the developer start with nothing,
> and provide the developer with tools to create something (e.g. zope,
> most plain text editors), or you can start with a finished result and
> let the developer rip out and discard what is not desired (e.g. plone).
> I often prefer to start with nothing.  It's a natural starting point.
> Note that systems that are based on starting with nothing can provide
> the benefits of starting with something by providing templates and such.
> 

maybe what you want to look at is CMF (Plone's foundation). It's got lots of
the things Plone reuses, but it's a minimalistic set of tools.

> I would love to see a system that would give me an editable Hello World
> page in under 5 minutes.  Hello World would be a page consisting of
> nothing but the words "hello world" (no tools, no navigation bar, and
> certainly no CMS navigation stuff) in a url such as
> www.myhelloworldwebsite.com/hello and a different url to edit the page,
> such as www.myhelloworldwebsite.com/hello/edit or
> www.mywebsite.com/edit/hello

in Plone - easy. Create a customized document type, modify its default view
template to not use main_template. done. under 5 min ;)

> If you are a plone fanatic and still think I should use plone, fine, but
> please let me know where I can find a "Hello World" kind of example that
> demonstrates items 2, 4, and 6.

ugh. as I said - I haven't done Plone development for some time now, so any
examples I might find could be irrelevant today, but I'm sure the architecture
is the same. See my comment above; you should be able to follow it if
you've got current Plone docs handy.

> In addition, I would like the ability to design the layout of the page
> independent of the content.  Plone has some nice features that would be
> very helpful, but again, getting to hello world seems difficult.

that's the beauty of Plone - separated content, presentation and logic. 

> Just plain Zope does a pretty good job at some of this, but I haven't
> found a good online wysiwyg editor for the page owner to modify content.

Kupu, Epoz.

mixed bag of reactions to Plone as a platform here:
http://seeknuance.com/2008/09/13/django-plone-light-bulbs-differences-irks/


Re: Numeric literals in other than base 10 - was Annoying octal notation

2009-08-23 Thread Dmitry A. Kazakov
On Sat, 22 Aug 2009 14:54:41 -0700 (PDT), James Harris wrote:

> They look good - which is important. The trouble (for me) is that I
> want the notation for a new programming language and already use these
> characters. I have underscore as an optional separator for groups of
> digits - 123000 and 123_000 mean the same. The semicolon terminates a
> statement. Based on your second idea, though, maybe a colon could be
> used instead as in
> 
>   2:1011, 8:7621, 16:c26b
> 
> I don't (yet) use it as a range operator.
> 
> I could also use a hash sign as although I allow hash to begin
> comments it cannot be preceded by anything other than whitespace so
> these would be usable
> 
>   2#1011, 8#7621, 16#c26b
> 
> I have no idea why Ada which uses the # also apparently uses it to end
> a number
> 
>   2#1011#, 8#7621#, 16#c26b#

If you are going Unicode, you could use the mathematical notation, which is

   1011₂, 7621₈, c26b₁₆

(subscript specification of the base). Yes, it might be difficult to type
(:-)), and it would require some look-ahead in the parser. One of the
advantages of the Ada notation is that a numeric literal always starts with
a decimal digit. That makes things simple for a recursive descent parser. I
guess this choice was intentional; back in 1983 a complex parser would eat
too many resources...
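
(For comparison, Python itself only has literal prefixes for bases 2, 8 and
16 - 0b1011, 0o7621, 0xc26b - while the int constructor accepts an arbitrary
base:)

>>> int('1011', 2), int('7621', 8), int('c26b', 16)
(11, 3985, 49771)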

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
-- 
http://mail.python.org/mailman/listinfo/python-list


Use of statement 'global' in scripts.

2024-05-07 Thread Popov, Dmitry Yu via Python-list
Dear Sirs.

The statement 'global', indicating variables living in the global scope, is 
very suitable to be used in modules. I'm wondering whether in scripts, running 
at the top-level invocation of the interpreter, statement 'global' is used 
exactly the same way as in modules? If there are any differences, I would 
really appreciate any comments on this.

Regards,
Dmitry Popov

Lemont, IL
USA

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Use of statement 'global' in scripts.

2024-05-08 Thread Popov, Dmitry Yu via Python-list
Thank you!

From: Python-list  on behalf of 
Greg Ewing via Python-list 
Sent: Wednesday, May 8, 2024 3:56 AM
To: [email protected] 
Subject: Re: Use of statement 'global' in scripts.


On 8/05/24 1:32 pm, Popov, Dmitry Yu wrote:
> The statement 'global', indicating variables living in the global scope, is 
> very suitable to be used in modules. I'm wondering whether in scripts, 
> running at the top-level invocation of the interpreter, statement 'global' is 
> used exactly the same way as in modules?

The 'global' statement declares a name to be module-level, so there's no
reason to use it at the top level of either a script or a module, since
everything there is module-level anyway.

You only need it if you want to assign to a module-level name from
within a function, e.g.

spam = 17

def f():
   global spam
   spam = 42

f()
# spam is now 42

A script is a module, so everything that applies to modules also
applies to scripts.
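
For completeness, the flip side is also the same in scripts and modules:
an assignment without 'global' inside a function just creates a local name.

spam = 17

def g():
    spam = 42   # new local variable; the module-level spam is untouched

g()
# spam is still 17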

--
Greg

-- 
https://mail.python.org/mailman/listinfo/python-list


Version of NumPy

2024-05-15 Thread Popov, Dmitry Yu via Python-list
What would be the easiest way to learn which version of NumPy I have with my 
Anaconda distribution?
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Version of NumPy

2024-05-15 Thread Popov, Dmitry Yu via Python-list
Thank you.

From: Larry Martell 
Sent: Wednesday, May 15, 2024 1:55 PM
To: Popov, Dmitry Yu 
Cc: Popov, Dmitry Yu via Python-list 
Subject: Re: Version of NumPy


On Wed, May 15, 2024 at 2:43 PM Popov, Dmitry Yu via Python-list
 wrote:
>
> What would be the easiest way to learn which version of NumPy I have with my 
> Anaconda distribution?

>>> import numpy
>>> numpy.__version__
'1.24.4'
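
(An alternative that avoids importing the package first - assuming Python
3.8+, where importlib.metadata is available:)

>>> from importlib.metadata import version
>>> version("numpy")
'1.24.4'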

-- 
https://mail.python.org/mailman/listinfo/python-list


Relatively prime integers in NumPy

2024-07-11 Thread Popov, Dmitry Yu via Python-list
Dear Sirs.

Does NumPy provide a simple mechanism to identify relatively prime integers, 
i.e. integers which don't have a common factor other than +1 or -1? For 
example, in case of this array:
[[1,5,8],
  [2,4,8],
  [3,3,9]]
I can imagine a function which would return an array of common factors along
axis 0: [1,2,3]. Those triples of numbers along axis 1 with a factor of 1 or
-1 would be relatively prime integers.

Regards,
Dmitry Popov

Argonne, IL
USA

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Relatively prime integers in NumPy

2024-07-12 Thread Popov, Dmitry Yu via Python-list
Thank you for your interest. My explanation is too concise indeed, sorry. So
far, I have used Python code with three nested 'for' loops for this purpose,
which is pretty time consuming. I'm trying to develop NumPy based code to
make this procedure faster. This routine is kind of the 'heart' of the
algorithm to index X-ray Laue diffraction patterns. In our group we have to
process a huge amount of such patterns. They are collected at a synchrotron
radiation facility. A faster indexing routine would help a lot.

This is the code I'm currently using. Any hints on how to implement it in
NumPy would be highly appreciated.

for h in range(0, max_h):
    for k in range(0, max_k):
        for l in range(0, max_l):
            chvec=1    # assume h,k,l relatively prime until proven otherwise
            # maxmult ends up as the smallest of h,k,l that is > 1;
            # no factor common to all three can be larger than that
            maxmult=2
            if h > 1:
                maxmult=h
            if k > 1:
                maxmult=k
            if l > 1:
                maxmult=l
            if h > 1:
                if maxmult > h:
                    maxmult=h
            if k > 1:
                if maxmult > k:
                    maxmult=k
            if l > 1:
                if maxmult > l:
                    maxmult=l
            maxmult=maxmult+1
            # 'x in range(0, max_x+1, innen)' tests whether innen divides x
            for innen in range(2, maxmult):
                if h in range(0, (max_h+1), innen):
                    if k in range(0, (max_k+1), innen):
                        if l in range(0, (max_l+1), innen):
                            chvec=0
            if chvec==1:
                # Only relatively prime integers h,k,l pass to this block of the code
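
For what it's worth, the test those loops implement is exactly
gcd(h, k, l) == 1, which in plain Python (3.5+ for math.gcd) reads:

from math import gcd
from functools import reduce

def relatively_prime(h, k, l):
    # True when h, k, l share no common factor other than +1/-1
    return reduce(gcd, (h, k, l)) == 1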



From: [email protected] 
Sent: Thursday, July 11, 2024 1:22 PM
To: Popov, Dmitry Yu ; 'Popov, Dmitry Yu via Python-list' 

Subject: RE: Relatively prime integers in NumPy


Дмитрий,

You may think you explained what you wanted but I do not see what result you
expect from your examples.

Your request is a bit too esoteric to be a great candidate for being built
into a module like numpy for general purpose use, but I can imagine it could
be available in modules built on top of numpy.

Is there a reason you cannot solve this mostly outside numpy?

It looks like you could use numpy to select the numbers you want to compare,
then call one of many methods you can easily search for to see how to use
python to make some list or other data structure for divisors of each number
involved, and then use standard methods to compare the lists and extract
common divisors. If needed, you could then put the results back into your
original data structure using numpy, albeit the number of matches can vary.

Maybe a better explanation is needed as I cannot see what your latter words
about -1 and 1 are about. Perhaps someone else knows.




-Original Message-
From: Python-list  On
Behalf Of Popov, Dmitry Yu via Python-list
Sent: Monday, July 8, 2024 3:10 PM
To: Popov, Dmitry Yu via Python-list 
Subject: Relatively prime integers in NumPy

Dear Sirs.

Does NumPy provide a simple mechanism to identify relatively prime integers,
i.e. integers which don't have a common factor other than +1 or -1? For
example, in case of this array:
[[1,5,8],
  [2,4,8],
  [3,3,9]]
I can imagine a function which would return an array of common factors along
axis 0: [1,2,3]. Those triples of numbers along axis 1 with a factor of 1
or -1 would be relatively prime integers.

Regards,
Dmitry Popov

Argonne, IL
USA



-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Relatively prime integers in NumPy

2024-07-12 Thread Popov, Dmitry Yu via Python-list
Thank you very much, Oscar.

Using the following code looks like a much better solution than my current 
Python code indeed.

np.gcd.reduce(np.transpose(a))
or
np.gcd.reduce(a, axis=1)

The next question is how I can generate an ndarray of h,k,l indices. This can
be easily done from a Python list by using the following code.

import numpy as np
hkl_list=[]
for h in range(0, max_h):
    for k in range(0, max_k):
        for l in range(0, max_l):
            hkl_local=[]
            hkl_local.append(h)
            hkl_local.append(k)
            hkl_local.append(l)
            hkl_list.append(hkl_local)
hkl=np.array(hkl_list, dtype=np.int64)

This code will generate a two-dimensional ndarray of h,k,l indices but is it 
possible to make a faster routine with NumPy?
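
One possibility without any Python-level loops (a sketch - np.indices and
np.gcd are standard NumPy; the bounds are example values):

import numpy as np

max_h, max_k, max_l = 4, 4, 4   # example bounds

# all (h, k, l) triples, in the same order as the nested loops above
hkl = np.indices((max_h, max_k, max_l)).reshape(3, -1).T
# keep only the relatively prime triples (row-wise gcd == 1)
hkl_coprime = hkl[np.gcd.reduce(hkl, axis=1) == 1]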

Regards,
Dmitry





From: Python-list  on behalf of 
Popov, Dmitry Yu via Python-list 
Sent: Thursday, July 11, 2024 2:25 PM
To: [email protected] ; 'Popov, Dmitry Yu via 
Python-list' 
Subject: Re: Relatively prime integers in NumPy


Thank you for your interest. My explanation is too concise indeed, sorry. So 
far, I have used Python code with three enclosed 'for' loops for this purpose 
which is pretty time consuming. I'm trying to develop a NumPy based code to 
make this procedure faster. This routine is kind of 'heart' of the algorithm to 
index of X-ray Laue diffraction patterns. In our group we have to process huge 
amount of such patterns. They are collected at a synchrotron radiation 
facility. Faster indexation routine would help a lot.

This is the code I'm currently using. Any prompts how to implement it in NumPy 
would be highly appreciated.

for h in range(0, max_h):
    for k in range(0, max_k):
        for l in range(0, max_l):
            chvec=1
            maxmult=2
            if h > 1:
                maxmult=h
            if k > 1:
                maxmult=k
            if l > 1:
                maxmult=l
            if h > 1:
                if maxmult > h:
                    maxmult=h
            if k > 1:
                if maxmult > k:
                    maxmult=k
            if l > 1:
                if maxmult > l:
                    maxmult=l
            maxmult=maxmult+1
            for innen in range(2, maxmult):
                if h in range(0, (max_h+1), innen):
                    if k in range(0, (max_k+1), innen):
                        if l in range(0, (max_l+1), innen):
                            chvec=0
            if chvec==1:
                # Only relatively prime integers h,k,l pass to this block of the code


____
From: [email protected] 
Sent: Thursday, July 11, 2024 1:22 PM
To: Popov, Dmitry Yu ; 'Popov, Dmitry Yu via Python-list' 

Subject: RE: Relatively prime integers in NumPy


Дмитрий,

You may think you explained what you wanted but I do not see what result you
expect from your examples.

Your request is a bit too esoteric to be a great candidate for being built
into a module like numpy for general purpose se but I can imagine it could
be available in modules build on top of numpy.

Is there a reason you cannot solve this mostly outside numpy?

It looks like you could use numpy to select the numbers you want to compare,
then call one of many methods you can easily search for to see  how to use
python to make some list or other data structure for divisors of each number
involved and then use standard methods to compare the lists and exact common
divisors. If needed, you could then put the results back into your original
data structure using numpy albeit the number of matches can vary.

Maybe a better explanation is needed as I cannot see what your latter words
about -1 and 1 are about. Perhaps someone else knows.




-Original Message-
From: Python-list  On
Behalf Of Popov, Dmitry Yu via Python-list
Sent: Monday, July 8, 2024 3:1

Re: Relatively prime integers in NumPy

2024-07-12 Thread Popov, Dmitry Yu via Python-list
Thank you very much. List comprehensions make code much more concise indeed. Do 
list comprehensions also improve the speed of calculations?

From: [email protected] 
Sent: Friday, July 12, 2024 6:57 PM
To: Popov, Dmitry Yu ; 'Popov, Dmitry Yu via Python-list' 
; [email protected] 

Subject: RE: Relatively prime integers in NumPy


Dmitry,



I clearly did not understand what you wanted earlier, as you had not made clear
that in your example you had already progressed to some level where you had
the data and were now doing a second step. So, I hesitate to say much until
either nobody else addresses the issue (as clearly some have) or you explain
it well enough.



I am guessing you have programming experience in other languages and are not as
"pythonic" as some. The code you show may not be quite how others might do it.
Some may write much of your code as a single line of python using a list
comprehension such as:



hkl_list = [ [h, k, l] for SOMETHING in RANGE  for SOMETHING2  in RANGE2 for 
SOMETHING3 in RANGE3]



Where h, k, l come from the somethings.
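
With the names filled in from the earlier code, that one-liner would read
(same ordering as the nested loops; max_h, max_k, max_l as before):

hkl_list = [[h, k, l] for h in range(max_h)
                      for k in range(max_k)
                      for l in range(max_l)]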



Back to the real world.





From: Popov, Dmitry Yu 
Sent: Friday, July 12, 2024 1:13 PM
To: [email protected]; 'Popov, Dmitry Yu via Python-list' 
; [email protected]; Popov, Dmitry Yu 

Subject: Re: Relatively prime integers in NumPy



Thank you very much, Oscar.



Using the following code looks like a much better solution than my current 
Python code indeed.

np.gcd.reduce(np.transpose(a))

or

np.gcd.reduce(a,1)



The next question is how I can generate ndarray of h,k,l indices. This can be 
easily done from a Python list by using the following code.



import numpy as np
hkl_list=[]
for h in range(0, max_h):
    for k in range(0, max_k):
        for l in range(0, max_l):
            hkl_local=[]
            hkl_local.append(h)
            hkl_local.append(k)
            hkl_local.append(l)
            hkl_list.append(hkl_local)
hkl=np.array(hkl_list, dtype=np.int64)

This code will generate a two-dimensional ndarray of h,k,l indices but is it 
possible to make a faster routine with NumPy?



Regards,
Dmitry

From: Python-list <[email protected]> on behalf of Popov, Dmitry Yu via Python-list <[email protected]>
Sent: Thursday, July 11, 2024 2:25 PM
To: [email protected]; 'Popov, Dmitry Yu via Python-list' <[email protected]>
Subject: Re: Relatively prime integers in NumPy




Thank you for your interest. My explanation is too concise indeed, sorry. So 
far, I have used Python code with three enclosed 'for' loops for this purpose 
which is pretty time consuming. I'm trying to develop a NumPy based code to 
make this procedure faster. This routine is kind of 'heart' of the algorithm to 
index of X-ray Laue diffraction patterns. In our group we have to process huge 
amount of such patterns. They are collected at a synchrotron radiation 
facility. Faster indexation routine would help a lot.



This is the code I'm currently using. Any prompts how to implement it in NumPy 
would be highly appreciated.



for h in range(0, max_h):
    for k in range(0, max_k):
        for l in range(0, max_l):
            chvec=1
            maxmult=2
            if h > 1:
                maxmult=h
            if k > 1:
                maxmult=k
            if l > 1:
                maxmult=l
            if h > 1:
                if maxmult > h:
                    maxmult=h
            if k > 1:
                if maxmult > k:
                    maxmult=k
            if l > 1:
                if maxmult > l:
                    maxmult=l
            maxmult=maxmult+1
            for innen in range(2, max

Assignment of global variables in a script.

2025-06-30 Thread Popov, Dmitry Yu via Python-list
Dear Sirs.

I found the following sentence in the Python documentation: "The statements
executed by the top-level invocation of the interpreter, either read from a
script file or interactively, are considered part of a module called __main__
(https://docs.python.org/3.11/library/__main__.html#module-__main__), so they
have their own global namespace."

In other words, global assignments of variables placed directly in a module's
namespace are also "statements executed by the top-level invocation of the
interpreter" if the module is executed as a script.

Is it reliable at all to assign global variables directly in a module which
runs as a script? Speaking practically, I have not observed any problems so far.

Regards,
Dmitry Popov

-- 
https://mail.python.org/mailman3//lists/python-list.python.org