Re: Beginner - GUI development in Tkinter - Any IDE with drag and drop feature like Visual Studio?

2013-07-20 Thread Aseem Bansal
After considering all the options suggested here, I decided to use 
PySide/QtCreator, as suggested by Dave Cook. I created a simple GUI with 
QtCreator and found a way to convert .ui files to .py files. So far so good.

But now I have some confusion about the correct tools to use for PySide, 
and I am stuck because of that. I explored the PySide wiki and discussed the 
confusion on the Qt forums, but that didn't help much. The URL of that discussion is 
given below (with the slashes removed to avoid it being shortened); searching for 
it on Google will easily turn up the discussion.

qt-project.org  forums   viewthread  30114

Do I need to use QtCreator with PySide if I want the drag-and-drop feature for GUI 
development? Do I need to install Qt? If yes, which version - 4.8 or 5.1? 

Can I use cx_Freeze to make exe files from the GUI I developed? I used it for pure 
Python files and it worked. The resulting file was big, but it ran without me 
having to install Python on the other computer. I tried to use cx_Freeze on the 
GUI Python script that I was finally able to make. An exe was produced, but no GUI 
started when I ran it. Is that a problem with cx_Freeze, or do I have the wrong 
tools installed?
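
(Editorial note, not part of the original post: a minimal cx_Freeze setup.py 
sketch for freezing a PySide GUI script. The script name "gui.py" and the 
include list are placeholder assumptions for illustration only.)

# Hypothetical setup.py, assuming cx_Freeze and PySide are installed.
import sys
from cx_Freeze import setup, Executable

# On Windows, base="Win32GUI" keeps a console window from opening alongside the GUI.
base = "Win32GUI" if sys.platform == "win32" else None

setup(
    name="MyGui",
    version="0.1",
    options={"build_exe": {"includes": ["PySide.QtCore", "PySide.QtGui"]}},
    executables=[Executable("gui.py", base=base)],
)

(Running "python setup.py build" would then put the frozen program in a 
build/ subdirectory.)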

Any help is appreciated.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python testing tools

2013-07-20 Thread Ben Finney
cutems93  writes:

> I am currently doing some research on testing software for Python. I
> found that there are many different types of testing tools. These are
> what I've found.

You will find these discussed at the Python Testing Tools Taxonomy
<http://wiki.python.org/moin/PythonTestingToolsTaxonomy>.

Hope that helps.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Share Code Tips

2013-07-20 Thread Devyn Collier Johnson


On 07/19/2013 09:51 PM, Dave Angel wrote:

On 07/19/2013 09:04 PM, Devyn Collier Johnson wrote:




 



Chris Angelico said that casefold is not perfect. In the future, I want
to make the perfect international-case-insensitive if-statement. For
now, my code only supports a limited range of characters. Even with
casefold, I will have some issues as Chris Angelico mentioned. Also, "ß"
is not really the same as "ss".



Sure, the casefold() method has its problems.  But you're going to 
avoid using it till you can do a "perfect" one?


Perfect in what context?  For case-insensitively comparing people's 
names in a single language in a single country?  Perhaps that can be 
made perfect.  For certain combinations of language and country.


But if you want to compare words in an unspecified language with an 
unspecified country, it cannot be done.


If you've got a particular goal in mind, great.  But as a library 
function, you're better off using the best standard method available 
and documenting what its limitations are.  One way of documenting that 
is to quote the appropriate standards, with their caveats.



By the way, you mentioned earlier that you're restricting yourself to 
Latin characters.  The lower() method is inadequate for many of those 
as well.  Perhaps you meant ASCII instead.


Of course not, Dave; I will implement casefold. I just plan not to stop 
there. My program should not come across unspecified languages. Yeah, I 
meant ASCII, but I was unaware that lower() had limitations on Latin 
letters.


Mahalo,
DCJ
--
http://mail.python.org/mailman/listinfo/python-list


Re: Share Code Tips

2013-07-20 Thread Devyn Collier Johnson


On 07/19/2013 11:44 PM, Steven D'Aprano wrote:

On Fri, 19 Jul 2013 21:04:55 -0400, Devyn Collier Johnson wrote:


In the future, I want to
make the perfect international-case-insensitive if-statement. For now,
my code only supports a limited range of characters. Even with casefold,
I will have some issues as Chris Angelico mentioned.

There are hundreds of written languages in the world, with thousands of
characters, and most of them have rules about case-sensitivity and
character normalization. For example, in Greek, lowercase Σ is σ except
at the end of a word, when it is ς.

>>> 'Σσς'.upper()
'ΣΣΣ'
>>> 'Σσς'.lower()
'σσς'
>>> 'Σσς'.casefold()
'σσσ'


So in this case, casefold() correctly solves the problem, provided you
are comparing modern Greek text. But if you're comparing text in some
other language which merely happens to use Greek letters, but doesn't
have the same rules about letter sigma, then it will be inappropriate. So
you cannot write a single "perfect" case-insensitive comparison, the best
you can hope for is to write dozens or hundreds of separate case-
insensitive comparisons, one for each language or family of languages.

For an introduction to the problem:

http://www.w3.org/International/wiki/Case_folding

http://www.unicode.org/faq/casemap_charprop.html





Also, "ß" is not really the same as "ss".

Sometimes it is. Sometimes it isn't.



Wow, my if-statement is so imperfect! Thankfully, only English speakers 
will talk to an English chatbot (I hope), so for my use of the code, it 
will work.

Do the main Python3 developers plan to do something about this?

Mahalo,
DCJ
--
http://mail.python.org/mailman/listinfo/python-list


Re: Share Code Tips

2013-07-20 Thread Devyn Collier Johnson


On 07/19/2013 11:18 PM, Steven D'Aprano wrote:

On Fri, 19 Jul 2013 18:08:43 -0400, Devyn Collier Johnson wrote:


As for the case-insensitive if-statements, most code uses Latin letters.
Making a case-insensitive-international if-statement would be
interesting. I can tackle that later. For now, I only wanted to take
care of Latin letters. I hope to figure something out for all
characters.

As I showed, even for Latin letters, the trick of "if astring.lower() ==
bstring.lower()" doesn't *quite* work, although it can be "close enough"
for some purposes. For example, some languages treat accents as mere
guides to pronunciation, so ö == o, while other languages treat them as
completely different letters. Same with ligatures: in modern English, æ
should be treated as equal to ae, but in Old English, Danish, Norwegian
and Icelandic it is a distinct letter.

Case-insensitive testing may be easier in many non-European languages,
because they don't have cases.

A full solution to the problem of localized string matching requires
expert knowledge for each language, but a 90% solution is pretty simple:

astring.casefold() == bstring.casefold()

or before version 3.3, just use lowercase. It's not a perfect solution,
but it works reasonably well if you don't care about full localization.
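
(Editorial sketch, not part of Steven's post: a small helper wrapping that 
90% solution, falling back to lower() on interpreters older than 3.3. The 
helper name is made up for illustration.)

# Hedged sketch: caseless comparison via casefold() where available.
def caseless_equal(a, b):
    try:
        return a.casefold() == b.casefold()   # Python 3.3+
    except AttributeError:
        return a.lower() == b.lower()         # older interpreters

print(caseless_equal("Straße", "STRASSE"))    # True: 'ß' folds to 'ss'
print(caseless_equal("Σσς", "ΣΣΣ"))           # True: all sigma forms fold together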



Thanks for the tips. I am learning a lot from this mailing list. I hope 
my code helped some people though.


Mahalo,
DCJ
--
http://mail.python.org/mailman/listinfo/python-list


Re: Stack Overflow moderator “animuson”

2013-07-20 Thread Joshua Landau
On 19 July 2013 23:35, Chris Angelico  wrote:
> On Sat, Jul 20, 2013 at 4:54 AM,   wrote:
>> And do not forget memory. The €uro just become expensive.
>>
> sys.getsizeof('$')
>> 26
> sys.getsizeof('€')
>> 40
>>
>> I do not know. When a €uro char needs 14 bytes more than
>> a dollar, I belong to those who think there is a problem
>> somewhere.
>
> Oh, I totally agree. But it's not just the Euro symbol that's
> expensive. Look how much I have to pay for a couple of square
> brackets!
>
> >>> sys.getsizeof((1))
> 14
> >>> sys.getsizeof([1])
> 40

But when you do it generically, square brackets save you space!

>>> sys.getsizeof((int))
392
>>> sys.getsizeof([int])
80

:D
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Find and Replace Simplification

2013-07-20 Thread Joshua Landau
On 19 July 2013 18:29, Serhiy Storchaka  wrote:
> 19.07.13 19:22, Steven D'Aprano wrote:
>
>> I also expect that the string replace() method will be second fastest,
>> and re.sub will be the slowest, by a very long way.
>
>
> The string replace() method is fastest (at least in Python 3.3+). See
> implementation of html.escape() etc.

def escape(s, quote=True):
    if quote:
        return s.translate(_escape_map_full)
    return s.translate(_escape_map)

I fail to see how this supports the assertion that str.replace() is
faster. However, some quick timing shows that translate has a very
high penalty for missing characters and is a tad slower any way.

Really, though, there should be no reason for .translate() to be
slower than replace -- at worst it should just be "reduce(lambda s,
ab: s.replace(*ab), mapping.items()¹, original_str)" and end up the
*same* speed as iterated replace. But the fact that it doesn't have to
re-build the string every replace means that theoretically it should
be a lot faster.

¹ I realise this won't actually work for several reasons, and doesn't
support things like passing in lists as mappings, but you could
trivially support the important builtin types² and fall back to the
original for others, where the pure-python __getitem__ is going to be
the slowest part anyway.

² List, tuple, dict, str, bytes -- so basically just mappings and
ordered iterables
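
(Editorial aside, not from the thread: a rough timeit sketch comparing 
iterated str.replace() with str.translate() for a small mapping, assuming 
Python 3.3+. The exact numbers will vary by build and input.)

# Hedged micro-benchmark sketch; not a rigorous benchmark.
import timeit

text = "a < b & b < c " * 1000
table = {ord("&"): "&amp;", ord("<"): "&lt;"}

def with_replace(s=text):
    return s.replace("&", "&amp;").replace("<", "&lt;")

def with_translate(s=text):
    return s.translate(table)

assert with_replace() == with_translate()
print("replace:  ", timeit.timeit(with_replace, number=1000))
print("translate:", timeit.timeit(with_translate, number=1000))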
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Find and Replace Simplification

2013-07-20 Thread Serhiy Storchaka

19.07.13 21:08, Skip Montanaro wrote:

Serhiy> The string replace() method is fastest (at least in Python 3.3+). See
Serhiy> implementation of html.escape() etc.
I trust everybody knows by now that when you want to use regular
expressions you should shell out to Perl for the best performance. :-)


If you want to use regular expressions, Python is not the best choice. 
But if you want to use Python, regular expressions sometimes are the best 
choice.



--
http://mail.python.org/mailman/listinfo/python-list


Re: Find and Replace Simplification

2013-07-20 Thread Serhiy Storchaka

20.07.13 14:16, Joshua Landau wrote:

On 19 July 2013 18:29, Serhiy Storchaka  wrote:

The string replace() method is fastest (at least in Python 3.3+). See
implementation of html.escape() etc.


def escape(s, quote=True):
    if quote:
        return s.translate(_escape_map_full)
    return s.translate(_escape_map)

I fail to see how this supports the assertion that str.replace() is
faster.


And now look at Python 3.4 sources.


However, some quick timing shows that translate has a very
high penalty for missing characters and is a tad slower any way.

Really, though, there should be no reason for .translate() to be
slower than replace -- at worst it should just be "reduce(lambda s,
ab: s.replace(*ab), mapping.items()¹, original_str)" and end up the
*same* speed as iterated replace.


It doesn't work that way. Consider 
'ab'.translate({ord('a'): 'b', ord('b'): 'a'}).
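
(Editorial illustration of the point above, not part of the original message: 
with a table that swaps two characters, chained replace() calls clobber the 
first substitution, while translate() applies the whole table in one pass.)

# Hedged sketch: why translate() is not just iterated replace().
s = 'ab'
table = {ord('a'): 'b', ord('b'): 'a'}
print(s.translate(table))                     # 'ba' - both swaps happen at once
print(s.replace('a', 'b').replace('b', 'a'))  # 'aa' - the second replace undoes the first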



--
http://mail.python.org/mailman/listinfo/python-list


Re: Share Code Tips

2013-07-20 Thread Devyn Collier Johnson


On 07/19/2013 09:13 PM, Chris Angelico wrote:

On Sat, Jul 20, 2013 at 11:04 AM, Devyn Collier Johnson
 wrote:

On 07/19/2013 07:09 PM, Dave Angel wrote:

On 07/19/2013 06:08 PM, Devyn Collier Johnson wrote:


On 07/19/2013 01:59 PM, Steven D'Aprano wrote:


  


As for the case-insensitive if-statements, most code uses Latin letters.
Making a case-insensitive-international if-statement would be
interesting. I can tackle that later. For now, I only wanted to take
care of Latin letters. I hope to figure something out for all characters.


Once Steven gave you the answer, what's to figure out?  You simply use
casefold() instead of lower().  The only constraint is it's 3.3 and later,
so you can't use it for anything earlier.

http://docs.python.org/3.3/library/stdtypes.html#str.casefold

"""
str.casefold()
Return a casefolded copy of the string. Casefolded strings may be used for
caseless matching.

Casefolding is similar to lowercasing but more aggressive because it is
intended to remove all case distinctions in a string. For example, the
German lowercase letter 'ß' is equivalent to "ss". Since it is already
lowercase, lower() would do nothing to 'ß'; casefold() converts it to "ss".

The casefolding algorithm is described in section 3.13 of the Unicode
Standard.

New in version 3.3.
"""


Chris Angelico said that casefold is not perfect. In the future, I want to
make the perfect international-case-insensitive if-statement. For now, my
code only supports a limited range of characters. Even with casefold, I will
have some issues as Chris Angelico mentioned. Also, "ß" is not really the
same as "ss".

Well, casefold is about as good as it's ever going to be, but that's
because "the perfect international-case-insensitive comparison" is a
fundamentally impossible goal. Your last sentence hints as to why;
there is no simple way to compare strings containing those characters,
because the correct treatment varies according to context.

Your two best options are: Be case sensitive (and then you need only
worry about composition and combining characters and all those
nightmares - the ones you have to worry about either way), or use
casefold(). Of those, I prefer the first, because it's safer; the
second is also a good option.

ChrisA
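
(Editorial demonstration of the casefold() behaviour quoted from the docs 
above, not part of the thread; run under Python 3.3 or later.)

# Hedged sketch: lower() leaves 'ß' alone, casefold() maps it to 'ss'.
print('ß'.lower())       # 'ß' - already lowercase, so unchanged
print('ß'.casefold())    # 'ss'
print('Straße'.casefold() == 'STRASSE'.casefold())   # True
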
Thanks everyone (especially Chris Angelico and Steven D'Aprano) for all 
of your helpful suggestions and ideas. I plan to implement casefold() in 
some of my programs.


Mahalo,
DCJ
--
http://mail.python.org/mailman/listinfo/python-list


Re: Play Ogg Files

2013-07-20 Thread Devyn Collier Johnson


On 07/20/2013 12:21 AM, Stefan Behnel wrote:

Devyn Collier Johnson, 20.07.2013 03:06:

I am making a chatbot that I host on Launchpad.net/neobot. I am currently
converting the engine from BASH code to Python3. I need to convert this for
cross-platform compatibility. I do not need to use Mplayer; I just show the
below code to give others a better idea what I am doing. I would prefer to
be Python3 independent; I do not want to use the system shell. I am fine
with using Python3 modules like Pygame (if there is a py3 module). As long
as the code is fast, efficient, and simple without depending on the system
shell or external apps, that would be nice. I also need the code to execute
while the rest of the script continues running.

jobs = multiprocessing.Process(SEND = subprocess.getoutput('mplayer -nogui -nolirc -noar -quiet ./conf/boot.ogg')) #Boot sound#

Well, since you mentioned it already, have you actually looked at pygame?
It should be able to do what you want. There's also pyaudio, which is more
specialised to, well, audio. A web search for python and ogg might provide
more.

Stefan


Thanks Stefan! I have not heard of Pyaudio; I will look into that. As 
for Pygame, I have not been able to find any good documentation for 
playing audio files. Plus, I recently learned that Pygame is not Python3 
compatible.
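
(Editorial sketch, not from the thread: non-blocking Ogg playback with 
pygame's mixer, assuming a pygame build that supports the Python version in 
use. The file path is taken from the quoted code above.)

# Hedged sketch: pygame.mixer.music plays in the background, so the
# rest of the script keeps running while the sound plays.
import pygame

pygame.mixer.init()
pygame.mixer.music.load("./conf/boot.ogg")   # path from the quoted code
pygame.mixer.music.play()                    # returns immediately; playback is asynchronous
# ... the rest of the program continues here ...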


Mahalo,
DCJ
--
http://mail.python.org/mailman/listinfo/python-list


Re: Share Code Tips

2013-07-20 Thread Devyn Collier Johnson


On 07/20/2013 12:26 AM, David Hutto wrote:
I didn't see that this was for a chess game. That seems more point and 
click. Everyone can recognize a bishop from a queen, or a rook from a 
pawn. So why would case sensitivity matter other than the 16 pieces on 
the board? Or am I misunderstanding the question?




On Sat, Jul 20, 2013 at 12:22 AM, David Hutto  wrote:


It seems that you could use import re, in my mind's pseudo code,
to compile a translational usage of usernames/passwords that could
remain case sensitive by using just the translational
dictionaries, and refining with data input tests/unit tests.


On Sat, Jul 20, 2013 at 12:15 AM, David Hutto  wrote:

It seems, without utilizing this, or googling, that a case
sensitive library is either developed, or could be implemented
by utilizing case sensitive translation through a google
translation page using an urlopener, and placing in the data
to be processed back to the boolean value. Never attempted,
but the algorithm seems simpler than the dozens of solutions
method.




-- 
Best Regards,

David Hutto
CEO: http://www.hitwebdevelopment.com




--
Best Regards,
David Hutto
CEO: http://www.hitwebdevelopment.com


In the email, I am sharing various code snippets to give others ideas 
and inspiration for coding. In that particular snippet, I am giving 
Python3 programmers the idea of making chess tags in an HTML or XML 
interpreter. It would be neat to type a tag that would generate chess 
pieces instead of remembering the HTML ASCII codes.


From my understanding, that email is not being displayed correctly. Are 
all of the lines run together?


Thank you for asking. I want everyone to understand the purpose of the 
email and that particular snippet. Remember, assumption is the lowest 
form of knowledge.


Mahalo,
DCJ
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Find and Replace Simplification

2013-07-20 Thread Devyn Collier Johnson


On 07/20/2013 07:16 AM, Joshua Landau wrote:

On 19 July 2013 18:29, Serhiy Storchaka  wrote:

19.07.13 19:22, Steven D'Aprano wrote:


I also expect that the string replace() method will be second fastest,
and re.sub will be the slowest, by a very long way.


The string replace() method is fastest (at least in Python 3.3+). See
implementation of html.escape() etc.

def escape(s, quote=True):
    if quote:
        return s.translate(_escape_map_full)
    return s.translate(_escape_map)

I fail to see how this supports the assertion that str.replace() is
faster. However, some quick timing shows that translate has a very
high penalty for missing characters and is a tad slower any way.

Really, though, there should be no reason for .translate() to be
slower than replace -- at worst it should just be "reduce(lambda s,
ab: s.replace(*ab), mapping.items()¹, original_str)" and end up the
*same* speed as iterated replace. But the fact that it doesn't have to
re-build the string every replace means that theoretically it should
be a lot faster.

¹ I realise this won't actually work for several reasons, and doesn't
support things like passing in lists as mappings, but you could
trivially support the important builtin types² and fall back to the
original for others, where the pure-python __getitem__ is going to be
the slowest part anyway.

² List, tuple, dict, str, bytes -- so basically just mappings and
ordered iterables
Thanks Joshua Landau! str.replace() does appear to be best, so that is 
the suggestion that I will implement.


Mahalo,

DCJ
--
http://mail.python.org/mailman/listinfo/python-list


Re: Play Ogg Files

2013-07-20 Thread Devyn Collier Johnson


On 07/20/2013 12:39 AM, David Hutto wrote:

You could use subprocess, and I think it's:

david@david:~$ python
Python 2.7.3 (default, Aug  1 2012, 05:16:07)
[GCC 4.6.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import subprocess
>>> subprocess.call(['espeak', 'word_spoken'], stdin=None, stdout=None, stderr=None, shell=False)




This is on ubuntu linux, using espeak.


On Sat, Jul 20, 2013 at 12:21 AM, Stefan Behnel  wrote:


Devyn Collier Johnson, 20.07.2013 03:06:
> I am making a chatbot that I host on Launchpad.net/neobot. I am currently
> converting the engine from BASH code to Python3. I need to convert this for
> cross-platform compatibility. I do not need to use Mplayer; I just show the
> below code to give others a better idea what I am doing. I would prefer to
> be Python3 independent; I do not want to use the system shell. I am fine
> with using Python3 modules like Pygame (if there is a py3 module). As long
> as the code is fast, efficient, and simple without depending on the system
> shell or external apps, that would be nice. I also need the code to execute
> while the rest of the script continues running.
>
> jobs = multiprocessing.Process(SEND = subprocess.getoutput('mplayer -nogui -nolirc -noar -quiet ./conf/boot.ogg')) #Boot sound#

Well, since you mentioned it already, have you actually looked at pygame?
It should be able to do what you want. There's also pyaudio, which is more
specialised to, well, audio. A web search for python and ogg might provide
more.

Stefan


--
http://mail.python.org/mailman/listinfo/python-list




--
Best Regards,
David Hutto
CEO: http://www.hitwebdevelopment.com


Where did Espeak come from? This is about playing an ogg file using 
Python3 code.


DCJ
-- 
http://mail.python.org/mailman/listinfo/python-list


List as Contributor

2013-07-20 Thread Devyn Collier Johnson
Many users on here have answered my questions and given me ideas and 
suggestions for code that I am using in my open-source GPLv3 chatbot. 
When I release the next update (that will be in a month or two), does 
anyone that has contributed helpful ideas want to be listed as a 
contributor under the heading "Gave Suggestions/Ideas"? The chatbot, 
named Neobot, is hosted on Launchpad (Neobot is buggy right now).  
https://launchpad.net/neobot


For those of you that want to be listed, email me with your name as you 
want it displayed and your email or other contact information if you 
want. Once I release version 0.8, I will inform this mailing list.


Moderators: Would you like me to list the email address of this mailing 
list as a contributor?



Mahalo,

Devyn Collier Johnson
[email protected]
--
http://mail.python.org/mailman/listinfo/python-list


Re: Find and Replace Simplification

2013-07-20 Thread Devyn Collier Johnson


On 07/20/2013 07:48 AM, Serhiy Storchaka wrote:

19.07.13 21:08, Skip Montanaro wrote:
Serhiy> The string replace() method is fastest (at least in Python 3.3+). See
Serhiy> implementation of html.escape() etc.
I trust everybody knows by now that when you want to use regular
expressions you should shell out to Perl for the best performance. :-)


If you want to use regular expressions, Python is not the best choice. 
But if you want to use Python, regular expressions sometimes are the 
best choice.




That is an interesting concept. (^u^)

Mahalo,
DCJ
--
http://mail.python.org/mailman/listinfo/python-list


Re: List as Contributor

2013-07-20 Thread Chris Angelico
On Sat, Jul 20, 2013 at 10:48 PM, Devyn Collier Johnson
 wrote:
> Many users on here have answered my questions and given me ideas and
> suggestions for code that I am using in my open-source GPLv3 chatbot. When I
> release the next update (that will be in a month or two), does anyone that
> has contributed helpful ideas want to be listed as a contributor under the
> heading "Gave Suggestions/Ideas"? The chatbot, named Neobot, is hosted on
> Launchpad (Neobot is buggy right now).  https://launchpad.net/neobot

A simple enumeration of names would, I think, be appropriate and
simple ("With thanks to the following: Fred Foobar, Joe Citizen, John
Smith."). And thanks for the consideration! :)

ChrisA
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Find and Replace Simplification

2013-07-20 Thread Joshua Landau
On 20 July 2013 12:57, Serhiy Storchaka  wrote:
> 20.07.13 14:16, Joshua Landau wrote:
>>
>> On 19 July 2013 18:29, Serhiy Storchaka  wrote:
>>>
>>> The string replace() method is fastest (at least in Python 3.3+). See
>>> implementation of html.escape() etc.
>>
>>
>> def escape(s, quote=True):
>>     if quote:
>>         return s.translate(_escape_map_full)
>>     return s.translate(_escape_map)
>>
>> I fail to see how this supports the assertion that str.replace() is
>> faster.
>
>
> And now look at Python 3.4 sources.

I'll just trust you ;).

>> However, some quick timing shows that translate has a very
>> high penalty for missing characters and is a tad slower any way.
>>
>> Really, though, there should be no reason for .translate() to be
>> slower than replace -- at worst it should just be "reduce(lambda s,
>> ab: s.replace(*ab), mapping.items()¹, original_str)" and end up the
>> *same* speed as iterated replace.
>
>
> It doesn't work that way. Consider
> 'ab'.translate({ord('a'): 'b', ord('b'): 'a'}).

*sad*

Still, it seems to me that it should be optimizable for sensible
builtin types such that .translate is significantly faster, as there's
no theoretical extra work that .translate *has* to do that .replace
does not, and .replace also has to rebuild the string a lot of times.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Find and Replace Simplification

2013-07-20 Thread Dave Angel

On 07/20/2013 01:03 PM, Joshua Landau wrote:

On 20 July 2013 12:57, Serhiy Storchaka  wrote:

20.07.13 14:16, Joshua Landau wrote:







However, some quick timing shows that translate has a very
high penalty for missing characters and is a tad slower any way.

Really, though, there should be no reason for .translate() to be
slower than replace -- at worst it should just be "reduce(lambda s,
ab: s.replace(*ab), mapping.items()¹, original_str)" and end up the
*same* speed as iterated replace.



It doesn't work that way. Consider
'ab'.translate({ord('a'): 'b', ord('b'): 'a'}).


*sad*

Still, it seems to me that it should be optimizable for sensible
builtin types such that .translate is significantly faster, as there's
no theoretical extra work that .translate *has* to do that .replace
does not, and .replace also has to rebuild the string a lot of times.



translate is going to be faster (than replace) for Unicode if it has a 
"large" table.  For example, to translate from ASCII to EBCDIC, where 
every character in the string is replaced by a new one.  I have no idea 
what the cutoff is.  But of course, for a case like ASCII to EBCDIC, it 
would be very tricky to do it with replaces, probably taking much more 
than the expected 96 passes.


translate for byte strings is undoubtedly tons faster.  For byte 
strings, the translation table is 256 bytes, and the inner loop is a 
simple lookup.  But for Unicode, the table is a dict (or something very 
like it, I looked at the C code, not the Python code).


So for every character in the input string, it does a dict-type lookup, 
before it can even decide if the character is going to change.
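
(Editorial sketch, not from Dave's post: str.maketrans and bytes.maketrans 
show the two table shapes described above - a dict keyed by code point for 
str, and a flat 256-byte table for bytes.)

# Hedged illustration of the two translation-table forms.
str_table = str.maketrans("abc", "xyz")
print(str_table)          # {97: 120, 98: 121, 99: 122} - a dict keyed by ordinal

byte_table = bytes.maketrans(b"abc", b"xyz")
print(len(byte_table))    # 256 - one output byte per possible input byte

print("cab".translate(str_table))     # 'zxy'
print(b"cab".translate(byte_table))   # b'zxy'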


Just for reference, the two files I was looking at were:

objects/unicodeobject.c
objects/bytesobject.c

Extracted from the bz2 downloaded from the page:
http://hg.python.org/cpython


--
DaveA

--
http://mail.python.org/mailman/listinfo/python-list


Re: Find and Replace Simplification

2013-07-20 Thread Joshua Landau
On 20 July 2013 19:04, Dave Angel  wrote:
> On 07/20/2013 01:03 PM, Joshua Landau wrote:
>>
>> Still, it seems to me that it should be optimizable for sensible
>> builtin types such that .translate is significantly faster, as there's
>> no theoretical extra work that .translate *has* to do that .replace
>> does not, and .replace also has to rebuild the string a lot of times.
>>
>
> translate is going to be faster (than replace) for Unicode if it has a
> "large" table.  For example, to translate from ASCII to EBCDIC, where every
> character in the string is replaced by a new one.  I have no idea what the
> cutoff is.  But of course, for a case like ASCII to EBCDIC, it would be very
> tricky to do it with replaces, probably taking much more than the expected
> 96 passes.

My timings showed that for ".upper()", doing the full 26 passes "a" ->
"A", it was *way* slower to use .translate than .replace, unless you
used a list or equiv. with much faster lookup. Even then, it was
slower to use .translate.

I agree that for large tables it's obviously going to swing the other
way, but by the time you're running .replace 26 times you wouldn't (at
least I wouldn't) expect it still to be screamingly faster than
.translate.

> translate for byte strings is undoubtedly tons faster.  For byte strings,
> the translation table is 256 bytes, and the inner loop is a simple lookup.

For my above test, .translate is about 10x faster than iterated .replace.

> But for Unicode, the table is a dict (or something very like it, I looked at
> the C code, not the Python code).
>
> So for every character in the input string, it does a dict-type lookup,
> before it can even decide if the character is going to change.

The problem can be solved, I'd imagine, for builtin types. Just build
an internal representation upon calling .translate that's faster. It's
especially easy in the list case -- just build a C array¹ at the start
mapping int -> int and then have really fast C mapping speeds.

For dictionaries, you can do the same thing -- you just have to make
sure you're not breaking any memory barriers.

¹ I don't do C or other low level languages, so my knowledge in this
area is embarrassingly bad

> Just for reference, the two files I was looking at were:
>
> objects/unicodeobject.c
> objects/bytesobject.c
>
> Extracted from the bz2 downloaded from the page:
> http://hg.python.org/cpython

I didn't look at bytes first time, I might take a look later.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Find and Replace Simplification

2013-07-20 Thread Joshua Landau
On 20 July 2013 19:37, Joshua Landau  wrote:
> mapping int -> int

Well, on second thought it's not quite this unless it's a 1:1 mapping.
Point remains valid, though, I think.
-- 
http://mail.python.org/mailman/listinfo/python-list


How can I make this piece of code even faster?

2013-07-20 Thread pablobarhamalzas
Ok, I'm working on a predator/prey simulation, which evolve using genetic 
algorithms. At the moment, they use a quite simple feed-forward neural network, 
which can change size over time. Each brain "tick" is performed by the 
following function (inside the Brain class):

def tick(self):
    input_num = self.input_num
    hidden_num = self.hidden_num
    output_num = self.output_num

    hidden = [0]*hidden_num
    output = [0]*output_num

    inputs = self.input
    h_weight = self.h_weight
    o_weight = self.o_weight

    e = math.e

    count = -1
    for x in range(hidden_num):
        temp = 0
        for y in range(input_num):
            count += 1
            temp += inputs[y] * h_weight[count]
        hidden[x] = 1/(1+e**(-temp))

    count = -1
    for x in range(output_num):
        temp = 0
        for y in range(hidden_num):
            count += 1
            temp += hidden[y] * o_weight[count]
        output[x] = 1/(1+e**(-temp))

    self.output = output

The function is actually quite fast (~0.040 seconds per 200 calls, using 10 
input, 20 hidden and 3 output neurons), and used to be much slower until I 
fiddled about with it a bit to make it faster. However, it is still somewhat 
slow for what I need.

My question to you is whether you can see any obvious (or not so obvious) way of 
making this faster. I've heard about numpy and have been reading about it, but 
I really can't see how it could be implemented here.

Cheers!
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How can I make this piece of code even faster?

2013-07-20 Thread Fabio Zadrozny
On Sat, Jul 20, 2013 at 5:22 PM,  wrote:

> Ok, I'm working on a predator/prey simulation, which evolve using genetic
> algorithms. At the moment, they use a quite simple feed-forward neural
> network, which can change size over time. Each brain "tick" is performed by
> the following function (inside the Brain class):
>
> def tick(self):
> input_num = self.input_num
> hidden_num = self.hidden_num
> output_num = self.output_num
>
> hidden = [0]*hidden_num
> output = [0]*output_num
>
> inputs = self.input
> h_weight = self.h_weight
> o_weight = self.o_weight
>
> e = math.e
>
> count = -1
> for x in range(hidden_num):
> temp = 0
> for y in range(input_num):
> count += 1
> temp += inputs[y] * h_weight[count]
> hidden[x] = 1/(1+e**(-temp))
>
> count = -1
> for x in range(output_num):
> temp = 0
> for y in range(hidden_num):
> count += 1
> temp += hidden[y] * o_weight[count]
> output[x] = 1/(1+e**(-temp))
>
> self.output = output
>
> The function is actually quite fast (~0.040 seconds per 200 calls, using
> 10 input, 20 hidden and 3 output neurons), and used to be much slower
> untill I fiddled about with it a bit to make it faster. However, it is
> still somewhat slow for what I need it.
>
> My question to you is if you an see any obvious (or not so obvious) way of
> making this faster. I've heard about numpy and have been reading about it,
> but I really can't see how it could be implemented here.
>
> Cheers!
> --
> http://mail.python.org/mailman/listinfo/python-list
>



Low level optimizations:

If you're in Python 2.x (and not 3), you should use xrange() instead of
range(), or maybe even create a local variable and increment it and check
its value within a while (that way you can save a few instructions on
method invocations from xrange/range).

Anyways, if that's not fast enough, just port it to c/c++ (or one of the
alternatives to speed it up while still in python: numba, cython,
shedskin). Or (if you can), try to use PyPy and see if you get more speed
without doing anything.

Cheers,

Fabio
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How can I make this piece of code even faster?

2013-07-20 Thread Roy Smith
In article <[email protected]>,
 [email protected] wrote:

> Ok, I'm working on a predator/prey simulation, which evolve using genetic 
> algorithms. At the moment, they use a quite simple feed-forward neural 
> network, which can change size over time. Each brain "tick" is performed by 
> the following function (inside the Brain class):
> 
> def tick(self):
> input_num = self.input_num 
> hidden_num = self.hidden_num
> output_num = self.output_num
>  
> hidden = [0]*hidden_num
> output = [0]*output_num
> 
> inputs = self.input
> h_weight = self.h_weight
> o_weight = self.o_weight
> 
> e = math.e
> 
> count = -1
> for x in range(hidden_num):
> temp = 0
> for y in range(input_num):
> count += 1
> temp += inputs[y] * h_weight[count]
> hidden[x] = 1/(1+e**(-temp))  
> 
> count = -1  
> for x in range(output_num):
> temp = 0 
> for y in range(hidden_num):
> count += 1 
> temp += hidden[y] * o_weight[count]
> output[x] = 1/(1+e**(-temp))  
>  
> self.output = output 
> 
> The function is actually quite fast (~0.040 seconds per 200 calls, using 10 
> input, 20 hidden and 3 output neurons), and used to be much slower untill I 
> fiddled about with it a bit to make it faster. However, it is still somewhat 
> slow for what I need it.
>  
> My question to you is if you an see any obvious (or not so obvious) way of 
> making this faster. I've heard about numpy and have been reading about it, 
> but I really can't see how it could be implemented here.

First thing, I would add some instrumentation to see where the most time 
is being spent.  My guess is in the first set of nested loops, where the 
inner loop gets executed hidden_num * input_num (i.e. 10 * 20 = 200) 
times.  But timing data is better than my guess.

Assuming I'm right, though, you do compute range(input_num) 20 times.  
You don't need to do that.  You might try xrange(), or you might just 
factor out creating the list outside the outer loop.  But, none of that 
seems like it should make much difference.

What possible values can temp take?  If it can only take certain 
discrete values and you can enumerate them beforehand, you might want to 
build a dict mapping temp -> 1/(1+e**(-temp)) and then all that math 
becomes just a table lookup.
-- 
http://mail.python.org/mailman/listinfo/python-list


[ANN] pyparsing 2.0.1 released - compatible with Python 2.6 and later

2013-07-20 Thread Paul McGuire
In my releasing of Pyparsing 1.5.7/2.0.0 last November, I started to split 
supported Python versions: 2.x to the Pyparsing 1.5.x track, and 3.x to the 
Pyparsing 2.x track. Unfortunately, this caused a fair bit of pain for many 
current users of Python 2.6 and 2.7 (especially those using libs dependent on 
pyparsing), as the default installed pyparsing version using easy_install or 
pip would be the incompatible-to-them pyparsing 2.0.0.

I hope I have rectified (or at least improved) this situation with the latest 
release of pyparsing 2.0.1. Version 2.0.1 takes advantage of the 
cross-major-version compatibility that was planned into Python, wherein many of 
the new features of Python 3.x were made available in Python 2.6 and 2.7. By 
avoiding the one usage of ‘nonlocal’ (a Python 3.x feature not available in any 
Python 2.x release), I’ve been able to release pyparsing 2.0.1 in a form that 
will work for all those using Python 2.6 and later. (If you are stuck on 
version 2.5 or earlier of Python, then you still have to explicitly download 
the 1.5.7 version of pyparsing.)

This release also includes a bugfix to the new ‘<<=’ operator, so that ‘<<’ for 
attachment of parser definitions to Forward instances can be deprecated in 
favor of ‘<<=’.
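
(An editorial illustration, not part of the announcement: how '<<=' attaches 
a definition to a Forward. The grammar below is a made-up example.)

# Hedged sketch: a tiny recursive grammar using Forward with '<<='.
from pyparsing import Forward, Word, nums, Suppress, Group, ZeroOrMore

expr = Forward()
atom = Word(nums) | Group(Suppress('(') + expr + Suppress(')'))
expr <<= atom + ZeroOrMore('+' + atom)   # '<<=' instead of the older '<<'

print(expr.parseString("1 + (2 + 3)"))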

Hopefully, most current users using pip and easy_install can now just install 
pyparsing 2.0.1, and it will be sufficiently version-aware to function under 
all Pythons 2.6 and later.

Thanks for your continued support and interest in pyparsing!

-- Paul McGuire

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Problem installing Pyparsing

2013-07-20 Thread Paul McGuire
Pyparsing 2.0.1 fixes this incompatibility, and should work with all versions 
of Python 2.6 and later.

-- Paul
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Find and Replace Simplification

2013-07-20 Thread Dave Angel

On 07/20/2013 02:37 PM, Joshua Landau wrote:

On 20 July 2013 19:04, Dave Angel  wrote:

On 07/20/2013 01:03 PM, Joshua Landau wrote:


Still, it seems to me that it should be optimizable for sensible
builtin types such that .translate is significantly faster, as there's
no theoretical extra work that .translate *has* to do that .replace
does not, and .replace also has to rebuild the string a lot of times.



translate is going to be faster (than replace) for Unicode if it has a
"large" table.  For example, to translate from ASCII to EBCDIC, where every
character in the string is replaced by a new one.  I have no idea what the
cutoff is.  But of course, for a case like ASCII to EBCDIC, it would be very
tricky to do it with replaces, probably taking much more than the expected
96 passes.


My timings showed that for ".upper()", doing the full 26 passes "a" ->
"A", it was *way* slower to use .translate than .replace, unless you
used a list or equiv. with much faster lookup. Even then, it was
slower to use .translate.

I agree that for large tables it's obviously going to swing the other
way, but by the time you're running .replace 26 times you wouldn't (at
least I wouldn't) expect it still to be screamingly faster than
.translate.


translate for byte strings is undoubtedly tons faster.  For byte strings,
the translation table is 256 bytes, and the inner loop is a simple lookup.


For my above test, .translate is about 10x faster than iterated .replace.


But for Unicode, the table is a dict (or something very like it, I looked at
the C code, not the Python code).

So for every character in the input string, it does a dict-type lookup,
before it can even decide if the character is going to change.


The problem can be solved, I'd imagine, for builtin types. Just build
an internal representation upon calling .translate that's faster. It's
especially easy in the list case


What "list case"?  list doesn't have a replace() method or translate() 
method.




-- just build a C array¹ at the start
mapping int -> int and then have really fast C mapping speeds.


As long as you can afford to have a list with a billion or so entries in 
it.  We are talking about strings and version 3.3, aren't we?  Of 
course, one could always examine the mapping object (table) and see what 
the max value was, and only build a "C array" if it was smaller than say 
50,000.




For dictionaries, you can do the same thing -- you just have to make
sure you're not breaking any memory barriers.

¹ I don't do C or other low level languages, so my knowledge in this
area is embarrassingly bad


Just for reference, the two files I was looking at were:

objects/unicodeobject.c
objects/bytesobject.c

Extracted from the bz2 downloaded from the page:
 http://hg.python.org/cpython


I didn't look at bytes first time, I might take a look later.




--
DaveA

--
http://mail.python.org/mailman/listinfo/python-list


Re: Find and Replace Simplification

2013-07-20 Thread Joshua Landau
On 20 July 2013 22:56, Dave Angel  wrote:
> On 07/20/2013 02:37 PM, Joshua Landau wrote:
>>
>> The problem can be solved, I'd imagine, for builtin types. Just build
>> an internal representation upon calling .translate that's faster. It's
>> especially easy in the list case
>
> What "list case"?  list doesn't have a replace() method or translate()
> method.

I mean some_str.translate(some_list).
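
(Editorial illustration, not part of the exchange: str.translate() accepts any 
object indexable by code point, so a plain list can serve as the table for 
ASCII-only input. Lookups that raise LookupError leave the character unchanged.)

# Hedged sketch: a list as a translation table.
table = list(range(128))                  # identity mapping for ASCII
for c in "abcdefghijklmnopqrstuvwxyz":
    table[ord(c)] = ord(c.upper())        # map lowercase to uppercase

print("hello, wörld!".translate(table))   # 'HELLO, WöRLD!' - 'ö' is out of range, so untouched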

>> -- just build a C array¹ at the start
>> mapping int -> int and then have really fast C mapping speeds.
>
>
> As long as you can afford to have a list with a billion or so entries in it.
> We are talking about strings and version 3.3, aren't we?  Of course, one
> could always examine the mapping object (table) and see what the max value
> was, and only build a "C array" if it was smaller than say 50,000.

When talking about some_str.translate(some_list), this doesn't apply
very much -- they've already gotten a much bigger Python list.

In the dict case² I don't actually want to jump to the conclusion that
one should do array-based mappings because I can see the obvious
downsides and it's obviously not good to have 100 cases in there,
*but* I still think that there's a solution.

Here are some ideas:
· Latin and ASCII can obviously be done with a C array, and I imagine
that covers at least a fair portion of use-cases.
· If you only have a few characters in the mapping (so sys.getsizeof
is small) then it'll be a lot faster to just iterate through a C list
instead of checking the dict.
· Other cases are:
· Full-character-set or equiv. mappings, which are already faster
than .replace(). Those should really be re-made into lists so that the
list optimisation can take place, and lists are much faster even in
versions without these hypothetical optimizations, too.
· Custom objects. There's nothing we can do here.

I realise that this is a lot more code, so it's not something I'm
going to try to force. However, I think it's useful if it stops people
using .replace in a loop ;).

² some_str.translate(some_dict)
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How can I make this piece of code even faster?

2013-07-20 Thread pablobarhamalzas
Hi there.
I'm using python 3, where xrange doesn't exist any more (range is now 
equivalent). And "temp" doesn't have any fixed discrete values it always takes.

I have tried cython but it doesn't seem to work well (maybe using it wrong?).

Any other ideas?



-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How can I make this piece of code even faster?

2013-07-20 Thread Chris Angelico
On Sun, Jul 21, 2013 at 6:22 AM,   wrote:
> temp = 0
> for y in range(input_num):
> count += 1
> temp += inputs[y] * h_weight[count]
> hidden[x] = 1/(1+e**(-temp))

It's a micro-optimization that'll probably have negligible effect, but
it can't hurt: Instead of adding to temp and raising e to -temp, carry
the value of temp as a negative number:

temp -= inputs[y] * h_weight[count]
hidden[x] = 1/(1+e**temp)

Ditto in the second loop.

Not sure which way performance would go, but would it be more readable
to take an iterator for h_weight and o_weight? Something like this:

# Slot this into your existing structure
inputs = self.input
h_weight = iter(self.h_weight)
o_weight = iter(self.o_weight)

e = math.e

for x in range(hidden_num):
    temp = 0
    for y in inputs:
        temp += y * next(h_weight)
    hidden[x] = 1/(1+e**(-temp))

for x in range(output_num):
    temp = 0
    for y in hidden:
        temp += y * next(o_weight)
    output[x] = 1/(1+e**(-temp))
# End.

If that looks better, the next change I'd look to make is replacing
the 'for y' loops with sum() calls on generators:

temp = sum(y * next(o_weight) for y in hidden)

And finally replace the entire 'for x' loops with list comps... which
makes for two sizeable one-liners, which I like and many people
detest:

def tick(self):
    inputs = self.inputs
    h_weight = iter(self.h_weight)
    o_weight = iter(self.o_weight)
    e = math.e
    hidden = [1/(1+e**sum(-y * next(h_weight) for y in inputs))
              for _ in range(hidden_num)]
    self.output = [1/(1+e**sum(-y * next(o_weight) for y in hidden))
                   for _ in range(output_num)]

Up to you to decide whether you find that version more readable, or at
least sufficiently readable, and then to test performance :) But it's
shorter by quite a margin, which I personally like. Oh, and I'm
relying on you to make sure I've made the translation correctly, which
I can't confirm without a pile of input data to test it on. All I can
say is that it's syntactically correct.

ChrisA
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How can I make this piece of code even faster?

2013-07-20 Thread pablobarhamalzas
Hi there Chris.
Unfortunately, using iterators was about twice as slow as the original 
implementation, so that's not the solution.
Thanks anyway.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How can I make this piece of code even faster?

2013-07-20 Thread Chris Angelico
On Sun, Jul 21, 2013 at 9:24 AM,   wrote:
> Hi there Chris.
> Unfortunately, using iterators was about twice as slow as the original 
> implementation, so that's not the solution.
> Thanks anyway.

Fascinating! Well, was worth a try anyhow. But that's a very surprising result.

ChrisA
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: [ANN] pyparsing 2.0.1 released - compatible with Python 2.6 and later

2013-07-20 Thread Steven D'Aprano
On Sat, 20 Jul 2013 14:30:14 -0700, Paul McGuire wrote:

> Thanks for your continued support and interest in pyparsing!

And thank you for pyparsing!

Paul, I thought I would mention that over the last week or so on the 
Python-Dev mailing list, there has been some discussion about adding a 
parser generator to the standard library for Python 3.4. If this is of 
interest to you, have a look for the thread 

PLY in stdlib (was cffi in stdlib)

at http://mail.python.org/mailman/listinfo/python-dev



Regards,


-- 
Steve
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How can I make this piece of code even faster?

2013-07-20 Thread Steven D'Aprano
On Sat, 20 Jul 2013 13:22:03 -0700, pablobarhamalzas asked:

"How can I make this piece of code even faster?"

- Use a faster computer.
- Put in more memory.
- If using Unix or Linux, decrease the "nice" value of the process (i.e. raise its priority).

I mention these because sometimes people forget that if you have a choice 
between "spend 10 hours at $70 per hour to optimize code", and "spend 
$200 to put more memory in", putting more memory in may be more cost 
effective.

Other than that, what you describe sounds like it could be a good 
candidate for PyPy to speed the code up, although PyPy is still (mostly) 
Python 2. You could take this question to the pypy mailing list and ask 
there.

http://mail.python.org/mailman/listinfo/pypy-dev

You also might like to try Cython or Numba.

As far as pure-Python optimizations, once you have a decent algorithm, 
there's probably not a lot of room for major speed ups. But a couple of 
thoughts and a possible optimized version come to mind...

1) In general, it is better/faster to iterate over lists directly, than 
indirectly by index number:

for item in sequence:
process(item)

rather than:

for i in range(len(sequence)):
item = sequence[i]
process(item)


If you need both the index and the value:

for i, item in enumerate(sequence):
print(i, process(item))


In your specific case, if I have understood your code's logic, you can 
just iterate directly over the appropriate lists, once each.



2) You perform an exponentiation using math.e**(-temp). You will probably 
find that math.exp(-temp) is both faster and more accurate.


3) If you need to add numbers, it is better to call sum() or math.fsum() 
than add them by hand. sum() may be a tiny bit faster, or maybe not, but 
fsum() is more accurate for floats.


See below for my suggestion on an optimized version.


> Ok, I'm working on a predator/prey simulation, which evolve using
> genetic algorithms. At the moment, they use a quite simple feed-forward
> neural network, which can change size over time. Each brain "tick" is
> performed by the following function (inside the Brain class):
> 
> def tick(self):
> input_num = self.input_num
> hidden_num = self.hidden_num
> output_num = self.output_num
>  
> hidden = [0]*hidden_num
> output = [0]*output_num
> 
> inputs = self.input
> h_weight = self.h_weight
> o_weight = self.o_weight
> 
> e = math.e
> 
> count = -1
> for x in range(hidden_num):
> temp = 0
> for y in range(input_num):
> count += 1
> temp += inputs[y] * h_weight[count]
> hidden[x] = 1/(1+e**(-temp))
> 
> count = -1
> for x in range(output_num):
> temp = 0
> for y in range(hidden_num):
> count += 1
> temp += hidden[y] * o_weight[count]
> output[x] = 1/(1+e**(-temp))
>  
> self.output = output
> 
> The function is actually quite fast (~0.040 seconds per 200 calls, using
> 10 input, 20 hidden and 3 output neurons), and used to be much slower
> untill I fiddled about with it a bit to make it faster. However, it is
> still somewhat slow for what I need it.
>  
> My question to you is if you an see any obvious (or not so obvious) way
> of making this faster. I've heard about numpy and have been reading
> about it, but I really can't see how it could be implemented here.

Here's my suggestion:


def tick(self):
    exp = math.exp
    fsum = math.fsum  # more accurate than builtin sum

    # This assumes that both inputs and h_weight have exactly
    # self.input_num values.
    temp = fsum(i*w for (i, w) in zip(self.input, self.h_weight))
    hidden = [1/(1+exp(-temp))]*self.hidden_num

    # This assumes that both hidden and o_weight have exactly
    # self.hidden_num values.
    temp = fsum(h*w for (h, w) in zip(hidden, self.o_weight))
    self.output = [1/(1+exp(-temp))]*self.output_num


I have neither tested that this works the same as your code (or even 
works at all!) nor that it is faster, but I would expect that it will be 
faster.

Good luck!
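
(Editorial sketch, not from Steven's post: the numpy formulation the original 
poster asked about. It assumes h_weight and o_weight can be reshaped into 
(hidden_num, input_num) and (output_num, hidden_num) matrices, matching the 
flat indexing in the original loops; self.output becomes a numpy array here.)

# Hedged numpy sketch: the two nested loops collapse into matrix-vector
# products, with the sigmoid applied elementwise.
import numpy as np

def tick_numpy(self):
    inputs = np.asarray(self.input)
    h_w = np.asarray(self.h_weight).reshape(self.hidden_num, self.input_num)
    o_w = np.asarray(self.o_weight).reshape(self.output_num, self.hidden_num)

    hidden = 1.0 / (1.0 + np.exp(-h_w.dot(inputs)))       # one weighted sum per hidden neuron
    self.output = 1.0 / (1.0 + np.exp(-o_w.dot(hidden)))  # one weighted sum per output neuron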



-- 
Steven
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Play Ogg Files

2013-07-20 Thread David Hutto
It was supposed to show you that you can use a command-line program from 
Windows or Linux that will play an ogg/wav file, etc., with an "if windows: 
do this, elif linux: do this".

espeak was just a suggestion, unless you want your own voice played for the
chatbot, or a selection of a male or female voice, and why ogg, why not wav?
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Play Ogg Files

2013-07-20 Thread David Hutto
With linux you can have your package listed in synaptic, and can use with a
sudo apt-get install whatever ogg player like ogg123, and windows I don't
work with that much, but I'm pretty sure I've played .wav files from the
command line before while working with cross platform just for practice, so
with python 3 you can use what's available in the system with an if command.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Play Ogg Files

2013-07-20 Thread Chris Angelico
On Sun, Jul 21, 2013 at 3:39 PM, David Hutto  wrote:
> With linux you can have your package listed in synaptic, and can use with a
> sudo apt-get install whatever ogg player like ogg123, and windows I don't
> work with that much, but I'm pretty sure I've played .wav files from the
> command line before while working with cross platform just for practice, so
> with python 3 you can use what's available in the system with an if command.

Correction: "With Debian-based Linux distributions, you can etc etc" -
aptitude is Debian's package manager, it's not something you'll find
on other Linuxes. And the exact packages available depend on your
repositories; again, most Debian-derived Linux distros will most
likely have ogg123, but it's not guaranteed. However, it's reasonably
likely that other package managers and repositories will have what
you're looking for.

ChrisA
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Play Ogg Files

2013-07-20 Thread David Hutto
Yeah, its like yum used in others(or the point and click gui package
installers). The main point kind of is in cross platform it would seem that
you would just use what's available with try/except, or if statements, and
the question is what os's is he going for.

Then a simple usage of what's available in the command line for those os's.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Play Ogg Files

2013-07-20 Thread Chris Angelico
On Sun, Jul 21, 2013 at 4:07 PM, David Hutto  wrote:
> Yeah, its like yum used in others(or the point and click gui package
> installers). The main point kind of is in cross platform it would seem that
> you would just use what's available with try/except, or if statements, and
> the question is what os's is he going for.

Right, it's just safer to avoid annoying the Red Hat people by
pretending the whole world is Debian :)

ChrisA
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Play Ogg Files

2013-07-20 Thread David Hutto
Mainly I just use my apps for my own purposes. So it's usually on the
debian/ubuntu distro I have, although I do have Windows XP SP3 in virtual
box.

I have been meaning to install some other linux distros in virtual box that
are the main ones, percentage of utilization based, that are used, and
practice more with the cross platform, but haven't had a chance to lately.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Play Ogg Files

2013-07-20 Thread David Hutto
And just thinking about it... Devyn might want to install VirtualBox and 
put in the OSs he wants to target first, because there's usually always a 
glitch in third-party imports.

This is especially true when using a newer version of Python that the 
developers of the packages you import haven't had time to update for, as 
well as for OSs that might have changed while everyone has a newer version.


On Sun, Jul 21, 2013 at 2:42 AM, David Hutto  wrote:

> Mainly I just use my apps for my own purposes. So it's usually on the
> debian/ubuntu distro I have, although I do have Windows XP SP3 in virtual
> box.
>
> I have been meaning to install some other linux distros in virtual box
> that are the main ones, percentage of utilization based, that are used, and
> practice more with the cross platform, but haven't had a chance to lately.
>
>
>


-- 
Best Regards,
David Hutto
CEO: http://www.hitwebdevelopment.com
-- 
http://mail.python.org/mailman/listinfo/python-list