Re: How make the judge with for loop?

2016-10-16 Thread Steven D'Aprano
On Sunday 16 October 2016 16:23, [email protected] wrote:

> c="abcdefghijk"
> len=len(c)
> n is a int
> sb=[[] for i in range(n)]
> 
> while (i < len) {
> for (int j = 0; j < n && i < len; j++)
> sb[j].append(c[i++]);
> for (int j = n-2; j >= 1 && i < len; j--) //
> sb[j].append(c[i++]);
> }
> 
> How to translate to python? I tried but my python code is really stupid

I don't know. What language is your code written in, and what does it do?

I can guess what some of it probably does, because it looks like Python code, 
but the for-loops are complicated and weird. What do they do?

It might help if you show the expected results.




-- 
Steven
git gets easier once you get the basic idea that branches are homeomorphic 
endofunctors mapping submanifolds of a Hilbert space.

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: How make the judge with for loop?

2016-10-16 Thread Peter Otten
[email protected] wrote:

> c="abcdefghijk"
> len=len(c)
> n is a int
> sb=[[] for i in range(n)]
> 
> while (i < len) {
> for (int j = 0; j < n && i < len; j++)
> sb[j].append(c[i++]);
> for (int j = n-2; j >= 1 && i < len; j--) //
> sb[j].append(c[i++]);
> }
> 
> How to translate to python? I tried but my python code is really stupid

It would still be good to provide it, or at least a complete runnable C 
source. A description of the problem in plain English would be helpful, too.

What follows are mechanical translations of what I think your original C 
code does.

(1) You can use a while loop to replace C's for loop:

# example: first for loop
j = 0
while j < n and i < length:
    sb[j].append(chars[i])
    i += 1
    j += 1

which should be straightforward, or 

(2) you can run a for loop until an exception is raised:

chars = "abcdefghijk"
...
chars = iter(chars)
try:
while True:
for sublist in sb[:n]:
sublist.append(next(chars))
for sublist in sb[n-2: 0: -1]:
sublist.append(next(chars))
except StopIteration:
pass # we ran out of characters

(3) If you concatenate the two list slices into one, and with a little help 
from itertools (the while True loop becomes itertools.cycle()), you get:

from itertools import cycle

up = sb[:n]
down = sb[n-2: 0: -1]

for sublist, c in zip(cycle(up + down), chars):
    sublist.append(c)

(All untested code, you should check especially the slice indices)
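
For what it's worth, here is a self-contained sketch of variant (3); the value 
n = 4 is only an assumption, since your post never says what n should be, and 
the printed grouping is just my guess at the zigzag-style pattern the C code 
seems to build:

from itertools import cycle

chars = "abcdefghijk"
n = 4                          # assumed; the original post never gives a value
sb = [[] for _ in range(n)]

up = sb[:n]                    # sublists visited top to bottom
down = sb[n-2:0:-1]            # then bottom to top, skipping the two end sublists

for sublist, c in zip(cycle(up + down), chars):
    sublist.append(c)

for sublist in sb:
    print(''.join(sublist))

# should print ag, bfh, ceik, dj -- one line per sublist (if the slices are right)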

-- 
https://mail.python.org/mailman/listinfo/python-list


Help me!, I would like to find split where the split sums are close to each other?

2016-10-16 Thread k . ademarus
Help me! I would like to find the split where the split sums are closest to 
each other.

I have a list:

test = [10,20,30,40,50,60,70,80,90,100]

I would like to split it into 3 parts, try all possible combinations, and 
select the split where the differences between the part sums are smallest.

Please show some example code or simple Python code.

Thank you very much

K.ademarus
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: pyserial: wait for execute

2016-10-16 Thread Grant Edwards
On 2016-10-16, Michael Okuntsov  wrote:

> is there a way, other than time.sleep(), to be sure that the command
> sent through a serial port has been fully executed?

If the remote device doesn't send a response telling you it's done
executing the command, then there is no way to know when that has
happened.
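
If the device *does* acknowledge each command, you can wait for that reply 
instead of sleeping. A rough sketch with a recent pyserial (the port name, 
baud rate, command and line terminator are all made up for illustration):

import serial

with serial.Serial('/dev/ttyUSB0', 9600, timeout=2) as port:  # assumed port/baud
    port.write(b'MOVE 100\r\n')          # hypothetical command
    reply = port.read_until(b'\r\n')     # returns on the terminator, or at the timeout
    if reply:
        print('device acknowledged:', reply)
    else:
        print('no reply within 2 seconds')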

-- 
Grant


-- 
https://mail.python.org/mailman/listinfo/python-list


Announcement: Code generation from state diagrams

2016-10-16 Thread peter . o . mueller
From (UML) state diagrams to Python code made easy.

State machines are without any doubt a very good way to model behavior. The new 
code generator from Sinelabore translates hierarchical state machines 
efficiently into different languages now including Python.

The generator accepts diagrams from many common UML tools. It automatically 
performs robustness tests  and allows interactive simulation of the model. The 
generated code is maintainable and readable. No special runtime libraries are 
required. 

More information and a demo version are available at www.sinelabore.com

Best regards,
Peter Mueller
-- 
https://mail.python.org/mailman/listinfo/python-list


How to pick out the same titles.

2016-10-16 Thread Seymore4Head
How to pick out the same titles.

I have a long text file that has movie titles in it and I would like
to find dupes.

The thing is that sometimes I have one called "The Killing Fields" and
it also could be listed as "Killing Fields".  Sometimes the title will
have the date a year off.

What I would like to do is output to another file that shows those two
as a match.

I don't know the best way to tackle this.  I would think you would
have to pair the titles with the most consecutive letters in a row.

Anyone want this as a practice exercise?  I don't really use
programming enough to remember how.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Help me!, I would like to find split where the split sums are close to each other?

2016-10-16 Thread breamoreboy
On Sunday, October 16, 2016 at 12:27:00 PM UTC+1, [email protected] wrote:
> Help me! I would like to find the split where the split sums are closest to 
> each other.
> 
> I have a list:
> 
> test = [10,20,30,40,50,60,70,80,90,100]
> 
> I would like to split it into 3 parts, try all possible combinations, and 
> select the split where the differences between the part sums are smallest.
> 
> Please show some example code or simple Python code.
> 
> Thank you very much
> 
> K.ademarus

We do not write code for you.  I'd suggest you start here 
https://docs.python.org/3/library/itertools.html

Kindest regards.

Mark Lawrence.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: How to pick out the same titles.

2016-10-16 Thread Alain Ketterlin
Seymore4Head  writes:

[...]
> I have a  long text file that has movie titles in it and I would like
> to find dupes.
>
> The thing is that sometimes I have one called "The Killing Fields" and
> it also could be listed as "Killing Fields".  Sometimes the title will
> have the date a year off.
>
> What I would like to do is output to another file that shows those two
> as a match.

Try the difflib module (read the doc, its default behavior may be
surprising).
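
For instance, a minimal sketch of the kind of fuzzy comparison difflib can do 
(the sample titles and the 0.6 cutoff are just assumptions for illustration):

import difflib

titles = ["The Killing Fields (1984)", "Killing Fields (1985)", "Alien (1979)"]

for i, a in enumerate(titles):
    for b in titles[i + 1:]:
        ratio = difflib.SequenceMatcher(None, a.lower(), b.lower()).ratio()
        if ratio > 0.6:                  # assumed threshold, tune to taste
            print('%r looks like %r (score %.2f)' % (a, b, ratio))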

-- Alain.
-- 
https://mail.python.org/mailman/listinfo/python-list


Build desktop application using django

2016-10-16 Thread ayuchitsaluja8
Hello, I want to build a desktop application which retrieves data from a server 
and stores data on the server. I have basic experience with Python and I don't 
know how to build that thing.
-- 
https://mail.python.org/mailman/listinfo/python-list


Writing library documentation?

2016-10-16 Thread tshepard
Is there a standard or best way to write documentation for
a particular python library?  I'd mostly target HTML, I guess.

Thanks!


Tobiah
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: How to pick out the same titles.

2016-10-16 Thread duncan smith
On 16/10/16 16:16, Seymore4Head wrote:
> How to pick out the same titles.
> 
> I have a  long text file that has movie titles in it and I would like
> to find dupes.
> 
> The thing is that sometimes I have one called "The Killing Fields" and
> it also could be listed as "Killing Fields".  Sometimes the title will
> have the date a year off.
> 
> What I would like to do is output to another file that shows those two
> as a match.
> 
> I don't know the best way to tackle this.  I would think you would
> have to pair the titles with the most consecutive letters in a row.
> 
> Anyone want this as a practice exercise?  I don't really use
> programming enough to remember how.
> 

Tokenize, generate (token) set similarity scores and cluster on
similarity score.


>>> import tokenization
>>> bigrams1 = tokenization.n_grams("The Killing Fields".lower(), 2, pad=True)
>>> bigrams1
['_t', 'th', 'he', 'e ', ' k', 'ki', 'il', 'll', 'li', 'in', 'ng', 'g ',
' f', 'fi', 'ie', 'el', 'ld', 'ds', 's_']
>>> bigrams2 = tokenization.n_grams("Killing Fields".lower(), 2, pad=True)
>>> import pseudo
>>> pseudo.Jaccard(bigrams1, bigrams2)
0.7


You could probably just generate token sets, then iterate through all
title pairs and manually review those with similarity scores above a
suitable threshold. The code I used above is very simple (and pasted below).


def n_grams(s, n, pad=False):
    # n >= 1
    # returns a list of n-grams
    # or an empty list if n > len(s)
    if pad:
        s = '_' * (n-1) + s + '_' * (n-1)
    return [s[i:i+n] for i in range(len(s)-n+1)]

def Jaccard(tokens1, tokens2):
    # returns exact Jaccard
    # similarity measure for
    # two token sets
    tokens1 = set(tokens1)
    tokens2 = set(tokens2)
    return len(tokens1&tokens2) / len(tokens1|tokens2)
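
As a sketch of the pairwise pass described above, using those two functions 
(the sample titles and the 0.6 cutoff are only assumptions):

titles = ["The Killing Fields", "Killing Fields", "Alien"]

token_sets = {t: n_grams(t.lower(), 2, pad=True) for t in titles}

for i, t1 in enumerate(titles):
    for t2 in titles[i + 1:]:
        score = Jaccard(token_sets[t1], token_sets[t2])
        if score > 0.6:                  # assumed threshold
            print(t1, '/', t2, '->', round(score, 2))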


Duncan


-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Build desktop application using django

2016-10-16 Thread Michael Torrie
On 10/16/2016 11:38 AM, [email protected] wrote:
> Hello I want to build a desktop application which retrieves data from
> server and stores data on server. I have basic experience of python
> and I dont know how to build that thing.

Crystal balls are always risky things to turn to when trying to decipher
intentions from extremely vague questions. But my crystal ball says you
should read a few Django tutorials and try building a simple web-based
system with a database backend (even SQLite to start with).
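
To give a flavour of what those tutorials walk you through, here is a tiny 
sketch of a model backed by such a database (the Movie class and its fields 
are invented purely for illustration):

# models.py inside a Django app -- class and field names are made up
from django.db import models

class Movie(models.Model):
    title = models.CharField(max_length=200)
    year = models.IntegerField()

    def __str__(self):
        return self.title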


-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Help me!, I would like to find split where the split sums are close to each other?

2016-10-16 Thread Michael Torrie
On 10/16/2016 05:25 AM, [email protected] wrote:
> Help me! I would like to find the split where the split sums are closest
> to each other.
> 
> I have a list:
> 
> test = [10,20,30,40,50,60,70,80,90,100]
> 
> I would like to split it into 3 parts, try all possible combinations,
> and select the split where the differences between the part sums are
> smallest.
> 
> Please show some example code or simple Python code.

Ahh, sounds like a homework problem, which we will not answer directly,
if at all.

If you had no computer at all, how would you do this by hand?  Can you
describe the process that would identify how to split the piles of
candies, money, or whatever other representation you might use?  So
forgetting about Python and the specifics of making a valid program, how
would you solve this problem?  What steps would you take? This is known
as an algorithm.  Once you have that figured out, then you can begin to
express the algorithm as a computer program.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: How make the judge with for loop?

2016-10-16 Thread alister
On Sat, 15 Oct 2016 22:23:29 -0700, 380162267qq wrote:

> c="abcdefghijk"
> len=len(c)
> n is a int
> sb=[[] for i in range(n)]
> 
> while (i < len) {
> for (int j = 0; j < n && i < len; j++)
> sb[j].append(c[i++]);
> for (int j = n-2; j >= 1 && i < len; j--) //
> sb[j].append(c[i++]);
> }
> 
> How to translate to python? I tried but my python code is really stupid

Show your Python code.
Does it at least get the right results?
If so, you may get better feedback on how to improve it.

Also, as others have suggested, try detailing the actual problem and working on 
that rather than trying to translate an existing solution.



-- 
Producers seem to be so prejudiced against actors who've had no training.
And there's no reason for it.  So what if I didn't attend the Royal 
Academy
for twelve years?  I'm still a professional trying to be the best actress
I can.  Why doesn't anyone send me the scripts that Faye Dunaway gets?
-- Farrah Fawcett-Majors
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Writing library documentation?

2016-10-16 Thread Irmen de Jong
On 16-10-2016 19:41, [email protected] wrote:
> Is there a standard or best way to write documentation for
> a particular python library?  I'd mostly target HTML, I guess.
> 
> Thanks!
> 
> 
> Tobiah
> 

I guess most people use Sphinx for this task:
http://www.sphinx-doc.org/

You can use various themes to customize your HTML output.
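
A minimal conf.py, as a sketch of where to start (the project name and the 
relative path are assumptions; sphinx-quickstart will generate a fuller 
version for you):

# docs/conf.py -- minimal Sphinx configuration
import os
import sys
sys.path.insert(0, os.path.abspath('..'))   # let autodoc import the library

project = 'mylib'                            # assumed project name
extensions = [
    'sphinx.ext.autodoc',                    # pull API docs out of docstrings
    'sphinx.ext.viewcode',                   # link to highlighted source
]
html_theme = 'alabaster'                     # or any other installed theme

After that, "sphinx-build -b html docs docs/_build/html" (or the make html 
target that sphinx-quickstart creates) produces the HTML output.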

Irmen

-- 
https://mail.python.org/mailman/listinfo/python-list


py.test/tox InvocationError

2016-10-16 Thread D.M. Procida
When I run:

   py.test --cov=akestra_utilities --cov=akestra_image_plugin
--cov=chaining --cov=contacts_and_people --cov=housekeeping --cov=links
--cov=news_and_events --cov=vacancies_and_studentships --cov=video
--cov-report=term-missing example tests

it works quite happily.

When I run tox -e coveralls, which runs the same thing, it raises an
InvocationError.

You can see the same error at
.

Any suggestions on what is different in the two cases?

Thanks,

Daniele
-- 
https://mail.python.org/mailman/listinfo/python-list


Term frequency using scikit-learn's CountVectorizer

2016-10-16 Thread Abdul Abdul
I have the following code snippet where I'm trying to list the term 
frequencies, where first_text and second_text are .tex documents:

from sklearn.feature_extraction.text import CountVectorizer
training_documents = (first_text, second_text)  
vectorizer = CountVectorizer()
vectorizer.fit_transform(training_documents)
print "Vocabulary:", vectorizer.vocabulary 
When I run the script, I get the following:

File "test.py", line 19, in 
vectorizer.fit_transform(training_documents)
  File 
"/usr/local/lib/python2.7/site-packages/sklearn/feature_extraction/text.py", 
line 817, in fit_transform
self.fixed_vocabulary_)
  File 
"/usr/local/lib/python2.7/site-packages/sklearn/feature_extraction/text.py", 
line 752, in _count_vocab
for feature in analyze(doc):
  File 
"/usr/local/lib/python2.7/site-packages/sklearn/feature_extraction/text.py", 
line 238, in 
tokenize(preprocess(self.decode(doc))), stop_words)
  File 
"/usr/local/lib/python2.7/site-packages/sklearn/feature_extraction/text.py", 
line 115, in decode
doc = doc.decode(self.encoding, self.decode_error)
  File 
"/usr/local/Cellar/python/2.7.11/Frameworks/Python.framework/Versions/2.7/lib/python2.7/encodings/utf_8.py",
 line 16, in decode
return codecs.utf_8_decode(input, errors, True)
UnicodeDecodeError: 'utf8' codec can't decode byte 0xa2 in position 200086: 
invalid start byte
How can I fix this issue?

Thanks.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Term frequency using scikit-learn's CountVectorizer

2016-10-16 Thread MRAB

On 2016-10-17 02:04, Abdul Abdul wrote:

> I have the following code snippet where I'm trying to list the term 
> frequencies, where first_text and second_text are .tex documents:
> 
> from sklearn.feature_extraction.text import CountVectorizer
> training_documents = (first_text, second_text)
> vectorizer = CountVectorizer()
> vectorizer.fit_transform(training_documents)
> print "Vocabulary:", vectorizer.vocabulary
> 
> When I run the script, I get the following:
> 
> File "test.py", line 19, in <module>
> vectorizer.fit_transform(training_documents)
>   File 
> "/usr/local/lib/python2.7/site-packages/sklearn/feature_extraction/text.py", 
> line 817, in fit_transform
> self.fixed_vocabulary_)
>   File 
> "/usr/local/lib/python2.7/site-packages/sklearn/feature_extraction/text.py", 
> line 752, in _count_vocab
> for feature in analyze(doc):
>   File "/usr/local/lib/python2.7/site-packages/sklearn/feature_extraction/text.py", 
> line 238, in 
> tokenize(preprocess(self.decode(doc))), stop_words)
>   File 
> "/usr/local/lib/python2.7/site-packages/sklearn/feature_extraction/text.py", 
> line 115, in decode
> doc = doc.decode(self.encoding, self.decode_error)
>   File 
> "/usr/local/Cellar/python/2.7.11/Frameworks/Python.framework/Versions/2.7/lib/python2.7/encodings/utf_8.py",
>  line 16, in decode
> return codecs.utf_8_decode(input, errors, True)
> UnicodeDecodeError: 'utf8' codec can't decode byte 0xa2 in position 200086: 
> invalid start byte
> 
> How can I fix this issue?


I've had a quick look at the docs here:

http://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html#sklearn.feature_extraction.text.CountVectorizer

and I think you need to tell it what encoding the text actually uses. By 
default CountVectorizer assumes the text uses UTF-8, but, clearly, your 
text uses a different encoding.
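
For example (latin-1 below is only a guess; you need to know, or detect, the 
real encoding of your files):

from sklearn.feature_extraction.text import CountVectorizer

training_documents = (first_text, second_text)   # the two documents from your post

# either name the actual encoding of the input ...
vectorizer = CountVectorizer(encoding='latin-1')
# ... or have undecodable bytes replaced instead of raising an error:
# vectorizer = CountVectorizer(decode_error='replace')

vectorizer.fit_transform(training_documents)
print(vectorizer.vocabulary_)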


--
https://mail.python.org/mailman/listinfo/python-list


Re: Build desktop application using django

2016-10-16 Thread Mario R. Osorio
On Sunday, October 16, 2016 at 1:42:23 PM UTC-4, Ayush Saluja wrote:
> Hello I want to build a desktop application which retrieves data from server 
> and stores data on server. I have basic experience of python and I dont know 
> how to build that thing.

I agree with Michael's suspicion that you have no idea of what you are 
talking about. Nevertheless, I'm throwing in my 2 cents:

http://dabodev.com/
-- 
https://mail.python.org/mailman/listinfo/python-list