How python knows where non standard libraries are stored ?
Hello List

sys.path contains all the paths where Python will look for libraries. E.g. on my system, here is the content of sys.path:

>>> import sys
>>> sys.path
['', 'C:\\Users\\jean-marc\\Desktop\\python', 'C:\\Program Files\\Python36-32\\python36.zip', 'C:\\Program Files\\Python36-32\\DLLs', 'C:\\Program Files\\Python36-32\\lib', 'C:\\Program Files\\Python36-32', 'C:\\Program Files\\Python36-32\\lib\\site-packages']

The last path is used as the location to store libraries you install yourself. If I am using a virtual environment (created with venv), this last path is different:

'C:\\Users\\jean-marc\\Desktop\\myenv\\lib\\site-packages'

I looked for Windows environment variables that tell Python how to fill sys.path at startup, but I didn't find any. So how does it work?

--
https://mail.python.org/mailman/listinfo/python-list
Re: How python knows where non standard libraries are stored ?
ast writes:

> I looked for windows environment variables to tell python
> how to fill sys.path at startup but I didn't found.
>
> So how does it work ?

Read the (so-called) docstring at the beginning of the module "site.py". Either locate the module source in the file system and read it in an editor, or in an interactive Python session do:

    import site
    help(site)
Re: How python knows where non standard libraries are stored ?
On 9/7/19, ast wrote:
>
> Eg on my system, here is the content of sys.path:
>
> >>> import sys
> >>> sys.path
> ['',

In the REPL, "" is added for loading modules from the current directory. When executing a script, this would be the script directory.

> 'C:\\Users\\jean-marc\\Desktop\\python',

Probably this directory is in your %PYTHONPATH% environment variable, which gets inserted here, normally ahead of everything else except for the script directory.

> 'C:\\Program Files\\Python36-32\\python36.zip',

The zipped standard-library location is assumed to be beside the DLL or EXE. Next the interpreter adds the PythonPath directories from the registry. These are found in subkeys of r"[HKLM|HKCU]\Python\PythonCore\3.6-32\PythonPath". The base key has default core paths for the standard library, which normally are ignored unless the interpreter can't find its home directory.

> 'C:\\Program Files\\Python36-32\\DLLs',
> 'C:\\Program Files\\Python36-32\\lib',

These two are derived from the default core standard-library paths, which are hard-coded in the C macro PYTHONPATH:

    #define PYTHONPATH L".\\DLLs;.\\lib"

At startup the interpreter searches for its home directory if PYTHONHOME isn't set. (Normally it should not be set.) If the zipped standard library exists, its directory is used as the home directory. Otherwise it checks for the landmark module "lib/os.py" in the application directory (i.e. argv0_path) and its ancestor directories down to the drive root. (If we're executing a virtual environment, the argv0_path gets set from the "home" value in its pyvenv.cfg file.) Normally the home directory is argv0_path.

The home directory is used to resolve the "." components in the hard-coded PYTHONPATH string. If no home directory has been found, the interpreter uses the default core paths from the "PythonPath" registry key as discussed above. If even that isn't found, it just adds the relative paths ".\\DLLs" and ".\\lib".
> 'C:\\Program Files\\Python36-32',

Windows Python has this peculiar addition. It always adds argv0_path (typically the application directory). Perhaps at some time in the past it was necessary because extension modules were located here. AFAIK this is vestigial now, unless some embedding applications rely on it.

At this point, if it still hasn't found the home directory, the interpreter checks for the "lib/os.py" landmark in all of the directories that have been added to the module search path. This is a last-ditch effort to find the standard library and set sys.prefix.

> 'C:\\Program Files\\Python36-32\\lib\\site-packages']

Now we're into the site module additions, including .pth files, which is pretty well documented via help(site) and the docs:

https://docs.python.org/3/library/site.html

The -S command-line option prevents importing the site module at startup.
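The net effect is easy to observe from a running interpreter; a minimal sketch using only the standard library:

```python
import sys

# sys.path is assembled at startup: the script/current directory,
# PYTHONPATH, (on Windows) registry PythonPath entries, the standard
# library locations derived from the interpreter's home directory,
# and finally site-packages, added by the site module.
for entry in sys.path:
    print(entry or "<current directory>")

# sys.prefix is the home/prefix directory the startup search settled on.
print("prefix:", sys.prefix)
```

Running with -S (or comparing inside and outside a venv) shows which entries the site module contributes.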
Which PyQt-compatible, performant graphing library should I use?
Hi,

Currently I'm making a statistics tool with PyQt5 for a game I'm playing. I'm not happy with my current graphing library, though. In the beginning I used matplotlib, which was way too laggy for my use case. Currently I have pyqtgraph, which is snappy but is missing useful features.

The Python graphing library selection is overwhelming, which is why I'm asking here for a recommendation. Things that I need the library to support:

* PyQt5 integration
* plot layout in a grid
* performant navigation
* scatter plots, simple and stacked bar charts

Things that I don't strictly *require*, but would be really useful:

* log scale support (specifically for the y-axis)
* tooltip support, or alternatively click callback support
* plot legend
* datetime axes support (like in matplotlib)
* configurable colors, scatter spot sizes, bar widths, etc.

Here are some screenshots of how my application currently looks with pyqtgraph:

https://i.redd.it/rx423arbw5l31.png
https://i.redd.it/r68twvfmw5l31.png

I would be really grateful for some recommendations!
fileinput module not yielding expected results
import csv
import fileinput
import sys
print("Version: " + str(sys.version_info))
print("Files: " + str(sys.argv[1:]))
with fileinput.input(sys.argv[1:]) as f:
    for line in f:
        print(f"File number: {fileinput.fileno()}")
        print(f"Is first line: {fileinput.isfirstline()}")
I run this:
$ python3 program.py ~/Section*.csv > ~/result
I get this:
$ grep "^Version" ~/result
Version: sys.version_info(major=3, minor=7, micro=1, releaselevel='final',
serial=0)
$ grep "^Files" ~/result
Files: ['/home/jason/Section01.csv', '/home/jason/Section02.csv',
'/home/jason/Section03.csv', '/home/jason/Section04.csv',
'/home/jason/Section05.csv', '/home/jason/Section06.csv']
$ grep -c "True" ~/result
6
That all makes sense to me, but this does not:
$ grep "File number" ~/result | sort | uniq
File number: 3
I expected that last grep to yield:
File number: 1
File number: 2
File number: 3
File number: 4
File number: 5
File number: 6
My ultimate goal is as follows. I have multiple CSV files, each with the
same header line. I want to read the header line from the first file and
ignore it for subsequent files.
Thank you
Re: fileinput module not yielding expected results
On 9/7/19 11:12 AM, Jason Friedman wrote:

> $ grep "File number" ~/result | sort | uniq
> File number: 3
>
> I expected that last grep to yield:
> File number: 1
> File number: 2
> File number: 3
> File number: 4
> File number: 5
> File number: 6

As per https://docs.python.org/3/library/fileinput.html#fileinput.fileno, fileno() is the underlying file descriptor of the current file, and not at all what you're looking for. (Each file is opened only after the previous one has been closed, so the operating system reuses the same descriptor number for every file, which is why you see a single value.)

> My ultimate goal is as follows. I have multiple CSV files, each with the
> same header line. I want to read the header line from the first file and
> ignore it for subsequent files.

If you're certain that the headers are the same in each file, then there's no harm and much simplicity in reading them each time they come up:

    with fileinput.input(...) as f:
        for line in f:
            if fileinput.isfirstline():
                headers = extract_headers(line)
            else:
                pass  # process a non-header line here

Yes, the program will take slightly longer to run. No, you won't notice it.
Re: fileinput module not yielding expected results
> On 7 Sep 2019, at 16:33, Dan Sommers <[email protected]> wrote:
>
>    with fileinput ...:
>        for line in f:
>            if fileinput.isfirstline():
>                headers = extract_headers(line)
>            else:
>                pass  # process a non-header line here

If you always know you can skip the first line, I use this pattern:

    with fileinput.input(...) as f:
        next(f)  # skip header
        for line in f:
            ...  # process a non-header line here

(Note that next(f) skips only one line in total, i.e. the first file's header.)

Barry
Re: fileinput module not yielding expected results
> If you're certain that the headers are the same in each file,
> then there's no harm and much simplicity in reading them each
> time they come up.
>
>    with fileinput ...:
>        for line in f:
>            if fileinput.isfirstline():
>                headers = extract_headers(line)
>            else:
>                pass  # process a non-header line here
>
> Yes, the program will take slightly longer to run. No, you won't
> notice it.

Ah, thank you Dan. I followed your advice ... the working code:

    with fileinput.input(sys.argv[1:]) as f:
        reader = csv.DictReader(f)
        for row in reader:
            if fileinput.isfirstline():
                continue
            for key, value in row.items():
                pass  # processing

I was hung up on that continue on firstline, because I thought that would skip the header of the first file. I think now the csv.DictReader(f) call consumes it first.
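For reference, here is a self-contained version of that working code; the two throwaway CSV files and their contents are invented for illustration:

```python
import csv
import fileinput
import os
import tempfile

# Two temporary CSV files sharing the same header line.
paths = []
for body in ("name,score\na,1\n", "name,score\nb,2\n"):
    fd, path = tempfile.mkstemp(suffix=".csv")
    with os.fdopen(fd, "w") as fh:
        fh.write(body)
    paths.append(path)

rows = []
with fileinput.input(paths) as f:
    reader = csv.DictReader(f)   # consumes the first file's header line
    for row in reader:
        if fileinput.isfirstline():
            continue             # skips the repeated header of later files
        rows.append(dict(row))

print(rows)  # [{'name': 'a', 'score': '1'}, {'name': 'b', 'score': '2'}]

for path in paths:
    os.remove(path)
```

DictReader uses the very first line it reads as the fieldnames, so the isfirstline() check only ever fires on the header lines of the second and later files.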
2to3, str, and basestring
2to3 converts syntactically valid 2.x code to syntactically valid 3.x code. It cannot, however, guarantee semantic correctness. A particular problem is that str is semantically ambiguous in 2.x, as it is used both for text encoded as bytes and for binary data.

To resolve the ambiguity for conversions to 3.x, 2.6 introduced 'bytes' as a synonym for 'str'. The intention is that one use 'bytes' to create or refer to 2.x bytes that should remain bytes in 3.x, and use 'str' to create or refer to 2.x text bytes that should become or will be unicode in 3.x. 3.x and hence 2to3 *assume* that one is using 'bytes' and 'str' this way, so that 'unicode' becomes an unneeded synonym for 'str', and 2to3 changes 'unicode' to 'str'. If one does not use 'str' and 'bytes' as intended, 2to3 may produce semantically different code.

2.3 introduced the abstract superclass 'basestring', which can be viewed as Union(unicode, str). "isinstance(value, basestring)" is defined as "isinstance(value, (unicode, str))". I believe the intended meaning was 'text, whether unicode or encoded bytes'. Certainly, any code following

    if isinstance(value, basestring):

would likely only make sense if that were true. In any case, after 2.6, one should only use 'basestring' when the 'str' part has its restricted meaning of 'unicode in 3.x'.

"(unicode, bytes)" is semantically different from "basestring" and "(unicode, str)" when used in isinstance. 2to3 converts them to "(str, bytes)", 'str', and '(str, str)' (the last being the same as 'str' when used in isinstance). If one uses 'basestring' when one means '(unicode, bytes)', 2to3 may produce semantically different code.
Example based on https://bugs.python.org/issue38003:

    if isinstance(value, basestring):
        if not isinstance(value, unicode):
            value = value.decode(encoding)
        process_text(value)
    else:
        process_nontext(value)

2to3 produces:

    if isinstance(value, str):
        if not isinstance(value, str):
            value = value.decode(encoding)
        process_text(value)
    else:
        process_nontext(value)

If, in 3.x, value is always unicode, then the inner conditional is dead and can be removed. But if, in 3.x, value might be byte-encoded text, it will not be decoded and the code is wrong.

Fixes:

1. Instead of decoding value after the check, do it before the check. I think this is best for new code.

    if isinstance(value, bytes):
        value = value.decode(encoding)
    ...
    if isinstance(value, unicode):
        process_text(value)
    else:
        process_nontext(value)

2. Replace 'basestring' with '(unicode, bytes)'. This is easier with existing code.

    if isinstance(value, (unicode, bytes)):
        if not isinstance(value, unicode):
            value = value.decode(encoding)
        process_text(value)
    else:
        process_nontext(value)

(I believe but have not tested that) 2to3 produces correct 3.x code from either 1 or 2, with the 'unicode' to 'str' replacement giving correct 3.x code in both cases.

3. Edit Lib/lib2to3/fixes/fix_basestring.py to replace 'basestring' with '(str, bytes)' instead of 'str'. This should be straightforward if one understands the ast format.

Note that 2to3 is not meant for 2&3 code using exception tricks and six/future imports. Turning 2&3 code into idiomatic 3-only code is a separate subject.

--
Terry Jan Reedy
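Fix 1, as it would read after conversion to 3.x; process_text and process_nontext from the example are stood in for here by tagged return values:

```python
def process(value, encoding="utf-8"):
    # Fix 1: normalize byte-encoded text to str *before* the type check,
    # so the check itself only has to separate text from non-text.
    if isinstance(value, bytes):
        value = value.decode(encoding)
    if isinstance(value, str):
        return ("text", value)      # stands in for process_text(value)
    return ("nontext", value)       # stands in for process_nontext(value)

print(process(b"caf\xc3\xa9"))  # bytes are decoded, then handled as text
print(process("plain text"))
print(process(42))              # non-text falls through unchanged
```

Unlike the raw 2to3 output, nothing here is dead code, and byte-encoded text is always decoded.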
Re: How python knows where non standard libraries are stored ?
On 9/7/2019 5:51 AM, ast wrote:
> 'C:\\Program Files\\Python36-32\\lib\\site-packages']
>
> The last path is used as a location to store libraries you install
> yourself. If I am using a virtual environment (with venv) this last
> path is different
>
> 'C:\\Users\\jean-marc\\Desktop\\myenv\\lib\\site-packages'
>
> So how does it work ?

I believe that the short answer, skipping the gory details provided by Eryk, is that the result is the same as

    os.path.dirname(sys.executable) + r"\lib\site-packages"

You can check if this works for the venv.

--
Terry Jan Reedy
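A sketch comparing that rule of thumb with what the standard library reports; note the lib\site-packages suffix is the Windows layout, while POSIX installs use lib/pythonX.Y/site-packages, so the guess is Windows-specific:

```python
import os
import sys
import sysconfig

# Terry's rule of thumb (Windows layout): site-packages sits under
# the directory that holds the interpreter executable.
guess = os.path.join(os.path.dirname(sys.executable), "lib", "site-packages")
print("guess:  ", guess)

# sysconfig reports where pure-Python libraries actually install,
# which also accounts for POSIX layouts and virtual environments.
print("purelib:", sysconfig.get_paths()["purelib"])
```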
3 cubes that sum to 42
>>> (-80538738812075974)**3 + 80435758145817515**3 + 12602123297335631**3 == 42
True

Impressively quickly, in the blink of an eye. This is the last number < 100, not theoretically excluded, to be solved. Compute power provided by CharityEngine. For more, see Numberphile:

https://www.youtube.com/watch?v=zyG8Vlw5aAw

--
Terry Jan Reedy
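For anyone who wants to reproduce it, the whole check is a couple of lines; Python's arbitrary-precision integers evaluate the roughly 50-digit cubes exactly:

```python
# The three cubes summing to 42 (see the Numberphile link above).
x, y, z = -80538738812075974, 80435758145817515, 12602123297335631

# Python ints have arbitrary precision, so the huge cubes are
# computed exactly with no overflow and no rounding.
print(x**3 + y**3 + z**3)  # 42
```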
issue in handling CSV data
I am trying to read a log file that is in CSV format.
The code snippet is below:
###
import matplotlib.pyplot as plt
import seaborn as sns; sns.set()
import numpy as np
import pandas as pd
import os
import csv
from numpy import genfromtxt
# read the CSV and get into X array
os.chdir(r'D:\Users\sharanb\OneDrive - HCL Technologies Ltd\Projects\MyBackup\Projects\Initiatives\machine learning\programs\constraints')
X = []
#with open("constraints.csv", 'rb') as csvfile:
#    reader = csv.reader(csvfile)
#    data_as_list = list(reader)
#    myarray = np.asarray(data_as_list)
my_data = genfromtxt('constraints.csv', delimiter = ',', dtype=None)
print (my_data)
my_data_1 = np.delete(my_data, 0, axis=1)
print (my_data_1)
my_data_2 = np.delete(my_data_1, 0, axis=1)
print (my_data_2)
my_data_3 = my_data_2.astype(np.float)
Here is what print(my_data_2) shows:
##
[['"\t"81' '"\t5c']
['"\t"04' '"\t11']
['"\t"e1' '"\t17']
['"\t"6a' '"\t6c']
['"\t"53' '"\t69']
['"\t"98' '"\t87']
['"\t"5c' '"\t4b']
##
Finally, I am trying to get rid of the strings and get an array of numbers
using NumPy's astype function. At this stage, I get an error.
This is the error:
my_data_3 = my_data_2.astype(np.float)
could not convert string to float: " "81
As you can see, the string "\t"81 is causing the error.
It seems to be due to char "\t".
I don't know how to resolve this.
Thanks for your help.
Re: issue in handling CSV data
On Sat, Sep 7, 2019 at 8:21 PM Sharan Basappa wrote:
>
> [snip]
>
> This is the error:
> my_data_3 = my_data_2.astype(np.float)
> could not convert string to float: " "81
>
> As you can see, the string "\t"81 is causing the error.
> It seems to be due to char "\t".
>
how about (strip(my_data_2).astype(np.float))
I haven't used numpy, but if your theory is correct, this will clean
up the string
--
Joel Goldstick
http://joelgoldstick.com/blog
http://cc-baseballstats.info/stats/birthdays
Re: issue in handling CSV data
On Sat, Sep 7, 2019 at 8:28 PM Joel Goldstick wrote:
>
> On Sat, Sep 7, 2019 at 8:21 PM Sharan Basappa
> wrote:
> > [snip]
> >
> > This is the error:
> > my_data_3 = my_data_2.astype(np.float)
> > could not convert string to float: " "81
> >
> > As you can see, the string "\t"81 is causing the error.
> > It seems to be due to char "\t".
>
> how about (strip(my_data_2).astype(np.float))
>
> I haven't used numpy, but if your theory is correct, this will clean
> up the string
>
oops, I think I was careless at looking at your data. so this doesn't
seem like such a good idea
> --
> Joel Goldstick
> http://joelgoldstick.com/blog
> http://cc-baseballstats.info/stats/birthdays
--
Joel Goldstick
http://joelgoldstick.com/blog
http://cc-baseballstats.info/stats/birthdays
Re: issue in handling CSV data
On 2019-09-08 01:19, Sharan Basappa wrote:
> I am trying to read a log file that is in CSV format.
>
> [snip]
>
> This is the error:
> my_data_3 = my_data_2.astype(np.float)
> could not convert string to float: " "81
>
> As you can see, the string "\t"81 is causing the error.
> It seems to be due to char "\t".
Are you sure it's CSV (Comma-Separated Value) and not TSV (Tab-Separated
Value)?
Also the values look like hexadecimal to me. I think that
.astype(np.float) assumes that the values are decimal.
I'd probably start by reading them using the csv module, convert the
values to decimal, and then pass them on to numpy.
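A sketch of that approach, assuming (as the printed array suggests) that each field is a two-digit hex value wrapped in stray quote and tab characters; the sample rows are copied from the output above:

```python
# Two rows copied from the printed my_data_2 array.
raw = [['"\t"81', '"\t5c'],
       ['"\t"04', '"\t11']]

# Drop the stray quote characters, strip the tab, then parse base-16.
cleaned = [[int(cell.replace('"', '').strip(), 16) for cell in row]
           for row in raw]
print(cleaned)  # [[129, 92], [4, 17]]
```

Once the values are plain ints, they can be handed to numpy (e.g. np.array(cleaned)) without any astype trouble.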
Is it 'fine' to instantiate a widget without parent parameter?
I know it is valid; according to the Tkinter source, every widget constructor has a 'master=None' default. What happens on doing this? In what circumstances do we do it this way? And will it cause any trouble?

--Jach
Re: issue in handling CSV data
On Saturday, 7 September 2019 21:18:11 UTC-4, MRAB wrote:
> On 2019-09-08 01:19, Sharan Basappa wrote:
> > [snip]
> >
> > This is the error:
> > my_data_3 = my_data_2.astype(np.float)
> > could not convert string to float: " "81
> >
> > As you can see, the string "\t"81 is causing the error.
> > It seems to be due to char "\t".
> >
> Are you sure it's CSV (Comma-Separated Value) and not TSV (Tab-Separated
> Value)?
>
> Also the values look like hexadecimal to me. I think that
> .astype(np.float) assumes that the values are decimal.
>
> I'd probably start by reading them using the csv module, convert the
> values to decimal, and then pass them on to numpy.
Yes, it is CSV. The commas are gone once csv.reader has processed the file,
but the tabs still seem to be there, which seems to be causing the issue.
Thanks for your response