Re: How to create a python extension module from a shared library?

2018-01-03 Thread Etienne Robillard

Hi James,

I would love to write in C but I much prefer coding in Python :)

I was thinking that I could use Cython to dlopen the shared library 
dynamically with ctypes. No need to compile anything.
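A minimal sketch of that ctypes approach — here libm is used purely as a stand-in for whatever shared library (e.g. libuwsgi) you would actually wrap; no compilation step is involved:

```python
# Hedged sketch: wrapping an existing shared library with ctypes.
# libm is only a placeholder for the real library you want to dlopen.
import ctypes
import ctypes.util

# Locate the C math library (may return None on platforms where the
# math functions live in the C runtime itself, e.g. Windows).
libm_path = ctypes.util.find_library("m")
libm = ctypes.CDLL(libm_path)

# Declare the C signature so ctypes converts arguments correctly.
libm.cos.argtypes = [ctypes.c_double]
libm.cos.restype = ctypes.c_double

result = libm.cos(0.0)
print(result)  # 1.0
```

The same pattern applies to any exported function, as long as you declare its argument and return types by hand.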



Best regards,

Etienne


On 2018-01-03 05:30, James Chapman wrote:
> In my opinion, just write your extension in C following the
> traditional extension development guidelines unless you plan to
> actively maintain and contribute your changes in the relevant projects
> (CFFI and uWSGI).
>
> Assuming you get this to work, you're going to have to support it and
> bug fix it. Something that initially will not be too difficult because
> it will all be fresh in your head, but 6 months down the line when you
> want to change something or discover a bug it's going to be very
> difficult. Will you be able to attach a debugger and step through the
> code?
>
> I've been down the route of trying to do something clever and even
> succeeded. I then later regretted it, because what seemed like a good
> idea at the time turned out to be a PITA to support. Go with what's
> supported, go with what's documented, don't modify core components if
> you don't absolutely have to, because you'll have to modify those
> components with each update and ultimately you just end up generating
> work for yourself.
>
> James
>
> On 2 January 2018 at 21:21, Etienne Robillard wrote:
>
>> Hi James,
>>
>> Part of the problem is that the CFFI and uWSGI developers
>> aren't interested in supporting this. I need to modify CFFI to
>> support preprocessing C headers with clang.cindex myself.
>>
>> I also need to make sure it's possible to attach my Python script
>> to the master uWSGI process to dispatch FIFO commands.
>>
>> Clang is needed because CFFI doesn't support preprocessing C
>> headers with #define or #include directives.
>>
>> Best regards,
>>
>> Etienne





--
Etienne Robillard
[email protected]
https://www.isotopesoftware.ca/

--
https://mail.python.org/mailman/listinfo/python-list


Re: How to create a python extension module from a shared library?

2018-01-03 Thread James Chapman
In my opinion, just write your extension in C following the traditional
extension development guidelines unless you plan to actively maintain and
contribute your changes in the relevant projects (CFFI and uWSGI).

Assuming you get this to work, you're going to have to support it and bug
fix it. Something that initially will not be too difficult because it will
all be fresh in your head, but 6 months down the line when you want to
change something or discover a bug it's going to be very difficult. Will
you be able to attach a debugger and step through the code?

I've been down the route of trying to do something clever and even
succeeded. I then later regretted it, because what seemed like a good idea
at the time turned out to be a PITA to support. Go with what's supported,
go with what's documented, don't modify core components if you don't
absolutely have to, because you'll have to modify those components with
each update and ultimately you just end up generating work for yourself.


James



On 2 January 2018 at 21:21, Etienne Robillard wrote:

> Hi James,
>
> Part of the problem is that the CFFI and uWSGI developers aren't
> interested in supporting this. I need to modify CFFI to support
> preprocessing C headers with clang.cindex myself.
>
> I also need to make sure it's possible to attach my Python script to the
> master uWSGI process to dispatch FIFO commands.
>
> Clang is needed because CFFI doesn't support preprocessing C headers with
> #define or #include directives.
>
> Best regards,
>
> Etienne
>
>
-- 
https://mail.python.org/mailman/listinfo/python-list


7z archive reader akin to zipfile?

2018-01-03 Thread Skip Montanaro
The zipfile module is kind of cool because you can access elements of
the archive without explicitly uncompressing the entire archive and
writing the structure to disk. I've got some 7z archives I'd like to
treat the same way (read specific elements without first extracting
the entire tree to disk). I see the pylzma module for compressing and
uncompressing files, but nothing slightly higher level. Does something
like that exist?
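For comparison, the zipfile pattern being described — reading one member of an archive in place, with no temp files and no full extraction — looks like this (the archive here is built in memory purely for illustration):

```python
# The zipfile pattern in question: access a single archive member
# without extracting the whole tree to disk. A 7z analogue of this
# API is what's being asked for.
import io
import zipfile

# Build a small zip in memory to stand in for a real archive on disk.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("spam/0001.txt", "first message")
    zf.writestr("spam/0002.txt", "second message")

# Read one member directly from the archive.
with zipfile.ZipFile(buf) as zf:
    with zf.open("spam/0002.txt") as member:
        data = member.read().decode()

print(data)  # second message
```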

Thx,

Skip
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Progress on the Gilectomy

2018-01-03 Thread harindudilshan95
Why not make the garbage collector check the reference count before freeing
objects? Only C extensions would increment the ref count, while Python code
would just use the garbage collector, leaving the ref count at 0. That way
even the existing C extensions would continue to work.


Regarding Java using all the memory, that's not really true. It has a default
heap size, which may exceed the total memory in a particular environment
(Android).
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: 7z archive reader akin to zipfile?

2018-01-03 Thread Gregory Ewing

Skip Montanaro wrote:

> I've got some 7z archives I'd like to
> treat the same way (read specific elements without first extracting
> the entire tree to disk).


If you're doing this a lot, it might be worth repackaging
your 7z files as zip files.

--
Greg
--
https://mail.python.org/mailman/listinfo/python-list


Re: 7z archive reader akin to zipfile?

2018-01-03 Thread Skip Montanaro
> If you're doing this a lot, it might be worth repackaging your 7z files as
> zip files.


Good point. FWIW, these are the files:

http://untroubled.org/spam

Pretty static once a month or year is closed out...

Skip
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Numpy and Terabyte data

2018-01-03 Thread Albert-Jan Roskam

On Jan 2, 2018 18:27, Rustom Mody wrote:
>
> Someone who works in hadoop asked me:
>
> If our data is in terabytes can we do statistical (ie numpy pandas etc)
> analysis on it?
>
> I said: No (I don't think so at least!) i.e. I expect numpy (pandas etc)
> to not work if the data does not fit in memory
>
> Well sure *python* can handle (streams of) terabyte data I guess
> *numpy* cannot
>
> Is there a more sophisticated answer?
>
> ["Terabyte" is a just a figure of speech for "too large for main memory"]

Have a look at Pyspark and pyspark.ml. Pyspark has its own kind of DataFrame. 
Very, very cool stuff.

Dask DataFrames have been mentioned already.
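Short of a cluster framework, plain pandas can also stream a file in fixed-size chunks, so only one chunk is ever in memory. A hedged sketch (the in-memory CSV merely stands in for a file too large to load):

```python
# Hedged sketch: out-of-core aggregation with pandas' chunked CSV
# reader. The StringIO stands in for a file that won't fit in RAM.
import io
import pandas as pd

csv = io.StringIO("x\n" + "\n".join(str(i) for i in range(10)))

total = 0
for chunk in pd.read_csv(csv, chunksize=4):  # 4 rows at a time
    total += chunk["x"].sum()               # aggregate per chunk

print(total)  # 45
```

Any aggregation that decomposes over chunks (sums, counts, min/max) works this way; order statistics like medians do not, which is where Dask or Spark earn their keep.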

numpy has memmapped arrays: 
https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.memmap.html
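A minimal sketch of the memmap approach: the array lives on disk and pages are faulted in on demand, so it can be far larger than RAM (the file name and shape below are made up for illustration):

```python
# Hedged sketch of numpy's memory-mapped arrays.
import os
import tempfile

import numpy as np

path = os.path.join(tempfile.mkdtemp(), "big.dat")

# Create a disk-backed array; writes go through to the file.
mm = np.memmap(path, dtype="float64", mode="w+", shape=(1_000_000,))
mm[:] = 1.0
mm.flush()

# Reopen read-only and compute on a slice without loading the rest.
ro = np.memmap(path, dtype="float64", mode="r", shape=(1_000_000,))
print(ro[:10].sum())  # 10.0
```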
-- 
https://mail.python.org/mailman/listinfo/python-list


when the new version of XPN py2 newsreader src-tarball hits alt.binaries, the world will hold its breath

2018-01-03 Thread XPN

when the new version of XPN py2 newsreader src-tarball hits
alt.binaries, the world will hold its breath.


A major usability overhaul is ongoing.

The release will arrive, in style, in a usenet binary newsgroup.

Full autoconfigure, no BS asked.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: when the new version of XPN py2 newsreader src-tarball hits alt.binaries, the world will hold its breath

2018-01-03 Thread a
py2 now, gotta fix that one
-- 
https://mail.python.org/mailman/listinfo/python-list


xpn

2018-01-03 Thread [email protected]
need to fix those quirks.
-- 
https://mail.python.org/mailman/listinfo/python-list