[Cython] Fwd: Question about how best require compiler options for C sources

2016-04-08 Thread Erik Bray
Hi all,

I'd like to call attention to an issue I've been looking into for the
past couple days:

https://github.com/cython/cython/pull/360

To summarize the discussion, when building DLLs for Windows, functions
that should be exported by that DLL must be marked
__declspec(dllexport) in their declaration.  However, when using the
same header file in a project that links to that DLL the same function
must be declared __declspec(dllimport).

It's common practice to have a macro for this, whose value is
controlled by whether or not another macro (often called something
like "DLL_EXPORT") is defined.  When compiling the DLL we would pass
-DDLL_EXPORT so that it outputs __declspec(dllexport); otherwise it
outputs __declspec(dllimport).

Cython currently handles this with such a macro called DL_IMPORT which
comes from Python. However, this macro was deprecated some time ago,
and is removed in current Python 3 versions.  So Cython must replace
it with its own.

I'm working on a patch for this--to reduce confusion the macro is
named specifically for the Cython module it's associated with.  For
example, for a Cython module named "foo.bar" there are two macros
DLL_EXPORT__foo__bar and EXPORT__foo__bar.  If the latter is defined
then the former outputs dllexport, otherwise it outputs dllimport.
I've attached the patch in progress.
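As a rough sketch (not the exact text of the patch), the generated
header for a module named "foo.bar" could contain something along
these lines, with "hello" standing in for an exported function:

    #if defined(_WIN32) || defined(__CYGWIN__)
      #if defined(EXPORT__foo__bar)   /* defined while compiling foo.bar itself */
        #define DLL_EXPORT__foo__bar __declspec(dllexport)
      #else                           /* code that merely links against foo.bar */
        #define DLL_EXPORT__foo__bar __declspec(dllimport)
      #endif
    #else
      #define DLL_EXPORT__foo__bar    /* no-op on non-Windows platforms */
    #endif

    DLL_EXPORT__foo__bar int hello(void);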

I'm open to comment on this, but where I'm stuck now is that in order
for the "foo.bar" module to be compiled correctly it needs
EXPORT__foo__bar to be defined at compile time.  And it's not clear to
me what the best way is for the Cython compiler to pass down options
that are passed to the C/C++ compiler.  In general the best way would
be to attach this to the define_macros attribute of the associated
distutils/setuptools Extension object and let the distutils compiler
class generate the right compiler options.  But it's not clear to me
from Cython's internals where the best place to do that would be.

Thanks,
Erik


___
cython-devel mailing list
cython-devel@python.org
https://mail.python.org/mailman/listinfo/cython-devel


Re: [Cython] Fwd: Question about how best require compiler options for C sources

2016-04-11 Thread Erik Bray
On Fri, Apr 8, 2016 at 5:49 PM, Nathaniel Smith  wrote:
> Can you give a tiny concrete example? My questions are basic enough that I
> feel like I'm missing something fundamental :-)

Yes, I think you might be missing something, but I'm not sure exactly
where.  In the issue I provided a tiny example which you can see here:

https://gist.github.com/embray/12a67edb82b213217e31f408007898e6

The C code generated by this example currently does not compile on
Windows, because of how Cython uses DL_IMPORT incorrectly.  Regardless
of what it *does* do, Cython should not be using the DL_IMPORT macro
at all actually since that no longer even exists in Python.

> My first question is why you even need this, since AFAIK there are no cases
> where it is correct to have a cython module dllexporting symbols that appear
> in header files. This is what cimport is for, right?

I don't think this has anything to do with cimport.  Could you explain
what you mean?

> My second question is why you would want to do this via the command line,
> when compiling the dll means that you are compiling some cython-generated
> .c, which means that you can put the #define directly in the source code,
> no?

Not necessarily. As you can see in my example the file fooutil.c is
hand-written C code that was not generated by Cython, but which uses a
function in the Cython-generated code.  It includes "foo.h".  In
principle you're right--the hand-written C code could set the proper
#defines but it would have to do so *before* including "foo.h".  It's
not very obvious that this would be needed.  Most other code I've seen
that addresses this issue--including CPython itself--does so by
passing an appropriate define to the compiler via the command line,
and that seems the clearest approach to me.
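Concretely, the two options look something like this, using the
hypothetical per-module macro name for a module named simply "foo":

    /* fooutil.c -- hand-written C that calls into the Cython module "foo" */

    /* Option 1: define the flag in the source itself; it only works if it
     * comes *before* the include, which is easy to get wrong. */
    #define EXPORT__foo 1
    #include "foo.h"

    /* Option 2: leave the source untouched and pass the define on the
     * command line instead, e.g. "cc -DEXPORT__foo=1 -c fooutil.c". */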

Best,
Erik

> On Apr 8, 2016 5:35 AM, "Erik Bray"  wrote:
>>
>> Hi all,
>>
>> I'd like to call attention to an issue I've been looking into for the
>> past couple days:
>>
>> https://github.com/cython/cython/pull/360
>>
>> To summarize the discussion, when building DLLs for Windows, functions
>> that should be exported by that DLL must be marked
>> __declspec(dllexport) in their declaration.  However, when using the
>> same header file in a project that links to that DLL the same function
>> must be declared __declspec(dllimport).
>>
>> It's common practice to have a macro for this, whose value is
>> controlled by whether or not another macro (often called something
>> like "DLL_EXPORT") is defined.  When compiling the DLL we would pass
>> -DDLL_EXPORT so that it outputs __declspec(dllexport); otherwise it
>> outputs __declspec(dllimport).
>>
>> Cython currently handles this with such a macro called DL_IMPORT which
>> comes from Python. However, this macro was deprecated some time ago,
>> and is removed in current Python 3 versions.  So Cython must replace
>> it with its own.
>>
>> I'm working on a patch for this--to reduce confusion the macro is
>> named specifically for the Cython module it's associated with.  For
>> example, for a Cython module named "foo.bar" there are two macros
>> DLL_EXPORT__foo__bar and EXPORT__foo__bar.  If the latter is defined
>> then the former outputs dllexport, otherwise it outputs dllimport.
>> I've attached the patch in progress.
>>
>> I'm open to comment on this, but where I'm stuck now is that in order
>> for the "foo.bar" module to be compiled correctly it needs
>> EXPORT__foo__bar to be defined at compile time.  And it's not clear to
>> me what the best way is for the Cython compiler to pass down options
>> that are passed to the C/C++ compiler.  In general the best way would
>> be to attach this to the define_macros attribute of the associated
>> distutils/setuptools Extension object and let the distutils compiler
>> class generate the right compiler options.  But it's not clear to me
>> from Cython's internals where the best place to do that would be.
>>
>> Thanks,
>> Erik
>>


Re: [Cython] Fwd: Question about how best require compiler options for C sources

2016-04-11 Thread Erik Bray
On Mon, Apr 11, 2016 at 2:51 PM, Nathaniel Smith  wrote:
> Now, back to your example: Here the caller and callee are both compiled into
> the same shared library, so you don't want dllexport/dllimport at all, you
> just want a shared-library-internal symbol, which as we see is much easier.

Sorry, I'll respond more to your (helpfully detailed and precise)
message in a second.  But something I wanted to point out is that my
example is incomplete and I should have arranged a more complete
example.  In this case I really do want the symbol "hello" to be
exported by the DLL, as well as be understood between translation
units making up the same library.  A more complete example would have
shown a separate library which links with "foo" and uses the "hello"
function in it.

Yes, it's arguable that exporting anything more than the initmodule
function from a Python extension module is not best practice, but the
possibility should not be ruled out either.

So I think later on you hit correctly on the deeper problem, which is
that Cython currently doesn't have a great way to distinguish between
intra-library visibility and *inter*-library visibility.

And if both are needed then some DL_IMPORT-like macro is needed that
sets the visibility for a symbol correctly depending on the context in
which it's being used.  (And yes, this is not a problem on *nix, but
it is on Windows due to the way __declspec(dllimport) causes name
mangling.)

Thanks,
Erik


Re: [Cython] Fwd: Question about how best require compiler options for C sources

2016-04-11 Thread Erik Bray
On Mon, Apr 11, 2016 at 2:51 PM, Nathaniel Smith  wrote:
> On Apr 11, 2016 04:18, "Erik Bray"  wrote:
>>
>> On Fri, Apr 8, 2016 at 5:49 PM, Nathaniel Smith  wrote:
>> > Can you give a tiny concrete example? My questions are basic enough that
>> > I
>> > feel like I'm missing something fundamental :-)
>>
>> Yes, I think you might be missing something, but I'm not sure exactly
>> where.  In the issue I provided a tiny example which you can see here:
>>
>> https://gist.github.com/embray/12a67edb82b213217e31f408007898e6
>>
>> The C code generated by this example currently does not compile on
>> Windows, because of how Cython uses DL_IMPORT incorrectly.  Regardless
>> of what it *does* do, Cython should not be using the DL_IMPORT macro
>> at all actually since that no longer even exists in Python.
>
> Sure.
>
> For that example, the correct thing to do is to *not* export the function.

So as I wrote in my previous message, my example was incomplete.  But
indeed if the intent was *only* to share the declaration between TUs
of the same library then I agree the *most* correct thing would be to
not use dllexport. Unfortunately there isn't currently a way in Cython
to make this distinction.

> Backing up, to make sure we're on the same page:

Yep, I agree with your analysis that follows, and it's helpful to have
all the cases laid out in one place, so thanks for that!

> There are three levels of
> symbol visibility in C: file-internal, shared library internal (different .c
> files that make up the library can see it, but users of the shared library
> can't see it), and shared library exported (everyone can see it; can also
> carry other consequences, e.g. on Linux then internal calls will become
> noticeably slower, and it becomes possible for weird symbol interposition
> issues to occur). So the rule of thumb is to make everything as private as
> you can get away with.
>
> Making this more interesting:
> - vanilla C only includes 2 ways to mark symbol visibility, which is not
> enough to make a 3-way distinction. Hence the need for an extra attribute
> thingummy.
> - everyone agrees that symbols marked 'static' should be file-internal, but
> different platforms disagree about what should happen if the extra attribute
> thingummy is missing.
>
> So on Windows, the convention is:
> 'static' -> file internal
> no marking -> shared library internal
> 'dllexport' -> public
>
> And on Linux it's:
> 'static' -> file internal
> 'visibility (hidden)' -> shared library internal
> no marking -> public
>
> It's generally agreed that Linux got this wrong and that you should always
> use '-fvisibility=hidden' to switch it to the windows style, but cython
> doesn't control compiler options and thus should probably generate code that
> works correctly regardless of such compiler settings. Fortunately, Linux
> does provide some markings to explicitly make things public: you can mark a
> symbol 'visibility (default)' (which means public), or you can use the
> dllexport syntax, just like Windows, because gcc is helpful like that.
>
> OTOH, windows is annoying because of this dllimport thing that started this
> whole thread: on other systems just marking symbols as extern is enough to
> handle both shared-object-internal and shared-library-exported symbols, and
> the linker will sort it out. On Windows, you have to explicitly distinguish
> between these. (And annoyingly, if you accidentally leave out the dllimport
> marking on functions then it will use some fallback hack that works but
> silently degrades performance; on other symbols it just doesn't work, and
> ditto for if you use it when it isn't needed.)

Yep. Agreed with all of the above.
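To put the three levels and the per-platform markings side by side (a
simplified sketch; the macro names are made up for illustration):

    static int file_internal(void);        /* 'static' works the same everywhere */

    #if defined(_WIN32) || defined(__CYGWIN__)
      #define LIB_INTERNAL                 /* unmarked == library-internal on Windows */
      #define LIB_EXPORT __declspec(dllexport)
    #else
      #define LIB_INTERNAL __attribute__((visibility("hidden")))
      #define LIB_EXPORT   __attribute__((visibility("default")))
    #endif

    LIB_INTERNAL int library_internal(void);  /* shared by the library's own .c files */
    LIB_EXPORT   int library_exported(void);  /* part of the shared library's public ABI */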

> So final conclusion: for non-static symbols, cython should first decide
> whether they are supposed to be shared-library-internal or actually exported
> from the shared library.
>
> For shared-library-internal symbols: their definition should be marked
> 'visibility(hidden)' on Linux, and unmarked on Windows. This is easy using
> some preprocessor gunk. (Or maybe simplest is: marked that everywhere except
> if using msvc, because I think everyone else will understand 'visibility
> (hidden)' even if it's a no op.) Their declaration in the header file should
> just be 'extern' everywhere.
>
> For shared-library-exported symbols: I am dubious about whether cython
> should even support these at all. But if it does, then the definitions
> should be marked 'dllexport' (no macro trickery needed, because everyone
> under

Re: [Cython] Fwd: Question about how best require compiler options for C sources

2016-04-12 Thread Erik Bray
On Mon, Apr 11, 2016 at 7:38 PM, Jeroen Demeyer  wrote:
> On 2016-04-11 15:23, Erik Bray wrote:
>>
>> In this case I really do want the symbol "hello" to be
>> exported by the DLL, as well as be understood between translation
>> units making up the same library.
>
>
> Are you really sure that you want this? I doubt that this is supported on OS
> X: on OS X, there are two kinds of files: shared libraries (with .dylib
> extension) and loadable modules or bundles (usually with .so extension but
> officially with .bundle extension).
>
> C extensions for Python are compiled as a loadable module and I don't think
> you can link against these. See
> http://stackoverflow.com/questions/2339679/what-are-the-differences-between-so-and-dylib-on-osx
>
> Some details in the above might be wrong, but I remember running into this
> issue in some early version of cysignals.

OSX issues aside, I was under the impression that this is needed for
cysignals in particular.  If I'm wrong on that then this simplifies
matters a good deal, and per njs Cython could just drop the use of
DL_IMPORT and we'd be fine.


Re: [Cython] Fwd: Question about how best require compiler options for C sources

2016-04-12 Thread Erik Bray
On Mon, Apr 11, 2016 at 8:36 PM, Ian Henriksen
 wrote:
> On Mon, Apr 11, 2016 at 11:50 AM Ian Henriksen
>  wrote:
>>
>> To answer the original question about define macros, it appears that the
>> canonical
>> way to pass preprocessor defines through distutils is to use the
>> define_macros
>> keyword when constructing your Extension class. You should also be able to
>> do
>> this within a Cython source file by including a directive like:
>>
>> # distutils: define_macros = MY_DEFINE, MY_DEFINE_2=2
>>
>> Unfortunately, it looks like there's a bug in that that's making it so
>> that these
>> macros are undef'ed rather than being defined, so, for now, just pass the
>> appropriate flags to your Extension object.
>>
>
> Small update on this, it looks like the issue with undef/define only applies
> when a
> define is specified this way without a value, so I either haven't gotten the
> syntax quite
> right, or that's not supported yet. Specifying an actual value for the macro
> works fine.

There is an issue specifically about this that I wanted to bring up,
though it's a bit tangential to my original issue (but maybe not
entirely?)

I know about putting "# distutils: " comments in a Cython source. But
I felt that requiring this in order to get dllexport/dllimport working
correctly on Windows is too much of a burden for a developer (who
normally might not be thinking about this).

The content of these comments *does* get written out to the generated
C sources under a "distutils" property in the JSON metadata at the top
of the file.  So I thought one way I might be able to influence the
compiler flags would be to modify the code that generates the JSON
metadata to include a `"distutils": {"define_macros": [ ... ]}` entry
in that metadata.  However, this was wrong because Cython itself never
actually does anything with that metadata, and it instead reads the
distutils directives (a second time) directly out of the Cython
sources during cythonize() in order to update the Extension
attributes.  I'm thinking it would make more sense if the distutils
options were read from the C sources, since those are what will
*actually* be handled by distutils, even if in the majority of cases
the JSON metadata is being generated directly from the Cython sources.
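For reference, the block in question looks roughly like this at the top
of a generated C file (the exact keys and layout are approximate):

    /* BEGIN: Cython Metadata
    {
        "distutils": {
            "define_macros": [["MY_DEFINE_2", "2"]],
            "depends": []
        }
    }
    END: Cython Metadata */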

Changing this would have also provided a solution to my original
problem, even if we've agreed that it's mostly moot.  That said, I
think it makes more sense for cythonize() to read the distutils
options from the C source instead of the Cython source, though in
practice I don't know if it's a worthwhile change or not.

Thanks,
Erik


Re: [Cython] Fwd: Question about how best require compiler options for C sources

2016-04-12 Thread Erik Bray
On Mon, Apr 11, 2016 at 7:49 PM, Ian Henriksen
 wrote:
> That aside, I agree with Nathaniel that exporting public declarations as a
> part of the
> shared object interface was a design mistake. That said, however, using an
> api
> declaration lets you get equivalent results without exposing anything as a
> part of
> the shared object API. Here's how this all works:

I don't know if I would outright call it a "mistake", but in
retrospect I think we're all in agreement now that maybe the behavior
of "public" should be curtailed?  Or should we still try to keep its
current behavior and "fix" it as I've been trying to do?

> public declarations: export things to C/C++ through the shared object
> interface.
> Provide a header that exports this interface.
> api declarations: export things to C/C++ through capsule objects. Provide a
> header
> for the Python module that exports that interface.
> cimports: Use capsule objects and parsing of pxd files to share things like
> external
> declarations, header includes, inline Cython functions, and Cython functions
> exported by modules between Cython modules.
>
> The public and api use cases are essentially the same most of the time, but
> api
> declarations use capsules rather than the OS's linker.
>
> There are still some trade-offs between public and api functions.
> Technically, the
> api functions require that the initialization routine exported in the api
> header be
> called for each translation unit that uses them. The public api just
> requires that the
> module already be initialized. In cases where no Python functionality is
> used in a
> public function, you may be able to get away with using the function without
> initializing the module, though I really wouldn't recommend that.

Yes, this is the main problem I have with the "api" declaration.
Don't get me wrong, it's a very clever approach and I think for a lot
of cases it's the right thing to do.  But it absolutely requires and
assumes initialization of the module.  In the general case that's a
reasonable assumption, but in theory one could have some plain C code
that does not use any of the Python API, and that one can be sure
doesn't require anything initialized by the module init function.

> There are some more subtle issues here though. The reason api functions need
> to
> be initialized on a per-translation unit basis is that things exported as
> api
> declarations are exported as translation-unit-local (static) function
> pointers. They
> aren't shared by the different translation units within a module built from
> multiple
> source files. I think that's a mistake. It'd be ideal if we could have api
> interfaces (or
> something like them) provide things with shared object local visibility
> rather than
> translation unit local visibility. This would require that the API headers
> have more
> carefully structured ifdef directives so that a macro could be set in a
> given
> translation unit to designate when to emit the actual declarations for the
> needed
> pointers rather than just forward declaring them. It would also require that
> the main
> generated c/cpp file define the pointers it uses as shared-object-local
> rather rather
> than static.

Yep.  This is a problem cysignals is having if I recall correctly.  It
would be good to be able to have it both ways.
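Roughly, the difference looks like this (a sketch only, not the actual
generated api header; the macro name is hypothetical):

    /* Today: each translation unit that includes the api header gets its
     * own private pointer, so each one must call the import routine. */
    static int (*foo_hello)(void);

    /* The alternative: one pointer with shared-object-local visibility,
     * defined in a single translation unit and declared everywhere else. */
    #ifdef FOO_API_DEFINE_POINTERS
    __attribute__((visibility("hidden"))) int (*foo_hello_shared)(void);
    #else
    extern int (*foo_hello_shared)(void);
    #endif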

> In dynd-python we currently solve this problem by defining shared object
> local
> wrappers for the api exported function pointers and then using those
> instead, but
> I'm not a huge fan of that approach. It works well, but results in another
> unnecessary layer of indirection through the source files to connect the C++
> code
> back to its Python bindings.

Yes, I think some kind of context-dependent declarations using
carefully crafted preprocessor directives, or just using separate
header files entirely for the two contexts, would be best.

> With regards to the dllexporting/dllimporting of things: given that public
> declarations
> are already designed to export things through the shared object interface,
> we may
> as well fix the current setup to export the right things. It's a bad design
> that
> probably ought to be deprecated or at least documented better so people know
> not
> to use it unless their case actually requires sidestepping best practices.
> On the
> other hand, it's also a supported interface, so there's value in "fixing"
> it.
>
> I think the best way to do that is the following:
> - mark public symbols as dllimport unless a given (module specific)
> preprocessor
> define is set.
> - people using the public header outside of the module exporting the symbols
> should not have to set the define at all.
> - people using the public header to compile other source files that are
> linked in to
> the same Python module should set the preprocessor flag for that module.

That's the approach I was trying to take originally, yes.  But I think
it's a burden to require developers to set that preprocessor
flag--instead it 

Re: [Cython] Fwd: Question about how best require compiler options for C sources

2016-04-12 Thread Erik Bray
On Tue, Apr 12, 2016 at 10:27 AM, Jeroen Demeyer  wrote:
> On 2016-04-12 10:16, Erik Bray wrote:
>>
>> That said, I
>> think it makes more sense for cythonize() to read the distutils
>> options from the C source instead of the Cython source, though in
>> practice I don't know if it's a worthwhile change or not.
>
>
> I don't quite get what you mean. The C file is only read by distutils (which
> compiles the .c files to .so), not by Cython (which compiles the .pyx files
> to .c).

Right.  What I'm saying is that when you call cythonize() the output
is a list of Extension objects, which include instructions for how
distutils should compile the C sources (in this case, the
define_macros attribute in particular).  cythonize(), at some step
along the way, reads "# distutils:" directives directly out of the
.pyx sources, but then applies them to the Extension object
representing the C sources to be compiled.

What I'm suggesting is that there should be a clearer separation--the
Cython compiler should generate the C sources first--including
outputting their JSON metadata which may include some distutils
directives, but otherwise remain completely agnostic as to how the C
sources will ultimately be compiled (currently this *is* true insofar
as how Cython's compile() works--the confusion is in cythonize()).

After the Cython sources have been compiled to C sources, the
cythonize() function should then inspect the resulting C sources
(which are given explicitly in CompilerResult objects, which are
currently ignored by cythonize()) and use the metadata in the C
sources to tell distutils what to do with them.  The Cython sources
shouldn't be consulted any further at this point.

It's a subtle difference which currently wouldn't affect the end
result.  The reason I thought of this is that I wanted to be able to
set "distutils" directives directly in the C sources without them
needing to be manually declared in the Cython sources.  But even
without that use case it's a clearer separation of concerns.

Erik


Re: [Cython] cdef public declarations

2016-04-12 Thread Erik Bray
On Tue, Apr 12, 2016 at 10:36 AM, Jeroen Demeyer  wrote:
> (this thread is related to the thread "Question about how best require
> compiler options for C sources")
>
> I have a question for Cython users and developers: are you sure that "cdef
> public" actually works as documented at
> http://docs.cython.org/src/userguide/external_C_code.html#public-declarations
>
> I couldn't find it tested in the Cython testsuite, although it's not easy to
> know what to grep for. I hightly doubt that it works on OS X because of the
> difference between shared libraries (with .dylib extension) and loadable
> modules/bundles (with .so extension).

Do you have a reference about this distinction on hand?  I've seen
.dylib files on OSX before but never understood the distinction.  I
tend to irrationally avoid learning anything about OSX for as long as
possible, the way most OSS people do with Windows :)

Seems like a good question though.


Re: [Cython] Cannot cythonize subclasses of setuptools.extension._Extension

2016-04-14 Thread Erik Bray
On Wed, Apr 13, 2016 at 9:35 PM, Manuel Nuno Melo
 wrote:
> Hello devs,
>
> I'm developing the setup.py for a scientific package, MDAnalysis (see PR
> #799). We depend on distutils and setuptools. Namely, we use
> setuptools.extension.Extension class for our extensions.
>
> Some older versions of setuptools (<18.0) do filename cythonization
> themselves upon initialization of the Extension object.
>
> Because we want to control name cythonization ourselves I try to directly
> use distutils.extension.Extension, which has none of setuptools'
> cythonization. However, this doesn't work because setuptools patches
> distutils, so that distutils.extension.Extension effectively becomes
> setuptools.extension.Extension.

I'm wondering what it is specifically you need to do in your
subclass--might it still be possible to do with a subclass of the
setuptools Extension?  Not saying I disagree with the overall idea,
but I also wonder if there isn't a better way.

Erik


Re: [Cython] Cannot cythonize subclasses of setuptools.extension._Extension

2016-04-16 Thread Erik Bray
On Apr 14, 2016 21:07, "Manuel Nuno Melo" 
wrote:
>
> Our need to control cythonization comes from the fact that we implement
cython as a lazy and optional dependency. Lazy in the sense that we delay
as much as possible cythonization so that setuptools or pip have time to
install cython, if needed. Optional because we distribute both .pyx and
cythonized .c files, and decide on which to use based on user flags.
>
> We, therefore, need an Extension class that only cythonizes if we decide
to.
>
> Thanks for the feedback,
> Manel

I have a solution for exactly this in astropy_helpers which makes it
possible, for example, to pull in Cython via setup_requires, but only if
it's needed.

Remind me on Monday and I can point out how it works.

> On Apr 14, 2016 8:17 PM, "Matthew Brett"  wrote:
> >
> > On Thu, Apr 14, 2016 at 6:08 AM, Erik Bray 
wrote:
> > > On Wed, Apr 13, 2016 at 9:35 PM, Manuel Nuno Melo
> > >  wrote:
> > >> Hello devs,
> > >>
> > >> I'm developing the setup.py for a scientific package, MDAnalysis
(see PR
> > >> #799). We depend on distutils and setuptools. Namely, we use
> > >> setuptools.extension.Extension class for our extensions.
> > >>
> > >> Some older versions of setuptools (<18.0) do filename cythonization
> > >> themselves upon initialization of the Extension object.
> > >>
> > >> Because we want to control name cythonization ourselves I try to
directly
> > >> use distutils.extension.Extension, which has none of setuptools'
> > >> cythonization. However, this doesn't work because setuptools patches
> > >> distutils, so that distutils.extension.Extension effectively becomes
> > >> setuptools.extension.Extension.
> > >
> > > I'm wondering what it is specifically you need to do in your
> > > subclass--might it still be possible to do with a subclass of the
> > > setuptools Extension?  Not saying I disagree with the overall idea,
> > > but I also wonder if there isn't a better way.
> >
> > I know this is a terrible and ugly hack, but the projects I work in
> > have a 'fake_pyrex' directory, that fools setuptools into thinking
> > that 'pyrex' is installed, and therefore prevents it from doing the
> > .pyx -> .c filename conversions in the extension:
> >
> > https://github.com/regreg/regreg/blob/master/setup.py#L33
> >
> > Cheers,
> >
> > Matthew


Re: [Cython] Cannot cythonize subclasses of setuptools.extension._Extension

2016-04-18 Thread Erik Bray
On Sat, Apr 16, 2016 at 1:29 PM, Manuel Nuno Melo
 wrote:
> Hi Erik,
> Please post your solution; I'm curious to see it.

Will do in a bit. I need to see if I can distill it somewhat from its
original context so that it can be better understood.

> Currently, we're also using setup_requires but are moving away from it
> because of two main issues:
>
> 1- there's no flexibility to hook into it via setuptools.Distribution
> subclassing (unless you rewrite the entire __init__);

I don't really think that's a big issue but maybe you've encountered
some reason that it is.
The worst thing about it is the general chicken-egg problem that you
need to call setup() to get your setup_requires, but you may need some
packages pulled in from setup_requires in order to do anything useful
with setup().

d2to1 [1] solves this problem quite well, but unfortunately
development on it is stalled for the time being.  Another alternative
that can work quite well is some version of Daniel Holth's
"setup-requires" hack [2].

> 2- (and more serious) setuptools installs setup_requires dependencies in the
> local directory, not as a system-wide install.

That's a feature, not a bug.  Build requirements for a package might
be in conflict with the versions of those same requirements you have
already installed.  It also prevents things from being installed that
are *not* required at runtime.  It makes it easier for projects to pin
a little more strictly to their known-working build requirements,
without the constraints imposed by supporting a wider range of runtime
requirements.

> Even worse, it then puts a
> path link to the current directory under the system-wide easy_install.pth.
> This means that if you install from the source directory via 'sudo
> ./setup.py install', you get a Cython egg downloaded into that directory,
> and its path added to sys.path (!!). Needless to say this can break in many
> nasty ways...

Huh??  I'm not sure we're talking about the same thing now.  The local
.eggs cache directory created for setup_requires downloads most
certainly does not get added to easy_install.pth.  Are you sure you're
not thinking of `setup.py develop`?

[1] https://pypi.python.org/pypi/d2to1
[2] https://bitbucket.org/dholth/setup-requires/src

> On Sat, Apr 16, 2016 at 1:10 PM, Erik Bray  wrote:
>>
>> On Apr 14, 2016 21:07, "Manuel Nuno Melo" 
>> wrote:
>> >
>> > Our need to control cythonization comes from the fact that we implement
>> > cython as a lazy and optional dependency. Lazy in the sense that we delay 
>> > as
>> > much as possible cythonization so that setuptools or pip have time to
>> > install cython, if needed. Optional because we distribute both .pyx and
>> > cythonized .c files, and decide on which to use based on user flags.
>> >
>> > We, therefore, need an Extension class that only cythonizes if we decide
>> > to.
>> >
>> > Thanks for the feedback,
>> > Manel
>>
>> I have a solution for exactly this in astropy_helpers which makes it
>> possible, for example, to pull in Cython via setup_requires, but only if
>> it's needed.
>>
>> Remind me on Monday and I can point out how it works.
>>
>> > On Apr 14, 2016 8:17 PM, "Matthew Brett" 
>> > wrote:
>> > >
>> > > On Thu, Apr 14, 2016 at 6:08 AM, Erik Bray 
>> > > wrote:
>> > > > On Wed, Apr 13, 2016 at 9:35 PM, Manuel Nuno Melo
>> > > >  wrote:
>> > > >> Hello devs,
>> > > >>
>> > > >> I'm developing the setup.py for a scientific package, MDAnalysis
>> > > >> (see PR
>> > > >> #799). We depend on distutils and setuptools. Namely, we use
>> > > >> setuptools.extension.Extension class for our extensions.
>> > > >>
>> > > >> Some older versions of setuptools (<18.0) do filename cythonization
>> > > >> themselves upon initialization of the Extension object.
>> > > >>
>> > > >> Because we want to control name cythonization ourselves I try to
>> > > >> directly
>> > > >> use distutils.extension.Extension, which has none of setuptools'
>> > > >> cythonization. However, this doesn't work because setuptools
>> > > >> patches
>> > > >> distutils, so that distutils.extension.Extension effectively
>> > > >> becomes
>> > > >> setuptools.extension.Extension.
>> > > >
>> > > > I'm wondering what it is specificall

Re: [Cython] Cannot cythonize subclasses of setuptools.extension._Extension

2016-04-19 Thread Erik Bray
On Mon, Apr 18, 2016 at 6:56 PM, Manuel Nuno Melo
 wrote:
> Ah, sorry Erik, you're absolutely right. I mixed my results a bit and must
> elaborate:
>
> 'setup_requires' on its own will indeed not generate the behavior I
> described.
>
> However, if you define the same dependency under 'setup_requires' AND
> 'install_requires', then you get the mess I mentioned. Essentially, when you
> reach 'install_requires' setuptools decides that the egg it has under .eggs
> suffices, and just links to it.
>
> This isn't an issue when depending on cython because it's really just a
> setup-time dependency. However, in our package we depend on numpy at both
> setup- and runtime, and it therefore makes sense to have it under both
> 'requires' flags.
>
> This is why hooking in at the setup_requires step could be a potential
> workaround, if we could get it to do a full dependency install instead of
> local egg. I'm currently finishing up this approach.

Interesting.  This sounds to me like a regression, as I don't think it
used to do that.  This should probably be brought up on distutils-sig
or in the setuptools bug tracker, rather than spending a lot of time
hacking together a workaround or requiring changes specifically in
Cython (though I don't disagree that distutils.Extension should be
allowed in the first place).

IIRC it used to be that if the same package was specified in both
setup_requires and install_requires it would be built/installed twice.
This was recognized as inefficient, and it was suggested that if the
copy installed in the local .eggs directory satisfies an
install_requires requirement, it should be copied from .eggs to the
system site-packages (or whatever
the install target is).  I remember there at least being *discussion*
about that, but not sure what changes were made.

It sounds like in your case it's just adding Cython from your .eggs
into easy-install.pth which I agree is wrong.  If that's the case it
should be fixed in setuptools over all else.  I'll see if I can
reproduce it.  Jason Coombs, who maintains setuptools, is usually quick
to release bug fix versions if he gets a patch.

Erik

> On Mon, Apr 18, 2016 at 11:16 AM, Erik Bray  wrote:
>>
>> On Sat, Apr 16, 2016 at 1:29 PM, Manuel Nuno Melo
>>  wrote:
>> > Hi Erik,
>> > Please post your solution; I'm curious to see it.
>>
>> Will do in a bit. I need to see if I can distill it somewhat from its
>> original context so that it can be better understood.
>>
>> > Currently, we're also using setup_requires but are moving away from it
>> > because of two main issues:
>> >
>> > 1- there's no flexibility to hook into it via setuptools.Distribution
>> > subclassing (unless you rewrite the entire __init__);
>>
>> I don't really think that's a big issue but maybe you've encountered
>> some reason that it is.
>> The worst thing about it is the general chicken-egg problem that you
>> need to call setup() to get your setup_requires, but you may need some
>> packages pulled in from setup_requires in order to do anything useful
>> with setup().
>>
>> d2to1 [1] solves this problem quite well, but unfortunately
>> development on it is stalled for the time being.  Another alternative
>> that can work quite well is some version of Daniel Holth's
>> "setup-requires" hack [2].
>>
>> > 2- (and more serious) setuptools installs setup_requires dependencies in
>> > the
>> > local directory, not as a system-wide install.
>>
>> That's a feature, not a bug.  Build requirements for a package might
>> be in conflict with the versions of those same requirements you have
>> already installed.  It also prevents things from being installed that
>> are *not* required at runtime.  It makes it easier for projects to pin
>> a little more strictly to their known-working build requirements,
>> without the constraints imposed by supporting a wider range of runtime
>> requirements.
>>
>> > Even worse, it then puts a
>> > path link to the current directory under the system-wide
>> > easy_install.pth.
>> > This means that if you install from the source directory via 'sudo
>> > ./setup.py install', you get a Cython egg downloaded into that
>> > directory,
>> > and its path added to sys.path (!!). Needless to say this can break in
>> > many
>> > nasty ways...
>>
>> Huh??  I'm not sure we're talking about the same thing now.  The local
>> .eggs cache directory created for setup_requires downloads most
>> certainly does

[Cython] Minor issue with running tests and -Wlong-long

2016-04-25 Thread Erik Bray
Hello,

As some of you already know, I've been doing some work on Cython on
Cygwin [in the process I'm constantly mixing the two up in speech, but
maybe in writing I'll do better :)].

There are several issues with the tests on Cygwin, and that's one
thing I'll work on.  But a major annoyance I've encountered when
running any tests is a huge number of warnings from gcc such as:

embray@PC-pret-47 ~/src/cython
$ CFLAGS="-O0" ./runtests.py -vv --no-cpp addloop
Python 2.7.10 (default, Jun  1 2015, 18:05:38)
[GCC 4.9.2]

Running tests against Cython 0.24 f68b5bd0fa620d0dc26166bffe5fe42d94068720
Backends: c

runTest (__main__.CythonRunTestCase)
compiling (c) and running addloop ...
=== C/C++ compiler error output: ===
In file included from /usr/include/python2.7/Python.h:58:0,
 from addloop.c:4:
/usr/include/python2.7/pyport.h:69:27: warning: ISO C90 does not
support ‘long long’ [-Wlong-long]
 #define PY_LONG_LONG long long
   ^
/usr/include/python2.7/pyport.h:793:34: note: in definition of macro
‘PyAPI_FUNC’
 #   define PyAPI_FUNC(RTYPE) RTYPE
  ^
/usr/include/python2.7/intobject.h:46:21: note: in expansion of macro
‘PY_LONG_LONG’
 PyAPI_FUNC(unsigned PY_LONG_LONG) PyInt_AsUnsignedLongLongMask(PyObject *);
 ^
In file included from /usr/include/python2.7/Python.h:58:0,
 from addloop.c:4:
...

And so on.

For now an easy workaround is to add -Wno-long-long to the compiler
flags.  But I was curious why I was seeing this in Cygwin but not on
my Ubuntu system, and here's why:

In runtests.py there is a check

https://github.com/cython/cython/blob/master/runtests.py#L829

if self.language == 'c' and compiler == 'gcc':
    ext_compile_flags.extend(['-std=c89', '-pedantic'])

where in this case `compiler` is assigned
`sysconfig.get_config_var('CC')`.  On my Linux system this expands to
"x86_64-linux-gnu-gcc -pthread".  Whereas on Cygwin (and probably many
other systems) it expands simply to "gcc".

I'm guessing that to do what was intended, the condition should read
"and 'gcc' in compiler".

But this also raises the question: why are the tests run with these
flags?  If Python was configured with HAVE_LONG_LONG, then these
warnings are inevitable.

Thanks,
Erik


Re: [Cython] Fwd: Question about how best require compiler options for C sources

2016-04-26 Thread Erik Bray
On Mon, Apr 11, 2016 at 7:49 PM, Ian Henriksen
 wrote:
> On Mon, Apr 11, 2016 at 7:46 AM Erik Bray  wrote:
>>
>> On Mon, Apr 11, 2016 at 2:51 PM, Nathaniel Smith  wrote:
>> > On Apr 11, 2016 04:18, "Erik Bray"  wrote:
>> >>
>> >> On Fri, Apr 8, 2016 at 5:49 PM, Nathaniel Smith  wrote:
>> >> > Can you give a tiny concrete example? My questions are basic enough
>> >> > that
>> >> > I
>> >> > feel like I'm missing something fundamental :-)
>> >>
>> >> Yes, I think you might be missing something, but I'm not sure exactly
>> >> where.  In the issue I provided a tiny example which you can see here:
>> >>
>> >> https://gist.github.com/embray/12a67edb82b213217e31f408007898e6
>> >>
>> >> The C code generated by this example currently does not compile on
>> >> Windows, because of how Cython uses DL_IMPORT incorrectly.  Regardless
>> >> of what it *does* do, Cython should not be using the DL_IMPORT macro
>> >> at all actually since that no longer even exists in Python.
>> >
>> > Sure.
>> >
>> > For that example, the correct thing to do is to *not* export the
>> > function.
>>
>> So as I wrote in my previous message, my example was incomplete.  But
>> indeed if the intent was *only* to share the declaration between TUs
>> of the same library then I agree the *most* correct thing would be to
>> not use dllexport. Unfortunately there isn't currently a way in Cython
>> to make this distinction.
>>
>> > Backing up, to make sure we're on the same page:
>>
>> Yep, I agree with your analysis that follows, and it's helpful to have
>> all the cases laid out in one place, so thanks for that!
>>
>> > There are three levels of
>> > symbol visibility in C: file-internal, shared library internal
>> > (different .c
>> > files that make up the library can see it, but users of the shared
>> > library
>> > can't see it), and shared library exported (everyone can see it; can
>> > also
>> > carry other consequences, e.g. on Linux then internal calls will become
>> > noticeably slower, and it becomes possible for weird symbol
>> > interposition
>> > issues to occur). So the rule of thumb is to make everything as private
>> > as
>> > you can get away with.
>> >
>> > Making this more interesting:
>> > - vanilla C only includes 2 ways to mark symbol visibility, which is not
>> > enough to make a 3-way distinction. Hence the need for an extra
>> > attribute
>> > thingummy.
>> > - everyone agrees that symbols marked 'static' should be file-internal,
>> > but
>> > different platforms disagree about what should happen if the extra
>> > attribute
>> > thingummy is missing.
>> >
>> > So on Windows, the convention is:
>> > 'static' -> file internal
>> > no marking -> shared library internal
>> > 'dllexport' -> public
>> >
>> > And on Linux it's:
>> > 'static' -> file internal
>> > 'visibility (hidden)' -> shared library internal
>> > no marking -> public
>> >
>> > It's generally agreed that Linux got this wrong and that you should
>> > always
>> > use '-fvisibility=hidden' to switch it to the windows style, but cython
>> > doesn't control compiler options and thus should probably generate code
>> > that
>> > works correctly regardless of such compiler settings. Fortunately, Linux
>> > does provide some markings to explicitly make things public: you can
>> > mark a
>> > symbol 'visibility (default)' (which means public), or you can use the
>> > dllexport syntax, just like Windows, because gcc is helpful like that.
>> >
>> > OTOH, windows is annoying because of this dllimport thing that started
>> > this
>> > whole thread: on other systems just marking symbols as extern is enough
>> > to
>> > handle both shared-object-internal and shared-library-exported symbols,
>> > and
>> > the linker will sort it out. On Windows, you have to explicitly
>> > distinguish
>> > between these. (And annoyingly, if you accidentally leave out the
>> > dllimport
>> > marking on functions then it will use some fallback hack that works but
>> > silently degrades performance; on ot

[Cython] Cygwin: Handling missing C99 long double functions

2016-04-26 Thread Erik Bray
Hi again,

Sorry if I'm spamming the list too much, but I've encountered another
pretty serious and unfortunate issue with Cython on Cygwin.

The problem now is that many of the libm long double functions like sqrtl,
tanl, etc. are missing on Cygwin (missing from newlib to be specific).
I think this is a previously known issue, but nothing's ever really
been done about it.  Well, to be clear, sometimes they're present, but
only when sizeof(double) == sizeof(long double).  However on 64-bit
Cygwin sizeof(long double) == 16, so the functions are simply not
defined.

This seems to be due to lack of interest / effort:
https://www.cygwin.com/ml/cygwin/2011-04/msg00231.html  That post is 5
years old, but I can't find any evidence that this has changed.

There are quite a few tests in Cython's test suite that test long
double support.  I guess what I'm asking is what would be the best
thing to do about it.

I could just skip those tests on Cygwin, though I'm not sure the best
way to go about skipping an entire test for a given platform.

More generally though, outside the context of testing, this means
Cython will sometimes generate code that cannot be compiled on this
platform, and a question arises as to what to do about that.  I have
some thoughts, but am not sure if it's worth discussing any further or
not.

Thanks,
Erik


Re: [Cython] Cygwin: Handling missing C99 long double functions

2016-04-26 Thread Erik Bray
On Tue, Apr 26, 2016 at 5:16 PM, Dima Pasechnik
 wrote:
> Hi,
> certainly we did something with Sage on cygwin to work around these...
> Just in case,

From what I can tell there are several places where Sage has hacked
around this issue in different packages, but it's not doing anything
specifically with it for Cython.  Sage uses the Cephes math lib to
support these functions on FreeBSD--and apparently used to use it on
Cygwin too, but disabled that for some reason.

Regardless, Cython should ultimately do something sensible in these cases.

My general thinking is that in cases where Cython generates code
containing C math functions, it ought to support a fallback.  This
will require some feature checks so that Cython can generate wrappers,
when necessary, around the double versions of those functions (as
Numpy currently does).

Erik

> On Tue, Apr 26, 2016 at 3:58 PM, Erik Bray  wrote:
>> Hi again,
>>
>> Sorry if I'm spamming the list too much, but I've encountered another
>> pretty serious and unfortunate issue with Cython on Cygwin.
>>
>> The problem now is that many of the libm long double functions like sqrtl,
>> tanl, etc. are missing on Cygwin (missing from newlib to be specific).
>> I think this is a previously known issue, but nothing's ever really
>> been done about it.  Well, to be clear, sometimes they're present, but
>> only when sizeof(double) == sizeof(long double).  However on 64-bit
>> Cygwin sizeof(long double) == 16, so the functions are simply not
>> defined.
>>
>> This seems to be due to lack of interest / effort:
>> https://www.cygwin.com/ml/cygwin/2011-04/msg00231.html  That post is 5
>> years old, but I can't find any evidence that this has changed.
>>
>> There are quite a few tests in Cython's test suite that test long
>> double support.  I guess what I'm asking is what would be the best
>> thing to do about it.
>>
>> I could just skip those tests on Cygwin, though I'm not sure the best
>> way to go about skipping an entire test for a given platform.
>>
>> More generally though, outside the context of testing, this means
>> Cython will sometimes generate code that cannot be compiled on this
>> platform, and a question arises as to what to do about that.  I have
>> some thoughts, but am not sure if it's worth discussing any further or
>> not.
>>
>> Thanks,
>> Erik


Re: [Cython] Cygwin: Handling missing C99 long double functions

2016-04-27 Thread Erik Bray
On Tue, Apr 26, 2016 at 10:55 PM, Robert Bradshaw  wrote:
> On Tue, Apr 26, 2016 at 8:36 AM, Erik Bray  wrote:
>>
>> On Tue, Apr 26, 2016 at 5:16 PM, Dima Pasechnik
>>  wrote:
>> > Hi,
>> > certainly we did something with Sage on cygwin to work around these...
>> > Just in case,
>>
>> From what I can tell there are several places where Sage has hacked
>> around this issue in different packages, but it's not doing anything
>> specifically with it for Cython.  Sage uses the Cephes math lib to
>> support these functions on FreeBSD--and apparently used to use it on
>> Cygwin too, but disabled that for some reason.
>>
>> Regardless, Cython should ultimately do something sensible in these cases.
>>
>> My general thinking is that in cases where Cython generates code
>> containing C math functions, it ought to support a fallback.  This
>> will require some feature checks so that Cython can generate wrappers,
>> when necessary, around the double versions of those functions (as
>> Numpy currently does).
>
>
> Let's make things concrete. You're complaining that something like
>
> cdef extern from "math.h":
> long double sqrtl(long double)
>
> def foo(long double x):
> return sqrtl(x)
>
> Doesn't work on Cygwin?
>
> The same is true for *any* C function that you use that's not totally
> portable (this is the bane of trying to use C). I don't think Cython should
> be detecting this and substituting a (less accurate) sqrt for sqrtl in this
> case. If you want to do this, write your own headers that (conditionally)
> define these things however you want.

No, not at all.  That would be silly.  Let me be clearer...

> Or, are you complaining that Cython's test suite doesn't pass on some Cygwin
> because there are tests of features not available on Cygwin? (Your original
> email isn't clear.) If so, the test framework can be set up to exclude these
> tests on that platform.

There are really two concerns.  This is one of them, yes.  The first
question is what would be the best way to exclude certain tests on
certain platforms?  There's no clear documentation on that (which is
fine, but it's why I'm asking :)

The second concern, and more serious, is that there *are* cases where
Cython generates code that uses long double functions where, for
example, long double is passed as an argument.  For example the
following code

def truncate_long_double(long double x):
    cdef float r = int(x)
    return r

compiles to something that includes:

  /* "truncl.pyx":2
 * def truncate_long_double(long double x):
 * cdef float r = int(x) # <<<<<<<<<<<<<<
 * return r
 */
  __pyx_t_1 = truncl(__pyx_v_x);
  __pyx_v_r = __pyx_t_1;

  /* "truncl.pyx":3
 * def truncate_long_double(long double x):
 * cdef float r = int(x)
 * return r # <<<<<<<<<<<<<<
 */
  __Pyx_XDECREF(__pyx_r);
  __pyx_t_2 = PyFloat_FromDouble(__pyx_v_r); if
(unlikely(!__pyx_t_2)) __PYX_ERR(0, 3, __pyx_L1_error)

It's not clear what the best way would be to *not* use this code
on platforms where truncl missing (and there are a handful of other
examples besides truncl). This code is generated by an optimization
that was added here: http://trac.cython.org/ticket/400  In this case
replacing truncl with trunc shouldn't hurt.

It's not uncommon (I think) to ship pre-cythonized C sources in a
package's source distribution so that it can be installed (especially
by pip) without requiring the user to have Cython.  This code will
fail to compile on platforms like Cygwin.

But it's not clear how to handle this at Cythonization time either
since it doesn't know in advance what platform it will be targeting.
I'm suggesting that maybe rather than using these functions directly
Cython should generate some fallbacks for where they're not available,
or at least have the option to.

Erik

P.S. additional long double functions that can be generated by Cython
include powl and fmodl, and long double complex functions like conjl,
cabsl, and cpowl.  So it's really only a limited number at the moment.
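To sketch what such a fallback could look like for truncl (the
HAVE_TRUNCL feature-check macro is hypothetical and would have to be
determined somehow at build time):

    #include <math.h>

    #if !defined(HAVE_TRUNCL)
    /* fall back to the double version, trading precision for portability */
    static long double __pyx_fallback_truncl(long double x) {
        return trunc((double) x);
    }
    #define truncl __pyx_fallback_truncl
    #endif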


Re: [Cython] Cygwin: Handling missing C99 long double functions

2016-04-27 Thread Erik Bray
On Wed, Apr 27, 2016 at 12:07 PM, Erik Bray  wrote:
> On Tue, Apr 26, 2016 at 10:55 PM, Robert Bradshaw  wrote:
>> On Tue, Apr 26, 2016 at 8:36 AM, Erik Bray  wrote:
>>>
>>> On Tue, Apr 26, 2016 at 5:16 PM, Dima Pasechnik
>>>  wrote:
>>> > Hi,
>>> > certainly we did something with Sage on cygwin to work around these...
>>> > Just in case,
>>>
>>> From what I can tell there are several places where Sage has hacked
>>> around this issue in different packages, but it's not doing anything
>>> specifically with it for Cython.  Sage uses the Cephes math lib to
>>> support these functions on FreeBSD--and apparently used to use it on
>>> Cygwin too, but disabled that for some reason.
>>>
>>> Regardless, Cython should ultimately do something sensible in these cases.
>>>
>>> My general thinking is that in cases where Cython generates code
>>> containing C math functions, it ought to support a fallback.  This
>>> will require some feature checks so that Cython can generate wrappers,
>>> when necessary, around the double versions of those functions (as
>>> Numpy currently does).
>>
>>
>> Let's make things concrete. You're complaining that something like
>>
>> cdef extern from "math.h":
>> long double sqrtl(long double)
>>
>> def foo(long double x):
>> return sqrtl(x)
>>
>> Doesn't work on Cygwin?
>>
>> The same is true for *any* C function that you use that's not totally
>> portable (this is the bane of trying to use C). I don't think Cython should
>> be detecting this and substituting a (less accurate) sqrt for sqrtl in this
>> case. If you want to do this, write your own headers that (conditionally)
>> define these things however you want.
>
> No, not at all.  That would be silly.  Let me be clearer...
>
>> Or, are you complaining that Cython's test suite doesn't pass on some Cygwin
>> because there are tests of features not available on Cygwin? (Your original
>> email isn't clear.) If so, the test framework can be set up to exclude these
>> tests on that platform.
>
> There are really two concerns.  This is one of them, yes.  The first
> question is what would be the best way to exclude certain tests on
> certain platforms?  There's no clear documentation on that (which is
> fine, but it's why I'm asking :)
>
> The second concern, and more serious, is that there *are* cases where
> Cython generates code that uses long double functions where, for
> example, long double is passed as an argument.  For example the
> following code
>
> def truncate_long_double(long double x):
>     cdef float r = int(x)
>     return r
>
> compiles to something that includes:
>
>   /* "truncl.pyx":2
>  * def truncate_long_double(long double x):
>  * cdef float r = int(x) # <<<<<<<<<<<<<<
>  * return r
>  */
>   __pyx_t_1 = truncl(__pyx_v_x);
>   __pyx_v_r = __pyx_t_1;
>
>   /* "truncl.pyx":3
>  * def truncate_long_double(long double x):
>  * cdef float r = int(x)
>  * return r # <<<<<<<<<<<<<<
>  */
>   __Pyx_XDECREF(__pyx_r);
>   __pyx_t_2 = PyFloat_FromDouble(__pyx_v_r); if
> (unlikely(!__pyx_t_2)) __PYX_ERR(0, 3, __pyx_L1_error)
>
> It's not clear what the best way would be to *not* use this code
> on platforms where truncl missing (and there are a handful of other
> examples besides truncl). This code is generated by an optimization
> that was added here: http://trac.cython.org/ticket/400  In this case
> replacing truncl with trunc shouldn't hurt.
>
> It's not uncommon (I think) to ship pre-cythonized C sources in a
> package's source distribution so that it can be installed (especially
> by pip) without requiring the user to have Cython.  This code will
> fail to compile on platforms like Cygwin.
>
> But it's not clear how to handle this at Cythonization time either
> since it doesn't know in advance what platform it will be targeting.
> I'm suggesting that maybe rather than using these functions directly
> Cython should generate some fallbacks for where they're not available,
> or at least have the option to.

To be clear, when I talk about "generating some fallbacks" I don't think
it's obvious that there's a great way to do that, but that's why I
bring it up.  I'll keep thinking about it in the meantime.
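
For illustration, the rough shape such a fallback can already take at
the user level in Cython code (untested sketch; HAVE_TRUNCL is a
made-up compile-time constant that would need to be set per platform,
e.g. from a generated .pxi that gets include-d):

DEF HAVE_TRUNCL = True  # imagine this coming from a per-platform config.pxi

cdef extern from "math.h":
    double trunc(double)
    long double truncl(long double)

cdef inline long double safe_truncl(long double x):
    IF HAVE_TRUNCL:
        return truncl(x)
    ELSE:
        # double-precision fallback; loses precision for values that
        # don't fit exactly in a double
        return <long double>trunc(<double>x)

Something along those lines, just emitted in C by Cython itself, is
what I have in mind.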

Thanks,
Erik
___
cython-devel mailing list
cython-devel@python.org
https://mail.python.org/mailman/listinfo/cython-devel


Re: [Cython] Cygwin: Handling missing C99 long double functions

2016-05-03 Thread Erik Bray
On Thu, Apr 28, 2016 at 9:29 AM, Robert Bradshaw
 wrote:
> On Wed, Apr 27, 2016 at 3:07 AM, Erik Bray  wrote:
>>
>> On Tue, Apr 26, 2016 at 10:55 PM, Robert Bradshaw 
>> wrote:
>> > On Tue, Apr 26, 2016 at 8:36 AM, Erik Bray 
>> > wrote:
>> >>
>> >> On Tue, Apr 26, 2016 at 5:16 PM, Dima Pasechnik
>> >>  wrote:
>> >> > Hi,
>> >> > certainly we did something with Sage on cygwin to work around
>> >> > these...
>> >> > Just in case,
>> >>
>> >> From what I can tell there are several places where Sage has hacked
>> >> around this issue in different packages, but it's not doing anything
>> >> specifically with it for Cython.  Sage uses the Cephes math lib to
>> >> support these functions on FreeBSD--and apparently used to use it on
>> >> Cygwin too, but disabled that for some reason.
>> >>
>> >> Regardless, Cython should ultimately do something sensible in these
>> >> cases.
>> >>
>> >> My general thinking is that in cases where Cython generates code
>> >> containing C math functions, it ought to support a fallback.  This
>> >> will require some feature checks so that Cython can generate wrappers,
>> >> when necessary, around the double versions of those functions (as
>> >> Numpy currently does).
>> >
>> >
>> > Let's make things concrete. You're complaining that something like
>> >
>> > cdef extern from "math.h":
>> >     long double sqrtl(long double)
>> >
>> > def foo(long double x):
>> >     return sqrtl(x)
>> >
>> > Doesn't work on Cygwin?
>> >
>> > The same is true for *any* C function that you use that's not totally
>> > portable (this is the bane of trying to use C). I don't think Cython
>> > should
>> > be detecting this and substituting a (less accurate) sqrt for sqrtl in
>> > this
>> > case. If you want do do this, write your own headers that
>> > (conditionally)
>> > define these things however you want.
>>
>> No, not at all.  That would be silly.  Let me be clearer...
>>
>> > Or, are you complaining that Cython's test suite doesn't pass on some
>> > Cygwin
>> > because there are tests of features not available on Cygwin? (Your
>> > original
>> > email isn't clear.) If so, the test framework can be set up to exclude
>> > these
>> > tests on that platform.
>>
>> There are really two concerns.  This is one of them, yes.  The first
>> question is what would be the best way to exclude certain tests on
>> certain platforms?  There's no clear documentation on that (which is
>> fine, but it's why I'm asking :)
>
>
> Probably add a tag and then modify runtests.py to detect Cygwin and
> automatically add this to the exclusion list.

Okay.
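
Something along these lines in runtests.py, presumably (untested
sketch; the tag name and the exclusion-list hook are placeholders):

import sys

def platform_excluded_tags():
    tags = []
    if sys.platform == 'cygwin':
        # tests exercising the C99 long double functions would carry this tag
        tags.append('no-cygwin')
    return tags

# ...which the test runner would then add to its existing exclusion list,
# e.g. exclude_selectors.extend(platform_excluded_tags())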

>> The second concern, and more serious, is that there *are* cases where
>> Cython generates code that uses long double functions where, for
>> example, long double is passed as an argument.  For example the
>> following code
>>
>> def truncate_long_double(long double x):
>>     cdef float r = int(x)
>>     return r
>>
>> compiles to something that includes:
>>
>>   /* "truncl.pyx":2
>>  * def truncate_long_double(long double x):
>>  * cdef float r = int(x) # <<<<<<<<<<<<<<
>>  * return r
>>  */
>>   __pyx_t_1 = truncl(__pyx_v_x);
>>   __pyx_v_r = __pyx_t_1;
>>
>>   /* "truncl.pyx":3
>>  * def truncate_long_double(long double x):
>>  * cdef float r = int(x)
>>  * return r # <<<<<<<<<<<<<<
>>  */
>>   __Pyx_XDECREF(__pyx_r);
>>   __pyx_t_2 = PyFloat_FromDouble(__pyx_v_r); if
>> (unlikely(!__pyx_t_2)) __PYX_ERR(0, 3, __pyx_L1_error)
>>
>> It's not clear what the best way would be to *not* use this code
>> on platforms where truncl is missing (and there are a handful of other
>> examples besides truncl). This code is generated by an optimization
>> that was added here: http://trac.cython.org/ticket/400  In this case
>> replacing truncl with trunc shouldn't hurt.
>
>
> Here, yes, but in general truncl(x) != trunc(x) for, say, 2**54 + 1.
>
>> It's not uncommon (I

Re: [Cython] Cygwin: Handling missing C99 long double functions

2016-05-04 Thread Erik Bray
On Tue, May 3, 2016 at 7:15 PM, Robert Bradshaw  wrote:
> On Tue, May 3, 2016 at 3:04 AM, Erik Bray  wrote:
>>
>> On Thu, Apr 28, 2016 at 9:29 AM, Robert Bradshaw
>>  wrote:
>> > On Wed, Apr 27, 2016 at 3:07 AM, Erik Bray 
>> > wrote:
>> >>
>> >> On Tue, Apr 26, 2016 at 10:55 PM, Robert Bradshaw 
>> >> wrote:
>> >> > On Tue, Apr 26, 2016 at 8:36 AM, Erik Bray 
>> >> > wrote:
>> >> >>
>> >> >> On Tue, Apr 26, 2016 at 5:16 PM, Dima Pasechnik
>> >> >>  wrote:
>> >> >> > Hi,
>> >> >> > certainly we did something with Sage on cygwin to work around
>> >> >> > these...
>> >> >> > Just in case,
>> >> >>
>> >> >> From what I can tell there are several places where Sage has hacked
>> >> >> around this issue in different packages, but it's not doing anything
>> >> >> specifically with it for Cython.  Sage uses the Cephes math lib to
>> >> >> support these functions on FreeBSD--and apparently used to use it on
>> >> >> Cygwin too, but disabled that for some reason.
>> >> >>
>> >> >> Regardless, Cython should ultimately do something sensible in these
>> >> >> cases.
>> >> >>
>> >> >> My general thinking is that in cases where Cython generates code
>> >> >> containing C math functions, it ought to support a fallback.  This
>> >> >> will require some feature checks so that Cython can generate
>> >> >> wrappers,
>> >> >> when necessary, around the double versions of those functions (as
>> >> >> Numpy currently does).
>> >> >
>> >> >
>> >> > Let's make things concrete. You're complaining that something like
>> >> >
>> >> > cdef extern from "math.h":
>> >> >     long double sqrtl(long double)
>> >> >
>> >> > def foo(long double x):
>> >> >     return sqrtl(x)
>> >> >
>> >> > Doesn't work on Cygwin?
>> >> >
>> >> > The same is true for *any* C function that you use that's not totally
>> >> > portable (this is the bane of trying to use C). I don't think Cython
>> >> > should
>> >> > be detecting this and substituting a (less accurate) sqrt for sqrtl
>> >> > in
>> >> > this
>> >> > case. If you want do do this, write your own headers that
>> >> > (conditionally)
>> >> > define these things however you want.
>> >>
>> >> No, not at all.  That would be silly.  Let me be clearer...
>> >>
>> >> > Or, are you complaining that Cython's test suite doesn't pass on some
>> >> > Cygwin
>> >> > because there are tests of features not available on Cygwin? (Your
>> >> > original
>> >> > email isn't clear.) If so, the test framework can be set up to
>> >> > exclude
>> >> > these
>> >> > tests on that platform.
>> >>
>> >> There are really two concerns.  This is one of them, yes.  The first
>> >> question is what would be the best way to exclude certain tests on
>> >> certain platforms?  There's no clear documentation on that (which is
>> >> fine, but it's why I'm asking :)
>> >
>> >
>> > Probably add a tag and then modify runtests.py to detect Cygwin and
>> > automatically add this to the exclusion list.
>>
>> Okay.
>>
>> >> The second concern, and more serious, is that there *are* cases where
>> >> Cython generates code that uses long double functions where, for
>> >> example, long double is passed as an argument.  For example the
>> >> following code
>> >>
>> >> def truncate_long_double(long double x):
>> >>     cdef float r = int(x)
>> >>     return r
>> >>
>> >> compiles to something that includes:
>> >>
>> >>   /* "truncl.pyx":2
>> >>  * def truncate_long_double(long double x):
>> >>  * cdef float r = int(x) # <<<<<<<<<<<<<<
>> >>  * return r
>> >>  */
>> >>   __pyx_

Re: [Cython] Cygwin: Handling missing C99 long double functions

2016-05-11 Thread Erik Bray
On Tue, May 10, 2016 at 8:01 AM, Robert Bradshaw
 wrote:
> On Wed, May 4, 2016 at 8:01 AM, Erik Bray  wrote:
>> On Tue, May 3, 2016 at 7:15 PM, Robert Bradshaw  wrote:
>>> On Tue, May 3, 2016 at 3:04 AM, Erik Bray  wrote:
>>>> I agree that it would be safest not to use long double where long
>>>> double functions are not supported, and an application that does should
>>>> probably check for that.
>>>>
>>>> The problem is that Cython is generating code that simply *cannot* be
>>>> compiled on such platforms.
>>>
>>> Does Cython ever generate such code when long doubles are not used in the
>>> source? If so, we should fix that. But I don't think Cython should be
>>> swapping in double versions for long double operations.
>>
>> No, but the problem is that Cython is a pretty heavy layer of
>> abstraction.  In the example I gave no long double function was ever
>> used explicitly--it was added by a compiler optimization.  I probably
>> wouldn't write that code thinking that I will need a workaround on
>> platforms that don't have truncl.
>
> I would argue that if you don't have truncl, you shouldn't be using
> long double. Possibly this particular use of truncl could be worked
> around (though note that using trunc is not safe) but it's starting to
> get on shaky ground.

Yeah but I will state again: Cython is compiling code that may be
distributed to a wide audience.  A developer who writes some code that
happens to use long double isn't going to think to themselves "Gosh, I
guess I can't accept `long double` here because it may cause Cython to
generate code that contains "truncl" [how would they know that? It's
an internal optimization] and I can't do that because it won't work on
Cygwin [which the average developer wouldn't know]"

Instead they'll just write their code, ship it, and it will fail to
compile or install for users on Cygwin.  So it's a question of least
astonishment.

>> In the slightly more explicit cases of fmod and pow I might agree.
>> Though there still isn't a great way to anticipate the need to write
>> separate handling for long double functions into a module, for a
>> platform that doesn't support them.  I can't write something in a
>> Cython module like:
>>
>> if sys.platform != 'cygwin':
>> def my_long_double_func(long double x):
>> 
>
> No, that's not something we support well. You can put them in a
> separate module and conditionally compile/import them.
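
Right--for the record, something like this is what I take that to mean
(untested sketch; the package and module names are invented):

# setup.py
import sys
from distutils.core import setup
from Cython.Build import cythonize

pyx_modules = ["mypkg/core.pyx"]
if sys.platform != "cygwin":
    # only build the long double helpers where they can actually compile
    pyx_modules.append("mypkg/_longdouble.pyx")

setup(name="mypkg", ext_modules=cythonize(pyx_modules))

# and in mypkg/__init__.py:
# try:
#     from ._longdouble import my_long_double_func
# except ImportError:
#     from ._double import my_long_double_func  # double-precision fallback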


>> Maybe it would be nice to have a simple compiler directive that outright
>> disables compiling a function depending on a platform check. (I think
>> this would also be useful for the test framework--the current system
>> of module-level tags is not as fine-grained as I would like, I'm finding.)
>>
>>>> My suggestion would be to replace the
>>>> current calls to [trunc,fmod,pow]l with the appropriate
>>>> __Pyx_%(func)_%(type) wrapper, which would provide two alternatives:
>>>> One that works properly, and one that falls back to using the double
>>>> version of the function (Numpy does this so there is some precedent).
>>>> I would go further and suggest throwing up a pre-compiler warning when
>>>> using the double versions.
>>>>
>>>> The only question (in my mind) is what to use as a
>>>> condition for selecting the fallbacks.  For now I'm thinking something
>>>> quite specific like
>>>>
>>>> #if defined(__CYGWIN__) && defined(_LDBL_EQ_DBL)
>>>>
>>>> since I'm apparently the only person who cares about this at the
>>>> moment, and Cygwin is the platform I'm targeting.
>>>>
>>>> Again, my only real concern right now is that Cython generates code
>>>> that can be compiled on Cygwin.
>>>
>>>
>>> If you want this behavior, wouldn't including a header
>>>
>>> #if defined(__CYGWIN__) && defined(_LDBL_EQ_DBL)
>>> #define truncl trunc
>>> #define fmodl fmod
>>> #define powl pow
>>> #endif
>>>
>>> as part of your package do the job?
>>
>> That's a reasonable workaround. But I would argue that should be baked
>> into Cythonized code, at least optionally, and should generate a
>> warning.
>
> This is dangerous to bake into Cython code, even with a warning (who
> would see it?) I say if you want to pretend to support long 

Re: [Cython] Cygwin: Handling missing C99 long double functions

2016-05-13 Thread Erik Bray
On Thu, May 12, 2016 at 12:55 AM, Greg Ewing
 wrote:
> Erik Bray wrote:
>>
>> A developer who writes some code that
>> happens to use long double isn't going to think to themselves "Gosh, I
>> guess I can't accept `long double` here because it may cause Cython to
>> generate code that contains "truncl"
>
>
> Sounds to me like Cython shouldn't be taking it upon
> itself to generate calls to truncl just because long
> double is being used. The programmer should have to
> call it explicitly if that's what they want.

Since it's just a small optimization it could be disabled without
trouble too.  Preferably with a #define that can be set or unset at
compile time.
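
For instance, if such a macro existed (name invented--nothing like it
exists today), a project could opt out from its setup.py roughly like
this untested sketch:

from distutils.core import setup
from distutils.extension import Extension
from Cython.Build import cythonize

ext = Extension(
    "mymod", ["mymod.pyx"],
    # hypothetical opt-out switch for the int()-of-long-double optimization
    define_macros=[("CYTHON_NO_TRUNCL_OPTIMIZATION", "1")],
)
setup(ext_modules=cythonize([ext]))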

Best,
Erik
___
cython-devel mailing list
cython-devel@python.org
https://mail.python.org/mailman/listinfo/cython-devel


Re: [Cython] Cygwin: Handling missing C99 long double functions

2016-06-06 Thread Erik Bray
On Wed, May 18, 2016 at 12:22 AM, Robert Bradshaw  wrote:
> On Fri, May 13, 2016 at 4:26 AM, Erik Bray  wrote:
>> On Thu, May 12, 2016 at 12:55 AM, Greg Ewing
>>  wrote:
>>> Erik Bray wrote:
>>>>
>>>> A developer who writes some code that
>>>> happens to use long double isn't going to think to themselves "Gosh, I
>>>> guess I can't accept `long double` here because it may cause Cython to
>>>> generate code that contains "truncl"
>>>
>>>
>>> Sounds to me like Cython shouldn't be taking it upon
>>> itself to generate calls to truncl just because long
>>> double is being used. The programmer should have to
>>> call it explicitly if that's what they want.
>>
>> Since it's just a small optimization it could be disabled without
>> trouble too.  Preferably with a #define that can be set or unset at
>> compile time.
>
> https://github.com/cython/cython/commit/46d3efc4ff9123c73889bcb54b2b200d6be39ff4

The commit looks good to me.  Sorry I somehow missed this earlier.

The other cases are a bit hairier, but this is the one I was most
concerned about since it doesn't explicitly involve any operations
with long doubles.

Thanks,
Erik
___
cython-devel mailing list
cython-devel@python.org
https://mail.python.org/mailman/listinfo/cython-devel


Re: [Cython] Suggested action for "Python.h not found"

2016-06-09 Thread Erik Bray
On Thu, Jun 9, 2016 at 7:37 AM, Elizabeth A. Fischer
 wrote:
>> Or maybe by default it can just point the user to an Installation Page
>> which explains in detail what needs to be done to get those headers for
>> various systems ?
>
>
> I think having this information on an installation instruction page is
> definitely a good idea.  I don't think it makes sense to put in every single
> error message that users should consult the installation instructions; that
> should be a given.

I'm inclined to agree that putting instructions for several different
systems in the message is overkill, and a slippery slope, but a
friendlier error message with links to documentation (and maybe to
general, non-Cython-specific docs on building Python extension modules
too) would still be worthwhile.
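
Just to sketch what I mean (untested; the wording and the docs link are
placeholders, not a real Cython URL):

import sys

def python_h_hint():
    return (
        "Python.h was not found. Compiling Cython modules requires the "
        "CPython development headers for Python %d.%d; see the "
        "installation documentation for your platform: <link to docs>"
        % sys.version_info[:2]
    )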

> It is my belief that non-developers should not be manually building
> software.  Nor should developers be doing that; if you're saying
> "./configure" yourself, it's like programming assembly code.
>
> There are many ways to install Cython without manually building:

Unfortunately sometimes, especially with Python software, the line
between "developer" and "non-developer" is blurry.  This is especially
true of scientific software where Cython finds its most use.  It's not
uncommon, for example, for Cython to be pulled in as a build
dependency of something else.  Of course as pointed out downthread
this is less of an issue on most platforms now that binary wheels are
available for so many (including now manylinux!).

So I guess point taken that when it comes to just building Cython
itself this is less of an issue than it used to be.

> * If you're on Linux (and maybe Mac too), install with Spack.  This can be
> as easy as typing:
>  spack install py-cython
>
> https://github.com/llnl/spack
>
> Spack gives you complete control over your compiler.  This is important if
> you're  building Python extensions, especially C++-based extensions, which
> must be built with the same compilers use to build Python.
>
> I use Spack to build my complete software stack: the application software I
> work on requires about 50 dependencies to build, and another ~20 Python
> dependencies to analyze the output.  That includes the full Numpy/Scipy
> stack.  Spack makes it easy for me to use consistent compiler and libraries
> for Python and my libraries.  I've had success making Spack-based install
> instructions that others (non-developers) can follow and get my software
> stack installed on their systems:
> https://github.com/citibeth/icebin

This one's new to me.  Will have to investigate--this could be
relevant to my interests.

Thanks,
Erik

> On Wed, Jun 8, 2016 at 1:25 PM, Robert Bradshaw  wrote:
>>
>> +1, want to submit a pull request?
>>
>> On Tue, Jun 7, 2016 at 11:28 PM, Abdeali Kothari
>>  wrote:
>> > Hi,
>> >
>> > Currently, when a user installs cython, if the Python headers are not
>> > found,
>> > an error message saying "Python.h: no such file or directory" is shown
>> > (Example: https://justpaste.it/v0gz). Would it be possible to suggest an
>> > action to install the headers ?
>> >
>> > I found http://trac.cython.org/ticket/247 which is quite old, and makes
>> > the
>> > error message easier to find, but doesn't recommend a solution.
>> >
>> > `sys.platform`, `platform.linux_distribution` or `distro`[1] can be used
>> > to
>> > find which OS, platform, etc is and use that to suggest a command like:
>> >
>> > if sys.version_info < (3,) and sys.platform.startswith("linux") and \
>> >         platform.linux_distribution()[0] in ("ubuntu", "debian"):
>> >     print("Python headers were not found. On Debian/Ubuntu, `sudo "
>> >           "apt-get install python-dev` should install the Python headers.")
>> >
>> > elif sys.version_info >= (3,) and sys.platform.startswith("linux") and \
>> >         platform.linux_distribution()[0] in ("ubuntu", "debian"):
>> >     print("Python headers were not found. On Debian/Ubuntu, `sudo "
>> >           "apt-get install python3-dev` should install the Python3 headers.")
>> >
>> > This would help so that non-developers can use cython based packages
>> > easily
>> > as it provides helpful instructions on what to do. Currently, you
>> > essentially search for the error message and go to one of the
>> > forums/blogs
>> > explaining what can be done.
>> >
>> > Or maybe by default it can just point the user to an Installation Page
>> > which
>> > explains in detail what needs to be done to get those headers for
>> > various
>> > systems ?
>> >
>> > [1] - https://pypi.python.org/pypi/distro
>> >
>> > ___
>> > cython-devel mailing list
>> > cython-devel@python.org
>> > https://mail.python.org/mailman/listinfo/cython-devel
>> >
>> ___
>> cython-devel mailing list
>> cython-devel@python.org
>> https://mail.python.org/mailman/listinfo/cython-devel
>
>
>
> ___
> cython-devel mailing list
> cython-devel@python.org
> https

Re: [Cython] Minor issue with running tests and -Wlong-long

2016-09-05 Thread Erik Bray
On Sat, Sep 3, 2016 at 10:46 AM, Stefan Behnel  wrote:
> Erik Bray wrote on 25.04.2016 at 17:18:
>> As some of you already know I've been doing some work on Cython on
>> Cygwin [in the process I'm constantly mixing the two up in speech,
>> but maybe in writing I'll do better :)].
>>
>> There are several issues with the tests on Cygwin, and that's one
>> thing I'll work on.  But a major annoyance I've encountered when
>> running any tests is a huge number of warnings from gcc such as:
>>
>> embray@PC-pret-47 ~/src/cython
>> $ CFLAGS="-O0" ./runtests.py -vv --no-cpp addloop
>> Python 2.7.10 (default, Jun  1 2015, 18:05:38)
>> [GCC 4.9.2]
>>
>> Running tests against Cython 0.24 f68b5bd0fa620d0dc26166bffe5fe42d94068720
>> Backends: c
>>
>> runTest (__main__.CythonRunTestCase)
>> compiling (c) and running addloop ...
>> === C/C++ compiler error output: ===
>> In file included from /usr/include/python2.7/Python.h:58:0,
>>  from addloop.c:4:
>> /usr/include/python2.7/pyport.h:69:27: warning: ISO C90 does not
>> support ‘long long’ [-Wlong-long]
>>  #define PY_LONG_LONG long long
>>^
>> /usr/include/python2.7/pyport.h:793:34: note: in definition of macro
>> ‘PyAPI_FUNC’
>>  #   define PyAPI_FUNC(RTYPE) RTYPE
>>   ^
>> /usr/include/python2.7/intobject.h:46:21: note: in expansion of macro
>> ‘PY_LONG_LONG’
>>  PyAPI_FUNC(unsigned PY_LONG_LONG) PyInt_AsUnsignedLongLongMask(PyObject *);
>>  ^
>> In file included from /usr/include/python2.7/Python.h:58:0,
>>  from addloop.c:4:
>> ...
>>
>> And so on.
>>
>> For now an easy workaround is to add -Wno-long-long to the compiler
>> flags.  But I was curious why I was seeing this in Cygwin but not on
>> my Ubuntu system, and here's why:
>>
>> In runtests.py there is a check
>>
>> https://github.com/cython/cython/blob/master/runtests.py#L829
>>
>> if self.language == 'c' and compiler == 'gcc':
>>     ext_compile_flags.extend(['-std=c89', '-pedantic'])
>>
>> where in this case `compiler` is assigned
>> `sysconfig.get_config_var('CC')`.  On my Linux system this expands to
>> "x86_64-linux-gnu-gcc -pthread".  Whereas on Cygwin (and probably many
>> other systems) it expands simply to "gcc".
>>
>> I'm guessing that to do what was intended the above line should read
>> "and 'gcc' in compiler".
>
> Yes.
>
>> But this also raises the question: Why are the tests run with these
>> flags?  If Python was configured with HAVE_LONG_LONG, then these
>> warnings will be inevitable.
>
> Thanks for bringing this up. Adding "-Wno-long-long" seems the right thing
> to do, given that this depends on CPython and not Cython.
>
> Given that CPython *can* be configured without PY_LONG_LONG support,
> however, we should then guard any such code in Cython with HAVE_LONG_LONG,
> as CPython does internally, so that it actually compiles without it.
>
> I would otherwise like to keep those compiler flags, even if they haven't
> served us yet, because Cython is not meant to put too many constraints on
> the C compiler. Supporting C89, at least whenever possible, would be nice.

An alternative could be to just remove them on Cygwin.  Or maybe,
though I don't think I've tested this, changing `-std=c89` to
`-std=gnu89`.  newlib, which is the libc implementation used by
Cygwin, has some bugs in its headers that can crop up in odd ways
under a strict `-std=c89`, but with the GNU extensions it's usually a
bit more permissive.
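
For reference, the combined change I have in mind for runtests.py would
look roughly like this (untested sketch; the helper name is mine):

import sysconfig

def gcc_c89_flags(language):
    # flags for the strict-C89 test runs, minus the unavoidable
    # long-long noise coming from CPython's own headers
    compiler = sysconfig.get_config_var('CC') or ''
    if language == 'c' and 'gcc' in compiler:
        # matches "x86_64-linux-gnu-gcc -pthread" as well as plain "gcc"
        return ['-std=c89', '-pedantic', '-Wno-long-long']
    return []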
___
cython-devel mailing list
cython-devel@python.org
https://mail.python.org/mailman/listinfo/cython-devel


Re: [Cython] Why does embed-position use relative filenames?

2017-10-25 Thread Erik Bray
On Wed, Oct 25, 2017 at 2:24 PM, Jeroen Demeyer  wrote:
> When Cython is run with the --embed-position option (or when
> Cython.Compiler.Options.embed_pos_in_docstring is true), Cython writes
> headers in the docstring like
>
> File: sage/rings/integer.pyx (starting at line 358)
>
> The filenames that Cython uses here are always made relative to the current
> working directory. This is annoying for tools which extract this information
> for introspection. It would be more useful to just write the path as-is
> instead of making it relative.
>
> It would be easy to change this behaviour in Cython but I was wondering if
> there is a specific reason for these relative paths.

IMO the relative paths make more sense in a way, because once the
module is installed the embedded path still matches the file's
location relative to its `sys.path` entry.

If it defaulted to absolute paths then it would get
`/path/to/my/source/code/package/module.py` baked into it, which is
deceptive if the module is then installed, say, to `site-packages`.

That's just my guess as to why it's a relative path by default.  I'm
not saying it's always necessarily the best choice--it would be nice
if there were a compiler directive to change this.

Erik
___
cython-devel mailing list
cython-devel@python.org
https://mail.python.org/mailman/listinfo/cython-devel


Re: [Cython] Why does embed-position use relative filenames?

2017-10-25 Thread Erik Bray
On Wed, Oct 25, 2017 at 2:50 PM, Jeroen Demeyer  wrote:
> On 2017-10-25 14:42, Erik Bray wrote:
>>
>> IMO the relative paths make more sense in a way, because when the
>> module is installed the path is relative to the file's location
>> relative to its `sys.path` entry.
>>
>> If it defaulted to absolute paths
>
>
> I didn't say that it should default to absolute paths. I am saying that it
> should take the path as it was received from the caller. Don't make it relative
> or absolute.

I see--I agree, that would make the most sense.  I wasn't implying that
you thought the default should be absolute; I was just giving a reason
it might make sense in the common case to use relative paths.  Agreed
that ideally it should *not* convert a given absolute path to a
relative one.
___
cython-devel mailing list
cython-devel@python.org
https://mail.python.org/mailman/listinfo/cython-devel


Re: [Cython] Why does embed-position use relative filenames?

2017-10-27 Thread Erik Bray
On Fri, Oct 27, 2017 at 11:35 AM, Jeroen Demeyer  wrote:
> My use case is for calling Cython at runtime. For example, using
> cython.inline or %%cython or the SageMath-specific cython() function. In
> that case, absolute paths make the most sense since you want to support the
> user running Cython somewhere and then doing chdir().
>
> Perhaps we could keep the default of relativizing the paths but support a
> parameter like
>
> cython --embed-position=abspath

+1

That seems like a straightforward solution that should make everyone happy.
___
cython-devel mailing list
cython-devel@python.org
https://mail.python.org/mailman/listinfo/cython-devel


[Cython] Bug with inlined functions that access globals

2017-11-24 Thread Erik Bray
Hi,

I think maybe I've seen this brought up once or twice before in the
past, but with no real discussion.  There is a slightly unpythonic
problem with using cpdef inline functions defined in a .pxd file that
happen to access global variables.  For example:

$ cat foo.pxd
FOO_A = 1

cpdef inline foo():
    return FOO_A

$ cat bar.pyx
from foo cimport foo

print(foo())

Running this like python -c 'import bar' results in:

Traceback (most recent call last):
  File "", line 1, in 
  File "bar.pyx", line 3, in init bar
print(foo())
  File "foo.pxd", line 4, in foo.foo
return FOO_A
NameError: name 'FOO_A' is not defined

Of course, if the function "foo" were not inlined this would work
fine.  But the inlining really breaks things, I think, in an
unpythonic way.  I could be misunderstanding how Cython's "inline"
should be interpreted but I feel like it's an implementation detail
that should not fundamentally change how the code in that function
works.

There are a couple issues here.  First of all, if a .pxd file exists
but not a .pyx, no actual module will be created.  This is by design
of course, but in this case either inlined functions in a .pxd file
should be prohibited from accessing globals, or a trivial module
should be created to store the globals for that module.

The second problem boils down to the global variable lookup being
compiled to something like:

__pyx_t_1 = __Pyx_GetModuleGlobalName(__pyx_n_s_FOO_A)

Perhaps, when a function from one module gets inlined in another
module, at least in the case where it needs to access globals, the
originating module should be imported first and
__Pyx_GetModuleGlobalName replaced with an equivalent that uses the
originating module's globals.

So have a global variable like

static PyObject* __pyx_d_foo;

and in the module init code something like:

__pyx_t_1 = __Pyx_Import("foo");
__pyx_d_foo = PyModule_GetDict(__pyx_t_1);

then in the inlined function, replace the call to
__Pyx_GetModuleGlobalName with an equivalent that uses __pyx_d_foo
instead of __pyx_d.

This all assumes, of course, that "foo" is an importable module.  I'm
not sure this makes sense otherwise...  If not then maybe inlining a
function should also "inline" any global variables it uses.

Thoughts?

Erik
___
cython-devel mailing list
cython-devel@python.org
https://mail.python.org/mailman/listinfo/cython-devel


Re: [Cython] Bug with inlined functions that access globals

2017-11-24 Thread Erik Bray
On Fri, Nov 24, 2017 at 2:16 PM, Stefan Behnel  wrote:
> Erik Bray wrote on 24.11.2017 at 13:44:
>> I think maybe I've seen this brought up once or twice before in the
>> past, but with no real discussion.  There is a slightly unpythonic
>> problem with using cpdef inline functions defined in a .pxd file that
>> happen to access global variables.  For example:
>>
>> $ cat foo.pxd
>> FOO_A = 1
>>
>> cpdef inline foo():
>> return FOO_A
>
> Hmm, I might be missing something, but it feels like globals in a .pxd file
> should not be allowed in the first place - unless there is a .pyx file for
> them. But that's difficult to prohibit, given that the .pyx file is not
> needed at compile time, and might not be there at all, just the readily
> compiled module (and maybe not even that).
>
> And I don't see why Python globals should be allowed in .pxd files at all.
>
> So - it seems probably difficult to improve the first case. Regarding your
> actual example, however, if there is really no use case for Python globals
> in .pxd files, then we should better disallow them.

That would be an acceptable resolution, though I believe there is a
use case for it.  If you take a function that was originally
defined in a .pyx module and want to be able to inline it, you're
forced to rewrite it if it relied in any way on
module-level globals (which is a normal thing to do).  So it's a
problem of the implementation details driving what language features
one is allowed to use.  This is less than ideal...

Anyways there's really two issues here:

1) Python globals in .pxd files, which you're saying should be
disallowed--I have to wonder if there isn't some way to make that make
sense, but I don't feel strongly about it.
2) Non-local variables in functions defined in .pxd files--again these
should either be allowed by some mechanism, or entirely disallowed
since otherwise the function either breaks with a
NameError or behaves wildly unexpectedly if it gets dropped into a
module that happens to have a global of the same name.
___
cython-devel mailing list
cython-devel@python.org
https://mail.python.org/mailman/listinfo/cython-devel


Re: [Cython] Should we start requiring "language_level=2" for .pyx files?

2018-05-31 Thread Erik Bray
On Mon, May 28, 2018 at 8:56 PM, Stefan Behnel  wrote:
> Hi,
>
> Python 3 is clearly taking over the world these days, so it starts feeling
> arcane to require Py2 syntax in .pyx files. Increasingly, it means that
> people cannot just rename .py files anymore to start optimising them,
> because the .py file has a high chance of being written in Py3 syntax.
>
> Eventually, we will have to switch to Py3 syntax by default in order to
> follow what most people are (or will be) used to.
>
> As a transition, I think we could start warning about cases where the
> language level is not set explicitly. If people start marking their code as
> being "Cython 2.x code", either with an in-file directive or from their
> setup.py, we will have less of a problem in the future to change the default.
>
> What do you think? Any other ideas, comments, objections?


Perhaps you could clarify something:  I tried suggesting a while ago
that Sage start using language_level=3 at least when actually building
Sage on Python 3.  I know this isn't necessary but it just seemed to
make logical sense.  But Jeroen was convinced it wasn't necessary
because, according to him, language_level=3 doesn't really do
anything.

So what exactly does language_level=3 (or 2) do, such that it would
impact porting Python 3 code to Cython?
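
For context, this is the kind of difference I assume it controls (a
rough sketch of my understanding--corrections welcome):

# cython: language_level=3

def demo(a, b):
    # under language_level=3 the source is parsed with Python 3 semantics:
    # print is a function, / is true division, and plain string literals
    # are unicode; under language_level=2 the same source keeps the
    # Python 2 meanings.
    print(a / b)
    return "text"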
___
cython-devel mailing list
cython-devel@python.org
https://mail.python.org/mailman/listinfo/cython-devel


Re: [Cython] Cython 0.29 – or 29.0 ?

2018-08-17 Thread Erik Bray
On Fri, Aug 17, 2018 at 12:03 PM Stefan Behnel  wrote:
>
> Robert Bradshaw wrote on 16.08.2018 at 00:48:
> > If we're going to ditch the 0.x, I'd go for 1.0 as well. I'm a huge fan of
> > semantic versioning.
>
> I'm ok with it, just fear that it might become excessive for Cython (ok,
> I'm the one who proposed jumping to 29.0, but...). Basically, any time we
> add something to Includes/ or anything to the language, we'd have to
> increase the minor version, and any time we fix something that might break
> some person's code, we'd have to increase the major version? Some changes
> are simply not important enough to merit shouting out the breakage that
> almost no-one would otherwise notice. And unless we're adding new syntax
> that was an error before, almost any change in the compiler bears a tiny
> risk of breaking something for someone.

I believe it's okay to make sensible judgment calls on this one.  I
don't really like how semver.org defines usage of the major version
number, because one could easily take it too literally like this.  I
believe there are other reasons to bump major version numbers, such as
yes, marketing.  You can also use it like Linux and some other
projects do to indicate LTS.  Like, an odd version number means "we
guarantee not to signficantly break anything in these releases, and to
backport bug fixes for X amount of time", versus even major version
means "active changes are happening here; API may move around a bit;
no bug fixes to these releases after the next LTS release".

I'm not saying Cython should necessarily do that.  Just giving
examples of how I think semver.org's definition of major version is
too constrained.
___
cython-devel mailing list
cython-devel@python.org
https://mail.python.org/mailman/listinfo/cython-devel


Re: [Cython] Preparing Cython 3.0 with breaking changes

2018-08-20 Thread Erik Bray
On Sat, Aug 18, 2018 at 9:19 AM Stefan Behnel  wrote:
>
> Hi all,
>
> following the versioning discussion, I created a milestone to collect
> (breaking) changes that should go into the future Cython 3.0 release.
>
> https://github.com/cython/cython/milestone/58
>
> While a major version change is a good time to fix things that, in
> retrospect, have led us too far in the wrong direction, we are not planning
> to remove the support for existing code any time soon that relies on the
> current semantics. For now, it looks like the major breaking changes will
> be to directive defaults, which means that setting them explicitly to their
> current configuration (e.g. in setup.py) should keep code working across
> Cython releases.
>
> Feedback welcome.

Hi Stefan,

Not related to the Cython language itself, but it would be good, as
related to issues like #1514 [1] and #2531 [2], to rip out a lot of the
old distutils-related code, which would be a breaking change.  Perhaps
this could be done in a way that isn't *too* breaking, such as leaving
behind some stubs, as this might also require some changes to
setuptools (which I can help make happen).

Of course, none of this should happen without something better to
replace it, along the lines that we discussed in Cernay.  I still have
my notes from our talk about that and will try to write up a summary
(which I should have done back then, but oh well).

Just thought I'd mention this as another potential "breaking change"
that might be worth adding to the list; I don't think it will
break much if handled with a bit of care.

[1] https://github.com/cython/cython/issues/1514
[2] https://github.com/cython/cython/issues/2531
___
cython-devel mailing list
cython-devel@python.org
https://mail.python.org/mailman/listinfo/cython-devel