Why isn't "-std=c99" (and others) part of python3-config's output?
I've successfully built and installed a copy of Python 3.6.8 (replacing a probably buggy installation on my RHEL system, different story). I also set up a virtualenv by doing

$ /usr/local/bin/python3.6dm -m venv /usr/local/pyenv36

In my activated virtualenv, I try "$ pip install numpy", but it fails with lots of errors about something that is valid only in C99. Obviously numpy relies on -std=c99:

(pyenv36)$ /usr/local/bin/python3.6dm-config --cflags
-I/usr/local/include/python3.6dm -I/usr/local/include/python3.6dm -Wno-unused-result -Wsign-compare -g -Og -Wall
(pyenv36)$

This surprises me, because "my" /usr/local/bin/python3.6m itself was built with -std=c99, as seen in these sysconfig entries:

'CONFIGURE_CFLAGS_NODIST': '-std=c99 -Wextra -Wno-unused-result '
'PY_CFLAGS_NODIST': '-std=c99 -Wextra -Wno-unused-result '
'PY_CORE_CFLAGS': '-Wno-unused-result -Wsign-compare -g -Og -Wall -std=c99 '

I can install numpy on the system Python 3, and not surprisingly, there is -std=c99 among tons of other CFLAGS:

$ /usr/bin/python3.6m-config --cflags
-I/usr/include/python3.6m -I/usr/include/python3.6m -Wno-unused-result -Wsign-compare -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -DDYNAMIC_ANNOTATIONS_ENABLED=1 -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv
$

My questions are:

1) Why does the output of /usr/local/bin/python3.6m-config --cflags differ from the CFLAGS in sysconfig.get_config_vars()? I've confirmed that the /usr/local/bin/python3.6m binary and /usr/local/bin/python3.6m-config really are from the same build.
2) How does one activate the necessary CFLAGS for extension building?
3) Where do all the extra flags in the system /usr/bin/python3.6m-config --cflags come from?
4) Why isn't the config script installed into the /bin of a virtualenv? This is super annoying when building extensions on a multi-version system because it may lead to version mix-ups.

A bit of background on this: I've written a C extension that leaks memory like a sieve on my production RHEL7's system python3, but neither on my Debian development system nor on my "self-built" Python on the production server. So I'd like to install the self-built Python on the production server, but for that I need a bunch of other packages to work as well, including numpy.

Thanks!
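A possible workaround, sketched here as an assumption rather than something confirmed in the thread: distutils appends the contents of the CFLAGS environment variable to its compile flags when building extensions, so the missing flag can be injected by hand at install time, e.g.

(pyenv36)$ CFLAGS="-std=c99" pip install numpy

Whether python3.6dm-config should already be emitting the *_NODIST flags is exactly the open question above.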
Re: "Don't install on the system Python"
On Sun, 1 Dec 2019 01:33:50 -0800 (PST) John Ladasky wrote:

> The only thing I must install with pip is tensorflow-gpu. For
> everything else, I make use of the Ubuntu repositories. The Synaptic
> package manager installs packages (including Python modules) for all
> user accounts at the same time, which I like.
>
> When I installed tensorflow-gpu using pip, I was in fact frustrated
> because I couldn't figure out how to deploy it across multiple user
> accounts at one time. I ended up installing it three times, once in
> each account. You're suggesting that's actually preferred, at least
> when pip is performing the installation. OK, I will endure the
> repetition.

You can set up a system-wide virtualenv (for instance in /usr/local/lib/myenv) and use pip install as root to set up everything into that. All the normal users then have to do is prepend /usr/local/lib/myenv/bin to their PATH. After that, you have a system-wide consistent distribution of all your needed Python packages. You can then uninstall all Python packages provided by the Linux distro which you don't need.

At the moment it seems as if all you need to install locally with pip is tensorflow-gpu. This will change once some future version of tensorflow-gpu depends on newer versions of the system-provided packages. When that happens, pip will pull all those packages into the user's local venv, and it will have to do that individually for each user.

BTW, it took me a long time to embrace Python's "virtualenv" concept because I had a hard time figuring out what it was and how it worked. Turns out that there is no magic involved, and that "virtual environment" is a misnomer. It is simply a full Python environment in a separate location on your system. Nothing virtual about it.
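A minimal sketch of that setup (the path and the package are just the examples used in this thread):

# as root
python3 -m venv /usr/local/lib/myenv
/usr/local/lib/myenv/bin/pip install tensorflow-gpu

# in each user's shell profile
export PATH=/usr/local/lib/myenv/bin:$PATH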
More efficient/elegant branching
Hello,
I have a function with a long if/elif chain that sets a couple of
variables according to a bunch of test expressions, similar to function
branch1() below. I never liked that approach much because it is clumsy
and repetitive, and pylint thinks so as well. I've come up with two
alternatives which I believe are less efficient due to the reasons given
in the respective docstrings. Does anybody have a better idea?
def branch1(a, b, z):
    """Inelegant, unwieldy, and pylint complains
    about too many branches"""
    if a > 4 and b == 0:
        result = "first"
    elif len(z) < 2:
        result = "second"
    elif b + a == 10:
        result = "third"
    return result
def branch2(a, b, z):
    """Elegant but inefficient because all expressions
    are pre-computed although the first one is most likely
    to hit"""
    decision = [
        (a > 4 and b == 0, "first"),
        (len(z) < 2, "second"),
        (b + a == 10, "third")]
    for (test, result) in decision:
        if test: return result
def branch3(a, b, z):
    """Elegant but inefficient because expressions
    need to be parsed each time"""
    decision = [
        ("a > 4 and b == 0", "first"),
        ("len(z) < 2", "second"),
        ("b + a == 10", "third")]
    for (test, result) in decision:
        if eval(test): return result
Re: More efficient/elegant branching
I like it! I think it's a cute exercise but it doesn't really solve any problem. The if/elif chain can accomplish the same thing (and much more) in the same line count for the price of being clunkier. **D
Re: More efficient/elegant branching
Hello Neil, thanks for the detailed answer.

> Question: are there other people/factors who/which should be regarded
> as more important than the linter's opinion?

Yes. Mine. I was just puzzled at the linter's output (took me a while to figure out what it actually meant), and that got me started on the track of whether there was maybe a more Pythonic way of dealing with that decision chain.

> The usual dictionary-controlled construct I've seen (and used)
> involves using functions as the dict's values: [...]

Yeah, I do that a lot, too, but for that you need a meaningful "key" object. In the case at hand, I'm really using individually formulated conditions.

> Is it ironic that the language does not have a form of case/switch
> statement (despite many pleas and attempts to add it),

Wouldn't do me any good here, because case/switch also compares a fixed key against a bunch of candidates, like a dict. Also, in terms of line count the if/elif chain isn't worse than a switch statement.

> yet the linter objects to an if-elif-else nesting???

Like I said, that's what got me started on this. And it's not even nested, it's purely linear.

> One reason why this code looks a bit strange (and possibly why PyLint
> reacts?) is because it is trivial. When we look at the overall
> picture, the question becomes: what will the 'mainline' do with
> "result" (the returned value)? Immediately one suspects it will be
> fed into another decision, eg:

No, the "result" is a text message that actually means something and is eventually output for human consumption.

The whole thing is rather academic. Also, my efficiency argument doesn't hold water because this routine is executed just a few times per hour. I like the "condition table" approach for its lower line count, but I'll stick with if/elif because it's more conventional and therefore easier to understand for the casual reader.
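For readers who haven't seen the dictionary-controlled construct alluded to above, a generic sketch of that pattern (my own illustration, not the example elided from the quoted mail) might look like:

def on_start(msg):
    return "starting " + msg

def on_stop(msg):
    return "stopping " + msg

handlers = {"start": on_start, "stop": on_stop}

def dispatch(command, msg):
    # the dict key must be a single hashable value, which is exactly
    # what the individually formulated conditions above don't provide
    return handlers[command](msg)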
Re: Python3 - How do I import a class from another file
On 10.12.2019 at 22:33, Paul Moore wrote:

> You do understand that the reference counting garbage collector is an
> implementation detail of the CPython implementation *only*, don't
> you?

I don't think that's true. Here's a sentence from near the top of the "gc" module documentation of Python 3: https://docs.python.org/3/library/gc.html#module-gc

"Since the collector supplements the reference counting already used in Python, you can disable the collector if you are sure your program does not create reference cycles."

The way I read this is that Python automatically and immediately deletes objects once their refcount goes to zero, and the garbage collector only kicks in in case of circular references or other obscure circumstances. The documentation makes no reference to the specific Python implementation, so I believe this is true for CPython as well as others.

To be specific: within the semantics of the Python documentation, freeing the resources used by an object by explicitly or implicitly using "del" is not garbage collection. Python garbage collection is like street cleaning in real life: if everybody looked after their own trash, we wouldn't need a municipal service to do it.

When I first read about the Python garbage collector I was puzzled by the possibility of disabling it, thinking that over time a long-running program would fill all memory because no object's resources would ever be freed. But that is clearly not the case. Even Instagram can live without garbage collection (although if you look at how much garbage is on Instagram, maybe they should re-enable it): https://instagram-engineering.com/dismissing-python-garbage-collection-at-instagram-4dca40b29172

> The (implementation independent) language semantics makes no
> assertion about what form of garbage collection is used, and under
> other garbage collectors, there can be an indefinite delay between
> the last reference to a value being lost and the object being
> collected (which is when __del__ gets called).

Only when there are circular references. Otherwise every Python implementation will delete objects once their refcount goes to zero, even when there is no garbage collection at all, see the doc.

> There is not even a guarantee that CPython will retain the reference
> counting GC in future versions.

There is no "reference counting GC" in Python. Freeing objects based on their reference count going to zero happens independently of the GC, see the official docs quoted above.
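A small experiment illustrating the distinction being argued about (CPython-specific behaviour; other implementations may defer the call indefinitely):

import gc

class Noisy:
    def __del__(self):
        print("finalized")

gc.disable()   # switch off the cycle collector entirely
obj = Noisy()
del obj        # on CPython this prints "finalized" right away, because the
               # refcount drops to zero; the disabled collector plays no part
gc.enable()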
Re: Python3 - How do I import a class from another file
On 11.12.2019 at 11:01, Greg Ewing wrote:

> On 11/12/19 7:47 am, R.Wieser wrote:
>> what happens when the reference becomes zero: is the __del__ method
>> called directly (as I find logical), or is it only called when the
>> garbage collector actually removes the instance from memory (which
>> Chris thinks what happens) ?
>
> In CPython, these are the same thing. As soon as the reference count
> becomes zero, the __del__ method is called

Yes

> *and* the object is removed from memory.

Not necessarily. Let's be more precise: the resources used by the object are handed back to memory management (marked as free for re-use).

> In a Python implementation that doesn't use reference counts, using
> "del" on the last reference to an object probably isn't going to do
> anything to the object.

It does exactly the same thing: it tells the memory management subsystem that the instance's resources may be re-used.
Re: Python3 - How do I import a class from another file
On 11.12.2019 at 11:22, R.Wieser wrote:

> I think I will just go out on a limb and just assume that the __del__
> method /will/ be called as part of a "del instance" request causing
> the reference count to reach zero (directly or indirectly), before
> the next command is executed [...].

That's what I take the word "when" to mean in the documentation: https://docs.python.org/3/reference/datamodel.html#object.__del__

"Note: del x doesn’t directly call x.__del__() — the former decrements the reference count for x by one, and the latter is only called when x’s reference count reaches zero."
Re: Python3 - How do I import a class from another file
On Tue, 10 Dec 2019 14:56:10 -0500 Dennis Lee Bieber wrote:

> It is called when the language IMPLEMENTATION decides to call it.
> That time is not specified in the language description/reference
> manual.

Yes it is: "Note: del x doesn’t directly call x.__del__() — the former decrements the reference count for x by one, and the latter is only called when x’s reference count reaches zero."

Plain and simple: when the refcount reaches zero. A few lines down, however, it says:

> Any code that is based upon assuming memory reclamation takes place
> at any specific time (other than program exit) is erroneous.

That is correct, but the decision when to reclaim memory is not made by __del__ but by the memory management subsystem after (for instance, in CPython) calls to PyMem_Free().

> Some implementations do not use a reference counter -- they rely
> solely upon a periodic mark&sweep garbage collector. cf:

Correct again, but the fray in this thread is about when __del__ is called, not when memory reclaim takes place. Two different things.
Re: Python3 - How do I import a class from another file
On Tue, 10 Dec 2019 22:08:48 +0100 "R.Wieser" wrote:

> And although you have been fighting me over when the __del__ method
> is called, it /is/ called directly as a result of an "del instance"
> and the refcount goes zero. There is /no/ delay. (with the only
> exception is when a circular reference exists).
>
> Hence, no "race condition" problem.

Under what circumstances would freeing memory in an unspecified order generate race conditions (except when freeing an unused chunk of memory too late would cause the system to run out of memory)? Genuinely interested, not looking for a fight.
Re: Python3 - How do I import a class from another file
Hi Chris,

> The most important distinction, which that note is emphasizing, is
> that the "del" statement removes just one reference, and if there are
> other references, then __del__ will not be called.

No argument there, that's how reference counting works, and it's clear from the docs. What is not clear from the documentation is not if or why or how but *when* the __del__ method is eventually called. The doc says: "when [an object's] reference count reaches zero." Initially I read that to mean "immediately upon the reference count hitting zero," but a couple of paragraphs down we find this:

"In particular: __del__() can be invoked when arbitrary code is being executed, including from any arbitrary thread. [...]"

I used to believe that the __del__ method was called immediately after refcount = zero, and that only the memory management system's functions (typically called from __del__) could take their time doing their things. Not true, it seems: calling __del__() can be deferred, and PyMem_Free() et al. might not immediately "do" anything either.

I've come to see the __del__ method as a tool to get rid of an object (and its resources) which can be used by the Python interpreter at its own discretion, when and if it feels the need to do so, which may be anything between immediately after ob_refcnt == 0 and not at all.

That said, I can't think of any reason to explicitly define __del__() in a pure Python class. Do you have an example?
Re: Python3 - How do I import a class from another file
On Thu, 12 Dec 2019 11:33:25 + Rhodri James wrote:

> On 11/12/2019 21:32, [email protected] wrote:
>> Plain and simple: When the refcount reaches zero.
>
> You are assuming that "when" implies "immediately on the occurence."

I'm not implying that. It's the dictionary definition of the word "when."

> This happens to be the behaviour in CPython, but other
> implementations vary as Chris has explained several times now.

And he's right. The documentation is unclear. It should be re-worded to reflect that the point at which the __del__() method is called after the refcount reaches zero is unspecified.
Re: Transfer a file to httpserver via POST command from curl
On Fri, 13 Dec 2019 03:54:53 +1100 Chris Angelico wrote:

> On Fri, Dec 13, 2019 at 3:44 AM Karthik Sharma wrote:
>>
>> Is it really possible to transfer a large binary file from my
>> machine to the above httpserver via POST command and download it
>> again? If yes, is the above Flask app enough for that and what am I
>> doing wrong?
>
> I think your Flask code is okay (haven't checked in detail, but at
> first glance it looks fine), but for file uploads to be recognized in
> request.files, you'll need to change the way you run curl.

BTW, the canonical way to upload files via http is PUT, not POST. You might want to look into that, but here it is off-topic.
Re: More efficient/elegant branching
On Fri, 13 Dec 2019 11:34:24 +0100 Antoon Pardon wrote:

> Well if you really want to go this route, you may consider the
> following:
>
> def branch4(a, b, z):
>     decision = [
>         ((lambda: a > 4 and b == 0), "first"),
>         ((lambda: len(z) < 2), "second"),
>         ((lambda: b + a == 10), "third")]
>     for test, result in decision:
>         if test(): return result

Nice. But as I've said before, in this case code legibility is more important than out-clevering myself, so I'll stick with if/elif.
Re: Transfer a file to httpserver via POST command from curl
On Wed, 18 Dec 2019 04:52:33 +1100 Chris Angelico wrote:

> On Wed, Dec 18, 2019 at 4:45 AM wrote:
>> BTW, the canonical way to upload files via http is PUT, not POST.
>> You might want to look into that, but here it is off-topic.
>
> Citation needed.

https://tools.ietf.org/html/rfc2616#page-55

> Plenty of file uploads are done through POST requests.

Of course. Both work. It's just that the OP wanted to "upload a large binary file" using curl, and in such cases I find that PUT can make for a cleaner, simpler interface.

> Are you talking specifically about a RESTful API? Because that's only
> one of many patterns you can follow.

Sure. It's just that people sometimes aren't even aware of http methods besides GET and POST, and there's a chance for the OP to investigate this and maybe find that PUT fits his needs better than POST in this case.
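For concreteness, the two curl invocations being contrasted might look like this (the URL and file name are made up, and the Flask code from the original thread only handles the POST variant):

# POST the file as a multipart form field, which is what Flask's
# request.files expects
curl -F "file=@large.bin" http://localhost:5000/upload

# PUT the raw bytes as the request body (the pattern argued for above)
curl -X PUT --data-binary @large.bin http://localhost:5000/upload/large.bin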
Re: How to get dynamic data in html (javascript?)
On Sat, 11 Jan 2020 14:39:38 +0100
Friedrich Rentsch wrote:
> I'm pretty good at hacking html text. But I have no clue how to get
> dynamic data like this : "At close: {date} {time}". I would
> appreciate a starting push to narrow my focus, currently awfully
> unfocused. Thanks.
Focus on the str.format() function.
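To spell out why: the braces in "At close: {date} {time}" look like unfilled str.format() placeholders, i.e. a template whose data was never substituted in. A tiny illustration with made-up values:

>>> "At close: {date} {time}".format(date="2020-01-10", time="17:30")
'At close: 2020-01-10 17:30'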
Sandboxing eval() (was: Calculator)
Is it actually possible to build a "sandbox" around eval, permitting it only to do some arithmetic and use some math functions, but no filesystem access or module imports?

I have an application that loads calculation recipes (a few lines of variable assignments and arithmetic) from a database. I run exec(string, globals, locals) with locals containing the input variables and globals containing a __builtin__ object with a few math functions.

It works, but is it safe?
Re: Sandboxing eval() (was: Calculator)
On Mon, 20 Jan 2020 06:43:41 +1100 Chris Angelico wrote:

> On Mon, Jan 20, 2020 at 4:43 AM wrote:
>> It works, but is it safe?
>
> As such? No.

That's what many people have said, and I believe them. But just from a point of technical understanding: if I start with empty global and local dicts, and an empty __builtins__, and I screen the input string so it can't contain the string "import", is it still possible to mount "targeted" malicious attacks? Of course any script can try to crash the Python interpreter or the whole machine by gobbling up memory, wreaking all sorts of havoc, but by "targeted" I mean accessing the file system or the operating system in a deterministic way. My own intranet application needs to guard against accidents, not intentionally malicious attacks.

> However, there are some elegant hybrid options, where you can make
> use of the Python parser to do some of your work, and then look at
> the abstract syntax tree.

Sounds interesting. All I need is a few lines of arithmetic and variable assignments. Blocking ':' from the input should add some safety, too.

> Research the "ast" module for some ideas on what you can do.

Will do.
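A sketch of the ast-based approach hinted at above (my own illustration, not code from this thread): parse the recipe, reject any syntax node outside a small whitelist, and only then exec it with empty builtins. The whitelist below is an assumption and would need tuning to the actual recipes.

import ast
import math

# Node types considered acceptable for simple arithmetic recipes
# (Python 3.8+ constant nodes; older versions would also need ast.Num).
_ALLOWED = (
    ast.Module, ast.Expr, ast.Assign, ast.Name, ast.Store, ast.Load,
    ast.Constant, ast.BinOp, ast.UnaryOp, ast.Add, ast.Sub, ast.Mult,
    ast.Div, ast.Pow, ast.USub, ast.Call,
)

def check_recipe(source, allowed_names):
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if not isinstance(node, _ALLOWED):
            raise ValueError("forbidden construct: " + type(node).__name__)
        if isinstance(node, ast.Name) and node.id not in allowed_names:
            raise ValueError("unknown name: " + node.id)

# Usage sketch
env = {"__builtins__": {}, "sqrt": math.sqrt}
recipe = "y = sqrt(x) * 2"
check_recipe(recipe, allowed_names={"x", "y", "sqrt"})
local_vars = {"x": 9.0}
exec(recipe, env, local_vars)
print(local_vars["y"])   # 6.0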
Re: Clarification on Immutability please
On 21.01.2020 at 19:38, Chris Angelico wrote:

> On Wed, Jan 22, 2020 at 4:42 AM Stephen Tucker wrote:
>> and even that the first id(mytup) returns the same address as the
>> second one, I am left wondering exactly what immutability is.

Let's look at id()'s documentation:

    id(object)
    Return the “identity” of an object. This is an integer which is
    guaranteed to be unique and constant for this object during its
    lifetime. Two objects with non-overlapping lifetimes may have the
    same id() value.

> Are you sure that it does? I can't reproduce this. When you slice the
> first two from a tuple, you create a new tuple, and until the
> assignment happens, both the new one and the original coexist, which
> means they MUST have unique IDs.

I'd expect that, too, but an "atomic" reassignment would not contradict the documentation.

>> Somehow, it seems, tuples can be reduced in length (from the far
>> end) (which is not what I was expecting), but they cannot be
>> extended (which I can understand).

Different ID means different object, but identical ID doesn't mean identical object. The Python implementation allows re-use of an object's ID after the object has been destroyed, and the documentation mentions this explicitly.
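A short illustration of that last point (whether the ids coincide is deliberately left open, since neither outcome is guaranteed):

t = (1, 2, 3, 4)
first = id(t)
t = t[:2]              # builds a brand-new, shorter tuple and rebinds the name
print(id(t) == first)  # may be True or False: the old tuple is gone, so its
                       # id is merely *allowed* to be re-used, not required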
Re: Sandboxing eval() (was: Calculator)
Thanks, Chris (and others), for the comprehensive answer (as usual). I got interesting insights into Python's inner workings. Of course, when everything is an object, everything has parents and other relatives, so by traversing that tree in the right way one can make one's way all the way to the core. Thanks.
Re: Pandas rookie
On Wed, 19 Feb 2020 17:15:59 -0500 FilippoM wrote:

> How can I use Pandas' dataframe magic to calculate, for each of the
> possible 109 values, how many have VIDEO_OK, and how many have
> VIDEO_FAILURE I have respectively?

crosstab()
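Spelled out a little (the column names are hypothetical, since the original post didn't show the frame's layout):

import pandas as pd

# Toy stand-in for the real data
df = pd.DataFrame({
    "value":  [3, 3, 7, 7, 7],
    "status": ["VIDEO_OK", "VIDEO_FAILURE", "VIDEO_OK", "VIDEO_OK", "VIDEO_FAILURE"],
})

# One row per distinct value, one column per status, cells are counts
print(pd.crosstab(df["value"], df["status"]))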
Re: encapsulating a global variable
On 25.02.2020 at 13:38, BlindAnagram wrote:

> and I am wondering if it is possible to use a class something like
>
> class get_it(object):
>     seen = dict()
>     def __call__(piece):
>         return seen[piece]

What happened when you tried it?
Re: Using zipfile to create a zip file with directories and files inside those directories
On Fri, 6 Mar 2020 20:06:40 -0700 Michael Torrie wrote:

> The documentation talks about writing files from disk, but I'm
> interested in creating these files from within Python directly in the
> zip archive.

But you have seen writestr(), haven't you?

    ZipFile.writestr(zinfo_or_arcname, data, compress_type=None, compresslevel=None)
    Write a file into the archive. The contents is data, which may be
    either a str or a bytes instance;

> So I naively thought I could use the "open" method of zipfile, giving
> it a relative path describing the relative path and file name that I
> want to then write bytes to. But seems I am mistaken.

No, you're not. writestr() and ZipInfo are your friends, unless I haven't understood what you're trying to do.
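A minimal sketch of what that looks like (archive and entry names are made up):

import zipfile

with zipfile.ZipFile("out.zip", "w", zipfile.ZIP_DEFLATED) as zf:
    # Entries are created straight from in-memory data; a directory
    # inside the archive is implied simply by the slash in the arcname.
    zf.writestr("readme.txt", "top-level file\n")
    zf.writestr("subdir/data.bin", b"\x00\x01\x02")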
Re: PEP Idea: Multi-get for lists/tuples and dictionaries (inspired in NumPy)
Hello,
Either I am missing the point, or everybody else is. I understand
the OP's proposal like this:
dict[set] == {k: dict[k] for k in set}
list[iterable] == [list[i] for i in iterable]
Am I right?
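If that reading is correct, here is a sketch of those semantics for the dict case (an illustration using a subclass, not part of the original proposal):

class MultiGetDict(dict):
    def __getitem__(self, key):
        if isinstance(key, (set, frozenset)):
            return {k: dict.__getitem__(self, k) for k in key}
        return dict.__getitem__(self, key)

d = MultiGetDict(a=1, b=2, c=3)
print(d[{"a", "c"}])   # -> {'a': 1, 'c': 3}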
How to instantiate a custom Python class inside a C extension?
Hi guys,
I'm wondering how to create an instance of an extension class I wrote.
There's a minimal self-contained C module at the bottom of this post
which exports two things: 1) a class Series, and 2) a function
make_series() which is supposed to create a Series object on the C side
and return it. The make_series function uses PyObject_New() and
PyObject_Init() to create the new instance, but all it produces is some
kind of zombie instance which tends to crash the application with a
segfault in real life. When instantiated from Python using Series(), I
get a well-behaved instance.
I've sprinkled the New, Init and Finalize functions with fprintf()s to
see what happens to the object during its lifetime.
When I run this test script:
from series import *
print("From Python")
s1 = Series()
del s1
print("\nFrom C")
s2 = make_series()
del s2
I get this output:
From Python
New Series at 0x7f89313f6660
Init Series at 0x7f89313f6660
Finalize Series at 0x7f89313f6660
From C
Finalize Series at 0x7f89313f6678
So when created from C, neither the "new" nor the "init" functions are
called on the object, only "finalize". No wonder I get segfaults in the
real life application.
So how is this done right? Here's the C module:
#include <Python.h>

typedef struct {
    PyObject_HEAD
    void *data;
} Series;

static PyObject *Series_new(PyTypeObject *type,
                            PyObject *args, PyObject *kw) {
    Series *self;
    self = (Series *) type->tp_alloc(type, 0);
    self->data = NULL;
    fprintf(stderr, "New Series at %p\n", self);
    return (PyObject*)self;
}

static int Series_init(Series *self, PyObject *args, PyObject *kw) {
    fprintf(stderr, "Init Series at %p\n", self);
    return 0;
}

static void Series_finalize(PyObject *self) {
    fprintf(stderr, "Finalize Series at %p\n", self);
}

static PyMethodDef series_methods[] = {
    {NULL, NULL, 0, NULL}
};

static PyTypeObject series_type = {
    PyVarObject_HEAD_INIT(NULL, 0)
    .tp_name = "_Series",
    .tp_basicsize = sizeof(Series),
    .tp_flags = 0
        | Py_TPFLAGS_DEFAULT
        | Py_TPFLAGS_BASETYPE,
    .tp_doc = "Series (msec, value) object",
    .tp_methods = series_methods,
    .tp_new = Series_new,
    .tp_init = (initproc) Series_init,
    .tp_dealloc = Series_finalize,
};

/* To create a new Series object directly from C */
PyObject *make_series(void *data) {
    Series *pyseries;
    pyseries = PyObject_New(Series, &series_type);
    PyObject_Init((PyObject *)pyseries, &series_type);
    pyseries->data = data;
    return (PyObject *) pyseries;
}

static PyMethodDef module_methods[] = {
    {"make_series", (PyCFunction)make_series, METH_NOARGS,
     "Instantiate and return a new Series object."},
    {NULL, NULL, 0, NULL}
};

static PyModuleDef series_module = {
    PyModuleDef_HEAD_INIT,
    "series",
    "Defines the Series (time, value) class",
    -1,
    module_methods
};

PyMODINIT_FUNC PyInit_series(void) {
    PyObject *m;
    m = PyModule_Create(&series_module);
    if (PyType_Ready(&series_type) < 0) {
        return NULL;
    }
    PyModule_AddObject(m, "Series", (PyObject*)&series_type);
    return m;
}
Re: How to instantiate a custom Python class inside a C extension?
On 01.04.2020 at 15:01, Rhodri James wrote:

> I believe you do it in C as you would in Python: you call the Series
> class!
>
> pyseries = PyObject_CallObject((PyObject *)&series_type, NULL);

Well, that dumps core just as everything else I tried. What does work, however, is calling PyType_Ready first:

    PyType_Ready(&series_type);
    pyseries = PyObject_New(Series, &series_type);
    PyObject_Init((PyObject *)pyseries, &series_type);

I don't understand, though, why I have to do that and when. Didn't that already happen when the module was imported? Do I need to do it whenever I create a new instance in C?
