Stating the obvious, but BSD code also
requires attribution, either in the code itself or in the docs.
I'm told that Bing Copilot often displays links to the origin
of the generated code, such as Stack Overflow. So some tools do "know"
where the code came from and recognize the general c
regulations on the uploader, no matter
where the uploader was from.
I was told in no uncertain terms that this policy was just and that
it would protect the PSF (protection of uploaders was not a concern).
Stefan Krah
as the inner dimensions.
Sorry for the long mail; I hope this clears up a bit what function signatures
generally look like.
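
As a minimal sketch of what such a signature looks like in practice (my own
illustration, not from this mail): np.vectorize accepts a gufunc-style
signature string, where the named dimensions are the core (inner) dimensions
and anything to their left is looped and broadcast over by the caller.

===
# Minimal sketch, not from the original mail: a gufunc-style signature.
# "(m,n),(n,p)->(m,p)" names the core (inner) dimensions; any leading
# dimensions are loop dimensions that the caller broadcasts over.
import numpy as np

matmul = np.vectorize(np.dot, signature="(m,n),(n,p)->(m,p)")

a = np.ones((4, 2, 3))     # 4 is a loop dimension, (2, 3) are core dimensions
b = np.ones((4, 3, 5))
print(matmul(a, b).shape)  # -> (4, 2, 5)
===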
Stefan Krah
nt64".
> I realize that this is no longer about describing precisely what the
> function doing the calculation expects, but rather what an upper level is
> allowed to do before calling the function (i.e., take a dimension of 1 and
> broadcast it).
Yes, for datashape the problem
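
As a concrete illustration of the quoted point (my own example; core_add is a
hypothetical function, not something from the thread): the core function
states its precise expectation, namely equal shapes, and the upper level is
the one allowed to take a dimension of 1 and broadcast it before the call.

===
# Illustration only: the core routine demands exactly matching shapes, and
# the upper level broadcasts a dimension of 1 before calling it.
import numpy as np

def core_add(x, y):
    assert x.shape == y.shape   # precise expectation of the core function
    return x + y

a = np.arange(12).reshape(3, 4)
b = np.arange(4).reshape(1, 4)            # dimension of 1 along the first axis

b_expanded = np.broadcast_to(b, a.shape)  # upper level stretches the 1-axis
print(core_add(a, b_expanded))
===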
reasonable. The only garbage-collected
language I know of that achieves the _same_ extension speed
is OCaml (which is quite an achievement).
SBCL Lisp, which has a great compiler for pure Lisp, also does not have
really fast extensions.
Does the Graal VM solve this issue?
n CPython C-API.
All of that under the assumption that there are many API calls.
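
To make the "many API calls" assumption concrete, here is a rough timing
sketch (mine, not from the mail): a million tiny calls that each cross the
interpreter/extension boundary versus a single call that does the equivalent
work inside the extension.

===
# Rough sketch: per-call overhead dominates when there are many small calls.
import time
import math
import numpy as np

N = 1_000_000
xs = np.linspace(1.0, 2.0, N)

t0 = time.perf_counter()
out1 = [math.sqrt(v) for v in xs]   # one boundary crossing per element
t1 = time.perf_counter()
out2 = np.sqrt(xs)                  # one call; the loop runs inside the extension
t2 = time.perf_counter()

print("per-element calls: %.3fs  single call: %.3fs" % (t1 - t0, t2 - t1))
===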
Stefan Krah
> Is there a proper reproducible benchmark?
Here are the benchmarks for _decimal:
bench.py
import time
import platform
if p
(As of version 3.9, _decimal has been slowed down significantly
since I left.)
_decimal of course operates on scalars and has many API calls, so maybe
for NumPy this is not relevant except for small arrays.
Or perhaps HPy has evolved in the meantime (the above GitHub thread i
= y
print(y)  # In case optimizing compilers have gotten smart and eliminated the loop.
===
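
Since bench.py is cut off above, here is only a guess at its general shape
(the loop body, precision, and iteration count are my assumptions, not the
actual script): time many scalar Decimal operations and print the result so
the loop cannot be optimized away.

===
# Sketch of a scalar _decimal benchmark of this general shape; the loop body,
# precision, and iteration count are assumptions, not the original script.
import time
from decimal import Decimal, getcontext

getcontext().prec = 28
ITER = 1_000_000

y = Decimal(0)
step = Decimal("1.0000001")
start = time.perf_counter()
for _ in range(ITER):
    y = (y + step) * step    # each iteration makes a couple of C-API calls
elapsed = time.perf_counter() - start

print(y)   # In case optimizing compilers have gotten smart and eliminated the loop.
print("%.3fs" % elapsed)
===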
Stefan Krah