[Numpy-discussion] Re: Enhancement: np.convolve(..., mode="normalized")
Hi Filip,

On Wed, 22 Nov 2023 at 14:24, … wrote:
>
> Convolution is often used for smoothing noisy data; a typical use will keep
> the 'same' length of data and may look like this:
>
> >convol = 2**-np.linspace(-2,2,100)**2;
> >y2 = np.convolve(y,convol/np.sum(convol), mode='same') ## simple smoothing
> >ax.plot(x, y2, label="simple smoothing", color='g')

First, it might be useful to calculate the convolution kernel `convol` with the exponent of the normal distribution in its full glory, such that its standard deviation is known. Currently it is just "some" normal distribution, which goes down to 0.0625 at its edges.

What you might want to achieve is to use a *different kernel* at the edges. It seems you're trying to use, at the edges, a kernel version normalised over the positions which overlap with the domain of `y`.

Before diving into this: It would possibly be safest to just discard the positions in the support of `y` which suffer from the boundary effect. In your example, this would mean cutting away, say, 20 positions at each side. The choice can be made systematically by looking at the standard deviation of the kernel. This would produce reliable results without much headache and without subsidiary conditions, and it would make interpretation of the results much easier, as it wouldn't be required to keep that extra maths in mind.

> >convol = 2**-np.linspace(-2,2,100)**2;
> >norm = np.convolve(np.ones_like(y),convol/np.sum(convol), mode='same')
> >y2 = np.convolve(y,convol/np.sum(convol), mode='same')/norm ## simple smoothing
> >ax.plot(x, y2, label="approach 2", color='k')

`norm` holds the sums of the "truncated" Gaussians. Dividing by it should mean the same as using truncated kernels at the edges, which are normalised *on their truncated domain*. So it should implement what I described above. Maybe this can be checked by applying the method to some artificial test function, most easily to a constant input to be convolved. It should result in completely constant values also at the edges. I would be interested to hear if this works. Looking at your "real world" result, I am not entirely sure if I am not mistaken at some point here.

> In my experimental work, I am missing this behaviour of np.convolve in a
> single function. I suggest this option should be accessible in numpy under the
> mode="normalized" option. (Actually I believe this could have been set as
> default, but this would break compatibility.)

I deem your proposed solution too special for this. It can be implemented in a few lines in "numpy user space", if needed.

Best,
Friedrich

___
NumPy-Discussion mailing list -- numpy-discussion@python.org
To unsubscribe send an email to numpy-discussion-le...@python.org
https://mail.python.org/mailman3/lists/numpy-discussion.python.org/
Member address: arch...@mail-archive.com
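The constant-input check proposed above can be sketched as follows (variable names follow the thread; the kernel comes from the quoted code, the input length of 100 is an assumption):

```python
import numpy as np

# Constant test input: the normalised smoothing should reproduce it
# exactly, including at the edges.
y = np.ones(100)

convol = 2.0 ** -np.linspace(-2, 2, 100) ** 2
kernel = convol / np.sum(convol)

# 'norm' holds the sums of the truncated kernels near the edges.
norm = np.convolve(np.ones_like(y), kernel, mode='same')
y2 = np.convolve(y, kernel, mode='same') / norm

print(np.allclose(y2, 1.0))  # -> True
```

For a constant input, the numerator and `norm` are the same convolution, so the quotient is constant everywhere, confirming that the division implements the truncated-domain normalisation.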
[Numpy-discussion] Re: Numpy not working
Hi Christophe,

On Sun, 10 Dec 2023 at 19:49, Christophe Nassar wrote:
>
> "(tf) C:\*\\\DiffMorph-master\DiffMorph-master>morph.py -s images/img_1.jpg -t images/img_2.jpg
> Traceback (most recent call last):
>   File "C:\Users\chris\Desktop\DiffMorph-master\DiffMorph-master\morph.py", line 3, in
>     import numpy as np
> ModuleNotFoundError: No module named 'numpy'"

It looks to me as if the Python interpreter used to run your script 'morph.py' when you just say '> morph.py [...]' is different from the one you're using when e.g. running pip. Maybe you can try '> python morph.py [...]' and see if this works, or just '> python' followed by '>>> import numpy' to narrow down the error. You might also try the 'py' wrapper; it has several options to specify which of the installed Pythons to run.

HTH
Friedrich
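To narrow down which interpreter is actually running, a small diagnostic along these lines can be placed at the top of the script or run interactively (a sketch, not part of the thread):

```python
import importlib.util
import sys

# Which interpreter is executing this script?  If the path printed here
# differs from the one 'pip' belongs to, the two use different
# Python installations.
print(sys.executable)

# Can this interpreter locate numpy (without importing it)?
print(importlib.util.find_spec("numpy") is not None)
```

If the second line prints `False`, installing numpy with `python -m pip install numpy` (using the same `python` that runs the script) resolves the mismatch.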
[Numpy-discussion] Unreliable crash when converting using numpy.asarray via C buffer interface
Hi,

This is with Python 3.8.2 64-bit and numpy 1.19.2 on Windows 10.

I'd like to be able to convert some C++ extension type to a numpy array by using ``numpy.asarray``. The extension type implements the Python buffer interface to support this. The extension type, called "Image" here, holds some chunk of ``double``, C order, contiguous, 2 dimensions. It "owns" the buffer; the buffer is not shared with other objects.

The following Python code crashes::

    image = <... Image production ...>
    ar = numpy.asarray(image)

However, when I say::

    image = <... Image production ...>
    print("---")
    ar = numpy.asarray(image)

the entire program executes properly, with correct data in the numpy ndarray produced using the buffer interface.

The extension type permits reading the pixel values by a method; copying them over in a Python loop works fine. I am ``Py_INCREF``-ing the producer properly in the C++ buffer view creation function. The shapes and strides of the buffer view are ``delete[]``-ed upon releasing the buffer; avoiding this does not prevent the crash. I am catching ``std::exception`` in the view creation function; no such exception occurs. The shapes and strides are allocated by ``new Py_ssize_t[2]``, so they survive the view creation function.

I spent some hours trying to figure out what I am doing wrong. Maybe someone has an idea about this? I double-checked each line of code related to this problem and couldn't find any mistake. Probably I am not looking at the right aspect.

Best,
Friedrich

___
NumPy-Discussion mailing list
NumPy-Discussion@python.org
https://mail.python.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] Unreliable crash when converting using numpy.asarray via C buffer interface
Hi,

On Tue, 26 Jan 2021 at 09:48, Friedrich Romstedt wrote:
>
> [...] The following Python code crashes::
>
>     image = <... Image production ...>
>     ar = numpy.asarray(image)
>
> However, when I say::
>
>     image = <... Image production ...>
>     print("---")
>     ar = numpy.asarray(image)
>
> the entire program is executing properly with correct data in the
> numpy ndarray produced using the buffer interface.
>
> [...]

Does anyone have an idea about this?

By the way, I noticed that this mailing list has turned pretty quiet; am I missing something?

For completeness: the abovementioned "crash" shows up as just a premature exit of the program. There is no error message whatsoever. The buffer view producing function raises exceptions properly when something goes wrong; also notice that this code completes without error when the ``print("---")`` statement is in action. So I presume the culprit lies somewhere on the C level. I can only guess that it might be some side effect unknown to me.

Best,
Friedrich
Re: [Numpy-discussion] Unreliable crash when converting using numpy.asarray via C buffer interface
Hello Matti,

On Mon, 1 Feb 2021 at 09:46, Matti Picus wrote:
>
> [...]
>
> It is very hard to help you from this description. It may be a refcount
> problem, it may be a buffer protocol problem, it may be something else.

Yes, indeed!

> Typically, one would create a complete example and then point to the
> code (as a repo or pastebin, not as an attachment to a mail here).

https://github.com/friedrichromstedt/bughunting-01

I boiled it down considerably, compared to the program where I stumbled upon the problem. In the abovementioned repo, you find a Python test script in the `test/` folder. Therein, a single `print` statement can be used to trigger or to avoid the error. On Linux, I get a somewhat more precise description than just the premature exit on Windows: it is a segfault. Certainly it is still asking quite a lot to skim through my source code; however, I hope that I trimmed it down sufficiently.

> - Make sure you give instructions how to build your project for Linux,
> since most of the people on this list do not use windows.

The code reproducing the segfault can be compiled by `$ python3 setup.py install`, both on Windows as well as on Linux.

> - There are tools out there to analyze refcount problems. Python has
> some built-in tools for switching allocation strategies.

Can you give me some pointer about this?

> - numpy.asarray has a number of strategies to convert instances, which
> one is it using?

I've tried to read about this, but couldn't find anything. What are these different strategies?

Many thanks in advance,
Friedrich
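Regarding the conversion strategies asked about above: besides the C buffer protocol, ``numpy.asarray`` also consults an object's ``__array__`` method and its ``__array_interface__`` attribute. A minimal illustration of those two hooks (the class names are made up for this sketch):

```python
import numpy as np

class ViaArrayMethod:
    # numpy.asarray calls __array__ when the object provides it.
    def __array__(self, dtype=None, copy=None):
        return np.arange(4.0)

class ViaArrayInterface:
    # numpy.asarray can also read the __array_interface__ dictionary;
    # here we simply borrow the interface of an array we keep alive.
    def __init__(self):
        self._data = np.arange(4.0)
        self.__array_interface__ = self._data.__array_interface__

print(np.asarray(ViaArrayMethod()))     # [0. 1. 2. 3.]
print(np.asarray(ViaArrayInterface()))  # [0. 1. 2. 3.]
```

An extension type exporting the buffer protocol, as in the thread, is a third path; which one wins depends on which protocols the object implements.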
Re: [Numpy-discussion] Unreliable crash when converting using numpy.asarray via C buffer interface
Hi,

On Thu, 4 Feb 2021 at 09:07, Friedrich Romstedt wrote:
> On Mon, 1 Feb 2021 at 09:46, Matti Picus wrote:
> > Typically, one would create a complete example and then point to the
> > code (as a repo or pastebin, not as an attachment to a mail here).
>
> https://github.com/friedrichromstedt/bughunting-01

Last week I updated my example code to be more slim. There now exists a single-file extension module: https://github.com/friedrichromstedt/bughunting-01/blob/master/lib/bughuntingfrmod/bughuntingfrmod.cpp. The corresponding test program https://github.com/friedrichromstedt/bughunting-01/blob/master/test/2021-02-11_0909.py crashes "properly" both on Windows 10 (Python 3.8.2, numpy 1.19.2) as well as on Arch Linux (Python 3.9.1, numpy 1.20.0), when the ``print`` statement contained in the test file is commented out.

My hope to be able to fix my error myself by reducing the code needed to reproduce the problem has not been fulfilled. I feel that the abovementioned test code is short enough to ask for help with it here. Any hint on how I could solve my problem would be appreciated very much. There are some points which have not been clarified yet; I am citing them below.

So far,
Friedrich

> > - There are tools out there to analyze refcount problems. Python has
> > some built-in tools for switching allocation strategies.
>
> Can you give me some pointer about this?
>
> > - numpy.asarray has a number of strategies to convert instances, which
> > one is it using?
>
> I've tried to read about this, but couldn't find anything. What are
> these different strategies?
Re: [Numpy-discussion] Unreliable crash when converting using numpy.asarray via C buffer interface
Hello again,

On Mon, 15 Feb 2021 at 16:57, Sebastian Berg wrote:
>
> On Mon, 2021-02-15 at 10:12 +0100, Friedrich Romstedt wrote:
> > Last week I updated my example code to be more slim. There now exists
> > a single-file extension module:
> > https://github.com/friedrichromstedt/bughunting-01/blob/master/lib/bughuntingfrmod/bughuntingfrmod.cpp
> > The corresponding test program
> > https://github.com/friedrichromstedt/bughunting-01/blob/master/test/2021-02-11_0909.py
> > crashes "properly" both on Windows 10 (Python 3.8.2, numpy 1.19.2) as
> > well as on Arch Linux (Python 3.9.1, numpy 1.20.0), when the ``print``
> > statement contained in the test file is commented out.
>
> I have tried it out, and can confirm that using debugging tools (namely
> valgrind) will allow you to track down the issue (valgrind reports it
> from within python; running a python without debug symbols may
> obfuscate the actual problem; if that is limiting you, I can post
> my valgrind output).
> Since you are running a linux system, I am confident that you can run
> it in valgrind to find it yourself. (There may be other ways.)
>
> Just remember to run valgrind with `PYTHONMALLOC=malloc valgrind` and
> ignore some errors e.g. when importing NumPy.

From running ``PYTHONMALLOC=malloc valgrind python3 2021-02-11_0909.py`` (with the preceding call of ``print`` in :file:`2021-02-11_0909.py` commented out) I found a few things:

- The call might or might not succeed. It doesn't always lead to a segfault.

- "at 0x4A64A73: ??? (in /usr/lib/libpython3.9.so.1.0), called by 0x4A64914: PyMemoryView_FromObject (in /usr/lib/libpython3.9.so.1.0)", a "Conditional jump or move depends on uninitialised value(s)". After one more block of valgrind output ("Use of uninitialised value of size 8 at 0x48EEA1B: ??? (in /usr/lib/libpython3.9.so.1.0)"), it finally leads either to "Invalid read of size 8 at 0x48EEA1B: ??? (in /usr/lib/libpython3.9.so.1.0) [...] Address 0x1 is not stack'd, malloc'd or (recently) free'd", resulting in a segfault, or just to another "Use of uninitialised value of size 8 at 0x48EEA15: ??? (in /usr/lib/libpython3.9.so.1.0)", after which the program completes successfully.

- All this happens within "PyMemoryView_FromObject".

So I can only guess that the "uninitialised value" is compared to 0x0, and when it is different (e.g. 0x1), it leads via "Address 0x1 is not stack'd, malloc'd or (recently) free'd" to the segfault observed.

I suppose I need to compile Python and numpy myself to see the debug symbols instead of the "???" marks? Maybe even with ``-O0``? Furthermore, the shared object belonging to my code isn't involved directly in any way, so the segfault possibly has to do with some data I am leaving "uninitialised" at the moment.

Thanks for the other replies as well; for the moment I feel that going the valgrind way might teach me how to debug errors of this kind myself.

So far,
Friedrich
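Since the valgrind trace points into ``PyMemoryView_FromObject``, the same code path can be exercised without numpy at all: a plain ``memoryview`` of any buffer-exporting object goes through that function. In this sketch (not from the thread), a 2x2 float64 numpy array stands in for the ``Image`` extension type:

```python
import numpy as np

# Stand-in for the buffer-exporting "Image" extension type.
data = np.arange(4.0).reshape(2, 2)

# memoryview() calls PyMemoryView_FromObject, the function where the
# valgrind errors appeared; a crash here isolates the buffer export
# from anything numpy does on top of it.
view = memoryview(data)

# These fields mirror what the C-level Py_buffer must fill in
# consistently: shape, strides, format code, and item size.
print(view.shape, view.strides, view.format, view.itemsize)

# numpy.asarray consumes the same buffer interface.
print(np.asarray(view))
```

Applied to the real ``Image`` type, ``memoryview(image)`` would make a narrower reproducer than ``numpy.asarray(image)``.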
Re: [Numpy-discussion] Unreliable crash when converting using numpy.asarray via C buffer interface
Hi Lev,

On Tue, 16 Feb 2021 at 11:50, Lev Maximov wrote:
>
> I've reproduced the error you've described and got rid of it without valgrind.
> Those two lines are enough to avoid the segfault.

Okay, good to know, I'll try it! Thanks for looking into it.

> But feel free to find it yourself :)

Yes :-D

Best wishes,
Friedrich
Re: [Numpy-discussion] Unreliable crash when converting using numpy.asarray via C buffer interface
Hi Matti, Sebastian and Lev,

On Mon, 15 Feb 2021 at 18:50, Lev Maximov wrote:
>
> Try adding
>     view->suboffsets = NULL;
>     view->internal = NULL;
> to Image_getbuffer

Finally I got it working easily using Lev's pointer cited above. I didn't follow the valgrind approach any further, since I found it likely that it would produce the same finding.

This is just to let you know; I applied the fix several weeks ago.

Many thanks,
Friedrich
[Numpy-discussion] Re: An article on numpy data types
On Sat, 25 Dec 2021 at 10:03, Lev Maximov wrote:
>
> https://axil.github.io/numpy-data-types.html
> Speaking of zero-dimensional arrays more realistic example where you can run
> into them is when you iterate over a numpy array with nditer:

There seems to be an "a" missing before "more".

Overflow warning: Instead of

    >>> np.array([2**63-1])[0] + 1
    FloatingPointError: overflow encountered in longlong_scalars

on my machine it runs::

    >>> numpy.array([2 ** 63 - 1])[0] + 1
    :1: RuntimeWarning: overflow encountered in long_scalars

There are also some more significant unclarities remaining:

1. The RuntimeWarning is issued *only once*:

    >>> b = numpy.array([2 ** 63 - 1])[0]
    >>> b + 1
    :1: RuntimeWarning: overflow encountered in long_scalars
    -9223372036854775808
    >>> b + 1
    -9223372036854775808

2. And I do not get the difference here:

    >>> a = numpy.array(2 ** 63 - 1)
    >>> b = numpy.array([2 ** 63 - 1])[0]
    >>> a.dtype, a.shape
    (dtype('int64'), ())
    >>> b.dtype, b.shape
    (dtype('int64'), ())
    >>> with numpy.errstate(over='raise'):
    ...     a + 1
    ...
    -9223372036854775808
    >>> with numpy.errstate(over='raise'):
    ...     b + 1
    ...
    Traceback (most recent call last):
      File "", line 2, in
    FloatingPointError: overflow encountered in long_scalars

The only apparent difference I can get hold of is that:

    >>> a[()] = 0
    >>> a
    array(0)

but:

    >>> b[()] = 0
    Traceback (most recent call last):
      File "", line 1, in
    TypeError: 'numpy.int64' object does not support item assignment

While writing this down I realise that *a* is a zero-dimensional array, while *b* is an int64 scalar. This can also be seen from the beginning:

    >>> a
    array(9223372036854775807)
    >>> b
    9223372036854775807

So, unclarity resolved, but maybe I am not the only one stumbling over this. Maybe the idiom ``>>> c = numpy.int64(2 ** 63 - 1)`` can be used? I never used this, so I am unsure about the exact semantics of such a statement.

I am stopping studying your document here. Might be that I continue later.

Friedrich

___
NumPy-Discussion mailing list -- numpy-discussion@python.org
To unsubscribe send an email to numpy-discussion-le...@python.org
https://mail.python.org/mailman3/lists/numpy-discussion.python.org/
Member address: arch...@mail-archive.com
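On the open question about ``numpy.int64(2 ** 63 - 1)`` in the message above: it produces an int64 *scalar*, i.e. the same kind of object as *b*, not a zero-dimensional array. A short check (a sketch, not from the original message):

```python
import numpy as np

a = np.array(2 ** 63 - 1)       # zero-dimensional ndarray
b = np.array([2 ** 63 - 1])[0]  # np.int64 scalar
c = np.int64(2 ** 63 - 1)       # also an np.int64 scalar

print(type(a))   # <class 'numpy.ndarray'>
print(type(b))   # <class 'numpy.int64'>
print(type(c))   # <class 'numpy.int64'>

# The 0-d array supports item assignment; the scalars do not.
a[()] = 0
print(a)         # array(0)
```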
[Numpy-discussion] Re: Feature query: fetch top/bottom k from array
On Tue, 22 Feb 2022 at 14:25, Joseph Bolton wrote:
>
> I find myself often requiring the indices and/or values of the top (or
> bottom) k items in a numpy array.

There was discussion about this last year:

https://mail.python.org/archives/list/numpy-discussion@python.org/thread/F4P5UVTAKRJJ3OORI6UOWFSUEE5CNTSC/

Mentioned in that thread is the following pull request, which has some more discussion:

https://github.com/numpy/numpy/pull/19117

Friedrich
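For reference, until such a function lands, the usual user-space idiom uses ``np.argpartition``, which finds the k largest entries in O(n) without a full sort. A sketch (`top_k` is a made-up helper, not the API proposed in the linked PR):

```python
import numpy as np

def top_k(a, k):
    """Indices of the k largest items of 1-D array `a`, value-descending."""
    # argpartition does a partial sort: the last k positions of the
    # result hold the indices of the k largest values, in arbitrary order.
    idx = np.argpartition(a, -k)[-k:]
    # Order those k indices by value, descending, if order matters.
    return idx[np.argsort(a[idx])[::-1]]

a = np.array([5, 1, 9, 3, 7])
print(top_k(a, 2))     # indices of the two largest values
print(a[top_k(a, 2)])  # -> [9 7]
```

For the bottom k, the analogous call is ``np.argpartition(a, k)[:k]``.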