Thanks a lot, it is OK with this NMU; you can also commit to the git repository
if you want :).
https://salsa.debian.org/science-team/ifeffit
Cheers
Fred
Is it possible to make it binNMUable?
Fred
- On 14 Jan 25, at 13:20, Emilio Pozuelo Monfort po...@debian.org wrote:
> Package: pyvkfft-cuda
> Version: 2024.1.4+ds1-4
> Severity: serious
>
> Hi,
>
> The ongoing transition to python3.13 as the python3 interpreter causes your
> package to be uninstallable,
Hello, hkl and hs-hdf5 upstream here.
I can reproduce it and see that the current hkl code is not compatible with
haskell-hdf5 recompiled with hdf5 1.14.
This is something which should be solved at the hs-hdf5 level.
Build profile: -w ghc-9.6.6 -O1
In order, the following will be built (use -v fo
I am working on it at the upstream level;
I need a few more days.
Cheers
Fred
Here is an analysis of the FTBFS.
On amd64, I have two failures during the tests:
Test Summary Report
---
testPVAServer.t(Wstat: 0 Tests: 0 Failed: 0)
Parse errors: No plan found in TAP output
Files=6, Tests=129, 1 wallclock secs ( 0.05 usr 0.01 sys + 0.09 cusr 0.06
csys
Setting POCL_WORK_GROUP_METHOD=cbs makes it work:
$ POCL_WORK_GROUP_METHOD=cbs python3 test.py
[SubCFG] Form SubCFGs in bsort_all
[SubCFG] Form SubCFGs in bsort_horizontal
[SubCFG] Form SubCFGs in bsort_vertical
[SubCFG] Form SubCFGs in bsort_book
[SubCFG] Form SubCFGs in bsort_file
[SubC
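The same workaround can be applied when launching the test from Python, for
instance in a CI wrapper; a minimal sketch (the `test.py` launch is commented
out and shown only as an illustration):

```python
import os
import subprocess  # for the commented-out launch below

# Force pocl's work-group method, as in the shell workaround above.
env = dict(os.environ, POCL_WORK_GROUP_METHOD="cbs")

# subprocess.run(["python3", "test.py"], env=env, check=True)
print(env["POCL_WORK_GROUP_METHOD"])  # cbs
```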
With the latest version (NOT OK):
$ dpkg -l | grep pocl
ii  libpocl2-common    5.0-2.1  all    common files for the pocl library
ii  libpocl2t64:amd64  5.0-2.1  amd64
Debian 12 (OK):
$ dpkg -l | grep pocl
ii  libpocl2:amd64   3.1-3+deb12u1  amd64  Portable Computing Language library
ii  libpocl2-common  3.1-3+deb12u1  all    co
On Debian 12 it works out of the box:
$ POCL_DEBUG=1 python3 test.py
[2024-03-11 10:05:31.837738936]POCL: in fn pocl_install_sigfpe_handler at line
229:
| GENERAL | Installing SIGFPE handler...
[2024-03-11 10:05:31.868890390]POCL: in fn POclCreateCommandQueue at line 98:
| GENERAL | Crea
We already had the warning message
[2024-03-10 14:26:18.189651850]POCL: in fn void
appendToProgramBuildLog(cl_program, unsigned int, std::string&) at line 111:
| ERROR | warning:
/home/picca/.cache/pocl/kcache/tempfile_msXjLw.cl:861:14: AVX vector argument
of type '__private float8' (vec
Here is a log with POCL_DEBUG=all:
picca@cush:/tmp$ python3 test.py
[2024-03-10 14:22:19.462191847]POCL: in fn pocl_install_sigfpe_handler at line
265:
| GENERAL | Installing SIGFPE handler...
[2024-03-10 14:22:19.475550217]POCL: in fn POclCreateCommandQueue at line 103:
| GENERAL | Create
It seems that there is an error here:
[2024-03-10 14:22:19.550588408]POCL: in fn int
pocl_llvm_build_program(cl_program, unsigned int, cl_uint, _cl_program* const*,
const char**, int) at line 420:
| LLVM | all build options: -Dcl_khr_int64
-DPOCL_DEVICE_ADDRESS_BITS=64 -D__USE_CLANG_OPENC
Here is a small script which triggers the error:
from silx.image import medianfilter
import numpy
# 10000 elements so the reshape to (100, 100) is valid
IMG = numpy.arange(10000.0).reshape(100, 100)
KERNEL = (1, 1)
res = medianfilter.medfilt2d(
    image=IMG,
    kernel_size=KERNEL,
    engine="opencl",
)
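For cross-checking the engines, a naive pure-numpy reference median filter can
help; this is a sketch of my own, not silx's implementation. With a (1, 1)
kernel, as in the reproducer, it must return the image unchanged:

```python
import numpy as np

def medfilt2d_ref(img, kernel_size=(3, 3)):
    """Naive reference 2D median filter with nearest-edge padding.

    A cross-checking sketch, not silx's implementation."""
    ky, kx = kernel_size
    py, px = ky // 2, kx // 2
    padded = np.pad(img, ((py, py), (px, px)), mode="edge")
    out = np.empty_like(img)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            # median over the kernel window centred on (y, x)
            out[y, x] = np.median(padded[y:y + ky, x:x + kx])
    return out

IMG = np.arange(10000.0).reshape(100, 100)
# with a (1, 1) kernel the filter is the identity
assert np.array_equal(medfilt2d_ref(IMG, (1, 1)), IMG)
```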
In order to reproduce the bug,
install python3-silx 2.0.0+dfsg-1,
python3-pytest-xvfb and pocl-opencl-icd,
then:
$ pytest --pyargs silx.image.test.test_medianfilter -v
===
test session starts
With the silx 2.0.0 version the failure is located in the OpenCL part.
The backtrace is this one when running the median filter.
# build the package in the chroot and enter it once built
dgit --gbp sbuild --finished-build-commands '%SBUILD_SHELL'
then run this command to obtain the backtrace...
Neither the old nor the new hyperspy is compatible with imageio > 0.28.
I opened a bug report about the situation at the upstream git repository,
and a comment about this issue:
https://github.com/g1257/dmrgpp/issues/38#issuecomment-1655740289
> I am just the messenger here, if you disagree, please feel free to
> contact ftpmasters or lintian maintainers.
This was not a rant; I just wanted to understand what is going on :).
> Your package has been built successfully on (some) buildds, but then the
> binaries upload got rejec
I just checked that this date is in the upstream tar file:
https://files.pythonhosted.org/packages/54/84/ea12e176489b35c4610625ce56aa2a1d91ab235b0caa71846317bfd1192f/pyfai-2023.5.0.tar.gz
OK, it seems that I generated an orig.tar.gz with this date (Thu Jan 1 00:00:00
1970).
I cannot remember which tool I used to generate this file:
gbp import-orig --uscan
or
deb-new-upstream
Nevertheless, why is it a serious bug?
thanks
Frederic
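To check whether an orig tarball carries such epoch timestamps, a small
tarfile sketch can help; the member names below are made up for the demo:

```python
import io
import tarfile
import time

def epoch_members(fileobj):
    """Names of tar members whose mtime is the Unix epoch (Jan 1 1970)."""
    with tarfile.open(fileobj=fileobj, mode="r:*") as tar:
        return [m.name for m in tar.getmembers() if m.mtime == 0]

# Build a small in-memory tarball with one epoch-dated member.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w:gz") as tar:
    for name, mtime in [("old.txt", 0), ("new.txt", int(time.time()))]:
        info = tarfile.TarInfo(name)
        data = b"x"
        info.size = len(data)
        info.mtime = mtime
        tar.addfile(info, io.BytesIO(data))
buf.seek(0)
print(epoch_members(buf))  # ['old.txt']
```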
There is a fix from upstream around enum:
https://github.com/boostorg/python/commit/a218babc8daee904a83f550fb66e5cb3f1cb3013
Fix enum_type_object type on Python 3.11
The enum_type_object type inherits from PyLong_Type, which is not tracked
by the GC. Instances don't have to be tracked by
In order to debug this, I started gdb,
set a breakpoint in init_module_scitbx_linalg_ext,
then did a catch throw, and I ended up with this backtrace:
Catchpoint 2 (exception thrown), 0x770a90a1 in __cxxabiv1::__cxa_throw
(obj=0xb542e0, tinfo=0x772d8200 , dest=0x772c1290
) at
../../../..
Hello Anton, I have just pushed a few dependencies in the -dev package in the
salsa repo.
I did not update the changelog.
Cheers
Fred
Hello Anton, I tried to check out paraview in order to add the -dev
dependencies, but I got this message:
$ git clone https://salsa.debian.org/science-team/paraview
Cloning into 'paraview'...
remote: Enumerating objects: 175624, done.
remote: Counting objects: 100% (78929/78929), done.
remote: Compre
Hello François,
thanks a lot. I removed the NMU number and released a -2 package (uploaded).
Thanks for your contribution to Debian.
Fred
It seems that it is failing now:
https://ci.debian.net/packages/p/pyfai/
I am on 0.21.2 but I do not know if it solves this mask issue.
Cheers
Fred
Hello Paul, just for info, I have already reported this issue here
https://github.com/g1257/dmrgpp/issues/38
cheers
Fred.
Would it not be better to use the
DEB__MAINT_APPEND
variable in order to deal with this issue?
It seems that this is an issue in gcc, as observed when compiling tensorflow:
https://zenn.dev/nbo/scraps/8f1505e365d961
Built with gcc-11 and -fno-lto, it does not work.
(sid_mips64el-dchroot)picca@eller:~/matplotlib/build/lib.linux-mips64-3.9$
../../../test.py
Segmentation fault
(sid_mips64el-dchroot)picca@eller:~/matplotlib/build/lib.linux-mips64-3.9$
PYTHONPATH=. ../../../test.py
Segmentation fault
I tested matplotlib built with numpy 1.17, 1.19 and 1.21; each time I got the
segfault.
Another difference was the gcc compiler,
so I switched to gcc-10:
(sid_mips64el-dchroot)picca@eller:~/matplotlib$ CC=gcc-10 python3 setup.py build
It failed with this error:
lto1: fatal error: bytecode stream in
If I run in the sid chroot, but with the binaries built from bullseye, it works:
(sid_mips64el-dchroot)picca@eller:~/matplotlib-3.5.0/build/lib.linux-mips64-3.9$
rm toto.png
(sid_mips64el-dchroot)picca@eller:~/matplotlib-3.5.0/build/lib.linux-mips64-3.9$
python3 test.py
(sid_mips64el-dchroot)p
Here, no errors during the build with numpy 1.19.5:
= 10892 passed, 83 skipped, 108 deselected, 19 xfailed, 2 xpassed, 2 warnings
in 1658.41s (0:27:38) =
but 109 errors with numpy 1.21...
= 14045 passed, 397 skipped, 1253 deselected, 20 xfailed, 2 xpassed, 2
warnings, 109 errors in 869.47s (0:14:29) =
I investigated a bit more; it seems that the covers value is wrong.
In a bullseye chroot it works
$ python3 ./test.py
(bullseye_mips64el-dchroot)picca@eller:~/matplotlib-3.5.0/build/lib.linux-mips64-3.9$
ls
matplotlib mpl_toolkits pylab.py test.py toto.png
I found that the test started failing between the 3.3
Here is the full python backtrace:
#8
#14 Frame 0x120debd80, for file
/home/picca/matplotlib-3.5.0/build/lib.linux-mips64-3.9/matplotlib/lines.py,
line 2888, in draw (self=,
figure=<...>, _transform=None, _transformSet=False, _visible=True,
_animated=False, _alpha=None, clipbox=None, _clippath=None, _
Here is the py-bt:
(gdb) py-bt
Traceback (most recent call first):
File
"/home/picca/matplotlib-3.5.0/build/lib.linux-mips64-3.9/matplotlib/lines.py",
line 2888, in draw
File
"/home/picca/matplotlib-3.5.0/build/lib.linux-mips64-3.9/matplotlib/artist.py",
line 50, in draw_wrapper
return
I can confirm that the bullseye matplotlib does not produce a segfault.
This small script triggers the segfault:
#!/usr/bin/env python3
import matplotlib
import matplotlib.pyplot as plt
plt.figure()
plt.title("foo")
plt.savefig("toto.png")
Bug reports are already filed on matplotlib:
#1000774 and #1000435.
I will try to see if this is identical...
Here is the backtrace on mips64el:
#0
agg::pixfmt_alpha_blend_rgba,
agg::order_rgba>, agg::row_accessor >::blend_solid_hspan(int,
int, unsigned int, agg::rgba8T const&, unsigned char const*)
(covers=0x100 , c=..., len=, y=166, x=,
this=)
at extern/agg24-svn/include/agg_color_rg
> Strangely enough, I've already done that ;-)
my bad.
Cheers
Fred
> I have a package of Spyder 4 waiting to upload, but it requires five
> packages to be accepted into unstable from NEW first (pyls-server,
> pyls-black, pyls-spyder, abydos, textdistance); once that happens, the
> rest of the packages are almost ready to go.
Maybe you can contact the ftpmaster te
OK, in that case I think that a comment in the d/rules file is enough to keep
in mind that we have this issue with ppc64el.
> Well, the test is obviously broken and upstream currently can't be bothered
> to fix
> it on non-x86 targets. He will certainly have to do it at some point given
> that ARM64
> is replacing more and more x86_64 systems, but I wouldn't bother, personally.
so what is the best solution in order t
> Yes, good catch. The spec file for the openSUSE package has this [1]:
so it does not fit with our policy: do not hide problems ;)
The problem is that I do not have enough time to investigate... on a porter box.
Hello
looking at the openSUSE log, I can find this:
[ 93s] + pytest-3.8 --ignore=_build.python2 --ignore=_build.python3
--ignore=_build.pypy3 -v -k 'not speed and not (test_model_nan_policy or
test_shgo_scipy_vs_lmfit_2)'
[ 97s] = test session starts
Hello Andreas,
I just built ghmm by removing --with-gsl.
It seems that the gsl implementation of blas conflicts with the one provided by
atlas,
so --enable-gsl + --enable-atlas seems wrong...
+--+
| Summary
close 957430 6.5.1-3
thanks
I tried a new build and I ended up with this error:
gpgv: unknown type of key resource 'trustedkeys.kbx'
gpgv: keyblock resource '/tmp/dpkg-verify-sig.Wwlhs1jL/trustedkeys.kbx':
General error
gpgv: Signature made Mon Dec 16 20:17:19 2019 UTC
gpgv: using RSA key E8FC295C86B8D7C049F97BA
You can also look at the CI, now that it works :)
https://salsa.debian.org/science-team/veusz/pipelines/137494
Cheers
Frederic
A workaround for now is to install it by hand:
apt install python3-scipy
reassign -1 silx
thanks
Hello, if it is like my ufo-core package, this could be due to a script
file with a shebang using python instead of python3.
Cheers
Fred
Maybe this is due to this:
picca@cush:~/Debian/ufo-core/ufo-core/bin$ rgrep python *
ufo-mkfilter.in:#!/usr/bin/python
ufo-prof:#!/usr/bin/env python
I will replace python -> python3 and see what is going on
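A sketch of that python -> python3 shebang switch; the helper and the demo
file below are illustrative, not the actual ufo-core patch:

```python
import tempfile
from pathlib import Path

def fix_shebang(path: Path) -> bool:
    """Rewrite a 'python' shebang to 'python3'; return True if changed."""
    lines = path.read_text().splitlines(keepends=True)
    if lines and lines[0].rstrip() in ("#!/usr/bin/python",
                                       "#!/usr/bin/env python"):
        # keep the prefix, replace only the trailing interpreter name
        lines[0] = lines[0].rstrip()[:-len("python")] + "python3\n"
        path.write_text("".join(lines))
        return True
    return False

with tempfile.TemporaryDirectory() as d:
    script = Path(d) / "ufo-prof"
    script.write_text("#!/usr/bin/env python\nprint('hi')\n")
    fix_shebang(script)
    print(script.read_text().splitlines()[0])  # #!/usr/bin/env python3
```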
Hello Sandro, this is strange because I have this in the control file:
Package: libufo-bin
Architecture: any
Depends: ${misc:Depends}, ${python3:Depends}, ${shlibs:Depends}
Suggests: ufo-core-doc
Description: Library for high-performance, GPU-based computing - tools
The UFO data processing framewo
-lists.debian.net]
on behalf of Andreas Tille [andr...@an3as.eu]
Sent: Sunday 22 December 2019 10:48
To: PICCA Frederic-Emmanuel
Cc: 943...@bugs.debian.org; MARIE Alexandre
Subject: Bug#943786: lmfit-py: failing tests with python3.8
On Sun, Dec 22, 2019 at 07:54:23AM +, PICCA Frederic-Emmanuel
Hello Andreas, in fact we were waiting for the packaging of ipywidgets 7.x;
the jupyter-sphinx extension expected by lmfit-py requires a newer version of
ipywidgets.
So maybe the best solution for now is to not produce the documentation until
this dependency is OK.
cheers
Frederic
Looking at picca@sixs7:~/Debian/silx/silx/silx/opencl/test/test_addition.py:
def setUp(self):
if ocl is None:
return
self.shape = 4096
self.data = numpy.random.random(self.shape).astype(numpy.float32)
self.d_array_img = pyopencl.array.to_device(self.q
I decided to concentrate on one opencl test (addition),
so I deactivated all the other tests by commenting them out in
silx/opencl/__init__.py.
If I do not import silx.io, this test works:
(sid_amd64-dchroot)picca@barriere:~$ PYOPENCL_COMPILER_OUTPUT=1
PYTHONPATH=silx-0.11.0+dfsg/.pybuild/cpyth
With the silx.io import I have this
(sid_amd64-dchroot)picca@barriere:~$
PYTHONPATH=silx-0.11.0+dfsg/.pybuild/cpython3_3.7_silx/build python3 test.py
pocl error: lt_dlopen("(null)") or lt_dlsym() failed with 'can't close resident
module'.
note: missing symbols in the kernel binary might be repo
Not better:
test cpp engine for medfilt2d ... ok
testOpenCLMedFilt2d (silx.image.test.test_medianfilter.TestMedianFilterEngines)
test cpp engine for medfilt2d ... pocl error: lt_dlopen("(null)") or lt_dlsym()
failed with 'can't close resident module'.
note: missing symbols in the kernel binary mig
It seems that this test does not PASS
@unittest.skipUnless(ocl, "PyOpenCl is missing")
def testOpenCLMedFilt2d(self):
"""test cpp engine for medfilt2d"""
res = medianfilter.medfilt2d(
image=TestMedianFilterEngines.IMG,
kernel_size=TestMedianFilterEng
Using salsa-ci, python-qtconsole FTBFS due to pyzmq:
https://salsa.debian.org/python-team/modules/python-qtconsole/-/jobs/435758
Hello
>Package: sardana
>Version: 3.0.0a+3.f4f89e+dfsg-1
>Severity: serious
>The release team have decreed that non-buildd binaries cannot migrate to
>testing. Please make a source-only upload so your package can migrate.
OK, but this package comes from NEW.
So it would be nice if the proces
> I didn't notice it, so wasn't planning to add it. spyder_kernels
> imports without complaining, and spyder seems to start fine anyway.
> Where does it come to notice?
I do not know, but on Windows it is optional,
so maybe this is not a big issue.
Fred
It seems that wurlitzer, which is a dependency of spyder-kernels, is missing.
Do you plan to add it?
cheers
Hello
> Hi Frédéric, I prepared spyder (and spyder-kernels) for python2 removal.
> The removal of cloudpickle forces us to do it earlier than we otherwise
> might have.
No problem for me :), the faster we get rid of Python 2, the better.
> With spyder, it made sense to me to keep spyder as the m
Hello, this is a problem due to a bug in python-numpy which is already solved
in python-numpy 1.6.5:
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=933056
Cheers
From: debian-science-maintainers
[debian-science-maintainers-bounces+picca=synchrotron-s
Thanks a lot, both of you; I could not manage to find enough time these days
for this package...
Cheers
Fred
Upstream just packaged the latest taurus,
so I think that you can defer your upload for now.
thanks a lot for your help.
Frederic
Hello Adrian
If I look at the current boost1.67, I find this in the
boost python package
https://packages.debian.org/sid/amd64/libboost-python1.67.0/filelist
and
https://packages.debian.org/sid/amd64/libboost-python1.67-dev/filelist
We can find these
/usr/lib/x86_64-linux-gnu/libboost_python
Looking at the fedora project, they renamed async -> async_:
https://koji.fedoraproject.org/koji/buildinfo?buildID=1097515
In a code search I found another package affected by this problem,
which seems to embed pyOpenGL:
https://codesearch.debian.net/search?q=OpenGL.raw.GL.SGIX.async&perpkg=1
Cheers
> your autopkg tests loops over all *supported* python versions, but you only
> build the extension for the *default* python3 version. Try build-depending on
> python3-all-dev instead and see that you have extensions built for both 3.6
> and
> 3.7. Building in unstable, of course.
But , I alrea
Hello Matthias,
I do not understand this bug report.
I use pybuild, so fabio should be built for all python3 versions.
It is now FTBFS due to a problem with the cython package, already reported as
#903909.
Cheers
Frederic
This problem was due to this:
python-fabio (0.5.0+dfsg-2) unstable; urgency=medium
* d/control
- python-qt4 -> python3-pyqt4-dbg (Closes: #876288)
Now that python-fabio is fixed, it is OK to close this bug.
thanks
Frederic
Here is the error message:
~/Debian/nexus/bugs$ ./bug.py
Traceback (most recent call last):
File "./bug.py", line 15, in
f.flush()
File "/usr/lib/python2.7/dist-packages/nxs/napi.py", line 397, in flush
raise NeXusError, "Could not flush NeXus file %s"%(self.filename)
nxs.napi.NeXusError:
It seems that the fix is not enough;
this test fails at the flush:
import nxs
f = nxs.open("/tmp/foo.h5", "w5")
f.makegroup('entry', 'NXentry')
f.opengroup('entry')
f.makegroup('g', 'NXcollection')
f.opengroup('g', 'NXcollection')
f.makedata('d', 'float64', shape=(1,))
f.opendata('d')
f.putdata(1
Here is the output after rebuilding hdf5 in debug mode:
:~/Debian/nexus$ ./bug.py
H5get_libversion(majnum=0xbf8a5b04, minnum=0xbf8a5b08, relnum=0xbf8a5b0c) =
SUCCEED;
H5Eset_auto2(estack=H5P_DEFAULT, func=NULL, client_data=NULL) = SUCCEED;
H5open() = SUCCEED;
H5Pcreate(cls=8 (genprop class)) = 18 (genprop list)
Activating the NXError reporting, we get:
filenamenxs.h5 5
ERROR: cannot open file: filenamenxs.h5
0
Looking for this error message,
we found it in the napi5.c file:
NXstatus NX5open(CONSTCHAR *filename, NXaccess am,
NXhandle* pHandle)
{
hid_t attr1,ai
Here is the code of this method:
/**/
static NXstatus NXinternalopen(CONSTCHAR *userfilename, NXaccess am,
pFileStack fileStack);
/*--*/
NXstatus NXopen(
In the napi.h file we saw this:
#define CONCAT(__a,__b) __a##__b /* token concatenation */
#ifdef __VMS
#define MANGLE(__arg) __arg
#else
#define MANGLE(__arg) CONCAT(__arg,_)
#endif
#define NXopen MANGLE(nxiopen)
/**
* Open a NeXus fi
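So on non-VMS platforms MANGLE appends a trailing underscore, which is why the
Python binding calls the nxiopen_ symbol rather than NXopen. A tiny Python
sketch of the mangling rule (my own helper, not part of napi):

```python
def mangle(name: str) -> str:
    """Mimic napi.h's MANGLE macro on non-VMS platforms: CONCAT(name, _)."""
    return name + "_"

# NXopen is #define'd to MANGLE(nxiopen), i.e. nxiopen_
print(mangle("nxiopen"))  # nxiopen_
```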
Let's instrument the code:
print filename, mode, _ref(self.handle)
status = nxlib.nxiopen_(filename,mode,_ref(self.handle))
print status
$ python bug.py
filenamenxs.h5 5
0
Hello
here is the napi code which causes some trouble.
# Convert open mode from string to integer and check it is valid
if mode in _nxopen_mode: mode = _nxopen_mode[mode]
if mode not in _nxopen_mode.values():
raise ValueError, "Invalid open mode %s",str(mode)
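A Python 3 rendering of the same check; the numeric constants below are
illustrative, not the real NeXus mode flags:

```python
# hypothetical mode table for the sketch
_nxopen_mode = {"r": 1, "rw": 2, "w4": 4, "w5": 5}

def normalize_mode(mode):
    # Convert open mode from string to integer and check it is valid
    if mode in _nxopen_mode:
        mode = _nxopen_mode[mode]
    if mode not in _nxopen_mode.values():
        raise ValueError("Invalid open mode %s" % mode)
    return mode

print(normalize_mode("w5"))  # 5
```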
> Ehm, yes. :)
so I just tested an upgrade of tango-db from jessie to sid, and it works :)))
Now I have only one concern, about the dump.
Since we had a failure with the dump when it ran as user, we discovered that
our procedures were wrong and necessitate the dbadmin grants in order to work.
W
Hello Paul
> Officially, no, because the documentation says: "If files exist in both
> data and scripts, they will both be executed in an unspecified order."
> However, the current behavior of dbconfig-common is to first run the
> script and then run the admin code and then run the user code. So y
Hello Paul
> I really hope I can upload this weekend. I have code that I believe does
> what I want. I am in the process of testing it.
thanks a lot.
> [...]
> What I meant,
> instead of the mysql code that runs as user, run a script for the
> upgrade (they are run with database administrator
Hello Paul,
> Once I fixed 850190,
Do you think that you will fix this bug before next week, in order to leave me
enough time to fix tango and upload it?
> I believe that ought to work, although that is
> still a hack. I was thinking of doing the "DROP PROCEDURE IF EXISTS *"
> calls with the adm
Hello,
I discussed with the tango-db upstream, and he found that
this one line fixes the problem, before doing the tango-db upgrade:
UPDATE mysql.proc SET Definer='tango@localhost' where Db='tango';
Ideally it should be something like
UPDATE mysql.proc SET Definer='xxx' where Db='yyy';
where xxx
Hello,
> I am suspecting that this commit may be related to the current behavior:
> https://anonscm.debian.org/cgit/collab-maint/dbconfig-common.git/commit/?id=acdb99d61abfff54630c4cfba6e4452357a83fb9
> I believe I implemented there that the drop of the database is performed
> with the user privi
> I am not sure that I follow what you are doing, but if you need the code
> to be run with the dbadmin privileges, you should put the code in:
>/usr/share/dbconfig-common/data/PACKAGE/upgrade-dbadmin/DBTYPE/VERSION
> instead of in:
>/usr/share/dbconfig-common/data/PACKAGE/upgrade/DBTYPE/VE
Thanks to Reynald:
1) On Jessie
with the tango account
mysql> use tango;
mysql> show create procedure class_att_prop\G
I got "Create Procedure": NULL
But If I use the root account (mysqladmin)
CREATE DEFINER=`root`@`localhost` PROCEDURE `class_att_prop` (IN class_name
VARCHAR(255), INOUT re
Hello, I would like to discuss this bug [1].
I tried to reproduce the piuparts scenario in a virtual machine (gnome-boxes),
installed in 3 steps:
jessie base system
mysql-server (I need a working database)
tango-db (daemon)
It works OK; I have a running tango-db daemon (ps aux | gr
No, I do not have access to my computer until 3 January.
If you want to NMU, go ahead.
Cheers
From: Adrian Bunk [b...@stusta.de]
Sent: Wednesday 21 December 2016 16:57
To: 811...@bugs.debian.org; Picca Frédéric-Emmanuel
Subject: Re: Bug#811973 closed by Pic
I uploaded tango 9.2.5~rc3+dfsg1-1 into Debian unstable.
I think that once it has migrated into testing it will be OK to close this bug.
Thanks
Fred
Yes, I am working on this with upstream :))
So don't worry I will tell you when it is ok.
Cheers
Fred
Hello Andreas,
> In jessie, tango-db used mysql-server-5.5 (via mysql-server).
> The upgrade of tango-db was performed after mysql-server had been upgraded
> to mariadb-server-10.0 (via default-mysql-server) and was started again.
do you know if the mariadb-server was running during the upgrade o
Hello, any news about this issue?
Cheers
Fred
Hello,
I just opened a bug for tango
https://github.com/tango-controls/cppTango/issues/312
What is the deadline by which we can take the decision to upload zeromq
4.2.0 into Debian testing or not?
This will also leave some time to check that 4.2.0 does not have other
side effects on dep
Hello Luca
> This is very unfortunate, but as explained on the mailing list, this
> behaviour was an unintentional internal side effect. I didn't quite
> realise it was there, and so most other devs.
I understand; I just wanted to point out that the synchrotron community invests
a lot of effort in o