Source: sklearn-pandas
Version: 2.2.0-1
Severity: serious
Justification: FTBFS
Tags: bookworm sid ftbfs
User: lu...@debian.org
Usertags: ftbfs-20220624 ftbfs-bookworm
Hi,

During a rebuild of all packages in sid, your package failed to build on amd64.

Relevant part (hopefully):
> debian/rules binary
> dh binary --with python3 --buildsystem=pybuild
> dh_update_autotools_config -O--buildsystem=pybuild
> dh_autoreconf -O--buildsystem=pybuild
> dh_auto_configure -O--buildsystem=pybuild
> install -d /<<PKGBUILDDIR>>/debian/.debhelper/generated/_source/home
> pybuild --configure -i python{version} -p "3.9 3.10"
> I: pybuild base:239: python3.9 setup.py config
> running config
> I: pybuild base:239: python3.10 setup.py config
> running config
> dh_auto_build -O--buildsystem=pybuild
> pybuild --build -i python{version} -p "3.9 3.10"
> I: pybuild base:239: /usr/bin/python3.9 setup.py build
> running build
> running build_py
> creating /<<PKGBUILDDIR>>/.pybuild/cpython3_3.9_sklearn-pandas/build/sklearn_pandas
> copying sklearn_pandas/__init__.py -> /<<PKGBUILDDIR>>/.pybuild/cpython3_3.9_sklearn-pandas/build/sklearn_pandas
> copying sklearn_pandas/pipeline.py -> /<<PKGBUILDDIR>>/.pybuild/cpython3_3.9_sklearn-pandas/build/sklearn_pandas
> copying sklearn_pandas/features_generator.py -> /<<PKGBUILDDIR>>/.pybuild/cpython3_3.9_sklearn-pandas/build/sklearn_pandas
> copying sklearn_pandas/cross_validation.py -> /<<PKGBUILDDIR>>/.pybuild/cpython3_3.9_sklearn-pandas/build/sklearn_pandas
> copying sklearn_pandas/transformers.py -> /<<PKGBUILDDIR>>/.pybuild/cpython3_3.9_sklearn-pandas/build/sklearn_pandas
> copying sklearn_pandas/dataframe_mapper.py -> /<<PKGBUILDDIR>>/.pybuild/cpython3_3.9_sklearn-pandas/build/sklearn_pandas
> I: pybuild base:239: /usr/bin/python3 setup.py build
> running build
> running build_py
> creating /<<PKGBUILDDIR>>/.pybuild/cpython3_3.10_sklearn-pandas/build/sklearn_pandas
> copying sklearn_pandas/__init__.py -> /<<PKGBUILDDIR>>/.pybuild/cpython3_3.10_sklearn-pandas/build/sklearn_pandas
> copying sklearn_pandas/pipeline.py -> /<<PKGBUILDDIR>>/.pybuild/cpython3_3.10_sklearn-pandas/build/sklearn_pandas
> copying sklearn_pandas/features_generator.py -> /<<PKGBUILDDIR>>/.pybuild/cpython3_3.10_sklearn-pandas/build/sklearn_pandas
> copying sklearn_pandas/cross_validation.py -> /<<PKGBUILDDIR>>/.pybuild/cpython3_3.10_sklearn-pandas/build/sklearn_pandas
> copying sklearn_pandas/transformers.py -> /<<PKGBUILDDIR>>/.pybuild/cpython3_3.10_sklearn-pandas/build/sklearn_pandas
> copying sklearn_pandas/dataframe_mapper.py -> /<<PKGBUILDDIR>>/.pybuild/cpython3_3.10_sklearn-pandas/build/sklearn_pandas
> dh_auto_test -O--buildsystem=pybuild
> pybuild --test --test-pytest -i python{version} -p "3.9 3.10"
> I: pybuild base:239: cd /<<PKGBUILDDIR>>/.pybuild/cpython3_3.9_sklearn-pandas/build; python3.9 -m pytest ; cd /<<PKGBUILDDIR>>; python3.9 -m doctest -v README.rst
> ============================= test session starts ==============================
> platform linux -- Python 3.9.13, pytest-6.2.5, py-1.10.0, pluggy-1.0.0
> rootdir: /<<PKGBUILDDIR>>, configfile: pytest.ini
> collected 69 items
>
> tests/test_dataframe_mapper.py ......................................... [ 59%]
> .................. [ 85%]
> tests/test_features_generator.py .... [ 91%]
> tests/test_pipeline.py .... [ 97%]
> tests/test_transformers.py ..
> [100%] > > =============================== warnings summary > =============================== > .pybuild/cpython3_3.9_sklearn-pandas/build/tests/test_dataframe_mapper.py: 13 > warnings > /usr/lib/python3/dist-packages/sklearn/utils/deprecation.py:87: > FutureWarning: Function get_feature_names is deprecated; get_feature_names is > deprecated in 1.0 and will be removed in 1.2. Please use > get_feature_names_out instead. > warnings.warn(msg, category=FutureWarning) > > .pybuild/cpython3_3.9_sklearn-pandas/build/tests/test_dataframe_mapper.py::test_sparse_features > > /<<PKGBUILDDIR>>/.pybuild/cpython3_3.9_sklearn-pandas/build/tests/test_dataframe_mapper.py:820: > DeprecationWarning: Please use `csr_matrix` from the `scipy.sparse` > namespace, the `scipy.sparse.csr` namespace is deprecated. > assert type(dmatrix) == sparse.csr.csr_matrix > > .pybuild/cpython3_3.9_sklearn-pandas/build/tests/test_dataframe_mapper.py::test_sparse_off > > /<<PKGBUILDDIR>>/.pybuild/cpython3_3.9_sklearn-pandas/build/tests/test_dataframe_mapper.py:834: > DeprecationWarning: Please use `csr_matrix` from the `scipy.sparse` > namespace, the `scipy.sparse.csr` namespace is deprecated. > assert type(dmatrix) != sparse.csr.csr_matrix > > .pybuild/cpython3_3.9_sklearn-pandas/build/tests/test_transformers.py::test_common_numerical_transformer > .pybuild/cpython3_3.9_sklearn-pandas/build/tests/test_transformers.py::test_numerical_transformer_serialization > > /<<PKGBUILDDIR>>/.pybuild/cpython3_3.9_sklearn-pandas/build/sklearn_pandas/transformers.py:35: > DeprecationWarning: > NumericalTransformer will be deprecated in 3.0 version. > Please use Sklearn.base.TransformerMixin to write > customer transformers > > warnings.warn(""" > > -- Docs: https://docs.pytest.org/en/stable/warnings.html > ======================= 69 passed, 17 warnings in 3.20s > ======================== > Trying: > from sklearn_pandas import DataFrameMapper > Expecting nothing > ok > Trying: > import pandas as pd > Expecting nothing > ok > Trying: > import numpy as np > Expecting nothing > ok > Trying: > import sklearn.preprocessing, sklearn.decomposition, \ > sklearn.linear_model, sklearn.pipeline, sklearn.metrics, \ > sklearn.compose > Expecting nothing > ok > Trying: > from sklearn.feature_extraction.text import CountVectorizer > Expecting nothing > ok > Trying: > data = pd.DataFrame({'pet': ['cat', 'dog', 'dog', 'fish', 'cat', > 'dog', 'cat', 'fish'], > 'children': [4., 6, 3, 3, 2, 3, 5, 4], > 'salary': [90., 24, 44, 27, 32, 59, 36, 27]}) > Expecting nothing > ok > Trying: > mapper = DataFrameMapper([ > ('pet', sklearn.preprocessing.LabelBinarizer()), > (['children'], sklearn.preprocessing.StandardScaler()) > ]) > Expecting nothing > ok > Trying: > data['children'].shape > Expecting: > (8,) > ok > Trying: > data[['children']].shape > Expecting: > (8, 1) > ok > Trying: > np.round(mapper.fit_transform(data.copy()), 2) > Expecting: > array([[ 1. , 0. , 0. , 0.21], > [ 0. , 1. , 0. , 1.88], > [ 0. , 1. , 0. , -0.63], > [ 0. , 0. , 1. , -0.63], > [ 1. , 0. , 0. , -1.46], > [ 0. , 1. , 0. , -0.63], > [ 1. , 0. , 0. , 1.04], > [ 0. , 0. , 1. , 0.21]]) > ok > Trying: > sample = pd.DataFrame({'pet': ['cat'], 'children': [5.]}) > Expecting nothing > ok > Trying: > np.round(mapper.transform(sample), 2) > Expecting: > array([[1. , 0. , 0. 
, 1.04]]) > ok > Trying: > mapper.transformed_names_ > Expecting: > ['pet_cat', 'pet_dog', 'pet_fish', 'children'] > ok > Trying: > mapper_alias = DataFrameMapper([ > (['children'], sklearn.preprocessing.StandardScaler(), > {'alias': 'children_scaled'}) > ]) > Expecting nothing > ok > Trying: > _ = mapper_alias.fit_transform(data.copy()) > Expecting nothing > ok > Trying: > mapper_alias.transformed_names_ > Expecting: > ['children_scaled'] > ok > Trying: > mapper_alias = DataFrameMapper([ > (['children'], sklearn.preprocessing.StandardScaler(), {'prefix': > 'standard_scaled_'}), > (['children'], sklearn.preprocessing.StandardScaler(), {'suffix': > '_raw'}) > ]) > Expecting nothing > ok > Trying: > _ = mapper_alias.fit_transform(data.copy()) > Expecting nothing > ok > Trying: > mapper_alias.transformed_names_ > Expecting: > ['standard_scaled_children', 'children_raw'] > ok > Trying: > class GetColumnsStartingWith: > def __init__(self, start_str): > self.pattern = start_str > > def __call__(self, X:pd.DataFrame=None): > return [c for c in X.columns if c.startswith(self.pattern)] > Expecting nothing > ok > Trying: > df = pd.DataFrame({ > 'sepal length (cm)': [1.0, 2.0, 3.0], > 'sepal width (cm)': [1.0, 2.0, 3.0], > 'petal length (cm)': [1.0, 2.0, 3.0], > 'petal width (cm)': [1.0, 2.0, 3.0] > }) > Expecting nothing > ok > Trying: > t = DataFrameMapper([ > ( > sklearn.compose.make_column_selector(dtype_include=float), > sklearn.preprocessing.StandardScaler(), > {'alias': 'x'} > ), > ( > GetColumnsStartingWith('petal'), > None, > {'alias': 'petal'} > )], df_out=True, default=False) > Expecting nothing > ok > Trying: > t.fit(df).transform(df).shape > Expecting: > (3, 6) > ok > Trying: > t.transformed_names_ > Expecting: > ['x_0', 'x_1', 'x_2', 'x_3', 'petal_0', 'petal_1'] > ok > Trying: > from sklearn.base import TransformerMixin > Expecting nothing > ok > Trying: > class DateEncoder(TransformerMixin): > def fit(self, X, y=None): > return self > > def transform(self, X): > dt = X.dt > return pd.concat([dt.year, dt.month, dt.day], axis=1) > Expecting nothing > ok > Trying: > dates_df = pd.DataFrame( > {'dates': pd.date_range('2015-10-30', '2015-11-02')}) > Expecting nothing > ok > Trying: > mapper_dates = DataFrameMapper([ > ('dates', DateEncoder()) > ], input_df=True) > Expecting nothing > ok > Trying: > mapper_dates.fit_transform(dates_df) > Expecting: > array([[2015, 10, 30], > [2015, 10, 31], > [2015, 11, 1], > [2015, 11, 2]]) > ok > Trying: > mapper_dates = DataFrameMapper([ > ('dates', DateEncoder(), {'input_df': True}) > ]) > Expecting nothing > ok > Trying: > mapper_dates.fit_transform(dates_df) > Expecting: > array([[2015, 10, 30], > [2015, 10, 31], > [2015, 11, 1], > [2015, 11, 2]]) > ok > Trying: > mapper_df = DataFrameMapper([ > ('pet', sklearn.preprocessing.LabelBinarizer()), > (['children'], sklearn.preprocessing.StandardScaler()) > ], df_out=True) > Expecting nothing > ok > Trying: > np.round(mapper_df.fit_transform(data.copy()), 2) > Expecting: > pet_cat pet_dog pet_fish children > 0 1 0 0 0.21 > 1 0 1 0 1.88 > 2 0 1 0 -0.63 > 3 0 0 1 -0.63 > 4 1 0 0 -1.46 > 5 0 1 0 -0.63 > 6 1 0 0 1.04 > 7 0 0 1 0.21 > ok > Trying: > mapper_df = DataFrameMapper([ > ('pet', sklearn.preprocessing.LabelBinarizer()), > (['children'], sklearn.preprocessing.StandardScaler()) > ], drop_cols=['salary']) > Expecting nothing > ok > Trying: > np.round(mapper_df.fit_transform(data.copy()), 1) > Expecting: > array([[ 1. , 0. , 0. , 0.2], > [ 0. , 1. , 0. , 1.9], > [ 0. , 1. , 0. , -0.6], > [ 0. , 0. , 1. 
, -0.6], > [ 1. , 0. , 0. , -1.5], > [ 0. , 1. , 0. , -0.6], > [ 1. , 0. , 0. , 1. ], > [ 0. , 0. , 1. , 0.2]]) > ok > Trying: > mapper2 = DataFrameMapper([ > (['children', 'salary'], sklearn.decomposition.PCA(1)) > ]) > Expecting nothing > ok > Trying: > np.round(mapper2.fit_transform(data.copy()), 1) > Expecting: > array([[ 47.6], > [-18.4], > [ 1.6], > [-15.4], > [-10.4], > [ 16.6], > [ -6.4], > [-15.4]]) > ok > Trying: > from sklearn.impute import SimpleImputer > Expecting nothing > ok > Trying: > mapper3 = DataFrameMapper([ > (['age'], [SimpleImputer(), > sklearn.preprocessing.StandardScaler()])]) > Expecting nothing > ok > Trying: > data_3 = pd.DataFrame({'age': [1, np.nan, 3]}) > Expecting nothing > ok > Trying: > mapper3.fit_transform(data_3) > Expecting: > array([[-1.22474487], > [ 0. ], > [ 1.22474487]]) > ok > Trying: > mapper3 = DataFrameMapper([ > ('pet', sklearn.preprocessing.LabelBinarizer()), > ('children', None) > ]) > Expecting nothing > ok > Trying: > np.round(mapper3.fit_transform(data.copy())) > Expecting: > array([[1., 0., 0., 4.], > [0., 1., 0., 6.], > [0., 1., 0., 3.], > [0., 0., 1., 3.], > [1., 0., 0., 2.], > [0., 1., 0., 3.], > [1., 0., 0., 5.], > [0., 0., 1., 4.]]) > ok > Trying: > mapper4 = DataFrameMapper([ > ('pet', sklearn.preprocessing.LabelBinarizer()), > ('children', None) > ], default=sklearn.preprocessing.StandardScaler()) > Expecting nothing > ok > Trying: > np.round(mapper4.fit_transform(data.copy()), 1) > Expecting: > array([[ 1. , 0. , 0. , 4. , 2.3], > [ 0. , 1. , 0. , 6. , -0.9], > [ 0. , 1. , 0. , 3. , 0.1], > [ 0. , 0. , 1. , 3. , -0.7], > [ 1. , 0. , 0. , 2. , -0.5], > [ 0. , 1. , 0. , 3. , 0.8], > [ 1. , 0. , 0. , 5. , -0.3], > [ 0. , 0. , 1. , 4. , -0.7]]) > ok > Trying: > from sklearn_pandas import gen_features > Expecting nothing > ok > Trying: > feature_def = gen_features( > columns=['col1', 'col2', 'col3'], > classes=[sklearn.preprocessing.LabelEncoder] > ) > Expecting nothing > ok > Trying: > feature_def > Expecting: > [('col1', [LabelEncoder()], {}), ('col2', [LabelEncoder()], {}), ('col3', > [LabelEncoder()], {})] > ok > Trying: > mapper5 = DataFrameMapper(feature_def) > Expecting nothing > ok > /usr/lib/python3/dist-packages/sklearn/utils/deprecation.py:87: > FutureWarning: Function get_feature_names is deprecated; get_feature_names is > deprecated in 1.0 and will be removed in 1.2. Please use > get_feature_names_out instead. 
> warnings.warn(msg, category=FutureWarning)
> Trying:
> data5 = pd.DataFrame({
> 'col1': ['yes', 'no', 'yes'],
> 'col2': [True, False, False],
> 'col3': ['one', 'two', 'three']
> })
> Expecting nothing
> ok
> Trying:
> mapper5.fit_transform(data5)
> Expecting:
> array([[1, 1, 0],
> [0, 0, 2],
> [1, 0, 1]])
> ok
> Trying:
> from sklearn.impute import SimpleImputer
> Expecting nothing
> ok
> Trying:
> import numpy as np
> Expecting nothing
> ok
> Trying:
> feature_def = gen_features(
> columns=[['col1'], ['col2'], ['col3']],
> classes=[{'class': SimpleImputer, 'strategy':'most_frequent'}]
> )
> Expecting nothing
> ok
> Trying:
> mapper6 = DataFrameMapper(feature_def)
> Expecting nothing
> ok
> Trying:
> data6 = pd.DataFrame({
> 'col1': [np.nan, 1, 1, 2, 3],
> 'col2': [True, False, np.nan, np.nan, True],
> 'col3': [0, 0, 0, np.nan, np.nan]
> })
> Expecting nothing
> ok
> Trying:
> mapper6.fit_transform(data6)
> Expecting:
> array([[1.0, True, 0.0],
> [1.0, False, 0.0],
> [1.0, True, 0.0],
> [2.0, True, 0.0],
> [3.0, True, 0.0]], dtype=object)
> ok
> Trying:
> feature_def = gen_features(
> columns=['col1', 'col2', 'col3'],
> classes=[sklearn.preprocessing.LabelEncoder],
> prefix="lblencoder_"
> )
> Expecting nothing
> ok
> Trying:
> mapper5 = DataFrameMapper(feature_def)
> Expecting nothing
> ok
> Trying:
> data5 = pd.DataFrame({
> 'col1': ['yes', 'no', 'yes'],
> 'col2': [True, False, False],
> 'col3': ['one', 'two', 'three']
> })
> Expecting nothing
> ok
> Trying:
> _ = mapper5.fit_transform(data5)
> Expecting nothing
> ok
> Trying:
> mapper5.transformed_names_
> Expecting:
> ['lblencoder_col1', 'lblencoder_col2', 'lblencoder_col3']
> ok
> Trying:
> from sklearn.feature_selection import SelectKBest, chi2
> Expecting nothing
> ok
> Trying:
> mapper_fs = DataFrameMapper([(['children','salary'], SelectKBest(chi2, k=1))])
> Expecting nothing
> ok
> Trying:
> mapper_fs.fit_transform(data[['children','salary']], data['pet'])
> Expecting:
> array([[90.],
> [24.],
> [44.],
> [27.],
> [32.],
> [59.],
> [36.],
> [27.]])
> ok
> Trying:
> mapper5 = DataFrameMapper([
> ('pet', CountVectorizer()),
> ], sparse=True)
> Expecting nothing
> ok
> Trying:
> type(mapper5.fit_transform(data))
> Expecting:
> <class 'scipy.sparse.csr.csr_matrix'>
> **********************************************************************
> File "README.rst", line 475, in README.rst
> Failed example:
> type(mapper5.fit_transform(data))
> Expected:
> <class 'scipy.sparse.csr.csr_matrix'>
> Got:
> <class 'scipy.sparse._csr.csr_matrix'>
> Trying:
> from sklearn_pandas import NumericalTransformer
> Expecting nothing
> ok
> Trying:
> mapper5 = DataFrameMapper([
> ('children', NumericalTransformer('log')),
> ])
> Expecting nothing
> ok
> Trying:
> mapper5.fit_transform(data)
> Expecting:
> array([[1.38629436],
> [1.79175947],
> [1.09861229],
> [1.09861229],
> [0.69314718],
> [1.09861229],
> [1.60943791],
> [1.38629436]])
> ok
> Trying:
> import logging
> Expecting nothing
> ok
> Trying:
> logging.getLogger('sklearn_pandas').setLevel(logging.INFO)
> Expecting nothing
> ok
> **********************************************************************
> 1 items had failures:
> 1 of 72 in README.rst
> 72 tests in 1 items.
> 71 passed and 1 failed.
> ***Test Failed*** 1 failures.
> E: pybuild pybuild:369: test: plugin distutils failed with: exit code=1: cd /<<PKGBUILDDIR>>/.pybuild/cpython3_3.9_sklearn-pandas/build; python3.9 -m pytest ; cd {dir}; python{version} -m doctest -v README.rst
> I: pybuild base:239: cd /<<PKGBUILDDIR>>/.pybuild/cpython3_3.10_sklearn-pandas/build; python3.10 -m pytest ; cd /<<PKGBUILDDIR>>; python3.10 -m doctest -v README.rst
> ============================= test session starts ==============================
> platform linux -- Python 3.10.5, pytest-6.2.5, py-1.10.0, pluggy-1.0.0
> rootdir: /<<PKGBUILDDIR>>, configfile: pytest.ini
> collected 69 items
>
> tests/test_dataframe_mapper.py ......................................... [ 59%]
> .................. [ 85%]
> tests/test_features_generator.py .... [ 91%]
> tests/test_pipeline.py .... [ 97%]
> tests/test_transformers.py ..
> [100%]
>
> =============================== warnings summary ===============================
> .pybuild/cpython3_3.10_sklearn-pandas/build/tests/test_dataframe_mapper.py: 13 warnings
> /usr/lib/python3/dist-packages/sklearn/utils/deprecation.py:87: FutureWarning: Function get_feature_names is deprecated; get_feature_names is deprecated in 1.0 and will be removed in 1.2. Please use get_feature_names_out instead.
> warnings.warn(msg, category=FutureWarning)
>
> .pybuild/cpython3_3.10_sklearn-pandas/build/tests/test_dataframe_mapper.py::test_sparse_features
>
> /<<PKGBUILDDIR>>/.pybuild/cpython3_3.10_sklearn-pandas/build/tests/test_dataframe_mapper.py:820: DeprecationWarning: Please use `csr_matrix` from the `scipy.sparse` namespace, the `scipy.sparse.csr` namespace is deprecated.
> assert type(dmatrix) == sparse.csr.csr_matrix
>
> .pybuild/cpython3_3.10_sklearn-pandas/build/tests/test_dataframe_mapper.py::test_sparse_off
>
> /<<PKGBUILDDIR>>/.pybuild/cpython3_3.10_sklearn-pandas/build/tests/test_dataframe_mapper.py:834: DeprecationWarning: Please use `csr_matrix` from the `scipy.sparse` namespace, the `scipy.sparse.csr` namespace is deprecated.
> assert type(dmatrix) != sparse.csr.csr_matrix
>
> .pybuild/cpython3_3.10_sklearn-pandas/build/tests/test_transformers.py::test_common_numerical_transformer
> .pybuild/cpython3_3.10_sklearn-pandas/build/tests/test_transformers.py::test_numerical_transformer_serialization
>
> /<<PKGBUILDDIR>>/.pybuild/cpython3_3.10_sklearn-pandas/build/sklearn_pandas/transformers.py:35: DeprecationWarning:
> NumericalTransformer will be deprecated in 3.0 version.
> Please use Sklearn.base.TransformerMixin to write > customer transformers > > warnings.warn(""" > > -- Docs: https://docs.pytest.org/en/stable/warnings.html > ======================= 69 passed, 17 warnings in 3.78s > ======================== > Trying: > from sklearn_pandas import DataFrameMapper > Expecting nothing > ok > Trying: > import pandas as pd > Expecting nothing > ok > Trying: > import numpy as np > Expecting nothing > ok > Trying: > import sklearn.preprocessing, sklearn.decomposition, \ > sklearn.linear_model, sklearn.pipeline, sklearn.metrics, \ > sklearn.compose > Expecting nothing > ok > Trying: > from sklearn.feature_extraction.text import CountVectorizer > Expecting nothing > ok > Trying: > data = pd.DataFrame({'pet': ['cat', 'dog', 'dog', 'fish', 'cat', > 'dog', 'cat', 'fish'], > 'children': [4., 6, 3, 3, 2, 3, 5, 4], > 'salary': [90., 24, 44, 27, 32, 59, 36, 27]}) > Expecting nothing > ok > Trying: > mapper = DataFrameMapper([ > ('pet', sklearn.preprocessing.LabelBinarizer()), > (['children'], sklearn.preprocessing.StandardScaler()) > ]) > Expecting nothing > ok > Trying: > data['children'].shape > Expecting: > (8,) > ok > Trying: > data[['children']].shape > Expecting: > (8, 1) > ok > Trying: > np.round(mapper.fit_transform(data.copy()), 2) > Expecting: > array([[ 1. , 0. , 0. , 0.21], > [ 0. , 1. , 0. , 1.88], > [ 0. , 1. , 0. , -0.63], > [ 0. , 0. , 1. , -0.63], > [ 1. , 0. , 0. , -1.46], > [ 0. , 1. , 0. , -0.63], > [ 1. , 0. , 0. , 1.04], > [ 0. , 0. , 1. , 0.21]]) > ok > Trying: > sample = pd.DataFrame({'pet': ['cat'], 'children': [5.]}) > Expecting nothing > ok > Trying: > np.round(mapper.transform(sample), 2) > Expecting: > array([[1. , 0. , 0. , 1.04]]) > ok > Trying: > mapper.transformed_names_ > Expecting: > ['pet_cat', 'pet_dog', 'pet_fish', 'children'] > ok > Trying: > mapper_alias = DataFrameMapper([ > (['children'], sklearn.preprocessing.StandardScaler(), > {'alias': 'children_scaled'}) > ]) > Expecting nothing > ok > Trying: > _ = mapper_alias.fit_transform(data.copy()) > Expecting nothing > ok > Trying: > mapper_alias.transformed_names_ > Expecting: > ['children_scaled'] > ok > Trying: > mapper_alias = DataFrameMapper([ > (['children'], sklearn.preprocessing.StandardScaler(), {'prefix': > 'standard_scaled_'}), > (['children'], sklearn.preprocessing.StandardScaler(), {'suffix': > '_raw'}) > ]) > Expecting nothing > ok > Trying: > _ = mapper_alias.fit_transform(data.copy()) > Expecting nothing > ok > Trying: > mapper_alias.transformed_names_ > Expecting: > ['standard_scaled_children', 'children_raw'] > ok > Trying: > class GetColumnsStartingWith: > def __init__(self, start_str): > self.pattern = start_str > > def __call__(self, X:pd.DataFrame=None): > return [c for c in X.columns if c.startswith(self.pattern)] > Expecting nothing > ok > Trying: > df = pd.DataFrame({ > 'sepal length (cm)': [1.0, 2.0, 3.0], > 'sepal width (cm)': [1.0, 2.0, 3.0], > 'petal length (cm)': [1.0, 2.0, 3.0], > 'petal width (cm)': [1.0, 2.0, 3.0] > }) > Expecting nothing > ok > Trying: > t = DataFrameMapper([ > ( > sklearn.compose.make_column_selector(dtype_include=float), > sklearn.preprocessing.StandardScaler(), > {'alias': 'x'} > ), > ( > GetColumnsStartingWith('petal'), > None, > {'alias': 'petal'} > )], df_out=True, default=False) > Expecting nothing > ok > Trying: > t.fit(df).transform(df).shape > Expecting: > (3, 6) > ok > Trying: > t.transformed_names_ > Expecting: > ['x_0', 'x_1', 'x_2', 'x_3', 'petal_0', 'petal_1'] > ok > Trying: > from sklearn.base import TransformerMixin 
> Expecting nothing > ok > Trying: > class DateEncoder(TransformerMixin): > def fit(self, X, y=None): > return self > > def transform(self, X): > dt = X.dt > return pd.concat([dt.year, dt.month, dt.day], axis=1) > Expecting nothing > ok > Trying: > dates_df = pd.DataFrame( > {'dates': pd.date_range('2015-10-30', '2015-11-02')}) > Expecting nothing > ok > Trying: > mapper_dates = DataFrameMapper([ > ('dates', DateEncoder()) > ], input_df=True) > Expecting nothing > ok > Trying: > mapper_dates.fit_transform(dates_df) > Expecting: > array([[2015, 10, 30], > [2015, 10, 31], > [2015, 11, 1], > [2015, 11, 2]]) > ok > Trying: > mapper_dates = DataFrameMapper([ > ('dates', DateEncoder(), {'input_df': True}) > ]) > Expecting nothing > ok > Trying: > mapper_dates.fit_transform(dates_df) > Expecting: > array([[2015, 10, 30], > [2015, 10, 31], > [2015, 11, 1], > [2015, 11, 2]]) > ok > Trying: > mapper_df = DataFrameMapper([ > ('pet', sklearn.preprocessing.LabelBinarizer()), > (['children'], sklearn.preprocessing.StandardScaler()) > ], df_out=True) > Expecting nothing > ok > Trying: > np.round(mapper_df.fit_transform(data.copy()), 2) > Expecting: > pet_cat pet_dog pet_fish children > 0 1 0 0 0.21 > 1 0 1 0 1.88 > 2 0 1 0 -0.63 > 3 0 0 1 -0.63 > 4 1 0 0 -1.46 > 5 0 1 0 -0.63 > 6 1 0 0 1.04 > 7 0 0 1 0.21 > ok > Trying: > mapper_df = DataFrameMapper([ > ('pet', sklearn.preprocessing.LabelBinarizer()), > (['children'], sklearn.preprocessing.StandardScaler()) > ], drop_cols=['salary']) > Expecting nothing > ok > Trying: > np.round(mapper_df.fit_transform(data.copy()), 1) > Expecting: > array([[ 1. , 0. , 0. , 0.2], > [ 0. , 1. , 0. , 1.9], > [ 0. , 1. , 0. , -0.6], > [ 0. , 0. , 1. , -0.6], > [ 1. , 0. , 0. , -1.5], > [ 0. , 1. , 0. , -0.6], > [ 1. , 0. , 0. , 1. ], > [ 0. , 0. , 1. , 0.2]]) > ok > Trying: > mapper2 = DataFrameMapper([ > (['children', 'salary'], sklearn.decomposition.PCA(1)) > ]) > Expecting nothing > ok > Trying: > np.round(mapper2.fit_transform(data.copy()), 1) > Expecting: > array([[ 47.6], > [-18.4], > [ 1.6], > [-15.4], > [-10.4], > [ 16.6], > [ -6.4], > [-15.4]]) > ok > Trying: > from sklearn.impute import SimpleImputer > Expecting nothing > ok > Trying: > mapper3 = DataFrameMapper([ > (['age'], [SimpleImputer(), > sklearn.preprocessing.StandardScaler()])]) > Expecting nothing > ok > Trying: > data_3 = pd.DataFrame({'age': [1, np.nan, 3]}) > Expecting nothing > ok > Trying: > mapper3.fit_transform(data_3) > Expecting: > array([[-1.22474487], > [ 0. ], > [ 1.22474487]]) > ok > Trying: > mapper3 = DataFrameMapper([ > ('pet', sklearn.preprocessing.LabelBinarizer()), > ('children', None) > ]) > Expecting nothing > ok > Trying: > np.round(mapper3.fit_transform(data.copy())) > Expecting: > array([[1., 0., 0., 4.], > [0., 1., 0., 6.], > [0., 1., 0., 3.], > [0., 0., 1., 3.], > [1., 0., 0., 2.], > [0., 1., 0., 3.], > [1., 0., 0., 5.], > [0., 0., 1., 4.]]) > ok > Trying: > mapper4 = DataFrameMapper([ > ('pet', sklearn.preprocessing.LabelBinarizer()), > ('children', None) > ], default=sklearn.preprocessing.StandardScaler()) > Expecting nothing > ok > Trying: > np.round(mapper4.fit_transform(data.copy()), 1) > Expecting: > array([[ 1. , 0. , 0. , 4. , 2.3], > [ 0. , 1. , 0. , 6. , -0.9], > [ 0. , 1. , 0. , 3. , 0.1], > [ 0. , 0. , 1. , 3. , -0.7], > [ 1. , 0. , 0. , 2. , -0.5], > [ 0. , 1. , 0. , 3. , 0.8], > [ 1. , 0. , 0. , 5. , -0.3], > [ 0. , 0. , 1. , 4. 
, -0.7]]) > ok > Trying: > from sklearn_pandas import gen_features > Expecting nothing > ok > Trying: > feature_def = gen_features( > columns=['col1', 'col2', 'col3'], > classes=[sklearn.preprocessing.LabelEncoder] > ) > Expecting nothing > ok > Trying: > feature_def > Expecting: > [('col1', [LabelEncoder()], {}), ('col2', [LabelEncoder()], {}), ('col3', > [LabelEncoder()], {})] > ok > Trying: > mapper5 = DataFrameMapper(feature_def) > Expecting nothing > ok > /usr/lib/python3/dist-packages/sklearn/utils/deprecation.py:87: > FutureWarning: Function get_feature_names is deprecated; get_feature_names is > deprecated in 1.0 and will be removed in 1.2. Please use > get_feature_names_out instead. > warnings.warn(msg, category=FutureWarning) > Trying: > data5 = pd.DataFrame({ > 'col1': ['yes', 'no', 'yes'], > 'col2': [True, False, False], > 'col3': ['one', 'two', 'three'] > }) > Expecting nothing > ok > Trying: > mapper5.fit_transform(data5) > Expecting: > array([[1, 1, 0], > [0, 0, 2], > [1, 0, 1]]) > ok > Trying: > from sklearn.impute import SimpleImputer > Expecting nothing > ok > Trying: > import numpy as np > Expecting nothing > ok > Trying: > feature_def = gen_features( > columns=[['col1'], ['col2'], ['col3']], > classes=[{'class': SimpleImputer, 'strategy':'most_frequent'}] > ) > Expecting nothing > ok > Trying: > mapper6 = DataFrameMapper(feature_def) > Expecting nothing > ok > Trying: > data6 = pd.DataFrame({ > 'col1': [np.nan, 1, 1, 2, 3], > 'col2': [True, False, np.nan, np.nan, True], > 'col3': [0, 0, 0, np.nan, np.nan] > }) > Expecting nothing > ok > Trying: > mapper6.fit_transform(data6) > Expecting: > array([[1.0, True, 0.0], > [1.0, False, 0.0], > [1.0, True, 0.0], > [2.0, True, 0.0], > [3.0, True, 0.0]], dtype=object) > ok > Trying: > feature_def = gen_features( > columns=['col1', 'col2', 'col3'], > classes=[sklearn.preprocessing.LabelEncoder], > prefix="lblencoder_" > ) > Expecting nothing > ok > Trying: > mapper5 = DataFrameMapper(feature_def) > Expecting nothing > ok > Trying: > data5 = pd.DataFrame({ > 'col1': ['yes', 'no', 'yes'], > 'col2': [True, False, False], > 'col3': ['one', 'two', 'three'] > }) > Expecting nothing > ok > Trying: > _ = mapper5.fit_transform(data5) > Expecting nothing > ok > Trying: > mapper5.transformed_names_ > Expecting: > ['lblencoder_col1', 'lblencoder_col2', 'lblencoder_col3'] > ok > Trying: > from sklearn.feature_selection import SelectKBest, chi2 > Expecting nothing > ok > Trying: > mapper_fs = DataFrameMapper([(['children','salary'], SelectKBest(chi2, > k=1))]) > Expecting nothing > ok > Trying: > mapper_fs.fit_transform(data[['children','salary']], data['pet']) > Expecting: > array([[90.], > [24.], > [44.], > [27.], > [32.], > [59.], > [36.], > [27.]]) > ok > Trying: > mapper5 = DataFrameMapper([ > ('pet', CountVectorizer()), > ], sparse=True) > Expecting nothing > ok > Trying: > type(mapper5.fit_transform(data)) > Expecting: > <class 'scipy.sparse.csr.csr_matrix'> > ********************************************************************** > File "README.rst", line 475, in README.rst > Failed example: > type(mapper5.fit_transform(data)) > Expected: > <class 'scipy.sparse.csr.csr_matrix'> > Got: > <class 'scipy.sparse._csr.csr_matrix'> > Trying: > from sklearn_pandas import NumericalTransformer > Expecting nothing > ok > Trying: > mapper5 = DataFrameMapper([ > ('children', NumericalTransformer('log')), > ]) > Expecting nothing > ok > Trying: > mapper5.fit_transform(data) > Expecting: > array([[1.38629436], > [1.79175947], > [1.09861229], > 
[1.09861229],
> [0.69314718],
> [1.09861229],
> [1.60943791],
> [1.38629436]])
> ok
> Trying:
> import logging
> Expecting nothing
> ok
> Trying:
> logging.getLogger('sklearn_pandas').setLevel(logging.INFO)
> Expecting nothing
> ok
> **********************************************************************
> 1 items had failures:
> 1 of 72 in README.rst
> 72 tests in 1 items.
> 71 passed and 1 failed.
> ***Test Failed*** 1 failures.
> E: pybuild pybuild:369: test: plugin distutils failed with: exit code=1: cd /<<PKGBUILDDIR>>/.pybuild/cpython3_3.10_sklearn-pandas/build; python3.10 -m pytest ; cd {dir}; python{version} -m doctest -v README.rst
> rm -fr -- /tmp/dh-xdg-rundir-fKyp54YC
> dh_auto_test: error: pybuild --test --test-pytest -i python{version} -p "3.9 3.10" returned exit code 13

The full build log is available from:
http://qa-logs.debian.net/2022/06/24/sklearn-pandas_2.2.0-1_unstable.log

All bugs filed during this archive rebuild are listed at:
https://bugs.debian.org/cgi-bin/pkgreport.cgi?tag=ftbfs-20220624;users=lu...@debian.org
or:
https://udd.debian.org/bugs/?release=na&merged=ign&fnewerval=7&flastmodval=7&fusertag=only&fusertagtag=ftbfs-20220624&fusertaguser=lu...@debian.org&allbugs=1&cseverity=1&ctags=1&caffected=1#results

A list of current common problems and possible solutions is available at
http://wiki.debian.org/qa.debian.org/FTBFS . You're welcome to contribute!

If you reassign this bug to another package, please mark it as 'affects'-ing this package. See https://www.debian.org/Bugs/server-control#affects

If you fail to reproduce this, please provide a build log and diff it with mine so that we can identify if something relevant changed in the meantime.
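For what it's worth, the failing doctest (README.rst line 475) and the two asserts flagged in tests/test_dataframe_mapper.py (lines 820 and 834) all spell out the old scipy.sparse.csr.csr_matrix path, which current SciPy has moved to the private scipy.sparse._csr module, so the expected repr no longer matches. A possible fix is sketched below (untested in this build environment, only a suggestion): it checks against the public scipy.sparse.csr_matrix alias, which resolves to the same class on both the old and the new module layout, and reuses the data/mapper5 names from the README example:

    import pandas as pd
    import scipy.sparse as sparse
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn_pandas import DataFrameMapper

    # Same data and mapper as the README example that fails under newer SciPy.
    data = pd.DataFrame({'pet': ['cat', 'dog', 'dog', 'fish',
                                 'cat', 'dog', 'cat', 'fish'],
                         'children': [4., 6, 3, 3, 2, 3, 5, 4],
                         'salary': [90., 24, 44, 27, 32, 59, 36, 27]})
    mapper5 = DataFrameMapper([('pet', CountVectorizer())], sparse=True)
    dmatrix = mapper5.fit_transform(data)

    # The public alias scipy.sparse.csr_matrix refers to the same class
    # whether it lives in scipy.sparse.csr (old) or scipy.sparse._csr (new),
    # so the check does not depend on the class's module path or repr.
    assert isinstance(dmatrix, sparse.csr_matrix)

    # If only sparsity matters, this is even less version-sensitive.
    assert sparse.issparse(dmatrix)

For the README doctest itself, printing sparse.issparse(mapper5.fit_transform(data)) and expecting True would keep the output stable across SciPy versions, instead of comparing the type's repr. The FutureWarning in the log also notes that get_feature_names is scheduled for removal in scikit-learn 1.2, so a switch to get_feature_names_out will likely be needed to avoid a similar failure later.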