Your message dated Fri, 21 Feb 2025 13:22:16 +0000
with message-id <e1tlsyu-002zgf...@fasolo.debian.org>
and subject line Bug#1093354: fixed in pytorch-geometric 2.6.1-2
has caused the Debian Bug report #1093354,
regarding pytorch-geometric: FTBFS: Error: Python 3.13+ not yet supported for torch.compile
to be marked as done.

This means that you claim that the problem has been dealt with.
If this is not the case, it is now your responsibility to reopen the
bug report if necessary, and/or fix the problem forthwith.

(NB: If you are a system administrator and have no idea what this
message is talking about, this may indicate a serious mail system
misconfiguration somewhere. Please contact ow...@bugs.debian.org
immediately.)


-- 
1093354: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1093354
Debian Bug Tracking System
Contact ow...@bugs.debian.org with problems
--- Begin Message ---
Package: src:pytorch-geometric
Version: 2.6.1-1
Severity: serious
Tags: ftbfs trixie sid

Dear maintainer:

During a rebuild of all packages in unstable, your package failed to build:

--------------------------------------------------------------------------------
[...]
 debian/rules clean
dh clean --buildsystem pybuild
   dh_auto_clean -O--buildsystem=pybuild
   dh_autoreconf_clean -O--buildsystem=pybuild
   dh_clean -O--buildsystem=pybuild
 debian/rules binary
dh binary --buildsystem pybuild
   dh_update_autotools_config -O--buildsystem=pybuild
   dh_autoreconf -O--buildsystem=pybuild
   dh_auto_configure -O--buildsystem=pybuild
   dh_auto_build -O--buildsystem=pybuild
I: pybuild plugin_pyproject:129: Building wheel for python3.13 with "build" module
I: pybuild base:311: python3.13 -m build --skip-dependency-check --no-isolation --wheel --outdir /<<PKGBUILDDIR>>/.pybuild/cpython3_3.13_torch-geometric
* Building wheel...

[... snipped ...]

        model = Model(
            in_channels=8,
            hidden_channels=16,
            num_layers=2,
            **kwargs,
        ).to(device)
    
>       explanation = dynamo.explain(model)(x, edge_index)

test/nn/models/test_basic_gnn.py:359: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
/usr/lib/python3/dist-packages/torch/_dynamo/eval_frame.py:832: in inner
    opt_f = optimize(
/usr/lib/python3/dist-packages/torch/_dynamo/eval_frame.py:716: in optimize
    return _optimize(rebuild_ctx, *args, **kwargs)
/usr/lib/python3/dist-packages/torch/_dynamo/eval_frame.py:755: in _optimize
    check_if_dynamo_supported()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

    def check_if_dynamo_supported():
        if sys.version_info >= (3, 13):
>           raise RuntimeError("Python 3.13+ not yet supported for torch.compile")
E           RuntimeError: Python 3.13+ not yet supported for torch.compile

/usr/lib/python3/dist-packages/torch/_dynamo/eval_frame.py:689: RuntimeError
______________________ test_compile_graph_breaks[GIN-cpu] ______________________

Model = <class 'torch_geometric.nn.models.basic_gnn.GIN'>
device = device(type='cpu')

    @withDevice
    @onlyLinux
    @withPackage('torch>=2.1.0')
    @pytest.mark.parametrize('Model', [GCN, GraphSAGE, GIN, GAT, EdgeCNN, PNA])
    def test_compile_graph_breaks(Model, device):
        import torch._dynamo as dynamo

        x = torch.randn(3, 8, device=device)
        edge_index = torch.tensor([[0, 1, 1, 2], [1, 0, 2, 1]], device=device)

        kwargs = {}
        if Model in {GCN, GAT}:
            # Adding self-loops inside the model leads to graph breaks :(
            kwargs['add_self_loops'] = False

        if Model in {PNA}:  # `PNA` requires additional arguments:
            kwargs['aggregators'] = ['sum', 'mean', 'min', 'max', 'var', 'std']
            kwargs['scalers'] = ['identity', 'amplification', 'attenuation']
            kwargs['deg'] = torch.tensor([1, 2, 1])

        model = Model(
            in_channels=8,
            hidden_channels=16,
            num_layers=2,
            **kwargs,
        ).to(device)

>       explanation = dynamo.explain(model)(x, edge_index)

test/nn/models/test_basic_gnn.py:359:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/usr/lib/python3/dist-packages/torch/_dynamo/eval_frame.py:832: in inner
    opt_f = optimize(
/usr/lib/python3/dist-packages/torch/_dynamo/eval_frame.py:716: in optimize
    return _optimize(rebuild_ctx, *args, **kwargs)
/usr/lib/python3/dist-packages/torch/_dynamo/eval_frame.py:755: in _optimize
    check_if_dynamo_supported()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

    def check_if_dynamo_supported():
        if sys.version_info >= (3, 13):
>           raise RuntimeError("Python 3.13+ not yet supported for torch.compile")
E           RuntimeError: Python 3.13+ not yet supported for torch.compile

/usr/lib/python3/dist-packages/torch/_dynamo/eval_frame.py:689: RuntimeError
[... identical tracebacks snipped for test_compile_graph_breaks[GAT-cpu], test_compile_graph_breaks[EdgeCNN-cpu], and test_compile_graph_breaks[PNA-cpu]; each fails with the same "RuntimeError: Python 3.13+ not yet supported for torch.compile" ...]
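
Every parametrization fails at the same dynamo version check. As a hedged sketch (not pytorch-geometric's actual code; the marker name is an illustrative assumption), such tests could be kept collected but skipped on unsupported interpreters with a pytest marker:

    import sys

    import pytest

    # Sketch: skip torch.compile-dependent tests on interpreters where
    # dynamo refuses to run (cf. check_if_dynamo_supported above).
    requires_dynamo = pytest.mark.skipif(
        sys.version_info >= (3, 13),
        reason="Python 3.13+ not yet supported for torch.compile",
    )

    @requires_dynamo
    def test_compile_graph_breaks():
        ...  # test body as in the log above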
_________________________________ test_compile _________________________________

    @onlyLinux
    @withPackage('torch>=2.0.0')
    def test_compile():
>       model = torch.compile(torch.nn.Linear(1, 1))

test/test_isinstance.py:14: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

model = Linear(in_features=1, out_features=1, bias=True)

    def compile(
        model: _Optional[_Callable] = None,
        *,
        fullgraph: builtins.bool = False,
        dynamic: _Optional[builtins.bool] = None,
        backend: _Union[str, _Callable] = "inductor",
        mode: _Union[str, None] = None,
        options: _Optional[_Dict[str, _Union[str, builtins.int, builtins.bool]]] = None,
        disable: builtins.bool = False,
    ) -> _Union[
        _Callable[[_Callable[_InputT, _RetT]], _Callable[_InputT, _RetT]],
        _Callable[_InputT, _RetT],
    ]:
        """
        Optimizes given model/function using TorchDynamo and specified backend.
        If you are compiling an :class:`torch.nn.Module`, you can also use
        :meth:`torch.nn.Module.compile` to compile the module inplace without
        changing its structure.

        Concretely, for every frame executed within the compiled region, we will
        attempt to compile it and cache the compiled result on the code object
        for future use.  A single frame may be compiled multiple times if
        previous compiled results are not applicable for subsequent calls (this
        is called a "guard failure"), you can use TORCH_LOGS=guards to debug
        these situations.  Multiple compiled results can be associated with a
        frame up to ``torch._dynamo.config.cache_size_limit``, which defaults to
        8; at which point we will fall back to eager.  Note that compile caches
        are per *code object*, not frame; if you dynamically create multiple
        copies of a function, they will all share the same code cache.

        Args:
           model (Callable): Module/function to optimize
           fullgraph (bool): If False (default), torch.compile attempts to
            discover compileable regions in the function that it will optimize.
            If True, then we require that the entire function be capturable into
            a single graph. If this is not possible (that is, if there are graph
            breaks), then this will raise an error.
           dynamic (bool or None): Use dynamic shape tracing.  When this is
            True, we will up-front attempt to generate a kernel that is as
            dynamic as possible to avoid recompilations when sizes change.  This
            may not always work as some operations/optimizations will force
            specialization; use TORCH_LOGS=dynamic to debug overspecialization.
            When this is False, we will NEVER generate dynamic kernels, we will
            always specialize.  By default (None), we automatically detect if
            dynamism has occurred and compile a more dynamic kernel upon
            recompile.
           backend (str or Callable): backend to be used

            - "inductor" is the default backend, which is a good balance between
              performance and overhead

            - Non experimental in-tree backends can be seen with
              `torch._dynamo.list_backends()`

            - Experimental or debug in-tree backends can be seen with
              `torch._dynamo.list_backends(None)`

            - To register an out-of-tree custom backend:
              https://pytorch.org/docs/main/torch.compiler_custom_backends.html#registering-custom-backends
           mode (str): Can be either "default", "reduce-overhead", "max-autotune"
            or "max-autotune-no-cudagraphs"

            - "default" is the default mode, which is a good balance between
              performance and overhead

            - "reduce-overhead" is a mode that reduces the overhead of python
              with CUDA graphs, useful for small batches.  Reduction of overhead
              can come at the cost of more memory usage, as we will cache the
              workspace memory required for the invocation so that we do not
              have to reallocate it on subsequent runs.  Reduction of overhead
              is not guaranteed to work; today, we only reduce overhead for CUDA
              only graphs which do not mutate inputs.  There are other
              circumstances where CUDA graphs are not applicable; use
              TORCH_LOG=perf_hints to debug.

            - "max-autotune" is a mode that leverages Triton or template based
              matrix multiplications on supported devices and Triton based
              convolutions on GPU.  It enables CUDA graphs by default on GPU.

            - "max-autotune-no-cudagraphs" is a mode similar to "max-autotune"
              but without CUDA graphs

            - To see the exact configs that each mode sets you can call
              `torch._inductor.list_mode_options()`

           options (dict): A dictionary of options to pass to the backend. Some
            notable ones to try out are

            - `epilogue_fusion` which fuses pointwise ops into templates.
              Requires `max_autotune` to also be set

            - `max_autotune` which will profile to pick the best matmul
              configuration

            - `fallback_random` which is useful when debugging accuracy issues

            - `shape_padding` which pads matrix shapes to better align loads on
              GPUs especially for tensor cores

            - `triton.cudagraphs` which will reduce the overhead of python with
              CUDA graphs

            - `trace.enabled` which is the most useful debugging flag to turn on

            - `trace.graph_diagram` which will show you a picture of your graph
              after fusion

            - For inductor you can see the full list of configs that it supports
              by calling `torch._inductor.list_options()`
           disable (bool): Turn torch.compile() into a no-op for testing

        Example::

            @torch.compile(options={"triton.cudagraphs": True}, fullgraph=True)
            def foo(x):
                return torch.sin(x) + torch.cos(x)

        """
        _C._log_api_usage_once("torch.compile")
        if sys.version_info >= (3, 13):
>           raise RuntimeError("Dynamo is not supported on Python 3.13+")
E           RuntimeError: Dynamo is not supported on Python 3.13+

/usr/lib/python3/dist-packages/torch/__init__.py:2416: RuntimeError
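
Note that `_C._log_api_usage_once` is immediately followed by the version check, so the error fires before the `disable` flag documented above is ever consulted; the only safe guard is around the call itself. A minimal sketch of such a guard (the helper name `maybe_compile` is an illustrative assumption):

    import sys

    import torch

    def maybe_compile(model):
        # Sketch: call torch.compile only where dynamo supports the
        # interpreter; on Python 3.13+ this torch build raises before
        # honouring disable=True, so fall back to the eager module.
        if sys.version_info >= (3, 13):
            return model
        return torch.compile(model)

    model = maybe_compile(torch.nn.Linear(1, 1))
    print(model(torch.randn(2, 1)))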
=============================== warnings summary ===============================
torch_geometric/inspector.py:433: 60 warnings
test/contrib/nn/models/test_rbcd_attack.py: 792 warnings
test/explain/algorithm/test_attention_explainer.py: 483 warnings
test/explain/algorithm/test_captum.py: 46 warnings
test/explain/algorithm/test_explain_algorithm_utils.py: 106 warnings
test/explain/algorithm/test_gnn_explainer.py: 22530 warnings
test/explain/algorithm/test_graphmask_explainer.py: 18144 warnings
test/explain/algorithm/test_pg_explainer.py: 414 warnings
test/loader/test_neighbor_loader.py: 146 warnings
test/nn/conv/test_agnn_conv.py: 30 warnings
test/nn/conv/test_antisymmetric_conv.py: 11 warnings
test/nn/conv/test_arma_conv.py: 24 warnings
test/nn/conv/test_cg_conv.py: 68 warnings
test/nn/conv/test_cheb_conv.py: 20 warnings
test/nn/conv/test_cluster_gcn_conv.py: 13 warnings
test/nn/conv/test_create_gnn.py: 10 warnings
test/nn/conv/test_dir_gnn_conv.py: 20 warnings
test/nn/conv/test_dna_conv.py: 57 warnings
test/nn/conv/test_edge_conv.py: 22 warnings
test/nn/conv/test_eg_conv.py: 72 warnings
test/nn/conv/test_fa_conv.py: 15 warnings
test/nn/conv/test_feast_conv.py: 11 warnings
test/nn/conv/test_film_conv.py: 47 warnings
test/nn/conv/test_gat_conv.py: 144 warnings
test/nn/conv/test_gated_graph_conv.py: 13 warnings
test/nn/conv/test_gatv2_conv.py: 96 warnings
test/nn/conv/test_gcn2_conv.py: 13 warnings
test/nn/conv/test_gcn_conv.py: 66 warnings
test/nn/conv/test_gen_conv.py: 92 warnings
test/nn/conv/test_general_conv.py: 210 warnings
test/nn/conv/test_gin_conv.py: 63 warnings
test/nn/conv/test_gmm_conv.py: 82 warnings
test/nn/conv/test_gps_conv.py: 60 warnings
test/nn/conv/test_graph_conv.py: 38 warnings
test/nn/conv/test_gravnet_conv.py: 12 warnings
test/nn/conv/test_han_conv.py: 58 warnings
test/nn/conv/test_heat_conv.py: 33 warnings
test/nn/conv/test_hetero_conv.py: 352 warnings
test/nn/conv/test_hgt_conv.py: 105 warnings
test/nn/conv/test_hypergraph_conv.py: 44 warnings
test/nn/conv/test_le_conv.py: 14 warnings
test/nn/conv/test_lg_conv.py: 13 warnings
test/nn/conv/test_message_passing.py: 462 warnings
test/nn/conv/test_mf_conv.py: 21 warnings
test/nn/conv/test_mixhop_conv.py: 13 warnings
test/nn/conv/test_nn_conv.py: 22 warnings
test/nn/conv/test_pan_conv.py: 11 warnings
test/nn/conv/test_pdn_conv.py: 24 warnings
test/nn/conv/test_pna_conv.py: 24 warnings
test/nn/conv/test_point_conv.py: 13 warnings
test/nn/conv/test_point_gnn_conv.py: 14 warnings
test/nn/conv/test_point_transformer_conv.py: 51 warnings
test/nn/conv/test_ppf_conv.py: 16 warnings
test/nn/conv/test_res_gated_graph_conv.py: 52 warnings
test/nn/conv/test_rgat_conv.py: 3858 warnings
test/nn/conv/test_rgcn_conv.py: 211 warnings
test/nn/conv/test_sage_conv.py: 170 warnings
test/nn/conv/test_sg_conv.py: 13 warnings
test/nn/conv/test_signed_conv.py: 21 warnings
test/nn/conv/test_simple_conv.py: 46 warnings
test/nn/conv/test_ssg_conv.py: 13 warnings
test/nn/conv/test_static_graph.py: 30 warnings
test/nn/conv/test_supergat_conv.py: 25 warnings
test/nn/conv/test_tag_conv.py: 24 warnings
test/nn/conv/test_transformer_conv.py: 120 warnings
test/nn/conv/test_wl_conv_continuous.py: 13 warnings
test/nn/dense/test_dense_gat_conv.py: 64 warnings
test/nn/dense/test_dense_gcn_conv.py: 11 warnings
test/nn/dense/test_dense_gin_conv.py: 10 warnings
test/nn/dense/test_dense_graph_conv.py: 72 warnings
test/nn/dense/test_dense_sage_conv.py: 10 warnings
test/nn/models/test_attentive_fp.py: 48 warnings
test/nn/models/test_basic_gnn.py: 104798 warnings
test/nn/models/test_correct_and_smooth.py: 46 warnings
test/nn/models/test_deep_graph_infomax.py: 22 warnings
test/nn/models/test_deepgcn.py: 80 warnings
test/nn/models/test_label_prop.py: 11 warnings
test/nn/models/test_lightgcn.py: 792 warnings
test/nn/models/test_linkx.py: 24 warnings
test/nn/models/test_neural_fingerprint.py: 80 warnings
test/nn/models/test_pmlp.py: 11 warnings
test/nn/models/test_rect.py: 11 warnings
test/nn/models/test_rev_gnn.py: 208 warnings
test/nn/models/test_schnet.py: 100 warnings
test/nn/models/test_signed_gcn.py: 20 warnings
test/nn/models/test_visnet.py: 260 warnings
test/nn/pool/test_pan_pool.py: 11 warnings
test/nn/pool/test_sag_pool.py: 147 warnings
test/nn/test_sequential.py: 178 warnings
test/nn/test_to_hetero_module.py: 10 warnings
test/nn/test_to_hetero_transformer.py: 318 warnings
test/nn/test_to_hetero_with_bases_transformer.py: 140 warnings
test/profile/test_profiler.py: 20 warnings
test/test_inspector.py: 14 warnings
test/utils/test_embedding.py: 22 warnings
test/utils/test_subgraph.py: 22 warnings
test/visualization/test_influence.py: 22 warnings
  /<<PKGBUILDDIR>>/.pybuild/cpython3_3.13_torch-geometric/build/torch_geometric/inspector.py:433: DeprecationWarning: Failing to pass a value to the 'type_params' parameter of 'typing._eval_type' is deprecated, as it leads to incorrect behaviour when calling typing._eval_type on a stringified annotation that references a PEP 695 type parameter. It will be disallowed in Python 3.15.
    return typing._eval_type(value, _globals, None)  # type: ignore
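
The warning also names its remedy: pass a value for 'type_params'. A hedged sketch of the adjusted call (typing._eval_type is a private API, and whether an empty tuple is the right value here is an assumption):

    import sys
    import typing

    def eval_type(value, _globals):
        # Sketch: supply type_params on Python 3.13+, as the
        # DeprecationWarning requests; keep the old call elsewhere.
        if sys.version_info >= (3, 13):
            return typing._eval_type(value, _globals, None, type_params=())
        return typing._eval_type(value, _globals, None)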

torch_geometric/graphgym/config.py:19
  /<<PKGBUILDDIR>>/.pybuild/cpython3_3.13_torch-geometric/build/torch_geometric/graphgym/config.py:19: UserWarning: Could not define global config object. Please install 'yacs' via 'pip install yacs' in order to use GraphGym
    warnings.warn("Could not define global config object. Please install "

torch_geometric/graphgym/imports.py:14
  /<<PKGBUILDDIR>>/.pybuild/cpython3_3.13_torch-geometric/build/torch_geometric/graphgym/imports.py:14: UserWarning: Please install 'pytorch_lightning' via  'pip install pytorch_lightning' in order to use GraphGym
    warnings.warn("Please install 'pytorch_lightning' via  "

test/loader/test_dataloader.py: 17 warnings
  /usr/lib/python3.13/multiprocessing/popen_fork.py:67: DeprecationWarning: This process (pid=170812) is multi-threaded, use of fork() may lead to deadlocks in the child.
    self.pid = os.fork()

test/loader/test_imbalanced_sampler.py: 2 warnings
test/loader/test_link_neighbor_loader.py: 22 warnings
test/loader/test_mixin.py: 3 warnings
test/loader/test_neighbor_loader.py: 16 warnings
test/loader/test_zip_loader.py: 2 warnings
test/nn/conv/test_pna_conv.py: 1 warning
  /<<PKGBUILDDIR>>/.pybuild/cpython3_3.13_torch-geometric/build/torch_geometric/sampler/neighbor_sampler.py:61: UserWarning: Using 'NeighborSampler' without a 'pyg-lib' installation is deprecated and will be removed soon. Please install 'pyg-lib' for accelerated neighborhood sampling
    warnings.warn(f"Using '{self.__class__.__name__}' without a "

-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
=========================== short test summary info ============================
FAILED test/data/test_feature_store.py::test_feature_store - RuntimeError: Boolean value of Tensor with more than one value is ambiguous
FAILED test/nn/conv/test_hetero_conv.py::test_compile_hetero_conv_graph_breaks[cpu] - RuntimeError: Python 3.13+ not yet supported for torch.compile
FAILED test/nn/conv/test_sage_conv.py::test_compile_multi_aggr_sage_conv[cpu] - RuntimeError: Python 3.13+ not yet supported for torch.compile
FAILED test/nn/models/test_basic_gnn.py::test_packaging - AttributeError: module 'typing' has no attribute 'io'. Did you mean: 'IO'?
FAILED test/nn/models/test_basic_gnn.py::test_compile_graph_breaks[GCN-cpu] - RuntimeError: Python 3.13+ not yet supported for torch.compile
FAILED test/nn/models/test_basic_gnn.py::test_compile_graph_breaks[GraphSAGE-cpu] - RuntimeError: Python 3.13+ not yet supported for torch.compile
FAILED test/nn/models/test_basic_gnn.py::test_compile_graph_breaks[GIN-cpu] - RuntimeError: Python 3.13+ not yet supported for torch.compile
FAILED test/nn/models/test_basic_gnn.py::test_compile_graph_breaks[GAT-cpu] - RuntimeError: Python 3.13+ not yet supported for torch.compile
FAILED test/nn/models/test_basic_gnn.py::test_compile_graph_breaks[EdgeCNN-cpu] - RuntimeError: Python 3.13+ not yet supported for torch.compile
FAILED test/nn/models/test_basic_gnn.py::test_compile_graph_breaks[PNA-cpu] - RuntimeError: Python 3.13+ not yet supported for torch.compile
FAILED test/test_isinstance.py::test_compile - RuntimeError: Dynamo is not supported on Python 3.13+
= 11 failed, 5533 passed, 862 skipped, 53 deselected, 157563 warnings in 89.04s (0:01:29) =
E: pybuild pybuild:389: test: plugin pyproject failed with: exit code=1: cd /<<PKGBUILDDIR>>/.pybuild/cpython3_3.13_torch-geometric/build; python3.13 -m pytest -k 'not test_citeseer and not test_enzymes and not test_mutag and not test_basic_gnn_inference and not _on_cora and not test_torch_profile and not test_appnp and not test_asap and not test_two_hop and not test_add_random_walk_pe and not test_graph_unet and not test_spspmm and not test_add_metapaths and not test_type_repr'
dh_auto_test: error: pybuild --test --test-pytest -i python{version} -p 3.13 returned exit code 13
make: *** [debian/rules:12: binary] Error 25
dpkg-buildpackage: error: debian/rules binary subprocess returned exit status 2
--------------------------------------------------------------------------------

The above is just how the build ends; it is not necessarily the most relevant part.
If required, the full build log is available here:

https://people.debian.org/~sanvila/build-logs/202501/

About the archive rebuild: The build was made on virtual machines from AWS,
using sbuild and a reduced chroot with only build-essential packages.

If you could not reproduce the bug, please contact me privately, as I
am willing to provide ssh access to a virtual machine where the bug is
fully reproducible.

If this is really a bug in one of the build-depends, please reassign it
and add an "affects" on src:pytorch-geometric, so that it remains
visible on the BTS web page for this package.

Thanks.
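
As context for the fix recorded in the follow-up message below ("Update skipped test list"), here is a sketch of one way such a skip list can be expressed in a conftest.py. This is an assumption about mechanism, not the actual 2.6.1-2 change, which may instead extend the pytest -k expression shown above:

    import sys

    import pytest

    # conftest.py sketch: skip torch.compile-dependent tests on
    # Python 3.13, where this torch build raises at call time.
    def pytest_collection_modifyitems(config, items):
        if sys.version_info < (3, 13):
            return
        skip = pytest.mark.skip(reason="torch.compile unsupported on Python 3.13+")
        for item in items:
            if "compile" in item.nodeid:
                item.add_marker(skip)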

--- End Message ---
--- Begin Message ---
Source: pytorch-geometric
Source-Version: 2.6.1-2
Done: Andrius Merkys <mer...@debian.org>

We believe that the bug you reported is fixed in the latest version of
pytorch-geometric, which is due to be installed in the Debian FTP archive.

A summary of the changes between this version and the previous one is
attached.

Thank you for reporting the bug, which will now be closed.  If you
have further comments, please address them to 1093...@bugs.debian.org,
and the maintainer will reopen the bug report if appropriate.

Debian distribution maintenance software
pp.
Andrius Merkys <mer...@debian.org> (supplier of updated pytorch-geometric package)

(This message was generated automatically at their request; if you
believe that there is a problem with it, please contact the archive
administrators by mailing ftpmas...@ftp-master.debian.org)


-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA512

Format: 1.8
Date: Fri, 21 Feb 2025 07:58:41 -0500
Source: pytorch-geometric
Architecture: source
Version: 2.6.1-2
Distribution: unstable
Urgency: medium
Maintainer: Debian Deep Learning Team <debian-science-maintain...@lists.alioth.debian.org>
Changed-By: Andrius Merkys <mer...@debian.org>
Closes: 1093354
Changes:
 pytorch-geometric (2.6.1-2) unstable; urgency=medium
 .
   * Update skipped test list (Closes: #1093354)
Checksums-Sha1:
 3a7425ff04a83a033a2cb0f2dcbc7c9f79f096fa 2541 pytorch-geometric_2.6.1-2.dsc
 e67d5bd9f4d3fb8ab732dd8ebdd288fb2a2f94d6 10384 pytorch-geometric_2.6.1-2.debian.tar.xz
 839c8697615daeb07714c9141acd86517ddcdfa7 12261 pytorch-geometric_2.6.1-2_source.buildinfo
Checksums-Sha256:
 7b18c43aa477f5fbf4a9722f31c1d3108dde37cac7193c055a9308a26ff00f16 2541 pytorch-geometric_2.6.1-2.dsc
 afd99ff18460dc76c971fd4bdd6133f1884e4b991eee5231ad829fa8b82e845b 10384 pytorch-geometric_2.6.1-2.debian.tar.xz
 e89668153f8087a64ab5d811c57bb06fa45a1bda5bc3de69198daa79cb04051f 12261 pytorch-geometric_2.6.1-2_source.buildinfo
Files:
 0a9c3cd9bcb879b76d2db2d43091041d 2541 science optional pytorch-geometric_2.6.1-2.dsc
 1c10a4ca69f35bb91bc1f8afaea3c819 10384 science optional pytorch-geometric_2.6.1-2.debian.tar.xz
 af0df6f9d921ecb92e701dc086cb645c 12261 science optional pytorch-geometric_2.6.1-2_source.buildinfo

-----BEGIN PGP SIGNATURE-----

iQJGBAEBCgAwFiEEdyKS9veshfrgQdQe5fQ/nCc08ocFAme4eQASHG1lcmt5c0Bk
ZWJpYW4ub3JnAAoJEOX0P5wnNPKHUvoQAJgnLX9howFWbRvgjg0zqRSZUdAACc+A
svx9VAxot6scH309GrTj8i5RcvqDR+AhKhI/icYccVMJyvwxD1IZTSLWj7Q9PUUp
Ns+uRvzqhrC11uqaiCzLcmFYKYO1GYUYn4Z/ZAzeBAngfxgxF6vzm57ZiFpo5osd
qGXc3GDQOWQY7jQKW9s4Wb/BxfF9UodxCLYmvAoefeO5LAICcr2XWYJDnt5ljE3Q
LsBZHIx07k9pnGl5d/e+yeEHP0xyVnO2Z1NWNeoDQB7+h4Z7dGxhdYmzmVAApn7C
p1A1WpuZG+FfMxLGKW6eE1yb/MxPB/cAwOqpeFUmZwuI/u586jWeKzrFmfqxauLN
oXQMwUoYjtYT4V45Lp73Dx5jhjYXz8oESoe4MTyqleg2ZzmVLTnWXy4jwsBNkq6T
3h53YbOzqNJjywdeshqUlNyeJB31cUd8MOuOMg5mErqnZNWc1eIiWeWxPKR56Qw+
EI7XuSc1OaHKCa0WzmX+XVYSFfCLpSMszd7cGKHvwybQN+SctjyeUu+YzdyNgrwZ
v6M2CGeXZlnp4hjesWcz8lCSfEDl+9mrPv3NI8a4wVo7PXdyk5APJ6rE3UAYQ5nL
i8K5bu2JMRkiNYSBUuUOj6aIuVa2ovR2AfGnNEjOEAWgH86hDZ5LN5d+ZtNFV4MT
Nyn65ZScBq9b
=ix4u
-----END PGP SIGNATURE-----



--- End Message ---
