https://bugs.kde.org/show_bug.cgi?id=499563

--- Comment #3 from farid <snd.no...@gmail.com> ---
With a few frames it does work, but with something like 2 seconds of footage I
get a freeze, a crash, or this error:

Resize Array, COLS:
1
NumPy Array:
{0: array([[2239, 1293]])}
NumPy Array:
{0: array([1])}
using device: cuda:0
Traceback (most recent call last):
  File "/usr/share/kdenlive/scripts/automask/sam-objectmask.py", line 104, in
<module>
    sam2_model = build_sam2(model_cfg, sam2_checkpoint, device=device)
  File
"/home/farid/.local/share/kdenlive/venv-sam/lib/python3.13/site-packages/sam2/build_sam.py",
line 94, in build_sam2
    model = model.to(device)
  File
"/home/farid/.local/share/kdenlive/venv-sam/lib/python3.13/site-packages/torch/nn/modules/module.py",
line 1343, in to
    return self._apply(convert)
           ~~~~~~~~~~~^^^^^^^^^
  File
"/home/farid/.local/share/kdenlive/venv-sam/lib/python3.13/site-packages/torch/nn/modules/module.py",
line 903, in _apply
    module._apply(fn)
    ~~~~~~~~~~~~~^^^^
  File
"/home/farid/.local/share/kdenlive/venv-sam/lib/python3.13/site-packages/torch/nn/modules/module.py",
line 903, in _apply
    module._apply(fn)
    ~~~~~~~~~~~~~^^^^
  File
"/home/farid/.local/share/kdenlive/venv-sam/lib/python3.13/site-packages/torch/nn/modules/module.py",
line 903, in _apply
    module._apply(fn)
    ~~~~~~~~~~~~~^^^^
  [Previous line repeated 4 more times]
  File
"/home/farid/.local/share/kdenlive/venv-sam/lib/python3.13/site-packages/torch/nn/modules/module.py",
line 930, in _apply
    param_applied = fn(param)
  File
"/home/farid/.local/share/kdenlive/venv-sam/lib/python3.13/site-packages/torch/nn/modules/module.py",
line 1329, in convert
    return t.to(
           ~~~~^
        device,
        ^^^^^^^
        dtype if t.is_floating_point() or t.is_complex() else None,
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
        non_blocking,
        ^^^^^^^^^^^^^
    )
    ^
torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 20.00 MiB. GPU 0
has a total capacity of 5.76 GiB of which 9.31 MiB is free. Including
non-PyTorch memory, this process has 286.00 MiB memory in use. Process 32606
has 296.00 MiB memory in use. Process 32629 has 298.00 MiB memory in use.
Process 32651 has 358.00 MiB memory in use. Process 32676 has 738.00 MiB memory
in use. Process 32697 has 1020.00 MiB memory in use. Process 32715 has 1020.00
MiB memory in use. Process 32734 has 1.12 GiB memory in use. Process 32758 has
698.00 MiB memory in use. Of the allocated memory 188.77 MiB is allocated by
PyTorch, and 5.23 MiB is reserved by PyTorch but unallocated. If reserved but
unallocated memory is large try setting
PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation.  See
documentation for Memory Management 
(https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
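
For what it's worth, here is a rough sketch of the two workarounds the error
message points at: exporting PYTORCH_CUDA_ALLOC_CONF before torch touches CUDA,
and falling back to the CPU when build_sam2 runs out of GPU memory. The config
and checkpoint paths below are placeholders, not the ones sam-objectmask.py
actually uses:

import os

# Must be set before torch initializes CUDA for the allocator to pick it up.
os.environ.setdefault("PYTORCH_CUDA_ALLOC_CONF", "expandable_segments:True")

import torch
from sam2.build_sam import build_sam2

model_cfg = "configs/sam2.1/sam2.1_hiera_l.yaml"        # placeholder config path
sam2_checkpoint = "checkpoints/sam2.1_hiera_large.pt"   # placeholder checkpoint path

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
try:
    sam2_model = build_sam2(model_cfg, sam2_checkpoint, device=device)
except torch.OutOfMemoryError:
    # GPU is already nearly full (other processes hold most of the 5.76 GiB),
    # so retry on the CPU instead of aborting the masking job.
    device = torch.device("cpu")
    sam2_model = build_sam2(model_cfg, sam2_checkpoint, device=device)

Setting the variable in the environment Kdenlive is launched from (e.g.
PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True kdenlive) should have the same
effect, since the script inherits it.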
