gemini-code-assist[bot] commented on code in PR #272:
URL: https://github.com/apache/tvm-ffi/pull/272#discussion_r2534357042


##########
python/tvm_ffi/_optional_torch_c_dlpack.py:
##########
@@ -97,11 +101,18 @@ def load_torch_c_dlpack_extension() -> Any:  # noqa: PLR0912
                 args.append("--build-with-cuda")
             elif device == "rocm":
                 args.append("--build-with-rocm")
-            subprocess.run(
-                args,
-                check=True,
-            )
-            assert lib_path.exists(), "Failed to build torch c dlpack addon."
+
+            # use capture_output to reduce noise when building the torch c dlpack addon
+            result = subprocess.run(args, check=False, capture_output=True)
+            if result.returncode != 0:
+                msg = [f"Build failed with status {result.returncode}"]
+                if result.stdout:
+                    msg.append(f"stdout:\n{result.stdout.decode('utf-8')}")
+                if result.stderr:
+                    msg.append(f"stderr:\n{result.stderr.decode('utf-8')}")
+                raise RuntimeError("\n".join(msg))

Review Comment:
   ![medium](https://www.gstatic.com/codereviewagent/medium-priority.svg)
   
   This error handling logic for `subprocess.run` is very useful. I noticed 
that similar logic is also used in 
`python/tvm_ffi/utils/_build_optional_torch_c_dlpack.py`. To improve 
maintainability and avoid code duplication, you could extract this into a 
shared utility function. The function would take a 
`subprocess.CompletedProcess` object and raise a `RuntimeError` with a detailed 
message on failure.
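
   As a rough sketch of what such a shared helper could look like (the function name and where it lives are only illustrative, not part of this PR), assuming it receives the `subprocess.CompletedProcess` produced by a `capture_output=True` call:

   ```python
   import subprocess


   def raise_on_build_failure(result: subprocess.CompletedProcess) -> None:
       """Raise a RuntimeError with captured stdout/stderr if a build step failed."""
       if result.returncode == 0:
           return
       msg = [f"Build failed with status {result.returncode}"]
       if result.stdout:
           msg.append(f"stdout:\n{result.stdout.decode('utf-8')}")
       if result.stderr:
           msg.append(f"stderr:\n{result.stderr.decode('utf-8')}")
       raise RuntimeError("\n".join(msg))
   ```

   Both call sites could then reduce to `raise_on_build_failure(subprocess.run(args, check=False, capture_output=True))`.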



##########
python/tvm_ffi/utils/_build_optional_torch_c_dlpack.py:
##########
@@ -625,43 +614,91 @@ def _generate_ninja_build(
         str(Path(path).resolve()) for path in extra_include_paths
     ]
 
+    # append include paths
+    for path in include_paths:
+        cflags.extend(["-I", str(path)])
+
+    # Get compiler and build paths
+    cxx = os.environ.get("CXX", "c++")
+    source_path_resolved = source_path.resolve()
+    lib_path = build_dir / libname
+
+    # Build command: compile and link in one step
+    build_cmd = [cxx, *cflags, str(source_path_resolved), *ldflags, "-o", str(lib_path)]
+
+    # Run build command
+    status = subprocess.run(build_cmd, cwd=str(build_dir), capture_output=True, check=False)
+    if status.returncode != 0:
+        msg = [f"Build failed with status {status.returncode}"]
+        if status.stdout:
+            msg.append(f"stdout:\n{status.stdout.decode('utf-8')}")
+        if status.stderr:
+            msg.append(f"stderr:\n{status.stderr.decode('utf-8')}")
+        raise RuntimeError("\n".join(msg))
+
+
+def _generate_ninja_build_windows(
+    build_dir: Path,
+    libname: str,
+    source_path: Path,
+    extra_cflags: Sequence[str],
+    extra_ldflags: Sequence[str],
+    extra_include_paths: Sequence[str],
+) -> None:
+    """Generate the content of build.ninja for building the module on 
Windows."""
+    from tvm_ffi.libinfo import find_dlpack_include_path  # noqa: PLC0415
+
+    default_cflags = [
+        "/std:c++17",
+        "/MD",
+        "/wd4819",
+        "/wd4251",
+        "/wd4244",
+        "/wd4267",
+        "/wd4275",
+        "/wd4018",
+        "/wd4190",
+        "/wd4624",
+        "/wd4067",
+        "/wd4068",
+        "/EHsc",
+    ]
+    default_ldflags = ["/DLL"]
+
+    cflags = default_cflags + [flag.strip() for flag in extra_cflags]
+    ldflags = default_ldflags + [flag.strip() for flag in extra_ldflags]
+    include_paths = [find_dlpack_include_path()] + [
+        str(Path(path).resolve()) for path in extra_include_paths
+    ]

Review Comment:
   ![medium](https://www.gstatic.com/codereviewagent/medium-priority.svg)
   
   The initialization of `cflags`, `ldflags`, and `include_paths` is identical 
to the logic in `_run_build_on_linux_like` (lines 611-615). To improve code 
clarity and reduce duplication, consider extracting this common logic into a 
helper function. This function could handle stripping flags and resolving 
include paths, returning the processed lists for further platform-specific 
modifications.
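
   A minimal sketch of such a helper, assuming it takes the platform-specific defaults and returns the processed lists (the name `prepare_build_flags` and the tuple return shape are only suggestions; `find_dlpack_include_path` is the same helper already imported in both functions):

   ```python
   from pathlib import Path
   from typing import Sequence

   from tvm_ffi.libinfo import find_dlpack_include_path


   def prepare_build_flags(
       default_cflags: Sequence[str],
       default_ldflags: Sequence[str],
       extra_cflags: Sequence[str],
       extra_ldflags: Sequence[str],
       extra_include_paths: Sequence[str],
   ) -> tuple[list[str], list[str], list[str]]:
       """Merge default and extra flags and resolve include paths."""
       cflags = list(default_cflags) + [flag.strip() for flag in extra_cflags]
       ldflags = list(default_ldflags) + [flag.strip() for flag in extra_ldflags]
       include_paths = [find_dlpack_include_path()] + [
           str(Path(path).resolve()) for path in extra_include_paths
       ]
       return cflags, ldflags, include_paths
   ```

   Each platform-specific function would then only define its own `default_cflags`/`default_ldflags`, call this helper, and apply its remaining platform-specific handling of the include paths.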



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

