geruh commented on code in PR #2979:
URL: https://github.com/apache/iceberg-python/pull/2979#discussion_r2740413594


##########
Makefile:
##########
@@ -66,11 +66,15 @@ install-uv: ## Ensure uv is installed
                echo "uv is already installed."; \
        fi
 
-setup-venv: ## Create virtual environment
-       uv venv $(PYTHON_ARG)
+setup-venv: ## Create virtual environment (if not exists)

Review Comment:
   Nit: the shell check is nice, but you could also use
   `uv venv $(PYTHON_ARG) --allow-existing`; the `--allow-existing` flag
   reuses an existing environment and handles Python version swaps
   correctly.
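
   For illustration, a rough sketch of the target using that flag (untested
   against this Makefile; it reuses the existing `PYTHON_ARG` variable):

   ```makefile
   setup-venv: ## Create virtual environment (reuse if it already exists)
   	uv venv $(PYTHON_ARG) --allow-existing
   ```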



##########
Makefile:
##########
@@ -66,11 +66,15 @@ install-uv: ## Ensure uv is installed
                echo "uv is already installed."; \
        fi
 
-setup-venv: ## Create virtual environment
-       uv venv $(PYTHON_ARG)
+setup-venv: ## Create virtual environment (if not exists)
+       @if [ ! -d ".venv" ]; then \
+               uv venv $(PYTHON_ARG); \
+       else \
+               echo "Virtual environment already exists at .venv"; \
+       fi
 
 install-dependencies: setup-venv ## Install all dependencies including extras
-       uv sync $(PYTHON_ARG) --all-extras --reinstall
+       uv sync $(PYTHON_ARG) --all-extras

Review Comment:
   The `--reinstall` flag was added because of the Cython-compiled code:
   after running `make clean` the `.so` files are removed, and `uv sync`
   without `--reinstall` won't rebuild them.
   
   We could use `--reinstall-package pyiceberg` instead, which is a bit
   faster, and potentially run it only when the `.so` files don't exist
   (rough sketch below).
   
   WDYT?
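
   Rough sketch of the conditional variant, assuming the compiled
   extensions end up as `*.so` files somewhere under `pyiceberg/` (the
   path check is illustrative, not verified against this repo):

   ```makefile
   install-dependencies: setup-venv ## Install all dependencies including extras
   	@if [ -z "$$(find pyiceberg -name '*.so')" ]; then \
   		echo "No compiled extensions found; forcing a rebuild of pyiceberg"; \
   		uv sync $(PYTHON_ARG) --all-extras --reinstall-package pyiceberg; \
   	else \
   		uv sync $(PYTHON_ARG) --all-extras; \
   	fi
   ```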



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

