mccormickt12 commented on code in PR #2251:
URL: https://github.com/apache/iceberg-python/pull/2251#discussion_r2240361462
##########
pyiceberg/io/pyarrow.py:
##########
@@ -423,29 +423,67 @@ def _initialize_fs(self, scheme: str, netloc: Optional[str] = None) -> FileSyste
     def _initialize_oss_fs(self) -> FileSystem:
         from pyarrow.fs import S3FileSystem
-        client_kwargs: Dict[str, Any] = {
-            "endpoint_override": self.properties.get(S3_ENDPOINT),
-            "access_key": get_first_property_value(self.properties, S3_ACCESS_KEY_ID, AWS_ACCESS_KEY_ID),
-            "secret_key": get_first_property_value(self.properties, S3_SECRET_ACCESS_KEY, AWS_SECRET_ACCESS_KEY),
-            "session_token": get_first_property_value(self.properties, S3_SESSION_TOKEN, AWS_SESSION_TOKEN),
-            "region": get_first_property_value(self.properties, S3_REGION, AWS_REGION),
-            "force_virtual_addressing": property_as_bool(self.properties, S3_FORCE_VIRTUAL_ADDRESSING, True),
+        # Mapping from PyIceberg properties to S3FileSystem parameter names
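The hunk above ends at the comment introducing the new mapping, so the actual table the PR adds is not visible here. A minimal sketch of what that shape could look like, assuming the PyIceberg `s3.*` property keys and `S3FileSystem` parameter names from the removed lines (the mapping contents are an assumption, not the PR's code):

```python
from typing import Any, Dict

# Assumed PyIceberg property key -> S3FileSystem constructor parameter.
# Illustrative only; the real mapping in the PR is elided from the hunk.
PROPERTY_TO_PARAM: Dict[str, str] = {
    "s3.endpoint": "endpoint_override",
    "s3.access-key-id": "access_key",
    "s3.secret-access-key": "secret_key",
    "s3.session-token": "session_token",
    "s3.region": "region",
}


def build_client_kwargs(properties: Dict[str, str]) -> Dict[str, Any]:
    """Translate configured PyIceberg properties into S3FileSystem kwargs,
    skipping properties that were not set."""
    return {
        param: properties[prop]
        for prop, param in PROPERTY_TO_PARAM.items()
        if prop in properties
    }


print(build_client_kwargs({"s3.endpoint": "https://oss.example.com", "s3.region": "cn-hangzhou"}))
# {'endpoint_override': 'https://oss.example.com', 'region': 'cn-hangzhou'}
```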
Review Comment:
why map these at all at the pyiceberg level? why not just let clients keep
the same client configs, with pyiceberg treating them opaquely and passing
them through? I'm imagining that for HDFS we have all our client configs in
core-site.xml, and needing to recreate this file for pyiceberg seems
painful. (i could be misinterpreting this)
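The pass-through idea the reviewer describes could be sketched as forwarding any property under a reserved prefix to the underlying FileSystem untouched. The `client.` prefix and the helper below are hypothetical, not part of PyIceberg:

```python
from typing import Any, Dict

# Hypothetical reserved prefix for opaque pass-through properties.
PASSTHROUGH_PREFIX = "client."


def passthrough_kwargs(properties: Dict[str, str]) -> Dict[str, Any]:
    """Collect properties under the reserved prefix, stripping it, so they
    can be handed to the underlying FileSystem constructor unchanged."""
    return {
        key[len(PASSTHROUGH_PREFIX):]: value
        for key, value in properties.items()
        if key.startswith(PASSTHROUGH_PREFIX)
    }


props = {
    "client.endpoint_override": "https://oss.example.com",
    "client.region": "us-east-1",
    "s3.access-key-id": "abc",  # keys without the prefix are ignored
}
print(passthrough_kwargs(props))
# {'endpoint_override': 'https://oss.example.com', 'region': 'us-east-1'}
```

The trade-off the thread is weighing: explicit mapping lets PyIceberg validate and document each knob, while pass-through avoids duplicating config that already exists elsewhere (e.g. core-site.xml for HDFS).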
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]
---------------------------------------------------------------------
For additional commands, e-mail: [email protected]