nastra commented on PR #9902:
URL: https://github.com/apache/iceberg/pull/9902#issuecomment-2006226950

   > > In that case I would probably just add a new test class where reading 
and writing is done through Spark. I think the purpose of 
TestSparkReaderWithBloomFilter was to actually only read through Spark
   > 
   > Yes, I can do that, but some logic will be duplicated (the data writers 
for avro and spark, the random data generator, ...). Note that replacing the avro 
writer with a Spark df write reduced the code size (`+435 −742` despite the 
addition of a new test) and simplified it. If you check the logic of the existing 
test, it didn't change; I only updated the extraction of row values used in the 
assertions. WDYT? If you still consider it an unnecessary change, I can create a 
new class for the new test.
   
   I'm still +1 on just adding a separate test class and not changing the 
existing test (since that effectively changes the write path).


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@iceberg.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
