haizhou-zhao commented on code in PR #11093:
URL: https://github.com/apache/iceberg/pull/11093#discussion_r1852643846
##########
spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/TestBaseWithCatalog.java:
##########
@@ -59,20 +87,29 @@ protected static Object[][] parameters() {
}
@BeforeAll
- public static void createWarehouse() throws IOException {
+ public static void setUpAll() throws IOException {
TestBaseWithCatalog.warehouse = File.createTempFile("warehouse", null);
assertThat(warehouse.delete()).isTrue();
+ restCatalog = REST_SERVER_EXTENSION.client();
Review Comment:
I'm thinking the other way to refactor this is to initialize the Spark
session in a BeforeAllCallback extension class (which perhaps also means we
need to refactor the existing HiveCatalog & HadoopCatalog initialization into
separate Extension classes to cover the existing Spark tests).
As mentioned before, as long as we are still initializing the Spark session
in the BeforeAll stage, we likely cannot delay catalog initialization to the
BeforeEach stage. Yet if the Spark session is also initialized in a separate
extension class, we might not need to write getters on the catalog extension
classes to expose anything catalog-related to the outside world (info can pass
freely between extension classes); we would only need getters on the Spark
extension class so that the test classes themselves can get a handle to the
Spark session.
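To make the idea concrete, here is a minimal sketch of the pattern being proposed. All class and key names (`RestCatalogExtension`, `SparkSessionExtension`, the `"catalog"` store key, the string stand-ins for the real catalog and SparkSession objects) are hypothetical, and the shared `Map` is a simplified stand-in for JUnit 5's `ExtensionContext.Store`; the point is only that the catalog extension can publish its state into a shared store that the Spark extension reads, so tests need a getter on the Spark extension alone:

```java
import java.util.HashMap;
import java.util.Map;

// Minimal stand-in for JUnit 5's BeforeAllCallback, so the sketch is
// self-contained; a real implementation would use org.junit.jupiter.api.extension.
interface BeforeAllCallback {
  void beforeAll(Map<String, Object> store);
}

// Hypothetical catalog extension: initializes a catalog and stashes it in
// the shared store. No public getter is needed on this class.
class RestCatalogExtension implements BeforeAllCallback {
  @Override
  public void beforeAll(Map<String, Object> store) {
    store.put("catalog", "rest-catalog-handle"); // stand-in for a RESTCatalog
  }
}

// Hypothetical Spark extension: reads the catalog from the store during its
// own beforeAll and exposes only the Spark session to test classes.
class SparkSessionExtension implements BeforeAllCallback {
  private String sparkSession; // stand-in for org.apache.spark.sql.SparkSession

  @Override
  public void beforeAll(Map<String, Object> store) {
    Object catalog = store.get("catalog");
    this.sparkSession = "spark-session-configured-with-" + catalog;
  }

  // The only getter the test classes need.
  String sparkSession() {
    return sparkSession;
  }
}

public class ExtensionSketch {
  public static void main(String[] args) {
    Map<String, Object> store = new HashMap<>();
    RestCatalogExtension catalogExt = new RestCatalogExtension();
    SparkSessionExtension sparkExt = new SparkSessionExtension();
    // JUnit would invoke these in registration order during the BeforeAll phase,
    // so the catalog extension must be registered before the Spark one.
    catalogExt.beforeAll(store);
    sparkExt.beforeAll(store);
    System.out.println(sparkExt.sparkSession());
    // prints: spark-session-configured-with-rest-catalog-handle
  }
}
```

The registration-order dependency shown here is the main cost of this approach: the Spark extension silently assumes the catalog extension ran first, which is the kind of coupling the getters-on-extensions alternative avoids.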
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]