amogh-jahagirdar commented on code in PR #13022:
URL: https://github.com/apache/iceberg/pull/13022#discussion_r2083320831


##########
.github/workflows/jmh-benchmarks.yml:
##########
@@ -28,8 +28,8 @@ on:
         description: 'The branch name'
         required: true
       spark_version:
-        description: 'The spark project version to use, such as iceberg-spark-3.5'
-        default: 'iceberg-spark-3.5'
+        description: 'The spark project version to use, such as iceberg-spark-4.0'
+        default: 'iceberg-spark-4.0'

Review Comment:
   I think it would be better to hold off on updating this to 4.0 until the release is official. We can update it alongside the default Spark version once Spark 4.0 is actually released. Until then, I think it makes sense for the JMH benchmarks to keep running against 3.5.
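The suggestion above amounts to leaving both workflows pinned to 3.5 for now. A minimal sketch of what that looks like (only the relevant fragments are shown; surrounding keys are assumed from the diff context, not the full workflow files):

```yaml
# .github/workflows/jmh-benchmarks.yml (sketch: keep the manual-dispatch
# input defaulting to the 3.5 project until Spark 4.0 is released)
on:
  workflow_dispatch:
    inputs:
      spark_version:
        description: 'The spark project version to use, such as iceberg-spark-3.5'
        default: 'iceberg-spark-3.5'

# .github/workflows/recurring-jmh-benchmarks.yml (sketch: keep the matrix
# axis on 3.5; a later PR would switch this alongside the default Spark bump)
jobs:
  benchmarks:
    strategy:
      matrix:
        spark_version: ['iceberg-spark-3.5']
```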



##########
.github/workflows/recurring-jmh-benchmarks.yml:
##########
@@ -41,7 +41,7 @@ jobs:
                    "IcebergSourceNestedParquetDataReadBenchmark", "IcebergSourceNestedParquetDataWriteBenchmark",
                    "IcebergSourceParquetEqDeleteBenchmark", "IcebergSourceParquetMultiDeleteFileBenchmark",
                    "IcebergSourceParquetPosDeleteBenchmark", "IcebergSourceParquetWithUnrelatedDeleteBenchmark"]
-        spark_version: ['iceberg-spark-3.5']
+        spark_version: ['iceberg-spark-4.0']

Review Comment:
   Same as above: I think we should leave this at 3.5 until 4.0 is official (and upgrade it alongside the default Spark version).



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@iceberg.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

