[GitHub] [iceberg] metalshanked opened a new issue, #6351: updated to 3.3_2.12-1.0.0.jar causes java.lang.NoClassDefFoundError: scala/$less$colon$less

2022-12-03 Thread GitBox


metalshanked opened a new issue, #6351:
URL: https://github.com/apache/iceberg/issues/6351

   ### Apache Iceberg version
   
   1.1.0 (latest release)
   
   ### Query engine
   
   Spark
   
   ### Please describe the bug 🐞
   
   Updated the Iceberg runtime from `iceberg-spark-runtime-3.3_2.12-1.0.0.jar` to `iceberg-spark-runtime-3.3_2.13-1.1.0.jar` for Spark 3.3.1, and I now get the error below from both PySpark and Spark on any Spark SQL operation against an Iceberg table.
   
   The Scala version installed is `2.12.15`.
   
   ```
   An error was encountered:
   java.lang.NoClassDefFoundError: scala/$less$colon$less
     at org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions.$anonfun$apply$2(IcebergSparkSessionExtensions.scala:50)
     at org.apache.spark.sql.SparkSessionExtensions.$anonfun$buildResolutionRules$1(SparkSessionExtensions.scala:152)
     at scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:286)
     at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
     at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
     at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
     at scala.collection.TraversableLike.map(TraversableLike.scala:286)
     at scala.collection.TraversableLike.map$(TraversableLike.scala:279)
     at scala.collection.AbstractTraversable.map(Traversable.scala:108)
     at org.apache.spark.sql.SparkSessionExtensions.buildResolutionRules(SparkSessionExtensions.scala:152)
     at org.apache.spark.sql.internal.BaseSessionStateBuilder.customResolutionRules(BaseSessionStateBuilder.scala:216)
     at org.apache.spark.sql.internal.BaseSessionStateBuilder$$anon$1.<init>(BaseSessionStateBuilder.scala:190)
     at org.apache.spark.sql.internal.BaseSessionStateBuilder.analyzer(BaseSessionStateBuilder.scala:182)
     at org.apache.spark.sql.internal.BaseSessionStateBuilder.$anonfun$build$2(BaseSessionStateBuilder.scala:360)
     at org.apache.spark.sql.internal.SessionState.analyzer$lzycompute(SessionState.scala:87)
     at org.apache.spark.sql.internal.SessionState.analyzer(SessionState.scala:87)
     at org.apache.spark.sql.execution.QueryExecution.$anonfun$analyzed$1(QueryExecution.scala:76)
     at org.apache.spark.sql.catalyst.QueryPlanningTracker.measurePhase(QueryPlanningTracker.scala:111)
     at org.apache.spark.sql.execution.QueryExecution.$anonfun$executePhase$2(QueryExecution.scala:185)
     at org.apache.spark.sql.execution.QueryExecution$.withInternalError(QueryExecution.scala:510)
     at org.apache.spark.sql.execution.QueryExecution.$anonfun$executePhase$1(QueryExecution.scala:185)
     at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:779)
     at org.apache.spark.sql.execution.QueryExecution.executePhase(QueryExecution.scala:184)
     at org.apache.spark.sql.execution.QueryExecution.analyzed$lzycompute(QueryExecution.scala:76)
     at org.apache.spark.sql.execution.QueryExecution.analyzed(QueryExecution.scala:74)
     at org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:66)
     at org.apache.spark.sql.Dataset$.$anonfun$ofRows$2(Dataset.scala:99)
     at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:779)
     at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:97)
     at org.apache.spark.sql.SparkSession.$anonfun$sql$1(SparkSession.scala:622)
     at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:779)
     at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:617)
     ... 51 elided
   Caused by: java.lang.ClassNotFoundException: scala.$less$colon$less
     at java.net.URLClassLoader.findClass(URLClassLoader.java:387)
     at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
     at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352)
     at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
     ... 83 more
   ```
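   For reference, the `_2.12` / `_2.13` part of the runtime jar name is the Scala binary version the artifact was built against, so loading the `_2.13` runtime on a Scala 2.12.15 installation is what triggers this error: in Scala 2.13, `<:<` exists as the top-level class `scala.$less$colon$less`, which the 2.12 library does not provide. Below is a minimal classpath sanity-check sketch, assuming it runs in the same JVM/classpath as the Spark driver; the class name and the `<iceberg-version>` placeholder are illustrative only.
   
   ```java
   // Hypothetical sanity check: the Scala library on the classpath must match the
   // "_2.12" / "_2.13" suffix of the iceberg-spark-runtime artifact.
   public class ScalaSuffixCheck {
     public static void main(String[] args) {
       // Requires the Scala library on the classpath (always the case inside a Spark driver).
       String scalaVersion = scala.util.Properties.versionNumberString(); // e.g. "2.12.15"
       String suffix = scalaVersion.startsWith("2.13") ? "_2.13" : "_2.12";
       System.out.println("Scala library on the classpath: " + scalaVersion);
       System.out.println("Matching artifact: iceberg-spark-runtime-3.3" + suffix + "-<iceberg-version>.jar");
     }
   }
   ```
   
   With Scala 2.12.15 installed, `iceberg-spark-runtime-3.3_2.12-1.1.0.jar` is the matching 1.1.0 artifact.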




[GitHub] [iceberg] metalshanked closed issue #6351: updated to iceberg-spark-runtime-3.3_2.13-1.1.0.jar - getting java.lang.NoClassDefFoundError: scala/$less$colon$less

2022-12-03 Thread GitBox


metalshanked closed issue #6351: updated to 
iceberg-spark-runtime-3.3_2.13-1.1.0.jar - getting 
java.lang.NoClassDefFoundError: scala/$less$colon$less
URL: https://github.com/apache/iceberg/issues/6351





[GitHub] [iceberg] ajantha-bhat commented on a diff in pull request #6267: Core: Update StatisticsFile interface in TableMetadata spec

2022-12-03 Thread GitBox


ajantha-bhat commented on code in PR #6267:
URL: https://github.com/apache/iceberg/pull/6267#discussion_r1038756561


##
core/src/main/java/org/apache/iceberg/TableMetadata.java:
##
@@ -823,7 +860,7 @@ public static class Builder {
     private long currentSnapshotId;
     private List<Snapshot> snapshots;
     private final Map<String, SnapshotRef> refs;
-    private final Map<Long, List<StatisticsFile>> statisticsFiles;
+    private final Map<Long, List<StatisticsFile>> statisticsFilesById;

Review Comment:
   I thought it better to rename this to be consistent with `snapshotsById`.






[GitHub] [iceberg] ajantha-bhat commented on a diff in pull request #6267: Core: Update StatisticsFile interface in TableMetadata spec

2022-12-03 Thread GitBox


ajantha-bhat commented on code in PR #6267:
URL: https://github.com/apache/iceberg/pull/6267#discussion_r1038756728


##
core/src/main/java/org/apache/iceberg/TableMetadata.java:
##
@@ -1183,14 +1222,16 @@ public Builder setStatistics(long snapshotId, StatisticsFile statisticsFile) {
         "snapshotId does not match: %s vs %s",
         snapshotId,
         statisticsFile.snapshotId());
-    statisticsFiles.put(statisticsFile.snapshotId(), ImmutableList.of(statisticsFile));

Review Comment:
   The interface supports returning multiple statistics files for one snapshot, but `setStatistics` was always overwriting instead of appending, so I fixed it.
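   For context, a minimal sketch of the append semantics described above, assuming a `Map<Long, List<StatisticsFile>>` keyed by snapshot ID as in the renamed `statisticsFilesById` field; this is an illustration, not the actual PR diff.
   
   ```java
   import java.util.ArrayList;
   import java.util.HashMap;
   import java.util.List;
   import java.util.Map;
   
   import org.apache.iceberg.StatisticsFile;
   
   // Illustrative only: keep every statistics file registered for a snapshot,
   // instead of replacing the previously registered one.
   class StatisticsFilesById {
     private final Map<Long, List<StatisticsFile>> statisticsFilesById = new HashMap<>();
   
     void setStatistics(long snapshotId, StatisticsFile statisticsFile) {
       statisticsFilesById
           .computeIfAbsent(snapshotId, id -> new ArrayList<>())
           .add(statisticsFile); // append rather than overwrite
     }
   }
   ```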






[GitHub] [iceberg] ajantha-bhat commented on a diff in pull request #6267: Core: Update StatisticsFile interface in TableMetadata spec

2022-12-03 Thread GitBox


ajantha-bhat commented on code in PR #6267:
URL: https://github.com/apache/iceberg/pull/6267#discussion_r1038756796


##
core/src/test/java/org/apache/iceberg/TestTableMetadata.java:
##
@@ -1274,20 +1275,20 @@ public void testSetStatistics() {
     Assert.assertEquals("Statistics file snapshot", 43L, statisticsFile.snapshotId());
     Assert.assertEquals("Statistics file path", "/some/path/to/stats/file", statisticsFile.path());
 
-    TableMetadata withStatisticsReplaced =
+    TableMetadata withStatisticsAppended =
         TableMetadata.buildFrom(withStatistics)
             .setStatistics(
                 43,
                 new GenericStatisticsFile(
                     43, "/some/path/to/stats/file2", 128, 27, ImmutableList.of()))
             .build();
 
-    Assertions.assertThat(withStatisticsReplaced.statisticsFiles())
-        .as("There should be one statistics file registered")
-        .hasSize(1);
-    statisticsFile = Iterables.getOnlyElement(withStatisticsReplaced.statisticsFiles());
-    Assert.assertEquals("Statistics file snapshot", 43L, statisticsFile.snapshotId());
-    Assert.assertEquals("Statistics file path", "/some/path/to/stats/file2", statisticsFile.path());
+    Assertions.assertThat(withStatisticsAppended.statisticsFiles())
+        .as("There should be two statistics files registered")
+        .hasSize(2)
Review Comment:
   Since I changed the overwrite to an append, there will now be two statistics files for this snapshot ID. Hence, I updated the test cases.






[GitHub] [iceberg] ajantha-bhat commented on pull request #6267: Core: Update StatisticsFile interface in TableMetadata spec

2022-12-03 Thread GitBox


ajantha-bhat commented on PR #6267:
URL: https://github.com/apache/iceberg/pull/6267#issuecomment-1336131494

   @rdblue, @findepi: PR is ready for review. This should help in unblocking 
#6090, #6091   





[GitHub] [iceberg] tomtongue opened a new pull request, #6352: AWS: Fix inconsistent behavior of naming S3 location between read and write operations by allowing only s3 bucket name

2022-12-03 Thread GitBox


tomtongue opened a new pull request, #6352:
URL: https://github.com/apache/iceberg/pull/6352

   ### Fix
   Enable users to run read queries when only an S3 bucket name is specified as the Iceberg table location.
   
   ### Issue
   The current S3URI validation doesn't allow specifying only a bucket name, such as `s3://bucket/` or `s3://bucket`.
   
   However, a bucket-only location can be specified when creating an Iceberg table.
   
   For example, if you run the following DDL with Spark, the table location is set to `s3://bucket`:
   
   ```
   CREATE TABLE catalog.db.table(id int, data string)
   USING iceberg
   LOCATION 's3://bucket'
   ```
   
   (If you don't specify `LOCATION`, Spark creates the table under a `db.db/table/` path beneath the configured warehouse location by default.)
   
   You can also run an `INSERT INTO` query against such a table; during that operation, S3URI only handles the manifest list, data files, and the metadata JSON in the metadata directory. However, a `SELECT` query needs to access the table location itself, so when that location is only a bucket name such as `s3://bucket`, S3URI validation fails with `org.apache.iceberg.exceptions.ValidationException: Invalid S3 URI, path is empty: s3://bucket`.
   
   This also happens if you create a table with Athena: when Athena creates a table with a bucket-only location, it fails with the same `ValidationException`.
   
   I understand there's another way to fix this, namely disallowing bucket-only locations entirely. However, I made this choice (supporting bucket-only names) because I believe some users want to create an Iceberg table directly at the bucket root.
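   To illustrate the intended leniency, here is a small, hypothetical parsing sketch (this is not the actual `S3URI` implementation): a bucket-only location such as `s3://bucket` or `s3://bucket/` yields an empty key instead of failing validation.
   
   ```java
   // Hypothetical helper, for illustration only: split an s3:// location into bucket and key,
   // accepting bucket-only locations with an empty key.
   class BucketOnlyLocation {
     static String[] parse(String location) {
       int schemeEnd = location.indexOf("://");
       if (schemeEnd < 0) {
         throw new IllegalArgumentException("Invalid S3 location: " + location);
       }
       String authorityAndPath = location.substring(schemeEnd + 3);
       int slash = authorityAndPath.indexOf('/');
       String bucket = slash < 0 ? authorityAndPath : authorityAndPath.substring(0, slash);
       String key = slash < 0 ? "" : authorityAndPath.substring(slash + 1);
       return new String[] {bucket, key}; // key is "" for bucket-only locations
     }
   }
   ```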
   
   
   ### Simple end-to-end Spark test (Spark 3.3)
   
   ```scala
   import java.util.UUID.{randomUUID => uuid}
   import java.sql.Timestamp
   import scala.collection.JavaConverters._
   
   import org.apache.spark.{SparkContext, SparkConf}
   import org.apache.spark.sql.Row
   import org.apache.spark.sql.SparkSession
   import org.apache.spark.sql.types.{LongType, StringType, StructField, StructType}
   
   object IcebergSparkApp {
     def main(sysArgs: Array[String]) {
       val database = "db"
       val table = "s3url_validation_fix"
       val catalog: String = "catalog"
       val resourceName = s"$catalog.$database.$table"
       val warehouse = "s3://bucket/path"
       println(s"Resource: $resourceName")
   
       val spark = SparkSession.builder()
         .config(s"spark.sql.catalog.$catalog", "org.apache.iceberg.spark.SparkCatalog")
         .config(s"spark.sql.catalog.$catalog.warehouse", warehouse)
         .config(s"spark.sql.catalog.$catalog.catalog-impl", "org.apache.iceberg.aws.glue.GlueCatalog")
         .config(s"spark.sql.catalog.$catalog.io-impl", "org.apache.iceberg.aws.s3.S3FileIO")
         .config("spark.sql.extensions", "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions")
         .getOrCreate()
   
       val qCreate = s"""
         CREATE TABLE $resourceName(id int, data string)
         USING iceberg
         LOCATION 's3://bucket'
       """
   
       println(s"Creating a table: $qCreate")
       spark.sql(qCreate)
   
       println("Running INSERT query")
       spark.sql(s"INSERT INTO $resourceName VALUES(1, 'a')")
   
       println("Running SELECT query")
       spark.sql(s"SELECT * FROM $resourceName").show(false)
     }
   }
   
   /* Output
   Resource: catalog.db.s3url_validation_fix
   Creating a table:
     CREATE TABLE catalog.db.s3url_validation_fix(id int, data string)
     USING iceberg
     LOCATION 's3://bucket'
   
   Running INSERT query
   Running SELECT query
   +---+----+
   |id |data|
   +---+----+
   |1  |a   |
   +---+----+
   */
   ```
   
   Iceberg table in Glue Data Catalog (AWS environment was used):
   
   ```
   aws glue get-table --database-name db --name s3url_validation_fix
   {
       "Table": {
           "Name": "s3url_validation_fix",
           "DatabaseName": "db",
           "CreateTime": 1670080197.0,
           "UpdateTime": 1670080210.0,
           "Retention": 0,
           "StorageDescriptor": {
               "Columns": [
                   {
                       "Name": "id",
                       "Type": "int",
                       "Parameters": {
                           "iceberg.field.current": "true",
                           "iceberg.field.id": "1",
                           "iceberg.field.optional": "true"
                       }
                   },
                   {
                       "Name": "data",
                       "Type": "string",
                       "Parameters": {
                           "iceberg.field.current": "true",
                           "iceberg.field.id": "2",
                           "iceberg.field.optional": "true"

[GitHub] [iceberg] islamismailov opened a new pull request, #6353: Make sure S3 stream opened by ReadConf ctor is closed

2022-12-03 Thread GitBox


islamismailov opened a new pull request, #6353:
URL: https://github.com/apache/iceberg/pull/6353

   What the title says. This is what is currently seen:
   
   ```
   2022-12-03 10:25:11.154  WARN 5408 --- [  Finalizer] o.a.i.a.s.S3InputStream  : Unclosed input stream created by:
       org.apache.iceberg.aws.s3.S3InputStream.<init>(S3InputStream.java:60)
       org.apache.iceberg.aws.s3.S3InputFile.newStream(S3InputFile.java:48)
       org.apache.iceberg.parquet.ParquetIO$ParquetInputFile.newStream(ParquetIO.java:183)
       org.apache.iceberg.bdp.shaded.org.apache.parquet.hadoop.ParquetFileReader.<init>(ParquetFileReader.java:802)
       org.apache.iceberg.bdp.shaded.org.apache.parquet.hadoop.ParquetFileReader.open(ParquetFileReader.java:667)
       org.apache.iceberg.parquet.ReadConf.newReader(ReadConf.java:218)
       org.apache.iceberg.parquet.ReadConf.<init>(ReadConf.java:74)
       org.apache.iceberg.parquet.ParquetReader.init(ParquetReader.java:66)
       org.apache.iceberg.parquet.ParquetReader.iterator(ParquetReader.java:77)
       org.apache.iceberg.data.TableScanIterable$ScanIterator.hasNext(TableScanIterable.java:200)
   ```
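   For context, the usual fix pattern for this kind of warning is that whoever opens the stream also closes it. A minimal sketch using Iceberg's `InputFile`/`SeekableInputStream` interfaces follows; it illustrates the pattern and is not the change made in this PR.
   
   ```java
   import java.io.IOException;
   
   import org.apache.iceberg.io.InputFile;
   import org.apache.iceberg.io.SeekableInputStream;
   
   // Illustration of stream ownership: the code that calls newStream() closes the stream,
   // e.g. via try-with-resources, so the finalizer never reports an unclosed S3 stream.
   class StreamOwnership {
     static void readFrom(InputFile inputFile) throws IOException {
       try (SeekableInputStream stream = inputFile.newStream()) {
         // ... read from the stream (e.g. the Parquet footer) ...
       } // close() is guaranteed here, even if reading throws
     }
   }
   ```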





[GitHub] [iceberg] amogh-jahagirdar opened a new pull request, #6354: Spark: Check fileIO instead of reading location when determining locality enabled

2022-12-03 Thread GitBox


amogh-jahagirdar opened a new pull request, #6354:
URL: https://github.com/apache/iceberg/pull/6354

   When checking whether locality should be enabled, the FileIO instance should be checked directly rather than creating an input file from the table location.





[GitHub] [iceberg] amogh-jahagirdar commented on a diff in pull request #6354: Spark: Check fileIO instead of reading location when determining locality enabled

2022-12-03 Thread GitBox


amogh-jahagirdar commented on code in PR #6354:
URL: https://github.com/apache/iceberg/pull/6354#discussion_r1038876030


##
spark/v3.3/spark/src/main/java/org/apache/iceberg/spark/SparkReadConf.java:
##
@@ -67,10 +67,9 @@ public boolean caseSensitive() {
   }
 
   public boolean localityEnabled() {
-    InputFile file = table.io().newInputFile(table.location());
-
-    if (file instanceof HadoopInputFile) {
-      String scheme = ((HadoopInputFile) file).getFileSystem().getScheme();
+    if (table.io() instanceof HadoopFileIO) {
+      HadoopInputFile file = (HadoopInputFile) table.io().newInputFile(table.location());
+      String scheme = file.getFileSystem().getScheme();
       boolean defaultValue = LOCALITY_WHITELIST_FS.contains(scheme);
       return PropertyUtil.propertyAsBoolean(readOptions, SparkReadOptions.LOCALITY, defaultValue);
     }

Review Comment:
   Another approach is to treat this as best effort and, in case of any errors during this process, just return false. This also looks to be done in the Flink case, but I'm not sure if this is desired for Spark. I think it should be, since in the worst case we simply cannot determine whether locality is enabled and assume it is not. This shouldn't have any correctness impact, but maybe @aokolnychyi @RussellSpitzer can help me out here.
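   For illustration, a minimal sketch of that best-effort variant, written as a standalone helper rather than the actual `SparkReadConf` code; the whitelist content and the catch-all are assumptions.
   
   ```java
   import java.util.Collections;
   import java.util.Set;
   
   import org.apache.iceberg.Table;
   import org.apache.iceberg.hadoop.HadoopFileIO;
   import org.apache.iceberg.hadoop.HadoopInputFile;
   
   // Illustrative best-effort check: if the filesystem scheme cannot be determined for any
   // reason, assume locality is disabled rather than failing the query.
   class LocalityCheck {
     // Assumption: the set of schemes for which locality defaults to enabled.
     private static final Set<String> LOCALITY_WHITELIST_FS = Collections.singleton("hdfs");
   
     static boolean localityDefault(Table table) {
       if (!(table.io() instanceof HadoopFileIO)) {
         return false;
       }
       try {
         HadoopInputFile file = (HadoopInputFile) table.io().newInputFile(table.location());
         return LOCALITY_WHITELIST_FS.contains(file.getFileSystem().getScheme());
       } catch (RuntimeException e) {
         return false; // best effort: cannot determine the scheme, so assume no locality
       }
     }
   }
   ```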






[GitHub] [iceberg] amogh-jahagirdar commented on a diff in pull request #6354: Spark: Check fileIO instead of reading location when determining locality enabled

2022-12-03 Thread GitBox


amogh-jahagirdar commented on code in PR #6354:
URL: https://github.com/apache/iceberg/pull/6354#discussion_r1038876030


##
spark/v3.3/spark/src/main/java/org/apache/iceberg/spark/SparkReadConf.java:
##
@@ -67,10 +67,9 @@ public boolean caseSensitive() {
   }
 
   public boolean localityEnabled() {
-    InputFile file = table.io().newInputFile(table.location());
-
-    if (file instanceof HadoopInputFile) {
-      String scheme = ((HadoopInputFile) file).getFileSystem().getScheme();
+    if (table.io() instanceof HadoopFileIO) {
+      HadoopInputFile file = (HadoopInputFile) table.io().newInputFile(table.location());
+      String scheme = file.getFileSystem().getScheme();
       boolean defaultValue = LOCALITY_WHITELIST_FS.contains(scheme);
       return PropertyUtil.propertyAsBoolean(readOptions, SparkReadOptions.LOCALITY, defaultValue);
     }

Review Comment:
   Another approach is to treat this as best effort and, in case of any errors during this process, just return false. This also looks to be done in the Flink case, but I'm not sure if this is desired for Spark. I think it should be, since in the worst case we simply cannot determine whether locality is enabled and assume it is not. This shouldn't have any correctness impact, but maybe @aokolnychyi @RussellSpitzer can validate this assumption.






[GitHub] [iceberg] github-actions[bot] commented on issue #4962: how to keep only one snapshot each day when write iceberg table with flinkcdc

2022-12-03 Thread GitBox


github-actions[bot] commented on issue #4962:
URL: https://github.com/apache/iceberg/issues/4962#issuecomment-1336284015

   This issue has been automatically marked as stale because it has been open 
for 180 days with no activity. It will be closed in next 14 days if no further 
activity occurs. To permanently prevent this issue from being considered stale, 
add the label 'not-stale', but commenting on the issue is preferred when 
possible.





[GitHub] [iceberg] XBaith commented on issue #6330: iceberg : format-version=2 , when the job is running (insert and update), can not execute rewrite small data file ?

2022-12-03 Thread GitBox


XBaith commented on issue #6330:
URL: https://github.com/apache/iceberg/issues/6330#issuecomment-1336311904

   #4996 





[GitHub] [iceberg] dependabot[bot] opened a new pull request, #6355: Build: Bump org.eclipse.jgit from 5.13.1.202206130422-r to 6.4.0.202211300538-r

2022-12-03 Thread GitBox


dependabot[bot] opened a new pull request, #6355:
URL: https://github.com/apache/iceberg/pull/6355

   Bumps org.eclipse.jgit from 5.13.1.202206130422-r to 6.4.0.202211300538-r.
   
   
   [![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=org.eclipse.jgit:org.eclipse.jgit&package-manager=gradle&previous-version=5.13.1.202206130422-r&new-version=6.4.0.202211300538-r)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
   
   Dependabot will resolve any conflicts with this PR as long as you don't 
alter it yourself. You can also trigger a rebase manually by commenting 
`@dependabot rebase`.
   
   [//]: # (dependabot-automerge-start)
   [//]: # (dependabot-automerge-end)
   
   ---
   
   
   Dependabot commands and options
   
   
   You can trigger Dependabot actions by commenting on this PR:
   - `@dependabot rebase` will rebase this PR
   - `@dependabot recreate` will recreate this PR, overwriting any edits that 
have been made to it
   - `@dependabot merge` will merge this PR after your CI passes on it
   - `@dependabot squash and merge` will squash and merge this PR after your CI 
passes on it
   - `@dependabot cancel merge` will cancel a previously requested merge and 
block automerging
   - `@dependabot reopen` will reopen this PR if it is closed
   - `@dependabot close` will close this PR and stop Dependabot recreating it. 
You can achieve the same result by closing it manually
   - `@dependabot ignore this major version` will close this PR and stop 
Dependabot creating any more for this major version (unless you reopen the PR 
or upgrade to it yourself)
   - `@dependabot ignore this minor version` will close this PR and stop 
Dependabot creating any more for this minor version (unless you reopen the PR 
or upgrade to it yourself)
   - `@dependabot ignore this dependency` will close this PR and stop 
Dependabot creating any more for this dependency (unless you reopen the PR or 
upgrade to it yourself)
   
   
   





[GitHub] [iceberg] dependabot[bot] opened a new pull request, #6356: Build: Bump moto from 4.0.10 to 4.0.11 in /python

2022-12-03 Thread GitBox


dependabot[bot] opened a new pull request, #6356:
URL: https://github.com/apache/iceberg/pull/6356

   Bumps [moto](https://github.com/spulec/moto) from 4.0.10 to 4.0.11.
   
   Changelog
   Sourced from moto's changelog: https://github.com/spulec/moto/blob/master/CHANGELOG.md
   
   4.0.11
   Docker Digest for 4.0.11: 
sha256:ba07f61edd4f91b221ea20368586dce024e7da4d018e2741aceafa1f07f47ec5
   New Services:
   * ACM-PCA:
   * create_certificate_authority()
   * delete_certificate_authority()
   * describe_certificate_authority()
   * get_certificate()
   * get_certificate_authority_certificate()
   * get_certificate_authority_csr()
   * import_certificate_authority_certificate()
   * issue_certificate()
   * list_tags()
   * revoke_certificate()
   * tag_certificate_authority()
   * update_certificate_authority()
   * untag_certificate_authority()
   New Methods:
   * create_api_mapping()
   * create_domain_name()
   * delete_api_mapping()
   * delete_domain_name()
   * get_api_mapping()
   * get_api_mappings()
   * get_domain_name()
   * get_domain_names()
   Miscellaneous:
   * APIGateway: create_rest_api() now supports the propagation of parameter 
disableExecuteApiEndpoint (not the actual behaviour)
   * APIGateway: put_integration() now supports the parameters contentHandling, 
credentials, tlsConfig
   * AWSLambda: create_function() is now able to validate the existence of the 
provided ImageURI. Set environment variable MOTO_LAMBDA_STUB_ECR=false to 
enable this.
   * Batch: submit_job() now adds validation for the jobName parameter
   * CloudWatch: get_metric_data() now adds support for filtering by the 
unit-value
   * DynamoDB: transact_write_items() now supports up to 100 items instead of 
25, in line with AWS
   * ELB: describe_instance_health() now validates the existence of the 
provided LoadBalancer
   * Polly: The list of available voices has been updated.
   * S3: put_object() now has improved support for filenames containing spaces
   * SQS: send_message() and send_message_batch() now adds validation for the 
DelaySeconds-parameter
   
   
   
   
   Commits
   
   - 532d4ca  Update CHANGELOG.md
   - b4f0fb7  Prepare release 4.0.11 (#5724)
   - ceffba0  EC2: Only load custom amis when specified (#5697)
   - 462fb23  Polly: Update the list of voices (#5723)
   - fda02e6  CloudWatch: Add unit filter to query get_metric_data (#5722)
   - 5ea02ec  DynamoDB: batch_get_item() now only returns up to 16MB of data (#5718)
   - 5e4d39e  Techdebt: Refactor/remove warnings in ResourceGroups tests (#5716)
   - 4844af0  Batch: add jobname validation (#5720)
   - 3d16834  S3: support for etag with quotes in IfNoneMatch header (#5715)
   - 672c953  Feature: ACM-PCA (#5712)
   - Additional commits viewable in the compare view: https://github.com/spulec/moto/compare/4.0.10...4.0.11
   
   
   
   
   
   [![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=moto&package-manager=pip&previous-version=4.0.10&new-version=4.0.11)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
   
   Dependabot will resolve any conflicts with this PR as long as you don't 
alter it yourself. You can also trigger a rebase manually by commenting 
`@dependabot rebase`.
   
   [//]: # (dependabot-automerge-start)
   [//]: # (dependabot-automerge-end)
   
   ---
   
   
   Dependabot commands and options
   
   
   You can trigger De