stevenzwu commented on code in PR #8555:
URL: https://github.com/apache/iceberg/pull/8555#discussion_r1332405052


##########
flink/v1.17/flink/src/main/java/org/apache/iceberg/flink/sink/CachingTableSupplier.java:
##########
@@ -0,0 +1,84 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.iceberg.flink.sink;
+
+import java.time.Duration;
+import org.apache.flink.util.Preconditions;
+import org.apache.iceberg.Table;
+import org.apache.iceberg.flink.TableLoader;
+import org.apache.iceberg.util.DateTimeUtil;
+import org.apache.iceberg.util.SerializableSupplier;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * A table loader that will only reload a table after a certain interval has passed. WARNING: This
+ * table loader should be used carefully when used with writer tasks. It could result in heavy load
+ * on a catalog for jobs with many writers.
+ */
+class CachingTableSupplier implements SerializableSupplier<Table> {
+
+  private static final Logger LOG = LoggerFactory.getLogger(CachingTableSupplier.class);
+
+  private final Table initialTable;
+  private final TableLoader tableLoader;
+  private final Duration tableRefreshInterval;
+  private long nextReloadTimeMs;
+  private transient Table table;
+
+  CachingTableSupplier(Table initialTable, TableLoader tableLoader, Duration tableRefreshInterval) {
+    Preconditions.checkArgument(initialTable != null, "initialTable cannot be null");
+    Preconditions.checkArgument(tableLoader != null, "tableLoader cannot be null");
+    Preconditions.checkArgument(
+        tableRefreshInterval != null, "tableRefreshInterval cannot be null");
+    this.initialTable = initialTable;
+    this.table = initialTable;
+    this.tableLoader = tableLoader;
+    this.tableRefreshInterval = tableRefreshInterval;
+    this.nextReloadTimeMs = System.currentTimeMillis() + tableRefreshInterval.toMillis();
+  }
+
+  @Override
+  public Table get() {
+    if (table == null) {
+      this.table = initialTable;
+    }
+    return table;
+  }
+
+  public void refresh() {

Review Comment:
   > We may want to ensure the schema, partition spec, and Flink schema are in sync between the appender factory and writer when schema evolution is introduced?
   
   Once every writer task has refreshed the table object for a schema or partition spec change, the latest table object can be used directly by the appender factory via the `Supplier<Table>`. Do we need to align the cutover across all writer tasks at a checkpoint boundary?

   Why can't every writer task refresh and apply the schema and partition spec change independently? I see that the committer would need to be enhanced to handle multiple partition specs. What else?
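   
   A rough sketch of the independent cutover I have in mind, assuming each task holds the shared `Supplier<Table>` (the class and method names here are hypothetical, just for illustration):
   
   ```java
   import java.util.function.Supplier;
   import org.apache.iceberg.PartitionSpec;
   import org.apache.iceberg.Schema;
   import org.apache.iceberg.Table;
   
   // Hypothetical sketch: each writer task cuts over on its own whenever it
   // opens a new appender, instead of coordinating at a checkpoint boundary.
   class WriterCutoverSketch {
     private final Supplier<Table> tableSupplier;
     private Schema writeSchema;
     private PartitionSpec writeSpec;
   
     WriterCutoverSketch(Supplier<Table> tableSupplier) {
       this.tableSupplier = tableSupplier;
       Table table = tableSupplier.get();
       this.writeSchema = table.schema();
       this.writeSpec = table.spec();
     }
   
     // Called before opening a new file. After CachingTableSupplier#refresh has
     // reloaded the table, get() returns a table object with the evolved
     // schema/spec.
     void maybeCutOver() {
       Table table = tableSupplier.get();
       if (table.schema().schemaId() != writeSchema.schemaId()
           || table.spec().specId() != writeSpec.specId()) {
         this.writeSchema = table.schema();
         this.writeSpec = table.spec();
         // close the current appender and build a new one from the latest
         // schema/spec; the committer then sees files under multiple spec ids.
       }
     }
   }
   ```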
   
    >  schema evolution
   
   I see two forms of automatic schema evolution; this may stray a little from the scope of this PR.
   
   1. The Iceberg table schema is updated automatically out-of-band (e.g. by a control plane), but we want to avoid redeploying the Flink job to pick up the latest schema in the Iceberg table. This is the simpler form and easier to support: typically the Iceberg table schema is updated well before data starts to flow with the new schema.
   
   2. The incoming data changes (e.g. a new column is added). The Flink writer detects that the Iceberg table schema doesn't have the new column, updates the Iceberg table schema, and automatically starts writing to the Iceberg table with the new schema. This is more difficult to support, as the parallel writers can compete to update the Iceberg table schema. In this scenario, there is no control plane or other automation to sync the data stream schema with the Iceberg table schema. One way to handle the race is sketched below.
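   
   For form 2, the race between parallel writers could perhaps be handled with an optimistic retry around the schema update. A minimal sketch, assuming a string column for illustration (the helper name and type are assumptions, not part of this PR):
   
   ```java
   import org.apache.iceberg.Table;
   import org.apache.iceberg.exceptions.CommitFailedException;
   import org.apache.iceberg.types.Types;
   
   // Hypothetical sketch: parallel writers race to add the same column; the
   // loser of the race refreshes and picks up the winner's schema instead of
   // failing the job.
   class SchemaEvolveSketch {
     static void addColumnIfMissing(Table table, String column) {
       if (table.schema().findField(column) != null) {
         return; // another writer already evolved the schema
       }
   
       try {
         // string type is a placeholder; the real type would be derived from
         // the incoming record
         table.updateSchema().addColumn(column, Types.StringType.get()).commit();
       } catch (CommitFailedException e) {
         // a concurrent update won the race; reload to pick up its schema
         table.refresh();
       }
     }
   }
   ```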
   
   



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

