szehon-ho commented on code in PR #6344:
URL: https://github.com/apache/iceberg/pull/6344#discussion_r1055734598


##########
spark/v3.3/spark/src/main/java/org/apache/iceberg/spark/ChangelogIterator.java:
##########
@@ -0,0 +1,204 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.iceberg.spark;
+
+import java.io.Serializable;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Objects;
+import org.apache.iceberg.ChangelogOperation;
+import org.apache.iceberg.relocated.com.google.common.collect.Iterators;
+import org.apache.spark.sql.Row;
+import org.apache.spark.sql.RowFactory;
+import org.apache.spark.sql.catalyst.expressions.GenericInternalRow;
+import org.apache.spark.sql.catalyst.expressions.GenericRowWithSchema;
+
+/**
+ * An iterator that transforms rows from changelog tables within a single Spark task. It assumes
+ * that rows are sorted by identifier columns and change type.
+ *
+ * <p>It removes the carry-over rows. Carry-over rows are the result of a removal and insertion of
+ * the same row within an operation because of the copy-on-write mechanism. For example, given a
+ * file which contains row1 (id=1, data='a') and row2 (id=2, data='b'). A copy-on-write delete of
+ * row2 would require erasing this file and preserving row1 in a new file written with row1' which
+ * is identical to row1. The change-log table would report this as (row1 deleted) and (row1'
+ * inserted), since this row was not actually modified it is not an actual change in the table. The
+ * iterator finds out the carry-over rows and removes them from the result.
+ *
+ * <p>The iterator marks the delete-row and insert-row to be the update-rows. For example, these two
+ * rows
+ *
+ * <ul>
+ *   <li>(id=1, data='a', op='DELETE')
+ *   <li>(id=1, data='b', op='INSERT')
+ * </ul>
+ *
+ * <p>will be marked as update-rows:
+ *
+ * <ul>
+ *   <li>(id=1, data='a', op='UPDATE_BEFORE')
+ *   <li>(id=1, data='b', op='UPDATE_AFTER')
+ * </ul>
+ */
+public class ChangelogIterator implements Iterator<Row>, Serializable {
+  private static final String DELETE = ChangelogOperation.DELETE.name();
+  private static final String INSERT = ChangelogOperation.INSERT.name();
+  private static final String UPDATE_BEFORE = ChangelogOperation.UPDATE_BEFORE.name();
+  private static final String UPDATE_AFTER = ChangelogOperation.UPDATE_AFTER.name();
+
+  private final Iterator<Row> rowIterator;
+  private final int changeTypeIndex;
+  private final List<Integer> identifierFieldIdx;
+
+  private Row cachedRow = null;
+
+  private ChangelogIterator(
+      Iterator<Row> rowIterator, int changeTypeIndex, List<Integer> identifierFieldIdx) {
+    this.rowIterator = rowIterator;
+    this.changeTypeIndex = changeTypeIndex;
+    this.identifierFieldIdx = identifierFieldIdx;
+  }
+
+  /**
+   * Creates a new {@link ChangelogIterator} instance concatenated with the null-removal iterator.

Review Comment:
   Maybe we can avoid having so many implementation details, as they may change in the future? Also, I'm not sure about the use of linking to this class itself.
   
   How about something more logical (feel free to elaborate a bit): Creates an iterator for records of a changelog table
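For context on what the iterator under review does, the two transformations described in its Javadoc (dropping carry-over rows, pairing a DELETE/INSERT with the same identifier into UPDATE_BEFORE/UPDATE_AFTER) can be sketched without Spark types. This is a simplified illustration, not the PR's implementation: the class name `ChangelogSketch`, the `Object[]` row representation, and the fixed column layout (identifier at index 0, data at index 1, change type at index 2) are all hypothetical.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch of the carry-over removal and update pairing described
// in the ChangelogIterator Javadoc, using Object[] rows instead of Spark Row.
// Column layout (assumed): [identifier, data, changeType].
public class ChangelogSketch {

  public static List<Object[]> transform(List<Object[]> sorted) {
    List<Object[]> result = new ArrayList<>();
    int i = 0;
    while (i < sorted.size()) {
      Object[] row = sorted.get(i);
      Object[] next = i + 1 < sorted.size() ? sorted.get(i + 1) : null;
      // A DELETE immediately followed by an INSERT with the same identifier
      // is either a carry-over (identical data) or an update (changed data).
      if (next != null
          && "DELETE".equals(row[2])
          && "INSERT".equals(next[2])
          && row[0].equals(next[0])) {
        if (row[1].equals(next[1])) {
          // Carry-over: copy-on-write rewrote an unchanged row; drop both.
          i += 2;
          continue;
        }
        // Update: re-mark the pair as UPDATE_BEFORE / UPDATE_AFTER.
        result.add(new Object[] {row[0], row[1], "UPDATE_BEFORE"});
        result.add(new Object[] {next[0], next[1], "UPDATE_AFTER"});
        i += 2;
        continue;
      }
      result.add(row);
      i++;
    }
    return result;
  }

  public static void main(String[] args) {
    List<Object[]> rows =
        Arrays.asList(
            new Object[] {1, "a", "DELETE"},
            new Object[] {1, "b", "INSERT"}, // same id, changed data: an update
            new Object[] {2, "c", "DELETE"},
            new Object[] {2, "c", "INSERT"}); // identical row: a carry-over
    for (Object[] r : transform(rows)) {
      System.out.println(Arrays.toString(r));
    }
  }
}
```

The real iterator additionally handles identifier columns by index list and streams lazily; this sketch only shows the pairing logic over an already-materialized sorted list.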



##########
spark/v3.3/spark/src/test/java/org/apache/iceberg/spark/TestChangelogIterator.java:
##########
@@ -0,0 +1,194 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.iceberg.spark;
+
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.Iterator;
+import java.util.List;
+import org.apache.iceberg.ChangelogOperation;
+import org.apache.iceberg.relocated.com.google.common.collect.Lists;
+import org.apache.spark.sql.Row;
+import org.apache.spark.sql.catalyst.expressions.GenericRowWithSchema;
+import org.junit.Assert;
+import org.junit.Test;
+
+public class TestChangelogIterator extends SparkTestHelperBase {
+  private static final String DELETE = ChangelogOperation.DELETE.name();
+  private static final String INSERT = ChangelogOperation.INSERT.name();
+  private static final String UPDATE_BEFORE = ChangelogOperation.UPDATE_BEFORE.name();
+  private static final String UPDATE_AFTER = ChangelogOperation.UPDATE_AFTER.name();
+
+  private final int changeTypeIndex = 3;
+  private final List<Integer> identifierFieldIdx = Lists.newArrayList(0, 1);
+
+  private enum RowType {
+    DELETED,
+    INSERTED,
+    CARRY_OVER,
+    UPDATED
+  }
+
+  @Test
+  public void testIterator() {
+    List<Object[]> pm = Lists.newArrayList();
+    // generate 24 permutations.
+    permute(
+        Arrays.asList(RowType.DELETED, RowType.INSERTED, RowType.CARRY_OVER, RowType.UPDATED),
+        0,
+        pm);
+    Assert.assertEquals(24, pm.size());
+
+    for (Object[] item : pm) {
+      validate(item);
+    }
+  }
+
+  private void validate(Object[] item) {
+    List<Row> rows = Lists.newArrayList();
+    List<Object[]> expectedRows = Lists.newArrayList();
+    for (int i = 0; i < item.length; i++) {
+      rows.addAll(toOriginalRows((RowType) item[i], i));
+      expectedRows.addAll(toExpectedRows((RowType) item[i], i));
+    }
+
+    Iterator<Row> iterator =
+        ChangelogIterator.iterator(rows.iterator(), changeTypeIndex, identifierFieldIdx);
+    List<Row> result = Lists.newArrayList(iterator);
+    assertEquals("Rows should match", expectedRows, rowsToJava(result));
+  }
+
+  private List<Row> toOriginalRows(RowType rowType, int order) {

Review Comment:
   Nit: for the variable name, maybe `index` is better than `order`, for a value that increases within a collection? "Order" to me seems like a property of the whole collection.
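The test's `permute` helper is not shown in this hunk; a standard swap-and-recurse permutation over the four row types would yield the 24 orderings the test asserts. The class name `PermuteSketch` below is hypothetical, and this is only one common way to implement it, not necessarily the PR's:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class PermuteSketch {
  // Generates all orderings of the input by swapping each element into
  // position `start`, recursing on the remainder, then restoring the swap.
  public static <T> void permute(List<T> items, int start, List<Object[]> out) {
    if (start == items.size() - 1) {
      out.add(items.toArray());
      return;
    }
    for (int i = start; i < items.size(); i++) {
      Collections.swap(items, start, i);
      permute(items, start + 1, out);
      Collections.swap(items, start, i); // undo, so the next swap starts clean
    }
  }

  public static void main(String[] args) {
    List<Object[]> pm = new ArrayList<>();
    permute(new ArrayList<>(List.of("DELETED", "INSERTED", "CARRY_OVER", "UPDATED")), 0, pm);
    System.out.println(pm.size()); // 4! = 24 permutations
  }
}
```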



##########
spark/v3.3/spark/src/main/java/org/apache/iceberg/spark/ChangelogIterator.java:
##########
@@ -0,0 +1,204 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.iceberg.spark;
+
+import java.io.Serializable;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Objects;
+import org.apache.iceberg.ChangelogOperation;
+import org.apache.iceberg.relocated.com.google.common.collect.Iterators;
+import org.apache.spark.sql.Row;
+import org.apache.spark.sql.RowFactory;
+import org.apache.spark.sql.catalyst.expressions.GenericInternalRow;
+import org.apache.spark.sql.catalyst.expressions.GenericRowWithSchema;
+
+/**
+ * An iterator that transforms rows from changelog tables within a single Spark task. It assumes
+ * that rows are sorted by identifier columns and change type.
+ *
+ * <p>It removes the carry-over rows. Carry-over rows are the result of a removal and insertion of
+ * the same row within an operation because of the copy-on-write mechanism. For example, given a
+ * file which contains row1 (id=1, data='a') and row2 (id=2, data='b'). A copy-on-write delete of
+ * row2 would require erasing this file and preserving row1 in a new file written with row1' which

Review Comment:
   Yeah, I think it's hard to read now, because we use both row1' and row1 (id=1, data='a'). I think we can either just use the row1, row1' notation or the actual value notation throughout, and be consistent.



##########
spark/v3.3/spark/src/main/java/org/apache/iceberg/spark/ChangelogIterator.java:
##########
@@ -0,0 +1,204 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.iceberg.spark;
+
+import java.io.Serializable;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Objects;
+import org.apache.iceberg.ChangelogOperation;
+import org.apache.iceberg.relocated.com.google.common.collect.Iterators;
+import org.apache.spark.sql.Row;
+import org.apache.spark.sql.RowFactory;
+import org.apache.spark.sql.catalyst.expressions.GenericInternalRow;
+import org.apache.spark.sql.catalyst.expressions.GenericRowWithSchema;
+
+/**
+ * An iterator that transforms rows from changelog tables within a single Spark task. It assumes
+ * that rows are sorted by identifier columns and change type.
+ *
+ * <p>It removes the carry-over rows. Carry-over rows are the result of a removal and insertion of
+ * the same row within an operation because of the copy-on-write mechanism. For example, given a
+ * file which contains row1 (id=1, data='a') and row2 (id=2, data='b'). A copy-on-write delete of
+ * row2 would require erasing this file and preserving row1 in a new file written with row1' which
+ * is identical to row1. The change-log table would report this as (row1 deleted) and (row1'
+ * inserted), since this row was not actually modified it is not an actual change in the table. The

Review Comment:
   The last part of the sentence does not seem grammatically correct (it's two sentences without any conjunction). Maybe:
   ```since this row was not actually modified it is not an actual change in the table```
   =>
   ```despite it not being an actual change to the table```



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@iceberg.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

