pvary commented on code in PR #12774:
URL: https://github.com/apache/iceberg/pull/12774#discussion_r2164138195


##########
core/src/main/java/org/apache/iceberg/io/ObjectModel.java:
##########
@@ -0,0 +1,106 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.iceberg.io;
+
+import org.apache.iceberg.FileFormat;
+
+/**
+ * Direct conversion is used between file formats and engine internal formats for performance
+ * reasons. Object models encapsulate these conversions.
+ *
+ * <p>{@link ReadBuilder} is provided for reading data files stored in a given {@link FileFormat}
+ * into the engine specific object model.
+ *
+ * <p>{@link AppenderBuilder} is provided for writing engine specific object model to data/delete
+ * files stored in a given {@link FileFormat}.
+ *
+ * <p>Iceberg supports the following object models natively:
+ *
+ * <ul>
+ *   <li>generic - reads and writes Iceberg {@link org.apache.iceberg.data.Record}s
+ *   <li>spark - reads and writes Spark InternalRow records
+ *   <li>spark-vectorized - vectorized reads for Spark columnar batches. Not supported for {@link
+ *       FileFormat#AVRO}
+ *   <li>flink - reads and writes Flink RowData records
+ *   <li>arrow - vectorized reads into Arrow columnar format. Only supported for {@link
+ *       FileFormat#PARQUET}
+ * </ul>
+ *
+ * <p>Engines could implement their own object models to leverage Iceberg data file reading and
+ * writing capabilities.
+ *
+ * @param <E> the engine specific schema of the input data for the appender
+ */
+public interface ObjectModel<E> {

Review Comment:
   > > Another possibility could be that we define an intermediate Object Model (maybe something like Arrow), and provide a double transformation File Format -> Arrow -> Engine, and Engine -> Arrow -> File Format.[..]
   > 
   > This is worth exploring. It may seem like it would be slower, but vectorized reads are much faster so I think it would be better overall as long as we could adapt to the right object model. It may be slightly slower for Avro (that can't vectorize reads and can produce specific classes) but overall I think it would be a win. Plus we would be able to consolidate reader code and focus on a shared high performance vectorized path.
   
   I was considering this as one of the next steps. The PR is already huge; if we start with the whole read/write path refactoring, it will quickly become unmanageable. That is why I decided to move in smaller steps; I'm afraid of not getting reviews for an even bigger change.
   
   If we agree that this is a good goal, I'm happy to explore it and commit to working on the next steps.
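   For illustration, here is a minimal, self-contained sketch of that double transformation, assuming the Arrow Java API as the intermediate format. `EngineRow` and the class name are hypothetical stand-ins, not anything from this PR:

   ```java
   import java.util.ArrayList;
   import java.util.List;
   import org.apache.arrow.memory.BufferAllocator;
   import org.apache.arrow.memory.RootAllocator;
   import org.apache.arrow.vector.IntVector;

   public class ArrowBridgeSketch {
     // Hypothetical engine-side record; real engines have richer row abstractions.
     record EngineRow(Integer id) {}

     public static void main(String[] args) {
       try (BufferAllocator allocator = new RootAllocator();
           IntVector ids = new IntVector("id", allocator)) {
         // Step 1: "File Format -> Arrow". A vectorized reader would fill this
         // vector directly from column chunks; here we fill it by hand.
         ids.allocateNew(3);
         ids.set(0, 1);
         ids.set(1, 2);
         ids.setNull(2);
         ids.setValueCount(3);

         // Step 2: "Arrow -> Engine". A per-engine adapter walks the batch and
         // emits engine rows; columnar-capable engines could skip this step.
         List<EngineRow> rows = new ArrayList<>();
         for (int i = 0; i < ids.getValueCount(); i++) {
           rows.add(new EngineRow(ids.isNull(i) ? null : ids.get(i)));
         }
         rows.forEach(System.out::println);
       }
     }
   }
   ```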
   
   Regardless, I think the standardization of the reader and writer APIs is a good first step.
   
   > > I don't see how we can push this behind a meaningful common interface.
   > 
   > The path to a better interface is to reduce the complexity of the object models that get built in the reader and writer functions. For Avro and Parquet (the ones I'm familiar with) there are different schemas passed to build the object model readers and writers for a couple of reasons. First, the format-specific schema is used so that the reader or writer is aligned exactly with the file schema (if Avro has an option, the option ID byte has to be written or read). Second, the engine-specific schema is used so that the correct engine type method (like `InternalRow#getByte`) is called. There is a possibility that we could add an adapter to remove engine-specific types (implement `InternalRow#getInt` that calls `InternalRow#getByte` instead of requiring the writer to do it). And lastly, the Iceberg schema is needed in some cases where we don't have information in the file-specific schema or when we need it to construct a generic record object. The former is why we recently added the Iceberg schema to Parquet readers -- Parquet doesn't have a Variant annotation yet.
   > 
   > There may be a path forward where we solve these challenges, or where the 
API is not based on passing potentially all 3 schemas into the reader/writer 
build functions. @pvary have you explored simplifying this?
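   The adapter idea from the quote above can be sketched with a hypothetical minimal row interface (standing in for an engine row such as Spark's `InternalRow`; none of these names are real):

   ```java
   import java.util.Set;

   // Hypothetical minimal engine row; real engine rows expose many more accessors.
   interface Row {
     byte getByte(int pos);
     int getInt(int pos);
   }

   // Lets a file format writer call getInt uniformly, even for columns the
   // engine physically stores as bytes. bytePositions would be derived from
   // the engine schema while the writer tree is built.
   final class WideningRow implements Row {
     private final Row delegate;
     private final Set<Integer> bytePositions;

     WideningRow(Row delegate, Set<Integer> bytePositions) {
       this.delegate = delegate;
       this.bytePositions = bytePositions;
     }

     @Override
     public byte getByte(int pos) {
       return delegate.getByte(pos);
     }

     @Override
     public int getInt(int pos) {
       // Widen bytes to ints so the writer needs no engine-specific dispatch.
       return bytePositions.contains(pos) ? delegate.getByte(pos) : delegate.getInt(pos);
     }
   }
   ```

   With such an adapter in front of every engine row, the engine-specific schema would no longer need to reach the writer functions at all.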
   
   I have spent serious time trying to simplify the current read path. One of the constraints was to keep the input of the writers and the output of the readers intact, so that external users who depend on our readers/writers could change the parametrization but keep the rest of their code unchanged. Another constraint was to not change anything below the reader/writer functions.
   
   If we are ready to change that part as well, then we need the following information as "physical" constraints:
   - Data File schema
   - Input data schema
   
   We can simplify the user input to the following (see the sketch after this list):
   - The requested Iceberg schema
   - A converter from the Iceberg schema to the Data File schema
   - A converter from the Iceberg schema to the Engine schema
   - Input from the user that exactly matches the calculated Engine schema
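   A hypothetical sketch of what such a builder surface could look like (none of these names are from this PR; `F` and `E` stand for a format schema type, e.g. a Parquet MessageType, and an engine schema type, e.g. a Spark StructType):

   ```java
   import java.util.function.Function;
   import org.apache.iceberg.Schema;
   import org.apache.iceberg.io.CloseableIterable;

   // Illustrative only: the user supplies the Iceberg schema plus the two
   // converters, and everything else is derived from them.
   interface SimplifiedReadBuilder<F, E, R> {
     // The requested Iceberg schema drives everything else.
     SimplifiedReadBuilder<F, E, R> project(Schema icebergSchema);

     // Converter from the Iceberg schema to the data file schema.
     SimplifiedReadBuilder<F, E, R> fileSchemaConverter(Function<Schema, F> toFileSchema);

     // Converter from the Iceberg schema to the engine schema; the caller must
     // then consume records that exactly match the derived engine schema.
     SimplifiedReadBuilder<F, E, R> engineSchemaConverter(Function<Schema, E> toEngineSchema);

     CloseableIterable<R> build();
   }
   ```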
   
   Again, this is something even bigger than the current PR. If we agree that this is a good goal, I'm happy to explore it and commit to working on it in the next PR. I need some time, and I need reviewers, but I would be more than happy to work on this!
   
   


