rdblue commented on code in PR #11240:
URL: https://github.com/apache/iceberg/pull/11240#discussion_r1811580485


##########
format/spec.md:
##########
@@ -454,35 +457,40 @@ The schema of a manifest file is a struct called `manifest_entry` with the follo
 
 `data_file` is a struct with the following fields:
 
-| v1         | v2         | Field id, name                    | Type                         | Description |
-| ---------- | ---------- |-----------------------------------|------------------------------|-------------|
-|            | _required_ | **`134  content`**                | `int` with meaning: `0: DATA`, `1: POSITION DELETES`, `2: EQUALITY DELETES` | Type of content stored by the data file: data, equality deletes, or position deletes (all v1 files are data files) |
-| _required_ | _required_ | **`100  file_path`**              | `string`                     | Full URI for the file with FS scheme |
-| _required_ | _required_ | **`101  file_format`**            | `string`                     | String file format name, avro, orc or parquet |
-| _required_ | _required_ | **`102  partition`**              | `struct<...>`                | Partition data tuple, schema based on the partition spec output using partition field ids for the struct field ids |
-| _required_ | _required_ | **`103  record_count`**           | `long`                       | Number of records in this file |
-| _required_ | _required_ | **`104  file_size_in_bytes`**     | `long`                       | Total file size in bytes |
-| _required_ |            | ~~**`105 block_size_in_bytes`**~~ | `long`                       | **Deprecated. Always write a default in v1. Do not write in v2.** |
-| _optional_ |            | ~~**`106  file_ordinal`**~~       | `int`                        | **Deprecated. Do not write.** |
-| _optional_ |            | ~~**`107  sort_columns`**~~       | `list<112: int>`             | **Deprecated. Do not write.** |
-| _optional_ | _optional_ | **`108  column_sizes`**           | `map<117: int, 118: long>`   | Map from column id to the total size on disk of all regions that store the column. Does not include bytes necessary to read other columns, like footers. Leave null for row-oriented formats (Avro) |
-| _optional_ | _optional_ | **`109  value_counts`**           | `map<119: int, 120: long>`   | Map from column id to number of values in the column (including null and NaN values) |
-| _optional_ | _optional_ | **`110  null_value_counts`**      | `map<121: int, 122: long>`   | Map from column id to number of null values in the column |
-| _optional_ | _optional_ | **`137  nan_value_counts`**       | `map<138: int, 139: long>`   | Map from column id to number of NaN values in the column |
-| _optional_ | _optional_ | **`111  distinct_counts`**        | `map<123: int, 124: long>`   | Map from column id to number of distinct values in the column; distinct counts must be derived using values in the file by counting or using sketches, but not using methods like merging existing distinct counts |
-| _optional_ | _optional_ | **`125  lower_bounds`**           | `map<126: int, 127: binary>` | Map from column id to lower bound in the column serialized as binary [1]. Each value must be less than or equal to all non-null, non-NaN values in the column for the file [2] |
-| _optional_ | _optional_ | **`128  upper_bounds`**           | `map<129: int, 130: binary>` | Map from column id to upper bound in the column serialized as binary [1]. Each value must be greater than or equal to all non-null, non-NaN values in the column for the file [2] |
-| _optional_ | _optional_ | **`131  key_metadata`**           | `binary`                     | Implementation-specific key metadata for encryption |
-| _optional_ | _optional_ | **`132  split_offsets`**          | `list<133: long>`            | Split offsets for the data file. For example, all row group offsets in a Parquet file. Must be sorted ascending |
-|            | _optional_ | **`135  equality_ids`**           | `list<136: int>`             | Field ids used to determine row equality in equality delete files. Required when `content=2` and should be null otherwise. Fields with ids listed in this column must be present in the delete file |
-| _optional_ | _optional_ | **`140  sort_order_id`**          | `int`                        | ID representing sort order for this file [3]. |
+| v1         | v2         | v3         | Field id, name                    | Type                         | Description |
+| ---------- | ---------- | ---------- |-----------------------------------|------------------------------|-------------|
+|            | _required_ | _required_ | **`134  content`**                | `int` with meaning: `0: DATA`, `1: POSITION DELETES`, `2: EQUALITY DELETES` | Type of content stored by the data file: data, equality deletes, or position deletes (all v1 files are data files) |

Review Comment:
   I think we should continue to set this as `1: POSITION DELETES` because that's what the content represents. Both position delete files in Parquet/Avro/ORC and deletion vectors are applied to data files the same way during planning (and this spec update helps by introducing `referenced_data_file` to speed up v2 planning). I think it makes the most sense to track how the deletes are serialized in the `format` field.
   
   That raises the question of what to use in the `format` field. I think that `puffin` fits with the other format names because it is the format of the entire data file. If you wanted to go inspect that file, you'd instantiate a Puffin reader.
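   
   Loosely, the split being argued for looks like this on the read side (a hypothetical sketch, not Iceberg's actual API; the `puffin` format string and the classification helper are assumptions for illustration):
   
   ```java
   // Hypothetical sketch: `content` says WHAT a file holds (position deletes),
   // while `file_format` says HOW those deletes are serialized, so the format
   // alone decides which reader to instantiate.
   enum DeleteSerialization {
     ROW_BASED,        // rows of (file_path, pos) in Parquet/Avro/ORC
     DELETION_VECTOR   // a bitmap blob inside a Puffin file
   }
   
   static DeleteSerialization classify(String fileFormat) {
     switch (fileFormat) {
       case "parquet":
       case "avro":
       case "orc":
         return DeleteSerialization.ROW_BASED;
       case "puffin":
         return DeleteSerialization.DELETION_VECTOR;
       default:
         throw new IllegalArgumentException("Unknown delete file format: " + fileFormat);
     }
   }
   ```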
   
   Both of these choices fit with how v2 works today. Where this gets a bit tricky is that we are also changing the granularity. Instead of having a single entry per Puffin file, we are tracking individual deletion vectors (using the now renamed `content_size_in_bytes` and `content_offset`) to cut down on false positives.
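   
   With that granularity, a scan only needs the referenced slice rather than the whole Puffin file. A minimal sketch, assuming the renamed fields give a blob's byte offset and length (illustrative, not actual reader code):
   
   ```java
   import java.io.IOException;
   import org.apache.iceberg.io.InputFile;
   import org.apache.iceberg.io.SeekableInputStream;
   
   // Reads the bytes of a single deletion-vector blob, located by the
   // manifest entry's content_offset and content_size_in_bytes fields.
   static byte[] readBlobSlice(InputFile puffinFile, long offset, int length) throws IOException {
     byte[] blob = new byte[length];
     try (SeekableInputStream in = puffinFile.newStream()) {
       in.seek(offset);                       // jump straight to the blob's start
       int pos = 0;
       while (pos < length) {                 // read exactly `length` bytes
         int n = in.read(blob, pos, length - pos);
         if (n < 0) throw new IOException("Unexpected EOF reading deletion vector blob");
         pos += n;
       }
     }
     return blob;                             // serialized deletion vector bytes
   }
   ```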
   
   Now that we are pointing to a slice of a Puffin file of position deletes, I think there's an argument that we could/should store the blob type somewhere: that way we have both the file type and the type of the content pointed to by the offset. If anything, I'd introduce a `content_type` field for this. But given that the type is always going to be the same and there's an open question about whether we would just detect the type from magic bytes, I would opt to simply omit it for now.
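   
   For reference, the magic-bytes alternative would be something like the following. The 4-byte value is a placeholder assumption; no magic constant has been settled here:
   
   ```java
   import java.util.Arrays;
   
   // Hypothetical type sniffing: identify a deletion-vector blob by a magic
   // prefix instead of a content_type field. DV_MAGIC is a placeholder value,
   // not a constant from the spec.
   class BlobTypes {
     private static final byte[] DV_MAGIC = {(byte) 0xD1, (byte) 0xD3, 0x39, 0x64};
   
     static boolean isDeletionVector(byte[] blob) {
       return blob.length >= DV_MAGIC.length
           && Arrays.equals(Arrays.copyOfRange(blob, 0, DV_MAGIC.length), DV_MAGIC);
     }
   }
   ```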
   
   TL;DR: What we're really looking for is a content type for the blob, but we don't need one right now, so let's leave our options open and omit it.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@iceberg.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

