rafal-c opened a new issue, #43496:
URL: https://github.com/apache/arrow/issues/43496
### Describe the bug, including details regarding any error messages, version, and platform.
Consider a simple program (code below) that creates a Table, turns it into
a Dataset, and writes the Dataset with Filename partitioning to the directory
`/tmp/dataset`. Let's call it `myprogram`. If you run `myprogram` repeatedly
and list `/tmp/dataset` after each run, this is what you may see:
```bash
➜ ./myprogram && ls /tmp/dataset
2019_part0.parquet 2020_part0.parquet 2021_part0.parquet
2022_part0.parquet
➜ ./myprogram && ls /tmp/dataset
2019_part0.parquet 2020_part0.parquet 2021_part0.parquet
2022_part0.parquet
➜ ./myprogram && ls /tmp/dataset
2019_part0.parquet 2020_part0.parquet 2021_part0.parquet
2022_part0.parquet
➜ ./myprogram && ls /tmp/dataset
2020_part0.parquet 2021_part0.parquet 2022_part0.parquet
➜ ./myprogram && ls /tmp/dataset
2019_part0.parquet 2020_part0.parquet 2021_part0.parquet
2022_part0.parquet
➜ ./myprogram && ls /tmp/dataset
2019_part0.parquet 2020_part0.parquet 2021_part0.parquet
2022_part0.parquet
➜ ./myprogram && ls /tmp/dataset
2020_part0.parquet 2021_part0.parquet 2022_part0.parquet
➜ ./myprogram && ls /tmp/dataset
2019_part0.parquet 2020_part0.parquet 2021_part0.parquet
2022_part0.parquet
➜ ./myprogram && ls /tmp/dataset
2019_part0.parquet 2021_part0.parquet 2022_part0.parquet
➜ ./myprogram && ls /tmp/dataset
2020_part0.parquet 2021_part0.parquet 2022_part0.parquet
```
So for some reason it randomly skips parts of the dataset on write. This is
not specific to Parquet, and it happens on all major platforms
(Linux/Windows/macOS) with Arrow 16.0.0.
Here is the full code to reproduce:
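To quantify how often a run loses files, a small shell helper like the one
below can wrap the repro (the function name and arguments are illustrative;
it just counts runs that end with fewer than the expected 4 partition files):

```shell
# check_runs: run PROG repeatedly and count runs where the output
# directory DIR ends up with fewer than 4 partition files.
check_runs() {
  prog="$1"; dir="$2"; runs="$3"
  missing=0
  for i in $(seq 1 "$runs"); do
    "$prog" >/dev/null 2>&1
    n=$(ls "$dir" | wc -l)
    if [ "$n" -lt 4 ]; then
      missing=$((missing + 1))
    fi
  done
  echo "$missing of $runs runs lost at least one partition file"
}

# e.g.: check_runs ./myprogram /tmp/dataset 50
```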
```cpp
#include <arrow/api.h>
#include <arrow/dataset/api.h>
#include <arrow/filesystem/api.h>

arrow::Result<std::shared_ptr<arrow::Table>> makeTable() {
  using arrow::field;
  auto schema =
      arrow::schema({field("a", arrow::int64()), field("year", arrow::int64())});
  std::vector<std::shared_ptr<arrow::Array>> arrays(2);
  arrow::NumericBuilder<arrow::Int64Type> builder;
  ARROW_RETURN_NOT_OK(builder.AppendValues({5, 2, 4, 100, 2, 4}));
  ARROW_RETURN_NOT_OK(builder.Finish(&arrays[0]));
  builder.Reset();
  ARROW_RETURN_NOT_OK(builder.AppendValues({2019, 2020, 2021, 2021, 2022, 2022}));
  ARROW_RETURN_NOT_OK(builder.Finish(&arrays[1]));
  return arrow::Table::Make(schema, arrays);
}

int main() {
  namespace ds = arrow::dataset;
  // Create an Arrow Table and wrap it in an in-memory Dataset.
  auto table = makeTable().ValueOrDie();
  auto dataset = std::make_shared<ds::InMemoryDataset>(table);
  auto scanner_builder = dataset->NewScan().ValueOrDie();
  auto scanner = scanner_builder->Finish().ValueOrDie();
  // The partition schema determines which fields are part of the partitioning.
  auto partition_schema = arrow::schema({arrow::field("year", arrow::int64())});
  auto partitioning = std::make_shared<ds::FilenamePartitioning>(partition_schema);
  // We'll write Parquet files.
  auto format = std::make_shared<ds::ParquetFileFormat>();
  ds::FileSystemDatasetWriteOptions write_options;
  write_options.file_write_options = format->DefaultWriteOptions();
  write_options.existing_data_behavior =
      ds::ExistingDataBehavior::kDeleteMatchingPartitions;
  write_options.filesystem = std::make_shared<arrow::fs::LocalFileSystem>();
  write_options.base_dir = "/tmp/dataset";
  write_options.partitioning = partitioning;
  write_options.basename_template = "part{i}.parquet";
  return ds::FileSystemDataset::Write(write_options, scanner) !=
         arrow::Status::OK();
}
```
### Component(s)
C++