huangzhengshun opened a new issue, #53150:
URL: https://github.com/apache/doris/issues/53150

   ### Search before asking
   
   - [x] I had searched in the 
[issues](https://github.com/apache/doris/issues?q=is%3Aissue) and found no 
similar issues.
   
   
   ### Version
   
   After new data was written via Paimon and the table's metadata was refreshed through Doris, queries still fail with a file-not-found error.
   
   ### What's Wrong?
   
   2025-07-12 16:01:24,626 [query] |Timestamp=2025-07-12 
16:01:17.851|Client=10.0.44.13:41772|User=root|Ctl=datahub6|Db=master_data|CommandType=Query|State=OK|ErrorCode=0|ErrorMessage=|Time(ms)=2|ScanBytes=0|ScanRows=0|ReturnRows=0|StmtId=446519|QueryId=105cfe2a3d8e4c70-ab6458a1be559999|IsQuery=false|IsNereids=false|FeIp=slave5|StmtType=REFRESH|Stmt=REFRESH
 TABLE 
master_data.dim_biz_dictionary_info|CpuTimeMS=0|ShuffleSendBytes=-1|ShuffleSendRows=-1|SqlHash=2a56781b1b579e253f207a8b20a1ff01|PeakMemoryBytes=0|SqlDigest=|ComputeGroupName=UNKNOWN|WorkloadGroup=|FuzzyVariables=|ScanBytesFromLocalStorage=-1|ScanBytesFromRemoteStorage=-1
   
   
   W20250712 16:04:48.601408 2900887 status.h:444] meet error status: 
[INTERNAL_ERROR]PStatus: (slave4)[INTERNAL_ERROR]Read parquet file 
hdfs://master:9000/paimon/datahub6/master_data.db/dim_biz_dictionary_info/bucket-0/data-a1b89791-a036-4937-8367-9eaf9fe7c051-0.parquet
 failed, reason = [NOT_FOUND](2), No such file or directory), reason: 
RemoteException: File does not exist: 
/paimon/datahub6/master_data.db/dim_biz_dictionary_info/bucket-0/data-a1b89791-a036-4937-8367-9eaf9fe7c051-0.parquet
           at 
org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:87)
           at 
org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:77)
           at 
org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getBlockLocations(FSDirStatAndListingOp.java:159)
           at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:2198)
           at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:795)
           at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:468)
           at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
           at 
org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:621)
           at 
org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:589)
           at 
org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:573)
           at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1227)
           at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1246)
           at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1169)
           at 
java.base/java.security.AccessController.doPrivileged(AccessController.java:712)
           at java.base/javax.security.auth.Subject.doAs(Subject.java:439)
           at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1953)
           at org.apache.hadoop.ipc.Server$Handler.run(Server.java:3203)
   
   
           0#  doris::io::HdfsFileReader::read_at_impl(unsigned long, 
doris::Slice, unsigned long*, doris::io::IOContext const*) at 
/home/zcp/repo_center/doris_release/doris/be/src/common/status.h:0
           1#  doris::io::FileReader::read_at(unsigned long, doris::Slice, 
unsigned long*, doris::io::IOContext const*) at 
/home/zcp/repo_center/doris_release/doris/be/src/common/status.h:505
           2#  doris::io::MergeRangeFileReader::_fill_box(int, unsigned long, 
unsigned long, unsigned long*, doris::io::IOContext const*) at 
/home/zcp/repo_center/doris_release/doris/be/src/common/status.h:505
           3#  doris::io::MergeRangeFileReader::read_at_impl(unsigned long, 
doris::Slice, unsigned long*, doris::io::IOContext const*) at 
/home/zcp/repo_center/doris_release/doris/be/src/common/status.h:505
           4#  doris::io::FileReader::read_at(unsigned long, doris::Slice, 
unsigned long*, doris::io::IOContext const*) at 
/home/zcp/repo_center/doris_release/doris/be/src/common/status.h:505
           5#  doris::io::BufferedFileStreamReader::read_bytes(unsigned char 
const**, unsigned long, unsigned long, doris::io::IOContext const*) at 
/home/zcp/repo_center/doris_release/doris/be/src/common/status.h:505
           6#  doris::vectorized::PageReader::_parse_page_header() at 
/home/zcp/repo_center/doris_release/doris/be/src/common/status.h:505
           7#  doris::vectorized::PageReader::next_page_header() at 
/home/zcp/repo_center/doris_release/doris/be/src/vec/exec/format/parquet/vparquet_page_reader.h:59
           8#  doris::vectorized::ColumnChunkReader::next_page() at 
/home/zcp/repo_center/doris_release/doris/be/src/common/status.h:505
           9#  
doris::vectorized::ScalarColumnReader::read_column_data(COW<doris::vectorized::IColumn>::immutable_ptr<doris::vectorized::IColumn>&,
 std::shared_ptr<doris::vectorized::IDataType const>&, 
doris::vectorized::FilterMap&, unsigned long, unsigned long*, bool*, bool) at 
/home/zcp/repo_center/doris_release/doris/be/src/common/status.h:505
           10# 
doris::vectorized::RowGroupReader::_read_column_data(doris::vectorized::Block*, 
std::vector<std::__cxx11::basic_string<char, std::char_traits<char>, 
std::allocator<char> >, std::allocator<std::__cxx11::basic_string<char, 
std::char_traits<char>, std::allocator<char> > > > const&, unsigned long, 
unsigned long*, bool*, doris::vectorized::FilterMap&) at 
/home/zcp/repo_center/doris_release/doris/be/src/vec/exec/format/parquet/vparquet_group_reader.cpp:427
           11# 
doris::vectorized::RowGroupReader::next_batch(doris::vectorized::Block*, 
unsigned long, unsigned long*, bool*) at 
/home/zcp/repo_center/doris_release/doris/be/src/vec/exec/format/parquet/vparquet_group_reader.cpp:321
           12# 
doris::vectorized::ParquetReader::get_next_block(doris::vectorized::Block*, 
unsigned long*, bool*) at 
/home/zcp/repo_center/doris_release/doris/be/src/common/status.h:500
           13# 
doris::vectorized::TableFormatReader::get_next_block(doris::vectorized::Block*, 
unsigned long*, bool*) at 
/home/zcp/repo_center/doris_release/doris/be/src/vec/exec/format/table/table_format_reader.h:46
           14# 
doris::vectorized::VFileScanner::_get_block_wrapped(doris::RuntimeState*, 
doris::vectorized::Block*, bool*) at 
/home/zcp/repo_center/doris_release/doris/be/src/common/status.h:505
           15# 
doris::vectorized::VFileScanner::_get_block_impl(doris::RuntimeState*, 
doris::vectorized::Block*, bool*) at 
/home/zcp/repo_center/doris_release/doris/be/src/common/status.h:505
           16# doris::vectorized::VScanner::get_block(doris::RuntimeState*, 
doris::vectorized::Block*, bool*) at 
/home/zcp/repo_center/doris_release/doris/be/src/vec/exec/scan/vscanner.cpp:0
           17# 
doris::vectorized::VScanner::get_block_after_projects(doris::RuntimeState*, 
doris::vectorized::Block*, bool*) at 
/home/zcp/repo_center/doris_release/doris/be/src/common/status.h:505
           18# 
doris::vectorized::ScannerScheduler::_scanner_scan(std::shared_ptr<doris::vectorized::ScannerContext>,
 std::shared_ptr<doris::vectorized::ScanTask>) at 
/home/zcp/repo_center/doris_release/doris/be/src/common/status.h:391
           19# std::_Function_handler<void (), 
doris::vectorized::ScannerScheduler::submit(std::shared_ptr<doris::vectorized::ScannerContext>,
 std::shared_ptr<doris::vectorized::ScanTask>)::$_1::operator()() 
const::{lambda()#1}>::_M_invoke(std::_Any_data const&) at 
/var/local/ldb-toolchain/bin/../lib/gcc/x86_64-linux-gnu/11/../../../../include/c++/11/bits/shared_ptr_base.h:701
           20# doris::ThreadPool::dispatch_thread() at 
/home/zcp/repo_center/doris_release/doris/be/src/util/threadpool.cpp:0
           21# doris::Thread::supervise_thread(void*) at 
/var/local/ldb-toolchain/bin/../usr/include/pthread.h:562
           22# ?
           23# ?
   . cur path: 
hdfs://master:9000/paimon/datahub6/master_data.db/dim_biz_dictionary_info/bucket-0/data-a1b89791-a036-4937-8367-9eaf9fe7c051-0.parquet
   
           0#  doris::Status doris::Status::create<true>(doris::PStatus const&) 
at 
/var/local/ldb-toolchain/bin/../lib/gcc/x86_64-linux-gnu/11/../../../../include/c++/11/bits/basic_string.h:187
           1#  std::_Function_handler<void (), 
doris::PInternalService::cancel_plan_fragment(google::protobuf::RpcController*, 
doris::PCancelPlanFragmentRequest const*, doris::PCancelPlanFragmentResult*, 
google::protobuf::Closure*)::$_0>::_M_invoke(std::_Any_data const&) at 
/home/zcp/repo_center/doris_release/doris/be/src/service/internal_service.cpp:0
           2#  doris::WorkThreadPool<false>::work_thread(int) at 
/var/local/ldb-toolchain/bin/../lib/gcc/x86_64-linux-gnu/11/../../../../include/c++/11/bits/atomic_base.h:646
           3#  execute_native_thread_routine at 
/data/gcc-11.1.0/build/x86_64-pc-linux-gnu/libstdc++-v3/include/bits/unique_ptr.h:85
           4#  ?
           5#  ?
   I20250712 16:04:48.601559 2900887 internal_service.cpp:654] Cancel query 
869b60c4aa884787-91befc130eea383c, reason: [INTERNAL_ERROR]PStatus: 
(slave4)[INTERNAL_ERROR]Read parquet file 
hdfs://master:9000/paimon/datahub6/master_data.db/dim_biz_dictionary_info/bucket-0/data-a1b89791-a036-4937-8367-9eaf9fe7c051-0.parquet
 failed, reason = [NOT_FOUND](2), No such file or directory), reason: 
RemoteException: File does not exist: 
/paimon/datahub6/master_data.db/dim_biz_dictionary_info/bucket-0/data-a1b89791-a036-4937-8367-9eaf9fe7c051-0.parquet
           ... (same Java and BE stack traces as above) ...
   I20250712 16:04:48.601635 2900887 pipeline_fragment_context.cpp:170] 
PipelineFragmentContext::cancel|query_id=869b60c4aa884787-91befc130eea383c|fragment_id=1|reason=[INTERNAL_ERROR]PStatus:
 (slave4)[INTERNAL_ERROR]Read parquet file 
hdfs://master:9000/paimon/datahub6/master_data.db/dim_biz_dictionary_info/bucket-0/data-a1b89791-a036-4937-8367-9eaf9fe7c051-0.parquet
 failed, reason = [NOT_FOUND](2), No such file or directory), reason: 
RemoteException: File does not exist: 
/paimon/datahub6/master_data.db/dim_biz_dictionary_info/bucket-0/data-a1b89791-a036-4937-8367-9eaf9fe7c051-0.parquet
           ... (same Java and BE stack traces as above) ...
   
   
   ### What You Expected?
   
   When querying external tables through a Catalog (e.g. Hive tables), the system should ignore files that do not exist: the file list is obtained from the metadata cache, and because the cache is not refreshed in real time, it may still list files that have already been deleted from storage. To avoid query failures caused by attempting to read these nonexistent files, the system should skip them.
  [#35319]
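   The skip-missing-files behavior requested above can be sketched roughly as follows. This is a hypothetical illustration only, not actual Doris code: `filter_existing` and the paths are made up, and a real fix would check existence against HDFS rather than the local filesystem.

   ```cpp
   #include <cassert>
   #include <cstdio>
   #include <filesystem>
   #include <iostream>
   #include <string>
   #include <vector>

   // Hypothetical sketch: the cached file list may be stale, so drop entries
   // whose files no longer exist instead of failing the whole query with
   // NOT_FOUND when the scanner tries to read them.
   std::vector<std::string> filter_existing(const std::vector<std::string>& cached_files) {
       std::vector<std::string> readable;
       for (const auto& path : cached_files) {
           if (std::filesystem::exists(path)) {
               readable.push_back(path);
           } else {
               // Stale cache entry: log and skip rather than raising an error.
               std::cerr << "skip missing file: " << path << "\n";
           }
       }
       return readable;
   }

   int main() {
       // One real file paired with a path that does not exist.
       auto tmp = std::filesystem::temp_directory_path() / "data-0.parquet";
       std::FILE* f = std::fopen(tmp.c_str(), "w");
       std::fclose(f);

       auto kept = filter_existing({tmp.string(), "/no/such/bucket-0/data-1.parquet"});
       assert(kept.size() == 1 && kept[0] == tmp.string());
       std::filesystem::remove(tmp);
       std::cout << "ok\n";
   }
   ```

   The same idea applied inside the BE scanner would treat a NOT_FOUND status from the file reader as "file removed by a Paimon snapshot expiration/compaction" and continue with the remaining splits.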
   
   ### How to Reproduce?
   
   _No response_
   
   ### Anything Else?
   
   _No response_
   
   ### Are you willing to submit PR?
   
   - [ ] Yes I am willing to submit a PR!
   
   ### Code of Conduct
   
   - [x] I agree to follow this project's [Code of 
Conduct](https://www.apache.org/foundation/policies/conduct)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

