adriangb commented on code in PR #21055:
URL: https://github.com/apache/datafusion/pull/21055#discussion_r3243714627
##########
datafusion-examples/examples/custom_data_source/adapter_serialization.rs:
##########
@@ -275,6 +275,7 @@ impl PhysicalExtensionCodec for AdapterPreservingCodec {
buf: &[u8],
inputs: &[Arc<dyn ExecutionPlan>],
_ctx: &TaskContext,
+ _proto_converter: &dyn PhysicalProtoConverterExtension,
Review Comment:
We could add new methods that take the param and default to delegating to the old methods, which would avoid a breaking change. Did you consider that as an option?
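
A minimal sketch of the pattern suggested above: add a new trait method that accepts the extra parameter, with a default implementation that delegates to the existing method, so existing implementors keep compiling. The names here (`Codec`, `Converter`, `try_decode`) are illustrative stand-ins, not DataFusion's actual API.

```rust
/// Stand-in for the converter extension trait (hypothetical).
trait Converter {
    fn name(&self) -> &str;
}

struct DefaultConverter;
impl Converter for DefaultConverter {
    fn name(&self) -> &str {
        "default"
    }
}

trait Codec {
    /// Existing method: implementors already provide this.
    fn try_decode(&self, buf: &[u8]) -> Result<String, String>;

    /// New method with the extra param. The default impl delegates to the
    /// old method, so adding it does not break existing implementors.
    fn try_decode_with_converter(
        &self,
        buf: &[u8],
        _converter: &dyn Converter,
    ) -> Result<String, String> {
        self.try_decode(buf)
    }
}

struct LegacyCodec;
impl Codec for LegacyCodec {
    // Implements only the old method; still compiles unchanged.
    fn try_decode(&self, buf: &[u8]) -> Result<String, String> {
        Ok(format!("decoded {} bytes", buf.len()))
    }
}

fn main() {
    // Callers can pass a converter even to codecs that ignore it.
    let plan = LegacyCodec
        .try_decode_with_converter(&[1, 2, 3], &DefaultConverter)
        .unwrap();
    assert_eq!(plan, "decoded 3 bytes");
    println!("{plan}");
}
```

The trade-off is that codecs which never override the new method silently ignore the converter, which may be acceptable here since the default converter is what the FFI path passes anyway.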
##########
datafusion-examples/examples/custom_data_source/adapter_serialization.rs:
##########
@@ -303,6 +304,7 @@ impl PhysicalExtensionCodec for AdapterPreservingCodec {
&self,
_node: Arc<dyn ExecutionPlan>,
_buf: &mut Vec<u8>,
+ _proto_converter: &dyn PhysicalProtoConverterExtension,
Review Comment:
Naming these `converter` would match `try_from_physical_plan_with_converter`
and make the arg name shorter.
##########
datafusion/ffi/src/proto/physical_extension_codec.rs:
##########
@@ -145,8 +148,12 @@ unsafe extern "C" fn try_decode_fn_wrapper(
.collect::<Result<Vec<_>>>();
let inputs = sresult_return!(inputs);
- let plan =
- sresult_return!(codec.try_decode(buf.as_ref(), &inputs, task_ctx.as_ref()));
+ let plan = sresult_return!(codec.try_decode(
+ buf.as_ref(),
+ &inputs,
+ task_ctx.as_ref(),
+ &DefaultPhysicalProtoConverter {},
Review Comment:
@timsaucer not sure if you have any suggestions, but I think this is fine.
Generally we have not made any effort to make dynamic filters / expr
deduplication work with plans that go through FFI.
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]