yashmayya commented on code in PR #13733: URL: https://github.com/apache/pinot/pull/13733#discussion_r1758559229
##########
pinot-spi/src/main/java/org/apache/pinot/spi/utils/CommonConstants.java:
##########
@@ -1192,6 +1209,9 @@ public enum WindowOverFlowMode {
     public static class PlanVersions {
       public static final int V1 = 1;
     }
+
+    public static final String ASK_SERVERS_FOR_EXPLAIN_PLAN = "pinot.query.explain.ask.servers";

Review Comment:
   Agreed on using `physical`; what about something like `pinot.multistage.explain.query.include.segment.level.plan`? It's fairly verbose, but it should hopefully convey the intent clearly to users.

##########
pinot-query-runtime/src/main/java/org/apache/pinot/query/service/dispatch/DispatchClient.java:
##########
@@ -50,12 +51,17 @@ public ManagedChannel getChannel() {
   }
 
   public void submit(Worker.QueryRequest request, QueryServerInstance virtualServer, Deadline deadline,
-      Consumer<AsyncQueryDispatchResponse> callback) {
-    _dispatchStub.withDeadline(deadline).submit(request, new DispatchObserver(virtualServer, callback));
+      Consumer<AsyncResponse<Worker.QueryResponse>> callback) {
+    _dispatchStub.withDeadline(deadline).submit(request, new LastValueDispatchObserver<>(virtualServer, callback));
   }
 
   public void cancel(long requestId) {
     Worker.CancelRequest cancelRequest = Worker.CancelRequest.newBuilder().setRequestId(requestId).build();
     _dispatchStub.cancel(cancelRequest, NO_OP_CANCEL_STREAM_OBSERVER);
   }
+
+  public void explain(Worker.QueryRequest request, QueryServerInstance virtualServer, Deadline deadline,
+      Consumer<AsyncResponse<List<Worker.ExplainResponse>>> callback) {
+    _dispatchStub.withDeadline(deadline).explain(request, new AllValuesDispatchObserver<>(virtualServer, callback));

Review Comment:
   > What I don't get is why the proto file is defined as:
   >
   > ```
   > rpc Submit(ServerRequest) returns (stream ServerResponse);
   > ```
   >
   > instead of
   >
   > ```
   > rpc Submit(ServerRequest) returns (ServerResponse);
   > ```

   That's the proto definition for [GrpcQueryServer](https://github.com/apache/pinot/blob/de577bc457b580e89ecb4e82076ce09f209bca18/pinot-core/src/main/java/org/apache/pinot/core/transport/grpc/GrpcQueryServer.java#L65), right? From what I can tell, that one is for v1 streaming queries. The proto definition for the multi-stage engine's [QueryServer](https://github.com/apache/pinot/blob/de577bc457b580e89ecb4e82076ce09f209bca18/pinot-query-runtime/src/main/java/org/apache/pinot/query/service/server/QueryServer.java#L50) `Submit` RPC is here: https://github.com/apache/pinot/blob/de577bc457b580e89ecb4e82076ce09f209bca18/pinot-common/src/main/proto/worker.proto#L26

   So this makes sense now, since the new `Explain` RPC returns a `stream ExplainResponse` and the implementation does call `onNext` multiple times.
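   To make the observer naming above concrete: with a server-streaming RPC the client has to be prepared for any number of `onNext` calls before `onCompleted`. The following is a minimal, self-contained sketch of the collecting side, illustrative only; it is not the PR's actual `AllValuesDispatchObserver` / `AsyncResponse` classes, and the class name and callback shape here are made up:

   ```java
   import io.grpc.stub.StreamObserver;
   import java.util.ArrayList;
   import java.util.List;
   import java.util.function.Consumer;

   /**
    * Sketch of an observer for a server-streaming RPC: every value delivered via onNext()
    * is collected, and the full list is handed to the callback once the stream completes.
    */
   public class CollectingStreamObserver<T> implements StreamObserver<T> {
     private final List<T> _values = new ArrayList<>();
     private final Consumer<List<T>> _onComplete;
     private final Consumer<Throwable> _onError;

     public CollectingStreamObserver(Consumer<List<T>> onComplete, Consumer<Throwable> onError) {
       _onComplete = onComplete;
       _onError = onError;
     }

     @Override
     public void onNext(T value) {
       // Called once per streamed message, e.g. once per ExplainResponse sent by the server.
       _values.add(value);
     }

     @Override
     public void onError(Throwable t) {
       _onError.accept(t);
     }

     @Override
     public void onCompleted() {
       // Only now do we know the server has sent everything, so deliver the whole list.
       _onComplete.accept(_values);
     }
   }
   ```

   A "last value" observer for the unary-style `Submit` response would instead keep overwriting a single field in `onNext` and hand that one value over in `onCompleted`.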
##########
pinot-query-runtime/src/main/java/org/apache/pinot/query/runtime/operator/MultiStageOperator.java:
##########
@@ -167,6 +173,26 @@ protected TransferableBlock updateEosBlock(TransferableBlock upstreamEos, StatMa
     return upstreamEos;
   }
 
+  @Override
+  public ExplainInfo getExplainInfo() {
+    return new ExplainInfo(getExplainName(), getExplainAttributes(), getChildrenExplainInfo());
+  }
+
+  protected List<ExplainInfo> getChildrenExplainInfo() {
+    return getChildOperators().stream()
+        .filter(Objects::nonNull)
+        .map(Operator::getExplainInfo)
+        .collect(Collectors.toList());
+  }
+
+  protected String getExplainName() {
+    return toExplainString();
+  }
+
+  protected Map<String, Plan.ExplainNode.AttributeValue> getExplainAttributes() {
+    return Collections.emptyMap();
+  }

Review Comment:
   Hm, agreed on the current state of the `Operator` interface. Thanks for adding Javadocs to all the explain-related methods; that should help out quite a bit. I think we can discuss potential refactoring of that interface separately; this looks good for now.
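   For readers skimming the thread, what `getExplainInfo()` returns is essentially a recursive tree: each operator contributes its name, an attribute map, and the `ExplainInfo` of its children. Below is a simplified, self-contained sketch of that structure and how such a tree could be rendered; it is not Pinot's actual `ExplainInfo` class (which carries `Plan.ExplainNode.AttributeValue` attributes), and the sample plan and names are made up:

   ```java
   import java.util.List;
   import java.util.Map;

   /** Simplified stand-in for the per-operator explain tree built by getExplainInfo(). */
   record ExplainInfoSketch(String name, Map<String, String> attributes, List<ExplainInfoSketch> children) {

     /** Renders the tree with two-space indentation per level, similar to a textual EXPLAIN output. */
     String render(int indent) {
       StringBuilder sb = new StringBuilder();
       sb.append("  ".repeat(indent)).append(name);
       if (!attributes.isEmpty()) {
         sb.append(' ').append(attributes);
       }
       sb.append('\n');
       for (ExplainInfoSketch child : children) {
         sb.append(child.render(indent + 1));
       }
       return sb.toString();
     }

     public static void main(String[] args) {
       // A leaf-stage fragment: a filter on top of a table scan.
       ExplainInfoSketch scan = new ExplainInfoSketch("TABLE_SCAN", Map.of("table", "myTable"), List.of());
       ExplainInfoSketch filter = new ExplainInfoSketch("FILTER", Map.of("predicate", "col1 > 10"), List.of(scan));
       System.out.print(filter.render(0));
     }
   }
   ```

   Running it prints the `FILTER` node with its `TABLE_SCAN` child indented beneath it, which is the kind of per-operator tree a server can report back for the segment-level plan.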
##########
pinot-query-planner/src/main/java/org/apache/pinot/query/planner/logical/PlanNodeToRelConverter.java:
##########
@@ -0,0 +1,468 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.pinot.query.planner.logical;
+
+import com.google.common.base.Preconditions;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.List;
+import java.util.stream.Collectors;
+import org.apache.calcite.plan.RelOptCluster;
+import org.apache.calcite.plan.RelTraitSet;
+import org.apache.calcite.rel.RelCollation;
+import org.apache.calcite.rel.RelCollations;
+import org.apache.calcite.rel.RelDistribution;
+import org.apache.calcite.rel.RelDistributions;
+import org.apache.calcite.rel.RelNode;
+import org.apache.calcite.rel.core.SetOp;
+import org.apache.calcite.rel.core.Window;
+import org.apache.calcite.rel.logical.LogicalIntersect;
+import org.apache.calcite.rel.logical.LogicalMinus;
+import org.apache.calcite.rel.logical.LogicalSort;
+import org.apache.calcite.rel.logical.LogicalUnion;
+import org.apache.calcite.rel.logical.LogicalWindow;
+import org.apache.calcite.rel.type.RelDataType;
+import org.apache.calcite.rex.RexLiteral;
+import org.apache.calcite.rex.RexNode;
+import org.apache.calcite.rex.RexWindowBound;
+import org.apache.calcite.rex.RexWindowBounds;
+import org.apache.calcite.sql.SqlAggFunction;
+import org.apache.calcite.tools.RelBuilder;
+import org.apache.calcite.util.ImmutableBitSet;
+import org.apache.pinot.common.utils.DatabaseUtils;
+import org.apache.pinot.core.operator.ExplainAttributeBuilder;
+import org.apache.pinot.core.plan.PinotExplainedRelNode;
+import org.apache.pinot.query.planner.plannode.AggregateNode;
+import org.apache.pinot.query.planner.plannode.ExchangeNode;
+import org.apache.pinot.query.planner.plannode.ExplainedNode;
+import org.apache.pinot.query.planner.plannode.FilterNode;
+import org.apache.pinot.query.planner.plannode.JoinNode;
+import org.apache.pinot.query.planner.plannode.MailboxReceiveNode;
+import org.apache.pinot.query.planner.plannode.MailboxSendNode;
+import org.apache.pinot.query.planner.plannode.PlanNode;
+import org.apache.pinot.query.planner.plannode.PlanNodeVisitor;
+import org.apache.pinot.query.planner.plannode.ProjectNode;
+import org.apache.pinot.query.planner.plannode.SetOpNode;
+import org.apache.pinot.query.planner.plannode.SortNode;
+import org.apache.pinot.query.planner.plannode.TableScanNode;
+import org.apache.pinot.query.planner.plannode.ValueNode;
+import org.apache.pinot.query.planner.plannode.WindowNode;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+
+/**
+ * Converts a {@link PlanNode} into a {@link RelNode}.
+ *
+ * This class is used to convert serialized plan nodes into RelNodes so they can be used when explain with
+ * implementation is requested. Therefore some nodes may be transformed in a way that loses information that is
+ * required to create an actual executable plan but not necessary in order to describe the plan.
+ */
+public final class PlanNodeToRelConverter {
+  private static final Logger LOGGER = LoggerFactory.getLogger(PlanNodeToRelConverter.class);
+
+  private PlanNodeToRelConverter() {
+  }
+
+  public static RelNode convert(RelBuilder builder, PlanNode planNode) {
+    ConverterVisitor visitor = new ConverterVisitor(builder);
+    planNode.visit(visitor, null);
+
+    return visitor.build();
+  }
+
+  private static class ConverterVisitor implements PlanNodeVisitor<Void, Void> {

Review Comment:
   Makes sense, thanks for elaborating! 😄
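   For context on the machinery this class builds on: Calcite's `RelBuilder` assembles `RelNode` trees bottom-up, so a visitor converts a plan node by first pushing its converted inputs and then applying the corresponding relational operator. Here is a small, self-contained sketch of just those `RelBuilder` mechanics, using the plain Calcite API with a made-up schema and predicate; it is not this PR's `ConverterVisitor`:

   ```java
   import org.apache.calcite.plan.RelOptUtil;
   import org.apache.calcite.rel.RelNode;
   import org.apache.calcite.sql.fun.SqlStdOperatorTable;
   import org.apache.calcite.tools.FrameworkConfig;
   import org.apache.calcite.tools.Frameworks;
   import org.apache.calcite.tools.RelBuilder;

   public class RelBuilderSketch {
     public static void main(String[] args) {
       // Minimal config with an empty root schema; a real converter would be handed a
       // RelBuilder that is already wired to the query's cluster and catalog.
       FrameworkConfig config = Frameworks.newConfigBuilder()
           .defaultSchema(Frameworks.createRootSchema(true))
           .build();
       RelBuilder builder = RelBuilder.create(config);

       // Build bottom-up: first the input (a literal VALUES node standing in for a scan),
       // then a filter on top of it -- the same push-input-then-apply pattern a
       // PlanNode visitor would follow.
       RelNode rel = builder
           .values(new String[] {"col1"}, 1, 2, 3)
           .filter(builder.call(SqlStdOperatorTable.GREATER_THAN, builder.field("col1"), builder.literal(1)))
           .build();

       // Prints the resulting logical plan, e.g. a LogicalFilter over LogicalValues.
       System.out.println(RelOptUtil.toString(rel));
     }
   }
   ```

   Judging by the imports and the class Javadoc, nodes that cannot be reconstructed faithfully are presumably represented by a descriptive `PinotExplainedRelNode` rather than an executable RelNode.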
+ LOGGER.error("Pipeline breaker is not supported in explain query"); + return stagePlan; + } Review Comment: Ah, I hadn't thought of that either, thanks for the explanation! I guess we can just update that TODO comment for now, it makes sense to defer this considering the current explain also doesn't properly support pipeline breaker. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: commits-unsubscr...@pinot.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org --------------------------------------------------------------------- To unsubscribe, e-mail: commits-unsubscr...@pinot.apache.org For additional commands, e-mail: commits-h...@pinot.apache.org