fx19880617 commented on a change in pull request #5787:
URL: https://github.com/apache/incubator-pinot/pull/5787#discussion_r468275673



##########
File path: pinot-connectors/pinot-spark-connector/src/main/scala/org/apache/pinot/connector/spark/datasource/PinotDataSourceReader.scala
##########
@@ -0,0 +1,124 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.pinot.connector.spark.datasource
+
+import java.util.{List => JList}
+
+import org.apache.pinot.connector.spark.connector.query.SQLSelectionQueryGenerator
+import org.apache.pinot.connector.spark.connector.{
+  FilterPushDown,
+  PinotClusterClient,
+  PinotSplitter,
+  PinotUtils
+}
+import org.apache.spark.sql.catalyst.InternalRow
+import org.apache.spark.sql.sources._
+import org.apache.spark.sql.sources.v2.DataSourceOptions
+import org.apache.spark.sql.sources.v2.reader.{
+  DataSourceReader,
+  InputPartition,
+  SupportsPushDownFilters,
+  SupportsPushDownRequiredColumns
+}
+import org.apache.spark.sql.types._
+
+import scala.collection.JavaConverters._
+
+class PinotDataSourceReader(options: DataSourceOptions, userSchema: Option[StructType] = None)
+  extends DataSourceReader
+  with SupportsPushDownFilters
+  with SupportsPushDownRequiredColumns {
+
+  private val pinotDataSourceOptions = PinotDataSourceReadOptions.from(options)
+  private var acceptedFilters: Array[Filter] = Array.empty
+  private var currentSchema: StructType = _
+
+  override def readSchema(): StructType = {
+    if (currentSchema == null) {
+      currentSchema = userSchema.getOrElse {
+        val pinotTableSchema = PinotClusterClient.getTableSchema(
+          pinotDataSourceOptions.controller,
+          pinotDataSourceOptions.tableName
+        )
+        PinotUtils.pinotSchemaToSparkSchema(pinotTableSchema)
+      }
+    }
+    currentSchema
+  }
+
+  override def planInputPartitions(): JList[InputPartition[InternalRow]] = {
+    val schema = readSchema()
+    val tableType = PinotUtils.getTableType(pinotDataSourceOptions.tableName)
+
+    // The time boundary is used when the table is hybrid to ensure that the
+    // overlap between realtime and offline segment data is queried exactly once
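+    // (For example, with a boundary t0 on time column "ts", the generated
+    // queries would roughly use "ts <= t0" on the OFFLINE side and "ts > t0"
+    // on the REALTIME side, mirroring the broker's hybrid-table behavior.)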
+    val timeBoundaryInfo =
+      if (tableType.isDefined) {
+        None
+      } else {
+        PinotClusterClient.getTimeBoundaryInfo(
+          pinotDataSourceOptions.broker,
+          pinotDataSourceOptions.tableName
+        )
+      }
+
+    val whereCondition = FilterPushDown.compileFiltersToSqlWhereClause(this.acceptedFilters)
+    val generatedSQLs = SQLSelectionQueryGenerator.generate(
+      pinotDataSourceOptions.tableName,
+      timeBoundaryInfo,
+      schema.fieldNames,
+      whereCondition
+    )
+
+    val routingTable =
+      PinotClusterClient.getRoutingTable(pinotDataSourceOptions.broker, generatedSQLs)

Review comment:
   > Connecting to the Pinot server directly means the connector has to query the routing-table / time-boundary itself, which the broker normally does. Wondering if there is a plan to connect via the broker to avoid this? It may have the following advantages:
   > 
   > * No need to query the routing-table / time-boundary, unlike in this approach.
   > * Filter push down
   > 
   > One issue I see, though: it may not be feasible to stream data out of the broker with the current code. I am trying to understand the general direction/approach with these connectors.
   
   We are opening up the server with a streaming API, which can be used by Presto for a more scalable solution. This is in general the same usage pattern: making Pinot data queryable in a warehouse.
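   
   For reference, filter push-down is already wired in here via SupportsPushDownFilters: the accepted filters are compiled into a SQL WHERE clause before query generation. Below is a minimal sketch of that idea, handling only a few Spark Filter types; it is illustrative, not the connector's actual FilterPushDown implementation (the object name FilterToSqlSketch is made up):
   
       import org.apache.spark.sql.sources._
   
       // Illustrative only: a simplified filter-to-SQL compiler. Filters that
       // cannot be translated are returned as None and left for Spark to
       // evaluate after the scan.
       object FilterToSqlSketch {
         def compile(filters: Array[Filter]): Option[String] = {
           val clauses = filters.flatMap(toSql)
           if (clauses.isEmpty) None
           else Some(clauses.mkString("(", ") AND (", ")"))
         }
   
         private def toSql(filter: Filter): Option[String] = filter match {
           case EqualTo(attr, value)     => Some(s"$attr = ${literal(value)}")
           case GreaterThan(attr, value) => Some(s"$attr > ${literal(value)}")
           case LessThan(attr, value)    => Some(s"$attr < ${literal(value)}")
           case In(attr, values)         => Some(s"$attr IN (${values.map(literal).mkString(", ")})")
           case _                        => None
         }
   
         private def literal(value: Any): String = value match {
           case s: String => s"'${s.replace("'", "''")}'"
           case other     => other.toString
         }
       }
   
   For example, Array(EqualTo("carrier", "AA"), GreaterThan("tsMillis", 1000)) would compile to "(carrier = 'AA') AND (tsMillis > 1000)".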



