mangrrua commented on a change in pull request #5787:
URL: https://github.com/apache/incubator-pinot/pull/5787#discussion_r464524102

########## File path: pinot-connectors/pinot-spark-connector/documentation/read_model.md ##########

<!--

  Licensed to the Apache Software Foundation (ASF) under one
  or more contributor license agreements. See the NOTICE file
  distributed with this work for additional information
  regarding copyright ownership. The ASF licenses this file
  to you under the Apache License, Version 2.0 (the
  "License"); you may not use this file except in compliance
  with the License. You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing,
  software distributed under the License is distributed on an
  "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
  KIND, either express or implied. See the License for the
  specific language governing permissions and limitations
  under the License.

-->
# Read Model

The connector can scan offline, realtime, and hybrid tables. The `table` parameter must be given as follows:
- For an offline table: `tbl_OFFLINE`
- For a realtime table: `tbl_REALTIME`
- For a hybrid table: `tbl`

An example scan:

```scala
val df = spark.read
  .format("pinot")
  .option("table", "airlineStats")
  .load()
```

A custom schema can be specified directly. If no schema is specified, the connector reads the table schema from the Pinot controller and converts it to a Spark schema.
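For example, a custom schema can be passed through Spark's standard `DataFrameReader.schema` API. This is a minimal sketch; the column names and types below are illustrative assumptions and would need to match the actual Pinot table schema:

```scala
import org.apache.spark.sql.types.{IntegerType, StringType, StructField, StructType}

// Hypothetical columns for illustration; only the listed columns are read,
// and their names/types must match the Pinot table schema.
val customSchema = StructType(Seq(
  StructField("Carrier", StringType, nullable = true),
  StructField("ArrDelay", IntegerType, nullable = true)
))

val df = spark.read
  .format("pinot")
  .option("table", "airlineStats")
  .schema(customSchema)
  .load()
```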

### Architecture

The connector reads data directly from the Pinot servers. First, it builds a query from the requested columns and filters (if filter push down is enabled), then fetches the routing table for that query. Based on the routing table and `segmentsPerSplit` (explained in detail below), it creates Pinot splits that contain **one Pinot server and one or more segments per Spark partition**. Finally, each partition reads data from its assigned Pinot server in parallel.

Each Spark partition opens a connection to its Pinot server and reads the data. For example, assume the routing table for a given query looks like this:

```
- realtime ->
  - realtimeServer1 -> (segment1, segment2, segment3)
  - realtimeServer2 -> (segment4)
- offline ->
  - offlineServer10 -> (segment10, segment20)
```

If `segmentsPerSplit` is 3, three Spark partitions are created:

| Spark Partition | Queried Pinot Server/Segments |
| ------------- | ------------- |
| partition1 | realtimeServer1 / segment1, segment2, segment3 |
| partition2 | realtimeServer2 / segment4 |
| partition3 | offlineServer10 / segment10, segment20 |

If `segmentsPerSplit` is 1, six Spark partitions are created:

| Spark Partition | Queried Pinot Server/Segments |
| ------------- | ------------- |
| partition1 | realtimeServer1 / segment1 |
| partition2 | realtimeServer1 / segment2 |
| partition3 | realtimeServer1 / segment3 |
| partition4 | realtimeServer2 / segment4 |
| partition5 | offlineServer10 / segment10 |
| partition6 | offlineServer10 / segment20 |

A lower `segmentsPerSplit` value means more parallelism, but also more open connections to the Pinot servers and higher QPS on them.

A higher `segmentsPerSplit` value means less parallelism; each Pinot server scans more segments per request.

**Note:** Pinot servers prune segments based on segment metadata when a query arrives. In some cases (for example, when filtering on certain columns), some servers may return no data, so some Spark partitions will be empty. In such cases, `repartition()` can be applied after loading the data into Spark for more efficient analysis.

### Filter And Column Push Down

The connector supports filter and column push down: filters and selected columns are pushed to the Pinot servers, which improves read performance by minimizing the data transferred between Pinot and Spark. Filter push down is enabled by default; if filters should be applied in Spark instead, set `usePushDownFilters` to `false`.
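As a sketch of how these options fit together (assuming `segmentsPerSplit` and `usePushDownFilters` are passed as reader options under those names, and using a hypothetical `DestStateName` column for the filter), a read that maximizes parallelism and pushes a filter down to the servers might look like this:

```scala
import org.apache.spark.sql.functions.col

val df = spark.read
  .format("pinot")
  .option("table", "airlineStats")            // hybrid table: no _OFFLINE/_REALTIME suffix
  .option("segmentsPerSplit", "1")            // one segment per Spark partition (max parallelism)
  // .option("usePushDownFilters", "false")   // uncomment to evaluate filters in Spark instead
  .load()
  .filter(col("DestStateName") === "Florida") // pushed down to the Pinot servers by default
```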
The connector currently supports the `Equal, In, LessThan, LessThanOrEqual, Greater, GreaterThan, Not, TEXT_MATCH, And, Or` filters.

Review comment:
The filters section of the readme is outdated. I changed PQL to SQL but forgot to update the supported filters section; the connector supports all SQL filters now. I'll fix it.