r-reis opened a new pull request, #15273:
URL: https://github.com/apache/pinot/pull/15273

   Related to this [issue](https://github.com/apache/pinot/issues/15262)
   
   The main change is the inclusion of Confluent's [KafkaJsonSchemaDeserializer](https://github.com/confluentinc/schema-registry/blob/master/json-schema-serializer/src/main/java/io/confluent/kafka/serializers/json/KafkaJsonSchemaDeserializer.java) in the input-format plugin.
   
   The setup follows the instructions in the issue, but now there is a new decoder.class.name: "org.apache.pinot.plugin.inputformat.json.confluent.KafkaConfluentSchemaRegistryJsonMessageDecoder"
   
   Example of table config:
   
   ```
   {
     "tableName": "topic_1",
     "tableType": "REALTIME",
     "segmentsConfig": {
       "timeColumnName": "created_at",
       "timeType": "MILLISECONDS",
       "replicasPerPartition": "1"
     },
     "tenants": {},
     "tableIndexConfig": {
       "loadMode": "MMAP",
       "streamConfigs": {
         "stream.kafka.metadata.populate" : "true",
         "streamType": "kafka",
         "stream.kafka.decoder.prop.format": "JSON",
         "stream.kafka.consumer.type": "low-level",
         "stream.kafka.topic.name": "topic_1",
   
         "stream.kafka.decoder.class.name": 
"org.apache.pinot.plugin.inputformat.json.confluent.KafkaConfluentSchemaRegistryJsonMessageDecoder",
         "stream.kafka.consumer.factory.class.name": 
"org.apache.pinot.plugin.stream.kafka30.KafkaConsumerFactory",
                
         "stream.kafka.schema.registry.url": "http://localhost:8081";,
         "stream.kafka.decoder.prop.schema.registry.rest.url": 
"http://localhost:8081";,         
         "stream.kafka.broker.list": "localhost:9092",
         "realtime.segment.flush.threshold.rows": "0",
         "realtime.segment.flush.threshold.time": "24h",
         "realtime.segment.flush.threshold.segment.size": "50M",
         "stream.kafka.consumer.prop.auto.offset.reset": "smallest",            
         "key.serializer": 
"shaded.org.apache.kafka.connect.storage.StringDeserializer",
         "value.serializer": 
"shaded.org.apache.kafka.connect.storage.StringDeserializer"
       }
     },
        "ingestionConfig": {
                "continueOnError": true
        },
     "metadata": {
       "customConfigs": {}
     }
   }
   ```
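   
   For reference, here is a rough, hypothetical sketch (not the code from this PR) of how such a decoder can sit behind Pinot's StreamMessageDecoder SPI and delegate the schema-registry wire format to Confluent's deserializer. It assumes the "schema.registry.rest.url" decoder prop from the table config above is mapped to the Confluent client's "schema.registry.url" setting:
   
   ```
   // Hypothetical sketch only -- illustrates how a Confluent JSON-schema decoder can
   // plug into Pinot's StreamMessageDecoder SPI; the actual class in this PR may differ.
   package org.apache.pinot.plugin.inputformat.json.confluent;
   
   import com.fasterxml.jackson.databind.JsonNode;
   import com.fasterxml.jackson.databind.ObjectMapper;
   import io.confluent.kafka.serializers.json.KafkaJsonSchemaDeserializer;
   import java.util.HashMap;
   import java.util.Map;
   import java.util.Set;
   import org.apache.pinot.spi.data.readers.GenericRow;
   import org.apache.pinot.spi.stream.StreamMessageDecoder;
   
   public class KafkaConfluentSchemaRegistryJsonMessageDecoder implements StreamMessageDecoder<byte[]> {
     private static final ObjectMapper MAPPER = new ObjectMapper();
   
     private final KafkaJsonSchemaDeserializer<Object> _deserializer = new KafkaJsonSchemaDeserializer<>();
     private String _topicName;
   
     @Override
     public void init(Map<String, String> props, Set<String> fieldsToRead, String topicName)
         throws Exception {
       // "stream.kafka.decoder.prop.schema.registry.rest.url" reaches the decoder as
       // "schema.registry.rest.url"; map it to the Confluent client's "schema.registry.url".
       Map<String, Object> confluentConfig = new HashMap<>();
       confluentConfig.put("schema.registry.url", props.get("schema.registry.rest.url"));
       _deserializer.configure(confluentConfig, false);
       _topicName = topicName;
     }
   
     @Override
     public GenericRow decode(byte[] payload, GenericRow destination) {
       // The Confluent deserializer strips the wire format (magic byte + schema id),
       // resolves the JSON schema from the registry, and returns the deserialized value.
       Object value = _deserializer.deserialize(_topicName, payload);
       JsonNode json = MAPPER.valueToTree(value);
       // Simplified field copy; the real decoder would go through Pinot's JSON record
       // extractor to convert values into native types.
       json.fields().forEachRemaining(field -> destination.putValue(field.getKey(), field.getValue()));
       return destination;
     }
   
     @Override
     public GenericRow decode(byte[] payload, int offset, int length, GenericRow destination) {
       byte[] slice = new byte[length];
       System.arraycopy(payload, offset, slice, 0, length);
       return decode(slice, destination);
     }
   }
   ```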
   
   Exceptions and errors are handled the same way as in AvroSchemaDecoder and ProtoBufSchemaDecoder.
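   
   As a hedged illustration of that pattern (an assumption based on how I read the Avro/ProtoBuf decoders, not code from this PR; LOGGER and decodeWithSchemaRegistry are placeholder names standing in for the logic sketched above), the decode path would guard each message so a bad record is logged and skipped rather than failing the consuming thread:
   
   ```
   // Hypothetical fragment reworking the decode method from the sketch above;
   // LOGGER and decodeWithSchemaRegistry are placeholders, not names from this PR.
   @Override
   public GenericRow decode(byte[] payload, GenericRow destination) {
     try {
       return decodeWithSchemaRegistry(payload, destination);
     } catch (RuntimeException e) {
       LOGGER.error("Caught exception while decoding JSON record, skipping it", e);
       return null;
     }
   }
   ```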

