From: Stromberger
Date: Thursday, 27 April 2017 at 15:50
To: "user@cassandra.apache.org"
Subject: Re: How can I efficiently export the content of my table to KAFKA
Maybe
https://www.confluent.io/blog/kafka-connect-cassandra-sink-the-perfect-match/
On Wed, Apr 26, 2017 at 2:49 PM, Tobias Eriksson <tobias.eriks...@qvantel.com> wrote:
> Hi
>
> I would like to make a dump of the database, in JSON format, to KAFKA
>
> The database contains lots of data, millions and in some cases billions
> of “rows”
>
> I admit that this could still be a way forward; we have not evaluated it
> 100% yet, so I have not completely given up that thought
>
>
>
> -Tobias
>
>
>
>
>
> *From: *Justin Cameron
> *Reply-To: *"user@cassandra.apache.org"
> *Date: *Thursday, 27 April 2017
> *Subject: *Re: How can I efficiently export the content of my table to KAFKA
> You could probably save yourself a lot of hassle by just writing a Spark
> job that scans through the entire table, converts each row to JSON and
> dumps the output into a Kafka topic. It should be fairly straightforward
> to implement.
>
> Spark will manage the partitioning of "Producer" processes for you.
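
A minimal sketch of that Spark approach, assuming the DataStax Spark
Cassandra Connector and the plain Kafka Java producer are on the
classpath; the host, keyspace, table, and topic names (cassandra-host,
broker:9092, my_keyspace, my_table, export-topic) are placeholders, not
anything taken from this thread:

import java.util.Properties
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}
import org.apache.spark.sql.SparkSession

object CassandraToKafkaExport {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("cassandra-to-kafka-export")
      .config("spark.cassandra.connection.host", "cassandra-host") // placeholder
      .getOrCreate()

    // Full-table scan: the connector splits the read by token range, so
    // each executor only scans its own slice of the ring.
    val rows = spark.read
      .format("org.apache.spark.sql.cassandra")
      .options(Map("keyspace" -> "my_keyspace", "table" -> "my_table"))
      .load()

    // Serialize each row to JSON and publish partition by partition, so
    // only one Kafka producer is created per executor task.
    rows.toJSON.foreachPartition { partition: Iterator[String] =>
      val props = new Properties()
      props.put("bootstrap.servers", "broker:9092") // placeholder
      props.put("key.serializer",
        "org.apache.kafka.common.serialization.StringSerializer")
      props.put("value.serializer",
        "org.apache.kafka.common.serialization.StringSerializer")
      val producer = new KafkaProducer[String, String](props)
      try partition.foreach { json =>
        producer.send(new ProducerRecord[String, String]("export-topic", json))
      } finally producer.close()
    }

    spark.stop()
  }
}

The per-partition producer is the important detail: it lines Spark's
parallel token-range scan up with Kafka's batching producer, which is
where the scalability of this kind of export comes from.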
Hi
I would like to make a dump of the database, in JSON format, to KAFKA
The database contains lots of data, millions and in some cases billions of
“rows”
I will provide the customer with an export of the data, where they can read it
off of a KAFKA topic
My thinking was to have it scalable such