Hi, I recommend this:
http://kdm.dataview.org/
It's free and it implements the good design practices that you can find
in the DataStax courses.
2018-09-19 11:26 GMT-06:00 Abdul Patel :
> Hi,
>
> Do we have somewhere a Cassandra system tables relation diagram?
> Or just a system table diagram
Hi, the nodes are still communicating because they have not yet done the
logical separation. This is done by changing the snitch for each node in
the cluster, in that node's cassandra.yaml file, and then changing the
name of the DC in cassandra-rackdc.properties. It is very important to do a
tota
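
For reference, a minimal sketch of the two files involved, assuming
GossipingPropertyFileSnitch and a hypothetical DC name (adjust to your
own topology):

# cassandra.yaml (on each node being split off)
endpoint_snitch: GossipingPropertyFileSnitch

# cassandra-rackdc.properties (on the same nodes)
dc=DC2
rack=rack1

Each node has to be restarted for a snitch or DC name change to take
effect.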
Hi Vitaly!
You can check the driver compatibility matrix:
https://docs.datastax.com/en/developer/driver-matrix/doc/common/driverMatrix.html
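
As an illustration, a minimal connection sketch with the DataStax Java
driver 3.x; the contact point is a placeholder:

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.Session;

public class QuickConnect {
    public static void main(String[] args) {
        // Cluster and Session are Closeable in driver 3.x, so
        // try-with-resources cleans them up.
        try (Cluster cluster = Cluster.builder()
                                      .addContactPoint("127.0.0.1") // placeholder
                                      .build();
             Session session = cluster.connect()) {
            // Simple sanity query against a system table.
            ResultSet rs = session.execute(
                    "SELECT release_version FROM system.local");
            System.out.println(rs.one().getString("release_version"));
        }
    }
}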
2018-07-20 7:36 GMT-06:00 Vitaliy Semochkin :
> Hi,
>
> Which driver to use with cassandra 3
>
> the one that is provided by datastax, netflix or something else.
>
> Spring uses driver fr
connect, you can do it in two
ways: connecting directly to Cassandra, or through Spark.
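
For the Spark path, a rough sketch using the spark-cassandra-connector's
RDD API; host, keyspace, and table names are placeholders:

import static com.datastax.spark.connector.japi.CassandraJavaUtil.javaFunctions;

import com.datastax.spark.connector.japi.CassandraRow;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

public class SparkCassandraRead {
    public static void main(String[] args) {
        // Point the connector at any node of the cluster.
        SparkConf conf = new SparkConf()
                .setAppName("cassandra-read")
                .set("spark.cassandra.connection.host", "127.0.0.1"); // placeholder
        JavaSparkContext sc = new JavaSparkContext(conf);

        // Read a Cassandra table as an RDD of rows.
        JavaRDD<CassandraRow> rows =
                javaFunctions(sc).cassandraTable("my_keyspace", "my_table"); // placeholders

        System.out.println("rows: " + rows.count());
        sc.stop();
    }
}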
Best Regards.
Joseph Arriola
Big Data Consultant
2018-07-19 8:47 GMT-06:00 VAN HOLLEBEKE Emeric (SAFRAN CERAMICS) <
emeric.van-holleb...@safrangroup.com>:
> Okay thank you,
>
> Must I use a Spark
Hi Vishal!
Did you copy the sstables into the data directory?
Another thing: check that the table id in the directory name is the same
as the id Cassandra has in its metadata.
https://docs.datastax.com/en/dse/5.1/cql/cql/cql_using/useCreateTableCollisionFix.html
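
As a quick check (keyspace and table names below are placeholders),
compare the id in the schema with the suffix of the table's data
directory:

SELECT id FROM system_schema.tables
WHERE keyspace_name = 'my_keyspace' AND table_name = 'my_table';

The directory under the data folder (for example
/var/lib/cassandra/data/my_keyspace/) should be named
my_table-<that id without dashes>; if the ids differ, Cassandra will not
pick up the copied sstables, which is the situation the article above
describes.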
On Mon, Jun 11, 2018 at
Based on the metrics you describe, I think the big data architecture
could be Cassandra with Spark. You mention high availability; the APIs
could use Node.js. This combination is powerful; the challenge is in the
data model.
On the other hand, if you are willing to sacrifice high availability and
slow r
Hi Sudhakar!
Each one has different goals, which means that they are complementary.
Could you share more detail about the use case so I can give you better
advice?
On Thu, May 31, 2018 at 5:50 AM, Sudhakar Ganesan wrote:
> Team,
>
>
>
> I need to make a decision on Mongo DB vs Cas
Hi Jing.
How much data do you need to migrate, in volume and number of tables?
With Spark you could do the following (see the sketch below):
- Read the data and export it directly to MySQL.
- Read the data, export it to CSV files, and then load them into MySQL.
You could also use other paths such as:
- StreamSets
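
A minimal sketch of the first Spark path, assuming the
spark-cassandra-connector and a MySQL JDBC driver are on the classpath;
all hosts, names, and credentials are placeholders:

import java.util.Properties;

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SaveMode;
import org.apache.spark.sql.SparkSession;

public class CassandraToMysql {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("cassandra-to-mysql")
                .config("spark.cassandra.connection.host", "127.0.0.1") // placeholder
                .getOrCreate();

        // Read the Cassandra table through the connector's DataFrame source.
        Dataset<Row> df = spark.read()
                .format("org.apache.spark.sql.cassandra")
                .option("keyspace", "my_keyspace") // placeholder
                .option("table", "my_table")       // placeholder
                .load();

        // Write the same rows to MySQL over JDBC.
        Properties props = new Properties();
        props.setProperty("user", "mysql_user");          // placeholder
        props.setProperty("password", "mysql_password");  // placeholder
        df.write()
          .mode(SaveMode.Append)
          .jdbc("jdbc:mysql://127.0.0.1:3306/my_db", "my_table", props); // placeholders

        spark.stop();
    }
}

For the second path, df.write().csv("/some/output/dir") produces the CSV
files that can then be loaded into MySQL (for example with LOAD DATA
INFILE).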