Have you looked at the KairosDB schema?
https://kairosdb.github.io/
Regards,
Noorul
On Tue, Mar 28, 2017 at 6:17 AM, Ali Akhtar wrote:
> I have a use case where the data for individual users is being tracked, and
> every 15 minutes or so, the data for the past 15 minutes is inserted into
> the table …
Carlos,
Yes, I'm running multiple clients simultaneously. Each of them tries to
create the table if it doesn't exist in Cassandra.
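For what it's worth, schema disagreement is a known risk when several clients issue DDL concurrently. A safer pattern is to have a single client run the statement, guarded with IF NOT EXISTS (available since Cassandra 2.0); the keyspace, table, and column names below are made-up placeholders:

    CREATE TABLE IF NOT EXISTS my_ks.user_events (
        user_id uuid,
        bucket  timestamp,
        payload text,
        PRIMARY KEY (user_id, bucket)
    );

Even with the guard, racing the same DDL from many clients can still produce schema disagreement on some versions, so serializing DDL through one client is the more robust approach.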
Ali,
I've cleared the data directory. If it recurs, I'll follow the
steps listed and come back here.
Thanks for the information.
at org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164)
[apache-cassandra-3.7.jar:3.7]
at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105)
[apache-cassandra-3.7.jar:3.7]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_45]
Has anyone faced similar exceptions? How can I resolve this?
Regards,
Kamal C
Hello Marcus,
I altered the table to set timestamp_resolution to 'MICROSECONDS'. I
waited for some time, but the sstable count did not come down. Do you
think I should run a specific command to reduce the number of sstables
after setting this?
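For context, changing compaction options only takes effect as sstables are rewritten by future compactions, so the count will not drop immediately on its own. One way to force a rewrite is a user-triggered major compaction; the keyspace and table names below are placeholders:

    nodetool compact my_keyspace my_table

Note that a major compaction merges everything into a few large sstables, which has trade-offs of its own, so treat this as a sketch rather than a recommendation.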
Thanks and Regards
Noorul
On Mon, Feb 29, 2016 at 7:22 PM,
Yes, we have enabled it on OpsCenter. Is that the reason?
On Feb 29, 2016 8:07 PM, "Dominik Keil" wrote:
> Are you using incremental repairs?
>
> Am 29.02.2016 um 14:36 schrieb Noorul Islam K M:
>
>
> Hi all,
>
> We are using below compaction settings for a table
>
> compaction = {'timestamp_resolution': …
On Mon, Jan 11, 2016 at 10:25 PM, Jeff Jirsa wrote:
>
> Make sure streaming throughput isn’t throttled on the destination cluster.
>
How do I do that? Is stream_throughput_outbound_megabits_per_sec the
attribute in cassandra.yaml?
I think we can set that on the fly using nodetool setstreamthroughput.
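For reference, a sketch of inspecting and changing the throttle at runtime (the value is an example; 0 disables the throttle, and the change lasts until restart):

    nodetool getstreamthroughput
    nodetool setstreamthroughput 0

The matching cassandra.yaml setting is stream_throughput_outbound_megabits_per_sec (default 200 Mb/s); the throttle applies on the nodes sending the streams, so it is worth checking on both clusters.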
Is DSE shipping with 3.x?
Thanks and Regards
Noorul
On Fri, Dec 25, 2015 at 9:07 PM, Alexandre Dutra
wrote:
> Hi Jean,
>
> You should use 3.0.0-beta1.
>
> TL;DR
>
> DataStax Java driver series 2.2.x has been discontinued in favor of series
> 3.x; we explained why in this mail to the Java driver …
Is there a way to keep the writetime and TTL of each record as-is in the new cluster?
Thanks and Regards
Noorul
On Mon, Dec 21, 2015 at 5:46 PM, DuyHai Doan wrote:
> For cross-cluster operation with the Spark/Cassandra connector, you can look
> at this trick:
> http://www.slideshare.net/doanduyhai/fa
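At the CQL level, the usual trick is to read the metadata out explicitly and write it back with USING TIMESTAMP and USING TTL; the table and column names below are made up, and this must be done per non-key column since writetime() and ttl() apply to individual columns, not whole rows:

    SELECT id, val, writetime(val), ttl(val) FROM old_ks.t;

    INSERT INTO new_ks.t (id, val) VALUES (?, ?)
        USING TIMESTAMP ? AND TTL ?;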
As per the documentation, you don't have to if you don't delete or update.
On Sun, May 13, 2012 at 9:18 AM, Thanh Ha wrote:
> Hi All,
>
> Do I have to do maintenance "nodetool repair" on CFs that do not have
> deletions?
>
> I only perform deletes on two column families in my cluster.
>
>
> Thanh
Anyone?
On Wed, Jan 18, 2012 at 9:53 AM, Kamal Bahadur wrote:
> Hi All,
>
> It is great to know that a Cassandra column family can accommodate 2 billion
> columns per row! I was reading about how Cassandra stores the secondary
> index info internally. I now understand that the ind…
…after the number of columns in the hidden
index CF exceeds 2 billion? How does Cassandra handle this situation? I
guess one way to handle this is to add more nodes to the cluster. I am
interested in knowing whether any other solutions exist.
Thanks,
Kamal
DEBUG [ReadStage:2] 2012-01-04 13:44:00,887 ColumnFamilyStore.java (line
1563) Expanding slice filter to entire row to cover additional expressions
DEBUG [ReadStage:2] 2012-01-04 13:44:00,887 ColumnFamilyStore.java (line
1605) Scanning index 'Audit_Log.member EQ kamal' starting with …
DEBUG [ReadStage:2] 2012-…
…using a secondary index?
Thanks in advance.
Thanks,
Kamal
On Thu, Dec 29, 2011 at 6:40 PM, Peter Schuller wrote:
> > Thanks for the response Peter! I checked everything and it looks good to
> > me.
> >
> > I am stuck with this for almost 2 days now. Has anyone had this issue?
>
Thanks for the response Peter! I checked everything and it looks good to me.
I am stuck with this for almost 2 days now. Has anyone had this issue?
Thanks,
Kamal
On Wed, Dec 28, 2011 at 2:05 PM, Kamal Bahadur wrote:
> Hi All,
>
> My Cassandra cluster has 4 nodes with a RF of 2. I am …
From the documentation, I came to know that there is a limit on the
maximum number (2 billion) of columns that a column family can have. My
question is: is there a way to purge the old columns when the number of
columns is nearing the 2 billion mark?
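If the data has a natural expiry, one option is to write columns with a TTL so that old columns are purged automatically during compaction instead of by hand. A sketch, with made-up keyspace/table/column names:

    INSERT INTO my_ks.events (row_key, col_name, col_value)
        VALUES (?, ?, ?) USING TTL 2592000;   -- expire after 30 days

Expired columns become tombstones and are physically removed once gc_grace_seconds has elapsed and the relevant sstables are compacted.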
Thanks,
Kamal
This is how you create it dynamically (using the Thrift API):
KsDef ksdef = new KsDef();
ksdef.name = "ProgKS";
ksdef.replication_factor = 1;
ksdef.strategy_class =
"org.apache.cassandra.locator.RackUnawareStrategy";
List<CfDef> cfdefs = new ArrayList<CfDef>();
CfDef cfdef1 = new CfDef();
cfdef1.name = "ProgCF1";
cfdef1.keyspace = "ProgKS";
…
Hi
I'm using apache-cassandra-0.6.1. I am also trying to connect remotely and
always get java.net.ConnectException.
I have ensured the firewall is turned off on both Windows machines.
I changed ThriftAddress to 192.168.2.55 as suggested in this thread but still
get the same exception.
Exception conn…
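For what it's worth, in 0.6 the Thrift listen address and port are set in conf/storage-conf.xml; binding to 0.0.0.0 (all interfaces) is a quick way to rule out an address mismatch while debugging, though it should only be used for testing:

    <ThriftAddress>0.0.0.0</ThriftAddress>
    <ThriftPort>9160</ThriftPort>

Also verify the node is actually listening (e.g. netstat -an | findstr 9160 on Windows) and that the client targets port 9160.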