In the example you gave, the primary key user_name is the row key. Since
the default partitioner is random, you are getting rows in random order.
Since each row has no clustering column, there is no further grouping of data.
Or in simple terms, each row has one record and is being returned ordered by
colu
n the README.
>
> https://github.com/algermissen/cassandra-ruby-sharded-workers
>
> Given the tombstone problem and what I know by now, I'd rather not use a TTL
> on the messages but remove outdated time shards completely after e.g. a
> week. But since reads never really go
t;>> https://blog.mozilla.org/it/2012/06/30/mysql-and-the-leap-second-high-cpu-and-the-fix/
>>>
>>> Seeing high cpu consumption for cassandra process
>>>
>>>
>>>
>>
>>
>> --
>> Sent from Jeff Dean's printf() mobile console
>>
>
>
--
Narendra Sharma
Software Engineer
*http://www.aeris.com <http://www.aeris.com>*
*http://narendrasharma.blogspot.com/ <http://narendrasharma.blogspot.com/>*
I think one table, say record, should be good. The primary key is the record
id. This will ensure good distribution.
Just update the active attribute to true or false.
For range queries on active vs archived records, maintain 2 indexes or try a
secondary index.
On Apr 23, 2015 1:32 PM, "Ali Akhtar" wrote:
>
the result efficiently to pick the employee who
> has gender column value equal to "male"
> >>>>>> 2/
> >>>>>> add a secondary index
> >>>>>> create index gender_index on people(gender)
> >>>>>> select * from people where company_id='xxx' and gender='male'
> >>>>>>
> >>>>>> I thought #2 seems more appropriate, but I also thought the
> secondary index helps only with locating the primary row key. With the
> select clause in #2, is it more efficient than #1, where the application is
> responsible for looping through the result and filtering the right content?
> >>>>>> (
> >>>>>> It totally makes sense if I only need to find out all the male
> employees (and not within a company) by using
> >>>>>> select * from people where gender='male'
> >>>>>> )
> >>>>>> thanks
> >>>>
> >>>
> >>
> >
> >
>
> --
> Sorry this was sent from mobile. Will do less grammar and spell check than
> usual.
>
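For what it's worth, approach #1 from the question (the application loops through the result and filters itself) can be sketched like this in Java; the Employee type and its field names are made up for illustration, not taken from any real schema:

```java
import java.util.List;
import java.util.stream.Collectors;

public class ClientSideFilter {
    // Hypothetical row type; the field names are illustrative only.
    record Employee(String companyId, String name, String gender) {}

    // Approach #1: fetch every row for the company, then loop through the
    // result in the application and keep only the matching gender.
    static List<Employee> filterByGender(List<Employee> companyRows, String gender) {
        return companyRows.stream()
                .filter(e -> gender.equals(e.gender()))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Employee> rows = List.of(
                new Employee("xxx", "alice", "female"),
                new Employee("xxx", "bob", "male"));
        System.out.println(filterByGender(rows, "male").size()); // prints 1
    }
}
```

The tradeoff discussed in the thread is exactly this: #1 transfers every row for the company to the client, while #2 lets the server narrow the result first.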
Haddad wrote:
> Please include the output of "nodetool ring", otherwise no one can help
> you.
>
>
> On Thu, Jan 16, 2014 at 12:45 PM, Narendra Sharma <
> narendra.sha...@gmail.com> wrote:
>
>> Any pointers? I am planning to do rolling restart of the clu
Any pointers? I am planning to do rolling restart of the cluster nodes to
see if it will help.
On Jan 15, 2014 2:59 PM, "Narendra Sharma"
wrote:
> RF=3.
> On Jan 15, 2014 1:18 PM, "Andrey Ilinykh" wrote:
>
>> what is the RF? What does nodetool ring show?
>
RF=3.
On Jan 15, 2014 1:18 PM, "Andrey Ilinykh" wrote:
> what is the RF? What does nodetool ring show?
>
>
> On Wed, Jan 15, 2014 at 1:03 PM, Narendra Sharma <
> narendra.sha...@gmail.com> wrote:
>
>> Sorry for the odd subject but something is wrong with
streaming from N1, N2, N6 and N7. I expect it to
stream from (worst case) N5, N6, N7, N8. What could potentially cause the
node to get confused about the ring?
Memory Analyzer (Eclipse
> MAT <http://www.eclipse.org/mat>) to figure out root causes and potential
> leaks
>
> Hope this helps
> -- Nitin
>
>
> On Thu, Jan 2, 2014 at 9:00 PM, Narendra Sharma > wrote:
>
>> The root cause turned out to be high heap. The Li
. syslog helped figure this out.
About Linux OOM Killer
"It is the job of the linux 'oom killer' to *sacrifice* one or more
processes in order to free up memory for the system when all else fails"
On Thu, Jan 2, 2014 at 10:38 AM, Robert Coli wrote:
> On Thu, Jan 2, 2014 at 8
8 node cluster running in aws. Any pointers where I should start looking?
No kill -9 in history.
ling on something.
>>
>> Cheers
>>
>> -
>> Aaron Morton
>> New Zealand
>> @aaronmorton
>>
>> Co-Founder & Principal Consultant
>> Apache Cassandra Consulting
>> http://www.thelastpickle.com
>>
>> On 17/
>
> On 17/12/2013, at 12:28 pm, Narendra Sharma
> wrote:
>
> No snapshots.
>
> I restarted the node and now the Load in ring is in sync with the disk
ill link to sstables which will cause them not to be deleted.
>
>
>
> -Arindam
>
>
>
> *From:* Narendra Sharma [mailto:narendra.sha...@gmail.com]
> *Sent:* Sunday, December 15, 2013 1:15 PM
> *To:* user@cassandra.apache.org
> *Subject:* Cassandra 1.1.6 - Disk usage and Loa
B
2. nodetool cfstats for the CF reported:
SSTable count: 16
Space used (live): 670524321067
Space used (total): 670524321067
3. 'ls -1 *Data* | wc -l' in the data folder for CF returned
16
4. 'du -ksh .' in the data folder for CF returned
625G
-Naren
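As a quick sanity check on the numbers above: cfstats reports bytes while `du -ksh` reports GiB, and converting shows the two figures actually agree (670524321067 bytes is roughly 624.5 GiB, matching the 625G from du). A trivial sketch of the conversion:

```java
public class DiskUsage {
    // cfstats "Space used" is in bytes; `du -ksh` prints GiB.
    static double bytesToGib(long bytes) {
        return bytes / (1024.0 * 1024.0 * 1024.0);
    }

    public static void main(String[] args) {
        System.out.printf("%.1f GiB%n", bytesToGib(670524321067L)); // ~624.5 GiB
    }
}
```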
I was successfully able to bootstrap the node. The issue was RF > 2. Thanks
again Robert.
On Wed, Oct 30, 2013 at 10:29 AM, Narendra Sharma wrote:
> Thanks Robert.
>
> I didn't realize that some of the keyspaces (not all and esp. the biggest
> one I was focusing on) had RF
ct 29, 2013 at 11:45 AM, Narendra Sharma <
> narendra.sha...@gmail.com> wrote:
>
>> We had a cluster of 4 nodes in AWS. The average load on each node was
>> approx 750GB. We added 4 new nodes. It is now more than 30 hours and the
>> node is still in JOINING mode.
>&
further analyze the
issue? I haven't restarted the Cassandra process. I am afraid the node will
start bootstrap again if I restart the node.
Thanks,
Naren
>
> --
> Asankha C. Perera
> AdroitLogic, http://adroitlogic.org
>
> http://esbmagic.blogspot.com
>
>
>
>
>
ould not even
> exist), what would happen when I delete a non existent column?
>
> Thanks,
>
> Drew
>
>
zing+Storage+of+Small+Objects
> Does this apply to Cassandra column names?
>
>
> -- Drew
>
ian wrote:
> So what are the common RIGHT solutions/tools for this?
>
>
> On Jan 6, 2012, at 2:46 PM, Narendra Sharma wrote:
>
> >>>It's very surprising that no one seems to have solved such a common use
> case.
> I would say people have solved it using RIGHT tool
> instance where the lock manager is down and two users are
> >>>> registering with the same email, can cause major issues.
> >>>
> >>> For most applications, if the lock managers is down, you don't
> >>> acquire the lock, so you don't enter the critical section. Rather
> >>> than allowing inconsistency, you become unavailable (at least to
> >>> writes that require a lock).
> >>>
> >>> -Bryce
> >>
>
>
RES SHR S %CPU %MEM TIME+
> COMMAND
>
> 2549 cassy 21 0 156g *15g 11g* S 66.9 65.5 338:02.72 java
>
>
> Thank you in advance,
>
>
> Daning
>
k with 0.7.1
>
> Thanks and Regards
> Ravi
>
>
>
r, columns are not being inserted.
> But, when tried from command line client, it worked correctly.
>
> Any pointer on this would be of great use
>
> Thanks in advance,
>
> Regards,
> Anuya
>
--
Narendra Sharma
Solution Architect
*http://www.persistentsys.com*
*http://narendrasharma.blogspot.com/*
>>> I started the first node; then, when I restarted the second node, I got an
>>> error that token "0" is already being used. Why am I getting this error?
>>>
>>> Second question: I already have Cassandra running in two different data
>>> centers. I want to add a new keyspace which uses NetworkTopologyStrategy;
>>> in light of the above errors, how can I accomplish this?
>>>
>>>
>>> Thanks
>>> Anurag
>>>
>>
>>
>
ching this problem
>
> Regards
>
> Sam
> *__
> Sam Ganesan Ph.D.
> Distinguished member, Technical Staff
> Motorola Mobility - On Demand Video
> 900 Chelmsford Street,
> Lowell, MA 01851
> tel:+1 978 614-3165 (changed)
> mob:+1 978 328-7132
> mailto: sam.ga
a.locator.RackUnawareStrategy
> replication_factor: 1
>
> Cassandra starts properly without giving any warnings/errors but does not
> create the keyspace offline
> which is defined above.
>
> Please suggest.
>
> Thanks
> Anurag
>
olution in Cassandra?
>
> Please see attachment for why this is important in some cases.
>
>
> Regards
>
> Milind
>
>
>
>
> I think about this often. LDAP servers like SunOne have pluggable
> conflict resolution. I could see the read-repair algorithm being
> pluggable.
>
>
>
ate or boot
> without the seed. But it is recommended to have multiple seeds in
> production system to maintain the ring.
>
>
>
> Thanks
> --
> maki
>
ttp://cassandra-user-incubator-apache-org.3065146.n2.nabble.com/Starting-the-Cassandra-server-from-Java-without-command-line-tp6273826p6273826.html
> Sent from the cassandra-u...@incubator.apache.org mailing list archive at
> Nabble.com.
>
ke corrective action. So
> try QUORUM under normal circumstances, if unavailable try ONE. My questions
> -
> Do you guys see any flaws with this approach?
> What happens when DC1 comes back up and we start reading/writing at QUORUM
> again? Will we read stale data in this case?
>
ax.management.remote.rmi.RMIConnectionImpl.access$200(RMIConnectionImpl.java:90)
>
> I have attached the code file.
>
> Cassandra is running on the port I am trying to connect to .
>
> Please Suggest
> Thanks
> Anurag
>
>
w
> ObjectName("org.apache.cassandra.db:type=ColumnFamilyStores,keyspace="+keyspace+",columnfamily="+columnfamily);
>
> // Create a dedicated proxy for the MXBean instead of
> // going directly through the MBean server connection
>
l with cfhistogram but I don't fully understand the
> output. Can someone please shed some light on it?
> Thanks
> Anurag
>
>
011-03-30 18:46:33,272 CompactionManager.java
> (line 406) insufficient space to compact all requested files SSTableReader(
>
> I am using 16G of Java heap space. Please let me know: should I consider
> this a sign of something I need to worry about?
> Thanks
> Anurag
>
ng that is
causing OOM.
-Naren
On Wed, Mar 30, 2011 at 4:45 PM, Anurag Gujral wrote:
>
> I am using 16G of heap space; how much more should I increase it?
> Please suggest.
>
> Thanks
> Anurag
>
> On Wed, Mar 30, 2011 at 11:43 AM, Narendra Sharma <
> narendra.sha...@gmail
ns:
>
> - compaction
> - repair
> - clean
>
> I understand that compaction consolidates the SSTables and physically
> performs deletes by taking into account the Tombstones. But what does clean
> and repair do then?
>
>
>
>
eap.
>
> There's a wiki page somewhere that describes the overall rule of thumb
> for heap sizing, but I can't find it right now.
>
> --
> / Peter Schuller
>
n later, and I don't want it to have
> to wait however long I've set the thrift socket timeout to be. The feedback
> I got initially was that I would run into problems with high load, and could
> run into delays when cassandra gets overwhelmed.
>
> Does this make sense or a
transaction can leave the data in an
> inconsistent state. Is there a way to figure out such inconsistencies ? Will
> Cassandra keep a log of failed batch_mutate() operations, or partially
> completed operations, that might require manual intervention when the client
> comes back up
Hope this makes it clear.
Thanks,
Naren
On Mon, Mar 28, 2011 at 2:15 PM, ruslan usifov wrote:
>
>
> 2011/3/29 Narendra Sharma
>
>> This is because the memtable threshold is not accurate to the last byte.
>> The threshold basically accounts for the column name, value and timestamp
Hope you find the following useful. It uses raw Thrift. In case you find
difficulty in building and/or running the code, please reply back.
private Cassandra.Client createClient(String host, int port)
        throws TTransportException {
    TTransport framedTransport = new TFramedTransport(new TSocket(host, port));
    TProtocol framedProtocol = new TBinaryProtocol(framedTransport);
    framedTransport.open();
    return new Cassandra.Client(framedProtocol);
}
Cassandra 0.7.4
Column names in my CF are of type byte[] but I want to order columns by
timestamp. What is the best way to achieve this? Does it make sense for
Cassandra to support ordering of columns by timestamp as an option for a
column family, irrespective of the column name type?
Thanks,
Naren
The logic to find the node is not complicated. You compute the MD5 hash of
the key. Create sorted list of tokens assigned to the nodes in the ring.
Find the first token greater than the hash. This is the first node. Next in
the list is the replica, which depends on the RF. Now this is simple becau
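The lookup described above can be sketched as follows. This is a simplified model, not Cassandra's implementation: the node names are made up, and the real RandomPartitioner folds the MD5 digest into a 0..2^127 token range rather than using the raw hash:

```java
import java.math.BigInteger;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.Map;
import java.util.TreeMap;

public class RingLocator {
    // token -> node; the TreeMap stands in for the "sorted list of tokens".
    private final TreeMap<BigInteger, String> ring = new TreeMap<>();

    void addNode(BigInteger token, String node) {
        ring.put(token, node);
    }

    // MD5 hash of the key, interpreted as a non-negative integer.
    static BigInteger token(String key) {
        try {
            byte[] digest = MessageDigest.getInstance("MD5")
                    .digest(key.getBytes(StandardCharsets.UTF_8));
            return new BigInteger(1, digest);
        } catch (NoSuchAlgorithmException e) {
            throw new AssertionError("MD5 is always available", e);
        }
    }

    // The first node whose token is greater than the hash owns the key;
    // wrap around to the lowest token when no greater one exists.
    String locate(String key) {
        Map.Entry<BigInteger, String> e = ring.higherEntry(token(key));
        return (e != null ? e : ring.firstEntry()).getValue();
    }
}
```

Replicas then follow from the next entries in the sorted map, as the fragment says, depending on the RF.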
uslan usifov wrote:
>
>
> 2011/3/23 Narendra Sharma
>
>> I understand that. The overhead could be as high as 10x of memtable data
>> size. So overall the overhead for 16CF collectively in your case could be
>> 300*10 = 3G.
>>
>>
>> And how about G1 GC
I understand that. The overhead could be as high as 10x of memtable data
size. So overall the overhead for 16CF collectively in your case could be
300*10 = 3G.
Thanks,
Naren
On Wed, Mar 23, 2011 at 11:18 AM, ruslan usifov wrote:
>
>
> 2011/3/23 Narendra Sharma
>
>> I
I think it is due to fragmentation in the old gen, due to which the survivor
area cannot be moved to the old gen. A 300MB memtable data size looks high for
a 3G heap. I learned that the in-memory overhead of a memtable can be as high
as 10x of the memtable data size. So either increase the heap or reduce the
me
take
lot of time.
Check if it is due to some JVM bug.
http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=6477891
-Naren
On Thu, Mar 17, 2011 at 9:47 AM, ruslan usifov wrote:
>
>
> 2011/3/17 Narendra Sharma
>
>> What heap size are you running with? and Which version of C
What heap size are you running with? and Which version of Cassandra?
Thanks,
Naren
On Thu, Mar 17, 2011 at 3:45 AM, ruslan usifov wrote:
> Hello
>
> Some times i have very long GC pauses:
>
>
> Total time for which application threads were stopped: 0.0303150 seconds
> 2011-03-17T13:19:56.476+030
Is this new install or upgrade?
Thanks,
Naren
On Wed, Mar 16, 2011 at 11:15 PM, Anurag Gujral wrote:
> I am getting exception when starting cassandra 0.7.3
>
> ERROR 01:10:48,321 Exception encountered during startup.
> java.lang.NegativeArraySizeException
> at
> org.apache.cassandra.db.Colum
libcassandra isn't very active. Since we already have an object pool library,
we went with raw Thrift in C++ instead of using any other library.
Thanks,
Naren
On Wed, Mar 16, 2011 at 10:03 PM, Primal Wijesekera <
primalwijesek...@yahoo.com> wrote:
> You could try this,
>
> https://github.com/
it wrong. the output shows a
> nice fancy column called "Owns" but i've only ever seen the percentage
> ... the amount of data or "load" is even ... doh. thanks for the
> reply. cheers
> -sd
>
> On Mon, Mar 14, 2011 at 10:47 PM, Narendra Sharma
>
On the same page there is a section on Load Balance that talks about a python
script to compute tokens. I believe your question is more about assigning
new tokens, not computing them.
1. "nodetool loadbalance" will result in recomputation of tokens. It will
pick tokens based on the load and not t
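For reference, the usual even-spacing formula such token scripts implement for the RandomPartitioner is token_i = i * 2^127 / N; the sketch below is an assumption about what the wiki script computes, not a copy of it:

```java
import java.math.BigInteger;
import java.util.ArrayList;
import java.util.List;

public class TokenGenerator {
    // Evenly spaced initial tokens across the RandomPartitioner's
    // 0..2^127 range: token_i = i * 2^127 / nodeCount.
    static List<BigInteger> evenTokens(int nodeCount) {
        BigInteger range = BigInteger.ONE.shiftLeft(127);
        List<BigInteger> tokens = new ArrayList<>();
        for (int i = 0; i < nodeCount; i++) {
            tokens.add(range.multiply(BigInteger.valueOf(i))
                            .divide(BigInteger.valueOf(nodeCount)));
        }
        return tokens;
    }
}
```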
Some time back I looked at the code to find that out. Following is the
result. There will be some additional overhead for the internal data
structures of ConcurrentLinkedHashMap.
* (<8 bytes for position i.e. value> + +
<16 bytes for token (RP)> + <8 byte reference for DecoratedKey> + <8 bytes
for descriptor ref
Multiple writes for the same key and column will result in overwriting of the
column in a memtable. Basically, multiple updates for the same (key, column)
are reconciled based on the column's timestamp. This happens per memtable. So
if a memtable is flushed to an sstable, this rule will be valid for the next
mem
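A minimal sketch of the overwrite rule just described. The map layout and string key encoding are made up for illustration; the real memtable is a concurrent sorted structure:

```java
import java.util.HashMap;
import java.util.Map;

public class MemtableSketch {
    // Value plus write timestamp; a stand-in for a real column.
    record Column(String value, long timestamp) {}

    // (row key, column name) -> column; the ':'-joined string key is
    // illustrative, not how Cassandra actually stores it.
    private final Map<String, Column> columns = new HashMap<>();

    // Repeated writes to the same (key, column) overwrite in place: the
    // column with the higher timestamp wins, per the rule described above.
    void write(String rowKey, String columnName, String value, long timestamp) {
        String k = rowKey + ":" + columnName;
        Column existing = columns.get(k);
        if (existing == null || timestamp >= existing.timestamp()) {
            columns.put(k, new Column(value, timestamp));
        }
    }

    Column read(String rowKey, String columnName) {
        return columns.get(rowKey + ":" + columnName);
    }
}
```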
I have been through tuning for GC and OOM recently. If you can provide the
cassandra.yaml, I can help. Mostly I had to play with memtable thresholds.
Thanks,
Naren
On Fri, Mar 4, 2011 at 12:43 PM, Mark wrote:
> We have 7 column families and we are not using the default key cache
> (20).
>
>
I am unable to enable/disable HH via JMX (JConsole).
Even though the load is on and read/writes happening, I don't see
"operations" component on Jconsole. To clarify further, I see only
Jconsole->MBeans->org.apache.cassandra.db.StorageProxy.Attributes. I don't
see Jconsole->MBeans->org.apache.cass
1. Why 24GB of heap? Do you need such a high heap? A bigger heap can lead to
longer GC cycles, but 15 min looks too long.
2. Do you have ROW cache enabled?
3. How many column families do you have?
4. Enable GC logs and monitor what GC is doing to get idea of why it is
taking so long. You can add following
You are missing the point. The coordinator node that is handling the request
won't wait for all the nodes to return their copy/digest of the data. It just
waits for Q (RF/2+1) nodes to return. This is the reason I explained two
possible scenarios.
Further, on what basis will Cassandra know that the dat
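The quorum size mentioned above is just integer arithmetic; a one-line sketch:

```java
public class QuorumMath {
    // Q = RF/2 + 1 replicas (integer division): the coordinator waits for
    // this many responses at QUORUM, not for all RF of them.
    static int quorum(int replicationFactor) {
        return replicationFactor / 2 + 1;
    }
}
```

So with RF=3 the coordinator returns after 2 responses, and the remaining replica may still hold an older column until read repair fixes it.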
ange will
> be discarded via read repair
>
> On Wed, Feb 23, 2011 at 6:47 PM, Narendra Sharma <
> narendra.sha...@gmail.com> wrote:
>
>> Remember the simple rule. Column with highest timestamp is the one that
>> will be considered correct EVENTUALLY. So consider follow
Remember the simple rule. Column with highest timestamp is the one that will
be considered correct EVENTUALLY. So consider following case:
Cluster size = 3 (say node1, node2 and node3), RF = 3, Read/Write CL =
QUORUM
a. QUORUM in this case requires 2 nodes. Write failed with successful write
to on
Today it is not possible to change the comparators (compare_with and
compare_subcolumns_with). I went through the discussion on thread
http://comments.gmane.org/gmane.comp.db.cassandra.user/12466.
Does it make sense to at least allow a one-way change, i.e. from specific types
to generic type? For eg c
Version: Cassandra 0.7.1 (build from trunk)
Setup:
- Cluster of 2 nodes (Say A and B)
- HH enabled
- Using the default Keyspace definition in cassandra.yaml
- Using SuperCounter1 CF
Steps:
- Started the two nodes, loaded schema using nodetool
- Executed counter update and read operations on A wit
Version: Cassandra 0.7.1 (build from trunk)
Setup:
- Cluster of 2 nodes (Say A and B)
- HH enabled
- Using the default Keyspace definition in cassandra.yaml
- Using SuperCounter1 CF
Client:
- Using CL of ONE
I started the two Cassandra nodes, created schema and then shutdown one of
the instances
As per config:
# this defines the maximum amount of time a dead host will have hints
# generated. After it has been dead this long, hints will be dropped.
max_hint_window_in_ms: 3600000 # one hour
Will this result in deletion of existing hints (from memory and disk)? Or will
it just stop creating ne
Version: Cassandra 0.7.1
I am seeing the following exception at regular intervals (very frequently) in
Cassandra. I did a clean install of Cassandra 0.7.1 and deleted all old
data. Any idea what could be the cause? The stack is the same for all the
occurrences.
Thanks,
Naren
ERROR [ReadStage:11232] 2011-
e do
> still have minor compactions turned on.
>
>
> On Thu, Jan 27, 2011 at 12:56 PM, Narendra Sharma <
> narendra.sha...@gmail.com> wrote:
>
>> Thanks Anand. Few questions:
>> - What is the size of nodes (in terms for data)?
>> - How long have you been running?
>
There is some latency that needs to be sorted out, but overall I
> am positive. This is with 6.6, am in the process of moving it to 0.7.
>
> On Wed, Jan 26, 2011 at 11:37 PM, Narendra Sharma <
> narendra.sha...@gmail.com> wrote:
>
>> Anyone using Cassandra for storing large
Is anyone using Cassandra for storing a large number (millions) of large
(mostly immutable) objects (200KB-5MB each)? I would like to understand the
experience in general, considering that Cassandra is not considered a good
fit for large objects. https://issues.apache.org/jira/browse/CASSANDRA-265
Yes. See this http://wiki.apache.org/cassandra/FAQ#range_ghosts
-Naren
On Tue, Jan 25, 2011 at 2:59 PM, Nick Santini wrote:
> Hi,
> I'm trying a test scenario where I create 100 rows in a CF, then
> use get_range_slices to get all the rows, and I get 100 rows, so far so good
> then after the tes
With raw thrift APIs:
1. Fetch column from supercolumn:
ColumnPath cp = new ColumnPath("ColumnFamily");
cp.setSuper_column("SuperColumnName");
cp.setColumn("ColumnName");
ColumnOrSuperColumn resp = client.get(getByteBuffer("RowKey"), cp,
ConsistencyLevel.ONE);
Column c = resp.getColumn();
2. Add
The schema is not loaded from cassandra.yaml by default. You need to either
load it through jconsole or define it through CLI. Please read following
page for details:
http://wiki.apache.org/cassandra/LiveSchemaUpdates
Also look for "Where are my keyspaces" on following page:
http://wiki.apache.org
Hi,
We are working on defining the ring topology for our cluster. One of the
plans under discussion is to have a RF=2 and perform read/write operations
with CL=ONE. I know this could be an issue since it doesn't satisfy R+W >
RF. This will work if we can always force the clients to go to the first
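The R + W > RF condition referred to in this plan can be checked with a trivial helper (a sketch for illustration, not a real Cassandra API):

```java
public class ConsistencyCheck {
    // A read overlaps the latest write (sees at least one up-to-date
    // replica) only when R + W > RF.
    static boolean satisfiesOverlap(int readReplicas, int writeReplicas, int rf) {
        return readReplicas + writeReplicas > rf;
    }
}
```

With RF=2 and CL=ONE for both reads and writes, 1 + 1 is not greater than 2, which is exactly why the plan above doesn't guarantee reading your own writes.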
> I am using Cassandra 0.7.0-rc2.
>
> I will try this DB client. Thanks.
>
>
> On Tue, Dec 28, 2010 at 10:41 AM, Narendra Sharma <
> narendra.sha...@gmail.com> wrote:
>
>> Please do mention the Cassandra version you are using in all your queries.
>> It helps.
>
#1 - No limit
#2 - If you are referring to secondary indexes then NO. Also see
https://issues.apache.org/jira/browse/CASSANDRA-598
#3 - No limit
Following are key limitations:
1. All data for a single row must fit (on disk) on a single machine in the
cluster
2. A single column value may not be lar
Please do mention the Cassandra version you are using in all your queries. It
helps.
Try https://github.com/driftx/chiton
Thanks,
Naren
On Mon, Dec 27, 2010 at 7:37 PM, Roshan Dawrani wrote:
> Hi,
>
> Is there a GUI client for a Cassandra database for a Windows based setup?
>
> I tried the one av
The comment in the cassandra.yaml says:
"specifies the probability with which read repairs should be invoked on *
non-quorum* reads"
Does this mean RR chance is applicable only for non-quorum reads?
Another question on same topic:
Will RR use one of the node in the other datacenter as coordinat
;t be sure of a query that was based on order of the rows in the
> column family, so I didn't explore that much.
>
>
>
> On Mon, Dec 27, 2010 at 9:55 PM, Narendra Sharma <
> narendra.sha...@gmail.com> wrote:
>
>> Did you look at get_range_slices? Once you get t
Did you look at get_range_slices? Once you get the columns from super
column, pick the first and last to form the range and fire the
get_range_slice.
Thanks,
-Naren
On Mon, Dec 27, 2010 at 6:12 AM, Roshan Dawrani wrote:
> This silly question is retrieved back with apology. There couldn't be
> an
Hi,
Is it possible to specify a different sort order for each SuperColumn? My CF
has 3 different SuperColumns. They contain different types of data. Is it
possible to specify a different sort order, e.g. TimeUUIDType for
SuperColumn1 and AsciiType for SuperColumn2? If yes, can someone share
the syn
nger to compact that row.
>
> Hope that helps.
> Aaron
>
>
> On 04 Dec, 2010,at 09:23 AM, Narendra Sharma
> wrote:
>
> What is the impact (performance and I/O) of row size (in bytes) on
> compaction?
> What is the impact (performance and I/O) of number of super col
What is the impact (performance and I/O) of row size (in bytes) on
compaction?
What is the impact (performance and I/O) of number of super columns and
columns on compaction?
Does anyone has any details and data to share?
Thanks,
Naren
s for super column families
>> is not supported yet, so you have to implement your own
>>
>> easy way: keep another column family where the row key is the value of
>> your field and the columns are the row keys of your super column family
>>
>> (inverted index)
>
Hi,
My schema has a row that has thousands of Super Columns. The size of each
super column is around 500B (20 columns). I need to query 1 SuperColumn
based on the value of one of its columns. Something like
SELECT SuperColumn FROM Row WHERE SuperColumn.column="value"
Questions:
1. Is this possible wi
Are there any C++ clients out there similar to Hector (in terms of features)
for Cassandra? I am looking for C++ Client for Cassandra 0.7.
Thanks,
Naren
On Mon, Nov 29, 2010 at 9:32 PM, Jonathan Ellis wrote:
> On Mon, Nov 29, 2010 at 11:26 PM, Narendra Sharma
> wrote:
> > Thanks Jonathan.
> >
> > Couple of more questions:
> > 1. Is there any technical limit on the number of secondary indexes that
> can
Nov 29, 2010 at 7:59 PM, Narendra Sharma
> wrote:
> > Is there any documentation available on what is possible with secondary
> > indexes?
>
> Not yet.
>
> > - Is it possible to define secondary index on columns within a
> SuperColumn?
>
> No.
>
> >
Is there any documentation available on what is possible with secondary
indexes? For eg
- Is it possible to define secondary index on columns within a SuperColumn?
- If I define a secondary index at run time, does Cassandra index all the
existing data, or is only new data indexed?
Some documentatio
Hi,
I am using Cassandra 0.7 beta3 and Hector.
I create a mutation map. The mutation involves adding a few columns for a
given row. After that I use the batch_mutate API to send the changes to
Cassandra.
Question:
If there are multiple column writes on same row in a mutation_map, does
Cassandra show (
the performance of those two predicates is equivalent, assuming a row
> "start key" actually exists.
>
> On Thu, Oct 14, 2010 at 1:09 PM, Narendra Sharma
> wrote:
> > Hi,
> >
> > I am using Cassandra 0.6.5. Our application uses the get_range_slices to
>
Hi,
I am using Cassandra 0.6.5. Our application uses the get_range_slices to get
rows in the given range.
Could someone please explain how get_range_slices works internally, especially
when a count parameter (value = 1) is also specified in the SlicePredicate?
Does Cassandra first search all in the given
Thanks Oleg!
Could you please share the patch? I have built Cassandra from source before.
I can definitely give it a try.
-Naren
On Wed, Oct 6, 2010 at 3:55 AM, Oleg Anastasyev wrote:
> > Is it possible to retain the commit logs?
>
> In off-the-shelf cassandra 0.6.5 this is not possible, AFAIK.
Has anyone used sstable2json on 0.6.5 and noticed the issue I described in
my email below? This doesn't look like a data corruption issue as sstablekeys
shows the keys.
Thanks,
Naren
On Tue, Oct 5, 2010 at 8:09 PM, Narendra Sharma
wrote:
> 0.6.5
>
> -Naren
>
>
> On Tue,
Cassandra Version: 0.6.5
I am running a long duration test and I need to keep the commit logs to see
the sequence of operations to debug a few application issues. Is it possible
to retain the commit logs? Apart from increasing the value of
CommitLogRotationThresholdInMB
what is the other way to achie
0.6.5
-Naren
On Tue, Oct 5, 2010 at 6:56 PM, Jonathan Ellis wrote:
> Version?
>
> On Tue, Oct 5, 2010 at 7:28 PM, Narendra Sharma
> wrote:
> > Hi,
> >
> > I am using sstable2json to extract row data for debugging some
> application
> > issue. I first r
Hi,
I am using sstable2json to extract row data for debugging some application
issue. I first ran sstablekeys to find the list of keys in the sstable. Then
I use the key to fetch row from sstable. The sstable is from Lucandra
deployment. I get following.
-bash-3.2$ ./sstablekeys Documents-37-Data