I need to keep the data of some entities in a single CF but split into two
rows for each entity. One row contains overview information for the
entity and the other contains detailed information about it. I want to
keep both rows in a single CF so they may be retrieved in a single
query whe
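For illustration, fetching both rows of one entity in a single round trip
could look roughly like this with Hector (the CF name "Entities" and the
"42:overview"/"42:detail" key scheme are assumptions made for the sketch,
not taken from the post):

    import me.prettyprint.cassandra.serializers.StringSerializer;
    import me.prettyprint.hector.api.Cluster;
    import me.prettyprint.hector.api.Keyspace;
    import me.prettyprint.hector.api.beans.Rows;
    import me.prettyprint.hector.api.factory.HFactory;
    import me.prettyprint.hector.api.query.MultigetSliceQuery;

    public class TwoRowFetch {
        public static void main(String[] args) {
            StringSerializer ss = StringSerializer.get();
            Cluster cluster = HFactory.getOrCreateCluster("Test Cluster", "localhost:9160");
            Keyspace ks = HFactory.createKeyspace("MyKeyspace", cluster);

            // Both rows of entity 42 (overview + detail) in one multiget call.
            MultigetSliceQuery<String, String, String> q =
                    HFactory.createMultigetSliceQuery(ks, ss, ss, ss);
            q.setColumnFamily("Entities");
            q.setKeys("42:overview", "42:detail");
            q.setRange("", "", false, 1000);   // all columns, up to 1000 per row

            Rows<String, String, String> rows = q.execute().get();
            System.out.println("fetched " + rows.getCount() + " rows");
        }
    }

A multiget like this is one request to the coordinator, although internally
each row key is still resolved against its own replicas.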
This is how I tested it:
1) load the cache with 1,500,000 entries
2) execute a full GC
3) measure heap size (using VisualVM)
4) flush the row cache over the CLI
5) execute a full GC
6) measure heap usage again
The difference between 6) and 3) is the heap used by the cache.
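For steps 3) and 6), a minimal in-process approximation of what VisualVM
reports could look like this (only a sketch: System.gc() is merely a request,
so a profiler remains the more reliable way to read the live heap):

    public class HeapUsage {
        public static void main(String[] args) throws InterruptedException {
            // Ask for a full collection and give the collector a moment to finish.
            System.gc();
            Thread.sleep(2000);

            Runtime rt = Runtime.getRuntime();
            long usedBytes = rt.totalMemory() - rt.freeMemory();
            System.out.printf("live heap: %.1f MB%n", usedBytes / (1024.0 * 1024.0));
        }
    }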
On Fri, Oct 28, 2011 at 3:26
On 10/28/2011 03:21 PM, Peter Schuller wrote:
During tests I've done mass mutations using an import of data. Using
CL.QUORUM the import takes around 3 times longer than using CL.ONE on a
cluster with 3 nodes.
Is the test sequential or multi-threaded? A factor 3 performance
difference seems like
> Is it possible that a single row (8 columns) can allocate about 2 KB of heap?
It sounds a bit much, though not extremely so (depending on how much
overhead there is per-column relative to per-row). Are you definitely
looking at the live size of the heap (for example, trigger a full GC
and look at res
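For scale: if that ~2 KB per row held, the 1,500,000-entry cache from the
test above would come to roughly 1,500,000 × 2 KiB ≈ 2.9 GiB of live heap
(a back-of-envelope figure only).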
You can do a column slice for columns between "image/" (the first
ASCII string that starts with that sub-string) and "image/~" (the last
printable ASCII string that starts with that sub-string).
On Thu, Oct 27, 2011 at 21:10, Jean-Nicolas Boulay Desjardins
wrote:
> Normally in SQL I would use "%"
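With Hector, that slice could be expressed roughly as follows (the CF name
"Files" and the 10000-column cap are made up for the sketch):

    import java.util.List;

    import me.prettyprint.cassandra.serializers.StringSerializer;
    import me.prettyprint.hector.api.Keyspace;
    import me.prettyprint.hector.api.beans.ColumnSlice;
    import me.prettyprint.hector.api.beans.HColumn;
    import me.prettyprint.hector.api.factory.HFactory;
    import me.prettyprint.hector.api.query.SliceQuery;

    public class PrefixSlice {
        // Returns the columns of rowKey whose names start with "image/".
        static List<HColumn<String, String>> imageColumns(Keyspace ks, String rowKey) {
            StringSerializer ss = StringSerializer.get();
            SliceQuery<String, String, String> q = HFactory.createSliceQuery(ks, ss, ss, ss);
            q.setColumnFamily("Files");
            q.setKey(rowKey);
            // "image/" sorts before every name with that prefix, and "image/~"
            // after the last printable-ASCII name with that prefix.
            q.setRange("image/", "image/~", false, 10000);
            ColumnSlice<String, String> slice = q.execute().get();
            return slice.getColumns();
        }
    }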
The bug is still there; I opened
https://issues.apache.org/jira/browse/CASSANDRA-3415 with steps showing how
to reproduce it.
> During tests I've done mass mutations using an import of data. Using
> CL.QUORUM the import takes around 3 times longer than using CL.ONE on a
> cluster with 3 nodes.
Is the test sequential or multi-threaded? A factor 3 performance
difference seems like a lot in terms of total throughput; but it
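If the import issues one mutation at a time, each QUORUM write waits for a
quorum of replicas to acknowledge before the next request starts, so
per-request latency dominates the wall-clock time; a multi-threaded import
gives a fairer picture of throughput. A rough Hector sketch of such a
concurrent import (the CF name "Imported", the column name and the key
scheme are placeholders):

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;

    import me.prettyprint.cassandra.serializers.StringSerializer;
    import me.prettyprint.hector.api.Keyspace;
    import me.prettyprint.hector.api.factory.HFactory;
    import me.prettyprint.hector.api.mutation.Mutator;

    public class ConcurrentImport {
        static void runImport(final Keyspace ks, int threads, final int rowsPerThread)
                throws InterruptedException {
            ExecutorService pool = Executors.newFixedThreadPool(threads);
            for (int t = 0; t < threads; t++) {
                final int offset = t * rowsPerThread;
                pool.submit(new Runnable() {
                    public void run() {
                        Mutator<String> m = HFactory.createMutator(ks, StringSerializer.get());
                        for (int i = 0; i < rowsPerThread; i++) {
                            // One row per request; the keyspace's consistency level applies.
                            m.insert("row-" + (offset + i), "Imported",
                                    HFactory.createStringColumn("payload", "value-" + i));
                        }
                    }
                });
            }
            pool.shutdown();
            pool.awaitTermination(1, TimeUnit.HOURS);
        }
    }

With enough writer threads the extra per-request latency of QUORUM overlaps
across requests, so the throughput gap between ONE and QUORUM usually narrows.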
Thank you for the reply.
On 10/28/2011 11:45 AM, Peter Schuller wrote:
I've patched the classes WriteResponseHandler and ReadCallback to make sure
that the local node has returned before sending the condition signal. Can
anyone see any drawbacks with this approach? I realize this will only work
On Thu, Oct 27, 2011 at 10:06 PM, wrote:
> Why do these two give different results?
>
> ./nodetool -h 172.xx.xxx.xx getcompactionthreshold Timeseries TickData
>
> Current compaction thresholds for Timeseries/TickData:
> min = 1, max = 2147483647
>
> [default@Timeseries] show sch
Hi all,
I've tested the row cache and found that it requires a large amount of heap -
I would like to verify this theory.
This is my test key space:
{
  TestCF: {
    row_key_1: {
      { clientKey: "MyTestCluientKey" },
      { tokenSecret: "kd94hf93k423kf44" },
      {
> Thank you for your explanations. Even with a RF=1 and one node down I don't
> understand why I can't at least read the data in the nodes that are still
> up?
You will be able to read data for row keys that do not live on the
node that is down. But for any request to a row which is on the node
t
> I've patched the classes WriteResponseHandler and ReadCallback to make sure
> that the local node has returned before sending the condition signal. Can
> anyone see any drawbacks with this approach? I realize this will only work
> as long as the replication factor is the same as the number of nod
Hi Peter,
Thank you for your explanations. Even with a RF=1 and one node down I don't
understand why I can't at least read the data in the nodes that are still
up? Also, why can't I at least perform writes with consistency level ANY and
failover policy ON_FAIL_TRY_ALL_AVAILABLE...shouldn't the nod
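For reference, this is roughly how a Hector Keyspace can be configured with
write consistency ANY and the ON_FAIL_TRY_ALL_AVAILABLE failover policy
(the keyspace name is a placeholder, and the exact HFactory overload taking
a FailoverPolicy varies between Hector versions):

    import me.prettyprint.cassandra.model.ConfigurableConsistencyLevel;
    import me.prettyprint.cassandra.service.FailoverPolicy;
    import me.prettyprint.hector.api.Cluster;
    import me.prettyprint.hector.api.HConsistencyLevel;
    import me.prettyprint.hector.api.Keyspace;
    import me.prettyprint.hector.api.factory.HFactory;

    public class AnyWriteKeyspace {
        static Keyspace openKeyspace(Cluster cluster) {
            // Writes at ANY (a hint on any live node is enough), reads at ONE
            // (ANY is not a valid read consistency level).
            ConfigurableConsistencyLevel cl = new ConfigurableConsistencyLevel();
            cl.setDefaultWriteConsistencyLevel(HConsistencyLevel.ANY);
            cl.setDefaultReadConsistencyLevel(HConsistencyLevel.ONE);

            return HFactory.createKeyspace("MyKeyspace", cluster, cl,
                    FailoverPolicy.ON_FAIL_TRY_ALL_AVAILABLE);
        }
    }

Note that ANY only applies to writes; a read still needs at least one live
replica that owns the requested row.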
Hi,
We are using (or will use in later versions) Cassandra as the backend for a
Java-based CMS called SiteVision. Cassandra runs in the same JVM as the
servlet container and is started by the CMS webapp. Each cluster node is
a stand-alone installation of the CMS. Our production environments
inclu
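For anyone curious what the embedding looks like, the Cassandra source tree
ships an org.apache.cassandra.service.EmbeddedCassandraService that can be
started from application code once cassandra.yaml is reachable; a rough
sketch (the config path is made up, and the exact API has shifted between
versions):

    import java.io.IOException;

    import org.apache.cassandra.service.EmbeddedCassandraService;

    public class EmbeddedStarter {
        public static void main(String[] args) throws IOException {
            // Point Cassandra at the node's config; a webapp would do this at startup.
            System.setProperty("cassandra.config", "file:///etc/cassandra/cassandra.yaml");

            EmbeddedCassandraService cassandra = new EmbeddedCassandraService();
            cassandra.start();   // boots Cassandra inside the current JVM
        }
    }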
> If you want to survive node failures, use an RF above 1. And then make
> sure to use an appropriate consistency level.
To elaborate a bit: RF, or replication factor, is the *total* number
of copies of any piece of data in the cluster. So with only one copy,
the data will not be available when a
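As a concrete illustration, a keyspace whose rows are stored on three nodes
can be created programmatically like this (a hedged Hector sketch; the
keyspace and CF names are made up):

    import java.util.Arrays;

    import me.prettyprint.cassandra.service.ThriftKsDef;
    import me.prettyprint.hector.api.Cluster;
    import me.prettyprint.hector.api.ddl.ColumnFamilyDefinition;
    import me.prettyprint.hector.api.ddl.KeyspaceDefinition;
    import me.prettyprint.hector.api.factory.HFactory;

    public class CreateReplicatedKeyspace {
        static void create(Cluster cluster) {
            ColumnFamilyDefinition cf =
                    HFactory.createColumnFamilyDefinition("MyKeyspace", "Entities");
            // SimpleStrategy with replication_factor = 3: every row is stored on 3 nodes.
            KeyspaceDefinition ks = HFactory.createKeyspaceDefinition(
                    "MyKeyspace", ThriftKsDef.DEF_STRATEGY_CLASS, 3, Arrays.asList(cf));
            cluster.addKeyspace(ks);
        }
    }

With RF=3, a QUORUM read or write can still succeed with one replica down.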
> took a node down to see how it behaves. All of a sudden I couldn't write or
[snip]
> me.prettyprint.hector.api.exceptions.HUnavailableException: : May not be
[snip]
> Default replication factor = 1
So you have an RF=1 cluster (only one copy of data) and you bring a
node down. This fundamenta
Hi guys,
It's interesting to see this thread. I recently discovered a similar
problem on my 3 node Cassandra 0.8.5 cluster. It was working fine, then I
took a node down to see how it behaves. All of a sudden I couldn't write or
read because of this exception being thrown:
Exception in thread "mai