7256070484
> 2.2.2.3  us-east  1e  Up  Normal  93.22 GB  100.00%  113427455640312821154458202477256070485
> 1.1.1.6  eu-west  1c  Up  Normal  75.39 GB   16.67%  141784319550391026443072753096570088105
>
> What am I missing here?
>
> TIA,
> Katriel
>
--
Derek Williams
es. Theoretically a driver with multiplexing *should be* faster
> in *some* cases. However, I have never seen any evidence, anecdotal or
> otherwise, to back up this theory.
>
> In fact
> https://github.com/pchalamet/cassandra-sharp/pull/24
>
>
> On Sun, May 5, 2013 at 4:09 PM
> >> Hello,
> >> I want to know which Cassandra client is better,
> >> and what are their advantages and disadvantages?
> >>
> >> Thanks
>
--
Derek Williams
> not extraordinarily high. No GC messages are being output to the log.
>
> These warnings do seem to be occurring during compactions of column
> families using LCS with wide rows, but I'm not sure there is a direct
> correlation.
>
> We are running Cassandra 1.1.9, with a maximum heap of 8G.
>
> Any advice?
> Thanks,
> -Mike
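One thing that would help rule GC in or out: the stock cassandra-env.sh in
1.1.x ships with GC logging options commented out. A sketch of enabling them
(the log path is a placeholder):

JVM_OPTS="$JVM_OPTS -Xloggc:/var/log/cassandra/gc.log"
JVM_OPTS="$JVM_OPTS -XX:+PrintGCDetails"
JVM_OPTS="$JVM_OPTS -XX:+PrintGCDateStamps"

If long pauses show up there around the same timestamps as the warnings,
it's GC; if not, compaction of the wide rows is the more likely suspect.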
--
Derek Williams
is in the Offset column; all the other
columns are the count for that bucket of the histogram. For example, in
write latency the 3, 7, and 19 refer to how many requests had that latency:
3 write requests took 17us, 7 requests took 20us, and 19 took 24us.
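For reference, the histogram comes from cfhistograms (keyspace and column
family names below are placeholders):

$> nodetool -h localhost cfhistograms MyKeyspace MyColumnFamily

Note the units differ per column: the latency columns are in microseconds,
row size is in bytes, and column count is a plain count.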
--
Derek Williams
the CF in Cassandra and the table in MySQL, and I
> find the processing time of MySQL is better than Cassandra's.
> So I wonder: what are the advantages of Cassandra compared to MySQL, and
> how can I improve the performance of Cassandra?
> Is this the right way to use Cassandra?
>
>
--
Derek Williams
Further information: in AZ1, when 143, 145, and 146 are up, all goes
> well. But when, say, 143 fails, the client receives a TIMEOUT failure,
> even though 145 and 146 are up.
>
>
> *From:* Derek Williams [mailto:de...@fyrie.net]
> *Sent:* Wednesday, March 2
] 2013-03-19 00:00:53,441 ReadCallback.java (line 79) Blockfor
> is 2; setting up requests to /xx.yy.zz.146,/xx.yy.zz.143
>
>
> The batch mutates are as expected (locally, two replicas, and hints to DC
> AZ2), but why the unexpected behavior for the get_slice requests? This is
> observed throughout the log.
>
> Thanks much
>
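A quick way to sanity-check which replicas own a given key is nodetool
getendpoints (keyspace, CF, and key below are placeholders):

$> nodetool -h localhost getendpoints MyKeyspace MyCF mykey

If 145 and 146 both show up for an affected key while 143 is down, replica
placement is fine, and the consistency level the client reads at is the
next thing to check.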
--
Derek Williams
to local DC only, but RF:3 is not acceptable for us.
>
> Can we somehow force Cassandra not to look up keys in the remote DC?
>
> Thanks for your answers!
>
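One thing that does reach across DCs even on a local read is global read
repair. A sketch (hypothetical CF name) via cassandra-cli that confines read
repair to the local DC:

update column family MyCF
  with read_repair_chance = 0.0
  and dclocal_read_repair_chance = 0.1;

This doesn't change where the read itself is routed (that's the consistency
level's job, e.g. LOCAL_QUORUM), but it stops the background repair reads
from touching the remote DC.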
--
Derek Williams
ed by 1000. I am afraid it will never catch up. We set
>
>
> This is going to be tricky to diagnose, sorry for asking silly
> questions...
>
>
> Do you have wide rows? Are you seeing logging about "Compacting wide
> rows"?
> Are you seeing GC activity logged or seeing C
act of Thrift
> # thread-per-client. (Best practice is for client connections to
> # be pooled anyway.) Only do so on Linux where it is known to be
> # supported.
> # u34 and greater need 180k
> JVM_OPTS="$JVM_OPTS -Xss180k"
>
> What value should I use? Java defaults
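If the JVM refuses to start with a "stack size specified is too small"
error, bumping -Xss in cassandra-env.sh is the usual fix. A sketch (the
exact minimum depends on your JVM build, so treat the number as a starting
point, not a recommendation):

JVM_OPTS="$JVM_OPTS -Xss228k"

The in-file comment already notes that u34 and later need at least 180k;
some newer JVMs want a bit more, hence the larger value here.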
dra and astyanax are you using?
>
> For now, we had to add all nodes to the seeds list instead so it
> distributes amongst all nodes.
>
> Thanks,
> Dean
>
--
Derek Williams
>
> From: Wei Zhu <wz1...@yahoo.com>
> Reply-To: "user@cassandra.apache.org" <user@cassandra.apache.org>,
> Wei Zhu <wz1...@yahoo.com>
> Date: Friday, January 18, 2013 12:10 PM
> To: Cassandra usergroup <user@cassandra.apache.org>
> Subject: Cassandra pending compaction tasks keeps increasing
>
>
> Hi,
> When I run nodetool compactionstats,
> I see the number of pending tasks keep going up steadily.
>
> I tried to increase the compaction throughput by using
>
> nodetool setcompactionthroughput
>
> I even tried the extreme of setting it to 0 to disable throttling.
>
> I checked iostat and we have SSDs for data; disk util is less than 5%,
> which means it's not I/O bound, and CPU is also less than 10%.
>
> We are using leveled compaction and are in the process of migrating data.
> We have 4500 writes per second and very few reads. We have about 70G of
> data now and will grow to 150G when the migration finishes. We only have
> one CF, and right now the number of SSTables is around 15000; write
> latency is still under 0.1ms.
>
> Is there anything to be concerned about? Or anything I can do to reduce
> the number of pending compactions?
>
> Thanks.
> -Wei
>
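For reference, the two commands involved (host is a placeholder):

$> nodetool -h localhost compactionstats
$> nodetool -h localhost setcompactionthroughput 0

Setting the throughput to 0 removes the throttle entirely, so if the pending
count still climbs after that, the bottleneck is something other than the
compaction throttle.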
--
Derek Williams
>> > Offset  SSTables  Write Latency  Read Latency  Row Size  Column Count
>> > ..            0             0             2         0             0
>> > 10            0             0             0         0          6261
>> > 12            0             0             2         0           117
>> > 14            0             0             8         0             0
>> > 17            0             3            69         0           255
>> > 20            0             7           163         0             0
>> > 24            0            19          1369         0             0
>>
>
--
Derek Williams
king on reproducing now with better notes this time.
>>
>> -Bryan
>>
>>
>>
>> On Thu, Jan 17, 2013 at 4:45 PM, Derek Williams wrote:
>>
>>> When you ran this test, is that the exact schema you used? I'm not
>>> seeing where you are setting
ata/data/metrics/request_summary/metrics-request_summary-he-386179-Data.db
>>>
>>>
>>>
>>> $> ls -alF
>>> /virtual/cassandra/data/data/metrics/request_summary/metrics-request_summary-he-386179-Data.db
>>>
>>>
>>> -rw-rw-r-- 1 sandra sandra 5252320 Jan 16 08:42
>>> /virtual/cassandra/data/data/metrics/request_summary/metrics-request_summary-he-386179-Data.db
>>>
>>>
>>>
>>> $> ./bin/sstable2json
>>> /virtual/cassandra/data/data/metrics/request_summary/metrics-request_summary-he-386179-Data.db
>>> -k $(echo -n 459fb460-5ace-11e2-9b92-11d67b6163b4 | hexdump -e '36/1 "%x"')
>>>
>>>
>>> {
>>>
>>> "34353966623436302d356163652d313165322d396239322d313164363762363136336234":
>>> [["app_name","50f21d3d",1357785277207001,"d"],
>>> ["client_ip","50f21d3d",1357785277207001,"d"],
>>> ["client_req_id","50f21d3d",1357785277207001,"d"],
>>> ["mysql_call_cnt","50f21d3d",1357785277207001,"d"],
>>> ["mysql_duration_us","50f21d3d",1357785277207001,"d"],
>>> ["mysql_failure_call_cnt","50f21d3d",1357785277207001,"d"],
>>> ["mysql_success_call_cnt","50f21d3d",1357785277207001,"d"],
>>> ["req_duration_us","50f21d3d",1357785277207001,"d"],
>>> ["req_finish_time_us","50f21d3d",1357785277207001,"d"],
>>> ["req_method","50f21d3d",1357785277207001,"d"],
>>> ["req_service","50f21d3d",1357785277207001,"d"],
>>> ["req_start_time_us","50f21d3d",1357785277207001,"d"],
>>> ["success","50f21d3d",1357785277207001,"d"]]
>>>
>>> }
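>>>
>>> (Aside: the -k flag expects the row key in hex; the hexdump pipeline
>>> above just converts the 36-character ASCII UUID to hex, which you can
>>> verify standalone:)
>>>
>>> $> echo -n 459fb460-5ace-11e2-9b92-11d67b6163b4 | hexdump -e '36/1 "%x"'
>>> 34353966623436302d356163652d313165322d396239322d313164363762363136336234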
>>>
>>>
>>> Decoding the column timestamps shows that the columns were written at
>>> "Thu, 10 Jan 2013 02:34:37 GMT" and that their TTL expired at "Sun, 13 Jan
>>> 2013 02:34:37 GMT". The date of the SSTable shows that it was generated on
>>> Jan 16, which is 3 days after all columns TTL-ed out.
>>>
>>>
>>> The schema shows that gc_grace is set to 0 since this data is
>>> write-once, read-seldom and is never updated or deleted.
>>>
>>>
>>> create column family request_summary
>>>
>>> with column_type = 'Standard'
>>>
>>> and comparator = 'UTF8Type'
>>>
>>> and default_validation_class = 'UTF8Type'
>>>
>>> and key_validation_class = 'UTF8Type'
>>>
>>> and read_repair_chance = 0.1
>>>
>>> and dclocal_read_repair_chance = 0.0
>>>
>>> and gc_grace = 0
>>>
>>> and min_compaction_threshold = 4
>>>
>>> and max_compaction_threshold = 32
>>>
>>> and replicate_on_write = true
>>>
>>> and compaction_strategy =
>>> 'org.apache.cassandra.db.compaction.LeveledCompactionStrategy'
>>>
>>> and caching = 'NONE'
>>>
>>> and bloom_filter_fp_chance = 1.0
>>>
>>> and compression_options = {'chunk_length_kb' : '64',
>>> 'sstable_compression' :
>>> 'org.apache.cassandra.io.compress.SnappyCompressor'};
>>>
>>>
>>> Thanks in advance for help in understanding why rows such as this are
>>> not removed!
>>>
>>> -Bryan
>>>
>>
>
--
Derek Williams
DataStax 1.2 docs.
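A minimal cqlsh sketch (table name is hypothetical): CQL3 writes blob
literals as hex strings prefixed with 0x.

cqlsh> CREATE TABLE test_blobs (id int PRIMARY KEY, data blob);
cqlsh> INSERT INTO test_blobs (id, data) VALUES (1, 0xcafebabe);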
>
> As for why I'd want such a thing, I just wanted to initialize some test
> values for a blob column with cqlsh.
>
> Thanks!
>
--
Derek Williams
i.e. page 1
> has the newest content, so this would complicate things when writing data
> and cause load if logic were included to reorganize page numbers, etc.
>
> Cheers
>
> Sam
> http://Newsarc.net
>
--
Derek Williams
>>
>>>> Aaron
>>>>
>>>> On 25 Mar 2011, at 09:30, Narendra Sharma wrote:
>>>>
>>>> > Cassandra 0.7.4
>>>> > Column names in my CF are of type byte[] but I want to order columns
>>>> > by timestamp. What is the best way to achieve this? Does it make sense
>>>> > for Cassandra to support ordering of columns by timestamp as an option
>>>> > for a column family, irrespective of the column name type?
>>>> >
>>>> > Thanks,
>>>> > Naren
>>>>
>>>>
>>>
>>
>>
>> --
>> Tyler Hobbs
>> DataStax <http://datastax.com/>
>>
>>
>
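For what it's worth, the usual pattern here is a time-based comparator; a
cassandra-cli sketch (CF name and validation class are hypothetical):

create column family events
  with comparator = 'TimeUUIDType'
  and default_validation_class = 'BytesType';

Column names are then TimeUUIDs, which sort chronologically, and the
original byte[] name can be stored in the column value instead.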
--
Derek Williams
't looked into it too much, but I
think forcing a major compaction when using leveled strategy doesn't have
the same effect as with size tiered.
--
Derek Williams
to be true. In your example only 1 node was
written to, when 2 were required to guarantee consistency. The intent to do
a quorum write is not the same as actually doing one.
--
Derek Williams
row tombstone, then it won't be deleted.
More info here: http://wiki.apache.org/cassandra/DistributedDeletes
--
Derek Williams
am I doing wrong?
>
>
I think that package just contains server classes. Everything you need
should be in org.apache.cassandra.thrift.
To use CQL3 I just use the client methods 'execute_cql_query',
'prepare_cql_query', and 'execute_prepared_cql_query', after setting the
CQL version to '3.0.0'.
--
Derek Williams
To clarify, I haven't tested it with compact storage, but it will only use
the first part of the primary key without compact storage.
On Jul 6, 2012 4:50 PM, "Derek Williams" wrote:
> Actually, my solution only makes a row for each unique value of the first
> part of the pr
able with
> even more rows; I wanted wide rows so the reads and writes would be more
> efficient.
>
>
> *From:* Derek Williams [mailto:de...@fyrie.net]
> *Sent:* Friday, July 06, 2012 4:58 PM
> *To:* user@cassandra.apache.org
> *Subject:* Re: Dynamic CF
earlier post on the subject:
http://www.mail-archive.com/user@cassandra.apache.org/msg23160.html
You can have a dynamic table with CQL3; you just can't have a table with a
mix of dynamic and non-dynamic columns.
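A sketch of a fully dynamic CQL3 table (names are hypothetical), which maps
onto a classic wide row:

cqlsh> CREATE TABLE timeline (
   ...   key text,
   ...   column1 text,
   ...   value text,
   ...   PRIMARY KEY (key, column1)
   ... ) WITH COMPACT STORAGE;

Each (column1, value) pair becomes one dynamic column in the underlying row.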
--
Derek Williams
the keys that match some filter
without iterating through all of them. There are some things you can do,
like use an ordered partitioner to store the rows in order, but that is not
recommended because you won't get even distribution throughout your cluster.
I think your best bet, though, is to just use the master_id for your row key,
and then use composite columns that begin with the client_id, as sketched
below.
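A CQL3 version of that layout (table and column names are hypothetical):

cqlsh> CREATE TABLE clients_by_master (
   ...   master_id bigint,
   ...   client_id bigint,
   ...   data text,
   ...   PRIMARY KEY (master_id, client_id)
   ... );
cqlsh> SELECT * FROM clients_by_master
   ...  WHERE master_id = 42 AND client_id >= 100 AND client_id < 200;

Everything for one master_id lives in a single row, and the client_id range
scan is a contiguous slice within it.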
--
Derek Williams
trips.
Also, I'm not trying to advocate this as being a better solution than just
using the old thrift interface, I'm just showing an example of how to do
it. I personally do prefer this way as it is more predictable, but of
course others will have a different opinion.
--
Derek Williams
ere was some amount of
null support.
My only other wishlist item is the ability to set the timestamp, ttl, and
consistency level using prepared statements.
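For the timestamp and TTL at least, CQL3 already lets you set these in the
statement itself; a cqlsh sketch (table and values are hypothetical):

cqlsh> INSERT INTO events (id, payload) VALUES (1, 'x')
   ...  USING TTL 86400 AND TIMESTAMP 1357785277207001;

Consistency level, though, still has to come from the client connection
rather than the statement.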
--
Derek Williams