Thanks for your response.
Can we reduce that value? Only 600 MB of memory is in use, but the process
occupies 3.2 GB. That is very wasteful.
On Mon, Jul 18, 2011 at 6:53 PM, Jonathan Ellis wrote:
> That means that the mmaped files are indeed resident at the moment.
>
> On Mon, Jul 18, 2011 at 1:51 AM, JKnigh
Dear all,
I want to keep only 100 columns per key: when I add a column to a key that
already has 100 columns, the oldest column (by sort order) should be deleted.
Does Cassandra have a setting for that?
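As far as I know, Cassandra of this era has no built-in per-key column cap, so this is usually enforced client-side: delete the oldest column whenever an insert would push the count past 100. A minimal sketch of that eviction logic, using an OrderedDict as a stand-in for a row rather than a real Thrift client:

```python
from collections import OrderedDict

MAX_COLUMNS = 100  # the per-key cap described above

def insert_column(row, name, value, max_columns=MAX_COLUMNS):
    """Insert a column into `row` (an OrderedDict standing in for a
    Cassandra row), evicting the oldest column once the cap is exceeded."""
    row[name] = value
    while len(row) > max_columns:
        row.popitem(last=False)  # drop the oldest (first-inserted) column

row = OrderedDict()
for i in range(150):
    insert_column(row, "col%03d" % i, "v")
# row now holds only the 100 most recently inserted columns
```

With a real cluster the same idea applies, but the read-then-delete must go through the client API and is not atomic.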
--
Best regards,
JKnight
> http://wiki.apache.org/cassandra/FAQ#mmap
>
> On Sun, Jul 17, 2011 at 11:54 PM, JKnight JKnight
> wrote:
> > Dear all,
> > I use JMX to monitor Cassandra server.
> > Heap Memory Usage show:
> > Used : 600MB, Commit 2.1G, Max: 2.1G
> > But htop show Cas
Dear all,
I use JMX to monitor my Cassandra server.
Heap Memory Usage shows:
Used: 600 MB, Committed: 2.1 GB, Max: 2.1 GB
But htop shows the Cassandra process consuming 3.1 GB.
Could you tell me why Cassandra occupies far more memory than it actually
uses?
Thanks a lot for your support.
--
Best regards,
JKnight
Dear all,
Does Cassandra support HDFS as a storage backend?
Thanks a lot for your support.
--
Best regards,
JKnight
Dear all,
Could you tell me the best way to migrate data from Cassandra 0.6 to 0.8?
Thank you very much.
--
Best regards,
JKnight
is
> usually a safe bet, so the behavior you see is exactly as expected.
>
> If your hardware is 64-bit, make sure you're running a 64-bit OS and a
> 64-bit JVM. If you're stuck on 32-bit hardware that just happens to have
> lots of RAM, you could run multiple Cassandra instan
Hi all,
When I configure the maximum heap size with -Xmx4G, memory consumption grows
to 3.5 GB. When I trigger Perform GC in jconsole, the used memory drops to
1 GB. When I configure the maximum heap size with -Xmx2G, Cassandra runs
well.
Is that a Cassandra problem?
I want Cassandra to use memory more efficiently. How can I do that?
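For context: the JVM rarely returns committed heap to the OS, so the process size tends to track -Xmx rather than live data, and a forced GC only shrinks the *used* figure, not the resident size. If 2 GB is enough, pinning the heap there in the startup script bounds the footprint. A hypothetical excerpt (the file name and variable names vary by Cassandra version):

```shell
# In cassandra.in.sh (or conf/cassandra-env.sh in later versions):
# pin a fixed 2 GB heap so committed memory stays close to what is needed
JVM_OPTS="$JVM_OPTS -Xms2G -Xmx2G"
```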
Dear all,
Which Thrift version does Cassandra 0.66 use?
Thanks a lot for your support.
--
Best regards,
JKnight
Dear all,
On Jonathan Ellis's blog, he said that Cassandra uses a hack (trick) for
distributing keys across the nodes, but the link pointing to that document is
no longer available.
Could you tell me more about that algorithm?
Thanks.
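In case it helps while that link is dead: the distribution scheme is presumably the RandomPartitioner, which hashes each row key with MD5 onto a token ring of size 2^127; each node owns the arc of the ring up to its token. A rough sketch of that placement, under the assumption that this is the algorithm the post described:

```python
import hashlib

RING_SIZE = 2 ** 127  # token space used by RandomPartitioner (sketch)

def token(key):
    """MD5-based token for a row key, reduced onto the ring."""
    digest = hashlib.md5(key.encode("utf-8")).digest()
    return int.from_bytes(digest, "big") % RING_SIZE

def owner(key, node_tokens):
    """First node whose token is >= token(key), wrapping around the ring."""
    t = token(key)
    for nt in sorted(node_tokens):
        if nt >= t:
            return nt
    return min(node_tokens)  # wrap around to the lowest token
```

Because MD5 spreads keys uniformly, each node ends up with a roughly equal share of the ring without any central coordination.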
--
Best regards,
JKnight
> On 25 April 2010 10:48, JKnight JKnight wrote:
>
>> Dear all,
>>
>> My Cassandra server had thread leak when high concurrent load. I used
>> jconsole and saw many, many thread occur.
>>
>
> Just because there are a lot of threads, need not imply a thread
Dear all,
My Cassandra server had a thread leak under high concurrent load. I used
jconsole and saw many, many threads appear.
I know Cassandra uses TThreadPoolServer to handle requests, and
DebuggableThreadPoolExecutor to handle commands (reads/writes).
I want to know the reason for the thread leak.
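One thing worth checking is whether the threads come from TThreadPoolServer's thread-per-connection model rather than a true leak: with thousands of concurrent connections, thousands of threads are expected. A small illustration (in Python rather than Cassandra's Java, purely to show the idea) of how a fixed-size pool keeps the thread count bounded regardless of request volume:

```python
from concurrent.futures import ThreadPoolExecutor

def handle_request(i):
    # stand-in for a read/write command handler
    return i * 2

# A fixed pool of 8 workers serves 1000 requests without ever running
# more than 8 handler threads at once, unlike thread-per-connection.
with ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(handle_request, range(1000)))
```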
I know Cassandra is very flexible.
a. Because a super_column cannot contain a large number of columns, you
should not use design 1.
b. Maybe for each query, you will have to issue a separate request to each
ColumnFamily.
On Wed, Apr 21, 2010 at 1:17 PM, Steve Lihn wrote:
> Hi,
> I am new to Cassandra. I would like to
When importing, all the data in the JSON file is loaded into memory, so you
cannot import a large data set in one pass.
You need to export the large sstable file into many small JSON files, then
run the import on each one.
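A sketch of that splitting step, assuming a sstable2json-style export (one JSON object keyed by row key). Note that this simple version still loads the export once in order to split it; a production splitter would stream the file instead:

```python
import json
from itertools import islice

def split_export(path, rows_per_chunk, out_prefix):
    """Split a sstable2json-style export (one big JSON object keyed by
    row key) into smaller files that fit in memory on import.
    For simplicity this loads the whole export once; a real splitter
    would stream it."""
    with open(path) as f:
        data = json.load(f)
    items = iter(data.items())
    chunk_paths = []
    i = 0
    while True:
        chunk = dict(islice(items, rows_per_chunk))
        if not chunk:
            break
        out = "%s.%03d.json" % (out_prefix, i)
        with open(out, "w") as f:
            json.dump(chunk, f)
        chunk_paths.append(out)
        i += 1
    return chunk_paths
```

Each chunk file can then be fed to json2sstable separately, keeping the importer's memory use proportional to the chunk size.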
On Mon, Apr 5, 2010 at 5:26 PM, Jonathan Ellis wrote:
> Usually sudden heap jumps involve compacting large rows.
>
> 0.
Yes, no problem with my live Cassandra server.
Thanks, Jonathan.
On Mon, Apr 5, 2010 at 11:19 PM, Jonathan Ellis wrote:
> On Mon, Apr 5, 2010 at 9:11 PM, JKnight JKnight
> wrote:
> > Thanks Jonathan,
> >
> > When I run "nodeprobe flush" with parameter -host
should be trying.
>
> On Mon, Apr 5, 2010 at 2:37 AM, JKnight JKnight
> wrote:
> > Dear all,
> >
> > How can I flush all Commit Log for Cassandra version 042?
> > I use nodeprobe flush but It seem does not run.
> >
> > Thank a lot for support.
> >
> > --
> > Best regards,
> > JKnight
> >
>
--
Best regards,
JKnight
> >> }
> >>
> >> So to update a score of a tag:
> >> 1) need to look old value of score to be able to remove it from
> >> Product_Tags_Ordered
> >> 2) remove Row from Product_Tags_Ordered with old score
> >> 3) update score in Product_Tags
> >
David Strauss wrote:
> I need the question about monotonicity answered, too.
>
> You should also know: Cassandra is not ideal for directly tracking
> values you increment or decrement.
>
> On 2010-04-05 08:04, JKnight JKnight wrote:
> > Thanks for your reply, David.
> >
Mark{ //Column Family
gameId:{ //row key
mark_userId: ""// (column name : value),
mark2_userId2: ""
},
gameId2:{//row key
mark_userId: ""
}
}
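A sketch of reading a top-N list out of a row shaped like the Mark model above. The zero-padded mark prefix in the column name is my own assumption, used so column names sort by mark; the real encoding would depend on the comparator chosen:

```python
import heapq

# Sample data in the shape of the Mark column family above:
# row key = gameId, column name = "<mark>_<userId>" with the mark
# zero-padded so columns compare numerically (an assumed encoding).
mark = {
    "game1": {"0050_alice": "", "0090_bob": "", "0010_carol": ""},
    "game2": {"0070_dave": "", "0030_alice": ""},
}

def top_users(row, n=10):
    """Top-n (mark, userId) pairs of one game's row, highest mark first."""
    def score(col):
        m, user = col.split("_", 1)
        return int(m), user
    best = heapq.nlargest(n, row, key=lambda c: score(c)[0])
    return [score(c) for c in best]
```

With this layout a top-10 read is a single slice of the highest-sorting columns, but every mark change means deleting the old column and inserting a new one.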
On Sun, Apr 4, 2010 at 11:44 PM, David Strauss wrote:
> On 2010-04-05 02:48, JKni
Dear all,
How can I flush all commit logs in Cassandra version 0.4.2?
I used nodeprobe flush, but it does not seem to run.
Thank a lot for support.
--
Best regards,
JKnight
Dear all,
I want to design the data storage for users' marks (scores) for a large
number of users. While the system is running, users' marks change frequently.
I want to list the top 10 users with the largest marks.
Can we use Cassandra to store this data?
For example, here is my Cassandra data model design:
Mark{
userId{
wrote:
> Cassandra has always supported two great ways to prevent data loss:
>
> * Replication
> * Backups
>
> I doubt Cassandra will ever focus extensively on single-node recovery when
> it's so easy to wipe and rebuild any node from the cluster.
> ----------
Dear Jeremy Dunck,
I tried to compact, and got an error:
Caused by: java.io.UTFDataFormatException: malformed input around byte 13
at java.io.DataInputStream.readUTF(DataInputStream.java:617)
at java.io.RandomAccessFile.readUTF(RandomAccessFile.java:887)
at org.apache.cassandra.io.It
Dear all,
My Cassandra data file has a problem and I cannot read data from it, and
every row after the corrupted row is inaccessible, so I lost a lot of data.
Will the next version of Cassandra implement a way to prevent data loss?
Maybe we could use a checkpoint: if the data file is corrupt, we would read
from