On 11/15/10 12:08 PM, Reverend Chip wrote:
>> "
>> logger_.warn("Unable to lock JVM memory (ENOMEM)."
>> or
>> logger.warn("Unknown mlockall error " + errno(e));
>> "
>> Trunk also logs if it is successful:
>> "
>> logger.info("JNA mlockall successful");
>> "
> None of those messages has appeared in either output.log or system.log.
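Those log lines come from the JNA hook that asks the kernel to pin the JVM's pages with mlockall(2). A minimal standalone sketch of the same pattern, using JNA's direct mapping (the class name and message text here are illustrative, not Cassandra's actual CLibrary code):
"
import com.sun.jna.LastErrorException;
import com.sun.jna.Native;

// Illustrative sketch: lock the process's current and future pages in
// RAM, reporting the outcome the same way the quoted lines do.
public final class MlockallSketch
{
    // Flag and errno values from the Linux headers.
    private static final int MCL_CURRENT = 1;
    private static final int MCL_FUTURE = 2;
    private static final int ENOMEM = 12;

    static
    {
        // Direct-map the native method below onto libc.
        Native.register("c");
    }

    // JNA throws LastErrorException when the underlying call returns -1.
    private static native int mlockall(int flags) throws LastErrorException;

    public static void main(String[] args)
    {
        try
        {
            mlockall(MCL_CURRENT | MCL_FUTURE);
            System.out.println("JNA mlockall successful");
        }
        catch (LastErrorException e)
        {
            if (e.getErrorCode() == ENOMEM)
                System.err.println("Unable to lock JVM memory (ENOMEM)."
                                   + " Raise RLIMIT_MEMLOCK or run as root.");
            else
                System.err.println("Unknown mlockall error " + e.getErrorCode());
        }
    }
}
"
Declaring the native method to throw LastErrorException is what makes the errno(e) idiom in the quoted warning possible.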
On Mon, Nov 15, 2010 at 2:08 PM, Reverend Chip wrote:
> None of those messages has appeared in either output.log or system.log.
This was only added to the 0.6/0.7 branches a couple of days ago.
--
Jonathan Ellis
Project Chair, Apache Cassandra
co-founder of Riptano, the source for professional Cassandra support
On 11/13/10 11:59 AM, Reverend Chip wrote:
> Swapping could conceivably be a factor; the JVM is 32G out of 72G, but
> the machine is 2.5G into swap anyway. I'm going to disable swap and see
> if the gossip issues resolve.
Are you using JNA/memlock to prevent the JVM's heap from being swapped?
There should be a message in the log if it fails:
"
logger_.warn("Unable to lock JVM memory (ENOMEM)."
or
logger.warn("Unknown mlockall error " + errno(e));
"
Trunk also logs if it is successful:
"
logger.info("JNA mlockall successful");
"
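On Linux you can also check directly whether a JVM's pages are locked or swapped by reading /proc/<pid>/status: VmLck reports locked memory, VmRSS resident memory, and VmSwap (on reasonably recent kernels) how much of the process has been swapped out. A small illustrative sketch, run inside the JVM in question:
"
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

// Print this process's locked, resident, and swapped memory as reported
// by the Linux kernel in /proc/self/status.
public class SwapCheck
{
    public static void main(String[] args) throws IOException
    {
        try (BufferedReader r = new BufferedReader(new FileReader("/proc/self/status")))
        {
            String line;
            while ((line = r.readLine()) != null)
                if (line.startsWith("VmLck") || line.startsWith("VmRSS")
                    || line.startsWith("VmSwap"))
                    System.out.println(line);
        }
    }
}
"
A successfully locked heap should show VmLck on the order of the JVM's size and VmSwap near zero.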
On Fri, Nov 12, 2010 at 3:19 PM, Chip Salzenberg wrote:
After rebooting my 0.7.0beta3+ cluster to increase threads (read=100,
write=200 ... they're beefy machines) and putting the nodes under load
again, I find gossip reporting yoyo up-down-up-down status for the other
nodes. Anyone know what this is a symptom of, and/or how to avoid it? I
haven't seen an...
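For reference: in 0.7 those thread counts are set in cassandra.yaml rather than storage-conf.xml. A minimal excerpt matching the numbers above (the option names follow the 0.7 configuration format; the values are just the settings described in the message, not recommendations):
"
# Hypothetical cassandra.yaml excerpt (0.7-style option names) showing
# the read/write stage sizes described in the message above.
concurrent_reads: 100
concurrent_writes: 200
"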