Are you on the most recent version of the JVM? There have been bugs
fixed in FileChannel over the 1.6 lifespan.
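A quick way to confirm which JVM a node is actually running (the one that started Cassandra is not always the one on $PATH) is to print the standard system properties; a minimal sketch:

public class JvmVersion
{
    public static void main(String[] args)
    {
        // java.version and the java.vm.* properties are standard on every JVM.
        System.out.println("java.version    = " + System.getProperty("java.version"));
        System.out.println("java.vm.vendor  = " + System.getProperty("java.vm.vendor"));
        System.out.println("java.vm.version = " + System.getProperty("java.vm.version"));
    }
}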
On Thu, Sep 16, 2010 at 4:03 AM, Joseph Mermelstein wrote:
> Hi - has anyone made any progress with this issue? We are having the same
> problem with our Cassandra nodes in production.
Hi - has anyone made any progress with this issue? We are having the same
problem with our Cassandra nodes in production. At some point a node (and
sometimes all 3) will jump to 100% CPU usage and stay there for hours until
restarted. Stack traces reveal several threads in a seemingly endless loop
I'm encountering the same problem. Has anyone solved it?
On Tue, Apr 20, 2010 at 11:03 AM, Ingram Chen wrote:
> I checked system.log on both nodes, but there is no exception logged.
>
> On Tue, Apr 20, 2010 at 10:40, Jonathan Ellis wrote:
>
>> I don't see csArena-tmp-6-Index.db in the incoming
We had this problem initially, but it disappeared after several days of operation, so we had no chance to investigate further.
2010/5/10 Даниел Симеонов
> Hi,
> I've experienced the same problem: two nodes got stuck with CPU at 99%,
> and the threads were looping in the following source code from the IncomingStreamReader
Hi,
I've experienced the same problem: two nodes got stuck with CPU at 99%, and the
threads were looping in the following source code from the IncomingStreamReader class:
while (bytesRead < pendingFile.getExpectedBytes()) {
    bytesRead += fc.transferFrom(socketChannel, bytesRead, FileStreamTask.CHUNK_SIZE);
}
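Note that this loop has no exit path if transferFrom() keeps returning 0, which it can do indefinitely (rather than throwing or returning -1) once the remote side has closed the socket or the transfer otherwise stalls: bytesRead never advances and the thread spins at 100% CPU. A minimal sketch of a guarded variant, assuming a blocking source channel; CHUNK_SIZE is a stand-in for FileStreamTask.CHUNK_SIZE and transferFully() is a hypothetical helper, not Cassandra's actual fix:

import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.channels.ReadableByteChannel;

public final class GuardedTransfer
{
    // Stand-in for FileStreamTask.CHUNK_SIZE.
    private static final long CHUNK_SIZE = 32 * 1024 * 1024;

    /** Copies expectedBytes from source into fc, refusing to spin on a dead stream. */
    public static long transferFully(FileChannel fc, ReadableByteChannel source, long expectedBytes)
            throws IOException
    {
        long bytesRead = 0;
        while (bytesRead < expectedBytes)
        {
            long n = fc.transferFrom(source, bytesRead, Math.min(CHUNK_SIZE, expectedBytes - bytesRead));
            if (n == 0)
            {
                // transferFrom() reports "no bytes transferred" instead of failing when the
                // remote end has gone away; bail out instead of looping forever.
                throw new IOException("stream ended after " + bytesRead + " of " + expectedBytes + " bytes");
            }
            bytesRead += n;
        }
        return bytesRead;
    }
}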
I checked system.log on both nodes, but there is no exception logged.
On Tue, Apr 20, 2010 at 10:40, Jonathan Ellis wrote:
> I don't see csArena-tmp-6-Index.db in the incoming files list. If
> it's not there, that means that it did break out of that while loop.
>
> Did you check both logs for exceptions?
I don't see csArena-tmp-6-Index.db in the incoming files list. If
it's not there, that means that it did break out of that while loop.
Did you check both logs for exceptions?
On Mon, Apr 19, 2010 at 9:36 PM, Ingram Chen wrote:
> Ouch! I spoke too soon!
>
> We still suffer the same problem after
Ouch! I spoke too soon!
We still suffer the same problem after upgrading to 1.6.0_20.
In the JMX StreamingService MBean, I see several weird incoming/outgoing transfers.
On host A (192.168.2.87):
StreamingService Status:
Done with transfer to /192.168.2.88
StreamingService StreamSources:
[/192.168.2.88]
Stre
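If you want to pull the same information programmatically rather than through a JMX console, here is a minimal sketch using the standard javax.management remote API. The JMX port (8080), the MBean name org.apache.cassandra.streaming:type=StreamingService, and the exact attribute names are assumptions based on the output above; check your Cassandra version if the lookup fails:

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class StreamingStatusCheck
{
    public static void main(String[] args) throws Exception
    {
        // Placeholder host/port; point these at the node's JMX endpoint.
        String host = args.length > 0 ? args[0] : "192.168.2.87";
        String port = args.length > 1 ? args[1] : "8080";
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://" + host + ":" + port + "/jmxrmi");

        JMXConnector connector = JMXConnectorFactory.connect(url);
        try
        {
            MBeanServerConnection mbs = connector.getMBeanServerConnection();
            // Assumed MBean name; it may differ between Cassandra versions.
            ObjectName name = new ObjectName("org.apache.cassandra.streaming:type=StreamingService");

            // "Status" and "StreamSources" mirror the attribute names shown above.
            System.out.println("Status:        " + mbs.getAttribute(name, "Status"));
            System.out.println("StreamSources: " + mbs.getAttribute(name, "StreamSources"));
        }
        finally
        {
            connector.close();
        }
    }
}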
On 4/17/10 6:47 PM, Ingram Chen wrote:
after upgrading the JDK from 1.6.0_16 to 1.6.0_20, the problem was solved.
FYI, this sounds like it might be:
https://issues.apache.org/jira/browse/CASSANDRA-896
http://bugs.sun.com/view_bug.do?bug_id=6805775
Where garb
FYI:
After upgrading the JDK from 1.6.0_16 to 1.6.0_20, the problem was solved.
On Fri, Apr 16, 2010 at 00:33, Ingram Chen wrote:
> Hi all,
>
> We set up two nodes and simply set replication factor=2 for a test run.
>
> After both nodes, say node A and node B, had been serving for several hours, we found
> that "nod
Hi all,
We set up two nodes and simply set replication factor=2 for a test run.
After both nodes, say node A and node B, had been serving for several hours, we found that
"node A" always keeps 300% CPU usage.
(The other node is under 100% CPU, which is normal.)
A thread dump on "node A" shows that there are 3 busy
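For finding which threads are actually burning the CPU, here is a minimal sketch that samples per-thread CPU time over JMX using the standard ThreadMXBean; the JMX port (8080), the 5-second sampling window, and the 50% reporting threshold are arbitrary assumptions:

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;
import java.util.HashMap;
import java.util.Map;
import javax.management.MBeanServerConnection;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class BusyThreads
{
    public static void main(String[] args) throws Exception
    {
        // Placeholder host/port; point these at the node's JMX endpoint.
        String host = args.length > 0 ? args[0] : "localhost";
        String port = args.length > 1 ? args[1] : "8080";
        JMXConnector jmx = JMXConnectorFactory.connect(new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://" + host + ":" + port + "/jmxrmi"));
        MBeanServerConnection mbs = jmx.getMBeanServerConnection();
        ThreadMXBean threads = ManagementFactory.newPlatformMXBeanProxy(
                mbs, ManagementFactory.THREAD_MXBEAN_NAME, ThreadMXBean.class);

        // Snapshot per-thread CPU time, wait 5 seconds, then compare.
        Map<Long, Long> before = new HashMap<Long, Long>();
        for (long id : threads.getAllThreadIds())
            before.put(id, threads.getThreadCpuTime(id));
        Thread.sleep(5000);

        for (long id : threads.getAllThreadIds())
        {
            Long start = before.get(id);
            long end = threads.getThreadCpuTime(id);
            if (start == null || start < 0 || end < 0)
                continue; // thread appeared/died during the window, or CPU timing is disabled

            double cpuPercent = 100.0 * (end - start) / 5e9; // ns of CPU over 5s of wall time
            if (cpuPercent > 50)
            {
                ThreadInfo info = threads.getThreadInfo(id, 20);
                if (info == null)
                    continue;
                System.out.printf("%s: ~%.0f%% of one core%n", info.getThreadName(), cpuPercent);
                for (StackTraceElement frame : info.getStackTrace())
                    System.out.println("    at " + frame);
            }
        }
        jmx.close();
    }
}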