On Mon, Jul 21, 2014 at 8:07 AM, Bhaskar Singhal
wrote:
> I have not seen the issue after changing the commit log segment size to
> 1024MB.
>
Yes... your insanely over-large commitlog will be contained in fewer files
if you increase the size of the segments, but that will not make it any
less of an in…
I have not seen the issue after changing the commit log segment size to 1024MB.
tpstats output:
Pool Name               Active   Pending   Completed   Blocked   All time blocked
ReadStage                    0         0           0         0                  0
RequestResponseStage …
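For reference, the change under discussion is a single cassandra.yaml
setting; a minimal sketch, assuming stock paths (32 is the 2.0.x default):

    grep commitlog_segment_size_in_mb /etc/cassandra/cassandra.yaml
    # commitlog_segment_size_in_mb: 32     <- shipped default
    sudo sed -i 's/^commitlog_segment_size_in_mb:.*/commitlog_segment_size_in_mb: 1024/' \
        /etc/cassandra/cassandra.yaml      # then restart the node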
On Mon, Jul 7, 2014 at 9:30 PM, Bhaskar Singhal
wrote:
> I am using Cassandra 2.0.7 (with default settings and a 16GB heap on a
> quad-core Ubuntu server with 32GB RAM)
>
16GB of heap will lead to significant GC pauses, and probably will not
improve total performance versus an 8GB heap.
I continue to…
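A minimal sketch of capping the heap, assuming the stock cassandra-env.sh
variable names (values illustrative, per the 8GB suggestion above):

    # conf/cassandra-env.sh -- override the automatic sizing
    MAX_HEAP_SIZE="8G"
    HEAP_NEWSIZE="800M"    # commonly sized at ~100MB per physical core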
Well, with a 4k maximum open files limit, that still looks to be your culprit :)
I suggest you increase the size of your CL segments; the default is 32MB,
and this is probably too small for the size of record you are writing. I
suspect that a 'too many open files' exception is crashing a flush, which
then ca…
Yes, I am.
lsof lists around 9000 open file handles, and there were around 3000
commitlog segments.
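A quick way to reproduce those counts (the process match and the data path
are assumptions; adjust for your install):

    lsof -n -p "$(pgrep -f CassandraDaemon)" | wc -l    # handles held by the JVM
    ls /var/lib/cassandra/commitlog | wc -l             # commitlog segments on disk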
On Thursday, 17 July 2014 1:24 PM, Benedict Elliott Smith
wrote:
Are you still seeing the same exceptions about too many open files?
On Thu, Jul 17, 2014 at 6:28 AM, Bhaskar Singhal
wrote:
Even after changing ulimits and moving to the recommended production settings,
we are still seeing the same issue.
root@lnx148-76:~# cat /proc/17663/limits
Limit                     Soft Limit     Hard Limit     Units
Max cpu time              unlimited      unlimited      …
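For reference, the usual checks and the persistent fix look like this (the
100000 value and file paths are illustrative; the recommended production
settings are the authoritative list):

    grep 'Max open files' /proc/17663/limits    # limits of the running JVM
    ulimit -n                                   # soft limit of the current shell
    # persist a higher limit for the cassandra user in /etc/security/limits.conf:
    #   cassandra - nofile 100000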
On Tue, Jul 8, 2014 at 10:17 AM, Bhaskar Singhal
wrote:
> But I am wondering why Cassandra needs to keep 3000+ commit log
> segment files open?
>
Because you are writing faster than you can flush to disk.
=Rob
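One way to see that backlog directly is via the flush stages in tpstats (a
sketch; stage names per the 2.0.x output):

    nodetool tpstats | grep -E 'FlushWriter|MemtablePostFlusher'
    # growing Pending/Blocked counts on these stages mean memtable flushes
    # cannot keep up with the incoming write rate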
We have these precise settings but are still seeing the broken pipe exception
in our gc logs. Any clues?
Sent from my iPhone
> On Jul 8, 2014, at 1:17 PM, Bhaskar Singhal wrote:
>
> Thanks Mark. Yes, the 1024 is the limit. I haven't changed it as per the
> recommended production settings…
…for a while (1600 secs), but after ingesting around 120GB of data I start
getting the following error:
Operation [70668] retried 10 times - error inserting key 0070668
((TTransportException): java.net.SocketException: Broken pipe)
The cassandra server is still running, but in the system.log I see the below
mentioned errors.
ERROR [COMMIT-LOG-ALLOCATOR] 2014-07-07 22:39:23,617 CassandraDaemon.java (line
198…
On Mon, Jun 9, 2014 at 10:43 PM, Colin Kuo wrote:
> You can use "nodetool repair" instead. Repair is able to re-transmit the
> data which belongs to the new node.
>
Repair is not very likely to work in cases where bootstrap doesn't.
@OP : you probably will have to tune your phi detector to be more…
> …OutboundTcpConnection.java (line 418) Handshaking version with /10.156.1.2
> INFO [GossipStage:1] 2014-06-10 00:28:57,943 Gossiper.java (line 809)
> InetAddress /10.156.1.2 is now UP
>
> This brief interruption was enough to kill the streaming from node
> 10.156.1.2. Node 10.156.1.2 saw a similar "broken pipe" exception from the
> bootstrapping node:
> ERROR [Streaming to /10.156.1.3] 2014-06-10 01:22:02,345
> CassandraDaemon.java (line 191) Exception in thread Thread[Streaming to
> /10.156.1.3:1,5,main]
> java.lang.RuntimeException: java.io.IOExcepti…
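The failure-detector knob being referred to is phi_convict_threshold in
cassandra.yaml (a sketch; 8 is the shipped default, 12 an illustrative bump):

    grep phi_convict_threshold /etc/cassandra/cassandra.yaml
    # phi_convict_threshold: 8    <- raise (e.g. to 12) so brief gossip flaps
    #                                don't mark streaming peers down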
On Mon, Dec 23, 2013 at 10:47 PM, Vivek Mishra wrote:
Also to add. It works absolutely fine on single node.
-Vivek
On Tue, Dec 24, 2013 at 12:15 PM, Vivek Mishra wrote:
Hi,
I have a 6 node, 2DC cluster setup. I have configured the consistency level
to QUORUM. But very often I am getting "Broken pipe"
com.impetus.client.cassandra.CassandraClientBase
(CassandraClientBase.java:1926) - Error while executing native CQL
query…
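For anyone reproducing this outside Kundera, consistency can be set per
session in cqlsh (a sketch; on a 2-DC cluster, plain QUORUM forces cross-DC
acknowledgements, unlike LOCAL_QUORUM):

    cqlsh> CONSISTENCY QUORUM;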
Is there any reason or known issue to explain why Cassandra does not
handle the number of connections properly and gives a broken pipe? I'm
supposing this can be a Cassandra problem, but feel free to let me know if
you think I'm wrong.
Thanks in advance.
*Cassandra version:* 1.1.5
*Some property values:*
- rpc_keepalive: true
- rpc_…
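A quick way to dump the rpc-related settings in play (a sketch; path
assumed, 1.1-era option names):

    grep -E '^rpc_' /etc/cassandra/cassandra.yaml
    # rpc_keepalive: true
    # rpc_server_type: sync   <- sync keeps one thread per connection; a large
    #                            connection count can exhaust threads/sockets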
Anthony,
All I can say is that the steps I outlined below fixed my problem and
allowed me to process 60 million rows of data.
Tim
Original Message
Subject: Re: Trying to find the problem with a broken pipe
From: Anthony Ikeda
Date: Tue, August 09, 2011 3:11 am
To: user@cassandra.apache.org
Tim do you know if this is the actual reason that is causing the broken
pipe? I'm having a hard time convincing my team that modifying this value
will fix the issue.
Jonathan, do you know if there is a valid explanation on why Tim no longer
has the problem based on this change?
Anthony
Subject: Re: Trying to find the problem with a broken pipe
From: aaron morton
Date: Fri, August 05, 2011 12:58 am
To: user@cassandra.apache.org
It's probably a network thing.
The only thing I can think of in cassandra is
thrift_max_message_length_in_mb in the config. That config setting will
result in a TException thro… …ns to the client.
Perhaps check the server log.
Cheers
-
Aaron Morton
Freelance Cassandra Developer
@aaronmorton
http://www.thelastpickle.com
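For reference, the setting Aaron mentions lives in cassandra.yaml (a sketch;
the value must cover your largest serialized row or batch, and the era's
shipped default was in the low tens of MB):

    grep thrift_max_message_length_in_mb /etc/cassandra/cassandra.yaml
    # thrift_max_message_length_in_mb: 16   <- raise if single columns or
    #                                          batches can exceed this cap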
On 4 Aug 2011, at 23:05, Tim Snyder wrote:
I am getting the same problem (Broken Pipe) on a loader program, after
about 8 million read/write pairs. I am pushing serialized objects into
a column with the program; the object it seems to be failing on is much
larger than the prior objects, so I am wondering if it is possibly a
column-size…
2011-08-02 08:43:06,541 ERROR
[me.prettyprint.cassandra.connection.HThriftClient] - Could not flush
transport (to be expected if the pool is shutting down) in close for client:
CassandraClient
org.apache.thrift.transport.TTransportException: java.net.SocketException:
Broken pipe
...
2011-08-02 08:43:06,544 WARN
[me.prettyprint.cassandra.connection.HConnectionManager] - Could not
fullfill request on this host CassandraClient
...
2011-08-02 08:43:06,543 ERROR
[me.prettyprint.cassandra.connection.HConnectionManager] - …
On Tue, Aug 2, 2011 at 4:36 PM, Anthony Ikeda
wrote:
I'm not sure if this is a problem with Hector or with Cassandra.
We seem to be seeing broken pipe issues with our connections on the client
side (Exception below). A bit of googling finds possibly a problem with the
amount of data we are trying to store, although I'm certain our datase…
…would have been nice to get an error regardless.
Thanks again for your help!
From: aaron morton [mailto:aa...@thelastpickle.com]
Sent: Thursday, May 05, 2011 4:54 PM
To: user@cassandra.apache.org
Subject: Re: Decommissioning node is causing broken pipe error
Could you provide some of the log messages when the receiver ran out of disk
space? Sounds like it should be at ERROR level.
Thanks
-
Aaron Morton
Freelance Cassandra Developer
@aaronmorton
http://www.thelastpickle.com
On 6 May 2011, at 09:16, Sameer Farooqui wrote:
Just wanted to update you guys that we turned on DEBUG level logging on the
decommissioned node and the node receiving the decommissioned node's range.
We did this by editing /conf/log4j-server.properties and
changing the log4j.rootLogger to DEBUG.
We ran decommission again and saw that the re…
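The edit amounts to one line (a sketch; the appender list "stdout,R" matches
the stock log4j-server.properties of that era, but check your own file):

    sed -i 's/^log4j.rootLogger=INFO/log4j.rootLogger=DEBUG/' \
        conf/log4j-server.properties
    # before: log4j.rootLogger=INFO,stdout,R
    # after:  log4j.rootLogger=DEBUG,stdout,R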
Yes that was what I was trying to say.
thanks
-
Aaron Morton
Freelance Cassandra Developer
@aaronmorton
http://www.thelastpickle.com
On 5 May 2011, at 18:52, Tyler Hobbs wrote:
On Thu, May 5, 2011 at 1:21 AM, Peter Schuller wrote:
> > It's no longer recommended to run nodetool compact regularly as it can
> > mean that some tombstones do not get to be purged for a very long time.
> I think this is a mis-typing; it used to be that major compactions
> were necessary to remove tombstones, but this is no longer the case in
> 0.7, so that t…
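For context, the command in question (a sketch; the keyspace and column
family names are hypothetical; a major compaction merges all of a column
family's SSTables into one, which is why routinely scheduling it is
discouraged here):

    nodetool -h localhost compact Keyspace1 ColumnFamily1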
…(line 58) Need to re-stream file /raiddrive/MDR/MeterRecords-f-2283-Data.db
to /10.206.63.208
ERROR [Streaming:1] 2011-05-03 21:49:01,580 DebuggableThreadPoolExecutor.java
(line 103) Error in ThreadPoolExecutor
java.lang.RuntimeException: java.io.IOException: Broken pipe
at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:34…
…and when I do the same thing on a smaller data set, no such thing happens.
2010-04-28
Bingbing Liu
From: Jonathan Ellis
Sent: 2010-04-27 20:51:11
To: user
Cc: rucbing
Subject: Re: Broken…
[moving followups to user list]
2010/4/27 Bingbing Liu :
> When I use get_range_slices, I get the exceptions; I don't know what
> happens. Hope someone can help me.
>
> org.apache.thrift.transport.TTransportException: java.net.SocketException: