The very first line of the README tells the story:
THIS PROJECT IS NO LONGER ACTIVE
But you should be able to generate the docs from the source code.
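For example, with a local checkout of the Hector source (a Maven project), something like this should build the Javadoc; treat it as a sketch, since the archived build may need an older JDK:

```shell
# Clone the (archived) Hector repository and build the Javadoc locally.
git clone https://github.com/hector-client/hector.git
cd hector
# Generates API docs under target/site/apidocs for each module.
mvn javadoc:javadoc
```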
Regards,
Noorul
Sungju Hong writes:
Hello,
I'm looking for the Hector Java API docs.
I searched through Google but couldn't find them.
This link is also broken:
https://hector-client.github.io/hector/build/html/content/api.html#
Could you tell me how to get the docs?
Thanks.
Sungju.
Yes, because a snapshot is kept in the meantime, if I remember correctly.
Regards,
Stefano
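For reference (a sketch, not verified against 2.1.12 specifics): sequential repair snapshots each replica before comparing Merkle trees, and those snapshots keep old SSTables on disk until cleared. You can inspect and clear them with nodetool; the keyspace name below is a placeholder:

```shell
# List existing snapshots (including those taken by sequential repair)
# along with their disk usage.
nodetool listsnapshots
# Clear snapshots for a given keyspace once the repair has finished.
# "my_keyspace" is a placeholder for your own keyspace name.
nodetool clearsnapshot my_keyspace
```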
On Thu, Jun 23, 2016 at 4:22 PM, Jean Carlo
wrote:
Cassandra 2.1.12
At the moment of a repair -pr sequential, we are experiencing an
exponential increase in the number of SSTables, for a table using LCS.
Does anyone know whether, theoretically speaking, a sequential repair
does more SSTable streaming among replicas than a parallel repair?
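For context, on 2.1 the two modes are selected roughly like this (sequential is the default there; consult the nodetool repair help for your exact version):

```shell
# Sequential repair of this node's primary ranges (default on 2.1):
nodetool repair -pr
# Parallel repair of the same ranges:
nodetool repair -pr -par
```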
Best regards,
Hi,
We have a 12 node 2.2.6 ring running in AWS, single DC with RF=3, that is
sitting at <25% CPU, doing mostly writes, and not showing any particular
long GC times/pauses. By all observed metrics the ring is healthy and
performing well.
However, we are noticing a pretty consistent number of conn
Hi,
If I were you I would stick with 3.0.x. In my experience, 3.x is not
production ready. We went to prod with 3.5 and I wish we hadn't.
/Oskar
> On 23 juni 2016, at 10:56, Bienek, Marcin wrote:
Hi,
We are shortly going into production with our Cassandra cluster. Now I wonder
if this may be a good moment (while still not fully in prod) to switch from
3.0.x to the new tick-tock versions.
On Planet Cassandra, the tick-tock article mentions:
“…We do recognize that it will take some time f
Hi,
Sorry no one has gotten back to you yet. Do you still have the issue?
It's still unclear to me what's causing this. A few ideas though:
> We are quite pedantic about OS settings. All nodes got same settings
> and C* configuration.
Working from that hypothesis, I hope it's 100% true.
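One quick way to verify that hypothesis is to diff a few relevant kernel settings across the nodes; the hostnames and the setting list here are placeholders:

```shell
# Compare a few kernel settings that commonly differ between nodes.
# node1..node3 are placeholder hostnames; extend the list as needed.
for host in node1 node2 node3; do
  echo "== $host =="
  ssh "$host" sysctl vm.max_map_count vm.swappiness net.core.somaxconn
done
```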
2 nodes behaving