Please ignore!
Sorry, sent to the wrong mailing list.
Viru
> On Apr 11, 2016, at 5:41 PM, Viru Kanjilal wrote:
>
> Hello Devs
>
> It's time for some spring cleaning. Please clean out the unused git branches
> you may have on Bitbucket. We are nearing the 2GB limit.
>
> For instructions on
Hello Devs
It's time for some spring cleaning. Please clean out the unused git branches you
may have on Bitbucket. We are nearing the 2GB limit.
For instructions on how to quickly clear up unused git branches, please refer
to https://sites.google.com/a/datos.io/datos-wiki/delete-unused-git-bran
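In case the wiki page is unreachable, the flow is roughly the following. This is a self-contained throwaway demo, not the wiki's exact instructions: the repo (demo.git) and branch (old-feature) are invented names, and the last four commands are the actual cleanup you would run against your own clone.

```shell
# Throwaway demo of cleaning up a merged remote branch.
set -eu
tmp=$(mktemp -d)
git init -q --bare "$tmp/demo.git"
git clone -q "$tmp/demo.git" "$tmp/work" 2>/dev/null
cd "$tmp/work"
git config user.email dev@example.com
git config user.name "Dev"
git commit -q --allow-empty -m 'initial'
git push -q origin HEAD                 # publish the default branch
git checkout -q -b old-feature
git commit -q --allow-empty -m 'feature work'
git push -q origin old-feature          # the branch that later goes stale
git checkout -q -
git merge -q old-feature
git push -q origin HEAD
# The cleanup itself: see what is already merged, then delete it remotely.
git fetch -q --prune
git branch -r --merged                  # origin/old-feature is safe to drop
git push -q origin --delete old-feature
git ls-remote --heads origin            # only the default branch remains
```

Deleting the remote ref is what eventually frees space on the server, once its unique objects are garbage-collected.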
+1 to Tupshin's proposal: 10k nodes (massive clusters) really is the next
frontier.
I don't expect vnodes to add that much to the gossip dissemination load, as
a node's tokens are sent out only a handful of times (mostly when the node
joins the ring). Without having hard data to back myself up, I
As Jeremiah indicates, it's 3.0+ only. The docs should definitely reflect
this.
On Mon, 11 Apr 2016 at 16:21, Jack Krupansky
wrote:
> Thanks, Benedict. Is this only true as of 3.x (new storage engine), or was
> the equivalent efficiency also true with 2.x?
>
> It would be good to have an explicit
Hi Haryadi,
Personally I'd love to see your approach extended to test up to 10K
nodes, or so.
There are not too many known instances of scaling past 1000 nodes, and
as the need for scale grows, and as scale out hardware becomes more
commonplace (high density, but with lots of small servers...aka
Thanks, Benedict. Is this only true as of 3.x (new storage engine), or was
the equivalent efficiency also true with 2.x?
It would be good to have an explicit statement on this efficiency question
in the spec/doc since the spec currently does say: "The option also *provides
a slightly more compact
While checking the docs for max_mutation_size_in_kb, I noticed that there are
more Config properties that appear neither in the yaml nor in the DataStax doc -
nor in the old (outdated) Config wiki. For example, the first one is
permissions_cache_max_entries.
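One way to enumerate such gaps is to diff the field names declared in Config.java against the keys present in cassandra.yaml. A rough sketch: the heredocs below are tiny stand-ins for the real files (src/java/org/apache/cassandra/config/Config.java and conf/cassandra.yaml in a checkout), and the grep patterns are deliberately simplified.

```shell
# Sketch: list Config fields that have no corresponding cassandra.yaml key.
set -eu
cd "$(mktemp -d)"
# Toy stand-in for src/java/org/apache/cassandra/config/Config.java:
cat > Config.java <<'EOF'
public class Config {
    public Integer max_mutation_size_in_kb;
    public Integer permissions_cache_max_entries = 1000;
    public Integer num_tokens = 256;
}
EOF
# Toy stand-in for conf/cassandra.yaml:
cat > cassandra.yaml <<'EOF'
num_tokens: 256
EOF
# Field names declared in Config.java (pattern simplified to one-word types):
grep -oE 'public [A-Za-z]+ [a-z0-9_]+' Config.java | awk '{print $3}' | sort > fields.txt
# Keys actually present in the yaml:
grep -oE '^[a-z0-9_]+:' cassandra.yaml | tr -d ':' | sort > keys.txt
# Fields with no yaml key -- candidates for documentation work:
comm -23 fields.txt keys.txt
```

Against the toy inputs this prints max_mutation_size_in_kb and permissions_cache_max_entries; run against the real files, the pattern would need to cover all the modifiers and types Config.java actually uses.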
Before suggesting/requesting that the DataStax doc guys
As I understand it, "COMPACT STORAGE" only has meaning in the CQL parser, for
backwards compatibility, as of 3.0. The on-disk storage is not affected by its
usage.
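For readers skimming the thread, a minimal sketch of the clause being discussed (table and column names are invented for illustration):

```sql
-- Pre-3.0 idiom: request the dense, Thrift-style layout explicitly.
CREATE TABLE events (
    key   text,
    ts    timeuuid,
    value blob,
    PRIMARY KEY (key, ts)
) WITH COMPACT STORAGE;

-- In 3.0+, the same table without the clause is stored just as compactly;
-- the option is parsed only for backwards compatibility.
CREATE TABLE events_v2 (
    key   text,
    ts    timeuuid,
    value blob,
    PRIMARY KEY (key, ts)
);
```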
> On Apr 11, 2016, at 3:33 PM, Benedict Elliott Smith
> wrote:
>
> Compact storage should really have been named "not wasteful stora
Hi Jonathan,
Thanks for the reply!
We don't need a patched version of Cassandra. Specifically, this is what
we'd like help with from you, if possible:
Cassandra devs: "Here are recent JIRA entries that discuss scale-dependent
bugs: CASSANDRA-X, -Y, -Z (where XYZ are JIRA bug#)"
Our side: We
Compact storage should really have been named "not wasteful storage" - now
everything is "not wasteful storage" so it's void of meaning. This is true
without constraint. You do not need to limit yourself to a single non-PK
column; you can have many and it will remain as or more efficient than
"comp
My understanding is that Thrift is being removed from Cassandra in 4.0, but will
COMPACT STORAGE be removed as well? Clearly the two are related, but
COMPACT STORAGE had a performance advantage in addition to Thrift
compatibility, so its status is ambiguous.
I recall vague chatter, but no explicit depr
The answer will depend on how conservative you are.
The most conservative choice overall would be to go with the 2.2.x line.
3.0.x if you want the nice and shiny new 3.0 things, but can tolerate some
risk (the branch has a lot of relatively new core code, and hasn’t yet been
tried out by as
As an operator, I’d imagine it’s mostly the same as always - stability will
vary by workload, so test with your workload until you’re confident.
If x.y.Z where Z >= 6 was basically the guideline most people used before, then
it’s probably worth considering 3.5 and 3.7 as worth testing in your sp
On 04/11/2016 12:42 PM, Anuj Wadehra wrote:
> Can someone help me with this one?
This is the type of question you should ask the user@ list. The dev@
list is specifically for the development *of* Cassandra.
> What should be a reasonable criterion for taking 3.x releases in
> production?
Short answ
Can someone help me with this one?
Thanks,
Anuj
Sent from Yahoo Mail on Android
On Sun, 10 Apr, 2016 at 11:07 PM, Anuj Wadehra wrote:
Hi,
The Tick-Tock release strategy in 3.x was a good initiative to ensure frequent &
stable releases. While odd releases are supposed to get all the bug fixes and
+1
On Sun, Apr 10, 2016 at 11:43 AM, Jake Luciani wrote:
> I propose the following artifacts for release as 3.5.
>
> sha1: 020dd2d1034abc5c729edf1975953614b33c5a8b
> Git:
>
> http://git-wip-us.apache.org/repos/asf?p=cassandra.git;a=shortlog;h=refs/tags/3.5-tentative
> Artifacts:
>
> https://repo
The Cassandra team is pleased to announce the release of Apache Cassandra
version 3.0.5.
Apache Cassandra is a fully distributed database. It is the right choice
when you need scalability and high availability without compromising
performance.
http://cassandra.apache.org/
Downloads of source an
+1
On Sun, Apr 10, 2016 at 10:43 AM, Jake Luciani wrote:
> I propose the following artifacts for release as 3.5.
>
> sha1: 020dd2d1034abc5c729edf1975953614b33c5a8b
> Git:
>
> http://git-wip-us.apache.org/repos/asf?p=cassandra.git;a=shortlog;h=refs/tags/3.5-tentative
> Artifacts:
>
> https://repo
+1
--
AY
On 11 April 2016 at 09:41:30, Benjamin Lerer (benjamin.le...@datastax.com)
wrote:
+1
On Sun, Apr 10, 2016 at 5:43 PM, Jake Luciani wrote:
> I propose the following artifacts for release as 3.5.
>
> sha1: 020dd2d1034abc5c729edf1975953614b33c5a8b
> Git:
>
> http://git-w
+1
On Sun, Apr 10, 2016 at 5:43 PM, Jake Luciani wrote:
> I propose the following artifacts for release as 3.5.
>
> sha1: 020dd2d1034abc5c729edf1975953614b33c5a8b
> Git:
>
> http://git-wip-us.apache.org/repos/asf?p=cassandra.git;a=shortlog;h=refs/tags/3.5-tentative
> Artifacts:
>
> https://repos
Hi Jack,
Thanks for reporting the problem. I will fix it.
Benjamin
On Sun, Apr 10, 2016 at 10:10 PM, Jack Krupansky
wrote:
> I was baffled why I couldn't find a user's reported log message of
> "Mutation 32MB too large for maximum size of 16Mb" even when I searched
> GitHub for "too large for
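For context on the 16MB figure in that log line: when max_mutation_size_in_kb is not set explicitly, the mutation ceiling is derived from the commit log segment size. A cassandra.yaml sketch (the values shown are the shipped defaults, not a recommendation):

```yaml
# Commit log segments are 32 MB by default.
commitlog_segment_size_in_mb: 32
# If left unset, the maximum mutation size defaults to half a segment,
# i.e. 16384 KB -- the 16MB limit quoted in the log message.
# max_mutation_size_in_kb: 16384
```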