Proposal: github pull requests for all code changes
To be able to use the GitHub code review UI and get closer CI integration, we should make it obligatory to submit GitHub pull requests for all code changes.

The process would be:
1. Create or find a JIRA ticket.
2. Submit a GitHub pull request
   - one PR per branch (one for 3.0, one for 3.11, etc.)
   - the PR title should mention the JIRA ticket so that the PR gets linked on the JIRA
3. Have it reviewed and approved on GitHub.
4. The committer still needs to do the manual work of getting the code committed to the Apache repo, like today
   - having "closes #123" in the commit message closes the pull request. Adding the same line to the merge commit messages should close the PRs for the different branches.

Apache Spark does something similar; their guidelines are here:
https://spark.apache.org/contributing.html

Anyone got any concerns or improvement suggestions to this?

/Marcus
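
As an illustration of step 4, a merge commit message could look like the following (the summary, author/reviewer names, and ticket number are made-up placeholders; the "Closes #123" line is what triggers GitHub to close the corresponding PR once the commit shows up on GitHub):

    Short summary of the change

    Patch by <author>; reviewed by <reviewer> for CASSANDRA-XXXXX

    Closes #123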
how to fix constantly getting out of memory (3.11)
Hi,

I have seven nodes running Debian stretch with C* 3.11, each with a 2 TB disk (500 GB free) and 32 GB RAM. I have a keyspace with seven tables.

At the moment the cluster doesn't work reliably at all. Every morning at least two nodes have shut down due to out of memory, and the repair afterwards fails with "some repair failed". I use G1 with a 16 GB heap on six nodes and CMS with an 8 GB heap on one node to see a difference. In Munin it's easy to see a constantly rising memory consumption, and there are no other services running on these machines. I cannot figure out what is not releasing the memory.

Some tables have some big rows (as mentioned in my last mail to the list). Can this be a source of the memory consumption? How do you track this down? Is there memory which doesn't get released and accumulates over time? I have not yet debugged such GC/memory issues.

cheers
 Michael
Re: Proposal: github pull requests for all code changes
There's nothing that stops people from using GitHub to discuss code changes. Many JIRAs already link to GitHub branches that can be used to review and comment on code. But it's not always the best place to do so. The high-level discussion should always take place on JIRA. Although I'd have no problem seeing in-depth code review happening on GitHub, I'd hate to see significant parts of the discussion as a whole spread across JIRA and the different PRs related to a ticket.

The missing GitHub integration with JIRA and our lack of administrative permissions is another problem. If we close a JIRA ticket, the corresponding PRs will still stay open; we either have to ask the contributor to close them or live with an ever-growing number of open PRs. There's also no way for us to label, assign, or otherwise use PR-related features, so I really wonder why it would make sense to use them more heavily.

On 12.12.2017 09:02, Marcus Eriksson wrote:
> To be able to use the GitHub code review UI and get closer CI
> integration, we should make it obligatory to submit GitHub pull requests
> for all code changes.
>
> The process would be:
> 1. Create or find a JIRA ticket.
> 2. Submit a GitHub pull request
>    - one PR per branch (one for 3.0, one for 3.11, etc.)
>    - the PR title should mention the JIRA ticket so that the PR gets
>      linked on the JIRA
> 3. Have it reviewed and approved on GitHub.
> 4. The committer still needs to do the manual work of getting the code
>    committed to the Apache repo, like today
>    - having "closes #123" in the commit message closes the pull request.
>      Adding the same line to the merge commit messages should close the
>      PRs for the different branches.
>
> Apache Spark does something similar; their guidelines are here:
> https://spark.apache.org/contributing.html
>
> Anyone got any concerns or improvement suggestions to this?
>
> /Marcus
RE: how to fix constantly getting out of memory (3.11)
Hi,

if you are talking about on-heap troubles, then the following might be related in 3.11.x:
https://issues.apache.org/jira/browse/CASSANDRA-13929

Thomas

-----Original Message-----
From: Micha [mailto:mich...@fantasymail.de]
Sent: Tuesday, 12 December 2017 09:24
To: dev@cassandra.apache.org
Subject: how to fix constantly getting out of memory (3.11)

Hi,

I have seven nodes running Debian stretch with C* 3.11, each with a 2 TB disk (500 GB free) and 32 GB RAM. I have a keyspace with seven tables.

At the moment the cluster doesn't work reliably at all. Every morning at least two nodes have shut down due to out of memory, and the repair afterwards fails with "some repair failed". I use G1 with a 16 GB heap on six nodes and CMS with an 8 GB heap on one node to see a difference. In Munin it's easy to see a constantly rising memory consumption, and there are no other services running on these machines. I cannot figure out what is not releasing the memory.

Some tables have some big rows (as mentioned in my last mail to the list). Can this be a source of the memory consumption? How do you track this down? Is there memory which doesn't get released and accumulates over time? I have not yet debugged such GC/memory issues.

cheers
 Michael
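
One way to check whether the growth Munin shows is actually on the Java heap is to poll each node's heap usage over JMX and watch the trend over a day. A minimal Java sketch of that idea follows; it assumes the node's JMX port (7199 by default) is reachable from where it runs and that JMX authentication is disabled, and the host name is just a placeholder:

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

import javax.management.MBeanServerConnection;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

/**
 * Prints a Cassandra node's heap usage once a minute. A baseline that keeps
 * climbing from day to day points at something on the heap that is never
 * released; a flat heap despite a growing process footprint points off-heap.
 * Assumes JMX is reachable on port 7199 with authentication disabled.
 */
public class HeapWatcher {
    public static void main(String[] args) throws Exception {
        String host = args.length > 0 ? args[0] : "localhost"; // placeholder node address
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://" + host + ":7199/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection connection = connector.getMBeanServerConnection();
            MemoryMXBean memory = ManagementFactory.newPlatformMXBeanProxy(
                    connection, ManagementFactory.MEMORY_MXBEAN_NAME, MemoryMXBean.class);
            while (true) {
                MemoryUsage heap = memory.getHeapMemoryUsage();
                System.out.printf("%tT heap used=%d MB, committed=%d MB, max=%d MB%n",
                        System.currentTimeMillis(),
                        heap.getUsed() >> 20, heap.getCommitted() >> 20, heap.getMax() >> 20);
                Thread.sleep(60_000); // poll once a minute
            }
        }
    }
}

If the heap numbers stay flat while the overall memory use of the process keeps growing, the cause is more likely off-heap or native memory than anything the heap settings can fix.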