Re: Policeman Jenkins => new hardware
On Tue, Mar 18, 2025 at 12:53 PM Uwe Schindler wrote:
> Possibly migrate away from VirtualBOX to KVM, but it's unclear if Hackintoshs work there.

-device isa-applesmc,osk="ourhardworkbythesewordsguardedpleasedontsteal(c)AppleComputerInc"

-
To unsubscribe, e-mail: dev-unsubscr...@solr.apache.org
For additional commands, e-mail: dev-h...@solr.apache.org
Re: [Operator] [VOTE] Release the Solr Operator v0.9.1 RC2
+1 (binding)

> Successfully smoke tested the Solr Operator v0.9.1!

- Houston

On Wed, Mar 19, 2025 at 2:23 PM Jason Gerlowski wrote:
> Please vote for release candidate 2 for the Solr Operator v0.9.1
Re: SolrJ backwards compatibility
If you compare the results from
https://github.com/search?q=%22new+RequestWriter%28%29%22+language%3AJava+path%3Aorg%2Fapache%2Fsolr&type=code&ref=advsearch (294)
and
https://github.com/search?q=%22new+RequestWriter%28%29%22+language%3AJava&type=code&ref=advsearch (307),
that suggests there are 13 places on GitHub where this gets called outside of our own code. I think that's a good indicator of how much you should accommodate.

Mike

On Wed, Mar 19, 2025 at 7:53 PM David Smiley wrote:
> Looking to get more visibility on backwards compatibility for SolrJ:
> https://issues.apache.org/jira/browse/SOLR-17518?focusedCommentId=17935379&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17935379
Bug in IndexFetcher + Http2SolrClient Interaction
Greetings,

During our rollout of 9.8 we discovered an interesting behavior indirectly caused by the Http2SolrClient migration of IndexFetcher:
https://github.com/apache/solr/commit/25194b02caa383feda293490eed6ccbd7c3ecf05#diff-7af383a173bd8e05414b341ab08e9ca715b665077112c64150c4db00811d16a6

The change itself does not appear to be the problem, but rather the default behavior of Http2SolrClient, which applies the *idle* timeout to the overall request time:
https://github.com/apache/solr/blob/2b8f933529fa736fe5fd2a9b0c751bedf352f0c7/solr/solrj/src/java/org/apache/solr/client/solrj/impl/Http2SolrClient.java#L625-L629

Apparently this choice of default has some history:
https://github.com/apache/solr/commit/a80eb84d5659a06a10860ad2470e87d80b19fa5d
and in its current form:
https://github.com/apache/solr/commit/d70af456058174d15a25d3c9b8cc5f7a8899b62b

At any rate, in most cases this goes unnoticed because the default idle timeout is quite long (120 seconds), but it can cause problems when applied to something like IndexFetcher, which is probably *expected* to sometimes have really long-lived, healthy connections exceeding the 120s period. An *idle* timeout being applied to a long-lived, non-idle connection doesn't seem quite right...

We saw this during replication of a ~5GB segment which, at our bandwidth at the time, exceeded the 120-second window and caused the Cloud to get stuck in a replication loop. The stacktrace tells the full story: in 120 seconds we were only able to copy ~0.46GB of the 5.5GB file (implied ~30 Mbps) before being interrupted by this arguably misapplied idle timeout (on a clearly non-idle connection!):

2025-03-17 20:47:15.144 WARN IndexFetcher [] ? [] - Error in fetching file: _7rxzv.cfs (downloaded 457179136 of 5529080446 bytes)
java.io.IOException: java.util.concurrent.TimeoutException: Total timeout 120000 ms elapsed
    at org.eclipse.jetty.client.util.InputStreamResponseListener$Input.toIOException(InputStreamResponseListener.java:343) ~[jetty-client-10.0.22.jar:10.0.22]
    at org.eclipse.jetty.client.util.InputStreamResponseListener$Input.read(InputStreamResponseListener.java:311) ~[jetty-client-10.0.22.jar:10.0.22]
    at java.base/java.io.FilterInputStream.read(FilterInputStream.java:133) ~[?:?]
    at org.apache.solr.common.util.FastInputStream.readWrappedStream(FastInputStream.java:87) ~[solr-solrj-9.8.0 d2673aab0d696a2f7330bf3267533525dfad1200 - cloud-user - 2025-01-29 16:21:49]
    at org.apache.solr.common.util.FastInputStream.read(FastInputStream.java:138) ~[solr-solrj-9.8.0 d2673aab0d696a2f7330bf3267533525dfad1200 - cloud-user - 2025-01-29 16:21:49]
    at org.apache.solr.common.util.FastInputStream.readFully(FastInputStream.java:176) ~[solr-solrj-9.8.0 d2673aab0d696a2f7330bf3267533525dfad1200 - cloud-user - 2025-01-29 16:21:49]
    at org.apache.solr.handler.IndexFetcher$FileFetcher.fetchPackets(IndexFetcher.java:1849) ~[solr-core-9.8.0 d2673aab0d696a2f7330bf3267533525dfad1200 - cloud-user - 2025-01-29 16:21:49]
    at org.apache.solr.handler.IndexFetcher$FileFetcher.fetch(IndexFetcher.java:1790) ~[solr-core-9.8.0 d2673aab0d696a2f7330bf3267533525dfad1200 - cloud-user - 2025-01-29 16:21:49]
    at org.apache.solr.handler.IndexFetcher$FileFetcher.fetchFile(IndexFetcher.java:1772) ~[solr-core-9.8.0 d2673aab0d696a2f7330bf3267533525dfad1200 - cloud-user - 2025-01-29 16:21:49]
    at org.apache.solr.handler.IndexFetcher.downloadIndexFiles(IndexFetcher.java:1192) ~[solr-core-9.8.0 d2673aab0d696a2f7330bf3267533525dfad1200 - cloud-user - 2025-01-29 16:21:49]
    at org.apache.solr.handler.IndexFetcher.fetchLatestIndex(IndexFetcher.java:679) ~[solr-core-9.8.0 d2673aab0d696a2f7330bf3267533525dfad1200 - cloud-user - 2025-01-29 16:21:49]
    at org.apache.solr.handler.IndexFetcher.fetchLatestIndex(IndexFetcher.java:410) ~[solr-core-9.8.0 d2673aab0d696a2f7330bf3267533525dfad1200 - cloud-user - 2025-01-29 16:21:49]
    at org.apache.solr.handler.ReplicationHandler.doFetch(ReplicationHandler.java:522) ~[solr-core-9.8.0 d2673aab0d696a2f7330bf3267533525dfad1200 - cloud-user - 2025-01-29 16:21:49]
    at org.apache.solr.cloud.RecoveryStrategy.replicate(RecoveryStrategy.java:243) ~[solr-core-9.8.0 d2673aab0d696a2f7330bf3267533525dfad1200 - cloud-user - 2025-01-29 16:21:49]
    at org.apache.solr.cloud.RecoveryStrategy.doSyncOrReplicateRecovery(RecoveryStrategy.java:697) ~[solr-core-9.8.0 d2673aab0d696a2f7330bf3267533525dfad1200 - cloud-user - 2025-01-29 16:21:49]
    at org.apache.solr.cloud.RecoveryStrategy.doRecovery(RecoveryStrategy.java:333) ~[solr-core-9.8.0 d2673aab0d696a2f7330bf3267533525dfad1200 - cloud-user - 2025-01-29 16:21:49]
    at org.apache.solr.cloud.RecoveryStrategy.run(RecoveryStrategy.java:309) ~[solr-core-9.8.0 d2673aab0d696a2f7330bf3267533525dfad1200 - cloud-user - 2025-01-29 16:21:49]
    at com.codahale.metrics.InstrumentedExecutorService$InstrumentedRunnable.run(InstrumentedExecutorService.java:212) ~[metrics-core-
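To make the distinction at the heart of this report concrete, here is a small standalone model of the two timeout semantics (toy code, not Jetty or Solr source): an idle timeout should reset whenever bytes arrive, while a total timeout never resets, so a healthy but long transfer can trip it.

```java
/** Toy model of idle-vs-total timeout semantics; not Jetty/Solr code. */
public class TimeoutSketch {

    /** Idle timeout: the clock resets on every packet, so only a long
     *  silent gap between packets can trip it. */
    static boolean failsIdleTimeout(long[] gapsMs, long idleTimeoutMs) {
        for (long gap : gapsMs) {
            if (gap > idleTimeoutMs) return true;
        }
        return false;
    }

    /** Total timeout: one clock for the whole request, so a steady but
     *  long-running transfer can still trip it. */
    static boolean failsTotalTimeout(long[] gapsMs, long totalTimeoutMs) {
        long elapsed = 0;
        for (long gap : gapsMs) {
            elapsed += gap;
            if (elapsed > totalTimeoutMs) return true;
        }
        return false;
    }

    public static void main(String[] args) {
        // A healthy replication stream: 20,000 packets arriving 10 ms apart
        // (200 s of steady transfer, never idle for more than 10 ms).
        long[] gaps = new long[20_000];
        java.util.Arrays.fill(gaps, 10);

        long timeoutMs = 120_000; // the 120 s default discussed above
        System.out.println(failsIdleTimeout(gaps, timeoutMs));  // false: never idle
        System.out.println(failsTotalTimeout(gaps, timeoutMs)); // true: 200 s > 120 s
    }
}
```

This is exactly the IndexFetcher situation: the connection is never idle for anywhere near 120 s, yet the request is killed because the idle timeout is applied as a total timeout.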
SolrJ backwards compatibility
Looking to get more visibility on backwards compatibility for SolrJ:
https://issues.apache.org/jira/browse/SOLR-17518?focusedCommentId=17935379&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17935379

Up until but not including SolrJ 9.9 (not released yet), a user could create a new RequestWriter() to write a request to Solr in XML (HTTP POST). In general users don't specify this; the default is "javabin", which is much more efficient. The change in 9.9 is that new RequestWriter() won't work at all, as it's abstract; new XMLRequestWriter() should be used. Of course it ought to have been this way all along; better late than never.

Is compatibility here something we care to uphold? I tend to think so because it's a major component. A simple revert and adding a dummy subclass called XMLRequestWriter would be compatible and an onramp to compatibility with SolrJ 10.

~ David Smiley
Apache Lucene/Solr Search Developer
http://www.linkedin.com/in/davidwsmiley
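A minimal sketch of the proposed onramp. These are standalone toy classes that only mirror SolrJ's names; the real SolrJ RequestWriter has more members, and `getUpdateContentType` is used here just as a stand-in for the XML-writing behavior:

```java
// Toy model of the proposed back-compat shim; class names mirror SolrJ's,
// but everything here is a self-contained illustration, not SolrJ source.

/** Stand-in for the pre-9.9 concrete RequestWriter, which wrote XML. */
class RequestWriter {
    public String getUpdateContentType() {
        return "application/xml; charset=UTF-8";
    }
}

/** The proposed dummy subclass: new code writes `new XMLRequestWriter()`,
 *  while legacy `new RequestWriter()` keeps compiling and behaving the same. */
class XMLRequestWriter extends RequestWriter {}

public class CompatSketch {
    public static void main(String[] args) {
        RequestWriter legacy = new RequestWriter();    // pre-9.9 user code still works
        RequestWriter modern = new XMLRequestWriter(); // 9.9+ preferred spelling
        // Both print "application/xml; charset=UTF-8"
        System.out.println(legacy.getUpdateContentType());
        System.out.println(modern.getUpdateContentType());
    }
}
```

The base class could then be deprecated in 9.x and made abstract in SolrJ 10, giving users a full release cycle to switch to the subclass.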
Re: Policeman Jenkins => new hardware
> The problem with macOS is the emulation.

Kudos to you for even making it work. I never managed to get it working properly. There was always something that didn't work quite right, or it just broke after each Apple update. I'm sure they make it intentionally difficult to run a Hackintosh.

We do have true (I hope) Mac runners on GitHub, so we do have some coverage there.
https://github.com/apache/lucene/actions/runs/13932872683/job/38994065168

D.
Re: [DISCUSS] Community Virtual Meetup, March 2025
Thanks for the reminder! I'll show up halfway through.

On Wed, Mar 19, 2025 at 8:31 AM Jason Gerlowski wrote:
> Hey all - reminder that our Community Meetup will be today at 9am PT.
> See you all there!
Re: Policeman Jenkins => new hardware
The 10x write factor is probably logs. Solr writes a lot of logs. Most tests need very little data to write & read.

On Wed, Mar 19, 2025 at 8:16 AM Michael Sokolov wrote:
> Woohoo! Thanks Uwe; exciting you were able to get 2x the lifespan of the drives. Let's go for 4x this time!
>
> On Tue, Mar 18, 2025 at 12:53 PM Uwe Schindler wrote:
> >
> > Moin moin,
> >
> > Policeman Jenkins got new hardware yesterday - no functional changes.
> >
> > Background: The old server had some strange problems with the network adaptor (Intel's "igb" kernel driver), reporting "Detected Tx Unit Hang". This caused some short downtimes, and the monitoring complained all the time about lost pings, which drove me crazy at the weekend. It worked better after a restart and also after a kernel downgrade, but as I was about to replace the machine with a newer one anyway, I ordered a replacement with the new hardware version (previously it was a Hetzner AX51-NVME; now it is a Hetzner AX52).
> >
> > The migration started yesterday at lunchtime in Europe (12:00 CET): both servers were booted from the network into the datacenter's recovery environment with temporary IPs, then both root disks were mounted and a large rsync was run (with checksums, extended attributes, numeric uid/gid, and the delete option). Luckily this worked with the old server (the Intel adapter did not break). The whole downtime should have taken only 1 to 1.5 hours (the time the copy at 1 GBit/s plus the reconfiguration needs), but unfortunately PCI Express on the new server complained about (recoverable) errors in the NVMe communication. After some support roundtrips (they first replaced only the failing NVMe controller, which did not help), they replaced the whole server.
> >
> > At 18:30 CET, I started the copy to the new server again and all went well; dmesg showed no PCI Express checksum errors.
> > Finally, after fixing boot (the old server used MBR, the new one EFI), the server was mounted at the original location by the team, and all IPv4 addresses and the IPv6 network were available. Since then (approx. 20:30 CET), Policeman Jenkins has been back up and running.
> >
> > The TODOs for the future:
> >
> > * Replace the macOS VM and update it to a new version (it's complicated, as it is a "Hackintosh", so it shouldn't exist according to Apple).
> > * Possibly migrate away from VirtualBox to KVM, but it's unclear if Hackintoshes work there.
> >
> > Have fun with the new hardware; the builds on the Lucene main branch are now 1.5 times faster (10 instead of 15 minutes).
> >
> > The new hardware is described here: https://www.hetzner.com/dedicated-rootserver/ax52/; it has AVX-512, let's see what comes out. No test failures yet.
> >
> > vendor_id       : AuthenticAMD
> > cpu family      : 25
> > model           : 97
> > model name      : AMD Ryzen 7 7700 8-Core Processor
> > stepping        : 2
> > microcode       : 0xa601209
> > cpu MHz         : 5114.082
> > cache size      : 1024 KB
> > physical id     : 0
> > siblings        : 16
> > core id         : 7
> > cpu cores       : 8
> > apicid          : 15
> > initial apicid  : 15
> > fpu             : yes
> > fpu_exception   : yes
> > cpuid level     : 16
> > wp              : yes
> > flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good amd_lbr_v2 nopl xtopology nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba perfmon_v2 ibrs ibpb stibp ibrs_enhanced vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local user_shstk avx512_bf16 clzero irperf xsaveerptr rdpru wbnoinvd cppc arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic vgif x2avic v_spec_ctrl vnmi avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid overflow_recov succor smca fsrm flush_l1d amd_lbr_pmc_freeze
> > bugs            : sysret_ss_attrs spectre_v1 spectre_v2 spec_store_bypass srso
> > bogomips        : 7585.28
> > TLB size        : 3584 4K pages
> > clflush size    : 64
> > cache_alignment : 64
> > address sizes   : 48 bits physical, 48 bits virtual
> > power management: ts ttp tm hwpstate cpb eff_freq_ro [13] [14]
> >
> > # lspci | fgrep -i volati
> > 01:00.0 Non-Volatile memory controller: Samsung Electronics Co Ltd NVMe SSD Controller PM9A1/PM9A3/980PRO
> > 02:00.0 Non-Volatile memory controller: Micron Techn
Re: Policeman Jenkins => new hardware
Nothing to do with the ASF. This is just one individual setting up a Jenkins. Anyone can do that.

On Wed, Mar 19, 2025, 11:20 AM Paul Irwin wrote:
> First, congrats on setting up the new hardware! It's amazing what modern hardware is capable of for such a low cost.
>
> It seems unwise to continue using a Hackintosh VM. It is questionably legal at best (assuming a license was bought), and could result in consequences for either the ASF or individuals involved in setting that up. The "osk" string mentioned below is designed to be a warning about this. I don't mean to kick an anthill, but I searched the archives for this list and haven't seen that discussed until now. (I also don't think it's productive to debate the merits of Apple's licensing choices or their walled garden; we can't change that or the possible consequences either way.)
>
> There are several legal options available for running macOS builds/jobs. One is GitHub Actions, where the job could be manually dispatched via API from Jenkins if needed.
> https://docs.github.com/en/rest/actions/workflows?apiVersion=2022-11-28#create-a-workflow-dispatch-event
>
> There's also MacinCloud, which offers hosted Mac servers for far less than equivalently spec'ed Linux Azure/AWS/GCP VMs. https://www.macincloud.com/
>
> I also daily-drive macOS, and if there's any way I can help the team by testing issues/fixes/features/etc. on macOS, just let me know and I'd be happy to help on my end.
>
> Paul Irwin
>
> On Wed, Mar 19, 2025 at 8:22 AM Robert Muir wrote:
>> On Tue, Mar 18, 2025 at 12:53 PM Uwe Schindler wrote:
>>
>> > Possibly migrate away from VirtualBOX to KVM, but it's unclear if Hackintoshs work there.
>>
>> -device isa-applesmc,osk="ourhardworkbythesewordsguardedpleasedontsteal(c)AppleComputerInc"
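For reference, the workflow-dispatch call from the docs linked above could be driven from Jenkins with plain JDK HTTP. A hedged sketch: the workflow file name "macos-tests.yml" is a placeholder (not a real Lucene workflow), while the endpoint shape and headers come from the GitHub REST documentation. The request is only built here, not sent.

```java
import java.net.URI;
import java.net.http.HttpRequest;

/** Builds (but does not send) the GitHub REST request that dispatches a
 *  workflow run. The workflow file name passed in main() is a placeholder. */
public class DispatchSketch {
    static HttpRequest buildDispatch(String owner, String repo,
                                     String workflowFile, String ref, String token) {
        URI uri = URI.create("https://api.github.com/repos/" + owner + "/" + repo
                + "/actions/workflows/" + workflowFile + "/dispatches");
        return HttpRequest.newBuilder(uri)
                .header("Accept", "application/vnd.github+json")
                .header("Authorization", "Bearer " + token)
                // workflow_dispatch requires a git ref in the JSON body
                .POST(HttpRequest.BodyPublishers.ofString("{\"ref\":\"" + ref + "\"}"))
                .build();
    }

    public static void main(String[] args) {
        HttpRequest req = buildDispatch("apache", "lucene", "macos-tests.yml", "main",
                System.getenv().getOrDefault("GITHUB_TOKEN", "TOKEN"));
        System.out.println(req.method()); // POST
        System.out.println(req.uri());
    }
}
```

Sending it is one more line with java.net.http.HttpClient; a Jenkins job would typically read the token from a credentials binding rather than an environment variable.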
Re: New UI (Kotlin) module: maybe separate repo/build but included as dependency?
If we keep it in the same repository, maybe spending some time on simplifying our Gradle builds could come in handy. I was challenged to figure out how to add another "core module" that is not just a Java library. To give an idea of the current complexity: after my latest discoveries, our custom logic in render-javadoc.gradle is causing one of the issues in the UI module; the error is triggered by the use of alternative JDKs but thrown in the UI module. It took me some time to trace the error back, and it could be fixed by changing a single function.

With that said, I think we can reevaluate the current state. The unsupported OS (s390x) is probably something that cannot be fixed, and getting JetBrains to support a discontinued OS sounds ridiculous. Perhaps we should also reconsider the support / usage of s390x? What other known issues do we have?

The fix for the NPE in optimizeTask can be found at https://github.com/apache/solr/pull/3276. This should fix the alternative-JDK issues, I hope.

And thanks a lot to all the folks that opted in to this discussion. :)

Best,
Christos

On Mon, Mar 17, 2025 at 11:31 PM David Eric Pugh wrote:
> Yeah, I am a bit suss of having it be in its own repo. If it's in its own repo, folks won't know about it, and it'll struggle to gain traction. We've gone down the separate-repo path with some other efforts, like the Yet Another Solr Admin UI project.
>
> Having it not build on various architectures seems like we just need to put in a bit of work.
>
> The fact that it's failing on what appears to be a mainframe architecture doesn't seem too surprising. I'm actually more surprised that Solr runs on https://en.wikipedia.org/wiki/IBM_System/390! Do we have an active community of mainframe people running Solr???!
>
> On Monday, March 17, 2025 at 05:13:03 PM EDT, Houston Putman <hous...@apache.org> wrote:
>
> Jason, if you look at the Policeman Jenkins, you will see a lot of failures (because of alternate-JDK support). You will also see failures for https://ci-builds.apache.org/job/Solr/job/Solr-Test-main-s390x/ because that architecture isn't supported. There are a few other Jenkins failures, but the listed ones are problematic enough alone.
>
> As for separating out the project, I'm quite skeptical that it will be maintainable in the long run. The Solr Operator works because it's an entirely different project with code that runs completely independently of Solr (and interacts with a very small number of Solr APIs). If I'm correct, the UI is not just a standalone application; it will also be the default UI for Solr at some point in the future. If this is the case, I don't really see how we are going to ensure compatibility and guarantees in a way that us volunteers are going to be able to handle well. I think that people downloading the UI separately to use across multiple Solr instances is going to be a very rare occurrence compared to people using the embedded in-browser version. So IMO we should be developing to favor the latter over the former, and IMO that would mean having the code live in the Solr repo. Obviously that is causing a good amount of grief right now, but I don't think splitting it up is the right solution for that grief.
>
> - Houston
>
> On Mon, Mar 17, 2025 at 2:46 PM Jason Gerlowski wrote:
> > Can someone expand on the downsides of having the Kotlin UI code in the main repo? Or the concrete benefits of separating it out?
> >
> > I know there were some CI job failures around the time of the initial merge, but I haven't noticed it causing issues after that - did I miss some other discussion where those were covered?
> >
> > On Mon, Mar 17, 2025 at 2:47 PM Christos Malliaridis wrote:
> > > I was considering this from the beginning, and I was not aware of the whole stack we were using for building Solr. I thought that the GitHub workflows would be sufficient to guarantee that at least the build processes succeed in all our CI/CD environments, but I was wrong.
> > >
> > > Since the UI is generating files that are simply hosted / executed, it does not need the whole stack we currently have for Solr. Therefore, it does make sense to use a separate repository. The risk of it getting stale is indeed high, but I believe it is still easier to discontinue it if it becomes stale before the full replacement of the current UI.
> > >
> > > If we decide to move it to a separate project, we would have to somehow guarantee the replacement of the current UI once the new UI covers most (if not all) features currently present. This was one of the main discussions and reasons we tried to integrate it into the current repository. If we can maintain its connection to the Solr project, like with the operator, I believe it is an acceptable trade-off to move it out. I am personally
Re: [DISCUSS] Solr Operator 0.9.1 Release
Jason, all the fixes needed have been made and backported to release-0.9. Feel free to create RC2!

- Houston

On Mon, Mar 17, 2025 at 1:46 PM Jason Gerlowski wrote:
> I've uploaded some "Draft" release notes for 0.9.1; please take a look and review if you have a few minutes!
>
> https://cwiki.apache.org/confluence/display/SOLR/Solr+Operator+Release+Notes+v0.9.1
>
> Best,
>
> Jason
>
> On Mon, Mar 17, 2025 at 1:32 PM Jason Gerlowski wrote:
> > Excellent, looks like all of the blockers mentioned last week are cleared up; will start on the first RC now!
> >
> > Best,
> >
> > Jason
> >
> > On Thu, Mar 13, 2025 at 1:23 PM Houston Putman wrote:
> > > I'm adding https://github.com/apache/solr-operator/issues/761 as a blocker, but there is an easy fix! https://github.com/apache/solr-operator/pull/766
> > >
> > > - Houston
> > >
> > > On Thu, Mar 13, 2025 at 10:43 AM Jason Gerlowski <gerlowsk...@gmail.com> wrote:
> > > > In terms of outstanding blockers I'd like to get a fix in place for https://github.com/apache/solr-operator/issues/759 before we build an RC. It looks like a small syntax error in the bash script run by our 'setup-zk' container. Hoping I can have a fix in the next day or so.
> > > >
> > > > I'm tempted to list https://github.com/apache/solr-operator/issues/760 as a blocker as well, but I did some testing yesterday and struggled to reproduce the reported issue. If the reporter provides additional context+details soon, I might move to include it. But otherwise we probably don't want to hold the release on a response that might never come...
> > > >
> > > > Best,
> > > >
> > > > Jason
> > > >
> > > > On Thu, Mar 13, 2025 at 11:38 AM Jason Gerlowski <gerlowsk...@gmail.com> wrote:
> > > > > Alright, not hearing any objections so I'll proceed with a 0.9.1 release!
> > > > >
> > > > > The usual spiel/reminder about our release branch conventions:
> > > > >
> > > > > I am now preparing for a Solr Operator bugfix release from branch release-0.9.
> > > > >
> > > > > Please observe the normal rules for committing to this branch:
> > > > >
> > > > > * Before committing to the branch, reply to this thread and discuss why the fix needs backporting and how long it will take.
> > > > > * All issues accepted for backporting should be marked with Milestone v0.9.1 in GitHub, and issues that should delay the release must be marked as Blocker.
> > > > > * All patches that are intended for the branch should first be committed to the unstable branch, merged into the stable branch, and then into the current release branch.
> > > > > * Only GitHub issues with Milestone v0.9.1 and priority "Blocker" will delay a release candidate build.
> > > > >
> > > > > Best,
> > > > >
> > > > > Jason
> > > > >
> > > > > On Tue, Mar 11, 2025 at 11:27 AM Houston Putman <hous...@apache.org> wrote:
> > > > > > Yes absolutely!
> > > > > >
> > > > > > On Tue, Mar 11, 2025 at 10:11 AM Jason Gerlowski <gerlowsk...@gmail.com> wrote:
> > > > > > > Hey all,
> > > > > > >
> > > > > > > We've had a handful of GH Issues come in for the operator (most clustered around the SOLR-17690 regression) that are pretty big problems for folks hoping to adopt Operator 0.9.0. What do we think about doing a bugfix operator release in the next week or two?
> > > > > > >
> > > > > > > Pending any objections - is anyone actively working on a bugfix they'd like to get into an 0.9.1 release?
> > > > > > >
> > > > > > > Best,
> > > > > > >
> > > > > > > Jason
[Operator] [VOTE] Release the Solr Operator v0.9.1 RC2
Please vote for release candidate 2 for the Solr Operator v0.9.1

The artifacts can be downloaded from:
https://dist.apache.org/repos/dist/dev/solr/solr-operator/solr-operator-v0.9.1-RC2-revd35f0b50852fb24b1c74e955fda1e33e9912dcc6

You can run the full smoke tester, with instructions below. However, it is also encouraged to go and use the artifacts yourself in a test Kubernetes cluster. The smoke tester does not require you to download or install the RC artifacts before running. If you plan on just running the smoke tests, then ignore all other instructions.

The artifacts are laid out in the following way:
* solr-operator-v0.9.1.tgz - Contains the source release
* crds/ - Contains the CRD files
* helm-charts/ - Contains the Helm release packages

The RC Docker image can be found at:
apache/solr-operator:v0.9.1-rc2

The RC Helm repo can be added with:
helm repo add apache-solr-rc https://dist.apache.org/repos/dist/dev/solr/solr-operator/solr-operator-v0.9.1-RC2-revd35f0b50852fb24b1c74e955fda1e33e9912dcc6/helm-charts

You can install the RC Solr Operator and Solr CRDs and an example Solr Cloud with:

curl -sL0 "https://dist.apache.org/repos/dist/release/solr/KEYS" | gpg --import --quiet
# This will export your public keys into a format that helm can understand.
# Skip verification by removing "--verify" in the helm command below.
if ! (gpg --no-default-keyring --keyring=~/.gnupg/pubring.gpg --list-keys "60392455"); then gpg --export >~/.gnupg/pubring.gpg; fi
kubectl create -f https://dist.apache.org/repos/dist/dev/solr/solr-operator/solr-operator-v0.9.1-RC2-revd35f0b50852fb24b1c74e955fda1e33e9912dcc6/crds/all-with-dependencies.yaml || \
  kubectl replace -f https://dist.apache.org/repos/dist/dev/solr/solr-operator/solr-operator-v0.9.1-RC2-revd35f0b50852fb24b1c74e955fda1e33e9912dcc6/crds/all-with-dependencies.yaml
helm install --verify solr-operator apache-solr-rc/solr-operator --set image.tag=v0.9.1-rc2
helm install --verify example apache-solr-rc/solr

You can run the full smoke tester directly with this command (first checkout the release-0.9 branch of the solr-operator):

# First clear your go-mod cache to make sure old cache entries don't cause smoke test failures
make mod-clean
./hack/release/smoke_test/smoke_test.sh -v "v0.9.1" -s "d35f0b5" -i "apache/solr-operator:v0.9.1-rc2" -g "60392455" \
  -l 'https://dist.apache.org/repos/dist/dev/solr/solr-operator/solr-operator-v0.9.1-RC2-revd35f0b50852fb24b1c74e955fda1e33e9912dcc6'

If you want to run the smoke test with a specific version of Kubernetes, use the -k option with a full version tag (e.g. -k v1.19.3).
If you want to run the smoke test with a custom version of Solr, use the -t option with an official Solr image version (e.g. -t 8.10.0). However, for this smoke test you must use a Solr version that supports incremental backups (i.e. 8.9+).

Make sure you have the following installed before running the smoke test:
- Docker (give it enough memory and CPU to run ~12 containers, 3 of which are Solr nodes). More information on required resources can be found here: https://kind.sigs.k8s.io/docs/user/quick-start/#settings-for-docker-desktop
- Go 1.22
- Kubectl
- GnuPG
- Helm v3.4.0+
- Kustomize (v4.0.0+) This will be installed for you, but NOT upgraded if a lower version is already installed.
- yq
- jq
- coreutils (if using Mac OS)

The vote will be open for at least 72 hours, i.e. until 2025-03-22 20:00 UTC.

[ ] +1 approve
[ ] +0 no opinion
[ ] -1 disapprove (and reason why)

Here is my +1
Re: [DISCUSS] Community Virtual Meetup, March 2025
Hey all - reminder that our Community Meetup will be today at 9am PT. See you all there!

Best,

Jason

On Tue, Mar 11, 2025 at 9:50 AM Jason Gerlowski wrote:
>
> Hey all,
>
> Here's your monthly reminder that our Virtual Community Meetup will be held at 9am PT on the 3rd Wednesday of this month (Wednesday, March 19th).
>
> Please add your topics for discussion to the Meeting Notes page linked below and/or mention them here so others know what to expect.
>
> Meeting Notes: https://cwiki.apache.org/confluence/display/SOLR/2025-03-19+Meeting+notes
> Google Meet: https://meet.google.com/mzq-iwcc-xvw
>
> Hope to see you all there!
>
> Best,
>
> Jason