Re: Return all docs with same last-value when sorting by non-unique-value
Say order_by=likes descending, limit(4), and the likes are: 10, 9, 8, 7, 7, 7, 4, 2. Then we'd get back all documents with likes from 10 down to 7, so 6 docs. The same thing applies if the ties sort into the middle. It could also take a max-limit, so we don't get too many docs returned. Makes sense?

On Sat, Apr 15, 2017 at 8:24 AM, Alexandre Rafalovitch wrote:
> Not really making sense, no. Could you show an example? Also, you seem
> to imply that after sorting only the documents sorted at the end may
> have the same values. What if they have the same values but sort into the
> middle?
>
> Regards,
> Alex
>
> http://www.solr-start.com/ - Resources for Solr users, new and experienced
>
> On 15 April 2017 at 08:06, Dorian Hoxha wrote:
> > Hi friends,
> >
> > Say we're sorting by a non-unique value, and also have a limit(x). But
> > there are more docs at the end of the list(x) that have the same value. Is it
> > possible to return them even if the number of items will be > x?
> > This would make it possible to avoid sorting by (non-unique, unique)
> > values.
> >
> > Makes sense?
> >
> > Thank you,
> > Dorian
Sharding strategy for optimal NRT performance
Hi! I am looking for some advice on a sharding strategy that will produce optimal performance in the NRT search case for my setup. I have come up with a strategy that I think will work based on my experience, testing, and reading of similar questions on the mailing list, but I was hoping to run my idea by some experts to see if I am on the right track or completely off base.

*Let's start off with some background info on my use case:*

We are currently using Solr (5.5.2) with the classic master/slave setup. Because of our NRT requirements, the slave is pretty much only used for failover; all writes and reads go to the master (which I know is not ideal, but that's what we're working with!). We have 6 different indexes with completely different schemas for various searches in our application. We have just over 300 tenants, which currently all reside within the same index for each of our indexes. We separate our tenants at query time via a filter query on a tenant identifier (which works fine). The indexes are not tremendously large; they range from 1M documents to around 12M for the largest. Our load is not huge, as search is not the core functionality of our application but merely a tool to get users to what they are looking for in the app. I believe our peak load barely goes over 1 QPS.

Even though our number of documents isn't super high, we do some pretty complex faceting, and block joins in some cases, which, along with crappy hardware in our data center (no SSDs), initially led to some pretty poor query times for our customers. This was due to the fact that we are constantly indexing throughout the day (a job that runs once per minute), and we auto soft commit (openSearcher=true) every 1 minute. Because of the nature of our application, NRT updates are necessary. As we all know, opening searchers this frequently has the drawback of invalidating all of our searcher-based caches, causing query times to be erratic and slower on average.
With our current setup, we have solved our query performance problems by setting up autowarming, both on the filter cache and via static warming queries.

*The problem:*

So now for the problem. While we are now running great from a performance perspective, we are receiving complaints from customers that the changes they make are slow to be reflected in search. Because of the nature of our application, this has a significant impact on their user experience and is an issue we need to solve. Overall, we would like to reduce our NRT visibility from the minutes we have now down to seconds. The problem is doing this in a way that won't significantly affect our query performance. We are already seeing occasional maxWarmingSearchers warnings in our logs with our current setup, so just indexing more frequently is not a viable solution. In addition, autowarming is itself problematic for the NRT use case, as the new searcher won't start serving requests until it is fully warmed anyway, which runs counter to the goal of decreasing the time it takes for new documents to become visible in search. And so this is the predicament we find ourselves in: we can index (and soft commit) more frequently, but then we would have to remove or greatly decrease our autowarming, which would destroy our search performance. Obviously there is some give and take here - we can't have true NRT search with optimal query performance - but I am hoping to find a solution that will provide acceptable results for both.

*Proposed solution:*

I have done a lot of research and experimentation on this issue and have started coming up with what I believe will be a decent solution to the aforementioned problem. First off, I would like to make the move over to SolrCloud.
We had been contemplating this for a while anyway, as we currently have no load balancing at all (since our slave is just used for failover), but I am also thinking that by using the right sharding strategy we can improve our NRT experience as well.

I first started looking at the standard compositeId routing. While it can ensure that all of a single tenant's data is located on the same shard, there is a large discrepancy between the amounts of data our tenants have, so our shards would be very unevenly distributed in terms of number of documents. Ideally, we would like all of our tenants to be isolated from a performance perspective (from a security perspective we are not really concerned, as all of our queries already have a tenant-identifier filter query). Basically, we don't want tiny tenant A to be screwed over because they were unlucky enough to land on huge tenant B's shard. We do know the footprint of each tenant in terms of number of documents, so technically we could work out a sharding strategy manually that would evenly distribute the tenants based on number of documents, but since we have 6 different indexes, and with each index the tenant's document distribution will be different
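As an aside, the manual size-aware distribution described above can be sketched with a simple greedy heuristic (the longest-processing-time rule): sort tenants by document count descending and always place the next tenant on the currently lightest shard. This is only an illustrative sketch - the tenant names and counts are made up, and in practice the resulting assignment would be applied per index, e.g. by routing each tenant to its assigned shard.

```python
def assign_tenants(tenant_doc_counts, num_shards):
    """Greedy balancing (LPT heuristic): sort tenants by doc count
    descending, then always place the next tenant on the shard that
    currently holds the fewest documents."""
    shards = [{"docs": 0, "tenants": []} for _ in range(num_shards)]
    for tenant, count in sorted(tenant_doc_counts.items(), key=lambda kv: -kv[1]):
        target = min(shards, key=lambda s: s["docs"])  # lightest shard so far
        target["tenants"].append(tenant)
        target["docs"] += count
    return shards

# Hypothetical tenant footprints, not from the thread.
counts = {"tenantA": 1_200_000, "tenantB": 900_000, "tenantC": 150_000,
          "tenantD": 120_000, "tenantE": 80_000, "tenantF": 50_000}
for shard in assign_tenants(counts, 2):
    print(shard["docs"], shard["tenants"])
```

Because tenant sizes in this example happen to pack evenly, both shards end up with 1.25M documents; with real data the heuristic only approximates balance, but it is usually close when no single tenant dominates.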
Re: Return all docs with same last-value when sorting by non-unique-value
I don't think Solr supports this. Maybe you could do it in a custom search component, by cutting off in Solr at max-limit and then cutting again in the component down to limit. Or just request more documents and deal with this on the middleware side.

Regards,
Alex.
http://www.solr-start.com/ - Resources for Solr users, new and experienced

On 15 April 2017 at 12:40, Dorian Hoxha wrote:
> Say order_by=likes descending, limit(4), and the likes are: 10, 9, 8, 7, 7, 7, 4, 2.
> Then we'd get back all documents from 10 down to 7, so 6 docs.
> The same thing if they sort in the middle.
> It can also have a max-limit, so we don't get too many docs returned.
>
> Makes sense?
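The middleware option Alex mentions could look like the following sketch: over-fetch up to max-limit rows from Solr (already sorted descending on the field), then trim the list so it ends at limit unless the trailing docs tie with the last doc inside the limit. This is illustrative only - the function name and the dict shape of the docs are made up.

```python
def extend_through_ties(docs, limit, max_limit, key="likes"):
    """Cut a descending-sorted result list at `limit`, but keep extra
    docs that tie with the last doc inside the limit, up to `max_limit`."""
    if len(docs) <= limit:
        return docs
    cutoff = docs[limit - 1][key]  # value of the last doc within the limit
    end = limit
    while end < len(docs) and end < max_limit and docs[end][key] == cutoff:
        end += 1
    return docs[:end]

# Dorian's example: limit(4) over likes 10,9,8,7,7,7,4,2 yields 6 docs.
docs = [{"likes": v} for v in [10, 9, 8, 7, 7, 7, 4, 2]]
print(len(extend_through_ties(docs, limit=4, max_limit=10)))  # prints 6
```

The max_limit argument plays the same capping role Dorian proposes, so a pathological field with thousands of ties cannot blow up the response size.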
Re: Need help with auto-suggester
Hi - just wondering, what would be the difference between using a blob/binary field to store the JSON rather than simply using a string field? Thanks

On Sat, Apr 15, 2017 at 2:50 AM, Walter Underwood wrote:
> We recently needed multiple values in the payload, so I put a JSON blob in
> there. It comes back as a string, so you have to decode that JSON
> separately. Otherwise, it was a pretty clean solution.
>
> wunder
> Walter Underwood
> wun...@wunderwood.org
> http://observer.wunderwood.org/ (my blog)
>
>> On Apr 14, 2017, at 1:57 PM, OTH wrote:
>>
>> Thanks, that works! But is it possible to have multiple payloadFields?
>>
>> On Sat, Apr 15, 2017 at 1:23 AM, Marek Tichy wrote:
>>
>>> Utilize the payload field.
>>>
>>>> I don't need to search multiple fields; I need to search just one field
>>>> but get the corresponding values from another field as well.
>>>> I.e. if a user is searching for cities, I wouldn't need the countries to
>>>> also be searched. However, when the list of cities is displayed, I need
>>>> their corresponding countries to also be displayed.
>>>> This is obviously possible with the regular Solr index, but I can't figure
>>>> out how to do it with the Suggester index, which seems to only be able to
>>>> have one field.
>>>> Thanks
>>>>
>>>> On Fri, Apr 14, 2017 at 8:46 AM, Binoy Dalal wrote:
>>>>
>>>>> You can create a copy field and copy to it from all the fields you want to
>>>>> retrieve the suggestions from and then use that field with the suggester.
>>>>>
>>>>> On Thu 13 Apr, 2017, 23:21 OTH, wrote:
>>>>>
>>>>>> Hello,
>>>>>>
>>>>>> I've followed the steps here to set up auto-suggest:
>>>>>> https://lucidworks.com/2015/03/04/solr-suggester/
>>>>>>
>>>>>> So basically I configured the auto-suggester in solrconfig.xml, where I
>>>>>> told it which field in my index needs to be used for auto-suggestion.
>>>>>>
>>>>>> The problem is:
>>>>>> When the user searches in the text box in the front end, if they are
>>>>>> searching for cities, I also need the countries to appear in the drop-down
>>>>>> list which the user sees.
>>>>>> The field which is being searched is only 'city' here. However, I need to
>>>>>> retrieve the corresponding value in the 'country' field as well.
>>>>>>
>>>>>> How could I do this using the suggester?
>>>>>>
>>>>>> Thanks
>>>>>
>>>>> --
>>>>> Regards,
>>>>> Binoy Dalal
Re: Need help with auto-suggester
JSON does not have a binary data type, so true BLOBs are not possible in JSON. Sorry, I wasn't clear.

The payload I use is JSON in a string. It looks like this:

    suggest: {
      skill_names_infix: {
        m: {
          numFound: 10,
          suggestions: [
            {
              term: "microsoft office",
              weight: 14,
              payload: "{"count": 1534255, "id": "microsoft office"}"
            },
            {
              term: "microsoft excel",
              weight: 13,
              payload: "{"count": 940151, "id": "microsoft excel"}"
            },

wunder
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.org/ (my blog)

> On Apr 15, 2017, at 9:07 AM, OTH wrote:
>
> Hi - just wondering, what would be the difference between using a blob /
> binary field to store the JSON rather than simply using a string field?
> Thanks
Re: Need help with auto-suggester
Sorry, that was formatted. The quotes are actually escaped, like this:

    {"term":"microsoft office","weight":14,"payload":"{\"count\": 1534255, \"id\": \"microsoft office\"}"}

wunder
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.org/ (my blog)

> On Apr 15, 2017, at 10:40 AM, Walter Underwood wrote:
>
> JSON does not have a binary data type, so true BLOBs are not possible in
> JSON. Sorry, I wasn't clear.
>
> The payload I use is JSON in a string.
Re: Need help with auto-suggester
I see, thanks. So I'm just using a string field to store the JSON.

On Sat, Apr 15, 2017 at 11:15 PM, Walter Underwood wrote:
> Sorry, that was formatted. The quotes are actually escaped, like this:
>
> {"term":"microsoft office","weight":14,"payload":"{\"count\": 1534255, \"id\": \"microsoft office\"}"}
>
> wunder
> Walter Underwood
> wun...@wunderwood.org
> http://observer.wunderwood.org/ (my blog)
Re: Long GC pauses while reading Solr docs using Cursor approach
Chetas Joshi wrote:
> Thanks for the insights into the memory requirements. Looks like the cursor
> approach is going to require a lot of memory for millions of documents.

Sorry, that is a premature conclusion from your observations.

> If I run a query that returns only 500K documents, still keeping 100K docs
> per page, I don't see long GC pauses.

500K docs is far less than your worst case of 80*100K. You are not keeping the effective page size constant across your tests. You need to do that in order to conclude that it is the result set size that is the problem.

> So it is not really the number of rows per page but the overall number of
> docs.

It is the effective maximum number of document results handled at any point (at the merger, really) during the transaction. If your page size is 100K and you match 8M documents, then the maximum is 8M (as you indirectly calculated earlier). If you match 800M documents, the maximum is _still_ 8M.

(Note: okay, it is not just the maximum number of results, as the internal structures for determining the result sets at the individual nodes are allocated from the page size. However, that does not affect the merging process.)

The high number, 8M, might be the reason for your high GC activity. Effectively 2 or 3 times that many tiny objects need to be allocated, be alive at the same time, and then be de-allocated. A very short time after de-allocation, a new bunch needs to be allocated, so a guess is that the garbage collector has a hard time keeping up with this pattern. One strategy for coping is to allocate more memory and hope for the barrage to end, which would explain your jump in heap. But I'm in guess-land here.

Hopefully it is simple for you to turn the page size way down - to 10K or even 1K. Why don't you try that, then see how it affects speed and memory requirements?

- Toke
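Toke's suggestion - walking the result set with a much smaller page size via cursorMark - could be sketched as below. This is a hypothetical sketch: `fetch_page` stands in for the real HTTP call to `/select` with `cursorMark=...` and a sort that includes the uniqueKey; the loop uses Solr's actual termination rule, which is that the server returns the same cursor mark you sent when there are no more results.

```python
def cursor_fetch_all(fetch_page, page_size=1000):
    """Walk a cursor: request pages until the returned cursor mark
    equals the one we sent, which signals the end of the result set."""
    cursor, docs = "*", []          # "*" is the initial cursorMark
    while True:
        next_cursor, page = fetch_page(cursor, page_size)
        docs.extend(page)
        if next_cursor == cursor:   # unchanged mark -> done
            break
        cursor = next_cursor
    return docs

# Fake backend standing in for Solr, just to exercise the loop.
data = list(range(10))
def fake_fetch(cursor, rows):
    start = 0 if cursor == "*" else int(cursor)
    page = data[start:start + rows]
    return (cursor if not page else str(start + len(page))), page

print(len(cursor_fetch_all(fake_fetch, page_size=3)))  # prints 10
```

With a real deployment, shrinking `page_size` from 100K to 1K shrinks the per-page allocation on every node and at the merger, at the cost of more round trips, which is exactly the trade-off Toke proposes measuring.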
Re: Using BasicAuth with SolrJ Code
Ok, thank you.

Regards,
Edwin

On 15 April 2017 at 08:05, Noble Paul wrote:
> I'll test with this and let you know
>
> On Apr 13, 2017 23:06, "Zheng Lin Edwin Yeo" wrote:
>
>> The security.json which I'm using is the default one that is available from
>> the Solr documentation at
>> https://cwiki.apache.org/confluence/display/solr/Basic+Authentication+Plugin:
>>
>> {
>>   "authentication":{
>>     "blockUnknown": true,
>>     "class":"solr.BasicAuthPlugin",
>>     "credentials":{"solr":"IV0EHq1OnNrj6gvRCwvFwTrZ1+z1oBbnQdiVC3otuq0= Ndd7LKvVBAaZIF0QAVi1ekCfAJXr1GGfLtRUXhgrF8c="}
>>   },
>>   "authorization":{
>>     "class":"solr.RuleBasedAuthorizationPlugin",
>>     "user-role":{"solr":"admin"},
>>     "permissions":[{"name":"security-edit", "role":"admin"}]
>>   }
>> }
>>
>> Regards,
>> Edwin
>>
>> On 13 April 2017 at 19:53, Noble Paul wrote:
>>
>>> That looks good. Can you share the security.json (commenting out
>>> anything that's sensitive, of course)?
>>>
>>> On Wed, Apr 12, 2017 at 5:10 PM, Zheng Lin Edwin Yeo wrote:
>>>
>>>> This is what I get when I run the code:
>>>>
>>>> org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error
>>>> from server at http://localhost:8983/solr/testing: Expected mime type
>>>> application/octet-stream but got text/html.
>>>>
>>>> Error 401 require authentication
>>>> HTTP ERROR 401
>>>> Problem accessing /solr/testing/update. Reason: require authentication
>>>>
>>>>   at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:578)
>>>>   at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:279)
>>>>   at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:268)
>>>>   at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:149)
>>>>   at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:106)
>>>>   at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:71)
>>>>   at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:85)
>>>>   at testing.indexing(testing.java:2939)
>>>>   at testing.main(testing.java:329)
>>>> Exception in thread "main"
>>>> org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error
>>>> from server at http://localhost:8983/solr/testing: Expected mime type
>>>> application/octet-stream but got text/html.
>>>>
>>>> Error 401 require authentication
>>>> HTTP ERROR 401
>>>> Problem accessing /solr/testing/update. Reason: require authentication
>>>>
>>>>   at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:578)
>>>>   at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:279)
>>>>   at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:268)
>>>>   at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:149)
>>>>   at org.apache.solr.client.solrj.SolrClient.commit(SolrClient.java:484)
>>>>   at org.apache.solr.client.solrj.SolrClient.commit(SolrClient.java:463)
>>>>   at testing.indexing(testing.java:3063)
>>>>   at testing.main(testing.java:329)
>>>>
>>>> Regards,
>>>> Edwin
>>>>
>>>> On 12 April 2017 at 14:28, Noble Paul wrote:
>>>>
>>>>> Can you paste the stack trace here?
>>>>>
>>>>> On Tue, Apr 11, 2017 at 1:19 PM, Zheng Lin Edwin Yeo wrote:
>>>>>
>>>>>> I found from Stack Overflow that we should declare it this way:
>>>>>> http://stackoverflow.com/questions/43335419/using-basicauth-with-solrj-code
>>>>>>
>>>>>> SolrRequest req = new QueryRequest(new SolrQuery("*:*")); // create a new request object
>>>>>> req.setBasicAuthCredentials(userName, password);
>>>>>> solrClient.request(req);
>>>>>>
>>>>>> Is that correct?
>>>>>>
>>>>>> For this, the NullPointerException is not coming out, but the SolrJ is
>>>>>> still not able to get authenticated. I'm still getting error code 401 even
>>>>>> after putting in this code.
>>>>>>
>>>>>> Any advice on which part of the SolrJ code we should place this code in?
>>>>>>
>>>>>> Regards,
>>>>>> Edwin
>>>>>>
>>>>>> On 10 April 2017 at 23:50, Zheng Lin Edwin Yeo wrote:
>>>>>>
>>>>>>> Hi,
>>>>>>>
>>>>>>> I have just set up the Basic Authentication Plugin in Solr 6.4.2 on
>>>>>>> SolrCloud, and I am trying to modify my SolrJ code so that the code can go
>>>>>>> through the authentication and do the indexing.
>>>>>>>
>>>>>>> I tried using the following code
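Edwin's original message is cut off above, but the 401s in the quoted stack traces come down to one thing: every request (the `add` and the `commit` both fail here) must carry HTTP Basic credentials, which is what `setBasicAuthCredentials` attaches per request in SolrJ. As background, a hedged sketch of the header any client ends up sending - `solr`/`SolrRocks` are assumed to be the defaults behind the credentials hash in the reference-guide security.json quoted above:

```python
import base64

def basic_auth_header(user, password):
    """Build the HTTP Basic Authorization header: the literal string
    'user:password' base64-encoded, prefixed with 'Basic '."""
    token = base64.b64encode(f"{user}:{password}".encode("utf-8")).decode("ascii")
    return {"Authorization": f"Basic {token}"}

print(basic_auth_header("solr", "SolrRocks"))
# prints: {'Authorization': 'Basic c29scjpTb2xyUm9ja3M='}
```

If a request in the client is issued without this header while `blockUnknown` is true, Solr answers exactly the "Error 401 require authentication" HTML page shown in the stack traces.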