probably won't be horrible.
> > >
> > > Things to consider:
> > >
> > > * How often are documents assigned to new users?
> > > * How many documents does a user typically have?
> > > * Do you have a 'trigger' in your app that tells you a user has been
> > >   assigned a new doc?
> > >
> > > You can use a pseudo join to implement this sort of thing - have a
> > > different core that contains the 'permissions', either a document that
> > > says "this document ID is accessible via these users" or "this user is
> > > allowed to see these document IDs". You are keeping your fast moving
> > > (authorization) data separate from your slow moving (the docs
> > > themselves) data.
> > >
> > > You can then say "find me all documents that are accessible via user X"
> > >
> > > Upayavira
> > >
>
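A minimal sketch of that pseudo-join, assuming a hypothetical 'acl' core that
holds permission documents with 'user_id' and 'doc_id' fields alongside the
main core (all names here are illustrative, not from this thread):

    fq={!join fromIndex=acl from=doc_id to=id}user_id:X

The main query stays on the slow-moving document core; only the small 'acl'
core has to be reindexed when permissions change.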
--
Thanks & Regards
Umesh Prasad
Tech Lead @ flipkart.com
in.linkedin.com/pub/umesh-prasad/6/5bb/580/
eportValve.java:103)
> >>-
> >>
> org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:950)
> >>-
> >>
> org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:116)
> >>
> >>
> >> The cache itself is very minimalistic
> >>
> >>
> >> <!-- element names below are inferred from the stock solrconfig.xml
> >>      cache section; the size attributes did not survive -->
> >> <filterCache autowarmCount="0"/>
> >> <queryResultCache initialSize="512" autowarmCount="0"/>
> >> <documentCache autowarmCount="0"/>
> >> <fieldValueCache autowarmCount="256" showItems="10"/>
> >> <cache name="perSegFilter" initialSize="0" autowarmCount="10"
> >> regenerator="solr.NoOpRegenerator"/>
> >> <enableLazyFieldLoading>true</enableLazyFieldLoading>
> >> <queryResultWindowSize>20</queryResultWindowSize>
> >> <queryResultMaxDocsCached>200</queryResultMaxDocsCached>
> >>
> >> Solr version is 4.10.3
> >>
> >> Any help is appreciated!
> >>
> >> sergey
>
--
Thanks & Regards
Umesh Prasad
Tech Lead @ flipkart.com
in.linkedin.com/pub/umesh-prasad/6/5bb/580/
at 04:05, Sergey Shvets wrote:
> LRUCache
It
--
Thanks & Regards
Umesh Prasad
Tech Lead @ flipkart.com
in.linkedin.com/pub/umesh-prasad/6/5bb/580/
grouping
> >
> > However, you will need to form the group queries beforehand.
> >
> > Thanks & Regards
> > Umesh Prasad
> > Search
>
> > Lead@
>
> >
> > in.linkedin.com/pub/umesh-prasad/6/5bb/580/
>
> have seen this page before but i
10 rows altogether. Is this possible at all,
> > using a single query?
> >
> >
> >
> > --
> > View this message in context:
> http://lucene.472066.n3.nabble.com/Selectively-setting-the-number-of-returned-SOLR-rows-per-field-based-on-field-value-tp4153441.html
> > Sent from the Solr - User mailing list archive at Nabble.com.
>
--
Thanks & Regards
Umesh Prasad
Search l...@flipkart.com
in.linkedin.com/pub/umesh-prasad/6/5bb/580/
issa yapar...
> --
> View this message in context:
> http://lucene.472066.n3.nabble.com/Grouping-based-on-multiple-filters-criterias-tp4153462.html
> Sent from the Solr - User mailing list archive at Nabble.com.
>
--
Thanks & Regards
Umesh Prasad
Search l...@flipkart.com
in.linkedin.com/pub/umesh-prasad/6/5bb/580/
> To: solr-user@lucene.apache.org
> Subject: Substring and Case Insensitive Search
>
>
> Hi,
>
> I am very new to Solr. How can I make Solr search on a string field both
> case-insensitive and substring-capable?
>
> Thanks,
> Nishanth
>
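One common recipe (a sketch, not from this thread; the type name and gram
sizes are illustrative) is a TextField that lowercases and then n-grams the
value at index time, and only lowercases at query time:

    <fieldType name="text_substring" class="solr.TextField">
      <analyzer type="index">
        <tokenizer class="solr.KeywordTokenizerFactory"/>
        <filter class="solr.LowerCaseFilterFactory"/>
        <!-- index every 2..15 character fragment for substring matching -->
        <filter class="solr.NGramFilterFactory" minGramSize="2" maxGramSize="15"/>
      </analyzer>
      <analyzer type="query">
        <tokenizer class="solr.KeywordTokenizerFactory"/>
        <filter class="solr.LowerCaseFilterFactory"/>
      </analyzer>
    </fieldType>

The n-gram index grows quickly, so keep maxGramSize as small as your queries
allow.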
--
Thanks & Regards
Umesh Prasad
Search l...@flipkart.com
in.linkedin.com/pub/umesh-prasad/6/5bb/580/
since the Solr
> > logs do.
> >
> > http://www.slf4j.org/manual.html
> >
> > See also the wiki page on logging jars and Solr:
> >
> > http://wiki.apache.org/solr/SolrLogging
> >
> > Thanks,
> > Shawn
> >
> >
>
--
Thanks & Regards
Umesh Prasad
Search l...@flipkart.com
in.linkedin.com/pub/umesh-prasad/6/5bb/580/
servers a plugin
> >>> might
> >>> use, etc.
> >>>
> >>> Previously I was able to do this in solr.xml because it can do system
> >>> property substitution when defining which properties file to use for a
> >>> core.
> >>>
> >>> Now I'm not sure how to do this with core discovery, since the core is
> >>> discovered based on this file, and now the file needs to contain things
> >>> that
> >>> are specific to that core, like name, which previously were defined in
> >>> the
> >>> xml definition.
> >>>
> >>> Is there a way I can plugin some code that gets run before any schema
> or
> >>> solrconfigs are parsed? That way I could write a property loader that
> >>> adds
> >>> properties from ${solr.env}.properties to the JVM system properties.
> >>>
> >>> Thanks!
> >>> Ryan
> >
> >
>
--
Thanks & Regards
Umesh Prasad
Search l...@flipkart.com
in.linkedin.com/pub/umesh-prasad/6/5bb/580/
> >> > these comments as a new document inside the main document (tree-based
> > >> > structure). What is your suggestion for this case? I think it is a
> > common
> > >> > case of indexing webpages these days so probably I am not the only
> one
> >
>
> and get params by using
> rb.req.getParams()
>
> but how can I set params at search component?
>
> Thanks,
> Chunki.
>
>
--
Thanks & Regards
Umesh Prasad
Search l...@flipkart.com
in.linkedin.com/pub/umesh-prasad/6/5bb/580/
> (missing some "required" field).
> When I send the batch of docs to solr using HttpSolrServer.add(Collection<
> SolrInputDocument> docs) I am getting the following general exception:
>
> "org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException:
> Server at http://172.23.3.91:8210/solr/template returned non ok
> status:500,
> message:Server Error"
>
> When I check Solr log, I can identify exactly which is the corrupted
> document.
>
> My question:
> Is it possible to identify the problematic document at the client side?
> (for
> recovery purposes)
>
> Thanks,
> Liram
>
>
> Email secured by Check Point
>
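A common client-side fallback (a sketch, not from the thread; it assumes the
documents carry an "id" field) is to retry a failed batch one document at a
time so the rejected document surfaces on the client:

    import java.util.Collection;
    import org.apache.solr.client.solrj.impl.HttpSolrServer;
    import org.apache.solr.common.SolrInputDocument;

    // Fast path: send the whole batch in one round trip. On failure, fall
    // back to per-document adds to isolate the corrupted document.
    void addWithFallback(HttpSolrServer server,
                         Collection<SolrInputDocument> docs) throws Exception {
        try {
            server.add(docs);
        } catch (Exception batchFailure) {
            for (SolrInputDocument doc : docs) {
                try {
                    server.add(doc);
                } catch (Exception e) {
                    // this is a problematic document; log or queue it for recovery
                    System.err.println("rejected: " + doc.getFieldValue("id"));
                }
            }
        }
    }

The per-document pass costs one request per doc, so it is only worth running
after a batch has already failed.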
--
---
Thanks & Regards
Umesh Prasad
code
> > to
> > field value. But I'd like to believe that Solr allows for a cleaner
> > solution.
> > I could think about either: a) custom query parameter (but I guess, it
> > will
> > require modifying request handlers, etc. which is highly undesirable) b)
> > getting value from other field (we obviously have 'language' field and we
> > do not have mixed-language records). If it is possible, could you please
> > describe the mechanism for doing this or point to relevant code examples?
> > Thank you very much and have a good day!
> >
> >
>
--
---
Thanks & Regards
Umesh Prasad
t. Any commit can drastically change the
> Lucene
> > > > document id numbers. It would be too expensive to determine which
> > > > numbers haven't changed. That means Solr must throw away all cache
> > > > information on commit.
> > > >
> > > > Two of Solr's caches support autowarming. Those caches use queries
> as
> > > > keys and results as values. Autowarming works by re-executing the
> top
> > N
> > > > queries (keys) in the old cache to obtain fresh Lucene document id
> > > > numbers (values). The cache code does take *keys* from the old cache
> > > > for the new cache, but not *values*. I'm very sure about this, as I
> > > > wrote the current (and not terribly good) LFUCache.
> > > >
> > > > Thanks,
> > > > Shawn
> > > >
> > > >
> > >
> >
>
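For reference, autowarming is enabled per cache in solrconfig.xml; the count
below is illustrative:

    <filterCache class="solr.FastLRUCache" size="512" initialSize="512"
                 autowarmCount="128"/>

On each commit, the top 128 filter queries (keys) from the old cache are
re-executed against the new searcher to rebuild their values.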
--
---
Thanks & Regards
Umesh Prasad
nly
>
>
>
>
>
> --
> View this message in context:
> http://lucene.472066.n3.nabble.com/Shuffle-results-a-little-tp1891206p4149973.html
> Sent from the Solr - User mailing list archive at Nabble.com.
>
--
---
Thanks & Regards
Umesh Prasad
help (and it may have a negative impact if you have a lot of
> segments).
> >>>> In
> >>>> fact, the only use case where the bloom filter can help is when your
> >>>> term dictionary does not fit in RAM, which is rarely the case.
> >>>>
>
that you think you are.
> > > >
> > > > Second, what happens when you query with &debug=query? That'll show
> you
> > > > what the search string looks like.
> > > >
> > > > If that doesn't help, please post the results of looking at those
> > things
> > > > here, that'll provide some information for us to work with.
> > > >
> > > > Best,
> > > > Erick
> > > >
> > > >
> > > > On Fri, May 30, 2014 at 3:32 AM, sunshine glass <
> > > > sunshineglassof2...@gmail.com> wrote:
> > > >
> > > > > Hi Folks,
> > > > >
> > > > > Any updates ??
> > > > >
> > > > >
> > > > > On Wed, May 28, 2014 at 12:13 PM, sunshine glass <
> > > > > sunshineglassof2...@gmail.com> wrote:
> > > > >
> > > > > > Dear Team,
> > > > > >
> > > > > > How can I handle compound-word searches in Solr?
> > > > > > How can I search "hand bag" if I have "handbag" in my index? While
> > > > > > using shingles in the query analyzer, the query "ice cube" creates
> > > > > > three tokens: "ice", "cube", "icecube". Only "ice" and "cube" are
> > > > > > matched, not "icecube"; i.e., the pair is not working even though I
> > > > > > am using the shingle filter.
> > > > > >
> > > > > > Here's the schema config.
> > > > > >
> > > > > >
> > > > > >    1. <fieldType class="solr.TextField" positionIncrementGap="100">
> > > > > >       <!-- element names below are inferred from the surviving
> > > > > >            attributes and the usual analyzer layout; classes shown
> > > > > >            as "..." did not survive -->
> > > > > >    2. <analyzer type="index">
> > > > > >    3. <filter class="solr.SynonymFilterFactory"
> > > > > >       synonyms="synonyms_text_prime_index.txt" ignoreCase="true"
> > > > > >       expand="true"/>
> > > > > >    4. <charFilter class="solr.HTMLStripCharFilterFactory"/>
> > > > > >    5. <tokenizer class="..."/>
> > > > > >    6. <filter class="solr.ShingleFilterFactory" maxShingleSize="2"
> > > > > >       outputUnigrams="true" tokenSeparator=""/>
> > > > > >    7. <filter class="solr.WordDelimiterFilterFactory" catenateWords="1"
> > > > > >       catenateNumbers="1" catenateAll="1" preserveOriginal="1"
> > > > > >       generateWordParts="1" generateNumberParts="1"/>
> > > > > >    8. <filter class="..."/>
> > > > > >    9. <filter class="solr.SnowballPorterFilterFactory"
> > > > > >       language="English" protected="protwords.txt"/>
> > > > > >   10. </analyzer>
> > > > > >   11.
> > > > > >   12. <analyzer type="query">
> > > > > >   13. <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt"
> > > > > >       ignoreCase="true" expand="true"/>
> > > > > >   14. <filter class="solr.ShingleFilterFactory" maxShingleSize="2"
> > > > > >       outputUnigrams="true" tokenSeparator=""/>
> > > > > >   15. <filter class="solr.WordDelimiterFilterFactory"
> > > > > >       preserveOriginal="1"/>
> > > > > >   16. <filter class="..."/>
> > > > > >   17. <filter class="solr.SnowballPorterFilterFactory"
> > > > > >       language="English" protected="protwords.txt"/>
> > > > > >   18. </analyzer>
> > > > > >   19. </fieldType>
> > > > > >
> > > > > >Any help is appreciated.
> > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
--
---
Thanks & Regards
Umesh Prasad
see if this is a common problem in
> > solr?
> >
> > Regards,
> >
> > Ali
> >
> >
> >
> > --
> > View this message in context:
> >
> http://lucene.472066.n3.nabble.com/Solr-gives-the-same-fieldnorm-for-two-different-size-fields-tp4150418p4150430.html
> > Sent from the Solr - User mailing list archive at Nabble.com.
> >
>
--
---
Thanks & Regards
Umesh Prasad
2014 at 5:07 PM, Smitha Rajiv <
> smitharaji...@gmail.com
> >> >
> >> > > wrote:
> >> > > > Hi,
> >> > > >
> >> > > >
> >> > > > I need some help on Solr Faceting.
> >> > > >
> >> > > >
> >> > > > How do I facet on two fields at the same time to get combination
> >> > > > facets and their counts?
> >> > > >
> >> > > > I'm using the query below to get facets combining language and
> >> > > > binding. But right now I'm only getting the selected facet value and
> >> > > > its count in each field's facet list. E.g. in the language facets
> >> > > > the query returns only "English" and its count. Instead I need the
> >> > > > other language facets that satisfy the binding type of paperback.
> >> > > >
> >> > > >
> >> > > >
> >> > > >
> >> > >
> >> >
> >>
> http://localhost:8080/solr/collection1/select?q=software%20testing&fq=language%3A(%22English%22)&fq=Binding%3A(%22paperback%22)&facet=true&facet.mincount=1
> >> > > >
> >> > > >
> >> > >
> >> >
> >>
> &facet.field=Language&facet.field=latestArrivals&facet.field=Binding&wt=json&indent=true&defType=edismax&
> >> > > > json.nl=map
> >> > > >
> >> > > >
> >> > > >
> >> > > > Please provide me your inputs.
> >> > > >
> >> > > >
> >> > > > Thanks & Regards,
> >> > > >
> >> > > > Smitha
> >> > >
> >> >
> >>
>
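One way to get the combination counts directly (not from this thread; field
names as in the query above) is pivot faceting:

    &facet=true&facet.pivot=Language,Binding

which returns, under each Language value, the Binding values and their
counts. Note that facet.pivot only works distributed from Solr 4.10 onwards.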
--
---
Thanks & Regards
Umesh Prasad
for (int i = 0; i < matches.length; i++) {
    matches[i] = parentIter.nextDoc();
}
// You will need the SortedIntDocSet impl, else DocSet interaction in some
// facet queries fails later.
return new SortedIntDocSet(matches);
}
On 22 July 2014 19:59, Umesh Prasad wrote:
> Query parentFilterQuery = new TermQuery(new T
> the scoring fromt the child as well as the full child document. Is this
> possible?
>
> Cheers,
> Bjørn
>
> 2014-07-18 19:00 GMT+02:00 Umesh Prasad :
>
> > Comments inline
> >
> >
> > On 16 July 2014 20:31, Bjørn Axelsen
> > wrote:
> >
, ngram-based queries will result in a lot of clauses: on the order of n^2
for just one field, and if you are searching across m fields it grows to
m * n^2.
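(For a 20-character query, for example, that is roughly 20^2/2 = 200 ngram
clauses for a single field, and about 1,000 across five fields.)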
On 20 July 2014 10:31, Umesh Prasad wrote:
> Please ignore my earlier answer .. I had missed that you wanted a ma
abble.com/Match-indexed-data-within-query-string-tp4147896p4147958.html
> Sent from the Solr - User mailing list archive at Nabble.com.
>
--
---
Thanks & Regards
Umesh Prasad
ments is
> in a real parent-child relationship. This would mean a lot of dummy child
> documents.
>
>
>
> - or -
>
> 3) Should I just denormalize data and include the book information within
> each chapter document?
>
> - or -
>
> 4) ... or is there a smarter way?
>
> Your help is very much appreciated.
>
> Cheers,
>
> Bjørn Axelsen
>
--
---
Thanks & Regards
Umesh Prasad
.
>
> Thanks in advance.
>
>
>
> --
> View this message in context:
> http://lucene.472066.n3.nabble.com/Match-query-string-within-indexed-field-tp4147896.html
> Sent from the Solr - User mailing list archive at Nabble.com.
>
--
---
Thanks & Regards
Umesh Prasad
PS: You can give huge boosts to the url at query time on a per-request basis.
Don't specify the bq's in solrconfig.xml; always determine and add the bq's
for the query at run time.
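For example (illustrative URL and boost value), passed with the request
rather than baked into the handler defaults:

    &defType=edismax&bq=url:"http://www.example.com/landing-page"^100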
On 18 July 2014 15:49, Umesh Prasad wrote:
> Or you can give huge boosts to url at query time. If you are usin
1.0
>
>
>
>
>
>
> --
> View this message in context:
> http://lucene.472066.n3.nabble.com/solr-boosting-any-perticular-URL-tp4147657p4147864.html
> Sent from the Solr - User mailing list archive at Nabble.com.
>
--
---
Thanks & Regards
Umesh Prasad
4,696 short[]
> > > 10: 345,854 11,067,328 java.util.HashMap$Entry
> > > 11: 8,823 10,351,024 * ConstantPoolKlass
> > > 12: 79,561 10,193,328 * MethodKlass
> > > 13: 228,587 9,143,480
> > org.apache.lucene.document.FieldType
> > > 14: 228,584 9,143,360
> org.apache.lucene.document.Field
> > > 15: 368,423 8,842,152 org.apache.lucene.util.BytesRef
> > > 16: 210,342 8,413,680 java.util.TreeMap$Entry
> > > 17: 81,576 8,204,648 java.util.HashMap$Entry[]
> > > 18: 107,921 7,770,312
> > org.apache.lucene.util.fst.FST$Arc
> > > 19: 13,020 6,874,560
> > org.apache.lucene.util.fst.FST$Arc[]
> > >
> > >
> >
> >
>
--
---
Thanks & Regards
Umesh Prasad
this list can answer .
> Here is the gist of that query
> https://gist.github.com/anonymous/f3a287ab726f35b142cf
>
> Any answers, suggestions ?
>
> Thanks
>
--
---
Thanks & Regards
Umesh Prasad
http://lucene.472066.n3.nabble.com/RE-SOLR-6143-Bad-facet-counts-from-CollapsingQParserPlugin-tp4140455p4146645.html
> > Sent from the Solr - User mailing list archive at Nabble.com.
> >
>
--
---
Thanks & Regards
Umesh Prasad
>
> >> : > wrote:
> >> : >
> >> : > > Dears,
> >> : > > Hi,
> >> : > > According to my requirement I need to change the default behavior
> >> : > > of Solr, which overwrites the whole document on unique-key
> >> : > > duplication. I want to change it so that only part of the document
> >> : > > (some fields) is overwritten and the other parts (other fields)
> >> : > > remain unchanged. First of all, I need to know whether such a change
> >> : > > in Solr behavior is possible. Second, I would really appreciate it
> >> : > > if you could guide me on which class/classes I should consider
> >> : > > changing.
> >> : > > Best regards.
> >> : > >
> >> : > > --
> >> : > > A.Nazemian
> >> : > >
> >> : >
> >> :
> >> :
> >> :
> >> : --
> >> : A.Nazemian
> >> :
> >>
> >> -Hoss
> >> http://www.lucidworks.com/
> >>
> >
> >
> >
> > --
> > A.Nazemian
>
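For what it's worth, stock Solr 4.x already supports this style of partial
update via atomic updates, as long as all fields are stored (field names
here are illustrative, not from this thread):

    {"id": "doc1", "price": {"set": 99}, "tags": {"add": "sale"}}

POSTed to /update, this overwrites only the listed fields and leaves the
rest of the document unchanged.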
--
---
Thanks & Regards
Umesh Prasad
_position_title%3a(TEST))&rows=50&group=true&group.field=recruiterkeyid&group.limit=1&group.format=grouped&version=2.2
> > >>
> > >> I could also run one search to get the top X record Id's then run a
> > second Grouped query on those but I was hoping there was a less expensive
> > way run the search.
> > >>
> > >> So what I need to get back are the distinct recruiterkeyid's from the
> > top X query and the count of how many there are only in the top X
> results.
> > I'll ultimately want to query the results for each individual
> > recruiterkeyid as well. I'm using SolrNet to build the query.
> > >>
> > >> Thank you for your help,
> > >> Aaron
> >
>
--
---
Thanks & Regards
Umesh Prasad
> >
> > I found that sometimes it works with this much load on Solr, but sometimes
> > it gives the error "Server Refused Connection".
> > On getting this error, I increased maxThreads to some higher value, and
> > then it works again.
> >
> > I would like to know why Solr is behaving abnormally, given that it was
> > initially working fine with maxThreads=200.
> >
> > Please provide me some pointers.
>
>
--
---
Thanks & Regards
Umesh Prasad
y=true if scoring is not needed for the collapse.
> I'll take a closer look at this.
>
> Joel Bernstein
> Search Engineer at Heliosearch
>
>
> On Mon, Jun 30, 2014 at 1:43 AM, Umesh Prasad
> wrote:
>
> > Hi Joel,
> > Thanks a lot for clarification
the ngroups as well along with hits.
> >
>
--
---
Thanks & Regards
Umesh Prasad
ery, to select the group head.
>
> I think trying to make true
> compatible with CollapsingQParsePlugin is
> probably not possible. So, a nice error message would be a good thing.
>
> Joel Bernstein
> Search Engineer at Heliosearch
>
>
> On Tue, Jun 24, 2014 at 3:31
atementRunner.run(ThreadLeakControl.java:360)
at java.lang.Thread.run(Thread.java:745)
---
Thanks & Regards
Umesh Prasad
1000,
> 1000))'}
>
> This seems like it would basically give you two sort criteria: cscore(),
> which returns the score, would be the primary criterion. The recip of field
> "x" would be the secondary criterion.
>
>
logic.
Because of (a) the group head selected by CollapsingQParserPlugin will be
incorrect and subsequent sorting will break.
On 14 June 2014 12:38, Umesh Prasad wrote:
> Thanks Joel for the quick response. I have opened a new jira ticket.
>
> https://issues.apache.org/jira/browse/
; 3
> > sort fields are used.
> >
> > The failing test case patch is against Lucene/Solr 4.7 revision number
> > 1602388
> >
> > Can someone apply and verify the bug ?
> >
> > Also, should I re-open SOLR-5408 or open a new ticket ?
> >
> >
> > ---
> > Thanks & Regards
> > Umesh Prasad
> >
>
--
---
Thanks & Regards
Umesh Prasad
number
1602388
Can someone apply and verify the bug ?
Also, should I re-open SOLR-5408 or open a new ticket ?
---
Thanks & Regards
Umesh Prasad
l be in there. If not it will be Solr 4.7.
>
> https://issues.apache.org/jira/browse/SOLR-5408
>
> Joel
>
>
> On Wed, Dec 11, 2013 at 11:36 PM, Umesh Prasad
> wrote:
>
> > Issue occurs in Single Segment index also ..
> >
> > sort: "score
hu, Dec 12, 2013 at 9:50 AM, Umesh Prasad wrote:
> Hi All,
> I am using the new CollapsingQParserPlugin for grouping and found that it
> works incorrectly when I use multiple sort criteria.
>
>
>
> http://localhost:8080/solr/toys/select/?q=car%20and%20toys&version=2.2&star
- score: 0.12396862,
- [docid]: 9703
},
- {
-
I found a bug opened for same
https://issues.apache.org/jira/browse/SOLR-5408 ..
The bug is closed, but I am not really sure that it works, especially for
multi-segment indexes.
I am using Solr 4.6.0 and my index contains 4 segments ..
Have anyone else faced the same issue ?
---
Thanks & Regards
Umesh Prasad
The mailing list removes attachments by default, so I have uploaded it to
Google Drive:
https://drive.google.com/file/d/0B-RnB4e-vaJhX280NVllMUdHYWs/edit?usp=sharing
On Fri, Nov 15, 2013 at 2:28 PM, Umesh Prasad wrote:
> Hi All,
> We are seeing memory leaks in our Search application wheneve
memory usage is so high on the indexer.
On Wed, May 22, 2013 at 10:03 AM, Shawn Heisey wrote:
> On 5/21/2013 9:22 PM, Umesh Prasad wrote:
> > This is our own implementation of a data source (canonical name
> > com.flipkart.w3.solr.MultiSPCMSProductsDataSource), which pulls the da
On Wed, May 22, 2013 at 5:03 AM, Shawn Heisey wrote:
> On 5/21/2013 5:14 PM, Umesh Prasad wrote:
>
>> We have sufficient RAM on the machine (64 GB), and we have given the JVM
>> 32 GB of memory. The machine primarily runs indexing.
on a machine with more memory. Or did you do that already?
>
> -- Jack Krupansky
>
> -Original Message- From: Umesh Prasad
> Sent: Tuesday, May 21, 2013 1:57 AM
> To: solr-user@lucene.apache.org
> Subject: Hard Commit giving OOM Error on Index Writer in Solr 4.2.1
>
662)
--
---
Thanks & Regards
Umesh Prasad
Sorry for the late reply. I was trying to change our indexing pipeline and do
explicit intermediate commits for each core. That turned out to be a bit
more work than I have time for.
So, I do want to explore hard commits. I tried
:/solr//update?commit=true . But there is no
impact on Txn Log si
0
2HOUR
Thanks & Regards
Umesh Prasad
On Wed, Apr 17, 2013 at 4:57 PM, Erick Erickson wrote:
> How big are your transaction logs? They can be replayed on startup.
> They are truncated and a new one started when you do a hard commit
> (openSearcher true or fa
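A typical way to keep the tlogs truncated (a sketch; the 60-second interval
is illustrative) is a hard autoCommit that does not open a searcher:

    <autoCommit>
      <maxTime>60000</maxTime>          <!-- hard commit every 60s -->
      <openSearcher>false</openSearcher>
    </autoCommit>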
s
the same with Solr 3.5 also. But we never faced any issues.
--
---
Thanks & Regards
Umesh Prasad
A further update on the same issue.
Build on Branch
http://svn.apache.org/repos/asf/lucene/dev/tags/lucene_solr_4_2_1 succeeds
fine.
Build fails only for Source code downloaded from
http://apache.techartifact.com/mirror/lucene/solr/4.2.1/solr-4.2.1-src.tgz
On Sun, Apr 14, 2013 at 1:05 PM, Umesh Prasad
; a best practice, but it shouldn't be causing a compilation failure.
>
>
> -Hoss
>
--
---
Thanks & Regards
Umesh Prasad
tp://lucene.472066.n3.nabble.com/Not-able-to-replicate-the-solr-3-5-indexes-to-solr-4-2-indexes-tp4055313p4055477.html
> > Sent from the Solr - User mailing list archive at Nabble.com.
>
--
---
Thanks & Regards
Umesh Prasad
rg/solr/HowToContribute
> >
> > Thanks,
> > Otis
> > --
> > Solr & ElasticSearch Support
> > http://sematext.com/
> >
> >
> >
> >
> >
> > On Wed, Apr 10, 2013 at 9:43 PM, Umesh Prasad
> wrote:
> >> Root caused th
ean forceReplication)
The fix is to pass along the version to fetchFileList and populate it.
A Patch is attached for trunk.
Thanks & Regards
Umesh Prasad
Search Engineer @ Flipkart : India's Online Megastore
-
Empowering Consumers Find Products ..
On Tue, Apr 9, 2013 at 9:28 PM, U
ter
*
[
  "indexVersion", 1323961124638,
  "generation", 107856,
  "filelist", [
    "_45e1.tii",
    "_45e1.nrm",
    "_45e2_1.del",
    "_45e2.frq",
    "_45e1_3.del",
    "_45e1.tis",
    ..
Can someone help? Our whole migration to Solr 4.2 is blocked on this
replication issue.
---
Thanks & Regards
Umesh Prasad
] Other (someone in your company mirrors them internally or via a
> downstream project)
>
--
---
Thanks & Regards
Umesh Prasad