For real-time get, the child doc transformer would need a real-time searcher, and
I think this issue has been resolved in
https://issues.apache.org/jira/browse/SOLR-12722
Basically, needsSolrIndexSearcher should return true:

public boolean needsSolrIndexSearcher() { return true; }
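
For reference, here is a minimal sketch of a custom transformer that opts into the
searcher. The class and field names are hypothetical, and the transform() signature
is an assumption based on the Solr 6.x DocTransformer API (it differs in newer
versions); this only illustrates where the override lives, it is not the SOLR-12722
patch itself.

import java.io.IOException;

import org.apache.solr.common.SolrDocument;
import org.apache.solr.response.transform.DocTransformer;

// Hypothetical transformer; the needsSolrIndexSearcher() override is the point here.
public class MyChildDocTransformer extends DocTransformer {

  @Override
  public String getName() {
    return "mychilddocs";
  }

  // Without this override (the default returns false), a transformer that needs a
  // searcher may get none during real-time get, which can lead to NPEs.
  @Override
  public boolean needsSolrIndexSearcher() {
    return true;
  }

  // Both transform variants are implemented because the abstract signature differs
  // across Solr versions (6.x has a score variant).
  public void transform(SolrDocument doc, int docid, float score) throws IOException {
    transform(doc, docid);
  }

  public void transform(SolrDocument doc, int docid) throws IOException {
    // Real child-document handling would go here; a searcher is available now that
    // needsSolrIndexSearcher() returns true.
  }
}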
Regards,
Munendra
Hi,
I am using Solr 6.6.0 and real-time get to retrieve documents. Randomly I am
seeing NullPointerExceptions in the Solr log files, which in turn break the
application workflow. Below is the stack trace.
I am thinking this could be related to real-time get, when transforming child
documents during retrieval ...
Hi Mikhail,
Don't you think the contains(BytesRef value) method of
org.apache.lucene.codecs.bloom.FuzzySet returns a probabilistic answer about
whether a value is present, and that it is a place where hashing is used?
Are there any other places in the source which, given a document id,
could determine by calculating it ...
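
For context, here is a toy sketch of the hash-and-bitset membership idea being
described. It is a simplified stand-in for illustration only, not FuzzySet's actual
implementation (FuzzySet backs Lucene's BloomFilteringPostingsFormat and uses its
own hashing and sizing logic).

import java.nio.charset.StandardCharsets;
import java.util.BitSet;

// Toy Bloom-filter-style membership test: hashing a value into a bit set gives a
// fast "definitely not here" / "maybe here" answer, with possible false positives.
public class ToyMembershipFilter {

  private final BitSet bits;
  private final int numBits;

  public ToyMembershipFilter(int numBits) {
    this.numBits = numBits;
    this.bits = new BitSet(numBits);
  }

  // Cheap seeded hash for illustration only; real implementations use better hashes.
  private int bucket(String value, int seed) {
    int h = seed;
    for (byte b : value.getBytes(StandardCharsets.UTF_8)) {
      h = h * 31 + b;
    }
    return Math.floorMod(h, numBits);
  }

  public void add(String value) {
    bits.set(bucket(value, 17));
    bits.set(bucket(value, 31));
  }

  // false -> the value was definitely never added
  // true  -> the value was probably added (false positives possible)
  public boolean mightContain(String value) {
    return bits.get(bucket(value, 17)) && bits.get(bucket(value, 31));
  }
}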
Harshvardhan,
There is almost nothing like this in bare Lucene; the closest analogy is
http://wiki.apache.org/solr/SolrCaching#documentCache
On Thu, Feb 13, 2014 at 1:46 PM, Harshvardhan Ojha <
ojha.harshvard...@gmail.com> wrote:
Hi Mikhail,
Thanks for sharing this nice link. I am pretty comfortable with searching in
Lucene; this is a very beginner-level question about storage, mainly the
hashing part (storage and retrieval).
Which data structure (I don't know it currently) is used to store, and later
recompute, that hash in order to get a document back ...
Hello
I think you can start from
http://www.lucenerevolution.org/2013/What-is-in-a-lucene-index
On Thu, Feb 13, 2014 at 12:56 PM, Harshvardhan Ojha <
ojha.harshvard...@gmail.com> wrote:
Hi All,
I have a question regarding retrieval of documents by Lucene.
I know Lucene keeps documents in many files on disk, each document comprising
fields, and uses various IR algorithms and an inverted index to match
documents.
My question is:
1. How does Lucene store these documents inside the file system ...
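
To make the storage/retrieval split concrete, here is a minimal sketch against the
Lucene 4.x-era API (the index path and field names are made up): the inverted index
maps a term to internal doc IDs, and the stored fields for each doc ID are then read
back from the stored-fields files (.fdt/.fdx).

import java.io.File;

import org.apache.lucene.document.Document;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.ScoreDoc;
import org.apache.lucene.search.TermQuery;
import org.apache.lucene.search.TopDocs;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;

public class RetrieveStoredDoc {
  public static void main(String[] args) throws Exception {
    // Hypothetical index location and field names.
    Directory dir = FSDirectory.open(new File("/path/to/index"));
    DirectoryReader reader = DirectoryReader.open(dir);
    IndexSearcher searcher = new IndexSearcher(reader);

    // Step 1: the inverted index (terms dictionary + postings) maps the query term
    // to matching internal document IDs.
    TopDocs hits = searcher.search(new TermQuery(new Term("id", "42")), 10);

    // Step 2: for each internal doc ID, the stored fields are read back to rebuild
    // the original field values.
    for (ScoreDoc sd : hits.scoreDocs) {
      Document doc = searcher.doc(sd.doc);
      System.out.println(doc.get("title"));
    }

    reader.close();
    dir.close();
  }
}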
On 2/11/2013 12:09 PM, devb wrote:
We are running a six-node SolrCloud cluster with 3 shards and 3 replicas. The
version of Solr is 4.0.0.2012.08.06.22.50.47. We use the Python PySolr client
to interact with Solr. Documents that we add to Solr have a unique id and can
never have duplicates.
Our ...
idslist.append(result['id'])
idsset.add(result['id'])
if i % 500 == 0:
    print len(idslist), len(idsset)
i += 1
skip += limit
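
The fragment above pages through results and compares the number of ids collected
with the number of distinct ids. For reference, here is a rough SolrJ equivalent of
that check (SolrJ 4.x-era API; the URL, query, and page size are made up, and this is
a sketch of the same idea, not the original script).

import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.common.SolrDocument;

public class DuplicateIdCheck {
  public static void main(String[] args) throws Exception {
    // Hypothetical endpoint and paging parameters.
    HttpSolrServer solr = new HttpSolrServer("http://localhost:8983/solr/collection1");
    int limit = 500;
    int skip = 0;

    List<Object> idslist = new ArrayList<Object>();
    Set<Object> idsset = new HashSet<Object>();

    while (true) {
      SolrQuery q = new SolrQuery("*:*");
      q.setStart(skip);
      q.setRows(limit);
      QueryResponse rsp = solr.query(q);
      if (rsp.getResults().isEmpty()) {
        break;
      }
      for (SolrDocument doc : rsp.getResults()) {
        idslist.add(doc.getFieldValue("id"));
        idsset.add(doc.getFieldValue("id"));
      }
      // If these diverge, the same id was returned on more than one page.
      System.out.println(idslist.size() + " " + idsset.size());
      skip += limit;
    }
    solr.shutdown();
  }
}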
> From: Dan McGinn-Combs
> To: "solr-user@lucene.apache.org"
> Sent: Saturday, December 17, 2011 9:30 AM
> Subject: Re: Retrieving Documents
>
> Good pointer. Thank you, that is exactly what I had in mind. To the
> second point, yes, sort of.
>
> I've managed ...
> ... documents - one document for each page.
>
> Otis
>
> Performance Monitoring SaaS for Solr -
> http://sematext.com/spm/solr-performance-monitoring/index.html
> From: Dan McGinn-Combs
> To: solr-user@lucene.apache.org
> Sent: Friday, December 16, 2011 4:33 PM
> Subject: Retrieving Documents
I've been doing a fair amount of reading and experimenting with Solr
lately. I find that it does a good job of indexing very structured
documents. However, the application I have in mind is built around
long EPUB documents.
Of course, I found the Extract components useful for indexing the
EPUBs. ...
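
A rough sketch of the "one document for each page" approach suggested upthread,
using the SolrJ 4.x-era API. The field names, URL, and id scheme are hypothetical,
and the per-page text extraction (e.g. via the extracting handler or Tika) is
assumed to have been done already.

import java.util.List;

import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.common.SolrInputDocument;

public class IndexEpubPerPage {

  // Index one Solr document per page of a long EPUB, so that hits point at a page
  // rather than the whole book. "pages" is the already extracted plain text, one
  // entry per page.
  public static void indexBook(HttpSolrServer solr, String bookId, String title,
                               List<String> pages) throws Exception {
    int pageNo = 1;
    for (String pageText : pages) {
      SolrInputDocument doc = new SolrInputDocument();
      doc.addField("id", bookId + "_page_" + pageNo);  // hypothetical id scheme
      doc.addField("book_id", bookId);
      doc.addField("title", title);
      doc.addField("page", pageNo);
      doc.addField("text", pageText);
      solr.add(doc);
      pageNo++;
    }
    solr.commit();
  }

  public static void main(String[] args) throws Exception {
    HttpSolrServer solr = new HttpSolrServer("http://localhost:8983/solr/collection1");
    indexBook(solr, "book-1", "Example EPUB",
        java.util.Arrays.asList("page one text ...", "page two text ..."));
    solr.shutdown();
  }
}

Searching the "text" field then returns page-level hits, which can be related back
to the whole book via the "book_id" field.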