I've read a number of the wikis on Solr heap usage and wanted to confirm my
understanding of what Solr uses the heap for:
1. Indexing new documents - only until they are committed? If not, how long
are new documents kept in the heap?
2. Merging segments - does Solr load the entire segment into memory or c
For real-time get, the child doc transformer would need a real-time searcher,
and I think this issue has been resolved in
https://issues.apache.org/jira/browse/SOLR-12722
Basically, needsSolrIndexSearcher should return true:
public boolean needsSolrIndexSearcher() { return true; }
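In Solr itself that override lives on a DocTransformer subclass. As a standalone sketch of the idea (the class names here are illustrative, not the real Solr API):

```java
// Minimal standalone sketch of the idea behind SOLR-12722: a transformer
// declares whether it needs a searcher via needsSolrIndexSearcher().
// These class names are hypothetical stand-ins for Solr's DocTransformer.
abstract class TransformerSketch {
    // Default mirrors the pre-fix behavior: no searcher required.
    public boolean needsSolrIndexSearcher() {
        return false;
    }
}

class ChildDocTransformerSketch extends TransformerSketch {
    // The fix: transforming child documents needs a (real-time) searcher,
    // so this transformer must return true.
    @Override
    public boolean needsSolrIndexSearcher() {
        return true;
    }
}
```

With that override in place, real-time get knows it must open a real-time searcher before applying the transformer instead of hitting a null searcher.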
Regards,
Munendra
Hi,
I am using Solr 6.6.0 and real-time get to retrieve documents. Randomly I am
seeing NullPointerExceptions in the Solr log files, which in turn break the
application workflow. Below is the stack trace.
I am thinking this could be related to real-time get, when transforming child
documents duri
docValues are indeed realized in Lucene. It's just that Lucene has no notion
of a "schema". So when you define the schema, Solr carefully constructs the
appropriate low-level Lucene calls to take care of all of the options you've
specified in the schema: things like stored, indexed, docValues, etc.
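To make that mapping concrete, here is a small illustrative fragment of a managed-schema / schema.xml field definition (the field names are hypothetical); each attribute is translated by Solr into the corresponding low-level Lucene field option:

```xml
<!-- Hypothetical fields: each flag becomes a low-level Lucene option -->
<field name="id"    type="string" indexed="true" stored="true"  docValues="true"/>
<field name="price" type="pint"   indexed="true" stored="false" docValues="true"/>
```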
Dimitris,
On 6/1/18 02:46, Dimitris Kardarakos wrote:
> Thanks a lot Shawn. I had tried with the documented approach, but
> since I use SolrClient.add to add documents to the index, I could
> not "port" the documented approach to my case (probably I
Terribly sorry about the duplicate post. When I had first subscribed, I
mustn't have verified my subscription, because I never received any posts. I
could also not find my post in the mailing list archive, so I thought it
never arrived. It was only today that I tried subscribing again (+
On 5/31/2019 10:57 AM, Chuck Reynolds wrote:
Hey guys, I'm trying to do a backup of my SolrCloud cluster, but it never
starts.
When I execute the async backup command it returns quickly like I would expect
with the following response
0111234
But the backup never starts.
My reply is a
On 5/31/2019 12:40 PM, Erie Data Systems wrote:
My question is this, can I implement a "clustered" environment on single
server so I can take advantage of the segmented data? I have a TON (96gb)
of RAM and plenty of SSD disk space available...
Yes. One Solr instance can have many cores, and yo
Solr 8.0.0 (single server, single instance, single core) Centos 6x86_64
Error : number of documents in the index cannot exceed 2147483519
I've read about the maximum number of documents, which means I need to go
with SolrCloud.
My question is this, can I implement a "clustered" environment on single
s
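The limit in that error message comes from Lucene, not Solr: a single Lucene index is capped slightly below Integer.MAX_VALUE documents (the last 128 IDs are reserved internally), which is the 2147483519 in the error. SolrCloud gets past it by sharding across multiple indexes. A quick check of the arithmetic:

```java
public class MaxDocsCheck {
    public static void main(String[] args) {
        // Lucene reserves 128 doc IDs below Integer.MAX_VALUE,
        // giving the per-index document cap seen in the error message.
        int maxDocs = Integer.MAX_VALUE - 128;
        System.out.println(maxDocs); // prints 2147483519
    }
}
```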
Hi Sotiris,
Is this your second time asking this question here, or is there a
subtle difference I'm missing? You asked a very similar question a
week or so ago, and I replied with a few suggestions for changing your
security.json and with a few questions. In case you missed it for
whatever reaso
> Ah. So docValues are managed by Solr outside of Lucene. Interesting.
I was under the impression docValues are in Lucene, and he is just saying
that an optimize is not a re-index; it's just taking the actual files that
already exist in your index and arranging them and removing deletions, an
optim
Ah. So docValues are managed by Solr outside of Lucene. Interesting.
That actually answers a question I had not asked yet. I was curious if it was
safe to change the id field to docValues without reindexing if we never sorted
on it. It looks like fetching the value won’t work until everything i
Hey guys, I'm trying to do a backup of my SolrCloud cluster, but it never
starts.
When I execute the async backup command it returns quickly like I would expect
with the following response
0111234
But the backup never starts.
When I execute the REQUESTSTATUS it response with the followin
Actually, the wiki is going away (for all Apache projects, not just Solr).
So the preferred way now is to contribute to the Solr Reference
Guide, which is developed as part of the normal Solr process and can be
patched like any other Git-based project:
https://github.com/apache/lucene-solr/tree/master/s
The sitecore app will get errors when that core is not found, and it is then up
to the failover logic in sitecore to try the master. You should of course add
back the missing core and let it sync up.
Sent from my iPad
> On 31 May 2019, at 17:05, Shawn Heisey wrote:
>
>> On 5/31/2019 7:50 AM, Ma
On 5/31/2019 7:50 AM, Mahesh Varma, Y. A. wrote:
Hi Team,
What happens to Sitecore's Solr query handling when a core is corrupted on
the Solr slave in a master-slave setup?
Our Sitecore site's Solr search engine is a master-slave setup. One of the
cores of the slave is corrupted and is not avai
Hi Team,
What happens to Sitecore's Solr query handling when a core is corrupted on
the Solr slave in a master-slave setup?
Our Sitecore site's Solr search engine is a master-slave setup. One of the
cores of the slave is corrupted and is not available at all on the slave.
It is not being replicated
Hi everyone!
I've been trying unsuccessfully to read an alias to a collection with a
curl command.
The command only works when I put in the admin credentials, although the
user I want to grant access to also has the required role.
Is this perhaps built-in, or should anyone be able to access a
SOLVED: Now implemented with a bespoke trust store set up for Solr ...
bq. but I optimized all the cores, which should rewrite every segment as
docValues.
Not true. Optimize is a Lucene level force merge. Dealing with segments, i.e.
merging and the like, is a low-level Lucene operation and Lucene has no notion
of a schema. So a change you made to the schema is irr
Hi All,
I am trying to use the fcs method of faceting and have set the facet thread
count to 3. But when I check the thread dump, around 15 threads are visible
for faceting.
Does setting the facet thread count to 3 open 3 threads for each segment?
On what basis should we decide the number of facet threads?
Thanks
Saurabh S
I’m talking about the filterCache. You said you were using this in a “filter
query”, which I assume is an “fq” clause, which is automatically cached in the
filterCache.
It would help a lot if you told us two things:
1> what the clause looks like. You say 1,500 strings. How are you assembling
th
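For reference, the filterCache being discussed is configured in solrconfig.xml; each cached fq clause stores a bitset over all documents in the index, which is why a huge 1,500-string clause can be expensive to cache. A typical entry, with placeholder sizes, looks like:

```xml
<!-- Illustrative filterCache config; sizes here are placeholders,
     not recommendations. Each entry holds a bitset per cached fq. -->
<filterCache class="solr.FastLRUCache"
             size="512"
             initialSize="512"
             autowarmCount="0"/>
```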
Sachin - that's a confusing name for a field that represents a price and not
a "range", but OK; use the first one but with your field name:
&bq=price_range:[10 TO 25]
My bad below saying "boost" (it takes a function, not a raw query). Use
"bq", which takes a regular query.
Erik
> On Ma