On Tue, 2018-07-31 at 11:12 +0200, Fette, Georg wrote:
> I agree that receiving too much data in one request is bad. But I
> was surprised that the query works with a lower but still very large
> rows parameter and that there is a threshold at which it crashes the
> server.
> Furthermore, it seems
Hello team,
I am using Solr 7.4.0, and in the Solr web application it is not possible to search
on all attributes by default, as was the case with Solr 6.4.0.
I have to mention the key name when searching.
How can we enable search on all keys by default?
Thanks.
Thanks a lot Mikhail. But as per the documentation below, nested document
ingestion is possible. Is this a limitation of DIH?
https://lucene.apache.org/solr/guide/6_6/uploading-data-with-index-handlers.html#UploadingDatawithIndexHandlers-NestedChildDocuments
Also can block join query be used to get exp
Hi Rajnish,
yes, you can use a generic catch-all field (that's the price of having
such "centralization") into which, by means of the *copyField* directive, all
fields are copied.
Then, you can use that field as the default search field (df parameter) in
your RequestHandler.
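For illustration, a minimal SolrJ sketch, assuming a hypothetical catch-all
field named "text_all" as the copyField target and a placeholder collection URL.
Here df is passed per request; putting it in the RequestHandler defaults in
solrconfig.xml works the same way:

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;

public class CatchAllFieldSearch {
    public static void main(String[] args) throws Exception {
        try (HttpSolrClient client =
                 new HttpSolrClient.Builder("http://localhost:8983/solr/mycollection").build()) {
            // terms have no field prefix, so they are searched in the df field
            SolrQuery query = new SolrQuery("some search words");
            query.set("df", "text_all");
            QueryResponse rsp = client.query(query);
            System.out.println("Hits: " + rsp.getResults().getNumFound());
        }
    }
}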
...
Best,
Andre
Hello,
We recently migrated from Solr 4.7 to 7.3. We are having a problem with
legacy Java code that we do not have control over, where searches are coded
to query text as "text". Again, we do not have the luxury in this instance to
update the code, so we were wondering whether there is a way to define
Hi,
field names with both leading and trailing underscores are reserved [1],
so it would be better to avoid that.
I cannot tell you what exactly the problem is when using such naming; I
remember I had trouble with function queries, so, in general, I would
follow that advice.
Best,
Andrea
[1]
You have full control over it. Just change it in managed-schema and
probably in solrconfig.xml and reindex.
In general, you can also alias fields with eDisMax, see
https://lucene.apache.org/solr/guide/7_4/the-extended-dismax-query-parser.html#field-aliasing-using-per-field-qf-overrides
and the exa
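For example, a hedged SolrJ sketch of that per-field qf aliasing, assuming the
legacy code queries a field literally called "text" and that title_txt and
body_txt are the real indexed fields (all names here are made up):

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;

public class EdismaxFieldAlias {
    public static void main(String[] args) throws Exception {
        try (HttpSolrClient client =
                 new HttpSolrClient.Builder("http://localhost:8983/solr/legacy_collection").build()) {
            // the legacy-style query still references the "text" field
            SolrQuery query = new SolrQuery("text:\"blue widgets\"");
            query.set("defType", "edismax");
            // expand "text" to the fields that actually exist in the 7.x schema
            query.set("f.text.qf", "title_txt body_txt");
            QueryResponse rsp = client.query(query);
            System.out.println("Hits: " + rsp.getResults().getNumFound());
        }
    }
}

The same defType and f.text.qf parameters can instead be set as defaults on the
RequestHandler in solrconfig.xml, so the legacy code itself stays untouched.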
This is what nested docs look like. These are document blocks with the parent at
the end. Block Join Queries work on these blocks.
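A rough SolrJ sketch of both halves: indexing a block (children attached to the
parent, which ends up last in the block) and querying it with the parent block
join parser. The field names and the content_type discriminator are assumptions
for illustration only:

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.common.SolrInputDocument;

public class BlockIndexAndJoin {
    public static void main(String[] args) throws Exception {
        try (HttpSolrClient client =
                 new HttpSolrClient.Builder("http://localhost:8983/solr/nested_demo").build()) {
            SolrInputDocument parent = new SolrInputDocument();
            parent.addField("id", "p1");
            parent.addField("content_type", "parent");   // marks the parent level of the block

            SolrInputDocument child = new SolrInputDocument();
            child.addField("id", "p1_c1");
            child.addField("child_name", "Entry");
            parent.addChildDocument(child);              // indexed in the same block as the parent

            client.add(parent);
            client.commit();

            // return parents whose children match child_name:Entry
            SolrQuery q = new SolrQuery("{!parent which=content_type:parent}child_name:Entry");
            System.out.println(client.query(q).getResults());
        }
    }
}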
On Wed, Aug 8, 2018 at 12:47 PM omp...@rediffmail.com <
omkar.pra...@gmail.com> wrote:
> Thanks a lot Mikhail. But as per documentation below nested document
> ingestion i
On 13/07/2018 15:10, Charlie Hull wrote:
On 12/07/2018 10:28, Charlie Hull wrote:
Hi all,
A couple of years ago I ran two free Lucene Hackdays in London and
Boston (the latter just before Lucene Revolution). Here's what we got
up to with the kind support of Alfresco, Bloomberg, BA Insight and
Rahul, thanks, I do indeed want to be able to shard.
For now I'll go with Markus' suggestion and try to use the SPLITSHARD
command.
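For reference, a hedged sketch of issuing SPLITSHARD from SolrJ; the collection
and shard names are placeholders, and the same operation is available over HTTP
through the Collections API (action=SPLITSHARD):

import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.request.CollectionAdminRequest;

public class SplitShardSketch {
    public static void main(String[] args) throws Exception {
        // any node of the cluster can accept Collections API requests
        try (HttpSolrClient client =
                 new HttpSolrClient.Builder("http://localhost:8983/solr").build()) {
            CollectionAdminRequest.SplitShard split =
                CollectionAdminRequest.splitShard("mycollection").setShardName("shard1");
            System.out.println(split.process(client).getResponse());
        }
    }
}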
2018-08-07 15:17 GMT+02:00 Rahul Singh :
> Bjarke,
>
> I am imagining that at some point you may need to shard that data if it
> grows. Or do you imagine this data t
But in my case I see the output below:
[flattened Solr XML response: responseHeader with status 0 and QTime 0; params q=*:*, indent=on, wt=xml, _=1533734431931; documents returned as a flat list of field values ("IT", "Data", "omkar", "ITI", "Entry" plus small integers) with _version_ values 1608130338704326656 and 1608130338712715264; no parent/child nesting is visible in the result]
Hello,
We've got, again, a little mystery here. Our main text collection has suddenly been
running at a snail's pace since very early Monday morning; the
monitoring graph for response time went up. This is not unusual for Solr, so the
JVMs were all restarted; it always solves a sluggish collec
Erick,
thanks, that is of course something I left out of the original question.
Our Solr is 7.1, so that should not present a problem (crossing fingers).
However, on my dev box I'm trying out the steps, and here I have some
segments created with version 6 of Solr.
After having copied data from m
I have a problem with Solr suggested terms: when I search for a misspelled
phrase or word, for example "halogan balbs" (0 results found), I want a
suggestion that will lead to results (e.g. "halogen bulbs").
I'm able to get a suggested phrase by enabling spellcheck.collation and
spellcheck.maxCollati
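For reference, a hedged SolrJ sketch of requesting collations; the /spell
handler name and the exact parameter values are illustrative assumptions, not
the configuration from this message:

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.client.solrj.response.SpellCheckResponse;

public class CollationQuery {
    public static void main(String[] args) throws Exception {
        try (HttpSolrClient client =
                 new HttpSolrClient.Builder("http://localhost:8983/solr/products").build()) {
            SolrQuery query = new SolrQuery("halogan balbs");
            query.setRequestHandler("/spell");                // handler with a spellcheck component
            query.set("spellcheck", "true");
            query.set("spellcheck.collate", "true");          // build whole-query rewrites
            query.set("spellcheck.maxCollationTries", "10");  // verify collations actually return hits
            query.set("spellcheck.collateExtendedResults", "true");
            QueryResponse rsp = client.query(query);
            SpellCheckResponse spell = rsp.getSpellCheckResponse();
            if (spell != null && spell.getCollatedResults() != null) {
                spell.getCollatedResults().forEach(c ->
                    System.out.println(c.getCollationQueryString() + " -> " + c.getNumberOfHits()));
            }
        }
    }
}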
The Admin guide lists UUIDField as a field type, but it's not defined in the default
schema.
The Admin guide describes it in conjunction with the
UUIDUpdateProcessorFactory. I see the updateProcessor defined in the default
schema.
The only place I see UUIDUpdateProcessorFactory discussed in the Adm
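Until the schema defines a uuid field type and wires UUIDUpdateProcessorFactory
into an update chain, one workaround is to generate the UUID on the client side
into an ordinary string field; a minimal sketch with made-up field and
collection names:

import java.util.UUID;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.common.SolrInputDocument;

public class ClientSideUuid {
    public static void main(String[] args) throws Exception {
        try (HttpSolrClient client =
                 new HttpSolrClient.Builder("http://localhost:8983/solr/mycollection").build()) {
            SolrInputDocument doc = new SolrInputDocument();
            doc.addField("id", UUID.randomUUID().toString());  // generated here, not by an update processor
            doc.addField("title", "client-generated id example");
            client.add(doc);
            client.commit();
        }
    }
}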
I am updating an existing document using the "add-distinct" directive. One of
my fields is declared:
The field being updated is a different field.
All I set in my code is
Map<String, Object> fields = new HashMap<>();
SolrInputDocument solrDoc = new SolrInputDocument();
fields.put("add-distinct", member);   // atomic-update modifier for the field being updated
so
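For comparison, a complete hedged sketch of an add-distinct atomic update; the
document id and the multiValued field name "members" are placeholders, and
add-distinct itself requires Solr 7.3 or later:

import java.util.HashMap;
import java.util.Map;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.common.SolrInputDocument;

public class AddDistinctUpdate {
    public static void main(String[] args) throws Exception {
        try (HttpSolrClient client =
                 new HttpSolrClient.Builder("http://localhost:8983/solr/mycollection").build()) {
            SolrInputDocument solrDoc = new SolrInputDocument();
            solrDoc.addField("id", "doc-42");            // the existing document to update

            Map<String, Object> fields = new HashMap<>();
            fields.put("add-distinct", "member-17");     // added only if not already in the field
            solrDoc.addField("members", fields);         // attach the modifier map to the target field

            client.add(solrDoc);
            client.commit();
        }
    }
}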
Bjarke:
Using SPLITSHARD on an index with 6x segments just seems to not work,
even outside the standalone-> cloud issue. I'll raise a JIRA.
Meanwhile I think you'll have to re-index I'm afraid.
Thanks for raising the issue.
Erick
On Wed, Aug 8, 2018 at 6:34 AM, Bjarke Buur Mortensen
wrote:
> E
OK, thanks.
As long as it's my dev box, reindexing is fine.
I just hope that my assumption holds, that our prod solr is 7x segments
only.
Thanks again,
Bjarke
2018-08-08 20:03 GMT+02:00 Erick Erickson :
> Bjarke:
>
> Using SPLITSHARD on an index with 6x segments just seems to not work,
> even o
See: https://issues.apache.org/jira/browse/SOLR-12646
On Wed, Aug 8, 2018 at 11:24 AM, Bjarke Buur Mortensen
wrote:
> OK, thanks.
>
> As long as it's my dev box, reindexing is fine.
> I just hope that my assumption holds, that our prod solr is 7x segments
> only.
>
> Thanks again,
> Bjarke
>
> 20