You can send big queries as a POST request instead of a GET request.
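For example, the same query parameters can go in a POST body (a sketch, assuming a local Solr with the techproducts example collection):

```shell
# A long query string on GET can exceed the servlet container's URL length
# limit (often around 8 KB by default); sending the parameters as a POST
# body avoids that limit entirely.
curl "http://localhost:8983/solr/techproducts/select" \
  --data-urlencode "q=name:(foo OR bar OR baz)" \
  --data-urlencode "rows=10"
```

curl switches to POST automatically when --data-urlencode is used, so no extra flag is needed.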
On Thu, 18 Feb 2021 at 11:38, Anuj Bhargava wrote:
> Solr 8.0 query length limit
>
> We are having an issue where queries that are too big return no results;
> if we remove a few keywords, we get results.
>
> Error we get - er
ctually exists.
$ curl -i "http://localhost:8983/solr/techproducts/schema/analysis/synonyms/english/foobar" | head -n 1
HTTP/1.1 404 Not Found
$ curl -I "http://localhost:8983/solr/techproducts/schema/analysis/synonyms/english/foobar" | head -n 1
HTTP/1.1 200 OK
I presume that's a bug?
Thomas
might cause
this issue? Or is there a problem with Solr and Windows 10 or Amazon
Corretto?
As I already said, the procedure described above worked well for the
Solr versions since Solr 6.6.1, without java.lang.OutOfMemoryError after
creating the collection.
Best regards,
Thomas Heldmann
Hi Steve
I have a real-world use case. We don't apply a synonym filter at index
time, but we do apply a managed synonym filter at query time. This allows
content managers to add new synonyms (or remove existing ones) "on the fly"
without having to reindex any documents.
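For reference, adding a synonym mapping on the fly looks roughly like this (a sketch; the collection name and the "english" resource name are assumptions):

```shell
# Add a mapping to a managed synonym resource via the Managed Resources API
curl -X PUT -H 'Content-type:application/json' \
  --data-binary '{"notebook":["laptop"]}' \
  "http://localhost:8983/solr/techproducts/schema/analysis/synonyms/english"

# The new mapping takes effect for queries after a core/collection reload
curl "http://localhost:8983/solr/admin/cores?action=RELOAD&core=techproducts"
```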
Thomas
"index":3},
{
"name":"all",
"role":"admin",
"index":4}],
"user-role":{
"solr":"admin",
"user1":"role1",
"user2":"role2"},
"":{"v":0}}}
With this setup, I'm unable to read from any of the cores with either user.
If I "delete-permission":4 both users can read from either core, not just
"their" core.
I have tried custom permissions like this to no avail:
{"name": "access-core1", "collection": "core1", "role": "role1"},
{"name": "access-core2", "collection": "core2", "role": "role2"},
{"name": "all", "role": "admin"}
Is it possible to do this for cores? Or am I out of luck because I'm not
using collections?
Regards
Thomas
. To get fully
correct positional queries when your synonym replacements are multiple
tokens, you should instead apply synonyms using this filter at query time.
Regards,
Thomas
On Thu, 30 Jul 2020 at 10:17, Colvin Cowie wrote:
> That does seem like an unhelpful example to have, though
>
Hi,
Is it possible to specify a Tokenizer Factory on a Managed Synonym Graph
Filter? I would like to use a Standard Tokenizer or Keyword Tokenizer on
some fields.
Best,
Thomas
On Fri, 3 Jul 2020 at 14:11, Bram Van Dam wrote:
> On 03/07/2020 09:50, Thomas Corthals wrote:
> > I think this should go in the ref guide. If your product depends on this
> > behaviour, you want reassurance that it isn't going to change in the next
> > release. No
ferent for docValues, that's even more reason to state it
clearly in the ref guide to avoid confusion.
Best,
Thomas
On Thu, 2 Jul 2020 at 20:37, Erick Erickson wrote:
> This is true _unless_ you fetch from docValues. docValues are SORTED_SETs,
> so the results will be both ordered an
Since "overseer" is also problematic, I'd like to propose "orchestrator" as
an alternative.
Thomas
On Fri, 19 Jun 2020 at 04:34, Walter Underwood wrote:
> We don’t get to decide whether “master” is a problem. The rest of the world
> has already decided that it i
Can anybody shed some light on this? If not, I'm going to report it as a
bug in JIRA.
Thomas
On Sat, 13 Jun 2020 at 13:37, Thomas Corthals wrote:
> Hi
>
> I'm seeing different ordering on the spellcheck suggestions in cloud mode
> when using spellcheck.ex
"cord" (LD: 1, freq: 1), "card" (LD: 2, freq: 4). In standalone mode, I get
"corp", "cord", "card" with extendedResults true or false.
The results are the same for the /spell and /browse request handlers in
that configset. I've put all combinations side by side in this spreadsheet:
https://docs.google.com/spreadsheets/d/1ym44TlbomXMCeoYpi_eOBmv6-mZHCZ0nhsVDB_dDavM/edit?usp=sharing
Is it something in the configuration? Or a bug?
Thomas
JSON only allows UTF-8, UTF-16 or UTF-32.
Best,
Thomas
On Tue, 9 Jun 2020 at 07:11, Hup Chen wrote:
> Any idea?
> I still won't be able to get TolerantUpdateProcessorFactory working, solr
> exited at any error without any tolerance, any suggestions will be
> appreciated.
> curl &quo
cat:{"add-distinct":["a","b","d"]}}]'
{
"responseHeader":{
"rf":2,
"status":0,
"QTime":81}}
$ curl 'http://localhost:8983/solr/techproducts/select?q=id%3A123&omitHeader=true'
{
"response":{"numFound":1,"start":0,"docs":[
{
"id":"123",
"cat":["a",
"b",
"c",
"a",
"b",
"d"],
"_version_":1668919799351083008}]
}}
Is this a known issue or am I missing something here?
Kind regards
Thomas Corthals
olr to
copy the configset instead of using it directly but maybe I missed it.
Is this use case not possible with Solr Standalone, or did I miss
something obvious? Is my version too old, with this only supported in a
more recent version?
Thanks,
--
Thomas Mortagne
j/
kind regards
Thomas
Thanks so much Erick. Sounds like this should be a perfect approach to helping
resolve our current issue.
On 2/24/20, 6:48 PM, "Erick Erickson" wrote:
Thomas:
Yes, upgrading to 7.5+ will automagically take advantage of the
improvements, eventually... No, you don’t have
Hi Folks –
Few questions before I tackled an upgrade here. Looking to go from 7.4 to 7.7.2
to take advantage of the improved Tiered Merge Policy and segment cleanup – we
are dealing with some high (45%) deleted doc counts in a few cores. Would
simply upgrading Solr and setting the cores to use
Hi
I've run into an issue with creating a Managed Stopwords list that has the
same name as a previously deleted list. Going through the same flow with
Managed Synonyms doesn't result in this unexpected behaviour. Am I missing
something or did I discover a bug in Solr?
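The flow I'm going through looks roughly like this (a sketch; the list name is arbitrary, and I reload the core after each change as the ref guide suggests):

```shell
# Create a managed stopwords list by PUTting an initial array of words
curl -X PUT -H 'Content-type:application/json' \
  --data-binary '["a","an","the"]' \
  "http://localhost:8983/solr/techproducts/schema/analysis/stopwords/mylist"

# Delete the whole list (it must not be referenced by any field type)
curl -X DELETE \
  "http://localhost:8983/solr/techproducts/schema/analysis/stopwords/mylist"

# Re-create a list with the same name; this is the step that misbehaves
curl -X PUT -H 'Content-type:application/json' \
  --data-binary '["a","an","the"]' \
  "http://localhost:8983/solr/techproducts/schema/analysis/stopwords/mylist"
```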
On a newly started solr with
Thank you,
I will fix the image to have the correct collection name. It was confusing
to show a collection overview different from the one you see when
following the tutorial.
/Thomas
On Thu, Jun 27, 2019 at 3:45 PM Alexandre Rafalovitch
wrote:
> Actually, the tutorial does say &quo
as indeed "techproducts", so it
is
the collection name that has changed.
Is it just me doing something wrong? It is hard to believe such an obvious
error
has not been corrected yet. It seems the 7.1 tutorial has the same error.
/Thomas Egense
On 04.01.19, 09:11, "Thomas Aglassinger" wrote:
> When debugging a query using multiplicative boost based on the product()
> function I noticed that the score computed in the explain section is correct
> while the score in the actual result is wrong.
We dug into th
ilter.
The WhitespaceTokenizerFactory ensures that you can define synonyms with
hyphens like mac-book -> macbook.
Best regards, Thomas.
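For illustration, a query-time chain along those lines might look like this (a sketch; the field type and file names are illustrative):

```xml
<fieldType name="text_syn" class="solr.TextField">
  <analyzer type="query">
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <!-- tokenizerFactory here controls how the synonym rules themselves are
         tokenized, so a rule like "mac-book => macbook" in synonyms.txt is
         read as a single hyphenated token rather than split in two -->
    <filter class="solr.SynonymGraphFilterFactory" synonyms="synonyms.txt"
            tokenizerFactory="solr.WhitespaceTokenizerFactory"/>
  </analyzer>
</fieldType>
```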
On 05.01.19, 02:11, "Wei" wrote:
Hello,
We are upgrading to Solr 7.6.0 and noticed that SynonymFilter and
WordDelimiterFilter ha
nd mention of a somewhat similar situation with BooleanQuery, which was
considered a bug and fixed in 2016:
https://issues.apache.org/jira/browse/LUCENE-7132
So my questions are:
1. Is there something wrong in my query that prevents the “Netzteil”-only
product to get a score of 2.0?
2. Shouldn’t the score in the result and the explain section always be the same?
Best regards,
Thomas
I suspect nobody wants to broach this topic, this has to have come up before,
but I can not find an authoritative answer. How does the Standard Query Parser
evaluate boolean expressions? I have three fields, content, status and
source_name. The expression
content:bement AND status:relevant
yie
procedure I need to adhere to if I want to contribute?
> On Nov 30, 2018, at 10:40 AM, Jason Gerlowski wrote:
>
> Hi Thomas,
>
> I recently added a first pass at JSON faceting support to SolrJ. The
> main classes are "JsonQueryRequest" and "DirectJsonQueryR
Hi Shawn, thanks for the prompt reply!
> On Nov 29, 2018, at 4:55 PM, Shawn Heisey wrote:
>
> On 11/29/2018 2:01 PM, Thomas L. Redman wrote:
>> Hi! I am wanting to do nested facets/Grouping/Expand-Collapse using SolrJ,
>> and I can find no API for that. I see I can add a
Hi! I am wanting to do nested facets/Grouping/Expand-Collapse using SolrJ, and
I can find no API for that. I see I can add a pivot field, I guess to a query
in general, but that doesn’t seem to work at all, I get an NPE. The
documentation on SolrJ is sorely lacking, the documentation I have foun
this, we could of course give it
a quick go.
-TZ
On 11/6/18, 12:35 PM, "Shawn Heisey" wrote:
>On 11/6/2018 10:12 AM, Zimmermann, Thomas wrote:
>> Shawn -
>>
>> Server performance is fine and request time are great. We are tolerating
>> the level of traffic,
I should mention I'm also hanging out in the Solr IRC Channel today under
the nick "apatheticnow" if anyone wants to follow up in real time during
business hours EST.
On 11/6/18, 11:39 AM, "Shawn Heisey" wrote:
>On 11/6/2018 9:06 AM, Zimmermann, Thomas wrote:
>> For exampl
On 11/6/18, 11:39 AM, "Shawn Heisey" wrote:
>On 11/6/2018 9:06 AM, Zimmermann, Thomas wrote:
>> For example - 75k request per minute going to this one box, and 3.5k
>>RPM to all other nodes in the cloud.
>>
>> All of those extra requests on the one box are
Question about CloudSolrClient and CLUSTERSTATUS. We just deployed a 3 server
ZK cluster and a 5 node solr cluster using the CloudSolrClient in Solr 7.4.
We're seeing a TON of traffic going to one server with just cluster status
commands. Every single query seems to be hitting this box for statu
We have a Solr v7 Instance sourcing data from a Data Import Handler with a Solr
data source running Solr v4. When it hits a single server in that instance
directly, all documents are read and written correctly to the v7. When we hit
the load balancer DNS entry, the resulting data import handler
In case anyone else runs into this, I tracked it down. I had to force maven to
explicitly include all of its dependent jars in the plugin jar using the
assembly plugin in the pom like so:
<plugin>
  <artifactId>maven-assembly-plugin</artifactId>
  <version>2.5.3</version>
  <configuration>
    <descriptorRefs>
      <descriptorRef>jar-with-dependencies</descriptorRef>
    </descriptorRefs>
  </configuration>
</plugin>
Hi,
We have a custom java plugin that leverages the UpdateRequestProcessorFactory
to push data to multiple cores when a single core is written to. We are
building the plugin with maven, deploying it to /solr/lib and sourcing the jar
via a lib directive in our solr config. It currently works cor
7, 2018, 11:50 PM Shawn Heisey, wrote:
>
>> On 8/17/2018 6:15 PM, Zimmermann, Thomas wrote:
>> > I'm trying to track down an odd issue I'm seeing when using the
>> SolrEntityProcessor to seed some test data from a solr 4.x cluster to a
>> solr 7.x cluster. It seems li
Hi,
I’m trying to track down an odd issue I’m seeing when using the
SolrEntityProcessor to seed some test data from a solr 4.x cluster to a solr
7.x cluster. It seems like strings are being interpreted as multivalued when
passed from a string field to a text field via the copyField directive. Any
stopped filtering on insert for those and switched to filtering on query
based on recommendations from the Solr Doc.
Thanks,
TZ
On 8/15/18, 3:17 PM, "Andrea Gazzarini" wrote:
>Hi Thomas,
>as you know, the two analyzers play in a different moment, with a
>different input and a
Hi,
We have the text field below configured on fields that are both stored and
indexed. It seems to me that applying the same filters on both index and query
would be redundant, and perhaps a waste of processing on the retrieval side if
the filter work was already done on the index side. Is thi
t;, "country": {"set": "country2"' &&
curl "$URL/select?q=id:TEST_ID1"
"response":{"numFound":1,"start":0,"docs":[
{
"id":"TEST_ID1",
"description":["desc
have several setups that triggers this reliably but
there is no simple test case that „fails“ if Tika 1.17 or 1.18 is used. I also
do not know if the error is inside Tika or inside the glue code that makes Tika
usable in SOLR.
Should I file an issue for this?
kind regards,
Thomas
>
Hi,
Solr ships with a script that handles OOM errors and produces a log file
for every occurrence, with content like this:
Running OOM killer script for process 9015 for Solr on port 28080
Killed process 9015
This script works ;-)
kind regards
Thomas
> On 02.08.2018 at 12:28,
if anybody else experienced the same
problems?
kind regards,
Thomas
Hi,
We're in the midst of our first major Solr upgrade in years and are trying to
run some cleanup across all of our client codebases. We're currently using the
standard PHP Solr Extension when communicating with our cluster from our
Wordpress installs. http://php.net/manual/en/book.solr.php
F
. Memory consumption has gone through the roof. Where previously a 512MB heap
was enough, now 6G isn't enough to index all files.
kind regards,
Thomas
> On 04.07.2018 at 15:03, Markus Jelsma wrote:
>
> Hello Andrey,
>
> I didn't think of that! I will try it when i have the courage aga
Thanks all! I think we will maintain our current approach of hand editing
the configs in git and implement something at the shell level to automate
the process of running upconfig and performing a core reload.
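A sketch of that shell-level automation (hosts, paths and names are all illustrative):

```shell
#!/bin/sh
# Push the edited config from the git checkout to ZooKeeper, then reload
# the collection so running cores pick up the change.
set -e
ZKHOST="zk1:2181,zk2:2181,zk3:2181"
CONF_NAME="our_documents"
CONF_DIR="./configs/our_documents/conf"

bin/solr zk upconfig -z "$ZKHOST" -n "$CONF_NAME" -d "$CONF_DIR"
curl "http://localhost:8983/solr/admin/collections?action=RELOAD&name=our_documents"
```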
Hi,
We have several cores with identical configurations with the sole exception
being the language of their document sets. I'd like to leverage Config Sets to
manage them going forward, but ran into two issues I'm struggling to solve
conceptually.
Sample Cores:
our_documents
our_documents_de
ou
Hi,
We're transitioning from Solr 4.10 to 7.x and working through our options
around managing our schemas. Currently we manage our schema files in a git
repository, make changes to the xml files, and then push them out to our
zookeeper cluster via the zkcli and the upconfig command like:
/apps
M, Zimmermann, Thomas wrote:
>> I was wondering if there was a reason Solr 7.4 is still recommending ZK
>>3.4.11 as the major version in the official changelog vs shipping with
>>3.4.12 despite the known regression in 3.4.11. Are there any known
>>issues with running 7.4 alongsid
Hi,
I was wondering if there was a reason Solr 7.4 is still recommending ZK 3.4.11
as the major version in the official changelog vs shipping with 3.4.12 despite
the known regression in 3.4.11. Are there any known issues with running 7.4
alongside ZK 3.4.12. We are beginning a major Solr upgrad
I configured a DataImportHandler using a FileListEntityProcessor to import
files from a folder.
This setup works really well, but I do not know how I should handle changes
on the filesystem (e.g. files added, deleted, ...).
Should I always do a "full-import"? As far as I read, "delta-import" is only
s
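For what it's worth, a full re-import that keeps already-indexed documents can be triggered like this (a sketch; the core name is an assumption, and note that deleted files are not removed from the index this way):

```shell
# Re-run a full import without first wiping the index
# (clean=false keeps existing docs; commit=true makes changes visible)
curl "http://localhost:8983/solr/mycore/dataimport?command=full-import&clean=false&commit=true"

# Check progress/status of the running import
curl "http://localhost:8983/solr/mycore/dataimport?command=status"
```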
g Support Training - http://sematext.com/
>
>
>
> > On 23 May 2018, at 06:46, Thomas Lustig wrote:
> >
> > dear community,
> >
> > Is it possible to index documents (e.g. pdf, word,...) for
> fulltextsearch
> > without storing their content(payload) inside Solr server?
> >
> > Thanking you in advance for your help
> >
> > BR
> >
> > Tom
>
>
dear community,
I would like to automatically add a sha256 filehash to a Document field
after a binary file is posted to a ExtractingRequestHandler.
First I thought that the ExtractingRequestHandler had such a feature, but
so far I did not find a configuration option.
It was mentioned that I should impl
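One client-side workaround (a sketch; the core, field and file names are assumptions) is to compute the hash before posting and pass it to the extract handler as a literal parameter:

```shell
FILE="manual.pdf"
# sha256sum prints "<hash>  <filename>"; keep only the hash
HASH=$(sha256sum "$FILE" | cut -d' ' -f1)

# literal.* parameters set plain field values on the extracted document
curl "http://localhost:8983/solr/mycore/update/extract?literal.id=doc1&literal.file_sha256=$HASH&commit=true" \
  -F "myfile=@$FILE"
```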
dear community,
Is it possible to index documents (e.g. pdf, word,...) for fulltextsearch
without storing their content(payload) inside Solr server?
Thanking you in advance for your help
BR
Tom
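One possible approach (a sketch using the Schema API; the field, type and core names are assumptions): make the extracted-content field indexed but not stored, so it is searchable while the text itself is not kept retrievable in Solr. The trade-off is that you lose highlighting and retrieval of that content.

```shell
# Add a searchable-but-not-stored content field via the Schema API
curl -X POST -H 'Content-type:application/json' \
  --data-binary '{
    "add-field": {
      "name": "content",
      "type": "text_general",
      "indexed": true,
      "stored": false
    }
  }' \
  "http://localhost:8983/solr/mycore/schema"
```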
Hello Team,
I am using solr 7.2.1. I am getting an exception while indexing saying that
"DocValuesField is too large, must be <= 32766, retry?"
This is my field in my managed schema.
When I checked this lucene ticket -
https://issues.apache.org/jira/browse/LUCENE-4583, it says it's fixed long
Hello Team,
I have had a few experiences where restarting a Solr node is the only option
when a core goes down. I am trying to automate the restart of a Solr server
when a core goes down or a replica is unresponsive over a period of time.
I have a script to check if the cores/ replicas associated wit
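The check could be sketched like this (the host, core name, thresholds and the restart command are all assumptions):

```shell
#!/bin/sh
# Restart Solr if a core stops answering its ping handler 3 checks in a row.
CORE_URL="http://localhost:8983/solr/mycore/admin/ping?wt=json"
FAILS=0

while [ "$FAILS" -lt 3 ]; do
  # -f makes curl fail on HTTP errors; the ping handler reports "status":"OK"
  if curl -sf "$CORE_URL" | grep -q '"status":"OK"'; then
    FAILS=0
  else
    FAILS=$((FAILS + 1))
  fi
  sleep 30
done

service solr restart
```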
This is understood.
My question is: I have a keep words filter on field2. field2 is used for
clustering.
Will the cluster algorithm use „some data“ or the result of the application of
the keep words filter applied to „some data“.
Cheers,
Thomas
> On 26.07.2017 at 01:36, Erick Erick
I have defined a copied field on which I would like to use clustering. I
understood that the destination field will store the full content despite the
filter chain I defined.
Now, I have a keep word filter defined on the copied field.
If I run clustering on the copied field will it use the resu
Hey,
we have multiple documents that are matches for the query in question
("name:hubwagen"). Thing is, some of the documents only contain the
query, while others match 100% in the "name" field:
Hochhubwagen
5.9861565
Hubwagen
5.9861565
The debug looks like this (for the first and 5th
quot;.
So I have to look in the logfile to observe when the import has finished.
In the old Solr, non-cloud and non-partitioned, there was an hourglass while
the import was running.
Any idea?
Best regards
Thomas
Hi,
I am new to Solr. I have a use case to add a new node when an existing node
goes down. The new node with a new IP should contain all the replicas that
the previous node had. So I am using a network storage (cinder storage) in
which the data directory (where the solr.xml and the core directorie
> Shawn Heisey wrote on 17 May 2017 at 15:10:
>
>
> On 5/17/2017 6:18 AM, Thomas Porschberg wrote:
> > Thank you. I am now a step further.
> > I could import data into the new collection with the DIH. However I
> > observed the following exception
&g
> Tom Evans wrote on 17 May 2017 at 11:48:
>
>
> On Wed, May 17, 2017 at 6:28 AM, Thomas Porschberg
> wrote:
> > Hi,
> >
> > I did not manipulating the data dir. What I did was:
> >
> > 1. Downloaded solr-6.5.1.zip
> > 2. ensured no
p;collection.configName=heise
{
"responseHeader":{
"status":0,
"QTime":2577},
"success":{"127.0.1.1:8983_solr":{
"responseHeader":{
"status":0,
"QTime":1441},
"core":"h
calhost:8983/solr/admin/collections?action=CREATE&name=karpfen&numShards=2&replicationFactor=1&maxShardsPerNode=2&collection.configName=karpfen
--> error
Thomas
> Susheel Kumar wrote on 15 May 2017 at 14:36:
>
>
> what happens if you create just on
" is a directory.
Are my steps wrong? Did I miss something important?
Any help is really welcome.
Thomas
ata/bestand/index/ manually, but
nothing changed.
What is the reason for this CREATE-error?
Thomas
> ANNAMANENI RAVEENDRA wrote on 12 May 2017 at 15:54:
>
>
> Hi ,
>
> If there is a request handler configured in solrconfig.xml and update the
> Co
s
solrconfig.xml ...
in the zookeeper directories (installation dir and data dir)
3. Started solr:
bin/solr start -c
4. Created a books collection with 2 shards
bin/solr create -c books -shards 2
Result: I see in the web-ui my books collection with the 2 shards. No errors so
far.
However, the Dataimport-entry says:
"Sorry, no dataimport-handler defined!"
What could be the reason?
Thomas
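For comparison, the handler has to be defined in the uploaded configset's solrconfig.xml, and the DIH jar must be on the classpath, roughly like this (paths are illustrative):

```xml
<lib dir="${solr.install.dir:../../../..}/dist/"
     regex="solr-dataimporthandler-.*\.jar"/>

<requestHandler name="/dataimport"
                class="org.apache.solr.handler.dataimport.DataImportHandler">
  <lst name="defaults">
    <str name="config">data-config.xml</str>
  </lst>
</requestHandler>
```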
find one of them. How do I set up DIH for
SolrCloud?
Best regards
Thomas
when trying to restart a
Solr server after activities such as patching.
Is it normal for old tlogs to never get removed in a CDCR setup?
Thomas Tickle
q' and 'facet.*' to work with my setup?
I am still using SOLR 4.10.5
kind regards
Thomas
actly like #get. Also it's not consistent with
SolrInstance#query behavior which return me SolrDocument containing
values and not Fields.
Sorry if I missed it in the releases notes.
Thanks for your time !
--
Thomas
Hey,
I'm playing around with the suggester component, and it works perfectly
as described: Suggestions for 'logitech mouse' include 'logitech mouse
g500' and 'logitech mouse gaming'.
However, when the words in the record supplying the suggester do not
follow each other as in the search terms, no
y tough call whether to use them or not.
Cheers,
Thomas
On 2015-07-27 21:31, Erick Erickson wrote:
> The problem has been that field naming conventions weren't
> _ever_ defined strictly. It's not that anyone is taking away
> the ability to use other characters, rather it's
haracter available for field names
that is usually not allowed in programming language's identifiers (as a
cheap escape character).
Thanks in advance,
Thomas
Hi Ahmet,
Brilliant, thanks a lot!
I thought it might be possible with local parameters, but couldn't find
any information anywhere on how (especially setting the multi-valued
"qf" parameter).
Thanks again,
Thomas
On 2015-07-10 14:09, Ahmet Arslan wrote:
> Hi Tomasi
>
t the internal mechanics are
that make the one or the other faster? Are there other suggestions for
how to make a "filters-only" search as fast as possible?
Also, can it be that it recently changed that the "q.alt" parameter now
influences relevance (in Solr 5.x)? I could have sworn that wasn't the
case previously.
Thanks in advance,
Thomas
it's not
possible to dump all their contents into a single field for that purpose.
Thanks in advance,
Thomas
God damn. Thank you.
*ashamed*
On 30.06.2015 at 00:21, Erick Erickson wrote:
> Try not putting it in double quotes?
>
> Best,
> Erick
>
> On Mon, Jun 29, 2015 at 12:22 PM, Thomas Michael Engelke
> wrote:
>
>> A friend and I are trying to develop s
A friend and I are trying to develop some software using Solr in the
background, and with that come a lot of changes. We're used to older
versions (4.3 and below). We especially have problems with the
autosuggest feature.
This is the field definition (schema.xml) for our autosuggest field:
.
used the analysis tab in the admin UI? You can type in
sentences for both index and query time and see how they would be
analysed by various fields/field types.
Once you have got index time and query time to result in the same tokens
at the end of the analysis chain, you should start seeing matches in
Hey,
in German, you can string most nouns together by using hyphens, like
this:
Industrie = industry
Anhänger = trailer
Industrie-Anhänger = trailer for industrial use
Here [1], you can see me querying "Industrieanhänger" from the "name"
field (name:Industrieanhänger), to make sure the index a
I have Solr as the backend to an ECommerce solution where the fields can
be configured to be searchable, which generates a schema.xml and loads
it into Solr.
Now we also allow to configure Solr search weight per field to affect
queries, so my queries usually look something like this:
spellch
.
Do we have to use a similar set-up when using Solr, that is:
1. Create/update the index
2. Notify the Solr client
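If the notification step is about making index changes visible to searchers: in Solr that is what a commit does, so no separate client notification is needed, e.g. (a sketch, assuming a local core):

```shell
# Make pending updates visible to all searchers
curl "http://localhost:8983/solr/mycore/update?commit=true"
```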
Guy Thomas
Analist-Programmeur
Provincie Vlaams-Brabant
D
boilerplate code look like?
Any ideas would be appreciated.
Kind regards,
Thomas
x. It didn't mention any
corruption though.
Right now, the index has about 25k docs. I haven't optimized this index in
a while, and there are about 4000 deleted-docs. How can I confirm if we
lost anything? If we've lost docs, is there a way to recover it?
Thanks in advance!!
Regards
Thomas
n 4.10.3 I think. What version are you on?
- Mark
On Mon Jan 12 2015 at 7:35:47 AM Thomas Lamy wrote:
Hi,
I found no big/unusual GC pauses in the Log (at least manually; I found
no free solution to analyze them that worked out of the box on a
headless debian wheezy box). Eventually i tried with -
d once a problem occurs. We're also enabling GC logging for
zookeeper; maybe we were missing problems there while focussing on solr
logs.
Thomas
Am 08.01.15 um 16:33 schrieb Yonik Seeley:
It's worth noting that those messages alone don't necessarily signify
a problem with the s
dward
www.flax.co.uk
On 7 Jan 2015, at 10:01, Thomas Lamy wrote:
Hi there,
we are running a 3 server cloud serving a dozen
single-shard/replicate-everywhere collections. The 2 biggest collections are
~15M docs, and about 13GiB / 2.5GiB size. Solr is 4.10.2, ZK 3.4.5, Tomcat
7.0.56, Oracle
running, which does not have these problems
since upgrading to 4.10.2.
Any hints on where to look for a solution?
Kind regards
Thomas
--
Thomas Lamy
Cytainment AG & Co KG
Nordkanalstrasse 52
20097 Hamburg
Tel.: +49 (40) 23 706-747
Fax: +49 (40) 23 706-139
Sitz und Registerger
I was using the SOLR administrative interface to issue my queries. When I
bypass the administrative interface and go directly to SOLR, the JSON return
indicates the AID is as it should be. The issue is in the presentation layer of
the Solr Admin UI. Which is good news.
Thanks all, my bad. Shoul
I believe I have encountered a bug in SOLR. I have a data type defined as
follows:
I have not been able to reproduce this problem for smaller numbers, but for
some of the very large numbers, the value that gets stored for this “aid” field
is not the same as the number that gets indexed. For e
he request I send is this:
foo
ENVELOPE(25.89, 41.13,
47.07, 35.31)
Does anyone have any idea what could be going wrong here?
Thanks a lot in advance,
Thomas
no Tomcat restart was necessary; it was even counterproductive, since state
changes may overwrite the just-fixed entry.
Best regards
Thomas
On 13.11.2014 at 05:47, Jeon Woosung wrote:
you can migrate zookeeper data manually.
1. connect zookeeper.
- zkCli.sh -server host:port
2. chec
On 12.11.2014 at 15:29, Thomas Lamy wrote:
Hi there!
As we got bitten by https://issues.apache.org/jira/browse/SOLR-6530 on
a regular basis, we started upgrading our 7 node cloud from 4.10.1 to
4.10.2.
The first node upgrade worked like a charm.
After upgrading the second node, two cores no
er.java:1037)
at
org.apache.solr.cloud.RecoveryStrategy.doRecovery(RecoveryStrategy.java:355)
at
org.apache.solr.cloud.RecoveryStrategy.run(RecoveryStrategy.java:235)
Any hint on how to solve this? Google didn't reveal anything useful...
Kind regards
Thomas
Like in this article
(http://www.andornot.com/blog/post/Advanced-autocomplete-with-Solr-Ngrams-and-Twitters-typeaheadjs.aspx),
I am using multiple fields to generate different options for an
autosuggest functionality:
- First, the whole field (top priority)
- Then, the whole field as EdgeNGram
2014 08:52, Thomas Michael Engelke wrote:
> I'm toying around with the suggester component, like described here:
> http://www.andornot.com/blog/post/Advanced-autocomplete-with-Solr-Ngrams-and-Twitters-typeaheadjs.aspx
> [1]
>
> So I made 4 fields:
>
> multiValued="
I'm toying around with the suggester component, like described here:
http://www.andornot.com/blog/post/Advanced-autocomplete-with-Solr-Ngrams-and-Twitters-typeaheadjs.aspx
So I made 4 fields:
stored="true" multiValued="true" />
stored="true" multiValued="true" />
indexed="true" stored=
ces only tokens that are in the main index. I think this is basically
> how all the Suggester implementations are designed to work already; are you
> using one of those, or are you using the TermsComponent, or something else?
>
> -Mike
>
> On 11/10/14 2:54 AM, Thomas Mi