Hi Emir,
I have a bunch of ContentGroup values, so boosting them individually is
cumbersome. I have boosting on the query fields
qf=text^6 title^15 IndexTerm^8
and
bq=Source:simplecontent^10 Source:Help^20
(-ContentGroup-local:("Developer"))^99
I was hoping *(-ContentGroup-local:("Developer"))^9
Binoy, 0.1 is still a positive boost. With title getting the highest weight,
this won't make any difference. I've tried this as well.
--
View this message in context:
http://lucene.472066.n3.nabble.com/Query-time-de-boost-tp4259309p4259552.html
Sent from the Solr - User mailing list archive at Nabble.com.
Emir, I don't think Solr supports a negative boosting syntax like *^-99*. I
can certainly do something like:
bq=(*:* -ContentGroup:"Developer's Documentation")^99 , but then I can't have
my other bq parameters.
This doesn't work --> bq=Source:simplecontent^10 Source:Help^20 (*:*
-ContentGroup:"Devel
Thanks Walter, I've tried this earlier and it works. But the problem in my
case is that I have boosting on a few Source parameters as well. My ideal
"bq" should look like this:
*bq=Source:simplecontent^10 Source:Help^20 (*:*
-ContentGroup-local:("Developer"))^99*
But this is not going to work.
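For what it's worth, bq is a multi-valued parameter in (e)dismax, so the
match-all de-boost trick can sit alongside the other boost clauses by sending
each clause as its own bq parameter. A minimal sketch (the query text is made
up; the qf/bq values are from this thread):

```python
from urllib.parse import urlencode

# Each bq clause becomes its own parameter, so the (*:* -field:value)^boost
# de-boost can coexist with the Source boosts.
params = [
    ("q", "installation guide"),
    ("defType", "edismax"),
    ("qf", "text^6 title^15 IndexTerm^8"),
    ("bq", "Source:simplecontent^10"),
    ("bq", "Source:Help^20"),
    ("bq", '(*:* -ContentGroup-local:"Developer")^99'),
]
query_string = urlencode(params)
print(query_string)
```

This is only a sketch of how the request could be assembled; the relative
boost values would still need tuning.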
I'm wo
I tried the function query route, but getting a weird exception.
bf=if(termfreq(ContentGroup,'Developer Doc'),-20,0) throws an exception:
org.apache.solr.search.SyntaxError: Missing end quote for string at pos 29
str='if(termfreq(ContentGroup,'Developer'. Does it only accept a single word
or the
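If I remember right, parameter dereferencing works inside function queries,
which sidesteps the quoting problem: the multi-word value moves into its own
request parameter. A sketch (the dv parameter name is made up, and this
assumes ContentGroup is an untokenized string field, since termfreq matches a
single indexed term):

```python
from urllib.parse import urlencode

# $dv inside the function is dereferenced to the separate "dv" parameter,
# so the space in "Developer Doc" never hits the function parser.
params = {
    "q": "help",
    "defType": "edismax",
    "bf": "if(termfreq(ContentGroup,$dv),-20,0)",
    "dv": "Developer Doc",
}
qs = urlencode(params)
print(qs)
```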
David, this is a tad weird. I've seen this error if you turn on docValues for
an existing field. You can try running an "optimize" on your index and see if
it helps.
Doug, do we have a date for the hard copy launch?
Thanks a lot, Erick. You are right, it's a tad small with around 20 million
documents, but the growth projection is around 50 million in the next 6-8
months. It'll continue to grow, but maybe not at the same rate. From the
index size point of view, it can grow up to half a TB from its current state.
Thanks Erick and Walter, this is extremely insightful. One last follow-up
question on composite routing. I'm trying to get a better understanding of
index distribution. If I use language as a prefix, SolrCloud guarantees that
same-language content will be routed to the same shard. What I'm curious t
Thanks Alessandro, that answers my doubt. In a nutshell, to make the MLT
query parser work, you need to know the document id. I'm just curious as to
why this constraint has been added. It will not work for a bulk of use cases.
E.g., if we are trying to generate MLT based on a text or a keyword, how
w
Thanks Shawn and Alessandro. I get the part why id is needed. I was trying to
compare with the "mlt" request handler, which doesn't enforce such a
constraint. My previous example of title/keyword is not the right one, but I
do have fields which are unique to each document and can be used as a key to
e
I didn't use the REST API, instead updated the schema manually.
Can you be specific on removing the data directory content ? I certainly
don't want to wipe out the index. I have four Solr instances, 2 shards with a
replica each. Are you suggesting clearing the index and re-indexing from
scratch?
Thanks Erick.
Here's the part which I'm not able to understand. I have, for example,
Sources A, B, C, and D in the index. Each source contains "n" documents. Now,
out of these, a bunch of documents in A and B are tagged with MediaType. I
took the following steps:
1. Delete all documents tagged with M
Hi Kevin,
Were you able to get a workaround / fix for your problem ? I'm also
looking to secure Collection and Update APIs by upgrading to 5.3. Just
wondering if it's worth the upgrade or should I wait for the next version,
which will probably address this.
Regards,
Shamik
--
Thanks Markus. I've been using field collapsing till now, but the performance
constraint is forcing me to think about index-time de-duplication. I've been
using a composite router to make sure that duplicate documents are routed to
the same shard. Won't that work for SignatureUpdateProcessorFactory
Thanks Scott. I could directly use field collapsing on the adskdedup field
without the signature field. The problem with field collapsing is the
performance overhead: it slows queries down almost tenfold.
CollapsingQParserPlugin is a better option; unfortunately, it doesn't
support an ngroups equivalent, which
Thanks for your reply. Have you customized SignatureUpdateProcessorFactory,
or are you using the configuration out of the box? I know it works for simple
dedup, but my requirement is a tad different, as I need to tag an identifier
to the latest document. My goal is to understand if that's possible usi
That's what I observed as well. Perhaps there's a way to customize
SignatureUpdateProcessorFactory to support my use case. I'll look into the
source code and figure out if there's a way to do it.
The other way you can do that is to specify the startup parameters in
solr.in.sh.
Example :
SOLR_MODE=solrcloud
ZK_HOST="zoohost1:2181,zoohost2:2181,zoohost3:2181"
SOLR_PORT=4567
You can then simply start Solr by running "./solr start"
Looks like it's happening for any field which is using docvalues.
java.lang.IllegalStateException: unexpected docvalues type NONE for field
'title_sort' (expected=SORTED). Use UninvertingReader or index with
docvalues.
Any idea ?
Thanks for your reply. Initially, I was under the impression that the issue
is related to grouping as group queries were failing. Later, when I looked
further, I found that it's happening for any field for which docValues have
been turned on. The second example I took was from another field. Here's a
Well, I think I've narrowed down the issue. The error happens when I do a
rolling update from Solr 4.7 (our current version) to 5.0. I'm able to
reproduce this a couple of times. If I do a fresh index on 5.0, it works. Not
sure if there's any other way to mitigate it.
Wow, "optimize" worked like a charm. This really addressed the docValues
issue. A follow-up question: is it recommended to run optimize on a
production Solr index? Also, in SolrCloud mode, do we need to run optimize on
each instance / each shard / any instance?
Appreciate your help Alex.
Thanks for your reply, Erick.
In my case, I have 14 languages, out of which 50% of the documents are
English. German and CHS will probably constitute another 25%. I'm not using
copyField; rather, each language has its dedicated field, such as title_enu,
text_enu, title_ger, text_ger, etc. Since I
Ok, I figured out the steps in case someone needs a reference. It required
both zkcli and RELOAD to push the changes.
1. Use zkcli to load the changes. I ran it from the node which used the
bootstrapping.
sh zkcli.sh -cmd upconfig -zkhost zoohost1:2181 -confname myconf -solrhome
/mnt/opt/solrhom
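The second step can be sketched like this: after upconfig, ask the
Collections API to reload the collection so every replica picks up the new
config. The host and collection name below are made up:

```python
from urllib.parse import urlencode

# Build the Collections API RELOAD request; issuing it against any node of
# the cluster reloads all replicas of the named collection.
base = "http://zoohost1:8983/solr/admin/collections"
reload_url = base + "?" + urlencode({"action": "RELOAD", "name": "mycollection"})
print(reload_url)
```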
Thanks Shawn for the pointer, really appreciate it.
You should look at CollapsingQParserPlugin. It's much faster compared to a
grouping query.
https://wiki.apache.org/solr/CollapsingQParserPlugin
It has a limitation though, check the following JIRA if it might affect your
use-case.
https://issues.apache.org/jira/browse/SOLR-6143
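A minimal collapse query might look like this (the groupid field name is
hypothetical; collapsing is expressed as a filter query):

```python
from urllib.parse import urlencode

# {!collapse field=...} as an fq keeps only one document per groupid value.
params = {
    "q": "*:*",
    "fq": "{!collapse field=groupid}",
}
qs = urlencode(params)
print(qs)
```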
Anyone ?
Thanks Doug. I'm using eDismax
Here's my Solr query :
http://localhost:8983/solr/testhandlerdeu?debugQuery=true&q=title_deu:Software%20und%20Downloads
Here's my request handler.
[request handler XML stripped by the mail archive; only the values
"explicit", "0.01", and "velocity" survive]
Thanks a ton Doug, I should have figured this out, pretty stupid of me.
Appreciate your help.
"": {
"v": 2
}
}
}
And authorization:
{
  "responseHeader": {
    "status": 0,
    "QTime": 0
  },
  "authorization.enabled": true,
  "authorization": {
    "class": "solr.RuleBasedAuthorizationPlugin",
    "user-role": {
      "solr": "admin",
      "superuser": ["browseRole", "selectRole"],
      "beehive": ["browseRole", "selectRole"]
    },
    "permissions": [
      {"name": "security-edit", "role": "admin"},
      {"name": "select", "collection": "gettingstarted",
       "path": "/select/*", "role": "selectRole"},
      {"name": "browse", "collection": "gettingstarted",
       "path": "/browse", "role": "browseRole"}
    ],
    "": {
      "v": 7
    }
  }
}
I was under the impression that these roles are independent of each other;
based on the assignment, an individual user should be able to access their
respective areas. On a related note, I was not able to make roles like "all"
and "read" work.
Not sure what I'm doing wrong here. Any feedback will be appreciated.
Thanks,
Shamik
Anyone ?
Brian,
Thanks for your reply. My first post was a bit convoluted; I tried to explain
the issue in the subsequent post. Here's a security JSON. I have solr and
beehive assigned the admin role, which allows them access to "update" and
"read". This works as expected. I add a new role "browseRole"
Ok, I found another way of doing it which will preserve the QueryResponse
object. I've used DefaultHttpClient, set the credentials and finally passed
it as a constructor to the CloudSolrClient.
DefaultHttpClient httpclient = new DefaultHttpClient();
UsernamePasswordCredentials defaultcreds = new
anyone ?
Anyone ?
Hi Doug,
Congratulations on the release; I guess a lot of us have been eagerly
waiting for this. Just one quick clarification: you mentioned that the
examples in your book are executed against Elasticsearch. For someone
familiar with Solr, will it be an issue to run those examples in a Solr
ins
Thanks for all the pointers. With the 50% discount, picking up a copy is a
no-brainer.
dcarticles^9.0 Source2:downloads^5.0
1.0/(3.16E-11*float(ms(const(147216960),date(PublishDate)))+1.0)
The part I'm confused about is why the two queries are being interpreted
differently.
Thanks,
Shamik
Anyone ?
Thanks Alex. With the conventional join query I'm able to return the parent
document based on a query match on the child. But it filters out any other
documents which are outside the scope of the join condition. E.g., in my
case, I would expect the query to return:
1
Parent title
123
Thanks for getting back on this. I was trying to formulate a query along
similar lines but haven't been able to construct it (multiple clauses)
correctly so far. That can be attributed to my inexperience with Solr queries
as well. Can you please point to any documentation / example for my reference?
Appreci
Thanks Alex, this has been extremely helpful. There's one doubt, though.
The query returns the expected result if I use the "select" or "query"
request handler, but fails for others. Here's the debug output from "/select"
using edismax.
http://localhost:8983/solr/techproducts/query?q=({!join%20from=manu_i
Sorry to bump this up, but can someone please explain the parsing behaviour
of a join query (shown above) with respect to different request handlers?
Thanks again, Alex.
I should have clarified the use of the browse request handler: I'm simulating
my production system's request handler parameters using browse. I used a
separate request handler and stripped down all properties to match "select".
I finally narrowed down the issue to Minim
Did you take a look at the Collapsing Query Parser?
https://cwiki.apache.org/confluence/display/solr/Collapse+and+Expand+Results
You can try something like :
query.add("json.facet", your_json_facet_query);
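Since the json.facet value is itself a JSON string, building it with a JSON
serializer avoids hand-quoting mistakes. A sketch (the facet name and field
are illustrative; "Source" appears earlier in this digest):

```python
import json
from urllib.parse import urlencode

# A terms facet over the Source field, serialized into the json.facet param.
facet = {"sources": {"type": "terms", "field": "Source", "limit": 10}}
params = {"q": "*:*", "json.facet": json.dumps(facet)}
qs = urlencode(params)
print(qs)
```

The same serialized string is what you would pass to
query.add("json.facet", ...) in SolrJ.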
Thanks Koji, I've tried KeywordRepeatFilterFactory, which keeps the original
term, but the stopword filter in the analysis chain will remove it
nonetheless. That's why I thought of creating a separate field devoid of
stopwords/stemming. Let me know if I'm missing something here.
Hi Koji,
I'm using a copy field to preserve the original term with the stopword. It's
mapped to titleExact.
textExact definition: [field type XML stripped by the mail archive]
Any suggestion?
Apologies, 290gb was a typo on my end; it should read 29gb instead. I started
with my 5.5 configuration of limiting the RAM to 15gb, but it started going
down once it reached the 15gb ceiling. I tried bumping it up to 29gb since
memory seemed to stabilize at 22gb after running for a few hours, of co
Thanks for your suggestion, I'm going to tune it and bring it down. It just
happened to carry over from the 5.5 settings. Based on Walter's suggestion,
I'm going to reduce the heap size and see if it addresses the problem.
Walter, thanks again. Here's some information on the index and search
feature.
The index size is close to 25gb, with 20 million documents. It has two
collections, one being introduced with the 6.6 upgrade. The primary
collection carries the bulk of the index, with the newly formed one aimed at
getting po
I agree, I should have made it clear in my initial post. The reason I thought
it was trivial is that the newly introduced collection has only a few hundred
documents and is not being used in search yet. Neither is it being indexed at
a regular interval. The cache parameters are kept to a minimum as
w
Thanks, the change seems to have addressed the memory issue (so far), but on
the flip side, GC choked the CPUs. CPU utilization across the cluster clocked
close to 400%, literally stalling everything. On first look, the G1 Old
Generation looks to be the culprit that to
Emir, after digging deeper into the logs (using New Relic / Solr admin)
during the outage, it looks like a combination of query load and the indexing
process triggered it. Based on the earlier pattern, memory would tend to
increase at a steady pace, but then surge all of a sudden, triggering an OOM.
After I
All the tuning and scaling down of memory seemed stable for a couple of days,
but then it came down due to a huge spike in CPU usage, contributed by G1 Old
Generation GC. I'm really puzzled why the instances are suddenly behaving
like this. It's not that a sudden surge of load contributed to
this
I usually log queries that take more than 1 sec. Based on the logs, I haven't
seen anything alarming or any surge in slow queries, especially around the
time the CPU spike happened.
I don't necessarily have the data for deep paging, but the usage of the sort
parameter (date in our case) has b
Susheel, my inference was based on the QTime value from the Solr log, not the
application log. Before the CPU spike, the query times didn't give any
indication that queries were slowing down. As the GC suddenly triggers high
CPU usage, query execution slows down or chokes,
b
Thanks Emir. The index is equally split between the two shards, each having
approx 35gb. The total number of documents is around 11 million, which should
be distributed equally between the two shards. So each core should take 3gb
of the heap for a full cache. Not sure I get the "multiply it by number
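The 3gb-per-core estimate can be sanity-checked with bitset arithmetic: each
filterCache entry is roughly a bitset of maxDoc/8 bytes, so with ~5.5 million
docs per shard and an assumed cache size of 4096 entries:

```python
# Rough filterCache sizing check: one bit per document per cached filter.
# The cache size of 4096 entries is an assumption, not from the thread.
docs_per_shard = 11_000_000 // 2
bytes_per_entry = docs_per_shard / 8
cache_size = 4096
total_gb = bytes_per_entry * cache_size / 1024**3
print(f"{bytes_per_entry / 1024**2:.2f} MB per entry, {total_gb:.1f} GB full")
```

That lands in the same ballpark as the 3gb figure above.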
Thanks Erick, in my case each replica is running in its own JVM, so even if
we consider 8gb of filter cache, it still has 27gb to play with. Isn't this a
decent amount of memory to handle the rest of the JVM operations?
Here's an example of the implicit filters that get applied to almost all the
q
Zisis, thanks for chiming in. This is really interesting information and
probably in line with what I'm trying to fix. In my case, the facet fields
are certainly not high-cardinality ones. Most of them have a finite set of
values, the max being 200 (though it has a low usage percentage). Earlier I
had fac
Thanks Emir and Zisis.
I added maxRamMB for filterCache and reduced the size. I could see the
benefit immediately: the hit ratio went to 0.97. Here's the configuration:
It seemed stable for a few days; the cache hits and JVM pool utilization
seemed to be well within the expected range. But th
Thanks for the pointer, Alex. I'll go through all four articles;
Thanksgiving will be fun :-)
Anyone ?
Thanks, John.
The title is not unique, so I can't really rely on it. Also, keeping an
external mapping of url to id might not be feasible, as we are talking about
possibly millions of documents.
URLs are unique in our case; unfortunately, they can't be used as part of
the query elevation component since
Charlie, thanks for sharing the information. I'm going to take a look and get
back to you.
Charlie, this looks very close to what I'm looking for. Just wondering if
you've made this available as a jar, or can it be built from source? Our Solr
distribution is not built from source; I can only use an external jar. I'd
appreciate it if you could let me know.
6.210026
As you can see, the results are completely off, except for the first one.
Moreover, the numbers of results returned are different as well. The group
query has 13648 results while CollapsingQParserPlugin returns 27142, almost
twice the size.
I'm a little baffled
Hi Joel, thanks for taking a look into this. Here's the information you had
requested. *ADSKDedup:* I've attached separate files with debug information
for each query. Let me know if you need any information. Regards, Shamik
CollapsingQParserPlugin_Query_Debug.txt
<http://lucene.472066.
I'm using haproxy to perform round-robin requests to any of the 6 servers
(2 shards, 4 replicas). Ideally, a full crawl should have updated the cache
with the new set of data. I even re-started the instance, but the problem
seems to persist.
I'd appreciate it if someone could provide the
it'll probably be easy for you to
look.
Regards,
Shamik
}}
Here's the sample query :
http://localhost:8983/solr/adskhelpportal?q=How%20can%20I%20obtain%20local%20offline%20Help&wt=xml&debugQuery=true&rows=1
I'm using SolrCloud with 2 shards and a replica each. I'm getting
parsedQuery / parse
Thanks Nicole. Leveraging dynamic field definitions is a great idea. It'll
probably work for me, as I have a bunch of fields which are indexed as
"String". Just curious about the sharding: are you using SolrCloud? I thought
of taking the dedicated shard / core route, but then, as using a composite
key (fo
Awesome, thanks a lot Anshum, makes total sense now. Appreciate your help.
Turned out to be a weird exception. Apparently, the comments in
stopwords_fr.txt disrupt the stop filter factory. After I stripped out the
comments, it worked as expected.
Referred to this thread :
http://mail-archives.apache.org/mod_mbox/lucene-dev/201309.mbox/%3CJIRA.12668581.1379112889603
I found the issue. It had to do with the edismax "qf" entry in the request
handler. I had the following entry:
name_fra^1.2 title_fra^10.0 description_fra^5.0
author^1
Except for author, all the fields are of type adsktext_fra, while author was
of type text_general, which uses the English stop filter
Thanks Ahmet, I'll give it a shot.
Sorry, that's a typo from when I copied the mlt definition from my
solrconfig; there is a comma in my test environment. It's not the issue.
Anyone ?
Thanks for the pointer, Erick. You are right, I forgot to include "IJK" under
AB. Also, the facet field names are different. Unfortunately, I'm using
SolrCloud, and facet pivot doesn't seem to work in distributed mode. I'll get
back some results if I use distrib=false, but then it's not the right
data.
Are there any plans to release this feature anytime soon? I think this is
pretty important, as a lot of search use cases depend on the facet count
being returned with the search result. This issue renders the
CollapsingQParserPlugin pretty much unusable. I'm now reverting back to the
ol
Yes it does, and it's pretty straightforward.
Refer to following url :
http://heliosearch.org/solr/atomic-updates/
http://www.mumuio.com/solrj-4-0-0-alpha-atomic-updates/
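For reference, an atomic-update document carries only the id plus the fields
to change, each wrapped in a modifier such as set/add/inc. A sketch with
made-up field names and values:

```python
import json

# "set" replaces a field value, "inc" increments a numeric field;
# untouched fields keep their existing values.
doc = {
    "id": "doc-42",
    "Source": {"set": "Help"},
    "viewcount": {"inc": 1},
}
payload = json.dumps([doc])
print(payload)
```

The same shape is what SolrJ builds when you put a Map like
{"set": value} into a SolrInputDocument field.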
ork.
Any pointers will be highly appreciated.
Regards,
Shamik
Thanks Primoz, I was suspecting that too. But then, it's hard to imagine that
the query cache alone is contributing to the big performance hit. The setting
applies to the old configuration, and it works pretty well even with a low
query cache hit rate.
I tried commenting out NOW in bq, but it didn't make any difference in
performance. I do see a minor entry in the queryFilterCache rate, which is a
meager 0.02.
I'm really struggling to figure out the bottleneck; are there any known pain
points I should be checking?
Thanks for the information. I think it's good to have this issue fixed,
especially for cases where the spellcheck feature is on. I'll check out the
source code and take a look; even quickly suppressing the null pointer
exception might make a difference.
Bumping up this thread as I'm facing a similar issue. Any solution?
Thanks Joel, appreciate your help. Is Solr 4.6 due this year ?
Thanks for the update Shawn, will look forward to the release.
Joel,
Thanks for the pointer. I went through your blog on document routing; very
informative. I do need some clarifications on the implementation. I'll try
to run it based on my use case.
I'm indexing documents from multiple source systems, out of which a bunch
contain duplicate content. I'm
Thanks Joel, I found the issue. It had to do with the schema definition for
the adskdedup field. I had defined it as text_general, which was analyzing it
based on "-". After I changed it to type string, it worked as expected.
Thanks for looking into this.
Thanks Joel, really appreciate your help. I'll keep an eye on the 4.6.1
release.
the
fact that I can't remove stale content.
Let me know if I'm missing something here.
- Thanks,
Shamik
Ok, I was wrong here. I can always set the indextimestamp field to the
current time (NOW) for every atomic update. On a similar note, is there any
performance penalty for updates compared to adds?
Thanks Erick and Shawn, appreciate your help.
Thanks a lot, Shawn. Changing the appends filtering based on your suggestion
worked. The part which confused me big time is the syntax I've been using so
far without an issue (barring the q.op part):
Source:"TestHelp" | Source:"downloads" |
-AccessMode:"internal" | -workflowparentid:[* TO *]
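I don't know Shawn's exact rewrite, but the usual fix when a negative clause
sits inside a boolean group is to pair it with the match-all query: a lone
negative is special-cased only at the top level. A sketch reusing the fields
above:

```python
# Inside a parenthesized group, a bare negative matches nothing, so the
# negative clause gets a *:* guard. The OR form here is an assumption about
# the intended semantics of the original "|"-separated filter.
broken = '(Source:"TestHelp" OR Source:"downloads" OR -AccessMode:"internal")'
fixed = '(Source:"TestHelp" OR Source:"downloads" OR (*:* -AccessMode:"internal"))'
print(fixed)
```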
Thanks, I'll take a look at the debug data.
Jack, thanks for the pointer. I should have checked this more closely. I'm
using edismax and here's my qf entry:
id^10.0 cat^1.4 text^0.5 features^1.0 name^1.2 sku^1.5 manu^1.1
title^10.0 description^5.0 keywords^5.0 author^2.0 resourcename^1.0
As you can see, I was boosting id and
As Shawn pointed out, if you are using the CloudSolrServer client, then you
are immune to the scenario where a shard and its replica(s) go down. The
communication should ideally be with the ZooKeepers and not the Solr servers
directly. One thing you need to make sure of is to add the shards.tolerant
parame
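The parameter cut off above is presumably shards.tolerant; a sketch of a
tolerant request (the query itself is a placeholder):

```python
from urllib.parse import urlencode

# With shards.tolerant=true a distributed query returns partial results
# instead of failing outright when an entire shard is unreachable.
params = {"q": "*:*", "shards.tolerant": "true"}
qs = urlencode(params)
print(qs)
```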