Thank you for this answer,
I'm surprised, because the method is suggested by many blogs and forums, as
well as by the official Solr documentation; there must be a way to make it
work, right? Or is it obsolete now?
Anyway, thank you, I'll try one of the solutions you suggested,
JP.
It depends on your use case: what your custom criteria are, how they are
stored, etc. For example, I had two tables, let's say an items table and a
permissions table. The permissions table held itemId,userId pairs, meaning
that userId can see that itemId. My initial effort was to index the items
and add a multivalued field
The initial request was slow while the UnInvertedField was built and cached.
Subsequent queries will be fast. To ensure that users don't see this pause
after a new searcher is opened, you can warm the new searcher in the
background using a static warming query in solrconfig.xml.
There are differen
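Regarding the static warming query mentioned above, a minimal sketch of such
an entry in the <query> section of solrconfig.xml (the warming parameters are
placeholders, not from the original message):

  <listener event="newSearcher" class="solr.QuerySenderListener">
    <arr name="queries">
      <lst>
        <str name="q">*:*</str>
        <str name="sort">price asc</str>
      </lst>
    </arr>
  </listener>

Whatever fields the UnInvertedField is built on should appear in these
warming queries (e.g. as sort or facet fields), so the cache is populated
before the new searcher is registered.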
Doug's requirement could be implemented as follows:
1) Index the title length as an additional field (maybe via
CountFieldValuesUpdateProcessorFactory?)
Title: [solr] [the] [worlds] [greatest] [search] [engine]
title_length = 6
2) Compute the query length on the client side, apply the percentage, etc.,
and use it in a filter.
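A sketch of step 1 as an update chain in solrconfig.xml, following the
pattern from the CountFieldValuesUpdateProcessorFactory javadocs (the chain
and field names are placeholders). Note that the factory counts field
*values*, not tokens, so the title would have to arrive as multiple values
for the count to be 6:

  <updateRequestProcessorChain name="count-title-values">
    <processor class="solr.CloneFieldUpdateProcessorFactory">
      <str name="source">title</str>
      <str name="dest">title_length</str>
    </processor>
    <processor class="solr.CountFieldValuesUpdateProcessorFactory">
      <str name="fieldName">title_length</str>
    </processor>
    <processor class="solr.LogUpdateProcessorFactory"/>
    <processor class="solr.RunUpdateProcessorFactory"/>
  </updateRequestProcessorChain>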
What you're looking for is a QParserPlugin. Here is an example:
http://svn.apache.org/viewvc/lucene/dev/tags/lucene_solr_4_6_0/solr/core/src/java/org/apache/solr/search/FunctionRangeQParserPlugin.java?revision=1544545&view=markup
You probably want to implement the QParserPlugin as a PostFilter.
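For what it's worth, a minimal sketch of that combination against the Solr
4.x APIs (the class names and the per-document check are made up):

  import java.io.IOException;
  import org.apache.lucene.search.IndexSearcher;
  import org.apache.lucene.search.Query;
  import org.apache.solr.common.params.SolrParams;
  import org.apache.solr.common.util.NamedList;
  import org.apache.solr.request.SolrQueryRequest;
  import org.apache.solr.search.*;

  public class CustomQParserPlugin extends QParserPlugin {
    public void init(NamedList args) {}

    public QParser createParser(String qstr, SolrParams localParams,
                                SolrParams params, SolrQueryRequest req) {
      return new QParser(qstr, localParams, params, req) {
        public Query parse() throws SyntaxError {
          return new CustomPostFilter();
        }
      };
    }
  }

  class CustomPostFilter extends ExtendedQueryBase implements PostFilter {
    public boolean getCache() { return false; } // post filters must not be cached
    public int getCost() { return 100; }        // cost >= 100 => runs after the main query

    public DelegatingCollector getFilterCollector(IndexSearcher searcher) {
      return new DelegatingCollector() {
        public void collect(int doc) throws IOException {
          if (matchesCustomCriteria(doc)) { // placeholder for your criteria
            super.collect(doc);
          }
        }
      };
    }

    private boolean matchesCustomCriteria(int doc) {
      return true; // hypothetical per-document lookup goes here
    }
  }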
Hi Erick,
After waiting for some days, about a week (I did daily crawling & indexing),
here is the docs summary:
Num Docs: 9738
Max Doc: 15311
Deleted Docs: 5573
Version: 781
Segment Count: 5
The ratio of deleted docs to numDocs is near 57% (5573 / 9738 ≈ 0.57).
On the other hand, the TieredMergePolicy in solrcon
Hi,
I'm currently looking at writing my first Solr plugin, but I could not
really find any "overview" information about how a Solr request works
internally, what the control flow is, and what kinds of plugins are
available to customize it at which points. The Solr wiki page on
plugins [1], in
We have a large index where each document has a stored multi-valued string
field called products. We also have a lot of customization of search requests:
each request goes through a pre-defined custom search handler, and docIds are
stored for facet calculation.
The following method is called to get fac
The thing is, basic auth doesn't work with Ajax requests .. which is why you
don't see the page loaded.
The server normally responds in such cases with a 401 header, which makes your
browser prompt _you_ for the credentials, sending them back to the server,
which then delivers the page you ask f
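For illustration, the exchange normally looks like this (the path and
credentials are made up; "dXNlcjpwYXNz" is base64 of "user:pass"):

  GET /solr/admin/ HTTP/1.1
  -> 401 Unauthorized
     WWW-Authenticate: Basic realm="Solr"

  GET /solr/admin/ HTTP/1.1
  Authorization: Basic dXNlcjpwYXNz
  -> 200 OK

An Ajax call never shows you that browser prompt, so the credentials have to
be supplied programmatically (e.g. by setting the Authorization header on
the request).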
Hi;
I use Solr 4.5.1 and I have a case: when a user searches for some specific
keywords, some documents should be ranked much higher than their usual
score. I mean, I have probabilities of which documents the user may want to
see for given keywords.
I have come up with an idea: I can put a new fiel
We have been doing exactly that through several versions of Solr: we unpack the
new version on one set of replicas, install empty directories for the core(s)
we want to use, and create empty core.properties files in these. Then, we start
the new replicas, using a (stand-alone) zookeeper for the
I don't know; what is a "high number of cores"? 10? 100? 1,000,000?
In my initial tests I was getting around 1,000/second, on a MacBook Pro with
a spinning disk.
Best,
Erick
On Sat, Nov 30, 2013 at 3:02 PM, Yago Riveiro wrote:
> Erick,
>
> I have no custom stuff:
>
>
>zkClientTimeout="${zkClientTi
I actually tweaked the Stanbol server to handle more results and
successfully ran 10K imports within 30 minutes with no server issue.
I'm looking to further improve the results with regard to efficiency
and NLP accuracy.
Thanks,
Dileepa
On Sun, Dec 1, 2013 at 8:17 PM, Dileepa Jayakody
wrote:
Thanks all, for your valuable ideas into this matter. I will try them. :)
Regards,
Dileepa
On Sun, Dec 1, 2013 at 6:05 PM, Shalin Shekhar Mangar <
shalinman...@gmail.com> wrote:
> There is no support for throttling built into DIH. You can probably write a
> Transformer which sleeps a while afte
To expand a bit on the other replies: yes, your order data should definitely be
denormalized into one single order schema. We store orders this way in Solr,
since near real-time search among live orders is a requirement for several of
our systems.
Something non-Solr though - consider denormaliz
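For illustration, a denormalized order document (one doc per order line,
field names made up) could look like:

  <add>
    <doc>
      <field name="id">order-1001-line-1</field>
      <field name="order_id">1001</field>
      <field name="customer_name">Jane Doe</field>
      <field name="product_sku">SKU-42</field>
      <field name="quantity">3</field>
      <field name="order_date">2013-11-30T12:00:00Z</field>
    </doc>
  </add>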
There is no support for throttling built into DIH. You can probably write a
Transformer which sleeps a while after every N requests to simulate
throttling.
On 26 Nov 2013 14:21, "Dileepa Jayakody" wrote:
> Hi All,
>
> I have a requirement to import a large amount of data from a mysql database
> a
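A minimal sketch of such a throttling Transformer (the batch size and sleep
interval are made-up numbers):

  import java.util.Map;
  import java.util.concurrent.atomic.AtomicLong;
  import org.apache.solr.handler.dataimport.Context;
  import org.apache.solr.handler.dataimport.Transformer;

  public class ThrottlingTransformer extends Transformer {
    private static final AtomicLong rows = new AtomicLong();

    public Object transformRow(Map<String, Object> row, Context context) {
      if (rows.incrementAndGet() % 1000 == 0) { // after every N = 1000 rows...
        try {
          Thread.sleep(500);                    // ...pause for 500 ms
        } catch (InterruptedException e) {
          Thread.currentThread().interrupt();
        }
      }
      return row; // pass the row through unchanged
    }
  }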
On 1 December 2013 11:29, subacini Arunkumar wrote:
> I have a product core and an order core. Is there a way in Solr to fetch
> all fields from two cores in a single query? Solr join can fetch fields
> from only 1 core.
> If we can't, how can we achieve this?? Is the only option to index
> denormalize d
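For reference, the cross-core join syntax looks like this (core and field
names are made up); run against the order core, it filters orders by
matching products, but only fields from the order core come back:

  q={!join from=product_id to=product_id fromIndex=products}category:books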
I was wondering if there is a way to upgrade Solr version without downtime.
Theoretically it seems possible when every shard in the cluster has at least
2 replicas - but Jetty does not refresh the web container until we delete the
solr-webapp folder's contents.
Can someone please share their experien
Hi,
I'm using Solr 4.4 with Jetty, and I'm trying to password-protect the
admin pages.
I've read many posts from this list, as well as the main solr security doc :
http://wiki.apache.org/solr/SolrSecurity#Jetty_realm_example
and added this to my web.xml
Solr authenticated applicatio
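The truncated line above is presumably the <web-resource-name> from the
wiki's Jetty realm example; for reference, a sketch of that web.xml fragment
(role and realm names are placeholders):

  <security-constraint>
    <web-resource-collection>
      <web-resource-name>Solr authenticated application</web-resource-name>
      <url-pattern>/*</url-pattern>
    </web-resource-collection>
    <auth-constraint>
      <role-name>search-role</role-name>
    </auth-constraint>
  </security-constraint>
  <login-config>
    <auth-method>BASIC</auth-method>
    <realm-name>Test Realm</realm-name>
  </login-config>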
I still think this is one of the best ideas that someone has come up with
in years.
In many ways it would be used in most queries, if anyone wanted to look at
the field indexes or the parsed query and get better results.
Maybe people are not talking about it because mm=1, mm=0 is still overly
confusing
Well, I think your issue is batchSize: batchSize="1" should be batchSize="-1".
I also recommend you use *readOnly="true"*.
On Tue, Nov 26, 2013 at 1:50 AM, Dileepa Jayakody wrote:
> Hi All,
>
> I have a requirement to import a large amount of data from a mysql database
> and index documents (about
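A sketch of the corresponding dataSource element in data-config.xml (driver,
URL and credentials are placeholders); with the MySQL driver, batchSize="-1"
makes DIH stream rows instead of buffering the whole result set:

  <dataSource type="JdbcDataSource"
              driver="com.mysql.jdbc.Driver"
              url="jdbc:mysql://localhost:3306/mydb"
              user="reader" password="secret"
              batchSize="-1"
              readOnly="true"/>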