Hi,
Have you already tried the "group.cache.percent" parameter? It might improve
grouping performance.
Or, if you try the CollapsingQParserPlugin, you can use the ExpandComponent to
retrieve all of the values in the groups, I think.
(https://lucene.apache.org/solr/guide/6_6/collapse-and-expand-results.html#collapse-and-expand)
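As a sketch, the collapse/expand combination suggested above looks like the
following request parameters (the field name group_field is an assumption for
illustration):

```text
q=*:*
fq={!collapse field=group_field}
expand=true
expand.rows=5
```

The collapse post-filter keeps one head document per group, and expand=true
returns the remaining members of each collapsed group in a separate "expanded"
section of the response.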
Hi Yasufumi
Thanks for the reply. Yes, you are correct. I also checked the code and it
seems the same.
We are facing performance issues due to grouping so wanted to be sure that
we are not leaving out any possibility of caching the same in Query Result
Cache.
I was just exploring field collapsing.
Hi,
I don't know much about the grouping component, but I think that would be very
hard, because the query result cache has a {query and conditions} -> {DocList}
structure.
(
https://github.com/apache/lucene-solr/blob/e30264b31400a147507aabd121b1152020b8aa6d/solr/core/src/java/org/apache/solr/search/SolrIndexSearcher.java#
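To make that structure concrete, here is a minimal sketch in Java (not Solr's
actual code, just an illustration of the shape described above): a bounded LRU
map from {query + conditions} to an ordered doc-id list.

```java
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Minimal sketch (NOT Solr's implementation) of the structure described
// above: a bounded LRU map from {query + conditions} to an ordered list
// of matching internal doc ids (the DocList).
public class QueryResultCacheSketch {

    // The cache key is the query string plus its conditions (sort, filters).
    public record CacheKey(String query, String sort, List<String> filters) {}

    private final Map<CacheKey, int[]> map;

    public QueryResultCacheSketch(int maxSize) {
        // accessOrder=true turns LinkedHashMap into an LRU map.
        this.map = new LinkedHashMap<>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<CacheKey, int[]> eldest) {
                return size() > maxSize;
            }
        };
    }

    public void put(CacheKey key, int[] docList) { map.put(key, docList); }

    public int[] get(CacheKey key) { return map.get(key); }

    public static void main(String[] args) {
        QueryResultCacheSketch cache = new QueryResultCacheSketch(2);
        CacheKey k = new CacheKey("field:foo", "score desc", List.of("type:bar"));
        cache.put(k, new int[]{4, 2, 9});
        // A repeat of the exact same query + conditions is a cache hit.
        System.out.println(Arrays.toString(cache.get(k)));  // [4, 2, 9]
    }
}
```

A grouped response is not a single flat DocList like this, which is part of
why grouped results don't fit the query result cache.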
I am not sure, but it looks like your XML is invalid:
last_modified > XYZ
You need to switch to &gt; (the XML entity), or use something like a database
view, so that the > and other < characters will not cause problems.
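A sketch of what the escaped condition might look like inside a
DataImportHandler data-config.xml (the entity and column names here are
assumptions for illustration):

```xml
<!-- hypothetical entity; note the &gt; entity instead of a literal > -->
<entity name="item"
        query="SELECT id, title FROM item
               WHERE last_modified &gt; '${dataimporter.last_index_time}'"/>
```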
On Sat, Dec 17, 2016 at 7:01 AM, Per Newgro wrote:
> Hello,
>
> we are implementing a questionnaire tool
following up on this, I've created
https://issues.apache.org/jira/browse/SOLR-5826 , with a draft patch.
Regards,
Tommaso
2014-03-05 8:50 GMT+01:00 Tommaso Teofili :
> Hi all,
>
> I have the following requirement where I have an application talking to
> Solr via SolrJ where I don't know upfront
: I'd like to tell Solr to cache those boost queries for the life of the
: Searcher so they don't get recomputed every time. Is there any way to do
: that out of the box?
if the functions never change, you could just index the computed value up
front and save cycles at query time -- but that's t
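The index-time alternative suggested above might look like the following
sketch; the field name computed_boost and the boost-parameter usage are
assumptions for illustration:

```xml
<!-- hypothetical update document carrying a precomputed boost value -->
<add>
  <doc>
    <field name="id">doc-1</field>
    <field name="computed_boost">3.7</field>
  </doc>
</add>
```

At query time, something like boost=field(computed_boost) with edismax then
reads the stored value instead of recomputing the function on every request.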
Gregg,
The QueryResultCache caches a sorted int array of the results matching a query.
This should overlap very nicely with your desired behavior, as a hit in this
cache avoids both re-running the Lucene query and recalculating scores.
Now, ‘for the life of the Searcher’ is the trick here. You
You can also try: https://www.varnish-cache.org/
2013/10/21 Alexandre Rafalovitch
> I have not used it myself, but perhaps something like
> http://www.crawl-anywhere.com/ is along what you were looking for.
>
> Regards,
>Alex.
>
I have not used it myself, but perhaps something like
http://www.crawl-anywhere.com/ is along what you were looking for.
Regards,
Alex.
Personal website: http://www.outerthoughts.com/
LinkedIn: http://www.linkedin.com/in/alexandrerafalovitch
Thanks Alex.
I was thinking if something already exists of this sort.
On Mon, Oct 21, 2013 at 12:05 PM, Alexandre Rafalovitch
wrote:
> Not in Solr itself, no. Solr is all about Search. Caching (and rewriting
> resource links, etc) should probably be part of whatever does the document
> fetching.
Not in Solr itself, no. Solr is all about Search. Caching (and rewriting
resource links, etc) should probably be part of whatever does the document
fetching.
Regards,
Alex.
Thanks, this is exactly what I'm looking for!
--
View this message in context:
http://lucene.472066.n3.nabble.com/Caching-queries-tp3078271p3087497.html
Sent from the Solr - User mailing list archive at Nabble.com.
On 6/17/2011 4:26 PM, arian487 wrote:
I'm wondering if something like this is possible. Let's say I want to query
5000 objects all pertaining to a specific search and I want to return the
top 100 or something and cache the rest on my solr server. The next time I
get the same query or something w
Well, it depends on how you've set the parameters in solrconfig.xml
for the queryResultWindowSize. Note that this size is simply
the size of a list of integers, so it's not a very expensive cache.
Best
Erick
On Fri, Jun 17, 2011 at 6:26 PM, arian487 wrote:
> I'm wondering if something like this
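For reference, the two settings involved live in solrconfig.xml; a sketch
(the sizes here are illustrative, not recommendations):

```xml
<!-- round row requests up to this window, so subsequent pages of the
     same query can be served from the queryResultCache -->
<queryResultWindowSize>100</queryResultWindowSize>

<queryResultCache class="solr.LRUCache"
                  size="512"
                  initialSize="512"
                  autowarmCount="0"/>
```

A request for rows 0-9 is rounded up to the window size, so "the rest" is
cached up to that window rather than up to an arbitrary 5000.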
: Pretty much every one of my queries is going to be unique. However, the
: query is fairly complex and also contains both unique and non-unique
: data. In the query, some fields will be unique (e.g description), but
: other fields will be fairly common (e.g. category). If we could use
: those
I would suggest benchmarking this before doing any more complex
design. A field with only 10k unique integer or string values will
search very very quickly.
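The split described above (unique terms in the main query, common fields as
filter queries) would look like the following sketch, with assumed field
names:

```text
q=description:(some unique description text)
fq=category:books
```

Even when q is unique on every request, the docset for fq=category:books is
cached in the filterCache and reused across queries.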
On Thu, May 6, 2010 at 7:54 AM, Nagelberg, Kallin
wrote:
> Hey everyone,
>
> I'm having some difficulty figuring out the best way to optimiz
> I'm setting up my Solr index to be
> updated every x minutes.
>
> Does Solr cache the result of a search, and then when next
> time the same search is requested, it'd recognize that the
> Index has not changed and therefore just return the previous
> result from cache without processing the search?
Shall we create an issue for this so we can list out desirable features?
On Sun, Jul 12, 2009 at 7:01 AM, Yonik Seeley wrote:
> On Sat, Jul 11, 2009 at 7:38 PM, Jason
> Rutherglen wrote:
> > Are we planning on implementing caching (docsets, documents, results) per
> > segment reader or is this something that's going to be in 1.4?
On Sat, Jul 11, 2009 at 7:38 PM, Jason
Rutherglen wrote:
> Are we planning on implementing caching (docsets, documents, results) per
> segment reader or is this something that's going to be in 1.4?
Yes, I've been thinking about docsets and documents (perhaps not
results) per segment.
It won't make
I don't really understand your question. what do you mean by a "default
query" ?
as long as you have the caches that exist in the example configs, then
Solr will cache queries and filters for you. And as long as you have a
non-zero autowarm count for those caches, Solr will use that number of
entries to warm the caches of each new searcher.
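The autowarm setting referred to above is the autowarmCount attribute on each
cache in solrconfig.xml; a sketch (sizes are illustrative):

```xml
<!-- on commit, the newest N entries are re-executed against the new searcher -->
<filterCache class="solr.LRUCache" size="512" initialSize="512" autowarmCount="64"/>
<queryResultCache class="solr.LRUCache" size="512" initialSize="512" autowarmCount="64"/>
```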
On Fri, Mar 13, 2009 at 12:58 PM, Jon Baer wrote:
> I have a few general questions re: caching ...
>
> 1. The FastLRU cache in 1.4 seems promising but is there a more
> comprehensive list of benefits? Is there a huge speed boost for using this
> type of cache?
It simply removes
Yes, we are waiting for the patch to get committed.
--Noble
On Fri, Apr 25, 2008 at 5:36 PM, Sean Timm <[EMAIL PROTECTED]> wrote:
> Noble--
>
> You should probably include SOLR-505 in your DataImportHandler patch.
>
> -Sean
>
>
>
> Noble Paul നോബിള് नोब्ळ् wrote:
>
> > It is caused by the new
Noble--
You should probably include SOLR-505 in your DataImportHandler patch.
-Sean
Noble Paul നോബിള് नोब्ळ् wrote:
It is caused by the new caching feature in Solr. The caching is done
at the browser level. Solr just sends appropriate headers. We had
raised an issue to disable that.
BTW Th
It is caused by the new caching feature in Solr. The caching is done
at the browser level. Solr just sends appropriate headers. We had
raised an issue to disable that.
BTW, the command is not exactly
http://localhost:8983/solr/dataimport?command=status .
http://localhost:8983/solr/dataimport itself
Status pages should be sent with Pragma: no-cache. That is a bug.
wunder
On 4/24/08 6:29 PM, "Erik Hatcher" <[EMAIL PROTECTED]> wrote:
> The issue is the HTTP caching feature of Solr, for better or worse in
> this case. It confuses me often when I hit this myself. Try hitting
> that URL with curl and you'll see it change since no caching is
> involved client-side.
The issue is the HTTP caching feature of Solr, for better or worse in
this case. It confuses me often when I hit this myself. Try hitting
that URL with curl and you'll see it change since no caching is
involved client-side.
For sanity's sake you can turn off HTTP caching in solrconfig.xml
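Turning HTTP caching off is done in the requestDispatcher section of
solrconfig.xml; a minimal sketch:

```xml
<requestDispatcher>
  <!-- never304="true" makes Solr skip the 304/ETag/Last-Modified handling,
       so browsers stop serving stale responses for these URLs -->
  <httpCaching never304="true"/>
</requestDispatcher>
```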
No luck with control-R, or with F5. I'm on Windows here if you think
that's a potential problem.
For now I've found a silly workaround: If
http://localhost:8983/solr/dataimport?command=status
doesn't work, then you can replace "command=status" with almost
anything at all and then you'll be a
Chris - what happens if you hit ctrl-R (or command-R on OSX)? That should
bypass the browser cache.
Otis
--
Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch
- Original Message
> From: Chris Harris <[EMAIL PROTECTED]>
> To: solr-user@lucene.apache.org
> Sent: Thursday, April 2
Here is the response XML faceted by multiple fields including state
(reformatted; the XML markup was stripped by the archive, so only the header
values survive):

status: 0, QTime: 1782
params: rows=10, start=0, sort=score desc,
fl=duns_number,company_name,phys_state,phys_city,score,
q=phys_country:"United States", version=2.2,
facet.field=sales_range,total_emp_range,company_type,phys_state,sic1
(other values whose parameter names were lost: -1, true, 1, on)
On 9/6/07, Jae Joo <[EMAIL PROTECTED]> wrote:
> I have 13 million documents and have facets by states (50). If there is a
> mechanism to cache, I may get faster results back.
How fast are you getting results back with standard field faceting
(facet.field=state)?
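For reference, standard field faceting as asked about above is just
(assuming the field is named state):

```text
q=*:*&rows=0&facet=true&facet.field=state
```

The first such request pays the cost of building the in-memory structures for
the field; subsequent requests against the same searcher reuse them.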