Would like to check, will this method of splitting the synonyms into
multiple files use up a lot of memory?
I'm trying it with about 10 files and the collection cannot be loaded due
to insufficient memory.
Although my machine currently only has 4GB of memory, I only have
500,000 rec
Hi Erick
Sorry I missed your reply.
Ya that is the alternative solution I am thinking of if it's not
possible through Solr.
-Derek
On 4/24/2015 12:01 AM, Erick Erickson wrote:
Not that I know of. But your application gets the original params back,
so you can order the display based on the p
Hi
Any advice on this?
Thanks,
Derek
On 4/23/2015 5:17 PM, Derek Poh wrote:
Hi
I am trying to search or filter a list of documents by their ids
(product id field). The requirement is that the returned documents must be
in the same order as the list searched or filtered by.
E.g. if I search or filter on the below
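A minimal sketch of the client-side approach Erick suggests elsewhere in this thread: Solr itself does not guarantee that an `id:(...)` filter returns documents in the order the ids were given, so the application re-sorts the response against the original list. The field name `product_id` and the dict-shaped docs are assumptions for illustration only.

```python
# Re-order Solr result docs to match the order of the original id list.
# Each doc is assumed to be a dict carrying a "product_id" field
# (a hypothetical field name standing in for the real one).

def reorder_by_ids(docs, ids, id_field="product_id"):
    """Return docs sorted to match the order of `ids`; ids not found are skipped."""
    position = {doc_id: i for i, doc_id in enumerate(ids)}
    matched = [d for d in docs if d[id_field] in position]
    return sorted(matched, key=lambda d: position[d[id_field]])

# Example: Solr may hand the docs back in arbitrary (e.g. score) order.
ids = ["P3", "P1", "P2"]
docs = [{"product_id": "P1"}, {"product_id": "P2"}, {"product_id": "P3"}]
print(reorder_by_ids(docs, ids))  # P3 first, then P1, then P2
```

The same idea works whatever client library fetched the docs, since the re-ordering happens entirely after the response arrives.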
Hi Hussain,
Thank you so much for the information.
Regards,
Edwin
On 4 May 2015 at 10:16, Mohmed Hussain wrote:
> Hi Edwin
> Check this documentation
>
> https://cwiki.apache.org/confluence/display/solr/Collections+API#CollectionsAPI-List
>
> Thanks
> -Hussain
>
> On Sun, May 3, 2015 at 6:3
Hi Edwin
Check this documentation
https://cwiki.apache.org/confluence/display/solr/Collections+API#CollectionsAPI-List
Thanks
-Hussain
On Sun, May 3, 2015 at 6:33 PM, Zheng Lin Edwin Yeo wrote:
> Hi,
>
> Would like to check, is there any way to show the list of collections that
> are availabl
Hi,
Would like to check, is there any way to show the list of collections that
are available in the Solr server and output it in JSON format? I could
not manage to find any documentation on this.
Regards,
Edwin
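For reference, the Collections API LIST action Hussain points to is reachable at `/solr/admin/collections?action=LIST&wt=json`. The sketch below parses a hard-coded sample standing in for a live server's response; the exact response shape (a top-level `"collections"` array) is taken from the linked documentation and should be verified against your Solr version.

```python
import json

# With a live server you would fetch
#   http://localhost:8983/solr/admin/collections?action=LIST&wt=json
# instead of using this hard-coded sample (shape assumed from the docs).
sample_response = """
{"responseHeader": {"status": 0, "QTime": 1},
 "collections": ["collection1", "gettingstarted"]}
"""

data = json.loads(sample_response)
print(data["collections"])  # the collection names as a Python list
```

The `wt=json` parameter is what switches the output from the default XML to JSON.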
I need it ;)
-----Original Message-----
From: Yonik Seeley [mailto:ysee...@gmail.com]
Sent: Sunday, 3 May 2015 19:16
To: solr-user@lucene.apache.org
Subject: Re: AW: "blocked" in org.apache.solr.core.SolrCore.getSearcher(...) ?
https://issues.apache.org/jira/browse/SOLR-6679
If you
https://issues.apache.org/jira/browse/SOLR-6679
If you don't use the suggest component, the easiest fix is to comment it out.
-Yonik
On Sun, May 3, 2015 at 1:11 PM, Clemens Wyss DEV wrote:
> I guess it's the "searcherExecutor-7-thread-1 (30)" which seems to be loading
> (updating?) the sugges
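Yonik's fix (commenting out the suggester if it's unused) would look roughly like this in solrconfig.xml. The component and handler names below are the stock example names, which is an assumption; yours may differ.

```xml
<!-- If the suggester is not used, comment it out so it is not
     built/loaded when a new searcher opens. -->
<!--
<searchComponent name="suggest" class="solr.SuggestComponent">
  ...
</searchComponent>
-->
<!--
<requestHandler name="/suggest" class="solr.SearchHandler" startup="lazy">
  ...
</requestHandler>
-->
```

The elided inner configuration (`...`) is whatever your existing suggester definition contains; the point is only that both the component and any handler referencing it are disabled together.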
I guess it's the "searcherExecutor-7-thread-1 (30)" which seems to be loading
(updating?) the suggestions
org.apache.lucene.analysis.standard.StandardTokenizerImpl.getNextToken(StandardTokenizerImpl.java:764)
org.apache.lucene.analysis.standard.StandardTokenizer.incrementToken(Standar
Hope this is "readable":
qtp787867107-59 (59)
* sun.management.ThreadImpl.getThreadInfo1(Native Method)
* sun.management.ThreadImpl.getThreadInfo(Unknown Source)
*
org.apache.solr.handler.admin.ThreadDumpHandler.handleRequestBody(ThreadDumpHandler.java:69)
*
org.apache.solr.h
On Sun, May 3, 2015 at 12:30 PM, Clemens Wyss DEV wrote:
> No load by/on any other thread.
Can we get a full thread dump (of all the threads) during this time?
This line:
org.apache.solr.core.SolrCore.getSearcher(boolean, boolean,
java.util.concurrent.Future[], boolean) line: 1646
Suggests there
I'd look at the thread view in the admin console. That would give an idea
about what the system is doing.
You can get the same information from the command line using
# jstack (pid) > output.log
Best,
Andrea
On 3 May 2015 18:53, "Clemens Wyss DEV" wrote:
> Just opened the very core in a "norma
OK, I don't think you actually need the managed schema stuff (although
you could use it).
So, you're analyzing these docs and making guesses (educated guesses,
probably very
sophisticated guesses, but guesses) about what kind of thing it is
(numeric, name, city,
concept, whatever).
You can simply
Just opened the very core in a "normal" Solr server instance. Same delay till
it's usable. I.e. it has nothing to do with embedded mode or any other thread
slowing things down
-----Original Message-----
From: Clemens Wyss DEV [mailto:clemens...@mysign.ch]
Sent: Sunday, 3 May 2015 18:30
To:
Eric:
First of all, kudos for your problem description. Plainly you've
1> tried to diagnose the problem.
2> taken the time to write it up for us.
Far too often we see problem statements like "it doesn't work, what's
wrong" (one of my pet peeves).
Anyway, on to your problem. This should work as y
First, you shouldn't be using HttpSolrClient, use CloudSolrServer
(CloudSolrClient in 5.x). That takes
the ZK address and routes the docs to the leader, reducing the network
hops docs have to go
through. AFAIK, in cloud setups it is in every way superior to http.
I'm guessing your docs aren't huge
No load by/on any other thread. In fact I have 4 cores in my (embedded) Solr.
The other three, which contain "less" and "other" data, are up and running in
"no time" (<1s)
Sidenote:
The "slow core" is being filled by 7500 pdfs (overall 24G) extracted with Tika.
-----Original Message-----
We ran into this as well on 4.10.3 (not related to an upgrade). It was
identified during load testing when a small percentage of queries would
take more than 20 seconds to return. We were able to isolate it by
rerunning the same query multiple times and regardless of cache hits the
queries would st
What are the other threads doing during this time?
-Yonik
On Sun, May 3, 2015 at 4:00 AM, Clemens Wyss DEV wrote:
> Context: Solr 5.1, EmbeddedSolrServer(-mode)
>
> I have a rather big index/core (>1G). I was able to initially index this core
> and could then search within it. Now when I restart
> more than 15 minutes
It took 37 minutes!
-----Original Message-----
From: Clemens Wyss DEV [mailto:clemens...@mysign.ch]
Sent: Sunday, 3 May 2015 10:00
To: solr-user@lucene.apache.org
Subject: "blocked" in org.apache.solr.core.SolrCore.getSearcher(...) ?
Context: Solr 5.1, EmbeddedS
Hi all,
Before doing a SPLITSHARD - is there a way to figure out optimal hash ranges
that will evenly split the documents across the new sub-shards
that get created? Sort of a dry run of the actual SPLITSHARD command with
the ranges parameter specified, that just shows the number of
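One piece of this can be sketched without touching the index: splitting a shard's 32-bit hash range (hex, as shown by CLUSTERSTATUS) into two equal halves. This only yields an even *document* split if the route keys hash uniformly; a true dry run of document counts would still require hashing every id. The sketch assumes the range does not wrap around (low <= high as unsigned values).

```python
# Split a Solr shard hash range (hex, 32-bit) into two even halves
# for use with SPLITSHARD's ranges parameter.
# Assumes a non-wrapping range, e.g. "0-7fffffff".

def split_range(hash_range):
    """Return the two halves of a hex hash range as ["lo-mid", "mid+1-hi"]."""
    lo_hex, hi_hex = hash_range.split("-")
    lo, hi = int(lo_hex, 16), int(hi_hex, 16)
    mid = (lo + hi) // 2
    return ["{:x}-{:x}".format(lo, mid), "{:x}-{:x}".format(mid + 1, hi)]

print(split_range("0-7fffffff"))  # ['0-3fffffff', '40000000-7fffffff']
```

Generalizing to N sub-ranges is a matter of stepping through the range in `(hi - lo + 1) // N`-sized chunks.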
Context: Solr 5.1, EmbeddedSolrServer(-mode)
I have a rather big index/core (>1G). I was able to initially index this core
and could then search within it. Now when I restart my app I am no more able to
search.
getSearcher seems to "hang"... :
java.lang.Object.wait(long) line: not available [n