Hi,
I am wondering whether there are plans to implement cross-collection join
queries on multivalued fields
Thanks
Sent from my iPhone
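For reference, single-valued cross-collection joins can already be expressed with the `{!join}` query parser; here is a sketch of building such a query string. The collection names ("sellers") and field names are hypothetical examples, not from the original message.

```python
# Sketch: build the q parameter for a cross-collection join with Solr's
# {!join} parser. Collection and field names here are made-up examples.

def build_join_query(from_field, to_field, from_index, filter_query):
    """Return a {!join} query joining from_index into the current collection."""
    return "{!join from=%s to=%s fromIndex=%s}%s" % (
        from_field, to_field, from_index, filter_query)

q = build_join_query("seller_id", "id", "sellers", "region:EU")
print(q)  # {!join from=seller_id to=id fromIndex=sellers}region:EU
```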
Hi,
I know Solr can index rich documents, but I have one requirement.
I have all kinds of documents, such as Word, PDF, Excel, PPT, JPG, etc.
When Solr indexes them with Tika or OCR, it extracts the text and saves it to
Solr, but the formatting is lost, so when the user opens the document, it
is not
Hi,
I have all kinds of rich documents, such as Excel, PPT, PDF, Word, JPG, etc. I
know Tika or OCR can convert them to text and index it. But when I open the
document, the formatting is changed. How can I keep the original document
format? Is it possible in Solr?
If not, can I use an external field type
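Since Solr only stores the extracted text, a common pattern is to keep the untouched original file outside Solr and index a pointer to it alongside the searchable text, then serve the original file for display. A minimal sketch; the field names (`content_text`, `original_path`) are hypothetical, not a Solr convention:

```python
# Sketch: index Tika-extracted text for search, but store a pointer to the
# untouched original file so users can open it with formatting intact.
# Field names below are hypothetical examples.

def make_solr_doc(doc_id, extracted_text, original_path):
    return {
        "id": doc_id,
        "content_text": extracted_text,   # searchable; formatting is lost here
        "original_path": original_path,   # pointer; original file served as-is
    }

doc = make_solr_doc("report-1", "Quarterly results ...", "/files/report-1.pdf")
```

The application then renders search hits from `content_text` but links the user to the file at `original_path`, so the original format is never touched.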
Thanks Hoss and Shawn for helping.
There are not many OOM stack details printed in the Solr log file; it
just says there is not enough memory, and the process is killed by oom.sh
(Solr's script). My question (issue) is not whether it is an OOM or not; the
issue is why JVM memory usage keeps growing but never goes down.
Thanks Chris,
Does it matter which config file I use? I am using a custom config instead of
_default; my config is from Solr 8.6.2 with a custom solrconfig.xml.
Derrick
Sent from my iPhone
> On Jan 28, 2021, at 2:48 PM, Chris Hostetter wrote:
>
>
> FWIW, I just tried using 8.7.0 to run:
>b
1048576K
}
[Times: user=0.01 sys=0.00, real=0.01 secs]
2021-01-28T21:24:18.400+0800: 34029.529: Total time for which application
threads were stopped: 0.0044183 seconds, Stopping threads took: 0.500
seconds
{Heap before GC invocations=1531 (full 412):
On Thu, Jan 28, 2021 at 1:23 PM Luke wrote
Mike
>
> On Wed, Jan 27, 2021 at 10:00 PM Luke wrote:
>
> > Shawn,
> >
> > It's killed by an OOME. The problem is that I just created empty
> > collections and the Solr JVM keeps growing and never goes down. There is no
> > data at all. at th
usage
reaches 100%.
I have another Solr 8.6.2 cloud (3 nodes) in a separate environment, which
has over 100 collections; Xmx = 6G, and the JVM is always at 4-5G.
On Thu, Jan 28, 2021 at 2:56 AM Shawn Heisey wrote:
> On 1/27/2021 5:08 PM, Luke Oak wrote:
> > I just created a few collectio
Hi, I am using Solr 8.7.0, CentOS 7, Java 8.
I just created a few collections with no data; memory keeps growing but never
goes down, until I get an OOM and Solr is killed.
Any reason?
Thanks
Sent from my iPhone
Since I changed the heap size to 10G, I found that Solr always uses around
6G-6.5G. Just wondering where I can set a limit on memory usage; for example,
I just want to give Solr 6G.
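The heap cap is set where Solr starts, in solr.in.sh (or via bin/solr's -m flag); a sketch of the relevant setting, with "6g" as the example value:

```shell
# Sketch: capping the Solr JVM heap in solr.in.sh (read by bin/solr at startup).
# SOLR_HEAP sets both -Xms and -Xmx to the same value.
SOLR_HEAP="6g"
# Equivalent explicit form:
# SOLR_JAVA_MEM="-Xms6g -Xmx6g"
```

Note that the OS-level process size will still exceed the heap somewhat, because Metaspace, thread stacks, and direct buffers live outside the Java heap.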
On Sun, Jan 24, 2021 at 1:51 PM Luke wrote:
> looks like the solr-8983-console.log was overwritten after I restarted S
&
Thanks
Derrick
On Sun, Jan 24, 2021 at 2:26 AM Shawn Heisey wrote:
> On 1/23/2021 6:41 PM, Luke wrote:
> > I don't see any log in solr.log, but there is OutOfMemory error in
> > solr-8983-console.log file.
>
> Do you have the entire text of that exception? Can you
nup 6355M->6355M(10240M),
0.0164190 secs]
[Times: user=0.06 sys=0.00, real=0.01 secs]
Thanks
Derrick
On Sat, Jan 23, 2021 at 7:54 PM Shawn Heisey wrote:
> On 1/23/2021 6:29 AM, Luke Oak wrote:
> > I use default settings to start solr , I set heap to 6G, I created 10
> colle
Hi there,
I use default settings to start Solr; I set the heap to 6G and created 10
collections with 1 node and 1 replica. However, there is not much data at all,
just 100 documents.
My server has 32 GB of memory, a 4-core CPU, and a 300 GB SSD drive.
It was OK when I created 5 collections. It got OOM-killed wh
can use
> shards.preference (Replica Types) to achieve our requirement
> https://lucene.472066.n3.nabble.com/Solrcloud-Reads-on-specific-nodes-tp4467568.html
>
> - Mohandoss
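The shards.preference hint mentioned above is passed as an ordinary request parameter. A sketch of building such a request; the value `replica.type:PULL` is a documented Solr option, while the query itself is a made-up example:

```python
# Sketch: steering read traffic to a given replica type via shards.preference.
# "replica.type:PULL" is a documented value; the query is hypothetical.

def build_read_params(query, replica_type="PULL"):
    return {
        "q": query,
        "shards.preference": "replica.type:%s" % replica_type,
    }

params = build_read_params("status:active")
print(params["shards.preference"])  # replica.type:PULL
```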
>
>> On Wed, Jan 20, 2021 at 7:22 PM Luke wrote:
>>
>> Hi,
>>
>> I have
Hi,
I have one data collection with 3 shards and 2 replicas that users search on.
I also log all user queries and save them to another collection on the same
SolrCloud, but user queries are very slow when there are a lot of logs to be
written to the log collection.
Any solution for me? Please advise. o
I have one collection, 3 shards, 2 replicas. I defined a route field, title,
and ID is the unique key.
I index two documents with the same ID and different titles. I configured the
dedupe chain and I can see the signature is generated, but the old document
was removed by Solr. Please help, thanks
d are discussing it.
>
>Regards,
> Aled
>
>On Tue, Nov 12, 2019, 1:25 AM Luke Miller wrote:
>
>> Hi,
>>
>>
>>
>> I just noticed that since Solr 8.2 the Apache Solr Reference Guide is not
>> available anymore as PDF.
>>
Hi,
I just noticed that since Solr 8.2 the Apache Solr Reference Guide is not
available anymore as PDF.
Is there a way to perform a full-text search using the HTML manual? E.g. I'd
like to find every hit for "luceneMatchVersion".
* Using the integrated "Page title lookup" does no
I'm trying to use MoreLikeThis handler and mlt.qf to boost certain fields:
/solr/mlt?q=id:1&mlt.fl=body_title,text&mlt.qf=body_title^20.0+text^1.0&mlt.mintf=1
Looks like this has been an outstanding issue:
http://lucene.472066.n3.nabble.com/Querying-multiple-fields-with-the-MoreLikeThis-handler-
luck. Can someone shed some light here as a general guideline
>> in terms of what needs to happen?
>>
>> I am using the CJKAnalyzer in the text field type and searching works fine,
>> but spelling does not work. Here are the things I have tried:
>>
>> 1. Put
Make sure your index and query analyzers are identical, and pay special
attention if you're using any of the
http://wiki.apache.org/solr/AnalyzersTokenizersTokenFilters#Stemming
analyzers - many of them have a number of configurable attributes that could
cause differences.
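In schema.xml terms, "identical" can simply mean defining a single analyzer that applies to both index and query time. A minimal sketch; the field type name is hypothetical:

```xml
<!-- Sketch: one analyzer definition used for both index and query phases. -->
<fieldType name="text_plain" class="solr.TextField">
  <analyzer>
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
```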
-L
On Wed, Sep 29, 2010
Check
http://doc.ez.no/Extensions/eZ-Find/2.2/Advanced-Configuration/Using-multi-core-features
It's for eZ-Find, but it's the basic setup for multiple cores in any
environment.
We have cores designed like so:
solr/sfx/
solr/forum/
solr/mail/
solr/news/
solr/tracker/
each of those core directori
We had to do the same thing - we draw our facet navigation links by looping
over the full result set from our database, and then we add the facet counts
and draw the link URLs using the Solr data.
-L
On Wed, Sep 29, 2010 at 8:42 AM, Markus Jelsma wrote:
> I'm afraid you'd have to add the missin
I notice we don't have default=true; instead we manually specify
qt=dismax in our queries. HTH.
-L
On Tue, Sep 28, 2010 at 4:24 PM, Luke Crouch wrote:
> What you have is exactly what I have on 1.4.0:
>
>
>
>
> dismax
>
> And it has worked fine.
What you have is exactly what I have on 1.4.0:
dismax
And it has worked fine. We copied our solrconfig.xml from the examples and
changed it for our purposes. You might compare your solrconfig.xml to some
of the examples.
-L
On Tue, Sep 28, 2010 at 4:19 PM, Thumuluri, Sai <
sai.th
Is there a 1:1 ratio of DB records to Solr documents? If so, couldn't you
simply select the most recently updated record from the DB and check to make
sure the corresponding Solr doc has the same timestamp?
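The check above can be sketched in a few lines. This uses an in-memory SQLite table as a stand-in for the real database, and a plain dict as a stand-in for the fetched Solr document; table and field names are hypothetical:

```python
# Sketch: compare the newest DB record's timestamp against the matching
# Solr document. SQLite stands in for the real DB; the Solr fetch is stubbed.
import sqlite3

def latest_record(conn):
    """Return (id, updated_at) of the most recently updated record, or None."""
    return conn.execute(
        "SELECT id, updated_at FROM records ORDER BY updated_at DESC LIMIT 1"
    ).fetchone()

def in_sync(db_row, solr_doc):
    """True when Solr holds the same timestamp for that record."""
    return db_row is not None and solr_doc.get("updated_at") == db_row[1]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE records (id TEXT, updated_at TEXT)")
conn.execute("INSERT INTO records VALUES ('a', '2010-09-28T15:00:00Z')")
conn.execute("INSERT INTO records VALUES ('b', '2010-09-28T16:00:00Z')")
solr_doc = {"id": "b", "updated_at": "2010-09-28T16:00:00Z"}
print(in_sync(latest_record(conn), solr_doc))  # True
```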
-L
On Tue, Sep 28, 2010 at 3:48 PM, Dmitriy Shvadskiy wrote:
> Hello,
> What would be the b
Yeah. You can specify two analyzers in the same fieldType:
...
...
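The general shape of two analyzers in one fieldType, with a separate index-time and query-time chain, looks like this. The names and chains below are a hypothetical illustration, not the elided snippet:

```xml
<!-- Sketch: distinct index-time and query-time analyzers in one fieldType. -->
<fieldType name="text_lc" class="solr.TextField">
  <analyzer type="index">
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
  </analyzer>
  <analyzer type="query">
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
```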
-L
On Tue, Sep 28, 2010 at 2:31 PM, James Norton wrote:
> Hello,
>
> I am migrating from a pure Lucene application to using solr. For legacy
> reasons I must support a somewhat obscure query feature: lowercase words in
>
Are you removing the standard default requestHandler when you do this? Or
are you specifying two requestHandlers with default="true"?
-L
On Tue, Sep 28, 2010 at 11:14 AM, Thumuluri, Sai <
sai.thumul...@verizonwireless.com> wrote:
> Hi,
>
> I am using Solr 1.4.1 with Nutch to index some of our
Can someone share some good resources (books, articles, links, etc.) for
tuning relevancy scores with multiple factors? I'm playing with different
fields and boosts in my 'qf', 'pf', and 'bf' defaults but I feel like I'm
shooting in the dark. http://wiki.apache.org/solr/SolrRelevancyCookbook has
a
What about if you do something like this? -
facet=true&facet.mincount=1&q=apple&facet.limit=10&facet.prefix=mou&facet.field=term_suggest&qt=basic&wt=javabin&rows=0&version=1
Jason Rutherglen wrote:
To clarify, the query analyzer returns that. Variations such as
"apple mou" also do not return
couple of million records and see if it is still acceptably
fast.
Anyway, does anyone know if there is something I could be doing wrong
that is causing dismax to not play nice with the two spatial searching
methods, or is this one for the JIRA?
Luke
Luke Tebbs wrote:
Thanks Dan,
T
corer2.score(BooleanScorer2.java:292)
I don't know if these are related - perhaps it's trying to compare
against the dismax records that don't have a geo_distance?
Did you get anything like this?
Luke
dan whelan wrote:
I experienced the same issue. The localsolr site s
Antonio Calo' wrote:
On 02/09/2010 at 8:51, Lance Norskog wrote:
Loading a servlet creates a bunch of classes via reflection. These are
in PermGen and never go away. If you load&unload over and over again,
any PermGen setting will fill up.
I agree, after taking a look at all the links suggested by
Anyone?
I'm really lost as to what to do here... if anyone has any experience
with this
or even ideas of things to try I'd really appreciate your input.
It seems like what I'm trying to do should work but for some reason
'defType' seems to be
ignored
Thank you
I agree.
I wasn't proposing it as a fix, merely as a means to reduce the time
between restarts.
Luke
Lance Norskog wrote:
Loading a servlet creates a bunch of classes via reflection. These are
in PermGen and never go away. If you load&unload over and over again,
any PermGen set
e mvn jetty:run on the CLI rather than launch jetty in eclipse. I
also use JRebel to reduce the number of restarts needed during dev.
As for a production instance, should you need to redeploy that often?
Luke
Antonio Calo' wrote:
Hi guys
I'm facing an error in our production envir
d only the title field is searched.
I'm a bit stumped on this one, any help would be greatly appreciated.
Luke
cally update the latter to
allow for the exchange rates changing - but I was hoping someone might
know a better way :)
Regards,
Luke
Not sure how up to date this is: http://www.basistech.com/customers/
I've only used their C++ products, which generally worked well for
web search with a few exceptions. According to http://
www.basistech.com/knowledge-center/chinese/chinese-language-
analysis.pdf , they provide Java APIs as