Any doughnut for me?
Regards
Nawab
On Thu, Jul 27, 2017 at 9:57 AM Nawab Zada Asad Iqbal
wrote:
> Hi,
>
> I see a lot of discussion on this topic from almost 10 years ago: e.g.,
> https://issues.apache.org/jira/browse/LUCENE-1482
>
> For 4.5, I relied on 'System.out.println' for writing information for
> debugging in production.
bq: To me this seems like a design flaw. The Solr fieldtypes seem like they
allow a developer to create types that should handle wildcards
intelligently.
Well, that's pretty impossible. WordDelimiter(Graph)FilterFactory is a
case in point. It's designed to break up on
uppercase/lowercase/numeric/n
Can you reproduce with 4G heap?
On Wed, 26 Jul 2017 at 23:11, Markus Jelsma
wrote:
> Hello Mikhail,
>
> Spot on, there is indeed not enough heap when our nodes are in this crazy
> state. When the nodes are happy, the average heap consumption is 50 to 60
> percent, at peak when indexing there is
On 7/25/2017 5:21 PM, Lucas Pelegrino wrote:
> Trying to make solr work here, but I'm getting this error from this command:
>
> $ ./solr create -c products -d /Users/lucaswxp/reduza-solr/products/conf/
>
> Error CREATEing SolrCore 'products': Unable to create core [products]
> Caused by: null
>
> I
It doesn't seem to matter what you do in the query analyzer: if the query
contains a wildcard, the query analyzer isn't applied. Which is exactly the
behavior I observed.
The solution was to set preserveOriginal="1" and change the ETL process to
not strip the dashes, letting the index analyzer do that. We have a lot of
lega
Lucas may be hitting this issue:
https://stackoverflow.com/questions/4659151/recurring-exception-without-a-stack-trace-how-to-reset
Could you try running your server with the JVM flag
-XX:-OmitStackTraceInFastThrow ?
Nawab
On Wed, Jul 26, 2017 at 11:42 AM, Anshum Gupta wrote:
> Hi Lucas,
>
>
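If it helps, the flag can be made permanent via SOLR_OPTS in solr.in.sh; a minimal sketch, assuming a standard install layout (the exact path varies by install method):

```shell
# solr.in.sh (location varies, e.g. /etc/default/solr.in.sh)
# Keep full stack traces even for exceptions the JIT would otherwise
# "fast-throw" without a trace:
SOLR_OPTS="$SOLR_OPTS -XX:-OmitStackTraceInFastThrow"
```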
Webster, did you try escaping the special character (assuming you did not
do what Shawn did by replacing - with some other text and your indexed
tokens have -)?
On Thu, Jul 27, 2017 at 12:03 PM, Webster Homer
wrote:
> Shawn,
> Thank you for that. I didn't know about that feature of the WDF. It doesn't
> help my situation but it's great to know about.
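For reference, the escaping can also be done programmatically before the query is sent; a minimal sketch (the helper name and the choice to leave * and ? unescaped are my own, not from this thread):

```python
# Backslash-escape Lucene/Solr query syntax characters in a term,
# optionally leaving * and ? alone so wildcard queries still work.
def escape_solr_term(term, keep_wildcards=True):
    special = set('+-&|!(){}[]^"~*?:\\/')
    if keep_wildcards:
        special -= {'*', '?'}
    return ''.join('\\' + ch if ch in special else ch for ch in term)

print(escape_solr_term('ABC-123*'))   # ABC\-123*
```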
Shawn,
Thank you for that. I didn't know about that feature of the WDF. It doesn't
help my situation but it's great to know about.
Googling solr wildcard searches I found this link
http://lucene.472066.n3.nabble.com/Wildcard-search-not-working-with-search-term-having-special-characters-and-digits-t
Hello,
I am having an issue. I have modified the solr.in.sh file to enable SSL;
however, when I go to the HTTPS site it gives an error that I need to enable
TLS, while the HTTP site is up and running. I have imported my certificates
and am not sure what I am missing.
Thank you,
Kent Younge
Syste
Hi Itay,
in IR research there’s a long tradition (TREC and the like) of measuring the
effectiveness of search engines. In this context effectiveness is measured
using a so-called test collection, which consists of three things:
1. Documents
2. Topics, i.e. information needs/queries of users for these documents
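(The list above is cut off; the customary third component of a TREC-style test collection is the relevance judgments, a.k.a. qrels.) With those three pieces, per-topic precision and recall reduce to set arithmetic; a minimal sketch with made-up document IDs:

```python
# Precision and recall for one topic of a TREC-style test collection.
# `retrieved` comes from the engine, `relevant` from the qrels.
def precision_recall(retrieved, relevant):
    retrieved, relevant = set(retrieved), set(relevant)
    hits = len(retrieved & relevant)
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

p, r = precision_recall(['d1', 'd2', 'd3', 'd4'], ['d1', 'd3', 'd9'])
print(p, r)   # 2 of 4 retrieved are relevant; 2 of 3 relevant were found
```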
Hi,
I see a lot of discussion on this topic from almost 10 years ago: e.g.,
https://issues.apache.org/jira/browse/LUCENE-1482
For 4.5, I relied on 'System.out.println' for writing information for
debugging in production.
In 6.6, I notice that some classes in Lucene are instantiating a Logger,
sh
Hi,
I'm trying to measure Precision and recall for a search engine which is
crawling data sources of an organization.
Are there any best practices regarding these metrics for specific
industries (e.g. for financial organizations, is the recommended level
for precision and recall ~60%)?
Is t
> Max heap is 25G for each Solr Process. (Xms 25g Xmx 25g)
You can most likely drop this to -Xmx1g; that alone will probably solve
your problems.
Regards,
Markus
-Original message-
> From:Atita Arora
> Sent: Thursday 27th July 2017 9:30
> To: solr-user@lucene.apa
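A sketch of what that would look like in solr.in.sh (in stock install scripts SOLR_HEAP sets -Xms and -Xmx together; the path and the 1g starting point are assumptions to tune, not measurements from this thread):

```shell
# solr.in.sh (location varies by install)
# SOLR_HEAP sets both -Xms and -Xmx; start small and raise it only if
# GC logs show real memory pressure.
SOLR_HEAP="1g"
```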
Hi Shawn,
Thank you for the pointers, here is the information:
> What OS is Solr running on? I'm only asking because some additional
> information I'm after has different gathering methods depending on OS.
> Other questions:
*OpenJDK 64-Bit Server VM (25.141-b16) for linux-amd64 JRE (1.8.0_141-b1
Hi All,
We are upgrading from Solr/Lucene 4.5.1 to 4.10.4. When testing, we found the
issue below.
The score field in the query response for a distributed query results in NaN.
This happens if the indexes were created in 4.5 and the query is received in
4.10.1.
Also the score in the explain