On 5/20/2015 12:06 AM, Shalin Shekhar Mangar wrote:
> Sounds similar to https://issues.apache.org/jira/browse/SOLR-6165 which I
> fixed in 4.10. Can you try a newer release?
I can't upgrade yet. I am using a plugin that hasn't been verified
against anything newer than 4.9. When a new version bec
Sounds similar to https://issues.apache.org/jira/browse/SOLR-6165 which I
fixed in 4.10. Can you try a newer release?
On Wed, May 20, 2015 at 6:51 AM, Shawn Heisey wrote:
> An unusual problem is happening with the DIH on a field that is an
> unsigned BIGINT in the MySQL database. This is Solr 4
thanks
On Tue, May 19, 2015 at 11:38 PM, Shalin Shekhar Mangar <
shalinman...@gmail.com> wrote:
> Someone just opened https://issues.apache.org/jira/browse/SOLR-7574 which
> is exactly what you experienced.
>
> On Tue, May 19, 2015 at 8:34 PM, Shalin Shekhar Mangar <
> shalinman...@gmail.com> wro
Someone just opened https://issues.apache.org/jira/browse/SOLR-7574 which
is exactly what you experienced.
On Tue, May 19, 2015 at 8:34 PM, Shalin Shekhar Mangar <
shalinman...@gmail.com> wrote:
> That sounds reasonable. Please open a Jira issue.
>
> On Sat, May 16, 2015 at 9:22 AM, William Bell
Hi, I have two servers (physical) that run my application and Solr. I use an
external file field to do some search result ranking.
According to the wiki page, external file field data needs to reside in the
{solr}\data directory. Because the EFF data is generated by my application, how
can I push this file to
Sounds good. Thank you for the synonym (definitely will work on this) and
padding suggestions.
- Todd
An unusual problem is happening with the DIH on a field that is an
unsigned BIGINT in the MySQL database. This is Solr 4.9.1 without
SolrCloud, running on OpenJDK 7u79.
During actual import, everything is fine. The problem comes when I
restart Solr and the transaction logs are replayed. I get t
A field type based on BigDecimal could be useful, but that would be a fair
amount more work.
Double is usually sufficient for big data analysis, especially if you are doing
simple aggregates (which is most of what Solr can do).
If you want to do something fancier, you’ll need a database, not a
Then it seems like you can just index the raw strings in a string field,
suggest with that, but fire the actual query against the numeric type.
Best,
Erick
On Tue, May 19, 2015 at 3:25 PM, Todd Long wrote:
> Erick Erickson wrote
>> But I _really_ have to go back to one of my original quest
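A minimal schema.xml sketch of Erick's suggestion (the field names freq/freq_str
and the tdouble type from the 4.x example schema are assumptions, not from the
thread):

  <field name="freq" type="tdouble" indexed="true" stored="true"/>
  <!-- raw string copy of the input, used only for autocomplete/suggest -->
  <field name="freq_str" type="string" indexed="true" stored="false"/>
  <copyField source="freq" dest="freq_str"/>

The suggester can then run prefix queries such as freq_str:2* while the real
search filters on the numeric freq field.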
Well, double is all you've got, so that's what you have to work with.
_Every_ float is an approximation when you get out to some number of
decimal places, so you don't really have any choice. Of course it'll
affect the result. The question is whether it affects the result
enough to matter, which is
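A quick Java check of the approximation Erick describes, using the value
reported elsewhere in this digest (the class name is illustrative):

  public class DoublePrecision {
      public static void main(String[] args) {
          // 18 significant digits in the source string
          double d = Double.parseDouble("249.817354253824052");
          // Prints 249.81735425382405 on a standard JVM: the nearest
          // representable double has already dropped the trailing 2.
          System.out.println(d);
      }
  }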
This should be considered a bug in the /export handler. Please create a
jira ticket for this.
Thanks
Joel Bernstein
http://joelsolr.blogspot.com/
On Tue, May 19, 2015 at 2:32 PM, Angelo Veltens
wrote:
> Hi all!
>
> We use Solr 4.10.3 and have configured an /export SearchHandler in
> addition t
Erick Erickson wrote
> But I _really_ have to go back to one of my original questions: What's
> the use-case?
The use-case is with autocompleting fields. The user might know a frequency
starts with 2 so we want to limit those results (e.g. 2, 23, 214, etc.). We
would still index/store the numeric-
I have an entity which extracts records from a MySQL data source. One of the
fields is meant to be a multi-value field, except, this data source does not
store the values. Rather, it stores their ids in a single column as a
pipe-delimited string. The values themselves are in a separate table, in an
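One way this is commonly handled in data-config.xml is the RegexTransformer's
splitBy attribute, which breaks the pipe-delimited column into separate values
for a multiValued field; resolving the ids against the separate values table
would still need a sub-entity or a SQL join (entity, table, and column names
below are illustrative):

  <entity name="item" transformer="RegexTransformer"
          query="SELECT id, value_ids FROM items">
    <!-- splits '12|34|56' into separate values for a multiValued field -->
    <field column="value_ids" splitBy="\|"/>
  </entity>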
Also 10481.5711458735456*79* indexes to 10481.571145873546 using double
On Tue, May 19, 2015 at 2:57 PM, Vishal Swaroop
wrote:
> Thanks Erick... I can ignore the trailing zeros
>
> I am indexing data from a Vertica database... *double* is very close,
> but Solr indexes 14 digits after de
Thanks Erick... I can ignore the trailing zeros.
I am indexing data from a Vertica database... *double* is very close,
but Solr indexes 14 digits after the decimal.
e.g. the actual db value has 15 digits after the decimal, i.e. 249.81735425382405*2*;
Solr indexes 14 digits after the decimal, i.e. 249.8173542
No prob, easy mistake to make :)
On Tue, May 19, 2015 at 2:14 PM, shamik wrote:
> Thanks a ton Doug, I should have figured this out, pretty stupid of me.
>
> Appreciate your help.
Thanks a ton Doug, I should have figured this out, pretty stupid of me.
Appreciate your help.
I see the problem. Instead of using debugQuery directly, you might want to
check out Splainer (http://splainer.io); it can help detect these sorts of
relevance problems. For example, in your case you just need to wrap the
search in parentheses.
There's a difference between text:foo bar and text:(foo
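To make the difference concrete (assuming text is not the default search
field):

  q=text:foo bar     -> only foo is searched in text; bar hits the default field
  q=text:(foo bar)   -> both terms are searched in text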
Thanks Doug. I'm using eDismax
Here's my Solr query :
http://localhost:8983/solr/testhandlerdeu?debugQuery=true&q=title_deu:Software%20und%20Downloads
Here's my request handler.
[request handler XML stripped by the mail archive; only the values
"explicit", "0.01", and "velocity" survive]
Why do you want to keep trailing zeros? The original input is
preserved in the "stored" portion and will be returned if you specify
the field in your "fl" list. I'm assuming here that you're looking at
the actual indexed terms, and don't really understand why the trailing
zeros are important.
Do no
Thank you John and Jack...
Looks like double is much closer... it removes trailing zeros.
a) Is there a way to keep trailing zeros?
double: 194.846189733028000 indexes to 194.846189733028
b) If I use "string", will there be issues doing range queries?
float:
277.677836785372000 indexes to 277
Just to pile on:
How's your CPU utilization? That's the first place to look. The very first
question to answer is: "Is Solr the bottleneck or the rest of the
infrastructure?" One _very_ quick measure is CPU utilization. If it's
running along at 100% then you need to improve your queries or add mo
No cleaner ways spring to mind. Although you might get some mileage out of
normalizing _everything_ rather than indexing different forms. Perhaps all
numbers are stored left-padded with zeros to 16 places to the left of the
decimal point and right-padded 16 places to the right of the decimal point.
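A sketch of that padding idea in Java (the 16/16 split and the helper name are
illustrative; negative numbers and overflow are ignored for brevity):

  import java.math.BigDecimal;

  public class NumberPadder {
      static String pad(String number) {
          String[] parts = new BigDecimal(number).toPlainString().split("\\.", 2);
          String intPart = parts[0];
          String fracPart = parts.length > 1 ? parts[1] : "";
          // 16 digits before the decimal point, 16 after
          return String.format("%16s", intPart).replace(' ', '0')
               + "."
               + String.format("%-16s", fracPart).replace(' ', '0');
      }

      public static void main(String[] args) {
          // prints 0000000000000123.4500000000000000
          System.out.println(pad("123.45"));
      }
  }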
What you've done _looks_ correct at a glance. Take a look at the Solr
logs. Don't bother trying to index things unless and until your nodes
are "active", it won't happen.
My first guess is that you have some error in your schema or
solrconfig.xml files, syntax errors, typos, class names that are
m
"double" (solr.TrieDoubleField) gives more precision
See:
https://lucene.apache.org/solr/5_1_0/solr-core/org/apache/solr/schema/TrieDoubleField.html
-- Jack Krupansky
On Tue, May 19, 2015 at 11:27 AM, Vishal Swaroop
wrote:
> Please suggest which numeric field type to use so that I can get comp
In a word "yes". The Solr servers are independently keeping their own
timers and one could trip on replica X while an update was in
transmission from the leader say. Or any one of a zillion other timing
conditions. In fact, this is why the indexes will have different
segments on replicas in a slice.
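For reference, the configuration under discussion looks like this in
solrconfig.xml; each node runs this timer locally, which is why replicas trip
it at different moments (the interval is illustrative):

  <autoSoftCommit>
    <maxTime>5000</maxTime> <!-- ms: each replica starts its own 5-second timer -->
  </autoSoftCommit>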
I think the omitNorms option is what controls field-length normalization. Try
setting it to false (it defaults to true for floats) and see if that helps.
--
*John Blythe*
Product Manager & Lead Developer
251.605.3071 | j...@curvolabs.com
www.curvolabs.com
58 Adams Ave
Evansville, IN 47713
On Tue, May 19,
Please suggest which numeric field type to use so that I can get the
complete value.
e.g. the value in the database is 194.846189733028000.
If I index it as float, Solr indexes it as 194.84619, whereas I need the
complete value, i.e. 194.846189733028000.
I will also be doing range queries on this field.
Regards
How are you searching, Shamik? What query parser are you using? Perhaps you
could share a sample Solr URL?
Cheers,
-Doug
On Tue, May 19, 2015 at 11:11 AM, shamik wrote:
> Anyone ?
Anyone?
That sounds reasonable. Please open a Jira issue.
On Sat, May 16, 2015 at 9:22 AM, William Bell wrote:
> Can we get this one fixed? If the body is empty, don't throw a
> NullPointerException.
>
> Thanks
>
> > http://localhost:8983/solr/gettingstarted/select?wt=json&indent=true&q=foundation
That link you provided is exactly what I want to do. Thanks, Ahmet.
With Regards
Aman Tandon
On Tue, May 19, 2015 at 5:06 PM, Ahmet Arslan
wrote:
> Hi Aman,
>
> changing protected words without reindexing makes little or no sense.
> Regarding protected words, trend is to use solr.KeywordMarkerFilterF
Hi there,
Unfortunately I don't agree with Shawn when he suggests raising maxThreads in
the server.xml configuration. If Tomcat (due to the concurrent overload you're
suffering, the type of queries you're handling, etc.) cannot manage the
requested queries, what could happen is that
I see what you're saying and that should do the trick. I could index 123 with
an index synonym 123.0. Then my regex query "/123/" should hit along with a
boolean query "123.0 OR 123.00*". Is there a cleaner approach to breaking
apart the boolean query in this case? Right now, outside of Solr, I'm j
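A sketch of the index-time synonym approach in schema.xml (the field type,
file name, and mappings are assumptions, not from the thread):

  <fieldType name="text_numeric" class="solr.TextField">
    <analyzer type="index">
      <tokenizer class="solr.KeywordTokenizerFactory"/>
      <!-- numeric_synonyms.txt might contain lines like: 123,123.0,123.00 -->
      <filter class="solr.SynonymFilterFactory" synonyms="numeric_synonyms.txt"
              expand="true"/>
    </analyzer>
    <analyzer type="query">
      <tokenizer class="solr.KeywordTokenizerFactory"/>
    </analyzer>
  </fieldType>

With expand="true" every listed variant is indexed, so both the regex query
/123/ and the 123.0 boolean clause can match the same document.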
Are you sure the requests are getting queued because the LB is detecting
that Solr won't handle them?
The reason why I'm asking is I know that ELB doesn't handle bursts well.
The load balancer needs to "warm up," which essentially means it might
be underpowered at the beginning of a burst. It
Hi all!
We use Solr 4.10.3 and have configured an /export SearchHandler in
addition to the default SearchHandler /select.
<requestHandler name="/export" class="solr.SearchHandler">
  <lst name="invariants">
    <str name="rq">{!xport}</str>
    <str name="wt">xsort</str>
    <str name="distrib">false</str>
    <str name="fl">username,description</str>
  </lst>
  <arr name="components">
    <str>query</str>
  </arr>
</requestHandler>
The handler follows the example from section "Exporting Result Sets" of
the user guide.
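With a handler like that, a 4.10 request needs an explicit sort (fl is pinned
by the invariant above); something like (collection name illustrative):

  http://localhost:8983/solr/collection1/export?q=*:*&sort=username+asc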
Shawn, I was going to say the same thing, but... then I was thinking about
SolrCloud and the fact that update processors are invoked before the
document is sent to its target node, so there wouldn't be a reliable way to
tell if the input document's field value exists on the target rather than the
current
On 5/19/2015 12:21 AM, Kamal Kishore Aggarwal wrote:
> I am currently working with Java-1.7, Solr-4.8.1 with tomcat 7. The solr
> configuration has slave & master architecture. I am looking forward to
> upgrade Java from 1.7 to 1.8 version in order to take advantage of memory
> optimization done in
On 5/19/2015 3:02 AM, Bram Van Dam wrote:
> I'm looking for a way to have Solr reject documents if a certain field
> value is duplicated (reject, not overwrite). There doesn't seem to be
> any kind of unique option in schema fields.
>
> The de-duplication feature seems to make this (somewhat) poss
Hi Bram,
what do you mean with: "I would like it to provide the unique value myself,
without having the deduplicator create a hash of field values"?
This is not de-duplication, but simple document filtering based on a
constraint.
In the case you want de-duplication (which seemed from your ver
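For reference, the feature being discussed is configured as an update processor
chain in solrconfig.xml; a minimal sketch based on the standard
SignatureUpdateProcessorFactory example (field names illustrative). Note that
overwriteDupes=true overwrites duplicates rather than rejecting them, which is
exactly the behavior Bram wants to avoid:

  <updateRequestProcessorChain name="dedupe">
    <processor class="solr.processor.SignatureUpdateProcessorFactory">
      <bool name="enabled">true</bool>
      <str name="signatureField">sig</str>
      <bool name="overwriteDupes">true</bool>
      <str name="fields">name,features</str>
      <str name="signatureClass">solr.processor.Lookup3Signature</str>
    </processor>
    <processor class="solr.LogUpdateProcessorFactory"/>
    <processor class="solr.RunUpdateProcessorFactory"/>
  </updateRequestProcessorChain>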
On 5/19/2015 1:51 AM, Jani, Vrushank wrote:
> We have production SOLR deployed on AWS Cloud. We have currently 4 live SOLR
> servers running on m3xlarge EC2 server instances behind ELB (Elastic Load
> Balancer) on AWS cloud. We run Apache SOLR in Tomcat container which is
> sitting behind Apache
Awesome, following it now!
--
*John Blythe*
Product Manager & Lead Developer
251.605.3071 | j...@curvolabs.com
www.curvolabs.com
58 Adams Ave
Evansville, IN 47713
On Mon, May 18, 2015 at 8:21 PM, Doug Turnbull <
dturnb...@opensourceconnections.com> wrote:
> Glad you figured things out and fou
Hi Aman,
changing protected words without reindexing makes little or no sense.
Regarding protected words, the trend is to use solr.KeywordMarkerFilterFactory.
Instead, I suggest you work on a more general issue:
https://issues.apache.org/jira/browse/SOLR-1307
Ahmet
On Tuesday, May 19, 2015 3:16
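For context, KeywordMarkerFilterFactory sits in the analyzer chain ahead of the
stemmer and flags listed terms so the stemmer leaves them alone; a minimal
sketch (file name illustrative):

  <analyzer>
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <!-- terms listed in protwords.txt are marked as keywords -->
    <filter class="solr.KeywordMarkerFilterFactory" protected="protwords.txt"/>
    <filter class="solr.PorterStemFilterFactory"/>
  </analyzer>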
Hi all,
We have a cluster of standalone Solr cores (Solr 4.3) for which we had
built some custom plugins. I'm now trying to prototype converting the
cluster to a SolrCloud cluster. This is how I am trying to deploy the
cores (in 4.7.2):
1. Start Solr with ZooKeeper embedded.
java -Dzk
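For context, the usual 4.x invocation for embedded ZooKeeper looks something
like this (flags from the SolrCloud wiki; paths and names illustrative):

  java -DzkRun -DnumShards=2 \
       -Dbootstrap_confdir=./solr/collection1/conf \
       -Dcollection.configName=myconf -jar start.jar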
Hi,
I am facing an issue (Solr 4.10.2) when we try to retrieve a document
and highlight the hits.
Below is the exception; this happens in a random fashion.
If we retry loading the same document that threw this exception, it
loads without exception.
Any help would be appreciated.
*E
Hi,
I wanted to know: when we do a soft commit through configuration in
solrconfig.xml, will different replicas commit at different points in time
depending on when each replica started, or will the leader send a commit to
all replicas at the same time per the commit interval set in solrconfig?
thanks
gopal
> My organization has issues with Jetty (some customers don't want Jetty on
> their boxes, but are OK with WebSphere or Tomcat) so I'm trying to figure
> out: how to get Solr on WebSphere / Tomcat without using WAR knowing that
> the WAR will go away.
I understand that some customers are irrationa
Hi folks,
I'm looking for a way to have Solr reject documents if a certain field
value is duplicated (reject, not overwrite). There doesn't seem to be
any kind of unique option in schema fields.
The de-duplication feature seems to make this (somewhat) possible, but I
would like it to provide the
Hello,
We have production SOLR deployed on AWS Cloud. We have currently 4 live SOLR
servers running on m3xlarge EC2 server instances behind ELB (Elastic Load
Balancer) on AWS cloud. We run Apache Solr in a Tomcat container sitting
behind Apache httpd. Apache httpd is using the prefork MPM