> -----Original Message-----
> From: Fredrik Rødland
> Sent: Monday, March 04, 2013 6:14 AM
>
> We've been trying to get our heads around this for some days now upgrading
> from 3.6 (where we didn't see this error) to 4.1 (where this error is very
> prominent).
>
> We have upgraded from SOLR 3.6.1
Can you also show how you define the field rawData in the schema?
Dmitry
On Mon, Mar 4, 2013 at 4:13 PM, Van Tassell, Kristian <
kristian.vantass...@siemens.com> wrote:
> Does anyone have any ideas? I don't understand how the query can match, as
> I am querying against the same field, and yet get
Thanks, I got the problem. It was with using the *text_en_splitting* field type
for indexing, which actually includes fuzzy results as well... but I didn't
know it would take fuzzy results to this extent. Now I'm using
*text_en_splitting_tight*
and it's giving correct results without fuzzy results.
On
I work with Aditya, so this information continues from where Aditya left
off.
Here are some observations based on running a query on a particular
unique id. The document corresponding to that unique id is fairly large
if we were to run a query without an
Thanks for the insights into the query, Hoss. I am going to try out the
methods you highlighted.
Thanks,
Indika
On 3 March 2013 01:19, Chris Hostetter wrote:
>
> : sessionAvailableNowQuery = {!edismax}(start_time:[* TO
> : 1970-01-01T12:37:030Z] AND end_time:[1970-01-01T12:37:030Z +
> : (_val_:ord
Looks like pivot faceting with SolrCloud does not work (I am using Solr 4.1).
The query below returns no pivot facet results unless I add
"&shards=shard1".
http://localhost:8995/solr/collection1/select?q=*%3A*&facet=true&facet.mincount=1&facet.pivot=source_domain,author&rows=1&wt=json&facet.limit=5
Well, that would definitely make the index bigger. Why don't you just try
it and see?
You should be able to see the effects with a reasonable subset of your
docs...
Another thing to keep in mind is whether you have any additional "stored=true"
fields defined.
Best
Erick
On Mon, Mar 4, 2013 at 5:51 PM
No, folding doesn't apply to punctuation, only to a set of accents, circumflexes,
etc. It essentially just removes all of the diacritics and "folds" the
letters into their unaccented counterparts.
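As an illustration (not from the original message; the field type name here is
just a placeholder), a folding setup in schema.xml usually looks something like:

<fieldType name="text_folded" class="solr.TextField" positionIncrementGap="100">
  <analyzer>
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <!-- lowercase first, then map accented characters to their ASCII equivalents;
         punctuation passes through untouched -->
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter class="solr.ASCIIFoldingFilterFactory"/>
  </analyzer>
</fieldType>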
Best
Erick
On Mon, Mar 4, 2013 at 12:15 AM, Shawn Heisey wrote:
> On 2/28/2013 5:39 AM, Erick Eric
Thanks Mark.
That worked great.
-Mike
Mark Miller wrote:
Honestly, I'm not sure. Yonik did some testing around upgrading from
4.0 to 4.1 and said this was fine - but it sounds like perhaps there
are some hitches.
- Mark
On Mar 4, 2013, at 3:35 PM, "mike st. john" wrote:
Mark,
the od
Honestly, I'm not sure. Yonik did some testing around upgrading from 4.0 to 4.1
and said this was fine - but it sounds like perhaps there are some hitches.
- Mark
On Mar 4, 2013, at 3:35 PM, "mike st. john" wrote:
> Mark,
>
> the odd piece here i think was, this was a 4.0 collection numShards
Mark,
the odd piece here, I think, was that this was a 4.0 collection with numShards=4,
etc.
Moved to 4.1, I would assume the doc router would have been set to
compositeId, not implicit. Or is the move from 4.0 to 4.1 a complete
rebuild from the collections up?
-Mike
Mark Miller wrote:
On Mar 4
On Mar 4, 2013, at 3:27 PM, Michael Della Bitta
wrote:
> I personally don't know of one other than starting over with a new
> collection, but I'd love to be proven wrong, because I'm actually in the
> same boat as you!
I think it might be possible by using a ZooKeeper tool to edit
clusterstate.json.
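A rough sketch of how that could be done with ZooKeeper's own zkCli.sh (the host
and edited content are placeholders; this is untested, so back up the data and
restart the Solr nodes afterwards):

# dump the current cluster state, change "router" from "implicit" to
# "compositeId", then write the edited JSON back
zkCli.sh -server localhost:2181 get /clusterstate.json
zkCli.sh -server localhost:2181 set /clusterstate.json '<edited JSON here>'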
On Mar 4, 2013, at 1:32 PM, Michael Della Bitta
wrote:
> Are you sure you're sending it to the collection URL as opposed to one of the
> shard URLs?
FYI, it should work the same either way.
>
> If you go to the Cloud tab, click on Tree, and then click on
> clusterstate.json, what is the value for "
I personally don't know of one other than starting over with a new
collection, but I'd love to be proven wrong, because I'm actually in the
same boat as you!
On Mar 4, 2013 6:09 PM, "mike st. john" wrote:
> Hi michael,
>
> Ah, that seems to be the issue, it's set to implicit.
>
> This install ori
Hi Michael,
Ah, that seems to be the issue, it's set to implicit.
This install was originally a 4.0 install; when it moved to 4.1, the
problems started.
Is there an easy way to change the router to compositeId?
-Mike
Michael Della Bitta wrote:
Hi Mike,
Are you sure you're sending it to the co
Hi,
It is the index folder. tlog is only a few MB.
I have analysed all changes and found out that only one field in the schema
was changed.
This field in the non-cloud setup was changed in the cloud setup to use
fastVectorHighlighting.
Is it possible that this change could double index size?
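For reference, a field used with FastVectorHighlighter needs term vectors with
positions and offsets, roughly along these lines (the field name and type here
are just placeholders, not the actual schema):

<field name="content" type="text_general" indexed="true" stored="true"
       termVectors="true" termPositions="true" termOffsets="true"/>

Term vectors with positions and offsets store a substantial amount of extra data
for large text fields, so a big jump in index size would not be surprising.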
Thanks.
Alex.
Hi Mark,
Thanks for trying it out.
Let me see if I can explain it better: the number you have to select (in order
to be able to tweak it later with the slider) is any number that appears in
one of the parameters in the Scoring section.
The issue you have is that you are using the /select handler f
Hi,
I have been using this plugin with success:
https://github.com/healthonnet/hon-lucene-synonyms
While it gives you multi-word synonyms, you lose the ability to have different
synonym dictionaries per field.
--
Jan Høydahl, search solution architect
Cominvent AS - www.cominvent.com
Solr Train
Can you tell whether it's the "index" folder that is that large or is it
including the "tlog" transaction log folder?
If you have a huge transaction log, you need to start sending hard commits more
often during indexing to flush the tlogs.
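A minimal sketch of automating that with autoCommit in solrconfig.xml (the
thresholds below are just example values to tune for your indexing load):

<updateHandler class="solr.DirectUpdateHandler2">
  <autoCommit>
    <maxDocs>25000</maxDocs>           <!-- hard commit after this many docs -->
    <maxTime>300000</maxTime>          <!-- or after 5 minutes, whichever comes first -->
    <openSearcher>false</openSearcher> <!-- rotate tlogs without opening a new searcher -->
  </autoCommit>
</updateHandler>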
--
Jan Høydahl, search solution architect
Cominvent AS -
If you want multi-term synonyms at query time, you will need to enclose the
sequence of terms in quotes. Otherwise, the query analyzer will see only one
term at a time and not recognize any multi-term synonyms.
Note that the synonym filter will need to see "phrase one" as two separate
terms, s
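For illustration (not from the original message), with a synonyms.txt entry such
as "wordOne, phrase one", the difference at query time is roughly:

q=phrase one        (each term analyzed separately; the multi-term synonym is never seen)
q="phrase one"      (quoted, so the query analyzer sees both terms together and the synonym applies)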
Where it says:
"querystring":"DocumentContent:java",
"parsedquery":"(+((DocumentContent:java DocumentContent:notare
DocumentContent:jre)~2/no_coord) () () () () ())/no_coord",
That indicates that "java" was expanded to be equivalent to "java",
"notare", or "jre".
Are you sure you have docum
Hello Xavier,
Thanks for uploading this and sharing. I also read the other messages in the
thread.
I'm able to get partway through your Getting Started section and I get results,
but I get stuck on editing the values. I've tried with Java 6 and 7, with both
the 0.5 binary and from the source d
Hi Mike,
Are you sure you're sending it to the collection URL as opposed to one of the
shard URLs?
If you go to the Cloud tab, click on Tree, and then click on
clusterstate.json, what is the value for "router" for that collection?
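Roughly what to look for (the exact JSON layout differs a bit between 4.x
releases, so treat this as a sketch):

"collection1":{
  "shards":{ ... },
  "router":"compositeId"}

If it says "implicit" instead, documents are not routed by the hash of their id,
which matches the symptoms described in this thread.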
Michael Della Bitta
A
Sorry, but phrase queries really are dependent on position information, by
definition. So if you want phrase queries, don't omit position info.
Yeah, the wiki should be updated to note that change about non-silent
failure.
-- Jack Krupansky
-----Original Message-----
From: Fredrik Rødland
S
I've done this, but I'm not a specialist, so I see nothing interesting.
The log is: https://gist.github.com/caarlos0/4ad53583fb2b30ef0bec
Thanks.
On Mon, Mar 4, 2013 at 5:05 PM, Jack Krupansky wrote:
> You can simply test whether synonyms are being ignored or how they are
> being processed by
Add &debugQuery=true to your query and look at the "explain" section for
details of why a document was scored as it was.
Also look at the parsed query to see what fields it is actually searching.
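For example (the collection name and query here are placeholders):

http://localhost:8983/solr/collection1/select?q=galaxy+ace&debugQuery=true&wt=xml

The "explain" block lists the tf, idf, and fieldNorm contributions for each
matching document, and "parsedquery" shows which fields the query was actually
run against.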
-- Jack Krupansky
-----Original Message-----
From: Rohan Thakur
Sent: Monday, March 04, 2013 8:3
: I have some more info now that I have had time to dig into it.
:
: I can stop the loop and cause solr to behave by reloading two cores. It
: doesn't matter which two, any two will do.
As Otis already said: please verify which PID is causing the CPU
utilization, and generate some thread dump
You can simply test whether synonyms are being ignored or how they are being
processed by using the Solr Admin UI Analysis page. Select the field and
enter the text to test. It will show you exactly what the synonym filter
does.
Make sure the synonym file is exactly as specified in the token f
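As a point of reference (a sketch, not the poster's actual configuration), a
typical synonym setup to test on that page looks like:

# synonyms.txt
wordOne, phrase one

<filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt"
        ignoreCase="true" expand="true"/>

The Analysis page then shows, token by token, what this filter emits for
whatever text you enter.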
Thanks.
That would help a lot.
James,
You are right. I was setting up spell checker incorrectly.
It works correctly as you described.
The spell checker is invoked after the query component and it does not stop
Solr from executing the query.
Thanks for correcting me.
Saroj
On Fri, Mar 1, 2013 at 7:30 AM, Dyer, James wrote:
> I'
Hello Solr mailing list,
I have read many posts and run many tests, but still I cannot get
multi-word synonyms behaving the way I think they should. I would
appreciate your advice.
Here is an example of the behaviour I am trying to achieve:
# Given synonyms.txt
wordOne, phrase one

1. At
Yeah, you need numShards from 4.1 up or you are in a mode where you have to
distribute updates yourself.
I thought the core admin UI had this field, but if not, please file a JIRA. Until
then, you may need to use the HTTP API.
Mark
Sent from my iPhone
On Mar 4, 2013, at 12:46 AM, Arkadi Colson wrote:
Sounds like you should file a JIRA so this can be looked into before 4.2 comes
out shortly.
Mark
Sent from my iPhone
On Mar 3, 2013, at 10:54 PM, "mike st. john" wrote:
> Atomic updates are failing in SolrCloud unless the update is sent to the
> shard where the doc resides. Real time get
Hi,
Running Tomcat, Solr 4.1, distributed with 4 shards and 2 replicas per shard.
Everything works fine for searching, but I'm trying to use this instance as
a NoSQL solution as well. What I've noticed is that when I send a partial
update I'll receive "missing required field" if the document is not
loca
SOLR-4321: Collections API will sometimes use a node more than once, even when
more unused nodes are available.
Fixed for 4.2. I'm going to try and drive a 4.2 release starting next week.
- Mark
On Mar 4, 2013, at 7:05 AM, adfel70 wrote:
> I'm using Solr 4.1.
> I have a cluster with 30 nod
Thanks for the quick answer!
I am not quite sure whether or not that will help me, though. The relation
between the files and the DB entries is 1:1, so I am only expecting one
result set for each call, which cannot be cached since the key (the filename)
differs. I will try to implement it anyway
What
Thanks, Chris.
I considered this approach but wasn't sure about resource consumption.
We've run into a couple of issues where a full index rebuild/swap/replicate
[overlapping] has left the slaves looking for an index that doesn't exist. This
should resolve that issue.
Jeremy D. Branham
Performa
Confirmed it was indeed caching, as I just updated my live master from
4.1 to the 4.2 snapshot and got the empty drop-down; cleared the cache,
reloaded, and it's working.
On Mon, Mar 4, 2013 at 10:03 AM, Stefan Matheis
wrote:
> Thanks Jens! Didn't think about caching .. :/
>
> Perhaps we should change
Yes, we've had quite a few surprises with outdated information (and
mixtures of old and new information) in the admin UI, so I'd definitely
be in favor of getting rid of caching.
Jens
On 03/04/2013 04:03 PM, Stefan Matheis wrote:
Thanks Jens! Didn't think about caching .. :/
Perhaps we shoul
On 3/4/2013 2:06 AM, adm1n wrote:
here you go - http://wiki.solarium-project.org/index.php/V1:Ping_query
as for my cloud, average response time for ping request is 4 ms. but there
are several pings that take even 3 seconds. (I have about 10 pings/day)
I would suspect GC pauses, like I was
You can cache the sub-entity; then it will retrieve all the data for that entity
in one query.
See http://wiki.apache.org/solr/DataImportHandler#CachedSqlEntityProcessor for
more information. This section focuses on caching data from
SQLEntityProcessor. However, it is now possible to cache dat
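A minimal sketch of a cached sub-entity in data-config.xml (the entity, column,
and field names here are placeholders for the poster's setup):

<entity name="file" processor="XPathEntityProcessor" ... >
  <entity name="dbdata"
          processor="SqlEntityProcessor"
          cacheImpl="SortedMapBackedCache"
          cacheKey="FILENAME"
          cacheLookup="file.filename"
          query="SELECT filename, title, author FROM metadata"/>
</entity>

With the cache in place the SQL query runs once, and each document looks up its
row by key instead of triggering one query per document.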
I'm using Solr 4.1.
I have a cluster with 30 nodes, I use collections API's create command to
create a collection of 5 shards and 2 replicas.
I get a collection that is built on 6 nodes out of the 30, with multiple
shards using the same nodes.
There is nothing wrong with this, but as I have 30 nod
Thanks Jens! Didn't think about caching .. :/
Perhaps we should change the requests in favor of
https://issues.apache.org/jira/browse/SOLR-4311 to avoid any caching in the UI?
That may result in a few more (real) requests, but I guess that would be okay?
Stefan
On Monday, March 4, 2013 at 2:21
Hi,
I am trying to use the DIH to crawl over some XML files, apply XPath to them,
and then access a DB with the filename as a key. That works, but
reading ~30,000 docs would take almost 3 hours. When I looked at the
DIH debug console, it showed me that far too many DB calls were made: 1 for
the 1st doc,
On 4 March 2013 19:02, Rohan Thakur wrote:
> Hi all,
>
> I wanted to know why Solr is showing an irrelevant result: as I search for
> "galaxy ace" it's showing the result "sony bravia", which does not have either
> galaxy or ace in it, but way down the order. Why is it doing so?
> Any idea p
Does anyone have any ideas? I don't understand how the query can match, as I am
querying against the same field, and yet get zero highlighting occurring.
Just to clarify, this query against a field called rawData returns 51 hits for
me:
?q=Working%20sheet%20numbers%20and%20names
qf=rawData
fl
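For comparison, a full highlighting request (the parameter values here are
illustrative, not the poster's exact query) would look something like:

?q=Working%20sheet%20numbers%20and%20names&defType=edismax&qf=rawData&fl=id,score&hl=true&hl.fl=rawData

Highlighting is only produced for fields that are stored and listed in hl.fl,
so both are worth double-checking for rawData.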
Hi,
In my Solr config I have a request handler that boosts newer items, using
a date field:
true
10
itemid,score
{!boost b=$bf v=$qq}
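A hedged sketch of what such a handler can look like (the handler name, parameter
names, field names, and boost function below are assumptions for illustration,
not the poster's configuration):

<requestHandler name="/recent" class="solr.SearchHandler">
  <lst name="defaults">
    <int name="rows">10</int>
    <str name="fl">itemid,score</str>
    <!-- wrap the user query (passed as qq) in a multiplicative boost (bf) -->
    <str name="q">{!boost b=$bf v=$qq}</str>
    <!-- favour newer documents: reciprocal of document age in milliseconds -->
    <str name="bf">recip(ms(NOW,created_date),3.16e-11,1,1)</str>
  </lst>
</requestHandler>

The client would then send the actual user query as qq=... on each request.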
I am using "text_en_splitting" as while indexing is that the problem??
On Mon, Mar 4, 2013 at 7:02 PM, Rohan Thakur wrote:
> Hi all,
>
> I wanted to know why Solr is showing an irrelevant result: as I search
> for "galaxy ace" it's showing the result "sony bravia", which does not have either
> of them
Actually, just updated Chrome this morning, and it all appears to
work. Flushed cache as well, so could be part of that. All's well
that ends well I suppose.
neal
On Mon, Mar 4, 2013 at 4:44 AM, Jens Grivolla wrote:
> On 03/01/2013 07:46 PM, Neal Ensor wrote:
>>
>> Again, it appears to work on
On Mon, Mar 4, 2013 at 1:35 PM, Upayavira wrote:
> You'd be using 3.x style distributed search. You would do a query such
> as:
>
> http://localhost:8983/solr/select?q=foo:bar&shards=localhost:8983/solr,localhost:8984/solr
http://localhost:8983/solr/select?q=foo:bar&shards=localhost:8983/solr/c
You'd be using 3.x style distributed search. You would do a query such
as:
http://localhost:8983/solr/select?q=foo:bar&shards=localhost:8983/solr,localhost:8984/solr
The presence of the shards= parameter tells it that it is to be a
distributed search.
Of course you can wire this into your solrco
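A sketch of wiring that into a handler's defaults in solrconfig.xml (the handler
name, hosts, and core paths are placeholders):

<requestHandler name="/distrib" class="solr.SearchHandler">
  <lst name="defaults">
    <str name="shards">localhost:8983/solr,localhost:8984/solr</str>
  </lst>
</requestHandler>

Clients can then query /distrib without passing shards= on every request.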
Is it possible to run Solr without ZooKeeper, but still use sharding, if
it's all running on one host? Would the shards have to be explicitly
included in the query URLs?
Thanks,
/Martin
On Fri, Mar 1, 2013 at 3:58 PM, Shawn Heisey wrote:
> On 3/1/2013 7:34 AM, Martin Koch wrote:
>
>> Most of
We've been trying to get our heads around this for some days now upgrading from
3.6 (where we didn't see this error) to 4.1 (where this error is very prominent).
We have upgraded from SOLR 3.6.1 to 4.1 and get the following error:
INFO [2013.03.04 09:22:40] http-12200-2 org.apache.solr.core.SolrCo
Hi Chris,
Thank you for the reply. Okay, understood about *fieldWeight*.
I am actually curious to know how the documents are ordered in this case
when the product of tf, idf, and fieldNorm is the same for both documents.
AFAIK, at the first step, documents are ordered based on
fieldWeight(p
On 03/01/2013 07:46 PM, Neal Ensor wrote:
Again, it appears to work on Safari fine hitting the same container,
so must be something Chrome-specific (perhaps something I have
disabled?)
This sounds like it might just be a browser cache issue (if you used
Chrome to access the same URL previously
*Shawn:*
here you go - http://wiki.solarium-project.org/index.php/V1:Ping_query
as for my cloud, average response time for ping request is 4 ms. but there
are several pings that take even 3 seconds. (I have about 10 pings/day)
Mark,
it's been there for ages:
http://lucene.apache.org/core/3_6_0/api/all/org/apache/lucene/queryParser/core/package-summary.html
You are welcome!
On Mon, Mar 4, 2013 at 2:42 AM, Mark Bennett wrote:
> Hi Mikhail,
>
> Thanks for the links, looks like interesting stuff.
>
> Sadly this project is stu
Hi
When creating the shards through the admin interface it's not possible to
specify the number of shards. I created the shards with the same shard id,
so I have 2 shards for each collection. When checking the ZooKeeper
status, numShards = null. I did this before starting the index
proces