I would like to do the following in solr/lucene:
For a demo I would like to index a certain field once, but be able to query
it in 2 different ways. The first way is to query the field using a synonym
list and the second way is to query the same field without using a synonym
list. The reason I wan
I have checked the links both Pravesh and Otis sent; however, I am still
stuck and confused...
I just put the 3.1 war file in Tomcat's webapps and changed the home
directory to a Solr directory that has indexes, cores, and other settings
from 1.4, just to see what happens...
so at the end w
Dear Sir/Madam,
I am trying to use curl
"http://localhost:8080/solr/update/extract?literal.id=doc1&commit=true" -F
"myfile=@somefile.pdf" from the wiki site... but I get the error caused by:
Caused by: org.apache.solr.common.SolrException: Error loading class
'solr.extraction.ExtractingRequestHa
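That "Error loading class" usually means the extraction contrib jars are not on Solr's classpath. A sketch of the relevant solrconfig.xml pieces (the lib paths are assumptions; match them to where your Solr release actually unpacks):

```xml
<!-- Paths below are illustrative, relative to the core's instanceDir. -->
<lib dir="../../contrib/extraction/lib" regex=".*\.jar"/>
<lib dir="../../dist/" regex="apache-solr-cell-\d.*\.jar"/>

<requestHandler name="/update/extract"
                class="solr.extraction.ExtractingRequestHandler">
  <lst name="defaults">
    <str name="fmap.content">text</str>
  </lst>
</requestHandler>
```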
hi
If you give something like this in the query string
q="'>
then the output from Solr actually runs the script in the browser. Can we
avoid echoing back the submitted query in the error handler?
here is what i see in browser
org.apache.lucene.queryparser.classic.ParseException: Cannot parse '"'>':
Lexical error at
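Independent of any fix inside Solr, a minimal client-side defence is to HTML-escape anything echoed back from Solr before rendering it. A Python sketch (the function name and `<pre>` wrapper are my own, not from the thread):

```python
import html

def render_solr_error(raw_message: str) -> str:
    """Escape a Solr error message before embedding it in an HTML page,
    so script tags smuggled in via the q parameter are rendered as text."""
    return "<pre>" + html.escape(raw_message, quote=True) + "</pre>"

msg = "Cannot parse '\"'><script>alert(1)</script>'"
print(render_solr_error(msg))
```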
Thanks Hoss and iorixxx.
Yes I probably did oversimplify the use case, which is fairly complicated to
explain. I think I might have found a workaround for the issue and I am
testing the performance of it.
Thanks again for your help!
--
View this message in context:
http://lucene.472066.n3.na
On Wed, Oct 12, 2011 at 11:28 PM, Ben Hsu wrote:
> Hello Solr users.
>
> My organization is working on a solr implementation with multiple cores. I
> want to prepare us for the day when we'll need to make a change to our
> schema.xml, and roll that change into our production environment.
>
> I bel
I have these queries in Lucene 2.9.4, is there a way to convert these
exactly to Solr 3.4 but using only the solrconfig.xml? I will figure out the
queries but I wanted to know if it is even possible to go from here to
having something like this:
... queries
So the front end just calls /
On Wed, Oct 12, 2011 at 10:55 PM, Kissue Kissue wrote:
> Hi
>
> I am using solr 3.3 and solrJ. I have two date fields, launch_date and
> expiry_date. Now I want to be able to search for products that have not
> expired. So basically I want to select products with launch_date < today AND
> expi
Hello Solr users.
My organization is working on a solr implementation with multiple cores. I
want to prepare us for the day when we'll need to make a change to our
schema.xml, and roll that change into our production environment.
I believe we'll need to perform the following steps:
# delete all o
: but run into a problem at step 4
:
: Launch Solr:
: cd ; java -Dsolr.solr.home= -jar start.jar
:
: where Solr complains that it can't find solrconfig.xml in either the
: classpath or the solr-ruby home dir. Can anyone help me disentangle this?
what exactly was the command line you executed? w
Hi
I am using solr 3.3 and solrJ. I have two date fields, launch_date and
expiry_date. Now I want to be able to search for products that have not
expired. So basically I want to select products with launch_date < today AND
expiry_date > today. Any pointers on how I can formulate such a query t
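The usual way to express this is a filter query with Solr date math. A sketch that just builds the parameter string in Python (the field names come from the mail; everything else is assumption):

```python
from urllib.parse import urlencode

# Products already launched and not yet expired, using Solr date math:
# launch_date in the past, expiry_date in the future, both relative to NOW.
params = {
    "q": "*:*",
    "fq": "launch_date:[* TO NOW] AND expiry_date:[NOW TO *]",
}
query_string = urlencode(params)
print(query_string)
```

With SolrJ the same string would go into `query.addFilterQuery(...)`.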
Hello community,
Let me explain my case. I need to implement a feature that combines
text search and data aggregation. That is, the app will let the users
search for products and set a date range. As result, I need to show
them products that matched the search + some data aggregated for that
p
: Is there some magic in edismax or one of the QPs that would make this
possible:
:
: Boost documents which match name and desc;
: include docs which just match name;
: and exclude docs which only match desc.
Isn't that pretty much the definition of...
+name:foo^10 desc:foo
?
If your
I’ll try to test the patch this week.
Thanks!
On 10/10/11 2:01 AM, "Mikhail Khludnev" wrote:
> Hello,
>
> Pls have a look to attached patch for
> http://svn.apache.org/repos/asf/lucene/dev/tags/lucene_solr_3_4_0/
>
> It makes all tests green, but I have several doubts about it.
>
> Key points
Yes, but that's related to this Issue:
https://issues.apache.org/jira/browse/SOLR-2605
On Wed, Oct 12, 2011 at 6:14 PM, abhayd wrote:
> hi
>
> i applied patch it shows menu item and core which is NOT marked default.
> Default core is missing from content frame.
>
> Still i cant reload core0 ( whi
hi
I applied the patch; it shows the menu item and the core which is NOT marked
as default. The default core is missing from the content frame.
I still can't reload core0 (which I specified as defaultCore), and it appears
in the menu as "single core" (not as Core 0)
At first I thought so too. Here is a simple document.
1
first
48.60,11.61
52.67,7.30
and here is the result that shouldn't be:
...
*:*
{!geofilt sfield=work pt=52.67,7.30 d=5}
...
52.67,7.30
1
first
48.60,11
Hi,
thanks for pointing that out. There is actually still a problem with
defaultCoreName, so the check being used is not correct in your case :/
Quick-Fix for this Problem: https://gist.github.com/37ff72ee2237ec8b3c39
Will try to include that also in a complete Patch for SOLR-2667
Regards
Stefan
I'm using Solr 4 from trunk. Maybe it's already been reported, but I didn't
see it... When I add defaultCoreName, I don't see Core Admin in the left menu
bar. I see all the other items. Any way to overcome this?
here is my solr.xml
hi Toke,
Thanks for looking into it. I think this is a nice, should-have piece of
functionality that many will use, considering the limitation of pivot
faceting being a single path. I hope it makes it into 4.0.
Date: Wed, 12 Oct 2011 07:27:02 -0700
From: ml-node+s472066n3415858...@n3.nabble.com
To: ajdabhol...@hotmail.com
Hi Erick,
Do you mean name:foo -desc:foo?
If so, I think that won't work because that would exclude all docs that have
"foo" in desc field, while we actually do want docs that match desc:foo, but
only if they also match name:foo.
Only if a doc matches *just* desc:foo should it be excluded from
Well, the simplest answer is that you're
putting the same information in home and
work. Or that they are close enough that
they are both within the radius.
Gotta see the raw data to generate a better
answer.
Best
Erick
On Wed, Oct 12, 2011 at 10:14 AM, Marc Tinnemeyer wrote:
> Hi everyone,
>
>
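A quick way to rule out the "both points within the radius" theory is to compute the great-circle distance between the two coordinates from the example document. A Python sketch (haversine formula, mean Earth radius assumed to be 6371 km):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two lat/lon points."""
    r = 6371.0  # mean Earth radius in km (assumption)
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlat = math.radians(lat2 - lat1)
    dlon = math.radians(lon2 - lon1)
    a = (math.sin(dlat / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlon / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

# "home" 48.60,11.61 vs "work" 52.67,7.30 from the example document:
# hundreds of km apart, so both cannot sit inside a d=5 geofilt.
d = haversine_km(48.60, 11.61, 52.67, 7.30)
print(round(d))
```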
Hi Tom,
I think you won't need to, but I see this has been asked several times before,
so please have a look here for more details:
http://search-lucene.com/?q=upgrade+1.4+3.1&fc_project=Solr&fc_type=mail+_hash_+user
Otis
Sematext :: http://sematext.com/ :: Solr - Lucene - Nutch
Lucene eco
The first suspect is garbage collection. There
are various tools you can use to monitor GC and
see if those times correlate. There's a great
intro here:
http://www.lucidimagination.com/blog/2011/03/27/garbage-collection-bootcamp-1-0/
But a word of caution when re-executing the
queries. It's *possi
Hi Herman,
This is very likely due to JVM Garbage Collection, which can stop the whole app
if not tuned correctly. We just did this tuning for a client whose JVM
would lock up like that for minutes at a time.
If you don't have Solr & JVM monitoring, you should consider getting -
see http
Very close, but think about using
facet.query rather than faceting on
a date range, something like
facet.query=createDate:[NOW/DAYS-1MONTHS TO NOW]
&facet.query=isClosed:yes
The problem with date range is that you'll be getting multiple
entries back
On Tue, Oct 11, 2011 at 8:25 AM, kenneth
On Wed, 2011-10-12 at 00:55 +0200, abhayd wrote:
> I think i would like to make it work with latest trunk version.
I hope to get it done within two weeks. You can add yourself as a
watcher to the Solr JIRA-issue to get updates.
> I really don't understand the problem you have described in the read-me fi
Hi everyone,
I am a bit troubled by a recent change in my configuration. I had to add two
additional fields of type "location" to the already existing one.
So the configuration looks like this:
Assuming that there are docs in the index having "home" set to: "50,20"
When I run a query like:
Otis:
Hmmm, why doesn't a NOT clause only on desc work?
edismax allows fielded terms
Best
Erick
>
> Boost documents which match name and desc;
> include docs which just match name;
> and exclude docs which only match desc.
>
>
> ?
>
> One could use very high field weight for name and very lo
We serve about 25K of each particular query type per hour per server. QTime
*averages* less than a second; however, there are always a few (1-10) whose
QTimes go way above the average (10-500 seconds). If I harvest these queries
from the log and re-execute them, they of course execute sub-second
You really only have a few options:
1> set up a Solr instance on some backup machine
and either manually (i.e. by issuing an HTTP
request) causing a replication to occur when
you want (see:
http://wiki.apache.org/solr/SolrReplication#HTTP_API)
2> suspend indexing and just copy y
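For option 1, the pull can be triggered on demand through the replication handler's HTTP API linked above. A Python sketch that just builds the URL (host, port, and core name are placeholders):

```python
from urllib.parse import urlencode, urlunsplit

def replication_url(host, port, core, command="fetchindex"):
    """Build a SolrReplication HTTP API URL, e.g. to make a backup
    machine pull the index on demand (command=fetchindex)."""
    path = "/solr/{}/replication".format(core)
    netloc = "{}:{}".format(host, port)
    return urlunsplit(("http", netloc, path, urlencode({"command": command}), ""))

url = replication_url("backup-host", 8983, "core0")
print(url)
```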
Does anyone know if you can just copy the index from a 1.4 Solr instance to a
3.X Solr instance? And be mostly done with the upgrade?
Hi guys,
I want to use Solr for my project, and I want to subscribe to this
mailing list to learn more about Solr.
Regards,
jun.
Latest version is 3.4, and it is fairly compatible with 1.4.1, but you have to
reindex.
A first migration step can be to continue using your 1.4 schema on the new
solr.war (and SolrJ), but I suggest you take a few hours to upgrade your
schema and config as well.
--
Jan Høydahl, search solution archite
On 10/10/2011 3:39 PM, � wrote:
Hi,
If you have 4Gb on your server total, try giving about 1Gb to Solr, leaving 3Gb
for OS, OS caching and mem-allocation outside the JVM.
Also, add 'ulimit -v unlimited' and 'ulimit -s 10240' to /etc/profile to
increase virtual memory and stack limit.
I will
Can you provide the full stack trace from the Tomcat logs for further
assistance?
Regds
Pravesh
Hi,
sorry for the delay... I caught a cold and still have tears in my eyes...
These viruses are getting stronger every day...
what I have done is in solrconfig.xml :
true
20
1000
static firstSearcher warming in solrconfig.xml
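The quoting stripped the XML around those values, but for reference, a typical static firstSearcher warming block in solrconfig.xml looks roughly like this (the queries themselves are placeholders; use your own hot queries and sorts):

```xml
<listener event="firstSearcher" class="solr.QuerySenderListener">
  <arr name="queries">
    <lst><str name="q">*:*</str><str name="rows">10</str></lst>
    <lst><str name="q">popular query</str><str name="sort">price asc</str></lst>
  </arr>
</listener>
```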
Hi Pravesh
I tried searching on localhost but the same error still appears.
Regards
Ahsan
From: Pravesh
To: solr-user@lucene.apache.org
Sent: Wednesday, October 12, 2011 5:17 PM
Subject: Re: Problem while getting more than 100 records from solr 1.4.1
Not necessari
This link might help:
http://www.lucidimagination.com/blog/2011/04/01/solr-powered-isfdb-part-8/
Regds
Pravesh
Not necessarily related to SOLR.
Is 75 the only upper limit for rows up to which it works, or does it vary
(70/71/78 etc.)?
Can you check at your infrastructure level whether there is any limit imposed
by a sys-admin/network-admin.
Or check your Tomcat configuration.
You can try wget or
Hi All
When I query Solr with a page size less than or equal to 75 it works fine,
but as the page size increases, say to 100, the following exception is
generated: "An existing connection was forcibly closed by the remote host."
But nothing is logged in the Tomcat logs.
I googled this issue but did not find an
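One way to pin down whether the limit is exactly 75 (or varies) is to probe with increasing rows values using curl or wget. A Python sketch that just builds the probe URLs (host, port, and handler path are assumptions):

```python
from urllib.parse import urlencode

def probe_urls(base="http://localhost:8080/solr/select"):
    """URLs to try one by one with curl/wget, bracketing the
    suspected rows limit around 75."""
    urls = []
    for rows in (70, 75, 76, 80, 90, 100):
        urls.append(base + "?" + urlencode({"q": "*:*", "rows": rows}))
    return urls

for u in probe_urls():
    print(u)
```

Whichever rows value first triggers the reset points at a size-based limit (proxy, firewall, or connector buffer) rather than Solr itself.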
Done
https://issues.apache.org/jira/browse/SOLR-2824
On 12-10-2011 0:47, Chris Hostetter wrote:
: I have the following query
: /core1/select?q=*:*&fq={!join from=id to=childIds
fromIndex=core2}specials:1&fl=id,name
...
: org.apache.solr.search.JoinQParserPlugin$1.parse(JoinQParserPlugi
Here's a basic query :
q=wat&start=0&rows=5&sort=total%20desc
And example data returned :
9
180
watch
watch
watch
5433
52
180
water
water
water
1201
So to clarify...
I input 3 values into Solr:
query (which is a previously seen search query)
matched - how many docs matched that query
t
Hi Doug,
Brilliant, thanks so much for sharing. One more question… how is your
request handler set up to query this?
Sorry to be a bit dense haha.
—Oliver
On 12 October 2011 09:48, Doug McKenzie wrote:
> Sure, this is the schema I used...
>
> positionIncrementGap="100">
>
>
>
> words="st
Sure, this is the schema I used...
positionIncrementGap="100">
words="stopwords_en.txt" enablePositionIncrement="true"/>
maxGramSize="15" side="front"/>
words="stopwords_en.txt" enablePositionIncrement="true"/>
Input was CopyFielded into this field. So using your example...
Katy Per
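The quoting stripped most of the XML above; based on the attribute fragments that survived, the field type presumably looked something like this (the type name, tokenizer, and exact filter order are guesses):

```xml
<fieldType name="text_autocomplete" class="solr.TextField"
           positionIncrementGap="100">
  <analyzer type="index">
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter class="solr.StopFilterFactory" words="stopwords_en.txt"
            enablePositionIncrement="true"/>
    <filter class="solr.EdgeNGramFilterFactory" minGramSize="1"
            maxGramSize="15" side="front"/>
  </analyzer>
  <analyzer type="query">
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter class="solr.StopFilterFactory" words="stopwords_en.txt"
            enablePositionIncrement="true"/>
  </analyzer>
</fieldType>
```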
Hi Doug,
Sounds very interesting; would you mind sharing some details of how
exactly you did this? What request handler did you use etc?
Many thanks,
Oliver
On 11 October 2011 17:37, Doug McKenzie wrote:
> I've just done something similar and rather than using the Spellchecker went
> for NEdg
hi all
Are there any tutorials or sample upgrades that I can follow to learn how to
do this? I have been trying to upgrade since this morning, checking on the
web and so on, but couldn't find anything useful...
And also my other question: should I use nightly versions, or building from
the source is