Hi,
This has probably been discussed a long time back, but I got this error
recently on one of my production slaves.
SEVERE: java.lang.OutOfMemoryError: OutOfMemoryError likely caused by the
Sun VM Bug described in https://issues.apache.org/jira/browse/LUCENE-1566;
try calling FSDirectory.se
Hello, Solrs
We are trying to filter out documents written by (one or more of) the authors
from
a mediumish list (~2K). The document set itself is in the millions.
Apart from the obvious approach of building a huge OR-list and appending it
to the query, it seems that writing a Lucene[1] filter (
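For the filter-query route, a minimal client-side sketch in plain Java (the `author` field name and helper names are made up for illustration, and the escape list covers only the common Lucene query-syntax specials):

```java
import java.util.List;
import java.util.stream.Collectors;

public class AuthorFilter {
    // Escape characters that are special in the Lucene query syntax.
    public static String escape(String term) {
        StringBuilder sb = new StringBuilder();
        for (char c : term.toCharArray()) {
            if ("+-!():^[]\"{}~*?|&/\\ ".indexOf(c) >= 0) sb.append('\\');
            sb.append(c);
        }
        return sb.toString();
    }

    // Build one big fq value: author:(a OR b OR ...). With ~2K authors this
    // stays a single cacheable filter rather than part of the scored query.
    public static String buildAuthorFq(List<String> authors) {
        return "author:(" + authors.stream()
                .map(AuthorFilter::escape)
                .collect(Collectors.joining(" OR ")) + ")";
    }

    public static void main(String[] args) {
        System.out.println(buildAuthorFq(List.of("smith", "o'brien", "van dyke")));
        // → author:(smith OR o'brien OR van\ dyke)
    }
}
```

Sending this as an fq rather than appending it to q at least keeps the big OR list out of scoring and lets Solr cache it; whether a hand-rolled Lucene filter beats it would need measuring.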
Thank you very much for your reply, iorixxx.
I already know about the field type "suggest", and I am able to get
the related keywords in JSON response format. My problem is: I
developed one JSP page and integrated it with Solr. If I type "test" in the JSP
page I will get the response whichever ha
Hi everybody,
I developed one response format which gives the following response
whenever I enter "t" in my Solr search field...
{
"responseHeader":{
"status":0,
"QTime":0,
"params":{
"fl":"keywords",
"indent":"on",
"start":"0",
"q":"t\r\n",
hey,
On Tue, Aug 16, 2011 at 9:34 AM, Pranav Prakash wrote:
> Hi,
>
> This might probably have been discussed long time back, but I got this error
> recently in one of my production slaves.
>
> SEVERE: java.lang.OutOfMemoryError: OutOfMemoryError likely caused by the
> Sun VM Bug described in htt
Hi,
I just migrated to solr 3.3 from 1.4.1.
My index is still in 1.4.1 format (will be migrated soon).
I have an error when I use sharding with the new version:
org.apache.solr.common.SolrException: java.lang.RuntimeException: Invalid
version (expected 2, but 1) or the data in not in 'javabin' fo
>
>
> AFAIK, solr 1.4 is on Lucene 2.9.1 so this patch is already applied to
> the version you are using.
> maybe you can provide the stacktrace and more deatails about your
> problem and report back?
>
Unfortunately, I have only this much information with me. However, following
are my specifications
Hi,
We're getting a lot of these timeouts during bulk feeding of a large
document set. We're sending batches of 1000 documents and committing every 15
minutes or every 10,000 docs, whichever happens first. We find that the
first few commits (after 10'/20'/30' docs) go through without exceptions
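The cadence described (commit every 15 minutes or every 10,000 docs, whichever comes first) can be sketched as a small policy object; this is a hypothetical illustration, with the actual SolrJ commit call left out:

```java
import java.util.concurrent.TimeUnit;

public class CommitPolicy {
    private final long maxDocs;
    private final long maxIntervalMillis;
    private long docsSinceCommit = 0;
    private long lastCommitMillis;

    public CommitPolicy(long maxDocs, long maxIntervalMillis, long nowMillis) {
        this.maxDocs = maxDocs;
        this.maxIntervalMillis = maxIntervalMillis;
        this.lastCommitMillis = nowMillis;
    }

    // Record a batch of added docs; return true if a commit is now due,
    // resetting the counters when it is.
    public boolean addDocs(long count, long nowMillis) {
        docsSinceCommit += count;
        if (docsSinceCommit >= maxDocs || nowMillis - lastCommitMillis >= maxIntervalMillis) {
            docsSinceCommit = 0;
            lastCommitMillis = nowMillis;
            return true; // caller would issue server.commit() here
        }
        return false;
    }

    public static void main(String[] args) {
        CommitPolicy p = new CommitPolicy(10_000, TimeUnit.MINUTES.toMillis(15), 0);
        System.out.println(p.addDocs(1000, 60_000));  // false: 1000 docs, 1 minute in
        System.out.println(p.addDocs(9000, 120_000)); // true: hit 10,000 docs
    }
}
```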
We too were getting the same issue.
We solved it by ensuring that when a commit is in progress, no one accesses the
index.
Though Solr's UpdateRequest does this, we still saw read timeout issues
because of CommonsHttpSolrServer.
If we have another layer which doesn't send the request itself, then you won't
Hi Jay, thanks. Great idea; in the next few days we'll try to do something like
you'd described.
best,
rode.
---
Rode González
Libnova, SL
Paseo de la Castellana, 153-Madrid
[t]91 449 08 94 [f]91 141 21 21
www.libnova.es
> -Original Message-
> From: Jaeger, Jay - DOT [mailto:jay.jae...@dot.wi.gov]
About updating the Wiki, just create your login and have at it. Anything
people think is wrong, they can edit
Best
Erick
On Sun, Aug 14, 2011 at 3:39 PM, Shawn Heisey wrote:
> On 8/13/2011 9:59 AM, Michael Sokolov wrote:
>>
>>> Shawn, my experience with SolrJ in that configuration (no autoC
On the surface, you could simply add some more fields to your schema. But as
far as I can tell, you would have to have a separate Solr "document" for each
SKU/size combination, and store the rest of the information (brand, model,
color, SKU) redundantly and make the unique key a combination of
Why do you care about the lock file on the slave? It shouldn't matter,
so I'm wondering if this is an XY problem:
From Hossman's Apache page:
Your question appears to be an "XY Problem" ... that is: you are dealing
with "X", you are assuming "Y" will help you, and you are asking about "Y"
without
Right, so you're using edismax? This is expected. You can do
a number of things:
1> change the parameters of edismax
2> have your app filter out returns that dive beneath some threshold
that is relative to the score of the first doc in the list.
But I don't see why, given your example, it ma
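Option 2> might look like this on the client side (a sketch, assuming results arrive sorted by score descending):

```java
import java.util.List;
import java.util.stream.Collectors;

public class ScoreCutoff {
    // Keep only results whose score is at least `fraction` of the top score.
    // The scores are assumed to be sorted descending, as Solr returns them.
    public static List<Double> filterByRelativeScore(List<Double> scores, double fraction) {
        if (scores.isEmpty()) return scores;
        double cutoff = scores.get(0) * fraction;
        return scores.stream().filter(s -> s >= cutoff).collect(Collectors.toList());
    }

    public static void main(String[] args) {
        // Drop anything scoring below half of the first doc's score.
        System.out.println(filterByRelativeScore(List.of(4.0, 3.0, 1.5, 0.2), 0.5));
        // → [4.0, 3.0]
    }
}
```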
What have you tried and what doesn't it do that you want it to do?
This works; instantiating the StreamingUpdateSolrServer (server) and
the JDBC connection/SQL statement are left as exercises for the
reader:
while (rs.next()) {
SolrInputDocument doc = new SolrInputDocument();
S
On Mon, Aug 15, 2011 at 8:13 PM, Bill Bell wrote:
> How do I change the score to scale it between 0 and 100 irregardless of the
> score?
>
> q.alt=*:*&bq=lang:Spanish&defType=dismax
Doing this for a single query is easy: when you retrieve scores, the
maxScore is also reported. So just do
score/
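In code, the per-query scaling is essentially one line (a sketch; `maxScore` is the value reported alongside the results):

```java
public class ScoreScale {
    // Scale a raw score into 0..100 using the query's reported maxScore.
    public static double scaleScore(double score, double maxScore) {
        if (maxScore <= 0) return 0; // guard against empty result sets
        return score / maxScore * 100.0;
    }

    public static void main(String[] args) {
        System.out.println(scaleScore(2.5, 5.0)); // 50.0
    }
}
```

Note this is only per-query normalization: scaled scores are not comparable across different queries.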
Send gc log and force dump if you can when it happens.
Bill Bell
Sent from mobile
On Aug 16, 2011, at 5:27 AM, Pranav Prakash wrote:
>>
>>
>> AFAIK, solr 1.4 is on Lucene 2.9.1 so this patch is already applied to
>> the version you are using.
>> maybe you can provide the stacktrace and more
Hi Arcadius,
currently we have a migration project from verity k2 search server to solr.
I do not know IDOL, but Autonomy bought Verity before IDOL was released, so
possibly they are comparable?
Verity K2 works directly on XML files; as a result, the query syntax is a little
bit like XPath, e.g. with "
Hi all-
I'm missing something fundamental yet I've been unable to find the definitive
answer for exact name matching. I'm indexing names using the standard "text"
field type, and my search is for the name "clarke". My results include "clark",
which is incorrect; it needs to match "clarke" exactly
On 8/16/2011 4:16 AM, olivier sallou wrote:
I just migrated to solr 3.3 from 1.4.1.
My index is still in 1.4.1 format (will be migrated soon).
I have an error when I use sharding with the new version:
org.apache.solr.common.SolrException: java.lang.RuntimeException: Invalid
version (expected 2,
On 8/16/2011 7:14 AM, Erick Erickson wrote:
What have you tried and what doesn't it do that you want it to do?
This works, instantiating the StreamingUpdateSolrServer (server) and
the JDBC connection/SQL statement are left as exercises for the
reader.:
while (rs.next()) {
SolrInputD
"exact" can mean a lot of things (do diacritics count?, etc), but in
this case, it sounds like you just need to turn off the stemmer you
have on this fieldtype (or create a new one that doesn't include the
stemmer).
hth,
rob
On Tue, Aug 16, 2011 at 11:20 AM, Olson, Ron wrote:
> Hi all-
>
> I'm m
Jay, this is great information.
I don't know enough about Solr to know whether this is possible... Can we set up
two indexes in the same core, one for product_catalog and the other for
inventory? Then using a Solr query we could join the indexed content
together.
In SQL it would look like this
select
p.
No, I don't think so. A given core can only use one configuration and
therefore only one schema, as far as I know, and a schema can only have one key.
You could use two cores with two configurations (but that presumably wouldn't
be much help).
Solr is not a DBMS. It is an index.
-Origi
Thanks Jay, if we come to a reasonable solution are you interested in the
details?
On Tue, Aug 16, 2011 at 11:44 AM, Jaeger, Jay - DOT
wrote:
> No, I don't think so. A given core can only use one configuration and
> therefore only one schema, as far as I know, and a schema can only have one
> ke
Hi Ron,
There was a discussion about this some time back, which I implemented
(with great success btw) in my own code...basically you store both the
analyzed and non-analyzed versions (use string type) in the index, then
send in a query like this:
+name:clarke name_s:"clarke"^100
The name field
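Building that query string from user input might look like this (a sketch; `name`/`name_s` follow the field names used above, and the escaping is deliberately minimal):

```java
public class ExactBoostQuery {
    // Combine the analyzed field (required clause) with a heavily boosted
    // exact-match clause on the stored string field.
    public static String buildQuery(String name) {
        String quoted = "\"" + name.replace("\"", "\\\"") + "\"";
        return "+name:" + name + " name_s:" + quoted + "^100";
    }

    public static void main(String[] args) {
        System.out.println(buildQuery("clarke"));
        // → +name:clarke name_s:"clarke"^100
    }
}
```

Exact matches then float to the top of the results while stemmed near-matches still appear below them.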
The problem with anything "automatic" is that I don't see how it could know
which fields in the document to map DB columns to. Unless you had
fields that exactly matched column names, it would be iffy...
I assume DIH actually does something like this, but don't know any way
of having SolrJ automag
Not particularly. Just trying to do my part to answer some questions on the
list.
-Original Message-
From: Steve Cerny [mailto:sjce...@gmail.com]
Sent: Tuesday, August 16, 2011 11:49 AM
To: solr-user@lucene.apache.org
Subject: Re: Product data schema question
Thanks Jay, if we come to
Thanx. I was using a build of the day you fixed the bug :)
Keep up the good work.
--
View this message in context:
http://lucene.472066.n3.nabble.com/Exception-DirectSolrSpellChecker-when-using-spellcheck-q-tp3249565p3259372.html
Sent from the Solr - User mailing list archive at Nabble.com.
Hey guys,
This might seem odd, but is it possible to use boost with random ordering?
That is, documents that get boosted are more likely to appear towards the
top of the ordering (I only display page 1, say 30 documents). Does that
make sense? I'm assuming that random ordering is, well, really ran
Hey,
How could I connect my solr server with external zookeeper?
Thanks,
Sharath
Just wanted to make people aware of a company called Kolera that is sending
threatening letters via their law firm, Manatt, Phelps &amp; Phillips, in regard
to violation of their patent, U.S. Patent No. 6,275,821, titled "Method and
System for Executing a Guided Parametric Search."
Apparently, they believe
Hi.
I have a Solr core with job records, and one person can work for different
companies in
a specific range from "dateini" to "dateend".
[sample documents elided; the archive stripped the XML markup, leaving only the
company names (IBM, APPLE) and the date values (10012005)]
Is it possible to make a range query on a multivalued field
I'm sorry, I'm trying to do the same as he is.
I've read your reply many times now, but I still don't know how to do this.
Would somebody help me with this? Thanks a lot
> This might seem odd, but is it possible to use boost with
> random ordering?
> That is, documents that get boosted are more likely to
> appear towards the
> top of the ordering (I only display page 1, say 30
> documents). Does that
> make sense? I'm assuming that random ordering is, well,
> reall
On Wed, Aug 17, 2011 at 12:03 AM, LaMaze Johnson wrote:
[...]
> Just thought I would make others aware of this. I'd appreciate any insight
> others might have regarding the issue.
[...]
If you will permit me a moment of levity, from the perspective of
someone in India, I would say, move to a non
To make random results I'd use something related to dates and milliseconds,
not boosting. Let me think about this...
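One client-side alternative to dates and milliseconds is a weighted random shuffle (Efraimidis-Spirakis keys): boosted docs tend toward the top, but the order stays random. This is a hypothetical sketch, not something Solr does out of the box:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.Random;

public class WeightedShuffle {
    // Assign each doc the key u^(1/weight) for uniform u in (0,1) and sort by
    // key descending; higher-weight docs are more likely to land near the top.
    public static List<String> shuffle(List<String> ids, List<Double> weights, Random rnd) {
        record Keyed(String id, double key) {}
        List<Keyed> keyed = new ArrayList<>();
        for (int i = 0; i < ids.size(); i++) {
            keyed.add(new Keyed(ids.get(i), Math.pow(rnd.nextDouble(), 1.0 / weights.get(i))));
        }
        keyed.sort(Comparator.comparingDouble(Keyed::key).reversed());
        return keyed.stream().map(Keyed::id).toList();
    }

    public static void main(String[] args) {
        // "a" has 10x the weight of "b" and "c", so it usually sorts first.
        System.out.println(shuffle(List.of("a", "b", "c"),
                                   List.of(10.0, 1.0, 1.0), new Random(42)));
    }
}
```

You would fetch a candidate page from Solr, then reorder it this way before display.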
2011/8/16 Ahmet Arslan
> > This might seem odd, but is it possible to use boost with
> > random ordering?
> > That is, documents that get boosted are more likely to
> > appear towa
On 8/16/2011 11:23 AM, Erick Erickson wrote:
The problem with anything "automatic" is that I don't see how it could know
which fields in the document to map DB columns to. Unless you had
fields that exactly matched column names, it would be iffy...
I assume DIH actually does something like this,
Hello,
I am writing software for an e-commerce site. Different customers can have
different selections of products depending on what is priced out for them, so
to get the faceting counts correct I need to filter the values based on the
pricing. I have written a function query to get the pricing, w
Thank you for the response! I'm learning much about Solr... So I think
FieldCollapsing might do the trick... So if I understand correctly, I should
be able to group by type A, B, C, D, E, F, sort groups randomly, sort within
groups randomly, display simple format, and get an evenly distributed set
Gora Mohanty-3 wrote:
>
> On Wed, Aug 17, 2011 at 12:03 AM, LaMaze Johnson
> wrote:
> [...]
>> Just thought I would make others aware of this. I'd appreciate any
>> insight
>> others might have regarding the issue.
> [...]
>
> If you will permit me a moment of levity, from th
I know you mean well and are probably wondering what to do next, but such a
discussion is really beyond the scope of this mailing list. Most of us aren't
lawyers (I wonder if anyone here is?) and if we were, we wouldn't likely
speculate in public on something that can only be decided in the cou
I've been trying (unsuccessfully) to get multicore working for about a day and
a half now. I'm nearly at wit's end and unsure what to do anymore. **Any** help
would be appreciated.
I've installed Solr using the solr-jetty packages on Ubuntu 10.04. The default
Solr install seems to work fine.
Now
Grant Ingersoll-2 wrote:
>
> I know you mean well and are probably wondering what to do next, but such
> a discussion is really beyond the scope of this mailing list. Most of us
> aren't lawyers (I wonder if anyone here is?) and if we were, we wouldn't
> likely speculate in public on something t
solrQuery.setQuery("*:*");
solrQuery.addFilterQuery("{!func}geodist()");
solrQuery.set("sfield", "store");
solrQuery.set("pt", lat + "," + lon);
solrQuery.set("sort", "geodist() asc");
//disclaimer: I haven't run this
-
Author: https://www.packtpub.com/solr-1-4-enterprise
Perhaps your admin doesn’t work because you don't have
defaultCoreName="whatever-core-you-want-by-default" in your tag? E.g.:
Perhaps this was enough to prevent it starting any cores -- I'd expect a
default to be required.
Also, from experience, if you add cores, and you have securi
Let's try something simpler.
My start.jar is in \apache-solr-3.3.0\example\
Here's my local config, placed in \apache-solr-3.3.0\example\solr\
Create \apache-solr-3.3.0\example\solr\softwares01\conf\
and \apache-solr-3.3.0\example\solr\softwares01\data\
http://localhost:8983/solr/ should
I've installed using aptitude so I don't have an example folder (that I can
find).
/solr/ does work (but lists no cores)
/solr/live/admin/ does not -- 404
On Tuesday, 16 August, 2011 at 1:13 PM, Alexei Martchenko wrote:
> Lets try something simplier.
> My start.jar is on \apache-solr-3.3.0\e
On 8/16/2011 1:12 PM, Shawn Heisey wrote:
On 8/16/2011 11:23 AM, Erick Erickson wrote:
The problem with anything "automatic" is that I don't see how it
could know
which fields in the document to map DB columns to. Unless you had
fields that exactly matched column names, it would be iffy...
I a
I tried setting `defaultCoreName="admin"` and that didn't seem to change
anything.
I also tried adding an `env-entry` for "solr/home" pointing to
"/home/webteam/config" but that didn't seem to help either.
The logs don't have any errors in them, besides 404 errors.
On Tuesday, 16 August, 20
AFAIK you're still seeing the single-core version.
Where is your start.jar?
Search for solr.xml; see how many you've got, please.
2011/8/16 David Sauve
> I've installed using aptitude so I don't have an example folder (that I
> can find).
>
> /solr/ does work (but lists no cores)
> /solr/live/admin/
Just the one `solr.xml`. The one I added (well, symlinked from my config folder
-- I like to keep my configuration files organized so they can be managed by
git).
`start.jar` is in `usr/share/jetty/start.jar`.
On Tuesday, 16 August, 2011 at 1:33 PM, Alexei Martchenko wrote:
> AFAIK you're st
Is your solr.xml in usr/share/jetty/solr/solr.xml?
Let's try this XML instead.
Can you see the logs? You should see something like this:
16/08/2011 17:30:55 org.apache.solr.core.SolrResourceLoader
*INFO: Solr home set to 'solr/'*
16/08/2011 17:30:55 org.apache.solr.servlet.SolrDispatchF
That won't work -- it would have to identify one of the three cores in your
cores list (say, "live").
-Original Message-
From: David Sauve [mailto:dnsa...@gmail.com]
Sent: Tuesday, August 16, 2011 3:29 PM
To: solr-user@lucene.apache.org
Subject: Re: Unable to get multicore working
I tri
Nope. Only thing in the log:
1 [main] INFO org.mortbay.log - Logging to
org.slf4j.impl.SimpleLogger(org.mortbay.log) via org.mortbay.log.Slf4jLog
173 [main] INFO org.mortbay.log - Redirecting stderr/stdout to
/var/log/jetty/2011_08_16.stderrout.log
On Tuesday, 16 August, 2011 at 1:45 PM, Ale
While I agree with Grant that we shouldn't engage in a legal discussion, it may
be worthwhile for this thread to share a few dates of when faceted search was
used "in the old times"...
paul
On 16 Aug 2011, at 22:02, LaMaze Johnson wrote:
>
> Grant Ingersoll-2 wrote:
>>
>> I know you mean well and ar
I tried on my own test environment -- pulling out the default core parameter
out, under Solr 3.1
I got exactly your symptom: an error 404.
HTTP ERROR 404
Problem accessing /solr/admin/index.jsp. Reason:
missing core name in path
The log showed:
2011-08-
Whoops: That was Solr 4.0 (which pre-dates 3.1).
I doubt very much that the release matters, though: I expect the behavior would
be the same.
-Original Message-
From: Jaeger, Jay - DOT [mailto:jay.jae...@dot.wi.gov]
Sent: Tuesday, August 16, 2011 4:04 PM
To: solr-user@lucene.apache.org
We had this type of error too.
Now we are using the StreamingUpdateSolrServer with quite a big queue and
2-4 threads, depending on data type:
http://lucene.apache.org/solr/api/org/apache/solr/client/solrj/impl/StreamingUpdateSolrServer.html
And we do not do any intermediate commit. We send only on
I updated my `solr.xml` as follows:
and I'm still seeing the same 404 when I try to view /solr/admin/ or
/solr/live/admin/
That said, the logs are showing a different error now. Excellent! The site
schemas are loading!
Looks like the site schemas have an issue:
"SEVERE: org.apache.s
When you go to /solr what do you see?
On Tue, Aug 16, 2011 at 5:23 PM, David Sauve wrote:
> I updated my `solr.xml` as follow:
>
>
>
>
> dataDir="/home/webteam/preview/data" />
> dataDir="/home/webteam/staging/data" />
> dataDir="/home/webteam/live/data" />
>
>
>
>
> and I'm still seein
"Welcome to Solr" with a link to "Admin". The link returns a 404.
On Tuesday, 16 August, 2011 at 2:30 PM, Donald Organ wrote:
> When you go to /solr what do you see?
>
> On Tue, Aug 16, 2011 at 5:23 PM, David Sauve (mailto:dnsa...@gmail.com)> wrote:
>
> > I updated my `solr.xml` as follow:
> >
That said, the logs are showing a different error now. Excellent! The
site schemas are loading!
Great!
"SEVERE: org.apache.solr.common.SolrException: Unknown fieldtype 'long'
specified on field area_id"
Go have a look at your conf/schema.xml.
Is the following line present??
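(The XML line itself was swallowed by the list archive. For reference, the stock example schema shipped with Solr 3.x defines "long" roughly as follows; treat this as a reconstruction and check your own example schema:)

```xml
<!-- assumed stock definition from the Solr 3.x example schema.xml -->
<fieldType name="long" class="solr.TrieLongField" precisionStep="0"
           omitNorms="true" positionIncrementGap="0"/>
```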
Ok. Fixed that too, now. The schema didn't define "long".
Looks like everything is a-okay, now. Thanks for the help. You guys saved me
from the insane asylum.
On Tuesday, 16 August, 2011 at 2:32 PM, Jaeger, Jay - DOT wrote:
> That said, the logs are showing a different error now. Excellent! T
Hello Karsten.
From the doc you provided, it seems the two are totally different products.
I thought a bit about it, and it seems that the best approach would be to:
1-refactor our app and add an abstraction layer that will call the IDOL ACI
API.
Make sure we have good tests in place.
2-implement
So the way I generate war files now is by running an 'ant dist' in the solr
folder. It generates the war fine and I get a build success, and then I
deploy it to tomcat and once again the logs show it was successful (from the
looks of it). However, when I go to 'myip:8080/solr/admin' I get an HTTP
FWIW, we have some custom classes on top of solr as well. The way we do
it is using the following ant target:
...
Seems to work fine...basically automates what you have described in your
second paragraph, but allows us to keep ou
Interesting. I can use this as an option and create a custom 'war' target if
need be but I'd like to avoid this. I'd rather do a full build from the
source code I have checked out from the SVN. Any reason why 'ant dist'
doesn't produce a good war file?
Hello,
I am using Solr 3.3. I have been following instructions at
https://svn.apache.org/repos/asf/lucene/dev/tags/lucene_solr_3_3/solr/contrib/uima/README.txt
My setup looks like the following.
solr lib directory contains the following jars
apache-solr-uima-3.3.0.jar
commons-digester-2.0.jar
u
What have you tried already? In particular, have you looked at
http://wiki.apache.org/solr/SolrCloud
Best
Erick
On Tue, Aug 16, 2011 at 2:22 PM, Sharath Jagannath
wrote:
> Hey,
>
> How could I connect my solr server with external zookeeper?
>
> Thanks,
> Sharath
>
Naveen:
See below:
*NRT with Apache Solr 3.3 and RankingAlgorithm does not need a commit for a
document to become searchable*. Any document that you add through update
becomes immediately searchable. So no need to commit from within your
update client code. Since there is no commit, the cache does
Why don't you use fields for each size? You could update your inventory only in
the event of a size becoming available or unavailable. That would remove a
lot of the load in inventory updates.
Another way is to treat each SKU/inventory pair as a document.
2011/8/16 Jaeger, Jay - DOT
> Not particularl
Hi,
I have this problem: we have about 100k persons with date of birth indexed
by Solr. Now we need to find persons with a birth anniversary for any input
date. Exactly 1, 5 and 10 years are necessary.
For example:
Input date: 17.8.2011
Required output:
1 year: persons with date of birth 17.8.2010, 17.
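One way to build the query dates on the client (a sketch; the `dob` field name and the date format are assumptions — if dob is a real Solr date field you would use the full ISO timestamp instead):

```java
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;
import java.util.List;
import java.util.stream.Collectors;

public class AnniversaryQuery {
    // Build an OR query matching dates exactly 1, 5 and 10 years before the
    // input date; field name "dob" is hypothetical.
    public static String buildQuery(LocalDate input, List<Integer> years) {
        DateTimeFormatter fmt = DateTimeFormatter.ofPattern("yyyy-MM-dd");
        return "dob:(" + years.stream()
                .map(y -> "\"" + input.minusYears(y).format(fmt) + "\"")
                .collect(Collectors.joining(" OR ")) + ")";
    }

    public static void main(String[] args) {
        System.out.println(buildQuery(LocalDate.of(2011, 8, 17), List.of(1, 5, 10)));
        // → dob:("2010-08-17" OR "2006-08-17" OR "2001-08-17")
    }
}
```

Note that `minusYears` handles Feb 29 by falling back to Feb 28 in non-leap years, which may or may not be the anniversary semantics you want.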
Hi All
I have this requirement of indexing and searching files (txt, doc, pdf) on my
disk using the Solr search which I have installed.
I am unable to find a relevant tutorial for the same; I would be thankful if
any one of you could help me out with the specific steps required.
Thanks and