Hi,
Is $subject available??
Or do I need to make HTTP Get calls?
--
Regards,
Tharindu
Hi,
I don't run totally OOM (no OOM exceptions in the log), but I constantly
garbage collect. While not collecting, the Solr master handles the updates
pretty well.
Every insert is unique, so I don't have any deletes or optimizes, and all
queries are handled by the single slave instance. Is there a way t
Hi everyone,
I'm a newbie at this and I can't figure out how to do this after going
through http://wiki.apache.org/solr/CoreAdmin?
Any sample code would help a lot.
Thanks in advance.
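A minimal sketch of driving CoreAdmin from SolrJ, assuming a multicore setup (the URL, core name, and instance directory below are placeholders):

import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
import org.apache.solr.client.solrj.request.CoreAdminRequest;

public class CoreAdminExample {
    public static void main(String[] args) throws Exception {
        // Point at the Solr root (not a specific core) for core admin calls.
        SolrServer server = new CommonsHttpSolrServer("http://localhost:8983/solr");

        // Create a new core; "core1" and its instance directory are hypothetical.
        CoreAdminRequest.createCore("core1", "core1", server);

        // Reload it after a config change, then check the status of all cores.
        CoreAdminRequest.reloadCore("core1", server);
        System.out.println(CoreAdminRequest.getStatus(null, server));
    }
}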
--
Regards,
Tharindu
Ahmet,
I got it working to an extent.
Now:
SolrQuery query = new SolrQuery();
query.setQueryType("dismax");
query.setQuery( "kitten");
query.setParam("qf", "title");
QueryResponse rsp = server.query( query );
List<ItemBean> beans = rsp.getBeans(ItemBean.class); // ItemBean is a placeholder for your @Field-annotated result bean
Hi,
I am using the full import option with the data-config file as mentioned
below
On running the full-import option I am getting the error mentioned below. I
had already included the dataimport.properties file in my conf directory. Help
me to get
On Fri, Oct 15, 2010 at 7:42 AM, swapnil dubey wrote:
> Hi,
>
> I am using the full import option with the data-config file as mentioned
> below
>
> <dataSource ... url="jdbc:mysql:///xxx" user="xxx" password="xx" />
> [rest of the data-config.xml markup stripped by the mail archive]
>
> on running the full-imp
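The data-config markup above was stripped by the mail archive; a JDBC data-config.xml for MySQL generally has roughly this shape (the driver, entity, and field names here are placeholders, only the url/user/password values come from the message):

<dataConfig>
  <dataSource type="JdbcDataSource" driver="com.mysql.jdbc.Driver"
              url="jdbc:mysql:///xxx" user="xxx" password="xx" />
  <document>
    <entity name="item" query="SELECT id, name FROM item">
      <field column="id" name="id" />
      <field column="name" name="name" />
    </entity>
  </document>
</dataConfig>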
Hi Savvas,
Thanks!! Was able to search using the directive.
I was using the default example schema packaged with Solr. I added the
following directive for the title field and reindexed the data:
**
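The directive itself was stripped by the mail archive; in the stock example schema a stored, indexed title field declaration looks roughly like:

<field name="title" type="text" indexed="true" stored="true" multiValued="true" />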
Regards,
Subhash Bhushan.
On Fri, Oct 8, 2010 at 2:09 PM, Savvas-Andreas Moysidis <
savvas.andreas.moysi...@
Why don't you simply index the source content which you used to build index2
into index1, i.e. have your "tool" index to both? You won't save anything on
trying to extract that content from an existing index. But of course, you COULD
write yourself a tool which extracts all stored fields for all
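A rough sketch of such a copy tool in SolrJ, assuming every field you need is stored and both cores are reachable over HTTP (the URLs and page size are placeholders):

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
import org.apache.solr.common.SolrDocument;
import org.apache.solr.common.SolrDocumentList;
import org.apache.solr.common.SolrInputDocument;

public class CopyStoredFields {
    public static void main(String[] args) throws Exception {
        SolrServer source = new CommonsHttpSolrServer("http://localhost:8983/solr/index2");
        SolrServer target = new CommonsHttpSolrServer("http://localhost:8983/solr/index1");

        SolrQuery q = new SolrQuery("*:*");
        q.setRows(500);                          // page size, tune as needed
        int start = 0;
        while (true) {
            q.setStart(start);
            SolrDocumentList page = source.query(q).getResults();
            if (page.isEmpty()) {
                break;
            }
            for (SolrDocument doc : page) {
                // Copy every stored field into a new input document.
                SolrInputDocument in = new SolrInputDocument();
                for (String field : doc.getFieldNames()) {
                    in.addField(field, doc.getFieldValue(field));
                }
                target.add(in);
            }
            start += page.size();
        }
        target.commit();
    }
}

As noted, this only recovers stored fields; indexed-only content cannot be rebuilt this way.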
Hi,
we are updating our documents (that represent products in our shop) when a
dealer modifies them, by calling
SolrServer.add(SolrInputDocument) with the updated document.
My understanding is that there is no other way of updating an existing
document.
However, we also use a term query to a
I've got an existing Spring Solr SolrJ application that indexes a mixture of
documents. It seems to have been working fine now for a couple of weeks but
today I've just started getting an exception when processing a certain pdf
file.
The exception is :
ERROR: org.apache.solr.core.SolrCore - org.
Which fields are modified when the document is updated/replaced?
Are there any differences in the content of the fields that you are using
for the AutoSuggest?
Have you changed your schema.xml file recently? If you have, then there may
have been changes in the way these fields are analyzed and bro
Hi,
In an online bookstore project I'm working on, most frontend widgets are search
driven. Most often they query with some filters and a sort order, such as
availabledate desc or simply by score.
However, to allow editorial control, some widgets will display a fixed list of
books, defined as
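For the search-driven widgets, a filtered, sorted query of the kind described might look like this in SolrJ (the filter field is hypothetical; availabledate comes from the message above):

SolrQuery q = new SolrQuery("*:*");
q.addFilterQuery("instock:true");                       // hypothetical widget filter
q.addSortField("availabledate", SolrQuery.ORDER.desc);  // or omit the sort to rank by score
q.setRows(10);
QueryResponse rsp = server.query(q);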
Thanks for the answer.
Which fields are modified when the document is updated/replaced?
Only one field was changed, but it was not the one the auto-suggest term
is coming from.
Are there any differences in the content of the fields that you are using
for the AutoSuggest?
No
Have yo
On Thu, Oct 14, 2010 at 4:08 AM, Shawn Heisey wrote:
> If you are using the DataImportHandler, you will not be able to search new
> data until the full-import or delta-import is complete and the update is
> committed. When I do a full reindex, it takes about 5 hours, and until it
> is finished,
At the Lucene Revolution conference I asked about efficiently building a filter
query from an external list of Solr unique ids.
Some use cases I can think of are:
1) personal sub-collections (in our case a user can create a small subset
of our 6.5 million doc collection and then run filter
Definitely interested in this.
The naive obvious approach would be just putting all the IDs in the query,
like fq=(id:1 OR id:2 OR ...), or making it another clause in the 'q'.
Can you outline what's wrong with this approach, to make it clearer what's
needed in a solution?
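For concreteness, a sketch of that naive construction in SolrJ (the ids are placeholders); the replies further down explain why it stops scaling:

List<String> ids = Arrays.asList("1", "2", "3");   // the user's sub-collection (placeholder ids)
StringBuilder fq = new StringBuilder("id:(");
for (int i = 0; i < ids.size(); i++) {
    if (i > 0) {
        fq.append(" OR ");
    }
    fq.append(ids.get(i));
}
fq.append(")");

SolrQuery query = new SolrQuery("kitten");
query.addFilterQuery(fq.toString());               // one boolean clause per id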
On Fri, Oct 15, 2010 at 11:49 AM, Burton-West, Tom wrote:
> At the Lucene Revolution conference I asked about efficiently building a
> filter query from an external list of Solr unique ids.
Yeah, I've thought about a special query parser and query to deal with
this (relatively) efficiently, both
On Mon, Oct 11, 2010 at 07:17:43PM +0100, me said:
> It was just an idea though and I was hoping that there would be a
> simpler more orthodox way of doing it.
In the end, for anyone who cares, we used dynamic fields.
There are a lot of them but we haven't seen performance impacted that
badly s
Hi,
answering my own question(s).
Result grouping could be the solution as I explained here:
https://issues.apache.org/jira/browse/SOLR-385
> http://www.cs.cmu.edu/~ddash/papers/facets-cikm.pdf (the file is dated to Aug
> 2008)
Yonik implemented this here:
https://issues.apache.org/jira/browse
This is actually known behavior. The problem is that when you update
a document, it's deleted and re-added, but the original is marked as
deleted. However, the terms aren't touched; both the original and the new
document's terms are counted. It'd be hard, very hard, to remove
the terms from the inv
The main problem I've encountered with the "lots of OR clauses" approach is
that you eventually hit the limit on Boolean clauses and the whole query fails.
You can keep raising the limit through the Solr configuration, but there's
still a ceiling eventually.
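For reference, that limit is maxBooleanClauses in the <query> section of solrconfig.xml (1024 by default); raising it looks like:

<!-- solrconfig.xml, inside <query>: raise the BooleanQuery clause limit -->
<maxBooleanClauses>4096</maxBooleanClauses>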
- Demian
> -Original Message--
Hi Jonathan,
The advantages of the obvious approach you outline are that it is simple, it
fits into the existing Solr model, and it doesn't require any customization or
modification to Solr/Lucene Java code. Unfortunately, it does not scale well.
We originally tried just what you suggest for our
Faceting blows up when the field has no data. And this seems to be random.
Sometimes it will work even with no data, other times not. Sometimes the
error goes away if the field is set to multiValued=true (even though it's
one value every time), other times it doesn't. In all cases setting
facet.met
This is https://issues.apache.org/jira/browse/SOLR-2142
I'll look into it soon.
-Yonik
http://www.lucidimagination.com
On Fri, Oct 15, 2010 at 3:12 PM, Pradeep Singh wrote:
> Faceting blows up when the field has no data. And this seems to be random.
> Sometimes it will work even with no data, o
Thanks for the quick response! =o)
We will go with that approach.
On Thu, Oct 14, 2010 at 7:19 PM, Allistair Crossley wrote:
> I would not cross-reference Solr results with your database to merge unless
> you want to spank your database. Nor would I load Solr with all your data.
> What I have f
You can replace query.setQueryType("dismax") with query.set("defType",
"dismax");
Also don't forget to request the title field with the fl parameter.
query.addField("title");
: Thanks. But do you have any suggestion or work-around to deal with it?
Posted in SOLR-2140
...the key is to make sure Solr knows "score" is not multiValued
-Hoss
: I updated my Solr trunk to revision 1004527. When I go for compiling
: the trunk with ant I get so many warnings, but the build is successful. The
Most of these warnings are legitimate; the problems have always been
there, but recently the Lucene build file was updated to warn about th
: So, regarding DST, do you put everything in GMT, and make adjustments
: in the 'search for/between' date/time values before the query for
: both DST and TZ?
The client adding docs is the only one that knows what TZ it's in when it
formats the docs to add them, and the client issuing the q
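A minimal sketch of that client-side conversion, assuming plain java.util dates (Solr date fields expect ISO-8601 in UTC):

import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.TimeZone;

// Render a local timestamp in the UTC form Solr date fields expect.
SimpleDateFormat fmt = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss'Z'");
fmt.setTimeZone(TimeZone.getTimeZone("UTC"));
String solrDate = fmt.format(new Date());   // e.g. 2010-10-15T18:30:00Z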
: I have a question: is it possible to perform a phrase search with wildcards
: in Solr/Lucene? I have two queries, and both have exactly the same results. One is
: +Contents:"change market"
:
: and the other is
: +Contents:"change* market"
:
: but I think the second should match "changes market" as w
: Anyone knows useful method to disable or prohibit the per-field override
: features for the search components? If not, where to start to make it
: configurable via solrconfig and attempt to come up with a working patch?
If your goal is to prevent *clients* from specifying these (while you're
Thanks Yonik,
Is this something you might have time to throw together, or an outline of what
needs to be thrown together?
Is this something that should be asked on the developer's list or discussed in
SOLR-1715, or does it make the most sense to keep the discussion in this thread?
Tom
-Orig
Hello all,
I am using SOLR-1.4.1 with the DataImportHandler, and I am trying to follow
the advice from
http://www.mail-archive.com/solr-user@lucene.apache.org/msg11887.html about
converting date fields to SortableLong fields for better memory efficiency.
However, whenever I try to do this using th
We're doing what was recommended. Nice to hear we're on the right path.
Yeah Postgres!
Yeah Solr/Lucene!
Dennis Gearon
Signature Warning
It is always a good idea to learn from your own mistakes. It is usually a
better idea to learn from others’ mistakes, so you do not have to m
Please add a JIRA issue requesting this. A bunch of things are not
supported for functions: returning as a field value, for example.
On Thu, Oct 14, 2010 at 8:31 AM, Tanguy Moal wrote:
> Dear solr-user folks,
>
> I would like to use the stats module to perform very basic statistics
> (mean, min a
: Hoss mentioned a couple of ideas:
: 1) sub-classing query parser
: 2) Having the app query a database and somehow passing something
: to Solr or lucene for the filter query
The approach I was referring to is something one of my coworkers did a
while back (if he's still lur