You send the two strings separately with the same field name. Solr will
aggregate them, and each will be stored as a separate value in the
multivalued field.
On Fri, Aug 6, 2010 at 6:27 PM, zMk Bnc wrote:
>
> Hi everybody,
> I am trying to add a multivalued content, so an item can belong to class A
> and class B at the same time.
> On sc
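Sending repeated values under one field name might look like this (the id and field names are assumptions for illustration, not from the original message):

```xml
<add>
  <doc>
    <field name="id">xx</field>
    <!-- repeating the field name yields two separate values in a multivalued field -->
    <field name="class">A</field>
    <field name="class">B</field>
  </doc>
</add>
```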
Use OR between multiple ranges.
On Fri, Aug 6, 2010 at 8:52 AM, Thomas Joiner wrote:
> This will work for a single range. However, I may need to support multiple
> ranges, is there a way to do that?
>
> On Fri, Aug 6, 2010 at 10:49 AM, Jan Høydahl / Cominvent <
> jan@cominvent.com> wrote:
>
: The other would be to somehow control the scores of each id. So a document
: with 2 ids matching should be worth more than the document with only 1 id
: matching (this is how it works now), but a document with 7 ids matching
: shouldn't be worth more, or at least not a lot more, than a document t
That's fine, I'm OK with the sub-queries; the additional overhead is to be
expected and is how I would have thought it would work.
I guess my question is: does it work that way at all, or am I misinterpreting
something? Have others successfully imported dynamic multivalued fields in a
child entity
: lastModified:[ms(NOW/DAY-1DAY) TO ms()] AND ... regular query ...
:
: This however doesn't work. If I use the following:
...
: lastModified:[128081160 TO ms()] AND ... regular query ...
:
: I do get results.
Are you sure that last example works? It shouldn't.
If lastModif
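For reference, a date-math range on a date field would normally be written with NOW at the endpoints rather than the ms() function, e.g. (sketch, assuming lastModified is a Solr date field):

```
lastModified:[NOW/DAY-1DAY TO NOW] AND ... regular query ...
```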
: This is a background as to what I am trying to achieve. I want to be able
: to perform a search across numeric index ranges and get the results in
: logical ordering instead of a lexicographic ordering using dspace. Currently
As I tried to explain before: I have no idea what 'dspace' is...
:
: I want to add a plugin class to Solr which can filter the results
: based on certain criteria. I have an array which has the solr document unique
: key as the index, and the value which will be one or zero. If it is zero I
: want to filter it from the result set. This filtering should happ
: Once I want to create a large index, can I split the index on different
: nodes and then merge all the indexes to one node? Any further suggestions
: for this case?
Your question is a little vague; w/o more details it's hard to be certain
what you are asking, but it *sounds* like you are talking
: I wonder how do "NOT" queries work. Is it a pass on the result set and
: filtering out the "NOT" property or something like that?
It depends on what you mean by a "not" query.
If you mean at a low level: a Query object that is a BooleanQuery
consisting purely of Clauses that have the MUST_NOT
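At the query-syntax level, a pure "not" is usually expressed by subtracting from the full document set, e.g. (field name hypothetical):

```
q=*:* -status:deleted
```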
: I've so far observed this in relation to the strings "0", "0.1", "0.2"
: and "0.4" in indexed content.
Highlighting is an odd duck, and in the absence of more details on what
exactly your requests look like, I suspect that what's happening is that
your final query (i.e.: the query generated by
Hi everybody,
I am trying to add a multivalued content, so an item can belong to class A and
class B at the same time.
On schema.xml I have a definition that goes:
And then a field:
When adding an element "xx", for example, I am sendin
That's probably the most efficient way to do it... I believe the line you
are referring to allows you to have sub-entities which, in the RDBMS, would
execute a separate query for each parent given a primary key. The downside
to this, though, is that for each parent you will be executing N separate
quer
Thanks, this helps a great deal, and I may be able to use this method.
Is this how DIH is intended to be used? The multi values should be returned
in 1 row then manipulated by a transformer? This is fine, but is just
unclear from the documentation. I was under the assumption that multiple
rows re
For multiple value fields using the DIH, i use group_concat with the
regextransformer's splitby:
ex:
FROM professors
WHERE professors.university_guid = '${universities.guid}'
"
transformer="RegexTransformer">
hope that's helpful.
@tommychheng
Progra
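The markup of that example was stripped by the archive; a hedged reconstruction of what such a data-config entity might look like (the entity name, SELECT list, and splitBy delimiter are guesses — only the FROM/WHERE clauses and RegexTransformer come from the original):

```xml
<entity name="professor" transformer="RegexTransformer"
        query="SELECT GROUP_CONCAT(name) AS professor
               FROM professors
               WHERE professors.university_guid = '${universities.guid}'">
  <!-- group_concat packs all values into one row; splitBy fans them back out -->
  <field column="professor" splitBy=","/>
</entity>
```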
: I am running a Solr 1.4 instance on FreeBSD that generates large log
: files in very short periods. I used /etc/newsyslog to configure log file
: rotation, however once the log file is rotated then Solr doesn't write
: logs to the new file. I'm wondering if there is a way to let Solr know
:
: We have an index around 25-30G w/ 1 master and 5 slaves. We perform
: replication every 30 mins. During replication the disk I/O obviously shoots up
: on the slaves to the point where all requests routed to that slave take a
: really long time... sometimes to the point of timing out.
:
: Is the
: I have a requirement where I want to sum up the scores of the faceted
: fields. This will decide the relevancy for us. Is there a way to do it on
: a facet field? Basically instead of giving the count of records for a facet
: field I would like to have the total sum of scores for those records.
See below:
On Fri, Aug 6, 2010 at 9:00 AM, PeterKerk wrote:
>
> Ah, I'm glad it does, makes me feel a bit less stupid ;)
>
> So to summarize and see if I understand it now:
> - the analyzers allow for many different ways to index a field, these
> analyzers are placed in a chain
>
Minor terminol
I'm having a difficult time understanding how multivalued fields work with
the DataImportHandler when the source is an RDBMS. I've read the following
from the wiki:
--
What is a row?
A row in DataImportHandler is a Map (Map<String, Object>). In the map, the
key is the name of the field and the value c
: Subject: Is it possible to get keyword/match's position?
:
: According to SO:
:
http://stackoverflow.com/questions/1557616/retrieving-per-keyword-field-match-position-in-lucene-solr-possible
:
: It is not possible, but it is one year ago, is it still true for now?
Not at a high level with any
Another way is to use the DisMax parser, and give it a &qf=field1 field2 field3...
parameter, and it will automatically search in all fields specified. It is more
powerful than having one default field, and saves that disk space. But you
sacrifice some extra resources during querying.
--
Jan Høydah
> My solr doesn't have any cores defined. Does Solr have a
> default core
> running all the time? In order to use SolrQueryRequest and
> SolrQueryResponse
> do we have to define any other classes?
I didn't understand your questions completely. But with the piece of code
given in one of previ
forgot to mention:
1. yes, I upgraded to a version that allows sorting by Functions (thx Grant
for the work done on this feature, very cool)
2. when I try to sort by strdist, it doesn't seem to do any sorting; I get
the same results if I sort asc or desc, if I change the static string value,
if I
When I tried, I also noticed that I am unable to sort by the strdist function:
http://localhost:8080/solr/select?q=*:*&sort=strdist("seattle",city,edit)%20desc
Am I using the strdist incorrectly?
The version of Solr I am using is $Id: CHANGES.txt 903398 2010-01-26
20:21:09Z hossman $
I know it isn't
Ok. I got the RequestHandlerBase.
My solr doesn't have any cores defined. Does Solr have a default core
running all the time? In order to use SolrQueryRequest and SolrQueryResponse
do we have to define any other classes?
Thanks for the help in advance.
--
View this message in context:
http:
In your schema.xml there is a field called
content
it may be something other than 'content'. This field is the one searched if
you don't specify one in the query.
You can explicitly put something there, or you can have a <copyField>
directive in your schema to copy ap_* fields to the default
sear
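The stripped markup above probably referred to something like the following schema.xml directives (the field name and wildcard pattern come from the message; the exact form is an assumption):

```xml
<defaultSearchField>content</defaultSearchField>
<copyField source="ap_*" dest="content"/>
```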
Hello there,
I am pretty much new to SOLR and my question is about querying SOLR.
Beginning from what I am doing:
I have updated the index with the following SOLR document:
name-2
Moiz
Bhukhiya
I know how to search for a particular field like
q=ap_first
Yes. First of all you should use a Solr PHP client [1] to send the
changes you need.
A better approach is to add hooks to your application that will update
the Solr index as needed. By doing this you improve the efficiency of
the process a lot. It doesn't really make sense to re-index all the
cont
Looking at the code for
org.apache.solr.client.solrj.request.CoreAdminRequest.MergeIndexes it
looks like it should use params.add instead of .set:
if (indexDirs != null) {
  for (String indexDir : indexDirs) {
    params.set(CoreAdminParams.INDEX_DIR, indexDir);
  }
}
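The fix described above would simply swap .set for .add so each directory is appended rather than overwritten (sketch of the corrected fragment, not a verified patch):

```java
if (indexDirs != null) {
  for (String indexDir : indexDirs) {
    // add() appends another value; set() would keep only the last indexDir
    params.add(CoreAdminParams.INDEX_DIR, indexDir);
  }
}
```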
Hi Andrei,
yes that's what I meant: the query result should reflect the changes that I or the
script made in cat_978.xml, which is not happening right now.
I already committed after cat_978.xml changed, but the old xml file
still shows up in the query result; in other words, the ID: 618436123 is still there.
I tri
> public class DummyRequestHandler extends RequestHandlerBase
Is the "RequestHandlerBase" your own class?
--
View this message in context:
http://lucene.472066.n3.nabble.com/refreshing-synonyms-txt-or-other-configs-tp708323p1030886.html
Sent from the Solr - User mailing list archive at Nabble
This will work for a single range. However, I may need to support multiple
ranges, is there a way to do that?
On Fri, Aug 6, 2010 at 10:49 AM, Jan Høydahl / Cominvent <
jan@cominvent.com> wrote:
> Your use case can be solved by splitting the range into two int's:
>
> Document: {title: My doc
Your use case can be solved by splitting the range into two int's:
Document: {title: My document, from: 8000, to: 9000}
Query: q=title:"My" AND (from:[* TO 8500] AND to:[8500 TO *])
--
Jan Høydahl, search solution architect
Cominvent AS - www.cominvent.com
Training in Europe - www.solrtraining.co
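For completeness, the from/to fields in that scheme might be declared as trie ints in schema.xml (the type name tint is an assumption based on the example schema):

```xml
<field name="from" type="tint" indexed="true" stored="true"/>
<field name="to"   type="tint" indexed="true" stored="true"/>
```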
Hello,
I'm trying to make a change to the DSpace database, however when I do,
Solr stops working correctly. How do I update Solr so it will work with
the new database layout (I've added a new column)?
Thank you for your help,
- Paul Brindley
What you are missing is a final
server.optimize();
Deleting a document will only mark it as deleted in the index until an
optimize. If disk space is a real problem in your case because you e.g. update
all docs in the index frequently, you can trigger an optimize(), say nightly.
--
Jan Høydahl,
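A nightly cleanup along those lines might look like this in SolrJ (sketch; server is assumed to be an initialized CommonsHttpSolrServer, and the delete query is hypothetical):

```java
server.deleteByQuery("expired:true"); // marks matching docs as deleted
server.commit();                      // makes the deletes visible to searchers
server.optimize();                    // merges segments, reclaiming disk space
```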
Relatively new to solr, and I'm having trouble with indexing some
fields coming out of the solr cell extraction handler.
First question - what does the extraction handler do with text? For
example, if I throw it an Excel file, what am I going to get back as
input to Solr processing? Is anything do
I need to have a field that supports ranges...for instance, you specify a
range of 8000 to 9000 and if you search for 8500, it will hit. However,
when googling, I really couldn't find any resources on how to create your
own field type in Solr.
But from what I was able to find, the AbstractSubType
Hello.
I am deleting old data from solr this way:
String url = "http://192.168.5.138:8080/apache-solr-1.4.0/";
CommonsHttpSolrServer server = new CommonsHttpSolrServer(url);
server.setDefaultMaxConnectionsPerHost(200);
server.setAllo
There is a real and actual class named "SolrDocument". It is a simpler
object than Lucene's "Document" class because in Solr the details about
the field types (stored, indexed, etc...) are handled by the schema, and
are not distinct per Field instance.
Chris Hostetter-3 wrote:
>
>
okay th
I thought that field collapsing was already 'ready for prime time', just not yet
integrated into the core?
Dennis Gearon
Signature Warning
EARTH has a Right To Life,
otherwise we all die.
Read 'Hot, Flat, and Crowded'
Laugh at http://www.yert.com/film.php
--- On Thu, 8/5/10
Hi all,
I'm trying to set different autocommit settings to 2 separate request
handlers...I would like a requesthandler to use an update handler and a
second requesthandler use another update handler...
can I have more than one update handler in the same solrconfig?
how can I configure a request
Check out slides 36-38 in this presentation for some hint on a possible
solution:
http://www.slideshare.net/janhoy/migrating-fast-to-solr-jan-hydahl-cominvent-as-euro-con
--
Jan Høydahl, search solution architect
Cominvent AS - www.cominvent.com
Training in Europe - www.solrtraining.com
On 7. ju
On 8/4/2010 11:11 PM, jayendra patil wrote:
ContentStreamUpdateRequest seems to read the file contents and transfer it
over http, which slows down the indexing.
Try Using StreamingUpdateSolrServer with stream.file param @
http://wiki.apache.org/solr/SolrPerformanceFactors#Embedded_vs_HTTP_Post
I think it's better to ask JTeam for help as they suggest on the download page.
http://www.jteam.nl/contact.html
On Fri, Aug 6, 2010 at 4:18 PM, Robert Neve wrote:
> Hi,
>
> I'm trying to use jteam's spatial plugin for solr
> (http://www.jteam.nl/products/spatialsolrplugin.html) which is based
Hi,
I'm trying to use jteam's spatial plugin for solr
(http://www.jteam.nl/products/spatialsolrplugin.html) which is based on
SOLR-773 but I can't seem to get it to work. If I do a normal query for "bil" I
get 3 results back from solr. When I do a spatial search within 1000 km I only
get 1 bac
Ah, I'm glad it does, makes me feel a bit less stupid ;)
So to summarize and see if I understand it now:
- the analyzers allow for many different ways to index a field, these
analyzers are placed in a chain
- when a field is indexed it can be searched
- a field could also be stored as-is, but I w
At first glance, I see no difference between the 2 documents.
Perhaps you can illustrate which fields are not in the result set that you
want to be there?
Also, use the 'fl' param to describe which fields should be output in your
results.
Of course, you have to first make sure the fields you want
As I understand it, you expect that when re-indexing the cat_978.xml file
the document with the ID: 618436123 should disappear. It doesn't work
that way.
You need to do an explicit document delete request. The Solr index
is not recreated from scratch every time you post a new .xml
file.
Also r
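An explicit delete for that document could be posted to the /update handler like this (standard Solr update XML; the id is the one from the message):

```xml
<delete><id>618436123</id></delete>
<commit/>
```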
I have implemented a solrj client for querying index data from a database. The
search result is in text but with SolrDocument[{description= which is a
field in xml. How can I parse this out? Thanks.
Hando
--
View this message in context:
http://lucene.472066.n3.nabble.com/Parsing-solrj-results-tp102
Did you commit your changes?
On 6 Aug 2010, at 04:52, twojah wrote:
>
> hi everyone,
> I run the query from the browser:
> http://172.16.17.126:8983/search/select/?q=AUC_CAT:978
>
> the query is based on cat_978.xml which was produced by my PHP script
> and I got the correct result like this: