> I don't know in what pattern the user will configure the columns in the
> separate table. I have to read this table to map the Solr fields to these
> columns, so I can't use dynamic fields either, and Transformers also seem
> to be of no use in this case.
>
You don't need to know the column names. You
What's the best way to get to the instance of the DataImportHandler from the
current context?
Thanks
Thanks for the reply.
I don't know in what pattern the user will configure the columns in the
separate table. I have to read this table to map the Solr fields to these
columns, so I can't use dynamic fields either, and Transformers also seem to
be of no use in this case.
Please provide me with any other solution.
Escaping the asterisk with a backslash, i.e. \*rhode, may work.
On Mon, May 17, 2010 at 7:23 AM, Erick Erickson wrote:
> A couple of things:
> 1> try searching with &debugQuery=on attached to your URL, that'll
> give you some clues.
> 2> It's really worthwhile exploring the admin pages for a while, it'll also
> give you a world of information.
On 5/17/2010 2:40 PM, D C wrote:
We have a large index, separated into multiple shards, that consists of
records exported from a database. One requirement is to support near
real-time synchronization with the database. To accomplish this we are
considering creating a "daily" shard where create and update documents
Just to close the loop.
I was fooling around with all the cache settings trying to figure out my
problem, so the filterCache was set as part of the experiments. It did
not cause any memory issue in this case. After the date rounding
adjustment, I re-ran the query with 15 threads with 6000 requests a
Chris,
Just completed the re-run and your date rounding tip saved my day. I now
realize that using "NOW" as a timestamp is a very bad idea for query caching,
as it is never the same value twice. NOW/DAY at least makes a set of facet
query cache entries re-usable for a period of time. It turns out you can help
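For readers finding this thread later, a sketch of what the rounding change looks like in a facet query; the handler name and the "published" field are invented, the point is only that NOW/DAY resolves to the same value all day, so the filter built for the facet query can be reused from the filterCache:

<requestHandler name="/browse" class="solr.SearchHandler">
  <lst name="defaults">
    <str name="facet">true</str>
    <!-- NOW is resolved to the millisecond, so every request builds a new
         filterCache entry: -->
    <!-- <str name="facet.query">published:[NOW-7DAYS TO NOW]</str> -->
    <!-- NOW/DAY stays constant for 24 hours, so the cached filter is reused: -->
    <str name="facet.query">published:[NOW/DAY-7DAYS TO NOW/DAY]</str>
  </lst>
</requestHandler>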
: Cache settings:
:
that's a monster filterCache ... I can easily imagine it causing an OOM if
your heap is only 5G.
: The date rounding suggestion is a very good one, I will need to rerun the test
: and report back on the cache settings. I remember my filterCache hit ratio is
: around 0.7. I did u
Chris,
Thanks for the detailed response. No, I am not using Date Faceting but Facet
Query for the facet display. Here is the full configuration of my "dismax"
query handler:
dismax
explicit
0.01
title text^0.5 domain^0.1 nature^0.1 author
title
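The archive has stripped the XML markup from the configuration above, leaving only the parameter values. A guess at how the handler definition probably looked, assuming the five values are defType, echoParams, tie, qf and pf in that order:

<requestHandler name="dismax" class="solr.SearchHandler">
  <lst name="defaults">
    <str name="defType">dismax</str>
    <str name="echoParams">explicit</str>
    <float name="tie">0.01</float>
    <str name="qf">title text^0.5 domain^0.1 nature^0.1 author</str>
    <str name="pf">title</str>
  </lst>
</requestHandler>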
: fields during indexing. However, my search interface is just a text
: box like Google and I need to take the query and return only those
: documents that match ALL terms in the query and if I am going to take
as mentioned previously in this thread: this is exactly what the dismax
QParser was designed for.
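To make that concrete, a minimal sketch of a dismax handler that requires every query term to match; the handler name and the qf fields are invented, the relevant knob is the mm (minimum-should-match) parameter:

<requestHandler name="allterms" class="solr.SearchHandler">
  <lst name="defaults">
    <str name="defType">dismax</str>
    <str name="qf">title text</str>
    <!-- require 100% of the query terms to match -->
    <str name="mm">100%</str>
  </lst>
</requestHandler>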
I've been investigating Solr on and off as a (or even the) search
solution for my employer's content management solution. One of the
biggest questions in my mind at this point is which version to go
with. In general, 1.4 would seem the obvious choice, as it's the only
released version on that list.
: Wait. If the default op is OR, I thought this query:
:
: (+category:xyz +price:[100 TO *]) -category:xyz
:
: meant "with xyz and range, OR without xyz" because without a plus or
Nope. Regardless of the default op, you've got a BooleanQuery with two
clauses, one of which is negative. The ot
: Subject: Date faceting and memory leaks
First off, just to be clear, you don't seem to be using the "date
faceting" feature; you are using the "Facet Query" feature, your queries
just so happen to be on a date field.
Second: to help people help you, you need to provide all the details.
you
I just found out that if I remove my deletedPkQuery then the import works. Is
it possible that there is some conflict between my delta indexing and my
delta deleting?
Any suggestions?
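For reference, a sketch of how the delta queries and deletedPkQuery usually sit together in data-config.xml; all table, column and connection details here are invented, and note that deletedPkQuery only removes documents from the Solr index, it never deletes rows from the database:

<dataConfig>
  <dataSource driver="com.mysql.jdbc.Driver"
              url="jdbc:mysql://localhost/shop" user="solr" password="secret"/>
  <document>
    <entity name="item" pk="id"
            query="select id, name from item"
            deltaQuery="select id from item
                        where updated_at > '${dataimporter.last_index_time}'"
            deltaImportQuery="select id, name from item
                              where id='${dataimporter.delta.id}'"
            deletedPkQuery="select id from item_deleted
                            where deleted_at > '${dataimporter.last_index_time}'"/>
  </document>
</dataConfig>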
I am looking at SOLR-788, trying to apply it to latest trunk. It looks
like that's going to require some rework, because the included constant
PURPOSE_GET_MLT_RESULTS conflicts with something added later,
PURPOSE_GET_TERMS.
How hard would it be to rework this to apply correctly to trunk? Is
We have a large index, separated into multiple shards, that consists of
records exported from a database. One requirement is to support near
real-time synchronization with the database. To accomplish this we are
considering creating a "daily" shard where create and update documents
(records ne
No, I still have the OOM issue with repeated facet query requests on the date
field. I forgot to mention that I am running a 64-bit IBM 1.5 JVM. I also
tried the Sun 1.6 JVM with and without your GC arguments. The GC pattern is
different but the heap size does not drop as the test goes on. I tested
w
Is there any more information I can post so someone can give me a clue about
what's happening?
Oh, nice.
So I can make a jar file with the query I need and define it in
solrconfig.xml...
I have ~50 million docs, and use the following flags without any issues:
-XX:MaxNewSize=24m -XX:NewSize=24m -XX:+UseParNewGC
-XX:+CMSParallelRemarkEnabled -XX:+UseConcMarkSweepGC
Perhaps try them out?
On May 17, 2010, at 2:47 PM, Ge, Yao (Y.) wrote:
> I do not have any GC specific setting in comm
I do not have any GC-specific settings on the command line. I tried to
force a GC via JConsole at the end of the run, but it didn't
seem to do anything to the heap size.
-Yao
-Original Message-
From: Antonio Lobato [mailto:alob...@symplicity.com]
Sent: Monday, May 17, 2010 2:44 PM
T
What garbage collection settings are you running at the command line when
starting Solr?
On May 17, 2010, at 2:41 PM, Yao wrote:
>
> I have been running load testing using JMeter on a Solr 1.4 index with ~4
> million docs. I notice a steady JVM heap size increase as I iterate over 100
> query terms
I have been running load testing using JMeter on a Solr 1.4 index with ~4
million docs. I notice a steady JVM heap size increase as I iterate over 100
query terms a number of times against the index. The GC does not seem to
reclaim the heap after the test run is completed. It will run into OutOfMemory
Thank you Erik, I will follow this route
Sai Thumuluri
-Original Message-
From: Erik Hatcher [mailto:erik.hatc...@gmail.com]
Sent: Monday, May 17, 2010 10:22 AM
To: solr-user@lucene.apache.org
Subject: Re: Direct hits using Solr
Sai - this seems to be best built into your application t
Yes, you can use them.
But be careful with queries like *ababa (they might blow up).
It also depends on how you are analysing the fields.
Ankit
-Original Message-
From: Robert Naczinski [mailto:robert.naczin...@googlemail.com]
Sent: Monday, May 17, 2010 5:22 AM
To: solr-user@lucene.apach
In our case, we had specific matching that we needed to return, so I can't
really contribute this to the code base, but we did get this working.
Basically, we have a custom request handler. After it receives the search
results, we then send this to our matcher algorithm. We then go through each
> That's what I'm trying! :D
>
> I don't want to do this with another script, because I never know when a
> delta-import is finished, and when it completes, I don't know with which
> result: complete, failed, ?!?!?
If you are updating your index *only* with DIH, after every full/delta import
comm
On 17/05/2010 17:49, Marco Martinez wrote:
No, the equivalent for this will be:
- A: (the lazy fox) *OR* B: (the lazy fox)
- C: (the lazy fox)
Imagine the situation where B does not contain 'the lazy fox': with the AND
you get 0 results, although you do have 'the lazy fox' in A and C.
Marco Mart
No, the equivalent for this will be:
- A: (the lazy fox) *OR* B: (the lazy fox)
- C: (the lazy fox)
Imagine the situation where B does not contain 'the lazy fox': with the AND
you get 0 results, although you do have 'the lazy fox' in A and C.
Marco Martínez Bautista
http://www.paradigmatecnologico.co
On 17/05/2010 16:57, Xavier Schepler wrote:
Hey,
let's say I have:
- a field named A with specific contents
- a field named B with specific contents
- a field named C whose contents come only from A and B, added with copyField.
Are those queries equivalent in terms of performance:
- A: (th
> Hi,
> I want to map my Solr fields using a customized DataImportHandler.
>
> For example, I have a field called
>
> Actually my column names come dynamically from another table; they vary
> from client to client.
> Instead of giving the mapped DB column as 'NAME' I w
> How do I index a URL without indexing its content? Basically our
> requirement is that we have certain search terms for which there needs
> to be a URL that should come right at the top. I tried to use the elevate
> option within Solr, but from what I know I need to have an id of
> the indexed con
Hey,
let's say I have:
- a field named A with specific contents
- a field named B with specific contents
- a field named C whose contents come only from A and B, added with copyField.
Are those queries equivalent in terms of performance:
- A: (the lazy fox) AND B: (the lazy fox)
- C: (the lazy
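For readers following along, a sketch of the schema being described; the "text" field type and the attribute choices are assumptions, the key detail is that a copyField destination fed from several sources should be multiValued:

<field name="A" type="text" indexed="true" stored="true"/>
<field name="B" type="text" indexed="true" stored="true"/>
<!-- C only receives copies, so it can be index-only -->
<field name="C" type="text" indexed="true" stored="false" multiValued="true"/>
<copyField source="A" dest="C"/>
<copyField source="B" dest="C"/>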
I have also thought about autosuggest for our intranet search.
One other solution could be:
put all the search queries into a database and do a lookup not on the terms
indexed by Solr but rather on what has been searched in the past.
We have written a small script that takes the
How is it possible to search with the TermsComponent and shingles for things
like "Driver Callaway"? The same suggestion should come up as when I search
for "Callaway Dri..".
A couple of things:
1> try searching with &debugQuery=on attached to your URL, that'll
give you some clues.
2> It's really worthwhile exploring the admin pages for a while, it'll also
give you a world of information. It takes a while to understand what the
various pages are telling you, but you'll
Sai - this seems to be best built into your application tier above
Solr, such that you have a database of special terms and URL mappings
and simply present them above the results returned from Solr.
Erik
http://www.lucidimagination.com
On May 17, 2010, at 3:11 PM, sai.thumul.
Hi,
I was trying out a clustering example, which worked as described in the
documentation.
Now I want to use the clustering feature in my multicore setup, where I have
my core indexes saved,
so I edited the solrconfig.xml of that core to add the clustering information
(I did make sure that the lib declara
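In case it helps, the usual shape of the clustering wiring inside a core's solrconfig.xml; the lib path is relative to that core's instance directory and is only an assumption about this particular layout:

<!-- pull the clustering contrib and Carrot2 jars onto this core's classpath -->
<lib dir="../../contrib/clustering/lib/" />
<searchComponent name="clustering"
                 enable="${solr.clustering.enabled:false}"
                 class="org.apache.solr.handler.clustering.ClusteringComponent">
  <lst name="engine">
    <str name="name">default</str>
    <str name="carrot.algorithm">org.carrot2.clustering.lingo.LingoClusteringAlgorithm</str>
  </lst>
</searchComponent>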
Hi,
I want to map my Solr fields using a customized DataImportHandler.
For example:
I have a field called
Actually my column names come dynamically from another table; they vary from
client to client.
Instead of giving the mapped DB column as 'NAME' I want to configure this
dynamically
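For context, a static DIH column-to-field mapping, the kind of hard-coded configuration the poster wants to avoid, normally looks something like this (the data source, entity, query and field names are all invented):

<dataConfig>
  <dataSource driver="com.mysql.jdbc.Driver"
              url="jdbc:mysql://localhost/clients" user="solr" password="secret"/>
  <document>
    <entity name="client_data" query="select ID, NAME, CITY from client_data">
      <!-- each database column is mapped to a Solr field by name, statically -->
      <field column="NAME" name="name"/>
      <field column="CITY" name="city"/>
    </entity>
  </document>
</dataConfig>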
How do I index a URL without indexing its content? Basically our requirement
is that we have certain search terms for which there needs to be a URL that
should come right at the top. I tried to use the elevate option within Solr,
but from what I know I need to have an id of the indexed content for
Lucene Revolution Call For Participation - Boston, Massachusetts October 7 & 8,
2010
The first US conference dedicated to Apache Lucene and Solr is coming to
Boston, October 7 & 8, 2010. The conference is sponsored by Lucid Imagination
with additional support from community and other commercia
That's what I'm trying! :D
I don't want to do this with another script, because I never know when a
delta-import is finished, and when it completes, I don't know with which
result: complete, failed, ?!?!?
So I thought DIH could delete the updated IDs in my database =(
I also tried to empty the table li
> Hm, I think I can use "deletedPkQuery" but it doesn't work for me; maybe
> you can help me. Here is my config.
>
> transformer="script:BoostDoc"
> query="select i.id, i.shop_id, i.is_active, i.shop
> ...
>
> deltaImportQuery="select i.id, i.s ...WHERE
> ... AND
> i.id='${dataimporte
Hm, I think I can use "deletedPkQuery" but it doesn't work for me; maybe you
can help me. Here is my config.
So, I only want to delete those IDs which have been updated. This is my
exception:
SEVERE: Delta Import Failed
org.apache.solr.handler.dataimport.DataImportHandlerException: Unable
Hm... no =(
I want to delete from a MySQL database, not from my Solr index.
> For my delta-import, I get the IDs which should be updated from an
> extra table in my database.
>
> ... when DIH has finished the delta-import, the table with
> the IDs needs to be deleted.
>
> Can I put an SQL query in the DIH for that?
deletedPkQuery (an SQL query) is
Any suggestions?
I have thought of having two configurations per server and reloading each one
with the appropriate config file, but I would prefer another solution if
possible.
Thanks,
Marco Martínez Bautista
http://www.paradigmatecnologico.com
Avenida de Europa, 26. Ática 5. 3ª Planta
28224 Pozu
> We need the search engine to return a specific URL for a specific
> search term, and that result is supposed to be the first result (per
> Biz) among the result set.
This part seems like http://wiki.apache.org/solr/QueryElevationComponent
> The URL is an external URL and
> there is no in
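For context, the QueryElevationComponent is driven by an elevate.xml file that pins specific document ids to specific query texts, which is why it needs the target document to exist in the index; a minimal sketch with an invented query and id:

<elevate>
  <query text="annual report">
    <!-- the document with this uniqueKey is forced to the top for that query -->
    <doc id="landing-page-42"/>
  </query>
</elevate>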
Hello.
For my delta-import, I get the IDs which should be updated from an
extra table in my database.
... when DIH has finished the delta-import, the table with
the IDs needs to be deleted.
Can I put an SQL query in the DIH for that? This code should only be
sent to the dat
Hi, is there a way to have Solr return a URL that is not part of the index? We
need the search engine to return a specific URL for a specific search term, and
that result is supposed to be the first result (per Biz) among the result set.
The URL is an external URL and there is no intent to index
http://wiki.apache.org/solr/SolrQuerySyntax
On Mon, May 17, 2010 at 11:44 AM, Robert Naczinski <
robert.naczin...@googlemail.com> wrote:
> How can I do that? In that distributed example I can't use wildcards ;-(
>
> 2010/5/17 Leonardo Menezes :
> > Yes, also you can use '?' for a single character "w
How can I do that? In that distributed example I can't use wildcards ;-(
2010/5/17 Leonardo Menezes :
> Yes, also you can use '?' for a single character "wild card".
>
> On Mon, May 17, 2010 at 11:21 AM, Robert Naczinski <
> robert.naczin...@googlemail.com> wrote:
>
>> Hi,
>>
>> I'm new to Solr. Can
Yes, also you can use '?' for a single character "wild card".
On Mon, May 17, 2010 at 11:21 AM, Robert Naczinski <
robert.naczin...@googlemail.com> wrote:
> Hi,
>
> I'm new to Solr. Can I use wildcards like '*' in my queries?
>
> Thanx,
>
> Robert
>
Hi,
I'm new to Solr. Can I use wildcards like '*' in my queries?
Thanx,
Robert
On 17.05.2010, at 11:04, gwk wrote:
> Hi,
>
> I'm not sure if this applies to your use case but when I was building our
> faceted search (see http://www.mysecondhome.co.uk/search.html) at first I
> wanted to do the same, retrieve the minimum and maximum values but when I did
> the few values
Maybe you would like something like this:
lowest value:
http://localhost:8983/solr/select?q=*:*&rows=1&fl=date&sort=date%20asc
highest value:
http://localhost:8983/solr/select?q=*:*&rows=1&fl=date&sort=date%20desc
Hope this helps,
Péter
- Original Message -
From: "gwk"
To:
Sent:
Hi,
I'm not sure if this applies to your use case, but when I was building
our faceted search (see http://www.mysecondhome.co.uk/search.html) at
first I wanted to do the same, retrieve the minimum and maximum values,
but when I did, the few values that were a lot higher than the others
made it a