Hello Paul,
thank you for your reply.
The UPDATE in fact works fine: I only had to update the CREATION_TIME on the
DB :-)
Regarding the deletedPkQuery, I understand it has to return the primary keys
that should be removed from the index (because they have been removed from
the DB) but I don't ha
On Wed, Mar 18, 2009 at 3:15 PM, dabboo wrote:
>
> Hi,
>
> I am creating indexes in Solr and facing an unusual issue.
>
> I am creating 5 indexes and the xml file of the 4th index is malformed. So, while
> creating indexes it properly submits indexes #1, 2 & 3 and throws an exception
> after submission of ind
But if I already have some indexes in the index folder then these old indexes
will also get deleted. Is there any way to roll back the operation?
Shalin Shekhar Mangar wrote:
>
> On Wed, Mar 18, 2009 at 3:15 PM, dabboo wrote:
>
>>
>> Hi,
>>
>> I am creating indexes in Solr and facing an unu
Hi
I have a little problem with optimization: it is very useful, but only once
per day, otherwise replication takes ages to bring back the index hard
link.
So my cron runs every 30 minutes:
/solr/user/dataimport?command=delta-import&optimize=false&commit=false
Otherwise I have a cron for optimizing ever
Many thanks for your explanation. That really helped me a
lot in understanding DisMax - and finally I realized that
DisMax is not at all what I need. Actually I do not want
results where "blue" is in one field and "tooth" in another
(imagine you search for a notebook with blue tooth and get
so
Hello Paul,
thank you for your feedback. I will ask to add an expiration date to the DB
and run a process that updates the index accordingly.
Cheers,
Giovanni
On 3/18/09, Noble Paul നോബിള് नोब्ळ् wrote:
>
> it is not possible to query details from Solr and find out deleted
> items using DIH
>
Hello
I have a solr field:-
which an unrelated query reveals is populated with:-
file:///Volumes/spare/ts/ford/schema/data/news/fdw2008/jn71796.xml
however when I try and query for that exact document explicitly:-
http://localhost:8080/apache-solr-1.4-dev/select?q=fileAbsolutePath:fil
when executing this code I got in my index the field "includes" with this
value: "? ? ?":
---
String content = "eaiou with circumflexes: êâîôû";
SolrInputDocument doc = new SolrInputDocument();
doc.addField( "id", "123", 1.0f );
doc.addField( "includes", content, 1.0f );
If you're using a recent 1.4-snapshot you should be able to do a
rollback: https://issues.apache.org/jira/browse/SOLR-670
Otherwise, if you have unique IDs in your index, you can just post new
documents over the top of the old ones then commit.
Toby.
On 18 Mar 2009, at 10:19, dabboo wrote:
Hi,
I am creating indexes in Solr and facing an unusual issue.
I am creating 5 indexes and the xml file of the 4th index is malformed. So, while
creating indexes it properly submits indexes #1, 2 & 3 and throws an exception
after submission of index 4.
Now, if I look for indexes #1, 2 & 3, they don't show up,
it is not possible to query details from Solr and find out deleted
items using DIH
you must maintain a deleted rows ids in the db or just flag them as deleted.
--Noble
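Noble's "flag them as deleted" approach plugs into DIH via the deletedPkQuery attribute, which returns the primary keys to remove from the index. A minimal data-config sketch, assuming a hypothetical ITEM table with a DELETED flag column (all table and column names are illustrative):

```xml
<entity name="item" pk="ID"
        query="SELECT ID, NAME FROM ITEM WHERE DELETED = 0"
        deltaQuery="SELECT ID FROM ITEM
                    WHERE LAST_MODIFIED &gt; '${dataimporter.last_index_time}'"
        deletedPkQuery="SELECT ID FROM ITEM WHERE DELETED = 1">
</entity>
```

On delta-import, rows returned by deletedPkQuery are deleted from the index while the deltaQuery handles additions and updates.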
On Wed, Mar 18, 2009 at 2:46 PM, Giovanni De Stefano
wrote:
> Hello Paul,
>
> thank you for your reply.
>
> The UPDATE in fac
Hi all,
I am trying to index words containing special characters like 'Räikkönen'.
Using EmbeddedSolrServer indexing is working fine, but if I use
CommonHttpSolrServer then it is indexing garbage values.
I am using Solr 1.4 and set URIEncoding to UTF-8 in Tomcat. Is this a known
issue or am I doi
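The "garbage values" symptom usually means the request body was decoded with the wrong charset somewhere between client and servlet container. A self-contained sketch of the failure mode (plain Java, no Solr involved), showing how non-ASCII characters degrade to '?' when pushed through a charset that cannot represent them:

```java
import java.nio.charset.StandardCharsets;

public class CharsetDemo {
    // Encode with a charset that cannot represent the characters;
    // every unmappable character is replaced with '?'.
    public static String throughAscii(String s) {
        byte[] bytes = s.getBytes(StandardCharsets.US_ASCII);
        return new String(bytes, StandardCharsets.US_ASCII);
    }

    public static void main(String[] args) {
        System.out.println(throughAscii("Räikkönen")); // prints R?ikk?nen
        System.out.println(throughAscii("êâîôû"));     // prints ?????
    }
}
```

If the indexed value shows '?' runs like this, checking that every hop (HTTP connector, client library, JVM file.encoding) agrees on UTF-8 is the usual fix.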
You'll need to escape the colon with a backslash, e.g.
fileAbsolutePath:file\:///Volumes/spare/ts/ford/schema/data/news/
fdw2008/jn71796.xml
see the lucene query parser syntax page:
http://lucene.apache.org/java/2_3_2/queryparsersyntax.html#Escaping%20Special%20Characters
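For completeness, the escaping can also be done programmatically. A minimal plain-Java sketch modeled on SolrJ's ClientUtils.escapeQueryChars (the exact character set below is an assumption based on the Lucene query syntax page, not a copy of SolrJ's internals):

```java
public class QueryEscape {
    // Characters that are special in the Lucene query syntax
    // (assumption: this approximates the set SolrJ escapes).
    private static final String SPECIAL = "\\+-!():^[]\"{}~*?|&;";

    // Prefix every special character (and whitespace) with a backslash.
    public static String escape(String s) {
        StringBuilder sb = new StringBuilder(s.length());
        for (char c : s.toCharArray()) {
            if (SPECIAL.indexOf(c) >= 0 || Character.isWhitespace(c)) {
                sb.append('\\');
            }
            sb.append(c);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // Colons in the path get escaped, matching the manual fix above.
        System.out.println(escape("fileAbsolutePath:file:///a/b.xml"));
        // prints fileAbsolutePath\:file\:///a/b.xml
    }
}
```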
Toby.
On 1
With SolrJ, you can use ClientUtils.escapeQueryChars(str)
Erik
On Mar 18, 2009, at 7:51 AM, Toby Cole wrote:
You'll need to escape the colon with a backslash, e.g.
fileAbsolutePath:file\:///Volumes/spare/ts/ford/schema/data/news/
fdw2008/jn71796.xml
see the lucene query parser synt
Hmm, I don't think there is currently a solution for this. #1 is not
viable for the reasons you mentioned and #2 is not supported by the
current code. That being said, I think it wouldn't be too hard to for
someone to work up a patch for this. Essentially, we need the ability
to add in p
Hmm -
Have you tested search speed (without optimizing) using a merge factor
of 2? If the speed is acceptable (should be much faster than MF:10), try
a merge factor of 3. Using a merge factor of 2 or 3 and never optimizing
should keep searches relatively fast, but also leave a lot of the index
Thanks Mark, going to try now...
markrmiller wrote:
>
> Hmm -
>
> Have you tested search speed (without optimizing) using a merge factor
> of 2? If the speed is acceptable (should be much faster than MF:10), try
> a merge factor of 3. Using a merge factor of 2 or 3 and never optimizing
> sho
I have the following root entity:
I get results when running the deltaQuery manually, but Solr doesn't import
anything!!!
What am I doing wrong?!
Thanks in advance,
Rui Pereira
Hello all,
here I am with another question :-)
I have to index the content of two different tables on an Oracle DB.
When it comes to only one table, everything is fine: one datasource, one
document, one entity in data-config, one uniqueKey in schema.xml etc. It
works great.
But now I have on th
: Many thanks for your explanation. That really helped me a lot in understanding
: DisMax - and finally I realized that DisMax is not at all what I need.
: Actually I do not want results where "blue" is in one field and "tooth" in
: another (imagine you search for a notebook with blue tooth and ge
To reply to my own message.
The following worked starting from scratch (example):
SolrConfig solrConfig = new SolrConfig(
"D:\\Projects\\FutureTerm\\apache-solr-1.3.0\\futu
Yes, approach #2 will certainly be useful. I'll open an issue.
On Wed, Mar 18, 2009 at 6:20 PM, Grant Ingersoll wrote:
> Hmm, I don't think there is currently a solution for this. #1 is not
> viable for the reasons you mentioned and #2 is not supported by the current
> code. That being said, I
Hi:
Is it easy to do daily incremental index updates in Solr, assuming the
index is around 1G? In terms of giving a document an ID to facilitate
index updates, is using the URL a good way to do so?
Thanks
Victor
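On the ID question: using the URL as the unique ID is a common and workable choice, provided it is indexed as a plain (non-tokenized) string type. A hedged schema.xml sketch (field name is illustrative):

```xml
<field name="id" type="string" indexed="true" stored="true" required="true"/>

<uniqueKey>id</uniqueKey>
```

Re-posting a document with the same URL in this field then replaces the old version on commit.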
Can you isolate this down to just a simple unit test?
On Mar 17, 2009, at 6:52 PM, Comron Sattari wrote:
I've recently upgraded to Solr 1.3 using Lucene 2.4. One of the
reasons I
upgraded was because of the nicer SearchComponent architecture that
let me
add a needed feature to the default re
I can try, for now I just decided to use Lucene's TermsFilter which does the
job perfectly. If I have some spare time I'll put together a unit test to
show the problem.
Thanks.
On Wed, Mar 18, 2009 at 12:23 PM, Grant Ingersoll wrote:
> Can you isolate this down to just a simple unit test?
>
>
>
Victor,
Daily updates (or hourly or more frequent) are not going to be a problem. I
don't follow your question about document ID and using URL.
Otis
--
Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch
- Original Message
> From: "Huang, Zijian(Victor)"
> To: solr-user@luc
Giovanni,
It sounds like you are after a JOIN between two indices a la RDBMS JOIN? It's
not possible with Solr, unless you want to do separate queries and manually
join. If you are talking about merging multiple indices of the same type into
a single index, that's a different story and doabl
I'm attempting to use an XML/HTTP datasource
[http://wiki.apache.org/solr/DataImportHandler#head-13ffe3a5e6ac22f08e063ad3315f5e7dda279bd4]
I went through the RSS example in
apache-solr-1.3.0/example/example-DIH and that all worked for me.
What I am now attempting to do is leverage 'useSolrAddSche
On Thu, Mar 19, 2009 at 1:29 AM, Sam Keen wrote:
>
> What I am now attempting to do is leverage 'useSolrAddSchema="true"' .
> I have a URL that responds with well-formatted Solr add XML (I'm able
> to add it by POSTing). But when I try to add it using
> http://localhost:8983/solr/dataimport?com
Shyam,
I tried using spellcheck.collate=true, but it doesn't return results
with the corrected word. Do I need to make any other settings?
Thanks.
Karthik
-Original Message-
From: Shyamsunder Reddy [mailto:sjh...@yahoo.
Although I'm not answering your question (others have), why are you even
doing this at all with Solr when you could take advantage of Solr's filter
queries (fq param)?
~ David Smiley
Comron Sattari-3 wrote:
>
> I've recently upgraded to Solr 1.3 using Lucene 2.4. One of the reasons I
> upgraded
Hi, Otis:
so does Solr already have some kind of built-in library which can
automatically detect the difference between two sets of crawled
documents and update the index to the newer one?
I mean the document ID in the Solr XML doc format. Inside the Solr wiki,
it tells me that I can update a
Because I need to filter on (possibly) more than 1024 terms and using a
query to do it just wouldn't work.
Comron Sattari
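For reference, the 1024 ceiling is Lucene's BooleanQuery clause limit, which Solr exposes in solrconfig.xml inside the query section. Raising it is an option (the value below is illustrative), though a TermsFilter avoids the per-clause overhead entirely:

```xml
<maxBooleanClauses>4096</maxBooleanClauses>
```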
On Wed, Mar 18, 2009 at 1:30 PM, David Smiley @MITRE.org
wrote:
>
> Although I'm not answering your question (others have), why are you even
> doing this at all with Solr wh
Am 18.03.2009 um 21:27 schrieb Narayanan, Karthikeyan:
Shyam,
I tried using spellcheck.collate=true, it doesn't return
results with correct word. Do I need to make any other settings?.
doesn't work here either
Ingo
--
Ingo Renner
TYPO3 Core Developer, Release Manager TYPO3 4.
Maybe I'm missing something in solrconfig.xml ???
sunnyfr wrote:
>
> Hi
>
> I've a little problem with optimization which is very interesting but
> juste one time per day otherwise replication take ages to bring back index
> hard link.
>
> So my cron is every 30mn :
> /solr/user/dataimport?command
That worked perfectly Shalin. Thanks so much for your help!
sam keen
On Wed, Mar 18, 2009 at 1:15 PM, Shalin Shekhar Mangar
wrote:
> On Thu, Mar 19, 2009 at 1:29 AM, Sam Keen wrote:
>
>>
>> What I am now attempting to do is leverage 'useSolrAddSchema="true"' .
>> I have a URL the responds wit
Hi
I have optimize=true in my log after a commit, but I didn't allow it in my
solrconfig ???
/data/solr/video/bin/snapshooter
/data/solr/video/bin
-c
true
Do you have an idea where it comes from??
Thanks a lot,
Unfortunately, collate doesn't verify that the collated result
actually results in hits. So, it is likely that each term returns
results, but that doesn't mean the collation does. We probably should
add to the SpellCheckComponent to have an option to check to see if
the collation is going
Hi
I am using most recent drupal apachesolr module with solr 1.4 nightly build
* solrconfig.xml ==>
http://cvs.drupal.org/viewvc.py/drupal/contributions/modules/apachesolr/solrconfig.xml?revision=1.1.2.15&view=markup&pathrev=DRUPAL-6--1-0-BETA5
* schema.xml ==>
http://cvs.drupal.org/viewvc.py/dru
Thanks for the responses.
If we used a poll interval of one second (for 1.4), wouldn't we still have to
wait for the replication to finish? In that case, couldn't it take minutes
(depending on index size) to get that data on the slave? Or would there be a
lot less data to pull down because of
I am assuming that you are using a recent version of DIH.
I see some discrepancy in the queries:
SELECT Sub0.SUBID ... is the deltaQuery
and the join is done using
Sub0.SUBID =${dataimporter.delta.SUBID}" in deltaImportQuery
try making the first query as SELECT Sub0.SUBID as SUBID
or making the
it depends on a few things:
1) no. of docs added
2) is the index optimized
3) autowarming
If the no. of docs added are few and the index is not optimized, the
replication will be done in milliseconds (the changed files
will be small). If there is no autoWarming, there should be no delay
in
On Thu, Mar 19, 2009 at 2:14 AM, Huang, Zijian(Victor) <
zijian.hu...@etrade.com> wrote:
>
>I mean the document ID in the Solr XML doc format. Inside the Solr wiki,
> it tells me that I can update a particular doc by its ID if I assigned
> one previously. I am thinking if using the url as the doc
Hi,
I want to use date field with facet query.
This is my query:
q=productPublicationDate_product_dt:[*%20TO%20NOW]&facet=true&facet.field=productPublicationDate_product_dt:[*%20TO%20NOW]&qt=dismaxrequest
This is the exception I am facing after running this query.
-
org.apache.solr.common
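A likely cause: facet.field expects a bare field name, not a range expression. To get a facet count for the date range, facet.query should work instead (sketch, reusing the original field and handler):

```
q=productPublicationDate_product_dt:[*%20TO%20NOW]&facet=true&facet.query=productPublicationDate_product_dt:[*%20TO%20NOW]&qt=dismaxrequest
```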
I have created a field,
The pattern is "_" (Underscore)
When I do field analysis using solr admin, it shows it correctly. Have a
look at attached image. e.g. cric_info
http://www.nabble.com/file/p22594575/field%2Banal