Yes, the fdt file is still there. Can I make a new fdx file from the fdt file?
Is there a possibility that during the process of updating and optimizing, the
index will be deleted and then re-generated?
-- Original --
From: "Erick Erickson";
Date: Sat, Mar
3.5 is the latest release - when we talk about 4 we are talking about trunk -
the latest development - it's a major release coming that has been in the works
for a long time now.
On Mar 2, 2012, at 4:18 PM, Mike Austin wrote:
> I've heard some people talk about solr4.. but I only see solr 3.5 a
(12/03/03 1:39), Donald Organ wrote:
I am trying to get synonyms working correctly. I want to map floor locker
to storage locker.
Currently, searching for storage locker produces results, whereas searching
for floor locker does not produce any results.
I have the following setup for index t
Hi there,
I have a document and its title is "20111213_solr_apache conference report".
When I use the analysis web interface to see exactly what tokens Solr
produces, the following is the result:
term text: 20111213_solr apache conference report
term type: ...
Why is 20111213_solr tokenized as and the "_" char w
I suppose it would help if I populated the list I try to remove things
from. I believe it's working once I did that. Now that this is out
there, is there a better way to do something like this?
On Fri, Mar 2, 2012 at 10:19 PM, Jamie Johnson wrote:
> On a previous version of a solr snapshot we
On a previous version of a solr snapshot we had a custom component
which did the following
boolean fsv = req.getParams().getBool(ResponseBuilder.FIELD_SORT_VALUES, false);
if (fsv) {
    NamedList sortVals = (NamedList) rsp.getValues().get("sort_values");
The code does everything in single-threaded mode, but is coded to use
a multi-threaded Java ExecutorService. So, I've filed a request:
https://issues.apache.org/jira/browse/SOLR-3197
On Fri, Mar 2, 2012 at 12:40 PM, Neil Hooey wrote:
>> Someone at Lucid Imagination suggested using multiple > e
A lot depends on the analysis chain your field is actually using, that is
the tokens that are in the index. Can you supply the schema.xml
file for the field in question?
Best
Erick
On Fri, Mar 2, 2012 at 7:21 AM, adrian.strin...@holidaylettings.co.uk
wrote:
> I've got a hierarchical facet in my
I'm not quite sure what you mean by "order with numeric logic".
You're right, the default ordering is by score. I can't think of anything
that would arbitrarily sort by a varying input string, that is
id:(a OR b OR c OR d) would sort differently than
id:(b OR a OR d OR c).
Perhaps if you outlined
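A common client-side workaround for this kind of requirement is to re-sort the returned docs to match the id order used in the query. A minimal sketch, assuming docs come back as dicts with an `id` field (all names here are illustrative, not from the thread):

```python
def reorder_by_query_order(docs, queried_ids, id_field="id"):
    """Reorder result docs to match the order ids appeared in the query."""
    position = {doc_id: i for i, doc_id in enumerate(queried_ids)}
    # Docs whose id was not in the query list sort last, keeping their order.
    return sorted(docs, key=lambda d: position.get(d[id_field], len(position)))

# Example: the query was q=id:(A OR B OR C), but results came back by score.
results = [{"id": "C"}, {"id": "A"}, {"id": "B"}]
print(reorder_by_query_order(results, ["A", "B", "C"]))
```

This keeps the query simple and pushes the ordering concern to the client, which is usually cheap for id-list lookups.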
As far as I know, fdx files don't just disappear, so I can only assume
that something external removed it.
That said, if you somehow re-indexed and had no fields where
stored="true", then the fdx file may not be there.
Are you seeing problems as a result? This file is used to store
index informat
I have an XML file that I would like to index, that has a structure similar to
this:
[message text]
...
...
I would like to have the documents in the index correspond to the messages in
the xml file, and have the user's [id-num] value stored as a field in each of
the user's d
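The XML example above didn't survive the archive, but assuming a structure along the lines of `<user id-num="..."><message>...</message></user>` (an assumption, since the original markup was stripped), the flattening into one document per message could be sketched like this:

```python
import xml.etree.ElementTree as ET

# Hypothetical input shape; the real file's element names may differ.
SAMPLE = """
<users>
  <user id-num="42">
    <message>first message text</message>
    <message>second message text</message>
  </user>
</users>
"""

def to_solr_docs(xml_text):
    """Emit one flat doc per <message>, carrying the parent user's id-num."""
    root = ET.fromstring(xml_text)
    docs = []
    for user in root.findall("user"):
        uid = user.get("id-num")
        for msg in user.findall("message"):
            docs.append({"user_id": uid, "text": msg.text})
    return docs

print(to_solr_docs(SAMPLE))
```

Each emitted dict maps naturally onto one Solr `<doc>` in an update message.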
Hi, there
In order to keep a DocID vs UID map, we added payload to a solr core. The
search on UID is very fast but we get a problem with adding/deleting docs.
Every time we commit an adding/deleting action, solr/lucene will take up to 30
seconds to complete. Without payload, the same action can
Sorry for not being clear enough.
I don't know the point of origin. All I know is that there are 20K retail
stores. Only the cities within a 10-mile radius of these stores should be
searchable. Any city outside these small 10-mile circles around the
20K stores should be ignored.
So w
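One way to realize the constraint described above is to precompute, outside Solr, which cities fall within 10 miles of at least one store using great-circle distance, and restrict the index (or a filter) to those. A sketch with the haversine formula (the coordinates below are illustrative):

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_MILES = 3958.8

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in miles."""
    dlat = radians(lat2 - lat1)
    dlon = radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * EARTH_RADIUS_MILES * asin(sqrt(a))

def searchable_cities(cities, stores, radius_miles=10.0):
    """Keep only cities within radius_miles of at least one store."""
    return [
        c for c in cities
        if any(haversine_miles(c["lat"], c["lon"], s["lat"], s["lon"]) <= radius_miles
               for s in stores)
    ]

stores = [{"lat": 37.7749, "lon": -122.4194}]  # a store in San Francisco
cities = [
    {"name": "San Francisco", "lat": 37.7749, "lon": -122.4194},
    {"name": "San Jose", "lat": 37.3382, "lon": -121.8863},  # well over 10 miles away
]
print([c["name"] for c in searchable_cities(cities, stores)])
```

With 20K stores this brute-force check is O(cities × stores); a spatial grid or k-d tree would be the natural next step at scale.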
But again, that doesn't answer the problem I posed. Where is your
point of origin?
There's nothing in what you've written that indicates how you would know
that 10 miles is relative to San Francisco. All you've said is that
you're searching
on "San". Which would presumably return San Francisco, San
Thanks for responding. We will try the trie fields.
The reason we are not using filters is that these date values change from
query to query.
We dynamically populate these date values in the queries using the
current time.
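One caveat with populating raw current-time values: a range built from the exact current timestamp produces a new query string on every request, which defeats Solr's query and filter caches. Rounding with Solr's date math (e.g. NOW/DAY) keeps the string stable so cached entries get reused. A small sketch contrasting the two (field name expirationdate is taken from this thread):

```python
from datetime import datetime, timezone

def fq_exact_now():
    """Cache-unfriendly: embeds the raw current time, so the string changes
    on every request."""
    now = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
    return f"expirationdate:[{now} TO *]"

def fq_rounded():
    """Cache-friendly: Solr evaluates NOW/DAY server-side, and the query
    string itself stays identical all day."""
    return "expirationdate:[NOW/DAY TO *]"

print(fq_rounded())
```

If day granularity is too coarse, NOW/HOUR or NOW/MINUTE trade cache reuse against freshness.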
--
View this message in context:
http://lucene.472066.n3.nabble.c
Take a look at dedup - I think you must use it to solve this issue.
-Original Message-
From: Thomas Dowling
To: solr-user
Cc: Mikhail Khludnev
Sent: Fri, Mar 2, 2012 1:10 pm
Subject: Re: Help with duplicate unique IDs
Thanks. In fact, the behavior I want is overwrite=true. I wan
I've heard some people talk about Solr 4, but I only see Solr 3.5 available.
Thanks
Thanks. In fact, the behavior I want is overwrite=true. I want to be
able to reindex documents, with the same id string, and automatically
overwrite the previous version.
Thomas
On 03/02/2012 04:01 PM, Mikhail Khludnev wrote:
Hello Tomas,
I guess you could just specify overwrite=false
ht
Hello Tomas,
I guess you could just specify overwrite=false
http://wiki.apache.org/solr/UpdateXmlMessages#Optional_attributes_for_.22add.22
On Fri, Mar 2, 2012 at 11:23 PM, Thomas Dowling wrote:
> In a Solr index of journal articles, I thought I was safe reindexing
> articles because their uniq
> Someone at Lucid Imagination suggested using multiple <listener
> event="firstSearcher"> tags, each with a single facet query in them,
> but those are still done in parallel.
I meant to say: "but those are still done in sequence".
On Fri, Mar 2, 2012 at 3:37 PM, Neil Hooey wrote:
> I'm trying to get Sol
I'm trying to get Solr to run warming queries in parallel with
listener events, but it always does them in sequence, pegging one CPU
while calculating facet counts.
Someone at Lucid Imagination suggested using multiple <listener
event="firstSearcher"> tags, each with a single facet query in them,
but those are still done in para
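For reference, the kind of warming listener being discussed is configured in solrconfig.xml roughly like this (the facet field names are placeholders, not from the thread); Solr's QuerySenderListener runs the listed queries one after another:

```xml
<listener event="firstSearcher" class="solr.QuerySenderListener">
  <arr name="queries">
    <lst><str name="q">*:*</str><str name="facet">true</str>
         <str name="facet.field">category</str></lst>
    <lst><str name="q">*:*</str><str name="facet">true</str>
         <str name="facet.field">author</str></lst>
  </arr>
</listener>
```

Splitting the queries across multiple `<listener>` elements, as suggested above, does not change the sequential execution.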
--- On Fri, 3/2/12, veerene wrote:
> From: veerene
> Subject: date queries too slow
> To: solr-user@lucene.apache.org
> Date: Friday, March 2, 2012, 8:29 PM
> Hello,
> we are having significant performance problems with date
> queries on our
> production server.
> we are using SOLR 1.4 (will
I once had the same problem and didn't know what was going on. After a few
moments of analysis I created a completely new index and removed the old one
(I didn't have enough time to analyze the problem). The problem didn't come
back any more.
--
Regards,
Pawel
On Fri, Mar 2, 2012 at 8:23 PM, Thomas Dowling wrote:
> In a
I've ensured the SOLR data subdirectories and files were completely cleaned
out, but the issue still occurs.
On Fri, Mar 2, 2012 at 9:06 AM, Erick Erickson wrote:
> Matt:
>
> Just for paranoia's sake, when I was playing around with this (the
> _version_ thing was one of my problems too) I removed
In a Solr index of journal articles, I thought I was safe reindexing
articles because their unique ID would cause the new record in the index
to overwrite the old one. (As stated at
http://wiki.apache.org/solr/SchemaXml#The_Unique_Key_Field - right?)
My schema.xml includes:
...
...
And:
<uniqueKey>id</uniqueKey>
Hello,
We are having significant performance problems with date queries on our
production server.
We are using Solr 1.4 (we will be upgrading to the latest version in the near
future) and our index size is around 4GB with 2 million documents.
For example: the query "tag:obama AND expirationdate:[2012-02-21T0
> Ahmet, this is a good find. Can we still open a JIRA issue
> so that a
> more useful exception is thrown here?
Robert, I created SOLR-3193 and created a test using Andrew's files.
So let's say x=10 miles. Now if I search for San then San Francisco, San Mateo
should be returned because there is a retail store in San Francisco. But San
Jose should not be returned because it is more than 10 miles away from San
Francisco. Had there been a retail store in San Jose then it shou
I am trying to get synonyms working correctly. I want to map floor locker
to storage locker.
Currently, searching for storage locker produces results, whereas searching
for floor locker does not produce any results.
I have the following setup for index time synonyms:
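The configuration itself didn't survive the archive, so for context, a typical index-time synonym setup looks roughly like this (a generic sketch, not necessarily the poster's exact analyzer chain; the synonyms.txt rule shown is the mapping from the message):

```xml
<!-- synonyms.txt contains:  floor locker => storage locker -->
<fieldType name="text_syn" class="solr.TextField">
  <analyzer type="index">
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt"
            ignoreCase="true" expand="true"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
  <analyzer type="query">
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
```

Multi-word rules like this one are a frequent source of the asymmetry described above: the tokenizer splits "floor locker" into two tokens before the synonym filter sees them, so the mapping may never fire, while "storage locker" matches the indexed text directly.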
> But - the wiki page has a foot note that says "a tokenizer
> must be defined
> for the field, but it doesn't need to be indexed". The body
> field has the
> type "dcx_text" which has a tokenizer.
>
> Is the documentation wrong here or am I misunderstanding
> something?
Ah, I never read that no
Hi,
I have two newbie questions. With all my searching I haven't been able to
find which would be the better choice to run my Solr / Nutch install, Tomcat
or Jetty. There seem to be a lot of people on the internet saying Jetty has
better performance, but I haven't been able to see any proof of that.
Ah, ok - thank you for looking at it.
But - the wiki page has a foot note that says "a tokenizer must be defined
for the field, but it doesn't need to be indexed". The body field has the
type "dcx_text" which has a tokenizer.
Is the documentation wrong here or am I misunderstanding something?
On Fri, Mar 2, 2012 at 9:41 AM, Ahmet Arslan wrote:
>
>> Robert, I just tried with
>> 3.6-SNAPSHOT 1296203 from svn - the problem is
>> still there.
>>
>> I am just about to leave for a vacation. I'll try to open a
>> JIRA issue this
>> evening.
>
> Andrew, thanks for providing files. I also re-pr
> Robert, I just tried with
> 3.6-SNAPSHOT 1296203 from svn - the problem is
> still there.
>
> I am just about to leave for a vacation. I'll try to open a
> JIRA issue this
> evening.
Andrew, thanks for providing files. I also reproduced it.
But the cause of the exception is that you are trying
Hello all,
an example:
I have a field named itemNo.
The user does a search: itemNo:665
There are three documents in the core that look like this:
doc1 - itemNo = 1237899*665*
doc2 - itemNo = *665*1237899
doc3 - itemNo = 123*665*7899
Does the location or placement of the search string (beginnin
Robert, I just tried with 3.6-SNAPSHOT 1296203 from svn - the problem is
still there.
I am just about to leave for a vacation. I'll try to open a JIRA issue this
evening.
--
View this message in context:
http://lucene.472066.n3.nabble.com/search-highlight-InvalidTokenOffsetsException-in-Solr-3-
Mikhail,
Thanks for the reply. Regarding your comments:
1 - OK. That's good to know.
2 - I thought about adding the subcategories to the category after I sent
my original question. This could work, but there are times when we need the
subcategories returned within the parent document and times
A lot depends on the size here. If each user has a zillion records,
consider multiple
indexes. But by and large, if they all fit in a single index the maintenance is
simpler if you just have a single index (core). And a single core also makes
somewhat more efficient use of memory etc.
Best
Erick
Matt:
Just for paranoia's sake, when I was playing around with this (the
_version_ thing was one of my problems too) I removed the entire data
directory as well as the zoo_data directory between experiments (and
recreated just the data dir). This included various index.2012
files and the tlog
I don't see how this works, since your search for San could also return
San Marino, Italy. Would you then return all retail stores in
X miles of that city? What about San Salvador de Jujuy, Argentina?
And even in your example, San would match San Mateo. But should
the search then return any stores
I posted the files here: http://www.mediafire.com/?z43a5qyfvz4zxp1
--
View this message in context:
http://lucene.472066.n3.nabble.com/search-highlight-InvalidTokenOffsetsException-in-Solr-3-5-tp3560997p3793496.html
Sent from the Solr - User mailing list archive at Nabble.com.
Interesting, with curl I get the content in xml format.
Thanks!
> CC: solr-user@lucene.apache.org
> From: erik.hatc...@gmail.com
> Subject: Re: Solr web admin in xml format
> Date: Fri, 2 Mar 2012 07:59:16 -0500
> To: solr-user@lucene.apache.org
>
> Not at
One other fault-tolerance issue is that you'll need at least one replica
per shard. As I understand it, at least *one* machine has to be running
for each shard for the cluster to work.
This doesn't address the shardId issue, but is something to keep in
mind when testing.
Best
Erick
On Wed, Feb 2
First, I really don't understand why you would have OOMs when
indexing even a humongous number of dates, that just seems weird.
But what happens if you think about it the other way? Instead of indexing
open dates, index booked dates. Then construct filter queries like
fq=-booked:[5 TO 23], where t
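The inverted approach sketched above turns availability checking into simple query construction: given a requested stay, exclude any document with a booked day inside it. For illustration (the field name booked and day numbers follow the example fq in the message):

```python
def availability_fq(checkin_day, checkout_day, field="booked"):
    """Negated range filter: exclude docs with any booked day in the stay."""
    return f"-{field}:[{checkin_day} TO {checkout_day}]"

print(availability_fq(5, 23))
```

Indexing the (usually sparse) booked dates rather than every open date keeps the number of indexed values per document small, which is likely what sidesteps the memory pressure.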
On Fri, Mar 2, 2012 at 7:37 AM, andrew wrote:
> I was able to create a test case.
>
> We are querying ranges of documents. When I tried to isolate the document
> that causes trouble, I found it happens with exactly every second request
> only for a single document query (it fails constantly when r
> Get values from the statistics web, but in xml format for
> parse it with a perl script.
Actually, http://localhost:8080/solr/coreName/admin/stats.jsp is already XML.
It is transformed with stats.xsl to generate the web page.
You can use http://wiki.apache.org/solr/SolrJmx to retrieve stats too.
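Since stats.jsp already returns XML, the Perl-script use case reduces to an HTTP GET plus XML parsing. A Python sketch of the parsing step (the sample entry layout below is a simplified assumption for illustration, not the exact stats.jsp schema):

```python
import xml.etree.ElementTree as ET

# In real use the text would come from the stats URL, e.g.:
#   urllib.request.urlopen("http://localhost:8080/solr/coreName/admin/stats.jsp").read()
SAMPLE = """<solr>
  <entry>
    <name>searcher</name>
    <stat name="numDocs">2000000</stat>
  </entry>
</solr>"""

def read_stat(xml_text, entry_name, stat_name):
    """Find a named <stat> under a named <entry>; None if absent."""
    root = ET.fromstring(xml_text)
    for entry in root.iter("entry"):
        if (entry.findtext("name") or "").strip() == entry_name:
            for stat in entry.iter("stat"):
                if stat.get("name") == stat_name:
                    return stat.text.strip()
    return None

print(read_stat(SAMPLE, "searcher", "numDocs"))
```

For monitoring tools like munin, the JMX route mentioned above avoids scraping entirely.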
Not at my computer at the moment, but there are request handlers that can give
you those details, as well as JMX.
But the stats page IS XML :). View source. :)
On Mar 2, 2012, at 7:50, Ricardo F wrote:
>
> Get values from the statistics web, but in xml format for parse it with a
> per
> I think it is not a good idea to post the Solr
> XML here - it is very
> long (text extract of a newspaper page) and may not
> reproduce verbatim
> (whitespace etc.) if I paste it here.
>
> iorixxx, koji - is it ok if I send the necessary artifacts
> (add XML, schema,
> config) via email?
I s
I want to get values from the statistics web page, but in XML format, to parse
them with a Perl script.
Thanks
> Date: Fri, 2 Mar 2012 12:51:00 +0100
> From: matheis.ste...@googlemail.com
> To: solr-user@lucene.apache.org
> Subject: Re: Solr web admin in xml format
>
> Ri
> Query 1:
> http://localhost:8085/solr/select/?q=abc&version=2.2&start=0&rows=10&indent=on&defType=dismax
> [defType with capital T -- does not fetch results]
>
> Query 2:
> http://localhost:8085/solr/select/?q=abc&version=2.2&start=0&rows=10&indent=on&deftype=dismax
> [deftype with lowercase t --
I was able to create a test case.
We are querying ranges of documents. When I tried to isolate the document
that causes trouble, I found it happens with exactly every second request
only for a single document query (it fails constantly when requesting a
range of documents where that document is in
I've got a hierarchical facet in my Solr collection; root level values are
prefixed with 0;, and the next level is prefixed 1_foovalue;. I can get the
root level easily enough, but when foovalue is selected I need to retrieve the
next level in the hierarchy while still displaying all of the opt
Ricardo, What exactly do you need?
On Friday, March 2, 2012 at 12:05 PM, Ricardo F wrote:
>
> Hello,
> How can I get the output of the web interface in xml format? I need it for
> munin monitoring.
>
> Thanks
Hello,
How can I get the output of the web interface in xml format? I need it for
munin monitoring.
Thanks
> I am crawling my site using Nutch and posting it to
> Solr. I am trying to
> implement a feature where I want to get all data where url
> starts with
> "http://someurl/";
What is your field type for url? If it's a string type, then you can use this:
&q={!prefix f=url}http://someurl/
http://lucen
A weird behavior with respect to "defType". Any clues will be appreciated.
Query 1:
http://localhost:8085/solr/select/?q=abc&version=2.2&start=0&rows=10&indent=on&defType=dismax
[defType with capital T -- does not fetch results]
Query 2:
http://localhost:8085/solr/select/?q=abc&version=2.2&sta
1.) To have one big index and filter by customer-name
On Fri, Mar 2, 2012 at 11:25 AM, Ramo Karahasan <
ramo.karaha...@googlemail.com> wrote:
> 1.)To have one big index and filter by customer-name
--
Sincerely yours
Mikhail Khludnev
Lucid Certified
Apache Lucene/Solr Developer
Grid Dy
The only reference I found is:
http://stackoverflow.com/questions/5753079/solr-query-without-order
Has anyone had the same problem? Maybe using a dynamic field could solve this
issue?
Thanks!
Luis Cappa.
2012/3/2 Luis Cappa Banda
> Hello!
>
> Just a brief question. I'm querying by my docs ids
Hello!
Just a brief question. I'm querying by my docs' ids to retrieve the whole
document data, and I would like to retrieve the docs in the same
order as I queried them. Example:
*q*=id:(A+OR+B+OR+C+OR...)
And I would like to get a response with a default order like:
response:
*docA*:{
Hi!
On Thu, Mar 1, 2012 at 23:54, Yonik Seeley wrote:
> On Thu, Mar 1, 2012 at 3:34 AM, Michael Jakl wrote:
>> The topic field holds roughly 5
>> values per doc, but I wasn't able to compute the correct number right
>> now.
>
> How many unique values for that field in the whole index?
> If you h
Oh no, sorry.
I need more than one; that was only an example:
0 - 9
A - F
G - I
M - R
S - Z
It would be so easy if I could get all persons in such an interval with
fq=person:[A TO F], not only exact matches of entries.
Thx,
Alex
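Those intervals map directly onto range filter queries; a sketch that generates one fq per bucket (the buckets are copied from the message; the note about upper bounds is a general caveat about string ranges, not from this thread):

```python
BUCKETS = [("0", "9"), ("A", "F"), ("G", "I"), ("M", "R"), ("S", "Z")]

def bucket_fqs(field="person"):
    """One range filter per bucket. Caveat: on a string field, [A TO F]
    ends at the literal term "F", so entries like "Fred" fall outside it;
    an upper bound such as "F\uffff" is one illustrative workaround."""
    return [f"{field}:[{lo} TO {hi}]" for lo, hi in BUCKETS]

print(bucket_fqs())
```

For this to behave alphabetically, the field should hold a single lowercase-normalized (or consistently cased) term per document.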
--
View this message in context:
http://lucene.472066.n3.nabble.com/alphanumeric