It seems strange: when I refresh the same URL search,
the time changes... sometimes it takes 0.01021409034729 s, sometimes
0.0080091953277588 s,
and sometimes 0.024219989776611 s.
It varies too much.
Only I use it, and there are few searches, so I think memory is not fully used.
Why does the time vary so much, and I thi
ok, I find it only happens on Windows.
2007/6/19, James liu <[EMAIL PROTECTED]>:
It seems strange: when I refresh the same URL search,
the time changes... sometimes it takes 0.01021409034729 s, sometimes
0.0080091953277588 s,
and sometimes 0.024219989776611 s.
It varies too much.
Only I use it and le
hi,
I am planning on reindexing from fresh on a daily basis, while keeping the
search online.
What I do is:
1. delete everything with a delete-by-query of *:*
2. reindex all documents
3. commit
Works ok. But I've noticed that the faq recommends issuing an optimize before
reindexing. The problem is optimize also seems to commit changes, so the
index is empty until I reindex and
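The steps above can be sketched as Solr XML update messages posted to the update endpoint (the tags are the standard XML update format; the example document's fields are made up):

```xml
<!-- 1. clear the index with a delete-by-query -->
<delete><query>*:*</query></delete>

<!-- 2. re-add each document (one <add> per batch) -->
<add>
  <doc>
    <field name="id">1</field>
    <field name="name">example document</field>
  </doc>
</add>

<!-- 3. make the rebuilt index visible to searchers -->
<commit/>
```

Until the commit is issued, searchers keep seeing the old view of the index, which is what makes the ordering of these steps matter for staying online.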
I wish to index well formed xml documents as they are without escaping
all the tags with &lt;s and &gt;s. I searched this mailing list's archive
and found someone who suggested that you can make a new field type
having a file something like:
import org.apache.solr.schema.TextField;
import org.ap
Hi Hoss.
Yes, the idea is indexing each document independently (in my scenario they
are not translations, they are just documents with the same structure but
in different languages). So the considerations you raised about queries in a
range wouldn't be a problem in this case. The real issue I can see i
Is it reasonable to implement a RequestHandler that systematically uses a
DocSet as a filter for the restriction queries? I'm under the impression
that SolrIndexSearcher.getDocSet(Query, DocSet) would use the cache properly
& that calling it in a loop would perform the 'and' between the filters...
On 6/19/07, Henrib <[EMAIL PROTECTED]> wrote:
Is it reasonable to implement a RequestHandler that systematically uses a
DocSet as a filter for the restriction queries?
How many unique keys would typically be used to construct the filter?
I'm under the impression
that SolrIndexSearcher.getDocS
What I'm after is to restrict the 'whole' index through a set of unique
keys.
Each unique key set is likely to have between 100 & 1 keys and these
sets are expected to be different for most of the queries. I'm trying to see
if I can achieve a generic 'fk' (for filter key) kind of parameter so
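A rough sketch of the loop being discussed, inside a hypothetical request handler (the variable names and the null starting filter are assumptions; only SolrIndexSearcher.getDocSet(Query, DocSet) comes from the messages above):

```java
// Intersect a list of restriction queries into one DocSet, letting
// each getDocSet call consult the filterCache where possible.
DocSet filter = null;  // null = no restriction applied yet
for (Query restriction : restrictions) {
    filter = searcher.getDocSet(restriction, filter);
}
// 'filter' now holds the AND of all restrictions and can be used
// to constrain the main query.
```

Whether passing the previous DocSet back in actually hits the cache for the combined set, rather than only for each individual query, is exactly the question raised in this thread.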
Hi,
I'm also just at that point where I think I need a wildcard facet.field
parameter (or someone points out another solution for my problem...).
Here is my situation:
I have many products of different types with totally different
attributes. There are currently more than 300 attributes
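For context, faceting currently means listing every field explicitly in the request, which gets unwieldy with 300+ dynamic attribute fields. A made-up example URL (host, port, and the attr_* field names are assumptions):

```
http://localhost:8983/solr/select?q=shoes&facet=true&facet.field=attr_color&facet.field=attr_size&facet.field=attr_brand
```

A wildcard facet.field parameter would collapse that explicit list into something like facet.field=attr_*.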
On 19-Jun-07, at 2:48 AM, michael ravits wrote:
hi,
I am planning on reindexing from fresh on a daily basis, while
keeping the search online.
What I do is:
1. delete everything with a delete-by-query of *:*
2. reindex all documents
3. commit
Works ok. But I've noticed that the faq recommends issuing an optimize
before reindexing. The problem is optimize also
seems to co
On 18-Jun-07, at 10:28 PM, Toru Matsuzawa wrote:
I'm sorry. Since it was not possible to attach it,
I am sending it again.
I got the error below after adding CJKTokenizer to schema.xml. I
checked the constructor of CJKTokenizer; it requires a Reader
parameter.
I guess that's why I get this e
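For what it's worth, schema.xml references tokenizer *factories* rather than tokenizers directly, so the usual bridge is a small factory class. A hedged sketch, assuming your Solr version provides a base class like BaseTokenizerFactory (the factory's class name here is made up):

```java
import java.io.Reader;

import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.cjk.CJKTokenizer;
import org.apache.solr.analysis.BaseTokenizerFactory;

// Hypothetical factory: Solr instantiates it from schema.xml and calls
// create() with the field's Reader, which is exactly the argument that
// CJKTokenizer's constructor requires.
public class CJKTokenizerFactory extends BaseTokenizerFactory {
  public TokenStream create(Reader input) {
    return new CJKTokenizer(input);
  }
}
```

schema.xml would then reference the factory class inside the field type's analyzer, e.g. a tokenizer element whose class attribute names this factory.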
Hi List,
Thanks in advance for the help. I'm new to Solr and ran across a bit
of a problem. I installed Solr with the Jetty and tested the
exampledocs. Everything went great. Next I tried adding one of my own
documents to the collection. The XML is below:
test.xml
123456789
Hi List,
Thanks in advance for the help. I'm new to Solr and ran across a bit
of a problem. I installed Solr with the Jetty and tested the
exampledocs. Everything went great. Next I tried adding one of my own
documents to the collection. The XML is below:
Are you running the example without ch
On 6/19/07, Brian Whitman <[EMAIL PROTECTED]> wrote:
> Hi List,
>
> Thanks in advance for the help. I'm new to Solr and ran across a bit
> of a problem. I installed Solr with the Jetty and tested the
> exampledocs. Everything went great. Next I tried adding one of my own
> documents to the colle
: I have many products of different types with totally different
: attributes. There are currently more than 300 attributes
: I use dynamic fields to import the attributes into solr without having
: to define a specific field for each attribute. Now when I make a query I
: would like to get bac
: range wouldn't be a problem in this case. The real issue I can see in this
: approach, is related to Analyzers... How to make them deal with different
: languages properly using one Solr instance with the same set of fields being
: used by documents in different languages
i would still use
: I wish to index well formed xml documents as they are without escaping
: all the tags with &lt;s and &gt;s. I searched this mailing list's archive
: and found someone who suggested that you can make a new field type
: having a file something like:
in the thread in question...
http://www.nabble.c
: Works ok. But I've noticed that the faq recommends issuing an optimize
: before reindexing. The problem is optimize also seems to commit
: changes, so the index is empty until I reindex and the search can't be
: online.
please note what question that recommendation appears in...
"How can I rebuild my index fro
Hi Guys,
Thanks very much for the response. Brian, you were correct. I was
following a separate tutorial where the fields had been added to the
schema.
Cheers,
Spencer
On 6/19/07, Yonik Seeley <[EMAIL PROTECTED]> wrote:
On 6/19/07, Brian Whitman <[EMAIL PROTECTED]> wrote:
>
> > Hi List,
> >
>
: Thanks! Removing the entry in the config file fixed it.
: Could you please explain to me what the property does exactly? It is not clear
: to me.
I've never had much of a chance to write really good docs on the
DisMaxRequestHandler (anyone want to volunteer?) ... BUT! "mm" is one of
the few options
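For readers who land here: "mm" is dismax's minimum-should-match specification, controlling how many optional query clauses must match. A sketch of one possible value in solrconfig.xml (which handler block it lives in depends on your config):

```xml
<!-- up to 2 optional clauses: all are required;
     more than 2 clauses: 75% of them are required -->
<str name="mm">2&lt;75%</str>
```

Removing the entry, as above, falls back to the handler's default behavior.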
: search for documents. I'm planning to use Nutch to crawl that website
: and use Solr to cluster my search results. I tried integrating Nutch
: with Solr following FooFactory.com's blog ..but I could not follow
: few of the steps as I'm very new to both of them. If anyone of you have
: imp
On Jun 19, 2007, at 2:09 PM, Yonik Seeley wrote:
Or a Java bug?
http://www.innovation.ch/java/HTTPClient/urlcon_vs_httpclient.html
I'm not sure if it's possible to get the extra info with Java's
built-in HTTP client.
Spencer, does post.sh give you more error info?
Does for me, not very pret
Thanks Chris for replying to my question. So I'm thinking about using a CMS, and
when somebody publishes a page in the CMS, I would generate this well-structured
XML file and feed that XML to Solr to generate the index on that data. Then, I
can simply do faceted search using the correct Lucene query f
Is there a way to hook in my own custom HitCollector into the search
process and still have all the magic of Solr? I see from [1] that
someone else is interested in something like this, just don't know if
anything came of it. Worst case, I could hook in my HitCollector and
sacrifice the c
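For reference, the Lucene-side hook in question is tiny; a minimal, hypothetical collector (class name made up) that just counts matches:

```java
import org.apache.lucene.search.HitCollector;

// Lucene calls collect() once per matching document, passing the
// internal document id and the raw score.
public class CountingHitCollector extends HitCollector {
  public int count = 0;

  public void collect(int doc, float score) {
    count++;
  }
}
```

The open question from the message above is how to get Solr to hand its searches to such a collector without losing its caching and faceting machinery.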
Hi all,
I'm very new to Solr; I did try the tutorial and deployed the provided
demo application. I got a basic understanding of how it works. However, when I
tried deploying the same demo in Tomcat, it was unsuccessful. Anyway, what I'm
trying now is I want to create a two jsp/servlet(
I am doing a full index daily (3,000,000 documents) without using
optimize, with no problems at all, if that helps. Also note that you don't
need to delete all your documents; if you delete your documents, search
won't return any results until you have re-added them!
You should do a regular commit
Hi.
For a project I'm working on, I'm getting an RDF-formatted feed.
I was wondering if someone has built an RDF-to-Solr upload function
similar to the CSV and MySQL ones sitting in Jira.
regards
Ian
: For a project i'm working on, I'm getting a RDF formatted feed.
:
: I was wondering if someone has built a RDF to solr upload function
: similar to the CSV and mysql ones sitting in Jira.
no, but i've been meaning to try and write an XsltUpdateHandler that can
do the inverse of the XsltResponse
Ok. Thanks. I will read up on it.
On 19/06/07, Chris Hostetter <[EMAIL PROTECTED]> wrote:
: Thanks! Removing the entry in the config file fixed it.
: Could you please explain to me what the property does exactly? It is not
: clear to me.
I've never had much of a chance to write really good docs o
On Tue, 2007-06-19 at 11:09 -0700, Chris Hostetter wrote:
> I solve this problem by having metadata stored in my index which tells
> my custom request handler what fields to facet on for each category ...
How do you define this metadata?
Cheers,
Martin
> but i've also got several thousand catego
On Tue, 2007-06-19 at 19:16 +0200, Thomas Traeger wrote:
> Hi,
>
> I'm also just at that point where I think I need a wildcard facet.field
> parameter (or someone points out another solution for my problem...).
> Here is my situation:
>
> I have many products of different types with totally dif