Ya, the problem is with the length of the URL; with a lot of filters coming
in, the length goes beyond what is allowed. But I guess extending the URL
length would be a better approach.
Regards,
Rohit
-Original Message-
From: Sujit Pal [mailto:sujit@comcast.net]
Sent: 14 October 2011 1
Hi,
Interesting feature. See also https://issues.apache.org/jira/browse/LUCENE-3130
for a discussion of using TypeAttribute to (de)boost certain token types such
as synonyms. Having the ability to remove a token type from the search, we
could do many kinds of searches on the same field, that we
Hi,
The Highlighter is way too slow for this customer's particular use case - which
is very large documents. We don't need highlighted snippets for now, but we
need to accurately decide what words (offsets) in the real HTML display of the
resulting page to highlight. For this we only need offs
On Fri, Oct 14, 2011 at 5:49 PM, Esteban Donato
wrote:
> I found soft commits very useful for NRT search requirements.
> However I couldn't figure out how replication works with this feature.
> I mean, if I have N replicas of an index for load balancing purposes,
> when I soft commit a doc in on
Hey everybody,
Next week, Lucene EuroCon will be invading Barcelona, Spain, and I will
once again be in the hot seat for a session of Stump The Chump.
http://2011.lucene-eurocon.org/talks/20863
During the session, moderator and former "Chump" Grant Ingersoll will
present me with tough Luce
Hello guys,
I found soft commits very useful for NRT search requirements.
However I couldn't figure out how replication works with this feature.
I mean, if I have N replicas of an index for load balancing purposes,
when I soft commit a doc in one of these nodes, is there any way that
those "in-m
There's an open issue -
https://issues.apache.org/jira/browse/SOLR-2731 - which addresses adding
this kind of metadata to CSV output. There's a patch
there which may be useful, and could probably be adapted if needed.
-Simon
On Fri, Oct 14, 2011 at 4:37 PM, Fred Zimmerman wrote:
> Hi,
>
> I want to
: Can't find resource 'solrconfig.xml' in classpath or
: '/home/datadmin/public_html/apache-solr/example/solr/./conf/',
: cwd=/usr/local/jakarta/apache-tomcat-5.5.33/bin
a) several of the steps you mention refer to
/home/datadmin/public_html/apache-solr/example and
/home/datadmin/public_html/a
Hi,
I want to include the search query in the output of wt=csv (or a duplicate
of it) so that the process that receives this output can do something with
the search terms. How would I accomplish this?
Fred
On 10/14/2011 12:11 PM, Rohit wrote:
> I want to query; right now I use it in the following way:
>
> CommonsHttpSolrServer server = new CommonsHttpSolrServer("URL HERE");
> SolrQuery sq = new SolrQuery();
> sq.add("q",query);
> QueryResponse qr = server.query(sq);
QueryResponse qr = server.query(
What will be the name of this hard-coded core? I was rearranging my
directory structure, adding a separate directory for code. And it does work
with a single core.
On Fri, Oct 14, 2011 at 11:47 PM, Chris Hostetter-3 [via Lucene] <
ml-node+s472066n3422415...@n3.nabble.com> wrote:
>
> : I am not ru
: modified the solr/home accordingly. I have an empty directory under
: tomcat/webapps named after the solr home directory in the context fragment.
if that empty directory has the same base name as your context fragment
(ie: "tomcat/webapps/solr0" and "solr0.xml") that may give you problems
..
Hmm.. Reason:
When I debug it in Eclipse, I am verifying that the value I am setting in
the SolrInputDocument includes '\r\n'. However, it only has '\n' in the
index. It is just a simple string field in Solr.
On Fri, Oct 14, 2011 at 2:23 PM, Chris Hostetter-3 [via Lucene] <
ml-node+s472066n3422
: Can one configure (e)dismax to add that + to 1 or more fields, like name
: in that example, in order to require that clause though?
Otis: this thread has fallen behind its deja-vu brother in terms of
useful information...
http://www.lucidimagination.com/search/document/86a9202b97df441d/disma
: We recently updated our Solr and Solr indexing from DIH using Solr 1.4 to our
: own Hadoop import using SolrJ and Solr 3.4.
...
: Any document that has a string field value with a carriage return "\r" is
: having that carriage return stripped before being added to the index. All
: li
: I am not running in a multi core environment. My application requires only a
: single search schema. Does it make sense to go for a multi core setup in
: this scenario? Given that we currently have a single core is there any
: alternative to RELOAD which works in a single core setup?
In recent v
After looking at this more, it appears that
solr.HTMLStripCharFilterFactory does not return the list that
AnalysisResponseBase is expecting. I have created a bug ticket
(https://issues.apache.org/jira/browse/SOLR-2834)
On Fri, Oct 14, 2011 at 8:28 AM, Shane Perry wrote:
> Hi,
>
> Using Solr 3.4.0
Hmmm, you need to post more information,
especially the results of appending &debugQuery=true.
But right off the bat, the "string" type is probably not
what you want if your input has more than one
word in it. "string" types are completely unanalyzed,
tokens aren't extracted, no stemming or casing
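As a rough illustration of the &debugQuery=true suggestion from SolrJ (a
sketch only; the URL and query value are made up, and it assumes SolrJ 3.x):

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
import org.apache.solr.client.solrj.response.QueryResponse;

public class DebugQueryExample {
    public static void main(String[] args) throws Exception {
        CommonsHttpSolrServer server = new CommonsHttpSolrServer("http://localhost:8983/solr");
        SolrQuery sq = new SolrQuery("title:\"multi word input\"");  // hypothetical query
        // Equivalent of appending &debugQuery=true to the URL: the response then
        // carries the parsed query and per-document score explanations.
        sq.set("debugQuery", "true");
        QueryResponse qr = server.query(sq);
        System.out.println(qr.getDebugMap());
    }
}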
Not the OP, but I put it in on /one/ of my solr custom handlers that
acts as a proxy to itself (i.e. the server it's part of). It basically
rewrites the incoming query (usually short, 50-250 chars at most) into a
set of very long queries and passes them in parallel to the server,
gathers up the results a
Why do you want to use POST? It is the wrong HTTP request type for search
results.
GET is for retrieving information from the server, POST is for changing
information on the server.
POST responses cannot be cached (see HTTP spec).
POST requests do not include the arguments in the log, which ma
If you use the CommonsHttpSolrServer from your client (not sure about
the other types, this is the one I use), you can pass the method as an
argument to its query() method, something like this:
QueryResponse rsp = server.query(params, METHOD.POST);
HTH
Sujit
On Fri, 2011-10-14 at 13:29 +, Ro
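For reference, a minimal, self-contained sketch of the approach Sujit
describes; the URL, query, and filter values below are placeholders, and it
assumes SolrJ 3.x, where SolrServer.query() accepts a METHOD argument:

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.SolrRequest.METHOD;
import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
import org.apache.solr.client.solrj.response.QueryResponse;

public class PostQueryExample {
    public static void main(String[] args) throws Exception {
        // Placeholder URL; point this at your own Solr instance.
        CommonsHttpSolrServer server = new CommonsHttpSolrServer("http://localhost:8983/solr");
        SolrQuery sq = new SolrQuery();
        sq.setQuery("ipod");                        // hypothetical query
        sq.addFilterQuery("category:electronics");  // hypothetical filter; add as many as needed
        // Sending the request as POST keeps a long list of filters out of the URL,
        // so the container's URL length limit is never hit.
        QueryResponse qr = server.query(sq, METHOD.POST);
        System.out.println("Found " + qr.getResults().getNumFound() + " docs");
    }
}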
Thanks Pravesh for your feedback. I have 10 million products and 165M
rows of visits accumulated over 2 years. The aggregated data needs to
be shown in the search result page along with the product description.
I also felt option 2 was the most suitable but wanted to have a second
opinion. The only
I want to query; right now I use it in the following way:
CommonsHttpSolrServer server = new CommonsHttpSolrServer("URL HERE");
SolrQuery sq = new SolrQuery();
sq.add("q",query);
QueryResponse qr = server.query(sq);
Regards,
Rohit
-Original Message-
From: Yury Kats [mailto:yuryk...@yahoo.
I've spent today writing my own SynonymFilter and SynonymFilterFactory. And
it works!
I've followed Erick's advice and pre- and postfixed all the words that I
want to stem with a @. So, if I want to stem the word car, I ingest it in
the query as @car@.
My adapted synonymfilter recognizes the pre/
You could restart your Solr instance. If you have just 1 Solr instance, that
means a bit of a downtime. If you have 2 Solr slaves behind a Load Balancer,
then you can avoid that downtime.
But I think you could also just configure your 1 Solr core via solr.xml and
then you can use that RELOAD c
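A rough SolrJ sketch of triggering that RELOAD without a restart (the core
name "core0" and the URL are placeholders; it assumes the core is declared in
solr.xml and SolrJ 3.x is on the classpath):

import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
import org.apache.solr.client.solrj.request.CoreAdminRequest;

public class ReloadCoreExample {
    public static void main(String[] args) throws Exception {
        // The server must point at the container root, not at a specific core.
        CommonsHttpSolrServer server = new CommonsHttpSolrServer("http://localhost:8983/solr");
        // Reloading the core re-reads solrconfig.xml, schema.xml, stopwords.txt, etc.
        // without restarting the servlet container.
        CoreAdminRequest.reloadCore("core0", server);
    }
}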
I am not running in a multi core environment. My application requires only a
single search schema. Does it make sense to go for a multi core setup in
this scenario? Given that we currently have a single core is there any
alternative to RELOAD which works in a single core setup?
On Fri, Oct 14, 2011
Of course, if you change stop words you may want to reindex your old content,
so that the new state of stop words is reflected in all documents.
It's not an absolute must to do that, but if you do not do it, you may see
strange search results that will make you wonder why some documents matched a
Hi,
Using Solr 3.4.0, I am trying to do a field analysis via the
FieldAnalysisRequest feature in solrj. During the process() call, the
following ClassCastException is thrown:
java.lang.ClassCastException: java.lang.String cannot be cast to java.util.List
at
org.apache.solr.client.solrj.r
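For context, the kind of call being described looks roughly like this (a
sketch only; the field name and value are invented, the URL is a placeholder,
and SolrJ 3.4 is assumed):

import java.util.Arrays;
import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
import org.apache.solr.client.solrj.request.FieldAnalysisRequest;
import org.apache.solr.client.solrj.response.FieldAnalysisResponse;

public class FieldAnalysisExample {
    public static void main(String[] args) throws Exception {
        CommonsHttpSolrServer server = new CommonsHttpSolrServer("http://localhost:8983/solr");
        FieldAnalysisRequest req = new FieldAnalysisRequest();
        req.setFieldNames(Arrays.asList("content"));              // hypothetical field
        req.setFieldValue("<p>Some <b>HTML</b> to analyze</p>");  // value run through the analysis chain
        FieldAnalysisResponse rsp = req.process(server);
        System.out.println(rsp.getFieldNameAnalysis("content"));
    }
}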
On 10/14/2011 9:29 AM, Rohit wrote:
> I want to user POST instead of GET while using solrj, but I am unable to
> find a clear example for it. If anyone has implemented the same it would be
> nice to get some insight.
To do what? Submit? Query? How do you use SolrJ now?
Hi Victor,
your wages hopefully cost more than disk space does, nowadays?
I don't want to spoil the fun of thinking up new challenges when it
comes to SOLR, but from a project management point of view I would buy
some more storage and get it done with copyField and two request handlers
that c
On 14.10.2011 15:10, Jithin wrote:
> Hi,
> Is it possible to refresh the stop word list periodically, say once every 6
> hours? Is this already supported in Solr, or are there any workarounds?
> Kindly help me in understanding this.
Hi,
you can trigger a reload command to the core admin, assuming y
Hi Erick,
First of all, thanks for your posts, I really appreciate this!
1) Yes, we have tested alternative stemmers, but I admit that a definite
decision has not been made yet. Anyway, we definitely do not want to create
a stemmed index because of storage issues and we definitely want to be abl
Hello folks,
I have a question about MLT.
For example my query:
localhost:8983/solr/mlt/?q=gefechtseinsatz+AND+dna&mlt=true&mlt.fl=text&mlt.count=0&mlt.boost=true&mlt.mindf=5&mlt.mintf=5&mlt.minwl=4
I have 1 query result and 13 MLT docs. The MLT result corresponds to
half of my inde
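A hedged SolrJ sketch of sending roughly the same MLT parameters (values
copied from the URL above; the server URL is a placeholder, and it targets the
standard search handler with the MoreLikeThis component rather than the
dedicated /mlt handler hit in the URL):

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
import org.apache.solr.client.solrj.response.QueryResponse;

public class MltQueryExample {
    public static void main(String[] args) throws Exception {
        CommonsHttpSolrServer server = new CommonsHttpSolrServer("http://localhost:8983/solr");
        SolrQuery sq = new SolrQuery("gefechtseinsatz AND dna");
        // MoreLikeThis parameters mirroring the query string above.
        sq.set("mlt", "true");
        sq.set("mlt.fl", "text");
        sq.set("mlt.count", "0");
        sq.set("mlt.boost", "true");
        sq.set("mlt.mindf", "5");
        sq.set("mlt.mintf", "5");
        sq.set("mlt.minwl", "4");
        QueryResponse qr = server.query(sq);
        System.out.println(qr.getResults().getNumFound() + " main results");
    }
}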
Hi,
A couple of things.
1> Have you looked at alternate stemmers? Porter stemmer is rather
aggressive. Perhaps a less aggressive stemmer would suit your
internal users.
2> Try a few things, but if you can't solve it reasonably quickly,
go back to your internal customer and exp
Hi Esteban,
A lot depends on a lot of things: 1) how much volume (total documents), 2)
size of the index, 3) how you represent the aggregated data in your UI.
Your option 2 seems to be a suitable way to go. This way you can tune each
core separately. Also the use-cases for updating each document/produ
This is really a Python question, basic
URL escaping. It has nothing to do with
Solr.
WARNING: I don't do Python, but a quick
google search shows:
http://stackoverflow.com/questions/1695183/how-to-percent-encode-url-parameters-in-python
Best
Erick
2011/10/14 Guizhi Shi :
>
> Hi Dear Sir/Madam
>
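The question is about Python, but the same percent-encoding applies in any
language; for illustration, a small Java sketch (the raw query value is made
up):

import java.net.URLEncoder;

public class EncodeParamExample {
    public static void main(String[] args) throws Exception {
        // A query value containing '&' must be percent-encoded before it is placed
        // in a URL, otherwise the server treats '&' as a parameter separator.
        String q = "AT&T gefechtseinsatz";  // hypothetical raw query value
        String encoded = URLEncoder.encode(q, "UTF-8");
        System.out.println("q=" + encoded);  // prints q=AT%26T+gefechtseinsatz
    }
}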
I need to extract the last 20 keywords in all my documents, sorted by score.
The keywords field is multivalued, and I have defined it in the Solr schema
like this:
The problem is as follows: my query is OK, but once I have indexed 518
documents Solr seems to no longer work. But if I remove a document (
Hi Dear Sir/Madam
This is George. I'm using SOLR; it looks very powerful, but I have run into a problem. It is
like this:
I want to post a request from a Python script and pass the HTTP request to my
own SOLR server, but if the request contains a symbol such as &, SOLR will
respond with an error.
I use the admin p
Just look into your Tomcat logs in more detail, specifically the logs from when
Tomcat loads the Solr application's web context. There you might find some
clues, or just post a snapshot of the logs here.
Regds
Pravesh
Hi Jeremy,
The xsl files go into the subdirectory /xslt/ (you have to create that)
in the /conf/ directory of the core that should return the transformed
results.
So, if you have a core /myCore/ that should return transformed
results, you need to put the example.xsl into:
$SOLR_HOME/myCore
Hi Erick,
I work for a very big library and we store huge amounts of data. Indexing
some of our collections can take days and the index files can get very big.
We are a non-profit organisation, so we want to provide maximum service to
our customers but at the same time we are bound to a fixed budg
Hi Monica,
AFAIK there is nothing like the filter you've described, and I believe it would
be generally useful. Maybe it could be called StopTermTypesFilter? (Plural on
Types to signify that more than one type of term can be stopped by a single
instance of the filter.)
Such a filter should
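To make the idea concrete, a minimal sketch of what such a filter might look
like (StopTermTypesFilter is only the working name proposed above, not an
existing Lucene class; the Lucene 3.x TokenFilter API is assumed):

import java.io.IOException;
import java.util.Set;
import org.apache.lucene.analysis.TokenFilter;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.tokenattributes.TypeAttribute;

// Sketch only: drops every token whose type (e.g. "SYNONYM") is in a configured set.
public final class StopTermTypesFilter extends TokenFilter {
    private final Set<String> stopTypes;
    private final TypeAttribute typeAtt = addAttribute(TypeAttribute.class);

    public StopTermTypesFilter(TokenStream input, Set<String> stopTypes) {
        super(input);
        this.stopTypes = stopTypes;
    }

    @Override
    public boolean incrementToken() throws IOException {
        while (input.incrementToken()) {
            if (!stopTypes.contains(typeAtt.type())) {
                return true;  // keep tokens whose type is not stopped
            }
        }
        return false;  // end of stream
    }
}

In practice it would be wired into an analyzer chain through a factory whose
init argument lists the token types to stop.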