Solr into Tomcat, you just need a binary solr.war; you don't
need to repackage it yourself, unless you're doing something
highly custom. And if you do need to do something highly custom, I still
strongly suggest you back up a few steps and get things working first.
Erik
On Oct 8, 2
Hi!
I have been going through the documentation for the more like this/these
feature,
but haven't found anything about how to use it in SolrJ.
Regards Erik
Thanks Bruce!
That worked very well.
Erik
On Wed, Oct 8, 2008 at 9:14 PM, Bruce Ritchie <[EMAIL PROTECTED]>wrote:
> Erik,
>
> I just got this to work myself and the documentation was only partially
> helpful in figuring it out. Two main points on making this work via solrj:
Erik
Sounds like Solr's faceting is exactly what you're looking for. Have
you given it a try? How's it working for you?
Erik
On Oct 14, 2008, at 5:44 AM, klazzthy wrote:
Hello,
I am going mad these days trying to improve my site. I am trying to do
something that I
Solr's new DataImportHandler can index RSS (and Atom should be fine
too) feeds.
Erik
On Oct 14, 2008, at 11:37 AM, msizec wrote:
Thank you for your help.
I've just realized that Solr could not index pages from the web.
I wonder if someone of you guys would know an
ed, either as fl=url, or fl=*
Erik
Jeremy,
Great troubleshooting! You were spot on.
I've posted a new patch that fixes the issue.
Erik
On Oct 16, 2008, at 9:53 PM, Jeremy Hinegardner wrote:
After a bit more investigating, it appears that any facet tree where
the first
item is numerical or boolean or som
I don't think delete-by-query supports purely negative queries, even
though they are supported for q and fq parameters for searches.
Try using:
*:* AND -deptId:[1 TO *]
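As a complete delete message posted to /solr/update (followed by a
<commit/> to make it visible), that query would be:

```xml
<delete><query>*:* AND -deptId:[1 TO *]</query></delete>
```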
Erik
On Oct 27, 2008, at 9:21 AM, Alexander Ramos Jardim wrote:
Hey pals,
I am trying to delete a c
On Oct 28, 2008, at 6:33 AM, Kraus, Ralf | pixelhouse GmbH wrote:
Is there a chance to override the Similarity in my search?
In fact I want all results to return 1 (from the idf method).
Sure thing, see Solr 1.3.0's example/solr/conf/schema.xml
+1 - the GzipServletFilter is the way to go.
Regarding request handlers reading HTTP headers, yeah,... this will
improve, for sure.
Erik
On Oct 30, 2008, at 12:18 AM, Chris Hostetter wrote:
: You are partially right. Instead of the HTTP header, we use a request
: parameter
<queryResponseWriter name="phps" class="org.apache.solr.request.PHPSerializedResponseWriter"/>
Then in PHP, hit Solr directly like this:
$response = unserialize(file_get_contents($url));
Where $url is something like http://localhost:8983/solr/select?q=*:*
Erik
INFO: [core_de] webapp=/solr path=/select/
params={wt=phps&query=Tools&records=30&start_record=0} status=500
QTime=1
The parameter name should be "q" instead of "query".
And rows instead of records, and start instead of start_record. :)
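Putting those renames together, the request from the log above becomes:

```
http://localhost:8983/solr/select?wt=phps&q=Tools&rows=30&start=0
```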
Erik
On Oct 31, 2008, at 11:40 AM, Vincent Pérès wrote:
The last possibility is to use the solr-ruby library.
If you're using Ruby, that's what I'd use. Were your other proposals
to still do those calls from Ruby, but with the HTTP library directly?
Erik
t
exists) would be fine.
Yeah, this should work fine:
default="NOW/DAY" multiValued="false"/>
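Filled out, such a field definition might look like this (the field name
and type here are assumptions for illustration):

```xml
<field name="timestamp" type="date" indexed="true" stored="true"
       default="NOW/DAY" multiValued="false"/>
```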
Erik
JAR in /lib so no need to patch Solr locally for that.
Erik
On Nov 4, 2008, at 7:20 PM, Chris Harris wrote:
My current pre-production Solr install is a 1.3 pre-release build, and
I think I'm going to update to a more recent version before an
upcoming product release. Actually, "rele
Any modern
servlet container should be fine. I'd just stick with Jetty and the
built-in start.jar unless you have a compelling reason to switch.
Erik
On Nov 4, 2008, at 11:16 PM, Muhammed Sameer wrote:
Salaam,
I read somewhere that it is better to write a new start.jar file
th
One quick question are you seeing any evictions from your
filterCache? If so, it isn't set large enough to handle the faceting
you're doing.
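For reference, the cache is configured in solrconfig.xml; the sizes below
are purely illustrative, so tune them to your facet field cardinalities
and watch the evictions statistic:

```xml
<filterCache class="solr.LRUCache" size="16384" initialSize="4096"
             autowarmCount="4096"/>
```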
Erik
On Nov 4, 2008, at 8:01 PM, wojtekpia wrote:
I've been running load tests over the past week or 2, and I can
On Nov 6, 2008, at 1:54 AM, Sajith Vimukthi wrote:
Can someone point me to a source where I can find elaborated
documentation for Solr?
http://wiki.apache.org/solr
Erik
Would faceting on date (&facet.field=date&facet=on) satisfy your
need? It'll give you back all the dates and frequencies of them
within the matched results.
Erik
On Nov 6, 2008, at 4:59 AM, [EMAIL PROTECTED] wrote:
How can I get ALL the matching documents back? How ca
?
You should start with the Solr tutorial <http://lucene.apache.org/solr/tutorial.html
> and get a feel for how to work with the Solr example. Once you've
successfully walked through the tutorial, if you still have questions
we're here to help!
Erik
details, I load Solr's codebase into an IDE and navigate it that way
personally.
Erik
g the same keys used with HTTP requests.
Erik
Hi!
When making a query using the web interface we get the expected
OR behavior. But when using the Java client it looks like it is treating the
query as an AND query.
Is there a way to see which operator is used for the query when using SolrJ?
Regards Erik
Hi!
Sorry that I was unclear. When I wrote that it works in the web interface, I
also meant to say that the operator is set in the schema.xml file and is
therefore working there.
Sorry about that
Regards Erik
On Fri, Nov 7, 2008 at 11:33 AM, Jorge Solari <[EMAIL PROTECTED]> wrote:
> setting in s
Thanks Yonik for the answer!
I will try to implement that, but I can't seem to find how to do it using
SolrJ.
Do I just add it to the query field, or how is it done?
Regards Erik
On Mon, Nov 10, 2008 at 2:59 PM, Yonik Seeley <[EMAIL PROTECTED]> wrote:
> My gut tells me that
Note that the Java replication feature is Solr 1.4 and above. You'll
need to try a nightly or trunk build to get to this feature for now.
Erik
On Nov 11, 2008, at 1:38 PM, banished phantom wrote:
Hello everyone! I'm new to the Solr list. I've been using Solr 1.2
20 words as synonyms). Does this have such an adverse impact
Apparently so :/
Are there other components in your request handler that may also be
(re)executing a query? Does the debugQuery=true component timings
point to any other bottlenecks?
Erik
timings of all the components -
narrowing it down to the component would be the first step. My hunch
is that you've got an enormous dismax query going on, and perhaps it
is best to do index-time synonyms instead of query-time.
Erik
bq only works with dismax (&defType=dismax). To get the same effect
with the lucene/solr query parser, append a clause to the original
query (OR'ing it in).
Erik
On Nov 11, 2008, at 11:52 PM, Otis Gospodnetic wrote:
Hi,
It's hard to tell what you are replyin
ng with the other
parameters.
Erik
be pulled from that issue and applied to 1.3
release, even JAR'ing it up separately, and tossing it in as a
"plugin". We probably should be creating all these sorts of goodies
and independent modules of code that aren't "core", but that gets
fuzzy to say wh
-10-30T03:28:10.000Z" format and output it as a GMT formatted
date like "Oct 30 2008 03:28:10 GMT-0600".
Anyone got the incantation handy to make such a conversion in XSLT?
Erik
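A rough XSLT 1.0 sketch using substring() (it leaves the month as a number
and skips the timezone shift, so it's only a starting point; the variable
name is illustrative):

```xml
<xsl:variable name="d" select="'2008-10-30T03:28:10.000Z'"/>
<xsl:value-of select="concat(substring($d, 6, 2), '/',
                             substring($d, 9, 2), '/',
                             substring($d, 1, 4), ' ',
                             substring($d, 12, 8), ' GMT')"/>
```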
AND must be entirely capitalized to set clauses on both sides as
_required_.
Erik
On Nov 14, 2008, at 12:44 AM, Raghunandan Rao wrote:
Yes.
But is that how we do it in SolrJ, by setting SolrQuery("text:Right And
title:go")?
Thanks a lot.
-Original Message-
From: Rya
d append all of those together into a
single qf with a space separator (URL encoded):
&qf=title^30+title_en^20+description
Same with bf, it is single-valued so you have to combine everything
into a single parameter.
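A small Java sketch of building and URL-encoding that single qf value
(class and method names are illustrative):

```java
import java.net.URLEncoder;

public class QfParam {
    // Combine per-field boosts into the single qf value dismax expects,
    // then URL-encode it for use in a request URL. The field names and
    // boosts are the ones from this thread.
    public static String buildQf() throws Exception {
        String qf = String.join(" ", "title^30", "title_en^20", "description");
        return "qf=" + URLEncoder.encode(qf, "UTF-8");
    }

    public static void main(String[] args) throws Exception {
        // prints qf=title%5E30+title_en%5E20+description
        System.out.println(buildQf());
    }
}
```

The same join-then-encode approach applies to bf, since it is also
single-valued.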
Erik
Fergus,
I just downloaded Tomcat 5.5.27, put a solr.xml file in conf/Catalina/
localhost with the following:
debug="0" crossContext="true" >
And Solr started up just fine, and its admin pages etc. worked as expected.
Oh, and on Mac OS X (of course!), versi
To be fair, my first message was about Solr trunk + Tomcat 5.5.27, but
I just tried it by pointing to a Solr 1.3.0 official release and it
worked fine as well.
Erik
On Nov 14, 2008, at 12:30 PM, Erik Hatcher wrote:
Fergus,
I just downloaded Tomcat 5.5.27, put a solr.xml file in
On Nov 14, 2008, at 4:11 PM, Dan A. Dickey wrote:
Note to whomever writes documentation:
We all do :)
Feel free to create a wiki account and edit the page if you like.
Much appreciated in fact!
You're right ... that's a confusing oddity about the command.
Erik
I
was all it took,
leaving the Solr date output as-is. Substring extraction would have
done the trick just fine though.
I'll be contributing these bits back to contrib/velocity or as part of
SolrJS when I get things running nicely.
Erik
On Nov 14, 2008, at 4:50 PM, Chris Hos
n the Solr
install directory. I've never put a Lucene JAR, or any other JAR for
that matter, into that directory.
I don't know what differs in Tomcat and Jetty startups, but tracking
down classloader issues can be a road to madness.
Erik
we'll get that fixed up.
Maybe it all works using the codebase here? <http://wiki.apache.org/solr/SolrJS
>
Matthias and Ryan - let's get SolrJS integrated into contrib/
velocity. Any objections/reservations?
Erik
On Nov 16, 2008, at 10:59 AM, JCodina wrote:
t out this is a Solr deployment configuration
suitable for direct browser access, but we're not safely there yet are
we? Is this an absurd goal? Must we always have a moving piece
between browser and data/search servers?
Thanks,
Erik
e
lot of public apps now. In other words, another tier in front of Solr
doesn't add (much) to DoS protection to an underlying Solr, no?
Erik
What about SolrJS? Isn't it designed to hit a Solr directly? (Sure,
as long as the response looked like Solr response, it could have come
through some magic 'security' tier).
Erik
On Nov 16, 2008, at 5:54 PM, Ryan McKinley wrote:
I'm not totally sure what you
Matthias
and I will have to resolve that.
Erik
cript" and create a "dependency" to contrib/velocity for
ServerSideWidgets?
Sure, contrib/javascript sounds perfect.
If that's ok, I'll have a look at the directory structure and the
current ant build.xml to make them fit into the common solr
structure and build.
Awesome, thanks!
Erik
et me into design meetings any more ;(
Apparently they shouldn't let me into them either ;)
Erik
ade to work? (I plead ignorance on the guts of the Java-
based replication feature) - requires password protected handlers?
Shouldn't we bake some of this into the default example configuration
instead of update handlers being wide open by default?
Erik
A LimitingRowsSearchComponent could easily do this as a plugin though.
Erik
On Nov 16, 2008, at 6:55 PM, Walter Underwood wrote:
Limiting the maximum number of rows doesn't work, because
they can request rows 2-20100. --wunder
But you could limit how many rows could be returned in a single
request... that'd close off one DoS mechanism.
Erik
On Nov 17, 2008, at 9:07 AM, Yonik Seeley wrote:
On Mon, Nov 17, 2008 at 8:54 AM, Erik Hatcher
<[EMAIL PROTECTED]> wrote:
Sounds like the perfect case for a query parser plugin... or use
dismax as
Ryan mentioned. Shouldn't Solr be hardened for these cases
anyway? Or at
least
front? Authentication? Row limiting?
Erik
de contrib/javascript?
I need to understand it a bit more, but no subclass is necessary...
we'll patch it into contrib/velocity's VrW like you had it before.
Erik
embedded Solr. This way VrW can be separated from core Solr to
another "tier" and template on remote Solr responses. Thoughts on how
this feature might play out in that scenario?
Erik
On Nov 17, 2008, at 1:09 PM, Matthias Epheser wrote:
Erik Hatcher schrieb:
On Nov 1
want
to get it using SolrJ's API for request/response rather than the more
internal stuff we're using now.
Erik
Yeah, it'd work, though not only does the version of Lucene need to
match, but the field indexing/storage attributes need to jive as well
- and that is the trickier part of the equation.
But yeah, LuSQL looks slick!
Erik
On Nov 17, 2008, at 2:17 PM, Matthew Runo wrote:
trouble is, you can also GET /solr/update, even all on the URL, no
request body...
<http://localhost:8983/solr/update?stream.body=%3Cadd%3E%3Cdoc%3E%3Cfield%20name=%22id%22%3ESTREAMED%3C/field%3E%3C/doc%3E%3C/add%3E&commit=true
>
Solr is a bad RESTafarian.
Getting warmer!
Glen,
The thing is, Solr has a database integration built-in with the new
DataImportHandler. So I'm not sure how much interest Solr users
would have in LuSql by itself.
Maybe there are LuSql features that DIH could borrow from? Or vice
versa?
Erik
On Nov 17, 2008, at
ter_destroy hooks. See slide 13 of <http://code4lib.org/files/solr-ruby.pdf
>
Erik
The original term is also indexed, but during querying in
phrases, the common terms are again concatenated, thus making querying
a lot faster.
I may not have explained it entirely accurately, but that's the gist.
Have a look at Nutch's Analyzer for more details.
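The gist above can be sketched in a few lines of plain Java (a toy
illustration, not Nutch's actual analyzer; the common-word list and the
underscore token format are made up):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class CommonGrams {
    // A made-up list of "common" terms; Nutch derives its own.
    static final Set<String> COMMON = new HashSet<>(Arrays.asList("the", "a", "of"));

    // Emit each original term, plus a concatenated gram whenever a pair
    // involves a common term, so phrase queries over common words can hit
    // a single indexed token instead of a slow positional phrase.
    public static List<String> tokens(String[] words) {
        List<String> out = new ArrayList<>();
        for (int i = 0; i < words.length; i++) {
            out.add(words[i]); // the original term is also indexed
            if (i + 1 < words.length
                    && (COMMON.contains(words[i]) || COMMON.contains(words[i + 1]))) {
                out.add(words[i] + "_" + words[i + 1]);
            }
        }
        return out;
    }

    public static void main(String[] args) {
        // prints [the, the_cat, cat]
        System.out.println(tokens(new String[] {"the", "cat"}));
    }
}
```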
Erik
you plan on updating the rows in that table and reindexing them?
Seems like some kind of unique key would make sense for updating
documents.
But yeah, a more detailed description of your table structure and
searching needs would be helpful.
Erik
On Nov 19, 2008, at 5:18 AM, A
I kind of remember hearing that Solr was using SLF4J for logging, but
I haven't been able to find any information about it. And in that case, where
do you set it to redirect to your log4j setup, for example?
Regards Erik
Note that you can use a standard Lucene Analyzer subclass too. The
example schema shows how with this commented out:
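From memory, the commented-out example is along these lines (the exact
analyzer class shown may differ by version):

```xml
<fieldType name="text_greek" class="solr.TextField">
  <analyzer class="org.apache.lucene.analysis.el.GreekAnalyzer"/>
</fieldType>
```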
Erik
On Nov 19, 2008, at 6:24 PM, Glen Newton wrote:
Thanks.
I've decided to use:
positionIncrementGap
I haven't actually duplicated the issue myself though.
Thanks,
Erik
Oct 29, 2008 10:14:31 AM org.apache.catalina.startup.HostConfig
undeployApps
WARNING: Error while removing context [/search]
java.lang.NullPointerException
at org.apache.solr.servlet.SolrDispatchFilt
I'd suggest aggregating those three columns into a string that can
serve as the Solr uniqueKey field value.
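A trivial sketch of that aggregation in Java (the separator and column
values are made up for illustration):

```java
public class CompositeKey {
    // Join the three view columns into one value for Solr's single
    // uniqueKey field. The separator choice matters: pick something
    // that can never occur in the column values themselves.
    public static String compositeKey(String a, String b, String c) {
        return String.join("::", a, b, c);
    }

    public static void main(String[] args) {
        // prints 42::en::2008-11-20
        System.out.println(compositeKey("42", "en", "2008-11-20"));
    }
}
```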
Erik
On Nov 20, 2008, at 1:10 AM, Raghunandan Rao wrote:
Basically, I am working on two views. First one has an ID column. The
second view has no unique ID column. What
1.3.0 final release.
Erik
On Nov 20, 2008, at 2:03 AM, Shalin Shekhar Mangar wrote:
Eric, which Solr version is that stack trace from?
On Thu, Nov 20, 2008 at 7:57 AM, Erik Hatcher <[EMAIL PROTECTED]
>wrote:
In analyzing a client's Solr logs, from Tomcat, I came acro
results.
Add &debugQuery=true to your query string, look at the debug section
of the output and the explanations for why documents are matching.
That'll reveal the secret.
Erik
Blacklight,
VuFind, fac-back-opac are the big ones. There's also a SolrMarc
project out there with a very customizable MARC indexer.
Just FYI.
Erik
it's even
possible to do both?
Regards Erik
Ok, thanks Ryan!
On Thu, Nov 20, 2008 at 9:03 AM, Ryan McKinley <[EMAIL PROTECTED]> wrote:
>
> On Nov 20, 2008, at 11:57 AM, Erik Holstad wrote:
>
> Thanks for the help Ryan!
>> Using the start.jar with 1.3 and added the slf4j jar to the classpath.
>> When
>
better/cleaner as we go, so we appreciate your
early-adopter help ironing out this stuff.
Erik
On Nov 20, 2008, at 5:44 PM, JCodina wrote:
I could not manage to use it yet. :confused:
My doubts are:
- must I download solr from svn - trunk?
- then, must I apply the patches of solrjs a
On Nov 22, 2008, at 4:26 AM, Erik Hatcher wrote:
I just got the client-side demo on trunk to work (with a few tweaks
to make it work with the example core Solr data).
On trunk follow these steps:
* root directory: ant example
One extra step needed, for the pedantic...
* launch Solr
Certainly, though, it's not possible to show off all widgets this way
(like country selection), but the more unified we make SolrJS and VrW,
the easier it will be for folks to try and eventually adopt these cool
technologies.
Erik
On Nov 22, 2008, at 5:50 AM, Matthias Epheser wrote:
remove "myid:" from that value and you should be in good shape.
Granted it is confusing. But what's the alternative? Maybe calling
every attribute that needs to refer to a uniqueKey literally
"uniqueKey"? I don't think we want to have attributes changing their
name based on the uniqueKey field name.
Erik
; from value '2008-11-24T12:58:47Z'
Note that it says "type=sdouble". You need to have that mapped to a
date field, not sdouble. I guess you're getting caught by the *_d
mapping from the example schema? Try time_on_xml_dt instead, if
you've got that mapped.
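The relevant dynamic field mappings in the example schema look roughly
like this:

```xml
<dynamicField name="*_d"  type="sdouble" indexed="true" stored="true"/>
<dynamicField name="*_dt" type="date"    indexed="true" stored="true"/>
```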
Erik
values
is it possible?
It's not possible with a purely boolean query like this, but it is
possible with a sloppy phrase query where the position increment
gap (see example schema.xml) is greater than the slop factor.
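Sketched out, the two pieces are a gap in the field type and a smaller
slop in the query (values here are illustrative):

```xml
<!-- schema.xml: 100 positions separate values of a multiValued field -->
<fieldType name="text" class="solr.TextField" positionIncrementGap="100">
  <analyzer>
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
  </analyzer>
</fieldType>
```

A query like myfield:"red shoes"~50 then matches only within a single
value, since crossing values would require a slop of at least 100.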
Erik
I think what is needed here is the concept of SAME
t warm those filterCaches up and your
performance should be quite acceptable. Report back with more details
if not.
Erik
https://issues.apache.org/jira/secure/attachment/12394070/sslogo-solr-finder2.0.png
https://issues.apache.org/jira/secure/attachment/12394475/solr2_maho-vote.png
https://issues.apache.org/jira/secure/attachment/12393995/sslogo-solr-70s.png
You can omit documents. I recommend doing it with a filter query.
Append the following to your request to Solr:
&fq=-price:0
That does the trick? You'll have to have client logic to only add
that parameter when sorting by price, if that's how you want it to work.
e.org/viewvc/lucene/solr/trunk/client/ruby/solr-ruby/test/unit/standard_response_test.rb?view=markup
>
Erik
On Nov 28, 2008, at 3:41 AM, Robert Young wrote:
I'm not using Java unfortunately. Is there anything that allows me to
interact with it much like a normal mock object, se
On Nov 28, 2008, at 8:38 PM, Yonik Seeley wrote:
Or, it would be relatively trivial to write a Lucene program
to merge the indexes.
FYI, such a tool exists in Lucene's API already:
<http://lucene.apache.org/java/2_4_0/api/org/apache/lucene/misc/IndexMergeTool.html
>
Erik
Adding constraints obtained from facets is best done using fq anyway,
so it's worth making that switch in your client code.
Erik
On Nov 30, 2008, at 10:43 AM, Peter Wolanin wrote:
Hi Grant,
Thanks for your feedback. The major short-term downside to switching
to dismax
It means the request was successful. If the status is non-zero (err,
1) then there was an error of some sort.
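For reference, the header in question looks like this in the XML response:

```xml
<lst name="responseHeader">
  <int name="status">0</int>
  <int name="QTime">1</int>
</lst>
```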
Erik
On Dec 4, 2008, at 9:32 AM, Robert Young wrote:
In the standard response format, what does the status mean? It
always seems
to be 0.
Thanks
Rob
g back in
whatever format you want.
Erik
On Dec 5, 2008, at 3:02 PM, Dan Robin wrote:
I am using solrj to query solr and the QueryResponse.getResults()
returns a
SolrDocumentList. There is a SolrDocument in the list with the
results I
want. The problem is that I want to view these
dismax doesn't support field selection in its query syntax, only via
the qf parameter.
Add &debugQuery=true to see how the queries are being parsed; that'll
reveal what is going on.
Erik
On Dec 10, 2008, at 5:07 AM, sunnyfr wrote:
Hi,
I would like to
Use bq (boosting query) for boosting by status:
bq=status_official:true^2, and remove it from the qf parameter. That
should do the trick.
Erik
make requests or not it'd do the trick.
Erik
On Dec 13, 2008, at 12:54 AM, Kay Kay wrote:
For a particular application of ours - we need to suspend the Solr
server from doing any query operation ( IndexReader-s) for sometime,
and then after sometime in the near future (
, you'll have to merge those three fields into a single field as
Solr only uses one field for uniqueKey.
Erik
Solr trunk.
Thanks,
Erik
transformer on the
entity rather than the field it is used on? I don't yet understand
why the transformer is entity-based rather than per-field.
Thanks for your help, Noble.
Erik
On Sun, Dec 14, 2008 at 7:18 AM, Erik Hatcher
wrote:
I'm trying to index a blog with
Thanks again, Noble. All is working fine for me now.
Erik
On Dec 14, 2008, at 10:33 AM, Noble Paul നോബിള്
नोब्ळ् wrote:
Also, I want to add in date transformation like in the example above
commented out. How would I use both the TemplateTransformer and the
DateFormatTransformer
with the fields
specified in the XML.
Currently this is not possible, as far as I know. Maybe this sort of
thing could be coded as part of an update processor chain? Somehow
DIH and Tika need to tie together eventually too, eh?
Erik
pointing
the extracting request to a file path visible to Solr.
Erik
Check your solrconfig.xml for the <maxFieldLength> setting (10000 in the
example config).
That's probably the truncating factor. That's the maximum number of
terms, not bytes or characters.
Erik
On Dec 15, 2008, at 5:00 PM, Antonio Zippo wrote:
Hi all,
i have a TextField containing over 400k of text
when i try t
, and configure it
in solrconfig.xml, and you should be good to go. Subclassing existing
classes, this should only take a handful of lines of code.
Erik
On Dec 16, 2008, at 3:54 AM, psyron wrote:
I have the same problem, also need to plugin my "customComparator",
but
Mark,
Looked at the code to discern this...
A fragmenter isn't responsible for the number of snippets - the higher
level SolrHighlighter is the component that uses that parameter. So
yes, it must be specified at the request handler level, not the
fragmenter configuration.
Can't we log with the core as part of the context of the logger,
rather than just the classname? This would give you core logging
granularity just by config, rather than scraping.
Yes?
Erik
On Dec 17, 2008, at 9:47 AM, Ryan McKinley wrote:
As is, the log classes are stati
FQCNs that's a decent convention for most
cases with a package structure that is well organized and filterable.
In this case having the core name in there as a prefix makes a lot of
sense to me.
We could provide a LoggerUtils.getLogger(core, clazz) or something
like to keep it DRY and consistent.
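A minimal sketch of that helper, assuming java.util.logging; the
"solr.core." name prefix is a made-up convention for this sketch:

```java
import java.util.logging.Logger;

public class LoggerUtils {
    // Prefix the logger name with the core name so per-core log output
    // can be routed and filtered purely via logging configuration,
    // rather than by scraping log lines afterward.
    public static Logger getLogger(String coreName, Class<?> clazz) {
        return Logger.getLogger("solr.core." + coreName + "." + clazz.getName());
    }

    public static void main(String[] args) {
        // prints solr.core.core_de.java.lang.String
        System.out.println(getLogger("core_de", String.class).getName());
    }
}
```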
Erik