I've been investigating this further and I might have found another path
to consider.
Would it be possible to create a custom implementation of a SortField,
comparable to the RandomSortField, to tackle the problem?
I know it is not your standard question, but I would really appreciate any
feedback.
First of all,
do you expect every query to return results for all 4 buckets?
I.o.w.: say you make a SortField that sorts for score 4 first, then 3, 2, 1.
When displaying the first 10 results, is it ok that these documents
potentially all have score 4, and thus only bucket 1 is filled?
If so, I can
Just to be clear,
this is for the use-case in which it is ok that potentially only 1 bucket
gets filled.
2010/6/14 Geert-Jan Brits
> First of all,
>
> Do you expect every query to return results for all 4 buckets?
> i.o.w: say you make a Sortfield that sorts for score 4 first, than 3, 2,
> 1.
>
Hello Geert-Jan,
This seems like a very promising idea, I will test it out later today.
It is not expected that we have results in all buckets; we have many
use-cases where only 1 or 2 buckets are filled.
It is also not a problem that the first 10 results (or 20 in our case)
all fall in the same bucket.
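For reference, a rough sketch of the direction under discussion, modeled
on Solr 1.4's RandomSortField; it assumes the bucket score (4..1) is
indexed as an int field, and the class and field names are made up:

    import org.apache.lucene.search.SortField;
    import org.apache.solr.schema.IntField;
    import org.apache.solr.schema.SchemaField;

    // Hypothetical field type: documents carry an indexed int bucket
    // score (4 = best match), and sorting on a field of this type
    // always returns higher buckets first.
    public class BucketSortField extends IntField {

      @Override
      public SortField getSortField(SchemaField field, boolean reverse) {
        // Invert the flag so an ascending sort still yields 4, 3, 2, 1;
        // a custom FieldComparatorSource would go here instead if the
        // bucket had to be computed at query time rather than read
        // from the index.
        return new SortField(field.getName(), SortField.INT, !reverse);
      }
    }

Such a type would be declared in schema.xml like any other fieldType
(class="com.example.BucketSortField") and sorted on by field name.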
After some more research, I found an even older thread on the list where it
was discussed a little more, but still no separate logfiles:
http://search.lucidimagination.com/search/document/a5cdc596b2c76a7c/setting_a_log_file_per_core_with_slf4
Anyway I will use this in my custom code to add a p
Hi Alex,
as I understand the thread, you will have to change the solr src then,
right? The logPath is not available, or did I understand something wrong?
If you are okay with touching solr, I would rather suggest repackaging
the solr.war with a different logging configuration. (so that the cores
do
On Monday 14 June 2010 13:21:31 Peter Karich wrote:
> as I understand the thread you will have to change the solr src then,
> right? The logPath is not available or did I understand something wrong?
For me, I will only change my own custom code, not the original src from solr.
I had to write a cus
hi,
I have two questions:
1) how can I set a default value on an imported field if the
field/column is missing from a SQL query?
2) I had a problem with the dataimporthandler. In one database column
(WebDst) I have a string with comma/semicolon separated numbers, like
100,200; 300;400,
hi,
check Regex Transformer
http://wiki.apache.org/solr/DataImportHandler#RegexTransformer
umar
On Mon, Jun 14, 2010 at 5:44 PM, wrote:
> hi,
>
> i have two questions:
>
> 1) how can i set a default value on an imported field if the
> field/column is missing from a SQL query
> 2) i had a probl
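For question 2, the splitBy attribute of RegexTransformer is usually
enough; a hypothetical data-config.xml entity (table and column names
invented):

    <entity name="item" transformer="RegexTransformer"
            query="select id, WebDst from item">
      <!-- splits "100,200; 300;400," on commas/semicolons into
           multiple values for a multivalued field -->
      <field column="webDst" sourceColName="WebDst" splitBy="[,;]\s*"/>
    </entity>

For question 1, the default attribute on the field definition in
schema.xml supplies a value whenever the import leaves the field empty.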
Hello.
I want to use the VelocityResponseWriter. I did all the steps from this
site: http://wiki.apache.org/solr/VelocityResponseWriter
Built a war file with "ant dist" and used it, but solr cannot find the
VelocityResponseWriter class:
java.lang.NoClassDefFoundError: org/apache/solr/resp
What version of Solr are you using?
If you're using trunk, the VelocityResponseWriter is built in to the
example.
If you're using previous versions, try specifying
"solr.VelocityResponseWriter" as the class name, as it switched from
the request to the response packages, and the "solr." sh
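Concretely, the suggestion amounts to a declaration like this in
solrconfig.xml (the writer name is an assumption):

    <queryResponseWriter name="velocity" class="solr.VelocityResponseWriter"/>

The "solr." prefix lets Solr resolve the class against its known
packages, whichever side of the request/response move it lives on.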
ah okay.
I tried it with 1.4 and put the jars into lib of solr.home but it won't
work. I get the same error ...
I use 2 cores, and my solr.home is ...path/cores. In this folder I put
another folder with the name "lib" and put all these jars into it:
apache-solr-velocity-1.4-dev.jar
velocity-
Hi all,
we are using woodstox-4.0 and solr-1.4 in our project.
As solr is using woodstox-3.2.7, there is a version clash.
So I tried to check if solr would run with woodstox-4.0.
I downloaded a clean solr-1.4.0 and replaced wstx-asl-3.2.7.jar with
stax2-api-3.0.2.jar and woodstox-core-lgpl-4.0.8
Just wanted to push the topic a little bit, because those questions come up
quite often and it's very interesting for me.
Thank you!
- Mitch
MitchK wrote:
>
> Hello community, and a nice Saturday,
>
> from several discussions about Solr and Nutch, I got some questions for a
> virtual web-sear
All, FYI, as SolrCell is built on top of Tika, some folks might be interested
in this message I posted to the Tika lists.
Thanks!
Cheers,
Chris
-- Forwarded Message
From: "Mattmann, Chris A (388J)"
Reply-To:
Date: Fri, 11 Jun 2010 19:07:24 -0700
To:
Cc:
Subject: Tika in Action
Hi Folks
Hi Alex!
> Am I missing something? Anything more to test?
>
Are you using solrj too? If so, beware of:
https://issues.apache.org/jira/browse/SOLR-1950
Regards,
Peter.
On Jun 14, 2010, at 9:12 AM, stockii wrote:
I tried it with 1.4 and put the jars into lib of solr.home but it won't
work. I get the same error ...
I use 2 cores, and my solr.home is ...path/cores. In this folder I put
another folder with the name "lib" and put all these jars into it:
apache
Hi Peter!
Yes, we do.
Thanks for the hint!
Cheers,
Alex
On 14.06.10 16:49, "Peter Karich" wrote:
> Hi Alex!
>
>> Am I missing something? Anything more to test?
>>
>
> Are you using solrj too? If so, beware of:
> https://issues.apache.org/jira/browse/SOLR-1950
>
> Regards,
> Peter
Hi,
I use Solr Cell to send specific content files. I developed a dedicated
Parser for specific mime types.
However I cannot get Solr to accept my new mime types.
In solrconfig, in the update/extract requesthandler I specified
<str name="tika.config">./tika-config.xml</str>, where tika-config.xml is in the
conf directory (same as so
Hi Olivier,
Are you setting the mime type explicitly via the stream.type parameter?
-- Ken
On Jun 14, 2010, at 9:14am, olivier sallou wrote:
Hi,
I use Solr Cell to send specific content files. I developed a
dedicated
Parser for specific mime types.
However I cannot get Solr to accept my n
Yeap, I do.
As magic is not set, this is the reason why it looks for this specific
mime-type. Unfortunately, it seems it either does not read my specific
tika-config file or the mime-types file. But there are no error logs
concerning those files... (is it not trying to load them?)
2010/6/14 Ken Krugler
> H
I'm new to Solr, but I'm interested in setting it up to act like a google
search appliance to crawl and index my website.
It's my understanding that nutch provides the web crawling but needs to be
integrated with Solr in order to get a google search appliance type
experience.
Two questions:
1.
Hi,
Does anyone know how to access the dataimport handler on a multicore setup?
This is my solr.xml
I've tried http://localhost:8080/solr/advisors/dataimport but that
doesn't work. My solrconfig.xml for advisors looks like this:
The issue is your request handler path: <requestHandler
name="/advisor/dataimport" ...>; use name="/dataimport" instead. Implicitly
all access to a core is /solr/<corename>/ and all paths in solrconfig
go after that.
Erik
On Jun 14, 2010, at 1:44 PM, Moazzam Khan wrote:
Hi,
Does anyone know how to access the dataimport handler on a mul
Thanks! It worked.
- Moazzam
On Mon, Jun 14, 2010 at 12:48 PM, Erik Hatcher wrote:
> The issue is your request handler path: <requestHandler
> name="/advisor/dataimport" ...>; use name="/dataimport" instead. Implicitly
> all access to a core is /solr/<corename>/ and all paths in solrconfig go
> after that.
>
> Erik
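For anyone finding this thread later, the working setup is roughly (a
sketch; the config file name is assumed):

    <requestHandler name="/dataimport"
                    class="org.apache.solr.handler.dataimport.DataImportHandler">
      <lst name="defaults">
        <str name="config">data-config.xml</str>
      </lst>
    </requestHandler>

which is then reachable per core at
http://localhost:8080/solr/advisors/dataimport.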
: i only want the response format of StandardSearchHandler for the
: TermsComponent. how can i do this in a simple way ? :D
I still don't understand what you are asking ... TermsComponent returns
data about terms. The SearchHandler runs multiple components, and returns
whatever data those c
i'm not very knowledgeable on spatial search, but...
: for example, if I were to use a filter query such as
:
: {!frange l=0 u=75}dist(2,latitude,longitude,44.0,73.0)
:
: I would expect it to return all results within 75 mi of the given
: latitude and longitude. however, the values being returne
On Mon, Jun 14, 2010 at 3:35 PM, Chris Hostetter
wrote:
> fq={!frange l=0 u=1}hsin(XXX,44.0,73.0,latitude,longitude,true)
>
> ...where XXX is the radius of the earth in miles (i didn't bother to look
> it up)
That's what the docs say, but it doesn't really work in my experience.
IMO, the spatial
: B- A backup of the current index would be created
: C- Re-Indexing will happen on Master-core2
: D- When Indexing is done, we'll trigger a swap between Master-core1 and
: core2
...
: But how can I do B, C, and D? I'll do it manually. Wait! I'm not sure my boss will
: pay for that.
: 1/Can I
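Step D at least needs no custom code; the CoreAdmin API can swap two
cores in one call (host and core names taken loosely from the question):

    http://master-host:8983/solr/admin/cores?action=SWAP&core=core1&other=core2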
: 10 minutes. Sure, but idea now is to index all documents with a index
: date, set this index date 10 min to the future and create a filter
: "INDEX_DATE:[* TO NOW]".
:
: Question 1: is it possible to set this as part of solr-config, so every
: implementation against the server will regard th
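If the goal is that every request gets the filter without clients asking
for it, the "appends" section of the request handler in solrconfig.xml
can do that (a sketch; handler name assumed):

    <requestHandler name="standard" class="solr.SearchHandler" default="true">
      <lst name="appends">
        <str name="fq">INDEX_DATE:[* TO NOW]</str>
      </lst>
    </requestHandler>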
: this is my request to solr, and i cannot change this:
: http://host/solr/select/?q=string
:
: i cannot change this =( so i have a new termsComponent. i want to use
: q=string as default for terms.prefix=string.
:
: can i do something like this: ?
:
:
: true
: suggest
: ind
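The XML stripped from the message above was presumably the usual
TermsComponent wiring; for reference, a typical Solr 1.4 setup looks
roughly like this (handler and component names assumed):

    <searchComponent name="terms"
                     class="org.apache.solr.handler.component.TermsComponent"/>

    <requestHandler name="/terms" class="solr.SearchHandler">
      <lst name="defaults">
        <bool name="terms">true</bool>
      </lst>
      <arr name="components">
        <str>terms</str>
      </arr>
    </requestHandler>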
: Here are the wrappers to use ...solrj.SolrServer
: [code]
: public class SolrCoreServer
: {
:private static Logger log = LoggerFactory.getLogger(SolrCoreServer.class);
:
:private SolrServer server=null;
:
:public SolrCoreServer(CoreContainer container, String coreName)
:{
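The snippet is cut off above; presumably the constructor wraps an
EmbeddedSolrServer, which is the stock SolrJ way to address one named
core inside a CoreContainer. A minimal sketch of how it might continue
(the accessor is invented):

    import org.apache.solr.client.solrj.SolrServer;
    import org.apache.solr.client.solrj.embedded.EmbeddedSolrServer;
    import org.apache.solr.core.CoreContainer;

    public class SolrCoreServer {
        private SolrServer server = null;

        public SolrCoreServer(CoreContainer container, String coreName) {
            // bind this wrapper to a single named core in the container
            server = new EmbeddedSolrServer(container, coreName);
        }

        public SolrServer getServer() {
            return server;
        }
    }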
: - I want my search to "auto" spell check - that is if someone types
: "restarant" I'd like the system to automatically search for restaurant.
: I've seen the SpellCheckComponent but that doesn't seem to have a simple way
: to automatically do the "near" type comparison. Is the SpellCheckCompone
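For what it's worth, the usual 1.4 approach to this is
spellcheck.collate, which returns the query rewritten with the best
suggestions so the client can re-run it automatically (a sketch; handler
name assumed):

    <requestHandler name="/suggestsearch" class="solr.SearchHandler">
      <lst name="defaults">
        <str name="spellcheck">true</str>
        <str name="spellcheck.collate">true</str>
      </lst>
      <arr name="last-components">
        <str>spellcheck</str>
      </arr>
    </requestHandler>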
: on one of the PDF documents and this causes indexing to stop (the
: TikaEntityProcessor) throws a Severe exception. Is it possible to ignore
: this exception and continue indexing by some kind of solr configuration ?
i'm not really a power user of DIH, but have you tried adjusting the value
of t
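If the truncated suggestion is DIH's onError attribute (my guess), it
would look like this on the entity (processor name from the question):

    <entity name="docs" processor="TikaEntityProcessor" onError="skip">
      <!-- onError="skip" drops the failing document instead of
           aborting the whole import -->
    </entity>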
: > if you are only seeing one log line per request, then you are just looking
: > at the "request" log ... there should be more logs with messages from all
: > over the code base with various levels of severity -- and using standard
: > java log level controls you can turn these up/down for vario
: Does Solr handle having two masters that are also slaves to each other (ie
: in a cycle)?
no.
-Hoss
: the queryparser first splits on whitespace.
FWIW: robert is referring to the LuceneQParser, and it also applies to the
DismaxQParser ... whitespace is considered markup in those parsers unless
it's escaped or quoted.
The FieldQParser may make more sense for your use case - or you may need a
c
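For example, with the field QParser the whole string is handled as one
value against a single field rather than split on whitespace (field name
assumed):

    q={!field f=title}Foo Bar Baz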
You'll have to give us some specific details of what your code/queries
look like, and the exact error messages you are getting back, if you expect
anyone to be able to come up with a meaningful guess as to what might be
going wrong for you.
Off the top of my head, there is no reason I can think
: Problem is that they want scores that make results fall in buckets:
:
: * Bucket 1: exact match on category (score = 4)
: * Bucket 2: exact match on name (score = 3)
: * Bucket 3: partial match on category (score = 2)
: * Bucket 4: partial match on name (score = 1)
...
:
: How do you customize the RequestLog to include the query time, hits, and
the "RequestLog" is a jetty specific log file -- it's only going to know
the concepts that Jetty specifically knows about.
: Note, I do see this information in log.solr.0, but it also includes the full
: query parameters wh
: As it stands, solr works fine, and sites like
: http://locahost:8983/solr/admin also work.
:
: As soon as I put a solr.xml in the solr directory, and restart the tomcat
: service. It all stops working.
You need to elaborate on "It all stops working" ... what does that mea
: Is there a way to copy a multivalued field to a single value by taking
: for example the first index of the multivalued field?
Unfortunately no. This would either need to be done with an
UpdateProcessor, or on the client constructing the doc (either the remote
client, or in your DIH config
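A hedged sketch of the UpdateProcessor route (field names are invented;
a matching UpdateRequestProcessorFactory, not shown, would register it in
an updateRequestProcessorChain in solrconfig.xml):

    import java.io.IOException;
    import java.util.Collection;

    import org.apache.solr.common.SolrInputDocument;
    import org.apache.solr.update.AddUpdateCommand;
    import org.apache.solr.update.processor.UpdateRequestProcessor;

    // Copies the first value of a multivalued field into a
    // single-valued field at index time.
    public class FirstValueCopyProcessor extends UpdateRequestProcessor {

        public FirstValueCopyProcessor(UpdateRequestProcessor next) {
            super(next);
        }

        @Override
        public void processAdd(AddUpdateCommand cmd) throws IOException {
            SolrInputDocument doc = cmd.getSolrInputDocument();
            Collection<Object> values = doc.getFieldValues("categories");
            if (values != null && !values.isEmpty()) {
                doc.setField("primaryCategory", values.iterator().next());
            }
            super.processAdd(cmd); // hand the doc down the chain
        }
    }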
: In solrconfig, in the update/extract requesthandler I specified
: <str name="tika.config">./tika-config.xml</str>, where tika-config.xml is in the
: conf directory (same as solrconfig).
can you show us the full requestHandler declaration? ... tika.config needs
to be a direct child of the requestHandler (not in the defaults)
I also
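In other words, something like this (a sketch of the shape Hoss
describes, not a verified config):

    <requestHandler name="/update/extract"
                    class="org.apache.solr.handler.extraction.ExtractingRequestHandler">
      <!-- direct child of the requestHandler, NOT inside
           <lst name="defaults"> -->
      <str name="tika.config">./tika-config.xml</str>
    </requestHandler>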
: I believe I'll need to write some custom code to accomplish what I want
: (efficiently that is) but I'm unsure of what would be the best route to
: take. Will this require a custom request handler? Search component?
You'll need a customized version of the FacetComponent if you want to do
this
I can't think of any way this could happen -- can you provide some more
details on what exactly you are doing, and what you are doing to observe
the problem?
In particular:
* what do each of your DIH config files look like?
* what URLs are you using to trigger DIH imports?
* how are you ch
: ...you've already got the conceptual model of how to do it, all you need
: now is to implement it as a Component that does the secondary-faceting in
: the same requests (which should definitely be more efficient since you can
: reuse the DocSets) instead of issuing secondary requests from your cl
: I am currently working with the following:
:
: {code}
: {!frange l=0 u=1 unit=mi}dist(2,32.6126, -86.3950, latitude, longitude)
: {/code}
...
: {code}
: {!frange l=0 u=1 unit=mi}dist(2,32.6126, -86.3950, latitude,
: longitude) OR {!frange l=0 u=1 unit=mi}dist(2,44.1457, -73.8152,
: latit
: : ...you've already got the conceptual model of how to do it, all you need
: : now is to implement it as a Component that does the secondary-faceting in
: : the same requests (which should definitely be more efficient since you can
: : reuse the DocSets) instead of issuing secondary requests from
We're excited to announce Surge, the Scalability and Performance
Conference, to be held in Baltimore on Sept 30 and Oct 1, 2010. The
event focuses on case studies that demonstrate successes (and failures)
in Web applications and Internet architectures.
Our Keynote speakers include John Allspaw an
Hi Chris,
Thank you so much for the help & reply to my query. However, my
problem got resolved: there was a configuration problem in my solrconfig.xml
file. The <dataDir> tag was not configured properly, which is why both cores
were pointing to the same directory for indexing.
Regards,
Siddharth
Hello,
when I rebuild the spellchecker index (by optimizing the data index or
by calling cmd=rebuild), the spellchecker index is not optimized. I even
cannot delete the old index files on the filesystem, because they are
locked by the solr server. I have to stop the solr server (Resin) to
optimize