Hi Cris
I've used the TZ param with UpdateXmlMessages, but the data are indexed
with the default GMT value on the timestamp field, one hour behind the
current system time.
I am testing with curl, like so:
curl 'http://10.240.234.133:8080/solr/update?TZ=Europe/Madrid'
--data-binary @data.xml -H "
Romita,
That isn't a Solritas feature; it is a feature of any RequestHandler.
You can copy a request handler in solrconfig.xml, change its name, set
parameters as defaults/invariants, and then use that new URL for your
queries.
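A minimal sketch of such a copied handler in solrconfig.xml (the handler name and parameter values below are invented for illustration):

```xml
<!-- A renamed copy of the standard search handler with baked-in params -->
<requestHandler name="/mysearch" class="solr.SearchHandler">
  <!-- defaults can be overridden per request -->
  <lst name="defaults">
    <str name="defType">edismax</str>
    <str name="qf">title body</str>
    <int name="rows">10</int>
  </lst>
  <!-- invariants cannot be overridden by request parameters -->
  <lst name="invariants">
    <str name="fq">type:product</str>
  </lst>
</requestHandler>
```

Queries would then go to /solr/mysearch instead of /solr/select.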
Upayavira
On Thu, Mar 7, 2013, at 02:35 AM, Romita Saha wrote:
Hi,
Does distributed search work when the Solr servers each have a different
schema.xml?
Can it work as long as I search on common fields?
I have two Solr servers. One has id, title, body and filename fields
(indexing a file server's data) and the other has id, title, body and url fields
(indexi
Hello,
we have indexed a field from which we removed the whitespace before
indexing.
For example:
50A91
Frei91\:9984
Now we want to allow users to search for:
50 A 91
Frei 91 \: 9984
Our idea was to add a PatternReplaceFilterFactory in the query analyzer
to remove the whitespaces:
You can use two fields: keep the original data in one, and make the
second a copy field that uses the Pattern Replace Filter combined with
the Keyword Tokenizer.
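A sketch of how that could look in schema.xml (the field and type names here are made up):

```xml
<!-- Second field: same data with all whitespace stripped at index and query time -->
<fieldType name="text_nospace" class="solr.TextField" positionIncrementGap="100">
  <analyzer>
    <tokenizer class="solr.KeywordTokenizerFactory"/>
    <filter class="solr.PatternReplaceFilterFactory"
            pattern="\s+" replacement="" replace="all"/>
  </analyzer>
</fieldType>

<field name="code" type="string" indexed="true" stored="true"/>
<field name="code_nospace" type="text_nospace" indexed="true" stored="false"/>
<copyField source="code" dest="code_nospace"/>
```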
2013/3/7 Jochen Lienhard
> Hello,
>
> we have indexed a field, where we have removed the whitespaces before the
> indexing.
Hello Jochen
What are your tokenizers? I guess it should be 'KeywordTokenizerFactory'. To fully
understand, you might send the whole analyzer chain.
But there might be a simple mistake in your pattern: character classes are enclosed in
square brackets. We do a replace of all non-alphanumeric
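For example, a pattern that strips everything non-alphanumeric needs the square brackets of a character class (this filter line is only an illustration, not Jochen's actual config):

```xml
<filter class="solr.PatternReplaceFilterFactory"
        pattern="[^a-zA-Z0-9]" replacement="" replace="all"/>
```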
Firstly, you could combine your two schemas into one, with id,
title, body, filename and url. I'd also add a 'source' field. Then all
questions of different schemas go away :-)
But, to answer your original question - so long as the fields that are
queried on exist on both sides, you should be okay
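The merged field list might look like this (the field types are assumed):

```xml
<field name="id"       type="string"       indexed="true" stored="true" required="true"/>
<field name="title"    type="text_general" indexed="true" stored="true"/>
<field name="body"     type="text_general" indexed="true" stored="true"/>
<field name="filename" type="string"       indexed="true" stored="true"/>
<field name="url"      type="string"       indexed="true" stored="true"/>
<field name="source"   type="string"       indexed="true" stored="true"/>
```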
I had actually totally blown my previous configuration and didn't know it
(luckily it didn't reach production this way). I'm glad I ran into this
problem. I had defaulted the queries to one of the most useful fields and
never realized I wasn't searching the others. Thanks very much for all your
hel
Hello Jilal and Oliver,
hmmm ... I don't know how two fields can help.
The problem seems to be that Solr does not recognize the whitespace.
We are using the following analyzer:
replacement="blubb" replace="all"/>
mapping="mapping-ISOLatin1Accent.txt"/>
It replaces in the Query: Frei 91 \
Hi,
I just indexed the sample documents in the exampledocs folder and saw the
search suggestions when I search for something in /browse.
Afterwards I deleted the index (as described...) and indexed a folder of
html+pdf files. Searching works, but there are no suggestions.
What do I need to adjust to
Are you thinking of spellchecking? Where are you seeing suggestions?
If you are thinking of spellchecking, by default the spellchecker uses
the 'name' field, and you have likely indexed into the 'text' field,
hence no results being returned.
Upayavira
On Thu, Mar 7, 2013, at 01:12 PM, alecx wrot
Hi Jochen
You could try this:
Remarks:
* I am not sure whether your sequence of filters is correct. I guess you should use the
charFilter only at the beginning of the chain, and patternReplace after the tokenizer.
* If you use ICUFoldi
Hi Oliver,
thanks for the answer.
We tried pattern="[\s]+" but it doesn't work.
I can replace anything but not the whitespace...
Here is our schema:
positionIncrementGap="100">
mapping="mapping-ISOLatin1Accent.txt"/>
mapping="mappin
Hello Upayavira, thanks for your reply.
In the example I can see the suggestions "dollar" and "dock" when I type
"do" in Solritas (http://localhost:8983/solr/collection1/browse?q=).
I already changed the spellchecker field "name", because I verified the
name field in the admin section and the
Your issue, I would say, is that the whitespace is being interpreted by
the query parser before it gets to the analyzer.
A query of 'q=foo bar' would be converted to 'text:foo text:bar'.
You can achieve what you want, but it requires some quite wacky syntax.
To search for the term 'energy
Hi List,
we are using the JoinQuery (JoinQParserPlugin) via request parameter,
e.g. "{!join from=parentid to=productsid}" in Solr 4.1 which works great
for our purposes, but unfortunately, all docs returned get a score of
"1.0"... this makes the whole search pretty useless imho, since the
res
I'm getting intermittent issues with replication in my current
arrangement: one master, 3 slaves; all the same SOLR version/war file
deployment.
I update the master, which kicks off replication across the other
three; however, they never seem to "finish". In the data/ folders I
get an empty inde
I've managed to get this working by using smaller values in my
spatial-mapped-time field. The definition is now:
The values I'm adding are now given in hours since a custom epoch,
which seems to be working well. My hunch is that using very large
values is causing the quadtree to partition itself
Thanks for help, Alexandre.
It worked as you described.
I have another question. Suppose I have a product catalogue with many sub-
categories, each of which has a different group of fields.
When a user searches the catalogue, we should show the corresponding facet fields
on the left based on the result set. That means
Here is the other server when it's locked:
https://gist.github.com/3529b7b6415756ead413
To be clear, neither is really "the replica", I have 32 shards and each
physical server is the leader for 16, and the replica for 16.
Also, related to the max threads hunch: my working cluster has many, many
f
I think you're on a slightly wrong track. In Solr 4.1, merging is
done as a background task. In 3.x, an incoming indexing
request would block until the merge completed. In 4.1, all
your indexing requests should return immediately, any merging
will be carried out by background threads so you don
On Mar 7, 2013, at 9:03 AM, Brett Hoerner wrote:
> To be clear, neither is really "the replica", I have 32 shards and each
> physical server is the leader for 16, and the replica for 16.
Ah, interesting. That actually could be part of the issue - some brain cells
are firing. I'm away from home
As a side note, do you think that was a poor idea? I figured it's better to
spread the master "load" around?
On Thu, Mar 7, 2013 at 11:29 AM, Mark Miller wrote:
>
> On Mar 7, 2013, at 9:03 AM, Brett Hoerner wrote:
>
> > To be clear, neither is really "the replica", I have 32 shards and each
>
We are in the process of upgrading a single instance Solr 3 implementation
to Solr 4. The instance contains multiple cores that share the same schema
(in fact they share the same instanceDir but with distinct dataDirs). We
also need the ability to perform cross-index queries. In Solr3 we have
be
No, not a poor idea at all, definitely a valid setup.
- Mark
On Mar 7, 2013, at 9:30 AM, Brett Hoerner wrote:
> As a side note, do you think that was a poor idea? I figured it's better to
> spread the master "load" around?
>
>
> On Thu, Mar 7, 2013 at 11:29 AM, Mark Miller wrote:
>
>>
>> O
Yes, the collections param is only for SolrCloud.
But if you're not using SolrCloud, the same stuff you did on Solr 3 should work on
Solr 4…
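For reference, a non-SolrCloud cross-core query uses the classic shards parameter, along these lines (host and core names here are invented):

```shell
curl 'http://localhost:8983/solr/core0/select?q=*:*&shards=localhost:8983/solr/core0,localhost:8983/solr/core1'
```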
- Mark
On Mar 7, 2013, at 9:35 AM, Kenneth Baltrinic
wrote:
> We are in the process of upgrading a single instance Solr 3 implementation
> to Solr 4. The
Did you re-index everything after you changed from date to tdate? It looks to
me like you had some data already in your index, changed the defs, added a
few more docs, and it blew up.
I'd just blow away your entire index directory and re-index from scratch...
Best
Erick
On Tue, Mar 5, 2013 at 11:0
Ah thanks for the help. Actually I tried that first off before I went
researching and found the collection parameter. However, since you
prompted me, I went back and, looking at my logs, I realize now that I
mangled the shards syntax when I was doing my hand-coded tests against Solr 4.
The syntax is
Take a look at admin/analysis for the field in question, feed it values and
see how they are tokenized. My guess is that the token in the index is
a...@gmail.com (single token), which of course won't match the fragment "@
gmail.com" (assuming gmail.com@ is a typo)...
Best
Erick
On Wed, Mar 6, 20
As an update to this, I did my SolrCloud dance and made it 2xJVMs per
machine (2 machines still, the same ones) and spread the load around. Each
Solr instance now has 16 total shards (master for 8, replica for 8).
*drum roll* ... I can repeatedly run my delete script and nothing breaks. :)
On Th
Cool, useful info.
As soon as I can duplicate the issue I'll work out what we need to do
differently for this case.
- Mark
On Mar 7, 2013, at 10:19 AM, Brett Hoerner wrote:
> As an update to this, I did my SolrCloud dance and made it 2xJVMs per
> machine (2 machines still, the same ones) and
Hi Joseph,
I believe Nutch can index into Solr/SolrCloud just fine. Sounds like that
is the approach you should take.
Otis
--
Solr & ElasticSearch Support
http://sematext.com/
On Thu, Mar 7, 2013 at 12:10 AM, Joseph Lim wrote:
> Hi Amit,
>
> Currently I am designing a Learning Management
Hi,
I am new to Apache Solr.
I am doing a POC where there is a folder (on the filesystem or in some
repository) which holds files with different extensions: pdf, doc, xls, etc.
I want to search by file name and retrieve all the files whose names
match.
How do I proceed with this?
Please help me on this.
You could use DataImportHandler with FileListEntityProcessor to get the
file names in:
http://wiki.apache.org/solr/DataImportHandler#FileListEntityProcessor
Then, if it is recursive enumeration and not just one level, you probably
want a tokenizer that splits on path separator characters (e.g. /).
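A minimal data-import config along those lines might look like the following (the baseDir path and the file-name regex are placeholders):

```xml
<dataConfig>
  <document>
    <!-- Enumerate files recursively; FileListEntityProcessor exposes
         implicit columns such as file, fileAbsolutePath, fileSize -->
    <entity name="files" processor="FileListEntityProcessor"
            baseDir="/path/to/folder"
            fileName=".*\.(pdf|doc|xls)$"
            recursive="true" rootEntity="true">
      <field column="file" name="filename"/>
      <field column="fileAbsolutePath" name="id"/>
    </entity>
  </document>
</dataConfig>
```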
Hi,
I'm trying to monitor some Solr behaviour, using JMX.
It looks like a great job was done there, but I can't find any
documentation on the MBeans themselves.
For example, the DirectUpdateHandler2 attributes. What is the difference
between "adds" and "cumulative_adds"? Does "adds" count the last X se
I don't think anything survives a core reload.
It looks like cumulative just rolls back the stats for a rollback.
- Mark
On Thu, Mar 7, 2013 at 2:25 PM, Isaac Hebsh wrote:
> Hi,
>
> I'm trying to monitor some Solr behaviour, using JMX.
> It looks like a great job was done there, but I can't fin
: I'm trying to monitor some Solr behaviour, using JMX.
: It looks like a great job was done there, but I can't find any
: documentation on the MBeans themselves.
In general, the stats exposed by the various MBeans are going to largely
depend on the underlying plugin classes -- in most cases the
: https://wiki.apache.org/solr/UsingMailingLists
-Hoss
: I have highlighting working for a generic text field, but cannot get it
: to work for a field which contains raw data.
...
: hl.fl=rawData
...
:
You've shown us how the fieldType named "raw" is declared, but not
how the field "rawData" is declared -- is it stored? does it
Hi,
As per one of our search requirements, for searching on title we have
implemented as below, which serves us quite well.
Title: iTunes Sync
Analyzer on this field is
WhitespaceTokenizerFactory
WordDelimiterFilterFactory {generateNumberParts=1, catenateWords=1,
generateWordParts=1, catenate
What's the rest of your query? What you've indicated doesn't have any terms
to score. A join can be thought of as a bit like a filter query in this
sense; a join hit is just an inclusion/exclusion clause,
not a scoring one.
Best
Erick
On Thu, Mar 7, 2013 at 10:32 AM, Stefan Moises wro
Hi David Smiley:
We use 3rd-party software to load Solr 3.4, so the behavior needs to be
transparent across the migration to 4.1, but I was expecting that I would need to
rebuild the Solr database.
I moved/added the old Solr 3.4 core to Solr 4.1 with only minor modifications
(commented out the
David Smiley:
Because we use 3rd-party software... I checked to see if this would still
work... the search query still works. But adding data seems to be broken, likely
because of the geohash type.
So, below is the log file, which tells me to upgrade
If possible, it would be great to simply ge
I am setting up solrcloud with zookeeper.
- I am wondering if there are nicer ways to update the ZooKeeper config
files (data-import) besides restarting a node with the bootstrap option?
- Right now I kill the node manually in order to restart it. Is there a
better way to restart?
Thanks,
Nate
Hi Erick,
if I try the same query without join I get different scores for each
hit... here is an example query:
http://localhost:8983/solr/ee/select?facet=true&facet.mincount=1&facet.limit=-1&rows=10&fl=oxid,score,oxtitle&debugQuery=true&start=0&facet.sort=lex&facet.field=oxprice&facet.field=m