Sorry, hit send too fast. The shards were listed as active. Also, the Solr
instances were still running, but the file system they wrote to had become
read-only. I thought that would make replication fail, and that once the issue
was fixed and Solr restarted, replication would then succeed. Am I hitting
some
I have not tried to reproduce it yet, but I hope to do so Monday. The
machine that had the issue was a VM out of my control, so I'm not certain
how it was restored. I am using a fairly recent nightly build, from within
the last few weeks.
On Friday, May 11, 2012, Mark Miller wrote:
> So it's easy to repr
A faster way to do a regex transform is to use the 'PatternReplace'
tokenizer or filter. These live in the schema analysis chain, not
in the DIH tree.
You would use copyField to get the data from your input field into a
copy that uses the regex-pattern analyzer type. Look in schema.xml for an
example of using t
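As a rough sketch of what that can look like in schema.xml (the field and
type names below are only placeholders, not from the original message):

<fieldType name="text_regex" class="solr.TextField">
  <analyzer>
    <tokenizer class="solr.KeywordTokenizerFactory"/>
    <!-- example: strip everything that is not a digit -->
    <filter class="solr.PatternReplaceFilterFactory"
            pattern="[^0-9]" replacement="" replace="all"/>
  </analyzer>
</fieldType>

<field name="raw_value" type="string" indexed="true" stored="true"/>
<field name="clean_value" type="text_regex" indexed="true" stored="false"/>
<copyField source="raw_value" dest="clean_value"/>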
No. fq queries are standard-syntax queries, but they can be arbitrarily
complex, e.g. fq=model:(member OR new_member)
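So, using the URL from your message (host and port purely illustrative), the
full request might look like:
localhost:8080/solr/select/?q=blah+blah&fq=model:(member+OR+new_member)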
Best
Erick
On Thu, May 10, 2012 at 2:38 PM, anarchos78 wrote:
> Hello,
> Solr accepts fq parameter like: localhost:8080/solr/select/?q=blah+blah
> &fq=model:member+model:new_
One of the points of sharding is to use more _machines_. Running multiple
shards on a single machine is not magically going to make things faster. In
fact, I'd expect your process to consume more resources, since the
cores are now not sharing common data (i.e. having a single word
in more than one co
I know there are edge cases where "odd" field naming causes
problems; field-naming rules aren't well defined or enforced in Solr. Rather than
banging my head against the wall and hitting these cases
at inopportune moments, I'd confine myself to lower-case letters
and underscores.
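For instance, a field definition that stays within those conventions (the
name and type here are just examples):
<field name="product_title" type="text_general" indexed="true" stored="true"/>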
Other stuff _may_ work, like capital
You're on the right track. In the default schemas it's kind of tricky. You
see the bit of the "location" definition as:
<fieldType name="location" class="solr.LatLonType" subFieldSuffix="_coordinate"/>
And later, you see:
<dynamicField name="*_coordinate" type="tdouble" indexed="true" stored="false"/>
So the latlng_0/latlng_1 _coordinate fields are created by this
dynamic field mapping.
You can either leave the dynamic field s
The DIH Template Transformer can do this, such as in:
...
You can combine input column values as well as literal strings.
See:
http://wiki.apache.org/solr/DataImportHandler#TemplateTransformer
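As a rough sketch (the table, column, and field names below are made up for
illustration):

<entity name="person" transformer="TemplateTransformer"
        query="select id, first_name, last_name from people">
  <field column="full_name" template="${person.first_name} ${person.last_name}"/>
</entity>

Here full_name is built by concatenating two input columns with a literal
space between them.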
-- Jack Krupansky
-Original Message-
From: divya
Sent: Friday, May 11, 2012 2:55 PM
To:
There's nothing that I know of that does what you want. Your problem is
that you want some intelligence built into the faceting. It'd be difficult,
since Solr couldn't know what a reasonable number of buckets would be
until it had found the entire result set, so you'd have to do some kind of
two-pass solut
Solritas was never intended to be used for production situations at
all. Its reason for
existing is to provide:
1> a way to show something prettier than the XML (or JSON or)
responses to Solr
queries for people just getting started.
2> a way to provide a very quick proof-of-concept/proto
Your field needs to use a field type which has a character folding/mapping
filter, such as:
<charFilter class="solr.MappingCharFilterFactory" mapping="mapping-ISOLatin1Accent.txt"/>
Such as in:
<fieldType name="..." class="solr.TextField" positionIncrementGap="100">
  <analyzer>
    <charFilter class="solr.MappingCharFilterFactory" mapping="mapping-ISOLatin1Accent.txt"/>
    ...
  </analyzer>
</fieldType>
See the example schema.
In older releases of Solr there was an I
No, this isn't what sharding is all about. Sharding is taking a single
logical index and splitting it up amongst a number of physical
units, often on individual machines. "Load and unload partitions
dynamically" doesn't make any sense when talking about shards.
So let's back up. You could create y
Did you shut down and restart or “reload” Solr? A core restart/reload is needed
for Solr to “see” schema changes.
See:
http://wiki.apache.org/solr/CoreAdmin#RELOAD
or, if you did try a reload, maybe there were errors which prevented Solr from
starting the new core initialization, which leaves
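For reference, a core reload is just a CoreAdmin request, something like
(host, port, and core name are only examples):
http://localhost:8983/solr/admin/cores?action=RELOAD&core=collection1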
My query is
SolrQuery sQuery = new SolrQuery(query.getQueryStr());
sQuery.setQueryType("dismax");
sQuery.setRows(100);
if (!query.isSearchOnDefaultField()) {
    sQuery.setParam("qf", queryFields.toArray(new String[queryFields.size()]));
}
sQuery.
Hi,
I am facing the same problem with my XSLT after upgrading to 3.6 from Solr 1.4.
I was wondering if you have found a solution? If so, can you please share it?
It would be helpful to others who are struggling with this as well.
thanks
--Pramila
Mark,
That sounds like a use case for ExternalFileField, doesn't it? You'll find more
info about that here:
http://lucene.apache.org/solr/api/org/apache/solr/schema/ExternalFileField.html
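Roughly, the schema side would look something like this (the type and field
names are just illustrative):
<fieldType name="external_rank" keyField="id" defVal="0"
           class="solr.ExternalFileField" valType="pfloat"/>
<field name="rank" type="external_rank" indexed="false" stored="false"/>
The values themselves then live in a file named external_rank in the index
directory, with lines of the form document_id=value, and can be updated
without reindexing documents.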
Stefan
On Saturday, May 12, 2012 at 7:00 PM, Mark Laurent wrote:
> Hello,
>
> Is it possible to perfo
People are working on "field update", but that feature is not currently
available in a release of Solr.
You can read about the current status of that work here:
https://issues.apache.org/jira/browse/SOLR-139
It may or may not be usable for you today - in trunk.
But if field update is important
No. Lucene and Solr commits replace the entire document. --wunder
On May 12, 2012, at 10:00 AM, Mark Laurent wrote:
> Hello,
>
> Is it possible to perform an index commit that Solr would add the incoming
> value to an existing fields' value?
>
> I have for example:
>
>
> required="
Hi Jack,
Thanks, that worked. Another option that worked was nested queries.
It was nice to see you at the Lucene Conference.
abhay
Combining the syntax of two (or more) query parsers in a single query is
called "nested queries". This requires two elements: 1) the use of the
"magic" field "_query_" to embed or nest a query in a larger query, and 2)
enclosing the nested query in quotes since it is likely to have reserved
cha
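For example, a request along these lines (the fields and terms are only
illustrative):
q=_query_:"{!dismax qf='name features'}ipod mini" OR _query_:"{!lucene}manu:apple"
Each nested query chooses its own parser via the {!parser ...} local-params
syntax inside the quoted string.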