Hi,
You could (theoretically) reduce the down-time to zero using a 'swap'
command:
http://wiki.apache.org/solr/CoreAdmin?highlight=%28swap%29#SWAP
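For illustration only - the core names here ("live", "rebuild") and the host are hypothetical, and this sketch only builds the CoreAdmin SWAP URL rather than issuing the HTTP request:

```java
// Sketch: building a CoreAdmin SWAP request URL (host and core names are
// hypothetical). SWAP atomically exchanges the two cores' names, so queries
// addressed to the "live" name are served by the freshly built index with
// no downtime.
public class SwapUrl {
    static String swapUrl(String host, String core, String other) {
        return "http://" + host + "/solr/admin/cores?action=SWAP"
                + "&core=" + core + "&other=" + other;
    }

    public static void main(String[] args) {
        System.out.println(swapUrl("localhost:8983", "live", "rebuild"));
    }
}
```

The idea is to rebuild the index in a spare core, then SWAP it with the serving core once it is ready.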
Cheers
Henrib
muneeb wrote:
>
> Hi,
>
> I have indexed almost 7 million articles on two separate cores, each with
>
Yonik Seeley wrote:
>
> ...multi-core allows you to instantiate a completely
> new core and swap it for the old one, but it's a bit of a heavyweight
> approach
> ...a schema object would not be mutable, but
> that one could easily swap in a new schema object for an index at any
> time...
>
ryantxu wrote:
>
>
> Yes, include would get us some of the way there, but not far enough
> (IMHO). The problem is that (as written) you still need to have all
> the configs spattered about various directories.
>
>
It does not allow us to go *all* the way, but it does allow us to put configu
ryantxu wrote:
>
> ...
> Yes, I would like to see a way to specify all the fieldtypes /
> handlers in one location and then only specify what fields are
> available for each core.
>
> So yes -- I agree. In 2.0, I hope to flush out configs so they are
> not monstrous.
> ...
>
What you are likely hitting is the default:
${solr.data.dir:./solr/data}
which will make both cores use the same index.
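As a sketch of how the ${property:default} syntax behaves (this illustrates the resolution rule only, it is not Solr's actual implementation):

```java
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class PropResolver {
    // Matches ${name} or ${name:default}; group 1 = name, group 2 = default.
    private static final Pattern P =
            Pattern.compile("\\$\\{([^:}]+)(?::([^}]*))?\\}");

    static String resolve(String text, Map<String, String> props) {
        Matcher m = P.matcher(text);
        StringBuffer sb = new StringBuffer();
        while (m.find()) {
            String val = props.get(m.group(1));            // defined property wins
            if (val == null)
                val = m.group(2) == null ? "" : m.group(2); // else the default
            m.appendReplacement(sb, Matcher.quoteReplacement(val));
        }
        m.appendTail(sb);
        return sb.toString();
    }
}
```

So with no solr.data.dir property set, both cores resolve to the same ./solr/data directory, which is why they end up sharing one index.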
Hope this helps,
Henrib
rogerio.araujo wrote:
>
> Hi!
>
> I have a multicore installation with the following configuration:
>
Nikhil Chhaochharia wrote:
>
>
> I am assuming that these are part of some patch which will get applied
> before 1.3 releases, is that correct ?
>
> Nikhil
>
>
Yes, this is part of a patch and no, it most likely will not make it into 1.3.
However, I guess the following will bring you even closer:
Seems you want something like:
  public SolrCore nikhilInit(final IndexSchema indexSchema) {
    final String solrConfigFilename = "solrconfig.xml"; // or else
    CoreContainer.Initializer init = new CoreContainer.Initializer() {
      @Override
      public CoreContainer initialize() {
        Co
Hi,
It is likely related to how you initialize Solr & create your SolrCore; there
have been a few changes to ensure there is always a CoreContainer created
which (as its name suggests) holds a reference to all created SolrCores.
There is a CoreContainer.Initializer class that allows you to easily c
Since I authored the patch, I'm guilty on all counts. :-)
Amit Nithian wrote:
>
> I am not sure why they chose that direction over built-in entity include.
>
Entities are not the most widely used or well-known feature, and I just did not
think of this as a way to do it.
I also wanted variable expansion in
The other option is to use solr-646, which adds the ability to include files
through an include directive.
Regards
henri
--
View this message in context:
http://www.nabble.com/XML-includes-in-solrconfig.xml-schema.xml-tp19096292p19097243.html
Sent from the Solr - User mailing list archive at Nabble.com.
Not sure if this is what you seek, but solr-646 adds the ability to import a
resource and to insert 'chunks' that have no natural root node.
These do work for solrconfig.xml & schema.xml.
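The element names did not survive this archive; as a general illustration of the same "import a chunk into a config file" idea, the JDK's bundled parser supports W3C XInclude (this is standard XInclude, not necessarily the exact solr-646 syntax):

```java
import java.io.File;
import java.nio.file.Files;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

public class XIncludeDemo {
    static int includedFieldCount() throws Exception {
        File dir = Files.createTempDirectory("xinc").toFile();

        // The chunk to be imported into the main config file.
        File chunk = new File(dir, "fields.xml");
        Files.write(chunk.toPath(),
                "<fields><field name=\"id\"/></fields>".getBytes("UTF-8"));

        // The main file pulls the chunk in through xi:include.
        File main = new File(dir, "schema.xml");
        Files.write(main.toPath(),
                ("<schema xmlns:xi=\"http://www.w3.org/2001/XInclude\">"
                        + "<xi:include href=\"fields.xml\"/></schema>")
                        .getBytes("UTF-8"));

        DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
        dbf.setNamespaceAware(true);
        dbf.setXIncludeAware(true); // enable XInclude processing
        Document doc = dbf.newDocumentBuilder().parse(main);

        // After parsing, the xi:include element is replaced by the chunk.
        return doc.getElementsByTagName("field").getLength();
    }
}
```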
Cheers
Henri
Jacob Singh-2 wrote:
>
> Hello,
>
> Is it possible to include an external xml file from
in go ballistic when they see logs in their console...)
Cheers
Henrib
zayhen wrote:
>
> Hello Henrib,
>
> I have read the issue and it seems an interesting feature for Solr, but I
> don't see how it addresses my needs, as I need to point Solr to the
> multicore.properties
This should be one use-case for
https://issues.apache.org/jira/browse/SOLR-646 SOLR-646 .
If you can try it, don't hesitate to report/comment on the issue.
Henri
zayhen wrote:
>
> Hello guys,
>
> I have to load solr/home from a .properties file, because of some
> environment standards I have
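As a sketch of the underlying mechanism (the file name and key below are hypothetical, not a Solr convention):

```java
import java.io.FileReader;
import java.io.IOException;
import java.util.Properties;

public class SolrHomeLoader {
    // Reads a solr/home path from a .properties file, falling back to a
    // default when the key is absent.
    static String loadSolrHome(String propsFile) throws IOException {
        Properties props = new Properties();
        try (FileReader r = new FileReader(propsFile)) {
            props.load(r);
        }
        return props.getProperty("solr.home", "./solr");
    }
}
```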
We could harness solr-646, reusing the include syntax by creating a scope for
field elements when reading the schema. Since properties (the PropertyMap) are
stored in the ResourceLoader, it seems we should be able to access them in the
useful places through the usual suspects (the core, the config, the
I'm re-adapting some pretty-old/hacked (1.2dev) code that performs a query
filtered by a list of document unique keys and returning results based on
the list order. Anyone having same requirement/feature/code ?
I've been looking in QueryComponent where there is code to handle shards
that performs
Ryan McKinley wrote:
>
>> [ ] Keep solr logging as it is. (JDK Logging)
>> [X ] Use SLF4J.
>
Can't "keep as is" since this strictly precludes configuring logging in a
container-agnostic way.
Will,
I'd definitely be interested in your code, but mostly in the config &
deployment options if you can share.
You didn't happen to deploy on WebSphere 6 by any chance? I can't find a
way to configure jul to only log into our application logs (even less so into
our log4j logs); I'm not even sure
Hi,
I'm (still) seeking more advice on this deployment issue, which is to use
org.apache.log4j instead of java.util.logging. I'm not seeking to restart
any discussion on slf4j/commons/log4j/jul respective benefits; I'm seeking
a way to bridge jul to log4j with the minimum of per-container-specific
configuration
I'm trying to filter my document collection by an external 'means' that
produces a set of document unique keys.
Assuming this goes into a custom request handler (solr-281 making that
easy), any pitfall using a ConstantScoreQuery (or an equivalent filtering
functionality) as a Solr "filter query" ?
I believe that keeping your code as is but initializing the query parameters
should do the trick:
  HashMap<String, String> params = new HashMap<String, String>();
  params.put("fl", "id score"); // field list is id & score
  ...
Regards
John Reuning-2 wrote:
>
> My first pass was to implement the embedded solr example:
>
> --
We have an application where we index documents that can exist in many (at
least 2) languages.
We have 1 SolrCore per language using the same field names in their schemas
(different stopwords, synonyms & stemmers), the benefits for content
maintenance outweighing (at least) the complexity.
Using EN
Another possible (and convoluted) way is to use SOLR-215 patch which allows
multiple indexes within one Solr instance (also, at this stage, you'd lose
replication and would probably have to adapt the servlet filter).
Regards
Henri
Yu-Hui Jin wrote:
>
> Hi, there,
>
> I have a few basic questions
  public DocList getDocList(Query query, DocSet filter, Sort lsort, int
      offset, int len, int flags) throws IOException;
and
  public DocListAndSet getDocListAndSet(Query query, DocSet filter, Sort
      lsort, int offset, int len, int flags) throws IOException;
It seems to work; I don't know if it is efficient cache-wise et al.
Yon
    results.docList = s.getDocList(query, rdocs,
                                   SolrPluginUtils.getSort(req),
                                   params.getInt(START, 0),
                                   params.getInt(ROWS, 10),
                                   flags);
  }
Yonik Seeley wrote:
>
> On 6/18
      if (termDocs.next())
        bits.fastSet(termDocs.doc());
    }
    termDocs.close();
  }
  return new org.apache.solr.search.BitDocSet(bits);
}
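As a plain-Java sketch of what the snippet above does (a hypothetical docIdForKey map stands in for the TermDocs lookup; this is an illustration, not the Lucene API):

```java
import java.util.BitSet;
import java.util.List;
import java.util.Map;

public class KeyFilter {
    // Builds a filter bitset from a list of unique keys. Because the field is
    // a unique key, each term matches at most one document - which is why the
    // original code uses if (termDocs.next()) rather than a while loop.
    static BitSet filterFromKeys(List<String> keys,
                                 Map<String, Integer> docIdForKey) {
        BitSet bits = new BitSet();
        for (String key : keys) {
            Integer doc = docIdForKey.get(key);
            if (doc != null)
                bits.set(doc);
        }
        return bits;
    }
}
```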
Thanks again
Yonik Seeley wrote:
>
> On 6/17/07, Henrib <[EMAIL PROTECTED]> wrote:
>> Merely an efficien
Merely an efficiency related question: is there any other way to filter on a
uniqueKey set than using the 'fq' parameter & building a list of the
uniqueKeys?
In 'raw' Lucene, you could use filters directly in a search; is this (close
to) equivalent efficiency-wise?
Thanks
> We have some language-dependent parameters... It can be a problem, as I
> would like to have the same fields for all requests...
>
> Sorry to bother, but before I split all my data this way I would like to
> be
> sure that it's the best approach for me.
>
> Regards,
Hi Daniel,
If it is functionally 'ok' to search in only one lang at a time, you could
try having one index per lang. Each per-lang index would have one schema
where you would describe field types (the lang part coming through
stemming/snowball analyzers, per-lang stopwords & al) and the same field
Updated (forgot the patch for Servlet).
http://www.nabble.com/file/7996/solr-trunk-src.patch solr-trunk-src.patch
The change should still be compatible with the trunk it is based upon.
Henrib wrote:
>
> Following up on a previous thread in the Solr-User list, here is a patch
> th
the core we
want to observe.
And the scripts probably also need to have a 'core name' passed down...
I'm still building my knowledge on the subject so my simplistic view might
not be accurate.
Let me know if this helps.
Cheers
Henrib
mpelzsherman wrote:
>
> This sounds lik
You cannot have more than one Solr core per application (to be precise, per
class-loader, since there are a few statics).
One way is thus to have 2 webapps - when & if the indexes do not have the
same lifetime, have radically different schemas, etc.
However, the common wisdom is that you usually don't really n
Following up on a previous thread in the Solr-User list, here is a patch that
allows managing multiple cores in the same VM (thus multiple
config/schemas/indexes).
The SolrCore.core singleton has been changed to a Map; the
current singleton behavior is keyed as 'null'. (Which is used by
SolrInfoRe
I suppose I'm not the only one having to cope with the kind of policies I
was describing (& their idiosyncrasies); in some organizations, trying to
get IT to modify anything related to 'deployment policy' is just (very close
to) impossible... Within those, having a dedicated Tomcat to run the
ap
, than
> to
> try to roll your own, or extensively modify solr.
>
> I know I'm sidestepping your stated requirements, but I'd take a long look
> at that one.
>
> BTW, We cut over from an embedded Lucene instance to Solr about 4 months
> ago, and are very happy th
I'm trying to choose between embedding Lucene versus embedding Solr in one
webapp.
In Solr terms, functional requirements would more or less lead to multiple
schema & conf (need CRUD/generation on those) and deployment constraints
imply one webapp instance. The choice I'm trying to make is thus: