Yes, that's right, there is no "best" setup at all, only one that
gives the most advantage for your requirements.
And any setup has some disadvantages.
Currently I'm short on time and have to bring our Cloud to production,
but a write-up is in the queue, as already done with other developments.
On Tue, 2018-08-28 at 09:37 +0200, Bernd Fehling wrote:
> Yes, I tested many cases.
Erick is absolutely right about the challenge of finding "best" setups.
What we can do is gather observations, as you have done, and hope that
people with similar use cases find them.
>> graph which are not seen with a multi instance setup.
>>
>> Tested about 2 months ago with SolrCloud 6.4.2.
>>
>> Regards,
>> Bernd
>>
>> On 26.08.2018 at 08:00, Wei wrote:
Hi,
I have a question about the deployment configuration in solr cloud. When
we need to increase the number of shards in solr cloud, there are two
options:
1. Run multiple solr instances per host, each with a different port and
hosting a single core for one shard.
2. Run one solr instance per host, and have multiple cores (shards) in the
same solr instance.
Which would be better performance-wise? For the first option I think the JVM
size for each solr instance can be smaller, but is deployment more
complicated? Are there any differences in cpu utilization?
Thanks
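The JVM-size point can be made concrete with a toy calculation (a sketch only; the 32GB-per-host and 4-shard numbers are assumptions for illustration, not a recommendation):

```python
def heap_per_jvm(total_heap_gb: float, num_instances: int) -> float:
    """Split a fixed per-host heap budget across Solr JVMs (illustrative)."""
    return total_heap_gb / num_instances

# Option 1: four single-shard instances on one host share the budget,
# so each JVM gets a smaller heap (often shorter GC pauses per JVM).
assert heap_per_jvm(32, 4) == 8.0

# Option 2: one instance hosting four shard cores takes the whole budget.
assert heap_per_jvm(32, 1) == 32.0
```

The trade-off is operational: four JVMs mean four things to deploy, monitor, and restart.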
To: solr-user@lucene.apache.org
Subject: Re: Multiple cores versus a "source" field.
One more opinion on source field vs separate collections for multiple corpora.
Index statistics don’t really settle down until at least 100k documents. Below
that, idf is pretty noisy. With Ultraseek, we
with that now.
-Original Message-
From: Erick Erickson [mailto:erickerick...@gmail.com]
Sent: Tuesday, 5 December 2017 4:11 p.m.
To: solr-user
Subject: Re: Multiple cores versus a "source" field.
That's the unpleasant part of semi-structured documents (PDF, Word,
whatever). You never know the relationship between raw size and
indexable text.
Basically anything that you don't care to contribute to _scoring_ is
often better in an fq clause. You can also use {!cache=false} to
bypass actually caching the entry in the filterCache.
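A minimal sketch of what those request parameters look like (the field names `title` and `doc_type` are made up for illustration):

```python
from urllib.parse import urlencode

# Non-scoring restrictions go in fq; the {!cache=false} local param keeps
# this particular filter out of the filterCache (useful for one-off filters).
params = {
    "q": "title:reports",                      # scoring clause
    "fq": "{!cache=false}doc_type:technical",  # non-scoring, uncached filter
    "wt": "json",
}
query_string = urlencode(params)
```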
>You'll have a few economies of scale I think with a single core, but frankly I
>don't know if they'd be enough to measure. You say the docs are "quite large"
>though, are you talking books? Magazine articles? Is 20K large or are they 20M?
Technical reports. Sometimes up to 200MB pdfs, but that
At that scale, whatever you find administratively most convenient.
You'll have a few economies of scale I think with a single core, but
frankly I don't know if they'd be enough to measure. You say the docs
are "quite large" though, are you talking books? Magazine articles? Is
20K large or are they 20M?
I have two different document stores that I want index. Both are quite small
(<50,000 documents though documents can be quite large). They are quite capable
of using the same schema, but you would not want to search both simultaneously.
I can see two approaches to handling this case.
1/ Create a
This question has been asked before. I found a few postings to Solr user and a
couple on Google-in-the-large.
But I am still not sure which is best.
My project currently has two distinct datasets (documents) with no shared
fields.
But at times, we need to query across both of them.
So we
>(faceting, etc) regardless of the source.
>
>From the management (e.g. import) and search relevance (e.g. analysis,
>relevance, etc) point of view, what is considered “best practice”:
>
>one core for all sources and import through different entities
>one core per source and search across multiple cores
>something else?
It would be great if you can share your experience or point me to some articles.
Thank you in advance!
> >> JOIN parent
> >> WHERE child.parent_id = parent.id AND parent.tag = 'hoge'
> >>
> >> child and parent are in a many-to-one relationship.
> >> I try this but can not.
Let's back up a bit and ask what your primary goal is. Just indexing a
bunch of stuff as fast as possible? By and large, I'd index to a
single core with multiple threads rather than the approach you're
taking (I'm assuming that there's a MERGEINDEXES somewhere in this
process). You should be able t
Hi,
I wanted to check if the following would work;
1. Spawn n threads
2. Create n-cores
3. Index n records simultaneously in n-cores
4. Merge all core indexes into a single master core
I have been able to successfully do this for 5 threads (5 cores) with 1000
documents each. However, I wanted
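The single-core, multi-threaded alternative Erick describes can be sketched like this (the transport is stubbed out, since the real client and update URL are assumptions; in practice `send_batch` would POST each batch to `/solr/<core>/update`):

```python
from concurrent.futures import ThreadPoolExecutor

def chunks(docs, size):
    """Split a document list into update batches."""
    return [docs[i:i + size] for i in range(0, len(docs), size)]

def index_all(docs, send_batch, threads=5, batch_size=1000):
    """Index every batch into ONE core from a thread pool,
    instead of one core per thread plus a merge step."""
    with ThreadPoolExecutor(max_workers=threads) as pool:
        list(pool.map(send_batch, chunks(docs, batch_size)))

# Stub transport: just collect the documents that "arrived".
received = []
index_all(list(range(5000)), received.extend, threads=5, batch_size=1000)
```

This avoids the MERGEINDEXES step entirely; Solr handles concurrent updates to a single core.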
> --
> Sincerely yours
> Mikhail Khludnev
> Principal Engineer,
> Grid Dynamics
>
I try this but can not.
/select/?q={!join from=parent_id to=id fromIndex=parent}id:1+tag:hoge
--
View this message in context:
http://lucene.472066.n3.nabble.com/How-to-get-the-join-data-by-multiple-cores-tp4235799.html
Sent from the Solr - User mailing list archive at Nabble.com.
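For reference, in `{!join}` everything after the closing `}` is the query run against the `fromIndex` core, so any condition on that core (like `tag:hoge`) must be part of that sub-query. A small helper that assembles the local-params syntax (field and core names taken from the thread):

```python
def join_query(from_field, to_field, from_index, sub_query):
    """Build a Solr query-time cross-core join clause.
    The sub_query is evaluated against the fromIndex core; matched
    from_field values are then looked up in to_field of the local core."""
    return (f"{{!join from={from_field} to={to_field} "
            f"fromIndex={from_index}}}{sub_query}")

# The condition on the parent (tag:hoge) lives inside the sub-query:
q = join_query("parent_id", "id", "parent", "id:1 AND tag:hoge")
```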
I'm wondering what different folks do out there for a health monitor for Solr.
I'm running Solr 5.2.1, so far without Solr Cloud, and I anticipate having
multiple cores.
For now, I can make use solr/corename/admin/ping, but how can I have Solr ping
all cores?
Dan Davis, Systems/Applications Architect (Contractor),
Office of Computer and Communications Systems,
National Library of Medicine, NIH
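One way to script this from a monitoring job: ask the CoreAdmin API for the core list, then ping each core (the base URL is an assumption; only the URL construction is exercised below, `ping_all` needs a live Solr):

```python
import json
from urllib.request import urlopen

BASE = "http://localhost:8983/solr"  # assumed local Solr

def ping_urls(core_names, base=BASE):
    """Build an /admin/ping URL for each core."""
    return [f"{base}/{name}/admin/ping?wt=json" for name in core_names]

def ping_all(base=BASE):
    """List cores via CoreAdmin STATUS, then ping each one."""
    status = json.load(urlopen(f"{base}/admin/cores?action=STATUS&wt=json"))
    names = list(status["status"].keys())
    return {n: json.load(urlopen(u)).get("status")
            for n, u in zip(names, ping_urls(names, base))}
```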
For backup purposes to an offsite data center, I need to make sure that each
core's configuration has replication to a consistently defined backup directory
on a Netapp filer. The Netapp filer's snapshot can be invoked manually, and
its snap mirror will copy the data to the offsite data center
https://issues.apache.org/jira/browse/SOLR-6234
{!scorejoin} is a Solr QParser that brings Lucene's JoinUtil, for sure.
Replying on the appropriate list.
On Wed, Dec 10, 2014 at 10:14 PM, Parnit Pooni wrote:
> Hi,
> I'm running into an issue attempting to sort, here is the scenario.
>
> I have my
Depending on the size, I'd go for (a). IOW, I wouldn't change the
sharding to use (a), but if you have the same shard setup in that
case, it's easier.
You'd index a type field with each doc indicating the source of your
document. Then use the grouping feature to return the top N from each
of the types.
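A sketch of that approach at the request-parameter level (the field name `source_type` and the group limit are assumptions):

```python
# Result grouping: top N docs from each source in one request.
params = {
    "q": "your query",
    "group": "true",
    "group.field": "source_type",  # the "type" field indexed per document
    "group.limit": "3",            # top N docs returned per source
}
```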
As mentioned in another post we (already) have a (Lucene-based) generic
indexing framework which allows any source/entity to provide
indexable/searchable data.
Sources may be:
pages
events
products
customers
...
As their names imply they have nothing in common ;) Nevertheless we'd like to
search across all of them.
You really can't tell until you prototype and measure. Here's a long
blog on why what you're asking, although a reasonable request,
is just about impossible to answer without prototyping and measuring.
http://searchhub.org/2012/07/23/sizing-hardware-in-the-abstract-why-we-dont-have-a-definitive-an
And how many machines running the SOLR ?
I expect that I will have to add more servers. What I am looking for is how
do I calculate how much I need.
The machines were 32GB ram boxes. You must do the RAM requirement
calculation for your indexes. Just the number of indexes alone won't be enough
to arrive at the RAM requirement.
On Tue, Aug 12, 2014 at 6:59 PM, Ramprasad Padmanabhan <
ramprasad...@gmail.com> wrote:
> On 12 August 2014 18:18, Noble
Ramprasad Padmanabhan [ramprasad...@gmail.com] wrote:
> I have a single machine 16GB Ram with 16 cpu cores
Ah! I thought you had more machines, each with 16 Solr cores.
This changes a lot. 400 Solr cores of ~200MB ~= 80GB of data. You're aiming for
7 times that, so about 500GB of data. Running t
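The arithmetic behind that estimate, written out (record counts come from the thread; the sizes are rough):

```python
# 400 cores at ~200MB each:
current_gb = 400 * 200 / 1000      # ~80 GB of index data today

# 150M records now, growing to 1000M -> roughly 6.7x growth:
growth = 1000 / 150
projected_gb = current_gb * growth  # ~533 GB, i.e. "about 500GB" of data
```

RAM sizing then depends on how much of that data needs to be in the OS disk cache for acceptable latency, which is workload-dependent.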
On 12 August 2014 18:18, Noble Paul wrote:
> Hi Ramprasad,
>
>
> I have used it in a cluster with millions of users (1 user per core) in
> legacy cloud mode. We used the on demand core loading feature where each
> Solr had 30,000 cores and at a time only 2000 cores were in memory. You are
> just
Hi Paul and Ramprasad,
I follow your discussion with interest as I will have more or less the
same requirement.
When you say that you use on demand core loading, are you talking about
LotsOfCore stuff?
Erick told me that it does not work very well in a distributed
environment.
How do you han
Hi Ramprasad,
I have used it in a cluster with millions of users (1 user per core) in
legacy cloud mode. We used the on demand core loading feature where each
Solr had 30,000 cores and at a time only 2000 cores were in memory. You are
just hitting 400 and I don't see much of a problem. What is y
On Tue, 2014-08-12 at 14:14 +0200, Ramprasad Padmanabhan wrote:
> Sorry for missing information. My solr-cores take less than 200MB of
> disk
So ~3GB/server. If you do not have special heavy queries, high query
rate or heavy requirements for index availability, that really sounds
like you could p
Sorry for missing information. My solr-cores take less than 200MB of disk
What I am worried about is If I run too many cores from a single solr
machine there will be a limit to the number of concurrent searches it can
support. I am still benchmarking for this.
Also another major bottleneck I fin
On Tue, 2014-08-12 at 11:50 +0200, Ramprasad Padmanabhan wrote:
> Are there documented benchmarks with number of cores
> As of now I just have a test bed.
>
>
> We have 150 million records ( will go up to 1000 M ) , distributed in 400
> cores.
> A single machine 16GB RAM + 16 cores search is w
Obviously I can always add more nodes to solr, but I need to justify how
much I need.
On 12 August 2014 12:48, Harshvardhan Ojha
wrote:
> I think this question is more aimed at design and performance of large
> number of cores.
> Also solr is designed to handle multiple cores e
I think this question is more aimed at design and performance of large
number of cores.
Also solr is designed to handle multiple cores effectively, however it
would be interesting to know if you have observed any performance problems
with growing number of cores, with number of nodes and solr
On Tue, 2014-08-12 at 08:40 +0200, Ramprasad Padmanabhan wrote:
> I need to store in SOLR all data of my clients' mailing activity
>
> The data contains meta data like From, To, Date, Time, Subject, etc.
>
> I would easily have 1000 Million records every 2 months.
If standard searches are always insid
Hi Ramprasad,
You can certainly have a system with hundreds of cores. I know of more than
a few people who have done that successfully in their setups.
At the same time, I'd also recommend to you to have a look at SolrCloud.
SolrCloud takes away the operational pains like replication/recovery etc
I need to store in SOLR all data of my clients' mailing activity.
The data contains meta data like From, To, Date, Time, Subject, etc.
I would easily have 1000 Million records every 2 months.
What I am currently doing is creating cores per client. So I have 400 cores
already.
Is this a good idea to do
Any inputs will be of great help.
Thanks
--
View this message in context:
http://lucene.472066.n3.nabble.com/search-multiple-cores-tp4136059p4139063.html
Sent from the Solr - User mailing list archive at Nabble.com.
On Tue, May 13, 2014 at 8:27 PM, Jay Potharaju wrote:
> Hi,
> I am trying to join across multiple cores using query time join. Following
> is my setup
> 3 cores - Solr 4.7
> core1: 0.5 million documents
> core2: 4 million documents and growing. This contains the child documents
> for
It seems as if the location of the suggester dictionary directory is not
core-specific, so when the suggester is defined for multiple cores, they
collide: you get exceptions attempting to obtain the lock, and the
suggestions bleed from one core to the other. There is an (undocumented)
"indexPath" parameter that can be used to work around this.
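If the indexPath route is taken, a per-core solrconfig.xml fragment would presumably look something like this (hedged heavily: the parameter is undocumented, and the property-based path below is an assumption, not a verified configuration):

```xml
<searchComponent name="suggest" class="solr.SuggestComponent">
  <lst name="suggester">
    <str name="name">mySuggester</str>
    <str name="lookupImpl">AnalyzingInfixLookupFactory</str>
    <!-- assumed: point each core at its own dictionary directory
         so the suggesters don't collide on one shared lock/index -->
    <str name="indexPath">${solr.core.name}_suggest_idx</str>
  </lst>
</searchComponent>
```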
restriction but independent
conditions between coreA - coreB and coreA - coreC.
Regards.
On Wed, May 14, 2014 at 5:27 AM, Jay Potharaju wrote:
> Hi,
> I am trying to join across multiple cores using query time join. Following
> is my setup
> 3 cores - Solr 4.7
> core1: 0.5
Hi,
I am trying to join across multiple cores using query time join. Following
is my setup
3 cores - Solr 4.7
core1: 0.5 million documents
core2: 4 million documents and growing. This contains the child documents
for documents in core1.
core3: 2 million documents and growing. Contains records
consolidation of fields from multiple cores and there are two fields
in common across all cores.
I have data stored in normalized form across 3 cores on same JVM. Want to
merge and select multiple fields depending on WHERE clause/common fields in
each core.
Any help would be appreciated!
multiple cores
Select T1.*,T2.*
FROM Table1 T1,Table2 T2
WHERE T1.id = T2.id
--
View this message in context:
http://lucene.472066.n3.nabble.com/Equivalent-of-SQL-JOIN-in-SOLR-across-multiple-cores-tp4106152.html
Sent from the Solr - User mailing list archive at Nabble.com.
1. Are the cores join-able?
2. Could you give me an example about how to write a multiple core join?
3. Can we do equivalent of JOIN in SOLR across multiple cores
Select T1.*,T2.*
FROM Table1 T1,Table2 T2
WHERE T1.id = T2.id
Reyes, Mark wrote:
> Any good/recent documentation that I can reference on setting up multiple
> cores in Solr 4.5.0?
>
> Thanks all,
> Mark
>
Any good/recent documentation that I can reference on setting up multiple cores
in Solr 4.5.0?
Thanks all,
Mark
Best,
Erick
On Fri, Oct 25, 2013 at 9:46 AM, Jamshaid Ashraf wrote:
> Hi,
>
> I'm using solr 4.3 and I have data in multiple cores which are different in
> structure like (Core1 - col1 & col2) & (Core2 - col3 & col4).
>
> Now I would like to run a search query o
Hi,
I'm using solr 4.3 and I have data in multiple cores which are different in
structure like (Core1 - col1 & col2) & (Core2 - col3 & col4).
Now I would like to run a search query on both of the cores and in the end
get a single result set from the 2 cores combined.
Please
Hello,
I still have this issue using Solr 4.4, removing firstSearcher queries did
make the problem go away.
Note that I'm using Tomcat 7 and that if I'm using my own Java application
launching an Embedded Solr Server pointing to the same Solr configuration
the server fully starts with no hang.
Hi
I want to display results as one dataset through solr using multicore. One
core contains EnglishCollectionData and another contains HindiCollectionData.
When I join the two cores the result is displayed when I give an English
parameter, but it does not work for a Hindi parameter. Could someone give the
solution?
Did you try latest solr? There was a library loading bug with multiple
cores. Not a perfect match to your description but close enough.
Regards,
Alex
On 21 Sep 2013 02:28, "Hayden Muhl" wrote:
> I have two cores "favorite" and "user" running in the same To
I have two cores "favorite" and "user" running in the same Tomcat instance.
In each of these cores I have identical field types "text_en", "text_de",
"text_fr", and "text_ja". These fields use some custom token filters I've
written. Everything was going smoothly when I only had the "favorite" core.
: Do all of your cores have "newSearcher" event listeners configured or just
: 2 (i'm trying to figure out if it's a timing fluke that these two are
stalled, or if it's something special about the configs)
All of my cores have both the "newSearcher" and "firstSearcher" event listeners
configured.
: Sorry for the multi-post, seems like the .tdump files didn't get
: attached. I've tried attaching them as .txt files this time.
Interesting ... it looks like 2 of your cores are blocked in loading while
waiting for the searchers to open ... not clear if it's a deadlock or why
though - in bot
bq: I'm actually not using the transaction log (or the
NRTCachingDirectoryFactory); it's currently set up to use the
MMapDirectoryFactory,
This isn't relevant to whether you're using the update log or not, this is
just how the index is handled. Look for something in your solrconfig.xml
like:
"tlog" in the name of
-Original Message-
From: Erick Erickson [mailto:erickerick...@gmail.com]
Sent: Friday, September 06, 2013 9:18 AM
To: solr-user@lucene.apache.org
Subject: Re: Solr 4.3 Startup with Multiple Cores Hangs on "Registering Core"
: I currently have Solr 4.3 set up with about 400 cores set to load upon
: start up. When starting Solr with an empty index for each core, Solr is
: able to load all of the cores and start up normally as expected.
: However, after running a dataimport on all cores and restarting Solr, it
: h
Hello,
I currently have Solr 4.3 set up with about 400 cores set to load upon start
up. When starting Solr with an empty index for each core, Solr is able to load
all of the cores and start up normally as expected. However, after running a
dataimport on all cores and restarting Solr, it hangs
Hi,
At lucene level we have MultiSearcher to search a few cores at the same time
with same query,
at solr level can we perform such search (if using same config/schema)? Here I
do not mean to
search across shards of the same collection but independent collections?
Thanks very much for helps, L
l-import
Thanks!
--
View this message in context:
http://lucene.472066.n3.nabble.com/Configuring-Tomcat-6-with-Solr431-with-multiple-cores-tp4078778.html
Sent from the Solr - User mailing list archive at Nabble.com.
On 3/20/2013 1:28 PM, Li, Qiang wrote:
I just want to share the solrconfig.xml and schema.xml. As there should be
differences between collections for other files, such as the DIH's
configurations.
I believe that SolrCloud treats each config set as a completely separate
entity, with no abilit
To share configs in SolrCloud you just upload a single config set and then link
it to multiple collections. You don't actually use solr.xml to do it.
- Mark
On Mar 19, 2013, at 10:43 AM, "Li, Qiang" wrote:
We have multiple cores with the same configurations. Before using SolrCloud, we
could use relative paths in solr.xml, but with Solr4 it seems relative paths
for the schema and config are not allowed in solr.xml.
Regards,
Ivan
> Tomcat, but haven't seen it with Solr 4.1 (yet), so if you're on 4.0,
> you might want to try upgrading.
>
> Michael Della Bitta
>
> Appinions
> 18 East 41st Street, 2nd Floor
> New York, NY 10017-6271
>
> www.appinions.com
>
> Where Influence Isn’t a Game
>
> On Wed, Feb 6, 2013 at 6:09 AM, Marcos Mendez wrote:
Hi,
I'm deploying the SOLR war in Geronimo, with multiple cores. I'm seeing the
following issue and it eats up a lot of memory when shutting down. Has
anyone seen this and have an idea how to solve it?
Exception in thread "DefaultThreadPool 196" java.lang.OutOfMemoryError:
P
Hi,
I need to build a UI that can access multiple cores. And combine them all on
an Everything tab.
The solrajax example only has 1 core.
How do I setup multicore with solrajax?
Do I setup 1 manager per core? How much of a performance hit will I take
with multiple managers running?
Is there a
Hi Otis,
Thank you so much, that's exactly what I need!
Thanks
Nicholas
On Mon, Nov 26, 2012 at 10:28 PM, Otis Gospodnetic <
otis.gospodne...@gmail.com> wrote:
> Would http://wiki.apache.org/solr/Solrj#EmbeddedSolrServer save you some
> work?
>
> Otis
> --
> SOLR Performance Monitoring - http://sematext.com/spm/index.html
You can simplify your code by searching across cores in the SearchComponent:
1) public class YourComponent implements SolrCoreAware
--> Grab instance of CoreContainer and store (mCoreContainer =
core.getCoreDescriptor().getCoreContainer();)
2) In the process method:
* grab the core requested (SolrC
Would http://wiki.apache.org/solr/Solrj#EmbeddedSolrServer save you some
work?
Otis
--
SOLR Performance Monitoring - http://sematext.com/spm/index.html
Search Analytics - http://sematext.com/search-analytics/index.html
On Mon, Nov 26, 2012 at 7:18 PM, Nicholas Ding wrote:
> Hi,
>
> I'm workin
is hard.
quote: http://it.wikipedia.org/wiki/Bruno_Munari
--
View this message in context:
http://lucene.472066.n3.nabble.com/Searching-in-multiple-cores-via-SolrJ-tp4020320p4020359.html
Sent from the Solr - User mailing list archive at Nabble.com.
thanks anyway, Shawn.
On Wed, Nov 14, 2012 at 5:24 PM, Carlos Alexandro Becker wrote:
> hmm... the less-horrible way I could think (if solr doesn't support it by
> default), is to create another core that "mix" the informations from other
> cores, and then, search in it.
>
> But, well, it would
hmm... the less-horrible way I could think (if solr doesn't support it by
default), is to create another core that "mix" the informations from other
cores, and then, search in it.
But, well, it would be ugly.
On Wed, Nov 14, 2012 at 5:14 PM, Shawn Heisey wrote:
> On 11/14/2012 10:48 AM, Carlos Alexandro Becker wrote:
On 11/14/2012 10:48 AM, Carlos Alexandro Becker wrote:
Hm, and in the case of my cores have different schemes?
You might have to do all the heavy lifting yourself, after using SolrJ
to retrieve the results. I will say that I have no idea -- there may be
ways you can avoid doing that. I hope
Hm, and in the case of my cores have different schemes?
Thanks in advance.
On Wed, Nov 14, 2012 at 3:35 PM, Shawn Heisey wrote:
> On 11/14/2012 10:19 AM, Carlos Alexandro Becker wrote:
>
>> What's the best way to search in multiple cores and merge the results
>> usin
On 11/14/2012 10:19 AM, Carlos Alexandro Becker wrote:
What's the best way to search in multiple cores and merge the results using
solrj?
Your best bet really is to have Solr do this for you with distributed
search. You can add the shards parameter to your queries easily with
SolrJ, o
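At the parameter level, the distributed-search route looks like this for cores sharing a schema (the host, port, and core names are assumptions; in SolrJ the equivalent is setting the same `shards` parameter on the query):

```python
# Query several same-schema cores as shards of one logical search.
shards = ",".join([
    "localhost:8983/solr/core_en",  # assumed core names
    "localhost:8983/solr/core_de",
])
params = {
    "q": "field:value",
    "shards": shards,  # Solr fans out the query and merges the results
}
```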