Hi
We have been using SolrCloud 4.6 in production for our search service
for two years. It now holds 700 GB in one cluster comprised of 3 machines
with SSDs. At the beginning everything went well, but more and more
business services interfered with our search service, and a problem
Hi all
Thanks for your reply. I have spent much time investigating, and I will
post some logs of 'top' and I/O in a few days, when the crash comes again.
2016-03-08 10:45 GMT+08:00 Shawn Heisey :
> On 3/7/2016 2:23 AM, Toke Eskildsen wrote:
> > How does this relate to YouPeng reporting that the
fi
sleep 5
done
---------
2016-03-08 21:39 GMT+08:00 YouPeng Yang :
> Hi all
> Thanks for your reply.I do some investigation for much time.and I will
> post some logs of th
as time goes on. However, the high sys CPU of unknown cause has now
become a nightmare, so I am looking for help from the community.
Have you had a similar experience, and how did you solve this problem?
Best Regards
2016-03-17 14:16 GMT+08:00 Shawn Heisey :
> On 3/16/2016 8:27 PM, YouPeng Y
your memory - you paid for it :)
> >
> >Otis
> >--
> >Monitoring - Log Management - Alerting - Anomaly Detection
> >Solr & Elasticsearch Consulting Support Training - http://sematext.com/
Hi Shawn
Here is my top screenshot:
https://www.dropbox.com/s/jaw10mkmipz943y/topscreen.jpg?dl=0
It was captured when my system is normal. I have also reduced the memory
from 64 GB down to 48 GB.
We have two hardware clusters, each comprised of 3 machines, and on one
c
the crash? Would you please give me some suggestions?
Best Regards.
2016-03-16 14:01 GMT+08:00 YouPeng Yang :
> Hello
> The problem has appeared several times, but I could not capture the top
> output. My monitoring script is as follows.
> I check whether the sys CPU usage exceeds 3
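A runnable sketch of such a watchdog is below. The 30% threshold is an assumption (the mail is cut off at "exceed 3"), and the log path is illustrative; %sys is computed from two samples of /proc/stat, one second apart.

```shell
#!/bin/bash
# Watchdog sketch: sample /proc/stat twice, compute %sys over the interval,
# and capture `top` output when it spikes. Threshold and path are assumptions.
THRESHOLD=30
read -r cpu u1 n1 s1 i1 rest < /proc/stat   # cumulative jiffies: user nice system idle ...
sleep 1
read -r cpu u2 n2 s2 i2 rest < /proc/stat
total=$(( (u2 - u1) + (n2 - n1) + (s2 - s1) + (i2 - i1) ))
sys=$(( total > 0 ? 100 * (s2 - s1) / total : 0 ))
if [ "$sys" -ge "$THRESHOLD" ]; then
  # keep the evidence the thread keeps asking for
  command -v top >/dev/null && top -b -n 1 > "/tmp/top.$(date +%s).log"
fi
echo "sys cpu: ${sys}%"
```

Wrapped in `while true; do ...; done`, this matches the `fi / sleep 5 / done` tail of the script pasted earlier in the thread.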
/p3ctuxb3t1jgo2e/threaddump1.jpg?dl=0
https://www.dropbox.com/s/w0uy15h6z984ntw/threaddump2.jpg?dl=0
https://www.dropbox.com/s/0frskxdllxlz9ha/threaddump3.jpg?dl=0
https://www.dropbox.com/s/46ptnly1ngi9nb6/threaddump4.jpg?dl=0
Best Regards
2016-03-18 14:35 GMT+08:00 YouPeng Yang :
> Hi
> To P
about the "Overlapping onDeckSearchers" warning: we set
<maxWarmingSearchers> to 20 and the related flag to true. Is that right?
Best Regards.
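For reference, the advice later in the thread (bring the on-deck searchers back to 2) corresponds to a solrconfig.xml fragment like this sketch; a large maxWarmingSearchers value tends to mask overlapping-searcher problems rather than cure them:

```xml
<!-- 2 is the usual/default value; 20 hides commit-rate problems -->
<maxWarmingSearchers>2</maxWarmingSearchers>
<useColdSearcher>false</useColdSearcher>
```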
2016-03-29 22:31 GMT+08:00 Toke Eskildsen :
> On Tue, 2016-03-29 at 20:12 +0800, YouPeng Yang wrote:
> > Our system still goes down as times
the on deck searchers back to
> 2 and figure out why you have so many overlapping searchers.
>
> Best,
> Erick
>
> On Tue, Mar 29, 2016 at 8:57 PM, YouPeng Yang
> wrote:
> > Hi Toke
> > The number of collections is just 10. One of the collections has 43
> s
Hi
We have used Solr 4.6 for 2 years. If you post more logs, maybe we can
help fix it.
2016-04-21 6:50 GMT+08:00 Li Ding :
> Hi All,
>
> We are using SolrCloud 4.6.1. We have observed following behaviors
> recently. A Solr node in a Solrcloud cluster is up but some of the cores
> on the nodes are
Hi Shawn
Thanks a lot. It is greatly helpful.
2014-04-23 0:43 GMT+08:00 Shawn Heisey :
> On 4/22/2014 10:02 AM, yypvsxf19870706 wrote:
>
>> I am curious about the effects of having more than 2 billion docs in
>> a core. We plan to have 5 billion docs per core.
>>
>> Please give me some suggesti
Within SolrCloud 4.6.0, I have a master core that I index to using
DIH. In the meantime, I also update the index with SolrJ.
Things went well until I created a replica of the master; then the
exception in [1] came out.
Why do I get this exception?
[1]---
org.apache.solr.common.So
SolrCloud 4.6.0
I am using SolrCloud 4.6.0 with a master and a replica. I adopt the
DistributedUpdateProcessorFactory to distribute docs between the master
and the replica.
First, if the master and replica are both empty, the DIH succeeds.
Then, on running DIH again, the replica always throws an
Unsupp
my iPhone
>
> On 2014-4-24, at 18:30, Mikhail Khludnev wrote:
>
> > Are you sure that field _version_ is declared correctly in schema.xml?
> >
> >
> > On Thu, Apr 24, 2014 at 12:30 PM, YouPeng Yang <
> yypvsxf19870...@gmail.com>wrote:
> >
> >> SolrC
Hi
I have just compared versions 4.6.0 and 4.7.1 and noticed that the time
in the getConnection function is taken with System.nanoTime() in 4.7.1,
whereas it was System.currentTimeMillis() before.
I am curious about the reason for the change and its benefit. Is it
necessary?
I hav
Thank you very much.
2014-04-26 20:31 GMT+08:00 YouPeng Yang :
> Hi
>I have just compare the difference between the version 4.6.0 and 4.7.1.
> Notice that the time in the getConnection function is declared with the
> System.nanoTime in 4.7.1 ,while System.currentTimeMillis().
what the situation may lead to
the problem.
Thanks very much.
2014-04-26 20:49 GMT+08:00 YouPeng Yang :
> Hi Mark Miller
> Sorry to pull you into this discussion.
> I noticed that Mark Miller reported this issue in
> https://issues.apache.org/jira/browse/SOLR-5734 accordin
Hi
I have a collection with 3 shards.
I want to delete some docs in one shard with the command:
http://10.1.22.1:8082/solr/tv_201402/update?&stream.body=<delete><query>BEGINTIME:["2014-03-01
00:00:00" TO *]</query></delete>&shards=tv_201402&commit=true
As the highlighted expression shows, it is supposed that only docs in the sh
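For comparison, a well-formed delete-by-query request looks like the sketch below (the archive appears to have stripped the XML from stream.body, so the exact original is uncertain). The curl line is only printed here; run it against your own host. Note that in SolrCloud a delete is forwarded to every shard of the collection: the shards= parameter routes queries, not updates.

```shell
# Build the delete body and show the curl invocation (host/collection from the mail).
# The date is rewritten in Solr's canonical Z-format; the original used a space.
BODY='<delete><query>BEGINTIME:["2014-03-01T00:00:00Z" TO *]</query></delete>'
URL='http://10.1.22.1:8082/solr/tv_201402/update?commit=true'
echo "curl \"$URL\" -H 'Content-Type: text/xml' --data-binary '$BODY'"
```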
Hi
Could anyone give some suggestions?
Regards
2014-05-19 11:31 GMT+08:00 YouPeng Yang :
> Hi
> I have a colloection with 3 shards.
> I want to delete some docs in one shard with the command:
>
>
> http://10.1.22.1:8082/solr/tv_201402/update?&stream.body=BEGINTIME:[
mitted..
-
Regards.
2014-05-19 15:46 GMT+08:00 YouPeng Yang :
> Hi
> Anyone gives some suggestions.
>
>
> Regards
>
>
> 2014-05-19 11:31 GMT+08:00 YouPeng Yang :
>
> Hi
>> I have a
it will affect.
>
> It sounds like SolrCloud does not support the feature you would really
> like: support for distrib=false.
>
> You can file a Jira request for a feature "improvement."
>
> -- Jack Krupansky
>
> -Original Message- From: YouPeng Yang
> Sent
Hi.
I am using Solr 4.6. One of my cores contains 50 million docs. I just
clicked the Optimize button on the overview page of the core, and the
whole web instance hung; one symptom was that the DIH on another core
hung as well.
Is this a known problem, or is something wrong with my environment?
Regards
Hi Marcin
Thanks for your mail; now I know why my cloud hangs when I click the
optimize button on the overview page of the shard.
2014-05-20 15:25 GMT+08:00 Ahmet Arslan :
> Hi Marcin,
>
> just a guess, pass distrib=false ?
>
>
>
> Ahmet
>
>
> On Tuesday, May 20, 2014 10:23 AM, Marcin Rzew
Hi
Maybe you can try _route_=myshard? I will check the source code and let
you know later.
2014-05-20 17:19 GMT+08:00 YouPeng Yang :
> Hi Marcin
>
> Thanks to your mail,now I know why my cloud hangs when I just click the
> optimize button on the overview page of the shard.
>
>
timize rewrites index so you might need
> additional disk space for this process. Optimizing works fine however I'd
> like to be able to do it on a single shard as well.
>
>
> On 20 May 2014 11:19, YouPeng Yang wrote:
>
> > Hi Marcin
> >
> > Thanks to your
Hi
When doing DIH into one of the shards of my SolrCloud collection, I
notice that every time a commit happens in that shard, all the other
shards commit too.
I have checked the source code of
DistributedUpdateProcessor.processCommit; it shows that processCommit is
propagated to all the shards in the collection.
W
Hi
As the title says, I am using Solr 4.6 with SolrCloud. One of my leader
cores within a shard has been unloaded, yet a ping to the unloaded core
returns OK.
Is that normal?
How can I send the right ping request to the core and get a non-OK
response?
Hi
I built my SolrCloud with Solr 4.6.0 (Java 1.7.0_45). In my cloud I have
a collection with 30 shards, and each shard has one replica.
Each shard's core contains nearly 50 million docs, about 15 GB in size,
as does its replica.
Before applying my cloud in the real world, I am doing a pe
Hi
I think it is wonderful to have caches autowarmed when a commit or soft
commit happens. However, if I want to warm the whole cache rather than
only autowarmCount entries, the default autowarming operation takes a
very, very long time. So it occurred to me that maybe it is a good idea
to just chang
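For context, autowarming is configured per cache in solrconfig.xml; a sketch with illustrative sizes (autowarmCount is the number of old-cache entries replayed against the new searcher):

```xml
<filterCache class="solr.FastLRUCache"
             size="512"
             initialSize="512"
             autowarmCount="128"/>
```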
ct if I
> have this wrong) is that the underlying Lucene document ids have a
> potential to change and so when a newSearcher is created the caches must be
> regenerated and not copied.
>
> Matt
>
> -Original Message-
> From: YouPeng Yang [mailto:yypvsxf19870...@gmail.c
eal
> problem before worrying about a solution! ;)
>
> Best,
> Erick
>
>
> On Fri, Jul 25, 2014 at 6:45 AM, Shawn Heisey wrote:
>
> > On 7/24/2014 8:45 PM, YouPeng Yang wrote:
> > > To Matt
> > >
> > > Thank you,your opinion is very valuable
}
2014-07-25 21:45 GMT+08:00 Shawn Heisey :
> On 7/24/2014 8:45 PM, YouPeng Yang wrote:
> > To Matt
> >
> > Thank you,your opinion is very valuable ,So I have checked the source
> > codes about how the cache warming up. It seems to just put items of the
> >
Hi
I query Solr with an fq clause like:
fq=BEGINTIME:[2013-08-25T16:00:00Z TO *] AND BUSID:(M3 OR M9)
I am curious about the parsing process and want to study it.
Which Java file describes the parsing of the fq clause?
Thanks
Regards.
> generated by JFlex, and a lot of the logic is in the base class of the
> generated class, org.apache.solr.parser.SolrQueryParserBase.java.
>
> Good luck! Happy hunting!
>
> -- Jack Krupansky
>
> -Original Message- From: YouPeng Yang
> Sent: Monday, October 21, 2013 2
Hi
I am using SolrCloud with Solr 4.4, and I am trying the SolrJ API
deleteByQuery to delete from the index, as follows:
CloudSolrServer cloudServer = new CloudSolrServer(myZKhost);
cloudServer.connect();
cloudServer.setDefaultCollection
cloudServer.deleteByQuery("indexname:shardTv_20131010");
cloudServer.commit();
n get
> hits on documents, something like
> blah/collection/q=indexname:shardTv_20131010
>
> Best,
> Erick
>
>
> On Wed, Oct 23, 2013 at 8:20 AM, YouPeng Yang >wrote:
>
> > Hi
> > I am using SolrCloud withing solr 4.4 ,and I try the SolrJ API
> >
Hi
I'm using SolrCloud integrated with HDFS, and I found there are lots of
small files in the index.
I'd like to have larger index files produced while doing a DIH
full-import. Any suggestions to achieve this goal?
Regards.
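Two knobs that usually produce larger index files are the indexing RAM buffer (bigger flushed segments) and the merge policy's maximum merged-segment size. An illustrative solrconfig.xml sketch in Solr 4.x syntax (the values are assumptions, not recommendations):

```xml
<!-- larger RAM buffer => larger flushed segments -->
<ramBufferSizeMB>256</ramBufferSizeMB>
<!-- cap on how large merged segments may grow -->
<mergePolicy class="org.apache.lucene.index.TieredMergePolicy">
  <double name="maxMergedSegmentMB">5120</double>
</mergePolicy>
```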
-logs-softcommit-and-commit-in-sorlcloud/
>
> The key is the section on truncating the tlog.
>
> And note the sizes of these segments will change as they're
> merged anyway.
>
> Best,
> Erick
>
>
> On Wed, Dec 4, 2013 at 4:42 AM, YouPeng Yang >wrote:
>
Hi
I have hit a weird problem.
I try to create a core within Solr 4.6.
First, on my Solr web server (Tomcat), the log in [1] comes out.
Then lots of Overseer INFO logs come out, as in [2], and the creation
fails.
I have also noticed that there are a lot of qn nodes in the Overseer
queue on ZooKeeper:
[zk: loc
configHDFS_report_his.xml&collection=repCore
I just upgraded to Solr 4.6; coreNodeName worked well in Solr 4.4.
Please help.
Regards
2013/12/17 YouPeng Yang
> Hi
>
> I get an weird problem.
> I try to create a core within Solr4.6.
>
> Firstly, on my solr web serv
Hi
Can anyone help me?
I can only set coreNodeName manually for one core in a collection; it
fails when I add a new core to an existing collection.
Please, please.
2013/12/18 YouPeng Yang
> Hi
> I have to add some necessary information.
> I failed to create a core
Hi solr users
I have a string field that stores an XML string. Now I want to update
that field. I use the command:
http://10.7.23.122:8080/solr/meta_core/update?stream.body=shardTv_20131031"REP_DATE>20130930
and REP_DATE<20131003
"&commit=true
The highlighted string is what I want to update. However
Hi
Thanks for your reply.
That is actually how I want to update the doc: I intend to write the
XML string into one of the fields of my doc.
I have not found the URL form I want.
Anyway, thanks a lot.
Regards.
2013/12/20 Gora Mohanty
> On 20 December 2013 13:57, YouP
ohanty
> On 20 December 2013 14:18, YouPeng Yang wrote:
> > Hi
> >Thanks for your reply.
> >
> >The is actually what I want to update the doc. That
> > is I intend to update the xml string to one of the fields of my doc.
> [...]
>
> Ah, sor
via the admin UI and see that the data in the
> index
> > is correct? Have you ever had valid data in that field?
> >
> > That would already confirm whether the problem is in your index
> definition
> > or your indexing code.
> >
> > You still haven't
Hi users
I get a very weird problem within Solr 4.6.
I just want to reload a core:
http://10.7.23.125:8080/solr/admin/cores?action=RELOAD&core=reportCore_201210_r1
However, it gives the exception in [1]. According to the exception, the
SolrCore 'collection1' does not exist. I created a default core not with
Hi users
Solr supports writing and reading its index and transaction log files on
the HDFS distributed filesystem.
I am curious whether there are any further improvements planned for the
HDFS integration.
For example, Solr's native replication will make multiple copies of the
maste
Hi
I am using Solr 4.6. When I create a core with the request:
http://10.7.23.125:8081/solr/admin/cores?action=CREATE&schema=schema.xml&shard=reportCore_201202_b0&coreNodeName=core_node1&collection.configName=myconf&name=reportCore_201202_b0&action=CREATE&config=solrconfig2.xml.bak&coll
Hi
Merry Christmas.
Before this mail, I had been struggling with a weird problem for a few
days when creating a new core with both an explicit shard and
coreNodeName. I posted a few mails to the mailing list, but no one gave
any suggestions; maybe they did not encounter the same problem.
I
ved from ZK");
throw new SolrException(ErrorCode.NOT_FOUND,coreNodeName +"
is removed");
}
}
}
}
}
--
Regards
2013/12/27 Mark Miller
> If you are seeing an NPE there, sounds like y
Hi users
I have built a SolrCloud on Tomcat. The cloud contains 22 shards with no
replicas, and it is integrated with HDFS.
After importing data from Oracle into the SolrCloud, I restarted Tomcat,
and it does not come alive again.
It always throws exceptions.
I'm really hav
Hi Mark
I have filed a jira about the NPE:
https://issues.apache.org/jira/browse/SOLR-5580
2013/12/27 YouPeng Yang
> Hi Mark.
>
>Thanks for your reply.
>
> I will file a JIRA issue about the NPE.
>
>By the way,would you look through the Question 2. After
Hi
There is a failed core in my SolrCloud cluster (Solr 4.6 with HDFS 2.2)
when I start my SolrCloud. I noticed that there are lots of tlog files
[1].
The start process gets stuck because it needs to do log replay; however,
it encounters the error in [2].
I do think it is abnormal that there are still
ull) {
log.info("core_removed This core is removed from ZK");
throw new SolrException(ErrorCode.NOT_FOUND,coreNodeName +"
is removed");
}
}
}
}
}
--------
Hi Mark Miller
How can a log replay fail?
I cannot figure out the reason for the exception; there seems to be no
BigDecimal-typed field in my schema.
Please give some suggestions.
The exception:
133462 [recoveryExecutor-48-thread-1] WARN org.apache.solr.update.
UpdateLog – Starting
initially described, Solr's behavior when storing data on HDFS or YouPeng's
> other thread (Maybe a bug for solr 4.6 when create a new core) that looks
> like it might be a near duplicate of this one?
> >
> > Thanks,
> > Greg
> >
> >> On Dec 26, 2013, a
Hi
I find that the CPU usage is very high while the Tomcat instance
containing Solr 4.6 is idle.
PID 13359 shows that my idle Solr web container takes a high share of
CPU.
Any insights?
[solr@fkapp1 ~]$ top -d -1 -u solr
top - 17:30:15 up 302 days, 7:10, 5 users, load average: 4.54, 4.52, 4.47
Tas
an 15, 2014 at 6:29 AM, Mikhail Khludnev <
> mkhlud...@griddynamics.com> wrote:
>
> > Hello,
> >
> > Invoke top for particular process displaying threads enabled.
> > Find the hottest thread PID.
> > invoke jstack for this process, find the suspicious threa
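The quoted procedure can be sketched as a few shell commands. A real Solr JVM pid would come from pgrep; here the current shell's pid stands in so the lines can actually run:

```shell
# 1. per-thread top view to spot the hottest thread; 2. convert its PID to hex,
#    because jstack labels threads with a hex "nid="; 3. grep the stack dump.
PID=$$                                             # stand-in for the Solr JVM pid
top -H -b -n 1 -p "$PID" 2>/dev/null | tail -n 5   # note the hottest thread's PID here
TID=$PID                                           # suppose this thread looked hottest
NID=$(printf '0x%x' "$TID")
command -v jstack >/dev/null && jstack "$PID" 2>/dev/null | grep "nid=$NID"
echo "search the stack dump for nid=$NID"
```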
Hi
By the way, after I restarted the web container, the CPU usage returned
to normal.
So what causes this situation to arise?
Regards
2014/1/16 YouPeng Yang
> Hi
> Thanks for the reply.
> I get the information as following:
>
>
this instance? Are you sure that everything was
> fine with the heap?
>
>
> On Thu, Jan 16, 2014 at 11:36 AM, YouPeng Yang >wrote:
>
> > Hi
> > Thanks for the reply.
> > I get the information as following:
> > -
Hi
We built our SolrCloud with Solr 4.6.0 and JDK 1.7.0_60; our cluster
contains 360 GB x 3 of data (one core with 2 replicas).
Our cluster has become unstable, meaning it occasionally runs into long
full GCs. This is awful: the full GC takes so long that SolrCloud
considers the node down.
Normally full
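Before tuning anything, it helps to log the pauses. These are standard JDK 7 HotSpot flags (the log path is illustrative) to add to the Tomcat JAVA_OPTS:

```shell
# GC logging flags for JDK 7: every collection, with timestamps and
# total stop-the-world time. The log path is an example.
GC_OPTS="-Xloggc:/var/log/solr/gc.log -XX:+PrintGCDetails \
 -XX:+PrintGCDateStamps -XX:+PrintGCApplicationStoppedTime"
echo "JAVA_OPTS=\"\$JAVA_OPTS $GC_OPTS\""
```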
ps://jmeter.apache.org/usermanual/jmeter_accesslog_sampler_step_by_step.pdf
>
> wunder
> Walter Underwood
> wun...@wunderwood.org
> http://observer.wunderwood.org/
>
>
> On Sep 12, 2014, at 7:10 AM, Shawn Heisey wrote:
>
> > On 9/12/2014 7:36 AM, YouPeng Yang wrote:
>
Hi
One of my fields, called AMOUNT, is a string, and I want to calculate
the sum of this field.
I have tried the stats component, but it only gives stats information
without a sum item, just as follows:
5000
24230
26362
Is there
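The stats component computes sum and mean only for numeric (and date) fields; on a string field it reports min/max/count and no sum. A sketch of a numeric definition for schema.xml (the field would need reindexing):

```xml
<fieldType name="tdouble" class="solr.TrieDoubleField" precisionStep="8"/>
<field name="AMOUNT" type="tdouble" indexed="true" stored="true"/>
```

With that in place, stats.field=AMOUNT returns a sum entry.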
Hi All
With Solr 4.4, I am trying HDFS and SolrCloud.
I create a SolrCloud core on HDFS using:
http://10.7.23.125:8080/solr/admin/cores?action=CREATE&name=testcore8_1&shard=shard13&collection.configName=myconf&schema=schema.xml&config=solrconfig3.xml&collection=collection1&dataDir=/s
iven us no information to go on.
>
> Please review:
> http://wiki.apache.org/solr/UsingMailingLists
>
> At a guess, does your heihei directory exist and does it have
> the proper configuration files?
>
> Best
> Erick
>
>
> On Tue, Aug 20, 2013 at 6:38 AM, YouPeng
Hi Erick, Tanya
I happened to see this mail, and I would like to report another related
issue.
I created a core through the solr/admin URL.
When I set the collection name to /, the Tomcat catalina.out emits lots
of exceptions: "Path must not end with / character". Worse, it seems
never t
Hi All
I have some difficulty understanding the relation between optimize and
merge.
Can anyone give some tips about the difference?
Regards
Hi all
About RAMBufferSize and commit, I have read the doc:
http://comments.gmane.org/gmane.comp.jakarta.lucene.solr.user/60544
I still cannot figure out how they work together.
Given the settings:
<ramBufferSizeMB>10</ramBufferSizeMB>
<autoCommit>
  <maxDocs>${solr.autoCommit.maxDocs:1000}</maxDocs>
  <openSearcher>false</openSearcher>
</autoCommit>
If the indexed docs reach 10
Hi all
I am trying to integrate Solr with HDFS HA. When I start the Solr
server, the exception in [1] comes out.
I do know this is because the hadoop.conf.Configuration in
HdfsDirectoryFactory.java does not include the HA configuration.
So I want to know: in Solr, is there any way to includ
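One approach is to point HdfsDirectoryFactory at a Hadoop configuration directory that contains the HA settings, and use the nameservice (no NameNode port) in the HDFS URI. A solrconfig.xml sketch with illustrative paths:

```xml
<directoryFactory name="DirectoryFactory" class="solr.HdfsDirectoryFactory">
  <!-- HA nameservice URI: no host:port of a single NameNode -->
  <str name="solr.hdfs.home">hdfs://nameservice1/solr</str>
  <!-- directory holding hdfs-site.xml with the HA nameservice definitions -->
  <str name="solr.hdfs.confdir">/etc/hadoop/conf</str>
</directoryFactory>
```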
Hi Shawn
Thanks a lot. I got it.
Regards
2013/8/22 Shawn Heisey
> On 8/22/2013 2:25 AM, YouPeng Yang wrote:
> > Hi all
> > About the RAMBufferSize and commit ,I have read the doc :
> > http://comments.gmane.org/gmane.comp.jakarta.lucene.solr.user/60544
> &
> <bool name="solr.hdfs.blockcache.direct.memory.allocation">true</bool>
> <int name="solr.hdfs.blockcache.blocksperbank">16384</int>
> <bool name="solr.hdfs.blockcache.read.enabled">true</bool>
> <bool name="solr.hdfs.blockcache.write.enabled">true</bool>
> <bool name="solr.hdfs.nrtcachingdirectory.enable">true</bool>
> <int name="solr.hdfs.nrtcachingdirectory.maxmergesizemb">16</int>
> <int name="solr.hdfs.nrtcachingdirectory.maxcachedmb">192</int>
> <str name="solr.hdfs.home">hdfs://nameservice1:8020/solr</str>
> <str name="solr.hdfs.confdir">/etc/hadoop/conf.cloudera.hdfs1</str>
>
>
> Thanks,
> Greg
>
> -Or
Hi smanad
If I am not mistaken, you can append the coreNodeName parameter to your
creation command:
http://10.7.23.125:8080/solr/admin/cores?action=CREATE&name=dfscore8_3&shard=shard3_3&collection.configName=myconf&schema=schema.xml&config=solrconfig3.xml&collection=collection1&dataDir
Hi solr user
I'm testing Solr with HDFS. I happened to stop my HDFS before stopping
Solr. After that I started Solr again, and the exception in [1] came
out; I could not ignore it.
Can anybody explain the reason and how to avoid it?
Regards
[1]==
Hi jerome.dupont
Please check what the updateHandler is in your solrconfig.xml; by
default it is solr.NoOpDistributingUpdateProcessorFactory.
db-data-config.xml
sample
2013/9/3
>
> H
Hi solr users
I'm testing replication within SolrCloud.
I just uncommented the replication section separately on the master and
the slave node.
The replication section on the master node:
<str name="replicateAfter">commit</str>
<str name="replicateAfter">startup</str>
<str name="confFiles">schema.xml,stopwords.txt</str>
and on the sl
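For reference, the slave side that the truncated line was about to describe usually looks like this sketch (the master URL and poll interval are illustrative):

```xml
<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="slave">
    <!-- URL of the master core; hostname here is a placeholder -->
    <str name="masterUrl">http://master-host:8080/solr/core1</str>
    <str name="pollInterval">00:00:60</str>
  </lst>
</requestHandler>
```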
Hi again
I'm using Solr4.4.
2013/9/5 YouPeng Yang
> HI solrusers
>
>I'm testing the replication within SolrCloud .
>I just uncomment the replication section separately on the master and
> slave node.
>The replication section setting on the master no
from the master node?
regards
2013/9/5 YouPeng Yang
> Hi again
>
> I'm using Solr4.4.
>
>
> 2013/9/5 YouPeng Yang
>
>> HI solrusers
>>
>>I'm testing the replication within SolrCloud .
>>I just uncomment the replication sectio
Hi all
In which situations does a core become "down"? And how can I simulate
those situations?
Any suggestions will be appreciated.
Regards
Hi solr users
I want to create a core with node_name through the API
CloudSolrServer.query(SolrParams params).
For example:
ModifiableSolrParams params = new ModifiableSolrParams();
params.set("qt", "/admin/cores");
params.set("action", "CREATE");
params.set("nam
Hi
I'm using DIH to import data from an Oracle database with Solr 4.4.
Finally I get 2.7 GB of index data and 4.1 GB of tlog data, and the
number of docs was 1090.
At first, I moved the 2.7 GB of index data to another new Solr server
in Tomcat 7. After I started Tomcat, I found the total number o
/collection1/data/index/_149s.fdx (Too
many open files)
2013/9/17 Shawn Heisey
> On 9/16/2013 8:26 PM, YouPeng Yang wrote:
> >I'm using the DIH to import data from oracle database with Solr4.4
> >Finally I get 2.7GB index data and 4.1GB tlog data.And the number of
&
/_28x.nvm
386B  index/_28x.si
...omitted
-
2013/9/17 YouPeng Yang
> Hi Shawn
>
>Thank your very much for your reponse.
>
>I lauch the full-import task on the web page of solr/admin . And I
Hi
According to
http://wiki.apache.org/solr/SolrPerformanceProblems#Slow_startup,
the tlog switches to a new file when a hard commit happens.
However, my tlogs show something different:
tlog.003 5.16GB
tlog.004 1.56GB
tlog.002 610
> http://blog.mikemccandless.com/2011/02/visualizing-lucenes-segment-merges.html
>
> Best,
> Erick
>
>
> On Tue, Sep 17, 2013 at 6:36 AM, Shawn Heisey wrote:
>
> > On 9/17/2013 12:32 AM, YouPeng Yang wrote:
> > > Hi
> > >Another werid prob
Hi
I want to import the dataset in one partition of a partitioned table
with DIH, and I would like to specify the partition explicitly when I
start the import job.
To be specific:
1. I define the DIH configuration like this:
2. I send the URL:
http://localhost:8983/solr/dataimport?command=full-import&p
Hi Shalin
Thanks a lot. That is exactly what I needed.
Regards
2013/9/23 Shalin Shekhar Mangar
> You can use request parameters in your query e.g.
>
>
>
> http://wiki.apache.org/solr/DataImportHandler#Accessing_request_parameters
>
> On Mon, Sep 23, 2013 at 8:26 AM,
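Concretely, a request parameter can be spliced into the DIH entity query. In this sketch the parameter name "partition" and the table/column names are illustrative assumptions:

```xml
<!-- data-config.xml: ${dataimporter.request.partition} is filled from the URL,
     e.g. /dataimport?command=full-import&partition=P201309 -->
<entity name="report"
        query="SELECT id, amount FROM report PARTITION (${dataimporter.request.partition})"/>
```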
Hi
I have two collections with different schemas,
and I want to do an inner join, as in SQL:
select A.xx, B.xx
from A, B
where A.yy = B.yy
How can I achieve this in Solr? I'm using SolrCloud with Solr 4.4.
regards
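Solr cannot return fields from two collections in one row the way SQL does, but the join query parser can filter docs in A by a condition on B (in Solr 4.x, fromIndex must name a core located on the same node). A sketch using the names from the example; the query string is only built and printed here:

```shell
# {!join} keeps docs in A whose yy matches the yy of a doc in core B;
# only A's fields (e.g. xx) can be returned.
JOIN_Q='{!join from=yy to=yy fromIndex=B}*:*'
echo "q=$JOIN_Q&fl=xx"
```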