Hi Mark, Thanks for confirming Dwane's advice from your own experience. I
will shift to a streaming expressions implementation.
Best
Goutham
/streaming-expressions.html.
Thanks,
Dwane
Hi,
I have around 30M documents in Solr, and I am doing repeated *:* queries
with rows=1, and changing start to 0, 1, 2, and so on, in a
loop in my script (using pysolr).
At the start of the iteration, the calls to Solr were taking less than 1
sec each. After running for a few hours (
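The slowdown described above is the classic deep-paging cost: start=N makes every shard collect and discard N documents on every request, so each page gets more expensive than the last. Besides streaming expressions, cursorMark pagination keeps the per-page cost flat. A minimal sketch, assuming a callable `search` standing in for a real client call (e.g. pysolr.Solr.search) and `id` as the uniqueKey:

```python
# Deep paging with start=N forces Solr to collect and skip N documents per
# shard per request. cursorMark keeps the cost per page constant instead.

def fetch_all(search, rows=1000):
    """Yield documents page by page using cursorMark instead of start."""
    cursor = "*"                      # "*" starts a new cursor
    while True:
        resp = search(q="*:*", rows=rows,
                      sort="id asc",  # cursorMark requires a sort on the uniqueKey
                      cursorMark=cursor)
        yield from resp["docs"]
        next_cursor = resp["nextCursorMark"]
        if next_cursor == cursor:     # cursor stopped advancing: we're done
            break
        cursor = next_cursor
```

The loop terminates when Solr returns the same cursor it was given, which is how the cursor API signals the end of the result set.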
On Mon, 2019-10-07 at 10:18 -0700, Wei wrote:
> /solr/mycollection/select?stats=true&stats.field=unique_ids&stats.cal
> cdistinct=true
...
> Is there a way to block certain solr queries based on url pattern?
> i.e. ignore the stats.calcdistinct request in this case.
It sounds
…and so this query is quite expensive. But why does a small volume of such
queries block other queries and make simple queries time out? I checked the
solr thread pool and saw there are plenty of idle threads available. We are
using solr 7.6.2 with a 10 shard cloud set up.
Is there a way to block certain solr queries based on url pattern? i.e.
ignore the stats.calcdistinct request in this case.
Thanks,
Wei
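Solr has no built-in per-URL blocklist, so this kind of filtering usually lives in a reverse proxy or servlet filter in front of Solr. A sketch of just the matching rule such a front end might apply; the pattern list is illustrative:

```python
# Reject requests whose raw query string matches a blocked pattern before
# they ever reach Solr. The proxy/filter around this check is not shown.

import re

BLOCKED = [re.compile(p) for p in (
    r"stats\.calcdistinct=true",   # the expensive distinct-count request
)]

def is_blocked(query_string: str) -> bool:
    """Return True when the raw query string matches any blocked pattern."""
    return any(p.search(query_string) for p in BLOCKED)
```

Dropping (or rewriting) the matching requests at the proxy keeps the expensive work off Solr's search threads entirely.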
https://github.com/apache/lucene-solr/blob/master/solr/solrj/src/test/org/apache/solr/client/solrj/io/stream/StreamExpressionTest.java
?
On Fri, May 10, 2019 at 11:36 PM Pratik Patel wrote:
Hello Everyone,
I want to write unit tests for some solr queries which are being triggered
through java code. These queries include complex streaming expressions and
faceting queries which require a large number of documents to be present in
the solr index. I cannot create and push so many documents
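One common workaround is to generate deterministic synthetic documents in the test setup instead of checking in a huge fixture. A sketch; the field names and values are invented for illustration:

```python
# Generate reproducible synthetic documents for loading into a test core,
# instead of shipping a large fixture file with the test suite.

import random

def synthetic_docs(n, seed=42):
    """Generate n deterministic documents for a test index."""
    rnd = random.Random(seed)       # fixed seed keeps test runs reproducible
    categories = ["alpha", "beta", "gamma"]
    for i in range(n):
        yield {
            "id": str(i),
            "category_s": rnd.choice(categories),
            "count_i": rnd.randint(0, 100),
        }
```

Because the seed is fixed, every test run indexes the same corpus, so facet counts and streaming-expression results stay stable across runs.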
Thanks guys.
I will try two-level document routing in the case of file_collection.
I really don't understand why the index size is high for file_collection, as
the same file is available in main_collection.
(each file is indexed as one document with all commands in the main collection,
and the same file is indexed as n
If you can find/know which fields (or combination of fields) in your document
divide / group the data together, those would be the fields for custom
routing. Solr supports up to two levels.
E.g. a field with, say, documentType or country would help. See the document
routing documentation at
https://cwiki.apac
Usually I just let the compositeId do its thing and only go for custom
routing when the default proves inadequate.
Note: your 480M documents may very well be too many for three shards!
You really have to test
Erick
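For reference, the compositeId router co-locates documents by prefixing the uniqueKey with a route key, e.g. routeKey!docId (a second level, and the routeKey/bits! form, are also supported). A tiny sketch of building such ids; the key names are illustrative:

```python
# With the default compositeId router, documents whose ids share the same
# "routeKey!" prefix hash to the same shard, which is what keeps related
# documents together for features like expand/collapse.

def composite_id(route_key: str, doc_id: str) -> str:
    """Build a compositeId so docs sharing route_key land on one shard."""
    return f"{route_key}!{doc_id}"
```

For example, indexing every document of one tenant as `composite_id("customerA", doc_id)` keeps that tenant on a single shard.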
Hi Erick,
In the meantime, do you recommend any effective shard distribution method?
Regards,
Anil
Thanks Erick. I will try that. Somehow I am not able to run a query on the
shard directly because of Kerberos; I even tried curl --negotiate.
Regards,
Anil
Try shards.info=true, but pinging the shard directly is the most certain.
Best,
Erick
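Once shards.info=true is on, the response carries a per-shard section that makes a slow shard easy to spot. A sketch, assuming the standard JSON layout in which each shard entry carries a "time" value in milliseconds:

```python
# shards.info=true adds a per-shard section to the distributed response;
# ranking shards by their reported elapsed time is then a one-liner.

def slowest_shard(response: dict):
    """Return (shard_address, elapsed_ms) for the slowest shard."""
    info = response["shards.info"]
    return max(((addr, s["time"]) for addr, s in info.items()),
               key=lambda pair: pair[1])
```

This avoids pinging shards directly, which is handy when Kerberos makes raw curl access awkward.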
Hi Erick,
We have used document routing to balance the shard load and for
expand/collapse. It is mainly used for main_collection, which holds
one-to-many relationship records. In file_collection, it is only for load
distribution.
25GB for the entire solr service. Each machine will act as a shard for som
Hi Susheel,
We have enabled Kerberos, so Solr is accessed using Hue only. I will check
if I can get similar information using Hue. Thanks.
Regards,
Anil
bq: The slowness is happening for file_collection. though it has 3 shards,
documents are available in 2 shards. shard1 - 150M docs and shard2 has 330M
docs , shard3 is empty.
Well, this collection is terribly balanced. Putting 330M docs on a single shard
is pushing the limits; the only time I've seen
For each of the solr machines/shards you have. Thanks.
Hello Anil,
Can you go to Solr Admin Panel -> Dashboard and share all 4 memory
parameters under System, or share a snapshot?
Thanks,
Susheel
Hi Toke and Jack,
Please find the details below.
* How large are your 3 shards in bytes? (total index across replicas)
-- *146G. I am using CDH (Cloudera); not sure how to check the
index size of each collection on each shard*
* What storage system do you use (local SSD, local spinning
Hi Shawn, Jack and Erick,
Thank you very much.
Regards,
Anil
On 3/13/2016 9:36 AM, Jack Krupansky wrote:
> (We should have a wiki/doc page for the "usual list of suspects" when
> queries are/appear slow, rather than need to repeat the same mantra(s) for
> every inquiry on this topic.)
There's this page, with the disclaimer that I wrote almost all of it:
ht
Yeah, there's some good material there, but probably still too inaccessible
for the average "help, my queries are slow" inquiry we get so frequently on
this list.
Another useful page is:
https://wiki.apache.org/solr/SolrPerformanceProblems
-- Jack Krupansky
Jack:
https://wiki.apache.org/solr/SolrPerformanceFactors
and
http://wiki.apache.org/lucene-java/ImproveSearchingSpeed
are already there, we can add to them
Best,
Erick
Thanks Toke and Jack.
Jack,
Yes, it is 480 million :)
I will share the additional details soon. Thanks.
Regards,
Anil
(We should have a wiki/doc page for the "usual list of suspects" when
queries are/appear slow, rather than need to repeat the same mantra(s) for
every inquiry on this topic.)
-- Jack Krupansky
Anil wrote:
> i have indexed a data (commands from files) with 10 fields and 3 of them is
> text fields. collection is created with 3 shards and 2 replicas. I have
> used document routing as well.
> Currently collection holds 47,80,01,405 records.
...480 million, right? Funny digit grouping in I
Hi,
I have indexed data (commands from files) with 10 fields, 3 of them text
fields. The collection is created with 3 shards and 2 replicas. I have used
document routing as well.
Currently the collection holds 47,80,01,405 records.
Text search against a text field is taking around 5 sec. Solr is quer
SolrDispatchFilter holds the CoreContainer cores; perhaps you can extend the
filter to publish the cores into JNDI, where a core can be found by the other
application and used for instantiating EmbeddedSolrServer.
Hello,
I have an application server that is running both the solr.war and a REST API
war within the same JVM. Is it possible to query the SOLR instance natively
(non-blocking) without connecting over HTTP? I could use EmbeddedSolrServer but
I cannot create a second instance of my core.
If I ca
Yes, I understand that reindexing is necessary; however, for some reason I
was not able to invoke the js script from the update processor, so I ended
up using a Java-only solution at index time.
Thanks.
Hi,
No special steps to be taken for cloud setup. Please note that for both
solutions, re-index is mandatory.
Ahmet
Ahmet,
Thank you. As the configurations in SolrCloud are uploaded to ZooKeeper,
are there any special steps that need to be taken to make this work in
SolrCloud?
Mikhail,
Thank you for confirming this; however, Ahmet's proposal seems simpler to
implement to me.
S.L,
I briefly skimmed Lucene50NormsConsumer.writeNormsField(); my conclusion
is: if you supply your own similarity, which just avoids squeezing the float
into a byte in Similarity.computeNorm(FieldInvertState), you get exactly this
value back in Similarity.decodeNormValue(long).
You may wonder, but this is what's exa
Hi,
Or even better, you can use your new field for tie-break purposes, where
scores are identical.
e.g. sort=score desc, wordCount asc
Ahmet
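The tie-break sort above is just Solr's comma-separated sort parameter with clauses in priority order; a trivial sketch of assembling it programmatically:

```python
# Build Solr's multi-clause sort parameter: clauses apply left to right,
# so later clauses only break ties left by earlier ones.

def tiebreak_sort(*clauses):
    """Join sort clauses in priority order into Solr's sort parameter."""
    return ", ".join(clauses)

# identical relevance scores fall through to the shorter product name first
params = {
    "q": "iphone 4s 16gb",
    "sort": tiebreak_sort("score desc", "wordCount asc"),
}
```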
Hi,
You mean an update processor factory?
Here is the augmented (wordCount field added) version of your example:
doc1:
phoneName:"Details about Apple iPhone 4s - 16GB - White (Verizon)
Smartphone Factory Unlocked"
wordCount: 11
doc2:
phoneName:"Apple iPhone 4S 16GB for Net10, No Contract, White"
w
Hi Ahmet,
Is there already an implementation of the suggested work around ? Thanks.
I wonder why your explains are so brief, mine looks like
0.4500489 = (MATCH) weight(text:inc in 17) [DefaultSimilarity], result of:
0.4500489 = fieldWeight in 17, product of:
1.0 = tf(freq=1.0), with freq of:
1.0 = termFreq=1.0
2.880313 = idf(docFreq=8, maxDocs=59)
0.15625
Hi,
The default length norm is not the best option for differentiating very short
documents, like product names.
Please see:
http://find.searchhub.org/document/b3f776512ab640ec#b3f776512ab640ec
I suggest you create an additional integer field that holds the number of
tokens. You can populate it via u
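The token-count field can also be filled in on the client before indexing, which is essentially the "Java-only solution at index time" mentioned elsewhere in this thread. A sketch using naive whitespace tokenization (Solr's analyzer may split differently, so treat the count as approximate); the field names match the example above:

```python
# Add a wordCount field to each document before sending it to Solr, so it
# can be used as a tie-break sort field (sort=score desc, wordCount asc).

def with_word_count(doc: dict, text_field="phoneName",
                    count_field="wordCount") -> dict:
    """Return a copy of doc with a whitespace token-count field added."""
    counted = dict(doc)               # leave the caller's dict untouched
    counted[count_field] = len(doc.get(text_field, "").split())
    return counted
```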
Hi Mikhail,
Thanks. I looked at the explain, and this is what I see for the two
documents in question: they have identical scores even though document 2 has
a shorter productName field. I do not see any lengthNorm-related information
in the explain.
Also I am not exactly clear o
It's worth looking into the explain to check the particular scoring values.
But the most likely suspect is the loss of precision when float norms are
stored as byte values. See the javadoc for
DefaultSimilarity.encodeNormValue(float).
I have two documents doc1 and doc2 and each one of those has a field called
phoneName.
doc1:phoneName:"Details about Apple iPhone 4s - 16GB - White (Verizon)
Smartphone Factory Unlocked"
doc2:phoneName:"Apple iPhone 4S 16GB for Net10, No Contract, White"
Here if I search for
q=iphone+4s+16gb&qf
te-math-now-and-filter-queries/
Best
Erick
> Should the date range query go in fq? As I mentioned, the default view shows
> stuff from the past 90 days. So on each new day does this like invalidate
> stuff in the cache? Or is stuff stored in the filtered cache in some way
> that makes
On 5/29/2012 4:18 AM, santamaria2 wrote:
*3)* I've rummaged around a bit, looking for info on when to use q vs fq. I
want to clear my doubts for a certain use case.
Where should my date range queries go? In q or fq? The default settings in
my site show results from the past 90 days with buttons
the cache? Or is stuff stored in the filtered cache in some way
that makes it easy to fetch stuff from the past 89 days when a query is
performed the next day?
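One detail worth noting for the 90-day window: putting the range in fq only helps the filterCache if the fq string is byte-identical from request to request, which raw NOW timestamps defeat. Rounding with Solr date math keeps the string stable for a whole day. A sketch; the field name is illustrative:

```python
# Build a day-rounded date-range filter query. NOW/DAY rounds to midnight,
# so this string only changes once per day and the filterCache entry for it
# is reused instead of recomputed on every request.

def day_range_fq(field: str, days: int) -> str:
    """Return a cache-friendly fq covering the last `days` whole days."""
    return f"{field}:[NOW/DAY-{days}DAYS TO NOW/DAY+1DAY]"
```

So the default "past 90 days" view would send `fq=day_range_fq("date", 90)` while the user's search terms stay in q.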
Hi Abhijeet,
On Mon, Aug 22, 2011 at 3:09 PM, abhijit bashetti wrote:
>
> 1. Can I update a specific field while re-indexing?
>
Solr doesn't support updating specific fields. You must always create a
complete document with values for all fields while indexing. If you keep the
same value for the
Hi,
I have some questions on Solr:
1. Can I update a specific field while re-indexing?
2. What are the ways to improve indexing performance?
3. What should be the ideal system configuration for a solr indexing server?
Regards,
Abhijit
I am looking for the simplest way to disable coord in Solr queries. I have
found out Lucene allows this via construction of a BooleanQuery with
disableCoord=true:
public *BooleanQuery*(boolean disableCoord)
Is there any way to activate this functionality directly from a Solr query?
Thanks,
Ran
Hi,
Is it possible to execute multiple SOLR queries (basically the same
structure/fields, but due to the header-size limitations for long query
URLs, thinking of having multiple SOLR queries) on a single index, like a
batch?
Best Regards,
Kranti K K Parisa
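Two workarounds are common here: send the query as an HTTP POST body (Solr accepts form-encoded parameters on /select, which sidesteps URL-length limits entirely), or split the value list into several smaller queries and merge the results client-side. A sketch of the splitting approach; the field name and chunk size are illustrative:

```python
# Split one logically huge filter into a batch of smaller filter queries,
# each ORing together at most chunk_size values; results are merged by the
# caller after running one query per fq.

def batched_filters(field, values, chunk_size=100):
    """Yield fq strings, each covering at most chunk_size values."""
    for i in range(0, len(values), chunk_size):
        chunk = values[i:i + chunk_size]
        yield f"{field}:({' OR '.join(chunk)})"
```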
Are you optimizing the index on the master before replicating it? There is no
need to do that if you are constantly updating your index and replicating it
every 10 minutes. Don't optimize, and you'll replicate a smaller portion of
the index, and thus you won't bust the OS cache on the slave
…you'll see further benefits from faster searcher warmup times.
Otis
--
Sematext -- http://sematext.com/ -- Solr - Lucene - Nutch
What this looks like (and I've only glanced) is that your index updates are
causing a new searcher to be opened, and the first few queries after the
reopen will be slow.
Have you tried warmup queries after the reopen?
FWIW
Erick
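Server-side, this is what a newSearcher warming listener in solrconfig.xml is for; the same idea can be sketched client-side as a fixed list of warmup queries fired after each replication cycle, with `search` standing in for a real client call and the facet field invented for illustration:

```python
# Fire a fixed set of warmup queries right after an index update, so the
# first real user queries don't pay the cold-searcher cost.

WARMUP_QUERIES = [
    {"q": "*:*", "rows": 0},                                      # warm the base index
    {"q": "*:*", "rows": 0, "facet": "true", "facet.field": "assettype"},
]

def warm(search):
    """Run each warmup query, returning per-query hit counts."""
    return [search(**params)["numFound"] for params in WARMUP_QUERIES]
```

The queries should mirror the sorts, facets, and filters real traffic uses, since those are the caches that need priming.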
Hi
Sorry for getting back late on the thread, but we are focusing on
configuration of master and slave for improving performance issues.
We have observed following trend on production slaves:
After every 10 minutes the response time increases considerably. In between
all the queries are served by
On Tue, Jan 5, 2010 at 11:16 AM, dipti khullar wrote:
>
> This assettype is variable. It can have around 6 values at a time.
> But this is true that we apply facet mostly on just one field - assettype.
>
>
Ian has a good point. You are faceting on assettype and you are also
filtering on it so you
Hey Ian,
This assettype is variable. It can have around 6 values at a time.
But it is true that we apply a facet mostly on just one field - assettype.
Any idea if the use of date range queries is expensive? Also, if Shalin can
put in some comments on "sorting by date was pretty rough on CPU", I can
On 1/5/10 12:46 AM, Shalin Shekhar Mangar wrote:
(sitename:XYZ OR sitename:"All Sites") AND (localeid:1237400589415) AND
((assettype:Gallery)) AND (rbcategory:"ABC XYZ") AND (startdate:[* TO
2009-12-07T23:59:00Z] AND enddate:[2009-12-07T00:00:00Z TO
*])&rows=9&start=63&sort=date desc
Hi -
Something doesn't make sense to me here:
On Mon, Jan 4, 2010 at 5:55 AM, dipti khullar wrote:
> - optimize runs on master in every 7 minutes
> - using postOptimize , we execute snapshooter on master
> - snappuller/snapinstaller on 2 slaves runs after every 10 minutes
>
>
Why would you optim
On Mon, Jan 4, 2010 at 7:25 PM, dipti khullar wrote:
> Thanks Shalin.
>
> Following are the relevant details:
>
> There are 2 search servers in a virtualized VMware environment. Each has 2
> instances of Solr running on separate ports in tomcat.
> Server 1: hosts 1 master(application 1), 1 slave
>
Actually that is one of the first things that you should look at.
> We guys are trying to tune up Solr Queries being used in our project.
> Following sample query takes about 6 secs to execute under normal traffic.
> At peak hours this often increases to 10-15 secs.
>
>
we are using Solr 1.3.
So our last resort remains, improvising the queries. We are using SolrJ -
CommonsHttpSolrServer
We guys are trying to tune up Solr Queries being used in our project.
Following sample query takes about 6 secs to execute under normal traffic.
At peak hours this often increases
Hi there!
Is it possible to limit Solr queries to predefined values? E.g.: if the user
enters "/select?q=anyword&fq=anyfilter&rows=13", then the filter and rows
arguments are ignored and overwritten by the predefined values
"specialfilter" and "6".
The
Hi,
Suppose I have a content field of type text.
An example of the content field is shown below:
"After frustrated waiting period to get my credit card from the ICICI Bank,
today I decided to write them a online petition stating my problem... Below
is the unedited version of letter I sent to IC
: thanks for your help so do you think I should execute solr queries twice ?
: or is there any other workarounds
http://people.apache.org/~hossman/#xyproblem
XY Problem
Your question appears to be an "XY Problem" ... that is: you are dealing
with "X", you are assuming "
Thanks for your help. So do you think I should execute solr queries twice?
Or is there any other workaround?
On Mon, Nov 30, 2009 at 2:26 PM, Mark N wrote:
> field2="xyz" we dont know until we run query1
>
>
Ah, ok. I thought xyz was a literal that you wanted to search.
> To simply i was actually trying to do some kind of JOIN similar to
> following
> SQL query
>
>
> select * from table1 where *fi
field2="xyz" we don't know until we run query1.
To simplify, I was actually trying to do some kind of JOIN, similar to the
following SQL query:
select * from table1 where *field2* in
( select *field2* from dbo.concept_db where field1='ABC' )
If this is not possible then I will have to search inner
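The two-query workaround described above can be sketched like this, with `search` standing in for a real client call (e.g. pysolr.Solr.search) and the response assumed to expose a "docs" list; the field names follow the example:

```python
# Emulate: select * where field2 in (select field2 where field1 = ...)
# by running query1, collecting the join-field values, and using them as
# the filter for query2.

def two_step_join(search, field1_value):
    """Client-side two-step join across a single Solr index."""
    step1 = search(q=f'field1:"{field1_value}"', fl="field2")
    values = {doc["field2"] for doc in step1["docs"] if "field2" in doc}
    if not values:
        return []
    clause = " OR ".join(f'"{v}"' for v in sorted(values))
    step2 = search(q=f"field2:({clause})", fl="*")
    return step2["docs"]
```

Note that a very large step-1 result can produce a step-2 query long enough to hit URL limits, in which case the OR clause itself needs batching (or a POST body).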
Hi Shalin,
I am trying to achieve something like a JOIN. Previously I was doing this
with two queries on solr:
solr index = ( field1, field2, field3 )
query1 = ( for example field1="ABC" )
Suppose query1 returns result set1 = { 1, 2, 3, 4 } which matches query1.
query2 = ( get all records having
On Mon, Nov 30, 2009 at 1:19 PM, Mark N wrote:
> Is it possible to write nested queries in Solr similar to sql like query
> where I can take results of the first query and use one or more of its
> fields as an argument in the second query.
>
>
That sounds like a join. If so, the answer would be
Is it possible to write nested queries in Solr, similar to an SQL query,
where I can take the results of the first query and use one or more of its
fields as an argument in the second query?
For example:
field1:XYZ AND (_query_: field3:{value of field4})
This should search for all types of XYZ and
Hi,
Sorry, I forgot to mention that the comment field is a text field.
Regards,
Raakhi
Hi,
I am using solr 1.3 and I have inserted some data in my comment field.
for example:
for document1:
The iPhone 3GS finally adds common cell phone features like multimedia
messaging, video recording, and voice dialing. It runs faster; its promised
battery life is longer; and the multimed
Hi Solr experts,
I just want to know if there is a tool/way by which I can analyze which
query is heavy and how much time it takes to fetch results from solr.
Our slave solr is serving about 12000 requests per hour and we need to
analyze the queries served by it.
I have not explored the luke tool mu
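Short of a dedicated tool, the QTime that Solr prints for every request in its log is usually enough to rank queries by cost. A sketch that assumes log lines shaped like Solr's default request-log format (`params={...} ... QTime=N`):

```python
# Pull the request parameters and QTime (ms) out of Solr request-log lines
# and keep only the requests slower than a threshold.

import re

LINE = re.compile(r"params=\{(?P<params>[^}]*)\}.*?QTime=(?P<qtime>\d+)")

def slow_queries(lines, threshold_ms=1000):
    """Yield (qtime_ms, params) for requests slower than threshold_ms."""
    for line in lines:
        m = LINE.search(line)
        if m and int(m.group("qtime")) >= threshold_ms:
            yield int(m.group("qtime")), m.group("params")
```

Sorting the yielded pairs descending gives a quick "heaviest queries" report for a day of traffic.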