Hi,
A workaround could be to add columns from the second table as fields to
the Solr document from the first table. E.g., for the DB query:
SELECT project_id
FROM projects
MINUS
SELECT project_id
FROM archived_project;
Add archived_projects as a boolean field to Projects in Solr & then query
as:
q=(
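A minimal sketch of how such a query could look, assuming the boolean field is
named archived_project (the field name here is an assumption, not from the
original message):
q=*:* -archived_project:true
or, keeping the exclusion in a filter query:
fq=-archived_project:true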
Apologies for the mistake. I will take care of it going forward.
-Original Message-
From: Jack Krupansky [mailto:j...@basetechnology.com]
Sent: Friday, May 31, 2013 6:24 PM
To: solr-user@lucene.apache.org
Subject: Re: Highlighting fields
Please do not use an existing message thread for another topic.
Distributed search does the actual search twice: once to get the scores
and again to fetch the documents with the top N scores. This algorithm
does not play well with "deep searches".
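For example (hypothetical numbers): a request like
q=foo&start=100000&rows=10
forces every shard to compute and return the ids and scores of its top 100,010
documents so they can be merged, even though only 10 documents are ultimately
fetched.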
On 06/02/2013 07:32 PM, Niran Fajemisin wrote:
Thanks Daniel.
That's exactly what I thought as well. I did tr
Thanks Daniel.
That's exactly what I thought as well. I did try passing the distrib=false
parameter and specifying the shards local to the initial server being invoked
and yes it did localize the search to the initial server that was invoked. I
unfortunately didn't see any marked improvement i
Let's assume that the Solr record includes the database record's
timestamp field. You can make a more complex DIH stack that does a Solr
query with the SolrEntityProcessor. You can do a query that gets the
most recent timestamp in the index, and then use that in the DB update
command.
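A rough sketch of what such a stack could look like - the entity nesting is the
idea being described, but the exact SolrEntityProcessor attributes and the
dataSource wiring should be checked against the DataImportHandler wiki, and all
names here are made up:
<entity name="latest" processor="SolrEntityProcessor"
        url="http://localhost:8983/solr/mycore"
        query="*:*" rows="1" fl="timestamp">
  <!-- assumes the Solr request is set up to return the newest document first,
       e.g. via a sort on the timestamp field -->
  <entity name="changed" dataSource="db"
          query="SELECT * FROM projects
                 WHERE last_modified > '${latest.timestamp}'"/>
</entity>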
On 06/02
Currently I have wired up the DataImportHandler to do full and incremental
indexing. I was wondering if there is a way to automatically update the indexes
as soon as a row in the table gets updated. I don't want to get into any sort
of cron jobs, triggers, etc. Currently what I do is, as soon as
Also, take a look at the results with &debug=query to ensure that
the query is being parsed as you expect.
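For example (URL and field purely illustrative), appending the parameter to the
select request shows the parsed query in the debug section of the response:
http://localhost:8983/solr/collection1/select?q=NORM_BUS_NME:(TEST+TEST1+TEST2)^35.44&debug=query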
Best
Erick
On Sun, Jun 2, 2013 at 5:30 PM, Jack Krupansky wrote:
> Ah... now I understand - they are separate terms in the same field.
>
> You want:
>
> NORM_BUS_NME:(TEST TEST1 TEST2)^35.4
Did you take a stack trace of your _server_ and see if the
fragment I posted is the place a bunch of threads are
stuck? If so, then it's what I mentioned, and the patch
I pointed to should fix it up (when it's ready)...
The fact that it hangs more frequently with replication > 1
is consistent with
Ah... now I understand - they are separate terms in the same field.
You want:
NORM_BUS_NME:(TEST TEST1 TEST2)^35.44 OR TRIGRAM_NORM_BUS_NME:(TEST TEST1
TEST2)^20
Even so, I'm not confident that I know what you are really after -
try explaining it in simple English first.
-- Jack Krupan
Hi Jack,
But the problem is that after adding the backslashes like this -
(NORM_BUS_NME:TEST\ TEST1\ TEST2)^35.44 OR (TRIGRAM_NORM_BUS_NME:TEST\
TEST1\ TEST2)^20 - the records I get behave like AND, not OR.
For example, if I check individually -
(NORM_BUS_NME:TEST\ TEST1\ TEST2) gives 1 result and
Hello Sascha,
I propose calling the raw parser from the standard one via the nested query syntax:
http://searchhub.org/2009/03/31/nested-queries-in-solr/
Regards.
On Fri, May 31, 2013 at 3:35 PM, Sascha Szott wrote:
> Hi folks,
>
> is it possible to use the raw query parser with a disjunctive filter
> query
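A hedged sketch of what that could look like (the field name and values are
made up):
fq=_query_:"{!raw f=status}Active" OR _query_:"{!raw f=status}Pending"
i.e. each raw clause is wrapped in a nested _query_ so the outer parser can
combine them with OR.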
On 6/2/2013 12:25 PM, Yoni Amir wrote:
> Hi Shawn and Shreejay, thanks for the response.
> Here is some more information:
> 1) The machine is a virtual machine on ESX server. It has 4 CPUs and 8GB of
> RAM. I don't remember what CPU but something modern enough. It is running
> Java 7 without any
If you have a space in a field value, either enclose the entire field value in
quotes:
"TEST1 TEST"
Or escape each space with a single backslash:
TEST1\ TEST
In your example, the space in the first term is preceded by a double
backslash and the space in the second term is unescaped.
-- Jack
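Applied to the query in question, the fully escaped form would presumably look
like this (illustrating only the escaping, boosts left as in the original):
(NORM_BUS_NME:TEST1\ TEST)^35.44 OR (TRIGRAM_NORM_BUS_NME:TEST1\ TEST)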
Hi,
There seems to be a problem with the querying. My query is like -
(NORM_BUS_NME:TEST1\\ TEST)^35.44 OR (TRIGRAM_NORM_BUS_NME:TEST1 TEST)
Individually NORM_BUS_NME:TEST1\\ TEST query returns 1 result and
TRIGRAM_NORM_BUS_NME:TEST1 TEST
returns 355 results but after I do an OR operation the result
Hi Shawn and Shreejay, thanks for the response.
Here is some more information:
1) The machine is a virtual machine on ESX server. It has 4 CPUs and 8GB of
RAM. I don't remember what CPU but something modern enough. It is running Java
7 without any special parameters, and 4GB allocated for Java (-
I did try with the dih namespace and that didn't seem to make any difference. Since
the PK is a composite in my case, just specifying the bib_id was throwing an
exception stating "could not find the matching pk column" or something to that
effect. Although I realize the use cases for using one or th
On 6/2/2013 10:11 AM, PeriS wrote:
> I found using the strategy mentioned at
> http://wiki.apache.org/solr/DataImportHandlerDeltaQueryViaFullImport, it
> works for me. Not sure what the difference is between this one and writing
> individual queries for fetching the IDs first and then getting th
Hi everyone.
I came across another need for term extraction: I want to find pairs of
words that appear in queries together. All of the "clustering" work is
ready, and the only missing piece is how to get the basic terms from the query.
Has nobody tried this before? Is there no clean way to do it?
On Tue, May
Shawn:
replicationFactor higher than one yes.
--
Yago Riveiro
On Sunday, June 2, 2013 at 4:07 PM, Shawn Heisey wrote:
> On 6/2/2013 8:28 AM, Yago Riveiro wrote:
> > Erick:
> >
> > In my case, when server hangs, no exception is thrown,
Thanks for the info Shawn.
On Sun, Jun 2, 2013 at 6:22 AM, Shawn Heisey-4 [via Lucene] <
ml-node+s472066n4067630...@n3.nabble.com> wrote:
> On 6/1/2013 10:32 AM, Bala wrote:
> > Can somebody tell me if i can achieve SQL MINUS query in solr . here is
> > Sample SQL MINUS query. Need how to get th
Shawn,
I found that the strategy mentioned at
http://wiki.apache.org/solr/DataImportHandlerDeltaQueryViaFullImport works
for me. Not sure what the difference is between this one and writing individual
queries for fetching the IDs first and then getting the data; I mean I know the
differen
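For reference, if I remember that wiki page correctly, the whole approach boils
down to a single entity whose full-import query also honours the last index
time - roughly (table and column names are placeholders):
<entity name="item" pk="ID"
        query="SELECT * FROM item
               WHERE '${dataimporter.request.clean}' != 'false'
                  OR last_modified > '${dataimporter.last_index_time}'">
  ...
</entity>
and deltas are then run as a full-import with clean=false.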
Shawn,
The db-import-config.xml snippet can be found here: http://apaste.info/sTUw
Thanks
-Peri.S
On Jun 2, 2013, at 11:15 AM, Shawn Heisey wrote:
> On 6/2/2013 8:45 AM, PeriS wrote:
>> Ok, so I fixed the issue by providing the pk="" in the entity definition as
>> mentioned in
>> http://wiki.
On 6/2/2013 8:45 AM, PeriS wrote:
> Ok, so I fixed the issue by providing the pk="" in the entity definition as
> mentioned in
> http://wiki.apache.org/solr/DataImportHandler#Using_delta-import_command
>
> I also have a transformer declared for the entity and the DIH during the
> deltaImport do
On 6/2/2013 8:28 AM, Yago Riveiro wrote:
> Erick:
>
> In my case, when server hangs, no exception is thrown, the logs on both
> servers stop registering the update INFO messages. if a shutdown one node,
> immediately the log of the alive node register some update INFO messages that
> appears wa
On 6/2/2013 8:16 AM, Yoni Amir wrote:
> Hello,
> I am receiving OutOfMemoryError during indexing, and after investigating the
> heap dump, I am still missing some information, and I thought this might be a
> good place for help.
>
> I am using Solr 4.0 beta, and I have 5 threads that send update
A couple of things:
1) Can you give some more details about your setup? Like whether it's cloud or a
single instance. How many nodes if it's cloud. The hardware - memory per
machine, JVM options, etc.
2) Any specific reason for using 4.0 beta? The latest version is 4.3. I used
4.0 for a few w
Ok, so I fixed the issue by providing the pk="" in the entity definition as
mentioned in
http://wiki.apache.org/solr/DataImportHandler#Using_delta-import_command
I also have a transformer declared for the entity, and during the deltaImport
the DIH doesn't seem to be passing all the fields to the
Erick:
In my case, when the server hangs, no exception is thrown; the logs on both servers
stop registering the update INFO messages. If I shut down one node, the log of the
alive node immediately registers some update INFO messages that appear to have been
stuck at some place in the update operation.
Other
Hello,
I am receiving OutOfMemoryError during indexing, and after investigating the
heap dump, I am still missing some information, and I thought this might be a
good place for help.
I am using Solr 4.0 beta, and I have 5 threads that send update requests to
Solr. Each request is a bulk of 100
1> Maybe, maybe not. mssql text searching is pretty primitive
compared to Solr, just as Solr's db-like operations are
primitive compared to mssql. They address different use-cases.
So, you can store the docs in Solr and not touch your SQL db
at all to return the docs. You can store
bq: so you have all the shard data, logically you should be able to
index just using that...
This assumes
1> that the cluster state isn't changing and
2> that all the nodes are available
neither of these is guaranteed.
Consider a topology where there are two ZK servers and a bunch of nodes that
BTW the primary key is a combination of 2 fields, so I'm not sure if that's the
issue.
On Jun 2, 2013, at 1:08 AM, PeriS wrote:
> I have configured the delta query properly, but not sure why the DIH is
> throwing the following error;
>
> SEVERE: Delta Import Failed
> java.lang.RuntimeException: ja
A must read when you are considering this is here:
http://blog.thetaphi.de/2012/07/use-lucenes-mmapdirectory-on-64bit.html
Best
Erick
On Sat, Jun 1, 2013 at 8:56 AM, Ramkumar R. Aiyengar
wrote:
> In general, just increasing the cache sizes to make everything fit in
> memory might not always give
Hmmm, may we see your solrconfig.xml file? You're right, this
is a relatively vanilla test and should be just fine. BTW, I don't
know what your latency requirements are, but extending your
auto soft commit interval to as long as you can stand isn't
a bad idea. The soft commit will invalidate a numb
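For concreteness, the knob being talked about is the autoSoftCommit block in
solrconfig.xml - with an arbitrary example value:
<autoSoftCommit>
  <maxTime>60000</maxTime> <!-- soft commit within 60 seconds of an update -->
</autoSoftCommit>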
Thanks for letting us know!
Erick
On Fri, May 31, 2013 at 2:42 PM, ltenny wrote:
> Thanks! I've found the cause for this problem was the hardware load balancer
> (F5 LTM) was creating thousands of connections. So it turns out that it had
> nothing to do with SolrCloud. However, now I have ano
Yago:
Batches of 100k docs at a time are pretty big; you're way past the point of
diminishing returns. I rarely go over 1,000. That said, reducing
the size might be a workaround, perhaps down to one.
All:
Look on your Solr servers (not client) for a stack trace fragment similar to:
at org.apache.
On 6/1/2013 10:32 AM, Bala wrote:
> Can somebody tell me if i can achieve SQL MINUS query in solr . here is
> Sample SQL MINUS query. Need how to get the same in solr
>
> select field1, field2, ... field_n
> from tables
> MINUS
> select field1, field2, ... field_n
> from tables;
I had to look up
You haven't given us any indication of what the analyzer for the default
search field looks like. In particular what stemmer it has configured. In
any case, use the Solr Admin UI Analysis page to view the intermediate
analysis results to see if or when the stemming filter is applied and what
th
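For example (purely illustrative - this may not match the actual schema), a
field type with a stemming filter such as
<fieldType name="text_en" class="solr.TextField">
  <analyzer>
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter class="solr.PorterStemFilterFactory"/>
  </analyzer>
</fieldType>
reduces both "required" and "require" to the same stem at index and query time,
which would explain the match.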
Can somebody tell me if I can achieve a SQL MINUS query in Solr. Here is a
sample SQL MINUS query. I need to know how to get the same in Solr.
select field1, field2, ... field_n
from tables
MINUS
select field1, field2, ... field_n
from tables;
Thanks
Bala
Hi,
Can somebody tell me if I can achieve a SQL MINUS query in Solr. Here is a
sample SQL MINUS query. I need to know how to achieve this in Solr.
select field1, field2, ... field_n
from tables
MINUS
select field1, field2, ... field_n
from tables;
On 6/2/2013 6:13 AM, Mysurf Mail wrote:
> "Each frame is hand-crafted in our Bothell facility to the optimum diameter
> and wall-thickness *required *of a premium mountain frame. The heat-treated
> welded aluminum frame has a larger diameter tube that absorbs the bumps."
>
> required!=require
>
>
On 6/2/2013 5:39 AM, Mysurf Mail wrote:
> I am running solr with two cores in solr.xml
> One is product (import from db) and one is collection1 (from the tutorial)
>
> Now in order to clear the index I run
>
> http://localhost:8983/solr/update?stream.body=<delete><query>*:*</query></delete>
>
> http://localhost:8983/solr/upda
Using solr over my sql db I query the following
http://localhost:8983/solr/products/select?q=require&wt=xml&indent=true&fl=*,score
where the queried word "require" is found in the index since I imported the
following:
"Each frame is hand-crafted in our Bothell facility to the optimum diameter
an
I am running solr with two cores in solr.xml
One is product (import from db) and one is collection1 (from the tutorial)
Now in order to clear the index I run
http://localhost:8983/solr/update?stream.body=<delete><query>*:*</query></delete>
http://localhost:8983/solr/update?stream.body=<commit/>
only the "collection1" core (of the tut