It also should be ${dataimporter.last_index_time}.
Also, that's two queries - an outer query to get the IDs that are modified,
and another query (done repeatedly) to get the data. You can go faster
using a parameterized data import as described in the wiki:
http://wiki.apache.org/solr/DataImport
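For what it's worth, the parameterized approach folds the delta into a single
full-import query. A rough sketch (the table and column names here are made
up):

<entity name="item"
        query="SELECT id, name FROM item
               WHERE '${dataimporter.request.clean}' != 'false'
                  OR last_modified &gt; '${dataimporter.last_index_time}'">
</entity>

Run it with command=full-import&clean=false and only the rows changed since
the last import get picked up, in one query instead of two.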
It looks like you are returning the transformed ID, along with some other
fields, in the deltaQuery command. deltaQuery should only return the ID,
without the "stk_" prefix, and then deltaImportQuery should retrieve the
transformed ID. I'd suggest:
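Something along these lines, assuming a hypothetical stock table (adjust the
SQL to your schema):

<entity name="stock" pk="id"
        query="SELECT CONCAT('stk_', id) AS id, name FROM stock"
        deltaQuery="SELECT id FROM stock
                    WHERE last_modified &gt; '${dataimporter.last_index_time}'"
        deltaImportQuery="SELECT CONCAT('stk_', id) AS id, name FROM stock
                          WHERE id = '${dih.delta.id}'">
</entity>

deltaQuery hands the raw id to ${dih.delta.id}, and deltaImportQuery applies
the 'stk_' prefix while fetching the full row.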
I'm not sure which RDBMS you are using, but the string-concatenation syntax
above will vary between databases (e.g. CONCAT vs. ||).
Thanks, Anshum - I should never have posted so late. It is true that
different users will have different word frequencies, but an application
exploiting that for better relevancy would be going to great lengths just
for the relevancy of individual users' results.
On Thu, Feb 5, 2015 at 12:41 AM, Anshum Gupta wrote:
Joel,
To give you some context, we are running queries against 6 million
documents in a Solr cloud environment. The grouping is done to de-duplicate
content based on a unique field. Unfortunately, due to a requirement
constraint, the only way for us to run the de-duplication is at query time.
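For reference, the grouped request takes roughly this form (dedupe_id is a
stand-in for the actual unique field):

http://localhost:8983/solr/corename/select?q=...&group=true&group.field=dedupe_id&group.limit=1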
Thank you very much Alvaro and Shawn. The DataImport Status command was what
I was looking for. I have tried it a bit, and I feel the output is good
enough for me.
Thanks again
Alvaro Cabrerizo wrote:
> Maybe you are asking for the status command. Currently this is the url I
> invoke for checking whether the import process is running (or has failed)
On 2/5/2015 2:48 PM, O. Olson wrote:
> My setup is fairly similar to the examples. I start a Solr Import using the
> UI i.e. I go to:
> http://localhost:8983/solr/#/corename/dataimport and click the
> Execute button to start the Import.
>
> First, I'm curious if there is a way of figuring out if there is an import
> running.
Hi,
I am very new to Solr but I have been playing around with it a bit and my
imports are all working fine. However, now I wish to perform a delta import
on my query and I'm just getting nothing.
I have the entity:
I am not too sure if ${dih.delta.id} is supposed to be id or id2, but I have
tried both.
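Simplified, it looks something like this (the real table and SQL are trimmed
down here):

<entity name="item" pk="id"
        query="SELECT id, id2 FROM item"
        deltaQuery="SELECT id FROM item
                    WHERE last_modified &gt; '${dataimporter.last_index_time}'"
        deltaImportQuery="SELECT id, id2 FROM item
                          WHERE id = '${dih.delta.id}'">
</entity>
<!-- ${dih.delta.id} is filled in from the id column returned by deltaQuery -->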
Maybe you are asking for the status command. Currently this is the url I
invoke for checking whether the import process is running (or has failed)
From the cwiki:
The URL is
http://<host>:<port>/solr/<corename>/dataimport?command=status.
It returns statistics on the number of documents created, deleted, queries
run, rows fetched, status, and so on.
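For the example setup mentioned elsewhere in this thread, that would be:

http://localhost:8983/solr/corename/dataimport?command=status

The status field reads "busy" while an import is in progress and goes back to
"idle" when it finishes (or aborts).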
My setup is fairly similar to the examples. I start a Solr Import using the
UI i.e. I go to:
http://localhost:8983/solr/#/corename/dataimport and click the
Execute button to start the Import.
First, I'm curious if there is a way of figuring out if there is an import
running. I thought
In our case, we have less than 20 distinct groups, and a typical search result
will return about 10 of those groups (usually 3 documents per group). We use
default sorting by score. There are 12 million docs spread across 3 shards.
We set group.facet=false. The wkcluster field is a string field.
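Concretely, the grouping parameters on the request are (group.limit here
reflects the ~3 documents per group we return; other params omitted):

group=true&group.field=wkcluster&group.limit=3&group.facet=false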
On 2/5/2015 10:43 AM, Henrique Oliveira wrote:
> I am kinda new to Solr, working with it for almost a month now. I’ve been
> thru lots of tutorials and I am able to index my data, facet-query and even
> modify existing request handlers for my needs.
>
> Apart from this, it seems that everything tutorial related is made under
> that example dir.
Hello,
I am kinda new to Solr, working with it for almost a month now. I’ve been
thru lots of tutorials and I am able to index my data, facet-query and even
modify existing request handlers for my needs.
Apart from this, it seems that everything tutorial related is made under that
example dir.
Hi
I am using Solr 4.9 and this is my first venture with Solr. How do I parse a
raw user query? I am doing the parsing manually at the moment and have found
it to be both fragile and complex. Is there a parser which Solr provides to
parse raw queries?
I am using HttpClient to make HTTP query searches.
Thanks James.
Your idea worked well (using multiple request handlers).
I will try and implement some code when I have some spare cycles. By the way,
by coding do you mean using the same request handler and somehow querying it
simultaneously? How is that possible?
Thanks
meena
On Wed, 2015-02-04 at 23:31 +0100, Arumugam, Suresh wrote:
> We are trying to do a POC for searching our log files with a single
> node Solr (396 GB RAM with 14 TB space).
We're running 7 billion larger-than-typical-log-entry documents on a machine
of similar size and it serves our needs well:
Only the mwikipage entity is indexed by Solr; all other mapped entities are
ignored during indexing.
*data-config.xml*