Hello,
I have an analyzer whose output can be used to fill different fields.
I could use copyField and run the analyzer multiple times to fill each
field, or save the output to a "temporary" field and use multiple very
simple analyzers, one per field, that just split the output in the temporary
DocValues with reindexing does not seem like a viable option for me as of
now... Regarding the second question about Xmx4G: I tried
various options (Xmx8G, Xmx10G, Xmx12G), none of which worked except Xmx14G, which
does not seem practical for production with 16 GB of RAM.
While searching I came across:
https://issues.apache.org
Hi,
The search results change every time the following query is run. Please
note that it is a 7-shard cluster of Solr 5.2.1.
Query: q=network&start=50&rows=50&sort=f_sort asc&group=true&group.field=id
Following are the fields and their types in my schema.xml.
As per my understanding it see
I just installed SolrCloud 5.3.x and found that the way to secure the admin
UI has changed. Apparently there is a new plugin which does role-based
authentication, and all the info on how to secure the admin UI found on the
net is outdated.
I do not need role-based authentication but just simply want
Hi,
I would like to check: is there any problem with this block of code at lines
604-614 in solr.cmd for Solr 5.3.0? When using this code, I'm not
able to start Solr with it pointing to custom core directories.
IF "%SOLR_HOME%"=="" set "SOLR_HOME=%SOLR_SERVER_DIR%\solr"
IF NOT EXIST "%SOLR
If you need additional manipulation during the update process, you can use
an update processor - there's a script update processor that lets you do
additional document processing in JavaScript. See
http://lucene.apache.org/solr/5_3_0/solr-core/org/apache/solr/update/processor/StatelessScri
Hi,
I have set up master-slave replication on Solr version 5.2.1.
I have done indexing on the master,
and replication is done.
When I try to run any query on the slave it shows me an error and does not
run.
Shahper
Hi Mikhail,
- is it possible to keep both types of data in the same core? Why not?
We have two separate feeds populating what is mostly distinct data at different
times, hence the two indexes. IndexB is also used by other products which don’t
need any data from indexA.
- can you manually shar
An update request processor is a preferred approach - take the source
value, split it, and create separate source values for each of the
associated fields.
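For illustration, a minimal sketch of such a processor (the class name, "source_field",
"field_a", and the whitespace split are all hypothetical, not from this thread; adapt them
to whatever your analyzer actually produces):

import java.io.IOException;

import org.apache.solr.common.SolrInputDocument;
import org.apache.solr.request.SolrQueryRequest;
import org.apache.solr.response.SolrQueryResponse;
import org.apache.solr.update.AddUpdateCommand;
import org.apache.solr.update.processor.UpdateRequestProcessor;
import org.apache.solr.update.processor.UpdateRequestProcessorFactory;

public class SplitSourceProcessorFactory extends UpdateRequestProcessorFactory {
  @Override
  public UpdateRequestProcessor getInstance(SolrQueryRequest req,
      SolrQueryResponse rsp, UpdateRequestProcessor next) {
    return new UpdateRequestProcessor(next) {
      @Override
      public void processAdd(AddUpdateCommand cmd) throws IOException {
        SolrInputDocument doc = cmd.getSolrInputDocument();
        Object raw = doc.getFieldValue("source_field"); // hypothetical source field
        if (raw != null) {
          // naive whitespace split; replace with your real splitting logic
          for (String part : raw.toString().split("\\s+")) {
            doc.addField("field_a", part); // hypothetical target field
          }
        }
        super.processAdd(cmd); // pass the document on down the chain
      }
    };
  }
}

You would then register the factory in an updateRequestProcessorChain in solrconfig.xml
and point your update handler at that chain.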
-- Jack Krupansky
On Wed, Sep 9, 2015 at 3:30 AM, Roxana Danger <
roxana.dan...@reedonline.co.uk> wrote:
> Hello,
> I have an analyzer
To explain what a join does:
It goes over to the joined index, and executes a query. This results in
a list of "ids" that will be used to do a search on the main index. The
more of these ids there are, the worse performance will be. Thus, if you
have 100k documents that match in the join core, you
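To make that concrete, a hedged SolrJ sketch of such a join from the client side; the
core name "indexB", the field "joinKey", and the fromIndex query are made-up placeholders,
not taken from this thread:

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;

public class JoinQueryExample {
  public static void main(String[] args) throws Exception {
    HttpSolrClient client = new HttpSolrClient("http://localhost:8983/solr/mainIndex");
    SolrQuery q = new SolrQuery("*:*");
    // The fromIndex query runs first; each matching "from" value becomes part of the
    // filter applied to the main index, so a large fromIndex result set means a large filter.
    q.addFilterQuery("{!join from=joinKey to=joinKey fromIndex=indexB}someField:someValue");
    QueryResponse rsp = client.query(q);
    System.out.println("hits: " + rsp.getResults().getNumFound());
    client.close();
  }
}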
Hi,
I have a question about the distributed Querying in solr (
https://wiki.apache.org/solr/DistributedSearch),
let us consider the below call being made to solr server.
https://server1:8080/solr/core1/select?shards=server1:8080/solr/core1,server2:8070
/solr/core2,server3:8090/solr/core3&q=*:*
On 9/8/2015 11:16 PM, Maulin Rathod wrote:
> When replicas are running it took around 900 seconds for indexing.
> After stopping replicas it took around 500 seconds for indexing.
>
> Is the replication synchronous or asynchronous? If it is synchronous, can we make it
> asynchronous so that it will not affect inde
On 9/9/2015 7:55 AM, abhi Abhishek wrote:
> https://server1:8080/solr/core1/select?shards=server1:8080/solr/core1,server2:8070
> /solr/core2,server3:8090/solr/core3&q=*:*&rows=10&start=0
>
> please correct if my understanding of the query processing here is
> incorrect!
>
> server1 acts as the m
Please review:
http://wiki.apache.org/solr/UsingMailingLists
You've essentially said "it doesn't work". There's not enough
information to say _anything_ intelligent.
How does it fail? Any messages in the log file? What is
the query you're sending? Does the slave start up without
error?
Best,
Eri
You are correct for distributed search.
Don't worry about joins; Solr will aggregate results from all cores.
Please share your requirement: what do you want to do?
When the primary sort criteria is identical for two documents,
then the _internal_ Lucene document ID is used to break the
tie. The internal ID for two docs can be not only different, but
in different _order_ on two separate shards. I'm assuming here
that each of your shards has multiple replicas
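A minimal sketch of the usual fix, adding the uniqueKey as a deterministic tie-breaker
(this assumes "id" is your uniqueKey and "f_sort" is the primary sort field from the
query above):

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.SolrQuery.ORDER;

public class TieBreakSortExample {
  public static void main(String[] args) {
    SolrQuery q = new SolrQuery("network");
    q.addSort("f_sort", ORDER.asc); // primary sort, same as the original query
    q.addSort("id", ORDER.asc);     // uniqueKey as tie-breaker, stable across shards
    System.out.println(q);          // prints the assembled query parameters
  }
}

In plain URL form this is simply sort=f_sort asc,id asc.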
Hi,
Thanks for the reply Shawn and Mugeesh. I was just trying to understand
the working of Distributed Querying in SOLR.
Thanks,
Abhishek Das
On Wed, Sep 9, 2015 at 8:18 PM, Mugeesh Husain wrote:
> You are correct for distributed search.
> Don't worry about joins; Solr will aggregate result
Hi Upayavira,
Here are a couple examples with debugQuery set.
I've misled Mikhail: the query times do get longer as the list of ids
gets bigger.
Can you see a reason why, when indexB has only 6 ids in its list, it still
takes 46 seconds?
Ids=6
{
"responseHeader": {
"status": 0,
Hi everyone,
this is my first post on this list and my first opensource project, so
please don't expect too much from either of them.
I've spent these last weeks trying to understand how to create Solr
plugins, so I started a simple project (a plugin itself) which evolved into
a small library nam
To be specific, each shard does a query against its own index, scoring
each document, and returning n rows.
Then, these n results are aggregated, picking the top n scoring docs out
of all of those returned from the shards.
For faceting and other components, the aggregation is somewhat
different.
Perhaps there is something preventing clean shutdown. Shutdown makes a best
effort attempt to publish DOWN for all the local cores.
Otherwise, yes, it's a little bit annoying, but full state is a combination
of the state entry and whether the live node for that replica exists or not.
- Mark
On W
I've never reviewed that join query debug info - very interesting.
Break it all down - do the universe: query directly. Then see what the
results are and manually construct a query using the results of the join
query. If none of that works, try either a profiler to see where Solr is
spending most
Hi,
the JavaDoc of SolrInputDocument.addField [1] states:
Add a field with implied null value for boost. The class type of value
and the name parameter should match schema.xml. schema.xml can be found
in conf directory under the solr home by default.
This sounds as if the value would need to be a
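One reading of that javadoc is simply that the Java type of the value should be
compatible with the field type declared in schema.xml. A rough sketch under that
assumption (the field names here are hypothetical):

import org.apache.solr.common.SolrInputDocument;

public class AddFieldExample {
  public static void main(String[] args) {
    SolrInputDocument doc = new SolrInputDocument();
    doc.addField("id", "doc-1");     // a string field
    doc.addField("price", 9.99f);    // a float field: pass a Float, not a formatted String
    doc.addField("in_stock", true);  // a boolean field
    System.out.println(doc);
  }
}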
We are using Solr as a NoSQL database and periodically update a large number of
documents and then commit the changes (using commitWithin). Query response time
at least doubles while this is happening. What can we do about this?
Solr Version: 5.2.1
Container: Tomcat (still).
in SolrConfig.xml:
However, I see the class is not plugged in.
in log file:
org.apache.solr.core.SolrCore; Using default statsCache cache:
org.apache.solr.search.stats.LocalStatsCache
Any reason why?
Thanks,
Jae
Hello - there are several issues with StatsCache < 5.3. If it is loaded, it
won't work reliably. We are using it properly on 5.3. Statistics may be a bit
off if you are using BM25 though. You should upgrade to 5.3.
Markus
-Original message-
> From:Jae Joo
> Sent: Wednesday 9th Septem
Hi,
How do I parse the JSON response from the Solr Term Vector Component?
I got the following JSON structure in the response when testing the Solr 5.3.0
term vector component:
{'responseHeader': {'status': 0, 'QTime': 4}, 'response': {'docs':
[{'resourcename': 'XXX.txt', 'id': 'XXX.txt', '_version_':
1511851008560463872, 'co
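The structure above is cut off, so I can't speak to it directly, but if you go through
SolrJ instead of parsing raw JSON, the term vector data comes back as a nested NamedList.
A hedged sketch, assuming a /tvrh handler with the TermVectorComponent enabled and "id"
as the uniqueKey (adjust names to your config):

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.common.util.NamedList;

public class TermVectorExample {
  public static void main(String[] args) throws Exception {
    HttpSolrClient client = new HttpSolrClient("http://localhost:8983/solr/collection1");
    SolrQuery q = new SolrQuery("id:XXX.txt");
    q.setRequestHandler("/tvrh"); // handler with the TermVectorComponent configured
    q.set("tv.tf", true);
    QueryResponse rsp = client.query(q);
    // roughly: one entry per document, then per field, then per term
    @SuppressWarnings("unchecked")
    NamedList<Object> termVectors = (NamedList<Object>) rsp.getResponse().get("termVectors");
    System.out.println(termVectors);
    client.close();
  }
}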
On Wed, Sep 9, 2015 at 5:24 PM, Russell Taylor <
russell.tay...@interactivedata.com> wrote:
> er.
>
> Can you see a reason why, when indexB has only 6 ids in its list, it still
> takes 46 seconds?
>
> Ids=6
> {
>
> },
> "debug": {
> "join": {
> "{!join from=sedolKey to=sedolKey fromI
Hey guys,
I've set up a slightly older version of Solr (4.10) with Apache Tomcat
7.0.64, and set up some Drupal configurations according to this guide:
http://duntuk.com/how-install-apache-solr-46-apache-tomcat-7-use-drupal
Everything seemed to work after I copied the log4j libraries to the cor
I can't seem to get delta-imports to work with a FileDataSource DIH.
Full-import works fine.
delta-import always imports nothing, no error. I can add a new file or
change an existing one, no joy.
My requestHandler declaration:
class="org.apache.solr.handler.dataimport.DataImportHandler">
On 9/9/2015 3:35 PM, Tim Dunphy wrote:
> But I find that I am now getting these errors in the solr logs when I load
> up solr in the web browser:
>
> Time (Local): 9/9/2015, 5:19:41 PM  Level: WARN  Logger: SolrResourceLoader  Message: Can't
> find (or read) directory to add to classloader:
> ../../../contrib/ex
Thanks Shawn! That was really helpful. I'll check into this tomorrow.
Sent from my iPhone
> On Sep 9, 2015, at 6:45 PM, Shawn Heisey wrote:
>
>> On 9/9/2015 3:35 PM, Tim Dunphy wrote:
>> But I find that I am now getting these errors in the solr logs when I load
>> up solr in the web browser:
On 9/9/2015 4:27 PM, Scott Derrick wrote:
> I can't seem to get delta-imports to work with a FileDataSource DIH
The information I have says delta-import won't work with that kind of
entity.
http://wiki.apache.org/solr/DataImportHandler#Using_delta-import_command-1
Also, please make note of this:
Thanks for your tip. Let me test in 5.3.
On Wed, Sep 9, 2015 at 4:23 PM, Markus Jelsma
wrote:
> Hello - there are several issues with StatsCache < 5.3. If it is loaded,
> it won't work reliably. We are using it properly on 5.3. Statistics may be
> a bit off if you are using BM25 though. You s
Hi,
I am trying to develop a stemmer and stopword list for the Bengali language, which
is not shipped with Solr.
I am trying to build this with a machine learning approach but I couldn't find
any good documents to study. It would be very helpful if you could shed
some light on this matter.
Thank you so much
Thanks Erick. There are no replicas on my cluster and the indexing is one
time. No updates or additions are done to the index and the segments are
optimized at the end of indexing.
So is adding a secondary sort criterion the only solution for such an issue
with sorting?
Regards,
Modassar
On Wed, Sep 9, 20
Maybe this patch could help if ported to 5.x:
https://issues.apache.org/jira/browse/SOLR-4787
BitSetJoinQParserPlugin aka bjoin -> "can provide sub-second response
times on result sets of tens of millions of records from the fromIndex and
hundreds of millions of records from the main query"
But
Hi,
I need to ask: when I am looking at all the parameters of the
query using echoParams=ALL, I am getting the boost parameter twice in
the information printed on the browser screen.
So does it mean that it is also applied twice on the data/result set,
and we are using the ?
Hi Shawn,
Thanks for the reply. If we keep replication async, can't error handling work
the same way as in the replica-down scenario?
-Original Message-
From: Shawn Heisey [mailto:apa...@elyograg.org]
Sent: 09 September 2015 19:40
To: solr-user@lucene.apache.org
Subject: Re: Replication Sync OR Async?
Hi,
I have a requirement to reorder the search results by multiplying the
text relevance
score of a product with the product_guideline_score, which will be
stored in the index and will hold some floating point number.
e.g. on searching for jute in the title, if we got some results ID1 & ID2:
ID1 ->
Hmmm, not quite sure what's going on here, but then I'm not
deeply into the code for ordering sub-requests.
But adding a secondary sort is probably the easiest and will
stand you in good stead going forward.
Erick
On Wed, Sep 9, 2015 at 9:35 PM, Modassar Ather wrote:
> Thanks Erick. There are n
Hi,
I figured out how to implement this. I will be doing it by using the
boost parameter,
e.g. http://server:8112/solr/products/select?q=jute&qf=title
&boost=product(1,product_guideline_score)
If there is any other alternative then please suggest it.
With Regards
Aman Tandon
On Thu, Sep 10,
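For what it's worth, a small equivalent sketch assuming the handler is edismax with
qf=title as above: the multiplicative "boost" parameter can take the field's function
value directly, since product(1, product_guideline_score) is the same as the field value
itself.

import org.apache.solr.client.solrj.SolrQuery;

public class GuidelineBoostExample {
  public static void main(String[] args) {
    SolrQuery q = new SolrQuery("jute");
    q.set("defType", "edismax");
    q.set("qf", "title");
    // edismax multiplies the relevance score by each boost function
    q.set("boost", "product_guideline_score");
    System.out.println(q); // prints the assembled query parameters
  }
}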