A few flags are supported:
public static final int GET_DOCSET = 0x4000;
public static final int TERMINATE_EARLY = 0x04;
public static final int GET_DOCLIST = 0x02; // get the documents actually returned in a response
public static final int GET_SCORES =
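Since these are bit flags, they are meant to be OR-ed together and passed as the flags argument of getDocList. A minimal sketch in plain Java (the GET_SCORES value, cut off above, is assumed to be 0x01 here; check SolrIndexSearcher in your Solr version):

```java
public class FlagSketch {
    // Constants copied from the snippet above; GET_SCORES's value is
    // an assumption (the original message was cut off).
    static final int GET_DOCSET = 0x4000;
    static final int TERMINATE_EARLY = 0x04;
    static final int GET_DOCLIST = 0x02;
    static final int GET_SCORES = 0x01;

    public static void main(String[] args) {
        // Combine flags with bitwise OR, then pass the result as the
        // int flags parameter of SolrIndexSearcher.getDocList(...).
        int flags = GET_DOCLIST | GET_SCORES;
        System.out.println((flags & GET_SCORES) != 0); // true: scores requested
        System.out.println((flags & GET_DOCSET) != 0); // false: no DocSet
    }
}
```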
You can use the ListFields method in the new Schema API:
https://cwiki.apache.org/confluence/display/solr/Schema+API#SchemaAPI-ListFields
Note that this will return all configured fields but it doesn't tell
you the actual dynamic field names in the index. I don't know if we
have anything better t
Hi David,
Thank you so much for the detailed reply. I've checked each and every lat/lng
coordinate and it's purely a polygon.
After some time I made one change to the lat/lng indexing:
I changed the indexing format.
Initially I indexed the latitude and longitude separated by a comma, e.g.:
"location":[
Hi!
As a reader of the Solr mailing list, you might be interested in an
experimental search server for node.js, Forage.
Forage is built on the levelDB library from Google.
Check it out here: http://www.foragejs.net/
As always, feedback, pull requests, comments, praise, criticism and beer
are mo
Hello Shekhar,
Thanks for answering. Do I have to set the GET_SCORES flag as the last
parameter of the getDocList method?
Thanks
On 19-Nov-2013 1:43 PM, "Shalin Shekhar Mangar"
wrote:
> A few flags are supported:
> public static final int GET_DOCSET= 0x4000;
> public static final int TERM
On 18.11.2013 14:39, Furkan KAMACI wrote:
Atlassian Jira has two options by default: exporting to PDF and exporting
to Word.
I see, 'Word' isn't optimal for a reference guide. But OpenOffice can handle
'doc' and has EPUB plugins.
Would it be possible to offer the documentation also as 'doc(x)'?
Hi, we plan to set up an ensemble of Solr with ZooKeeper.
We're going to have 6 Solr servers with 2 instances on each server; we'll also
have 6 shards with replication factor 2, and in addition we'll have 3
ZooKeepers.
Our concern is that we will send documents to index and Solr won't index
them but *won
Hi,
I'm new to Solr. I use Solr 3.6.0 and I tried to use the highlighting
parameters to obtain a result like Google's (if the searched word appears in
the title, that word must be bold).
Example:
searched word = solr
retrieved document title:
welcome in *Solr*
The highlighting parameters that I use
Hi,
A number of people/organizations use SPM for monitoring their Solr /
SolrCloud clusters, and since SolrCloud relies on ZooKeeper, we added
support for ZooKeeper monitoring and alerting to SPM.
This means you can now monitor ZooKeeper alongside your other clusters -
SolrCloud, Hadoop, HBase, Kafka
Hi,
I have been using an external file field (EFF) for holding the rank of each
document, which gets updated every day based on different stats collected by the
system. Once the rank is computed the new files are pushed to Master which
will eventually replicate to slaves on next commit.
Our eff file has
Dave, that's the exact symptoms we all have had in SOLR-5402. After many
attempted fixes (including upgrading jetty, switching to tomcat, messing with
buffer settings) my solution was to fall back to 4.4 and await a fix.
- Original Message -
From: "Dave Seltzer"
To: solr-user@lucene.ap
On 11/19/13 4:06 AM, "Dhanesh Radhakrishnan" wrote:
>Hi David,
>Thank you so much for the detailed reply. I've checked each and every lat
>lng coordinates and its a purely polygon.
>After some time I did one change in the lat lng indexing.
>Changed the indexing format.
>
>Initially I indexed t
Hi,
I am using Solr 4.2.1. I have a couple of questions regarding using leading and
trailing wildcards with phrase queries and doing positional ordering.
* I have a field called text which is defined as the text_general field.
I downloaded the ComplexPhraseQuery plugin
(https://issues.ap
On 11/19/2013 6:18 AM, adfel70 wrote:
> Hi, we plan to establish an ensemble of solr with zookeeper.
> We gonna have 6 solr servers with 2 instances on each server, also we'll
> have 6 shards with replication factor 2, in addition we'll have 3
> zookeepers.
You'll want to do one Solr instance pe
4.6 no longer uses XML to send requests between nodes. It’s probably worth
trying it and seeing if there is still a problem. Here is the RC we are voting
on today:
http://people.apache.org/~simonw/staging_area/lucene-solr-4.6.0-RC4-rev1543363/
Otherwise, I do plan on looking into this issue soo
On Friday, November 15, 2013 11:22 AM, Lemke, Michael SZ/HZA-ZSW wrote:
Judging from numerous replies this seems to be a tough question.
Nevertheless, I'd really appreciate any help as we are stuck.
We'd really like to know what in our index causes the facet.method=fc
query to fail.
Thanks,
Micha
I've often thought of possibly providing the reference guide in .epub
format, but wasn't sure of general interest. I also once tried to
convert the PDF version with calibre and it was a total mess. - but
PDF is probably the least-flexible starting point for conversion.
Unfortunately, the Word expo
Thanks. I have an Xtext DSL doing some config and code generation downstream
of the data ingestion. It probably wouldn't be that hard to generate a
solrconfig.xml, but for now I just want to build in some runtime reconciliation
to aid in dynamic query generation. It sounds like Luke is still
: My approach was something like:
: 1) Look at the categories that the user has preferred and compute the
: z-score
: 2) Pick the top 3 among those
: 3) Use those to boost search results.
I think that totally makes sense ... the additional bit I was suggesting
that you consider is that instead of
I've been thinking about how SolrCloud deals with write-availability using
in-sync replica sets, in which writes will continue to be accepted so long
as there is at least one healthy node per shard.
For a little background (and to verify my understanding of the process is
correct), SolrCloud only
Yeah, this is kind of like one of many little features that we have just not
gotten to yet. I’ve always planned for a param that lets you say how many
replicas an update must be verified on before responding success. Seems to make
sense to fail that type of request early if you notice there are
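The idea being discussed (later filed as SOLR-5468) could be sketched like this; all names here are hypothetical, not an actual Solr API. Before accepting an update, the node consults its cached cluster state and rejects early if the target shard has fewer live replicas than the requested minimum:

```java
import java.util.List;
import java.util.Map;

public class MinRfSketch {
    // Hypothetical check: reject an update early if the target shard
    // has fewer live replicas than the caller-requested minimum.
    static boolean acceptUpdate(Map<String, List<String>> liveReplicasByShard,
                                String shard, int minReplicas) {
        List<String> live = liveReplicasByShard.getOrDefault(shard, List.of());
        return live.size() >= minReplicas;
    }

    public static void main(String[] args) {
        Map<String, List<String>> state =
                Map.of("shard1", List.of("leader")); // only the leader is up
        System.out.println(acceptUpdate(state, "shard1", 2)); // false: fail early
        System.out.println(acceptUpdate(state, "shard1", 1)); // true
    }
}
```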
Your thinking is always one step ahead of me! I'll file the JIRA.
Thanks.
Tim
On Tue, Nov 19, 2013 at 10:38 AM, Mark Miller wrote:
> Yeah, this is kind of like one of many little features that we have just
> not gotten to yet. I’ve always planned for a param that let’s you say how
> many repl
Regarding data loss, Solr returns an error code to the calling app (either
HTTP error code, or equivalent in SolrJ), so if it fails to index for a
known reason, you'll know about it.
There are always edge cases though.
If Solr indexes the document (returns success), that means the document is
in
I’d recommend you start with the upcoming 4.6 release. Should be out this week
or next.
- Mark
On Nov 19, 2013, at 8:18 AM, adfel70 wrote:
> Hi, we plan to establish an ensemble of solr with zookeeper.
> We gonna have 6 solr servers with 2 instances on each server, also we'll
> have 6 shards
Aditya,
If you commit, docnums are supposed to change, hence the file should be
reloaded.
There might be a few alternative approaches to address this problem, but they
are really bloody hacks, you know.
Hold on: if docs are pushed every few hours, but the file is changed daily, can't
you mix that b
Mostly a lot of other systems already offer these types of things, so they were
hard not to think about while building :) Just hard to get back to a lot of
those things, even though a lot of them are fairly low hanging fruit. Hardening
takes the priority :(
- Mark
On Nov 19, 2013, at 12:42 PM,
Given a 4-node Solr setup (i.e. 2 shards, 2 replicas per shard), and a
standalone ZooKeeper.
Correct me if any of my understanding is incorrect on the following:
If ZK goes down, most normal operations will still function, since my
understanding is that ZK isn't involved on a transaction by t
Given a 4-node SolrCloud (i.e. 2 shards, 2 replicas per shard).
Let's say one node becomes 'nonresponsive'. Meaning sockets get created, but
transactions to them don't get handled (i.e. they time out). We'll also assume
that means the solr instance can't send information out to zookeeper or o
I got to thinking about this particular question while watching this
presentation, which is well worth 45 minutes if you can spare it:
http://www.infoq.com/presentations/partitioning-comparison
I created SOLR-5468 for this.
On Tue, Nov 19, 2013 at 11:58 AM, Mark Miller wrote:
> Mostly a lot o
Good questions ... From my understanding, queries will work if Zk goes down
but writes do not work w/o Zookeeper. This works because the clusterstate
is cached on each node so Zookeeper doesn't participate directly in queries
and indexing requests. Solr has to decide not to allow writes if it loses
Hi,
Is it possible to perform a shard split and stream data for the
new/sub-shards to remote nodes, avoiding persistence of new/sub-shards
on the local/source node first?
Thanks,
Otis
--
Performance Monitoring * Log Analytics * Search Analytics
Solr & Elasticsearch Support * http://sematext.com/
On Nov 19, 2013, at 2:24 PM, Timothy Potter wrote:
> Good questions ... From my understanding, queries will work if Zk goes down
> but writes do not work w/o Zookeeper. This works because the clusterstate
> is cached on each node so Zookeeper doesn't participate directly in queries
> and indexin
We got different results for these two queries. The first one returned 115
records and the second returned 179 records.
Thanks,
Fudong
Hello!
In the first one, the two terms 'Roger' and 'Miller' are run against
the attorney field. In the second the 'Roger' term is run against the
attorney field and the 'Miller' term is run against the default search
field.
--
Regards,
Rafał Kuć
Performance Monitoring * Log Analytics * Search A
Also, attorney:(Roger Miller) is the same as attorney:"Roger Miller", right? Or
is the phrase "Roger Miller" run against attorney?
Thanks,
-Utkarsh
On Tue, Nov 19, 2013 at 12:42 PM, Rafał Kuć wrote:
> Hello!
>
> In the first one, the two terms 'Roger' and 'Miller' are run against
> the attorney field
Hello!
Terms surrounded by " characters will be treated as a phrase query. So,
if your default query operator is OR, attorney:(Roger Miller) will
result in documents with the first or second (or both) terms in the
attorney field. attorney:"Roger Miller" will result only in
documents that have th
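A toy illustration of the difference, using plain string building rather than the real Lucene query parser (the OR expansion assumes the default operator is OR, as stated above):

```java
public class QueryFormsSketch {
    // attorney:(Roger Miller): each term is scoped to the field and
    // joined by the default operator.
    static String fieldedGroup(String field, String... terms) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < terms.length; i++) {
            if (i > 0) sb.append(" OR ");
            sb.append(field).append(':').append(terms[i]);
        }
        return sb.toString();
    }

    // attorney:"Roger Miller": one phrase query; the terms must
    // appear adjacent and in order.
    static String phrase(String field, String text) {
        return field + ":\"" + text + "\"";
    }

    public static void main(String[] args) {
        System.out.println(fieldedGroup("attorney", "Roger", "Miller"));
        // attorney:Roger OR attorney:Miller
        System.out.println(phrase("attorney", "Roger Miller"));
        // attorney:"Roger Miller"
    }
}
```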
I'm using SolrJ and attempting to index some latitudes and longitudes. My
schema.xml has this dynamicField definition:
When I attempt to update a document like so
doc.setField("job_coordinate","40.7143,-74.006");
I get the following error:
Exception in thread "main"
org.apache.solr.client.solrj.
In our application, we index educational resources and allow searching for
them.
We allow our customers to change some of the non-textual metadata associated
with a resource (like booklevel, interestlevel etc) to serve their users
better.
So for each resource, in theory it could have different set
I'm not entirely sure i understand what you are *trying* to do, but what
you are currently doing is..
1) defining a dynamic field of type "tdouble"
2) indexing a doc with a value that can not be parsed as a double into a
field that uses this dynamic field
forget that you've named it "coordinat
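To make the type mismatch concrete: the comma-separated pair is not itself a parseable double, but its halves are (plain Java, purely illustrative). In the stock schema, *_coordinate is the tdouble sub-field used internally by LatLonType fields, so the "lat,lon" pair belongs in a field whose type is designed for that input:

```java
public class CoordinateSketch {
    public static void main(String[] args) {
        String value = "40.7143,-74.006";
        // Double.parseDouble(value) would throw NumberFormatException:
        // the string as a whole is not a valid double.
        String[] parts = value.split(",");
        double lat = Double.parseDouble(parts[0]);
        double lng = Double.parseDouble(parts[1]);
        System.out.println(lat); // 40.7143
        System.out.println(lng); // -74.006
    }
}
```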
I have data coming in to Solr as below:
X™ - Black
I need to store the HTML entity (decimal) equivalent value (i.e. &#8482;)
in Solr rather than storing the original value.
Is there a way to do this?
--
View this message in context:
http://lucene.472066.n3.nabble.com/How-to-index-X-as-8482-HTM
Thanks Mark and Tim. My understanding has been upgraded.
-Original Message-
From: Mark Miller [mailto:markrmil...@gmail.com]
Sent: Tuesday, November 19, 2013 1:59 PM
To: solr-user@lucene.apache.org
Subject: Re: Zookeeper down question
On Nov 19, 2013, at 2:24 PM, Timothy Potter wrote:
Hi,
After reading this link about DocValues, and being pointed by Mark Miller to
raise the question on the mailing list, I have some questions about the
codec implementation note:
"Note that only the default implementation is supported by future version of
Lucene: if you try an alternative format
Ah, I see where I went wrong. I didn't define that dynamic field, it was in
the Solr default schema.xml file. I thought that adding a dynamic field
called *_coordinate would basically do the same thing for latitude and
longitudes as adding a dynamic field like *_i does for integers, i.e. index
it
On 11/19/2013 4:10 PM, yriveiro wrote:
After the reading this link about DocValues and be pointed by Mark Miller to
raise the question on the mailing list, I have some questions about the
codec implementation note:
"Note that only the default implementation is supported by future version of
Luce
Thank you for opening the issue.
I'm not sure that my case is representative. I spend three hours every day
on the train (commuting to work). I like to use this time to
have a closer look at manuals. Printouts and laptops are horrible in
this situation. So there is only the alternative
Shawn,
This setup has big implications, and I think that this problem, and how it can
be overcome (all the process that you describe), is not described properly in
either the wiki or the reference guide.
+1 to finding a way to upgrade without reindexing the data; I don't have enough
space to do an optimize of 3T a
Other question:
Can someone confirm that I can upgrade from 4.5.1 to 4.6 in a safe and clean
way (without optimizing and all that stuff)?
--
Yago Riveiro
Sent with Sparrow (http://www.sparrowmailapp.com/?sig)
On Wednesday, November 20, 2013 at 12:16 AM, Yago Riveiro wrote:
> Shawn,
>
> This s
Hi,
We have the ability to turn off caching for filter queries -
http://wiki.apache.org/solr/CommonQueryParameters#Caching_of_filters
I didn't try it, but I think one can't turn off caching for regular
queries, a la:
q={!cache=false}
Is there a reason this could not be done?
Thanks,
Otis
--
Pe
Garth,
Here is something else related to help push the upgrade further:
http://search-lucene.com/m/gUajqxuETB1/&subj=Re+SolrCloud+and+split+brain
Monitor your beast keeper: http://search-lucene.com/m/R9vEg2JmiR91
Otis
On Tue, Nov 19, 2013 at 5:56 PM, Garth Grimm <
garthgr...@averyranchconsult
Very fuzzy idea here, and maybe there are better approaches I'm not
thinking of right now, but would working with dynamic fields whose names
include the customer ID work for you here?
e.g.
global field: booklevel=valueX
customer-specific field for customer 007: booklevel_007=valueY
Your query could t
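One hedged sketch of that idea (field names follow the example above; def() is Solr's default-value function query, which falls back to its second argument when the first field has no value):

```java
public class CustomerFieldSketch {
    // Build a function-query expression that prefers the
    // customer-specific field and falls back to the global one,
    // e.g. def(booklevel_007, booklevel).
    static String effectiveField(String baseField, String customerId) {
        return "def(" + baseField + "_" + customerId + ", " + baseField + ")";
    }

    public static void main(String[] args) {
        System.out.println(effectiveField("booklevel", "007"));
        // def(booklevel_007, booklevel)
    }
}
```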
You could use an update processor to map non-ASCII codes to SGML entities.
You could code it as a JavaScript script and use the stateless script update
processor.
-- Jack Krupansky
-Original Message-
From: Developer
Sent: Tuesday, November 19, 2013 5:46 PM
To: solr-user@lucene.apache
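The mapping such an update processor script would perform could look like this, sketched in plain Java for illustration (the real thing would live in a StatelessScriptUpdateProcessorFactory script, as Jack describes):

```java
public class EntitySketch {
    // Replace each code point above US-ASCII with its decimal HTML
    // entity, e.g. the trademark sign U+2122 becomes &#8482;.
    static String toDecimalEntities(String in) {
        StringBuilder out = new StringBuilder();
        in.codePoints().forEach(cp -> {
            if (cp > 127) {
                out.append("&#").append(cp).append(';');
            } else {
                out.appendCodePoint(cp);
            }
        });
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.println(toDecimalEntities("X\u2122 - Black"));
        // X&#8482; - Black
    }
}
```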
Why do you want to do this? You can always do this transformation on the
presentation side. Doing this on the search server could be a really bad idea.
wunder
On Nov 19, 2013, at 8:19 PM, "Jack Krupansky" wrote:
> You could use an update processor to map non-ASCII codes to SGML entities.
> Yo
Btw. isn't the situation Timothy is describing what hinted handoff is all
about?
http://wiki.apache.org/cassandra/HintedHandoff
http://www.datastax.com/dev/blog/modern-hinted-handoff
Check this:
http://www.jroller.com/otis/entry/common_distributed_computing_routines
Otis
--
Performance Monitorin
Have a look at https://issues.apache.org/jira/browse/SOLR-5027 +
https://wiki.apache.org/solr/CollapsingQParserPlugin
Otis
--
Performance Monitoring * Log Analytics * Search Analytics
Solr & Elasticsearch Support * http://sematext.com/
On Wed, Nov 13, 2013 at 2:46 PM, David Anthony Troiano <
dtr
Quickly scanned this, and from what I can tell it picks values for things
like Xmx based on the memory found on the host. This is a fine first guess, but
ultimately one wants control over that and adjusts it based on factors
beyond just available RAM, such as whether sorting is used, or faceting, on
how
Hi,
I have a site that I crawl, and I host the index. The web site changes every
month, which requires a re-crawl; a new Solr index is then
created. How can I effectively swap the previous one with the new one with
minimal downtime for search?
We have tried swapping the core b
It should bypass cache for sure
https://github.com/apache/lucene-solr/blob/34a92d090ac4ff5c8382e1439827d678265ede0d/solr/core/src/java/org/apache/solr/search/SolrIndexSearcher.java#L1263
On 20.11.2013 7:05, "Otis Gospodnetic"
wrote:
> Hi,
>
> We have the ability to turn off caching for
Hi David,
Thank you for your reply.
This is my current schema: the field type "location_rpt" is a
SpatialRecursivePrefixTreeFieldType, and
the field "location" is of type "location_rpt" and is multiValued.
Whenever I add a document to Solr, I'll
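For reference, the kind of schema entries being described might look like this (a sketch only; the attribute values shown are common defaults and may differ from the poster's actual schema):

```xml
<fieldType name="location_rpt"
           class="solr.SpatialRecursivePrefixTreeFieldType"
           geo="true" distErrPct="0.025" maxDistErr="0.000009"
           units="degrees"/>

<field name="location" type="location_rpt"
       indexed="true" stored="true" multiValued="true"/>
```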
Have a look at https://issues.apache.org/jira/browse/SOLR-4497 +
https://cwiki.apache.org/confluence/display/solr/Collections+API#CollectionsAPI-CreateormodifyanAliasforaCollection
Otis
--
Performance Monitoring * Log Analytics * Search Analytics
Solr & Elasticsearch Support * http://sematext.com/
Since you are faceting on a text field (is this correct?), you are dealing with a
lot of unique values in it. So your best bet is the enum method. Also, if you
are on Solr 4.x, try building doc values in the index: this suits faceting
well.
Otherwise, start from your spec once again. Can you use shingles instead?
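Concretely, the schema side of the suggestions above might look like this (a sketch only: the field name is made up, and docValues generally requires a non-tokenized type such as string, plus a reindex):

```xml
<!-- schema.xml sketch: a non-tokenized copy of the field with
     docValues enabled, suitable for faceting -->
<field name="my_facet_field" type="string"
       indexed="true" stored="false" docValues="true"/>
```

The enum method is then chosen per request, e.g. facet=true&facet.field=my_facet_field&facet.method=enum.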
On 11/19/2013 10:18 PM, Tirthankar Chatterjee wrote:
> I have a site that I crawl and host the index. The web site has changes every
> month which requires it to re-crawl. Now there is a new SOLR index that is
> created. How effectively can I swap the previous one with the new one with
> minimal