Does this mean that the schema.xml must be exactly the same between those
collections or just partially the same (sharing the fields used to satisfy
the query)?
cheers,
/Marcin
Hi guys,
I have noticed that the Master/Slave replication process slows down
slave read/search performance while replication is running.
please help
cheers
Hi,
Since Solr 5.x there is no web archive (war) in the standard distribution,
so I'm building a web archive (war) and deploying it in my Tomcat
8 container. It works up to Solr 5.3. Since Solr 5.4 it doesn't work
(the admin page doesn't work). The problem is with
org.apache.solr.servlet.LoadAdminUiSer
() + ':' + val.toString());
which, when the field type is unknown, serializes the field as a
string (this is my case).
When I convert the enum to a string everything works fine, but I don't want to do that.
How can I properly send a java object which contains an enum field to
solr via solrj?
Best regards,
Marcin
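A minimal sketch of the string workaround at the HTTP level (collection, field and enum names are hypothetical; this is roughly the payload SolrJ produces once the enum is converted with name()):

```shell
# Hypothetical names throughout; the enum value is sent as a plain string.
DOC='{"id":"doc1","status":"ACTIVE"}'
URL="http://localhost:8983/solr/collection1/update/json/docs?commit=true"
echo "$DOC"
# curl -H 'Content-Type: application/json' -d "$DOC" "$URL"
```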
Hi everyone,
I got the following issue recently. I'm trying to use frange on a field
which has a hyphen in its name:
true
on
*:*
xml
{!frange l=1 u=99}sub(if(1, div(acc_curr_834_2-1900_tl,
1), 0), 1)
2.2
I got the following error:
DEBUG - 2014-03-19 12:11:53.805; or
gt; div(acc_curr_834_2-1900_tl,1)
>
> becomes:
>
> div(field('acc_curr_834_2-1900_tl'),1)
>
> -- Jack Krupansky
>
> -Original Message- From: Marcin Rzewucki
> Sent: Wednesday, March 19, 2014 8:13 AM
> To: solr-user@lucene.apache.org
&g
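To make the suggestion above concrete, here is a sketch of the rewritten request (the endpoint is hypothetical; the field name comes from the thread). Wrapping the hyphenated name in field() stops the parser from reading the hyphen as subtraction:

```shell
# field('...') quotes the hyphenated field name inside the function query.
FQ="{!frange l=1 u=99}sub(if(1,div(field('acc_curr_834_2-1900_tl'),1),0),1)"
echo "$FQ"
# curl -G "http://localhost:8983/solr/collection1/select" \
#   --data-urlencode "q=*:*" --data-urlencode "fq=$FQ"
```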
Hi,
I have the following issue with the join query parser and a filter query. For
this query:
*:*
(({!join from=inner_id to=outer_id fromIndex=othercore}city:"Stara
Zagora")) AND (prod:214)
I got error:
org.apache.solr.search.SyntaxError: Cannot parse 'city:"Stara': Lexical
error at line 1, column
Stara
>
> Or have a line break in the string you paste into the URL
> or something similar.
>
> Kind of shooting in the dark though.
>
> Erick
>
> On Wed, Mar 19, 2014 at 8:48 AM, Marcin Rzewucki
> wrote:
> > Hi,
> >
> > I have the followin
t; yours hasn't in your setup.
>
> Best,
> Erick
>
> On Thu, Mar 20, 2014 at 2:19 AM, Marcin Rzewucki
> wrote:
> > Nope. There is no line break in the string and it is not fed from a file.
> > What else could be the reason?
> >
> >
> >
> &
is
> > > may date from when he made a copy of the Lucene query parser for Solr
> and
> > > added the parsing of embedded nested query parsers to the grammar. It
> > seems
> > > like the embedded nested query parser is only being applied to a
> single,
> > > whit
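One way to keep whitespace out of the local-params braces in the join fq discussed above is parameter dereferencing: refer to the embedded query with v=$qq and pass it as a separate request parameter (host and names are hypothetical):

```shell
# $qq is resolved by Solr, not by the shell, hence the single quotes.
FQ='({!join from=inner_id to=outer_id fromIndex=othercore v=$qq}) AND (prod:214)'
QQ='city:"Stara Zagora"'
echo "$FQ"
# curl -G "http://localhost:8983/solr/collection1/select" \
#   --data-urlencode "q=*:*" --data-urlencode "fq=$FQ" --data-urlencode "qq=$QQ"
```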
Hi,
Do you know how to optimize the index on a single shard only? I was trying to
use "optimize=true&waitFlush=true&shard.keys=myshard" but it does not work
- it optimizes all shards instead of just one.
Kind regards.
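A possible workaround, untested here: send the optimize straight to one core with distrib=false so the command is not fanned out (host and core name are hypothetical):

```shell
# distrib=false keeps the update command local to the addressed core.
CORE="collection1_shard1_replica1"
URL="http://localhost:8983/solr/$CORE/update?optimize=true&distrib=false"
echo "$URL"
# curl "$URL"
```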
11:19, YouPeng Yang wrote:
> Hi Marcin
>
> Thanks to your mail, now I know why my cloud hangs when I just click the
> optimize button on the overview page of the shard.
>
>
> 2014-05-20 15:25 GMT+08:00 Ahmet Arslan :
>
> > Hi Marcin,
> >
> > just a gu
code, unfortunately, it seems the optimize action is
> distributed over the whole collection. You can reference
> SolrCmdDistributor.distribCommit.
>
>
> 2014-05-20 17:27 GMT+08:00 Marcin Rzewucki :
>
> > Well, it should not hang if all is configured fine :) How many shards and
> >
Hi,
You should use the CoreAdmin API (or the Solr Admin page) and UNLOAD unneeded
cores. This will unregister them from ZooKeeper (the cluster state will be
updated), so they won't be used for querying any longer. A SolrCloud restart
is not needed in this case.
Regards.
On 16 July 2013 06:18, Ali, Saqib
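A sketch of the UNLOAD call described above (host and core name are hypothetical):

```shell
# UNLOAD unregisters the core from ZooKeeper; the cluster state is updated.
CORE="mycollection_shard1_replica2"
URL="http://localhost:8983/solr/admin/cores?action=UNLOAD&core=$CORE"
echo "$URL"
# curl "$URL"
```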
Hi,
I have a problem (and I wonder if it is possible to solve at all) with the
following query. There are documents with a field which contains text and
a number in brackets, e.g.:
myfield: this is a text (number)
There might be some other documents with the same text but different number
in bracke
at indexing time.
>
> --
> Oleg
>
>
> On Tue, Jul 16, 2013 at 10:12 AM, Marcin Rzewucki >wrote:
>
> > Hi,
> >
> > I have a problem (wonder if it is possible to solve it at all) with the
> > following query. There are documents with a field which contains
gt; 10 | text N3 | Z
>
> does it help?
>
>
>
> On Tue, Jul 16, 2013 at 10:51 AM, Marcin Rzewucki >wrote:
>
> > Hi Oleg,
> > It's a multivalued field and it won't be easier to query when I split
> this
> > field into text and numbers. I
; substring can be thought of as a simple range query. So, for example the
> following query:
>
> "lucene 1*"
>
> becomes behind the scenes: "lucene (10|11|12|13|14|1abcd)"
>
> the issue there is that it is a string range, but it is a range query - it
> just
m the cluster state
> not cores, unless you can unload cores specific to an already offline node
> from zookeeper.
>
>
> On Tue, Jul 16, 2013 at 1:55 AM, Marcin Rzewucki >wrote:
>
> > Hi,
> >
> > You should use CoreAdmin API (or Solr Admin page) and UNLOAD un
Hi,
After upgrading from solr 4.3.1 to solr 4.4 I have the following issue:
ERROR - 2013-07-25 20:00:15.433; org.apache.solr.core.CoreContainer; Unable
to create core: awslocal_shard5
org.apache.solr.common.SolrException: Error opening new searcher
at org.apache.solr.core.SolrCore.(SolrCo
http://wiki.apache.org/solr/DocValues#Specifying_a_different_Codec_implementation
OK, it seems there's no back compat for disk based docvalues
implementation. I have to reindex documents to get rid of this issue.
On 25 July 2013 22:17, Marcin Rzewucki wrote:
> Hi,
>
> After
Hi,
Can somebody explain why there are additional requirements for a field to
be able to use DocValues? For example, Trie*Fields have to be required or
have a default value.
"Schema Parsing Failed: Field
tlong{class=org.apache.solr.schema.TrieLongField,analyzer=org.apache.solr.analysis.TokenizerCh
Hi Shawn,
Thank you for your response. Yes, that's strange. By enabling DocValues the
information about missing fields is lost, which changes the sorting
behaviour as well. Adding a default value to the fields can change the
application logic dramatically (I can't set a default value of 0 for all
Trie*F
Hi,
I have a collection with more than 4K fields, mostly Trie*Field types.
It is used for faceting, sorting, searching and the StatsComponent. It works
pretty well on Amazon 4x m1.large (7.5GB RAM) EC2 boxes. I'm using
SolrCloud, multi A-Z setup and ephemeral storage. Index is managed by mmap,
4GB f
Hi Chris,
Thanks for your detailed explanations. The default value is a difficult
limitation, especially for financial figures. I may try some workaround
like the lowest possible number for TrieLongField, but it would be
better to avoid that :)
Regards.
On 22 March 2013 20:39, Chris Hostetter
Hi John,
Mark is right. DocValues can be enabled in two ways: RAM resident (default)
or on-disk. You can read more here:
http://www.slideshare.net/LucidImagination/column-stride-fields-aka-docvalues
Regards.
On 22 March 2013 16:55, John Nielsen wrote:
> "with the on disk option".
>
> Could you
Hi,
Atomic updates (single-field updates) do not depend on DocValues. They were
implemented in Solr 4.0 and work fine (but all fields have to be
retrievable). DocValues are supposed to be more efficient than FieldCache.
Why not enabled by default? Maybe because they are not for all fields and
beca
chanism is not really a field update
> mechanism. It just looks like that from the outside. DocValues
> should make true field updates implementable.
>
> Otis
> --
> Solr & ElasticSearch Support
> http://sematext.com/
>
>
>
>
>
> On Fri, Mar 29, 2013
It just looks like that from the outside. DocValues
> should make true field updates implementable.
>
> Otis
> --
> Solr & ElasticSearch Support
> http://sematext.com/
>
>
>
>
>
> On Fri, Mar 29, 2013 at 3:30 PM, Marcin Rzewucki
> wrote:
> > Hi,
Hi,
Recently I noticed a lot of "Reordered DBQs detected" messages in the logs.
As far as I checked, it could be related to deleting documents, but I'm not
sure. Do you know what causes those messages?
Apr 23, 2013 1:20:14 AM org.apache.solr.search.SolrIndexSearcher
INFO: Opening Sea
OK. Thanks for explanation.
On 23 April 2013 23:16, Yonik Seeley wrote:
> On Tue, Apr 23, 2013 at 3:51 PM, Marcin Rzewucki
> wrote:
> > Recently I noticed a lot of "Reordered DBQs detected" messages in logs.
> As
> > far as I checked in logs it could be relat
Hi there,
StatsComponent currently does not have median on its list of results. Is
there a plan to add it in the next release(s)? Shall I create a ticket in
Jira for this?
Regards.
Hi,
Is there something similar to ElasticSearch's search&scroll feature in
Solr? For me, it's very useful for dumping only a subset of documents.
Regards.
ilt-in functionality would be more efficient.
Regards.
On 4 June 2013 18:41, Patricia Gorla wrote:
> Marcin,
>
> For a recent project we implemented search and scroll by paging through the
> result set. Basically, you'll want to just count through your rows and see
> if they
Hi,
I have 4 Solr collections, 2-3 million documents per collection, up to 100K
updates per collection daily (roughly). I'm going to create SolrCloud4x on
Amazon's m1.large instances (7GB mem, 2x2.4GHz CPU each). The question is
what about ZooKeeper? It's going to be an external ensemble, but is it better
t
ZooKeeper.
>
> > Performance-wise, I doubt it's a big deal either way.
>
> > - Mark
>
> > On Nov 21, 2012, at 8:54 AM, Marcin Rzewucki
> wrote:
>
> >> Hi,
> >>
> >> I have 4 solr collections, 2-3mn documents per collection, up to 100K
>> 3 ZK instances you can get them running side by side.
> >>
> >> --
> >> Regards,
> >> Rafał Kuć
> >> Sematext :: http://sematext.com/ :: Solr - Lucene - Nutch -
> ElasticSearch
> >>
> >> > Separate is generally nice bec
best one, as I told you, but I think that is optimal in terms of
> robustness, single point of failure and costs.
>
>
> It would be a pleasure to hear new suggestions from other people that
> dealed with this kind of issues.
>
> Regards,
>
>
> - Luis Cappa.
>
>
&g
Hi,
I'm using "cloud-scripts/zkcli.sh" script for reloading configuration, for
example:
$ ./cloud-scripts/zkcli.sh -cmd upconfig -confdir -solrhome
-confname -z
Then I'm reloading the collection on each node in the cloud, but maybe
someone knows a better solution.
Regards.
On 22 November 2012 19:23, C
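For reference, a sketch of the two steps (ZooKeeper hosts, paths and names are hypothetical); the Collections API RELOAD reaches every replica, so the per-node loop can be avoided:

```shell
# Step 1: upload the config dir; step 2: reload the whole collection once.
ZKHOST="zk1:2181,zk2:2181,zk3:2181"
UPCONFIG="./cloud-scripts/zkcli.sh -cmd upconfig -zkhost $ZKHOST -confdir /path/to/conf -confname myconf"
RELOAD="http://localhost:8983/solr/admin/collections?action=RELOAD&name=mycollection"
echo "$UPCONFIG"
echo "$RELOAD"
# eval "$UPCONFIG" && curl "$RELOAD"   # run against a live cluster
```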
I think solrhome is not mandatory.
Yes, reloading means uploading the config dir again. It's a pity we can't
update just the modified files.
Regards.
On 22 November 2012 19:38, Cool Techi wrote:
> Thanks, but why do we need to specify the -solrhome?
>
> I am using the following command to load new config,
Hi,
I added authentication in Jetty and it works fine. However, it's strange
that a URL pattern like "/admin/cores*" does not work, while "/admin/*"
works correctly.
Regards.
On 17 November 2012 01:10, Marcin Rzewucki wrote:
> Hi,
>
> Yes, I'm trying to
Hi,
It seems like the file is missing from Zookeeper. Can you confirm ?
Regards.
On 26 November 2012 07:57, deniz wrote:
> Hi all,
>
> I am working on solrcloud and trying to import from db... but I am getting
> this error:
>
>
>
>
> 500 name="QTime">3Error opening
> /configs/poppenuser//hom
Hi,
I have SolrCloud4x. I'd like to be able to make an index backup on each node.
When I did /replication?command=backup on one of them I got:
File
/data/index.20121119140848151/segments_1am does not exist
What does it mean? The file is indeed not there, but querying and indexing
work fine. Also I don'
ud, but I can't
> think of any reason it should not work.
>
> Have you tried a back up with a single node Solr setup with 4x?
>
> - Mark
>
> On Nov 27, 2012, at 4:28 PM, Marcin Rzewucki wrote:
>
> > Hi,
> >
> > I have SolrCloud4x. I'd like
Hi,
I think you should change/set the value of the multipartUploadLimitInKB
attribute of requestParsers in solrconfig.xml.
Regards.
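For example, a sketch of the relevant solrconfig.xml fragment (the limit value here is an assumption; it is expressed in KB):

```xml
<!-- In solrconfig.xml: raise the multipart upload limit (10240 KB = 10 MB). -->
<requestParsers enableRemoteStreaming="false" multipartUploadLimitInKB="10240"/>
```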
On 29 November 2012 07:58, deniz wrote:
> hello,
>
> during tests, I keep getting
>
> SEVERE: null:java.lang.IllegalStateException: Form too large305367>20
>
Hi,
I have a Solr cluster and I want to use UUID as the unique key. I configured
solrconfig and the schema according to the rules on the Wiki page:
http://wiki.apache.org/solr/UniqueKey
In logs I can see some UUID is being generated when adding new document:
INFO: [selekta] webapp=/solr path=/update params={}
JIRA ticket created: https://issues.apache.org/jira/browse/SOLR-4170
On 27 November 2012 23:41, Mark Miller wrote:
> Perhaps you can file a JIRA ticket with your findings?
>
> - Mark
>
> On Nov 27, 2012, at 5:31 PM, Marcin Rzewucki wrote:
>
> > Yes, I have and it
rstand it, uses its own protocol, so to some
> > reasonable extent it probably depends on your load balancer. Also, as I
> > understand it, ZooKeeper maintains active connections to Solr hosts,
> > which is not a common scenario for load balancers, as I understand it.
> >
> >
Right, that's a good idea. Thanks!
On 30 December 2012 17:41, Aloke Ghoshal wrote:
> Hi Marcin,
>
> Since you are thinking of this in the context of Amazon, I would suggest
> taking a different route. Assign an Elastic IP (EIP) to each EC2 instance
> running the ZK node &
and play with things like ramBufferSizeMB and anything else that has the
> potential of making indexing "gentler" on resources, be that CPU or disk
> or...
>
> Otis
> --
> Solr & ElasticSearch Support
> http://sematext.com/
>
>
>
>
>
> On Fri,
There's no problem with indexing while taking a snapshot. The only issue I
found is a problem with the index directory:
https://issues.apache.org/jira/browse/SOLR-4170
It looks like Solr always looks in the .../data/index/ directory without
reading the "index.properties" file (sometimes your index dir name ca
Definitely, I agree. It's good to stop loading before a snapshot. Anyway,
taking an index snapshot, say, every hour and re-indexing documents newer
than the last 1-1.5 hours should reduce your index recovery time.
On 8 January 2013 07:36, Otis Gospodnetic wrote:
> Hi,
>
> Right, you can continue indexing, b
Hi Romita,
The 3rd parameter should be '/solr/' because ping() sends a request
to the /solr//admin/ping handler. Try it, it should work.
Regards.
On 17 December 2012 03:23, Romita Saha wrote:
> Hi,
>
> I open the Solr browser using the following url:
>
> http://localhost:8983/solr/browser
>
> The PHP c
e
> any versions if it doesn't read them from a tlog on startup...
>
> - Mark
>
> On Jan 22, 2013, at 3:31 PM, Marcin Rzewucki wrote:
>
> > Hi,
> >
> > I'm using SolrCloud4.0 with 2 shards and I did such test: stopped Solr on
> > shard1 replica, remov
On 22 January 2013 23:06, Yonik Seeley wrote:
> On Tue, Jan 22, 2013 at 4:37 PM, Marcin Rzewucki
> wrote:
> > Sorry, my mistake. I did 2 tests: in the 1st I removed just index
> directory
> > and in 2nd test I removed both index and tlog directory. Log lines I've
> &
the logs you have shown
> make complete sense. It then says 'trying replication', which is what I
> would expect, and the bit you are saying has failed. So the interesting
> bit is likely immediately after the snippet you showed.
>
>
>
> Upayavira
>
>
>
>
>
&
>
>
>
>
>
> On Wed, Jan 23, 2013, at 10:28 AM, Marcin Rzewucki wrote:
>
> Hi,
>
> Previously, I took the lines related to collection I tested. Maybe some
> interesting part was missing. I'm sending the full log this time.
>
> It ends up with:
>
> INF
6:08 AM org.apache.solr.core.CachingDirectoryFactory get
> INFO: return new directory for
> /solr/cores/bpr/selekta/data/index.20130121090342477 forceNew:false
>
> Once you look in that dir, how do things look?
>
> Upayavira
>
> On Wed, Jan 23, 2013, at 10:45 AM, Marcin Rzew
h logging showing that activity.
>
> Is this Solr 4.0?
>
> - Mark
>
> On Jan 23, 2013, at 9:27 AM, Upayavira wrote:
>
> > Mark,
> >
> > Take a peek in the pastebin url Marcin mentioned earlier
> > (http://pastebin.com/qMC9kDvt) is there enough info there?
Hi,
Have you tried to add aliases to your network interface (for master and
slave)? Then you should use -Djetty.host and -Djetty.port to bind Solr to
the appropriate IPs. I think you should also use different directories for
Solr files (-Dsolr.solr.home) as there may be some conflict with index
file
x.1234...
>
> Indexing and search seem to be fine. Can someone confirm that this is
> harmless?
>
> Marcin Rzewucki asked the same question on December 28 2012 and got no
> response. Can someone kindly respond please?
>
> Thanks
> PixalSoft
>
Hi,
The best approach is to find a query matching all the docs you want to
remove. If this is not simple, you can use the syntax id:(1 2 3 4 5) to
remove a group of docs by ID (if your default query operator is OR).
Regards.
On 27 January 2013 11:47, Bruno Mannina wrote:
> Dear Solr users
You can write a script and remove, say, 50 docs in one call. It's always
better than removing them one by one.
Regards.
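A sketch of such a batched delete (host and collection are hypothetical; the ID list would come from your script):

```shell
# One delete-by-query call removes the whole batch of IDs.
IDS="1 2 3 4 5"
PAYLOAD="<delete><query>id:($IDS)</query></delete>"
echo "$PAYLOAD"
# curl -H 'Content-Type: text/xml' -d "$PAYLOAD" \
#   "http://localhost:8983/solr/collection1/update?commit=true"
```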
On 27 January 2013 13:17, Bruno Mannina wrote:
> Hi,
>
> Even If I have one or two thousands of Id ?
>
> Thanks
>
> Le 27/01/2013 13:15, Marcin Rzewucki a écrit
On Jan 23, 2013, at 3:14 PM, Marcin Rzewucki wrote:
>
> > Guys, I pasted you the full log (see pastebin url). Yes, it is Solr4.0. 2
> > cores are in sync, but the 3rd one is not:
> > INFO: PeerSync Recovery was not successful - trying replication.
> core=ofac
> > INFO: S
Hi,
If you add security constraint for /admin/*, SolrCloud will not work. At
least that's what I had in Solr4.0. I have not tried the same with Solr4.1,
but I guess it is the same.
Also I found some issues with URL patterns in webdefault.xml
This:
/core/update
works, but for some reason this
Check below if that's better for you:
http://pastebin.com/ardqNcC7
On 1 February 2013 21:25, Marcin Rzewucki wrote:
> Hi,
>
> I was trying to join documents across cores on same shard in SolrCloud4.1
> and I got this error:
>
> java.lang.
have a better error message rather than a NPE of course...
>
> -Yonik
> http://lucidworks.com
>
>
> On Fri, Feb 1, 2013 at 3:45 PM, Marcin Rzewucki
> wrote:
> > Check below if that's better for you:
> > http://pastebin.com/ardqNcC7
> >
> >
> > On
I meant I get fields from parent core only. Is it possible to get fields
from both cores using join query?
On 1 February 2013 23:36, Marcin Rzewucki wrote:
> Thanks Yonik. I see no errors now. Is it possible to get fields from both
> cores for returned results ?
>
>
>
> On 1 F
I'm experiencing the same problem in Solr 4.1 during bulk loading. After 50
minutes of indexing the following error starts to occur:
INFO: [core] webapp=/solr path=/update params={} {} 0 4
Feb 02, 2013 11:36:15 PM org.apache.solr.common.SolrException log
SEVERE: org.apache.solr.common.SolrException: Cl
the logs? That is the
> likely culprit for something like this. You may need to raise the timeout:
> http://wiki.apache.org/solr/SolrCloud#FAQ
>
> If you see no session timeouts, I don't have a guess yet.
>
> - Mark
>
> On Feb 2, 2013, at 7:35 PM, Marcin Rzewucki wrote:
&
On 3 February 2013 20:55, Mark Miller wrote:
> What led you to trying that? I'm not connecting the dots in my head - the
> exception and the solution.
>
> - Mark
>
> On Feb 3, 2013, at 2:48 PM, Marcin Rzewucki wrote:
>
> > Hi,
> >
> > I think the issue wa
On 3 February 2013 21:26, Shawn Heisey wrote:
> On 2/3/2013 1:07 PM, Marcin Rzewucki wrote:
>
>> I'm loading in batches. 10 threads are reading json files and load to Solr
>> by sending POST request (from couple of dozens to couple of hundreds docs
>> in 1 request)
value
for maxFormContentSize (1M) and there were no issues either.
Regards.
On 3 February 2013 22:16, Marcin Rzewucki wrote:
> Hi,
>
> I set this:
>
> org.eclipse.jetty.server.Request.maxFormContentSize
> 10485760
>
> multipartUploadLimitInKB is s
Hi,
It does not work for distributed search:
org.apache.solr.handler.component.ShardFieldSortedHitQueue.getCachedComparator(ShardDoc.java:193)
...
case DOC:
// TODO: we can support this!
throw new RuntimeException("Doc sort not supported");
...
Try to sort by unique ID.
Regards.
Hi,
I was able to implement custom hashing with the use of a "_shard_" field.
It contains the name of the shard a document should go to. Works fine.
Maybe there's some other method to do the same via solrconfig.xml, but I
have not found any docs about it so far.
Regards.
On 18 February 20
Right. Collection API can be used here.
On 18 February 2013 21:36, Timothy Potter wrote:
> @Marcin - Maybe I mis-understood your process but I don't think you
> need to reload the collection on each node if you use the expanded
> collections admin API, i.e. the following will
Hi there,
Let's say we use a custom hashing algorithm and there is a document already
indexed in "shard1". After some time the same document has changed and
should be indexed to "shard2" (because of the routing rules used in the
indexing program). It has been indexed without issues and as a result 2 "almost
Thank you very much for the answer.
You were right. There was no luceneMatchVersion in the solrconfig.xml of our
dev core. We thought that values not present in the core configuration are
copied from the main solrconfig.xml. I will investigate whether our
administrators did something wrong during the upgrade to 3.1.
On T
Solr version:
Solr Specification Version: 3.1.0
Solr Implementation Version: 3.1.0 1085815 - grantingersoll -
2011-03-26 18:00:07
Lucene Specification Version: 3.1.0
Lucene Implementation Version: 3.1.0 1085809 - 2011-03-26 18:06:58
Current Time: Wed Apr 27 14:28:34 CEST 2011
Server Start Time:Wed
Hello,
I have a problem using the threads option on an entity in DIH: it just does
not work; it either hangs itself or fails to import anything.
Does this feature even work?
Without threads the import works fine, just too slowly.
Di
I am using Solr 3.1 but tried it with the 4.0 beta too.
Does it depend on the batchSize argument? I also have table relationships;
I tried without them, same effect.
Is there a full-featured example of how to use this threads parameter?
rent BigDecimal values (so the
problem is not related to BigDecimal value).
We have no idea what's going on. Any ideas?
Greetings
--
Marcin P
? I'm guessing that your (SolrJ?) program is
> somehow messing this up...
>
> Best
> Erick
>
>
> On Wed, Oct 31, 2012 at 7:28 AM, Marcin Pilaczynski
> wrote:
>
>> Welcome all,
>>
>> We have a very strange problem with SOLR 3.5. It SOMETIMES throws
&g
nd Date. the JavaBinCodec has some logic for trying to deal with other
> types of objects -- but i *thought* it's fall back was to just rely on
> "toString" for any class it doesn't recognize -- so it does seem like
> there is a bug in there somewhere...
>
> https://issues.apache.org/jira/browse/SOLR-4021
>
>
> -Hoss
--
Marcin P
Hi,
It happens a lot that I need to update just one file in ZooKeeper.
I'm using zkcli.sh with -upconfig for the whole directory of configuration
files. I wonder if it is possible to update a single file in ZooKeeper.
Do you have any ideas?
Thanks!
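zkcli does have a putfile command that replaces a single znode, which may be enough here (ZK host and paths are hypothetical):

```shell
# putfile uploads one file to one znode instead of the whole conf dir.
CMD="./cloud-scripts/zkcli.sh -cmd putfile -zkhost zk1:2181 /configs/myconf/schema.xml ./conf/schema.xml"
echo "$CMD"
# eval "$CMD"   # then RELOAD the collection to pick up the change
```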
Hi,
You can prepare the following structure (field names are placeholders):
<add>
  <doc>
    <field name="fieldname">value1</field>
    <field name="fieldname">value2</field>
  </doc>
</add>
You can find sample files in solr package (example/exampledocs/ dir) along
with "post.sh" script which might be useful for you.
Regards.
On 16 November 2012 15:38, Spadez wrote:
> Hi,
>
> I was wondering if someone could s
Hi,
Does anybody know if Solr supports Admin page authentication?
I'm using Jetty from the latest Solr package. I added the security option to
start.ini:
OPTIONS=Server,webapp,security
and in the configuration file I have (according to the Jetty documentation):
noticed with 4.0 it no longer lives under /admin but rather /solr...and
> that means you can't just password-protect it without password-protecting
> all of solr. If I am wrong, please let me know...I would love to protect it
> somehow
>
>
> On 11/16/2012 10:55 AM, Marcin Rzew
Hi,
As far as I know, CloudSolrServer is recommended for indexing to SolrCloud.
I wonder what the advantages of this approach are over an external load
balancer? Let's say I have a 4-node SolrCloud (2 shards + replicas) + 1
server running ZooKeeper. I can use CloudSolrServer for indexing or use
hing, will auto add/remove nodes from rotation based
> on the cluster state in Zookeeper, and is probably out of the box more
> intelligent about retrying on some responses (for example responses that
> are returned on shutdown or startup).
>
> - Mark
>
> On Nov 19, 2012
Hi,
Is there a way to query Solr for fields whose names contain
whitespace? Indexing such data does not cause any problems, but I have
been unable to retrieve it.
Regards,
Marcin Kuptel
Hi,
How can I make this kind of query work:
...&fl=Output Channels
where "Output Channels" is the name of a field? Escaping the whitespace
in the field name does not seem to work.
Regards,
Marcin Kuptel