: > Using the stats component makes short work of things.
: >
: > stats=true&stats.field=foo
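For reference, the full stats request might be assembled like this (a sketch in Python; the collection name "mycoll" and field "foo" are placeholders):

```python
from urllib.parse import urlencode

# Stats-component request: rows=0 because only the aggregates are wanted.
# "foo" stands in for a numeric (or date) field in the schema.
params = {
    "q": "*:*",
    "rows": 0,
    "stats": "true",
    "stats.field": "foo",
}
url = "http://localhost:8983/solr/mycoll/select?" + urlencode(params)
```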
:
: The stats component has been rendered obsolete by the newer and shinier
: json.facet stuff.
json.facet still doesn't support multi-shard refinement, so saying the stats
(and/or) facet components are obsolete is premature.
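For comparison, a json.facet request covering the same ground as stats.field=foo might look like this (a sketch; the aggregation labels are made up):

```python
import json

# json.facet analogue of stats=true&stats.field=foo: named aggregation
# functions over the "foo" field, sent as the json.facet parameter.
facet = {
    "foo_min": "min(foo)",
    "foo_max": "max(foo)",
    "foo_sum": "sum(foo)",
    "foo_avg": "avg(foo)",
}
json_facet_param = json.dumps(facet)
```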
: how would you handle a query like "johnson AND johnson"? i don't want
: something that has "author: lyndon b. johnson" to hit, only things that
: actually have two occurrences.
I'm not even sure if/how that would be possible using the underlying
lucene Query objects available -- IIUC the Boole
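One technique commonly suggested for "at least N occurrences of a term" is a frange filter over the termfreq() function. A sketch (the field name "text" is an assumption, and termfreq() counts raw indexed terms, so the term must match the post-analysis token):

```python
from urllib.parse import urlencode

# Match only docs where the indexed term "johnson" occurs at least twice
# in the "text" field (both names are illustrative).
fq = "{!frange l=2}termfreq(text,'johnson')"
params = urlencode({"q": "*:*", "fq": fq})
```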
upconfig -z 172.28.128.9:2181 -n solr/configs/tolkien -d
/home/bodl-tei-svc/solr-6.4.0/server/solr/configsets/tolkien_config
Thank you for your help with this.
Best,
Chris
On 22/02/2017, 16:15, "Shawn Heisey" wrote:
On 2/22/2017 8:25 AM, Chris Rogers wrote:
> … as uploaded wit
'bin/solr cp -r...' command and specify the destination as
zk:/solr/tolkien or something. upconfig/downconfig is just a form of
cp designed for configsets.
Erick
On Wed, Feb 22, 2017 at 8:27 AM, Chris Rogers
wrote:
> Hi Shawn,
>
> Thank
eld1:value1 and ChildDocument 1.2 fulfills field2:value4
Is this even possible? And if so, how would I go about performing and
constructing the query?
I am running with Solr 6.1 in a single core mode at the moment, but I plan
to eventually move to a Solr Cloud
Chris Bell
: In my schema.xml, I have these copyFields:
you haven't shown us the field/fieldType definitions for any of those
fields, so it's possible "simplex" was included in a field that is
indexed=true but stored=false -- which is why you might be able to
search on it, but not see it in the field
s probably not the cause of it.
Are there any other conditions in which the slave core will do a full copy
of an index instead of only the necessary files?
Thanks,
Chris
l replication, shut down the slave,
> "rm -rf data". (data should be the parent of the "index" dir) and
> restart solr.
>
> Best,
> Erick
>
> On Mon, Mar 6, 2017 at 8:06 AM, Chris Ulicny wrote:
> > Hi all,
> >
> > We've recently had some
What are you doing with RTG that you care about sorting/paging?
https://people.apache.org/~hossman/#xyproblem
XY Problem
Your question appears to be an "XY Problem" ... that is: you are dealing
with "X", you are assuming "Y" will help you, and you are asking about "Y"
without giving more details
: My temporary solution is to add the command line option "-s /bin/bash" to
: the solr init script by hand.
:
: Is there already a better way to avoid this manual modification?
:
: If not - might it be a good idea to add an option to the installation
: script in order to specify a shell?
If i u
: From the first class, it seems similar to
: https://wiki.apache.org/solr/NegativeQueryProblems
See also: https://lucidworks.com/2011/12/28/why-not-and-or-and-not/
-Hoss
http://www.lucidworks.com/
1900
solr/TestCollection/select?q=*:*&fq=iqdocid:2957-TV-201604141900
Is there some configuration that I am missing?
Thanks,
Chris
rs, new and experienced
>
>
> On 15 March 2017 at 11:24, Chris Ulicny wrote:
> > Hi,
> >
> > I've been trying to use the get handler for a new solr cloud collection
> we
> > are using, and something seems to be amiss.
> >
> > We are running 6.3.0, so
> Is this a typo or are you trying to use get with an "id" field and
> your filter query uses "iqdocid"?
>
> Best,
> Erick
>
> On Wed, Mar 15, 2017 at 8:31 AM, Chris Ulicny wrote:
> > Yes, we're using a fixed schema with the iqdocid field set as t
originally wrote it.
>
> Regards,
>Alex.
>
> http://www.solr-start.com/ - Resources for Solr users, new and experienced
>
>
> On 15 March 2017 at 13:22, Chris Ulicny wrote:
> > Sorry, that is a typo. The get is using the iqdocid field. There is no
> "
David Hastings
wrote:
> from your previous email:
> "There is no "id"
> field defined in the schema."
>
> you need an id field to use the get handler
>
> On Wed, Mar 15, 2017 at 1:45 PM, Chris Ulicny wrote:
>
> > I thought that "id" and
might make the most
sense, please let me know.
Thanks again
On Wed, Mar 15, 2017 at 5:49 PM Erick Erickson
wrote:
Wait... Is iqdocid set to the uniqueKey in your schema? That might
be the missing thing.
On Wed, Mar 15, 2017 at 11:20 AM, Chris Ulicny wrote:
> Unless the behavior's changed on
ngs
wrote:
i still would like to see an experiment where you change the field to id
instead of iqdocid,
On Thu, Mar 16, 2017 at 9:33 AM, Yonik Seeley wrote:
> Something to do with routing perhaps? (the mapping of ids to shards,
> by default is based on hashes of the id)
> -Yonik
>
>
fferent route field it's highly likely
> that's the issue.
> I was always against that "feature", and this thread demonstrates part
> of the problem (complicating clients, including us human clients
> trying to make sense of what's going on).
>
> -Yonik
ngkey.
If there is any more of the log that would be useful, let me know and I
can add that to it.
Thanks,
Chris
On Thu, Mar 16, 2017 at 3:55 PM Alexandre Rafalovitch
wrote:
Well, only router.field is the valid parameter as per
https://cwiki.apache.org/confluence/display/solr/Colle
: facing. We are storing messages in solr as documents. We are running a
: pruning job every night to delete old message documents. We are deleting
: old documents by calling multiple delete by id query to solr. Document
: count can be in millions which we are deleting using SolrJ client. We are
:
: Thanks for replying. We are using Solr 6.1 version. Even I saw that it is
: bounded by 1K count, but after looking at the heap dump I was amazed how it
: can keep more than 1K entries. But yes, I see around 7M entries according to
: heap dump and around 17G of memory occupied by BytesRef there.
what
: OK, The whole DBQ thing baffles the heck out of me so this may be
: totally off base. But would committing help here? Or at least be worth
: a test?
this isn't DBQ -- the OP specifically said deleteById, and that the
oldDeletes map (only used for DBI) was the problem according to the heap
dumps.
: I know that the product in general is licensed as Apache 2.0, but
unfortunately there are packages
: included in the build that are considered "non-permissive" by my company and
as such, means that
...
: It appears that the vast majority of the licensing issues are within the
contri
The thing to keep in mind is that w/o a fully deterministic sort,
the underlying problem statement "doc may appear on multiple pages" can
exist even in a single node solr index, even if no documents are
added/deleted between page requests: because background merges /
searcher re-opening may h
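The usual remedy is to make the sort fully deterministic by appending the uniqueKey as a final tiebreaker -- a sketch (uniqueKey "id" assumed):

```python
# Ties in "score" can reorder between requests as segments merge; the
# trailing uniqueKey clause ("id", assumed) makes the ordering total.
params = {
    "q": "foo",
    "sort": "score desc, id asc",
    "start": 0,
    "rows": 10,
}
```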
Hi!
I am looking for some advice on a sharding strategy that will produce
optimal performance in the NRT search case for my setup. I have come up
with a strategy that I think will work based on my experience, testing, and
reading of similar questions on the mailing list, but I was hoping to run
m
Hi,
In Solr (6.5.0) cloud mode, is a string based (docValues=“true”) uniqueKey
still required to be stored?
I set it to false and got a “uniqueKey is not stored - distributed search and
MoreLikeThis will not work” warning.
Thanks,
--
Chris
Different FieldTypes encode different values into terms in different ways. At
query time the FieldTypes need to be consulted to know how to build the
resulting query object.
Solr's query parsers are "schema aware" and delegate to the appropriate
FieldType to handle any index term encoding needed -- but the
: The correct way for a plugin to do the sort of thing you are trying to do
: would be to use an instance of SolrQueryParser -- see for example the code
: in LuceneQParser and how it uses SolrQueryParser ... you'll most likely
: just want to use LuceneQParser directly in your plugin to simplify
of the record
were preserved.
I know to use atomic updates all fields should be stored since the document
is read and reindexed internally, but was curious if there was any
consistency or expected results for the state of othertext_field after an
atomic update.
Thanks,
Chris
PM Erick Erickson
wrote:
> How is "otherText" getting values in the first place? If
> it's the destination of a copyField directive, it'll be repopulated
> if the source of the copyField is stored=true.
>
> Best,
> Erick
>
> On Tue, Apr 25, 2017 at
non-Text field?
However, that shouldn't apply to this since I'm concerned with a non-stored
TextField without docValues enabled.
Best,
Chris
On Wed, Apr 26, 2017 at 3:36 PM Shawn Heisey wrote:
> On 4/25/2017 1:40 PM, Chris Ulicny wrote:
> > Hello all,
> >
> > Suppose
serve the non-stored text field.
Thanks,
Chris
On Wed, Apr 26, 2017 at 4:07 PM Dorian Hoxha wrote:
> You'll lose the data in that field. Try doing a commit and it should
> happen.
>
> On Wed, Apr 26, 2017 at 9:50 PM, Chris Ulicny wrote:
>
> > Thanks Shawn, I didn't re
> On Wed, Apr 26, 2017 at 2:13 PM, Dorian Hoxha
> > wrote:
> > > There are In Place Updates, but according to docs they still shouldn't
> > work
> > > in your case:
> > > https://cwiki.apache.org/confluence/display/solr/
> > Updating+Parts+of+Documents
>
an Hoxha
wrote:
> @Chris,
> According to doc-link-above, only INC,SET are in-place-updates. And only
> when they're not indexed/stored, while your 'integer-field' is. So still
> shenanigans in there somewhere (docs,your-code,your-test,solr-code).
>
> On Thu, Apr 27, 20
for some strange reason
>
> There better not be or it's a bug. Things will stick around until
> you issue a commit, is there any chance that's the problem?
>
> If you can document the exact steps, maybe we can reproduce
> the issue and raise a JIRA.
>
> Best,
> Er
, but I wasn't attempting to retrieve it so I never realized it
was being stored since I ended up looking at the wrong schema.
I switched the config sets and everything works as expected. Any atomic
updates clear out the indexed values for the non-stored field.
Thanks for bearing with me.
Chris
: tldr: Recently, I tried moving an existing solrcloud configuration from
: a local datacenter to EC2. Performance was roughly 1/10th what I’d
: expected, until I applied a bunch of linux tweaks.
How many total nodes in your cluster? How many of them running ZooKeeper?
Did you observe the hea
: I specify a timeout on all queries,
Ah -- ok, yeah -- you mean using "timeAllowed" correct?
If the root issue you were seeing is in fact clocksource related,
then using timeAllowed would probably be a significant compounding
factor there since it would involve a lot of time checks in a s
is if there are any other overheads involved
WRT collections vs shards, routing, zookeeper, etc.
Thanks,
Chris
tual physical solr cores. So for
example, what is the difference between having 1000 collections with 1
shard each, vs 1 collection with 1000 shards? Both cases will end up with
the same amount of physical solr cores right? Or am I completely off base?
Thanks again,
Chris
On Sat, May 6, 2017 at 10
Thanks for the great advice Erick. I will experiment with your suggestions
and see how it goes!
Chris
On Sun, May 7, 2017 at 12:34 AM, Erick Erickson
wrote:
> Well, you've been doing your homework ;).
>
> bq: I am a little confused on this statement you made:
>
> >
: I'm facing an issue when I'm querying Solr
: my query is "xiomi Mi 5 -white [64GB/ 3GB]"
...
: +(((Synonym(nameSearch:xiaomi nameSearch:xiomi)) (nameSearch:mi)
: (nameSearch:5) -(Synonym(nameSearch:putih
: nameSearch:white))*(nameSearch:[64gb/ TO 3gb])*)~4)
...
: Now due to aut
eas? I can provide the actual queries from the logs if required.
Thanks,
Chris
Shalin,
Thanks for the response and explanation! I logged a JIRA per your request
here: https://issues.apache.org/jira/browse/SOLR-10695
Chris
On Mon, May 15, 2017 at 3:40 AM, Shalin Shekhar Mangar <
shalinman...@gmail.com> wrote:
> On Sun, May 14, 2017 at 7:40 PM, Chris Troullis
If you pass the array of documents without the opening and closing square
brackets it should work with the page defaults (at least in v6.3.0)
{ "id":"1",...},{"id":"2",...},...
instead of
[{ "id":"1",...},{"id":"2"
Reformatting for readability:
Hi,
I'm looking for some advice on specific issue that is holding us back.
I'm trying to create a custom RequestHandler with the Solr api (solrj) that
makes a query call back to the server.
I'm not finding any good, run-able examples of this on-line. Possibly I'm
ap
: I've been using cursorMark for quite a while, but I noticed that sometimes
: the value is huge (more than 8K). It results in Request-URI Too Long
FWIW: cursorMark values are simple "string safe" encoded forms of sort
fields -- so my guess is you are sorting on some really long string
values?
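For anyone following along, the cursorMark contract in sketch form (uniqueKey "id" assumed as the sort tiebreaker; paging ends when Solr echoes back the cursorMark it was sent):

```python
# Pure bookkeeping for cursorMark paging -- no live Solr involved.
def next_cursor_params(params, response):
    """Return params for the next page, or None when paging is done."""
    next_mark = response["nextCursorMark"]
    if next_mark == params["cursorMark"]:
        return None  # Solr returned the mark we sent: no more results
    return {**params, "cursorMark": next_mark}

# First request: cursorMark=* and a sort ending in the uniqueKey.
first_page = {"q": "*:*", "sort": "score desc, id asc", "cursorMark": "*"}
```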
: Because the value of the function will be treated as a relevance value
: and relevance value of 0 (and less?) will cause the record to be
: filtered out.
I don't believe that's true? ... IIRC 'fq' doesn't care what the scores
are as long as the query is a "match" and a 'func' query will match
: I could have sworn I was paraphrasing _your_ presentation Hoss. I
: guess I did not learn my lesson well enough.
:
: Thank you for the correction.
Trust but verify! ... we're both wrong.
Boolean functions (like lt(), gt(), etc...) behave just like sum() -- they
"exist" for a document if and
: Thank-you, that all sounds great. My assumption about documents being
: missed was something like this:
...
: In that situation D would always be missed, whether the cursorMark 'C or
: greater' or 'greater than B' (I'm not sure which it is in practice), simply
: because the cursorMark is
: Reason: In an index with millions of documents I don't want to know that a
: certain query matched 1 million docs (of course it will take time to
: calculate that). Why not just stop looking for more results, let's say,
: after it finds 100 docs? Possible??
but if you care about sorting, ie: you
I am running solrcloud version 4.3.0 with 10 m1.xlarge nodes and using
zk to manage the state/data for collections and configs. I want to upgrade
to version 4.4.0.
When i deploy a 4.4 version of solrcloud in my test environment, none of
the collections/configs (created using the 4.3 version of
and vetting requirements, but I thought I'd ask. No use
> going through this twice if you can avoid it.
>
> On Tue, Mar 11, 2014 at 12:49 PM, Chris W wrote:
> > I am running solrcloud version 4.3.0 with a 10 m1.xlarge nodes and using
> > zk to manage the state/data for coll
We decided to go with the latest (it seems to have a lot more bug/
performance fixes). The issue I mentioned was a red herring. I was able to
successfully upgrade
On Tue, Mar 11, 2014 at 2:09 PM, Chris W wrote:
> Moving 4 versions ahead may need much additional tests from my side to
>
Hi
I have a 3 node zk ensemble. I see a very high latency for zk responses
and also a lot of outstanding requests (in the order of 30-40)
I also see that the requests are not going to all zookeeper nodes equally.
One node has more requests/connections than the others. I see that CPU/Mem
and di
> about your infrastructure and Solr logs? (PS: 50 mb data *may *cause a
> problem for your architecture)
>
> Thanks;
> Furkan KAMACI
>
>
> 2014-03-13 0:57 GMT+02:00 Chris W :
>
> > Hi
> >
> > I have a 3 node zk ensemble . I see a very high latency for zk
>
Any help on this is much appreciated. Is it better to use more cores for
zookeeper (as opposed to a 1-core machine)?
On Wed, Mar 12, 2014 at 4:28 PM, Chris W wrote:
> Hi Furkan
>
> Load on the network is very low when read workload is on the cluster.
> During indexing, a few of my &
Can you share a sample query ? Ensure you have filterquery, fl fields and
query result cache settings well tuned
To give you an example: A month ago I had an issue where a few of our
queries were taking 3+seconds with 5 shards and as I added more shards the
query was taking even longer. I figured
I wonder whether this is a known bug. In previous SOLR cloud versions, 4.4
or maybe 4.5, an explicit optimize() without any parameters usually
took 2 minutes for a 32 core cluster.
However, in 4.6.1, the same call took about 1 hour. Checking the index
modification time for each core shows 2 m
I am running a 3 node zookeeper 3.4.5 Quorum. I am running into issues
with Zookeeper transaction logs
[myid:2] - ERROR [main:QuorumPeer@453] - Unable to load database on disk
java.io.IOException: Unreasonable length = 1048587
at
org.apache.jute.BinaryInputArchive.readBuffer(BinaryInputArchive.j
der election and during
> collection API commands. It doesn't correlate directly with indexing
> but is correlated with how frequently you call commit.
>
> On Wed, Mar 19, 2014 at 5:46 AM, Shawn Heisey wrote:
> > On 3/18/2014 5:46 PM, Chris W wrote:
> >>
> >>
its at all. I was mistaken.
>
> On Wed, Mar 19, 2014 at 10:06 AM, Chris W wrote:
> > Thanks, Shawn and Shalin
> >
> > How does the frequency of commit affect zookeeper?
> >
> >
> > Thanks
> >
> >
> > On Tue, Mar 18, 2014 at 9:12 PM, Shali
Hi there
Is there a limit on the # of collections solrcloud can support? Can
zk/solrcloud handle 1000s of collections?
Also i see that the bootup time of solrcloud increases with increase in #
of cores. I do not have any expensive warm-up queries. How do I speed up
solr startup?
--
Best
--
C
e available before leader
> election happens
>
> You can't do much about 1 right now I think. For #2, you can keep your
> transaction logs smaller by a hard commit before shutdown. For #3
> there is a leaderVoteWait settings but I'd rather not touch that
> unless it become
plicas? I'm not asking for a long list
> > here, just if you have a bazillion replicas in aggregate.
> >
> > Hours is surprising.
> >
> > Best,
> > Erick
> >
> > On Thu, Mar 20, 2014 at 2:17 PM, Chris W
> wrote:
> > > Thanks, Shalin
Thanks Tim. I would definitely try that next time. I have seen a few
instances where the overseer_queue was not getting processed, but that looks
like an existing bug which got fixed in 4.6 (the overseer doesn't process
requests when reload collection fails)
One question: Assuming our cluster can tolerate d
Sorry for the piecemeal approach but had another question. I have a 3 zk
ensemble. Does making 2 of the zk nodes observers help speed up bootup of solr
(due to decrease in time it takes to decide leaders for shards)?
On Fri, Mar 21, 2014 at 11:49 AM, Chris W wrote:
> Thanks Tim. I would definit
(7)
On Fri, Mar 21, 2014 at 1:05 PM, Chris W wrote:
> Sorry for the piecemeal approach but had another question. I have a 3 zk
> ensemble. Does making 2 zk as observer roles help speed up bootup of solr
> (due to decrease in time it takes to decide leaders for shards)?
>
>
>
: What is the default value for the required attribute of a field element
: in a schema? I've just looked everywhere I can think of in the wiki, the
: reference manual, and the JavaDoc. Most of the documentation doesn't
: even mention that attribute.
Good catch, fixed...
https://cwiki.apache.o
: I have a date field in my Solr schema defined as described below
: When I'm trying to query fields stats, the max value for that date field is
: not constant, it changes between two distinct date values as I retry query.
: Any ideas as to why this is happening?
smells like you might have replic
Hi
You can use the "details" command to check the status of replication.
http://localhost:8983/solr/core_name/replication?command=details
The command returns an xml output and look out for the "isReplicating"
field in the output. Keep running the command in a loop until the flag
becomes false. T
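A polling helper along those lines might look like the sketch below; the exact XML shape of the details response is an assumption here, so check it against your actual output:

```python
import xml.etree.ElementTree as ET

def is_replicating(xml_text):
    """True if any <str name="isReplicating"> in the response says "true"."""
    root = ET.fromstring(xml_text)
    for node in root.iter("str"):
        if node.get("name") == "isReplicating":
            return node.text == "true"
    return False

# Assumed response shape, for illustration only:
sample = """<response><lst name="details">
  <str name="isReplicating">true</str>
</lst></response>"""
```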
: I have a search that sorts on a boolean field. This search is pulling
: the following error: "java.lang.String cannot be cast to
: org.apache.lucene.util.BytesRef".
This is almost certainly another manifestation of SOLR-5920...
https://issues.apache.org/jira/browse/SOLR-5920
-Hoss
http://
AM, Fermin Silva wrote:
> Hi,
>
> that's what I'm trying. I'm however really cautious when it comes to a
> while (somethingIsTrue) { doSomething; sleep; }
>
> Is that safe? What if the slave hangs up, the network is slow/fails, etc?
>
> Thanks
>
>
>
: I'm not sure if I am missing something of if this is a bug but I am facing an
: issue with the following scenario.
The specific scenario you are describing is an interesting edge case --
but i believe it's working as designed.
basically the range generation logic that computes the set of "(lo
What is the role of an overseer in solrcloud? The documentation does not
offer full details about it. What if an overseer node goes down?
--
Best
--
C
: What should I be doing to fix them? Is there a replacement for those
: classes? Do I just need to change the luceneMatchVersion to be LUCENE_461
: or something?
that's pretty much exactly what that warning message is trying to tell you
-- your config says to use LUCENE_33 mode, but that won'
: Thanks for your response. Here is an example of what I'm trying to do. If I
: had the following documents:
what you are attempting is fairly trivial -- you want to query for all
parent documents, then apply 3 filters:
* parent of a child matching item1
* parent of a child matching item2
*
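Spelled out with the block-join parent parser, that might look like the following sketch ("doc_type:parent" is a hypothetical marker field identifying parent docs, and "item_field" a hypothetical child field):

```python
# Main query: all parent documents.
q = "doc_type:parent"

# One {!parent} filter per required child match; each selects parents
# having at least one child that matches the wrapped query.
fqs = [
    "{!parent which='doc_type:parent'}item_field:item1",
    "{!parent which='doc_type:parent'}item_field:item2",
]
```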
: "field" : "" // this is the field that I want to learn which document has
: it.
How you (can) query for a field value like that is going to depend
entirely on the FieldType/Analyzer ... if it's a string field, or uses
KeywordTokenizer then q=field:"" should find it -- if you use a more
tradi
: > I am trying to build lucene 4.7.1 from the sources. I can compile without
: > any issues but when I try to build the dist, lucene gives me
: > Cannot run program "svnversion" ... The system cannot find the specified
: > file.
: >
: > I am compiling on Windows 7 64-bit using java version 1.7.0.
Hi there
I am using solrcloud (4.3). I am trying to get the status of a core from
solr using (localhost:8000/solr/admin/cores?action=STATUS&core=) and
i get the following output
(numeric index stats elided; the "current" flag reports *false*)
What does current mean? A few of the cores are optimized (with segment
count 1) and show
Mark: first off, the details matter.
Nothing in your first email made it clear that the {!join} query you were
referring to was not the entirety of your query param -- which is part of
the confusion and was a significant piece of Shawn's answer. Had you
posted the *exact* request you were sendi
Any help on this is much appreciated. I cannot find any documentation
around this and would be good to understand what this means
Thanks
On Thu, Apr 10, 2014 at 1:50 PM, Chris W wrote:
> Hi there
>
> I am using solrcloud (4.3). I am trying to get the status of a core from
>
Thanks, Shawn.
On Fri, Apr 11, 2014 at 11:11 AM, Shawn Heisey wrote:
> On 4/10/2014 2:50 PM, Chris W wrote:
>
>> Hi there
>>
>>I am using solrcloud (4.3). I am trying to get the status of a core
>> from
>> solr using (localhost:8000/solr/admin/cores?acti
: we tried another commands to delete the document ID:
:
: 1> For Deletion:
:
: curl http://localhost:8983/solr/update -H 'Content-type:application/json' -d
: '
: [
Your use of square brackets here is triggering the syntax sugar that
lets you add documents as objects w/o needing the "add" k
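The explicit top-level-object form avoids that ambiguity -- a sketch (the document ids are placeholders):

```python
import json

# Unambiguous delete-by-id commands for /update with
# Content-type: application/json.
payload = json.dumps({"delete": {"id": "mydoc1"}})

# Several ids can be deleted in one command as a list:
batch = json.dumps({"delete": ["id1", "id2", "id3"]})
```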
: So, is Overseer really only an "implementation detail" or something that Solr
: Ops guys need to be very aware of?
Most people don't ever need to worry about the overseer - it's magic and
it will take care of itself.
The recent work on adding support for an "overseer role" in 4.7 was
speci
: Anyone know why we can't have an analysis chain on a numeric field?
: Looks to me like it would be very useful to be able to
: manipulate/transform a value without an external resources.
Analysis only affects *indexed* terms -- it has no impact on the stored
values (or things like docValue
: here is the query:
:
http://localhost:7001/solr/collection1/select?q=*%3A*&rows=5&fl=*%2Cscore&wt=json&indent=true&debugQuery=true
:
:
: and here the response:
that's bizarre.
Do me a favor, and:
* post the results of
.../select?q=*%3A*&rows=1&fl=score&wt=json&indent=true&echoParams=true
Shamik:
I'm not sure what the cause of this is, but it definitely seems like a bug
to me. I've opened SOLR-6039 and noted a workaround for folks who don't
care about the new "track" debug info and just want the same debug info
that was available before 4.7...
https://issues.apache.org/jira/
https://people.apache.org/~hossman/#threadhijack
Thread Hijacking on Mailing Lists
When starting a new discussion on a mailing list, please do not reply to
an existing message, instead start a fresh email. Even if you change the
subject line of your email, other mail headers still track which
The Lucene PMC is pleased to announce that there is a new version of the
Solr Reference Guide available for Solr 4.8.
The 396 page PDF serves as the definitive user's manual for Solr 4.8. It
can be downloaded from the Apache mirror network:
https://www.apache.org/dyn/closer.cgi/lucene/solr/
com.sun.org.apache.xalan.internal.xsltc.trax.TransformerFactoryImpl.newTemplates(TransformerFactoryImpl.java:964)
: at
:
org.apache.solr.util.xslt.TransformerProvider.getTemplates(TransformerProvider.java:110)
:
: Has anyone run into a problem like this? Thanks!
:
: -- Chris
:
-Hoss
http://www.lucidworks.com/
: Hi everybody
: can anyone give me a suitable interpretation for cat_rank in
: http://people.apache.org/~hossman/ac2012eu/ slide 15
Have you seen the video?
http://vimeopro.com/user11514798/apache-lucene-eurocon-2012/video/55822630
That slide starts ~ 23:00 and i go through a descriptio
The full details are farther down in the stack...
: null:org.apache.solr.common.SolrException: SolrCore 'master' is not
: available due to init failure: Error initializing QueryElevationComponent.
...
: Caused by: org.apache.solr.common.SolrException: Error initializing
: QueryElevationCo
: My understanding is that DynamicField can do something like
: FOO_BAR_TEXT_* but what I really need is *_TEXT_* as I might have
: FOO_BAR_TEXT_1 but I also might have WIDGET_BAR_TEXT_2. Both of those
: field names need to map to a field type of 'fullText'.
I'm pretty sure you can get what you
: 'query' is a function returning a number.
: You can't use it as a query.
Well ... you can, you just have to use the correct query parser.
since there is nothing to make it clear to solr that you want it to parse
the "q" parameter as a function, it's using the default parser, and
probably ser
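Concretely, selecting the function parser explicitly might look like this sketch (the "popularity" field is a placeholder):

```python
from urllib.parse import urlencode

# {!func} tells Solr to parse q as a function query instead of handing
# it to the default lucene parser.
params = urlencode({"q": "{!func}field(popularity)"})
```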
As noted by others: you should definitely look into sharding your index --
fundamentally there is no way to have that many documents in a single
Lucene index.
However: this is a terrible error for you to get, something in the stack
should have really given you an error when you tried to add th
: me incorporate these config files as before. I'm (naively?) trying the
: following:
:
: final StandardQueryParser parser = new StandardQueryParser();
: final Query luceneQuery = parser.parse(query, "text");
: luceneIndex.getIndexSearcher().search(luceneQuery, collector);
: Using Solr 4.6.1 and in my schema I have a date field storing the time a
: document was added to Solr.
what *exactly* does your schema look like? are you using "solr.DateField"
or "solr.TrieDateField" ? what field options do you have specified?
: I have a utility program which:
: - queries f
: The presence of the {!tag} entry changes the filter query generated by
: the {!field...} tag. Note below that in one case the filter query is a
: phrase query, and in the other it's parsed with one term against the
: specified field and the other against the default field.
I think you are missu