re the source of the DataImportHandler error
originates?
Thanks!
Mark
url="jdbc:postgresql://dx1f/OHRFC" user="awips" />
deltaQuery="SELECT posttime FROM eventlogtext WHERE
lastmodtime > '${dataimporter.last_index_time}'">
things around to try to get this
working, and my lack of knowledge of where all the places are that it
looks for jars), but they would be the same versions.
Unfortunately, the piece of Solr that is not working for me
(DataImportHandler) is the very piece I need for my project. :-((
Mark
Perhaps there is something preventing clean shutdown. Shutdown makes a best
effort attempt to publish DOWN for all the local cores.
Otherwise, yes, it's a little bit annoying, but full state is a combination
of the state entry and whether the live node for that replica exists or not.
- Mar
olr and "Voilà!" All works as designed! It even indexed my
entire database on the first try of a full-import! Woohooo!
Thanks for your help. I would have abandoned this project without your
persistence.
Mark
't think that would matter, though.
Another example... In one of the documents returned by the "Friday"
query results, I noticed in the text the name of a co-worker "Drzal".
So, I searched on "Drzal" and my results came up with 0 documents. (!?)
Any ideas where I went wrong??
Mark
to find a work-around. Does
Solr have any baseline processors that will handle the URL-encoding?
Being new to Solr, I'm not sure I have the skill to write my own. Or,
is there another kind of encoding I can use that Solr doesn't adversely
react to??
Mark
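As a client-side work-around (a sketch, not a built-in Solr processor), the text can be percent-encoded before it is appended to the request URL and decoded on the way back; the sample string here is made up for illustration:

```java
import java.net.URLEncoder;
import java.net.URLDecoder;
import java.nio.charset.StandardCharsets;

public class EncodeDemo {
    public static void main(String[] args) {
        // Hypothetical user-entered text containing characters that would
        // break a raw URL query string.
        String raw = "severe weather & flooding";

        // Percent-encode it (spaces become '+') before putting it in a URL.
        String encoded = URLEncoder.encode(raw, StandardCharsets.UTF_8);
        System.out.println(encoded);

        // Decoding recovers the original text unchanged.
        System.out.println(URLDecoder.decode(encoded, StandardCharsets.UTF_8));
    }
}
```

Solr itself decodes standard percent-encoding in request parameters, so nothing special is needed on the server side for this case.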
On 9/11/2015 12:11 PM, Erick
hich words are misspelled by the non-zero
list of suggestions? Or is there a third option I haven't thought of
(like, spell-check as I type)??
I'm just trying to picture the behavior in my head so I know what
programming approach to take. Thanks for the help!
Mark
tire sentence, not a single word. Would that matter?
Mark
nge anything! So, I guess I get to move on from
this and see what other hurdles I run into!
Thanks for the help!
Mark
On 9/15/2015 11:13 AM, Yonik Seeley wrote:
On Tue, Sep 15, 2015 at 11:08 AM, Mark Fenbers wrote:
I'm working with the spellcheck component of Solr for the first time. I
d for searching. In my snippet, the
variable "text" is what the end-user typed. "eventlogtext.logtext" is
the table.column th
code or configuration where I am
specifying a float value where I shouldn't be. My solrconfig.xml and
schema.xml are posted in another thread having a subject "Moving on to
spelling" if that helps you help me.
Thanks,
Mark
HTTP ERROR 500
Problem accessing /solr/EventLog/spellCheck
tags)? Does it matter which way I specify them?
thanks,
Mark
Ah ha!! Exactly my point in the post I sent about the same time you did
(same Thread)!
Mark
On 9/16/2015 8:03 AM, Mikhail Khludnev wrote:
https://github.com/apache/lucene-solr/blob/trunk/solr/core/src/java/org/apache/solr/spelling/AbstractLuceneSpellChecker.java#L97
this mean that
0.5
me document and same solrconfig.xml files
because "internal" is not defined in AbstractLuceneSpellChecker.java!
Once I edited these two problems in my own solrconfig.xml, the
stacktrace errors went away!! Yay!
But I'm not out of the woods yet! I'll resume later, after our sys
I need to find an English
dictionary file on the web and add Solr's FileBasedSpellChecker?? Or
does Solr already have what I need and it's a matter of me learning how
to configure that properly?? (If so, how?)
Mark
Have you used jconsole or visualvm to see what it is actually hanging on to
there? Perhaps it is lock files that are not cleaned up or something else?
You might try: find ~/.ivy2 -name "*.lck" -type f -exec rm {} \;
- Mark
On Wed, Sep 16, 2015 at 9:50 AM Susheel Kumar wrote:
> Hi
I mention the same thing in
https://issues.apache.org/jira/browse/LUCENE-6743
They claim to have addressed this with Java delete on close stuff, but it
still happens even with 2.4.0.
Locally, I now use the nio strategy and never hit it.
- Mark
On Wed, Sep 16, 2015 at 12:17 PM Shawn Heisey
You should be able to easily see where the task is hanging in ivy code.
- Mark
On Wed, Sep 16, 2015 at 1:36 PM Susheel Kumar wrote:
> Not really. There are no lock files & even after cleaning up lock files (to
> be sure) problem still persists. It works outside company network but
empty results on my text
that is chock full of misspelled words. Any ideas? Attached is my
solrconfig snippet:
Mark
text_general
index
logtext
solr.IndexBasedSpellChecker
.
true
0.5
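The bare values above look like a solrconfig.xml spellcheck section with its XML tags stripped by the archive; a sketch of what it likely looked like, with the parameter names assumed from the stock Solr example config:

```xml
<searchComponent name="spellcheck" class="solr.SpellCheckComponent">
  <!-- "text_general" from the snippet: analyzer for the spellcheck query -->
  <str name="queryAnalyzerFieldType">text_general</str>
  <lst name="spellchecker">
    <str name="name">index</str>
    <str name="field">logtext</str>
    <str name="classname">solr.IndexBasedSpellChecker</str>
    <!-- "." and "true" from the snippet, presumably the index dir and buildOnCommit -->
    <str name="spellcheckIndexDir">.</str>
    <str name="buildOnCommit">true</str>
    <float name="accuracy">0.5</float>
  </lst>
</searchComponent>
```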
no matter what I try. Can
you offer specific advice?
Mark
in a post that increasing
writeLockTimeout would help. It did not help for me even increasing it
to 20,000 msec. If I don't build, then my resultset count is always 0,
i.e., empty results. What could be causing this?
Mark
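For reference, writeLockTimeout lives in the indexConfig section of solrconfig.xml; a minimal sketch of the setting being discussed, using the 20,000 ms value tried above:

```xml
<indexConfig>
  <!-- how long (ms) to wait for the index write lock before failing -->
  <writeLockTimeout>20000</writeLockTimeout>
</indexConfig>
```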
indent
debugQuery
dismax
edismax
hl
facet
spatial
spellcheck
s
e.toString() is what is formatting in JSON... Doink!!
Thanks for the nudge!
Mark
On 9/18/2015 6:15 PM, Upayavira wrote:
What URL are you posting to? Why do you want to use JSON or XML from
SolrJ, which is best using javabin anyway?
Get it right via a URL first, then try to port it over to S
now what
I could have done to break searching capabilities in the process.
Again, searching is not completely broken because it will return all the
documents with * as the token.
thanks,
Mark
tachment, showing zero results. The "logtext" field is what I search
on, and this field type is plain text, although I don't think I
specifically declare this anywhere.
Both attachments were run with debug on.
Thanks,
Mark
You should also check the "debugQuery" box on
ot; because it is a Timestamp object in Java and a "timestamp
without time zone" in PostgreSQL. But even with these changes, the
results are the same as before.
Do you have any more ideas why searching on any literal string finds
zero documents?
Thanks,
Mark
On 9/18/2015 10:30
A snippet of my solrconfig.xml is attached. The snippet only contains
the Spell checking sections (for brevity) which should be sufficient for
you to see all the pertinent info you seek.
Thanks!
Mark
On 9/19/2015 3:29 AM, Mikhail Khludnev wrote:
Mark,
What's your solrconfig.xml?
O
ed is the pertinent schema.xml snippet you asked for.
The logtext column in my table contains merely keyboarded text, with the
infrequent exception that I add a \uFFFC as a placeholder for images.
So, should I be using something besides text_en as the fieldType?
Thanks,
Mark
On 9/21/2015 12:
s for
indexes (main, index-based and File-based)?? Can I make them subdirs of
the main index (in /localapps/dev/EventLog/index)? Or would that mess
up the main index?
Thanks for raising my awareness of these errors!
Mark
On 9/21/2015 5:07 PM, Mikhail Khludnev wrote:
Both of these guys below t
with a few files. So it seems "spellcheck.build" worked, but
I am still not getting any hits when I purposefully misspell a word.
But I'll post this problem with more details in a separate post.
Mark
On 9/21/2015 5:07 PM, Mikhail Khludnev wrote:
Both of these guys below try
't need to use the
asterisks to get correct results. Does this mean I have a problem in my
indexing process when I used /dataimport. Or does it mean I have
something wrong in my query?
Also, notice in the results that category, logtext, and username fields
are returned as arrays, even though I
"2012-07-10 13:23:39.0":"\n1.0 = logtext:*deeper*, product of:\n
1.0 = boost\n 1.0 = queryNorm\n",
"2012-07-10 17:39:09.0":"\n1.0 = logtext:*deeper*, product of:\n
1.0 = boost\n 1.0 = queryNorm\n",
"2012-07-11 12:39:56.0":"\n1.0 =
get several results, but I
don't know what the output means.
thanks,
Mark
ystified.
The ELall field is a red herring. The debug output shows you're searching
on the logtext field, this line is the relevant one:
"parsedquery_toString":"logtext:deeper",
Should I just get rid of "ELall"? I only created it with the intent to
be able to search on "fenbers" and get hits if "fenbers" occurred in
either place, the logtext field or the username field.
thanks,
Mark
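A request along these lines (core name EventLog as used elsewhere in the thread; df=logtext is an assumption about the default field) shows the parsed query in the debug section, which is where the "parsedquery_toString":"logtext:deeper" line above comes from:

```
http://localhost:8983/solr/EventLog/select?q=deeper&df=logtext&debugQuery=true&wt=json
```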
On 9/23/2015 12:30 PM, Erick Erickson wrote:
Then my next guess is you're not pointing at the index you think you are
when you 'rm -rf data'
Just ignore the Elall field for now I should think, although get rid of it
if you don't think you need it.
DIH should be irrelevant here.
So let's back u
deltaQuery="SELECT posttime AS id FROM eventlogtext
WHERE lastmodtime > '${dataimporter.last_index_time}';">
Hope this helps!
Thanks,
Mark
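Pulling the pieces from the snippets in this thread together, a minimal data-config.xml for this kind of delta import might look like the following; the driver name, full query, and field mapping are assumptions, since only the dataSource line and the deltaQuery appear in the messages:

```xml
<dataConfig>
  <dataSource driver="org.postgresql.Driver"
              url="jdbc:postgresql://dx1f/OHRFC" user="awips" />
  <document>
    <entity name="eventlog" pk="id"
            query="SELECT posttime AS id, logtext FROM eventlogtext"
            deltaQuery="SELECT posttime AS id FROM eventlogtext
                        WHERE lastmodtime &gt; '${dataimporter.last_index_time}'">
      <field column="logtext" name="logtext" />
    </entity>
  </document>
</dataConfig>
```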
On 9/24/2015 10:57 AM, Erick Erickson wrote:
Geraint:
Good Catch! I totally missed that. So all of our
ot; examples and setup Solr to work in my own
environment. Can someone please point me to the document(s)/tutorial(s)
that I am missing?
Mark
ay. But I will eventually. The complaint is it
can't find ELspell, which I had defined in the old setup that I blew
away, so I'll have to redefine it at some point! For now, I'm just
gonna delight in having searching working again!
Mark
On 9/26/2015 11:05 PM, Erick Erick
On 9/27/2015 12:49 PM, Alexandre Rafalovitch wrote:
Mark,
Thank you for your valuable feedback. The newbie's views are always appreciated.
The Admin UI command is designed for creating a collection based on
the configuration you already have. Obviously, it makes that point
somewhat less
. Default config puts tags around the search, but I'm not
using an HTML renderer and I don't want characters of any sort inserted
into the text returned in the result set. rather, I just want the
start/end position. How do I configure that?
Mark
without HTML. Is there a way to configure Solr to do that? I couldn't
find it. If not, how do I go about posting this as a feature request?
Thanks,
Mark
ghted) in the original string rather than returning an altered
string with tags inserted.
Mark
On 9/29/2015 7:04 AM, Upayavira wrote:
You can change the strings that are inserted into the text, and could
place markers that you use to identify the start/end of highlighting
elements. Does that
ach word.
2. provide markers showing where the misspelled/suspect words are
within the text.
and so my code will have to provide the latter functionality. Or does
Solr provide this capability, such that it would be silly to write my own?
Thanks,
Mark
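Along the lines of the marker suggestion quoted above, one approach is to configure unlikely marker strings instead of HTML tags and then compute offsets client-side; hl.simple.pre/post are standard highlighting parameters, the marker values here are arbitrary:

```
q=logtext:deeper&hl=true&hl.fl=logtext&hl.simple.pre=[[HL]]&hl.simple.post=[[/HL]]
```

The client can then scan the returned snippet for `[[HL]]`/`[[/HL]]`, record the start/end positions, and strip the markers before display.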
e group. But I'll get there
eventually, starting with removing the wordbreak checker for the
time-being. Your response was encouraging, at least.
Mark
On 10/1/2015 9:45 AM, Alexandre Rafalovitch wrote:
Hi Mark,
Have you gone through a Solr tutorial yet? If/when you do, you will
see you
in the future. I would personally advocate that something like
> the autoManageReplicas <https://issues.apache.org/jira/browse/SOLR-5748>
> be
> introduced to make life much simpler on clients as this appears to be the
> thing I am trying to implement externally.
>
> If anyone has happened to build a system to orchestrate Solr for cloud
> infrastructure and have some pointers it would be greatly appreciated.
>
> Thanks,
>
> -Steve
>
>
> --
- Mark
about.me/markrmiller
10 minutes
to get the Lucene spell checker working, but I agree that Solr would be
the better way to go, if I can ever get it configured properly...
Mark
On 10/1/2015 12:50 PM, Alexandre Rafalovitch wrote:
Is that with Lucene or with Solr? Because Solr has several different
spell-checker
right configuration to get it to do what I want it to.
Mark
On 10/1/2015 4:16 PM, Walter Underwood wrote:
If you want a spell checker, don’t use a search engine. Use a spell checker.
Something like aspell (http://aspell.net/ <http://aspell.net/>) will be faster
and better than Solr.
wunder
Walte
any clues about why I would get this error? Is my
stripped down configuration missing something, perhaps?
Mark
text_en
solr.FileBasedSpellChecker
logtext
FileDict
/usr/share/dict/words
UTF-8
/localapps/de
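The values in the stripped-down snippet above map onto a FileBasedSpellChecker section roughly like this; the parameter names are assumed from the stock config, and the index path truncated to /localapps/de... in the archive is left as a comment:

```xml
<searchComponent name="spellcheck" class="solr.SpellCheckComponent">
  <str name="queryAnalyzerFieldType">text_en</str>
  <lst name="spellchecker">
    <str name="name">FileDict</str>
    <str name="classname">solr.FileBasedSpellChecker</str>
    <str name="field">logtext</str>
    <str name="sourceLocation">/usr/share/dict/words</str>
    <str name="characterEncoding">UTF-8</str>
    <!-- spellcheckIndexDir: truncated in the snippet (/localapps/de...) -->
  </lst>
</searchComponent>
```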
If it's always when using https as in your examples, perhaps it's SOLR-5776.
- mark
On Mon, Oct 5, 2015 at 10:36 AM Markus Jelsma
wrote:
> Hmmm, i tried that just now but i sometimes get tons of Connection reset
> errors. The tests then end with "There are still nodes
Not sure what that means :)
SOLR-5776 would not happen all the time, but too frequently. It also
wouldn't matter the power of CPU, cores or RAM :)
Do you see fails without https is what you want to check.
- mark
On Mon, Oct 5, 2015 at 2:16 PM Markus Jelsma
wrote:
> Hi - no, i don
with older issues most likely.
- Mark
On Mon, Oct 5, 2015 at 12:46 PM Rallavagu wrote:
> Any takers on this? Any kinda clue would help. Thanks.
>
> On 10/4/15 10:14 AM, Rallavagu wrote:
> > As there were no responses so far, I assume that this is not a very
> > common issue tha
Best tool for this job really depends on your needs, but one option:
I have a dev tool for Solr log analysis:
https://github.com/markrmiller/SolrLogReader
If you use the -o option, it will spill out just the queries to a file with
qtimes.
- Mark
On Wed, Sep 23, 2015 at 8:16 PM Tarala, Magesh
That amount of RAM can easily be eaten up depending on your sorting,
faceting, data.
Do you have gc logging enabled? That should describe what is happening with
the heap.
- Mark
On Tue, Oct 6, 2015 at 4:04 PM Rallavagu wrote:
> Mark - currently 5.3 is being evaluated for upgrade purposes
If it's a thread and you have plenty of RAM and the heap is fine, have you
checked raising OS thread limits?
- Mark
On Tue, Oct 6, 2015 at 4:54 PM Rallavagu wrote:
> GC logging shows normal. The "OutOfMemoryError" appears to be pertaining
> to a thread but not to JVM.
7;m using:
>
> solr version 5.3.1
>
> lucene 5.2.1
>
> zookeeper version 3.4.6
>
> indexing with:
>
>cd /opt/solr/example/films;
>
> /opt/solr/bin/post -c CollectionFilms -port 8081 films.json
>
>
>
> thx,
> .strick
>
--
- Mark
about.me/markrmiller
some "words" are a string of digits, but that shouldn't matter.
Does my snippet give any clues about why I would get this error? Is my
stripped down configuration missing something, perhaps?
Mark
text_en
solr.FileBasedSpellChecker
logtext
is very odd because I commented out all references to the
wordbreak checker in solrconfig.xml. What do I configure so that Solr
will give me sensible suggestions like:
fenders
embers
fenberry
and so on?
Mark
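For reference, a spellcheck request against the handler mentioned earlier in the thread would look something like this; the /spellCheck handler path comes from the HTTP 500 snippet above, and the parameters are standard spellcheck params:

```
/solr/EventLog/spellCheck?spellcheck=true&spellcheck.build=true&spellcheck.q=fenbers&spellcheck.count=5
```

spellcheck.build=true is normally only needed once (or on a schedule) to build the dictionary; subsequent queries can omit it.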
But I expected suggestions like fenders, embers, and fenberry, etc. I
also ran a query on Mark (which IS listed in linux.words) and got back
two suggestions in a similar format. I played with configurables like
changing the fieldType from text_en to string and the characterEncoding
from UTF-8 to A
On 10/13/2015 9:30 AM, Dyer, James wrote:
Mark,
The older spellcheck implementations create an n-gram sidecar index, which is
why you're seeing your name split into 2-grams like this. See the IR Book by
Manning et al, section 3.3.4 for more information. Based on the results you'r
I have removed all block
comments for your easier scrutiny. The source of the correctly spelled
words is a RedHat baseline file called /usr/share/dict/linux.words.
(Does this also mean it is the source of the suggestions?)
thanks for the help!
Mark
On 10/13/2015 7:07 AM, Alessandro
OK. I removed it, started Solr, and refreshed the query, but my results
are the same, indicating that queryAnalyzerFieldType has nothing to do
with my problem.
New ideas??
Mark
On 10/19/2015 4:37 AM, Duck Geraint (ext) GBJH wrote:
"Yet, it claimed it found my misspelled word to be &q
I use as a
starting point, because there are several solrconfig.xml file nested in
the subfolders when I unzip the tarball? I'm using 5.3.0 in case that
matters.
Thanks!
Mark
openSearcher is a valid param for a commit whatever the api you are using
to issue it.
- Mark
On Wed, Nov 11, 2015 at 12:32 PM Mikhail Khludnev <
mkhlud...@griddynamics.com> wrote:
> Does waitSearcher=false works like you need?
>
> On Wed, Nov 11, 2015 at 1:34 PM, Sathyaku
You can pass arbitrary params with Solrj. The API usage is just a little
more arcane.
- Mark
On Wed, Nov 11, 2015 at 11:33 PM Sathyakumar Seshachalam <
sathyakumar_seshacha...@trimble.com> wrote:
> I intend to use SolrJ. I only saw the below overloaded commit method in
> documen
.
--Mark H.
-Original Message-
From: Alfredo Vega Ramírez [mailto:alfredo.v...@vertice.cu]
Sent: Friday, November 13, 2015 11:28 AM
To: solr-user@lucene.apache.org
Subject: Re: HELP
Greetings, I'm new using Solr. I have problem to create a client application.
As I do, if I need t
If you see "WARNING: too many searchers on deck" or something like that in
the logs, that could cause this behavior and would indicate you are opening
searchers faster than Solr can keep up.
- Mark
On Tue, Nov 17, 2015 at 2:05 PM Erick Erickson
wrote:
> That's what was
want to attempt that kind of visibility, you should use the
softAutoCommit. The regular autoCommit should be at least 15 or 20 seconds.
- Mark
On Fri, Dec 11, 2015 at 1:22 PM Erick Erickson
wrote:
> First of all, your autocommit settings are _very_ aggressive. Committing
> every second is
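The advice above translates into solrconfig.xml roughly as follows; the exact times are placeholders, but the shape is the standard autoCommit/autoSoftCommit configuration:

```xml
<updateHandler class="solr.DirectUpdateHandler2">
  <!-- hard commit: durability; does not open a new searcher -->
  <autoCommit>
    <maxTime>15000</maxTime>
    <openSearcher>false</openSearcher>
  </autoCommit>
  <!-- soft commit: controls when new documents become visible to searches -->
  <autoSoftCommit>
    <maxTime>1000</maxTime>
  </autoSoftCommit>
</updateHandler>
```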
I am running a SolrCloud 4.6 cluster with three solr nodes and three
external zookeeper nodes. Each Solr node has 12GB RAM. 8GB RAM dedicated to
the JVM.
When solr is started it consumes barely 1GB but over the course of 36 to 48
hours physical memory will be consumed and swap will be used. The i/
found and since you
are trying to specify the name, I'm guessing something about the command is
not working. You might try just shoving it in a browser url bar as well.
- Mark
On Wed Feb 18 2015 at 8:56:26 PM Hrishikesh Gadre
wrote:
> Hi,
>
> Can we please document which HTTP metho
I’ll be working on this at some point:
https://issues.apache.org/jira/browse/SOLR-6237
- Mark
http://about.me/markrmiller
> On Feb 25, 2015, at 2:12 AM, longsan wrote:
>
> We used HDFS as our Solr index storage and we really have a heavy update
> load. We had met much problems
If you google replication can cause index corruption there are two jira issues
that are the most likely cause of corruption in a solrcloud env.
- Mark
> On Mar 5, 2015, at 2:20 PM, Garth Grimm
> wrote:
>
> For updates, the document will always get routed to the leader of the
oesn't register because the
old leader won't give up the throne. We don't try and force the new leader
because that may just hide bugs and cause data loss, so no leader is
elected.
I'd guess there are two JIRA issues to resolve here.
- Mark
On Sun, Mar 8, 2015 at 8:37 AM Markus J
Doesn't ConcurrentUpdateSolrServer take an HttpClient in one of its
constructors?
- Mark
On Sun, Mar 22, 2015 at 3:40 PM Ramkumar R. Aiyengar <
andyetitmo...@gmail.com> wrote:
> Not a direct answer, but Anshum just created this..
>
> https://issues.apache.org/jira/brow
e parameter, i.e.
bin/solr start -e schemaless
What fundamentals am I missing? I'm coming to Solr from Elasticsearch, and
I've already recognized some differences. Is my ES background clouding my
grasp of Solr fundamentals?
Thanks for any help.
Mark Bramer | Technical Team Lead, DC S
Shawn Heisey [mailto:apa...@elyograg.org]
Sent: Thursday, March 26, 2015 7:28 PM
To: solr-user@lucene.apache.org
Subject: Re: i'm a newb: questions about schema.xml
On 3/26/2015 4:57 PM, Mark Bramer wrote:
> I'm a Solr newb. I've been poking around for several days on my own te
the Files list in the Admin UI.
Thanks!
-Original Message-----
From: Mark Bramer
Sent: Thursday, March 26, 2015 7:42 PM
To: 'solr-user@lucene.apache.org'
Subject: RE: i'm a newb: questions about schema.xml
Hi Shawn,
Definitely helpful to know about the instance and files
Hmm...can you file a JIRA issue with this info?
- Mark
On Fri, Mar 27, 2015 at 6:09 PM Joseph Obernberger
wrote:
> I just started up a two shard cluster on two machines using HDFS. When I
> started to index documents, the log shows errors like this. They repeat
> when I execute searc
If copies of the index are not eventually cleaned up, I'd file a JIRA to
address the issue. Those directories should be removed over time. At times
there will have to be a couple around at the same time and others may take
a while to clean up.
- Mark
On Tue, Apr 28, 2015 at 3:27 AM Ramku
A bug fix version difference probably won't matter. It's best to use the
same version everyone else uses and the one our tests use, but it's very
likely 3.4.5 will work without a hitch.
- Mark
On Tue, May 5, 2015 at 9:09 AM shacky wrote:
> Hi.
>
> I read on
File a JIRA issue please. That OOM Exception is getting wrapped in a
RuntimeException it looks. Bug.
- Mark
On Wed, Jun 3, 2015 at 2:20 AM Clemens Wyss DEV
wrote:
> Context: Lucene 5.1, Java 8 on debian. 24G of RAM whereof 16G available
> for Solr.
>
> I am seeing the following O
We will have to find a way to deal with this long term. Browsing the code,
I can see a variety of places where problematic exception handling has been
introduced since this all was fixed.
- Mark
On Wed, Jun 3, 2015 at 8:19 AM Mark Miller wrote:
> File a JIRA issue please. That OOM Exception
I didn't really follow this issue - what was the motivation for the rewrite?
Is it entirely under: "new code should be quite a bit easier to work on for
programmer
types" or are there other reasons as well?
- Mark
On Mon, Jun 15, 2015 at 10:40 AM Erick Erickson
wrote:
> Ga
as all GWT :) ).
- Mark
On Mon, Jun 15, 2015 at 11:35 AM Upayavira wrote:
> The current UI was written before tools like AngularJS were widespread,
> and before decent separation of concerns was easy to achieve in
> Javascript.
>
> In a sense, your paraphrase of the justification was as
oint in Solr Cloud thanks to a
> specific deletion policy?
>
> Thanks,
>
> Aurélien
>
--
- Mark
about.me/markrmiller
I think there is some better classpath isolation options in the works for
Hadoop. As it is, there is some harmonization that has to be done depending
on versions used, and it can get tricky.
- Mark
On Wed, Jun 17, 2015 at 9:52 AM Erick Erickson
wrote:
> For sure there are a few rough ed
highlighting so I need the text there.
What would be different about 5.2 that would account for this?
Thanks!
Mark Ehle
Computer Support Librarian
Willard Library
Battle Creek,MI
ring or not storing a field is a simple schema.xml
> configuration.
> This suggestion can be obvious, but … have you checked you have your
> "stored" attribute set "true" for the field you are interested ?
>
> I am talking about the 5.2 schema.
>
> Cheers
>
do for sanity check is tail out the Solr log while
> indexing and querying, just to see "stuff" go by and see if any
> errors are thrown, although it sounds like you wouldn't see
> any search results at all if there was something wrong with
> indexing.
>
> And if n
r&wt=json&indent=true&hl=true&hl.fl=text&hl.simple.pre=%3Cem%3E&hl.simple.post=%3C%2Fem%3E
used to produce snippets of highlighted text in 4.6. In 5.2 it does not.
Thanks -
Mark Ehle
Computer Support Librarian
Willard Library
Battle Creek, MI
On Tue, Jun 30, 2015 at 10
to be highlighted. This will avoid having to run documents through
> the analysis chain at query-time and will make highlighting significantly
> faster and use less memory, particularly for large text fields, and even
> more so when hl.usePhraseHighlighter is enabled."
>
> So y
time":0.0},
"debug":{
"time":0.0}},
"process":{
"time":23.0,
"query":{
"time":14.0},
"facet":{
"time":0.0},
"facet_module":{
rterStemFilterFactory to a field of type 'text' in both the indexing and
querying analyzers. Any comments, suggestions or explanations would be much
appreciated.
--
Mark F. Vega
Programmer/Analyst
UC Irvine Libraries - Web Services
veg...@uci.edu<mailto:veg...@uci.edu>
949.824.9872
--
tored="false" fields to stored="true" just
to accommodate atomic updates.
Could some one pls give your suggestions.
Thanks!
Mark.
Thanks Erick!
Best,
Mark
On Mon, May 23, 2016 at 1:35 PM, Erick Erickson
wrote:
> Yes, currently when using Atomic updates _all_ fields
> have to be stored, except the _destinations_ of copyField
> directives.
>
> Yes, it will make your index bigger. The affects on speed are
>
?
Thanks!
Mark.
among them, in that case how do I realize this multi field sort also.
Could some one suggest a way pls.
Thanks!
Mark.
Thanks for the reply, Erick!
Can we write a custom sort component to achieve this?...
I am thinking of normalizing as the last option as clear separation of the
cores helps me.
Thanks!
Mark.
On Tue, May 31, 2016 at 11:12 AM, Erick Erickson
wrote:
> Join doesn't work like that, which is
result set and plugging in this
value for each result.
How will sort be applicable on this dynamically populated field as I am
already working on the results and is it too late to specify a sort and if
so how could it be possible.
Thanks!
Mark.
On Tue, May 31, 2016 at 11:10 AM, Erick Erickson
quot;product_tag" core (a different core).
Is there ANY way to achieve this scenario.
Thanks!
Mark.
On Tue, May 31, 2016 at 8:13 PM, Chris Hostetter
wrote:
>
> : When a query comes in, I want to populate value for this field in the
> : results based on some values passed i
ny initial sort was applied and can we re-sort at this very late
stage using some java sorting in the custom component.
Thanks!
Mark.
On Wed, Jun 1, 2016 at 6:44 AM, Mark Robinson
wrote:
> Thanks much Erick and Hoss!
>
> Let me try to detail.
> We have our "product" core wit
eed to sort the product results based on the tagValue of
that local store somehow!
Thanks!
Mark.
On Wed, Jun 1, 2016 at 1:18 PM, Chris Hostetter
wrote:
>
> : Let me try to detail.
> : We have our "product" core with a couple of million docs.
> : We have a couple of thousan