Thanks Erick for the help. Appreciate it.
Regards,
Salman
On Wed, Mar 30, 2016 at 7:29 AM, Erick Erickson wrote:
> Absolutely. You haven't said which version of Solr you're using,
> but there are several possibilities:
> 1> create the collection with replicationFactor=1, then use the
> ADDREPLICA command to add replicas afterward.
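For reference, that two-step approach looks roughly like this with the Collections API (collection, config, and node names are placeholders):

  curl 'http://localhost:8983/solr/admin/collections?action=CREATE&name=mycoll&numShards=2&replicationFactor=1&collection.configName=myconf'
  curl 'http://localhost:8983/solr/admin/collections?action=ADDREPLICA&collection=mycoll&shard=shard1&node=192.168.0.2:8983_solr'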
Hi,
Thank you, Erick.
The main collection that stores our trade data is set to soft commit
when we import data using DIH. As you guessed, the soft commit interval
is 1000 ms and we have autowarm counts set to 0. However,
there are some collections that store our meta info in which we commit after
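For reference, the 1000 ms soft commit described above corresponds to something like this in solrconfig.xml:

  <autoSoftCommit>
    <maxTime>1000</maxTime>
  </autoSoftCommit>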
Hi Mugeesh, the autocompletion world is not as simple as you might expect.
Which kind of auto-suggestion are you interested in?
First of all, simple string autosuggestion or document autosuggestion
(with additional fields to show besides the label)?
Are you interested in the analysis for the tex
OK, an update. I managed to remove the example/cloud directories, and stop
Solr. I changed my startup script to be much simpler (./solr start) and now
I get this:
[root@ bin]# ./startsolr.sh
Waiting up to 30 seconds to see Solr running on port 8983 [|]
Started Solr server on port 8983 (pid=31
Hello - with TieredMergePolicy and default reclaimDeletesWeight of 2.0, and
frequent updates, it is not uncommon to see a ratio of 25%. If you want deletes
to be reclaimed more often, e.g. weight of 4.0, you will see very frequent
merging of large segments, killing performance if you are on spinning disks.
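A sketch of raising the weight, assuming the pre-6.x <mergePolicy> syntax in solrconfig.xml:

  <mergePolicy class="org.apache.lucene.index.TieredMergePolicy">
    <double name="reclaimDeletesWeight">4.0</double>
  </mergePolicy>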
Hi folks,
It looks like the "-h" parameter isn't being processed correctly. I want
Solr to listen on 127.0.0.1, but instead it binds to all interfaces. Am
I doing something wrong? Or am I misinterpreting what the -h parameter
is for?
Linux:
# bin/solr start -h 127.0.0.1 -p 8180
# netstat -tlnp |
On 30 March 2016 at 02:49, Erick Erickson wrote:
> Please specify "growing and growing". Until it gets to 15% or more of the
> total, I wouldn't start to worry. And then only if it kept growing after that.
I tested 'expungeDeletes' on four different cores, three of them were
nearly identical in t
On 30 March 2016 at 12:25, Markus Jelsma wrote:
> Hello - with TieredMergePolicy and default reclaimDeletesWeight of 2.0, and
> frequent updates, it is not uncommon to see a ratio of 25%. If you want
> deletes to be reclaimed more often, e.g. weight of 4.0, you will see very
> frequent merging
On 03/30/2016 08:23 AM, Jostein Elvaker Haande wrote:
> On 30 March 2016 at 12:25, Markus Jelsma wrote:
>> Hello - with TieredMergePolicy and default reclaimDeletesWeight of 2.0, and
>> frequent updates, it is not uncommon to see a ratio of 25%. If you want deletes
>> to be reclaimed more often, e.g.
On Tue, Mar 29, 2016 at 11:30:06PM -0700, Aditya Desai wrote:
> I am running SOLR 4.10 on port 8984 by changing the default port in
> etc/jetty.xml. I am now trying to index all my JSON files to Solr running
> on 8984. The following is the command
>
> curl 'http://localhost:8984/solr/update?commit
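For Solr 4.x, a sketch of what usually works (the core name "collection1" and the file name are placeholders; the explicit Content-Type matters):

  curl 'http://localhost:8984/solr/collection1/update?commit=true' \
    -H 'Content-Type: application/json' --data-binary @docs.json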
On 3/30/2016 5:45 AM, Bram Van Dam wrote:
> It looks like the "-h" parameter isn't being processed correctly. I want
> Solr to listen on 127.0.0.1, but instead it binds to all interfaces. Am
> I doing something wrong? Or am I misinterpreting what the -h parameter
> is for?
The host parameter does not control which interface Jetty binds to; it sets
the hostname Solr advertises (for example, what gets registered in ZooKeeper
when running SolrCloud).
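To actually restrict the listen address you have to go through Jetty; a sketch, assuming the stock jetty.xml (which reads the jetty.host system property):

  bin/solr start -p 8180 -Djetty.host=127.0.0.1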
through a clever bit of reflection, you can set the
reclaimDeletesWeight variable from solrconfig by including something
like <double name="reclaimDeletesWeight">5</double> in the merge
policy section (going from memory here, you'll get an error on
startup if I've messed it up.)
That may help..
Best,
Erick
On Wed, Mar 30, 2016 at 6:15 AM, David Santamauro wrote:
Whoa! I thought you were going for SolrCloud. If you're not interested in
SolrCloud, you don't need to know anything about Zookeeper.
So it looks like Solr is running. You say:
bq: When I try to connect to :8983/solr, I get a timeout.
Does it sound like firewall issues?
Are you talking abou
Both of these are anti-patterns. The soft commit interval of 1 second
is usually far too aggressive. And committing after every add is
also something to avoid.
Your original problem statement is high CPU usage. To see if your
committing is the culprit, I'd stop committing after adding entirely and
see whether the CPU usage drops.
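If CPU drops, let solrconfig.xml drive commits instead; a sketch with illustrative intervals:

  <autoCommit>
    <maxTime>60000</maxTime>
    <openSearcher>false</openSearcher>
  </autoCommit>
  <autoSoftCommit>
    <maxTime>60000</maxTime>
  </autoSoftCommit>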
Jack, thanks for the reply. With other https sites I'm not having
trouble. What logic are you suggesting I change? I did not quite understand.
2016-03-29 21:01 GMT-03:00 Jack Krupansky :
> Medium switches from http to https, so you would need the logic for dealing
> with https security handshakes.
403 means "forbidden".
Something about the request Solr is sending -- or something about the IP
address Solr is connecting from when talking to medium.com -- is causing
the medium.com web server to reject the request.
This is something that servers may choose to do if they detect (via
headers
1) as a general rule, if you have a declaration which includes
"WEB-INF" you are probably doing something wrong.
Maybe not in this case -- maybe "search-webapp/target" is a completely
distinct java application and you are just re-using its jars. But 9
times out of 10, when people have
Hi Paul,
Thanks a lot for your help! I have one small question. I have a schema that
includes {Keyword,id,currency,geographic_name}, and I have declared a
copyField from id to Keyword.
Whenever I am running your script I am getting an error (HTTP 400):
"Document is missing mandatory uniqueKey field: id"
Can you please share yo
I think I am observing an unexpected behavior of ChildDocTransformerFactory.
The query is like this:
/select?q={!parent which="type_s:doc.enriched.text"}type_s:doc.enriched.text.entities +text_t:pjm +type_t:Company +relevance_tf:[0.7%20TO%20*]&fl=*,[child parentFilter=type_s:doc.enriched.text]
I'm not the best person to comment on this so perhaps someone could chime
in as well, but can you try using a wildcard for your childFilter?
Something like: childFilter=type_s:doc.enriched.text.*
You could also possibly enrich the document with depth information and use
that for filtering out.
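In full, the wildcard suggestion would look roughly like this (untested, adapting the query from your earlier message):

  fl=*,[child parentFilter=type_s:doc.enriched.text childFilter=type_s:doc.enriched.text.*]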
On
You could use the curl command to read a URL on Medium.com. That would let
you examine and control the headers to experiment.
Google is able to index Medium.
Check the URL and make sure it's not on one of the paths disallowed by
medium.com/robots.txt (the one you gave seems fine):
User-Agent: *
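For example (the article URL is a placeholder; -I fetches headers only and -A sets the User-Agent):

  curl -I -A 'Mozilla/5.0 (compatible; test)' 'https://medium.com/@user/some-article'
  curl -s 'https://medium.com/robots.txt'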
Max,
Have you looked at ExternalFileField, which is reloaded on every hard commit?
The only disadvantage of this is that the file (personal-words.txt) has to be
placed in the data folder of each Solr core, for which we have a bash script
to do this job.
https://cwiki.apache.org/confluence/display/solr/Work
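A minimal schema.xml sketch, assuming "id" is your uniqueKey (Solr reads the values from a file named external_<fieldname>.txt in each core's data directory; names here are illustrative):

  <fieldType name="extfile" class="solr.ExternalFileField" keyField="id" defVal="0" valType="float"/>
  <field name="personal_words" type="extfile" indexed="false" stored="false"/>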
The document you're sending to Solr doesn't have an "id" field. The
copyField directive has nothing to do with it. And your copyField would
be copying _from_ the id field _to_ the Keyword field; is that what you
intended?
Even if the source and dest fields were reversed, it still wouldn't
work since
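For reference, the declaration as Erick reads it would look like this in schema.xml (field names taken from the earlier message):

  <copyField source="id" dest="Keyword"/>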
Hi Erick
Thanks for your email. Here is the attached sample JSON file. When I
indexed the same JSON file with SOLR 5.5 using bin/post it indexed
successfully. Also all of my documents were indexed successfully with 5.5
and not with 4.10.
Regards
On Wed, Mar 30, 2016 at 3:13 PM, Erick Erickson wrote:
When I index with Solr 5.4.1 using a luceneMatchVersion of 5.3.1 in
solrconfig.xml, the segments_9 file has Lucene54 in it. Why? Is this a
known bug?
#strings segments_9
segments
Lucene54
commitTimeMSec
1459374733276
--
Bill Bell
billnb...@gmail.com
cell 720-256-8076
Hmmm, not sure and unfortunately won't be able to look very closely.
Do the Solr logs say anything more informative?
Also, the admin UI>>select core>>documents lets you submit docs
interactively to Solr, that's also worth a try I should think.
Best,
Erick
On Wed, Mar 30, 2016 at 3:15 PM, Aditya
Great, but is there any way to change the Solr header to set the user-agent?
2016-03-30 17:13 GMT-03:00 Jack Krupansky :
> You could use the curl command to read a URL on Medium.com. That would let
> you examine and control the headers to experiment.
>
> Google is able to index Medium.
>
> Check the URL
Thanks Shawn and Elaine,
Elaine,
Yes, all the documents with the same route key reside on the same shard.
Shawn,
I will try to capture the logs.
Thanks.
Regards,
Anil
On 25 March 2016 at 02:57, Elaine Cario wrote:
> Anil,
>
> I've seen situations where if there was a problem with a specific query,
Hi All,
I have a similar question regarding autosuggestion. My use case is as below:
1. User enters a product name (say Nokia)
2. I want suggestions along with the category to which the product
belongs (e.g. Nokia belongs to the "electronics" and "mobile" categories), so I
want a suggestion like Nokia in electro
OK, solved. It seems I had to first create a core, then configure Drupal to
point to the path for that core.
I have to say, this is one of the more helpful lists I have used. Thanks a
lot for your help!
"Getting information off the Internet is like taking a drink from a fire
hydrant." - Mitchel