First of all, thanks a lot for the clarification. Is there any way to see
how this cache works internally, which objects are being stored, and how
much memory it consumes, so that we can get a clear picture? And how do we
test performance with the cache?
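One way to see what the caches hold and how they perform is Solr's MBeans stats endpoint; a minimal sketch, assuming a hypothetical core name `collection1` and the default port 8983:

```shell
# Build the stats URL for the cache MBeans (core name and port are assumptions).
core="collection1"
url="http://localhost:8983/solr/${core}/admin/mbeans?stats=true&cat=CACHE&wt=json"
echo "$url"
# Against a live Solr, fetch it with:
#   curl -s "$url"
# The response lists documentCache, queryResultCache and filterCache with
# lookups, hits, hitratio, evictions and size, which is enough to judge
# whether the cache is earning its memory.
```

Watching the hitratio before and after a query load is a rough but practical way to test cache performance.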
On Tue, Sep 22, 2009 at
Is there any way to analyze or see which documents are getting cached
by the documentCache?
On Wed, Sep 23, 2009 at 8:10 AM, satya wrote:
> First of all , thanks a lot for the clarification.Is there any way to see,
> how this cache is working internally and what are the objects
Can anyone help with this question that I posted on Stack Overflow?
http://stackoverflow.com/questions/39399321/solr-shingle-query-matching-keyword-tokenized-field
Thanks in advance.
Hi,
I need help with defining a field ‘singerName’ with the right
tokenizers and filters such that it gives the behavior described below:
I have a few documents as given below:
Doc 1
singerName: Justin Beiber
Doc 2:
singerName: Justin Timberlake
…
Below is the list of
:justin
exactName_noAlias_en_US:justin beiber) +exactName_noAlias_en_US:beiber",
"explain":{},
Satya.
On 9/16/16, 2:46 AM, "Emir Arnautovic" wrote:
Hi,
I missed that you already defined the field and that you are having trouble
with the query (did not read the Stack Overflow post carefully).
Great, that worked. Thanks Ray and Emir for the solutions.
On 9/16/16, 3:49 PM, "Ray Niu" wrote:
Just add q.op=OR to change the default operator to OR, and it should work.
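The suggestion above is a plain request parameter; a sketch with a hypothetical collection name and the `singerName` field from earlier in the thread:

```shell
# q.op=OR makes a multi-term query match documents containing ANY of the
# terms, instead of requiring all of them.
params='q=singerName:(justin timberlake)&q.op=OR'
echo "$params"
# Against a live Solr (hypothetical collection name):
#   curl -G 'http://localhost:8983/solr/collection1/select' \
#        --data-urlencode 'q=singerName:(justin timberlake)' \
#        --data-urlencode 'q.op=OR'
```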
2016-09-16 12:44 GMT-07:00 Gandham, Satya :
> Hi Emir,
>
>Thanks
Thanks,
Satya
Susheel, please see attached. The heap towards the end of the graph has
spiked.
On Wed, Jun 14, 2017 at 11:46 AM Susheel Kumar
wrote:
> You may have gc logs saved when OOM happened. Can you draw it in GC Viewer
> or so and share.
>
> Thnx
>
> On Wed, Jun 14, 2017 at 11:26
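If GC logging was not already on when the OOM happened, flags like the following produce logs that GCViewer can read; a sketch for Java 8-era JVMs (the log path is an assumption):

```shell
# Java 8 style GC logging flags; the log path is hypothetical.
GC_LOG_OPTS="-verbose:gc -Xloggc:/var/solr/logs/solr_gc.log \
-XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCApplicationStoppedTime \
-XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=9 -XX:GCLogFileSize=20M"
echo "$GC_LOG_OPTS"
# Append these to the JVM start-up options; the rotated solr_gc.log files
# can then be opened directly in GCViewer.
```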
Hi,
I am running solr-6.3.0. There are 4 nodes; when I start Solr, only 3 nodes
join the cloud, and the fourth one comes up separately, not joining the
other 3 nodes.
Please see the picture of the admin screen below showing how the fourth
node is not joining. Any suggestions?
Thanks,
satya
[image
> it somewhere else and provide a link.
>
> Best,
> Erick
>
>
> On Fri, Jun 16, 2017 at 3:29 PM, Satya Marivada >
> wrote:
>
> > Hi,
> >
> > I am running solr-6.3.0. There are 4 nodes, when I start solr, only 3
> nodes
> > are joining the cl
Never mind. I had a different config for ZooKeeper on the second VM, which
brought up a different cloud.
On Fri, Jun 16, 2017, 8:48 PM Satya Marivada
wrote:
> Here is the image:
>
> https://www.dropbox.com/s/hd97j4d3h3q0oyh/solr%20nodes.png?dl=0
>
> There is a node on 002: 15101 port m
Hi All,
We are using solr-6.3.0 with external ZooKeeper. The setup is as below. Poi
is the collection, which is big, about 20G, with each shard at 10G. Each
JVM has 3G, and the VMs have 70G of RAM. There are 6 processors.
The CPU utilization when running queries is reaching more than 100%. Any
suggestions?
ection, it shows 17,948,826 documents. It is so weird
that the query for all documents returns 28,552 more documents.
Any suggestions/thoughts.
Thanks,
Satya
Source is database: 17,920,274 records in db
Indexed documents from admin screen: 17,920,274
Query the collection: 17,948,826
Thanks,
Satya
On Thu, Aug 24, 2017 at 3:44 PM Susheel Kumar wrote:
> Does this happen again if you repeat the above? How many total docs does the DIH
> query/source sh
/questions/45158394/replacing-old-indexed-data-with-new-data-in-apache-solr-with-zero-downtime
Any other suggestions please.
Thanks,
satya
Hi All,
I have configured Solr with SSL and enabled HTTP authentication. It is all
working fine for the Solr admin page, indexing and querying. One
bothersome thing is that it is filling up the logs every second saying no
authority; I have configured host name, port and authentication parameters
ut in production. Otherwise
> it looks like you may have a port scanner running. In any case don't use
> the zk that comes with solr
>
> > On Feb 26, 2017, at 6:52 PM, Satya Marivada
> wrote:
> >
> > Hi All,
> >
> > I have configured solr with SSL and e
Hi All,
I have configured Solr with SSL and enabled HTTP authentication. It is all
working fine for the Solr admin page, indexing and querying. One
bothersome thing is that it is filling up the logs every second saying no
authority; I have configured host name, port and authentication parameters
happening. But I really need to use the original port that I had. Any
suggestions for getting around this?
Thanks,
Satya
java.lang.IllegalArgumentException: No Authority for
HttpChannelOverHttp@a01eef8{r=0,c=false,a=IDLE,uri=null}
java.lang.IllegalArgumentException: No Authority
at
There is nothing else running on the port that I am trying to use: 15101.
15102 works fine.
On Fri, Mar 3, 2017 at 2:25 PM Satya Marivada
wrote:
> Dave and All,
>
> The below exception is not happening anymore when I change the startup
> port to something else apart from that I had
stored in another location as well, apart from the solr-6.3.0
distribution package, the zookeeper-3.4.9 package and the solrdata folder?
Would Solr write into any other directories?
Thanks,
Satya
be helpful. Not sure where Solr is seeing that port, when everything is
started clean.
Thanks,
Satya
Any ideas? "null:org.apache.solr.common.SolrException: A previous
ephemeral live node still exists. Solr cannot continue. Please ensure that
no other Solr process using the same port is running already."
Not sure if JMX enablement has caused this.
Thanks,
Satya
On Tue, May 2, 2017
org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
at org.apache.zookeeper.ZooKeeper.create(ZooKeeper.java:783)
Thanks,
Satya
stored in the version-2 folder. I
then had to upload the configuration again and placed my index fresh in
Solr.
It then came up fine. I was playing with JMX parameters to be passed to the
JVM before this started to happen. Not sure if that has something to do
with it.
Thanks,
Satya
On Wed, May 3
zookeeper; planning to use the mntr command to check on its status.
Thanks,
Satya
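The mntr probe mentioned above is one of ZooKeeper's four-letter-word commands, sent over the client port; a sketch, assuming a hypothetical host and the default clientPort:

```shell
# Build the probe command (host and port are assumptions; 2181 is
# ZooKeeper's default clientPort).
zk_host="localhost"
zk_port=2181
probe="echo mntr | nc $zk_host $zk_port"
echo "$probe"
# Run the printed command against a live ZooKeeper: mntr reports metrics
# such as zk_server_state and zk_num_alive_connections, and
# "echo ruok | nc ..." should answer "imok" for a healthy server.
```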
Hi,
Can someone please say what I am missing in this case? I have Solr
6.3.0 and enabled HTTP authentication; the configuration has been
uploaded to ZooKeeper. But I do see the below error in the logs sometimes.
Are the nodes not able to communicate because of this error? I am not
seeing any functional
Hi Piyush and Shawn,
May I ask what the solution for it is, if it is the long GC pauses? I
suspect the same problem in our case too. We have started with 3G
of memory for the heap.
Did you have to adjust the memory allotted? Very much appreciated.
Thanks,
Satya
On Sat, May 6
lly, running
> > with 6G of memory may lead to _fewer_ noticeable pauses since the
> > background threads can do the work, well, in the background.
> >
> > Best,
> > Erick
> >
> > On Mon, May 8, 2017 at 7:29 AM, Satya Marivada
> > wrote:
> >> Hi
(ServerConnector.java:382)
at
org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:593)
at
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:654)
Thanks,
Satya
This is on solr-6.3.0 and external zookeeper 3.4.9
On Wed, May 3, 2017 at 11:39 PM Zheng Lin Edwin Yeo
wrote:
> Are you using SolrCloud with external ZooKeeper, or Solr's internal
> ZooKeeper?
>
> Also, which version of Solr are you using?
>
> Regards,
> Edwin
>
nts
#maxClientCnxns=60
Thanks,
Satya
On Mon, May 8, 2017 at 12:04 PM Satya Marivada
wrote:
> The 3g memory is doing well, performing a gc at 600-700 MB.
>
> -XX:+UseConcMarkSweepGC -XX:+UseParNewGC
>
> Here are my jvm start up
>
> The start up parameters are:
>
> java -ser
ect
> clientPort=2181
> # the maximum number of client connections.
> # increase this if you need to handle more clients
> #maxClientCnxns=60
>
> Thanks,
> Satya
>
> On Mon, May 8, 2017 at 12:04 PM Satya Marivada
> wrote:
>
>> The 3g memory is doing well, performing a
below vs 1.8.0_121 in PP7 (one other environment)).
Any ideas why open files and threads are higher in one environment vs the
other?
Open files are found by looking for fd entries under /proc/, and threads by
ps -elfT | wc -l.
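The two measurements described above can be scripted; a sketch that counts this shell's own descriptors (substitute the Solr JVM's pid in practice):

```shell
# Count open file descriptors via /proc (Linux), falling back to lsof,
# and count threads system-wide the same way as in the message above.
pid=$$                                   # use the Solr JVM's pid in practice
if [ -d "/proc/$pid/fd" ]; then
    fds=$(ls "/proc/$pid/fd" | wc -l)
else
    fds=$(lsof -p "$pid" 2>/dev/null | wc -l)
fi
threads=$(ps -elfT 2>/dev/null | wc -l)
echo "fds=$fds threads=$threads"
```

Comparing these numbers across environments only makes sense alongside the per-user limits (`ulimit -n`, `ulimit -u`), which is what Erick suggests below.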
[image: pasted1]
Thanks,
Satya
ing to set both of them at the
same jvm level and see how it goes.
Thanks,
Satya
On Fri, May 12, 2017 at 3:41 PM Erick Erickson
wrote:
> Check the system settings with ulimit. Differing numbers of user processes
> or open files can cause things like this to be different on different
hi all,
I am working with solr on tomcat. The indexing is good for xml files,
but when I send doc, html or pdf files through curl I get a "lazy loading"
error. Can you tell me the way to fix it? The output is as follows when I
send a pdf file. I am working in ubuntu; solr home is /opt/e
hi,
I installed tika and made its jar files into solr home library and also
gave the path to the tika configuration file. But the error is same. the
tika config file is as follows:::
http://purl.org/dc/elements/1.1/
application/xml
Hi all,
I am new to solr. I followed the wiki and got the solr admin
running successfully. It is going well for xml files, but I am unable to
index the rich documents. I followed the wiki for the richer documents as
well, but I didn't get it working. The error comes when I send a pdf/html
hi,
yes, I followed the wiki. Can you now tell me the procedure for it?
regards,
swaroop
ya, I checked the ExtractingRequestHandler but couldn't get the
info... I installed tika-0.7 and copied the jar files into the solr
home library. I started sending the pdf/html files and then I get a lazy
error. I am using tomcat and solr 1.4.
the curl command I use is:
curl '
http://localhost:8080/solr/update/extract?literal.id=doc1000&commit=true&fmap.content=text'
-F "myfile=@java.pdf"
regards,
satya
hi,
I sent the commit after adding the documents, but the problem is the same.
regards,
satya
Hi all,
I have a problem with solr. When I send documents (.doc) I am
not getting a response.
example:
sa...@geodesic-desktop:~/Desktop$ curl "
http://localhost:8080/solr/update/extract?stream.file=/home/satya/Desktop/InvestmentDecleration.doc&stream.co
hi,
I am sorry, the mail you sent was in the sent mail... I didn't look at it;
I am going to check now, and I will definitely tell you the entire thing.
regards,
satya
hi,
I checked the admin page and it is indexing for others. In the log
files I don't get anything when I send the documents. I checked the log
in catalina (tomcat). I changed the dismax handler from q=*:* to q= . I
at least get a response when I send pdf/html files but don't even get one for
hi all,
now solr is working well. I am working in ubuntu and I was indexing
documents which don't have permissions, so that was the problem. I thank
all of you for your replies to my queries.
thanking you,
satya
hi all,
I am new to solr and was able to implement indexing of documents
by following the solr wiki. Now I am trying to add spellchecking. I
followed the spellcheck component page in the wiki but am not getting the
suggested spellings. I first built it with spellcheck.build=true,...
here I give you
This is in solrconfig.xml:
<searchComponent name="spellcheck" class="solr.SpellCheckComponent">
  <lst name="spellchecker">
    <str name="name">default</str>
    <str name="classname">solr.IndexBasedSpellChecker</str>
    <str name="field">spell</str>
    <str name="spellcheckIndexDir">./spellchecker</str>
    <str name="accuracy">0.7</str>
    <str name="buildOnCommit">true</str>
    <str name="buildOnOptimize">true</str>
  </lst>
  <lst name="spellchecker">
    <str name="name">jarowinkler</str>
    <str name="field">lowerfilt</str>
    <str name="distanceMeasure">org.apache.lucene.search.spell.JaroWinklerDistance</str>
    <str name="spellcheckIndexDir">./spellchecker</str>
  </lst>
</searchComponent>
The problem: if I build the jarowinkler dictionary which uses the
"spell" field, it does not create the dictionary, and I only see
segments.gen and segments_1 in its directory.
regards,
satya
Hi all,
The indexing part of solr is going well, but I got an error indexing
a single pdf file. When I searched for the error in the mailing list, I
found that the error was due to the copyright of that file. Can't we index
a file which has copyright or other digital rights?
regards,
satya
hi all,
the error I got is "Unexpected RuntimeException from
org.apache.tika.parser.pdf.pdfpar...@8210fc" when I indexed a file similar
to the one in
https://issues.apache.org/jira/browse/PDFBOX-709 (samplerequestform.pdf).
Can't we index those types of files in solr?
regards,
satya
uestParsers.java:116)
at
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:225)
...
If anybody knows about this,
please help me.
regards,
satya
hi,
1) I use tika 0.8.
2) The URL is https://issues.apache.org/jira/browse/PDFBOX-709 and the
file is samplerequestform.pdf.
3) The entire error is:
curl "
http://localhost:8080/solr/update/extract?stream.file=/home/satya/my_workings/satya_ebooks/8-
,
satya
... I would be thankful if anybody can help me regarding this.
Regards,
satya
RLConnection.java:1072)
> at
> sun.net.www.protocol.http.HttpURLConnection.getHeaderField(HttpURLConnection.java:2173)
> at java.net.URLConnection.getContentType(URLConnection.java:485) at
> org.apache.solr.common.util.ContentStreamBase$URLStream.(ContentStreamBase.java:81)
> at
> org.apache.solr.servlet.SolrRequestParsers.buildRequestFrom(SolrRequestParsers.java:138)
> at
> org.apache.solr.servlet.SolrRequestParsers.parse(SolrRequestParsers.java:117)
> at
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:226)
> ...
>
Regards,
satya
content in the results
Regards,
satya
Hi all,
I am interested to see the working of solr.
1) Can anyone tell me where to start to learn how it works?
Regards,
satya
???
Regards,
satya
when I
use it in that module... can anyone tell me other ways like this to trace
the path through solr
Regards,
satya
let.SolrRequestParsers.parse(SolrRequestParsers.java:117)
at
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:226)
... 12 more
Can anybody provide information regarding this?
Regards,
Satya
act?stream.url=http://remotehost:port/file_download.yaws%3Ffile=solr
%20%26%20apache.pdf&literal.id=schb5"
Regards,
satya
Did anybody try indexing files on a remote system through stream.url, where
the file names contain escape characters like & or space?
regards,
satya
escaped characters. But solr is working well for files that have no
escaped characters in their name.
I sent the request through curl with the filename URL-encoded,
but the problem is the same...
Regards,
satya
Hi Hoss,
Thanks for the reply; it is working now. The reason was, as you
said, that I was not double-escaping. I used %2520 for whitespace and it
works now.
Thanks,
satya
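The double-escaping that fixed it can be sketched for a hypothetical filename (only space and '&' are handled here; a real encoder would cover all reserved characters):

```shell
# A filename is percent-encoded once to form the remote URL, then that URL
# is encoded AGAIN because it is itself the value of the stream.url parameter.
name="solr & apache.pdf"                                    # hypothetical filename
once=$(printf '%s' "$name"  | sed 's/&/%26/g; s/ /%20/g')   # encode for the remote URL
twice=$(printf '%s' "$once" | sed 's/%/%25/g')              # encode again for stream.url
echo "$once"
echo "$twice"
```

This is why a literal space ends up as %2520: the first pass makes it %20, and the second pass escapes that leading % as %25.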
Hi All,
What is the difference between using shards, SolrCloud and ZooKeeper?
Which is the best way to scale solr?
I need to reduce the index size on every system and reduce the search time
for a query...
Regards,
satya
The example is given for jetty. Is it the same way to set it up in tomcat?
Regards,
satya
Hi all,
I want to build the package of my solr and found it can be done
using ant. When I type "ant package" in the solr module I get an error:
sa...@swaroop:~/temporary/trunk/solr$ ant package
Buildfile: build.xml
maven.ant.tasks-check:
BUILD FAILED
/home/satya/temporary/trunk/solr
Hi,
yes, I don't have the jar file in ant/lib. Where can I get the jar
file, or what is the procedure to build that maven-artifact-ant-2.0.4-dep.jar?
regards,
satya
package
---
---
---
generate-maven-artifacts:
[mkdir] Created dir: /home/satya/temporary/trunk/solr/build/maven
[mkdir] Created dir: /home/satya/temporary/trunk/solr/dist/maven
[copy] Copying 1 file to
/home/satya/temporary/trunk/solr/build/maven/src/maven
[artifact:install-provider
Hi all,
I updated my solr trunk to revision 1004527. When I compile
the trunk with ant I get many warnings, but the build is successful. The
warnings are here:
common.compile-core:
[mkdir] Created dir:
/home/satya/temporary/trunk/lucene/build/classes/java
[javac
users using solr, and every day each user indexes
10 files of 1KB each, which totals 10MB for a day, and it
goes on...?
2) How much RAM is used by solr in general?
Thanks,
satya
data.
Regards,
satya
Hi all,
I increased my RAM size to 8GB and I want 4GB of it to be used
for solr itself. Can anyone tell me the way to allocate the RAM for
solr?
Regards,
satya
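Heap is allotted to the JVM that runs Solr's servlet container, not to Solr directly; a sketch for a Tomcat deployment (the 4 GB figure is from the question above; the variable is Tomcat's standard hook):

```shell
# Give the Tomcat JVM that hosts Solr a fixed 4 GB heap. Setting -Xms equal
# to -Xmx avoids heap-resize pauses. The remaining RAM is deliberately left
# to the OS page cache, which Lucene relies on for fast index access.
export CATALINA_OPTS="-Xms4096m -Xmx4096m"
echo "$CATALINA_OPTS"
```

Set this in Tomcat's environment (e.g. before starting it) rather than anywhere in Solr's own config.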
not show the entire content of a file. It should show
only the part of the content where the query word is present, like a
Google result or the search results on Lucid Imagination.
Regards,
satya
hit
of the word java in that file...
Regards,
satya
Hi All,
Thanks for your reply. I have a doubt whether to increase the RAM or
heap size for java or for tomcat, where solr is running.
Regards,
satya
Hi All,
Can we get results like Google, with some snippet about the
search? I was able to get the first 300 characters of a
file, but that is not helpful for me. Can I get the data containing the
first occurrence of the search key in that file?
Regards,
Satya
s follows::
"Java is one of the best languages, java is easy to learn..."
where this content is at the start of the chapter, where the first
occurrence of the word java is in the file...
Regards,
Satya
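Fragments around the matched term, rather than the first N characters, come from Solr's highlighting component; a sketch assuming a hypothetical field named `content`:

```shell
# hl=true turns on highlighting; hl.fl picks the field to highlight;
# hl.snippets and hl.fragsize control how many fragments come back and
# roughly how long each one is (field name is an assumption).
params="q=java&hl=true&hl.fl=content&hl.snippets=3&hl.fragsize=100"
echo "http://localhost:8080/solr/select?$params"
# Against a live Solr, the response gains a <highlighting> section with the
# matched term wrapped in <em> tags inside each fragment.
```

Note the field must be stored for highlighting to work.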
Regards,
satya
Hi All,
Thanks for your suggestions. I got the result I expected.
Cheers,
Satya
..??
Another question: are there any benchmarks of solr?
Regards,
satya
:8080/solr/select?q=erlang/ericson
the result is
My query here is: does solr consider the two queries differently, and what
does it do with !, / and all the other escape characters?
satya
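Lucene's query parser reserves the characters + - && || ! ( ) { } [ ] ^ " ~ * ? : \ (and, in later versions, / as well); a literal character can be backslash-escaped or the whole term quoted. A sketch of the two options:

```shell
# Two equivalent ways to keep the parser from treating '/' specially
# (hypothetical default-field query):
escaped='q=erlang\/ericson'     # backslash-escape the special character
quoted='q="erlang/ericson"'     # or quote the whole term as a phrase
echo "$escaped"
echo "$quoted"
```

Without escaping, each reserved character is interpreted as query syntax, which is why the two forms of the query can return different results.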
Hi All,
I am able to get the response in json format in the success case by
setting wt=json in the query. But in case of any errors I am getting
html format.
1) Is there any specific reason it comes in html format?
2) Can't we get the error result in json format?
Regards,
satya
Hi Erick,
Every result comes in xml format, but when you get errors
like http 500 or http 400 you get html. My query is:
can't we turn that html into json, or vice versa?
Regards,
satya
suggestions for the words
daka→data, usar→user. But actually I need only the spell suggestions.
Here, time is being consumed displaying files and then giving
spell suggestions. Can't we post a query to solr where we get
the response as only spell suggestions?
Regards,
satya
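Suppressing the document list so that only suggestions come back can be done with rows=0; a sketch assuming a spellcheck-enabled request handler mounted at /spell:

```shell
# rows=0 returns no stored documents, so the response is essentially just
# the spellcheck section; no time is spent rendering file contents.
params="q=daka&spellcheck=true&spellcheck.count=10&rows=0"
echo "$params"
# Against a live Solr (handler path is an assumption):
#   curl "http://localhost:8080/solr/spell?$params"
```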
try
to explain still further...
Regards,
satya
eck=true&spellcheck.count=10
In the o/p the suggestions will not come, as
java is a word that is spelt correctly...
But can't we get near suggestions such as javax, javac etc.?
Regards,
satya
che.org/solr/Suggester . But when I tried to implement it I got errors:
*error loading class org.apache.solr.spelling.suggest.Suggester*
Regards,
satya
collate=true&spellcheck.onlyMorePopular=true&spellcheck.count=20
the o/p I get is:
data, have, can, any, all, has, each, part, make, than, also
but these words are not similar to the given word "java"; the near words
would be javac, javax, data, java.io... etc. The stated words are present
in the index.
Hi Grijesh,
As you said, you are implementing this type. Can you tell
briefly how you did it?
Regards,
satya
stions as we get
when we type a wrong word in spellchecking... If so, please let me know...
Regards,
satya
umFound">20
0
4
-
java
away
jav
jar
ara
apa
ana
ajax
Now I need to know how to order the terms: in the 1st query the
result obtained is in order, and I want only javax, javac, javascript but
not javas, javabas. How can it be done?
Regards,
satya
heck=true&spellcheck.count=5
the result will be
1)-
-
can we get the result as
2)
-
javax
javac
javabean
javascript
NOTE:: all the keywords in the 2nd result are in the index...
Regards,
satya
know its way of
calculating the score and ordering of results
Regards,
satya
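Score calculation and result ordering can be inspected with Solr's debug parameter; a sketch:

```shell
# debugQuery=on adds an "explain" section that breaks each document's score
# into its tf, idf, fieldNorm and boost factors; fl=*,score returns the
# final score alongside the stored fields.
params="q=java&fl=*,score&debugQuery=on"
echo "$params"
# e.g. curl "http://localhost:8080/solr/select?$params"
```

Reading the explain output for the top few hits is usually the quickest way to see why the results are ordered as they are.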
?
Regards,
satya
score to only particular docs?
If anybody knows about it or any documentation regarding this, please
inform me...
Regards,
satya
atabase to check the user-indexed files
and then filter the result... I don't have any cores; I indexed all files
in a single index...
Regards,
satya
able to all users who are related to group1, so I thought of
editing the code...
Thanks,
satya