I just tried sending in 100,000 deletes and it didn't cause a problem:
the memory grew from 22M to 30M.

Random thought: perhaps it has something to do with how you are
sending your requests?
If the client creates a new connection for each request but doesn't
send a "Connection: close" header or close the connection after use,
those persistent connections can cause new threads to be created in
the app server.
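
For example, a client that opens a fresh connection per request should
look something like this (just a minimal sketch, assuming a Python
client along the lines of the script further down; delete_one is an
illustrative name, not your actual code):

import httplib

def delete_one(id):
    conn = httplib.HTTPConnection('localhost:8983')
    # Ask the server not to keep the connection alive...
    conn.request('POST', '/solr/update',
                 '<delete><id>' + id + '</id></delete>',
                 {'Connection': 'close'})
    rsp = conn.getresponse()
    rsp.read()    # drain the response
    conn.close()  # ...and close our side explicitly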

You can do a thread-dump with the latest nightly build to see if you
are accumulating too many threads in the app server while the deletes
are going on (probably shouldn't be more than 20):
http://localhost:8983/solr/admin/threaddump.jsp
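
If you want to check programmatically, something like the following
should work (a rough sketch: it just fetches the page and counts
occurrences of the word "Thread", which is only a heuristic since I'm
not relying on the exact output format of threaddump.jsp):

import urllib

dump = urllib.urlopen('http://localhost:8983/solr/admin/threaddump.jsp').read()
print dump.count('Thread'), 'rough thread count'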

Here's the Python script I used to try the 100,000 deletes... maybe
you can try it in your environment to see whether it reproduces your
memory problem. If it doesn't, look to your client code for possible
bugs.

-Yonik

--------- python script ----------
import httplib

class SolrConnection:
    def __init__(self, host='localhost:8983', solrBase='/solr'):
        self.host = host
        self.solrBase = solrBase
        # A socket is not actually opened at this point; httplib
        # connects lazily on the first request, and the same
        # persistent connection is then reused for every request.
        self.conn = httplib.HTTPConnection(self.host)
        self.conn.set_debuglevel(1000000)  # very verbose; lower or remove to quiet the output

    def doUpdateXML(self, request):
        # POST the update message and fully read the response so the
        # connection can be reused for the next request.
        self.conn.request('POST', self.solrBase+'/update', request)
        rsp = self.conn.getresponse()
        print rsp.status, rsp.reason
        data = rsp.read()
        print "data=", data

    def delete(self, id):
        xstr = '<delete><id>' + id + '</id></delete>'
        self.doUpdateXML(xstr)

c = SolrConnection()
for i in range(100000):
    c.delete(str(i))
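
Note that the script creates a single SolrConnection up front and
reuses that one persistent connection for all 100,000 deletes, so the
app server only ever sees one connection; that's the pattern to aim
for, in contrast to the new-connection-per-request case described
above.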
