[issue15523] Block on close TCP socket in SocketServer.py

2012-08-01 Thread Jarvis

New submission from Jarvis:

In Python 2.4, the server closes the socket only by calling the request.close() method. 
There is a risk in closing the socket this way: if the socket handle count does 
not reach zero because another process still holds a handle to the socket, the 
connection is not closed and the socket is not deallocated. So Python 2.7 
updated this to call request.shutdown() first, which closes the underlying 
connection and sends a FIN/EOF to the peer regardless of how many processes 
hold handles to the socket. After that, it calls request.close() to deallocate 
the socket. You can see the updated code below, taken from 
C:\Python27\Lib\SocketServer.py.

def shutdown_request(self, request):
    """Called to shutdown and close an individual request."""
    try:
        #explicitly shutdown.  socket.close() merely releases
        #the socket and waits for GC to perform the actual close.
        request.shutdown(socket.SHUT_WR)
    except socket.error:
        pass #some platforms may raise ENOTCONN here
    self.close_request(request)

However, it will block at self.close_request(request) after 
request.shutdown(socket.SHUT_WR) if there is unread data on the reading side of 
the socket.

Here, request.shutdown() is called with the SHUT_WR flag, which shuts down only 
the writing side of the socket; the reading side is not shut down at the same 
time. So when close_request is called to deallocate the socket, it can keep 
waiting to read until response data is available. This looks like an issue in 
the SocketServer.py library.

Given that, I replaced the SHUT_WR flag with SHUT_RDWR (shut down both the 
reading and writing sides of the socket) in the request.shutdown() call. That 
resolved the issue: the SSL connection was closed immediately.
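The change can be sketched as follows. This is a minimal standalone illustration of the proposed fix, not the SocketServer.py patch itself; a socketpair stands in for a real client connection:

```python
import socket

def shutdown_request(request):
    """Sketch of the proposed change: shut down BOTH directions."""
    try:
        # SHUT_RDWR instead of SHUT_WR: the reading side is discarded
        # too, so unread inbound data cannot delay the close.
        request.shutdown(socket.SHUT_RDWR)
    except socket.error:
        pass  # some platforms may raise ENOTCONN here
    request.close()

# Demonstration on a connected pair of sockets.
a, b = socket.socketpair()
b.sendall(b"pending data")   # leave unread data on a's reading side
shutdown_request(a)          # closes despite the unread data
b.close()
```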

--
messages: 167110
nosy: jarvisliang
priority: normal
severity: normal
status: open
title: Block on close TCP socket in SocketServer.py
type: enhancement
versions: Python 2.7

___
Python tracker 
<http://bugs.python.org/issue15523>
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue40942] BaseManager cannot start with local manager

2020-06-10 Thread Mike Jarvis


New submission from Mike Jarvis :

I had a function for making a logger proxy that could be safely passed to 
multiprocessing workers and log back to the main logger with essentially the 
following code:
```
import logging
from multiprocessing.managers import BaseManager

class SimpleGenerator:
    def __init__(self, obj): self._obj = obj
    def __call__(self): return self._obj

def get_logger_proxy(logger):
    class LoggerManager(BaseManager): pass
    logger_generator = SimpleGenerator(logger)
    LoggerManager.register('logger', callable = logger_generator)
    logger_manager = LoggerManager()
    logger_manager.start()
    logger_proxy = logger_manager.logger()

    return logger_proxy

logger = logging.getLogger('test')

logger_proxy = get_logger_proxy(logger)
```
This worked great on Python 2.7 through 3.7. I could pass the resulting 
logger_proxy to workers, and they would log information, which was then 
properly sent back to the main logger.

However, on python 3.8.2 (and 3.8.0) I get the following:
```
Traceback (most recent call last):
  File "test_proxy.py", line 20, in <module>
    logger_proxy = get_logger_proxy(logger)
  File "test_proxy.py", line 13, in get_logger_proxy
    logger_manager.start()
  File "/anaconda3/envs/py3.8/lib/python3.8/multiprocessing/managers.py", line 579, in start
    self._process.start()
  File "/anaconda3/envs/py3.8/lib/python3.8/multiprocessing/process.py", line 121, in start
    self._popen = self._Popen(self)
  File "/anaconda3/envs/py3.8/lib/python3.8/multiprocessing/context.py", line 283, in _Popen
    return Popen(process_obj)
  File "/anaconda3/envs/py3.8/lib/python3.8/multiprocessing/popen_spawn_posix.py", line 32, in __init__
    super().__init__(process_obj)
  File "/anaconda3/envs/py3.8/lib/python3.8/multiprocessing/popen_fork.py", line 19, in __init__
    self._launch(process_obj)
  File "/anaconda3/envs/py3.8/lib/python3.8/multiprocessing/popen_spawn_posix.py", line 47, in _launch
    reduction.dump(process_obj, fp)
  File "/anaconda3/envs/py3.8/lib/python3.8/multiprocessing/reduction.py", line 60, in dump
    ForkingPickler(file, protocol).dump(obj)
AttributeError: Can't pickle local object 'get_logger_proxy.<locals>.LoggerManager'
```
So it seems that something changed about ForkingPickler that makes it unable to 
handle the closure in my get_logger_proxy function.

I don't know if this is an intentional change in behavior or an unintentional 
regression.  If the former, I would appreciate advice on how to modify the 
above code to work.
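For reference, one workaround, assuming the cause is that macOS switched to the "spawn" start method by default in 3.8 (which pickles the manager's registry, whereas "fork" does not), is to hoist the manager class to module level so pickle can find it by its qualified name. Names follow the snippet above:

```python
import logging
from multiprocessing.managers import BaseManager

class SimpleGenerator:
    def __init__(self, obj): self._obj = obj
    def __call__(self): return self._obj

# Module-level instead of a closure: pickle can now look the class up
# by its qualified name, which the spawn start method requires.
class LoggerManager(BaseManager):
    pass

def get_logger_proxy(logger):
    LoggerManager.register('logger', callable=SimpleGenerator(logger))
    manager = LoggerManager()
    manager.start()
    return manager.logger()
```

With spawn, get_logger_proxy would also need to be called under an `if __name__ == '__main__':` guard so the child's re-import of the module does not re-run it.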

Possibly relevant system details:
```
$ uname -a
Darwin Fife 17.5.0 Darwin Kernel Version 17.5.0: Mon Mar  5 22:24:32 PST 2018; 
root:xnu-4570.51.1~1/RELEASE_X86_64 x86_64
$ python --version
Python 3.8.2
$ which python
/anaconda3/envs/py3.8/bin/python
$ conda info

 active environment : py3.8
active env location : /anaconda3/envs/py3.8
shell level : 2
   user config file : /Users/Mike/.condarc
 populated config files : /Users/Mike/.condarc
  conda version : 4.8.3
conda-build version : 3.18.5
 python version : 3.6.5.final.0
   virtual packages : __osx=10.13.4
   base environment : /anaconda3  (writable)
   channel URLs : https://conda.anaconda.org/conda-forge/osx-64
  https://conda.anaconda.org/conda-forge/noarch
  https://conda.anaconda.org/astropy/osx-64
  https://conda.anaconda.org/astropy/noarch
  https://repo.anaconda.com/pkgs/main/osx-64
  https://repo.anaconda.com/pkgs/main/noarch
  https://repo.anaconda.com/pkgs/r/osx-64
  https://repo.anaconda.com/pkgs/r/noarch
  package cache : /anaconda3/pkgs
  /Users/Mike/.conda/pkgs
   envs directories : /anaconda3/envs
  /Users/Mike/.conda/envs
   platform : osx-64
 user-agent : conda/4.8.3 requests/2.23.0 CPython/3.6.5 
Darwin/17.5.0 OSX/10.13.4
UID:GID : 501:20
 netrc file : /Users/Mike/.netrc
   offline mode : False

```

--
components: Library (Lib)
messages: 371211
nosy: Mike Jarvis
priority: normal
severity: normal
status: open
title: BaseManager cannot start with local manager
type: behavior
versions: Python 3.8

___
Python tracker 
<https://bugs.python.org/issue40942>
___



[issue30376] Curses documentation refers to incorrect type

2017-05-15 Thread Ryan Jarvis

New submission from Ryan Jarvis:

Currently the Python curses documentation refers to `WindowObject` multiple 
times. The actual type is `_curses.curses window`; `WindowObject` does not 
exist.

Seen at 16.11.1. Textbox objects and at curses.initscr() in both the Python 2 
and Python 3 documentation.

https://docs.python.org/3/library/curses.html
https://docs.python.org/2/library/curses.html

There is no type information available for the curses window object in the 
documentation.

--
assignee: docs@python
components: Documentation
messages: 293740
nosy: Ryan Jarvis, docs@python
priority: normal
severity: normal
status: open
title: Curses documentation refers to incorrect type
versions: Python 2.7, Python 3.3, Python 3.4, Python 3.5, Python 3.6, Python 3.7

___
Python tracker 
<http://bugs.python.org/issue30376>
___



[issue32175] Add hash auto-randomization

2017-11-29 Thread Brian Jarvis

New submission from Brian Jarvis :

Hash auto-randomization is a mechanism to detect when a collision attack is 
underway and switch to a randomized keying scheme at that point.

This patch is for the 2.7 branch, where hash randomization is not on by default.

Using collided strings from 
https://github.com/Storyyeller/fnv-collider/tree/master/collided_strings, 10 
"attacks" of roughly 50,000 collided strings each were launched against this 
patch. The unmodified Python had a median insert time of roughly 4.32 seconds 
and a median retrieve time of roughly 4.40 seconds. With the auto-randomized 
version of Python, the median insert time was roughly 3.99 seconds and the 
median retrieve time was roughly 3.57 seconds. This is a 7.7% and an 18.9% 
savings, respectively.
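To illustrate the class of attack being defended against, here is a self-contained sketch using a deliberately colliding `__hash__` (rather than the FNV collision strings from the report):

```python
import time

class Collider:
    """Every instance hashes to the same bucket, so each dict insert
    degrades to a linear scan over the keys already present."""
    def __init__(self, n):
        self.n = n
    def __hash__(self):
        return 42                   # total collision, on purpose
    def __eq__(self, other):
        return self.n == other.n

def insert_time(keys):
    # Time how long it takes to build a dict from the given keys.
    d = {}
    start = time.perf_counter()
    for k in keys:
        d[k] = None
    return time.perf_counter() - start

n = 2000
t_attack = insert_time([Collider(i) for i in range(n)])  # ~O(n^2) probing
t_normal = insert_time(list(range(n)))                   # ~O(n) inserts
```

Auto-randomization, as described above, would detect the abnormally long probe sequences in the first case and re-key the table with a randomized hash at that point.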

--
files: auto_rand_2.7.patch
keywords: patch
messages: 307278
nosy: bjarvis
priority: normal
severity: normal
status: open
title: Add hash auto-randomization
type: enhancement
versions: Python 2.7
Added file: https://bugs.python.org/file47305/auto_rand_2.7.patch

___
Python tracker 
<https://bugs.python.org/issue32175>
___