Re: [tor-dev] locked out of my bug report #26675

2018-07-07 Thread starlight . 2018q2
Thank you Nick! I was able to revise the ticket. Best to you too. On Sat, Jul 7, 2018 at 13:49 UTC, wrote: >Hi, Starlight! > >Some parts of trac are temporarily disabled, due to a bunch of >vandalism last night. I hope somebody will re-enable soon, if they >have time to ke

[tor-dev] locked out of my bug report #26675

2018-07-06 Thread starlight . 2018q2
Today I opened a ticket regarding difficult-to-understand Torflow behavior. Subsequently I figured out the behavior, realizing it is a non-material bug with no serious implications. However, I can no longer revise the ticket, possibly due to https://trac.torproject.org/projects/tor/ticket/2667

Re: [tor-dev] stale entries in bwscan.20151029-1145

2015-11-05 Thread starlight . 2015q3
At 20:56 11/5/2015 -0600, you wrote: >On 5 November 2015 at 16:37, wrote: >> By having a single thread handle >> consensus retrieval and sub-division, >> issues of "lost" relays should >> go away entirely. > >So I'm coming around to this idea, after spending >an hour trying to explain why it was
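
A minimal sketch of that single-owner idea (not Torflow's actual code; fetch_consensus(), publish_slices(), and the relay .bandwidth attribute are assumed helpers for illustration):

    import time

    def subdivide(relays, slice_size=50):
        # Sort once by consensus bandwidth, then cut into contiguous
        # slices: every relay lands in exactly one slice, so none can
        # fall "between" them.
        ordered = sorted(relays, key=lambda r: r.bandwidth, reverse=True)
        return [ordered[i:i + slice_size]
                for i in range(0, len(ordered), slice_size)]

    def consensus_worker(fetch_consensus, publish_slices, interval=3600):
        # A single thread owns retrieval and sub-division:
        # fetch, slice, publish, repeat.
        while True:
            publish_slices(subdivide(fetch_consensus()))  # assumed helpers
            time.sleep(interval)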

Re: [tor-dev] stale entries in bwscan.20151029-1145

2015-11-05 Thread starlight . 2015q3
At 17:37 11/5/2015 -0500, you wrote: > >. . .Consensus allocation worker. . The consensus list manager could run as an independent Python process and "message" changes to the scanner processes to avoid complexities of trying to share data (I know very little about Python and whether sharing data i
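
For illustration only, a sketch of that message-passing layout with multiprocessing (the event format and the fetch_consensus() helper are assumptions, not Torflow code):

    import time
    from multiprocessing import Queue

    def consensus_manager(q, fetch_consensus, interval=3600):
        # Owns the consensus list; emits only change events, so no
        # data structures are shared across processes.
        known = set()
        while True:
            current = {r.fingerprint for r in fetch_consensus()}
            for fp in current - known:
                q.put(("add", fp))      # relay appeared in the consensus
            for fp in known - current:
                q.put(("remove", fp))   # relay dropped out
            known = current
            time.sleep(interval)

    def scanner(q):
        # Each scanner keeps a private copy, updated purely by messages.
        relays = set()
        while True:
            op, fp = q.get()            # blocks until a change arrives
            if op == "add":
                relays.add(fp)
            else:
                relays.discard(fp)

Each scanner would then be started as its own multiprocessing.Process, with the shared Queue as its only link to the manager.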

Re: [tor-dev] stale entries in bwscan.20151029-1145

2015-11-05 Thread starlight . 2015q3
At 11:47 11/5/2015 -0600, Tom Ritter wrote: > . . . >So them falling between the slices would be my >best guess. . . Immediately comes to mind that dealing with the changing consensus while scanning might be handled in a different but nonetheless straightforward manner. Why not create a snapshot
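
A sketch of that snapshot approach, assuming hypothetical fetch_consensus() and scan_slice() callables rather than actual Torflow interfaces:

    import copy

    def scan_pass(fetch_consensus, scan_slice, slice_size=50):
        # Freeze one deep copy of the consensus up front; churn in the
        # live consensus can no longer drop relays from this pass.
        snapshot = sorted(copy.deepcopy(fetch_consensus()),
                          key=lambda r: r.bandwidth, reverse=True)
        for i in range(0, len(snapshot), slice_size):
            scan_slice(snapshot[i:i + slice_size])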

Re: [tor-dev] running a BWauth

2015-11-04 Thread starlight . 2015q3
Thanks to all for the feedback. That Torflow works from links slower than those of the fastest relays seems to indicate it's measuring relative path resistance as much as, or more than, absolute bandwidth. I often see it produce sensible results and hope that some tuning and fixing might produce be

[tor-dev] running a BWauth

2015-11-02 Thread starlight . 2015q3
I am considering starting up a passive BWauth in order to understand how they work and fix bugs. What is the minimum and ideal hardware configuration for a BWauth? Have my eye on an OVH config with unmetered 1G, a fuzzy promise of 500 Mbps minimum bandwidth, and a 3.4GHz 4-core/8-thread AES-NI CPU:

[tor-dev] patch to improve consensus download decompress performance

2015-08-27 Thread starlight . 2015q3
tor-0.2.6.10-gz4x_guess.patch Description: Binary data

[tor-dev] request for advice regarding bug 15901

2015-08-23 Thread starlight . 2015q3
Could someone familiar with consensus document downloading, manipulation and validation by the tor relay examine the most recent post to bug #15901 and comment with advice regarding an idea I have for isolating the problem? Tor #15901: apparent memory corruption -- very difficult to isolate http

[tor-dev] small patch to allow gcc -fno-common for better ASAN coverage

2015-06-23 Thread starlight . 2015q2
Per https://code.google.com/p/address-sanitizer/wiki/Flags, one benefits from compiling with -fno-common when -fsanitize=address is active. The attached patch converts the single common variable to a global/extern variable. Seems a good idea to me. Have run this with no issues, but more ASAN co

[tor-dev] CellStatistics circuit distribution scale could perhaps use adjustment

2013-12-21 Thread starlight
Have been running a guard for a couple of months with 'CellStatistics' and noticed that the distribution looks out of whack:

    cell-stats-end 2013-12-20 18:13:10 (86400 s)
    cell-processed-cells 1409,9,6,6,6,5,4,3,2,1
    cell-queued-cells 0.44,0.00,0.00,0.00,0.00,0.00,0.00,0.00,0.0
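
Per dir-spec, cell-processed-cells gives the mean processed cells per circuit for each decile of circuits, busiest decile first; a small hedged helper for reading such lines:

    def parse_deciles(line):
        # "cell-processed-cells 1409,9,6,6,6,5,4,3,2,1" -> key + 10 values
        key, values = line.split(None, 1)
        return key, [float(v) for v in values.split(",")]

    key, deciles = parse_deciles("cell-processed-cells 1409,9,6,6,6,5,4,3,2,1")
    # deciles[0], the mean for the busiest tenth of circuits, dwarfs
    # every other decile -- the skew being reported above.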