Cheers, looking forward to the next release :)

Loving the work you do!

On Saturday, July 16, 2016 at 3:01:05 AM UTC+10, Dormando wrote:
>
> Well, knew I forgot something :) Just pushed a fix for that for the next 
> release. 
>
> You'll want to avoid setting slab_chunk_max on your own. Only adjust with 
> -I, and you can see the state of -I via the "item_size_max" stat in there. 
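A minimal sketch of checking that on a running instance. With a live server this would be `echo stats settings | nc 127.0.0.1 11211` (the same approach used later in this thread); here the pipeline runs against a captured sample line so it is self-contained:

```shell
# Sketch: pull item_size_max out of "stats settings" output.
# Live form (assumes memcached on 127.0.0.1:11211 and the nc utility):
#   echo stats settings | nc 127.0.0.1 11211
# Offline form against a sample line captured from this thread:
sample='STAT item_size_max 1048576'
echo "$sample" | awk '/item_size_max/ {print $3}'   # prints 1048576
```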
>
> Also, for what it's worth: your `-m 8` may use more than 8 megabytes of 
> RAM if you use more than one slab class. There is a minimum of 1 MB of 
> memory assignable per slab class. 
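To see why, the slab class ladder implied by the flags in this thread (`-n 72` minimum chunk size, `-f 1.25` growth factor, default 1 MB max item size) can be sketched. The sizes are illustrative only, since real memcached rounds each chunk size up for alignment:

```shell
# Sketch: count the slab classes produced by -n 72 / -f 1.25 up to the
# default 1 MB max item size. Illustrative; memcached's actual rounding
# makes the exact sizes (and count) differ slightly.
size=72; classes=0
while [ "$size" -lt 1048576 ]; do
    classes=$((classes + 1))
    size=$((size * 125 / 100))   # -f 1.25 growth factor, integer math
done
echo "$classes slab classes"
```

This yields on the order of forty classes; since every class holding at least one item is backed by at least one 1 MB page, a busy server can exceed a small `-m 8` cap.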
>
> -Dormando 
>
> On Fri, 15 Jul 2016, Centmin Mod George Liu wrote: 
>
> > with /usr/local/bin/memcached -d -m 8 -l 127.0.0.1 -p 11211 -c 2048 -b 
> > 2048 -R 200 -t 4 -n 72 -f 1.25 -u nobody -o 
> > slab_reassign,slab_automove,slab_chunk_max=16384 -P 
> > /var/run/memcached/memached1.pid 
> > stats output as 
> > 
> > echo stats settings | nc 127.0.0.1 11211       
> > STAT maxbytes 8388608 
> > STAT maxconns 2048 
> > STAT tcpport 11211 
> > STAT udpport 11211 
> > STAT inter 127.0.0.1 
> > STAT verbosity 0 
> > STAT oldest 0 
> > STAT evictions on 
> > STAT domain_socket NULL 
> > STAT umask 700 
> > STAT growth_factor 1.25 
> > STAT chunk_size 72 
> > STAT num_threads 4 
> > STAT num_threads_per_udp 4 
> > STAT stat_key_prefix : 
> > STAT detail_enabled no 
> > STAT reqs_per_event 200 
> > STAT cas_enabled yes 
> > STAT tcp_backlog 2048 
> > STAT binding_protocol auto-negotiate 
> > STAT auth_enabled_sasl no 
> > STAT item_size_max 1048576 
> > STAT maxconns_fast no 
> > STAT hashpower_init 0 
> > STAT slab_reassign yes 
> > STAT slab_automove 1 
> > STAT lru_crawler no 
> > STAT lru_crawler_sleep 100 
> > STAT lru_crawler_tocrawl 0 
> > STAT tail_repair_time 0 
> > STAT flush_enabled yes 
> > STAT hash_algorithm jenkins 
> > STAT lru_maintainer_thread no 
> > STAT hot_lru_pct 32 
> > STAT warm_lru_pct 32 
> > STAT expirezero_does_not_evict no 
> > STAT idle_timeout 0 
> > STAT watcher_logbuf_size 262144 
> > STAT worker_logbuf_size 65536 
> > STAT track_sizes no 
> > END 
> > 
> > On Friday, July 15, 2016 at 11:56:03 PM UTC+10, Centmin Mod George Liu wrote: 
> > Are there any tools or command line options available to get the 
> > slab_chunk_max size in 1.4.29? memcached-tool doesn't see it in the 
> > settings; it probably needs an update? 
> > 
> > memcached-tool 127.0.0.1:11211 stats 
> > #127.0.0.1:11211   Field       Value 
> >          accepting_conns           1 
> >                auth_cmds           0 
> >              auth_errors           0 
> >                    bytes     1380656 
> >               bytes_read     1904536 
> >            bytes_written     4211631 
> >               cas_badval           0 
> >                 cas_hits           0 
> >               cas_misses           0 
> >                cmd_flush           0 
> >                  cmd_get        1649 
> >                  cmd_set        1306 
> >                cmd_touch           0 
> >              conn_yields           0 
> >    connection_structures          13 
> >    crawler_items_checked        1536 
> >        crawler_reclaimed           0 
> >         curr_connections          12 
> >               curr_items         532 
> >                decr_hits           0 
> >              decr_misses           0 
> >              delete_hits           0 
> >            delete_misses           0 
> >          direct_reclaims           0 
> >        evicted_unfetched           0 
> >                evictions           0 
> >        expired_unfetched           0 
> >              get_expired           0 
> >              get_flushed           0 
> >                 get_hits        1108 
> >               get_misses         541 
> >               hash_bytes      524288 
> >        hash_is_expanding           0 
> >         hash_power_level          16 
> >                incr_hits           0 
> >              incr_misses           0 
> >                 libevent 2.0.22-stable 
> >           limit_maxbytes   268435456 
> >      listen_disabled_num           0 
> >         log_watcher_sent           0 
> >      log_watcher_skipped           0 
> >       log_worker_dropped           0 
> >       log_worker_written           0 
> >      lru_crawler_running           0 
> >       lru_crawler_starts         366 
> >   lru_maintainer_juggles        9230 
> >        lrutail_reflocked           0 
> >             malloc_fails           0 
> >            moves_to_cold         478 
> >            moves_to_warm          15 
> >         moves_within_lru         281 
> >                      pid       21995 
> >             pointer_size          64 
> >                reclaimed           0 
> >     rejected_connections           0 
> >             reserved_fds          20 
> >            rusage_system    0.456666 
> >              rusage_user    1.096666 
> >    slab_global_page_pool           0 
> > slab_reassign_busy_items           0 
> > slab_reassign_chunk_rescues           0 
> > slab_reassign_evictions_nomem           0 
> > slab_reassign_inline_reclaim           0 
> >    slab_reassign_rescues           0 
> >    slab_reassign_running           0 
> >              slabs_moved           0 
> >                  threads           4 
> >                     time  1468590786 
> > time_in_listen_disabled_us           0 
> >        total_connections          78 
> >              total_items        1306 
> >               touch_hits           0 
> >             touch_misses           0 
> >                   uptime         987 
> >                  version      1.4.29 
> > 

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"memcached" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
For more options, visit https://groups.google.com/d/optout.
