> + * and reducing the number of active requests in the backing taskq.
> + *
> + * 4 GiB (zfs_dirty_data_max default) * 16 (multiplier default) = 64 GiB
> + * meaning by default we will call zfs_inactive_impl async for vnodes > 64 GiB
> + */
> +uint16_t zfs_inactive_async_multiplier = 16;
> +
> +void
> +zfs_inactive(vnode_t *vp, cred_t *cr, caller_context_t *ct)
> +{
> +     znode_t *zp = VTOZ(vp);
> +
> +     if (zp->z_size > zfs_inactive_async_multiplier * zfs_dirty_data_max) {
> +             if (taskq_dispatch(dsl_pool_vnrele_taskq(
> +                 dmu_objset_pool(zp->z_zfsvfs->z_os)), zfs_inactive_task,
> +                 vp, TQ_SLEEP) != NULL)

Seems like we might want to use TQ_NOSLEEP so that if there are a ton of 
deletions and the queue gets full, we will effectively recruit more threads 
to work on deletion (because the calling thread will do the deletion itself, 
rather than waiting for the taskq threads to make progress).

---
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/61/files#r51762910
_______________________________________________
developer mailing list
[email protected]
http://lists.open-zfs.org/mailman/listinfo/developer
