Copilot commented on code in PR #12751:
URL: https://github.com/apache/apisix/pull/12751#discussion_r2917093471
##########
apisix/plugins/limit-count/limit-count-redis.lua:
##########
@@ -52,37 +44,61 @@ function _M.new(plugin_name, limit, window, conf)
return setmetatable(self, mt)
end
-function _M.incoming(self, key, cost)
+
+local function log_phase_incoming_thread(premature, self, key, cost)
local conf = self.conf
local red, err = redis.new(conf)
if not red then
- return red, err, 0
+ return red, err
end
+ return util.redis_log_phase_incoming(self, red, key, cost)
+end
- local limit = self.limit
- local window = self.window
- local res
- key = self.plugin_name .. tostring(key)
- local ttl = 0
- res, err = red:eval(script, 1, key, limit, window, cost or 1)
+local function log_phase_incoming(self, key, cost, dry_run)
+ if dry_run then
+ return true
+ end
- if err then
- return nil, err, ttl
+ local ok, err = ngx_timer_at(0, log_phase_incoming_thread, self, key, cost)
+ if not ok then
+ core.log.error("failed to create timer: ", err)
+ return nil, err
end
- local remaining = res[1]
- ttl = res[2]
+ return ok
+end
+
+
+function _M.incoming(self, key, cost, dry_run)
+ if get_phase() == "log" then
+ local ok, err = log_phase_incoming(self, key, cost, dry_run)
+ if not ok then
+ return nil, err, 0
+ end
+
+ -- best-effort result because lua-resty-redis is not allowed in log phase
+ return 0, self.limit, self.window
+ end
+
+ local conf = self.conf
+ local red, err = redis.new(conf)
+ if not red then
+ return red, err, 0
+ end
+
+ local delay, remaining, ttl = util.redis_incoming(self, red, key, not dry_run, cost)
+ if not delay then
+ local err = remaining
Review Comment:
When `util.redis_incoming` returns an error/rejection, this function returns
early before calling `red:set_keepalive(...)`, leaking the Redis connection on
the error path. Please make sure the connection is always put back into the
pool (best-effort) before returning, regardless of success/failure.
```suggestion
local err = remaining
-- best-effort: always try to put the connection back into the pool
local ok_keepalive, err_keepalive = red:set_keepalive(conf.redis_keepalive_timeout, conf.redis_keepalive_pool)
if not ok_keepalive then
    core.log.error("failed to set redis keepalive on error path: ", err_keepalive)
end
```
##########
apisix/plugins/ai-rate-limiting.lua:
##########
@@ -148,14 +173,38 @@ local function transform_limit_conf(plugin_conf, instance_conf, instance_name)
show_limit_quota_header = plugin_conf.show_limit_quota_header,
-- we may expose those fields to ai-rate-limiting later
- policy = "local",
+ policy = plugin_conf.policy or "local",
key_type = "constant",
- allow_degradation = false,
+ allow_degradation = plugin_conf.allow_degradation or false,
sync_interval = -1,
limit_header = "X-AI-RateLimit-Limit",
remaining_header = "X-AI-RateLimit-Remaining",
reset_header = "X-AI-RateLimit-Reset",
}
+
+ -- Pass through Redis configuration if policy is redis or redis-cluster
+ if plugin_conf.policy == "redis" then
+ limit_conf.redis_host = plugin_conf.redis_host
+ limit_conf.redis_port = plugin_conf.redis_port
+ limit_conf.redis_username = plugin_conf.redis_username
+ limit_conf.redis_password = plugin_conf.redis_password
Review Comment:
Redis credentials (e.g., `redis_password`) are now accepted and forwarded,
but this plugin doesn’t resolve secrets (unlike
`apisix/plugins/limit-count.lua`, which calls `fetch_secrets`). If users
configure Redis auth via secret refs, the password will be passed through
unresolved and connections will fail. Consider calling
`apisix.secret.fetch_secrets(conf, true)` before building `limit_conf` in
access/log paths.
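A minimal sketch of what that could look like, mirroring `limit-count.lua` (the require path and call site are illustrative, not verified against this PR):

```lua
-- illustrative only: resolve $secret:// / $env:// references before the
-- Redis credentials are copied into limit_conf
local secret = require("apisix.secret")

local function transform_limit_conf(plugin_conf, instance_conf, instance_name)
    -- fetch_secrets(conf, true) returns a copy with secret refs resolved,
    -- so plugin_conf.redis_password holds the real password afterwards
    plugin_conf = secret.fetch_secrets(plugin_conf, true)
    -- ... build limit_conf from the resolved plugin_conf as before ...
end
```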
##########
apisix/plugins/limit-count/limit-count-redis-cluster.lua:
##########
@@ -57,26 +48,37 @@ function _M.new(plugin_name, limit, window, conf)
end
-function _M.incoming(self, key, cost)
- local red = self.red_cli
- local limit = self.limit
- local window = self.window
- key = self.plugin_name .. tostring(key)
+local function log_phase_incoming_thread(premature, self, key, cost)
+ return util.redis_log_phase_incoming(self, self.red_cli, key, cost)
+end
+
- local ttl = 0
- local res, err = red:eval(script, 1, key, limit, window, cost or 1)
+local function log_phase_incoming(self, key, cost, dry_run)
+ if dry_run then
+ return true
+ end
- if err then
- return nil, err, ttl
+ local ok, err = ngx_timer_at(0, log_phase_incoming_thread, self, key, cost)
+ if not ok then
+ core.log.error("failed to create timer: ", err)
+ return nil, err
end
- local remaining = res[1]
- ttl = res[2]
+ return ok
+end
+
+
+function _M.incoming(self, key, cost, dry_run)
+ if get_phase() == "log" then
+ local ok, err = log_phase_incoming(self, key, cost, dry_run)
+ if not ok then
+ return nil, err, 0
Review Comment:
Same as the single-Redis backend: `incoming(self, key, cost, dry_run)`
supports `dry_run`, but the current caller in `limit-count/init.lua` invokes
Redis Cluster backends with only `(key, cost)`. This makes `dry_run=true`
ineffective for `policy=redis-cluster` and can lead to double-counting when a
plugin does a check in access phase and commits in log phase.
##########
apisix/plugins/limit-count/limit-count-redis.lua:
##########
@@ -52,37 +44,61 @@ function _M.new(plugin_name, limit, window, conf)
return setmetatable(self, mt)
end
-function _M.incoming(self, key, cost)
+
+local function log_phase_incoming_thread(premature, self, key, cost)
local conf = self.conf
local red, err = redis.new(conf)
if not red then
- return red, err, 0
+ return red, err
end
+ return util.redis_log_phase_incoming(self, red, key, cost)
+end
- local limit = self.limit
- local window = self.window
- local res
- key = self.plugin_name .. tostring(key)
- local ttl = 0
- res, err = red:eval(script, 1, key, limit, window, cost or 1)
+local function log_phase_incoming(self, key, cost, dry_run)
+ if dry_run then
+ return true
+ end
- if err then
- return nil, err, ttl
+ local ok, err = ngx_timer_at(0, log_phase_incoming_thread, self, key, cost)
+ if not ok then
+ core.log.error("failed to create timer: ", err)
+ return nil, err
end
- local remaining = res[1]
- ttl = res[2]
+ return ok
+end
+
+
+function _M.incoming(self, key, cost, dry_run)
+ if get_phase() == "log" then
+ local ok, err = log_phase_incoming(self, key, cost, dry_run)
+ if not ok then
+ return nil, err, 0
Review Comment:
`incoming(self, key, cost, dry_run)` now supports a `dry_run` mode, but
`limit-count/init.lua` currently calls Redis backends as `lim:incoming(key,
cost)` (no `dry_run`). This means callers like `ai-rate-limiting` that pass
`dry_run=true` in access phase will still *commit* in Redis and then commit
again in log phase, causing double-counting and incorrect quota headers. The
Redis backend API and `limit-count/init.lua` need to agree on how
`dry_run`/commit is propagated for redis/redis-cluster policies.
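One possible shape for that agreement, sketched only — the `dry_run` argument and its plumbing through `init.lua` are hypothetical:

```lua
-- hypothetical call site in apisix/plugins/limit-count/init.lua:
-- forward dry_run so the redis/redis-cluster backends can skip the
-- commit when a plugin only wants a quota check in access phase and
-- will commit later in log phase
local delay, remaining, reset = lim:incoming(key, cost, dry_run)
if not delay then
    local err = remaining
    -- handle rejection / backend error as before
end
```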
##########
apisix/plugins/limit-count/limit-count-redis.lua:
##########
@@ -52,37 +44,61 @@ function _M.new(plugin_name, limit, window, conf)
return setmetatable(self, mt)
end
-function _M.incoming(self, key, cost)
+
+local function log_phase_incoming_thread(premature, self, key, cost)
local conf = self.conf
local red, err = redis.new(conf)
if not red then
- return red, err, 0
+ return red, err
end
+ return util.redis_log_phase_incoming(self, red, key, cost)
Review Comment:
In the log-phase timer callback, the Redis connection is never returned to
the keepalive pool (no `set_keepalive`) and `premature` is ignored. Under load
this can exhaust Redis connections/file descriptors; ensure the timer callback
exits on `premature` and always calls `red:set_keepalive(...)` (or closes)
after the `eval`, even on error.
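A sketch of a callback that addresses both points (names follow the diff; the keepalive options are assumed to live on `conf` as in the existing error-path suggestion):

```lua
-- sketch: honor `premature` and always return the connection to the pool
local function log_phase_incoming_thread(premature, self, key, cost)
    if premature then
        -- worker is shutting down; do not touch Redis
        return
    end

    local conf = self.conf
    local red, err = redis.new(conf)
    if not red then
        return red, err
    end

    local ok, incoming_err = util.redis_log_phase_incoming(self, red, key, cost)

    -- best-effort: put the connection back even if the eval failed
    local ok_keepalive, keepalive_err = red:set_keepalive(
        conf.redis_keepalive_timeout, conf.redis_keepalive_pool)
    if not ok_keepalive then
        core.log.error("failed to set redis keepalive: ", keepalive_err)
    end

    return ok, incoming_err
end
```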
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]