Hello,
I'm seeking guidance on how to improve the performance of a slow query, and
I'd also like some other sets of eyes to confirm that what I wrote does what
I intend.
According to the PostgreSQL wiki there is a set of metadata that I should
provide to help you help me. So let's begin there.
PostgreSQL version: PostgreSQL 12.5 on x86_64-pc-linux-gnu, compiled by gcc
(GCC) 4.8.5 20150623 (Red Hat 4.8.5-11), 64-bit
Full table and index schema:
CREATE TABLE attempt_scores (
id bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
attempt_report_id bigint NOT NULL,
score_value double precision NOT NULL,
created_at timestamptz NOT NULL DEFAULT now(),
attempt_report_updated_at timestamptz NOT NULL,
student_id int NOT NULL,
course_id int NOT NULL,
assignment_id int NOT NULL,
score_name citext NOT NULL CHECK (length(trim(score_name)) > 0),
attempted_by citext NOT NULL CHECK (length(trim(attempted_by)) > 0),
CONSTRAINT for_upsert UNIQUE (attempt_report_id, score_name)
);
CREATE INDEX ON attempt_scores (attempt_report_updated_at);
COMMENT ON TABLE attempt_scores IS
$$The collection of assignment scores extracted from the LMS database.$$;
COMMENT ON COLUMN attempt_scores.attempt_report_id IS
$$Each assignment attempt has an associated attempt report (attempt_reports)
where the scores of the attempt are recorded. This column is the pk value from
that table.$$;
COMMENT ON COLUMN attempt_scores.score_value IS $$The score's value.$$;
COMMENT ON COLUMN attempt_scores.created_at IS $$The timestamp the
record was created.$$;
COMMENT ON COLUMN attempt_scores.student_id IS $$The student's ID.$$;
COMMENT ON COLUMN attempt_scores.course_id IS $$The course's primary
key in the LMS database.$$;
COMMENT ON COLUMN attempt_scores.assignment_id IS $$The assignment's
primary key in the LMS database.$$;
COMMENT ON COLUMN attempt_scores.score_name IS $$The source/name of
the score.$$;
COMMENT ON COLUMN attempt_scores.attempted_by IS 'The users.role column in LMS';
COMMENT ON COLUMN attempt_scores.attempt_report_updated_at IS
$$The timestamp value of attempt_reports.updated_at on the LMS side. We use it
to find new rows added since the last time we exported to Salesforce.$$;
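For what it's worth, the only secondary index is the one on
attempt_report_updated_at shown above. Since the query below filters on
score_name = 'student_performance_index' and groups by (student_id,
assignment_id), I have been wondering whether a partial index along these
lines would help; this is just a sketch, I have not tried it yet:

CREATE INDEX ON attempt_scores (student_id, assignment_id, score_value)
  WHERE score_name = 'student_performance_index';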
Table metadata:

    relname     | relpages | reltuples | relallvisible | relkind | relnatts | relhassubclass | reloptions | pg_table_size
----------------+----------+-----------+---------------+---------+----------+----------------+------------+---------------
 attempt_scores |   130235 |   9352640 |             0 | r       |       10 | f              | [NULL]     |    1067180032
Other context: The PostgreSQL database is an Amazon RDS instance.
Next up is the query and a description of what it's supposed to do.
-- What this query is supposed to do is compute averages for a set of
-- scoring/learning metrics, but it's not so straightforward. There is an
-- umbrella metric that summarises the others, called the student performance
-- index (SPI), and the folks who want this info want the averages to be
-- driven by the SPI. So the basic algorithm is: for each student/assignment
-- pair, find the attempt that has the highest SPI, then use that attempt to
-- collect and average the component metrics.
EXPLAIN (ANALYZE, BUFFERS)
WITH max_spi AS (
SELECT student_id, assignment_id, max(score_value) spi
FROM attempt_scores
WHERE score_name = 'student_performance_index'
GROUP BY student_id, assignment_id
HAVING max(score_value) > 0
), reports AS (
SELECT max(attempt_report_id) attempt_report_id, max(score_value) spi
FROM max_spi m NATURAL JOIN attempt_scores
WHERE score_value = m.spi
GROUP BY student_id, assignment_id
)
SELECT
  avg(spi) spi,
  avg(CASE score_name WHEN 'digital_clinical_experience' THEN score_value END) dce,
  avg(CASE score_name WHEN 'tier1_subjective_data_collection' THEN score_value END) sdc
FROM reports NATURAL JOIN attempt_scores;
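In case it clarifies the intent, here is another way I believe the same
"best attempt per student/assignment pair" logic could be expressed, using
DISTINCT ON instead of the two aggregating CTEs. This is an untested sketch,
not what I'm actually running; the ORDER BY is meant to reproduce the
max-SPI-then-max-attempt_report_id tie-breaking of the CTE version:

-- Untested sketch: pick the single highest-SPI attempt per
-- (student_id, assignment_id), then average the component scores
-- drawn from those attempts.
WITH best AS (
  SELECT DISTINCT ON (student_id, assignment_id)
         attempt_report_id, score_value AS spi
  FROM attempt_scores
  WHERE score_name = 'student_performance_index'
    AND score_value > 0
  ORDER BY student_id, assignment_id, score_value DESC, attempt_report_id DESC
)
SELECT
  avg(spi) spi,
  avg(CASE score_name WHEN 'digital_clinical_experience' THEN score_value END) dce,
  avg(CASE score_name WHEN 'tier1_subjective_data_collection' THEN score_value END) sdc
FROM best JOIN attempt_scores USING (attempt_report_id);

If the two formulations don't actually agree, that would itself tell me my
original query isn't doing what I think it does.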
Finally, the EXPLAIN output and some links.
                                   QUERY PLAN
--------------------------------------------------------------------------------
 Aggregate  (cost=672426.02..672426.03 rows=1 width=24) (actual time=903359.923..903368.957 rows=1 loops=1)
   Buffers: shared hit=6167172 read=4199539, temp read=99551 written=99678
   I/O Timings: read=839121.853
   ->  Nested Loop  (cost=672389.80..672425.91 rows=8 width=37) (actual time=36633.920..885232.956 rows=7034196 loops=1)
         Buffers: shared hit=6167172 read=4199539, temp read=99551 written=99678
         I/O Timings: read=839121.853
         ->  GroupAggregate  (cost=672389.37..672389.39 rows=1 width=24) (actual time=36628.945..41432.502 rows=938244 loops=1)
               Group Key: attempt_scores_2.student_id, attempt_scores_2.assignment_id
               Buffers: shared hit=191072 read=329960, temp read=99551 written=99678
               I/O Timings: read=18210.866