At 2025-02-07T16:20:23+0100, onf wrote:
> I've been wondering lately about the rationale behind solving the
> problem of font sizes being integer-only by adding scaled points.
It's covered in groff_diff(7), the man page I believe you said you
always forget about.
groff_diff(7):
  Fractional type sizes and new scaling units
    AT&T troff interpreted all type size measurements in points.
    Combined with integer arithmetic, this design choice made it
    impossible to support, for instance, ten and a half‐point type. In
    GNU troff, an output device can select a scaling factor that
    subdivides a point into “scaled points”. A type size expressed in
    scaled points can thus represent a non‐integral size in points.

    A scaled point is equal to 1/sizescale points, where sizescale is
    specified in the device description file, DESC, and defaults to 1;
    see groff_font(5). Requests and escape sequences in GNU troff
    interpret arguments that represent a type size in points, which the
    formatter multiplies by sizescale and converts to an integer.
    Arguments treated in this way comprise those to the escape
    sequences \H and \s, to the request ps, the third argument to the
    cs request, and the second and fourth arguments to the tkf request.
    Scaled points may be specified explicitly with the z scaling unit.
    In GNU troff, the register \n[.s] can interpolate a non‐integral
    type size. The register \n[.ps] interpolates the type size in
    scaled points.

    For example, if sizescale is 1000, then a scaled point is one
    thousandth of a point. Consequently, “.ps 10.5” is synonymous with
    “.ps 10.5z”; both set the type size to 10,500 scaled points, or
    10.5 points.

    Another new scaling unit, “s”, multiplies by the number of basic
    units in a scaled point. Thus, “\n[.ps]s” is equal to “1m” by
    definition. Do not confuse the “s” and “z” scaling units.

    It makes no sense to use the “z” scaling unit in a numeric
    expression whose default scaling unit is neither “u” nor “z”, so
    GNU troff disallows this. Similarly, it is nonsensical to use
    scaling units other than “p”, “s”, “z”, or “u”, in a numeric
    expression whose default scaling unit is “z”, and so GNU troff
    disallows those as well.

    Output devices may be limited in the type sizes they can employ.
    For example, if a type size of 10.95 points is requested, and the
    nearest size permitted by a sizes request (or by the sizes or
    sizescale directives in the device’s DESC file) is 11 points, the
    output driver uses the latter value. The .s and .ps registers
    represent the type size selected by the formatter as it understands
    a device’s capability. The last requested type size is
    interpolated in scaled points by the read‐only register .psr and in
    points as a decimal fraction by the read‐only string‐valued
    register .sr. Both are associated with the environment.
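Those registers are easy to poke at. Assuming the default "ps" output
device, whose sizescale is 1000 and which accepts any size in its
supported range (so no rounding to a nearest permitted size occurs),
one would expect:
$ groff >/dev/null <<EOF
.ps 10.5
.tm .s=\n[.s] .ps=\n[.ps] .psr=\n[.psr] .sr=\n[.sr]
EOF
.s=10.5 .ps=10500 .psr=10500 .sr=10.5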
> Specifically, I can't help but feel like extending troff to represent
> font sizes everywhere (except the .s register) in basic units would
> be a much more straightforward solution, which wouldn't require new
> units or a new DESC entry.
>
> Am I missing something?
The crux of the problem, as I conceive it, is that a "point" is a real
unit of measure. It's 1/72nd of an inch. On groff's "ps" and "pdf"
output devices, whose resolution is 72,000 basic units per inch, a
point therefore works out to one thousand basic units "u".
$ groff >/dev/null <<EOF
.nr a 1p
.tm a=\na
EOF
a=1000
If we simply treat the point as a synonym for a basic unit in certain
contexts (as AT&T troff did), the formatter's measurement system becomes
inconsistent. We could nevertheless proceed on this basis, I think, by
extending the foregoing framework of rules regarding where certain
scaling units are accepted. However, I think doing so might frustrate
troff users who wish to apply points as a length measure to something
other than glyph rendering. (This is already common, even idiomatic,
practice when applying leading, a.k.a. selecting a vertical spacing.)
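A sketch of that idiom, again assuming the "ps" device; the "p" unit
here measures leading, nothing glyph-related:
$ groff >/dev/null <<EOF
.vs 18p
.tm vertical spacing = \n[.v]u
EOF
vertical spacing = 18000u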
And in the elementary training I received in typesetting as part of a
high school journalism class, I did in fact encounter a working
typographer (the teacher) thinking of the point, like the pica, as a
fully general unit of length measure. And why not? We're all friendly
linear transforms around here.[1]
Given that, I think AT&T troff's approach to type size measurement was
mistaken, and James Clark's rethink of it for GNU troff an improvement.
Where I have a problem, as is often the case, is with our choice of
terminology. After years of discomfort and wondering whether I'm the
stupid one, I've decided that the "scaled point" is a simply terrible
term. It's not something that undergoes scaling in the mathematical
sense in any way distinct from any other scaling unit. Both "unscaled"
[sic] points and scaled points get scaled to basic units!
$ groff >/dev/null <<EOF
.nr a 3p
.nr b 3z
.tm a=\na, b=\nb
EOF
a=3000, b=3000
On my mental to-do list has been a renewed attack on this aspect of our
documentation, and I finally, within the past couple of months, thought
of a term I'm happy with to replace "scaled point".
Subdivided Point
Because that's exactly what it is. I intend no architectural change, so
we'll still have the unit "s", which happily abbreviates "subdivided
point" just as readily as "scaled point".
$ groff >/dev/null <<EOF
.nr a 3p
.nr b 3z
.nr c 3s
.tm a=\na, b=\nb, c=\nc
EOF
a=3000, b=3000, c=3
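And to connect "s" back to the man page's claim that "\n[.ps]s" equals
"1m" by definition--at the default 10-point type size on the "ps"
device, both should come out to 10,000 basic units:
$ groff >/dev/null <<EOF
.nr d \n[.ps]s
.nr e 1m
.tm d=\nd, e=\ne
EOF
d=10000, e=10000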
Not sure yet if the "sizescale" directive needs to change. It doesn't
seem to have _any_ obvious meaning; rather it gestures in a vaguely
accurate direction. "pointsubdivisions" would be really good, but
for its unwieldy length. (Still--how often do people really edit device
description files, let alone type one in from scratch?) In any case, of
course if we pick a new name we'll need to keep recognizing the old one
as a synonym for a while. This won't be hard and I know where to
implement it.
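For reference, the directive sits alongside the device's resolution;
the relevant lines of devps's DESC look roughly like this (abridged,
and from memory--check an installed copy):
res 72000
hor 1
vert 1
unitwidth 1000
sizescale 1000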
Regards,
Branden
[1] Personal aside/anecdote:
One thing that makes me angry about my early education is that we
expose children to the distinction between linear and affine
transforms without identifying them as such.
For example, converting between inches and centimeters is a linear
transform.
Converting between degrees Fahrenheit and degrees Celsius is not a
linear, but an _affine_ transform.
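(Concretely: F = (9/5)C + 32. The 9/5 scale factor alone would be
linear; the +32 offset is what makes the map affine, since a linear
transform must send zero to zero.)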
Well into my engineering education, professors and TAs would
casually apply the term "linear transform" to both linear and affine
transforms. It wasn't until I got to linear algebra class (a
subject _everyone_ should study), where the concept of the kernel of
a transform became important, that the pedagogy finally found it
worthwhile to distinguish them--I suppose because it's pretty
awkward to try to teach the definition of a "kernel" without at long
last admitting the distinction.
As a rule, even the raunchiest or most misanthropic attempts at
humor don't offend me. But educate people in mathematics sloppily,
and I will happily call for your imprisonment in the most brutal of
conditions.