Dear Mario,

Thank you for your thoughtful, detailed, and even rather delightful
response. I'll respond to your email in a block here, in hopes of better
continuity in my reply.

Yes, I really am interested in extending a simple system of higher-order
logic with new predicates for new "datatypes" (on individuals in
particular!), and here is some more explanation together with some
responses to your email.

As it turns out, I am not using HOL4 at all, but a system I have been
writing bit by bit for some time, in JavaScript running in modern web
browsers, rather tightly coupled with a graphical user interface. It is
based on simple type theory, Q0 in particular, but does have some
capability for definitional extension. (See http://mathtoys.org/ for links,
though much technical explanation is not yet there.)

It is not intended as an aid for sophisticates such as typical members of
this list, but for people inexperienced with interactive theorem proving,
whose exposure to mathematical notation and mathematical reasoning may be
limited to ordinary high school math education and ordinary (let's say
American) math textbooks.

Aside from implementing basics of pure logic, the real numbers have been
its focus so far, though I have been stubborn enough not to limit the
design to just working with a single theory such as numbers. From near the
beginning it has used a predicate "R" for real numbers. Its rewriting rules
work with conditional equations, e.g. R x & R y => x + y = y + x, and it
automatically reduces assumptions such as R (x * y) to R x & R y as they
occur.
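To give a concrete (and purely hypothetical) flavor of that reduction step, here is a small JavaScript sketch. It is not the system's actual code, and the term representation is made up for illustration; it just shows how an assumption like R (x * y) can be replaced by R x & R y when every operator in the term is known to be closed on the reals:

```javascript
// Hypothetical sketch, not Mathtoys' actual implementation.
// Terms: {v: 'x'} for a variable, {op: '*', args: [a, b]} for an application.
const CLOSED_OPS = new Set(['+', '*', '-']);

// Reduce an assumption "R t" to a list of "R v" facts over the variables
// of t, provided every operator in t is closed on R; otherwise return
// null and keep the original assumption.
function reduceRealAssumption(term) {
  if (term.v !== undefined) return [{pred: 'R', arg: term.v}];
  if (!CLOSED_OPS.has(term.op)) return null; // not reducible
  const parts = term.args.map(reduceRealAssumption);
  if (parts.some(p => p === null)) return null; // some subterm not reducible
  return parts.flat(); // R (a op b)  ==>  R a & R b
}

// Example: R (x * (y + z))  ==>  R x & R y & R z
const t = {op: '*', args: [{v: 'x'}, {op: '+', args: [{v: 'y'}, {v: 'z'}]}]};
console.log(reduceRealAssumption(t).map(a => `R ${a.arg}`).join(' & '));
// prints "R x & R y & R z"
```

The real bookkeeping is of course more involved, but this is the essential shape of the rewrite.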

Although the overhead of managing assumptions is a drag on performance, I
am pleased so far with this approach and its apparent usability, and I wish
to continue in this style, with new datatypes being "individuals" in the
type system. Functions and predicates remain part of the logic's own type
system, but I believe this can be implemented to be sufficiently compatible
with traditional math education.

That is why I am interested in learning if others have experience with
logics and/or systems that extend the type system in this style. A little
passage from a well-known old paper by Dana Scott gives me some
encouragement as well. In "A type-theoretical alternative to ISWIM, CUCH,
OWHY", written in 1969 but only published in 1993, he comments:

"My present view is that all the data should be kept in type "i".  . . .
Numbers are only *one* type of data. We could imagine D_i as being divided
into *many disjoint* parts, each part with its own structure and with
axioms for the structure given in the same style as in A."


I hope I am not taking Scott's comments too much out of context.

Everything in your email is interesting to me, and some of it is new or
clarifying at the very least. While my overall mathematical sophistication
is not nearly to the level of many members of this list, I have paid a lot
of attention to the foundational and what I would call philosophical issues.

For example, "junk" theorems are certainly a concern for me, and urelements
seem like a fine approach. The suggestion that

one could in principle add (akin to type definitions in HOL4) new sets of
atoms disjoint from the set of all previously-defined atoms; then objects
that are conceptually not sets (numbers, etc.) could be introduced as atoms
and there would not be “junk theorems”


makes perfect sense to me. But I think that overall, type theory is more
suited to my purposes, with collections of atoms ("individuals") distinct
(as appropriate) from all previously defined ones, each new collection
identified by one or more new predicates. So I would like to "do that" with
the individuals in simple type theory.

Just a couple of short comments on the attractions of type theory.  For me,
the axioms of Q0 in particular are models of what axioms ought to be. In
type theory, functions and predicates are fundamental, so except perhaps
for Currying, the road is immediately open to working with traditional
statements about numbers and other concepts as well. And of course
Russell's Paradox is not an issue, as you point out.

I do appreciate the important contributions set theory has made to
mathematics. Beyond that, foundational issues such as fitting category
theory into mathematical foundations are immensely intriguing, but I think
it unlikely that I myself will have the opportunity to properly grasp the
issues, at least not in this lifetime.

Best regards,
Cris


On Tue, Jan 23, 2018 at 10:16 AM, Mario Castelán Castro <
[email protected]> wrote:

> On 22/01/18 18:35, Cris Perdue wrote:
> > Are any of you readers of this list aware of investigations or
> > implementations of logics that, like the HOL family of provers, are based
> > on the simple theory of types, but which support introduction of new
> types
> > through new predicates rather than new type constants? Presumably numbers
> > then could be individuals, like other individuals, with some being
> > integers, some real, and so on.
>
> Hello.
>
> I really do not understand what you are looking for.
>
> If you want something like “subtypes”, you can use restricted
> quantification (theory "res_quan"). E.g. You can define a term “ODD:(num
> -> bool)” and then quantify as in “∀x::ODD. P x”.
>
> However, many tactics in HOL4 are blind to restricted quantification.
> Fixing this is just a matter of making a change in the source code or
> writing a wrapper. In my repository of HOL4 developments
> <https://puszcza.gnu.org.ua/projects/hol-proofs> I have a wrapper for
> “MESON_TAC” and a tactic that is like “MATCH_MP_TAC” but extended in
> that it adds support for restricted quantification and other things.
>
> Alternatively, you can embed a type system with coercing/casting
> functions. E.g. You have a function “COERCE_ODD (:num -> :num)” which
> returns the same number if it is odd, and an arbitrary odd number
> otherwise. Then to quantify over odd numbers, you write “∀x. P
> (COERCE_ODD x)” instead of (with the above approach) “∀x::ODD. P x”.
>
> As far as I know: Simple type theory has no means to introduce new
> types; that is a conservative extension found in some proof assistants
> like HOL4. The types in Church's formulation [1] are the individuals
> (type “ind” in HOL4), propositions (roughly type “bool” in HOL4; but
> Church allows the possibility of intensional propositional functions),
> and the scheme: if “'a” and “'b” are types, then “'a -> 'b” is a type.
>
> I would like to share some of my observations regarding type theories
> compared to set theories:
>
> If you want, roughly speaking, that all objects are treated the same (as
> “individuals”), set theory already does that. I am aware that a common
> complaint against set theory is the existence of “junk theorems” like “0
> ∈ 1” (E.g.: this holds if 0 and 1 are ordinals with the usual
> Mirimanoff-von Neumann construction, or reals with the Dedekind
> construction). But this is unfounded. In a proof assistant based on set
> theory with atoms (a.k.a. “urelements”) one could in principle add (akin
> to type definitions in HOL4) new sets of atoms disjoint from the set of
> all previously-defined atoms; then objects that are conceptually not
> sets (numbers, etc.) could be introduced as atoms and there would not be
> “junk theorems”; this is like a typical type theory, but instead of
> having bad-typed formulas, you would have theorems of non-elementhood
> like “∀S. 1 ∉ S”.
>
> Another complaint against well-founded (Zermelo-style) set theories is
> that in general, the set of all {groups, topological spaces, categories,
> etc} is not itself a set. The set theory New Foundations, which is not
> well-founded, has a set of all sets, but the price it pays is that in
> general separation does not hold (i.e.: There exists a set S and formula
> P in which x is free, for which {x | x ∈ S ∧ P} is _not_ a set). I hear
> that it has problems with constructions like “the set of all categories”
> but I have not investigated into that. By contrast, I am not aware of
> any type theory which has an equivalent of a universal set. Usually
> either there are terms without a type, or “kinds” that are not types,
> which is the antithesis of a universal set/type.
>
> Polymorphism gives _the illusion_ that one has the set of all sets _and_
> at the same time the separation axiom; but it is just an illusion, for
> if that was really the case the system would contain a contradiction
> (i.e.: Russell paradox). For example, in HOL4 you have the seemingly
> universal type {x | T}; and for any P, if “{x | P x}” is well-typed then
> it is a “set”. “UNIV ∈ UNIV” is provable; yet the occurrences of “UNIV”
> have different types; trying to derive “∃S. S ∈ S” fails because it is
> not even a WFF.
>
> Polymorphism is usually found in type theories, and it is usually hidden
> by the printer so it is not very conspicuous, thus enhancing the
> illusion of a very expressive theory. But this is just the current
> trend. One can have polymorphism in set theory too. In a vague sense, New
> Foundations can be seen as having polymorphism “built-in”. It can be
> added to ZFC if one restricts the scope of quantifiers to sets. If you
> add classes (as in NBG) and pretty-printing on top, it too can give the
> illusion of having universal comprehension. Now, I _imagine_ that by a
> suitable limitation of quantification one could have a set of atoms
> “True” and “False” in a ZFC+atoms and then have propositions as ordinary
> functions. However I have not explored this possibility.
>
> In my opinion, sophisticated type theories (specifically, those with
> types that can depend on terms, like the Calculus of Constructions) merely
> make more complex what can already be expressed in set theory (incl.
> set-based model theory), while making it seem as if it were something
> “deep” through syntax tricks (e.g.: intentional conflation of terms and
> proofs).
> We can already see this phenomenon (a theory that seems better; yet is
> not on careful examination) with classical second-order logic: It seems
> more “expressive” than first-order logic until you realize that all
> computable formulations of second-order logic have the same shortcomings
> as first-order logic (E.g.: A computable second-order logic can be
> complete w.r.t. frugal models, never w.r.t. standard models; and with
> frugal models you again have Skolem paradox and non-standard models for
> second order PA).
>
> Set theory has given us deep theorems with relevance outside itself,
> like the independence of the choice axiom, and relative consistency
> proofs for weaker systems like PA. I am not aware of any result like
> that coming from type theory.
>
> [1]: “A Formulation of the Simple Theory of Types”. Alonzo Church; 1940.
>
> > Any references to results of existing efforts to explore the potential
> > and/or issues raised by such arrangements would be much appreciated.
> >
> > A significant part of my own interest in this approach is related to
> > usability by non-specialists, and in that sense style might be more an
> > issue than ultimate expressive power. Ideally such a system would also
> > support a relatively flexible and rich system of "types", without the
> need
> > to bring in the conceptual complexity of dependent types, and their
> > accompanying notations.
> >
> > Best regards and thanks in advance,
> > Cris
>
>
>
>
>
> ------------------------------------------------------------
> ------------------
> Check out the vibrant tech community on one of the world's most
> engaging tech sites, Slashdot.org! http://sdm.link/slashdot
> _______________________________________________
> hol-info mailing list
> [email protected]
> https://lists.sourceforge.net/lists/listinfo/hol-info
>
>
