diff --git a/tex/ch0.tex b/tex/ch0.tex index 622c375..6b527d6 100644 --- a/tex/ch0.tex +++ b/tex/ch0.tex @@ -10,23 +10,22 @@ \chapter*{Introduction} Four Color Theorem~\cite{Gonthier08} and of the Odd Order Theorem~\cite{gonthier:hal-00816699}. -The reason of existence of this book is to break down the barriers to entry. +The reason for this book's existence is to break down the barriers to entry. While there are several books around covering the usage of the \Coq{} system~\cite{BC04,SF,CPDT} and the theory it is based on~\cite{Coq:manual}\cite{paulinmohring:hal-01094195,hottbook}, the \mcbMC{} library is built in an unconventional way. As a consequence, this book provides a -non-standard presentation of \Coq{}, putting upfront the formalization +non-standard presentation of \Coq{}, putting up front the formalization choices and the proof style that are the pillars of the library. -This book targets two classes of public. On the one hand, newcomers, +This book targets two kinds of audience. On the one hand, newcomers, even the more mathematically inclined ones, %may find a soft introduction to the programming language of \Coq{}, Gallina, and the SSReflect proof language. -On the other hand accustomed \Coq{} users %will hopefully -find a +On the other hand, accustomed \Coq{} users will, we hope, find a substantial account of the formalization style that made the \mcbMC{} library possible. @@ -39,7 +38,7 @@ \chapter*{Introduction} given as soon as possible sufficient tools to prove non-trivial mathematical results by reusing parts of the library. By the end of the first part, the reader has learned how to prove formally the -infinitude of prime numbers, or the correctness of the Euclidean's +infinitude of prime numbers, or the correctness of the Euclidean division algorithm, in a few lines of proof text. 
\paragraph{Acknowledgments.} @@ -83,21 +82,21 @@ \subsection*{Part 1: \partonename{}} % proofs gradually, and takes benefit from frequent interactions with % the system. \emph{SSReflect} is a language designed to -ease the activity of writing and maintaining formal proofs. +ease the task of writing and maintaining formal proofs. In particular the maintenance of large formal libraries requires a solid writing discipline and a language that supports it. % . to be tractable, % as it is the case for any large corpus of code. -\emph{SSReflect} provides linguistic constructs well adapted to +\emph{SSReflect} provides linguistic constructs that are well-adapted to writing scripts that can be easily fixed in response to % ancillary changes to the contents of the formal libraries. % or changing the formal % definitions. -Actually, \mcbSSR{} is firstly the name of a +Actually, \mcbSSR{} is first and foremost the name of a formalization methodology, sometimes also called \emph{Boolean Reflection}. -Initially conceived for the formal proof +Initially conceived for the formal proof of the Four Color Theorem, it became a pillar of the \mcbMC{} library and of the formal proof of the Odd Order Theorem. The SSReflect proof language was named after this methodology @@ -159,7 +158,7 @@ \subsection*{Conventions} \Coq{} code is in \C{typewriter} font and \C{(surrounded by parentheses)} when it occurs in the middle of the text. Longer snippets are in boxes with line -numbers like the following one: +numbers, like the following one: \begin{coq}{}{title=A sample snippet} Example Gauss n : \sum_(0 <= i < n.+1) i = (n * n.+1) %/ 2. diff --git a/tex/chFingroup.tex b/tex/chFingroup.tex index c83cfb5..c00af9a 100644 --- a/tex/chFingroup.tex +++ b/tex/chFingroup.tex @@ -4,17 +4,17 @@ \chapter{Finite group theory} the \C{fingroup} and \C{solvable} directories.
It covers a substantial part of~\cite{gorenstein2007finite} and~\cite{9781139175319} ranging from basic notions like morphisms, quotients, actions, cyclic and p-groups to more -substantial topics as Sylow and Hall groups, the Abelian -structure theorem and the Jordan Holder theorem. +substantial topics such as Sylow and Hall groups, the abelian +structure theorem and the Jordan-H\"older theorem. The paper~\cite{DBLP:conf/mkm/Mahboubi13} describes the constructions and formalization techniques -needed in order to prove the Jordan Holder theorem. +needed in order to prove the Jordan-H\"older theorem. \paragraph{contents} \lib{..} -\paragraph{Basic formalization choices} To ease the study of sub-groups, +\paragraph{Basic formalization choices} To ease the study of subgroups, the type of groups is indexed over a container group called \C{finGroupType}, fixing the group law and imposing finiteness by inheriting from \C{finType}. @@ -25,16 +25,16 @@ \chapter{Finite group theory} Concepts already existing on sets are kept there and not redefined. Type inference is programmed to infer the group structure whenever possible. For example the type of \C{(G :&: H)} is \C{\{set gT\}} but such an expression -is automatically promoted to a group when needed, similarly to what +is automatically promoted to a group when needed, similarly to what was done in section~\ref{sec:tupleinvariant} for tuples. -group concepts usually here defined on sets, but then -possibly using the generated group operation to fulfill -stricter requirements. This generalized the math concept and -ease the integration with set stuff. This is possible because -the container is a group and we can always generate from a set. +Group concepts are usually defined on sets, but then +possibly use the generated group operation to fulfill +stricter requirements. This generalizes the math concept and +eases the integration with set theory.
This is possible because +the container is a group and we can always generate one from a set. -morphisms carry domain set in type. +Morphisms carry their domain set in their type. \paragraph{Notations} The same infix \C{*} notation can be used to multiply both sets (or groups) and their points. For example @@ -42,7 +42,7 @@ \chapter{Finite group theory} $g\cdot h \in \{ x\cdot y | x \in G, y \in H\}$. Given the \C{finGroupType} container many open notations are also supported, for example the normalizer of \C{A}, written \C{'N(A)} for \C{A} being a \C{\{set gT\}}, is a sensible -writing. Another bonus from the container. +writing, another bonus from the container. \begin{coq}{}{} Definition normaliser A := [set x | A :^ x \subset A]. @@ -63,7 +63,7 @@ \chapter{Finite group theory} notations for actions acts-on, transitive, Fix, 'N(A|P) and co. -\paragraph{Formalization trick: totality of operations} In standard math one +\paragraph{Formalization trick: totality of operations} In standard mathematics, one can write \C{A/H} only if the normality condition \C{(H <| A)} holds. Such construction is made total by defining \C{A/H} as \C{(('N(H) :&: A) * H) / H)}, i.e. \C{A} is intersected with @@ -76,7 +76,7 @@ \chapter{Finite group theory} Lemma quotientMl A B : A \subset 'N(H) -> A * B / H = (A / H) * (B / H). \end{coq} Remark that the equational form lets you require only one precondition: -if \C{B} exceeds the \C{'N(H)} then the equation still holds, since +if \C{B} exceeds \C{'N(H)}, then the equation still holds, since the intersection with the normalizer of \C{H} occurs on both sides. A similar choice is used for the (semi) direct and central product. @@ -98,7 +98,7 @@ \chapter{Finite group theory} the trivial group. 
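As an illustration of this equational form, the lemma applies under the single hypothesis on \C{A}; the following sketch (section variables added here only for the example, and assuming the \C{fingroup} and \C{quotient} libraries are loaded) replays the statement quoted above:
\begin{coq}{}{}
Section QuotientSketch.
Variables (gT : finGroupType) (H : {group gT}) (A B : {set gT}).
Hypothesis nHA : A \subset 'N(H).

(* B need not normalize H: the hypothesis on A alone suffices *)
Lemma quotientMl_example : A * B / H = (A / H) * (B / H).
Proof. by apply: quotientMl. Qed.
End QuotientSketch.
\end{coq}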
\paragraph{Formalization trick: presentations} One cannot decide if a group -presented via generators and relations is finite, hence the integration of such +presented via generators and relations is finite, hence the integration of such a concept in a library of finite groups is tricky. The notation \C{(G \\homg Grp (x : (x ^+ n)))} states that the finite group \C{G} is generated by a single point \C{x} of order \C{n} (as in the standard @@ -106,7 +106,7 @@ \chapter{Finite group theory} The trick is that the notation hides an existential quantification postulating the existence of a finite tuple of generators satisfying the equations and building \C{G}: the potentially infinite object is never -built as well the \C{\\homg} relation, standing for homomorphic image, +built, and the \C{\\homg} relation (standing for homomorphic image) suggests the fact that \C{G} is the image of the presented group, not the group itself. Notation \C{(G \\homg Grp (x_1 : .. x_n : (s_1 = t_1, .., s_m = t_m)))} @@ -118,7 +118,7 @@ \chapter{Finite group theory} == (G, (t_1, .. (t_m-1, t_m) ..))] \end{coq} Such a formula is generated reflexively, i.e. by a \Coq{} program that -takes in input the syntax of the presentation and produces that statement. +takes as input the syntax of the presentation and produces that statement. Remark the $m+1$ components compared by \C{(_ == _)}. It first compares the group generated by the generators \C{x_1 .. x_n} with \C{G}; then it compares all the expressions being related. @@ -145,6 +145,6 @@ \chapter{Representation and Character theory} \cite{isaacs1976character} -As an example of application, in particular of the linear algebra +As an example of an application of the library, in particular of its linear algebra theory.
diff --git a/tex/chGraph.tex b/tex/chGraph.tex index 8d4a833..6e84f74 100644 --- a/tex/chGraph.tex +++ b/tex/chGraph.tex @@ -13,8 +13,8 @@ \chapter{Finite graph theory} \item ordinals: injections, enumeration of/projection from a finite set. Example of indexing by a finite type (and not In). \item pick -\item tuples, as indexation domains, that can be enumerated by an - ordinal and avoid dflt element (cf \C{tnth} in +\item tuples, as indexing domains, that can be enumerated by an + ordinal and avoid default element (cf \C{tnth} in 6.4). \item may be restriction of finite functions by a predicate? \item images (of finite functions) diff --git a/tex/chHierarchy.tex b/tex/chHierarchy.tex index 3d2b88f..ba4d5e5 100644 --- a/tex/chHierarchy.tex +++ b/tex/chHierarchy.tex @@ -42,12 +42,12 @@ \chapter{Organizing Theories}{} dependent types, coercions and canonical instances of structures. It is only a ``simple matter of programming'', albeit one that involves some new formalisation idioms. This chapter describes the most -important: telescopes, packed classes, and phantom parameters. +important ones: telescopes, packed classes, and phantom parameters. % class coercions, and quotation. While some of these formalisation patterns are quite technical, casual users do not need to master them all. Indeed the documented interface -of structures suffices to use and declare instances of structures. We +of structures suffices for using and declaring instances of structures. We describe these interfaces first, so only those who wish to extend old or create new hierarchies need to read on. @@ -157,7 +157,7 @@ \section{Structure interface}\label{str:itf} Line 1 bundles the additive operations (0, $+$, $-$) and their properties in a \emph{mixin}, which is then used in line 2 to create a -canonical instance. After line 2 all the additive algebra provided in +canonical instance. 
After line 2 all the additive algebra provided by \lib{ssralg} becomes applicable to \lstinline/'I_p/; for example \C{0} denotes the zero element, and \C{i + 1} denotes the successor of \C{i} mod $p$. @@ -213,7 +213,7 @@ \section{Structure interface}\label{str:itf} Constraining the shape of the modulus $p$ is a simple and robust way to enforce $p>1$: it standardizes the proofs of $p>0$ and $p>1$, which -avoid the unpleasantness of multiple interpretations of \C{0} +avoids the unpleasantness of multiple interpretations of \C{0} stemming from different proofs of $p>0$ --- the latter tends to happen with ad hoc inference of such proofs using canonical structures or tactics. The shape constraint can however be inconvenient when the @@ -705,9 +705,9 @@ \section{Packed classes}\label{sec:packed} Two structures extend rings independently: \C{comRingType} provides multiplication commutativity, and \C{unitRingType} provides computable inverses for all units (i.e., invertible elements) along with a test -of invertibility. These structures are incomparable, and there are reasonable +of invertibility. These structures are incomparable, as there are reasonable instances of each: $2\times 2$ matrices over $\mathbb{Q}$ have -computable inverses but do not commute, while polynomials over +computable inverses but do not commute, whereas polynomials over $\mathbb{Z}_p$ commute but do not have easily computable inverses. 
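These two instances can be checked directly; the following sketch assumes the \lib{algebra.matrix}, \lib{algebra.poly}, \lib{algebra.rat} and \lib{algebra.zmodp} libraries are loaded, and uses the clone notations of \lib{ssralg}:
\begin{coq}{}{}
(* 2x2 rational matrices: a unit ring (decidable invertibility), not commutative *)
Check [unitRingType of 'M[rat]_2].
(* polynomials over 'Z_3: a commutative ring, inverses not easily computable *)
Check [comRingType of {poly 'Z_3}].
\end{coq}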
The respective definitions of \C{comRingType} and \C{unitRingType} follow exactly the pattern we have seen, @@ -875,7 +875,7 @@ \section{Parameters and constructors}\label{sec:phant} For example, each packed class contains exactly the same definition of the clone constructor\footnote{More precisely, the situation is slightly different - in the case of structure with a parameter, like modules.}, following the introduction of + in the case of a structure with a parameter, like modules.}, following the introduction of section variables \C{T} and~\C{cT}, and the definition of \C{class}: \begin{coq}{name=phant21}{} @@ -912,7 +912,7 @@ \section{Parameters and constructors}\label{sec:phant} The code for the instance constructor for \C{choiceType} is almost identical, because it only extends \C{eqType} with a mixin that does not depend on \C{eqType}. -Note that this definition allows \Coq{} to infer \C{T} from \C{m}. +Note that this definition lets \Coq{} infer \C{T} from \C{m}. \begin{coq}{name=phant23}{} Definition pack T m := @@ -981,7 +981,7 @@ \section{Linking a custom data type to the library} \begin{coq}{}{} Inductive windrose := N | S | E | W. \end{coq} -The most naive way to show that \C{windrose} is a \C{finType} is +The most na\"ive way to show that \C{windrose} is a \C{finType} is to provide a comparison function, then a choice function, \ldots finally an enumeration. Instead, it is much simpler to show one can put \C{windrose} in bijection with a pre-existing finite type, @@ -1027,12 +1027,12 @@ \section{Linking a custom data type to the library} Definition windrose_finMixin := PcanFinMixin pcan_wo4. Canonical windrose_finType := FinType windrose windrose_finMixin. \end{coq} -Only one tiny detail has been left on the side so far. To use +Only one tiny detail has been left aside so far.
To use \C{windrose} in conjunction with the \C{\\in} infix notation or with the notation \C{#|...|} for cardinality, the type declaration has to be tagged as an instance of the \C{predArgType} structure, for types -which model a predicate, as it is the one which supports the latter -notations. It can be done as follows. \index[coq]{\C{preArgType}} +which model a predicate, as it is the structure that supports the latter +notations. It can be done as follows. \index[coq]{\C{predArgType}} \begin{coq}{}{} Inductive windrose : predArgType := N | S | E | W. @@ -1047,7 +1047,7 @@ \section{Linking a custom data type to the library} A generic technique is available in order to equip a data type with structures of \C{eqType}, \C{choiceType}, and \C{countType}. It consists in providing a correspondence with the generic tree data type -\C{(GenTree.tree T)}: an n-ary tree with nodes labelled with +\C{(GenTree.tree T)}: an n-ary tree with nodes labeled with natural numbers and leaves carrying a value in \C{T}. \index[coq]{\C{GenTree.tree}} diff --git a/tex/chLinalg.tex b/tex/chLinalg.tex index ef0a575..7d30bdd 100644 --- a/tex/chLinalg.tex +++ b/tex/chLinalg.tex @@ -1,11 +1,11 @@ \chapter{Linear algebra} \label{linalg:smith} -The library covers linear algebra in finite dimension in two distinct +The library covers linear algebra in finite dimensions in two distinct and almost independent parts. The first part provides a notion of abstract finite dimension vector spaces on top of the algebraic hierarchy. The second part represents vectors, families of vectors, -spaces and linear maps using the same matrix operations, and with a +spaces, and linear maps using the same matrix operations and with maximal sharing of operations. 
Warning: If you are going to formalize linear algebra, do not choose @@ -21,27 +21,27 @@ \section{Vector spaces} \begin{itemize} \item We provide an abstract interface for vector spaces, which - inherits from the interface from left modules from the algebraic + inherits from the interface for left modules from the algebraic hierarchy, together with the axiom \C{Vector.axiom_def} which states an isomorphism between the vector space and $R^n$ for some $n$. This axiom should be read as: ``any vector is uniquely determined by its - coordinates in a fixed basis'' All the theory of this interface is + coordinates in a fixed basis''. All the theory of this interface is described in \lib{algebra.vector}. \end{itemize} \paragraph{Formalization choices} The library provides reasoning on vector spaces (addition, complement, -intersection) and linear maps (identity, composition and inverse) as a -primitive objects, with a few bridges with elements (taking the space +intersection) and linear maps (identity, composition, and inverse) as +primitive objects, with a few bridges to elements (taking the space spanned by a family of vectors, taking a basis or picking a non-zero element). \emph{There is no bridge from and to matrices at this stage}, so if you want to deal with matrix representations of linear maps, you would rather use the other linear algebra library. The incomplete basis theorem is not provided as a theorem in the -library but is rather viewed as a composition of operations: first -take the complement space and the get its basis. +library but is rather viewed as a composition of operations: first, +take the complement space and then get its basis. 
\begin{coq}{}{width=8cm} Theorem incomplete_basis_theorem (F : fieldType) (E : vectType F) (v : seq E) : @@ -65,7 +65,7 @@ \section{Matrix based linear algebra} \paragraph{Contents} \begin{itemize} -\item The type of matrices and basic operation on them (\textit{i.e.} +\item The type of matrices and basic operations on them (\textit{i.e.} operations independent from linear algebra) are defined in the library \lib{algebra.matrix}. @@ -74,8 +74,8 @@ \section{Matrix based linear algebra} \item The library \lib{algebra.mxalgebra} overloads matrix product which means at the same time: composition of linear maps, - application of maps and image of space by a map. This simplifies the - theory as there is less lemmas to write. + application of maps, and the image of a space by a map. This simplifies the + theory as there are fewer lemmas to write. \end{itemize} @@ -84,7 +84,7 @@ \section{Comparison between vector spaces and matrix algebra} \label{sec:comp-betw-vect} The library \lib{algebra.vector} has the same primitives as mxalgebra -but with additional casts to go from one type to the other, that were +but with additional casts to go from one type to the other, that are identified in mxalgebra. 
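The overloading of the matrix product mentioned above can be sketched as follows (section variables are added here only for illustration; \C{*m} and the \C{%MS} inclusion are the notations of \lib{algebra.mxalgebra}):
\begin{coq}{}{}
Section MxalgebraSketch.
Variables (F : fieldType) (m n p : nat).
Variables (A : 'M[F]_(m, n)) (f : 'M[F]_(n, p)).

(* A *m f is at once a matrix product, the application of the map f,
   and the image of the space A under f *)
Check (A *m f) : 'M[F]_(m, p).
(* read as spaces, the rows of A *m f lie in the image of f *)
Check ((A *m f <= f)%MS : bool).
End MxalgebraSketch.
\end{coq}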
A usual treatment of linear algebra would require the use of mxalgebra @@ -103,8 +103,8 @@ \section{Detailed Contents} The starting point for linear algebra is the library \lib{algebra.matrix}, which contains the basic definition of matrices -(as a table of number represented by finite functions), and the -following ``basic'' operations that are independent form the type: +(as a table of numbers represented by finite functions), and the +following ``basic'' operations that are independent of the type: \begin{itemize} \item building a matrix from its coefficients using big notations \C{$\backslash$matrix_(i < m, j < n) c_ i j}, \C{$\backslash$row_(j < n) c_ j}, \C{$\backslash$col_(i < @@ -118,12 +118,12 @@ \section{Detailed Contents} (\C{row'}, \C{col'}) \item exchanging two rows \C{xrow} or two columns \C{xcol}, or permuting rows \C{row_perm} or columns \C{col_perm} using a - permutation from the library \lib{fingroup.perm} + permutation from the library \lib{fingroup.perm}. \item building block matrices \end{itemize} If the base type has more structure, then one can provide more -structure on matrices as well. In particular matrix type inherit the +structure on matrices as well. In particular matrix types inherit the \C{eqType}, \C{choiceType}, \C{countType}, \C{finType}, and \C{zmodType} structures. When the base type is a ring, we also have: \begin{itemize} @@ -136,21 +136,21 @@ \section{Detailed Contents} $E_{i_0,j_0} = \left(\delta_{i,i_0}\cdot\delta_{j,j_0}\right)_{i,j}$ (\C{delta_mx}).
\item Permutation matrices \C{tperm_mx}, \C{perm_mx} -\item Trace \C{$\backslash$tr} and determinate \C{$\backslash$det} +\item Trace \C{$\backslash$tr} and determinant \C{$\backslash$det} \item The cofactor \C{cofactor} and the adjoint matrix \C{$\backslash$adj} \item Invertibility test \C{unitmx} and inversion of a matrix \C{invmx} \end{itemize} -\paragraph{linear maps and spaces as matrices} +\paragraph{Linear maps and spaces as matrices} The library \lib{algebra.mxalgebra} provides an interpretation of row matrices as vectors and matrices both as linear maps and as the -subspace of $R^n$ generated by the row vectors of matrices. Note that +subspace of $R^n$ generated by the row vectors of the matrices. Note that this particular choice, \textit{i.e.} representing vectors as rows and not as columns, implies the use of the English convention instead of -the french convention for application of a linear map to a +the French convention for application of a linear map to a vector, since we write: $v \cdot f$ for the application~$f(v)$ of~$f$ to~$v$. @@ -165,7 +165,7 @@ \section{Detailed Contents} \item The rank of the matrix is also the dimension of the space generated by its row vectors, \C{$\backslash$rank} \item Test whether the family of row vectors of a matrix is free or - full (\C{row_free}, \C{row_full} + full (\C{row_free}, \C{row_full}). \item Computation of the basis of the space \item Test \end{itemize} @@ -173,15 +173,15 @@ \section{Detailed Contents} Basic operations on matrices seen as linear maps: \begin{itemize} \item The kernel (\C{kermx}) of the map, represented as a matrix -\item Test if a map has a given eigenvalue, and the eigenspace - associated with it, represented as a matrix +\item Test if a map has a given eigenvalue and the eigenspace + associated with it, represented as a matrix. 
\end{itemize} \paragraph{Advanced features on matrices} -There are also more advanced operations, that are unrelated to the -math: +There are also more advanced operations that are unrelated to +mathematics: \begin{itemize} \item changing the type of the matrix using equalities on its dimensions (\C{castmx}) or forcing a matrix to have some dimensions diff --git a/tex/chNumbers.tex b/tex/chNumbers.tex index 31bae03..6c0da11 100644 --- a/tex/chNumbers.tex +++ b/tex/chNumbers.tex @@ -1,6 +1,6 @@ \chapter{Numbers} -The library contains various data structure for ``numbers'' +The library contains various data structures for ``numbers'' starting from natural numbers, integers, rationals, Cauchy reals, as well as related constructions such as modular integers, algebraic real numbers and algebraic complex numbers. @@ -10,27 +10,27 @@ \chapter{Numbers} \item Natural numbers $\mathbb{N}$ play a special role in the \mcbMC{} library as they -are such a basic notion that many other concept needs it (e.g., sequences, \ldots). +are such a basic notion that many other concepts need them (e.g., sequences, \ldots). The definition of elementary arithmetic operations on natural numbers is given and studied in \lib{ssreflect.ssrnat}, except for division -that is given in \lib{ssreflect.div}. Elementary properties of -prime numbers and factorization in primes are proved in +which is given in \lib{ssreflect.div}. Elementary properties of +prime numbers and factorization into primes are proved in \lib{ssreflect.prime}. Falling factorial and binomial coefficients are defined and studied in \lib{ssreflect.binomial}. \item The data structure of integers $\mathbb{Z}$ is defined in \lib{algebra.ssrint} -as well as the operations that equip it with a structure of +as well as the operations that equip it with the structure of an ordered ring. Library \lib{algebra.intdiv} provides various results about integer divisibility.
This library also provides results on polynomials and matrices with integer coefficients such as properties of the content of a polynomial or the existence of the Smith normal form for integer matrices (section~\ref{linalg:smith}). Library \lib{algebra.zmodp} defines arithmetic modulo a natural number $n$ -and the field structure of $\mathbb{Z}/\mathbb{Z}_p$ for $p$ a prime. +and the field structure of $\mathbb{Z}/p\mathbb{Z}$ for $p$ a prime. \item -The library \lib{algebra.rat} defined the ordered Archimedean ring of +The library \lib{algebra.rat} defines the ordered Archimedean ring of rational numbers $\mathbb{Q}$. \item @@ -42,7 +42,7 @@ \chapter{Numbers} Library \lib{real\_closed.cauchyreals} defines Cauchy real numbers. \item -The generic theory of ... and order are in \lib{algebra.ssrnum} +The generic theory of numbers and order is in \lib{algebra.ssrnum} and \lib{algebra.ssralg}. \end{itemize} diff --git a/tex/chProgramming.tex b/tex/chProgramming.tex index 1693baf..62799a6 100644 --- a/tex/chProgramming.tex +++ b/tex/chProgramming.tex @@ -22,7 +22,7 @@ \chapter{Functions and computation}\label{ch:prog} % \Coq{}, how to perform computations and how to define basic % mathematical objects. -This chapter provides code snippets that can be loaded in a Coq session, +This chapter provides code snippets that can be loaded in a \Coq{} session, after the header described at the end of the introduction chapter.
These snippets should be loaded in order and without omission to avoid mistakes that would take too long to explain @@ -30,7 +30,7 @@ \chapter{Functions and computation}\label{ch:prog} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section{Functions}\label{sec:functions} - Before being more +Before being more precise about the logical foundations of the \Coq{} system in chapter~\ref{ch:ttch}, we review in this section some vocabulary and notations associated @@ -67,7 +67,7 @@ \subsection{Defining functions, performing of the binary operation of addition, applied to two natural numbers $2$ and $1$. The same expression can also represent the result of the unary operation of {\em adding one on the right to a natural number}, -applied to a natural number $2$. +applied to the natural number 2. In \Coq{}, the operation of {\em adding one on the right to a natural number} is written in the following manner: @@ -271,7 +271,7 @@ \subsection{Defining functions, performing in the \mcbMC{} library, as we shall see in chapter~\ref{ch:proofs}. It is however too early to be more precise about this normalization strategy. This would indeed -require describing in more details the formalism underlying the +require describing in more detail the formalism underlying the \Coq{} system, which we do only in chapter~\ref{ch:ttch}. The interested reader can consult \cite[section 5.3.7, ``Performing computations'']{Coq:manual} for @@ -518,10 +518,10 @@ \subsection{Higher-order functions} (and adding redundant parentheses for clarity), we obtain \C{(|==(nat -> nat)==| -> (|--nat--| -> nat))}: Thus, once the first argument -\C{(f : |==nat -> nat==|)} is passed, we obtain a term the type of which is +\C{(f : |==nat -> nat==|)} is passed, we obtain a term whose type is the right hand side of the main arrow, that is \C{(|--nat--| -> nat)}. 
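The step just described can be replayed in a session; here is a small sketch, where \C{twice} is a hypothetical higher-order function and \C{succn} is the successor function:
\begin{coq}{}{}
Definition twice (f : nat -> nat) (n : nat) : nat := f (f n).

Check twice.         (* : (nat -> nat) -> nat -> nat *)
Check (twice succn). (* : nat -> nat, after passing the first argument *)
\end{coq}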
By passing another argument, like \C{(2 : |--nat--|)}, we obtain an expression -the type of which is, again, the right hand side of the main arrow, +whose type is, again, the right hand side of the main arrow, here \C{nat}. Remark that each function has an arrow type and that the type of its argument matches the left hand side of this arrow (as depicted with @@ -867,7 +867,7 @@ \subsection{Natural numbers}\index[concept]{natural number} \coqrun{name=pred_run}{ssr} \index[coq]{\C{predn} |seealso {\C{.-1}}} -Observe that in the definitions of functions \C{predn} and +Observe that in the definitions of functions \C{pred} and \C{non_zero}, we did omit the type of the input \C{n}: matching \C{n} against the \C{p.+1} pattern imposes that \C{n} has type \C{nat}, as the \C{S} constructor belongs to \emph{exactly} @@ -1139,7 +1139,7 @@ \subsection{Recursion on natural numbers}\label{ssec:recnat} rule thus also covers the case where the second argument is non-zero while the first argument is 0: in that case, the result of the function is zero. Another possible view on -\C{subn} it to see it as a subtraction operation on natural numbers, +\C{subn} is to see it as a subtraction operation on natural numbers, made total by providing a default value in the ``exceptional'' cases. %We shall discuss totality of functions in more detail in %Section~\ref{sec:total}. @@ -1325,14 +1325,14 @@ \section{Containers}\label{sec:poly} really use the objects held in the list. A concrete example is the function that computes the size of the list; in the current setting such a function has to be written twice. -Worse, starting from the next chapter we will prove properties of programs, +Second, starting from the next chapter we will prove properties of programs, and given that the two size functions are ``different'', we would have to prove such properties twice. 
However it is clear that the two data types we just defined follow a similar schema, and so do the two functions for computing the size of a list -or the theorems we may prove about these functions. Hence, one would +or the theorems we may prove about these functions. Hence, one might want to be able to write something like the following, where $\alpha$ is a schematic variable: @@ -1387,7 +1387,7 @@ \subsection{The (polymorphic) sequence data type} new list of type \C{(seq A)} by combining an element of \C{A}, the \emph{head} of the sequence, and an existing list of type \C{(seq A)}, the \emph{tail} of the sequence. This -also means that this data-type +also means that this data type does not allow users to construct lists where the first element would be a boolean value and the second element would be a natural number. @@ -1401,7 +1401,7 @@ \subsection{The (polymorphic) sequence data type} that this also means that \C{(seq (seq nat))} is a valid data type (namely, the type of lists of lists of natural numbers), and that the construction can be iterated. -Types again avoid confusion: it is not licit to form the type \C{(seq 3)}, +Types again avoid confusion: it is not valid to form the type \C{(seq 3)}, since the argument \C{3} has type \C{nat}, while the function \C{seq} expects an argument of type \C{Type}.\footnote{For historical reasons \Coq{} may display the type of \C{nat} or \C{bool} as \C{Set} @@ -1708,7 +1708,7 @@ \subsection{Option and pair data types}\label{sec:othercontainers} %pairs that have a natural number as their first component and a boolean %value as the second one. This inductive type has -a \emph{single constructor} \C{mk_pair}. It takes over the two +a \emph{single constructor}, \C{mk_pair}. It takes over the two polymorphic parameters, that become its two first, implicit arguments. The constructor \C{mk_pair} has two more explicit arguments which are the data stored in the pair. 
This constructor is associated with @@ -1892,7 +1892,7 @@ \section{The Section mechanism}\label{sec:section} an input \C{x} (or \C{x'}, or \C{x1}, \ldots), its type is supposed to be \C{T}. Concretely, it lets us omit an explicit type annotation in the definition of programs using \C{x}, such as \C{iter}. -The \C{Implicit Type} command is used frequently in the +The \C{Implicit Type} command is used frequently in the \mcbMC{} library; the reader can refer to~\cite[``Extensions of Gallina'']{Coq:manual} for a more detailed documentation of it. @@ -2114,7 +2114,7 @@ \section{Symbolic computation}\label{sec:symcomp} normalization strategy to perform computation, the \C{simpl} evaluation strategy. One difference between the \C{Eval simpl} and \C{Compute} strategies is that the \C{simpl} one usually leaves -expressions in nicer forms whenever they contain variables. Here again +expressions in more readable forms whenever they contain variables. Here again we point the interested reader to \cite[section 5.3.7, ``Performing computations'']{Coq:manual} for @@ -2290,7 +2290,7 @@ \section{Notations, abbreviations}\label{sec:notabrev} For infix notations, which are meant to be printed between two arguments of the operator (like addition in \C{2 + 3}), we advise to always include -space around the infix operator, so that notations don't get mixed up with +space around the infix operator, so that notations do not get mixed up with potential notations occurring in the arguments. @@ -2334,7 +2334,7 @@ \section{Notations, abbreviations}\label{sec:notabrev} \end{coq} \coqrun{name=notation4}{ssr,notation_gt} -Another frequently used form of notation is called syntactic abbreviation. +Another frequently used form of notation is called a syntactic abbreviation. It simply lets one specify a different name for the same object. For example the \C{S} constructor of natural numbers can also be accessed by writing \C{succn}. 
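Declaring such an abbreviation for one's own definitions is a one-liner; here is a minimal sketch, with a hypothetical \C{double} function:
\begin{coq}{}{}
Definition double n := n + n.
(* dbl is a syntactic abbreviation: a different name for the same object *)
Notation dbl := double.

Eval compute in dbl 3.  (* = 6 *)
\end{coq}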
This is useful if \C{S} is used in the current context @@ -2401,7 +2401,7 @@ \section{Notations, abbreviations}\label{sec:notabrev} specific cases, like binding \C{1} and \C{0} to ring elements. \item Postfix notations begin with \C{.}, as in \C{.+1} and \C{.-group} to let one write \C{(p.-group G)}. There is one exception for - the postfix notation of the factorial, which starts with + a postfix notation of the factorial, which starts with \D{`}, as in \D{m`!}. \item Taking inspiration from \LaTeX{} some symbols begin with \C{\\}, like \C{\\in}, \C{\\matrix}, \C{\\sum}, \ldots diff --git a/tex/chProofLanguage.tex b/tex/chProofLanguage.tex index 88fe853..72dd184 100644 --- a/tex/chProofLanguage.tex +++ b/tex/chProofLanguage.tex @@ -9,7 +9,7 @@ \chapter{A proof language for formal proofs}{} Still, the use of a dedicated proof language enables a higher level description of the formal proof being constructed. -The \mcbSSR{} proof language give us tools to structure proofs and +The \mcbSSR{} proof language gives us tools to structure proofs and to tame bookkeeping, a form of bureaucracy typical of formal proofs. Structure is given by splitting large proofs in blocks with a @@ -20,10 +20,10 @@ \chapter{A proof language for formal proofs}{} precisely cover this need. Bookkeeping is the ubiquitous activity of context management, that is -naming or discarding assumptions in order to track their lifetime to tide -the context up, as well as massaging assumptions and goals as to +naming or discarding assumptions in order to track their lifetime to tidy +the context up, as well as massaging assumptions and goals so as to please the formality requirements of Coq. A paradigmatic bookkeeping -example is destructing assumption of \C{(A && B)} in order to give +example is destructing the assumption \C{(A && B)} in order to give distinct names to \C{A} and \C{B}: unlikely to be a pregnant step of the proof. 
The language of intro-patterns together with the goal-as-stack model provides a very flexible tool to @@ -35,12 +35,12 @@ \chapter{A proof language for formal proofs}{} it~\cite{Weinberg:1985:PCP:536771} and hence repairing it. Moreover, structure confines errors arising after a change to smaller text blocks, and declares what the original author of the block being -repaired was doing effectively implementing a form of check pointing +repaired was doing, effectively implementing a form of checkpointing and documentation. All that works best if the tactics used to fill in the blocks have a predictable behavior: fail early and locally when a breaking change takes place. -This chapter covers the tools to structure and tie proofs up and discusses +This chapter covers the tools to structure and tie up proofs and discusses some of the good practices that made the development of the \mcbMC{} library possible. @@ -300,7 +300,7 @@ \subsection{Pulling from the stack} sacrifices readability in favor of compactness: what would you learn by reading a trivial proof? Of course, finding the right balance only comes with experience. As a rule -of thumb: follow the granularity of how you wound naturally +of thumb: follow the granularity of how you would naturally read your script to someone else. \gotcha{The case intro pattern \C{[..|..]} obeys an exception: when it is the first @@ -487,7 +487,7 @@ \subsection{Pushing to the stack} Note that listing context entry names inside curly braces purges them from the context. For instance the tactic \C{case: y \{odd\_y\}} clears the \C{odd\_y} fact. But this would lead to a dead end in the -present proof, so we don't use it here. +present proof, so we do not use it here. \index[ssr]{\C{tactic : ..}!\C{: \{name\}} (disposal)} @@ -816,7 +816,7 @@ \subsection{Pushing to the stack} \section{Structuring proofs, by examples} So far we've only tackled simple lemmas; most of them did admit a one line -proof. 
When proofs get longer \emph{structure} is the best ally in making +proof. When proofs get longer, \emph{structure} is the best ally in making them readable and maintainable. Structuring proofs means identifying intermediate results, factoring similar lines of reasoning (e.g., symmetries), signaling crucial steps to the reader, and so on. In short, a @@ -940,9 +940,9 @@ \subsection{Primes, a never ending story}\label{sec:infprimes} and not the \C{m1_gt1} fact we proved as an intermediate fact. We can resort to the flexibility of \C{have} to obtain a more -pertinent script: the first argument to \C{have}, here a name, can -actually be any introduction pattern, i.e. what follows -the \C{=>} operator, for example a view application. +pertinent script: the first argument to \C{have} (here a name) can +actually be any introduction pattern, i.e., what follows +the \C{=>} operator, for example, a view application. In the light of that, the script can be rearranged as follows. @@ -1009,11 +1009,10 @@ \subsection{Order and max, a matter of symmetry}\label{sec:leqmax} Lemma leq_total m n : (m <= n) || (n <= m). \end{coq} -Our first attempt takes no advantage of the symmetry argument: -we reason by cases on the order relation, -we name the resulting fact on the same line -(it eases tracking where things come from) and we solve the two -goals independently. +Our first attempt takes no advantage of the symmetry argument: we +reason by cases on the order relation, we name the resulting fact on +the same line, since it eases tracking where things come from, and we +solve the two goals independently. \begin{coq}{}{} Proof. @@ -1129,7 +1128,7 @@ \subsection{Order and max, a matter of symmetry}\label{sec:leqmax} The \C{suff} tactic (or \C{suffices}) is like \C{have} but swaps the two goals. -Note that here the sub proof is now the shortest paragraph. +Note that here the sub-proof is now the shortest paragraph. 
This is another recurrent characteristic of the proof script style we adopt in the \mcbMC{} library. @@ -1233,7 +1232,7 @@ \section{Proof maintenance: a matter of style} During this maintenance period breakage did happen, and breakage will happen again in the future. Things break for many reasons, some of which are even not -under the control of the author of a formal proof. First of all \Coq{} is an +under the control of the author of a formal proof. First of all, \Coq{} is an actively developed research software and changes in behavior, for the better, are hard to avoid and do happen in practice. Second nobody gets his definitions perfect at the first attempt in a formalization. Definitions and lemmas get @@ -1277,7 +1276,7 @@ \section{Proof maintenance: a matter of style} in order to use it in conjunction with a specific tactic. Finally, it is common sense to organize computer code -in components with separate concerns and well defined interfaces: +in components with separate concerns and well-defined interfaces: Unstructured code is well known to be hard to read, understand and locally modify. Also, mathematical theories are often organized and structured, for @@ -1300,7 +1299,7 @@ \subsection{On the readability of formal proofs} and their structure, when present, is hard to find because, at a glance, one can hardly look at the entire text. It is a bit like -typesetting a book by breaking the line after each an every word: in +typesetting a book by breaking the line after each and every word: in order to find the action the subject is performing one would need to turn page. Lengthy proofs without an apparent structure are hard to @@ -1328,7 +1327,7 @@ \subsection{On the readability of formal proofs} An undoubtedly \emph{readable proof step} is a declarative step asserting, in the proof text, a statement that holds. For example by using the \C{have} tactic. 
This step is as readable as the statements of -theorems is: It goes without saying that curating the way statements +theorems: It goes without saying that curating the way statements can be written plays a main role in readability. Moreover, given that the statement is part of the proof text, one does not need the assistance of \Coq{} in order to see it.\\ @@ -1380,7 +1379,7 @@ \subsection{On the readability of formal proofs} is the following one. Structure formal proofs as they are structured on paper. Then, when filling the blocks, use one line of formal text to perform one meaningful proof step for a human reader, and use the best tools you have to -achieve that: At worse, if the step is too obscure, the reader will +achieve that: at worst, if the step is too obscure, the reader will ignore the text, and just see what happens by executing it.\\ In other words, the flow of a proof script should be homogeneous and distill human-readable information at a suitable, regular pace. @@ -1388,7 +1387,7 @@ \subsection{On the readability of formal proofs} \subsection{Fail early and locally to ease repair} -There is a trade off between formal proofs that are robust in face of +There is a trade-off between formal proofs that are robust in the face of changes and formal proofs that are easy to repair. With robustness we mean that the very same formal proof text works even if the statements of the invoked lemmas change. @@ -1431,7 +1430,7 @@ \subsection{Fail early and locally to ease repair} Writing ``\C{rewrite lem1 ?lem2 ?lem3 //}'' can have the same meaning but is harder to repair. In particular one would discover the breakage later, in the next line, since after \C{lem1} all actions never fail. -Also, the next line is likely is going to be run on the side +Also, the next line is likely going to be run on the side condition, and not the main goal. 
The person repairing the proof is hence put on the wrong track: the line that fails was not even written to operate on the goal she sees. @@ -1482,13 +1481,13 @@ \subsection{Large, monolithic, proofs: a fact of life} by allowing to describe the latter via a pattern. The \C{have} tactic, and its variants \C{suffices} and \C{wlog}, provide syntactic facilities to state and use local definitions and -lemmas to a main proofs, with similar features as the \C{Lemma} +lemmas to main proofs, with similar features as the \C{Lemma} vernacular command (and its synonyms), like the automated introduction of named parameters. The \C{wlog} tactic goes one step further, and allows one to describe the verbose cut statements hidden behind a \emph{without loss of generality} proof pattern by providing only the names of the hypotheses in the context which should be included in the -generalization. This feature saves the user from an painful manual +generalization. This feature saves the user from a painful manual writing of the explicit cut statement. \subsection{On the usability of lemmas} @@ -1529,7 +1528,7 @@ \subsection{On the usability of lemmas} switch to \C{rewrite}. In some sense its ``default'' is wrong. By stating equalities with the most informative term on the left one -obtains lemmas that easier to rewrite with, since their +obtains lemmas that are easier to rewrite with, since their arguments and the subterm of the goal to be replaced are going to be found, automatically, by pattern matching. diff --git a/tex/chProofs.tex b/tex/chProofs.tex index 5784ad9..ad7e8ce 100644 --- a/tex/chProofs.tex +++ b/tex/chProofs.tex @@ -94,7 +94,7 @@ \subsection{Ground equalities}\label{ssec:groundeq} \end{coqout-right} \coqrun{name=r1}{ssr,check-eq} -Let's anatomize the two above examples. Indeed, just like \Coq{}'s +Let's anatomize the two examples above. 
Indeed, just like \Coq{}'s
 type system prevents us from applying
 functions to arguments of a wrong nature, it also enforces a
 certain nature of well-formedness at the time we enunciate
 sentences that are candidate
diff --git a/tex/chRcf.tex b/tex/chRcf.tex
index cca47cc..47cd3d7 100644
--- a/tex/chRcf.tex
+++ b/tex/chRcf.tex
@@ -5,7 +5,7 @@ \chapter{Real closed fields}
 \item direct construction of algebraic numbers
 \item real closure of archimedean fields
 \item construction of the algebraic closure of a Rcf
-\item decidability of first order theory of real closed fields
+\item decidability of the first-order theory of real closed fields
 \end{enumerate}
diff --git a/tex/chSigmaBool.tex b/tex/chSigmaBool.tex
index d2feb37..6579500 100644
--- a/tex/chSigmaBool.tex
+++ b/tex/chSigmaBool.tex
@@ -32,7 +32,7 @@ \chapter{Sub-types}{}\label{ch:sigmabool}
 integers smaller than 7 have, like being a finite collection?
 
 In standard mathematics one would simply say that the integers are an
-infinite set (called \C{nat}), and that the integers smaller that 7
+infinite set (called \C{nat}), and that the integers smaller than 7
 form a finite subset of the integers (called \C{'I_7}). Integers and
 integers smaller than 7 are interchangeable data: if \C{(n : nat)} and
 \C{(i : 'I_7)} one can clearly add \C{n} to \C{i}, and possibly show
@@ -40,13 +40,13 @@ \chapter{Sub-types}{}\label{ch:sigmabool}
 knows which operations are compatible with a subset. E.g. \C{(i-1)}
 stays in \C{'I_7}, as well as \C{(i+n \%| 7)}. So in a sense, subsets
 also provide a linguistic construct to ask the reader to track an
-easy invariant and relieving the proof text from boring details.
+easy invariant and relieving the proof text of boring details.
 
 The closest notion provided by the \mcbCIC{} is the one of
 $\Sigma-$types, that we have already seen in the previous chapter in
 their general form of records.
For example, one can define the type \C{'I_7} as the $\Sigma$-type $\Sigma_{(n:nat)} n -< 7$ which pairs a natural number $n$ and proof that it is smaller than $7$. Since proofs are terms, one can +< 7$, which pairs a natural number $n$ and a proof that it is smaller than $7$. Since proofs are terms, one can pack together objects and proofs of some properties to represent the objects that have those properties. For example $3$, when seen as an inhabitant of \C{'I_7}, is represented by a dependent @@ -118,7 +118,7 @@ \section{$n$-tuples, lists with an invariant on the length} we pick this simple example: a tuple is processed using functions defined on sequences, namely \lstinline/rev/ and \lstinline/map/. These operations do preserve the invariant of -tuples, i.e., they don't alter the length of the subjacent list. +tuples, i.e., they don't alter the length of the subjacent sequence. \begin{coq}{name=p1}{} Example seq_on_tuple n (t : n.-tuple nat) : @@ -222,7 +222,7 @@ \section{$n$-tuples, lists with an invariant on the length} Unifying \lstinline/(size (tval ?$_n$ ?$_T$ ?$_t$))/ with \lstinline/(size (rev (tval n T t)))/ is hard. -Both term's head symbol is \lstinline/size/, but then +Both terms' head symbol is \lstinline/size/, but then the projection \lstinline/tval/ applied to unification variables has to be unified with \lstinline/(rev ...)/, and both terms are in normal form. @@ -461,7 +461,7 @@ \subsection{The sub-type kit} Both \C{Sub} and \C{insub} expect the typing context to specify the sub-type. -Here is a few example uses, using \C{tuple}: +Here are a few example uses, using \C{tuple}: \begin{coq}{name=insub}{} Variables (s : seq nat) (t : 3.-tuple nat). @@ -753,11 +753,11 @@ \section{The ordinal subtype}\label{sec:ordfintype} %\mcbREQUIRE{} \section{Finite functions} -In standard mathematics, functions that are point wise equal are +In standard mathematics, functions that are pointwise equal are considered as equal. 
This principle, that we call \emph{functional extensionality}, is compatible with the \mcbCIC{} but is not built-in. At the time of writing, only very recent variations of CIC, -as Cubical Type Theory~\cite{cubicaltt}, include such principle. +like Cubical Type Theory~\cite{cubicaltt}, include such a principle. \index[concept]{functional extensionality} Still, this principle is provable in \Coq{} for functions with a @@ -771,7 +771,7 @@ \section{Finite functions} of finite functions changed. The new implementation is based on a specific list-like data type indexed over the list of the elements of the domain. Thanks to this more sophisticated -ecoding, the current Coq type checker accepts, for example, the following +encoding, the current Coq type checker accepts, for example, the following declaration of a 3-branch tree: \C{Inductive tree3 := Leaf of nat | Node of \{ffun 'I_3 -> tree3\}.} At the time of writing Coq rejects the declaration above with the definition @@ -878,7 +878,7 @@ \section{Finite functions} \end{coq} Such lemma, rephrased in mathematical notation down below, -states that the indices \C{i} and \C{j} are independently chosen. +states that the indices \C{i} and \C{j} are chosen independently. $$ \prod_{i \in I} \sum_{j \in J} F i j = \sum_{f \in I \to J} \prod_{i \in I} F i (f i) @@ -1155,8 +1155,8 @@ \subsection{Example: matrix product commutes under trace} \sum_i \sum_j A i j * B j i = \tr (B *m A) \end{coqout} -It is worth noticing that the equation we used to expand the left hand -side and the one we need to expand the right hand side are very +It is worth noticing that the equation we used to expand the left-hand +side and the one we need to expand the right-hand side are very similar. Actually the sub proof following \C{have} can be generalized to any pair of matrices \C{A} and \C{B}. 
The \mcbSSR{} proof language provides the \C{gen} modifier in order to tell \C{have} @@ -1210,7 +1210,7 @@ \subsection{Example: matrix product commutes under trace} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \subsection{Block operations} -The size information stocked in the type of a matrix is +The size information stored in the type of a matrix is also used to drive the decomposition of a matrix into sub-matrices, called blocks. For example when the size expression of a square matrix is like \C{(n1 + n2)}, then the upper left block @@ -1328,7 +1328,7 @@ \subsubsection{Sorts and \C{reflect}} The curious reader may have spotted that the declaration of the \C{reflect} inductive predicate of section~\ref{ssec:specs} differs -from the one part of the \mcbMC{} library in a tiny detail. +from the one that is part of the \mcbMC{} library in a tiny detail. The real declaration indeed puts \C{reflect} in \C{Type} and not in \C{Prop}. diff --git a/tex/chSpecification.tex b/tex/chSpecification.tex index ef34405..1f5c645 100644 --- a/tex/chSpecification.tex +++ b/tex/chSpecification.tex @@ -961,11 +961,11 @@ \subsection{Inductive specs with indices}\label{ssec:specs} It is worth mentioning the convenience lemma \C{boolP} that takes a boolean formula and reasons by excluded middle providing some extra comfort -like an additional hypothesis in each sub goal. +like an additional hypothesis in each subgoal. Another reflection view worth mentioning is \C{leqP} that replaces, in one shot, both \C{(_ <= _)} and the converse \C{(_ < _)} by opposite truth values. -Sometime a proof works best by splitting into three branches, i.e., +Sometimes a proof works best by splitting into three branches, i.e., separating the equality case. The \C{ltngtP} lemma is designed for that. 
\index[coq]{\C{leqP}} \index[coq]{\C{ltngtP}} @@ -1002,9 +1002,9 @@ \section{Strong induction via inductive specs} goal, so another way to specify the predicate is to \emph{load} the goal before performing the induction. For example if the goal is \C{(G n)} and one loads it to -\C{(forall m, m <= n -> G m)} then he will have access to the +\C{(forall m, m <= n -> G m)} then one will have access to the induction hypothesis on all numbers smaller than \C{n} exactly as -if he was using the strong induction principle. +if one was using the strong induction principle. It is possible to do so very concisely using the discharging operator ``\C{:}''. @@ -1017,7 +1017,7 @@ \section{Strong induction via inductive specs} and then pushes to the stack \C{n} capturing all occurrences but for the second one. The \mcbMC{} library provides a few tools to load the goal like this -without recurring to occurrence selection, which can be tricky and fragile. +without resorting to occurrence selection, which can be tricky and fragile. The following inductive specification is crafted so that its only constructor holds as arguments exactly the items we pushed on the stack with the discharge @@ -1129,7 +1129,7 @@ \section{Showcase: Euclidean division, simple and correct}\label{sec:edivn} Lines 2 and 3 prepare the induction by unfolding the definition of \C{edivn} (to expose the initial value of the accumulators of \C{edivn_rec}) and makes the invariant of the division loop explicit replacing -\C{m} by \C{(0 * d.+1 + m)}. Recall the case \C{d} being \C{0} has +\C{m} by \C{(0 * d.+1 + m)}. Recall that the case \C{d} being \C{0} has already been handled. Line 4 performs a strong induction, also generalizing the initial @@ -1153,7 +1153,7 @@ \section{Showcase: Euclidean division, simple and correct}\label{sec:edivn} tells \mcbSSR{} to keep the body of the let-in, which would be otherwise deleted. 
-Line 5 unfolds the recursive function and uses the following lemma to push the +Line 5 unfolds the recursive function, and uses the following lemma to push the subtraction into the branches of the if statement. Then it reasons by cases on the guard of the if-then-else statement. @@ -1302,7 +1302,7 @@ \section{Notational aspects of specifications} A function automatically inserted by \Coq{} to prevent a type error is called \emph{coercion}. The most pervasive coercion in the \mcbMC{} library is -\lstinline/is_true/ one that lets one write statements using boolean +\lstinline/is_true/ that lets one write statements using boolean predicates. \index[concept]{coercion} diff --git a/tex/chTT.tex b/tex/chTT.tex index aee0ec5..f19f51e 100644 --- a/tex/chTT.tex +++ b/tex/chTT.tex @@ -178,8 +178,8 @@ \section{Propositions as types, proofs as programs}\label{sec:patpap} $$\Gamma \vdash t : T$$ and reads: \emph{in the context $\Gamma$, the term $t$ has the type $T$}. A type -is just a term, which is called a type when it occurs on the right -hand-side of a column in a well-formed typing judgment, like $T$ in +is just a term, which is called a type when it occurs on the right-hand +side of a colon in a well-formed typing judgment, like $T$ in our case. A context is a list of variables, each paired with a type, that can occur in $t$ and $T$. This typing judgment expresses that under the typing assumptions listed in the context $\Gamma$, $t$ and @@ -293,7 +293,7 @@ \section{Terms, types, sorts}\label{sec:terms} on a same and single collection of inductively defined terms. A judgment $\Gamma \vdash t : T$ relates two \emph{terms}, $t$ and $T$ (in the context $\Gamma$), and $T$ is called a \emph{type} because it appears -on the right hand side of the column symbol in a typing judgment. Term $T$ can be +on the right-hand side of the colon symbol in a typing judgment. 
Term $T$ can be thought of as a label for a collection of terms, and the judgment $\Gamma \vdash t : T$, as the statement that (in context $\Gamma$) ``term $t$ belongs to the collection $T$'', or even, to some extent, @@ -311,12 +311,12 @@ \section{Terms, types, sorts}\label{sec:terms} reason (internally) by case analysis on the proof that $t$ has type $T$, nor can we disprove (internally) that $t$ has type $T$ for some particular $T$. Finally, as we shall see later in this section, substitution of -equals does not behave the same for terms at the left of a column, and +equals does not behave the same for terms at the left of a colon, and for those on the right --- types. We assume a collection of distinct names, used to denote atomic terms -and called \emph{sorts}. One of this sorts is called \C{Prop}, and it +and called \emph{sorts}. One of these sorts is called \C{Prop}, and it is the type of statements: \begin{coq-left}{name=check-eq}{} @@ -327,8 +327,8 @@ \section{Terms, types, sorts}\label{sec:terms} \end{coqout-right} \coqrun{name=r2}{ssr,check-eq} -As noted already in Chapter~\ref{ch:proofs}, a well formed statement -is not necessary a provable one: +As noted already in Chapter~\ref{ch:proofs}, a well-formed statement +is not necessarily a provable one: \begin{coq-left}{name=check-eq}{} Check 7 = 9 : Prop. @@ -363,7 +363,7 @@ \section{Terms, types, sorts}\label{sec:terms} Type@{U1} : Type@{U2} (* U1 < U2 *) \end{coqout-right} -the instance of \C{Type} on the right side of the column has a greater +the instance of \C{Type} on the right-hand side of the colon has a greater index (named \C{U2} here) than the instance of \C{Type} on the left. In an empty context, @@ -494,7 +494,7 @@ \section{Terms, types, sorts}\label{sec:terms} nonsensical assertions, like ``$\pi$ is equilateral'' or ``2 is a Banach space''. 
But the same rule shall also play a nastier role, for
 instance if the user wants to casually embed natural numbers into
-integers, or and integer into rational numbers: working with these
+integers, or an integer into rational numbers: working with these
 obvious inclusions might require an explicit cast in a typed setting,
 if $\mathbb{N}$, $\mathbb{Z}$, $\mathbb{Q}$ are represented by
 distinct types.
@@ -933,7 +933,7 @@ \section{More connectives}\label{sec:moreconns}
 Pattern matching provides the
 elimination rule for conjunction, i.e. the (two) rules
 governing the construction of proofs
-using a conjunctive hypothesis. Here is left elimination rule:
+using a conjunctive hypothesis. Here is the left elimination rule:
 
 \begin{coq}{name=Ande1}{}
 Definition proj1 A B (p : A /\ B) : A :=
@@ -1001,7 +1001,7 @@ \section{More connectives}\label{sec:moreconns}
 the parameter has an arrow type. Constants \C{ex} and \C{ex_intro}
-can thus be be specialized to any predicate \C{P}, so that the \C{ex}
+can thus be specialized to any predicate \C{P}, so that the \C{ex}
 inductive declaration can be used on any formula. Finally, the
-\C{ex_intro} constant features a last, inner-most \C{forall a : A}
+\C{ex_intro} constant features a last, innermost \C{forall a : A}
 quantifier, which binds a term variable \C{a} representing the witness
 of the existential statement for \C{P}.\\
@@ -1148,7 +1148,7 @@ \section{Inductive reasoning}\label{ssec:indreason}
 is defined.
 
 It is worth mentioning that this principle is general enough to
-prove the ``stronger'' induction principle that give access, in
+prove the ``stronger'' induction principle that gives access, in
 the inductive step, to the property \C{P} on all numbers smaller
 than \C{n.+1} and not just its predecessor.
@@ -1197,7 +1197,7 @@ \section{Inductive reasoning}\label{ssec:indreason}
 that we will detail in Section~\ref{sec:ubnP}.
Finally, recall that recursive functions are checked for termination:
-Through the lenses of the proofs-as-programs correspondence, this means
+Through the lens of the proofs-as-programs correspondence, this means
 that the induction principle just coded is sound, i.e., based on a
 well-founded order relation.
 \index[concept]{termination}
diff --git a/tex/chTypeInference.tex b/tex/chTypeInference.tex
index ae7e0ec..1961d1a 100644
--- a/tex/chTypeInference.tex
+++ b/tex/chTypeInference.tex
@@ -119,7 +119,7 @@ \section{Type inference and higher-order unification}\label{sec:hounif}
 Unfortunately it is not hard to come up with classes of examples where
 guessing appropriate values for implicit arguments is, in general, not
-possible. In fact such guessing has been shown to be as hard as proof
+possible. In fact, such guessing has been shown to be as hard as proof
 search in the presence of higher-order constructs. For example to
 unify \lstinline/(prime _)/ with \lstinline/true/ one has to guess a
 prime number. Remember that
@@ -145,7 +145,7 @@ \section{Type inference and higher-order unification}\label{sec:hounif}
 The concrete syntax for implicit arguments, an underscore character,
 does not let one name the missing piece of information.
-If an expression contains multiple occurrence
+If an expression contains multiple occurrences
 of the placeholder ``\lstinline/_/'', they are all considered as
 potentially different by the system, and hence hold (internally)
 unique names. For the sake of clarity we take the freedom to
@@ -245,8 +245,8 @@ \section{Type inference by example}
 Unset Printing Implicit Defensive.
 \end{coq}
 
-Of course, the \C{Argument} command remains useful on specific cases
-when a finer, manual tuning of implicit arguments remains needed.
+Of course, the \C{Arguments} command remains useful in specific cases
+when a finer, manual tuning of implicit arguments is needed.
The command \C{About} provides the user with the type of a given constant, and with the status, implicit or not, of each argument of the constant. @@ -353,7 +353,7 @@ \section{Type inference by example} algorithm and we did not need to compute or synthesize \emph{functions}. In the next section we illustrate how the unification algorithm used in type inference can be extended in order -to deal with higher-order problem. This extension is based on the use +to deal with higher-order problems. This extension is based on the use of declarative programs, and we present the encoding of the relations which describe these programs. As of today however there is no precise, published, documentation of the type @@ -464,7 +464,7 @@ \section{Records as relations}\label{sec:eqtype} The first one is \lstinline/nat_eqType/ and its type, \lstinline/eqType/, is trivially equal to the one expected by \lstinline/eq_op/. -The following two arguments are hence expected of to be of type +The following two arguments are hence expected to be of type \lstinline/(sort nat_eqType)/ but \lstinline/3/ and \lstinline/4/ are of type \lstinline/nat/. Recall that unification takes computation into account exactly as the @@ -577,7 +577,7 @@ \section{Records as relations}\label{sec:eqtype} \Coq{} gives up, leaving to the user the task of extending the unification algorithm with a declarative program that is able to solve -unification problems of the form \lstinline/(sort $\mcbimplm{e}$)/ +unification problems of the form \lstinline/(sort $\mcbimplm{e}$)/, versus \lstinline/T/ for any \lstinline/T/. Given the current context, it seems reasonable to write an extension that picks \lstinline/nat_eqType/ when \lstinline/T/ is @@ -633,7 +633,7 @@ \section{Records as relations}\label{sec:eqtype} In the table we reported only the relevant entries. Entries corresponding to the \lstinline/eq_op/ projection play no role -in the \mcbMC{} library. The name of such projections is +in the \mcbMC{} library. 
The names of such projections are usually omitted to signal that fact. What makes this approach interesting for a large library is that @@ -655,7 +655,7 @@ \section{Records as relations}\label{sec:eqtype} \index[vernac]{\C{Structure}} The computer-science inclined reader shall see records as first-class -values in the \mcbCIC{} programming language. Otherwise said, +values in the \mcbCIC{} programming language. In other words, the projections of a record are just ordinary functions, defined by pattern-matching on an inductive type, and which access the fields of the record. @@ -797,7 +797,7 @@ \section{Records as (first-class) interfaces}\label{rec:itf} When we define an overloaded notation, we convey through it more than just the arity (or the type) of the associated -operation. We associate to it a property, or a collection thereof. +operation. We associate with it a property, or a collection thereof. For example, in the context of group theory, the infix \C{+} symbol is typically preferred to \C{*} whenever the group law is commutative. @@ -1181,8 +1181,8 @@ \section{The generic theory of ``big'' operators}\label{sec:bigop} For example if $A$ is empty, then the union of all $g(a)$ is $\emptyset$, while if $n=0$, then the sum of all $f(i)$ is $0$. Then the range is an expression identifying a finite set of elements, -sometimes expressing an order (relevant when the iterated operation is not -commutative). Finally a general term describing the +sometimes expresses an order (relevant when the iterated operation is not +commutative). Finally, a general term describes the items being combined by the iterated operation. 
As already mentioned in section~\ref{sec:bigopnat}, @@ -1209,7 +1209,7 @@ \section{The generic theory of ``big'' operators}\label{sec:bigop} $$ \sum_{i < 2n,\, 2 \nmid i} i = n^2 $$ -An alternative writing for the same summation exploits the general +An alternative way of writing the same summation exploits the general terms to rule out even numbers: $$ \sum_{i < n} (i * 2 + 1) = n^2 @@ -1258,7 +1258,7 @@ \subsection{The generic notation for \C{foldr}} takes as argument a range \C{(r : seq I)}, a neutral element \C{(idx : R)}, and a binary operation \C{(op : R -> R -> R)}, to be iterated. Note that the type of the elements in the range are not the -same as the type of the arguments of the iterated operation. Indeed, +same as the types of the arguments of the iterated operation. Indeed, \C{bigop} also uses a function \C{(F : I -> R)}, as well as a filtering predicate \C{(P : pred I)}, to construct the actual iteration range. In the end, \C{bigop} filters the range \C{r} using @@ -1417,7 +1417,7 @@ \subsection{The generic notation for \C{foldr}} \end{coqout} It is clear that for the two lemmas to be provable, -one needs the associativity property of \C{addn} and also that +one needs the associative property of \C{addn} and also that \C{0} is neutral. In other words, the lemmas we used require the operation \C{*\%M} to form a monoid together with the unit \C{1}. @@ -1451,7 +1451,7 @@ \subsection{Assumptions of a bigop lemma}\label{sec:bigoplemmas} types, but on any term. In particular we can index them on function symbols to relate, for example, \C{addn} and its monoid structure. -Here we only present the \C{Monoid} interface an operation has to +Here we only present the \C{Monoid} interface that an operation has to satisfy in order to access a class of generic lemmas. Chapter~\ref{ch:hierarchy} adds other interfaces to the picture and organizes the bigop library around them. 
@@ -1551,7 +1551,7 @@ \subsection{Assumptions of a bigop lemma}\label{sec:bigoplemmas} the following unification problem has to be solved: \C{addn} versus \C{(operator ?m)}. -Inferring a value for \C{?m} mean +Inferring a value for \C{?m} means inferring a proof that \C{addn} forms a monoid with \C{0}; this is a prerequisite for the \C{big\_nat\_recr} lemma we don't have to provide by hand. @@ -1811,7 +1811,7 @@ \subsection{Querying canonical structures}\label{sec:phantom} %\warntechnical{} -It is possible to ask \Coq{} if a certain term does validate an +It is possible to ask \Coq{} whether a certain term validates an interface. For example, to check if \C{addn} forms a monoid one can \C{Check [law of addn]}. A notation of this kind exists for any interface, for example \C{[eqType of nat]} is another valid query to diff --git a/tex/cheatsheet.tex b/tex/cheatsheet.tex index c903720..7c87d52 100644 --- a/tex/cheatsheet.tex +++ b/tex/cheatsheet.tex @@ -38,7 +38,7 @@ \subsection*{Terminology} \begin{description} \item[Top] is the first assumption, \C{y} here -\item[Stack] alternative name for the list of $Assumptions$ +\item[Stack] is an alternative name for the list of $Assumptions$ \end{description} \subsection*{Popping from the stack} @@ -48,8 +48,8 @@ \subsection*{Popping from the stack} \begin{cheat} $cmd$=> x px \end{cheat} -Run $cmd$, then pop Top, put it in the context naming it \C{x} then -pop the new Top and names it \C{px} in the context +Run $cmd$, then pop Top, put it in the context, naming it \C{x}, then +pop the new Top and name it \C{px} in the context \begin{cheatout} ========= @@ -70,7 +70,7 @@ \subsection*{Popping from the stack} \end{cheat} Run $cmd$, then reason by cases on Top. In the first branch do nothing, in the second one pop two - assumptions naming then \C{x} and \C{xs}. Then get rid of + assumptions naming them \C{x} and \C{xs}. Then get rid of trivial goals. 
Note that, since only the first branch is trivial, one can write \C{=> [// | x xs]} too.\caveat{ Immediately after @@ -95,8 +95,8 @@ \subsection*{Popping from the stack} $cmd$=> /andP[pa pb] \end{cheat} Run $cmd$, then apply the view \C{andP} to Top, then destruct the - conjunction and introduce in the context the two - parts naming the \C{pa} and \C{pb} + conjunction and introduce the two + parts into the context, naming them \C{pa} and \C{pb} \begin{cheatout} ========= @@ -115,8 +115,8 @@ \subsection*{Popping from the stack} \begin{cheat} $cmd$=> /= {px} \end{cheat} - Run $cmd$ then simplify the goal then discard \C{px} from - then context + Run $cmd$, then simplify the goal, then discard \C{px} from + the context \begin{cheatout} x : nat @@ -135,9 +135,9 @@ \subsection*{Popping from the stack} \begin{cheat} $cmd$=> [y -> {x}] \end{cheat} - Run $cmd$ then destruct the existential, then introduce y, - then rewrite with Top left to right and - discard the equation, then clear x + Run $cmd$, then destruct the existential, introduce y, + then rewrite with Top from left to right, + discard the equation, and clear x \begin{cheatout} x : nat @@ -441,8 +441,8 @@ \subsection*{Proof commands} \begin{cheat} have pa : P a. \end{cheat} - Open a new goal for \C{P a}. Once resolved - introduce a new entry in the context for it named \C{pa} + Open a new goal for \C{P a}. Once it is resolved, + introduce a new entry for it in the context, named \C{pa} \begin{cheatout} a : T @@ -599,7 +599,7 @@ \subsection*{Idioms} \begin{cheat} have /andP[x /eqP->] : P a && b == c \end{cheat} - Open a subgoal for \C{P a && b == c}. When proved + Open a subgoal for \C{P a && b == c}. When proved, apply to it the \C{andP} view, destruct the conjunction, introduce \C{x}, apply the view \C{eqP} to turn \C{b == c} into \C{b = c}, then rewrite with it and discard the @@ -722,7 +722,7 @@ \subsection*{Rewrite patterns} \end{cheat} Rewrite the subterms selected by the pattern \C{pat} with \C{lem}. 
Then in the subterms selected by the pattern \C{pat2} - match the pattern inferred from the left hand side of \C{lem2} and + match the pattern inferred from the left-hand side of \C{lem2} and rewrite the terms selected by it. Lastly, in the subterms selected by \C{pat3} rewrite with \C{lem3} the subterms identified by \C{X} exactly @@ -766,8 +766,8 @@ \subsection*{Rewrite patterns} rewrite /def1 -[pat]/term /= \end{cheat} Unfold all occurrences of \C{def1}. Then match the goal against \C{pat} - and change all its occurrences into \C{term} (pure computation). Last - simplify the goal + and change all its occurrences into \C{term} (pure computation). Last, + simplify the goal. \begin{cheat} rewrite 3?lem2 // {hyp} => x px