Proceedings of the 39th Conference on the Mathematical Foundations of Programming Semantics. Held on the campus of Indiana University, Bloomington, IN, and co-located with CALCO 2023, the 10th Conference on Algebra and Coalgebra in Computer Science.
Separation logic is used to reason locally about stateful programs. State-of-the-art program logics for higher-order store are usually built on top of untyped operational semantics, in part because traditional denotational methods have struggled to simultaneously account for general references and parametric polymorphism. The recent discovery of simple denotational semantics for general references and polymorphism in synthetic guarded domain theory has enabled us to develop Tulip, a higher-order separation logic over the typed equational theory of higher-order store for a monadic version of System $\mathsf{F}_{\mu,\mathsf{ref}}$. The Tulip logic differs from operationally-based program logics in two ways: predicates range over the meanings of typed terms rather than over the raw code of untyped terms, and they are automatically invariant under the equational congruence of higher-order store, which applies even underneath a binder. As a result, "pure" proof steps that conventionally require focusing the Hoare triple on an operational redex are replaced by a simple equational rewrite in Tulip. We have evaluated Tulip against standard examples involving linked lists in the heap, comparing our abstract equational reasoning with more familiar operational-style reasoning. Our main result is the soundness of Tulip, which we establish by constructing a BI-hyperdoctrine over the denotational semantics of $\mathsf{F}_{\mu,\mathsf{ref}}$ in an impredicative version of synthetic guarded domain theory.
Linear logic has provided new perspectives on proof theory, denotational semantics, and the study of programming languages. One of its main successes is proof-nets, canonical representations of proofs that lie at the intersection of logic and graph theory. In the case of the minimalist proof system of multiplicative linear logic without units (MLL), these two aspects are completely fused: proof-nets for this system are graphs satisfying a correctness criterion that can be fully expressed in the language of graphs. For more expressive logical systems (containing logical constants, quantifiers, and exponential modalities), this is not completely the case. The purely graphical approach of proof-nets deprives them of any sequential structure, which is crucial for representing the order in which arguments are presented and is necessary for these extensions. Rebuilding this order of presentation - sequentializing the graph - is thus a requirement for a graph to be logical. The presentation and study of the artifacts ensuring that sequentialization can be done, such as boxes or jumps, are an integral part of research on linear logic. Jumps, extensively studied by Faggian and di Giamberardino, can express intermediate degrees of sequentialization between a sequent calculus proof and a fully desequentialized proof-net. We propose to analyze the logical strength of jumps by internalizing them in an extension of MLL where axioms on a specific formula, the jumping formula, introduce […]
Lawvere showed that generalised metric spaces are categories enriched over $[0, \infty]$, the quantale of the positive extended reals. The statement of enrichment is a quantitative analogue of being a preorder. Towards seeking a logic for quantitative metric reasoning, we investigate three $[0,\infty]$-valued propositional logics over the Lawvere quantale. The basic logical connectives shared by all three logics are those that can be interpreted in any quantale, viz. finite conjunctions and disjunctions, tensor (addition for the Lawvere quantale), and linear implication (here a truncated subtraction); to these we add, in turn, the constant $1$ to express integer values, and scalar multiplication by a non-negative real to express general affine combinations. Quantitative equational logic can be interpreted in the third logic if we allow inference systems instead of axiomatic systems. For each of these logics we develop a natural deduction system which we prove to be decidably complete w.r.t. the quantale-valued semantics. The heart of the completeness proof makes use of the Motzkin transposition theorem. Consistency is also decidable; the proof makes use of Fourier-Motzkin elimination of linear inequalities. Strong completeness does not hold in general, even (as is known) for theories over finitely many propositional variables; indeed, even an approximate form of strong completeness in the sense of Pavelka or Ben Yaacov -- provability up to arbitrary precision -- does not hold. […]
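As a worked sketch of the quantale-valued semantics described above (using the standard conventions for the Lawvere quantale, where the order is reversed so that $0$ is read as "true"; the tensor and implication are as stated in the abstract, and conjunction and disjunction are the lattice meet and join):
$$
\llbracket \phi \otimes \psi \rrbracket = \llbracket \phi \rrbracket + \llbracket \psi \rrbracket,
\qquad
\llbracket \phi \multimap \psi \rrbracket = \max(\llbracket \psi \rrbracket - \llbracket \phi \rrbracket,\, 0),
$$
$$
\llbracket \phi \wedge \psi \rrbracket = \max(\llbracket \phi \rrbracket, \llbracket \psi \rrbracket),
\qquad
\llbracket \phi \vee \psi \rrbracket = \min(\llbracket \phi \rrbracket, \llbracket \psi \rrbracket).
$$
In particular, $\llbracket \phi \multimap \psi \rrbracket = 0$ exactly when $\llbracket \psi \rrbracket \le \llbracket \phi \rrbracket$, which is the quantitative analogue of implication holding.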
Modern programming frequently requires generalised notions of program equivalence based on a metric or a similar structure. Previous work addressed this challenge by introducing the notion of a V-equation, i.e. an equation labelled by an element of a quantale V, which covers inter alia (ultra-)metric, classical, and fuzzy (in)equations. It also introduced a V-equational system for the linear variant of the lambda-calculus, in which any given resource must be used exactly once. In this paper we drop the (often too strict) linearity constraint by adding graded modal types, which allow multiple uses of a resource in a controlled manner. We show that such control, whilst providing more expressivity to the programmer, also interacts more richly with V-equations than in the linear or Cartesian cases. Our main result is the introduction of a sound and complete V-equational system for a lambda-calculus with graded modal types interpreted by what we call a Lipschitz exponential comonad. We also show how to build such comonads canonically via a universal construction, and use our results to derive graded metric equational systems (and corresponding models) for programs with timed and probabilistic behaviour.
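As a small illustration of the control the grading provides in the metric case (a sketch based on the usual reading of graded exponentials over metric spaces, not a definition taken from the paper): a program $f : {!_r A} \multimap B$ interpreted via a Lipschitz exponential comonad denotes an $r$-Lipschitz map, i.e.
$$
d_B(f(x), f(y)) \;\le\; r \cdot d_A(x, y),
$$
so the grade $r$ bounds how much $f$ can amplify distances between its inputs, and a V-equation $x =_\epsilon y$ is transported to $f(x) =_{r\epsilon} f(y)$.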
This paper introduces a dynamic logic extension of separation logic. The assertion language of separation logic is extended with modalities for the five basic instructions of separation logic: simple assignment, look-up, mutation, allocation, and de-allocation. The main novelty of the resulting dynamic logic is that it allows one to combine different approaches to resolving these modalities. One such approach is based on the standard weakest-precondition calculus of separation logic; the other, introduced in this paper, provides a novel alternative formalization within the proposed dynamic logic. The soundness and completeness of the resulting axiomatization have been formalized in the Coq theorem prover.
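For orientation, under the weakest-precondition approach mentioned above one would expect the modality for mutation to be resolved by the standard separation logic axiom (a sketch in conventional notation, with $[\,\cdot\,]$ the instruction modality, $\mapsto$ points-to, and $\mathrel{-\!\!*}$ separating implication; the paper's exact axiomatization may differ):
$$
[\,[e] := e'\,]\,P \;\leftrightarrow\; (e \mapsto -) \,*\, \big((e \mapsto e') \mathrel{-\!\!*} P\big),
$$
i.e. the heap must contain a cell at address $e$, and $P$ must hold once that cell is updated to hold $e'$.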
We present a simple functional programming language, called Dual PCF, that implements forward-mode automatic differentiation using dual numbers in the framework of exact real number computation. The main new feature of this language is the ability to evaluate correctly, up to the precision specified by the user -- in a simple and direct way -- the directional derivative of functionals as well as of first-order functions. In contrast to other comparable languages, Dual PCF also includes a recursion operator for defining functions and functionals. We provide a wide range of examples of Lipschitz functions and functionals that can be defined in Dual PCF. We use domain theory both to give a denotational semantics to the language and to prove the correctness of the new derivative operator using logical relations. To be able to differentiate functionals -- including on function spaces equipped with their compact-open topology, which do not admit a norm -- we develop a domain-theoretic directional derivative that is Scott continuous and extends Clarke's subgradient of real-valued locally Lipschitz maps on Banach spaces to real-valued continuous maps on Hausdorff topological vector spaces. Finally, we show that we can express arbitrary computable linear functionals in Dual PCF.
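To fix intuitions for forward-mode differentiation with dual numbers, here is a minimal Haskell sketch (not Dual PCF itself: exact real arithmetic is replaced by Double, and the recursion operator and functional-level derivatives are omitted):

```haskell
-- Dual numbers: a value paired with its derivative (tangent).
data Dual = Dual { primal :: Double, tangent :: Double }

instance Num Dual where
  Dual a a' + Dual b b' = Dual (a + b) (a' + b')
  Dual a a' - Dual b b' = Dual (a - b) (a' - b')
  Dual a a' * Dual b b' = Dual (a * b) (a' * b + a * b')  -- product rule
  negate (Dual a a')    = Dual (negate a) (negate a')
  abs    (Dual a a')    = Dual (abs a) (signum a * a')    -- Lipschitz, not differentiable at 0
  signum (Dual a _)     = Dual (signum a) 0
  fromInteger n         = Dual (fromInteger n) 0

-- Derivative of f at x: seed the tangent with 1 and read it back off.
deriv :: (Dual -> Dual) -> Double -> Double
deriv f x = tangent (f (Dual x 1))
```

For example, `deriv (\y -> y * y + 3 * y) 2.0` evaluates to `7.0`, matching $\frac{d}{dy}(y^2 + 3y)$ at $y = 2$.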
Saturation is a fundamental game-semantic property satisfied by strategies that interpret higher-order concurrent programs. It states that the strategy must be closed under certain rearrangements of moves, and corresponds to the intuition that program moves (P-moves) may depend only on moves made by the environment (O-moves). We propose an automata model over an infinite alphabet, called saturating automata, for which all accepted languages are guaranteed to satisfy a closure property mimicking saturation. We show how to translate the finitary fragment of Idealized Concurrent Algol (FICA) into saturating automata, confirming their suitability for modelling higher-order concurrency. Moreover, we find that, for terms in normal form, the resultant automaton has linearly many transitions and states with respect to term size, and can be constructed in polynomial time. This is in contrast to earlier attempts at finding automata-theoretic models of FICA, which did not guarantee saturation and involved an exponential blow-up during translation, even for normal forms.
Orthogonality is a notion based on the duality between programs and their environments, used to determine when they can be safely combined. For instance, it is a powerful tool for establishing termination properties in classical formal systems. Hyland and Schalk gave it a general treatment via the concept of an orthogonality category, of which numerous models of linear logic are instances. This paper considers the subclass of focused orthogonalities. We develop a theory of fixpoint constructions in focused orthogonality categories. Central results are lifting theorems for initial algebras and final coalgebras. These crucially hinge on the insight that focused orthogonality categories are relational fibrations. The theory provides an axiomatic categorical framework for models of linear logic with least and greatest fixpoints of types. We further investigate domain-theoretic settings, showing how to lift bifree algebras, used to solve mixed-variance recursive type equations, to focused orthogonality categories.
The concept of updating a probability distribution in the light of new evidence lies at the heart of statistics and machine learning. Pearl's and Jeffrey's rules are two natural update mechanisms which lead to different outcomes, yet their similarities and differences remain mysterious. This paper clarifies their relationship in several ways: via separate descriptions of the two update mechanisms in terms of probabilistic programs and sampling semantics, and via different notions of likelihood (for Pearl and for Jeffrey). Moreover, it is shown that Jeffrey's update rule arises via variational inference. In terms of categorical probability theory, this amounts to an analysis of the situation in terms of the behaviour of the multiset functor, extended to the Kleisli category of the distribution monad.
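To make the difference concrete, here is a small Haskell sketch of the two updates for finite discrete distributions (illustrative only; `Dist`, `pearl`, and `jeffrey` are our names, and the likelihood is given as a plain function rather than a channel). Pearl reweights the prior by the expected likelihood of the evidence; Jeffrey mixes the per-observation posteriors, weighted by the new evidence.

```haskell
type Dist a = [(a, Double)]  -- finitely supported distributions

normalize :: Dist a -> Dist a
normalize d = [ (x, p / total) | (x, p) <- d ]
  where total = sum (map snd d)  -- assumed nonzero

-- Pearl: posterior(x) proportional to prior(x) * sum_y evidence(y) * lik(y|x)
pearl :: Dist x -> (x -> y -> Double) -> Dist y -> Dist x
pearl prior lik evidence =
  normalize [ (x, p * sum [ q * lik x y | (y, q) <- evidence ])
            | (x, p) <- prior ]

-- Jeffrey: posterior(x) = sum_y evidence(y) * P(x|y), Bayes inversion per y
jeffrey :: Dist x -> (x -> y -> Double) -> Dist y -> Dist x
jeffrey prior lik evidence =
  [ (x, sum [ q * p * lik x y / norm y | (y, q) <- evidence ])
  | (x, p) <- prior ]
  where norm y = sum [ p' * lik x' y | (x', p') <- prior ]  -- assumed nonzero
```

Running both on the same prior and soft evidence generally yields different posteriors, which is exactly the discrepancy the abstract sets out to explain.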
Stochastic memoization is a higher-order construct of probabilistic programming languages that is key in Bayesian nonparametrics, a modular approach that allows us to extend models beyond their parametric limitations and compose them in an elegant and principled manner. Stochastic memoization is simple and useful in practice, but semantically elusive, particularly regarding dataflow transformations. As the naive implementation resorts to the state monad, which is not commutative, it is not clear if stochastic memoization preserves the dataflow property -- i.e., whether we can reorder the lines of a program without changing its semantics, provided the dataflow graph is preserved. In this paper, we give an operational and categorical semantics to stochastic memoization and name generation in the context of a minimal probabilistic programming language, for a restricted class of functions. Our contribution is a first model of stochastic memoization of constant Bernoulli functions with a non-enumerable type, which validates dataflow transformations, bridging the gap between traditional probability theory and higher-order probability models. Our model uses a presheaf category and a novel probability monad on it.
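The "naive implementation via the state monad" mentioned above can be sketched as follows (a toy Haskell rendering for Bernoulli functions; the identifiers are ours, and a real probabilistic language would use its own sampler rather than IO):

```haskell
import qualified Data.Map as Map
import Control.Monad.State
import System.Random (randomRIO)

-- A memoized Bernoulli function: the first call at a key samples a coin
-- flip; every later call at the same key returns the stored result.
type Mem k = StateT (Map.Map k Bool) IO

memoBernoulli :: Ord k => Double -> k -> Mem k Bool
memoBernoulli p key = do
  table <- get
  case Map.lookup key table of
    Just b  -> return b                        -- reuse the earlier sample
    Nothing -> do
      b <- lift (fmap (< p) (randomRIO (0, 1)))
      modify (Map.insert key b)                -- remember it for later calls
      return b
```

Because the state monad is not commutative, reordering two calls at distinct keys is not obviously sound even though their dataflow is independent; this is precisely the question the paper's presheaf model resolves for the restricted class of functions it treats.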
Categorical semantics of type theories are often characterized as structure-preserving functors. This is because in category theory both the syntax and the domain of interpretation are uniformly treated as structured categories, so that we can express interpretations as structure-preserving functors between them. This mathematical characterization of semantics makes it convenient to manipulate and to reason about relationships between interpretations. Motivated by this success of functorial semantics, we address the question of finding a functorial analogue in abstract interpretation, a general framework for comparing semantics, so that we can bring similar benefits of functorial semantics to semantic abstractions used in abstract interpretation. Major differences concern the notion of interpretation being considered. Indeed, conventional semantics are value-based, whereas abstract interpretation typically deals with more complex properties. In this paper, we propose a functorial approach to abstract interpretation and study associated fundamental concepts therein. In our approach, interpretations are expressed as oplax functors into the category of posets, and abstraction relations between interpretations are expressed as lax natural transformations representing concretizations. We present examples of these formal concepts from monadic semantics of programming languages and discuss soundness.
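For readers less familiar with abstract interpretation, here is the kind of "abstraction relation represented by a concretization" being functorialized, in a classical textbook instance (a Haskell sketch of the sign domain; this example is standard and not taken from the paper):

```haskell
-- Sign abstraction of integers: a classical abstract interpretation.
data Sign = Neg | Zero | Pos | Top deriving (Eq, Show)

-- Concretization: which integers each abstract value stands for.
gamma :: Sign -> (Integer -> Bool)
gamma Neg  n = n < 0
gamma Zero n = n == 0
gamma Pos  n = n > 0
gamma Top  _ = True

-- Abstract addition, sound w.r.t. gamma:
-- if gamma s m and gamma t n, then gamma (addS s t) (m + n).
addS :: Sign -> Sign -> Sign
addS Zero t    = t
addS s    Zero = s
addS Neg  Neg  = Neg
addS Pos  Pos  = Pos
addS _    _    = Top
```

The soundness condition stated in the comment is exactly the lax-naturality square the paper packages categorically.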
Various categories have been proposed as targets for the denotational semantics of higher-order probabilistic programming languages. One such proposal involves joint probability distributions (couplings) used in Bayesian statistical models with conditioning. In previous treatments, composition of joint measures was performed by disintegrating to obtain Markov kernels, composing the kernels, then reintegrating to obtain a joint measure. Disintegrations exist only under certain restrictions on the underlying spaces. In this paper we propose a category whose morphisms are joint finite measures in which composition is defined without reference to disintegration, allowing its application to a broader class of spaces. The category is symmetric monoidal with a pleasing symmetry in which the dagger structure is a simple transpose.
Cartesian differential categories come equipped with a differential combinator which axiomatizes the fundamental properties of the total derivative from differential calculus. The objective of this paper is to understand when the Kleisli category of a monad is a Cartesian differential category. We introduce Cartesian differential monads, which are monads whose Kleisli category is a Cartesian differential category by way of lifting the differential combinator from the base category. Examples of Cartesian differential monads include tangent bundle monads and reader monads. We give a precise characterization of Cartesian differential categories which are Kleisli categories of Cartesian differential monads using abstract Kleisli categories. We also show that the Eilenberg-Moore category of a Cartesian differential monad is a tangent category.
In Linear Logic ($\mathsf{LL}$), the exponential modality $!$ brings forth a distinction between non-linear proofs and linear proofs, where linear means using an argument exactly once. Differential Linear Logic ($\mathsf{DiLL}$) is an extension of Linear Logic which includes additional rules for $!$ that encode differentiation and the ability to linearize proofs. On the other hand, Graded Linear Logic ($\mathsf{GLL}$) is a variation of Linear Logic in which $!$ is indexed by a semiring $R$. This $R$-grading allows for non-linear proofs of degree $r \in R$, with linear proofs being those of degree $1 \in R$. There has been recent interest in combining these two variations of $\mathsf{LL}$ together and developing Graded Differential Linear Logic ($\mathsf{GDiLL}$). In this paper we present a sequent calculus for $\mathsf{GDiLL}$, as well as introduce its categorical semantics, which we call graded differential categories, using both coderelictions and deriving transformations. We prove that symmetric powers always give graded differential categories, and provide other examples of graded differential categories. We also discuss graded versions of (monoidal) coalgebra modalities, additive bialgebra modalities, and the Seely isomorphisms, as well as their implementations in the sequent calculus of $\mathsf{GDiLL}$.
Within dependent type theory, we provide a topological counterpart of well-founded trees (for short, W-types) by using a proof-relevant version of the notion of inductively generated suplattices, introduced in the context of formal topology under the name of inductively generated basic covers. In more detail, we show, firstly, that in Homotopy Type Theory, W-types and proof-relevant inductively generated basic covers are propositionally mutually encodable. Secondly, we prove that they are definitionally mutually encodable in the Agda implementation of intensional Martin-Löf type theory. Finally, we reframe the equivalence in the Minimalist Foundation framework by introducing well-founded predicates as the logical counterpart, for predicates, of dependent W-types. All the results have been checked in the Agda proof assistant.
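For reference, the W-types in question are the usual well-founded trees of Martin-Löf type theory, with formation and introduction rules (standard rules, reproduced here for orientation):
$$
\frac{A : \mathsf{Type} \qquad B : A \to \mathsf{Type}}{\mathsf{W}_{x:A}\,B(x) : \mathsf{Type}}
\qquad\qquad
\frac{a : A \qquad f : B(a) \to \mathsf{W}_{x:A}\,B(x)}{\mathsf{sup}(a, f) : \mathsf{W}_{x:A}\,B(x)}
$$
so each node carries a label $a : A$ and has one immediate subtree for every element of $B(a)$.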
In this paper we show that, using implicative algebras, one can produce models of set theory generalizing Heyting/Boolean-valued models and realizability models of (I)ZF, both in intuitionistic and classical logic. As a consequence, any topos obtained from a Set-based tripos via the tripos-to-topos construction hosts a model of intuitionistic or classical set theory, provided a large enough strongly inaccessible cardinal exists.
Typical arguments for results like Kleene's Second Recursion Theorem and the existence of self-writing computer programs bear the fingerprints of equational reasoning and combinatory logic. In fact, the connection of combinatory logic and computability theory is very old, and this paper extends this connection in new ways. In one direction, we counter the main trend in both computability theory and combinatory logic of heading straight to undecidability. Instead, this paper proposes using several very small equational logics to examine results in computability theory itself. These logics are decidable via term rewriting. We argue that they have something interesting to say about computability theory. They are closely related to fragments of combinatory logic which are decidable, and so this paper contributes to the study of such fragments. The paper has a few surprising results such as a classification of quine programs (programs which output themselves) in two decidable fragments. The classification goes via examination of normal forms in term rewriting systems, hence the title of the paper. The classification is an explanation of why all quine programs (in any language) are "pretty much the same, except for inessential details." In addition, we study the relational structure whose objects are the programs with the relation "p expresses q" meaning that if the program p is run on nothing, then it eventually outputs the program q.
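For a concrete instance of the pattern such a classification explains, here is a classic Haskell quine (assuming the source file ends with a single newline); like all quines it uses one datum twice, once as code and once as quoted text, and the remaining details are "inessential":

```haskell
main = putStr s >> print s where s = "main = putStr s >> print s where s = "
```

Running it prints its own source: `putStr` emits the unquoted prefix, and `print` re-emits the string with its surrounding quotes and a trailing newline.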
We show that contrary to appearances, Multimodal Type Theory (MTT) over a 2-category M can be interpreted in any M-shaped diagram of categories having, and functors preserving, M-sized limits, without the need for extra left adjoints. This is achieved by a construction called "co-dextrification" that co-freely adds left adjoints to any such diagram, which can then be used to interpret the "context lock" functors of MTT. Furthermore, if any of the functors in the diagram have right adjoints, these can also be internalized in type theory as negative modalities in the style of FitchTT. We introduce the name Multimodal Adjoint Type Theory (MATT) for the resulting combined general modal type theory. In particular, we can interpret MATT in any finite diagram of toposes and geometric morphisms, with positive modalities for inverse image functors and negative modalities for direct image functors.
Type refinements combine the compositionality of typechecking with the expressivity of program logics, offering a synergistic approach to program verification. In this paper we apply dependent type refinements to SAX, a futures-based process calculus that arises from the Curry-Howard interpretation of the intuitionistic semi-axiomatic sequent calculus and includes unrestricted recursion both at the level of types and processes. With our type refinement system, we can reason about the partial correctness of SAX programs, complementing prior work on sized type refinements that supports reasoning about termination. Our design regime synthesizes the infinitary proof theory of SAX with that of bidirectional typing and Hoare logic, deriving some standard reasoning principles for data and (co)recursion while enabling information hiding for codata. We prove syntactic type soundness, which entails a notion of partial correctness that respects codata encapsulation. We illustrate our language through a few simple examples.
Combining ideas coming from Stone duality and Reynolds parametricity, we formulate in a clean and principled way a notion of profinite lambda-term which, we show, generalizes at every type the traditional notion of profinite word coming from automata theory. We start by defining the Stone space of profinite lambda-terms as a projective limit of finite sets of usual lambda-terms, considered modulo a notion of equivalence based on the finite standard model. One main contribution of the paper is to establish that, somewhat surprisingly, the resulting notion of profinite lambda-term coming from Stone duality lives in perfect harmony with the principles of Reynolds parametricity. In addition, we show that the notion of profinite lambda-term is compositional by constructing a cartesian closed category of profinite lambda-terms, and we establish that the embedding from lambda-terms modulo beta-eta-conversion to profinite lambda-terms is faithful using Statman's finite completeness theorem. Finally, we prove that the traditional Church encoding of finite words into lambda-terms can be extended to profinite words, and leads to a homeomorphism between the space of profinite words and the space of profinite lambda-terms of the corresponding Church type.
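For orientation, the classical Church encoding mentioned in the final sentence represents a finite word over a two-letter alphabet as a simply-typed lambda-term; for instance (standard encoding, with $o$ a base type):
$$
\underline{abb} \;=\; \lambda a.\,\lambda b.\,\lambda x.\; a\,(b\,(b\,x)) \;:\; (o \to o) \to (o \to o) \to o \to o,
$$
and the paper shows that this encoding extends from finite to profinite words, yielding a homeomorphism between the space of profinite words and the space of profinite lambda-terms of this Church type.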