Proceedings of the 39th Conference on the Mathematical Foundations of Programming Semantics. Held on the campus of Indiana University, Bloomington, IN and co-located with CALCO 2023, the 10th Conference on Algebra and Coalgebra in Computer Science.
DOI:

Separation logic is used to reason locally about stateful programs. State-of-the-art program logics for higher-order store are usually built on top of untyped operational semantics, in part because traditional denotational methods have struggled to simultaneously account for general references and parametric polymorphism. The recent discovery of simple denotational semantics for general references and polymorphism in synthetic guarded domain theory has enabled us to develop Tulip, a higher-order separation logic over the typed equational theory of higher-order store for a monadic version of System F{mu,ref}. The Tulip logic differs from operationally-based program logics in two ways: predicates range over the meanings of typed terms rather than over the raw code of untyped terms, and they are automatically invariant under the equational congruence of higher-order store, which applies even underneath a binder. As a result, "pure" proof steps that conventionally require focusing the Hoare triple on an operational redex are replaced by a simple equational rewrite in Tulip. We have evaluated Tulip against standard examples involving linked lists in the heap, comparing our abstract equational reasoning with more familiar operational-style reasoning. Our main result is the soundness of Tulip, which we establish by constructing a BI-hyperdoctrine over the denotational semantics of F{mu,ref} in an impredicative version of synthetic guarded domain theory.

Linear logic has provided new perspectives on proof theory, denotational semantics and the study of programming languages. One of its main successes is proof-nets, canonical representations of proofs that lie at the intersection between logic and graph theory. In the case of the minimalist proof system of multiplicative linear logic without units (MLL), these two aspects are completely fused: proof-nets for this system are graphs satisfying a correctness criterion that can be fully expressed in the language of graphs. For more expressive logical systems (containing logical constants, quantifiers and exponential modalities), this is not completely the case. The purely graphical approach of proof-nets deprives them of any sequential structure that is crucial to represent the order in which arguments are presented, which is necessary for these extensions. Rebuilding this order of presentation - sequentializing the graph - is thus a requirement for a graph to be logical. Presentations and studies of the artifacts ensuring that sequentialization can be done, such as boxes or jumps, are an integral part of research on linear logic. Jumps, extensively studied by Faggian and di Giamberardino, can express intermediate degrees of sequentialization between a sequent calculus proof and a fully desequentialized proof-net. We propose to analyze the logical strength of jumps by internalizing them in an extension of MLL where axioms on a specific formula, the jumping formula, introduce constraints on the […]

Lawvere showed that generalised metric spaces are categories enriched over $[0, \infty]$, the quantale of the positive extended reals. The statement of enrichment is a quantitative analogue of being a preorder. Towards seeking a logic for quantitative metric reasoning, we investigate three $[0,\infty]$-valued propositional logics over the Lawvere quantale. The basic logical connectives shared by all three logics are those that can be interpreted in any quantale, viz. finite conjunctions and disjunctions, tensor (addition for the Lawvere quantale) and linear implication (here a truncated subtraction); to these we add, in turn, the constant $1$ to express integer values, and scalar multiplication by a non-negative real to express general affine combinations. Quantitative equational logic can be interpreted in the third logic if we allow inference systems instead of axiomatic systems. For each of these logics we develop a natural deduction system which we prove to be decidably complete w.r.t. the quantale-valued semantics. The heart of the completeness proof makes use of the Motzkin transposition theorem. Consistency is also decidable; the proof makes use of Fourier-Motzkin elimination of linear inequalities. Strong completeness does not hold in general, even (as is known) for theories over finitely many propositional variables; indeed even an approximate form of strong completeness in the sense of Pavelka or Ben Yaacov -- provability up to arbitrary precision -- does not hold. However, we can […]
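The quantale-valued connectives the abstract lists are simple arithmetic over $[0,\infty]$. The following is a minimal illustrative sketch, not taken from the paper: it assumes the usual Lawvere-quantale convention that $0$ plays the role of truth and the order is the reversed numeric order, so conjunction is `max`, disjunction is `min`, tensor is addition, and linear implication is truncated subtraction.

```python
import math

TRUE, FALSE = 0.0, math.inf  # in the Lawvere quantale, 0 plays the role of truth

def tensor(a, b):
    # tensor is addition in the Lawvere quantale
    return a + b

def implies(a, b):
    # linear implication: truncated subtraction, the residual of tensor
    return 0.0 if a == math.inf else max(b - a, 0.0)

def conj(a, b):
    # finite conjunction: meet w.r.t. the reversed order, i.e. numeric max
    return max(a, b)

def disj(a, b):
    # finite disjunction: join w.r.t. the reversed order, i.e. numeric min
    return min(a, b)

# residuation law: a (x) x entails b  iff  x entails a -o b,
# where "entails" is numeric >= under the reversed order
for a, x, b in [(1.0, 4.0, 3.0), (2.0, 0.5, 5.0), (0.0, 2.0, 2.0)]:
    assert (tensor(a, x) >= b) == (x >= implies(a, b))
```

The loop at the end checks, on a few sample values, the adjunction between tensor and linear implication that makes truncated subtraction the right notion of implication here.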

Modern programming frequently requires generalised notions of program equivalence based on a metric or a similar structure. Previous work addressed this challenge by introducing the notion of a V-equation, i.e. an equation labelled by an element of a quantale V, which covers inter alia (ultra-)metric, classical, and fuzzy (in)equations. It also introduced a V-equational system for the linear variant of lambda-calculus where any given resource must be used exactly once. In this paper we drop the (often too strict) linearity constraint by adding graded modal types which allow multiple uses of a resource in a controlled manner. We show that such control, whilst providing more expressivity to the programmer, also interacts more richly with V-equations than the linear or Cartesian cases. Our main result is the introduction of a sound and complete V-equational system for a lambda-calculus with graded modal types interpreted by what we call a Lipschitz exponential comonad. We also show how to build such comonads canonically via a universal construction, and use our results to derive graded metric equational systems (and corresponding models) for programs with timed and probabilistic behaviour.

This paper introduces a dynamic logic extension of separation logic. The assertion language of separation logic is extended with modalities for the five types of basic instructions of separation logic: simple assignment, look-up, mutation, allocation, and de-allocation. The main novelty of the resulting dynamic logic is that it allows one to combine different approaches to resolving these modalities. One such approach is based on the standard weakest precondition calculus of separation logic. The other approach introduced in this paper provides a novel alternative formalization in the proposed dynamic logic extension of separation logic. The soundness and completeness of this axiomatization have been formalized in the Coq theorem prover.
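The five basic instructions named above are the standard heap-manipulating commands of separation logic. As a sketch only, and entirely separate from the paper's Coq formalization, they can be modelled operationally over a store (variables to values) and a heap (locations to values), both represented as dicts; the function names below are our own.

```python
# Store: dict from variable names to values.
# Heap: dict from locations (positive ints) to values.
# Expressions are modelled as functions from the store to a value.

def assign(store, heap, x, e):      # x := e
    store = dict(store)
    store[x] = e(store)
    return store, heap

def lookup(store, heap, x, e):      # x := [e]  (faults on unallocated location)
    loc = e(store)
    assert loc in heap, "dereference of unallocated location"
    store = dict(store)
    store[x] = heap[loc]
    return store, heap

def mutate(store, heap, e1, e2):    # [e1] := e2  (faults on unallocated location)
    loc = e1(store)
    assert loc in heap, "mutation of unallocated location"
    heap = dict(heap)
    heap[loc] = e2(store)
    return store, heap

def alloc(store, heap, x, e):       # x := cons(e): bind a fresh location
    loc = max(heap, default=0) + 1
    store, heap = dict(store), dict(heap)
    heap[loc] = e(store)
    store[x] = loc
    return store, heap

def dealloc(store, heap, e):        # dispose(e)  (faults on unallocated location)
    loc = e(store)
    assert loc in heap, "disposal of unallocated location"
    heap = dict(heap)
    del heap[loc]
    return store, heap
```

For example, allocating a cell, mutating it, and reading it back: `s, h = alloc({}, {}, 'x', lambda st: 7)` followed by `mutate` and `lookup` through `st['x']`. The faulting `assert`s reflect that look-up, mutation, and de-allocation are only defined on allocated locations.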

We present a simple functional programming language, called Dual PCF, that implements forward-mode automatic differentiation using dual numbers in the framework of exact real number computation. The main new feature of this language is the ability to evaluate correctly, up to the precision specified by the user -- in a simple and direct way -- the directional derivative of functionals as well as first-order functions. In contrast to other comparable languages, Dual PCF also includes the recursive operator for defining functions and functionals. We provide a wide range of examples of Lipschitz functions and functionals that can be defined in Dual PCF. We use domain theory both to give a denotational semantics to the language and to prove the correctness of the new derivative operator using logical relations. To be able to differentiate functionals -- including on function spaces equipped with their compact-open topology, which do not admit a norm -- we develop a domain-theoretic directional derivative that is Scott continuous and extends Clarke's subgradient of real-valued locally Lipschitz maps on Banach spaces to real-valued continuous maps on Hausdorff topological vector spaces. Finally, we show that we can express arbitrary computable linear functionals in Dual PCF.
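The dual-number technique underlying the language is easy to illustrate outside the exact-real setting. The following sketch uses plain floats (so it lacks the precision guarantees and the recursion/functional features that distinguish Dual PCF) and shows only the standard idea: a dual number $a + b\varepsilon$ with $\varepsilon^2 = 0$ carries a value together with its derivative.

```python
class Dual:
    """Dual number a + b*eps with eps^2 = 0; `der` carries the derivative."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der

    def _coerce(self, other):
        return other if isinstance(other, Dual) else Dual(other)

    def __add__(self, other):
        other = self._coerce(other)
        return Dual(self.val + other.val, self.der + other.der)
    __radd__ = __add__

    def __mul__(self, other):
        # product rule: (a + b*eps)(c + d*eps) = ac + (ad + bc)*eps
        other = self._coerce(other)
        return Dual(self.val * other.val,
                    self.val * other.der + self.der * other.val)
    __rmul__ = __mul__

def derivative(f, x):
    # seed the eps-coefficient with dx/dx = 1 and read it off after evaluation
    return f(Dual(x, 1.0)).der

# d/dx (x^2 + 3x) at x = 2 is 2*2 + 3 = 7
assert derivative(lambda x: x * x + 3 * x, 2.0) == 7.0
```

Forward mode computes the directional derivative in a single evaluation pass, which is the mechanism Dual PCF lifts to exact real arithmetic and to functionals.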

Saturation is a fundamental game-semantic property satisfied by strategies that interpret higher-order concurrent programs. It states that the strategy must be closed under certain rearrangements of moves, and corresponds to the intuition that program moves (P-moves) may depend only on moves made by the environment (O-moves). We propose an automata model over an infinite alphabet, called saturating automata, for which all accepted languages are guaranteed to satisfy a closure property mimicking saturation. We show how to translate the finitary fragment of Idealized Concurrent Algol (FICA) into saturating automata, confirming their suitability for modelling higher-order concurrency. Moreover, we find that, for terms in normal form, the resultant automaton has linearly many transitions and states with respect to term size, and can be constructed in polynomial time. This is in contrast to earlier attempts at finding automata-theoretic models of FICA, which did not guarantee saturation and involved an exponential blow-up during translation, even for normal forms.

Orthogonality is a notion based on the duality between programs and their environments, used to determine when they can be safely combined. For instance, it is a powerful tool to establish termination properties in classical formal systems. It was given a general treatment by Hyland and Schalk with the concept of an orthogonality category, of which numerous models of linear logic are instances. This paper considers the subclass of focused orthogonalities. We develop a theory of fixpoint constructions in focused orthogonality categories. Central results are lifting theorems for initial algebras and final coalgebras. These crucially hinge on the insight that focused orthogonality categories are relational fibrations. The theory provides an axiomatic categorical framework for models of linear logic with least and greatest fixpoints of types. We further investigate domain-theoretic settings, showing how to lift bifree algebras, used to solve mixed-variance recursive type equations, to focused orthogonality categories.

The concept of updating a probability distribution in the light of new evidence lies at the heart of statistics and machine learning. Pearl's rule and Jeffrey's rule are two natural update mechanisms which lead to different outcomes, yet the similarities and differences remain mysterious. This paper clarifies their relationship in several ways: via separate descriptions of the two update mechanisms in terms of probabilistic programs and sampling semantics, and via different notions of likelihood (for Pearl and for Jeffrey). Moreover, it is shown that Jeffrey's update rule arises via variational inference. In terms of categorical probability theory, this amounts to an analysis of the situation in terms of the behaviour of the multiset functor, extended to the Kleisli category of the distribution monad.
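The divergence between the two rules already shows up in a tiny discrete example. The sketch below follows the standard formulations (Pearl's virtual-evidence update reweights the prior by expected likelihood and normalizes once; Jeffrey's rule mixes the Bayesian posteriors, weighted by the soft evidence); the particular numbers are invented for illustration and are not from the paper.

```python
# Prior over a hypothesis h, a channel P(e|h), and soft evidence Q over e.
prior = {'h0': 0.5, 'h1': 0.5}
chan  = {'h0': {'e0': 0.9, 'e1': 0.1},
         'h1': {'e0': 0.2, 'e1': 0.8}}
soft  = {'e0': 0.3, 'e1': 0.7}

def normalize(d):
    z = sum(d.values())
    return {k: v / z for k, v in d.items()}

def pearl(prior, chan, soft):
    # virtual-evidence update: reweight each h by its expected likelihood,
    # then normalize once
    return normalize({h: p * sum(soft[e] * chan[h][e] for e in soft)
                      for h, p in prior.items()})

def jeffrey(prior, chan, soft):
    # mixture of the Bayesian posteriors P(h|e), weighted by the soft evidence
    posts = {e: normalize({h: p * chan[h][e] for h, p in prior.items()})
             for e in soft}
    return {h: sum(soft[e] * posts[e][h] for e in soft) for h in prior}
```

Running both on the same data gives genuinely different distributions over the hypothesis, which is the mismatch the paper sets out to explain.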

Stochastic memoization is a higher-order construct of probabilistic programming languages that is key in Bayesian nonparametrics, a modular approach that allows us to extend models beyond their parametric limitations and compose them in an elegant and principled manner. Stochastic memoization is simple and useful in practice, but semantically elusive, particularly regarding dataflow transformations. As the naive implementation resorts to the state monad, which is not commutative, it is not clear if stochastic memoization preserves the dataflow property -- i.e., whether we can reorder the lines of a program without changing its semantics, provided the dataflow graph is preserved. In this paper, we give an operational and categorical semantics to stochastic memoization and name generation in the context of a minimal probabilistic programming language, for a restricted class of functions. Our contribution is a first model of stochastic memoization of constant Bernoulli functions with a non-enumerable type, which validates dataflow transformations, bridging the gap between traditional probability theory and higher-order probability models. Our model uses a presheaf category and a novel probability monad on it.
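The "naive implementation" via state that the abstract mentions can be sketched as a memoizing wrapper around a random function: each argument is sampled once and the result is stored, so later calls agree with earlier ones. The sketch below (our own names, Python's mutable dict playing the role of the state monad) shows the mechanism, not the paper's presheaf model.

```python
import random

def stochastic_mem(f):
    """Memoize a random function: sample f(x) at most once per argument x.

    The mutable table is the state being threaded; this hidden state is
    exactly why it is unclear, a priori, that program lines can be
    reordered without changing the semantics.
    """
    table = {}
    def mem(x):
        if x not in table:
            table[x] = f(x)
        return table[x]
    return mem

# a memoized constant-Bernoulli function: one persistent coin flip per name
coin = stochastic_mem(lambda _name: random.random() < 0.5)

# repeated calls on the same argument return the same flip
assert coin('a') == coin('a')
assert coin('b') == coin('b')
```

Each distinct argument gets an independent flip, but within one run the flip for a given argument is fixed, which is the behaviour Bayesian nonparametric models rely on.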

Categorical semantics of type theories are often characterized as structure-preserving functors. This is because in category theory both the syntax and the domain of interpretation are uniformly treated as structured categories, so that we can express interpretations as structure-preserving functors between them. This mathematical characterization of semantics makes it convenient to manipulate and to reason about relationships between interpretations. Motivated by this success of functorial semantics, we address the question of finding a functorial analogue in abstract interpretation, a general framework for comparing semantics, so that we can bring similar benefits of functorial semantics to semantic abstractions used in abstract interpretation. Major differences concern the notion of interpretation that is being considered. Indeed, conventional semantics are value-based, whereas abstract interpretation typically deals with more complex properties. In this paper, we propose a functorial approach to abstract interpretation and study associated fundamental concepts therein. In our approach, interpretations are expressed as oplax functors in the category of posets, and abstraction relations between interpretations are expressed as lax natural transformations representing concretizations. We present examples of these formal concepts from monadic semantics of programming languages and discuss soundness.

Various categories have been proposed as targets for the denotational semantics of higher-order probabilistic programming languages. One such proposal involves joint probability distributions (couplings) used in Bayesian statistical models with conditioning. In previous treatments, composition of joint measures was performed by disintegrating to obtain Markov kernels, composing the kernels, then reintegrating to obtain a joint measure. Disintegrations exist only under certain restrictions on the underlying spaces. In this paper we propose a category whose morphisms are joint finite measures, in which composition is defined without reference to disintegration, allowing its application to a broader class of spaces. The category is symmetric monoidal, with a pleasing symmetry in which the dagger structure is a simple transpose.

Cartesian differential categories come equipped with a differential combinator which axiomatizes the fundamental properties of the total derivative from differential calculus. The objective of this paper is to understand when the Kleisli category of a monad is a Cartesian differential category. We introduce Cartesian differential monads, which are monads whose Kleisli category is a Cartesian differential category by way of lifting the differential combinator from the base category. Examples of Cartesian differential monads include tangent bundle monads and reader monads. We give a precise characterization of Cartesian differential categories which are Kleisli categories of Cartesian differential monads using abstract Kleisli categories. We also show that the Eilenberg-Moore category of a Cartesian differential monad is a tangent category.

In Linear Logic ($\mathsf{LL}$), the exponential modality $!$ brings forth a distinction between non-linear proofs and linear proofs, where linear means using an argument exactly once. Differential Linear Logic ($\mathsf{DiLL}$) is an extension of Linear Logic which includes additional rules for $!$ which encode differentiation and the ability to linearize proofs. On the other hand, Graded Linear Logic ($\mathsf{GLL}$) is a variation of Linear Logic in which $!$ is indexed over a semiring $R$. This $R$-grading allows for non-linear proofs of degree $r \in R$, such that the linear proofs are of degree $1 \in R$. There has been recent interest in combining these two variations of $\mathsf{LL}$ together and developing Graded Differential Linear Logic ($\mathsf{GDiLL}$). In this paper we present a sequent calculus for $\mathsf{GDiLL}$, as well as introduce its categorical semantics, which we call graded differential categories, using both coderelictions and deriving transformations. We prove that symmetric powers always give graded differential categories, and provide other examples of graded differential categories. We also discuss graded versions of (monoidal) coalgebra modalities, additive bialgebra modalities, and the Seely isomorphisms, as well as their implementations in the sequent calculus of $\mathsf{GDiLL}$.

Within dependent type theory, we provide a topological counterpart of well-founded trees (for short, W-types) by using a proof-relevant version of the notion of inductively generated suplattices introduced in the context of formal topology under the name of inductively generated basic covers. In more detail, we show, firstly, that in Homotopy Type Theory, W-types and proof-relevant inductively generated basic covers are propositionally mutually encodable. Secondly, we prove they are definitionally mutually encodable in the Agda implementation of intensional Martin-Löf type theory. Finally, we reframe the equivalence in the Minimalist Foundation framework by introducing well-founded predicates as the logical counterpart for predicates of dependent W-types. All the results have been checked in the Agda proof assistant.

In this paper we show that using implicative algebras one can produce models of set theory generalizing Heyting/Boolean-valued models and realizability models of (I)ZF, both in intuitionistic and classical logic. As a consequence, any topos which is obtained from a Set-based tripos as the result of the tripos-to-topos construction hosts a model of intuitionistic or classical set theory, provided a large enough strongly inaccessible cardinal exists.

Typical arguments for results like Kleene's Second Recursion Theorem and the existence of self-writing computer programs bear the fingerprints of equational reasoning and combinatory logic. In fact, the connection of combinatory logic and computability theory is very old, and this paper extends this connection in new ways. In one direction, we counter the main trend in both computability theory and combinatory logic of heading straight to undecidability. Instead, this paper proposes using several very small equational logics to examine results in computability theory itself. These logics are decidable via term rewriting. We argue that they have something interesting to say about computability theory. They are closely related to fragments of combinatory logic which are decidable, and so this paper contributes to the study of such fragments. The paper has a few surprising results, such as a classification of quine programs (programs which output themselves) in two decidable fragments. The classification goes via examination of normal forms in term rewriting systems, hence the title of the paper. The classification is an explanation of why all quine programs (in any language) are "pretty much the same, except for inessential details." In addition, we study the relational structure whose objects are the programs, with the relation "p expresses q" meaning that if the program p is run on nothing, then it eventually outputs the program q.
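The abstract's classification concerns quines abstractly, via normal forms; for concreteness, here is a standard Python quine, verified by executing it and comparing its output to its own source. This is just the familiar "data plus a printer that applies the data to itself" shape that the paper argues all quines share, not an artifact from the paper.

```python
import io
import contextlib

# the data half: a template that, when formatted with itself, yields the program
src = 's = %r\nprint(s %% s)'
program = src % src   # full program text: s = '...'; print(s % s)

# run the program and capture what it prints
buf = io.StringIO()
with contextlib.redirect_stdout(buf):
    exec(program)

# its output is exactly its own source (print appends a newline)
assert buf.getvalue() == program + '\n'
```

The quine's two halves (the quoted template and the line that applies the template to its own representation) are the "inessential details" aside; any quine must somehow contain a description of itself and a mechanism for unfolding that description.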

We show that, contrary to appearances, Multimodal Type Theory (MTT) over a 2-category M can be interpreted in any M-shaped diagram of categories having, and functors preserving, M-sized limits, without the need for extra left adjoints. This is achieved by a construction called "co-dextrification" that co-freely adds left adjoints to any such diagram, which can then be used to interpret the "context lock" functors of MTT. Furthermore, if any of the functors in the diagram have right adjoints, these can also be internalized in type theory as negative modalities in the style of FitchTT. We introduce the name Multimodal Adjoint Type Theory (MATT) for the resulting combined general modal type theory. In particular, we can interpret MATT in any finite diagram of toposes and geometric morphisms, with positive modalities for inverse image functors and negative modalities for direct image functors.

Type refinements combine the compositionality of typechecking with the expressivity of program logics, offering a synergistic approach to program verification. In this paper we apply dependent type refinements to SAX, a futures-based process calculus that arises from the Curry-Howard interpretation of the intuitionistic semi-axiomatic sequent calculus and includes unrestricted recursion both at the level of types and processes. With our type refinement system, we can reason about the partial correctness of SAX programs, complementing prior work on sized type refinements that supports reasoning about termination. Our design regime synthesizes the infinitary proof theory of SAX with that of bidirectional typing and Hoare logic, deriving some standard reasoning principles for data and (co)recursion while enabling information hiding for codata. We prove syntactic type soundness, which entails a notion of partial correctness that respects codata encapsulation. We illustrate our language through a few simple examples.

Combining ideas coming from Stone duality and Reynolds parametricity, we formulate in a clean and principled way a notion of profinite lambda-term which, we show, generalizes at every type the traditional notion of profinite word coming from automata theory. We start by defining the Stone space of profinite lambda-terms as a projective limit of finite sets of usual lambda-terms, considered modulo a notion of equivalence based on the finite standard model. One main contribution of the paper is to establish that, somewhat surprisingly, the resulting notion of profinite lambda-term coming from Stone duality lives in perfect harmony with the principles of Reynolds parametricity. In addition, we show that the notion of profinite lambda-term is compositional by constructing a cartesian closed category of profinite lambda-terms, and we establish that the embedding from lambda-terms modulo beta-eta-conversion to profinite lambda-terms is faithful using Statman's finite completeness theorem. Finally, we prove that the traditional Church encoding of finite words into lambda-terms can be extended to profinite words, and leads to a homeomorphism between the space of profinite words and the space of profinite lambda-terms of the corresponding Church type.
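The Church encoding of finite words mentioned at the end is standard: a word over the alphabet {a, b} becomes a term expecting one function per letter plus a seed, with the word's letters applied as a composition. A small sketch (ours, for the finite case only; the paper's contribution is extending this to the profinite setting):

```python
# Church encoding of a finite word over {a, b}: the word c1...cn becomes
# lambda a. lambda b. lambda x. c1(c2(...cn(x))), with ci in {a, b}.
def church(word):
    def encoded(a, b, x):
        # fold from the right so the leftmost letter is applied outermost
        for c in reversed(word):
            x = a(x) if c == 'a' else b(x)
        return x
    return encoded

# decode by interpreting each letter as prepending its name to a string
def decode(w):
    return w(lambda s: 'a' + s, lambda s: 'b' + s, '')

assert decode(church('abab')) == 'abab'
```

Instantiating the letter functions with the transition maps of a finite automaton instead of string-prepending functions runs the automaton on the word, which is the bridge to automata theory that profinite words, and here profinite lambda-terms, generalize.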