41st Conference on the Mathematical Foundations of Programming Semantics
MFPS; N°41
University of Strathclyde, Glasgow, Scotland; 2025-06-16 to 2025-06-21
Programming language semantics and its mathematical foundations;
[10.46298/entics.proceedings.mfps41](https://doi.org/10.46298/entics.proceedings.mfps41)
Awodey, later with Newstead, showed how polynomial functors with extra structure (termed ``natural models'') hold within them the categorical semantics for dependent type theory. Their work presented these ideas clearly but ultimately led them outside of the usual category of polynomial functors to a particular \emph{tricategory} of polynomials in order to explain all of the structure possessed by such models. This paper builds on that work -- explicating the categorical semantics of dependent type theory by axiomatizing them entirely in terms of the usual category of polynomial functors. In order to handle the higher-categorical coherences required for such an explanation, we work with polynomial functors in the language of Homotopy Type Theory (HoTT), which allows for higher-dimensional structures to be expressed purely within this category. The move to HoTT moreover enables us to express a key additional condition on polynomial functors -- \emph{univalence} -- which is sufficient to guarantee that models of type theory expressed as univalent polynomials satisfy all higher coherences of their corresponding algebraic structures, purely in virtue of being closed under the usual constructors of dependent type theory. We call polynomial functors satisfying this condition \emph{polynomial universes}. As an example of the simplification to the theory of natural models this enables, we highlight the fact that a polynomial universe being closed under dependent product […]
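For orientation, the following recalls the standard notion at play; the notation ($U$, $E$) is ours, not necessarily the paper's. A map of types $p : E \to U$ induces a polynomial functor sending a type $X$ to the sum, over codes $a : U$, of functions from the fibre $E_a$ into $X$:

$$ P_p(X) \;=\; \sum_{a : U} \big( E_a \to X \big). $$

In the natural-models reading, $U$ plays the role of a universe of (codes for) types and $E_a$ is the type of elements coded by $a$.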
Causality serves as an abstract notion of time for concurrent systems. A computation is causal, or simply valid, if each observation of a computation event is preceded by the observation of its causes. The present work establishes that this simple requirement is equally relevant when the occurrence of an event is invertible. We propose a conservative extension of causal models for concurrency that accommodates reversible computations. We first model reversible computations using a symmetric residuation operation in the general model of configuration structures. We show that stable configuration structures, which correspond to prime algebraic domains, remain stable under the action of this residuation. We then derive a semantics of reversible computations for prime event structures, which is shown to coincide with a switch operation that dualizes conflict and causality.
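As a concrete reading of the validity condition stated above (an event may be observed only after all of its causes), here is a minimal Haskell sketch; the event names and causal order are hypothetical illustrations, and this is not the paper's residuation-based model:

```haskell
import Data.List (elemIndex)

-- Hypothetical events and (cause, effect) pairs, for illustration only.
type Event = String

causalOrder :: [(Event, Event)]
causalOrder = [("a", "b"), ("b", "c")]

-- A run is valid when every observed event is preceded by its causes.
validRun :: [Event] -> Bool
validRun run =
  and [ before c e | (c, e) <- causalOrder, e `elem` run ]
  where
    before c e = case (elemIndex c run, elemIndex e run) of
                   (Just i, Just j) -> i < j
                   _                -> False

-- validRun ["a","b","c"] == True
-- validRun ["b","a","c"] == False  (b observed before its cause a)
```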
It is well-known that intersection type assignment systems can be used to characterize strong normalization (SN). Typical proofs that typable lambda-terms are SN in these systems rely on semantical techniques. In this work, we study $\Lambda_\cap^e$, a variant of Coppo and Dezani's (Curry-style) intersection type system, and we propose a syntactical proof of strong normalization for it. We first design $\Lambda_\cap^i$, a Church-style version, in which terms closely correspond to typing derivations. Then we prove that typability in $\Lambda_\cap^i$ implies SN through a measure that, given a term, produces a natural number that decreases with each reduction step. Finally, the result is extended to $\Lambda_\cap^e$, since the two systems simulate each other.
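For readers less familiar with intersection types, these are the standard Curry-style introduction and elimination rules for $\cap$; the precise rules of $\Lambda_\cap^e$ may differ in detail:

$$ \frac{\Gamma \vdash t : \sigma \qquad \Gamma \vdash t : \tau}{\Gamma \vdash t : \sigma \cap \tau} \qquad\qquad \frac{\Gamma \vdash t : \sigma \cap \tau}{\Gamma \vdash t : \sigma} \qquad \frac{\Gamma \vdash t : \sigma \cap \tau}{\Gamma \vdash t : \tau} $$

The key point for normalization arguments is that one and the same term inhabits every member of an intersection, so a typing derivation records all the ways a subterm is used.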
Linear logic (LL) is a resource-aware, abstract logic programming language that refines both classical and intuitionistic logic. Linear logic semantics is typically presented in one of two ways: by associating each formula with the set of all contexts that can be used to prove it (e.g. phase semantics) or by assigning meaning directly to proofs (e.g. coherence spaces). This work proposes a different way of assigning meaning to proofs, adopting a proof-theoretic perspective. More specifically, we employ base-extension semantics (BeS) to characterise proofs through the notion of base support. Recent developments have shown that BeS is powerful enough to capture proof-theoretic notions in structurally rich logics such as intuitionistic linear logic. In this paper, we extend this framework to the classical case, presenting a proof-theoretic approach to the semantics of the multiplicative-additive fragment of linear logic (MALL).
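As a reminder of the flavour of base-extension semantics, here are the intuitionistic clauses in the style of Sandqvist (the MALL clauses developed in the paper are more refined): an atom $p$ is supported in a base $\mathcal{B}$ exactly when it is derivable from the atomic rules of $\mathcal{B}$, and implication quantifies over extensions of the base:

$$ \Vdash_{\mathcal{B}} p \;\text{ iff }\; \vdash_{\mathcal{B}} p, \qquad\qquad \Vdash_{\mathcal{B}} \varphi \to \psi \;\text{ iff for all bases } \mathcal{C} \supseteq \mathcal{B}:\; \Vdash_{\mathcal{C}} \varphi \text{ implies } \Vdash_{\mathcal{C}} \psi. $$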
We examine the relationships between axiomatic and cyclic proof systems for the partial and total versions of Hoare logic and those of its dual, known as reverse Hoare logic (or sometimes incorrectness logic). In the axiomatic proof systems for these logics, the proof rules for looping constructs involve an explicit loop invariant and, in the case of the total versions, additionally a well-founded termination measure. In the cyclic systems, these are replaced by rules that simply unroll the loops, together with a principle allowing the formation of cycles in the proof, subject to a global soundness condition that ensures the well-foundedness of the circular reasoning. Interestingly, the cyclic soundness conditions for partial Hoare logic and its reverse are similar and essentially coinductive in character, while those for the total versions are also similar and essentially inductive. We show that these cyclic systems are sound, by direct argument, and relatively complete, by translation from axiomatic to cyclic proofs.
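For concreteness, here are the standard axiomatic while-rules being contrasted with loop unrolling (notation is the textbook one and may differ from the paper's): partial correctness needs only the invariant $P$, while total correctness additionally requires a measure $t$ that strictly decreases in a well-founded order:

$$ \frac{\{P \wedge b\}\; C\; \{P\}}{\{P\}\; \mathtt{while}\ b\ \mathtt{do}\ C\; \{P \wedge \neg b\}} \qquad\qquad \frac{\{P \wedge b \wedge t = z\}\; C\; \{P \wedge t < z\}}{\{P\}\; \mathtt{while}\ b\ \mathtt{do}\ C\; \{P \wedge \neg b\}} $$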
Automata admitting at most one accepting run per structure, known as unambiguous automata, find applications in verification of reactive systems as they extend the class of deterministic automata whilst maintaining some of their desirable properties. In this paper, we generalise a classical construction of unambiguous automata from thin trees to thin coalgebras for analytic functors. This achieves two goals: extending the existing construction to a larger class of structures, and providing conceptual clarity and parametricity to the construction by formalising it in the coalgebraic framework. As part of the construction, we link automaton acceptance of languages of thin coalgebras to language recognition via so-called coherent algebras, which were previously introduced for studying thin coalgebras. This link also allows us to establish an automata-theoretic characterisation of languages recognised by finite coherent algebras.
Domain theory has been developed as a mathematical theory of computation and as a means of giving denotational semantics to programming languages. It helps us to fix the meaning of language concepts, to understand how programs behave and to reason about programs. At the same time it provides a rich setting for modelling various algebraic effects such as non-determinism, partial functions, side effects and numerous other forms of computation. In the present paper, we develop a general framework to construct algebraic effects in domain theory, where our domains are DCPOs: directed complete partial orders. We first describe so-called DCPO algebras for a signature, where the signature specifies the operations on the DCPO and the inequational theory they obey. This provides a method to represent various algebraic effects, like partiality. We then show that initial DCPO algebras exist by defining them as so-called Quotient Inductive-Inductive Types (QIITs), known from homotopy type theory. A quotient inductive-inductive type allows one to simultaneously define an inductive type and an inductive relation on that type, together with equations on the type. We illustrate our approach by showing that several well-known constructions of DCPOs fit our framework: coalesced sums, smash products and free DCPOs (partiality and power domains). Our work makes use of various features of homotopy type theory and is formalized in Cubical Agda.
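To fix the central definition (standard, not specific to the paper): a DCPO is a partially ordered set in which every directed subset $S$ has a least upper bound $\bigsqcup S$. As a guess at the simplest instance of the framework (the paper's exact axiomatisation may differ), partiality can be specified by a signature with one constant $\bot$ and the single inequation

$$ \bot \sqsubseteq x, $$

whose algebras are DCPOs with a least element representing divergence.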
We adapt Fiore, Plotkin, and Turi's treatment of abstract syntax with binding, substitution, and holes to account for languages with second-class sorts. Examples include programming calculi such as the Call-by-Value lambda-calculus (CBV) and Levy's Call-by-Push-Value (CBPV). Prohibiting second-class sorts from appearing in variable contexts changes the characterisation of the abstract syntax from monoids in monoidal categories to actions in actegories. We reproduce much of the development through bicategorical arguments. We apply the resulting theory by proving substitution lemmata for varieties of CBV.
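To illustrate what a second-class sort looks like syntactically, here is a minimal Haskell sketch of a two-sorted CBV-style grammar in which only the value sort has variables; the constructor names are ours and the grammar is only an illustration, not taken from the paper:

```haskell
-- A two-sorted CBV-style grammar: values and computations.
type Name = String

data Val  = Var Name            -- variables exist only at value sort
          | Lam Name Comp       -- abstraction binds a value variable

data Comp = Ret Val             -- return a value
          | App Val Val         -- application of a value to a value
          | Bind Comp Name Comp -- run, bind the result, continue
```

Because the computation sort has no variable constructor, computation-sorted variables can never enter a context, which is precisely the restriction that moves the characterisation from monoids to actions.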
The Functional Machine Calculus (Heijltjes 2022) is a new approach to unifying the imperative and functional programming paradigms. It extends the lambda-calculus, preserving the key features of confluent reduction and typed termination, to embed computational effects, evaluation strategies, and control flow operations. The first instalment modelled sequential higher-order computation with global store, input/output, probabilities, and non-determinism, and embedded both the call-by-name and call-by-value lambda-calculus, as well as Moggi's computational metalanguage and Levy's call-by-push-value. The present paper extends the calculus from sequential to branching and looping control flow. This allows the faithful embedding of a minimal but complete imperative language, including conditionals, exception handling, and iteration, as well as constants and algebraic data types. The calculus is defined through a simple operational semantics, extending the (simplified) Krivine machine for the lambda-calculus with multiple operand stacks to model effects and a continuation stack to model sequential, branching, and looping computation. It features a confluent reduction relation and a system of simple types that guarantees termination of the machine and strong normalization of reduction (in the absence of iteration). These properties carry over to the embedded imperative language, providing a unified functional-imperative model of computation that supports simple types, a […]
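For reference, a minimal Haskell sketch of the classical Krivine machine for the pure lambda-calculus (with de Bruijn indices), the base machine that the FMC extends with multiple operand stacks and a continuation stack; this rendering is ours, not the paper's definition:

```haskell
-- Terms with de Bruijn indices.
data Term = Ix Int | Lam Term | App Term Term

data Closure = Closure Term [Closure]  -- a term paired with its environment

-- Machine state: current closure and an operand stack; assumes closed terms.
step :: (Closure, [Closure]) -> Maybe (Closure, [Closure])
step (Closure (App t u) env, s)     = Just (Closure t env, Closure u env : s)
step (Closure (Lam t)   env, c : s) = Just (Closure t (c : env), s)
step (Closure (Ix n)    env, s)     = Just (env !! n, s)
step _                              = Nothing  -- Lam with empty stack: final
```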
Binomial distributions capture the probabilities of `heads' outcomes when a (biased) coin is tossed multiple times. The coin may be identified with a distribution on the two-element set {0,1}, where the 1 outcome corresponds to `head'. One can also toss two separate coins, with different biases, in parallel and record the outcomes. This paper investigates a slightly different `bivariate' binomial distribution, where the two coins are dependent (also called entangled, or entwined): the pair of coins is then given by a single distribution on the product {0,1} × {0,1}. This bivariate binomial exists in the literature, with complicated formulations. Here we use the language of category theory to give a new succinct formulation. This paper investigates, also in categorically inspired form, basic properties of these bivariate distributions, including their mean, variance and covariance, and their behaviour under convolution and under updating, in Laplace's rule of succession. Furthermore, it is shown how Expectation Maximisation works for these bivariate binomials, so that mixtures of bivariate binomials can be recognised in data. This paper concentrates on the bivariate case, but the binomial distributions may be generalised to the multivariate case, with multiple dimensions, in a straightforward manner.
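For comparison, the familiar univariate binomial that the bivariate version generalises (notation ours): tossing a coin with bias $r \in [0,1]$ a total of $K$ times yields $k$ heads with probability

$$ \mathrm{bn}[K](r)(k) \;=\; \binom{K}{k}\, r^{k}\, (1-r)^{K-k}. $$

In the bivariate case the single bias $r$ is replaced by a joint distribution on $\{0,1\} \times \{0,1\}$, so the two recorded head-counts may be correlated.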
Inference is a fundamental reasoning technique in probability theory. When applied to a large joint distribution, it involves updating with evidence (conditioning) in one or more components (variables) and computing the outcome in other components. When the joint distribution is represented by a Bayesian network, the network structure may be exploited to proceed in a compositional manner -- with great benefits. However, the main challenge is that updating involves (re)normalisation, making it an operation that interacts badly with other operations. String diagrams are becoming popular as a graphical technique for probabilistic (and quantum) reasoning. Conditioning has appeared in string diagrams, in terms of a disintegration, using bent wires and shaded (or dashed) normalisation boxes. It has become clear that such normalisation boxes do satisfy certain compositional rules. This paper takes a decisive step in this development by adding a removal rule to the formalism, for the deletion of shaded boxes. Via this removal rule one can get rid of shaded boxes and terminate an inference argument. This paper illustrates via many (graphical) examples how the resulting compositional inference technique can be used for Bayesian networks, causal reasoning and counterfactuals.
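The (re)normalisation at issue is, in the discrete case, the familiar update of a distribution $\sigma$ with evidence given by a (fuzzy) predicate $p$:

$$ \sigma|_{p}(x) \;=\; \frac{\sigma(x) \cdot p(x)}{\sum_{y} \sigma(y) \cdot p(y)}. $$

The denominator is what the shaded normalisation boxes stand for diagrammatically; the removal rule introduced in the paper governs when such boxes may be deleted.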
Probabilistic separation logic offers an approach to reasoning about imperative probabilistic programs in which a separating conjunction is used as a mechanism for expressing independence properties. Crucial to the effectiveness of the formalism is the frame rule, which enables modular reasoning about independent probabilistic state. We explore a semantic formulation of probabilistic separation logic, in which the frame rule has the same simple formulation as in separation logic, without further side conditions. This is achieved by building a notion of safety into specifications; using this, we establish a crucial property of specifications, called relative tightness, from which the soundness of the frame rule follows.
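The rule in question, in its classical separation-logic form (in the probabilistic setting the separating conjunction $*$ expresses independence rather than heap disjointness):

$$ \frac{\{P\}\; C\; \{Q\}}{\{P * R\}\; C\; \{Q * R\}} $$

Classically this rule carries a side condition on the variables modified by $C$; the point of the semantic formulation above is that, with safety built into specifications, no such side condition is needed.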
We introduce continuation semantics for both fixpoint modal logic (FML) and Computation Tree Logic* (CTL*), parameterised by a choice of branching type and quantitative predicate lifting. Our main contribution is proving that they are equivalent to coalgebraic semantics, for all branching types. Our continuation semantics is defined over coalgebras of the continuation monad whose answer type coincides with the domain of truth values of the formulas. By identifying predicates and continuations, such a coalgebra has a canonical interpretation of the modality by evaluation of continuations. We show that this continuation semantics is equivalent to the coalgebraic semantics for fixpoint modal logic. We then reformulate the existing construction of coalgebraic models of CTL*. These models are usually required to have an infinitary trace/maximal execution map, characterized as the greatest fixpoint of a special operator. Instead, we allow coalgebraic models of CTL* to employ non-maximal fixpoints, which we call execution maps. Under this reformulation, we establish a general result on transferring execution maps via monad morphisms. From this result, we obtain that continuation semantics is equivalent to the coalgebraic semantics for CTL*. We also identify a sufficient condition under which CTL can be encoded into fixpoint modal logic under continuation semantics.
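A minimal Haskell sketch of "modality as evaluation of continuations"; illustrative only, not the paper's general quantitative setting:

```haskell
-- Fix a type r of truth values; predicates are continuations x -> r.
newtype Cont r x = Cont { runCont :: (x -> r) -> r }

type Pred r x = x -> r

-- A model is a coalgebra of the continuation monad.
type ContCoalg r x = x -> Cont r x

-- The modality evaluates, at each state, the continuation supplied by
-- the coalgebra on the predicate denoting the subformula.
box :: ContCoalg r x -> Pred r x -> Pred r x
box c p x = runCont (c x) p
```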
Partial Markov categories are a recent framework for categorical probability theory that provide an abstract account of partial probabilistic computation with updating semantics. In this article, we discuss two order relations on the morphisms of a partial Markov category. In particular, we prove that every partial Markov category is canonically preorder-enriched, recovering several well-known order enrichments. We also demonstrate that the existence of codiagonal maps (comparators) is closely related to order properties of partial Markov categories. Finally, we introduce a synthetic version of the Cauchy--Schwarz inequality and, from it, we prove that updating increases validity.
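For reference, the classical inequality whose synthetic counterpart is meant: for real-valued random variables $X$ and $Y$,

$$ \mathbb{E}[XY]^{2} \;\le\; \mathbb{E}[X^{2}]\, \mathbb{E}[Y^{2}]. $$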
We introduce the concept of compact quantitative equational theory. A quantitative equational theory is defined to be compact if all its consequences are derivable by means of finite proofs. We prove that the theory of interpolative barycentric (also known as convex) quantitative algebras of Mardare et al. is compact. This serves as a paradigmatic example, used to obtain other compact quantitative equational theories of convex algebras, each axiomatizing some distance on finitely supported probability distributions.
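For orientation: a quantitative equational theory derives judgements of the form

$$ s \equiv_{\varepsilon} t \qquad (\text{typically with } \varepsilon \text{ a non-negative rational}), $$

read as "$s$ and $t$ are within distance $\varepsilon$"; in the barycentric (convex) theory, terms are built from binary convex combinations $x +_{e} y$ with $e \in [0,1]$, interpreted over finitely supported distributions. Compactness, as defined above, asks that every derivable such judgement admit a finite derivation.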
In previous work, categories of algebras of endofunctors were shown to be enriched in categories of coalgebras of the same endofunctor, and the extra structure of that enrichment was used to define a generalization of inductive data types. These generalized inductive data types are parametrized by a coalgebra $C$, so we call them $C$-inductive data types; we call the morphisms induced by their universal property $C$-inductive functions. We extend that work by incorporating natural transformations into the theory: given a suitable natural transformation between endofunctors, we show that this induces enriched functors between their categories of algebras which preserve $C$-inductive data types and $C$-inductive functions. Such $C$-inductive data types are often finite versions of the corresponding inductive data type, and we show how our framework can extend classical initial algebra semantics to these types. For instance, we show that our theory naturally produces partially inductive functions on lists, changes in list element types, and tree pruning functions.
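The classical initial algebra semantics that the $C$-inductive framework generalises, rendered in Haskell; these are standard definitions, not the paper's enriched machinery:

```haskell
-- Initial algebras and folds.
newtype Fix f = In (f (Fix f))

-- The unique algebra morphism out of the initial algebra (fold).
cata :: Functor f => (f a -> a) -> Fix f -> a
cata alg (In x) = alg (fmap (cata alg) x)

-- Lists arise as the initial algebra of the functor X |-> 1 + A * X.
data ListF a x = NilF | ConsF a x
instance Functor (ListF a) where
  fmap _ NilF        = NilF
  fmap g (ConsF a x) = ConsF a (g x)

-- Example fold: summing an integer list.
sumList :: Fix (ListF Int) -> Int
sumList = cata alg
  where alg NilF        = 0
        alg (ConsF n s) = n + s
```

The paper's $C$-inductive types parametrise this picture by a coalgebra $C$, recovering, for example, finite variants of such types.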
We introduce dicodensity monads: a generalisation of pointwise codensity monads generated by functors to monads generated by mixed-variant bifunctors. Our construction is based on the notion of strong dinaturality (also known as Barr dinaturality), and is inspired by denotational models of certain types in polymorphic lambda calculi -- in particular, a form of continuation monads with universally quantified variables, such as the Church encoding of the list monad in System F. Extending some previous results on Cayley-style representations, we provide a set of sufficient conditions to establish an isomorphism between a monad and the dicodensity monad for a given bifunctor. Then, we focus on the class of monads obtained by instantiating our construction with hom-functors and, more generally, bifunctors given by objects of homomorphisms (that is, internalised hom-sets between Eilenberg--Moore algebras). This gives us, for example, novel presentations of monads generated by different kinds of semirings and other theories used to model ordered nondeterministic computations.
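Two standard Haskell definitions that situate the construction: the codensity monad of a functor, and the Church-style encoding of lists mentioned above; the dicodensity construction replaces the functor by a mixed-variant bifunctor (this sketch assumes only standard Haskell with rank-n types):

```haskell
{-# LANGUAGE RankNTypes #-}

-- Codensity monad of a functor f.
newtype Codensity f a =
  Codensity { runCodensity :: forall r. (a -> f r) -> f r }

-- Church encoding of lists, as in the System F list monad mentioned above.
newtype ChurchList a =
  ChurchList { foldCL :: forall r. (a -> r -> r) -> r -> r }
```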
Traces form a coarse notion of semantic equivalence between states of a process, and have been studied coalgebraically for various types of system. We instantiate the finitary coalgebraic trace semantics framework of Hasuo et al. for controller-versus-environment games, encompassing both nondeterministic and probabilistic environments. Although our choice of monads is guided by the constraints of this abstract framework, they enable us to recover familiar game-theoretic concepts. Concretely, we show that in these games, each element in the trace map corresponds to a collection (a subset or distribution) of plays the controller can force. Furthermore, each element can be seen as the outcome of following a controller strategy. Our results are parametrised by a weak distributive law, which computes what the controller can force in a single step.