<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:slash="http://purl.org/rss/1.0/modules/slash/">
  <channel>
    <title>Electronic Notes in Theoretical Informatics and Computer Science - Latest Publications</title>
    <description>Latest articles</description>
    <image>
      <url>https://entics.episciences.org/img/episciences_sign_50x50.png</url>
      <title>episciences.org</title>
      <link>https://entics.episciences.org</link>
    </image>
    <pubDate>Wed, 11 Mar 2026 01:53:11 +0000</pubDate>
    <generator>episciences.org</generator>
    <link>https://entics.episciences.org</link>
    <author>Electronic Notes in Theoretical Informatics and Computer Science</author>
    <dc:creator>Electronic Notes in Theoretical Informatics and Computer Science</dc:creator>
    <atom:link rel="self" type="application/rss+xml" href="https://entics.episciences.org/rss/papers"/>
    <atom:link rel="hub" href="http://pubsubhubbub.appspot.com/"/>
    <item>
      <title>The Functional Machine Calculus III: Control</title>
      <description><![CDATA[The Functional Machine Calculus (Heijltjes 2022) is a new approach to unifying the imperative and functional programming paradigms. It extends the lambda-calculus, preserving the key features of confluent reduction and typed termination, to embed computational effects, evaluation strategies, and control flow operations. The first instalment modelled sequential higher-order computation with global store, input/output, probabilities, and non-determinism, and embedded both the call-by-name and call-by-value lambda-calculus, as well as Moggi's computational metalanguage and Levy's call-by-push-value. The present paper extends the calculus from sequential to branching and looping control flow. This allows the faithful embedding of a minimal but complete imperative language, including conditionals, exception handling, and iteration, as well as constants and algebraic data types. The calculus is defined through a simple operational semantics, extending the (simplified) Krivine machine for the lambda-calculus with multiple operand stacks to model effects and a continuation stack to model sequential, branching, and looping computation. It features a confluent reduction relation and a system of simple types that guarantees termination of the machine and strong normalization of reduction (in the absence of iteration). These properties carry over to the embedded imperative language, providing a unified functional-imperative model of computation that supports simple types, a direct and intuitive operational semantics, and a confluent reduction semantics.]]></description>
      <pubDate>Sat, 20 Dec 2025 13:10:14 +0000</pubDate>
      <link>https://doi.org/10.46298/entics.16682</link>
      <guid>https://doi.org/10.46298/entics.16682</guid>
      <author>Heijltjes, Willem</author>
      <dc:creator>Heijltjes, Willem</dc:creator>
      <content:encoded><![CDATA[The Functional Machine Calculus (Heijltjes 2022) is a new approach to unifying the imperative and functional programming paradigms. It extends the lambda-calculus, preserving the key features of confluent reduction and typed termination, to embed computational effects, evaluation strategies, and control flow operations. The first instalment modelled sequential higher-order computation with global store, input/output, probabilities, and non-determinism, and embedded both the call-by-name and call-by-value lambda-calculus, as well as Moggi's computational metalanguage and Levy's call-by-push-value. The present paper extends the calculus from sequential to branching and looping control flow. This allows the faithful embedding of a minimal but complete imperative language, including conditionals, exception handling, and iteration, as well as constants and algebraic data types. The calculus is defined through a simple operational semantics, extending the (simplified) Krivine machine for the lambda-calculus with multiple operand stacks to model effects and a continuation stack to model sequential, branching, and looping computation. It features a confluent reduction relation and a system of simple types that guarantees termination of the machine and strong normalization of reduction (in the absence of iteration). These properties carry over to the embedded imperative language, providing a unified functional-imperative model of computation that supports simple types, a direct and intuitive operational semantics, and a confluent reduction semantics.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Traces via Strategies in Two-Player Games</title>
      <description><![CDATA[Traces form a coarse notion of semantic equivalence between states of a process, and have been studied coalgebraically for various types of system. We instantiate the finitary coalgebraic trace semantics framework of Hasuo et al. for controller-versus-environment games, encompassing both nondeterministic and probabilistic environments. Although our choice of monads is guided by the constraints of this abstract framework, they enable us to recover familiar game-theoretic concepts. Concretely, we show that in these games, each element in the trace map corresponds to a collection (a subset or distribution) of plays the controller can force. Furthermore, each element can be seen as the outcome of following a controller strategy. Our results are parametrised by a weak distributive law, which computes what the controller can force in a single step.]]></description>
      <pubDate>Sat, 20 Dec 2025 13:02:24 +0000</pubDate>
      <link>https://doi.org/10.46298/entics.16816</link>
      <guid>https://doi.org/10.46298/entics.16816</guid>
      <author>Plummer, Benjamin</author>
      <author>Cirstea, Corina</author>
      <dc:creator>Plummer, Benjamin</dc:creator>
      <dc:creator>Cirstea, Corina</dc:creator>
      <content:encoded><![CDATA[Traces form a coarse notion of semantic equivalence between states of a process, and have been studied coalgebraically for various types of system. We instantiate the finitary coalgebraic trace semantics framework of Hasuo et al. for controller-versus-environment games, encompassing both nondeterministic and probabilistic environments. Although our choice of monads is guided by the constraints of this abstract framework, they enable us to recover familiar game-theoretic concepts. Concretely, we show that in these games, each element in the trace map corresponds to a collection (a subset or distribution) of plays the controller can force. Furthermore, each element can be seen as the outcome of following a controller strategy. Our results are parametrised by a weak distributive law, which computes what the controller can force in a single step.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Strong Dinatural Transformations and Generalised Codensity Monads</title>
      <description><![CDATA[We introduce dicodensity monads: a generalisation of pointwise codensity monads generated by functors to monads generated by mixed-variant bifunctors. Our construction is based on the notion of strong dinaturality (also known as Barr dinaturality), and is inspired by denotational models of certain types in polymorphic lambda calculi - in particular, a form of continuation monads with universally quantified variables, such as the Church encoding of the list monad in System F. Extending some previous results on Cayley-style representations, we provide a set of sufficient conditions to establish an isomorphism between a monad and the dicodensity monad for a given bifunctor. Then, we focus on the class of monads obtained by instantiating our construction with hom-functors and, more generally, bifunctors given by objects of homomorphisms (that is, internalised hom-sets between Eilenberg--Moore algebras). This gives us, for example, novel presentations of monads generated by different kinds of semirings and other theories used to model ordered nondeterministic computations.]]></description>
      <pubDate>Sat, 20 Dec 2025 13:01:32 +0000</pubDate>
      <link>https://doi.org/10.46298/entics.16689</link>
      <guid>https://doi.org/10.46298/entics.16689</guid>
      <author>Piróg, Maciej</author>
      <author>Sieczkowski, Filip</author>
      <dc:creator>Piróg, Maciej</dc:creator>
      <dc:creator>Sieczkowski, Filip</dc:creator>
      <content:encoded><![CDATA[We introduce dicodensity monads: a generalisation of pointwise codensity monads generated by functors to monads generated by mixed-variant bifunctors. Our construction is based on the notion of strong dinaturality (also known as Barr dinaturality), and is inspired by denotational models of certain types in polymorphic lambda calculi - in particular, a form of continuation monads with universally quantified variables, such as the Church encoding of the list monad in System F. Extending some previous results on Cayley-style representations, we provide a set of sufficient conditions to establish an isomorphism between a monad and the dicodensity monad for a given bifunctor. Then, we focus on the class of monads obtained by instantiating our construction with hom-functors and, more generally, bifunctors given by objects of homomorphisms (that is, internalised hom-sets between Eilenberg--Moore algebras). This gives us, for example, novel presentations of monads generated by different kinds of semirings and other theories used to model ordered nondeterministic computations.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Functoriality of Enriched Data Types</title>
      <description><![CDATA[In previous work, categories of algebras of endofunctors were shown to be enriched in categories of coalgebras of the same endofunctor, and the extra structure of that enrichment was used to define a generalization of inductive data types. These generalized inductive data types are parametrized by a coalgebra $C$, so we call them $C$-inductive data types; we call the morphisms induced by their universal property $C$-inductive functions. We extend that work by incorporating natural transformations into the theory: given a suitable natural transformation between endofunctors, we show that this induces enriched functors between their categories of algebras which preserve $C$-inductive data types and $C$-inductive functions. Such $C$-inductive data types are often finite versions of the corresponding inductive data type, and we show how our framework can extend classical initial algebra semantics to these types. For instance, we show that our theory naturally produces partially inductive functions on lists, changes in list element types, and tree pruning functions.]]></description>
      <pubDate>Sat, 20 Dec 2025 13:00:38 +0000</pubDate>
      <link>https://doi.org/10.46298/entics.16583</link>
      <guid>https://doi.org/10.46298/entics.16583</guid>
      <author>Mulder, Lukas</author>
      <author>North, Paige Randall</author>
      <author>Péroux, Maximilien</author>
      <dc:creator>Mulder, Lukas</dc:creator>
      <dc:creator>North, Paige Randall</dc:creator>
      <dc:creator>Péroux, Maximilien</dc:creator>
      <content:encoded><![CDATA[In previous work, categories of algebras of endofunctors were shown to be enriched in categories of coalgebras of the same endofunctor, and the extra structure of that enrichment was used to define a generalization of inductive data types. These generalized inductive data types are parametrized by a coalgebra $C$, so we call them $C$-inductive data types; we call the morphisms induced by their universal property $C$-inductive functions. We extend that work by incorporating natural transformations into the theory: given a suitable natural transformation between endofunctors, we show that this induces enriched functors between their categories of algebras which preserve $C$-inductive data types and $C$-inductive functions. Such $C$-inductive data types are often finite versions of the corresponding inductive data type, and we show how our framework can extend classical initial algebra semantics to these types. For instance, we show that our theory naturally produces partially inductive functions on lists, changes in list element types, and tree pruning functions.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Compact Quantitative Theories of Convex Algebras</title>
      <description><![CDATA[We introduce the concept of compact quantitative equational theory. A quantitative equational theory is defined to be compact if all its consequences are derivable by means of finite proofs. We prove that the theory of interpolative barycentric (also known as convex) quantitative algebras of Mardare et al. is compact. This serves as a paradigmatic example, used to obtain other compact quantitative equational theories of convex algebras, each axiomatizing some distance on finitely supported probability distributions.]]></description>
      <pubDate>Sat, 20 Dec 2025 12:59:39 +0000</pubDate>
      <link>https://doi.org/10.46298/entics.16876</link>
      <guid>https://doi.org/10.46298/entics.16876</guid>
      <author>Mio, Matteo</author>
      <dc:creator>Mio, Matteo</dc:creator>
      <content:encoded><![CDATA[We introduce the concept of compact quantitative equational theory. A quantitative equational theory is defined to be compact if all its consequences are derivable by means of finite proofs. We prove that the theory of interpolative barycentric (also known as convex) quantitative algebras of Mardare et al. is compact. This serves as a paradigmatic example, used to obtain other compact quantitative equational theories of convex algebras, each axiomatizing some distance on finitely supported probability distributions.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Order in Partial Markov Categories</title>
      <description><![CDATA[Partial Markov categories are a recent framework for categorical probability theory that provide an abstract account of partial probabilistic computation with updating semantics. In this article, we discuss two order relations on the morphisms of a partial Markov category. In particular, we prove that every partial Markov category is canonically preorder-enriched, recovering several well-known order enrichments. We also demonstrate that the existence of codiagonal maps (comparators) is closely related to order properties of partial Markov categories. Finally, we introduce a synthetic version of the Cauchy--Schwarz inequality and, from it, we prove that updating increases validity.]]></description>
      <pubDate>Sat, 20 Dec 2025 12:58:20 +0000</pubDate>
      <link>https://doi.org/10.46298/entics.16686</link>
      <guid>https://doi.org/10.46298/entics.16686</guid>
      <author>Di Lavore, Elena</author>
      <author>Román, Mario</author>
      <author>Sobociński, Paweł</author>
      <author>Széles, Márk</author>
      <dc:creator>Di Lavore, Elena</dc:creator>
      <dc:creator>Román, Mario</dc:creator>
      <dc:creator>Sobociński, Paweł</dc:creator>
      <dc:creator>Széles, Márk</dc:creator>
      <content:encoded><![CDATA[Partial Markov categories are a recent framework for categorical probability theory that provide an abstract account of partial probabilistic computation with updating semantics. In this article, we discuss two order relations on the morphisms of a partial Markov category. In particular, we prove that every partial Markov category is canonically preorder-enriched, recovering several well-known order enrichments. We also demonstrate that the existence of codiagonal maps (comparators) is closely related to order properties of partial Markov categories. Finally, we introduce a synthetic version of the Cauchy--Schwarz inequality and, from it, we prove that updating increases validity.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Continuation Semantics for Fixpoint Modal Logic and Computation Tree Logics</title>
      <description><![CDATA[We introduce continuation semantics for both fixpoint modal logic (FML) and Computation Tree Logic* (CTL*), parameterised by a choice of branching type and quantitative predicate lifting. Our main contribution is proving that they are equivalent to coalgebraic semantics, for all branching types. Our continuation semantics is defined over coalgebras of the continuation monad whose answer type coincides with the domain of truth values of the formulas. By identifying predicates and continuations, such a coalgebra has a canonical interpretation of the modality by evaluation of continuations. We show that this continuation semantics is equivalent to the coalgebraic semantics for fixpoint modal logic. We then reformulate the current construction for coalgebraic models of CTL*. These models are usually required to have an infinitary trace/maximal execution map, characterized as the greatest fixpoint of a special operator. Instead, we allow coalgebraic models of CTL* to employ non-maximal fixpoints, which we call execution maps. Under this reformulation, we establish a general result on transferring execution maps via monad morphisms. From this result, we obtain that continuation semantics is equivalent to the coalgebraic semantics for CTL*. We also identify a sufficient condition under which CTL can be encoded into fixpoint modal logic under continuation semantics.]]></description>
      <pubDate>Sat, 20 Dec 2025 12:57:20 +0000</pubDate>
      <link>https://doi.org/10.46298/entics.16653</link>
      <guid>https://doi.org/10.46298/entics.16653</guid>
      <author>Kojima, Ryota</author>
      <author>Cirstea, Corina</author>
      <dc:creator>Kojima, Ryota</dc:creator>
      <dc:creator>Cirstea, Corina</dc:creator>
      <content:encoded><![CDATA[We introduce continuation semantics for both fixpoint modal logic (FML) and Computation Tree Logic* (CTL*), parameterised by a choice of branching type and quantitative predicate lifting. Our main contribution is proving that they are equivalent to coalgebraic semantics, for all branching types. Our continuation semantics is defined over coalgebras of the continuation monad whose answer type coincides with the domain of truth values of the formulas. By identifying predicates and continuations, such a coalgebra has a canonical interpretation of the modality by evaluation of continuations. We show that this continuation semantics is equivalent to the coalgebraic semantics for fixpoint modal logic. We then reformulate the current construction for coalgebraic models of CTL*. These models are usually required to have an infinitary trace/maximal execution map, characterized as the greatest fixpoint of a special operator. Instead, we allow coalgebraic models of CTL* to employ non-maximal fixpoints, which we call execution maps. Under this reformulation, we establish a general result on transferring execution maps via monad morphisms. From this result, we obtain that continuation semantics is equivalent to the coalgebraic semantics for CTL*. We also identify a sufficient condition under which CTL can be encoded into fixpoint modal logic under continuation semantics.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Safety, Relative Tightness and the Probabilistic Frame Rule</title>
      <description><![CDATA[Probabilistic separation logic offers an approach to reasoning about imperative probabilistic programs in which a separating conjunction is used as a mechanism for expressing independence properties. Crucial to the effectiveness of the formalism is the frame rule, which enables modular reasoning about independent probabilistic state. We explore a semantic formulation of probabilistic separation logic, in which the frame rule has the same simple formulation as in separation logic, without further side conditions. This is achieved by building a notion of safety into specifications, using which we establish a crucial property of specifications, called relative tightness, from which the soundness of the frame rule follows.]]></description>
      <pubDate>Sat, 20 Dec 2025 12:55:10 +0000</pubDate>
      <link>https://doi.org/10.46298/entics.16743</link>
      <guid>https://doi.org/10.46298/entics.16743</guid>
      <author>Jereb, Janez Ignacij</author>
      <author>Simpson, Alex</author>
      <dc:creator>Jereb, Janez Ignacij</dc:creator>
      <dc:creator>Simpson, Alex</dc:creator>
      <content:encoded><![CDATA[Probabilistic separation logic offers an approach to reasoning about imperative probabilistic programs in which a separating conjunction is used as a mechanism for expressing independence properties. Crucial to the effectiveness of the formalism is the frame rule, which enables modular reasoning about independent probabilistic state. We explore a semantic formulation of probabilistic separation logic, in which the frame rule has the same simple formulation as in separation logic, without further side conditions. This is achieved by building a notion of safety into specifications, using which we establish a crucial property of specifications, called relative tightness, from which the soundness of the frame rule follows.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Compositional Inference for Bayesian Networks and Causality</title>
      <description><![CDATA[Inference is a fundamental reasoning technique in probability theory. When applied to a large joint distribution, it involves updating with evidence (conditioning) in one or more components (variables) and computing the outcome in other components. When the joint distribution is represented by a Bayesian network, the network structure may be exploited to proceed in a compositional manner -- with great benefits. However, the main challenge is that updating involves (re)normalisation, making it an operation that interacts badly with other operations. String diagrams are becoming popular as a graphical technique for probabilistic (and quantum) reasoning. Conditioning has appeared in string diagrams, in terms of a disintegration, using bent wires and shaded (or dashed) normalisation boxes. It has become clear that such normalisation boxes do satisfy certain compositional rules. This paper takes a decisive step in this development by adding a removal rule to the formalism, for the deletion of shaded boxes. Via this removal rule one can get rid of shaded boxes and terminate an inference argument. This paper illustrates via many (graphical) examples how the resulting compositional inference technique can be used for Bayesian networks, causal reasoning and counterfactuals.]]></description>
      <pubDate>Sat, 20 Dec 2025 12:54:08 +0000</pubDate>
      <link>https://doi.org/10.46298/entics.17029</link>
      <guid>https://doi.org/10.46298/entics.17029</guid>
      <author>Jacobs, Bart</author>
      <author>Széles, Márk</author>
      <author>Stein, Dario</author>
      <dc:creator>Jacobs, Bart</dc:creator>
      <dc:creator>Széles, Márk</dc:creator>
      <dc:creator>Stein, Dario</dc:creator>
      <content:encoded><![CDATA[Inference is a fundamental reasoning technique in probability theory. When applied to a large joint distribution, it involves updating with evidence (conditioning) in one or more components (variables) and computing the outcome in other components. When the joint distribution is represented by a Bayesian network, the network structure may be exploited to proceed in a compositional manner -- with great benefits. However, the main challenge is that updating involves (re)normalisation, making it an operation that interacts badly with other operations. String diagrams are becoming popular as a graphical technique for probabilistic (and quantum) reasoning. Conditioning has appeared in string diagrams, in terms of a disintegration, using bent wires and shaded (or dashed) normalisation boxes. It has become clear that such normalisation boxes do satisfy certain compositional rules. This paper takes a decisive step in this development by adding a removal rule to the formalism, for the deletion of shaded boxes. Via this removal rule one can get rid of shaded boxes and terminate an inference argument. This paper illustrates via many (graphical) examples how the resulting compositional inference technique can be used for Bayesian networks, causal reasoning and counterfactuals.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>A Fresh Look at Bivariate Binomial Distributions</title>
      <description><![CDATA[Binomial distributions capture the probabilities of `heads' outcomes when a (biased) coin is tossed multiple times. The coin may be identified with a distribution on the two-element set {0,1}, where the 1 outcome corresponds to `head'. One can also toss two separate coins, with different biases, in parallel and record the outcomes. This paper investigates a slightly different `bivariate' binomial distribution, where the two coins are dependent (also called: entangled, or entwined): the two-coin is a distribution on the product {0,1} x {0,1}. This bivariate binomial exists in the literature, with complicated formulations. Here we use the language of category theory to give a new succinct formulation. This paper investigates, also in categorically inspired form, basic properties of these bivariate distributions, including their mean, variance and covariance, and their behaviour under convolution and under updating, in Laplace's rule of succession. Furthermore, it is shown how Expectation Maximisation works for these bivariate binomials, so that mixtures of bivariate binomials can be recognised in data. This paper concentrates on the bivariate case, but the binomial distributions may be generalised to the multivariate case, with multiple dimensions, in a straightforward manner.]]></description>
      <pubDate>Sat, 20 Dec 2025 12:53:03 +0000</pubDate>
      <link>https://doi.org/10.46298/entics.17042</link>
      <guid>https://doi.org/10.46298/entics.17042</guid>
      <author>Jacobs, Bart</author>
      <dc:creator>Jacobs, Bart</dc:creator>
      <content:encoded><![CDATA[Binomial distributions capture the probabilities of `heads' outcomes when a (biased) coin is tossed multiple times. The coin may be identified with a distribution on the two-element set {0,1}, where the 1 outcome corresponds to `head'. One can also toss two separate coins, with different biases, in parallel and record the outcomes. This paper investigates a slightly different `bivariate' binomial distribution, where the two coins are dependent (also called: entangled, or entwined): the two-coin is a distribution on the product {0,1} x {0,1}. This bivariate binomial exists in the literature, with complicated formulations. Here we use the language of category theory to give a new succinct formulation. This paper investigates, also in categorically inspired form, basic properties of these bivariate distributions, including their mean, variance and covariance, and their behaviour under convolution and under updating, in Laplace's rule of succession. Furthermore, it is shown how Expectation Maximisation works for these bivariate binomials, so that mixtures of bivariate binomials can be recognised in data. This paper concentrates on the bivariate case, but the binomial distributions may be generalised to the multivariate case, with multiple dimensions, in a straightforward manner.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Modular abstract syntax trees (MAST): substitution tensors with second-class sorts</title>
      <description><![CDATA[We adapt Fiore, Plotkin, and Turi's treatment of abstract syntax with binding, substitution, and holes to account for languages with second-class sorts. These situations include programming calculi such as the Call-by-Value lambda-calculus (CBV) and Levy's Call-by-Push-Value (CBPV). Prohibiting second-class sorts from appearing in variable contexts changes the characterisation of the abstract syntax from monoids in monoidal categories to actions in actegories. We reproduce much of the development through bicategorical arguments. We apply the resulting theory by proving substitution lemmata for varieties of CBV.]]></description>
      <pubDate>Sat, 20 Dec 2025 12:51:09 +0000</pubDate>
      <link>https://doi.org/10.46298/entics.16879</link>
      <guid>https://doi.org/10.46298/entics.16879</guid>
      <author>Fiore, Marcelo P.</author>
      <author>Kammar, Ohad</author>
      <author>Moser, Georg</author>
      <author>Staton, Sam</author>
      <dc:creator>Fiore, Marcelo P.</dc:creator>
      <dc:creator>Kammar, Ohad</dc:creator>
      <dc:creator>Moser, Georg</dc:creator>
      <dc:creator>Staton, Sam</dc:creator>
      <content:encoded><![CDATA[We adapt Fiore, Plotkin, and Turi's treatment of abstract syntax with binding, substitution, and holes to account for languages with second-class sorts. These situations include programming calculi such as the Call-by-Value lambda-calculus (CBV) and Levy's Call-by-Push-Value (CBPV). Prohibiting second-class sorts from appearing in variable contexts changes the characterisation of the abstract syntax from monoids in monoidal categories to actions in actegories. We reproduce much of the development through bicategorical arguments. We apply the resulting theory by proving substitution lemmata for varieties of CBV.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Initial Algebras of Domains via Quotient Inductive-Inductive Types</title>
      <description><![CDATA[Domain theory has been developed as a mathematical theory of computation and to give a denotational semantics to programming languages. It helps us to fix the meaning of language concepts, to understand how programs behave and to reason about programs. At the same time it serves as a great theory to model various algebraic effects such as non-determinism, partial functions, side effects and numerous other forms of computation. In the present paper, we present a general framework to construct algebraic effects in domain theory, where our domains are DCPOs: directed complete partial orders. We first describe so called DCPO algebras for a signature, where the signature specifies the operations on the DCPO and the inequational theory they obey. This provides a method to represent various algebraic effects, like partiality. We then show that initial DCPO algebras exist by defining them as so called Quotient Inductive-Inductive Types (QIITs), known from homotopy type theory. A quotient inductive-inductive type allows one to simultaneously define an inductive type and an inductive relation on that type, together with equations on the type. We illustrate our approach by showing that several well-known constructions of DCPOs fit our framework: coalesced sums, smash products and free DCPOs (partiality and power domains). Our work makes use of various features of homotopy type theory and is formalized in Cubical Agda.]]></description>
      <pubDate>Sat, 20 Dec 2025 12:50:04 +0000</pubDate>
      <link>https://doi.org/10.46298/entics.16512</link>
      <guid>https://doi.org/10.46298/entics.16512</guid>
      <author>van Collem, Simcha</author>
      <author>van der Weide, Niels</author>
      <author>Geuvers, Herman</author>
      <dc:creator>van Collem, Simcha</dc:creator>
      <dc:creator>van der Weide, Niels</dc:creator>
      <dc:creator>Geuvers, Herman</dc:creator>
      <content:encoded><![CDATA[Domain theory has been developed as a mathematical theory of computation and to give a denotational semantics to programming languages. It helps us to fix the meaning of language concepts, to understand how programs behave and to reason about programs. At the same time it serves as a great theory to model various algebraic effects such as non-determinism, partial functions, side effects and numerous other forms of computation. In the present paper, we present a general framework to construct algebraic effects in domain theory, where our domains are DCPOs: directed complete partial orders. We first describe so called DCPO algebras for a signature, where the signature specifies the operations on the DCPO and the inequational theory they obey. This provides a method to represent various algebraic effects, like partiality. We then show that initial DCPO algebras exist by defining them as so called Quotient Inductive-Inductive Types (QIITs), known from homotopy type theory. A quotient inductive-inductive type allows one to simultaneously define an inductive type and an inductive relation on that type, together with equations on the type. We illustrate our approach by showing that several well-known constructions of DCPOs fit our framework: coalesced sums, smash products and free DCPOs (partiality and power domains). Our work makes use of various features of homotopy type theory and is formalized in Cubical Agda.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Unambiguous Acceptance of Thin Coalgebras</title>
      <description><![CDATA[Automata admitting at most one accepting run per structure, known as unambiguous automata, find applications in verification of reactive systems as they extend the class of deterministic automata whilst maintaining some of their desirable properties. In this paper, we generalise a classical construction of unambiguous automata from thin trees to thin coalgebras for analytic functors. This achieves two goals: extending the existing construction to a larger class of structures, and providing conceptual clarity and parametricity to the construction by formalising it in the coalgebraic framework. As part of the construction, we link automaton acceptance of languages of thin coalgebras to language recognition via so-called coherent algebras, which were previously introduced for studying thin coalgebras. This link also allows us to establish an automata-theoretic characterisation of languages recognised by finite coherent algebras.]]></description>
      <pubDate>Sat, 20 Dec 2025 12:49:02 +0000</pubDate>
      <link>https://doi.org/10.46298/entics.16832</link>
      <guid>https://doi.org/10.46298/entics.16832</guid>
      <author>Chernev, Anton</author>
      <author>Cîrstea, Corina</author>
      <author>Hansen, Helle Hvid</author>
      <author>Kupke, Clemens</author>
      <dc:creator>Chernev, Anton</dc:creator>
      <dc:creator>Cîrstea, Corina</dc:creator>
      <dc:creator>Hansen, Helle Hvid</dc:creator>
      <dc:creator>Kupke, Clemens</dc:creator>
      <content:encoded><![CDATA[Automata admitting at most one accepting run per structure, known as unambiguous automata, find applications in verification of reactive systems as they extend the class of deterministic automata whilst maintaining some of their desirable properties. In this paper, we generalise a classical construction of unambiguous automata from thin trees to thin coalgebras for analytic functors. This achieves two goals: extending the existing construction to a larger class of structures, and providing conceptual clarity and parametricity to the construction by formalising it in the coalgebraic framework. As part of the construction, we link automaton acceptance of languages of thin coalgebras to language recognition via so-called coherent algebras, which were previously introduced for studying thin coalgebras. This link also allows us to establish an automata-theoretic characterisation of languages recognised by finite coherent algebras.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Cyclic Proofs in Hoare Logic and its Reverse</title>
      <description><![CDATA[We examine the relationships between axiomatic and cyclic proof systems for the partial and total versions of Hoare logic and those of its dual, known as reverse Hoare logic (or sometimes incorrectness logic). In the axiomatic proof systems for these logics, the proof rules for looping constructs involve an explicit loop invariant, which in the case of the total versions additionally require a well-founded termination measure. In the cyclic systems, these are replaced by rules that simply unroll the loops, together with a principle allowing the formation of cycles in the proof, subject to a global soundness condition that ensures the well-foundedness of the circular reasoning. Interestingly, the cyclic soundness conditions for partial Hoare logic and its reverse are similar and essentially coinductive in character, while those for the total versions are also similar and essentially inductive. We show that these cyclic systems are sound, by direct argument, and relatively complete, by translation from axiomatic to cyclic proofs.]]></description>
      <pubDate>Sat, 20 Dec 2025 12:47:58 +0000</pubDate>
      <link>https://doi.org/10.46298/entics.16696</link>
      <guid>https://doi.org/10.46298/entics.16696</guid>
      <author>Brotherston, James</author>
      <author>Le, Quang Loc</author>
      <author>Desai, Gauri</author>
      <author>Oda, Yukihiro</author>
      <dc:creator>Brotherston, James</dc:creator>
      <dc:creator>Le, Quang Loc</dc:creator>
      <dc:creator>Desai, Gauri</dc:creator>
      <dc:creator>Oda, Yukihiro</dc:creator>
      <content:encoded><![CDATA[We examine the relationships between axiomatic and cyclic proof systems for the partial and total versions of Hoare logic and those of its dual, known as reverse Hoare logic (or sometimes incorrectness logic). In the axiomatic proof systems for these logics, the proof rules for looping constructs involve an explicit loop invariant, which in the case of the total versions additionally require a well-founded termination measure. In the cyclic systems, these are replaced by rules that simply unroll the loops, together with a principle allowing the formation of cycles in the proof, subject to a global soundness condition that ensures the well-foundedness of the circular reasoning. Interestingly, the cyclic soundness conditions for partial Hoare logic and its reverse are similar and essentially coinductive in character, while those for the total versions are also similar and essentially inductive. We show that these cyclic systems are sound, by direct argument, and relatively complete, by translation from axiomatic to cyclic proofs.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>A Proof-Theoretic Approach to the Semantics of Classical Linear Logic</title>
      <description><![CDATA[Linear logic (LL) is a resource-aware, abstract logic programming language that refines both classical and intuitionistic logic. Linear logic semantics is typically presented in one of two ways: by associating each formula with the set of all contexts that can be used to prove it (e.g. phase semantics) or by assigning meaning directly to proofs (e.g. coherence spaces). This work proposes a different perspective on assigning meaning to proofs by adopting a proof-theoretic perspective. More specifically, we employ base-extension semantics (BeS) to characterise proofs through the notion of base support. Recent developments have shown that BeS is powerful enough to capture proof-theoretic notions in structurally rich logics such as intuitionistic linear logic. In this paper, we extend this framework to the classical case, presenting a proof-theoretic approach to the semantics of the multiplicative-additive fragment of linear logic (MALL).]]></description>
      <pubDate>Sat, 20 Dec 2025 12:46:29 +0000</pubDate>
      <link>https://doi.org/10.46298/entics.16888</link>
      <guid>https://doi.org/10.46298/entics.16888</guid>
      <author>Barroso-Nascimento, Victor</author>
      <author>Piotrovskaya, Ekaterina</author>
      <author>Pimentel, Elaine</author>
      <dc:creator>Barroso-Nascimento, Victor</dc:creator>
      <dc:creator>Piotrovskaya, Ekaterina</dc:creator>
      <dc:creator>Pimentel, Elaine</dc:creator>
      <content:encoded><![CDATA[Linear logic (LL) is a resource-aware, abstract logic programming language that refines both classical and intuitionistic logic. Linear logic semantics is typically presented in one of two ways: by associating each formula with the set of all contexts that can be used to prove it (e.g. phase semantics) or by assigning meaning directly to proofs (e.g. coherence spaces). This work proposes a different perspective on assigning meaning to proofs by adopting a proof-theoretic perspective. More specifically, we employ base-extension semantics (BeS) to characterise proofs through the notion of base support. Recent developments have shown that BeS is powerful enough to capture proof-theoretic notions in structurally rich logics such as intuitionistic linear logic. In this paper, we extend this framework to the classical case, presenting a proof-theoretic approach to the semantics of the multiplicative-additive fragment of linear logic (MALL).]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Strong normalization through idempotent intersection types: a new syntactical approach</title>
      <description><![CDATA[It is well-known that intersection type assignment systems can be used to characterize strong normalization (SN). Typical proofs that typable lambda-terms are SN in these systems rely on semantical techniques. In this work, we study $\Lambda_\cap^e$, a variant of Coppo and Dezani's (Curry-style) intersection type system, and we propose a syntactical proof of strong normalization for it. We first design $\Lambda_\cap^i$, a Church-style version, in which terms closely correspond to typing derivations. Then we prove that typability in $\Lambda_\cap^i$ implies SN through a measure that, given a term, produces a natural number that decreases along with reduction. Finally, the result is extended to $\Lambda_\cap^e$, since the two systems simulate each other.]]></description>
      <pubDate>Sat, 20 Dec 2025 12:45:13 +0000</pubDate>
      <link>https://doi.org/10.46298/entics.16693</link>
      <guid>https://doi.org/10.46298/entics.16693</guid>
      <author>Barenbaum, Pablo</author>
      <author>Della Rocca, Simona Ronchi</author>
      <author>Sottile, Cristian</author>
      <dc:creator>Barenbaum, Pablo</dc:creator>
      <dc:creator>Della Rocca, Simona Ronchi</dc:creator>
      <dc:creator>Sottile, Cristian</dc:creator>
      <content:encoded><![CDATA[It is well-known that intersection type assignment systems can be used to characterize strong normalization (SN). Typical proofs that typable lambda-terms are SN in these systems rely on semantical techniques. In this work, we study $\Lambda_\cap^e$, a variant of Coppo and Dezani's (Curry-style) intersection type system, and we propose a syntactical proof of strong normalization for it. We first design $\Lambda_\cap^i$, a Church-style version, in which terms closely correspond to typing derivations. Then we prove that typability in $\Lambda_\cap^i$ implies SN through a measure that, given a term, produces a natural number that decreases along with reduction. Finally, the result is extended to $\Lambda_\cap^e$, since the two systems simulate each other.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Reversible computations are computations</title>
      <description><![CDATA[Causality serves as an abstract notion of time for concurrent systems. A computation is causal, or simply valid, if each observation of a computation event is preceded by the observation of its causes. The present work establishes that this simple requirement is equally relevant when the occurrence of an event is invertible. We propose a conservative extension of causal models for concurrency that accommodates reversible computations. We first model reversible computations using a symmetric residuation operation in the general model of configuration structures. We show that stable configuration structures, which correspond to prime algebraic domains, remain stable under the action of this residuation. We then derive a semantics of reversible computations for prime event structures, which is shown to coincide with a switch operation that dualizes conflict and causality.]]></description>
      <pubDate>Sat, 20 Dec 2025 12:43:59 +0000</pubDate>
      <link>https://doi.org/10.46298/entics.16677</link>
      <guid>https://doi.org/10.46298/entics.16677</guid>
      <author>Aubert, Clément</author>
      <author>Krivine, Jean</author>
      <dc:creator>Aubert, Clément</dc:creator>
      <dc:creator>Krivine, Jean</dc:creator>
      <content:encoded><![CDATA[Causality serves as an abstract notion of time for concurrent systems. A computation is causal, or simply valid, if each observation of a computation event is preceded by the observation of its causes. The present work establishes that this simple requirement is equally relevant when the occurrence of an event is invertible. We propose a conservative extension of causal models for concurrency that accommodates reversible computations. We first model reversible computations using a symmetric residuation operation in the general model of configuration structures. We show that stable configuration structures, which correspond to prime algebraic domains, remain stable under the action of this residuation. We then derive a semantics of reversible computations for prime event structures, which is shown to coincide with a switch operation that dualizes conflict and causality.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Polynomial Universes in Homotopy Type Theory</title>
      <description><![CDATA[Awodey, later with Newstead, showed how polynomial functors with extra structure (termed ``natural models'') hold within them the categorical semantics for dependent type theory. Their work presented these ideas clearly but ultimately led them outside of the usual category of polynomial functors to a particular \emph{tricategory} of polynomials in order to explain all of the structure possessed by such models. This paper builds off that work -- explicating the categorical semantics of dependent type theory by axiomatizing them entirely in terms of the usual category of polynomial functors. In order to handle the higher-categorical coherences required for such an explanation, we work with polynomial functors in the language of Homotopy Type Theory (HoTT), which allows for higher-dimensional structures to be expressed purely within this category. The move to HoTT moreover enables us to express a key additional condition on polynomial functors -- \emph{univalence} -- which is sufficient to guarantee that models of type theory expressed as univalent polynomials satisfy all higher coherences of their corresponding algebraic structures, purely in virtue of being closed under the usual constructors of dependent type theory. We call polynomial functors satisfying this condition \emph{polynomial universes}. As an example of the simplification to the theory of natural models this enables, we highlight the fact that a polynomial universe being closed under dependent product types implies the existence of a distributive law of monads, which witnesses the usual distributivity of dependent products over dependent sums.]]></description>
      <pubDate>Sat, 20 Dec 2025 12:39:46 +0000</pubDate>
      <link>https://doi.org/10.46298/entics.16885</link>
      <guid>https://doi.org/10.46298/entics.16885</guid>
      <author>Aberlé, C. B.</author>
      <author>Spivak, David I.</author>
      <dc:creator>Aberlé, C. B.</dc:creator>
      <dc:creator>Spivak, David I.</dc:creator>
      <content:encoded><![CDATA[Awodey, later with Newstead, showed how polynomial functors with extra structure (termed ``natural models'') hold within them the categorical semantics for dependent type theory. Their work presented these ideas clearly but ultimately led them outside of the usual category of polynomial functors to a particular \emph{tricategory} of polynomials in order to explain all of the structure possessed by such models. This paper builds off that work -- explicating the categorical semantics of dependent type theory by axiomatizing them entirely in terms of the usual category of polynomial functors. In order to handle the higher-categorical coherences required for such an explanation, we work with polynomial functors in the language of Homotopy Type Theory (HoTT), which allows for higher-dimensional structures to be expressed purely within this category. The move to HoTT moreover enables us to express a key additional condition on polynomial functors -- \emph{univalence} -- which is sufficient to guarantee that models of type theory expressed as univalent polynomials satisfy all higher coherences of their corresponding algebraic structures, purely in virtue of being closed under the usual constructors of dependent type theory. We call polynomial functors satisfying this condition \emph{polynomial universes}. As an example of the simplification to the theory of natural models this enables, we highlight the fact that a polynomial universe being closed under dependent product types implies the existence of a distributive law of monads, which witnesses the usual distributivity of dependent products over dependent sums.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Implicit automata in λ-calculi III: affine planar string-to-string functions</title>
      <description><![CDATA[We prove a characterization of first-order string-to-string transduction via $\lambda$-terms typed in non-commutative affine logic that compute with Church encoding, extending the analogous known characterization of star-free languages. We show that every first-order transduction can be computed by a $\lambda$-term using a known Krohn-Rhodes-style decomposition lemma. The converse direction is given by compiling $\lambda$-terms into two-way reversible planar transducers. The soundness of this translation involves showing that the transition functions of those transducers live in a monoidal closed category of diagrams in which we can interpret purely affine $\lambda$-terms. One challenge is that the unit of the tensor of the category in question is not a terminal object. As a result, our interpretation does not identify $\beta$-equivalent terms, but it does turn $\beta$-reductions into inequalities in a poset-enrichment of the category of diagrams.]]></description>
      <pubDate>Wed, 11 Dec 2024 17:06:34 +0000</pubDate>
      <link>https://doi.org/10.46298/entics.14804</link>
      <guid>https://doi.org/10.46298/entics.14804</guid>
      <author>Pradic, Cécilia</author>
      <author>Price, Ian</author>
      <dc:creator>Pradic, Cécilia</dc:creator>
      <dc:creator>Price, Ian</dc:creator>
      <content:encoded><![CDATA[We prove a characterization of first-order string-to-string transduction via $\lambda$-terms typed in non-commutative affine logic that compute with Church encoding, extending the analogous known characterization of star-free languages. We show that every first-order transduction can be computed by a $\lambda$-term using a known Krohn-Rhodes-style decomposition lemma. The converse direction is given by compiling $\lambda$-terms into two-way reversible planar transducers. The soundness of this translation involves showing that the transition functions of those transducers live in a monoidal closed category of diagrams in which we can interpret purely affine $\lambda$-terms. One challenge is that the unit of the tensor of the category in question is not a terminal object. As a result, our interpretation does not identify $\beta$-equivalent terms, but it does turn $\beta$-reductions into inequalities in a poset-enrichment of the category of diagrams.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Cost-sensitive computational adequacy of higher-order recursion in synthetic domain theory</title>
      <description><![CDATA[We study a cost-aware programming language for higher-order recursion dubbed $\textbf{PCF}_\mathsf{cost}$ in the setting of synthetic domain theory (SDT). Our main contribution relates the denotational cost semantics of $\textbf{PCF}_\mathsf{cost}$ to its computational cost semantics, a new kind of dynamic semantics for program execution that serves as a mathematically natural alternative to operational semantics in SDT. In particular we prove an internal, cost-sensitive version of Plotkin's computational adequacy theorem, giving a precise correspondence between the denotational and computational semantics for complete programs at base type. The constructions and proofs of this paper take place in the internal dependent type theory of an SDT topos extended by a phase distinction in the sense of Sterling and Harper. By controlling the interpretation of cost structure via the phase distinction in the denotational semantics, we show that $\textbf{PCF}_\mathsf{cost}$ programs also evince a noninterference property of cost and behavior. We verify the axioms of the type theory by means of a model construction based on relative sheaf models of SDT.]]></description>
      <pubDate>Wed, 11 Dec 2024 17:05:25 +0000</pubDate>
      <link>https://doi.org/10.46298/entics.14732</link>
      <guid>https://doi.org/10.46298/entics.14732</guid>
      <author>Niu, Yue</author>
      <author>Sterling, Jonathan</author>
      <author>Harper, Robert</author>
      <dc:creator>Niu, Yue</dc:creator>
      <dc:creator>Sterling, Jonathan</dc:creator>
      <dc:creator>Harper, Robert</dc:creator>
      <content:encoded><![CDATA[We study a cost-aware programming language for higher-order recursion dubbed $\textbf{PCF}_\mathsf{cost}$ in the setting of synthetic domain theory (SDT). Our main contribution relates the denotational cost semantics of $\textbf{PCF}_\mathsf{cost}$ to its computational cost semantics, a new kind of dynamic semantics for program execution that serves as a mathematically natural alternative to operational semantics in SDT. In particular we prove an internal, cost-sensitive version of Plotkin's computational adequacy theorem, giving a precise correspondence between the denotational and computational semantics for complete programs at base type. The constructions and proofs of this paper take place in the internal dependent type theory of an SDT topos extended by a phase distinction in the sense of Sterling and Harper. By controlling the interpretation of cost structure via the phase distinction in the denotational semantics, we show that $\textbf{PCF}_\mathsf{cost}$ programs also evince a noninterference property of cost and behavior. We verify the axioms of the type theory by means of a model construction based on relative sheaf models of SDT.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>GATlab: Modeling and Programming with Generalized Algebraic Theories</title>
      <description><![CDATA[Categories and categorical structures are increasingly recognized as useful abstractions for modeling in science and engineering. To uniformly implement category-theoretic mathematical models in software, we introduce GATlab, a domain-specific language for algebraic specification embedded in a technical programming language. GATlab is based on generalized algebraic theories (GATs), a logical system extending algebraic theories with dependent types so as to encompass category theory. Using GATlab, the programmer can specify generalized algebraic theories and their models, including both free models, based on symbolic expressions, and computational models, defined by arbitrary code in the host language. Moreover, the programmer can define maps between theories and use them to declaratively migrate models of one theory to models of another. In short, GATlab aims to provide a unified environment for both computer algebra and software interface design with generalized algebraic theories. In this paper, we describe the design, implementation, and applications of GATlab.]]></description>
      <pubDate>Wed, 11 Dec 2024 17:04:15 +0000</pubDate>
      <link>https://doi.org/10.46298/entics.14666</link>
      <guid>https://doi.org/10.46298/entics.14666</guid>
      <author>Lynch, Owen</author>
      <author>Brown, Kris</author>
      <author>Fairbanks, James</author>
      <author>Patterson, Evan</author>
      <dc:creator>Lynch, Owen</dc:creator>
      <dc:creator>Brown, Kris</dc:creator>
      <dc:creator>Fairbanks, James</dc:creator>
      <dc:creator>Patterson, Evan</dc:creator>
      <content:encoded><![CDATA[Categories and categorical structures are increasingly recognized as useful abstractions for modeling in science and engineering. To uniformly implement category-theoretic mathematical models in software, we introduce GATlab, a domain-specific language for algebraic specification embedded in a technical programming language. GATlab is based on generalized algebraic theories (GATs), a logical system extending algebraic theories with dependent types so as to encompass category theory. Using GATlab, the programmer can specify generalized algebraic theories and their models, including both free models, based on symbolic expressions, and computational models, defined by arbitrary code in the host language. Moreover, the programmer can define maps between theories and use them to declaratively migrate models of one theory to models of another. In short, GATlab aims to provide a unified environment for both computer algebra and software interface design with generalized algebraic theories. In this paper, we describe the design, implementation, and applications of GATlab.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>On Kleisli liftings and decorated trace semantics</title>
      <description><![CDATA[It is well known that Kleisli categories provide a natural language to model side effects. For instance, in the theory of coalgebras, behavioural equivalence coincides with language equivalence (instead of bisimilarity) when nondeterministic automata are modelled as coalgebras living in the Kleisli category of the powerset monad. In this paper, our aim is to establish decorated trace semantics based on language and ready equivalences for conditional transition systems (CTSs) with/without upgrades. To this end, we model CTSs as coalgebras living in the Kleisli category of a relative monad. Our results are twofold. First, we reduce the problem of defining a Kleisli lifting for the machine endofunctor in the context of a relative monad to the classical notion of Kleisli lifting. Second, we provide a recipe based on indexed categories to construct a Kleisli lifting for general endofunctors.]]></description>
      <pubDate>Wed, 11 Dec 2024 17:02:47 +0000</pubDate>
      <link>https://doi.org/10.46298/entics.14751</link>
      <guid>https://doi.org/10.46298/entics.14751</guid>
      <author>Luckhardt, Daniel</author>
      <author>Beohar, Harsh</author>
      <author>Küpper, Sebastian</author>
      <dc:creator>Luckhardt, Daniel</dc:creator>
      <dc:creator>Beohar, Harsh</dc:creator>
      <dc:creator>Küpper, Sebastian</dc:creator>
      <content:encoded><![CDATA[It is well known that Kleisli categories provide a natural language to model side effects. For instance, in the theory of coalgebras, behavioural equivalence coincides with language equivalence (instead of bisimilarity) when nondeterministic automata are modelled as coalgebras living in the Kleisli category of the powerset monad. In this paper, our aim is to establish decorated trace semantics based on language and ready equivalences for conditional transition systems (CTSs) with/without upgrades. To this end, we model CTSs as coalgebras living in the Kleisli category of a relative monad. Our results are twofold. First, we reduce the problem of defining a Kleisli lifting for the machine endofunctor in the context of a relative monad to the classical notion of Kleisli lifting. Second, we provide a recipe based on indexed categories to construct a Kleisli lifting for general endofunctors.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>An Ultrametric for Cartesian Differential Categories for Taylor Series Convergence</title>
      <description><![CDATA[Cartesian differential categories provide a categorical framework for multivariable differential calculus and also the categorical semantics of the differential $\lambda$-calculus. Taylor series expansion is an important concept for both differential calculus and the differential $\lambda$-calculus. In differential calculus, a function is equal to its Taylor series if its sequence of Taylor polynomials converges to the function in the analytic sense. On the other hand, for the differential $\lambda$-calculus, one works in a setting with an appropriate notion of algebraic infinite sums to formalize Taylor series expansion. In this paper, we provide a formal theory of Taylor series in an arbitrary Cartesian differential category without the need for converging limits or infinite sums. We begin by developing the notion of Taylor polynomials of maps in a Cartesian differential category and then show how comparing Taylor polynomials of maps induces an ultrapseudometric on the homsets. We say that a Cartesian differential category is Taylor if maps are entirely determined by their Taylor polynomials. The main results of this paper are that in a Taylor Cartesian differential category, the induced ultrapseudometrics are ultrametrics and that for every map $f$, its Taylor series converges to $f$ with respect to this ultrametric. This framework recaptures both Taylor series expansion in differential calculus via analytic methods and in categorical models of the differential $\lambda$-calculus (or Differential Linear Logic) via infinite sums.]]></description>
      <pubDate>Wed, 11 Dec 2024 17:01:40 +0000</pubDate>
      <link>https://doi.org/10.46298/entics.14670</link>
      <guid>https://doi.org/10.46298/entics.14670</guid>
      <author>Lemay, Jean-Simon Pacaud</author>
      <dc:creator>Lemay, Jean-Simon Pacaud</dc:creator>
      <content:encoded><![CDATA[Cartesian differential categories provide a categorical framework for multivariable differential calculus and also the categorical semantics of the differential $\lambda$-calculus. Taylor series expansion is an important concept for both differential calculus and the differential $\lambda$-calculus. In differential calculus, a function is equal to its Taylor series if its sequence of Taylor polynomials converges to the function in the analytic sense. On the other hand, for the differential $\lambda$-calculus, one works in a setting with an appropriate notion of algebraic infinite sums to formalize Taylor series expansion. In this paper, we provide a formal theory of Taylor series in an arbitrary Cartesian differential category without the need for converging limits or infinite sums. We begin by developing the notion of Taylor polynomials of maps in a Cartesian differential category and then show how comparing Taylor polynomials of maps induces an ultrapseudometric on the homsets. We say that a Cartesian differential category is Taylor if maps are entirely determined by their Taylor polynomials. The main results of this paper are that in a Taylor Cartesian differential category, the induced ultrapseudometrics are ultrametrics and that for every map $f$, its Taylor series converges to $f$ with respect to this ultrametric. This framework recaptures both Taylor series expansion in differential calculus via analytic methods and in categorical models of the differential $\lambda$-calculus (or Differential Linear Logic) via infinite sums.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Two-dimensional Kripke Semantics II: Stability and Completeness</title>
      <description><![CDATA[We revisit the duality between Kripke and algebraic semantics of intuitionistic and intuitionistic modal logic. We find that there is a certain mismatch between the two semantics, which means that not all algebraic models can be embedded into a Kripke model. This leads to an alternative proposal for a relational semantics, the stable semantics. Instead of an arbitrary partial order, the stable semantics requires a distributive lattice of worlds. We constructively show that the stable semantics is exactly as complete as the algebraic semantics. Categorifying these results leads to a 2-duality between two-dimensional stable semantics and categories of product-preserving presheaves, i.e. models of algebraic theories in the style of Lawvere.]]></description>
      <pubDate>Wed, 11 Dec 2024 17:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/entics.14767</link>
      <guid>https://doi.org/10.46298/entics.14767</guid>
      <author>Kavvos, G. A.</author>
      <dc:creator>Kavvos, G. A.</dc:creator>
      <content:encoded><![CDATA[We revisit the duality between Kripke and algebraic semantics of intuitionistic and intuitionistic modal logic. We find that there is a certain mismatch between the two semantics, which means that not all algebraic models can be embedded into a Kripke model. This leads to an alternative proposal for a relational semantics, the stable semantics. Instead of an arbitrary partial order, the stable semantics requires a distributive lattice of worlds. We constructively show that the stable semantics is exactly as complete as the algebraic semantics. Categorifying these results leads to a 2-duality between two-dimensional stable semantics and categories of product-preserving presheaves, i.e. models of algebraic theories in the style of Lawvere.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Algebraic Reasoning over Relational Structures</title>
      <description><![CDATA[Many important computational structures involve an intricate interplay between algebraic features (given by operations on the underlying set) and relational features (taking account of notions such as order or distance). This paper investigates algebras over relational structures axiomatized by an infinitary Horn theory, which subsume, for example, partial algebras, various incarnations of ordered algebras, quantitative algebras introduced by Mardare, Panangaden, and Plotkin, and their recent extension to generalized metric spaces and lifted algebraic signatures by Mio, Sarkis, and Vignudelli. To this end, we develop the notion of clustered equation, which is inspired by Mardare et al.'s basic conditional equations in the theory of quantitative algebras, at the level of generality of arbitrary relational structures, and we prove that it is equivalent to an abstract categorical form of equation earlier introduced by Milius and Urbat. Our main results are a family of Birkhoff-type variety theorems (classifying the expressive power of clustered equations) and an exactness theorem (classifying abstract equations by a congruence property).]]></description>
      <pubDate>Wed, 11 Dec 2024 16:54:37 +0000</pubDate>
      <link>https://doi.org/10.46298/entics.14598</link>
      <guid>https://doi.org/10.46298/entics.14598</guid>
      <author>Jurka, Jan</author>
      <author>Milius, Stefan</author>
      <author>Urbat, Henning</author>
      <dc:creator>Jurka, Jan</dc:creator>
      <dc:creator>Milius, Stefan</dc:creator>
      <dc:creator>Urbat, Henning</dc:creator>
      <content:encoded><![CDATA[Many important computational structures involve an intricate interplay between algebraic features (given by operations on the underlying set) and relational features (taking account of notions such as order or distance). This paper investigates algebras over relational structures axiomatized by an infinitary Horn theory, which subsume, for example, partial algebras, various incarnations of ordered algebras, quantitative algebras introduced by Mardare, Panangaden, and Plotkin, and their recent extension to generalized metric spaces and lifted algebraic signatures by Mio, Sarkis, and Vignudelli. To this end, we develop the notion of clustered equation, which is inspired by Mardare et al.'s basic conditional equations in the theory of quantitative algebras, at the level of generality of arbitrary relational structures, and we prove that it is equivalent to an abstract categorical form of equation earlier introduced by Milius and Urbat. Our main results are a family of Birkhoff-type variety theorems (classifying the expressive power of clustered equations) and an exactness theorem (classifying abstract equations by a congruence property).]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Typed Non-determinism in Concurrent Calculi: The Eager Way</title>
      <description><![CDATA[We consider the problem of designing typed concurrent calculi with non-deterministic choice in which types leverage linearity for controlling resources, thereby ensuring strong correctness properties for processes. This problem is constrained by the delicate tension between non-determinism and linearity. Prior work developed a session-typed {\pi}-calculus with standard non-deterministic choice; well-typed processes enjoy type preservation and deadlock-freedom. Central to this typed calculus is a lazy semantics that gradually discards branches in choices. This lazy semantics, however, is complex: various technical elements are needed to describe the non-deterministic behavior of typed processes. This paper develops an entirely new approach, based on an eager semantics, which more directly represents choices and commitment. We present a {\pi}-calculus in which non-deterministic choices are governed by this eager semantics and session types. We establish its key correctness properties, including deadlock-freedom, and demonstrate its expressivity by correctly translating a typed resource {\lambda}-calculus.]]></description>
      <pubDate>Wed, 11 Dec 2024 16:53:24 +0000</pubDate>
      <link>https://doi.org/10.46298/entics.14735</link>
      <guid>https://doi.org/10.46298/entics.14735</guid>
      <author>Heuvel, Bas van den</author>
      <author>Nantes-Sobrinho, Daniele</author>
      <author>Paulus, Joseph W. N.</author>
      <author>Pérez, Jorge A.</author>
      <dc:creator>Heuvel, Bas van den</dc:creator>
      <dc:creator>Nantes-Sobrinho, Daniele</dc:creator>
      <dc:creator>Paulus, Joseph W. N.</dc:creator>
      <dc:creator>Pérez, Jorge A.</dc:creator>
      <content:encoded><![CDATA[We consider the problem of designing typed concurrent calculi with non-deterministic choice in which types leverage linearity for controlling resources, thereby ensuring strong correctness properties for processes. This problem is constrained by the delicate tension between non-determinism and linearity. Prior work developed a session-typed {\pi}-calculus with standard non-deterministic choice; well-typed processes enjoy type preservation and deadlock-freedom. Central to this typed calculus is a lazy semantics that gradually discards branches in choices. This lazy semantics, however, is complex: various technical elements are needed to describe the non-deterministic behavior of typed processes. This paper develops an entirely new approach, based on an eager semantics, which more directly represents choices and commitment. We present a {\pi}-calculus in which non-deterministic choices are governed by this eager semantics and session types. We establish its key correctness properties, including deadlock-freedom, and demonstrate its expressivity by correctly translating a typed resource {\lambda}-calculus.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Polynomials in homotopy type theory as a Kleisli category</title>
      <description><![CDATA[Polynomials in a category have been studied as a generalization of the traditional notion in mathematics. Their construction has recently been extended to higher groupoids, as formalized in homotopy type theory, by Finster, Mimram, Lucas and Seiller, thus resulting in a cartesian closed bicategory. We refine and extend their work in multiple directions. We begin by generalizing the construction of the free symmetric monoid monad on types in order to handle arities in an arbitrary universe. Then, we extend this monad to the (wild) category of spans of types, and thus to a comonad by self-duality. Finally, we show that the resulting Kleisli category is equivalent to the traditional category of polynomials. This thus establishes polynomials as a (homotopical) model of linear logic. In fact, we explain that it is closely related to a bicategorical model of differential linear logic introduced by Melli\`es.]]></description>
      <pubDate>Wed, 11 Dec 2024 16:52:08 +0000</pubDate>
      <link>https://doi.org/10.46298/entics.14786</link>
      <guid>https://doi.org/10.46298/entics.14786</guid>
      <author>Harington, Elies</author>
      <author>Mimram, Samuel</author>
      <dc:creator>Harington, Elies</dc:creator>
      <dc:creator>Mimram, Samuel</dc:creator>
      <content:encoded><![CDATA[Polynomials in a category have been studied as a generalization of the traditional notion in mathematics. Their construction has recently been extended to higher groupoids, as formalized in homotopy type theory, by Finster, Mimram, Lucas and Seiller, thus resulting in a cartesian closed bicategory. We refine and extend their work in multiple directions. We begin by generalizing the construction of the free symmetric monoid monad on types in order to handle arities in an arbitrary universe. Then, we extend this monad to the (wild) category of spans of types, and thus to a comonad by self-duality. Finally, we show that the resulting Kleisli category is equivalent to the traditional category of polynomials. This thus establishes polynomials as a (homotopical) model of linear logic. In fact, we explain that it is closely related to a bicategorical model of differential linear logic introduced by Melli\`es.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Amortized Analysis via Coalgebra</title>
      <description><![CDATA[Amortized analysis is a cost analysis technique for data structures in which cost is studied in aggregate: rather than considering the maximum cost of a single operation, one bounds the total cost encountered throughout a session. Traditionally, amortized analysis has been phrased inductively, quantifying over finite sequences of operations. Connecting to prior work on coalgebraic semantics for data structures, we develop the alternative perspective that amortized analysis is naturally viewed coalgebraically in a category of cost algebras, where a morphism of coalgebras serves as a first-class generalization of potential function suitable for integrating cost and behavior. Using this simple definition, we consider amortization of other sample effects: non-commutative printing and randomization. To support imprecise amortized upper bounds, we adapt our discussion to the bicategorical setting, where a potential function is a colax morphism of coalgebras. We support algebraic and coalgebraic operations simultaneously by using coalgebras for an endoprofunctor instead of an endofunctor, combining potential using a monoidal structure on the underlying category. Finally, we compose amortization arguments in the indexed category of coalgebras to implement one amortized data structure in terms of others.]]></description>
      <pubDate>Wed, 11 Dec 2024 16:50:58 +0000</pubDate>
      <link>https://doi.org/10.46298/entics.14797</link>
      <guid>https://doi.org/10.46298/entics.14797</guid>
      <author>Grodin, Harrison</author>
      <author>Harper, Robert</author>
      <dc:creator>Grodin, Harrison</dc:creator>
      <dc:creator>Harper, Robert</dc:creator>
      <content:encoded><![CDATA[Amortized analysis is a cost analysis technique for data structures in which cost is studied in aggregate: rather than considering the maximum cost of a single operation, one bounds the total cost encountered throughout a session. Traditionally, amortized analysis has been phrased inductively, quantifying over finite sequences of operations. Connecting to prior work on coalgebraic semantics for data structures, we develop the alternative perspective that amortized analysis is naturally viewed coalgebraically in a category of cost algebras, where a morphism of coalgebras serves as a first-class generalization of potential function suitable for integrating cost and behavior. Using this simple definition, we consider amortization of other sample effects: non-commutative printing and randomization. To support imprecise amortized upper bounds, we adapt our discussion to the bicategorical setting, where a potential function is a colax morphism of coalgebras. We support algebraic and coalgebraic operations simultaneously by using coalgebras for an endoprofunctor instead of an endofunctor, combining potential using a monoidal structure on the underlying category. Finally, we compose amortization arguments in the indexed category of coalgebras to implement one amortized data structure in terms of others.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Inferentialist Resource Semantics</title>
      <description><![CDATA[In systems modelling, a 'system' typically comprises located resources relative to which processes execute. One important use of logic in informatics is in modelling such systems for the purpose of reasoning (perhaps automated) about their behaviour and properties. To this end, one requires an interpretation of logical formulae in terms of the resources and states of the system; such an interpretation is called a 'resource semantics' of the logic. This paper shows how inferentialism -- the view that meaning is given in terms of inferential behaviour -- enables a versatile and expressive framework for resource semantics. Specifically, how inferentialism seamlessly incorporates the assertion-based approach of the logic of Bunched Implications, foundational in program verification (e.g., as the basis of Separation Logic), and the renowned number-of-uses reading of Linear Logic. This integration enables reasoning about shared and separated resources in intuitive and familiar ways, as well as about the composition and interfacing of system components.]]></description>
      <pubDate>Wed, 11 Dec 2024 16:48:52 +0000</pubDate>
      <link>https://doi.org/10.46298/entics.14727</link>
      <guid>https://doi.org/10.46298/entics.14727</guid>
      <author>Gheorghiu, Alexander V.</author>
      <author>Gu, Tao</author>
      <author>Pym, David J.</author>
      <dc:creator>Gheorghiu, Alexander V.</dc:creator>
      <dc:creator>Gu, Tao</dc:creator>
      <dc:creator>Pym, David J.</dc:creator>
      <content:encoded><![CDATA[In systems modelling, a 'system' typically comprises located resources relative to which processes execute. One important use of logic in informatics is in modelling such systems for the purpose of reasoning (perhaps automated) about their behaviour and properties. To this end, one requires an interpretation of logical formulae in terms of the resources and states of the system; such an interpretation is called a 'resource semantics' of the logic. This paper shows how inferentialism -- the view that meaning is given in terms of inferential behaviour -- enables a versatile and expressive framework for resource semantics. Specifically, how inferentialism seamlessly incorporates the assertion-based approach of the logic of Bunched Implications, foundational in program verification (e.g., as the basis of Separation Logic), and the renowned number-of-uses reading of Linear Logic. This integration enables reasoning about shared and separated resources in intuitive and familiar ways, as well as about the composition and interfacing of system components.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Continuous Domains for Function Spaces Using Spectral Compactification</title>
      <description><![CDATA[We introduce a continuous domain for function spaces over topological spaces which are not core-compact. Notable examples of such topological spaces include the real line with the upper limit topology, which is used in solution of initial value problems with temporal discretization, and various infinite dimensional Banach spaces which are ubiquitous in functional analysis and solution of partial differential equations. If a topological space $\mathbb{X}$ is not core-compact and $\mathbb{D}$ is a non-singleton bounded-complete domain, the function space $[\mathbb{X} \to \mathbb{D}]$ is not a continuous domain. To construct a continuous domain, we consider a spectral compactification $\mathbb{Y}$ of $\mathbb{X}$ and relate $[\mathbb{X} \to \mathbb{D}]$ with the continuous domain $[\mathbb{Y} \to \mathbb{D}]$ via a Galois connection. This allows us to perform computations in the native structure $[\mathbb{X} \to \mathbb{D}]$ while computable analysis is performed in the continuous domain $[\mathbb{Y} \to \mathbb{D}]$, with the left and right adjoints used for moving between the two function spaces.]]></description>
      <pubDate>Wed, 11 Dec 2024 16:47:08 +0000</pubDate>
      <link>https://doi.org/10.46298/entics.14736</link>
      <guid>https://doi.org/10.46298/entics.14736</guid>
      <author>Farjudian, Amin</author>
      <author>Jung, Achim</author>
      <dc:creator>Farjudian, Amin</dc:creator>
      <dc:creator>Jung, Achim</dc:creator>
      <content:encoded><![CDATA[We introduce a continuous domain for function spaces over topological spaces which are not core-compact. Notable examples of such topological spaces include the real line with the upper limit topology, which is used in solution of initial value problems with temporal discretization, and various infinite dimensional Banach spaces which are ubiquitous in functional analysis and solution of partial differential equations. If a topological space $\mathbb{X}$ is not core-compact and $\mathbb{D}$ is a non-singleton bounded-complete domain, the function space $[\mathbb{X} \to \mathbb{D}]$ is not a continuous domain. To construct a continuous domain, we consider a spectral compactification $\mathbb{Y}$ of $\mathbb{X}$ and relate $[\mathbb{X} \to \mathbb{D}]$ with the continuous domain $[\mathbb{Y} \to \mathbb{D}]$ via a Galois connection. This allows us to perform computations in the native structure $[\mathbb{X} \to \mathbb{D}]$ while computable analysis is performed in the continuous domain $[\mathbb{Y} \to \mathbb{D}]$, with the left and right adjoints used for moving between the two function spaces.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>On a fibrational construction for optics, lenses, and Dialectica categories</title>
      <description><![CDATA[Categories of lenses/optics and Dialectica categories are both composed of bidirectional morphisms of essentially the same form. In this work we show how they can be considered a special case of an overarching fibrational construction, generalizing Hofstra's construction of Dialectica fibrations and Spivak's construction of generalized lenses. This construction turns a tower of Grothendieck fibrations into another tower of fibrations by iteratively twisting each of the components, using the opposite fibration construction.]]></description>
      <pubDate>Wed, 11 Dec 2024 16:45:46 +0000</pubDate>
      <link>https://doi.org/10.46298/entics.14638</link>
      <guid>https://doi.org/10.46298/entics.14638</guid>
      <author>Capucci, Matteo</author>
      <author>Gavranović, Bruno</author>
      <author>Malik, Abdullah</author>
      <author>Rios, Francisco</author>
      <author>Weinberger, Jonathan</author>
      <dc:creator>Capucci, Matteo</dc:creator>
      <dc:creator>Gavranović, Bruno</dc:creator>
      <dc:creator>Malik, Abdullah</dc:creator>
      <dc:creator>Rios, Francisco</dc:creator>
      <dc:creator>Weinberger, Jonathan</dc:creator>
      <content:encoded><![CDATA[Categories of lenses/optics and Dialectica categories are both composed of bidirectional morphisms of essentially the same form. In this work we show how they can be considered a special case of an overarching fibrational construction, generalizing Hofstra's construction of Dialectica fibrations and Spivak's construction of generalized lenses. This construction turns a tower of Grothendieck fibrations into another tower of fibrations by iteratively twisting each of the components, using the opposite fibration construction.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Strong Nominal Semantics for Fixed-Point Constraints</title>
      <description><![CDATA[Nominal algebra includes $\alpha$-equality and freshness constraints on nominal terms endowed with a nominal set semantics that facilitates reasoning about languages with binders. Nominal unification is decidable and unitary; however, its extension with equational axioms such as Commutativity (which has a finitary first-order unification type) is no longer finitary unless permutation fixed-point constraints are used. In this paper, we extend the notion of nominal algebra by introducing fixed-point constraints and provide a sound semantics using strong nominal sets. We show, by providing a counter-example, that the class of nominal sets is not a sound denotation for this extended nominal algebra. To recover soundness we propose two different formulations of nominal algebra, one obtained by restricting to a class of fixed-point contexts that are in direct correspondence with freshness contexts and another obtained by using a different set of derivation rules.]]></description>
      <pubDate>Wed, 11 Dec 2024 16:43:49 +0000</pubDate>
      <link>https://doi.org/10.46298/entics.14777</link>
      <guid>https://doi.org/10.46298/entics.14777</guid>
      <author>Caires-Santos, Ali K.</author>
      <author>Fernández, Maribel</author>
      <author>Nantes-Sobrinho, Daniele</author>
      <dc:creator>Caires-Santos, Ali K.</dc:creator>
      <dc:creator>Fernández, Maribel</dc:creator>
      <dc:creator>Nantes-Sobrinho, Daniele</dc:creator>
      <content:encoded><![CDATA[Nominal algebra includes $\alpha$-equality and freshness constraints on nominal terms endowed with a nominal set semantics that facilitates reasoning about languages with binders. Nominal unification is decidable and unitary; however, its extension with equational axioms such as Commutativity (which has a finitary first-order unification type) is no longer finitary unless permutation fixed-point constraints are used. In this paper, we extend the notion of nominal algebra by introducing fixed-point constraints and provide a sound semantics using strong nominal sets. We show, by providing a counter-example, that the class of nominal sets is not a sound denotation for this extended nominal algebra. To recover soundness we propose two different formulations of nominal algebra, one obtained by restricting to a class of fixed-point contexts that are in direct correspondence with freshness contexts and another obtained by using a different set of derivation rules.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>CaTT contexts are finite computads</title>
      <description><![CDATA[Two novel descriptions of weak {\omega}-categories have been recently proposed, using type-theoretic ideas. The first one is the dependent type theory CaTT whose models are {\omega}-categories. The second is a recursive description of a category of computads together with an adjunction to globular sets, such that the algebras for the induced monad are again {\omega}-categories. We compare the two descriptions by showing that there exists a fully faithful morphism of categories with families from the syntactic category of CaTT to the opposite of the category of computads, which gives an equivalence on the subcategory of finite computads. We derive a more direct connection between the category of models of CaTT and the category of algebras for the monad on globular sets, induced by the adjunction with computads.]]></description>
      <pubDate>Wed, 11 Dec 2024 16:42:23 +0000</pubDate>
      <link>https://doi.org/10.46298/entics.14675</link>
      <guid>https://doi.org/10.46298/entics.14675</guid>
      <author>Benjamin, Thibaut</author>
      <author>Markakis, Ioannis</author>
      <author>Sarti, Chiara</author>
      <dc:creator>Benjamin, Thibaut</dc:creator>
      <dc:creator>Markakis, Ioannis</dc:creator>
      <dc:creator>Sarti, Chiara</dc:creator>
      <content:encoded><![CDATA[Two novel descriptions of weak {\omega}-categories have been recently proposed, using type-theoretic ideas. The first one is the dependent type theory CaTT whose models are {\omega}-categories. The second is a recursive description of a category of computads together with an adjunction to globular sets, such that the algebras for the induced monad are again {\omega}-categories. We compare the two descriptions by showing that there exists a fully faithful morphism of categories with families from the syntactic category of CaTT to the opposite of the category of computads, which gives an equivalence on the subcategory of finite computads. We derive a more direct connection between the category of models of CaTT and the category of algebras for the monad on globular sets, induced by the adjunction with computads.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>A Semantic Proof of Generalised Cut Elimination for Deep Inference</title>
      <description><![CDATA[Multiplicative-Additive System Virtual (MAV) is a logic that extends Multiplicative-Additive Linear Logic with a self-dual non-commutative operator expressing the concept of "before" or "sequencing". MAV is also an extension of the logic Basic System Virtual (BV) with additives. Formulas in BV have an appealing reading as processes with parallel and sequential composition. MAV adds internal and external choice operators. BV and MAV are also closely related to Concurrent Kleene Algebras. Proof systems for MAV and BV are Deep Inference systems, which allow inference rules to be applied anywhere inside a structure. As with any proof system, a key question is whether proofs in MAV can be reduced to a normal form, removing detours and the introduction of structures not present in the original goal. In Sequent Calculi systems, this property is referred to as Cut Elimination. Deep Inference systems have an analogous Cut rule and other rules that are not present in normalised proofs. Cut Elimination for Deep Inference systems has the same metatheoretic benefits as for Sequent Calculi systems, including consistency and decidability. Proofs of Cut Elimination for BV, MAV, and other Deep Inference systems present in the literature have relied on intricate syntactic reasoning and complex termination measures. We present a concise semantic proof that all MAV proofs can be reduced to a normal form avoiding the Cut rule and other "non analytic" rules. We also develop soundness and completeness proofs of MAV (and BV) with respect to a class of models. We have mechanised all our proofs in the Agda proof assistant, which provides both assurance of their correctness as well as yielding an executable normalisation procedure. Our technique extends to include exponentials and the additive units.]]></description>
      <pubDate>Wed, 11 Dec 2024 16:40:45 +0000</pubDate>
      <link>https://doi.org/10.46298/entics.14870</link>
      <guid>https://doi.org/10.46298/entics.14870</guid>
      <author>Atkey, Robert</author>
      <author>Kokke, Wen</author>
      <dc:creator>Atkey, Robert</dc:creator>
      <dc:creator>Kokke, Wen</dc:creator>
      <content:encoded><![CDATA[Multiplicative-Additive System Virtual (MAV) is a logic that extends Multiplicative-Additive Linear Logic with a self-dual non-commutative operator expressing the concept of "before" or "sequencing". MAV is also an extension of the logic Basic System Virtual (BV) with additives. Formulas in BV have an appealing reading as processes with parallel and sequential composition. MAV adds internal and external choice operators. BV and MAV are also closely related to Concurrent Kleene Algebras. Proof systems for MAV and BV are Deep Inference systems, which allow inference rules to be applied anywhere inside a structure. As with any proof system, a key question is whether proofs in MAV can be reduced to a normal form, removing detours and the introduction of structures not present in the original goal. In Sequent Calculi systems, this property is referred to as Cut Elimination. Deep Inference systems have an analogous Cut rule and other rules that are not present in normalised proofs. Cut Elimination for Deep Inference systems has the same metatheoretic benefits as for Sequent Calculi systems, including consistency and decidability. Proofs of Cut Elimination for BV, MAV, and other Deep Inference systems present in the literature have relied on intricate syntactic reasoning and complex termination measures. We present a concise semantic proof that all MAV proofs can be reduced to a normal form avoiding the Cut rule and other "non analytic" rules. We also develop soundness and completeness proofs of MAV (and BV) with respect to a class of models. We have mechanised all our proofs in the Agda proof assistant, which provides both assurance of their correctness as well as yielding an executable normalisation procedure. Our technique extends to include exponentials and the additive units.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Positive Focusing is Directly Useful</title>
      <description><![CDATA[Recently, Miller and Wu introduced the positive $\lambda$-calculus, a call-by-value $\lambda$-calculus with sharing obtained by assigning proof terms to the positively polarized focused proofs for minimal intuitionistic logic. The positive $\lambda$-calculus stands out among $\lambda$-calculi with sharing for a compactness property related to the sharing of variables. We show that -- thanks to compactness -- the positive calculus neatly captures the core of useful sharing, a technique for the study of reasonable time cost models.]]></description>
      <pubDate>Wed, 11 Dec 2024 16:39:06 +0000</pubDate>
      <link>https://doi.org/10.46298/entics.14758</link>
      <guid>https://doi.org/10.46298/entics.14758</guid>
      <author>Accattoli, Beniamino</author>
      <author>Wu, Jui-Hsuan</author>
      <dc:creator>Accattoli, Beniamino</dc:creator>
      <dc:creator>Wu, Jui-Hsuan</dc:creator>
      <content:encoded><![CDATA[Recently, Miller and Wu introduced the positive $\lambda$-calculus, a call-by-value $\lambda$-calculus with sharing obtained by assigning proof terms to the positively polarized focused proofs for minimal intuitionistic logic. The positive $\lambda$-calculus stands out among $\lambda$-calculi with sharing for a compactness property related to the sharing of variables. We show that -- thanks to compactness -- the positive calculus neatly captures the core of useful sharing, a technique for the study of reasonable time cost models.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Linear Arboreal Categories</title>
      <description><![CDATA[Arboreal categories, introduced by Abramsky and Reggio, axiomatise categories with tree-shaped objects. These categories provide a categorical language for formalising behavioural notions such as simulation, bisimulation, and resource-indexing. In this paper, we strengthen the axioms of an arboreal category to exclude `branching' behaviour, obtaining a notion of `linear arboreal category'. We then demonstrate that every arboreal category satisfying a linearisability condition has an associated linear arboreal subcategory related via an adjunction. This identifies the relationship between the pebble-relation comonad, of Montacute and Shah, and the pebbling comonad, of Abramsky, Dawar, and Wang, and generalises it further. As another outcome of this new framework, we obtain a linear variant of the arboreal category for modal logic. By doing so we recover different linear-time equivalences between transition systems as instances of their categorical definitions. We conclude with new preservation and characterisation theorems relating trace inclusion and trace equivalence with different linear fragments of modal logic.]]></description>
      <pubDate>Wed, 11 Dec 2024 16:35:15 +0000</pubDate>
      <link>https://doi.org/10.46298/entics.14830</link>
      <guid>https://doi.org/10.46298/entics.14830</guid>
      <author>Abramsky, Samson</author>
      <author>Montacute, Yoàv</author>
      <author>Shah, Nihil</author>
      <dc:creator>Abramsky, Samson</dc:creator>
      <dc:creator>Montacute, Yoàv</dc:creator>
      <dc:creator>Shah, Nihil</dc:creator>
      <content:encoded><![CDATA[Arboreal categories, introduced by Abramsky and Reggio, axiomatise categories with tree-shaped objects. These categories provide a categorical language for formalising behavioural notions such as simulation, bisimulation, and resource-indexing. In this paper, we strengthen the axioms of an arboreal category to exclude `branching' behaviour, obtaining a notion of `linear arboreal category'. We then demonstrate that every arboreal category satisfying a linearisability condition has an associated linear arboreal subcategory related via an adjunction. This identifies the relationship between the pebble-relation comonad, of Montacute and Shah, and the pebbling comonad, of Abramsky, Dawar, and Wang, and generalises it further. As another outcome of this new framework, we obtain a linear variant of the arboreal category for modal logic. By doing so we recover different linear-time equivalences between transition systems as instances of their categorical definitions. We conclude with new preservation and characterisation theorems relating trace inclusion and trace equivalence with different linear fragments of modal logic.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Parametricity via Cohesion</title>
      <description><![CDATA[Parametricity is a key metatheoretic property of type systems, which implies strong uniformity & modularity properties of the structure of types within systems possessing it. In recent years, various systems of dependent type theory have emerged with the aim of expressing such parametric reasoning in their internal logic, toward the end of solving various problems arising from the complexity of higher-dimensional coherence conditions in type theory. This paper presents a first step toward the unification, simplification, and extension of these various methods for internalizing parametricity. Specifically, I argue that there is an essentially modal aspect of parametricity, which is intimately connected with the category-theoretic concept of cohesion. On this basis, I describe a general categorical semantics for modal parametricity, develop a corresponding framework of axioms (with computational interpretations) in dependent type theory that can be used to internally represent and reason about such parametricity, and show this in practice by implementing these axioms in Agda and using them to verify parametricity theorems therein. I then demonstrate the utility of these axioms in managing the complexity of higher-dimensional coherence by deriving induction principles for higher inductive types, and in closing, I sketch the outlines of a more general synthetic theory of parametricity, with applications in domains ranging from homotopy type theory to the analysis of program modules.]]></description>
      <pubDate>Wed, 11 Dec 2024 16:31:36 +0000</pubDate>
      <link>https://doi.org/10.46298/entics.14710</link>
      <guid>https://doi.org/10.46298/entics.14710</guid>
      <author>Aberlé, C. B.</author>
      <dc:creator>Aberlé, C. B.</dc:creator>
      <content:encoded><![CDATA[Parametricity is a key metatheoretic property of type systems, which implies strong uniformity & modularity properties of the structure of types within systems possessing it. In recent years, various systems of dependent type theory have emerged with the aim of expressing such parametric reasoning in their internal logic, toward the end of solving various problems arising from the complexity of higher-dimensional coherence conditions in type theory. This paper presents a first step toward the unification, simplification, and extension of these various methods for internalizing parametricity. Specifically, I argue that there is an essentially modal aspect of parametricity, which is intimately connected with the category-theoretic concept of cohesion. On this basis, I describe a general categorical semantics for modal parametricity, develop a corresponding framework of axioms (with computational interpretations) in dependent type theory that can be used to internally represent and reason about such parametricity, and show this in practice by implementing these axioms in Agda and using them to verify parametricity theorems therein. I then demonstrate the utility of these axioms in managing the complexity of higher-dimensional coherence by deriving induction principles for higher inductive types, and in closing, I sketch the outlines of a more general synthetic theory of parametricity, with applications in domains ranging from homotopy type theory to the analysis of program modules.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Profinite lambda-terms and parametricity</title>
      <description><![CDATA[Combining ideas coming from Stone duality and Reynolds parametricity, we formulate in a clean and principled way a notion of profinite lambda-term which, we show, generalizes at every type the traditional notion of profinite word coming from automata theory. We start by defining the Stone space of profinite lambda-terms as a projective limit of finite sets of usual lambda-terms, considered modulo a notion of equivalence based on the finite standard model. One main contribution of the paper is to establish that, somewhat surprisingly, the resulting notion of profinite lambda-term coming from Stone duality lives in perfect harmony with the principles of Reynolds parametricity. In addition, we show that the notion of profinite lambda-term is compositional by constructing a cartesian closed category of profinite lambda-terms, and we establish that the embedding from lambda-terms modulo beta-eta-conversion to profinite lambda-terms is faithful using Statman's finite completeness theorem. Finally, we prove that the traditional Church encoding of finite words into lambda-terms can be extended to profinite words, and leads to a homeomorphism between the space of profinite words and the space of profinite lambda-terms of the corresponding Church type.]]></description>
      <pubDate>Thu, 23 Nov 2023 15:57:23 +0000</pubDate>
      <link>https://doi.org/10.46298/entics.12280</link>
      <guid>https://doi.org/10.46298/entics.12280</guid>
      <author>van Gool, Sam</author>
      <author>Melliès, Paul-André</author>
      <author>Moreau, Vincent</author>
      <dc:creator>van Gool, Sam</dc:creator>
      <dc:creator>Melliès, Paul-André</dc:creator>
      <dc:creator>Moreau, Vincent</dc:creator>
      <content:encoded><![CDATA[Combining ideas coming from Stone duality and Reynolds parametricity, we formulate in a clean and principled way a notion of profinite lambda-term which, we show, generalizes at every type the traditional notion of profinite word coming from automata theory. We start by defining the Stone space of profinite lambda-terms as a projective limit of finite sets of usual lambda-terms, considered modulo a notion of equivalence based on the finite standard model. One main contribution of the paper is to establish that, somewhat surprisingly, the resulting notion of profinite lambda-term coming from Stone duality lives in perfect harmony with the principles of Reynolds parametricity. In addition, we show that the notion of profinite lambda-term is compositional by constructing a cartesian closed category of profinite lambda-terms, and we establish that the embedding from lambda-terms modulo beta-eta-conversion to profinite lambda-terms is faithful using Statman's finite completeness theorem. Finally, we prove that the traditional Church encoding of finite words into lambda-terms can be extended to profinite words, and leads to a homeomorphism between the space of profinite words and the space of profinite lambda-terms of the corresponding Church type.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Dependent Type Refinements for Futures</title>
      <description><![CDATA[Type refinements combine the compositionality of typechecking with the expressivity of program logics, offering a synergistic approach to program verification. In this paper we apply dependent type refinements to SAX, a futures-based process calculus that arises from the Curry-Howard interpretation of the intuitionistic semi-axiomatic sequent calculus and includes unrestricted recursion both at the level of types and processes. With our type refinement system, we can reason about the partial correctness of SAX programs, complementing prior work on sized type refinements that supports reasoning about termination. Our design regime synthesizes the infinitary proof theory of SAX with that of bidirectional typing and Hoare logic, deriving some standard reasoning principles for data and (co)recursion while enabling information hiding for codata. We prove syntactic type soundness, which entails a notion of partial correctness that respects codata encapsulation. We illustrate our language through a few simple examples.]]></description>
      <pubDate>Thu, 23 Nov 2023 15:56:45 +0000</pubDate>
      <link>https://doi.org/10.46298/entics.12286</link>
      <guid>https://doi.org/10.46298/entics.12286</guid>
      <author>Somayyajula, Siva</author>
      <author>Pfenning, Frank</author>
      <dc:creator>Somayyajula, Siva</dc:creator>
      <dc:creator>Pfenning, Frank</dc:creator>
      <content:encoded><![CDATA[Type refinements combine the compositionality of typechecking with the expressivity of program logics, offering a synergistic approach to program verification. In this paper we apply dependent type refinements to SAX, a futures-based process calculus that arises from the Curry-Howard interpretation of the intuitionistic semi-axiomatic sequent calculus and includes unrestricted recursion both at the level of types and processes. With our type refinement system, we can reason about the partial correctness of SAX programs, complementing prior work on sized type refinements that supports reasoning about termination. Our design regime synthesizes the infinitary proof theory of SAX with that of bidirectional typing and Hoare logic, deriving some standard reasoning principles for data and (co)recursion while enabling information hiding for codata. We prove syntactic type soundness, which entails a notion of partial correctness that respects codata encapsulation. We illustrate our language through a few simple examples.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Semantics of multimodal adjoint type theory</title>
      <description><![CDATA[We show that contrary to appearances, Multimodal Type Theory (MTT) over a 2-category M can be interpreted in any M-shaped diagram of categories having, and functors preserving, M-sized limits, without the need for extra left adjoints. This is achieved by a construction called "co-dextrification" that co-freely adds left adjoints to any such diagram, which can then be used to interpret the "context lock" functors of MTT. Furthermore, if any of the functors in the diagram have right adjoints, these can also be internalized in type theory as negative modalities in the style of FitchTT. We introduce the name Multimodal Adjoint Type Theory (MATT) for the resulting combined general modal type theory. In particular, we can interpret MATT in any finite diagram of toposes and geometric morphisms, with positive modalities for inverse image functors and negative modalities for direct image functors.]]></description>
      <pubDate>Thu, 23 Nov 2023 15:55:42 +0000</pubDate>
      <link>https://doi.org/10.46298/entics.12300</link>
      <guid>https://doi.org/10.46298/entics.12300</guid>
      <author>Shulman, Michael</author>
      <dc:creator>Shulman, Michael</dc:creator>
      <content:encoded><![CDATA[We show that contrary to appearances, Multimodal Type Theory (MTT) over a 2-category M can be interpreted in any M-shaped diagram of categories having, and functors preserving, M-sized limits, without the need for extra left adjoints. This is achieved by a construction called "co-dextrification" that co-freely adds left adjoints to any such diagram, which can then be used to interpret the "context lock" functors of MTT. Furthermore, if any of the functors in the diagram have right adjoints, these can also be internalized in type theory as negative modalities in the style of FitchTT. We introduce the name Multimodal Adjoint Type Theory (MATT) for the resulting combined general modal type theory. In particular, we can interpret MATT in any finite diagram of toposes and geometric morphisms, with positive modalities for inverse image functors and negative modalities for direct image functors.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Algebra of Self-Replication</title>
      <description><![CDATA[Typical arguments for results like Kleene's Second Recursion Theorem and the existence of self-writing computer programs bear the fingerprints of equational reasoning and combinatory logic. In fact, the connection of combinatory logic and computability theory is very old, and this paper extends this connection in new ways. In one direction, we counter the main trend in both computability theory and combinatory logic of heading straight to undecidability. Instead, this paper proposes using several very small equational logics to examine results in computability theory itself. These logics are decidable via term rewriting. We argue that they have something interesting to say about computability theory. They are closely related to fragments of combinatory logic which are decidable, and so this paper contributes to the study of such fragments. The paper has a few surprising results such as a classification of quine programs (programs which output themselves) in two decidable fragments. The classification goes via examination of normal forms in term rewriting systems, hence the title of the paper. The classification is an explanation of why all quine programs (in any language) are "pretty much the same, except for inessential details." In addition, we study the relational structure whose objects are the programs with the relation "p expresses q" meaning that if the program p is run on nothing, then it eventually outputs the program q.]]></description>
      <pubDate>Thu, 23 Nov 2023 15:55:10 +0000</pubDate>
      <link>https://doi.org/10.46298/entics.12320</link>
      <guid>https://doi.org/10.46298/entics.12320</guid>
      <author>Moss, Lawrence S.</author>
      <dc:creator>Moss, Lawrence S.</dc:creator>
      <content:encoded><![CDATA[Typical arguments for results like Kleene's Second Recursion Theorem and the existence of self-writing computer programs bear the fingerprints of equational reasoning and combinatory logic. In fact, the connection of combinatory logic and computability theory is very old, and this paper extends this connection in new ways. In one direction, we counter the main trend in both computability theory and combinatory logic of heading straight to undecidability. Instead, this paper proposes using several very small equational logics to examine results in computability theory itself. These logics are decidable via term rewriting. We argue that they have something interesting to say about computability theory. They are closely related to fragments of combinatory logic which are decidable, and so this paper contributes to the study of such fragments. The paper has a few surprising results such as a classification of quine programs (programs which output themselves) in two decidable fragments. The classification goes via examination of normal forms in term rewriting systems, hence the title of the paper. The classification is an explanation of why all quine programs (in any language) are "pretty much the same, except for inessential details." In addition, we study the relational structure whose objects are the programs with the relation "p expresses q" meaning that if the program p is run on nothing, then it eventually outputs the program q.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Implicative models of set theory</title>
      <description><![CDATA[In this paper we show that using implicative algebras one can produce models of set theory generalizing Heyting/Boolean-valued models and realizability models of (I)ZF, both in intuitionistic and classical logic. As a consequence, any topos obtained from a Set-based tripos via the tripos-to-topos construction hosts a model of intuitionistic or classical set theory, provided a large enough strongly inaccessible cardinal exists.]]></description>
      <pubDate>Thu, 23 Nov 2023 15:54:43 +0000</pubDate>
      <link>https://doi.org/10.46298/entics.12426</link>
      <guid>https://doi.org/10.46298/entics.12426</guid>
      <author>Maschio, Samuele</author>
      <author>Miquel, Alexandre</author>
      <dc:creator>Maschio, Samuele</dc:creator>
      <dc:creator>Miquel, Alexandre</dc:creator>
      <content:encoded><![CDATA[In this paper we show that using implicative algebras one can produce models of set theory generalizing Heyting/Boolean-valued models and realizability models of (I)ZF, both in intuitionistic and classical logic. As a consequence, any topos obtained from a Set-based tripos via the tripos-to-topos construction hosts a model of intuitionistic or classical set theory, provided a large enough strongly inaccessible cardinal exists.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>A topological counterpart of well-founded trees in dependent type theory</title>
      <description><![CDATA[Within dependent type theory, we provide a topological counterpart of well-founded trees (for short, W-types) by using a proof-relevant version of the notion of inductively generated suplattices introduced in the context of formal topology under the name of inductively generated basic covers. In more detail, we show, firstly, that in Homotopy Type Theory, W-types and proof-relevant inductively generated basic covers are propositionally mutually encodable. Secondly, we prove they are definitionally mutually encodable in the Agda implementation of intensional Martin-Löf's type theory. Finally, we reframe the equivalence in the Minimalist Foundation framework by introducing well-founded predicates as the logical counterpart for predicates of dependent W-types. All the results have been checked in the Agda proof-assistant.]]></description>
      <pubDate>Thu, 23 Nov 2023 15:54:02 +0000</pubDate>
      <link>https://doi.org/10.46298/entics.11755</link>
      <guid>https://doi.org/10.46298/entics.11755</guid>
      <author>Maietti, Maria Emilia</author>
      <author>Sabelli, Pietro</author>
      <dc:creator>Maietti, Maria Emilia</dc:creator>
      <dc:creator>Sabelli, Pietro</dc:creator>
      <content:encoded><![CDATA[Within dependent type theory, we provide a topological counterpart of well-founded trees (for short, W-types) by using a proof-relevant version of the notion of inductively generated suplattices introduced in the context of formal topology under the name of inductively generated basic covers. In more detail, we show, firstly, that in Homotopy Type Theory, W-types and proof-relevant inductively generated basic covers are propositionally mutually encodable. Secondly, we prove they are definitionally mutually encodable in the Agda implementation of intensional Martin-Löf's type theory. Finally, we reframe the equivalence in the Minimalist Foundation framework by introducing well-founded predicates as the logical counterpart for predicates of dependent W-types. All the results have been checked in the Agda proof-assistant.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Graded Differential Categories and Graded Differential Linear Logic</title>
      <description><![CDATA[In Linear Logic ($\mathsf{LL}$), the exponential modality $!$ brings forth a distinction between non-linear proofs and linear proofs, where linear means using an argument exactly once. Differential Linear Logic ($\mathsf{DiLL}$) is an extension of Linear Logic which includes additional rules for $!$ which encode differentiation and the ability of linearizing proofs. On the other hand, Graded Linear Logic ($\mathsf{GLL}$) is a variation of Linear Logic in such a way that $!$ is now indexed over a semiring $R$. This $R$-grading allows for non-linear proofs of degree $r \in R$, such that the linear proofs are of degree $1 \in R$. There has been recent interest in combining these two variations of $\mathsf{LL}$ together and developing Graded Differential Linear Logic ($\mathsf{GDiLL}$). In this paper we present a sequent calculus for $\mathsf{GDiLL}$, as well as introduce its categorical semantics, which we call graded differential categories, using both coderelictions and deriving transformations. We prove that symmetric powers always give graded differential categories, and provide other examples of graded differential categories. We also discuss graded versions of (monoidal) coalgebra modalities, additive bialgebra modalities, and the Seely isomorphisms, as well as their implementations in the sequent calculus of $\mathsf{GDiLL}$.]]></description>
      <pubDate>Thu, 23 Nov 2023 15:53:29 +0000</pubDate>
      <link>https://doi.org/10.46298/entics.12290</link>
      <guid>https://doi.org/10.46298/entics.12290</guid>
      <author>Lemay, Jean-Simon Pacaud</author>
      <author>Vienney, Jean-Baptiste</author>
      <dc:creator>Lemay, Jean-Simon Pacaud</dc:creator>
      <dc:creator>Vienney, Jean-Baptiste</dc:creator>
      <content:encoded><![CDATA[In Linear Logic ($\mathsf{LL}$), the exponential modality $!$ brings forth a distinction between non-linear proofs and linear proofs, where linear means using an argument exactly once. Differential Linear Logic ($\mathsf{DiLL}$) is an extension of Linear Logic which includes additional rules for $!$ which encode differentiation and the ability of linearizing proofs. On the other hand, Graded Linear Logic ($\mathsf{GLL}$) is a variation of Linear Logic in such a way that $!$ is now indexed over a semiring $R$. This $R$-grading allows for non-linear proofs of degree $r \in R$, such that the linear proofs are of degree $1 \in R$. There has been recent interest in combining these two variations of $\mathsf{LL}$ together and developing Graded Differential Linear Logic ($\mathsf{GDiLL}$). In this paper we present a sequent calculus for $\mathsf{GDiLL}$, as well as introduce its categorical semantics, which we call graded differential categories, using both coderelictions and deriving transformations. We prove that symmetric powers always give graded differential categories, and provide other examples of graded differential categories. We also discuss graded versions of (monoidal) coalgebra modalities, additive bialgebra modalities, and the Seely isomorphisms, as well as their implementations in the sequent calculus of $\mathsf{GDiLL}$.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Cartesian Differential Kleisli Categories</title>
      <description><![CDATA[Cartesian differential categories come equipped with a differential combinator which axiomatizes the fundamental properties of the total derivative from differential calculus. The objective of this paper is to understand when the Kleisli category of a monad is a Cartesian differential category. We introduce Cartesian differential monads, which are monads whose Kleisli category is a Cartesian differential category by way of lifting the differential combinator from the base category. Examples of Cartesian differential monads include tangent bundle monads and reader monads. We give a precise characterization of Cartesian differential categories which are Kleisli categories of Cartesian differential monads using abstract Kleisli categories. We also show that the Eilenberg-Moore category of a Cartesian differential monad is a tangent category.]]></description>
      <pubDate>Thu, 23 Nov 2023 15:52:57 +0000</pubDate>
      <link>https://doi.org/10.46298/entics.12278</link>
      <guid>https://doi.org/10.46298/entics.12278</guid>
      <author>Lemay, Jean-Simon Pacaud</author>
      <dc:creator>Lemay, Jean-Simon Pacaud</dc:creator>
      <content:encoded><![CDATA[Cartesian differential categories come equipped with a differential combinator which axiomatizes the fundamental properties of the total derivative from differential calculus. The objective of this paper is to understand when the Kleisli category of a monad is a Cartesian differential category. We introduce Cartesian differential monads, which are monads whose Kleisli category is a Cartesian differential category by way of lifting the differential combinator from the base category. Examples of Cartesian differential monads include tangent bundle monads and reader monads. We give a precise characterization of Cartesian differential categories which are Kleisli categories of Cartesian differential monads using abstract Kleisli categories. We also show that the Eilenberg-Moore category of a Cartesian differential monad is a tangent category.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Joint Distributions in Probabilistic Semantics</title>
      <description><![CDATA[Various categories have been proposed as targets for the denotational semantics of higher-order probabilistic programming languages. One such proposal involves joint probability distributions (couplings) used in Bayesian statistical models with conditioning. In previous treatments, composition of joint measures was performed by disintegrating to obtain Markov kernels, composing the kernels, then reintegrating to obtain a joint measure. Disintegrations exist only under certain restrictions on the underlying spaces. In this paper we propose a category whose morphisms are joint finite measures in which composition is defined without reference to disintegration, allowing its application to a broader class of spaces. The category is symmetric monoidal with a pleasing symmetry in which the dagger structure is a simple transpose.]]></description>
      <pubDate>Thu, 23 Nov 2023 15:52:08 +0000</pubDate>
      <link>https://doi.org/10.46298/entics.12279</link>
      <guid>https://doi.org/10.46298/entics.12279</guid>
      <author>Kozen, Dexter</author>
      <author>Silva, Alexandra</author>
      <author>Voogd, Erik</author>
      <dc:creator>Kozen, Dexter</dc:creator>
      <dc:creator>Silva, Alexandra</dc:creator>
      <dc:creator>Voogd, Erik</dc:creator>
      <content:encoded><![CDATA[Various categories have been proposed as targets for the denotational semantics of higher-order probabilistic programming languages. One such proposal involves joint probability distributions (couplings) used in Bayesian statistical models with conditioning. In previous treatments, composition of joint measures was performed by disintegrating to obtain Markov kernels, composing the kernels, then reintegrating to obtain a joint measure. Disintegrations exist only under certain restrictions on the underlying spaces. In this paper we propose a category whose morphisms are joint finite measures in which composition is defined without reference to disintegration, allowing its application to a broader class of spaces. The category is symmetric monoidal with a pleasing symmetry in which the dagger structure is a simple transpose.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>A Categorical Framework for Program Semantics and Semantic Abstraction</title>
      <description><![CDATA[Categorical semantics of type theories are often characterized as structure-preserving functors. This is because in category theory both the syntax and the domain of interpretation are uniformly treated as structured categories, so that we can express interpretations as structure-preserving functors between them. This mathematical characterization of semantics makes it convenient to manipulate and to reason about relationships between interpretations. Motivated by this success of functorial semantics, we address the question of finding a functorial analogue in abstract interpretation, a general framework for comparing semantics, so that we can bring similar benefits of functorial semantics to semantic abstractions used in abstract interpretation. Major differences concern the notion of interpretation that is being considered. Indeed, conventional semantics are value-based whereas abstract interpretation typically deals with more complex properties. In this paper, we propose a functorial approach to abstract interpretation and study associated fundamental concepts therein. In our approach, interpretations are expressed as oplax functors in the category of posets, and abstraction relations between interpretations are expressed as lax natural transformations representing concretizations. We present examples of these formal concepts from monadic semantics of programming languages and discuss soundness.]]></description>
      <pubDate>Thu, 23 Nov 2023 15:51:38 +0000</pubDate>
      <link>https://doi.org/10.46298/entics.12288</link>
      <guid>https://doi.org/10.46298/entics.12288</guid>
      <author>Katsumata, Shin-ya</author>
      <author>Rival, Xavier</author>
      <author>Dubut, Jérémy</author>
      <dc:creator>Katsumata, Shin-ya</dc:creator>
      <dc:creator>Rival, Xavier</dc:creator>
      <dc:creator>Dubut, Jérémy</dc:creator>
      <content:encoded><![CDATA[Categorical semantics of type theories are often characterized as structure-preserving functors. This is because in category theory both the syntax and the domain of interpretation are uniformly treated as structured categories, so that we can express interpretations as structure-preserving functors between them. This mathematical characterization of semantics makes it convenient to manipulate and to reason about relationships between interpretations. Motivated by this success of functorial semantics, we address the question of finding a functorial analogue in abstract interpretation, a general framework for comparing semantics, so that we can bring similar benefits of functorial semantics to semantic abstractions used in abstract interpretation. Major differences concern the notion of interpretation that is being considered. Indeed, conventional semantics are value-based whereas abstract interpretation typically deals with more complex properties. In this paper, we propose a functorial approach to abstract interpretation and study associated fundamental concepts therein. In our approach, interpretations are expressed as oplax functors in the category of posets, and abstraction relations between interpretations are expressed as lax natural transformations representing concretizations. We present examples of these formal concepts from monadic semantics of programming languages and discuss soundness.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>A model of stochastic memoization and name generation in probabilistic programming: categorical semantics via monads on presheaf categories</title>
      <description><![CDATA[Stochastic memoization is a higher-order construct of probabilistic programming languages that is key in Bayesian nonparametrics, a modular approach that allows us to extend models beyond their parametric limitations and compose them in an elegant and principled manner. Stochastic memoization is simple and useful in practice, but semantically elusive, particularly regarding dataflow transformations. As the naive implementation resorts to the state monad, which is not commutative, it is not clear if stochastic memoization preserves the dataflow property -- i.e., whether we can reorder the lines of a program without changing its semantics, provided the dataflow graph is preserved. In this paper, we give an operational and categorical semantics to stochastic memoization and name generation in the context of a minimal probabilistic programming language, for a restricted class of functions. Our contribution is a first model of stochastic memoization of constant Bernoulli functions with a non-enumerable type, which validates dataflow transformations, bridging the gap between traditional probability theory and higher-order probability models. Our model uses a presheaf category and a novel probability monad on it.]]></description>
      <pubDate>Thu, 23 Nov 2023 15:51:05 +0000</pubDate>
      <link>https://doi.org/10.46298/entics.12291</link>
      <guid>https://doi.org/10.46298/entics.12291</guid>
      <author>Kaddar, Younesse</author>
      <author>Staton, Sam</author>
      <dc:creator>Kaddar, Younesse</dc:creator>
      <dc:creator>Staton, Sam</dc:creator>
      <content:encoded><![CDATA[Stochastic memoization is a higher-order construct of probabilistic programming languages that is key in Bayesian nonparametrics, a modular approach that allows us to extend models beyond their parametric limitations and compose them in an elegant and principled manner. Stochastic memoization is simple and useful in practice, but semantically elusive, particularly regarding dataflow transformations. As the naive implementation resorts to the state monad, which is not commutative, it is not clear if stochastic memoization preserves the dataflow property -- i.e., whether we can reorder the lines of a program without changing its semantics, provided the dataflow graph is preserved. In this paper, we give an operational and categorical semantics to stochastic memoization and name generation in the context of a minimal probabilistic programming language, for a restricted class of functions. Our contribution is a first model of stochastic memoization of constant Bernoulli functions with a non-enumerable type, which validates dataflow transformations, bridging the gap between traditional probability theory and higher-order probability models. Our model uses a presheaf category and a novel probability monad on it.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Pearl's and Jeffrey's Update as Modes of Learning in Probabilistic Programming</title>
      <description><![CDATA[The concept of updating a probability distribution in the light of new evidence lies at the heart of statistics and machine learning. Pearl's and Jeffrey's rules are two natural update mechanisms which lead to different outcomes, yet the similarities and differences remain mysterious. This paper clarifies their relationship in several ways: via separate descriptions of the two update mechanisms in terms of probabilistic programs and sampling semantics, and via different notions of likelihood (for Pearl and for Jeffrey). Moreover, it is shown that Jeffrey's update rule arises via variational inference. In terms of categorical probability theory, this amounts to an analysis of the situation in terms of the behaviour of the multiset functor, extended to the Kleisli category of the distribution monad.]]></description>
      <pubDate>Thu, 23 Nov 2023 15:50:37 +0000</pubDate>
      <link>https://doi.org/10.46298/entics.12281</link>
      <guid>https://doi.org/10.46298/entics.12281</guid>
      <author>Jacobs, Bart</author>
      <author>Stein, Dario</author>
      <dc:creator>Jacobs, Bart</dc:creator>
      <dc:creator>Stein, Dario</dc:creator>
      <content:encoded><![CDATA[The concept of updating a probability distribution in the light of new evidence lies at the heart of statistics and machine learning. Pearl's and Jeffrey's rules are two natural update mechanisms which lead to different outcomes, yet the similarities and differences remain mysterious. This paper clarifies their relationship in several ways: via separate descriptions of the two update mechanisms in terms of probabilistic programs and sampling semantics, and via different notions of likelihood (for Pearl and for Jeffrey). Moreover, it is shown that Jeffrey's update rule arises via variational inference. In terms of categorical probability theory, this amounts to an analysis of the situation in terms of the behaviour of the multiset functor, extended to the Kleisli category of the distribution monad.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Fixpoint constructions in focused orthogonality models of linear logic</title>
      <description><![CDATA[Orthogonality is a notion based on the duality between programs and their environments used to determine when they can be safely combined. For instance, it is a powerful tool to establish termination properties in classical formal systems. It was given a general treatment with the concept of orthogonality category, of which numerous models of linear logic are instances, by Hyland and Schalk. This paper considers the subclass of focused orthogonalities. We develop a theory of fixpoint constructions in focused orthogonality categories. Central results are lifting theorems for initial algebras and final coalgebras. These crucially hinge on the insight that focused orthogonality categories are relational fibrations. The theory provides an axiomatic categorical framework for models of linear logic with least and greatest fixpoints of types. We further investigate domain-theoretic settings, showing how to lift bifree algebras, used to solve mixed-variance recursive type equations, to focused orthogonality categories.]]></description>
      <pubDate>Thu, 23 Nov 2023 15:50:08 +0000</pubDate>
      <link>https://doi.org/10.46298/entics.12302</link>
      <guid>https://doi.org/10.46298/entics.12302</guid>
      <author>Fiore, Marcelo</author>
      <author>Galal, Zeinab</author>
      <author>Jafarrahmani, Farzad</author>
      <dc:creator>Fiore, Marcelo</dc:creator>
      <dc:creator>Galal, Zeinab</dc:creator>
      <dc:creator>Jafarrahmani, Farzad</dc:creator>
      <content:encoded><![CDATA[Orthogonality is a notion based on the duality between programs and their environments used to determine when they can be safely combined. For instance, it is a powerful tool to establish termination properties in classical formal systems. It was given a general treatment with the concept of orthogonality category, of which numerous models of linear logic are instances, by Hyland and Schalk. This paper considers the subclass of focused orthogonalities. We develop a theory of fixpoint constructions in focused orthogonality categories. Central results are lifting theorems for initial algebras and final coalgebras. These crucially hinge on the insight that focused orthogonality categories are relational fibrations. The theory provides an axiomatic categorical framework for models of linear logic with least and greatest fixpoints of types. We further investigate domain-theoretic settings, showing how to lift bifree algebras, used to solve mixed-variance recursive type equations, to focused orthogonality categories.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Saturating automata for game semantics</title>
      <description><![CDATA[Saturation is a fundamental game-semantic property satisfied by strategies that interpret higher-order concurrent programs. It states that the strategy must be closed under certain rearrangements of moves, and corresponds to the intuition that program moves (P-moves) may depend only on moves made by the environment (O-moves). We propose an automata model over an infinite alphabet, called saturating automata, for which all accepted languages are guaranteed to satisfy a closure property mimicking saturation. We show how to translate the finitary fragment of Idealized Concurrent Algol (FICA) into saturating automata, confirming their suitability for modelling higher-order concurrency. Moreover, we find that, for terms in normal form, the resultant automaton has linearly many transitions and states with respect to term size, and can be constructed in polynomial time. This is in contrast to earlier attempts at finding automata-theoretic models of FICA, which did not guarantee saturation and involved an exponential blow-up during translation, even for normal forms.]]></description>
      <pubDate>Thu, 23 Nov 2023 15:49:40 +0000</pubDate>
      <link>https://doi.org/10.46298/entics.12277</link>
      <guid>https://doi.org/10.46298/entics.12277</guid>
      <author>Dixon, Alex</author>
      <author>Murawski, Andrzej S.</author>
      <dc:creator>Dixon, Alex</dc:creator>
      <dc:creator>Murawski, Andrzej S.</dc:creator>
      <content:encoded><![CDATA[Saturation is a fundamental game-semantic property satisfied by strategies that interpret higher-order concurrent programs. It states that the strategy must be closed under certain rearrangements of moves, and corresponds to the intuition that program moves (P-moves) may depend only on moves made by the environment (O-moves). We propose an automata model over an infinite alphabet, called saturating automata, for which all accepted languages are guaranteed to satisfy a closure property mimicking saturation. We show how to translate the finitary fragment of Idealized Concurrent Algol (FICA) into saturating automata, confirming their suitability for modelling higher-order concurrency. Moreover, we find that, for terms in normal form, the resultant automaton has linearly many transitions and states with respect to term size, and can be constructed in polynomial time. This is in contrast to earlier attempts at finding automata-theoretic models of FICA, which did not guarantee saturation and involved an exponential blow-up during translation, even for normal forms.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>A Language for Evaluating Derivatives of Functionals Using Automatic Differentiation</title>
      <description><![CDATA[We present a simple functional programming language, called Dual PCF, that implements forward mode automatic differentiation using dual numbers in the framework of exact real number computation. The main new feature of this language is the ability to evaluate correctly up to the precision specified by the user -- in a simple and direct way -- the directional derivative of functionals as well as first order functions. In contrast to other comparable languages, Dual PCF also includes the recursive operator for defining functions and functionals. We provide a wide range of examples of Lipschitz functions and functionals that can be defined in Dual PCF. We use domain theory both to give a denotational semantics to the language and to prove the correctness of the new derivative operator using logical relations. To be able to differentiate functionals -- including on function spaces equipped with their compact-open topology that do not admit a norm -- we develop a domain-theoretic directional derivative that is Scott continuous and extends Clarke's subgradient of real-valued locally Lipschitz maps on Banach spaces to real-valued continuous maps on Hausdorff topological vector spaces. Finally, we show that we can express arbitrary computable linear functionals in Dual PCF.]]></description>
      <pubDate>Thu, 23 Nov 2023 15:49:05 +0000</pubDate>
      <link>https://doi.org/10.46298/entics.12303</link>
      <guid>https://doi.org/10.46298/entics.12303</guid>
      <author>Di Gianantonio, Pietro</author>
      <author>Edalat, Abbas</author>
      <author>Gutin, Ran</author>
      <dc:creator>Di Gianantonio, Pietro</dc:creator>
      <dc:creator>Edalat, Abbas</dc:creator>
      <dc:creator>Gutin, Ran</dc:creator>
      <content:encoded><![CDATA[We present a simple functional programming language, called Dual PCF, that implements forward mode automatic differentiation using dual numbers in the framework of exact real number computation. The main new feature of this language is the ability to evaluate correctly up to the precision specified by the user -- in a simple and direct way -- the directional derivative of functionals as well as first order functions. In contrast to other comparable languages, Dual PCF also includes the recursive operator for defining functions and functionals. We provide a wide range of examples of Lipschitz functions and functionals that can be defined in Dual PCF. We use domain theory both to give a denotational semantics to the language and to prove the correctness of the new derivative operator using logical relations. To be able to differentiate functionals -- including on function spaces equipped with their compact-open topology that do not admit a norm -- we develop a domain-theoretic directional derivative that is Scott continuous and extends Clarke's subgradient of real-valued locally Lipschitz maps on Banach spaces to real-valued continuous maps on Hausdorff topological vector spaces. Finally, we show that we can express arbitrary computable linear functionals in Dual PCF.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Dynamic Separation Logic</title>
      <description><![CDATA[This paper introduces a dynamic logic extension of separation logic. The assertion language of separation logic is extended with modalities for the five types of the basic instructions of separation logic: simple assignment, look-up, mutation, allocation, and de-allocation. The main novelty of the resulting dynamic logic is that it allows one to combine different approaches to resolving these modalities. One such approach is based on the standard weakest precondition calculus of separation logic. The other approach introduced in this paper provides a novel alternative formalization in the proposed dynamic logic extension of separation logic. The soundness and completeness of this axiomatization have been formalized in the Coq theorem prover.]]></description>
      <pubDate>Thu, 23 Nov 2023 15:48:28 +0000</pubDate>
      <link>https://doi.org/10.46298/entics.12297</link>
      <guid>https://doi.org/10.46298/entics.12297</guid>
      <author>de Boer, Frank S.</author>
      <author>Hiep, Hans-Dieter A.</author>
      <author>de Gouw, Stijn</author>
      <dc:creator>de Boer, Frank S.</dc:creator>
      <dc:creator>Hiep, Hans-Dieter A.</dc:creator>
      <dc:creator>de Gouw, Stijn</dc:creator>
      <content:encoded><![CDATA[This paper introduces a dynamic logic extension of separation logic. The assertion language of separation logic is extended with modalities for the five types of the basic instructions of separation logic: simple assignment, look-up, mutation, allocation, and de-allocation. The main novelty of the resulting dynamic logic is that it allows one to combine different approaches to resolving these modalities. One such approach is based on the standard weakest precondition calculus of separation logic. The other approach introduced in this paper provides a novel alternative formalization in the proposed dynamic logic extension of separation logic. The soundness and completeness of this axiomatization have been formalized in the Coq theorem prover.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>A Complete V-Equational System for Graded lambda-Calculus</title>
      <description><![CDATA[Modern programming frequently requires generalised notions of program equivalence based on a metric or a similar structure. Previous work addressed this challenge by introducing the notion of a V-equation, i.e. an equation labelled by an element of a quantale V, which covers inter alia (ultra-)metric, classical, and fuzzy (in)equations. It also introduced a V-equational system for the linear variant of lambda-calculus where any given resource must be used exactly once. In this paper we drop the (often too strict) linearity constraint by adding graded modal types which allow multiple uses of a resource in a controlled manner. We show that such a control, whilst providing more expressivity to the programmer, also interacts more richly with V-equations than the linear or Cartesian cases. Our main result is the introduction of a sound and complete V-equational system for a lambda-calculus with graded modal types interpreted by what we call a Lipschitz exponential comonad. We also show how to build such comonads canonically via a universal construction, and use our results to derive graded metric equational systems (and corresponding models) for programs with timed and probabilistic behaviour.]]></description>
      <pubDate>Thu, 23 Nov 2023 15:47:49 +0000</pubDate>
      <link>https://doi.org/10.46298/entics.12299</link>
      <guid>https://doi.org/10.46298/entics.12299</guid>
      <author>Dahlqvist, Fredrik</author>
      <author>Neves, Renato</author>
      <dc:creator>Dahlqvist, Fredrik</dc:creator>
      <dc:creator>Neves, Renato</dc:creator>
      <content:encoded><![CDATA[Modern programming frequently requires generalised notions of program equivalence based on a metric or a similar structure. Previous work addressed this challenge by introducing the notion of a V-equation, i.e. an equation labelled by an element of a quantale V, which covers inter alia (ultra-)metric, classical, and fuzzy (in)equations. It also introduced a V-equational system for the linear variant of lambda-calculus where any given resource must be used exactly once. In this paper we drop the (often too strict) linearity constraint by adding graded modal types which allow multiple uses of a resource in a controlled manner. We show that such a control, whilst providing more expressivity to the programmer, also interacts more richly with V-equations than the linear or Cartesian cases. Our main result is the introduction of a sound and complete V-equational system for a lambda-calculus with graded modal types interpreted by what we call a Lipschitz exponential comonad. We also show how to build such comonads canonically via a universal construction, and use our results to derive graded metric equational systems (and corresponding models) for programs with timed and probabilistic behaviour.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Propositional Logics for the Lawvere Quantale</title>
      <description><![CDATA[Lawvere showed that generalised metric spaces are categories enriched over $[0, \infty]$, the quantale of the positive extended reals. The statement of enrichment is a quantitative analogue of being a preorder. Towards a logic for quantitative metric reasoning, we investigate three $[0,\infty]$-valued propositional logics over the Lawvere quantale. The basic logical connectives shared by all three logics are those that can be interpreted in any quantale, viz. finite conjunctions and disjunctions, tensor (addition for the Lawvere quantale) and linear implication (here a truncated subtraction); to these we add, in turn, the constant $1$ to express integer values, and scalar multiplication by a non-negative real to express general affine combinations. Quantitative equational logic can be interpreted in the third logic if we allow inference systems instead of axiomatic systems. For each of these logics we develop a natural deduction system which we prove to be decidably complete w.r.t. the quantale-valued semantics. The heart of the completeness proof makes use of the Motzkin transposition theorem. Consistency is also decidable; the proof makes use of Fourier-Motzkin elimination of linear inequalities. Strong completeness does not hold in general, even (as is known) for theories over finitely-many propositional variables; indeed even an approximate form of strong completeness in the sense of Pavelka or Ben Yaacov -- provability up to arbitrary precision -- does not hold. However, we can show it for theories axiomatized by a (not necessarily finite) set of judgements in normal form over a finite set of propositional variables when we restrict to models that do not map variables to $\infty$; the proof uses Hurwicz's general form of Farkas' Lemma.]]></description>
      <pubDate>Thu, 23 Nov 2023 15:47:12 +0000</pubDate>
      <link>https://doi.org/10.46298/entics.12292</link>
      <guid>https://doi.org/10.46298/entics.12292</guid>
      <author>Bacci, Giorgio</author>
      <author>Mardare, Radu</author>
      <author>Panangaden, Prakash</author>
      <author>Plotkin, Gordon</author>
      <dc:creator>Bacci, Giorgio</dc:creator>
      <dc:creator>Mardare, Radu</dc:creator>
      <dc:creator>Panangaden, Prakash</dc:creator>
      <dc:creator>Plotkin, Gordon</dc:creator>
      <content:encoded><![CDATA[Lawvere showed that generalised metric spaces are categories enriched over $[0, \infty]$, the quantale of the positive extended reals. The statement of enrichment is a quantitative analogue of being a preorder. Towards a logic for quantitative metric reasoning, we investigate three $[0,\infty]$-valued propositional logics over the Lawvere quantale. The basic logical connectives shared by all three logics are those that can be interpreted in any quantale, viz. finite conjunctions and disjunctions, tensor (addition for the Lawvere quantale) and linear implication (here a truncated subtraction); to these we add, in turn, the constant $1$ to express integer values, and scalar multiplication by a non-negative real to express general affine combinations. Quantitative equational logic can be interpreted in the third logic if we allow inference systems instead of axiomatic systems. For each of these logics we develop a natural deduction system which we prove to be decidably complete w.r.t. the quantale-valued semantics. The heart of the completeness proof makes use of the Motzkin transposition theorem. Consistency is also decidable; the proof makes use of Fourier-Motzkin elimination of linear inequalities. Strong completeness does not hold in general, even (as is known) for theories over finitely-many propositional variables; indeed even an approximate form of strong completeness in the sense of Pavelka or Ben Yaacov -- provability up to arbitrary precision -- does not hold. However, we can show it for theories axiomatized by a (not necessarily finite) set of judgements in normal form over a finite set of propositional variables when we restrict to models that do not map variables to $\infty$; the proof uses Hurwicz's general form of Farkas' Lemma.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>The exponential logic of sequentialization</title>
      <description><![CDATA[Linear logic has provided new perspectives on proof-theory, denotational semantics and the study of programming languages. One of its main successes is proof-nets, canonical representations of proofs that lie at the intersection between logic and graph theory. In the case of the minimalist proof-system of multiplicative linear logic without units (MLL), these two aspects are completely fused: proof-nets for this system are graphs satisfying a correctness criterion that can be fully expressed in the language of graphs. For more expressive logical systems (containing logical constants, quantifiers and exponential modalities), this is not completely the case. The purely graphical approach of proof-nets deprives them of any sequential structure that is crucial to represent the order in which arguments are presented, which is necessary for these extensions. Rebuilding this order of presentation - sequentializing the graph - is thus a requirement for a graph to be logical. The presentation and study of the artifacts ensuring that sequentialization can be done, such as boxes or jumps, are an integral part of research on linear logic. Jumps, extensively studied by Faggian and di Giamberardino, can express intermediate degrees of sequentialization between a sequent calculus proof and a fully desequentialized proof-net. We propose to analyze the logical strength of jumps by internalizing them in an extension of MLL where axioms on a specific formula, the jumping formula, introduce constraints on the possible sequentializations. The jumping formula needs to be treated non-linearly, which we do either axiomatically, or by embedding it in a very controlled fragment of multiplicative-exponential linear logic, uncovering the exponential logic of sequentialization.]]></description>
      <pubDate>Thu, 23 Nov 2023 15:46:37 +0000</pubDate>
      <link>https://doi.org/10.46298/entics.12419</link>
      <guid>https://doi.org/10.46298/entics.12419</guid>
      <author>Alcolei, Aurore</author>
      <author>Pellissier, Luc</author>
      <author>Saurin, Alexis</author>
      <dc:creator>Alcolei, Aurore</dc:creator>
      <dc:creator>Pellissier, Luc</dc:creator>
      <dc:creator>Saurin, Alexis</dc:creator>
      <content:encoded><![CDATA[Linear logic has provided new perspectives on proof-theory, denotational semantics and the study of programming languages. One of its main successes is proof-nets, canonical representations of proofs that lie at the intersection between logic and graph theory. In the case of the minimalist proof-system of multiplicative linear logic without units (MLL), these two aspects are completely fused: proof-nets for this system are graphs satisfying a correctness criterion that can be fully expressed in the language of graphs. For more expressive logical systems (containing logical constants, quantifiers and exponential modalities), this is not completely the case. The purely graphical approach of proof-nets deprives them of any sequential structure that is crucial to represent the order in which arguments are presented, which is necessary for these extensions. Rebuilding this order of presentation - sequentializing the graph - is thus a requirement for a graph to be logical. The presentation and study of the artifacts ensuring that sequentialization can be done, such as boxes or jumps, are an integral part of research on linear logic. Jumps, extensively studied by Faggian and di Giamberardino, can express intermediate degrees of sequentialization between a sequent calculus proof and a fully desequentialized proof-net. We propose to analyze the logical strength of jumps by internalizing them in an extension of MLL where axioms on a specific formula, the jumping formula, introduce constraints on the possible sequentializations. The jumping formula needs to be treated non-linearly, which we do either axiomatically, or by embedding it in a very controlled fragment of multiplicative-exponential linear logic, uncovering the exponential logic of sequentialization.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>A denotationally-based program logic for higher-order store</title>
      <description><![CDATA[Separation logic is used to reason locally about stateful programs. State-of-the-art program logics for higher-order store are usually built on top of untyped operational semantics, in part because traditional denotational methods have struggled to simultaneously account for general references and parametric polymorphism. The recent discovery of simple denotational semantics for general references and polymorphism in synthetic guarded domain theory has enabled us to develop TULIP, a higher-order separation logic over the typed equational theory of higher-order store for a monadic version of System F{mu,ref}. The Tulip logic differs from operationally-based program logics in two ways: predicates range over the meanings of typed terms rather than over the raw code of untyped terms, and they are automatically invariant under the equational congruence of higher-order store, which applies even underneath a binder. As a result, "pure" proof steps that conventionally require focusing the Hoare triple on an operational redex are replaced by a simple equational rewrite in Tulip. We have evaluated Tulip against standard examples involving linked lists in the heap, comparing our abstract equational reasoning with more familiar operational-style reasoning. Our main result is the soundness of Tulip, which we establish by constructing a BI-hyperdoctrine over the denotational semantics of F{mu,ref} in an impredicative version of synthetic guarded domain theory.]]></description>
      <pubDate>Thu, 23 Nov 2023 15:44:50 +0000</pubDate>
      <link>https://doi.org/10.46298/entics.12232</link>
      <guid>https://doi.org/10.46298/entics.12232</guid>
      <author>Aagaard, Frederik Lerbjerg</author>
      <author>Sterling, Jonathan</author>
      <author>Birkedal, Lars</author>
      <dc:creator>Aagaard, Frederik Lerbjerg</dc:creator>
      <dc:creator>Sterling, Jonathan</dc:creator>
      <dc:creator>Birkedal, Lars</dc:creator>
      <content:encoded><![CDATA[Separation logic is used to reason locally about stateful programs. State-of-the-art program logics for higher-order store are usually built on top of untyped operational semantics, in part because traditional denotational methods have struggled to simultaneously account for general references and parametric polymorphism. The recent discovery of simple denotational semantics for general references and polymorphism in synthetic guarded domain theory has enabled us to develop TULIP, a higher-order separation logic over the typed equational theory of higher-order store for a monadic version of System F{mu,ref}. The Tulip logic differs from operationally-based program logics in two ways: predicates range over the meanings of typed terms rather than over the raw code of untyped terms, and they are automatically invariant under the equational congruence of higher-order store, which applies even underneath a binder. As a result, "pure" proof steps that conventionally require focusing the Hoare triple on an operational redex are replaced by a simple equational rewrite in Tulip. We have evaluated Tulip against standard examples involving linked lists in the heap, comparing our abstract equational reasoning with more familiar operational-style reasoning. Our main result is the soundness of Tulip, which we establish by constructing a BI-hyperdoctrine over the denotational semantics of F{mu,ref} in an impredicative version of synthetic guarded domain theory.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>The sheaf representation of residuated lattices</title>
      <description><![CDATA[Residuated lattices form one of the most important classes of algebras for fuzzy logics and have been extensively studied from various points of view. Sheaf representations provide a topological approach to many algebraic structures. In this paper, we study the topological properties of the prime spectrum of residuated lattices, and then construct a sheaf space to obtain a sheaf representation for each residuated lattice.]]></description>
      <pubDate>Tue, 21 Mar 2023 14:26:34 +0000</pubDate>
      <link>https://doi.org/10.46298/entics.10399</link>
      <guid>https://doi.org/10.46298/entics.10399</guid>
      <author>Zhang, Huarong</author>
      <author>Zhao, Dongsheng</author>
      <dc:creator>Zhang, Huarong</dc:creator>
      <dc:creator>Zhao, Dongsheng</dc:creator>
      <content:encoded><![CDATA[Residuated lattices form one of the most important classes of algebras for fuzzy logics and have been extensively studied from various points of view. Sheaf representations provide a topological approach to many algebraic structures. In this paper, we study the topological properties of the prime spectrum of residuated lattices, and then construct a sheaf space to obtain a sheaf representation for each residuated lattice.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>A Hofmann-Mislove theorem for $c$-well-filtered spaces</title>
      <description><![CDATA[The Hofmann-Mislove theorem states that in a sober space, the nonempty Scott open filters of its open set lattice correspond bijectively to its compact saturated sets. In this paper, the concept of $c$-well-filtered spaces is introduced. We show that a retract of a $c$-well-filtered space is $c$-well-filtered and that a locally Lindel\"{o}f and $c$-well-filtered $P$-space is countably sober. In particular, we obtain a Hofmann-Mislove theorem for $c$-well-filtered spaces.]]></description>
      <pubDate>Tue, 21 Mar 2023 14:26:01 +0000</pubDate>
      <link>https://doi.org/10.46298/entics.10369</link>
      <guid>https://doi.org/10.46298/entics.10369</guid>
      <author>Zhang, Liping</author>
      <author>Zhou, Xiangnan</author>
      <author>Li, Qingguo</author>
      <dc:creator>Zhang, Liping</dc:creator>
      <dc:creator>Zhou, Xiangnan</dc:creator>
      <dc:creator>Li, Qingguo</dc:creator>
      <content:encoded><![CDATA[The Hofmann-Mislove theorem states that in a sober space, the nonempty Scott open filters of its open set lattice correspond bijectively to its compact saturated sets. In this paper, the concept of $c$-well-filtered spaces is introduced. We show that a retract of a $c$-well-filtered space is $c$-well-filtered and that a locally Lindel\"{o}f, $c$-well-filtered $P$-space is countably sober. In particular, we obtain a Hofmann-Mislove theorem for $c$-well-filtered spaces.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Representations of Domains via CF-approximation Spaces</title>
      <description><![CDATA[Representing a domain means, in a general way, exhibiting it as a suitable family of mathematical structures endowed with set-inclusion order. In this paper, representations of domains via CF-approximation spaces are considered. Concepts of CF-approximation spaces and CF-closed sets are introduced. It is proved that the family of CF-closed sets in a CF-approximation space endowed with set-inclusion order is a continuous domain and that every continuous domain is isomorphic to the family of CF-closed sets of some CF-approximation space endowed with set-inclusion order. The concept of CF-approximable relations is introduced using a categorical approach, which later facilitates the proof that the category of CF-approximation spaces and CF-approximable relations is equivalent to that of continuous domains and Scott continuous maps.]]></description>
      <pubDate>Tue, 21 Mar 2023 14:25:36 +0000</pubDate>
      <link>https://doi.org/10.46298/entics.10420</link>
      <guid>https://doi.org/10.46298/entics.10420</guid>
      <author>Wu, Guojun</author>
      <author>Xu, Luoshan</author>
      <dc:creator>Wu, Guojun</dc:creator>
      <dc:creator>Xu, Luoshan</dc:creator>
      <content:encoded><![CDATA[Representing a domain means, in a general way, exhibiting it as a suitable family of mathematical structures endowed with set-inclusion order. In this paper, representations of domains via CF-approximation spaces are considered. Concepts of CF-approximation spaces and CF-closed sets are introduced. It is proved that the family of CF-closed sets in a CF-approximation space endowed with set-inclusion order is a continuous domain and that every continuous domain is isomorphic to the family of CF-closed sets of some CF-approximation space endowed with set-inclusion order. The concept of CF-approximable relations is introduced using a categorical approach, which later facilitates the proof that the category of CF-approximation spaces and CF-approximable relations is equivalent to that of continuous domains and Scott continuous maps.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Strongly Continuous Domains</title>
      <description><![CDATA[The strong Scott topology, introduced by X. Xu and D. Zhao, is a new topology that is finer than the upper topology and coarser than the Scott topology. Inspired by the topological characterizations of continuous domains and hypercontinuous domains, we introduce the concept of strongly continuous domains and investigate some of their properties. In particular, we define the strong way-below relation and obtain a characterization of strongly continuous domains via this relation. We prove that the strong way-below relation on a strongly continuous domain satisfies the interpolation property, and clarify the relationship between strongly continuous domains and continuous domains, as well as that between strongly continuous domains and hypercontinuous domains. We discuss the properties of the strong Scott topology and the strong Lawson topology, the common refinement of the strong Scott topology and the lower topology, on a strongly continuous domain.]]></description>
      <pubDate>Tue, 21 Mar 2023 14:25:15 +0000</pubDate>
      <link>https://doi.org/10.46298/entics.10312</link>
      <guid>https://doi.org/10.46298/entics.10312</guid>
      <author>Song, Yinglong</author>
      <author>Yang, Jinbo</author>
      <dc:creator>Song, Yinglong</dc:creator>
      <dc:creator>Yang, Jinbo</dc:creator>
      <content:encoded><![CDATA[The strong Scott topology, introduced by X. Xu and D. Zhao, is a new topology that is finer than the upper topology and coarser than the Scott topology. Inspired by the topological characterizations of continuous domains and hypercontinuous domains, we introduce the concept of strongly continuous domains and investigate some of their properties. In particular, we define the strong way-below relation and obtain a characterization of strongly continuous domains via this relation. We prove that the strong way-below relation on a strongly continuous domain satisfies the interpolation property, and clarify the relationship between strongly continuous domains and continuous domains, as well as that between strongly continuous domains and hypercontinuous domains. We discuss the properties of the strong Scott topology and the strong Lawson topology, the common refinement of the strong Scott topology and the lower topology, on a strongly continuous domain.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Further studies on open well-filtered spaces</title>
      <description><![CDATA[Open well-filtered spaces were introduced by Shen, Xi, Xu and Zhao to answer the problem of whether every core-compact well-filtered space is sober. In the current paper we explore further properties of open well-filtered spaces. One of the main results is that if a space is open well-filtered, then so is its upper space (the set of all nonempty saturated compact subsets equipped with the upper Vietoris topology). Some other properties of open well-filtered spaces are also studied.]]></description>
      <pubDate>Tue, 21 Mar 2023 14:24:52 +0000</pubDate>
      <link>https://doi.org/10.46298/entics.10392</link>
      <guid>https://doi.org/10.46298/entics.10392</guid>
      <author>Shen, Chong</author>
      <author>Xi, Xiaoyong</author>
      <author>Zhao, Dongsheng</author>
      <dc:creator>Shen, Chong</dc:creator>
      <dc:creator>Xi, Xiaoyong</dc:creator>
      <dc:creator>Zhao, Dongsheng</dc:creator>
      <content:encoded><![CDATA[Open well-filtered spaces were introduced by Shen, Xi, Xu and Zhao to answer the problem of whether every core-compact well-filtered space is sober. In the current paper we explore further properties of open well-filtered spaces. One of the main results is that if a space is open well-filtered, then so is its upper space (the set of all nonempty saturated compact subsets equipped with the upper Vietoris topology). Some other properties of open well-filtered spaces are also studied.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Partial frames, their free frames and their congruence frames</title>
      <description><![CDATA[The context of this work is that of partial frames; these are meet-semilattices where not all subsets need have joins. A selection function, S, specifies, for all meet-semilattices, certain subsets under consideration, which we call the ``designated'' ones; an S-frame then must have joins of (at least) all such subsets and binary meet must distribute over these. A small collection of axioms suffices to specify our selection functions; these axioms are sufficiently general to include as examples of partial frames, bounded distributive lattices, sigma-frames, kappa-frames and frames. We consider right and left adjoints of S-frame maps, as a prelude to the introduction of closed and open maps. Then we look at what might be an appropriate notion of Booleanness for partial frames. The obvious candidate is the condition that every element be complemented; this concept is indeed of interest, but we pose three further conditions which, in the frame setting, are all equivalent to it. However, in the context of partial frames, the four conditions are distinct. In investigating these, we make essential use of the free frame over a partial frame and the congruence frame of a partial frame. We compare congruences of a partial frame, technically called S-congruences, with the frame congruences of its free frame. We provide a natural transformation for the situation and also consider right adjoints of the frame maps in question. We characterize the case where the two congruence frames are isomorphic and provide examples which illuminate the possible different behaviour of the two. We conclude with a characterization of closedness and openness for the embedding of a partial frame into its free frame, and into its congruence frame.]]></description>
      <pubDate>Tue, 21 Mar 2023 14:24:26 +0000</pubDate>
      <link>https://doi.org/10.46298/entics.10459</link>
      <guid>https://doi.org/10.46298/entics.10459</guid>
      <author>Schauerte, Anneliese</author>
      <author>Frith, John</author>
      <dc:creator>Schauerte, Anneliese</dc:creator>
      <dc:creator>Frith, John</dc:creator>
      <content:encoded><![CDATA[The context of this work is that of partial frames; these are meet-semilattices where not all subsets need have joins. A selection function, S, specifies, for all meet-semilattices, certain subsets under consideration, which we call the ``designated'' ones; an S-frame then must have joins of (at least) all such subsets and binary meet must distribute over these. A small collection of axioms suffices to specify our selection functions; these axioms are sufficiently general to include as examples of partial frames, bounded distributive lattices, sigma-frames, kappa-frames and frames. We consider right and left adjoints of S-frame maps, as a prelude to the introduction of closed and open maps. Then we look at what might be an appropriate notion of Booleanness for partial frames. The obvious candidate is the condition that every element be complemented; this concept is indeed of interest, but we pose three further conditions which, in the frame setting, are all equivalent to it. However, in the context of partial frames, the four conditions are distinct. In investigating these, we make essential use of the free frame over a partial frame and the congruence frame of a partial frame. We compare congruences of a partial frame, technically called S-congruences, with the frame congruences of its free frame. We provide a natural transformation for the situation and also consider right adjoints of the frame maps in question. We characterize the case where the two congruence frames are isomorphic and provide examples which illuminate the possible different behaviour of the two. We conclude with a characterization of closedness and openness for the embedding of a partial frame into its free frame, and into its congruence frame.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>$SI_2$-quasicontinuous spaces</title>
      <description><![CDATA[In this paper, as a common generalization of $SI_{2}$-continuous spaces and $s_{2}$-quasicontinuous posets, we introduce the concepts of $SI_{2}$-quasicontinuous spaces and $\mathcal{GD}$-convergence of nets for arbitrary topological spaces via cuts. Some characterizations of $SI_{2}$-quasicontinuity of spaces are given. The main results are: (1) a space is $SI_{2}$-quasicontinuous if and only if its weakly irreducible topology is hypercontinuous under inclusion order; (2) a $T_{0}$ space $X$ is $SI_{2}$-quasicontinuous if and only if the $\mathcal{GD}$-convergence in $X$ is topological.]]></description>
      <pubDate>Tue, 21 Mar 2023 14:24:00 +0000</pubDate>
      <link>https://doi.org/10.46298/entics.10355</link>
      <guid>https://doi.org/10.46298/entics.10355</guid>
      <author>Ruan, Xiaojun</author>
      <author>Xu, Xiaoquan</author>
      <dc:creator>Ruan, Xiaojun</dc:creator>
      <dc:creator>Xu, Xiaoquan</dc:creator>
      <content:encoded><![CDATA[In this paper, as a common generalization of $SI_{2}$-continuous spaces and $s_{2}$-quasicontinuous posets, we introduce the concepts of $SI_{2}$-quasicontinuous spaces and $\mathcal{GD}$-convergence of nets for arbitrary topological spaces via cuts. Some characterizations of $SI_{2}$-quasicontinuity of spaces are given. The main results are: (1) a space is $SI_{2}$-quasicontinuous if and only if its weakly irreducible topology is hypercontinuous under inclusion order; (2) a $T_{0}$ space $X$ is $SI_{2}$-quasicontinuous if and only if the $\mathcal{GD}$-convergence in $X$ is topological.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>One-step closure, weak one-step closure and meet continuity</title>
      <description><![CDATA[This paper studies the weak one-step closure and one-step closure properties concerning the structure of Scott closures. We deduce that every quasicontinuous domain has weak one-step closure and show that a quasicontinuous poset need not have weak one-step closure. We also construct a non-continuous poset with one-step closure, which gives a negative answer to an open problem posed by Zou et al. Finally, we investigate the relationship between the weak one-step closure property and the one-step closure property and prove that a poset has one-step closure if and only if it is meet continuous and has weak one-step closure.]]></description>
      <pubDate>Tue, 21 Mar 2023 14:23:12 +0000</pubDate>
      <link>https://doi.org/10.46298/entics.10352</link>
      <guid>https://doi.org/10.46298/entics.10352</guid>
      <author>Miao, Hualin</author>
      <author>Li, Qingguo</author>
      <author>Zhao, Dongsheng</author>
      <dc:creator>Miao, Hualin</dc:creator>
      <dc:creator>Li, Qingguo</dc:creator>
      <dc:creator>Zhao, Dongsheng</dc:creator>
      <content:encoded><![CDATA[This paper studies the weak one-step closure and one-step closure properties concerning the structure of Scott closures. We deduce that every quasicontinuous domain has weak one-step closure and show that a quasicontinuous poset need not have weak one-step closure. We also construct a non-continuous poset with one-step closure, which gives a negative answer to an open problem posed by Zou et al. Finally, we investigate the relationship between the weak one-step closure property and the one-step closure property and prove that a poset has one-step closure if and only if it is meet continuous and has weak one-step closure.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>A note on the category of c-spaces</title>
      <description><![CDATA[We prove that the category of c-spaces with continuous maps is not cartesian closed. As a corollary the category of locally finitary compact spaces with continuous maps is also not cartesian closed.]]></description>
      <pubDate>Tue, 21 Mar 2023 14:22:20 +0000</pubDate>
      <link>https://doi.org/10.46298/entics.10362</link>
      <guid>https://doi.org/10.46298/entics.10362</guid>
      <author>Lyu, Z.</author>
      <author>Xie, X.</author>
      <author>Kou, H.</author>
      <dc:creator>Lyu, Z.</dc:creator>
      <dc:creator>Xie, X.</dc:creator>
      <dc:creator>Kou, H.</dc:creator>
      <content:encoded><![CDATA[We prove that the category of c-spaces with continuous maps is not cartesian closed. As a corollary the category of locally finitary compact spaces with continuous maps is also not cartesian closed.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Quantaloidal Completions of Order-enriched Categories and Their Applications</title>
      <description><![CDATA[By introducing the concept of quantaloidal completions for an order-enriched category, relationships between the category of quantaloids and the category of order-enriched categories are studied. It is proved that quantaloidal completions for an order-enriched category can be fully characterized as compatible quotients of the power-set completion. As applications, we show that a special type of injective hull of an order-enriched category is the MacNeille completion, and that the free quantaloid over an order-enriched category is the down-set completion.]]></description>
      <pubDate>Tue, 21 Mar 2023 14:21:49 +0000</pubDate>
      <link>https://doi.org/10.46298/entics.10339</link>
      <guid>https://doi.org/10.46298/entics.10339</guid>
      <author>Liu, Min</author>
      <author>Li, Yulin</author>
      <dc:creator>Liu, Min</dc:creator>
      <dc:creator>Li, Yulin</dc:creator>
      <content:encoded><![CDATA[By introducing the concept of quantaloidal completions for an order-enriched category, relationships between the category of quantaloids and the category of order-enriched categories are studied. It is proved that quantaloidal completions for an order-enriched category can be fully characterized as compatible quotients of the power-set completion. As applications, we show that a special type of injective hull of an order-enriched category is the MacNeille completion, and that the free quantaloid over an order-enriched category is the down-set completion.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>On $k$-ranks of topological spaces</title>
      <description><![CDATA[In this paper, the concepts of $K$-subset systems and $k$-well-filtered spaces are introduced, which provide another uniform approach to $d$-spaces, $s$-well-filtered spaces (i.e., $\mathcal{U}_{S}$-admissibility) and well-filtered spaces. We prove that the $k$-well-filtered reflection of any $T_{0}$ space exists. Meanwhile, we propose the definition of $k$-rank, which is an ordinal that measures how many steps it takes from a $T_{0}$ space to a $k$-well-filtered space. Moreover, we derive that for any ordinal $\alpha$, there exists a $T_{0}$ space whose $k$-rank equals $\alpha$. One immediate corollary is that for any ordinal $\alpha$, there exists a $T_{0}$ space whose $d$-rank (respectively, $wf$-rank) equals $\alpha$.]]></description>
      <pubDate>Tue, 21 Mar 2023 14:21:20 +0000</pubDate>
      <link>https://doi.org/10.46298/entics.10349</link>
      <guid>https://doi.org/10.46298/entics.10349</guid>
      <author>Jin, Mengjie</author>
      <author>Li, Qingguo</author>
      <dc:creator>Jin, Mengjie</dc:creator>
      <dc:creator>Li, Qingguo</dc:creator>
      <content:encoded><![CDATA[In this paper, the concepts of $K$-subset systems and $k$-well-filtered spaces are introduced, which provide another uniform approach to $d$-spaces, $s$-well-filtered spaces (i.e., $\mathcal{U}_{S}$-admissibility) and well-filtered spaces. We prove that the $k$-well-filtered reflection of any $T_{0}$ space exists. Meanwhile, we propose the definition of $k$-rank, which is an ordinal that measures how many steps it takes from a $T_{0}$ space to a $k$-well-filtered space. Moreover, we derive that for any ordinal $\alpha$, there exists a $T_{0}$ space whose $k$-rank equals $\alpha$. One immediate corollary is that for any ordinal $\alpha$, there exists a $T_{0}$ space whose $d$-rank (respectively, $wf$-rank) equals $\alpha$.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Weakly meet $s_{Z}$-continuity and $\delta_{Z}$-continuity</title>
      <description><![CDATA[Based on the concept of weakly meet $s_{Z}$-continuity put forward by Xu and Luo in \cite{qzm}, we further prove that if the subset system $Z$ satisfies certain conditions, a poset is $s_{Z}$-continuous if and only if it is weakly meet $s_{Z}$-continuous and $s_{Z}$-quasicontinuous, which improves a related result given by Ruan and Xu in \cite{sz}. Meanwhile, we provide a characterization for a poset to be weakly meet $s_{Z}$-continuous, namely, a poset with a lower hereditary $Z$-Scott topology is weakly meet $s_{Z}$-continuous if and only if it is locally weakly meet $s_{Z}$-continuous. In addition, we introduce a monad on the new category $\mathbf{POSET_{\delta}}$ and concretely characterize its Eilenberg-Moore algebras.]]></description>
      <pubDate>Tue, 21 Mar 2023 14:20:53 +0000</pubDate>
      <link>https://doi.org/10.46298/entics.10344</link>
      <guid>https://doi.org/10.46298/entics.10344</guid>
      <author>Hou, Huijun</author>
      <author>Li, Qingguo</author>
      <dc:creator>Hou, Huijun</dc:creator>
      <dc:creator>Li, Qingguo</dc:creator>
      <content:encoded><![CDATA[Based on the concept of weakly meet $s_{Z}$-continuity put forward by Xu and Luo in \cite{qzm}, we further prove that if the subset system $Z$ satisfies certain conditions, a poset is $s_{Z}$-continuous if and only if it is weakly meet $s_{Z}$-continuous and $s_{Z}$-quasicontinuous, which improves a related result given by Ruan and Xu in \cite{sz}. Meanwhile, we provide a characterization for a poset to be weakly meet $s_{Z}$-continuous, namely, a poset with a lower hereditary $Z$-Scott topology is weakly meet $s_{Z}$-continuous if and only if it is locally weakly meet $s_{Z}$-continuous. In addition, we introduce a monad on the new category $\mathbf{POSET_{\delta}}$ and concretely characterize its Eilenberg-Moore algebras.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Quasiexact posets and the moderate meet-continuity</title>
      <description><![CDATA[The study of weak domains and quasicontinuous domains leads to the consideration of two types of generalizations of domains. In the current paper, we define the weak way-below relation between two nonempty subsets of a poset and quasiexact posets. We prove some connections among quasiexact posets, quasicontinuous domains and weak domains. Furthermore, we introduce the weak way-below finitely determined topology and study its links to the Scott topology and the weak way-below topology first considered by Mashburn. It is also proved that a dcpo is a domain if it is quasiexact and moderately meet continuous with the weak way-below relation weakly increasing.]]></description>
      <pubDate>Tue, 21 Mar 2023 14:20:08 +0000</pubDate>
      <link>https://doi.org/10.46298/entics.10383</link>
      <guid>https://doi.org/10.46298/entics.10383</guid>
      <author>He, Zhaorong</author>
      <author>Yang, Zhongqiang</author>
      <author>Zhao, Dongsheng</author>
      <dc:creator>He, Zhaorong</dc:creator>
      <dc:creator>Yang, Zhongqiang</dc:creator>
      <dc:creator>Zhao, Dongsheng</dc:creator>
      <content:encoded><![CDATA[The study of weak domains and quasicontinuous domains leads to the consideration of two types of generalizations of domains. In the current paper, we define the weak way-below relation between two nonempty subsets of a poset and quasiexact posets. We prove some connections among quasiexact posets, quasicontinuous domains and weak domains. Furthermore, we introduce the weak way-below finitely determined topology and study its links to the Scott topology and the weak way-below topology first considered by Mashburn. It is also proved that a dcpo is a domain if it is quasiexact and moderately meet continuous with the weak way-below relation weakly increasing.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Continuous R-valuations</title>
      <description><![CDATA[We introduce continuous $R$-valuations on directed-complete posets (dcpos, for short), as a generalization of continuous valuations in domain theory, by extending values of continuous valuations from reals to so-called Abelian d-rags $R$. Like the valuation monad $\mathbf{V}$ introduced by Jones and Plotkin, we show that the construction of continuous $R$-valuations extends to a strong monad $\mathbf{V}^R$ on the category of dcpos and Scott-continuous maps. Additionally, and as in recent work by the two authors and C. Th\'eron, and by the second author, B. Lindenhovius, M. Mislove and V. Zamdzhiev, we show that we can extract a commutative monad $\mathbf{V}^R_m$ out of it, whose elements we call minimal $R$-valuations. We also show that continuous $R$-valuations have close connections to measures when $R$ is taken to be $\mathbf{I}\mathbb{R}^\star_+$, the interval domain of the extended nonnegative reals: (1) On every coherent topological space, every non-zero, bounded $\tau$-smooth measure $\mu$ (defined on the Borel $\sigma$-algebra), canonically determines a continuous $\mathbf{I}\mathbb{R}^\star_+$-valuation; and (2) such a continuous $\mathbf{I}\mathbb{R}^\star_+$-valuation is the most precise (in a certain sense) continuous $\mathbf{I}\mathbb{R}^\star_+$-valuation that approximates $\mu$, when the support of $\mu$ is a compact Hausdorff subspace of a second-countable stably compact topological space. This in particular applies to Lebesgue measure on the unit interval. As a result, the Lebesgue measure can be identified as a continuous $\mathbf{I}\mathbb{R}^\star_+$-valuation. Additionally, we show that the latter is minimal.]]></description>
      <pubDate>Tue, 21 Mar 2023 14:19:25 +0000</pubDate>
      <link>https://doi.org/10.46298/entics.10351</link>
      <guid>https://doi.org/10.46298/entics.10351</guid>
      <author>Goubault-Larrecq, Jean</author>
      <author>Jia, Xiaodong</author>
      <dc:creator>Goubault-Larrecq, Jean</dc:creator>
      <dc:creator>Jia, Xiaodong</dc:creator>
      <content:encoded><![CDATA[We introduce continuous $R$-valuations on directed-complete posets (dcpos, for short), as a generalization of continuous valuations in domain theory, by extending values of continuous valuations from reals to so-called Abelian d-rags $R$. Like the valuation monad $\mathbf{V}$ introduced by Jones and Plotkin, we show that the construction of continuous $R$-valuations extends to a strong monad $\mathbf{V}^R$ on the category of dcpos and Scott-continuous maps. Additionally, and as in recent work by the two authors and C. Th\'eron, and by the second author, B. Lindenhovius, M. Mislove and V. Zamdzhiev, we show that we can extract a commutative monad $\mathbf{V}^R_m$ out of it, whose elements we call minimal $R$-valuations. We also show that continuous $R$-valuations have close connections to measures when $R$ is taken to be $\mathbf{I}\mathbb{R}^\star_+$, the interval domain of the extended nonnegative reals: (1) On every coherent topological space, every non-zero, bounded $\tau$-smooth measure $\mu$ (defined on the Borel $\sigma$-algebra), canonically determines a continuous $\mathbf{I}\mathbb{R}^\star_+$-valuation; and (2) such a continuous $\mathbf{I}\mathbb{R}^\star_+$-valuation is the most precise (in a certain sense) continuous $\mathbf{I}\mathbb{R}^\star_+$-valuation that approximates $\mu$, when the support of $\mu$ is a compact Hausdorff subspace of a second-countable stably compact topological space. This in particular applies to Lebesgue measure on the unit interval. As a result, the Lebesgue measure can be identified as a continuous $\mathbf{I}\mathbb{R}^\star_+$-valuation. Additionally, we show that the latter is minimal.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>The $d^{*}$-space</title>
      <description><![CDATA[In this paper, we introduce the concept of $d^{\ast}$-spaces. We find that strong $d$-spaces are $d^{\ast}$-spaces, but the converse does not hold. We give a characterization for a topological space to be a $d^{\ast}$-space. We prove that the retract of a $d^{\ast}$-space is a $d^{\ast}$-space. We obtain the result that for any $T_{0}$ spaces $X$ and $Y$, if the function space $TOP(X,Y)$ endowed with the Isbell topology is a $d^{\ast}$-space, then $Y$ is a $d^{\ast}$-space. We also show that for any $T_{0}$ space $X$, if the Smyth power space $Q_{v}(X)$ is a $d^{\ast}$-space, then $X$ is a $d^{\ast}$-space. Meanwhile, we give a counterexample to illustrate that, conversely, for a $d^{\ast}$-space $X$, the Smyth power space $Q_{v}(X)$ may not be a $d^{\ast}$-space.]]></description>
      <pubDate>Tue, 21 Mar 2023 14:18:49 +0000</pubDate>
      <link>https://doi.org/10.46298/entics.10354</link>
      <guid>https://doi.org/10.46298/entics.10354</guid>
      <author>Chu, Xiangping</author>
      <author>Li, Qingguo</author>
      <dc:creator>Chu, Xiangping</dc:creator>
      <dc:creator>Li, Qingguo</dc:creator>
      <content:encoded><![CDATA[In this paper, we introduce the concept of $d^{\ast}$-spaces. We find that strong $d$-spaces are $d^{\ast}$-spaces, but the converse does not hold. We give a characterization for a topological space to be a $d^{\ast}$-space. We prove that the retract of a $d^{\ast}$-space is a $d^{\ast}$-space. We obtain the result that for any $T_{0}$ spaces $X$ and $Y$, if the function space $TOP(X,Y)$ endowed with the Isbell topology is a $d^{\ast}$-space, then $Y$ is a $d^{\ast}$-space. We also show that for any $T_{0}$ space $X$, if the Smyth power space $Q_{v}(X)$ is a $d^{\ast}$-space, then $X$ is a $d^{\ast}$-space. Meanwhile, we give a counterexample to illustrate that, conversely, for a $d^{\ast}$-space $X$, the Smyth power space $Q_{v}(X)$ may not be a $d^{\ast}$-space.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Guarded Kleene Algebra with Tests: Automata Learning</title>
      <description><![CDATA[Guarded Kleene Algebra with Tests (GKAT) is the fragment of Kleene Algebra with Tests (KAT) that arises by replacing the union and iteration operations of KAT with predicate-guarded variants. GKAT is more efficiently decidable than KAT and expressive enough to model simple imperative programs, making it attractive for applications to e.g. network verification. In this paper, we further explore GKAT's automata theory, and present GL*, an algorithm for learning the GKAT automaton representation of a black-box by observing its behaviour. A complexity analysis shows that it is more efficient to learn a representation of a GKAT program with GL* than with Angluin's existing L* algorithm. We implement GL* and L* in OCaml and compare their performances on example programs.]]></description>
      <pubDate>Tue, 28 Feb 2023 02:20:08 +0000</pubDate>
      <link>https://doi.org/10.46298/entics.10505</link>
      <guid>https://doi.org/10.46298/entics.10505</guid>
      <author>Zetzsche, Stefan</author>
      <author>Silva, Alexandra</author>
      <author>Sammartino, Matteo</author>
      <dc:creator>Zetzsche, Stefan</dc:creator>
      <dc:creator>Silva, Alexandra</dc:creator>
      <dc:creator>Sammartino, Matteo</dc:creator>
      <content:encoded><![CDATA[Guarded Kleene Algebra with Tests (GKAT) is the fragment of Kleene Algebra with Tests (KAT) that arises by replacing the union and iteration operations of KAT with predicate-guarded variants. GKAT is more efficiently decidable than KAT and expressive enough to model simple imperative programs, making it attractive for applications to e.g. network verification. In this paper, we further explore GKAT's automata theory, and present GL*, an algorithm for learning the GKAT automaton representation of a black-box by observing its behaviour. A complexity analysis shows that it is more efficient to learn a representation of a GKAT program with GL* than with Angluin's existing L* algorithm. We implement GL* and L* in OCaml and compare their performances on example programs.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Continuous Functions on Final Comodels of Free Algebraic Theories</title>
      <description><![CDATA[In 2009, Ghani, Hancock and Pattinson gave a tree-like representation of stream processors $A^{\mathbb{N}} \rightarrow B^{\mathbb{N}}$. In 2021, Garner showed that this representation can be established in terms of algebraic theory and comodels: the set of infinite streams $A^{\mathbb{N}}$ is the final comodel of the algebraic theory of $A$-valued input $\mathbb{T}_A$ and the set of stream processors $\mathit{Top}(A^{\mathbb{N}},B^{\mathbb{N}})$ can be seen as the final $\mathbb{T}_A$-$\mathbb{T}_B$-bimodel. In this paper, we generalize Garner's results to the case of free algebraic theories.]]></description>
      <pubDate>Wed, 22 Feb 2023 22:39:05 +0000</pubDate>
      <link>https://doi.org/10.46298/entics.10374</link>
      <guid>https://doi.org/10.46298/entics.10374</guid>
      <author>Yoshida, Tomoya</author>
      <dc:creator>Yoshida, Tomoya</dc:creator>
      <content:encoded><![CDATA[In 2009, Ghani, Hancock and Pattinson gave a tree-like representation of stream processors $A^{\mathbb{N}} \rightarrow B^{\mathbb{N}}$. In 2021, Garner showed that this representation can be established in terms of algebraic theory and comodels: the set of infinite streams $A^{\mathbb{N}}$ is the final comodel of the algebraic theory of $A$-valued input $\mathbb{T}_A$ and the set of stream processors $\mathit{Top}(A^{\mathbb{N}},B^{\mathbb{N}})$ can be seen as the final $\mathbb{T}_A$-$\mathbb{T}_B$-bimodel. In this paper, we generalize Garner's results to the case of free algebraic theories.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Patch Locale of a Spectral Locale in Univalent Type Theory</title>
      <description><![CDATA[Stone locales together with continuous maps form a coreflective subcategory of spectral locales and perfect maps. A proof in the internal language of an elementary topos was previously given by the second-named author. This proof can be easily translated to univalent type theory using resizing axioms. In this work, we show how to achieve such a translation without resizing axioms, by working with large, locally small, and small complete frames with small bases. This turns out to be nontrivial and involves predicative reformulations of several fundamental concepts of locale theory.]]></description>
      <pubDate>Wed, 22 Feb 2023 22:38:40 +0000</pubDate>
      <link>https://doi.org/10.46298/entics.10808</link>
      <guid>https://doi.org/10.46298/entics.10808</guid>
      <author>Tosun, Ayberk</author>
      <author>Escardó, Martín Hötzel</author>
      <dc:creator>Tosun, Ayberk</dc:creator>
      <dc:creator>Escardó, Martín Hötzel</dc:creator>
      <content:encoded><![CDATA[Stone locales together with continuous maps form a coreflective subcategory of spectral locales and perfect maps. A proof in the internal language of an elementary topos was previously given by the second-named author. This proof can be easily translated to univalent type theory using resizing axioms. In this work, we show how to achieve such a translation without resizing axioms, by working with large, locally small, and small complete frames with small bases. This turns out to be nontrivial and involves predicative reformulations of several fundamental concepts of locale theory.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Category-Graded Algebraic Theories and Effect Handlers</title>
      <description><![CDATA[We provide an effect system CatEff based on a category-graded extension of algebraic theories that corresponds to category-graded monads. CatEff has category-graded operations and handlers. Effects in CatEff are graded by morphisms of the grading category. Grading morphisms represent fine structures of effects such as dependencies or sorts of states. Handlers in CatEff are regarded as an implementation of category-graded effects. We define the notion of category-graded algebraic theory to give semantics of CatEff and prove soundness and adequacy. We also give an example using category-graded effects to express protocols for sending and receiving typed data.]]></description>
      <pubDate>Wed, 22 Feb 2023 22:38:22 +0000</pubDate>
      <link>https://doi.org/10.46298/entics.10491</link>
      <guid>https://doi.org/10.46298/entics.10491</guid>
      <author>Sanada, Takahiro</author>
      <dc:creator>Sanada, Takahiro</dc:creator>
      <content:encoded><![CDATA[We provide an effect system CatEff based on a category-graded extension of algebraic theories that corresponds to category-graded monads. CatEff has category-graded operations and handlers. Effects in CatEff are graded by morphisms of the grading category. Grading morphisms represent fine structures of effects such as dependencies or sorts of states. Handlers in CatEff are regarded as an implementation of category-graded effects. We define the notion of category-graded algebraic theory to give semantics of CatEff and prove soundness and adequacy. We also give an example using category-graded effects to express protocols for sending and receiving typed data.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Call-By-Name Is Just Call-By-Value with Delimited Control</title>
      <description><![CDATA[The delimited control operator shift0 exhibits versatile capabilities: it can express layered monadic effects, or equivalently, algebraic effects. Little did we know it can express lambda calculus too! We present $ \Lambda_\$ $, a call-by-value lambda calculus extended with shift0 and the control delimiter $ \$ $, with a carefully crafted reduction theory, such that the lambda calculus with beta and eta reductions can be isomorphically embedded into $ \Lambda_\$ $ via a right inverse of a continuation-passing style translation. While call-by-name reductions of lambda calculus can trivially simulate its call-by-value version, we show that the addition of shift0 and $ \$ $ is the golden mean of expressive power that suffices to simulate beta and eta reductions while still admitting a simulation back. As a corollary, the calculi $ \Lambda\mu_v $, $ \lambda_\$ $, $ \Lambda_\$ $ and $ \lambda $ all correspond equationally.]]></description>
      <pubDate>Wed, 22 Feb 2023 22:38:01 +0000</pubDate>
      <link>https://doi.org/10.46298/entics.10502</link>
      <guid>https://doi.org/10.46298/entics.10502</guid>
      <author>Pyzik, Mateusz</author>
      <dc:creator>Pyzik, Mateusz</dc:creator>
      <content:encoded><![CDATA[The delimited control operator shift0 exhibits versatile capabilities: it can express layered monadic effects, or equivalently, algebraic effects. Little did we know it can express lambda calculus too! We present $ \Lambda_\$ $, a call-by-value lambda calculus extended with shift0 and the control delimiter $ \$ $, with a carefully crafted reduction theory, such that the lambda calculus with beta and eta reductions can be isomorphically embedded into $ \Lambda_\$ $ via a right inverse of a continuation-passing style translation. While call-by-name reductions of lambda calculus can trivially simulate its call-by-value version, we show that the addition of shift0 and $ \$ $ is the golden mean of expressive power that suffices to simulate beta and eta reductions while still admitting a simulation back. As a corollary, the calculi $ \Lambda\mu_v $, $ \lambda_\$ $, $ \Lambda_\$ $ and $ \lambda $ all correspond equationally.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Bi-invariance for Uniform Strategies on Event Structures</title>
      <description><![CDATA[A recurring problem in game semantics is to enforce uniformity in strategies. Informally, a strategy is uniform when the Player's behaviour does not depend on the particular indexing of moves chosen by the Opponent. In game semantics, uniformity is used to define a resource modality !, that can be exploited for the semantics of programming languages. In this paper we give a new account of uniformity for strategies on event structures. This work is inspired by an older idea due to Melliès, that uniformity should be expressed as "bi-invariance" with respect to two interacting group actions. We explore the algebraic foundations of bi-invariance, adapt this idea to the language of event structures and define a general notion of uniform strategy in this context. Finally we revisit an existing approach to uniformity, and show how this arises as a special case of our constructions.]]></description>
      <pubDate>Wed, 22 Feb 2023 22:37:26 +0000</pubDate>
      <link>https://doi.org/10.46298/entics.10498</link>
      <guid>https://doi.org/10.46298/entics.10498</guid>
      <author>Paquet, Hugo</author>
      <dc:creator>Paquet, Hugo</dc:creator>
      <content:encoded><![CDATA[A recurring problem in game semantics is to enforce uniformity in strategies. Informally, a strategy is uniform when the Player's behaviour does not depend on the particular indexing of moves chosen by the Opponent. In game semantics, uniformity is used to define a resource modality !, that can be exploited for the semantics of programming languages. In this paper we give a new account of uniformity for strategies on event structures. This work is inspired by an older idea due to Melliès, that uniformity should be expressed as "bi-invariance" with respect to two interacting group actions. We explore the algebraic foundations of bi-invariance, adapt this idea to the language of event structures and define a general notion of uniform strategy in this context. Finally we revisit an existing approach to uniformity, and show how this arises as a special case of our constructions.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Classifying topoi in synthetic guarded domain theory</title>
      <description><![CDATA[Several different topoi have played an important role in the development and applications of synthetic guarded domain theory (SGDT), a new kind of synthetic domain theory that abstracts the concept of guarded recursion frequently employed in the semantics of programming languages. In order to unify the accounts of guarded recursion and coinduction, several authors have enriched SGDT with multiple "clocks" parameterizing different time-streams, leading to more complex and difficult-to-understand topos models. Until now these topoi have been understood very concretely qua categories of presheaves, and the logico-geometrical question of what theories these topoi classify has remained open. We show that several important topos models of SGDT classify very simple geometric theories, and that the passage to various forms of multi-clock guarded recursion can be rephrased more compositionally in terms of the lower bagtopos construction of Vickers and variations thereon due to Johnstone. We contribute to the consolidation of SGDT by isolating the universal property of multi-clock guarded recursion as a modular construction that applies to any topos model of single-clock guarded recursion.]]></description>
      <pubDate>Wed, 22 Feb 2023 22:37:04 +0000</pubDate>
      <link>https://doi.org/10.46298/entics.10323</link>
      <guid>https://doi.org/10.46298/entics.10323</guid>
      <author>Palombi, Daniele</author>
      <author>Sterling, Jonathan</author>
      <dc:creator>Palombi, Daniele</dc:creator>
      <dc:creator>Sterling, Jonathan</dc:creator>
      <content:encoded><![CDATA[Several different topoi have played an important role in the development and applications of synthetic guarded domain theory (SGDT), a new kind of synthetic domain theory that abstracts the concept of guarded recursion frequently employed in the semantics of programming languages. In order to unify the accounts of guarded recursion and coinduction, several authors have enriched SGDT with multiple "clocks" parameterizing different time-streams, leading to more complex and difficult-to-understand topos models. Until now these topoi have been understood very concretely qua categories of presheaves, and the logico-geometrical question of what theories these topoi classify has remained open. We show that several important topos models of SGDT classify very simple geometric theories, and that the passage to various forms of multi-clock guarded recursion can be rephrased more compositionally in terms of the lower bagtopos construction of Vickers and variations thereon due to Johnstone. We contribute to the consolidation of SGDT by isolating the universal property of multi-clock guarded recursion as a modular construction that applies to any topos model of single-clock guarded recursion.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Parsing as a lifting problem and the Chomsky-Schützenberger representation theorem</title>
      <description><![CDATA[We begin by explaining how any context-free grammar encodes a functor of operads from a freely generated operad into a certain "operad of spliced words". This motivates a more general notion of CFG over any category $C$, defined as a finite species $S$ equipped with a color denoting the start symbol and a functor of operads $p : Free[S] \to W[C]$ into the operad of spliced arrows in $C$. We show that many standard properties of CFGs can be formulated within this framework, and that usual closure properties of CF languages generalize to CF languages of arrows. We also discuss a dual fibrational perspective on the functor $p$ via the notion of "displayed" operad, corresponding to a lax functor of operads $W[C] \to Span(Set)$. We then turn to the Chomsky-Schützenberger Representation Theorem. We describe how a non-deterministic finite state automaton can be seen as a category $Q$ equipped with a pair of objects denoting initial and accepting states and a functor of categories $Q \to C$ satisfying the unique lifting of factorizations property and the finite fiber property. Then, we explain how to extend this notion of automaton to functors of operads, which generalize tree automata, allowing us to lift an automaton over a category to an automaton over its operad of spliced arrows. We show that every CFG over a category can be pulled back along a ND finite state automaton over the same category, and hence that CF languages are closed under intersection with regular languages. The last important ingredient is the identification of a left adjoint $C[-] : Operad \to Cat$ to the operad of spliced arrows functor, building the "contour category" of an operad. Using this, we generalize the C-S representation theorem, proving that any context-free language of arrows over a category $C$ is the functorial image of the intersection of a $C$-chromatic tree contour language and a regular language.]]></description>
      <pubDate>Wed, 22 Feb 2023 22:36:40 +0000</pubDate>
      <link>https://doi.org/10.46298/entics.10508</link>
      <guid>https://doi.org/10.46298/entics.10508</guid>
      <author>Melliès, Paul-André</author>
      <author>Zeilberger, Noam</author>
      <dc:creator>Melliès, Paul-André</dc:creator>
      <dc:creator>Zeilberger, Noam</dc:creator>
      <content:encoded><![CDATA[We begin by explaining how any context-free grammar encodes a functor of operads from a freely generated operad into a certain "operad of spliced words". This motivates a more general notion of CFG over any category $C$, defined as a finite species $S$ equipped with a color denoting the start symbol and a functor of operads $p : Free[S] \to W[C]$ into the operad of spliced arrows in $C$. We show that many standard properties of CFGs can be formulated within this framework, and that usual closure properties of CF languages generalize to CF languages of arrows. We also discuss a dual fibrational perspective on the functor $p$ via the notion of "displayed" operad, corresponding to a lax functor of operads $W[C] \to Span(Set)$. We then turn to the Chomsky-Schützenberger Representation Theorem. We describe how a non-deterministic finite state automaton can be seen as a category $Q$ equipped with a pair of objects denoting initial and accepting states and a functor of categories $Q \to C$ satisfying the unique lifting of factorizations property and the finite fiber property. Then, we explain how to extend this notion of automaton to functors of operads, which generalize tree automata, allowing us to lift an automaton over a category to an automaton over its operad of spliced arrows. We show that every CFG over a category can be pulled back along a ND finite state automaton over the same category, and hence that CF languages are closed under intersection with regular languages. The last important ingredient is the identification of a left adjoint $C[-] : Operad \to Cat$ to the operad of spliced arrows functor, building the "contour category" of an operad. Using this, we generalize the C-S representation theorem, proving that any context-free language of arrows over a category $C$ is the functorial image of the intersection of a $C$-chromatic tree contour language and a regular language.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Revisiting Decidable Bounded Quantification, via Dinaturality</title>
      <description><![CDATA[We use a semantic interpretation to investigate the problem of defining an expressive but decidable type system with bounded quantification. Typechecking in the widely studied System Fsub is undecidable thanks to an undecidable subtyping relation, for which the culprit is the rule for subtyping bounded quantification. Weaker versions of this rule, allowing decidable subtyping, have been proposed. One of the resulting type systems (Kernel Fsub) lacks expressiveness, another (System Fsubtop) lacks the minimal typing property and thus has no evident typechecking algorithm. We consider these rules as defining distinct forms of bounded quantification, one for interpreting type variable abstraction, and the other for type instantiation. By giving a semantic interpretation for both in terms of unbounded quantification, using the dinaturality of type instantiation with respect to subsumption, we show that they can coexist within a single type system. This does have the minimal typing property and thus a simple typechecking procedure. We consider the fragments of this unified type system over types which contain only one form of bounded quantifier. One of these is equivalent to Kernel Fsub, while the other can type strictly more terms than System Fsubtop but the same set of beta-normal terms. We show decidability of typechecking for this fragment, and thus for System Fsubtop typechecking of beta-normal terms.]]></description>
      <pubDate>Wed, 22 Feb 2023 22:36:20 +0000</pubDate>
      <link>https://doi.org/10.46298/entics.10474</link>
      <guid>https://doi.org/10.46298/entics.10474</guid>
      <author>Laird, James</author>
      <dc:creator>Laird, James</dc:creator>
      <content:encoded><![CDATA[We use a semantic interpretation to investigate the problem of defining an expressive but decidable type system with bounded quantification. Typechecking in the widely studied System Fsub is undecidable thanks to an undecidable subtyping relation, for which the culprit is the rule for subtyping bounded quantification. Weaker versions of this rule, allowing decidable subtyping, have been proposed. One of the resulting type systems (Kernel Fsub) lacks expressiveness, another (System Fsubtop) lacks the minimal typing property and thus has no evident typechecking algorithm. We consider these rules as defining distinct forms of bounded quantification, one for interpreting type variable abstraction, and the other for type instantiation. By giving a semantic interpretation for both in terms of unbounded quantification, using the dinaturality of type instantiation with respect to subsumption, we show that they can coexist within a single type system. This does have the minimal typing property and thus a simple typechecking procedure. We consider the fragments of this unified type system over types which contain only one form of bounded quantifier. One of these is equivalent to Kernel Fsub, while the other can type strictly more terms than System Fsubtop but the same set of beta-normal terms. We show decidability of typechecking for this fragment, and thus for System Fsubtop typechecking of beta-normal terms.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Sufficient Statistics and Split Idempotents in Discrete Probability Theory</title>
      <description><![CDATA[A sufficient statistic is a deterministic function that captures an essential property of a probabilistic function (channel, kernel). Being a sufficient statistic can be expressed nicely in terms of string diagrams, as Tobias Fritz showed recently, in adjoint form. This reformulation highlights the role of split idempotents, in the Fisher-Neyman factorisation theorem. Examples of a sufficient statistic occur in the literature, but mostly in continuous probability. This paper demonstrates that there are also several fundamental examples of a sufficient statistic in discrete probability. They emerge after some combinatorial groundwork that reveals the relevant dagger split idempotents and shows that a sufficient statistic is a deterministic dagger epi.]]></description>
      <pubDate>Wed, 22 Feb 2023 22:35:56 +0000</pubDate>
      <link>https://doi.org/10.46298/entics.10520</link>
      <guid>https://doi.org/10.46298/entics.10520</guid>
      <author>Jacobs, Bart</author>
      <dc:creator>Jacobs, Bart</dc:creator>
      <content:encoded><![CDATA[A sufficient statistic is a deterministic function that captures an essential property of a probabilistic function (channel, kernel). Being a sufficient statistic can be expressed nicely in terms of string diagrams, as Tobias Fritz showed recently, in adjoint form. This reformulation highlights the role of split idempotents, in the Fisher-Neyman factorisation theorem. Examples of a sufficient statistic occur in the literature, but mostly in continuous probability. This paper demonstrates that there are also several fundamental examples of a sufficient statistic in discrete probability. They emerge after some combinatorial groundwork that reveals the relevant dagger split idempotents and shows that a sufficient statistic is a deterministic dagger epi.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Extended Addressing Machines for PCF, with Explicit Substitutions</title>
      <description><![CDATA[Addressing machines have been introduced as a formalism to construct models of the pure, untyped lambda-calculus. We extend the syntax of their programs by adding instructions for executing arithmetic operations on natural numbers, and introduce a reflection principle allowing certain machines to access their own address and perform recursive calls. We prove that the resulting extended addressing machines naturally model a weak call-by-name PCF with explicit substitutions. Finally, we show that they are also well-suited for representing regular PCF programs (closed terms) computing natural numbers.]]></description>
      <pubDate>Wed, 22 Feb 2023 22:35:28 +0000</pubDate>
      <link>https://doi.org/10.46298/entics.10533</link>
      <guid>https://doi.org/10.46298/entics.10533</guid>
      <author>Intrigila, Benedetto</author>
      <author>Manzonetto, Giulio</author>
      <author>Munnich, Nicolas</author>
      <dc:creator>Intrigila, Benedetto</dc:creator>
      <dc:creator>Manzonetto, Giulio</dc:creator>
      <dc:creator>Munnich, Nicolas</dc:creator>
      <content:encoded><![CDATA[Addressing machines have been introduced as a formalism to construct models of the pure, untyped lambda-calculus. We extend the syntax of their programs by adding instructions for executing arithmetic operations on natural numbers, and introduce a reflection principle allowing certain machines to access their own address and perform recursive calls. We prove that the resulting extended addressing machines naturally model a weak call-by-name PCF with explicit substitutions. Finally, we show that they are also well-suited for representing regular PCF programs (closed terms) computing natural numbers.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>A Categorical Normalization Proof for the Modal Lambda-Calculus</title>
      <description><![CDATA[We investigate a simply typed modal $\lambda$-calculus, $\lambda^{\to\square}$, due to Pfenning, Wong and Davies, where we define a well-typed term with respect to a context stack that captures the possible world semantics in a syntactic way. It provides a logical foundation for multi-staged meta-programming. Our main contribution in this paper is a normalization by evaluation (NbE) algorithm for $\lambda^{\to\square}$ which we prove sound and complete. The NbE algorithm is a moderate extension to the standard presheaf model of simply typed $\lambda$-calculus. However, central to the model construction and the NbE algorithm is the observation of Kripke-style substitutions on context stacks, which brings together two previously separate concepts: structural modal transformations on context stacks and substitutions for individual assumptions. Moreover, Kripke-style substitutions allow us to give a formulation for contextual types, which can represent open code in a meta-programming setting. Our work lays the foundation for extending the logical foundation by Pfenning, Wong, and Davies towards building a practical, dependently typed foundation for meta-programming.]]></description>
      <pubDate>Wed, 22 Feb 2023 22:35:02 +0000</pubDate>
      <link>https://doi.org/10.46298/entics.10360</link>
      <guid>https://doi.org/10.46298/entics.10360</guid>
      <author>Hu, Jason Z. S.</author>
      <author>Pientka, Brigitte</author>
      <dc:creator>Hu, Jason Z. S.</dc:creator>
      <dc:creator>Pientka, Brigitte</dc:creator>
      <content:encoded><![CDATA[We investigate a simply typed modal $\lambda$-calculus, $\lambda^{\to\square}$, due to Pfenning, Wong and Davies, where we define a well-typed term with respect to a context stack that captures the possible world semantics in a syntactic way. It provides a logical foundation for multi-staged meta-programming. Our main contribution in this paper is a normalization by evaluation (NbE) algorithm for $\lambda^{\to\square}$ which we prove sound and complete. The NbE algorithm is a moderate extension to the standard presheaf model of simply typed $\lambda$-calculus. However, central to the model construction and the NbE algorithm is the observation of Kripke-style substitutions on context stacks, which brings together two previously separate concepts: structural modal transformations on context stacks and substitutions for individual assumptions. Moreover, Kripke-style substitutions allow us to give a formulation for contextual types, which can represent open code in a meta-programming setting. Our work lays the foundation for extending the logical foundation by Pfenning, Wong, and Davies towards building a practical, dependently typed foundation for meta-programming.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>The Functional Machine Calculus</title>
      <description><![CDATA[This paper presents the Functional Machine Calculus (FMC) as a simple model of higher-order computation with "reader/writer" effects: higher-order mutable store, input/output, and probabilistic and non-deterministic computation. The FMC derives from the lambda-calculus by taking the standard operational perspective of a call-by-name stack machine as primary, and introducing two natural generalizations. One, "locations", introduces multiple stacks, which each may represent an effect and so enable effect operators to be encoded into the abstraction and application constructs of the calculus. The second, "sequencing", is known from kappa-calculus and concatenative programming languages, and introduces the imperative notions of "skip" and "sequence". This enables the encoding of reduction strategies, including call-by-value lambda-calculus and monadic constructs. The encoding of effects into generalized abstraction and application means that standard results from the lambda-calculus may carry over to effects. The main result is confluence, which is possible because encoded effects reduce algebraically rather than operationally. Reduction generates the familiar algebraic laws for state, and unlike in the monadic setting, reader/writer effects combine seamlessly. A system of simple types confers termination of the machine.]]></description>
      <pubDate>Wed, 22 Feb 2023 22:34:42 +0000</pubDate>
      <link>https://doi.org/10.46298/entics.10513</link>
      <guid>https://doi.org/10.46298/entics.10513</guid>
      <author>Heijltjes, Willem</author>
      <dc:creator>Heijltjes, Willem</dc:creator>
      <content:encoded><![CDATA[This paper presents the Functional Machine Calculus (FMC) as a simple model of higher-order computation with "reader/writer" effects: higher-order mutable store, input/output, and probabilistic and non-deterministic computation. The FMC derives from the lambda-calculus by taking the standard operational perspective of a call-by-name stack machine as primary, and introducing two natural generalizations. One, "locations", introduces multiple stacks, which each may represent an effect and so enable effect operators to be encoded into the abstraction and application constructs of the calculus. The second, "sequencing", is known from kappa-calculus and concatenative programming languages, and introduces the imperative notions of "skip" and "sequence". This enables the encoding of reduction strategies, including call-by-value lambda-calculus and monadic constructs. The encoding of effects into generalized abstraction and application means that standard results from the lambda-calculus may carry over to effects. The main result is confluence, which is possible because encoded effects reduce algebraically rather than operationally. Reduction generates the familiar algebraic laws for state, and unlike in the monadic setting, reader/writer effects combine seamlessly. A system of simple types confers termination of the machine.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>The Internal Operads of Combinatory Algebras</title>
      <description><![CDATA[We argue that operads provide a general framework for dealing with polynomials and combinatory completeness of combinatory algebras, including the classical $\mathbf{SK}$-algebras, linear $\mathbf{BCI}$-algebras, planar $\mathbf{BI}(\_)^\bullet$-algebras as well as the braided $\mathbf{BC^\pm I}$-algebras. We show that every extensional combinatory algebra gives rise to a canonical closed operad, which we shall call the internal operad of the combinatory algebra. The internal operad construction gives a left adjoint to the forgetful functor from closed operads to extensional combinatory algebras. As a by-product, we derive extensionality axioms for the classes of combinatory algebras mentioned above.]]></description>
      <pubDate>Wed, 22 Feb 2023 22:34:03 +0000</pubDate>
      <link>https://doi.org/10.46298/entics.10338</link>
      <guid>https://doi.org/10.46298/entics.10338</guid>
      <author>Hasegawa, Masahito</author>
      <dc:creator>Hasegawa, Masahito</dc:creator>
      <content:encoded><![CDATA[We argue that operads provide a general framework for dealing with polynomials and combinatory completeness of combinatory algebras, including the classical $\mathbf{SK}$-algebras, linear $\mathbf{BCI}$-algebras, planar $\mathbf{BI}(\_)^\bullet$-algebras as well as the braided $\mathbf{BC^\pm I}$-algebras. We show that every extensional combinatory algebra gives rise to a canonical closed operad, which we shall call the internal operad of the combinatory algebra. The internal operad construction gives a left adjoint to the forgetful functor from closed operads to extensional combinatory algebras. As a by-product, we derive extensionality axioms for the classes of combinatory algebras mentioned above.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>A Complete Diagrammatic Calculus for Boolean Satisfiability</title>
      <description><![CDATA[We propose a calculus of string diagrams to reason about satisfiability of Boolean formulas, and prove it to be sound and complete. We then showcase our calculus in a few case studies. First, we consider SAT-solving. Second, we consider Horn clauses, which leads us to a new decision method for the equivalence of propositional logic programs under Herbrand model semantics.]]></description>
      <pubDate>Wed, 22 Feb 2023 22:33:34 +0000</pubDate>
      <link>https://doi.org/10.46298/entics.10481</link>
      <guid>https://doi.org/10.46298/entics.10481</guid>
      <author>Gu, Tao</author>
      <author>Piedeleu, Robin</author>
      <author>Zanasi, Fabio</author>
      <dc:creator>Gu, Tao</dc:creator>
      <dc:creator>Piedeleu, Robin</dc:creator>
      <dc:creator>Zanasi, Fabio</dc:creator>
      <content:encoded><![CDATA[We propose a calculus of string diagrams to reason about satisfiability of Boolean formulas, and prove it to be sound and complete. We then showcase our calculus in a few case studies. First, we consider SAT-solving. Second, we consider Horn clauses, which leads us to a new decision method for the equivalence of propositional logic programs under Herbrand model semantics.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Weakening and Iterating Laws using String Diagrams</title>
      <description><![CDATA[Distributive laws are a standard way of combining two monads, providing a compositional approach for reasoning about computational effects in semantics. Situations where no such law exists can sometimes be handled by weakening the notion of distributive law, still recovering a composite monad. A celebrated result from Eugenia Cheng shows that combining $n$ monads is possible by iterating more distributive laws, provided they satisfy a coherence condition called the Yang-Baxter equation. Moreover, the order of composition does not matter, leading to a form of associativity. The main contribution of this paper is to generalise the associativity of iterated composition to weak distributive laws in the case of $n = 3$ monads. To this end, we use string-diagrammatic notation, which significantly helps make increasingly complex proofs more readable. We also provide examples of new weak distributive laws arising from iteration.]]></description>
      <pubDate>Wed, 22 Feb 2023 22:33:04 +0000</pubDate>
      <link>https://doi.org/10.46298/entics.10482</link>
      <guid>https://doi.org/10.46298/entics.10482</guid>
      <author>Goy, Alexandre</author>
      <dc:creator>Goy, Alexandre</dc:creator>
      <content:encoded><![CDATA[Distributive laws are a standard way of combining two monads, providing a compositional approach for reasoning about computational effects in semantics. Situations where no such law exists can sometimes be handled by weakening the notion of distributive law, still recovering a composite monad. A celebrated result from Eugenia Cheng shows that combining $n$ monads is possible by iterating more distributive laws, provided they satisfy a coherence condition called the Yang-Baxter equation. Moreover, the order of composition does not matter, leading to a form of associativity. The main contribution of this paper is to generalise the associativity of iterated composition to weak distributive laws in the case of $n = 3$ monads. To this end, we use string-diagrammatic notation, which significantly helps make increasingly complex proofs more readable. We also provide examples of new weak distributive laws arising from iteration.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Free Commutative Monoids in Homotopy Type Theory</title>
      <description><![CDATA[We develop a constructive theory of finite multisets in Homotopy Type Theory, defining them as free commutative monoids. After recalling basic structural properties of the free commutative-monoid construction, we formalise and establish the categorical universal property of two, necessarily equivalent, algebraic presentations of free commutative monoids using 1-HITs. These presentations correspond to two different equational theories invariably including commutation axioms. In this setting, we prove important structural combinatorial properties of finite multisets. These properties are established in full generality without assuming decidable equality on the carrier set. As an application, we present a constructive formalisation of the relational model of classical linear logic and its differential structure. This leads to constructively establishing that free commutative monoids are conical refinement monoids. Thereon we obtain a characterisation of the equality type of finite multisets and a new presentation of the free commutative-monoid construction as a set-quotient of the list construction. These developments crucially rely on the commutation relation of creation/annihilation operators associated with the free commutative-monoid construction seen as a combinatorial Fock space.]]></description>
      <pubDate>Wed, 22 Feb 2023 22:32:41 +0000</pubDate>
      <link>https://doi.org/10.46298/entics.10492</link>
      <guid>https://doi.org/10.46298/entics.10492</guid>
      <author>Choudhury, Vikraman</author>
      <author>Fiore, Marcelo</author>
      <dc:creator>Choudhury, Vikraman</dc:creator>
      <dc:creator>Fiore, Marcelo</dc:creator>
      <content:encoded><![CDATA[We develop a constructive theory of finite multisets in Homotopy Type Theory, defining them as free commutative monoids. After recalling basic structural properties of the free commutative-monoid construction, we formalise and establish the categorical universal property of two, necessarily equivalent, algebraic presentations of free commutative monoids using 1-HITs. These presentations correspond to two different equational theories invariably including commutation axioms. In this setting, we prove important structural combinatorial properties of finite multisets. These properties are established in full generality without assuming decidable equality on the carrier set. As an application, we present a constructive formalisation of the relational model of classical linear logic and its differential structure. This leads to constructively establishing that free commutative monoids are conical refinement monoids. Thereon we obtain a characterisation of the equality type of finite multisets and a new presentation of the free commutative-monoid construction as a set-quotient of the list construction. These developments crucially rely on the commutation relation of creation/annihilation operators associated with the free commutative-monoid construction seen as a combinatorial Fock space.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Data Layout from a Type-Theoretic Perspective</title>
      <description><![CDATA[The specifics of data layout can be important for the efficiency of functional programs and interaction with external libraries. In this paper, we develop a type-theoretic approach to data layout that could be used as a typed intermediate language in a compiler or to give a programmer more control. Our starting point is a computational interpretation of the semi-axiomatic sequent calculus for intuitionistic logic that defines abstract notions of cells and addresses. We refine this semantics so addresses have more structure to reflect possible alternative layouts without fundamentally departing from intuitionistic logic. We then add recursive types and explore example programs and properties of the resulting language.]]></description>
      <pubDate>Wed, 22 Feb 2023 22:32:16 +0000</pubDate>
      <link>https://doi.org/10.46298/entics.10507</link>
      <guid>https://doi.org/10.46298/entics.10507</guid>
      <author>DeYoung, Henry</author>
      <author>Pfenning, Frank</author>
      <dc:creator>DeYoung, Henry</dc:creator>
      <dc:creator>Pfenning, Frank</dc:creator>
      <content:encoded><![CDATA[The specifics of data layout can be important for the efficiency of functional programs and interaction with external libraries. In this paper, we develop a type-theoretic approach to data layout that could be used as a typed intermediate language in a compiler or to give a programmer more control. Our starting point is a computational interpretation of the semi-axiomatic sequent calculus for intuitionistic logic that defines abstract notions of cells and addresses. We refine this semantics so addresses have more structure to reflect possible alternative layouts without fundamentally departing from intuitionistic logic. We then add recursive types and explore example programs and properties of the resulting language.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
  </channel>
</rss>
