The Linear Abstract Machine


Y. Lafont
Department of Computing, Imperial College
180 Queen’s Gate, London SW7 2BZ, UK

Abstract

Linear Logic [Girard87] provides a refinement of functional programming and suggests a new implementation technique, with the following features:

• a synthesis of strict and lazy evaluation

• a clean semantics of side effects

• no garbage collector.

keywords: categorical combinator, functional programming, garbage collection, intuitionistic logic, linear λ-calculus, lazy evaluation, linear logic, monoidal category, sequent calculus.

Introduction

Let us consider some questions arising in the area of functional programming:

Update

Pure functional programming is elegant but not very efficient. Usually, references are added to make practical languages, but the transparency of the semantics is lost.

In LISP, you don’t need references to modify data. For example, the “function” nconc (physical concatenation of lists, figure 1) replaces the final nil of its first argument with its second one. In doing so, it alters all data sharing the last cell of its first argument.

Is it possible to integrate those impure functions (that alter or destroy their arguments) into a declarative style?

Free cons

In some LISP dialects, the free list is accessible to the user so that you can manage your garbage collection yourself. Sometimes, it is clear that data are no longer useful, and it is reasonable to salvage space, but in general, you may be easily mistaken, with unfortunate consequences!

Is it possible to ensure that data will not be used later on?


Laziness

There are two main evaluation mechanisms of functional languages. The strict one (call by value) is often more efficient, but the lazy one (call by need) allows attractive infinite constructions.

Is it possible to mix strictness and laziness harmoniously?

You may tackle all these problems by means of ad hoc analysis tools, but they become very intricate when you allow higher order functions.

What we propose is a typed calculus taking those implementation details into account. Doing this, we leave the area of functional programming, bringing in:

• a new kind of typed programming language (linear languages)

• a new implementation technique (Linear Abstract Machine).

The underlying logic is the Intuitionistic Linear Logic of J.Y. Girard, just as Intuitionistic Logic is the underlying logic of functional programming (Curry-Howard paradigm). Linear Logic [Girard87] seems to answer many significant questions in computer science (see [Girard86] for example).

Since Linear Logic may be unfamiliar to the reader, we give a short introduction to it (sections 1 to 3). Then we present the Linear Abstract Machine, the main subject of our article (sections 4 and 5), and we give a positive answer to the previous questions (section 6). Linear λ-Calculus (section 7) provides the basis of a programming language that can be implemented on that kind of machine. Quotations and technical details are carried over to the appendices.

This paper is an extended version of [GirLaf].

1 Sequent Calculus

1.1 Gentzen Sequent Calculus

Less known than Natural Deduction, Gentzen Sequent Calculus [Gentzen, Kleene] is perhaps the most elegant presentation of Intuitionistic Logic. There, we have a clear distinction between structural and logical rules (see appendix A).

Structural rules are: exchange, identity (usually considered as an axiom, it can be restricted to atomic formulas), cut, contraction and weakening. The main property of Sequent Calculus is the cut elimination theorem (Hauptsatz): In the proof of any sequent Γ ` A, the cut rule can be eliminated.

From the Hauptsatz, one deduces consistency (` ⊥ cannot be proved) and the subformula property (every provable sequent A1, . . . , An ` B has a proof that contains only subformulas of A1, . . . , An and B).

If you forget the rules for the conjunction, you may hesitate between the following ones:

    Γ ` A    ∆ ` B                  Γ ` A    Γ ` B
    --------------        or        --------------
    Γ, ∆ ` A ∧ B                    Γ ` A ∧ B

Page 3: The Linear Abstract Machine

1 SEQUENT CALCULUS 3

They are equivalent, because you derive the second from the first:

    Γ ` A    Γ ` B
    --------------
    Γ, Γ ` A ∧ B
          ⋮  (contractions)
    Γ ` A ∧ B

... and the first from the second:

    Γ ` A                          ∆ ` B
      ⋮  (weakenings)                ⋮  (weakenings)
    Γ, ∆ ` A                       Γ, ∆ ` B
    -------------------------------------
                Γ, ∆ ` A ∧ B

Note that a similar phenomenon happens with ⊤, which can be characterized by one of the following rules:

    -----          -------
    ` ⊤     or     Γ ` ⊤

1.2 Girard Sequent Calculus

If we remove contraction and weakening from Sequent Calculus, the previous rules are no longer equivalent, and correspond to distinct connectors.

We obtain Girard Sequent Calculus for Intuitionistic Linear Logic. The connectors are ⊗ (tensor product), 1 (tensor unit), −◦ (linear implication), & (direct product), t (direct unit), ⊕ (direct sum) and 0 (direct zero)¹.

Structural rules:

    Γ, A, B, ∆ ` C
    --------------  (exchange)
    Γ, B, A, ∆ ` C

    -----  (identity)
    A ` A

    Γ ` A    ∆, A ` B
    -----------------  (cut)
    Γ, ∆ ` B

    (no contraction)        (no weakening)

Logical rules:

    Γ ` A    ∆ ` B
    --------------
    Γ, ∆ ` A ⊗ B

    Γ, A, B ` C
    --------------
    Γ, A ⊗ B ` C

    -----
    ` 1

    Γ ` A
    ----------
    Γ, 1 ` A

¹ In Classical Linear Logic [Girard87], there is another disjunction (the tensor sum).


    Γ, A ` B
    ------------
    Γ ` A −◦ B

    Γ ` A    ∆, B ` C
    --------------------
    Γ, ∆, A −◦ B ` C

    Γ ` A    Γ ` B
    --------------
    Γ ` A & B

    Γ, A ` C
    --------------
    Γ, A & B ` C

    Γ, B ` C
    --------------
    Γ, A & B ` C

    -------
    Γ ` t

    Γ ` A
    ------------
    Γ ` A ⊕ B

    Γ ` B
    ------------
    Γ ` A ⊕ B

    Γ, A ` C    Γ, B ` C
    --------------------
    Γ, A ⊕ B ` C

    ----------
    Γ, 0 ` A

In Gentzen Sequent Calculus, a sequent A1, . . . , An ` B means that B is a consequence of A1 ∧ . . . ∧ An. Here, it means that B is a consequence of A1 ⊗ . . . ⊗ An.

Theorem 1 (Hauptsatz for Intuitionistic Linear Logic) In the proof of any sequent Γ ` A, the cut rule can be eliminated.

The demonstration is similar to Gentzen’s, and even simpler because of the absence of contraction and weakening.

1.3 Examples of proofs

From now on, linear means linear intuitionistic.

We have a straightforward translation from linear formulas to intuitionistic ones, mapping ⊗ and & to ∧ (conjunction), 1 and t to ⊤ (true), −◦ to ⇒ (implication), ⊕ to ∨ (disjunction) and 0 to ⊥ (false).

If a linear formula is provable, its translation is a provable intuitionistic formula. For example, the linear formula (A & B) −◦ A (where A and B are atomic formulas) admits the following proof:

    A ` A
    ------------
    A & B ` A
    ----------------
    ` (A & B) −◦ A

Of course, its translation (A ∧ B) ⇒ A is provable.

But the converse is false: The linear formula (A ⊗ B) −◦ A is not provable, although its translation (A ∧ B) ⇒ A is still provable.

Let us show that (A ⊗ B) −◦ A is not provable. A cut-free proof of ` (A ⊗ B) −◦ A ends inevitably like this:

        ⋮
    A, B ` A
    --------------
    A ⊗ B ` A
    ----------------
    ` (A ⊗ B) −◦ A

But without weakening, it is clearly impossible to deduce A,B ` A.


We have the following distributivity property:

A⊗ (B ⊕ C) ≡ (A⊗B)⊕ (A⊗ C)

Here A ≡ B means that both A ` B and B ` A are provable.

Of course, it is not true if you replace ⊗ by & .

It is very easy to build a cut-free proof in a bottom-up fashion, for example:

    A ` A    B ` B                        A ` A    C ` C
    ----------------                      ----------------
    A, B ` A ⊗ B                          A, C ` A ⊗ C
    --------------------------            --------------------------
    A, B ` (A ⊗ B) ⊕ (A ⊗ C)              A, C ` (A ⊗ B) ⊕ (A ⊗ C)
    --------------------------------------------------------------
                A, B ⊕ C ` (A ⊗ B) ⊕ (A ⊗ C)
                ---------------------------------
                A ⊗ (B ⊕ C) ` (A ⊗ B) ⊕ (A ⊗ C)

1.4 Classical Linear Logic

In [Girard87], you will not find Intuitionistic but Classical Linear Logic.

At first sight, Classical Linear Logic is to Intuitionistic Linear Logic what Classical Logic is to Intuitionistic Logic. For example, you have the negation as a primitive connector, and the excluded middle law. However, Classical Linear Logic is constructive whereas Classical Logic is not!

We are now convinced that Classical Linear Logic is the most promising direction, but the intuitionistic case is probably more accessible and it was the first to be implemented.

2 Categorical Combinatory Logic

2.1 The intuitionistic case

Categorical Combinatory Logic (appendix B) [Lambek68, Lambek80, Curien85, Curien86] provides an alternative presentation of Intuitionistic Logic.

A categorical combinator ϕ : A → B is the representation of a proof of A ` B (one formula on the left side). In a way, Categorical Combinatory Logic is more elementary than Sequent Calculus. Its suitability for implementation is shown in [CouCurMau, MauSu].

2.2 Linear Combinators

The linear combinators are the basic operators for the theory of symmetric monoidal closed categories with finite products and coproducts (appendix C):

Sequential compositors:

    ϕ : A → B    ψ : B → C
    -----------------------        id : A → A
    ψ ◦ ϕ : A → C


Parallel compositors:

    ϕ : A → B    ψ : C → D
    ------------------------        1 : 1 → 1
    ϕ ⊗ ψ : A ⊗ C → B ⊗ D

Arrange combinators:

    assl : A ⊗ (B ⊗ C) ↔ (A ⊗ B) ⊗ C : assr
    insl : A ↔ 1 ⊗ A : dell
    exch : A ⊗ B ↔ B ⊗ A : exch

These combinators are invertible (we write ϕ : A ↔ B : ψ for ϕ : A → B and ψ : B → A). The names are abbreviations: assl (associate left), assr (associate right), insl (insert left), dell (delete left) and exch (exchange).

Logical combinators:

    ϕ : A ⊗ B → C
    ----------------------        app : (A −◦ B) ⊗ A → B
    cur(ϕ) : A → B −◦ C

    ϕ : A → B    ψ : A → C
    -----------------------        fst : A & B → A        snd : A & B → B        〈〉 : A → t
    〈ϕ, ψ〉 : A → B & C

    inl : A → A ⊕ B        inr : B → A ⊕ B

    ϕ : A → C    ψ : B → C
    -----------------------        {} : 0 → A
    {ϕ | ψ} : A ⊕ B → C
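For later reference, here is a minimal sketch of these combinators as an OCaml datatype (our own notation, not part of the paper; the typing of the combinators is not enforced):

    (* Linear combinators as an untyped syntax tree; the constructor names
       mirror the combinators listed above. *)
    type comb =
      | Id                          (* id : A -> A *)
      | Comp of comb * comb         (* psi o phi *)
      | Tens of comb * comb         (* phi (x) psi *)
      | One                         (* 1 : 1 -> 1 *)
      | Assl | Assr | Insl | Dell | Exch
      | Cur of comb | App           (* cur(phi), app *)
      | Pair of comb * comb         (* <phi, psi> *)
      | Fst | Snd | Unit            (* fst, snd, <> *)
      | Inl | Inr
      | Alt of comb * comb          (* {phi | psi} *)
      | Zero                        (* {} : 0 -> A *)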

Proposition 1 The two systems (Sequent Calculus and Categorical Combinatory Logic) are equivalent: for every combinator A → B there is a proof of A ` B, and for every proof of A1, . . . , An ` B there is a combinator A1 ⊗ . . . ⊗ An → B.

The proof is almost straightforward, the only problematic rules being:

    Γ, A ` C    Γ, B ` C
    --------------------        ----------
    Γ, A ⊕ B ` C                Γ, 0 ` A

But you can use the linear implication to send Γ to the right side.

For example, the proof of A ⊗ (B ⊕ C) ` (A ⊗ B) ⊕ (A ⊗ C) corresponds to the following combinator:

app ◦ ({cur(inl ◦ exch) | cur(inr ◦ exch)}⊗ id) ◦ exch : A⊗ (B⊕C)→ (A⊗B)⊕ (A⊗C)


2.3 About the products

A tensor product is clearly weaker than a cartesian one. For example, it allows permutation (exch), but no access (fst, snd).

The main point is that ⊗ has a closure (−◦), whereas the cartesian (or direct) product & hasn’t. In fact, A ⊗ B can be considered as a type of strict pairs, and A & B as a type of lazy pairs. Indeed, fst and snd are not strict in their arguments (only one of them will be used), but the linear application app needs both the function and its argument.

3 The Modality

It is possible to define recursive data types as in the intuitionistic case (e.g. concrete data types of ML).

3.1 Inductive data types

We have the usual equation defining the type of natural numbers:

Nat = 1⊕Nat

This is clearly enough in a programming language where general recursive programs are allowed. But for a logical system, we need an explicit primitive iteration operator, and Nat is characterized by the following rules:

    zero : 1 → Nat        succ : Nat → Nat

    ϕ : 1 → A    ψ : A → A
    -----------------------
    iter(ϕ, ψ) : Nat → A

The reader may find explicit rules for the following inductive data types:

List(A) = 1⊕ (A⊗ List(A))

BinaryTree(A) = A⊕ 1⊕ (BinaryTree(A)⊗BinaryTree(A))
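As an illustration only (not part of the paper), the characterization of Nat can be transcribed into OCaml, with iteration in place of general recursion; the function names are ours:

    (* Nat = 1 (+) Nat, with a primitive iteration operator. *)
    type nat = Zero | Succ of nat

    (* iter phi psi : nat -> 'a, where phi : unit -> 'a plays zero's role
       and psi : 'a -> 'a plays succ's role. *)
    let rec iter phi psi = function
      | Zero -> phi ()
      | Succ n -> psi (iter phi psi n)

    (* Example: addition of m, obtained by iterating Succ starting from m. *)
    let add m n = iter (fun () -> m) (fun a -> Succ a) n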

3.2 Of course!

If you replace ⊕ by & in the previous equations, you obtain the (dual) notion of projective data type. The modality !A (read “of course A”) may be characterized by the following equation:

!A = A& 1 & ( !A⊗ !A)

The corresponding rules are:

    read : !A → A        kill : !A → 1        dupl : !A → !A ⊗ !A

    ϕ : A → B    ε : A → 1    δ : A → A ⊗ A
    ----------------------------------------
    make(ϕ, ε, δ) : A → !B


The modality is exactly what we need to recover what is lost in Linear Logic: contraction and weakening. The similarity with BinaryTree(A) is obvious, but that definition is not complete (see appendix D). However, it is enough for our purpose.

!A is a free coalgebra over A:

Proposition 2 Every ϕ : !A → B can be lifted to a combinator lift(ϕ) : !A → !B.

Moreover, we have:

Proposition 3 The modality maps direct product to tensor product:

    ! t ≡ 1        ! (A & B) ≡ !A ⊗ !B

To prove that proposition, we have to construct the following combinators:

    subl : ! t ↔ 1 : crys        crac : ! (A & B) ↔ !A ⊗ !B : glue

The constructions are detailed in appendix E.

3.3 Intuitionistic Logic recovered

With the modality “of course”, it is possible to recover the expressive power of Intuitionistic Logic. More precisely, we have an embedding of Intuitionistic Logic into Linear Logic. Hence, the linear connectors appear as more primitive than the intuitionistic ones.

The embedding maps an intuitionistic formula A to a linear one |A|:

• |A| = A when A is an atomic formula.

• |A ∧ B| = |A| & |B| and |⊤| = t.

• |A⇒ B| = ! |A| −◦ |B|.

• |A ∨B| = ! |A|⊕ ! |B| and |⊥| = 0.
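The translation just defined is easy to transcribe; here is a small OCaml sketch (ours, not from the paper; the datatype and constructor names are invented for illustration):

    (* Intuitionistic and linear formulas. *)
    type iform = Atom of string | And of iform * iform | Top
               | Imp of iform * iform | Or of iform * iform | Bot

    type lform = LAtom of string | Tensor of lform * lform | LOne
               | With of lform * lform | LTop | Lolli of lform * lform
               | Plus of lform * lform | LZero | Bang of lform

    (* |A| : the embedding of Intuitionistic Logic into Linear Logic. *)
    let rec trans = function
      | Atom a     -> LAtom a
      | And (a, b) -> With (trans a, trans b)
      | Top        -> LTop
      | Imp (a, b) -> Lolli (Bang (trans a), trans b)
      | Or (a, b)  -> Plus (Bang (trans a), Bang (trans b))
      | Bot        -> LZero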

Theorem 2 An intuitionistic formula A is provable (in Categorical Combinatory Logic) if and only if its translation |A| is provable (in Linear Categorical Combinatory Logic).

To prove that A is provable when |A| is provable, we simply extend the translation of 1.3 ( !A has the same interpretation as A).

For the converse, we need the following lemma:

Lemma 1 For every categorical combinator ϕ : A → B, there is a linear one |ϕ| : ! |A| → |B|.

The proof is a straightforward induction using propositions 2 and 3. For example, if ϕ : A → B gives |ϕ| : ! |A| → |B|, and ψ : B → C gives |ψ| : ! |B| → |C|, then |ψ ◦ ϕ| = |ψ| ◦ lift(|ϕ|) : ! |A| → |C|.

The previous theorem means that Linear Logic with the modality has the power of Intuitionistic Logic. As a consequence, functional languages are implementable on the Linear Abstract Machine (section 5). However, this implementation is not realistic because of the translation from categorical combinators to linear ones, which generates huge code.


3.4 Sequent rules for the modality

In fact, Girard considered the modality as a new connector characterized by the following sequent rules:

    Γ, A ` B
    ------------
    Γ, !A ` B

    Γ ` B
    ------------
    Γ, !A ` B

    Γ, !A, !A ` B
    --------------
    Γ, !A ` B

    ! Γ ` A
    ----------
    ! Γ ` !A

! Γ represents a sequence !A1, . . . , !An in the last rule, which has apparently no nice translation into Categorical Combinators. Conversely, our rules cannot easily be expressed in Sequent Calculus, but our modality is stronger (Girard’s rules are derivable). Moreover, we do not introduce new concepts (modulo recursion, the modality is definable from direct and tensor product). We can say that our modality is an implementation of Girard’s one.

4 The Theory of Execution

4.1 Atomic and primitive combinators

We start from a given graph whose points are called atomic types and whose arrows are called atomic combinators. For example, the atomic types may be the states, and the atomic combinators the basic actions, of robots controlled by our computer.

A tensor product of atomic types is called a primitive type and a (sequential and parallel) composition of arrange and atomic combinators is called a primitive combinator (see section 2). Clearly, domains and codomains of primitive combinators are primitive types.

The effects of programs will be primitive combinators.

4.2 Canonical combinators

For every type A, we single out a class of combinators µ : X → A (where X varies over primitive types) that we call canonical combinators of type A²:

• If A is atomic, the only canonical combinator of type A is id : A → A.

• The canonical combinators of type A ⊗ B are the µ ⊗ ν : X ⊗ Y → A ⊗ B where µ : X → A and ν : Y → B are canonical combinators.

• The only canonical combinator of type 1 is 1.

• For any other type A, the canonical combinators of type A are the γ ◦ µ : X → A where γ : Y → A is a constructor (any combinator of the form cur(ϕ), 〈ϕ, ψ〉, 〈〉, inl, inr) and µ : X → Y is canonical³.

For example, the canonical combinators of type A −◦ B are the cur(ϕ) ◦ µ : X → A −◦ B where ϕ : Y ⊗ A → B, and µ : X → Y is canonical.

The data handled by programs will be canonical combinators.

² Note the similarity with the canonical terms of [Martinloef].
³ This definition is not well founded since Y is not, in general, a subformula of A. It doesn’t matter: what is defined inductively is the relation “µ is a canonical combinator of type A”.


4.3 Execution relation

We define an execution relation, written here

    ϕ : µ ⇝ ν  [α]

when ϕ : A → B, µ : X → A, ν : Y → B and α : X → Y, where µ, ν are canonical and α is primitive. (Diagrammatically: µ and ν are the vertical sides of a square, ϕ the top side from A to B, and α the bottom side from X to Y.)

This relation will have the following meaning: “The program ϕ applied to the datum µ gives the datum ν, with the effect α”. The notion of effect was not introduced deliberately: it comes naturally when you try to define an operational semantics of linear combinators.

The execution relation is defined inductively:

The execution relation is defined inductively:

    α : id ⇝ id  [α]        (α an atomic combinator)

    ϕ : µ ⇝ µ′  [α]        ψ : µ′ ⇝ µ′′  [β]
    ------------------------------------------
    ψ ◦ ϕ : µ ⇝ µ′′  [β ◦ α]

    id : µ ⇝ µ  [id]

    ϕ : µ ⇝ µ′  [α]        ψ : ν ⇝ ν′  [β]
    -----------------------------------------
    ϕ ⊗ ψ : µ ⊗ ν ⇝ µ′ ⊗ ν′  [α ⊗ β]

    1 : 1 ⇝ 1  [1]

    assl : λ ⊗ (µ ⊗ ν) ⇝ (λ ⊗ µ) ⊗ ν  [assl]
    assr : (λ ⊗ µ) ⊗ ν ⇝ λ ⊗ (µ ⊗ ν)  [assr]
    insl : µ ⇝ 1 ⊗ µ  [insl]
    dell : 1 ⊗ µ ⇝ µ  [dell]
    exch : µ ⊗ ν ⇝ ν ⊗ µ  [exch]

    γ : µ ⇝ γ ◦ µ  [id]        (γ a constructor)

    ϕ : λ ⊗ µ ⇝ ν  [α]
    ------------------------------------
    app : (cur(ϕ) ◦ λ) ⊗ µ ⇝ ν  [α]


    ϕ : µ ⇝ ν  [α]
    -----------------------------
    fst : 〈ϕ, ψ〉 ◦ µ ⇝ ν  [α]

    ψ : µ ⇝ ν  [α]
    -----------------------------
    snd : 〈ϕ, ψ〉 ◦ µ ⇝ ν  [α]

    ϕ : µ ⇝ ν  [α]
    -----------------------------
    {ϕ | ψ} : inl ◦ µ ⇝ ν  [α]

    ψ : µ ⇝ ν  [α]
    -----------------------------
    {ϕ | ψ} : inr ◦ µ ⇝ ν  [α]

You can easily check:

Proposition 4 If ϕ : µ ⇝ ν [α], then the corresponding square commutes (see appendix C.3): ν ◦ α = ϕ ◦ µ : X → B.

To execute a program ϕ on a datum µ, you apply the execution rules in a bottom-up fashion until you find a result ν and an effect α such that ϕ : µ ⇝ ν [α]. This is obviously deterministic, but it is not clear whether it terminates. The main result is:

Theorem 3 (termination) For every ϕ : A → B and every canonical combinator µ of type A, there is a canonical combinator ν of type B and a primitive combinator α such that ϕ : µ ⇝ ν [α].

This theorem is proved in appendix F.

ν and α are uniquely determined by ϕ and µ. Since ν is the result, and α the effect, we make an abuse of notation⁴, writing ν = ϕ µ and α = ∂ ϕ µ.

It is then possible to reformulate the execution rules in a nice way, for example:

(ϕ ◦ ψ) µ = ϕ (ψ µ) ∂ (ϕ ◦ ψ) µ = ∂ ϕ (ψ µ) ◦ ∂ ψ µ

⁴ Here, the combinator ϕ is considered as a program taking an input of type A and returning an output of type B. On the other hand, the canonical combinators µ and ν are considered as data of types A and B, respectively.
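These reformulated equations read directly as an interpreter. The following OCaml sketch is our own simplification (not the machine of section 5): data are canonical combinators, and the effect is recorded as a list of atomic-action names instead of a primitive combinator, so the arrange steps leave no trace.

    (* Programs: a fragment of the linear combinators of section 2. *)
    type comb =
      | Atomic of string            (* an atomic combinator, named by a string *)
      | Id | Comp of comb * comb    (* Comp (psi, phi) is psi o phi *)
      | Tens of comb * comb | One | Exch
      | Cur of comb | App
      | Pair of comb * comb | Fst | Snd
      | Inl | Inr | Alt of comb * comb

    (* Data: canonical combinators. A constructor applied to a datum is a closure. *)
    type datum =
      | DId                         (* id, at an atomic type *)
      | DUnit                       (* the canonical combinator 1 *)
      | DTens of datum * datum      (* mu (x) nu *)
      | DClos of comb * datum       (* gamma o mu, gamma a constructor *)

    (* eval phi mu = (nu, alpha): the result nu and the effect alpha (a trace). *)
    let rec eval phi mu =
      match phi, mu with
      | Atomic a, DId -> DId, [a]
      | Id, _ -> mu, []
      | Comp (psi, ph), _ ->
          let mu', a = eval ph mu in
          let mu'', b = eval psi mu' in
          mu'', a @ b
      | Tens (ph, ps), DTens (m, n) ->
          let m', a = eval ph m and n', b = eval ps n in
          DTens (m', n'), a @ b
      | One, DUnit -> DUnit, []
      | Exch, DTens (m, n) -> DTens (n, m), []
      | (Cur _ | Pair _ | Inl | Inr), _ -> DClos (phi, mu), []
      | App, DTens (DClos (Cur ph, l), m) -> eval ph (DTens (l, m))
      | Fst, DClos (Pair (ph, _), m) -> eval ph m
      | Snd, DClos (Pair (_, ps), m) -> eval ps m
      | Alt (ph, _), DClos (Inl, m) -> eval ph m
      | Alt (_, ps), DClos (Inr, m) -> eval ps m
      | _ -> failwith "ill-typed program or datum"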


5 The Linear Abstract Machine

The Linear Abstract Machine (LAM) is a cousin of the Categorical Abstract Machine (CAM) [CouCurMau]. In particular it uses the same basic ingredients: code, environment and stack. It is a sequential implementation of the execution rules (section 4).

5.1 Environment and code

The environment is a representation of a canonical combinator:

• The canonical combinator µ⊗ν is represented by a pair (u, v) (strict pair) and 1 by ().

• The canonical combinator γ ◦µ is represented by a pair γ ·u where γ is a piece of code.

The code is a list of primitive instructions. Sequential composition becomes concatenation (in opposite order), but parallel composition has to be sequentialized:

• ϕ ◦ ψ becomes ψ@ϕ (concatenation) and id becomes [] (empty list).

• ϕ ⊗ ψ (= (id ⊗ ψ) ◦ (ϕ ⊗ id)) becomes [Splitl] @ϕ@ [Consl; Splitr] @ψ@ [Consr] and 1 (= id) becomes [].

Splitl pushes the second component of the environment and gets to the first one, whereas Consl reconstructs the pair. Splitr and Consr are the symmetric instructions. Of course, the sequence [Consl; Splitr] can be abbreviated into a single instruction Swap that exchanges the environment with the top of the stack.

For the other combinators, the translation is straightforward: assl becomes [Assl], . . . , cur(ϕ) becomes [Cur(ϕ)], 〈ϕ, ψ〉 becomes [Pair(ϕ, ψ)], . . . , {ϕ | ψ} becomes [Altv(ϕ, ψ)], . . .
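A hedged OCaml sketch of this compilation scheme (our own reconstruction; the instruction names follow the text, and the Swap abbreviation is not used):

    (* Source combinators (a fragment) and LAM instructions. *)
    type comb =
      | Id | Comp of comb * comb | Tens of comb * comb
      | Assl | Assr | Insl | Dell | Exch
      | Cur of comb | App | Pair of comb * comb | Fst | Snd
      | Inl | Inr | Alt of comb * comb

    type instr =
      | Splitl | Consl | Splitr | Consr
      | IAssl | IAssr | IInsl | IDell | IExch
      | ICur of instr list | IApp
      | IPair of instr list * instr list | IFst | ISnd
      | IInl | IInr | IAltv of instr list * instr list

    (* Sequential composition becomes concatenation (in opposite order);
       parallel composition is sequentialized as described above. *)
    let rec compile = function
      | Id -> []
      | Comp (psi, phi) -> compile phi @ compile psi
      | Tens (phi, psi) ->
          [Splitl] @ compile phi @ [Consl; Splitr] @ compile psi @ [Consr]
      | Assl -> [IAssl] | Assr -> [IAssr] | Insl -> [IInsl]
      | Dell -> [IDell] | Exch -> [IExch]
      | Cur phi -> [ICur (compile phi)]
      | App -> [IApp]
      | Pair (phi, psi) -> [IPair (compile phi, compile psi)]
      | Fst -> [IFst] | Snd -> [ISnd]
      | Inl -> [IInl] | Inr -> [IInr]
      | Alt (phi, psi) -> [IAltv (compile phi, compile psi)]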


5.2 Execution

We can now specify the machine:

Linear Abstract Machine

Each transition is written here as Before → After, with states given as (code, environment, stack):

    Splitl :: C,  (u, v),  S             →   C,   u,            v :: S
    Consl :: C,   u,       v :: S        →   C,   (u, v),       S
    Splitr :: C,  (u, v),  S             →   C,   v,            u :: S
    Consr :: C,   v,       u :: S        →   C,   (u, v),       S
    Assl :: C,    (u, (v, w)),  S        →   C,   ((u, v), w),  S
    Assr :: C,    ((u, v), w),  S        →   C,   (u, (v, w)),  S
    Insl :: C,    u,       S             →   C,   ((), u),      S
    Dell :: C,    ((), u), S             →   C,   u,            S
    Exch :: C,    (u, v),  S             →   C,   (v, u),       S
    γ :: C,       u,       S             →   C,   γ · u,        S

    App :: C,     (Cur(C′) · u, v),  S   →   C′,  (u, v),       C :: S
    Fst :: C,     Pair(C′, C′′) · u, S   →   C′,  u,            C :: S
    Snd :: C,     Pair(C′, C′′) · u, S   →   C′′, u,            C :: S

    Altv(C′, C′′) :: C,  Inl · u,  S     →   C′,  u,            C :: S
    Altv(C′, C′′) :: C,  Inr · u,  S     →   C′′, u,            C :: S

    [],           u,       C :: S        →   C,   u,            S

We use the notation x :: [x1; . . . ;xn] for the list [x;x1; . . . ;xn].
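The table translates directly into a transition function. Below is a hedged OCaml sketch (ours; instruction and value constructors are named after the table, and the numeric and I/O extensions of 5.3 are omitted):

    (* LAM instructions and machine values. *)
    type instr =
      | Splitl | Consl | Splitr | Consr
      | Assl | Assr | Insl | Dell | Exch
      | Cur of instr list | App
      | Pair of instr list * instr list | Fst | Snd
      | Inl | Inr | Altv of instr list * instr list

    type value =
      | Unit                                 (* () *)
      | Strict of value * value              (* (u, v) *)
      | Clos of instr * value                (* gamma . u, gamma a constructor *)

    type stack_item = V of value | K of instr list   (* saved datum or return code *)

    (* run code environment stack: executes until the code and stack are exhausted. *)
    let rec run code env stack =
      match code, env, stack with
      | [], u, K c :: s                -> run c u s
      | [], u, []                      -> u                      (* final result *)
      | Splitl :: c, Strict (u, v), s  -> run c u (V v :: s)
      | Consl :: c, u, V v :: s        -> run c (Strict (u, v)) s
      | Splitr :: c, Strict (u, v), s  -> run c v (V u :: s)
      | Consr :: c, v, V u :: s        -> run c (Strict (u, v)) s
      | Assl :: c, Strict (u, Strict (v, w)), s -> run c (Strict (Strict (u, v), w)) s
      | Assr :: c, Strict (Strict (u, v), w), s -> run c (Strict (u, Strict (v, w))) s
      | Insl :: c, u, s                -> run c (Strict (Unit, u)) s
      | Dell :: c, Strict (Unit, u), s -> run c u s
      | Exch :: c, Strict (u, v), s    -> run c (Strict (v, u)) s
      | (Cur _ | Pair _ | Inl | Inr as g) :: c, u, s -> run c (Clos (g, u)) s
      | App :: c, Strict (Clos (Cur c', u), v), s -> run c' (Strict (u, v)) (K c :: s)
      | Fst :: c, Clos (Pair (c', _), u), s       -> run c' u (K c :: s)
      | Snd :: c, Clos (Pair (_, c''), u), s      -> run c'' u (K c :: s)
      | Altv (c', _) :: c, Clos (Inl, u), s       -> run c' u (K c :: s)
      | Altv (_, c'') :: c, Clos (Inr, u), s      -> run c'' u (K c :: s)
      | _ -> failwith "stuck: ill-typed code or environment"

In the real machine the cells freed by Splitr and by the destructors are salvaged immediately (see 6.2); this garbage-free behaviour is not visible in a sketch that relies on OCaml's own memory management.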

5.3 Possible extensions

It is of course possible to add machine numbers, with primitive instructions for arithmetic, for example (Before → After, with states given as (code, environment)):

    Num(p) :: C,  ()        →   C,  p
    Sum :: C,     (p, q)    →   C,  p + q

In fact, because of the linear constraints, primitive instructions to duplicate or erase numbers will be necessary as well.

It is also possible to add primitive instructions for external effects, e.g. inputs and outputs:

    Input :: C,   ()    →   C,  c         (effect: c is received from the keyboard)
    Output :: C,  c     →   C,  ()        (effect: c is sent to the screen)

Those effects (especially the outputs) are similar to the atomic combinators we considered in the theoretical model of section 4.


5.4 Examples of execution

If α and β are instructions with some effect (for example, outputs), the program fst ◦ 〈α, β〉 is executed without the effect of β:

    instruction         environment              stack    effect
    Pair([α], [β])      id
    Fst                 Pair([α], [β]) · id
    α                   id                       []       α
                        id                       []

What happens now if we execute the program α⊗ β?

    instruction     environment        stack    effect
    Splitl          (id, id)
    α               id                 id       α
    Consl           id                 id
    Splitr          (id, id)
    β               id                 id       β
    Consr           id                 id
                    (id, id)

Because we choose to implement parallel composition in a specific order, α is executed before β. But that order is not specified by the program: We have a form of arbitrary interleaving!

In fact, if T is the (atomic) type representing a terminal⁵, α ⊗ β is of type T ⊗ T → T ⊗ T. It needs two terminals!

Technically, we replace the datum id (which contains no information) by the address (or port) of the terminal:

    instruction     environment         stack     effect
    Splitl          (port1, port2)
    α               port1               port2     α on port1
    Consl           port1               port2
    Splitr          (port1, port2)
    β               port2               port1     β on port2
    Consr           port2               port1
                    (port1, port2)

There is no trouble since effects are executed on independent terminals.

⁵ A console, not a terminal object in a category!


5.5 Looping code

For the modality we do not need specific instructions. Indeed, at the implementation level, we have no reason to forbid looping code, so we use the recursive definitions:

!A = A & (1 & ( !A ⊗ !A))

    read = fst : !A → A        kill = fst ◦ snd : !A → 1        dupl = snd ◦ snd : !A → !A ⊗ !A

make(ϕ, ε, δ) = π where π = 〈ϕ, 〈ε, (π ⊗ π) ◦ δ〉〉

Therefore, the combinator make( , , ) is compiled into looping code.
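At this level the cyclic definition of make is just ordinary recursion. A hedged OCaml sketch (ours), where !B is modelled lazily by its three destructors:

    (* !B as a lazy record of read, kill and dupl. *)
    type 'b bang = {
      read : unit -> 'b;
      kill : unit -> unit;
      dupl : unit -> 'b bang * 'b bang;
    }

    (* make phi eps delta, applied to a datum x of type A, is a looping
       (recursive) value, like the cyclic combinator pi above. *)
    let rec make phi eps delta x = {
      read = (fun () -> phi x);
      kill = (fun () -> eps x);
      dupl = (fun () -> let x1, x2 = delta x in
                        (make phi eps delta x1, make phi eps delta x2));
    }

For example, make (fun n -> n) (fun _ -> ()) (fun n -> (n, n)) 3 yields a value from which arbitrarily many copies of 3 can be extracted.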

6 Main features

Here we answer the questions of the introduction.

6.1 Side effects

Nothing forbids us from adding instructions for side effects, that is, internal modification of data.

For example, you can introduce a predefined type of difference lists whose objects are represented by a pair of pointers (the head and the end of the list), and a primitive instruction for physical concatenation (as nconc).

That, of course, you can do in functional programming too, but then you expose yourself to perverse side effects, since physical replacement may affect other parts of the environment. Nothing of the sort will happen here, for a very simple reason: There is no sharing!

6.2 Memory allocation

Consider the memory allocation of the Categorical Abstract Machine (figure 2). The environment is a graph whose nodes may be shared or orphan. It is even possible to have isolated structures.

Because there are shared nodes, it is not safe to salvage a cell after an access.

Because there are orphan nodes, it is useful to recover all unreachable cells when no free cell is available (garbage collection).

With the Linear Abstract Machine, the environment is a tree (without shared or orphan nodes). More precisely, there is a tree for the register and for each component of the stack (figure 3).

Because there is no shared node, it is safe to salvage a cell after an access. Consider the typical example of Splitr (figure 4). After access, the cell returns to the paradise of free cons! Access is not the only case: when you execute a destructor (App, Fst, Snd, Altv(C′, C′′)), you have to salvage the cell of the closure.

Because there is no orphan node, the memory is full only in hopeless situations.

Finally, the Linear Abstract Machine is a machine without a garbage collector. In other words, garbage collection is part of the job of the basic instructions.


6.3 Evaluation strategy

Connectors −◦, &, ⊕ could be described as lazy, because the corresponding data are not evaluated: a constructor (Cur(C), Pair(C′, C′′), Inl or Inr) builds a closure which is evaluated by a destructor (App, Fst, Snd or Altv(C′, C′′)).

• A datum of type A −◦ B is a Cur(C) · u where C corresponds to a combinator X ⊗ A → B and u is the environment of type X: it is a closure.

• A datum of type A & B is a Pair(C′, C′′) · u where C′ and C′′ correspond to combinators X → A and X → B and u is the environment of type X.

• A datum of type A ⊕ B is a constructor Inl or Inr with a datum of type A or B: it is the usual representation of coproducts.

A ⊗ B is the type of strict pairs, and A & B is the type of lazy pairs. Of course, they correspond to different uses: strict structures are well adapted to permutation (e.g. sorting) whereas lazy ones are adapted to access (e.g. menus).

In short, strictness and laziness coexist harmoniously in the same type system. The evaluation strategy ensures that only necessary computations are performed.
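The strict/lazy distinction can be mimicked in an ordinary functional language; here is a hedged OCaml sketch (ours): a strict pair already contains both components, while a lazy pair is a pair of thunks of which at most one will be forced.

    (* A (x) B : a strict pair, both components already computed. *)
    type ('a, 'b) tensor = 'a * 'b

    (* A & B : a lazy pair; each projection forces only its own component. *)
    type ('a, 'b) wth = { fst : unit -> 'a; snd : unit -> 'b }

    let pair f g x = { fst = (fun () -> f x); snd = (fun () -> g x) }   (* <f, g> *)
    let first p = p.fst ()    (* fst : A & B -> A *)
    let second p = p.snd ()   (* snd : A & B -> B *)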

7 λ-Calculus

7.1 Linear constraints

The absence of contraction and weakening in Sequent Calculus corresponds to linear constraints in λ-Calculus.

We use λ-Calculus with explicit pair and conditional:

term:
    variable
    constant
    ( term , term )
    λ pattern . term
    term term
    if term then term else term

pattern:
    variable
    ()
    ( pattern , pattern )


In a closed term, a variable must occur twice: once in a pattern, where it is bound, and once in the scope of this pattern. The conditional follows a special rule, since the two members of the alternative must share the environment.

• Free x = {x} (x variable)

• Free c = ∅ (c constant)

• Free (M, N) = Free M ∪ Free N (Free M and Free N must be disjoint)

• Free (λP . M) = Free M \ Free P (Free P must be a subset of Free M)

• Free (M N) = Free M ∪ Free N (Free M and Free N must be disjoint)

• Free (if M then N else N′) = Free M ∪ Free N (Free M and Free N must be disjoint, Free N and Free N′ must be equal)

For example λx . x is a correct (closed) term, but λx . x x is not (independently of typing rules).
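These conditions are easy to check mechanically. A hedged OCaml sketch (ours) of the Free computation on a small term language, raising an exception when a linear constraint is violated:

    module S = Set.Make (String)

    type term =
      | Var of string | Const of string
      | Tuple of term * term | Lam of pat * term | App of term * term
      | If of term * term * term
    and pat = PVar of string | PUnit | PPair of pat * pat

    exception Nonlinear of string

    let disjoint_union a b =
      if S.is_empty (S.inter a b) then S.union a b
      else raise (Nonlinear "variable used twice")

    let rec free_pat = function
      | PVar x -> S.singleton x
      | PUnit -> S.empty
      | PPair (p, q) -> disjoint_union (free_pat p) (free_pat q)

    let rec free = function
      | Var x -> S.singleton x
      | Const _ -> S.empty
      | Tuple (m, n) | App (m, n) -> disjoint_union (free m) (free n)
      | Lam (p, m) ->
          let fp = free_pat p and fm = free m in
          if S.subset fp fm then S.diff fm fp
          else raise (Nonlinear "bound variable not used")
      | If (m, n, n') ->
          if not (S.equal (free n) (free n')) then
            raise (Nonlinear "branches must use the same variables")
          else disjoint_union (free m) (free n)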

7.2 Typing rules

The typing rules are essentially the same as for usual λ-Calculus. You simply replace cartesian product by ⊗ and arrow by −◦. The boolean type is Bool = 1 ⊕ 1.

    x : A ` x : A

    xi : Ai ` M : A        yj : Bj ` N : B
    ----------------------------------------        ` () : 1
    xi : Ai, yj : Bj ` (M, N) : A ⊗ B

    xi : Ai ` P : A        xi : Ai, yj : Bj ` M : B
    -------------------------------------------------
    yj : Bj ` λP . M : A −◦ B

    xi : Ai ` M : A −◦ B        yj : Bj ` N : A
    ---------------------------------------------
    xi : Ai, yj : Bj ` M N : B

    ` true : Bool        ` false : Bool

    xi : Ai ` M : Bool        yj : Bj ` N : A        yj : Bj ` N′ : A
    -------------------------------------------------------------------
    xi : Ai, yj : Bj ` if M then N else N′ : A

It is possible to add rules for the direct product and the direct sum, but this does not essentially enrich the calculus, since the corresponding constructions can be encoded in pure Linear λ-Calculus (with conditional and recursion).

7.3 A programming language

Pure Linear λ-Calculus is far simpler than pure λ-Calculus: reduction always terminates in linear time! However, it is possible to add recursive constructions to make a real programming language.


Conclusion

We recapitulate here the panorama of Intuitionistic Linear Logic versus Intuitionistic Logic:

    logic:                Intuitionistic Logic               Intuitionistic Linear Logic
    structural rules:     exchange, identity, cut,           exchange, identity, cut
                          contraction, weakening             (no contraction, no weakening)
    connectors:           ∧, ⊤, ⇒, ∨, ⊥                      ⊗, 1, −◦, &, t, ⊕, 0 (+ modality)
    categorical model:    cartesian closed category          symmetric monoidal closed category
                          (with finite coproducts)           (with finite products and coproducts)
    machine:              Categorical Abstract Machine       Linear Abstract Machine
    calculus:             λ-Calculus                         Linear λ-Calculus

A more detailed study, with implementations, is in [Lafont88].

We are now carrying out a similar study for the Classical Linear Logic of [Girard87].

Acknowledgements

The author is especially grateful to Jean-Yves Girard and Gerard Huet for their encouragement and to Samson Abramsky for his helpful corrections and suggestions.


Appendixes

A Gentzen Sequent Calculus for Intuitionistic Logic

A sequent A1, . . . , An ` B means that the formula B is a consequence of the hypotheses A1, . . . , An. In the following rules, Γ and ∆ are sequences of formulas.

Structural rules:

    Γ, A, B, ∆ ` C
    --------------  (exchange)
    Γ, B, A, ∆ ` C

    -----  (identity)
    A ` A

    Γ ` A    ∆, A ` B
    -----------------  (cut)
    Γ, ∆ ` B

    Γ, A, A ` B
    -----------  (contraction)
    Γ, A ` B

    Γ ` B
    ----------  (weakening)
    Γ, A ` B

Logical rules:

    Γ ` A    ∆ ` B
    --------------
    Γ, ∆ ` A ∧ B

    Γ, A ` C
    --------------
    Γ, A ∧ B ` C

    Γ, B ` C
    --------------
    Γ, A ∧ B ` C

    -----
    ` ⊤

    Γ, A ` B
    ------------
    Γ ` A ⇒ B

    Γ ` A    ∆, B ` C
    --------------------
    Γ, ∆, A ⇒ B ` C

    Γ ` A
    ------------
    Γ ` A ∨ B

    Γ ` B
    ------------
    Γ ` A ∨ B

    Γ, A ` C    ∆, B ` C
    --------------------
    Γ, ∆, A ∨ B ` C

    -------
    ⊥ ` A

B Categorical Combinators

    id : A → A

    ϕ : A → B    ψ : B → C
    -----------------------
    ψ ◦ ϕ : A → C

    ϕ : A → B    ψ : A → C
    -----------------------        fst : A ∧ B → A        snd : A ∧ B → B        〈〉 : A → ⊤
    〈ϕ, ψ〉 : A → B ∧ C

    ϕ : A ∧ B → C
    ----------------------        app : (A ⇒ B) ∧ A → B
    cur(ϕ) : A → B ⇒ C

    inl : A → A ∨ B        inr : B → A ∨ B

    ϕ : A → C    ψ : B → C
    -----------------------        {} : ⊥ → A
    {ϕ | ψ} : A ∨ B → C


C Categorical Linear Logic

C.1 Terminology

A symmetric monoidal category is a category C with a bifunctor ⊗ : C × C → C and an object 1 ∈ C such that:

    X ⊗ (Y ⊗ Z) ≅ (X ⊗ Y) ⊗ Z        X ≅ 1 ⊗ X        X ⊗ Y ≅ Y ⊗ X

≅ denotes a natural isomorphism. These natural isomorphisms must satisfy the MacLane-Kelly equations (see C.3).

A symmetric monoidal category C is closed if the functor X ↦ X ⊗ A has a right adjoint Y ↦ A −◦ Y for every A ∈ C:

    Hom(X ⊗ A, Y) ≅ Hom(X, A −◦ Y)

A categorical model of Linear Logic is just a symmetric monoidal closed category (C, ⊗,1, −◦) with finite products ( & , t) and coproducts (⊕, 0).

C.2 Some categorical models

A category with finite products is a symmetric monoidal category, hence a cartesian closed category with finite coproducts is a categorical model of Linear Logic⁶. We call those models degenerate because there is no distinction between tensor and direct product. In Set, for example, ⊗ (and &) is the cartesian product, ⊕ is the disjoint union, and X −◦ Y = Y^X.

A more interesting example is the category of modules over a fixed ring: ⊗ is the tensor product and X −◦ Y = Hom(X, Y). Here there is no distinction between direct product (&) and direct sum (⊕).

The category Top of topological spaces is not cartesian closed, but is a categorical model of Linear Logic: X ⊗ Y is the set X × Y with the finest topology that makes sections x ↦ (x, y) and y ↦ (x, y) continuous. 1 is the usual one-point space. X −◦ Y is the space of continuous maps X → Y, with the pointwise convergence topology. & is the usual cartesian product and ⊕ is the disjoint union. Tensor and direct product are different, but 1 and t are the same.

Another example is the category of pointed sets: An object is a set with a distinguished element ⊥ and a morphism is a function preserving ⊥. The tensor product of X and Y is X × Y where you identify all pairs containing a ⊥, and the tensor unit is {⊥, ⊤}. X −◦ Y is the set Hom(X, Y) with the constant bottom function as ⊥. The direct product is the cartesian product, and the direct sum is the disjoint union where you identify the two ⊥. t and 0 are the same.

C.3 Equations

The definitions of C.1 yield the following equational theory:

⁶ This justifies the translation of 1.3.


Category axioms:

(ϕ ◦ ψ) ◦ χ = ϕ ◦ (ψ ◦ χ)        id ◦ ϕ = ϕ        ϕ ◦ id = ϕ

Functoriality of the tensor product:

(ϕ ◦ ϕ′) ⊗ (ψ ◦ ψ′) = (ϕ ⊗ ψ) ◦ (ϕ′ ⊗ ψ′)        id ⊗ id = id : A ⊗ B → A ⊗ B        1 = id : 1 → 1

Naturality of arrange combinators:

assl ◦ (ϕ ⊗ (ψ ⊗ χ)) = ((ϕ ⊗ ψ) ⊗ χ) ◦ assl : A ⊗ (B ⊗ C) → (A′ ⊗ B′) ⊗ C′

(and similarly for the other arrange combinators)

Inverse equations:

assl ◦ assr = id : (A⊗B)⊗ C → (A⊗B)⊗ C

assr ◦ assl = id : A⊗ (B ⊗ C)→ A⊗ (B ⊗ C)

insl ◦ dell = id : 1 ⊗ A → 1 ⊗ A        dell ◦ insl = id : A → A

exch ◦ exch = id : A⊗B → A⊗B

MacLane-Kelly equations:

(pentagon)
assl ◦ assl = (assl ⊗ id) ◦ assl ◦ (id ⊗ assl) : A ⊗ (B ⊗ (C ⊗ D)) → ((A ⊗ B) ⊗ C) ⊗ D

(triangle)
assl ◦ insl = insl ⊗ id : A ⊗ B → (1 ⊗ A) ⊗ B


(hexagon)
assl ◦ exch ◦ assl = (exch ⊗ id) ◦ assl ◦ (id ⊗ exch) : A ⊗ (B ⊗ C) → (C ⊗ A) ⊗ B

Closure equations:

app ◦ (cur(ϕ) ⊗ ψ) = ϕ ◦ (id ⊗ ψ)        cur(ϕ) ◦ ψ = cur(ϕ ◦ (ψ ⊗ id))

cur(app) = id : A−◦B → A−◦B

Product equations:

fst ◦ 〈ϕ, ψ〉 = ϕ        snd ◦ 〈ϕ, ψ〉 = ψ        〈ϕ, ψ〉 ◦ χ = 〈ϕ ◦ χ, ψ ◦ χ〉

〈fst, snd〉 = id : A & B → A & B        〈〉 ◦ ϕ = 〈〉        〈〉 = id : t → t

Coproduct equations:

{ϕ | ψ} ◦ inl = ϕ        {ϕ | ψ} ◦ inr = ψ        χ ◦ {ϕ | ψ} = {χ ◦ ϕ | χ ◦ ψ}

{inl | inr} = id : A ⊕ B → A ⊕ B        ϕ ◦ {} = {}        {} = id : 0 → 0

D The co-monad of course!

We use the modality !A when we need an arbitrary number of copies of a datum of type A. But the rules of section 3 do not specify that we have copies of the same datum. Since ⊗ is not a cartesian product, we cannot require, for example, that dupl : !A → !A ⊗ !A is the diagonal map. But things can be expressed in a roundabout way.

A type X with two morphisms ε : X → 1 and δ : X → X ⊗ X is called a (commutative) co-monoid if it satisfies the following axioms:

    (ε ⊗ id) ◦ δ = insl : X → 1 ⊗ X                            (unit)

    exch ◦ δ = δ : X → X ⊗ X                                   (commutativity)

    assl ◦ (id ⊗ δ) ◦ δ = (δ ⊗ id) ◦ δ : X → (X ⊗ X) ⊗ X       (associativity)


Now we give the correct definition of the modality: !A is the co-free co-monoid co-generated by A.

Hence, the correct rules are:

    read : !A → A        kill : !A → 1        dupl : !A → !A ⊗ !A
    (( !A, dupl, kill) is a co-monoid)

    ϕ : A → B    ε : A → 1    δ : A → A ⊗ A    ((A, δ, ε) is a co-monoid)
    -----------------------------------------------------------------------
    make(ϕ, ε, δ) : A → !B

We add the condition that make(ϕ, ε, δ) is the only morphism π : A → !B such that:

    read ◦ π = ϕ        kill ◦ π = ε        dupl ◦ π = (π ⊗ π) ◦ δ

In the degenerate case (see appendix C.2), the only co-monoids are the (X, ε, δ) where ε is the canonical arrow, and δ is the diagonal. That means that degenerate models are models of the modality, with !A = A.

But the most interesting result is this categorical version of theorem 2:

Theorem 4 If C is a symmetric monoidal closed category with finite products and modality, then the co-Kleisli category associated to the co-monad ! is cartesian closed⁷.

The proof uses the following result, which is a categorical version of proposition 3:

Proposition 5 There is a canonical isomorphism: ! (A & B) ≅ !A ⊗ !B.

E Some combinators

    insr = exch ◦ insl : A → A ⊗ 1        delr = dell ◦ exch : A ⊗ 1 → A

    mix = assr ◦ ((assl ◦ (id ⊗ exch) ◦ assr) ⊗ id) ◦ assl : (A ⊗ B) ⊗ (C ⊗ D) → (A ⊗ C) ⊗ (B ⊗ D)

    x : !A → B
    ------------------------------------------
    lift(x) = make(x, kill, dupl) : !A → !B

    x : A → B
    ---------------------------------
    !x = lift(x ◦ read) : !A → !B

    subl = kill : ! t → 1        crys = make(〈〉, id, insl) : 1 → ! t

    crac = ( ! fst ⊗ ! snd) ◦ dupl : ! (A & B) → !A ⊗ !B

    glue = make(〈delr ◦ (read ⊗ kill), dell ◦ (kill ⊗ read)〉, dell ◦ (kill ⊗ kill), mix ◦ (dupl ⊗ dupl))
         : !A ⊗ !B → ! (A & B)

⁷ For these notions of monad and Kleisli category, see [Maclane] for example.


F Proof of the termination theorem

We prove theorem 3.

If ϕ : A → B is a combinator, and C a set of canonical combinators of type B, we write ϕ µ ↓ C when µ is a canonical combinator of type A and there is ν ∈ C and a primitive combinator α such that:

    ϕ : µ ⇝ ν  [α]

For every formula A, we construct inductively a set Cv(A) (convergence of A) of canonical combinators of type A:

• Cv(A) = {id} (A is atomic)

• Cv(1) = {1}

• Cv(A⊗B) = {µ⊗ ν;µ ∈ Cv(A), ν ∈ Cv(B)}

• Cv(A−◦B) = {cur(ϕ) ◦ µ;ϕ : X ⊗ A→ B, (∀ν ∈ Cv(A)) ϕ (µ⊗ ν) ↓ Cv(B)}

• Cv(A&B) = {〈ϕ, ψ〉 ◦ µ;ϕ : X → A,ψ : X → B,ϕ µ ↓ Cv(A), ψ µ ↓ Cv(B)}

• Cv(t) = {〈〉 ◦ µ}

• Cv(A⊕B) = {inl ◦ µ;µ ∈ Cv(A)} ∪ {inr ◦ µ;µ ∈ Cv(B)}

• Cv(0) = ∅

By induction on combinators, and on canonical combinators, you check successively:

Lemma 2 For every combinator ϕ : A→ B and µ ∈ Cv(A), ϕ µ ↓ Cv(B).

Lemma 3 For every A, Cv(A) is the set of all the canonical combinators of type A.

The termination theorem is a consequence of those two lemmas.


References

[CouCurMau] G. Cousineau, P.L. Curien & M. Mauny, The Categorical Abstract Machine, in: J.P. Jouannaud, ed., Functional Programming Languages and Computer Architecture, LNCS 201 (Springer-Verlag, 1985) 50-64.

[Curien85] P.L. Curien, Categorical Combinatory Logic, in: W. Brauer, ed., ICALP 85, LNCS 194 (Springer-Verlag, Nafplion) 130-139.

[Curien86] P.L. Curien, Categorical Combinators, Sequential Algorithms and Functional Programming (Pitman, 1986).

[Gentzen] The Collected Papers of Gerhard Gentzen, E. Szabo, ed. (North-Holland, Amsterdam, 1969).

[Girard86] J.Y. Girard, Linear Logic and Parallelism, Proceedings of the School on Semantics of Parallelism held in IAC (CNR, Roma, 1986).

[Girard87] J.Y. Girard, Linear Logic, Theoretical Computer Science 50 (1987) 1-102.

[GirLaf] J.Y. Girard & Y. Lafont, Linear Logic and Lazy Computation, in: TAPSOFT ’87, Volume 2, LNCS 250 (Springer-Verlag, Pisa) 52-66.

[Kelly] G.M. Kelly, On MacLane’s conditions for coherence of natural associativities, J. Algebra 1 (1964) 397-402.

[Kleene] S.C. Kleene, Introduction to Metamathematics (North-Holland, 1952).

[Lafont88] Y. Lafont, Logiques, Catégories et Machines, Thèse de Doctorat, Université de Paris 7, 1988.

[Lambek68] J. Lambek, Deductive systems and categories, Math. Systems Theory (1968).

[Lambek80] J. Lambek, From Lambda-calculus to Cartesian Closed Categories, in: J.P. Seldin & J.R. Hindley, eds., To H.B. Curry: Essays on Combinatory Logic, Lambda-calculus and Formalism (Academic Press, 1980).

[LamSco] J. Lambek & P.J. Scott, Introduction to Higher Order Categorical Logic, Cambridge Studies in Advanced Mathematics (Cambridge University Press, 1986).

[Maclane] S. MacLane, Categories for the Working Mathematician, Graduate Texts in Mathematics 5 (Springer-Verlag, 1971).

[Martinloef] P. Martin-Löf, Intuitionistic Type Theory, Studies in Proof Theory, Lecture Notes (Bibliopolis, Napoli, 1984).

[MauSu] M. Mauny & A. Suarez, Implementing Functional Languages in the Categorical Abstract Machine, in: Proceedings of the Lisp and Functional Programming Conference (ACM, Boston, 1986).

[Szabo] M.E. Szabo, Algebra of Proofs, Studies in Logic and the Foundations of Mathematics 88 (North-Holland, Amsterdam, 1978).