Rambling about pi calculus
I haven't actually shared this blog with anyone, so this is the perfect place to write stream-of-consciousness about my latest semi-formed ideas.
I'm trying to combine linear logic and pi calculus. I'm writing a whole database full of proofs in this system, although at the time of writing the database is quite incomplete.
This idea started out when I was searching for a way to model pi calculus. After realizing that classical logic is ill-equipped to reason about stateful procedures, I eventually discovered linear logic. The terms of linear logic can be interpreted as games against an adversary, with linear implication denoting the adversary's choice of moves. They can also be interpreted as resources traded between various sources and sinks, even alongside the game-semantic interpretation. So, I thought to myself, what if the games are between a programmer and a computer?
(Some background on the two systems: Pi calculus can be thought of as a minimalistic, yet Turing-complete, programming language whose only operations are: create a communication channel, send a value along a channel, receive a value from a channel, run two processes concurrently, and replicate a process infinitely. Linear logic is similar to classical propositional logic, but the duplication and deletion of terms are heavily restricted. This means that the terms of linear logic can represent not just propositions, but resources of various kinds.)
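For concreteness, here's a toy rendering of that grammar as a Haskell data type. This is purely my own illustration of the list above; the constructor names are made up and nothing here comes from the proof database.

```haskell
-- A toy rendering of the pi calculus grammar described above.
-- Channel names are plain strings here; a serious formalization
-- would need to be much more careful about binding and scope.
type Name = String

data Process
  = New  Name Process       -- create a fresh channel, scoped to the body
  | Send Name Name Process  -- send a name along a channel, then continue
  | Recv Name Name Process  -- receive a name from a channel, binding it
  | Par  Process Process    -- run two processes concurrently
  | Repl Process            -- replicate a process infinitely
  | Halt                    -- the finished process
```

A process like `New "c" (Par (Send "c" "x" Halt) (Recv "c" "y" Halt))` would then stand for "make a channel c, and concurrently send x on it and receive a name from it."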
Under this combined system, a linear logic term represents a pi calculus process; the term is valid if the process always halts. The game-semantic interpretation is very useful for reasoning intuitively about this: our adversary is the computer, who can make any legal transaction. This also gives the resource interpretation an explicit meaning: programs "used together" in a multiplicative conjunction can communicate over channels, altering the state of both. These programs can halt normally (the multiplicative truth), or the computer can run out of legal transactions -- or simply refuse to perform them -- and deadlock (the additive truth). Either outcome counts as a victory for us, since the program has halted.
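In the usual linear logic notation (this is just my restatement of the paragraph above, using the standard symbols for the connectives), that reads roughly as

$$
\begin{aligned}
A \otimes B &:\ \text{$A$ and $B$ running side by side, free to communicate over channels}\\
\mathbf{1} &:\ \text{the program has halted normally}\\
\top &:\ \text{the computer has no legal transaction left, or refuses to make one}
\end{aligned}
$$

and a term is valid when we end up in one of the last two situations no matter how the computer plays.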
Unlike lambda calculus, pi calculus has no direct notion of a function. This introduces some syntactic difficulties for us, since the input and output of a process can travel over arbitrary channels. To circumvent this, I use "nilad abstractions", inspired by the class abstractions used in the Metamath theorem database. A nilad abstraction is simply a way to attach a process to a channel. To evaluate an expression that refers to a nilad, we splice the abstracted process in nearby and replace the reference with the variable bound by the nilad abstraction. Using nilads, we can pass around bound channel names and reason about their behavior. We can do things like test for equality, make and evaluate anonymous functions, and even define some fun multi-threading operators. All of this happens against the background of linear logic, which lets us reason and make judgements about these stateful objects.
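To make the splicing step concrete, here's a guess at how it could look in code, building on the toy Process type from the earlier sketch. The names Nilad, rename, and useNilad are hypothetical and mine alone; the actual mechanism lives in the proof database rather than in a Haskell program, so treat this as a cartoon of the idea.

```haskell
-- A nilad abstraction, as described above: a channel name bound
-- together with the process attached to it. (Hypothetical rendering.)
data Nilad = Nilad Name Process

-- Naive renaming of channel names. It ignores shadowing, which a
-- real treatment could not get away with.
rename :: Name -> Name -> Process -> Process
rename old new = go
  where
    sub n = if n == old then new else n
    go (New n p)    = New (sub n) (go p)
    go (Send c v p) = Send (sub c) (sub v) (go p)
    go (Recv c v p) = Recv (sub c) (sub v) (go p)
    go (Par p q)    = Par (go p) (go q)
    go (Repl p)     = Repl (go p)
    go Halt         = Halt

-- Evaluating a client that refers to a nilad: splice the abstracted
-- process in alongside the client, and replace the client's reference
-- with the channel name bound by the abstraction.
useNilad :: Nilad -> Name -> Process -> Process
useNilad (Nilad bound body) ref client =
  Par body (rename ref bound client)
```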
Once linear logic and pi calculus are combined, the duality of linear logic suddenly applies to pi calculus processes. The channel creation, message send, and message receive operators all get dual versions, just like conjunction and disjunction. I don't yet fully understand what the interpretations of these novel operators are. As far as I can tell, this is what they mean (a comparison with the standard dualities follows the list):
- The dual of channel creation is channel generalization. This is the statement that the enclosed process always halts, no matter what other processes send and receive on that channel.
- The dual of message sending and receiving is selective sending and receiving: they are simply the send and receive operators, but we, not the computer, get to choose how they are used.
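For comparison, here's how dualization already behaves on the stock linear logic connectives. These identities are standard (with $(\cdot)^\perp$ for linear negation); the channel operators above would presumably pair up analogously, though pinning that down precisely is the part I'm still unsure about:

$$
(A \otimes B)^\perp = A^\perp \parr B^\perp, \qquad
(A \oplus B)^\perp = A^\perp \mathbin{\&} B^\perp, \qquad
(\mathord{!}A)^\perp = \mathord{?}(A^\perp), \qquad
\mathbf{1}^\perp = \bot, \qquad
\mathbf{0}^\perp = \top.
$$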
Finally, there is a very important question: is this system consistent? Linear logic is a paraconsistent logic, which means the requirements for consistency are a little looser than in more mainstream logics. However, adding the pi calculus operators introduces the possibility of self-reference, which is the foundation of many famous paradoxes. The halting problem, in particular, casts a rather unsettling shadow over the usefulness of this system: if it were possible to change a process's behavior based on whether another process halts, we could construct a paradoxical program that both halts and doesn't. The ability to model anonymous functions is exciting and concerning in equal measure, given that deductive lambda calculus was slain by Curry's paradox.
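For the record, Curry's construction is short enough to sketch here; this is the standard derivation, nothing specific to my system. Pick any statement $Y$ and define

$$
N := \lambda x.\,(x\,x \to Y), \qquad X := N\,N, \qquad \text{so that}\ X \;=_\beta\; (X \to Y).
$$

Assuming $X$ gives $X \to Y$ (they are the same term up to conversion) and hence $Y$; discharging the assumption proves $X \to Y$ outright, which is $X$ again, and one more modus ponens yields the arbitrary $Y$. Any system that lets self-applicative terms stand in for propositions has to block this somewhere.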
I'm far from the first person to try using computability as the background of a deductive logic. Deductive lambda calculus was a very similar endeavor, and G. Japaridze's Computability Logic is quite close in its treatment of processes as logical terms. This project is mostly for my own enlightenment about formal proofs and proof systems, so breaking new ground is not necessarily my goal, but at least I might be the first to formalize my findings in a proof-verifier-friendly format.
P.S. I refer to the combination of linear logic and pi calculus as "this system." I'm hesitant to give it an actual name since my combination of the two might not be original, but it'll make it a lot easier to talk about if I do. I mean, that's why we name things.
P.P.S. I keep mistyping "receive" as "recieve." I should probably learn to type that word correctly if I hope to write about this more.