The Quantum Field Theory - Computer Science Correspondence

There is a deep-seated analogy between the mathematics and conceptual
structures of
			   Quantum Field Theory
and the field of
	         Formal Language and Automata Theory

which lies at the foundation of Computer Science -- a correspondence
which is unrelated to the issues of Quantum Computers or Quantum
Automata, and which, as far as I'm aware, has not seen the light of
publication anywhere (except here on the USENET).

I'm still not entirely sure what the scope and nature of this analogy
is, but will try to outline some of the major points of correspondence
in a somewhat fragmented way, on an item by item basis.

First some background.

An automaton is represented as a "state machine".  A state machine consists
of a (possibly infinite) set Q of states; distinguished subsets I and F
of Q ("initial" and "final" states), and a transition relation

                        H subset of Q x X* x Q

where X is the 'alphabet' and X* the set of words over X.  A transition
(q,w,r) in H is interpreted as a transition

                           q -> r on word w.

The relation H generates a relation S defined by:

                S:  q -> q on word 1  (1 denotes "empty word")
                S: q -> s on word vw, if
                   H: q -> r on word v; S: r -> s on word w

Then one can talk about S(q,r) as the set of all words w for which
q -> r on w.

The language 'recognized' by the state machine is the union of all
these sets as q ranges over the initial states I, r over the final
states F.
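
The definitions above can be sketched directly in code.  This is a toy
illustration under my own encoding (the example machine and test words are
mine, not from the text); H is a set of triples (q, w, r), read "q -> r on
word w":

```python
# A minimal sketch of the state-machine definitions above (my own
# invented encoding).  H is a set of transitions (q, w, r).
def recognizes(H, I, F, word, max_steps=20):
    """True iff some chain of transitions from I to F spells out `word`."""
    # Each frontier entry (q, rest) means: reached state q with `rest`
    # of the word still to be consumed.  Chaining H this way builds
    # exactly the generated relation S.
    frontier = {(q, word) for q in I}
    for _ in range(max_steps):
        if any(q in F and rest == "" for q, rest in frontier):
            return True
        frontier = {(r, rest[len(w):])
                    for q, rest in frontier
                    for (p, w, r) in H
                    if p == q and rest.startswith(w)}
        if not frontier:
            return False
    return any(q in F and rest == "" for q, rest in frontier)

# A machine over X = {a, b} whose language is "words ending in ab":
H = {(0, "a", 0), (0, "b", 0), (0, "ab", 1)}
print(recognizes(H, {0}, {1}, "bab"))   # True
print(recognizes(H, {0}, {1}, "ba"))    # False
```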

The first set of correspondences follows from this description.

I plays the role of the initial state vector
F plays the role of the final state vector
H plays the role of the interaction Hamiltonian
S is the S-matrix generated from H

and the relation by which S is derived from H is:

                  S = H* = 1 + H + HH + HHH + ...

which is analogous to Dyson's formula; where

                   + denotes union
                   1(q,r) = {1} if q = r; 1(q,r) = {} if q != r
             H(q,r) = { words w: (q,w,r) in H }
            HH(q,s) = { vw: v in H(q,r), w in H(r,s) for some state r }
                    = the "matrix product" of H with H
            HHH = the product of HH with H, etc.

If Q is a finite set, the state machine is a finite state automaton.
Then H, I and F can be represented as a QxQ matrix, a Q-dimensional
row vector and a Q-dimensional column vector, respectively.
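
For the finite case, the Dyson-style series can be computed quite literally,
with matrix entries that are sets of words.  A sketch (the truncation bound
and the example machine are my own illustration):

```python
# Sketch of S = H* = 1 + H + HH + ... for a finite state set Q, using
# QxQ matrices whose entries are sets of words.  The series is truncated
# at H^4 for illustration; the example is my own.
Q = [0, 1]

def mat_mul(A, B):
    # (AB)(q,s) = { vw : v in A(q,r), w in B(r,s) for some state r }
    return {(q, s): {v + w for r in Q
                           for v in A[(q, r)]
                           for w in B[(r, s)]}
            for q in Q for s in Q}

def mat_add(A, B):
    return {(q, s): A[(q, s)] | B[(q, s)] for q in Q for s in Q}

one = {(q, r): ({""} if q == r else set()) for q in Q for r in Q}  # 1(q,r)
H   = {(0, 0): {"a"}, (0, 1): {"b"}, (1, 0): set(), (1, 1): set()}

S, power = one, one
for _ in range(4):                      # S = 1 + H + HH + HHH + HHHH
    power = mat_mul(power, H)
    S = mat_add(S, power)

print(sorted(S[(0, 1)], key=len))       # words (length <= 4) taking 0 to 1
```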

If Q is an infinite set of the form Q = Q0 x S*, where Q0 is finite,
then the state machine can be interpreted as a 1-stack machine with a
stack alphabet S.

One can then talk about symmetries and selection rules.  The standard
definition of a push down automaton is as a state machine with state
set Q0 x S*, where Q0 is finite and where the only allowed transitions
are those which have the forms:

                       (q, v) -> (r, vs)
                       (q, vs) -> (r, v)

where v is an arbitrary word in S* (a 'stack configuration'), s an
element of S (a 'stack symbol'); q, r states in Q0.  These would be the
"push" and "pop" actions, analogous to creation and annihilation
operators.

And part of the definition of a push down automaton is a symmetry
requirement on the transition relation H to the effect that if

                  (q, v) -> (r, vs) on a word z

then also

                  (q, v') -> (r, v's) on the word z

for all other stack words v'.

The symmetry requirement is analogous to a symmetry requirement imposed
on a particle model's Hamiltonian or Lagrangian.

Different types of automata are defined as specializations of infinite
state automata where different symmetry requirements are posed on the
transition relation H.

This is analogous to defining different classes of theories by
different symmetry conditions.

The class of languages recognized by push down automata is characterized
by "context-free grammars".  A grammar is a system of transition relations
which recursively define the structure of a language top-down.  For
example, the grammar

A -> u A v
A -> A A
A -> 1

(where 1 denotes the empty word), recursively defines the set of words
over {u, v} in which the u's and v's form properly nested bracketing
sequences.
A sample derivation of a word from the grammar would be:

          A -> A A -> uAv A -> uv A -> uv uAv -> uv uAAv
            -> uv u uAv A v -> uv u uv A v -> uv u uv uAv v -> uv u uv uv v

thus showing that uvuuvuvv is a word generated by the grammar.
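
This bracket language is exactly the sort of language a push down automaton
recognizes.  A one-control-state sketch with stack alphabet S = {"s"} (my
own minimal encoding, not from the text):

```python
# A push down automaton for the bracket grammar above, sketched with a
# single control state and stack alphabet S = {"s"}: reading u pushes,
# reading v pops.  (My own minimal encoding.)
def pda_accepts(word):
    stack = []
    for ch in word:
        if ch == "u":
            stack.append("s")      # push:  (q, v) -> (q, vs)
        elif ch == "v":
            if not stack:          # nothing to pop: reject
                return False
            stack.pop()            # pop:   (q, vs) -> (q, v)
        else:
            return False           # not a word over {u, v}
    return not stack               # accept iff the stack empties again

print(pda_accepts("uvuuvuvv"))     # True: the word derived above
print(pda_accepts("uvvu"))         # False: a v with no matching u
```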

The transition rules A -> u A v, etc. are analogous to vertices in a
Feynman diagram.  The corresponding vertex would represent the interaction
by which A transforms into u A v.  This is analogous to a transition in
electroweak theory e -> nu_e (W-); or u -> d (W+).

Other points of correspondence.

Suppose H is a Hilbert space generated by a set X (a different H from
the transition relation above).  That is, H is the closure of the
vector space which has X as a basis.  Then the set of
sequences X* generates the Hilbert space

                  C + H + HxH + HxHxH + ...

which is the Maxwell-Boltzmann Fock space.  If you impose the relations
(xy = yx) on the underlying set X, then the resulting Hilbert space is
the Einstein-Bose Fock space; and if you impose the relation (xy = -yx),
the resulting space is the Fermi-Dirac Fock space.

These latter 2 constructions are directly related to Finkelstein's
representation of these 2 Fock spaces.
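
One quick way to see the three constructions diverge is to count the
dimension of the k-particle sector over an n-dimensional one-particle
space.  These are standard combinatorial counts, which I add as my own
illustration:

```python
from math import comb

# Dimension of the k-particle sector over an n-dimensional space
# (standard combinatorics; my own illustration, not from the text):
def maxwell_boltzmann(n, k):   # all length-k words over a basis X, |X| = n
    return n ** k

def bose_einstein(n, k):       # impose xy = yx: multisets of size k
    return comb(n + k - 1, k)

def fermi_dirac(n, k):         # impose xy = -yx: k-element subsets
    return comb(n, k)

n, k = 3, 2
print(maxwell_boltzmann(n, k), bose_einstein(n, k), fermi_dirac(n, k))  # 9 6 3
```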

Still further places where the linkage manifests itself can be
outlined: the appearance of an analogue of the bra-ket algebra in
the representation of the 'context free' languages (those defined
by push down automata), and the close relation between the underlying
algebra of 'regular expressions' (called Kleene algebras) and the
subspace lattices of quantum logic and convex spaces.

A Kleene algebra is described as

(1) A monoid
    (1a) An associative product operation exists (uv)w = u(vw)
    (1b) An identity exists u1 = u = 1u
(2) With a partial ordering <= such that
    (2a) Every finite subset {a0,a1,...,an} has a least upper bound
	    0 + a0 + a1 + ... + an
    (2b) If U, V are finite subsets then
	    sum(U) sum(V) = sum(UV), where UV = {uv: u in U, v in V}
(3) A 'Kleene star' operation a |-> a* for which
    (3a) a* >= 1 + a a*
    (3b) If x >= a + bx + xc then x >= b* a c*
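
A concrete model of these axioms, which I add for illustration, is the
algebra of binary relations on a finite set: the product is composition,
+ is union, <= is inclusion, and a* is the reflexive-transitive closure:

```python
# Binary relations on a finite set form a Kleene algebra (a standard
# model; this sketch and its names are my own illustration).
def compose(a, b):                  # the monoid product of axiom (1)
    return {(x, z) for (x, y) in a for (w, z) in b if w == y}

def star(a, elems):                 # a* = lub { 1, a, aa, ... } as a fixed point
    s = {(x, x) for x in elems}     # the identity relation, i.e. 1
    while True:
        nxt = s | compose(s, a)
        if nxt == s:
            return s
        s = nxt

elems = {0, 1, 2}
a = {(0, 1), (1, 2)}
one = {(x, x) for x in elems}
s = star(a, elems)

# Axiom (3a): a* >= 1 + a a*   (>= is superset inclusion here)
assert s >= one | compose(a, s)
print(sorted(s))
```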

A common special case occurs where the star operator is defined in terms
of the underlying ordering relation:

          a* = least upper bound { 1, a, aa, aaa, ... }
	     = 1 + a + aa + aaa + ...

in which case the least upper bound of

	      { a b^n c: n = 0, 1, 2, ... }  is  a b* c.

A Kleene LINEAR algebra may then be defined as a linear associative
algebra which is partially ordered with the least upper bound denoted
		      0 v a0 v a1 v ... v an

and a star operator satisfying properties (2) and (3).

For a given linear algebra L, one can construct Kleene closures in 2
distinct ways:

   L is extended to the class AFF(L) of all affine subspaces of L
   <= is interpreted as subset ordering
   A v B is the affine join of subspaces A and B;
   0 is the empty set
   A* is defined as V{A^n: n = 0, 1, 2, ... }
   AB = { ab: a in A, b in B } is always an affine subspace if A, B are

   Similarly, except one takes the class CONVEX(L) of all convex polytopes in L
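
The CONVEX construction can be made concrete in the simplest case L = R,
where the convex polytopes are closed intervals.  A sketch with my own
encoding and a hand-picked example (for intervals, products of endpoints
give the extreme values):

```python
# Sketch of CONVEX(R): convex polytopes in the 1-dimensional algebra
# L = R are closed intervals (lo, hi); AB = {ab} is again an interval,
# v is the convex join (hull of the union), and A* is computed as the
# fixed point of S |-> {1} v AS.  (My own encoding and example.)
def interval_mul(A, B):
    ps = [a * b for a in A for b in B]   # endpoint products suffice
    return (min(ps), max(ps))

def join(A, B):                          # convex hull of the union
    return (min(A[0], B[0]), max(A[1], B[1]))

def star(A, iterations=60):
    S = (1.0, 1.0)                       # A^0 = {1}
    for _ in range(iterations):
        S = join((1.0, 1.0), interval_mul(A, S))
    return S

print(star((0.0, 0.5)))                  # join of {1}, [0,1/2], [0,1/4], ...
```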

So, one can then also talk about a representation theory for Kleene
algebras by defining a representation of a Kleene algebra K to be a map
		      f: K -> L

into a Kleene linear algebra L, which preserves least upper bounds and
products.  So there is an entire (as yet undeveloped and unpublished)
representation theory for formal language and automata structures and
concepts which is entirely analogous to the representation theory
that underlies constructs in Physics.


All of these cases are instances of a much more general phenomenon,
which hasn't yet been fully elucidated anywhere, which deeply connects
Quantum Field Theory and Computer Science at their foundations.