Build your own clarithmetic II

Giorgi Japaridze

arXiv:1510.08566v1 [cs.LO] 29 Oct 2015

Abstract

Clarithmetics are number theories based on computability logic. Formulas of these theories represent interactive computational problems, and their “truth” is understood as existence of an algorithmic solution. Various complexity constraints on such solutions induce various versions of clarithmetic. The present paper introduces a parameterized/schematic version CLA11^{P1,P2,P3}_{P4}. By tuning the three parameters P1, P2, P3 in an essentially mechanical manner, one automatically obtains sound and complete theories with respect to a wide range of target tricomplexity classes, i.e. combinations of time (set by P3), space (set by P2) and so called amplitude (set by P1) complexities. Sound in the sense that every theorem T of the system represents an interactive number-theoretic computational problem with a solution from the given tricomplexity class and, furthermore, such a solution can be automatically extracted from a proof of T. And complete in the sense that every interactive number-theoretic problem with a solution from the given tricomplexity class is represented by some theorem of the system. Furthermore, through tuning the 4th parameter P4, at the cost of sacrificing recursive axiomatizability but not simplicity or elegance, the above extensional completeness can be strengthened to intensional completeness, according to which every formula representing a problem with a solution from the given tricomplexity class is a theorem of the system. This article is published in two parts. The previous Part I has introduced the system and proved its soundness, while the present Part II is devoted to a completeness proof and some corollaries of the main results.

MSC: primary: 03F50; secondary: 03F30; 03D75; 03D15; 68Q10; 68T27; 68T30

Keywords: Computability logic; Game semantics; Interactive computation; Peano arithmetic; Bounded arithmetic; Implicit computational complexity

Contents

1 Outline
2 Bootstrapping CLA11^R_A
  2.1 How we reason in clarithmetic
  2.2 Reasonable Induction
  2.3 Reasonable Comprehension
  2.4 Addition
  2.5 Trichotomy
  2.6 Subtraction
  2.7 Bit replacement
  2.8 Multiplication
3 The extensional completeness of CLA11^R_A
  3.1 X, X and (a, s, t)
  3.2 Preliminary insights
  3.3 The sentence W
  3.4 The overline notation
  3.5 Configurations
  3.6 The white circle and black circle notations
  3.7 Titles
  3.8 Further notation
  3.9 Scenes
  3.10 The traceability lemma
  3.11 Junior lemmas
  3.12 Senior lemmas
  3.13 Main lemma
  3.14 Conclusive steps
4 The intensional completeness of CLA11^R_A!
  4.1 On the intensional strength of CLA11^R_A
5 Harvesting
6 Final remarks
A Proof of Lemma 5.4 of [5]
B Proof of Lemma 5.2 of [5]
  B.1 Getting started
  B.2 Procedure Update Sketch
  B.3 Procedure Fetch Symbol
  B.4 Procedure Make History
  B.5 The overall strategy and an example of its run
  B.6 K is a provident and prudent solution of H
  B.7 K plays in target tricomplexity
    B.7.1 Amplitude
    B.7.2 Space
    B.7.3 Time

1 Outline

Being a continuation of [5], this article fully relies on the terminology and notation introduced in its predecessor, with which — or, at least, with the first two sections of which — the reader is assumed to be already familiar, and which it is necessary to have at hand for references while reading this paper. The main purpose of the present piece of writing is to prove the completeness of CLA11, in the form of the “only if” directions of clauses 1 (extensional completeness) and 2 (intensional completeness) of Theorem 2.6 of [5]. This goal requires some “bootstrapping” of the system, which is done in Section 2. The extensional completeness of the system is proven in Section 3. Intensional completeness is then relatively painlessly derived from that result in Section 4. Section 5 discusses a certain series of particular clarithmetical theories (“harvest”) obtained from CLA11 by instantiating and varying its parameters. The remaining parts of the paper consist of a short section with final remarks and two technical appendices.

2 Bootstrapping CLA11^R_A

Throughout this section, we assume that CLA11^R_A is a regular theory. Unless otherwise specified, “provable” means “provable in CLA11^R_A”. “Induction” and “Comprehension” mean “R-Induction” and “R-Comprehension”, respectively. We continue to use our old convention according to which, context permitting, F can be written instead of ⊓F.

So far we have had no need to appeal to the strength of CLA11^R_A, because we have been concerned with soundness, trying to show that the system was not “too strong”. Now that we are getting to the completeness part, which says that the system is “strong enough”, we will certainly need to rely on the power of the system's deductive machinery. Some work on establishing the provability of certain basic theorems in the system has to be done for that purpose. This sort of often boring but necessary work is called bootstrapping, named after the expression “to lift oneself by one's bootstraps” (cf. [2]).


2.1 How we reason in clarithmetic

Trying to generate full formal proofs in CLA11^R_A, just like doing so in PA, would be far from reasonable in a paper meant to be read by humans. This task is comparable with showing the existence of a Turing machine for one or another function. Constructing Turing machines in full detail is seldom feasible, and one usually resorts to some sort of lazy/informal constructions, such as constructions that rely on the Church-Turing thesis. Thesis 9.2 of [4] will implicitly act in the role of “our Church-Turing thesis” when dealing with CLA11^R_A-provability, allowing us to replace formal proofs with informal/intuitive descriptions of interpretation-independent winning strategies — according to the thesis, once such a strategy exists for a given formula, we can be sure that the formula is provable. In addition, we will be heavily — but often only implicitly — relying on the observation (9) made in Section 2.5 of [5], according to which CLA11^R_A proves everything provable in PA. As noted earlier in Section 2.5 of [5], since PA is well known and since it proves “essentially all” true arithmetical facts, we will hardly ever try to justify the PA-provability claims that we (explicitly or implicitly) make. Furthermore, in relatively simple cases, we usually will not try to justify our CL12-provability claims of the sort CL12 ⊢ E1, . . . , En ◦– F either and, instead, simply say that F follows from E1, . . . , En by LC (Logical Consequence), or that F is a logical consequence of E1, . . . , En, or that E1, . . . , En logically imply F. What allows us to take this kind of liberty is that CL12 is an analytic system, and verifying provability in it is a mechanical (even if often long and tiresome) job that a distrustful reader can do on his or her own; alternatively, our non-justified CL12-provability claims can always be verified intuitively/informally based on Thesis 9.2 of [4].¹

The following fact is the simplest of those established in this section, so let us look at its proof as a warm-up exercise. Remember from Section 2.2 of [5] that 0̂ = 0, 1̂ = 0′, 2̂ = 0′′, 3̂ = 0′′′, . . .

Fact 2.1 For any natural number n, CLA11^R_A ⊢ ⊔z(z = n̂).

Proof. Fix an n and argue in CLA11^R_A. Using 0 and the Successor axiom, we find the value y1 of 0′. Then, using y1 and the Successor axiom again, we find the value y2 of 0′′. And so on, n times. This way, we find the value yn of n̂. We now choose yn for z in ⊔z(z = n̂) and win this game.

What is the precise meaning of the second sentence of the above proof? The Successor axiom ⊓x⊔y(y = x′) is a resource that we can use any number of times. As such, it is a game played and always won by its provider (= our environment) in the role of ⊤ against us, with us acting in the role of ⊥. So, a value for x in this game should be picked by us. We choose 0, bringing the game down to ⊔y(y = 0′). The resource provider will have to respond with a choice of a value (constant) y1 for y, further bringing the game down to y1 = 0′. This elementary game is true (otherwise the provider would have lost), meaning that y1 is the value — which we have just found — of 0′. The rest of the proof of Fact 2.1 should be understood as that we play ⊓x⊔y(y = x′) against its provider once again, but this time we specify the value of x as y1 (rather than 0), bringing the game down to ⊔y(y = y1′). In response, the provider will have to further bring the game down to y2 = y1′ for some constant y2. This means that now we know the value y2 of 0′′. And so on. Continuing this way, eventually we come to know the value yn of n̂. Now we can and do win the (by now almost forgotten) target game ⊔z(z = n̂) by choosing yn for z in it, thus bringing it down to the true yn = n̂.

Out of curiosity, let us also take a look at a formal counterpart of our informal proof of ⊔z(z = n̂). For specificity, let us just consider the case of n = 2. A non-extended CLA11^R_A-proof of ⊔z(z = 2̂), i.e. of ⊔z(z = 0′′), consists of just the following two lines:

I.  ⊓x⊔y(y = x′)   Successor axiom
II. ⊔z(z = 0′′)   LC: I

Step II above is justified by LC which, in an extended proof, needs to be supplemented with a CL12-proof of the sequent ⊓x⊔y(y = x′) ◦– ⊔z(z = 0′′). Below is such a CL12-proof:

1. y1 = 0′, y2 = y1′ ◦– y2 = 0′′   Wait: (no premises)
2. y1 = 0′, y2 = y1′ ◦– ⊔z(z = 0′′)   ⊔-Choose: 1
3. y1 = 0′, ⊔y(y = y1′) ◦– ⊔z(z = 0′′)   Wait: 2
4. y1 = 0′, ⊓x⊔y(y = x′) ◦– ⊔z(z = 0′′)   ⊓-Choose: 3
5. ⊔y(y = 0′), ⊓x⊔y(y = x′) ◦– ⊔z(z = 0′′)   Wait: 4
6. ⊓x⊔y(y = x′), ⊓x⊔y(y = x′) ◦– ⊔z(z = 0′′)   ⊓-Choose: 5
7. ⊓x⊔y(y = x′) ◦– ⊔z(z = 0′′)   Replicate: 6

¹ Of course, when dealing with formula schemes (e.g., as in Fact 2.2) rather than particular formulas, the analyticity of CL12 may not always be (directly) usable. However, in such cases, Thesis 9.2 of [4] still remains at our full disposal.

Unlike the above case, most formulas shown to be CLA11^R_A-provable in this section will have free occurrences of variables. As a very simple example, consider ⊔y(y = x). Remembering that it is just a lazy way to write ⊓x⊔y(y = x), our informal justification/strategy (translatable into a formal CLA11^R_A-proof) for this formula would go like this: Wait till Environment chooses a constant c for x, thus bringing the game down to ⊔y(y = c). Then choose the same c for y. We win because the resulting elementary game c = c is true. However, more often than not, in cases like this we will omit the routine phrase “wait till Environment chooses constants for all free variables of the formula”, and correspondingly treat the free variables of the formula as standing for the constants already chosen by Environment for them. So, a shorter justification for the above ⊔y(y = x) would be: Choose (the value of) x for y. We win because the resulting elementary game x = x is true. Of course, an even more laconic justification would be just the phrase “Choose x for y.”, quite sufficient and thus acceptable due to the simplicity of the case. Alternatively, we could — and most likely would — simply say that the formula ⊔y(y = x) is logically valid (follows by LC from no premises). A reader who would like to see some additional illustrations and explanations can browse Sections 11 and 12 of [3]. In any case, the informal methods of reasoning induced by computability logic and clarithmetic in particular cannot be concisely or fully explained; rather, they should be learned through experience and practicing, not unlike the way one learns a foreign language. A reader who initially does not find some of our informal CLA11^R_A-arguments very clear should not feel disappointed. Greater fluency and better understanding will come gradually and inevitably. Counting on that, as we advance in this paper, the degree of “laziness” of our informal reasoning within CLA11^R_A will gradually increase, more and more often omitting explicit references to CL, PA, axioms or certain already established and frequently used facts when justifying certain relatively simple steps.

2.2 Reasonable Induction

Fact 2.2 The set of theorems of CLA11^R_A will remain the same if, instead of the ordinary R-Induction rule (7) of [5], one takes the following rule, which we call Reasonable R-Induction:

  F(0)     x < b|~s| ∧ F(x) → F(x′)
  ─────────────────────────────────────   (1)
  x ≤ b|~s| → F(x)

where x, ~s, F(x), b are as in (7) of [5].

Proof. Call rule (7) of [5] “old induction”. To see that the two rules are equivalent, observe that, while having identical left premises and identical conclusions, the right premise of (1) is weaker than that of old induction — the latter immediately implies the former by LC. This means that whenever old induction is applied, its conclusion can just as well be obtained through first weakening the premise F(x) → F(x′) to x < b|~s| ∧ F(x) → F(x′) using LC, and then applying (1).

For the opposite direction, consider an application of (1). Weakening (by LC) its left premise F(0), we find the following formula provable:

  0 ≤ b|~s| → F(0).   (2)

Next, the right premise x < b|~s| ∧ F(x) → F(x′) of (1), together with the PA-provable ∀(x′ ≤ b|~s| → x < b|~s|) and ∀(x′ ≤ b|~s| → x ≤ b|~s|), can be seen to logically imply

  (x ≤ b|~s| → F(x)) → (x′ ≤ b|~s| → F(x′)).   (3)

Applying old induction to (2) and (3), we get x ≤ b|~s| → (x ≤ b|~s| → F(x)). The latter, by LC, immediately yields the target x ≤ b|~s| → F(x).

2.3 Reasonable Comprehension

Fact 2.3 The set of theorems of CLA11^R_A will remain the same if, instead of the ordinary R-Comprehension rule (8) of [5], one takes the following rule, which we call Reasonable R-Comprehension:

  y < b|~s| → p(y) ⊔ ¬p(y)
  ─────────────────────────────────────────────   (4)
  ⊔|x| ≤ b|~s| ∀y < b|~s|(Bit(y, x) ↔ p(y))

where x, y, ~s, p(y), b are as in (8) of [5].

Proof. Call rule (8) of [5] “old comprehension”. The two rules have identical conclusions, and the premise of (4) is a logical consequence of the premise of old comprehension. So, whatever can be proven using old comprehension can just as well be proven using (4).

For the opposite direction, consider an application of (4). Of course, CLA11^R_A proves the logically valid y = y ⊔ ¬y = y without using either version of comprehension (the formula simply follows from no premises by LC). From here, by old comprehension, we obtain

  ⊔|x| ≤ b|~s| ∀y < b|~s|(Bit(y, x) ↔ y = y),   (5)

which essentially means that the system proves the existence of a number x0 whose binary representation consists of b|~s| “1”s. Argue in CLA11^R_A. Using (5), we find the above number x0. From PA, we can see that |x0| = b|~s|. Now, we can win the game

  y < b|~s| ⊔ ¬y < b|~s|.   (6)

Namely, our strategy for (6) is to find whether Bit(y, x0) is true or not using the Bit axiom; then, if true, we — based on PA — conclude that y < |x0|, i.e. that y < b|~s|, and choose the left ⊔-disjunct in (6); otherwise we conclude that ¬y < |x0|, i.e. ¬y < b|~s|, and choose the right ⊔-disjunct in (6).

The following is a logical consequence of (6) and the premise of (4):

  (y < b|~s| ∧ p(y)) ⊔ ¬(y < b|~s| ∧ p(y)).   (7)

Indeed, here is a strategy for (7). Using (6), figure out whether y < b|~s| is true or false. If false, choose the right ⊔-disjunct in (7) and rest your case. Suppose now y < b|~s| is true. Then, using the premise of (4), figure out whether p(y) is true or false. If true (resp. false), choose the left (resp. right) ⊔-disjunct in (7).

Applying old comprehension to (7) yields

  ⊔|x| ≤ b|~s| ∀y < b|~s|(Bit(y, x) ↔ (y < b|~s| ∧ p(y))).   (8)

Now, the conclusion of (4), obtaining which was our goal, can easily be seen to be a logical consequence of (8).
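Computationally, what comprehension delivers here is just the assembly of a number from a decidable bit predicate. The following Python sketch of that reading is our own illustration (the function comprehend and the sample predicate are not part of the formal system):

```python
def comprehend(p, bound):
    """Assemble the number x with Bit(y, x) = p(y) for all y < bound.

    Mirrors the conclusion of Reasonable R-Comprehension: x is built
    bit by bit from the decidable predicate p, so |x| <= bound.
    """
    x = 0
    for y in range(bound):
        if p(y):
            x |= 1 << y  # set the y-th least significant bit
    return x

# Example: the number whose binary representation consists of bound
# many "1"s, as in formula (5) (the predicate y = y is always true).
assert comprehend(lambda y: y == y, 4) == 0b1111
```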

2.4 Addition

Throughout this and the subsequent subsection, for safety, we assume that the variables involved in a formula whose provability is claimed are pairwise distinct, and also distinct from any other, “hidden” variables if such are present.

Fact 2.4 CLA11^R_A ⊢ ⊔z(z = u + v).


Proof. We shall rely on the pencil-and-paper algorithm for adding two numbers with “carrying” (“regrouping”), which everyone is familiar with as the algorithm is taught at the elementary school level (albeit for decimal rather than binary numerals). Here is an example to refresh our memory. Suppose we are adding the two (binary) numbers u = 10101 and v = 1101. They,² together with the resulting number z = 100010, should be written in a column, one below the other, with the right edges (the least significant digits) aligned as shown below:

    10101
  +  1101
  -------
   100010

The algorithm constructs the sum z bit by bit, in the right-to-left order, i.e. starting from the least significant bit (z)_0. At any step y > 0 we have a “carry” c_{y−1} ∈ {0, 1} from the preceding step y − 1. For uniformity, at step 0, i.e. when computing (z)_0, the “carry” c_{−1} from the non-existing “preceding step” #−1 is stipulated to be 0. Anyway, at each step y = 0, 1, 2, . . ., we first find the sum t_y = (u)_y + (v)_y + c_{y−1}. Then we declare (z)_y to be 0 (resp. 1) if t_y is even (resp. odd); and we declare c_y — the carry from the present step y that should be “carried over” to the next step y + 1 — to be 0 (resp. 1) if t_y ≤ 1 (resp. t_y > 1).

Let Carry1(y, u, v) be a natural arithmetization of the predicate “When calculating the yth least significant bit of u + v using the above pencil-and-paper algorithm, the carry c_y generated by the corresponding (yth) step is 1.”

Argue in CLA11^R_A. Our main claim is

  y ≤ |u| + |v| → (Carry1(y, u, v) ⊔ ¬Carry1(y, u, v)) ∧ (Bit(y, u + v) ⊔ ¬Bit(y, u + v)),   (9)

which we justify by Induction on y. Note that the conditions of R-Induction are (indeed) satisfied here: in view of the relevant clauses of Definition 2.2 of [5], the linear bound u + v used in the antecedent of (9) is in R_time, as it should be.

To solve the basis

  (Carry1(0, u, v) ⊔ ¬Carry1(0, u, v)) ∧ (Bit(0, u + v) ⊔ ¬Bit(0, u + v)),   (10)

we use (twice) the Bit axiom and figure out whether Bit(0, u) and Bit(0, v) are true. If both are true, we choose Carry1(0, u, v) and ¬Bit(0, u + v) in the corresponding two conjuncts of (10). If both are false, we choose ¬Carry1(0, u, v) and ¬Bit(0, u + v). Finally, if exactly one of the two is true, we choose ¬Carry1(0, u, v) and Bit(0, u + v).

The inductive step is

  (Carry1(y, u, v) ⊔ ¬Carry1(y, u, v)) ∧ (Bit(y, u + v) ⊔ ¬Bit(y, u + v)) →
  (Carry1(y′, u, v) ⊔ ¬Carry1(y′, u, v)) ∧ (Bit(y′, u + v) ⊔ ¬Bit(y′, u + v)).   (11)

The above is obviously solved by the following strategy. We wait till the adversary tells us, in the antecedent, whether Carry1(y, u, v) is true. After that, using the Successor axiom, we compute the value of y′ and then, using the Bit axiom, figure out whether Bit(y′, u) and Bit(y′, v) are true. If at least two of these three statements are true, we choose Carry1(y′, u, v) in the left conjunct of the consequent of (11), otherwise choose ¬Carry1(y′, u, v). Also, if either one or all three statements are true, we additionally choose Bit(y′, u + v) in the right conjunct of the consequent of (11), otherwise we choose ¬Bit(y′, u + v). (9) is thus proven.

Of course (9) logically implies y < |u| + |v| → Bit(y, u + v) ⊔ ¬Bit(y, u + v), from which, by Reasonable Comprehension (where the comprehension bound u + v is linear and hence, by Definition 2.2 of [5], is guaranteed to be in R_amplitude, as it should be), we get

  ⊔|z| ≤ |u| + |v| ∀y < |u| + |v|(Bit(y, z) ↔ Bit(y, u + v)).   (12)

The following is a true (by PA) sentence:

  ∀u∀v∀|z| ≤ |u| + |v|(∀y < |u| + |v|(Bit(y, z) ↔ Bit(y, u + v)) → z = u + v).   (13)

Now, the target ⊔z(z = u + v) is a logical consequence of (12) and (13).

² Their binary representations, that is. But let us not be so pedantic.
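For the computationally minded reader, the carry bookkeeping used in this proof can be made concrete with a short Python sketch (the names are ours; bit extraction plays the role of the Bit axiom):

```python
def bit(y, u):
    """(u)_y: the y-th least significant bit of u."""
    return (u >> y) & 1

def add_bit_and_carry(y, u, v):
    """Step y of the column addition: returns ((u + v)_y, c_y), where
    t_y = (u)_y + (v)_y + c_{y-1}, the bit is t_y's parity, and the
    carry c_y is 1 iff t_y > 1 -- exactly the Carry1 arithmetization."""
    c = 0                          # the stipulated carry c_{-1}
    for i in range(y + 1):
        t = bit(i, u) + bit(i, v) + c
        c = 1 if t > 1 else 0
    return t % 2, c

u, v = 0b10101, 0b1101             # the example: 10101 + 1101 = 100010
assert all(add_bit_and_carry(y, u, v)[0] == bit(y, u + v) for y in range(8))
```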

2.5 Trichotomy

Fact 2.5 CLA11^R_A ⊢ (u < v) ⊔ (u = v) ⊔ (u > v).

Proof. Argue in CLA11^R_A. Let u <^x v be an abbreviation of (u mod 2^x) < (v mod 2^x), u =^x v an abbreviation of (u mod 2^x) = (v mod 2^x), and u >^x v an abbreviation of (u mod 2^x) > (v mod 2^x). By Induction on x, we first want to prove

  x ≤ |u| + |v| → (u <^x v) ⊔ (u =^x v) ⊔ (u >^x v).   (14)

The basis (u <^0 v) ⊔ (u =^0 v) ⊔ (u >^0 v) of induction is won by choosing the obviously true u =^0 v component. The inductive step is

  (u <^x v) ⊔ (u =^x v) ⊔ (u >^x v) → (u <^{x′} v) ⊔ (u =^{x′} v) ⊔ (u >^{x′} v).   (15)

To solve (15), using the Bit axiom, we figure out the truth status of Bit(x, u) and Bit(x, v). If Bit(x, u) is false while Bit(x, v) is true, we choose u <^{x′} v. If Bit(x, u) is true while Bit(x, v) is false, we choose u >^{x′} v. Finally, if both Bit(x, u) and Bit(x, v) are true or both are false, we wait till Environment resolves the antecedent of (15). If it chooses u <^x v (resp. u =^x v, resp. u >^x v) there, we choose u <^{x′} v (resp. u =^{x′} v, resp. u >^{x′} v) in the consequent. With some basic knowledge from PA, our strategy can be seen to be successful.

Having established (14), this is how we solve (u < v) ⊔ (u = v) ⊔ (u > v). Using the Log axiom and Fact 2.4, we find the value d with d = |u| + |v|. Next, we plug d for x (i.e., specify x as d) in (14), resulting in

  d ≤ |u| + |v| → (u <^d v) ⊔ (u =^d v) ⊔ (u >^d v).   (16)

The antecedent of (16) is true, so (16)'s provider will have to resolve the consequent. If the first (resp. second, resp. third) ⊔-disjunct is chosen there, we choose the first (resp. second, resp. third) ⊔-disjunct in the target (u < v) ⊔ (u = v) ⊔ (u > v) and rest our case. By PA, we know that (u <^d v) → (u < v), (u =^d v) → (u = v) and (u >^d v) → (u > v) are true. It is therefore obvious that our strategy succeeds.
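Stripped of the game-semantical packaging, the comparison strategy of Fact 2.5 amounts to the following loop, sketched in Python under our own naming:

```python
def trichotomy(u, v):
    """Return '<', '=' or '>' by the strategy of Fact 2.5: having
    compared u mod 2**x and v mod 2**x, step to x + 1 by looking at
    the x-th bits; they override the old verdict unless they agree."""
    verdict = '='  # the basis: u mod 2**0 = v mod 2**0
    for x in range(max(u.bit_length(), v.bit_length())):
        bu, bv = (u >> x) & 1, (v >> x) & 1
        if bu != bv:
            verdict = '>' if bu else '<'
        # if bu == bv, the verdict for x carries over to x + 1
    return verdict

assert trichotomy(0b110, 0b101) == '>'
assert trichotomy(5, 5) == '='
```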

2.6 Subtraction

In what follows, we use ⊖ for a natural pterm for limited subtraction, defined by u ⊖ v = max(0, u − v).

Fact 2.6 CLA11^R_A ⊢ ⊔z(z = u ⊖ v).

Proof. The present proof is rather similar to our earlier proof of Fact 2.4. It relies on the elementary school pencil-and-paper algorithm for computing u − v (when u ≥ v). This algorithm, just like the algorithm for u + v, constructs the value z of u − v digit by digit, in the right-to-left order. At any step y > 0, we have a “borrow” (which is essentially nothing but a “negative carry”) b_{y−1} ∈ {0, 1} from the preceding step y − 1. For step 0, the “borrow” b_{−1} from the non-existing “preceding step” #−1 is stipulated to be 0. At each step y = 0, 1, 2, . . ., we first find the value t_y = (u)_y − (v)_y − b_{y−1}. Then we declare (z)_y to be 0 (resp. 1) if t_y is even (resp. odd); and we declare b_y — the value “borrowed” by the present step y from the next step y + 1 — to be 0 (resp. 1) if t_y > −1 (resp. t_y ≤ −1).

Let Borrow1(y, u, v) be a natural arithmetization of the predicate “u ≥ v and, when calculating the yth least significant bit of u − v using the above pencil-and-paper algorithm, the value b_y borrowed from the (y + 1)th step is 1.” For instance, Borrow1(0, 110, 101) is true, Borrow1(1, 110, 101) is false and Borrow1(2, 110, 101) is also false.

Argue in CLA11^R_A. Our main claim is

  y ≤ |u| → (Borrow1(y, u, v) ⊔ ¬Borrow1(y, u, v)) ∧ (Bit(y, u ⊖ v) ⊔ ¬Bit(y, u ⊖ v)),   (17)

which we justify by Induction on y. For the basis

  (Borrow1(0, u, v) ⊔ ¬Borrow1(0, u, v)) ∧ (Bit(0, u ⊖ v) ⊔ ¬Bit(0, u ⊖ v)),

using Fact 2.5, we figure out whether u ≥ v or not. If not, we choose ¬Borrow1(0, u, v) and ¬Bit(0, u ⊖ v). Now assume u ≥ v. Using the Bit axiom, we determine the truth status of Bit(0, u) and Bit(0, v). If Bit(0, u) ↔ Bit(0, v), we choose ¬Borrow1(0, u, v) and ¬Bit(0, u ⊖ v); if Bit(0, u) ∧ ¬Bit(0, v), we choose ¬Borrow1(0, u, v) and Bit(0, u ⊖ v); and if ¬Bit(0, u) ∧ Bit(0, v), we choose Borrow1(0, u, v) and Bit(0, u ⊖ v).

The inductive step is

  (Borrow1(y, u, v) ⊔ ¬Borrow1(y, u, v)) ∧ (Bit(y, u ⊖ v) ⊔ ¬Bit(y, u ⊖ v)) →
  (Borrow1(y′, u, v) ⊔ ¬Borrow1(y′, u, v)) ∧ (Bit(y′, u ⊖ v) ⊔ ¬Bit(y′, u ⊖ v)).   (18)

The above is obviously solved by the following strategy. Using Fact 2.5, we figure out whether u ≥ v or not. If not, we choose ¬Borrow1(y′, u, v) and ¬Bit(y′, u ⊖ v) in the consequent of (18). Now assume u ≥ v. We wait till the adversary tells us, in the antecedent, whether Borrow1(y, u, v) is true. Using the Bit axiom in combination with the Successor axiom, we also figure out whether Bit(y′, u) and Bit(y′, v) are true. If we have (Borrow1(y, u, v) ∧ Bit(y′, v)) or (¬Bit(y′, u) ∧ (Borrow1(y, u, v) ∨ Bit(y′, v))), then we choose Borrow1(y′, u, v) in the consequent of (18), otherwise we choose ¬Borrow1(y′, u, v). Also, if Bit(y′, u) ↔ (Borrow1(y, u, v) ↔ Bit(y′, v)), we choose Bit(y′, u ⊖ v) in the consequent of (18), otherwise we choose ¬Bit(y′, u ⊖ v).

(17) is proven. It obviously implies y < |u| → Bit(y, u ⊖ v) ⊔ ¬Bit(y, u ⊖ v), from which, by Reasonable Comprehension, we get

  ⊔|z| ≤ |u| ∀y < |u|(Bit(y, z) ↔ Bit(y, u ⊖ v)).   (19)

The following is a true (by PA) sentence:

  ∀u∀v∀|z| ≤ |u|(∀y < |u|(Bit(y, z) ↔ Bit(y, u ⊖ v)) → z = u ⊖ v).   (20)

Now, the target ⊔z(z = u ⊖ v) is a logical consequence of (19) and (20).
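A Python rendering of the borrow computation may help here: borrow1 below mirrors the arithmetization Borrow1(y, u, v), and monus_bit computes Bit(y, u ⊖ v) exactly as in the inductive step (all names are ours):

```python
def borrow1(y, u, v):
    """Whether, in the column computation of u - v (assuming u >= v),
    step y borrows from step y + 1; mirrors Borrow1(y, u, v)."""
    if u < v:
        return False
    b = 0  # the stipulated borrow b_{-1}
    for i in range(y + 1):
        t = ((u >> i) & 1) - ((v >> i) & 1) - b
        b = 1 if t <= -1 else 0
    return b == 1

def monus_bit(y, u, v):
    """The y-th bit of u (-) v = max(0, u - v)."""
    if u < v:
        return 0
    b = 0 if y == 0 else int(borrow1(y - 1, u, v))
    return (((u >> y) & 1) - ((v >> y) & 1) - b) % 2

# The examples given for Borrow1 above:
assert borrow1(0, 0b110, 0b101) is True
assert borrow1(1, 0b110, 0b101) is False
assert borrow1(2, 0b110, 0b101) is False
```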

2.7 Bit replacement

Let Br_0(x, s) (resp. Br_1(x, s)) be a natural pterm for the function that, on arguments x and s, returns the number whose binary representation is obtained from that of s by replacing the xth least significant bit (s)_x by 0 (resp. by 1).

Fact 2.7 For either i ∈ {0, 1}, CLA11^R_A ⊢ x < |s| → ⊔z(z = Br_i(x, s)).

Proof. Consider either i ∈ {0, 1}. Arguing in CLA11^R_A, we claim that

  Bit(y, Br_i(x, s)) ⊔ ¬Bit(y, Br_i(x, s)).   (21)

This is our strategy for (21). Using Fact 2.5, we figure out whether y = x or not. If y = x, we choose the left ⊔-disjunct of (21) if i is 1, and choose the right ⊔-disjunct if i is 0. Now suppose y ≠ x. In this case, using the Bit axiom, we figure out whether Bit(y, s) is true or not. If it is true, we choose the left ⊔-disjunct in (21), otherwise we choose the right ⊔-disjunct. It is not hard to see that, this way, we win.

From (21), by Comprehension, we get

  ⊔|z| ≤ |s| ∀y < |s|(Bit(y, z) ↔ Bit(y, Br_i(x, s))).   (22)

From PA, it can also be seen that the following sentence is true:

  ∀s∀x < |s|∀|z| ≤ |s|(∀y < |s|(Bit(y, z) ↔ Bit(y, Br_i(x, s))) → z = Br_i(x, s)).   (23)

Now, the target x < |s| → ⊔z(z = Br_i(x, s)) is a logical consequence of (22) and (23).
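Computationally, Br_i is a single bit operation; a two-line Python sketch (with our own function name br):

```python
def br(i, x, s):
    """Br_i(x, s): s with its x-th least significant bit set to i."""
    return (s | (1 << x)) if i == 1 else (s & ~(1 << x))

assert br(0, 1, 0b111) == 0b101
assert br(1, 2, 0b001) == 0b101
```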

2.8 Multiplication

In what follows, ⌊u/2⌋ is a pterm for the function that, for a given number u, returns the number whose binary representation is obtained from that of u by deleting the least significant bit if such a bit exists (i.e. if u ≠ 0), and returns 0 otherwise.

Lemma 2.8 CLA11^R_A ⊢ ⊔z(z = ⌊u/2⌋).

Proof. Argue in CLA11^R_A. We first claim that

  Bit(y, ⌊u/2⌋) ⊔ ¬Bit(y, ⌊u/2⌋).   (24)

To win (24), we compute the value a of y′ using the Successor axiom. Next, using the Bit axiom, we figure out whether the ath least significant bit of u is 1 or 0. If it is 1, we choose the left ⊔-disjunct of (24), otherwise choose the right ⊔-disjunct.

From (24), by Comprehension, we get

  ⊔|z| ≤ |u| ∀y < |u|(Bit(y, z) ↔ Bit(y, ⌊u/2⌋)).   (25)

From PA, we also know that

  ∀u∀|z| ≤ |u|(∀y < |u|(Bit(y, z) ↔ Bit(y, ⌊u/2⌋)) → z = ⌊u/2⌋).   (26)

Now, the target ⊔z(z = ⌊u/2⌋) is a logical consequence of (25) and (26).

In what follows, Bitsum(x, y, u, v) is (a pterm for) the function defined by

  Bitsum(x, y, u, v) = (u)_0 × (v)_{y⊖0} + (u)_1 × (v)_{y⊖1} + (u)_2 × (v)_{y⊖2} + . . . + (u)_{min(x,y)} × (v)_{y⊖x}

(here, of course, min(x, y) means the smaller of y, x). Take a note of the following obvious facts:

  PA ⊢ ∀(Bitsum(x, y, u, v) ≤ |u|);   (27)
  PA ⊢ ∀(x ≥ y → Bitsum(x′, y, u, v) = Bitsum(x, y, u, v));   (28)
  PA ⊢ ∀(x > |u| → Bitsum(x, y, u, v) = Bitsum(|u|, y, u, v)).   (29)
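A direct Python transcription of this definition (ours, for illustration) also makes facts (27)-(29) easy to spot-check:

```python
def bitsum(x, y, u, v):
    """Bitsum(x, y, u, v): sum of (u)_i * (v)_{y (-) i} for i <= min(x, y),
    writing (-) for limited subtraction (here just y - i, since i <= y)."""
    return sum(((u >> i) & 1) * ((v >> (y - i)) & 1)
               for i in range(min(x, y) + 1))

# Spot-checking (28) with x >= y: extending x past y changes nothing.
assert bitsum(7, 3, 0b1011, 0b110) == bitsum(6, 3, 0b1011, 0b110)
```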

Lemma 2.9 CLA11^R_A ⊢ ⊔z(z = Bitsum(x, y, u, v)).

Proof. Argue in CLA11^R_A. By Induction on x, we want to show that

  x ≤ |u| → ⊔|z| ≤ ||u||(z = Bitsum(x, y, u, v))   (30)

(here and later in similar cases, as expected, “||u||” is not any sort of new notation; it simply stands for “|(|u|)|”). Note that the consequent of the above formula is logarithmically bounded (namely, the bound for ⊔ is |u|, unlike the linear bound u used in the antecedent) and hence, in view of clause 2 of Definition 2.2 of [5], is guaranteed to be R_space-bounded as required by the conditions of R-Induction.

The basis ⊔|z| ≤ ||u||(z = Bitsum(0, y, u, v)) is solved by choosing, for z, the constant b with b = (u)_0 × (v)_y. Here our writing “×” should not suggest that we are relying on the system's (not yet proven) knowledge of how to compute multiplication. Rather, (u)_0 × (v)_y has a simple propositional-combinatorial meaning: it means 1 if both Bit(0, u) and Bit(y, v) are true, and means 0 otherwise. So, b can be computed by just using the Bit axiom twice and then, if b is 1, further using Fact 2.1.

The inductive step is

  ⊔|z| ≤ ||u||(z = Bitsum(x, y, u, v)) → ⊔|z| ≤ ||u||(z = Bitsum(x′, y, u, v)).   (31)

To solve the above, we wait till Environment chooses a constant a for z in the antecedent. After that, using Fact 2.5, we figure out whether x < y. If not, we choose a for z in the consequent and, in view of (28), win.

Now suppose x < y. With the help of the Successor axiom, Bit axiom, Fact 2.6 and perhaps also Fact 2.1, we find the constant b with b = (u)_{x′} × (v)_{y⊖x′}. Then, using Fact 2.4, we find the constant c with c = a + b, and specify z as c in the consequent. With some basic knowledge from PA including (27), our strategy can be seen to win (31).

Now, to solve the target ⊔z(z = Bitsum(x, y, u, v)), we do the following. We first wait till Environment specifies values x0, y0, u0, v0 for the (implicitly ⊓-bound) variables x, y, u, v, thus bringing the game down to ⊔z(z = Bitsum(x0, y0, u0, v0)). (Ordinarily, such a step would be omitted in an informal argument and we would simply use x, y, u, v to denote the constants chosen by Environment for these variables; but we are being more cautious in the present case.) Now, using the Log axiom, we find the value c0 of |u0| and then, using Fact 2.5, we figure out the truth status of x0 ≤ c0. If it is true, then, choosing x0, y0, u0, v0 for the free variables x, y, u, v of (30), we force the provider of (30) to choose a constant d for z such that d = Bitsum(x0, y0, u0, v0) is true. We select that very constant d for z in ⊔z(z = Bitsum(x0, y0, u0, v0)), and celebrate victory. Now suppose x0 ≤ c0 is false. We do exactly the same as in the preceding case, with the only difference that we choose c0, y0, u0, v0 (rather than x0, y0, u0, v0) for the free variables x, y, u, v of (30). In view of (29), we win.

Fact 2.10 CLA11^R_A ⊢ ⊔z(z = u × v).

Proof. The pencil-and-paper algorithm for multiplying binary numbers, which creates a picture like the following one, is also well known:

      11011
  ×     101
  ---------
      11011
     000000
  + 1101100
  ---------
   10000111

One way to describe it is as follows. The algorithm constructs the value z of the product u × v bit by bit, in the right-to-left order. At any step y > 0 we have a carry c_{y−1} from the preceding step y − 1 (unlike the carries that emerge in the addition algorithm, here the carry can be greater than 1). For step 0, the “carry” c_{−1} from the non-existing “preceding step” #−1 is stipulated to be 0. At each step y = 0, 1, 2, . . ., we first find the sum t_y = Bitsum(y, y, u, v) + c_{y−1}. Then we declare (z)_y to be 0 (resp. 1) if t_y is even (resp. odd); and we declare c_y to be ⌊t_y/2⌋.

Let Carry(y, u, v) be a natural pterm for “the carry c_y that we get at step y ≥ 0 when computing u × v”. Take a note of the following PA-provable³ fact:

  ∀(Carry(y, u, v) ≤ |u|).   (32)

Arguing in CLA11^R_A, we claim that

  y ≤ |u| + |v| → (⊔|w| ≤ ||u||(Carry(y, u, v) = w) ∧ (Bit(y, u × v) ⊔ ¬Bit(y, u × v))).   (33)

This claim can be proven by Induction on y. The basis is

  ⊔|w| ≤ ||u||(Carry(0, u, v) = w) ∧ (Bit(0, u × v) ⊔ ¬Bit(0, u × v)).   (34)

Our strategy for (34) is as follows. Using Lemma 2.9, we compute the value a of Bitsum(0, 0, u, v). Then, using Lemma 2.8, we compute the value b of ⌊a/2⌋. After that, we choose b for w in the left conjunct of (34). Also, using the Bit axiom, we figure out whether Bit(0, a) is true. If yes, we choose Bit(0, u × v) in the right conjunct of (34), otherwise we choose ¬Bit(0, u × v). With some basic knowledge from PA including (32), we can see that victory is guaranteed.

The inductive step is

  ⊔|w| ≤ ||u||(Carry(y, u, v) = w) ∧ (Bit(y, u × v) ⊔ ¬Bit(y, u × v)) →
  ⊔|w| ≤ ||u||(Carry(y′, u, v) = w) ∧ (Bit(y′, u × v) ⊔ ¬Bit(y′, u × v)).   (35)

Here is our strategy for (35). We wait till, in the antecedent, the adversary tells us the carry a = Carry(y, u, v) from the yth step. Using the Successor axiom, we also find the value b of y′. Then, using Lemma 2.9, we compute the value c of Bitsum(b, b, u, v). Then, using Fact 2.4, we compute the value d of a + c. Then, using Lemma 2.8, we compute the value e of ⌊d/2⌋. Now, we choose e for w in the consequent of (35). Also, using the Bit axiom, we figure out whether Bit(0, d) is true. If true, we choose Bit(y′, u × v) in the consequent of (35), otherwise we choose ¬Bit(y′, u × v). Again, with some basic knowledge from PA including (32), we can see that victory is guaranteed.

The following formula is a logical consequence of (33) and the PA-provable fact ∀(y < |u| + |v| + 1̂ → y ≤ |u| + |v|):

  y < |u| + |v| + 1̂ → Bit(y, u × v) ⊔ ¬Bit(y, u × v).   (36)

From (36), by Reasonable Comprehension, we get

  ⊔|z| ≤ |u| + |v| + 1̂ ∀y < |u| + |v| + 1̂(Bit(y, z) ↔ Bit(y, u × v)).   (37)

By PA, we also have

  ∀(|z| ≤ |u| + |v| + 1̂ ∧ ∀y < |u| + |v| + 1̂(Bit(y, z) ↔ Bit(y, u × v)) → z = u × v).   (38)

Now, the target ⊔z(z = u × v) is a logical consequence of (37) and (38).

³ It can be verified by (ordinary) induction on y.
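To make the carry-based construction of Fact 2.10 concrete, here is a short Python sketch (our own illustration): conv plays the role of Bitsum(y, y, u, v), and mult_bit computes the bits of u × v exactly as the algorithm above prescribes:

```python
def conv(y, u, v):
    """Bitsum(y, y, u, v): the column sum (u)_0(v)_y + ... + (u)_y(v)_0."""
    return sum(((u >> i) & 1) * ((v >> (y - i)) & 1) for i in range(y + 1))

def mult_bit(y, u, v):
    """The y-th bit of u * v, computed as in the proof of Fact 2.10:
    t_i = Bitsum(i, i, u, v) + c_{i-1}, bit = t_i mod 2, and the
    carry c_i = floor(t_i / 2)."""
    c = 0                          # the stipulated carry c_{-1}
    for i in range(y + 1):
        t = conv(i, u, v) + c
        bit, c = t % 2, t // 2
    return bit

u, v = 0b11011, 0b101              # the worked example: 11011 x 101 = 10000111
assert all(mult_bit(y, u, v) == ((u * v) >> y) & 1 for y in range(10))
```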

3 The extensional completeness of CLA11^R_A

We let CLA11^R_A continue to be an arbitrary but fixed regular theory. Additionally, we pick and fix an arbitrary arithmetical problem A with an R tricomplexity solution. Proving the extensional completeness of CLA11^R_A — i.e., the completeness part of clause 1 of Theorem 2.6 of [5] — means showing the existence of a theorem of CLA11^R_A which, under the standard interpretation †, equals (“expresses”) A. This is what the present section is exclusively devoted to.

3.1 X, X and (a, s, t)

By definition, the above A is an arithmetical problem because, for some sentence X, A = X†. For the rest of Section 3, we fix such a sentence X, and fix X as an HPM that solves A — and hence X† — in R tricomplexity. In view of Lemma 10.1 of [4] and Lemma 2.3 of [5], we may and will assume that, as a solution of X†, X is provident. We further fix three unary bounds a(x) ∈ R_amplitude, s(x) ∈ R_space and t(x) ∈ R_time such that X is an (a, s, t) tricomplexity solution of X†. In view of conditions 2, 3 and 5 of Definition 2.2 of [5], we may and will assume that the following sentence is true:

  ∀x(x ≤ a(x) ∧ |t(x)| ≤ s(x) ≤ a(x) ≤ t(x)).   (39)

X may not necessarily be provable in CLA11^R_A, and our goal is to construct another sentence X̄ for which, just like for X, we have A = X̄† and which, unlike X, is guaranteed to be provable in CLA11^R_A. Following our earlier conventions, more often than not we will drop the superscript † applied to (para)formulas, writing F† simply as F. We also agree that, throughout the present section, unless otherwise suggested by the context, different metavariables x, y, z, s, s1, . . . stand for different variables of the language of CLA11^R_A.


3.2 Preliminary insights

It might be worthwhile to try to get some preliminary insights into the basic idea behind our extensional completeness proof before going into its details. Let us consider a simple special case where X is ⊓s⊔y p(s, y) for some elementary formula p(s, y). The assertion “X is an (a, s, t) tricomplexity solution of X”⁴ can be formalized in the language of PA as a certain sentence W. Then we let the earlier mentioned X̄ be the sentence ⊓s⊔y(W → p(s, y)).⁵ Since W is true, W → p(s, y) is equivalent to p(s, y). This means that X and X̄, as games, are the same — that is, X† = X̄†. It now remains to understand why CLA11^R_A ⊢ X̄.

Let us agree to write “X(s)” as an abbreviation of the phrase “X in the scenario where, at the very beginning of the play, X's adversary made the move #s, and made no other moves afterwards”. Argue in CLA11^R_A. A central lemma, proven by R-induction (which, in turn, relies on the results of Section 2), is one establishing that the work of X is (provably) “traceable”. A simplest version of this lemma applied to our present case would look like

  t ≤ t|s| → ⊔|v| ≤ s|s| Config(s, t, v),   (40)

where Config(s, t, v) is an elementary formula asserting that v is a partial description of the t'th configuration of X(s). Here v is not a full description as it omits certain information. Namely, v does not include the contents of X's buffer and run tape, because this could make |v| bigger than the allowed s|s|; on the other hand, v includes all other information necessary for finding a similar partial description of the next configuration, such as scanning head locations or work-tape contents.

Tracing the work of X(s) up to its (t|s|)th step in the style of (40), one of the following two eventual scenarios will be observed:

  “X(s) does something wrong”;   (41)

  ¬(41) ∧ “at some point, X(s) makes the move #c for some constant c”.   (42)

Here “X(s) does something wrong” is an assertion that X(s) makes an illegal move, or makes an oversized (exceeding a|s|) move, or consumes too much (exceeding s|s|) work-tape space, or makes no moves at all, etc. — any observable fact that contradicts W. As an aside, why do we consider X(s)'s not making any moves as “wrong”? Because it means that X(s) either loses the game or violates the t time bound by making an (unseen by us) move sometime after step t|s|. Anyway, we will know precisely which of (41) or (42) is the case. That is, we will have the resource

  (41) ⊔ (42).   (43)

If (41) is the case, then X does not satisfy what W asserts about it, so W is false. In this case, we can win ⊔y(W → p(s, y)) by choosing 0 (or any other constant) for y, because the resulting W → p(s, 0), having a false antecedent, is true. Thus, as we have just established,

  (41) → ⊔y(W → p(s, y)).   (44)

Now suppose (42) is the case. This means that the play of X by X(s) hits p(s, c). If W is true and thus X is a winning strategy for X, then p(s, c) has to be true, because hitting a false parasentence would make X lose. Thus, W → p(s, c) is true. If so, we can win ⊔y(W → p(s, y)) by choosing c for y. But how can we obtain c? We know that c is on X(s)'s run tape at the (t|s|)th step. However, as mentioned, the partial description v of the (t|s|)th configuration that we can obtain from (40) does not include this possibly “oversized” constant. It is again the traceability of the work of X — in just a slightly different form from (40) — that comes in to help. Even though we cannot keep track of the evolving (in X's buffer) c in its entirety while tracing the work of X(s) in the style of (40), finding any given bit of c is no problem. And this is sufficient, because our ability to find all particular bits of c, due to Comprehension, allows us to assemble the constant c itself. In summary, we have

  (42) → ⊔y(W → p(s, y)).   (45)

Our target X̄ is now a logical consequence of (43), (44) and (45).

⁴ Perhaps in conjunction with a few other true assertions, such as (39).
⁵ Generally, X̄ will be the result of prefixing every literal of X with “W → ”.

3.3

The sentence W

Remember the operation of prefixation from [4]. It takes a constant game G together with a legal position Φ of G, and returns a constant game hΦiG. Intuitively, hΦiG is the game to which G is brought down by (the labmoves) of Φ. This is an “extensional” operation, insensitive with respect to how games are represented/written. Below we define an “intensional” version h·i!· of prefixation, which differs from its extensional counterpart in that, instead of dealing with games, it deals with parasentences. Namely: Assume F is a parasentence and Φ is a legal position of F . We define the parasentence hΦi!F inductively as follows: • hi!F = F (as always, hi means the empty position). • For any nonempty legal position hλ, Ψi of F , where λ is a labmove and Ψ is a sequence of labmoves: – If λ signifies a choice of a component Gi in an occurrence of a subformula G0 ⊔ G1 or G0 ⊓ G1 of F , and F ′ is the result of replacing that occurrence by Gi in F , then hλ, Ψi!F = hΨiF ′ . – If λ signifies a choice of a constant c for a variable x in an occurrence of a subformula ⊔xG(x) or ⊓xG(x) of F , and F ′ is the result of replacing that occurrence by G(c) in F , then hλ, Ψi!F = hΨiF ′ .   = E ∧ G(101). For example, h⊥1.#101, ⊤1.0i! E ∧ ⊓x G(x) ⊔ H(x) We assume that the reader is sufficiently familiar with G¨odel’s technique of encoding and arithmetizing. Using that technique, we can construct an elementary sentence W1 which asserts that “X is a provident (a, s, t) tricomplexity solution of X”.

(46)

While we are not going to actually construct W1 here, some clarifications could still be helpful. A brute force attempt to express (46) would have to include the phrase “for all computation branches of X ”. Yet, there are uncountably many computation branches, and thus they cannot be encoded through natural numbers. Luckily, this does not present a problem. Instead of considering all computation branches of X , for our purposes it is sufficient to only consider those branches that spell both ⊥-legal and ⊥-quasilegal runs of X. Call such branches relevant. Each branch is fully determined by what moves are made in it by Environment and when. And the number of Environment’s moves in any relevant branch is finite (in fact, is bounded by a certain constant). So, all relevant branches can be listed according to — and, in a sense, identified with — the corresponding finite sequences of Environment’s timestamped moves. This means that there are only countably many relevant branches, and they can be encoded with natural numbers. Next, let us say that a parasentence E is relevant iff E = hΓi!X for some legal position Γ of X. In these terms, the formula W1 can be constructed as a natural arithmetization of the following, expanded, form of (46): “a, s, t are bounds 6 and, for any relevant computation branch B, the following conditions are satisfied: 1. (X plays X in (a, s, t) tricomplexity): For any step c of B, where ℓ is the background of c, we have: (a) The spacecost of c does not exceed s(ℓ); (b) If X makes a move α at step c, then the magnitude of α does not exceed a(ℓ) and the timecost of α does not exceed t(ℓ). 2. (X wins X): There is a legal position Γ of X and a parasentence H such that Γ is the run spelled by B, H = hΓi!X, and the elementarization kHk of H is true. 6 I.e.,

a, s, t are monotone pterms — see Section 2.3 of [5]. This condition is implicit in (46).

13

3. (X plays X providently): There is an integer c such that, for any d ≥ c, X ’s buffer at step d of B is empty.” Clause 2 of the above description relies on the predicate “true” which, in full generality, by Tarski’s theorem, is non-arithmetical (inexpressible in the language of PA). However, in the present case, the truth predicate is limited to the parasentences kHk where H is a relevant parasentence. Due to H’s being relevant, all occurrences of blind quantifiers in kHk are inherited from X. This means that, as long as X is fixed (and, in our case, it is indeed fixed), the ∀, ∃-depth of kHk is bounded by a constant. It is well known (cf. [2]) that limiting the ∀, ∃-depths of arithmetical parasentences to any particular value makes the corresponding truth predicate expressible in the language of PA. So, it is clear that constructing W1 formally does not present a problem. We now define the sentence W by W =def W1 ∧ (39).

3.4

The overline notation

A literal is ⊤, ⊥, or a (nonlogical) atomic formula with or without negation ¬. By a politeral of a formula we mean a positive (not in the scope of ¬) occurrence of a literal in it. For instance, the occurrence of p, as well as of ¬q — but not of q — is a politeral of p ∧ ¬q. While a politeral is not merely a literal but a literal L together with a fixed occurrence, we shall often refer to it just by the name L of the literal, assuming that it is clear from the context which (positive) occurrence of L is meant. As we remember, our goal is to construct a formula X which expresses the same problem as X does and which is provable in CLA11R A . Where E is X or any other formula, we let E be the result of replacing in E every politeral L by W → L. †

Lemma 3.1 For any formula E, including X, we have E † = E . Proof. If E is a literal, then, since W is true, E is equivalent (in the standard model) to W → E, meaning † † that E † = E . The phenomenon E † = E now automatically extends from literals to all formulas. In view of the above lemma, what now remains to do for the completion of our extensional completeness proof is to show that CLA11R A ⊢ X. The rest of Section 3 is entirely devoted to this task. Lemma 3.2 For any formula E, CLA11R A ⊢ W ∨ ∀E. Proof. Induction on the complexity of E. The base, which is about the cases where E is a literal L, is straightforward, as then W ∨ ∀E is the classically valid W ∨ ∀(W → L). If E has the form H0 ∧ H1 , H0 ∨ H1 , H0 ⊓ H1 or H0 ⊔ H1 then, by the induction hypothesis, CLA11R A proves W ∨ ∀H0 and W ∨ ∀H1 , from which W ∨ ∀E follows by LC. Similarly, if E has the form ∀xH(x), ∃xH(x), ⊓xH(x) or ⊔xH(x), then, by the induction hypothesis, CLA11R A proves W ∨ ∀H(x), from which W ∨ ∀E follows by LC.

3.5

Configurations

Let us fix y as the number of work tapes of X , and d as the maximum possible number of labmoves in any legal run of X (the depth of X). For the rest of Section 3, by a configuration we shall mean a description of what intuitively can be thought of as the “current” situation at some step of X . Specifically, such a description consists of the following 7 pieces of information: 1. The state of X .

14

2. A y-element array of the contents of the corresponding y work tapes of X . 3. The content of X ’s buffer. 4. The content of X ’s run tape. 5. A y-element array of the locations of the corresponding y work-tape heads of X . 6. The location of the run-tape head of X . 7. The string that X put into its buffer on the transition to the “current” configuration from the predecessor configuration; if there is no predecessor configuration, then such a string is empty. Notice a difference between our present meaning of “configuration” (of X ) and the normal meaning of this word as given in [4]. Namely, the piece of information from item 7 is not normally part of a configuration, as this information — unlike everything else — is not really necessary in order to be able to find the next configuration. It also is important to point out that any possible combination of any possible settings of the above 7 parameters is considered to be a configuration, regardless of whether such settings can actually be reached in some computation branch of X or not. For this reason, we shall use the adjective reachable to characterize those configurations that can actually be reached. We fix some reasonable encoding of configurations. For technical convenience, we assume that every configuration has a unique code, and vice versa: every natural number is the code of some unique configuration. With this one-to-one correspondence in mind, we will routinely identify configurations with their codes. Namely, for a number c, instead of saying “the configuration encoded by c”, we may simply say “the configuration c”. “The state of c”, or “c’s state”, will mean the state of the machine X in configuration c — i.e., the 1st one of the above-listed 7 components of c. Similarly for the other components of a configuration, such as tape or buffer contents and scanning head locations. By the background of a configuration c we shall mean the greatest of the magnitudes of the ⊥-labeled moves on c’s run tape, or 0 if there are no such moves. The following definition, along with the earlier defined constant d, involves the constants m and p introduced later in Subsection 3.7. Definition 3.3 We say that a configuration c is uncorrupt iff, where Γ is the position spelled on c’s run tape, α is the string found in c’s buffer and ℓ is the background of c, all of the following conditions are satisfied: 1. Γ is a legal position of X. 2. ℓ ≤ a(ℓ) ∧ |t(ℓ)| ≤ s(ℓ) ≤ a(ℓ) ≤ t(ℓ). 3. |m| ≤ s(ℓ), where m is as in (49). 4. |d(a(ℓ) + p + 1) + 1| ≤ s(ℓ), where d is as on page 14 and p is as in (50). 5. The number of non-blank cells on any one of the work tapes of c does not exceed s(ℓ). 6. There is no ⊤-labeled move in Γ whose magnitude exceeds a(ℓ). 7. If α is nonempty, then there is a string β such that hΓ, ⊤αβi is a legal position of X and the magnitude of the move αβ does not exceed a(ℓ). As expected, “corrupt” means “not uncorrupt”. If c merely satisfies the first one of the above seven conditions, then we say that c is semiuncorrupt. We define the yield of a semiuncorrupt configuration c as the game hΓi!X, where Γ is the position spelled on c’s run tape. Let c, d be two configurations and k a natural number. We say that d is a kth unadulterated successor of c iff there is a sequence a0 , . . . , ak (k ≥ 0) of configurations such that a0 = c, ak = d and, for each i ∈ {1, . . . , k}, we have: (1) ai is a legitimate successor of (possible next configuration after) ai−1 , and (2) 15

ai ’s run tape content is the same as that of ai−1 . Note that every configuration c has at most one kth unadulterated successor. The latter is the configuration to which c evolves within k steps/transitions in the scenario where Environment does not move, as long as X does not move in that scenario either (otherwise, if X moves, c has no kth unadulterated successor). Also note that every configuration c has a 0th unadulterated successor, which is c itself. For simplicity and without loss of generality, we shall assume that the work-tape alphabet of X — for each of its work tapes — consists of just 0, 1 and Blank, and that the 1st (leftmost) cells of the work tapes never contain a 0.7 Then, remembering from [4] that an HPM never writes a Blank and never moves its head past the leftmost blank cell, the content of a given work tape at any given time can be understood as the bitstring bn−1 , . . . , b0 , where n is the number of non-blank cells on the tape8 and, for each i ∈ {1, . . . , n}, bn−i is the bit written in the ith cell of the tape (here the cell count starts from 1, with the 1st cell being the leftmost cell of the tape). We agree to consider the number represented by such a string — i.e., the number bn−1 × 2n−1 + bn−2 × 2n−2 + . . . + b1 × 21 + b0 × 20 — to be the code of the corresponding content of the work tape. As with configurations, we will routinely identify work-tape contents with their codes. For further simplicity and again without loss of generality, we assume that, on any transition, X puts at most one symbol into its buffer. We shall further assume that, on a transition to a move state, X never repositions any of its scanning heads and never modifies the content of any of its work tapes.

3.6

The white circle and black circle notations

For the rest of this paper we agree that, whenever τ (z) is a unary pterm but we write τ (~x) or τ (x1 , . . . , xn ) for any — not necessarily known — n ≥ 0, it is to be understood as an abbreviation of the pterm  τ max(x1 , . . . , xn ) (remember that, by convention, if n = 0, max(x1 , . . . , xn ) is 0). And if we write τ |~x|, it is to be understood as τ (|x1 |, . . . , |xn |). Let E(~s) be a formula all of whose free variables are among ~s (but not necessarily vice versa), and z be a variable not among ~s. We will write E ◦ (z, ~s) to denote an elementary formula whose free variables are z, ~s, and which is a natural arithmetization of the predicate that, for any constants a, ~c in the roles of z, ~s, holds (that is, E ◦ (a, ~c) is true) iff a is a reachable uncorrupt configuration whose yield is E(~c) and whose background does not exceed max(~c). Further, we will write E • (z, ~s) to denote an elementary formula whose free variables are z, ~s, and which is a natural arithmetization of the predicate that, for any constants a, ~c in the roles of z, ~s, holds iff E ◦ (a, ~c) is true and a has a (t|~c|)th unadulterated successor. Thus, while E ◦ (a, ~c) simply says that the formula E(~c) is the yield of the (reachable, uncorrupt and ≤ max(~c)-background) configuration a, the stronger E • (a, ~c) additionally asserts that such a yield E(~c) is persistent, in the sense that, unless the adversary moves, X does not move — and hence the yield of a remains the same E(~c) — for at least t|~c| steps beginning from a. We say that a formula E is critical iff one of the following conditions is satisfied: • E is of the form G0 ⊔ G1 or

⊔yG;

• E is of the form ∀yG or ∃yG, and G is critical; • E is of the form G0 ∨ G1 , and both G0 and G1 are critical; • E is of the form G0 ∧ G1 , and at least one of G0 , G1 is critical. Lemma 3.4 Assume E(~s) is a non-critical formula all of whose free variables are among ~s. Then  PA ⊢ ∀ E • (z, ~s) → kE(~s)k . 7 If not, X can be easily modified using rather standard techniques so as to satisfy this condition without losing any of the relevant properties of the old X . The same can be said about the additional assumptions made in the following paragraph. 8 If n = 0, then the string b n−1 , . . . , b0 is empty.

16

Proof. Assume the conditions of the lemma. Argue in PA. Consider arbitrary (∀) values of z and ~s, which we continue writing as z and ~s. Suppose, for a contradiction, that E • (z, ~s) is true but kE(~s)k is false. The falsity of kE(~s)k implies the falsity of kE(~s)k. This is so because the only difference between the two formulas is that, wherever the latter has some politeral L, the former has W → L. The truth of E • (z, ~s) implies that, at some point of some actual play, X reaches the configuration z, where z is uncorrupt, the yield of z is E(~s), the background of z is at most max(~s) and, in the scenario where Environment does not move, X does not move either for at least t|~s| steps afterwards. If X does not move even after t|~s| steps, then it has lost the game, because the eventual position hit by the latter is E(~s) and the elementarization of E(~s) is false (it is not hard to see that every such game is indeed lost). And if X does make a move sometime after t|~s| steps, then, as long as t is monotone (and if not, W is false), X violates the time complexity bound t, because the background of that move does not exceed max(~s) but the timecost is greater than t|~s|. In either case we have: W is false.

(47)

Consider any non-critical formula G. By induction on the complexity of G, we are going to show that ‖G̅‖ is true for any (∀) values of its free variables. Indeed:

If G is a literal, then ‖G̅‖ is W → G which, by (47), is true.
If G is H0 ⊓ H1 or ⊓xH(x), then ‖G̅‖ is ⊤ and is thus true.
G cannot be H0 ⊔ H1 or ⊔xH(x), because then it would be critical.
If G is ∀yH(y) or ∃yH(y), then ‖G̅‖ is ∀y‖H̅(y)‖ or ∃y‖H̅(y)‖, where H(y) is non-critical. In either case ‖G̅‖ is true because, by the induction hypothesis, ‖H̅(y)‖ is true for every value of its free variables, including variable y.
If G is H0 ∧ H1, then both H0 and H1 are non-critical. Hence, by the induction hypothesis, both ‖H̅0‖ and ‖H̅1‖ are true. Hence so is ‖H̅0‖ ∧ ‖H̅1‖ which, in turn, is nothing but ‖G̅‖.
Finally, if G is H0 ∨ H1, then one of the formulas Hi is non-critical. Hence, by the induction hypothesis, ‖H̅i‖ is true. Hence so is ‖H̅0‖ ∨ ‖H̅1‖ which, in turn, is nothing but ‖G̅‖.

Thus, for any non-critical formula G, ‖G̅‖ is true. This includes the case G = E(~s) which, however, contradicts our assumption that ‖E̅(~s)‖ is false.

Lemma 3.5 Assume E(~s) is a critical formula all of whose free variables are among ~s. Then

CLA11R A ⊢ ∃E•(z, ~s) → ∀E̅(~s). (48)

Proof. Assume the conditions of the lemma. By induction on complexity, one can easily see that the ∃-closure of the elementarization of any critical formula is false. Thus, for whatever (∀) values of ~s, ‖E(~s)‖ is false. Arguing further as we did in the proof of Lemma 3.4 when deriving (47), we find that, if E•(z, ~s) is true for whatever (∃) values of z and ~s, then W is false. And this argument can be formalized in PA, so we have PA ⊢ ∃E•(z, ~s) → ¬W. This, together with Lemma 3.2, can be easily seen to imply (48) by LC.
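The inductive definition of criticality used in the last two lemmas is purely syntactic and easy to mechanize. The following Python fragment is a minimal sketch of it (the tuple-based formula encoding is our own assumption, introduced only for illustration):

```python
# Toy formula encoding (ours, for illustration only):
#   ("lit", s)       politeral / elementary formula
#   ("and", F, G)    F ∧ G          ("or", F, G)     F ∨ G
#   ("cand", F, G)   F ⊓ G          ("cor", F, G)    F ⊔ G
#   ("all", F)       ∀yF            ("exists", F)    ∃yF
#   ("call", F)      ⊓yF            ("cexists", F)   ⊔yF

def critical(f) -> bool:
    """Decide criticality by the inductive definition given above."""
    tag = f[0]
    if tag in ("cor", "cexists"):        # G0 ⊔ G1 or ⊔yG
        return True
    if tag in ("all", "exists"):         # ∀yG / ∃yG with G critical
        return critical(f[1])
    if tag == "or":                      # G0 ∨ G1, both critical
        return critical(f[1]) and critical(f[2])
    if tag == "and":                     # G0 ∧ G1, at least one critical
        return critical(f[1]) or critical(f[2])
    return False                         # literals, ⊓- and ⊓y-formulas, etc.

assert critical(("and", ("lit", "p"), ("cor", ("lit", "q"), ("lit", "r"))))
assert not critical(("or", ("lit", "p"), ("cexists", ("lit", "q"))))
```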

3.7 Titles

A paralegal move means a string α such that, for some (possibly empty) string β, position Φ and player ℘ ∈ {⊤, ⊥}, ⟨Φ, ℘αβ⟩ is a legal position of X. In terms of Section 5 of [5], a paralegal move is thus nothing but a (not necessarily proper) prefix of some move of some quasilegal run of X. Every paralegal move α we divide into two parts, called the header and the numer. Namely, if α does not contain the symbol #, then α is its own header, with the numer being 0 (i.e., the empty bit string); and if α is of the form β#c, then its header is β# and its numer, as agreed upon in Section 5 of [5], is c. When we simply say "a header", it is to be understood as "the header of some paralegal move". Note that, unlike numers, there are only finitely many headers. For instance, if X is ⊔xp ∧ ⊓y(q ⊔ r) where p, q, r are elementary formulas, then the headers are 0.#, 1.#, 1.0, 1.1 and their proper prefixes — nine strings altogether.

Given a configuration x, by the title of x we shall mean a partial description of x consisting of the following four pieces of information, to which we shall refer as titular components:

1. x's state.

2. The header of the move spelled in x's buffer.

3. The string put into the buffer on the transition to x from its predecessor configuration; if x has no predecessor configurations, then such a string is empty.

4. The list ℘1α1, . . . , ℘mαm, where m is the total number of labmoves on x's run tape and, for each i ∈ {1, . . . , m}, ℘i and αi are the label (⊤ or ⊥) and the header of the ith labmove.

We say that a title is buffer-empty if its 2nd titular component is the empty string.

Obviously there are infinitely many titles, yet only finitely many of those are titles of semiuncorrupt configurations. We fix an infinite, recursive list

Title0, Title1, Title2, . . . , Titlek, Titlek+1, Titlek+2, . . . , Titlem, Titlem+1, Titlem+2, . . .

— together with the natural numbers 1 ≤ k ≤ m — of all titles without repetitions, where Title0 through Titlem−1 (and only these titles) are titles of semiuncorrupt configurations, with Title0 through Titlek−1 (and only these titles) being buffer-empty titles of semiuncorrupt configurations. By the titular number of a given configuration c we shall mean the number i such that Titlei is c's title.

We may and will assume that, where p is the size of the longest header, m is as above and d is as on page 14, PA proves the following sentences:

W → ∀x(|m̂| ≤ s(x)); (49)
W → ∀x(|d̂(a(x) + p̂ + 1̂) + 1̂| ≤ s(x)). (50)

Indeed, if this is not the case, we can replace s(x) with s(x) + . . . + s(x) + k̂, a(x) with a(x) + . . . + a(x) + k̂, and t(x) with t(x) + . . . + t(x) + k̂, where "s(x)", "a(x)" and "t(x)" are repeated k times, for some sufficiently large k. Based on (39) and Definition 2.2 of [5], one can see that, with these new values of a, s, t and the corresponding new value of W, (49) and (50) become provable while no old relevant properties of the triple are lost, such as X's being a provident (a, s, t) tricomplexity solution of X, (a, s, t)'s being a member of Ramplitude × Rspace × Rtime, or the satisfaction of (39).
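The header/numer division of a paralegal move is a one-line string operation. Here is a minimal Python sketch of it (the plain-string rendering of moves is assumed only for illustration):

```python
def split_move(move: str):
    """Split a paralegal move into its header and numer.

    If the move contains '#', the header is everything up to and
    including the first '#' and the numer is the rest; otherwise the
    move is its own header and the numer is the empty bit string.
    """
    if "#" in move:
        i = move.index("#")
        return move[: i + 1], move[i + 1:]
    return move, ""

# Examples echoing the ⊔xp ∧ ⊓y(q ⊔ r) illustration above:
assert split_move("0.#101") == ("0.#", "101")
assert split_move("1.0") == ("1.0", "")
```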

3.8 Further notation

Here is a list of additional notational conventions. Everywhere below: x, u, z, t range over natural numbers; n ∈ {0, . . . , d}; ~s abbreviates an n-tuple s1, . . . , sn of variables ranging over natural numbers; ~v abbreviates a (2y + 3)-tuple v1, . . . , v2y+3 of variables ranging over natural numbers; "|~v| ≤ s|~s|" abbreviates |v1| ≤ s|~s| ∧ . . . ∧ |v2y+3| ≤ s|~s|; and "⊔|~v| ≤ s|~s|" abbreviates ⊔|v1| ≤ s|~s| . . . ⊔|v2y+3| ≤ s|~s|. Also, we identify informal statements or predicates with their natural arithmetizations.

1. N(x, z) states that configuration x does not have a corrupt kth unadulterated successor for any k ≤ z.

2. D(x, ~s, ~v) is a ∧-conjunction of the following statements (a toy rendering of these components is sketched right after this list):
(a) "There are exactly n (i.e., as many as the number of variables in ~s) labmoves on configuration x's run tape and, for each i ∈ {1, . . . , n}, if the ith (lab)move is numeric, then si is its numer".
(b) "v1 is the location of x's 1st work-tape head, . . . , vy is the location of x's yth work-tape head".
(c) "vy+1 is the content of x's 1st work tape, . . . , v2y is the content of x's yth work tape".
(d) "v2y+1 is the location of x's run-tape head".
(e) "v2y+2 is the length of the numer of the move found in x's buffer".
(f) "v2y+3 is x's titular number, with v2y+3 < m̂ (implying that x is semiuncorrupt)".

3. Dǫ(x, ~s, ~v) abbreviates D(x, ~s, ~v) ∧ v2y+3 < k̂.

4. D⊔(x, ~s) and Dǫ⊔(x, ~s) abbreviate ⊔|~v| ≤ s|~s|D(x, ~s, ~v) and ⊔|~v| ≤ s|~s|Dǫ(x, ~s, ~v), respectively.

5. U(x, t, z, u) says "Configuration t is a uth unadulterated successor of configuration x, and u is the greatest number not exceeding z such that x has a uth unadulterated successor".

6. U~s⊔(x, t, z) abbreviates ⊔|u| ≤ s|~s|U(x, t, z, u).

7. U~s∃(x, t) abbreviates ∃uU(x, t, t|~s|, u).

8. Q(~s, z) abbreviates ∀x[D⊔(x, ~s) → ¬N(x, z) ⊔ (N(x, z) ∧ ∃t(U~s⊔(x, t, z) ∧ D⊔(t, ~s)))].

9. F(x, y) says "y is the numer of the move found in configuration x's buffer".

10. Ẽ◦(~s) abbreviates ∃x(E◦(x, ~s) ∧ Dǫ⊔(x, ~s)).

11. Ẽ•(~s) abbreviates ∃x(E•(x, ~s) ∧ Dǫ⊔(x, ~s)).
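To keep the many components of D(x, ~s, ~v) straight, it may help to view ~v as a flat record describing a configuration. A minimal Python sketch of that record for y work tapes (our own toy rendering, purely illustrative):

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ConfigDescription:
    """The data carried by ~v = v1, ..., v_{2y+3} in D(x, ~s, ~v)."""
    head_locations: List[int]   # v1 ... v_y: work-tape head locations
    tape_contents: List[int]    # v_{y+1} ... v_{2y}: tape contents, as numbers
    run_tape_head: int          # v_{2y+1}: run-tape head location
    buffer_numer_length: int    # v_{2y+2}: length of the numer in the buffer
    titular_number: int         # v_{2y+3}: index of the configuration's title

def as_tuple(d: ConfigDescription):
    """Flatten back into the (2y+3)-tuple ~v."""
    return (*d.head_locations, *d.tape_contents, d.run_tape_head,
            d.buffer_numer_length, d.titular_number)

# With y = 2 work tapes, the tuple has 2*2 + 3 = 7 components:
d = ConfigDescription([0, 3], [0b1011, 0b1], 5, 2, 0)
assert len(as_tuple(d)) == 7
```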

3.9 Scenes

In this section and later, unless otherwise suggested by the context, n, ~s, ~v are as stipulated in Section 3.8.

Given a configuration x, by the scene of x we shall mean a partial description of x consisting of the following two pieces of information for the run tape and each of the work tapes of x:

• The symbol scanned by the scanning head of the tape.
• An indication (yes/no) of whether the scanning head is located at the beginning of the tape.

Take a note of the obvious fact that the number of all possible scenes is finite. We let j denote that number, and let us correspondingly fix a list Scene1, . . . , Scenej of all scenes. Also, for each i ∈ {1, . . . , j}, we let Scenei(x) be a natural formalization of the predicate "Scenei is the scene of configuration x". According to the following lemma, information on x contained in D(x, ~s, ~v) is sufficient to determine (in CLA11R A) the scene of x.

Lemma 3.6 CLA11R A proves

∀x(D(x, ~s, ~v) → Scene1(x) ⊔ . . . ⊔ Scenej(x)). (51)

Proof. Recall that ~s is the tuple s1 , . . . , sn and ~v is the tuple v1 , . . . , v2y+3 . Argue in CLA11R A . Consider an arbitrary (∀) configuration x, keeping in mind — here and later in similar contexts — that we do not really know the (“blind”) value of x. Assume D(x, ~s, ~v ) is true, for otherwise (51) will be won no matter how we (legally) act. Consider the 1st work tape of X . According to D(x, ~s, ~v ), v1 is the location of the corresponding scanning head in configuration x. Using Fact 2.5, we figure out whether v1 = 0. This way we come to know whether the scanning head of the tape is located at the beginning (leftmost cell) of the tape. Next, we know that vy+1 is the content of x’s 1st work tape. Using the Log axiom and Fact 2.5, we compare |vy+1 | with v1 . If v1 ≥ |vy+1 |, we conclude that the symbol scanned by the head is Blank. And if v1 < |vy+1 |, then the symbol is either a 0 or 1; which of these two is the case depends on whether Bit(vy+1 , v1 ) is true or false; we make such a determination using the Bit axiom. The other work tapes will be handled similarly. Finally, consider the run tape. We figure out whether x’s run-tape scanning head is looking at the leftmost cell of the tape by comparing v2y+1 with 0. The task of finding the symbol scanned by the scanning head in this case is less straightforward than in the case of the work tapes, but still doable in view of our ability to perform the basic arithmetic operations established in Section 2. We leave the details to the reader. The information obtained by now fully determines which of Scene1 , . . . , Scenej is the scene of x. We win (51) by choosing the corresponding ⊔ -disjunct in the consequent.
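In ordinary programming terms, the work-tape part of the strategy just described is the following elementary computation. The Python sketch below is ours and purely illustrative; as in the D-description, a tape content is a number whose binary digits are the cell symbols, and the head location is a cell index:

```python
def tape_scene(head: int, content: int):
    """One tape's contribution to a scene, as computed in the proof of
    Lemma 3.6: whether the head is at the leftmost cell, and which
    symbol (0, 1 or blank) it is scanning."""
    at_start = (head == 0)             # the v1 = 0 comparison (Fact 2.5)
    if head >= content.bit_length():   # head past the content: blank cell
        return ("blank", at_start)
    bit = (content >> head) & 1        # the Bit test (Bit axiom)
    return (str(bit), at_start)

assert tape_scene(0, 0b1011) == ("1", True)
assert tape_scene(9, 0b1011) == ("blank", False)
```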


3.10 The traceability lemma

Lemma 3.7 CLA11R A ⊢ z ≤ t|~s| → Q(~s, z).

Proof. Argue in CLA11R A. We proceed by Reasonable R-Induction on z. The basis Q(~s, 0) abbreviates

∀x[D⊔(x, ~s) → ¬N(x, 0) ⊔ (N(x, 0) ∧ ∃t(U~s⊔(x, t, 0) ∧ D⊔(t, ~s)))].

Solving it means solving the following problem for a blindly-arbitrary (∀) x:

D⊔(x, ~s) → ¬N(x, 0) ⊔ (N(x, 0) ∧ ∃t(U~s⊔(x, t, 0) ∧ D⊔(t, ~s))).

To solve the above, we wait till the adversary brings it down to

|~c| ≤ s|~s| ∧ D(x, ~s, ~c) → ¬N(x, 0) ⊔ (N(x, 0) ∧ ∃t(U~s⊔(x, t, 0) ∧ D⊔(t, ~s))) (52)

for some (2y + 3)-tuple ~c = c1, . . . , c2y+3 of constants. From now on we will assume that

|~c| ≤ s|~s| ∧ D(x, ~s, ~c) (53)

is true, for otherwise (52) will be won no matter what. On this assumption, solving (52) (essentially) means solving its consequent, which disabbreviates as

¬N(x, 0) ⊔ (N(x, 0) ∧ ∃t(⊔|r| ≤ s|~s|U(x, t, 0, r) ∧ ⊔|~v| ≤ s|~s|D(t, ~s, ~v))). (54)

In order to solve (54), we first of all need to figure out whether N(x, 0) is true.9 Note that N(x, 0) is true iff x is uncorrupt. So, it is sufficient to just go through the seven conditions of Definition 3.3 and test their satisfaction. From the D(x, ~s, ~c) conjunct of (53), we know that c2y+3 is x's titular number. Therefore, x is semiuncorrupt — i.e., condition 1 of Definition 3.3 is satisfied — iff c2y+3 < m̂. And whether c2y+3 < m̂ we can determine based on Facts 2.1 and 2.5. Next, from the title Titlec2y+3 of x, we can figure out which of the n moves residing on x's run tape are numeric. We look at the numers of such moves from among s1, . . . , sn and, using Fact 2.5 several times, find the greatest numer a. After that, using the Log axiom, we find the background ℓ of x, which is nothing but |a|. Knowing the value of ℓ, we can now test the satisfaction of condition 2 of Definition 3.3 based on clause 2 of Definition 2.5 of [5], the Log axiom and Fact 2.5. Conditions 3 and 4 of Definition 3.3 will be handled in a similar way. Next, from cy+1, . . . , c2y, we know the contents of the work tapes of x. This, in combination with the Log axiom, allows us to determine the numbers of non-blank cells on those work tapes. Comparing those numbers with s(ℓ), we figure out whether condition 5 of Definition 3.3 is satisfied. Checking the satisfaction of conditions 6 and 7 of Definition 3.3 is also a doable task, and we leave details to the reader.

9 Even though we do not know the actual value of (the implicitly ∀-bounded) x, we do know that it satisfies (53), and this is sufficient for our purposes.

So, now we know whether x is corrupt or not. If x is corrupt, we choose ¬N(x, 0) in (54) and win. And if x is uncorrupt, i.e., N(x, 0) is true, then we bring (54) down to

N(x, 0) ∧ ∃t(|0| ≤ s|~s| ∧ U(x, t, 0, 0) ∧ |~c| ≤ s|~s| ∧ D(t, ~s, ~c)).

We win because the above is a (classical-) logical consequence of (53), N(x, 0) and the obviously true |0| ≤ s|~s| ∧ U(x, x, 0, 0). The basis of our induction is thus proven.

The inductive step is z < t|~s| ∧ Q(~s, z) → Q(~s, z′), which partially disabbreviates as

z < t|~s| ∧ ∀x[D⊔(x, ~s) → ¬N(x, z) ⊔ (N(x, z) ∧ ∃t(U~s⊔(x, t, z) ∧ D⊔(t, ~s)))]
→ ∀x[D⊔(x, ~s) → ¬N(x, z′) ⊔ (N(x, z′) ∧ ∃t(U~s⊔(x, t, z′) ∧ D⊔(t, ~s)))]. (55)

With some thought, (55) can be seen to be a logical consequence of

∀x∀t[z < t|~s| ∧ (¬N(x, z) ⊔ (N(x, z) ∧ U~s⊔(x, t, z) ∧ D⊔(t, ~s)))
→ ¬N(x, z′) ⊔ (N(x, z′) ∧ ∃t(U~s⊔(x, t, z′) ∧ D⊔(t, ~s)))],

so let us pick arbitrary (∀) numbers a, b in the roles of the ∀-bounded variables x, t of the above expression and focus on

z < t|~s| ∧ (¬N(a, z) ⊔ (N(a, z) ∧ U~s⊔(a, b, z) ∧ D⊔(b, ~s)))
→ ¬N(a, z′) ⊔ (N(a, z′) ∧ ∃t(U~s⊔(a, t, z′) ∧ D⊔(t, ~s))). (56)

To solve (56), we wait till the ⊔-disjunction in its antecedent is resolved. If the adversary chooses the first ⊔-disjunct there, we do the same in the consequent and win, because ¬N(a, z) obviously implies ¬N(a, z′). Now suppose the adversary chooses the second ⊔-disjunct in the antecedent. We wait further until (56) is brought down to

z < t|~s| ∧ N(a, z) ∧ |d| ≤ s|~s| ∧ U(a, b, z, d) ∧ |~c| ≤ s|~s| ∧ D(b, ~s, ~c)
→ ¬N(a, z′) ⊔ (N(a, z′) ∧ ∃t(U~s⊔(a, t, z′) ∧ D⊔(t, ~s))) (57)

for some constant d and some (2y + 3)-tuple ~c = c1, . . . , c2y+3 of constants. From now on we will assume that the antecedent

z < t|~s| ∧ N(a, z) ∧ |d| ≤ s|~s| ∧ U(a, b, z, d) ∧ |~c| ≤ s|~s| ∧ D(b, ~s, ~c) (58)

of (57) is true, for otherwise we win (57) no matter what. So, our goal is to win the consequent of (57), i.e. the game

¬N(a, z′) ⊔ (N(a, z′) ∧ ∃t(U~s⊔(a, t, z′) ∧ D⊔(t, ~s))). (59)

Using Fact 2.5, we compare d with z. The case d > z is ruled out by our assumption (58), because it is inconsistent with the truth of U(a, b, z, d). If d < z, we bring (59) down to

N(a, z′) ∧ ∃t(|d| ≤ s|~s| ∧ U(a, t, z′, d) ∧ |~c| ≤ s|~s| ∧ D(t, ~s, ~c)), (60)

which is a (classical-) logical consequence of

N(a, z′) ∧ |d| ≤ s|~s| ∧ U(a, b, z′, d) ∧ |~c| ≤ s|~s| ∧ D(b, ~s, ~c). (61)

This way we win, because (61) is true and hence so is (60). Namely, the truth of (61) follows from the truth of (58) in view of the fact that, on our assumption d < z, U(a, b, z, d) obviously implies U(a, b, z′, d) and N(a, z) implies N(a, z′). Now suppose d = z. So, our resource (58) is the same as

z < t|~s| ∧ N(a, z) ∧ |z| ≤ s|~s| ∧ U(a, b, z, z) ∧ |~c| ≤ s|~s| ∧ D(b, ~s, ~c). (62)

The D(b, ~s, ~c) component of (62) contains sufficient information on whether the configuration b has any unadulterated successors other than itself.10 If not, N(a, z) obviously implies N(a, z′) and U(a, b, z, z) implies U(a, b, z′, z); hence, (62) implies N(a, z′) ∧ |z| ≤ s|~s| ∧ U(a, b, z′, z) ∧ |~c| ≤ s|~s| ∧ D(b, ~s, ~c), which, in turn, implies

N(a, z′) ∧ ∃t(|z| ≤ s|~s| ∧ U(a, t, z′, z) ∧ |~c| ≤ s|~s| ∧ D(t, ~s, ~c)). (63)

So, we win (59) by bringing it down to the true (63).

10 Namely, b has an unadulterated successor other than itself iff the state component of b — which can be found in Titlec2y+3 — is not a move state.

Now, for the rest of this proof, assume b has unadulterated successors other than itself. From the U(a, b, z, z) conjunct of (62) we also know that b is a zth unadulterated successor of a. Thus, a (z + 1)st unadulterated successor of a — call it e — exists, implying the truth of

U(a, e, z′, z′). (64)

In order to solve (59), we want to find a tuple ~d = d1, . . . , d2y+3 of constants satisfying

D(e, ~s, ~d) (65)

— that is, satisfying conditions 2(a) through 2(f) of Subsection 3.8 with e, ~s and ~d in the roles of x, ~s and ~v, respectively. In doing so below, we shall rely on the truth of D(b, ~s, ~c) implied by (62). We shall then also rely on our knowledge of the scene of b obtained from D(b, ~s, ~c) based on Lemma 3.6, and our knowledge of the state component of b obtained from c2y+3 (the (2y + 3)rd constant of the tuple ~c).

First of all, notice that, no matter how we select ~d, condition 2(a) of Subsection 3.8 is satisfied with e in the role of x. This is so because, as implied by D(b, ~s, ~c), that condition is satisfied with b in the role of x, and e is an unadulterated successor of b, meaning that b and e have identical run-tape contents.

From D(b, ~s, ~c), we know that the location of b's 1st work-tape head is c1; based on our knowledge of the state and the scene of b, we can also figure out whether that tape's scanning head moves to the right, to the left, or stays put on the transition from b to e. If it moves to the right, we apply the Successor axiom and compute the value d1 to be c1′. If the head stays put or tries to move to the left while c1 = 0 (whether c1 = 0 we figure out using Fact 2.5), we know that d1 = c1. Finally, if it moves to the left (while c1 ≠ 0), then d1 = c1 − 1, and we compute this value using Facts 2.1 and 2.6. We find the constants d2, . . . , dy in a similar manner.

The values dy+1, . . . , d2y can be computed from cy+1, . . . , c2y and our knowledge — determined by b's state and scene — of the symbols written on X's work tapes on the transition from b to e. If such a symbol was written in a previously non-blank cell (meaning that the size of the work-tape content did not change), we shall rely on Fact 2.7 in computing dy+i from cy+i (1 ≤ i ≤ y), as the former is the result of changing one bit in the latter. Otherwise, if the new symbol was written in a previously blank (the leftmost blank) cell, then dy+i is either cy+i + cy+i (if the written symbol is 0) or cy+i + cy+i + 1̂ (if the written symbol is 1); so, dy+i can be computed using Facts 2.1 and 2.4.

We find the value d2y+1 in a way similar to the way we found d1, . . . , dy. From the state and the scene of b, we can also figure out whether the length of the numer of the string in the buffer has increased (by 1) or not on the transition from b to e. If not, we determine that d2y+2 = c2y+2. If yes, then d2y+2 = c2y+2′, which we compute using the Successor axiom.

From the N(a, z) component of (62) we know that configuration a is uncorrupt and hence semiuncorrupt. From (64) we also know that e is an unadulterated successor of a. As an unadulterated successor of a semiuncorrupt configuration, e obviously remains semiuncorrupt, meaning that its titular number d2y+3 is an element of the set {0, . . . , m − 1}. Which of these m values is precisely assumed by d2y+3 is fully determined by the title and the scene of b, both of which we know. All 2y + 3 constants from the ~d group are now found.

As our next step, from (65) — from D(e, ~s, ~d), that is — we figure out whether e is corrupt in the same style as from D(x, ~s, ~c) we figured out whether x was corrupt when building our strategy for (54). If e is corrupt, we choose ¬N(a, z′) in (59) and win. Now, for the rest of this proof, assume

e is uncorrupt. (66)

Using the Successor axiom, we compute the value g of z′ and then we bring (59) down to

N(a, g) ∧ ∃t(|g| ≤ s|~s| ∧ U(a, t, g, g) ∧ |~d| ≤ s|~s| ∧ D(t, ~s, ~d)), (67)

which is a (classical-) logical consequence of

N(a, g) ∧ |g| ≤ s|~s| ∧ U(a, e, g, g) ∧ |~d| ≤ s|~s| ∧ D(e, ~s, ~d). (68)

To declare victory, it remains to see that (68) is true. The 3rd and the 5th conjuncts of (68) are true because they are nothing but (64) and (65), respectively. The 4th conjunct can be seen to follow from (65) and (66). From (62), we know that z < t|~s|, which implies g ≤ t|~s| and hence |g| ≤ |t|~s||. Since e is uncorrupt, by clause 2 of Definition 3.3, we also have |t|~s|| ≤ s|~s|. Thus, the second conjunct of (68) is also true. Finally, for the first conjunct of (68), observe the following. According to (62), N(a, z) is true, meaning that a does not have a corrupt kth unadulterated successor for any k with k ≤ z. By (66), e — which is the (unique) (z + 1)st unadulterated successor of a — is uncorrupt. Thus, a does not have a corrupt kth unadulterated successor for any k with k ≤ z + 1 = g. This means nothing but that N(a, g) is true.
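In programming terms, the computation of ~d from ~c carried out above is an ordinary one-step tape-and-head update. The following Python sketch (ours, purely illustrative; tape contents as numbers, head locations as bit positions, as in the D-description) mirrors the three arithmetic cases used in the proof:

```python
def step_head(c: int, move: str) -> int:
    """Update a head location: 'R' increments it (Successor axiom),
    'L' decrements it unless it is already at cell 0 (Facts 2.1, 2.6),
    'S' leaves it unchanged."""
    if move == "R":
        return c + 1
    if move == "L" and c > 0:
        return c - 1
    return c

def replace_bit(c: int, pos: int, symbol: int) -> int:
    """Overwrite one digit of a tape content c: the result of changing
    one bit in c (the Fact 2.7 case, for a previously non-blank cell)."""
    return (c & ~(1 << pos)) | (symbol << pos)

def append_symbol(c: int, symbol: int) -> int:
    """Write into the leftmost blank cell: the content becomes
    c + c (symbol 0) or c + c + 1 (symbol 1), as in the proof."""
    return c + c + symbol

assert step_head(3, "R") == 4 and step_head(0, "L") == 0
assert replace_bit(0b1010, 1, 0) == 0b1000
assert append_symbol(0b101, 1) == 0b1011
```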

3.11 Junior lemmas

Lemma 3.8 CLA11R A ⊢ ⊔z(z = t|~s| ∧ Q(~s, z)).

Proof. Argue in CLA11R A. Using Fact 2.5 several times, we find the greatest number s among ~s. Then, relying on the Log axiom and condition 2 of Definition 2.5 of [5], we compute the value b of t|s|. Specifying z as b in the resource provided by Lemma 3.7, we bring the latter down to

b ≤ t|~s| → Q(~s, b). (69)

Now, the target ⊔z(z = t|~s| ∧ Q(~s, z)) is won by specifying z as b, and then synchronizing the second conjunct of the resulting b = t|~s| ∧ Q(~s, b) with the consequent of (69) — that is, acting in the former exactly as the provider of (69) acts in the latter, and "vice versa": acting in the latter as Environment acts in the former.

For the purposes of the following two lemmas, we agree that Nothing(t, q) is an elementary formula that asserts that the numer c of the move found in configuration t's buffer does not have a qth most significant bit (meaning that either q = 0 or |c| < q). Next, Zero(t, q) means "¬Nothing(t, q) and the qth most significant bit of the numer of the move found in t's buffer is a 0". Similarly, One(t, q) means "¬Nothing(t, q) and the qth most significant bit of the numer of the move found in t's buffer is a 1".

Lemma 3.9 CLA11R A proves

z ≤ t|~s| → ∀x∀t(N(x, z) ∧ |~v| ≤ s|~s| ∧ Dǫ(x, ~s, ~v) ∧ U(x, t, z, z) → Nothing(t, q) ⊔ Zero(t, q) ⊔ One(t, q)). (70)
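In elementary terms, Nothing, Zero and One just classify one bit of a numer. A small Python sketch (ours; numers rendered as bit strings, most significant bit first):

```python
def msb_status(numer_bits: str, q: int) -> str:
    """Classify the qth most significant bit of a numer, mirroring
    Nothing/Zero/One: 'Nothing' if q = 0 or the numer is shorter
    than q, and otherwise the value of that bit."""
    if q == 0 or len(numer_bits) < q:
        return "Nothing"
    return "One" if numer_bits[q - 1] == "1" else "Zero"

assert msb_status("1011", 0) == "Nothing"
assert msb_status("1011", 2) == "Zero"    # second most significant bit
assert msb_status("1011", 5) == "Nothing"
```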

Proof. Argue in CLA11R A. Reasonable Induction on z. The basis is

∀x∀t(N(x, 0) ∧ |~v| ≤ s|~s| ∧ Dǫ(x, ~s, ~v) ∧ U(x, t, 0, 0) → Nothing(t, q) ⊔ Zero(t, q) ⊔ One(t, q)),

which is obviously won by choosing Nothing(t, q) in the consequent. The inductive step is

z < t|~s| ∧ ∀x∀t(N(x, z) ∧ |~v| ≤ s|~s| ∧ Dǫ(x, ~s, ~v) ∧ U(x, t, z, z) → Nothing(t, q) ⊔ Zero(t, q) ⊔ One(t, q))
→ ∀x∀t(N(x, z′) ∧ |~v| ≤ s|~s| ∧ Dǫ(x, ~s, ~v) ∧ U(x, t, z′, z′) → Nothing(t, q) ⊔ Zero(t, q) ⊔ One(t, q)). (71)

To solve (71), we wait till the adversary makes a choice in the antecedent. If it chooses Zero(t, q) or One(t, q), we make the same choice in the consequent, and rest our case. Suppose now the adversary chooses Nothing(t, q), thus bringing (71) down to

z < t|~s| ∧ ∀x∀t(N(x, z) ∧ |~v| ≤ s|~s| ∧ Dǫ(x, ~s, ~v) ∧ U(x, t, z, z) → Nothing(t, q))
→ ∀x∀t(N(x, z′) ∧ |~v| ≤ s|~s| ∧ Dǫ(x, ~s, ~v) ∧ U(x, t, z′, z′) → Nothing(t, q) ⊔ Zero(t, q) ⊔ One(t, q)). (72)

In order to win (72), we need a strategy that, for arbitrary (∀) and unknown a and c, wins

z < t|~s| ∧ ∀x∀t(N(x, z) ∧ |~v| ≤ s|~s| ∧ Dǫ(x, ~s, ~v) ∧ U(x, t, z, z) → Nothing(t, q))
→ (N(a, z′) ∧ |~v| ≤ s|~s| ∧ Dǫ(a, ~s, ~v) ∧ U(a, c, z′, z′) → Nothing(c, q) ⊔ Zero(c, q) ⊔ One(c, q)). (73)

To solve (73), assume both the antecedent and the antecedent of the consequent of it are true (otherwise we win no matter what). So, all of the following statements are true:

z < t|~s|; (74)
∀x∀t(N(x, z) ∧ |~v| ≤ s|~s| ∧ Dǫ(x, ~s, ~v) ∧ U(x, t, z, z) → Nothing(t, q)); (75)
N(a, z′) ∧ |~v| ≤ s|~s| ∧ Dǫ(a, ~s, ~v); (76)
U(a, c, z′, z′). (77)

Assumption (77) implies that a has (not only a (z′)th but also) a zth unadulterated successor. Let b be that successor. Thus, the following is true:

U(a, b, z, z). (78)

The N(a, z′) conjunct of (76), of course, implies

N(a, z). (79)

From (75), we also get N(a, z) ∧ |~v| ≤ s|~s| ∧ Dǫ(a, ~s, ~v) ∧ U(a, b, z, z) → Nothing(b, q), which, together with (76), (78) and (79), implies

Nothing(b, q). (80)

From (74), we have z′ ≤ t|~s|. Hence, using Lemma 3.7 (in combination with the Successor axiom), we can obtain the resource Q(~s, z′), which disabbreviates as

∀x[D⊔(x, ~s) → ¬N(x, z′) ⊔ (N(x, z′) ∧ ∃t(U~s⊔(x, t, z′) ∧ D⊔(t, ~s)))].

We bring the above down to

∀x[|~v| ≤ s|~s| ∧ D(x, ~s, ~v) → ¬N(x, z′) ⊔ (N(x, z′) ∧ ∃t(U~s⊔(x, t, z′) ∧ D⊔(t, ~s)))]. (81)

 ǫ ~ s ′ Now (81),  in conjunction with (76) and the obvious fact ∀ D(x, ~s, ~v ) → D (a, ~s, ~v ) , implies ∃t U⊔ (a, t, z ) ∧ D⊔ (t, ~s) , i.e.  ∃t ⊔|r| ≤ s|~s|U(a, t, z ′ , r) ∧ D⊔ (t, ~s) . (82) From (77), by PA, we know that c is the unique number satisfying U(a, t, z ′ , r) in the role of t for some r (in fact, for r = z ′ and only for r = z ′ ). This implies that the provider of (82), in fact, provides (can only provide) the resource ⊔|r| ≤ s|~s|U(a, c, z ′ , r) ∧ D⊔ (c, ~s). Thus, D⊔ (c, ~s) is at our disposal, which disabbreviates as ⊔|~v | ≤ s|~s|D(c, ~s, ~v ). The provider of this resource will have to bring it down to ~ ≤ s|~s| ∧ D(c, ~s, d) ~ |d| (83) for some tuple d~ = d1 , . . . , d2y+3 of constants. Here d2y+2 is the length of the numer of the move found in c’s buffer. Using Fact 2.5, we figure out whether d2y+2 = q. If d2y+2 6= q, we choose Nothing(c, q) in the consequent of (73). Now suppose d2y+2 = q. In this case, from d2y+3 (the title of c), we extract information about what bit has been placed into the buffer on the transition from b to c.11 If that bit is 1, we choose One(c, q) in (73); otherwise choose Zero (c, q). With a little thought and with (80) in mind, it can be seen that our strategy succeeds. Lemma 3.10 CLA11R A proves  ∃x∃t∃y N(x, t|~s|) ∧ Dǫ (x, ~s, ~v ) ∧ U~s∃ (x, t) ∧ F(t, y) ∧ Bit(r, y) ⊔ ¬∃x∃t∃y N(x, t|~ s|) ∧ Dǫ (x, ~s, ~v ) ∧ U~s∃ (x, t) ∧ F(t, y) ∧ Bit(r, y) .

(84)

Proof. Argue in CLA11R A. From PA we know that values x, t, y satisfying

Dǫ(x, ~s, ~v) ∧ U~s∃(x, t) ∧ F(t, y) (85)

exist (∃) and are unique. Fix them for the rest of this proof. This allows us to switch from (84) to (86) as the target for our strategy, because the two paraformulas are identical as games:

(N(x, t|~s|) ∧ Bit(r, y)) ⊔ ¬(N(x, t|~s|) ∧ Bit(r, y)). (86)

Relying on the Log axiom, Fact 2.5 and clause 2 of Definition 2.5 of [5], we find the value of s|~s|. Then, using that value and relying on the Log axiom and Fact 2.5 again, we figure out the truth status of |~v| ≤ s|~s|. If it is false, then, with a little analysis of Definition 3.3, x can be seen to be corrupt; for this reason, N(x, t|~s|)

is false, so we choose the right ⊔-disjunct in (86) and rest our case. Now, for the remainder of this proof, assume

|~v| ≤ s|~s|. (87)

By Lemma 3.8, the resource Q(~s, t|~s|), i.e.

∀x[D⊔(x, ~s) → ¬N(x, t|~s|) ⊔ (N(x, t|~s|) ∧ ∃t(U~s⊔(x, t, t|~s|) ∧ D⊔(t, ~s)))],

is at our disposal. We bring it down to

∀x[|~v| ≤ s|~s| ∧ D(x, ~s, ~v) → ¬N(x, t|~s|) ⊔ (N(x, t|~s|) ∧ ∃t(U~s⊔(x, t, t|~s|) ∧ D⊔(t, ~s)))],

which, in view of (85), (87) and the fact ∀(Dǫ(x, ~s, ~v) → D(x, ~s, ~v)), implies

¬N(x, t|~s|) ⊔ (N(x, t|~s|) ∧ ∃t(U~s⊔(x, t, t|~s|) ∧ D⊔(t, ~s))). (88)

We wait till one of the two ⊔-disjuncts of (88) is selected by the provider. If the left disjunct is selected, we choose the right ⊔-disjunct in (86) and retire. Now suppose the right disjunct of (88) is selected. Such a move, with U~s⊔(x, t, t|~s|) and D⊔(t, ~s) disabbreviated, brings (88) down to

N(x, t|~s|) ∧ ∃t(⊔u(|u| ≤ s|~s| ∧ U(x, t, t|~s|, u)) ∧ ⊔~v(|~v| ≤ s|~s| ∧ D(t, ~s, ~v))). (89)

We wait till (89) is fully resolved by its provider, i.e., is brought down to

N(x, t|~s|) ∧ ∃t(|a| ≤ s|~s| ∧ U(x, t, t|~s|, a) ∧ |~d| ≤ s|~s| ∧ D(t, ~s, ~d)) (90)

for some constant a and tuple ~d = d1, . . . , d2y+3 of constants. By PA, (85) and (90) imply

N(x, t|~s|) ∧ U(x, t, t|~s|, a) ∧ D(t, ~s, ~d). (91)

The U(x, t, t|~s|, a) conjunct of (91) further implies

a ≤ t|~s| ∧ U(x, t, a, a). (92)

By PA, the N(x, t|~s|) conjunct of (91) and the a ≤ t|~s| conjunct of (92) imply

N(x, a). (93)

The D(t, ~s, ~d) conjunct of (91) implies that d2y+2 is the length of the numer of the move residing in t's buffer. By the F(t, y) conjunct of (85) we know that y is such a numer. Thus, d2y+2 = |y|. Let q = d2y+2 ⊖ r. This number can be computed using Fact 2.6. The rth least significant bit of y is nothing but the qth most significant bit of y. By Lemma 3.9, we have

a ≤ t|~s| ∧ N(x, a) ∧ |~v| ≤ s|~s| ∧ Dǫ(x, ~s, ~v) ∧ U(x, t, a, a) → Nothing(t, q) ⊔ Zero(t, q) ⊔ One(t, q). (94)

The a ≤ t|~s| and U(x, t, a, a) conjuncts of the antecedent of (94) are true by (92); the N(x, a) conjunct is true by (93); the |~v | ≤ s|~s| conjunct is true by (87); and the Dǫ (x, ~s, ~v ) conjunct is true by (85). Hence, the provider of (94) has to resolve the ⊔ -disjunction in the consequent. If it chooses One(t, q), we choose the left ⊔ -disjunct in (86); otherwise we choose the right ⊔ -disjunct. In either case we win.
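The index juggling in the last part of this proof — passing from the rth least significant bit to the qth most significant one via q = d2y+2 ⊖ r — is the usual length-minus-offset conversion. A quick Python check of it (ours, purely illustrative):

```python
def bit_lsb(y: int, r: int) -> int:
    """Bit(r, y): the rth least significant bit of y (r counted from 0)."""
    return (y >> r) & 1

def bit_msb(bits: str, q: int) -> int:
    """The qth most significant bit (q counted from 1)."""
    return int(bits[q - 1])

y = 0b10110
bits = bin(y)[2:]            # "10110", so |y| = 5
for r in range(len(bits)):
    q = len(bits) - r        # q = |y| ⊖ r
    assert bit_lsb(y, r) == bit_msb(bits, q)
```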


3.12 Senior lemmas

Let E be a formula not containing the variable y. We say that a formula H is a (⊥, y)-development of E iff H is the result of replacing in E:

• either a surface occurrence of a subformula F0 ⊓ F1 by Fi (i = 0 or i = 1),
• or a surface occurrence of a subformula ⊓xF(x) by F(y).

(⊤, y)-development is defined in the same way, only with ⊔, ⊔ instead of ⊓, ⊓.

Lemma 3.11 Assume E(~s) is a formula all of whose free variables are among ~s, y is a variable not occurring in E(~s), and H(~s, y) is a (⊥, y)-development of E(~s). Then CLA11R A proves Ẽ◦(~s) → H̃◦(~s, y).

Proof. Assume the conditions of the lemma. The target formula whose CLA11R A-provability we want to show partially disabbreviates as

∃x(E◦(x, ~s) ∧ Dǫ⊔(x, ~s)) → ∃x(H◦(x, ~s, y) ∧ Dǫ⊔(x, ~s, y)). (95)

Let ⊥β be the labmove that brings E(~s) down to H(~s, y),12 and let α be the header of β. For instance, if E(~s) is G → F0 ⊓ F1 and H(~s, y) is G → F0, then both β and α are "1.0"; and if E(~s) is G → ⊓zF(z) ∨ J and H(~s, y) is G → F(y) ∨ J, then β is 1.0.#y and α is 1.0.#.

For each natural number j, let j+ be the number such that the first three titular components of Titlej+ are the same as those of Titlej, and the 4th titular component of Titlej+ is obtained from that of Titlej by appending ⊥α to it. Intuitively, if Titlej is the title of a given configuration x, then Titlej+ is the title of the configuration that results from x in the scenario where ⊥ made the (additional) move β on the transition to x from the predecessor configuration. Observe that, if j is a member of {0, . . . , m − 1}, then so is j+.

Argue in CLA11R A. To win (95), we wait till Environment brings it down to

∃x(E◦(x, ~s) ∧ |~c| ≤ s|~s| ∧ Dǫ(x, ~s, ~c)) → ∃x(H◦(x, ~s, y) ∧ Dǫ⊔(x, ~s, y)) (96)

for some tuple ~c = c1, . . . , c2y+3 of constants. Based on clause 2 of Definition 2.5 of [5] and Facts 2.1 and 2.5, we check whether c2y+3 < m̂. If not, the antecedent of (96) can be seen to be false, so we win (96) by doing nothing. Suppose now c2y+3 < m̂. In this case we bring (96) down to

∃x(E◦(x, ~s) ∧ |~c| ≤ s|~s| ∧ Dǫ(x, ~s, ~c)) → ∃x(H◦(x, ~s, y) ∧ |~c+| ≤ s|~s| ∧ Dǫ(x, ~s, y, ~c+)), (97)

where ~c+ is the same as ~c, only with (c2y+3)+ instead of c2y+3. The elementary formula (97) can be easily seen to be true, so we win.

12 In the rare cases where there are more than one such β, take the lexicographically smallest one.

Lemma 3.12 Assume E(~s) is a formula all of whose free variables are among ~s, y is a variable not occurring in E(~s), and H1(~s, y), . . . , Hn(~s, y) are all of the (⊤, y)-developments of E(~s). Then CLA11R A proves

Ẽ◦(~s) → Ẽ•(~s) ⊔ ¬W ⊔ ⊔yH̃1◦(~s, y) ⊔ . . . ⊔ ⊔yH̃n◦(~s, y). (98)

Proof. Assume the conditions of the lemma and argue in CLA11R A to justify (98). The antecedent of (98) disabbreviates as ∃x(E◦(x, ~s) ∧ ⊔|~v| ≤ s|~s|Dǫ(x, ~s, ~v)). At the beginning, we wait till the ⊔|~v| ≤ s|~s|Dǫ(x, ~s, ~v) subcomponent of it is resolved and thus (98) is brought down to

∃x(E◦(x, ~s) ∧ |~c| ≤ s|~s| ∧ Dǫ(x, ~s, ~c)) → Ẽ•(~s) ⊔ ¬W ⊔ ⊔yH̃1◦(~s, y) ⊔ . . . ⊔ ⊔yH̃n◦(~s, y) (99)

for some tuple ~c = c1, . . . , c2y+3 of constants. From now on, we shall assume that the antecedent of (99) is true, or else we win no matter what. Let then x0 be the obviously unique number that, in the role of x, makes the antecedent of (99) true. That is, we have

E◦(x0, ~s) ∧ |~c| ≤ s|~s| ∧ Dǫ(x0, ~s, ~c). (100)

In order to win (99), it is sufficient to figure out how to win its consequent, so, from now on, our target will be

Ẽ•(~s) ⊔ ¬W ⊔ ⊔yH̃1◦(~s, y) ⊔ . . . ⊔ ⊔yH̃n◦(~s, y). (101)

For some (⊔) constant a, Lemma 3.8 provides the resource a = t|~s| ∧ Q(~s, a), which disabbreviates as

a = t|~s| ∧ ∀x[D⊔(x, ~s) → ¬N(x, a) ⊔ (N(x, a) ∧ ∃t(U~s⊔(x, t, a) ∧ D⊔(t, ~s)))].

We use ~c to resolve the D⊔(x, ~s) component of the above game, bringing the latter down to

a = t|~s| ∧ ∀x[|~c| ≤ s|~s| ∧ D(x, ~s, ~c) → ¬N(x, a) ⊔ (N(x, a) ∧ ∃t(U~s⊔(x, t, a) ∧ D⊔(t, ~s)))]. (102)

Plugging the earlier fixed x0 for x in (102) and observing that |~c| ≤ s|~s| ∧ D(x0, ~s, ~c) is true by (100), it is clear that having the resource (102), in fact, amounts to (at least, implies) having

a = t|~s| ∧ (¬N(x0, a) ⊔ (N(x0, a) ∧ U~s⊔(x0, t0, a) ∧ D⊔(t0, ~s))) (103)

for some (∃) t0. We wait till the displayed ⊔-disjunction of (103) is resolved by the provider.

Suppose the left ⊔-disjunct ¬N(x0, a) is chosen in (103). Then N(x0, a) has to be false. This means that x0 has a corrupt unadulterated successor. At the same time, from the E◦(x0, ~s) conjunct of (100), we know that x0 is a reachable semiuncorrupt configuration. All this, together with (39), (49) and (50), as can be seen with some analysis, implies that W is false.13 So, we win (101) by choosing its ⊔-disjunct ¬W.

13 Namely, W is false because X "does something wrong" after reaching the configuration x0.

Now suppose the right ⊔-disjunct is chosen in (103), bringing the game down to a = t|~s| ∧ N(x0, a) ∧ U~s⊔(x0, t0, a) ∧ D⊔(t0, ~s). We wait till the above is further brought down to

a = t|~s| ∧ N(x0, a) ∧ |b| ≤ s|~s| ∧ U(x0, t0, a, b) ∧ |~d| ≤ s|~s| ∧ D(t0, ~s, ~d) (104)

for some constant b and some tuple ~d of constants. Take a note of the fact that, by the U(x0, t0, a, b) conjunct of (104), t0 is a bth unadulterated successor of x0. Using Fact 2.5, we figure out whether b = a or b ≠ a.

First, assume b = a, so that, in fact, (104) is

a = t|~s| ∧ N(x0, a) ∧ |a| ≤ s|~s| ∧ U(x0, t0, a, a) ∧ |~d| ≤ s|~s| ∧ D(t0, ~s, ~d). (105)

In this case we choose Ẽ•(~s) in (101) and then further bring the latter down to

∃x(E•(x, ~s) ∧ |~c| ≤ s|~s| ∧ Dǫ(x, ~s, ~c)). (106)

According to (100), E◦(x0, ~s) is true. From the first and the fourth conjuncts of (105), we also know that x0 has a (t|~s|)th unadulterated successor, so that the yield of x0 persists for "sufficiently long", namely, for at least t|~s| steps. Therefore, E◦(x0, ~s) implies E•(x0, ~s). For this reason, (106) is true, as it follows from (100). We thus win.

Now, for the rest of this proof, assume b ≠ a. Note that then, by the U(x0, t0, a, b) conjunct of (104), b < a and, in the scenario that we are dealing with, X made a move on the (b + 1)st step after reaching configuration x0, i.e. immediately (1 step) after reaching configuration t0. Let us agree to refer to that move as σ, and use t1 to refer to the configuration that describes the (b + 1)st step after reaching configuration x0 — that is, the step on which the move σ was made. In view of [4]'s stipulation that an HPM never adds anything to its buffer when transitioning to a move state, we find that σ is exactly the move found in configuration t0's buffer.

Applying Comprehension to the formula (84) of Lemma 3.10 and taking ~c in the role of ~v, we get

⊔|w| ≤ a|~s|∀r < a|~s|(Bit(r, w) ↔ ∃x∃t∃y(N(x, t|~s|) ∧ Dǫ(x, ~s, ~c) ∧ U~s∃(x, t) ∧ F(t, y) ∧ Bit(r, y))).

The provider of the above resource will have to choose a value w0 for w and bring the game down to

|w0| ≤ a|~s| ∧ ∀r < a|~s|(Bit(r, w0) ↔ ∃x∃t∃y(N(x, t|~s|) ∧ Dǫ(x, ~s, ~c) ∧ U~s∃(x, t) ∧ F(t, y) ∧ Bit(r, y))). (107)

From (100) we know that Dǫ(x0, ~s, ~c) is true, and then from PA we know that x0 is a unique number satisfying Dǫ(x0, ~s, ~c). Also remember from (104) that t|~s| = a. For these reasons, the (para)formula

∃x∃t∃y(N(x, t|~s|) ∧ Dǫ(x, ~s, ~c) ∧ U~s∃(x, t) ∧ F(t, y) ∧ Bit(r, y)) (108)

can be equivalently re-written as

∃t∃y(N(x0, a) ∧ U~s∃(x0, t) ∧ F(t, y) ∧ Bit(r, y)). (109)

From the a = t|~s| and U(x0, t0, a, b) conjuncts of (104), by PA, we know that t0 is a unique number satisfying U~s∃(x0, t0). From (104) we also know that N(x0, a) is true. And, from PA, we also know that there is (∃) a unique number — let us denote it by y0 — satisfying F(t0, y0). Consequently, (109) can be further re-written as Bit(r, y0). So, (108) is equivalent to Bit(r, y0), which allows us to re-write (107) as

|w0| ≤ a|~s| ∧ ∀r < a|~s|(Bit(r, w0) ↔ Bit(r, y0)). (110)

With the N(x0, a) conjunct of (104) in mind, by PA we can see that t0, being a bth unadulterated successor of x0 with b < a, is uncorrupt. If so, remembering that y0 is the numer of the move σ found in t0's buffer, by condition 7 of Definition 3.3, we have |y0| ≤ a|~s|. This fact, together with (110), obviously implies that y0 and w0 are simply the same. Thus, w0 is the numer of σ.

In view of the (truth of the) D(t0, ~s, ~d) conjunct of (104), d2y+3 contains information on the header of σ. From this header, we can determine the number i ∈ {1, . . . , n} such that the move σ by X in position E(~s) yields Hi(~s, w0). Fix such an i. Observe that the following is true:

Hi◦(t1, ~s, w0). (111)

From d2y+3 we determine the state of t0. Lemma 3.6 further allows us to determine the scene of t0 as well. These two pieces of information, in turn, determine the titular number of t0's successor configuration t1. Let e be that titular number. Let ~de be the same as ~d, only with e instead of d2y+3.

From the E◦(x0, ~s) conjunct of (100) we know that x0 is uncorrupt and hence semiuncorrupt. This implies that t1 is also (at least) semiuncorrupt, because x0 has evolved to t1 in the scenario where Environment made no moves. For this reason, the titular number e of t1 is smaller than m. From E◦(x0, ~s) and x0's being uncorrupt, in view of clause 3 of Definition 3.3, we also know that m ≤ s|~s|. Consequently, e ≤ s|~s|. This fact, together with the |~d| ≤ s|~s| conjunct of (104), implies that

|~de| ≤ s|~s|. (112)

Next, from (104) again, we know that D(t0, ~s, ~d) is true. This fact, in view of our earlier assumption that X never moves its scanning heads and never makes any changes on its work tapes on a transition to a move state, obviously implies that the following is also true:

D(t1, ~s, w0, ~de). (113)

At this point, at last, we are ready to describe our strategy for (101). First, relying on Fact 2.5 several times, we figure out whether |~de| ≤ s|~s, w0|. If not, then, in view of (112), s is not monotone and hence W is false. In this case we select the ¬W disjunct of (101) and celebrate victory. Now suppose |~de| ≤ s|~s, w0|. In this case we select the ⊔yH̃i◦(~s, y) disjunct of (101), then bring the resulting game down to H̃i◦(~s, w0), i.e., to ∃x(Hi◦(x, ~s, w0) ∧ ⊔|~v| ≤ s|~s, w0|D(x, ~s, w0, ~v)), which we then further bring down to

∃x(Hi◦(x, ~s, w0) ∧ |~de| ≤ s|~s, w0| ∧ D(x, ~s, w0, ~de)).

The latter is true in view of (111), (113) and our assumption |~de| ≤ s|~s, w0|, so we win.
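The notion of (℘, y)-development defined at the beginning of this subsection is, again, easy to mechanize. The Python sketch below (using the same toy formula encoding as before, which is our own assumption) enumerates the (⊤, y)-developments of a formula by resolving one surface ⊔- or ⊔y-occurrence:

```python
# Toy encoding (ours): ("cor", F0, F1) for F0 ⊔ F1; ("cexists", F) for ⊔xF(x),
# where F is a function of the chosen variable; ("and", F, G), ("lit", s), etc.
# For brevity this sketch treats positions under ∧/∨ as the surface ones.

def developments(f, y):
    """Yield all (⊤, y)-developments of f."""
    tag = f[0]
    if tag == "cor":                 # F0 ⊔ F1  ~>  F0  or  F1
        yield f[1]
        yield f[2]
    elif tag == "cexists":           # ⊔xF(x)  ~>  F(y)
        yield f[1](y)
    elif tag in ("and", "or"):       # descend into surface positions
        for g in developments(f[1], y):
            yield (tag, g, f[2])
        for g in developments(f[2], y):
            yield (tag, f[1], g)

E = ("and", ("lit", "p"), ("cexists", lambda y: ("lit", f"q({y})")))
assert list(developments(E, "y")) == [("and", ("lit", "p"), ("lit", "q(y)"))]
```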

3.13 Main lemma

Lemma 3.13 Assume E(~s) is a formula all of whose free variables are among ~s. Then CLA11R A proves Ẽ◦(~s) → E̅(~s).

Proof. We prove this lemma by (meta)induction on the complexity of E(~s). By the induction hypothesis, for any (⊥, y)- or (⊤, y)-development Hi(~s, y) of E(~s) (if there are any), CLA11R A proves

H̃i◦(~s, y) → H̅i(~s, y), (114)

which is the same as

∃x(Hi◦(x, ~s, y) ∧ Dǫ⊔(x, ~s)) → H̅i(~s, y). (115)

Argue in CLA11R A to justify Ẽ◦(~s) → E̅(~s), which disabbreviates as

∃x(E◦(x, ~s) ∧ Dǫ⊔(x, ~s)) → E̅(~s). (116)

To win (116), we wait till Environment brings it down to

∃x(E◦(x, ~a) ∧ |~c| ≤ s|~a| ∧ Dǫ(x, ~a, ~c)) → E̅(~a) (117)

for some tuples ~a = a1, . . . , an and ~c = c1, . . . , c2y+3 of constants.14 Assume the antecedent of (117) is true (if not, we win). Our goal is to show how to win the consequent E̅(~a). Let b be the (obviously unique) constant satisfying the antecedent of (117) in the role of x. Let H1(~s, y), . . . , Hn(~s, y) be all of the (⊤, y)-developments of E(~s). By Lemma 3.12, the following resource is at our disposal:

∃x(E◦(x, ~s) ∧ Dǫ⊔(x, ~s)) → Ẽ•(~s) ⊔ ¬W ⊔ ⊔yH̃1◦(~s, y) ⊔ . . . ⊔ ⊔yH̃n◦(~s, y). (118)

We bring (118) down to

∃x(E◦(x, ~a) ∧ |~c| ≤ s|~a| ∧ Dǫ(x, ~a, ~c)) → Ẽ•(~a) ⊔ ¬W ⊔ ⊔yH̃1◦(~a, y) ⊔ . . . ⊔ ⊔yH̃n◦(~a, y). (119)

Since the antecedent of (119) is identical to the antecedent of (117) and hence is true, the provider of (119) will have to choose one of the ⊔-disjuncts in the consequent

Ẽ•(~a) ⊔ ¬W ⊔ ⊔yH̃1◦(~a, y) ⊔ . . . ⊔ ⊔yH̃n◦(~a, y). (120)

14 Here, unlike the earlier followed practice, for safety, we are reluctant to use the names ~s, ~v for those constants.

Case 1: ¬W is chosen in (120). W has to be false, or else the provider loses. By Lemma 3.2, the resource W ∨ ∀E̅(~s) is at our disposal, which, in view of W's being false, simply means having ∀E̅(~s). But the strategy that wins the latter, of course, also ("even more so") wins our target E̅(~a).

Case 2: One of ⊔yH̃i◦(~a, y) is chosen in (120). This should be followed by a further choice of some constant d for y, yielding H̃i◦(~a, d). Plugging ~a and d for ~s and y in (114), we get H̃i◦(~a, d) → H̅i(~a, d). Thus, the two resources H̃i◦(~a, d) and H̃i◦(~a, d) → H̅i(~a, d) are at our disposal. Hence so is H̅i(~a, d). But, remembering that the formula Hi(~s, y) is a (⊤, y)-development of the formula E(~s), we can now win E̅(~a) by making a move α that brings (E(~a) down to Hi(~a, d) and hence) E̅(~a) down to H̅i(~a, d), which we already know how to win. For example, imagine E(~s) is Y(~s) → Z(~s) ⊔ T(~s) and Hi(~s, y) is Y(~s) → Z(~s). Then the above move α will be "1.0". It indeed brings (Y(~a) → Z(~a) ⊔ T(~a) down to Y(~a) → Z(~a) and hence) the overline of the former down to the overline of the latter. As another example, imagine E(~s) is Y(~s) → ⊔wZ(~s, w) and Hi(~s, y) is Y(~s) → Z(~s, y). Then the above move α will be "1.#d". It indeed brings Y(~a) → ⊔wZ(~a, w) down to Y(~a) → Z(~a, d).

Case 3: Ẽ•(~a), i.e. ∃x(E•(x, ~a) ∧ Dǫ⊔(x, ~a)), is chosen in (120). It has to be true, or else the provider loses. For this reason, ∃xE•(x, ~a) is also true.

Subcase 3.1: The formula E(~s) is critical. Since ∃xE•(x, ~a) is true, so is ∃E•(z, ~s). By Lemma 3.5, we also have ∃E•(z, ~s) → ∀E̅(~s). So, we have a winning strategy for ∀E̅(~s). Of course, the same strategy also wins E̅(~a).

Subcase 3.2: The formula E(~s) is not critical. From ∃xE•(x, ~a) and Lemma 3.4, by LC, we find that the elementarization of E̅(~a) is true. This obviously means that if Environment does not move in E̅(~a), we win the latter. So, assume Environment makes a move α in E̅(~a). The move should be legal, or else we win. Of course, for one of the (⊥, y)-developments Hi(~s, y) of the formula E(~s) and some constant d, α brings E̅(~a) down to H̅i(~a, d). For example, if E(~s) is Y(~s) → Z(~s) ⊓ T(~s), α could be the move "1.0", which brings Y(~a) → Z(~a) ⊓ T(~a) down to Y(~a) → Z(~a); the formula Y(~s) → Z(~s) is indeed a (⊥, y)-development of the formula Y(~s) → Z(~s) ⊓ T(~s). As another example, imagine E(~s) is Y(~s) → ⊓wZ(~s, w). Then the above move α could be "1.#d", which brings Y(~a) → ⊓wZ(~a, w) down to Y(~a) → Z(~a, d); the formula Y(~s) → Z(~s, y) is indeed a (⊥, y)-development of the formula Y(~s) → ⊓wZ(~s, w). Fix the above formula Hi(~s, y) and constant d. Choosing ~a and d for ~s and y in the resource Ẽ◦(~s) → H̃i◦(~s, y) provided by Lemma 3.11, we get the resource Ẽ◦(~a) → H̃i◦(~a, d). Since Ẽ•(~a) is chosen in (120), we have a winning strategy for Ẽ•(~a) and hence for the weaker Ẽ◦(~a). This, together with Ẽ◦(~a) → H̃i◦(~a, d), by LC, yields H̃i◦(~a, d). By choosing ~a and d for ~s and y in (114), we now get the resource H̅i(~a, d). That is, we have a strategy for the game H̅i(~a, d) to which E̅(~a) has evolved after Environment's move α. We switch to that strategy and win.

3.14 Conclusive steps

Now we are ready to claim the target result of this section. Let a be the (code of the) start configuration of X where the run tape is empty. Without loss of generality we may assume that the titular number of a is 0. Let ~0 stand for a (2y + 3)-tuple of 0s. Of course, PA proves X◦(â) ∧ Dǫ(â, ~0),15 and hence PA also proves ∃x(X◦(x) ∧ Dǫ(x, ~0)). Then, by LC, CLA11R A proves ∃x(X◦(x) ∧ ⊔|~v| ≤ s(0)Dǫ(x, ~v)), i.e. ∃x(X◦(x) ∧ Dǫ⊔(x)), i.e. X̃◦. By Lemma 3.13, CLA11R A also proves X̃◦ → X̅. These two imply the desired X̅ by LC, thus completing our proof of the extensional completeness of CLA11R A.

15 Whatever would normally appear as an additional ~s argument of Dǫ is empty in the present case.

4 The intensional completeness of CLA11R A!

Fix an arbitrary regular theory CLA11R A and an arbitrary sentence X with an R tricomplexity solution. Proving the intensional completeness of CLA11R A! — i.e., the completeness part of clause 2 of Theorem 2.6 of [5] — means showing that CLA11R A! proves (not only X̅ but also) X. This is what the present section is devoted to. Let X, (a, s, t), W be as in Section 3, and so be the meaning of the overline notation.

Lemma 4.1 CLA11R A ⊢ W → X.

Proof. First, by induction on the complexity of E, we want to show that

For any formula E, CLA11R A ⊢ ∀(E̅ ∧ W → E). (121)

If E is a literal, then ∀(E̅ ∧ W → E) is nothing but ∀((W → E) ∧ W → E). Of course CLA11R A proves this elementary sentence, which happens to be classically valid. Next, suppose E is F0 ∧ F1. By the induction hypothesis, CLA11R A proves both ∀(F̅0 ∧ W → F0) and ∀(F̅1 ∧ W → F1). These two, by LC, imply ∀((F̅0 ∧ F̅1) ∧ W → F0 ∧ F1). And the latter is nothing but the desired ∀(E̅ ∧ W → E). The remaining cases where E is F0 ∨ F1, F0 ⊓ F1, F0 ⊔ F1, ⊓xF(x), ⊔xF(x), ∀xF(x) or ∃xF(x) are handled in a similar way. (121) is thus proven.

(121) implies that CLA11R A proves X̅ ∧ W → X. As established in Section 3, CLA11R A also proves X̅. From these two, by LC, CLA11R A proves W → X, as desired.

As we remember from Section 3, W is a true elementary sentence. As such, it is an element of A! and is thus provable in CLA11R A!. By Lemma 4.1, (CLA11R A and hence) CLA11R A! also proves both X̅ and X̅ ∧ W → X. Hence, by LC, CLA11R A! ⊢ X. Done.

4.1 On the intensional strength of CLA11R A

While CLA11R A! is intensionally complete, CLA11R A generally is not. Namely, the Gödel–Rosser incompleteness theorem precludes CLA11R A from being intensionally complete as long as it is consistent and A

is recursively enumerable. Furthermore, in view of Tarski's theorem on the undefinability of truth, it is not hard to see that CLA11R A, if sound, cannot be intensionally complete even if the set A is just arithmetical.16

Intensionally, even though incomplete, CLA11R A is still very strong. The last sentence of Section 1.6.3 of [5], in our present terms, reads:

... If a sentence F is not provable in CLA11R A, it is unlikely that anyone would find an R tricomplexity algorithm solving the problem expressed by F: either such an algorithm does not exist, or showing its correctness requires going beyond ordinary combinatorial reasoning formalizable in PA.

To explain and justify this claim, assume F has a (b(x), c(x), d(x)) tricomplexity solution/algorithm F, where (b(x), c(x), d(x)) ∈ Ramplitude × Rspace × Rtime. Let V be a sentence constructed from F, F and (b, c, d) in the same way as we earlier constructed W from X, X and (a, s, t). Note that V is a sentence asserting the "correctness" of F. Now, assume a proof of F's correctness can be formalized in PA, in the precise sense that PA ⊢ V. According to Lemma 4.1, we also have CLA11R A ⊢ V → F. Then, by LC, CLA11R A ⊢ F.

16 Arithmeticity of A is defined as existence of a formula p(x) of the language of PA such that p(n̂) is true if and only if n is the code of some element of A.

5 Harvesting

In this section we are going to see an infinite yet incomplete series of natural theories that are regular and thus adequate (sound and complete) in the sense of Theorem 2.6 of [5]. All these theories look like CLA11R ∅, with the subscript ∅ indicating that there are no supplementary axioms. Given a set S of bounds, by S♥ (resp. S♠) we shall denote the linear (resp. polynomial) closure of S.

Lemma 5.1 Consider any regular boundclass triple R, and any set S of bounds. Assume that, for every pterm p(~x) ∈ S, we have CLA11R ∅ ⊢ ⊔z(z = p|~x|) for some (=any) variable z not occurring in p. Then the same holds for S♠ — and hence also S♥ — instead of S.

Proof. Straightforward (meta)induction on the complexity of pterms, relying on the Successor axiom, Fact 2.4 and Fact 2.10.

Lemma 5.2 Consider any regular boundclass triple R, any pterms p(~x) and a(~x), and any variable z not occurring in these pterms. Assume a(~x) is in Ramplitude, and CLA11R ∅ proves the following two sentences:

⊓⊔z(z = p(~x)); (122)
∀(p(~x) ≤ a(~x)). (123)

Then CLA11R ∅ also proves

⊓⊔z(z = 2^{p|~x|}).

Proof. Assume the conditions of the lemma, and argue in CLA11R ∅. We claim that

Bit(y, 2^{p|~x|}) ⊔ ¬Bit(y, 2^{p|~x|}). (124)

Our strategy for (124) is as follows. Using the Log axiom, we compute the values ~c of |~x|. Then, relying on (122), we find the value a of p(~c). From PA, we know that the ath least significant bit of 2^a — and only that bit — is a 1. So, using Fact 2.5, we compare a with y. If a = y, we choose Bit(y, 2^{p|~x|}) in (124), otherwise choose ¬Bit(y, 2^{p|~x|}). From (124), by Comprehension, we get

⊔|z| ≤ (a|~x|)′ ∀y < (a|~x|)′ (Bit(y, z) ↔ Bit(y, 2^{p|~x|})).

The above, in view of the PA-provable fact |2^{a|~x|}| = (a|~x|)′, implies

⊔|z| ≤ |2^{a|~x|}| ∀y < |2^{a|~x|}| (Bit(y, z) ↔ Bit(y, 2^{p|~x|})). (125)


Obviously, from PA and (123), we also have

∀(|z| ≤ |2^{a|~x|}| ∧ ∀y < |2^{a|~x|}| (Bit(y, z) ↔ Bit(y, 2^{p|~x|})) → z = 2^{p|~x|}). (126)

Now, the target ⊔z(z = 2^{p|~x|}) is a logical consequence of (125) and (126).
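The computational content of this proof is simply that 2^a can be assembled bit by bit once one can answer single-bit queries. A Python sketch of that assembly (ours; the bit oracle plays the role of (124)):

```python
def power_of_two_from_bits(a: int, length_bound: int) -> int:
    """Assemble 2**a from the answers to single-bit queries, in the
    style of Comprehension applied to (124): bit y is 1 iff y = a."""
    def bit_oracle(y: int) -> bool:    # the role of (124)
        return y == a
    z = 0
    for y in range(length_bound):      # y below the length bound (a|~x|)'
        if bit_oracle(y):
            z |= 1 << y
    return z

assert power_of_two_from_bits(5, 10) == 2 ** 5
```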

Here we define the following series B11, B12, B13, . . . , B2, B3, B4, B5, B6, B7, B8 of sets of terms:

1. (a) B11 = {|x|}♥ (logarithmic boundclass); (b) B12 = {|x|^2}♥; (c) B13 = {|x|^3}♥; (d) . . . ;
2. B2 = {|x|}♠ (polylogarithmic boundclass);
3. B3 = {x}♥ (linear boundclass);
4. B4 = {x × |x|, x × |x|^2, x × |x|^3, . . .}♥ (quasilinear boundclass);
5. B5 = {x}♠ (polynomial boundclass);
6. B6 = {2^{|x|}, 2^{|x|^2}, 2^{|x|^3}, . . .}♠ (quasipolynomial boundclass);
7. B7 = {2^x}♠ (exponential-with-linear-exponent boundclass);
8. B8 = {2^x, 2^{x^2}, 2^{x^3}, . . .}♠ (exponential-with-polynomial-exponent boundclass).

Note that all elements of any of the above sets are bounds, i.e. monotone pterms. Further, since all sets have the form S♥ or S♠, they are (indeed) boundclasses, i.e. are closed under variable renaming.

Fact 5.3 For any boundclass triple R listed below, the theory CLA11R ∅ is regular:

(B3, B11, B5); (B3, B12, B5); (B3, B13, B5); . . . ; (B3, B2, B5); (B3, B2, B6); (B3, B2, B7); (B3, B3, B5); (B3, B3, B6); (B3, B3, B7);
(B4, B11, B5); (B4, B12, B5); (B4, B13, B5); . . . ; (B4, B2, B5); (B4, B2, B6); (B4, B4, B5); (B4, B4, B6); (B4, B4, B7);
(B5, B11, B5); (B5, B12, B5); (B5, B13, B5); . . . ; (B5, B2, B5); (B5, B2, B6); (B5, B5, B5); (B5, B5, B6); (B5, B5, B7); (B5, B5, B8).

Proof. Let R be any one of the above-listed triples. By definition, a theory CLA11R ∅ is regular iff the triple R is regular and, in addition, CLA11R ∅ satisfies the two conditions of Definition 2.5 of [5].

To verify that R is regular, one has to make sure that all five conditions of Definition 2.2 of [5] are satisfied by any R from the list. This is a rather easy — mostly straightforward — job. For instance, the satisfaction of condition 3 of Definition 2.2 of [5] is automatically guaranteed in view of the fact that all of the boundclasses B11, . . . , B8 have the form S♥ or S♠, and the Rtime component of each of the listed triples R has the form S♠. We leave a verification of the satisfaction of the other conditions of Definition 2.2 of [5] to the reader.

As for Definition 2.5 of [5], condition 1 of it is trivially satisfied because the set of the supplementary axioms of each theory CLA11R ∅ under question is empty. So, it remains to only verify the satisfaction of condition 2. Namely, we shall show that, for every bound b(~x) from Ramplitude, Rspace or Rtime, CLA11R ∅ proves ⊔z(z = b|~x|). Let us start with Rspace.

Assume Rspace = B11 = {|x|}♥. In view of Lemma 5.1, in order to show (here and below in similar situations) that CLA11R ∅ ⊢ ⊔z(z = b|~x|) for every bound b(~x) from this boundclass, it is sufficient for us to just show that CLA11R ∅ ⊢ ⊔z(z = ||x||). But this is indeed so: apply the Log axiom to x twice.

Assume Rspace = B12 = {|x|^2}♥. Again, in view of Lemma 5.1, it is sufficient for us to show that CLA11R ∅ proves ⊔z(z = ||x||^2), i.e. ⊔z(z = ||x|| × ||x||). But this is indeed so: apply the Log axiom to x twice to obtain the value a of ||x||, and then apply Fact 2.10 to compute the value of a × a. The cases of Rspace being B13, B14, . . . will be handled in a similar way, relying on Fact 2.10 several times rather than just once.

The case of Rspace = B2 = {|x|}♠ will be handled in exactly the same way as we handled Rspace = B11 = {|x|}♥ . So will be the case of Rspace = B3 = {x}♥ , with the only difference that, the Log axiom needs to be applied only once rather than twice. Assume Rspace = B4 = {x × |x|, x × |x|2 , x × |x|3 , . . .}♥ . In view of Lemma 5.1, it is sufficient for us to i show that, for any i ≥ 1, CLA11R ∅ ⊢ ⊔z(z = |x| × ||x|| ). This provability indeed holds due to the Log axiom (applied twice) and Fact 2.10 (applied i times). The case of Rspace = B5 = {x}♠ will be handled in exactly the same way as we handled Rspace = B3 = {x}♥ . Looking back at the triples listed in the present lemma, we see that Rspace is always one of B11 , B12 , . . ., B2 , B3 , B4 , B5 . This means we are done with Rspace . If Ramplitude or Rtime is one of B11 , B12 , . . ., B2 , B3 , B4 , B5 , the above argument applies without any changes. In fact, Ramplitude is always one of B3 , B4 , B5 , meaning that we are already done with Ramplitude as well. So, it only remains to consider Rtime in the cases where the latter is one of B6 , B7 , B8 . 2 3 Assume Rtime = B6 = {2|x|, 2|x| , 2|x| , . . .}♠ . In view of Lemma 5.1, it is sufficient for us to show that, ||x||i ). Consider any such i. Relying on the Log axiom once and Fact for any i ≥ 1, CLA11R ∅ ⊢ ⊔z(z = 2 R 2.10 i times, we find that CLA11∅ ⊢ ⊔z(z = |x|i ). Also, as R is a regular boundclass triple, Ramplitude is  at least linear, implying that it contains a bound a(x) with PA ⊢ ∀x |x|i ≤ a(x) . Hence, by Lemma 5.2, ||x||i CLA11R ), as desired. ∅ ⊢ ⊔z(z = 2 |x| Assume Rtime = B7 = {2x }♠ . It is sufficient to show that CLA11R ∅ ⊢ ⊔z(z = 2 ). The sentence R ⊔z(z = x) is logically valid and hence provable Ramplitude  in CLA11∅ . Also, due to being at least linear, |x| contains a bound a(x) with PA ⊢ ∀x x ≤ a(x) . Hence, by Lemma 5.2, CLA11R ⊢ ⊔ z(z = 2 ), as desired. ∅ x x2 x3 ♠ Finally, assume Rtime = B8 = {2 , 2 , 2 , . . .} . It is sufficient for us to show that, for any i ≥ 1, |x|i ). Consider any such i. Relying on Fact 2.10 i times, we find that CLA11R CLA11R ∅ ⊢ ∅ ⊢ ⊔z(z = 2 i ⊔z(z = x ). Also, Ramplitude , which in our case — as seen from the list of triples — can only be B5 = {x}♠ , contains the bound xi , for which we trivially have PA ⊢ ∀x(xi ≤ xi ). Hence, by lemma 5.2, CLA11R ∅ ⊢ |x|i ⊔z(z = 2 ), as desired. In view of Theorem 2.6 of [5], an immediate corollary of Fact 5.3 is that, where R is any one of the boundR class triples listed in Fact 5.3, the theory CLA11R ∅ (resp. CLA11∅! ) is extensionally (resp. intensionally) (B ,B ,B ) adequate with respect to computability in the corresponding tricomplexity. For instance, CLA11∅ 3 2 5 (B ,B2 ,B5 )

and CLA11∅! 3

are adequate with respect to (simultaneously) linear amplitude, polylogarithmic space (B ,B ,B )

(B ,B ,B )

and polynomial time computability; CLA11∅! 5 3 8 and CLA11∅! 5 3 8 are adequate with respect to polynomial amplitude, linear space and exponential time computability; and so on. Fact 5.3 was just to somewhat illustrate the scalability and import of Theorem 2.6 of [5]. There are many meaningful and interesting boundclasses and boundclass triples yielding (regular and hence) adequate theories yet not mentioned in this section.
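As a quick, informal sanity check on how such bounds behave, the following small Python snippet (ours, not part of the formal development) evaluates a few of the generators named above, reading |x| as the length of the binary numeral of x:

    def length(x: int) -> int:
        # |x|: the number of bits in the binary numeral of x (taking |0| = 1)
        return max(1, x.bit_length())

    def b6_gen(i: int, x: int) -> int:
        # the i-th generator 2^(|x|^i) of B6, evaluated at x
        return 2 ** (length(x) ** i)

    def b8_gen(i: int, x: int) -> int:
        # the i-th generator 2^(x^i) of B8, evaluated at x
        return 2 ** (x ** i)

    x = 9                      # binary 1001, so |x| = 4 and ||x|| = 3
    print(length(x))           # 4
    print(length(length(x)))   # 3
    print(b6_gen(1, x))        # 2^4 = 16
    print(b8_gen(1, x))        # 2^9 = 512

Here the call length(length(x)) corresponds to ||x||, matching the bounds of the form 2^{||x||^i} used in the proof above.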

6 Final remarks

In writing this paper, the author has tried to keep a balance between generality and simplicity, often sacrificing the former for the sake of the latter. Among the ways in which the present results could be strengthened is relaxing the concept of a regular theory CLA11^R_A. Specifically, the condition of Ramplitude's being linearly closed can be removed as long as Definition 2.2 of [5] is correspondingly refined/readjusted. This condition, in fact, amounts to adopting an asymptotic view of amplitude complexity, which significantly simplifies the completeness proofs, allowing us to avoid numerous annoying exceptions and details one would otherwise need to deal with. As noted in [4], however, unlike time and space complexities, we may not always be willing to — and it is almost never really necessary to — settle for merely asymptotic analysis when it comes to amplitude complexity. A non-asymptotic approach to amplitude complexity would make it possible to consider much finer amplitude complexities, such as “strictly ℓ” (“non-size-increasing”, as studied in [1]), “ℓ plus a constant”, etc.


A Proof of Lemma 5.4 of [5]

Lemma 5.4 of [5] states: There is an effective procedure that takes an arbitrary bounded formula H(~y), an arbitrary HPM L and constructs an HPM M such that, as long as L is a provident solution of H(~y), the following conditions are satisfied:
1. M is a quasilegal and unconditionally provident solution of H(~y).
2. If L plays H(~y) prudently, then M plays H(~y) unconditionally prudently.
3. For any arithmetical functions a, s, t, if L plays H(~y) in tricomplexity (a, s, t), then M plays in unconditional tricomplexity (a, s, t).

Consider an arbitrary HPM L and an arbitrary bounded formula H(~y) with all free variables displayed. We want to (show how to) construct an HPM M — with the same number of work tapes as L — satisfying the above conditions 1-3. From our construction of M it will be immediately clear that M is built effectively from H(~y) and L. As usual, we may not always be very careful about the distinction between H(~y) and ⊓H(~y), but which of these two is really meant can always easily be seen from the context.

We agree on the following terminology. A semiposition is a string S of the form ℘1α1 ... ℘nαnω, where each ℘i is a label ⊤ or ⊥, each αi is a string over the keyboard alphabet, and ω ∈ {ε, Blank} (remember that ε stands for the empty string). When ω is Blank, we say that S is complete; otherwise S is incomplete. We say that a semiposition S′ is a completion of S iff (1) either S is complete and S′ = S, or (2) S is incomplete and S′ = Sβ Blank for some (possibly empty) string β over the keyboard alphabet. When S is complete — namely, is ℘1α1 ... ℘nαn Blank — then the position spelled by S, as expected, is the position ⟨℘1α1, ..., ℘nαn⟩. We say that a semiposition S is legitimate (resp. quasilegitimate) iff there is a completion S′ of S such that the position spelled by S′ is a legal (resp. quasilegal) position of ⊓H(~y). The compression of a legitimate or quasilegitimate semiposition S is the expression S̄ resulting from S through replacing the numer of every numeric move by the symbol ⋆. Note that, while generally there are infinitely many possible legitimate or quasilegitimate semipositions, the number of their compressions is finite. The reason is that an infinite variety of legal runs of ⊓H(~y) exists only due to numer variations within numeric moves; in compressions, however, all numers degenerate into ⋆.

In the context of a given step i of a given computation branch of a given HPM, by the so-far-seen semiposition we shall mean the semiposition W written at time i on the initial section of the run tape that has ever been visited (at steps ≤ i) by the run-tape scanning head, except that the last symbol of W should be Blank if the corresponding cell contained a Blank at the time when it was last seen by the scanning head, even if the content of that cell changed (namely, became ⊤ or ⊥) later. Intuitively, such a W is exactly what the machine knows at time i about its run-tape content based on what it has seen there so far. Next, let Z be the semiposition ⊤δ1 ... ⊤δm, where δ1, ..., δm are the moves made by the machine so far (at steps ≤ i). And let π be the string residing in the buffer at time i. Then by the so-far-authored semiposition we shall mean the (complete) semiposition Z Blank if π is empty, and the (incomplete) semiposition Z⊤π if π is nonempty.

The windup of a quasilegitimate yet incomplete semiposition V of the form ⊤δ1 ... ⊤δm⊤π is the lexicographically smallest string ω such that ⟨⊤δ1, ..., ⊤δm, ⊤πω⟩ is a ⊤-quasilegal position of ⊓H(~y). Note that there is only a constant number of strings that are windups of some incomplete quasilegitimate semipositions. Also note that knowing the compression V̄ of an (incomplete quasilegitimate) semiposition V is sufficient to determine V's windup.

We let M keep (partial) track of the so-far-authored quasilegitimate semiposition V through remembering its compression V̄. Similarly, M keeps track of the so-far-seen legitimate semiposition W through remembering its compression W̄; besides, one of the symbols of W̄ is marked to indicate (keep track of) the current location of M's run-tape scanning head.¹⁷ With appropriately arranged details that are not worth discussing here, it is possible for M, this way, to be able to immediately detect if and when W becomes illegitimate. If and when this happens, we let M retire; besides, if V is quasilegitimate yet incomplete at the time of this event, then M puts V's windup into the buffer and, simultaneously, enters a move state before retiring. We shall refer to a move made this way as a retirement move.

¹⁷ Namely, a marked symbol of W̄ other than ⋆ indicates that the head is looking at the corresponding symbol of W, and a marked ⋆ indicates that the head is looking at one of the bits of the corresponding numer.
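To make the compression operation concrete, here is a small illustrative Python rendering (ours, and simplified: labels are written as plain characters, a numer is taken to be the binary string following '#', and spaces are inserted only for readability):

    import re

    def compress(semiposition: str) -> str:
        # Replace the numer of every numeric move (the binary string
        # following '#') by the symbol '*', as in the definition above.
        return re.sub(r'(?<=#)[01]+', '*', semiposition)

    # 'T'/'B' stand in for the labels; 'Blank' marks a complete semiposition.
    s = 'B#1001 T0.1.#1111111 Blank'
    print(compress(s))   # B#* T0.1.#* Blank

Since all numers degenerate into '*', only finitely many compressions exist, which is exactly what allows M to store them in its state memory, as explained next.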


Maintaining the above W̄ (together with its mark) and V̄ only requires a constant amount of memory, so this can be fully done through M's state (rather than tape) memory. This means that, as long as W remains legitimate, M can follow the work of L step-by-step without having any time or space overhead, and act (reposition heads, put things into the buffer, move, etc.) exactly like L, with the only difference between the two machines being that M has a greater number of states than L does, with any given state of L being imitated by one of many “counterpart” states of M, depending on the present values of V̄ and (the marked) W̄ that each such state “remembers” (e.g., is labeled with).

For the rest of this appendix, assume L is a provident solution of H(~y). Fix an arbitrary computation branch B of M, and let Γ^B_∞ be the run spelled by B. From now on, whenever a context requires a reference to a computation branch but such a reference is missing, it should be understood that we are talking about B. For simplicity, we shall assume that, in B, Environment made (legal) initial moves that brought ⊓H(~y) down to H(~c) for some constants ~c. Fix these ~c. The case of B violating this assumption is not worth our attention for the reason of being trivial or, at least, much simpler than the present case. We also fix arbitrary arithmetical functions a, s, t. We may assume that all three functions are unary, or else replace them with their unarifications. Since the parameters B, Γ^B_∞, ~c, a, s, t are arbitrary, it is sufficient for us to focus on them and just show that the three conditions of the lemma are satisfied in the context of these particular parameters (for instance, to show that M plays ⊓H(~y) quasilegally, it is sufficient to show that Γ^B_∞ is a ⊤-quasilegal run of ⊓H(~y)).

We extend the notation Γ^B_∞ from B to any computation branch C of either M or L, stipulating that Γ^C_∞ is the run spelled by C. We further agree that, for any i ≥ 0, Γ^C_i stands for the position spelled on the run tape of the corresponding machine at step i of branch C, and ℓ^C_i stands for the background of that step. We also agree that W^C_i denotes the so-far-seen semiposition at step i of branch C, and V^C_i denotes the so-far-authored semiposition at step i of C. Finally, since ⊓H(~y) and H(~c) are the only formulas/games we deal with in this appendix, without risk of ambiguity we will often omit references to them when saying “legal”, “quasilegal”, “prudent”, etc.

Consider any i such that W^B_i is legitimate. The legitimacy of W^B_i means it has a completion U = ℘1α1 ... ℘nαnβ Blank such that the position Ω = ⟨℘1α1, ..., ℘nαnβ⟩ spelled by U is legal. Let k be the number of ⊥-labeled moves in Ω. And let C be the computation branch of M in which Environment acts exactly as it does in B, with only the following two differences: (1) Environment stops making any moves after it makes its kth move (meaning that, if k = 0, Environment simply never moves); (2) if ℘n = ⊥, Environment's kth move (i.e. the nth move of the play) is αnβ. Of course, C spells a legal run. For this reason, in this branch M behaves just like L in the branch D where the environment makes exactly the same moves, in exactly the same order and at exactly the same times, as in C. We call such a D the W^B_i-induced branch of L.

The following two lemmas are simple observations, hardly requiring any proofs:

Lemma A.1 Consider any j ≥ 0 such that W^B_j is legitimate, and let D be the W^B_j-induced branch of L. We have:
1. In D, L's environment makes no moves at any step e with e > j.
2. Γ^D_∞ is a legal run of ⊓H(~y).
3. The initial segment of Γ^B_∞ that brings ⊓H(~y) down to H(~c) is also an initial segment of Γ^D_∞.
4. V^D_{j+1} = V^B_{j+1}, and hence also (Γ^D_{j+1})^⊤ = (Γ^B_{j+1})^⊤.
5. For any e ≥ 0, ℓ^D_e ≤ ℓ^B_e.

Lemma A.2 There is a number s such that, for every j ≥ s, W_j = W_s. The smallest of such numbers s we call the W-stabilization point.

Having set up the above preliminaries, we prove the lemma clause by clause.

CLAUSE 1. Our goal is to show that:

    Γ^B_∞ is ⊤-won (so, M is a solution of ⊓H(~y));   (127)
    Γ^B_∞ is ⊤-quasilegal (so, M plays quasilegally);   (128)
    B is provident (so, M plays unconditionally providently).   (129)

(127): From the description of M we can see that, if Γ^B_∞ is ⊥-legal and thus the so-far-seen semiposition always remains legitimate, M interacts with its environment exactly like L interacts with its environment in the corresponding scenario¹⁸ and, since L is a solution of ⊓H(~y), Γ^B_∞ is ⊤-won. And if Γ^B_∞ is ⊥-illegal, then Γ^B_∞ is automatically ⊤-won.

(128): For a contradiction, suppose Γ^B_∞ is not ⊤-quasilegal. Let i be the smallest number such that the position Γ^B_i is not ⊤-quasilegal. Let φ be the (“offending”) move that M made at step i of B.

Assume W^B_{i−1} is legitimate. Let D be the W^B_{i−1}-induced branch of L. According to clause 4 of Lemma A.1, (Γ^D_i)^⊤ = (Γ^B_i)^⊤. So, Γ^D_i is not ⊤-quasilegal, and then the same holds for the extension Γ^D_∞ of Γ^D_i. Of course, Γ^D_∞'s not being ⊤-quasilegal implies that it is simply illegal. But this contradicts clause 2 of Lemma A.1, according to which Γ^D_∞ is legal.

Now assume W^B_{i−1} is not legitimate. Note that i ≥ 2, because, at the initial step 0, M would not be able to see an illegitimate semiposition (at best, M would only see the label ⊥ in the leftmost cell, nothing else). Further note that the semiposition W^B_{i−2} is legitimate, because otherwise M would have retired right after seeing it and thus would not have moved at step i. As soon as M sees the illegitimate W^B_{i−1}, it retires. Thus, the move φ made at step i is a retirement move. Looking back at the conditions under which M makes a retirement move, we see that the so-far-authored semiposition V^B_i should be complete and quasilegitimate. Let Σ^B_i be the position spelled by V^B_i. So, Σ^B_i is ⊤-quasilegal. But note that (Σ^B_i)^⊤ = (Γ^B_i)^⊤, and thus we are facing a contradiction because, as we remember, Γ^B_i is not ⊤-quasilegal.

(129): As already noted in the proof of (127), if the run Γ^B_∞ is ⊥-legal, M and its environment behave exactly like L and its environment in the corresponding scenario. Then, since L plays providently, B is a provident branch. Suppose now Γ^B_∞ is ⊥-illegal.

First, assume the so-far-seen semiposition in B becomes illegitimate at some step i. Note that (i > 0 and) W^B_{i−1} is legitimate. Let D be the W^B_{i−1}-induced branch of L. By clauses 2 and 4 of Lemma A.1, Γ^D_∞ is ⊥-legal and V^D_i = V^B_i. The semiposition V^D_i must be quasilegitimate because otherwise, as can be seen with a little thought, (the provident) L will have to make an illegal move in branch D at some point. But, in branch B, M retires immediately after seeing the non-legitimate W^B_i. The only possibility for the buffer content of M to remain nonempty after retirement would be if V^B_i was not quasilegitimate. However, as just observed, this is not the case.

Now assume the so-far-seen semiposition in B never becomes illegitimate. Let i be the W-stabilization point (which exists according to Lemma A.2). And let D be the W^B_i-induced branch of L. It is not hard to see that, throughout the entire play, M behaves — makes moves, puts strings into the buffer, repositions scanning heads — the same way in B as L behaves in D. From clause 2 of Lemma A.1, we also know that D spells a ⊥-legal run and hence, due to L's playing providently, D contains infinitely many steps with empty buffer contents. Then so does B. That is, B is provident.

CLAUSE 2. Assume L is a prudent solution of H(~y). We want to show that the run Γ^B_∞ is ⊤-prudent. For a contradiction, deny this. Let i be the smallest number such that Γ^B_i is not ⊤-prudent. Note that i > 0. It is obvious that a move is made in B at step i. Let us call that move φ.

Assume W^B_{i−1} is legitimate. Let D be the W^B_{i−1}-induced branch of L. Clauses 3 and 4 of Lemma A.1 imply that Γ^D_i is not ⊤-prudent, and then the same holds for the extension Γ^D_∞ of Γ^D_i. At the same time, by clause 2 of the same lemma, Γ^D_∞ is legal. This is a contradiction, because L is a prudent solution of H(~y) and, as such, it could not have generated a (⊥-)legal run (Γ^D_∞) that is not ⊤-prudent.

Now assume W^B_{i−1} is not legitimate. Then, just as in the last paragraph of our proof of (128), i ≥ 2, W^B_{i−2} is legitimate, and φ is a retirement move. Let D be the W^B_{i−2}-induced branch of L. Analyzing the conditions under which M makes a retirement move, we see that φ (rather than some proper prefix of it) was the content of M's buffer at step i − 1 of B. Then, by clause 4 of Lemma A.1, the same is the case for L's buffer in branch D. But, since L plays providently and (by clause 2 of Lemma A.1) Γ^D_∞ is legal, in D, sooner or later L will have to make a move φ′ such that φ is a prefix of φ′. Obviously such a move φ′, if legal, will inherit the imprudence of φ. This, together with clause 2 of Lemma A.1, contradicts our assumption that L is a prudent solution of H(~y).

CLAUSE 3. Assume L is an (a, s, t) tricomplexity solution of H(~y).

Amplitude: Assume M makes a move φ at a step i. Let mφ be the magnitude of φ. We want to show that mφ ≤ a(ℓ^B_i).

¹⁸ Namely, in the computation branch where L's environment makes exactly the same moves at exactly the same times and in exactly the same order as in B.


First, suppose W^B_{i−1} is legitimate. Let D be the W^B_{i−1}-induced branch of L. In view of clause 4 of Lemma A.1, in D, L makes the same move φ at the same time i. Since L plays in amplitude a and since, by clause 2 of Lemma A.1, the run Γ^D_∞ is legal, we have mφ ≤ a(ℓ^D_i). The desired mφ ≤ a(ℓ^B_i) follows from here by clause 5 of Lemma A.1.

Now suppose W^B_{i−1} is not legitimate. Then, as in the last paragraph of our proof of (128), i ≥ 2, W^B_{i−2} is legitimate, and φ is a retirement move. Let D be the W^B_{i−2}-induced branch of L. And let β be the content of M's buffer at step i − 1 of B. By clause 4 of Lemma A.1, the same β is in the buffer of L at step i − 1 of D. At some step s ≥ i of D, the provident L should make a move γ such that β is a prefix of γ. Let mγ be the magnitude of that move. Since the run spelled by D is legal (clause 2 of Lemma A.1) and L plays in amplitude a, we have mγ ≤ a(ℓ^D_s). But, in view of clause 1 of Lemma A.1, ℓ^D_s = ℓ^D_i. Thus, mγ ≤ a(ℓ^D_i). This, in view of clause 5 of Lemma A.1, implies mγ ≤ a(ℓ^B_i). From the way we measure magnitudes and from the way the windup operation is defined, it is clear that mφ ≤ mγ. Consequently, mφ ≤ a(ℓ^B_i).

Space: Let i be the W-stabilization point, and let D be the W^B_i-induced branch of L. If W^B_i is legitimate, then, as observed in the last paragraph of our proof of (129), M's behavior throughout B is indistinguishable from that of L in D; this, in view of clause 5 of Lemma A.1, means that B, just like D, does not violate the space limit s. Now suppose W^B_i is not legitimate. Whatever we said above still applies to the behavior of M up to (and including) step i − 1. After that it makes a transition to step i and retires without consuming any additional space. So, the space consumption again remains within the limits of s.

Time: Again, let i be the W-stabilization point, and let D be the W^B_i-induced branch of L. If W^B_i is legitimate, then, for the same reasons as in the case of space, B does not violate the time limit t. Now suppose W^B_i is not legitimate. Whatever we said in the preceding sentence still applies to the behavior of M in B up to (and including) step i − 1. Then M makes a transition to step i and retires. If no move is made upon this transition, all is fine. And if a move is made, then, in view of the relevant clauses of Lemma A.1, it can be seen that the timecost of that move does not exceed the timecost of the move that the provident L would have to make in D sooner or later after time i − 1. So, the time bound t is not violated.

B Proof of Lemma 5.2 of [5]

Lemma 5.2 of [5], to a proof of which this appendix is exclusively devoted, reads: There is an effective procedure that takes an arbitrary bounded formula H(~y), an arbitrary HPM N and constructs an HPM K such that, for any regular boundclass triple R, if H(~y) is Rspace-bounded and N is an R tricomplexity solution of H(~y), then K is a provident and prudent R tricomplexity solution of H(~y).

B.1 Getting started

Pick and fix an HPM N and a bounded formula H = H(~y) = H(y1, ..., yu) with all free variables displayed. The case of H(~y) being elementary is trivial, so we assume that H(~y) contains at least one choice operator. Fix D as the maximum number of labmoves in any legal run of ⊓H. Further fix G as the superaggregate bound of H. Assume R is a regular boundclass triple such that the formula H(~y) is Rspace-bounded and N is an R tricomplexity solution of H(~y). Note that, by Lemma 5.1 of [5], G ∈ Rspace. It is important to point out that our construction of K below does not depend on R or any assumptions on it.

In view of Lemma 10.1 of [4] and with Remark 2.4 of [5] in mind, we may and will assume that N plays H providently. Then Lemma 5.4 of [5] (whose proof does not rely on the present lemma) allows us to further assume that N is a quasilegal, unconditionally provident and unconditionally R tricomplexity solution of H. Following the notational practice of Section 6.7 of [5], we shall write Rspace(ℓ) as an abbreviation of the phrase “O(p(ℓ)) for some p(z) ∈ Rspace”. Similarly for Rtime(ℓ) and Ramplitude(ℓ).

The technique that we employ below is very similar to the one used in Section 11 of [4]. Our goal is to construct a machine K such that K is a provident and prudent R tricomplexity solution of H(~y). From our construction it will be immediately clear that it (the construction), as required, is effective.

In both our description of the work of K and our subsequent analysis of it, we shall rely — usually only implicitly — on the Clean Environment Assumption (cf. Section 8 of [4]), according to which K's adversary never makes illegal moves of ⊓H. Making such an assumption is safe because the desired properties of K are (1) being a solution of H(~y), (2) playing H(~y) providently, (3) playing H(~y) prudently and (4) playing H(~y) in R tricomplexity. The definitions of all four of these properties, unlike, for instance, the definitions of the unconditional versions of the last three (cf. Section 5 of [5]), only look at the ⊥-legal plays of ⊓H by K. This means that it virtually does not matter what happens if K's adversary starts playing illegally.

We design K as a single-work-tape HPM. At the beginning of the play, as usual, it waits — without consuming any space — till Environment chooses constants ~c for all u free variables ~y of H. If this never happens, K is an automatic winner, trivially complying with the providence, prudence and R tricomplexity conditions. Having said that, for the rest of this construction and our subsequent analysis of it, we shall assume that, in the scenario that we are considering, Environment indeed chose the constants ~c (fix them!) for ~y during an initial episode of the play.

Let us agree that a quasilegal move (of H(~c)) means a move that may appear, with either label, in some quasilegal run of H(~c). And the truncation of a move α is the H(~c)-prudentization of the longest prefix α′ of α such that α′ is also a prefix of some quasilegal move. Note that, in view of our earlier assumption that H is not elementary, every move has a (possibly empty) truncation.

Once all constants ~c are chosen by Environment, K computes the value of G|max(~c)| and remembers it for possible use in the future. It is not hard to see that, in view of the basic closure properties of boundclasses and the relevant conditions of Definition 2.2 of [5], G|max(~c)| can be computed and recorded in space Rspace|max(~c)| and time Rtime|max(~c)|. For this reason, when trying to show that K runs in tricomplexity R, the present episode of computing and remembering G|max(~c)| can (and will) be safely ignored.

Upon the completion of the above step, K starts simulating N in the scenario where, at the very beginning of the play — on cycle 0, that is — the imaginary adversary of the latter chose the same constants ~c for the free variables of H as (K's real) Environment did. A simulation would generally require maintaining and continuously updating configurations of N. However, the challenge is that K cannot afford to fully represent such configurations on its work tape. For instance, if all bounds in Rspace are sublinear, representing the run-tape content of N would require more than Rspace space. Similarly, the size of the content of the buffer of N could occasionally go beyond the Rspace bound.

For the above reasons, when dealing with a jth computation step of the simulated N, we let K, on its work tape, only keep representations of the other (and some additional, previously redundant) components of the corresponding configuration of N. Namely, with “current” below referring to an arbitrary given jth computation step of N, on its work tape K maintains the following pieces of information¹⁹ — call them together the sketch of the jth configuration (computation step) of N:

1st component: The current state of N.
2nd component: The current contents of the work tapes of N.
3rd component: The current locations of the work-tape heads of N.
4th component: The current location of the run-tape head of N.
5th component: The number of moves that N has made so far (at steps ≤ j) in the play.
6th component: The current number of symbols in the buffer of N.
7th component: The (possibly empty) string α that has been added to the buffer of N when it made a transition to the jth step from the preceding, (j − 1)th, step; here we stipulate that, if j = 0, i.e. if there is no preceding step, then such a string α is empty.
8th component: The truncation α′ of the move α currently written in the buffer.

Lemma B.1 For any j, with ℓ standing for the background of the jth step of the simulated N, maintaining the sketch for that step takes Rspace(ℓ) space.

Proof. It is sufficient to verify that each of the eight components of the sketch, individually, can be maintained/recorded with Rspace(ℓ) space. Below we shall implicitly rely on Remark 2.4 of [5].

1st component: Recording this component, of course, takes a constant and hence Rspace(ℓ) amount of space.

2nd component: Since N runs in unconditional space Rspace, this component can be represented with Rspace(ℓ) space.

¹⁹ Together with the never-changing representation of the transition function of N, as well as the earlier computed G|max(~c)|. Whenever possible, we prefer not to mention explicitly these or similar, asymptotically irrelevant/superseded, pieces of information or events.


3rd component: The amount of space needed for recording this component obviously does not exceed the preceding amount — in fact, it is logarithmic in Rspace(ℓ).

4th component: By our definition of HPMs from [4], the run-tape head can never go beyond the leftmost blank cell. So, how many non-blank cells may be on the imaginary run tape of N? Since N plays in unconditional amplitude Ramplitude, and since it plays H quasilegally and hence makes at most D moves, the ⊤-labeled moves residing on N's run tape only take Ramplitude(ℓ) space. Next, as we are going to see later, all ⊥-labeled moves residing on N's run tape are copies (made by K) of ⊥-labeled moves residing on K's run tape, meaning (by the Clean Environment Assumption) that their quantity is bounded by D, and also implying that those moves are quasilegal, due to which (not only their magnitudes but also) their sizes do not exceed O(ℓ). For this reason, the ⊥-labeled moves of N's run tape, just like the ⊤-labeled moves, only take Ramplitude(ℓ) of total space. Thus, there are at most Ramplitude(ℓ) different possible locations of N's run-tape head. Representing any of such locations takes |Ramplitude(ℓ)| and hence — by clause 5 of Definition 2.2 of [5] — Rspace(ℓ) space.

5th component: Since N plays H quasilegally, the number of moves that N has made so far can never exceed D, so holding the 5th component in memory only takes a constant amount of space.

6th component: Let m be the number of symbols currently in N's buffer. Assume m > 0, for otherwise holding it takes no space. Consider the scenario where N's adversary does not make any moves beginning from the current point. Since N is unconditionally provident, sooner or later it should make a move α that is an extension of the move currently in the buffer, so the number of symbols in α is at least m. But, since N plays H quasilegally and runs in unconditional Ramplitude amplitude, the number of symbols in α cannot exceed Ramplitude(ℓ). That is, m does not exceed Ramplitude(ℓ). Holding such an m therefore requires at most |Ramplitude(ℓ)| space, and hence — again by clause 5 of Definition 2.2 of [5] — Rspace(ℓ) space.

7th component: Recording this component, of course, only takes a constant amount of space.

8th component: With a moment's thought and with Lemma 5.1 of [5] in mind, it can be seen that, since α′ is a truncation, the number of symbols in it does not exceed Rspace(ℓ).

Unfortunately, the sketch of a given computation step j of N alone is not sufficient to fully trace the subsequent steps of N and thus successfully conduct simulation. The reason is that, in order to compute (the sketch of) the (j + 1)th step of N, one needs to know the content of the cell scanned by the run-tape head of N. However, sketches do not keep track of what is on N's run tape, and that information — unless residing on the run tape of K itself by good luck — is generally forgotten. We handle this difficulty by letting the simulation routine recompute the missing information every time such information is needed. This is done through recursive calls to the routine itself. Properly materializing this general idea requires quite some care though. Among the crucial conditions for our recursive procedure to work within the required space limits is to make sure that the depth of the recursion stack never exceeds a certain constant bound.

To achieve the above goal, we let K, in addition to the sketches for the simulated steps of N, maintain what we call the global history.
The latter is a list of all moves made by N and its adversary throughout the imaginary play of H “so far”. More precisely, this is not a list of the moves themselves, but rather a list of entries carrying certain partial information on those moves. Namely, the entry for each move α does not indicate the actual content of α (which could require more than Rspace space), but rather only the label of α (⊤ or ⊥, depending on whether α was made by N or its adversary) and the size of α, i.e. the number of symbols in α. Recording this information only takes |Ramplitude(ℓ)| and hence Rspace(ℓ) space. Further, according to the forthcoming observation (132), the number of entries in the global history never exceeds 2D (in fact D, but why bother). Since D is a constant, we find that K only consumes an Rspace(ℓ) amount of space for maintaining the overall global history. While a move α is not the same as the entry for it in the global history, in the sequel we may terminologically identify these two.

What do we need the global history for? As noted earlier, during its work, K will often have to resimulate some already simulated portions of the work of N. To make such a resimulation possible, it is necessary to have information on the times at which the adversary of N has made its moves in the overall scenario that we are considering and re-constructing. Recording the actual move times as they were detected during the initial simulation, however, could take us beyond our target space limits. After all, think of a situation where N waits “very long” before its environment makes a move. So, instead, we only keep track — via the global history — of the order of moves. Then we neutralize the problem of not remembering the “actual” times of N's adversary's moves by simply assuming that N's adversary always makes its moves instantaneously


in response to N's moves. The point is that, if N wins H, it does so in all scenarios, including the above scenario of an instantaneously responding adversary.

It is important to note that, as will be immediately seen from our description of the work of K, the moves recorded in the global history at any step of the work of K are the same as the moves on the run tape of N. And the latter, in turn, are copies of moves on the run tape of K, with the only difference that, on K's run tape, the ⊤-labeled moves appear in truncated forms. The orders of moves in the global history and on the run tape of N are exactly the same. As for the run spelled on the run tape of K, even if truncation did not really modify N's moves, it may not necessarily be the same as the run spelled on the run tape of N. Instead, the former is only guaranteed to be a ⊤-delay of the latter (see Section 3 of [4]). However, this kind of a difference, just like having the ⊤-labeled moves truncated, for our purposes (for K's chances to win) is just as good as — or “even better than” — if the two runs were exactly the same.

The work of K relies on the three subprocedures called Update Sketch, Fetch Symbol and Make History. We start with Update Sketch.
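Before turning to those subprocedures, the following fragment renders the two records that K maintains (a sketch and a global-history entry) as data structures. This is a hypothetical Python sketch of ours, given for intuition only; the field names are invented, and the formal definitions are the ones in the text above.

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Sketch:                  # the sketch of a configuration of N
        state: str                 # 1st component: current state of N
        work_tapes: List[str]      # 2nd: contents of N's work tapes
        work_heads: List[int]      # 3rd: locations of the work-tape heads
        run_head: int              # 4th: location of the run-tape head
        moves_made: int            # 5th: number of moves N has made so far
        buffer_len: int            # 6th: number of symbols in N's buffer
        just_buffered: str         # 7th: string added to the buffer on the
                                   #      transition to the current step
        truncation: str            # 8th: truncation of the move now in the buffer

    @dataclass
    class HistoryEntry:            # one entry of the global history
        label: str                 # '⊤' if the move was made by N, '⊥' otherwise
        size: int                  # size (number of symbols) of the move; the
                                   # move itself is deliberately not stored

    GlobalHistory = List[HistoryEntry]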

B.2 Procedure Update Sketch

In the context of a given global history H, this procedure takes the sketch Sj of a given computation step j of N, and returns the sketch Sj+1 of the next computation step j + 1 of N.

Let m be the 5th component of Sj. The number m tells us how many moves N had made by time j. In most cases, Update Sketch will be used while re-constructing some past episode of N's work. It is then possible that the global history contains an (m + 1)th move by N (i.e. with label ⊤). If so, then such a move, as well as all subsequent moves of H, are “future moves” from the perspective of the jth step of N that Update Sketch is currently dealing with. This means that, when “imagining” the situation at the jth step of N, those moves should be discarded. So, let H′ be the result of deleting from H the (m + 1)th ⊤-labeled move and all subsequent, whatever-labeled moves (if there are no such moves, then simply H′ = H). Thus, H′ is exactly a record of the moves that N would see — in the same order as they appear in H′ — on its run tape at step j.

The information contained in Sj is “almost” sufficient for Update Sketch to calculate the sought value of Sj+1. The only missing piece of information is the symbol s scanned by the run-tape head of N on step j. Update Sketch thus needs, first of all, to figure out what that symbol s is. To do this, Update Sketch computes the sum p of the sizes of all moves (including their labels) of H′. Next, let q (found in the 4th component of Sj) be the number indicating the location of the run-tape head of N on step j. Note that, in the scenario that Update Sketch is dealing with, the length of the “active” content of N's run tape is p, with cell #(p + 1) and all subsequent cells being blank. So, Update Sketch compares q with p. If q > p, it concludes that s is Blank. Otherwise, if q ≤ p, s should be one of the symbols of one of the moves α recorded in H′. From H′, using some easy logarithmic-space arithmetic, Update Sketch figures out the author/label ℘ of α, and also finds two integers k and n. Here k is the number of moves made by ℘ before it made the move α. And n is the number such that the sought symbol s is the nth symbol of α. If ℘ = ⊥, using k and n, Update Sketch finds the sought symbol s on the run tape of K. Otherwise, if ℘ = ⊤, Update Sketch calls the below-described procedure Fetch Symbol on (k, n). As will be seen later, Fetch Symbol then returns the sought symbol s.

Thus, in any case, Update Sketch now knows the symbol s read by the run-tape head of N on step j. Keeping the above s as well as the earlier computed value G|max(~c)| in mind,²⁰ Update Sketch now additionally consults Sj and finds (all 8 components of) the sought sketch Sj+1 using certain rather obvious logarithmic-space calculations, details of which we omit.
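The symbol-lookup step just described can be summarized in code. The following is a schematic Python sketch of ours, not the actual construction: H_pruned plays the role of H′, entries are assumed to carry the label and size fields from the illustration in Subsection B.1, and locate stands for the "easy logarithmic-space arithmetic" mentioned above.

    def scanned_symbol(sketch, H_pruned, K_run_tape, fetch_symbol):
        # p: total length of the "active" run-tape content of N, labels included
        p = sum(1 + e.size for e in H_pruned)
        q = sketch.run_head                  # 4th component of the sketch
        if q > p:
            return 'Blank'                   # the scanned cell is blank
        label, k, n = locate(q, H_pruned)    # whose move, and which symbol of it
        if label == '⊥':
            return K_run_tape(k, n)          # read it off K's own run tape
        return fetch_symbol(k, n)            # ℘ = ⊤: re-simulate N to recover it

    def locate(q, H_pruned):
        # Find the move containing cell #q, the number k of moves its author
        # made earlier, and the offset n of the cell within the move. (Label
        # cells themselves are glossed over in this simplified sketch.)
        counts = {'⊤': 0, '⊥': 0}
        pos = 0                              # cells consumed so far
        for e in H_pruned:
            if q <= pos + 1 + e.size:
                return e.label, counts[e.label], q - pos - 1
            pos += 1 + e.size
            counts[e.label] += 1
        raise ValueError('cell beyond the active run-tape content')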

B.3 Procedure Fetch Symbol

In the context of a given global history H, this procedure takes two numbers k, n, where k is smaller than the number of ⊤-labeled moves in H, and n is a positive integer not exceeding the length of the (k + 1)th ⊤-labeled move there. The goal of Fetch Symbol is to return, through rerunning N, the nth symbol of the (k + 1)th ⊤-labeled move of H.

²⁰ This value is (could be) needed for determining the 8th component of Sj+1.


To achieve the above goal, Fetch Symbol creates a sketch-holding variable S, and sets the initial value of S to the initial sketch. By the latter we mean the sketch of the initial configuration of N, i.e. the configuration where N is in its start state, the buffer and the work tapes are empty,²¹ and all scanning heads are looking at the leftmost cells of their tapes. After the above initialization step, Fetch Symbol performs the following subprocedure:

1. Perform Update Sketch on S. Let S′ be the resulting sketch, and let σ be the 7th component of S′. Below, as always, |σ| means the length of (number of symbols in) σ.
2. Let a and b be the 5th and 6th components of S, respectively. If a = k and b < n ≤ b + |σ|, then return the (n − b)th symbol of σ. Otherwise, update (the value of) S to S′, and go back to step 1.

Before proceeding, the reader may want to convince himself or herself that, as promised, Fetch Symbol indeed returns the nth symbol of the (k + 1)th ⊤-labeled move of H.
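For concreteness, the two-step subprocedure admits a direct rendering in code. The following hypothetical Python sketch of ours mirrors it, with update_sketch standing for the procedure of Subsection B.2 and the field names borrowed from the illustration in Subsection B.1:

    def fetch_symbol(k: int, n: int, initial_sketch, update_sketch):
        # Return the n-th symbol of the (k+1)-th ⊤-labeled move of the global
        # history, by rerunning N from its initial configuration.
        S = initial_sketch
        while True:
            S_next = update_sketch(S)            # step 1
            sigma = S_next.just_buffered         # 7th component of S'
            a, b = S.moves_made, S.buffer_len    # 5th and 6th components of S
            if a == k and b < n <= b + len(sigma):
                return sigma[n - b - 1]          # the (n - b)-th symbol of sigma
            S = S_next                           # otherwise go back to step 1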

B.4 Procedure Make History

This procedure takes a global history H as an argument and, treating H as a variable that may undergo updates, acts according to the following prescriptions:

Stage 1: Create a variable S and initialize its value to the initial sketch of N. Proceed to Stage 2.

Stage 2: Check out K's run tape to see if Environment has made a new move (this can be done, say, by counting the ⊥-labeled moves on the run tape, and comparing their number with the number of ⊥-labeled moves recorded in the global history). If yes, update H by adding to it a record for that move, and repeat Make History. If not, go to Stage 3.

Stage 3: (a) Perform Update Sketch on S. Let T be the resulting sketch.
(b) If N did not make a globally new move on its transition from S to T,²² change the value of the variable S to T, and go back to Stage 2. Here and later in similar contexts, by a “globally new” move we mean a move not recorded in the global history H. Figuring out whether N made a globally new move is easy. Technically, N made a globally new move if and only if, firstly, it did make a move, i.e. the 1st component of T is a move state; and secondly, such a move is not recorded in H, meaning that the 5th component of T exceeds the total number of ⊤-labeled moves recorded in H.
(c) Suppose now N made a globally new move α. Let α′ be the 8th component of S. Thus, α′ is the truncation of α. Copy α′ to the buffer (of K) symbol by symbol, after which go to a move state. This results in K making the move α′ in the real play. Now update the global history H by adding to it a record for the move α, and repeat Make History.
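Put as code, the three stages look as follows. This is again only a schematic Python sketch of ours: environment, K and the helper names are placeholders for machinery described in the text, and H is treated as a mutable list of entries.

    def make_history(H, initial_sketch, update_sketch, environment, K):
        while True:                                  # each pass = one iteration
            S = initial_sketch                       # Stage 1
            while True:
                if environment.has_new_move(H):      # Stage 2
                    H.append(environment.new_entry())
                    break                            # repeat Make History
                T = update_sketch(S)                 # Stage 3(a)
                if not globally_new_move(T, H):      # Stage 3(b)
                    S = T
                    continue                         # back to Stage 2
                K.make_move(S.truncation)            # Stage 3(c): play the
                                                     # truncation α' for real
                H.append(entry_for_move_of(T))       # record the new move α
                break                                # repeat Make History

    def globally_new_move(T, H):
        # N made a globally new move iff T's move count exceeds the number of
        # ⊤-labeled entries in H (the move-state check of the text is subsumed
        # here, since the 5th component only grows when a move is made)
        top_moves = sum(1 for e in H if e.label == '⊤')
        return T.moves_made > top_moves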

B.5 The overall strategy and an example of its run

We continue our description of the overall work of K, started in Subsection B.1 but interrupted shortly thereafter. As we remember, at the very beginning of the play, K waited till Environment specified the u constants ~c = c1, ..., cu for all free variables of H. What K does after that is that it creates the variable H, initializes its value to record the sequence ⟨⊥c1, ..., ⊥cu⟩, and then switches to running Make History forever. This completes our description of K.

Here we look at an example scenario to make sure we understand the work of K. Let

    H = ⊓y(|y| ≤ |x| → ⊔z(|z| ≤ |x| ∧ p(z, y))) ∨ ⊓u(|u| ≤ |x| → ⊔v(|v| ≤ |x| ∧ q(u, v))).

Note that the superaggregate bound of this formula is the identity function G(w) = w.

²¹ As for the run tape, what is on it is irrelevant because a sketch has no record of the run-tape content anyway.
²² Here and later in similar contexts, we terminologically identify sketches with the corresponding steps of N.


At the beginning of its work, K waits till Environment specifies a value for x. Let us say 1001 is that value. After calculating G|1001|, which in the present case is 4, K creates the variable H and sets its value to contain a record for the (single) labmove ⊥#1001. The rest of the work of K just consists in running Make History. So, in what follows, we can use “K” and “Make History” as synonyms.

During its initialization Stage 1, Make History creates the variable S and sets its value to the initial sketch of N. The result of this step reflects the start situation, where “nothing has yet happened” in the mixture of the real play of H by K and the simulated play of H by N, except for Environment's initial move #1001. Now Make History starts performing, over and over, Stages 2 and 3. The work in those two stages can be characterized as “global simulation”. This is a routine that keeps updating, one step at a time, the sketch S (Stage 3) to the sketch of the “next configuration” of N in the scenario where the imaginary adversary of N has made the move #1001 at the very beginning of the play; every time the simulated N is trying to read some symbol of this move, K finds that symbol on its own run tape and feeds it back to the simulation. Simultaneously, Make History keeps checking (Stage 2) the run tape of K to see if Environment has made a new move. This will continue until either Environment or the simulated N is detected to make a new move.

In our example, let us imagine that Environment makes the move 0.#10, signifying choosing the constant 10 for y in H. What happens in this case? Make History simply restarts the global simulation by resetting the sketch S to the initial sketch of N. The earlier-described “Stage 2 and Stage 3 over and over” routine will be repeated, with the only difference that the global history H is now showing the presence of both ⊥#1001 and ⊥0.#10. This means that the simulation of N will now proceed in the scenario where, at the very beginning of the play, N's adversary had made the two moves #1001 and 0.#10. So, every time the simulated N tries to read one of the symbols of either move on its imaginary run tape, Make History — K, that is — looks that symbol up on its own run tape. By switching to this new scenario, Make History, in fact, deems the previous scenario invalid, and simply forgets about it. This routine will continue until either Environment or N, again, is detected to make a move.

Let us say it is now N, which makes the imprudent move 0.1.#1111111, signifying choosing the “oversized” (of size > 4) constant 1111111 for z in H. In this event, Make History — K, that is — assembles the truncation 0.1.#1111 of 0.1.#1111111 in its buffer (copying it from the 8th component of S), and then makes the move 0.1.#1111 in the real play. After that, as always when a new (lab)move is detected, the global simulation restarts. Now the global history H is showing (records for) the sequence ⟨⊥#1001, ⊥0.#10, ⊤0.1.#1111111⟩ of three moves. In the present, 3rd attempt of global simulation, just like in the 2nd attempt, N is resimulated in the scenario where, at the beginning of the play, its adversary had made the moves #1001 and 0.#10. The only difference between the present attempt of global simulation and the previous one is that, once N is detected to make the expected move 0.1.#1111111, nothing special happens. Namely, the global history is not updated (as 0.1.#1111111 is already recorded there); the move 0.1.#1111 is not made in the real play (as it has already been made); and the global simulation continues in the ordinary fashion rather than restarts. The present attempt of global simulation, again, will be interrupted if and when either Environment or the simulated N is detected to make a globally new move, i.e. a move not recorded in the global history.

Let us say it is again Environment, which makes the move 1.#1, signifying choosing the constant 1 for u in H. As always, a record for the new move is added to H, and the global simulation restarts. The resimulation of N will start in the scenario where, at the beginning of the play, its adversary had made the moves #1001 and 0.#10. We already know that, in this scenario, sooner or later, N will make its previously detected move 0.1.#1111111. Once this event is detected, N's simulation continues for the scenario where its adversary responded by the move 1.#1 immediately after N made the move 0.1.#1111111.

Imagine that the final globally new move detected is one by N, and such a move is 1.1.#0, signifying choosing the constant 0 for v in H. Make History copies this move in the truncated form — which remains the same 1.1.#0 because this move is quasilegal and prudent — in the real play. Then, as always, H is correspondingly updated, and the global simulation is restarted with that updated H.

The last attempt of global simulation (the one that never got discarded/reconsidered) corresponds to the “ultimate” scenario that determined K's real play. Namely, in our present example, the “ultimate” scenario in which N was simulated is that, at the very beginning of the play, N's adversary had made the moves #1001 and 0.#10, to which N later responded with 0.1.#1111111, to which N's adversary immediately
responded with 1.#1, to which, some time later, N responded with 1.1.#0, and no moves were made ever after. While the imaginary run generated by N in this scenario is

    ⟨⊥#1001, ⊥0.#10, ⊤0.1.#1111111, ⊥1.#1, ⊤1.1.#0⟩,   (130)

the real run generated by K is

    ⟨⊥#1001, ⊥0.#10, ⊤0.1.#1111, ⊥1.#1, ⊤1.1.#0⟩,   (131)

with (131) being nothing but the result of replacing in (130) all ⊤-labeled moves by their truncations. Since it is our assumption that N wins H, (130) is a ⊤-won run of H. But then so is (131) because, as noted earlier, truncating a given player's moves can (increase but) never decrease that player's chances to win.

Why do we need to restart the global simulation every time a globally new move is detected? The reason is that otherwise we generally would not be able to rely on calls of Fetch Symbol for obtaining required symbols. Going back to our example, imagine we did not restart the global simulation (Make History) after the moves #1001 and 0.#10 were made by Environment. Perhaps (but not necessarily), as before, N would still make its move 0.1.#1111111 sometime after 0.#10. Fine so far. But the trouble starts when, after that event, N tries to read some symbol of 0.1.#1111111 from its imaginary run tape. A way to provide such a symbol is to invoke Fetch Symbol, which will resimulate N to find that symbol. However, in order to properly resimulate N up to the moment when it made the move 0.1.#1111111 (or, at least, put the sought symbol of the latter into its buffer), we need to know when (on which computation steps of N), exactly, the labmoves ⊥#1001 and ⊥0.#10 emerged on N's run tape. Unfortunately, we do not remember this piece of information, because, as noted earlier, remembering the exact times (as opposed to merely remembering the order) of moves may require more space than we possess. So, instead, we assume that the moves #1001 and 0.#10 were made right at the beginning of N's play. This assumption, however, disagrees with the scenario of the original simulation, where #1001 was perhaps only made at step 888, and 0.#10 perhaps at step 77777. Therefore, there is no guarantee that N will still generate the same move 0.1.#1111111 in response to those two moves. Restarting the global simulation — as we did — right after #1001 was made, and then restarting it again after 0.#10 was detected, neutralizes this problem. If N made its move 0.1.#1111111 after 0.#10 in this new scenario (the scenario where its imaginary adversary always acted instantaneously), then every later resimulation, no matter how many times Make History is restarted, will again take us to the same move 0.1.#1111111 made after 0.#10, because the global history, which “guides” resimulations, will always be showing the first three labmoves in the order ⊥#1001, ⊥0.#10, ⊤0.1.#1111111. To see this, note that all updates of the global history only add some moves to it, and otherwise do not affect the already recorded moves or their order.

We also want to understand one remaining issue. As we should have noticed, Fetch Symbol always calls Update Sketch, and the latter, in turn, may again call Fetch Symbol. Where is a guarantee that infinitely many or “too many” nested calls will not occur? Let us again appeal to our present example, and imagine we (Update Sketch, that is) are currently simulating a step of N sometime after it has already made the move 1.1.#0. Whenever N tries to read a symbol of ⊤1.1.#0, Fetch Symbol is called to resimulate N and find that symbol. While resimulating N, however, we may find that, at some point, its run-tape head is trying to read a symbol of the earlier labmove ⊤0.1.#1111111. To get that symbol, Fetch Symbol will be again called to resimulate N and find that symbol. Can this process of mutual calls go on forever? Not really. Notice that, when Fetch Symbol is called to find the sought symbol of ⊤0.1.#1111111, Fetch Symbol, guided by the global history, will resimulate N only up to the moment when it made the move 0.1.#1111111. But during that episode of N's work, the labmove ⊤1.1.#0 was not yet on its run tape. So, Fetch Symbol will not have to be called further. Generally, as we are going to see in Subsubsection B.7.2, there can be at most a constant number of nested invocations of Fetch Symbol or Update Sketch.
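As a final illustration of this example, the truncation step seen above (cutting the oversized numer 1111111 down to four bits under the bound G|1001| = 4) can be mimicked as follows. This is a hypothetical Python sketch of ours; the general truncation operation of Subsection B.1 is subtler, as it also involves prudentization and quasilegality checks.

    def truncate_numer(move: str, bound: int) -> str:
        # Keep at most 'bound' bits of the numer following '#'.
        prefix, _, numer = move.partition('#')
        return prefix + '#' + numer[:bound]

    print(truncate_numer('0.1.#1111111', 4))   # 0.1.#1111
    print(truncate_numer('1.1.#0', 4))         # 1.1.#0 (already prudent)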

B.6 K is a provident and prudent solution of H

Consider an arbitrary play by (computation branch of) K, and fix it for the rest of this appendix. As seen from the description of Make History, the ⊥-labeled moves (recorded) in H are the moves made by (K’s real) Environment. Since the latter is assumed to play legally, the number of ⊥-labeled moves in H cannot exceed D. Similarly, the ⊤-labeled moves of H are the moves made by N in a certain play.


Therefore, as N is a quasilegal solution of ⊓H, the number of such moves cannot exceed D, either. Thus, with “never” below meaning “at no stage of the work of K”, we have:

    The number of labmoves in H never exceeds 2D.²³   (132)

Since every iteration of Make History increases the number of labmoves in H, an immediate corollary of (132) is that

    Make History is iterated at most 2D times.   (133)

Since Make History is restarted only finitely many times, the last iteration of it never terminates. Let Γ be the sequence of labmoves recorded in the final value of H (i.e. the value of H throughout the last iteration of Make History). This is the run generated by the simulated N in what we referred to as the “ultimate scenario” in the preceding subsection (scenario = computation branch). Next, let ∆ be the run generated by K in the real play that we are considering. Since N is a solution of ⊓H, Γ is a ⊤-won run of ⊓H. We want to verify that then ∆ is also a ⊤-won run of ⊓H, meaning that K, too, is a solution of ⊓H.

How do Γ and ∆ differ from each other? As noted at the end of Subsection B.1, an analysis of the work of K, details of which are left to the reader, reveals that there are only two differences. The first difference is that the ⊤-labeled moves of Γ appear in ∆ in truncated forms. This is so because, whenever K makes a move (according to the prescriptions of Stage 3(c) of Make History), it copies that move from the 8th component of the sketch of the step of N on which the latter made a move α; but the 8th component of a sketch always holds the truncation of the move residing in N's buffer; thus, the move α′ made by K in the real play/run ∆ is the truncation of the move α made by N in the imaginary play/run Γ. Let us use Ω to denote the result of changing in Γ all ⊤-labeled moves to their truncations.

The second difference between Γ and ∆ is that, even if we ignore the first difference — that is, even if we consider Ω instead of Γ — the run is still not guaranteed to be exactly the same as ∆; rather, we only know that the latter is a ⊤-delay of the former. The reason for this discrepancy is that, while performing Make History, K may notice a move δ by Environment with some delay, only after it has first noticed a move γ by N and made the truncation γ′ of γ as a move in the real play; if this happens, ⊤γ will appear before ⊥δ in Γ but after ⊥δ in ∆.

But the game ⊓H is static, as are all games studied in CoL. And, by the very definition of static games (cf. Section 3 of [4]), ∆'s being a ⊤-delay of Ω implies that, if Ω is a ⊤-won run of ⊓H, then so is ∆. This means that, in order to achieve our goal of proving that ∆ is a ⊤-won run of ⊓H, it is sufficient to simply show that Ω is a ⊤-won run of ⊓H. This is what the rest of this subsection is devoted to, with the exception of its last paragraph.

We may and will assume that different occurrences of quantifiers in ⊓H bind different variables. This is a legitimate assumption because, if it is false, we can rename variables in ⊓H so as to make it true, with the new sentence, as a game (even if not as a formula), being virtually the same as the old sentence.

By a unit we shall mean a subformula U of H of the form ⊓r(|r| ≤ b|~s| → E) (a ⊓-unit) or ⊔r(|r| ≤ b|~s| ∧ E) (a ⊔-unit). Here r is said to be the master variable of U, and |r| ≤ b|~s| is said to be the master condition of U. “Subunit” and “superunit”, applied to units, mean the same as “subformula” and “superformula”. The depth of a unit U is the number of its superunits (including U itself). A unit U is resolved iff Γ contains a move signifying choosing a constant for U's master variable. For instance, if H is ⊔x(|x| ≤ |y| ∧ x = 0) ∧ ⊓z(|z| ≤ |y| → ⊔t(|t| ≤ |z|′ ∧ t = z + z)) and Γ is ⟨⊥#1000, ⊥1.#11, ⊤1.1.#110⟩, then the units ⊓z(|z| ≤ |y| → ⊔t(|t| ≤ |z|′ ∧ t = z + z)) and ⊔t(|t| ≤ |z|′ ∧ t = z + z) are resolved while ⊔x(|x| ≤ |y| ∧ x = 0) is not. When w is a free variable of H or the master variable of some resolved unit, then the resolvent of w is the constant chosen for w in (according to) Γ. For instance, if H and Γ are as above, 1000 is the resolvent of y, 11 is the resolvent of z and 110 is the resolvent of t. A unit U is well-resolved iff U is resolved and the result of replacing all free variables by their resolvents in U's master condition is true. A unit is ill-resolved iff it is resolved but not well-resolved. A critical unit is an ill-resolved unit all of whose proper superunits are well-resolved.

Let f be the subaggregate bound of H. For i ∈ {1, 2, ...}, we define Gi(z) as max(f(z), f^2(z), ..., f^i(z)). Note that the superaggregate bound G of H is nothing but G_H, where H is the total number of all units. For this reason, taking into account that the depth of no unit can exceed H, we have:

    Whenever i is the depth of some unit, Gi ⪯ G.   (134)

²³ In fact, with some additional analysis, 2D can be lowered to D here, but why bother.

Lemma B.2 Consider an arbitrary resolved unit U . Let i be its depth, and a be the resolvent of its master variable. If all superunits of U (including U itself ) are well-resolved, then |a| ≤ Gi | max(~c)|. Proof. Induction on i. Assume the conditions of the claim. Let w be the master variable of U , and let |w| ≤ b(|x1 |, . . . , |xk |, |z1 |, . . . , |zm |) be the master condition of U , with all free variables displayed, where x1 , . . . , xk are from among the free variables of H, and z1 , . . . , zm are from among the master variables of the proper superunits of U . Let d1 , . . . , dk , e1 , . . . , em be the resolvents of x1 , . . . , xk , z1 , . . . , zm , respectively. Below we shall use c, d and e as abbreviations of max(~c), max(d1 , . . . , dk ) and max(e1 , . . . , em ), respectively. Let b′ be the unarification of b. U ’s being well-resolved means that |a| does not exceed b(|d1 |, . . . , |dk |, |e1 |, . . . , |em |). Hence, by the monotonicity of b, we have |a| ≤ b′ | max(d, e)|. But, of course, b′  f (recall that f is the subaggregate bound of H). Thus, |a| ≤ f | max(d, e)|. This means that, in order to verify our target |a| ≤ Gi |c|, it is sufficient to show that both f |d| ≤ Gi |c| and f |e| ≤ Gi |c|. That f |d| ≤ Gi |c| follows from the straightforward observations that d ≤ c and f  Gi . As for f |e| ≤ Gi |c|, first assume i = 1. Then m = 0 and hence e = 0; also, Gi is f . Thus, we want to show that f |0| ≤ f |c|. But this is immediate from the monotonicity of f . Now assume i > 1. By the induction hypothesis, |e| ≤ Gi−1 |c|. So, f |e| ≤ f (Gi−1 |c|). But, of course, f (Gi−1 |c|) ≤ Gi |c|. Thus, f |e| ≤ Gi |c|. We are now in the position to see that Ω inherits Γ’s being a ⊤-won run of ⊓H. Let   ⊓u1 |u1 | ≤ p1|~r1 | → A1 , . . . , ⊓ua |ua | ≤ pa |~ra | → Aa be all the critical larly, let

⊓-units, and let u◦1 , ~r1◦ , . . . , u◦a , ~ra◦ be the resolvents of u1, ~ri , . . . , ua , ~ra , respectively. Simi⊔v1 |v1 | ≤ q1 |~s1 | ∧ B1 , . . . , ⊔vb |vb | ≤ qb |~sb | ∧ Bb 



be all the critical ⊔-units, and let v1◦ , ~s1◦ , . . . , vb◦ , ~sb◦ be the resolvents of v1 , ~s1 , . . . , vb , ~sa , respectively. It is not hard to see that, following the notational conventions of Section 7 of [4], the paraformula hΓi!⊓H has the form (can be written as) i h (135) X |u◦1 | ≤ p1 |~r1◦ | → A◦1 , . . . , |u◦a | ≤ pa |~ra◦ | → A◦a , |v1◦ | ≤ q1 |~s1◦ | ∧ B1◦ , . . . , |vb◦ | ≤ qb |~sb◦ | ∧ Bb◦ for some X, A◦1 , . . . , A◦a , B1◦ , . . . , Bb◦ . With some additional analysis of the situation and with (134) and Lemma B.2 in mind, one can see that the paraformula hΩi!⊓H can then be written as h i X |u◦1 | ≤ p1 |~r1◦ | → A•1 , . . . , |u◦a | ≤ pa |~ra◦ | → A•a , |v1• | ≤ q1 |~s1◦ | ∧ B1• , . . . , |vb• | ≤ qb |~sb◦ | ∧ Bb• (136) for some v1• , . . . , vb• , A•1 , . . . , A•a , B1• , . . . , Bb• (and with all other parameters the same as in (135)). By the definition of the prefixation operation (Definition 2.2 of [4]), the fact that Γ is a ⊤-won run of ⊓H — written as hΓi⊓H = ⊤ — implies (in fact, means the same as) that the empty run hi is a ⊤-won run of hΓi⊓H, which, since hΓi⊓H = (135), can be written as h i hiX |u◦1 | ≤ p1 |~r1◦ | → A◦1 , . . . , |u◦a | ≤ pa |~ra◦ | → A◦a , |v1◦ | ≤ q1 |~s1◦ | ∧ B1◦ , . . . , |vb◦ | ≤ qb |~sb◦ | ∧ Bb◦ = ⊤. (137)  Consider any i ∈ {1, . . . , b}. Since the unit ⊔vi |vi | ≤ qi |~si | ∧ Bi is critical and hence ill-resolved, |vi◦ |  exceeds qi |~si◦ |. Hence hi |vi◦ | ≤ qi |~si◦ | ∧ Bi◦ = ⊥. This clearly allows us to rewrite (137) as h i hiX |u◦1 | ≤ p1 |~r1◦ | → A◦1 , . . . , |u◦a | ≤ pa |~ra◦ | → A◦a , ⊥, . . . , ⊥ = ⊤. The monotonicity of the operators (∧, ∨, ∀, ∃) of X, just as in classical logic, allows us to replace the ⊥s by whatever games in the above equation, so the latter can be further rewritten as i h hiX |u◦1 | ≤ p1 |~r1◦ | → A◦1 , . . . , |u◦a | ≤ pa |~ra◦ | → A◦a , |v1• | ≤ q1 |~s1◦ | ∧ B1• , . . . , |vb• | ≤ qb |~sb◦ | ∧ Bb• = ⊤. (138)

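Schematically, the rewriting step just performed rests on nothing more than the following monotonicity principle, where X is built from its displayed argument slots using only ∧, ∨, ∀, ∃ (this is a restatement of the above, not an additional assumption):

\[
\langle\rangle X[\vec E,\ \bot,\ \ldots,\ \bot] = \top
\quad\Longrightarrow\quad
\langle\rangle X[\vec E,\ C_1,\ \ldots,\ C_b] = \top
\quad\text{for arbitrary games } C_1, \ldots, C_b.
\]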
Next, for similar reasons, for every i ∈ {1, . . . , a} we have |u◦i| > pi(|~r◦i|) and hence ⟨⟩(|u◦i| ≤ pi(|~r◦i|) → A•i) = ⊤, which allows us to rewrite (138) as

\[
\langle\rangle X\Bigl[\,|u^\circ_1|\le p_1(|\vec r^{\,\circ}_1|)\to A^\bullet_1,\ \ldots,\ |u^\circ_a|\le p_a(|\vec r^{\,\circ}_a|)\to A^\bullet_a,\ |v^\bullet_1|\le q_1(|\vec s^{\,\circ}_1|)\wedge B^\bullet_1,\ \ldots,\ |v^\bullet_b|\le q_b(|\vec s^{\,\circ}_b|)\wedge B^\bullet_b\,\Bigr]=\top. \tag{139}
\]

However, the X[. . .] part of (139) is identical to (136), which, in turn, is nothing but ⟨Ω⟩⊓H. If so, the target ⟨Ω⟩⊓H = ⊤ is an immediate consequence of (139). Thus, K is a solution of H, as desired.

Moreover, K is both provident and prudent. It is provident because, as a simple examination shows, it only puts something into its buffer while acting according to clause 3(c) of the description of Make History; and, at the end of that same clause, K is prescribed to make a move, thus emptying the buffer. As for prudence, it is achieved automatically: K only makes truncated moves, and such moves are always prudent.

B.7 K plays in target tricomplexity

It remains to show that K plays H in tricomplexity R. Our analysis is going to be asymptotic, implicitly relying on Remark 2.4 of [5].

B.7.1 Amplitude

Since K merely mimics N's moves (perhaps in truncated form, and perhaps with some delay), it is obvious that the amplitude complexity of the former does not exceed that of the latter. In fact, K's running within the target amplitude is also guaranteed by the facts that Rspace ⪯ Ramplitude (clause 5 of Definition 2.2 of [5]), that H is Rspace-bounded, and that K plays H prudently.
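In symbols, the second justification just combines three facts already on record: by prudence, every move α that K makes against background ℓ satisfies |α| ≤ max(ℓ, G(ℓ)); by Lemma 5.1 of [5], G ⪯ Rspace; and by clause 5 of Definition 2.2 of [5], Rspace ⪯ Ramplitude. That is,

\[
|\alpha| \;\le\; \max\bigl(\ell,\, G(\ell)\bigr),
\qquad
G \;\preceq\; \mathcal{R}_{\mathit{space}} \;\preceq\; \mathcal{R}_{\mathit{amplitude}}.
\]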

B.7.2 Space

Let H be a global history, and m a natural number. We define the H-index of m as the number of moves in H′, where H′ is the result of deleting from H the (m + 1)th ⊤-labeled move together with all subsequent moves (of whatever label); if H does not contain more than m ⊤-labeled moves, then H′ is simply H. Next, where S is a sketch, we define the H-index of S as the H-index of m, where m is (the value of) the 5th component of S.

We extend the concept of H-index to particular runs/iterations of Update Sketch and Fetch Symbol in the process of performing Make History. Namely, Update Sketch is always run on a sketch S, and we define the H-index of that run of Update Sketch to be the H-index of S. Similarly, Fetch Symbol is always called on a pair (k, n) for some numbers k and n, and we define the H-index of such a call/run of Fetch Symbol as the H-index of k (n is thus irrelevant here). If H is fixed or clear from the context, we may omit "H-" and simply say "index".
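Since the notion of H-index drives the recursion analysis below, the following small executable sketch may help fix the definition. The encoding of a global history as a list of (label, move) pairs, and all identifiers, are illustrative assumptions made for this sketch only; they are not the data structures of the actual construction.

# Illustrative sketch only: "T" marks a ⊤-labeled (machine) move,
# "E" an Environment-labeled one.
def h_index(history, m):
    # H-index of m: the number of moves in H', where H' is history
    # with its (m+1)-th T-labeled move and everything after it deleted.
    top_moves_seen = 0
    for position, (label, _move) in enumerate(history):
        if label == "T":
            top_moves_seen += 1
            if top_moves_seen == m + 1:
                return position  # moves at positions 0..position-1 survive
    return len(history)  # at most m T-labeled moves: H' is history itself

# Example: h_index([("T", "a"), ("E", "b"), ("T", "c")], 1) returns 2.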
Lemma B.3 In the process of any given iteration of Make History, and in the context of the then-current value of the global history variable H, we have:

1. The index of any run of Update Sketch does not exceed 2D.

2. Whenever a given run of Update Sketch calls Fetch Symbol, the index of the callee is strictly smaller than that of the caller.

3. Whenever a given run of Fetch Symbol calls Update Sketch, the index of the callee does not exceed that of the caller.

Proof. Clause 1 is immediate from the obvious fact that an index can never exceed the number of moves in the global history, and the latter, according to (132), is bounded by 2D. Clauses 2 and 3 can be verified through a rather straightforward (albeit perhaps somewhat long) analysis of the two procedures Update Sketch and Fetch Symbol; the details of such an analysis are left to the reader.

We are now ready to examine the space complexity of K. The space consumption of K comes from the need to simultaneously maintain the global history and various sketches. As observed earlier, maintaining the global history consumes Rspace space and, by Lemma B.1, each sketch also consumes Rspace space. At any given time, the global history is kept in memory in a single copy. So, to show that the overall space consumption is Rspace, we need to show that, at any given time, the number of sketches simultaneously kept in the memory of K does not exceed a certain constant. And this is indeed so. Looking back at the work of Make History, we see that, at any given time, its top level maintains a single sketch, which it keeps traversing and updating through Update Sketch, one step at a time. Since updates are done sequentially, the space used for them can be recycled. Hence the space consumed for updating different sketches (not only the top-level sketch of Make History, but also the many additional sketches that emerge during the calls to Fetch Symbol made while updating each individual sketch) does not add up, unless those sketches happen to lie on the same branch of the nested recursive calls that Update Sketch and Fetch Symbol make to each other. In view of Lemma B.3, however, the depth of that recursion (the height of the recursion stack at any given time) is bounded: the index of the topmost-level call of Update Sketch does not exceed 2D, and every two successive levels of recursion strictly decrease the index of the corresponding call of Update Sketch.
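The shape of this argument can be illustrated by the following toy model, which strips the two procedures of everything except the index bookkeeping of Lemma B.3; it is only an illustration of why the height of the recursion stack stays bounded, not a rendition of the actual routines.

def toy_update_sketch(index):
    # Clause 2 of Lemma B.3: a Fetch Symbol call made from here has a
    # strictly smaller index, so the index drops at every other level.
    if index > 0:
        toy_fetch_symbol(index - 1)

def toy_fetch_symbol(index):
    # Clause 3 of Lemma B.3: an Update Sketch call made from here has
    # an index not exceeding that of the caller.
    toy_update_sketch(index)

# Starting from the topmost index, which never exceeds 2D (clause 1),
# the stack can hold at most about 2 * (2D) frames at any moment.
toy_update_sketch(2 * 5)  # e.g. with D = 5: depth stays <= 2*10 + 1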

B.7.3 Time

As we observed in (133), during the entire work of K, Make History is iterated at most 2D times. The last iteration runs forever, but K is not billed for that time because it makes no moves during that period. Likewise, K will not be billed for the time spent on an iteration of Make History that was terminated at Stage 2, because a move by Environment resets K's time counter to 0. Call the remaining sorts of iterations of Make History, namely those that terminate according to the scenario of case (c) of Stage 3, time-billable. So, it is sufficient for us to understand how much time a single time-billable iteration of Make History takes. Pick any such iteration and fix it throughout the rest of this section, including the forthcoming Lemma B.4. We will use ℓ to denote the background of the last clock cycle of that iteration.

Lemma B.4 The time consumed by any single run of Update Sketch or Fetch Symbol is Rtime(ℓ).

Proof. We verify this lemma by induction on the index i ∈ {0, . . . , 2D} of the corresponding call/run of Update Sketch or Fetch Symbol.

Assume i ≥ 0 is the index of a given run of Update Sketch. Looking back at our description of Update Sketch, we see that this routine makes at most one call of Fetch Symbol. First, assume no such call is made. Due to K's playing prudently, max(ℓ, G(ℓ)) is the maximum magnitude of any move that may appear on K's run tape at any given time of the iteration. We also know from Lemma 5.1 of [5] that G ⪯ Rspace. So, K's run-tape size (by which, as usual, we mean the size of the non-blank segment of the tape) is O(ℓ) + Rspace(ℓ) and hence, by the relevant clauses of Definition 2.2 of [5], is Rtime(ℓ). We also know that the sketch and the global history are both of size Rspace(ℓ) and hence Rtime(ℓ). Keeping these facts in mind, a straightforward analysis shows that, in this case, Update Sketch spends Rtime(ℓ) time. Now assume Update Sketch does call Fetch Symbol. By clause 2 of Lemma B.3, the index j of such a call is strictly smaller than i. Hence, by the induction hypothesis, the time taken by that call is Rtime(ℓ). Beyond this call, Update Sketch spends only the same amount Rtime(ℓ) of time as in the preceding case to complete its work. Thus, in either case, the time consumption of Update Sketch is Rtime(ℓ).

Now consider a run of Fetch Symbol, and let i ≥ 0 be its index. By clause 3 of Lemma B.3, the index of any call of Update Sketch that the given run of Fetch Symbol makes is at most i. By the induction hypothesis, each such call of Update Sketch consumes at most Rtime(ℓ) time. Processing any such call (doing the additional work related to it), in turn, obviously takes at most Rtime(ℓ) time. So, each call of Update Sketch costs our run of Fetch Symbol at most Rtime(ℓ) time. How many such calls of Update Sketch will Fetch Symbol make? Since N runs in time Rtime, with a little thought one can see that the number of such calls is at most Rtime(ℓ). So, the overall time cost of the run of Fetch Symbol is Rtime(ℓ) × Rtime(ℓ), which, in view of the closure of Rtime under ×, remains Rtime(ℓ).

We are now ready to look at the time consumption of the single time-billable iteration of Make History fixed earlier. Stage 1 of Make History obviously takes a constant amount of time, and this stage is iterated only once. So, asymptotically, it contributes nothing to the overall time consumption of the procedure.
Stage 2 examines the run tape, which may require moving the run-tape head of K from one end of the (non-blank segment of the) run tape to the other. Additionally, the global history needs to be updated, but this can be done even faster. So, this stage takes time proportional to the size of K's run tape, which, as observed in the proof of Lemma B.4, is Rtime(ℓ).

Stage 3 starts with performing Update Sketch (Substage 1), and this, by Lemma B.4, takes Rtime(ℓ) time. With a little thought, the time taken by Substages (b) and (c) of Stage 3 can be seen to be at most quadratic in the size of K's run tape. We know that the latter is Rspace(ℓ) and hence also Rtime(ℓ); then so is its square Rspace(ℓ)², because Rtime is closed under ×.

To summarize, none of the three stages of Make History takes more than Rtime(ℓ) time. Stage 1 is repeated only once and, keeping in mind that the iteration of Make History we are dealing with is time-billable, a little thought shows that the remaining two stages are repeated at most Rtime(ℓ) times. Consequently, due to Rtime's closure under ×, the overall time consumption is Rtime(ℓ), which implies that K plays ⊓H in time Rtime, as desired.
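The accounting of this subsection can be condensed into one line; this is only a summary of what has just been shown, with the closure of Rtime under × performing the last step:

\[
T_{\text{billable iteration}}
\;\preceq\;
\underbrace{\mathcal{R}_{\mathit{time}}(\ell)}_{\text{repetitions of Stages 2 and 3}}
\times
\underbrace{\mathcal{R}_{\mathit{time}}(\ell)}_{\text{cost of a single stage}}
\;\preceq\;
\mathcal{R}_{\mathit{time}}(\ell).
\]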

References

[1] K. Aehlig, U. Berger, M. Hofmann and H. Schwichtenberg. An arithmetic for non-size-increasing polynomial-time computation. Theoretical Computer Science 318 (2004), pp. 3–27.

[2] S. Buss. First-order proof theory of arithmetic. In: Handbook of Proof Theory. S. Buss, editor. Elsevier, 1998, pp. 79–147.

[3] G. Japaridze. Introduction to clarithmetic I. Information and Computation 209 (2011), pp. 1312–1354.

[4] G. Japaridze. On the system CL12 of computability logic. Logical Methods in Computer Science 11 (2015), Issue 1, Paper 1, pp. 1–71.

[5] G. Japaridze. Build your own clarithmetic I. This journal.


Index

F 19
background (of a configuration) 15
Bitsum 9
Br0(x, s), Br1(x, s) 8
buffer-empty title 18
Borrow1 7
Carry 10
Carry1 6
complete semiposition 34
configuration 14
corrupt configuration 15
critical formula 16
critical unit 44
completion (of a semiposition) 34
compression (of a semiposition) 34
d 14
D 37
D 18
Dǫ 18
D⊔ 18
Dǫ⊔ 18
depth (of a unit) 44
Fetch Symbol 40
G 37
Gi 44
Γ^C_∞, Γ^C_i 35
global history 39
globally new move 41
header (of a move) 17
ill-resolved unit 44
incomplete semiposition 34
index (H-index) 46
initial sketch 41
j 19
k 18
ℓ^C_i 35
legitimate semiposition 34
literal 14
logically imply 3
m 18
Make History 41
master condition 44
master variable (of a unit) 44
min 9
N 18
Nothing 23
numer 17
One 23
paralegal move 17
politeral 14
provider (of a resource/game) 3
Q(~s, z) 19

quasilegal move 38
quasilegitimate semiposition 34
reachable configuration 15
Reasonable R-Induction 4
Reasonable R-Comprehension 5
relevant branch 13
relevant paraformula 13
resolved unit 44
resolvent 44
retirement move 34
scene (of a configuration) 19
Scenei 19
semiposition 34
semiuncorrupt configuration 15
sketch 38
so-far-authored semiposition 34
so-far-seen semiposition 34
time-billable 47
title (of a configuration) 17
Titlei 18
titular component 17
titular number 18
true relevant parasentence
truncation 38
U 18
U^⊔_~s 19
U^∃_~s 19
unadulterated successor 15
uncorrupt configuration 15
Update Sketch 40
unit 44
V^C_i 35
W 14
W1 13
W^C_i 35
W^B_i-induced branch of L 35
W-stabilization point 35
well-resolved unit 44
windup 34
X, X 11
y 14
yield (of a configuration) 15
Zero 23

E◦ 16
E• 16
Ẽ◦(~s) 19
Ẽ•(~s) 19

⌊u/2⌋ 9
⟨Φ⟩!F 13
Ē (where E is a formula) 14
S♥ 31
S♠ 31
