## Computational Error Complexity Classes


Christian Schindelhauer · Andreas Jakoby
Medizinische Universität Lübeck
December 1997

**Abstract**

The complexity classes Nearly-BPP and Med$^{\infty}$DisP have recently been proposed as limits of efficient computation [Yam1 96, Schi 96]. For both classes, a polynomial time bounded algorithm with bounded probabilistic error has to compute correct outputs for at least a $1 - n^{-\omega(1)}$ fraction of the inputs of length $n$. We generalize this notion to general error probabilities and arbitrary complexity classes. For proving the intractability of a problem it is necessary to show that it cannot be computed within a given error bound for every input length. For this, we introduce a new error complexity class where the error is only infinitely often bounded by the error function. We identify sensible bounds for the error function and derive new diagonalization techniques. Using these techniques we present time hierarchies of a new quality: we are able to show that there are languages computable in time $T$ that a machine with asymptotically slower running time cannot predict within a smaller error than $\frac{1}{2}$. Further, we investigate two classical non-recursive problems: the halting problem and the Kolmogorov complexity function. We give strict lower bounds proving that any heuristic algorithm that claims to solve one of these problems makes unrecoverable errors with constant probability.

## 1 Introduction

We revisit the question of recursiveness from an unconventional point of view. One of the great achievements in computer science is the sound and unique definition of recursiveness which resulted from various approaches to computability, like the lambda calculus, Turing machines, Post machines, Markov algorithms, and Herbrand-Gödel computable functions [Chur 36, Turi 36, Davi 65]. This development culminated in the thesis of Church: every problem is computable, even in an intuitive sense, if and only if it is computable by such a machine model. Many problems of practical interest are known to be non-recursive. Despite this knowledge, and because of their practical relevance, programmers try to solve these problems algorithmically. Of course their programs produce erroneous output or may not halt for some inputs. But the hope remains that this does not occur too often. The question is for which complexity class such problems can be claimed to be efficiently solvable. It is common sense that at least P is a class of efficiently solvable problems. Also, introducing a bounded error probability, giving the class BPP, is acceptable. A valid enhancement of the notion of efficiency is to measure the expectation over the running times of different inputs according to a probability distribution over the input space. Since this measure is not closed under polynomial time simulations, Levin introduced a weaker notion of Average-P [Levi 86]. Here it is sufficient to bound the expectation of some root of the time by a polynomial. All these measures are not able to solve our problem of bounding erroneous inputs, since every positively weighted input is not allowed to cause non-halting behavior of the algorithm.

Institut für Theoretische Informatik, Wallstraße 40, 23560 Lübeck, Germany; fax: ++49-451-7030-438, phone: ++49-451-7040-417, email: jakoby / schindel @ informatik.mu-luebeck.de


So a measure is more suitable if it is related more strongly to the statistical $p$-th quantile than to the expectation. In [Schi 96] the complexity class $\mathrm{MedDisTime}(T, F)$ is introduced. Here, for a given machine, among the $\ell$ most likely inputs $x$ there are at most $F(\ell)$ inputs which exceed the time bound $T(|x|)$. But the machine has to accept the language correctly. If one now substitutes the time behavior of an input by a symbolic value $\infty$ for an unrecoverable output error or a non-halting computation, the resulting measure, called $\mathrm{Med}^{\infty}\mathrm{DisTime}(T, F)$, is suitable for investigating the quality of heuristics for non-recursive problems. Yamakami proposed a related notion of measuring the error complexity: Nearly-P and Nearly-BPP [Yam1 96, Yam2 96]. Here the error probability has to be smaller than any inverse polynomial in the input length. Hence, a polynomial number of test instances includes an erroneous or non-polynomial-time-bounded computation only with small probability. Both approaches bound the error probability of a machine. Naturally, some non-recursive problems are included in these classes. But does this contradict the thesis of Church? Is it possible to compute most of the instances of a non-recursive problem, say the halting problem?

This article is structured as follows. First, we introduce a general setting for a complexity class $f$-Err $C$ that is derived from a complexity class $C$ with respect to an error probability bound $f$. We make a first classification of suitable error bounds and observe that there is a need for defining another, weaker complexity class $f$-Err$_{io}$ $C$ that gives meaningful lower bounds for the error complexity. Here we claim that the error bound holds only infinitely often and derive a nontrivial lower io-error bound of $\frac{1}{2} - o(|\Sigma|^{-n/2})$. This bound differs only slightly from the trivial upper bound $\frac{1}{2}$. We extend these lower bounds to time hierarchies.
Finally, we discuss in detail the error complexity of the halting problem and of the Kolmogorov complexity problem, that is, the problem of computing the shortest description of a given input. We will see that these problems have high lower error bounds. Thus, for practical purposes, these problems stay non-computable.

## 2 Notations

De ne A  B := (A n B ) [ (B n A) as the symmetric di erence of sets A and B . P (S ) denotes the power set of S . Let 8ae n P (n) be equivalent to 9n0 8n  n0 P (n) and 9io n P (n) to 8n09n  n0 P (n). Further, de ne f (n) ae g(n) as 8ae n : f (n)  g(n) and f (n) io g(n) as 9io n : f (n)  g(n) and similary for ; =;  . For a function f : IN ! IN we use the sets O(f ); o(f ); (f ); !(f ) for the asymptotical order. We consider strings over the at least binary  , where  denotes the empty word. n S alphabet i  n are all strings with length n ,  := in  . Further, we use the following straightforward isomorphism between IN and  : ord() = 0 and ord(s ) = ord(s) jj +ord( ), for 2 , s 2  , where ord() = f1; : : :; jjg . str : IN !  denotes the inverse function of ord. Sometimes we may skip these functions for readability. PR denotes the set of all partial recursive functions, R the set of all recursive function. Similarly, PRL and RL de ne the sets of all partial recursive, resp. recursive predicates. For a partial function f the domain is called dom(f ) . f (x) = ? denotes x 62 dom(f ). We consider Turing machines with two tapes if nothing else is speci ed. Let M0 ; M1 ; M2 ; : : : denote an enumeration of all Turing machines. 0 ; 1 ; : : : gives the enumeration of the corresponding partial functions. Furthermore, let L0; L1; L2; : : : be an enumeration of the languages in PRL over the alphabet . For easier notation we represent a language Li respectively a sub-language Li [a; b] by its characteristic string, that means Li 2 f0; 1g such that

8w 2  : Li (ord(w)) = 1 () w 2 Li and Li [a; b] := Li (a) Li (a + 1)    Li (b) : A machine M is T -time bounded if timeM (x)  T (jxj) for all x 2  . DTime(T ) is the class of all decision problems which can be decided by a T -time bounded deterministic Turing machine; 2

$\mathrm{FDTime}(T)$ is the corresponding functional complexity class. $\mathrm{DTime}_2(T)$ refers to decision problems computed by 2-tape Turing machines. A function $f$ is $T$-time $k$-tape computable if a $T(|x|)$ time bounded $k$-tape Turing machine outputs $f(x)$. A function $T: \mathrm{IN} \to \mathrm{IN}$ is time constructible if there exists a $T$-time 2-tape Turing machine that computes $T(n)$. We further refer to pairing functions $\langle x, y\rangle$, $\langle\langle x, y\rangle\rangle$ as bijective functions which, like their inverse functions, are computable. As the standard pairing function we consider $\langle x, y\rangle = x + \frac{(x+y)(x+y+1)}{2}$.
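Since the standard pairing function recurs throughout the paper, a small illustrative sketch may help (Python; the function names `pair` and `unpair` are ours, not the paper's):

```python
# The standard (Cantor) pairing function <x, y> = x + (x+y)(x+y+1)/2
# and its inverse; both directions are computable, as required above.
from math import isqrt

def pair(x: int, y: int) -> int:
    """Bijection IN x IN -> IN, enumerating along diagonals x + y = const."""
    return x + (x + y) * (x + y + 1) // 2

def unpair(z: int) -> tuple[int, int]:
    """Inverse of pair: recover (x, y) from z."""
    # w = x + y is the largest integer with w(w+1)/2 <= z.
    w = (isqrt(8 * z + 1) - 1) // 2
    x = z - w * (w + 1) // 2
    return x, w - x

# pair enumerates IN^2 diagonal by diagonal:
# pair(0,0)=0, pair(0,1)=1, pair(1,0)=2, pair(0,2)=3, pair(1,1)=4, ...
```

The closed-form inverse via the integer square root is what makes the bijection (and its inverse) efficiently computable.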

## 3 Computational Error Complexity

### 3.1 An Error Measure for Upper Bounds

Traditionally, in complexity theory one investigates the worst case of machine behavior with respect to the input length. In average case complexity theory this behavior is weighted by a probability distribution over the input space. As a consequence, in worst case theory it is only necessary to consider functions $f: \mathrm{IN} \to \mathrm{IN}$ for an entire description of the considered complexity bound. In average case theory there is a variety of ways to average over the resource function. It is therefore necessary to examine pairs of resource functions $f: \Sigma^* \to \mathrm{IN}$ (e.g. time) and probability distributions over the set of inputs. There is a variety of concepts introduced and classified for average measures [Levi 86, Gure 91, ReSc 96, Schi 96, BCG 92, CaSe 96, ScWa 94]. These concepts have in common that the running times of all input values with positive weights account for the average behavior. In [Schi 96] a different approach is investigated. Here, a predicate $(\mathrm{time}_{M,L}, \mu) \in \mathrm{Med}(T, F)$ is defined for a machine $M$, a language $L$, a probability distribution $\mu$ over $\Sigma^*$, and bounds $T, F$ if for all $\ell \in \mathrm{IN}$:
$$\bigl|\{\, x \in \Sigma^* \mid |\{y \mid \mu(y) \geq \mu(x)\}| \leq \ell \ \text{ and } \ (M(x) \neq L(x) \ \text{or}\ \mathrm{time}_M(x) > T(|x|)) \,\}\bigr| \leq F(\ell)\,.$$
This means that the number of the $\ell$ most likely inputs $x \in \Sigma^*$ with $\mathrm{time}_M(x) > T(|x|)$ or $M(x) \neq L(x)$ is bounded by $F(\ell)$. In [Schi 96] a strong relationship to known average case concepts is shown. The corresponding complexity class is defined as
$$\mathrm{MedDisTime}(T, F) := \{(L, \mu) \mid \exists M \in \mathrm{DTM}:\ (\mathrm{time}_{M,L}, \mu) \in \mathrm{Med}(T, F)\}\,.$$
In the following we modify and extend this definition to arbitrary classes $C$ where the input length is taken into account.

Definition 1 For a class $C$ and a bound $F: \mathrm{IN} \to [0,1]$ we define the distributional complexity class of $F$-error bounded $C$ as the set of pairs of languages $L \subseteq \Sigma^*$ and probability distributions $\mu: \Sigma^* \to [0,1]$:
$$F\text{-Err}\,C := \{(L, \mu) \mid \exists S \in C\ \forall n:\ \mathrm{Prob}_\mu[x \in L \triangle S \mid x \in \Sigma^n] \leq F(n)\}\,.$$
Whenever we want to address a complexity class of languages we consider the uniform probability distribution $\mu_{uni}$: $L \in F\text{-Err}\,C :\Leftrightarrow (L, \mu_{uni}) \in F\text{-Err}\,C$, where
$$\mu_{uni}(x) := \frac{6}{\pi^2} \cdot \frac{1}{(|x|+1)^2} \cdot |\Sigma|^{-|x|}\,.$$
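That $\mu_{uni}$ with the normalization constant $6/\pi^2$ really sums to 1 can be checked numerically (our own sketch; the helper name `length_mass` is hypothetical):

```python
# mu_uni(x) = (6/pi^2) * (|x|+1)^(-2) * |Sigma|^(-|x|) is a probability
# distribution on Sigma^*: the |Sigma|^n strings of length n together
# carry mass (6/pi^2)/(n+1)^2, and sum_{n>=0} 1/(n+1)^2 = pi^2/6.
from math import pi

def length_mass(n: int) -> float:
    """Total mu_uni-probability of Sigma^n (independent of |Sigma|)."""
    return (6 / pi**2) / (n + 1) ** 2

# Partial sums over the first 100000 lengths approach 1 from below.
total = sum(length_mass(n) for n in range(100_000))
```

The per-length mass is independent of the alphabet because the factor $|\Sigma|^{-|x|}$ spreads it uniformly over the $|\Sigma|^n$ strings of that length.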

For example, for $L \in \frac{1}{\ell}$-Err REG with constant $\ell \in \mathrm{IN}$ there exists a finite automaton which decides membership in $L$ correctly for all but at most a $\frac{1}{\ell}$ fraction of the inputs of each length. It can easily be seen that the probability that an arbitrary string $x \in \Sigma^n$ is a palindrome is $|\Sigma|^{-\lfloor n/2 \rfloor}$. Hence, every automaton verifying that an input $x$ is a palindrome only if $|x| \leq 2 \lceil \log_{|\Sigma|} \ell \rceil$ and rejecting elsewhere fulfills the conditions of $\frac{1}{\ell}$-Err REG, i.e.
$$\forall \ell \in \mathrm{IN}:\ L_{pal} \in \tfrac{1}{\ell}\text{-Err}\ \mathrm{REG}\,.$$
Using Definition 1 we can generalize the classes Nearly-BPP (defined by Yamakami [Yam1 96, Yam2 96]) and Nearly-P (resp. Med$^{\infty}$DisP, defined by [Schi 96]) by $\text{Nearly-}C := \frac{1}{\omega(\mathrm{POL})}\text{-Err}\,C$ for an arbitrary class $C$. Referring to our example: $L_{pal} \in$ Nearly-REG.
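The palindrome density used in this example can be verified by brute force for small lengths (a small Python check of ours; the helper name is hypothetical):

```python
# A palindrome of length n is determined by its first ceil(n/2) symbols,
# so exactly |Sigma|^ceil(n/2) of the |Sigma|^n strings are palindromes,
# giving density |Sigma|^(-floor(n/2)).
from itertools import product

def palindrome_density(sigma: int, n: int) -> float:
    """Fraction of palindromes among all length-n strings over sigma symbols."""
    words = product(range(sigma), repeat=n)
    pals = sum(1 for w in words if w == w[::-1])
    return pals / sigma**n
```

For a binary alphabet this gives $2^{-\lfloor n/2 \rfloor}$, which drops below $1/\ell$ as soon as $n$ exceeds roughly $2 \log_2 \ell$, matching the automaton construction above.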

Nearly-$C$ represents complexity classes with a very low error probability. More precisely, even a polynomial number of instances (chosen according to the distribution) induces only a super-polynomially small error bound again. Thus, Nearly-P and Nearly-BPP define reasonable efficient complexity classes if the corresponding algorithm is only used for a polynomial number of inputs of each length. For practical issues it is clearly not acceptable to allow only such small error probabilities. However, when the error probability tends to 1, the error complexity becomes meaningless.

Proposition 1 For a complexity class $C$ closed under union and exclusion of finite sets and every function $z(n) =_{ae} 0$ it holds that
$$\mathcal{P}(\Sigma^*) = \left(1 - \frac{z(n)}{|\Sigma|^n}\right)\text{-Err}\,C\,.$$

Proof: For two languages $L \in C$ and $S \subseteq \Sigma^*$ define the finite sets $I := \{x \in S \mid \mathrm{ord}(x) < z(|x|)\}$ and $E := \{x \notin S \mid \mathrm{ord}(x) < z(|x|)\}$. Finally, let $L' := (L \cup I) \setminus E$. Since $\mathrm{Prob}[x \in L' \triangle S \mid x \in \Sigma^n] \leq 1 - \frac{z(n)}{|\Sigma|^n}$, the claim follows directly.

Note that any class $C$ covering the set of regular languages, like REG, L, NC, P, NP, ..., fulfills this property. Consequently, all languages (even the non-recursive ones) can be computed by a finite automaton with error probability $1 - \frac{z(n)}{|\Sigma|^n}$. On the other hand, we can construct a diagonal language $L$ that cannot be computed by any Turing machine within an error probability of $1 - \frac{e(n)}{|\Sigma|^n}$ if $e(n) \geq_{io} 1$.

Theorem 1 For every function $e$ with $e(n) \geq_{io} 1$ there exists a language $L$ such that
$$L \notin \left(1 - \frac{e(n)}{|\Sigma|^n}\right)\text{-Err}\ \mathrm{PRL}\,.$$

Proof: Define $s(n) := \sum_{i=0}^{n} \mathrm{sgn}(e(i))$ and define $L$ by its characteristic string
$$L(\mathrm{ord}(x)) := \begin{cases} 0, & M_{s(|x|)}(x) \neq 0 \text{ or does not halt}\,,\\ 1, & M_{s(|x|)}(x) = 0\,. \end{cases}$$
For any machine $M_i$ define $n$ minimal such that $s(n) = i$. Hence $e(n) \neq 0$ and
$$\mathrm{Prob}[\,x \in L(M_i) \triangle L \mid x \in \Sigma^n\,] = 1 > 1 - \frac{e(n)}{|\Sigma|^n}\,.$$

### 3.2 An Error Measure for Lower Bounds

Since the error complexity measure $F$-Err $C$ bounds the error probability for all input lengths, it gives an upper bound for the computability of a language. Note that for any language $L \notin \left(1 - \frac{e(n)}{|\Sigma|^n}\right)$-Err PRL with $e(n) \geq_{io} 1$ (see Theorem 1) it holds: every Turing acceptor infinitely often correctly computes $L$ on $\Sigma^n$. So the error probability of a machine computing $L$ infinitely often takes both extreme values. To investigate meaningful lower bounds, we therefore define the new notion of an infinitely often error complexity class. Here, for problems not included, the error probability has to be non-zero almost everywhere.

Definition 2 For a class $C$ and a bound $F: \mathrm{IN} \to [0,1]$ we define the distributional complexity class of infinitely often $F$-error bounded $C$ as the set of pairs of languages $L \subseteq \Sigma^*$ and probability distributions $\mu: \Sigma^* \to [0,1]$:
$$F\text{-Err}_{io}\,C := \{(L, \mu) \mid \exists S \in C\ \exists_{io}\, n:\ \mathrm{Prob}_\mu[x \in L \triangle S \mid x \in \Sigma^n] \leq F(n)\}\,.$$
We further define $L \in F\text{-Err}_{io}\,C :\Leftrightarrow (L, \mu_{uni}) \in F\text{-Err}_{io}\,C$. Similar to Proposition 1, we can show an upper bound for the io-error complexity measure of an arbitrary language:

Proposition 2 For a complexity class $C$ and a function $z$ with $z(n) =_{io} 0$ it holds that
$$\mathcal{P}(\Sigma^*) = \left(\frac{1}{2} - \frac{z(n)}{|\Sigma|^n}\right)\text{-Err}_{io}\,C$$
if $C$ contains at least two complementary languages.


Proof: For a class $C$ let $L, \bar{L} \in C$. If we focus our examination on input lengths $n$ with $z(n) = 0$, it holds that
$$\exists_{io}\, n \in \mathrm{IN}: \frac{|(L \triangle S) \cap \Sigma^n|}{|\Sigma^n|} \leq \frac{1}{2} - \frac{z(n)}{|\Sigma|^n} \qquad\text{or}\qquad \exists_{io}\, n \in \mathrm{IN}: \frac{|(\bar{L} \triangle S) \cap \Sigma^n|}{|\Sigma^n|} \leq \frac{1}{2} - \frac{z(n)}{|\Sigma|^n}\,.$$

Consequently, a one-state finite automaton can compute an arbitrary language with io-error probability $\frac{1}{2} - \frac{z(n)}{|\Sigma|^n}$. For proving lower bounds for the io-error measure of an arbitrary language we have to consider some technical properties of the Hamming distance of two languages:
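The idea behind this observation can be verified exhaustively for small lengths (our own check): for every target set $S$, one of the two one-state automata, "reject everything" (deciding $\emptyset$) or "accept everything" (deciding $\Sigma^*$), errs on at most half of $\Sigma^n$.

```python
# For any S subset of {0,1}^n, the errors of the always-reject automaton
# and the always-accept automaton sum to 2^n, so the better of the two
# errs on at most half of the strings of length n.
from itertools import product

def min_trivial_error(n: int) -> float:
    """Worst case, over all S, of the better of the two trivial automata."""
    universe = list(product("01", repeat=n))
    worst = 0.0
    for bits in product([0, 1], repeat=len(universe)):
        size_S = sum(bits)                        # |S|
        err_reject_all = size_S                   # errors when deciding {}
        err_accept_all = len(universe) - size_S   # errors when deciding Sigma^*
        worst = max(worst, min(err_reject_all, err_accept_all) / len(universe))
    return worst
```

Sets $S$ of size exactly $2^{n-1}$ show that the bound $\frac{1}{2}$ is tight for the trivial automata.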

Definition 3 For binary strings $x, y \in \{0,1\}^n$, $|x - y|$ denotes the Hamming distance between $x$ and $y$, and $m(k, n)$ denotes the number of binary strings of length $n$ which have a Hamming distance of at most $k$ to an arbitrary string $x \in \{0,1\}^n$, i.e.
$$|x - y| := \sum_{i=1}^{n} |x[i] - y[i]| \qquad\text{and}\qquad m(k, n) := \sum_{i=0}^{k} \binom{n}{i}\,.$$

Lemma 1 Let $x_1, \ldots, x_k \in \{0,1\}^\ell$ and $d \in \mathrm{IN}$ with $k \cdot m(d, \ell) < 2^\ell$. Then there exists $y \in \{0,1\}^\ell$ such that $\min_{i \in \{1,\ldots,k\}} |x_i - y| > d$.

Proof: From the definition of the Hamming distance it follows that for each $z \in \{0,1\}^\ell$ the number of binary strings $y$ of length $\ell$ with $|z - y| \leq d$ is given by $m(d, \ell)$. Hence
$$|\{\, y \mid \exists i \in \{1,\ldots,k\}: |x_i - y| \leq d \,\}| \leq k \cdot m(d, \ell)\,.$$
Since $k \cdot m(d, \ell) < 2^\ell$ the claim follows directly.
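The counting in Lemma 1 can be checked on small instances (a Python sketch of ours; the helper names are hypothetical):

```python
# If k * m(d, l) < 2^l, the union of the k Hamming balls of radius d
# cannot cover {0,1}^l, so some y has distance > d from every x_i.
from itertools import product
from math import comb

def m(k: int, n: int) -> int:
    """Number of length-n strings within Hamming distance k of a fixed string."""
    return sum(comb(n, i) for i in range(k + 1))

def far_point_exists(xs, d):
    """Brute-force search for y with Hamming distance > d from all xs."""
    l = len(xs[0])
    def dist(a, b):
        return sum(u != v for u, v in zip(a, b))
    return any(all(dist(y, x) > d for x in xs)
               for y in product("01", repeat=l))
```

For example, three centers in $\{0,1\}^6$ with $d = 1$ give $3 \cdot m(1,6) = 21 < 64$, so an uncovered point must exist.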

Lemma 2 For $\alpha < \frac{1}{2}$ and $\alpha n \in \mathrm{IN}$ it holds that $m(\alpha n, n) < \binom{n}{\alpha n} \cdot \frac{1-\alpha}{1-2\alpha}$.

Proof: Note that for each $\alpha \leq \frac{1}{2}$ and $k \leq \alpha n$:
$$\binom{n}{k-1} \Big/ \binom{n}{k} \;=\; \frac{k}{n-k+1} \;<\; \frac{\alpha n}{n - \alpha n} \;=\; \frac{\alpha}{1-\alpha}\,.$$
Since $\frac{\alpha}{1-\alpha} < 1$ it follows that
$$\sum_{i=0}^{\alpha n} \binom{n}{i} \;<\; \binom{n}{\alpha n} \left(1 + \frac{\alpha}{1-\alpha} + \left(\frac{\alpha}{1-\alpha}\right)^2 + \ldots\right) \;=\; \binom{n}{\alpha n} \cdot \frac{1}{1 - \frac{\alpha}{1-\alpha}} \;=\; \binom{n}{\alpha n} \cdot \frac{1-\alpha}{1-2\alpha}\,.$$

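A quick numeric spot-check of Lemma 2 (ours, with exact rational arithmetic so no rounding can mask a violation):

```python
# Check m(alpha*n, n) < C(n, alpha*n) * (1-alpha)/(1-2*alpha)
# for alpha < 1/2 with alpha*n integral.
from math import comb
from fractions import Fraction

def m(k: int, n: int) -> int:
    return sum(comb(n, i) for i in range(k + 1))

def lemma2_holds(alpha: Fraction, n: int) -> bool:
    k = alpha * n
    assert k.denominator == 1 and alpha < Fraction(1, 2)
    k = int(k)
    bound = comb(n, k) * (1 - alpha) / (1 - 2 * alpha)
    return m(k, n) < bound
```

For instance, $\alpha = \frac{1}{4}$, $n = 8$ gives $m(2, 8) = 37$ against the bound $\binom{8}{2} \cdot \frac{3/4}{1/2} = 42$.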
Lemma 3 For any $f \in \omega(\sqrt{n})$ there exists $g \in \omega(1)$ such that $g(n) \cdot m(n/2 - f(n),\, n) <_{ae} 2^n$.

Consider the pairing function
$$\langle i, j \rangle := \begin{cases} 0^{2i}, & j = 0\,,\\ g(i, j), & j \geq 1,\ 2i < \log(j-1) + 1\,, \end{cases}$$
where $g$ is an arbitrary bijective mapping from $\{(i, j) \mid j \geq 1,\ 2i < \log(j-1)+1\}$ to $\bigcup_{i \in \mathrm{IN}} \{0,1\}^{2i-1}$ which fulfills the recursive properties of a pairing function. Now for any programming system $\varphi$ the claim follows: for any $\varphi_i$ there exists a machine $\varphi_j$ where $\varphi_j(x)$ is not defined if $\varphi_i(\langle j, x\rangle) = 1$, and $\varphi_j(x)$ is defined if $\varphi_i(\langle j, x\rangle) = 0$. The behavior of $\varphi_j$ on other inputs is not important. So $\varphi_i$ produces errors for all $k$ on inputs $\langle j, k\rangle$. This result shows again that for lower bounds this error measure does not give meaningful results: e.g. for the language of $\varphi_j$ it is open whether inputs of odd lengths may be solved without too many errors. Furthermore, for infinitely many even $n$ the whole set $\{0,1\}^n$ can be computed by a Turing machine.

### 4.2 Meaningful Lower Bounds

The last theorem shows that an artificial pairing function by itself can cause high error complexity. We want to derive some more general results and define the notion of fairness.

Definition 5 We call a pairing function fair if for sets $X, Y \subseteq \Sigma^*$ with
$$\exists c_1 > 0\ \forall n:\ \mathrm{Prob}[x \in X \mid x \in \Sigma^n] \geq c_1 \qquad\text{and}\qquad \exists \ell_1, \ell_2 \in \mathrm{IN}:\ Y = \{w \mid \mathrm{ord}(w) \equiv \ell_1 \ (\mathrm{mod}\ \ell_2)\}$$
it holds that $\exists c_2\ \forall_{ae}\, n:\ \mathrm{Prob}[x \in X \wedge y \in Y \mid \langle x, y\rangle \in \Sigma^n] \geq c_2$.

Theorem 5 The standard pairing function $\langle x, y\rangle = x + \frac{(x+y)(x+y+1)}{2}$ is fair.

Proof: Note that the standard pairing function can also be defined by the following recurrence:

$$\langle x, y \rangle := \begin{cases} 0, & x = 0 \text{ and } y = 0\,,\\ \langle y-1,\, 0\rangle + 1, & x = 0 \text{ and } y \neq 0\,,\\ \langle x-1,\, y+1\rangle + 1, & x \neq 0\,. \end{cases}$$
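That this recurrence indeed computes the standard pairing function can be checked mechanically (our sketch; function names are ours):

```python
# The recurrence walks the diagonals of IN^2: step <x,y> -> <x-1,y+1>
# moves along a diagonal, and <0,y> -> <y-1,0> jumps to the start of
# the previous diagonal, so each step increments the value by one.
from functools import lru_cache

@lru_cache(maxsize=None)
def pair_rec(x: int, y: int) -> int:
    if x == 0 and y == 0:
        return 0
    if x == 0:                            # x = 0 and y != 0
        return pair_rec(y - 1, 0) + 1
    return pair_rec(x - 1, y + 1) + 1     # x != 0

def pair_closed(x: int, y: int) -> int:
    return x + (x + y) * (x + y + 1) // 2
```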

First we show lower bounds for the size and the length of the dark region $R_r$ illustrated in Figure 1.

(Figure 1: the set $\{(x, y) \mid |\langle x, y\rangle| = n\}$ in the $(x, y)$-plane; the dark region $R_r$ lies between $2^r$ and $2^{r+1}$.)
hord(j 0 8ae n : Prob[M (x) 6= H' (x) j x 2 n ]  : Therefore, every heuristic that claims to solve the halting problem makes at least a constant fraction of errors. This constant may be improved if the heuristic is modi ed. Corollary 2 For any dense programming system ' and for any function f 2 !(1) it holds H' 62 f1 -Errio PRL The question whether there exists a constant lower bound for the io-error complexity of halting is still open. Maybe the trivial constant upper bound can be improved showing that in the limit for more sophisticated Turing machines the error complexity tends to zero. The last corollary shows that H' 62 Nearly- PRL . Thus, even an improved upper bound would not help for practical issues. 9

## 5 Kolmogorov Complexity

A problem where the proof of non-recursiveness has a completely different structure from the halting problem is the Kolmogorov complexity problem. For an excellent survey of the field of Kolmogorov complexity we refer to [LiVi 93].

Definition 8 For a partial recursive function $\varphi$ define the relative Kolmogorov complexity for all $x, y \in \Sigma^*$ as $C_\varphi(x \mid y) := \min\{|p| : \varphi(p, y) = x\}$. A programming system $\varphi$ is called universal if for all partial recursive functions $f$ it holds that $\forall x, y \in \Sigma^*:\ C_\varphi(x \mid y) \leq C_f(x \mid y) + O(1)$. Fix such a universal programming system $\varphi$ and define $C(x) := C_\varphi(x \mid \lambda)$ as the absolute Kolmogorov complexity of $x$.

One of the fundamental results of Kolmogorov complexity theory is the existence of universal programming systems. A universal programming system is not necessarily dense, although many programming systems provide both properties. A sufficient condition for both features is the capability of fixing an input parameter of the universal program and storing it one-to-one in the index string: $\exists s: \Sigma^* \to \Sigma^*\ \forall x, y:\ \varphi_u(\langle x, y\rangle) = \varphi_{s(x)}(y)$ and $|s(x)| = |x| + O(1)$. In the following we define a decision problem for the Kolmogorov problem which will be classified with respect to lower io-error bounds later on.

Definition 9 For a function $f: \mathrm{IN} \to \mathrm{IN}$ define the set $C_f$ as the set of all inputs $x$ with Kolmogorov complexity smaller than $f(|x|)$: $C_f := \{x \in \Sigma^* \mid C(x) \leq f(|x|)\}$. For a constant $c$ we define the function $\delta_c: \mathrm{IN} \to [0,1]$ as $\delta_c(n) := \mathrm{Prob}[\,C(x) \leq n - c \mid x \in \Sigma^n\,]$.

In general, the functions $C$, $\delta_c$ and the set $C_f$ are not recursive. At least the sets $\{(x, C(x)) \mid x \in \Sigma^*\}$ and $C_f$ are recursively enumerable. We now investigate the size of $C_{n-c}$ and show linear lower and upper bounds.
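Before the formal bounds, it may help to recall the standard counting argument (our sketch, not the paper's proof) for why only few strings can be compressed by much: there are fewer short programs than long strings,

```latex
\Pr\bigl[\,C(x) \le n - c \mid x \in \Sigma^n\,\bigr]
  \;\le\; \frac{\bigl|\{\,p \in \Sigma^* : |p| \le n-c\,\}\bigr|}{|\Sigma|^{n}}
  \;=\; \frac{\sum_{i=0}^{n-c} |\Sigma|^{i}}{|\Sigma|^{n}}
  \;\le\; \frac{|\Sigma|^{\,n-c+1}}{(|\Sigma|-1)\,|\Sigma|^{n}}
  \;=\; \frac{|\Sigma|^{\,1-c}}{|\Sigma|-1}\,.
```

For $c \geq 2$ this probability is bounded away from 1 (over a binary alphabet it is at most $\frac{1}{2}$), consistent with the upper bound shown below.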

Lemma 5 For any constant $c \geq 1$ there exist constants $k_1, k_2 > 0$ such that $k_1 \leq_{ae} \delta_c \leq_{ae} 1 - k_2$.

Proof: $\delta_c \geq_{ae} k_1$: It is sufficient to prove that a constant share of all inputs of length $n$ can be compressed by more than $c$. It is easy to see that the strings $0^d 1 x$ for $x \in \{0,1\}^{n-d-1}$ provide this property for a sufficiently large constant $d$.

$\delta_c \leq_{ae} 1 - k_2$: Let $I_n := \{\min\{i \mid \varphi_i(\lambda) = x\} \mid x \in \Sigma^n\}$ be the set of minimal Kolmogorov indices for the set $\Sigma^n$. Note that $|I_n| = |\Sigma^n|$. From the lower bound on $\delta_c$ it follows that there exists a constant $c' > 0$ such that $|I_{n+1} \cap \Sigma^n| \geq c' |\Sigma^n|$. Since $|\Sigma^{n-1}| < |\Sigma^n|$ there are at least $k_1 |\Sigma^n|$ elements of $I_n$ which are not in $\Sigma^{n-1}$.

For recursive $f, g \leq \log n$ with $f \in \Omega(g)$, $g \in \omega(1)$ it is known that the set $C_f$ is not recursive. Furthermore, no infinite recursively enumerable set $A$ is included in $\overline{C_f}$; this is equivalent to $A \cap C_f \neq \emptyset$. For $f(n) = n - c$ we state the size of this set $A \cap C_f$ more precisely.

Lemma 6 Let $A \in \mathrm{RE}$ be such that $\mathrm{Prob}[x \in A \mid x \in \Sigma^n] \geq_{ae} c_1$ for a constant $c_1 > 0$. Then it holds for all $c_2 > 0$ that
$$\exists c_3 > 0:\ \mathrm{Prob}[\,C(x) \leq n - c_2 \mid x \in A \cap \Sigma^n\,] \geq_{ae} c_3\,.$$

Proof: Consider an algorithm $M$ which on input $x \in \Sigma^*$ outputs the $(\mathrm{ord}(x) - |\Sigma^{<|x|}|)$-th word of $A \cap \Sigma^{|x|+c_2}$ in the order of enumeration. If $\mathrm{ord}(x) - |\Sigma^{<|x|}| > |A \cap \Sigma^{|x|+c_2}|$ the machine does not halt.

Clearly, $M$ computes a partial recursive function. Let $a_i := M(i)$. Then for a constant $c_4$ it holds that $C(a_i) \leq C(i) + c_4$. Let $c_5 := \lceil \max(-\log_{|\Sigma|} c_1,\, c_2) \rceil$ and $c_6 := c_2 + c_4$. Then
$$|\{x \in A \cap \Sigma^n \mid C(x) \leq n - c_2\}| \;\geq\; |\{i \leq \min(|A \cap \Sigma^n|,\, |\Sigma|^{n-c_5}) \mid C(a_i) \leq n - c_2\}| \;\geq\; |\{i \leq |\Sigma|^{n-c_5} \mid C(a_i) \leq n - c_2\}| \;\geq\; |\{i \leq |\Sigma|^{n-c_5} \mid C(i) \leq n - c_2 - c_4\}|\,.$$
Since every string $i$ with $|i| \leq n - c_6 - O(1)$ fulfills $C(i) \leq |i| + O(1) \leq n - c_2 - c_4$, the last set contains a $c_3$-fraction of $\Sigma^n$ for some constant $c_3 > 0$.

Theorem 7 For any $c \geq 1$ there exists a constant $\varepsilon < 1$ such that $C_{n-c} \notin \varepsilon\text{-Err}_{io}\ \mathrm{PRL}$.

Proof: For a machine $M$ define for all $n \in \mathrm{IN}$:
$$A_n := \{x \in \Sigma^n \mid M(x) = 0\}\,, \qquad F_n := \{x \in \Sigma^n \mid M(x) \neq C_{n-c}(x)\}\,, \qquad K_n := \Sigma^n \cap C_{n-c}\,.$$
Note that $\Sigma^n \setminus (A_n \triangle K_n) = F_n$ and therefore $A_n \cap K_n \subseteq F_n$. From Lemma 5 it follows:
$$\exists k_1, k_2 > 0:\ k_1 |\Sigma^n| \leq_{ae} |K_n| \leq_{ae} (1 - k_2) |\Sigma^n|$$
$$\Longrightarrow\ \exists c_1, c_2, c_3 > 0:\ c_1 |\Sigma^n| \leq |A_n| \leq (1 - c_2) |\Sigma^n| \ \text{ or } \ |A_n \triangle K_n| \leq (1 - c_3) |\Sigma^n|$$
$$\Longrightarrow\ \exists c_3, c_4 > 0:\ |A_n \cap K_n| \geq c_4 |\Sigma^n| \ \text{ or } \ |A_n \triangle K_n| \leq (1 - c_3) |\Sigma^n|$$
$$\Longrightarrow\ \exists c_5 > 0:\ |F_n| \geq_{ae} c_5 |\Sigma^n|\,.$$

No matter which algorithm tries to compute $C_{n-c}$, there is always at least the same constant fraction of erroneous outputs. For the investigation of the Kolmogorov complexity function we extend the io-error complexity measure to functional classes.

Definition 10 $E\text{-Err}_{io}\,\mathrm{F}C := \{(f, \mu) \mid \exists g \in \mathrm{F}C\ \exists_{io}\, n:\ \mathrm{Prob}_\mu[f(x) \neq g(x) \mid x \in \Sigma^n] \leq E(n)\}$. Again, when we omit the probability distribution we refer to the uniform distribution.

For the functional problem it was known that every partial recursive function $f$ with infinite domain $\mathrm{dom}(f)$ has at least one input $x \in \mathrm{dom}(f)$ such that $f(x) \neq C(x)$. Theorem 7 shows that even the decision whether $C(x) \leq |x| - 1$ is intractable. From this it follows immediately:

Corollary 3 There exists a constant $\varepsilon < 1$ such that $C \notin \varepsilon\text{-Err}_{io}\ \mathrm{PR}$.

## 6 Conclusion

We introduce general complexity classes which bound the error distance between a language and a complexity class. For the time complexity classes we show that the known hierarchy induces a very high error distance even between the smallest differences of time bounds: there are diagonal languages $L \in \mathrm{DTime}(T)$ that no $o(T)$-time bounded machine can predict with substantially more than 50% reliability. We prove that every machine trying to solve the halting problem fails with a positive constant error probability on almost all input lengths. This does not imply a constant error probability for all machines, since this constant depends on the heuristic. Nevertheless, for every arbitrarily slowly increasing function $f \in \omega(1)$ and large enough input lengths $n$, the error exceeds $\frac{1}{f(n)}$ for all machines. The Kolmogorov complexity function $C$ intuitively seems to be easier, since contrary to the halting problem it can be computed in the limit, i.e. there exists a recursive function $g(m, x)$ converging to $C(x)$ for $m \to \infty$. But already the decision problem whether a string can be compressed by more than a constant cannot be computed by any machine within a smaller error probability than a constant. It is notable that this error probability is independent of the machine. Both the halting problem and the Kolmogorov problem are not in Nearly-BPP and thus not solvable in practice. Unfortunately, in practice non-recursive problems often have to be solved somehow. Our work shows that this task stays hopeless even if unrecoverable errors of the algorithm are accepted.

## 7 Acknowledgment

We would like to thank Barbara Goedecke, Karin Genther, Rüdiger Reischuk, Gerhard Buntrock, Hanno Lefmann, and Stephan Weis for helpful suggestions, criticism, and fruitful discussions.

## References

[BCG 92] S. Ben-David, B. Chor, O. Goldreich, M. Luby, On the Theory of Average Case Complexity, Journal of Computer and System Sciences, vol. 44, 1992, 193-219.

[Chur 36] A. Church, An Unsolvable Problem of Elementary Number Theory, American Journal of Mathematics, vol. 58, 1936, 345-363.

[CaSe 96] J. Cai, A. Selman, Fine Separation of Average Time Complexity Classes, Proc. 13th Symposium on Theoretical Aspects of Computer Science, 1996, 331-343.

[Davi 65] M. Davis, The Undecidable, Raven Press, 1965.

[Gure 91] Y. Gurevich, Average Case Completeness, Journal of Computer and System Sciences, vol. 42, 1991, 346-398.

[JRS 94] A. Jakoby, R. Reischuk, C. Schindelhauer, Circuit Complexity: From the Worst Case to the Average Case, Proc. 26th ACM Symposium on Theory of Computing, 1994, 58-67.

[ReSc 96] R. Reischuk, C. Schindelhauer, An Average Complexity Measure that Yields Tight Hierarchies, Computational Complexity, vol. 6, 1996, 133-173.

[Schi 96] C. Schindelhauer, Average- und Median-Komplexitätsklassen, Dissertation, Medizinische Universität Lübeck, 1996.

[ScWa 94] R. Schuler, O. Watanabe, Towards Average-Case Complexity Analysis of NP Optimization Problems, Proc. 10th Annual IEEE Conference on Structure in Complexity Theory, 1995, 148-159.

[Levi 86] L. Levin, Average Case Complete Problems, SIAM Journal on Computing, vol. 15, 1986, 285-286.

[LiVi 93] M. Li, P. Vitányi, An Introduction to Kolmogorov Complexity and Its Applications, Springer, 1993.

[Smit 94] C. Smith, A Recursive Introduction to the Theory of Computation, Springer, 1994.

[Turi 36] A. Turing, On Computable Numbers, with an Application to the Entscheidungsproblem, Proceedings of the London Mathematical Society, vol. 42, 1936, 230-265.

[Yam1 96] T. Yamakami, Average Case Computational Complexity Theory, Ph.D. Thesis, Technical Report 307/97, Department of Computer Science, University of Toronto.

[Yam2 96] T. Yamakami, Polynomial Time Samplable Distributions, Proc. Mathematical Foundations of Computer Science, 1996, 566-578.
