ISSN 0001-4346, Mathematical Notes, 2012, Vol. 91, No. 1, pp. 69–80. © Pleiades Publishing, Ltd., 2012. Original Russian Text © K. V. Lykov, 2012, published in Matematicheskie Zametki, 2012, Vol. 91, No. 1, pp. 79–92.

Some Remarks on the Moment Problem and Its Relation to the Theory of Extrapolation of Spaces

K. V. Lykov*

Samara State University

Received May 11, 2010; in final form, January 16, 2011

Abstract—It is well known that the coincidence of the integer moments (nth-power moments, where n is an integer) of two nonnegative random variables does not imply the coincidence of their distributions. Moreover, we show that, given coinciding integer moments, the ratio of half-integer moments may tend to infinity arbitrarily fast. Also, in this paper, we give a new proof of uniqueness in the continuous moment problem and show that, in that problem, it is impossible to replace the condition of coincidence of all moments by a two-sided inequality between them while preserving an inequality between the distributions. In conclusion, we study the relationship with the theory of extrapolation of spaces.

DOI: 10.1134/S0001434612010087

Keywords: nonnegative random variable, distribution function, integer moment, half-integer moment, continuous moment problem, extrapolation of spaces, Lebesgue measure, Orlicz space.

INTRODUCTION

In this paper, we study the problem of the reconstruction of the distribution of a nonnegative random variable from its moments. It is well known that the coincidence of the integer moments (the power moments for all natural exponents) of two random variables ξ and η does not guarantee the coincidence of their distributions [1]–[4]. The coincidence of the moments of all orders, including noninteger orders, does guarantee the uniqueness of the distribution [4, p. 262]. We develop this theme and prove some stronger statements.

In what follows, we consider nonnegative random variables defined on a probability space {Ω, F, P}. Sometimes the role of the probability space is played by the half-interval (0, 1] with Lebesgue measure; this is stipulated where it occurs. As usual, the symbol E denotes expectation. Thus, the coincidence of all integer moments of two nonnegative random variables ξ and η means that

Eξ^n = Eη^n < ∞ for all n ∈ N.

By ‖ξ‖_p we denote the L_p-norm (the quasinorm for p < 1) of the random variable ξ:

‖ξ‖_p := (E|ξ|^p)^{1/p},  p > 0.

By the distribution function F_ξ(x) of a random variable ξ we mean the function

F_ξ(x) = P{ω ∈ Ω : ξ(ω) ≤ x}.

The structure of the paper is as follows. In Sec. 1, we show that the coincidence of all integer moments does not guarantee the inequality ‖ξ‖_p ≤ C‖η‖_p for any constant C, even for a single fixed noninteger p; in other words, the ratio ‖ξ‖_p/‖η‖_p can be unbounded. Moreover, this can be arranged so that, for coinciding integer moments, the ratio ‖ξ‖_{n+1/2}/‖η‖_{n+1/2} tends to infinity arbitrarily fast as the natural number n tends to infinity. The construction of the corresponding random variables is explicit.

* E-mail: [email protected]


In Sec. 2, it is proved that if ‖ξ‖_p = ‖η‖_p for all positive p, then the distribution functions of the random variables |ξ| and |η| coincide. This result, apparently also known to Bernstein, was given in Akhiezer's book [4, p. 262]. However, the proof in [4] is not self-contained, because the book concentrates on other topics. In Sec. 2, we give a proof that uses the analyticity of the function Eξ^p in the variable p together with the uniqueness property of characteristic functions.

In Sec. 3, we present the following result, which may appear paradoxical against the results of the preceding section: the inequality

(1/2) E|η|^p ≤ E|ξ|^p ≤ E|η|^p < ∞ for all real p > 0

does not imply the inequality

P{ω : |ξ| > τ} ≤ C P{ω : |η| > τ} for all τ > 0

for any constant C, even for particular ξ and η. Finally, in Sec. 4, we briefly discuss the relationship with the theory of extrapolation of spaces.

1. THE COINCIDENCE OF ALL INTEGER MOMENTS DOES NOT IMPLY INEQUALITIES BETWEEN ALL THE MOMENTS WITH ANY CONSTANT

In this section, we prove the following statement showing, in particular, that the integer moments do not define the distribution uniquely.

Proposition 1. For any sequence {C_j}_{j=1}^∞ of positive numbers, there exist nonnegative random variables ξ and η such that

1) Eξ^n = Eη^n < ∞ for all n ∈ N;

2) for all natural numbers j,

Eξ^{j+1/2} / Eη^{j+1/2} ≥ C_j.

Before proving the proposition, we state the following lemma, which is of interest in itself.

Lemma 1. For any C > 0 and arbitrary m ∈ N, there exist nonnegative random variables ξ and η such that

1) Eξ^n = Eη^n < ∞ for all n ∈ N;

2) the following inequality holds:

Eξ^{m+1/2} / Eη^{m+1/2} > C.

Proof. For the random variable ξ we take the absolutely continuous random variable with density

f(x) = k e^{−αx^λ} for x > 0 and f(x) = 0 for x ≤ 0,

where α > 0, 0 < λ < 1/2, and the constant k is chosen from the normalization condition. For η we take the absolutely continuous random variable with density

g(x) = f(x)(1 + ε sin(βx^λ)), where β = α tan λπ, −1 < ε < 1.

In [3, p. 373], these random variables were considered to show that the integer moments of nonidentically distributed random variables can coincide. This example is suitable for our purpose as well. In [3], it was shown that Eξ^n = Eη^n < ∞ for all n ∈ N. Let us evaluate


the moments of order m + 1/2 in the same way as the integer moments in [3]. It is well known that, for p > 0 and complex q with Re q > 0, one has

∫_0^{+∞} t^{p−1} e^{−qt} dt = Γ(p)/q^p,   (1.1)

where Γ(p) is the Euler gamma function. In this integral, we set p = (m + 3/2)/λ, q = α, and t = x^λ, obtaining

Γ((m + 3/2)/λ) / α^{(m+3/2)/λ} = ∫_0^{+∞} x^{λ((m+3/2)/λ−1)} e^{−αx^λ} λx^{λ−1} dx = λ ∫_0^{+∞} x^{m+1/2} e^{−αx^λ} dx = (λ/k) Eξ^{m+1/2},

i.e.,

Eξ^{m+1/2} = kΓ((m + 3/2)/λ) / (λα^{(m+3/2)/λ}) < ∞.

Now, setting p = (m + 3/2)/λ, q = α + iβ, and t = x^λ in (1.1), we obtain

Γ((m + 3/2)/λ) / (α + iβ)^{(m+3/2)/λ} = ∫_0^{+∞} x^{λ((m+3/2)/λ−1)} e^{−(α+iβ)x^λ} λx^{λ−1} dx
= λ ∫_0^{+∞} x^{m+1/2} e^{−αx^λ} cos(βx^λ) dx − iλ ∫_0^{+∞} x^{m+1/2} e^{−αx^λ} sin(βx^λ) dx;

hence

∫_0^{+∞} x^{m+1/2} e^{−αx^λ} sin(βx^λ) dx = −(1/λ) Γ((m + 3/2)/λ) · Im ( 1 / (α + iβ)^{(m+3/2)/λ} ).

Further, since β = α tan λπ, we have α + iβ = (α/cos λπ)(cos λπ + i sin λπ), so that

(α + iβ)^{−(m+3/2)/λ} = (cos λπ / α)^{(m+3/2)/λ} (cos λπ + i sin λπ)^{−(m+3/2)/λ}
= (cos λπ / α)^{(m+3/2)/λ} ( cos((m + 3/2)π) + i sin(−(m + 3/2)π) );

since sin(−(m + 3/2)π) = (−1)^m, this yields

∫_0^{+∞} x^{m+1/2} e^{−αx^λ} sin(βx^λ) dx = (Γ((m + 3/2)/λ) / (λα^{(m+3/2)/λ})) (−1)^{m+1} (cos λπ)^{(m+3/2)/λ}.

Therefore,

Eη^{m+1/2} = ∫_0^{+∞} x^{m+1/2} g(x) dx = Eξ^{m+1/2} + εk ∫_0^{+∞} x^{m+1/2} e^{−αx^λ} sin(βx^λ) dx
= Eξ^{m+1/2} (1 + ε(−1)^{m+1} (cos λπ)^{(m+3/2)/λ}).
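As a sanity check, the coincidence of the integer moments and the computed half-integer gap can be verified numerically. The following sketch is my own illustration, not part of the paper; it takes the concrete values λ = 1/4, α = 1 (so that β = tan(π/4) = 1) and ε = 0.9. The normalizing constant k cancels from all the quantities compared, so the density is used in unnormalized form.

```python
import math

LAM, EPS = 0.25, 0.9   # lambda and epsilon; alpha = 1, so beta = tan(lambda*pi) = 1

def moment(s, eps):
    """Unnormalized moment of order s for the density ~ e^{-x^LAM} * (1 + eps*sin(x^LAM)).

    After the substitution u = x^LAM the integral becomes
    (1/LAM) * int_0^inf u^{(s+1)/LAM - 1} * e^{-u} * (1 + eps*sin(u)) du,
    evaluated here by the composite Simpson rule on [0, 300].
    """
    p = (s + 1) / LAM
    U, n = 300.0, 60_000          # n must be even for Simpson's rule
    h = U / n

    def f(u):
        return u ** (p - 1) * math.exp(-u) * (1 + eps * math.sin(u)) if u > 0 else 0.0

    total = f(0.0) + f(U)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * f(i * h)
    return total * h / (3 * LAM)

# The integer moments of xi (eps = 0) and eta (eps = EPS) coincide ...
for k in range(1, 5):
    a, b = moment(k, 0.0), moment(k, EPS)
    print(k, abs(a - b) / a)          # relative differences at numerical noise level

# ... but the half-integer moments differ by the factor computed above.
m = 1
ratio = moment(m + 0.5, EPS) / moment(m + 0.5, 0.0)
predicted = 1 + EPS * (-1) ** (m + 1) * math.cos(LAM * math.pi) ** ((m + 1.5) / LAM)
print(ratio, predicted)               # both close to 1 + 0.9/32 = 1.028125
```

After the substitution u = x^λ the integrand decays like e^{−u}, so truncating the integral at 300 and using Simpson's rule is accurate to many digits here.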

Now consider the ratio

Eη^{m+1/2} / Eξ^{m+1/2} = 1 + ε(−1)^{m+1} (cos λπ)^{(m+3/2)/λ}.

Choosing λ sufficiently small and ε sufficiently close to 1 or to −1, depending on the parity of m, we can ensure the validity of the inequality

Eη^{m+1/2} / Eξ^{m+1/2} < δ

for any prescribed δ > 0. We can set δ = 1/C, where C is the same as in the lemma. Then, for the chosen parameters λ and ε,

Eξ^{m+1/2} / Eη^{m+1/2} > 1/δ = C.

The lemma is proved.

Proof of Proposition 1. The distributions of the random variables ξ and η will be determined from their distribution functions F(x) and G(x), respectively. We seek the functions F and G in the form

F(x) = Σ_{k=1}^∞ (1/2^k) F_k(x),   G(x) = Σ_{k=1}^∞ (1/2^k) G_k(x)

(a mixture of distributions), where the sequences of distribution functions {F_k} and {G_k} will be determined by induction. Suppose that F_1(x) and G_1(x) are continuous distribution functions such that G_1(x) = F_1(x) = 0 for x < 0 and, for all natural numbers n,

∫_0^{+∞} x^n dF_1(x) = ∫_0^{+∞} x^n dG_1(x) < ∞

and

∫_0^{+∞} x^{3/2} dF_1(x) ≥ 2C_1 ∫_0^{+∞} x^{3/2} dG_1(x).

The existence of such distribution functions follows from Lemma 1. Multiplying the corresponding random variables by a suitable common positive constant, we can achieve the equality

∫_0^{+∞} x^{3/2} dG_1(x) = 1.

Suppose that j ≥ 2, the functions F_1, F_2, ..., F_{j−1}, G_1, G_2, ..., G_{j−1} have already been defined, and

A_j := Σ_{k=1}^{j−1} (1/2^k) ∫_0^{+∞} x^{j+1/2} dG_k(x) < +∞.

By Lemma 1, there exist nonnegative random variables with distribution functions F_j and G_j such that

∫_0^{+∞} x^n dF_j(x) = ∫_0^{+∞} x^n dG_j(x) < ∞ for all n ∈ N

and

∫_0^{+∞} x^{j+1/2} dF_j(x) ≥ (1 + A_j) 2^j C_j ∫_0^{+∞} x^{j+1/2} dG_j(x).

Just as in the case j = 1, we can assume that

∫_0^{+∞} x^{j+1/2} dG_j(x) = 1.

Thus, the sequences {F_k} and {G_k} have been constructed and the functions F and G found. By the construction of {F_k} and {G_k}, we have

Eξ^n = ∫_0^{+∞} x^n dF(x) = Σ_{k=1}^∞ (1/2^k) ∫_0^{+∞} x^n dF_k(x) = Σ_{k=1}^∞ (1/2^k) ∫_0^{+∞} x^n dG_k(x) = ∫_0^{+∞} x^n dG(x) = Eη^n


for all n ∈ N. Further, for k ≥ j, using Lyapunov's inequality, we obtain

( ∫_0^{+∞} x^{j+1/2} dG_k(x) )^{1/(j+1/2)} ≤ ( ∫_0^{+∞} x^{k+1/2} dG_k(x) )^{1/(k+1/2)} = 1,

whence

∫_0^{+∞} x^{j+1/2} dG_k(x) ≤ 1 for k ≥ j,

and (assuming that A_1 = 0)

∫_0^{+∞} x^{j+1/2} dG(x) = Σ_{k=1}^∞ (1/2^k) ∫_0^{+∞} x^{j+1/2} dG_k(x) ≤ Σ_{k=j}^∞ (1/2^k) · 1 + Σ_{k=1}^{j−1} (1/2^k) ∫_0^{+∞} x^{j+1/2} dG_k(x) ≤ 1 + A_j < +∞.
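Lyapunov's inequality used in this estimate states that p ↦ (E|X|^p)^{1/p} is nondecreasing in p. A quick numerical reminder on a toy discrete random variable (the values and probabilities below are arbitrary, chosen only for illustration):

```python
# Lyapunov's inequality: p -> (E|X|^p)^(1/p) is nondecreasing in p.
# Toy discrete random variable (illustrative values only).
values = [0.5, 1.2, 3.0]
probs = [0.2, 0.5, 0.3]

def lp_norm(p):
    return sum(pr * v ** p for v, pr in zip(values, probs)) ** (1 / p)

ps = [0.5, 1.0, 1.5, 2.5, 3.5, 5.0]
norms = [lp_norm(p) for p in ps]
print(norms)
assert all(a <= b + 1e-12 for a, b in zip(norms, norms[1:]))
```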

In particular, it follows from the last inequality that all the moments of the random variables ξ and η are finite. In addition,

∫_0^{+∞} x^{j+1/2} dF(x) ≥ (1/2^j) ∫_0^{+∞} x^{j+1/2} dF_j(x) ≥ (1 + A_j) C_j ≥ C_j ∫_0^{+∞} x^{j+1/2} dG(x),

as was required.

2. UNIQUENESS IN THE CONTINUOUS MOMENT PROBLEM

It follows from the results of the previous section that the coincidence of the integer moments of nonnegative random variables does not guarantee the coincidence of their distributions. In the present section, we show that the distribution can be reconstructed if we consider the moments for a bounded sequence of distinct exponents. Further (unless otherwise stated), the random variables are considered on the probability space {Ω, F, P}.

Lemma 2. Suppose that η is a random variable (η : Ω → R) and, for some q ∈ (0, +∞), Ee^{qη} < ∞. Then the function ψ(p) = Ee^{pη} is analytic in the strip Re p ∈ (0, q).

Proof. Suppose that p_0 = Re p ∈ (0, q). Then, by Lyapunov's inequality, we have

E|e^{pη}| = Ee^{p_0 η} ≤ (Ee^{qη})^{p_0/q} < ∞,

and the function ψ(p) is well defined in the strip Re p ∈ (0, q). Further, for a chosen p and for Δp with |Δp| < r = min{p_0, q − p_0}, let us consider the difference

ϕ(Δp) = (ψ(p + Δp) − ψ(p))/Δp − Eηe^{pη} = E( e^{pη} · (e^{Δpη} − 1 − Δpη)/Δp )   (2.1)

and show that it tends to zero as Δp → 0. Thus, we shall prove the existence of the derivative of the function ψ(p) in the strip under consideration, i.e., the assertion of the lemma. Note that E|ηe^{pη}| < ∞, so that the function ϕ(Δp) is well defined. Indeed, since, for every ε > 0, there is a constant C(ε) such that

|x|e^x ≤ e^{(1+ε)x} + C(ε) for all x ∈ R,

it follows (applying this inequality with x = p_0 η and 1 + ε = q/p_0, and writing C = C(q/p_0 − 1)) that

|ηe^{pη}| = |η| e^{p_0 η} ≤ (1/p_0)(e^{qη} + C).

Further, let us estimate the expression under the expectation sign in (2.1):

| e^{pη} · (e^{Δpη} − 1 − Δpη)/Δp | ≤ e^{p_0 η} Σ_{n=2}^∞ |Δp|^{n−1} · |η|^n / n!.   (2.2)

If |Δp| < min{r, r²}, then |Δp|^{n−1} = |Δp|^{n−2} · |Δp| ≤ r^{n−2} · r² = r^n for all n ≥ 2. For such values of Δp, using (2.2), we obtain

| e^{pη} · (e^{Δpη} − 1 − Δpη)/Δp | ≤ e^{p_0 η} Σ_{n=0}^∞ r^n |η|^n / n! = e^{p_0 η + r|η|} ≤ e^{qη} + 1.   (2.3)

Since the random variable under the expectation sign in (2.1) tends to zero as Δp → 0 (because ∂(e^{pη})/∂p = ηe^{pη}) and inequality (2.3) holds, we can apply Lebesgue's dominated convergence theorem in (2.1), obtaining

lim_{Δp→0} |ϕ(Δp)| ≤ lim_{Δp→0} E| e^{pη} · (e^{Δpη} − 1 − Δpη)/Δp | = 0.

The lemma is proved.

Lemma 3. Under the assumptions of Lemma 2, for all t ∈ R,

ϕ_η(t) = Ee^{itη} = lim_{p_0→0+} Ee^{(it+p_0)η}.

Proof. We have

|Ee^{(it+p_0)η} − Ee^{itη}| ≤ E|e^{p_0 η} − 1| ≤ E(e^{qη} + 2) < ∞.

Just as in the proof of Lemma 2, using Lebesgue's theorem, we obtain the assertion of the lemma.

Theorem 1. Suppose that ξ_1 and ξ_2 are nonnegative random variables such that

Eξ_1^{p_n} = Eξ_2^{p_n}   (2.4)

for some sequence {p_n}_{n=1}^∞ ⊂ [a, b], 0 < a < b < ∞, with p_l ≠ p_k for l ≠ k, and, in addition,

Eξ_j^{b+ε} < ∞ for some ε > 0, j = 1, 2.   (2.5)

Then their distribution functions coincide:

F_{ξ_1}(x) = P{ξ_1 ≤ x} = P{ξ_2 ≤ x} = F_{ξ_2}(x) for all x ∈ R.

Proof. Suppose first that ξ_1 and ξ_2 are positive on Ω. In this case, we can consider the random variables η_1 = ln ξ_1 and η_2 = ln ξ_2. Then Ee^{pη_j} = Eξ_j^p. By condition (2.5) and the assertion of Lemma 2, we conclude that

ψ_1(p) = Ee^{pη_1} and ψ_2(p) = Ee^{pη_2}

are analytic functions in the strip Re p ∈ (0, b + ε). By the uniqueness property of analytic functions and condition (2.4) (ψ_1(p_n) = ψ_2(p_n), and the bounded sequence {p_n} of distinct points has a limit point in [a, b], which lies inside the strip), we have

ψ_1(p) = ψ_2(p) for Re p ∈ (0, b + ε).


Now we can use Lemma 3 and obtain the following equality for the characteristic functions of the random variables:

ϕ_{η_1}(t) = lim_{p_0→0+} ψ_1(it + p_0) = lim_{p_0→0+} ψ_2(it + p_0) = ϕ_{η_2}(t).

It is well known that the characteristic function uniquely determines the distribution [3, p. 360]. Therefore, for x > 0, we can write

F_{ξ_1}(x) = P{ξ_1 ≤ x} = P{η_1 ≤ ln x} = P{η_2 ≤ ln x} = P{ξ_2 ≤ x} = F_{ξ_2}(x).

But if x ≤ 0, then F_{ξ_1}(x) = 0 = F_{ξ_2}(x). Thus, the theorem is proved in the case of strictly positive random variables ξ_1 and ξ_2.

In the general case, consider the events

Ω_j = {ω ∈ Ω : ξ_j > 0}, j = 1, 2.

If P{Ω_1} = 0, then all the moments of ξ_1, and by (2.4) also those of ξ_2, vanish, so that ξ_1 = ξ_2 = 0 almost surely. Further, we assume that P{Ω_j} > 0. Consider the probability spaces {Ω_j, F_j, P_j}, where

F_j = F ∩ Ω_j, P_j(A) = P(A)/P(Ω_j) for A ∈ F_j.

The corresponding expectations are denoted by E_j. Then, by Lemmas 2 and 3, the function E_j ξ_j^p is analytic and

lim_{p_0→0+} E_j ξ_j^{p_0} = 1,

whence, taking into account the fact that, for the random variables under consideration, E_j = (1/P{Ω_j}) E, we deduce that the function Eξ_j^p is analytic and

P{Ω_j} = lim_{p_0→0+} Eξ_j^{p_0}.   (2.6)
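Relation (2.6) can be illustrated on a toy example of my own (not from the paper): let ξ = B·U, where B is a Bernoulli variable with P{B = 1} = 0.3 and U is uniform on (0, 1], independent of B. Then Eξ^p = 0.3 ∫_0^1 u^p du = 0.3/(p + 1), which tends to P{ξ > 0} = 0.3 as p → 0+:

```python
# E xi^p for xi = B*U with B ~ Bernoulli(0.3), U ~ Uniform(0,1]:
# the exact value is 0.3/(p+1); here the integral int_0^1 u^p du is computed
# by the midpoint rule to keep the check independent of the closed form.
def xi_moment(p, n=100_000):
    h = 1.0 / n
    return 0.3 * h * sum(((i + 0.5) * h) ** p for i in range(n))

for p in [1.0, 0.1, 0.01, 0.001]:
    print(p, xi_moment(p))   # approaches P{xi > 0} = 0.3 as p -> 0+
```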

Therefore, P{Ω_1} = P{Ω_2} and E_1 ξ_1^p = E_2 ξ_2^p for Re p ∈ (0, b + ε). For the positive random variables ξ_1 and ξ_2 on {Ω_1, F_1, P_1} and {Ω_2, F_2, P_2}, we obtain the equality of the distribution functions just as above. For x > 0, returning to the original probability space, we obtain

P{ξ_1 ≤ x} = P{Ω_1} · P_1{ξ_1 ≤ x} + (1 − P{Ω_1}) = P{Ω_2} · P_2{ξ_2 ≤ x} + (1 − P{Ω_2}) = P{ξ_2 ≤ x}.

For x < 0, F_{ξ_1}(x) = F_{ξ_2}(x) = 0 and, for x = 0,

F_{ξ_1}(0) = 1 − P{Ω_1} = 1 − P{Ω_2} = F_{ξ_2}(0).

The theorem is proved.

3. INEQUALITIES BETWEEN ALL THE MOMENTS DO NOT GUARANTEE INEQUALITIES BETWEEN THE DISTRIBUTIONS

In this section, for the probability space {Ω, F, P} we take the half-interval (0, 1] with Lebesgue measure μ. Let us show that there exist two nonnegative random variables ξ and η such that

(1/2) Eη^p ≤ Eξ^p ≤ Eη^p < ∞ for all p > 0,

but

lim sup_{τ→+∞} P{ω : ξ > τ} / P{ω : η > τ} = +∞.

Lemma 4. Suppose that x(t) and y(t) are nonincreasing nonnegative functions on (0, 1]. If, for some C ≥ 1 and all τ > 0,

μ{s ∈ (0, 1] : x(s) > τ} ≤ C μ{s ∈ (0, 1] : y(s) > τ},

then, for all t ∈ (0, 1],

∫_0^t x(s) ds ≤ C ∫_0^t y(s) ds.

Proof. Since a monotone function has an at most countable set of points of discontinuity, we can assume that the functions x(t) and y(t) are left-continuous. In this case, the formula [5, p. 83]

x(t) = inf{τ : μ{s ∈ (0, 1] : x(s) > τ} < t}   (3.1)

is valid. Further,

μ{s ∈ (0, C] : y(s/C) > τ} = C μ{s ∈ (0, 1] : y(s) > τ};

hence, using the assumption of the lemma, we obtain

μ{s ∈ (0, 1] : x(s) > τ} ≤ μ{s ∈ (0, C] : y(s/C) > τ}.

Therefore, Eq. (3.1) and the similar formula for the function y(s/C) imply the inequality

x(s) ≤ y(s/C),

which is valid for all s > 0 if we assume that x(s) and y(s/C) are zero outside (0, 1] and (0, C], respectively. Finally, integrating the last inequality, for all t ∈ (0, 1], we obtain

∫_0^t x(s) ds ≤ ∫_0^t y(s/C) ds = C ∫_0^{t/C} y(s) ds ≤ C ∫_0^t y(s) ds.
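Lemma 4 can be checked numerically on step functions; the example below is my own illustration, not from the paper. The function x(s) = y(s/2) is not pointwise dominated by 2y(s) (at s ∈ (1/2, 1] we have x = 1 > 2y = 0.4), yet its level sets satisfy μ{x > τ} ≤ 2μ{y > τ} for every τ, and the conclusion ∫_0^t x ≤ 2 ∫_0^t y holds:

```python
# y is a nonincreasing step function on (0,1]; x(s) = y(s/2) (s/2 <= 1/2,
# so the extension of y by zero beyond 1 plays no role here).
C = 2.0
ys = [(0.0, 0.125, 3.0), (0.125, 0.5, 1.0), (0.5, 1.0, 0.2)]   # (left, right, value)
xs = [(0.0, 0.25, 3.0), (0.25, 1.0, 1.0)]                      # x(s) = y(s/2)

def measure(pcs, tau):   # mu{ s : f(s) > tau }
    return sum(b - a for a, b, v in pcs if v > tau)

def integral(pcs, t):    # int_0^t f(s) ds
    return sum((min(b, t) - a) * v for a, b, v in pcs if a < t)

# hypothesis of Lemma 4: mu{x > tau} <= C * mu{y > tau} for all tau > 0
for tau in [0.1, 0.2, 0.5, 1.0, 2.0, 3.0, 10.0]:
    assert measure(xs, tau) <= C * measure(ys, tau) + 1e-12

# conclusion: int_0^t x <= C * int_0^t y for all t in (0, 1]
for t in [0.05, 0.125, 0.25, 0.5, 0.75, 1.0]:
    assert integral(xs, t) <= C * integral(ys, t) + 1e-12
```

Note that the pointwise inequality x ≤ Cy fails here, so the conclusion really does pass through the level-set comparison, as in the proof above.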

Lemma 5. There exist two nonincreasing nonnegative functions x(t) and y(t) on (0, 1] such that

‖x‖_p ≤ ‖y‖_p < ∞ for all p > 0

and

lim sup_{t→0+} ( ∫_0^t x(s) ds ) / ( ∫_0^t y(s) ds ) = +∞.

Proof. It was proved in [6, p. 984] that there exist two nonincreasing nonnegative functions x_1(t) and y(t) on (0, 1] such that ‖x_1‖_p ≤ ‖y‖_p < ∞ for all p ≥ 1 and

lim sup_{t→0+} ( ∫_0^t x_1(s) ds ) / ( ∫_0^t y(s) ds ) = +∞.

Consider the functions

A(p) = ∫_0^1 x_1(s)^p ds and B(p) = ∫_0^1 y(s)^p ds for p ∈ (0, 1).

The function A(p) is continuous for p > 0, and

lim_{p→0+} A(p) = μ{t ∈ (0, 1] : x_1(t) > 0} > 0


(see relation (2.6) in the proof of Theorem 1). Therefore, the function A(p) is positive, bounded, and bounded away from zero on (0, 1). The same can be said about B(p). Therefore, for some C > 1, we have A(p) ≤ C B(p) for all p ∈ (0, 1), and hence

∫_0^1 x_1(s)^p ds ≤ C ∫_0^1 y(s)^p ds for all p > 0.

Assuming that x_1(s) = 0 for s > 1, consider the function x(s) := x_1(Cs). We have

∫_0^1 x(s)^p ds = ∫_0^1 x_1(Cs)^p ds = (1/C) ∫_0^1 x_1(s)^p ds ≤ ∫_0^1 y(s)^p ds,

∫_0^t x(s) ds = (1/C) ∫_0^{Ct} x_1(s) ds ≥ (1/C) ∫_0^t x_1(s) ds.

Thus, the functions x(t) and y(t) satisfy all the assertions of the lemma.

The following simple argument due to Astashkin shows that, in Lemma 5, the inequalities between the L_p-norms can be replaced by the equivalence of the moments. Considering the functions x(t) and y(t) from Lemma 5, we set x_0(t) = max{x(t), y(t)}. Then

∫_0^t x_0(s) ds ≥ ∫_0^t x(s) ds

and, therefore,

lim sup_{t→0+} ( ∫_0^t x_0(s) ds ) / ( ∫_0^t y(s) ds ) = +∞.

Here

∫_0^1 x_0(s)^p ds = ∫_{{s : x(s) ≥ y(s)}} x(s)^p ds + ∫_{{s : x(s) < y(s)}} y(s)^p ds ≤ ∫_0^1 x(s)^p ds + ∫_0^1 y(s)^p ds ≤ 2 ∫_0^1 y(s)^p ds.

In addition, obviously, ∫_0^1 x_0(s)^p ds ≥ ∫_0^1 y(s)^p ds. We thus arrive at the following statement.

Lemma 6. There exist two nonincreasing nonnegative functions x(t) and y(t) on (0, 1] such that

∫_0^1 y(s)^p ds ≤ ∫_0^1 x(s)^p ds ≤ 2 ∫_0^1 y(s)^p ds < ∞ for all p > 0

and

lim sup_{t→0+} ( ∫_0^t x(s) ds ) / ( ∫_0^t y(s) ds ) = +∞.

Suppose that x(s) and y(s) are the functions from Lemma 6. Assuming that x(s) = 0 for s > 1, consider the function x_1(s) := x(2s). We have

∫_0^1 x_1(s)^p ds = ∫_0^1 x(2s)^p ds = (1/2) ∫_0^1 x(s)^p ds,

∫_0^t x_1(s) ds = (1/2) ∫_0^{2t} x(s) ds ≥ (1/2) ∫_0^t x(s) ds.

Setting ξ := x_1(t) and η := y(t) (recalling that we regard (0, 1] as a probability space) and invoking Lemma 4, we establish the following statement.


Proposition 2. There exist two nonnegative random variables ξ and η such that

(1/2) Eη^p ≤ Eξ^p ≤ Eη^p < ∞ for all p > 0,

but

lim sup_{τ→+∞} P{ω : ξ > τ} / P{ω : η > τ} = +∞.

4. ON THE RELATIONSHIP WITH THE THEORY OF EXTRAPOLATION OF SPACES

The abstract theory of extrapolation of spaces was constructed by Jawerth and Milman in [7], [8]. Without going into the details of the theory, let us consider the following example. The space of measurable functions on the closed interval [0, 1] with the norm

‖x‖_{Exp L} = inf{ λ > 0 : ∫_0^1 e^{|x(t)|/λ} dt ≤ 2 }

is called the exponential Orlicz space Exp L. Similar spaces arise in various areas of analysis. It turns out that, for the norm of the exponential Orlicz space, the following extrapolation relation holds:

C^{−1} ‖x‖_{Exp L} ≤ sup_{1≤p<∞} ‖x‖_p / p ≤ C sup_{n∈N} ‖x‖_n / n ≤ C² ‖x‖_{Exp L}.   (4.1)

Recall that a Banach space E of measurable functions on [0, 1] is said to be symmetric if, whenever x ∈ E and a measurable function y satisfies

μ{t : |x(t)| > τ} = μ{t : |y(t)| > τ} for all τ > 0,

where μ is the Lebesgue measure, then y ∈ E and ‖y‖_E = ‖x‖_E.
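To make the definition of ‖·‖_{Exp L} concrete, here is a small computation of my own (not from the paper) for x(t) = ln(1/t) on (0, 1]. Then ∫_0^1 e^{x(t)/λ} dt = ∫_0^1 t^{−1/λ} dt = λ/(λ − 1) for λ > 1, so the Orlicz condition λ/(λ − 1) ≤ 2 gives ‖x‖_{Exp L} = 2, while ‖x‖_p = Γ(p + 1)^{1/p}; the extrapolation functional sup_{n∈N} ‖x‖_n/n is then finite and comparable to the Exp L norm, as an extrapolation relation of the type discussed above predicts:

```python
import math

# x(t) = ln(1/t) on (0,1]:  int_0^1 exp(x(t)/lam) dt = lam/(lam - 1) for lam > 1.
def orlicz_integral(lam):
    return lam / (lam - 1.0)

# ||x||_{Exp L} = inf{lam > 0 : orlicz_integral(lam) <= 2}, found by bisection
# (orlicz_integral decreases from +inf to 1 as lam runs over (1, +inf)).
lo, hi = 1.0 + 1e-9, 100.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if orlicz_integral(mid) <= 2.0:
        hi = mid
    else:
        lo = mid
exp_norm = hi                      # converges to 2

# ||x||_p = Gamma(p + 1)^(1/p), so the discrete extrapolation functional is
sup_ratio = max(math.gamma(n + 1) ** (1.0 / n) / n for n in range(1, 150))
print(exp_norm, sup_ratio)         # 2.0 and 1.0: comparable up to a constant
```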


The spaces L_p and the Orlicz spaces are symmetric spaces. Consider the symmetric spaces contained in all the spaces L_p for p < ∞. By Theorem 1, the distribution of functions from such spaces, and hence also the symmetric norm, can be uniquely reconstructed from the L_p-norms. However, we wish to have an explicit formula for reconstructing a symmetric norm from L_p-norms. In addition, we more often have to deal not with equalities for L_p-norms but rather with estimates, such as in Yano's theorem. In this connection, the notion of extrapolation space was introduced in [6]. In such a space E, the norm satisfies the two-sided estimate

C^{−1} ‖ ‖x‖_p ‖_F ≤ ‖x‖_E ≤ C ‖ ‖x‖_p ‖_F,   (4.2)

where F is a Banach lattice of functions of the argument p ∈ [1, +∞). It follows from (4.1) that the exponential Orlicz space is an extrapolation space. In [6], it was shown that there exists a nonextrapolation Marcinkiewicz space. The existence of such a space is related to the impossibility of reconstructing from L_p-norms (given up to equivalence) not only the distribution, but even a more regular object, the K-functional:

K(t, x; L_1, L_∞) = inf_{x = x_0 + x_1} (‖x_0‖_1 + t ‖x_1‖_∞) = ∫_0^t x*(s) ds,

where x* is the nonincreasing rearrangement of |x(t)|. This result was used to prove Proposition 2 in the present paper.

As is seen from (4.1), the norm on Exp L can be reconstructed even from integer moments. The space Exp L belongs to the class of strong extrapolation spaces, which are characterized by the action of the operator Sx(t) := x(t²) in them [12]. For all such spaces, the Banach lattice F from the definition (4.2) of the extrapolation property can be discretized by a method yielding, in the case of the Orlicz space Exp L, the relation

‖x‖_{Exp L} ≤ C sup_{n∈N} ‖x‖_{2^n} / 2^n ≤ C² ‖x‖_{Exp L}.

The norm on strong extrapolation spaces can thus be reconstructed even from moments of order 2^n. For any α > 0, the Orlicz space Exp L^α constructed from a convex function coinciding with e^{t^α} for all sufficiently large t is a strong extrapolation space. At the same time, using the Krein and Carleman conditions [1, p. 108], one can easily verify that uniqueness in the moment problem holds for all nonnegative random variables from Exp L^α if and only if α ≥ 1/2. In the general case, the discrete and continuous variants of extrapolation can give different spaces. Thus, using the random variables η and ξ from Proposition 1 corresponding to C_j = j^j, we see that the norms

‖x‖_{(1)} = sup_{n∈N} ‖x‖_n / n   and   ‖x‖_{(2)} = sup_{1≤p<∞} ‖x‖_p / p