Fractional Differential Forms II


Kathleen Cotrill-Shepherd and Mark Naber

arXiv:math-ph/0301016 13 Jan 2003

Department of Mathematics Monroe County Community College Monroe, Michigan, 48161-9746 [email protected]

ABSTRACT

This work further develops the properties of fractional differential forms. In particular, finite dimensional subspaces of fractional form spaces are considered. An inner product, Hodge dual, and covariant derivative are defined. Coordinate transformation rules for integral order forms are also computed. Matrix order fractional calculus is used to define matrix order forms; this is achieved by combining matrix order derivatives with exterior derivatives. Coordinate transformation rules and a covariant derivative for matrix order forms are also produced. The Poincaré lemma is shown to hold for exterior fractional differintegrals of all orders except those whose order is a non-diagonalizable matrix.

AMS subject classification: Primary: 58A10; Secondary: 26A33. Key words and phrases: Fractional differential forms, Fractional differintegrals, Matrix order differintegrals.


1. Introduction.

The use of differential forms in physics, differential geometry, and applied mathematics is well known and widespread. There is also a growing use of differential forms in mathematical finance [23], [3]. Many partial differential equations are expressible in terms of differential forms. This notation allows for insights and a geometric understanding of the quantities involved. Recently Maxwell's equations have been generalized using fractional derivatives, in part, to better understand multipole moments [5]-[8]. Kobelev has also generalized Maxwell's and Einstein's equations with fractional derivatives for study on multifractal sets [10]-[12]. Many of the differential equations of finance have also been fractionalized to better predict the pricing of options and equities in financial markets [1], [22], [24], [25]. Fractional calculus is also being applied to statistics [18]-[20]. Hence, there is a need to further develop the notion of a fractional differential form to better understand fractional differential equations, fractional line elements, and fractal geometry.

In an earlier work [4] (hereafter referred to as FDF I) the study of fractional differential forms was initiated. The purpose of that study was to combine Riemann-Liouville fractional calculus with exterior calculus on an n-dimensional Euclidean space. FDF I focused primarily on differential order fractional forms. In this paper the properties of differential order fractional forms are extended. Integral and matrix order forms are also introduced and studied. Following Oldham and Spanier [17] the term fractional differintegral form shall be used when the result applies to both differential and integral order forms.

In section 2 some algebraic properties are worked out for various finite dimensional subspaces of the F(ν, m, n) vector spaces defined in FDF I. In particular, a Euclidean inner product and Hodge dual are constructed for these finite dimensional subspaces and then for the infinite dimensional F(ν, m, n). In section 3 some differential properties are worked out for the finite dimensional subspaces of F(ν, m, n). The fractional Poincaré lemma is considered for all orders of the fractional exterior differintegral, including orders that represent fractional integration. Coordinate transformation rules for integral order forms are constructed. An example of a change of coordinates for integral order forms is also provided. In section 4, matrix order differintegrals are used to define matrix order forms (see the appendix for the definition of matrix order derivatives and integrals). Coordinate transformation rules and a covariant derivative are also constructed. The fractional Poincaré lemma is extended to matrix order forms and found to be true provided that the matrix representing the order is diagonalizable.

As in FDF I the coordinate index will be a sub-script rather than the traditional super-script. The summation convention will not be used, so as to maintain some clarity with the myriad of indices to be encountered. Throughout this paper {x_i} shall represent Cartesian coordinates and {y_i} and {z_i} shall represent arbitrary curvilinear coordinates on an n-dimensional Euclidean space, E^n. a_i shall denote the initial point for evaluation of the fractional differintegrals in Cartesian coordinates, and ã_i and ã̃_i in the curvilinear coordinates.

Before beginning it is necessary to correct an error and two typographical errors from FDF I that may cause confusion. In FDF I equation (25) is in error and equations (26) and (27) have mistyped indices. The equations should read, respectively,

{dx_i^{μ_1} ∧ dx_j^{μ_2} | i, j ∈ {1, …, n}, ∀ μ_1, μ_2 ≥ 0 ∋ μ_1 + μ_2 = ν},   (1)

β = ∑_{i=1}^{n} ∑_{j=1}^{n} ∫_0^ν (β_{ij}(ν_i, ν − ν_i) dx_i^{ν_i} ∧ dx_j^{ν − ν_i}) dν_i,   (2)

β = ∑_{i=1}^{n} ∑_{j=1}^{n} ∑_{k=1}^{n} ∫_0^ν ∫_0^{ν − ν_i} (β_{ijk}(ν_i, ν_j, ν − ν_i − ν_j) dx_i^{ν_i} ∧ dx_j^{ν_j} ∧ dx_k^{ν − ν_i − ν_j}) dν_j dν_i.   (3)

A more compact notation for fractional differintegrals is also adopted. The notation for fractional integrals and derivatives is given below.

{}_{a_i}D_{x_i}^{−λ} f(x) = (1/Γ(λ)) ∫_{a_i}^{x} f(ξ) dξ / (x − ξ)^{1−λ},   Re(λ) > 0   (4)

{}_{a_i}D_{x_i}^{λ} f(x) = (∂^n/∂x^n) [ (1/Γ(n − λ)) ∫_{a_i}^{x} f(ξ) dξ / (x − ξ)^{λ−n+1} ],   Re(λ) ≥ 0, n > Re(λ) (n is whole)   (5)

The parameter λ is the order of the integral or derivative and is allowed to be complex. In the appendix it is shown that λ may assume matrix order values as well. Equation (4) is a fractional integral and equation (5) is a fractional derivative. Taken together, the operation is referred to as differintegration (see page 61 of [17]).
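Equation (4) lends itself to direct numerical evaluation. The sketch below is our own illustration, not part of the paper; the function names and the quadrature scheme are assumptions. It computes the Riemann-Liouville fractional integral for real λ > 0 by substituting ξ = x − t², which removes the integrable endpoint singularity (for the λ used here the transformed integrand is smooth), and checks the result against the power rule {}_0D_x^{−λ} x = Γ(2)/Γ(2 + λ) x^{1+λ}:

```python
import math

def rl_integral(f, x, lam, a=0.0, steps=4000):
    """Riemann-Liouville fractional integral a_D_x^{-lam} f at x (lam > 0),
    computed from equation (4).  The substitution xi = x - t**2 removes the
    integrable singularity at xi = x; a midpoint rule then converges well."""
    # integral_a^x f(xi) (x - xi)^(lam - 1) dxi  with  xi = x - t**2 becomes
    #   2 * integral_0^{sqrt(x - a)} f(x - t**2) * t**(2*lam - 1) dt
    upper = math.sqrt(x - a)
    h = upper / steps
    total = 0.0
    for k in range(steps):
        t = (k + 0.5) * h
        total += f(x - t * t) * t ** (2.0 * lam - 1.0)
    return 2.0 * h * total / math.gamma(lam)

# Check against the power rule 0_D_x^{-lam} x = Gamma(2)/Gamma(2+lam) x^(1+lam):
x, lam = 2.0, 0.5
numeric = rl_integral(lambda xi: xi, x, lam)
exact = math.gamma(2.0) / math.gamma(2.0 + lam) * x ** (1.0 + lam)
```

The same substitution works for any 0 < λ ≤ 1; for larger λ the kernel is no longer singular and a plain quadrature suffices.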

Equations (6) – (13) are the basic identities from fractional calculus. In the following, m is a whole number and p and q are complex numbers whose real part is greater than zero.

(∂^m/∂x_i^m) {}_{a_i}D_{x_i}^{q} f(x) = {}_{a_i}D_{x_i}^{q+m} f(x)   (6)

{}_{a_i}D_{x_i}^{q} {}_{a_i}D_{x_i}^{−q} f(x) = f(x)   (7)

{}_{a_i}D_{x_i}^{−q} {}_{a_i}D_{x_i}^{q} f(x) ≠ f(x)   (8)

{}_{a_i}D_{x_i}^{p} {}_{a_i}D_{x_i}^{−q} f(x) = {}_{a_i}D_{x_i}^{p−q} f(x)   (9)

{}_{a_i}D_{x_i}^{p} {}_{a_i}D_{x_i}^{q} f(x) = {}_{a_i}D_{x_i}^{p+q} f(x) − ∑_{j=1}^{k} [{}_{a_i}D_{x_i}^{q−j} f(x)]_{x_i = a_i} (x_i − a_i)^{−p−j} / Γ(1 − p − j)   (10)

{}_{a_i}D_{x_i}^{−p} {}_{a_i}D_{x_i}^{q} f(x) = {}_{a_i}D_{x_i}^{q−p} f(x) − ∑_{j=1}^{k} [{}_{a_i}D_{x_i}^{q−j} f(x)]_{x_i = a_i} (x_i − a_i)^{p−j} / Γ(1 + p − j)   (11)

{}_{a_i}D_{x_i}^{−p} ({}_{a_i}D_{x_i}^{−q} f(x)) = {}_{a_i}D_{x_i}^{−(p+q)} f(x)   (12)

{}_{a_i}D_{x_i}^{q} (f(x) g(x)) = ∑_{s=0}^{∞} C(q, s) ({}_{a_i}D_{x_i}^{q−s} f(x)) (∂_{x_i}^{s} g(x))   (13)

In equations (10) and (11) k is the first whole number ≥ Re(q). In equation (13), C(q, s) = Γ(q + 1)/(Γ(s + 1) Γ(q − s + 1)) denotes the generalized binomial coefficient. The above formulae and definitions can be found in [16], [17], [21].
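The identities above can be checked in closed form. As an illustration (our own code; `rl_power` encodes the standard Riemann-Liouville power rule, which the paper uses but does not restate), the product rule (13) applied to f(x) = g(x) = x terminates after two terms, since the whole-order derivatives of g(x) = x vanish for s ≥ 2:

```python
import math

def binom(q, s):
    """Generalized binomial coefficient C(q, s) for real q and whole s."""
    out = 1.0
    for k in range(s):
        out *= (q - k) / (k + 1)
    return out

def rl_power(mu, q, x):
    """Power rule 0_D_x^q x^mu = Gamma(mu + 1)/Gamma(mu + 1 - q) x^(mu - q)."""
    return math.gamma(mu + 1.0) / math.gamma(mu + 1.0 - q) * x ** (mu - q)

q, x = 0.5, 1.7
# Left-hand side of (13): 0_D_x^q (x * x) = 0_D_x^q x^2.
lhs = rl_power(2.0, q, x)
# Right-hand side of (13): only s = 0 and s = 1 contribute.
rhs = (binom(q, 0) * rl_power(1.0, q, x) * x          # s = 0 term
       + binom(q, 1) * rl_power(1.0, q - 1.0, x))     # s = 1 term
```

The agreement reduces to the gamma-function identity 1/Γ(2 − q) + q/Γ(3 − q) = Γ(3)/Γ(3 − q).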

2. Algebraic Properties.

In this section the restriction that the orders of the fractional differintegrals be real is made. Many of the results will clearly be valid for complex order differintegrals as well. To begin, define a vector space G(ν_1, ν_2, …, ν_m, n), a finite dimensional subspace of F(ν, m, n) (the latter is defined in FDF I), such that,


G(ν_1, ν_2, …, ν_m, n) = ∏_{i=1}^{m} G(ν_i, n),   (14)

and,

F(ν, m, n) = ⋃_{∑_{i=1}^{m} ν_i = ν} G(ν_1, ν_2, …, ν_m, n).   (15)

If ν ≥ 0 then ν_i ≥ 0 for all i is imposed. If ν < 0 then ν_i < 0 for all i is imposed.

α ∈ G(ν_1, ν_2, …, ν_m, n) ⇒ α = ∑_{i_1 ⋯ i_m = 1}^{n} α_{i_1 ⋯ i_m} dx_{i_1}^{ν_1} ∧ ⋯ ∧ dx_{i_m}^{ν_m}.   (16)

Some of the terms in the sum may be zero if any of the orders of the coordinate differentials are the same. For example, dx_i^{1/2} ∧ dx_i^{1/2} = 0. The dimensions of these vector spaces are

dim(G(ν_1, n)) = n,   (17)

dim(G(ν_1, ν_1, n)) = C(n, 2),   (18)

dim(G(ν_1, ν_1, …, ν_1, n)) = C(n, m),   m ≤ n.   (19)

In equation (19) there are m coordinate differentials of order ν1.

dim(G(ν_1, ν_2, n)) = n²,   ν_1 ≠ ν_2   (20)

dim(G(ν_1, ν_2, …, ν_m, n)) = n^m   (21)

dim(G(ν_1, …, ν_1, ν_2, …, ν_2, …, ν_m, …, ν_m, n)) = C(n, p_1) C(n, p_2) ⋯ C(n, p_m)   (22)

In equations (21) and (22) the ν_i are distinct for i ∈ {1, 2, …, m}. In equation (22) there are p_1 coordinate differentials of order ν_1, p_2 coordinate differentials of order ν_2, etc., i.e.

G(ν_1, …, ν_1, ν_2, …, ν_2, …, ν_m, …, ν_m, n) = ∏_{j=1}^{m} ∏_{i=1}^{p_j} G(ν_j, n).   (23)
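The dimension counts (17)-(22) are easy to sanity-check. The helper below is our own illustration of equation (22) and reproduces the special cases (17), (18), and (21):

```python
from math import comb

def dim_G(n, multiplicities):
    """Dimension formula of equation (22): one factor C(n, p_j) for each
    distinct order nu_j that appears p_j times."""
    out = 1
    for p in multiplicities:
        out *= comb(n, p)
    return out

n = 4
d_single = dim_G(n, [1])          # equation (17): one order, once  -> n
d_pair = dim_G(n, [2])            # equation (18): one order, twice -> C(n, 2)
d_distinct = dim_G(n, [1, 1, 1])  # equation (21): three distinct orders -> n**3
```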

An inner product on G(ν_1, n) can be defined in the usual way (see [2], [9], and [14]) and then built up to an inner product on the higher dimensional subspaces. Let α, β ∈ G(ν_1, n) with α = ∑_{i=1}^{n} α_i dx_i^{ν_1} and β = ∑_{i=1}^{n} β_i dx_i^{ν_1}. The Euclidean inner product on E^n for G(ν_1, n) is

(α, β) = ∑_{i=1}^{n} α_i β_i.   (24)

In curvilinear coordinates equation (24) becomes

(α, β) = ∑_{i,j=1}^{n} α_i β_j g_{ij}(y, ν_1).   (25)

The metric g_{ij}(y, ν_1) is defined in equation (67) of FDF I. The dx_i^{ν_1} form an orthonormal basis on G(ν_1, n),

(dx_i^{ν_1}, dx_j^{ν_1}) = δ_{ij}.   (26)

If it is assumed that the ν_i are distinct for i ∈ {1, …, m}, then the inner product on G(ν_1, ν_2, …, ν_m, n) is given as follows. Let α = ∑_{i_1 ⋯ i_m = 1}^{n} α_{i_1 ⋯ i_m} dx_{i_1}^{ν_1} ∧ ⋯ ∧ dx_{i_m}^{ν_m} and β = ∑_{i_1 ⋯ i_m = 1}^{n} β_{i_1 ⋯ i_m} dx_{i_1}^{ν_1} ∧ ⋯ ∧ dx_{i_m}^{ν_m}; then,

(α, β) = ∑_{i_1 ⋯ i_m = 1}^{n} α_{i_1 ⋯ i_m} β_{i_1 ⋯ i_m}.   (27)

In curvilinear coordinates equation (27) becomes

(α, β) = ∑_{i_1 ⋯ i_m = 1}^{n} ∑_{j_1 ⋯ j_m = 1}^{n} α_{i_1 ⋯ i_m} β_{j_1 ⋯ j_m} g_{i_1 j_1}(y, ν_1) ⋯ g_{i_m j_m}(y, ν_m).   (28)

An orthonormal basis for G(ν_1, ν_2, …, ν_m, n) is given by,

{dx_{i_1}^{ν_1} ∧ ⋯ ∧ dx_{i_m}^{ν_m} | i_1, i_2, …, i_m ∈ (1, 2, …, n)} and,

(dx_{i_1}^{ν_1} ∧ ⋯ ∧ dx_{i_m}^{ν_m}, dx_{j_1}^{ν_1} ∧ ⋯ ∧ dx_{j_m}^{ν_m}) = δ_{i_1 j_1} ⋯ δ_{i_m j_m}.   (29)

By extension, an orthonormal basis for G(ν_1, …, ν_1, ν_2, …, ν_2, …, ν_m, …, ν_m, n), where there are p_i differentials of order ν_i and the ν_i are distinct for i ∈ {1, 2, …, m}, is given by,

{dx_{1i_1}^{ν_1} ∧ ⋯ ∧ dx_{1i_{p_1}}^{ν_1} ∧ dx_{2i_1}^{ν_2} ∧ ⋯ ∧ dx_{2i_{p_2}}^{ν_2} ∧ ⋯ ∧ dx_{mi_1}^{ν_m} ∧ ⋯ ∧ dx_{mi_{p_m}}^{ν_m} | 1i_1, 1i_2, …, mi_{p_m} ∈ (1, 2, …, n)}.

The coordinate indices assume all possible combinations with the constraint that 1 ≤ 1i_1 < 1i_2 < ⋯ < 1i_{p_1} ≤ n, 1 ≤ 2i_1 < 2i_2 < ⋯ < 2i_{p_2} ≤ n, …, 1 ≤ mi_1 < mi_2 < ⋯ < mi_{p_m} ≤ n.

Let α ∈ G(ν_1, …, ν_m, n) = ∏_{j=1}^{m} ∏_{i=1}^{p_j} G(ν_j, n) and β ∈ G(μ_1, …, μ_l, n) = ∏_{j=1}^{l} ∏_{i=1}^{q_j} G(μ_j, n). Then the exterior product of α and β obeys

α ∧ β = (−1)^{(∑_{i=1}^{m} p_i)(∑_{j=1}^{l} q_j)} β ∧ α.   (30)
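The sign in equation (30) is just the parity of the permutation that moves the block of q one-differentials in β past the block of p one-differentials in α. A small sketch (our own; the labels are arbitrary placeholders) counts the adjacent transpositions explicitly:

```python
def reorder_sign(seq, target):
    """Sign of the permutation taking seq to target, counted by adjacent
    transpositions (a bubble sort that tracks the number of swaps)."""
    seq, swaps = list(seq), 0
    for i, want in enumerate(target):
        j = seq.index(want, i)
        while j > i:
            seq[j], seq[j - 1] = seq[j - 1], seq[j]
            swaps += 1
            j -= 1
    return -1 if swaps % 2 else 1

alpha = ("a1", "a2")        # p = 2 coordinate differentials
beta = ("b1", "b2", "b3")   # q = 3 coordinate differentials
# Reordering beta ^ alpha into alpha ^ beta takes p*q transpositions:
sign = reorder_sign(beta + alpha, alpha + beta)
```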

The Hodge dual on G(ν_1, ν_1, …, ν_1, n), in Cartesian coordinates, is constructed in the same way as in ordinary exterior algebra (see e.g., page 287 of [14] and page 108 of [21]). Specify a basis with an orientation {dx_1^{ν_1}, dx_2^{ν_1}, …, dx_n^{ν_1}} for G(ν_1, n). The dx_i^{ν_1} form an orthonormal basis on G(ν_1, n), (dx_i^{ν_1}, dx_j^{ν_1}) = δ_{ij}. * denotes the Hodge dual,

*(dx_{i_1}^{ν_1} ∧ ⋯ ∧ dx_{i_p}^{ν_1}) = dx_{j_1}^{ν_1} ∧ ⋯ ∧ dx_{j_{n−p}}^{ν_1},   (31)

where (i_1, …, i_p, j_1, …, j_{n−p}) is an even permutation of (1, …, n). In curvilinear coordinates the Hodge dual of α = ∑_{i_1 ⋯ i_p = 1}^{n} α_{i_1 ⋯ i_p} dy_{i_1}^{ν_1} ∧ ⋯ ∧ dy_{i_p}^{ν_1} is also constructed just as it is in exterior calculus but respecting the fractional coordinate transformation rules (see equation (57) of FDF I). Let J(ν) = det(J_j^i(x, y, ν)) and 1/J(ν) = det(J_j^i(y, x, ν)). Let ε_{j_1 ⋯ j_n} denote the Levi-Civita permutation symbol, i.e. ∑ ε_{j_1 ⋯ j_n} ε_{j_1 ⋯ j_n} = n!. Then, in index notation,

*α = (1/(n − p)!) ∑_{j_1 ⋯ j_p = 1}^{n} (ε_{j_1 ⋯ j_n} / J(ν)) α_{j_1 ⋯ j_p},   (32)

**α = (−1)^{p(n−p)} α.   (33)

If desired, g(ν) = det(g_{ij}(x, y, ν)) may be used in place of J(ν) = det(J_j^i(x, y, ν)).
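In Cartesian coordinates (J(ν) = 1) the dual acts on a fractional p-form exactly as in ordinary exterior algebra, so equations (31)-(33) can be exercised numerically. The following sketch (our own; the component-dictionary representation is an assumption) builds the dual from the Levi-Civita symbol and confirms the double-dual sign of equation (33) for a 1-form in R³:

```python
import itertools

def eps(idx):
    """Levi-Civita symbol of a tuple of 0-based indices (via inversion count)."""
    if len(set(idx)) != len(idx):
        return 0
    inv = sum(1 for i in range(len(idx)) for j in range(i + 1, len(idx))
              if idx[i] > idx[j])
    return -1 if inv % 2 else 1

def hodge(a, p, n):
    """Hodge dual of a p-form on Euclidean R^n given by totally antisymmetric
    components a[(i_1, ..., i_p)] in an orthonormal basis (Cartesian
    coordinates, so the Jacobian factor J(nu) of equation (32) is 1)."""
    p_fact = 1
    for k in range(2, p + 1):
        p_fact *= k
    out = {}
    for free in itertools.product(range(n), repeat=n - p):
        out[free] = sum(eps(summed + free) * a.get(summed, 0.0)
                        for summed in itertools.product(range(n), repeat=p)) / p_fact
    return out

# alpha = 2 dx_1^nu + 5 dx_3^nu in R^3, stored by (0-based) index tuple:
n, p = 3, 1
alpha = {(0,): 2.0, (1,): 0.0, (2,): 5.0}
star = hodge(alpha, p, n)          # 2 dx_2^nu ^ dx_3^nu + 5 dx_1^nu ^ dx_2^nu
star_star = hodge(star, n - p, n)  # equation (33): alpha times (-1)^(p(n-p)) = alpha
```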

A dual for G(ν_1, …, ν_1, ν_2, …, ν_2, …, ν_m, …, ν_m, n) can be constructed by noting that,

G(ν_1, …, ν_1, ν_2, …, ν_2, …, ν_m, …, ν_m, n) = ∏_{j=1}^{m} ∏_{i=1}^{p_j} G(ν_j, n).   (34)

Then,

*G(ν_1, …, ν_1, ν_2, …, ν_2, …, ν_m, …, ν_m, n) = ∏_{j=1}^{m} * ∏_{i=1}^{p_j} G(ν_j, n),   (35)

*G(ν_1, …, ν_1, ν_2, …, ν_2, …, ν_m, …, ν_m, n) = ∏_{j=1}^{m} ∏_{i=1}^{n − p_j} G(ν_j, n).   (36)

For example consider,

α = ∑_{1i_1 ⋯ 1i_{p_1} = 1}^{n} ⋯ ∑_{mi_1 ⋯ mi_{p_m} = 1}^{n} α_{1i_1 ⋯ 1i_{p_1} ⋯ mi_1 ⋯ mi_{p_m}} dy_{1i_1}^{ν_1} ∧ ⋯ ∧ dy_{1i_{p_1}}^{ν_1} ∧ ⋯ ∧ dy_{mi_1}^{ν_m} ∧ ⋯ ∧ dy_{mi_{p_m}}^{ν_m}.   (37)

In index notation,

*α = ∑_{1i_1 ⋯ 1i_{p_1} = 1}^{n} ⋯ ∑_{mi_1 ⋯ mi_{p_m} = 1}^{n} [∏_{k=1}^{m} ε_{ki_1 ⋯ ki_n} / ((n − p_k)! J(ν_k))] α_{1i_1 ⋯ 1i_{p_1} ⋯ mi_1 ⋯ mi_{p_m}},   (38)

**α = [∏_{k=1}^{m} (−1)^{p_k (n − p_k)}] α.   (39)

For the discussion of the inner product and Hodge dual for F(ν, m, n) the restriction that ν ≥ 0 is made. The inner product and Hodge dual for F(ν, 1, n) would be the same as for G(ν, n). For F(ν, 2, n), the basis elements are made up of two coordinate differentials,

{dx_i^{μ_1} ∧ dx_j^{μ_2} | i, j ∈ {1, …, n}, ∀ μ_1, μ_2 ≥ 0 ∋ μ_1 + μ_2 = ν}.   (40)

Consider two arbitrary elements of F(ν, 2, n),

α = ∑_{i,j=1}^{n} ∫_0^ν (α_{ij}(ν_i, ν − ν_i) dx_i^{ν_i} ∧ dx_j^{ν − ν_i}) dν_i,   (41)

β = ∑_{i,j=1}^{n} ∫_0^ν (β_{ij}(ν_i, ν − ν_i) dx_i^{ν_i} ∧ dx_j^{ν − ν_i}) dν_i.   (42)

The inner product on F(ν, 2, n) is,

(α, β) = ∑_{i,j=1}^{n} ∫_0^ν (α_{ij}(ν_i, ν − ν_i) β_{ij}(ν_i, ν − ν_i)) dν_i.   (43)

In curvilinear coordinates equation (43) becomes,

(α, β) = ∑_{i,j,k,l=1}^{n} ∫_0^ν (α_{ik}(ν_i, ν − ν_i) β_{jl}(ν_i, ν − ν_i)) g_{ij}(y, ν_i) g_{kl}(y, ν − ν_i) dν_i.   (44)

An inner product on F(ν, m, n) would involve m − 1 integrals and m summations.

The problem with doing a Hodge dual on F(ν ,2, n ) is that there is a subspace where the orders of the coordinate differentials are the same. That is G(ν/2,ν/2,n) . On this subspace the Hodge dual mapping will map into objects with (n – 2) coordinate differentials. On the rest of F(ν ,2, n ) the Hodge dual mapping will map into objects with (2n – 2) coordinate differentials. To avoid this problem consider an arbitrary element of F(ν ,2, n ) that has no component in G(ν/2,ν/2,n) .

α = lim_{ε → ν/2⁻} ∑_{i,j=1}^{n} ∫_0^ε (α_{ij}(ν_i, ν − ν_i) dy_i^{ν_i} ∧ dy_j^{ν − ν_i}) dν_i + lim_{ε → ν/2⁺} ∑_{i,j=1}^{n} ∫_ε^ν (α_{ij}(ν_i, ν − ν_i) dy_i^{ν_i} ∧ dy_j^{ν − ν_i}) dν_i.   (45)

Then, in index notation, the Hodge dual of α is,

*α = lim_{ε → ν/2⁻} ∫_0^ε [∑_{i_1,j_1=1}^{n} (1/(n − 1)!) (ε_{i_1 ⋯ i_n} ε_{j_1 ⋯ j_n} / (J(ν_{i_1}) J(ν − ν_{i_1}))) α_{i_1 j_1}(ν_{i_1}, ν − ν_{i_1})] dν_{i_1} + lim_{ε → ν/2⁺} ∫_ε^ν [∑_{i_1,j_1=1}^{n} (1/(n − 1)!) (ε_{i_1 ⋯ i_n} ε_{j_1 ⋯ j_n} / (J(ν_{i_1}) J(ν − ν_{i_1}))) α_{i_1 j_1}(ν_{i_1}, ν − ν_{i_1})] dν_{i_1},   (46)

**α = (−1)^{2(n−2)} α = α.   (47)

The Hodge dual for F(ν ,m,n) is constructed in the same way, with the restriction of removing subspaces where the orders of the coordinate differentials are the same. Note that m ≤ n is also required.

3. Differential Properties.

To begin this section, it is desired to determine the values of ν and α for which the fractional Poincaré lemma is always true, ν being the order of the exterior fractional differintegral and α being the object that is acted upon:

d^ν d^ν (α) = 0.   (48)

Let α ∈ G(ν_1, …, ν_1, ν_2, …, ν_2, …, ν_m, …, ν_m, n) and denote the elements of an orthonormal basis by σ_l. To be specific, α = ∑_{l=1}^{N} α_l σ_l, with N = C(n, p_1) C(n, p_2) ⋯ C(n, p_m),

σ_l ∈ {dx_{1i_1}^{ν_1} ∧ ⋯ ∧ dx_{1i_{p_1}}^{ν_1} ∧ ⋯ ∧ dx_{mi_1}^{ν_m} ∧ ⋯ ∧ dx_{mi_{p_m}}^{ν_m} | 1i_1, 1i_2, …, mi_{p_m} ∈ (1, 2, …, n)},

and the p_1, …, p_m are defined in equations (22) and (23). Substituting this into equation (48) and expanding the fractional exterior differintegral gives,

d^ν d^ν (α) = ∑_{l=1}^{N} d^ν ∑_{j=1}^{n} dx_j^ν ∧ {}_{a_j}D_{x_j}^{ν} (α_l σ_l).   (49)

The product rule for fractional differintegrals (equation (13) of section 1) is used to evaluate {}_{a_j}D_{x_j}^{ν}(α_l σ_l),

{}_{a_j}D_{x_j}^{ν} (α_l σ_l) = ∑_{k=0}^{∞} C(ν, k) ({}_{a_j}D_{x_j}^{ν−k} α_l) (∂_{x_j}^{k} σ_l).   (50)

The second factor on the right-hand side will be zero for all k values except k = 0. Thus,

{}_{a_j}D_{x_j}^{ν} (α_l σ_l) = ({}_{a_j}D_{x_j}^{ν} α_l) σ_l,   (51)

and equation (49) becomes,

d^ν d^ν (α) = ∑_{l=1}^{N} d^ν ∑_{j=1}^{n} ({}_{a_j}D_{x_j}^{ν} α_l) dx_j^ν ∧ σ_l.   (52)

N

n

dν dν (α ) = ∑ ∑

(

l =1 j , k =1

ak

Dνx k

(

aj

Dνx j α l

))(dx

ν k

∧ dx νj ∧ σ l ) .

(53)

The factor dx_k^ν ∧ dx_j^ν ∧ σ_l is antisymmetric in the k and j indices. The fractional differintegral operators are linear and act with respect to different coordinates, so they commute; the coefficient is thus symmetric in the k and j indices. Hence, each term in the above summation is zero. Thus the fractional Poincaré lemma is true for all values of ν ∈ C and for all α ∈ G(ν_1, …, ν_1, ν_2, …, ν_2, …, ν_m, …, ν_m, n) (assuming that the components of α are differintegrable to the appropriate orders). Recall that F(ν, m, n) is the union of all the G(ν_1, ν_2, …, ν_m, n), i.e.
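The cancellation argument can be made concrete with a toy computation (ours, purely illustrative): contracting any symmetric array of "mixed differintegral" coefficients with the antisymmetric sign carried by dx_k^ν ∧ dx_j^ν makes the double sum of equation (53) vanish identically:

```python
import itertools
import random

random.seed(0)
n = 4
# Symmetric stand-ins for the mixed differintegrals D_k D_j alpha_l:
c = [[0.0] * n for _ in range(n)]
for k in range(n):
    for j in range(k, n):
        c[k][j] = c[j][k] = random.random()

def wedge_sign(k, j):
    """Sign carried by dx_k^nu ^ dx_j^nu relative to the ordered basis
    (zero when k == j, since the wedge of a repeated factor vanishes)."""
    return 0 if k == j else (1 if k < j else -1)

total = sum(c[k][j] * wedge_sign(k, j)
            for k, j in itertools.product(range(n), repeat=2))
```

Each unordered pair {k, j} contributes c[k][j] − c[j][k] = 0, which is the content of the argument above.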

F(ν, m, n) = ⋃_{∑_{i=1}^{m} ν_i = ν} G(ν_1, ν_2, …, ν_m, n).   (54)

Hence,

d^ν d^ν (α) = 0   ∀ ν ∈ C and ∀ α ∈ F(ν, m, n).   (55)

In FDF I the coordinate transformation rules were worked out for fractional differential forms. For fractional integral forms the coordinate transformation rule is less compact. The reason for this is that fractional derivatives (Re(ν) ≥ 0) have a non-trivial kernel (see equation (31) of FDF I) while fractional integrals (Re(ν) < 0) do not. To begin, assume that the Cartesian coordinates, {x_i}, can be written smoothly in terms of the curvilinear coordinates, {y_i}, and note the expression for the exterior fractional differintegral in the two coordinate systems.

x_i = x_i(y)   (56)

d^ν = ∑_{i=1}^{n} dx_i^ν {}_{a_i}D_{x_i}^{ν}   (57)

d^ν = ∑_{j=1}^{n} dy_j^ν {}_{ã_j}D_{y_j}^{ν}   (58)

The goal is to find a coordinate transformation matrix, J_j^i(x, y, ν), that will express the dx_i^ν in terms of the dy_j^ν,

dx_i^ν = ∑_{j=1}^{n} dy_j^ν J_j^i(x, y, ν).   (59)

To construct the matrix for the integral order forms let k ∈ {1, …, n}. Then, the following will generate n linear equations for the dx_i^ν,

∑_{i=1}^{n} dx_i^ν {}_{a_i}D_{x_i}^{ν}(x_k) = ∑_{j=1}^{n} dy_j^ν {}_{ã_j}D_{y_j}^{ν}(x_k(y)).   (60)

Cramer's rule can now be used to find the matrix for the coordinate transformation. Define the n × n matrix A to be,

A_{ik} = {}_{a_i}D_{x_i}^{ν}(x_k),   (61)

and an n × 1 column vector,

b_k = ∑_{j=1}^{n} dy_j^ν {}_{ã_j}D_{y_j}^{ν}(x_k(y)).   (62)

Equation (60) can now be written as,

∑_{i=1}^{n} dx_i^ν A_{ik} = b_k.   (63)

The dx_i^ν can now be solved for in terms of the dy_j^ν. Define {}^iA to be the matrix A with column number i replaced with the column vector b_k. Then, via Cramer's rule,

dx_i^ν = det({}^iA) / det(A).   (64)

Equation (64) defines the coordinate transformation matrix for fractional integral forms,

dx_i^ν = ∑_{j=1}^{n} dy_j^ν J_j^i(x, y, ν).   (65)

As an example, consider the transformation from Cartesian to polar coordinates with the order ν = −1 and the initial point for the fractional integrals at the origin,

dx^{−1} = (2(tan²(θ) + 2)/3) dr^{−1} − ((2 tan(θ) − 1)/(3r sin(θ))) dθ^{−1},   (66)

dy^{−1} = (2(cot²(θ) + 2)/3) dr^{−1} + ((2 cot(θ) − 1)/(3r cos(θ))) dθ^{−1}.   (67)
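The construction of equations (60)-(64) can be carried out numerically. The sketch below is our own setup: it only verifies that the solved coefficients satisfy the linear system (60) at a sample point (it does not re-derive the closed forms above). It builds A_{ik} = {}_0D^{−1}_{x_i}(x_k) for ν = −1 with the initial point at the origin and solves the resulting 2 × 2 systems by Cramer's rule:

```python
import math

def solve2(m, b):
    """Cramer's rule for a 2x2 system m @ v = b, as in equation (64)."""
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    return ((b[0] * m[1][1] - b[1] * m[0][1]) / det,
            (m[0][0] * b[1] - m[1][0] * b[0]) / det)

r, th = 1.3, 0.7
x, y = r * math.cos(th), r * math.sin(th)

# A_ik = 0_D_{x_i}^{-1}(x_k): first-order integrals of the coordinates,
# e.g. 0_D_x^{-1}(x) = x**2/2 and 0_D_x^{-1}(y) = x*y.
A = [[x * x / 2.0, x * y],
     [x * y, y * y / 2.0]]
# The dr^{-1} and dtheta^{-1} parts of b_k in equation (62), i.e. 0_D^{-1}
# applied to x = r cos(theta) and y = r sin(theta) from the origin:
b_r = [math.cos(th) * r * r / 2.0, math.sin(th) * r * r / 2.0]
b_th = [r * math.sin(th), r * (1.0 - math.cos(th))]

# Equation (63) reads sum_i dx_i^{-1} A[i][k] = b_k, so the matrix of the
# linear system is the transpose of A:
At = [[A[0][0], A[1][0]], [A[0][1], A[1][1]]]
c_xr, c_yr = solve2(At, b_r)      # dr^{-1} coefficients of dx^{-1}, dy^{-1}
c_xth, c_yth = solve2(At, b_th)   # dtheta^{-1} coefficients
```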

To construct a covariant fractional derivative the restriction Re(ν) ≥ 0 is made. A covariant fractional derivative must have the following properties:

lim_{ν → 1} {}^{ν}_{ã_i}∇_{y_i} = ∇_{y_i}   (68)

lim_{{y} → {x}} {}^{ν}_{ã_i}∇_{y_i} = {}_{a_i}D_{x_i}^{ν}   (69)

∑_{i=1}^{n} J_j^i(y, z, ν) {}^{ν}_{ã_i}∇_{y_i} = {}^{ν}_{ã̃_j}∇_{z_j}   (70)

Equation (68) is merely a statement that we recover the covariant derivative familiar from differential geometry, i.e. a covariantly constant metric and parallel transport. Please note that the metric need only be covariantly constant for the case of derivatives of order 1 and for a background space of order 1. This is because parallel transport is tied to the notion of the tangent vector. Equation (69) says that when the curvilinear coordinates become Cartesian the covariant fractional derivative becomes an ordinary fractional derivative, just as in differential geometry the covariant derivative becomes a partial derivative when space becomes flat and the coordinates Cartesian. Equation (70) is just the statement that the fractional covariant derivative transforms as a vector.

From FDF I the following transformation rules are available,

{}_{a_j}D_{x_j}^{ν} = ∑_{l=1}^{n} J_j^l(y, x, ν) {}_{ã_l}D_{y_l}^{ν},   (71)

V_k(x) = ∑_{a=1}^{n} J_k^a(y, x, ν) V_a(y).   (72)

The covariant derivative of a vector must transform as,

∑_{i,m=1}^{n} J_j^i(y, x, ν) J_k^m(y, x, ν) ({}^{ν}_{ã_i}∇_{y_i} V_m(y)) = {}_{a_j}D_{x_j}^{ν} V_k(x).   (73)

Now substitute the known transformation rules into equation (73),

∑_{i,m=1}^{n} J_j^i(y, x, ν) J_k^m(y, x, ν) ({}^{ν}_{ã_i}∇_{y_i} V_m(y)) = ∑_{a,l=1}^{n} J_j^l(y, x, ν) {}_{ã_l}D_{y_l}^{ν} (J_k^a(y, x, ν) V_a(y)).   (74)

Multiplying equation (74) by ∑_{j=1}^{n} J_b^j(x, y, ν) and recalling that ∑_{j=1}^{n} J_j^i(y, x, ν) J_b^j(x, y, ν) = δ_b^i gives,

∑_{m=1}^{n} J_k^m(y, x, ν) ({}^{ν}_{ã_b}∇_{y_b} V_m(y)) = ∑_{a=1}^{n} {}_{ã_b}D_{y_b}^{ν} (J_k^a(y, x, ν) V_a(y)).   (75)

Multiply equation (75) by ∑_{k=1}^{n} J_l^k(x, y, ν), simplify again, and the fractional covariant derivative is seen to be

{}^{ν}_{ã_b}∇_{y_b} V_l(y) = ∑_{k,a=1}^{n} J_l^k(x, y, ν) {}_{ã_b}D_{y_b}^{ν} (J_k^a(y, x, ν) V_a(y)).   (76)

If ν → 1 then the expression reduces to,

∇_b V_l = ∑_{k,a=1}^{n} J_l^k ((∂_{y_b} V_a) J_k^a + V_a ∂_{y_b} J_k^a) = ∂_{y_b} V_l + ∑_{k,a=1}^{n} V_a J_l^k ∂_{y_b} J_k^a.   (77)

Equation (77) is the usual expression for a covariant derivative, written in terms of the coordinate transformation matrix.

Now expand equation (76) using the product rule for fractional derivatives,

{}^{ν}_{ã_b}∇_{y_b} V_l(y) = ∑_{k,a=1}^{n} J_l^k(x, y, ν) ∑_{s=0}^{∞} C(ν, s) ({}_{ã_b}D_{y_b}^{ν−s} V_a(y)) (∂_{y_b}^{s} J_k^a(y, x, ν)).   (78)

If the first term of the infinite series is separated out the connection can be isolated.

{}^{ν}_{ã_b}∇_{y_b} V_l(y) = ∑_{k,a=1}^{n} J_l^k(x, y, ν) [({}_{ã_b}D_{y_b}^{ν} V_a(y)) (J_k^a(y, x, ν)) + ∑_{s=1}^{∞} C(ν, s) ({}_{ã_b}D_{y_b}^{ν−s} V_a(y)) (∂_{y_b}^{s} J_k^a(y, x, ν))]   (79)

{}^{ν}_{ã_b}∇_{y_b} V_l(y) = {}_{ã_b}D_{y_b}^{ν} V_l(y) + ∑_{k,a=1}^{n} J_l^k(x, y, ν) ∑_{s=1}^{∞} C(ν, s) ({}_{ã_b}D_{y_b}^{ν−s} V_a(y)) (∂_{y_b}^{s} J_k^a(y, x, ν))   (80)

This is somewhat more complicated than the usual connection coefficients that arise when constructing a covariant derivative. So, adopt the following notation and refer to the connection as a connection functional (due to the integrals involved in the sum),

{}^{ν}_{ã_b}∇_{y_b} V_l(y) = {}_{ã_b}D_{y_b}^{ν} V_l(y) + {}^{ν}_{ã_b}γ_{lb}(V),   (81)

{}^{ν}_{ã_b}γ_{lb}(V) = ∑_{k,a=1}^{n} J_l^k(x, y, ν) ∑_{s=1}^{∞} C(ν, s) (∂_{y_b}^{s} J_k^a(y, x, ν)) ({}_{ã_b}D_{y_b}^{ν−s} V_a(y)).   (82)

As a cautionary note, recall that these objects are defined on a Euclidean space, i.e. there is no curvature. For a curved manifold there would be difficult issues concerning the non-local nature of fractional derivatives. Another difficult problem is to find an expression for a covariant derivative whose order differs from that of the background space. How does one construct {}^{μ}∇_{y_a} on a space where the vectors transform under J_b^a(y, z, ν)? The obvious path to approach this problem would be to use the chain rule. However, the chain rule in fractional calculus is computationally difficult to use for an arbitrary order (see pages 97 and 98 of [21]). For an alternate discussion of the fractional covariant derivative see Kobelev [11].


4. Matrix Order Forms.

In the appendix fractional derivatives are extended to matrix order derivatives. Matrix order derivatives are shown to be well defined for all square matrices over the complex numbers (C^{m×m}). Consider the definition of a fractional exterior differintegral,

d^ν = ∑_{i=1}^{n} dx_i^ν {}_{a_i}D_{x_i}^{ν}.   (83)

To construct a matrix order form, replace the parameter ν ∈ C by a matrix A ∈ C^{m×m}. This can be expressed directly or by using the spectral theorem (see page 517 of [15]),

d^A = ∑_{i=1}^{n} dx_i^A {}_{a_i}D_{x_i}^{A} = ∑_{i=1}^{n} dx_i^A P diag({}_{a_i}D_{x_i}^{λ_1}, …, {}_{a_i}D_{x_i}^{λ_m}) P^{−1},   (84)

d^A = ∑_{i=1}^{n} dx_i^A ∑_{j=1}^{k} G_j {}_{a_i}D_{x_i}^{λ_j},   (85)

where A = PDP^{−1}, D is a diagonal matrix whose entries are the eigenvalues of A, A = ∑_{i=1}^{k} G_i λ_i, k is the number of unrepeated eigenvalues, λ_i are the eigenvalues, and the G_i are the spectral projectors.
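Equation (84) can be made concrete for a small diagonalizable order matrix. The sketch below is our own illustration; the matrix A, the function f(x) = x, and the helper names are assumptions. It applies {}_0D_x^A to f(x) = x by using the scalar power rule {}_0D_x^λ x = Γ(2)/Γ(2 − λ) x^{1−λ} on each eigenvalue and conjugating by P:

```python
import math

def matmul(a, b):
    """Product of two 2x2 matrices stored as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# A hypothetical order matrix A = P diag(0.5, 0.75) P^{-1}:
P = [[1.0, 1.0], [0.0, 1.0]]
P_inv = [[1.0, -1.0], [0.0, 1.0]]
lams = [0.5, 0.75]

def d_order_A_of_x(x):
    """0_D_x^A applied to f(x) = x via equation (84): the scalar power rule
    acts on each eigenvalue of A, and P, P^{-1} reassemble the matrix."""
    diag = [[0.0, 0.0], [0.0, 0.0]]
    for i, lam in enumerate(lams):
        diag[i][i] = math.gamma(2.0) / math.gamma(2.0 - lam) * x ** (1.0 - lam)
    return matmul(matmul(P, diag), P_inv)

res = d_order_A_of_x(1.0)
```

Since P is upper triangular, the result stays upper triangular, with the two scalar fractional derivatives on the diagonal.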

To develop an understanding of these new objects, consider the coordinate transformation rules for matrix order forms,

dx_i^A = ∑_{j=1}^{n} dy_j^A J_j^i(A, x, y).   (86)

For now assume that A is diagonalizable; the Jordan case will be dealt with in a later paper. Express the Cartesian coordinates in terms of the curvilinear coordinates, x_i = x_i(y), and compute the exterior fractional differintegral in the two coordinate systems,

d^A(x_k) = d^A(x_k(y)),   (87)

∑_{i=1}^{n} dx_i^A {}_{a_i}D_{x_i}^{A}(x_k) = ∑_{j=1}^{n} dy_j^A {}_{ã_j}D_{y_j}^{A}(x_k(y)).   (88)

To find the coordinate transformation matrix, equation (88) needs to be solved for the dx_i^A. {}_{a_i}D_{x_i}^{A}(x_k) and {}_{ã_j}D_{y_j}^{A}(x_k(y)) form n × n matrices as i, j, k ∈ {1, …, n}. Hence equation (88) represents n² equations for the n dx_i^A. This is an overdetermined system; hence, dx_i^A cannot be viewed as a single object. The simplest thing to do is to view dx_i^ν as an object depending continuously on the parameter ν, and then write (see page 526 of [15]),

dx_i^A = P diag(dx_i^{λ_1}, …, dx_i^{λ_m}) P^{−1}.   (89)

With this assumption consider equation (88),

∑_{i=1}^{n} P diag(dx_i^{λ_1}, …, dx_i^{λ_m}) P^{−1} {}_{a_i}D_{x_i}^{A}(x_k) = ∑_{j=1}^{n} P diag(dy_j^{λ_1}, …, dy_j^{λ_m}) P^{−1} {}_{ã_j}D_{y_j}^{A}(x_k(y)).   (90)

Equation (90) can be simplified if {}_{a_i}D_{x_i}^{A} and {}_{ã_j}D_{y_j}^{A} are expressed in terms of their diagonal matrices and all factors of P^{−1} and P are either multiplied together or factored out,

∑_{i=1}^{n} diag(dx_i^{λ_1} {}_{a_i}D_{x_i}^{λ_1}(x_k), …, dx_i^{λ_m} {}_{a_i}D_{x_i}^{λ_m}(x_k)) = ∑_{j=1}^{n} diag(dy_j^{λ_1} {}_{ã_j}D_{y_j}^{λ_1}(x_k(y)), …, dy_j^{λ_m} {}_{ã_j}D_{y_j}^{λ_m}(x_k(y))).   (91)

This can be expressed as a single equation,

∑_{i=1}^{n} dx_i^{λ_l} {}_{a_i}D_{x_i}^{λ_l}(x_k) = ∑_{j=1}^{n} dy_j^{λ_l} {}_{ã_j}D_{y_j}^{λ_l}(x_k(y)),   (92)

where l ∈ {1, …, m}. A coordinate transformation matrix can be constructed using Cramer's rule as was done in section 3. If all the eigenvalues are such that Re(λ_l) ≥ 0 then the construction from FDF I can be used. In either case a prescription is available for the construction of the coordinate transformation matrix for matrix order forms,

diag(dx_i^{λ_1}, …, dx_i^{λ_m}) = ∑_{j=1}^{n} diag(dy_j^{λ_1} J_j^i(λ_1, x, y), …, dy_j^{λ_m} J_j^i(λ_m, x, y)).   (93)

If P and P^{−1} are inserted in the appropriate places equation (93) becomes,

dx_i^A = ∑_{j=1}^{n} dy_j^A J_j^i(A, x, y),   (94)

where,

J_j^i(A, x, y) = P diag(J_j^i(λ_1, x, y), …, J_j^i(λ_m, x, y)) P^{−1}.   (95)

With the coordinate transformation matrix found, a metric and line element can be constructed in the same manner as in FDF I. A matrix order covariant derivative can also be constructed provided that attention is restricted to matrix orders that are positive definite.

{}^{A}_{ã_b}∇_{y_b} V_l(y) = {}_{ã_b}D_{y_b}^{A} V_l(y) + {}^{A}_{ã_b}γ_{lb}(V),   (96)

{}^{A}_{ã_b}γ_{lb}(V) = ∑_{k,a=1}^{n} J_l^k(A, x, y) ∑_{s=1}^{∞} C(A, sI) ({}_{ã_b}D_{y_b}^{A−sI} V_a(y)) (∂_{y_b}^{sI} J_k^a(y, x, A)).   (97)

J_l^k(A, x, y) is defined in equation (95); the other components of equation (97) are defined in equations (98), (99), and (100),

C(A, sI) = P diag(C(λ_1, s), …, C(λ_m, s)) P^{−1},   (98)

{}_{ã_b}D_{y_b}^{A−sI} = P diag({}_{ã_b}D_{y_b}^{λ_1−s}, …, {}_{ã_b}D_{y_b}^{λ_m−s}) P^{−1},   (99)

∂_{y_b}^{sI} = diag(∂_{y_b}^{s}, …, ∂_{y_b}^{s}).   (100)

I is the m × m identity matrix. Equations (95)-(99) can also be expressed using the spectral theorem.

The Poincaré lemma also holds true for matrix order exterior differintegrals. Let α ∈ G(ν_1, …, ν_1, ν_2, …, ν_2, …, ν_k, …, ν_k, n) and allow the coefficients of α to possibly be matrix valued; then d^A d^A α can be written as,

d^A d^A α = ∑_{i,j=1}^{n} P diag(dx_i^{λ_1} ∧ dx_j^{λ_1} {}_{a_i}D_{x_i}^{λ_1} {}_{a_j}D_{x_j}^{λ_1}, …, dx_i^{λ_m} ∧ dx_j^{λ_m} {}_{a_i}D_{x_i}^{λ_m} {}_{a_j}D_{x_j}^{λ_m}) P^{−1} α.   (101)

Each component of the right-hand side of equation (101) is now equivalent to the problem considered in section 3. Hence, d^A d^A α = 0 for any A ∈ C^{m×m} that is diagonalizable and for any α ∈ F(ν, m, n) (assuming that the components of α are differintegrable to the appropriate orders). For matrices that are only Jordan diagonalizable the result does not hold. This is because the upper triangular Jordan blocks do not commute.


5. Conclusion.

In this paper the results of FDF I were extended to include an inner product, Hodge dual, and covariant derivative for fractional differential forms. The connection for covariant fractional order derivatives was found to be somewhat more complex than for covariant derivatives of order one. The resulting formula does reduce to the usual formula from differential geometry. The notion of a fractional differential form was also extended to fractional integral and matrix order forms. Coordinate transformation rules were worked out for both of these objects with a specific example presented for integral order forms. Matrix order forms were constructed by combining matrix order fractional calculus (see the appendix) with exterior derivatives. This was done in the same way that fractional order derivatives were combined with exterior derivatives in FDF I. The Poincaré lemma was found to be true for all orders of exterior differintegrals provided that the order of the differintegral was a diagonalizable matrix or any complex number.

One final note: the Riemann-Liouville fractional differintegral was used for the construction of fractional forms. There is no reason that the Grünwald-Letnikov, Weyl, or Caputo definitions of the fractional differintegral could not be used to define fractional forms. See references [17] and [21] for the definitions of these other fractional differintegrals.

Acknowledgment: The authors would like to thank P. Dorcey for helpful comments and a critical reading of the paper and Jean Krisch for asking the right questions to get all of this started.


Appendix: Matrix Order Differintegration

In this appendix the formulae from the Riemann-Liouville formulation of fractional calculus are adapted to matrix order for any matrix A ∈ C^{m×m} (m × m square matrices over the complex numbers). A brief review of matrix properties is provided to set notation and to provide a convenient reference for the reader.

The concepts of integration and differentiation have been expanded many times throughout the development of calculus. Almost immediately after the formulation of classical calculus by Leibniz and Newton the question of half order derivatives arose (see [16] and [17] for historical reviews). Derivatives of complex and purely imaginary order were later developed (see e.g. [13] and references therein). Phillips considered fractional order derivatives of matrix-valued functions with respect to their matrix arguments, i.e. fractional matrix calculus. These operations were found to be well defined and subsequently applied to econometric distribution theory (see [18]-[20]).

To develop the notion of matrix order derivatives and integrals (differintegrals) it is useful to review the properties of functions with matrix order arguments. Three different means of computing functions with matrix order arguments will be used in this paper. The method chosen will be determined by the properties of the matrix being used and the specific application being considered.

A given non-zero matrix A ∈ C^{m×m} can be placed into one or more of the following sets: Jordan block diagonalizable, diagonalizable, and/or normal. These sets will be denoted by B_m, D_m, and N_m, respectively. Note that N_m ⊂ D_m ⊂ B_m and every non-zero matrix A ∈ C^{m×m} is at least Jordan block diagonalizable. Recall that the set of normal matrices contains the following subsets: real symmetric, real skew-symmetric, positive and negative definite, Hermitian, skew-Hermitian, orthogonal, and unitary (see page 548 of [15] for further details). If a matrix A is normal then there exists a unitary matrix U such that

$$ A = U D U^{*} , \qquad (102) $$

where D is a diagonal matrix with the eigenvalues of A as its entries and ${}^{*}$ denotes the conjugate transpose. If A is real and normal then equation (102) reduces to

$$ A = O D O^{T} , \qquad (103) $$

where O is an orthogonal matrix. If a matrix A is diagonalizable then there exists an invertible matrix P such that

$$ A = P D P^{-1} . \qquad (104) $$

Alternatively, if A is diagonalizable the spectral theorem (see page 517 of [15]) can be used to find another representation for A. If A ∈ D_m then there exists a set of matrices $\{G_1, \ldots, G_k\}$ such that

$$ A = \sum_{i=1}^{k} G_i \lambda_i , \qquad (105) $$

where $\{\lambda_1, \ldots, \lambda_k\}$ are the unrepeated eigenvalues of A. The matrices $\{G_1, \ldots, G_k\}$ have the following properties:

$$ G_i G_j = 0 \quad \text{for } i \neq j , \qquad (106) $$

$$ G_i G_i = G_i , \qquad (107) $$

$$ I = \sum_{i=1}^{k} G_i . \qquad (108) $$
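The projector properties (105)-(108) are easy to exercise numerically. A minimal numpy sketch (the matrix A below is an arbitrary example with distinct eigenvalues, not taken from the text):

```python
import numpy as np

# Spectral projectors G_i of a diagonalizable matrix, eqs. (105)-(108).
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])      # arbitrary example, eigenvalues 2 and 3
lam, P = np.linalg.eig(A)
Pinv = np.linalg.inv(P)

# G_i = (i-th column of P)(i-th row of P^{-1}): one rank-one projector per eigenvalue
G = [np.outer(P[:, i], Pinv[i, :]) for i in range(len(lam))]

assert np.allclose(G[0] @ G[1], 0)                         # (106) G_i G_j = 0, i != j
assert all(np.allclose(g @ g, g) for g in G)               # (107) G_i G_i = G_i
assert np.allclose(sum(G), np.eye(2))                      # (108) sum_i G_i = I
assert np.allclose(sum(l * g for l, g in zip(lam, G)), A)  # (105) A = sum_i G_i lam_i
```

For repeated eigenvalues the projectors of the spectral theorem group the corresponding rank-one terms together, so the sum still runs over the unrepeated eigenvalues only.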

If a matrix is Jordan block diagonalizable, then there is an invertible matrix P such that

$$ A = P J P^{-1} = P \begin{pmatrix} J(\lambda_1) & & 0 \\ & \ddots & \\ 0 & & J(\lambda_k) \end{pmatrix} P^{-1} , \qquad (109) $$

where $\lambda_1, \ldots, \lambda_k$ are the distinct eigenvalues of A ($k \le m$), and $J(\lambda_j)$ is the Jordan segment for the eigenvalue $\lambda_j$:

$$ J(\lambda_j) = \begin{pmatrix} \lambda_j & 1 & & \\ & \lambda_j & \ddots & \\ & & \ddots & 1 \\ & & & \lambda_j \end{pmatrix} . \qquad (110) $$

There are several ways to define a matrix function. Let A ∈ N_m with eigenvalues $\lambda_1, \ldots, \lambda_m$ (they need not be distinct) and let g(z) be a function that is defined at all the eigenvalues of A. Then g(A) can be expressed as

$$ g(A) = U \begin{pmatrix} g(\lambda_1) & & \\ & \ddots & \\ & & g(\lambda_m) \end{pmatrix} U^{*} . \qquad (111) $$

If A is real replace U by O and $U^{*}$ by $O^{T}$. If A is diagonalizable then equation (111) would be written as

$$ g(A) = P \begin{pmatrix} g(\lambda_1) & & \\ & \ddots & \\ & & g(\lambda_m) \end{pmatrix} P^{-1} . \qquad (112) $$
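Equation (112) can be checked numerically; the sketch below (with g = √ and an arbitrary diagonalizable A, both choices ours) compares the eigendecomposition formula against SciPy's general matrix-function routine:

```python
import numpy as np
from scipy.linalg import funm

# g(A) = P diag(g(lambda_i)) P^{-1} for a diagonalizable matrix, eq. (112).
A = np.array([[4.0, 1.0],
              [0.0, 9.0]])       # arbitrary example, eigenvalues 4 and 9
lam, P = np.linalg.eig(A)
gA = P @ np.diag(np.sqrt(lam)) @ np.linalg.inv(P)

assert np.allclose(gA @ gA, A)             # gA really is a square root of A
assert np.allclose(gA, funm(A, np.sqrt))   # agrees with SciPy's matrix function
```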

The spectral representation can also be used (in either case).

$$ g(A) = \sum_{i=1}^{k} G_i\, g(\lambda_i) , \qquad (113) $$

where $\lambda_1, \ldots, \lambda_k$ are the distinct eigenvalues. For matrices that are Jordan block diagonalizable but not in D_m the situation is somewhat more complicated:

$$ g(A) = P \begin{pmatrix} g(J(\lambda_1)) & & \\ & \ddots & \\ & & g(J(\lambda_k)) \end{pmatrix} P^{-1} , \qquad (114) $$

where

$$ g(J(\lambda_i)) = \begin{pmatrix} g(\lambda_i) & g'(\lambda_i) & g''(\lambda_i)/2! & \cdots & g^{(l-1)}(\lambda_i)/(l-1)! \\ & g(\lambda_i) & g'(\lambda_i) & \cdots & g^{(l-2)}(\lambda_i)/(l-2)! \\ & & \ddots & \ddots & \vdots \\ & & & g(\lambda_i) & g'(\lambda_i) \\ & & & & g(\lambda_i) \end{pmatrix} . \qquad (115) $$

Here $g(J(\lambda_i))$ is an l × l upper triangular matrix. Note that for this case the function must be at least (l − 1)-times differentiable at $\lambda_i$. For further details see [15].
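The upper triangular Toeplitz structure of eq. (115) can be verified for a single Jordan block with g = exp, where every entry is computable in closed form (the block size l = 4 and the eigenvalue below are arbitrary choices):

```python
import numpy as np
from math import exp, factorial
from scipy.linalg import expm

# g(J(lam)) for one l x l Jordan block, eq. (115), with g = exp.
lam, l = 0.5, 4
J = lam * np.eye(l) + np.diag(np.ones(l - 1), 1)   # Jordan block J(lam)

# Entry (i, i+k) of g(J(lam)) is g^(k)(lam)/k!; for g = exp every derivative is exp(lam)
gJ = sum(exp(lam) / factorial(k) * np.diag(np.ones(l - k), k) for k in range(l))

assert np.allclose(gJ, expm(J))   # matches the matrix exponential of the block
```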


Using equations (4) and (5) of section 2 an analytic function can be defined (see page 49 of [17]) for a given f(x) that is suitably differentiable.

$$ g(\lambda) = {}_a D_x^{\lambda} f(x) = \begin{cases} \dfrac{1}{\Gamma(-\lambda)} \displaystyle\int_a^x \dfrac{f(\xi)\, d\xi}{(x-\xi)^{1+\lambda}} , & \operatorname{Re}(\lambda) < 0 , \\[2ex] \dfrac{\partial^n}{\partial x^n} \left( \dfrac{1}{\Gamma(n-\lambda)} \displaystyle\int_a^x \dfrac{f(\xi)\, d\xi}{(x-\xi)^{\lambda-n+1}} \right) , & \operatorname{Re}(\lambda) \ge 0 ,\ n > \operatorname{Re}(\lambda)\ (n \text{ whole}) . \end{cases} \qquad (116) $$
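For orders with Re(λ) < 0 the first branch of eq. (116) can be evaluated by direct quadrature. A sketch under the assumptions a = 0 and f(x) = x (our choices), checked against the standard RL power rule ${}_0D_x^{\lambda} x = \Gamma(2)/\Gamma(2-\lambda)\, x^{1-\lambda}$:

```python
from scipy.special import gamma
from scipy.integrate import quad

# Fractional-integral branch of eq. (116), Re(lambda) < 0, with a = 0.
def rl_differintegral(f, lam, x, a=0.0):
    val, _ = quad(lambda xi: f(xi) / (x - xi) ** (1 + lam), a, x)
    return val / gamma(-lam)

lam, x = -0.5, 2.0                                   # a half-order integral
numeric = rl_differintegral(lambda t: t, lam, x)     # 0Dx^(-1/2) of f(x) = x
closed = gamma(2) / gamma(2 - lam) * x ** (1 - lam)  # known power rule
assert abs(numeric - closed) < 1e-6
```

The integrand has an integrable endpoint singularity at ξ = x, which `quad` handles accurately for this range of orders.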

Let A ∈ N_m; then the differintegral of order A is given by

$$ {}_a D_x^{A} = U \begin{pmatrix} {}_a D_x^{\lambda_1} & & \\ & \ddots & \\ & & {}_a D_x^{\lambda_m} \end{pmatrix} U^{*} . \qquad (117) $$

If A is real and normal then

$$ {}_a D_x^{A} = O \begin{pmatrix} {}_a D_x^{\lambda_1} & & \\ & \ddots & \\ & & {}_a D_x^{\lambda_m} \end{pmatrix} O^{T} . \qquad (118) $$

If A ∈ D_m then

$$ {}_a D_x^{A} = P \begin{pmatrix} {}_a D_x^{\lambda_1} & & \\ & \ddots & \\ & & {}_a D_x^{\lambda_m} \end{pmatrix} P^{-1} . \qquad (119) $$

Any of the above three cases, equations (117), (118), or (119), can be expressed using the spectral theorem.

$$ {}_a D_x^{A} = \sum_{i=1}^{k} G_i\, {}_a D_x^{\lambda_i} , \qquad (120) $$

where the $G_i$ are the matrices given by the spectral theorem and the sum is over the unrepeated eigenvalues.
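A numerical sketch of eq. (120) under the assumptions a = 0 and f(x) = x, so each eigenvalue-order differintegral has the closed form ${}_0D_x^{\lambda} x = x^{1-\lambda}/\Gamma(2-\lambda)$; the diagonal A below is an arbitrary example:

```python
import numpy as np
from scipy.special import gamma

def scalar_D(lam, x):
    """0Dx^lam applied to f(x) = x (standard RL power rule)."""
    return gamma(2) / gamma(2 - lam) * x ** (1 - lam)

A = np.diag([0.5, 0.25])          # arbitrary diagonalizable example
lam, P = np.linalg.eig(A)
Pinv = np.linalg.inv(P)
G = [np.outer(P[:, i], Pinv[i, :]) for i in range(len(lam))]

x = 1.5
DA_f = sum(g * scalar_D(l, x) for g, l in zip(G, lam))   # eq. (120) applied to f

# For diagonal A this is just diag(0Dx^0.5 x, 0Dx^0.25 x)
assert np.allclose(DA_f, np.diag([scalar_D(0.5, x), scalar_D(0.25, x)]))
```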

For matrices that are Jordan block diagonalizable but not diagonalizable some notation must be introduced. Recall equation (116).

$$ g(\lambda) = {}_a D_x^{\lambda} f(x) = \begin{cases} \dfrac{1}{\Gamma(-\lambda)} \displaystyle\int_a^x \dfrac{f(\xi)\, d\xi}{(x-\xi)^{1+\lambda}} , & \operatorname{Re}(\lambda) < 0 , \\[2ex] \dfrac{\partial^n}{\partial x^n} \left( \dfrac{1}{\Gamma(n-\lambda)} \displaystyle\int_a^x \dfrac{f(\xi)\, d\xi}{(x-\xi)^{\lambda-n+1}} \right) , & \operatorname{Re}(\lambda) \ge 0 ,\ n > \operatorname{Re}(\lambda)\ (n \text{ whole}) . \end{cases} \qquad (121) $$

Derivatives of g(λ) with respect to λ will appear in the expressions for the Jordan matrices:

$$ \frac{d^k}{d\lambda^k} g(\lambda) = {}_a^{k} D_x^{\lambda} f(x) = \begin{cases} \dfrac{d^k}{d\lambda^k} \left( \dfrac{1}{\Gamma(-\lambda)} \displaystyle\int_a^x \dfrac{f(\xi)\, d\xi}{(x-\xi)^{1+\lambda}} \right) , & \operatorname{Re}(\lambda) < 0 , \\[2ex] \dfrac{d^k}{d\lambda^k} \dfrac{\partial^n}{\partial x^n} \left( \dfrac{1}{\Gamma(n-\lambda)} \displaystyle\int_a^x \dfrac{f(\xi)\, d\xi}{(x-\xi)^{\lambda-n+1}} \right) , & \operatorname{Re}(\lambda) \ge 0 ,\ n > \operatorname{Re}(\lambda)\ (n \text{ whole}) . \end{cases} \qquad (122) $$

In equation (122) k is restricted to be a whole number. The ${}_a^{k} D_x^{\lambda}$ can be expressed in terms of ${}_a D_x^{\lambda}$. For example, if k = 1 the following is obtained.


$$ {}_a^{1} D_x^{\lambda} f(x) = \frac{d}{d\lambda}\, {}_a D_x^{\lambda} f(x) = \big( -\ln(-\lambda) + \ln(x) \big)\, {}_a D_x^{\lambda} f(x) - \sum_{j=1}^{\infty} \frac{{}_a D_x^{\lambda}\big( x^{j} f(x) \big)}{j\, x^{j}} , \quad \operatorname{Re}(\lambda) < 0 , \qquad (123) $$

$$ {}_a^{1} D_x^{\lambda} f(x) = \frac{\partial^n}{\partial x^n} \left[ \big( -\ln(n-\lambda) + \ln(x) \big)\, {}_a D_x^{\lambda-n} f(x) - \sum_{j=1}^{\infty} \frac{{}_a D_x^{\lambda-n}\big( x^{j} f(x) \big)}{j\, x^{j}} \right] , \quad \operatorname{Re}(\lambda) \ge 0 ,\ n > \operatorname{Re}(\lambda)\ (n \text{ whole}) . \qquad (124) $$

Similar expressions are available for the higher order derivatives of ${}_a D_x^{\lambda}$. With this notation matrix order differintegrals for A ∈ B_m are denoted by

$$ {}_a D_x^{A} = P \begin{pmatrix} {}_a D_x^{J(\lambda_1)} & & \\ & \ddots & \\ & & {}_a D_x^{J(\lambda_k)} \end{pmatrix} P^{-1} , \qquad (125) $$

where

$$ {}_a D_x^{J(\lambda_i)} = \begin{pmatrix} {}_a^{0} D_x^{\lambda_i} & {}_a^{1} D_x^{\lambda_i}/1! & \cdots & {}_a^{l-1} D_x^{\lambda_i}/(l-1)! \\ & {}_a^{0} D_x^{\lambda_i} & \cdots & {}_a^{l-2} D_x^{\lambda_i}/(l-2)! \\ & & \ddots & \vdots \\ & & & {}_a^{0} D_x^{\lambda_i} \end{pmatrix} . \qquad (126) $$

To determine some of the properties of matrix order differintegrals consider the composition of two matrix order differintegrals. Let A, B ∈ D_m with $A = P D P^{-1}$ and $B = Q E Q^{-1}$, where D and E are diagonal matrices with the eigenvalues of A and B as entries. Denote the eigenvalues of A by $\lambda_i$ and the eigenvalues of B by $\rho_i$. Then,


$$ {}_a D_x^{A}\, {}_a D_x^{B} = P \begin{pmatrix} {}_a D_x^{\lambda_1} & & \\ & \ddots & \\ & & {}_a D_x^{\lambda_m} \end{pmatrix} P^{-1}\, Q \begin{pmatrix} {}_a D_x^{\rho_1} & & \\ & \ddots & \\ & & {}_a D_x^{\rho_m} \end{pmatrix} Q^{-1} . \qquad (127) $$

To simplify this, let $R = P^{-1} Q$ and denote the components of R by $R_{ij}$. Equation (127) now has the form,

$$ {}_a D_x^{A}\, {}_a D_x^{B} = P \left[ R_{ij}\, {}_a D_x^{\lambda_i}\, {}_a D_x^{\rho_j} \right] Q^{-1} . \qquad (128) $$

There is no sum over the repeated indices; they merely denote the components of the matrix between P and $Q^{-1}$. Alternatively, the spectral theorem can be used to obtain another representation for the right hand side of equation (127).

$$ {}_a D_x^{A}\, {}_a D_x^{B} = \left( \sum_{i=1}^{k} G_i\, {}_a D_x^{\lambda_i} \right) \left( \sum_{j=1}^{l} H_j\, {}_a D_x^{\rho_j} \right) = \sum_{i=1}^{k} \sum_{j=1}^{l} G_i H_j\, {}_a D_x^{\lambda_i}\, {}_a D_x^{\rho_j} , \qquad (129) $$

where $A = \sum_i G_i \lambda_i$ and $B = \sum_j H_j \rho_j$.

Equation (4) carries over to the matrix order case.

$$ \frac{\partial^m}{\partial x^m}\, {}_a D_x^{A} f(x) = \frac{\partial^m}{\partial x^m} \sum_{i=1}^{k} G_i\, {}_a D_x^{\lambda_i} f(x) \qquad (130) $$

$$ = \sum_{i=1}^{k} G_i\, \frac{\partial^m}{\partial x^m}\, {}_a D_x^{\lambda_i} f(x) \qquad (131) $$

$$ = \sum_{i=1}^{k} G_i\, {}_a D_x^{\lambda_i + m} f(x) \qquad (132) $$

$$ = {}_a D_x^{A + mI} f(x) . \qquad (133) $$

Here I is the identity matrix, m is a whole number, and f(x) may be a scalar-, vector-, or matrix-valued function.
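Equations (130)-(133) rest on the scalar identity $(d/dx)^m\, {}_a D_x^{\lambda} = {}_a D_x^{\lambda+m}$; a quick numerical sketch on f(x) = x with a = 0 (the order and evaluation point below are arbitrary choices):

```python
from scipy.special import gamma

def D(lam, x):
    """0Dx^lam applied to f(x) = x (standard RL power rule)."""
    return x ** (1 - lam) / gamma(2 - lam)

lam, x, h = 0.5, 2.0, 1e-6
lhs = (D(lam, x + h) - D(lam, x - h)) / (2 * h)   # d/dx 0Dx^0.5 x, central difference
rhs = D(lam + 1, x)                               # 0Dx^1.5 x
assert abs(lhs - rhs) < 1e-6
```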

Equation (7) carries over in the following sense. Let A be a matrix such that Re($\lambda_i$) ≥ 0. Now consider equation (127) with B replaced by −A.

$$ {}_a D_x^{A}\, {}_a D_x^{-A} = P \begin{pmatrix} {}_a D_x^{\lambda_1} & & \\ & \ddots & \\ & & {}_a D_x^{\lambda_m} \end{pmatrix} \begin{pmatrix} {}_a D_x^{-\lambda_1} & & \\ & \ddots & \\ & & {}_a D_x^{-\lambda_m} \end{pmatrix} P^{-1} \qquad (134) $$

$$ = P \begin{pmatrix} {}_a D_x^{\lambda_1}\, {}_a D_x^{-\lambda_1} & & \\ & \ddots & \\ & & {}_a D_x^{\lambda_m}\, {}_a D_x^{-\lambda_m} \end{pmatrix} P^{-1} , \qquad (135) $$

which, by equation (7), reduces to the identity operator.

Suppose that A and B commute (and are diagonalizable); then A and B can be diagonalized by the same matrix, say P. Equation (127) is now,

$$ {}_a D_x^{A}\, {}_a D_x^{B} = P \begin{pmatrix} {}_a D_x^{\lambda_1}\, {}_a D_x^{\rho_1} & & \\ & \ddots & \\ & & {}_a D_x^{\lambda_m}\, {}_a D_x^{\rho_m} \end{pmatrix} P^{-1} . \qquad (136) $$

A further simplification of equation (136) will occur using equations (5), (7), (8), (9), or (10) as the signs of the eigenvalues dictate. For example, if the eigenvalues of A and B are such that Re( λi ) ≤ 0 and Re( ρ i ) ≤ 0 then equation (136) becomes,

$$ {}_a D_x^{A}\, {}_a D_x^{B} = P \begin{pmatrix} {}_a D_x^{\lambda_1 + \rho_1} & & \\ & \ddots & \\ & & {}_a D_x^{\lambda_m + \rho_m} \end{pmatrix} P^{-1} = {}_a D_x^{A+B} . \qquad (137) $$
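The eigenvalue-wise additivity behind eq. (137) can be checked on f(x) = x with the RL power rule ${}_0D_x^{\lambda} x^{p} = \Gamma(p+1)/\Gamma(p+1-\lambda)\, x^{p-\lambda}$ (the two negative orders below are arbitrary choices; for non-positive real parts the composition of fractional integrals adds the orders):

```python
from scipy.special import gamma

def D_power(lam, p, x):
    """0Dx^lam applied to x^p (standard RL power rule)."""
    return gamma(p + 1) / gamma(p + 1 - lam) * x ** (p - lam)

a_ord, b_ord, x = -0.3, -0.2, 2.0
# 0Dx^b x = c * x^(1-b) with c = Gamma(2)/Gamma(2-b); then apply 0Dx^a to x^(1-b)
c = gamma(2) / gamma(2 - b_ord)
lhs = c * D_power(a_ord, 1.0 - b_ord, x)   # 0Dx^a ( 0Dx^b x )
rhs = D_power(a_ord + b_ord, 1.0, x)       # 0Dx^(a+b) x
assert abs(lhs - rhs) < 1e-12
```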

If the eigenvalues of matrices A and B are such that Re($\lambda_i$) ≤ 0 and Re($\rho_i$) ≤ 0 but A and B do not commute, then equation (127) is,

$$ {}_a D_x^{A}\, {}_a D_x^{B} = P \left[ R_{ij}\, {}_a D_x^{\lambda_i + \rho_j} \right] Q^{-1} . \qquad (138) $$

Or, using the spectral theorem,

$$ {}_a D_x^{A}\, {}_a D_x^{B} = \sum_{i=1}^{k} \sum_{j=1}^{l} G_i H_j\, {}_a D_x^{\lambda_i + \rho_j} . \qquad (139) $$

Let A, B ∈ N_m with $A = P D P^{T}$ and $B = Q E Q^{T}$, where D and E are diagonal. Denote the eigenvalues of A by $\lambda_i$ and the eigenvalues of B by $\rho_i$. Now consider the transpose of equation (127).

$$ \left( {}_a D_x^{A}\, {}_a D_x^{B} \right)^{T} = Q \begin{pmatrix} {}_a D_x^{\rho_1} & & 0 \\ & \ddots & \\ 0 & & {}_a D_x^{\rho_m} \end{pmatrix} Q^{T}\, P \begin{pmatrix} {}_a D_x^{\lambda_1} & & 0 \\ & \ddots & \\ 0 & & {}_a D_x^{\lambda_m} \end{pmatrix} P^{T} \qquad (140) $$

$$ = {}_a D_x^{B}\, {}_a D_x^{A} . \qquad (141) $$

As a final result consider the determinant of equation (119).

$$ \det\!\left( {}_a D_x^{A} \right) = \det(P) \left( \prod_{i=1}^{m} {}_a D_x^{\lambda_i} \right) \det(P^{-1}) = \prod_{i=1}^{m} {}_a D_x^{\lambda_i} . \qquad (142) $$

The right hand side of equation (142) is a sequential fractional derivative (see e.g. [16] and [17]). Thus a sequential fractional derivative may be viewed as the determinant of a matrix order fractional derivative. Additionally, if the eigenvalues of A are such that Re($\lambda_i$) ≤ 0 then equation (142) can be simplified to give,

$$ \det\!\left( {}_a D_x^{A} \right) = {}_a D_x^{\operatorname{Tr}(A)} , \qquad (143) $$

where Tr(A) is the trace of the matrix A, i.e. the sum of the eigenvalues.
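For Re($\lambda_i$) ≤ 0 the sequential product in eq. (142) telescopes to a single differintegral of order Tr(A), which is eq. (143). A sketch on f(x) = x with a = 0 (the eigenvalues below stand in for those of a hypothetical A and are arbitrary choices):

```python
from scipy.special import gamma

def D_power(lam, p, x):
    """0Dx^lam applied to x^p (standard RL power rule)."""
    return gamma(p + 1) / gamma(p + 1 - lam) * x ** (p - lam)

lams = [-0.4, -0.25, -0.1]    # eigenvalues of a hypothetical A, all with Re <= 0
x = 1.7

# Apply the eigenvalue-order operators sequentially to f(x) = x;
# each step maps a monomial to a monomial, so the coefficients telescope.
p, coef = 1.0, 1.0
for lam in lams:
    coef *= gamma(p + 1) / gamma(p + 1 - lam)
    p -= lam
lhs = coef * x ** p
rhs = D_power(sum(lams), 1.0, x)   # 0Dx^{Tr(A)} x
assert abs(lhs - rhs) < 1e-12
```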

REFERENCES

1. Anh, V.V. and Nguyen, C.N.: Semimartingale representation of fractional Riesz-Bessel motion. Finance Stoch., 5, 1, 83-101, (2001).

2. Bishop, R. L. and Goldberg, S. I.: Tensor Analysis on Manifolds. Dover Publications, 1980.

3. Chiappori, P.A. and Ekeland, I.: Aggregation and Market Demand: An Exterior Differential Calculus Viewpoint. Econometrica, 67, 6, Nov. (1999).


4. Cottrill-Shepherd, K. and Naber, M.: Fractional Differential Forms. J. Math. Phys. 42, 5, 2203–2212, May (2001).

5. Engheta, N.: Fractional Duality in Electromagnetic Theory. In: Proceedings of the URSI International Symposium on Electromagnetic Theory. Thessaloniki, Greece, 1998.

6. Engheta, N.: Fractional Curl Operator in Electromagnetics. Microwave and Optical Technology Letters, 17, 2, February 5 (1998).

7. Engheta, N.: On Fractional Calculus and Fractional Multipoles in Electromagnetism. IEEE Transactions on Antennas and Propagation, 44, 4, April (1996).

8. Engheta, N.: On the Role of Fractional Calculus in Electromagnetic Theory. IEEE Antennas and Propagation, 39, 4, August (1997).

9. Flanders, H.: Differential Forms with Applications to the Physical Sciences. Dover, 1989.

10. Kobelev, L. Ya.: The Theory of Gravitation in the Space – Time with Fractal Dimensions and Modified Lorentz Transformations. arXiv:physics/0006029 v1 10 Jun (2000).

11. Kobelev, L. Ya.: Multifractality of time and space, covariant derivatives and gauge invariance. arXiv:hep-th/0002005 v1 1 Feb (2000).

12. Kobelev, L. Ya.: Maxwell equation, Schrodinger equation, Dirac equation, Einstein equation defined on multifractal sets of time and space. arXiv:gr-qc/0002003 1 Feb (2000).


13. Love, E. R.: Fractional Derivatives of Imaginary Order. J. London Math. Soc. 2, 3, 241–259 (1971).

14. Lovelock, D. and Rund, H.: Tensors, Differential Forms, and Variational Principles. Dover Publications Inc. New York, 1989.

15. Meyer, C. D.: Matrix Analysis and Applied Linear Algebra. SIAM, 2000.

16. Miller, K. S. and Ross, B.: An Introduction to the Fractional Calculus and Fractional Differential Equations. John Wiley & Sons, 1993.

17. Oldham, K.B. and Spanier, J.: The Fractional Calculus. Academic Press, 1974.

18. Phillips, P.C.: Fractional Matrix Calculus and the Distribution of Multivariate Tests. Cowles Foundation paper 664, (1989).

19. Phillips, P.C.: The Exact Distribution of the SUR Estimator. Econometrica, 53, 4, 745–756, July (1985).

20. Phillips, P.C.: The Exact Distribution of the Wald Statistic. Econometrica, 54, 4, 881–895, July (1986).

21. Podlubny, I.: Fractional Differential Equations. Academic Press, 1999.

22. Sottinen, T.: Fractional Brownian motion, random walks and binary market models. Finance Stoch., 5, 3, 343-355, (2001).


23. Stampfli, J. and Goodman, V.: The Mathematics of Finance: Modeling and Hedging. Brooks/Cole 2001.

24. Willinger, W., Taqqu, M. S., and Teverovsky, V.: Stock market prices and long-range dependence. Finance Stoch., 3, 1, 1-13, (1999).

25. Wyss, W.: The Fractional Black-Scholes Equation. Fract. Calc. Appl. Anal., 3, 1, 51-61 (2000).
