2016 Conference on Complex Systems, Beurs Van Berlage, Amsterdam, 19-22 September 2016

Modeling Complex Systems with Differential Equations of Time Dependent Order

Andrei Ludu

Dept. Mathematics & Wave Lab, Embry-Riddle Aeronautical University, Daytona Beach, Florida

I. Introduction and motivation:

• When modeling nature, one usually tries to match the real subject to a certain mathematical model.
• The goal is to find a convenient, conventional mathematical model that enables understanding of the subject (in that mathematical frame) and, hopefully, prediction of its future behavior from the model's properties.
• The procedure is to simplify the real subject in order to match the mathematics.
• In general, mathematics has come halfway by inventing new structures (the Dirac delta, small worlds, etc.).
• How about seriously bending the mathematics?
• This talk is about such a possible new direction, and its goal is to find out whether this direction is indeed new.

Examples of bending the math:

• A wavelet formalism for sushi: European dish, contractions, translations, summation, sushi.
• The Peripatetic axiom, empiricism, humans' best senses, and a possible "dog's mathematics" based on olfaction.

A. Ludu, Boundaries of a Complex World (Springer, Heidelberg, 2016)

$\dfrac{\partial^{n} x}{\partial t^{n}}$

We have four objects: $t$, $x$, $x(t)$, and $n$.

• $t \in T$ = a topological poset.
• $x \in X$ = a space of well defined entities endowed with some intrinsic geometric and some categorical algebraic structure, such that we can define local Lie groups of transformations $G$ acting on $X$. The next step is to build a fiber bundle with base $T$, fiber $X$, and structure group $G$.
• $\alpha \in A$ = traditionally a countable set, entering through

$\dfrac{d^{\alpha} x}{dt^{\alpha}}$

From the points of view of $t$ and $x(t)$ there are quite a lot of types of derivatives:

• exterior: $d\omega$
• differential: $dF$
• action of a vector field: $dF(v)$, $v(\omega)$
• directional / covariant: $D_v(F)$, $\nabla_v \omega$
• covariant exterior: $D\omega$
• partial: $\partial/\partial x$
• pseudo-differential (inverse): $D^{-1}$
• absolute, fractional, ...
• Lie: $\mathcal{L}_v \omega = \nabla_v \omega - \nabla_\omega v = [v, \omega]$

From $n$'s point of view there are only two types of derivatives:
• $n$ integer (positive for derivatives, negative for integrals);
• $n = 1$ for the exterior derivative ($d^{2} = 0$).

A hint and motivation:

• A physical property, the intensity of a source, interactions, etc. are modeled with the help of a function.
• In the effort to find solutions one tries to simplify.
• This includes expanding the function in series, keeping only the lower orders, linearizing, applying stability and perturbation analysis, and eventually weakly nonlinear perturbations.
• In terms of derivatives, we actually begin by listing the lower orders, except that we do not have the function which generates them.

Beyond fractional:

• $n \in \mathbf{Z}$: integer order, $\dfrac{\partial^{n} x}{\partial t^{n}}$
• $\alpha \in \mathbf{R}$: fractional order, $\dfrac{\partial^{\alpha} x}{\partial t^{\alpha}}$
• $\alpha(t) \in C^{\infty}$: time dependent order, $\dfrac{\partial^{\alpha(t)} x}{\partial t^{\alpha(t)}}$
• order as a dynamical variable: $H(q_1, q_2, p_1, p_2)$, $\dfrac{\partial^{q_1} q_2}{\partial t^{q_1}}$

So how about this challenge: $\dfrac{\partial^{t} x}{\partial t^{t}}$, where the order of differentiation is the time itself?

Possible applications:

• Some complex systems change their type of dynamics with time.
• Accelerating type of change.
• Anomalous fluctuations and transitions from anomalous to normal.

Goal: a different mathematical approach:

• Differential equations with variable order of differentiation: VODE.
• The order of differentiation is itself an independent variable:

$\dfrac{d^{\xi} f(t, \xi)}{dt^{\xi}} = L(t, f, f', \ldots)$

Examples of complex systems (from technology and business) with an "accelerated change" type of time evolution:

• Moore's law (1965): exponential growth; the number of transistors in a dense integrated circuit doubles approximately every two years: $N_{\mathrm{components}} \sim 2^{t/T_{\mathrm{transist.}}}$
• Rock's law (2003): the cost of a semiconductor chip fabrication plant doubles every four years: $\mathrm{Cost} \sim 4^{t/T_{c}}$
• Every new year allowed humans to carry out roughly 60% of the computations that could possibly have been executed by all existing general-purpose computers before that year: $\mathrm{Computation} \sim a^{t/T_{\mathrm{comp}}}$
• Kryder's law (2005): hard disk drive areal density increases exponentially.
• Nielsen's law (1994): the bandwidth available to users increases by 50% annually.

Examples of complex systems with time evolution from epistemology and technology: the Technological Singularity
• In "futures studies" and the "history of technology", accelerating change is a perceived increase in the rate of technological change throughout history.
• Stanislaw Ulam (1958), recalling a conversation with von Neumann, about the "ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue."
• It may be accompanied by equally profound social and cultural change.
• Good (1965): more capable machines could design machines of greater capability. These iterations of recursive self-improvement could accelerate, potentially allowing enormous qualitative change before any upper limits imposed by the laws of physics or theoretical computation set in.

Examples (from science and technology) of complex systems with accelerating time evolution

• Densification of graphs: $E(t) \sim N(t)^{a}$; the number of edges grows faster than the number of nodes (citations vs. papers, citations vs. patents, autonomous systems such as the internet, authors linked to their publications).
• In the Toda oscillator model of self-pulsation, the logarithm of the amplitude varies exponentially with time (for large amplitudes), so the amplitude varies as a doubly exponential function of time.

Carlson's law (2006): human innovation has an organic-evolution-based dynamics. Computer power grows exponentially (Kurzweil, 2000).

Neuronal phenomena have critical dynamics characterized by different power-law scaling exponents for different time scales.

Examples of complex systems with time evolution from life and social systems:
• Von Foerster (1960), the Doomsday equation.
• Sergey Kapitsa (1965): $N = C \cot^{-1}\dfrac{t_{0} - t}{T}$
• Hyperbolic growth models for population.
• Life is the most complex diversity, spanning 21 orders of magnitude in size and obeying empirical power laws (mass vs. metabolic rate, lifespan vs. heart rate, the 3/4 law, allometry, etc.).
• Gurevich and Varfolomeyev (2001): a double exponential law for population growth, $N(t) = 375.6 \cdot 1.00185^{\,1.0073^{\,t-1000}}$ million, with $t$ in years.

It appears that in the real world rapid growth is a transition from exponential behavior to criticality. Many dynamical laws have exponential behavior over the long range, but critical phenomena are always faster in the neighborhood of the critical point (hyperbolic laws):

[Comparison of growth rates: $x^{n}$, $e^{x}$, $e^{x^{n}}$, $e^{e^{x}}$ versus the hyperbolic $\dfrac{1}{2 - x}$ near its critical point.]
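As a minimal numerical illustration of this point (ours, not from the talk; the critical time and prefactors are arbitrary choices), one can tabulate an exponential law against a hyperbolic law that diverges at a finite critical time:

```python
import numpy as np

# Compare exponential growth with hyperbolic (finite-time singular) growth.
# The critical time t_c = 2 and all constants are arbitrary, illustrative choices.
t = np.linspace(0.0, 1.95, 8)
exponential = np.exp(t)        # e^t: grows without bound but never diverges
hyperbolic = 1.0 / (2.0 - t)   # 1/(2 - t): diverges at the critical point t_c = 2

for ti, ei, hi in zip(t, exponential, hyperbolic):
    print(f"t = {ti:5.2f}   e^t = {ei:8.2f}   1/(2-t) = {hi:8.2f}")
# However large the exponential rate is on a fixed interval, the hyperbolic law
# always overtakes it sufficiently close to the critical point.
```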

II. Present and/or Simplifying Approaches

• Source, potential, boundary, etc. terms with parametric dependence. Limited changes. Losing linearity, hence losing spectral tools and linear stability analysis.
• Not always justifiable. Not actually geometric changes.
• Gluing solutions. Artificial.
• Special function solutions with parameter dependence: dividing zones (space ranges, time scales, adiabatic decoupling, etc.).
• Balancing/weighting terms (the telegrapher's equation).

There is no fastest growing function. If $f(x)$ were the fastest growing function, then:

• $f(f(x))$ is a faster growing function;
• $f(x)^{2}$ grows much more quickly than $f(x)$;
• iterated exponentials $e^{e^{\cdot^{\cdot^{x}}}}$ are very fast; they satisfy $f(x + 1) = f(x)^{e} > f(x)^{2}$;
• the "busy beaver" function grows more quickly than any function that can in principle be computed; it is incomputable, like the Kolmogorov complexity.

• Tetration (hyper-4) is a more rapid operation than exponentiation (not combined in an associative way!): $^{n}a = a^{a^{\cdot^{\cdot^{a}}}}$ ($n$ times).
• $\exp_{a}^{n}(x) = a^{a^{\cdot^{\cdot^{a^{x}}}}}$ ($n$ times) is the notation for the iterated exponential.
• Extension of $^{x}a$ to non-integer heights: $^{x}a = x + 1$ for $x \in (-1, 0]$, $^{x}a = a^{x}$ for $x \in (0, 1]$, $^{x}a = a^{a^{(x-1)}}$ for $x \in (1, 2]$, etc.
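A small sketch (ours; the function name `tetrate` is an assumption, not standard library code) of integer-height tetration, showing how quickly it outruns exponentiation:

```python
def tetrate(a: float, n: int) -> float:
    """Integer tetration ^n a = a^(a^(...^a)) with n copies of a."""
    result = 1.0
    for _ in range(n):
        result = a ** result   # build the power tower from the top down (right-associative)
    return result

# ^1 2 = 2, ^2 2 = 4, ^3 2 = 16, ^4 2 = 65536, while plain exponentiation gives only 2^4 = 16.
for n in range(1, 5):
    print(n, tetrate(2.0, n))
```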

What type of dynamics results in solutions that can match/model this behavior, i.e. the transition from exponential or power-law growth in the initial stages to hyperbolic growth or criticality in the final stages?

Model I: a dynamical ODE system with a time dependent power law in the source function. For example:

$y^{(n)}(t) = a\, y^{\alpha(t)}$

• It can simulate any type of fast growing function.
• It can change the behavior from exponential to hyperbolic growth: $y'(t) = a\, y(t)$ (exponential) vs. $y'(t) = a\, y(t)^{2}$ (hyperbolic).
• Comment: in complex systems the nonlinear behavior is usually stable and follows (compact) patterns when several terms balance, as in nonlinear systems where dispersion balances nonlinearity.

Constant exponent approximation:

$y' = y^{\alpha_{0}}, \quad y(0) = y_{0}, \quad \exists M > 0,\ \forall t \in \mathbf{I} \subset \mathbf{R},\ |y(t)| \le M.$

For $\alpha_{0} = 1$ the solution is the exponential $y(t, 1) = y_{0} e^{t}$; for $\alpha_{0} > 1$ the solution blows up at $t = \dfrac{y_{0}^{1 - \alpha_{0}}}{\alpha_{0} - 1}$.

[Figure: glued solutions for different constant exponents $\alpha_{0} = 0.4,\ 0.8,\ 1.6,\ 2.3$.]
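A minimal numerical sketch (ours, using SciPy; the constants are arbitrary) of the constant exponent case, checking the blow-up time quoted above for $\alpha_{0} > 1$:

```python
from scipy.integrate import solve_ivp

alpha0, y0 = 2.3, 1.5
t_blowup = y0 ** (1.0 - alpha0) / (alpha0 - 1.0)   # predicted blow-up time for alpha0 > 1

# Integrate y' = y**alpha0 up to just short of the predicted singularity.
sol = solve_ivp(lambda t, y: y ** alpha0, (0.0, 0.999 * t_blowup), [y0],
                rtol=1e-10, atol=1e-12)

print("predicted blow-up time:", t_blowup)
print("y just before blow-up :", sol.y[0, -1])   # grows very large as t -> t_blowup
```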

Dynamical ODE system with a time dependent power law in the source function:

$y' = f(t, y) = y^{\alpha(t)}, \quad y(0) = y_{0}, \quad \alpha(t) \in C^{0}(\mathbf{R}), \quad \exists M > 0,\ \forall t \in \mathbf{I} \subset \mathbf{R},\ |y(t)| \le M.$

We can also introduce new variables $(y, w)$ in which the roles of $t$ and $f$ are reversed (write $t = w(y)$ for $y = f(t)$):

$\dfrac{df}{dt} = \dfrac{1}{\dfrac{dw}{dy}}, \qquad \dfrac{d^{2} f}{dt^{2}} = -\dfrac{\dfrac{d^{2} w}{dy^{2}}}{\left(\dfrac{dw}{dy}\right)^{3}}, \qquad z = \dfrac{dw}{dy},$

so that (for the oscillator $d^{2}f/dt^{2} = -f$, i.e. $-w''/(w')^{3} = -y$) one integration gives

$-\dfrac{1}{2 z^{2}} = \dfrac{y^{2}}{2} + C, \qquad \left(\dfrac{df}{dt}\right)^{2} + f^{2} = C.$

Of course, there are ways to perform periodic↔aperiodic, compact ↔noncompact smooth transitions by using special functions (like sn into sech and sin). The question is which one carries more physics.

Model II: VODE with time dependent order of differentiation

• It can model any type of fast growing function.
• It can change the behavior from exponential to hyperbolic growth.
• The change of the order of differentiation can be justified by physical arguments:
  • the order of differentiation controls the "type of law";
  • the order of differentiation controls the dependence on history.

Why is such a "variable order of differentiation" equation important? It may give insight into the evolution of complex systems:

• Detection of abnormally accelerated behavior and evolution.
• Modeling adapting systems with intrinsic adjustment of memory allocation, learning, energy resources, strategy.
• Evolution sampling/factorizing: many real world systems grow too large and too fast to deal with.

Higher order derivatives, finite differences, and history dependence:

$y^{(n)}(x) = \lim_{h \to 0} \dfrac{1}{h^{n}} \sum_{k=0}^{n} (-1)^{k} \binom{n}{k}\, y(x - kh)$   (backward $n$-th order finite difference)

Finite difference equations of different orders (1st order, 4th order):
• Constant order finite differences are linearly history dependent.
• Variable (increasing) order: the dynamics is still history dependent, but the amount of past taken into account increases. Example: human civilization has three types of memory (genetic, neuronal, and external), involving different ranges of history dependence and different time scales.
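A quick numerical sketch (ours; the function name and test function are assumptions) of the backward finite difference formula above; the stencil makes explicit that a higher order $n$ reaches further back into the history of $y$:

```python
import math

def backward_nth_derivative(y, x, n, h=1e-3):
    """Approximate y^(n)(x) with the backward finite difference of order n.

    The stencil uses the n+1 past samples y(x), y(x-h), ..., y(x-n*h),
    so increasing n means using more of the history of y.
    """
    total = sum((-1) ** k * math.comb(n, k) * y(x - k * h) for k in range(n + 1))
    return total / h ** n

# Test on y = exp, whose derivative of every order equals exp itself.
for n in (1, 2, 3):
    print(n, backward_nth_derivative(math.exp, 1.0, n), math.exp(1.0))
```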

How does it look? Glued solutions of increasing order of the derivative:

$y' = y, \qquad y'' = y, \qquad y''' = y$

Gluing solutions of linear, similar ODEs of increasing order $n$:

$\dfrac{d^{n} y(t, a, n)}{dt^{n}} = a\, y(t, a, n)$

with matching conditions at the gluing points

$y^{(j)}(nT, a, n) = y^{(j)}(nT, a, n - 1), \qquad j = 0, 1, \ldots, n - 1.$

[Figure: glued solutions for $n = 1, 2, 3$; a higher order involves a faster variation.]
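A short sketch (ours; the values of $a$, $T$ and the initial condition are arbitrary) of the gluing construction above, integrating $d^{n}y/dt^{n} = a\,y$ segment by segment and matching the lower derivatives at the junctions:

```python
import numpy as np
from scipy.integrate import solve_ivp

a, T, y0 = 1.0, 1.0, 1.0   # illustrative constants

def segment(n, t0, t1, init):
    """Integrate d^n y/dt^n = a*y on [t0, t1], written as a first-order system."""
    def rhs(t, u):                       # u = (y, y', ..., y^(n-1))
        return np.append(u[1:], a * u[0])
    return solve_ivp(rhs, (t0, t1), init, rtol=1e-9)

# n = 1 on [0, T]
s1 = segment(1, 0.0, T, [y0])
# n = 2 on [T, 2T]: match y and y' (for the n = 1 segment, y' = a*y)
yT = s1.y[0, -1]
s2 = segment(2, T, 2 * T, [yT, a * yT])
# n = 3 on [2T, 3T]: match y, y', y'' (for the n = 2 segment, y'' = a*y)
y2T, dy2T = s2.y[0, -1], s2.y[1, -1]
s3 = segment(3, 2 * T, 3 * T, [y2T, dy2T, a * y2T])

print(s1.y[0, -1], s2.y[0, -1], s3.y[0, -1])   # values at the gluing points T, 2T, 3T
```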

• How can we change continuously the order of differentiation? • Can such an equation provide the transition between different degrees of accelerating evolution? • If it’s possible, do we have existence and uniqueness of solutions? • Changing order involves changing the dimension of the initial conditions manifold. How can these be done simultaneously?

• What is the best mathematical approach for ODE with time dependent (variable) order of differentiation? • Can we have the order of differentiation be a variable, independent coordinate itself? • If yes, are there conservation laws, stability criteria for such new systems?

IV. Fractional Derivative Approach:

• Differential equations with variable order of differentiation.
• The order of differentiation is itself an independent variable.
• A possibility to implement such equations would be fractional derivative ODEs.

$D e^{ax} = \dfrac{d}{dx} e^{ax} = a e^{ax}, \qquad D^{n} e^{ax} = a^{n} e^{ax},$

so can we take $D^{\alpha} e^{ax} = a^{\alpha} e^{ax}$, e.g. $D^{1/2} e^{ax} = a^{1/2} e^{ax}$, for $\alpha \in \mathbf{Q}, \mathbf{R}, \ldots, \mathbf{C}$?

The meaning of integer order of differentiation:

$e^{ax} = D D^{-1} e^{ax} \;\Rightarrow\; D^{-1} e^{ax} = \dfrac{1}{a} e^{ax} = \int e^{ax}\, dx$, in a formal way.

Generalizing:

$D^{-2} e^{ax} = \iint e^{ax}\, dx\, dx, \qquad \ldots, \qquad D^{-n} e^{ax} = \underbrace{\int \cdots \int}_{n} e^{ax}\, (dx)^{n}$   (the $n$-th iterated integral).

If we generalize to rational order, we have technical questions:
1. Is this operator linear?
2. Does it obey a composition law (is it closed)?
3. Is it correct to use antiderivatives?
4. What classes of functions can be fractionally differentiated like this?

More actions, on power functions:

$D^{n} x^{p} = \dfrac{p!}{(p - n)!}\, x^{p - n} \qquad\Rightarrow\qquad D^{\alpha} x^{p} = \dfrac{\Gamma(p + 1)}{\Gamma(p - \alpha + 1)}\, x^{p - \alpha}$
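A tiny sketch (ours; the helper name is an assumption) of the power rule above, checking that applying the half-derivative twice reproduces the ordinary derivative $Dx = 1$:

```python
from scipy.special import gamma

def frac_deriv_power(p: float, alpha: float):
    """Return (coefficient, exponent) of D^alpha x^p = Gamma(p+1)/Gamma(p-alpha+1) * x^(p-alpha)."""
    return gamma(p + 1) / gamma(p - alpha + 1), p - alpha

# Half-derivative of x: D^(1/2) x = (2/sqrt(pi)) * x^(1/2)
c1, e1 = frac_deriv_power(1.0, 0.5)
print(c1, e1)              # ~1.1284 (= 2/sqrt(pi)), 0.5

# Applying the half-derivative again: D^(1/2) D^(1/2) x = 1
c2, e2 = frac_deriv_power(e1, 0.5)
print(c1 * c2, e2)         # ~1.0, 0.0
```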

There are problems: applying this power rule term by term to the series of $e^{x}$ gives

$D^{\alpha} e^{x} = D^{\alpha} \sum_{n=0}^{\infty} \dfrac{x^{n}}{n!} = \sum_{n=0}^{\infty} \dfrac{\Gamma(n + 1)}{\Gamma(n - \alpha + 1)}\, \dfrac{x^{n - \alpha}}{n!},$

which does not agree with the exponential rule $D^{\alpha} e^{x} = e^{x}$ (unless $\alpha \in \mathbf{Z}$).

This defect can be repaired by using a geometrical insight for the fractional derivative:

$D^{-1} f(x) = \int_{0}^{x} f(t)\, dt$

$D^{-2} f(x) = \int_{0}^{x} \int_{0}^{t_{2}} f(t_{1})\, dt_{1}\, dt_{2}$

Changing the order of integration over the triangle $0 \le t_{1} \le t_{2} \le x$:

$D^{-2} f(x) = \int_{0}^{x} \int_{t_{1}}^{x} f(t_{1})\, dt_{2}\, dt_{1} = \int_{0}^{x} (x - t_{1})\, f(t_{1})\, dt_{1}$

Now we can define:

$D^{-2} f(x) = \int_{0}^{x} (x - t)\, f(t)\, dt$

and in general:

$D^{-n} f(x) = \dfrac{1}{(n - 1)!} \int_{0}^{x} (x - t)^{n - 1} f(t)\, dt$

and even more generally, the fractional integral:

$D^{\alpha} f(x) = \dfrac{1}{\Gamma(-\alpha)} \int_{0}^{x} \dfrac{f(t)\, dt}{(x - t)^{\alpha + 1}}, \qquad \alpha < -1.$
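A brief numerical sketch (ours, using SciPy; the constants are arbitrary) of this fractional integral of order $1/2$, written with $\mu = 1/2 > 0$ so that it corresponds to $D^{-\mu}$ in the notation above, and checked against the power rule quoted earlier:

```python
from scipy.integrate import quad
from scipy.special import gamma

def rl_integral(f, x, mu):
    """Riemann-Liouville fractional integral of order mu > 0:
       (1/Gamma(mu)) * integral_0^x f(t) (x - t)^(mu - 1) dt."""
    # weight='alg' with wvar=(0, mu-1) lets quad handle the weak (x-t)^(mu-1)
    # endpoint singularity inside the quadrature rule rather than in the integrand.
    val, _ = quad(f, 0.0, x, weight='alg', wvar=(0.0, mu - 1.0))
    return val / gamma(mu)

x, mu = 2.0, 0.5
numeric = rl_integral(lambda t: t, x, mu)                 # D^(-1/2) applied to f(x) = x
exact = gamma(2.0) / gamma(2.0 + mu) * x ** (1.0 + mu)    # power rule with p = 1, order -mu
print(numeric, exact)                                     # both ~ 2.128
```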

The fractional derivative is introduced for $\alpha \in (0, 1)$ by

$D^{\alpha} f(x) = D^{\alpha - m} D^{m} f(x) = \dfrac{d^{m}}{dx^{m}} D^{\alpha - m} f(x), \qquad m \ge 1, \quad \alpha - m < -1.$

One possible definition is the Liouville-Riemann fractional derivative:

$_{b}D_{x}^{\alpha} f(x) = \dfrac{1}{\Gamma(1 - \alpha)} \dfrac{d}{dx} \int_{b}^{x} \dfrac{f(t)\, dt}{(x - t)^{\alpha}}, \qquad \alpha \in (0, 1).$

We also have the Caputo fractional derivative:

$_{b}D_{x}^{\alpha} f(x) = \dfrac{1}{\Gamma(1 - \alpha)} \int_{b}^{x} \dfrac{f'(t)\, dt}{(x - t)^{\alpha}}, \qquad \alpha \in (0, 1).$

Higher order fractional derivatives:

$_{0}D_{x}^{\alpha} f(x) = D^{n}\; {_{0}D_{x}^{\alpha - n}} f(x), \qquad \alpha \in [n, n + 1).$

The Liouville-Riemann fractional derivative has a problem if $f(x) = K$ is constant:

$_{0}D_{x}^{\alpha} K = \dfrac{1}{\Gamma(1 - \alpha)} \dfrac{d}{dx} \int_{0}^{x} \dfrac{K\, dt}{(x - t)^{\alpha}} = \dfrac{K}{\Gamma(1 - \alpha)\, x^{\alpha}} \ne 0, \qquad \alpha \in (0, 1).$

Jumarie's definition of the fractional derivative (2005) repairs this (the derivative of a constant is zero):

$D^{\alpha} f(x) = \dfrac{1}{\Gamma(1 - \alpha)} \dfrac{d}{dx} \int_{0}^{x} \dfrac{f(t) - f(0)}{(x - t)^{\alpha}}\, dt, \qquad \alpha \in (0, 1),$

and for higher orders, $\alpha \in (n, n + 1)$,

$D^{\alpha} f(x) = \left( D^{\alpha - n} f(x) \right)^{(n)} = \dfrac{1}{\Gamma(1 - \alpha + n)} \left[ \dfrac{d}{dx} \int_{0}^{x} \dfrac{f(t) - f(0)}{(x - t)^{\alpha - n}}\, dt \right]^{(n)}.$

Numerical solution of the variable order FODE for various initial conditions:

$\dfrac{d^{\tanh(a t)}}{dt^{\tanh(a t)}} f(t) = \sin(f(t)), \qquad t \in [0, 60], \quad a = 0.5.$
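A crude numerical sketch (ours, not the author's scheme) of how such a variable order equation can be stepped forward, using Grunwald-Letnikov-type weights with the order frozen at each step and the source term lagged by one step; the parameters and step size are arbitrary:

```python
import numpy as np

def solve_vode_gl(alpha, rhs, f0, t_max, h):
    """Explicit Grunwald-Letnikov-style scheme for D^{alpha(t)} (f - f0) = rhs(f).

    At step n the order alpha(t_n) is frozen, the GL weights are rebuilt, and the
    source term is evaluated at the previous step. This is only a rough sketch and
    ignores more careful treatments of the variable-order memory kernel.
    """
    n_steps = int(t_max / h)
    f = np.empty(n_steps + 1)
    f[0] = f0
    for n in range(1, n_steps + 1):
        a = alpha(n * h)
        w = np.empty(n + 1)           # GL weights w_k = (-1)^k * binom(a, k)
        w[0] = 1.0
        for k in range(1, n + 1):
            w[k] = w[k - 1] * (1.0 - (a + 1.0) / k)
        history = np.dot(w[1:], f[n - 1::-1] - f0)      # sum_{k>=1} w_k (f_{n-k} - f0)
        f[n] = f0 + h ** a * rhs(f[n - 1]) - history
    return f

# Example with order alpha(t) = tanh(0.5 t) and source sin(f), as suggested above.
f = solve_vode_gl(alpha=lambda t: np.tanh(0.5 * t), rhs=np.sin, f0=0.5, t_max=10.0, h=0.01)
print(f[::200])   # sampled trajectory
```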

VII. Nonlinear extension of the result:

• The same FODE, with the source term being a nonlinear functional.
• The order of the FODE is constant.

We have a uniqueness condition generated by the existence of a fixed point.

Theorem 1: The boundary value problems presented in Lemmas 1, 2 have unique solutions $u(t)$ if the following conditions hold:
1. $f: J \times \mathbf{R} \to \mathbf{R}$ is continuous;
2. there exists an increasing function $\varphi: [0, \infty) \to [0, \infty)$ with $\varphi(x) < x$ for all $x > 0$ and $\varphi(x)/x \in S$, such that

$|f(t, y) - f(t, x)| \le \dfrac{1}{\sup\left\{ \int_{0}^{T} G(t, s)\, ds : t \in J \right\}}\, \varphi(|y - x|)$

for any $x, y \in \mathbf{R}$.

The proof is based on building a complete metric space with the distance given by the sup norm and a contraction (defined as above) given by the convolution with the Green functions.

Theorem 2: Assume that there exists $K > 0$ such that

$|f(t, u) - f(t, v)| \le K\, |u - v|$

for $t \in J$ and every $u, v \in \mathbf{R}$. If

$K\, \sup\left\{ \int_{0}^{T} G(t, s)\, ds : t \in J \right\} < 1,$

then there exists a unique solution of the boundary value problems given in Lemmas 1 and 2.

VIII. Conclusions:

• It is possible to define an ODE with variable order of differentiation and to establish existence, uniqueness and smoothness properties of the solution.
• The formalism should be extended to fractional derivatives of higher order.
• The problems

$\dfrac{d^{y} f(x, y)}{dx^{y}} = L[f(x, y)]$

and

$\dfrac{d^{\alpha(t)} f(t)}{dt^{\alpha(t)}} = L[f(t)], \qquad \dfrac{d^{k} \alpha}{dt^{k}} = F[\alpha(t)]$

should be formulated rigorously.

Theorem: If $\alpha(t): \mathbf{R}_{+} \to (0, 1)$ is continuous and fulfils the condition that for all

$p \in \left( 1, \min_{t \ge 0} \left\{ \dfrac{1}{\alpha(t)}, \dfrac{1}{1 - \alpha(t)} \right\} \right)$

we have

$\sup_{t \ge 0} \dfrac{\Gamma\!\left(1 + p\,(\alpha(t) - 1)\right)}{\Gamma(\alpha(t))} < +\infty,$

and $f(t, x): \mathbf{R}_{+} \times \mathbf{R} \to \mathbf{R}$ is continuous and fulfils

$|f(t, x) - f(t, y)| \le F(t)\, |x - y|$ for all $t \ge 0$, $x, y \in \mathbf{R}$,

then the VODE initial value problem

$D^{\alpha(t)} (x - x_{0}) = f(t, x(t)), \qquad x(0) = x_{0}$

has a unique solution on $t \ge 0$.

The proof is based on introducing a Banach space endowed with an exponentially weighted metric (a Bielecki metric), and then showing that the integral operator associated with the VODE is a contraction in this space.

The surface and parallel mesh represent the plot of the family of constant $\alpha_{0} \in (0, 1)$ solutions. The red double curve represents the VODE solution of $D^{\alpha(t)}(x - x_{0}) = f(t, x(t))$.

A bifurcation point is obtained:

Solution of the VODE with $\alpha$ constant over almost all of the interval $t \in (0, 1)$, except for a narrow and sharp drop at $t = 0.5$. The two solid lines are exact solutions for these extreme values of $\alpha$ and the dashed curve is the VODE numerical solution:

Solution of the VODE with $\alpha$ a step function at $t = 0.5$. The two solid lines are exact solutions for the extreme values of $\alpha$ and the dashed curve is the VODE numerical solution:

Solution of the VODE with $\alpha$ oscillating. The two solid lines are exact solutions for the extreme values of $\alpha$ and the dashed curve is the VODE numerical solution:

A linear oscillator with variable order fractional damping:

$x''(t) + a\, D^{\alpha(t)} x(t) + b\, x(t) = 0$

• $\alpha(t) = 1$
• $\alpha(t) = \dfrac{t}{t_{\max}}$: memory becomes shorter.
• $\alpha(t) = 1 - \dfrac{t}{t_{\max}}$: memory becomes longer.

Recently the Northern Hemisphere monthly Temperature Anomalies (NHTA) and the Pacific Decadal Oscillation index (PDO) were justified by using a long term memory in climate variability with the help of fractional integrals: N. Yuan, Z. Fu and S. Liu, Nature Scientific Reports 4 (2014) 6577. The estimations of the long-lasting influences of historical climate states on the present time, and the prediction of these influences, were obtained as climate memory signals. The whole climate variation can be decomposed into two parts:
• the cumulative climate memory (CCM), and
• the weather-scale excitation (WSE).
If $\varepsilon$ represents the "weather-scale" excitations and $x$ is the "climate-scale" variability, the dynamical equation used by Yuan et al. is:

$\dfrac{dx}{dt} = \varepsilon(t) \;\to\; x(t) = D^{\alpha_{0}} \varepsilon(t)$

• Higher-order derivative theories appear naturally in nonlinear waves, or as corrections to general relativity and cosmic strings.
• A way to introduce non-local theories is to use fractional Lagrangians and Hamiltonians, and fractional Hamiltonian quantization of non-singular systems possessing higher order derivatives.
• The procedure is to have the Lagrangian depend on a segment of the path, then introduce time translations as Taylor series in the derivatives and build the fundamental Poisson brackets.
• The higher orders of differentiation are represented by a countable system of fractional derivatives.
• The variable order of differentiation Hamiltonian is introduced by smoothing the series through a functional integration.

Hamiltonian dynamics with variable order of differentiation, following the procedure of D. Baleanu, S. I. Muslih, K. Taş, J. Math. Phys. 47, 10 (2006) 103503.

Classical non-local theory:

$L[q] = L_{n}\!\left( q(t), \dot{q}(t), \ddot{q}(t), \ldots, q^{(n)}(t) \right)$

$L_{\mathrm{VODE}}(t) = L[\, q(t + \alpha(t)) \,], \qquad q(t) \to Q(t, \alpha(t)) = q(t + \alpha(t))$

$Q(t, \alpha(t)) = \sum_{m=0}^{\infty} \dfrac{\alpha(t)^{m}}{m!}\, q^{(m)}(t), \qquad P(t, \alpha(t)) = \sum_{m=0}^{\infty} \left( -\dfrac{\partial}{\partial \alpha} \right)^{m} \delta(\alpha)\, p^{(m)}(t)$

Fundamental Poisson brackets: $\{ q^{(m)}(t), p^{(n)}(t) \} = \delta^{n}_{m}$ and

$\{ Q(t, \alpha(t)),\ P(t', \alpha(t')) \} = \delta(t - t').$

VODE realization of the Lagrangian and Hamiltonian: $q(t) = D^{\alpha(t)} x(t)$. Use the Mittag-Leffler function instead of the Taylor series of the exponential:

$E_{\alpha}(a t^{\alpha}) = \sum_{k \ge 0} \dfrac{a^{k} t^{k\alpha}}{\Gamma(1 + k\alpha)}, \qquad m! \to \Gamma(1 + m).$
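A small series sketch (ours, plain Python) of the one-parameter Mittag-Leffler function written above, checked against two special cases known in closed form:

```python
import math

def mittag_leffler(a, t, alpha, terms=80):
    """Truncated series E_alpha(a t^alpha) = sum_k a^k t^(k alpha) / Gamma(1 + k alpha)."""
    return sum((a ** k) * (t ** (k * alpha)) / math.gamma(1 + k * alpha) for k in range(terms))

# alpha = 1 reduces the series to the ordinary exponential e^(a t).
print(mittag_leffler(0.7, 2.0, 1.0), math.exp(0.7 * 2.0))
# alpha = 1/2, a = 1 gives the known closed form e^t * erfc(-sqrt(t)).
print(mittag_leffler(1.0, 2.0, 0.5), math.exp(2.0) * math.erfc(-math.sqrt(2.0)))
```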

The VODE Hamiltonian is given by

$H[q, p] = \int_{0}^{1} \left[ p^{\alpha(s)} \left( q^{\alpha(s)} + s\, \alpha'(s) \right) - L\!\left( q, q^{\alpha(s)} \right) \right] ds.$



The VODE Euler-Lagrange equations:

$\int {_{-\infty}D^{\alpha(s)}}\, \dfrac{\partial L(t)}{\partial q^{\alpha(s)}(t)}\, ds = 0.$

A guinea pig: the linear, free wave equation

$\dfrac{\partial^{\alpha} \Psi}{\partial x^{\alpha}} = A\, \dfrac{\partial^{\beta} \Psi}{\partial t^{\beta}}$

Dispersion relation for plane wave solutions $\Psi = \Psi_{0}\, e^{i(kx - \omega t)}$:

$k^{\alpha} = (-1)^{\beta}\, i^{\beta - \alpha}\, A\, \omega^{\beta}$

To have the PDE and the energy-momentum relation independent of the solution we need either $\alpha = \beta$ ($A$ = const., massless) or $\alpha = 2\beta$ ($A \sim$ mass, massive particles). The VODE formalism can make this transition happen smoothly.

Other possible applications include:

• Higher-order derivatives used in linear/nonlinear wave equations and the dynamical changing of the dispersion relation.
• The nerve Boussinesq equation (bio, neuro).
• Mesoscopic superconductivity (quantum physics, stability problems, phase transitions).
• Weak nonlinearities (solitons, compactons).
• Dualities compact-open, finite-periodic-infinite (geometry).
• Systems that need variable allocation of resources or memory (computer science, living systems).

Relation between VODE and other applied math fields:

• Quantum groups and Hopf algebras

• Finite difference equations as infinite order linear ODE and their representation as nonlinear differential equations of finite order

Future directions

1. Find another mathematical tool, beyond fractional differentiation, that allows continuous transitions over several (> 1) orders of differentiation.
2. Investigate the spectral consequences of variable order differentiation.
3. Re-define Lagrangian and Hamiltonian systems.
4. Smooth out and involve dynamics in the nonlinear, non-perturbational theories.

Thank you!

More Results on Existence

$C(J, \mathbf{R}) = \{ u(t): [0, T] \to \mathbf{R} \ \text{continuous},\ \|u\|_{\infty} = \sup\{ |u(t)| : t \in J \},\ T > 0 \}$ is a Banach space; $C^{n}(J, \mathbf{R})$ denotes the functions in $C(J, \mathbf{R})$ of class $n$.

$S = \{ \alpha(t): \mathbf{R}_{+} \to (0, 1) \ | \ \alpha(t_{n}) \to 1 \ \text{if} \ t_{n} \to 0 \}$

We define the two-parameter Mittag-Leffler function

$E_{\alpha, \beta}(z) = \sum_{k=0}^{\infty} \dfrac{z^{k}}{\Gamma(k \alpha + \beta)}, \qquad \alpha > 0, \ \beta > 0.$

We also use the theorem: in a complete metric space $(M, d)$, a contraction $T$ with the property $d(T(x), T(y)) < \alpha(d(x, y))\, d(x, y)$, $\alpha \in S$, has a unique fixed point $z$, and all sequences $T^{n}(x)$ converge to $z$ for any $x$ (Khamski, 2011).

Lemma 1. The nonlinear fractional ODE

$\lambda_{1} D^{\alpha_{1}} u(t) + \lambda_{0} u(t) = f(t, u(t)), \qquad t \in J,\ \lambda_{0}, \lambda_{1} \in \mathbf{R},\ \lambda_{1} \ne 0,\ 0 \le \alpha_{1} \le 1,$

has its boundary value problem rewritten as an integral equation with a Green function built from Mittag-Leffler functions, where

$E_{\alpha, \beta}^{(k)}(y) = \dfrac{d^{k}}{dy^{k}} E_{\alpha, \beta}(y) = \sum_{j=0}^{\infty} \dfrac{(j + k)!\ y^{j}}{j!\ \Gamma(\alpha j + \alpha k + \beta)}, \qquad k = 0, 1, 2, \ldots$
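A quick check (ours, plain Python) of the derivative formula for $E_{\alpha,\beta}$ quoted in Lemma 1, comparing it for $k = 1$ with a centered finite difference of the series itself:

```python
import math

def ml2(y, alpha, beta, terms=60):
    """Two-parameter Mittag-Leffler function E_{alpha,beta}(y), truncated series."""
    return sum(y ** k / math.gamma(alpha * k + beta) for k in range(terms))

def ml2_deriv(y, alpha, beta, k, terms=60):
    """k-th derivative of E_{alpha,beta} via the series formula quoted above."""
    return sum(math.factorial(j + k) * y ** j /
               (math.factorial(j) * math.gamma(alpha * j + alpha * k + beta))
               for j in range(terms))

y, alpha, beta, h = 0.7, 0.5, 1.0, 1e-6
print(ml2_deriv(y, alpha, beta, 1),
      (ml2(y + h, alpha, beta) - ml2(y - h, alpha, beta)) / (2 * h))   # should agree closely
```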

Lemma 2: the result of Lemma 1 can be generalized to any linear fractional ordinary differential operator

$L[D]\, u(t) = \sum_{k=0}^{n} \lambda_{k} D^{\alpha_{k}} u(t)$

with nonzero real coefficients $\lambda_{k}$ and fractional orders $0 \le \alpha_{0} < \alpha_{1} < \cdots < \alpha_{n} \le 1$.