A Short visit on Ising 2D Model

St. Xavier’s College
Maitighar, Kathmandu
Affiliated to Tribhuvan University
For the partial fulfilment of the Bachelor’s degree in Science and Technology

Submitted To: Mr. Pushpalal Shrestha Department of Statistics (Mathematics)

Submitted By: Damodar Rajbhandari Department of Statistics (Mathematics)

2017

Letter of Approval This report entitled “A short visit on Ising 2D Model” has been prepared and submitted by Damodar Rajbhandari and presented to the Physics and Statistics (Mathematics) Departments at St. Xavier’s College. It has been approved and accepted for the practical course on Research Methodology of B.Sc. Physics 3rd year.

Jonah Maxwell Miller Guelph-Waterloo Physics Institute University of Guelph

Mr. Drabindra Pandit HOD of Physics St. Xavier’s College

Mr. Pushpalal Shrestha Department of Statistics (Mathematics) St. Xavier’s College Date:

The final copy of this report has been examined by the signatories, and we find that both the content and the form meet acceptable presentation standards of scholarly work in the mentioned discipline.


Abstract

This report aims to give a short description of the two-dimensional Ising model and an introduction to Monte Carlo simulation. We focus on the magnetization results as a function of temperature. To obtain these results, we implemented the Metropolis Hastings algorithm, which helps us build likely configurations starting from an arbitrary configuration. Due to the finite size of our lattice, it is difficult to specify the Curie temperature. In order to approximate this temperature, we have used the Binder cumulant for different lattice sizes. We assume the reader has some basic knowledge of probability theory and an everlasting curiosity.


Dedication To My Late Father who is in heaven


Acknowledgments

First and foremost, I would like to express my deep gratitude to my academic advisors Prof. Dr. Narayan Prasad Adhikari and Jonah Maxwell Miller∗, whose expertise, patience, encouragement and invaluable guidance brought out the beauty in my research. Without their help and insight, this work would not be what it is now. A special thanks must be extended to the Head of the Department of Physics, Mr. Drabindra Pandit, whose everlasting effort at “encouraging students to get involved in research activities” gave me a lot of inspiration. And Mr. Pushpalal Shrestha, you are the coolest teacher I have ever met. Next, I would also like to thank the entire Physics and Statistics (Mathematics) Departments at St. Xavier’s College, including all of the faculty, staff and students. On a personal note, I would like to thank my parents, and especially my uncle and aunt, whose continued love, encouragement, best wishes, support, and belief in my abilities have made it possible for me to go from a very mediocre student to an honors student. I personally thank my dearest and beautiful sister Dipika. Although she is not a science student, she is a wonderful science-loving person, and she patiently listened to my demystified explanations of this research work. Without her emotional support over the years, I might never have become a good explainer, nor passed a flavor of science on to her. I would like to thank my classmate with whom I studied, Bindesh Tripathi: it was a lot of fun learning with you. There are many other people that deserve to be acknowledged for their help and support during this work, and I apologize if I have forgotten to mention some of them. Thank you, everyone.



∗ Guelph-Waterloo Physics Institute


Contents

Letter of Approval
Abstract
Acknowledgments

1 INTRODUCTION
  1.1 History

2 BACKGROUND
  2.1 Spin
  2.2 Total Energy
  2.3 Possible Configuration
  2.4 Periodic Boundary Condition
  2.5 Exact Solution

3 NUMERICAL SIMULATION
  3.1 Monte Carlo Step
  3.2 Metropolis Hastings Algorithm

4 NUMERICAL TOOLS
  4.1 Calculation of observables
  4.2 Binder Cumulant

5 ANALYSIS AND RESULTS
  5.1 Magnetization Results
  5.2 Critical Point
  5.3 Limitation
  5.4 Conclusion

A All about Code

B Code Optimization

Bibliography

List of Figures

2.1  Lattice illustration of an Ising 2D model.
2.2  Illustration of Periodic Boundary Condition.
2.3  Plot of Onsager’s formula for magnetization.
5.1  Absolute Magnetization per spin vs Temperature for L = 4.
5.2  Absolute Magnetization per spin vs Temperature for L = 8.
5.3  Absolute Magnetization per spin vs Temperature for L = 16.
5.4  Absolute Magnetization per spin vs Temperature for L = 32.
5.5  Differing results of the Magnetization for varying lattice sizes.
5.6  Unfitted Binder Cumulants for varying lattice sizes.
5.7  Fitted Binder Cumulants for varying lattice sizes.
5.8  Fitted Binder Cumulants for lattice sizes L = 4 and 8.
5.9  Fitted Binder Cumulants for lattice sizes L = 4 and 16.
5.10 Fitted Binder Cumulants for lattice sizes L = 8 and 16.

CHAPTER 1: INTRODUCTION

“Your work is going to fill a large part of your life, and the only way to be truly satisfied is to do what you believe is great work. And the only way to do great work is to love what you do. If you haven’t found it yet, keep looking. Don’t settle. As with all matters of the heart, you’ll know when you find it.”
Steve Jobs

The primary goals of this report are two-fold: (1) to introduce Monte Carlo simulation as a useful technique for evaluating the observables of a ferromagnet; and (2) to calculate the Curie temperature, compare it with the theoretical value given by Onsager, and show that there exists a phase transition. The first four chapters are devoted to the above goals and aim to introduce the ideas in a broad spectrum. In chapter 1, we introduce the historical background of this model. In chapter 2, we develop the formalism required to investigate the properties of a two-dimensional ferromagnetic system with respect to its magnetization at varying temperature. In chapter 3, we introduce the Metropolis Hastings algorithm, which is an efficient choice for Monte Carlo methods. In chapter 4, we introduce the Binder cumulant, which helps us to specify the Curie temperature. Finally, in chapter 5, we discuss our results and the limitations of our study.


1.1 History

Spontaneous magnetization is a very peculiar property of ferromagnetic materials, and many physicists in the early twentieth century found this phenomenon curious. Physicists lacked a good theoretical model until Ernst Ising introduced one∗, now known as the “Ising Model”, in 1925. But the credit goes to his doctoral advisor Wilhelm Lenz, who gave it to him as a problem; it is believed that Lenz had this idea† first, in 1920. We quote here the English translation of the resume [2] from his dissertation:

“I, Ernst Ising, was born on May 10, 1900, the son of the merchant Gustav Ising and his wife Thekla, née Löwe, at Köln. Shortly thereafter my parents moved to Bochum, Westphalia, where I started school in Easter 1907. I received my diploma at the Gymnasium there, in 1918. After brief military training I began my studies of mathematics and physics at Göttingen University in Easter 1919. After an absence of one semester, I continued my studies in Bonn, where I studied astronomy among other things. Two semesters later I went to Hamburg. There I turned especially to the study of theoretical physics at the suggestion of Professor Dr. W. Lenz, and at the end of 1922, I began under his guidance the investigation of ferro-magnetism which led to the results presented here.”

Ernst Ising’s analysis of a linear chain of two-state spins using his eponymous model was unusual at that time. His work on the one-dimensional system showed that there was no phase transition at finite temperature. He then concluded that there would not be a phase transition in higher-dimensional systems either. He asserted this conclusion without any mathematical proof, simply extrapolating from his one-dimensional result. This statement proved to be erroneous, as demonstrated by Lars Onsager‡ for the two-dimensional version of the model. This discovery garnered much attention, and Ising found himself an unexpected celebrity.



∗ E. Ising, Z. Physik 31, 253 (1925)
† W. Lenz, Z. Physik 21, 613 (1920)
‡ L. Onsager, Phys. Rev. 65, 117 (1944)

CHAPTER 2: BACKGROUND

“If God has made the world a perfect mechanism, He has at least conceded so much to our imperfect intellects that in order to predict little parts of it, we need not solve innumerable differential equations, but can use dice with fair success.”
Max Born

Before we proceed, there are a number of important concepts that need to be known. We briefly develop these ideas herein.

2.1 Spin

From quantum mechanics, we know that electrons have their own intrinsic angular momentum, and this angular momentum is what we call spin. Each electron has a magnetic dipole moment μ⃗ such that

\vec{\mu} = \gamma \vec{S}

where γ is the gyro-magnetic ratio and S⃗ is the spin angular momentum of the electron [4]. In non-magnetized materials, the magnetic dipoles associated with the atoms exist in random orientations. The magnetic properties macroscopic materials exhibit are due to the magnetic moments of electrons. Therefore, since the spins are random,


there will be no overall macroscopic magnetic moment. However, if the atomic spins are aligned, a macroscopic magnetic moment can emerge. Now, our question will be: what is the atom’s overall spin? To explore this question, consider the hydrogen atom. Suppose we have two spin-1/2 particles (i.e. the electron and the proton) in the ground state of hydrogen. The Stern-Gerlach experiment1 tells us that each can have spin up (+1/2) or spin down (-1/2), so there are four possible arrangements, i.e.

↑↑,  ↑↓,  ↓↑,  ↓↓

where the first arrow refers to the electron and the second to the proton. For the above arrangements, the total spin projection takes the values 1, 0, 0, -1 respectively. Out of the four, two arrangements (i.e. ↑↑ and ↓↓) contribute to the magnetic dipole moment. To understand the Ising Model, our key ingredients are the spins and their associated magnetic moments. For simplicity, we ignore the orbit of the electron around the proton (so-called “spin-orbit coupling”) as well as relativistic effects. Although spin is a quantum-mechanical phenomenon, we can gain a conceptual understanding and capture many relevant phenomena by studying a discrete, classically probabilistic system. So, for each atom we choose either spin up (+1) or spin down (-1). We put these spins on a two-dimensional lattice as in figure 2.1.

Figure 2.1: Lattice illustration of an Ising 2D model.

1 Stern and Gerlach had no idea it was spin that they had discovered.


The lattice configuration is square with dimension L, and the total number of spins equals N = L × L. Hence, our physical assumption is that each lattice site is in one of the two possible states (i.e. spin up or spin down).
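As a concrete illustration, here is a minimal sketch of such a lattice in Python (numpy is assumed; the names are illustrative, and the simulator in Appendix A builds its lattice with a slightly different random call):

import numpy as np

L = 16                              # linear lattice dimension
rng = np.random.default_rng()       # pseudo-random number generator

# Each of the N = L*L sites gets spin +1 or -1 with equal probability.
state = rng.choice([-1, 1], size=(L, L))

print(state.shape)                  # (16, 16)
print(np.sum(state))                # net magnetization of this random configuration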

2.2 Total Energy

Our first step is to assign an independent variable si to each lattice site, with i = 1, . . . , N. The variables si take on only two values, si = ±1, which we shall call the two possible states of the lattice site. An assignment of values {si} = {s1, s2, . . . , sN} to the lattice is called a configuration of the system. Now we need to define the Hamiltonian of the system. In mathematical physics, the Hamiltonian refers to the total energy of a system, and it governs the dynamics. The Ising model makes the simplifying assumption that only short-range nearest-neighbor interactions and the interaction of each lattice site with an external field contribute to the total energy of the system. For each configuration {si} we have

E(\{s_i\}) = -J \sum_{\langle ij \rangle}^{N} s_i s_j - H \sum_{i=1}^{N} s_i     (2.1)

where the parameters J and H correspond to the energies associated with nearest-neighbor interactions and with the external magnetic field respectively, and ⟨ij⟩ denotes a sum over nearest-neighbor pairs [3]. For our purposes, we assume a vanishing external field and thus set H = 0. Also, for a ferromagnet J > 0 (i.e. J is positive), so that a magnetized configuration has a lower energy than a non-magnetized configuration (refer to section 2.3). For simplicity, we choose J = 1. So equation 2.1 becomes

E(\{s_i\}) = -\sum_{\langle ij \rangle}^{N} s_i s_j     (2.2)

Also, the energy of each individual spin si can be calculated as

E(s_i) = -s_i \sum_{j}^{4} s_j     (2.3)

where j runs over the four neighbor spins of the spin si.
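As a quick worked example of equation 2.3: if si = +1 and its four neighbors are +1, +1, -1, +1, then E(si) = -(+1)(1 + 1 - 1 + 1) = -2, while flipping si to -1 would raise this to +2. Aligned neighborhoods therefore sit at lower energy, which is exactly the ferromagnetic behavior the model is meant to capture.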

2.3 Possible Configuration

In thermal equilibrium with a heat bath, the Boltzmann canonical distribution law suggests that the states of a closed system at fixed volume favor lower energy. In short, this law amounts to energy minimization. Based on this law, we can compute the probability of finding a particular spin configuration, and this probability is directly proportional to the Boltzmann factor exp(−E({si})/kB T). The probability distribution governing the different configurations is known as the Gibbs distribution (or canonical distribution) [6]; it was introduced by J. W. Gibbs for classical statistics in 1901. From the Gibbs distribution, the probability of finding a particular spin configuration {si} is

P(\{s_i\}) = \frac{1}{Z} e^{-E(\{s_i\})/k_B T}     (2.4)

where Z = \sum_{\{s_i\}} e^{-E(\{s_i\})/k_B T} is the partition function. It is the sum over all 2^N possible spin configurations. Due to the Boltzmann factor, spin configurations with lower energy are favored.
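To make equation 2.4 concrete, here is a minimal brute-force sketch (an illustration, not part of the simulator in Appendix A) that enumerates all 2^N configurations of a tiny 3 × 3 periodic lattice, builds the partition function Z, and evaluates the probability of the all-spins-up configuration; numpy and itertools are assumed.

import itertools
import numpy as np

L = 3           # tiny lattice so that 2**(L*L) = 512 configurations is enumerable
T = 2.0         # temperature, with k_B = J = 1 as in the text

def energy(state):
    """Equation 2.2: minus the sum over nearest-neighbor pairs, with PBC.
    Rolling once along each axis counts each bond exactly once (for L >= 3)."""
    return -np.sum(state * (np.roll(state, 1, axis=0) + np.roll(state, 1, axis=1)))

# Partition function Z: sum of exp(-E/T) over all 2^N configurations.
Z = 0.0
for spins in itertools.product([-1, 1], repeat=L * L):
    Z += np.exp(-energy(np.array(spins).reshape(L, L)) / T)

# Probability of the fully magnetized (all spins up) configuration, equation 2.4.
all_up = np.ones((L, L), dtype=int)
print(np.exp(-energy(all_up) / T) / Z)

For any realistic lattice this sum over 2^N terms is intractable, which is precisely why the Monte Carlo sampling of chapter 3 is needed.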

2.4 Periodic Boundary Condition

As we all know, the number of atoms in a macroscopic sample is of the order of Avogadro’s number, which is about 6.022 × 10^23. For computational work, it is impossible and costly to account for all these atoms. The Boltzmann canonical distribution law, which relies on the canonical ensemble, gives us the possibility of obtaining the relevant observables by taking only a small number of atoms or spins (refer to figure 2.1). We neglect any complications that might be introduced by the boundaries. So, we make the spins at the edges of the lattice interact with the geometrically opposite edges. This is known as the Periodic Boundary Condition, or simply PBC. It can be visualized more clearly if we consider a two-dimensional rubber sheet being folded into a 3D torus, with the spins lying on the surface of this topological structure.

Figure 2.2: Illustration of Periodic Boundary Condition.
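Following figure 2.2, here is a minimal sketch of one way PBC can be imposed in practice (numpy assumed; np.roll wraps indices around, so every spin sees exactly four neighbors). The simulator in Appendix A achieves the same effect by treating edges and corners explicitly.

import numpy as np

def neighbor_sum(state):
    """Sum of the four nearest neighbors of every site, with periodic wrapping."""
    return (np.roll(state, 1, axis=0) + np.roll(state, -1, axis=0) +
            np.roll(state, 1, axis=1) + np.roll(state, -1, axis=1))

def total_energy(state):
    """Equation 2.2 under PBC; the factor 1/2 corrects for double-counted bonds."""
    return -0.5 * np.sum(state * neighbor_sum(state))

# The per-site energies of equation 2.3 follow directly:
#   E_site = -state * neighbor_sum(state)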

2.5 Exact Solution

Peierls proved that the square lattice should have a phase transition2, and Kramers and Wannier then located the transition point3. In 1944, Onsager calculated the free energy of this 2D model. Finally, in May 1949, at a conference of the International Union of Physics on statistical mechanics in Florence, Italy, Onsager announced the exact formula for the magnetization. He said, “B. Kaufman and I have recently solved this problem” [1]. But they did not publish their derivation. Three years later, C. N. Yang published a derivation in Physical Review. Onsager’s formula for the absolute magnetization per spin (M) is given as [7]:

M = \begin{cases} \left(1 - \left[\sinh\!\left(\ln(1+\sqrt{2})\,\frac{T_c}{T}\right)\right]^{-4}\right)^{1/8} & T < T_c \\ 0 & T > T_c \end{cases}     (2.5)

Figure 2.3: Plot of Onsager’s formula for magnetization.

For a vanishing external field, Onsager’s solution gives the critical temperature (Tc) as [7]:

\frac{k_B T_c}{J} = \frac{2}{\ln(1+\sqrt{2})} \approx 2.269     (2.6)

2 R. Peierls, Proc. Camb. Phil. Soc. 32, 477 (1936)
3 H. A. Kramers and G. H. Wannier, Phys. Rev. 60, 252, 263 (1941)

We assume kB and J to be one, so equation 2.6 becomes

T_c = \frac{2}{\ln(1+\sqrt{2})} \approx 2.269     (2.7)
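A minimal sketch that evaluates equations 2.5 and 2.7 numerically (numpy assumed; the function name is illustrative), which is the kind of theoretical curve used for comparison in figure 2.3 and in chapter 5:

import numpy as np

T_C = 2.0 / np.log(1.0 + np.sqrt(2.0))    # equation 2.7, approximately 2.269

def onsager_magnetization(T):
    """Absolute magnetization per spin from equation 2.5, with k_B = J = 1."""
    if T >= T_C:
        return 0.0
    return (1.0 - np.sinh(np.log(1.0 + np.sqrt(2.0)) * T_C / T) ** (-4)) ** 0.125

print(round(T_C, 3))                       # 2.269
print(onsager_magnetization(1.0))          # close to 1: almost fully magnetized
print(onsager_magnetization(2.5))          # 0.0: above the critical temperature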

CHAPTER 3: NUMERICAL SIMULATION

“In physics, we’re not actually trying to get at the truth. We’re trying to get at the truth within measurement error.”
Jonah Maxwell Miller

In this chapter, we briefly describe the key ingredients for the numerical simulation. We introduce the Metropolis Hastings algorithm, a ubiquitous tool that helps us approximate the likely configurations starting from an arbitrary configuration.

3.1 Monte Carlo Step

First of all, we initialize our lattice by assigning spins at random, and we call this configuration of spins arbitrary (see figure 2.1). We then calculate the total energy of the configuration. In order to approximate the likely configurations, we implement the Metropolis Hastings algorithm within the Monte Carlo step (mcs). We consider the number of mcs as units of time. We thermalize our system by performing the mcs a number of times. Per mcs, we flip one spin at a time; this is referred to herein as “Single Flip Spin Dynamics”. The flipped spin is accepted or rejected with a probability depending on how it affects the change in energy of the system. The algorithm for performing this step is given below; a code sketch of the resulting update loop follows the algorithm.

Algorithm 3.1.1. For any given configuration, we transform it into another allowed configuration by performing the following steps:

1. Select any one spin at random.
2. Flip that spin.
3. Calculate its energy.
4. Calculate the change in energy with respect to its previous state.
5. Decide whether to accept the flipped spin or not.
6. Repeat until the desired configuration is reached.

The only constraint on the mcs is that it should be ergodic, because we assume the collection of all possible Monte Carlo steps can map the system from any state to any other state.
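A minimal sketch of Algorithm 3.1.1 as an update loop (numpy assumed; the helper names are illustrative and differ from those used in Appendix A; the acceptance rule applied in step 5 is the one derived in section 3.2):

import numpy as np

rng = np.random.default_rng()

def site_energy(state, i, j):
    """Equation 2.3 for site (i, j), with periodic boundaries via modular indexing."""
    L = state.shape[0]
    return -state[i, j] * (state[(i - 1) % L, j] + state[(i + 1) % L, j] +
                           state[i, (j - 1) % L] + state[i, (j + 1) % L])

def monte_carlo_step(state, T, n_flips):
    """One 'unit of time': n_flips single-spin updates following Algorithm 3.1.1."""
    L = state.shape[0]
    for _ in range(n_flips):
        i, j = rng.integers(0, L, size=2)          # 1. pick a spin at random
        e_old = site_energy(state, i, j)
        state[i, j] *= -1                          # 2. flip it
        dE = site_energy(state, i, j) - e_old      # 3-4. change in energy
        # 5. accept or reject; this rule is derived in section 3.2.
        if dE > 0 and rng.random() >= np.exp(-dE / T):
            state[i, j] *= -1                      # reject: undo the flip
    return state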

3.2 Metropolis Hastings Algorithm

In the mcs, the decision is made using the Metropolis Hastings algorithm. This sampling technique uses the idea of a Markov chain of successive configurations1, where each configuration {sj} is constructed from the previous configuration {si} via a suitable transition probability W({si} → {sj}) [5]. The main idea is that, once equilibrium is reached, the chain should satisfy the detailed balance condition, i.e.

P_{eq}(\{s_i\}) \, W(\{s_i\} \to \{s_j\}) = P_{eq}(\{s_j\}) \, W(\{s_j\} \to \{s_i\})     (3.1)

This equation tells us that, at equilibrium, the transitions {si} → {sj} and {sj} → {si} occur at equal rates. Using equation 2.4 in 3.1, we get

\frac{1}{Z} e^{-E(\{s_i\})/k_B T} \, W(\{s_i\} \to \{s_j\}) = \frac{1}{Z} e^{-E(\{s_j\})/k_B T} \, W(\{s_j\} \to \{s_i\})     (3.2)

Canceling Z on both sides and isolating the ratio of transition probabilities on the left, we get

\frac{W(\{s_i\} \to \{s_j\})}{W(\{s_j\} \to \{s_i\})} = \frac{e^{-E(\{s_j\})/k_B T}}{e^{-E(\{s_i\})/k_B T}}     (3.3)

so the equation becomes

\frac{W(\{s_i\} \to \{s_j\})}{W(\{s_j\} \to \{s_i\})} = e^{-\Delta E/T}     (3.4)

1 The terms “successive configurations” and “successive states” are interchangeable because, in a given configuration, we change a single spin’s state at each mcs.


where ∆E = E({sj}) − E({si}) and, for simplicity, we take the Boltzmann constant (kB) to be one. Using equation 2.2, ∆E becomes

\Delta E = \left(-\sum_{\langle jk \rangle}^{N} s_j s_k\right) - \left(-\sum_{\langle ik \rangle}^{N} s_i s_k\right)     (3.5)

Since we only change one spin per update, equation 3.5 collapses to the form of equation 2.3, i.e.

\Delta E = \left(-s_j \sum_{k}^{4} s_k\right) - \left(-s_i \sum_{k}^{4} s_k\right)     (3.6)

\therefore \Delta E = E(s_j) - E(s_i)     (3.7)

We know that if we flip the spin, only the sign of the previous spin’s energy changes. The corresponding pseudo-code is given below.

Pseudo-code 3.1.2.

• if sj == 1: use E(sj) = −sj × E(si)
• else: use E(sj) = sj × E(si)

where E(si) = si × (top + bottom + left + right), such that top, bottom, left and right are the neighbor spins of the spin si.

Now, let us go back to equation 3.4. This equation does not specify W({si} → {sj}) uniquely. In order to do so, we introduce [5]

W(\{s_i\} \to \{s_j\}) = \begin{cases} e^{-\Delta E/T} & \text{if } \Delta E > 0 \\ 1 & \text{otherwise} \end{cases}     (3.8)

The algorithm for deciding whether or not to accept the flipped spin is given below; a short code sketch of this decision rule follows it.

Algorithm 3.1.3.

1. If ∆E < 0, accept the flipped spin.
2. If not, accept it only with probability e−∆E/T.
3. If both step 1 and step 2 fail, return the flipped spin to its previous state.
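A minimal sketch of equation 3.8 and Algorithm 3.1.3 (numpy assumed; the function names are illustrative). Printing a few values makes the rule tangible: downhill and zero-cost flips are always taken, while uphill flips become exponentially unlikely as ∆E grows or T falls.

import numpy as np

rng = np.random.default_rng()

def transition_probability(dE, T):
    """Equation 3.8: acceptance probability W for a proposed single-spin flip."""
    return np.exp(-dE / T) if dE > 0 else 1.0

def accept_flip(dE, T):
    """Algorithm 3.1.3: accept downhill moves always, uphill moves with probability e^(-dE/T)."""
    return rng.random() < transition_probability(dE, T)

for dE in (-4, 0, 4, 8):
    print(dE, round(transition_probability(dE, 2.269), 3))
# prints: -4 1.0 / 0 1.0 / 4 0.172 / 8 0.029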


CHAPTER 4: NUMERICAL TOOLS

“Today’s science is tomorrow’s technology.”
Narayan Prasad Adhikari

In this chapter, we focus our discussion on the “Binder Cumulant”, which helps us to locate the Curie temperature.

4.1 Calculation of observables

Suppose we have taken n magnetization samples at each temperature, with the temperature ranging from 0 to T kelvin. Also, we have lattice size L, so we have N = L × L spins. The observables of particular interest are ⟨M^a⟩ and ⟨|M|⟩. These are calculated as given below:

\langle M^a \rangle = \frac{1}{n} \sum_{k=1}^{n} M_k^a     (4.1)

where a can be 2 or 4, and Mk denotes the k-th sample. Similarly,

\langle |M| \rangle = \frac{1}{n} \sum_{k=1}^{n} |M_k|     (4.2)

So the absolute magnetization per spin is given as

\langle |M| \rangle \text{ per spin} = \frac{\langle |M| \rangle}{N}     (4.3)
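A minimal sketch of equations 4.1-4.3 applied to a set of magnetization samples (numpy assumed; the file name is illustrative and stands for one of the per-temperature sample files that the Appendix A code writes into its Data/ folder):

import numpy as np

L = 16
N = L * L

samples = np.loadtxt("Data/Temp_2.0.csv")      # hypothetical sample file for one T

m_abs = np.mean(np.abs(samples))               # equation 4.2
m2 = np.mean(samples ** 2)                     # equation 4.1 with a = 2
m4 = np.mean(samples ** 4)                     # equation 4.1 with a = 4

print(m_abs / N)                               # equation 4.3: <|M|> per spin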


4.2 Binder Cumulant

In the statistical mechanics of many-body systems, we can only ever study finite subsystems of an effectively infinite system. Our limitation is thus the finite size of our lattice, which turns into the problem of recognizing the specific point at which the phase transition occurs. The Binder Cumulant1 is an observational tool which helps us to estimate that point. The main idea of this tool is that the cumulant curves for different lattice sizes all pass through a common fixed point, which coincides with the critical point. In other words, the intersection between the cumulant (U) curves of different lattice sizes gives the Curie temperature. Thus, it is a visual characteristic for the detection of a phase transition, and is given as [5]

U = 1 - \frac{\langle M^4 \rangle}{3 \langle M^2 \rangle^2}     (4.4)

1 K. Binder, Z. Physik B 43, 119-140 (1981)
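Continuing the sketch from section 4.1, equation 4.4 is a few lines per lattice size; plotting U against T for several L and reading off where the curves cross is how the estimates in section 5.2 are obtained (same assumptions as above, names illustrative):

import numpy as np

def binder_cumulant(samples):
    """Equation 4.4 evaluated from an array of magnetization samples."""
    m2 = np.mean(samples ** 2)
    m4 = np.mean(samples ** 4)
    return 1.0 - m4 / (3.0 * m2 ** 2)

# Illustrative use: one value of U per (lattice size, temperature) pair, e.g.
#   U = binder_cumulant(np.loadtxt("Data/Temp_2.0.csv"))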

CHAPTER 5: ANALYSIS AND RESULTS

“Invest your time in such a way that you won’t feel your happy time spent was time wasted.”
Anonymous

In this chapter, we discuss our findings and the limitations that we faced during our experiment.

5.1 Magnetization Results

We plotted the absolute magnetization per spin against temperature for different lattice sizes (i.e. L = 4, 8, 16, 32) and compared the trend with Onsager’s magnetization curve. To thermalize our system, we used 2000 mcs for every lattice size. This is because we have a limited amount of processor speed, and this number is not really sufficient for the larger lattice sizes; we can see the effect in the plot for L = 32. The plots are shown below.


Figure 5.1: Absolute Magnetization per spin vs Temperature for L = 4.

Figure 5.2: Absolute Magnetization per spin vs Temperature for L = 8.


Figure 5.3: Absolute Magnetization per spin vs Temperature for L = 16.

Figure 5.4: Absolute Magnetization per spin vs Temperature for L = 32.


The first three of these plots are good, but in figure 5.4 the fitted curve differs somewhat from the other three. One possible reason is the limited number of mcs that we performed; it signifies that the system has only partially reached its likely configurations. In order to obtain a better plot, we suggest using a larger number of mcs, or letting the simulation run until the system is properly thermalized. The best way to thermalize the system would be to use a while loop in the simulation code, running until an equilibrium criterion is met. The first lesson we can learn from the above plots is that thermalization of the system is quite important for producing good results. Also, the greater the number of mcs we use, the more likely we are to see spontaneous flipping.

Figure 5.5: This plot shows the differing results of the magnetization for varying lattice sizes.

In figure 5.5, the magnetization results show very clearly that the shape of the gradient becomes more distinct as the lattice size is increased, and for larger lattice sizes the continuous phase transition is more apparent. This is as theory prescribes. The second lesson we can learn is that, in order to see a sharp phase transition, a larger lattice size is needed together with proper thermalization.

5.2 Critical Point

One of the limitations that the Ising model simulation encounters is the finite size of our lattice, which makes it difficult to specify the Curie temperature. To solve this problem, we have used the Binder cumulant; the plots are shown below.

Figure 5.6: This plot shows the unfitted Binder Cumulants for varying lattice sizes.

Figure 5.7: This plot shows the fitted Binder Cumulants for varying lattice sizes.


According to the Binder cumulant method, there should be a common intersection of the fitted cumulants for all lattice sizes. But in figure 5.7 there are three intersections, one for each combination of two lattice sizes, lying between 2 and 2.3 kelvin. To follow Binder’s rule, we therefore take the lattice sizes two at a time, three times, and locate each intersection point. These cumulant plots are shown below.

Figure 5.8: This plot shows the fitted Binder Cumulants for lattice sizes L = 4 and 8.

Lattice size combination    Estimated Tc    Theoretical Tc
(4, 8)                      2.054           2.269

Table 5.1: Critical temperature calculated from lattice sizes 4 and 8.


Figure 5.9: This plot shows the fitted Binder Cumulants for lattice sizes L = 4 and 16.

Lattice size combination    Estimated Tc    Theoretical Tc
(4, 16)                     2.211           2.269

Table 5.2: Critical temperature calculated from lattice sizes 4 and 16.


Figure 5.10: This plot shows the fitted Binder Cumulants for lattice sizes L = 8 and 16.

Lattice size combination    Estimated Tc    Theoretical Tc
(8, 16)                     2.297           2.269

Table 5.3: Critical temperature calculated from lattice sizes 8 and 16.

Hence, the estimated Tc from our simulation is 2.187 ± 0.123 K.
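The quoted value is consistent with the mean and sample standard deviation of the three pairwise estimates in Tables 5.1-5.3; a minimal check using Python’s statistics module:

from statistics import mean, stdev

estimates = [2.054, 2.211, 2.297]          # Tables 5.1-5.3

print(round(mean(estimates), 3))           # 2.187
print(round(stdev(estimates), 3))          # 0.123 (sample standard deviation)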

5.3 Limitation

In this work, we have researched only one property (i.e. the magnetization) of the two-dimensional Ising model, and found the trend of its curve to be exactly what we expected from theory. Due to time constraints, we have not observed other properties such as the magnetic susceptibility, heat capacity, etc. Observing these properties thoroughly would help us build a deeper understanding of the behavior of ferromagnetic materials, especially spontaneous magnetization. Regarding the Ising 2D simulator, we optimized our Python code on the suggestion of Jonah sir, and it gained good computational speed compared with the one we wrote the first time. We kept in mind that Python is pretty slow at looping; this was always a red flag while we were coding. What we have learned is that, even though Python has low performance compared with C++, performance also depends on how we build our code. So, optimization is a first priority for good programming.

5.4 Conclusion

The numerical results produced by the Monte Carlo simulation offer an alternative to an exact calculation. We approximate the Curie temperature to be 2.187 ± 0.123 K, which is comparable with the theoretical value. The accuracy of the results could be further improved by using larger lattice sizes and taking more samples. In a nutshell, the appearance of spontaneous magnetization in our simulations depends directly on the number of Monte Carlo steps used and inversely on the lattice size considered. We successfully finished our experiment in the sense that we learned some of the basics of how to run simulations and achieved results that were comparable to the Onsager solution.


APPENDIX A: All about Code

The Ising 2D model simulator for this report1 was built in Python version 3.5 under Linux Mint, and all the plots were created using matplotlib and QtiPlot. A full and up-to-date version of the code2 is freely available for use and modification under the General Public License at https://github.com/Damicristi/Ising-Model-in-2D. A copy of the code is also attached here.

1 For questions or comments about this report or the source code, please contact me at [email protected] or find me on my blog http://www.physicslog.com/
2 For the simulator and the plotting codes


#!/usr/bin/python3
#####################################################
# Author     : Damodar Rajbhandari                   #
# Work       : Ising Model in two dimensions         #
# Strategies : Lattice and Spin Model                #
# Object     : Ferromagnetic material                #
# Aim        : To show there exists a phase          #
#              transition, and to calculate the      #
#              Curie (i.e. critical) temperature.    #
# Version    : Python v.3.5                          #
# Programming Approach: Object Oriented Programming  #
#####################################################

import numpy as np
from numpy.random import *

seed()  # I'll be using the system's current time for seed's parameter.


class Ising():
    """Implementing the Ising 2D model!"""

    def __init__(self, L, temp):
        """Initial values for lattice size & temperature."""
        self.L = L
        self.T = temp

    def Initialize(self):
        """Creating a system of lattice with spins configuration."""
        self.state = np.zeros((self.L, self.L))
        # The iterations below fill the rows one after another.
        for i in range(self.L):
            for j in range(self.L):
                if randint(0, 2) > 0.5:
                    self.state[i][j] = 1
                else:
                    self.state[i][j] = -1
        return self.state

    def Energy_Calc(self):
        """Calculate the energy of each spin, with the
        Periodic Boundary Condition implemented."""
        self.E = np.zeros((self.L, self.L))

        # Taking care of spins except edges & corners
        self.E[1:-1, 1:-1] = -self.state[1:-1, 1:-1] * \
            (self.state[2:, 1:-1] + self.state[:-2, 1:-1] +
             self.state[1:-1, :-2] + self.state[1:-1, 2:])

        # Taking care of edges
        # left
        self.E[1:-1, 0] = -self.state[1:-1, 0] * \
            (self.state[2:, 0] + self.state[:-2, 0] +
             self.state[1:-1, -1] + self.state[1:-1, 1])
        # right
        self.E[1:-1, -1] = -self.state[1:-1, -1] * \
            (self.state[2:, -1] + self.state[:-2, -1] +
             self.state[1:-1, -2] + self.state[1:-1, 0])
        # top
        self.E[-1, 1:-1] = -self.state[-1, 1:-1] * \
            (self.state[0, 1:-1] + self.state[-2, 1:-1] +
             self.state[-1, :-2] + self.state[-1, 2:])
        # bottom
        self.E[0, 1:-1] = -self.state[0, 1:-1] * \
            (self.state[1, 1:-1] + self.state[-1, 1:-1] +
             self.state[0, :-2] + self.state[0, 2:])

        # Take care of corners, which must be handled manually
        # bottom left
        self.E[0, 0] = -self.state[0, 0] * \
            (self.state[1, 0] + self.state[-1, 0] +
             self.state[0, -1] + self.state[0, 1])
        # bottom right
        self.E[0, -1] = -self.state[0, -1] * \
            (self.state[1, -1] + self.state[-1, -1] +
             self.state[0, -2] + self.state[0, 0])
        # top left
        self.E[-1, 0] = -self.state[-1, 0] * \
            (self.state[0, 0] + self.state[-2, 0] +
             self.state[-1, -1] + self.state[-1, 1])
        # top right
        self.E[-1, -1] = -self.state[-1, -1] * \
            (self.state[0, -1] + self.state[-2, -1] +
             self.state[-1, -2] + self.state[-1, 0])
        return self.E

    def Choose_Spin(self):
        """Choose one spin at random & look up its energy."""
        self.x = randint(0, self.L)
        self.y = randint(0, self.L)
        self.s = self.state[self.x, self.y]
        self.LE = self.E[self.x, self.y]
        return self.s, self.LE

    def Flip(self):
        """Flip the chosen spin."""
        if self.s == 1:
            self.state[self.x, self.y] = -1
        else:
            self.state[self.x, self.y] = 1
        self.fs = self.state[self.x, self.y]
        return self.fs

    def New_Energy(self):
        """Calculate the new energy of that spin."""
        if self.fs == 1:
            self.NE = -self.fs * self.LE
        else:
            self.NE = self.fs * self.LE
        return self.NE

    def Change_Energy(self):
        """Calculate the change in energy for the flipped spin."""
        self.dE = self.NE - self.LE
        return self.dE

    def Decision(self):
        """Make a decision on the basis of the change in energy."""
        if self.dE < 0:
            self.state[self.x, self.y] = self.fs
        elif uniform(0, 1) < np.exp(-self.dE / self.T):
            self.state[self.x, self.y] = self.fs
        else:
            self.state[self.x, self.y] = self.s
        return self.state[self.x, self.y]

    def Magnetization(self):
        """Magnetization depends on the number of up or down spins;
        here we take it as the sum of all the spins."""
        return np.sum(self.state)

    def MC_Step(self, step=2000):
        """Perform the Monte Carlo moves to get the final configuration."""
        self.step = step
        for i in range(step):
            self.Energy_Calc()
            self.Choose_Spin()
            self.Flip()
            self.New_Energy()
            self.Change_Energy()
            self.Decision()
        return self.Magnetization()


if __name__ == "__main__":

    import os
    try:
        os.mkdir("Data")
    except OSError:
        pass

    for temp in np.linspace(0.1, 4, 128):
        y = Ising(16, temp)
        y.Initialize()
        Mag_Data = []
        for i in np.arange(500):
            Mag = y.MC_Step()
            Mag_Data.append(Mag)
        # The command below saves the data from the simulation.
        np.savetxt("Data/Temp_%s.csv" % temp, Mag_Data,
                   fmt="%i", header="Magnetization Samples")

APPENDIX B: Code Optimization

Our code can be further optimized by flipping the spin only after the change in energy ∆E turns out to be favorable. In the following, we briefly describe how to do this. Using equation 3.7 and pseudo-code 3.1.2, we have:

• If sj = 1, then ∆E = −sj × E(si) − E(si) = −(sj + 1) × E(si). Putting in the value of sj, we get ∆E = −2 × E(si).

• If sj = −1, then ∆E = sj × E(si) − E(si) = (sj − 1) × E(si). Putting in the value of sj, we get ∆E = −2 × E(si).

This tells us that the change in energy (∆E) does not depend upon which spin state we pick at random. So, our Monte Carlo step algorithm can be modified as follows (a code sketch is given after the list):

1. Make any arbitrary configuration.
2. Calculate each spin’s energy.
3. Select any one spin at random.
4. Calculate its change in energy using ∆E = −2 × E(si).
5. On the basis of the Metropolis Hastings algorithm, decide whether to flip the selected spin or not.
6. Repeat until the desired configuration is reached.

Due to the time bound, we have not implemented this algorithm; it was suggested by Jonah sir.
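A minimal sketch of what this optimized update could look like (an illustration under stated assumptions, not the report's implemented code; numpy assumed, names illustrative). The flip happens only once the move is accepted, and ∆E = −2E(si) is read directly from the per-site energy array:

import numpy as np

rng = np.random.default_rng()

def optimized_step(state, site_energy, T):
    """One single-spin update using dE = -2*E(s_i), flipping only on acceptance.
    `site_energy` is the per-site energy array E of equation 2.3 (as produced by
    Energy_Calc in Appendix A); it must be refreshed after an accepted flip."""
    L = state.shape[0]
    i, j = rng.integers(0, L, size=2)        # select one spin at random
    dE = -2.0 * site_energy[i, j]            # change in energy if we were to flip it
    if dE < 0 or rng.random() < np.exp(-dE / T):
        state[i, j] *= -1                    # flip only now that the move is accepted
    return state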


Bibliography

[1] R. J. Baxter. Onsager and Kaufman’s calculation of the spontaneous magnetization of the Ising model. arXiv:1103.3347v2, 2011.
[2] Stephen G. Brush. History of the Lenz-Ising model. Reviews of Modern Physics, 39(4), 1967.
[3] Barry A. Cipra. An introduction to the Ising model. American Mathematical Monthly, 94(10):937–959, 1987.
[4] David J. Griffiths. Introduction to Quantum Mechanics. Pearson Education, Inc., 2nd edition, 2005.
[5] Jacques Kotze. An introduction to Monte Carlo methods for an Ising model of a ferromagnet. arXiv:0803.0217v1, 2008.
[6] L. D. Landau and E. M. Lifshitz. Statistical Physics, Course of Theoretical Physics, volume 5. Oxford: Pergamon Press, 3rd edition, 1980. Translated by J. B. Sykes and M. J. Kearsley.
[7] Asher Preska Steinberg, Michael Kosowsky, and Seth Fraden. Simulations: The Ising model. Technical report, Brandeis University, Waltham, MA 02453, May 2013. Advanced Physics Lab.


Index

Avogadro number
Binder Cumulant
Boltzmann canonical distribution law
Boltzmann factor
Code optimization
Curie temperature
Ernst Ising
Ferromagnet
Gibbs distribution
Hamiltonian
Heat capacity
Lars Onsager
Lattice site
Magnetic moments
Magnetic susceptibility
Metropolis Hastings algorithm
Monte Carlo simulation
Monte Carlo step
Periodic Boundary Condition
Phase transition
Python
Quantum Mechanics
Single Flip Spin Dynamics
Source code
Spin
Spontaneous flipping
Spontaneous magnetization
Stern-Gerlach experiment
Torus
Wilhelm Lenz