

NumPy provides a number of tools useful for simulations and for working with simple data types.
The Ising Model

The Ising model is a simple model of a ferromagnet, using a regular lattice of magnetic “spins” that influence each other to line up, fighting against entropy all the while.

Magnetic Spins

Magnets are atomic lattices with unpaired electrons, producing a spin magnetic moment at each position of the lattice:

[Figure: spin lattices at low, medium, and high temperature. At low temperature nearly all spins point the same way; as the temperature rises, more and more spins point in random directions.]

These spins are tiny magnets and like to align with each other, and collectively they can add up to a macroscopically measurable magnetization, such as in the bar magnet at the right. The higher the temperature, however, the more the thermal energy randomizes the spins, and the uniform alignment is broken up by patches of spins in other directions called domains, as in the red and green regions in the micrograph at the left. Above a critical temperature there is no longer a dominant magnetic direction, and the domain magnetizations cancel each other out. At infinite temperature the spins are completely random in their orientation, even over short distances.

We can describe the tendency of the spins to line up in terms of the (potential) energy of the spin interaction. Over the entire lattice of size N it is given by

E = – Σ_{i≠j} J_{ij} s_i · s_j

where J_{ij} is a positive coupling constant that incorporates the magnetic moment.
Complete spin alignment will have a large negative energy, while thermally randomized spins will have a higher energy, near zero. To simplify the model we can:

- restrict the interaction to nearest-neighbor (nn) spins, since the coupling falls off rapidly with distance;
- assume a single, uniform coupling constant J for every nearest-neighbor pair;
- treat each spin as a scalar that takes only the two values s_i = ±1.
The energy thereby becomes

E = –J Σ_{nn} s_i s_j

And we can represent the atomic lattice by an array of ±1 values; at zero temperature, with all spins aligned, a 10 × 10 lattice is simply:

1 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 1

This is known as the Ising Model of magnets, and as we will see it retains their basic characteristics. When the spins substantially align, there will be a measurable magnetization:

M = (1/N) ∣Σ_i s_i∣
The magnetization is an example of an order parameter, which we can use to monitor the system's order.

Spyns

Not surprisingly, we can implement an Ising system and calculate its magnetization using Python. We'll use a two-dimensional lattice of size N = L × L:

L = 10
N = L * L

NumPy can define arrays with preset values in a number of ways. For example, to create a system at zero temperature, all spins aligned, we use:

import numpy as np
spins = np.ones((L, L), dtype=int)

The first argument to np.ones is the shape of the array, here a tuple giving the two lattice dimensions. The second argument sets the data type, in this case NumPy integers (otherwise it would produce floats).

The magnetization will tell us the degree of order in the model:

M = lambda : np.absolute(np.sum(spins))/N
M()
⇒ 1.0

Perfectly ordered, by design! Warning: in Python 2.x, you'll want to use floating-point division here, since dividing two integers in that version truncates the result. The function M is defined as a lambda with no arguments so that each call recomputes the magnetization from the current state of spins.

To create a system at infinite temperature, with a random distribution of values, we can use a subset of NumPy tools, its random-number module:

import numpy.random as npr
spins = 2 * npr.randint(0, 2, (L, L)) - 1

The first two arguments to npr.randint give the range of integers to generate, from 0 up to but not including 2. The third argument gives the shape of the resulting array. The result is a NumPy array of integers (0, 1), which allows broadcast calculations, here multiplication by 2 and then subtraction of 1 to produce the desired values of (–1, 1).

The magnetization is close to zero, as it should be for a completely disordered state:

print('M = ', M())
⇒ M =  0.06

It will never be precisely zero, because it's defined here, essentially, as the standard deviation of the average spin; but the larger the lattice we choose, the closer to zero it will be:

M = (1/N) ∣Σ_i s_i∣ ~ (1/N) × N^{1/2} ~ 1/N^{1/2}

So for N = 100 we expect M ~ 0.1, and for N = 10,000 we expect M ~ 0.01.

Spyn Statistics

For a given temperature, how can we put this system in an equilibrium state so that we can calculate its order parameter? First we'd better define what we mean by “equilibrium”.
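The 1/√N scaling of the residual magnetization can be checked directly. A minimal sketch (the helper function, seed, and lattice sizes are my own example choices):

```python
import numpy as np
import numpy.random as npr

def magnetization(spins):
    """Order parameter M = |sum of spins| / N for a lattice of +-1 values."""
    return np.absolute(np.sum(spins)) / spins.size

npr.seed(0)  # fix the seed so repeated runs produce the same lattices

for L in (10, 30, 100):
    # Infinite-temperature lattice: random -1 and +1 values
    spins = 2 * npr.randint(0, 2, (L, L)) - 1
    # For random spins, M fluctuates around 1/sqrt(N) = 1/L
    print(L, magnetization(spins))
```

A zero-temperature lattice of all 1s gives exactly M = 1, while the random lattices give smaller and smaller values as L grows.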
Statistical mechanics tells us that, for a system in equilibrium at a temperature T, representing a bath of thermal energy that randomizes the spins, the probability of a particular configuration of the spins is proportional to its Boltzmann factor:

P({s_i}) ∝ exp(–E/kT)

where the Boltzmann constant k “converts” temperature to a corresponding energy. Clearly the lowest-energy state will have the highest probability. For the Ising Model:

P({s_i}) ∝ exp(–(–J Σ_{nn} s_i s_j)/kT) = exp(β Σ_{nn} s_i s_j)

where β = J/kT varies between ∞ and 0 as the temperature T varies between 0 and ∞. In particular:

- as T → 0 (β → ∞), the fully aligned, lowest-energy configurations dominate;
- as T → ∞ (β → 0), the Boltzmann factor approaches 1 for every configuration, so all configurations become equally likely and the spins are completely random.
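To get a feel for the Boltzmann factor, consider the relative probability of two configurations separated by a fixed energy gap at different temperatures. A small sketch (the gap and temperatures are arbitrary example values, with energy in units of J and k = 1):

```python
import math

def boltzmann_ratio(delta_E, kT):
    """Probability of the higher-energy state relative to the lower one."""
    return math.exp(-delta_E / kT)

# The same energy gap is heavily suppressed at low temperature,
# but almost irrelevant at high temperature:
cold = boltzmann_ratio(2.0, 0.5)   # low temperature: exp(-4), rare
hot = boltzmann_ratio(2.0, 10.0)   # high temperature: exp(-0.2), common
```

As kT grows, the ratio approaches 1 and all configurations become comparably likely, matching the limits above.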
Now consider an individual spin s_i and its neighbors s_{i,nn}; it has an energy relative to its neighbors of

E_i = –(J/2) s_i Σ_{nn} s_{i,nn}

where the factor of 1/2 avoids double-counting the energy once we include the neighboring spins. It's helpful to define a spin sum S_i as:

S_i = s_i Σ_{nn} s_{i,nn}
E_i = –(J/2) S_i

The probability of a particular microstate s_i given its neighbors s_{i,nn} is then:

P(s_i | s_{i,nn}) ∝ exp(–E_i/kT)

The distinct values of the individual spin sum, energy, and probability can be enumerated; in two dimensions, with four nearest neighbors, S_i can only take the values –4, –2, 0, 2, and 4, with corresponding energies E_i = –(J/2)S_i between 2J and –2J.
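These distinct values can be enumerated directly. A sketch for the two-dimensional case with four nearest neighbors (β is an arbitrary example value, and energies are in units of J):

```python
import itertools
import math

beta = 0.5   # example value of J/kT

table = []
for neighbors in itertools.product((-1, 1), repeat=4):
    s = 1                        # central spin up; flipping it flips the sign of S
    S = s * sum(neighbors)       # spin sum S_i = s_i * (sum over the 4 neighbors)
    E = -0.5 * S                 # E_i = -(J/2) S_i, in units of J
    P = math.exp(-beta * E)      # relative Boltzmann weight exp(-E_i/kT)
    table.append((S, E, P))

spin_sums = sorted({S for S, E, P in table})   # the five possible values of S_i
```

Aligned neighbors (large positive S_i) give the lowest energy and the largest relative weight.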
We can again see that:

- configurations where the central spin aligns with its neighbors have a positive spin sum, a negative energy, and a higher probability;
- configurations where the central spin opposes its neighbors have a negative spin sum, a positive energy, and a lower probability.
In addition, note that flipping the central spin will always invert the signs of S_i and E_i. Now suppose we were to flip the central spin and this lowered the energy. The spin sum becomes higher, and the probability is also higher. On the other hand, if flipping the central spin increased the energy, the spin sum becomes lower, and the probability will also be lower. When in equilibrium, the spins will flip back and forth over time, driven by the random energy of the temperature bath. However, on average, the system retains its state, in particular its degree of order. This means that the probability P of a microstate times the rate of transition W out of that microstate must be the same as for the reverse process, i.e. over time the transition probabilities are equal:

P(s_i | s_{i,nn}) W(s_i → –s_i) = P(–s_i | s_{i,nn}) W(–s_i → s_i)

This condition is known as detailed balance, and is what we mean when we say a system is in equilibrium. Detailed balance can also be used to drive a non-equilibrium system to equilibrium by flipping spins in a compatible manner, and this is the key to simulating such a system. The ratio of the two transition rates is given by

W(–s_i → s_i) / W(s_i → –s_i) = P(s_i | s_{i,nn}) / P(–s_i | s_{i,nn})

Suppose flipping the spin s_i → –s_i lowers the energy; again, this means that initially E_i > 0 and S_i < 0. Because only the above ratio is important, we can choose this transition rate equal to 1, i.e. the spin will always flip:

W(E_high → E_low) = 1

Consequently, when flipping the spin s_i → –s_i increases the energy, initially E_i < 0 and S_i > 0, we must have

W(E_low → E_high) = exp(–βS_i) < 1

and the spin only has a fractional probability of flipping.
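The two transition rates can be wrapped into a small helper function. A sketch (the function names are my own, not from the original code):

```python
import numpy as np
import numpy.random as npr

def flip_probability(S, beta):
    """Transition rate for flipping a spin with spin sum S:
    1 when the flip lowers (or keeps) the energy (S <= 0),
    exp(-beta*S) < 1 when the flip raises it (S > 0)."""
    return 1.0 if S <= 0 else np.exp(-beta * S)

def attempt_flip(S, beta):
    """Return True when the spin should be flipped, per detailed balance."""
    return npr.rand() < flip_probability(S, beta)
```

Comparing a uniform random number to the rate, as in attempt_flip, realizes the fractional flip probability.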
Increasing the energy is therefore less likely than decreasing it, and after repeated flipping attempts the lower-energy states will become the more probable ones, so that a non-equilibrium system must approach equilibrium over time.

Monte Carlo Simulations

Because of the randomizing effect of thermal energy and the probabilistic nature of the system energetics, we can use random numbers to equilibrate a system.

The Monte Carlo Procedure

Suppose we start with the spins at infinite temperature, which means they are randomly either ±1. We then instantaneously lower the temperature, a process known as quenching, and allow the system to equilibrate, or anneal. Alternatively, for low temperatures, we can start with a system at zero temperature, with a uniform distribution of either +1 or –1, and then instantaneously raise the temperature. We can simulate such systems as follows:

- pick a spin at random and calculate its spin sum S_i;
- if flipping the spin would lower (or keep) the energy, i.e. S_i ≤ 0, always flip it;
- otherwise (S_i > 0), flip it with probability exp(–βS_i), by comparing that value with a random number between 0 and 1;
- repeat until the system properties no longer change on average.
This Monte Carlo procedure is a standard practice in statistical science as a way to calculate the properties of complex systems.

The Monte Python Simulation

We'll save the previous code in a file named ising.py:

import numpy as np
import numpy.random as npr

We'll also set a relatively large value of β that should produce some amount of ordering, since it corresponds to a low temperature:

beta = 1

Now the next steps will take place in a loop after generating the spins, starting by randomly picking an initial spin, as sequential picking of spins would generate spatial correlations:

i = 0
while i < N:
    i += 1
    s = tuple(npr.randint(0, L, 2))

The last line returns two values in the range (0, L–1) as a NumPy array. It is converted to a tuple so that it can be used as an index selecting a single element of the two-dimensional spins array.

[Figure: lattices showing a randomly chosen spin s and its left, right, bottom, and top neighbors sl, sr, sb, st, including cases where s sits on an edge and a neighbor wraps around to the opposite side of the lattice.]

Now locate the indices of the adjacent spins: left, right, bottom, and top:

    (sx, sy) = s
    sl = (sx-1, sy)
    sr = ((sx+1)%L, sy)
    sb = (sx, sy-1)
    st = (sx, (sy+1)%L)

When the spin sits on the low edge of the lattice (sx or sy equal to 0), the neighbor index sx-1 or sy-1 is –1, which NumPy interprets as the last element, on the opposite side of the lattice. On the other hand, when the spin sits on the high edge (sx or sy equal to L–1), the modulo operation (sx+1)%L or (sy+1)%L wraps the index around to 0. These are known as circular boundary conditions, and they help to reduce errors associated with finite lattice size.

Now calculate the spin sum S_i = s_i Σ_{nn} s_{i,nn}:

    S = spins[s] * ( spins[sl] + spins[sr] + spins[sb] + spins[st] )

Finally, test for the energy condition:

    if S <= 0 :
        spins[s] *= -1
    else:
        W = np.exp(-beta*S)
        R = npr.rand()
        if R < W :
            spins[s] *= -1

There is another random number generator here, npr.rand(), which returns a single float uniformly distributed between 0 and 1; since R < W with probability W, the spin flips with exactly the fractional probability required by detailed balance.

Once i reaches N, this loop terminates. We can calculate the matrix and magnetization at this time to see how they have changed, with the following statements, which are not indented so that they are only calculated once:

print(spins)
print('M = ', M())

Passing through the lattice once, on average, represents a single unit of time, since the entire lattice should have the chance to update once, no matter what its size.
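The wrap-around behavior of the neighbor indices can be seen on a small test array. A sketch (the 5 × 5 array of distinct values is just for illustration):

```python
import numpy as np

L = 5
lattice = np.arange(L * L).reshape(L, L)   # distinct values make the wrapping visible

# On the low edge, sx - 1 is -1, which NumPy reads as "the last row":
low_edge_left = lattice[0 - 1, 0]

# On the high edge, (sx + 1) % L wraps back around to row 0:
high_edge_right = lattice[(L - 1 + 1) % L, 0]
```

Both neighbors land on the opposite edge of the lattice, which is exactly the circular boundary condition described above.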
To run the loop for multiple units of time, define a number of time steps tmax and change the loop condition to while i < tmax * N:

i = 0; tmax = 100

Ensembles

The calculations above are for particular lattice sizes, and over particular periods of time. But they are also for particular configurations of spins. The most likely state is calculated by considering the probabilities of all configurations, known as an ensemble, and averaging over them. Since a large lattice will have a huge number of configurations, 2^N, it's impractical to calculate every one. However, a random selection of spins such as the above can be repeated multiple times, and that should sample the configuration space, providing a good approximation after the results are averaged. We can add a loop to the above that repeats, for example, Z = 100 times, and accumulate each of the “measured” magnetizations for later calculation of their sample mean:

⟨M⟩ = (1/(Z–1)) Σ_Z M

We can also calculate the sample mean of the magnetization squared:

⟨M²⟩ = (1/(Z–1)) Σ_Z M²

which allows the calculation of the standard deviation of M around the mean, given by:

s_M = sqrt(⟨M²⟩ – ⟨M⟩²)

and then the standard error of the mean of M, which according to the Central Limit Theorem is given by:

σ_M = s_M / sqrt(Z–1)

All together, the code adds an extra loop for Z, with initializations beforehand, accumulations after each step, and final calculations afterward:

Z = 100
tmax = 100
m = 0
m2 = 0
Ms = ()

z = 0
while z < Z:
    z += 1
    i = 0
    while i < tmax * N:
        i += 1
        s = tuple(npr.randint(0, L, 2))
        (sx, sy) = s
        sl = (sx-1, sy)
        sr = ((sx+1)%L, sy)
        sb = (sx, sy-1)
        st = (sx, (sy+1)%L)
        S = spins[s] * ( spins[sl] + spins[sr] + spins[sb] + spins[st] )
        if S <= 0 :
            spins[s] *= -1
        else:
            W = np.exp(-beta*S)
            R = npr.rand()
            if R < W :
                spins[s] *= -1
    mi = M()
    Ms += (mi,)
    m += mi
    m2 += mi**2

Mavg = m / (Z-1)              # Average magnetization across the ensemble
M2avg = m2 / (Z-1)            # Average magnetization squared across the ensemble
sM = (M2avg - Mavg**2)**0.5   # Standard deviation of M around the mean
sigmaM = sM/(Z-1)**0.5        # Standard error of the mean of M

Accumulating the magnetizations in the tuple Ms allows us to plot a histogram of their distribution:

import matplotlib.pyplot as plt

figure, axes = plt.subplots()
n, bins, patches = axes.hist(Ms, 10)   # the histogram of the data in 10 bins

# add a 'normal' fit curve
y = ((1 / (np.sqrt(2 * np.pi) * sM)) *
     np.exp(-0.5 * ((bins - Mavg) / sM)**2))
y *= Z/np.sum(y)                       # scale to match total
axes.plot(bins, y, '--', color='orange')
axes.vlines(Mavg, 0, 25, color='orange')

axes.set_xlabel('Magnetization')
axes.set_ylabel('Count')
axes.set_title(r'Histogram of Magnetizations: $\beta$ = ' + str(beta) + \
    ', $M_{avg}$ = ' + str(round(Mavg,2)) + ', $s_M$ = ' + str(round(sM,2)))
plt.show()
plt.close()
High-Pyrformance Computing

Simulating a large number of variations of a physical model can take a lot of CPU time, and computing clusters provide a lot of CPUs.

The Amherst College Computing Cluster

Larger lattices can take a long time to process in the loop above, and as in Homework #5 they should then be averaged over a number of different configurations. While the script above should run well in Spyder, it will be very slow overall. We can therefore turn to the Amherst College Computing Cluster, which provides 452 CPU cores on 76 Unix computers and has Python already installed. On the cluster we can run a number of calculations, or jobs, on these multiple cores and for longer periods of time. Cluster jobs are prepared, tested, and submitted on an entry computer or head node, which you can access over the Internet. The College Computing Cluster's head node has the domain name cluster.amherst.edu.

Procedure 1: Connecting to the Computing Cluster

The Cluster's computing environment requires the use of text commands rather than a graphical user interface, so you'll need to open a command-line interpreter, also called a “terminal” or “shell” program, and log in to the head node of the cluster over the network.
In your terminal, type the command:

ssh username@cluster.amherst.edu

(replacing username with your own user ID) and press the Return key; then enter your password when prompted. If you instead use a graphical SSH client, enter the host name cluster.amherst.edu and save the session under a name such as Cluster. In subsequent sessions you can simply click on the saved session Cluster and then click on the buttons Load and Save to use this configuration.

Once logged in, check your location and list your files:

pwd
ls

The first is a folder to store old projects; the second is a folder for current projects. Move into the latter and create a working folder for this project:

cd clusterscratch
mkdir ising
cd ising

Procedure 2: Sharing Files with the Computing Cluster

To submit jobs to the cluster, you must copy your Python script over to the head node of the cluster.
As each of these is a distinct network drive, they must be mounted separately on your computer (note: the second two will also appear as subfolders of the first, but they are not accessible that way, only when logged in to the head node).

Python Scripts on the Computing Cluster

Once you've mapped a network drive to your local computer as in Procedure 2, you can also use the Spyder IDE to read and write files on it.

Procedure 3: Setting Up Spyder to Use the Computing Cluster

To work with your Python script on the computing cluster, it's simplest to set up Spyder to reference your working folder in clusterscratch.
Set Spyder's working directory to the ising folder on the mounted network drive. To verify that the directory change worked, type IPython's built-in “print working directory” command:

%pwd
⇒ '/Volumes/clusterscratch/ising'

When running your script on the Computing Cluster, there are a few changes you'll want to make so that it works more simply with Unix and the Cluster's job-management software.

Procedure 4: Making a Python Script Self-Executable

To simplify the execution of your script, you can make it run as a “self-executable” Unix application, rather than as a file that's an argument for the Python interpreter.
Add the following as the very first line of ising.py:

#! /usr/local/bin/python3

The first two characters are known as a shebang, and tell Unix to run the script with the specified interpreter. Then give the script executable permissions and check them:

chmod +x ising.py
ls -l

The three groups of permissions here are for the file's owner, the file's group, and all other users; the x in a group indicates permission to execute. You can now run the script directly:

./ising.py

One additional useful capability is to be able to hand your executable one or more arguments that could have different values for different jobs in the cluster. In Unix, such command-line arguments are typed following the initial command and separated by spaces, e.g.

./ising.py 1

where the argument 1 is the desired value of beta. If you import the “system-specific” module sys:

import sys

then you will have access to command-line arguments, passed to your executable as a list of strings:

sys.argv
⇒ ['./ising.py', '1']

beta = float(sys.argv[1])
⇒ 1.0

Note that item 0 of sys.argv is the name of the executable itself, so the first argument is item 1, and it must be converted from a string to the desired numeric type. In general, you will want to test to make sure that the expected number of arguments is provided, and either set a default value for missing arguments or exit the program with an error message (exiting is another function provided by the sys module, sys.exit):

if len(sys.argv) < 2 :
    sys.exit('Usage: ./ising.py beta')

Then when running the executable without arguments:

./ising.py
⇒ Usage: ./ising.py beta

One other important change for your executable is to format its output to make it easier to analyze large quantities of data; most simply, you can write comma-separated values so that the results of different jobs can all be concatenated together into a table, starting with a header line:

print('L,beta,Mavg,sigmaM')
⇒ L,beta,Mavg,sigmaM

followed by a similar comma-separated print of the values themselves.

The Condor Job-Management System

The Condor job-management software provides an efficient means to control a project running on a large number of computers. A set of related jobs is submitted as a cluster of jobs that are automatically distributed over any available computer cores. By design, CPU time is shared with other researchers; the general behavior is that newer clusters are given priority, and the longer they run, the lower their priority becomes.
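The argument handling can be collected into a small function that is easy to test outside the command line. A sketch (the function name and the default-value handling are my own additions, not part of the original script):

```python
import sys

def parse_beta(argv, default=None):
    """Return beta from the first command-line argument.
    Fall back to a default if one is given, else exit with a usage message."""
    if len(argv) < 2:
        if default is not None:
            return default
        sys.exit('Usage: ' + argv[0] + ' beta')
    return float(argv[1])

# Simulated command lines; in the real script you would call parse_beta(sys.argv):
beta = parse_beta(['./ising.py', '1'])      # argument supplied
fallback = parse_beta(['./ising.py'], 0.5)  # argument missing, default used
```

Passing sys.exit a string prints it to standard error and terminates with a failure status, which is what the cluster's job manager will record.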
Along with your executable program, Condor will need a command file for each job cluster, which provides details on how to run each job, including any command-line arguments that might vary from job to job. The basic structure of a command file is a series of key–value pairs, which describe either global job properties or individual job properties, for example:

## Global job properties
universe = vanilla
notification = error

As with Python, a line beginning with # is a comment. The keyword universe describes the run-time environment for the jobs; vanilla is the standard environment for ordinary executables. The keyword notification determines when Condor sends email about a job; the value error restricts email to occasions when a job fails.
A notification email will look something like this:

    This is an automated email from the Condor system
    Condor job 150601.0
    ...

If you set notify_user, the notification email will be sent to the specified address. The remaining keywords describe the individual jobs: the keyword arguments provides the command-line arguments for a job; the keywords output, error, and log name files that collect the job's standard output, its standard error, and Condor's own record of the job's progress; and the keyword queue ends the description of a job and adds it to the list of jobs to be run. There are many more command-file keywords that might occasionally be useful and that are described in the Condor documentation starting on p. 717. Once you've created your command file, use condor_submit to submit its jobs to the cluster (see Procedure 6 below).

Example: Creating a Condor Command File Using Python

A Condor command file is required to run a project on the Computing Cluster, to describe the characteristics of each job in the project cluster. The command file can be very long, and the arguments and files will vary from job to job, so it's convenient to create it from a program, using almost any language; the example here will be Python (of course). This is especially useful when job arguments can be generated algorithmically. For the Monte Python simulation above, an example Condor command file generator “isingcmd.py” is downloadable here, and is detailed below:
#! /usr/local/bin/python3 import os, math # Create file references for this project: projectName = 'ising' executable = projectName + '.py' baseDirectory = os.getcwd() commandFile = baseDirectory + '/' + projectName + '.cmd' The generator first writes out global job parameters into the command file:
# Write the global parameters needed for all jobs into the command file:
commands = open(commandFile, 'w')
commands.writelines(['## Global job properties\n',
                     'universe = vanilla\n',
                     'notification = error\n',
                     'notify_user = username@amherst.edu\n',
                     'executable = ' + executable + '\n'])

(The executable keyword names the program that each job runs.) The results directory is created using more functions from the os module:

# Create the results directory and limit accessibility to yourself with
# the permission rwx------ (0o700):
baseResultsDirectory = baseDirectory + '/results'
if not os.path.exists(baseResultsDirectory) :
    os.mkdir(baseResultsDirectory, 0o700)

The job arguments are generated with a simple algorithm for the values of beta:

# Job arguments, here [0.1, 0.2, 0.3, ..., 1.3, 1.4, 1.5]:
betaValues = [(x+1)/10 for x in range(15)]
jobsCount = len(betaValues)

Each job will have its own results directory within the main results directory, and giving them uniform-length names will allow them to be sorted in numerical order:

# Create a format string that will produce results directory names that
# have a standard length for sorting purposes, e.g. 'results/%02d'
# => 'results/00', ..., 'results/09', 'results/10', ..., 'results/99'
resultsDirectoryFormat = baseResultsDirectory + '/%02d'

Now loop through the jobs and create their individual results directories by applying the string format operator:

# Loop through the parameters to be passed for each job:
for job in range(jobsCount) :

    # Create the results directory for this specific job.
    resultsDirectory = resultsDirectoryFormat % job
    if not os.path.exists(resultsDirectory) :
        os.mkdir(resultsDirectory, 0o700)

In this loop the parameters for each job are also written into the command file:
    # Write the arguments for this job.
    commands.writelines(['\n## Job properties\n',
                         'arguments = ' + str(betaValues[job]) + '\n',
                         'output = ' + resultsDirectory + '/out\n',
                         'error = ' + resultsDirectory + '/err\n',
                         'log = ' + resultsDirectory + '/log\n',
                         'queue\n'])

Finally, close the command file:

commands.close()

The command file generator should be saved on the computing cluster in the same location as your project executable. As with your executable, the command file generator should be given executable permissions:

chmod +x isingcmd.py

The command file generator can then be run to create the command file and the results directories:

./isingcmd.py

And you should also verify the presence of the results directories:

ls results
⇒ 00 01 02 03 04 05 06 07 08 09 10 11 12 13 14
Procedure 6: Running a Project on the Computing Cluster

Once you have an application and a Condor command file describing how to run it, such as described above, there are just a few commands to know to get it running on the cluster:
condor_submit ising.cmd
⇒ Submitting job(s)...

Note the cluster ID that Condor assigns when you submit (149415 in the example below); you'll use it to monitor and manage your jobs:

condor_q -nobatch username
⇒ -- Submitter: csc2node01.amherst.edu : <148.85.78.21:46911>
 ID          OWNER     SUBMITTED     RUN_TIME    ST PRI SIZE CMD
 149415.0    username  12/18 16:56   0+00:03:34  R  5   9.8  ising.py 0.1
 149415.1    username  12/18 16:56   0+00:03:34  R  5   9.8  ising.py 0.2
 ....
 149415.14   username  12/18 16:56   0+00:03:34  R  5   9.8  ising.py 1.4
 149415.15   username  12/18 16:56   0+00:03:34  R  5   9.8  ising.py 1.5

Total for query: 15 jobs; 0 completed, 0 removed, 0 idle, 15 running, 0 held, 0 suspended
Total for username: 15 jobs; 0 completed, 0 removed, 0 idle, 15 running, 0 held, 0 suspended
Total for all users: 15 jobs; 0 completed, 0 removed, 0 idle, 15 running, 0 held, 0 suspended

With the option -nobatch, each job is listed on its own line rather than summarized as a batch. The actual command that Condor is running is shown at the end of each line, and you should verify that it's what you expect. The status of each job is listed in the ST column, e.g. R for running or I for idle. To remove jobs, use condor_rm:

condor_rm 149415 149416.9

The command arguments are a list of cluster IDs and/or cluster.process IDs, or your user ID to remove all of your jobs. There are many more commands that might occasionally be useful and which are described in the Condor documentation.
Procedure 7: Processing Your Results

The results of your cluster jobs are now stored in the numbered directories under results, one per job.
cat results/*/out > ising.csv

The wild card character * matches every job directory within results, so this concatenates the out files from all of the jobs into a single file. Unfortunately, the header line is repeated for each job:

cat ising.csv
⇒ L,beta,Mavg,sigmaM
  10,0.10,0.077,0.006
  L,beta,Mavg,sigmaM
  10,0.20,0.105,0.008
  ....
  L,beta,Mavg,sigmaM
  10,1.40,0.978,0.011
  L,beta,Mavg,sigmaM
  10,1.50,0.978,0.013

We can strip the repeated headers with sed, keeping only the first:

cat results/*/out | sed -e 1n -e /L/d > ising.csv

The pipe character | sends the output of cat to the input of sed; the command 1n passes the first line (the header) through unchanged, and /L/d deletes any later line containing an L:

cat ising.csv
⇒ L,beta,Mavg,sigmaM
  10,0.10,0.087,0.006
  10,0.20,0.103,0.007
  ....
  10,1.40,0.989,0.002

Unix has a wealth of such commands to process the contents of files; you can find a list of them on Wikipedia. The table can now be loaded into NumPy and plotted:

ising = np.loadtxt('ising.csv', delimiter=',', skiprows = 1)
ising
⇒ array([[1.0e+01, 1.0e-01, 9.0e-02, 7.0e-03],
        [1.0e+01, 2.0e-01, 1.1e-01, 7.0e-03],
        ....
        [1.0e+01, 1.4e+00, 9.9e-01, 1.0e-03],
        [1.0e+01, 1.5e+00, 1.0e+00, 1.0e-03]])

import matplotlib.pyplot as plt
plt.plot(ising[:,1], ising[:,2])

The error bars here are relatively small, so a simple line plot of the mean magnetization against β suffices.

Problem 6: The Full Monte

Using the computing cluster, repeat Problem 5 over larger arrays and longer times, by generalizing the command-line input of your executable to allow run-time choices of these values. Calculate in each case the magnetization:

⟨M⟩ = (1/Z) Σ_Z M

the average square magnetization:

⟨M²⟩ = (1/Z) Σ_Z M²

and then the standard error of the mean value of M:

σ_M = sqrt[(⟨M²⟩ – ⟨M⟩²)/Z]
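Since the fourth column of ising.csv is the standard error σ_M, the plot can also show error bars using matplotlib's errorbar function. A sketch, with synthetic stand-in data in place of the real ising.csv and the non-interactive Agg backend so it runs without a display (interactively you would use plt.show() instead):

```python
import numpy as np
import matplotlib
matplotlib.use('Agg')   # draw off-screen to a file
import matplotlib.pyplot as plt

# Synthetic stand-in for np.loadtxt('ising.csv', ...): columns L, beta, Mavg, sigmaM
ising = np.array([[10, 0.1, 0.09, 0.007],
                  [10, 0.5, 0.30, 0.020],
                  [10, 1.0, 0.90, 0.010],
                  [10, 1.5, 0.99, 0.002]])

fig, ax = plt.subplots()
ax.errorbar(ising[:, 1], ising[:, 2], yerr=ising[:, 3], capsize=3)
ax.set_xlabel(r'$\beta = J/kT$')
ax.set_ylabel('Magnetization')
fig.savefig('ising_errorbar.png')
```

The yerr argument draws a vertical bar of height ±σ_M at each point, and capsize adds small horizontal caps to the bars.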



