Tags: python, python-3.x, physics, montecarlo, complex-networks

Ising model [Python]


I am trying to simulate the Ising phase transition in a Barabasi-Albert network and to replicate the results for observables such as magnetization and energy that one would observe in an Ising grid simulation. However, I am having trouble interpreting my results: I am not sure whether the physics is wrong or whether there is an error in the implementation. Here is a minimum working example:

import numpy as np 
import networkx as nx
import random
import math
import matplotlib.pyplot as plt

## sim params

# coupling constant
J = 1.0 # ferromagnetic

# temperature range, in units of J/k_B
t0 = 1.0
tn = 10.0
nt = 10  # number of temperature points (np.linspace needs an integer count)
T = np.linspace(t0, tn, nt)

# mc steps
steps = 1000

# generate a BA network: 200 nodes, each new node attached to 3 existing nodes
G = nx.barabasi_albert_graph(200, 3)

# get the adjacency matrix a_{ij} (sparse CSR) and convert it to a dense matrix
adj_matrix = nx.adjacency_matrix(G)
top = adj_matrix.todense()
N = len(top)

# initialize spins in the network, ferromagnetic
def init(N):
    return np.ones(N)

# calculate net magnetization
def netmag(state):
    return np.sum(state)

# calculate net energy, E = -J \sum_{ij} a_{ij} s_i s_j
def netenergy(N, state):
    en = 0.
    for i in range(N):
        for j in range(N):
            en += (-J)* top[i,j]*state[i]*state[j] 
    return en

# random sampling, metropolis local update
def montecarlo(state, N, beta, top):

    # initialize difference in energy between E_{old} and E_{new}
    delE = []

    # pick a random source node
    rsnode = np.random.randint(0,N)

    # get the spin of this node
    s2 = state[rsnode]

    # sum up the interaction energies of this node with all other nodes and append them to delE
    for tnode in range(N):
        s1 = state[tnode]
        delE.append(J * top[tnode, rsnode] *state[tnode]* state[rsnode])

    # calculate probability of a flip
    prob = math.exp(-np.sum(delE)*beta)

    # if this probability is greater than a number drawn uniformly from [0, 1), accept the flip;
    # otherwise retain the current state
    if prob> random.random():
        s2 *= -1
    state[rsnode] = s2

    return state

def simulate(N, top):

    # initialize arrays for observables
    magnetization = []
    energy = []
    specificheat = []
    susceptibility = []

    for count, t in enumerate(T):


        # some temporary variables
        e0 = m0 = e1 = m1 = 0.

        print('t =', t)

        # initialize spin vector
        state = init(N)

        for i in range(steps):

            montecarlo(state, N, 1/t, top)

            mag = netmag(state)
            ene = netenergy(N, state)

            e0 = e0 + ene
            m0 = m0 + mag
            e1 = e0 + ene * ene
            m1 = m0 + mag * mag

        # calculate thermodynamic variables and append to initialized arrays
        energy.append(e0/( steps * N))
        magnetization.append( m0 / ( steps * N)) 
        specificheat.append( e1/steps - e0*e0/(steps*steps) /(N* t * t))
        susceptibility.append( m1/steps - m0*m0/(steps*steps) /(N* t *t))

        print(energy, magnetization, specificheat, susceptibility)

        plt.figure(1)
        plt.plot(T, np.abs(magnetization), '-ko' )
        plt.xlabel('Temperature (kT)')
        plt.ylabel('Average Magnetization per spin')

        plt.figure(2)
        plt.plot(T, energy, '-ko' )
        plt.xlabel('Temperature (kT)')
        plt.ylabel('Average energy')

        plt.figure(3)
        plt.plot(T, specificheat, '-ko' )
        plt.xlabel('Temperature (kT)')
        plt.ylabel('Specific Heat')

        plt.figure(4)
        plt.plot(T, susceptibility, '-ko' )
        plt.xlabel('Temperature (kT)')
        plt.ylabel('Susceptibility')

simulate(N, top)

Observed results:

  1. Magnetization trend as a function of temperature
  2. Specific Heat

I have tried to comment the code heavily; if there is something I have overlooked, please ask.

Questions:

  1. Is the magnetization trend correct? Magnetization decreases as temperature increases, but I cannot identify a critical temperature for the phase transition.
  2. Energy approaches zero with increasing temperature; this seems consistent with what is observed in an Ising grid. Why do I get negative specific heat values?
  3. How does one choose the number of Monte Carlo steps? Is this just trial and error based on the number of network nodes?

EDIT 02.06: the simulation breaks down for the antiferromagnetic configuration.


Solution

  • First of all, since this is a programming site, let's analyse the program. Your computation is very inefficient, which makes it impractical to explore larger graphs. The adjacency matrix in your case has 200x200 = 40000 elements, of which only about 3% are non-zero. Converting it to a dense matrix means many more computations when evaluating the energy difference in the montecarlo routine and the net energy in netenergy. The following code performs 5x faster on my system, with a better speedup expected for larger graphs:

    # keep the topology as a sparse matrix
    top = nx.adjacency_matrix(G)
    
    def netenergy(N, state):
        en = 0.
        for i in range(N):
            ss = np.sum(state[top[i].nonzero()[1]])
            en += state[i] * ss
        return -0.5 * J * en
    

    Note the 0.5 in the factor - since the adjacency matrix is symmetric, each pair of spins is counted twice!
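
    As an aside, the same double-counted sum can be written as a sparse matrix-vector product, which removes the Python loop entirely. This is only a sketch (the name netenergy_vec is mine), assuming state is a 1-D NumPy array of +/-1 spins and top is the SciPy sparse matrix returned by nx.adjacency_matrix:

    def netenergy_vec(state):
        # top.dot(state) gives, for each node, the sum of its neighbouring spins;
        # the outer dot with state then counts every bond twice, hence the 0.5
        return -0.5 * J * state.dot(top.dot(state))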

    def montecarlo(state, N, beta, top):
        # pick a random source node
        rsnode = np.random.randint(0, N)
        # get the spin of this node
        s = state[rsnode]
        # sum of all neighbouring spins
        ss = np.sum(state[top[rsnode].nonzero()[1]])
        # transition energy
        delE = 2.0 * J * ss * s
        # calculate transition probability
        prob = math.exp(-delE * beta)
        # conditionally accept the transition
        if prob > random.random():
            s = -s
        state[rsnode] = s
    
        return state
    

    Note the factor 2.0 in the transition energy - it is missing in your code!
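
    To see where the 2 comes from, write the energy of the bonds attached to the flipped spin s_i before and after the flip, using the same Hamiltonian E = -J \sum_{ij} a_{ij} s_i s_j as above (only the terms containing s_i change):

    E_before = -J * s_i * \sum_j a_{ij} s_j
    E_after  = -J * (-s_i) * \sum_j a_{ij} s_j = +J * s_i * \sum_j a_{ij} s_j
    \Delta E = E_after - E_before = 2 * J * s_i * \sum_j a_{ij} s_j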

    There is some NumPy index magic here. top[i] is the sparse adjacency row vector for node i, and top[i].nonzero()[1] gives the column indices of its non-zero elements (top[i].nonzero()[0] gives the row indices, which are all 0 since it is a row vector). state[top[i].nonzero()[1]] is thus the array of spin values of the neighbours of node i.
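
    If the indexing is unclear, here is a tiny throw-away illustration on a hypothetical 4-node path graph 0-1-2-3, built directly as a csr_matrix so that it behaves like the adjacency matrix above:

    import numpy as np
    from scipy.sparse import csr_matrix

    # toy adjacency matrix of the path graph 0-1-2-3 (illustration only)
    Atoy = csr_matrix(np.array([[0, 1, 0, 0],
                                [1, 0, 1, 0],
                                [0, 1, 0, 1],
                                [0, 0, 1, 0]]))
    spins = np.array([1, -1, 1, 1])

    print(Atoy[1].nonzero()[1])                 # [0 2]  - the neighbours of node 1
    print(spins[Atoy[1].nonzero()[1]])          # [1 1]  - their spins
    print(np.sum(spins[Atoy[1].nonzero()[1]]))  # 2      - the neighbour-spin sum used above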

    Now to the physics. The thermodynamic properties are wrong because:

    e1 = e0 + ene * ene
    m1 = m0 + mag * mag
    

    should really be:

    e1 = e1 + ene * ene
    m1 = m1 + mag * mag
    

    And:

    specificheat.append( e1/steps - e0*e0/(steps*steps) /(N* t * t))
    susceptibility.append( m1/steps - m0*m0/(steps*steps) /(N* t *t))
    

    should be:

    specificheat.append((e1/steps - e0*e0/(steps*steps)) / (N * t * t))
    susceptibility.append((m1/steps - m0*m0/(steps*steps)) / (N * t))
    

    (you'd better average the energy and magnetisation early on)

    This brings the heat capacity and the magnetic susceptibility to the land of positive values. Note the single t in the denominator for the susceptibility.
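
    One way to read the "average early" remark is to accumulate per-spin values inside the measurement loop; a sketch follows (the names are those of the original code, and the final formulas are equivalent to the corrected ones above):

    # inside the loop over steps: accumulate per-spin energy and magnetisation
    ene = netenergy(N, state) / N
    mag = netmag(state) / N
    e0 += ene;  e1 += ene * ene
    m0 += mag;  m1 += mag * mag

    # after the loop: fluctuation formulas in per-spin form
    energy.append(e0 / steps)
    magnetization.append(m0 / steps)
    specificheat.append(N * (e1/steps - (e0/steps)**2) / (t * t))
    susceptibility.append(N * (m1/steps - (m0/steps)**2) / t)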

    Now that the program is (hopefully) correct, let's talk physics. For each temperature you start with an all-up spin state and then let it evolve one spin at a time. Obviously, unless the temperature is zero, this initial state is far from thermal equilibrium, so the system will start to drift towards the part of the state space that corresponds to the given temperature. This process is known as thermalisation, and collecting statistical information during that period is meaningless. You always have to split the simulation at a given temperature into two parts - thermalisation and the actually significant run. How many iterations are needed to reach equilibrium? Hard to say - employ a moving average of the energy and monitor when it becomes (relatively) stable.
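
    A minimal sketch of such a split (the burn-in length and the moving-average window below are arbitrary illustrative choices, not rules):

    # sketch: thermalise first, then measure
    state = init(N)
    window = 100                        # illustrative moving-average window
    recent = []
    for i in range(20 * N):             # illustrative upper bound on the burn-in
        montecarlo(state, N, 1.0 / t, top)
        recent.append(netenergy(N, state))
        if len(recent) > window:
            recent.pop(0)
        # once np.mean(recent) stops drifting, the system is roughly thermalised
        # and the measurement loop (collecting the statistics above) can start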

    Second, the update algorithm changes a single spin per iteration, which means that the program will explore the state space very slowly and you will need a huge number of iterations to get a good approximation of the partition function. With 200 spins, 1000 iterations might just not be enough.
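
    A common convention, not used in the code above, is to count one Monte Carlo "sweep" as N attempted single-spin flips, so that on average every spin gets one chance to flip per recorded step; a sketch:

    # sketch: one Monte Carlo sweep = N attempted single-spin flips
    def sweep(state, N, beta, top):
        for _ in range(N):
            montecarlo(state, N, beta, top)
        return state

    # the measurement loop then advances one sweep (not one flip) per iteration,
    # collecting statistics exactly as in the corrected loop above
    for i in range(steps):
        sweep(state, N, 1.0 / t, top)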

    The rest of the questions really do not belong here.