Abstract
In this chapter we will introduce the powerful theoretical methods of renormalization. The fundamental idea is that at \(p=p_c\), a rescaling of the system does not change the most important features. By a rescaling we typically mean a coarse-graining of the system, such as merging \(2 \times 2\) cells into a single cell. The rule we use to choose the occupation probability of the new, coarse-grained cell, \(p'\), is a function of the probability p of the original lattice, \(p' = R(p)\). In renormalization theory, we use properties of this mapping, \(R(p)\), to deduce properties of the system such as critical exponents. In this chapter, you will be introduced to the fundamentals of renormalization theory in the context of percolation systems, in which the geometric nature of the remapping allows us to build intuition about renormalization as a concept. We will also apply the theory to different lattice structures and to one-, two- and three-dimensional systems.
We have now learned that when p approaches \(p:c\), the correlation length grows to infinity, and the spanning cluster becomes a self-similar fractal structure. This implies that the spanning cluster at \(p:c\) has statistical self-similarity: if we cut out a piece of the spanning cluster, and rescale the lengths in the system, the rescaled system will have the same statistical geometrical properties as the original system. In particular, the rescaled system will have the same mass scaling relation: it will also be a self-similar fractal with the same scaling properties.
What happens when \(p \neq p_c\)? In this case, there will be a finite correlation length, \(\xi \), and a rescaling of the lengths in the system implies that the correlation length is also rescaled. A rescaling by a factor b corresponds to making a coarse-graining over \(b^d\) sites in order to form the new lattice. Now, we will simply assume that this also implies that the correlation length is reduced by a factor b: \(\xi ' = \xi /b\). After a few iterations of this rescaling procedure, the correlation length will correspond to the lattice size and the lattice will be uniform.
We could have made this argument even simpler by initially stating that we divide the system into parts that are larger than the correlation length. Again, this would lead to a system that is homogeneous from the smallest lattice spacing and upwards. We can conclude that when \(p < p_c\), the system behaves as a uniform, unconnected system, and when \(p > p_c\), the system is uniform and connected.
The argument we have sketched above is the essence of the renormalization group argument. It is only exactly at \(p = p_c\) that an iterative rescaling is a non-trivial fixpoint: the system iterates onto itself because it is a self-similar fractal. When p is away from \(p_c\), rescaling iterations will make the system progressively more homogeneous, and effectively bring the rescaled p towards either 0 or 1.
In this chapter we will provide an introduction to the theoretical framework for renormalization. This is a powerful set of techniques, introduced for equilibrium critical phenomena by Kadanoff [19] in 1966 and by Wilson [39] in 1971. Wilson later received the Nobel prize for his work on critical phenomena.
7.1 The Renormalization Mapping
What happens when we coarse-grain a percolation system? What does it mean to coarse-grain? It means that we replace a \(2 \times 2\) cell with a single cell using a specified rule, which aims at retaining connectivity. An example of such a rule is given in Fig. 7.1. For each possible \(2 \times 2\) configuration, we show if it maps onto an occupied or an empty cell. Let us now apply this rule to a \(64 \times 64\) system for three different values of p as illustrated in Fig. 7.2. We iterate the procedure several times, reducing the system size by a factor of 2 each time.
Behavior Through Iterations
What happens in this system? When \(p=p_c\), \(\xi \) is infinite. This means that \(\xi \) does not change when we divide the system size by 2. We see that the system appears similar throughout the iteration, and the final single site is occupied. This is because the system is a self-similar fractal and does not change significantly through the iterations. What happens when \(p>p_c\)? In this case, we see that the system becomes more homogeneous through each iteration and eventually the whole system is filled. This means that the effective percolation probability becomes higher through the iterations. Similarly, when \(p<p_c\), the system becomes more homogeneous, but also more empty, as the iterations proceed. This means that the effective percolation probability becomes lower through the iterations.
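This coarse-graining experiment is easy to reproduce numerically. The sketch below assumes the horizontal-spanning rule for the \(2 \times 2\) cell (a cell maps onto an occupied site if either of its rows is fully occupied); the function name `coarse_grain` is illustrative. The occupied fraction drifts toward 0 or 1 as the iterations proceed:

```python
import numpy as np

def coarse_grain(lattice):
    # A 2x2 cell maps onto an occupied site if at least one of its
    # two rows is fully occupied (horizontal spanning).
    top = lattice[0::2, 0::2] & lattice[0::2, 1::2]
    bottom = lattice[1::2, 0::2] & lattice[1::2, 1::2]
    return top | bottom

rng = np.random.default_rng(7)
for p in (0.45, 0.75):
    lattice = rng.random((64, 64)) < p
    occupied = [round(lattice.mean(), 2)]
    for _ in range(5):                 # 64 -> 32 -> 16 -> 8 -> 4 -> 2
        lattice = coarse_grain(lattice)
        occupied.append(round(lattice.mean(), 2))
    print(p, occupied)
```

With this particular rule the occupied fraction decreases toward 0 for the lower value of p and increases toward 1 for the higher one, mirroring the behavior seen in Fig. 7.2.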
Renormalization Means Changing the Occupation Probability
In the original lattice the occupation probability is p. However, through our coarse-graining procedure, we may change the occupation probability for the new, averaged sites. We will therefore call the new occupation probability \(p'\), the probability to occupy a renormalized site. We write the mapping between the original and the new occupation probabilities as

\(p' = R(p) \, ,\)
where the renormalization function \(R(p)\), which provides the mapping, depends on the details of the rule used for renormalization.
Selecting a Renormalization Rule
There are many choices for the mapping between the original and the renormalized lattice. We have illustrated a particular mapping with a rescaling \(b=2\) in Fig. 7.1. Such a mapping describes how each of the \(2^4=16\) possible configurations c of the \(2 \times 2\) cell is mapped onto a \(1 \times 1\) single site through a function \(f(c)\), where \(f(c)\) is 1 if the new site is occupied and 0 if it is empty. The renormalization mapping is then

\(p' = R(p) = \sum _c f(c) P(c) \, ,\)
where \(P(c)\) is the probability for configuration c. It is often practical to organize the configurations into classes k, where each class has the same number of occupied sites and hence the same probability \(P(k)\), and the number of configurations in class k is called the multiplicity \(g(k)\) of the class. Expressed in terms of the classes k, the renormalization mapping is

\(p' = R(p) = \sum _k f(k) g(k) P(k) \, .\)
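The classes and their multiplicities are easy to check by enumeration. The sketch below lists all \(2^4 = 16\) configurations of the \(2 \times 2\) cell, counts the multiplicity \(g(k)\) of each class, and, assuming horizontal spanning as the occupation criterion, counts how many configurations in each class span:

```python
from itertools import product
from math import comb

# Sites of the 2x2 cell: (top-left, top-right, bottom-left, bottom-right).
g = {}      # multiplicity g(k) of each class k (k = number of occupied sites)
span = {}   # configurations in class k that span horizontally
for tl, tr, bl, br in product((0, 1), repeat=4):
    k = tl + tr + bl + br
    g[k] = g.get(k, 0) + 1
    if (tl and tr) or (bl and br):      # a fully occupied row
        span[k] = span.get(k, 0) + 1
print(g)      # {0: 1, 1: 4, 2: 6, 3: 4, 4: 1}
print(span)   # {2: 2, 3: 4, 4: 1}
```

The multiplicities are the binomial coefficients \(g(k) = \binom{4}{k}\), and the spanning counts 1, 4 and 2 are the coefficients that appear in the horizontal-spanning mapping used later in this chapter.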
For the particular mapping provided in Fig. 7.1, the renormalization mapping becomes

\(p' = R(p) = p^4 + 4 p^3 (1-p) + 2 p^2 (1-p)^2 \, .\)
This illustrates a particular rule, but there are many possible rules. Usually, we want to ensure that important aspects of the percolation system are preserved by the mapping. For example, we would want the mapping to conserve connectivity. That is, we would like the renormalized site to be occupied exactly when the original cell is spanning, so that

\(p' = R(p) = \varPi (p, b) \, ,\)

where \(\varPi (p, b)\) is the spanning probability of a cell of size b.
However, even though we may ensure this on the level of the mapping, this does not ensure that the mapping actually conserves connectivity when applied to a large cluster. It may, for example, connect clusters that were unconnected in the original lattice, or disconnect clusters that were connected, as illustrated in Fig. 7.3.
Properties of the Renormalization Mapping
First, we will not consider the details of the renormalization mapping \(p'=R(p)\), but instead assume that such a map exists and study its qualitative features. Then we will address detailed properties of the renormalization mapping through two worked examples.
For any choice of mapping, the rescaling will result in a change in the correlation length \(\xi \):

\(\xi ' = \xi / b \, .\)
We will use this relation to address the behavior of fixpoints of the mapping.
Fixpoint A fixpoint of a mapping \(R(p)\) is a point \(p^{\ast }\) that does not change when the mapping is applied. That is,

\(p^{\ast } = R(p^{\ast }) \, .\)
Trivial Fixpoints
At a fixpoint, the iteration relation for the correlation length becomes:

\(\xi (p^{\ast }) = \xi (p^{\ast }) / b \, .\)
The only possible solutions for this equation are \(\xi = 0\) or \(\xi = \infty \). We call the case when \(\xi = 0\) a trivial fixpoint. There are two trivial fixpoints for any renormalization mapping, at \(p=0\) and at \(p=1\).
Stable and Unstable Fixpoints
Let us assume that there exists a non-trivial fixpoint \(p^{\ast }\), and let us address the behavior for p close to \(p^{\ast }\). We notice that for any finite \(\xi \), iterations by the renormalization relation will reduce \(\xi \). That is, both for \(p < p^{\ast }\) and for \(p > p^{\ast }\) iterations will make \(\xi \) smaller. This implies that iterations will take the system further away from the non-trivial fixpoint, where the correlation length is infinite. The non-trivial fixpoint is therefore an unstable fixpoint. Similarly, for p close to a trivial fixpoint, where \(\xi = 0\), iterations will decrease \(\xi \), and the renormalized system will move closer to the fixpoint in each iteration. The trivial fixpoint is therefore stable.
Graphical Iteration of the Renormalization Relation
Iterations by the renormalization relation \(p' = R(p)\) may be studied on the graph \(R(p)\), as illustrated in Fig. 7.4. Consecutive iterations take the system along the arrows illustrated in the figure. Notice that the line \(p'=p\) is drawn as a dotted reference line. In the figure, the two end points, \(p=0\) and \(p=1\) are the only stable fixpoints, and the point \(p^{\ast }\) is the only unstable fixpoint. The actual shape of the function \(R(p)\) depends on the renormalization rule, and the shape may be more complex than what is illustrated in Fig. 7.4.
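The flow along the arrows in Fig. 7.4 can be followed numerically. As a sketch, we use a concrete mapping, the \(2 \times 2\) horizontal-spanning rule treated in Sect. 7.2.2, for which \(R(p) = p^4 + 4p^3(1-p) + 2p^2(1-p)^2\); starting points on either side of its unstable fixpoint \(p^{\ast } \approx 0.618\) flow to the two stable fixpoints:

```python
# Repeated renormalization p -> R(p) for the 2x2 horizontal-spanning rule.
def R(p):
    return p**4 + 4 * p**3 * (1 - p) + 2 * p**2 * (1 - p)**2

for p0 in (0.55, 0.65):       # one start below p*, one above
    p = p0
    for _ in range(20):
        p = R(p)
    print(p0, "->", round(p, 6))
```

Starting at \(p_0 = 0.55\) the iterates flow toward the stable fixpoint at 0, while \(p_0 = 0.65\) flows toward the stable fixpoint at 1, exactly the behavior sketched in Fig. 7.4.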
7.1.1 Iterating the Renormalization Mapping
We are now ready for a more quantitative argument for the effect of iterations through the renormalization mapping \(R(p)\). First, we notice that the non-trivial fixpoint corresponds to the percolation threshold of the renormalization model, since the correlation length is diverging for this value of p. (This does not imply that \(p^{\ast }\) is equal to \(p_c\). As we shall see, \(p^{\ast }\) depends on the choice of \(R(p)\).)
We will now assume that \(R(p)\) is differentiable, which it is, since \(R(p)\) is a sum of polynomials in p and \(1-p\). Let us study the behavior close to \(p^{\ast }\) through a Taylor expansion of the mapping \(p' = R(p)\). First, we notice that

\(p' - p^{\ast } = R(p) - R(p^{\ast }) \, ,\)
because \(p' = R(p)\) and \(p^{\ast } = R(p^{\ast })\). The Taylor expansion of \(R(p)\) for a p close to \(p^{\ast }\) is:

\(R(p) = R(p^{\ast }) + R'(p^{\ast }) (p - p^{\ast }) + \mathcal {O}\left ( (p - p^{\ast })^2 \right ) \, .\)
If we define \(\varLambda = R'(p^{\ast })\), we get, to first order in \(p - p^{\ast }\):

\(p' - p^{\ast } = \varLambda (p - p^{\ast }) \, .\)
We see that the value of \(\varLambda \) characterizes the fixpoint. For \(\varLambda > 1\) the new point \(p'\) will be further away from \(p^{\ast }\) than the initial point p. Consequently, the fixpoint is unstable. By a similar argument, we see that for \(\varLambda < 1\) the fixpoint is stable. For \(\varLambda = 1\) we call the fixpoint a marginal fixpoint.
Let us now assume that the fixpoint is indeed the percolation threshold. In this case, when p is close to \(p_c\), we know that the correlation length is

\(\xi = \xi _0 |p - p_c|^{-\nu }\)
for the initial point, and

\(\xi ' = \xi _0 |p' - p_c|^{-\nu }\)
for the renormalized point. We will now use the linearized mapping \(p' - p^{\ast } = \varLambda (p - p^{\ast })\) for \(p^{\ast } = p_c\), giving

\(p' - p_c = \varLambda (p - p_c) \, .\)
Inserting this into the expression for \(\xi '\) gives

\(\xi ' = \xi _0 |\varLambda (p - p_c)|^{-\nu } = \varLambda ^{-\nu } \xi _0 |p - p_c|^{-\nu } \, .\)
We can rewrite this using \(\xi (p) = \xi _0 |p - p_c|^{-\nu }\):

\(\xi ' = \varLambda ^{-\nu } \xi (p) \, .\)
However, we also know that

\(\xi ' = \xi / b \, .\)
Consequently, we have found that

\(\varLambda ^{-\nu } = b^{-1} \, , \quad \text {that is,} \quad \varLambda ^{\nu } = b \, .\)
This implies that the exponent \(\nu \) is a property of the fixpoint of the mapping \(R(p)\). We can find \(\nu \) from

\(\nu = \frac {\ln b}{\ln \varLambda } \, ,\)
where we remember that \(\varLambda = R'(p_c)\).
7.2 Examples
In the following we provide several examples of the application of the renormalization theory. Our renormalization procedure can be summarized in the following steps
1. Coarse-grain the system into cells of size \(b^d\).
2. Find a rule to determine the new occupation probability, \(p'\), from the old occupation probability, p: \(p' = R(p)\).
3. Determine the non-trivial fixpoints, \(p^{\ast }\), of the renormalization mapping, \(p^{\ast } = R(p^{\ast })\), and use these points as approximations for \(p_c\): \(p_c = p^{\ast }\).
4. Determine the rescaling factor \(\varLambda \) from the renormalization relation at the fixpoint: \(\varLambda = R'(p^{\ast })\).
5. Find \(\nu \) from the relation \(\nu = \ln b / \ln \varLambda \).
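Steps 3 to 5 can be carried out numerically for any polynomial mapping. The sketch below locates the non-trivial fixpoint by bisection and estimates \(\varLambda \) by a central difference; the helper names are illustrative, and the demonstration uses the \(2 \times 2\) horizontal-spanning rule with \(b = 2\):

```python
from math import log

def fixpoint(R, lo=0.01, hi=0.99, n=100):
    # Step 3: locate the non-trivial fixpoint p* of p' = R(p) by
    # bisection on g(p) = R(p) - p, assuming a sign change on [lo, hi].
    g_lo = R(lo) - lo
    for _ in range(n):
        mid = 0.5 * (lo + hi)
        g_mid = R(mid) - mid
        if g_lo * g_mid <= 0:
            hi = mid
        else:
            lo, g_lo = mid, g_mid
    return 0.5 * (lo + hi)

def exponent_nu(R, b, p_star, h=1e-6):
    # Steps 4 and 5: Lambda = R'(p*) by a central difference,
    # then nu = ln b / ln Lambda.
    Lam = (R(p_star + h) - R(p_star - h)) / (2 * h)
    return log(b) / log(Lam), Lam

# Demonstration on the 2x2 horizontal-spanning rule, b = 2:
R = lambda p: p**4 + 4 * p**3 * (1 - p) + 2 * p**2 * (1 - p)**2
p_star = fixpoint(R)
nu, Lam = exponent_nu(R, 2, p_star)
print(p_star, Lam, nu)
```

For this rule the procedure gives \(p^{\ast } \approx 0.618\), \(\varLambda \approx 1.53\) and \(\nu \approx 1.64\).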
It is important to realize that the renormalization mapping \(R(p)\) is not unique. In order to obtain useful results we should ensure that the mapping preserves connectivity on average.
7.2.1 Example: One-Dimensional Percolation
Let us first address the one-dimensional percolation problem using the renormalization procedure. We have illustrated the one-dimensional percolation problem in Fig. 7.5. We generate the renormalization mapping by ensuring that it conserves connectivity. The probability for two sites to be connected over a distance b is \(p^b\) when the occupation probability for a single site is p. A renormalization mapping that conserves connectivity is therefore:

\(p' = R(p) = p^b \, .\)
The fixpoints for this mapping are given by

\(p^{\ast } = R(p^{\ast }) = (p^{\ast })^b \, ,\)
with only two possible solutions, \(p^{\ast } = 0\), and \(p^{\ast } = 1\). An example of a renormalization iteration is shown in Fig. 7.6. The curve illustrates that \(p^{\ast } = 0\) is the only attractive or stable fixpoint, and that \(p^{\ast } = 1\) is an unstable fixpoint.
We can also apply the theory directly to find the exponent \(\nu \). The renormalization relation is \(p' = R(p) = p^b\). We can therefore find \(\varLambda \) from:

\(\varLambda = R'(p^{\ast }) = b (p^{\ast })^{b-1} = b \, ,\)
where we are now studying the unstable fixpoint \(p^{\ast } = 1\). We can therefore determine \(\nu \) from \(\nu = \ln b / \ln \varLambda \):

\(\nu = \frac {\ln b}{\ln b} = 1 \, .\)
We notice that b was eliminated in this procedure, which is essential since we do not want the exponent to depend on details such as the size of the renormalization cell. The result for the scaling of the correlation length is therefore

\(\xi \propto (1 - p)^{-\nu } = (1 - p)^{-1}\)
when \(1-p \ll 1\).
7.2.2 Example: Renormalization on 2d Site Lattice
Let us now use this method to address a renormalization scheme for two-dimensional site percolation. We will use a scheme with \(b=2\). The possible configurations for a \(2 \times 2\) lattice are shown in Fig. 7.7.
In order to preserve connectivity, we need to ensure that classes \(k=1\) and \(k=2\) are occupied also in the renormalized lattice. However, we have some freedom as to which configurations to include in the classes \(k=3\) and \(k=4\). We may choose only to consider spanning in one direction or spanning in both directions. In the mapping in Fig. 7.7 we only include horizontal spanning. Then the renormalization relation becomes

\(p' = R(p) = \sum _k f(k) g(k) P(k) = p^4 + 4 p^3 (1-p) + 2 p^2 (1-p)^2 \, ,\)
where \(f(k) = 1\) if class k is mapped onto an occupied site and \(f(k)=0\) if class k is mapped onto an empty site. The renormalization relation is illustrated in Fig. 7.8.
We will now follow steps 3 and 4. First, in step 3, we determine the fixpoints of the renormalization relation. That is, we find the solutions to the equation

\(p^{\ast } = (p^{\ast })^4 + 4 (p^{\ast })^3 (1 - p^{\ast }) + 2 (p^{\ast })^2 (1 - p^{\ast })^2 \, .\)
The trivial solution \(p^{\ast } = 0\) is not of interest. Therefore we divide by \(p^{\ast }\) to produce

\(1 = (p^{\ast })^3 + 4 (p^{\ast })^2 (1 - p^{\ast }) + 2 p^{\ast } (1 - p^{\ast })^2 \, .\)
The other trivial fixpoint is \(p^{\ast } = 1\). We divide the equation by \(1 - p^{\ast }\) to get

\((p^{\ast })^2 + p^{\ast } - 1 = 0 \, .\)
The solutions to this second order equation are

\(p^{\ast } = \frac {-1 \pm \sqrt {5}}{2} \, ,\)

where the physically relevant solution is \(p^{\ast } = (\sqrt {5} - 1)/2 \simeq 0.618\).
We have therefore found an estimate of \(p_c\) by setting \(p_c = p^{\ast }\). This does not produce the correct value for \(p_c\) in a two-dimensional site percolation system (numerically, \(p_c \simeq 0.5927\)), but the result is still reasonably close. We can similarly estimate the exponent \(\nu \) by calculating \(R'(p^{\ast })\).
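As a quick numerical check, the horizontal-spanning mapping \(R(p) = p^4 + 4p^3(1-p) + 2p^2(1-p)^2\) expands to \(2p^2 - p^4\), so its derivative is \(4p - 4p^3\). A short sketch evaluates the fixpoint and the exponent:

```python
from math import sqrt, log

# Positive root of (p*)^2 + p* - 1 = 0:
p_star = (sqrt(5) - 1) / 2        # ~0.618, vs the numerical p_c ~ 0.5927

# R(p) = p^4 + 4 p^3 (1-p) + 2 p^2 (1-p)^2 expands to 2 p^2 - p^4,
# so R'(p) = 4 p - 4 p^3:
Lam = 4 * p_star - 4 * p_star**3
nu = log(2) / log(Lam)            # b = 2 for the 2x2 cell
print(p_star, Lam, nu)
```

This gives \(\varLambda \approx 1.53\) and \(\nu \approx 1.64\), to be compared with the exact two-dimensional value \(\nu = 4/3\).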
7.2.3 Example: Renormalization on 2d Triangular Lattice
We will now use the same method to address site percolation on a triangular lattice. A triangular lattice is a lattice where each point has six neighbors. In solid state physics, the lattice is known as the hexagonal lattice because of its hexagonal rotation symmetry. Site percolation on the triangular lattice is particularly well suited for renormalization treatment, because a coarse-grained version of the lattice is also a triangular lattice, as illustrated in Fig. 7.9, with a lattice spacing \(b = \sqrt {3}\) times that of the original lattice.
We will use the majority rule for the renormalization mapping. That is, we will map a set of three sites onto an occupied site if a majority of the sites are occupied, meaning that two or more sites are occupied. Otherwise, the renormalized site is empty. This mapping is illustrated in Fig. 7.9. As the reader may easily check, this mapping conserves connectivity on average. The renormalization mapping becomes

\(p' = R(p) = p^3 + 3 p^2 (1 - p) \, .\)
The fixpoints of this mapping are the solutions of the equation

\(p^{\ast } = (p^{\ast })^3 + 3 (p^{\ast })^2 (1 - p^{\ast }) \, .\)
We observe that the trivial fixpoints \(p^{\ast } = 0\) and \(p^{\ast }=1\) indeed satisfy this equation. The non-trivial fixpoint is \(p^{\ast } = 1/2\). We are pleased to observe that this is the exact solution for \(p_c\) for site percolation on the triangular lattice.
We can use this relation to determine the scaling exponent \(\nu \). First, we calculate \(\varLambda \):

\(\varLambda = R'(p^{\ast }) = \left [ 6 p - 6 p^2 \right ]_{p = 1/2} = \frac {3}{2} \, .\)
As a result we find the exponent \(\nu \) from

\(\nu = \frac {\ln b}{\ln \varLambda } = \frac {\ln \sqrt {3}}{\ln (3/2)} \simeq 1.355 \, ,\)
which is very close to the exact result \(\nu = 4/3\) for two-dimensional percolation.
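The arithmetic in the last two steps is easily verified in a few lines, using \(R(p) = p^3 + 3p^2(1-p) = 3p^2 - 2p^3\) and \(b = \sqrt {3}\):

```python
from math import log, sqrt

# Majority rule on the triangular lattice: R(p) = 3 p^2 - 2 p^3.
R = lambda p: 3 * p**2 - 2 * p**3
assert abs(R(0.5) - 0.5) < 1e-15      # p* = 1/2 is a fixpoint

Lam = 6 * 0.5 - 6 * 0.5**2            # R'(p) = 6 p - 6 p^2 at p* = 1/2
nu = log(sqrt(3)) / log(Lam)          # b = sqrt(3)
print(Lam, nu)
```

The output confirms \(\varLambda = 3/2\) and \(\nu \approx 1.355\).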
7.2.4 Example: Renormalization on 2d Bond Lattice
As our last example of renormalization in two-dimensional percolation problems, we will study the bond percolation problem on a square lattice. The renormalization procedure is shown in Fig. 7.10. In the renormalization procedure, we replace 8 bonds by 2 new bonds. We consider connectivity only in the horizontal direction, and may therefore simplify the lattice by only considering the mapping of the H-cell, a mapping of five bonds onto one bond in the horizontal direction. The various configurations are shown in the figure. In Table 7.1 we have shown the number of such configurations, and the probabilities for each configuration, which are needed in order to calculate the renormalized connection probability \(p'\).
The resulting renormalization equation is given as

\(p' = R(p) = \sum _k n(k) P(k) (\varPi | k) \, ,\)
where we have used k to denote the various classes, \(P(k)\) is the probability for one instance of class k, \(n(k)\) is the number of different configurations in class k due to symmetry considerations, and \(\varPi |k\) is the spanning probability given that the configuration is in class k. The resulting relation is

\(p' = R(p) = p^5 + 5 p^4 (1-p) + 8 p^3 (1-p)^2 + 2 p^2 (1-p)^3 = 2 p^2 + 2 p^3 - 5 p^4 + 2 p^5 \, .\)
The fixpoints for this mapping are \(p^{\ast } = 0\), \(p^{\ast } = 1\), and \(p^{\ast } = 1/2\). The fixpoint \(p^{\ast } = 1/2\) provides the exact solution for the percolation threshold on the bond lattice in two dimensions. We find \(\varLambda \) by differentiation:

\(\varLambda = R'(p^{\ast }) = \left [ 4 p + 6 p^2 - 20 p^3 + 10 p^4 \right ]_{p = 1/2} = \frac {13}{8} \, .\)
The corresponding estimate for the exponent \(\nu \) is

\(\nu = \frac {\ln b}{\ln \varLambda } = \frac {\ln 2}{\ln (13/8)} \simeq 1.428 \, ,\)
which should be compared with the exact result of \(\nu = 4/3\) for two-dimensional percolation.
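A short sketch verifies the H-cell numbers, assuming the relation \(R(p) = 2p^2 + 2p^3 - 5p^4 + 2p^5\) with \(b = 2\):

```python
from math import log

# H-cell renormalization for 2d bond percolation, b = 2:
R = lambda p: 2 * p**2 + 2 * p**3 - 5 * p**4 + 2 * p**5
dR = lambda p: 4 * p + 6 * p**2 - 20 * p**3 + 10 * p**4

assert abs(R(0.5) - 0.5) < 1e-15   # p* = 1/2 is the non-trivial fixpoint
Lam = dR(0.5)                       # = 13/8
nu = log(2) / log(Lam)
print(Lam, nu)
```

The output confirms \(\varLambda = 13/8\) and \(\nu \approx 1.428\).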
Exercises
Exercise 7.1 (Renormalization of nnn-Model)
(a) Develop a renormalization scheme for a two-dimensional site percolation system with next-nearest neighbor (nnn) connectivity. That is, list the 16 possible configurations, and determine what configuration they map onto in the renormalized lattice.
(b) Find the renormalized occupation probability \(p' = R(p)\).
(c) Plot \(R(p)\) and \(f(p) = p\).
(d) Find the fixpoints \(p^{\ast }\) so that \(R(p^{\ast }) = p^{\ast }\).
(e) Find the rescaling factor \(\varLambda = R'(p^{\ast })\).
(f) Determine the exponent \(\nu = \ln b / \ln \varLambda \).
(g) How can we improve the estimates of \(p_c\) and \(\nu \)?
Exercise 7.2 (Renormalization of Three-Dimensional Site Percolation Model)
(a) Find all \(2^8\) possible configurations for the \(2 \times 2 \times 2\) renormalization cell for three-dimensional site percolation.
(b) Determine a renormalization scheme: what configurations map onto an occupied site?
(c) Find the renormalized occupation probability \(p' = R(p)\).
(d) Plot \(R(p)\) and \(f(p) = p\).
(e) Find the fixpoints \(p^{\ast }\) so that \(R(p^{\ast }) = p^{\ast }\).
(f) Find the rescaling factor \(\varLambda = R'(p^{\ast })\).
(g) Determine the exponent \(\nu = \ln b / \ln \varLambda \).
Exercise 7.3 (Renormalization of Three-Dimensional Bond Percolation Model)
In this exercise we will develop an H-cell renormalization scheme for bond percolation in three dimensions. The three-dimensional H-cell is illustrated in Fig. 7.11.
(a) Find all \(2^{12}\) possible configurations for this H-cell.
(b) Determine a renormalization scheme: what configurations map onto an occupied site?
(c) Find the renormalized occupation probability \(p' = R(p)\).
(d) Plot \(R(p)\) and \(f(p) = p\).
(e) Find the fixpoints \(p^{\ast }\) so that \(R(p^{\ast }) = p^{\ast }\).
(f) Find the rescaling factor \(\varLambda = R'(p^{\ast })\).
(g) Determine the exponent \(\nu = \ln b / \ln \varLambda \).
Exercise 7.4 (Numerical Study of Renormalization)
Write a program to study the renormalization of a given sample of a percolation system.
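A minimal program of this kind, assuming the \(2 \times 2\) horizontal-spanning rule of Fig. 7.1 (any other rule from the chapter can be substituted in `renormalize`), could look like this:

```python
import numpy as np

def renormalize(lattice):
    # One renormalization step: each 2x2 cell maps onto an occupied
    # site if either of its rows is fully occupied.
    top = lattice[0::2, 0::2] & lattice[0::2, 1::2]
    bottom = lattice[1::2, 0::2] & lattice[1::2, 1::2]
    return top | bottom

rng = np.random.default_rng(0)
L = 64
pc = 0.5927   # numerical site percolation threshold, square lattice
for p in (0.3, 0.4, 0.5, pc, 0.65, 0.70, 0.75):
    lattice = rng.random((L, L)) < p
    fractions = [lattice.mean()]
    while lattice.shape[0] > 1:
        lattice = renormalize(lattice)
        fractions.append(lattice.mean())
    print(f"p = {p:.4f}:", " ".join(f"{f:.2f}" for f in fractions))
```

Note that the unstable fixpoint of this particular rule lies at \(p^{\ast } \approx 0.618\) rather than at \(p_c \approx 0.5927\), so samples prepared at \(p = p_c\) will typically renormalize toward the empty lattice.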
Perform successive iterations for \(p=0.3\), \(p=0.4\), \(p=0.5\), \(p=p_c\), \(p=0.65\), \(p=0.70\), and \(p=0.75\), in order to understand the instability of the fixpoint at \(p = p_c\).
References
L.P. Kadanoff, Scaling laws for Ising models near \(T_c\). Physics Physique Fizika 2(6), 263–272 (1966)
K.G. Wilson, Renormalization group and critical phenomena. I. Renormalization group and the Kadanoff scaling picture. Phys. Rev. B 4(9), 3174–3183 (1971)
© 2024 The Author(s)
Malthe-Sørenssen, A. (2024). Renormalization. In: Percolation Theory Using Python. Lecture Notes in Physics, vol 1029. Springer, Cham. https://doi.org/10.1007/978-3-031-59900-2_7