Miscellaneous Ideas

Here I write down some ideas that have existed in my mind for a long time. I really appreciate every discussion and cooperation; anyone may use any idea from here, even without notifying me.


Time Series Analysis of Music

Abstract

The idea of discovering patterns in music by time series analysis is presented here. A possible relationship between the results and the classification of music is also discussed. A plan covering techniques and directions is sketched, and a warm invitation is sent to everyone who may have some interest in this.

Introduction

Time-frequency analysis of a song can give us every note; this topic is so-called Gabor analysis. But that is not our topic here: what we want to analyze is the music, the song itself, not the sound of the song. I mean, the time series of notes, which is the output of the former analysis technology, is the object of this investigation. We view music as a time series of notes, and then we try to use the time series analysis techniques and statistical methods developed in Econophysics to discover the universal and special laws of music. In Econophysics, when we take a time series of financial data as our object, such as stock prices, exchange rates and firm sizes, time series analysis is a useful tool to find time dependent properties such as correlation and stability, and time independent properties such as distributions and universal scaling laws. So here, for music, we also want to investigate all these properties. It's usual that most compositions have some cantus firmus that changes a little in every riff. I think patterns like this will be easy to discover with our analysis, and maybe some properties not very obvious to normal listeners like us can be found. I am even wondering whether, for some kinds of music, the first several verses may define the whole piece. If things like this exist, we can find them by correlation analysis, and maybe more interesting characteristics will be found along the way. Eventually Physics is for fun, right?

What kinds of properties can we try?

Most ideas are borrowed from Econophysics and fitted to the speciality of music. Time dependent and time independent properties are two sides of the same mirror, so only after both are explored thoroughly will the whole profile be found.

Time dependent properties

Stability
Since every composition is a finite-length time series, it will definitely be stable if it is extended periodically. So stability here concerns structure on time scales much shorter than the whole length of the composition.
Self correlations
As a famous stylized fact of stock price time series, a long-time correlation exists in the time series of absolute price returns, while only an extremely short-time self correlation exists in the price returns themselves. This strange correlation property has encouraged a lot of work on mechanism models. And I think, given the many various kinds of music we know, the correlation properties of music will be even more complex and more interesting.
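As an illustrative sketch of what such a correlation analysis could look like, here is a minimal Python example (the motif and the pitch numbers are invented for illustration, not taken from any real composition). It computes the sample autocorrelation of a toy note series built from a repeated four-note motif; the repetition shows up as a strong autocorrelation peak at the motif length.

```python
def autocorr(x, k):
    # sample autocorrelation of series x at lag k
    n = len(x)
    m = sum(x) / n
    num = sum((x[t] - m) * (x[t + k] - m) for t in range(n - k))
    den = sum((xt - m) ** 2 for xt in x)
    return num / den

# toy note series: a 4-note motif (MIDI-like pitch numbers) repeated 32 times
series = [60, 62, 64, 65] * 32

r4 = autocorr(series, 4)   # lag equal to the motif length: near 1
r1 = autocorr(series, 1)   # other lags: much weaker
```

A real study would of course run this on note sequences extracted from actual scores rather than on a synthetic motif.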

Distribution properties

The distribution property of a time series means the frequency counts obtained when we neglect the time information and only pick up the values. Such properties can be explored in two ways: at the single-composition level and at the ensemble level, in which we take many, many compositions as a whole system.
Distribution of single notes
It's well known that a highly skewed distribution emerges from the frequency count of leading digits in natural number sets such as accounting records (Benford's law), while at first sight everyone would expect every digit from 1 to 9 to have the same probability of being used. The single note distribution is designed to investigate the analogous question for music notes.
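The skewed digit-frequency effect mentioned above is easy to reproduce; as a quick Python illustration (using powers of 2 as a stand-in for naturally occurring multiplicative data, which is my own choice of example), the leading digits come out far from uniform and close to the Benford prediction:

```python
import math
from collections import Counter

# leading digits of 2^1 .. 2^1000
digits = [int(str(2 ** n)[0]) for n in range(1, 1001)]
freq = Counter(digits)

f1 = freq[1] / 1000.0              # observed frequency of leading digit 1
f9 = freq[9] / 1000.0              # observed frequency of leading digit 9
benford1 = math.log10(1 + 1 / 1)   # Benford's prediction for digit 1, ~0.301
```

The analogous count for notes in a corpus of compositions is exactly what this subsection proposes.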
Distribution of fixed-length combinations
In music, it's natural to take several notes, like a verse, as a unit. Maybe a verse is a better morpheme than a single note. So, like words in a language, can we look at the distribution of the "words" in music? Are there common words shared by different compositions, as the same words are shared by articles? What are the most common words, what are the least common ones, and what is the distribution? Maybe we can find a new way to express our feelings in music as we do in language; perhaps even an easier way to learn to compose music can be found through this research.
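Counting fixed-length note combinations is exactly an n-gram count; a minimal Python sketch on an invented toy note list (the actual corpus would come from extracted scores):

```python
from collections import Counter

# toy note sequence, invented for illustration
notes = ["C", "D", "E", "C", "C", "D", "E", "C", "E", "F", "G", "E", "F", "G"]

# count all length-2 "words" (bigrams) in the sequence
bigrams = Counter(zip(notes, notes[1:]))
most_common_word, count = bigrams.most_common(1)[0]
```

Longer "words" are obtained the same way by zipping more shifted copies of the sequence, and the resulting frequency table is exactly the distribution asked about above.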

Classification of music

Throughout this investigation, maybe some universal laws or grouped common properties can be found. Then we could classify music by its natural character. Maybe this will lead us to complement the traditional classification system of music, or even to a totally new system. Perhaps it will boost the theoretical study of music. Oh, can we?

Cooperation and discussion are strongly appreciated

Although I know something about music, I am a poor amateur. So I will greatly appreciate it if anyone can discuss questions about music and composition with me. Another kind of cooperator is someone who can obtain and share data sets of the notes of compositions. Since we will need a lot of note time series, it's not a good idea to get them all from paper scores. So if someone has the technique to generate note time series from other, more convenient sources, it would really be great news for us.


Thinking in Universal Language

First, let's start with an investigation of the situation of programming languages for computers. After the corresponding compilers are installed, a computer can understand many different programming languages. Even the communication between computers can be realized in high-level languages such as Java. So we say, computers can think and communicate with others in many languages.

Now let's turn to human languages. Many different languages are used, and we can understand each other. In mathematical language, this means a mapping between any two languages can be found and constructed. Let's take one sentence S for example. It can be expressed in English originally as Se, and then it can be translated into Chinese as Sc, French as Sf, and any other language as Si. Using the mathematical language of Quantum Mechanics, Si is a representation of the abstract S. Although a representation form has to be used for expression, like we usually use the position representation (the gradient operator) for the abstract momentum operator P, the abstract form S is independent of any particular language.

OK, now think about the thinking process. When we think in our minds, we have to use a specific language, most of the time our native language. So you may usually think in English; for me, sometimes English, sometimes Chinese. Again using the sentence S as an example, in your mind it's Se, for me Sc, and Si for all others. Since we can understand each other, a common meaning of the Si must exist. Actually the common meaning is the abstract S. But unfortunately, no one can think in the abstract form; we have to use a language to help us think, although thinking should be a process independent of language.

Now, let's ask the same question for computers. Can a computer think in an abstract form or a common language? Actually, the compilers give us the answer. What most compilers do is translate programs written in high-level languages into low-level ones such as machine code. Therefore, although machine code is also a specific language and not an abstract form, it is the most common language understood by computers. However, machine code is machine dependent, so it's not a good choice for a universal language. We want a universal language which has the same form on every machine, so maybe an interpreted language is a good choice. In fact, we can list more properties required of an abstract language and try to realize them in a specific language. For Computer Science, I think, Java is a good candidate. Although we know that any specific language is only an approximate realization of the universal language, a good approximation will definitely help communication and development.

Hehe, we have strayed too far from the mark. OK, let's come back to thinking and human language. Just like the common language and universal language for computers, can we find or construct a universal language for humans? What are the necessary properties of such a universal language? First, the universal language is designed for the thinking process, so in some ways it may look like mathematical language, and it should develop itself along with research progress on the thinking process. Second, the universal language is designed for communication, so it must have the same form for everyone and be easy to translate into any other language. Like the C-like grammar in Java, our universal language should be constructed on the basis of the most common languages. If we can define all the properties required of a universal language, we can also construct an approximate specific language. If it's possible to accomplish this, oh, I can't imagine what would happen to our world.

Another reason for the mapping between different languages, and a reason we can understand each other besides the common thinking process, is the material world. Since we have the same materials all over our world, we can always understand each other by referring a sentence to a material object. Here this object takes the place of the abstract form. However, not every representation can be referred to such a material object. So we have to say that the deeper reason we can understand each other is that we have the same thinking process. So if we can construct a specific representation directly related to thinking, everyone will be able to think in a universal language.


Three Representations for Epidemics

Introduction

A wide range of real processes can be viewed as reaction-diffusion processes, such as population evolution, epidemics, chemical reactions, surface adsorption, and recently even money-exchange models. In the mean field approximation, in which all components are distributed evenly, every reaction-diffusion process can be treated as a pure reaction process. Usually such an approximation provides a good estimate. For instance, epidemic models on general networks can be investigated by simulation and the results compared with real data. Also, if we assume that all individuals are fully mixed, or equivalently fully random, we can use differential equations to deal with epidemic models. In fact, three different classes of equations can be used: difference equations, rate equations[2] and master equations[1,3]. They are different representations, in different spaces, of the same process.

However, sometimes a mix-up between them happens in applications[5]. Also, because of their different characters and conveniences, here we want to discuss their relations and differences.

Relation between the three representations

Before we discuss the equations, I want to spend some time on model description. Some authors like to describe models by equations. Usually that's OK. But when we use a rate equation and want to transfer it to another representation, because a rate equation is a velocity equation and not a probability-based equation, confusion often arises; some even regard the representations as different processes. So here I want to always use a probability-based description for every model. The different representations are like images in different mirrors that keep the process invariant. Let's take a birth process and epidemic models as examples. The first one is the constant-birth-rate model, or Malthus model, in which at every time step every individual can independently give birth with a fixed probability $\Omega$. The time-discrete difference equation for this model is
\begin{displaymath}
U_{t+1}=U_{t}+\Omega U_{t},
\end{displaymath} (1)

in which $U_{t}$ is the population at time $t$ and $\Omega$ is the birth fraction (equal to the probability) for every individual. If we denote
\begin{displaymath}
u\left(t\right)=\frac{U_{t}}{N},
\end{displaymath} (2)

in which $N$ is the total capacity, we can use a rate equation to describe the same model as
\begin{displaymath}
\frac{d}{dt}u\left(t\right)=\omega u\left(t\right).
\end{displaymath} (3)

Another set of variables can be constructed for the master equation. Consider an ensemble of such systems, in which every system independently develops according to the same birth process. The population of each system can be viewed as a random variable. We define the fraction of systems with a specific population $i$ as
\begin{displaymath}
u\left(i,t\right)=\frac{N\left(i,t\right)}{\sum_{i}N\left(i,t\right)}.
\end{displaymath} (4)

It can be interpreted as the probability that a system has size $i$ at time $t$. The master equation for this distribution is
\begin{displaymath}
\frac{d}{dt}u\left(i,t\right)=\sum_{j}P(j\rightarrow
i)u\left(j,t\right) - \sum_{j}P(i\rightarrow j)u\left(i,t\right),
\end{displaymath} (5)

in which $P(j\rightarrow i)$ is the transfer rate from state $j$ to state $i$. In this process, it can be viewed as the probability that a system with population $j$ changes to population $i$, i.e., the probability that $\left(i-j\right)$ new individuals are born at the same time,
\begin{displaymath}
P(j\rightarrow i) = C^{\left(i-j\right)}_{j}\Omega^{\left(i-j\right)}.
\end{displaymath} (6)

Since this model has only a birth process, $i$ must be larger than $j$; fortunately the combination number $C^{\left(i-j\right)}_{j}$ is zero when $i<j$. So the master equation becomes
\begin{displaymath}
\frac{d}{dt}u\left(i,t\right)=\sum_{j}\left[C^{\left(i-j\right)}_{j}\Omega^{\left(i-j\right)}u\left(j,t\right) - C^{\left(j-i\right)}_{i}\Omega^{\left(j-i\right)}u\left(i,t\right)\right].
\end{displaymath} (7)

The first order of this equation is
\begin{displaymath}
\frac{d}{dt}u\left(i,t\right)=\left(i-1\right)\Omega u\left(i-1,t\right)- i\Omega u\left(i,t\right),
\end{displaymath} (8)

while the second order of the right side of equation (7) is
\begin{displaymath}
\frac{1}{2}\left(i-1\right)\left(i-2\right)\Omega^{2}
u\left(i-2,t\right) - \frac{1}{2}i\left(i+1\right)\Omega^{2}
u\left(i+2,t\right),
\end{displaymath} (9)

and no one can guarantee that the second order term is much smaller than the first order term unless
\begin{displaymath}
\Omega \ll \frac{1}{N}.
\end{displaymath} (10)
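As a quick numerical sanity check of the first order equation (8), one can integrate it with a simple Euler scheme and verify that the mean population grows like $e^{\Omega t}$, in agreement with the rate equation picture. The step size, cutoff and parameter values below are my own illustrative choices:

```python
import math

# Euler integration of equation (8), truncated at a population cutoff imax
Omega, T, dt, imax = 0.2, 5.0, 0.001, 200
u = [0.0] * (imax + 1)
u[1] = 1.0                 # every system in the ensemble starts with one individual

for _ in range(int(T / dt)):
    du = [0.0] * (imax + 1)
    for i in range(1, imax + 1):
        # gain from population i-1, loss toward population i+1
        du[i] = (i - 1) * Omega * u[i - 1] - i * Omega * u[i]
    for i in range(1, imax + 1):
        u[i] += dt * du[i]

mean = sum(i * ui for i, ui in enumerate(u))   # ensemble mean population
total = sum(u)                                 # probability should stay normalized
```

The mean comes out very close to $e^{\Omega T}$, which is the mean value equation mentioned below.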

The master equation representation looks much more complex than the other two. However, it is the differential equation for the distribution function itself, and it is the most general one. In fact the former two equations can be regarded as mean value equations and deduced from the master equation. It's also worth noticing that $\Omega$ and $\omega$ are related as
\begin{displaymath}
\omega = \ln(1+\Omega),
\end{displaymath} (11)

which can be found by comparing the corresponding solutions $U_{t}=U_{0}\left(1+\Omega\right)^{t}$ and $u\left(t\right)=u_{0}e^{\omega t}$ of equations (1) and (3). Another example is the pure infection process, in which at every time step a healthy individual can be infected, with a fixed probability $\Phi$, by an infected individual in contact with him. The difference equation is
\begin{displaymath}
U_{t+1}=U_{t} + \Phi\frac{U_{t}}{N}\left(N-U_{t}\right),
\end{displaymath} (12)

and the rate equation is
\begin{displaymath}
\frac{d}{dt}u\left(t\right)=\phi u\left(t\right)\left(1-u\left(t\right)\right).
\end{displaymath} (13)

The relation between probability $\Phi$ and rate $\phi$ is
\begin{displaymath}
\phi = - \ln\left(1-\Phi\right).
\end{displaymath} (14)
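Relations (11) and (14) are easy to check numerically; a small Python sketch follows (the parameter values are invented for illustration). Note that relation (11) is exact, while for the infection process the discrete and continuous curves agree only approximately for finite $\Phi$, with the agreement tightening as $\Phi \to 0$:

```python
import math

# exact check of relation (11) for the birth process
Omega = 0.05
omega = math.log(1.0 + Omega)
U = 10.0
growth_err = 0.0
for t in range(1, 51):
    U += Omega * U                          # difference equation (1)
    cont = 10.0 * math.exp(omega * t)       # solution of rate equation (3)
    growth_err = max(growth_err, abs(U - cont) / cont)

# approximate check of relation (14) for the infection process
Phi = 0.01
phi = -math.log(1.0 - Phi)
u = 0.01                                    # u = U/N, initial infected fraction
C = (1.0 - u) / u                           # fixes the logistic initial condition
infect_err = 0.0
for t in range(1, 1001):
    u += Phi * u * (1.0 - u)                # difference equation (12)
    logistic = 1.0 / (1.0 + C * math.exp(-phi * t))
    infect_err = max(infect_err, abs(u - logistic))
```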

One can get this relation by substituting $u(t) =
\frac{1}{1+Ce^{-\phi t}}$, the solution of equation (13), into equation (12). Following the same analysis procedure, we can get the master equation as below,
\begin{displaymath}
\frac{d}{dt} u\left(i,t\right)=\sum_{j}\left[C^{\left(i-j\right)}_{N-j}\Phi^{\left(i-j\right)}\frac{C^{\left(i-j\right)}_{j}}{C^{\left(i-j\right)}_{N}} u\left(j,t\right) - C^{\left(j-i\right)}_{N-i}\Phi^{\left(j-i\right)}\frac{C^{\left(j-i\right)}_{i}}{C^{\left(j-i\right)}_{N}} u\left(i,t\right)\right].
\end{displaymath} (15)

Similarly, when condition (10) holds, we can use the first order master equation,
\begin{displaymath}
\frac{d}{dt} u\left(i,t\right)=\left(N-i+1\right)\Phi\frac{i-1}{N} u\left(i-1,t\right) - \left(N-i\right)\Phi\frac{i}{N} u\left(i,t\right).
\end{displaymath} (16)

In fact, sometimes another difference equation is used instead of equation (12), as in [4]:
\begin{displaymath}
U_{t+1}=U_{t} + \left(1-\left(1-\Phi\right)^{U_{t}}\right)\left(N-U_{t}\right),
\end{displaymath} (17)

which can also be written as
\begin{displaymath}
U_{t+1}=U_{t} + \Phi U_{t}\left(N-U_{t}\right),
\end{displaymath} (18)

when $\Phi \ll 1$. This equation is based on a different assumption about the contact frequency than equation (12). In equation (17), during every time step, every healthy individual contacts every infected individual, while in the former case, every healthy individual contacts only one of all the other individuals, so the probability of encountering an infected individual is $\frac{U_{t}}{N}$. In this paper, we use the constant contact frequency of equation (12) rather than the sufficient contact frequency of equation (17).
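The difference between the two contact assumptions can be seen directly by iterating both difference equations; a minimal Python sketch (the parameter values are invented for illustration):

```python
def step_constant(U, N, Phi):
    # equation (12): each healthy individual makes one contact per time step
    return U + Phi * (U / N) * (N - U)

def step_sufficient(U, N, Phi):
    # equation (17): each healthy individual contacts every infected individual
    return U + (1.0 - (1.0 - Phi) ** U) * (N - U)

N, Phi = 1000.0, 0.02
U_const = U_suff = 1.0
for _ in range(200):
    U_const = step_constant(U_const, N, Phi)
    U_suff = step_sufficient(U_suff, N, Phi)
```

Under sufficient contact the epidemic saturates after only a few steps, while the constant-contact version is still in its early growth phase after the same number of steps.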

Beyond mean field approximation

In fact, all processes evolve in a space, so full mixing is only a first approximation. Characters of the space, like the possible contacts and the spatial distribution of individuals, have to be taken into consideration. The most common way to do this is to use structured networks as the stage of the reaction process, and the role of different geometrical properties in different processes deserves to be investigated thoroughly. This leads us to physical models on networks.

Reference

  1. James H. Matis, Thomas R. Kiffe, Stochastic population models : a compartmental perspective, Lecture notes in statistics (New York, Springer-Verlag, 2000) v.145;
  2. R. Pastor-Satorras and A. Vespignani, Epidemic dynamics and endemic states in complex networks, Phys. Rev. E 63, 066117(2001).
  3. G. Ghoshal, L. M. Sander, I. M. Sokolov, SIS epidemics with household structure: the self-consistent field method, e-print cond-mat/0304301.
  4. M. E. J. Newman, Spread of epidemic disease on networks, Phys. Rev. E 66, 016128(2002).
  5. Rinaldo B. Schinazi, On the Role of Social Clusters in the Transmission of Infectious Diseases, Theoretical Population Biology 61, 163-169(2002).

A Good Way to Show the Spirit of Renormalization Group Theory in Statistical Physics



Introduction

Here we discuss Real Space Renormalization in Statistical Physics; the one in momentum space and the general form outside Statistical Physics are not our topic here, although they have the same spirit. In Statistical Physics, Renormalization is a useful tool for critical phenomena, the things that happen near a critical point. A critical point in Statistical Physics is a point (or a line, or most generally a hypersurface) in parameter space around which a second or higher order phase transition happens. The first question is: since usually we have more than one parameter in parameter space, which one is related to the critical point? Is every parameter possible, or must it be some special one? If it has to be a special one, why is it special? Then, after we find this parameter (or parameters), can we know some properties of the phase transition? At first sight, we may feel such questions resemble geometry more than mechanics. This implies the answer is related to some fundamental properties of space and dynamical symmetry. So Renormalization is related to scale transformations: it intends to discover the properties unchanged under scale transformation. Now we use the Ising model and percolation as examples to discuss Real Space Renormalization for second order phase transitions.

Phenomenological foundation of Renormalization

In a second order phase transition, it's well known that critical phenomena such as critical fluctuations and long range correlations happen near the critical point. In the Ising model, we even know the characteristic length is related to temperature as
\begin{displaymath}
\xi \sim \left\vert t\right\vert^{-\nu}
.
\end{displaymath} (1)

So if we rescale the size of the system as $L\rightarrow
L^{'}=l^{-1}L$, then the characteristic length changes along with $L$ as $\xi\rightarrow \xi^{'}=l^{-1}\xi$. Applying this result to equation (1), we have an induced transformation,
\begin{displaymath}
\left\vert t^{'}\right\vert^{-\nu} \sim \xi^{'} = l^{-1}\xi \sim l^{-1}\left\vert t\right\vert^{-\nu} \Rightarrow t^{'}=l^{\frac{1}{\nu}}t.
\end{displaymath} (2)

Two things can be learned from this result. The first is that $t=0$ is the fixed point of this induced transformation, and it's an unstable fixed point: since $\nu > 0$, when $l>1$ and we repeat the transformation again and again, the system moves farther and farther away from the fixed point. The second is that a rescaling of the size of the system has the same effect as a corresponding change of temperature. Actually, we can even see these results directly from simulation, in which one can compare pictures at different scales at a fixed temperature with pictures at different temperatures. In equation (1), $\nu$ is a critical exponent; a series of such exponents can be found by experimental research, and near the critical point the behavior of the system is determined by the exponents. So since $\nu$ is related to the rescaling and the induced transformation, can we use the inverse procedure to find the exponents? This leads to Scale Transformation and Renormalization Group Theory. First rescale the size while keeping the physics unchanged, then get the induced transformation of temperature and other parameters, and then find the fixed point and expand the relations around the fixed point. In the end, the unstable fixed point is the critical point, and the coefficients of the first order expansion give the critical exponents.

Transformation and invariant

Any transformation should have the intention of keeping something unchanged. Here, in Physics, keeping the physics unchanged is the aim of the rescale transformation. But which invariants represent the physics here? In Statistical Physics, physical quantities are usually studied in the thermodynamical limit, which implies something like an infinite-size system. So the rescaling of size can only be interpreted as a change of temperature. The first invariant of this transformation is the total interaction, which can be described by the total free energy. The unchanged physics also means the interaction form is invariant, which implies the free energy density function has the same form. These two invariants are different: the first says the value of a function is fixed, while the second says the form of a function is unchanged. So the Rescale Transformation and Renormalization Group are defined as below.

System: $H=H\left(\vec{x}; t,h\right)$

Definitions: Total Free Energy $F=F\left(t,h\right)$, Free Energy Density $f_{s}=L^{-d}F\left(t,h\right)$

Critical Phenomena: $m \sim -t^{-\beta}$, $m \sim h^{\frac{1}{\delta}}$

Rescale:
\begin{displaymath}
L \rightarrow L^{'}=l^{-1}L
\end{displaymath} (3)


\begin{displaymath}
\vec{x} \rightarrow \vec{x}^{'}={\it T_{l}}\vec{x}
\end{displaymath} (4)

Induced Transformation:
\begin{displaymath}
t \rightarrow t^{'}=f\left(t;l\right)
\end{displaymath} (5)


\begin{displaymath}
h \rightarrow h^{'}=g\left(h;l\right)
\end{displaymath} (6)

Invariants: the total free energy and the functional form of the free energy density (equations (7) and (8)). The task is to determine the induced transformations $f\left(t;l\right),
g\left(h;l\right)$, and then the fixed points and the exponents.

Existence of such transformation and the relation with critical exponents

We want to transform the size of the system (3) and correspondingly transform the dynamical variables (4), while keeping both the total interaction (7) and the interaction form (8) unchanged. This is a very strict requirement; we can't guarantee the existence of such a transformation for an arbitrary thermodynamical system. So here, the solution is only valid when such a transformation can be found. OK, now the question is: if we have found such a transformation and have deduced the induced transformation of the parameters, how can we determine the fixed point and the exponents? And furthermore, coming back to the questions asked in the introduction, can we understand why the parameters are not all equally important? Substituting the definitions into (7),

\begin{displaymath}
L^{d}l^{-d}f^{'}_s \left(t^{'}, h^{'}\right)=L^{d}f_{s}\left(t, h\right)
\end{displaymath}

and substituting (8) into the equation above,

\begin{displaymath}
l^{-d}f_{s}\left(t^{'}, h^{'}\right) = f_{s}\left(t, h\right)
\end{displaymath}

Then, putting the induced transformation in,
\begin{displaymath}
f_{s}\left(t,h\right)=l^{-d}f_{s}\left(f\left(t; l\right), g\left( h; l \right)
\right).
\end{displaymath} (9)

This equation is the requirement on the induced transformations $f\left(t;l\right),
g\left(h;l\right)$. Now we suppose $\left(0,0\right)$ is the fixed point, so $f\left(0; l\right)=0$ and $g\left(0; l \right)=0$. Expanding them at $\left(0,0\right)$,
\begin{displaymath}\left\{
\begin{array}{l}
f\left(t; l\right)=f\left(0; l\right)+\frac{\partial f}{\partial t}\left(0; l\right)t+\circ\left(t^{2}\right)\\
g\left(h; l\right)=g\left(0; l\right)+\frac{\partial g}{\partial h}\left(0; l\right)h+\circ\left(h^{2}\right)
\end{array}
\right.
\end{displaymath} (10)

truncating them at first order, and denoting
\begin{displaymath}
\left\{
\begin{array}{l}
f\left(t; l\right)=\mu\left(l\right)t\\
g\left(h; l\right)=\nu\left(l\right)h
\end{array}
\right.
\end{displaymath} (11)

Using the critical phenomenon $m=\frac{\partial f}{\partial h}
\sim h^{\frac{1}{\delta}}$ ($t=0$),

\begin{displaymath}
m_{s}\left(0,h\right)=l^{-d}m_{s}\left(0,\nu\left(l\right)h\right)\nu\left(l\right)
\end{displaymath}

let $l_{0}$ satisfy $\nu\left(l_{0}\right)h=1$; then

\begin{displaymath}
m_{s}\left(0,h\right)=l_{0}^{-d}m_{s}\left(0,1\right)\nu\left(l_{0}\right)
\end{displaymath}

Therefore,
\begin{displaymath}
\left[\nu(l_{0})\right]^{{1+\frac{1}{\delta}}}\left[l_{0}\right]^{-d}=1
\end{displaymath} (12)

Similarly, from $m \sim -t^{-\beta}$ ($h=0$, $t<0$), we can get

\begin{displaymath}
m_{s}\left(t,0\right)=l^{-d}m_{s}\left(\mu\left(l\right)t,0\right)\nu\left(l\right)
\end{displaymath}

let $l^{'}_{0}$ satisfy $\mu\left(l^{'}_{0}\right)t=-1$; then

\begin{displaymath}
m_{s}\left(t,0\right)=\left[l^{'}_{0}\right]^{-d}m_{s}\left(-1,0\right)\nu\left(l^{'}_{0}\right)
\end{displaymath}

Therefore,
\begin{displaymath}
\mu^{\beta}(l^{'}_{0})=\nu(l^{'}_{0})[l^{'}_{0}]^{-d}
\end{displaymath} (13)

At last,
\begin{displaymath}\left\{
\begin{array}{l}
\mu\left(l\right)=l^{-\frac{d}{(\delta+1)\beta}}\\
\nu\left(l\right)=l^{\frac{d}{1+\frac{1}{\delta}}}
\end{array}
\right.
\end{displaymath} (14)

or
\begin{displaymath}
\left\{
\begin{array}{l}
f\left(t; l\right)=l^{y_{t}}t\\
g\left(h; l\right)=l^{y_{h}}h
\end{array}
\right.
\end{displaymath} (15)

where $y_{t}=-\frac{d}{(\delta+1)\beta}$, $y_{h}=\frac{d}{1+\frac{1}{\delta}}$. Now we rewrite the equation (9) as
\begin{displaymath}
f_{s}\left(t,h\right)=l^{-d}f_{s}\left(l^{y_{t}}t, l^{y_{h}}h
\right).
\end{displaymath} (16)
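Equation (16) says $f_{s}$ is a generalized homogeneous function, and any function of that form satisfies it identically, which can be verified numerically. In the Python sketch below the exponents $d$, $y_{t}$, $y_{h}$ and the scaling function are arbitrary choices for illustration, not values for any particular model:

```python
import math

d, y_t, y_h = 2.0, 1.0, 1.5        # illustrative exponents, not model-specific

def f_s(t, h):
    # any smooth function of the single scaling variable h * t^(-y_h/y_t) works
    g = lambda z: math.exp(-z * z)
    return t ** (d / y_t) * g(h * t ** (-y_h / y_t))

t, h = 0.3, 0.2
# residual of equation (16) for several rescale factors l
residuals = [abs(f_s(t, h) - l ** (-d) * f_s(l ** y_t * t, l ** y_h * h))
             for l in (1.5, 2.0, 5.0)]
```

The residuals vanish to machine precision for every $l$, which is exactly the statement of equation (16).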

Examples

Now we work out such transformations for percolation and the Ising model as examples. First we discuss the 1-d Ising model, $H=-J\sum_{<ij>}S_{i}S_{j}$, in which $<ij>$ means nearest neighbors; here in 1-d, $j=i\pm 1$. Let's use the decimation transformation, which transforms every three spins on consecutive positions into two neighboring spins, keeping the values of the new spins equal to the values of the first and third spins. So it looks like a decimation filter. OK, so the transformation is
\begin{displaymath}
\left\{ \begin{array}{l} \left(S_{1},S_{2},S_{3}\right)
\longrightarrow \left(S^{'}_{1},S^{'}_{3}\right) \\
S^{'}_{1}=S_{1}, S^{'}_{3}=S_{3}
\end{array}
\right.
\end{displaymath} (17)

The invariance condition is
\begin{displaymath}
\sum_{S_{2}} e^{J(S_{1}S_{2}+S_{2}S_{3})}=Ae^{J^{'}(S_{1}S_{3})},
\end{displaymath} (18)

where $A$ is an undetermined constant coming from $F=kT\ln Z$, and $kT$ has been set to 1. So we get the equations
\begin{displaymath}
\left\{ \begin{array}{l} e^{2J}+e^{-2J}=Ae^{J^{'}}\\
e^{0}+e^{0}=Ae^{-J^{'}}
\end{array}
\right.
\end{displaymath} (19)

So the solution is
\begin{displaymath}
J^{'}=\frac{1}{2}\ln\left(\cosh(2J)\right).
\end{displaymath} (20)
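Iterating recursion (20) numerically makes the flow explicit; a small Python sketch (the starting coupling is an arbitrary illustrative choice):

```python
import math

def renormalize(J):
    # one decimation step, recursion (20)
    return 0.5 * math.log(math.cosh(2.0 * J))

J = 1.5                    # an arbitrary strong starting coupling
flow = [J]
for _ in range(30):
    J = renormalize(J)
    flow.append(J)
```

Whatever finite coupling we start from, the flow runs monotonically down toward $J=0$.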

The only fixed points of this equation are $J=0$ and $J=\infty$, corresponding to the trivial fixed points $T=\infty$ and $T=0$. So we find no critical point for this model. However, if a specific transformation shows no critical point, it doesn't mean the model has no critical point; maybe a suitable transformation was not used. After all, in this short article, I only want to show the spirit of the Renormalization Group, not a good solution of any model. Another important and valuable transformation that can be used here is the Majority Rule, in which three consecutive spins are transformed into one, the value of the new spin being decided by the majority of the three old ones. So the interaction between six spins is transformed into the interaction of two spins, and keeping the value of the total free energy and the functional form of the free energy density leads to a new relation between $J^{'}$ and $J$. The second example is 1-d percolation. In percolation, the invariant is connectivity: any transformation should keep the connectivity. A natural transformation is to combine two consecutive sites into one. The new site should be set as occupied if and only if both of the two old sites are occupied, because only in this case does the transformation keep the connectivity. So we have an induced parameter transformation,
\begin{displaymath}
P^{'}=P^{2}.
\end{displaymath} (21)
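The flow of recursion (21) can be iterated the same way; a tiny Python sketch (the starting occupation probabilities are arbitrary illustrative choices):

```python
# iterate the percolation recursion P' = P^2, equation (21)
endpoints = {}
for P0 in (0.3, 0.9, 0.99, 1.0):
    P = P0
    for _ in range(60):
        P = P * P
    endpoints[P0] = P
```

Every starting occupation below 1 flows to the empty fixed point $P=0$; only $P=1$ stays connected.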

This transformation has two fixed points, $P=0$ and $P=1$, both of them trivial. So again, no critical point can be found in 1-d percolation. After two examples without a critical point, we now consider the Ising model on the hierarchical Diamond Lattice, which is generated by iteration as shown in figure (1).
[Figure 1: iterative generation of the hierarchical Diamond Lattice]

Using the transformation $\left(S_{1},S_{2},S_{3},S_{4}\right)\longrightarrow\left(S_{1},S_{2}\right)$ as shown in figure (2).
[Figure 2: the transformation $\left(S_{1},S_{2},S_{3},S_{4}\right)\longrightarrow\left(S_{1},S_{2}\right)$ on the Diamond Lattice]

We have the invariance condition
\begin{displaymath}
\sum_{S_{3},S_{4}}e^{J\left(S_{1}S_{3}+S_{1}S_{4}+S_{3}S_{2}+S_{4}S_{2}\right)}=Ae^{J^{'}(S_{1}S_{2})},
\end{displaymath} (22)

which gives us
\begin{displaymath}
J^{'}=\ln(\cosh(2J)).
\end{displaymath} (23)

This equation has three fixed points: two trivial ones, $J=0,\infty$, and a nontrivial critical point $J=J^{*}$, as shown in figure (3). Critical exponents can be deduced by expanding the above equation near $J^{*}$.
[Figure 3: fixed points of recursion (23): $J=0$, $J=\infty$ and the nontrivial $J^{*}$]
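The nontrivial fixed point $J^{*}$ of recursion (23) can be located numerically; a small Python sketch using bisection (the bracketing interval is chosen by inspection):

```python
import math

def renormalize(J):
    # recursion (23) for the Diamond Lattice
    return math.log(math.cosh(2.0 * J))

# bisection on g(J) = renormalize(J) - J, which changes sign in [0.1, 2.0]
lo, hi = 0.1, 2.0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if renormalize(mid) - mid < 0.0:
        lo = mid
    else:
        hi = mid
J_star = 0.5 * (lo + hi)

slope = 2.0 * math.tanh(2.0 * J_star)   # derivative of the recursion at J*
```

The slope at $J^{*}$ comes out larger than 1, confirming that the fixed point is unstable, exactly as a critical point should be.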

Summary

The existence of critical phenomena near the critical point is the basis of Renormalization Group Theory in Statistical Physics. Any transformation in Physics should keep the physics unchanged. This requires that the value of the total free energy and the functional form of the free energy density be invariant, and this condition links the rescale transformation with the critical point and critical exponents. But finding such an exact transformation, which includes two parts, the scale and the values of the new dynamical variables, is not an easy job. So usually, some approximations are used instead. Such a Renormalization Group approach gives us a whole picture of the structure of parameter space.





Software Developer: Scientific Project Manager

My idea for this software comes from my own research work experience. Since I usually get new ideas every other day, it's really an important task for me to select the valuable ones and keep them in mind, and the harder part is how to update them when I happen to think about them at some accidental time. Even when we actually start to realize a project with just several people, much time is spent exchanging reference papers and ideas, and sometimes short or long meetings are needed to synchronize progress. So I think that for every researcher, a piece of software that helps organize all this information and these resources would be valuable. Such software should be easy to use and developed only for scientists, and it should include only a small part of the whole range of functions in commercial software like EPM from Microsoft, which is designed to deal with the most general business projects.

The most important character of this software is that it should be free and open source, so everyone can use it and improve it. It should be developed as a tool for scientists and by scientists. The second principle is readability and compatibility: everything can be read with a text editor, and the whole structure of the saved files, like the tree-like directory, can also be read by human eyes. And if a database has to be used, a converter to readable text files should be distributed together with the software.

The next part is my functional analysis of this software.


As Dirac, a genius in Physics, said, mathematics is the thing you create when you need it. Knuth is the equivalent giant in Computer Science, I think, because when he decided to write a great and huge book, he designed the TeX system first. ^_^ Therefore, after some initial research experience, I find that maybe I should develop a project manager system when I have free time. Of course, if someone else wants to create it first for all scientists, I would really appreciate it if she/he just used this idea and structure. The only thing she/he needs to do is drop me an email to let me know someone else is trying to do it. All my analysis follows the similar format and idea of the reference listed below.

Reference
