Miscellaneous Ideas
Here I write down some ideas that have lived in my mind for a long time. I really appreciate every discussion and cooperation, and anyone may use any idea from here without notifying me.
Time Series Analysis of Music
Abstract
The idea of discovering patterns in music by time series analysis is
presented here. A possible relationship between the results and the
classification of music is also discussed. A plan covering techniques
and directions is sketched, and a warm invitation is extended to
everyone who may have some interest in this.
Introduction
Time-frequency analysis of a recorded song can recover every note;
that topic is the so-called Gabor analysis. But it is not our topic
here: what we want to analyze is the music, the song itself, not the
sound of the song. That is, the time series of notes, which is the
output of the former analysis technology, is the object of this
investigation. We regard music as a time series of notes and then try
to use the time-series analysis techniques and statistical methods
developed in econophysics to discover the universal and the special
laws of music.
In econophysics, when we take time series of financial data as our
object, such as stock prices, exchange rates, and firm sizes,
time-series analysis is a useful tool for finding time-dependent
properties such as correlation and stability, and time-independent
properties such as distributions and universal scaling laws. So here,
for music, we also want to investigate all these properties. It is
common for a composition to have some cantus firmus that changes a
little at every repetition; patterns like this should be easy to
discover with our analysis. And perhaps some properties that are not
obvious to ordinary listeners like us can also be found. I even wonder
whether, for some kinds of music, the first several verses determine
the whole piece. If things like this exist, we can find them by
correlation analysis, and maybe more interesting characteristics will
be found along the way. After all, physics is for fun, right?
What kinds of properties we can try
Most of these ideas are borrowed from econophysics and adapted to the
special character of music. Time-dependent and time-independent
properties are two sides of the same mirror, so only after both are
explored thoroughly will the whole profile be found.
Time dependent properties
Stability
Since every composition is a finite-length time series, it is
certainly stable if it is extended periodically. So the stability
analysis concerns structure on time scales much shorter than the
whole length of the composition.
Self correlations
As a famous stylized fact about stock-price time series, long-time
correlation exists in the absolute value of price returns, while only
extremely short-time autocorrelation exists in the returns themselves.
This strange correlation property has encouraged a lot of work on
mechanistic models. And I think, given the great variety of music we
know, the correlation properties of music will be even more complex
and more interesting.
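As a toy illustration of what such correlation analysis could look like, here is a minimal sketch. The melody, motif, and noise model are all invented for illustration; the same function could equally be applied to the absolute note changes, in analogy with the return/volatility comparison above.

```python
import random

def autocorr(x, max_lag):
    """Sample autocorrelation of a sequence for lags 0..max_lag."""
    n = len(x)
    mean = sum(x) / n
    d = [v - mean for v in x]
    var = sum(v * v for v in d)
    return [sum(d[t] * d[t + k] for t in range(n - k)) / var
            for k in range(max_lag + 1)]

# A toy "melody": MIDI note numbers built from a repeating 8-note motif
# plus a small random detuning (all invented for illustration).
random.seed(0)
motif = [60, 62, 64, 62, 60, 67, 65, 64]
notes = [p + random.choice([-1, 0, 1]) for p in motif * 50]

# Note-to-note changes play the role of "returns" in the financial analogy.
changes = [b - a for a, b in zip(notes, notes[1:])]
ac = autocorr(changes, 16)

# The repeating 8-note motif appears as a strong autocorrelation peak at lag 8.
print(ac[8])
```

The lag-8 peak is exactly the kind of structure that a listener hears as a riff; on real note data, peaks at other lags would reveal less obvious periodicities.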
Distribution properties
The distribution properties of a time series are the frequency counts
obtained when we neglect the time information and keep only the
values. Such properties can be explored in two ways: at the level of a
single composition, and at the ensemble level, in which we take many,
many compositions together as one whole system.
Distribution of single notes
It is well known that a highly skewed, strange distribution emerges
from counting the leading digits in natural number sets such as
accounting records (Benford's law), while at first sight everyone
would expect every digit from 1 to 9 to have the same probability of
being used. The single-note distribution is designed to investigate
the analogous question for musical notes.
Distribution of fixed-length combinations
In music it is natural to take several notes, such as a verse, as a
unit; maybe a phrase is a better morpheme than a single note. So, like
words in a language, can we look at the distribution of the "words" of
music? Are there common words shared by different compositions, as the
same words appear in different articles? What are the most common
words, what are the least common ones, and what is the overall
distribution? Maybe we can find a new way to express our feelings in
music as we do in language; perhaps this research even suggests an
easier way to learn to compose music.
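A minimal sketch of such "word" counting, assuming the notes are already available as symbols. The two toy pieces below are invented; they share a three-note motif on purpose, standing in for a common word shared by different compositions.

```python
from collections import Counter

def ngram_counts(notes, n):
    """Count every length-n combination ("word") in a note sequence."""
    grams = [tuple(notes[i:i + n]) for i in range(len(notes) - n + 1)]
    return Counter(grams)

# Two toy "compositions" sharing a common motif (C D E), as note names.
piece_a = "C D E C D E G A G C D E".split()
piece_b = "E F G C D E C D E F E D".split()

counts = ngram_counts(piece_a, 3) + ngram_counts(piece_b, 3)
most_common_word, freq = counts.most_common(1)[0]
print(most_common_word, freq)   # the shared motif ('C', 'D', 'E') wins, 5 times
```

On a large corpus, plotting `counts` sorted by frequency would answer the distribution question directly, e.g. whether music "words" follow a Zipf-like law as words in articles do.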
Classification of music
Throughout this investigation, maybe some universal laws or groups of
common properties can be found. We could then classify music by its
natural character. Maybe this will lead us to complement the
traditional classification system of music, or even to a totally new
system. Perhaps it will boost the theoretical study of music. Oh, can
we?
Cooperation and discussion are strongly appreciated
Although I know something about music, I am a poor amateur. So I would
greatly appreciate it if anyone would discuss questions about music
and composition with me. Another kind of collaborator is one who can
obtain and share data sets of the notes of compositions. Since we will
need a lot of note time series, it is not a good idea to get them all
from paper scores. So if someone has the technique to generate note
time series from other convenient sources, that would be really great
news for us.
Thinking in Universal Language
First, let's start with an investigation of the situation of programming languages for computers. After the corresponding compilers are installed, a computer can understand many different programming languages. Even communication between computers can be realized through high-level languages such as Java. So we can say that computers think and communicate with each other in many languages.
Now let's turn to human languages. Many different languages are used, and we can understand each other; in mathematical terms this means a mapping between any two languages can be found and constructed. Take one sentence S for example. It can be expressed originally in English as Se, and it can be translated into Chinese as Sc, into French as Sf, and into any other language as Si. In the mathematical language usually used for quantum mechanics, Si is a representation of the abstract S. Although some representation form has to be used for expression, just as we usually use the position representation −iℏ∇ for the abstract momentum operator P, the abstract form S is independent of any particular language.
OK, now think about the thinking process. When we think in our minds, we have to use a specific language, most of the time our native language. So you may usually think in English; for me, it is sometimes English, sometimes Chinese. Again take the sentence S for example: in your mind it is Se, in mine Sc, and Si for all the others. Since we can understand each other, a common meaning of all the Si must exist; that common meaning is the abstract S. But unfortunately no one can think in the abstract form. We have to use a language to help us think, although thinking should be a process independent of language.
Now let's ask the same question for computers. Can a computer think in an abstract form, or in a common language? Actually, compilers give us the answer. What most compilers do is translate programs written in high-level languages into low-level ones, ultimately machine code. Therefore, although machine code is also a specific language and not an abstract form, it is the most common language understood by computers. However, machine code is machine dependent, so it is not a good choice for an abstract language. We want a universal language that has the same form on every machine, so maybe an interpreted language is a good choice. In fact, we can list more properties required of an abstract language and try to realize them in a specific language. For computer science, I think Java is a good candidate. Although any specific language is only an approximate realization of the universal language, a good approximation will definitely help communication and development.
Hehe, we have wandered far from the mark. OK, let's come back to thinking and human language. Just like the common language and universal language for computers, can we find or construct a universal language for humans? What are the necessary properties of such a universal language? First, the universal language is designed for the thinking process, so in some ways it should look like mathematical language, and it should develop along with research progress on the thinking process. Second, the universal language is designed for communication, so it must have the same form for everyone and be easily translated into any other language. Like the C-like grammar of Java, our universal language should be constructed on the basis of the most common existing languages. If we can define all the properties required of a universal language, we can also construct an approximate specific language. If it is possible to accomplish this, oh, I cannot imagine what would happen to our world.
Besides the common thinking process, another reason for the mapping between different languages, and for the fact that we can understand each other, is the material world. Since we share the same material world, we can always reach understanding by referring a sentence to material objects; here such an object takes the place of the abstract form. However, not every representational form can be referred to such a material object. So we still have to say that the reason we can understand each other is that we share the same thinking process. And if we could construct a specific representation directly related to thinking, everyone would be able to think in a universal language.
Three Representations for Epidemics
Introduction
A wide range of real processes can be regarded as reaction-diffusion
processes: population evolution, epidemics, chemical reactions,
surface adsorption, and recently even money-exchange models. In a
mean-field approximation, in which all components are distributed
evenly, every reaction-diffusion process can be treated as a pure
reaction process, and usually this approximation provides a good
estimate. For instance, epidemic models on general networks can be
investigated by simulation and the results compared with real data.
Also, if we assume that all individuals are fully mixed, or
equivalently fully random, we can use differential equations to deal
with epidemic models. In fact, three different classes of equations
can be used: difference equations, rate equations [2], and master
equations [1,3]. They are different representations, in different
spaces, of the same process.
However, sometimes these representations are mixed up by mistake in
applications [5]. Also, because of their different characters and
conveniences, here we want to discuss their relations and their
differences.
Relation between the three representations
Before we discuss the equations, I want to spend some time on model
description. Some authors like to describe models by equations, and
usually that is fine. But when we use a rate equation and want to
transfer it to another representation, confusion often arises,
because a rate equation is a velocity equation, not a
probability-based equation; some even regard the representations as
different processes. So here I will always use a probability-based
description for every model. The different representations are like
images in different mirrors that keep the process itself invariant.
Let's take a birth process and an epidemic model as examples. The
first is the constant-birth-rate model, or Malthus model, in which at
every time step each individual can independently give birth with a
fixed probability ν. The time-discrete difference equation for this
model is

    N(t+1) = (1 + ν) N(t)    (1)

in which N(t) is the population at time t and ν is the birth fraction
(equal to the birth probability) for every individual. If we denote

    x(t) = N(t) / V    (2)

in which V is the total capacity, we can use a rate equation to
describe the same model as

    dx(t)/dt = μ x(t)    (3)
Another set of variables can be constructed for the master equation.
Consider an ensemble of M such systems, each independently developing
under the same birth process. The population of each system can be
regarded as a random variable. We define the fraction of systems with
a specific population i as

    u(i,t) = M_i(t) / M    (4)

where M_i(t) is the number of systems with population i. It can be
interpreted as the probability that a system has size i at time t.
The master equation of this distribution is

    du(i,t)/dt = Σ_j [ Ω_{j→i} u(j,t) - Ω_{i→j} u(i,t) ]    (5)

in which Ω_{j→i} is the transfer rate from state j to state i. In
this process it can be regarded as the probability that a system with
population j changes to population i, that is, the probability that
i - j new individuals are born at the same time,

    Ω_{j→i} = C^{(i-j)}_j ν^{i-j} (1 - ν)^{2j-i}    (6)

Since this model has only a birth process, i must be larger than j,
but fortunately the combination number C^{(i-j)}_j is zero when
i < j. So the master equation becomes

    du(i,t)/dt = Σ_j [ C^{(i-j)}_j ν^{i-j} (1-ν)^{2j-i} u(j,t) - C^{(j-i)}_i ν^{j-i} (1-ν)^{2i-j} u(i,t) ]    (7)

The first order of this equation (the terms linear in ν) is

    du(i,t)/dt = (i-1) ν u(i-1,t) - i ν u(i,t)    (8)

while the second order of the right-hand side of equation (7) is

    C^{(2)}_{i-2} ν² (1-ν)^{i-4} u(i-2,t) - C^{(2)}_i ν² (1-ν)^{i-2} u(i,t)    (9)

and no one can guarantee that the second order is much smaller than
the first order unless

    i ν ≪ 1    (10)
The master equation representation looks much more complex than the
other two. However, it is a differential equation for the whole
distribution function, and it is the most general one; in fact, the
former two equations can be regarded as mean-value equations and
deduced from the master equation. It is also worth noticing that ν
and μ are related as

    μ = ln(1 + ν)    (11)

which can be found by comparing the corresponding solutions
N(t) = N(0)(1+ν)^t and x(t) = x(0) e^{μt} of equations (1) and (3).
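The relation between the per-step probability and the continuous rate can be checked numerically. The sketch below (with arbitrary values for the birth probability, the initial population, and the horizon) iterates the difference equation and compares it with the rate-equation solution under μ = ln(1 + ν).

```python
import math

nu = 0.05                  # per-step birth probability (arbitrary choice)
mu = math.log(1.0 + nu)    # continuous rate, mu = ln(1 + nu)
n0 = 100.0                 # initial population
steps = 50

# Difference equation: N(t+1) = (1 + nu) N(t), iterated step by step.
n_diff = n0
for _ in range(steps):
    n_diff *= (1.0 + nu)

# Rate-equation solution evaluated at t = steps: N(t) = N(0) exp(mu t).
n_rate = n0 * math.exp(mu * steps)

print(n_diff, n_rate)   # the two descriptions agree up to rounding
```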
Another example is a pure infection process, in which at every time
step each individual contacts one randomly chosen other individual
and is infected with a fixed probability φ if that contact is
infected. The difference equation is

    i(t+1) = i(t) + φ (N - i(t)) i(t) / N    (12)

and the rate equation is

    di(t)/dt = Φ (N - i(t)) i(t) / N    (13)

The relation between the probability φ and the rate Φ is

    Φ = ln(1 + φ)    (14)

One can get this relation by substituting the solution of equation
(13) into equation (12). Following the same analysis procedure, we
can get the master equation as below,
    du(i,t)/dt = Σ_j [ C^{(i-j)}_{…} … u(j,t) - … (C^{(j-i)}_i / C^{(j-i)}_N) u(i,t) ]    (15)
Similarly, when condition (10) holds, we can use the first-order
master equation,

    du(i,t)/dt = (N - i + 1) Φ ((i-1)/N) u(i-1,t) - (N - i) Φ (i/N) u(i,t)    (16)
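As a sanity check of the mean-field picture, one can simulate an ensemble of stochastic infection systems under the constant-contact rule and compare the ensemble mean with the difference equation (12). The Monte Carlo sketch below uses arbitrary choices for the population size, infection probability, and ensemble size; the ensemble mean tracks the mean-field curve, typically lying a bit below it, since the mean-field equation ignores fluctuations.

```python
import random

random.seed(1)
N, phi, steps, M = 50, 0.2, 30, 2000

def si_step(i):
    """One synchronous step: each susceptible contacts one randomly
    chosen individual (infected with probability i/N) and is infected
    with probability phi if that contact is infected."""
    new = 0
    for _ in range(N - i):
        if random.random() < i / N and random.random() < phi:
            new += 1
    return i + new

# Ensemble of M independent systems, each started with one infected.
ensemble = [1] * M
for _ in range(steps):
    ensemble = [si_step(i) for i in ensemble]
mean_sim = sum(ensemble) / M

# Mean-value (difference-equation) prediction, eq. (12).
i_mean = 1.0
for _ in range(steps):
    i_mean += phi * (N - i_mean) * i_mean / N

print(mean_sim, i_mean)
```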
In fact, sometimes another difference equation is used instead of
equation (12), as in [4],

    i(t+1) = i(t) + (N - i(t)) [ 1 - (1 - φ)^{i(t)} ]    (17)

which can also be written as

    i(t+1) = i(t) + φ (N - i(t)) i(t)    (18)

when φ i(t) ≪ 1. This equation is based on a different assumption
from equation (12) about the contact frequency. In this equation,
during every time step, every healthy individual contacts every
infected individual, while in the former case every healthy
individual contacts only one of the other individuals, so the
probability of encountering an infected individual is i(t)/N. In this
paper we use the constant contact frequency of equation (12) rather
than the saturated contact frequency of equation (17).
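The difference between the two contact assumptions is easy to see numerically. In the sketch below (parameter values are arbitrary), the saturated-contact equation fills almost the whole population within a few steps, while the constant-contact equation is still growing slowly after many steps.

```python
N, phi, steps = 1000, 0.01, 200

i_single = 1.0   # eq. (12): one random contact per susceptible per step
i_full = 1.0     # eq. (17): contact with every infected individual

for _ in range(steps):
    i_single += phi * (N - i_single) * i_single / N
    i_full += (N - i_full) * (1.0 - (1.0 - phi) ** i_full)

print(i_single, i_full)   # constant contact: still small; saturated: near N
```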
Beyond the mean-field approximation
In fact, every process evolves in a space, so full mixing is only a
first-order approximation. Characteristics of the space, such as the
possible contacts and the spatial distribution of individuals, have
to be taken into consideration. The most common way to do this is to
use structured networks as the stage for the reaction process, and
the role of different geometrical properties in different processes
deserves thorough investigation. This leads us to physical models on
networks.
Reference
- [1] James H. Matis and Thomas R. Kiffe, Stochastic Population Models: A Compartmental Perspective, Lecture Notes in Statistics, v. 145 (Springer-Verlag, New York, 2000).
- [2] R. Pastor-Satorras and A. Vespignani, Epidemic dynamics and endemic states in complex networks, Phys. Rev. E 63, 066117 (2001).
- [3] G. Ghoshal, L. M. Sander, and I. M. Sokolov, SIS epidemics with household structure: the self-consistent field method, e-print cond-mat/0304301.
- [4] M. E. J. Newman, Spread of epidemic disease on networks, Phys. Rev. E 66, 016128 (2002).
- [5] Rinaldo B. Schinazi, On the role of social clusters in the transmission of infectious diseases, Theoretical Population Biology 61, 163-169 (2002).
A Good Way to Show the Spirit of Renormalization Group Theory in Statistical Physics
Introduction
Here we discuss real-space renormalization in statistical physics; renormalization in momentum space, and the general form outside statistical physics, are not our topic here, although they share the same spirit. In statistical physics, renormalization is a useful tool for critical phenomena, the things that happen near a critical point. A critical point is a point (or a line, or most generally a hypersurface) in parameter space around which a second- or higher-order phase transition happens. The first question is this: since we usually have more than one parameter in parameter space, which one is related to the critical point? Is every parameter possible, or must it be some special one? If it has to be a special one, why is it special? Then, after we find this parameter (or parameters), can we learn some properties of the phase transition? At first sight such questions seem closer to geometry than to mechanics, which implies that the answer is related to some fundamental properties of space and of dynamical symmetry. So renormalization is related to scale transformations: it intends to discover the properties left unchanged under a scale transformation. We now use the Ising model and percolation as examples to discuss real-space renormalization for second-order phase transitions.
In a second-order phase transition, it is well known that critical
phenomena such as critical fluctuations and long-range correlations
appear near the critical point. In the Ising model we even know that
the characteristic length is related to the temperature as

    ξ ∝ |T - T_c|^{-ν}    (1)

So if we rescale the size of the system as x → x/l_0, the
characteristic length changes along with it as ξ → ξ/l_0. Applying
this result to equation (1), we get an induced transformation of the
temperature,

    |T' - T_c| = l_0^{1/ν} |T - T_c|    (2)
We can learn two things from this result. The first is that T = T_c
is a fixed point of the induced transformation, and an unstable one:
since l_0 > 1, when T ≠ T_c and we repeat the transformation again
and again, the system moves farther and farther away from the fixed
point. The second is that a rescaling of the system size has the same
effect as a corresponding change of temperature. Actually we can even
see these results directly in simulations, in which one compares
pictures at different scales at a fixed temperature with pictures at
different temperatures.
In equation (1), ν is a critical exponent; a series of such exponents
can be found by experimental research, and near the critical point
the behavior of the system is determined by these exponents. So,
since ν is related to the rescaling and its induced transformation,
can we use the inverse procedure to find the exponents? This leads to
scale transformations and renormalization group theory.
First rescale the size while keeping the physics unchanged; then
obtain the induced transformation of the temperature and the other
parameters; then find the fixed points and expand the relations
around them. In the end, the unstable fixed point is the critical
point, and the coefficients of the first-order expansion give the
critical exponents.
Any transformation should be intended to keep something unchanged.
Here, in physics, keeping the physics unchanged is the aim of the
rescale transformation. But which invariants are used to represent
the physics here?
In statistical physics, physical quantities are usually studied in
the thermodynamic limit, which implies an effectively infinite
system, so the rescaling of size can only be interpreted as a change
of temperature. The first invariant of this transformation is thus
the total interaction, which can be described by the total free
energy. The unchanged physics also means the form of the interaction
is invariant, which implies that the free energy density, as a
function, keeps the same form. These two invariants are different:
the first says the value of a function is fixed, while the second
implies the form of the function is unchanged.
So the rescale transformation and renormalization group are defined
as below.
System: a thermodynamic system near its critical point, described by
the reduced temperature t and the field h.
Definition: total free energy F(t, h); free energy density
f(t, h) = F/V.
Critical phenomena: m ∝ (-t)^β at h = 0, and m ∝ h^{1/δ} at t = 0.
Rescale:

    x → x' = x / l_0    (3)

    s → s' (the corresponding transformation of the dynamical variable)    (4)

Induced transformation:

    t → t' = μ(l_0) t    (5)

    h → h' = ν(l_0) h    (6)

Invariants:
- Value of the total free energy,

    F(t', h') = F(t, h)    (7)

- Functional form of the free energy density,

    f'(·,·) = f(·,·)    (8)

The task is to determine the induced transformation μ(l_0), ν(l_0),
and then the fixed points and the exponents.
We want to transform the size of the system (3) and correspondingly
transform the dynamical variable (4), while keeping both the total
interaction (7) and the interaction form (8) unchanged. This is a
very demanding requirement: we cannot guarantee the existence of such
a transformation for an arbitrary thermodynamic system. So the
solution here is valid only when such a transformation can be found.
OK, now the question is: if we have found such a transformation and
have deduced the induced transformation of the parameters, how do we
determine the fixed point and the exponents? And furthermore, coming
back to the questions asked in the introduction, can we gain some
understanding of why the parameters are not all equally important?
Substituting the definition f = F/V into (7), then substituting the
form invariance (8) into the resulting equation, and finally putting
the induced transformation in, we obtain

    f(t, h) = l_0^{-d} f(μ(l_0) t, ν(l_0) h)    (9)

This equation is the requirement on the induced transformation
μ(l_0), ν(l_0). Now we suppose (t, h) = (0, 0) is the fixed point, so
that t' = μ(l_0) t and h' = ν(l_0) h vanish there. Expanding around
the fixed point,

    …    (10)

truncating at first order, and denoting

    …    (11)

then, using the critical behavior m ∝ h^{1/δ} at t = 0, we obtain

    [ν(l_0)]^{1 + 1/δ} [l_0]^{-d} = 1    (12)

Similarly, from m ∝ (-t)^β at h = 0, we get

    μ^β(l_0') = ν(l_0') [l_0']^{-d}    (13)

At last,

    …    (14)

or

    …    (15)

where … . Now we can rewrite equation (9) as

    …    (16)
Now we present such transformations in percolation and in the Ising
model as examples.
First we discuss the 1-d Ising model, H = -K Σ_i s_i s_{i+1} (in
units of k_B T), in which the sum runs over nearest-neighbor spins;
here in 1-d the neighbors of site i are j = i ± 1. Let's use the
decimation transformation, which transforms every three spins at
consecutive positions into two neighboring spins; the values of the
new spins are kept the same as those of the first and the third spin,
so it looks like a decimation filter. OK, so the transformation is

    (s_1, s_2, s_3) → (s'_1, s'_2) = (s_1, s_3)    (17)

Invariance requires summing out the middle spin,

    Σ_{s_2} exp[ K (s_1 s_2 + s_2 s_3) ] = C exp( K' s_1 s_3 )    (18)

where C is an undetermined constant coming from the partial sum over
s_2, and k_B T has been set to 1. So we get the equations

    2 cosh(2K) = C e^{K'},   2 = C e^{-K'}    (19)

So the solution is

    tanh K' = tanh² K    (20)
The only fixed points of this equation are K = 0 and K = ∞,
corresponding to the trivial fixed points T = ∞ and T = 0. So we can
find no critical point for this model.
However, if a specific transformation shows no critical point, that
does not mean the model has none; maybe the transformation used was
simply not suitable. After all, in this short article I only want to
show the spirit of the renormalization group, not a good solution of
any particular model. Another important and valuable transformation
that can be used here is the majority rule, in which three
consecutive spins are transformed into one, and the value of the new
spin is decided by the majority of the three. So the interaction
between six spins is transformed into the interaction of two spins.
Keeping the value of the total free energy and the functional form of
the free energy density will lead to a new relation between K' and K.
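The decimation flow for the 1-d chain can be iterated numerically. Summing out the middle spin gives the recursion tanh K' = tanh²K; in the sketch below (starting couplings are arbitrary), every finite coupling flows to the trivial fixed point K = 0, confirming the absence of a critical point.

```python
import math

def decimate(K):
    """One decimation step for the 1-d Ising chain: summing out the
    middle spin of each triple gives tanh K' = tanh(K)**2."""
    return math.atanh(math.tanh(K) ** 2)

for K0 in (0.5, 1.0, 2.0):
    K = K0
    for _ in range(20):
        K = decimate(K)
    print(K0, "->", K)   # every finite coupling flows toward K = 0
```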
The second example is 1-d percolation. In percolation the invariant
is connectivity; any transformation should preserve connectivity. So
a natural transformation combines two consecutive sites into one. The
new site is set as occupied if and only if both of the two sites are
occupied, because only in this case does the transformation preserve
connectivity. So we have an induced parameter transformation

    p' = p²    (21)

This transformation has two fixed points, p = 0 and p = 1, both of
them trivial. So again, no critical point can be found in 1-d
percolation.
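This flow can be seen in a few lines: iterating p' = p² drives every initial occupation probability p < 1 down to the empty fixed point p = 0, while p = 1 stays put.

```python
def renormalize(p):
    """Two consecutive sites merge into one occupied site only if
    both are occupied: the induced transformation p' = p**2."""
    return p * p

for p0 in (0.3, 0.9, 0.99, 1.0):
    p = p0
    for _ in range(10):
        p = renormalize(p)
    print(p0, "->", p)   # everything below p = 1 collapses toward 0
```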
After two examples without a critical point, we now consider the
Ising model on the hierarchical diamond lattice, which is generated
by iteration as shown in figure (1). Using the transformation shown
in figure (2), we have the invariants

    …    (22)

which give us

    …    (23)

This equation has three fixed points: two trivial ones and a
nontrivial critical point K*, as shown in figure (3). The critical
exponents can be deduced by expanding the above equation near K*.
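As a concrete stand-in, the sketch below assumes the standard recursion for the Ising model on the diamond lattice, in which each bond is replaced by two two-bond paths in parallel; summing out the internal spins then gives tanh K' = 2t²/(1 + t⁴) with t = tanh K. This recursion is an assumption of this sketch, not necessarily the transformation of figure (2). The nontrivial fixed point K* is located by bisection.

```python
import math

def diamond_step(K):
    """One RG step for the diamond hierarchical lattice, in the
    standard form tanh K' = 2 t^2 / (1 + t^4) with t = tanh K.
    (Assumed recursion, standing in for equation (23).)"""
    t = math.tanh(K)
    return math.atanh(2 * t * t / (1 + t ** 4))

# Bisection on the flow direction: below K* the coupling shrinks,
# above K* it grows, so the sign of diamond_step(K) - K brackets K*.
lo, hi = 0.1, 2.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if diamond_step(mid) > mid:
        hi = mid    # coupling grows here: we are above the fixed point
    else:
        lo = mid    # coupling shrinks here: we are below the fixed point
K_star = 0.5 * (lo + hi)
print(K_star)
```

Under this assumed recursion the nontrivial fixed point sits at a finite coupling between the two trivial ones, exactly the structure the text describes for figure (3).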
The existence of critical phenomena near the critical point is the
foundation of renormalization group theory in statistical physics.
Any transformation in physics should keep the physics unchanged,
which requires that the value of the total free energy and the
functional form of the free energy density be invariant. This
condition links the rescale transformation with the critical point
and the critical exponents. But finding such an exact transformation,
which includes two parts, the scale factor and the values of the new
dynamical variables, is not an easy job. So usually some
approximations are used instead. Such a renormalization group
approach gives us a whole picture of the structure of parameter
space.
Software Developer: Scientific Project Manager
My idea for this software comes from my own research experience. Since I usually get new ideas every other day, it is really an important task for me to select the valuable ones and keep them in my mind. And the harder part is how to update them whenever I happen to think about them at some accidental moment. Even when several people actually start to realize one project, much time is spent exchanging reference papers and ideas, and short or long meetings are sometimes needed to synchronize progress. So I think that for every researcher, a piece of software that helps organize all this information and these resources would be valuable. Such software should be easy to use and developed specifically for scientists, and it should include only a small part of the whole range of functions found in commercial software such as EPM from Microsoft, which is designed to deal with the most general business projects.
The most important characteristic of this software is that it should be free and open source, so everyone can use it and improve it. It should be developed as a tool for scientists and by scientists. The second principle is readability and compatibility: everything can be read with a text editor, and the whole structure of the saved files, like the tree-like directory, can also be read by human eyes. And if a database has to be used, a converter to readable text files should be distributed together with the software.
The next part is my functional analysis of this software.
As Dirac, a genius of physics, said, mathematics is the thing you create when you need it. Knuth is the equivalent giant in computer science, I think, because when he decided to write a great and huge book, he designed the TeX system first. ^_^ Therefore, after some initial research experience, I find that maybe I should develop a project manager system when I have free time. Of course, if someone else wants to create it first for all scientists, I would really appreciate it if she/he just used this idea and structure; the only thing she/he needs to do is drop me an email to let me know someone else is trying to do it. All my analysis follows a format and ideas similar to those of the references listed below.
Reference
- Donald E. Knuth, The TeXbook (Addison-Wesley, Reading, Massachusetts, 1984).
- Leslie Lamport, LaTeX: A Document Preparation System, 2nd edition (Addison-Wesley, 1994), ISBN 0-201-52983-1.
- Reference Manager, ISI ResearchSoft, http://www.refman.com/
- EndNote, ISI ResearchSoft, http://www.endnote.com/
- Microsoft Office Enterprise Project Management (EPM) Solution, Microsoft Corporation, http://www.microsoft.com/canada/epm/default.mspx