Name: andrew m childs
Date: Around 1995
I have heard of the concept of "renormalizing" theories, which, I think,
means that you have to make infinite subtractions (albeit a finite number of
them) to get the theory to work. How can this clever mathematical trick be
used to prove anything in the real world? How are these "renormalizable"
theories any different from ones which need an infinite number of infinite
subtractions?
The reason these things work is that the "infinities" involved usually appear
in a simple, standard fashion. (The infinities generally show up in certain
integrals.) If the trick were applied to some arbitrary, horrible mathematical
summation, then you would be right: you could not prove anything.
However, the infinities appear in theories that attempt to explain the real
world, and they arise when the integration is extended to infinite momentum,
where the theories are actually pretty simple (usually the integrand is a
simple algebraic function). Of course, the regime of arbitrarily high momentum
is experimentally inaccessible, so it is perfectly reasonable to assume that
the theory breaks down somewhere along the way. If we just cut off the
integration at some arbitrarily
chosen high momentum, we should get the right answers, as long as we make
sure the integral is properly corrected (renormalized) to reproduce observed
masses and other properties. The number of subtractions needed equals the
number of such properties (like mass) that must be fixed from experiment;
once they are fixed, treating similar integrals the same way yields genuinely
new predictions.
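The cutoff-and-subtract procedure described above can be sketched numerically.
Here is a toy example (mine, not from the original answer): a logarithmically
divergent integral whose cutoff dependence drops out entirely after a single
subtraction, just as one measured quantity is used up per subtraction in a
real renormalization.

```python
import math

# Toy model of a logarithmically divergent loop integral:
#   I(m, L) = integral from m to L of dk/k = ln(L/m),
# which blows up as the momentum cutoff L -> infinity.
def cutoff_integral(m, cutoff):
    return math.log(cutoff / m)

m1, m2 = 1.0, 2.0  # two "masses" (arbitrary units, purely illustrative)
for cutoff in (1e3, 1e6, 1e9):
    raw = cutoff_integral(m1, cutoff)
    # One subtraction, anchored to the integral at a reference mass m2,
    # cancels the cutoff dependence completely:
    renormalized = cutoff_integral(m1, cutoff) - cutoff_integral(m2, cutoff)
    print(f"cutoff={cutoff:.0e}  raw={raw:.3f}  subtracted={renormalized:.6f}")
# The raw integral keeps growing with the cutoff, while the subtracted
# combination stays fixed at ln(m2/m1) = ln 2, no matter where the cutoff sits.
```

The point of the sketch is that the answer to a *difference* of divergent
integrals is finite and cutoff-independent, which is why predictions made
this way do not depend on where the theory eventually breaks down.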
The reason we believe this works is that it really does work. The process
has been applied to the quantum theory of electromagnetism and the so-called
g-factor of the electron has been evaluated theoretically in this manner to
about 12 digits, all of which agree with experimental measurements.
Update: June 2012