Chapter 9

Quantum Theory and the Structure of Matter

Oct 24, 2025

The concept of matter has undergone a great number of changes in the history of human thinking. Different interpretations have been given in different philosophical systems. All these different meanings of the word are still present in a greater or lesser degree in what we conceive in our time as the word 'matter'.

The early Greek philosophy from Thales to the Atomists, in seeking the unifying principle in the universal mutability of all things, had formed the concept of cosmic matter, a world substance which experiences all these transformations, from which all individual things arise and into which they become again transformed. This matter was partly identified with some specific matter like water or air or fire; only partly, because it had no other attribute but to be the material from which all things are made.

Later, in the philosophy of Aristotle, matter was thought of in the relation between form and matter. All that we perceive in the world of phenomena around us is formed matter. Matter is in itself not a reality but only a possibility, a 'potentia'; it exists only by means of form.

In the natural process the 'essence,' as Aristotle calls it, passes over from mere possibility through form into actuality. The matter of Aristotle is certainly not a specific matter like water or air, nor is it simply empty space; it is a kind of indefinite corporeal substratum, embodying the possibility of passing over into actuality by means of the form. The typical examples of this relation between matter and form in the philosophy of Aristotle are the biological processes in which matter is formed to become the living organism, and the building and forming activity of man. The statue is potentially in the marble before it is cut out by the sculptor.

Then, much later, starting from the philosophy of Descartes, matter was primarily thought of as opposed to mind. There were the two complementary aspects of the world, 'matter' and 'mind,' or, as Descartes put it, the 'res extensa' and the 'res cogitans.' Since the new methodical principles of natural science, especially of mechanics, excluded all tracing of corporeal phenomena back to spiritual forces, matter could be considered as a reality of its own independent of the mind and of any supernatural powers. The 'matter' of this period is 'formed matter,' the process of formation being interpreted as a causal chain of mechanical interactions; it has lost its connection with the vegetative soul of Aristotelian philosophy, and therefore the dualism between matter and form is no longer relevant. It is this concept of matter which constitutes by far the strongest component in our present use of the word 'matter.'

Finally, in the natural science of the nineteenth century another dualism has played some role, the dualism between matter and force. Matter is that on which forces can act; or matter can produce forces. Matter, for instance, produces the force of gravity, and this force acts on matter. Matter and force are two distinctly different aspects of the corporeal world.
In so far as the forces may be formative forces this distinction comes closer to the Aristotelian distinction of matter and form. On the other hand, in the most recent development of modern physics this distinction between matter and force is completely lost, since every field of force contains energy and in so far constitutes matter. To every field of force there belongs a specific kind of elementary particles with essentially the same properties as all other atomic units of matter. When natural science investigates the problem of matter it can do so only through a study of the forms of matter. The infinite variety and mutability of the forms of matter must be the immediate object of the investigation, and the efforts must be directed toward finding some natural laws, some unifying principles that can serve as a guide through this immense field. Therefore, natural science — and especially physics — has concentrated its interest for a long period on an analysis of the structure of matter and of the forces responsible for this structure.

Since the time of Galileo the fundamental method of natural science had been the experiment. This method made it possible to pass from general experience to specific experience, to single out characteristic events in nature from which its 'laws' could be studied more directly than from general experience. If one wanted to study the structure of matter one had to do experiments with matter. One had to expose matter to extreme conditions in order to study its transmutations there, in the hope of finding the fundamental features of matter which persist under all apparent changes. In the early days of modern natural science this was the object of chemistry, and this endeavor led rather early to the concept of the chemical element. A substance that could not be further dissolved or disintegrated by any of the means at the disposal of the chemist — boiling, burning, dissolving, mixing with other substances, etc. — was called an element.
The introduction of this concept was a first and most important step toward an understanding of the structure of matter. The enormous variety of substances was at least reduced to a comparatively small number of more fundamental substances, the 'elements,' and thereby some order could be established among the various phenomena of chemistry. The word 'atom' was consequently used to designate the smallest unit of matter belonging to a chemical element, and the smallest particle of a chemical compound could be pictured as a small group of different atoms. The smallest particle of the element iron, e.g., was an iron atom, and the smallest particle of water, the water molecule, consisted of one oxygen atom and two hydrogen atoms.

The next and almost equally important step was the discovery of the conservation of mass in the chemical process. For instance, when the element carbon is burned into carbon dioxide the mass of the carbon dioxide is equal to the sum of the masses of the carbon and the oxygen before the process. It was this discovery that gave a quantitative meaning to the concept of matter: independent of its chemical properties, matter could be measured by its mass.

During the following period, mainly the nineteenth century, a number of new chemical elements were discovered; in our time this number has reached one hundred. This development showed quite clearly that the concept of the chemical element had not yet reached the point where one could understand the unity of matter. It was not satisfactory to believe that there are very many kinds of matter, qualitatively different and without any connection between one another. In the beginning of the nineteenth century some evidence for a connection between the different elements was found in the fact that the atomic weights of different elements frequently seemed to be integer multiples of a smallest unit near to the atomic weight of hydrogen.
The similarity in the chemical behavior of some elements was another hint leading in the same direction. But only the discovery of forces much stronger than those applied in chemical processes could really establish the connection between the different elements and thereby lead to a closer unification of matter. These forces were actually found in the radioactive process discovered in 1896 by Becquerel. Successive investigations by Curie, Rutherford and others revealed the transmutation of elements in the radioactive process. The α-particles are emitted in these processes as fragments of the atoms, with an energy about a million times greater than the energy of a single atomic particle in a chemical process. Therefore, these particles could be used as new tools for investigating the inner structure of the atom. The result of Rutherford's experiments on the scattering of α-rays was the nuclear model of the atom in 1911. The most important feature of this well-known model was the separation of the atom into two distinctly different parts, the atomic nucleus and the surrounding electronic shells. The nucleus in the middle of the atom occupies only an extremely small fraction of the space filled by the atom (its radius is about a hundred thousand times smaller than that of the atom), but contains almost its entire mass. Its positive electric charge, which is an integer multiple of the so-called elementary charge, determines the number of the surrounding electrons – the atom as a whole must be electrically neutral – and the shapes of their orbits. This distinction between the atomic nucleus and the electronic shells at once gave a proper explanation of the fact that for chemistry the chemical elements are the last units of matter and that very much stronger forces are required to change the elements into each other.
The chemical bond between neighboring atoms is due to an interaction of the electronic shells, and the energies of this interaction are comparatively small. An electron that is accelerated in a discharge tube by a potential of only several volts has sufficient energy to excite the electronic shells to the emission of radiation, or to destroy the chemical bond in a molecule. But the chemical behavior of the atom, though it consists of the behavior of its electronic shells, is determined by the charge of the nucleus. One has to change the nucleus if one wants to change the chemical properties, and this requires energies about a million times greater.

The nuclear model of the atom, however, if it is thought of as a system obeying Newton's mechanics, could not explain the stability of the atom. As has been pointed out in an earlier chapter, only the application of quantum theory to this model through the work of Bohr could account for the fact that, for example, a carbon atom, after having been in interaction with other atoms or after having emitted radiation, always finally remains a carbon atom with the same electronic shells as before. This stability could be explained simply by those features of quantum theory that prevent a simple objective description in space and time of the structure of the atom.

In this way one finally had a first basis for the understanding of matter. The chemical and other properties of the atoms could be accounted for by applying the mathematical scheme of quantum theory to the electronic shells. From this basis one could try to extend the analysis of the structure of matter in two opposite directions. One could either study the interaction of atoms, their relation to larger units like molecules or crystals or biological objects; or one could try through the investigation of the atomic nucleus and its components to penetrate to the final unity of matter.
Research has proceeded on both lines during the past decades and we shall in the following pages be concerned with the role of quantum theory in these two fields. The forces between neighboring atoms are primarily electric forces, the attraction of opposite and the repulsion of equal charges; the electrons are attracted by the nuclei and repelled from each other. But these forces act not according to the laws of Newtonian mechanics but those of quantum mechanics.

This leads to two different types of binding between atoms. In the one type the electron of one atom passes over to the other one, for example, to fill up a nearly closed electronic shell. In this case both atoms are finally charged and form what the physicist calls ions, and since their charges are opposite they attract each other. In the second type one electron belongs, in a way characteristic of quantum theory, to both atoms. Using the picture of the electronic orbit, one might say that the electron goes around both nuclei, spending a comparable amount of time in the one and in the other atom. This second type of binding corresponds to what the chemists call a valency bond. These two types of forces, which may occur in any mixture, cause the formation of various groupings of atoms and seem to be ultimately responsible for all the complicated structures of matter in bulk that are studied in physics and chemistry. The formation of chemical compounds takes place through the formation of small closed groups of different atoms, each group being one molecule of the compound. The formation of crystals is due to the arrangement of the atoms in regular lattices. Metals are formed when the atoms are so tightly packed that their outer electrons can leave their shells and wander through the whole crystal. Magnetism is due to the spinning motion of the electron, and so on.
In all these cases the dualism between matter and force can still be retained, since one may consider nuclei and electrons as the fragments of matter that are kept together by means of the electromagnetic forces.

While in this way physics and chemistry have come to an almost complete union in their relations to the structure of matter, biology deals with structures of a more complicated and somewhat different type. It is true that in spite of the wholeness of the living organism a sharp distinction between animate and inanimate matter can certainly not be made. The development of biology has supplied us with a great number of examples where one can see that specific biological functions are carried by special large molecules or groups or chains of such molecules, and there has been an increasing tendency in modern biology to explain biological processes as consequences of the laws of physics and chemistry. But the kind of stability that is displayed by the living organism is of a nature somewhat different from the stability of atoms or crystals. It is a stability of process or function rather than a stability of form. There can be no doubt that the laws of quantum theory play a very important role in the biological phenomena. For instance, those specific quantum-theoretical forces that can be described only inaccurately by the concept of chemical valency are essential for the understanding of the big organic molecules and their various geometrical patterns; the experiments on biological mutations produced by radiation show both the relevance of the statistical quantum-theoretical laws and the existence of amplifying mechanisms. The close analogy between the working of our nervous system and the functioning of modern electronic computers stresses again the importance of single elementary processes in the living organism.
Still all this does not prove that physics and chemistry will, together with the concept of evolution, someday offer a complete description of the living organism. The biological processes must be handled by the experimenting scientist with greater caution than processes of physics and chemistry. As Bohr has pointed out, it may well be that a description of the living organism that could be called complete from the standpoint of the physicist cannot be given, since it would require experiments that interfere too strongly with the biological functions. Bohr has described this situation by saying that in biology we are concerned with manifestations of possibilities in that nature to which we belong rather than with outcomes of experiments which we can ourselves perform. The situation of complementarity to which this formulation alludes is represented as a tendency in the methods of modern biological research which, on the one hand, makes full use of all the methods and results of physics and chemistry and, on the other hand, is based on concepts referring to those features of organic nature that are not contained in physics or chemistry, like the concept of life itself.

So far we have followed the analysis of the structure of matter in one direction: from the atom to the more complicated structures consisting of many atoms; from atomic physics to the physics of solid bodies, to chemistry and to biology. Now we have to turn to the opposite direction and follow the line of research from the outer parts of the atom to the inner parts and from the nucleus to the elementary particles. It is this line which will possibly lead to an understanding of the unity of matter. Here we need not be afraid of destroying characteristic structures by our experiments.
When the task is set to test the final unity of matter we may expose matter to the strongest possible forces, to the most extreme conditions, in order to see whether any matter can ultimately be transmuted into any other matter.

The first step in this direction was the experimental analysis of the atomic nucleus. In the initial period of these studies, which filled approximately the first three decades of our century, the only tools available for experiments on the nucleus were the α-particles emitted from radioactive bodies. With the help of these particles Rutherford succeeded in 1919 in transmuting nuclei of light elements; he could, for instance, transmute a nitrogen nucleus into an oxygen nucleus by adding the α-particle to the nitrogen nucleus and at the same time knocking out one proton. This was the first example of processes on a nuclear scale that reminded one of chemical processes, but led to the artificial transmutation of elements. The next substantial progress was, as is well known, the artificial acceleration of protons by means of high-tension equipment to energies sufficient to cause nuclear transmutation. Voltages of roughly one million volts are required for this purpose, and Cockcroft and Walton in their first decisive experiment succeeded in transmuting nuclei of the element lithium into those of helium. This discovery opened up an entirely new line of research, which may be called nuclear physics in the proper sense and which very soon led to a qualitative understanding of the structure of the atomic nucleus. The structure of the nucleus was indeed very simple: the atomic nucleus consists of only two kinds of elementary particles. The one is the proton, which is at the same time simply the hydrogen nucleus; the other is called the neutron, a particle which has roughly the mass of the proton but is electrically neutral. Every nucleus can be characterized by the number of protons and neutrons of which it consists.
The normal carbon nucleus, for instance, consists of 6 protons and 6 neutrons. There are other carbon nuclei, less frequent in number (called isotopic to the first ones), that consist of 6 protons and 7 neutrons, etc. So one had finally reached a description of matter in which, instead of the many different chemical elements, only three fundamental units occurred: the proton, the neutron and the electron. All matter consists of atoms and therefore is constructed from these three fundamental building stones. This was not yet the unity of matter, but certainly a great step toward unification and — perhaps still more important — simplification.

There was of course still a long way to go from the knowledge of the two building stones of the nucleus to a complete understanding of its structure. The problem here was somewhat different from the corresponding problem in the outer atomic shells that had been solved in the middle of the twenties. In the electronic shells the forces between the particles were known with great accuracy, but the dynamic laws had to be found, and were found in quantum mechanics. In the nucleus the dynamic laws could well be supposed to be just those of quantum mechanics, but the forces between the particles were not known beforehand; they had to be derived from the experimental properties of the nuclei. This problem has not yet been completely solved. The forces have probably not such a simple form as the electrostatic forces in the electronic shells, and therefore the mathematical difficulty of computing the properties from complicated forces and the inaccuracy of the experiments make progress difficult. But a qualitative understanding of the structure of the nucleus has definitely been reached. Then there remained the final problem, the unity of matter.
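The characterization just described — every nucleus identified by its proton and neutron counts, with isotopes sharing the proton number — can be written down directly. The following is a small illustrative sketch, not part of the original text:

```python
from dataclasses import dataclass

# A nucleus is fully characterized by its proton and neutron counts,
# as described above; the proton count fixes the chemical element.
@dataclass(frozen=True)
class Nucleus:
    protons: int    # nuclear charge number, determines the element
    neutrons: int

    @property
    def mass_number(self) -> int:
        # total count of nuclear building stones
        return self.protons + self.neutrons

carbon12 = Nucleus(protons=6, neutrons=6)
carbon13 = Nucleus(protons=6, neutrons=7)  # an isotope: same element, extra neutron

print(carbon12.mass_number, carbon13.mass_number)  # -> 12 13
```

Since the proton number alone determines the chemistry, the two nuclei above are both carbon despite their different masses.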
Are these fundamental building stones — proton, neutron and electron — final indestructible units of matter, atoms in the sense of Democritus, without any relation except for the forces that act between them, or are they just different forms of the same kind of matter? Can they again be transmuted into each other and possibly into other forms of matter as well? An experimental attack on this problem requires forces and energies concentrated on atomic particles much larger than those that have been necessary to investigate the atomic nucleus. Since the energies stored up in atomic nuclei are not big enough to provide us with a tool for such experiments, the physicists have to rely either on the forces in cosmic dimensions or on the ingenuity and skill of the engineers.

Actually, progress has been made on both lines. In the first case the physicists make use of the so-called cosmic radiation. The electromagnetic fields on the surface of stars extending over huge spaces are under certain circumstances able to accelerate charged atomic particles, electrons and nuclei. The nuclei, owing to their greater inertia, seem to have a better chance of remaining in the accelerating field for a long distance, and finally when they leave the surface of the star into empty space they have already traveled through potentials of several thousand million volts. There may be a further acceleration in the magnetic fields between the stars; in any case the nuclei seem to be kept within the space of the galaxy for a long time by varying magnetic fields, and finally they fill this space with what one calls cosmic radiation. This radiation reaches the earth from the outside and consists of nuclei of practically all kinds, hydrogen and helium and many heavier elements, having energies from roughly a hundred or a thousand million electron volts to, again in rare cases, a million times this amount.
When the particles of this cosmic radiation penetrate into the atmosphere of the earth they hit the nitrogen atoms or oxygen atoms of the atmosphere, or may hit the atoms in any experimental equipment exposed to the radiation.

The other line of research was the construction of big accelerating machines, the prototype of which was the so-called cyclotron constructed by Lawrence in California in the early thirties. The underlying idea of these machines is to keep, by means of a big magnetic field, the charged particles going round in circles a great number of times, so that they can be pushed again and again by electric fields on their way around. Machines reaching up to energies of several hundred million electron volts are in use in Great Britain, and through the co-operation of twelve European countries a very big machine of this type is now being constructed in Geneva which we hope will reach up to energies of 25,000 million electron volts.

The experiments carried out by means of cosmic radiation or of the big accelerators have revealed new interesting features of matter. Besides the three fundamental building stones of matter – electron, proton and neutron – new elementary particles have been found which can be created in these processes of highest energies and disappear again after a short time. The new particles have properties similar to those of the old ones except for their instability. Even the most stable ones have lifetimes of roughly only a millionth part of a second, and the lifetimes of others are even a thousand times smaller. At the present time about twenty-five different new elementary particles are known; the most recent one is the negative proton.
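The cyclotron principle mentioned above — a magnetic field keeping charged particles circling so that an electric field can push them on every turn — works because, non-relativistically, the revolution frequency does not depend on the particle's speed. A minimal numerical sketch (the 1.5 tesla field strength is an assumption chosen for illustration, not a figure from the text):

```python
import math

# In a uniform magnetic field B, a particle of charge q and mass m
# circles at the cyclotron frequency f = q*B / (2*pi*m). Because f is
# independent of speed, a fixed-frequency alternating electric field
# stays in step with the particle and can accelerate it on every lap.

Q_PROTON = 1.602176634e-19   # C, elementary charge
M_PROTON = 1.67262192e-27    # kg, proton rest mass

def cyclotron_frequency_hz(charge: float, mass: float, b_tesla: float) -> float:
    return charge * b_tesla / (2.0 * math.pi * mass)

# Proton in an assumed 1.5 T magnet:
f = cyclotron_frequency_hz(Q_PROTON, M_PROTON, 1.5)
print(f"{f / 1e6:.1f} MHz")  # revolution frequency of the circling proton
```

As the particle approaches the velocity of light its effective mass grows and the frequency drifts, which is one reason simple fixed-frequency cyclotrons cannot reach the very highest energies mentioned above.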

These results seem at first sight to lead away from the idea of the unity of matter, since the number of fundamental units of matter seems to have again increased to values comparable to the number of different chemical elements. But this would not be a proper interpretation. The experiments have at the same time shown that the particles can be created from other particles or simply from the kinetic energy of such particles, and they can again disintegrate into other particles. Actually the experiments have shown the complete mutability of matter. All the elementary particles can, at sufficiently high energies, be transmuted into other particles, or they can simply be created from kinetic energy and can be annihilated into energy, for instance into radiation. Therefore, we have here actually the final proof for the unity of matter. All the elementary particles are made of the same substance, which we may call energy or universal matter; they are just different forms in which matter can appear.

If we compare this situation with the Aristotelian concepts of matter and form, we can say that the matter of Aristotle, which is mere 'potentia,' should be compared to our concept of energy, which gets into 'actuality' by means of the form, when the elementary particle is created.

Modern physics is of course not satisfied with only a qualitative description of the fundamental structure of matter; it must try, on the basis of careful experimental investigations, to get a mathematical formulation of those natural laws that determine the 'forms' of matter, the elementary particles and their forces. A clear distinction between matter and force can no longer be made in this part of physics, since each elementary particle not only is producing some forces and is acted upon by forces, but is at the same time representing a certain field of force. The quantum-theoretical dualism of waves and particles makes the same entity appear both as matter and as force.

All the attempts to find a mathematical description for the laws concerning the elementary particles have so far started from the quantum theory of wave fields. Theoretical work on theories of this type started early in the thirties. But the very first investigations on this line revealed serious difficulties, the roots of which lay in the combination of quantum theory and the theory of special relativity. At first sight it would seem that the two theories, quantum theory and the theory of relativity, refer to such different aspects of nature that they should have practically nothing to do with each other, that it should be easy to fulfill the requirements of both theories in the same formalism. A closer inspection, however, shows that the two theories do interfere at one point, and that it is from this point that all the difficulties arise.

The theory of special relativity had revealed a structure of space and time somewhat different from the structure that had generally been assumed since Newtonian mechanics.
The most characteristic feature of this newly discovered structure is the existence of a maximum velocity that cannot be surpassed by any moving body or any traveling signal, the velocity of light. As a consequence of this, two events at distant points cannot have any immediate causal connection if they take place at such times that a light signal starting at the instant of the event on one point reaches the other point only after the time the other event has happened there; and vice versa. In this case the two events may be called 'simultaneous.' Since no action of any kind can reach from the one event at the one point in time to the other event at the other point, the two events are not connected by any causal action. For this reason any action at a distance of the type, say, of the gravitational forces in Newtonian mechanics was not compatible with the theory of special relativity. The theory had to replace such action by actions from point to point, from one point only to the points in an infinitesimal neighborhood. The most natural mathematical expressions for actions of this type were the differential equations for waves or fields that were invariant under the Lorentz transformation. Such differential equations exclude any direct action between 'simultaneous' events. Therefore, the structure of space and time expressed in the theory of special relativity implied an infinitely sharp boundary between the region of simultaneousness, in which no action could be transmitted, and the other regions, in which a direct action from event to event could take place.

On the other hand, in quantum theory the uncertainty relations put a definite limit on the accuracy with which positions and momenta, or time and energy, can be measured simultaneously.
Since an infinitely sharp boundary means an infinite accuracy with respect to position in space and time, the momenta or energies must be completely undetermined, or in fact arbitrarily high momenta and energies must occur with overwhelming probability. Therefore, any theory which tries to fulfill the requirements of both special relativity and quantum theory will lead to mathematical inconsistencies, to divergencies in the region of very high energies and momenta. This sequence of conclusions may perhaps not seem strictly binding, since any formalism of the type under consideration is very complicated and could perhaps offer some mathematical possibilities for avoiding the clash between quantum theory and relativity. But so far all the mathematical schemes that have been tried did in fact lead either to divergencies, i.e., to mathematical contradictions, or did not fulfill all the requirements of the two theories. And it was easy to see that the difficulties actually came from the point that has been discussed. The way in which the convergent mathematical schemes did not fulfill the requirements of relativity or quantum theory was in itself quite interesting. For instance, one scheme, when interpreted in terms of actual events in space and time, led to a kind of time reversal; it would predict processes in which suddenly at some point in space particles are created, the energy of which is later provided for by some other collision process between elementary particles at some other point. The physicists are convinced from their experiments that processes of this type do not occur in nature, at least not if the two processes are separated by measurable distances in space and time. 
Another mathematical scheme tried to avoid the divergencies through a mathematical process which is called renormalization; it seemed possible to push the infinities to a place in the formalism where they could not interfere with the establishment of the well-defined relations between those quantities that can be directly observed. Actually this scheme has led to very substantial progress in quantum electrodynamics, since it accounts for some interesting details in the hydrogen spectrum that had not been understood before. A closer analysis of this mathematical scheme, however, has made it probable that those quantities which in normal quantum theory must be interpreted as probabilities can under certain circumstances become negative in the formalism of renormalization. This would prevent the consistent use of the formalism for the description of matter.

The final solution of these difficulties has not yet been found. It will emerge someday from the collection of more and more accurate experimental material about the different elementary particles, their creation and annihilation, and the forces between them. In looking for possible solutions of the difficulties one should perhaps remember that such processes with time reversal as have been discussed before could not be excluded experimentally, if they took place only within extremely small regions of space and time outside the range of our present experimental equipment. Of course one would be reluctant to accept such processes with time reversal if there could be at any later stage of physics the possibility of following experimentally such events in the same sense as one follows ordinary atomic events. But here the analysis of quantum theory and of relativity may again help us to see the problem in a new light.

The theory of relativity is connected with a universal constant in nature, the velocity of light.
This constant determines the relation between space and time and is therefore implicitly contained in any natural law which must fulfill the requirements of Lorentz invariance. Our natural language and the concepts of classical physics can apply only to phenomena for which the velocity of light can be considered as practically infinite. When we in our experiments approach the velocity of light we must be prepared for results which cannot be interpreted in these concepts.

Quantum theory is connected with another universal constant of nature, Planck's quantum of action. An objective description for events in space and time is possible only when we have to deal with objects or processes on a comparatively large scale, where Planck's constant can be regarded as infinitely small. When our experiments approach the region where the quantum of action becomes essential we get into all those difficulties with the usual concepts that have been discussed in earlier chapters of this volume.

There must exist a third universal constant in nature. This is obvious for purely dimensional reasons. The universal constants determine the scale of nature, the characteristic quantities that cannot be reduced to other quantities. One needs at least three fundamental units for a complete set of units. This is most easily seen from such conventions as the use of the c-g-s system (centimeter-gram-second system) by the physicists. A unit of length, one of time, and one of mass are sufficient to form a complete set; but one must have at least three units. One could also replace them by units of length, velocity and mass; or by units of length, velocity and energy, etc. But at least three fundamental units are necessary. Now, the velocity of light and Planck's constant of action provide only two of these units. There must be a third one, and only a theory which contains this third unit can possibly determine the masses and other properties of the elementary particles.
Judging from our present knowledge of these particles the most appropriate way of introducing the third universal constant would be by the assumption of a universal length the value of which should be roughly 10⁻¹³ cm, that is, somewhat smaller than the radii of the light atomic nuclei. When from these three units one forms an expression which in its dimension corresponds to a mass, its value has the order of magnitude of the masses of the elementary particles.
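The dimensional argument can be checked numerically. The only combination of Planck's constant ħ, the velocity of light c, and a length l that has the dimension of a mass is ħ/(c·l). The following sketch uses modern values of the constants (not figures from the text) and the assumed universal length of 10⁻¹³ cm:

```python
# Dimensional-analysis sketch: combining the two known universal
# constants with a hypothetical universal length of ~10^-13 cm to form
# a quantity with the dimension of a mass.

HBAR = 1.054571817e-34   # J*s, reduced Planck constant
C = 2.99792458e8         # m/s, velocity of light
LENGTH = 1.0e-15         # m, i.e. 10^-13 cm, the assumed universal length

# The unique mass-dimensioned combination: m = hbar / (c * l)
mass = HBAR / (C * LENGTH)

# Express the corresponding rest energy in MeV for comparison with
# elementary-particle masses (1 MeV = 1.602176634e-13 J).
energy_mev = mass * C**2 / 1.602176634e-13

print(f"mass  = {mass:.3e} kg")
print(f"m c^2 = {energy_mev:.1f} MeV")
# For comparison: electron ~0.5 MeV, pi meson ~140 MeV, proton ~938 MeV;
# the result lands squarely in the elementary-particle range.
```

The result, roughly 3.5 × 10⁻²⁸ kg (about 197 MeV), lies between the masses of the lighter and heavier elementary particles, which is the order-of-magnitude agreement the paragraph above appeals to.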

If we assume that the laws of nature do contain a third universal constant of the dimension of a length and of the order of 10⁻¹³ cm, then we would again expect our usual concepts to apply only to regions in space and time that are large as compared to the universal constant. We should again be prepared for phenomena of a qualitatively new character when we in our experiments approach regions in space and time smaller than the nuclear radii. The phenomenon of time reversal, which has been discussed and which so far has only resulted from theoretical considerations as a mathematical possibility, might therefore belong to these smallest regions. If so, it could probably not be observed in a way that would permit a description in terms of the classical concepts. Such processes would probably, so far as they can be observed and described in classical terms, obey the usual time order.

But all these problems will be a matter of future research in atomic physics. One may hope that the combined effort of experiments in the high energy region and of mathematical analysis will someday lead to a complete understanding of the unity of matter. The term 'complete understanding' would mean that the forms of matter in the sense of Aristotelian philosophy would appear as results, as solutions of a closed mathematical scheme representing the natural laws for matter.
