In the Beginning

“I can see no escape from the conclusion that [cathode rays] are charges of electricity carried by particles of matter. What are these particles? Are they atoms, or molecules, or matter in a still finer state of subdivision?” (1897 Experiments, J. J. Thomson)

And so it begins, the modern search for the building blocks of matter. What are we made of? What are the smallest constituents of all matter? What do they all have in common? What is different? What holds all the matter together? Where did we come from and where are we going? The search for the building blocks goes back to the days of Aristotle and has always had one goal: to simplify our understanding of nature.

Aristotle believed that there were four elements that comprised nature: earth, water, air, and fire. Democritus, a Greek philosopher who preceded Aristotle, argued instead that matter could be divided into smaller and smaller pieces only until a piece was reached that could be divided no further. Our present word atom comes from Democritus’ use of the Greek word for indivisible, atomos. Aristotle’s theory of the four elements survived until the 18^{th} and 19^{th} centuries, when these four elements were replaced by our modern chemical elements.

At first there were only a couple dozen chemical elements, but this number soon grew to nearly 100. Science had gone from a simple model (four building blocks) to a much more complex one (nearly 100 building blocks). Could 100 building blocks all be fundamental? Another change was about to occur: with the discovery of the structure of the atom, the idea of a small set of indivisible constituents of matter returned. The atom was made up of three building blocks, and it appeared that a simpler model had been restored. This is where our chapter truly begins…with the discovery of these three “fundamental” particles.

Discovery of the Electron

In the mid–19^{th} century, many scientists traveled the country presenting lectures on various scientific ideas. One of the topics that most delighted audiences involved a glass tube and high voltage. Most of the air was pumped out of the glass tube, wires were connected at either end, and a high voltage was applied across the tube; to the amazement of the audience, the interior of the tube would glow! This device was called a Crookes tube or cathode ray tube. To the audience, all that mattered was the mysterious glow that appeared within the tube, but to scientists the main question was “What causes the glow?” Most believed that some kind of ray was being emitted from the cathode. But what was this ray made of…was it a wave or a particle? The dominant theory of the time held that light was a wave, but there was also the idea that the ray might be some unknown type of particle. Was it a wave traveling through the invisible fluid known as the ether, or a particle that emerged from the ether? The search for an answer was the mission of the British physicist J. J. Thomson.

Cathode Ray Tube


As a result of Maxwell’s work in the 1860s, it was known that all electromagnetic waves, including visible light, travel at a speed of 3 \times 10^8 \;\mathrm{m/s} in a vacuum. Experimentation with cathode rays showed that their direction of travel could be altered by placing the tube in a magnetic field. With these two ideas in mind, J. J. Thomson began his experimentation on the mystery of the cathode rays. In 1894, he decided to measure the velocity of the cathode rays; comparing it to the speed of an electromagnetic wave might reveal something about their nature. Through the use of mirrors and the cathode rays, Thomson determined the velocity of the rays to be approximately 200,000 meters per second, significantly less than the speed of light. It appeared, then, that cathode rays were not electromagnetic waves but small particles. This result was not widely celebrated by the scientific community, but it did lead to further experimentation by other scientists.

The rays are influenced by a magnetic field, and they travel much more slowly than an electromagnetic wave. From this evidence alone one might conclude that the rays are particles, but Thomson did not stop there. He continued to use electric and magnetic fields to determine how strongly they influenced the motion of the rays. The first conclusion he reached through this line of inquiry was that the rays must be particles or, as he called them, “corpuscles.” Thomson found that the mysterious stream would bend toward a positively charged electric plate. He theorized, and was later proven correct, that the stream was in fact made up of small particles, pieces of atoms that carried a negative charge. These particles later became known as electrons. Thomson was unable to determine the mass of the electron, but he was able to determine its charge-to-mass ratio, \mathrm{q/m}. He knew the \mathrm{q/m} for the hydrogen ion, and it was much smaller than the \mathrm{q/m} for the cathode rays; from this he inferred that the mass of the new particle was much smaller than the mass of the charged hydrogen atom. Thomson then “… made a bold speculative leap. Cathode rays are not only material particles, he suggested, but in fact the building blocks of the atom: they are the long-sought basic unit of all matter in the universe.” (1897 Experiments, J. J. Thomson)

Based on his belief that the atom is divisible and consists of smaller blocks, namely the electron, Thomson then developed a model for the atom. It has been called the “plum pudding model”: the atom is represented as a positively charged ball with negatively charged particles embedded inside. This model was the accepted explanation for the structure of the atom until Ernest Rutherford’s gold foil experiment led to a new model in 1911.

Thomson’s Plum Pudding Model


Discovery of the Proton

In 1909, an experiment intended to verify Thomson’s plum pudding model was conducted under the guidance of Ernest Rutherford. Hans Geiger and Ernest Marsden, Rutherford’s students, directed alpha particles (the nuclei of helium atoms) at a very thin sheet of gold foil. Based on the plum pudding model, the alpha particles should have barely been deflected, if at all: their momentum was so large that they should not be influenced by the relatively small mass of the electrons or by the positive charge spread thinly throughout the atom. However, Geiger and Marsden observed that a small number of the particles were deflected through large angles, with some even reflecting back toward the source.

Expected and actual results from Rutherford's gold foil experiment


Geiger and Marsden spent many hours in a darkened room using a low–powered microscope to “see” tiny flashes of light on a scintillator screen. Foils of a variety of metals and thicknesses were used. Given the relatively high momentum of the alpha particles, they expected the particles to pass through with minimal or no deflection, and for the majority of events this held true. Amazingly, however, approximately 1 in every 8,000 particles was deflected through an angle greater than 90 degrees. Rutherford later remarked, "It was almost as incredible as if you fired a fifteen-inch shell at a piece of tissue paper and it came back and hit you." This observation was completely unexpected and appeared to contradict Thomson’s plum pudding model.

Rutherford’s gold foil scattering experiment


In 1911, Rutherford published a new atomic model stating that the atom contains a very small concentration of positive charge capable of repelling alpha particles that come close enough. He went on to state that the atom is mostly empty space, with most of its mass concentrated in the center, and that the electrons are held in orbit around it by electrostatic attraction. The center of the atom is called the nucleus. The idea of a massive, positively charged nucleus explained the observations of Geiger and Marsden: the alpha particles that passed close to the nucleus were deflected through varying angles, while the majority passed relatively far away and experienced little or no deflection.

Over the next 10 years, Rutherford and many other physicists continued to explore the components of the atom. It was widely accepted that positively charged particles were contained within the nucleus. It was believed that the positive charge of any nucleus could be accounted for by an integer number of hydrogen nuclei. Rutherford was the first to refer to these hydrogen nuclei as protons in 1920.

Discovery of the Neutron

Ernest Rutherford continued to play a significant role in the discovery of the building blocks of matter. As physicists continued to study atomic events, they noticed that an atom’s atomic number and atomic mass did not match up: the atomic number (the number of protons) could not account for the measured atomic mass. Because the electron’s mass is so small, the prevailing thought was that something besides the proton must be adding to the overall mass of the atom. The main theory, put forward by Rutherford, stated that additional electrons and protons, coupled together inside the nucleus, formed a neutral particle. This new particle, called the neutron, would not influence the overall charge of the atom, but would account for the missing mass.

At this point, Rutherford appointed a former student, James Chadwick, to the post of Assistant Director of his lab at Cambridge University. Chadwick spent the next ten years tracking down this elusive particle, and it was not until certain experiments carried out in Europe came to his attention that he made real progress. Chadwick repeated those experiments with the goal of finding a neutral particle—one with roughly the same mass as a proton, but with zero charge. His experiments were successful: he was able to show that the neutron did exist and that its mass was slightly greater than the proton’s. The third component of the atom had been discovered. The model of the atom now consisted of a nucleus made up of positively charged protons and neutral neutrons, with negatively charged electrons moving through the empty space surrounding the nucleus.

Rutherford’s Planetary Model of the Atom


One More Particle—the Photon

Long before the proton and neutron were discovered, another fundamental particle was found—the photon. In 1900, Max Planck presented the revolutionary theory that energy is not continuous, but comes in tiny, discrete chunks. Each chunk, or quantum, carries an energy E = hf: the energy of the quantum, E, equals Planck’s constant, h, multiplied by the frequency of oscillation, f, of the electromagnetic wave. The value of h is 6.63 \times 10^{-34} \;\mathrm{Js}. These energy packets are so small that we don’t notice them in our everyday experience; on our normal scale of events, energy seems continuous. The motion of a ball down an inclined plane looks continuous, but according to quantum theory it is actually rolling down a set of extremely tiny stairs, jumping from one level to the next.
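To get a feel for how small these quanta are, we can evaluate Planck’s relationship for a single photon of visible light. The short Python sketch below uses the constant given above; the 700 nm wavelength for red light is an illustrative choice, not a value from the text.

```python
# Energy of a single quantum, E = h * f (Planck's relation).
h = 6.63e-34                      # Planck's constant, J*s
c = 3.0e8                         # speed of light, m/s

wavelength_red = 700e-9           # ~700 nm red light (illustrative choice)
f = c / wavelength_red            # frequency of the wave, Hz
E = h * f                         # energy of one photon, J

print(f"f = {f:.2e} Hz")
print(f"E = {E:.2e} J")
```

The result, a few times 10^{-19} \;\mathrm{J} per photon, shows why individual quanta go unnoticed on the everyday scale.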

At approximately the same time, another phenomenon was discovered that connected electricity, light, and atomic theory. It was found that when light shines on certain metallic surfaces, electrons are ejected from them. This is known as the photoelectric effect. In some way the light gives up its energy to the electrons in the metal, causing them to be released and produce a current. However, not all colors of light will cause a current to flow. Two aspects of this experiment cannot be explained by the classical theory of light (i.e., electricity and magnetism): (1) no matter how bright a red light one used, a current was never produced, yet a very dim blue light would produce one; and (2) the current is observed immediately, not after the delay of several minutes predicted by classical theory.

The Photoelectric Effect


The problem was that these results could not be explained if light is thought of as a wave. Waves can have any amount of energy you want—big waves have a lot of energy, small waves have very little. And if light is a wave, then the brightness of the light determines the amount of energy: the brighter the light, the bigger the wave, the more energy it carries. Yet experiment showed that the color of the light, not its brightness, determined whether electrons were ejected: blue light worked where red light did not, with yellow somewhere in between. If light is a wave, a very bright red light should carry at least as much energy as a dim blue light—so why won’t the bright red light produce a current when the dim blue light will? In 1905, Einstein took Planck’s revolutionary idea about the quantization of energy and applied it to the photoelectric effect. Although it was universally agreed that light was a wave phenomenon, he realized that the only way to explain the photoelectric effect was to say that light is actually made up of many small packets of energy, called photons, that behave like particles (see the Photoelectric Effect applet at http://www.lon-capa.org/~mmp/kap28/PhotoEffect/photo.htm).

Einstein was able to explain all the observations of the photoelectric effect. An electron is ejected when a photon hits it and gives up its entire energy to the electron. If the photon has sufficient energy to transfer, the electron may be ejected from the atom and a current will start; if the photon does not have enough energy, no electron is released and no current is produced. The amount of energy each photon can transfer depends on the frequency (color) of the light, not on its brightness, and is given by Planck’s relationship, E = hf. So no matter how bright the red light may be, or how long it shines on the metal, its frequency is too low for its photons to eject electrons. A dim blue light, on the other hand, will eject electrons, because the frequency of blue light is high enough that each photon carries sufficient energy to free an electron.
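Einstein’s picture can be checked with a short calculation: compare the photon energies of red and blue light against a metal’s work function, the minimum energy needed to free an electron. In the sketch below, the 2.3 eV work function is an assumed value, roughly appropriate for sodium; it is not given in the text.

```python
# Photoelectric effect: a photon ejects an electron only if its
# energy E = h*f exceeds the metal's work function.
h = 6.63e-34                      # Planck's constant, J*s
c = 3.0e8                         # speed of light, m/s
eV = 1.6e-19                      # joules per electron volt

work_function = 2.3 * eV          # assumed metal (approx. sodium)

for color, wavelength in [("red", 700e-9), ("blue", 450e-9)]:
    E = h * c / wavelength        # photon energy, J (f = c / wavelength)
    ejected = E > work_function
    print(f"{color}: {E / eV:.2f} eV -> electron ejected: {ejected}")
```

A red photon falls short of the assumed work function while a blue photon exceeds it, which is exactly the color dependence the experiment shows.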

With the discovery of the three fundamental particles of the atom and the development of the idea of the photon, it appeared that by 1932 the building blocks of matter had been rediscovered. The hundred different building blocks of matter had been replaced by a much simpler view of the physics world. This elegant picture of the physical world did not last for long, though. As technology improved and more questions were posed and eventually answered, many new and rather strange observations were made. The first and perhaps most bizarre discovery happened right after the neutron was discovered in 1932 and it represented an entirely new type of matter.

Not so Fast—Antimatter is Found!

In 1928, Paul Dirac, a British theoretical physicist, formulated an equation describing the motion of electrons that was consistent with both quantum mechanics and Einstein’s special theory of relativity. The equation predicted that there must be a particle with the same mass as the electron, but with the opposite charge. This prediction led to the conceptualization of antiparticles or, broadly speaking, antimatter. Not only does the electron have an antiparticle; Dirac’s equations predicted that all matter has a corresponding antiparticle.

Cloud Chamber

An actual cloud chamber picture from Carl Anderson’s experiment.

In the early 1930s, Carl Anderson was investigating cosmic rays using a cloud chamber. Charged particles produced by cosmic rays would leave “tracks” in the cloud chamber. These tracks would bend in circles because the chamber was surrounded by a strong magnetic field. As a result, positively charged particles bent one way and negatively charged particles bent in the opposite direction. During his investigation Anderson encountered unexpected particle tracks in his cloud chamber. He found equal numbers of positive and negative particles following very similar, yet oppositely directed paths. He assumed that the negatively charged particles were electrons, but what were the positively charged particles—protons? Anderson correctly interpreted the pictures as having been created by a particle with the same mass as the electron, but with an opposite charge. This discovery, announced in 1932, validated Dirac’s theoretical prediction of the positron, the first observed antimatter particle. Anderson obtained direct proof that positrons exist by shooting gamma rays, high–energy photons, into nuclei. This resulted in the creation of positron-electron pairs. This pair production exemplifies Einstein’s equation E = mc^2; energy (the massless gamma ray) is converted into mass (the pair of particles). Interestingly enough, the reverse is true as well. If an electron and a positron collide, their mass is converted into energy. This process is true for any matter-antimatter pair and is called pair annihilation.
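A quick calculation shows the energy scale of pair production. Since the gamma ray must supply at least the rest energy of both particles, the threshold follows directly from E = mc^2 applied to an electron and a positron. The constants below are standard values, not figures from the text.

```python
# Minimum gamma-ray energy for electron-positron pair production:
# the photon must carry at least the rest energy of both particles.
m_e = 9.11e-31                    # electron (and positron) mass, kg
c = 3.0e8                         # speed of light, m/s
eV = 1.6e-19                      # joules per electron volt

E_min = 2 * m_e * c**2            # rest energy of the pair, J
print(f"E_min = {E_min:.2e} J = {E_min / eV / 1e6:.2f} MeV")
```

The same number works in reverse: when an electron and positron annihilate at rest, about 1 MeV of energy is released as gamma rays.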

With the discovery of the antielectron, the search for other antimatter particles heated up. It seemed reasonable that if the electron had an antiparticle, so too should the proton and the neutron. The methods for probing subatomic particles began with experiments as simple as those with which the electron, proton, and neutron were discovered—firing beams of light or electrons at various substances, making very precise observations, and drawing as many conclusions as possible. Physicists of the early 20^{th} century made some amazing discoveries about the structure of the atom with what, from our point of view, was limited technology; they did the best with what they had to work with. But in order to discover these new particles, a way to produce controlled, reliable high-energy experiments was needed. This led to the creation of particle accelerators and detectors.

Cosmic Rays

With the discovery of radioactivity in the late 1800s, measurement and detection of radiation became a driving force in physics. It was soon found that more radiation was being measured on Earth than known sources could account for. In an effort to find the source of this radiation, Victor Hess in 1912 carried detectors with him in a hot air balloon to a height of 5,000 meters (without the aid of a breathing apparatus). At this height he was able to discover “cosmic rays,” which shower Earth from all parts of the universe at incredibly high speeds. Others soon discovered that the rays were actually charged particles, such as alpha particles and protons.

Cosmic Ray Shower


As it turns out, these charged particles that zoom through space begin their journeys at the Sun, supernovae, and distant stars. Most primary cosmic rays are protons or alpha particles traveling at very high speeds. When a primary ray strikes a nucleus in our atmosphere, it knocks many more particles downward, creating a cascading effect called a shower. When these reactions and the particles they produced were first analyzed, it quickly became clear that nothing like them had been seen on Earth before. Thus began a flurry of research to learn more about these particles from outer space.

Up until the 1950s and the development of particle accelerators, cosmic rays were the primary source of high–energy particles for physicists to study. Carl Anderson not only discovered antimatter through his cosmic ray research, but went on to discover a particle carrying a unit charge with a mass between that of the electron and the proton. These particles, muons, were later shown to have no nuclear interactions and to be, in effect, heavier versions of electrons. In 1947, Cecil Powell discovered another particle, one that did interact with nuclei. Its mass was greater than the muon’s, and it was soon determined that it decays into a muon. This new particle was given the name pi-meson, or pion. A few months later, new particles with masses between the pion and the proton were discovered. The kaon was a strange new particle that was always produced in pairs, had a relatively long lifetime, and decayed into pions and muons.

Although a number of exciting new particles were discovered with cosmic rays, this type of research had serious limitations. Interesting events happen rarely, and when they do occur it is very difficult to catch them in a particle detector; researchers have no control over when or where a cosmic ray shower will happen. It also became apparent that the low-energy events had been well researched and that the truly interesting events were the high-energy ones, which in cosmic rays are rarer still. The lack of control over these events, combined with their infrequency, left physicists needing a solution: a way to create controlled high-energy experiments in a laboratory setting.

Particle Accelerators

Particle accelerators were designed to study matter at the subatomic scale. They allow millions of particle events to occur and be studied without waiting for events to arrive from the sky. Accelerators do for particle physicists what telescopes do for astronomers: these instruments reveal worlds that would otherwise remain unseen. Vacuum tubes and voltage differences accelerated the first electrons, and then the Cockcroft-Walton and Van de Graaff machines were invented using the same principles on a grander, more complex scale. A modern example of this type of device is the linear accelerator, such as the one at the Stanford Linear Accelerator Center (SLAC). In order to achieve high energies, linear accelerators must be very long; the Stanford accelerator is nearly 2 miles long and actually crosses under a highway in California. SLAC is able to achieve energies of up to 50 \;\mathrm{GeV}. An electron volt (eV) is a unit of energy equivalent to 1.6 \times 10^{-19}\;\mathrm{J}, and a GeV is equal to 10^9 \;\mathrm{eV}. The need for such great length to achieve high energy is the major limitation of this type of accelerator.
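The unit definitions above make converting beam energies straightforward. As a simple check, SLAC’s 50 GeV can be expressed in joules:

```python
# Unit bookkeeping from the text: 1 eV = 1.6e-19 J and 1 GeV = 1e9 eV.
eV = 1.6e-19                      # joules per electron volt
GeV = 1e9 * eV                    # joules per gigaelectronvolt

E_slac = 50 * GeV                 # SLAC beam energy, J
print(f"50 GeV = {E_slac:.1e} J")
```

The answer, on the order of 10^{-8} \;\mathrm{J}, sounds tiny, but that energy is concentrated in a single electron.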

Stanford Linear Accelerator Center, Palo Alto, CA


The great breakthrough in accelerator technology came in the 1920s with Ernest O. Lawrence’s invention of the cyclotron. In the cyclotron, magnets guide the particles along a spiral path, allowing a single electric field to apply many cycles of acceleration. The first cyclotrons could fit in the palm of your hand and could accelerate protons to energies of 1 \;\mathrm{MeV}. Over the next decade or two, unprecedented energies were achieved (up to 20 \;\mathrm{MeV}), but even the cyclotron had its limitations, due to relativistic effects and magnet strength. Fortunately, the same type of technology that makes a cyclotron work also works in the next version of the accelerator, the synchrotron. A synchrotron’s circular path can accelerate protons by passing them millions of times through electric fields, allowing them to reach energies of well over 1 \;\mathrm{TeV}. The first synchrotron to break the TeV energy level was at the Fermi National Accelerator Laboratory (Fermilab). The Tevatron at Fermilab is nearly 4 miles in circumference and can accelerate particles to 1 \;\mathrm{TeV} in each direction around the ring.

Fermi National Accelerator Laboratory, Batavia, IL


Jefferson National Accelerator Laboratory, Newport News, VA


The last advancement in accelerator technology involved how the accelerated particles collide. For decades, accelerators were fixed-target machines: the energetic particles collide with a stationary target, and the debris from the collision (the new particles and energy) continues moving in the beam direction. As a result, not all of the mass-energy of the accelerated particles is available to be converted into new particles and new reactions; some of it is carried away into the target. By the early 1960s, physicists had learned enough about accelerators to build colliders. In a collider, two carefully controlled beams pass around the synchrotron in opposite directions until they are made to collide at a specific point. Although colliders are significantly more challenging to build, the benefits are great. Because the two beams travel in opposite directions, the net momentum at the collision point is zero, so no energy is wasted on bulk motion of the debris: all of it is available for new reactions and the creation of new particles. For example, although the Tevatron at Fermilab can only accelerate protons and antiprotons to energies of 1 \;\mathrm{TeV}, the energy involved in each proton-antiproton collision approaches 2 \;\mathrm{TeV}.
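The advantage of a collider can be made quantitative. For a head-on collision of equal beams, essentially the full 2E_{beam} is available, while for a fixed target the usable (center-of-mass) energy is approximately \sqrt{2 E_{beam} m c^2} when the beam energy is much larger than the target’s rest energy. This formula is standard relativistic kinematics and is not derived in the text; the sketch below applies it to the Tevatron’s 1 TeV beams.

```python
# Collider vs. fixed-target: usable collision energy for 1 TeV protons.
import math

m_p_c2 = 0.938e-3                 # proton rest energy, in TeV
E_beam = 1.0                      # beam energy, in TeV (Tevatron)

E_collider = 2 * E_beam                      # head-on, equal beams
E_fixed = math.sqrt(2 * E_beam * m_p_c2)     # beam on stationary proton

print(f"collider:     {E_collider:.2f} TeV")
print(f"fixed target: {E_fixed:.3f} TeV")
```

The fixed-target arrangement yields only a few tens of GeV of usable energy from the same 1 TeV beam, which is why all frontier machines since the Tevatron have been colliders.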

Why the need for such high energies? High–energy physicists know that it takes particles with energies of about 1 \;\mathrm{GeV} to probe the structure inside a proton. To reach even smaller parts of matter, higher energy is needed; higher energies also allow more massive particles to be created. Currently, Fermilab’s Tevatron has enough energy to produce the top quark (\sim 170 \;\mathrm{GeV}). If particle physicists want to learn more about the building blocks of matter, they need more energy. Over the past decade in Geneva, Switzerland, physicists have been trying to accomplish just that by building the world’s largest particle accelerator. In 2009, the Large Hadron Collider (LHC) at the European Organization for Nuclear Research (CERN) is scheduled to go online. The LHC is 27 \;\mathrm{km} in circumference and will accelerate particles to energies approaching 7 \;\mathrm{TeV}. This means that at the collision point the energy will be up to 14 \;\mathrm{TeV}, and the potential for new particle discoveries is enormous.

Section of the Large Hadron Collider tunnel, CERN, Geneva, Switzerland


Particle Detectors

The first particle detectors resembled the ones used by Rutherford in his famous gold foil experiment, relying on the emission of light when charged particles hit a coated screen. Other methods for detecting radiation were soon developed, such as electroscopes (which could tell whether charged particles were present) and Geiger counters (which counted how many charged particles were present). All of these detectors could only tell whether charged particles were present and/or give a rough count of them; they were incapable of providing any specific information about the properties of the particles.

Then a breakthrough came in 1912, when the cloud chamber was invented. C. T. R. Wilson, a Scottish physicist, developed the cloud chamber based on his studies of meteorology and his research into the atmosphere and cloud formation. The chamber contains a vapor held in a supersaturated state, and it was well known that an electrical charge could cause condensation in such a state. Wilson was eager to find out if he could produce a similar effect with X-rays, and in 1896 he performed an experiment showing that, like electricity, X-rays could induce condensation in the supersaturated vapor. In 1912, he incorporated all of these ideas into a device he called a cloud chamber. He found that a charged particle left an easily observable track when it passed through the chamber. The track results from the interaction between the charged particle and the air molecules within the container, which forms ions on which condensation occurs. This gives a plain view of the path of the radiation, which can then be recorded by taking a photograph.

In use, the cloud chamber is placed between the poles of a magnet. The magnetic field causes particles to bend in one direction or another, depending on the electrical charge they carry. The magnetic field B, the velocity v, the radius of the circular orbit R, the mass m, and the charge q are related by the formula R = \frac{mv}{qB}. The kinds of particles that have passed through the chamber can be determined from the types of tracks they leave. Although the cloud chamber had many useful applications, it was eventually displaced by the bubble chamber, invented in 1953 by Donald Glaser.
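The curvature formula R = \frac{mv}{qB} can be used to estimate how tightly a track curls. In the sketch below, the field strength and electron speed are illustrative assumptions, not values from the text.

```python
# Track curvature in a magnetic field, R = m*v / (q*B).
m = 9.11e-31                      # electron mass, kg
q = 1.6e-19                       # electron charge magnitude, C
B = 0.5                           # magnetic field, T (assumed)
v = 3.0e7                         # electron speed, m/s (assumed, ~0.1c)

R = m * v / (q * B)               # radius of the circular track, m
print(f"R = {R:.2e} m")
```

For these values the electron curls into a sub-millimeter circle, while a proton at the same speed, being nearly 2,000 times more massive, would trace a much gentler curve; this mass dependence is what lets the track identify the particle.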

The bubble chamber is a more sophisticated version of the cloud chamber. Glaser’s idea was to use a liquid, such as liquid hydrogen, as the detecting medium, because the particles in a liquid are much closer together than those in a gas. In a sense, the bubble chamber is the opposite of a cloud chamber: it contains a liquid heated beyond its normal boiling point, which, kept under pressure, does not boil but remains in a superheated state. Particles traveling through the bubble chamber interact with atoms and molecules in the liquid, forming ions that carry an electrical charge. These ions act as nuclei on which the liquid can begin to boil, so the path taken by a particle is marked by a trail of very tiny bubbles where the liquid has changed into a gas. At that moment a camera records the picture. Bubble chambers were widely used to study nuclear and particle events until the 1980s.

For a long time, bubble chambers were the most effective detectors in particle physics research, but they required a picture to be taken and then analyzed. As technology improved, it became desirable to have a detector with fast electronic read-out. Bubble chambers have thus largely been replaced by wire chambers, which allow particle numbers, particle energies, and particle paths to be measured all at the same time. A wire chamber consists of a very large number of parallel wires, where each wire acts as an individual detector. The chamber is filled with a carefully chosen gas, such that any charged particle passing through will ionize the surrounding gas atoms. The resulting ions and electrons are accelerated by an electric potential on the wire, causing a cascade of ionization that is collected on the wire and produces an electric current. This allows the experimenter to count particles and also to determine their energy. For high-energy physics experiments, it is also valuable to observe the particle’s path. When a particle passes through the many wires of a wire chamber, it leaves a trace of ions and electrons that drift toward the nearest wires. By noting which wires carried a pulse of current, an experimenter can reconstruct the particle’s path.

The wire chamber became one of the main types of detectors in modern particle accelerators. Wire chambers were much more effective at collecting information about particle events and at storing it to be analyzed later. A bubble chamber could only produce one picture per second, and that picture could not be stored in a computer. A typical wire chamber could record several hundred thousand events per second, which could then be immediately analyzed by a computer. The ability to collect hundreds of thousands of events and have them quickly analyzed and stored on a computer led to the creation of the magnificent modern particle detectors.

Schematic of the Compact Muon Solenoid Detector, CERN, Geneva, Switzerland

The Compact Muon Solenoid (CMS) is one of the two major detectors at the LHC (the other is called ATLAS). The two detectors are quite similar in their general features and in their ability to collect and quickly analyze millions of particle events per second. CMS is 21 \;\mathrm{m} long, 15 \;\mathrm{m} wide, and 15 \;\mathrm{m} high, and it weighs 12,500 tons. The huge solenoid magnet that surrounds the detector creates a magnetic field of 4 teslas, about 100,000 times the strength of the Earth’s magnetic field. CMS is an excellent example for illustrating the construction of a modern particle detector. The various parts are shown in Figure 13, with a brief description following.


Tracker

  • Purpose is to make a quick determination of particle momentum and charge.
  • The tracker consists of layers of pixels and silicon strips.
  • The pixels and strips cover an area the size of a tennis court.
  • 75 million separate electronic read-out channels, 6,000 connections per square centimeter.
  • The tracker records the particle paths without disturbing the energy or motion of the particle.
  • Each measurement that the tracker takes is accurate to  10 \ \mu \mathrm{m} , a fraction of the width of a human hair.
  • The tracker can re-create the paths of any charged particle: electrons, muons, hadrons, and short-lived decay particles.

Electromagnetic Calorimeter

  • Purpose is to identify electrons and photons and to do it very quickly (25 \;\mathrm{ns} between collisions).
  • Very special crystals are used that scintillate (momentarily fluoresce) when struck by an electron or photon.
  • These high-density crystals produce fast, short, well-defined bursts of light, with intensity proportional to the particle’s energy.
  • The barrel and the endcap of the detector are made up of over 75,000 crystals.

Hadron Calorimeter

  • Purpose is to detect particles made up of quarks and gluons, for example protons, neutrons, and kaons.
  • Finds a particle’s position, energy, and arrival time.
  • Uses alternating layers of brass absorber plates and scintillator that produce a rapid light pulse when the particle passes through.
  • The amount of light measured throughout the detector provides a very good measurement of the particle’s energy.
  • There are 36 barrel “wedges,” each weighing 26 tons.

Muon Detector

  • The purpose of the muon detector is to detect muons, one of the most important tasks of CMS.
  • Muons can travel through several meters of iron without being stopped by the calorimeters; as a result, the muon chambers are placed at the very edge of the detector.
  • Because of this placement, the only particles to register a signal in the muon chambers are muons.
  • The muon chambers have a variety of detectors that help track these elusive particles.


  • One billion proton-proton interactions will take place inside the detector every second.
  • A very complex trigger system in the computers will eliminate the many events that are not “interesting” to the physicists. Less than 1 percent of all interactions will be saved to a server.
  • Nearly 5 petabytes (5 million gigabytes) of data per year will be saved when running at peak performance.
  • To allow for the storage of all this data, a worldwide grid has been created that uses tens of thousands of ordinary computers. This distribution of the data allows for a much greater processing capacity than could ever be achieved by a couple of supercomputers.
  • The other benefit is that because the data can be stored all over the world, physicists do not need to be at a central location (for instance CERN) in order to analyze the particle events coming from CMS.

Simulation of a Higgs Boson Event in CMS Detector

The Little Neutral One

In the early 1900s a puzzling problem emerged from the extensive experimentation with radioactivity. When physicists looked at beta decay, they soon realized that the energy of the ejected electron was not what they expected. When a neutron decays into a proton and an electron, conservation of momentum requires the two ejected particles to travel in opposite directions. Researchers found that this was not the case in every event. They were also able to measure the energy of the electron, and it was not what they expected: the electron did not emerge with the same kinetic energy every time.

The Beta Decay Dilemma (not drawn to scale)

This posed a serious problem for the scientists. They could either ignore the basic laws of physics or assume that one or more additional particles were emitted along with the electron. In 1930, Wolfgang Pauli proposed that a third particle, the neutrino (“little neutral one”), was involved. Using the conservation laws, he was even able to predict its properties: the neutrino must be neutral, and its rest mass must be very, very small.

Although many scientists did not expect it to take long for the neutrino to be detected, over 25 years passed before its existence was confirmed. In 1956, Clyde Cowan and Frederick Reines finally detected neutrinos using radiation coming from the Savannah River nuclear reactor. The properties of the neutrino were confirmed through the results of this experiment. The reason it took so long for the neutrino to be detected, and why it remains so elusive to this day, is that the neutrino’s interaction with other particles is so weak that only about one in a trillion neutrinos passing through the Earth is stopped.


With the explosion of new particles detected from the 1950s onward, it might appear that the simplified model of the early 1900s had once again become more complicated. By the time over 150 new particles had been identified, physicists had begun referring to the situation as the particle zoo. It isn't quite as bad as that, though.

Just as zookeepers bring order to their zoos by grouping animals into biological categories like genus and species, particle physicists started looking for ways to group all the particles into categories of similar properties. The observed particles were divided into two major classes: the material particles and the gauge bosons. We'll discuss the gauge bosons in another section. Another way to divide the particles is by the interactions in which they participate. The material particles that participate in the strong force are called hadrons, and particles that do not participate in the strong force are called leptons. The strong force is one of the fundamental forces of nature. A discussion of the properties of the leptons may be found later in this chapter.

Most of those 150+ particles are mesons and baryons, or, collectively, hadrons. The word hadron comes from the Greek word for thick. Most of the hadrons have rest masses larger than those of almost all of the leptons. Hadrons are still extremely small, but because of their comparatively large size, particle physicists think that hadrons are not truly elementary particles. Hadrons all undergo strong interactions. The difference between the two groups is spin: mesons have integral spin (0, 1, 2 \ldots ), while baryons have half-integral spin (1/2, 3/2, 5/2 \ldots). The most familiar baryons are the proton and the neutron; all others are short-lived http://hyperphysics.phy-astr.gsu.edu/Hbase/particles/baryon.html#c1 [Table of Baryons]. The most familiar meson is the pion; its lifetime is 26 nanoseconds, and all other mesons decay even faster http://hyperphysics.phy-astr.gsu.edu/Hbase/particles/meson.html#c1 [Table of Mesons].


The rapid increase in the number of particles soon led to another question: Is it reasonable to consider all of these particles fundamental? Or is there a smaller set of particles that could be considered fundamental? To many physicists the idea of something even smaller making up the hadrons seemed reasonable, as experimental evidence supported the notion that hadrons have some internal structure. In 1964, the most successful attempt to build up the hadrons, the quark model, was developed independently by Murray Gell-Mann and George Zweig.

Quark Combinations for Various Hadrons

The original quark model started with three types, or flavors, of quarks (and their corresponding antiquarks). These first three quarks are now called up (u), down (d), and strange (s). Each of these quarks has spin 1/2 and, in the most radical claim of the model, a charge that is a fraction of the elementary charge of the electron. The fractional charge of the quark should make quarks easy to find, but that has not been the case. No single quark has ever been detected in any particle experiment. Regardless, the quark model has been very successful at describing the overall properties of the hadrons.

In order to make a hadron, the quarks must be combined in a very specific way. The baryons are all made up of three quarks (the antibaryons are made up of three antiquarks). As an example, the proton is made up of two up quarks and one down quark and a neutron consists of two down quarks and one up quark. The mesons are all made up of one quark and one antiquark. For example, the positive pion is made up of one up quark and one anti-down quark. To make a particle out of quarks, or to determine the quarks of a known particle, it is simply a matter of checking the particle and quark properties in a chart and using some simple addition [make a Hadron Applet].

In 1974, a new particle was discovered that could only fit the quark model if a fourth quark was added. The quark was given the name charm (c). In 1977, a fifth quark was added, bottom (b), and finally in 1995 the existence of a sixth quark was confirmed, top (t). The six quarks of the quark model have all been verified and supported by experiments, but the existence of more quarks is still an open question in particle physics.

Flavor Symbol Charge
Down d -1/3
Up u +2/3
Strange s -1/3
Charm c +2/3
Bottom b -1/3
Top t +2/3
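The “simple addition” described above can be sketched in a few lines of code. This is an illustrative sketch, not a standard tool: the function name and the leading-tilde antiquark notation are inventions for this example, and the charges come from the table above.

```python
from fractions import Fraction

# Quark charges from the table above, in units of the elementary charge.
QUARK_CHARGE = {
    "u": Fraction(2, 3), "d": Fraction(-1, 3), "s": Fraction(-1, 3),
    "c": Fraction(2, 3), "b": Fraction(-1, 3), "t": Fraction(2, 3),
}

def hadron_charge(quarks):
    """Add up quark charges; a leading '~' marks an antiquark (opposite charge)."""
    total = Fraction(0)
    for q in quarks:
        if q.startswith("~"):
            total -= QUARK_CHARGE[q[1:]]
        else:
            total += QUARK_CHARGE[q]
    return total

print(hadron_charge(["u", "u", "d"]))  # proton: 2/3 + 2/3 - 1/3 = 1
print(hadron_charge(["d", "d", "u"]))  # neutron: -1/3 - 1/3 + 2/3 = 0
print(hadron_charge(["u", "~d"]))      # positive pion: 2/3 + 1/3 = 1
```

Exact fractions are used rather than floating-point numbers so that the thirds add up without rounding error.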


At almost the same time that the quark model was being developed, another group of particles appeared to have a similar symmetry with the quarks. These particles, called leptons (Greek for light), appeared to be fundamental and seemed to match up in number with the quarks. Leptons are particles that are like the electron: they have spin 1/2, and they do not undergo the strong interaction.

There are three flavors of charged leptons: the electron, the muon, and the tau. They all have negative charge, and with the exception of the tau, are less massive than hadrons. The electron is the most stable and can be found throughout ordinary matter. The muon and the tau are both short-lived and are typically only found in accelerator experiments or cosmic ray showers. Each charged lepton has an associated neutral lepton partner. They are called the electron neutrino, the muon neutrino, and the tau neutrino. Neutrinos have almost zero mass, no charge, interact weakly with matter, and travel close to the speed of light. Each of these six particles has an associated antiparticle of opposite charge, bringing the total number of leptons to twelve.

Flavor Symbol Rest Mass (\;\mathrm{MeV}/c^2)
Electron e^- .511
Electron neutrino \nu_e \sim0
Muon \mu^- 105.7
Muon neutrino \nu_\mu \sim0
Tau \tau^- 1784
Tau neutrino \nu_\tau \sim0

Conservation Laws

Conservation laws apply in the particle world just as they do in the macroscopic world. Conservation of momentum, mass-energy, angular momentum, and charge is obeyed in every particle event discovered over the past 100 years. As we saw earlier in this chapter, these conservation laws allowed for the prediction of the neutrino. Any reaction that occurs must satisfy these laws. Look at the following two possibilities for beta decay:

n \rightarrow p + e^- + \bar{\nu}_e
n \rightarrow p + e^+ + \bar{\nu}_e

Which of the two decays will actually occur? What conservation law(s) does the other decay violate?

The conservation of mass-energy is a little tricky. Due to Einstein’s principle of mass-energy equivalence, mass may be converted into energy and vice versa. Because energy can be converted into mass, when two moving particles collide it is possible for the incident kinetic energy to be converted into mass during the collision. In this case, the masses of the product particles may be greater than the masses of the incident particles. So it can be difficult to check whether mass-energy is conserved in a particle interaction without knowing how much kinetic energy each particle has to start with and how much of that energy is converted into mass. However, when a particle decays into other particles, it can be shown that the sum of the rest masses of the product particles must be smaller than or equal to the rest mass of the particle that decayed.
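As a concrete example of the last point, consider a neutron undergoing beta decay. Using the accepted rest masses,

m_n = 939.57 \;\mathrm{MeV}/c^2, \quad m_p = 938.27 \;\mathrm{MeV}/c^2, \quad m_e = 0.51 \;\mathrm{MeV}/c^2

the product masses sum to 938.27 + 0.51 + \sim 0 = 938.78 \;\mathrm{MeV}/c^2, about 0.79 \;\mathrm{MeV}/c^2 less than the neutron’s rest mass. The missing mass appears as kinetic energy shared among the proton, the electron, and the antineutrino.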

As more and more particles were discovered and more and more particle events were analyzed, it became increasingly clear that more conservation laws were necessary to help explain what was seen and, perhaps more importantly, what was not seen. One of the most important of these is the conservation of baryon number. Each of the baryons is assigned a baryon number B = +1, each antibaryon a baryon number B = - 1, and all other particles a value of B = 0. In any reaction the sum of the baryon numbers before the interaction or decay must equal the sum of the baryon numbers after. No known decay process or interaction in nature changes the net baryon number. For example, suppose a positive pion collides with a neutron. Which of the following results could not happen?

\pi^{+} + n \rightarrow p + \pi^{0}
\pi^{+} + n \rightarrow \pi^{+} + \pi^{-} + \pi^{+}

Because the baryon number in the first interaction is +1 before and +1 after, that interaction could occur. The second interaction has a baryon number of +1 before and zero after, so it cannot take place. Similarly, the decay of a proton could not proceed by the following event, because baryon number is not conserved:

p \rightarrow \pi^{+} + \pi^{-}

In fact, because the proton is the baryon of smallest mass, it may not decay at all: conservation of baryon number would require any baryon produced in proton decay to have greater mass than the proton, and this is forbidden by conservation of mass-energy. As physicists continue to explore the particle world, new discoveries may be made and new conservation laws may be devised that allow for the decay of a proton, but for now the proton is considered stable. Note also that there is no conservation of meson number: mesons can appear in any particle event as long as the other conservation laws are not violated.

There is a conservation law for leptons, but it is slightly more complicated than the one for baryons. To see how lepton number is conserved, let us look at this variation of beta decay:

n \rightarrow p + e^- + \nu_e

This event has never been observed, but according to all the other conservation laws there is no reason it could not occur. Conservation of lepton number requires that all leptons and their corresponding neutrinos be assigned a lepton number of +1, the antileptons and antineutrinos a lepton number of -1, and all other particles a lepton number of 0. In the example above, the lepton number before the event is 0 and the lepton number after the event is +2, so lepton number is not conserved. How could you conserve lepton number and make a valid reaction out of the decay shown above?

A look at the following decay shows that there is a little bit more to the conservation of lepton number:

n \rightarrow p + e^- + \bar{\nu}_{\mu}

Following the simple rule of lepton number conservation, the preceding decay could be observed, but it never has been. There must be something more to the conservation of lepton number: each lepton and its neutrino partner are assigned their own specific lepton number, so there are separate conservation laws for electron lepton number, muon lepton number, and tau lepton number. Because all three lepton numbers must be conserved, the above example will not happen. If this reaction were to take place, neither electron lepton number nor muon lepton number would be conserved: the decay begins with an electron lepton number of 0 and ends with +1, and it begins with a muon lepton number of 0 and ends with -1. Clearly, this decay cannot proceed, because it violates not one but two lepton conservation laws.

A summary of the lepton numbers is shown in the table below. (Note: all of the antileptons have a lepton number of -1.)

Lepton Conserved Quantity Lepton Number
e^- L_e +1
\nu_e L_e +1
\mu^- L_\mu +1
\nu_\mu L_\mu +1
\tau^- L_\tau +1
\nu_\tau L_\tau +1
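The bookkeeping for these conservation laws can be sketched as a short program. This is a sketch under stated assumptions: the particle names, the small table of quantum numbers, and the checker function are ad hoc inventions for this example, covering only the particles used in this section.

```python
# Quantum numbers (charge Q, baryon number B, electron and muon lepton numbers)
# for an illustrative set of particles. Antiparticles flip every sign.
NUMBERS = {
    "n":     {"Q": 0,  "B": 1, "Le": 0, "Lmu": 0},
    "p":     {"Q": 1,  "B": 1, "Le": 0, "Lmu": 0},
    "e-":    {"Q": -1, "B": 0, "Le": 1, "Lmu": 0},
    "nu_e":  {"Q": 0,  "B": 0, "Le": 1, "Lmu": 0},
    "mu-":   {"Q": -1, "B": 0, "Le": 0, "Lmu": 1},
    "nu_mu": {"Q": 0,  "B": 0, "Le": 0, "Lmu": 1},
    "pi+":   {"Q": 1,  "B": 0, "Le": 0, "Lmu": 0},
    "pi-":   {"Q": -1, "B": 0, "Le": 0, "Lmu": 0},
    "pi0":   {"Q": 0,  "B": 0, "Le": 0, "Lmu": 0},
}

def numbers(name):
    """Look up a particle's quantum numbers; the 'anti-' prefix flips them all."""
    if name.startswith("anti-"):
        return {k: -v for k, v in NUMBERS[name[5:]].items()}
    return dict(NUMBERS[name])

def allowed(before, after):
    """True only if charge, baryon number, L_e, and L_mu all balance."""
    return all(
        sum(numbers(p)[k] for p in before) == sum(numbers(p)[k] for p in after)
        for k in ("Q", "B", "Le", "Lmu")
    )

print(allowed(["n"], ["p", "e-", "anti-nu_e"]))      # real beta decay: True
print(allowed(["n"], ["p", "e-", "nu_e"]))           # lepton number 0 -> +2: False
print(allowed(["pi+", "n"], ["pi+", "pi-", "pi+"]))  # baryon number +1 -> 0: False
```

A reaction that passes this check is not guaranteed to occur (mass-energy and other laws still apply), but a reaction that fails it is forbidden.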

Fundamental Interactions

There are four fundamental forces that dictate both the interactions between individual particles and the large-scale behavior of all matter throughout the universe. They are the strong and weak nuclear forces, the electromagnetic force, and gravity.

Gravitation is a force of attraction that acts between each and every particle in the universe. Gravity is the weakest of all the fundamental forces. However, the range of gravity is unlimited: every object in the universe exerts a gravitational force on everything else. The effects of gravity depend on two things: the masses of the two bodies and the distance between them. In more precise terms, the attractive force between any two bodies is directly proportional to the product of their masses and inversely proportional to the square of the distance between them. It is always attractive, never repulsive. It pulls matter together; it causes you to have weight and apples to fall from trees, keeps the Moon in its orbit around the Earth and the planets in their orbits around the Sun, and binds galaxies together in clusters.
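The statement above can be written compactly as Newton's law of universal gravitation:

F = G \frac{m_1 m_2}{r^2}

where m_1 and m_2 are the two masses, r is the distance between their centers, and G = 6.67 \times 10^{-11} \;\mathrm{N \cdot m^2/kg^2} is the gravitational constant.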

The electromagnetic force determines the ways in which electrically charged particles interact with each other and also with magnetic fields. Like gravity, the range of the electromagnetic force is infinite. Unlike gravity, electromagnetism has both attractive and repulsive properties that can combine or cancel each other out. Whereas gravity is always attractive, electric charge comes in two kinds: positive and negative. Two positive or two negative charges repel each other, while one positive and one negative charge attract each other. The same rule applies to magnets and can be easily demonstrated when two magnets are placed near each other: a north pole near a north pole produces a repulsive force, while a north pole near a south pole produces an attractive force. The electromagnetic force binds negatively charged electrons into their orbital shells around the positively charged nucleus of an atom. This force holds atoms together.
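The electric part of this force has the same inverse-square form as gravitation, given by Coulomb's law:

F = k \frac{q_1 q_2}{r^2}

where q_1 and q_2 are the two charges, r is the distance between them, and k = 8.99 \times 10^9 \;\mathrm{N \cdot m^2/C^2}. A positive F (like charges) corresponds to repulsion; a negative F (opposite charges) corresponds to attraction.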

The strong nuclear force binds together the protons and neutrons that comprise an atomic nucleus and prevents the mutual repulsion between positively charged protons from causing them to fly apart. The strong force is the strongest of the fundamental forces, but it is also very short range, limited to nuclear distances. It is also responsible for binding quarks into mesons and baryons. An interesting feature of the strong force is that the strength of the force behaves like a rubber band. It actually gets stronger as the quarks move apart, but just like a rubber band, it will eventually break apart when stretched too far. Unlike a rubber band, when the strong force breaks, new quarks are actually formed from the newly released energy. This process is called quark confinement. There has never been an experiment that has found a quark in isolation.

Quark confinement

The weak nuclear force causes the radioactive decay of certain atomic nuclei. In particular, this force governs the process called beta decay, whereby a neutron breaks up spontaneously into a proton, an electron, and an antineutrino. It operates only on the extremely short distance scales found in an atomic nucleus.

According to modern quantum theories, forces are due to the exchange of force carriers. The fundamental forces are conveyed between real particles by means of particles physicists describe as virtual particles. Virtual particles essentially allow the interacting particles to “talk to” one another without exchanging matter. The force-carrying particles, or bosons, are as follows: the photon for the electromagnetic force, the very massive W and Z particles for the weak nuclear interaction, and gluons for the strong nuclear interaction. Although it has not yet been possible to devise a completely satisfactory quantum theory of gravitation, it too should have an exchange particle: the graviton, which has not yet been discovered.

The Standard Model

The theories and discoveries of thousands of physicists over the past century have created a remarkable picture of the fundamental structure of matter, the standard model of particles and forces.

The Standard Model of Particle Physics

The standard model currently has sixteen particles. Twelve of them are fermions, or matter particles: the six quarks and the six leptons. Each elementary particle also has an antimatter partner. The remaining four particles are called bosons; they are the exchange particles through which the fundamental interactions are transmitted. The hypothetical exchange particle for gravity, the graviton, does not currently have a place in the standard model.

Every phenomenon observed in nature can be understood as the interplay of the fundamental forces and particles of the standard model. It is interesting to note that although the standard model does a terrific job of explaining the matter and forces that occur in nature, nearly 85% of the matter that makes up the universe has still not been directly observed: the elusive dark matter.

But physicists know that the standard model is not the end of the story. It does not account for gravity or the mysterious dark matter. The standard model also requires the existence of a new particle, known as the Higgs boson. The existence of this particle is essential to understanding why the other building blocks (the quarks, the leptons, and the gauge particles) have mass. The Higgs has not yet been seen in any experiment. As the experiments become grander in scale and the discoveries multiply, will the standard model be supported, or will a new model need to be developed? The standard model raises almost as many questions as it answers. Today physicists all over the world are searching for physics beyond the standard model that may lead to a more elegant theory: a theory of everything.
