With the success of recent movies such as "What the &$@# Do We Know?" and the ongoing -- and continually surprising -- revelations about the nature of underlying reality that have been unfolding in quantum physics for three-quarters of a century now, it may not be particularly surprising that the quantum nature of the universe is now making inroads into what has previously been considered classical observational astronomy. Quantum physics has been applied for decades to cosmology and to the strange "singularity" physics of black holes. It also applies to macroscopic objects such as Bose-Einstein condensates (extremely cold conglomerations of material that behave in non-classical ways), as well as to neutron stars and even white dwarfs, which are kept from collapse not by nuclear fusion but by the Pauli Exclusion Principle -- the rule that no two identical particles of matter can occupy the same quantum state, so that, in a sense, they cannot collapse into each other.
Well, congratulations if you have gotten through the first paragraph of this essay. I can't honestly tell you that things will get better, but I can say that for the intrepid reader things should get even more interesting. The famous quantum physicist Richard Feynman once said, in essence, that anyone who thought he understood quantum physics did not understand it enough to understand that he did not actually understand it! In other words, no classical interpretation of quantum physics is the correct one. Parallel evolving universes (one being created every time a quantum-level choice is made), faster-than-light interconnectedness underlying everything, nothing existing until it is observed -- these are a few of the interpretations of quantum reality that are consistent with the experiments and observations.
There are many ways we could go now in examining quantum results. If conscious observation is needed for the creation of an electron (this is one aspect of the Copenhagen Interpretation, the most popular of the quantum physics interpretations), then ideas about the origin of consciousness must be revised. If electrons in the brain create consciousness, but electrons require consciousness to exist, one is apparently caught in circular reasoning at best. But in this essay we shall not discuss quantum biology. Another path we might go down would be the application of quantum physics to cosmology -- either the inflationary origin of the universe or the Hawking evaporation of black holes, as examples. But our essay is not about this vast field either. Today we will discuss the scaling of the simple double-slit laboratory experiment to cosmic distances -- what can truly be called "quantum astronomy."
The laboratory double-slit experiment captures much of the weirdness of quantum physics. It can involve various kinds of elementary particles, but for today's discussion we will be talking solely about light -- the particle of which is called the "photon." Light shining through a small hole or slit (as in a pinhole camera) creates a spot of light on the screen (or film, or detector). However, light shone through two slits that are close together creates not two spots on the screen, but rather a series of alternating bright and dark lines, with the brightest line in the exact middle of the pattern. This shows that light is a wave, since such a pattern results from the interference of the waves coming from slit one (which we shall call "A") with the waves coming from slit two (which we shall call "B"). When peaks of waves from slit A meet peaks from slit B, they add, and a bright line is produced. Not far to the left and right of this bright line, however, peaks from A meet troughs from B (because the crests of the light waves are no longer aligned) and a dark line is produced. This alternation continues on either side until the visibility of the lines fades out. The result is simply called an "interference pattern," and Thomas Young used this experiment to demonstrate the wave nature of light in the early 19th Century.
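For readers who like to see the arithmetic, the bright and dark lines can be sketched numerically. This is a minimal illustration, not a model of any particular apparatus: the wavelength, slit spacing, and screen distance below are assumed values chosen only for demonstration. The intensity at a point on the screen depends on the phase difference between the waves arriving from slits A and B.

```python
import math

def two_slit_intensity(x, wavelength, slit_sep, screen_dist):
    """Relative intensity at position x on the screen for an idealized
    two-slit experiment (far-field, equal-amplitude slits)."""
    # Path difference between the waves from slits A and B (small-angle approx.)
    delta = slit_sep * x / screen_dist
    # Phase difference; crests align when delta is a whole number of wavelengths
    phase = 2 * math.pi * delta / wavelength
    return math.cos(phase / 2) ** 2  # 1 = bright line, 0 = dark line

# Illustrative values: 600 nm light, 0.1 mm slit spacing, 1 m to the screen
lam, d, L = 600e-9, 1e-4, 1.0
print(two_slit_intensity(0.0, lam, d, L))   # exact center: brightest line
print(two_slit_intensity(3e-3, lam, d, L))  # path difference = half a wavelength: dark line
```

A path difference of a whole number of wavelengths gives a bright line (peaks from A meet peaks from B); a half-wavelength difference gives a dark one (peaks meet troughs).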
However, in 1900 the physicist Max Planck showed that certain other effects in physics could only be explained by light being a particle. Many experiments followed that also showed light was indeed a particle (a "photon"), and Albert Einstein was awarded the Nobel Prize in physics in 1921 for his work showing that the particle nature of light could explain the "photoelectric effect." In that experiment, low-energy (red) light, when shone onto a photoelectric material, caused the material to emit low-energy (slow-moving) electrons, while high-energy (blue) light caused the same material to emit high-energy (fast-moving) electrons. However, lots of red light only ever produced more low-energy electrons, never any high-energy electrons. In other words, the energy could not be "saved up" but rather had to be absorbed by the electrons in the photoelectric material one packet at a time. The conclusion was that light comes in packets -- little quantities -- and thus behaves as a particle as well as a wave.
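The logic of the photoelectric effect can be sketched in a few lines. A photon's energy is fixed by its frequency (Planck's relation E = hf), so each color carries a definite energy per photon. In this rough sketch the work function W is an assumed, illustrative value, not that of any particular material.

```python
# Planck's relation: a photon's energy depends only on its frequency (color).
# In the photoelectric effect each electron absorbs ONE photon, so its
# maximum kinetic energy is (photon energy) - W, where W is the material's
# "work function" (the energy needed to free the electron).
h = 6.626e-34   # Planck's constant, J*s
c = 2.998e8     # speed of light, m/s
eV = 1.602e-19  # joules per electron-volt

def photon_energy_eV(wavelength_m):
    return h * c / wavelength_m / eV

W = 2.0  # assumed work function in eV (illustrative, not a real material's value)

for name, lam in [("red", 700e-9), ("blue", 400e-9)]:
    E = photon_energy_eV(lam)
    ke = max(E - W, 0.0)
    print(f"{name}: photon {E:.2f} eV -> max electron kinetic energy {ke:.2f} eV")
```

With these numbers a red photon (about 1.8 eV) cannot free an electron at all, no matter how many red photons arrive, while each blue photon (about 3.1 eV) ejects a fast electron -- the "no saving up" behavior described above.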
So light is both a particle and a wave. OK, kind of unexpected (like Jell-O) but perhaps not totally weird. But the double-slit experiment had another trick up its sleeve. One can send a single photon (or "quantum" of energy) through a single slit at a time, with a sufficiently long interval in between, and eventually a spot builds up that looks just like the one produced when a very intense (many-photon) light is sent through the slit. But then a strange thing happens. When one sends a single photon at a time (waiting between each laser pulse, for example) toward the screen with both slits open, rather than two spots eventually building up opposite the two slit openings, what builds up is the interference pattern of alternating bright and dark lines! Hmm... how can this be, if only one photon was sent through the apparatus at a time?
The answer is that each individual photon must -- in order to have produced an interference pattern -- have gone through both slits! This, the simplest of quantum weirdness experiments, has been the basis of many of the unintuitive interpretations of quantum physics. We can see, perhaps, how physicists might conclude that a particle of light is not a particle until it is measured at the screen. Before it is measured, the particle of light is rather a wave -- but not a wave in the ocean-wave sense. It is not a wave of matter but, apparently, a wave of probability. That is, the elementary particles making up the trees, people, and planets we see around us are apparently just distributions of likelihood until they are measured (that is, observed). So much for the Victorian view of solid matter!
The shock of matter being largely empty space may have been extreme enough -- if an atom were the size of a huge cathedral, the electrons would be dust particles floating around at all distances inside the building, while the nucleus, the center of the atom, would be smaller than a sugar cube. But with quantum physics, even this tenuous picture is superseded: the atom itself is not really anything that exists until it is measured. One might rightly ask, then, what does it mean to measure something? And this brings us to the Uncertainty Principle, first discovered by Werner Heisenberg. Dr. Heisenberg wrote, "Some physicists would prefer to come back to the idea of an objective real world whose smallest parts exist objectively in the same sense as stones or trees exist independently of whether we observe them. This however is impossible."
Perhaps that is enough to think about for now. In the next essay we will examine, in some detail, the uncertainty principle as it relates to what is called "the measurement problem" in quantum physics. We shall find that the uncertainty principle is the key to performing the double-slit experiment over astronomical distances, demonstrating that quantum effects are not just microscopic phenomena but can be extended across the cosmos.
In the first article, we discussed the double-slit experiment and how a quantum particle of light (a photon) can be thought of as a wave of probability until it is actually detected. In this article we shall examine another feature of quantum physics that places fundamental constraints on what can actually be measured -- a basic property first discovered by Werner Heisenberg, the simplest form of which is known as the "Heisenberg Uncertainty Principle."
In scientific circles we are perhaps used to thinking of the word "principle" as implying "order," "certainty," or "a law of the universe." So the term "uncertainty principle" may strike us as something akin to "jumbo shrimp" or "guest host" in its juxtaposing of opposites. However, the uncertainty principle is a fundamental property of quantum physics, initially discovered through somewhat classical reasoning -- a classically based logic that is still used by many physics teachers to explain it today. The classical argument runs as follows: if one looks at an elementary particle using light to see it, the very act of hitting the particle with light (even just one photon) knocks it out of the way, so that one can no longer tell where the particle is actually located -- just that it is no longer where it was.
Shorter-wavelength light (blue, for example, which is more energetic) imparts more energy to the particle than longer-wavelength light (red, for example, which is less energetic). So using a smaller (more precise) "yardstick" of light to measure position means that one "messes up" the position of the particle more by "hitting" it with more energy. While his sponsor, Niels Bohr (who successfully argued with Einstein on many of these matters), was away traveling, Werner Heisenberg first published his uncertainty principle paper using this more-or-less classical reasoning. (The deviation from classical thinking was the idea that light comes in little packets or quantities, known as "quanta," as discussed in article one.) However, the uncertainty principle was to turn out to be much more fundamental than even Heisenberg imagined in his first paper.
Momentum is a fundamental concept in physics. It is classically defined as the mass of a particle multiplied by its velocity. We can picture a baseball thrown at us at 100 miles per hour having a similar effect as a bat tossed at us at ten miles per hour; they would have about the same momentum although they have quite different masses. The Heisenberg Uncertainty Principle states, basically, that as one comes to know the momentum of an elementary particle very well (usually, by measuring its velocity), one begins to lose knowledge of the position of the particle -- where it is actually located. Formulating the principle relativistically yields another version: as one gets to know the energy of an elementary particle very well, one cannot at the same time know (i.e., measure) very accurately at what time it actually had that energy. So we have, in quantum physics, what are called "complementary pairs." (If you'd really like to impress your friends, you can also call them "non-commuting observables.")
One can illustrate the basic result of the uncertainty principle with a not-quite-filled balloon. On one side we could write "delta-E" to represent our uncertainty in the value of the energy of a particle, and on the other side write "delta-t," which stands for our uncertainty in the time the particle had that energy. If we squeeze the delta-E side (constrain the energy so that it fits into our hand, for example) we can see that the delta-t side of the balloon gets larger. Similarly, if we make the delta-t side fit within our hand, the delta-E side gets larger. But the total amount of air in the balloon does not change; it just shifts. The total amount of air in the balloon in our analogy is one quantum, the smallest unit of energy possible in quantum physics. You can add more quanta-air to the balloon (making all the values larger, both in delta-E and delta-t), but the balloon can never contain less than one quantum of air. Thus "quantum balloons" do not come in packets any smaller than one quantum, or photon. (It is interesting that the term "quantum leap" has come to mean a large, rather than the smallest possible, change in something; the order of the dictionary definitions of "quantum leap" has now switched, with the popular usage first and the opposite, physics usage second. If you say to your boss, "We've made a quantum leap in progress today," this can still, however, be considered an honest statement of having made the smallest possible progress.)
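The balloon analogy corresponds to the energy-time uncertainty relation, delta-E times delta-t >= hbar/2. A small sketch (with arbitrary, illustrative numbers) shows the squeeze: tightening the energy side forces the time side to expand in proportion.

```python
# Energy-time uncertainty relation: delta_E * delta_t >= hbar / 2.
# Squeezing one side of the "balloon" forces the other side to grow.
hbar = 1.055e-34  # reduced Planck constant, J*s

def min_delta_t(delta_E):
    """Smallest possible time uncertainty once the energy uncertainty is fixed."""
    return hbar / (2 * delta_E)

# Pin the energy down 10x more precisely and the minimum time uncertainty
# grows by exactly the same factor of 10 (illustrative energies in joules):
dt_loose = min_delta_t(1e-20)  # energy known to within 1e-20 J
dt_tight = min_delta_t(1e-21)  # energy known 10x better
print(dt_tight / dt_loose)     # the ratio of the two time uncertainties
```

The product of the two uncertainties never drops below hbar/2 -- that fixed minimum is the "one quantum of air" that cannot be removed from the balloon.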
When quantum physics was still young, Albert Einstein (and colleagues) would challenge Niels Bohr (and colleagues) with many strange quantum puzzles. Some of these involved effects that seemed to imply that elementary particles could, through quantum effects, communicate faster than light. Einstein argued that we could not really be understanding physics correctly if such effects were allowed, for, among other things, such faster-than-light connectedness would violate the speed-of-light limit set by relativity. Einstein came up with several such seemingly absurd thought experiments, the most famous being the EPR (Einstein, Podolsky, Rosen) paradox, named after the three authors of the paper. It showed that faster-than-light communication would appear to result from certain quantum experiments, and therefore argued that quantum physics was not complete -- that some factors had to be, as yet, undiscovered. This led Niels Bohr and his associates to defend and sharpen the "Copenhagen Interpretation" of quantum reality. This interpretation, greatly simplified, is that it makes no sense to talk about an elementary particle until it is observed, because it really doesn't exist unless it is observed. In other words, elementary particles might be thought of not just as being made up of forces; the observer or measurer must be taken into account as a constituent as well, and the observer can never really be separated from the observation.
Using the wave equations formulated for quantum particles by Erwin Schrödinger, Max Born was the first to suggest that these elementary particle waves were not made up of anything but probabilities! So the constituents of everything we see are made up of what one might call "tendencies to exist," which are made into particles by adding the essential ingredient of "looking." Looking as an ingredient itself, it must be noted, took some getting used to! There were other possible interpretations, but none of them was consistent with any sort of objective reality as Victorian physics had known it. The wildest theories could fit the data equally well, but none of them allowed the particles making up the universe to consist of anything definite without either an underlying faster-than-light communication (the theory of David Bohm), another parallel universe branching off from ours every time a quantum-level decision is made (the many-worlds interpretation), or the "old" favorite: the observer creates the reality by looking (the Copenhagen Interpretation).
Inspired by all these theories, a physicist at CERN in Switzerland named John Bell came up with an experiment that could perhaps test some of them, and could certainly test how far quantum physics was from classical physics. By now (1964) quantum physics was old enough to have distinguished itself from all previous physics, to the point that physics before 1900 was dubbed "classical physics" and physics discovered after 1900 (mainly quantum physics) was dubbed "modern physics." So, in a sense, the history of science is broken up into the first 46 centuries (if one starts with Imhotep, builder of the first pyramid, as the first historical scientist) and the last century, with quantum physics. We are thus quite young in the age of modern physics, this new fundamental view of science. It might even be fair to say that most people are not aware, even after a century, of the great change that has taken place in the fundamental basis of the scientific endeavor and its interpretations of reality.
John Bell proposed an experiment that could measure whether a given elementary particle could "communicate" with another elementary particle farther away faster than any light could have traveled between them. In 1982 a team led by Alain Aspect in Paris performed this experiment, and that was indeed the apparent result. The experiment had to do with polarized light. For illustrative purposes, let's say that you have a container of light, and the light is waving all over the place and -- if the container is coated with a reflective substance, except for the ends -- bouncing off the walls. (One might picture a can of spaghetti with noodles at all orientations as the directions of random light waves.) At the ends we place polarizing filters. This means that only light with a given orientation (say, like noodles that are oriented up-and-down) can get out, while back-and-forth light waves (noodles) cannot. If we rotate the polarizers at both ends by 90 degrees we would then let out back-and-forth light waves, but now not the up-and-down light.
It turns out that if we were to rotate the ends so that they were at an angle of 30 degrees to each other, about half of the total light could get out of the container -- one-fourth from one side of the bottle and one-fourth through the other side. This is (close enough to) what John Bell proposed and Alain Aspect demonstrated. When the "bottle" was rotated at one end, making a 30-degree angle with the other side so that only half the light could escape, a surprising thing happened. Before any light could have had time to travel from the rotated side of the "bottle" (actually a long tube) to the other side, the light coming out of the opposite side from the one that was rotated changed to one-fourth instantaneously (or as close to instantaneous as anyone could measure). Somehow that side of the "bottle" had gotten the message that the other side had been rotated faster than the speed of light. Since then this experiment has been confirmed many times.
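The fractions in this description trace back to the classical rule for polarizers, Malus's law: polarized light passing a second polarizer rotated by an angle theta transmits a fraction cos²(theta) of its intensity. (The one-half and one-fourth figures above are the essay's schematic simplification of the full two-ended experiment; the sketch below shows only the cos² rule itself.)

```python
import math

# Malus's law: polarized light passing a polarizer rotated by theta
# relative to the light's polarization transmits a fraction cos^2(theta)
# of its intensity.
def malus_fraction(theta_deg):
    return math.cos(math.radians(theta_deg)) ** 2

print(malus_fraction(0))    # aligned polarizers: all the light passes
print(malus_fraction(90))   # crossed polarizers: essentially no light passes
print(malus_fraction(30))   # 30 degrees apart: three-fourths passes
```

It is the smooth cos² dependence of the *joint* transmission statistics on the relative angle between the two distant polarizers that Bell showed no purely local theory can reproduce.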
John Bell's formulation of the fundamental ideas in this experiment has been called "Bell's Theorem," and it can be stated most succinctly in his own words: "Reality is non-local." In other words, not only do the elementary particles that make up the things we see around us not exist until they are observed (the Copenhagen Interpretation), but they are not, at the most essential level, even identifiably separable from other such particles arbitrarily far away. John Muir, the 19th Century naturalist, once said, "When we try to pick out anything by itself, we find it hitched to everything else in the universe." He might well have been surprised at how literally -- in physics as well as in ecology -- this turned out to be true.
In the next essay we will combine the uncertainty principle with the results of Bell’s Theorem and increase the scale of the double slit experiment to cosmic proportions with what Einstein’s colleague, John Wheeler, has called "The Participatory Universe." This will involve juggling what is knowable and what is unknowable in the universe at the same time.
In the previous two articles we discussed the basic double-slit experiment, which demonstrates the dual nature of light -- wave and particle -- and then the Heisenberg Uncertainty Principle, which demonstrates the complementarity (mutual exclusivity) of what one can measure at the same time. In this article we shall discuss the more basic interpretation of quantum physics in terms of what one can even know or not know, and how this affects the results one is trying to measure.
John Bell formulated the uncertainty principle in terms of what one could know or not know in an experiment. Several Bell-type experiments have shown that this would seem to be the simplest interpretation of the situation. Taking our double-slit example: if one puts a detector at either of the two slits -- even one that does not destroy the photon, electron, or whatever particle as it goes through the slit -- then an interference pattern does not appear at the screen. This is because one has set up an experiment in which one can "know" which path the particle took (i.e., which slit it went through). As long as one can tell this, the particle cannot go through both slits at once and one no longer gets an interference pattern.
Now, you might say, what if I decide not to look at the detector set up next to one or the other of the slits? Well, one still does not get an interference pattern because the potential exists for one to be able to tell which path the photon, for example, traveled. Even this potential (i.e., "knowability") is enough to stop the formation of an interference pattern. All ability to detect which path an elementary particle took (in this case a photon of light) must be removed to obtain interference. In other words, one must not even be able to tell -- even in principle -- which path the elementary particle took in order for it to "take" both paths and form an interference pattern.
This is the most fundamental concept in quantum physics -- knowable and unknowable. It is from this more fundamental concept of the uncertainty principle that we shall approach our quantum astronomy experiment. It has been experimentally verified that if one can know which path a photon traveled, then an interference pattern is not possible. But if one can become ignorant of which path the photon (or any elementary particle) took, then an interference pattern is assured. That is, if one is ignorant of which path the photon took, then an interference pattern is not just possible, it must occur.
This last point can produce some decidedly non-classical effects. One example is the phenomenon known as "quantum beats." Picture an atom (classically, for now) as consisting of a nucleus with electrons jumping all around it. Electrons do not move smoothly away from and toward their central nucleus; they take discrete steps (of whole energy quanta, in fact) to transition from a lower to a higher (farther from the nucleus) orbital level. They actually disappear from one level and reappear at another, but are never found in between the two. As an electron "jumps" from a higher level to a lower level, it emits a photon of light. The mere fact that one cannot, even in principle, tell which energy-level jump the electron took is enough to produce a special kind of interference fringes called "quantum beats."
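A rough way to picture quantum beats: when the atom could have emitted at either of two closely spaced frequencies and no measurement can tell which, the two amplitudes are added first and only then squared, producing an intensity that pulses at the *difference* of the two frequencies. The frequencies below are arbitrary illustrative numbers, not those of any real atom.

```python
import math

# Two possible transition frequencies (arbitrary units) whose origins
# are, in principle, indistinguishable:
f1, f2 = 100.0, 103.0

def beat_intensity(t):
    # Paths indistinguishable: add the two amplitudes FIRST, then square.
    amp = math.cos(2 * math.pi * f1 * t) + math.cos(2 * math.pi * f2 * t)
    return amp ** 2

# The envelope pulses at the beat frequency f2 - f1 = 3:
print(beat_intensity(0.0))        # amplitudes in phase: maximum intensity
print(beat_intensity(1.0 / 6.0))  # half a beat period later: envelope null
```

Had the two emission paths been distinguishable, one would instead add the two *intensities*, and the pulsing (the beat) would vanish -- the same add-amplitudes-or-add-intensities distinction that governs the double slit.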
Thus, while picturing probability distributions as classical waves (like water waves) may be helpful for beginning physics students, real quantum wave phenomena are decidedly non-classical and produce decidedly non-classical results. These are waves made of nothing but probabilities, or tendencies to exist. Yet they can interfere with each other, wave-fashion, before they are measured, and so "turn" from probability waves into measured particles. (This "collapse of the wave function" is also said to take place instantaneously, as we shall discuss more in article four.)
Einstein wrote several times "God does not play dice with the universe." Quantum physics, however, has reduced everything to probabilities mathematically, and such a formulation inherently implies dice rolling for all possibilities until a measurement is made. Richard Feynman pointed out that the mathematics really does mean all possibilities. Every elementary particle takes every path it possibly can -- a kind of infinite-slit experiment -- and then these infinite numbers of paths all cancel in the multi-dimensional mathematics -- called Hilbert space -- so that only one result is finally measured.
However, a colleague of Einstein’s, Professor John Wheeler of Princeton University, has pointed out that one could take another interpretation, an interpretation he has dubbed, "The Participatory Universe." In this approach one can look at the universe as directly participating in each quantum effect in real time. In other words, the concept of a First Cause starting things off (winding up the clock of the universe, one might say) and then leaving the laws of physics to run things, may be what is incorrect in the basic approach of classical physics. Rather, in this participatory scenario, the Cause of the laws of physics remains an active Participant. (If one would like to also draw some religious points into such discussions I would just say that it is important to understand what is being said and what is not being said here—that is, to not oversimplify what went into Prof. Wheeler’s introduction of this interesting proposed conceptualization for quantum reality.)
Professor Wheeler then came up with a Gedanken experiment (i.e., a thought experiment) that he called the "delayed choice" experiment. He proposed a huge scaling up (to cosmic proportions) of the Young's double-slit experiment we've talked so much about. In this Gedanken experiment, gravitational lenses -- which can bend light from distant quasars or galaxies -- are used as sort-of giant slits to create two paths for photons from a quasar or distant galaxy. General relativity shows that masses in space can bend light.
The first support for Einstein's theory of relativity came with the measurement of the bending of starlight by the Sun, as the light passed close by it, during a total solar eclipse. Light was indeed bent by the mass of the Sun (that is to say, space-time was curved near large masses). It turns out that large masses like galaxies that lie rather close to the direct line between a distant quasar and us will bend the light from the distant object toward us. One can think of light coming toward us more-or-less directly from a distant quasar (let's call this path A), while light shining from this quasar also heads off into space at a slightly different angle.
This light, however, encounters a massive galaxy along the way, so that rays that would normally have missed the Earth get bent toward us as well (we'll call this light path B). The quasar's light thus reaches us both more-or-less directly along path A and, as a second image on the other side of the intervening galaxy, along the bent path B. It therefore appears that we have two quasars when we actually have two images of the same quasar.
John Wheeler realized that these two paths constituted a kind of double-slit experiment where the slits were the two gravitational lens images. The two paths of light from the quasar might be used then to interfere with each other. However, this could be done -- according to Bell’s approach to the uncertainty principle -- only if one could not tell which path any particular photon had traveled. One way of being able to avoid knowing which path an individual photon took is to make the paths equal (within the uncertainty principle) so that one could not tell whether any photons arriving had traveled along path A or path B. Even if there were a flare in the quasar, the flare (peak in brightness) would arrive at the same time at Earth and so one could not use timing to tell which way it came. (One can see that if the paths are not equal, the light from the flare would arrive along path A before path B and so one could tell the path difference, which would negate the possibility of getting an interference pattern.)
Professor Wheeler "solved" this problem by adding an immensely long fiber optics cable to path A to make it as long as path B. (The fiber optics cable would have to be over a light-year long in this case, so it really was truly a Gedanken experiment without much hope of realization -- but we will propose a possible solution to this problem in the fourth and last essay.) The delayed-choice part of the experiment was, nevertheless, still very interesting. Given, then, that one achieves an interference pattern in this way, one should be able to put a detector at the intersection of light paths A and B and simply re-do a cosmic-scale version of Young's double-slit experiment (where light-photons from quasar images A and B crossing the universe are equivalent to light going through slits 1 and 2 in the laboratory).
Now it should be noted that one of the founders of quantum physics, P.A.M. Dirac, observed that, at least in Young's double-slit experiment, one could only get an interference pattern if each photon interfered only with itself -- that is, each single photon had to go through both slits and interfere with itself, not with any other photon. This certainly made sense in terms of the interference experiment being done with one photon at a time and still producing interference. (There are also conservation-of-energy arguments: two photons should not be expected to produce four times the energy when they meet at some points -- making a bright line -- and no energy when they meet at others -- making a dark line in the interference pattern.) Thus if one did the Wheeler delayed-choice experiment detecting single photons one at a time, one would still expect -- if one could not tell which path, A or B, the photons traveled along -- that an interference pattern would result where the two paths met. However, if one moved the detector to detect photons along path A only, then these photons will have traveled only along path A (by classical reasoning). Or, similarly, if one moves the detector to intersect path B, then the photons will have traveled only along path B.
The interesting part of Professor Wheeler’s thought experiment is that the quasar emitting the photons is about one billion light years away—that is, the light from this quasar is supposed to have taken a billion years to travel to Earth. It seems perplexing that any given photon will have had to have traveled both paths when you put the detector at the intersection of both paths, but then one path or the other path when you decide to put the detector directly into one of these paths rather than at their intersection.
In other words, how can your decision as to where to put the detector affect the path of a given photon a billion years after it supposedly started along one of the paths toward Earth -- long before humans even existed on this planet (much less discovered quantum physics)? It would appear that what has "happened" in the distant past in this case may be determined by what is happening right now even though it is supposed to have "happened" over a billion years ago. The choice of which path, in other words, has somehow been "delayed." One might view this as the Universe playing more the part of an active participant in what is happening rather than just in what has happened in the past in this case. Hence the "Participatory Universe" conceptualization.
This interesting Gedanken experiment points out what may be the main difference between general relativity and quantum physics. In general relativity, time is a definite dimension, part of the already unalterable space-time continuum. In quantum physics, by contrast, time is at best a variable, and is also quantized (i.e., there are particles of time). Thus, far from being an absolute, time in quantum physics is not a solid background upon which particles in space change. In quantum physics time is not really, in a sense, even there until the "time particles" are measured.
In our fourth and final essay we will talk about the possible realization of Professor Wheeler’s Gedanken experiment, which may open up a whole new field of investigation -- a field which we will call "Quantum Astronomy."
In the preceding three essays we discussed Young’s double-slit experiment, where light was shown to behave as a wave, and the birth of quantum physics, where light was also shown to behave like a particle. In the second article, we discussed a basic limitation on measurement imposed by the Heisenberg Uncertainty Principle and how one may "trade" knowledge of one measurement for another. In article three, we discussed John Bell’s concept of knowability and unknowability, and then John Wheeler’s Gedanken (thought) experiment creating a cosmic-scale double-slit experiment requiring an immensely long (billions of miles) fiber optics cable. In this article we shall apply John Bell’s concept of knowability and unknowability to the uncertainty principle in order to try to perform John Wheeler’s double-slit experiment over cosmic distances.
In order to realize this experiment, however, one must come up with a substitute for this unbuildably long fiber optics cable, and this is where the SETI Institute’s new Allen Telescope Array and its narrow-band radio wave detectors can play an important part. SETI radio projects rely on the fact that, as far as we know, no natural (i.e., non-technological) source of radio waves can produce a very narrow-band radio channel. When you tune a radio, one turn of the dial and you are on another channel. If you tune to a radio galaxy, however, you can turn the dial many dozens of times and still be on the same channel, so to speak -- you will hear the same sounds. In other words, as far as we know, only technology can make a narrow (1 Hertz wide) radio channel. Thus, looking for narrow-band signals in space should be a good way to look for evidence of radio-technological civilizations around other stars. Fortunately for quantum astronomy, it turns out that an extremely narrow-band radio channel can also replace that unrealistically long fiber optics cable! But to explain just how this can be done we need to look again at the uncertainty principle.
When a colleague, Dr. David P. Carico of San Francisco State University, and I began thinking about actually carrying out Professor Wheeler’s delayed-choice experiment, we realized that the uncertainty principle needed to be satisfied if one was to obtain an interference pattern. That is, one needs to be ignorant of which path the light traveled -- along path A (directly from the quasar) or along path B (the path bent most by the gravity of the intervening galaxy back toward Earth) -- so that it could "travel both paths" and so interfere with itself. (The terms "travel" and "path" as applied to a photon-wave, of course, do not have any real meaning in quantum physics if the particle-nature does not exist until it is measured. But for now we will use such terms, as it is difficult to speak of quantum effects without some reference to our classical notions of space and time.) The energy-time uncertainty principle, as we will recall, says that knowing the energy of a given particle precisely means one cannot know precisely the time the particle had that energy. And "complementarily" (complementarity was Niels Bohr’s term for this), if one knows the time to high precision, one cannot know the energy to better than the basic quantum limit. (This limit, as we will also recall, is set by "Planck’s constant," which is actually quite a small value, so we do not usually notice this uncertainty constraint in everyday activities.)
Now, in thinking about how to do this experiment, we thought that perhaps it might be possible to "trade" knowledge of energy for knowledge of time -- in this case, the delay time between the two paths of the gravitational lens images, A and B. The uncertainty in energy might then allow a very narrow-band radio detector to replace the hugely long fiber optics cable. It’s OK. Read on. I can hopefully explain what I mean. We have seen that we can trade knowledge of energy for knowledge of time (remember the balloon image in a previous article, with "delta-E" written on one end and "delta-t" written on the other). We also remember that if we can tell which path each photon traveled, we will not get an interference pattern, but rather just a picture of the quasar at A and another image of it at B. To understand this "trade," then, let’s take a bit closer look at what we mean by a narrow-band radio wave.
It is known, in the physics of electromagnetic waves, that longer waves have less energy than shorter waves. The blue light we see has more energy per photon than the red light we see. (This extends to lower energy infrared photons and higher energy ultraviolet photons, and even to very low energy radio photons and very much higher energy x-ray photons.) In photography, a filter on the camera lens can allow only blue light, or only red light, into the camera. Sunlight is usually a whole mixture of blues, greens, yellows, oranges, reds, and so on, and therefore also a mixture of photons of all kinds of energy, high and low. When one uses, say, a red filter, one is cutting out the higher energy blue photons, and so only the lower energy red light is detected. The narrower the filter, the smaller the range of energy let into the camera.
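The energy ordering just described follows from the standard relation E = hc/wavelength. A minimal sketch (the wavelength choices are just illustrative round numbers):

```python
# Photon energy vs. wavelength: E = h*c / wavelength.
h = 6.626e-34  # Planck's constant, J*s
c = 2.998e8    # speed of light, m/s

def photon_energy_joules(wavelength_m):
    """Energy carried by a single photon of the given wavelength."""
    return h * c / wavelength_m

blue  = photon_energy_joules(450e-9)  # ~450 nm blue light
red   = photon_energy_joules(650e-9)  # ~650 nm red light
radio = photon_energy_joules(0.21)    # 21 cm radio wave

print(blue > red > radio)  # shorter wavelength -> more energy per photon: True
```

A blue photon carries roughly 4.4e-19 joules, a red one about 3.1e-19, and a radio photon almost fourteen orders of magnitude less -- which is why narrowing a filter narrows the spread of energies so effectively.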
Similarly for radio detectors: a broadband detector lets in radio waves of all sorts of energies at once, while a very narrow-band radio detector (such as those used in the search for extraterrestrial intelligent technology) highly constrains the range of energies being detected. Only radio photons within a very narrow spread in energy are actually measured. Remembering the uncertainty principle for energy and time, we can see that narrow-band radio detectors thus represent a constraint on the value of the energy being measured. But what about time? For that, let’s look at the crossing of the radio waves (radio waves being just long-wavelength light) coming along paths A and/or B. We can only get an interference pattern if we cannot tell (or even potentially be able to tell) which path a radio photon took to reach our detector. But if the difference in travel time between paths A and B (called the "delay time" of the gravitational lens) is long enough, there is plenty of time to detect, for example, a flare going off at the quasar, brightening image A first and image B some time (the delay time) later. This, in fact, is how the delay time between gravitational lens paths is measured. Now comes the most important point: if we use a narrow enough radio bandpass, we can constrain the energy to such a precise value that the time uncertainty becomes larger than the actual delay time of the gravitational lens. In other words, by using narrow-band radio detectors we can constrain the energy so much that we lose the ability -- even potentially -- to measure which path the photon travels, because our uncertainty in the arrival time of the photon is now larger (by the uncertainty principle) than the actual delay time, the travel-time difference between paths A and B.
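The criterion in the paragraph above can be sketched numerically. Under the rough uncertainty-principle estimate that a bandwidth of N Hertz blurs photon arrival times by about 1/N seconds (the 50-second delay below is a made-up illustrative value, not a measured lens delay):

```python
# Energy-time trade: a detector bandwidth of bandwidth_hz cannot time photon
# arrivals better than roughly 1/bandwidth_hz seconds.  If that blur exceeds
# the gravitational-lens delay, the two paths are indistinguishable and
# interference is possible.

def interference_possible(bandwidth_hz, lens_delay_s):
    arrival_time_blur = 1.0 / bandwidth_hz  # uncertainty-principle estimate
    return arrival_time_blur > lens_delay_s

print(interference_possible(0.01, 50))  # 100 s blur hides a 50 s delay: True
print(interference_possible(1.0, 50))   # 1 s blur reveals which path: False
```

The whole experiment turns on this one comparison: narrow the bandpass until the timing blur swallows the delay, and which-path information vanishes even in principle.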
Thus we cannot tell along which path the photon traveled and so should get an interference pattern at the detectors. A very narrow-band (but real) radio detector then, can substitute for an unrealistically long fiber optics cable to get an interference pattern at the intersection of paths A and B.
So, how does one proceed to do this? We can start observing the gravitational lens using a radio telescope with very narrow-band detectors. We set the detectors on the narrowest band possible (let’s say one-hundredth of a Hertz, which means we know the frequency -- and therefore the energy -- of the incoming radio wave to within one-hundredth of a cycle per second). We focus the two images of the quasar across each other and (if the delay time is not too long -- no longer than 100 seconds in this case) we will obtain an interference pattern. This means we cannot know which path the radio photons "took." (We also assume no detectable rapid fluctuations from the quasar, for simplicity, although there are ways of dealing with this effect as well, using "choppers" in the path of the incoming light.) Now what happens if we increase the allowable energies being detected (i.e., increase the bandpass of the radio detectors)? At first we may still get an interference pattern. But if we continue to increase the bandpass, at some point the interference pattern will disappear, and we shall simply get a (radio) picture of a quasar at location A, and another of its image at location B. The interference pattern will have disappeared at exactly the point where we could begin to tell which path the photons took. In other words, by allowing ourselves to become more and more ignorant of the energy of the arriving radio waves, we simultaneously allowed an increased knowledge (according to the uncertainty principle) of the time interval. And when we decreased our knowledge of the energy to the point where our uncertainty in the time interval dropped below the actual delay time between light paths of the gravitational lens, we could (at least in principle) tell which path each photon took. At that point the uncertainty principle "kicks in" and says that one cannot know which path a photon took and still get a wave phenomenon (i.e., an interference pattern). One cannot have one’s photon and wave it too.
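The bandpass sweep just described can be mocked up in a few lines. A toy sketch, assuming the same 1/bandpass timing blur and the essay’s illustrative 100-second delay:

```python
# Widen the bandpass step by step until the arrival-time blur (~1/bandpass)
# drops below the lens delay time -- the point where which-path information
# becomes available and the fringes should vanish.
lens_delay_s = 100.0
bandpass_hz = 0.01                        # start at one-hundredth of a Hertz

while 1.0 / bandpass_hz >= lens_delay_s:  # fringes persist while the blur
    bandpass_hz *= 2.0                    # hides the delay; keep widening

print(bandpass_hz)  # first bandpass wide enough to reveal which-path timing
```

With these numbers the fringes should die almost immediately (at about 0.02 Hz), because a 0.01 Hz bandpass sits right at the 100-second threshold; a real observation would presumably start much narrower to leave margin.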
Thus we may be able to use very narrow-band radio detectors to realize the delayed-choice (perhaps no longer just Gedanken) experiment proposed by Professor Wheeler. What is of interest in doing such an experiment? First, it may represent a way to directly measure delay times for gravitational lenses that don’t vary much in brightness, and such delay times can be used to measure the expansion rate of the universe (a parameter called the "Hubble constant") directly. But more intriguing, perhaps, is that it may provide a measure of the minimum time it takes for a wave to "become" a particle. If the quasar is one billion light years away (that’s about six billion trillion miles) and the interference pattern is being formed by a probability wave traveling along both paths A and B, then if one increases the bandpass (say, over one hour’s time) to the point where the wave becomes a particle (photon), one might be able to speak of the wave "becoming" a particle at a minimum rate of a billion light years per hour. This rate is considered in most quantum physics formulations to be instantaneous, but one is reminded of Galileo and a colleague standing on opposite hillsides with lamps, trying to measure the speed of light. When one opened his lampshade, the other, as soon as he saw it, opened his, and so on, back and forth. They decided that the speed of light was either instantaneous or very, very fast. It turned out to be very, very fast (about 186,000 miles per second) -- far too fast to measure with shaded lamps on nearby hills. So perhaps quantum astronomy may someday allow a measurement of the speed of the wave-to-particle transition, if it is not instantaneous. What we have outlined here is just one experiment among many that could be performed in what may be one of the most interesting new fields of the 21st Century: quantum astronomy.
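The "billion light years per hour" figure quoted above is easy to put in perspective as a multiple of the speed of light. A quick back-of-the-envelope (the one-hour window is the essay’s illustrative number, not a measured quantity):

```python
# If a probability wave spanning a billion light years "became" a particle
# within one hour, the implied minimum speed of the wave-to-particle
# transition, expressed in units of the speed of light:
hours_per_year = 365.25 * 24              # ~8766 hours in a year
rate_in_c = 1e9 * hours_per_year          # (1e9 light years)/(1 hour), in c
print(f"{rate_in_c:.1e} times the speed of light")
```

That comes to nearly nine trillion times the speed of light -- which is why even a crude lower bound from such an observation would be striking, whether or not the transition ultimately proves instantaneous.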
The laboratory double-slit experiment contains a lot of the best aspects of the weirdness of quantum physics. It can involve various kinds of elementary particles, but for today’s discussion we will be talking solely about light – the particle nature of which is called the “photon.” A light shining through a small hole or slit (as in a pinhole camera) creates a spot of light on the screen (or film, or detector). However, light shone through two slits that are close together creates not two spots on the screen, but rather a series of alternating bright and dark lines with the brightest line in the exact middle of this interference pattern. This shows that light is a wave, since such a pattern results from the interference of the waves coming from slit one (which we shall call “A”) with the waves coming from slit two (which we shall call “B”). When peaks of waves from light source A meet peaks from light source B, they add, and the bright lines are produced. Not far to the left and right of this brightness peak, however, peaks from A meet troughs from B (because the crests of the light waves are no longer aligned) and a dark line is produced. This alternates on either side until the visibility of the lines fades out. This pattern is simply called an “interference pattern,” and Thomas Young used this experiment to demonstrate the wave nature of light in the early 19th Century.
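The bright-and-dark alternation described above follows a simple rule: for slit separation d and wavelength lam, the screen intensity at angle theta varies as cos^2(pi * d * sin(theta) / lam). A small sketch with made-up slit dimensions:

```python
# Two-slit fringe intensity: bright where crests align, dark where a crest
# from slit A meets a trough from slit B.  d and lam are illustrative values.
import math

def fringe_intensity(theta, d=1e-5, lam=5e-7):
    """Relative intensity at angle theta from the slits (0..1)."""
    return math.cos(math.pi * d * math.sin(theta) / lam) ** 2

print(fringe_intensity(0.0))  # central bright line -> 1.0

# First dark line: where the path difference d*sin(theta) equals lam/2,
# so a crest from A arrives exactly with a trough from B.
theta_dark = math.asin(5e-7 / (2 * 1e-5))
print(round(fringe_intensity(theta_dark), 6))  # -> 0.0
```

The pattern then repeats, bright and dark, on either side of the center, fading as the single-slit envelope (not modeled here) takes over.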
However, in the year 1900 physicist Max Planck showed that certain other effects in physics could only be explained by light being a particle. Many experiments followed, also showing that light was indeed a particle (a “photon”), and Albert Einstein was awarded the Nobel Prize in Physics in 1921 for his work showing that the particle nature of light could explain the “photoelectric effect.” This was an experiment whereby low energy (red) light, when shining onto a photoelectric material, caused the material to emit low energy (slow moving) electrons, while high energy (blue) light caused the same material to emit high energy (fast moving) electrons. However, lots of red light only ever produced more low energy electrons, never any high-energy electrons. In other words, the energy could not be “saved up” but rather had to be absorbed by the electrons in the photoelectric material individually. The conclusion was that light comes in packets, little quantities, and thus behaves as a particle as well as a wave.
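Einstein’s explanation can be sketched as a one-line energy budget: the ejected electron’s energy is the photon’s energy minus the material’s “work function” (the cost of pulling an electron free). The 1.5 eV work function below is a made-up illustrative value, not a real material’s:

```python
# Photoelectric effect: electron energy = photon energy - work function.
h_ev = 4.136e-15          # Planck's constant in eV*s
c = 2.998e8               # speed of light, m/s
work_function_ev = 1.5    # hypothetical photoelectric material

def electron_energy_ev(wavelength_m):
    photon_ev = h_ev * c / wavelength_m
    return max(photon_ev - work_function_ev, 0.0)  # 0.0: too weak to eject

blue = electron_energy_ev(450e-9)  # higher-energy photon -> faster electron
red  = electron_energy_ev(650e-9)  # lower-energy photon -> slower electron
print(blue > red > 0.0)  # -> True
```

Note that the formula involves only one photon at a time: doubling the amount of red light doubles the number of slow electrons but never produces a single fast one, which is exactly the observation that forced the packet picture.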
So light is both a particle and a wave. OK, kind of unexpected (like Jell-O) but perhaps not totally weird. But the double-slit experiment had another trick up its sleeve. One could send one photon (or “quantum” of energy) through a single slit at a time, with a sufficiently long interval in between, and eventually a spot built up that looked just like the one produced when a very intense (many-photon) light was sent through the slit. But then a strange thing happened. When one sends a single photon at a time (waiting between each laser pulse, for example) toward the screen with both slits open, rather than two spots eventually building up opposite the two slit openings, what eventually builds up is the interference pattern of alternating bright and dark lines! Hmm… how can this be, if only one photon was sent through the apparatus at a time?
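The one-photon-at-a-time buildup can be mimicked with a toy Monte Carlo: each simulated photon lands at one of 21 detector positions, chosen randomly but weighted by a cos-squared fringe pattern, so individual arrivals look random while the fringes emerge only in the accumulated counts. Everything here (bin count, weighting) is illustrative:

```python
import math, random

random.seed(1)                            # reproducible toy run
positions = range(21)                     # 21 detector bins across the screen

def fringe_weight(i):
    return math.cos(i * math.pi / 4) ** 2  # cos^2 bright/dark weighting

weights = [fringe_weight(i) for i in positions]
counts = [0] * 21
for _ in range(10_000):                   # 10,000 single photons, one at a time
    counts[random.choices(positions, weights)[0]] += 1

# Bright bins (weight 1) fill up; dark bins (weight ~0) stay empty.
print(counts[2] == 0, counts[0] > 0)
```

No single simulated photon "knows" about the pattern; it appears only statistically -- which is the laboratory observation that demands each photon somehow sample both slits.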
The answer is that each individual photon must – in order to have produced an interference pattern -- have gone through both slits! This, the simplest of quantum weirdness experiments, has been the basis of many of the unintuitive interpretations of quantum physics. We can see, perhaps, how physicists might conclude, for example, that a particle of light is not a particle until it is measured at the screen. Before it is measured, the particle of light is rather a wave -- but not a wave in the ocean-wave sense. It is not a wave of matter but, apparently, a wave of probability. That is, the elementary particles making up the trees, people, and planets -- what we see around us -- are apparently just distributions of likelihood until they are measured (that is, observed). So much for the Victorian view of solid matter!
The shock of matter being largely empty space may have been extreme enough -- if an atom were the size of a huge cathedral, then the electrons would be dust particles floating around at all distances inside the building, while the nucleus, or center of the atom, would be smaller than a sugar cube. But with quantum physics, even this tenuous result would be superseded by the atom itself not really being anything that exists until it is measured. One might rightly ask, then, what does it mean to measure something? And this brings us to the Uncertainty Principle, first discovered by Werner Heisenberg. Dr. Heisenberg wrote, “Some physicists would prefer to come back to the idea of an objective real world whose smallest parts exist objectively in the same sense as stones or trees exist independently of whether we observe them. This however is impossible."
Perhaps that is enough to think about for now. So in the next essay we will examine, in some detail, the uncertainty principle as it relates to what is called “the measurement problem” in quantum physics. We shall find that the uncertainty principle will be the key to performing the double-slit experiment over astronomical distances, and demonstrating that quantum effects are not just microscopic phenomena, but can be extended across the cosmos.
In the first article, we discussed the double-slit experiment and how a quantum particle of light (a photon) can be thought of as a wave of probability until it is actually detected. In this article we shall examine another feature of quantum physics that places fundamental constraints on what can actually be measured, a basic property first discovered by Werner Heisenberg, the simplest form of which is known as the "Heisenberg Uncertainty Principle."
In scientific circles we are perhaps used to thinking of the word "principle" as "order", "certainty", or "a law of the universe". So the term "uncertainty principle" may strike us as something akin to the terms "jumbo shrimp" or "guest host" in the sense of juxtaposing opposites. However, the uncertainty principle is a fundamental property of quantum physics initially discovered through somewhat classical reasoning -- a classically based logic that is still used by many physics teachers to explain the uncertainty principle today. This classical approach is that if one looks at an elementary particle using light to see it, the very act of hitting the particle with light (even just one photon) should knock it out of the way so that one can no longer tell where the particle actually is located -- just that it is no longer where it was.
Smaller wavelength light (blue, for example, which is more energetic) imparts more energy to the particle than longer wavelength light (red, for example, which is less energetic). So using a smaller (more precise) "yardstick" of light to measure position means that one "messes up" the position of the particle more by "hitting" it with more energy. While his mentor, Niels Bohr (who successfully argued with Einstein on many of these matters), was away traveling, Werner Heisenberg first published his uncertainty principle paper using the more-or-less classical reasoning just given. (The deviation from classical notions was the idea that light comes in little packets or quantities, known as "quanta," as discussed in article one.) However, the uncertainty principle was to turn out to be much more fundamental than even Heisenberg imagined in his first paper.
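Heisenberg’s "light disturbs the particle" reasoning can be put in rough numbers: locating a particle to within about one wavelength means striking it with a photon whose momentum kick is about Planck’s constant divided by that wavelength, so the product of the two blurs stays near Planck’s constant no matter what color of light is used. A sketch under exactly those rough assumptions:

```python
# The "Heisenberg microscope" estimate: delta_x ~ wavelength,
# delta_p ~ h / wavelength, so their product is always ~h.
h = 6.626e-34  # Planck's constant, J*s

for lam in (650e-9, 450e-9, 1e-10):  # red light, blue light, an X-ray
    delta_x = lam                     # position known only to ~one wavelength
    delta_p = h / lam                 # momentum kick from the measuring photon
    print(f"{delta_x * delta_p:.3e}")  # same ~6.6e-34 for every wavelength
```

Switching to a shorter "yardstick" shrinks delta_x but inflates delta_p by exactly the same factor: the trade-off cannot be beaten by choosing a cleverer color of light.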
Momentum is a fundamental concept in physics. It is classically defined as the mass of a particle multiplied by its velocity. We can picture a baseball thrown at us at 100 miles per hour having a similar effect as a bat thrown at us at ten miles per hour; they would have roughly the same momentum although they have quite different masses. The Heisenberg Uncertainty Principle basically states that as one comes to know the momentum of an elementary particle very well (usually by measuring its velocity), one begins to lose knowledge of the position of the particle, that is, of where the particle is actually located. Stating the principle relativistically yields another version: as one gets to know the energy of an elementary particle very well, one cannot at the same time know (i.e., measure) very accurately the time at which it actually had that energy. So we have, in quantum physics, what are called "complementary pairs." (If you’d really like to impress your friends, you can also call them "non-commuting observables.")
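The baseball-and-bat comparison above, in rough numbers (the masses are illustrative guesses: about 0.145 kg for a baseball, 0.9 kg for a bat):

```python
# p = m * v: a light, fast object can match a heavy, slow one in momentum.
MPH_TO_MS = 0.447  # miles per hour -> meters per second

def momentum(mass_kg, speed_mph):
    return mass_kg * speed_mph * MPH_TO_MS  # kg*m/s

ball = momentum(0.145, 100)  # light but fast
bat  = momentum(0.9, 10)     # heavy but slow
print(round(ball, 1), round(bat, 1))  # a few kg*m/s each: the same ballpark
```

With these guessed masses the two momenta come out within a factor of two of each other, which is all the essay’s "similar effect" claim needs.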
One can illustrate the basic results of the uncertainty principle with a not-quite-filled balloon. On one side we could write "delta-E" to represent our uncertainty in the value of the energy of a particle, and on the other side write "delta-t," which stands for our uncertainty in the time the particle had that energy. If we squeeze the delta-E side (constrain the energy so that it fits into our hand, for example) we can see that the delta-t side of the balloon gets larger. Similarly, if we decide to make the delta-t side fit within our hand, the delta-E side gets larger. But the total amount of air in the balloon does not change; it just shifts. The total amount of air in our analogy is one quantum, the smallest unit possible in quantum physics. You can add more air to the balloon (making all the values larger, both in delta-E and delta-t) but you can never shrink the balloon below one quantum’s worth of air. Thus "quantum balloons" do not come in sizes any smaller than one quantum. (It is interesting that the term "quantum leap" has come to mean a large, rather than the smallest possible, change in something; the order of the dictionary definitions of "quantum leap" has now switched, with the popular usage first and the opposite, physics usage second. If you say to your boss, "We’ve made a quantum leap in progress today," this can still, however, be considered an honest statement of making absolutely no progress at all.)
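The balloon picture as arithmetic: squeezing delta-E inflates delta-t, with their product never falling below a floor of about Planck’s constant. (A minimal sketch: the textbook bound is actually hbar/2, but the balloon analogy only needs the reciprocal trade-off, so we use h itself as the floor here.)

```python
# Energy-time trade-off: delta_E * delta_t >= h (schematic floor).
h = 6.626e-34  # Planck's constant, J*s

def min_delta_t(delta_e_joules):
    """Smallest timing uncertainty allowed for a given energy uncertainty."""
    return h / delta_e_joules

squeezed = min_delta_t(1e-30)  # energy pinned down tightly...
loose    = min_delta_t(1e-20)  # ...energy barely constrained
print(squeezed > loose)        # squeezing delta-E inflates delta-t: True
```

Because h is so tiny, both timing blurs here are fractions of a millisecond or less -- which is why the balloon never makes itself felt in everyday life, only at the quantum scale (or, as the next essays argue, over cosmic delay times).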
When quantum physics was still young, Albert Einstein (and colleagues) would challenge Niels Bohr (and colleagues) with many strange quantum puzzles. Some of these involved effects that seemed to imply that elementary particles, through quantum effects, could communicate faster than light. Einstein argued that we could not be understanding physics correctly if such effects were allowed since, among other things, such faster-than-light connectedness would violate the speed-of-light limit set by relativity. Einstein came up with several such self-evidently absurd thought experiments, the most famous being the EPR (Einstein, Podolsky, Rosen) paradox, named after the three authors of the paper, which showed that faster-than-light communication would appear to result from certain quantum experiments, and which therefore argued that quantum physics was not complete -- that some factors had yet to be discovered. Niels Bohr and his associates responded with the "Copenhagen Interpretation" of quantum reality. This interpretation, overly simplified in a nutshell, is that it makes no sense to talk about an elementary particle until it is observed, because it really doesn’t exist unless it is observed. In other words, elementary particles should be thought of not just as bundles of forces; the observer or measurer must be taken into account as a constituent as well, and the observer can never really be separated from the observation.
Using the wave equations formulated for quantum particles by Erwin Schrödinger, Max Born was the first to make the suggestion that these elementary particle waves were not made up of anything but probabilities! So the constituents of everything we see are made up of what one might call "tendencies to exist" which are made into particles by adding the essential ingredient of "looking." Looking as an ingredient itself, it must be noted, took some getting used to! There were other possible interpretations we could follow, but it can be said that none of them was consistent with any sort of objective reality as Victorian physics had known it before. The wildest theories could fit the data equally well, but none of them allowed the particles making up the universe to consist of anything without either an underlying faster-than-light communication (theory of David Bohm), another parallel universe branching off ours every time there is a minute decision to be made (many worlds interpretation), or the "old" favorite, the observer creates the reality when he looks (the Copenhagen Interpretation).
Inspired by all these theories, a physicist at CERN in Switzerland named John Bell came up with an experiment that could perhaps test some of these theories, and certainly test how far quantum physics was from classical physics. By now (1964) quantum physics was old enough to have distinguished itself from all previous physics, to the point that physics before 1900 was dubbed "classical physics" and physics discovered after 1900 (mainly quantum physics) was dubbed "modern physics." So, in a sense, the history of science is broken up into the first 46 centuries (if one starts with Imhotep, who built the first pyramid, as the first historical scientist) and the last century, with quantum physics. We are thus quite young in the age of modern physics, this new fundamental view of science. It might even be fair to say that most people are not aware, even after a century, of the great change that has been taking place in the fundamental basis of the scientific endeavor and its interpretations of reality.
John Bell proposed an experiment that could measure whether a given elementary particle could "communicate" with another elementary particle farther away faster than any light could have traveled between them. In 1982 a team led by Alain Aspect near Paris performed this experiment, and indeed the apparent result was undeniable. The experiment had to do with polarized light. For illustrative purposes, let’s say that you have a container of light, and the light is waving all over the place and -- if the container is coated with a reflective substance, except for the ends -- bouncing off the walls. (One might picture a can of spaghetti with noodles at all orientations as the directions of random light waves.) At the ends we place polarizing filters. This means that only light with a given orientation (say, like noodles that are oriented up-and-down) can get out, while back-and-forth light waves (noodles) cannot. If we rotate the polarizers at both ends by 90 degrees we would then let out back-and-forth light waves, but not up-and-down light.
It turns out that if we were to rotate the ends so that they were at an angle of 30 degrees to each other, about half of the total light could get out of the container -- one-fourth from one side of the bottle and one-fourth through the other side. This is (close enough to) what John Bell proposed and Alain Aspect demonstrated. When the "bottle" was rotated at one end, making a 30-degree angle with the other side so that only half the light could escape, a surprising thing happened. Before any light could have had time to travel from the rotated side of the "bottle" (actually a long tube) to the other side, the light coming out of the opposite side from the one that was rotated changed to one-fourth instantaneously (or as close to instantaneous as anyone could measure). Somehow that side of the "bottle" had gotten the message that the other side had been rotated faster than the speed of light. Since then this experiment has been confirmed many times.
John Bell’s formulation of the fundamental ideas in this experiment has been called "Bell’s Theorem," and can be stated most succinctly in his own words: "Reality is non-local." In other words, not only do the elementary particles that make up the things we see around us not exist until they are observed (Copenhagen Interpretation), but they are not, at the most essential level, even identifiably separable from other such particles arbitrarily far away. John Muir, the 19th Century naturalist, once said, "When we try to pick out anything by itself, we find it hitched to everything else in the universe." Well, he might have been surprised at how literally -- in physics as well as in ecology -- this turned out to be true.
In the next essay we will combine the uncertainty principle with the results of Bell’s Theorem and increase the scale of the double slit experiment to cosmic proportions with what Einstein’s colleague, John Wheeler, has called "The Participatory Universe." This will involve juggling what is knowable and what is unknowable in the universe at the same time.
In the previous two articles we discussed the basic double-slit experiment, which demonstrates the dual nature of light -- wave and particle -- and then the Heisenberg Uncertainty Principle, which demonstrates the complementarity (mutual exclusivity) of what one can measure at the same time. In this article we shall discuss the more basic interpretation of quantum physics in terms of what one can even know or not know, and how this affects the results one is trying to measure.
John Bell formulated the uncertainty principle in terms of what one could know or not know in an experiment. Several Bell-type experiments have successfully shown that this would seem to be the simplest interpretation of the situation. Taking our double-slit example, if one puts a detector at one or the other of the two slits -- even one that does not destroy the photon, electron, or whatever particle as it goes through the slit -- then an interference pattern does not appear at the detector. This is because one has set up an experiment in which one can "know" which path the particle took (i.e., which slit the particle went through). As long as one can tell this, then the particle cannot go through both slits at once and one no longer gets an interference pattern.
Now, you might say, what if I decide not to look at the detector set up next to one or the other of the slits? Well, one still does not get an interference pattern because the potential exists for one to be able to tell which path the photon, for example, traveled. Even this potential (i.e., "knowability") is enough to stop the formation of an interference pattern. All ability to detect which path an elementary particle took (in this case a photon of light) must be removed to obtain interference. In other words, one must not even be able to tell -- even in principle -- which path the elementary particle took in order for it to "take" both paths and form an interference pattern.
This is the most fundamental concept in quantum physics -- knowable and unknowable. It is from this more fundamental concept of the uncertainty principle that we shall approach our quantum astronomy experiment. It has been experimentally verified that if one can know which path a photon traveled, then an interference pattern is not possible. But if one can become ignorant of which path the photon (or any elementary particle) took, then an interference pattern is assured. That is, if one is ignorant of which path the photon took, then an interference pattern is not just possible, it must occur.
This last point can produce some decidedly non-classical effects. One example is the phenomenon known as "quantum beats." Picture an atom (classically, for now) as consisting of a nucleus with electrons jumping all around it. Electrons do not move smoothly away from and toward their central nucleus; they take discrete steps (energy quanta, actually) to transition from a lower to a higher (farther from the nucleus) orbital level. They actually disappear from one level and reappear at another, but are never found in between the two. As an electron "jumps" from a higher-level step to a lower-level step, it emits a photon of light. Just the fact that one cannot, even in principle, tell which energy level jump the electron took, is enough to produce a special kind of interference fringes called "quantum beats."
Thus, while picturing probability distributions as classical waves (like water waves) may be helpful for beginning physics students, real quantum wave phenomena are decidedly non-classical and produce decidedly non-classical results. They are not waves made of anything but probabilities, or tendencies to exist. Yet they can interfere with each other in a wave-like way before they are measured, and so "turn" from probability waves into measured particles. (This "collapse of the wave function" is also said to take place instantaneously, as we shall discuss further in article four.)
Einstein wrote several times "God does not play dice with the universe." Quantum physics, however, has reduced everything to probabilities mathematically, and such a formulation inherently implies dice rolling for all possibilities until a measurement is made. Richard Feynman pointed out that the mathematics really does mean all possibilities. Every elementary particle takes every path it possibly can -- a kind of infinite-slit experiment -- and then these infinite numbers of paths all cancel in the multi-dimensional mathematics -- called Hilbert space -- so that only one result is finally measured.
However, a colleague of Einstein’s, Professor John Wheeler of Princeton University, has pointed out that one could take another interpretation, an interpretation he has dubbed, "The Participatory Universe." In this approach one can look at the universe as directly participating in each quantum effect in real time. In other words, the concept of a First Cause starting things off (winding up the clock of the universe, one might say) and then leaving the laws of physics to run things, may be what is incorrect in the basic approach of classical physics. Rather, in this participatory scenario, the Cause of the laws of physics remains an active Participant. (If one would like to also draw some religious points into such discussions I would just say that it is important to understand what is being said and what is not being said here—that is, to not oversimplify what went into Prof. Wheeler’s introduction of this interesting proposed conceptualization for quantum reality.)
Professor Wheeler came up then with a Gedanken experiment (i.e., a thought experiment) that he called the "delayed choice" experiment. He proposed a huge scaling up (to cosmic proportions) of Young’s double slit experiment that we’ve talked so much about. In this Gedanken experiment gravitational lenses, which can bend light from distant quasars or galaxies, are used as sort-of giant slits to create two paths for photons from a quasar or distant galaxy. General Relativity shows that masses in space can bend light.
The first observational confirmation of Einstein’s general theory of relativity came with the measurement of the bending of starlight by the Sun during a total solar eclipse, as the stars passed close behind it. Light was indeed bent by the mass of the Sun (that is to say, space-time is curved near large masses). It turns out that a large mass such as a galaxy, lying rather close to the direct line between a distant quasar and us, will bend the light from the distant object toward us. One can think of light coming toward us more-or-less directly from a distant quasar (let’s call this path A) while light shining from this quasar also heads off into space at a slightly different angle.
This light, however, encounters a massive galaxy along the way so that the light rays that would normally have missed the Earth get bent towards us as well (we’ll call this light path B). Thus it appears that we have two quasars with a massive galaxy in between. However, this is just one quasar whose light rays are coming more-or-less directly toward us along path A and whose second image appears on the other side of the massive galaxy image, these latter being the rays traveling along bent path B (i.e., bent toward us). Thus it appears that we have two quasars when we actually have two images of the same quasar.
John Wheeler realized that these two paths constituted a kind of double-slit experiment where the slits were the two gravitational lens images. The two paths of light from the quasar might be used then to interfere with each other. However, this could be done -- according to Bell’s approach to the uncertainty principle -- only if one could not tell which path any particular photon had traveled. One way of being able to avoid knowing which path an individual photon took is to make the paths equal (within the uncertainty principle) so that one could not tell whether any photons arriving had traveled along path A or path B. Even if there were a flare in the quasar, the flare (peak in brightness) would arrive at the same time at Earth and so one could not use timing to tell which way it came. (One can see that if the paths are not equal, the light from the flare would arrive along path A before path B and so one could tell the path difference, which would negate the possibility of getting an interference pattern.)
Professor Wheeler "solved" this problem by adding an immensely long fiber optics cable to path A to make it as long as path B. (The fiber optics cable turned out to have to be over a light year long in this case, so it really was truly a Gedanken experiment without much hope of realization -- but we will propose a possible solution to this problem in the fourth and last essay.) The delayed-choice part of the experiment was, nevertheless, still very interesting. Given, then, that one achieves an interference pattern in this way, one should be able to put a detector at the intersection of light paths A and B and just re-do a cosmic-scale version of the Young’s double-slit experiment (where light-photons from quasar images A and B crossing the universe are equivalent to light going through slits 1 and 2 in the laboratory).
Now one of the founders of quantum physics, P.A.M. Dirac, noted that, at least in Young’s double-slit experiment, one could only get an interference pattern if each photon interfered only with itself -- that is, each single photon had to go through both slits and interfere with itself, not with any other photon. This certainly made sense in terms of the interference experiment being done with one photon at a time, and still producing interference. (There are also conservation-of-energy arguments -- two photons should not be expected to produce four times the energy when they meet at some points -- making a bright line -- and no energy when they meet at others -- making a dark line in the interference pattern.) Thus if one did the Wheeler delayed-choice experiment detecting single photons one at a time, one would still expect -- if one could not tell which path, A or B, the photons traveled along -- that an interference pattern would result where the two paths met. However, if one moved the detector to detect photons along path A only, then those photons will have traveled only along path A (by classical reasoning). Or, similarly, if one moves the detector to intersect path B, then the photons will have traveled only along path B.
The interesting part of Professor Wheeler’s thought experiment is that the quasar emitting the photons is about one billion light years away—that is, the light from this quasar is supposed to have taken a billion years to travel to Earth. It seems perplexing that any given photon will have had to have traveled both paths when you put the detector at the intersection of both paths, but then one path or the other path when you decide to put the detector directly into one of these paths rather than at their intersection.
In other words, how can your decision as to where to put the detector affect the path of a given photon a billion years after it supposedly started along one of the paths toward Earth -- long before humans even existed on this planet (much less discovered quantum physics)? It would appear that what has "happened" in the distant past in this case may be determined by what is happening right now even though it is supposed to have "happened" over a billion years ago. The choice of which path, in other words, has somehow been "delayed." One might view this as the Universe playing more the part of an active participant in what is happening rather than just in what has happened in the past in this case. Hence the "Participatory Universe" conceptualization.
This interesting Gedanken experiment points out what may be the main difference between general relativity and quantum physics. In general relativity time is a definite dimension, part of the already unalterable space-time continuum, while in quantum physics time is, at best, a variable, and may itself be quantized (i.e., there may be particles of time). Thus, far from being an absolute, time in quantum physics is not a solid background upon which particles in space change. In quantum physics time is not yet really, in a sense, even there until the "time particles" are measured.
In our fourth and final essay we will talk about the possible realization of Professor Wheeler’s Gedanken experiment, which may open up a whole new field of investigation -- a field which we will call "Quantum Astronomy."
In the preceding three essays we discussed Young’s double-slit experiment, where light was shown to behave as a wave. We also discussed the birth of quantum physics, where light was also shown to behave like a particle. In the second article, we discussed a basic limitation on measurement imposed by the Heisenberg Uncertainty Principle and how one may "trade" knowledge of one measurement for another. In article three, we then discussed John Bell’s concept of knowability and unknowability, and then John Wheeler’s Gedanken (thought) experiment creating a cosmic-scale double-slit experiment requiring an immensely long (more than a light-year) fiber optics cable. In this article we shall apply John Bell’s concept of knowability and unknowability to the uncertainty principle in order to try to make John Wheeler’s cosmic double-slit experiment realizable.
In order to realize this experiment, however, one must come up with a substitute for this unbuildably long fiber optics cable, and this is where the SETI Institute’s new Allen Telescope Array and its narrow-band radio wave detectors can play an important part. SETI radio projects use the fact that, as far as we know, no natural (i.e., non-technological) source of radio waves can make a very narrow-band radio channel. When you tune to a station on the radio, one turn and you are on another channel. If you tune to a radio galaxy, however, you can turn the dial many dozens of times and you will still be on the same channel, so to speak—you will hear the same sounds. In other words, as far as we know, only technology can make a narrow (1 Hertz wide) radio channel. Thus, looking for narrow-band signals in space should be a good way to look for evidence of any radio-technological civilizations around other stars. Fortunately for quantum astronomy, it also turns out that an extremely narrow-band radio channel can also be used to replace that unrealistically long fiber optics cable! But to explain just how this can be done we need to first look again at the uncertainty principle.
When a colleague, Dr. David P. Carico of San Francisco State University, and I began thinking about actually carrying out Professor Wheeler’s delayed-choice experiment, we realized that the uncertainty principle needed to be satisfied if one is to obtain an interference pattern. That is, one needs to be ignorant of which path the light traveled—along path A (directly from the quasar) or along path B (the path most bent by the gravity of the intervening galaxy back toward Earth) so that it could "travel both paths" and so interfere with itself. (The terms "travel" and "path" as applied to a photon-wave, of course, do not have any real meaning in quantum physics if the particle-nature does not exist until it is measured. But for now we will use such terms, as it is difficult to speak of quantum effects without some reference to our classical notions of space and time.) The energy-time uncertainty principle, as we will recall, referred to the fact that knowing the energy of a given particle meant that one could not know precisely the time the particle had that energy. And, "complementarily" (the term Niels Bohr used), if one knows the time to a high precision, one cannot then know what that energy was with greater accuracy than the basic quantum limit. (This quantum limit, as we will also recall, is set by "Planck’s constant," which is actually quite a small value, so we do not usually notice this uncertainty constraint in everyday activities.)
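As a compact sketch of that trade-off (this is the standard textbook form of the energy-time relation, not a formula from this essay):

```latex
\Delta E \,\Delta t \;\gtrsim\; \frac{\hbar}{2},
\qquad E = h\nu
\;\Rightarrow\;
\Delta\nu \,\Delta t \;\gtrsim\; \frac{1}{4\pi}
```

So the narrower the detected frequency band (small delta-nu), the larger the unavoidable spread in arrival time (large delta-t), which is exactly the loophole this proposed experiment exploits.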
Now in thinking about how to do this experiment we thought that perhaps it might be possible to "trade" knowledge of energy for knowledge of time, but in this case the time would be the delay time between the two paths of the gravitational lens images, A and B. The uncertainty in energy then might be able to replace the hugely long fiber optics cable with, instead, a very narrow-band radio detector. It’s OK. Read on. I can hopefully explain what I mean. We have seen that we can trade knowledge of energy for knowledge of time (remember the balloon image in a previous article with "delta-E" written on one end and "delta-t" written on the other.) We also remember that if we can tell which path each photon traveled, we will not get an interference pattern but rather just a picture of a quasar at A and another (image of it) at B. To understand this "trade" then, let’s take just a bit closer look at what we mean by a narrow-band radio wave.
It is known, in the physics of electromagnetic waves, that longer waves have less energy than shorter waves. The blue light we see has more energy per photon than the red light we see. (This can be extended to lower energy infrared photons, and higher energy ultraviolet photons, or even to very low energy radio photons, and even very much higher energy x-ray photons.) In photography, using a filter on the camera lens can allow only blue light, or red light into the camera. Sunlight is usually a whole mixture of blues, greens, yellows, oranges, reds, and so on, and therefore also a mixture of photons of light of all kinds of energy, high and low. When one uses, say, a red filter, one is cutting out the higher energy blue photons from going into the camera, and so only detects the lower energy red light. The narrower the filter, the less range of energy is let into the camera.
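The wavelength-energy relation above can be put in numbers. A minimal sketch, using the standard relation E = hc/wavelength and textbook constants (my illustration, not a calculation from the essay):

```python
# Photon energy from wavelength: E = h*c / wavelength.
# Shorter wavelengths mean more energy per photon, so blue beats red.
H = 6.626e-34  # Planck's constant, joule-seconds
C = 2.998e8    # speed of light, meters per second

def photon_energy_joules(wavelength_m):
    """Energy of a single photon of the given wavelength, in joules."""
    return H * C / wavelength_m

blue = photon_energy_joules(450e-9)  # ~450 nm, blue light
red = photon_energy_joules(700e-9)   # ~700 nm, red light
print(blue > red)  # True: each blue photon carries more energy
```

The same ordering holds all the way from radio photons (lowest energy) up to x-rays (highest), which is why a narrow filter, optical or radio, is a constraint on photon energy.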
Similarly for radio detectors, if one has a broadband detector, one is letting in radio waves of all sorts of energies all at once. However, if one has a very narrow-band radio detector (such as those used in the search for extraterrestrial intelligent technology), one is highly constraining the range of energies being detected. Only the radio photons of a very narrow spread in energy are actually measured. Remembering the uncertainty principle for energy and time, we can recognize that narrow-band radio detectors thus represent a constraint on the value of the energy being measured. Now what about time, however? For that, let’s look at the crossing of the radio waves (which are just long-wavelength light) coming along paths A and/or B. We can only get an interference pattern if we cannot tell (or even potentially be able to tell) which path a radio photon took to reach our detector. But, if the difference in the travel time between paths A and B is long enough (this is called the "delay time" of the gravitational lens), then there is plenty of time to detect, for example, whether a flare went off at the quasar so that image A brightened, followed by image B some time (the delay time) later. This is actually how the delay time between gravitational lens paths is measured. Now the next sentence is the most important. If we use a narrow enough radio bandpass, we can constrain the energy to such a precise value that the time uncertainty becomes so large that it exceeds the actual delay time of the gravitational lens. In other words, we can constrain the energy (by using narrow-band radio detectors) so much that we exceed the ability -- even potentially -- to measure which path the photon travels, because our uncertainty in the arrival time of the photon is now larger (because of the uncertainty principle) than the actual delay time, or travel time difference, between paths A and B.
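That argument is easy to put in rough numbers. A sketch of the which-path criterion, using the order-of-magnitude relation delta-t ~ 1/delta-nu (my illustration; the exact numerical factor depends on how one defines the bandwidth):

```python
# The Fourier/uncertainty relation ties a detector's bandwidth to the
# minimum blur in a photon's arrival time: roughly delta_t ~ 1/delta_nu.
def arrival_time_uncertainty(bandwidth_hz):
    """Approximate arrival-time blur (seconds) for a given bandwidth (Hz)."""
    return 1.0 / bandwidth_hz

def which_path_knowable(bandwidth_hz, lens_delay_s):
    """Could timing, even in principle, reveal which path a photon took?"""
    return arrival_time_uncertainty(bandwidth_hz) < lens_delay_s

# A 0.01 Hz channel blurs arrival times by ~100 s; a 50 s lens delay
# then cannot be resolved, so interference remains possible.
print(which_path_knowable(0.01, 50.0))  # False: paths indistinguishable
print(which_path_knowable(10.0, 50.0))  # True: wide band betrays the path
```

Interference is possible only in the first case, where the timing blur swamps the lens delay and no flare-timing trick could ever identify the path.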
Thus we cannot tell along which path the photon traveled and so should get an interference pattern at the detectors. A very narrow-band (but real) radio detector then, can substitute for an unrealistically long fiber optics cable to get an interference pattern at the intersection of paths A and B.
So, how does one proceed to do this? We can start observing the gravitational lens using a radio telescope with very narrow-band detectors. We set the detectors on the narrowest band possible (let’s say one-hundredth of a Hertz, which means we know the frequency—and therefore energy—of the radio wave coming in to within one-hundredth of a cycle per second). We focus the two images of the quasar across each other and (if the delay time is not too long—no longer than about 100 seconds in this case) we will obtain an interference pattern. This means we cannot know which path the radio photons "took." (We also assume no detectable rapid fluctuations from the quasar for simplicity, although there are ways of dealing with this effect as well, using "choppers" in the path of the incoming light.) Now what happens if we increase the range of energies being detected (i.e., increase the bandpass of the radio detectors)? At first we may still get an interference pattern. But if we continue to increase the bandpass, at some point the interference pattern will disappear, and we shall simply get a (radio) picture of a quasar at location A, and another of its image at location B. The interference pattern will have disappeared at exactly the point where we could begin to tell which path the photons took. In other words, by allowing ourselves to become more and more ignorant of the energy of the radio waves arriving, we simultaneously allowed an increased knowledge (according to the uncertainty principle) of the time interval. And when we decreased our knowledge of the energy to the point where our uncertainty in the time interval dropped below the actual delay time between the light paths of the gravitational lens, we could (at least in principle) tell which path each photon took. Thus the uncertainty principle "kicks in" and says that one cannot know which path a photon took and still get a wave phenomenon (i.e., an interference pattern). One cannot have one’s photon and wave it too.
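The observing procedure just described can be caricatured in a few lines. This is a hypothetical sketch, again using delta-t ~ 1/bandwidth, with the 100-second delay time taken as the example value from the text:

```python
# Widen the bandpass step by step; fringes survive only while the timing
# blur (~1/bandwidth) still exceeds the gravitational-lens delay time.
def interference_visible(bandwidth_hz, lens_delay_s):
    """Fringes persist while arrival-time uncertainty exceeds the delay."""
    return (1.0 / bandwidth_hz) > lens_delay_s

LENS_DELAY_S = 100.0  # assumed delay between paths A and B, in seconds
for bw in (0.001, 0.005, 0.009, 0.02, 0.1):
    status = "fringes" if interference_visible(bw, LENS_DELAY_S) else "two images"
    print(f"{bw:6.3f} Hz -> {status}")
```

On this toy model the fringes vanish as the bandpass crosses one-hundredth of a Hertz, precisely where the photon paths first become knowable.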
Thus we may be able to use very narrow-band radio detectors to realize the delayed-choice (perhaps no longer just Gedanken) experiment proposed by Professor Wheeler. What is of interest in doing such an experiment? First, it may represent a possible way to directly measure delay times for gravitational lenses that don’t vary much in brightness, and such delay times can be used to measure the expansion rate of the universe (this parameter is called the "Hubble constant") directly. But more intriguing, perhaps, is that it can possibly provide a measure of the minimum time it takes for a wave to "become" a particle. If the quasar is one billion light years away (that’s about six billion trillion miles) and the interference pattern is being formed by a probability wave that is traveling along both paths A and B, then when one increases the bandpass (say, over one hour’s time) to the point where the wave becomes a particle (photon), one might be able to speak in terms of the wave "becoming" a particle at the minimum rate of a billion light years per hour. This rate is considered in most quantum physics formulations to be instantaneous, but one is reminded of Galileo and a colleague standing on opposite hillsides with lamps trying to measure the speed of light. When one opened the lampshade, as soon as the other saw it, he opened his lampshade, and so on, back and forth. They decided that the speed of light was either instantaneous or very, very fast. It turned out to be very, very fast (186,300 miles per second)—far too fast to measure with shaded lamps on nearby hills. So perhaps quantum astronomy may someday allow such a measurement of the speed of the wave-to-particle transition, if it is not instantaneous. What we have outlined here is just one experiment among many possible experiments that could be performed in what may be one of the most interesting new fields of the 21st Century, quantum astronomy.