The Wave-Particle Conundrum II: The Failure of Newton’s Clockwork Universe

The purpose of this essay is to review the wave-particle conundrum in an attempt to disentangle scientific truths from unvetted speculations and wild unscientific claims.

We will try to show that the current understanding of theoretical Physics does not warrant the kind of borderline conclusions, expressed by some, about the demise of objective reality and causality.

In part I of the essay, “The Wave-Particle Conundrum I: The Quantum Controversy” 1, we presented recent double-slit experiments which demonstrated the wave-particle behavior exhibited by photons, electrons and other particles 2. These experiments, among many others, established that wave-particle duality is a characteristic exhibited by all the “stuff” that makes up Reality.

The “wave-particle duality” is the cornerstone of the theory of Quantum Mechanics; consequently, we briefly presented the basic elements of the controversy over the interpretation of the duality and of the theory of Quantum Mechanics.

In this second part we shall analyze the classical concepts of particle and wave, and describe the scientific landscape on the eve of the Quantum Mechanical upheaval, in particular the impact of the relativistic shake-up on classical Physics.

Outline

  1. The Newtonian Universe
  2. The classical concept of particle
  3. Wave motion
  4. The wave model of light
  5. Superposition of Light waves
  6. Diffraction/Interference with “photons”
  7. Standing waves: Quantization of energy
  8. Failure of Newton’s clockwork Universe
  9. The impact of the theory of Relativity
  10. Summary

1- The Newtonian Universe

“All the world’s a stage”

In order to gain a proper understanding of the scope of the problem posed by the wave-particle duality, we need to step back into the “classical” Universe constructed by Physics before the advent of the modern theories of Quantum Mechanics and Relativity early in the 20th century.

In 1687 Newton published “The Mathematical Principles of Natural Philosophy”, in which he laid the groundwork for the new science of Mechanics. 3

From the outset, and in 8 definitions, 3 axioms, 6 corollaries and a “Scholium”, Newton defined systematically the various elements constituting his mechanical model of the Universe: time, space, mass, inertia, motion, particles and forces.

Newton’s methodology was a reflection of his empiricist views which he expounded in “the Rules of reasoning in philosophy” at the beginning of Book III and in the “General Scholium” at its end. 

In Rule III, Newton stresses the centrality of experimental evidence:

“For since the qualities of bodies are only known to us by experiments, we are to hold for universal all such as universally agree with experiments; and such as are not liable to diminution can never be quite taken away. We are certainly not to relinquish the evidence of experiments for the sake of dreams and vain fictions of our own devising.”

In Rule IV he lays out his experimental methodology:

“In experimental philosophy we are to look upon propositions collected by general induction from phænomena as accurately or very nearly true.”

In the General Scholium, Newton adds: 

“In this philosophy particular propositions are inferred from the phænomena, and afterwards rendered general by induction. Thus it was that the impenetrability, the mobility, and the impulsive force of bodies, and the laws of motion and of gravitation, were discovered.”

Newton chose to adopt the commonly accepted ideas about the phenomena constituting objective reality and use them to construct his model of the Universe. He extracted his definitions and axioms from observations and experiments, distilling the findings of his predecessors and contemporaries and ignoring the metaphysical musings of some. 4

Concerning space and time he states in the Scholium at the beginning of Book I:

 “Absolute space, in its own nature, without regard to anything external, remains always similar and immovable”.

 “Absolute, true, and mathematical time, of itself, and from its own nature flows equably without regard to anything external, and by another name is called duration”.

The Universe described by Newton is therefore a rigid box with rigid meter sticks and synchronized identical clocks ticking away the same seconds everywhere and fitted with Descartes’ three rigid coordinate axes.

Each point P in this “mathematical” Universe or Math-verse is specified by four numbers P (x, y, z, t) and represents an observable “event”.

An event is by definition something that happens at a particular point in space and at a particular instant of time, measured w.r.t. a given frame of reference.

In this mechanical view of the Universe, a natural phenomenon is modeled as a series of successive events.

In this manner, the scene is set for the study of natural phenomena; natural phenomena can be dissected, analyzed, classified, modeled and, more importantly, explained.

The box is a stage on (or in) which the diverse actors, i.e. particles, waves, forces, planets, moons, stars, comets, asteroids and whatever else, carry on their choreographed performance.

This mechanical Universe behaves like an intricate clockwork mechanism with “well-defined” elements obeying well-defined deterministic laws.

2- The classical concept of particle

One of the key concepts in Newtonian Mechanics (and classical physics, before Planck’s quantum hypothesis, in 1900) is the concept of material particle. The particle model is the first example of quantization of a physical feature of the Universe: the quantization of matter.

Newtonian Mechanics deals with idealized particles (point-like masses). Solid bodies are replaced by their center of mass, to which all the mass is assigned.

Note that the original Newtonian model does not provide tools for the study of the rotation of extended rigid bodies. 

The laws of rotational Mechanics that are taught in Mechanics courses and textbooks, and the concepts of angular momentum, moment of inertia, moment of a force and torque were discovered by Euler half a century later and integrated in what became “Classical Mechanics”. 5

Newton explains his concept of “particle” in Rule III of “The rules of reasoning” at the beginning of Book III. 6

He states:

“The extension, hardness, impenetrability, mobility, and vis inertiæ of the whole, result from the extension, hardness, impenetrability, mobility, and vires inertiæ of the parts;

And thence we conclude the least particles of all bodies to be also all extended, and hard and impenetrable, and moveable, and endowed with their proper vires inertiæ.

And this is the foundation of all philosophy.”

Newton simply deduced the properties of the microscopic parts (particles) from the properties of macroscopic bodies, obtained by means of our senses.

The classical particle is therefore a very small chunk of matter, a miniature billiard ball, not unlike the hard spheres used by Huygens in his percussion experiments.

The classical particle has no internal structure. It possesses a mass (m), a finite volume (V), a precise location in space, motion (momentum p = mv), kinetic energy (“vis viva”, added later by Leibniz), and in addition: inertia, hardness, and impenetrability.

In Newton’s mechanical model, inertia (vis inertiæ) and mass were two related (proportional) but distinct properties of matter. The vis inertiæ was presented as an “impulsive force” by which the body resists change in its own motion, or acts on other bodies to change their motion.

This “impulsive force of the body” has its origin in the inertia of matter. It can be measured, like any other Newtonian force by the “alteration of motion” of the other body participating in the interaction.

The property of hardness was required to explain elastic collisions between extended bodies in percussion experiments. Bernoulli (and later Maxwell and Boltzmann) used this property of the particles in the molecular kinetic model of gases and in Statistical Mechanics 7.

The concept also proved useful for chemists, such as Lavoisier and Dalton, who used it to describe the basic constituents of matter (atoms) taking part in chemical reactions.

On the other hand, the property of impenetrability signifies that no two particles can occupy the same space, at the same time, i.e. each particle occupies its own localized space. The classical particles are separate, countable and distinguishable.

They obey Newton’s laws and therefore possess a well-defined trajectory, meaning that their positions and velocities can be determined univocally at each instant of time, provided the acting forces and the initial conditions are known.

  • Inadequacy of the Newtonian particle model

Alas, not so impenetrable

During the 19th century a number of discoveries revealed that the classical model of “hard, indivisible and impenetrable particles”, was too simple.

Chemists and thermodynamicists began to distinguish between particles made of single elements i.e. atoms on the one hand, and molecules made of more than one atom, on the other hand.

An additional “property” of matter was identified in the mid-18th century: electric charge, which scientists had to integrate into their model of matter while attempting to imagine how it was bound to the mass of the particle.

The electric charge is another example of quantization in Nature. Electric charges encountered in nature are always multiples of the elementary charge e, the charge of the electron being equal to −e (e = 1.602×10⁻¹⁹ C, or coulombs).

These electric charges were found to be responsible for the electrical and magnetic forces which held the atoms together and thus imparted the molecules with additional internal degrees of freedom of rotation and vibration.

This more complex microscopic model of matter was essential to explain a variety of macroscopic mechanical, electrical and chemical properties and phenomena such as changes of state, hardness, viscosity, conductivity, chemical structures and reactions, etc.

As things progressed, the atoms were found to have internal structure as well: each atomic element consists of negatively charged electrons (e⁻), positively charged protons (p⁺) and neutral neutrons.

It was also discovered that atoms were mostly empty space (apart from the force fields holding them together), with most of their mass (~99.9%), consisting of protons and neutrons, concentrated in a central nucleus occupying a vanishingly small fraction of the atomic volume (~10⁻¹⁴).

The three subatomic particles, electron, proton and neutron were identified as the “fundamental elementary particles” of matter.

But not for long!

Neutrons and protons turned out to be made up of even “smaller” particles termed quarks; however, empirical evidence for the existence of the elusive quarks as separate entities could not be obtained. Quarks play a central role in the Standard model and their existence and properties are inferred from high energy collision experiments in particle accelerators.

Additional particles of one kind or another started popping up in high-energy experiments carried out with ever more powerful particle accelerators.

Furthermore, the particles were found to possess an additional degree of freedom, that of spin; the Stern-Gerlach experiment in 1922 demonstrated that electrons possessed an intrinsic angular momentum, termed spin, whose projection could take only two values: +ℏ/2 and −ℏ/2. 8a

This property would eventually cause even bigger conceptual headaches:

It was discovered later that all “elementary particles” possessed a spin.

In this regard, the elementary particles fall in two classes: 8b

  1. Fermions, such as electrons, which possess a spin equal to ℏ/2. They are termed material particles. (48 in number, including the antiparticles)
  2. Bosons, such as photons, which possess a spin equal to ℏ or 0. Bosons are force carriers mediating the fundamental interactions; the photon, for example, mediates the electromagnetic force. (12 in number, plus the Higgs boson)

More importantly, identical particles were found to be indistinguishable, in complete contrast with classical particles. This property of indiscernibility was borne out in the statistical analysis of the collective behavior of identical particles, e.g. electrons in crystals, photons in cavities, etc.

Electrons and other fermions obey Fermi-Dirac statistics (hence “fermions”) and Pauli’s exclusion principle, which forbids two identical fermions from occupying the same state. The structure of matter and the whole electronics and computer industries depend on this property of fermions.

Photons and other bosons obey Bose-Einstein statistics, which allows an unlimited number of particles of the same kind to be in the same state. Superconductivity, superfluidity and laser emission, to name but a few, are among the newly discovered and harnessed phenomena resulting from this unique property of bosons.

Even the material consistency of the material particles proved to be as ephemeral as other “intangible” radiation phenomena when Einstein’s theory of Relativity showed, with the famous E = mc², that matter was another form of energy.

Einstein’s relationship (E = mc²) signifies that it is possible to convert matter entirely into energy. It also means that energy can be converted into “matter” (m = E/c²).

The first process is termed annihilation, the second, creation. The creation (from photons) and annihilation (into photons) of material particles show that the distinction between mass and energy has become “blurred”. 9

Both processes are also thought to have occurred during the early stages of the universe. The particle creation process was dominant during the first 100 seconds of the Universe, and is thought to be responsible for the existence of the matter of the Universe.

The classical concept of particle was undermined further when de Broglie proposed, in 1923, that all material particles possessed a wavelike nature, and postulated that the wavelength of a particle was inversely proportional to its linear momentum (λ = h/p), thus generalizing the Planck-Einstein hypothesis to all particles.

De Broglie’s hypothesis was confirmed experimentally by Davisson and Germer in 1927 in their famous electron diffraction experiment 10.

They bombarded the surface of a nickel crystal with low-energy electrons (E = 54 eV). The electrons backscattered off the crystal formed concentric rings with a wide bright center, akin to Newton’s rings obtained with electromagnetic waves.

Electrons behaved like waves. The measured wavelength (1.65 Å) compared well with the theoretical value (1.68 Å) calculated from de Broglie’s formula.
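
As a numerical check of de Broglie’s formula, the short Python sketch below computes λ = h/p for a 54 eV electron, using the non-relativistic momentum p = √(2mE) and standard values of the constants; the result, about 1.67 Å, sits between the measured 1.65 Å and the theoretical 1.68 Å quoted above.

```python
import math

h = 6.626e-34     # Planck's constant (J.s)
m_e = 9.109e-31   # electron rest mass (kg)
eV = 1.602e-19    # one electron-volt in joules

E = 54 * eV                  # electron kinetic energy in the Davisson-Germer experiment
p = math.sqrt(2 * m_e * E)   # non-relativistic momentum
lam = h / p                  # de Broglie wavelength, lambda = h/p

print(f"lambda = {lam * 1e10:.2f} angstroms")   # ~1.67 A
```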

This wavelike behavior of all particles, material or otherwise, proved to be by far the most challenging discovery; as noted earlier, wave-particle duality remains one of the most enigmatic phenomena in modern Physics.

3- Wave motion

What do we mean by wavelike behavior?

Waves, in classical Mechanics, are disturbances that propagate in an elastic medium. The disturbance is produced by a mechanical agent that causes the particles of the medium to oscillate about their mean position.  

The disturbance may be natural, due to a sudden change occurring in the medium during an interaction or, induced artificially for the purpose of sending energy and/or information (signals) from one place to another.

It may consist of a short pulse or of a continuous harmonic (sinusoidal) vibration.

Once the disturbance is produced at a given point of the medium, it propagates in all directions as a traveling wave: the disturbance is transmitted progressively from one oscillator in the medium to the adjacent ones.

Mechanical waves such as acoustic waves, earthquakes (seismic waves) and water surface waves are produced by oscillations of the particles of the material media.

Electromagnetic waves are produced by oscillating electric charges. They can propagate in free space (vacuum) as well as in dielectric (transparent) material media.

If the vibration of the source of the wave is sinusoidal (harmonic motion) with a given frequency (f), the wave is also sinusoidal with the same frequency (f).

The frequency (f) is the fundamental quantity that defines the sinusoidal wave. It does not depend on the medium of propagation. However, it depends on the relative velocity of the source and the receiver, a phenomenon termed the Doppler effect.

Another fundamental parameter is the velocity of propagation of the wave front or phase velocity c. The phase velocity and therefore the wavelength (λ = c / f) depend on the medium of propagation.

For mechanical waves, the magnitude of the phase velocity is a function of two properties of the medium: a) elasticity, measured in N/m², and b) inertia, measured in terms of the density of the material: v = √(elasticity/inertia).

For example: in solids, v = √(Young’s modulus/density).

For a wave on a string, v = √(string tension/linear density).
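
As a worked example of this relation, the sketch below computes the phase velocity of a wave on a string; the tension and linear density are illustrative values, not taken from any particular experiment.

```python
import math

# Illustrative values, not from any specific experiment:
T = 100.0    # string tension (N)
mu = 0.01    # linear mass density (kg/m)

v = math.sqrt(T / mu)       # phase velocity = (elasticity/inertia)^(1/2)
print(f"v = {v:.1f} m/s")   # 100.0 m/s
```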

  • The wave equation

Wave motion is modeled mathematically by a second-order homogeneous partial differential equation in the variables r and t. The 1-D form of the wave equation was discovered and solved by d’Alembert in the mid-18th century. It is expressed as follows:

∂²u/∂t² = c² (∂²u/∂x²)                           [1]

where u is the displacement from equilibrium and c the phase velocity, or the velocity of the wave front.

The solution of the equation is a wave function which varies sinusoidally in time and in space, expressed in sine form as follows:

            u(x, t) = uₘ sin(kₓx − ωt)                 [2]

where uₘ is the amplitude (maximum displacement);

kₓ, the x-component of the wave vector (kₓ = 2π/λ);

ω, the angular frequency (ω = 2πf); c = ω/kₓ.

The phase angle at a point (x, t) is given by φ(x, t) = kₓx − ωt.

Note that the full solution also includes a term φ₀ in the argument, representing the initial phase angle of the wave at t = 0 and x = 0.

This term is usually neglected unless the need arises to compare two waves, as in phenomena involving superposition of waves.

The phase angle at a point (x, t) is then given by φ(x, t) = kₓx − ωt + φ₀.

The complex form of the wave function is easier to manipulate in mathematical calculations:

u(x, t) = uₘ exp[−i(kₓx − ωt)]         [3]

(For simplicity, we will use k instead of kₓ from now on, unless noted otherwise.)

The wave function [3] describes a travelling wave moving in the positive OX direction. It applies to all types of waves whether mechanical, electromagnetic or others.
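
To make the connection between equations [1] and [2] concrete, the following sketch verifies numerically, by finite differences, that the travelling wave of equation [2] satisfies the wave equation [1]; all parameter values are arbitrary.

```python
import numpy as np

um, lam, c = 1.0, 2.0, 10.0   # amplitude, wavelength, phase velocity (arbitrary units)
k = 2 * np.pi / lam           # wave number, k = 2*pi/lambda
w = c * k                     # angular frequency, c = omega/k

def u(x, t):
    """Travelling wave of equation [2]."""
    return um * np.sin(k * x - w * t)

# Second derivatives by central finite differences at a sample point:
x0, t0, h = 0.3, 0.7, 1e-5
d2u_dt2 = (u(x0, t0 + h) - 2 * u(x0, t0) + u(x0, t0 - h)) / h**2
d2u_dx2 = (u(x0 + h, t0) - 2 * u(x0, t0) + u(x0 - h, t0)) / h**2

print(d2u_dt2, c**2 * d2u_dx2)   # both sides of equation [1] agree to discretization error
```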

Traveling waves transport the energy imparted, by the disturbance, to the medium.

For a mechanical wave given by equation [2], the mechanical energy imparted by the disturbance is calculated from the total energy of the source oscillator, assuming an ideally elastic medium where no losses occur.

The wave energy calculated over a single cycle is given by: 

Ecycle = ½ m.ω2. (um)2.λ                                                                      

or                                                                     [4]

            Ecycle = 2 π2 m.c.(um)2.f          

The wave energy is proportional to the square of the amplitude of oscillation uₘ.

The energy also depends on the frequency of oscillation f (ω = 2πf, c = λf).

This last fact is usually skipped over because the frequency of the wave remains, in general, constant during propagation in material media (except for the Doppler effect), while the amplitude tends to decrease because of losses in the medium or upon reflection or refraction.

Even in a “lossless” medium, the amplitude of circular and spherical waves decreases with distance: the intensity of a spherical wave obeys an inverse-square law (1/r²), so its amplitude falls off as 1/r.

However, the dependence of energy on the frequency (f) will become pertinent when we study the phenomenon of resonance in stationary waves.

  • The Wave-particle dichotomy

Unlike classical particles which are by definition impenetrable and therefore may not occupy the same point in space and time, waves may occupy the same space at the same time. This phenomenon is termed wave superposition.

When two waves are superimposed in a region of space, their amplitudes add algebraically, meaning that they may cancel each other at some points. This leads to observable phenomena such as diffraction, interference and standing waves.

The superposition is only temporary; it occurs while the two waves share a common space. Subsequently, the two waves continue their progression unaltered. During the superposition the waves do not “interact”; they do not exchange energy or information.

Note: This is strictly true only in the linear regime, i.e. for small wave amplitudes. For example, at high laser light intensities and in certain materials, non-linear effects occur, such as harmonic generation, 3- and 4-wave mixing, etc.

The phenomenon of superposition is unique to waves: it distinguishes waves radically from particles.

In addition, unlike particles, which are localized, waves, whether travelling or stationary, are extended in space, i.e. they are not localized, and the degree of this extension depends on their wavelengths.

4- The wave model of light

In 1865, Maxwell published his electromagnetic field theory.

According to Maxwell’s theory, the electromagnetic state of any point in free space is characterized at each instant of time by two mutually perpendicular vectors, E (termed the electric field) and H (termed the magnetic field), whose values are determined by the distribution of all charges and currents in the Universe.

The electric field vector E and the magnetic field vector H are two interdependent quantities linked by four partial differential equations. 11

However, when Maxwell combined the four equations, he obtained a single wave equation, identical in form to equation [1], describing an electromagnetic wave travelling in vacuum at the speed of light c.

The 1-D electromagnetic wave propagating in the OX direction is given by:

∂²u/∂t² = c² (∂²u/∂x²)                           [5]

where c = 1/√(ε₀μ₀),

ε₀ = (36π×10⁹)⁻¹ F·m⁻¹, the dielectric permittivity of free space;

μ₀ = 4π×10⁻⁷ H·m⁻¹, the magnetic permeability of free space.

The wave function u(x, t) represents in this case the amplitude of the electric field E or of the magnetic field H (B = μ₀H). Thus, the solution of equation [5] for E and H gives:

E(x, t) = Eₘ exp[−i(kx − ωt)]             [6]

and,

            H(x, t) = Hₘ exp[−i(kx − ωt)]              [7]

Eₘ and Hₘ are the (maximum) amplitudes of the electric and magnetic fields, respectively.

Figure 2 below shows the transverse oscillation of the electric and magnetic fields, with the wave propagating along the x-axis, perpendicular to the plane of oscillation.

The E-field and H-field oscillate in phase, as required by equations [6] and [7].

The speed of propagation c is a function of the two universal constants ε0 and μ0, which describe the electrical and magnetic properties of free space.

The value of the speed of light c can, therefore, be obtained by measuring ε₀ and μ₀ using purely electrical (capacitor) and magnetic (coil) experiments: c = 3×10⁸ m/s. 12
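
As a quick illustration, the sketch below recovers c from the two constants, using the approximate value of ε₀ quoted above.

```python
import math

eps0 = 1 / (36 * math.pi * 1e9)   # permittivity of free space (F/m), ~8.84e-12
mu0 = 4 * math.pi * 1e-7          # permeability of free space (H/m)

c = 1 / math.sqrt(eps0 * mu0)     # c = 1/sqrt(eps0 * mu0)
print(f"c = {c:.3e} m/s")         # 3.000e+08 m/s
```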

Note that Maxwell’s theory fixed the speed of light in vacuum at the value c, defining it as a universal constant, long before Einstein’s postulation of the constancy of the speed of light in the Theory of Relativity (1905). The constancy of the speed of light in vacuum appears as a natural consequence of Maxwell’s theory; the theory of Relativity promoted it to postulate status.

This discovery led Maxwell to propose that light was a wave motion produced by electric charges oscillating at very high frequencies, i.e. a traveling electromagnetic disturbance. For oscillation frequencies between 4.3×10¹⁴ and 7.5×10¹⁴ Hz (wavelength range 400-700 nm) the waves become detectable by the normal human eye; we term them visible light.

Electromagnetic waves are classified according to their frequencies or vacuum wavelengths. The full range of observed frequencies constitutes the electromagnetic spectrum.13

The E-M wave is transverse: The electric field E and magnetic field H vectors are both perpendicular to the direction of propagation of the wave defined by the wave propagation vector k. This is shown in figure 2.

  • Electromagnetic energy

The E-M wave also transports energy.

The (instantaneous) energy transported by the wave per second through a unit area (1 m²) is given by the cross product of the instantaneous electric field E(x, t) and the instantaneous magnetic field H(x, t):

            S(x, t) = E(x, t) × H(x, t)                                                         [8]

The vector S is termed Poynting vector, and represents the energy flux density carried by the electromagnetic wave. In vacuum and in isotropic media the Poynting vector S is collinear with the wave propagation vector k (Figure 2).

S is not a directly measurable quantity.

However, the electromagnetic intensity or radiance I (unit: W·m⁻²) is a measurable quantity, given by the average of the magnitude of S over one cycle, ⟨S(x, t)⟩.

Substituting from equations [6] and [7], and using the real parts of E and H, we have:

⟨S(x, t)⟩ = EₘHₘ⟨cos²(kx − ωt)⟩           [9]

Hence, the electromagnetic intensity I is given by:

            I = ½ EₘHₘ                                              [10]

We introduce the impedance of free space:

Z₀ = Eₘ/Hₘ = √(μ₀/ε₀)

Finally we have:

I = ½ ε₀cEₘ² = Eₘ²/2Z₀             [11]

Thus, the transported electromagnetic intensity is proportional to the square of the electric field amplitude Eₘ (or magnetic field amplitude Hₘ):

            I ∝ Eₘ²                                                   [12]

Note, however, that the average was calculated over one cycle, which means that the energy is “smeared out” over the wavelength!

The electromagnetic wave also carries a linear momentum p, whose magnitude is given by:

            p = I/c = ½ ε₀Eₘ²                                 [13]
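
As a worked example of equations [11] and [13], the sketch below computes the intensity and the momentum flux carried by a wave with an illustrative field amplitude of 1000 V/m.

```python
import math

eps0 = 8.854e-12   # permittivity of free space (F/m)
c = 2.998e8        # speed of light (m/s)
Em = 1000.0        # electric field amplitude (V/m) -- illustrative value

I = 0.5 * eps0 * c * Em**2   # intensity, equation [11]
p = I / c                    # momentum flux, equation [13]
print(f"I = {I:.0f} W/m^2, p = {p:.2e} N/m^2")   # ~1327 W/m^2
```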

  • Insufficiency of the wave model of Light

In 1887, Hertz demonstrated the “reality” of these EM waves. Hertz showed that these waves travelled at the speed of light c and possessed all the wave properties, i.e. reflection, refraction, diffraction and interference, in addition to polarization.

The wave nature of light predicted by Maxwell “seemed” firmly established.

Not for long, again!

As is often the case with scientific findings, this state of affairs turned out to be only temporary; another episode in the “never-ending scientific show”.

In a surprising twist, during the same experiments that vindicated Maxwell’s theory, Hertz discovered the “photoelectric” effect.

This discovery showed that light behaved somewhat like “tiny billiard balls” (particles) knocking electrons off metal surfaces. 

In his experiments (1887), Hertz used spark gaps placed some distance apart (about 50 feet or more) to emit and detect the EM waves. In addition to the EM wave at radio frequencies, the emitter produced a spark of light (hence the name spark gap) due to air ionization. Likewise, the detector spark gap produced a discharge spark whenever the radio wave reached it.

In a series of controlled observations, Hertz discovered that it was easier to produce the discharge at the detector whenever it was illuminated by UV light from the emitter spark.

This was the first experimental observation of the photoelectric effect. The UV light wave produced by the emitter spark gap was acting like a trigger ionizing the air gap of the detector, thus facilitating the discharge.

Hertz’ observations were confirmed a year later (1888) by Hallwachs who showed that the negatively charged plate of an electrometer was discharged by incident UV light, while a positively charged plate was not affected.

The “photoelectric” phenomenon was systematically investigated by Lenard between 1899 and 1902 in one of the most ingenious and thorough series of experiments.

Lenard’s results showed that electrons were ejected from the cathode’s metallic surface only when the incident light frequency reached a threshold frequency (fth) whose value depended on the type of material; no electron could be ejected for frequencies below fth, no matter how high the light intensity.

Lenard also showed that the kinetic energy of the ejected electrons for a given material was proportional to the difference between the light frequency (f) and the threshold frequency (fth). 14

The phenomenon could not be explained with the wave model of light.

In 1905, Einstein interpreted the photoelectric effect using a quantum model of light in which light interacted with matter as if it were composed of a stream of “particles” (quanta, photons), each with energy ε = hf = ℏω, colliding with the surface electrons.

The photoelectric effect provided a direct method for the measurement of Planck’s constant. The experimental set-up is now available in teaching laboratories in schools and colleges. The measurement and data analysis are straightforward and give students a glimpse into the quantum world and the wave-particle duality.
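
The sketch below illustrates how such a teaching measurement extracts h: Einstein’s relation eV_stop = hf − W is linear in the frequency f, so a linear fit of the stopping potential against frequency yields h/e as the slope. The data points here are hypothetical, generated only for the purpose of the illustration.

```python
import numpy as np

e = 1.602e-19   # elementary charge (C)

# Hypothetical (frequency, stopping potential) pairs such as a teaching
# set-up might produce; generated from eV_stop = hf - W with W = 2.3 eV:
f = np.array([6.0, 7.0, 8.0, 9.0]) * 1e14   # light frequency (Hz)
V_stop = (6.626e-34 * f) / e - 2.3          # stopping potential (V)

# e*V_stop = h*f - W is linear in f; the slope of V_stop vs f gives h/e:
slope, intercept = np.polyfit(f, V_stop, 1)
print(f"h = {slope * e:.3e} J.s, work function = {-intercept:.2f} eV")
```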

The importance of the photoelectric phenomenon and its applications in today’s society cannot be overestimated. To mention but a few:

Photovoltaic cells and panels are central in clean energy programs to reduce the reliance on fossil fuels.

Photodiodes and phototransistors are integrated in home appliances, cars, toys and every imaginable gadget.

High-resolution, high-definition cameras using Charge-Coupled Devices (CCD) have become essential in the modern communication-information-entertainment industry. They are, of course, finding more and more areas of use where real-time monitoring is required: surveillance, medical diagnostics, scientific research, etc.

More importantly, the photoelectric effect was historically the first direct experimental affirmation of wave-particle duality.

5- Superposition of Light waves

The fundamental features of the theory of Quantum Mechanics are a direct consequence of the wavelike behavior of material particles.

Heisenberg’s Indeterminacy Principle (HIP), the quantization of energy, the probability interpretation of the wave function, and even the Principle of Superposition were already present in the behavior of classical waves and are not specific to Quantum Mechanics.

The experiments of single-slit diffraction and double-slit interference of light provide a visual demonstration of simultaneous diffraction and interference, two fundamental features of wavelike behavior, as shown in Figure 1 at the beginning of the article.

The top image in Figure 1 shows the single-slit diffraction pattern produced by a red He-Ne laser beam falling on a narrow slit. The bottom image shows the double-slit interference pattern produced by the same experimental arrangement with the same geometry.

In the double-slit experiment, diffraction and interference occur simultaneously. The interference fringes are modulated by the wider diffraction fringes: the observed pattern is the product of the pure two-slit interference pattern and the single-slit diffraction envelope.

  • Modeling Single slit diffraction

Single-slit diffraction (or interference) of light in the Fraunhofer, or far-field, approximation can be modeled mathematically by means of the Fourier transform.

The Fourier transform theorem states the following:

Any integrable function of the spatial coordinate r(x, y, z) can be expressed as an infinite sum of sinusoidal functions of the wave vector k. 15a

Of particular interest for our discussion here is the 1-D Fourier transform in the spatial coordinate x, which reduces to the following:

f(x) = A ∫R F(k) exp(ikx) dk                                   [14]

and

F(k) = A ∫R f(x) exp(−ikx) dx                                  [15]

where k is the 1-D wave vector in the OX direction, k = 2π/λ;

A is the 1-D normalization constant: A = 1/√(2π).

Consider a sine wave that is restricted, at a given instant of time, to a region of space of width (a); the wave is expressed mathematically by the function:

f(x) = A·exp(ik₀x)      for x ∈ [−a/2, +a/2]

and                                                                              [16]

f(x) = 0            elsewhere

Equation [16] represents an electromagnetic plane wave with a propagation vector k₀ in the positive OX direction, incident normally on a screen containing a single rectangular slit of width (a), as shown in Figure 3 below. At the screen, the wave is confined to the interval x ∈ [−a/2, +a/2].

The Fourier transform F(k) of the wave function f(x) represents the diffracted wave after passage through the slit.

Substituting equation [16] in equation [15], we have:

F(k) = A² ∫D exp[−i(k − k₀)x] dx                         [17]

with D = [−a/2, +a/2].

The integration gives:

F(k) = B [sin((k − k₀)a/2)] / [(k − k₀)a/2]

or                                                                                [18]

F(k) = B sinc[(k − k₀)a/2]

Where, the symbol “sinc” represents the cardinal sine function. 16

The intersections of F(k) with the k-axis are given by:

sin[(k − k₀)a/2] = 0

or

kₙ = k₀ ± 2nπ/a,  with n ∈ N.                      [19]

The diffraction/interference pattern is modeled by the function F²(k), the square of the Fourier transform of the waveform f(x):

F²(k) = B² sinc²[(k − k₀)a/2]                                       [20]

where B² = a/2π is a normalization factor. 17

In the case of visible light, a series of interference fringes is observed on the screen, as shown in Figure 3 (right) and in Figure 1 (top).

The minima correspond to the discrete values of k given by equation [19].

The central fringe contains about 90% of the pattern intensity. The width of the central fringe may be taken as an estimate of the spread of the k-values, or the uncertainty Δk on the value of k:

Δk = k₁ − k₀ = 2π/a                                      [21]

The uncertainty Δk on the k-value is inversely proportional to the slit width a. When the slit width (a) is decreased, the central fringe and the diffraction pattern become wider, and vice-versa.

The Heisenberg Indeterminacy Principle has its roots in the Fourier theorem.
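
The following sketch evaluates the pattern of equation [20] and the fringe half-width of equation [21] for an illustrative slit width; note that numpy’s sinc is the normalized form, sin(πx)/(πx), so the argument must be divided by π to match the text’s convention.

```python
import numpy as np

a = 1.0e-4   # slit width (m) -- illustrative
k0 = 0.0     # incident wave taken along the axis for simplicity

# numpy's sinc(x) = sin(pi*x)/(pi*x), so the text's sinc((k-k0)*a/2)
# becomes np.sinc((k - k0)*a/(2*np.pi)):
k = np.linspace(-3e5, 3e5, 20001)
F2 = np.sinc((k - k0) * a / (2 * np.pi))**2   # pattern of equation [20], up to B^2

dk = 2 * np.pi / a                            # half-width of the central fringe, eq. [21]
print(f"delta k = {dk:.3e} 1/m")              # ~6.28e4; halving a doubles delta k
print(f"F2 at first zero: {np.sinc(dk * a / (2 * np.pi))**2:.1e}")   # ~0
```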

  • Probability distribution of the wave-vector k:

In the diffraction pattern shown in Figure 3, the light intensity falling on the screen along the OX direction was modeled by the function F²(k), the square of the Fourier transform of the waveform f(x), given by equation [20].

Now, since F²(k) is a normalized, convergent, continuous function, it may be interpreted as a probability density distribution of the random variable k, or P(k).

Therefore, the light intensity distribution on the screen I(k) is proportional to the probability density P(k) as follows:

I(k) ∝ F²(k) = P(k)                                                       [22]

Let’s call I₀ the total light intensity falling on the screen; the probability density distribution can then be written as follows:

P(k) = I(k)/I₀ = E²(k)/E₀²

And from equation [12], we deduce:

P(k) = E²(k)/E₀² = B² sinc²[(k − k₀)a/2]                    [23]

This means that the normalized magnitude squared of the electric field amplitude, E²(k), represents the probability that the wave vector takes a value between k and k + dk.

The electromagnetic wave intensity distribution on the screen, I(k), can thus be interpreted as the probability density distribution P(k) of the wave vector component k. The electric field vector E (or magnetic field H) acts as a probability amplitude.

  • Double slit wave interference

Classically, when two waves ψ₁(r, t) and ψ₂(r, t) overlap in a region of space at a given time, the state of the medium is described by the superposition of the two wave functions.

Single-slit diffraction is a special case of wave interference, in which the wave interferes with itself. The wave front at the slit acts as a source of multiple Huygens-type wavelets, which interfere at the screen, producing the observed diffraction pattern.

The second example of wave superposition is the phenomenon of double slit interference. Interference occurs when two coherent waves of same frequency superpose in a region of space. Two waves are said to be coherent when they possess a constant phase relation.

Coherent waves may be obtained by either dividing the amplitude (with a beam splitter for example) or dividing the wave front (double slit experiment) of the initial wave into two separate waves.

The schematic of the double-slit interference experiment, shown in Figure 5, is similar to the set-up of the famous Young experiment that settled once and for all (or so it was thought at the time) the wave nature of light.

The two waves resulting from the division are coherent since they originate from the same wave, and therefore have a constant phase difference at the slits.

The detailed calculation is omitted to lighten the text (see Appendix 1).

The light intensity on the screen is modeled by the equation:

I(δ) = Itotal (1 + cos δ)                          [24]

where δ is the phase difference between the two interfering waves at the point of interference on the screen, and Itotal is the light intensity exiting the slits.

As δ varies, the light intensity I on the screen varies sinusoidally, with maxima and minima given, respectively, by:

            δ = 2nπ,  n ∈ Z

and                                                                 [25]

            δ = (2n+1)π,  n ∈ Z

The phase difference δ is expressed in terms of the path difference, Δ = (d₂ − d₁), by:

            δ = 2πΔ/λ                                         [26]

Thus, the position of the fringes is given, in terms of the path difference Δ between the two waves, by:

            Δ = nλ,    for the bright fringes

and                                                                 [27]

            Δ = (2n+1)λ/2,  for the dark fringes

The phase difference δ is also expressed in terms of the position (y) on the screen, where a is the slit separation and d the distance from the slits to the screen:

δ = 2πay/λd                                     [28]
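
As a worked example of equations [24] and [28], the sketch below computes the fringe spacing λd/a for an illustrative double-slit geometry illuminated by a He-Ne laser.

```python
import numpy as np

lam = 633e-9   # He-Ne laser wavelength (m)
a = 0.25e-3    # slit separation (m) -- illustrative geometry
d = 1.0        # slit-to-screen distance (m)

y = np.linspace(-5e-3, 5e-3, 11)        # positions on the screen (m)
delta = 2 * np.pi * a * y / (lam * d)   # phase difference, equation [28]
I_rel = 1 + np.cos(delta)               # relative intensity, equation [24]

spacing = lam * d / a                   # distance between adjacent bright fringes
print(f"fringe spacing = {spacing * 1e3:.2f} mm")   # ~2.53 mm
```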

  • Probability distribution of the phase difference (δ)

Similarly to single-slit diffraction, the light intensity may be regarded as a probability density distribution P(δ) of the phase difference (δ), taken as a random variable.

From equation [24]:

I(δ)/Itotal = (1 + cos δ) = 2π P(δ)           [29]

(See Appendix B for details)

From equation [12] we have:

            I(δ)/Itotal ∝ Eₘ²(δ)

Therefore, we can write:

            Eₘ²(δ) ∝ P(δ) ∝ (1 + cos δ)

or                                                                    [30]

            Eₘ²(δ) ∝ P(δ) ∝ cos²(kay/2d)

This means that the electromagnetic wave intensity distribution on the screen, i.e. the magnitude squared of the electric field amplitude E²(k), can be interpreted as the probability density distribution P(k) of the wave vector component k, as in the case of diffraction.

6- Diffraction/Interference with “photons”

This interpretation of the light intensity as a probability density was known long before the advent of Quantum Mechanics, and is independent of the nature of light whether it is considered a wave or a particle or both, and of the nature of the wave whether mechanical, electromagnetic or otherwise.

However, this aspect was not deemed significant from a theoretical point of view until Born made the concept of probability a central element in the interpretation of the quantum mechanical wave function ψ(r, t).

In the quantum version of the interference/diffraction experiment, the monochromatic light intensity is dimmed with neutral density filters until the screen appears completely dark. As the light intensity is increased, bright dots start appearing randomly on the screen, showing, as expected from quantum theory, that light also behaves like a stream of particles, photons. 18a

As the dots count accumulates, a grainy pattern begins to emerge showing that the arrival of the photons is not truly random. After some time, the dots merge into the familiar interference pattern with its distinct bright and dark fringes.

Similarly, when the experiment is repeated with one of the slits closed, a single-slit diffraction pattern forms gradually on the screen. 18b

In the photon model, the photon energy is given by ε = hf, and the intensity I₀ of the monochromatic light passing through the slit (or double slit) is given by the photon flux times the photon energy:

I₀ = n₀·hf                                  [31]

Here, n₀ represents the total photon flux through the slit (photons/m²·s).

The light intensity falling on the screen in the range [y, y + dy] is given by:

            I(y)·dy = hf·n(y)·dy                [32]

where n(y)dy represents the number of photons/m²·s falling in the range [y, y + dy] on the screen.

Note that ∫ n(y)dy = n₀ over the whole pattern along the y direction.

The photon probability density distribution on the screen is therefore:

            P(y) = I(y)/I₀ = n(y)/n₀            [33]

This applies to interference and diffraction:

From equations [23] and [30], the photon density distribution is proportional to the electric field squared and to the photon flux on the screen, i.e. the frequency of arrival of photons on the screen:

P(k) ∝ sinc²[(k − k₀)a/2] ∝ E²(k) ∝ n(k)          for diffraction

and                                                     [34]

P(k) ∝ cos²(kay/2d) ∝ E²(k) ∝ n(k)               for interference

In the probability interpretation of the diffraction/interference experiments, the position of the photon impact on the screen cannot be predicted; we can only predict the probability that a particle arrives in a range [y, y + dy] on the screen.

The particles constitute a statistical ensemble with a probability density distribution given by equations [34] for diffraction and interference respectively.
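
A minimal Monte Carlo sketch of this experiment: photon impact positions are drawn, by rejection sampling, from the cos² interference term modulated by the sinc² diffraction envelope (equations [34]); the histogram of impacts reproduces the gradual build-up of the fringes. The geometry is illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def pattern(y, lam=633e-9, a=0.25e-3, slit=0.05e-3, d=1.0):
    """Double-slit density: cos^2 interference under a sinc^2 diffraction
    envelope. All geometric values are illustrative."""
    beta = np.pi * slit * y / (lam * d)     # single-slit diffraction argument
    delta = 2 * np.pi * a * y / (lam * d)   # two-slit phase difference, eq. [28]
    return np.sinc(beta / np.pi)**2 * np.cos(delta / 2)**2

# Rejection sampling: each accepted y is one photon impact on the screen.
photons = []
while len(photons) < 10000:
    y = rng.uniform(-10e-3, 10e-3)
    if rng.uniform(0.0, 1.0) < pattern(y):
        photons.append(y)

counts, edges = np.histogram(photons, bins=80)
print(counts)   # random dots at first; the fringes emerge as the count grows
```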

  • Heisenberg Indeterminacy and Quantization

Analysis of the two experiments reveals the following:

a- When we try to squeeze the wave f(x) through a limited region of space (a single slit of width a, or a double slit with separation a), the wave diffracts, i.e. the wave vector (k), which determines the direction of propagation of the wave, spreads out.

The width of the fringe pattern is an indicator of the degree of spreading of the wave vector i.e., the uncertainty on the value of k.

The smaller the slit width or the slit separation a, the wider the intensity distribution on the screen: the uncertainty on the k-value increases with decreasing slit width.

For diffraction: Δk = k₁ − k₀ = 2π/a

For interference: y = Δ·d/a

This is a precursor of Heisenberg’s Indeterminacy Principle, HIP.

b- The simple act of restricting the wave to a limited region of space results in the quantization of the wave vector component k (eq. [19] for diffraction and eq. [25] for interference), reflected in the series of maxima and minima of the intensity.

c- The shape of the fringes is rectangular; it is the image of the slit projected on the screen. Hence, the wave carries not only energy and momentum, but also spatial information on the shape of the slit, i.e. on the physical objects in its path. Using a circular aperture instead of a rectangular slit, we obtain circular fringes, etc.

d- The fringe pattern is independent of the material of the slit. It depends only on its geometry, which defines the boundary conditions responsible for the features listed above.

7- Standing waves: Quantization of energy

The third pertinent example of wave superposition is the phenomenon of stationary waves.

When a sinusoidal (harmonic) wave is confined within fixed boundaries, the wave moves back and forth, reflecting repeatedly at the boundaries. The superposition of the wave with the reflected wave gives rise to a phenomenon termed standing or stationary waves.

Consider the simple case of a (transverse) wave of angular frequency ω and amplitude A confined to propagate on a string of length OA = d with both ends fixed.

The displacement y of the string particles at any point (x) is given by the superposition of the incident and reflected waves.

The wave function y(x, t) satisfies the 1-D Helmholtz partial differential equation:

            ∂²y/∂x² + k²y = 0                                           [35]

Equation [35] is termed the time-independent wave equation.

The wave described by equation [35] is termed a stationary or standing wave, given by:

y(x, t) = A·exp(iωt)·sin(kx)                                   [36]

Equation [36] satisfies the following boundary conditions: y = 0 at x = 0 and at x = d:

            sin(kd) = 0

or                                                                                 [37]

            kₙ = nπ/d,        n ∈ N*

The corresponding wavelengths are calculated from kₙ = 2π/λₙ:

            λₙ = 2d/n                                                        [38]

Equation [38] limits the values of the vibration frequencies to a discrete series given by:

            fₙ = v/λₙ = nv/2d

or                                                                                 [39]

ωₙ = nπv/d

where we used the relationships k = 2π/λ, λf = v and ω = 2πf.

Equation [39] gives the normal (“eigen”) frequencies, i.e. the normal modes of standing-wave vibration permitted on the string (Figure 5).

The normal modes depend on the length of the string (d), which defines the boundary conditions, and on the speed v of the wave, which is a function of the properties of the medium: the tension T and the linear mass density μ: v = √(T/μ).
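
As a worked example of equation [39], the sketch below lists the first few normal modes of a string with illustrative tension, density and length.

```python
import math

# Illustrative string parameters:
T = 80.0     # tension (N)
mu = 0.005   # linear mass density (kg/m)
d = 0.65     # string length (m)

v = math.sqrt(T / mu)   # wave speed on the string, v = sqrt(T/mu)

for n in range(1, 5):
    f_n = n * v / (2 * d)   # eigenfrequencies, equation [39]
    print(f"mode {n}: f = {f_n:.1f} Hz, lambda = {2 * d / n:.3f} m")
```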

  • Quantization of energy

The quantization of energy is associated in the minds of many with the theory of Quantum Mechanics and the two quantization hypotheses proposed by Planck and Einstein (ε = hf) and by de Broglie (p = h/λ).

The phenomena of wave diffraction and interference have shown that quantization of the wave vector, wavelength, frequency and phase occurs naturally when a wave is confined to a finite region of space (or time). As the two examples above demonstrate, quantization results from wave superposition, whether in interference or in stationary waves.

Quantization is a classical phenomenon associated with wave behavior, resulting from the simple act of restricting the wave to a limited region of space.

In signal processing, the restriction of a waveform in time (sampling) results in quantization of the frequency and of the power (energy) spectrum. This is a consequence of the time-frequency Fourier transform theorem 15b.

A clear example of energy quantization occurs in standing waves.

We have shown in equation [39] above that the frequencies of the allowed modes of oscillation for stationary waves are limited to the discrete values given by:

            fₙ = v/λₙ = nv/2d

or                                                                                 [39]

ωₙ = nπv/d

From equation [4] we know that the mechanical wave energy calculated over a single cycle is given by:

Ecycle = ½ m ω² uₘ² λ

or                                                                                 [4]

            Ecycle = 2π² m c uₘ² f

Equation [4] shows that the energy of the oscillator is a function of the frequency f as well as of the amplitude uₘ. Therefore, the allowed oscillation energies are also quantized.

A consequence of the quantization of the energy is that the confined string system can only be excited by external oscillations with frequencies given by equation [39], termed resonance frequencies.

Only at these resonance frequencies does the string system absorb energy from the external source.

This means that an external harmonic excitation of the wrong frequency will not be absorbed by the stationary wave system, no matter how high the intensity (amplitude) of the mechanical wave!

A similar situation is observed in the interaction of electromagnetic radiation with material systems. The absorption and emission spectra of atoms, for example, consist of series of lines with specific wavelength (frequency) values. Absorption of electromagnetic radiation occurs only at certain frequencies, corresponding to resonance between the atomic system and the field. The atomic system is described, in Quantum Mechanics, by three-dimensional standing waves with spherical symmetry.

In fact, the solutions of the 3-D Helmholtz equation, [Δf(r) + k²f(r) = 0], are 3-D standing waves whose angular parts, termed spherical harmonics, describe the angular dependence of the atomic orbitals of hydrogen-like atoms. 19

8- Failure of Newton’s clockwork Universe

By the end of the 19th century, Physics had become an impressive edifice, capable of explaining a wide variety of natural phenomena and experimental observations.

These phenomena were classified under two broad areas: matter and radiation, each containing its set of laws, models and theories covering the whole physical domain.

In relation to the behavior of matter, Newtonian and Rational Mechanics (now termed Classical Mechanics), complemented by Thermodynamics, Statistical Mechanics and Chemistry, gave an adequate explanation for most of the phenomena discovered at that time in their field of application, from the behavior of particles in a gas at the microscopic level to the motion of planets.

On the other hand, by the 1860s, all known electrical, magnetic and optical phenomena had been integrated in Maxwell’s electromagnetic wave-field theory.

A general optimism pervaded the 19th-century scientific community and the informed society at large: there was a sense of triumph bordering on euphoria, and a strong belief that science was on the verge of achieving its original purpose: that of unraveling the last secrets of Nature.

The clockwork Universe constructed by Newton was deterministic. Natural phenomena were accounted for, or could be in principle, by means of deterministic mathematical laws. There was increased confidence that Humanity had found, in the scientific method and in its mechanical models, the key to unlock the mysteries of the Universe.

The following quote is attributed to Lord Kelvin: “There is nothing new to be discovered in physics now. All that remains is more and more precise measurement.”20

However, this proved to be premature. Behind this “façade” of success, a scientific upheaval was brewing which would end up undermining the very foundations of the classical Universe.

In fact, there was already a general “malaise”, prevalent among top theoreticians, about the whole theoretical structure. This malaise was linked to the incompatibility of the two views of the Universe coexisting side by side: the particle model of matter and the wave-field model of electromagnetic radiation.

These two totally distinct and apparently incompatible models were each being used to describe a different set of phenomena.

The particle model was embedded in the framework of Classical Mechanics, Statistical Mechanics and Chemistry, while the wave-field model formed an integral part of Maxwell’s electromagnetic theory, with its applications in electrical, magnetic and optical phenomena.

In the particle model, the Universe appeared like a grainy, discontinuous canvas made up of small separate dots jostling and flying about randomly in all directions.

Conversely, in the field-wave model, the Universe appeared continuous: the same canvas consisted of wavy patterns filled with interpenetrating, superimposing, undulating ripples, kinks and wrinkles.

Too many “puzzling” phenomena

By the end of the century, a number of new puzzling phenomena had been discovered for which both fundamental theories of classical Physics, Newtonian Mechanics and Maxwell’s electromagnetism, failed to provide an explanation.

Among these new phenomena were the following:

  1. The Michelson-Morley experiment and the constancy of the speed of light.
  2. Black body radiation or the Ultraviolet Catastrophe.
  3. The photoelectric effect.
  4. Atomic line spectra.

This was only the tip of the iceberg. More experimental puzzles were to be revealed.

The microscopic world was not behaving according to the plan devised by classical scientists. The Universe seemed to shrug off the straitjacket strung together by the laws of Mechanics and of Electromagnetism.

As the century was coming to a close, several experimental discoveries took place in rapid succession, revealing new surprising features of the subatomic world: the anode rays or protons (Goldstein, 1886), X-rays (Roentgen, 1895), natural radioactivity and α, β, γ emission (Becquerel, 1896), and cathode rays or electrons (Thomson, 1897).

It was clear that new theoretical frameworks were needed to account for the new phenomena.

In fact, the solution of the puzzling results of the Michelson-Morley experiment 21 led to a complete revision of classical Mechanics and of the fundamental concepts of time, space, matter and energy, which resulted in the development of Special Relativity and General Relativity.  

The solution of problems 2, 3 and 4 gave birth to the quantum theories of radiation and of the structure of matter, which culminated in the theory of Quantum Mechanics.

This scientific “revolution” is still going on, with profound effects on physics (in all its disciplines), on the other sciences, whether chemical, biological, or even social, economic and human, on technology, philosophy, education, religion and every activity of our human society, and also on the environment.

The two theories of Relativity and Quantum Mechanics with their modern ramifications provide the theoretical framework for the potential modeling of every single known phenomenon in Nature, and have applications in atomic, nuclear, molecular physics, electronics, computer science, chemistry, biology, medicine, nanotechnology, biotechnology, cosmology and many other new scientific and technological fields.

Both theories have reshaped our ever changing view of the Universe, its origin, its structure and its evolution. In addition they have had far reaching impacts on our understanding of reality, beyond anything we once imagined.

However, everything indicates that the two modern theories suffer from the same problem that afflicted their classical predecessors: they propose two incompatible models describing two distinct Universes: the one continuous and deterministic, the other discontinuous and stochastic.

Both theories are bound to be replaced eventually.

9- The impact of the theory of Relativity

In 1905, Einstein presented the Theory of Special Relativity, in which he embedded the principle of relativity (principle of covariance) as the first postulate and the principle of invariance of the speed of light in free space as the second postulate. 22

Postulate I or the Principle of Relativity states:

“The laws of nature are covariant w.r.t. inertial frames of reference”.

Postulate II states:

“The speed of light in free space is invariant w.r.t. inertial frames of reference.”

Starting from these two postulates alone, Einstein derived the Lorentz transformation of coordinates, and constructed a completely new mechanics.

Later on, in his theory of General Relativity (1915), Einstein extended the scope of the principle of relativity to all (non-inertial) frames of reference, turning it into a universal principle that applies anywhere, at any time, to all natural phenomena.

The main impact of the theory of Relativity was to eliminate a number of “dualities” that formed an integral part of Classical Physics: space and time, matter and energy, inertia and gravity, electricity and magnetism.

The theory of Relativity eliminated space and time as two distinct and immutable entities and replaced them with a new entity, that of a space-time whose properties are defined by its matter and energy content.

The concept of a space-time whose properties and “shape” change and adapt in response to its matter and field constituents is a consequence of the scientific objectivization of reality. Space-time has always been there masked by our brain’s illusionist attempt at constructing a coherent narrative of the information gathered by our senses.

The illusion of separate time and space was our brain’s way of decomposing reality and reassembling it into the model that we have evolved to perceive. It stems from the illusion of absolute rest, ingrained in our psyche, which convinces us that we remain still in one place as time goes by.

Motion in space is invariably accompanied by the passage of time and vice versa: when we move in space we also move in time.

The illusion of absolute rest is so deeply ingrained in our psyche that when we imagine scenarios involving time travel (H.G. Wells’ The Time Machine, Back to the Future, Twelve Monkeys, etc.), we invariably end up exactly at the same place, in the same backyard, on the same planet (same galaxy and cluster, of course), although in reality we would have travelled thousands or billions of kilometers, depending on the elapsed time.

The theory of relativity also eliminated the matter-energy dichotomy by stating that mass is another form of energy. Matter became the last in a long list of phenomena that the concept of energy had gradually been absorbing: kinetic energy or the energy of motion, potential energy in all its manifestations, thermal energy and, last, mass itself: m = E/c².
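The scale of the equivalence is worth a simple illustrative calculation: converting a single gram of matter entirely into energy would release E = mc² = (10⁻³ kg) × (3 × 10⁸ m/s)² = 9 × 10¹³ joules, roughly the daily output of a large (gigawatt) power plant.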

Relativity also removed the distinction between inertia and gravity showing, in the principle of equivalence of General Relativity, that both inertia and gravity are manifestations of the same phenomenon associated with the same property of matter.

Relativity also showed that the asymmetry between electrical and magnetic phenomena was an artifact of the frame of reference: the electromagnetic theory was initially formulated with the sources at rest in a particular frame. In a relativistic treatment, the electric and magnetic fields observed from moving frames are completely symmetrical, forming one entity termed the electromagnetic field.

The theory of relativity changed forever our view of the Universe. The new Universe is very dissimilar from the static and immutable universe described by Newton and experienced by our ancestors throughout humanity’s history.

  • The scope of the relativistic revision

Despite its far-reaching effects on our perception of the phenomena, the theory of relativity kept the classical foundations of the scientific model of reality essentially intact.

The relativistic revolution did not affect the core elements of the classical Universe:  locality, causality, determinism and objective reality.

The Universe as a stage for the phenomena was still there.

Space and time ceased to be independent and were merged in space-time.  However, objects still existed in relativistic space-time the way they did in the classical space and time. They possessed a well-defined location specified by their space-time coordinates. Objects were distinguishable and evolved separately, their evolution being governed by deterministic laws obeying the principle of causality, and by their initial conditions (including position and velocity).

Consequently the study of the phenomena as a series of successive events was still valid.

Events were still defined as something that happens at a particular point in space and at a particular instant of time as measured w.r.t. a given frame of reference represented by a point P (x, y, z, t) in the new Math-verse.

Natural phenomena can still be dissected, analyzed, classified, modeled and explained as before.

Relativity preserves the principle of causality, or the equality of cause and effect, which has been at the heart of the deterministic view of reality since the works of Descartes and Leibniz.

In fact Relativity, contrary to the belief of some who confuse it with relativism, established an overriding principle that governs the phenomena: the principle of Covariance, or the “absoluteness” of the laws of Nature.

The principle of Covariance provides a criterion for the identification of objective and absolute universal laws governing natural phenomena. These laws of nature are valid everywhere and at any time, independent from the perspective (frame of reference) or “opinion” of the observer.24
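A standard illustration of a covariant statement: the space-time interval between two events, Δs² = c²Δt² − Δx² − Δy² − Δz², has the same numerical value in every inertial frame, even though the separate time and space intervals Δt, Δx, Δy, Δz differ from observer to observer.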

The theory of Quantum Mechanics, on the other hand, at least in its “official” interpretation, threatens the very principles upon which Science was historically established (determinism, causality and objective reality), unless a rational, realistic resolution of the wave-particle duality is adopted.

10- Summary

The breakdown of Newton’s clockwork model severely undermined the classical particle model. The modern model of the “particle” bears little resemblance to Newton’s hard, indivisible, impenetrable, separate, countable, distinguishable and permanent “chunks of matter”.

All these properties were dropped one after the other, until all that remained was “splashes” of energy on phosphorescent screens or whimsical tracks in a cloud chamber.

Einstein’s theory of Relativity also showed, with the famous E = mc², that matter is another form of energy. Particles became intangible bundles of energy which may pop into being or vanish when the conditions are right.

More importantly, identical particles were found to be indistinguishable, in complete contrast to classical particles.

The only remaining “particle” properties are inertia, charge and the mysterious spin.

The modern particles are entangled, interpenetrating, indiscernible and “ephemeral entities with an identity crisis”. Due to the wave-particle duality, they appear “undecided” whether to manifest dot-like in localized region of space or spread over all available space.

Furthermore, the elimination of the matter-energy dichotomy raises the question of the true nature of material particles: the whole concept has to be reviewed.

By comparison to the complete demise of the classical particle model, the wave model survived practically unscathed, apart from the quantum hypotheses, which were simply grafted on the classical wave as an additional property implying “graininess” at the microscopic level.

All the properties of classical waves were transposed intact to quantum mechanics by means of the de Broglie hypothesis, i.e. the wavelike property of material particles.
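The de Broglie hypothesis assigns to a particle of momentum p the wavelength λ = h/p. As an illustrative order of magnitude: an electron accelerated through 100 volts has λ = h/(2mₑ·eV)^(1/2) ≈ 1.2 × 10⁻¹⁰ m, comparable to interatomic spacings in a crystal, which is why electron diffraction was first observed with crystals (see ref. 10).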

The phenomenon of wave superposition is of course at the heart of Quantum mechanics where it is elevated to the status of a fundamental principle.

As we have shown, the fundamental features of the theory of Quantum Mechanics are a direct consequence of the wavelike behavior of material particles.

Heisenberg’s Indeterminacy Principle (HIP), the quantization of energy and momentum, and even the probability interpretation of the wave function, in addition to the Principle of Superposition, were already present in the behavior of classical waves and are not characteristics specific to Quantum Mechanics.

The phenomena of diffraction, interference and standing waves have shown that quantization of the wave vector, wavelength, frequency, phase and energy occur naturally when a wave is confined to a finite region of space (or time).

This is a consequence of the Fourier transform theorem.
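A minimal illustration, in the standing-wave case: a wave confined to a region of length L with nodes at both ends can only support the wavelengths λn = 2L/n, i.e. the wave numbers kn = nπ/L and frequencies fn = n·v/2L, with n = 1, 2, 3, … Likewise, the Fourier bandwidth theorem imposes Δx·Δk ≳ 1 (and Δt·Δω ≳ 1) on any classical wave packet, the classical-wave counterpart of Heisenberg’s relation.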

The interpretation of the light intensity as a probability density was known long before the advent of Quantum Mechanics. However, it was not deemed theoretically significant until Born made the concept of probability a central element in the interpretation of the quantum mechanical wave function ψ.

This became the second postulate of Quantum Mechanics in the “Authorized” Version of the theory (sometimes stated as a corollary to the first postulate):

“The absolute square of the wave amplitude, |ψ|² = ψψ*, represents the probability density of finding the particle in a given differential volume of space.”
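Consistency then requires the wave function to be normalized over all space, ∫ |ψ|² dV = 1, expressing the certainty that the particle will be found somewhere.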

On the other hand, unlike Born’s probability waves, classical waves play other, more “tangible” roles. Mechanical waves carry energy, manifested in the oscillation of the particles of the elastic medium. Electromagnetic waves carry energy in the oscillation of the carrier field or of a stream of photons. They also carry momentum, and information about the geometry encountered along the wave path, as observed in the images of the slits on the screen.

While quantum particles appear to be destitute of sufficient “reality” attributes, Schrödinger’s wave function ψ(r,t) is anchored in space-time.

It is defined unequivocally at each point in space and each instant in time. It evolves in time and propagates in space, interacting with slits and obstacles, carrying energy in the form of a stream of “particles” or quanta, and transmitting information about the path geometry, forming real and “ghost” images in real space-time.

It is clear that the resolution of the wave-particle conundrum and the satisfactory clarification of the linkage between the wave and the particle are fundamental for the construction of the scientific model of the Universe.

In the next blog we will present the Quantum Mechanical solution of the wave-particle conundrum, according to the Authorized Version, or Copenhagen interpretation.

We will discuss the main features of the model as far as it succeeds in providing an objective, rational description of reality as expected from any scientific theory in the traditional sense.

Images

a- https://en.wikipedia.org/wiki/Double-slit_experiment

b- https://en.wikipedia.org/wiki/File:Onde_electromagnetique.svg

With modifications to suit the text

c- https://en.wikipedia.org/wiki/File:Doubleslit.svg

With modifications to suit the text

d- https://courses.lumenlearning.com/suny-osuniversityphysics/chapter/16-6-standing-waves-and-resonance/

With modifications to suit the text

References and notes

1- See my blog:

2- Interference with Photons

https://photonterrace.net/en/photon/duality/

– Interference with electrons: Doubleslitexperiment_results_Tanamura_1.gif

https://www.hitachi.com/rd/research/materials/quantum/doubleslit/index.html

– Olaf Nairz, Markus Arndt and Anton Zeilinger (2003), “Quantum interference experiments with large molecules”, Am. J. Phys. 71 (4), 319.

3- Isaac Newton, “The Mathematical Principles of Natural Philosophy” (1687), translated by Andrew Motte (1846), MacMillan; online library edition.

4- Isaac Newton, The Mathematical Principles (1687).

Newton insisted that his fundamental statements whether definitions, laws or corollaries were extracted from empirical observations and data. He inferred Universal Gravitation from Tycho Brahe’s astronomical Data which had been distilled by Kepler into the three laws for planetary motion; he took from Kepler the concept of mutual attraction, from Galileo his experimental results on the free fall of objects and his model of projectile motion in terms of the composition of two motions one uniform and the second uniformly accelerated. 

Newton also adopted and adapted from Galileo the principle of relativity, from Galileo and Descartes the concept and principle of Inertia, from Descartes the concept of quantity of motion and from Descartes and Huygens the Principle of conservation of the quantity of motion.

Newton may have inferred the Laws of Motion from the percussion experiments carried out by Huygens, Wren and Wallis, the results of which Huygens summarized in his “Rules of Percussion”.

See my Blog on the subject:

5- Euler, L. (1736) “Mechanica sive motus scientia analytice exposita”.

English translation, by Ian Bruce

http://www.17centurymaths.com/contents/mechanica1.html

6- Isaac Newton, The Mathematical Principles (1687).

In the explanation of Rule III in “the rules of reasoning” at the beginning of Book III, he expounds the concept of material particles.

“The extension, hardness, impenetrability, mobility, and vis inertiæ of the whole, result from the extension, hardness, impenetrability, mobility, and vires inertiæ of the parts; and thence we conclude the least particles of all bodies to be also all extended, and hard and impenetrable, and moveable, and endowed with their proper vires inertiæ. And this is the foundation of all philosophy. Moreover, that the divided but contiguous particles of bodies may be separated from one another, is matter of observation; and, in the particles that remain undivided, our minds are able to distinguish yet lesser parts, as is mathematically demonstrated. But whether the parts so distinguished, and not yet divided, may by the powers of Nature, be actually divided and separated from one another, we cannot certainly determine. Yet, had we the proof of but one experiment that any undivided particle, in breaking a hard and solid body, suffered a division, we might by virtue of this rule conclude that the undivided as well as the divided particles may be divided and actually separated to infinity.”

7- The molecular kinetic model used a simplified model of an ideal gas. The model assumed that gases were constituted of a large number of identical very small particles in constant motion. The particles obey Newton’s laws of motion i.e. travel at constant velocity (uniform rectilinear motion), and change direction after collision with another particle or with the walls of the container. The collisions are assumed to be elastic i.e. without energy loss. The large numbers of collisions render the velocities of the molecules (speed and direction) essentially random, allowing the use of statistical methods to analyze their collective behavior.
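A worked consequence of this model (standard kinetic theory, recalled here for illustration): the pressure exerted by N molecules of mass m in a volume V is p = (1/3)(N/V)·m⟨v²⟩, where ⟨v²⟩ is the mean square speed; comparison with the ideal gas law pV = NkT then identifies the absolute temperature with the mean kinetic energy of a molecule, (1/2)m⟨v²⟩ = (3/2)kT.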

8-  a- For the Stern-Gerlach experiment: https://en.wikipedia.org/wiki/Stern%E2%80%93Gerlach_experiment#History

b- For a list of particles: https://en.wikipedia.org/wiki/Particle_physics#Bosons

To know more about quarks: https://en.wikipedia.org/wiki/Quark

9- Both processes have been demonstrated experimentally. See, for example, Pike, O. J. et al. (2014), “A photon–photon collider in a vacuum hohlraum”, Nature Photonics, 18 May 2014.

10- C. Davisson and L. H. Germer, “Diffraction of Electrons by a Crystal of Nickel”, Physical Review 30 (6), 705–740 (1927).

https://en.wikipedia.org/wiki/Davisson%E2%80%93Germer_experiment

11- Maxwell’s equations for free space are:

a) Curl E =  – μ0 ∂H/ ∂t            (Faraday’s law)

b) Curl H  =   ε0 ∂E/ ∂t            (Ampère’s law)

c) Div E = 0                            (Gauss’ law-electrical)

d) Div H = 0                           (Gauss’ law-magnetic)
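Combining these equations (a standard manipulation, added here for illustration) yields the wave equation: taking the curl of (a) and using (b) together with the identity Curl Curl E = Grad Div E − ∇²E and (c), one obtains ∇²E = ε0μ0 ∂²E/∂t², the equation of a wave propagating at speed c = (ε0μ0)^(−1/2), which is how light was identified as an electromagnetic wave.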

12- The electrical permittivity ε0 was redefined in 2019 in terms of the vacuum permeability μ0 and the speed of light, using c = (ε0μ0)^(−1/2), i.e. ε0 = 1/(μ0c²).

https://en.wikipedia.org/wiki/2019_redefinition_of_the_SI_base_units

13- See the following link for an overview of the spectrum:

https://commons.wikimedia.org/wiki/File:EM_Spectrum_Properties_edit.svg

14-  Goldbeck-Löwe, Harald  (2012). “The Hallwachs-Effect – The Photoelectric effect – Gate to Quantum Physics” 

https://www.researchgate.net/publication/269631693

15- a)- f(r) = A ∫∫∫ F(k) exp(i k·r) dk

where the triple integration is over all of k-space (R³),

A is a normalization constant: A = (2π)^(−3/2),

k is the wave vector, with magnitude k = 2π/λ,

and F(k) is the transform function of f(r), given by:

F(k) = A ∫∫∫ f(r) exp(−i k·r) dr

where the triple integration is over the domain of definition D of the function f(r).

b)- Similar equations may be written for any integrable function of time f(t), in terms of the angular frequency ω = 2πf, as follows:

f(t) = A ∫R G(ω) exp(iωt) dω

and, for the transform,

G(ω) = A ∫D f(t) exp(−iωt) dt

The normalization constant is in this case: A = (2π)^(−1/2).
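A simple worked example, bridging to note [16]: for a rectangular pulse f(t) = 1 for |t| < T/2 and 0 elsewhere, the transform is G(ω) = A ∫ exp(−iωt) dt, taken from −T/2 to T/2, which gives G(ω) = A·T·sinc(ωT/2). A signal confined to a duration T thus spreads over a frequency band of width Δω ~ 2π/T: the shorter the pulse, the broader its spectrum.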

16- Useful properties of the sinc function, sinc(y) = sin(y)/y:

lim (y → 0) sinc(y) = 1

∫R sinc(y) dy = π

∫R sinc²(y) dy = π

∫R sinc(πy) dy = 1

17- For the determination of B², we write:

∫R B² sinc²[(kx − k0)·a/2] dkx = 1

Change of variable: y = (kx − k0)·a/2, hence dy = (a/2) dkx and dkx = (2/a) dy.

This gives:

(2/a) B² ∫R sinc²(y) dy = 1

Hence, from note [16] above: (2/a) B² π = 1

And: B² = a/(2π)
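The result can be checked numerically. A minimal Python sketch (the value a = 2.0 is an arbitrary hypothetical choice; numpy’s np.sinc(x) is sin(πx)/(πx), hence the rescaling below):

import numpy as np

a = 2.0                                   # hypothetical slit width
k = np.linspace(-400.0, 400.0, 800_001)   # truncated integration range for kx (k0 taken as 0)

y = k * a / 2.0                           # change of variable from note [17]
integrand = (a / (2.0 * np.pi)) * np.sinc(y / np.pi) ** 2   # sinc(y) = sin(y)/y

# crude rectangle-rule integration; prints ~0.998 (the truncated tails account for the deficit)
print(integrand.mean() * (k[-1] - k[0]))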

18- a)- Reuben S. Aspden, Miles J. Padgett and Gabriel C. Spalding (2016), “Video recording true single-photon double-slit interference”, American Journal of Physics 84, 671.

https://aapt.scitation.org/doi/full/10.1119/1.4955173

b)- (https://photonterrace.net/en/photon/duality/).

19- https://en.wikipedia.org/wiki/Helmholtz_equation

20- Wikipedia disputes the claim and suggests instead that the quote is a paraphrase of Michelson who in 1894 stated: “… it seems probable that most of the grand underlying principles have been firmly established … An eminent physicist remarked that the future truths of physical science are to be looked for in the sixth place of decimals.”

https://en.wikipedia.org/wiki/William_Thomson,_1st_Baron_Kelvin#Disaster_and_triumph

21- The Michelson-Morley experiment (1887) showed that the speed of light w.r.t. the “ether”, supposedly at rest w.r.t. absolute space, was not affected by the earth’s motion around the Sun (~ 30 km/s).

https://en.wikipedia.org/wiki/Michelson%E2%80%93Morley_experiment

Today’s definition of the standard metre is based on the fact that the speed of light in vacuum is a universal constant; after a long series of measurements of ever increasing accuracy and precision, its value is now fixed by definition at exactly c = 299 792 458 metres per second.

https://en.wikipedia.org/wiki/Speed_of_light

22- Einstein, Albert (1905). “Zur Elektrodynamik bewegter Körper” (“On the Electrodynamics of Moving Bodies”), Annalen der Physik 17 (10): 891–921; translated from German by Meghnad Saha and Wikisource.

https://en.wikisource.org/wiki/Translation:On_the_Electrodynamics_of_Moving_Bodies

23- See my Blog:

Appendix

A- Double slit interference

We have two light waves E1 and E2 overlapping in a given region of space at an instant of time t. The two waves are monochromatic with equal frequencies, and coherent, i.e. they possess a constant phase difference; in this case, Δφ = φ2 − φ1 = 0.

From the principle of superposition, we have:

E = E1 + E2 = E01 exp[−i(k1·r1)] + E02 exp[−i(k2·r2)]

Thus:

E² = E01² + E02² + 2 E01·E02 cos(δ)

where δ = (r1 − r2)·k is the phase difference between the two interfering waves at the point of interference on the receiving screen, with k1 ≈ k2 = k in the far-field approximation.

From equation [12], we have:

I = I01 + I02 + 2 (I01·I02)^(1/2) cos(δ)·cos(E01, E02)

In the special case where, I01 = I02 and assuming E1 approximately parallel to E2, we obtain the simple expression of the intensity in terms of the phase difference δ:

I = 2 I01 (1 + cos δ)                             [A1]                            

The term cos(E01, E02), the cosine of the angle between the two field directions, acts as an envelope modulating the maxima of the fringes. Its effect can be neglected when the width of the fringe pattern is small compared to the slit-screen distance d; otherwise the intensity of the bright fringes decreases gradually as we move away from the central fringe.

Furthermore, in the double-slit experiment, diffraction and interference occur simultaneously. The interference fringes are modulated by the wider diffraction fringes given by the sinc² function of equation [20] in the text: since the double-slit aperture is the convolution of a single slit with two point openings, the observed far-field pattern is the product of the pure two-slit interference term and the single-slit diffraction envelope.

The parameter δ is a function of the position y on the receiving screen.

As δ varies, the light intensity I on the screen varies sinusoidally, with maxima and minima given, respectively, by:

            δ = 2n π,  n  ∈ Z.

and                                                                  [A2]

            δ = (2n+1) π,  n  ∈ Z.

The phase difference δ is expressed in terms of the path difference, Δ = (d2-d1), by:

            δ / 2π = Δ/ λ

or                                                                     [A3]

            δ = 2π Δ/ λ

Thus, the position of the fringes is given, in terms of the path difference Δ between the two waves, by:

            Δ = n λ,    for the bright fringes

And                                                                 [A4]    

            Δ = (2 n+1) λ/2,  for the dark  fringes

The path difference Δ is expressed in terms of the position y on the screen (a being the slit separation and d the slit-to-screen distance):

Δ = a·y/d                                           [A5]

The phase difference, expressed in terms of (y), is:

δ = 2π a·y / (λ d)                                  [A6]

λ = a·y1/d, where y1 is the position of the first maximum.
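The fringe formulas above are easy to explore numerically. A minimal Python sketch (all parameter values are hypothetical illustrations):

import numpy as np

lam = 600e-9    # wavelength (m); hypothetical value
a   = 0.2e-3    # slit separation (m); hypothetical value
d   = 1.0       # slit-to-screen distance (m); hypothetical value
I01 = 1.0       # intensity of a single slit (arbitrary units)

y = np.linspace(-0.02, 0.02, 2001)          # positions on the screen (m)
delta = 2.0 * np.pi * a * y / (lam * d)     # phase difference, equation [A6]
I = 2.0 * I01 * (1.0 + np.cos(delta))       # fringe intensity, equation [A1]

# bright fringes occur where delta = 2*n*pi, i.e. at y = n*lam*d/a
print("fringe spacing:", lam * d / a, "m")  # 0.003 m = 3 mm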

B- Probability distribution

In the interference pattern on the screen, the light intensity distribution is a function of the phase difference, given by:

I(δ) = 2 I01 (1 + cos δ)                         [B1]

The term 2I01 is equal to the total light intensity (Itotal) passing through the two slits and falling on the screen, forming the interference pattern:

I(δ)/ Itotal = 1+cos δ                             [B2]

The ratio (I(δ)/ Itotal)  is proportional to the probability distribution P(δ ) of the phase difference (δ) taken as a random variable:

I(δ)/ Itotal = (1+cos δ) = 2π P(δ)           [B3]

The factor 2π is a normalization constant, calculated as follows:

            ∫cycle (1 + cos δ) dδ = 2π

Therefore, the probability density distribution P(δ) of the random variable δ is:

            P(δ) =(1+cos δ) /2π                             [B4]

From equation [13] we have:

            I(δ) ∝ Em²(δ)

Therefore, we can write:

            Em²(δ) ∝ P(δ) = (1 + cos δ)/2π              [B5]
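As a quick numerical sanity check of the normalization [B4], a minimal Python sketch:

import numpy as np

delta = np.linspace(0.0, 2.0 * np.pi, 100_001)
P = (1.0 + np.cos(delta)) / (2.0 * np.pi)   # probability density [B4]

# rectangle-rule estimate of the integral over one cycle; prints ~1.0
print(P.mean() * 2.0 * np.pi)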
