In a tortured garden

Not by any stretch of the imagination can I be called good at gardening. Due to some unpleasant experiences during my childhood, I have strong negative feelings associated with gardening. The result is, sadly, rather predictable. My poor garden, although not very big, suffers from a serious case of neglect. The weeds are taking over. Some of the shrubs have died. Others are severely overgrown.

Not being much of an outdoors person, I don’t go into this garden very often. On the rare occasion that I do venture into my garden, the sight of all the weeds infesting the lily patch produces feelings of anxiety and guilt. And usually that is enough to drive me back indoors to my books and the other things with which I color my life.

There are, however, those other times. I would be lured to my garden by a glimpse of color that I noticed through the window. In the garden I would then discover that the Azaleas, or the Pride-of-India, have suddenly burst into flower.

It is as if my garden is sending me this message: “It’s OK. We understand. And don’t worry. In nature, where we all came from, none but the elements tend to us and still we thrive.” Then I don’t notice the weeds or the overgrown parts or even the dead shrubs. I just see a bit of a blessing. For once I can turn around with a happy heart when I leave the garden.

Scientistic atheism: delusion, deception, deceit

Human beings are community creatures. They prefer to maintain a close connection with one another. In doing so, their paradigms – the way they think about things – evolve in a collective manner. They like to think about things in more or less the same way as their fellow human beings. This is a powerful mechanism that forms the foundation for cultures and improves the ability of the human race to survive as a whole in the long run.

However, the prevailing paradigm needs to remain on a healthy track to avoid self-destruction. Some notions that enter and propagate through it could ultimately be detrimental to the long-term survival of the human race. Normally, dangerous notions are removed from paradigms, because the damage they do is revealed to the community at large. However, such damaging notions can, on occasion, survive. When they do, and become too widespread, it becomes harder to convince humanity of the danger.

Scientistic atheism is such a dangerous notion, one that has been able to use delusion, deception and deceit to propagate itself through the current paradigm. Although numerous individuals have attempted to reveal its true nature to the world, this deception still strides forth in the minds of many people.

What is atheism?

A theist is a person who believes in the existence of a God. An agnostic is a person who doesn’t know what to believe and therefore doesn’t believe anything. An atheist is a person who believes that there is no God. Note the difference between an agnostic and an atheist. While an agnostic doesn’t believe anything, an atheist holds a definite belief. In that sense a theist and an atheist share the fact that they both believe in something.

What is scientistic atheism?

A scientistic atheist is a person who not only believes that there is no God, but also believes that this conviction is founded in science. Therein lies the delusion, for this is not true. Yet such a person would often mislead other people by claiming that science supports their convictions. Therein lies the deception. Some atheists may actually be aware of the fact that the latter belief is not true, yet they vocally maintain the deception that their atheistic beliefs are confirmed by science. Therein lies the deceit.

The word `scientistic’ and the word `scientific’ sound similar, but they are not the same. To understand what the word `scientistic’ means, we first need to understand what the word `scientific’ means. Knowledge is said to be scientific knowledge when the scientific method has been followed to obtain the knowledge.

What is the scientific method?

The scientific method is a process whereby theoretical predictions are tested through experimental observations. Typically, one first comes up with a hypothesis or a theory, which is then used to make predictions that can be tested. One then performs an experiment to see whether these predictions agree with what happens in the physical world. If the experimental observations agree with the predictions, we gain confidence in the theory and it is considered to be reliable. If the experimental observations disagree with the predictions, the theory is rejected.

There are a number of requirements for knowledge to be considered as being scientific. A scientific theory needs to be testable. In particular, it must be such that it could in principle be shown to be false. Theories that can never in principle be shown to be false are not very useful, because then we can never be sure they are true.

Scientific experiments also need to be repeatable. Anyone should in principle be able to redo the experiment and get the same results. If the experiment cannot be repeated, then how can we be sure that the original experiment was done correctly?

These properties – falsifiability and repeatability – are but two of the restrictions that one needs to impose on knowledge to ensure that it is scientific.

There are unfortunately certain categories of knowledge that can in principle not obey these restrictions and can therefore never be considered to be scientific knowledge. Examples are historical events, which are by their very nature not repeatable. When one comes up with a theory of how the moon was formed, for example, the theory itself can be based on scientifically testable mechanisms, but whether or not the moon was in actual fact formed in this way is not testable; it is not, and can never be, a scientific statement.

The same is true about biological evolution. Although the individual mechanisms that play a role in evolution can be tested in a scientific manner, any statement that a particular species evolved in this way from another species, can never be tested and is therefore not a scientific statement.

Other examples of things that look like science, but are not science, include: (a) the notion of parallel universes, (b) what happens beyond an event horizon and (c) what happened before the big bang. These are examples of things that we can in principle never test and which for that reason are not falsifiable. Although these things can be cast in a mathematical, theoretical language that looks like science, and although they are being researched by people who are regarded as scientists, this kind of knowledge is nevertheless non-scientific.

Now we can say what the term `scientistic’ means. When something is cast in a form that looks like science, but does not obey the requirements of the scientific method, then we refer to it as being scientistic. In particular, if one bases one’s beliefs on something that looks like science, but doesn’t obey the requirements of science, then those beliefs are called scientistic beliefs.

A scientistic atheist is a person who bases the belief that there is no God on information that looks like science, but doesn’t adhere to the requirements of the scientific method.

There is no such thing as a `scientific atheist,’ because there is no scientific knowledge that supports the belief that there is no God. At the same time, there is also no `scientific theist,’ because there is also no scientific knowledge in support of the existence of God. One’s belief that God does or does not exist is never founded in science. It is a matter of choice, which may have been inspired by arguments based on our scientific knowledge, but the choice can never be forced by scientific facts. The atheist may look at the universe and see `evidence’ for his belief that God does not exist. The theist may look at the universe and see `evidence’ for her belief that God does exist. Both these observations are colored by their pre-scientific convictions.

Generally, a theist does not base his convictions on anything external, such as scientific knowledge. The basis for a theist’s convictions is faith itself. In that sense, these convictions are consistent and not deceptive. On the other hand, the convictions of a scientistic atheist are based on a lie, one that the atheist may or may not be aware of. Either way, these convictions are inconsistent and essentially deceptive. Moreover, the atheist does not have the option to base her convictions on faith alone, because that would contradict the very notion of what the atheist believes.

So, in conclusion, although both the theist and the atheist lack scientific support for their convictions, the theist is in a much better situation, because scientific support is not required for theistic beliefs, whereas atheistic beliefs do require some form of external support, such as scientific knowledge. The atheist must therefore employ delusion, deception and even deceit to propagate their dangerous and destructive convictions.

Energy and frequency


 

Max Planck

Warm bodies radiate light over a range of frequencies (or wavelengths). This process is called black body radiation. The shape of the frequency spectrum produced by black body radiation was hard for physicists to explain. Finally Max Planck came up with an idea that worked. He made the assumption that light is radiated in quanta of energy that are proportional to the frequency of the light. The proportionality constant has a very tiny value in terms of the everyday units that we use. This constant came to be known as Planck’s constant and it is believed to be a fundamental constant of nature.
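
To give a feel for just how tiny this constant is, here is a small sketch of my own (not part of the original argument): Planck’s relation E = hf evaluated for visible light.

    # A rough numerical illustration (my own sketch): Planck's relation E = h*f
    # for visible light, showing how tiny the energy quanta are in everyday units.
    h = 6.62607015e-34        # Planck's constant in joule-seconds (SI value)
    f = 5.0e14                # frequency of green-ish visible light, in hertz
    E = h * f                 # energy of one quantum at that frequency
    print(E)                  # about 3.3e-19 joule, i.e. roughly 2 electron volts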

The notion that light is radiated (or absorbed) in energy quanta started the whole quantum mechanics revolution. Although there are several other counter-intuitive principles associated with quantum mechanics, the relationship between energy and frequency still forms one of the cornerstones of the whole formalism. It naturally leads to a Fourier description of everything in the universe. Frequency acts as an indexing label for the Fourier basis functions. As a result one can associate a specific energy with each of these basis functions. That in turn leads to the notion of the Heisenberg uncertainty principle, which we have discussed before.

To reach these conclusions requires a number of inductive steps, mental leaps, which are not necessarily justified. As we have explained before, much of the conceptual baggage associated with quantum mechanics is not confirmed as established experimental fact. Sometimes that is because it is in principle impossible to make such observations. Therefore, it is necessary to review what we actually know for a fact about Planck’s relation.

All Planck did was to assume that light is radiated and absorbed in energy quanta proportional to the frequency. It was Einstein who eventually concluded that light actually exists in terms of these quanta. However, one can explain all phenomena without having to say that light must always exist in terms of these quanta. In fact, this is one of those things that one can never determine experimentally, because one can only make observations of light by absorbing it somehow. As a result, all that we can make statements about from an observational point of view is what happens during a radiation or absorption process, in other words, during an interaction. So it may well be that light exists as a continuous smooth field as long as it does not interact.

The real question then is: why are radiation and absorption processes quantized?

Virtual particles


Things are now becoming more and more technical. Although I want to make this as accessible for the general reader as possible, I fear that I may have already delved so deep into the technical aspects of quantum mechanics that any casual reader would simply lose interest. Please bear with me. I’ll try my best to keep this simple.

At the heart of any scientific endeavour lies the desire to understand something about our universe. The sad fact is that the scientific method sometimes provides us with a mindless mathematical process of making successful predictions without a clear understanding of what this mathematical process tells us about nature. Quantum mechanics is such an example.

Some people believe that quantum mechanics cannot be a description of nature at the fundamental level. Even if this is true, quantum mechanics must be valid at some level. As a result the mathematical formulation of quantum mechanics must mean something. One should distinguish between the mathematical formulation of quantum mechanics and the interpretation of quantum mechanics. The latter includes such things as vacuum fluctuations and the existence of particles, which I previously argued may be wrong interpretations. What one then needs to do is to discard the current interpretation of quantum mechanics, consider only the mathematical formulation, and ask oneself what this mathematical formulation tells us about nature.

This brings me to the notion of virtual particles, which is based on both vacuum fluctuations and the existence of particles. There are many measurable physical effects that seem to point to the existence of virtual particles. Yet can that be true if there are no vacuum fluctuations and no particles? The point is that virtual particles are not necessarily the only possible explanation of these measurable physical effects. To see this we need to consider the mathematical formulation of quantum mechanics more carefully. (Here I include quantum field theory.)

Let’s take the Coulomb field of a charged particle as our example for this discussion. In classical electromagnetic theory this Coulomb field is taken as a static field (for a stationary particle), described by a smooth function of space. If another charged particle comes close to this particle it will experience a force due to this Coulomb field. The latter is something that we know from experiments.

In quantum mechanics (or rather quantum field theory) the Coulomb field is replaced by a cloud of virtual particles (virtual photons). The force that another particle would feel is now interpreted as an exchange of these virtual photons. At least, that is according to the interpretation of quantum physics. It does not follow directly from the mathematical formulation of (in this case) quantum field theory.

To explain why, I’ll use a little diagram, called a Feynman diagram, after Richard Feynman, its inventor.
This type of diagram is used as an aid in quantum field theory calculations. This particular Feynman diagram represents the scattering of two charged particles (electrons) via the exchange of a virtual photon. Each line represents a particle. There are two incoming electrons, two outgoing electrons and one virtual photon that is exchanged between the two electrons. As such, the diagram is interpreted as an interaction among particles. However, if one looks at the mathematical expression that this diagram represents, one finds that the lines actually represent plane waves. Each of the particles is actually a field that is expanded in terms of plane waves using Fourier theory. The diagram therefore represents the interaction among the plane waves of the different fields. It also involves summations (or integrations) over all such plane waves. There would in general be one such summation for each line in the diagram, five in total. However, one can restrict the direction of propagation of the ingoing and outgoing fields, which effectively removes the summations associated with those fields. The summation over the plane waves of the (virtual) photon field (the Coulomb field) would however still remain.

Richard Feynman was a staunch believer in particles, but I am sure he knew full well that these diagrams are actually representations of the interaction among the plane waves that make up the different fields.

The mathematical calculation expands the Coulomb field in terms of plane waves and determines their interaction with the plane waves of the electron fields. This Coulomb field is the same one that we find in the classical theory. Nowhere in the actual quantum field theory calculation does the existence of any particle, virtual or not, appear. By looking at these calculations we see that the existence of virtual particles does not as such play any role in any prediction made by quantum field theory. The fields that mediate the interaction in quantum physics do not need to consist of particles.
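
For the curious, here is a small numerical sketch of my own of what expanding the Coulomb field in terms of plane waves amounts to. It is only an illustration, not a quantum field theory calculation: the plane-wave (momentum-space) content of a screened Coulomb potential exp(-mu*r)/r is 4*pi/(q^2 + mu^2), which approaches the familiar 4*pi/q^2 form of the ordinary Coulomb field as the screening is removed.

    import numpy as np
    from scipy.integrate import quad

    # Illustration (my own): the 3D Fourier transform of a screened Coulomb
    # potential exp(-mu*r)/r reduces to the 1D integral
    #     V(q) = (4*pi/q) * integral_0^inf sin(q*r) * exp(-mu*r) dr,
    # which should equal 4*pi/(q^2 + mu^2). As mu goes to zero this becomes the
    # 4*pi/q^2 plane-wave coefficient of the ordinary Coulomb field.
    mu, q = 0.1, 2.0
    integral, _ = quad(lambda r: np.exp(-mu * r), 0, np.inf, weight='sin', wvar=q)
    print(4 * np.pi / q * integral)       # numerical plane-wave coefficient
    print(4 * np.pi / (q**2 + mu**2))     # analytic result: the two agree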

What particle?


A particle is a dimensionless point travelling on a world-line … at least according to Eugene Wigner (I cannot remember where I read that). A dimensionless point is a problematic thing if one wants to give it properties such as mass and charge. The mass density and charge density of the particle would have to be infinite, which is part of the reason for the infinities that one finds in quantum field theory.

Yet, when subjected to high energy deep inelastic scattering, these particles produce scale invariant behaviour up to arbitrarily high energies — a phenomenon called Bjorken scaling. This seems to suggest that they are point-like and dimensionless.

Let’s consider this carefully and be brutally honest with ourselves. What is it really that we are observing here? We don’t see any particles directly. We deduce their existence based on observations of scattering, absorption or some other type of interaction. In fact, without some form of interaction we simply won’t be able to make any observations of particles.

Now for the brutal honesty: what we really see is a localised interaction. Instead of observing a dimensionless point particle, what we in actual fact observe is a dimensionless interaction point. The notion of a dimensionless point particle is simply a way to explain why interactions are localised at dimensionless points. But is this the only possible explanation? Perhaps there are no particles, only fields. Perhaps the fields interact with each other (or with themselves) at these dimensionless points.

Why would these fields interact at dimensionless points, if there are no particles? Well, we know that these fields often have internal degrees of freedom (like phase or spin). These internal degrees of freedom usually allow topological defects, such as vortices and monopoles, to exist in these fields. These topological defects may actually mediate these interactions.
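
To make the idea of a topological defect a little more concrete, here is a small sketch of my own (a toy model, not part of the argument above): a complex field whose amplitude vanishes at a single dimensionless point while its phase winds by a full turn around that point.

    import numpy as np

    # A toy illustration (my own, not from the post): the complex field
    # psi = x + i*y = r*exp(i*phi) has zero amplitude at a single dimensionless
    # point (the origin) while its phase winds by 2*pi around that point -- the
    # simplest example of a topological defect (a vortex).
    def psi(x, y):
        return x + 1j * y

    # Winding number: accumulate the wrapped phase change around a closed loop.
    theta = np.linspace(0.0, 2.0 * np.pi, 1000, endpoint=False)
    xs, ys = 0.5 * np.cos(theta), 0.5 * np.sin(theta)
    phases = np.angle(psi(xs, ys))
    dphi = np.diff(np.concatenate([phases, phases[:1]]))
    dphi = (dphi + np.pi) % (2.0 * np.pi) - np.pi     # wrap each step to (-pi, pi]
    print(int(round(dphi.sum() / (2.0 * np.pi))))     # prints 1: one unit of winding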

That would explain why we always observe localised point-like interactions, without using the notion of a dimensionless point particle.

Vacuum does not fluctuate


From Heisenberg’s uncertainty principle we learn that the uncertainty in time (the time interval of a measurement) multiplied by the uncertainty in the energy that we measure is always larger than a constant of the order of Planck’s constant. Now the argument goes that, since one cannot make measurements that are more accurate than what the uncertainty principle allows, nature is free to violate energy conservation on shorter time scales. Particles with non-zero mass (hence, non-zero energy based on E=m c^2) can pop in and out of existence, as long as they exist for times shorter than Planck’s constant divided by their implied energy fluctuation.
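
To put a number to this standard argument (a back-of-the-envelope sketch of my own, using the reduced Planck constant), this is the lifetime it would assign to a fleeting electron-positron pair:

    # A back-of-the-envelope sketch of the standard argument described above
    # (the very argument this post goes on to criticise), using the reduced
    # Planck constant: the lifetime assigned to a fleeting electron-positron pair.
    hbar = 1.054571817e-34     # reduced Planck constant, J*s
    c = 2.99792458e8           # speed of light, m/s
    m_e = 9.1093837015e-31     # electron mass, kg
    E = 2 * m_e * c**2         # rest energy of an electron-positron pair, ~1.6e-13 J
    print(hbar / E)            # implied lifetime, roughly 6e-22 seconds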

This reasoning is severely flawed. It assigns to nature the ability to violate more than just energy conservation. In fact, it assumes nature can violate the mathematical foundation of the Heisenberg uncertainty principle itself. The notion of a vacuum fluctuation says that a (virtual) particle associated with a particular energy can exist for a time that is shorter than allowed by the uncertainty relation. That is to say, one could produce a function with a given frequency that exists for a shorter time than the time-bandwidth product allows. This is mathematically impossible and therefore contradicts the mathematical foundation of the Heisenberg uncertainty principle.

Fourier theory teaches us that a function and its Fourier transform are both complete representations of the information. The Fourier transform of a function does not provide additional information over and above the information that is already contained in the function itself. Since quantum theory is based on such a Fourier relationship, the same is true in nature. The temporal behaviour of a quantum system contains all the information in the system. There is no additional information that one can obtain by considering the energies of particles or anything else in that system. Smaller time intervals would obviously contain less information than larger time intervals. One cannot increase this amount of information by looking at the energy or frequency spectrum in that time interval. The same goes for the spectrum: by looking at smaller frequency (or energy) ranges one ends up with less information than what is available in larger frequency ranges and again, looking at the temporal variation within that frequency range cannot increase the amount of available information. This fact is encapsulated in the time-bandwidth product and thus also in the Heisenberg uncertainty principle.
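
As a tiny numerical check of that claim (my own illustration, nothing more), the discrete Fourier transform is invertible and energy preserving, so the spectrum carries exactly the same information as the signal:

    import numpy as np

    # The Fourier transform is invertible and energy preserving, so the spectrum
    # contains exactly the same information as the signal, no more and no less.
    rng = np.random.default_rng(0)
    x = rng.standard_normal(1024)
    X = np.fft.fft(x)

    print(np.allclose(np.fft.ifft(X).real, x))          # round trip recovers x: True
    print(np.allclose(np.sum(np.abs(x)**2),
                      np.sum(np.abs(X)**2) / x.size))    # Parseval's theorem: True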

What the time-bandwidth product (and by implication the Heisenberg uncertainty principle) actually tells us is that there is a fundamental limit on the amount of information contained in the description of any mathematical quantity over small intervals. Since nature is described by mathematics, the immediate implication is that if this limitation exists in mathematics then it also exists in nature. Nature cannot fluctuate over energy ranges that violate the Heisenberg uncertainty principle, because it does not have enough information to enable such a fluctuation.

So, there is no such thing as vacuum fluctuations that can hide within the Heisenberg uncertainty principle. Now perhaps this is merely a poor choice of terminology, because, although it does not actually `fluctuate,’ nature does allow something to exist over those small intervals or small ranges, something that has a measurable effect on how nature behaves. This plays the role of `virtual’ particles, but I’ll leave this topic for later.

Is Heisenberg uncertainty fundamental?


Take a guitar string and pluck it. The oscillations on the string have the shape of sine or cosine functions. One can use these sine and cosine functions to construct virtually any function. This follows from a mathematical technique called Fourier theory. One uses the Fourier transform of a function to determine the coefficients needed for all the different sine and cosine functions to reconstruct said function. The coefficients themselves form a function of frequency, which is called the spectrum.

In physics, sine and cosine functions are often extended to multiple dimensions to describe waves in space and time, called plane waves. Fourier theory still works for these plane waves and can be used to reconstruct any field that exists in space-time, such as the electromagnetic field.

The spectrum of a function usually has a specific width associated with it. The same is true for the function itself. There is an interesting relationship between the width of the function and the width of its spectrum. If one stretches the function so that it becomes wider, the effect on its spectrum is to make the spectrum narrower. In fact, the product of the width of a function and the width of its spectrum remains constant under such stretching. For every function there is such a constant, called the space-bandwidth product.

Now it turns out that one cannot have space-bandwidth products that are arbitrarily small. The smallest value of the space-bandwidth product is found for the Gaussian function. There exists no other function with a smaller space-bandwidth product.
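
Here is a small numerical sketch of this claim (my own illustration, using RMS widths and the discrete Fourier transform): a Gaussian sits right at the lower bound of the width-bandwidth product, while another function does not.

    import numpy as np

    # Using RMS widths, the width of a function times the width of its spectrum
    # (in angular frequency) is never below 1/2, and a Gaussian sits at that bound.
    def rms_width(axis, density):
        p = density / density.sum()
        mean = (axis * p).sum()
        return np.sqrt((((axis - mean) ** 2) * p).sum())

    x = np.linspace(-50, 50, 2 ** 14)
    dx = x[1] - x[0]
    k = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(x.size, d=dx))

    for name, f in [("gaussian", np.exp(-x ** 2 / 2)),
                    ("two-sided exponential", np.exp(-np.abs(x)))]:
        F = np.fft.fftshift(np.fft.fft(f))
        product = rms_width(x, np.abs(f) ** 2) * rms_width(k, np.abs(F) ** 2)
        print(name, product)    # gaussian: ~0.500, exponential: ~0.71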

When Max Planck discovered that there is a relationship between frequency and energy in black body radiation, he implicitly used Fourier theory to represent the radiation in terms of its frequency spectrum. As a result, quantum mechanics was formulated as a Fourier expansion of particles and fields in terms of plane waves. Fourier theory therefore lies at the foundation of quantum mechanics.

Since quantum mechanics is founded on Fourier theory, the properties of Fourier transforms are also built into quantum mechanics. The restriction that exists on space-bandwidth products therefore also exists in quantum mechanics. It manifests as the so-called Heisenberg uncertainty principle.

One can, for instance, express the Heisenberg uncertainty principle as a restriction on the product of the uncertainty in position and the uncertainty in momentum. While the uncertainty in position is given by the width of the position space function that represents an object, the uncertainty in momentum is given by the width of its spectrum. This is because the spatial frequencies (of which this spectrum is a function) are related to the momentum of the object via Planck’s constant. We see therefore that the Heisenberg uncertainty relation is nothing else but the mathematical restriction on the space-bandwidth product that exists for all functions.
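
Written in the standard textbook notation (my own summary of the paragraph above, assuming RMS widths and the reduced Planck constant as the intended conventions), the chain of reasoning is:

    \Delta x \, \Delta k \ \ge\ \tfrac{1}{2},
    \qquad p = \hbar k
    \quad \Longrightarrow \quad
    \Delta x \, \Delta p \ \ge\ \frac{\hbar}{2}.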

Our conclusion is that the Heisenberg uncertainty principle is not a fundamental principle of nature. It is a direct result of the fact that quantum mechanics is formulated in terms of Fourier theory.

Ominous omission


If one is to agree with Lee Smolin that physics is in trouble, and if that implies, as Lee proposes, that we need to complete the revolution started by Einstein and his contemporaries — go back and fill in the gaps that were left empty — then one would find oneself in a bit of a dilemma. Mmm, it now occurs to me that Lee Smolin neglected to discuss a rather pertinent aspect in his otherwise excellent book. Perhaps the omission was accidental, then again perhaps not.

What am I rambling on about? In short, it is something called pseudo science — that all too common activity whereby people produce `theories’ with the appearance of science, but which are not science. The dilemma is this: if one were to spend one’s time pondering those gaps and eventually come up with some proposals, how would one distinguish this from pseudo science?

I can almost hear the sneering chorus. For most physicists it is easy to identify a pseudo scientist. The hard part is to convince pseudo scientists that their theories are nonsense. If one wants to pin it down, one can simply impose the pinnacle of the scientific method: falsifiable experimental predictions.

This is, however, precisely where the problem lies. Having read The Trouble with Physics, with all its examples of modern-day physicists who are proposing extravagant theories, I came to look at people that I would have identified as pseudo scientists with a new perspective. Although Lee makes a strong case for falsifiable experimental predictions, not all these extravagant theories can make such predictions, at least not right from the start. What then distinguishes these extravagant theories from pseudo science? I can propose two possible criteria:

  1. Pseudo scientists are often not only unable to use their theories to make falsifiable predictions; they do not understand the concept of a falsifiable prediction and deliberately design their theories in such a way that they are impossible to falsify.
  2. Pseudo scientific theories usually don’t contain much technical depth. The mathematical formulation is often borrowed from existing theories and usually applied in a way that reveals a rather superficial understanding.

Both these criteria are on shaky ground. As for the first criterion, to determine whether some theoretical endeavour may one day be able to make falsifiable predictions could be rather difficult and may turn out to be a subjective judgement. And as for the second criterion, it is said that genius lies in simplicity. One cannot make a profound mathematical formulation a prerequisite for true science.

In fact, string theory is mathematically quite complicated. Yet, due to its lack of falsifiable predictions, and the lack of any prospect of it ever being able to make such falsifiable predictions, it is a good candidate for being judged a pseudo science. Not very flattering, I confess. Perhaps that is why Lee Smolin did not address this aspect in his book.

So why do I address this issue? It occurred to me that much of the discussion that I intend to have on the fundamental issues of physics will look very much like pseudo science. There won’t be any new predictions, at least not at first, and I do not intend to use complicated mathematics in these discussions.

So all that is left for me is to appeal to the possible reader to please bear with me.

Einstein Podolsky Rosen


Albert Einstein during a lecture in Vienna in ...

The first aspect of quantum mechanics that I want to discuss is the one that, in some way, we are most certain of. We are most certain of this aspect thanks to some remarkable experimental observations.

In 1935 Albert Einstein, together with Boris Podolsky and Nathan Rosen, published an article [Physical Review Vol. 47, p 777 (1935)] in which they described an experiment that they believed could sidestep the Heisenberg uncertainty principle. The idea is based on the entanglement of the quantum states (such as position and momentum) of two particles. Quantum entanglement implies that any particular quantum state of one particle fixes that of the other. In this way, if one were to measure the position (or momentum) of one of the particles, one would know the position (or momentum) of the other. So one can then measure the position of particle A and the momentum of particle B, thereby being able to determine the position and momentum of both particles to an accuracy that could in principle exceed what Heisenberg uncertainty allows.

Alain Aspect on a visit to Tel Aviv University...

Although this was a profound idea, it had to wait until the 1980s before it could be experimentally tested, through the work of Alain Aspect (using the polarisation of light instead of position and momentum) [Physical Review Letters, Vol. 49, pp. 91-94 (1982)] … with surprising results.

Before that time, John S. Bell reformulated the ideas of Einstein, Podolsky and Rosen in terms of an inequality for the probabilities of observations that can be made in such an experiment. Bell’s inequality is based on two rather innocent-looking assumptions. The first assumption is that there is a unique reality. In other words, according to this assumption I am either at home or at work; I cannot be at work and at home at the same time. My tea cup cannot be full of tea and empty at the same time.

The other assumption is that all interactions are local — one thing must be in contact with something else to interact with it. If I pick up a brick I have to make physical contact with the brick. Perhaps I could use some force field, but even then the force field somehow has to be in physical contact with the brick. No “spooky action at a distance” or telekinesis is allowed.

 

Four combinations of which one has been ruled out by the EPR experiment

When Alain Aspect performed his experiment he discovered that the results violate Bell’s inequality. 😯 In other words, the experiment revealed that at least one of those two assumptions must be wrong. Either there are multiple realities or there are non-local interactions.

The impact of these results is astounding. To understand why, one needs to adopt a perspective due to Karl Popper, which states that one can never verify a theory. At best one can only falsify theories. According to this perspective, science progresses through a process of elimination. Well, if that is the case then Alain Aspect has eliminated a quarter of all possible theories with this one experiment. Remarkable progress indeed!

Of the three remaining possible combinations, the popular combination is the one that allows multiple realities while enforcing locality. This is what the Copenhagen interpretation implies. Another interpretation would be that there is a unique reality but that interactions could be non-local. To give up on both assumptions is too horrible to contemplate.

————————————————

Now if you are like me, you probably won’t believe a single word about these experimental results. How can any experiment give such a profound result? I would have wanted to know exactly how this experiment was performed, so that I could understand for myself how one can draw such remarkable conclusions from the results. Well, if that is how you feel, let me try to explain in the simplest possible language how this experiment was able to come to such a profound conclusion.

The experiment was made possible by the fact that there exist certain nonlinear processes that produce two output photons out of a single input photon. These two photons are then entangled by the requirement that their polarisation states must be orthogonal. One can then use a polariser (or analyser) to influence one of the photons before it is observed by a detector. Assume that the polariser is oriented at an angle of 45 degrees with respect to the orientation of the linearly polarised incident light. Then each photon has a 50% probability to pass through it and be observed.

One needs to make sure that every observed pair of photons is indeed a pair of entangled photons (i.e. a pair that originated from the same input photon) and not just two separate photons that originated from different input photons. For this reason the light intensity is reduced until photons are observed one by one. Moreover, the two detectors are both connected to a coincidence counter that counts only those photons that are measured at the same time and rejects those cases where only one photon is registered at one of the detectors.

 

EPR experimental setup using polarisation entangled photons

The surprising result of this experiment is that when the two polarisers are oriented in the same direction, no coincident observations are made. This is surprising because in some cases the photons at both sides could be oriented at 45 degrees with respect to the polariser, which should imply that there is a 50% chance to see some coincidence measurements. Instead none is observed. The implication is that when the photon on one side passes through the polariser it is transformed or projected into the polarisation state that matches the orientation of the polariser. At the same time, the other photon with which it is entangled is transformed or projected into the polarisation state that is orthogonal to that of the first photon. As a result the second photon cannot pass through its polariser. So it seems that the influence of the polariser on one photon is instantaneously communicated to the other photon regardless of how far away it is from the first photon. This experiment has been repeated several times since it was first performed by Alain Aspect. Every time the same surprising violation of Bell’s inequality is obtained.
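
For readers who like to see numbers, here is a minimal sketch of my own (it evaluates the quantum mechanical predictions for a singlet-like state of two photons with orthogonal polarisations; it is not a simulation of Aspect's actual apparatus). It reproduces the zero coincidence rate for parallel polarisers described above and evaluates the CHSH form of Bell's inequality, which any local-realistic model must keep at or below 2.

    import numpy as np

    # Quantum predictions for two photons entangled with orthogonal polarisations
    # (a singlet-like state), as described in the post above.
    def coincidence_probability(a, b):
        # Probability that both photons pass polarisers set at angles a and b.
        return 0.5 * np.sin(a - b) ** 2

    def correlation(a, b):
        # Polarisation correlation predicted by quantum mechanics.
        return -np.cos(2 * (a - b))

    deg = np.pi / 180
    print(coincidence_probability(0 * deg, 0 * deg))   # parallel polarisers: 0.0

    # CHSH combination: local realism requires the magnitude to be at most 2.
    a, a2, b, b2 = 0 * deg, 45 * deg, 22.5 * deg, 67.5 * deg
    S = (correlation(a, b) - correlation(a, b2)
         + correlation(a2, b) + correlation(a2, b2))
    print(abs(S))                                      # ~2.83, violating the bound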

So now there are two possibilities. Either it is possible for the photons to have a non-local interaction, or there are multiple realities allowing all possible orientations at the same time. At the beginning of this post I stated that this aspect of quantum mechanics is the one that we are most certain of. However, there are actually three different combinations that we can choose from. What we are certain of is that the combination of local realism is ruled out.

I hope to come back to this remarkable experimental result and discuss the consequences.

Quantum mechanics


In his book The Trouble with Physics, Lee Smolin says there is a crisis in physics, because we are not making as much progress as we used to. Perhaps what we need to do is to go back and review what we have accomplished; recheck what we believe we understand; clean up some of the messy corners.

No doubt, one of the least comfortable parts of the knowledge that we have in physics today is quantum mechanics. This is not just me. Richard Feynman said that nobody really understands quantum mechanics. Gerard ‘t Hooft believes that quantum mechanics is not fundamental. Most physicists have some sort of a love-hate relationship with quantum mechanics.

Then why do we use it? The fact is, it works. Quantum mechanics is a mathematical formalism (not a theory) that is extremely successful in making predictions. It is an inevitable consequence of the scientific method that it optimizes mathematical formalisms or theories for their ability to make successful predictions, not for their ability to provide understanding. If we also somehow gain the latter, then it is a bonus. So if we want understanding, we need to aim for it specifically, almost as an ad hoc motive in addition to `scientific progress.’

Well, that is precisely what I have in mind: to revisit quantum mechanics and try to see if one can somehow increase our understanding of it. At the very least one can try to remove the misconceptions that may exist. The approach is to revisit the principles on which quantum mechanics is based. What are the principles of quantum mechanics? In his two volumes on quantum field theory, Steven Weinberg considered quantum mechanics as a principle in itself. However, I believe that a principle should be something simple that one can phrase in a simple sentence. The whole construct that is quantum mechanics is too complicated to pass as a single principle.

So let’s see if we can compile a list of principles that can act as a foundation for the formalism of quantum mechanics. To do this I intend to discuss the following aspects that I believe play a central role in quantum mechanics:

Hopefully these discussions will not only lead to a succinct list of principles, but will also clear up some of the misleading concepts in quantum mechanics.