Observation is more important than model; if we take the model too seriously, we can be led astray. It's much like extending a metaphor too far.
We observe double-slit diffraction and model it with the wave-function. This doesn't preclude other models, and some of those models will be more intuitive than others. The model we use may only give us a slice of insight. We can model a roll of a die with a function with 6 strong peaks and consider the state of the die to be in superposition. The fact that the model is a continuous real function is an artifact of the model, a weakness not a strength. We are modeling a system whose concrete state is unknown between measurements (the die is fundamentally "blurred"), and we keep expecting more from the model than it wants to give.
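The die picture is easy to make concrete. A toy Python sketch (purely illustrative, and a classical model, not quantum mechanics): the pre-measurement state is a distribution with six equal peaks, and "measurement" samples it and collapses it to a definite face.

```python
import random

# The "blurred" die between rolls: a distribution with six equal peaks.
state = {face: 1 / 6 for face in range(1, 7)}

def measure(dist):
    """Sample one concrete outcome; afterwards the state is definite."""
    faces = list(dist)
    outcome = random.choices(faces, weights=[dist[f] for f in faces])[0]
    return {face: (1.0 if face == outcome else 0.0) for face in dist}

collapsed = measure(state)
```

Before measure() the model only carries probabilities; afterwards exactly one face has weight 1, which is all "collapse" amounts to in this classical toy.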
Programmers may have better models, actually. The world is a tree where the structure of a node births a certain number of discrete children at a certain probability, one to be determined "real" by some event (measurement), but it says little about "reality". The work of the scientist is to enumerate the children and their probabilities for ever more complex parent nodes. The foundations of quantum mechanics may be advanced by new experiments, but not, I think, by staring at the models hoping for inspiration.
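A minimal sketch of that tree picture, with made-up names (nothing here is drawn from an actual physical theory): a node enumerates discrete children with probabilities, and a measurement event selects one as "real".

```python
import random

class Node:
    """A parent state whose structure births discrete children with probabilities."""
    def __init__(self, label, children=None):
        self.label = label
        self.children = children or []  # list of (probability, Node) pairs

    def measure(self):
        """A measurement event: pick one child as 'real'."""
        probs = [p for p, _ in self.children]
        nodes = [n for _, n in self.children]
        return random.choices(nodes, weights=probs)[0]

# A prepared spin as a parent with two equally likely discrete children.
root = Node("prepared spin", [(0.5, Node("up")), (0.5, Node("down"))])
outcome = root.measure()
```

The scientist's job, in this picture, is filling in the children and their probabilities for ever more complex parents; a many-worlds reading would just keep every child instead of discarding the unmeasured ones.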
The models of quantum mechanics have already withstood experiments to a dozen decimal places. You aren't going to find departures just by banging around in your garage; you just can't generate enough precision.
The only way forward at this point is to start with the model and design experiments focusing on some specific element that strikes you as promising. Unless you're staring at the model you're just guessing, and it's practically impossible that you're going to guess right.
>You aren't going to find departures just by banging around in your garage
This kind of rhetoric saddens me. Someone says "design an experiment" and you jump to the least charitable conclusion. That people do this is perhaps understandable, but to do it and not get pushback leads to it happening more and more, to the detriment of civil conversation.
No, the experiment I had in mind would take place near the Schwarzschild radius of a black hole. This would require an enormous effort, and (civilizational) luck to defy the expectations set by the Drake equation/Fermi paradox. It's something to look forward to, even if not in our lifetimes!
I mean you did just suggest that classical QM can be supplanted by your heavily underspecified finite(?)-state model for which you provide essentially no details, you must admit that's pretty crank-y behaviour.
This is one of the reasons I believe science and technology as a whole are on an S-curve. This is obviously not a precise statement and more of a general observation, but each step on the path is a little harder than the last.
Whenever a physics theory gets replaced it becomes even harder to make an even better theory. In technology low hanging fruit continues to get picked and the next fruit is a little higher up. Of course there are lots of fruits and sometimes you miss one and a solution turns out to be easier than expected but overall every phase of technology is a little harder and more expensive.
This actually coincides with science. Technology is finding useful configurations of science, and practically speaking there are only so many useful configurations for a given level of science. So the technology S-curve is built on the science S-curve.
I don't think this is strictly true. Rather, the problem seems to be that we, at some point, invariably assume the truth of something that is false, which makes it really difficult to move beyond that because we're working from false premises, and relatively few people go out of their way to go back and challenge/rework every single assumption, especially when those assumptions are supported by decades (if not centuries) of 'progress.'
An obvious example of this is the assumption of the geocentric universe. That rapidly leads to ever more mind-bogglingly complex phenomena like multitudes of epicycles, planets suddenly turning around mid-orbit, and much more. It turns out the actual physics is far simpler, but you have to get past that flawed assumption.
In more modern times relativity was similar. Once it became clear that the luminiferous aether was wrong, and that the universe was really friggin weird, all sorts of new doors opened for easy access. The rapid decline in progress in modern times would seem most likely to suggest that something we are taking as a fundamental assumption is probably wrong, rather than that the next door is just unimaginably difficult to open. This is probably even more true given the vast number of open questions for which we have de facto answers, yet those answers seem to defy every single test of their correctness.
---
All that said, I don't disagree that technology may be on an S-curve, but only because I think the constraints on 'things' will be far greater than the constraints on knowledge. The most sophisticated naval vessel of modern times would look impressive but otherwise familiar to a seaman of hundreds or perhaps even thousands of years ago. Even things like the engines wouldn't be particularly hard to explain, because they would have known full well that a boiling pot of water can push off its top, which is basically 90% of the way to understanding how an engine works.
It's true that Ptolemaic cosmology stuck thinkers in a rut for a very long time; but what got us out of that rut was observation (and simplification). Copernicus saw that heliocentrism led to a simpler model that fit observation better (ironically he wanted to recover Ptolemy's perfectly circular orbits!). In turn, Kepler's perfectionism led him to ditch the circular orbit idea to yield the first accurate description of orbits as ellipses. Yes, transgression against long-held belief was necessary to move forward, but in every case the transgression explained observation. Transgression itself is undesirable. In fact, transgression unmotivated by observation is what powers the dark soul of the "crank", who is at best a time-waster and at worst a spreader of mental illness.
Even Einstein did not produce special relativity out of whole cloth. He provided a consistent conceptualization of Lorentz contraction, itself proposed to explain observed discrepancies in measurements of light (the Michelson-Morley null result). The same could be said of the photoelectric effect, the ultraviolet catastrophe, and QM.
All this to say that your statement "The rapid decline in progress in modern times would seem most likely to suggest that something we are taking as a fundamental assumption is probably wrong" is unsupported. Nothing could be more popular than questioning fundamental assumptions in science today!
It could very well be that, as Sean Carroll puts it, we really know how everything larger than the diameter of a neutron works! Moreover, we know that even if we find strangeness at tiny scales, our current theories WILL remain valid approximations, just like Newtonian mechanics is a valid approximation of special and general relativity. Progress will not come from a rogue genius finding something everyone missed and boldly questioning long-held assumptions. Scientific revolution first requires an observation inconsistent with known models, but even the LHC hasn't given us even one of those. There is reason to think that GR, QM, and the standard model are all there is... until we do some experiments near a black hole!
> Copernicus saw that heliocentrism led to a simpler model that fit observation better.
That's not true, he didn't.
The geocentric model of the time was a better fit to the data than the Copernican model. What the Copernican model had was simplicity (at some cost to observational fidelity).
Making the heliocentric model approach (and breach) the accuracy obtained by the geocentric model took a lifetime of work by many people.
As a kinematic model (description of the geometry of motions) as observed from Earth's reference frame geocentric is still pretty darn accurate. There's a reason why it is so. Compositions of epicycles are a form of Fourier analysis -- they are universal approximators. They can fit any 'reasonably well behaved' function. The risk is, and it's the same risk with ML, deep neural nets, that one (i) could overfit and (ii) it could generate a model with high predictive accuracy without being a causal model that generalises.
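The "epicycles are universal approximators" point can be shown directly: a discrete Fourier transform writes any sampled closed path as a sum of circles riding on circles, so with enough epicycles the fit to the data is exact, which is exactly the overfitting risk. A rough stdlib-only sketch:

```python
import cmath
import math

def dft(points):
    """Decompose a sampled path (complex numbers) into epicycle coefficients."""
    n = len(points)
    return [sum(p * cmath.exp(-2j * math.pi * k * t / n)
                for t, p in enumerate(points)) / n
            for k in range(n)]

def epicycles(coeffs, t):
    """Evaluate the epicycle model: circle k has radius |c_k| and turns k times."""
    n = len(coeffs)
    return sum(c * cmath.exp(2j * math.pi * k * t / n)
               for k, c in enumerate(coeffs))

# Sample an elliptical orbit and fit it with n epicycles.
n = 32
orbit = [complex(2 * math.cos(2 * math.pi * t / n),
                 math.sin(2 * math.pi * t / n)) for t in range(n)]
coeffs = dft(orbit)
max_err = max(abs(epicycles(coeffs, t) - orbit[t]) for t in range(n))
```

The fit is numerically perfect at every sample point, yet the model says nothing causal about why the planet moves; it would fit a square orbit just as happily.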
The heliocentric model was proposed much, much earlier than Copernicus, but the counterarguments were non-ignorable. Reality, it turned out, was very surprising and unintuitive.
Truth be told, I don't know much about Copernicus. He may indeed have been right but for the wrong reasons! If so, he's a very good example against my point that observation must precede successful revolution. It seems strange that the Catholic church took him so seriously if his claim was supported by his enthusiasm and not observation. It's definitely something I'd like to learn more about - any book recommendations?
This history is absolutely fascinating. Let me find a blog post by Baez that covers a lot of that history.
I don't think this history says anything against your point -- sometimes the time is just not right for the idea -- and even classical science can be very unintuitive and weird, so much so that common sense seems like very strong counter arguments against what eventually turn out to be better models.
I of course learned this over many books, but the mind blanks out over which one to suggest. I think biographies of Copernicus and Kepler would be good places to start.
HN, do you know what happened to John Baez's blog that listed his multi-part blog posts? They are a treasure trove that I do not want to lose. Azimuthproject too seems to have disappeared.
As a tangential hit on this issue, the relationship between the Catholic Church and science [1] is an interesting read. It's nowhere near as antagonistic as contemporary revisionist takes would suggest. In particular, the most famed example of this is Galileo (whose name is mentioned no less than 146 times on that fairly short page...), yet that was far more about interpersonal issues than his concepts being an affront to theology. He wrote a book that, through a barely veiled proxy, called the Pope (at the time very much one of his supporters) a simple-minded idiot. Burning bridges is bad enough, but burning one you're standing on is lunacy.
If one does genuinely believe in a God then the existence of science need not pose a threat to that, since there's nothing preventing one from believing that God also then created the sciences and rationality of the universe. The classical 'gotchas' like 'Can God create a stone so heavy that he could not lift it?' were trivial to answer by simply accepting that omnipotence does not extend to things which are logically impossible, like a square circle.
I especially like your last paragraph. Even if our fundamental assumptions are wrong, current theories still work very well within appropriate bounds. And those bounds basically contain all practical scenarios here on earth. That's a big reason why it's hard to make progress on string theory, because we can't create scenarios extreme enough here on earth to test it.
So even if our fundamental assumptions are wrong and some new theory is able to explain a bunch of new stuff, chances are it won't impact the stuff we can practically do here on earth, because scientists have already been doing the most extreme experiments they can, and so far progress is still stalled on fundamental physics.
Copernicus and Kepler did interpretation, not observation: they explained observations, but geocentrism explained observations too, so heliocentrism wasn't unquestionably superior.
Heliocentrism from its earliest formulation was pretty bad for many reasons, including, as you mentioned, the desire to maintain circular orbits, as well as uniform velocities, epicycles, and more. You could easily pick a million holes in heliocentrism to 'disprove' it. And the geocentric view, as convoluted as it was, was observably accurate and predictive, with 'holes' being plugged by simply having the entire dysfunctional model absorb them - e.g. by treating retrograde motion as a natural phenomenon, and otherwise just adding more epicycles.
Heliocentrism was most fundamentally driven by somebody with extremely poor interpersonal skills (which, much more than the theory itself, is why he spent his final days under house arrest), moving forward on his own somewhat obsessive bias.
Similarly with relativity. I have no idea what you mean by a 'consistent conceptualization' of Lorentz contraction, but length contraction was a completely ad hoc explanation for the Michelson-Morley experiment. Its correctness was/is more incidental than anything else. Einstein did not cite Lorentz (or anybody, for that matter), and I do not think that was unfair or egotistical of him.
--
I'm also unsure of what you're referencing with Sean Carroll, but I'd offer a quote from Michelson of the Michelson-Morley experiment saying essentially the same, "The more important fundamental laws and facts of physical science have all been discovered, and these are now so firmly established that the possibility of their ever being supplanted in consequence of new discoveries is exceedingly remote.... Our future discoveries must be looked for in the sixth place of decimals."
So convinced was Michelson that the 'failure' of his experiment was just a measurement issue that he made that comment in 1894, nearly a decade after his experiment and shortly before physics and our understanding of the universe exploded in revolutionary fashion thanks to a low-ranking patent clerk.
Max Planck famously said, "A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it."
Now we know how to prevent it: popularize ideas like "physics is mathematics", "shut up and calculate", "it's useless philosophy, not worth thinking about", "nobody can understand it, so it's useless to even try". Also a nice excuse for ignorance.
>I have no idea what you mean by a 'consistent conceptualization' of Lorentz contraction, but length contraction was a completely ad hoc explanation for the Michelson-Morley experiment. Its correctness was/is more incidental than anything else. Einstein did not cite Lorentz (or anybody, for that matter), and I do not think that was unfair or egotistical of him.
In "On the Electrodynamics of Moving Bodies"[1] Einstein checks his derivation against Lorentz contraction. It's on page 20 of the referenced English translation. Lorentz's model was ad hoc; Einstein derived it from only 2 postulates (the relativity principle; c invariance). Lorentz was indeed cited, and the cite is useful to connect Einstein's theory to real-world observation. This is true whether or not you want to get pedantic about the meaning of "cite" vs "reference".
> The rapid decline in progress in modern times would seem most likely to suggest that something we are taking as a fundamental assumption is probably wrong, rather than that the next door is just unimaginably difficult to open.
We actually know we have:
Bell’s inequality tells us that the universe is non-local or non-real. We originally preferred to retain locality (ie, Copenhagen interpretation) but were later forced to accept non-locality. But now we have a pedagogy and machinery built on this (incorrect) assumption — which people don’t personally benefit from re-writing.
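For concreteness, the CHSH form of Bell's inequality fits in a few lines. Any local hidden-variable model gives S <= 2; the quantum singlet-state prediction E(a,b) = -cos(a-b) reaches 2*sqrt(2) at the standard angle choices. (This just evaluates the textbook prediction; it is not a simulation of an experiment.)

```python
import math

def E(a, b):
    """QM prediction for spin-singlet correlations at analyser angles a, b."""
    return -math.cos(a - b)

# Standard angle choices that maximise the CHSH violation.
a, a2 = 0.0, math.pi / 2
b, b2 = math.pi / 4, 3 * math.pi / 4
S = abs(E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2))
# Local hidden variables require S <= 2; QM predicts S = 2*sqrt(2) ~ 2.83.
```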
Science appears trapped in something all too familiar to SDEs:
A technical design choice turned out to be wrong, but a re-write is too costly and risky for your career, so everyone just piles on more tech debt — or modern epicycles.
And perhaps that’s not a bad thing, in and of itself. Eg, geons were initially discarded because the math doesn’t work out — but with the huge asterisk that they might still be topologically stabilized. But the math there is hard and so it makes sense to continue piling onto the current model until enough advances in modeling (eg, 4D anyons) allow for exploring that idea again.
Similar to putting off moving tech stacks until someone else demonstrates it solves their problems.
But at least topological geons would explain one question: why does space look like geometry but particles look like algebra?
Because topological surgery looks like both!
- - - -
> clear that the luminiferous aether was wrong
Another interpretation is that the aether exists, but we’re also made of aether stuff — so we squish when we move, rather than rigidly moving through it (as per the theory tested by Michelson-Morley). That squishing cancels out the expected measurement in MM. LIGO (a scaled MM experiment) then works because waves in the aether squish and stretch us in a detectable way.
Modern theories are effectively this: everything is fields, which we believe to be low-energy parts of some unified field.
It's just accelerated. AI is bound by physics just like everything else.
The S-curve is really about fundamental limits. Let's say ASI helps us make multiple big leaps ahead, I mean mind-blowing stuff. That still doesn't change the fact that there must be a limit somewhere. The idea that science and tech are infinite is pure science fiction.
The first turn in an S-curve can easily look like an exponential. ASI has physical limitations, so I don’t see why it wouldn’t take an S-curve as well, although at a much different rate than human intelligence.
That is true for classical probability, but the idea that unknown quantities determine the outcomes in quantum mechanics has been disproven, given that the speed of light is a true limit on communication speed. This is known as Bell's theorem.
Reality can be interpreted as non-local. There has been no conclusive proof it isn't.
c isn't a limit on the kind of non-locality that is required, because you can have a mechanism that appears to operate instantaneously - like wavefunction collapse in a huge region of space - but still doesn't allow useful FTL comms.
Bell's Theorem has no problem with this. Some of the Bohmian takes on non-locality have been experimentally disproven, but not all of them.
The Copenhagen POV is that particles do not necessarily exist between observations. Only probabilities exist between observations.
So there has to be some accounting mechanism somewhere which manages the probabilities and makes sure that particle-events are encouraged to happen in certain places/times and discouraged in others, according to what we call the wavefunction.
This mechanism is effectively metaphysical at the moment. It has real consequences and was originally derived by analogy from classical field theory, with a few twists. But it is clearly not the same kind of "object" as either a classical field or particle.
There may be no conclusive proof, but it's a philosophically tough pill to swallow.
Non-locality means things synchronise instantly across the universe, can go back in time in some reference frames, and yet reality _just so happens_ to censor these secret unobservable wave function components, trading quantum for classical probability so that it is impossible for us to observe the difference between a collapsed and uncollapsed state. Is this really tenable?
Strip back the metaphysical baggage and consider the basic purpose of science. We want a theoretical machine that is supplied a description about what is happening now and gives you a description of what will happen in the future. The "state" of a system is just that description. A good _scientific_ theory's description of state is minimal: it has no redundancy, and it has no extraneous unobservables.
My understanding is that it is not that simple: pilot-wave theories are not traditional hidden-variable theories. While some setups look very simple in pilot-wave theory compared to, say, the Schrödinger equation, other setups are as unintuitive in pilot-wave theory as the Schrödinger equation is elsewhere.
My lightly held conclusion is that if it really were a full and more straightforward solution, it would dominate the conversation more than it does now. This opinion was formed reading some primary sources but mostly reviews and comparisons of QM theories. Unlike with other methodologies, I have never worked through a full QM example problem in pilot-wave theory.
I'm not sure what the point is you're trying to make. OP claimed
> the idea that unknown quantities are determining the outcomes in quantum mechanics has been disproven in the event of the speed of light being a true limit on communication speed.
and I provided an immediate counterexample. Yes, Bell's Theorem and its exact assumptions are not entirely straightforward but let's please stop propagating those falsehoods that die-hard proponents of the Copenhagen interpretation commonly propagate.
Let me throw in "Hydrodynamic Quantum Analogs" [1] as a fascinating review of how quantum effects emerge in experiments with bouncing oil drops on liquid. This is fully a pilot wave driven experiment and there has been a lot of academic work analyzing the system and trying to fit it into the de Broglie-Bohm formulations of quantum dynamics.
To quote section 10.2: "The [experimental] system represents a classical realization of wave–particle duality as envisaged by de Broglie, wherein a real object has both wave and particle components."
We've already got all those fields interacting in the real world, so I don't find it very far fetched that quantum mechanics emerges from their fully classically described interactions, probably expressed in some really gnarly 4D math.
Tim Maudlin's "Philosophy of Physics: Quantum Theory" makes for an excellent read! It addresses tons of questions which are rarely answered (let alone asked) in your run-of-the-mill university-level QM class.
> The foundations of quantum mechanics may be advanced by new experiments, but not, I think, by staring at the models hoping for inspiration.
To come up with new experiments that might shed light it certainly helps to spend time exploring the models to come up with new predictions that they might make. Sure, one can also come up with new experiments based only on existing observations, but it's most interesting when we can make predictions, as testing those advances some theories and crushes others.
The trouble with QM is with its interpretations, not with the accuracy of its predictions. The latter informs interest in the former. QM works, but the models imply that nature is neither "local" - e.g. entanglement experiments undermine hidden variables - nor "real" - e.g. a particle does not have a momentum (or position) until you measure it. These physical properties are not just hidden, they are undefined. These implications fly in the face of basic macroscale intuitions about what "physical reality" means, which makes it interesting. Inconsistency is a signal that we have discoveries yet to make. Note that "many worlds" people think there is no inconsistency - my sketch of a model is fully consistent with that interpretation, if you wish, by simply assigning a new universe to every child node in which the node is reached.
What you say doesn't quite correspond to quantum physics as it's known. Quantum physics is quantitative and precise, so it's difficult to say there's something undefined there. It doesn't suggest nonlocality, absence of hidden variables means only absence of hidden variables. It doesn't suggest antirealism, if only due to precision, you can say it doesn't work how you want, but at worst this makes it unintuitive. Conversely Dirac formalism works as if quantum state exists in itself in precise form, which has a good compatibility with basic macroscale intuitions about what "physical reality" means.
But quantum physics can't predict exactly where the individual dots on the detector will be, only their distribution. That does not sound totally quantitative and precise and defined. You would not accept such predictions for macroscopic objects :)
At least it shouldn't be called nonlocal just because of the erroneous rumor that Bell proved that quantum physics is nonlocal; randomness, nonlocality and retrocausality are not directly observed.
Would you be satisfied if the theory clearly states: "At the time of measurement, the position of the photon interaction is determined by randomly sampling from the quantum distribution"?
Your 6-sided die example sort of brings some focus to his argument that "it's not a real wave, it's a math wave". The result of a die roll exists more in our minds as a "math die": for most people, if you rolled and the die fell into a sewer and was lost, you wouldn't consider the roll complete until you grabbed a different die and rolled it. The roll is more attached to the person rolling it and to what the number affects.
>The fact that the model is a continuous real function is an artifact of the model, a weakness not a strength.
The squared modulus of the wave function is a probability density. The wavefunction is a continuous function of position because position is modeled as a continuous real variable. The idea of the wavefunction as a function of position is generally supported by the fact that it can be used to predict the measurement results of diffraction experiments like the double-slit experiment, but also practically the whole field of X-ray diffraction.
There is not just one experimental result that is explained by wavefunctions. There are widely used measurement techniques whose outcomes are calculated according to the quantum properties of matter — like X-ray diffraction and Raman scattering — which are widely considered to be extremely reliable. There is a good reason to explain the model of reality expressed by the equations as clearly as possible, because we want people to be able to use the equations.
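As a concrete illustration of "the squared modulus predicts diffraction results", here is a toy two-slit intensity calculation; the geometry numbers are arbitrary, chosen only to keep the arithmetic visible:

```python
import cmath
import math

WAVELENGTH = 0.5     # all lengths in the same arbitrary units
SLIT_SEP = 5.0
SCREEN_DIST = 100.0

def intensity(x):
    """Sum the complex amplitudes from both slits; intensity is |psi|^2."""
    r1 = math.hypot(SCREEN_DIST, x - SLIT_SEP / 2)
    r2 = math.hypot(SCREEN_DIST, x + SLIT_SEP / 2)
    psi = (cmath.exp(2j * math.pi * r1 / WAVELENGTH)
           + cmath.exp(2j * math.pi * r2 / WAVELENGTH))
    return abs(psi) ** 2

# At the centre the two path lengths are equal, so the amplitudes add in
# phase and the intensity is 4x that of a single slit, not 2x.
central = intensity(0.0)
```

Sweeping x across the screen traces out the familiar fringe pattern; removing one slit's term removes the fringes entirely.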
Plenty of people (though certainly not all) expect quantum mechanics to be eventually modified to have a consistent theory of gravity. But physicists have experience with this. Special relativity and classical quantum mechanics were both more complex than Newtonian (classical) mechanics, and quantum field theory is more complicated than either. General relativity is substantially more involved than special relativity. It is likely that further extensions will continue to get worse.
The model of reality taught by Newtonian (classical) mechanics is also still widely discussed and used in introductory physics courses and many areas of physics (such as fluid dynamics) and engineering. This model also discusses position on the real line. Even though classical mechanics had to be modified, the use of Cartesian coordinates and real numbers turned out to be durable.
Usually the finitists will formally "rescue" countability by suggesting that the world could exist on the computable numbers, which are countable and invariant under computable rotations. But the computable numbers are a very unsatisfying model of reality, and have a lot of the same "weirdness" as the real numbers. Therefore they suggest that some other model must exist without giving a lot of specifics. Why this should be somehow helpful and not injurious to the pedagogy of physics is not clear.
Doesn't the difference between measurement and observation stem from an extension of the double-slit experiment discussed in this article?
If you place a detector on one of the two slits in the prior experiment (so that you measure which slit each individual photon goes through), the interference pattern disappears.
If you leave the detector in place, but don't record the data that was measured, the interference pattern is back.
> If you leave the detector in place, but don't record the data that was measured, the interference pattern is back.
This is not remotely true. It looks like you read an explanation of the quantum eraser experiment that was either flawed or very badly written, and you're now giving a mangled account of it.
I have heard similar things but this is THE most deeply weird result and I’ve never heard a good explanation for the setup.
A lot of people pose it as a question of pure information: do you record the data or not?
But what does that mean? The “detector” isn’t physically linked to anything else? Or we fully physically record the data and we look at it in one case vs deliberately not looking in the other? Or what if we construct a scenario where it is “recorded” but encrypted with keys we don’t have?
People are very quick to ascribe highly unintuitive, nearly mystical capabilities with respect to “information” to the experiment but exactly where in the setup they define “information” to begin to exist is unclear, although it should be plain to anyone who actually understands the math and experimental setup.
It's a little simpler than you're thinking: only fully matching configurations (of all particles etc) can interfere. If you have a setup where a particle can pass through one of two slits and then end up in the same location (with the same energy etc) afterward, so that all particles everywhere are in the same arrangement including the particle that passed through one of the slits, then these two configurations resulting from the possible paths can interfere. If anything is different between these two resulting configurations, such as a detector's particles differently jostled out of position, then the configurations won't be able to interfere with each other.
An interesting experiment to consider is the delayed-choice quantum eraser experiment, in which a special detector detects which path a particle went through, and then the full results of the detector are carefully fully stomped over so that the particles of the detector (and everything else) are in the same exact state no matter which path had been detected. The configurations are able to interfere once this erasure step happens and not if the erasure step isn't done.
Another fun consequence of this all is that we can basically check what configurations count as the same to reality by seeing if you still get interference patterns in the results. You can have a setup where two particles 1 and 2 of the same kind have a chance to end up in locations A and B respectively or in locations B and A, and then run it a bunch of times and see if you get the interference patterns in the results you'd expect if the configurations were able to interfere. Successful experiments like this have been done with many kinds of particles including photons, subatomic particles, and atoms of a given element and isotope, implying that the individual particles of these kinds have no unique internal structure or tracked identity and are basically fungible.
If anything is different between the two resulting configurations of possibly affected particles, such as the state of the particles of the detector, then interference can't happen. It's not just about whether the individual particle going through one of the slits is in an identical location.
An important thing to realize is that interference is a thing that happens between whole configurations of affected particles, not just between alternate versions of a single particle going through the slit.
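That rule, amplitudes add only when the whole resulting configuration matches and probabilities add otherwise, fits in a toy calculation; the labels and numbers here are purely illustrative:

```python
# Each path carries a complex amplitude plus the final state of everything it
# touched (here, just a detector label). Amplitudes sum within each distinct
# configuration; probabilities sum across configurations.
def detection_probability(paths):
    by_config = {}
    for amplitude, environment in paths:
        by_config[environment] = by_config.get(environment, 0) + amplitude
    return sum(abs(total) ** 2 for total in by_config.values())

# No which-path record: both paths leave the detector untouched, so the
# opposite-phase amplitudes cancel (destructive interference).
p_interfering = detection_probability([(0.5, "idle"), (-0.5, "idle")])

# The detector's particles end up jostled differently by each path, so the
# configurations differ and the cross term vanishes: 0.25 + 0.25.
p_decohered = detection_probability([(0.5, "fired_A"), (-0.5, "fired_B")])
```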
Hm, it says the observer-at-the-slit experiment hasn't been performed because it would absorb the photons. But it also says the experiment can be done with larger particles, so that shouldn't be a problem ...