In my work in academia (which I’m considering leaving), I’m very familiar with the common mathematical objects you mentioned. Where could I look for a job similar to yours? It sounds very interesting
Sorry, I'm in academia too, but my ex-colleagues who left found themselves doing nearly identical work: MFT research at hedge funds, climate modelling at our federal weather bureau, and SciML in big tech. I know of someone doing this kind of work in telecoms too, but I haven't spoken to them lately. Having said that, it's rough out there right now. A couple of people I know who are currently looking for another job (academia or otherwise) with this kind of training are not having much luck...
Some ideas are too complex to explain accurately in simple terms.
You can give someone a simple explanation of quantum chromodynamics and have them walk away feeling like they learned something, but only by glossing over or misrepresenting critical details. You’d basically just be lying to them.
Quantum mechanics is the prime example of a subject where even the supposed experts don’t really understand it, and hence can’t explain it adequately.
Also, it’s hilarious to get comments like this voted down by non-experts who assume this must be an outsider’s uninformed point of view.
I have a physics degree and I studied the origins and history of quantum mechanics. Its “founding fathers” all admitted that it’s a bunch of guesswork and that the models we have are arbitrary and lack something essential needed for proper understanding.
The math that describes it is known precisely. Specific implications of this are known. There's no information transfer, there's no time delay, etc.
And yet lay people keep incorrectly thinking it can be used for communication. Because lay-audience descriptions by experts keep using words that imply causality and information transfer.
This is not a failure of the experts to understand what's going on. It's a failure to translate that understanding to ordinary language. Because ordinary language is not suited for it.
> Its “founding fathers” all admitted that it’s a bunch of guesswork and that the models we have are arbitrary and lack something essential needed for proper understanding.
We don't have a model of why it works / if there's a more comprehensible layer of reality below it. But it's characterized well enough that we can make practical useful things with it.
> This is not a failure of the experts to understand what's going on.
> We don't have a model of why it works / if there's a more comprehensible layer of reality below it.
Counterpoint:
You’ve just admitted they don’t understand what’s going on — they merely have descriptive statistics. No different than a DNN that spits out incomprehensible but accurate answers.
So this is an example affirming that QM isn’t understood.
QM isn't less well understood than Newton's mechanics, though. Neither covers the "why". But both provide a model of the world; the model (!) is very precisely understood, and it matches observations in certain parts of reality, like all reasonable scientific theories do. They have limits, and beyond those limits they don't apply, but that doesn't mean they are not understood. It's reality that is not sufficiently well understood, and by coming up with more and more refined models/theories we keep approximating it, likely without ever having a "fully correct" theory encompassing everything without limits. (But that's ok.)
The only descriptive/empirical parts are the particle masses.
But it sounds like your objection is that reality isn't allowed to be described by something as weird as complex values that you multiply to get probabilities, so there necessarily must be another layer down that would be more amenable to lay descriptions?
My point is that their models are fitted tensors/probability distributions, often retuned to fit new data (eg, the epicyclic nature of collider correction terms) — the same as fitting a DNN would be.
Their inability to describe what is happening is precisely the same as in the DNN case.
Actually it is just the opposite. QED is comprehensive and, as far as we know, accurate.
But it is impractical to use in most situations so major simplifications are required.
The correction factors that you mention are the result of undoing some of those simplifications, sometimes by including more of the basic theory and sometimes by saying something like "we know that we ignored something important here and it has to have this shape, but we can only kinda sorta measure how big it might be, because it's too hard to actually calculate".
As I pointed out, eg, the high number of correction terms when trying to tune the model to actual particle accelerator data is evidence that our model is missing something. (And some things are plain missing: neutrino behavior, dark matter, dark energy, etc.)
In the same way that a high number of epicycles was evidence our theory of geocentrism was wrong — even though adding epicycles did compute increasingly accurate results.
> As I pointed out, eg, the high number of correction terms when trying to tune the model to actual particle accelerator data is evidence that our model is missing something. (And some things are plain missing: neutrino behavior, dark matter, dark energy, etc.)
This is rather a problem of the standard model. Physicists will immediately admit that something is missing there, and they are incredibly eager to find a better model. But basically every good attempt that they could come up with (e.g. supersymmetric extensions of the standard model; but I'm not a physicist) has by now (at least mostly) been falsified by accelerator experiments.
The comment you originally replied to was about entanglement, not the entire standard model. The math there is very simple, not built on correction terms.
... So it's about not being able to observe short-lived particles directly, and having to work backwards from longer-lived interaction or decay products? Or about how those intermediate particles they have to calculate through also have empirically determined properties?
Most of that is measured corrections, not a theoretical model.
Entanglement is just a statistical effect in our measurements — we can’t say what is happening or why that occurs. We can calculate that effect because we’ve fitted models, but that’s it.
Similarly, to predict proton collisions, you need to add a bunch of corrective epicycles (“virtual quarks”) to get what we measure out of the basic theory. But adding such corrections is just curve fitting via adding terms in a basis to match measurement. Again, we can’t say what is happening or why that occurs.
We have great approximators that produce accurate and precise results — but we don’t have a model of what and why, hence we don’t understand QM.
> Entanglement is just a statistical effect in our measurements — we can’t say what is happening or why that occurs. We can calculate that effect because we’ve fitted models, but that’s it.
Bell's theorem was a prediction from math before people found ways to measure and confirm it. A model based on fitting to observations would have happened in the other order.
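For what it's worth, the prediction fits in a few lines. Here's a minimal sketch (in Python, my own illustration, not anything from this thread): QM's singlet-state correlation E(a, b) = -cos(a - b) already violates the CHSH form of Bell's inequality, which any local hidden-variable model must satisfy, before any experiment is run:

```python
import math

# QM's predicted spin correlation for a singlet state measured
# by two detectors set at angles a and b (in radians).
def E(a, b):
    return -math.cos(a - b)

# CHSH combination: any local hidden-variable theory must give |S| <= 2.
a, a2 = 0.0, math.pi / 2              # Alice's two detector settings
b, b2 = math.pi / 4, 3 * math.pi / 4  # Bob's two detector settings

S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(abs(S))  # 2*sqrt(2) ≈ 2.828, above the classical bound of 2
```

The experiments came later and matched the 2√2, not the 2. Curve fitting to existing data couldn't have produced that ordering.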
> A model based on fitting to observations would have happened in the other order.
We’d already had models which said that certain quantities were conserved in a system — and entanglement says that is true of certain systems with multiple particles.
To repeat myself:
> Entanglement is just a statistical effect in our measurements — we can’t say what is happening or why that occurs.
Bell’s inequality is just a way to measure that correlation, i.e., the statistical effect, and I think it supports my point that the way to measure entanglement is via a statistical effect.
ER=EPR is an example of a model that tries to explain what and why of entanglement.
Reminds me of the old videos on the Mill CPU architecture. There is a multi-hour video about “the belt”, a primary concept for understanding the Mill architecture and instruction scheduling. It’s portrayed in the slides as an actual belt with a queue of items about to be processed, etc.
Only at the end is it revealed that the belt is purely conceptual and does not formally exist. The belt is an accurate visual representation and teaching tool, but the actual mechanics emerge from data latches, the timing of releasing the data, etc.
To me, every profession—from software engineering to farming—has its complexities, yet most professionals can explain what they do in clear terms. When academics say they can’t offer a basic explanation, it often feels like an attempt to protect their status or avoid the effort—if not a kind of intellectual arrogance. Yes, the topics are challenging—you don’t need to throw in quantum buzzwords to convince me—but simplifying your work isn’t “dumbing it down”; it often sharpens your own understanding too.
I encounter this idea too often: the idea that complex topics can always be explained in a way that makes everyone understand them. That just isn't true. There is usually a point on any topic where further reduction/compression is no longer lossless. I think the analogy of image compression works pretty well here. Lossless compression can only go so far. Further reduction introduces loss; the image may still be understandable, but at a certain point the loss from compression prevents understanding of the image, and may even mislead (is that a bear, or Uncle Robert?).
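The compression analogy can even be made literal. A tiny sketch (Python, using zlib as a stand-in for the lossless case; the "lossy" step is just crude byte subsampling for illustration):

```python
import zlib

data = b"the quick brown fox jumps over the lazy dog " * 50

# Lossless: smaller, and the original is recovered exactly.
packed = zlib.compress(data)
assert zlib.decompress(packed) == data

# "Lossy": keep only every 4th byte. Also smaller, but the original
# can no longer be reconstructed -- the information is simply gone.
lossy = data[::4]
assert len(packed) < len(data)
assert len(lossy) < len(data)
```

Past the lossless limit, every further saving costs you something you can never get back, which is the point about explanations too.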
I personally think of this in terms of giving directions.
It's easy to give directions to somewhere near where you currently are -- "Just head down the road, it's the second left, then 3 doors down".
When giving directions to a far-away place you either have to get less accurate "it's on the other side of the world", or they get really, really long. Unless of course they already know the layout of the land -- "You already know Amy's house, over in Algebra Land? Oh, then it's just down the road, fourth left, six doors down".
People often seem cleverer because they know the layout of some really obscure land, but often it's just that other people have never been anywhere near it. I have a joke about my research where I say, "A full explanation isn't that hard, it's just long. About 4 hours, probably. Are you interested?" So far, three people have taken me up on that, and they all seemed to understand once I'd finished (or they really, really wanted to escape).
So, what's a horse? Well, you look at it: it’s this big animal, standing on four legs, with muscles rippling under its skin, breathing steam into the cold air. And already — that’s amazing. Because somehow, inside that animal, grass gets turned into motion. Just grass! It eats plants, and then it runs like the wind.
Now, let’s dig deeper. You see those legs? Bones and tendons and muscles working like pulleys and levers — a beautiful system of mechanical engineering, except it evolved all by itself, over millions of years. The hoof? That’s a toe — it’s walking on its fingernail, basically — modified for speed and power.
And what about the brain? That horse is aware. It makes decisions. It gets scared, or curious. It remembers. It can learn. Inside that head is a network of neurons, just like yours, firing electricity and sending chemical messages. But it doesn’t talk. So we don’t know exactly what it thinks — but we know it does think, in its own horselike way.
The skin and hair? Cells growing in patterns, each one following instructions written in a long molecule called DNA. And where’d that come from? From the horse’s parents — and theirs, all the way back to a small, many-toed creature millions of years ago.
So the horse — it’s not just a horse. It’s a machine, a chemical plant, a thinking animal, a product of evolution, and a living example of how life organizes matter into something astonishing. And what’s really amazing is, we’re just scratching the surface. There’s still so much we don’t know. And that is the fun of it!
The quip you're referring to was meant to be inspirational. It doesn't pass even the slightest logical scrutiny when taken at its literal meaning. Please. (Apologies if this was just a reference without any further rhetorical intent though.)
It's like claiming that hashes are unique fingerprints. No, they aren't, they mathematically cannot be. Or like claiming how movie or video game trailers should be "perfectly representative" - once again, by definition, they cannot be. It's trivial to see this.
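To make the pigeonhole argument concrete, here's a small sketch (Python; I truncate SHA-256 to a single byte so the collision shows up quickly, but a real hash just has a vastly bigger, still finite, output space):

```python
import hashlib

def tiny_hash(s: str) -> int:
    # Truncate SHA-256 to 8 bits: only 256 possible "fingerprints".
    return hashlib.sha256(s.encode()).digest()[0]

# 257 distinct inputs into 256 buckets: the pigeonhole principle
# guarantees at least two inputs share a fingerprint.
seen = {}
collision = None
for i in range(257):
    label = f"input-{i}"
    h = tiny_hash(label)
    if h in seen:
        collision = (seen[h], label)
        break
    seen[h] = label

assert collision is not None  # guaranteed, not just likely
print(collision)
```

With more possible inputs than possible outputs, "unique fingerprint" is mathematically impossible; the best a good hash can offer is that collisions are astronomically unlikely in practice.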
Yes, though 70% is a normal cut-off, I think most versions bias the placement more heavily, towards 1/2 in the past square instead of the 1/9 it would have by pure chance. Without the bias, it is simpler to always guess no.
Thanks! I'm definitely planning to refresh my Python skills by building a small portfolio. I’ve also been considering learning the basics of Rust, but I think I should settle on a clear direction first.