A new framework of unification
We have a new paper on the arXiv today in which we present a novel framework of unification in high-energy physics. In this newsletter I will give a non-technical presentation of our results.
Happy New Year, everyone!
Johannes and I have just posted a new paper on the arXiv in which we present a new framework of unification based on noncommutative geometry. Specifically, we have identified a mechanism that brings together bosonic and fermionic quantum field theory (these describe the forces and matter fields in, for instance, the standard model of particle physics) and, indirectly, also key elements of general relativity. In the following I will discuss how this works, what we think it means, and where we go from here — and as always I will attempt to do this in a way that everyone can follow.
Why unification is essential
There exists a convincing argument1 based on quantum mechanics and general relativity that suggests that a final theory must exist. That is, that there exists a theory that once and for all will end the tower of scientific theories that takes us from biology and chemistry through atomic and nuclear physics all the way down to particle physics.
But if we really believe that such a theory exists and if we take that as a fact, then that piece of information tells us something. In fact, it tells us two things:
if a theory is truly final, then it cannot be scientifically reduced. That is, it will be the end of the road, the last peel of the onion, a “hereto and no further”. But how can one possibly imagine a theory that cannot be reduced? The answer must be that such a theory will be based on a small number of conceptually extremely simple principles. These principles must be close to trivial and involve very little conceptual input.
At the same time these founding principles must be sufficiently rich to give rise to the complex mathematics that we encounter in contemporary high-energy physics.
There is a tremendous amount of tension between these two statements: on the one hand almost trivial, on the other hand anything but trivial. This is what makes this problem so fascinating: it is difficult to imagine that such a theory exists, and yet it is equally difficult to believe that it does not.
Now, in my work with Johannes we have already addressed the first point by suggesting that one needs to start with the mathematics of empty space. I have written about this Ansatz elsewhere (see for instance earlier blog posts or my book “Shell Beach - the search for the final theory”, and, of course, our scientific publications), and thus it is the second point that I will focus on here. This point comes with a corollary:
if a theory is to be capable of generating the whole range of mathematics found in contemporary high-energy physics and if it is to do that starting from a point that is close to empty, then that theory must involve some very powerful mechanisms of unification.
And thus the search for such mechanisms of unification must be central in our search for that theory.
The task is a daunting one: contemporary high-energy physics consists essentially of three pillars: bosonic and fermionic quantum field theory, Einstein's theory of general relativity, and the standard model of particle physics. These pillars come with a number of internal problems and challenges (for instance, we do not know how to formulate quantum field theory rigorously), and thus the task is to solve all of these issues while at the same time bringing the three pillars together in a single framework that cannot be scientifically reduced. Very daunting indeed, but doable: since we believe that a final theory really exists, this problem must have a solution, and thus it is just a question of figuring it out.
Unification and noncommutative geometry
One of the most interesting frameworks of unification is found in the work of Chamseddine and Connes, who have formulated the standard model of particle physics coupled to Einstein's theory of relativity in terms of what is known as noncommutative geometry.
Let me briefly summarise what noncommutative geometry is all about. The central insight in noncommutative geometry is that there exists an equivalent formulation of Riemannian geometry (Riemannian geometry is the mathematics2 that Einstein used in his theory of general relativity) that is not based on a metric field, as is the case in the usual formulation, but rather on algebras and Dirac operators. Specifically, the key ingredient in this alternative formulation is a so-called spectral triple, which consists of three ingredients: an algebra of functions on the space under consideration, a Dirac operator, which is a type of gradient operator that is well known from particle physics, and finally a Hilbert space, which you don't need to know anything about unless you're an expert.
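For readers who would like to see the objects themselves, here is what the canonical (commutative) example of a spectral triple looks like; this is standard material from noncommutative geometry and not something specific to our paper:

$$ (\mathcal{A}, \mathcal{H}, D), \qquad \mathcal{A} = C^\infty(M), \quad \mathcal{H} = L^2(M, S), \quad D = \text{the Dirac operator acting on spinors over } M. $$

The remarkable fact is that the metric data of the space M can be recovered from these three ingredients alone, for instance through Connes' distance formula

$$ d(p,q) = \sup\big\{\, |f(p) - f(q)| \;:\; f \in \mathcal{A},\ \|[D,f]\| \le 1 \,\big\}, $$

which measures distances using nothing but the algebra and the Dirac operator.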
Now, the key insight in noncommutative geometry is that this equivalent formulation of Riemannian geometry permits a generalisation that is not accessible in the old formulation. The point is that the algebra used in the alternative formulation is a priori commutative (i.e. it does not matter in which order its elements are multiplied), but the mathematics of the alternative formulation can be straightforwardly generalised to encompass also noncommutative algebras, that is, algebras where it does matter in which order the elements are multiplied. This opens the door to a completely new and vast landscape of geometries known as noncommutative geometries.
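To see how little it takes for multiplication to depend on the order, consider two simple 2-by-2 matrices (the particular matrices below are chosen purely for illustration):

$$ \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}\begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}, \qquad \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} = \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}. $$

Algebras of matrices are in fact the simplest examples of noncommutative algebras, and a small finite-dimensional algebra of essentially this type is what Chamseddine and Connes combine with ordinary space in order to arrive at the standard model.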
And the amazing thing is that one of the first examples one comes across, when one starts wandering around in this new garden of exotic geometries, is the standard model of particle physics coupled to general relativity.
Now, I am oversimplifying things here; it is not possible for me to do justice to this incredibly interesting topic in this newsletter (see for instance an older blog post for more details). The key point that I wish to highlight is how Chamseddine and Connes obtain the bosonic sector of the standard model — that is, the strong, weak, and electromagnetic forces together with the Higgs field — from their spectral triple. What they found is that a certain type of rotation of their Dirac operator gives them a modified Dirac operator — this is called a fluctuated Dirac operator — which, when put through the machinery of noncommutative geometry, delivers the entire bosonic sector, including what is known as Yang-Mills theory as well as the famous Higgs mechanism. It is rather like magic.
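For the mathematically curious: the "rotation" in question is what Connes calls an inner fluctuation. Schematically (and omitting a further term that involves the so-called real structure), the Dirac operator is replaced by

$$ D \;\longrightarrow\; D_A = D + A, \qquad A = \sum_j a_j\,[D, b_j], \quad a_j, b_j \in \mathcal{A}, \quad A = A^*, $$

and it is the new field A, generated by nothing but the algebra and the Dirac operator, that turns out to contain the gauge fields and the Higgs field of the standard model.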
Only that it is not magic. It is just advanced mathematics, and it even comes with a couple of downsides. One of the most important shortcomings of this new mechanism of unification that Chamseddine and Connes discovered is that it is essentially classical. That is, it does not involve quantum theory in any fundamental manner; it deals primarily with classical fields and not operator-valued fields, as would be the case if it were to encompass quantum field theory. This is where my work with Johannes Aastrup enters the picture.
But before I get into that, let me note that it is clear why Chamseddine and Connes' work cannot in its present form include quantum theory at a fundamental level. The reason is that their work is inherently gravitational. Their spectral triple is a geometrical object, a type of relativity theory extended to a new class of spaces, and thus, if their work were to include quantum theory, it would necessarily have to include some type of quantisation of gravity. But if there is one thing we know only too well in theoretical high-energy physics, it is that it is hard, very hard, to quantise gravity. And thus Chamseddine and Connes' work does not go that far (but it goes very far indeed).
Geometries of configuration spaces
As I alluded to above, the fundamental idea in my work with Johannes Aastrup is to base a final theory on the mathematics of empty space. Concretely, what this means is that we consider the geometry of what is called a configuration space of gauge connections.
So what does that mean? First of all, the configuration space that we consider is an enormous space that contains information about how objects are moved around in three-dimensional space. This configuration space naturally arises from the mathematics of three-dimensional space itself and what we suggest is to consider its geometry — and in particular, to consider dynamical geometries.
Configuration spaces of this type are well known in theoretical high-energy physics, where they play, for instance, a central role in quantum field theory. What is new, however, is the idea of considering their geometry3.
We have already shown that many of the key building blocks of contemporary high-energy physics emerge from this idea: the Hamilton operators of bosonic and fermionic quantum field theory (the Hamilton operators generate the evolution in time), together with the corresponding canonical commutation and anti-commutation relations (these are the basic mathematical building blocks of a quantum theory), as well as certain elements of general relativity.
These building blocks emerge in the following manner: building on the insights from noncommutative geometry, we first construct a type of spectral triple on the configuration space. In particular, we construct a Dirac operator on the configuration space. As we know from noncommutative geometry, this is one of the simplest geometrical objects, and we have been able to prove rigorously that such an object exists, at least in certain cases.
Now, the canonical commutation and anti-commutation relations, which form the basis of bosonic and fermionic quantum field theory, emerge from this Dirac operator in the following way: first, the bosonic commutation relations emerge from the interaction of the Dirac operator with a suitable algebra4, and second, the fermionic anti-commutation relations emerge from the so-called Clifford algebra that is used to construct the Dirac operator5.
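Written out, the relations in question are the ones found in any textbook on quantum field theory; schematically, for a single degree of freedom, they read

$$ [a, a^\dagger] = a\,a^\dagger - a^\dagger a = 1 \quad \text{(bosonic)}, \qquad \{c, c^\dagger\} = c\,c^\dagger + c^\dagger c = 1 \quad \text{(fermionic)}. $$

What is new in our approach is not the relations themselves but the claim that both of them can be traced back to a single geometrical object, namely the Dirac operator on the configuration space.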
Furthermore, once we have a Dirac operator we can take its square, and it is from this that the Hamilton operators emerge.
Now, this is where the details become critically important. The thing is that in order to get this second result, the Hamilton operators, we need to add a certain term to our Dirac operator. Technically, we construct what is known as a Bott-Dirac operator, and it is the Bott-Dirac operator that delivers the Hamilton operators once we take its square.
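To give a flavour of how squaring such an operator can produce a Hamilton operator, here is a standard one-dimensional toy analogue; this is emphatically not our actual infinite-dimensional construction, only an illustration of the mechanism. Take

$$ B = \sigma_1 \Big( -i \frac{d}{dx} \Big) + \sigma_2\, x, $$

where the sigmas are Pauli matrices. A short computation gives

$$ B^2 = -\frac{d^2}{dx^2} + x^2 + \sigma_3, $$

which is, up to the constant shift given by the third Pauli matrix, the Hamiltonian of a harmonic oscillator. The Bott-Dirac operator does something roughly analogous in infinitely many dimensions, and this is, very loosely speaking, how the Hamilton operators of quantum field theory appear when we take its square.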
These results are interesting since they provide us with a completely new, geometrical interpretation of quantum field theory. For instance, in this formulation fermions play an intrinsically geometrical role. Nevertheless, when we first obtained the second result with the Hamilton operators, we were also left with a slightly uneasy feeling, because what justifies the choice of a Bott-Dirac operator instead of the ordinary Dirac operator? The fact that it is the former that gives us the right result is not a good explanation, we thought, and thus we kept searching.
A fluctuated Dirac operator on a configuration space
In our new paper we present an alternative — and possibly better — solution. What we have found is that if we apply the mechanism of unification that Chamseddine and Connes found to our setting with a configuration space, then we obtain a new and powerful tool of unification that appears to tie together (at least in part) the three fundamental pillars of contemporary high-energy physics that I mentioned above. In particular, it involves quantum theory and quantised fields at a fundamental level6.
So what we do is to rotate our Dirac operator (the original Dirac operator on the configuration space, not the Bott-Dirac operator) and thereby obtain a new, so-called fluctuated Dirac operator. And what we have found is that this operator gives us the building blocks of bosonic and fermionic quantum field theory when we take its square, without having to add any additional terms.
Now, there are some technical details that I need to mention here (and if anyone already feels exhausted from too much math-lingo then I think it's best to skip this part). First of all, it turns out that the rotation of the Dirac operator that we need is not exactly the same as what Chamseddine and Connes did. In order to get the right result we need to add a certain ‘twist’ to the rotation7. This twist is a mathematically very subtle thing whose significance we do not yet fully understand. I will not attempt to explain it here but simply refer the interested reader to our paper. Secondly, it has to be emphasised that the entire geometrical construction on the configuration space is a highly delicate matter that still remains to be fully explored. We have outlined how such a construction can be accomplished and proven that it exists rigorously in a few special cases, but much work remains to be done.
We need broad hypotheses that address all the fundamental problems
There is one more reason why I believe that our recent findings are particularly significant. The point is that they tell us that the hypothesis that we are exploring is a very broad one that brings together almost all of the basic building blocks of contemporary high-energy physics8. This is significant because there exist exceedingly few hypotheses that accomplish just that.
What I want to say is that contemporary high-energy physics is faced with three key fundamental problems9:
there is the problem of understanding how general relativity and quantum theory can be reconciled,
there is the problem of rigorously formulating quantum field theory,
and there is the problem of understanding the origin of the algebraic structure of the standard model.
What we need are hypotheses that address all of these problems at the same time. We do not need a number of different theories that each solve one of these problems; we need one theory that solves them all. If a fundamental theory — or a final theory — actually exists (and the fact that we are all searching for it must mean that we as a community believe it exists), then it must necessarily solve all of these problems, and thus, as a necessary corollary, the search for hypotheses that address all three problems must be one of our highest priorities.
However, that is not the case. There exist almost no hypotheses in contemporary high-energy physics that aspire to do what I just outlined, and as far as I can tell exceedingly few researchers are actively searching for new, broad hypotheses. Instead the community has organised itself around a small number of fairly old hypotheses that mostly10 address only one problem while ignoring the rest, and it has kept exploring this same set of old hypotheses for a very long time.
This is a problem. A strategy that is based on a high level of compartmentalisation is very unlikely to lead us to the theory that we seek: compartmentalisation makes sense in a situation where we have already settled upon the overall theoretical framework (as is the case in for instance chemistry or atomic physics), not in a situation where such a framework has not yet been found.
What I am suggesting is that we should be more ambitious and not settle for incremental improvements based on partial hypotheses; that we should believe in our ability to find the theory we are searching for, and that we should translate that belief into a search strategy designed to meet that end. And such a search strategy cannot be based on excessive levels of compartmentalisation, incremental improvements of old ideas, and an endless list of toy models that lead us nowhere. If we aim high we might score high; if we aim low the chances of making a home run are close to nil.
This is the reason why it is important to identify new hypotheses that address the full range of fundamental problems faced in theoretical high-energy physics. It is important to show that such hypotheses do exist, and to show that it is possible to come up with completely new ideas. The hypothesis that Johannes and I are working on is an example of a hypothesis that addresses all three of the fundamental problems listed above, and thus it demonstrates that there is more than one game in town. I hope that others will be encouraged by this to start searching for new approaches, because that is what we need: new, bold ideas.
All the best for 2025
With this I will end this newsletter. As always, I would like to thank my sponsors for their incredibly generous support; without them I would not be able to continue my work with Johannes.
I wish you all the very best for 2025, may it be a great year, full of adventure and joy. Take good care,
Kind regards, Jesper
The argument basically goes like this: as we measure shorter and shorter distances, the probe that we use for these measurements will necessarily carry increasingly higher energies, which in turn will curve space and time more and more. At some point this curvature will become so great that a black hole event horizon is created, which will prevent any signal from emerging from our measurements. The scale at which this happens is the Planck scale, and thus this argument suggests that measurements beyond the Planck scale are operationally meaningless. That, in turn, suggests that the tower of fundamental theories must end at the Planck length. This is of course not a proof, but it is a fairly convincing argument.
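The scale can be estimated with a simple back-of-the-envelope computation: a probe that resolves a distance l must carry an energy of roughly hbar c / l, and the argument breaks down when the Schwarzschild radius associated with that energy becomes comparable to l itself. Dropping factors of order one, this gives

$$ \frac{G \hbar}{c^3\, l} \;\sim\; l \quad\Longrightarrow\quad l \;\sim\; \sqrt{\frac{\hbar G}{c^3}} \;\approx\; 1.6 \times 10^{-35}\ \text{m}, $$

which is the Planck length.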
To be precise, Einstein's theory of relativity is based on pseudo-Riemannian geometry, which is a generalisation of Riemannian geometry.
Interestingly, the idea to consider the geometry of a configuration space goes back to Richard Feynman (Nucl. Phys. B 188, 1981) and Isadore Singer (Phys. Scripta 24, 1981). What they did was to consider the trivial/canonical geometry of the configuration spaces. In our work we consider nontrivial and dynamical geometries.
The algebra that we use is the so-called HD-algebra (holonomy-diffeomorphism algebra), which basically encodes how tensorial degrees of freedom are moved around on the underlying space. This algebra comes with a very high level of canonicity, and thus it is natural to use it as the foundation of a fundamental theory.
Since the configuration space is infinite-dimensional, so is the Dirac operator on it, and so is the Clifford algebra that is used to construct the Dirac operator. An infinite-dimensional Clifford algebra is also known as the CAR algebra, which is exactly the algebra that gives the fermionic anti-commutation relations.
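Concretely, the CAR algebra is generated by elements a_i and their adjoints subject to the relations

$$ \{a_i, a_j^*\} = \delta_{ij}\,\mathbf{1}, \qquad \{a_i, a_j\} = \{a_i^*, a_j^*\} = 0, $$

which are exactly the anti-commutation relations satisfied by fermionic creation and annihilation operators.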
The unification of bosonic and fermionic degrees of freedom that we have identified takes place at a level that is deeper than the emergence of quantum field theory. In this way it is more fundamental than supersymmetry would have been, had it been realised in nature. Supersymmetry is a symmetry that is introduced at the level of the Lagrangian in a quantum field theory, i.e. it takes place after quantum field theory has been introduced. The correspondence between bosonic and fermionic degrees of freedom that exists in the framework we propose emerges from the construction of the Dirac operator on the configuration space, and thus it is essentially a metric correspondence. Technically, the fermionic degrees of freedom emerge from the Clifford algebra that is required to construct the Dirac operator, which means that their origin goes right back to the geometry of the configuration space.
This is (as far as I can tell) not related to so-called twisted spectral triples (see for instance e-Print: 2301.08346).
I will elaborate on this point in later posts. But for more details you can check out some of our latest papers.
Theoretical high-energy physics is of course faced with more problems than the ones I list here. Also, it is not possible for a hypothesis to address all the problems that the field is faced with in one go. I think it is helpful to distinguish between fundamental and secondary problems, the fundamental problems being the most important ones. A secondary problem could for instance be the problem of understanding the origin of dark matter or the origin of the cosmological constant. Secondary problems are the ones one expects a fundamental theory to solve, but they do not play a key role at the level of the fundamental hypothesis. A famous example would be the perihelion precession of Mercury, which played a critical role in the test of Einstein's theory of relativity, but not in its foundation and not in Einstein's search for his theory. Now, this distinction between fundamental and secondary problems is of course somewhat arbitrary and others might prefer a different one. This is fine with me; my point is simply that most researchers today work on very narrow hypotheses that only address a small subset of problems (usually a subset of one). In other words, they base their approach on a limited use of the evidence that we are given (an unsolved problem also constitutes a type of evidence). I do not believe it is possible to solve a complex problem with such a strategy.
String theory is of course the exception here. String theory is the only hypothesis that I know of in contemporary high-energy physics that addresses a fairly wide range of fundamental problems, such as the origin of the standard model and the problem of quantum gravity (one can discuss how successful that theory is, but that is not the point here).