Did you know this secret? The worst is that beauty is not only terrible, but also a mystery. God and the Devil fight in it, and their battlefield is the heart of man.
Fiódor Dostoyevski
Mei Xiaochun published a paper three years ago in which he claimed that the Riemann hypothesis does not even make sense, because there are already four serious inconsistencies in the 1859 text to begin with. In a later paper, he uses a standard method to prove that the Riemann zeta function does not have a single non-trivial zero. Zero zeros. Needless to recall that, according to the prevailing mathematical opinion, billions of them have been calculated.
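For those who want to look at the disputed objects themselves, the zeros in question are easy to compute numerically; a minimal sketch, assuming only Python with the mpmath library, which prints the first few non-trivial zeros and checks that the function does vanish there to working precision:

```python
# Minimal numerical check of the first non-trivial zeros with mpmath.
# Any computer algebra system with a zeta implementation would do as well.
from mpmath import mp, zetazero, zeta

mp.dps = 30  # working precision in decimal places

for n in range(1, 6):
    rho = zetazero(n)                  # n-th non-trivial zero, Re(rho) = 1/2
    print(n, rho, abs(zeta(rho)))      # |zeta(rho)| should be numerically ~0
```

This of course settles nothing about Mei's objections, which concern the legitimacy of the analytic continuation behind such computations, not the computations themselves.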
Mei Xiaochun is not overcomplicating matters. The inconsistencies he talks about are very basic; they violate even the Cauchy-Riemann equations that are at the basis of complex analysis. I am not a mathematician and prefer to defer judgment on the relevance of his arguments, but I think that, at the very least, they deserve an answer; though we are unlikely to find one anywhere. If some mathematician deigned to answer, surely he would say something to the effect that analytic continuation has principles that the author seems to ignore; but everyone knows that creating new principles out of convenience is the most elegant way of not having any.
Mei is a physicist at the Institute of Innovative Physics in Fuzhou. Looking at a list of his publications, it is easy to see that he has specialized in the critique of highly speculative aspects of modern science that pass for normal science: general relativity, black holes, the LHC and LIGO projects, or even electromagnetism. Regarding the latter, he has a series of papers on retarded electromagnetic interaction that I find particularly interesting, since he links it with temporal irreversibility in certain processes, and we have repeatedly discussed on this page both the theory of the retarded potential and irreversibility in fundamental physics.
Right or wrong, what is clear is that in the West one cannot belong to a university or any official research institution and publish articles like Mei's. It would not even be worth trying, since rejection would be guaranteed in advance. That this is allowed in China only highlights what we already know: that Western science is not regarded there as something of China's own, but only as a tool of enormous power that is impossible to ignore.
In any case, Mei is not trying to «deconstruct the West»; his arguments are perfectly legitimate and right to the point, and what is really revealing is that no Western scientist belonging to the institutions can afford to say these things. Perhaps the last researcher on the payroll who could do so was the Canadian Paul Marmet, and I am not even sure whether he was still working at the University of Ottawa when he published Absurdities in Modern Physics in 1993.
What can the mathematicians here make of Mei's arguments? We will have to ask them. I have been told that the things I have written about the zeta function are «utterly speculative», and it would only be worrying if they were not, since I am not even trying to do mathematics. The fact is that the Riemann hypothesis itself is «utterly speculative» right from the start, and the only worrying thing would be that mathematicians fail to realize it.
To begin with, I have never believed that the hypothesis can be solved, which is what apparently motivates most mathematicians; I think that this special function is interesting in itself, independently of the hypothesis, even if that naturally includes the zeros on the critical line. At least here we have raised doubts about the correctness of the methods of calculus, doubts that arise even before we enter into complex analysis, with the most elementary differential calculus. We have referred again and again to the work of Miles Mathis, who has highlighted the systematic illegal algebraic manipulations used to arrive at the known results. We have also talked about the reverse engineering of calculus applied to physics, and what is striking is that in the case of the zeta function the circumstance is the opposite: physics, and in particular many-body quantum systems, would reproduce some of the patterns of this function. But, needless to say, all that is no less «utterly speculative». I fully agree with Mathis in his assessment that calculus is just applied mathematics, not pure mathematics; what I don't know is how anyone could claim otherwise. Why else are they still calculating zeros?
The Riemann hypothesis «would just say» that the prime numbers are as orderly as they can be, because if they were more orderly they would cease to be prime numbers. That is, it is a more restrictive but still highly uncertain condition on Legendre's and Gauss's conjecture, which later became a theorem thanks precisely to the Riemann zeta function. We put «would just say» in quotation marks since this same thing can be formulated in a thousand and one different ways, with as many different resonances in the most diverse fields of mathematics. It is understandable that most mathematicians wish it to be true, since it would provide a criterion of order of immense scope, and the opposite, that the pattern of prime numbers starts to be different at very large numbers, besides being unimaginable, would have devastating effects. Moreover, there is no hope of disproving the hypothesis with the brute force of computation, pathetically impotent in cases like this.
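For reference, and without entering into Mei's objections, the conventional statement of what is at stake is easy to write down: Legendre's and Gauss's conjecture is the prime number theorem, and the hypothesis is equivalent, by a classical result of von Koch, to the best possible error term for it:

```latex
% Prime number theorem (Legendre-Gauss conjecture, proved in 1896 via the zeta function):
\pi(x) \sim \frac{x}{\ln x}, \qquad \text{more precisely } \pi(x) \sim \operatorname{Li}(x).

% The Riemann hypothesis is equivalent to the sharpest possible error bound (von Koch, 1901):
\pi(x) = \operatorname{Li}(x) + O\!\left(\sqrt{x}\,\ln x\right).
```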
Of course, here we are assuming that these non-trivial zeros of the function exist and that they have a certain and unequivocal relation to the prime numbers and their distribution; but this is exactly what Mei Xiaochun denies. The minds of mathematicians, industrious by nature, will always choose to have something to work with. If an analyst found himself in limbo, heaven or hell, and had to choose between spending eternity idly, or spending it calculating endlessly, he would not think twice.
It does not take much humor to conclude that the proof of the Riemann hypothesis, let alone its implications, is inconceivable, and that its refutation, let alone its implications, is equally inconceivable; and that in the middle of these two inconceivabilities, between zero and one, and zero and infinity, lies the critical line with a real value equal to 1/2.
On closer consideration, one can see that Mei's fair objections do not work to the detriment of Riemann, but unintentionally reveal as never before the genius of the most profound of mathematicians. Like writers and philosophers, great mathematicians hardly do more than exploit and amplify as far as they can the scope of a single idea, by which they are more or less possessed. Everyone does it, but the success of forcing an idea soon finds its limits in reality and in the credit that others are willing to give us.
The 19th century is above all the century of the complex variable, and Riemann is its greatest exponent. It is well known that the origin of the complex zeta function lies in the theory of functions and not in number theory. Riemann knew that complex analysis greatly simplifies many problems of real analysis, just as he knew that complex numbers provide far more room to maneuver. Riemann sought to give better definition to that still hyperbolically imprecise and unmanageable idea that «prime numbers are as orderly as they can be because if they were any more orderly they would cease to be prime numbers». What he then did was to look for the kind of manipulations that could express that idea in the complex plane, starting from Euler's product formula and combining Cauchy's and his own breakthroughs in complex analysis with the indispensable ones of Gauss, Dirichlet and Chebyshev in arithmetic. Riemann was looking for a simplifying criterion and finally came up with one that did indeed seem extremely simplifying, at the expense of performing four forbidden operations. The criterion was given by his own sui generis calculus, which was in itself a new modality but which miraculously put the zeros of the function on a critical line. Riemann could not help but be aware of the precariousness of his manipulations, and so he dropped his hypothesis as discreetly as possible. But only now does it become clear that the hypothesis was from the beginning the motivation for all his work on the function.
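In the textbook presentation, which is of course not Riemann's own sequence of manipulations, the starting point and the continuation to the whole complex plane read:

```latex
% Euler's product, valid for Re(s) > 1:
\zeta(s) \;=\; \sum_{n=1}^{\infty} \frac{1}{n^{s}} \;=\; \prod_{p\ \mathrm{prime}} \frac{1}{1 - p^{-s}}.

% Riemann's functional equation, used to extend zeta to the rest of the plane:
\zeta(s) \;=\; 2^{s}\,\pi^{s-1} \sin\!\left(\frac{\pi s}{2}\right) \Gamma(1-s)\,\zeta(1-s).
```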
And what could four unjustified operations matter when something so extraordinary emerged? It took time for mathematicians to begin to glimpse the new possibilities opened up by the Riemann hypothesis, but once they got into the game, they found it impossible to give up this unexpected and tempting paradise. It is readily admitted that Riemann was not exactly a model of rigor, but no one doubts the boldness and conscientiousness of his approaches. It is impossible that he did not know what he was doing; he simply concluded that it was justified because it gave way to a truth of a higher order. However, the mathematicians who came after him ceased to be aware of this, because they already had enough work justifying their additional and increasingly rarefied assumptions.
I have no desire to undermine either Riemann’s work or that of all those who have followed in his wake. We can feel sympathy for their ways of proceeding, precisely because we can see them as human weaknesses, and it is foolish to be indignant with our own weakness. I think the Riemann hypothesis is just sublime, and yet I also think Mei is right and the manipulations involved are strictly speaking illegal —even by Riemann’s permissive standards. One is tempted to say that the Riemann hypothesis is sublime even if it is absurd, and that even absurdity would prove it right if it could.
Of course, it is not only a matter of human weakness, because if there is any greatness in the life of a scientist, it lies in the fact that his discoveries are not merely due to his intelligence, but to a faith that advances in the dark and to perseverance in the face of adversity. Physics and mathematics have had ample doses of this, although many do not realize it.
I am convinced that what has been said conforms to the reality of the facts, because the whole history of calculus gives us all kinds of evidence in the same sense, evidence that is transparent in spite of the triumphalist bias with which its foundations are judged today. Since calculus was from its very origins a series of heuristic recipes, by the time Riemann arrived analysts had been practicing systematic opportunism for almost two centuries, with no other immediate imperative than to extend the domain of the calculus at all costs, repeating at every opportunity that familiar «a rigorous proof would be desirable». In this Riemann was only adding to a practice that by then had become a tradition.
A sagacious mathematician often gets results without knowing how he arrived at them; the Riemann hypothesis would be the most extreme case of this kind of anticipation. The mathematical community has never justified Riemann’s procedure, but has accepted it at face value, and accepting it literally, has to justify it later by way of a proof that promises to be infinitely complicated and equally insignificant. However, there are in Riemann’s implausible anticipation obscure general reasons and specific reasons that have been erased by that typical forgetfulness that any present needs to attain the illusion of existence.
It is said that the foundation of calculus gave the desired rigor to the great edifice of analysis. But again the foundation was a justification of known results, only with a higher degree of generalization. If previously a right result was arrived at by wrong means, now it was claimed to be arrived at for the right reasons; but how can wrong mathematics work to begin with, and how is it that right mathematics does not change the results of wrong mathematics at all? And of course, the problem with the Riemann hypothesis is that it does not rely on known results, or if it does, it is through a connection as elusive as it is problematic.
The very fact that mathematics is expanding in all directions ensures that the gaps everywhere are also increasing. If its knowledge, as in the rest of the sciences, doubles every 15 years, so do its loopholes, doubtful moves and castles in the air. In fifty years they are tenfold, and in 150, thousandfold. More than one hundred and sixty years have passed since the Riemann hypothesis, so that, with a cursory calculation, it can be estimated that many current works on this function could retroactively involve on the order of 6,000 grave violations. This makes serious criticism an almost impossible task, and besides, no one is to blame for this enormous accumulation of knowledge. So what are we talking about when we talk about rigor?
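The cursory calculation, if I reconstruct it from the figures just given (the four original infractions doubling every fifteen years over roughly one hundred and sixty years), would be something like:

```latex
% Four infractions doubling every 15 years over ~160 years:
4 \times 2^{160/15} \;\approx\; 4 \times 2^{10.7} \;\approx\; 6{,}500.
```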
It has always been discussed whether mathematics or the great physical laws are discovered or invented. I have no doubt that they are invented, and the case in point seems to me to be very eloquent; which, rather than detracting from their merit, adds to the work of Newton, Riemann and so many other illustrious scientists. Nothing is discovered without the will to advance in a certain direction. However, the continuators fatally tend to take for granted things that the creators saw with the most extreme and justified caution.
From Galileo's inclined plane to Mendel's peas, the whole history of science is littered with forced arguments in the face of experimental evidence that does not match well enough but should. Mathematics is supposed to be the most rigorous of all the sciences, but, once again, and in view of all this, what are we talking about when we talk about rigor? We have had ample occasion to see it everywhere, and in these days as never before: rigor is nothing compared to the momentum of research programs and the collective effect they generate. Too many scientists have become specialists in cloaking the wildest ideas in the mask of rigor, opacity and boredom.
*
This may bring to mind that distant book by Sokal and Bricmont on intellectual impostures. No doubt already in the 1990s there was a whole clique of authors and so-called philosophers saying ridiculous things, and with some famous names the derision was even too easy. For the rest, anyone can approve of the calls to separate knowledge from political and culture wars, something that in less insane times would not even have been necessary. But in fact Sokal and Bricmont, a mathematician and a theoretical physicist, were also waging their own very timely culture war, diverting attention from the rampant proliferation of inconsistencies in their own disciplines. And this they knew first hand.
Do we want to suggest here that the Riemann hypothesis is an intellectual imposture? The question seems to me of the utmost interest, not in order to dabble in the sociology of science and the culture wars, but for the mathematical core of the question. I personally do not believe that either the hypothesis or the current way of calculating the function is an imposture or a mere collective illusion, for at least two reasons: because, even if it has arisen from an unsound manipulation, and especially if that were the case, all those zeros still ask for an explanation rather than a solution; and because its extolled beauty is also problematic in itself.
Much has been said about beauty in mathematics and it is a subject from which nothing can be drawn, not because it is unimportant but because it exceeds the competence of the mathematician. The mathematician cannot say anything meaningful about beauty, he can only, at best, create mathematical beauty, and more for its own sake than thinking of those who can appreciate it. However beauty in mathematics is not just a result, but a generative and productive end. Simplicity is sought, and in the meantime, one tries to simplify continuously, which is one with the process of abstraction, of distillation of heterogeneous materials.
The history of mathematics is fascinating because it reflects things about us that cannot be seen in any other mirror. There is an expressive, physiognomic level to mathematics that can be appreciated without having to know the meticulous intricacies of this science; just as a good connoisseur can judge character, mood, and state of health by countenance without having to know anything about physiology. Mathematics not only has its own physiology but is also unwittingly displaying a historical physiology on its own plane, a logic of internal development of which the «social state» would be, at best, only a snapshot that would leave its only precise figure in chronology. There is an aesthetic plane of the isolated mathematical object, and another aesthetic plane of the relation of all mathematical objects in their interaction at a given moment; and the applications of mathematics to physics are a crucial part of this constellation.
Poincaré judged with good sense that arithmetic is a priori and geometry a posteriori and therefore a matter of experience; other mathematicians have said that there is no part of this science, however abstract it may be, that cannot come to reflect relevant aspects of Nature. Here we would also like to suggest the opposite: there is no cognition of Nature on our part that cannot migrate and join the compact monolith of arithmetic —provided one knows enough about number theory, of course. This retrograde or posterior path is much less conscious than the descent from arithmetic to geometry in applications through algebra and calculus, and although in theory they should be reciprocal, in practice they are far from being so because of the various contingencies in their development.
Riemann devoted at least as much time to theoretical and experimental physics, especially in the company of Wilhelm Weber, as he devoted to mathematics; Euler and Gauss also devoted themselves to physics to the best of their abilities. Riemann, no one doubts, was a great analyst, a great geometer, and a great number theorist, algebra being his blind spot or at least the area to which he was most indifferent. This is very characteristic, and in a way allows us to excuse the liberties he always took in his operations. For a mind so absorbed in theory, with a short life and poor health, it is surprising to note the patience with which he attended to the details of repetitive experimental protocols; especially when we note that all his major contributions took place in nine years, between 1851 and 1859, the last of them being, when he was barely 33, the work on the zeta function.
Bernhard Riemann's eight-page paper On the number of primes less than a given magnitude was published in November 1859, the same month as On the Origin of Species. That very same year saw the beginning of statistical mechanics with a pioneering work by Maxwell, and the starting point of quantum mechanics with spectroscopy and the definition of the black body by Kirchhoff. Statistical mechanics is the mother of information theory, and, on the other hand, Riemann was also interested in thermodynamics, which was being forged in those same years. I have always felt that the zeta function alone represents a counterpoint, and probably an antithesis, to the triumphant bid at that time to explain everything in Nature with concepts such as chance, entropy or natural selection. Riemann sought in everything the underlying unity, and, with Leibniz's permission, one can venture that no analyst ever had so strongly synthetic an orientation. In his writings one can see how his mind oscillated between abstract intuition and abstract imagination, two very different faculties that only occasionally coincide. This highly synthetic intelligence also devoted much time to natural philosophy, the mechanics of audition, the conduction of heat, electromagnetism, light and gravity within a theory of the ether; and although these concerns and works, with an unmistakable hallmark of depth, did not bear the desired fruit, they somehow poured into the only synthesis he was able to achieve —where one could least expect it, in his only work on number theory.
I also believe that in formulating his hypothesis Riemann was not motivated by the desire to extend the predictive power of calculus but by the desire for simplification or synthesis, and that is why beauty here has a character that is, quite literally, functional, and at the same time the least obvious that could ever be imagined. Because, does it really simplify things at all, or are we just looking at another mirror whose relation with numbers never quite closes?
Authors such as those already mentioned seem scandalized by the abusive use of mathematics by some philosophers to construct the most inept metaphors. Here again they are not without reason, and many of the examples they expose to scorn really deserve it; but, again, all this is too obvious to devote time to it. However, they do not tell us anything interesting about the use and abuse of metaphors in the so-called contemporary hard sciences. It has been asked what science would be without metaphors, and in truth metaphors are inextirpable, not only in popularization for laymen, but at the very heart of scientific activity. And it is also eloquent that, in the midst of so much abuse by one side or the other, the Riemann hypothesis, so exalted in its beauty, is equally inaccessible to metaphors.
Potential infinity is not a metaphor, but actual infinity is, and yet it is quietly accepted by almost all contemporary mathematicians. In fact, actual infinity is, as Lakoff and Núñez put it, the «Basic Metaphor» of our present mathematics, and it also has something to do with the idea that the Riemann hypothesis must have a solution. Of course, potential infinity is not infinite at all, but only an indefinite finite series. Riemann himself was still thinking in terms of potential infinity, for as we know the actual infinite only began to be contemplated, and not without fierce resistance, after Cantor's work. If actual infinity is so commonly accepted today, it is due to the typical hypertrophy of algebra and its symbolic indifference to the real.
Teratology, the catalog of monstrous and absurd metaphors, begins with the hard sciences themselves, and if in mathematics the aberrant and pathological always remains beyond our sensible recreations, with physics it enters fully into the imaginary. This anomalous use starts historically with the infinitesimal calculus and algebra, which assists it in the application of mathematics to change. It just happens that we have become accustomed to these leaps over reality.
There we have black holes, for example, so well dissected by Mei Xiaochun and many other authors before him; why don't the professionals of skepticism and the artisans of scandal criticize them, when it is so easy to see their absurdity? More than easy: to pretend that a mathematical singularity can have physical reality is absurd, but to want, on top of that, mathematical physics to go beyond a purely mathematical singularity is epically ridiculous. And yet we can understand that the fact that every day we wake up to the news that two or three new black holes have been discovered has to do not only with the entertainment industry; it is also the consequence of a logical development of the same theory that comes from far back.
But no matter how much it is the logical consequence of a series of misunderstandings, there is no beauty in a black hole, nor any mystery as to why they exist beyond the mystery of human credulity. Now, once inhibitions against absurdity and ridicule have been overcome, the sense of mystery and depth is guaranteed. Sokal and Bricmont spoke of postmodern intellectuals and fashionable nonsense, but postmodern science, which at the time of their book was in full swing, was already ramping up in the sixties with «the golden age of black holes», and in the particle zoo with entities such as the quarks, whose name alone says it all. Incidentally, the decisive feature of the nuclear force, the relevance of non-central forces in Nature, is totally eclipsed by something as aseptic as the celebrated asymptotic freedom that conceals it.
General relativity was not crazy and inconsistent enough, even with its mother, special relativity, and the only remaining theoretical restraint, the veto on black holes, had to be torn down. In a course that does not allow one to go back and seriously reconsider the fundamentals, the only thing left is the flight forward, and giving physical reality to singularities was the ultimate passport to Neverland. This was done by taking advantage of the curvature that Riemann had devised in his daring geometrical generalizations and whose application to physics was in his time an utterly remote possibility. In fact Riemann sketched a theory of matter particles as sinks of ether and considered for them spaces with and without curvature. The cautious and scrupulous Poincaré, half a century later, saw that, if special relativity was assumed, there were two ways left for general relativity: curving space-time or curving the light. It was much simpler to do the latter, as well as being in accordance with our experience; that this was perfectly feasible is still demonstrated by current theories such as so-called gauge gravity.
Postmodern science is self-ironic by mere defensive reflex, for it knows very well that it cannot take any idea too seriously. Not only black holes, which we take only as an example, but almost everything in contemporary science is destined to a supreme irony, the product of its twists and shortcuts in the double movement of accumulation and rarefaction of knowledge. And that supreme irony will be very different from the small ironies now sprinkled everywhere. The one who speaks without the slightest hint of irony is Mei Xiaochun, the most unexpected incarnation of frankness.
And something of that supreme irony that is approaching can already be glimpsed in the turns of theory itself in its multiple manifestations. The path of excess also leads to wisdom, if one knows how to get to the very end. Things like the so-called «anthropic principle» in cosmology have long since revealed the degree of disorientation of speculative physics, but here we will only mention the holographic principle because it is closer to our subject, since it arose from the thermodynamics of black holes.
In a black hole, relativistic mechanics, quantum mechanics and thermodynamics —and therefore also information theory— converge. Taking the various theories to the ultimate conceivable limit and trying to see what happens at their contact: this is interesting, even if it is proposed in the demented and pathological environment of an ultrasingularity. The Riemann zeta function is already implicit in Planck's calculations of blackbody radiation, and thus in very basic aspects of quantum mechanics such as vacuum energy levels. On the other hand, this function has been used to regularize the thermodynamics of black holes to obtain finite results, which is only one method among others and does not imply any necessary connection between the two.
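The connection alluded to is elementary and worth spelling out: the integral behind Planck's law, from which the Stefan-Boltzmann constant is obtained, is a value of the very same function:

```latex
% Total energy density of blackbody radiation reduces to a zeta value:
\int_{0}^{\infty} \frac{x^{3}}{e^{x}-1}\, dx \;=\; \Gamma(4)\,\zeta(4) \;=\; \frac{\pi^{4}}{15},
\qquad \text{and more generally} \quad
\int_{0}^{\infty} \frac{x^{s-1}}{e^{x}-1}\, dx \;=\; \Gamma(s)\,\zeta(s).
```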
The holographic principle is already a supreme irony of the theory, for several reasons. First, because after having raised the number of dimensions for a theory of gravity to four, and for string theories to ten or twenty-six, we end up being told that any physical evolution with all its information is reducible, not even to a volume, but to a surface. This cannot fail to create the greatest perplexity, when it was assumed that processes such as gravity or Huygens' principle of light propagation cannot operate in two dimensions. But that is still a minor irony. The greater one would be that the holographic principle is based on the geometric phase, which is a supplement to quantum mechanics but does not properly belong to it: a loop or curvature added to the unitary, closed evolution of the Hamiltonian. The geometric phase does not belong to the projective Hilbert space, but reflects instead the geometry of the environment, which is not included in the definition of a closed system. And this openness of a closed system to its environment is what is now supposed to define the limits of our experience of the world.
In other words, the geometric phase, a mere appendix to a quantum mechanics that claims to be complete but clearly cannot be so, would give us the fine grain of the skin of the universe. The geometric phase, the anomalous shift of the potential, is not part of the scheme of dynamical interactions and is therefore considered entirely secondary. The history of dynamics subordinates the potential to the force. The force is the determinant and the potential, as a mere position, was only understood as a passive auxiliary; but it cannot be that that which has an instantaneous effect, as quantum mechanics demonstrates, is passive with respect to that which needs time to react.
It is as if physics had to some extent inverted the ideas of potency and act; and the truth is that this is far from being the only thing it has inverted. But what is entirely illusory is to pretend that the geometric phase is exclusive to the quantum world, when it is present at all scales and in the best known theories under the misleading name of «retarded potentials» —another of the recurring themes in Mei Xiaochun's papers. Retarded potentials first appeared in physics in 1848 with the electrodynamics of Weber, Riemann's faithful friend and close colleague; and, in a different way, also in Riemann's own electrodynamics of 1858, whose publication he himself withdrew. If retarded potentials are applied to gravity, black holes become impossible, because with increasing velocity the force decreases proportionally.
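For the record, Weber's velocity-dependent potential of 1848 and the force derived from it, in the notation used by modern continuators such as Assis, are the following; the velocity-dependent term is precisely the one that weakens the force as the radial velocity grows:

```latex
% Weber's potential energy between two charges (1848):
U \;=\; \frac{q_{1} q_{2}}{4\pi\varepsilon_{0}\, r}\left(1 - \frac{\dot r^{2}}{2c^{2}}\right),

% and the corresponding force along the line joining them:
F \;=\; \frac{q_{1} q_{2}}{4\pi\varepsilon_{0}\, r^{2}}\left(1 - \frac{\dot r^{2}}{2c^{2}} + \frac{r\,\ddot r}{c^{2}}\right).
```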
*
Analyzing the relational mechanics of Weber and some of his more recent continuators such as Nikolay Noskov and André Assis, we come to the conclusion that the so-called «retarded potential» is not such, but simply expresses the temporal difference between action and reaction once we abandon the implicit principle of global synchronization which dominates all of classical mechanics, including relativity, and in which the «local causality» of quantum mechanics is embedded. Let us briefly recall, once again, that in a relational mechanics such as those cited, generally compatible with observation, we can replace the first principle of inertia by that of dynamic equilibrium; in the second principle the force ceases to be constant and to depend only on distances; and in the third principle the simultaneity between action and reaction ceases.
This modification seems at first only a slight deviation, but we soon realize that it implies an idea of time that is very unusual and difficult to conceive. Each thing has its own time, and if there is a «global synchronizer», which as it comes out of Newton, and as it has always been conceived, is nothing but a metaphysical principle, it cannot lie in the domain of dynamics and interactions, but in the potential, that which has always seemed passive to us. With respect to something instantaneous, the dynamic interaction can only be a passage from potency to act, but it is the potential that directs the action.
Wolfgang Smith, a mathematician and physicist of Aristotelian orientation, has proposed for quantum mechanics the idea of vertical causality. This concept seems to me not only useful but perhaps necessary for our issue, though in a very different sense from the one Smith gives it. Smith adheres to the Copenhagen interpretation, assumes the radical difference between macro- and microphysics, and relies on the collapse of the wave function for a deliberately verticalist and transcendent interpretation. By contrast, here we do not see that there is any irreducible difference between classical and quantum potentials, and the so-called vertical causality would be a relation of another order between the different proper times of processes that we normally consider to lie in the same plane, that is, globally synchronized. Such vertical causality could indeed have a relation to the Riemann zeta function.
Needless to say, this «verticality» has nothing to do with the critical line being represented parallel to the ordinate axis, which is a mere convention; rather, it has to do with the non-separability of potentials, the alignment of their respective dynamical equilibria, the connection between different proper time lines and the elementary, irreducible but unspecific relation that arithmetic has with temporality —just as the zeta function has an intrinsic relation with complexity not because it uses complex numbers, but because of its relation to many-body interaction problems.
This idea of vertical causality, which can best be realized along the lines of physical theories that are now forgotten —and indeed it is forgotten that Weber's relational mechanics, which yields non-central forces, made many relativistic innovations unnecessary— could be inspiring, to say the least. It should not be understood literally, since the idea that there is a horizontal causality that extends virtually without limits would simply be another illusion. There is neither horizontal nor vertical with respect to a primitive homogeneous undifferentiated medium, which is otherwise the only true infinity; the contrast only arises when we assume that there can be a global synchronization by dynamics itself, which besides being metaphysical involves a contradiction in terms. Vertical causality is an implicate order with its own temporal lines and layers.
So we can call this vertical causality a metaphor if we want; but it is a much more restrained metaphor than the notion of actual infinity, or the metaphysical principle of global synchronization, or the intrinsically contradictory principle of inertia, which asks us to consider a closed system that is not closed. The real difference lies in the fact that we are not accustomed to this new metaphor as we are to the others, and we have yet to delve much deeper into it before we begin to make sense of it.
On the other hand, if the geometric phase and the retarded potential form a feedback loop that we may modulate, as the Weber-Noskov theory would suggest, perhaps there are ways to test not only the holographic principle —which would have to be ubiquitous by definition— but also the relation of these ideas with the Riemann zeta function itself —something that, we should say in advance, has nothing to do with the idea of a «physical proof» of the hypothesis.
Someone might find it curious that a physicist as insightful as Michael Berry, the main generalizer of the geometric phase, who has also carried out some of the most influential studies on the relation of the Riemann zeta to physics, has not worked with these or similar ideas. The answer would pass first of all through the fundamental incompatibility of the theories, then through the scarce recognition of relational mechanics, and finally through the even scarcer awareness of its temporal implications. Let us recall that according to the informed opinion of Berry and Keating, for a «Riemann dynamics» to exist it should have properties such as scale symmetry and a classical counterpart; be chaotic and unstable; be temporally irreversible; and be quasi-one-dimensional. Naturally this still leaves all the room in the world for speculation, but it has its value as an estimate of dynamically relevant asymptotic features. Here we have also commented that it should be an irreversible dynamics in a very different sense, but that would now take us too far.
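For the record, the dynamics Berry and Keating have in mind is usually condensed in a heuristic Hamiltonian, a candidate, and no more than that, for the operator imagined in the Hilbert-Pólya conjecture:

```latex
% Berry-Keating heuristic: a suitable quantization of the classical Hamiltonian
%   H_cl = x p
% would have eigenvalues E_n placing the non-trivial zeros on the critical line:
H_{\mathrm{cl}} \;=\; x\,p, \qquad \rho_{n} \;=\; \tfrac{1}{2} + i\,E_{n}.
```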
I devote a good deal of space to the relation of the zeta with physics for good reasons: because I think that the hypothesis has no mathematical solution, and because, on the other hand, I believe that it is a pole of inspiration and attraction for non-standard physics, understood in the most conservative but unexpected sense. Concluding one of his texts, Mei Xiaochun says that it should be considered whether or not the study of the distribution of prime numbers should return to the domain of real numbers, although no doubt he is not unaware that the Riemann zeta function was essential for the proof of the prime number theorem in 1896, and that the «elementary» proofs, presented much later, proved to be more complex indeed.
But, apart from the fact that Mei does not pretend to settle the matter, perhaps there is no contradiction. The zeta function served to prove the theorem because Riemann had already assumed it to begin with when he managed to work out transformations and equivalences of the function. But from there to being a potential game changer in number theory —an enthusiast said that if the hypothesis were solved it would be like shifting from a screwdriver to a bulldozer— there is a gulf. Nothing can justify such expectations. Number theorists may say that their advances have been spectacular, but I think that, rather than being of real use, the only thing this function does is to make other results look at themselves in its mirror. These are completely different things. In itself, the zeta function could be effectively as sterile as a mirror, and yet have a singular role as a guideline. The zeta function would not be a shortcut to anything new because it would be the ultimate shortcut itself.
The truth is that, knowing that no major breakthrough has been made in over one hundred and sixty years, expecting the picture to change suddenly and radically thanks to a proof seems too much like a fantasy of enlightenment: once on top of the world, there would be a whole eternity to explore and delve into the infinite variations of all the associated L-functions. But true understanding rather comes when we realize that no enlightenment is possible; this cuts the tree at the root. Since most do not even want to hear about this, one has to devise indirect strategies, in which one imagines that one is getting closer to the truth without harming oneself too much. Of course, analytic number theory cannot give up the complex plane, and that is out of the question. At its interface with physics, however, there are other very basic issues pending.
Given the nature of the hypothesis, one might superficially think that, even if it were shown to be true, it would not change the landscape of knowledge in number theory at all; and if it were shown to be false, neither would it. And although I believe that neither of these two things will happen, the difference between the two possibilities is that more than a thousand theorems, some of them absolutely amazing, have been advanced assuming that the hypothesis is true, while, as far as I know, the idea that the hypothesis is false does not spark ideas of the same scope. That is, the hypothesis that the Riemann hypothesis is false is not very productive, while the hypothesis of its truth is, and this already tells us something. For Mei Xiaochun the hypothesis is neither true nor false, but plainly meaningless; his judgment stands at the ground level of the matter, but something that continually produces theorems of a very different nature does seem to make sense and to be a source of meaning.
According to this, seen from its foundation the hypothesis has no sense; the sense emerges beyond the foundation. But this has been happening continuously in the history of science, only with a sense that is very easy to identify, whether for the sake of prediction or of explanation. Here, however, meaning might demand another level of description.
An example could be entropy. The interpretation of entropy as disorder, due to Boltzmann's statistical-mechanical rationalization, introduces an unnecessary subjective element, besides diverting attention from the clear natural tendency to the production of maximum entropy already perceived by Clausius; and yet the view of entropy as disorder still overwhelmingly prevails. If one had to speak of order and disorder, it would be much closer to reality to say, with Rod Swenson, that the world tends to produce order because order produces entropy faster than disorder. The most elementary physical interpretation of the Riemann hypothesis is as a random walk —the discrete counterpart of Brownian motion— and this allows us to treat both prime numbers and zeros in terms of numerical entropy. But are entropy and the degree of structure of the function opposed here?
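The random-walk reading can be made concrete with the Möbius function; a small self-contained sketch in plain Python of the «walk» M(x), whose square-root growth is a classical equivalent of the hypothesis:

```python
# Mertens function M(x) = sum of mu(n) for n <= x, read as a +/-1/0 random walk.
# The Riemann hypothesis is equivalent to M(x) = O(x^(1/2 + eps)) for every eps > 0.

def mobius_sieve(limit):
    """Return mu[0..limit], the Mobius function, via a linear sieve."""
    mu = [0] * (limit + 1)
    mu[1] = 1
    primes = []
    is_comp = [False] * (limit + 1)
    for i in range(2, limit + 1):
        if not is_comp[i]:
            primes.append(i)
            mu[i] = -1
        for p in primes:
            if i * p > limit:
                break
            is_comp[i * p] = True
            if i % p == 0:
                mu[i * p] = 0          # squared prime factor => mu = 0
                break
            mu[i * p] = -mu[i]
    return mu

N = 100_000
mu = mobius_sieve(N)
for x in (10, 100, 1_000, 10_000, 100_000):
    M = sum(mu[1:x + 1])               # the "walk" up to x (kept simple, not optimized)
    print(x, M, round(M / x**0.5, 3))  # the ratio stays small if the walk is "square-root tame"
```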
The claim that order produces entropy faster seems quite related to the idea that «prime numbers are as orderly as they can be», and yet in both cases we seem to be asking for a better definition of order than any of the ones we have. The backward heat equation has also been linked to the zeros of the Riemann zeta, which is an interesting way to connect them to signal processing and retrodictive determinism, but there still seems to be a problem of definition and description. On the other hand, it is clear that entropy can be defined in other ways without any need to appeal to the Riemann hypothesis, and that is the case for any important concept externally related to this function.
*
One of the tragedies of history is that, no matter how much more perspective we gain, most of the time this broader scope is of no use to change anything. It changes neither the past nor the future towards which we are shot like a cannonball. In real life no one has to convince us that events are irreversible, but in intellectual constructs, such as mathematics and physics, there always seems to be room for making amends in an elegant and bloodless way. Of course, this is only an illusion, since the organic development of the sciences as a whole is completely irreversible. Or is there any question of overturning the foundations of analysis, relativity or quantum mechanics? Not at all; it is only conceivable that they can be transformed forward, with ever greater complications, never backward. Reflection is permitted on an individual basis, but for the collective development of knowledge, «advancing backwards» would only be to deactivate oneself.
It is only for this reason that things like black holes exist, which express like nothing else the fatal flight forward of theory when looking backwards is impossible and stopping even more so. Truly Faustian is our science, indeed, expanding and striving until the last moment to try to forget that it has long since sold its soul to the devil.
I write these lines not as a scientist, nor as a historian, nor as a critic, but just as a lover of natural philosophy. It is too easily forgotten that Riemann loved natural philosophy at a time when it was still possible, even though he was mainly responsible for the leap of mathematics to new heights of abstraction in those years. In fact, as we have already seen, 1859 would mark the coup de grâce for a natural philosophy already badly wounded since Newton's Principia; though maybe it would be unfair to forget Thomson and Tait's 1867 Treatise on Natural Philosophy. No doubt Riemann's heart was sharply divided in this respect, but it is easy to understand that the tension between the analytic mind and the search for unity was always the mainspring of his mathematical creativity.
As I do not believe that there is a mathematical solution to the Riemann hypothesis, neither do I believe that there is any kind of «physical solution» to the problem. I believe, without the slightest mathematical foundation, and with the conviction of my own natural philosophy, that the hypothesis is inherently open both above and below, both through infinity and unity; and that it is that very thing that makes it seem impenetrably monolithic. Which need not be a bad thing, for even if we can never encompass it, we can still surround it on all sides as long as we do not pretend to possess it.
It is infinitely easier to make physical sense of the zeta function than to prove the hypothesis; and yet no physical interpretation seems to have even remotely the depth of implications that its simplest mathematical formulation has. It is thus a challenge in itself to find physical systems that can reproduce the behavior of this function or of the associated L-functions, such as the one proposed by França and Leclerc some years ago with an electrostatic field and its potential, or many other physical analogies that can be modulated, such as the Brownian motion of electric dipoles in random classical radiation. And it is an even greater challenge to find a physical interpretation that contributes to giving full sense to the hypothesis rather than trivializing it. Riemann and Weber, the close collaborators in theoretical and experimental electrodynamics, would have greatly appreciated these analogies and would have probed them with much more penetration than we have.
The flippancy with which these analogies are disdained, on the grounds that they are not useful for proving the hypothesis, is remarkable; but everything starts from the fact that those who propose them claim that they can be used for that purpose. We should approach the subject in a very different manner. The problem is not that they do not serve to prove the hypothesis, which should be taken for granted; the problem is that modern physics is incapable of describing any process impartially, because it has subordinated everything to prediction. Precisely, we speak of natural philosophy when what we seek is neither prediction, as in the physical sciences, nor explanation, as in cosmology or the theory of evolution, but a kind of description that is in itself sufficiently complete and eloquent. And from this point of view, the deficit of modern science is truly abysmal.
Incidentally, neither Gauss’s electrodynamics, nor Weber’s nor Riemann’s were «failed theories», as is still commonly read in history books. They are simply approaches that have not found the same development as Maxwell’s equations and were prematurely abandoned, but they are not false and rather show another treatment and a different point of view —something that Maxwell, who owed so much to Weber, knew better than anyone else. The same Faust who sold his soul to predictive power has tried to silence his conscience with explanatory sciences such as the theory of evolution or cosmology, but in vain, because only probity in description can find a balance between prediction and explanation.
What we are saying is that in known physics there are already more things than we think, if only we know how to read between the lines. The prediction always leaves us on the surface of things, but the explanation, historically subordinated to the prediction, does nothing more than justify them. We see this with theories such as the big bang or black holes, which try to give us an illusion of depth when in fact they are nothing more than the ultimate extension of the principle of inertia. What happens with the so-called «foundation» of calculus is along the same line of justification.
Riemann had the confidence, repeatedly expressed in his writings, that man has no access to the infinitely large, but that we can know the natural laws thanks to the analysis of the infinitely small. But surely this too is an illusion: the mathematical analysis of physical change cannot have access to the infinitely small either, it should be content with correctly analyzing the physical dimensions of a problem by means of finite intervals. However, it does neither one thing nor the other. Bloch’s principle that there is nothing in the infinite that does not exist before in the finite has apparently borne much fruit in complex analysis, but, if we wanted to make sense of it in physics, we would have to travel in time.
To my knowledge, Miles Mathis is the only author who has so far made a relevant critique of the foundations of calculus and its application to physics in more than three centuries; a critique that goes far beyond the well-known objections of finitists and constructivists. The standard calculus conceals from us at least one dimension of the physical geometry of processes, as is shown by the mere fact that the acceleration of a body moving in a straight line is described by a curve. Analysis soon forgot its humble origin in the calculus of curves in order to generalize itself algebraically, and, for the sake of prediction, it definitively broke its commitment to the description of processes. Potential infinity was formalized by the concept of limit, but in reality, if this is not based on a finite interval, it is nothing more than a way of disguising the infinitesimals, and so one can return to them without great difficulty.
Constant differential calculus, having a higher dimension and being more faithful to the physical geometry, makes it possible to deal fruitfully with the question of how to interpret more realistically the use of complex numbers in physics. If this higher dimension does not concern physical problems alone, those who are willing to use alternative forms of calculus can check whether there is a real zeta function with intermediate values that can be revealing with respect to the Riemann zeta function. Unfortunately for physicists and mathematicians, this calculus is also much more restrictive in its operations, which understandably invites one to ignore it. But the big gap between natural concepts and metaphors and those of higher mathematics begins right here. So if anyone wants to close it again, this is the best place to start.
It is not that natural philosophy is outmoded, but that it is older than we think, or, in other words, it is much more ageless than any restricted theory. Despite the incredible expansion of the sciences, we still have a rather provincial idea of what Nature might be. Even such basic concepts as «potency» and «act» hide much more, not only than Aristotle could imagine, but also than today’s interpreters of quantum mechanics can imagine. And yet the best way to grasp that unimaginable would be to try to close the huge gap between our experience and our abstraction.
If it is true that there is nothing in mathematics that cannot have its reflection in Nature, we do not fully realize what this means, nor do our theories behave in harmony with what it implies —for it is intimating to us that Nature and Spirit are only aspects of the same thing. But if we do not wish to get lost in this radical unity, which inevitably leaves us speechless, we can always say that we find math in Nature simply because we probe her through it, which is equally true. In any case, there is in Nature a plenum, a superabundance not only of complexity, that no theory, barely a spider's web on the bark of this great tree, can ever show us. The Riemann zeta function, for which we have no true physical correspondence, would express something of this plenum much better than all our predictive and explanatory frameworks. It would be demanding a different framework by itself.
The permanent expansion of knowledge is not only constantly creating new gaps, but also new rifts that are basically one and the same. Although the Riemann zeta function began as a purely mathematical subject, it is inadvertently but uncontrollably migrating also to physics, chaos and complexity theory, and computation. We could even see it at the crossroads of these four major areas, and yet it remains a purely mathematical question. Why then are mathematicians themselves increasingly appealing to the so-called «physical intuition» or computational arguments to grasp this problem? No doubt because there are no good mathematical ideas, for on the other hand it is more than doubtful that there is today a single physical idea that has not been emptied of any intuition. But surely all these speculations about quantum chaos, Hilbert spaces, operators and so on are just ways of beating about the bush.
Although it is hard to accept, and given that we are talking about calculus rather than prime numbers, the most distinctive mathematical feature of the complex zeta function and its critical line is the set of infractions committed in its calculation: this would be the zero zone, and perhaps the best way to see through it is to contrast those infractions with the ones incurred in the calculus of the real zeta function with respect to the more stringent constant differential calculus. This threefold transparency should yield greater depth of vision on the purely mathematical plane, if that is at all possible, and also on the most elementary «physical intuition», and on the relationship between the two. There is no substitute for rectitude. Riemann looked for something straighter than the straight line, and almost found it: all the more reason to retrace the shortcuts, of which the history of calculus is the greatest collection. Bringing the divide to the heart of the mathematician is the best way to take him also to the heart of the problem, to the origin of its conception; and now he has the great advantage that he can go much further back in time than the German mathematician.
It would be good to start here, and then, if one wishes, with a fresher look, to ask oneself in what other calculus could the zeta function make more sense, in what other electromagnetic theory, in what other quantum theory, in what other theory of information, in what other quantum theory of information, in what other quantum thermodynamics, in what other spectral theory, and so on. Today, for example, the question is asked whether it is possible to do quantum mechanics without complex numbers, and some studies answer in the negative; the subject is of great interest but such studies are of very little scope because they are self-limited to the basic quantum formalisms. What is instructive, in order to understand the role of complex numbers in quantum mechanics, is to try to reproduce many of their results with more basic theories that are completely independent of such formalisms, some of them already indicated.
All the areas just enumerated, being entirely focused on prediction, do not even deal with the point that interests us most here. So-called physical intuition, like any other intuition, is mostly misleading because it is hardly more than the sum of our habits; we would have to regress our intuition to things prior to those habits, and that is why we have considered constant differential calculus as a case in point of «immediate non-intuitive knowledge», to use the term coined by Jacob Fries.
And if anyone still wonders how an «unsound calculus» like Riemann's could serve to shed light on theories other than those being contemplated, he only needs to recap history, for all the frameworks that are standard today have likewise been built on an analysis deficient at the most elementary level. We know very well how this is possible; what we do not know is how an unsound calculus can reproduce what it does not know.
*
Number theory, with its sovereign uselessness, is a specifically continental product; it is only around 1914, when the theoretical corpus has already been comprehensively systematized and even analytic number theory has exemplary manuals such as Landau's, that there begin to be significant contributions from the Anglo-Saxon world. It is from then on that talk of «conquering the Everest of mathematics» begins, and the proof or refutation of the famous hypothesis takes on the tinge of a sporting competition in select circles. Does the demand for a solution trivialize the problems? Not always, but in this specific and very different case it seems obvious to me that it does: looking for the answer in terms of «yes» or «no» distances us from the question.
Mathematicians themselves tell us that what is missing is an understanding of the relationship between addition and multiplication at the most basic level, and, in good logic, something so basic cannot arise from such derivative, sophisticated and complex ideas as those proposed to obtain a proof. Without an understanding of that so simple and fundamental, seeking a proof should not even be desirable.
Riemann's natural philosophy, which was not the German romantic Naturphilosophie, may sound too strange for our time; partly because it is unfinished and we only have tentative sketches, and partly because of the resulting conflict of seeking, on the one hand, «a complete and self-contained mathematical theory», and on the other hand wanting «to go beyond the foundations of astronomy and physics laid down by Galileo and Newton to penetrate to the interior of Nature». This was just a discreet way of saying that he intended to turn the glove of Newton's natural philosophy inside out. Riemann wants to attain space and the quantitative without taking them for granted, starting from the qualitative, from the inside out. The extent of Herbart's influence on his natural philosophy is disputed, but it is worth remembering that for Herbart reality is absolutely simple and therefore excludes any quantitative concept. Once again we find the German mathematician caught between two highly incompatible demands.
The momentous changes of twentieth-century physics prevent us from seeing how ambitious this solitary project of a unified conception of Nature was. For it was not only a matter of «unifying» the main forces known at the time, in the style of the unification attempts of current theoretical physics, but first of all of finding a prior foundation from below and from within, one that was to include all the psychic activity of the soul directed towards that simplicity of the real which was already completely unattainable for science. Riemann’s natural philosophy is an interface between physics and metaphysics: he tried to give back to the great animal of the cosmos the soul that mechanism seems to have taken away from it, without renouncing any of the physical knowledge of his time.
Riemann knew too much to be content with any crude simplification of unity; in fact he knew as much as Helmholtz about natural sciences, and then some, but he was capable of imagining things that no scientist of Helmholtz’s style could even dream of. Riemann died at 39 and Newton at 41 had not yet begun to write the Principia; but at any rate Riemann’s natural philosophy, of a completely different scope, was destined never to be born: not for «lack of information», as today’s more superficial reading will judge, but for multiple reasons including internal incompatibilities or the burden of working with the classical formulation of mechanics. So there is nothing to regret here. However, I am convinced that natural philosophy was Riemann’s only muse throughout his entire scientific production.
Natural philosophy can provide us with certainties that go beyond the domain of physics or mathematics; what use they may have for those disciplines is quite another matter, one that depends as much on the knowledge of these particular sciences as on the possibilities of application. In any case, this philosophy follows a retrograde path that necessarily begins with interpretation, seeks its methods accordingly, and finally arrives at principles. And although it is a path reverse to the usual one in normal science, it is not difficult to see that modern physics itself was established in this reverse order in the leaps that mark its foundation: from Galileo and Kepler to Descartes to Newton.
When physics came to Newton’s Principia, what took place was a closure through the axioms: the principles as starting points that should not be changed for knowledge to advance and accumulate. But for those for whom the Principle is first and last, and this was certainly the case for Riemann, the principles still have a further evolution and are starting points as well as basic distinctions and foundations of unity. The principles understood as axioms make the fathomless irretrievable: the first mechanical principle of inertia is not definitively closed until the formulation of the third principle of simultaneous action and reaction, which tacitly implies global synchronization. The reformulation of the three principles according to Weber’s mechanics, for example, allows us to recover what was buried in the foundation. Inertia is a good example of something fathomless, which for more than a thousand years could not be intuited, and yet is now taken for granted. Transformative basic ideas can only occur at this level. One of these slow sculptures could be reflected in number theory, but our physical intuitions, or lack of them, are preventing its revelation.
My surmise, I have already said, is that Riemann, by putting his previous knowledge at the service of number theory, activated both voluntarily and involuntarily a retrograde method or reentrance of those conceptions into the a priori of arithmetic, or at least pointed in that direction. And the conclusion is inevitable: one cannot find in number theory better ideas than those introduced into it.
Today academics employ their best arts in smuggling in ignored ideas, past or present, but commitment to the dominant trends makes it impossible to assimilate them. We know only too well that one cannot ask a discipline to go backwards and reconsider its past; only creating new sciences is viable. We spoke above of a quadruple crossover between mathematics, physics, complexity theory and computational theory. Well, today it is perfectly possible to create a morphology of the most general type in the space not covered by any of these sciences. It is not a product of them; on the contrary, it takes them for granted and tries to distance itself from them as much as possible, with another idea of calculus, causality, generation, equilibrium, and a complexity that does not depend only on the number of elements.
This morphology, which does not yet exist but which may be born at any moment, is precisely the science that shows and describes how and why there are no shortcuts in the forms of Nature; that is, in what sense they are necessary. Calculus cannot know this, not because it is unknowable, but because to date it is a systematic collection of shortcuts. This morphology will largely fulfill Riemann’s dream of contemplating Nature both from within and from without, and if it succeeds it will be because it can integrate without difficulty aspects incompatible with current developments —and because it can afford to rectify Riemann’s own ideas about calculus, modern physical notions of curvature, and even the very principles of mechanics.
Such a science would not be imperialistic or opportunistic because it neither pretends to envelop the others nor to borrow their expansive impulse; on the contrary, it deals with that which the other sciences have excluded and rejected, seeking to distill its quintessence. We will devote our next work to this morphology, but if we comment on it here, it is with regard to the question of the expansion of knowledge, a question bound up with its loss of value and with the dissipation of its symbolic capital and credit.
It is easy to see that everything that has «added value» cannot depend on the rate of reproduction of knowledge but would have to go rather in the opposite direction. Natural morphology wants nothing to do with such intractable things as the Riemann hypothesis because it has its own object and is based on incomparably simpler notions, but the important thing to remember is that the multiplication of knowledge means nothing if we do not consciously direct it and do not know what we are looking for. Trying to prove something does not at all guarantee our awareness of the question. In science the most difficult thing is not to want to solve the questions that one has not asked oneself.
*
Although Riemann was excessively cryptic in his most famous work, it is known from Siegel’s research that he had made an exhaustive prior study with very powerful analytical tools; if he did not write a much larger text, it was due to the length requirements of the Berlin Academy. No doubt he would have liked to expand more on the topic, including the technicalities of his calculus, but in the seven further years he lived no one asked him for detailed explanations about them, surely because the few mathematicians who had understood the text were still trying to digest it. What Riemann would certainly not have liked to talk about in public was his natural philosophy, for this was his intimate laboratory, the only place where he was allowed to make all sorts of speculations and hypotheses.
In Riemann’s grand program, natural philosophy served to connect mathematics and physics with psychophysics, psychology and metaphysics. There is a clear connection between his concept of n-dimensional manifolds and Herbart’s qualitative continuum. His idea of constant ether flow in matter obeys other considerations than those of relativity; it arises also in the context of light and electrodynamics, but Riemann does not bother adding physical aspects that only complicate the characterization, such as tension, pressure, heat, color, and so on, because he starts from the idea that there must be a general psychophysical correspondence between quantitative values and qualities. To our age, which has seen the development of so many specialties, this sounds like an undesirable conflation of very different planes, but what Riemann was rather looking for was how the contact between these different planes was possible to begin with. His efforts are best understood along the lines of scientific philosophies such as those of Clifford and Whitehead, with whom he has many similarities; the difference being that Riemann’s philosophical ideas had a still unnoticed effect on mathematics.
But perhaps, looking further back, the philosopher with whom Riemann could be most associated is Leibniz himself. One of the many interesting things about Leibniz is that his philosophy is associated with the mechanical time of clocks, with Newtonian-style global synchronization, when in fact his ideas in physics are purely relational, which is something else entirely. Leibnizian pre-established harmony does not point so much to the synchronicity between events or horizontal causality, which would be at most its external limit, as to the vertical causality of their infinite coimplication. The so-called universality of the Riemann zeta function, which would allow it to reproduce an infinite number of times any differentiable curve with any degree of approximation, could remind us of the universal concert of monads, sharing this time at least one common thread. Some would prefer to speak in this case of «pre-established chaos», but at any rate order and disorder here tend dangerously to coincide.
Relativity arises to rectify certain asymmetries of electrodynamics; but relational electrodynamics deals with them in a different way. An intuitive mathematician always needs to think with images. Riemann, who undoubtedly also had in mind projective ideas in line with the Riemann sphere, on the one hand assumed that there should be an abstract a priori symmetry between imponderable or qualitative aspects, and ponderable and controllable quantitative aspects in the relations between matter and space; and on the other hand he must have harbored his own tentative idea, still rather undecided, about the possible symmetries in electrodynamics in both real and complex variables. In any case, in Riemann’s electrodynamics masses were mere coefficients for density, and density and frequency are indeed fundamental concepts with multiple applications in arithmetic; this way the transmigration of his natural philosophy to the domain of number theory could have taken place.
It is assumed that the zeros of the Riemann zeta emerge from the difference between a sum and an integral, and on the other hand it is no coincidence that the first ones to deal with Green’s theorem, fundamental for electrodynamics and potential theory, were precisely the creators of complex analysis, Cauchy in 1846 and Riemann in 1851; but the German mathematician, to whom we owe the first proof of the theorem, could not have missed that this problem may also be approached by the discrete method of Riemann sums, which he developed in 1853 on the way to the Riemann integral and which is also suitable for counting in number theory. The so-called Green functions are really distributions, but the projective analogy that Riemann might have had in mind, considering a transition between point and wave aspects of electrodynamics, would also be related to the Möbius transform, which shows a one-to-one correspondence with the Lorentz transform but has a different temporal connotation in the much earlier theory of the retarded potential.
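To make the remark about the difference between a sum and an integral a little more tangible, here is a minimal numerical sketch, assuming nothing beyond the standard Euler-Maclaurin truncation of the Dirichlet series; it is an illustration of how a finite sum plus an integral tail already locates the first dip on the critical line, not a reconstruction of Riemann’s own procedure.

```python
# A minimal sketch, assuming only the standard Euler-Maclaurin truncation:
# zeta(s) ~ sum_{n<=N} n^(-s) + N^(1-s)/(s-1) - N^(-s)/2 + s*N^(-s-1)/12.
# The "sum minus integral" already shows the first nontrivial zero near t ~ 14.13.

def zeta_em(s: complex, N: int = 50) -> complex:
    """Euler-Maclaurin approximation of zeta(s), valid for Re(s) > 0."""
    partial = sum(n ** (-s) for n in range(1, N + 1))   # the finite sum
    tail = N ** (1 - s) / (s - 1)                        # the integral part
    corr = -0.5 * N ** (-s) + s * N ** (-s - 1) / 12     # first correction terms
    return partial + tail + corr

if __name__ == "__main__":
    # scan the critical line around the first zero, t ~ 14.1347
    for k in range(15):
        t = 14.0 + 0.02 * k
        print(f"t = {t:.2f}   |zeta(1/2 + it)| ~ {abs(zeta_em(complex(0.5, t))):.4f}")
        # the modulus dips towards zero around t = 14.13
```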
One could still think of such things as the so-called Wick rotation and its association with an imaginary time derived from the Lorentz group itself; such a rotation is nothing but a mathematical trick, but on the other hand the «real imaginary time» is the global synchronization underlying all present mechanics. In special relativity the extended energy-momentum relation unveils a mass-potential symmetry that in principle could allow sign reversals between matter and antimatter, but both the present annihilation/cancellation theory of matter and our idea of synchronization have a big central hole to be filled. Although Wick rotation as an analytical continuation is just a formal procedure, it also reveals an important connection at the foundations of quantum mechanics, thermal physics, Fourier analysis and stochastic processes such as Brownian motion, a connection that admits a reformulation in a more primary context.
In short, although Riemann had in the foreground the connection between calculus and prime numbers, he also had in mind not only a sort of theory of the ether and of the ether of frequencies, but the ultimate limit of his general approach. If it is unquestionable that the motivation of the function comes from the prime counting problem, for which the well-proven method of Eratosthenes was already available, the search for an internal symmetry within the function is due primarily to the inspiration of Riemann’s natural philosophy. The adventurous and celebrated hypothesis comes from natural philosophy, and without it Riemann would never have twisted the arm of calculus to find it. Without Riemann’s preconception of a fundamental symmetry, mathematicians today would frequent the complex zeta function without even suspecting the existence of a critical line.
*
But maybe this is not just archeology of knowledge. As we have already said on other occasions, there are at least three absolutely fundamental equilibria in physics, and in electrodynamics in particular: the zero-sum dynamical equilibrium, the ergontropic equilibrium between minimum energy variation and maximum entropy, and the equilibrium of densities with a unit product; these are hardly contemplated in current frameworks, and when they are, they are not properly related. Of course, that Riemann conceived a symmetry in the zeta function does not at all mean that it should find a translation in physical reality, but if that is really the case, as so many studies now suppose, and we try to guess what that unimaginable symmetry responds to, it would be desirable to study it in the light of these equilibria and their relation to vertical causality.
Henrik Stenlund has recently noticed that the ratio between a zero of the Riemann zeta function and its conjugate in the negative half of the plane has, apart from a certain phase, an absolute value of unity, instead of being singular as one might expect. This, which is a fact for all zeros on the critical line, fails immediately outside of it, and the deviation increases gradually as one moves away from the line. His basic idea is to substitute more elementary functions for the zeta function in the functional equations, but it is also the sort of thing that Riemann himself might have taken into account when looking for a critical line in the function, and furthermore it makes one think that along with the analytic aspect of the function there exists an absolutely simple and unitary aspect; and the most basic link between the analytic realm and the irreducibly simple is the constant differential calculus based on the unit interval.
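A minimal numerical sketch of this kind of symmetry, under one plausible reading of the ratio in question (ours, not necessarily Stenlund’s exact formulation): pairing each zero ρ with its functional-equation partner 1−ρ in the lower half of the strip, the modulus of their ratio is exactly 1 when Re(ρ) = 1/2 and drifts away from 1 for points off the line.

```python
# A minimal sketch, under the assumption that the ratio discussed above can be
# read as rho / (1 - rho), the zero against its functional-equation partner in
# the lower half of the plane. Only an illustration, not Stenlund's derivation.
from mpmath import mp, mpc, zetazero, fabs

mp.dps = 25

for k in (1, 2, 10, 100):
    rho = zetazero(k)                      # k-th nontrivial zero, 0.5 + i*gamma_k
    print(k, fabs(rho / (1 - rho)))        # exactly 1, since 1 - rho = conj(rho) here

gamma1 = zetazero(1).imag
for sigma in (0.5, 0.6, 0.9):              # hypothetical points at the same height
    s = mpc(sigma, gamma1)
    print(sigma, fabs(s / (1 - s)))        # equals 1 only for sigma = 0.5
```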
If the Riemann hypothesis makes sense at all, the most essential point about the zeros on the critical line of the function is still unknown; but the most essential, be it the most ambiguous or not, is always that which most directly links the divisible with the indivisible. There should be something much more basic than anything so far considered, but it cannot be recognized due to the order of conception created by the very history of the development of calculus as applied mathematics. Riemann’s sui generis calculus would be a cut through various strata that have constituted themselves by also seeking the shortest path. The best way to go deeper into pure mathematics should be to delve into applied mathematics, for ultimately mathematics is always about Nature, and this was still the most profound conviction of Euler, Gauss or Riemann. But it is easy to see, or at least to feel, that if calculus has been modeled by the shortest paths, a posteriori we always lack the background to know what it is a simplification of. It is only natural that deep down we do not believe that mathematical physics deals with Nature, since Nature has been trimmed away everywhere.
The surmise that there is a vertical causality that we cannot now conceive of is also based on the certainty that both calculus and physics, contrary to the claims, are determined from the top down and from the global to the local. As we have already seen on other occasions, this has been the case since Newton, both in the actual procedure and in the inverted interpretation at play in the reverse engineering that makes prediction possible. From then on, the inversion of the relation between the global and the local advances by leaps and bounds: in analytic mechanics, in number theory itself, in complex analysis, in the passage from the macroscopic to the microscopic in statistical mechanics and quantum mechanics, in the holographic principle, and so on. This is the general procedure, but since a foundation is given, the claim is that all this is built bottom-up.
We see for example that fractional calculus and fractional dynamics show long-range spatial and temporal correlations and non-local dependence on history; however classical calculus, in which this does not happen, is only a particular case. We spoke earlier of Brownian motion in dipoles, and, although the fractional dynamics of this type of motion is generally local at the macroscopic level, it would be of interest to trace the correspondence with the microscopic level when using non-relativistic mechanics entirely based on retarded potentials for transitions between low and high velocity motions. This would not only be different from the study done by Mussardo and LeClair on Brownian motion and the Riemann zeta but would have a distinct motivation, namely, to try to give content to the idea of vertical causality and see if it makes sense at all. The underlying idea is that the global synchronization that has dominated mechanics since Newton prevents us from seeing another type of homogeneity, much more general in Nature, which affects our whole conception of space, time and causality, matter, form and motion.
Martin Gutzwiller noticed long ago the correspondence between the zeta function and the delay of an electron with the resulting phase shift in scattering problems, the zeros corresponding to the values of the momentum where the time delay changes abruptly. The smooth quantum chaos would have here a classical counterpart, but to go from locally smooth processes to globally chaotic ones begs not only for a more appropriate mathematics, as he points out, but also for a different mechanics and interpretation. However one can have all three without innovations, filling in the gaps in the line that goes from Weber to Noskov.
The phase of the Riemann zeta function along the zeros of the critical line has previously been associated with the geometric phase of a quantum potential and with the inverted harmonic oscillator, in which the frequency of the oscillator is analytically continued to imaginary values; the inverted oscillator allows an almost canonical passage from linear systems to nonlinear and chaotic systems with the possibility of feedback, both at the classical and quantum level. The inverted harmonic oscillator can be hidden in the most unsuspected places in Nature: in the very vertical stance of the human being, which is nothing less than an inverted pendulum that needs continuous adjustments in order not to fall. Yes, quiet standing implies a balancing random walk without translational motion. The balancing feedback of the inverted oscillator can also be associated with torque and torsion and generalized to an ergontropic reformulation of mechanics and vortices in all kinds of natural environments.
Leibniz believed in infinitesimals but conceived physics in a relational, top-down manner; Newton used fluxions in a sense closer to the modern concept of limit, but his physics, also constructed top-down, claims the opposite through a foundation that is not only axiomatic but also metaphysical; Riemann was closer to Newton in matters of calculus but closer to Leibniz in his ideas about physics; Mathis, who does not even find a use for complex analysis, has shown as no one else has to what extent calculus inverted the physical geometry of problems, but still believes that physics can be built from local causality.
The contours of this constellation suggest that historically we still live in the first phase of calculus or analysis, that which is dominated by the movement of resolution from top to bottom; and that this phase will only finish when it is reflexively understood that it has always been a descending movement since the beginning. That is to say, knowing that it is not so much an ascending construction as a descent will mark the end of the very descent of calculus with respect to the natural world. And after this, there would still be two stages to cover, one of ascent after the unavoidable inflection, and one of final descent.
The second phase of analysis, after the indispensable awareness of its true situation, will be dominated by the ascent to the global by new methods of abstraction. This can hardly be possible without assuming Mathis’s critique, a critique motivated less by the sake of controversy, as many will be tempted to think, than by an undoubtedly positive argument that is also in harmony with the demands of a sound natural philosophy. In parallel there is another motivation for a mathematical abstraction of a new kind, and, as we have already suggested, it is to be found in the possibility of a general morphology that starts from very simple notions of physical geometry. The conjunction of both aspects, critical and generative, will make it possible to create, among many other things, new kinds of dimensional analysis and new theories of measure and dimension.
The third phase of analysis may not even deserve that name anymore, and is simply an entirely different dimension and conception of applied mathematics. This second descent, starting from a new position within the always obscure notion of the global, may perhaps show us if a «Riemann dynamics» makes sense in the physical world, but in any case this would be only a particular case within the much more general subject of downward causality and vertical causality in Nature.
Thus, all known analysis and its indefinite continuation in the same direction does not even reach the end of the first phase of applied mathematics. If such phases are to be conceived as a historical development, then it is impossible to know what periods of time separate them, whether lapses like that between Newton and us, or much longer ones like that from Archimedes to Leibniz; but they can also be seen in a perfectly timeless way, as movements of the spirit that are often already present even within the proof of a simple theorem.
The demon of the endless calculus and the daimon of finite analysis could not be more different, but by insisting that they are one and the same, the spirit of circumspection is confused and the evolution of applied mathematics, no matter what its growth, is hopelessly misdirected. Peras and Apeiron have very little to do with the modern notions of limit and infinitesimal, for if the latter was an idealization since the beginning, the concept of limit was from the start pure rationalization. It is extremely dangerous to conflate both because it leads one to think that there is no more room for anything else, and yet in the midst of their busy commerce there persists an axis inaccessible to arbitrary manipulations.
It is to be hoped that this adamant axis of constant differential calculus will also permit a much more natural reentrance from the finite to the homogeneous which is the background of all comprehension. The future belongs to simplicity, and it is even more so because of the nature of the present complexity. André Weil was asked if he believed that there was any path left to explore in the work of the great creators of 19th century number theory, and his answer was a categorical no. Weil’s judgment is undoubtedly competent but typical of a specialist; number theory is less and less isolated by the day, which has both good and bad aspects. At any rate, the development of number theory is a peculiar case within modern mathematics, and the order of conception that dominates the whole science is, as in physics, that of calculus or analysis. And in both analysis and physics we know positively that some valid options have been ignored that even today may give rise to a completely different perspective. There is no history of science less interesting than the one that assumes that the possibilities of the past are exhausted.
The infinity of present mathematics is above all the infinity of complexity; that of future mathematics will be an infinity of simplicity, and even then it will be in a certain sense only an intermediate stage. Detecting the homogeneous outside is a way for the heterogeneous to contrast with the homogeneous inside, which is also undetectable but is the support of discrimination. To level both aspects is tantamount to crossing a threshold that in the present circumstances is not even considered. Today there are two things that systematically block the possibility of syntheses of a higher order: the first is the elementary deficiency of analysis, of a foundational and obstructive character; the second is the absence of an abstractive synthetic imagination for the morphological aspects intrinsic to Nature that are already present at the level of phenomena. But today we already have anticipations enough to see what can respond to these gaps, anticipations that must contrast sharply with the excess of superfluous complexity of analysis and with the excess of simplification that inevitably accompanies it and that erases the natural context from which it emerges.
Needless to say, this outlook has nothing to do with those of the mathematicians about the future of their discipline: its interest lies precisely in the fact that there is nothing inevitable about it, in that it is a latent possibility independent of any chronology. In the end it matters very little whether in two or three generations computers can prove theorems better than humans, what matters is what man can understand and even more what he wants to understand. The future of mathematics is a reflection of the future of the spirit among us, and the spirit is not just the intellect but the union of intellect and will.
At bottom, mathematics is the only true science; and yet it cannot define its relation to reality, which remains entirely open. It cannot even find a single criterion for how to conduct itself in its own internal affairs, those which seem to be beyond any possible application. But in order to take possession of herself, and not be at the mercy of external interests, as she is now to an extreme degree in the midst of her fictitious freedom, she should be able to reconsider at all times both the criterion of application and the degree of receptivity she shows to Nature; for the two things always go hand in hand.
In the strange case of the zeta function, long before considering whether the hypothesis can be proved, without doubt it would be desirable to understand the role played by complex numbers in physics and Nature; the relation this may have to time; the meaning of the zeros of the function; and finally, how we should interpret vertical causality, which, it is worth remembering, is but the antithesis of the tacit principle of global synchronization that dominates all physics. These four questions are closely related, and of course a large number of mathematicians will see them as purely philosophical; however we have the advantage that at least three of them can also be posed in much simpler and general cases than the one addressed by Riemann. If these three were understood, it is likely that the fourth would be understood too, provided that the zeros have some meaning. We are just talking about understanding, not about proving anything.
It is constantly said that the Riemann hypothesis is strictly a problem of pure mathematics, but if we agree that calculus is applied mathematics, it is undoubtedly a bicephalous problem that necessarily concerns both spheres. To conceive it only as pure mathematics already misses the mark. This reveals the misplaced Platonism that still underlies current mathematics, even when it is assured that it is just a human construction; and the same can be said of current physics, so often pure mathephysics. What is not understood is the supreme importance of the criterion of application, which decides the relationship between the temporal and the timeless aspects of mathematics. This means infinitely more than any attempt at demonstration that does not even succeed in posing the question.
Without duly rethinking their own foundations, the sciences are serfs of the accumulated debt that they are unable to cancel and that drives the enormous inertia of their career. Some will not understand that a science like mathematics, which should need little more than pencil and paper, may have debt issues, but she has them like any other; of course we are referring to internal debts, to all that she owes to herself. Naturally, one can always ignore those issues and keep going in the same direction.
We talked about the future of simplicity, but in order to properly appreciate the simple, we must first educate our taste; we must develop an acquired taste that we do not have right now. The taste for complexity, certainly, we have developed enough, and although one may come to think of it as a spoiled taste, it is also a necessary condition for the formation of a much more conscious one that presupposes it, but which we would like to leave behind as much as possible. The Riemann zeta function could be one of those intermediaries between extreme complexity and simplicity that could help in the re-education of taste, at least among the most recalcitrant complexity addicts, but for that one should first reconsider the fundamentals; and even then one cannot rely too much on such an issue. For one can arrive at simplicity through simplification or through simplicity itself.
That the Riemann hypothesis fully affects the criterion of application of calculus has nothing to do with hypothetical practical repercussions such as those sometimes considered; for example, the fantastic associations that have been made with respect to its impact on cryptography, when no one has specified how it might lead to faster methods of factorization. And yet fantasies of this kind, together with the widespread surveillance on the web and the kind of careers that many researchers lead, not to mention the very fantasy of an imminent demonstration, only tend to increase the opacity, secrecy and mystifications in this minoritarian but nonetheless collective quest.
Consciously and unconsciously, as in everything else, the ceremony of confusion is promoted. The hypothesis would conceal something absolutely basic, but its understanding could only be reached through the most incredibly complicated detours. And it is added, for the avoidance of doubt, that any naive attempt at an elementary proof is doomed in advance. But in all these statements it is already assumed that the only desirable thing is that otherwise almost unattainable demonstration, the rest being only philosophy or semantics. A narrative is constructed to channel the «young talents» so that the unwary and the deluded will look in the same direction without daring to consider any other. And in this way, no matter how much we are told about mystery and grandeur, everything tends to the lowest possible level and nobody questions the criterion of application of mathematics, which is what matters above all else.
Being well assured that there is no proof around the corner and no risk of cryptographic meltdowns should favor freer collaboration among mathematicians. And by openly rejecting the bait that a proof is desirable without first understanding the much more basic things that would need to be understood, the quality of that collaboration will increase beyond quantification.
What certainty is that which natural philosophy has always conveyed? That the Principle is not only the starting point, but also the aim of all our inquiries. Not much, but sufficient for those who know how to adhere to it. With modern calculus begins an immense work of reverse engineering of Nature that is nevertheless inexorably moving us away from her. And in the midst of this dangerous journey emerges the Riemann zeta function, of which we still do not know whether it represents an even greater diversion or a portentous maneuver of rapprochement with as yet highly inadequate methods. There should be simple ways to answer this question, ways that do not require a thousand years of preparation.
Against a reverse engineering that separates us more and more from Nature, we have ventured the existence of a retroprogressive method, that is, of a progress towards simplicity, a progress in the direction of a Principle that has little to do with any mythical origin situated in the past. I would like to think that Riemann’s hypothesis points in the same direction, but I do not yet have the slightest indication that would allow me to affirm it. In any case, here too the retrograde method can be applied from ends to means to principles with an incomparably lower time cost and a much greater impact, from the moment we begin to ask our own questions.
My appeal to Nature is anything but rhetorical, because she is still the great absentee in this equation. It is already significant that physics and mathematics are only in their comfort zone precisely when they can evade the question of their relationship with Nature, and if we want to know why this is so, we only need to interrogate the entire history of calculus in terms of excess and defect, and guess what it has had too much of and what it has lacked; and, as a counterpart, find out what this has meant in terms of abstraction and mathematical imagination over more than three centuries.
*
The «Riemann moment» of 1859 stands alone in the history of science, completely apart from all others. Not even its clear precedents in Euler and Gauss shed light on its true nature. Numbers have generation but no causality. Physics is supposed to have causality, but this becomes irrelevant at the level of the so-called fundamental laws. The classical methods of calculus, which analytic theory applies to numbers, further obscure this question of causality, as well as the relations within the number line. How is it that prime numbers, being so important for arithmetic, have practically no relevance in Nature? This is a question that is never asked and that we should ask ourselves. Here we are referring, of course, to the natural world of directly observable phenomena; a point that, moreover, stands in sharp contrast with the multiple echoes of the zeta function in physical systems, for reasons both known and unknown.
Mathematicians try to reformulate the Riemann hypothesis in multiple equivalent statements; and on the other hand a great number of physical phenomena can be associated with the zeros of the function. What is not done is to try to reformulate the principles of mechanics in equivalent forms to see what new kinds of interpretation they allow of functions of a complex variable; for, to begin with, some of these reformulations can dispense with the use of complex numbers in the description of both classical and quantum systems.
We should place more value on the attempts to prove the hypothesis by the most elementary means, for only in this way can we attempt to answer the question just asked. Some of these approaches bring us closer to the essential points than impressive displays of mathematical weaponry. We will cite only three of them, associated with natural numbers instead of complex ones, with the Möbius and Mertens functions, with random walks and with Denjoy’s probabilistic interpretation: those of K. Eswaran, Spencer Brown and Henk Diepenmaat. Spencer Brown reformulated the hypothesis à la Legendre in the cumulative terms of the Möbius function relative to its previous values. In this way the function would have a self-regulating negative feedback balance and would not be random at all. Diepenmaat’s approach also has common ground with Eswaran’s, while including a recursive component like Brown’s. Diepenmaat offers an original view of prime numbers as a driving pattern in social development in what he calls «recursive perspectivism», although probably the major difference between society and Nature is that we perceive the former from within while we tend to believe, without the slightest justification, that the latter is only out there.
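As a minimal illustration of the Möbius/Mertens random-walk reading just mentioned, and only of that standard equivalence, not of any of the three authors’ arguments: the hypothesis amounts to the cumulative sum M(n) = Σ μ(k) growing no faster than n^(1/2+ε), which is the growth one would expect from a fair coin-tossing walk.

```python
# A minimal sketch of the Mobius/Mertens "random walk": M(n) = sum_{k<=n} mu(k).
# The hypothesis is equivalent to M(n) = O(n^(1/2+eps)), i.e. the walk staying
# roughly within square-root bounds, as a fair coin-tossing walk would.
# Standard linear sieve; nothing here belongs to Eswaran, Brown or Diepenmaat.

def mobius_table(n):
    """Tabulate mu(1..n) with a linear sieve."""
    mu = [0] * (n + 1)
    mu[1] = 1
    is_comp = [False] * (n + 1)
    primes = []
    for i in range(2, n + 1):
        if not is_comp[i]:
            primes.append(i)
            mu[i] = -1
        for p in primes:
            if i * p > n:
                break
            is_comp[i * p] = True
            if i % p == 0:
                mu[i * p] = 0          # i*p has a squared prime factor
                break
            mu[i * p] = -mu[i]
    return mu

N = 10 ** 6
mu = mobius_table(N)
M, checkpoints = 0, {10 ** 3, 10 ** 4, 10 ** 5, 10 ** 6}
for n in range(1, N + 1):
    M += mu[n]
    if n in checkpoints:
        print(f"n = {n:>8}   M(n) = {M:>6}   M(n)/sqrt(n) = {M / n ** 0.5:+.3f}")
```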
Needless to say, the mathematical community disdains these approaches. Rather than attempts at proof, they may be seen as attempts at assimilation, but they are valuable nonetheless. It has rightly been said that if Kepler had had more complete data on the trajectories of the planets he would never have advanced his laws of elliptic planetary motion. Today the specialists on the subject know too much on the one hand, while on the other hand there are too many things they prefer to ignore. In the long run we are likely to learn more from attempts at assimilation, from the desire to incorporate the problem and make it our own, than from any kind of proof whatsoever. Other researchers have also noted the scale invariance in the distribution of primes and its relation to power laws, the presence of the latter being as frequent in social phenomena as in natural ones.
Ultimately, any effort to demonstrate the hypothesis based on complex analysis is an attempt at assimilation, because its principles, like those of mechanics itself, are a matter of convention. Which of course is not to say that they are merely arbitrary. What Riemann was doing in his original paper was already an assimilation, although no one doubts that complex analysis offers another perspective on innumerable problems.
However, the prime counting problem is just the handle for introducing something much broader, if there were a way to justify it. As for the prime numbers themselves, already countable without margin of error in the classical manner, the German mathematician could expect a fair behavior, just as in the probabilities of tossing a coin infinitely many times; but this kind of ideal balance, the discrete expression of the random walk, would only be the most basic form of equilibrium on a scale extending to the other numerical fields, and it is this that gives the function its great richness of implications.
In any case, if this function may be considered the expression of a natural philosophy, such a philosophy could only rest on the idea of equilibrium, an idea truncated right from the start in the conception and foundations of modern physics. If dynamic equilibrium underlies the idea of inertia, its unfolding as action-reaction is also above the plane of global synchronization with the fiction of the instant and the simultaneity that is also at the heart of classical calculus, and the retarded potential would be an index of that unknown scale yet to be interpreted and assimilated. But on the other hand, the existence of natural finite intervals at the core of calculus suggests that there is a theory of proportion hidden in analysis, that this should be connected with the theory of equilibrium in a homogeneous relational mechanics, and that both aspects would be intimately linked with the «deep structure» of the function.
If the hypothesis is true, the most direct implication is that the prime numbers have no deep structure yet to be known; why then should it be disturbing that different physical systems tend to reproduce the patterns of the function?
Epilogue (June 2022)
For those who take for granted the usual way of calculating the function, that is to say, almost everyone, there is probably no simpler explanation of the critical line and its zeros than the one offered by Bertrand Wong in this brief survey instantly understandable by any layman with some interest. Evidently his argument in terms of optimal equilibrium cannot prove anything, but at least it shows clearly that the idea of proportionality still remains essential even in classical analysis. From this purely mathematical point of view, there is not the slightest mystery in the hypothesis, nor is there room for mystification; any proof is supposed to aspire to this, although it can hardly be achieved with demonstrations of two hundred pages. The mystery, if any, would be in the physical resonances of the function. If the simplest is also the broadest and most universal, then the physical parallels of the function should also be understandable in terms of equilibrium and proportionality, and the problem here would be in recognizing how our physics and calculus have truncated these concepts.
Scot C. Nelson discovered in late 2001 that the logarithmic spirals of plant growth —sunflowers, daisies, pine cones, etc.— “serve as a simple and naturally efficient prime number sieve”. Like everything related to the continuous proportion and its associated number series, this has received barely any attention and seems relegated in advance to the ever-growing section of anecdotal coincidences. And yet this is the first basic connection found between prime numbers and these ubiquitous patterns of phyllotaxis, which should have told us something. In light of Nelson’s finding, there appears to be a “central symmetry of prime numbers within three-dimensional objects”, and vegetal growth would carry a natural prime-number-generating algorithm in its becoming. The very passage from the number line to the unfolding of these patterns on surfaces and in three dimensions should be a thread for the geometric intuition of the fundamental theme of arithmetic. It is not the same to link arithmetic with modern abstract “geometry” as to link it with a natural geometry. A mechanical analogy comes to mind: the parts repel each other like magnetic dipoles with a minimization of the energy between them, and as the plant grows the time delay between the formation of new primordia is reduced. One can think of it in ergontropic and information-entropy terms as well.
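For readers who want to see the number line unfolded onto a spiral surface, here is a minimal sketch of the standard Vogel golden-angle model of phyllotaxis with the primes highlighted; it is not Nelson’s sieve, only the generic layout in which such observations are made.

```python
# A minimal sketch, not Nelson's construction: the standard Vogel model places
# the n-th primordium at angle n * 137.5... degrees and radius sqrt(n).
# Marking the primes in this layout is just one way of unfolding the number
# line onto a spiral surface, as discussed above.
import math

GOLDEN_ANGLE = math.pi * (3 - math.sqrt(5))      # ~137.5 degrees in radians

def is_prime(n):
    if n < 2:
        return False
    return all(n % k for k in range(2, math.isqrt(n) + 1))

points = []
for n in range(1, 500):
    theta, r = n * GOLDEN_ANGLE, math.sqrt(n)
    points.append((r * math.cos(theta), r * math.sin(theta), is_prime(n)))

primes = [(x, y) for x, y, p in points if p]
print(f"{len(primes)} primes among the first 499 primordia; sample positions:")
for x, y in primes[:5]:
    print(f"  ({x:+.3f}, {y:+.3f})")
```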
Needless to say, when linking primes with the complex variable Riemann always had harmonic analysis in mind. Since the zeta establishes such a deep connection between discrete and continuous aspects, this should not only be contemplated as an industrious abstract elaboration but also in its more basic association with waves. In fact an Australian team recently showed that the simple superposition of optical waves can generate the sequence of prime numbers with no need of dynamical chaos or ad hoc factorization constraints, «encoding them holographically» into the propagating wave-field. What is really surprising is that such simple things have not been noticed before; findings like this show there is still room for elementary discoveries in the area. And after all, the idea that the interference or non-interference of the primes tendentially affects their distribution is inevitable and absolutely natural. Besides, it brings back the suspicion of a certain projective analogy and a certain «projective arithmetic» in a sense different from those considered today under such terms, as well as of a space-time geometrodynamics which need not be four-dimensional, and which would restore de Broglie’s notion of phase harmony, with its duality between energy-momentum and the periodicity of space-time, to the perspective of the retarded potential prior to the relativistic description. This would allow us to connect the so-called «external symmetries» of classical physics and the «internal symmetries» of particles in a much more natural way than by the more popular methods of algebraic deformation and deformation quantization. If de Broglie particles are elementary clocks reciprocating mass and time, the crucial question about the time parameter of the hypothetical Riemann dynamics would find its meaning in a relational mechanics that can be expressed with and without complex numbers. Today it is admitted that the constants emerging in detailed calculations in physics are almost the same as those arising in number theory, but to see the other side of global synchronization one should first step outside its domain. Both the primes and this special function could finally break into our conception of Nature if we give them the proper place in our representation.
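To make the wave picture a little more concrete, here is a toy interference sieve; it is not the optical-holography construction just referred to, only the elementary fact behind it: a wave of period k peaks exactly at the multiples of k, so an integer is composite precisely when some wave of period between 2 and its square root peaks there.

```python
# A toy interference sieve, only to make the wave picture concrete; it is not
# the holographic optical construction mentioned above, just the elementary
# fact behind it: a wave of period k peaks exactly at the multiples of k, so an
# integer n > 1 is prime iff no wave with period 2 <= k <= sqrt(n) peaks at n.
from math import cos, pi, isqrt

def is_prime_by_waves(n, tol=1e-9):
    if n < 2:
        return False
    for k in range(2, isqrt(n) + 1):
        if cos(2 * pi * n / k) > 1 - tol:   # wave of period k peaks at n => k divides n
            return False
    return True

print([n for n in range(2, 60) if is_prime_by_waves(n)])
# [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59]
```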
References
Bernhard Riemann, Riemanniana selecta, Consejo Superior de Investigaciones Científicas, 2000
Mei Xiaochun, The Inconsistency Problem of the Riemann Zeta Function Equation, 2019
Mei Xiaochun, A Standard Method to Prove That the Riemann Zeta Function Equation Has No Non-Trivial Zeros, 2020
Miles Mathis, A redefinition of the derivative —Why the calculus works, and why it doesn’t, 2003
J. Neuberger, C. Feiler, H. Maier, W. Schleich, Newton flow of the Riemann zeta function: Separatrices control the appearance of zeros, 2014