Talk:The Riemann zeta function and tuning

From Xenharmonic Wiki
This article is deemed to be of high priority for the Xenharmonic Wiki, as it is often seen by new users or easily accessed from the main page or sidebar. Edits made to this article will have a significantly larger impact than on others, and poorly-written content will stand out more. As a result, it has been semi-protected to prevent disruptive editing and vandalism.

Please be mindful of this when making edits to the article.

Review needed

This page is linked from almost every important edo, and I would consider it "high priority". Despite the length of the page and numerous derivations, some questions and problems remain:

  • The construction itself remains largely unmotivated. Why these specific error functions? There is a large amount of handwaving and heuristics. We can use different cyclic error functions which do not lead to zeta but seem equally valid. If there is no specific reason to use cosine functions with these specific weights, then this should be clearly stated. Currently the page seems to actively obscure this, making the connection look more "natural" than it actually is.
  • Why focus specifically on the critical strip? The page currently states
As s approaches the value s = 1/2 of the critical line, the information content, so to speak, of the zeta function concerning higher primes increases [...]
This is very vague and should be clearly explained.
  • The derivation starts from the 'naive' definition of the zeta function, which does not converge for s ≤ 1. At some point it switches to the analytic continuation. It is not clear why or how this is valid for this application (although empirically it seems to work out). I should note that being careless about such things is what leads people to ridiculous claims such as 1 + 2 + 3 + ... = −1/12.
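For concreteness, the kind of cyclic error function under discussion can be sketched numerically. This is a hypothetical mini-version, not the page's actual derivation: it scores an equal division x by summing cos(2π · x · log2(p)) over a few odd primes, and the 1/log2(p) weights and the {3, 5, 7} prime cutoff are illustrative assumptions of mine.

```python
import math

PRIMES = [3, 5, 7]  # illustrative odd-prime cutoff; 2 is exact by construction

def cyclic_error_score(x: float) -> float:
    """Score an equal division of the octave into x parts.

    Each prime p contributes cos(2*pi * x * log2(p)): +1 when x maps p
    with no error, -1 when p lands exactly halfway between two steps.
    The 1/log2(p) weights are one (illustrative) choice; swapping in
    other weights, or another even periodic function in place of cosine,
    gives a different but seemingly equally valid function.
    """
    return sum(math.cos(2 * math.pi * x * math.log2(p)) / math.log2(p)
               for p in PRIMES)

# Rank integer EDOs by this score: familiar systems surface at the top.
scores = {n: cyclic_error_score(n) for n in range(5, 41)}
best = max(scores, key=scores.get)
print(best)  # → 31 for this particular range and weighting
```

The point of the sketch is exactly the arbitrariness complained about above: changing the periodic function or the weights yields an equally plausible "badness" curve that does not lead to zeta.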

Finally, I believe there are still a lot of inaccuracies, although those may be easily fixed. I am not an expert on this topic, it would be great to have the page checked by someone who actually is an expert on complex analysis or the like.

Sintel🎏 (talk) 18:01, 5 April 2025 (UTC)

Recommendation to include Gene's derivation and explanation of its motivations

Of all the derivations I've seen, I've found Gene's derivation to be the most intuitive, understandable and well-motivated. There are really only a handful of minor issues, all but one of which concern not the derivation itself but how it is likely to be perceived by a less mathematically inclined audience:
* The importance of squaring errors is that it achieves a balance between minimising the maximum error and minimising the sum of the absolute errors.
* The importance of the cosine function is that it behaves like a version of the squared error: it punishes intervals proportionally more the further out they are, while being forgiving to intervals that are only slightly out (an intuitively desirable property), and unlike the plain squared error it is periodic, so it applies directly to cyclic errors.
* The step where we introduce the von Mangoldt function fixes a real flaw: up to that point we have not even tried to preserve a mathematical property implicit in the previous weighting. Specifically, the 1/log2(p) weighting is the unique weighting with two important properties. First, adjacent harmonics get about the same complexity/weight, with the ratio of complexities of adjacent harmonics tending to 1/1 as the harmonics go to infinity; this reflects the intuitive desire for e.g. 25, 26 and 27 to be of similar complexity/importance, with 25 slightly less complex than 26 and 26 slightly less than 27. Second, complexity should respect stacking: 81 is four times as complex as 3 because it is 3^4. Equivalently, it is the unique prime weighting that assigns every harmonic its logarithmic size as its complexity. For these reasons it is generally considered the "natural" weighting: it weights the primes so that the prime factorisations of harmonics have the intuitively "expected" complexity, varying smoothly and respecting how the harmonics are reached in terms of factors. We therefore need a motivation to change this weighting, because it has generally been considered the most mathematically natural choice of default weighting for primes.
* This one I think is debatable, but since I saw people complain about it: the step where we say that a certain expression is a "known one" for zeta is perhaps not satisfying for the curious reader, but it at least makes sense from an exposition perspective, keeping the derivation accessible by omitting unnecessary technical details.
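The claim in the third bullet, that the 1/log2(p) weighting corresponds to assigning every harmonic its logarithmic size as its complexity, is easy to check numerically. A minimal sketch (the function name and the test values are mine):

```python
import math

def complexity(n: int) -> float:
    """Complexity of harmonic n when each prime factor p contributes
    log2(p) per occurrence (i.e. weight 1/log2(p) per prime), so
    complexity adds up under stacking."""
    total, m, p = 0.0, n, 2
    while m > 1:                      # trial division over the factorisation
        while m % p == 0:
            total += math.log2(p)
            m //= p
        p += 1
    return total

# Every harmonic's complexity is exactly its logarithmic size:
for n in (25, 26, 27, 81):
    assert abs(complexity(n) - math.log2(n)) < 1e-9

# So adjacent harmonics get smoothly increasing, nearly equal values,
# and stacking is respected: 81 = 3^4 is four times as complex as 3.
assert complexity(25) < complexity(26) < complexity(27)
assert abs(complexity(81) - 4 * complexity(3)) < 1e-9
```

Any other prime weighting breaks at least one of the two properties the bullet names, which is what makes this one the "default".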
The issue that doesn't depend on a less mathematically inclined audience is this: whether the analytic continuation meaningfully preserves the assumptions we put into it. This is the big one, and as far as I can see the only flaw in the derivation. The derivation would be "perfect" (in the sense of being mathematically clear and well-motivated) if we instead took the σ = 1 line, but then we'd get less information. There is a desire to capture more of the higher-limit behaviour, but the cost is that the result is being pulled from a mathematical black box.
Overall, I'm of the opinion that it absolutely should be kept on the main page for zeta, because it connects many other topics in the mathematics of tuning theory in a natural way. But I do agree that it could be more accessible: what might not be obvious to a reader could be explained within the derivation itself, or perhaps in a notes section so that the derivation doesn't grow longer than it already is, letting someone already familiar with the motivation continue reading undisturbed.
--Godtone (talk) 14:40, 8 April 2025 (UTC)
Oh, I forgot there's one other unaddressed motivational issue: the change of weighting to the reciprocal of a power of a prime. While the principle of addressing errors of prime powers is correct, this change definitely deserves some elucidation beyond "to make it converge". Luckily, I do have an explanation: if we count the average number of times a given prime p occurs in the prime factorisations of the harmonics (counting with repetition), we find it is 1/(p − 1), which surprisingly means the average number of 2s in a harmonic, counting with repetition, is exactly 1. This comes from half of all harmonics having at least one 2 (+1/2), a quarter having at least two 2s (+1/4), an eighth having at least three 2s (+1/8), etc., so that we have 1/2 + 1/4 + 1/8 + ... = 1. This can be verified empirically: the harmonics from 1 to 2^n always contain exactly 2^n − 1 factors of 2 in total. Point being, if we fix σ = 1, the only change made by zeta is using 1/p instead of 1/(p − 1), so it's slightly biased towards larger primes but still asymptotically correct, a bonus if anything (given we want more high-limit behaviour captured, and we know the behaviour at 1 is fine because it can be reached by converging to 1 from s > 1). --Godtone (talk) 15:11, 8 April 2025 (UTC)
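The counting argument above is easy to verify empirically. A quick sketch (the function name and the cutoffs are illustrative choices of mine):

```python
def count_factors(harmonics_up_to: int, p: int) -> int:
    """Total number of factors of prime p among harmonics 1..harmonics_up_to,
    counted with repetition (e.g. 8 contributes three factors of 2)."""
    total = 0
    q = p
    while q <= harmonics_up_to:
        total += harmonics_up_to // q  # multiples of p, then p^2, p^3, ...
        q *= p
    return total

# Among the first 2^n harmonics there are exactly 2^n - 1 factors of 2,
# so the average number of 2s per harmonic tends to 1 = 1/(2 - 1):
for n in range(1, 16):
    assert count_factors(2**n, 2) == 2**n - 1

# For odd primes the average similarly tends to 1/(p - 1):
print(count_factors(10**6, 3) / 10**6)  # close to 1/(3-1) = 0.5
```

The geometric-series structure in the loop (multiples of p, p^2, p^3, ...) is exactly the 1/p + 1/p^2 + 1/p^3 + ... = 1/(p − 1) argument made above.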

EDT list

I fixed some errors in the list of peak EDTs in the 'Removing primes' section. I did this by visually inspecting the graph, so it would be nice if someone could double-check using a more sophisticated method.

Sintel🎏 (talk) 10:24, 9 April 2025 (UTC)

Reworking page

I am working on a new version of this page, on User:Sintel/Zeta_working_page. I would love to hear any feedback.

Sintel🎏 (talk) 16:23, 13 April 2025 (UTC)

Re: "When we talk about how well an equal temperament (ET) approximates just intonation, we're essentially asking: "How accurately can this system represent the harmonic series?""
I believe this is wrong/misleading. The reason the list of EDOs given by zeta records is so sparse is that it measures only pure relative error and doesn't care whether you are looking at 2edo or 2000edo. It's fundamentally unfair to characterise it as "how accurately it can represent"; it's "how tone-efficiently it can represent for its size, with no other considerations". I've shown you get a lot more interesting EDOs/ETs if you multiply the resulting score by the size of the EDO/ET, and yet more if you consider systems that, while not record peaks, are still better than one of the best n peaks found so far (I suggest n = 3 as recovering almost all interesting tuning info). --Godtone (talk) 18:56, 14 April 2025 (UTC)
Also, I'm not convinced the zeta integral is so useless a metric that it shouldn't be included: it represents how well an equal temperament does when detuned so that it sits at a "peak" in a more general sense. Because of this property, I firmly believe the zeta integral list makes more sense to think of as a "zeta EDO list" than the zeta peak list does. You could say taking the values exactly at the integer EDOs makes the most sense, but considering how zeta works/behaves, allowing octave tempering can be seen as a way of accounting for the "tendency" of an equal temperament (whether it generally tunes primes sharp or flat), hence favouring only systems that tend close to just. --Godtone (talk) 19:01, 14 April 2025 (UTC)
Ok, it could be made clearer that it's a relative error measurement; it's there in the intro but not in the derivation. To be honest, I wouldn't read too much into the "intuitive explanation" section; there are a lot of details I'm deliberately skipping over.
As for your second point, I don't personally believe the integral or gap lists are that meaningful, if only because they depend on the choice of sigma = 1/2. But the current plan is to leave the main lists as they are, so the integral list would be included. Also, the general sharp/flat tendency is already taken into account by taking non-integer peaks, so I do agree that restricting to integers is not that interesting.
Sintel🎏 (talk) 20:08, 14 April 2025 (UTC)