The Riemann zeta function and tuning

__FORCETOC__
 
The Riemann zeta function is a famous mathematical function, best known for its relationship with the Riemann Hypothesis, an unsolved problem dating back to 1859 involving the distribution of the prime numbers. However, it also has an incredible musical interpretation as measuring the "harmonicity" of an equal temperament. Put simply, the zeta function shows, in a certain sense, how well a given equal temperament approximates the harmonic series, and indeed *all* rational numbers, even up to "infinite-limit JI."
 
As a result, although the zeta function is best known for its use in analytic number theory, it is ever-present in the background of tuning theory - Harmonic Entropy can be shown to be related to the Fourier transform of the zeta function, and several tuning-theoretic metrics, if extended to the infinite limit, yield expressions related to the zeta function. Sometimes these are in terms of the "prime zeta function," which is closely related and can itself be derived as a simple expression of the zeta function.
 
Much of the below is thanks to the insights of [[Gene Ward Smith]]. Below is the original derivation as he presented it, followed by a different derivation from [[Mike Battaglia]] which extends some of the results.
 
=Gene Smith's Original Derivation=
==Preliminaries==
Suppose x is a variable representing some equal division of the octave. For example, if x = 80, x reflects 80edo with a step size of 15 cents and with pure octaves. Suppose also that x can be continuous, so that it can represent fractional or "nonoctave" divisions as well. The [[Bohlen-Pierce|Bohlen-Pierce scale]], 13 equal divisions of 3/1, is approximately 8.202 equal divisions of the "octave" (although the octave itself does not appear in this tuning), and would hence be represented by a value of x = 8.202.
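
As a quick illustration, here is a minimal Python sketch (the helper name octave_divisions is ours, not established terminology) of converting "n equal divisions of some interval" into this octave-based variable x:

<syntaxhighlight lang="python">
import math

def octave_divisions(n, interval):
    """Express n equal divisions of `interval` (a frequency ratio) as the
    equivalent, generally fractional, number of divisions of the octave."""
    step_in_octaves = math.log2(interval) / n   # size of one step, in octaves
    return 1 / step_in_octaves                  # steps per octave = x

print(octave_divisions(80, 2))    # 80edo: x = 80.0
print(octave_divisions(13, 3))    # Bohlen-Pierce (13 divisions of 3/1): x is about 8.202
</syntaxhighlight>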


so that we see that the absolute value of the zeta function serves to measure the relative error of an equal division.


==Into the critical strip==
So long as s is greater than or equal to one, the absolute value of the zeta function can be seen as a relative error measurement. However, the rationale for that view of things breaks down when s is less than one, particularly in the [http://mathworld.wolfram.com/CriticalStrip.html critical strip], where s lies between zero and one. As s approaches the value s=1/2 of the [http://mathworld.wolfram.com/CriticalLine.html critical line], the information content, so to speak, of the zeta function concerning higher primes increases and it behaves increasingly like a badness measure (or more correctly, since we have inverted it, like a goodness measure). The quasi-symmetric [http://planetmath.org/encyclopedia/FunctionalEquationOfTheRiemannZetaFunction.html functional equation] of the zeta function tells us that past the critical line the information content starts to decrease again, with 1-s and s having the same information content. Hence it is the zeta function between s=1/2 and s=1, and especially the zeta function along the critical line s=1/2, which is of the most interest.


Because the value of zeta increased continuously as it made its way from +∞ to the critical line, we might expect the values of zeta at these special Gram points to be relatively large. This would be especially true if -ζ'(z) is getting a boost from other small primes as it travels toward the Gram point. A complex formula due to [[Wikipedia:Bernhard Riemann|Bernhard Riemann]] which he failed to publish because it was so nasty becomes a bit simpler when used at a Gram point. It is named the [[Wikipedia:Riemann-Siegel formula|Riemann-Siegel formula]] since [[Wikipedia:Carl Ludwig Siegel|Carl Ludwig Siegel]] went looking for it and was able to reconstruct it after rooting industriously around in Riemann's unpublished papers. From this formula, it is apparent that when x corresponds to a good edo, the value of ζ(1/2 + i g) at the corresponding Gram point should be especially large.


==The Z function==
The absolute value of ζ(1/2 + ig) at a Gram point corresponding to an edo is near to a local maximum, but not actually at one. At the local maximum, of course, the derivative of |ζ(1/2 + it)| with respect to t will be zero; however, this does not mean that the derivative of ζ itself will be zero there. In fact, the [[Wikipedia:Riemann hypothesis|Riemann hypothesis]] is equivalent to the claim that all zeros of ζ'(σ + it) in the critical strip occur when σ > 1/2, which is where all known zeros lie. These do not have values of t corresponding to good edos. For this and other reasons, it is helpful to have a function which is real for values on the critical line but whose absolute value is the same as that of zeta. This is provided by the [[Wikipedia:Z function|Z function]].
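
As a concrete illustration, here is a minimal sketch (assuming Python with the mpmath library, whose siegelz routine computes Z(t); the precision setting and the choice of 12edo are arbitrary) showing that Z is real on the critical line and shares its absolute value with zeta:

<syntaxhighlight lang="python">
from mpmath import mp, mpf, mpc, pi, log, zeta, siegelz

mp.dps = 20                     # working precision, in decimal digits

x = mpf(12)                     # an equal division of the octave
t = 2 * pi * x / log(2)         # the corresponding point on the critical line

Z_t = siegelz(t)                # Z(t) is real-valued for real t
zeta_t = zeta(mpc(0.5, t))      # zeta(1/2 + it) is complex

print(Z_t)                      # a real number
print(abs(Z_t) - abs(zeta_t))   # essentially zero: |Z(t)| = |zeta(1/2 + it)|
</syntaxhighlight>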


Note that for one of its neighbors, 271, it isn't entirely clear which peak value corresponds to the line of real values from +∞. This can be determined by looking at the absolute value of zeta along other s values, such as s=1 or s=3/4, and in this case the local minimum at 271.069 is the value in question. However, other peak values are not without their interest; the local maximum at 270.941, for instance, is associated to a different mapping for 3.


==Zeta EDO lists==
If we examine the increasingly larger peak values of |Z(x)|, we find they occur with values of x such that Z'(x) = 0 near to integers, so that there is a sequence of [[EDO|edo]]s
{{EDOs|1, 2, 3, 4, 5, 7, 10, 12, 19, 22, 27, 31, 41, 53, 72, 99, 118, 130, 152, 171, 217, 224, 270, 342, 422, 441, 494, 742, 764, 935, 954, 1012, 1106, 1178, 1236, 1395, 1448, 1578, 2460, 2684, 3395, 5585, 6079, 7033, 8269, 8539, 11664, 14348, 16808, 28742, 34691,}} ... of ''zeta peak edos''. This is listed in the On-Line Encyclopedia of Integer Sequences as {{OEIS|A117536}}.
We may define the ''strict zeta edos'' to be the edos that are in all four of the zeta edo lists. The list of strict zeta edos begins {{EDOs|2, 5, 7, 12, 19, 31, 53, 270, 1395, 1578}}... .


==Optimal Octave Stretch==
Another use for the Riemann zeta function is to determine the optimal tuning for an EDO, meaning the optimal octave stretch. This is because the zeta peaks are typically not integers. The fractional part can give us the degree to which the generator diverges from what you would need to have the octave be a perfect 1200 cents. Here is a list of successively higher zeta peaks, taken to five decimal places:
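
Here is a rough sketch of how such a peak can be located numerically (Python with mpmath; the crude grid search, its width, and the choice of 12edo are our own illustrative choices, not part of the original method):

<syntaxhighlight lang="python">
from mpmath import mp, mpf, pi, log, siegelz

mp.dps = 15

def zeta_peak(edo, halfwidth=0.2, steps=400):
    """Crude grid search for the x near `edo` maximizing |Z(2*pi*x/ln 2)|."""
    best_x, best_val = None, mpf(-1)
    for i in range(steps + 1):
        x = mpf(edo) - halfwidth + 2 * halfwidth * i / steps
        val = abs(siegelz(2 * pi * x / log(2)))
        if val > best_val:
            best_x, best_val = x, val
    return best_x

x_peak = zeta_peak(12)
print(x_peak)               # slightly above 12, since 12edo tends sharp overall
print(1200 * 12 / x_peak)   # the implied (here slightly compressed) octave, in cents
</syntaxhighlight>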


34691.00191


==Removing primes==
The [http://mathworld.wolfram.com/EulerProduct.html Euler product] for the Riemann zeta function is


Removing 2 leads to increasing adjusted peak values corresponding to the division of 3 (the "tritave") into 4, 7, 9, 13, 15, 17, 26, 32, 39, 45, 52, 56, 71, 75, 88, 131, 245, 316 ... parts. A striking feature of this list is the appearance not only of [[13edt|13edt]], the [[Bohlen-Pierce|Bohlen-Pierce]] division of the tritave, but the multiples 26, 39 and 52 also.
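
As a hedged sketch of what this looks like in practice (Python with mpmath; removing the prime 2 by multiplying ζ(s) by its Euler factor (1 − 2<sup>−s</sup>) follows from the Euler product above, while the conversion t = 2πy/ln(3) for y divisions of the tritave and the simple integer scan are our own illustrative choices):

<syntaxhighlight lang="python">
from mpmath import mp, mpf, mpc, pi, log, zeta

mp.dps = 15

def abs_zeta_no2(t):
    """|zeta(1/2 + it)| with the prime-2 factor of the Euler product removed."""
    s = mpc(0.5, t)
    return abs(zeta(s) * (1 - mpf(2) ** (-s)))

# Score equal divisions of 3/1 (the tritave); good divisions such as 13
# (Bohlen-Pierce) should stand out from their neighbors.
for y in range(4, 18):
    t = 2 * pi * y / log(3)
    print(y, abs_zeta_no2(t))
</syntaxhighlight>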


==The Black Magic Formulas==
When [[Gene_Ward_Smith|Gene Smith]] discovered these formulas in the 70s, he thought of them as "black magic" formulas not because of any aura of evil, but because they seemed mysteriously to give you something for next to nothing. They are based on Gram points and the Riemann-Siegel theta function θ(t). Recall that a Gram point is a point on the critical line where ζ(1/2 + ig) is real. This implies that exp(iθ(g)) is real, so that θ(g)/π is an integer. Theta has an [[Wikipedia:asymptotic expansion|asymptotic expansion]]


The fact that x is slightly greater than 12 means 12 has an overall sharp quality. We may also find this out by looking at the value we computed for θ(2πr)/π, which was 31.927. Then 32 - 31.927 ≈ 0.073, which is positive but not too large; this is the second black magic formula, evaluating the nature of an edo x by computing floor(r ln(r) - r + 3/8) - r ln(r) + r + 1/8, where r = x/ln(2). This works more often than not on the clearcut cases, but when x is extreme it may not; 49 is very sharp in tendency, for example, but this method calls it as flat; similarly it counts 45 as sharp.
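
Here is a small sketch of these computations in code (Python; mpmath's siegeltheta routine supplies θ(t), and the function names and precision are our own choices):

<syntaxhighlight lang="python">
from mpmath import mp, mpf, pi, log, floor, siegeltheta

mp.dps = 20

def theta_over_pi(edo):
    """theta(2*pi*r)/pi for r = edo/ln(2), using the exact Riemann-Siegel theta."""
    r = mpf(edo) / log(2)
    return siegeltheta(2 * pi * r) / pi

def sharp_flat_indicator(edo):
    """Second black magic formula: positive suggests an overall sharp tendency,
    negative a flat one (it can be fooled by extreme edos such as 49 or 45)."""
    r = mpf(edo) / log(2)
    return floor(r * log(r) - r + mpf(3) / 8) - r * log(r) + r + mpf(1) / 8

print(theta_over_pi(12))          # about 31.93, just short of the integer 32
print(sharp_flat_indicator(12))   # small and positive: 12edo tends sharp
</syntaxhighlight>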


==Computing zeta==
There are various approaches to the question of computing the zeta function, but perhaps the simplest is the use of the [[Wikipedia:Dirichlet eta function|Dirichlet eta function]] which was introduced to mathematics by [[Wikipedia:Johann Peter Gustav Lejeune Dirichlet|Johann Peter Gustav Lejeune Dirichlet]], who despite his name was a German and the brother-in-law of [[Wikipedia:Felix Mendelssohn|Felix Mendelssohn]].


The Dirichlet series for the zeta function is absolutely convergent when s > 1, justifying the rearrangement of terms leading to the alternating series for eta, which converges conditionally in the critical strip. The extra factor introduces zeros of the eta function at the points 1 + 2πix/ln(2) corresponding to pure octave divisions along the line s=1, but no other zeros, and in particular none in the critical strip or along the critical line. The convergence of the alternating series can be greatly accelerated by applying [[Wikipedia:Euler summation|Euler summation]].
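
A minimal sketch of this approach (Python, with mpmath used only for arbitrary-precision arithmetic and as a cross-check; the term count and the simple repeated-averaging form of Euler summation are our own choices):

<syntaxhighlight lang="python">
from mpmath import mp, mpc, pi, log, zeta

mp.dps = 15

def eta(s, terms=500):
    """Dirichlet eta via its alternating series, accelerated by Euler summation
    (implemented here as repeated averaging of the partial sums)."""
    partial, total = [], mpc(0)
    for k in range(1, terms + 1):
        total += (-1) ** (k - 1) * mpc(k) ** (-s)
        partial.append(total)
    while len(partial) > 1:
        partial = [(a + b) / 2 for a, b in zip(partial, partial[1:])]
    return partial[0]

def zeta_from_eta(s):
    """zeta(s) = eta(s) / (1 - 2^(1-s))."""
    return eta(s) / (1 - mpc(2) ** (1 - s))

s = mpc(0.5, 2 * pi * 12 / log(2))   # the critical-line point for 12edo
print(zeta_from_eta(s))
print(zeta(s))                       # mpmath's own zeta for comparison; much larger t
                                     # needs more terms (or the Riemann-Siegel formula)
</syntaxhighlight>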


=Mike Battaglia's Expanded Results=
==Zeta Yields "Relative Error" Over All Rationals==
 
Above, Gene proves that the zeta function measures the [[Tenney-Euclidean_metrics|Tenney-Euclidean relative error]], sometimes called "Tenney-Euclidean Simple Badness," of any EDO, taken over all 'prime powers'. The relative error is simply the tuning error times the size of the EDO, so we can also recover the raw "non-relative" tuning error by dividing by the size of the EDO.
 
Here, we strengthen that result to show that the zeta function additionally measures weighted relative error over all 'rational numbers,' relative to the size of the EDO.
 
Let's dive in!
 
First, let's take the zeta function, expressed as a Dirichlet series:
 
<math> \displaystyle
\zeta(s) = \sum_n n^{-s}</math>
 
Now let's do two things: we're going to expand s = σ+it, and we're going to multiply ζ(s) by its conjugate ζ(s)', noting that ζ(s)' = ζ(s') and ζ(s)·ζ(s)' = |ζ(s)|<sup>2</sup>. We get:
 
<math> \displaystyle
\left| \zeta(s) \right|^2 = \left[\sum_n n^{-(\sigma+it)}\right] \cdot \left[\sum_d d^{-(\sigma-it)}\right]</math>
 
where d is a new variable used internally in the second summation.
 
Now, let's focus on σ > 1, so that both series are absolutely convergent. The following rearrangement of terms is then justified:
 
<math> \displaystyle
\left| \zeta(s) \right|^2 = \sum_{n,d} \left[n^{-(\sigma+it)} \cdot d^{-(\sigma-it)}\right] = \sum_{n,d} \frac{\left({\tfrac{n}{d}}\right)^{-it}}{(nd)^{\sigma}}</math>
 
<span style="line-height: 1.5;">Now let's do a bit of algebra with the exponential function, and use Euler's identity:</span>
 
<math> \displaystyle
\left| \zeta(s) \right|^2 = \sum_{n,d} \frac{e^{-it \ln\left({\tfrac{n}{d}}\right)}}{(nd)^{\sigma}}
= \sum_{n,d} \frac{\cos\left(-t \ln\left({\tfrac{n}{d}}\right)\right) + i\sin\left(-t \ln\left({\tfrac{n}{d}}\right)\right)}{(nd)^{\sigma}}
= \sum_{n,d} \frac{\cos\left(t \ln\left({\tfrac{n}{d}}\right)\right) - i\sin\left(t \ln\left({\tfrac{n}{d}}\right)\right)}{(nd)^{\sigma}}</math>
 
where the last equality makes use of the fact that cos(-x) = cos(x) and sin(-x) = -sin(x).
 
Now, let's decompose the sum into three parts: n=d, n>d, n<d. Here's what we get:
 
<math> \displaystyle
\left| \zeta(s) \right|^2 = \sum_{n=d} \left[ \frac{\cos\left(t \ln\left({\tfrac{n}{d}}\right)\right) - i\sin\left(t \ln\left({\tfrac{n}{d}}\right)\right)}{(nd)^{\sigma}} \right] +
\sum_{n>d} \left[ \frac{\cos\left(t \ln\left({\tfrac{n}{d}}\right)\right) - i\sin\left(t \ln\left({\tfrac{n}{d}}\right)\right)}{(nd)^{\sigma}} \right] +
\sum_{n< d} \left[ \frac{\cos\left(t \ln\left({\tfrac{n}{d}}\right)\right) - i\sin\left(t \ln\left({\tfrac{n}{d}}\right)\right)}{(nd)^{\sigma}} \right]</math>
 
We'll deal with each of these separately.
 
First, in the leftmost summation, we can see that n=d implies ln(n/d) = 0. Since sin(0) = 0, the sin term in the numerator cancels out, yielding:
 
<math> \displaystyle
\left| \zeta(s) \right|^2 = \sum_{n=d} \left[ \frac{\cos\left( t \ln\left({\tfrac{n}{d}}\right)\right)}{(nd)^{\sigma}} \right] +
\sum_{n>d} \left[ \frac{\cos\left(t \ln\left({\tfrac{n}{d}}\right)\right) - i\sin\left(t \ln\left({\tfrac{n}{d}}\right)\right)}{(nd)^{\sigma}} \right] +
\sum_{n< d} \left[ \frac{\cos\left(t \ln\left({\tfrac{n}{d}}\right)\right) - i\sin\left(t \ln\left({\tfrac{n}{d}}\right)\right)}{(nd)^{\sigma}} \right]</math>
 
We will not simplify the cosine term further right now, the reasons for which will become apparent below.
 
Now, let's handle the two summations on the right. The key thing to note here is that we can pair up every term in the second summation with a corresponding term in the third summation that interchanges n and d. To make this clear, let p and q be two integers, and assume without loss of generality that p&gt;q. The term corresponding to n=p, d=q will then appear in the second summation, and the term n=q, d=p will appear in the third summation. Juxtaposing those together, we get the following:
 
<math> \displaystyle
\frac{\cos\left(t \ln\left({\tfrac{p}{q}}\right)\right) - i\sin\left(t \ln\left({\tfrac{p}{q}}\right)\right)}{(pq)^{\sigma}} +
\frac{\cos\left(t \ln\left({\tfrac{q}{p}}\right)\right) - i\sin\left(t \ln\left({\tfrac{q}{p}}\right)\right)}{(pq)^{\sigma}}</math>
 
Now, noting that ln(p/q) = -ln(q/p) and that sin is an odd function, we can see that the sin terms cancel out, leaving
 
<math> \displaystyle
\frac{\cos\left(t \ln\left({\tfrac{p}{q}}\right)\right)}{(pq)^{\sigma}} +
\frac{\cos\left(t \ln\left({\tfrac{q}{p}}\right)\right)}{(pq)^{\sigma}}</math>
 
Now, since every term in these two summations has a pair like this, and since we've done nothing but continued rearrangements of an absolutely convergent series, we can modify the original three-part summation to cancel the sin terms out as follows:
 
<math> \displaystyle
\left| \zeta(s) \right|^2 = \sum_{n=d} \left[ \frac{\cos\left(t \ln\left({\tfrac{n}{d}}\right)\right)}{(nd)^{\sigma}} \right] +
\sum_{n>d} \left[ \frac{\cos\left(t \ln\left({\tfrac{n}{d}}\right)\right)}{(nd)^{\sigma}} \right] +
\sum_{n< d} \left[ \frac{\cos\left(t \ln\left({\tfrac{n}{d}}\right)\right)}{(nd)^{\sigma}} \right]</math>
 
Putting the whole thing back into one series, we get
 
<math> \displaystyle
\left| \zeta(s) \right|^2 = \sum_{n,d} \frac{\cos\left(t \ln\left({\tfrac{n}{d}}\right)\right)}{(nd)^{\sigma}}</math>
 
Finally, by making the mysterious substitution t = 2π/ln(2) · x, the musical implications of the above will start to reveal themselves:
 
<math> \displaystyle
\left| \zeta(s) \right|^2 = \sum_{n,d} \frac{\cos\left(2\pi x \log_2\left(\tfrac{n}{d}\right)\right)}{(nd)^{\sigma}}</math>
 
Let's take a breather and see what we've got.
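
Before interpreting this, here is a quick numerical sanity check of the identity (a Python/mpmath sketch; the rolloff σ = 2, the point x = 12 and the truncation bound N are arbitrary illustrative choices): the truncated double sum should approach |ζ(σ + it)|² as N grows.

<syntaxhighlight lang="python">
from mpmath import mp, mpf, mpc, pi, log, cos, zeta

mp.dps = 15

sigma, x = 2, 12                 # rolloff and edo, chosen only for illustration
t = 2 * pi * x / log(2)

lhs = abs(zeta(mpc(sigma, t))) ** 2

N = 200                          # truncation bound for the double sum
rhs = sum(cos(2 * pi * x * log(mpf(n) / d, 2)) / (n * d) ** sigma
          for n in range(1, N + 1) for d in range(1, N + 1))

print(lhs, rhs)                  # close; the gap shrinks as N grows
</syntaxhighlight>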
 
== Interpretation Of Results: "Cosine Relative Error" ==
 
For every strictly positive rational n/d, there is a cosine term which, as a function of x, has period 1/log<sub>2</sub>(n/d). This cosine peaks at x = N/log<sub>2</sub>(n/d) for every integer N - in other words, at the x corresponding to N equal divisions of the rational number n/d - and hits troughs midway between.
 
Our mysterious substitution above was chosen to set the units for this up nicely. The variable x now happens to be measured in divisions of the octave. (The original variable t, which was the imaginary part of the zeta argument s, can be thought of as the number of divisions of the interval e<sup>2π</sup> ≈ 535.49, or what [[Keenan_Pepper|Keenan Pepper]] has called the "natural interval.")
 
As mentioned in Gene's original zeta derivation, these cosine functions can be thought of as good approximations to the terms in the TE error computation, which are all the squared errors for the different primes. Rather than taking the square of the error, we instead put the error through the function <math>(1-\cos(2\pi\epsilon))/2</math>, where ε is the error measured in steps - which is "close enough" when the error is small. Since we are always rounding off to the best mapping, this error is never more than 0.5 steps of the EDO, so since we have <math> -0.5 < \epsilon < 0.5</math> we have a decent enough approximation.
 
We will call this '''cosine (relative) error''', by analogy with '''TE (relative) error'''. It is easy to see that the cosine error is approximately equal to the TE error when the error is small, and only diverges slightly for large errors.
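
To make the "close enough" claim concrete: writing ε for the mapping error in steps as above, a standard Taylor expansion (supplied here for clarity) gives

<math> \displaystyle
\frac{1 - \cos(2\pi\epsilon)}{2} = \pi^2\epsilon^2 - \frac{\pi^4\epsilon^4}{3} + \cdots</math>

so for small errors the cosine error is just the squared error scaled by a constant, with the first correction only entering at fourth order.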
 
There are three major differences between our "cosine error" functions (and the way we're incorporating them into the result) and what TE error does:
 
1. First, the function here is flipped upside down - that is, we're measuring "accuracy" rather than error - as well as shifted vertically down along the y-axis. Since it is trivial to convert between the two, and since we only care about the relative rankings of EDOs, it is clear that we're measuring essentially the same thing.
2. Instead of weighting each interval by <math>1/\log(nd)</math>, we weight it by <math>1/(nd)^\sigma</math>.
3. Instead of only looking at the primes, as we do in TE, we are now looking at 'all' intervals, and in particular looking at the best mapping for each interval.
 
The last one is nontrivial, and we will go into detail below.
 
There are also a few notes we will only write in passing, for now, perhaps to build on later:
1. If we do want <math>1/\log(nd)</math> weighting, we can derive this kind of weighting from an antiderivative of the zeta function.
2. If we only want the primes, rather than all intervals, we can use something called the "Prime Zeta Function" to get those kinds of summations.
3. If we do want the true TE squared error rather than our cosine error, then we would end up getting something called "parabolic waves" rather than cosine waves for each interval. A parabolic wave is the antiderivative of a sawtooth wave, and as it is a periodic signal, it has a Fourier series and can be expressed as a sum of sinusoids. We can use this to get a derivation of the squared error as an infinite sum of zeta functions.
 
For now, though, we will focus only on the basic zeta result that we have.
 
Going back to the infinite summation above, we note that these cosine error (or really "cosine accuracy") functions are being weighted by 1/(nd)<sup>σ</sup>. Note that σ, which is the real part of the zeta argument s, serves as sort of a complexity weighting - it determines how quickly complex rational numbers become "irrelevant." Framed another way, we can think of it as the degree of "'''rolloff'''" of the resultant (musical, not mathematical) harmonic series formed by those rationals with d=1. Note that this rolloff is much stronger than the usual 1/log(nd) rolloff exhibited by TE error, which is one reason that zeta converges to something coherent for all rational numbers, whereas TE fails to converge as the limit increases. We will use the term "rolloff" to identify the variable σ below.
 
Putting this all together, we can fix σ, specifying a rolloff, and then let x (or t) vary, specifying an EDO. The resulting function gives us the measured accuracy of EDOs across all unreduced rational numbers with respect to the chosen rolloff - a Tenney-weighted sum of cosine accuracy over all unreduced rationals. QED.
 
<span style="line-height: 1.5;">It is extremely noteworthy to mention how "composite" rationals are treated differently than with TE error. In addition to our usual error metric on the primes, we also go to each rational, look for the best "direct" or "patent" mapping of that rational within the EDO, and add 'that' to the EDO's score. In particular, we do this even when the best mapping for some rational doesn't match up with the mapping you'd get from it just looking at the primes.
 
So, for instance, in 16-EDO, the best mapping for 3/2 is 9 steps out of 16, and using that mapping, we get that 9/8 is 2 steps (9*2 - 16 = 2). However, there is a better mapping for 9/8 at 3 steps - one which ignores the fact that it is no longer equal to two 3/2's. This can be particularly useful for playing chords: 16-EDO's "direct mapping" for 9 is useful when playing the chord 4:5:7:9, and the "indirect" or "prime-based" mapping for 9 is useful when playing the "major 9" chord 8:10:12:15:18. We can think of the zeta function as rewarding equal temperaments not just for having a good approximation of the primes, but also for having good "extra" approximations of rationals which can be used in this way. And although 16-EDO is pretty high error, similar phenomena can be found for any EDO which becomes [[consistency|inconsistent]] for some chord of interest.
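
A small sketch of this distinction (Python; the function names and the unweighted rounding are our own illustrative choices): compare the "direct" rounding of an interval against the mapping assembled from the edo's prime mappings.

<syntaxhighlight lang="python">
from math import log2

def direct_steps(edo, num, den=1):
    """Best "direct" (generalized-patent) mapping of num/den in the given edo."""
    return round(edo * log2(num / den))

def prime_based_steps(edo, factorization):
    """Mapping assembled from the edo's prime mappings; `factorization` maps
    each prime to its exponent, e.g. 9/8 = 3^2 * 2^-3 -> {3: 2, 2: -3}."""
    return sum(exp * direct_steps(edo, p) for p, exp in factorization.items())

print(direct_steps(16, 3, 2))                # 9: 16edo's best 3/2 is 9 steps
print(prime_based_steps(16, {3: 2, 2: -3}))  # 2: prime-based 9/8 = 2*25 - 3*16 = 2 steps
print(direct_steps(16, 9, 8))                # 3: the direct mapping of 9/8 is 3 steps
</syntaxhighlight>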
 
One way to frame this in the usual group-theoretic paradigm is to consider the group in which each strictly positive rational number is given its own linearly independent basis element. In other words, look at the [https://en.wikipedia.org/wiki/Free_group free group] over the strictly positive rationals, which we'll call "'''meta-JI'''." The zeta function can then be thought of as yielding an error for all meta-JI [[Patent_val|generalized patent vals]]. Whether this can be extended to all meta-JI vals, or modified to yield something nice like a "norm" on the group of meta-JI vals, is an open question. Regardless, this may be a useful conceptual bridge to understand how to relate the zeta function to "ordinary" regular temperament theory.
 
Now, one nitpick to notice above is that this expression technically involves all 'unreduced' rationals, e.g. there will be a cosine error term not just for 3/2, but also for 6/4, 9/6, etc. However, we can easily show that the same expression also measures the cosine relative error over the reduced rationals:
 
==From Unreduced Rationals to Reduced Rationals==
Let's go back to this expression here:
 
<math> \displaystyle
\left| \zeta(s) \right|^2 = \sum_{n,d} \frac{\cos\left(t \ln\left({\tfrac{n}{d}}\right)\right)}{(nd)^{\sigma}}</math>
 
Note that since there's no restriction that n and d be coprime, the "rationals" we're using here don't have to be reduced. So this shows that zeta yields an error metric over all unreduced rationals, but leaves open the question of how reduced rationals are handled. It turns out that the same function also measures the error of reduced rationals, scaled only by a rolloff-dependent constant factor across all EDOs.
 
To see this, let's first note that every "unreduced" rational n/d can be decomposed into the product of a reduced rational n'/d' and a common factor c/c. Furthermore, note that for any reduced rational n'/d', we can generate all unreduced rationals n/d corresponding to it by multiplying it by all such common factors c/c, where c is a strictly positive natural number.
 
This allows us to change our original summation so that it's over three variables, n', d', and c, where n' and d' are coprime, and c is a strictly positive natural number:
 
<math> \displaystyle
\left| \zeta(s) \right|^2 = \sum_{n',d',c} \frac{\cos\left(t \ln\left({\tfrac{cn'}{cd'}}\right)\right)}{(cn' \cdot cd')^{\sigma}}</math>
 
Now, the common factor c/c cancels out inside the log in the numerator. However, in the denominator, we get an extra factor of c<sup>2</sup> to contend with. This yields
 
<math> \displaystyle
\left| \zeta(s) \right|^2 = \sum_{n',d',c} \frac{\cos\left(t \ln\left({\tfrac{n'}{d'}}\right)\right)}{(c^2 \cdot n'd')^{\sigma}}
= \sum_{n',d',c} \left[ \frac{1}{c^{2\sigma}} \cdot \frac{\cos\left(t \ln\left({\tfrac{n'}{d'}}\right)\right)}{(n'd')^{\sigma}} \right]</math>
 
Now, since we're still assuming that σ > 1 and everything is absolutely convergent, we can decompose this into a product of series as follows
 
<math> \displaystyle
\left| \zeta(s) \right|^2 = \left[ \sum_c \frac{1}{c^{2\sigma}} \right] \cdot \left[ \sum_{n',d'} \frac{\cos\left(t \ln\left({\tfrac{n'}{d'}}\right)\right)}{(n'd')^{\sigma}} \right]</math>
 
Finally, we note that on the left summation we simply have another zeta series, yielding
 
<math> \displaystyle
\left| \zeta(s) \right|^2 = \zeta(2\sigma) \cdot \left[ \sum_{n',d'} \frac{\cos\left(t \ln\left({\tfrac{n'}{d'}}\right)\right)}{(n'd')^{\sigma}} \right]</math>
 
<math> \displaystyle
\frac{\left| \zeta(s) \right|^2}{\zeta(2\sigma)} = \sum_{n',d'} \frac{\cos\left(t \ln\left({\tfrac{n'}{d'}}\right)\right)}{(n'd')^{\sigma}}</math>
 
Now, since we're fixing σ and letting t vary, the left zeta term is constant for all EDOs. This demonstrates that the zeta function also measures cosine error over all the reduced rationals, up to a constant factor. QED.
 
==Measuring Error on Harmonics Only==
So far we have shown the following:
 
Error on prime powers: <math>\log |\zeta(\sigma+it)|</math>

Error on unreduced rationals: <math>|\zeta(\sigma+it)|^2</math>

Error on reduced rationals: <math>|\zeta(\sigma+it)|^2/\zeta(2\sigma)</math>
 
Since the second is a simple monotonic transformation of the first, we can see that the same function basically measures both the relative error on just the prime powers, and also on all unreduced rationals. The third function is really just the second function divided by a constant, since we only really care about letting <math>t</math> vary - we instead typically set <math>\sigma</math> to some value which represents the weighting "rolloff" on rationals.
 
We also note that, above, Gene tended to look at things in terms of the <math>Z(t)</math> function, which is defined so that we have <math>|Z(t)| = |\zeta(1/2+it)|</math>. So, the absolute value of the <math>Z</math> function is also monotonically equivalent to the above set of expressions, so that any one of these things will produce the same ranking on EDOs.
 
It turns out that using the same principles of derivation above, we can also derive another expression, this time for the relative error on only the harmonics - i.e. those intervals of the form <math>1/1, 2/1, 3/1, ... n/1, ...</math>. This was studied in a paper by Peter Buch called [[:File:Zetamusic5.pdf|"Favored cardinalities of scales"]]. The expression is:
 
Error on harmonics only: <math>|\textbf{Re}[\zeta(\sigma+it)]|</math>
 
Note that, although the last four expressions were all monotonic transformations of one another, this one is not - this is the 'real part' of the zeta function, whereas the others were all some simple monotonic function of the 'absolute value' of the zeta function. The results, however, are very similar - in particular, the peaks approximately coincide with one another, shifted by only a small amount (at least for reasonably-sized EDOs up to a few hundred).
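
For convenience, here is a compact sketch evaluating these expressions at a given edo (Python with mpmath; the default rolloff σ = 2 and the helper name are our own choices, and the last entry uses mpmath's siegelz for the critical-line Z function):

<syntaxhighlight lang="python">
from mpmath import mp, mpf, mpc, pi, log, re, zeta, siegelz

mp.dps = 20

def zeta_scores(edo, sigma=2):
    """Evaluate the expressions summarized above at a given edo."""
    t = 2 * pi * mpf(edo) / log(2)
    z = zeta(mpc(sigma, t))
    return {
        "prime powers:        log|zeta|": log(abs(z)),
        "unreduced rationals: |zeta|^2": abs(z) ** 2,
        "reduced rationals:   |zeta|^2 / zeta(2 sigma)": abs(z) ** 2 / zeta(2 * sigma),
        "harmonics only:      |Re zeta|": abs(re(z)),
        "critical line:       |Z(t)|": abs(siegelz(t)),
    }

for name, value in zeta_scores(12).items():
    print(name, value)
</syntaxhighlight>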
 
==Relationship to Harmonic Entropy==


The expression