The Riemann zeta function and tuning: Difference between revisions

The Riemann zeta function is a famous mathematical function, best known for its relationship with the Riemann hypothesis, an unsolved problem dating to 1859 that concerns the distribution of the prime numbers. However, it also has an incredible musical interpretation as measuring the "harmonicity" of an equal temperament. Put simply, the zeta function shows, in a certain sense, how well a given equal temperament approximates the harmonic series, and indeed ''all'' rational numbers, even up to "infinite-limit JI."


As a result, although the zeta function is best known for its use in analytic number theory, the zeta function is ever-present in the background of tuning theory—the [[harmonic entropy]] model of [[concordance]] can be shown to be related to the Fourier transform of the zeta function, and several tuning-theoretic metrics, if extended to the infinite-limit, yield expressions that are related to the zeta function. Sometimes these are in terms of the "prime zeta function," which is closely related and can also be derived as a simple expression of the zeta function.


Much of the below is thanks to the insights of [[Gene Ward Smith]]. First comes the original derivation as he presented it, followed by a different derivation from [[Mike Battaglia]] which extends some of the results.
Suppose ''x'' is a variable representing some equal division of the octave. For example, if {{nowrap|''x'' {{=}} 80}}, ''x'' reflects 80edo with a step size of 15 cents and with pure octaves. Suppose that ''x'' can also be continuous, so that it can also represent fractional or "nonoctave" divisions as well. The [[Bohlen-Pierce|Bohlen-Pierce scale]], 13 equal divisions of 3/1, is approximately 8.202 equal divisions of the "octave" (although the octave itself does not appear in this tuning), and would hence be represented by a value of {{nowrap|''x'' {{=}} 8.202}}.


Now suppose that ⌊''x''⌉ denotes the difference between ''x'' and the integer nearest to ''x'':


<math>\rround{x} = \abs{x - \floor{x + \frac{1}{2}}}</math>


For example, ⌊8.202⌉ would be 0.202, since it is the difference between 8.202 and the nearest integer, which is 8. Meanwhile, ⌊7.95⌉ would be 0.05, which is the difference between 7.95 and the nearest integer, which is 8.
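The nearest-integer distance can be sketched in a few lines of Python, written out to match the floor-based definition above (the name <code>rround</code> is just an illustrative stand-in for the bracket notation):

```python
import math

def rround(x):
    """Distance from x to the nearest integer: |x - floor(x + 1/2)|."""
    return abs(x - math.floor(x + 0.5))

print(rround(8.202))  # distance from 8.202 to 8, about 0.202
print(rround(7.95))   # distance from 7.95 to 8, about 0.05
```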


For any value of ''x'', we can construct a ''p''-limit [[Patent_val|generalized patent val]]. We do so by rounding {{nowrap|''x'' log<sub>2</sub>(''q'')}} to the nearest integer for each prime ''q'' up to ''p''. Now consider the function
<math>\displaystyle \xi_p(x) = \sum_{\substack{2 \leq q \leq p \\ q \text{ prime}}} \left(\frac{\rround{x \log_2 q}}{\log_2 q}\right)^2</math>


This function has local minima, corresponding to associated generalized patent vals. The minima occur for values of ''x'' which are the [[Tenney-Euclidean_Tuning|Tenney-Euclidean tuning]]s of the octaves of the associated vals, while ξ<sub>''p''</sub> for these minima is the square of the [[Tenney-Euclidean_metrics|Tenney-Euclidean relative error]] of the val—equal to the TE error times the TE complexity, and sometimes known as "TE simple badness."
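As a rough numerical sketch (function names illustrative, not from any library), here is the 5-limit version of this sum, with a brute-force scan locating the local minimum near 12edo; per the text, that minimum sits slightly away from exactly 12, at the TE-tuned octave:

```python
import math

def rround(x):
    return abs(x - math.floor(x + 0.5))

def xi(x, primes=(2, 3, 5)):
    """5-limit version of the squared weighted-error sum from the text."""
    return sum((rround(x * math.log2(q)) / math.log2(q)) ** 2 for q in primes)

# xi dips near good edos: 12edo scores far better than 11edo in the 5-limit.
print(xi(12.0), xi(11.0))

# Locate the local minimum near x = 12 by a fine scan.
xs = [11.9 + i * 0.0005 for i in range(401)]
x_min = min(xs, key=xi)
print(x_min)
```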


Now suppose we don't want a formula for any specific prime limit, but which applies to all primes. We can't take the above sum to infinity, since it doesn't converge. However, we could change the weighting factor to a power so that it does converge:
<math>\displaystyle \xi_\infty(x) = \sum_{\substack{q \geq 2 \\ q \text{ prime}}} \frac{\rround{x \log_2 q}^2}{q^s}</math>


If ''s'' is greater than one, this does converge. However, we might want to make a few adjustments. For one thing, if the error is low enough that the tuning is consistent, then the error of the square of a prime is twice that of the prime, that of the cube is three times, and so forth until the error becomes inconsistent. When the weighting uses logarithms and error measures are consistent, the logarithmic weighting cancels this effect out, so we might consider that prime powers were implicitly included in the Tenney-Euclidean measure. We can go ahead and include them by adding a factor of {{sfrac|1|''n''}} for each prime power ''p''<sup>''n''</sup>. A somewhat peculiar but useful way to write the result of doing this is in terms of the {{w|Von Mangoldt function}}, an {{w|arithmetic function}} on positive integers which is equal to ln(''p'') on prime powers ''p''<sup>''n''</sup> and zero elsewhere. This is written using a capital lambda, as Λ(''n''), and in terms of it we can include prime powers in our error function as


<math>\displaystyle \xi_\infty(x) = \sum_{n \geq 1} \frac{\Lambda(n)}{\ln n} \frac{\rround{x \log_2 n}^2}{n^s}</math>
where the summation is taken formally over all positive integers, though only the primes and prime powers make a nonzero contribution.
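A truncated numerical sketch of this sum, with a naive trial-division von Mangoldt function (the truncation bound and all names are illustrative):

```python
import math

def rround(x):
    return abs(x - math.floor(x + 0.5))

def mangoldt(n):
    """Von Mangoldt function: ln p if n is a power of a prime p, else 0."""
    for p in range(2, n + 1):
        if n % p == 0:
            while n % p == 0:
                n //= p
            return math.log(p) if n == 1 else 0.0
    return 0.0  # n == 1

def xi_inf(x, s=2.0, N=2000):
    """Truncation of the prime-power error sum at N terms."""
    total = 0.0
    for n in range(2, N + 1):
        lam = mangoldt(n)
        if lam:
            total += (lam / math.log(n)) * rround(x * math.log2(n)) ** 2 / n ** s
    return total

print(xi_inf(12.0), xi_inf(11.0))
```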


Another consequence of the above definition which might be objected to is that it results in a function with a {{w|Continuous function#Relation to differentiability and integrability|discontinuous derivative}}, whereas a smooth function might be preferred. The function ⌊''x''⌉<sup>2</sup> is quadratically increasing near integer values of ''x'', and is periodic with period 1. Another function with these same properties is {{nowrap|1 − cos(2π''x'')}}, which is smooth and in fact an {{w|entire function}}. Let us therefore now define for any {{nowrap|''s'' &gt; 1}}:


<math>\displaystyle E_s(x) = \sum_{n \geq 1} \frac{\Lambda(n)}{\ln n} \frac{1 - \cos(2 \pi x \log_2 n)}{n^s}</math>
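Here is a truncated numerical sketch of E<sub>''s''</sub>. Summing over prime powers ''p''<sup>''k''</sup> with weight {{sfrac|1|''k''}} is equivalent to the Λ(''n'')/ln(''n'') weighting, since Λ(''p''<sup>''k''</sup>)/ln(''p''<sup>''k''</sup>) = 1/''k''; the truncation bound and names are illustrative:

```python
import math

def primes_upto(m):
    """Simple sieve of Eratosthenes."""
    sieve = bytearray([1]) * (m + 1)
    sieve[0:2] = b"\x00\x00"
    for i in range(2, int(m ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(range(i * i, m + 1, i)))
    return [i for i in range(2, m + 1) if sieve[i]]

def E(x, s=2.0, M=5000):
    """Truncated smooth error: sum over prime powers p^k <= M, weight 1/k."""
    total = 0.0
    for p in primes_upto(M):
        pk, k = p, 1
        while pk <= M:
            total += (1 - math.cos(2 * math.pi * x * math.log2(pk))) / (k * pk ** s)
            k += 1
            pk *= p
    return total

print(E(12.0), E(11.0))
```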
<math>\displaystyle F_s(x) = \sum_{n \geq 1} \frac{\Lambda(n)}{\ln n} \frac{\cos(2 \pi x \log_2 n)}{n^s}</math>


This new function has the property that {{nowrap|F<sub>s</sub>(''x'') {{=}} F<sub>s</sub>(0) − E<sub>s</sub>(''x'')}}, so that all we have done is flip the sign of E<sub>s</sub>(''x'') and offset it vertically. This now increases to a maximum value for low errors, rather than declining to a minimum. Of more interest is the fact that it is a known mathematical function, which can be expressed in terms of the real part of the logarithm of the {{w|Riemann zeta function}}:


<math>\displaystyle F_s(x) = \Re \ln \zeta\left(s + \frac{2 \pi i}{\ln 2}x\right)</math>
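This relationship can be checked numerically. In the sketch below (all names illustrative), F<sub>''s''</sub>(''x'') is computed from the truncated prime-power sum, while zeta is computed from its Dirichlet series with a couple of Euler–Maclaurin correction terms, a standard acceleration trick not from the text:

```python
import cmath
import math

def primes_upto(m):
    sieve = bytearray([1]) * (m + 1)
    sieve[0:2] = b"\x00\x00"
    for i in range(2, int(m ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(range(i * i, m + 1, i)))
    return [i for i in range(2, m + 1) if sieve[i]]

def F(x, s=2.0, M=100_000):
    """Truncated prime-power sum: cos(t ln p^k) / (k p^{ks}) over p^k <= M."""
    t = 2 * math.pi * x / math.log(2)
    total = 0.0
    for p in primes_upto(M):
        pk, k = p, 1
        while pk <= M:
            total += math.cos(t * math.log(pk)) / (k * pk ** s)
            k += 1
            pk *= p
    return total

def zeta(z, N=1000):
    """Dirichlet series plus Euler-Maclaurin tail terms; for Re(z) > 1."""
    partial = sum(n ** -z for n in range(1, N + 1))
    return partial + N ** (1 - z) / (z - 1) - N ** -z / 2 + z * N ** (-z - 1) / 12

x, s = 12.0, 2.0
lhs = F(x, s)
rhs = cmath.log(zeta(complex(s, 2 * math.pi * x / math.log(2)))).real
print(lhs, rhs)  # the two values should agree closely
```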


=== Into the critical strip ===
So long as {{nowrap|''s'' &ge; 1}}, the absolute value of the zeta function can be seen as a relative error measurement. However, the rationale for that view of things weakens when {{nowrap|''s'' &lt; 1}}, particularly in the [http://mathworld.wolfram.com/CriticalStrip.html critical strip], where {{nowrap|0 &lt; ''s'' &lt; 1}}. As ''s'' approaches the value {{nowrap|''s'' {{=}} {{sfrac|1|2}}}} of the [http://mathworld.wolfram.com/CriticalLine.html critical line], the information content, so to speak, of the zeta function concerning higher primes increases, and it behaves increasingly like a badness measure (or more correctly, since we have inverted it, like a goodness measure). The quasi-symmetric [https://planetmath.org/encyclopedia/FunctionalEquationOfTheRiemannZetaFunction.html functional equation] of the zeta function tells us that past the critical line the information content starts to decrease again, with {{nowrap|1 − ''s''}} and ''s'' having the same information content. Hence it is the zeta function between {{nowrap|''s'' {{=}} {{sfrac|1|2}}}} and {{nowrap|''s'' {{=}} 1}}, and especially the zeta function along the critical line {{nowrap|''s'' {{=}} {{sfrac|1|2}}}}, which is of the most interest.


As {{nowrap|''s'' &gt; 1}} gets larger, the Dirichlet series for the zeta function is increasingly dominated by the 2 term, getting ever closer to simply {{nowrap|1 + 2<sup>−''z''</sup>}}, which approaches 1 as {{nowrap|''s'' {{=}} Re(''z'')}} becomes larger. When {{nowrap|''s'' ≫ 1}} and ''x'' is an integer, the real part of zeta is approximately {{nowrap|1 + 2<sup>−''s''</sup>}}, and the imaginary part is approximately zero; that is, zeta is approximately real. Starting from {{nowrap|''s'' {{=}} +&infin;}} with ''x'' an integer, we can trace a line back towards the critical strip on which zeta is real. Since when {{nowrap|''s'' ≫ 1}} the derivative is approximately −{{sfrac|ln(2)|2<sup>''s''</sup>}}, it is negative on this line of real values for zeta, meaning that the real value for zeta increases as ''s'' decreases. The zeta function approaches 1 uniformly as ''s'' increases to infinity, so as ''s'' decreases, the real-valued zeta function along this line of real values continues to increase through all real values from 1 to infinity monotonically. When it crosses the critical line where {{nowrap|''s'' {{=}} {{sfrac|1|2}}}}, it produces a real value of zeta on the critical line. Points on the critical line where {{nowrap|ζ({{frac|1|2}} + ''ig'')}} is real are called "Gram points", after {{w|Jørgen Pedersen Gram}}. We have thus associated each pure-octave edo, where ''x'' is an integer, with a special sort of Gram point.


Because the value of zeta increased continuously as it made its way from +&infin; to the critical line, we might expect the values of zeta at these special Gram points to be relatively large. This would be especially true if −ζ'(''z'') is getting a boost from other small primes as it travels toward the Gram point. A complex formula due to {{w|Bernhard Riemann}}, which he never published because it was so nasty, becomes a bit simpler when used at a Gram point. It is named the {{w|Riemann–Siegel formula}} since {{w|Carl Ludwig Siegel}} went looking for it and was able to reconstruct it after rooting industriously around in Riemann's unpublished papers. From this formula, it is apparent that when ''x'' corresponds to a good edo, the value of {{nowrap|ζ({{frac|1|2}} + ''ig'')}} at the corresponding Gram point should be especially large.


=== The Z function ===
The absolute value of {{nowrap|ζ({{frac|1|2}} + ''ig'')}} at a Gram point corresponding to an edo is near to a local maximum, but not actually at one. At the local maximum, of course, the partial derivative of {{nowrap|ζ({{frac|1|2}} + ''it'')}} with respect to ''t'' will be zero; however this does not mean its derivative there will be zero. In fact, the {{w|Riemann hypothesis}} is equivalent to the claim that all zeros of {{nowrap|ζ'(''s'' + ''it'')}} occur when {{nowrap|''s'' &gt; {{sfrac|1|2}}}}, which is where all known zeros lie. These do not have values of ''t'' corresponding to good edos. For this and other reasons, it is helpful to have a function which is real for values on the critical line but whose absolute value is the same as that of zeta. This is provided by the {{w|''Z'' function}}.


In order to define the Z function, we need first to define the {{w|Riemann–Siegel theta function}}, and in order to do that, we first need to define the [http://mathworld.wolfram.com/LogGammaFunction.html Log Gamma function]. This is not defined as the natural log of the Gamma function since that has a more complicated branch cut structure; instead, the principal branch of the Log Gamma function is defined as having a branch cut along the negative real axis, and is given by the series


<math>\displaystyle\Upsilon(z) = -\gamma z - \ln z + \sum_{k=1}^\infty \left(\frac{z}{k} - \ln\left(1 + \frac{z}{k}\right)\right)</math>


where γ is the {{w|Euler–Mascheroni constant}}. We now may define the Riemann–Siegel theta function as


<math>\displaystyle\theta(z) = \frac{\Upsilon\left(\frac{1 + 2 i z}{4}\right) - \Upsilon\left(\frac{1 - 2 i z}{4}\right)}{2 i} - \frac{\ln(\pi)}{2} z</math>
Expanding the Log Gamma series termwise gives

<math>\displaystyle\theta(t) = -\frac{\gamma + \ln \pi}{2} t - \arctan 2t + \sum_{n=1}^\infty \left(\frac{t}{2n}
- \arctan\left(\frac{2t}{4n+1}\right)\right)</math>


Since the arctangent function is holomorphic in the strip with imaginary part between −1 and 1, it follows from the above formula, or arguing from the previous one, that θ is holomorphic in the strip with imaginary part between −{{frac|1|2}} and {{frac|1|2}}. It may be described for real arguments as an odd real analytic function of ''x'', increasing when {{nowrap|{{!}}''x''{{!}} &gt; 6.29}}. Plots of it may be studied by use of the Wolfram [http://functions.wolfram.com/webMathematica/FunctionPlotting.jsp?name=RiemannSiegelTheta online function plotter].
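As a numerical sanity check (a sketch, not a library routine), the slowly convergent Log Gamma series above can be summed directly and the resulting theta compared against the standard asymptotic expansion {{nowrap|θ(''t'') ≈ (''t''/2)ln(''t''/2π) − ''t''/2 − π/8 + 1/(48''t'')}}; all names below are illustrative:

```python
import cmath
import math

EULER_GAMMA = 0.5772156649015329

def log_gamma(z, K=200_000):
    """Principal branch of Log Gamma via the series in the text (K terms)."""
    total = -EULER_GAMMA * z - cmath.log(z)
    for k in range(1, K + 1):
        total += z / k - cmath.log(1 + z / k)
    return total

def theta(t):
    """Riemann-Siegel theta built from Log Gamma, as defined above."""
    a = log_gamma(complex(0.25, 0.5 * t))   # Upsilon((1 + 2it)/4)
    b = log_gamma(complex(0.25, -0.5 * t))  # Upsilon((1 - 2it)/4)
    return ((a - b) / 2j).real - 0.5 * math.log(math.pi) * t

def theta_asymptotic(t):
    """Standard large-t expansion: (t/2)ln(t/2pi) - t/2 - pi/8 + 1/(48t)."""
    return 0.5 * t * math.log(t / (2 * math.pi)) - 0.5 * t - math.pi / 8 + 1 / (48 * t)

print(theta(20.0), theta_asymptotic(20.0))  # the two should agree closely
```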


Using the theta and zeta functions, we define the {{w|Z function}} as
<math>Z(t) = \exp(i \theta(t)) \zeta\left(\frac{1}{2} + it\right)</math>


Since θ is holomorphic on the strip with imaginary part between −{{sfrac|1|2}} and {{sfrac|1|2}}, so is Z. Since the exponential function has no zeros, the zeros of Z in this strip correspond one to one with the zeros of ζ in the critical strip. Since the exponential of an imaginary argument has absolute value 1, the absolute value of Z along the real axis is the same as the absolute value of ζ at the corresponding place on the critical line. And since theta was defined so as to give precisely this property, Z is a real even function of the real variable ''t''.


Using the [http://functions.wolfram.com/webMathematica/FunctionPlotting.jsp?name=RiemannSiegelZ online plotter] we can plot Z in the regions corresponding to scale divisions, using the conversion factor {{nowrap|''t'' {{=}} {{sfrac|2π|ln(2)}}''x''}}, for ''x'' a number near or at an edo number. Hence, for instance, to plot 12 plot around 108.777, to plot 31 plot around 281.006, and so forth. An alternative plotter is the applet [http://web.viu.ca/pughg/RiemannZeta/RiemannZetaLong.html here].
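The conversion factor is a one-liner; a Python sketch (helper name illustrative):

```python
import math

def edo_to_t(x):
    """Map x divisions of the octave to the zeta argument t = (2*pi/ln 2) x."""
    return 2 * math.pi * x / math.log(2)

print(edo_to_t(12))  # about 108.777
print(edo_to_t(31))  # about 281.006
```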


If you have access to {{w|Mathematica}}, which has Z, zeta and theta as a part of its suite of initially defined functions, you can do even better. Below is a Mathematica-generated plot of Z({{frac|2π''x''|ln(2)}}) in the region around 12edo:


[[File:plot12.png|alt=plot12.png|plot12.png]]
== Mike Battaglia's expanded results ==
=== Zeta yields "relative error" over all rationals ===
Above, Gene proves that the zeta function measures the [[Tenney-Euclidean_metrics|Tenney–Euclidean relative error]], sometimes called "Tenney–Euclidean Simple Badness," of any EDO, taken over all "prime powers". The relative error is simply equal to the tuning error times the size of the EDO, so we can easily get the raw "non-relative" tuning error from this as well by simply dividing by the size of the EDO.


Here, we strengthen that result to show that the zeta function additionally measures weighted relative error over all rational numbers, relative to the size of the EDO.
We start from the Dirichlet series for the zeta function, valid for {{nowrap|Re(''s'') &gt; 1}}:

<math> \displaystyle
\zeta(s) = \sum_n n^{-s}</math>


Now let's do two things: we're going to expand {{nowrap|''s'' {{=}} σ + ''it''}}, and we're going to multiply ζ(''s'') by its conjugate ζ(''s'')', noting that {{nowrap|ζ(''s'')' {{=}} ζ(''s''{{'}})}} and {{nowrap|ζ(''s'') ⋅ ζ(''s'')' {{=}} {{!}}ζ(''s''){{!}}<sup>2</sup>}}. We get:


<math> \displaystyle
\abs{ \zeta(s) }^2 = \left(\sum_n n^{-\sigma - it}\right)\left(\sum_d d^{-\sigma + it}\right)</math>

where ''d'' is a new variable used internally in the second summation.


Now, let's focus on {{nowrap|σ &gt; 1}}, so that both series are absolutely convergent. The following rearrangement of terms is then justified:


<math> \displaystyle
\abs{ \zeta(s) }^2 = \sum_{n,d} \frac{e^{-it \ln\left(\tfrac{n}{d}\right)}}{(nd)^{\sigma}}
= \sum_{n,d} \frac{\cos\left(t \ln\left({\tfrac{n}{d}}\right)\right) - i\sin\left(t \ln\left({\tfrac{n}{d}}\right)\right)}{(nd)^{\sigma}}</math>


where the last equality makes use of the fact that {{nowrap|cos(−''x'') {{=}} cos(''x'')}} and {{nowrap|sin(−''x'') {{=}} −sin(''x'')}}.


Now, let's decompose the sum into three parts: {{nowrap|''n'' {{=}} ''d''}}, {{nowrap|''n'' &gt; ''d''}}, and {{nowrap|''n'' &lt; ''d''}}. Here's what we get:
<math> \displaystyle
\abs{ \zeta(s) }^2 = \sum_{n} \frac{1}{n^{2\sigma}}
+ \sum_{p < q} \frac{\cos\left(t \ln\left({\tfrac{q}{p}}\right)\right) + i\sin\left(t \ln\left({\tfrac{q}{p}}\right)\right)}{(pq)^{\sigma}}
+ \sum_{p < q}
\frac{\cos\left(t \ln\left({\tfrac{q}{p}}\right)\right) - i\sin\left(t \ln\left({\tfrac{q}{p}}\right)\right)}{(pq)^{\sigma}}</math>

where each pair with {{nowrap|''n'' ≠ ''d''}} has been written as (''p'', ''q'') with {{nowrap|''p'' &lt; ''q''}}.


Now, noting that {{nowrap|ln({{frac|''p''|''q''}}) {{=}} −ln({{frac|''q''|''p''}})}} and that sin is an odd function, we can see that the sin terms cancel out, leaving


<math> \displaystyle
\abs{ \zeta(s) }^2 = \sum_{n,d} \frac{\cos\left(t \ln\left({\tfrac{n}{d}}\right)\right)}{(nd)^{\sigma}}</math>
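This identity can be checked numerically: for sums truncated at the same bound N, the equality is exact, since the sine terms cancel in (''n'', ''d'') ↔ (''d'', ''n'') pairs, so no tail estimate is needed. A Python sketch with illustrative names:

```python
import cmath
import math

def truncated_pair(sigma, t, N=200):
    """Compare |sum_{n<=N} n^{-s}|^2 with the double cosine sum over n, d <= N."""
    single = sum(n ** -complex(sigma, t) for n in range(1, N + 1))
    double = sum(
        math.cos(t * math.log(n / d)) / (n * d) ** sigma
        for n in range(1, N + 1)
        for d in range(1, N + 1)
    )
    return abs(single) ** 2, double

a, b = truncated_pair(2.0, 2 * math.pi * 12 / math.log(2))
print(a, b)  # equal up to floating-point rounding
```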


Finally, by making the mysterious substitution {{nowrap|''t'' {{=}} {{sfrac|2π|ln(2)}}''x''}}, the musical implications of the above will start to reveal themselves:

<math> \displaystyle
\abs{ \zeta(s) }^2 = \sum_{n,d} \frac{\cos\left(2 \pi x \log_2\left({\tfrac{n}{d}}\right)\right)}{(nd)^{\sigma}}</math>

=== Interpretation of results: "cosine relative error" ===
For every strictly positive rational ''n''/''d'', there is a cosine in ''x'' with period {{nowrap|{{sfrac|1|log<sub>2</sub>({{frac|''n''|''d''}})}}}}. This cosine peaks at {{nowrap|''x'' {{=}} {{sfrac|''N''|log<sub>2</sub>({{frac|''n''|''d''}})}}}} for all integers ''N'', or in other words, at the ''N''th equal division of the rational number {{frac|''n''|''d''}}, and hits troughs midway between.

Our mysterious substitution above was chosen to set the units for this up nicely. The variable ''x'' now happens to be measured in divisions of the octave. (The original variable ''t'', which was the imaginary part of the zeta argument ''s'', can be thought of as the number of divisions of the interval {{nowrap|''e''<sup>2π</sup> ≈ 535.49}}, or what [[Keenan_Pepper|Keenan Pepper]] has called the "natural interval.")

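To make the change of variables concrete, here is a short sketch (our own helper names, not from any library) converting between ''x'', measured in divisions of the octave, and ''t'', measured in divisions of the natural interval:

```python
import math

def t_from_x(x):
    """Convert x, in equal divisions of the octave, to t = 2*pi*x / ln(2)."""
    return 2 * math.pi * x / math.log(2)

def x_from_t(t):
    """Convert t, the imaginary part of s, back to divisions of the octave."""
    return t * math.log(2) / (2 * math.pi)

# The "natural interval" is e^(2*pi) as a frequency ratio, about 535.49;
# t counts equal divisions of this interval.
natural_interval = math.exp(2 * math.pi)
print(round(natural_interval, 2))  # → 535.49
print(round(t_from_x(12), 3))      # 12-EDO corresponds to t ≈ 108.777
```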
As mentioned in Gene's original zeta derivation, these cosine functions can be thought of as good approximations to the terms in the TE error computation, which are all the squared errors for the different primes. Rather than taking the square of the error, we instead put the error through the function {{sfrac|1 − cos(''x'')|2}}, which is "close enough" for small values of ''x''. Since we are always rounding off to the best mapping, this error is never more than 0.5 steps of the EDO, so since we have {{nowrap|−0.5 &lt; ''x'' &lt; 0.5}} we have a decent enough approximation.

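As a quick numerical sketch (our own, with the error expressed in EDO steps and one step taken as 2π radians), the cosine error tracks a quadratic error measure closely for small errors and diverges for large ones:

```python
import math

# Compare (1 - cos u)/2 against its small-angle quadratic approximation u^2/4,
# where u = 2*pi * (error in EDO steps); |error| never exceeds half a step.
for steps_err in (0.05, 0.25, 0.5):
    u = 2 * math.pi * steps_err
    cosine_err = (1 - math.cos(u)) / 2
    quadratic = u * u / 4
    print(steps_err, round(cosine_err, 4), round(quadratic, 4))
```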
We will call this '''cosine (relative) error''', by analogy with '''TE (relative) error'''. It is easy to see that the cosine error is approximately equal to the TE error when the error is small, and diverges only slightly for large errors.

There are three major differences between our "cosine error" functions (together with the way we're incorporating them into the result) and what TE is doing:

# First, the function here is flipped upside down—that is, we're measuring "accuracy" rather than error—as well as shifted vertically down along the y-axis. Since it is trivial to convert between the two, and since we only care about the relative rankings of EDOs, it is clear that we're measuring essentially the same thing.
# Instead of weighting each interval by {{sfrac|1|log(''nd'')}}, we weight it by {{sfrac|1|(''nd'')<sup>σ</sup>}}.
# Instead of only looking at the primes, as we do in TE, we are now looking at ''all'' intervals, and in particular looking at the best mapping for each interval.

For now, though, we will focus only on the basic zeta result that we have.

Going back to the infinite summation above, we note that these cosine error (or really "cosine accuracy") functions are being weighted by {{sfrac|1|(''nd'')<sup>σ</sup>}}. Note that σ, which is the real part of the zeta argument ''s'', serves as a sort of complexity weighting—it determines how quickly complex rational numbers become "irrelevant." Framed another way, we can think of it as the degree of "'''rolloff'''" in the resultant (musical, not mathematical) harmonic series formed by those rationals with {{nowrap|''d'' {{=}} 1}}. Note that this rolloff is much stronger than the usual {{sfrac|1|log(''nd'')}} rolloff exhibited by TE error, which is one reason that zeta converges to something coherent for all rational numbers, whereas TE fails to converge as the limit increases. We will use the term "rolloff" to identify the variable σ below.

Putting this all together, we can fix σ, specifying a rolloff, and then let ''x'' (or ''t'') vary, specifying an EDO. The resulting function gives us the measured accuracy of EDOs across all unreduced rational numbers with respect to the chosen rolloff: a Tenney-weighted sum of cosine accuracy over all unreduced rationals. QED.

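For a truncated Dirichlet series the rearrangement above is exact: the squared magnitude of Σ ''n''<sup>−''s''</sup> over ''n'' ≤ ''N'' equals the double cosine sum over all pairs ''n'', ''d'' ≤ ''N''. A sketch (our own code, with an arbitrary cutoff ''N'' and rolloff σ = 2):

```python
import math

def truncated_zeta(sigma, t, N):
    """Partial Dirichlet series: sum of n^-(sigma + i*t) for n <= N."""
    s = complex(sigma, t)
    return sum(n ** (-s) for n in range(1, N + 1))

def cosine_sum(sigma, t, N):
    """Double sum of cos(t * ln(n/d)) / (n*d)^sigma over n, d <= N."""
    total = 0.0
    for n in range(1, N + 1):
        for d in range(1, N + 1):
            total += math.cos(t * math.log(n / d)) / (n * d) ** sigma
    return total

sigma, x, N = 2.0, 12.0, 40
t = 2 * math.pi * x / math.log(2)   # the substitution from the derivation
lhs = abs(truncated_zeta(sigma, t, N)) ** 2
rhs = cosine_sum(sigma, t, N)
print(lhs, rhs)  # the two agree to floating-point accuracy
```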
<span style="line-height: 1.5;">It is worth noting how "composite" rationals are treated differently than with TE error. In addition to our usual error metric on the primes, we also go to each rational, look for the best "direct" or "patent" mapping of that rational within the EDO, and add ''that'' to the EDO's score. In particular, we do this even when the best mapping for some rational doesn't match up with the mapping you'd get from just looking at the primes.

So, for instance, in 16-EDO, the best mapping for 3/2 is 9 steps out of 16, and using that mapping, we get that 9/8 is 2 steps, since {{nowrap|9 × 2 − 16 {{=}} 2}}. However, there is a better mapping for 9/8 at 3 steps—one which ignores the fact that it is no longer equal to two 3/2's. This can be particularly useful for playing chords: 16-EDO's "direct mapping" for 9 is useful when playing the chord 4:5:7:9, and the "indirect" or "prime-based" mapping for 9 is useful when playing the "major 9" chord 8:10:12:15:18. We can think of the zeta function as rewarding equal temperaments not just for having a good approximation of the primes, but also for having good "extra" approximations of rationals which can be used in this way. And although 16-EDO has pretty high error, similar phenomena can be found for any EDO which becomes [[consistency|inconsistent]] for some chord of interest.

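The 16-EDO example can be sketched in code (the helper name is ours). The "direct" mapping of a ratio rounds its size to the nearest whole number of EDO steps; the "indirect" mapping of 9/8 stacks two direct fifths and octave-reduces:

```python
import math

def direct_steps(num, den, edo):
    """Best ("direct"/"patent") mapping of num/den: nearest number of EDO steps."""
    return round(edo * math.log2(num / den))

edo = 16
fifth = direct_steps(3, 2, edo)         # best 3/2 in 16-EDO
indirect_9_8 = 2 * fifth - edo          # two fifths up, one octave down
direct_9_8 = direct_steps(9, 8, edo)    # best 9/8 on its own
print(fifth, indirect_9_8, direct_9_8)  # → 9 2 3
```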
One way to frame this in the usual group-theoretic paradigm is to consider the group in which each strictly positive rational number is given its own linearly independent basis element. In other words, look at the [https://en.wikipedia.org/wiki/Free_group free group] over the strictly positive rationals, which we'll call ''"meta-JI."'' The zeta function can then be thought of as yielding an error for all meta-JI [[Patent_val|generalized patent vals]]. Whether this can be extended to all meta-JI vals, or modified to yield something nice like a "norm" on the group of meta-JI vals, is an open question. Regardless, this may be a useful conceptual bridge to understand how to relate the zeta function to "ordinary" regular temperament theory.
= \sum_{n',d',c} \left[ \frac{1}{c^{2\sigma}} \cdot \frac{\cos\left(t \ln\left({\tfrac{n'}{d'}}\right)\right)}{(n'd')^{\sigma}} \right]</math>

Now, since we're still assuming that {{nowrap|σ &gt; 1}} and everything is absolutely convergent, we can decompose this into a product of series as follows:

<math> \displaystyle
\frac{\abs{ \zeta(s) }^2}{\zeta(2\sigma)} = \sum_{n',d'} \frac{\cos\left(t \ln\left({\tfrac{n'}{d'}}\right)\right)}{(n'd')^{\sigma}}</math>

Now, since we're fixing σ and letting ''t'' vary, the left zeta term is constant for all EDOs. This demonstrates that the zeta function also measures cosine error over all the reduced rationals, up to a constant factor. QED.

=== Measuring error on harmonics only ===
* Error on reduced rationals: <math>\frac{\abs{\zeta(\sigma+it)}^2}{\zeta(2\sigma)}</math>

Since the second is a simple monotonic transformation of the first, we can see that the same function basically measures both the relative error on just the prime powers, and also on all unreduced rationals, at least in the sense that EDOs will be ranked identically by both measures. The third function is really just the second function divided by a constant, since we only really care about letting <math>t</math> vary—we instead typically set <math>\sigma</math> to some value which represents the weighting "rolloff" on rationals. So, all three of these functions will rank EDOs identically.

We also note that, above, Gene tended to look at things in terms of the Z(''t'') function, which is defined so that {{nowrap|{{!}}Z(''t''){{!}} {{=}} {{!}}ζ({{frac|1|2}} + ''it''){{!}}}}. The absolute value of the Z function is therefore also monotonically equivalent to the above set of expressions, so any one of these quantities will produce the same ranking on EDOs.

It turns out that using the same principles of derivation above, we can also derive another expression, this time for the relative error on only the harmonics—i.e. those intervals of the form <math>1/1, 2/1, 3/1, ... n/1, ...</math>. This was studied in a paper by Peter Buch called [[:File:Zetamusic5.pdf|"Favored cardinalities of scales"]]. The expression is:

Error on harmonics only: <math>\abs{\textbf{Re}\left[\zeta(\sigma + it)\right]}</math>

Note that, although the last four expressions were all monotonic transformations of one another, this one is not—this is the ''real part'' of the zeta function, whereas the others were all some simple monotonic function of the ''absolute value'' of the zeta function. The results, however, are very similar—in particular, the peaks are approximately aligned with one another, shifted by only a small amount (at least for reasonably sized EDOs up to a few hundred).

=== Relationship to harmonic entropy ===
<math>\displaystyle{\abs{\zeta\left(\frac{1}{2} + it\right)}^2 \cdot \overline {\phi(t)}}</math>

is, up to a flip in sign, the Fourier transform of the unnormalized Harmonic Shannon Entropy for {{nowrap|''N'' {{=}} &infin;}}, where φ(''t'') is the characteristic function (aka Fourier transform) of the spreading distribution and {{overline|φ(''t'')}} denotes complex conjugation.

Note that in the most common case where the spreading distribution is symmetric (as in the case of the Gaussian and Laplace distributions), the characteristic function is purely real and hence the conjugate is unnecessary. In particular, when the spreading distribution is a Gaussian, the characteristic function is also a Gaussian.
}</math>

where the product is over all primes ''p''. The product converges for values of ''s'' with real part greater than or equal to one, except for {{nowrap|''s'' {{=}} 1}} where it diverges to infinity. We may remove a finite list of primes from consideration by multiplying ζ(''s'') by the corresponding factors {{nowrap|(1 − ''p''<sup>−''s''</sup>)}} for each prime ''p'' we wish to remove. After we have done this, the smallest prime remaining will dominate peak values for ''s'' with large real part, and as before we can track these peaks backwards and, by analytic continuation, into the critical strip. In particular, if we remove the prime 2, {{nowrap|(1 − 2<sup>−''s''</sup>)ζ(''s'')}} is now dominated by 3, and the large peak values occur near equal divisions of the "tritave", i.e. 3.

Along the critical line:

=== Black magic formulas ===
When [[Gene_Ward_Smith|Gene Smith]] discovered these formulas in the 1970s, he thought of them as "black magic" formulas not because of any aura of evil, but because they seemed mysteriously to give you something for next to nothing. They are based on Gram points and the Riemann–Siegel theta function θ(''t''). Recall that a Gram point is a point on the critical line where {{nowrap|ζ({{frac|1|2}} + ''ig'')}} is real. This implies that exp(''i''θ(''g'')) is real, so that {{frac|θ(''g'')|π}} is an integer. Theta has an {{w|asymptotic expansion}}

<math>\displaystyle{
\theta(t) \sim \frac{t}{2}\ln\left(\frac{t}{2\pi}\right) - \frac{t}{2} - \frac{\pi}{8} + \frac{1}{48t} + \frac{7}{5760t^3} + \cdots
}</math>

From this we may deduce that {{nowrap|{{sfrac|θ(''t'')|π}} ≈ ''r'' ln(''r'') − ''r'' − {{sfrac|1|8}}}}, where {{nowrap|''r'' {{=}} {{sfrac|''t''|2π}} {{=}} {{sfrac|''x''|ln(2)}}}}; hence while ''x'' is the number of equal steps to an octave, ''r'' is the number of equal steps to an "''e''-tave", meaning the interval of ''e'', which is {{nowrap|{{sfrac|1200|ln(2)}} {{=}} 1731.234{{cent}}}}.

Recall that Gram points near to pure-octave edos, where ''x'' is an integer, can be expected to correspond to peak values of {{nowrap|{{!}}ζ{{!}} {{=}} {{!}}Z{{!}}}}. We can find these Gram points by Newton's method applied to the above formula. If {{nowrap|''r'' {{=}} {{sfrac|''x''|ln(2)}}}}, and if {{nowrap|''n'' {{=}} ⌊''r'' ln(''r'') − ''r'' + {{frac|3|8}}⌋}} is the nearest integer to {{sfrac|θ(2π''r'')|π}}, then we may set {{nowrap|''r''<sup>+</sup> {{=}} {{sfrac|''r'' + ''n'' + {{frac|1|8}}|ln(''r'')}}}}. This is the first iteration of Newton's method, which we may repeat if we like, but in fact no more than one iteration is really required. This is the first black magic formula, giving an adjusted "Gram" tuning from the original one.

For an example, consider {{nowrap|''x'' {{=}} 12}}, so that {{nowrap|''r'' {{=}} {{sfrac|12|ln(2)}} {{=}} 17.312}}. Then {{nowrap|''r'' ln(''r'') − ''r'' − {{sfrac|1|8}} {{=}} 31.927}}, which rounded to the nearest integer is 32, so {{nowrap|''n'' {{=}} 32}}. Then {{nowrap|{{sfrac|''r'' + ''n'' + {{frac|1|8}}|ln(''r'')}} {{=}} 17.338}}, corresponding to {{nowrap|''x'' {{=}} 12.0176}}, which means a single step is 99.853 cents and the octave is tempered to twelve of these, which is 1198.238 cents.

The fact that ''x'' is slightly greater than 12 means 12 has an overall sharp quality. We may also find this out by looking at the value we computed for {{nowrap|θ(2π''r'') / π}}, which was 31.927. Then {{nowrap|32 − 31.927 {{=}} 0.0726}}, which is positive but not too large; this is the second black magic formula, evaluating the nature of an edo ''x'' by computing {{nowrap|⌊''r'' ln(''r'') − ''r'' + {{frac|3|8}}⌋ − ''r'' ln(''r'') + ''r'' + {{frac|1|8}}}}, where {{nowrap|''r'' {{=}} {{sfrac|''x''|ln(2)}}}}. This works more often than not on the clearcut cases, but when ''x'' is extreme it may not; 49 is very sharp in tendency, for example, but this method calls it as flat; similarly it counts 45 as sharp.

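Both black magic formulas are easy to implement. A sketch (function names are ours), reproducing the {{nowrap|''x'' {{=}} 12}} example above:

```python
import math

def gram_tuning(x):
    """First black magic formula: one Newton step toward the Gram point
    nearest the pure-octave edo x; returns the adjusted, non-integer x."""
    r = x / math.log(2)                          # steps per e-tave
    n = math.floor(r * math.log(r) - r + 3 / 8)  # nearest integer to theta/pi
    r_plus = (r + n + 1 / 8) / math.log(r)
    return r_plus * math.log(2)

def sharp_or_flat(x):
    """Second black magic formula: positive means the edo tends sharp."""
    r = x / math.log(2)
    return math.floor(r * math.log(r) - r + 3 / 8) - r * math.log(r) + r + 1 / 8

x_adj = gram_tuning(12)
print(round(x_adj, 4))              # ≈ 12.0176
print(round(12 * 1200 / x_adj, 3))  # tempered octave, ≈ 1198.238 cents
print(round(sharp_or_flat(12), 4))  # ≈ 0.0726, so 12edo tends sharp
```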
== Computing zeta ==
There are various approaches to the question of computing the zeta function, but perhaps the simplest is the use of the {{w|Dirichlet eta function}} which was introduced to mathematics by {{w|Johann Peter Gustav Lejeune Dirichlet}}, who despite his name was a German and the brother-in-law of {{w|Felix Mendelssohn}}.

The zeta function has a [http://mathworld.wolfram.com/SimplePole.html simple pole] at {{nowrap|''z'' {{=}} 1}} which forms a barrier against continuing it with its {{w|Euler product}} or {{w|Dirichlet series}} representation. We could subtract off the pole, or multiply by a factor of {{nowrap|''z'' − 1}}, but at the expense of losing the character of a Dirichlet series or Euler product. A better method is to multiply by a factor of {{nowrap|1 − 2<sup>1&#x200A;−&#x200A;''z''</sup>}}, leading to the eta function:

<math>\displaystyle{\eta(z) = \left(1-2^{1-z}\right)\zeta(z) = \sum_{n=1}^\infty (-1)^{n-1} n^{-z}
= \frac{1}{1^z} - \frac{1}{2^z} + \frac{1}{3^z} - \frac{1}{4^z} + \cdots}</math>

The Dirichlet series for the zeta function is absolutely convergent when {{nowrap|σ &gt; 1}}, justifying the rearrangement of terms leading to the alternating series for eta, which converges conditionally in the critical strip. The extra factor introduces zeros of the eta function at the points {{nowrap|1 + {{sfrac|2π''i''|ln(2)}}''x''}} corresponding to pure octave divisions along the line {{nowrap|σ {{=}} 1}}, but no other zeros, and in particular none in the critical strip and along the critical line. The convergence of the alternating series can be greatly accelerated by applying {{w|Euler summation}}.

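A minimal sketch of this computation (our own, unoptimized): evaluate the alternating eta series, accelerate it with the repeated-averaging form of Euler summation, and divide by the factor {{nowrap|1 − 2<sup>1&#x200A;−&#x200A;''z''</sup>}}:

```python
import math

def zeta_via_eta(s, terms=40):
    """Compute zeta(s) from the alternating eta series, accelerated by
    Euler summation (repeated averaging of the partial sums).
    Works for real or complex s with Re(s) > 0, s != 1."""
    partial, total = [], 0.0
    for n in range(1, terms + 1):
        total += (-1) ** (n - 1) * n ** (-s)
        partial.append(total)
    # Euler acceleration: repeatedly average adjacent partial sums.
    while len(partial) > 1:
        partial = [(a + b) / 2 for a, b in zip(partial, partial[1:])]
    return partial[0] / (1 - 2 ** (1 - s))

print(zeta_via_eta(2), math.pi ** 2 / 6)  # both ≈ 1.6449, since eta(2) = pi^2/12
```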
== Open problems ==
* [http://terrytao.wordpress.com/2009/07/12/selbergs-limit-theorem-for-the-riemann-zeta-function-on-the-critical-line/ Selberg's limit theorem] by Terence Tao [http://www.webcitation.org/5xrvgjW6T Permalink]
* [[:File:Zetamusic5.pdf|Favored cardinalities of scales]] by Peter Buch
* [http://www.ams.org/journals/mcom/2004-73-246/S0025-5718-03-01568-0/S0025-5718-03-01568-0.pdf Computational estimation of the order of {{nowrap|ζ({{frac|1|2}} + it)}}] by Tadej Kotnik
* [https://www-users.cse.umn.edu/~odlyzko/zeta_tables/index.html Andrew Odlyzko: Tables of zeros of the Riemann zeta function]
* [https://www-users.cse.umn.edu/~odlyzko/doc/zeta.html Andrew Odlyzko: Papers on Zeros of the Riemann Zeta Function and Related Topics]