The Riemann zeta function and tuning
<math>\displaystyle \xi_p(x) = \sum_{\substack{2 \leq q \leq p \\ q \text{ prime}}} \left(\frac{\rround{x \log_2 q}}{\log_2 q}\right)^2</math>
This function has local minima, corresponding to associated generalized patent vals. The minima occur for values of ''x'' which are the [[Tenney-Euclidean_Tuning|Tenney-Euclidean tuning]]s of the octaves of the associated vals, while ξ<sub>''p''</sub> for these minima is the square of the [[Tenney-Euclidean_metrics|Tenney-Euclidean relative error]] of the val—equal to the TE error times the TE complexity, and sometimes known as "TE simple badness."
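As a concrete check, one can scan a truncated version of this sum numerically. The sketch below (Python; `rround` and `xi` are our own hypothetical helpers, restricted to the 5-limit) finds the local minimum near {{nowrap|''x'' {{=}} 12}}, which lands slightly above 12, i.e. at a slightly compressed octave:

```python
import math

def rround(y):
    """Signed distance from y to the nearest integer (the rounding residue above)."""
    return y - round(y)

def xi(x, primes=(2, 3, 5)):
    """Truncated xi_p(x) for the 5-limit: sum of squared weighted rounding errors."""
    return sum((rround(x * math.log2(q)) / math.log2(q)) ** 2 for q in primes)

# Scan a fine grid around x = 12; the minimizer is the TE-optimal octave
# "stretch" for 12edo, which turns out to be a slightly compressed octave.
grid = [12 + k / 1000 for k in range(-50, 51)]
best_x = min(grid, key=xi)
print(best_x)
```

The minimizer sits a little above 12 divisions, consistent with 12edo's 5-limit TE tuning having a slightly compressed octave.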
Now suppose we don't want a formula for any specific prime limit, but one which applies to all primes. We can't take the above sum to infinity, since it doesn't converge. However, we could change the weighting factor to a power so that it does converge:
where the summation is taken formally over all positive integers, though only the primes and prime powers make a nonzero contribution.
Another consequence of the above definition which might be objected to is that it results in a function with a [[Wikipedia:Continuous_function#Relation_to_differentiability_and_integrability|discontinuous derivative]], whereas a smooth function might be preferred. The function ⌊''x''⌉<sup>2</sup> is quadratically increasing near integer values of ''x'', and is periodic with period 1. Another function with these same properties is {{nowrap|1 − cos(2π''x'')}}, which is smooth and in fact an [[Wikipedia:entire function|entire function]]. Let us therefore now define for any {{nowrap|''s'' > 1}}:
<math>\displaystyle E_s(x) = \sum_{n \geq 1} \frac{\Lambda(n)}{\ln n} \frac{1 - \cos(2 \pi x \log_2 n)}{n^s}</math>
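To make the convergence concrete, here is a rough numerical sketch (Python; `mangoldt` and `E` are our own naive helpers, with hand-picked truncation points) showing that E<sub>''s''</sub> is small near the edo value {{nowrap|''x'' {{=}} 12}} and much larger at the midpoint {{nowrap|''x'' {{=}} 12.5}}:

```python
import math

def mangoldt(n):
    """Von Mangoldt function Lambda(n): log p if n = p^k for a prime p, else 0."""
    for p in range(2, n + 1):
        if n % p == 0:
            while n % p == 0:
                n //= p
            return math.log(p) if n == 1 else 0.0
    return 0.0

def E(s, x, N=2000):
    """Truncated E_s(x): only primes and prime powers contribute."""
    total = 0.0
    for n in range(2, N + 1):
        L = mangoldt(n)
        if L:
            total += (L / math.log(n)) * (1 - math.cos(2 * math.pi * x * math.log2(n))) / n ** s
    return total

print(E(2, 12.0), E(2, 12.5))  # the near-edo value is much smaller
```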
<math>\displaystyle F_s(x) = \sum_{n \geq 1} \frac{\Lambda(n)}{\ln n} \frac{\cos(2 \pi x \log_2 n)}{n^s}</math>
This new function has the property that {{nowrap|F<sub>''s''</sub>(''x'') {{=}} F<sub>''s''</sub>(0) − E<sub>''s''</sub>(''x'')}}, so that all we have done is flip the sign of E<sub>''s''</sub>(''x'') and offset it vertically. This now increases to a maximum value for low errors, rather than declining to a minimum. Of more interest is the fact that it is a known mathematical function, which can be expressed in terms of the real part of the logarithm of the [[Wikipedia:Riemann zeta function|Riemann zeta function]]:
<math>\displaystyle F_s(x) = \Re \ln \zeta(s + 2 \pi i x/\ln 2)</math>
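This identity is easy to spot-check numerically: at {{nowrap|''s'' {{=}} 2}} both the von Mangoldt series and the Dirichlet series for ζ converge absolutely, so truncated sums should agree closely. A sketch (Python; function names and truncation points are our own choices):

```python
import cmath
import math

def mangoldt(n):
    """Von Mangoldt function Lambda(n): log p if n = p^k for a prime p, else 0."""
    for p in range(2, n + 1):
        if n % p == 0:
            while n % p == 0:
                n //= p
            return math.log(p) if n == 1 else 0.0
    return 0.0

def F(s, x, N=4000):
    """Truncated von Mangoldt series for F_s(x)."""
    return sum(mangoldt(n) / math.log(n) * math.cos(2 * math.pi * x * math.log2(n)) / n ** s
               for n in range(2, N + 1))

def re_log_zeta(s, x, N=20000):
    """Re ln zeta(s + 2*pi*i*x / ln 2) via a truncated Dirichlet series."""
    z = complex(s, 2 * math.pi * x / math.log(2))
    return cmath.log(sum(n ** (-z) for n in range(1, N + 1))).real

print(F(2, 12.0), re_log_zeta(2, 12.0))  # the two sides agree closely
```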
So long as {{nowrap|''s'' ≥ 1}}, the absolute value of the zeta function can be seen as a relative error measurement. However, the rationale for that view of things breaks down when {{nowrap|''s'' < 1}}, particularly in the [http://mathworld.wolfram.com/CriticalStrip.html critical strip], where {{nowrap|0 < ''s'' < 1}}. As ''s'' approaches the value {{nowrap|''s'' {{=}} {{frac|1|2}}}} of the [http://mathworld.wolfram.com/CriticalLine.html critical line], the information content, so to speak, of the zeta function concerning higher primes increases, and it behaves increasingly like a badness measure (or more correctly, since we have inverted it, like a goodness measure).

The quasi-symmetric [https://planetmath.org/encyclopedia/FunctionalEquationOfTheRiemannZetaFunction.html functional equation] of the zeta function tells us that past the critical line the information content starts to decrease again, with {{nowrap|1 − ''s''}} and ''s'' having the same information content. Hence it is the zeta function between {{nowrap|''s'' {{=}} {{frac|1|2}}}} and {{nowrap|''s'' {{=}} 1}}, and especially the zeta function along the critical line {{nowrap|''s'' {{=}} {{frac|1|2}}}}, which is of the most interest.
As {{nowrap|''s'' > 1}} gets larger, the Dirichlet series for the zeta function is increasingly dominated by the 2 term, getting ever closer to simply {{nowrap|1 + 2<sup>−''z''</sup>}}, which approaches 1 as {{nowrap|''s'' {{=}} Re(''z'')}} becomes larger. When {{nowrap|''s'' >> 1}} and ''x'' is an integer, the real part of zeta is approximately {{nowrap|1 + 2<sup>−''s''</sup>}}, and the imaginary part is approximately zero; that is, zeta is approximately real. Starting from {{nowrap|''s'' {{=}} +∞}} with ''x'' an integer, we can trace a line back towards the critical strip on which zeta is real.

Since when {{nowrap|''s'' >> 1}} the derivative is approximately {{nowrap|−ln(2) / 2<sup>''s''</sup>}}, it is negative on this line of real values for zeta, meaning that the real value of zeta increases as ''s'' decreases. The zeta function approaches 1 uniformly as ''s'' increases to infinity, so as ''s'' decreases, the real-valued zeta function along this line of real values continues to increase monotonically through all real values from 1 to infinity. When it crosses the critical line where {{nowrap|''s'' {{=}} {{frac|1|2}}}}, it produces a real value of zeta on the critical line. Values of ''g'' for which {{nowrap|ζ({{frac|1|2}} + ''ig'')}} is real are called "Gram points", after [[Wikipedia:Jørgen Pedersen Gram|Jørgen Pedersen Gram]]. We have thus associated pure-octave edos, where ''x'' is an integer, to a value near to the pure octave, at the special sort of Gram points which correspond to edos.
Because the value of zeta increased continuously as it made its way from +∞ to the critical line, we might expect the values of zeta at these special Gram points to be relatively large. This would be especially true if −ζ'(''z'') is getting a boost from other small primes as it travels toward the Gram point. A complex formula due to [[Wikipedia:Bernhard Riemann|Bernhard Riemann]], which he failed to publish because it was so nasty, becomes a bit simpler when used at a Gram point. It is named the [[Wikipedia:Riemann-Siegel formula|Riemann-Siegel formula]] since [[Wikipedia:Carl Ludwig Siegel|Carl Ludwig Siegel]] went looking for it and was able to reconstruct it after rooting industriously around in Riemann's unpublished papers. From this formula, it is apparent that when ''x'' corresponds to a good edo, the value of {{nowrap|ζ({{frac|1|2}} + ''ig'')}} at the corresponding Gram point should be especially large.
=== The Z function ===
The absolute value of {{nowrap|ζ({{frac|1|2}} + ''ig'')}} at a Gram point corresponding to an edo is near to a local maximum, but not actually at one. At the local maximum, of course, the partial derivative of |ζ({{frac|1|2}} + ''it'')| with respect to ''t'' will be zero; however, this does not mean the derivative of ζ there will be zero. In fact, the [[Wikipedia:Riemann hypothesis|Riemann hypothesis]] is equivalent to the claim that all zeros of {{nowrap|ζ'(''s'' + ''it'')}} occur when {{nowrap|''s'' > {{frac|1|2}}}}, which is where all known zeros lie. These do not have values of ''t'' corresponding to good edos. For this and other reasons, it is helpful to have a function which is real for values on the critical line but whose absolute value is the same as that of zeta. This is provided by the [[Wikipedia:Z function|Z function]].
In order to define the Z function, we need first to define the [[Wikipedia:Riemann-Siegel theta function|Riemann-Siegel theta function]], and in order to do that, we first need to define the [http://mathworld.wolfram.com/LogGammaFunction.html Log Gamma function]. This is not defined as the natural log of the Gamma function since that has a more complicated branch cut structure; instead, the principal branch of the Log Gamma function is defined as having a branch cut along the negative real axis, and is given by the series
<math>\theta(z) = (\Upsilon((1 + 2 i z)/4) - \Upsilon((1 - 2 i z)/4))/(2 i) - \ln(\pi) z/2</math>
Another approach is to substitute {{nowrap|''z'' {{=}} (1 + 2''it'')/4}} into the series for Log Gamma and take the imaginary part; this yields
<math>\displaystyle \theta(t) = -\frac{\gamma + \log \pi}{2}t - \arctan 2t
Since θ is holomorphic on the strip with imaginary part between −{{frac|1|2}} and {{frac|1|2}}, so is Z. Since the exponential function has no zeros, the zeros of Z in this strip correspond one to one with the zeros of ζ in the critical strip. Since the exponential of an imaginary argument has absolute value 1, the absolute value of Z along the real axis is the same as the absolute value of ζ at the corresponding place on the critical line. And since theta was defined so as to give precisely this property, Z is a real even function of the real variable ''t''.
Using the [http://functions.wolfram.com/webMathematica/FunctionPlotting.jsp?name=RiemannSiegelZ online plotter] we can plot Z in the regions corresponding to scale divisions, using the conversion factor {{nowrap|''t'' {{=}} 2π''x'' / ln(2)}}, for ''x'' a number near or at an edo number. Hence, for instance, to plot 12 plot around 108.777, to plot 31 plot around 281.006, and so forth. An alternative plotter is the applet [http://web.viu.ca/pughg/RiemannZeta/RiemannZetaLong.html here].
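A trivial numerical check of this conversion factor (Python sketch; `edo_to_t` is our own name for it):

```python
import math

def edo_to_t(x):
    """Convert the octave-division coordinate x to the zeta argument t = 2*pi*x/ln 2."""
    return 2 * math.pi * x / math.log(2)

print(round(edo_to_t(12), 3))  # 108.777
print(round(edo_to_t(31), 3))  # 281.006
```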
If you have access to [[Wikipedia:Mathematica|Mathematica]], which has Z, zeta and theta as a part of its suite of initially defined functions, you can do even better. Below is a Mathematica-generated plot of Z({{nowrap|2π''x'' / ln(2)}}) in the region around 12edo:
[[File:plot12.png|alt=plot12.png|plot12.png]]
Above, Gene proves that the zeta function measures the [[Tenney-Euclidean_metrics|Tenney-Euclidean relative error]], sometimes called "Tenney-Euclidean Simple Badness," of any EDO, taken over all 'prime powers'. The relative error is simply equal to the tuning error times the size of the EDO, so we can easily get the raw "non-relative" tuning error from this as well by simply dividing by the size of the EDO.
Here, we strengthen that result to show that the zeta function additionally measures weighted relative error over all rational numbers, relative to the size of the EDO.
Let's dive in!
\zeta(s) = \sum_n n^{-s}</math>
Now let's do two things: we're going to expand {{nowrap|''s'' {{=}} σ + ''it''}}, and we're going to multiply ζ(''s'') by its conjugate ζ(''s''){{'}}, noting that {{nowrap|ζ(''s''){{'}} {{=}} ζ(''s''{{'}})}} and that ζ(''s'')·ζ(''s''){{'}} = |ζ(''s'')|<sup>2</sup>. We get:
<math> \displaystyle
where ''d'' is a new variable used internally in the second summation.
Now, let's focus on {{nowrap|σ > 1}}, so that both series are absolutely convergent. The following rearrangement of terms is then justified:
<math> \displaystyle
= \sum_{n,d} \frac{\cos\left(t \ln\left({\tfrac{n}{d}}\right)\right) - i\sin\left(t \ln\left({\tfrac{n}{d}}\right)\right)}{(nd)^{\sigma}}</math>
where the last equality makes use of the fact that {{nowrap|cos(−''x'') {{=}} cos(''x'')}} and {{nowrap|sin(−''x'') {{=}} −sin(''x'')}}.
Now, let's decompose the sum into three parts: {{nowrap|''n'' {{=}} ''d''}}, {{nowrap|''n'' > ''d''}}, and {{nowrap|''n'' < ''d''}}. Here's what we get:
<math> \displaystyle
We'll deal with each of these separately.
First, in the leftmost summation, we can see that {{nowrap|''n'' {{=}} ''d''}} implies {{nowrap|ln(''n''/''d'') {{=}} 0}}. Since {{nowrap|sin(0) {{=}} 0}}, the sin term in the numerator cancels out, yielding:
<math> \displaystyle
We will not simplify the cosine term further right now; the reasons for this will become apparent below.
Now, let's handle the two summations on the right. The key thing to note here is that we can pair up every term in the second summation with a corresponding term in the third summation that interchanges ''n'' and ''d''. To make this clear, let ''p'' and ''q'' be two integers, and assume without loss of generality that {{nowrap|''p'' > ''q''}}. The term corresponding to {{nowrap|''n'' {{=}} ''p''}}, {{nowrap|''d'' {{=}} ''q''}} will then appear in the second summation, and the term {{nowrap|''n'' {{=}} ''q''}}, {{nowrap|''d'' {{=}} ''p''}} will appear in the third summation. Juxtaposing those together, we get the following:
<math> \displaystyle
\frac{\cos\left(t \ln\left({\tfrac{q}{p}}\right)\right) - i\sin\left(t \ln\left({\tfrac{q}{p}}\right)\right)}{(pq)^{\sigma}}</math>
Now, noting that {{nowrap|ln(''p'' / ''q'') {{=}} −ln(''q'' / ''p'')}} and that sin is an odd function, we can see that the sin terms cancel out, leaving
<math> \displaystyle
\left| \zeta(s) \right|^2 = \sum_{n,d} \frac{\cos\left(t \ln\left({\tfrac{n}{d}}\right)\right)}{(nd)^{\sigma}}</math>
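The identity can be spot-checked numerically at {{nowrap|σ {{=}} 2}}, where both sides converge absolutely (Python sketch; truncation levels are our own, chosen so the tails are small):

```python
import math

def zeta_abs_sq(sigma, t, N=20000):
    """|zeta(sigma + i*t)|^2 via a truncated Dirichlet series."""
    z = complex(sigma, t)
    return abs(sum(n ** (-z) for n in range(1, N + 1))) ** 2

def cosine_sum(sigma, t, N=400):
    """Truncated double sum of cos(t ln(n/d)) / (nd)^sigma."""
    return sum(math.cos(t * math.log(n / d)) / (n * d) ** sigma
               for n in range(1, N + 1) for d in range(1, N + 1))

t12 = 2 * math.pi * 12 / math.log(2)  # the t value corresponding to 12edo
print(zeta_abs_sq(2, t12), cosine_sum(2, t12))  # approximately equal
```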
Finally, by making the mysterious substitution {{nowrap|''t'' {{=}} 2π / ln(2) · ''x''}}, the musical implications of the above will start to reveal themselves:
<math> \displaystyle
=== Interpretation of results: "cosine relative error" ===
For every strictly positive rational ''n''/''d'', there is a cosine with period {{nowrap|1 / log<sub>2</sub>(''n''/''d'')}} in ''x''. This cosine peaks at {{nowrap|''x'' {{=}} ''N'' / log<sub>2</sub>(''n''/''d'')}} for all integers ''N'', or in other words, at the ''N''th equal division of the rational number ''n''/''d'', and hits troughs midway between.
Our mysterious substitution above was chosen to set the units for this up nicely. The variable ''x'' now happens to be measured in divisions of the octave. (The original variable ''t'', which was the imaginary part of the zeta argument ''s'', can be thought of as the number of divisions of the interval {{nowrap|''e''<sup>2π</sup> ≈ 535.49}}, or what [[Keenan_Pepper|Keenan Pepper]] has called the "natural interval.")
As mentioned in Gene's original zeta derivation, these cosine functions can be thought of as good approximations to the terms in the TE error computation, which are all the squared errors for the different primes. Rather than taking the square of the error, we instead put the error through the function {{sfrac|(1 − cos(''x''))|2}}, which is "close enough" for small values of ''x''. Since we are always rounding off to the best mapping, this error is never more than 0.5 steps of the EDO, so since we have {{nowrap|−0.5 < ''x'' < 0.5}} we have a decent enough approximation.
We will call this '''cosine (relative) error''', by analogy with '''TE (relative) error'''. It is easy to see that the cosine error is approximately equal to the TE error when the error is small, and only diverges slightly for large errors.
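To see how close the approximation is, take the error ''x'' in fractional steps of the EDO, so the relevant smooth function has period 1; near zero it tracks the scaled square π²''x''². A small sketch (Python; `cosine_err` is our own name, and the 2π scaling is our reading of the convention above):

```python
import math

def cosine_err(x):
    """Smooth, period-1 stand-in for squared error; x is the error in fractional steps."""
    return (1 - math.cos(2 * math.pi * x)) / 2

# Compare against the scaled square pi^2 * x^2 it approximates near zero;
# the two agree for small x and diverge only toward the half-step point.
for x in (0.05, 0.1, 0.25, 0.5):
    print(x, cosine_err(x), (math.pi * x) ** 2)
```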
There are three major differences between our "cosine error" functions, and the way we're incorporating them into the result, and what TE is doing:
# First, the function here is flipped upside down—that is, we're measuring "accuracy" rather than error—as well as shifted vertically down along the y-axis. Since it is trivial to convert between the two, and since we only care about the relative rankings of EDOs, it is clear that we're measuring essentially the same thing.
# Instead of weighting each interval by {{sfrac|1|log(''nd'')}}, we weight it by {{sfrac|1|(''nd'')<sup>σ</sup>}}.
# Instead of only looking at the primes, as we do in TE, we are now looking at 'all' intervals, and in particular looking at the best mapping for each interval.
There are also a few notes we will only write in passing, for now, perhaps to build on later:
# If we do want {{sfrac|1|log(''nd'')}} weighting, we can derive this kind of weighting from an antiderivative of the zeta function.
# If we only want the primes, rather than all intervals, we can use something called the "Prime Zeta Function" to get those kinds of summations.
# If we do want the true TE squared error rather than our cosine error, then we would end up getting something called "parabolic waves" rather than cosine waves for each interval. A parabolic wave is the antiderivative of a sawtooth wave, and as it is a periodic signal, it has a Fourier series and can be expressed as a sum of sinusoids. We can use this to get a derivation of the squared error as an infinite sum of zeta functions.
For now, though, we will focus only on the basic zeta result that we have.
Going back to the infinite summation above, we note that these cosine error (or really "cosine accuracy") functions are being weighted by {{sfrac|1|(''nd'')<sup>σ</sup>}}. Note that σ, which is the real part of the zeta argument ''s'', serves as sort of a complexity weighting—it determines how quickly complex rational numbers become "irrelevant." Framed another way, we can think of it as the degree of "'''rolloff'''" of the resultant (musical, not mathematical) harmonic series formed by those rationals with {{nowrap|''d'' {{=}} 1}}. Note that this rolloff is much stronger than the usual {{sfrac|1|log(''nd'')}} rolloff exhibited by TE error, which is one reason that zeta converges to something coherent for all rational numbers, whereas TE fails to converge as the limit increases. We will use the term "rolloff" to identify the variable σ below.
Putting this all together, we can take the approach of fixing σ, specifying a rolloff, and then letting ''x'' (or ''t'') vary, specifying an EDO. The resulting function gives us the measured accuracy of EDOs across all unreduced rational numbers with respect to the chosen rolloff. Taking it all together, we get a Tenney-weighted sum of cosine accuracy over all unreduced rationals. QED.
It is extremely noteworthy to mention how "composite" rationals are treated differently than with TE error. In addition to our usual error metric on the primes, we also go to each rational, look for the best "direct" or "patent" mapping of that rational within the EDO, and add 'that' to the EDO's score. In particular, we do this even when the best mapping for some rational doesn't match up with the mapping you'd get from just looking at the primes.
So, for instance, in 16-EDO, the best mapping for 3/2 is 9 steps out of 16, and using that mapping, we get that 9/8 is 2 steps, since {{nowrap|9 × 2 − 16 {{=}} 2}}. However, there is a better mapping for 9/8 at 3 steps—one which ignores the fact that it is no longer equal to two 3/2's. This can be particularly useful for playing chords: 16-EDO's "direct mapping" for 9 is useful when playing the chord 4:5:7:9, and the "indirect" or "prime-based" mapping for 9 is useful when playing the "major 9" chord 8:10:12:15:18. We can think of the zeta function as rewarding equal temperaments not just for having a good approximation of the primes, but also for having good "extra" approximations of rationals which can be used in this way. And although 16-EDO is pretty high error, similar phenomena can be found for any EDO which becomes [[consistency|inconsistent]] for some chord of interest.
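This 16-EDO arithmetic is easy to verify directly (Python sketch; `direct_steps` is our own helper implementing the round-to-nearest "patent" mapping):

```python
import math

def direct_steps(ratio, edo):
    """Best direct ("patent") mapping of a ratio: round(edo * log2(ratio))."""
    return round(edo * math.log2(ratio))

edo = 16
fifth = direct_steps(3 / 2, edo)        # 9 steps of 16edo
nine_direct = direct_steps(9 / 8, edo)  # 3 steps: the direct mapping of 9/8
nine_from_fifths = 2 * fifth - edo      # 2 steps: two fifths up, one octave down
print(fifth, nine_direct, nine_from_fifths)  # 9 3 2
```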
One way to frame this in the usual group-theoretic paradigm is to consider the group in which each strictly positive rational number is given its own linearly independent basis element. In other words, look at the [https://en.wikipedia.org/wiki/Free_group free group] over the strictly positive rationals, which we'll call "'''meta-JI'''." The zeta function can then be thought of as yielding an error for all meta-JI [[Patent_val|generalized patent vals]]. Whether this can be extended to all meta-JI vals, or modified to yield something nice like a "norm" on the group of meta-JI vals, is an open question. Regardless, this may be a useful conceptual bridge to understand how to relate the zeta function to "ordinary" regular temperament theory.
Now, one nitpick to notice above is that this expression technically involves all "unreduced" rationals, e.g. there will be a cosine error term not just for 3/2, but also for 6/4, 9/6, etc. However, we can easily show that the same expression also measures the cosine relative error for reduced rationals:
=== From unreduced rationals to reduced rationals ===
Note that since there's no restriction that ''n'' and ''d'' be coprime, the "rationals" we're using here don't have to be reduced. So this shows that zeta yields an error metric over all unreduced rationals, but leaves open the question of how reduced rationals are handled. It turns out that the same function also measures the error of reduced rationals, scaled only by a rolloff-dependent constant factor across all EDOs.
To see this, let's first note that every "unreduced" rational ''n''/''d'' can be decomposed into the product of a reduced rational ''n''{{'}}/''d''{{'}} and a common factor ''c''/''c''. Furthermore, note that for any reduced rational ''n''{{'}}/''d''{{'}}, we can generate all unreduced rationals ''n''/''d'' corresponding to it by multiplying it by all such common factors ''c''/''c'', where ''c'' is a strictly positive natural number.
This allows us to change our original summation so that it's over three variables, ''n''{{'}}, ''d''{{'}}, and ''c'', where ''n''{{'}} and ''d''{{'}} are coprime, and ''c'' is a strictly positive natural number:
<math> \displaystyle
\left| \zeta(s) \right|^2 = \sum_{n',d',c} \frac{\cos\left(t \ln\left({\tfrac{cn'}{cd'}}\right)\right)}{(cn' \cdot cd')^{\sigma}}</math>
Now, the common factor ''c''/''c'' cancels out inside the log in the numerator. However, in the denominator, we get an extra factor of ''c''<sup>2</sup> to contend with. This yields
<math> \displaystyle
\left| \zeta(s) \right|^2 = \sum_{n',d',c} \frac{\cos\left(t \ln\left({\tfrac{n'}{d'}}\right)\right)}{c^{2\sigma} (n'd')^{\sigma}}
= \sum_{n',d',c} \left[ \frac{1}{c^{2\sigma}} \cdot \frac{\cos\left(t \ln\left({\tfrac{n'}{d'}}\right)\right)}{(n'd')^{\sigma}} \right]</math>
Now, since we're still assuming that {{nowrap|σ > 1}} and everything is absolutely convergent, we can decompose this into a product of series as follows:
<math> \displaystyle
\left| \zeta(s) \right|^2 = \left( \sum_{c} \frac{1}{c^{2\sigma}} \right) \cdot \sum_{n',d'} \frac{\cos\left(t \ln\left({\tfrac{n'}{d'}}\right)\right)}{(n'd')^{\sigma}} = \zeta(2\sigma) \sum_{n',d'} \frac{\cos\left(t \ln\left({\tfrac{n'}{d'}}\right)\right)}{(n'd')^{\sigma}}</math>

so that

<math> \displaystyle
\frac{\left| \zeta(s) \right|^2}{\zeta(2\sigma)} = \sum_{n',d'} \frac{\cos\left(t \ln\left({\tfrac{n'}{d'}}\right)\right)}{(n'd')^{\sigma}}</math>
Now, since we're fixing σ and letting ''t'' vary, the left zeta term is constant for all EDOs. This demonstrates that the zeta function also measures cosine error over all the reduced rationals, up to a constant factor. QED.
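This identity is easy to check numerically. Below is a plain-Python sketch (truncation limits chosen ad hoc) comparing a truncated |ζ(σ + it)|² against ζ(2σ) times the cosine sum over reduced rationals, at σ = 2 and the value of ''t'' corresponding to 12-EDO:

```python
import math

sigma = 2.0
t = 2 * math.pi * 12 / math.log(2)   # the zeta-scaled point for 12-EDO
s = complex(sigma, t)
N = 400

# |zeta(s)|^2 via a truncated Dirichlet series (fine since sigma > 1);
# squaring the absolute value is exactly the cosine sum over all
# *unreduced* rationals n/d with n, d <= N.
lhs = abs(sum(k**-s for k in range(1, N + 1)))**2

# Cosine relative error summed over *reduced* rationals n'/d' only.
reduced = sum(math.cos(t * math.log(n / d)) / (n * d)**sigma
              for n in range(1, N + 1) for d in range(1, N + 1)
              if math.gcd(n, d) == 1)

zeta_2sigma = sum(1.0 / k**(2 * sigma) for k in range(1, 100000))  # zeta(4)

print(lhs, zeta_2sigma * reduced)    # the two values agree closely
```

The small residual difference comes entirely from truncating the sums at finite limits.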
=== Measuring error on harmonics only ===
The derivations above give us several closely related error measures:

* Error on prime powers: <math>\ln \left|\zeta(\sigma+it)\right|</math>
* Error on unreduced rationals: <math>\left|\zeta(\sigma+it)\right|^2</math>
* Error on reduced rationals: <math>\frac{\left|\zeta(\sigma+it)\right|^2}{\zeta(2\sigma)}</math>
Since the second is a simple monotonic transformation of the first, we can see that the same function basically measures both the relative error on just the prime powers, and also on all unreduced rationals, at least in the sense that EDOs will be ranked identically by both measures. The third function is really just the second function divided by a constant, since we only really care about letting <math>t</math> vary—we instead typically set <math>\sigma</math> to some value which represents the weighting "rolloff" on rationals. So, all three of these functions will rank EDOs identically.
We also note that, above, Gene tended to look at things in terms of the Z(''t'') function, which is defined so that {{nowrap|{{!}}Z(''t''){{!}} {{=}} {{!}}ζ({{frac|1|2}} + ''it''){{!}}}}. The absolute value of the Z function is thus also monotonically equivalent to the above set of expressions, so that any one of these will produce the same ranking on EDOs.
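As a sanity check on the claim that these expressions rank EDOs identically, one can evaluate all three numerically. The following Python sketch (our own; σ = 2 and the truncation limits are arbitrary choices) does so for a few EDOs:

```python
import math

def zeta_abs(sigma, t, N=20000):
    # |zeta(sigma + it)| via a truncated Dirichlet series (valid for sigma > 1).
    s = complex(sigma, t)
    return abs(sum(k**-s for k in range(1, N + 1)))

sigma = 2.0
zeta_2sigma = sum(1.0 / k**(2 * sigma) for k in range(1, 20000))  # zeta(4)

scores = {}
for edo in (10, 11, 12, 13):
    z = zeta_abs(sigma, 2 * math.pi * edo / math.log(2))
    scores[edo] = (math.log(z),           # ~ prime-power expression
                   z**2,                  # unreduced-rationals expression
                   z**2 / zeta_2sigma)    # reduced-rationals expression

# All three columns are monotonic transforms of |zeta|, so they rank
# the EDOs in exactly the same order.
for i in (0, 1, 2):
    print(sorted(scores, key=lambda e: scores[e][i], reverse=True))
```

With this sharp rolloff, 12-EDO scores above its neighbors, as expected from its famously accurate 3/2.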
It turns out that using the same principles of derivation above, we can also derive another expression, this time for the relative error on only the harmonics—i.e. those intervals of the form <math>1/1, 2/1, 3/1, \ldots, n/1, \ldots</math>. This was studied in a paper by Peter Buch called [[:File:Zetamusic5.pdf|"Favored cardinalities of scales"]]. The expression is:
Error on harmonics only: <math>\left|\operatorname{Re}\left[\zeta(\sigma + it)\right]\right|</math>
Note that, although the last four expressions were all monotonic transformations of one another, this one is not—this is the ''real part'' of the zeta function, whereas the others were all some simple monotonic function of the ''absolute value'' of the zeta function. The results, however, are very similar—in particular, the peaks occur at approximately the same locations, shifted by only a small amount (at least for reasonably sized EDOs up to a few hundred).
=== Relationship to harmonic entropy ===
<math>\displaystyle{\left|\zeta\left(\tfrac{1}{2} + it\right)\right|^2 \cdot \overline{\phi(t)}}</math>
is, up to a flip in sign, the Fourier transform of the unnormalized Harmonic Shannon Entropy for {{nowrap|''N'' {{=}} ∞}}, where φ(''t'') is the characteristic function (aka Fourier transform) of the spreading distribution and {{overline|φ(''t'')}} denotes its complex conjugate.
Note that in the most common case where the spreading distribution is symmetric (as in the case of the Gaussian and Laplace distributions), the characteristic function is purely real and hence the conjugate is unnecessary. In particular, when the spreading distribution is a Gaussian, the characteristic function is also a Gaussian.
Instead of looking at {{nowrap|{{pipe}}Z(x){{pipe}}}} maxima, we can look at {{nowrap|{{pipe}}Z(x){{pipe}}}} ''minima'' for integer values of ''x''. These correspond to ''zeta valley edos'', and we get a list of edos {{EDOs| 1, 8, 18, 39, 55, 64, 79, 5941, 8294 }}… These tunings tend to deviate from ''p''-limit JI as much as possible while still preserving octaves, and can serve as "more xenharmonic" tunings. Keep in mind, however, that the ''most'' xenharmonic tunings would not contain octaves at all.
Notice the sudden jump from [[79edo]] to [[5941edo]]. We know that {{nowrap|{{pipe}}Z(x){{pipe}}}} grows logarithmically on average. If we assume the scores of integer edos are uniformly distributed on the interval {{nowrap|[0, ''c'' log ''x'']}}, the probability for the next edo to have a zeta score less than a given small value is also very small, so we would expect valley edos to be rarer than peak edos. So, it would be more productive to find edos whose zeta score is simply less than a given threshold.
Note that ''tempered-octave'' zeta valley edos would simply be any zero of Z(''x'').
Recall that the zeta function can be written as an [[Wikipedia:Euler product|Euler product]]:

<math>\displaystyle{
\zeta(s) = \prod_p \frac{1}{1 - p^{-s}}
}</math>
where the product is over all primes ''p''. The product converges for values of ''s'' with real part greater than or equal to one, except for {{nowrap|''s'' {{=}} 1}}, where it diverges to infinity. We may remove a finite list of primes from consideration by multiplying ζ(''s'') by the corresponding factors {{nowrap|(1 − ''p''<sup>−''s''</sup>)}} for each prime ''p'' we wish to remove. After we have done this, the smallest prime remaining will dominate peak values for ''s'' with large real part, and as before we can track these peaks backwards and, by analytic continuation, into the critical strip. In particular, if we remove the prime 2, {{nowrap|(1 − 2<sup>−''s''</sup>)ζ(''s'')}} is now dominated by 3, and the large peak values occur near equal divisions of the "tritave", i.e. 3/1.
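The effect of one of these factors can be checked directly: multiplying the Dirichlet series by (1 − 2<sup>−''s''</sup>) cancels every even-index term, leaving a sum over odd integers only. A quick numerical check in Python (the evaluation point and truncation limit are arbitrary choices for illustration):

```python
import math

# Pick sigma = 2 and the t corresponding to 13 equal divisions of 3/1
# (the Bohlen-Pierce region) -- an arbitrary choice for this check.
s = complex(2.0, 2 * math.pi * 13 / math.log(3))
N = 100000

zeta_s = sum(k**-s for k in range(1, N + 1))
no_twos = (1 - 2**-s) * zeta_s                  # zeta with the prime 2 removed
odd_only = sum(k**-s for k in range(1, N + 1, 2))

print(abs(no_twos - odd_only))                  # ~0: the two series agree
```

Since every integer with a factor of 2 has been cancelled, the prime 2 no longer contributes to the peaks of this modified function.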
Along the critical line, {{nowrap|{{!}}1 − ''p''<sup>−{{frac|1|2}} − ''it''</sup>{{!}}}} may be written
<math>\displaystyle{
\left|1 - p^{-\frac{1}{2} - it}\right| = \sqrt{1 - 2 p^{-\frac{1}{2}} \cos(t \ln p) + \frac{1}{p}}
}</math>
=== Black magic formulas ===
When [[Gene_Ward_Smith|Gene Smith]] discovered these formulas in the 70s, he thought of them as "black magic" formulas not because of any aura of evil, but because they seemed mysteriously to give you something for next to nothing. They are based on Gram points and the Riemann-Siegel theta function θ(''t''). Recall that a Gram point is a point on the critical line where {{nowrap|ζ({{frac|1|2}} + ''ig'')}} is real. This implies that exp(''i''θ(''g'')) is real, so that {{frac|θ(''g'')|π}} is an integer. Theta has an [[Wikipedia:asymptotic expansion|asymptotic expansion]]
<math>\displaystyle{
\theta(t) \sim \frac{t}{2} \ln \frac{t}{2\pi} - \frac{t}{2} - \frac{\pi}{8} + \frac{1}{48t} + \frac{7}{5760t^3} + \cdots
}</math>
From this we may deduce that {{nowrap|θ(''t'')/π ≈ ''r'' ln(''r'') − ''r'' − 1/8}}, where {{nowrap|''r'' {{=}} ''t'' / (2π) {{=}} ''x'' / ln(2)}}; hence while ''x'' is the number of equal steps to an octave, ''r'' is the number of equal steps to an "e-tave", meaning the interval of ''e'', which is {{nowrap|1200 / ln(2) {{=}} 1731.234}} cents.
Recall that Gram points near to pure-octave edos, where ''x'' is an integer, can be expected to correspond to peak values of {{nowrap|{{!}}ζ{{!}} {{=}} {{!}}Z{{!}}}}. We can find these Gram points by Newton's method applied to the above formula. If {{nowrap|''r'' {{=}} ''x''/ln(2)}}, and if {{nowrap|''n'' {{=}} ⌊''r'' ln(''r'') − ''r'' + 3/8⌋}} is the nearest integer to {{nowrap|θ(2π''r'') / π}}, then we may set {{nowrap|''r''⁺ {{=}} (''r'' + ''n'' + 1/8) / ln(''r'')}}. This is the first iteration of Newton's method, which we may repeat if we like, but in fact no more than one iteration is really required. This is the first black magic formula, giving an adjusted "Gram" tuning from the original one.
For an example, consider {{nowrap|''x'' {{=}} 12}}, so that {{nowrap|''r'' {{=}} 12/ln(2) {{=}} 17.312}}. Then {{nowrap|''r'' ln(''r'') − ''r'' − 1/8 {{=}} 31.927}}, which rounded to the nearest integer is 32, so {{nowrap|''n'' {{=}} 32}}. Then {{nowrap|(''r'' + ''n'' + 1/8) / ln(''r'') {{=}} 17.338}}, corresponding to {{nowrap|''x'' {{=}} 12.0176}}, which means a single step is 99.853 cents and the octave is tempered to twelve of these, which is 1198.238 cents.
The fact that ''x'' is slightly greater than 12 means 12 has an overall sharp quality. We may also find this out by looking at the value we computed for {{nowrap|θ(2π''r'') / π}}, which was 31.927. Then {{nowrap|32 − 31.927 {{=}} 0.073}}, which is positive but not too large; this is the second black magic formula, evaluating the nature of an edo ''x'' by computing {{nowrap|⌊''r'' ln(''r'') − ''r'' + 3/8⌋ − ''r'' ln(''r'') + ''r'' + 1/8}}, where {{nowrap|''r'' {{=}} ''x'' / ln(2)}}. This works more often than not on the clearcut cases, but when ''x'' is extreme it may not; 49 is very sharp in tendency, for example, but this method calls it as flat; similarly, it counts 45 as sharp.
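Both black magic formulas are straightforward to put into code. The following Python sketch (function names are our own) reproduces the worked example above:

```python
import math

LN2 = math.log(2)

def gram_tuning(x):
    # First formula: one Newton step toward the Gram point nearest the
    # pure-octave x-edo; returns the adjusted number of steps per octave.
    r = x / LN2                                 # steps per e-tave
    n = math.floor(r * math.log(r) - r + 3/8)   # nearest integer to theta/pi
    return (r + n + 1/8) / math.log(r) * LN2

def tendency(x):
    # Second formula: positive suggests a sharp tendency, negative flat.
    r = x / LN2
    g = r * math.log(r) - r - 1/8               # ~ theta(2*pi*r)/pi
    return round(g) - g

x_adj = gram_tuning(12)
print(x_adj, 1200 / x_adj, tendency(12))
# ~12.0176 steps, a step of ~99.853 cents, tendency ~ +0.073 (sharp)
```

As the text notes, the tendency estimate should be treated as a heuristic; it misjudges some extreme cases such as 49 and 45.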
== Computing zeta ==
<math>\displaystyle{
\eta(z) = \left(1 - 2^{1-z}\right) \zeta(z)
= \frac{1}{1^z} - \frac{1}{2^z} + \frac{1}{3^z} - \frac{1}{4^z} + \cdots}</math>
The Dirichlet series for the zeta function is absolutely convergent when {{nowrap|Re(''s'') > 1}}, justifying the rearrangement of terms leading to the alternating series for eta, which converges conditionally in the critical strip. The extra factor introduces zeros of the eta function at the points {{nowrap|1 + 2π''ix''/ln(2)}} corresponding to pure-octave divisions along the line {{nowrap|Re(''s'') {{=}} 1}}, but no other zeros, and in particular none in the critical strip and along the critical line. The convergence of the alternating series can be greatly accelerated by applying [[Wikipedia:Euler summation|Euler summation]].
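Here is a minimal Python sketch of this approach, using repeated averaging of partial sums as a simple form of Euler summation. This is adequate for modest ''t''; a serious implementation would switch to the Riemann-Siegel formula for large ''t'':

```python
def zeta_via_eta(s, m=60):
    # Partial sums of the alternating eta series:
    # eta(s) = 1 - 2^-s + 3^-s - 4^-s + ...
    partial, total = [], 0
    for k in range(2 * m):
        total += (-1)**k * (k + 1)**(-s)
        partial.append(total)
    # Euler summation: repeatedly average adjacent partial sums,
    # which cancels the leading oscillation at each pass.
    for _ in range(m):
        partial = [(a + b) / 2 for a, b in zip(partial, partial[1:])]
    # Convert back to zeta via eta(s) = (1 - 2^(1-s)) zeta(s).
    return partial[-1] / (1 - 2**(1 - s))

print(zeta_via_eta(2.0))                    # ~1.644934 (= pi^2/6)
print(abs(zeta_via_eta(0.5 + 14.134725j)))  # ~0: near the first zeta zero
```

Note that the conversion factor blows up at the points {{nowrap|1 + 2π''ix''/ln(2)}} mentioned above, so this routine should not be used right at those points.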
== Open problems ==