The Riemann zeta function and tuning
== Gene Smith's original derivation ==

=== Preliminaries ===

Suppose ''x'' is a variable representing some equal division of the octave. For example, if {{nowrap|''x'' = 80}}, ''x'' reflects 80edo with a step size of 15 cents and with pure octaves. Suppose that ''x'' can also be continuous, so that it can represent fractional or "nonoctave" divisions as well. The [[Bohlen-Pierce|Bohlen-Pierce scale]], 13 equal divisions of 3/1, is approximately 8.202 equal divisions of the "octave" (although the octave itself does not appear in this tuning), and would hence be represented by a value of {{nowrap|''x'' = 8.202}}.

Now suppose that ⌊''x''⌉ denotes the difference between ''x'' and the integer nearest to ''x''. For example, ⌊8.202⌉ would be 0.202, since it's the difference between 8.202 and the nearest integer, which is 8. ⌊7.95⌉ would be 0.05, which is the difference between 7.95 and the nearest integer, which is 8. Mathematically speaking, <math>\lfloor x \rceil = \left| x - \left\lfloor x + \tfrac{1}{2} \right\rfloor \right|</math>.
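In code, this nearest-integer distance is a one-liner. A minimal Python sketch (the function name <code>rfrac</code> is our own label for the ⌊·⌉ notation, not a standard name):

```python
import math

def rfrac(x):
    """Distance from x to the nearest integer, the quantity written ⌊x⌉ above."""
    return abs(x - math.floor(x + 0.5))

print(rfrac(8.202))  # ~0.202, as in the Bohlen-Pierce example
print(rfrac(7.95))   # ~0.05
```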
<math>\displaystyle \xi_p(x) = \sum_{\substack{2 \leq q \leq p \\ q \text{ prime}}} \left(\frac{\lfloor x \log_2 q \rceil}{\log_2 q}\right)^2</math>
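As a sketch, here is a direct numerical transcription of ξ<sub>''p''</sub> for the 5-limit (the name <code>xi_p</code> and the comparison against a half-integer division are our own illustration, not part of the original derivation):

```python
import math

def xi_p(x, primes=(2, 3, 5)):
    """5-limit instance of the xi function defined above."""
    total = 0.0
    for q in primes:
        lq = math.log2(q)
        err = abs(x * lq - math.floor(x * lq + 0.5))  # the quantity ⌊x log2 q⌉
        total += (err / lq) ** 2
    return total

# 12edo sits near a local minimum; a half-integer division does much worse.
print(xi_p(12.0), xi_p(11.5))
```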
This function has local minima, corresponding to associated generalized patent vals. The minima occur for values of ''x'' which are the [[Tenney-Euclidean_Tuning|Tenney-Euclidean tuning]]s of the octaves of the associated vals, while ξ<sub>''p''</sub> for these minima is the square of the [[Tenney-Euclidean_metrics|Tenney-Euclidean relative error]] of the val, equal to the TE error times the TE complexity, and sometimes known as "TE simple badness."

Now suppose we don't want a formula for any specific prime limit, but one which applies to all primes. We can't take the above sum to infinity, since it doesn't converge. However, we could change the weighting factor to a power so that it does converge:
where the summation is taken formally over all positive integers, though only the primes and prime powers make a nonzero contribution.

Another consequence of the above definition which might be objected to is that it results in a function with a [[Wikipedia:Continuous_function#Relation_to_differentiability_and_integrability|discontinuous derivative]], whereas a smooth function might be preferred. The function ⌊''x''⌉<sup>2</sup> is quadratically increasing near integer values of ''x'', and is periodic with period 1. Another function with these same properties is {{nowrap|1 − cos(2π''x'')}}, which is smooth and in fact an [[Wikipedia:entire function|entire function]]. Let us therefore now define for any {{nowrap|''s'' > 1}}:
<math>\displaystyle E_s(x) = \sum_{n \geq 1} \frac{\Lambda(n)}{\ln n} \frac{1 - \cos(2 \pi x \log_2 n)}{n^s}</math>
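Since Λ(''n'')/ln ''n'' equals 1/''k'' when {{nowrap|''n'' = ''p''<sup>''k''</sup>}} is a prime power and 0 otherwise, a brute-force truncation of this sum is easy to write down. A sketch (the function names and the truncation point are our own choices):

```python
import math

def lambda_over_log(n):
    """Λ(n)/ln(n): equals 1/k when n = p**k for a prime p, else 0."""
    if n < 2:
        return 0.0
    for p in range(2, math.isqrt(n) + 1):
        if n % p == 0:
            k, m = 0, n
            while m % p == 0:
                m //= p
                k += 1
            return 1.0 / k if m == 1 else 0.0  # 0 unless n is a pure power of p
    return 1.0  # n itself is prime

def E(s, x, terms=2000):
    """Truncation of the smoothed error sum E_s(x) defined above."""
    return sum(lambda_over_log(n) * (1 - math.cos(2 * math.pi * x * math.log2(n))) / n ** s
               for n in range(2, terms + 1))
```

As with ξ<sub>''p''</sub>, good divisions such as {{nowrap|''x'' = 12}} give much smaller values than nearby half-integer divisions.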
For any fixed {{nowrap|''s'' > 1}} this gives a real [[Wikipedia:analytic function|analytic function]] defined for all ''x'', and hence with all the smoothness properties we could desire.

We can clean up this definition to get essentially the same function:
=== Into the critical strip ===

So long as {{nowrap|''s'' ≥ 1}}, the absolute value of the zeta function can be seen as a relative error measurement. However, the rationale for that view of things departs when {{nowrap|''s'' < 1}}, particularly in the [http://mathworld.wolfram.com/CriticalStrip.html critical strip], when {{nowrap|0 < ''s'' < 1}}. As ''s'' approaches the value {{nowrap|''s'' = {{frac|2}}}} of the [http://mathworld.wolfram.com/CriticalLine.html critical line], the information content, so to speak, of the zeta function concerning higher primes increases, and it behaves increasingly like a badness measure (or more correctly, since we have inverted it, like a goodness measure). The quasi-symmetric [https://planetmath.org/encyclopedia/FunctionalEquationOfTheRiemannZetaFunction.html functional equation] of the zeta function tells us that past the critical line the information content starts to decrease again, with {{nowrap|1 − ''s''}} and ''s'' having the same information content. Hence it is the zeta function between {{nowrap|''s'' = {{frac|2}}}} and {{nowrap|''s'' = 1}}, and especially the zeta function along the critical line {{nowrap|''s'' = {{frac|2}}}}, which is of the most interest.

As ''s'' > 1 gets larger, the Dirichlet series for the zeta function is increasingly dominated by the 2 term, getting ever closer to simply {{nowrap|1 + 2<sup>−''z''</sup>}}, which approaches 1 as {{nowrap|''s'' = Re(''z'')}} becomes larger. When {{nowrap|''s'' >> 1}} and ''x'' is an integer, the real part of zeta is approximately {{nowrap|1 + 2<sup>−''s''</sup>}}, and the imaginary part is approximately zero; that is, zeta is approximately real. Starting from {{nowrap|''s'' = +∞}} with ''x'' an integer, we can trace a line back towards the critical strip on which zeta is real. Since when {{nowrap|''s'' >> 1}} the derivative is approximately −ln(2)/2<sup>''s''</sup>, it is negative on this line of real values for zeta, meaning that the real value for zeta increases as ''s'' decreases. The zeta function approaches 1 uniformly as ''s'' increases to infinity, so as ''s'' decreases, the real-valued zeta function along this line of real values continues to increase through all real values from 1 to infinity monotonically. When it crosses the critical line where {{nowrap|''s'' = {{frac|2}}}}, it produces a real value of zeta on the critical line. Points on the critical line where {{nowrap|ζ({{frac|2}} + i''g'')}} is real are called "Gram points", after [[Wikipedia:Jørgen Pedersen Gram|Jørgen Pedersen Gram]]. We have thus associated pure-octave edos, where ''x'' is an integer, to a value near to the pure octave, at the special sort of Gram points which correspond to edos.

Because the value of zeta increased continuously as it made its way from +∞ to the critical line, we might expect the values of zeta at these special Gram points to be relatively large. This would be especially true if −ζ'(''z'') is getting a boost from other small primes as it travels toward the Gram point. A complex formula due to [[Wikipedia:Bernhard Riemann|Bernhard Riemann]], which he failed to publish because it was so nasty, becomes a bit simpler when used at a Gram point. It is named the [[Wikipedia:Riemann-Siegel formula|Riemann-Siegel formula]], since [[Wikipedia:Carl Ludwig Siegel|Carl Ludwig Siegel]] went looking for it and was able to reconstruct it after rooting industriously around in Riemann's unpublished papers. From this formula, it is apparent that when ''x'' corresponds to a good edo, the value of {{nowrap|ζ({{frac|2}} + i''g'')}} at the corresponding Gram point should be especially large.
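To make this concrete, here is a sketch of the Riemann-Siegel computation of Z(''t''): the standard main sum plus the first correction term, accurate to a few hundredths for ''t'' of the size considered here. This implementation is our own illustration; a library such as mpmath (function <code>siegelz</code>) does the same job properly.

```python
import math

def theta(t):
    # Riemann-Siegel theta function, by its standard asymptotic expansion
    return (t / 2) * math.log(t / (2 * math.pi)) - t / 2 - math.pi / 8 \
        + 1 / (48 * t) + 7 / (5760 * t ** 3)

def Z(t):
    # Main sum of the Riemann-Siegel formula plus the first remainder term
    a = math.sqrt(t / (2 * math.pi))
    m = int(a)
    s = 2 * sum(math.cos(theta(t) - t * math.log(n)) / math.sqrt(n)
                for n in range(1, m + 1))
    p = a - m
    # first-order correction term (the C0 term in Edwards' notation)
    s += (-1) ** (m - 1) * a ** -0.5 \
        * math.cos(2 * math.pi * (p * p - p - 1 / 16)) / math.cos(2 * math.pi * p)
    return s
```

With the conversion {{nowrap|''t'' = 2π''x''/ln(2)}} used below, evaluating Z along ''x'' reproduces the edo peaks discussed in the next section; Z vanishes at the nontrivial zeta zeros (for example near {{nowrap|''t'' ≈ 25.0109}}).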
=== The Z function ===

The absolute value of {{nowrap|ζ({{frac|2}} + i''g'')}} at a Gram point corresponding to an edo is near to a local maximum, but not actually at one. At the local maximum, of course, the derivative of the absolute value of {{nowrap|ζ({{frac|2}} + i''t'')}} with respect to ''t'' will be zero; however, this does not mean the derivative of ζ itself will be zero there. In fact, the [[Wikipedia:Riemann hypothesis|Riemann hypothesis]] is equivalent to the claim that all zeros of ζ′ in the critical strip occur when {{nowrap|''s'' > {{frac|2}}}}, which is where all known zeros of ζ′ lie. These do not have values of ''t'' corresponding to good edos. For this and other reasons, it is helpful to have a function which is real for values on the critical line but whose absolute value is the same as that of zeta. This is provided by the [[Wikipedia:Z function|Z function]].

In order to define the Z function, we need first to define the [[Wikipedia:Riemann-Siegel theta function|Riemann-Siegel theta function]], and in order to do that, we first need to define the [http://mathworld.wolfram.com/LogGammaFunction.html Log Gamma function]. This is not defined as the natural log of the Gamma function, since that has a more complicated branch cut structure; instead, the principal branch of the Log Gamma function is defined as having a branch cut along the negative real axis, and is given by the series
Since θ is holomorphic on the strip with imaginary part between −{{frac|2}} and {{frac|2}}, so is Z. Since the exponential function has no zeros, the zeros of Z in this strip correspond one to one with the zeros of ζ in the critical strip. Since the exponential of an imaginary argument has absolute value 1, the absolute value of Z along the real axis is the same as the absolute value of ζ at the corresponding place on the critical line. And since theta was defined so as to give precisely this property, Z is a real even function of the real variable ''t''.

Using the [http://functions.wolfram.com/webMathematica/FunctionPlotting.jsp?name=RiemannSiegelZ online plotter] we can plot Z in the regions corresponding to scale divisions, using the conversion factor {{nowrap|''t'' = 2π''x''/ln(2)}}, for ''x'' a number near or at an edo number. Hence, for instance, to plot 12 plot around 108.777, to plot 31 plot around 281.006, and so forth. An alternative plotter is the applet [http://web.viu.ca/pughg/RiemannZeta/RiemannZetaLong.html here].

If you have access to [[Wikipedia:Mathematica|Mathematica]], which has Z, zeta and theta as a part of its suite of initially defined functions, you can do even better. Below is a Mathematica-generated plot of Z(2π''x''/ln(2)) in the region around 12edo:
[[File:plot12.png|alt=plot12.png|plot12.png]]

The peak around 12 is both higher and wider than the local maxima above 11 and 13, indicating its superiority as an edo. Note also that the peak occurs at a point slightly larger than 12; this indicates the octave is slightly compressed in the zeta tuning for 12. The size of a step in octaves is 1/''x'', and hence the size of the octave in the zeta peak value tuning for ''N''edo is ''N''/''x''; if ''x'' is slightly larger than ''N'', as here with {{nowrap|''N'' = 12}}, the size of the zeta tuned octave will be slightly less than a pure octave. Similarly, when the peak occurs with ''x'' less than ''N'', we have stretched octaves.

For larger edos, the width of the peak narrows, but for strong edos the height more than compensates, measured in terms of the area under the peak (the absolute value of the integral of Z between two zeros). Note how 270 completely dominates its neighbors:
[[File:plot270.png|alt=plot270.png|plot270.png]]

Note that for one of its neighbors, 271, it isn't entirely clear which peak value corresponds to the line of real values from +∞. This can be determined by looking at the absolute value of zeta along other ''s'' values, such as {{nowrap|''s'' = 1}} or {{nowrap|''s'' = 3/4}}, and in this case the local minimum at 271.069 is the value in question. However, other peak values are not without their interest; the local maximum at 270.941, for instance, is associated to a different mapping for 3.

To generate this plot using the free version of Wolfram Cloud, you can copy-paste '''Plot[Abs[RiemannSiegelZ[9.06472028x]], {x, 11.9, 12.1}]''' and then in the menu select '''Evaluation > Evaluate Cells'''. Change "'''11.9'''" and "'''12.1'''" to whatever values you want, e.g. to view the curve around 15edo you might use the values "'''14.9'''" and "'''15.1'''".
\zeta(s) = \sum_n n^{-s}</math>

Now let's do two things: we're going to expand {{nowrap|s = σ + it}}, and we're going to multiply ζ(s) by its conjugate ζ(s)', noting that {{nowrap|ζ(s)' = ζ(s')}} and {{nowrap|ζ(s)·ζ(s)' = |ζ(s)|<sup>2</sup>}}. We get:

<math> \displaystyle
= \sum_{n,d} \frac{\cos\left(t \ln\left({\tfrac{n}{d}}\right)\right) - i\sin\left(t \ln\left({\tfrac{n}{d}}\right)\right)}{(nd)^{\sigma}}</math>

where the last equality makes use of the fact that {{nowrap|cos(−x) = cos(x)}} and {{nowrap|sin(−x) = −sin(x)}}.

Now, let's decompose the sum into three parts: {{nowrap|n = d}}, {{nowrap|n > d}}, and {{nowrap|n < d}}. Here's what we get:

<math> \displaystyle
We'll deal with each of these separately.

First, in the leftmost summation, we can see that {{nowrap|n = d}} implies {{nowrap|ln(n/d) = 0}}. Since {{nowrap|sin(0) = 0}}, the sin term in the numerator cancels out, yielding:

<math> \displaystyle
\frac{\cos\left(t \ln\left({\tfrac{q}{p}}\right)\right) - i\sin\left(t \ln\left({\tfrac{q}{p}}\right)\right)}{(pq)^{\sigma}}</math>

Now, noting that {{nowrap|ln(p/q) = −ln(q/p)}} and that sin is an odd function, we can see that the sin terms cancel out, leaving

<math> \displaystyle
\left| \zeta(s) \right|^2 = \sum_{n,d} \frac{\cos\left(t \ln\left({\tfrac{n}{d}}\right)\right)}{(nd)^{\sigma}}</math>
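Since both sides converge absolutely for {{nowrap|σ > 1}}, this identity can be spot-checked numerically with nothing but the standard library. A sketch (the choice {{nowrap|σ = 2}}, the sample value of t, and the truncation points are our own):

```python
import cmath, math

sigma = 2.0
t = 2 * math.pi * 12 / math.log(2)   # an arbitrary sample value of t
s = complex(sigma, t)

# left side: |zeta(s)|^2 from the Dirichlet series (absolutely convergent here)
zeta_s = sum(cmath.exp(-s * math.log(n)) for n in range(1, 20001))
lhs = abs(zeta_s) ** 2

# right side: the double sum over all pairs (n, d), truncated at N
N = 200
rhs = sum(math.cos(t * math.log(n / d)) / (n * d) ** sigma
          for n in range(1, N + 1) for d in range(1, N + 1))

print(lhs, rhs)  # agree to within the truncation error
```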
Finally, by making the mysterious substitution {{nowrap|t = 2π/ln(2) · x}}, the musical implications of the above will start to reveal themselves:

<math> \displaystyle
=== Interpretation of results: "cosine relative error" ===

For every strictly positive rational n/d, there is a cosine with period 1/log<sub>2</sub>(n/d). This cosine peaks at {{nowrap|x = N/log<sub>2</sub>(n/d)}} for all integers N, or in other words, at N equal divisions of the rational number n/d, and hits troughs midway between.

Our mysterious substitution above was chosen to set the units for this up nicely. The variable x now happens to be measured in divisions of the octave. (The original variable t, which was the imaginary part of the zeta argument s, can be thought of as the number of divisions of the interval {{nowrap|''e''<sup>2π</sup> ≈ 535.49}}, or what [[Keenan_Pepper|Keenan Pepper]] has called the "natural interval.")

As mentioned in Gene's original zeta derivation, these cosine functions can be thought of as good approximations to the terms in the TE error computation, which are all the squared errors for the different primes. Rather than taking the square of the error, we instead put the error through the function <math>(1 - \cos(x))/2</math>, which is "close enough" for small values of x. Since we are always rounding off to the best mapping, this error is never more than 0.5 steps of the EDO, so since we have <math>-0.5 < x < 0.5</math> we have a decent enough approximation.

We will call this '''cosine (relative) error''', by analogy with '''TE (relative) error'''. It is easy to see that the cosine error is approximately equal to the TE error when the error is small, and only diverges slightly for large errors.
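The closeness of the two measures is easy to check numerically. In the sketch below the cosine term is normalized by 1/(2π²) so that it matches the squared error to second order; the normalization and the function names are our own conventions:

```python
import math

def squared_error(e):
    # TE-style term: squared error, with e measured in steps of the edo
    return e ** 2

def cosine_error(e):
    # zeta-style term, scaled so it agrees with e**2 for small e
    return (1 - math.cos(2 * math.pi * e)) / (2 * math.pi ** 2)

# near-agreement for small errors, mild divergence at the maximum error of 0.5 steps
for e in (0.01, 0.1, 0.25, 0.5):
    print(e, squared_error(e), cosine_error(e))
```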
For now, though, we will focus only on the basic zeta result that we have.

Going back to the infinite summation above, we note that these cosine error (or really "cosine accuracy") functions are being weighted by 1/(nd)<sup>σ</sup>. Note that σ, which is the real part of the zeta argument s, serves as sort of a complexity weighting - it determines how quickly complex rational numbers become "irrelevant." Framed another way, we can think of it as the degree of "'''rolloff'''" formed by the resultant (musical, not mathematical) harmonic series formed by those rationals with {{nowrap|d = 1}}. Note that this rolloff is much stronger than the usual 1/log(nd) rolloff exhibited by TE error, which is one reason that zeta converges to something coherent for all rational numbers, whereas TE fails to converge as the limit increases. We will use the term "rolloff" to identify the variable σ below.

Putting this all together, we can take the approach of fixing σ, specifying a rolloff, and then letting x (or t) vary, specifying an EDO. The resulting function gives us the measured accuracy of EDOs across all unreduced rational numbers with respect to the chosen rolloff. Taking it all together, we get a Tenney-weighted sum of cosine accuracy over all unreduced rationals. QED.
It is extremely noteworthy how "composite" rationals are treated differently than with TE error. In addition to our usual error metric on the primes, we also go to each rational, look for the best "direct" or "patent" mapping of that rational within the EDO, and add ''that'' to the EDO's score. In particular, we do this even when the best mapping for some rational doesn't match up with the mapping you'd get from just looking at the primes.

So, for instance, in 16-EDO, the best mapping for 3/2 is 9 steps out of 16, and using that mapping, we get that 9/8 is 2 steps {{nowrap|(9 · 2 − 16 = 2)}}. However, there is a better mapping for 9/8 at 3 steps - one which ignores the fact that it is no longer equal to two 3/2's. This can be particularly useful for playing chords: 16-EDO's "direct mapping" for 9 is useful when playing the chord 4:5:7:9, and the "indirect" or "prime-based" mapping for 9 is useful when playing the "major 9" chord 8:10:12:15:18. We can think of the zeta function as rewarding equal temperaments not just for having a good approximation of the primes, but also for having good "extra" approximations of rationals which can be used in this way. And although 16-EDO is pretty high error, similar phenomena can be found for any EDO which becomes [[consistency|inconsistent]] for some chord of interest.
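The 16-EDO arithmetic can be verified directly. A minimal sketch (<code>direct_steps</code> is our own name for the "direct mapping" rounding, not standard terminology):

```python
import math

def direct_steps(ratio, edo):
    # best "direct" (patent) mapping of a ratio: round to the nearest step
    return round(edo * math.log2(ratio))

edo = 16
fifth = direct_steps(3/2, edo)     # 9 steps
via_primes = 2 * fifth - edo       # 9/8 as two fifths minus an octave: 2 steps
direct = direct_steps(9/8, edo)    # best direct mapping of 9/8: 3 steps

print(fifth, via_primes, direct)
```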
One way to frame this in the usual group-theoretic paradigm is to consider the group in which each strictly positive rational number is given its own linearly independent basis element. In other words, look at the [https://en.wikipedia.org/wiki/Free_group free group] over the strictly positive rationals, which we'll call "'''meta-JI'''." The zeta function can then be thought of as yielding an error for all meta-JI [[Patent_val|generalized patent vals]]. Whether this can be extended to all meta-JI vals, or modified to yield something nice like a "norm" on the group of meta-JI vals, is an open question. Regardless, this may be a useful conceptual bridge to understand how to relate the zeta function to "ordinary" regular temperament theory.
\left| \zeta(s) \right|^2 = \sum_{n',d',c} \frac{\cos\left(t \ln\left({\tfrac{cn'}{cd'}}\right)\right)}{(cn' \cdot cd')^{\sigma}}</math>

Now, the common factor c/c cancels out inside the log in the numerator. However, in the denominator, we get an extra factor of c<sup>2</sup> to contend with. This yields

<math> \displaystyle