User:Mike Battaglia/Mike's Zeta Function Working Page

From Xenharmonic Wiki

My business is kicking my ass. Sometimes I have a minute to work on tuning theory. I'm going to post my ongoing work here when I can.

I am going to start some of the heavy lifting here and hope that maybe Gene can help me communicate this in a way that makes sense. Then we can merge this into the ordinary Zeta tuning page. (My notes will be in bold/italic below.)

Zeta yields tuning error over all rationals

In the Zeta tuning page, Gene proves that the zeta function measures weighted tuning error over all prime powers for any EDO. Here, we strengthen that result to show that the zeta function measures weighted tuning error over all rational numbers.

Let's dive in!

First, let's take the zeta function, expressed as a Dirichlet series:

[math] \displaystyle \zeta(s) = \sum_n n^{-s}[/math]

Now let's do two things: we'll expand s = σ+it, and we'll multiply ζ(s) by its complex conjugate ζ(s)', noting that ζ(s)' = ζ(s') (a consequence of the Schwarz reflection principle, since ζ is real on the real axis for σ > 1) and that ζ(s)·ζ(s)' = |ζ(s)|². We get:

[math] \displaystyle \left| \zeta(s) \right|^2 = \left[\sum_n n^{-(\sigma+it)}\right] \cdot \left[\sum_d d^{-(\sigma-it)}\right][/math]

where d is a new variable used internally in the second summation.

Now, let's focus on σ > 1, so that both series are absolutely convergent. The following rearrangement of terms is then justified:

[math] \displaystyle \left| \zeta(s) \right|^2 = \sum_{n,d} \left[n^{-(\sigma+it)} \cdot d^{-(\sigma-it)}\right] = \sum_{n,d} \frac{\left({\tfrac{n}{d}}\right)^{-it}}{(nd)^{\sigma}}[/math]

Now let's do a bit of algebra with the exponential function, and use Euler's identity:

[math] \displaystyle \left| \zeta(s) \right|^2 = \sum_{n,d} \frac{e^{-it \ln\left({\tfrac{n}{d}}\right)}}{(nd)^{\sigma}} = \sum_{n,d} \frac{\cos\left(-t \ln\left({\tfrac{n}{d}}\right)\right) + i\sin\left(-t \ln\left({\tfrac{n}{d}}\right)\right)}{(nd)^{\sigma}} = \sum_{n,d} \frac{\cos\left(t \ln\left({\tfrac{n}{d}}\right)\right) - i\sin\left(t \ln\left({\tfrac{n}{d}}\right)\right)}{(nd)^{\sigma}}[/math]

where the last equality makes use of the fact that cos(-x) = cos(x) and sin(-x) = -sin(x).

Now, let's decompose the sum into three parts: n=d, n>d, n<d. Here's what we get:

[math] \displaystyle \left| \zeta(s) \right|^2 = \sum_{n=d} \left[ \frac{\cos\left(t \ln\left({\tfrac{n}{d}}\right)\right) - i\sin\left(t \ln\left({\tfrac{n}{d}}\right)\right)}{(nd)^{\sigma}} \right] + \sum_{n\gt d} \left[ \frac{\cos\left(t \ln\left({\tfrac{n}{d}}\right)\right) - i\sin\left(t \ln\left({\tfrac{n}{d}}\right)\right)}{(nd)^{\sigma}} \right] + \sum_{n\lt d} \left[ \frac{\cos\left(t \ln\left({\tfrac{n}{d}}\right)\right) - i\sin\left(t \ln\left({\tfrac{n}{d}}\right)\right)}{(nd)^{\sigma}} \right][/math]

We'll deal with each of these separately.

First, in the leftmost summation, we can see that n=d implies ln(n/d) = 0. Since sin(0) = 0, the sine term in the numerator vanishes, yielding:

[math] \displaystyle \left| \zeta(s) \right|^2 = \sum_{n=d} \left[ \frac{\cos\left( t \ln\left({\tfrac{n}{d}}\right)\right)}{(nd)^{\sigma}} \right] + \sum_{n\gt d} \left[ \frac{\cos\left(t \ln\left({\tfrac{n}{d}}\right)\right) - i\sin\left(t \ln\left({\tfrac{n}{d}}\right)\right)}{(nd)^{\sigma}} \right] + \sum_{n\lt d} \left[ \frac{\cos\left(t \ln\left({\tfrac{n}{d}}\right)\right) - i\sin\left(t \ln\left({\tfrac{n}{d}}\right)\right)}{(nd)^{\sigma}} \right][/math]

We will not simplify the cosine term further right now, for reasons that will become apparent below.

Now, let's handle the two summations on the right. The key thing to note here is that we can pair up every term in the second summation with a corresponding term in the third summation that interchanges n and d. To make this clear, let p and q be two integers, and assume without loss of generality that p>q. The term corresponding to n=p, d=q will then appear in the second summation, and the term n=q, d=p will appear in the third summation. Juxtaposing those together, we get the following:

[math] \displaystyle \frac{\cos\left(t \ln\left({\tfrac{p}{q}}\right)\right) - i\sin\left(t \ln\left({\tfrac{p}{q}}\right)\right)}{(pq)^{\sigma}} + \frac{\cos\left(t \ln\left({\tfrac{q}{p}}\right)\right) - i\sin\left(t \ln\left({\tfrac{q}{p}}\right)\right)}{(pq)^{\sigma}}[/math]

Now, noting that ln(p/q) = -ln(q/p) and that sin is an odd function, we can see that the sin terms cancel out, leaving

[math] \displaystyle \frac{\cos\left(t \ln\left({\tfrac{p}{q}}\right)\right)}{(pq)^{\sigma}} + \frac{\cos\left(t \ln\left({\tfrac{q}{p}}\right)\right)}{(pq)^{\sigma}}[/math]

Now, since every term in these two summations has a pair like this, and since we've done nothing but continued rearrangements of an absolutely convergent series, we can modify the original three-part summation to cancel the sin terms out as follows:

[math] \displaystyle \left| \zeta(s) \right|^2 = \sum_{n=d} \left[ \frac{\cos\left(t \ln\left({\tfrac{n}{d}}\right)\right)}{(nd)^{\sigma}} \right] + \sum_{n\gt d} \left[ \frac{\cos\left(t \ln\left({\tfrac{n}{d}}\right)\right)}{(nd)^{\sigma}} \right] + \sum_{n\lt d} \left[ \frac{\cos\left(t \ln\left({\tfrac{n}{d}}\right)\right)}{(nd)^{\sigma}} \right][/math]

Putting the whole thing back into one series, we get

[math] \displaystyle \left| \zeta(s) \right|^2 = \sum_{n,d} \frac{\cos\left(t \ln\left({\tfrac{n}{d}}\right)\right)}{(nd)^{\sigma}}[/math]

Finally, by making the mysterious substitution t = 2π/ln(2) · x, the musical implications of the above will start to reveal themselves:

[math] \displaystyle \left| \zeta(s) \right|^2 = \sum_{n,d} \frac{\cos\left(2\pi x \log_2\left(\tfrac{n}{d}\right)\right)}{(nd)^{\sigma}}[/math]

Let's take a breather and see what we've got.

For every strictly positive rational n/d, there is a cosine with period 2π log2(n/d). This cosine peaks at x=N/log2(n/d) for all integer N, or in other words, the Nth-equal division of the rational number n/d, and hits troughs midway between. Our mysterious substitution above was chosen to set the units for this up nicely - x now happens to be measured in divisions of the octave. (The original variable t, which was the imaginary part of the zeta argument s, can be thought of as the number of divisions of the interval e ≈ 535.49, or what Keenan Pepper has called the "natural interval.")
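Here's a small numerical sketch of this (my own illustration; the truncation N=30 and rolloff σ=2 are arbitrary choices of mine): evaluating the truncated sum at integer x, we'd expect an accurate EDO like 12 to score visibly higher than its neighbors 11 and 13, which tune simple ratios like 3/2 and 5/4 much worse.

```python
import math

def cosine_accuracy(x, sigma=2.0, N=30):
    """Truncated version of sum over n,d of cos(2*pi*x*log2(n/d)) / (n*d)^sigma."""
    return sum(math.cos(2 * math.pi * x * math.log2(n / d)) / (n * d) ** sigma
               for n in range(1, N + 1) for d in range(1, N + 1))

# 12edo tunes 3/2 and 5/4 far more accurately than 11edo or 13edo do,
# so its cosine-accuracy score should be clearly higher.
assert cosine_accuracy(12) > cosine_accuracy(11)
assert cosine_accuracy(12) > cosine_accuracy(13)
```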

As mentioned in Gene's original zeta page, these cosine functions are very good approximations to the TE squared error for the interval n/d, with the difference that the function here is flipped upside down - that is, we're measuring "accuracy" rather than error - as well as shifted down the y-axis. Since it is trivial to convert between the two, it is clear that we're measuring essentially the same thing. We will call this cosine error, by analogy with TE error, or cosine accuracy when appropriate.

Going back to the infinite summation above, we note that these cosine accuracy functions are being weighted by 1/(nd)^σ. Note that σ, which is the real part of the zeta argument s, serves as a sort of complexity weighting - it determines how quickly complex rational numbers become "irrelevant." Framed another way, we can think of it as the degree of "rolloff" of the resultant (musical, not mathematical) harmonic series formed by those rationals with d=1. Note that this rolloff is much stronger than the usual 1/log(nd) rolloff exhibited by TE error, which is one reason that zeta converges to something coherent over all rational numbers, whereas TE fails to converge as the limit increases. We will use the term "rolloff" to identify the variable σ below.

Putting this all together: we fix σ, specifying a rolloff, and then let x (or t) vary, specifying an EDO. The resulting function gives us the measured accuracy of EDOs across all unreduced rational numbers with respect to the chosen rolloff - a Tenney-weighted sum of cosine accuracy over all unreduced rationals. QED.

It is worth emphasizing how "composite" rationals are treated differently than with TE error. Here, the tuning error of each rational number is measured independently of the errors of the primes that make it up; in other words, the error of 5/3 is not necessarily the error of 5 minus the error of 3. This is in contrast to how "ordinary" regular temperaments typically work, where errors add up "consistently" across all intervals in the temperament. With zeta, every rational number gets its own independent error score which is added to the total - quite different, although still musically valid.

One way to frame this in the usual group-theoretic paradigm is to consider the group in which each strictly positive rational number is given its own linearly independent basis element. In other words, look at the free group over the strictly positive rationals, which we'll call "meta-JI." The zeta function can then be thought of as yielding an error for all meta-JI generalized patent vals. Whether this can be extended to all meta-JI vals, or modified to yield something nice like a "norm" on the group of meta-JI vals, is an open question. Regardless, this may be a useful conceptual bridge to understand how to relate the zeta function to "ordinary" regular temperament theory.

From unreduced rationals to reduced rationals

Let's go back to this expression here:

[math] \displaystyle \left| \zeta(s) \right|^2 = \sum_{n,d} \frac{\cos\left(t \ln\left({\tfrac{n}{d}}\right)\right)}{(nd)^{\sigma}}[/math]

Note that since there's no restriction that n and d be coprime, the "rationals" we're using here don't have to be reduced. So this shows that zeta yields an error metric over all unreduced rationals, but leaves open the question of how reduced rationals are handled. It turns out that the same function also measures the error of reduced rationals, scaled only by a rolloff-dependent constant factor across all EDOs.

To see this, let's first note that every "unreduced" rational n/d can be decomposed into the product of a reduced rational n'/d' and a common factor c/c. Furthermore, note that for any reduced rational n'/d', we can generate all unreduced rationals n/d corresponding to it by multiplying it by all such common factors c/c, where c is a strictly positive natural number.

This allows us to change our original summation so that it's over three variables, n', d', and c, where n' and d' are coprime, and c is a strictly positive natural number:

[math] \displaystyle \left| \zeta(s) \right|^2 = \sum_{n',d',c} \frac{\cos\left(t \ln\left({\tfrac{cn'}{cd'}}\right)\right)}{(cn' \cdot cd')^{\sigma}}[/math]

Now, the common factor c/c cancels out inside the log in the numerator. However, in the denominator, we get an extra factor of c2 to contend with. This yields

[math] \displaystyle \left| \zeta(s) \right|^2 = \sum_{n',d',c} \frac{\cos\left(t \ln\left({\tfrac{n'}{d'}}\right)\right)}{(c^2 \cdot n'd')^{\sigma}} = \sum_{n',d',c} \left[ \frac{1}{c^{2\sigma}} \cdot \frac{\cos\left(t \ln\left({\tfrac{n'}{d'}}\right)\right)}{(n'd')^{\sigma}} \right][/math]

Now, since we're still assuming that σ > 1 and everything is absolutely convergent, we can decompose this into a product of series as follows

[math] \displaystyle \left| \zeta(s) \right|^2 = \left[ \sum_c \frac{1}{c^{2\sigma}} \right] \cdot \left[ \sum_{n',d'} \frac{\cos\left(t \ln\left({\tfrac{n'}{d'}}\right)\right)}{(n'd')^{\sigma}} \right][/math]

Finally, we note that the left summation is simply another zeta series, yielding

[math] \displaystyle \left| \zeta(s) \right|^2 = \zeta(2\sigma) \cdot \left[ \sum_{n',d'} \frac{\cos\left(t \ln\left({\tfrac{n'}{d'}}\right)\right)}{(n'd')^{\sigma}} \right][/math]

or, dividing through by ζ(2σ):

[math] \displaystyle \frac{\left| \zeta(s) \right|^2}{\zeta(2\sigma)} = \sum_{n',d'} \frac{\cos\left(t \ln\left({\tfrac{n'}{d'}}\right)\right)}{(n'd')^{\sigma}}[/math]

Now, since we're fixing σ and letting t vary, the left zeta term is constant for all EDOs. This demonstrates that the zeta function also measures cosine error over all the reduced rationals, up to a constant factor. QED.
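As a numerical check of the change of variables used above (my own sketch, not part of the original argument): on truncated sums the decomposition is an exact bijection, since every pair (n, d) with n, d ≤ N factors uniquely as (cn', cd') with c = gcd(n, d) and gcd(n', d') = 1. Enumerating both ways should therefore agree to floating-point precision:

```python
import math

sigma, t, N = 2.0, 10.0, 100

# Direct enumeration over all (possibly unreduced) pairs (n, d)
direct = sum(math.cos(t * math.log(n / d)) / (n * d) ** sigma
             for n in range(1, N + 1) for d in range(1, N + 1))

# Enumeration via common factor c and coprime pair (n', d'), with the
# denominator rewritten as (c^2 * n' * d')^sigma exactly as in the text
via_gcd = sum(math.cos(t * math.log(np / dp)) / (c * c * np * dp) ** sigma
              for c in range(1, N + 1)
              for np in range(1, N // c + 1)
              for dp in range(1, N // c + 1)
              if math.gcd(np, dp) == 1)

assert abs(direct - via_gcd) < 1e-9
```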

(Mike's note - below are a lot of notes about things that need to be done to move forward, so I don't forget)

Immediate next steps

  1. Gene Smith, Martin Gough - can anyone confirm I haven't screwed anything up above?
  2. With a bit more algebra, it is also possible to prove that the same function measures error over all reduced rationals, just with the output sent through a strictly monotonic function (might have just been a scaling, I don't remember). I lost my notes, but shouldn't be too hard to work it out again. I'll do that once it's confirmed I didn't screw anything up.
  3. The above was only proven for Re{s}>1. We need to dive into the analytic continuation next. It may be good to see if eta can be "decomposed" into a weighted sum of cosines like the above, and if so, what that looks like. (Martin, did you already do this previously?)

Building on from there

  1. Once the above looks good, find a way to merge this all into one zeta page.
  2. Martin has a record of the zeta discussions on Facebook; would be good to perhaps just dump it all here for reference.
  3. Another zeta paper from an independent researcher (!) was posted on the other zeta page. I've uploaded it and replicated it below - need to wade through this at some point and integrate it into everything.


Conjectures to prove or disprove way later

  1. Zeta is (related to) the Fourier transform of HE via the HE convolution theorem, and hence provides a way to analytically continue HE to N=inf. (I hope! But what about the analytic continuation?)
  2. If we actually want to use TE error over all rationals, we can! We can simply use parabolic waves instead of cosine waves. Since parabolic waves are a weighted sum of cosine waves, you end up with a neat "zeta series" representation of TE error, except the error is still weighted more strongly. Need to work this out.
  3. The stronger rolloff, coupled with the use of "meta-JI", will hopefully make it so that subgroup temperaments "converge" to something as the limit increases, in some vague way I need to go over again in my notes.
  4. This is all based on Tenney Height. What if we use Weil height instead, start with cosines weighted by 1/max(n,d)^σ, and then work backwards?
  5. What if we try using cosines for just the primes, and work backward, mimicking the idea of "consistent" error ratings? Do we get the prime zeta function?
  6. Figure out if we can extend this zeta stuff to arbitrary infinite-limit vals, perhaps in meta-JI, rather than just GPV's.
  7. Figure out if we can extend this zeta stuff to higher-rank tuning systems.
  8. Figure out if there's a nice algebraic way to merge this with the rest of regular temperament theory - can we get a nicely behaved Banach space out of this somehow?
  9. We switched from 1/log(nd) weighting to 1/(nd)^σ weighting. This is kind of the same as sticking with 1/log(nd) weighting, but using a different sort of way to combine error across intervals. Something to think about.
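Regarding item 2 above: the claim that parabolic waves are a weighted sum of cosines is the classical Fourier series of the periodic Bernoulli polynomial, sum over k ≥ 1 of cos(kθ)/k² = π²/6 - πθ/2 + θ²/4 for 0 ≤ θ ≤ 2π. A quick numerical check (my own, included for reference):

```python
import math

# Fourier series of a parabolic wave:
#   sum_{k>=1} cos(k*theta)/k^2 = pi^2/6 - pi*theta/2 + theta^2/4,  0 <= theta <= 2*pi
theta = 1.234
series = sum(math.cos(k * theta) / k ** 2 for k in range(1, 100001))
closed = math.pi ** 2 / 6 - math.pi * theta / 2 + theta ** 2 / 4

# Tail of the series is bounded by ~1/100000, so this tolerance is safe
assert abs(series - closed) < 1e-4
```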

A bunch of random stuff

Another conjecture I don't want to forget, archived from Facebook:

OK, so here’s a conjecture that I had while working on stuff.

We’re really comparing the following:

• A 1/p^s rolloff on primes

• A “truncated” 1/log(p) rolloff which is 1/log(p) up until a certain prime, then 0 afterward

So we have two free parameters here: s for the first rolloff, and the prime limit for the second rolloff. So here’s my conjecture, stated very loosely and informally:

CONJECTURE: for any choice of s, there is some prime limit for which Tenney-based approaches will “most closely” approximate the zeta approach.

So as an example, if we’re using s=0.5 and comparing that to Graham’s TE finder, then there will be some prime-limit for which TE most closely approximates zeta. For less or more than that limit, things start to diverge.

I did some basic work in this direction and proved some preliminary stuff, but I need to go over my notes again to see what I proved and didn't prove. I do remember working out that for s=0.5 to s=1, the maximum alignment point was somewhere around the 13-limit/17-limit area. So infinite-limit zeta badness is "approximated" best by 13-limit TE badness (or Cangwu badness for "low enough" Ek), or something like that.

(A series of plots comparing TE and zeta badness at successive prime limits followed here, captioned: "So this should not really agree with zeta all that much," "This is the maximum point of agreement," "Now things get goofy," and so on.)

I’m VERY much oversimplifying the above, but that’s the high-level idea. To see why this might be the case, consider that there’s a region in which 1/log(p) ≈ 1/sqrt(p) before things get really wonky and 1/log(p) completely dominates 1/sqrt(p).
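One way to make that region concrete (my own illustration, not from the original thread): for s = 0.5, the ratio between the two weightings is sqrt(p)/log(p), which calculus minimizes at x = e² ≈ 7.39; it stays comparatively flat through the low primes before 1/log(p) pulls away for good.

```python
import math

primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47]

# Ratio of the Tenney-style weight 1/log(p) to the s = 0.5 rolloff weight
# 1/sqrt(p); where this ratio is flattest, the two weightings track each
# other most closely (up to an overall normalization).
ratio = {p: math.sqrt(p) / math.log(p) for p in primes}

# sqrt(x)/log(x) has its minimum at x = e^2 ~ 7.39, so among primes the
# closest-tracking point is p = 7, with 5 and 11 nearly tied just above it.
assert min(ratio, key=ratio.get) == 7
```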

While zeta research is still worth pursuing for other reasons, if in the end it turns out that some of the zeta stuff is computationally infeasible, then this might be useful. We might find that 13-limit TE badness is the thing “closest to zeta” that’s still easily computable, or something like that, declare that 13-limit TE space is the “best JI universe” for mere mortals to work in, and move on. That’s one way this might wrap up.

Another huge list of conjectures:

- Look at what the HE analytic continuation looks like.

- Define the function that gives you the error over ALL real dyads, weighted by HE (!!), which is going to be something like (but not exactly) two zeta functions multiplied pointwise by gaussians divided by one another, which is tantalizingly close to something basic like zeta error * EDO size.

- See how this relates to Dustin Schallert and Keenan Pepper's previous work looking at the HE of scales

- Use this zeta bridge here to find the exact connection between HE and TE, which, if it works out, would be a rigorous version of some ideas that Carl Lumma anticipated on IRC a while ago about how TE is "the best"

- Paul Erlich conjectured that using mediant-mediant weighting of rationals for Tenney series gives you 1/(n*d)^0.5 "widths" in the limit, where that magic 0.5 exponent appears for no reason at all and also now magically syncs up with the critical line of zeta, also apparently for no reason at all. What does it mean? Perhaps Paul's conjecture is equivalent to the Riemann Hypothesis?

- I did some basic work outlining how to use actual parabolic waves to use zeta to describe infinite-limit actual TE error rather than using the cosine approximation; the basic idea is to add "harmonics" of the zeta function in over the original. I want a general theory of this for arbitrary error functions, which I think in general will work out to be the Mellin convolution of zeta with the function in question.

- I also have a hunch that taking the Mellin convolution with aperiodic functions might let us break out of the GPV paradigm and define some kind of zeta error for arbitrary vals, but I'm not sure yet.

- I want to finish working out Martin Gough's work on the analytic continuation of zeta, and see if I can apply that concept everywhere that things don't converge (TE, HE, etc).

- Zeta involves "inconsistent" mappings, so I want to see what you get if you look only for "consistent" mappings. (The prime zeta function? Zeta function again? etc)

- I want to see if we can use all of these ideas, mixing zeta and HE and TE and stronger rolloffs and etc, to come up with subgroup temperaments that actually converge to something if you take the "best" extension to a different subgroup and repeat to the infinite-limit, perhaps as the category-theoretic inverse limit of an inverse system, and to have this converge to the same thing no matter which chain you use so long as the supremum is the infinite-limit.

Here's a huge Facebook thread dump, for reference:

- convo with Martin re archiving
- summary of Mike's thoughts on how to tie zeta and HE together
- zeta series re sum-squared error and stuff, lots of good stuff here
- original discussion about zeta optimizing error over all intervals, proofs of reduced rationals here, tons of eta function stuff
- clarification between Mike and Gene over "inconsistent" error mappings
- conjecture about 13-limit or 17-limit TE agreeing with zeta and then diverging past that