BOP tuning


For any temperament, the TOP tuning is the one that minimizes the max Tenney-weighted (i.e. [math]1/\log(nd)[/math]-weighted) error on all rationals mapped by that temperament.

It so happens that we can use the same recipe to define a closely related tuning that instead minimizes the max [math]1/(nd)^s[/math]-weighted error on all rationals, where [math]s \geq 1[/math] is a free parameter. We will call this the BOP tuning, for Benedetti Optimal.
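
For example, under this weighting the error of [math]3/2[/math] is divided by [math](3 \cdot 2)^s = 6^s[/math], whereas Tenney weighting divides it by [math]\log(6)[/math].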

Similarly to TOP, we will show that this is easily obtained by minimizing the max [math]1/p^s[/math]-weighted error on just the primes; for [math]s=1[/math], this means it is also optimal with respect to the Wilson height.

For now, we show that this result holds for all full-limits, though it can be extended to arbitrary subgroups.

Proof of Benedetti optimality on all rationals

The proof is similar to the TOP optimality proof.

First, we note that for any tuning map, call it [math]T[/math], there is an associated vector called the signed error map, given by [math]E = T-J[/math], where [math]J[/math] is the JIP. If we assume that we are in a prime-limit, then the signed error map contains the signed error on each prime. All errors are unweighted so far.

For any such error map, we can then divide the unweighted error by the desired weighting on each prime to get the weighted error map. This can be viewed as a change of coordinates to a weighted coordinate system. Assuming our tuning maps are row vectors, this can be expressed as a matrix right-multiplication by some diagonal weighting matrix [math]W[/math], where the diagonal entries are the desired weights on each prime. We will then denote the weighted error by [math]F = E\cdot W[/math].
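
For concreteness, here is a minimal numeric sketch of these definitions in Python (the 5-limit tuning map [math]T[/math] here is made up for illustration, and we work in octaves):

  import numpy as np

  s = 1.0
  primes = np.array([2.0, 3.0, 5.0])

  J = np.log2(primes)                 # the JIP, here in octaves
  T = np.array([1.0, 1.594, 2.328])   # a made-up 5-limit tuning map, in octaves

  E = T - J                           # signed unweighted error map on the primes
  W = np.diag(1.0 / primes**s)        # diagonal Benedetti weighting matrix, 1/p^s
  F = E @ W                           # signed weighted error map, F = E.W

  print(np.max(np.abs(F)))            # max weighted error on the primes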

For any such error map, we can then evaluate the maximum unsigned weighted error on the primes. This quantity is equal to the maximum of the absolute values of the coefficients of the signed weighted error map, or equivalently the L-inf norm of the signed error map in the weighted coordinate system.

The L-inf norm has the property of being the dual norm of the L1 norm on the dual space, which is the space of monzos. This means that it has the following property:

[math]\|F\|_\infty = \sup_M \frac{|\langle F|M\rangle|}{\|M\|_1}[/math]

where the supremum is taken over all monzos [math]M[/math] in the inverse-weighted coordinate system on the dual space. That is, if [math]m[/math] were a monzo in unweighted coordinates, its representation in this space would be [math]M=W^{-1}\cdot m[/math], where we are left-multiplying by the inverse of the weighting matrix from before.
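
As a worked example: if [math]m = |{-1}\ 1\ 0\rangle[/math] is the monzo for [math]3/2[/math], then [math]M = W^{-1}\cdot m = |{-2^s}\ 3^s\ 0\rangle[/math], with [math]\|M\|_1 = 2^s + 3^s[/math] – a quantity that will reappear below.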

This tells us that the max weighted error on the primes is also the max weighted error on all monzos in the prime-limit, where the weighting on monzos is given by the dual L1 norm. Furthermore, this shows that the max weighted error will always be attained at a prime.

Technically, the supremum above ranges over all vectors with arbitrary real coordinates, whereas we want to restrict to those with integer coordinates in the unweighted basis. However, since we divide by the norm, the supremum over all integer monzos is the same as the supremum over all monzos with rational coordinates, and these are dense in the set of real-valued vectors, so the supremum is unchanged. In particular, it is always attained at a prime, which has integer coordinates anyway.
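
As a numeric spot-check of this duality (our own illustration, not part of the proof), we can sample random integer monzos and confirm that the ratio [math]|\langle F|M\rangle|/\|M\|_1[/math] never exceeds [math]\|F\|_\infty[/math], and reaches it at (a multiple of) a prime monzo:

  import numpy as np

  rng = np.random.default_rng(0)
  s = 1.0
  primes = np.array([2.0, 3.0, 5.0])
  Winv = np.diag(primes**s)              # inverse of the weighting matrix W

  F = np.array([0.002, -0.004, 0.001])   # a made-up weighted error map
  best = 0.0
  for _ in range(10000):
      m = rng.integers(-8, 9, size=3)    # a random integer monzo
      if not m.any():
          continue                       # skip the zero monzo
      M = Winv @ m                       # monzo in inverse-weighted coordinates
      best = max(best, abs(F @ M) / np.abs(M).sum())

  print(best, np.max(np.abs(F)))         # best never exceeds the L-inf norm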

Now, if our weighting matrix is the usual [math]1/\log(p)[/math] Tenney-weighting matrix, then the above is equivalent to Paul Erlich's theorem that the tuning minimizing the max Tenney-weighted error on the primes also minimizes the max Tenney-weighted error on all intervals. This is called the TOP tuning. However, if we instead change the weighting matrix to [math]1/p^s[/math], then our L-inf norm will be dual to a different, somewhat unusual weighted L1 norm on monzos: the one where the weighting on the primes is given by [math]p^s[/math], and the weighting for an arbitrary monzo [math]m = |a\, b\, c\, \ldots\rangle[/math] is given by

[math]\text{sopfr}^s(m) = 2^s|a| + 3^s|b| + 5^s|c| + \ldots[/math]

where the notation on the left will be explained shortly.

Now, we note that this unusually weighted L1 norm does not provide the weighting we want on all rationals, which is [math](nd)^s[/math]. Rather, when [math]s=1[/math], it is called the Wilson norm, and equals the sum of prime factors with repetition of the ratio [math]n/d[/math] in question, often written [math]\text{sopfr}(nd)[/math]. For arbitrary [math]s\geq 1[/math], it is the sum of the [math]s[/math]-th powers of the prime factors with repetition of the ratio, which we will denote [math]\text{sopfr}^s(nd)[/math]. However, we note that this strange weighting does equal the weighting we want on the primes, where we have [math]\text{sopfr}^s(p) = p^s[/math] – it only differs on composite numbers.
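
A minimal sketch of this norm in code (the helper name sopfr_s is our own; sympy's factorint does the factoring):

  from sympy import factorint

  def sopfr_s(n, d=1, s=1.0):
      # Sum of the s-th powers of the prime factors of n*d, with repetition.
      return sum(e * p**s for p, e in factorint(n * d).items())

  print(sopfr_s(7))      # 7.0: on a prime the strange and true weightings agree
  print(sopfr_s(9, 8))   # 12.0: sopfr(72) = 2+2+2+3+3, while (9*8)**1 = 72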

We now want to show that the max weighted error according to this strangely weighted L1 norm is equivalent to the max weighted error according to the true weighting that we want. To do this, we consider what happens, for any particular tuning, if we simply switch the weighting on the rationals to the true one. How does this re-weighting change our evaluation of the error of that tuning? To prove our result, we need only show that

  1. Changing the weighting scheme to the true one does not make the max weighted error on all rationals any better
  2. Likewise, it doesn't make it any worse

The first part is easy: for any tuning map, the max "strange L1"-weighted error is attained at one of the primes, and the primes are weighted identically under both metrics. Since the weighting on the worst prime doesn't change, after we switch weightings the new max error can be no better than what we had before.

To show the second part, all we need to show is that all rationals are weighted more strongly with our strange L1 metric than with the true metric we want. In particular, we want to show the following:

[math]\text{sopfr}^s(nd) \leq (nd)^s[/math]

where the left hand side is our strange metric and the right hand side is the true metric. This would entail that we have

[math]\frac{\text{err}_{n/d}}{\text{sopfr}^s(nd)} \geq \frac{\text{err}_{n/d}}{(nd)^s}[/math]

where the left hand side is the strangely-weighted error, and the right hand side is the true-weighted error. (Note the numerator is the same on both sides, representing unweighted error.)

If we can show the above, this means that after re-weighting, the weighted error on each rational does not increase – and at the primes, where the worst error is attained, it remains exactly the same. This would give us our desired result.

The proof is fairly simple: remember our definition of the strange L1 metric:

[math]\text{sopfr}^s(m) = 2^s|a| + 3^s|b| + 5^s|c| + \ldots[/math]

In comparison, the true weighting we want on monzos is:

[math]\text{true}^s(m) = 2^{s|a|} \cdot 3^{s|b|} \cdot 5^{s|c|} \cdot \ldots[/math]
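
As a check that this is indeed the weighting we want: for [math]m = |{-3}\ 2\ 0\rangle[/math], the monzo for [math]9/8[/math], this gives [math]2^{3s} \cdot 3^{2s} = 72^s = (9 \cdot 8)^s = (nd)^s[/math].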


So we want to show that the following inequality holds for all monzos:

[math]2^s|a| + 3^s|b| + 5^s|c| + \ldots \leq 2^{s|a|} \cdot 3^{s|b|} \cdot 5^{s|c|} \cdot \ldots[/math]


But as long as we have [math]s\geq 1[/math], this is easy to see because:

  1. For any particular prime [math]p[/math] with integer coefficient [math]q[/math], we have [math]p^s|q| \leq p^{s|q|}[/math]: the case [math]q = 0[/math] is trivial, and for [math]|q| \geq 1[/math] it follows from [math]|q| \leq (p^s)^{|q|-1}[/math], which holds since [math]p^s \geq 2[/math].
  2. The nonzero terms on the left side are each at least 2, so their sum is at most their product, which is in turn at most the product of the associated "true" terms by point 1.

So we have our desired result.
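
Here is a brute-force spot-check of this inequality over small 5-limit monzos and a few values of [math]s \geq 1[/math] (our own sanity check, not part of the proof):

  from itertools import product as cartesian

  primes = [2, 3, 5]
  for s in (1.0, 1.5, 2.0):
      for m in cartesian(range(-3, 4), repeat=3):
          lhs = sum(p**s * abs(q) for p, q in zip(primes, m))
          rhs = 1.0
          for p, q in zip(primes, m):
              rhs *= p**(s * abs(q))     # the "true" weighting, (nd)^s
          assert lhs <= rhs, (s, m)
  print("sopfr^s(nd) <= (nd)^s on all tested monzos")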

As a result, we have shown that for any particular tuning map, the max weighted error on the primes is simultaneously the max weighted error according to both metrics: the strange one and the one we really want. This leads to our main theorem:

Theorem: to minimize the [math]1/(nd)^s[/math]-weighted error on all rationals, one need only minimize the [math]1/p^s[/math]-weighted error on the primes.
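
In code, the theorem says the BOP tuning can be found as the solution of a small linear program over the primes alone. Below is a sketch using scipy; the meantone mapping and the choice [math]s = 1[/math] are purely illustrative:

  import numpy as np
  from scipy.optimize import linprog

  s = 1.0
  primes = np.array([2.0, 3.0, 5.0])
  J = 1200.0 * np.log2(primes)            # JIP in cents
  V = np.array([[1, 1, 0],                # meantone: 2 -> P, 3 -> P + g, 5 -> 4g
                [0, 1, 4]])
  w = 1.0 / primes**s                     # Benedetti weights 1/p^s on the primes

  r, n = V.shape
  c = np.zeros(r + 1); c[-1] = 1.0        # minimize t, the max weighted error
  A_ub, b_ub = [], []
  for i in range(n):                      # |w_i * (g.V[:,i] - J_i)| <= t
      A_ub.append(np.append( w[i] * V[:, i], -1.0)); b_ub.append( w[i] * J[i])
      A_ub.append(np.append(-w[i] * V[:, i], -1.0)); b_ub.append(-w[i] * J[i])

  res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                bounds=[(None, None)] * r + [(0, None)])
  print(res.x[:r])                        # generator sizes in cents
  print(res.x[-1])                        # minimized max weighted error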

Extension to arbitrary subgroups

The above proof was only for prime limits. However, the above result can be extended easily to certain "nice" subgroups with a basis of prime powers, such as 2.9.7.11, with the only caveat being that we want to make sure we directly weight each prime power [math]p^n[/math] as [math]p^{ns}[/math], rather than giving it the naive weighting of [math]\text{sopfr}^s(p^n)[/math].
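
As a worked check: in 2.9.7.11, the basis element [math]9 = 3^2[/math] should be weighted directly as [math]9^s = 3^{2s}[/math]; the naive [math]\text{sopfr}^s(9) = 2 \cdot 3^s[/math] is smaller (6 versus 9 at [math]s = 1[/math]), and dividing by it would overstate the weighted error on 9.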

In general, a corresponding result can be derived for any arbitrary subgroup; the max [math]1/(nd)^s[/math]-weighted error will be attained on some relatively simple interval, so that one only needs to check a sufficiently small set of intervals (though not necessarily the primes).

Proof of optimality even with extra "inconsistent" mappings

In the page on TOP tuning, it is shown that for full-limits (and certain "nice" subgroups), the TOP tuning remains optimal even if composite rationals are given extra "inconsistent" mappings, as long as the tuning on those mappings is no worse than the consistent one. Tuning maps with such a property are called admissible.

This is because the max weighted error of the TOP tuning is the supremum of weighted error over all intervals, which is always attained at a prime. Since the primes are never altered by re-mapping composite rationals, adding such mappings will not improve the worst weighted error, nor will it make it any worse, given we only use admissible tuning maps.

The same argument holds for the BOP tuning, since we again know the worst weighted error will always be found at a prime. So likewise, adding extra mappings for rationals that have better weighted error will neither increase nor decrease the supremum on the entire temperament.

As in the TOP case, some care is needed when extending this argument to arbitrary subgroup temperaments, as it is possible to use arbitrary mappings for rationals without requiring that the relevant primes even exist in the subgroup, so that there are no primes for them to be "inconsistent" with.

This property is particularly important for infinite-limit generalized patent vals, where it can be shown that regardless of whether ratios are mapped "consistently" via the prime mapping, or "inconsistently" to the nearest edostep, the same BOP tuning is optimal for all rational numbers.

As a simpler example, this guarantees that the BOP tuning for 2.3.5.9 16edo, with the inconsistent mapping of 51 steps on the 9/1, is the same as the 2.3.5 tuning for 16edo.
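
A quick numeric check of this example (our own computation of the 16edo mappings):

  import math

  val = {p: round(16 * math.log2(p)) for p in (2, 3, 5)}
  print(val)                         # {2: 16, 3: 25, 5: 37}, the 16edo patent val
  print(round(16 * math.log2(9)))    # 51: the inconsistent, nearest-edostep 9/1
  print(2 * val[3])                  # 50: the consistent mapping, twice the 3/1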