Constrained tuning

[math] \def\hs{\hspace{-3px}} \def\vsp{{}\mkern-5.5mu}{} \def\llangle{\left\langle\vsp\left\langle} \def\lllangle{\left\langle\vsp\left\langle\vsp\left\langle} \def\llllangle{\left\langle\vsp\left\langle\vsp\left\langle\vsp\left\langle} \def\llbrack{\left[\left[} \def\lllbrack{\left[\left[\left[} \def\llllbrack{\left[\left[\left[\left[} \def\llvert{\left\vert\left\vert} \def\lllvert{\left\vert\left\vert\left\vert} \def\llllvert{\left\vert\left\vert\left\vert\left\vert} \def\rrangle{\right\rangle\vsp\right\rangle} \def\rrrangle{\right\rangle\vsp\right\rangle\vsp\right\rangle} \def\rrrrangle{\right\rangle\vsp\right\rangle\vsp\right\rangle\vsp\right\rangle} \def\rrbrack{\right]\right]} \def\rrrbrack{\right]\right]\right]} \def\rrrrbrack{\right]\right]\right]\right]} \def\rrvert{\right\vert\right\vert} \def\rrrvert{\right\vert\right\vert\right\vert} \def\rrrrvert{\right\vert\right\vert\right\vert\right\vert} [/math][math] \def\val#1{\left\langle\begin{matrix}#1\end{matrix}\right]} \def\tval#1{\left\langle\begin{matrix}#1\end{matrix}\right\vert} \def\bival#1{\llangle\begin{matrix}#1\end{matrix}\rrbrack} \def\bitval#1{\llangle\begin{matrix}#1\end{matrix}\rrvert} \def\trival#1{\lllangle\begin{matrix}#1\end{matrix}\rrrbrack} \def\tritval#1{\lllangle\begin{matrix}#1\end{matrix}\rrrvert} \def\quadval#1{\llllangle\begin{matrix}#1\end{matrix}\rrrrbrack} \def\quadtval#1{\llllangle\begin{matrix}#1\end{matrix}\rrrrvert} \def\monzo#1{\left[\begin{matrix}#1\end{matrix}\right\rangle} \def\tmonzo#1{\left\vert\begin{matrix}#1\end{matrix}\right\rangle} \def\bimonzo#1{\llbrack\begin{matrix}#1\end{matrix}\rrangle} \def\bitmonzo#1{\llvert\begin{matrix}#1\end{matrix}\rrangle} \def\trimonzo#1{\lllbrack\begin{matrix}#1\end{matrix}\rrrangle} \def\tritmonzo#1{\lllvert\begin{matrix}#1\end{matrix}\rrrangle} \def\quadmonzo#1{\llllbrack\begin{matrix}#1\end{matrix}\rrrrangle} \def\quadtmonzo#1{\llllvert\begin{matrix}#1\end{matrix}\rrrrangle} \def\rbra#1{\left\{\begin{matrix}#1\end{matrix}\right]} \def\rket#1{\left[\begin{matrix}#1\end{matrix}\right\}} \def\vmp#1#2{\left\langle\begin{matrix}#1\end{matrix}\,\vert\,\begin{matrix}#2\end{matrix}\right\rangle\vsp} \def\wmp#1#2{\llangle\begin{matrix}#1\end{matrix}\,\vert\vert\,\begin{matrix}#2\end{matrix}\rrangle} [/math]

Constrained tunings are tuning optimization techniques that hold some intervals purely tuned (i.e. unit eigenmonzos, or unchanged-intervals) as constraints.

CTE tuning (constrained Tenney-Euclidean tuning) is the most basic version of this idea. It has a more sophisticated variant, CWE tuning (constrained Weil-Euclidean tuning), a.k.a. KE tuning (Kees-Euclidean tuning), created to address some subtle problems with CTE perceived by the tuning-math community. Both are special cases of CTWE tuning (constrained Tenney-Weil-Euclidean tuning).

These tunings will be the focus of this article. Tunings under other norms can be defined and computed analogously.

All constrained tunings are standard temperament optimization problems. Specifically, as TE tuning can be viewed as a least squares problem, CTE tuning can be viewed as an equality-constrained least squares problem.

The most common subject of constraint is the octave, which is assumed unless specified otherwise. For higher-rank temperaments, it may make sense to add multiple constraints, such as a pure-2.3 (pure-octave pure-fifth) constrained tuning. For a rank-r temperament, specifying a rank-m constraint list leaves r − m degrees of freedom to be optimized. For example, rank-2 meantone with the pure-octave constraint (m = 1) has one remaining degree of freedom, the size of its generator.

Definition

Given a temperament mapping V and the just tuning map J, we specify a weight–skew transformation, represented by the transformation matrix X, and a q-norm. Suppose the tuning is constrained by the eigenmonzo list M_I. The goal is to find the generator list G by

Minimize

[math]\displaystyle \left\| GV_X - J_X \right\|_q [/math]

subject to

[math]\displaystyle (GV - J)M_I = O [/math]

where (·)_X denotes the variable in the weight–skew transformed space, found by

[math]\displaystyle \begin{align} V_X &= VX \\ J_X &= JX \end{align} [/math]

The problem is feasible if

  1. rank(M_I) ≤ rank(V), and
  2. the subgroups of M_I and of the nullspace N(V) are linearly independent.
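
For example, take septimal meantone with a pure-octave constraint. With

[math]\displaystyle V = \begin{bmatrix} 1 & 0 & -4 & -13 \\ 0 & 1 & 4 & 10 \end{bmatrix}, \quad M_I = \begin{bmatrix} 1 \\ 0 \\ 0 \\ 0 \end{bmatrix}, \quad G = \begin{bmatrix} g_1 & g_2 \end{bmatrix} [/math]

the constraint [math](GV - J)M_I = O[/math] reduces to [math]g_1 = 1200[/math] cents (a pure octave), leaving only [math]g_2[/math] to be optimized. Both feasibility conditions clearly hold: rank(M_I) = 1 ≤ 2, and the octave is not in the span of the tempered-out commas.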

Computation

As a standard optimization problem, it can be solved by numerous algorithms, such as sequential quadratic programming. Flora Canou's tuning optimizer is one such implementation in Python, built on SciPy.

Code
# © 2020-2023 Flora Canou | Version 0.27.2
# This work is licensed under the GNU General Public License version 3.

import warnings
import numpy as np
from scipy import optimize, linalg
np.set_printoptions (suppress = True, linewidth = 256, precision = 4)

PRIME_LIST = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61, 67, 71, 73, 79, 83, 89]

class SCALAR:
    CENT = 1200

class Norm: 
    """Norm profile for the tuning space."""

    def __init__ (self, wtype = "tenney", wamount = 1, skew = 0, order = 2):
        self.wtype = wtype
        self.wamount = wamount
        self.skew = skew
        self.order = order

    def __get_tuning_weight (self, subgroup):
        match self.wtype:
            case "tenney":
                weight_vec = np.reciprocal (np.log2 (subgroup, dtype = float))
            case "wilson" | "benedetti":
                weight_vec = np.reciprocal (np.array (subgroup, dtype = float))
            case "equilateral":
                weight_vec = np.ones (len (subgroup))
            case _:
                warnings.warn ("weighter type not supported, using default (\"tenney\")")
                self.wtype = "tenney"
                return self.__get_tuning_weight (subgroup)
        return np.diag (weight_vec**self.wamount)

    def __get_tuning_skew (self, subgroup):
        if self.skew == 0:
            return np.eye (len (subgroup))
        elif self.order == 2:
            r = 1/(len (subgroup)*self.skew + 1/self.skew)
            kr = 1/(len (subgroup) + 1/self.skew**2)
        else:
            raise NotImplementedError ("Weil skew only works with Euclidean norm as of now.")
        return np.append (
            np.eye (len (subgroup)) - kr*np.ones ((len (subgroup), len (subgroup))),
            r*np.ones ((len (subgroup), 1)), axis = 1)

    def tuning_x (self, main, subgroup):
        return main @ self.__get_tuning_weight (subgroup) @ self.__get_tuning_skew (subgroup)

def __get_subgroup (main, subgroup):
    main = np.asarray (main)
    if subgroup is None:
        subgroup = PRIME_LIST[:main.shape[1]]
    elif main.shape[1] != len (subgroup):
        warnings.warn ("dimension does not match. Casting to the smaller dimension. ")
        dim = min (main.shape[1], len (subgroup))
        main = main[:, :dim]
        subgroup = subgroup[:dim]
    return main, subgroup

def optimizer_main (breeds, subgroup = None, norm = Norm (), 
        cons_monzo_list = None, des_monzo = None, show = True):
    # NOTE: "map" is a reserved word
    # optimization is preferably done in the unit of octaves, but for precision reasons
    breeds, subgroup = __get_subgroup (breeds, subgroup)

    just_tuning_map = SCALAR.CENT*np.log2 (subgroup)
    breeds_x = norm.tuning_x (breeds, subgroup)
    just_tuning_map_x = norm.tuning_x (just_tuning_map, subgroup)
    if norm.order == 2 and cons_monzo_list is None: #simply using lstsq for better performance
        res = linalg.lstsq (breeds_x.T, just_tuning_map_x)
        gen = res[0]
        print ("Euclidean tuning without constraints, solved using lstsq. ")
    else:
        gen0 = [SCALAR.CENT]*breeds.shape[0] #initial guess
        cons = () if cons_monzo_list is None else {
            'type': 'eq', 
            'fun': lambda gen: (gen @ breeds - just_tuning_map) @ cons_monzo_list
        }
        res = optimize.minimize (lambda gen: linalg.norm (gen @ breeds_x - just_tuning_map_x, ord = norm.order), gen0, 
            method = "SLSQP", options = {'ftol': 1e-9}, constraints = cons)
        print (res.message)
        if res.success:
            gen = res.x
        else:
            raise ValueError ("infeasible optimization problem. ")

    if des_monzo is not None:
        if np.asarray (des_monzo).ndim > 1 and np.asarray (des_monzo).shape[1] != 1:
            raise IndexError ("only one destretch target is allowed. ")
        elif (tempered_size := gen @ breeds @ des_monzo) == 0:
            raise ZeroDivisionError ("destretch target is in the nullspace. ")
        else:
            gen *= (just_tuning_map @ des_monzo)/tempered_size

    tempered_tuning_map = gen @ breeds
    error_map = tempered_tuning_map - just_tuning_map

    if show:
        print (f"Generators: {gen} (¢)",
            f"Tuning map: {tempered_tuning_map} (¢)",
            f"Error map: {error_map} (¢)", sep = "\n")

    return gen, tempered_tuning_map, error_map

optimiser_main = optimizer_main

Constraints can be added via the parameter cons_monzo_list of the optimizer_main function. For example, to find the CTE tuning for septimal meantone, you type:

breeds = np.array ([[1, 0, -4, -13], [0, 1, 4, 10]])
optimizer_main (breeds, cons_monzo_list = np.transpose ([1] + [0]*(breeds.shape[1] - 1)))

You should get:

Optimization terminated successfully.
Generators: [1200.     1896.9521] (¢)
Tuning map: [1200.     1896.9521 2787.8085 3369.5214] (¢)
Error map: [ 0.     -5.0029  1.4948  0.6955] (¢)

Analytical solutions

Analytical solutions exist for Euclidean (L2) tunings; see Constrained tuning/Analytical solution to constrained Euclidean tunings.

Method of Lagrange multipliers

It can also be solved analytically using the method of Lagrange multipliers. The solution is given by:

[math]\displaystyle \begin{bmatrix} G^{\mathsf T} \\ \mathit\Lambda^{\mathsf T} \end{bmatrix} = \begin{bmatrix} V_X V_X^{\mathsf T} & VM_I \\ (VM_I)^{\mathsf T} & O \end{bmatrix}^{-1} \begin{bmatrix} V_X J_X^{\mathsf T}\\ (JM_I)^{\mathsf T} \end{bmatrix} [/math]

Notice we have introduced the vector of Lagrange multipliers Λ, with length equal to the number of constraints. The Lagrange multipliers have no concrete meaning for the resulting tuning, so they can be discarded.
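
For instance, here is a minimal sketch of this formula in code, assuming Tenney weighting with no skew and a pure-octave constraint (the function name is illustrative):

Code
import numpy as np

def cte_lagrange (vals, subgroup):
    """Sketch: solve the block-matrix equation above for the CTE tuning.
    Assumes Tenney weighting, no skew, and a pure-octave constraint."""
    just_tuning_map = 1200*np.log2 (subgroup)
    weight = np.diag (1/np.log2 (subgroup)) # Tenney weighting
    vals_x = vals @ weight
    jtm_x = just_tuning_map @ weight
    eigenmonzos = np.eye (len (subgroup), 1) # the octave as a column monzo
    r, m = vals.shape[0], eigenmonzos.shape[1]
    kkt = np.block ([
        [vals_x @ vals_x.T, vals @ eigenmonzos],
        [(vals @ eigenmonzos).T, np.zeros ((m, m))]
    ])
    rhs = np.concatenate ([vals_x @ jtm_x, just_tuning_map @ eigenmonzos])
    return np.linalg.solve (kkt, rhs)[:r] # generators; multipliers discarded

gen = cte_lagrange (np.array ([[1, 0, -4, -13], [0, 1, 4, 10]]),
    np.array ([2, 3, 5, 7]))
print (gen) # ~[1200. 1896.9521]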

Simple fast closed-form algorithm

A much simpler way to compute the CTE and CWE tunings, and the CTWE tuning in general, is to use the pseudoinverse. This doesn't require doing any intense nonlinear optimization.

The basic idea is that the set of all pure-octave tuning maps of some temperament will be the intersection of a linear subspace and a shifted hyperplane, and thus will be a shifted subspace. This means that any pure-octave tuning map can be expressed as the sum of some arbitrary "reference" pure-octave tuning map for the temperament, plus some other one also in the temperament whose octave-coordinate is 0. The set of all such tuning maps of the latter category form a linear subspace.

We have the same thing with generator maps, meaning that any pure-octave generator map [math]g[/math] can be expressed as:

[math]\displaystyle g = hB + x [/math]

where

  • [math]x[/math] is any particular generator map giving pure octaves
  • [math]B[/math] is a matrix whose rows are a basis for the subspace of generator maps with octave coordinate set to 0
  • [math]h[/math] is a free row vector of coefficients.

Given that, and assuming [math]M[/math] is our mapping matrix, [math]W[/math] our weighting matrix, and [math]j[/math] our JIP, we can solve for the best possible [math]g[/math] in closed form:

[math]\displaystyle gMW \approx jW [/math]

which becomes

[math]\displaystyle \begin{align} \left(hB + x\right)MW &\approx jW \\ h &= \left(j - xM\right)W \cdot \left(BMW\right)^\dagger \end{align} [/math]
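
For a concrete instance, take meantone with the mapping [math]M[/math] used in the code below. The octave column of [math]M[/math] is [math](1, 0)^{\mathsf T}[/math], so one valid choice is

[math]\displaystyle x = \begin{bmatrix} 1200 & 0 \end{bmatrix}, \quad B = \begin{bmatrix} 0 & 1 \end{bmatrix} [/math]

and [math]g = hB + x = (1200, h)[/math]: the octave generator is fixed at 1200 cents, and the single entry of [math]h[/math], the tuning of the second generator, is all that remains to optimize.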

Note that this works for any weighting matrix, so we can use it to compute a tuning under an arbitrary TWE norm very quickly in closed form. Here is some Python code:

Code
import numpy as np
from numpy.linalg import inv, pinv
from scipy.linalg import null_space, sqrtm

def CTWE(limit, M, k):
    """
    Computes the CTWE tuning of a *full-limit* temperament given a limit and
    mapping matrix M. For k=0, this is CTE, for k=1, this is CWE/KE.

    For subgroup temperaments, first compute the tuning of the full-limit
    temperament with same kernel, then multiply by the subgroup basis matrix.    
    """
    # Basics: get the weighting matrix, JIP, etc
    W_monzo = np.vstack([np.diag(np.log2(limit)), np.log2(limit) * k])
    W_monzo_gram = sqrtm(W_monzo.T @ W_monzo)
    W = inv(W_monzo_gram) # we could call this W_val_gram
    j = 1200*np.log2(limit)
    
    # Simple way to get a random generator map with pure octaves: take the pinv
    # of the octave mapping column. The corresponding tuning map has an octave
    # coordinate of 1 cent, so multiply by the first JIP coord (probably 1200).
    octave_col = M[:,0][:,np.newaxis]
    x = pinv(octave_col) * j[0]
    
    # the left nullspace of octave_col has all generators mapping to 0.
    B = null_space(octave_col.T).T
    
    # All pure-octave generator maps are just pure_octave_start + something in
    # the above row space. Now we have to solve
    #   (h@B + x)@M@W ≈ j@W
    # which, solving for h and doing the algebra out gives:
    h = (j - x@M)@W @ pinv(B@M@W)
    g = h@B + x
    t = g@M
    return g, t
    
# %% Compute the CTE and CWE tunings of septimal meantone temperament
limit = np.array([2, 3, 5, 7])
M = np.array( # mapping matrix for meantone
    [[1, 0, -4, -13],
     [0, 1,  4,  10]]
)

G_CTE, T_CTE = CTWE(limit, M, 0)
print("CTE Generator map: " + str(G_CTE))
print("CTE Tuning map: " + str(T_CTE))

G_CWE, T_CWE = CTWE(limit, M, 1)
print("CWE Generator map: " + str(G_CWE))
print("CWE Tuning map: " + str(T_CWE))

Interpolating TE/CTE

We can also interpolate between the TE and CTE tunings, if we want. To do this, we modify the TE tuning so that the weighting of the 2's coefficient is very large. As the weighting goes to infinity, we get the CTE tuning. Thus, we can set it to some sufficiently large number to reach whatever numerical precision we want, and compute the result in closed form using the pseudoinverse. Without comments, docstrings, etc., the calculation is only about five lines of Python code:

Code
import numpy as np

def modified_TE(limit, M, H=1000000):
    """
    Computes the interpolated TE/CTE tuning of a *full-limit* temperament given
    a limit and mapping matrix M. For H = 1 we have TE, and as H -> inf we have
    CTE. To compute CTE to arbitrary precision, make H sufficiently large.
    
    For subgroup temperaments, first compute the tuning of the full-limit
    temperament with same kernel, then multiply by the subgroup basis matrix.
    """
    # Compute adjusted TE weighting matrix and JIP
    W = np.diag(1/np.log2(limit))
    W[0,0] = H
    J = 1200*np.log2(limit)

    # Main calculation: get the generator map g such that g@M@W ≈ J@W. Use pinv
    G = (J@W) @ np.linalg.pinv(M@W)
    T = G @ M

    return G, T

# %% Compute the CTE of septimal meantone temperament
H = 1000000
limit = np.array([2, 3, 5, 7])
M = np.array( # mapping matrix for meantone
    [[1, 0, -4, -13],
     [0, 1,  4,  10]]
)

G, T = modified_TE(limit, M, H)

print("Generator map: " + str(G))
print("Tuning map: " + str(T))

The output of the above is:

Generator map: [ 1200.0000   1896.9521]
Tuning map: [ 1200.0000   1896.9521   2787.8086   3369.5214]

CTE tuning vs POTE tuning vs CWE tuning vs CTWE tuning

Criticism of CTE

People have long noted, since the early days of the tuning list, that the CTE tuning, despite having very nice qualities on paper, can give surprisingly strange results.[citation needed] One good example is blackwood, where the 4:5:6 chord is tuned to 0-386-720 cents, so that the error is not even close to evenly divided between the 5/4, 6/5, and 3/2. The reasons for this are subtle.

This sort of thing was important historically when looking at optimal tunings for meantone, and is ultimately the motivation for advanced tuning methods such as TOP, TE, etc. to begin with. Thus, if our goal is to extend this principle in an elegant way to all intervals (and hopefully, triads and larger chords), it would seem to defeat the purpose to use a tuning optimization that doesn't also have this property.

As a result, the POTE tuning was historically used instead; it tunes that chord to approximately delta-rational 0-400-720 cents. People have also suggested using the Kees-Euclidean or KE tuning, also known as the constrained Weil-Euclidean or CWE tuning. Here is a summary of the math involved and the historical reasoning behind this.

The CTE tuning can be thought of as a modified TE tuning in which the weighting (in monzo space) on the 2/1 coordinate has been changed to 0, making it a kind of seminorm rather than a norm. As a result, all elements in the same octave-equivalence class are weighted identically: they are all given complexity equal to the representative in each equivalence class in which all factors of 2 have been removed. Thus 5/4 is given the same complexity as 5/1, 13/8 as 13/1, and so on.

One criticism that has sometimes been brought up is that the interval 13/1 is nearly four octaves wide, which is considered very large in certain genres of music. In these cases, we may not really care about 13/1 more than 13/8, or 15/1 more than 15/8, and so on. Instead, we often care most about intervals which are within an octave or two at most in span. This can be viewed as a criticism of Tenney-weighting in general, perhaps, but it has often been noted that it makes little difference there, as many of the alternative metrics suggested give similar results. For this reason, and because the Tenney and TE norms are easy to use, and also because not everyone has been convinced by this criticism to begin with, the use of Tenney-weighting has remained the standard.

The phenomenon becomes much more evident when we essentially assign these weightings to entire equivalence classes. In this situation, all of these small, intra-octave intervals essentially have their weightings scrambled relative to one another. Small intervals which used to be similarly-calibrated are now totally different: 16/13 is weighted much more strongly than 6/5; 15/8 and 6/5 are given equal weight; 5/4 is now much more important than 6/5, and so on. This is because some of these intervals happen to be octave-equivalent to prime numbers 3-4 octaves up, and others aren't. As a result, when we build compact triads out of these close-voiced intervals, they tend to be tuned in a very lopsided fashion.

This is one of the reasons why the tuning of 1-5/4-3/2 is so skewed in CTE blackwood. This problem doesn't happen with the TE tuning: the extra degree of freedom in adjusting the octave, and different weights, tend to even this kind of thing out. TE blackwood has 1-5/4-3/2 tuned to approximately 0-398-717 cents, which does seem to evenly split the error between the 5/4 and 6/5. We can see that something good about the way that TE tunes compact triads has not quite translated to CTE. Another way to look at this situation is that with CTE, 5/4 is prioritized more strongly than 6/5, and also 1-3-5 is tuned as nicely as possible, instead of 1-5/4-3/2.

Defense of CTE

Anyone who performs tuning optimization has octave reduction to unlearn. It is tempting to optimize for close-voiced chords such as 1-5/4-3/2 without much consideration, since textbooks often present harmony this way. The close-voiced chord 1-5/4-3/2 is an octave-reduced version of 1-3-5, the latter being the simplest possible voicing in the chord of nature, and nontrivially the simplest such chord containing the fundamental (the 1st harmonic/true root). It is thus important to recognize that all octave reductions are but simplifications for our cognitive processes.

Music making, that is, when we are not abstractly naming chords, is often about various open voicings. The archaic Alberti bass is one of the few examples of close voicing, used as a bassline to accompany other materials. It should be noted that 13/1, dismissed as too wide in the section above, is still within the range of a full choir, not to mention a rock band, concert band, or orchestra. Beethoven's Symphony No. 3 opens with 1-2-5/2-4-5-6-8-10-12-16. Such a chord will be much overtempered, its tuning profile unreasonably squeezed and strained, if we set 1-5/4-3/2 as our target.

CTE blackwood does not try to approximate a delta-rational 1-5/4-3/2, nor even a delta-rational 1-3-5. This is also justifiable: since prime 5 is never involved in the comma that is tempered out, it only makes sense that it is tuned pure; any new prime added to the temperament is automatically tuned pure, as in JI. The dent in prime 3 does not spread to where it does not have to, unlike the schemes introduced below.

Using destretch

The POTE tuning, which simply "de-stretches" the TE tuning and tunes 1-5/4-3/2 to 0-400-720, translates the near-delta-rational property of TE to a pure-octave tuning: the relative sizes of all intervals are preserved. Thus, we retain these nice properties for small intervals, as well as small triads, etc. In fact, if one were to actually measure the tuning error on all triads, tetrads, etc., as well as dyads, we may very well get something closer to the POTE tuning than the CTE tuning. One also notes that the de-stretching that produces the POTE tuning is, to first order, approximately the same as stretching all chords in it "linearly," so that "isoharmonic," "proportional," "delta-rational" chords remain so after the transformation (approximately).
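
For illustration, here is a minimal sketch of POTE as destretched TE, computed via the pseudoinverse (the function name and the blackwood mapping below are assumptions for the example, not part of the article's code):

Code
import numpy as np

def pote (limit, mapping):
    """Sketch: compute POTE by destretching the TE tuning map."""
    weight = np.diag (1/np.log2 (limit)) # Tenney weighting
    just_tuning_map = 1200*np.log2 (limit)
    gen_te = (just_tuning_map @ weight) @ np.linalg.pinv (mapping @ weight)
    tuning_map_te = gen_te @ mapping # TE tuning map
    # destretch: rescale so the octave is pure
    return tuning_map_te*just_tuning_map[0]/tuning_map_te[0]

limit = np.array ([2, 3, 5])
mapping = np.array ([[5, 8, 12], [0, 0, 1]]) # a mapping for blackwood
print (pote (limit, mapping)) # ~[1200. 1920. 2799.59]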

Another way to think of it is that as POTE destretches the equave, it keeps the angle in the tuning space unchanged, and thus can be thought of as sacrificing multiplicative (typically very large) ratios for divisive (typically very small) ratios, whereas CTE sticks to the original design book of TE-optimality without worrying about that.

Historically, there was also an observation that the POTE tuning can be thought of as an approximation to the CWE/KE tuning, which we will talk about below.

Using the Weil norm or Kees expressibility

Another way to solve this problem is to actually go back to the original objection that we perhaps don't care about 13/1 as much as 13/8 - or at least, that we don't care about it that much if we have to assign it to the entire equivalence class. So, we can take this objection seriously and use a different norm to begin with.

The Weil norm, [math]\log_2 \max(n,d)[/math], can be thought of as the average of the Tenney height and the interval's span, and so inherently does this: 5/1 and 5/4 are weighted equally, so that the psychoacoustic importance of the former and the small manageable size of the latter balance out. We can then do a constrained optimization using the Weil norm; if we use the Weil-Euclidean norm, we get the constrained Weil-Euclidean or CWE tuning. The term Kees-Euclidean has also sometimes been used for this (although that term has occasionally been used to refer to a de-stretched Weil-Euclidean tuning instead).
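
Written out, this average is [math]\tfrac{1}{2}\left(\log_2 nd + \left\vert\log_2 (n/d)\right\vert\right)[/math]; for 5/4 versus 5/1 it gives

[math]\displaystyle \log_2 \max(5, 4) = \tfrac{1}{2}\left(\log_2 20 + \log_2 \tfrac{5}{4}\right) = \log_2 5 = \log_2 \max(5, 1)[/math]

so the two intervals indeed receive the same weight.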

The term "Kees" is from Kees van Prooijen, whos Kees expressibility is essentially what you get if you remove factors of 2 from an interval and take its Weil norm. This is essentially the "odd-limit" associated to the interval. So, this view essentially states that, at least when we are in these sort of octave-equivalence-class situations, we ought to use the Weil norm/Kees expressibility rather than Tenney Height.

Historically, a somewhat convoluted line of reasoning led to this same basic idea. The thought was that we could keep Tenney height, but instead choose something like the octave-reduced version of each interval as its representative, such as 5/4, 7/4, 13/8, etc. In essence, we want the version of each interval with minimal span, so we basically want to set the 2's coordinate to whatever makes the span as small as possible. Unfortunately, doing this leads to somewhat messy nonlinear behavior, as the set of representatives no longer forms a lattice, linear subspace, etc.

The "good-enough" solution suggested on the tuning list is to instead just choose an idealized 2's coordinate, which is a real number, which instead makes the span equal to zero, which ought to give similar results. This is equivalent to placing the entire span, or rather its negation, into the 2's coordinate. This happens to be the same thing as just using the Weil-norm, which can be thought of as the L1 norm in an "augmented space" where we add the span as an extra coordinate. Regardless of if we remove factors of 2 and add a coordinate for the span, or put the span in the 2's coordinate, we clearly get the same thing.

CTWE tuning

As mentioned above, if we constrain the equave to be pure, and look for the tuning map that is closest to the JIP using the WE norm, we get the CWE tuning, a.k.a. KE tuning.

It has sometimes been noted that the Weil norm can give less-than-perfect results in other ways - for instance, it weights 13/8, 13/9, 13/10, 13/11, and 13/12 all equally. This doesn't seem to cause quite as much of a problem with the WE or KE tunings, or even the minimax Kees tuning, as it does with the minimax Weil tuning. But, this could sometimes be an issue.

So, one simple solution is to interpolate between the two, giving the Tenney-Weil-Euclidean norm: a weighted average of the TE and WE norms, with a free weighting parameter k. This can be thought of as adjusting how much we care about the span: k = 0 is the TE norm, k = 1 is the WE norm, and in between we have intermediate norms. This also gives a constrained Tenney-Weil-Euclidean or CTWE tuning as a result, which interpolates between CTE and CWE.
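
In monzo space, and up to an overall normalization, this is the norm implemented by the CTWE code above: a monzo [math]m[/math] over a basis of primes [math]q_1, \ldots, q_n[/math] keeps its Tenney-weighted coordinates and gains one extra coordinate holding [math]k[/math] times its span:

[math]\displaystyle \lVert m \rVert_{\text{TWE},k} = \sqrt{\sum_i \left(m_i \log_2 q_i\right)^2 + \left(k \sum_i m_i \log_2 q_i\right)^2} [/math]

At k = 0 this reduces to the TE norm; at k = 1 the extra coordinate carries the full span and we recover the WE norm.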

Comparison

These tunings can be very different from each other.

Meantone

Take 7-limit meantone as an example. The POTE tuning map is a little bit flatter than quarter-comma meantone, with all the primes tuned flat:

[math]\val{1200.000 & 1896.495 & 2785.980 & 3364.949}[/math]

The CWE tuning map is a little bit sharper than quarter-comma meantone, with 5 tuned sharp and 3 and 7 flat:

[math]\val{1200.000 & 1896.656 & 2786.625 & 3366.562}[/math]

The CTE tuning map is even sharper, with 3 tuned flat and 5 and 7 sharp:

[math]\val{1200.000 & 1896.952 & 2787.809 & 3369.521}[/math]

Blackwood

There is a particularly large difference when looking at less-accurate tunings. Perhaps the clearest way to see the difference is to look at tunings where one prime has total freedom as a generator; in those situations, the free prime can be tuned pure. Blackwood is a good example:

Note that the POTE tuning still has prime 5 tuned sharp, even though it could be tuned pure:

[math]\val{1200.000 & 1920.000 & 2799.594}[/math]

The CWE tuning gives similar results, though it tunes prime 5 a few cents flatter:

[math]\val{1200.000 & 1920.000 & 2795.126}[/math]

The CTE tuning, on the other hand, tunes prime 5 pure:

[math]\val{1200.000 & 1920.000 & 2786.314}[/math]

Since prime 5 is not involved in the comma to begin with, it is understandable that it is tuned pure as in 5-limit JI. This, as mentioned above, leads to very lopsided behavior for compact chords like 1-5/4-3/2. Note that the tunings for KE and POTE distribute the error between 5/4 and 6/5 relatively evenly; both are very close to the delta-rational 0-397-720. The CTE tuning, on the other hand, has that chord tuned to 0-386-720, so that all of the error is on the 6/5 at about 18 cents sharp.

Special constraint

The special eigenmonzo Xj, where j is the all-ones monzo, has the effect of removing the weight–skew tuning bias. This eigenmonzo is proportional to the monzo of the extra dimension introduced by the skew. In other words, it forces the extra dimension to be pure, and therefore the skew has no effect in this constrained tuning.

It can be regarded as a distinct optimum. In the case of Tenney weighting, it is the TOCTE tuning (Tenney ones constrained Tenney-Euclidean tuning).

For equal temperaments

Since the one degree of freedom of equal temperaments is determined by the constraint, optimization is not involved. It is thus reduced to TOC tuning (Tenney ones constrained tuning). This constrained tuning demonstrates interesting properties.

The step size g can be found by

[math]\displaystyle g = 1/\operatorname{mean} (V_X)[/math]

The edo number n can be found by

[math]\displaystyle n = 1/g = \operatorname{mean} (V_X)[/math]

Unlike TE or TOP, the optimal edo number space in TOC is linear with respect to V. That is, if V = αV1 + βV2, then

[math]\displaystyle \begin{align} n &= \operatorname {mean} (VX) \\ &= \operatorname {mean} ((\alpha V_1 + \beta V_2)X) \\ &= \operatorname {mean} (\alpha V_1 X) + \operatorname {mean} (\beta V_2 X) \\ &= \alpha n_1 + \beta n_2 \end{align} [/math]

As a result, the relative error space is also linear with respect to V.

For example, the relative errors of 12ettoc5 (12et in 5-limit TOC) are

[math]\displaystyle \mathcal{E}_\text {r}(12) = \val{-1.55\% & -4.42\% & +10.08\% }[/math]

Those of 19ettoc5 are

[math]\displaystyle \mathcal{E}_\text {r}(19) = \val{+4.08\% & -4.97\% & -2.19\% }[/math]

As 31 = 12 + 19, the relative errors of 31ettoc5 are

[math]\displaystyle \begin{align} \mathcal{E}_\text {r}(31) &= \mathcal{E}_\text {r}(12) + \mathcal{E}_\text {r}(19) \\ &= \val{+2.52\% & -9.38\% & +7.88\% } \end{align} [/math]
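
These figures can be reproduced with a few lines of Python (a sketch assuming Tenney weighting and no skew; the function name is illustrative):

Code
import numpy as np

def toc_relative_errors (val, subgroup):
    """Sketch: relative errors (in steps) of an equal temperament in TOC."""
    just_octaves = np.log2 (subgroup) # just tuning map in octaves
    n = np.mean (val/just_octaves) # optimal (fractional) edo number
    return val - n*just_octaves # relative error of each prime

subgroup = np.array ([2, 3, 5])
e12 = toc_relative_errors (np.array ([12, 19, 28]), subgroup)
e19 = toc_relative_errors (np.array ([19, 30, 44]), subgroup)
print (e12*100) # ~[-1.55 -4.42 10.08] (%)
print (e19*100) # ~[4.08 -4.97 -2.19] (%)
print ((e12 + e19)*100) # the relative errors of 31ettoc5, by linearity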

Systematic name

In D&D's guide to RTT, the systematic name for the CTE tuning scheme is held-octave minimax-ES, and the systematic name for the CTWE tuning scheme is held-octave minimax-E-lils-S.

Open problems

  • Why are CWE and POTE tunings so close for many temperaments?