CTE tuning


The CTE tuning (constrained Tenney-Euclidean tuning) is the TE tuning subject to the constraint that certain intervals (the eigenmonzos) are purely tuned. Whereas the TE tuning can be viewed as a least-squares problem, the CTE tuning can be viewed as an equality-constrained least-squares problem. For a rank-r temperament, specifying m eigenmonzos leaves r - m degrees of freedom to be optimized.

The most significant form of CTE tuning is the pure-octave constrained form. For higher-rank temperaments, it may make sense to add multiple constraints, such as in the pure-{2, 3} CTE tuning.

Definition

Given a temperament mapping A and the JIP J0, let W be the Tenney weight matrix, i.e. the diagonal matrix whose entries are 1/log2 (p) for each prime p of the subgroup. Denote the Tenney-weighted temperament mapping by V = AW, and the Tenney-weighted JIP by J = J0W. If the tuning is constrained by the eigenmonzo list B, the CTE tuning is equivalent to the following optimization problem:

Minimize

[math]\lVert GV - J \rVert[/math]

subject to

[math](GA - J_0)B = O[/math]

where G is the generator list, and O the zero matrix.

The problem is feasible if

  1. rank (B) ≤ rank (A), and
  2. the columns of B are linearly independent of N (A), the null space of A.
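As an illustration, these two conditions can be checked numerically. The sketch below assumes 7-limit meantone with the mapping ⟨1 0 -4 -13], ⟨0 1 4 10] and a pure-octave constraint; both the mapping and the constraint are example choices, not part of the definition.

```python
import numpy as np
from scipy.linalg import null_space

A = np.array ([[1, 0, -4, -13],
               [0, 1, 4, 10]])      # example: 7-limit meantone mapping
B = np.array ([[1], [0], [0], [0]]) # example constraint: pure octave

N = null_space (A) # basis of N (A), the comma space
# Condition 1: no more constraints than the temperament has generators
cond1 = np.linalg.matrix_rank (B) <= np.linalg.matrix_rank (A)
# Condition 2: the constrained intervals are independent of the comma space
cond2 = np.linalg.matrix_rank (np.hstack ([B, N])) == B.shape[1] + N.shape[1]
print (cond1 and cond2) # True: the pure-octave constraint is feasible
```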

Computation

The tuning can be found by the method of Lagrange multipliers. The solution is given by

[math] \begin{bmatrix} G^{\mathsf T} \\ \Lambda^{\mathsf T} \end{bmatrix} = \begin{bmatrix} VV^{\mathsf T} & AB \\ (AB)^{\mathsf T} & O \end{bmatrix}^{-1} \begin{bmatrix} VJ^{\mathsf T}\\ (J_0 B)^{\mathsf T} \end{bmatrix} [/math]

which is an analytical solution up to the matrix inverse. Notice that we have introduced the vector of Lagrange multipliers Λ, with length equal to the number of constraints. The Lagrange multipliers have no concrete meaning for the resulting tuning, so they can be discarded.
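As a sketch of how this closed form can be evaluated, the code below assembles and solves the block system with NumPy, taking 7-limit meantone (mapping ⟨1 0 -4 -13], ⟨0 1 4 10]) with a pure-octave constraint as an assumed example:

```python
import numpy as np

subgroup = np.array ([2, 3, 5, 7])
A = np.array ([[1, 0, -4, -13],
               [0, 1, 4, 10]])      # example: 7-limit meantone mapping
j0 = 1200*np.log2 (subgroup)        # JIP in cents
W = np.diag (1/np.log2 (subgroup))  # Tenney weights
V = A @ W
J = j0 @ W
B = np.array ([[1], [0], [0], [0]]) # eigenmonzo list: pure octave

# assemble the block matrix of the Lagrange system and solve it
AB = A @ B
M = np.block ([[V @ V.T, AB], [AB.T, np.zeros ((B.shape[1], B.shape[1]))]])
sol = np.linalg.solve (M, np.concatenate ([V @ J, j0 @ B]))
gen = sol[:A.shape[0]] # generators; trailing Lagrange multipliers discarded
print (gen)      # ≈ [1200, 1896.952] (¢)
print (gen @ A)  # the CTE tuning map
```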

Otherwise, as a standard equality-constrained optimization problem, it can be solved by numerous numerical algorithms, such as sequential quadratic programming.
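For instance, a minimal SQP sketch with SciPy (again assuming 7-limit meantone and a pure-octave constraint as example inputs) could read:

```python
import numpy as np
from scipy import optimize

subgroup = np.array ([2, 3, 5, 7])
A = np.array ([[1, 0, -4, -13],
               [0, 1, 4, 10]])      # example: 7-limit meantone mapping
jip = 1200*np.log2 (subgroup)       # JIP in cents
W = np.diag (1/np.log2 (subgroup))  # Tenney weights
B = np.array ([[1], [0], [0], [0]]) # eigenmonzo list: pure octave

res = optimize.minimize (
    lambda gen: np.linalg.norm (gen @ A @ W - jip @ W),  # TE error
    x0 = [1200.]*A.shape[0],                             # initial guess
    method = "SLSQP",
    constraints = {'type': 'eq', 'fun': lambda gen: (gen @ A - jip) @ B})
print (res.x) # generators of the pure-octave CTE tuning (¢)
```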

The following Python code is an excerpt from Flora Canou's tuning optimizer. Note: it depends on SciPy.

Code
# © 2020-2021 Flora Canou | Version 0.8
# This work is licensed under the GNU General Public License version 3.

import numpy as np
from scipy import optimize, linalg
np.set_printoptions (suppress = True, linewidth = 256)

PRIME_LIST = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61]
SCALAR = 1200 #in cents; could be in octaves, but cents are kept for precision reasons

def weighted (matrix, subgroup, type = "tenney"):
    if type not in {"tenney", "frobenius"}:
        type = "tenney"

    if type == "tenney":
        weighter = np.diag (1/np.log2 (subgroup))
    elif type == "frobenius":
        weighter = np.eye (len (subgroup))
    return matrix @ weighter

def error (gen, map, jip, order = 2):
    return linalg.norm (gen @ map - jip, ord = order)

def optimizer_main (map, subgroup = [], order = 2, weighter = "tenney", cons_monzo_list = np.array ([]), stretch_monzo = np.array ([]), show = True):
    if len (subgroup) == 0:
        subgroup = PRIME_LIST[:map.shape[1]]

    jip = np.log2 (subgroup)*SCALAR
    map_w = weighted (map, subgroup, type = weighter)
    jip_w = weighted (jip, subgroup, type = weighter)
    if order == 2 and not cons_monzo_list.size: #te with no constraints, simply use lstsq for better performance
        res = linalg.lstsq (map_w.T, jip_w)
        gen = res[0]
        print ("L2 tuning without constraints, solved using lstsq. ")
    else:
        gen0 = [SCALAR]*map.shape[0] #initial guess
        cons = {'type': 'eq', 'fun': lambda gen: (gen @ map - jip) @ cons_monzo_list} if cons_monzo_list.size else ()
        res = optimize.minimize (error, gen0, args = (map_w, jip_w, order), method = "SLSQP", constraints = cons)
        print (res.message)
        if res.success:
            gen = res.x
        else:
            raise ValueError ("optimization failed")

    if stretch_monzo.size:
        gen *= (jip @ stretch_monzo)/(gen @ map @ stretch_monzo)

    if show:
        print (f"Generators: {gen} (¢)", f"Tuning map: {gen @ map} (¢)", sep = "\n")

    return gen

Constraints can be added via the cons_monzo_list parameter of the optimizer_main function.

Versus POTE tuning

The pure-octave CTE tuning can be very different from the POTE tuning. Take 7-limit meantone as an example. The POTE tuning map is:

⟨1200.000 1896.495 2785.980 3364.949]

This is a little bit flatter than quarter-comma meantone, with all the primes tuned flat.

The pure-octave CTE tuning map is:

⟨1200.000 1896.952 2787.809 3369.521]

This is a little bit sharper than quarter-comma meantone, with prime 3 tuned flat and primes 5 and 7 tuned sharp.
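Both maps can be reproduced numerically. The sketch below (assuming the mapping ⟨1 0 -4 -13], ⟨0 1 4 10] for 7-limit meantone) obtains POTE by solving the unconstrained TE problem with least squares and stretching the result to a pure octave, and CTE by solving the Lagrange system given above:

```python
import numpy as np

subgroup = np.array ([2, 3, 5, 7])
A = np.array ([[1, 0, -4, -13],
               [0, 1, 4, 10]])      # 7-limit meantone mapping
j0 = 1200*np.log2 (subgroup)        # JIP in cents
W = np.diag (1/np.log2 (subgroup))  # Tenney weights
V = A @ W
J = j0 @ W

# POTE: unconstrained TE by least squares, then stretched to a pure octave
g_te = np.linalg.lstsq (V.T, J, rcond = None)[0]
pote = (g_te @ A)*1200/(g_te @ A)[0]

# CTE: least squares constrained to a pure octave, via the Lagrange system
B = np.array ([[1], [0], [0], [0]])
M = np.block ([[V @ V.T, A @ B], [(A @ B).T, np.zeros ((1, 1))]])
cte = np.linalg.solve (M, np.concatenate ([V @ J, j0 @ B]))[:2] @ A

print (np.round (pote, 3)) # ≈ [1200, 1896.495, 2785.980, 3364.949]
print (np.round (cte, 3))  # ≈ [1200, 1896.952, 2787.809, 3369.521]
```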

It can be speculated that POTE tends to result in tunings where all the primes are biased in the same direction, whereas CTE is less prone to such bias.