User:Sintel/CTE tuning


The CTE tuning (constrained Tenney-Euclidean tuning) is the TE tuning computed under the constraint that certain intervals (i.e. eigenmonzos) are tuned pure. While the TE tuning can be viewed as a least squares problem, the CTE tuning can be viewed as an equality-constrained least squares problem. For a rank-r temperament, specifying m eigenmonzos leaves r - m degrees of freedom to be optimized.

The most significant form of CTE tuning is pure-octave constrained. For higher-rank temperaments, it may make sense to add multiple constraints, such as the pure-{2, 3} CTE tuning.

Definition

Given a temperament mapping M, the CTE tuning is equivalent to the following optimization problem:

[math] \begin{align} \underset{g}{\text{minimize}} & \quad \| gMW - jW \|^2 \\ \text{subject to} & \quad ( gM - j )V = 0 \\ \end{align} [/math]

where g is the (unknown) generator list, W is the diagonal Tenney-Euclidean weight matrix, j is the JIP, and V is the matrix obtained by stacking the monzos that we want to be pure as its columns. This problem is feasible if rank (V) ≤ rank (M).
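
For concreteness, here is a minimal NumPy sketch of these objects for 7-limit meantone with a pure-octave constraint; the mapping and the constraint monzo are illustrative choices, not part of the definition.

import numpy as np

M = np.array ([[1, 1, 0, -3],
               [0, 1, 4, 10]]) #7-limit meantone, generated by the octave and the fifth
primes = np.array ([2, 3, 5, 7])
j = 1200*np.log2 (primes) #the JIP in cents
W = np.diag (1/np.log2 (primes)) #Tenney-Euclidean weights
V = np.array ([[1], [0], [0], [0]]) #pure-octave constraint: the monzo of 2/1 as a column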

Computation

Since this is a convex problem, it can be solved using the method of Lagrange multipliers. Let's first simplify:

[math] \begin{align} A &= (MW)^{\mathsf T} &b &= (jW)^{\mathsf T} \\ C &= (MV)^{\mathsf T} &d &= (jV)^{\mathsf T} \\ \end{align} [/math]

The problem then becomes:

[math] \begin{align} \underset{g}{\text{minimize}} & \quad \left\| Ag^{\mathsf T} - b \right\|^2 \\ \text{s.t.} & \quad \phantom{\|} Cg^{\mathsf T} - d = 0 \\ \end{align} [/math]
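
Continuing the NumPy sketch from the definition above, these quantities can be formed directly (j is treated as a row vector, so transposing the resulting one-dimensional arrays is a no-op):

A = (M @ W).T
b = (j @ W).T
C = (M @ V).T
d = (j @ V).T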

The solution can be found by solving the associated KKT system:

[math] \begin{bmatrix} g^{\mathsf T} \\ \lambda^{\mathsf T} \end{bmatrix} = \begin{bmatrix} A^{\mathsf T}A & C^{\mathsf T} \\ C & 0 \end{bmatrix}^{-1} \begin{bmatrix} A^{\mathsf T} b\\ d \end{bmatrix} [/math]

where we have introduced the vector of Lagrange multipliers [math]\lambda[/math], whose length equals the number of constraints. The Lagrange multipliers have no concrete meaning for the resulting tuning, so they can be ignored.
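
Continuing the same sketch, the block system can be assembled and solved directly with NumPy; the trailing entries of the solution are the multipliers and are simply dropped.

K = np.block ([
    [A.T @ A, C.T],
    [C, np.zeros ((C.shape[0], C.shape[0]))]
])
rhs = np.concatenate ([A.T @ b, d])
solution = np.linalg.solve (K, rhs)
gen = solution[:M.shape[0]] #generators in cents; the rest are the Lagrange multipliers

For the meantone example above this should give generators of roughly 1200.000 and 696.95 cents, matching the pure-octave CTE tuning quoted later in this article.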

Since this is a standard optimization problem, numerous algorithms exist to solve for this tuning, sequential quadratic programming being one of them.

The following Python code is an excerpt from Flora Canou's tuning optimizer. Note: it depends on NumPy and SciPy.

Code
# © 2020-2021 Flora Canou | Version 0.8
# This work is licensed under the GNU General Public License version 3.

import numpy as np
from scipy import optimize, linalg
np.set_printoptions (suppress = True, linewidth = 256)

PRIME_LIST = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61]
SCALAR = 1200 #values are in cents; they could be in octaves, but cents are used for precision reasons

def weighted (matrix, subgroup, type = "tenney"): #apply the weighting matrix to a map or a JIP
    if type not in {"tenney", "frobenius"}:
        type = "tenney"

    if type == "tenney":
        weighter = np.diag (1/np.log2 (subgroup))
    elif type == "frobenius":
        weighter = np.eye (len (subgroup))
    return matrix @ weighter

def error (gen, map, jip, order = 2): #norm of the weighted mistuning
    return linalg.norm (gen @ map - jip, ord = order)

def optimizer_main (map, subgroup = [], order = 2, weighter = "tenney", cons_monzo_list = np.array ([]), stretch_monzo = np.array ([]), show = True):
    if len (subgroup) == 0:
        subgroup = PRIME_LIST[:map.shape[1]]

    jip = np.log2 (subgroup)*SCALAR
    map_w = weighted (map, subgroup, type = weighter)
    jip_w = weighted (jip, subgroup, type = weighter)
    if order == 2 and not cons_monzo_list.size: #te with no constraints, simply use lstsq for better performance
        res = linalg.lstsq (map_w.T, jip_w)
        gen = res[0]
        print ("L2 tuning without constraints, solved using lstsq. ")
    else:
        gen0 = [SCALAR]*map.shape[0] #initial guess
        #equality constraint: the tuning map must be pure on every monzo in cons_monzo_list
        cons = {'type': 'eq', 'fun': lambda gen: (gen @ map - jip) @ cons_monzo_list} if cons_monzo_list.size else ()
        res = optimize.minimize (error, gen0, args = (map_w, jip_w, order), method = "SLSQP", constraints = cons)
        print (res.message)
        if res.success:
            gen = res.x
        else:
            raise RuntimeError ("optimization failed")

    if stretch_monzo.size:
        gen *= (jip @ stretch_monzo)/(gen @ map @ stretch_monzo)

    if show:
        print (f"Generators: {gen} (¢)", f"Tuning map: {gen @ map} (¢)", sep = "\n")

    return gen

Constraints can be added through the cons_monzo_list parameter of the optimizer_main function, given as a matrix whose columns are the monzos to be held pure.
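
For example, with the code above loaded, the pure-octave CTE tuning of 7-limit meantone (discussed in the next section) can be computed by passing the monzo of 2/1 as a single-column constraint; the mapping matrix used here is the standard octave-fifth form of meantone.

import numpy as np

meantone_map = np.array ([[1, 1, 0, -3], [0, 1, 4, 10]]) #7-limit meantone
pure_octave = np.array ([[1], [0], [0], [0]]) #monzo of 2/1 as a column
optimizer_main (meantone_map, cons_monzo_list = pure_octave)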

Versus POTE tuning

The pure-octave CTE tuning can be very different from the POTE tuning. Take 7-limit meantone as an example. The POTE tuning map is:

⟨1200.000 1896.495 2785.980 3364.949]

This is a little bit flatter than quarter-comma meantone, with all the primes tuned flat.

The pure-octave CTE tuning map is:

⟨1200.000 1896.952 2787.809 3369.521]

This is a little bit sharper than quarter-comma meantone, with prime 3 tuned flat and 5 and 7 sharp.
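
Assuming the code from the Computation section is loaded, both maps should be reproducible with it: stretch_monzo computes the TE tuning and then stretches the whole map so the octave is pure (POTE), while cons_monzo_list holds the octave pure as a hard constraint during the optimization (CTE).

import numpy as np

meantone_map = np.array ([[1, 1, 0, -3], [0, 1, 4, 10]]) #7-limit meantone
octave = np.array ([[1], [0], [0], [0]]) #monzo of 2/1 as a column

#POTE: unconstrained TE optimization, then stretch the map to a pure octave
optimizer_main (meantone_map, stretch_monzo = octave)

#CTE: optimize with the octave held pure as a hard constraint
optimizer_main (meantone_map, cons_monzo_list = octave)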

It can be speculated that POTE tends to result in tunings that are biased in one direction, whereas CTE is less prone to such bias.