# CTE tuning

Constrained tunings are tuning optimizations performed under the constraint that certain intervals (the eigenmonzos) are tuned pure. The CTE tuning (constrained Tenney-Euclidean tuning) is the most typical instance and will be the focus of this article; other normed tunings can be defined and computed analogously.

While the TE tuning can be viewed as a least squares problem, the CTE tuning can be viewed as an equality-constrained least squares problem. For a rank-r temperament, specifying m eigenmonzos will yield r - m degrees of freedom to be optimized.

The most significant form of CTE tuning is pure-octave constrained, which is assumed unless specified otherwise. For higher-rank temperaments, it may make sense to add multiple constraints, such as the pure-{2, 3} CTE tuning.

## Definition

Given a temperament mapping A and the JIP J0, denote the Tenney-weighted temperament mapping by V = AW, and the Tenney-weighted JIP by J = J0W. If the tuning is constrained by the eigenmonzo list B, the CTE tuning is equivalent to the following optimization problem:

Minimize

$\lVert GV - J \rVert$

subject to

$(GA - J_0)B = O$

where G is the generator list, and O the zero matrix.

The problem is feasible if

1. rank (B) ≤ rank (A), and
2. the subspace spanned by B and the nullspace N (A) are linearly independent (i.e. they intersect only at zero).
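As a sketch, both conditions can be checked numerically; the helper name is_feasible is illustrative. The rank of the stacked bases equals the sum of the individual ranks exactly when the two subspaces are linearly independent.

```python
import numpy as np
from scipy import linalg

def is_feasible (mapping, monzos):
    nullspace = linalg.null_space (mapping)  # basis of N (A) as columns
    cond1 = np.linalg.matrix_rank (monzos) <= np.linalg.matrix_rank (mapping)
    # the subspaces are independent iff stacking their bases adds the ranks
    cond2 = (np.linalg.matrix_rank (np.hstack ([monzos, nullspace]))
        == np.linalg.matrix_rank (monzos) + np.linalg.matrix_rank (nullspace))
    return cond1 and cond2

A = np.array ([[1, 0, -4, -13], [0, 1, 4, 10]])  # septimal meantone
B = np.array ([[1], [0], [0], [0]])  # pure octave: feasible
C = np.array ([[-4], [4], [-1], [0]])  # 81/80, tempered out: infeasible
print (is_feasible (A, B), is_feasible (A, C))  # True False
```

Constraining an interval that the temperament tempers out, such as 81/80 in meantone, fails the second condition, since no tuning of the temperament can make it pure.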

## Computation

The tuning can be solved by the method of Lagrange multipliers. The solution is given by

$\begin{bmatrix} G^{\mathsf T} \\ \Lambda^{\mathsf T} \end{bmatrix} = \begin{bmatrix} VV^{\mathsf T} & AB \\ (AB)^{\mathsf T} & O \end{bmatrix}^{-1} \begin{bmatrix} VJ^{\mathsf T}\\ (J_0 B)^{\mathsf T} \end{bmatrix}$

which is almost an analytical solution. Notice we have introduced the vector of Lagrange multipliers Λ, whose length equals the number of constraints. The Lagrange multipliers have no concrete meaning for the resulting tuning, so they can be discarded.
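The block system above can be solved directly with NumPy. Below is a minimal sketch; the function name cte_tuning is illustrative, and the example constrains septimal meantone to a pure octave.

```python
import numpy as np

def cte_tuning (mapping, subgroup, constraint_monzos):
    """Solve min ||GV - J|| s.t. (GA - J0)B = O via the Lagrangian block system."""
    j0 = 1200*np.log2 (subgroup)  # JIP in cents
    weighter = np.diag (1/np.log2 (subgroup))  # Tenney weighter W
    v = mapping @ weighter  # V = AW
    j = j0 @ weighter  # J = J0 W
    ab = mapping @ constraint_monzos  # AB
    r = mapping.shape[0]
    m = constraint_monzos.shape[1]
    # [[VV^T, AB], [(AB)^T, O]] [G^T; Λ^T] = [VJ^T; (J0 B)^T]
    block = np.block ([[v @ v.T, ab], [ab.T, np.zeros ((m, m))]])
    rhs = np.concatenate ([v @ j, j0 @ constraint_monzos])
    return np.linalg.solve (block, rhs)[:r]  # discard the multipliers

A = np.array ([[1, 0, -4, -13], [0, 1, 4, 10]])  # septimal meantone
B = np.array ([[1], [0], [0], [0]])  # pure-octave constraint
print (cte_tuning (A, np.array ([2, 3, 5, 7]), B))  # ≈ [1200. 1896.9521]
```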

Otherwise, as a standard optimization problem, it can be solved by numerous algorithms, such as sequential quadratic programming. Flora Canou's tuning optimizer is one such implementation in Python. Note: it depends on SciPy.

```python
# © 2020-2022 Flora Canou | Version 0.15.1

import numpy as np
from scipy import optimize, linalg
import warnings
np.set_printoptions (suppress = True, linewidth = 256, precision = 4)

PRIME_LIST = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61]
SCALAR = 1200 #could be in octaves, but cents are used for precision reasons

def subgroup_normalize (main, subgroup):
    if subgroup is None:
        subgroup = PRIME_LIST[:main.shape[1]]
    elif main.shape[1] != len (subgroup):
        warnings.warn ("dimension does not match. Casting to the smaller dimension. ")
        dim = min (main.shape[1], len (subgroup))
        main = main[:, :dim]
        subgroup = subgroup[:dim]
    return main, subgroup

def weighted (matrix, subgroup, wtype = "tenney"):
    if not wtype in {"tenney", "frobenius", "partch"}:
        wtype = "tenney"
        warnings.warn ("unknown weighter type, using default (\"tenney\")")

    if wtype == "tenney":
        weighter = np.diag (1/np.log2 (subgroup))
    elif wtype == "frobenius":
        weighter = np.eye (len (subgroup))
    elif wtype == "partch":
        weighter = np.diag (np.log2 (subgroup))
    return matrix @ weighter

def error (gen, map, jip, order = 2):
    return linalg.norm (gen @ map - jip, ord = order)

def optimizer_main (map, subgroup = None, wtype = "tenney", order = 2, cons_monzo_list = None, stretch_monzo = None, show = True):
    map, subgroup = subgroup_normalize (np.array (map), subgroup)

    jip = np.log2 (subgroup)*SCALAR
    map_w = weighted (map, subgroup, wtype = wtype)
    jip_w = weighted (jip, subgroup, wtype = wtype)
    if order == 2 and cons_monzo_list is None: #te with no constraints, simply use lstsq for better performance
        res = linalg.lstsq (map_w.T, jip_w)
        gen = res[0]
        print ("L2 tuning without constraints, solved using lstsq. ")
    else:
        gen0 = [SCALAR]*map.shape[0] #initial guess
        cons = () if cons_monzo_list is None else {'type': 'eq', 'fun': lambda gen: (gen @ map - jip) @ cons_monzo_list}
        res = optimize.minimize (error, gen0, args = (map_w, jip_w, order), method = "SLSQP", constraints = cons)
        print (res.message)
        if res.success:
            gen = res.x
        else:
            raise ValueError ("infeasible optimization problem. ")

    if not stretch_monzo is None:
        if np.array (stretch_monzo).ndim > 1 and np.array (stretch_monzo).shape[1] != 1:
            raise IndexError ("only one stretch target is allowed. ")
        elif (tempered_size := gen @ map @ stretch_monzo) == 0:
            raise ZeroDivisionError ("stretch target is in the nullspace. ")
        else:
            gen *= (jip @ stretch_monzo)/tempered_size

    tuning_map = gen @ map

    if show:
        print (f"Generators: {gen} (¢)", f"Tuning map: {tuning_map} (¢)", sep = "\n")

    return gen, tuning_map

optimiser_main = optimizer_main
```


Constraints can be supplied through the cons_monzo_list parameter of optimizer_main. For example, to find the CTE tuning for septimal meantone, you type:

```python
map = np.array ([[1, 0, -4, -13], [0, 1, 4, 10]])
optimizer_main (map, cons_monzo_list = np.transpose ([1] + [0]*(map.shape[1] - 1)))
```


You should get:

```
Optimization terminated successfully.
Generators: [1200.     1896.9521] (¢)
Tuning map: [1200.     1896.9521 2787.8085 3369.5214] (¢)
```


## Versus POTE tuning

The pure-octave CTE tuning can be very different from the POTE tuning. Take 7-limit meantone as an example. The POTE tuning map is:

$\langle \begin{matrix} 1200.000 & 1896.495 & 2785.980 & 3364.949 \end{matrix} ]$

This is a little bit flatter than quarter-comma meantone, with all the primes tuned flat.

The pure-octave CTE tuning map is:

$\langle \begin{matrix} 1200.000 & 1896.952 & 2787.809 & 3369.521 \end{matrix} ]$

This is a little bit sharper than quarter-comma meantone, with prime 3 tuned flat and 5 and 7 sharp.

It can be speculated that POTE tends to result in biased tunings, whereas CTE tends to avoid this bias.

## Special constraint

The special eigenmonzo Wj removes the weighted tuning bias, where j is the all-ones monzo:

$\displaystyle W \vec j = [ \begin{matrix} 1 & 1/\log_2 (3) & \ldots & 1/\log_2 (p) \end{matrix} \rangle$

This eigenmonzo carries the weight into the constraint, which becomes:

$\displaystyle (GV - J)\vec j = 0$

It can be regarded as a distinct optimum, the TOCTE tuning (Tenney ones constrained Tenney-Euclidean tuning).
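A quick numerical check, using 5-limit meantone and an arbitrary generator list as an illustration, confirms that constraining by the eigenmonzo Wj is the same as zeroing the sum of weighted errors:

```python
import numpy as np

subgroup = np.array ([2, 3, 5])
j0 = 1200*np.log2 (subgroup)  # unweighted JIP in cents
weighter = np.diag (1/np.log2 (subgroup))  # Tenney weighter W
wj = weighter @ np.ones (3)  # the eigenmonzo W j = [1, 1/log2 (3), 1/log2 (5)>
A = np.array ([[1, 0, -4], [0, 1, 4]])  # 5-limit meantone
G = np.array ([1200.0, 1897.0])  # an arbitrary generator list
lhs = (G @ A - j0) @ wj  # unweighted errors against the eigenmonzo W j
rhs = (G @ A @ weighter - j0 @ weighter) @ np.ones (3)  # (GV - J) j
print (np.isclose (lhs, rhs))  # True
```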

### For equal temperaments

Since the single degree of freedom of an equal temperament is fully determined by the constraint, no optimization is involved, and the tuning reduces to the TOC tuning (Tenney ones constrained tuning). This constrained tuning demonstrates some interesting properties.

The step size g can be found by

$\displaystyle g = 1/\operatorname {mean} (V)$

The edo number n can be found by

$\displaystyle n = 1/g = \operatorname {mean} (V)$
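For instance, for 12et in the 5-limit (a sketch; the printed values are approximate):

```python
import numpy as np

subgroup = np.array ([2, 3, 5])
val = np.array ([12, 19, 28])  # patent val of 12et
v = val/np.log2 (subgroup)  # Tenney-weighted val V
n = np.mean (v)  # optimal edo number n = mean (V), ≈ 12.0155
g = 1200/n  # step size in cents, ≈ 99.8707
print (n, g)
```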

Unlike with TE or TOP, the optimal edo number in TOC is linear with respect to A. That is, if A = αA1 + βA2, then

$\displaystyle \begin{align} n &= \operatorname {mean} (AW) \\ &= \operatorname {mean} ((\alpha A_1 + \beta A_2) W) \\ &= \alpha \operatorname {mean} (A_1 W) + \beta \operatorname {mean} (A_2 W) \\ &= \alpha n_1 + \beta n_2 \end{align}$

As a result, the relative error space is also linear with respect to A.

For example, the relative errors of 12ettoc5 (12et in 5-limit TOC) are

$\displaystyle E_\text {r, 12} = \langle \begin{matrix} -1.55\% & -4.42\% & +10.08\% \end{matrix} ]$

Those of 19ettoc5 are

$\displaystyle E_\text {r, 19} = \langle \begin{matrix} +4.08\% & -4.97\% & -2.19\% \end{matrix} ]$

As 31 = 12 + 19, the relative errors of 31ettoc5 are

$\displaystyle \begin{align} E_\text {r, 31} &= E_\text {r, 12} + E_\text {r, 19} \\ &= \langle \begin{matrix} +2.52\% & -9.38\% & +7.88\% \end{matrix} ] \end{align}$
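This additivity can be verified numerically. The sketch below uses the 5-limit patent vals of 12, 19, and 31et; the helper name toc_relative_errors is illustrative.

```python
import numpy as np

def toc_relative_errors (val, subgroup):
    j0 = 1200*np.log2 (subgroup)  # JIP in cents
    g = 1200/np.mean (val/np.log2 (subgroup))  # TOC step size in cents
    return (g*val - j0)/g  # errors as fractions of a step

subgroup = np.array ([2, 3, 5])
e12 = toc_relative_errors (np.array ([12, 19, 28]), subgroup)
e19 = toc_relative_errors (np.array ([19, 30, 44]), subgroup)
e31 = toc_relative_errors (np.array ([31, 49, 72]), subgroup)
print (np.round (100*e12, 2))  # ≈ [-1.55 -4.42 10.08]
print (np.allclose (e31, e12 + e19))  # True
```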