# Constrained tuning

**Constrained tunings** are tuning optimization techniques subject to the constraint that certain intervals are purely tuned (i.e. they are unit eigenmonzos, or unchanged intervals). **CTE tuning** (**constrained Tenney-Euclidean tuning**) is the most typical instance. It has a more sophisticated variant, **CTWE tuning** (**constrained Tenney-Weil-Euclidean tuning**), a.k.a. **KE tuning** (**Kees-Euclidean tuning**). These two tunings will be the focus of this article. Tunings with other norms can be defined and computed analogously.

All constrained tunings are standard temperament optimization problems. Specifically, as TE tuning can be viewed as a least squares problem, CTE tuning can be viewed as an equality-constrained least squares problem.

The most common subject of constraint is the octave, which is assumed unless specified otherwise. For higher-rank temperaments, it may make sense to add multiple constraints, such as a pure-2.3 (pure-octave pure-fifth) constrained tuning. For a rank-*r* temperament, specifying a rank-*m* constraint list will yield *r* - *m* degrees of freedom to be optimized.

## Definition

Given a temperament mapping *V* and the just tuning map *J*, we specify a weight–skew transformation, represented by transformation matrix *X*, and a *q*-norm. Suppose the tuning is constrained by the eigenmonzo list *M*_{I}. The goal is to find the generator list *G* by

Minimize

[math]\displaystyle \lVert GV_X - J_X \rVert_q [/math]

subject to

[math]\displaystyle (GV - J)M_I = O [/math]

where (·)_{X} denotes the variable in the weight–skew transformed space, found by

[math]\displaystyle \begin{align} V_X &= VX \\ J_X &= JX \end{align} [/math]
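To make the transformation concrete, here is a minimal sketch for 5-limit meantone under Tenney weighting with no skew (the variable names are illustrative, not from any particular library):

```python
import numpy as np

subgroup = np.array ([2, 3, 5])
mapping = np.array ([[1, 0, -4], [0, 1, 4]])  # V: 5-limit meantone
just = 1200*np.log2 (subgroup)                # J, in cents
weight = np.diag (1/np.log2 (subgroup))       # X: Tenney weight matrix

mapping_x = mapping @ weight                  # V_X = VX
just_x = just @ weight                        # J_X = JX
# under pure Tenney weighting, every entry of J_X is exactly 1200 cents
```

Note that under pure Tenney weighting J_X = (1200, 1200, …), since each prime's log size cancels its weight.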

The problem is feasible if

- rank (*M*_{I}) ≤ rank (*V*), and
- the subgroups of *M*_{I} and the nullspace N (*V*) are linearly independent.
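These two conditions can be checked numerically. Here is a sketch (the helper name `is_feasible` is illustrative) using 5-limit meantone, a pure-octave constraint, and, for contrast, an infeasible constraint on 81/80, which meantone tempers out:

```python
import numpy as np
from scipy import linalg

def is_feasible (mapping, cons_monzos):
    """Check the two feasibility conditions; cons_monzos has monzos as columns."""
    if np.linalg.matrix_rank (cons_monzos) > np.linalg.matrix_rank (mapping):
        return False
    kernel = linalg.null_space (mapping)  # orthonormal basis of N (V)
    stacked = np.hstack ([cons_monzos, kernel])
    return np.linalg.matrix_rank (stacked) == np.linalg.matrix_rank (cons_monzos) + kernel.shape[1]

meantone = np.array ([[1, 0, -4], [0, 1, 4]])  # 5-limit meantone mapping
octave = np.array ([[1], [0], [0]])            # pure-octave constraint
syntonic = np.array ([[-4], [4], [-1]])        # 81/80, tempered out by meantone

print (is_feasible (meantone, octave))    # True
print (is_feasible (meantone, syntonic))  # False: 81/80 is in N (V)
```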

## Computation

As a standard optimization problem, it can be solved by numerous algorithms, sequential quadratic programming being one example. Flora Canou's tuning optimizer is one such implementation in Python, built on Scipy.

**Code**

```
# © 2020-2023 Flora Canou | Version 0.27.2
# This work is licensed under the GNU General Public License version 3.
import warnings
import numpy as np
from scipy import optimize, linalg
np.set_printoptions (suppress = True, linewidth = 256, precision = 4)

PRIME_LIST = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61, 67, 71, 73, 79, 83, 89]

class SCALAR:
    CENT = 1200

class Norm:
    """Norm profile for the tuning space."""

    def __init__ (self, wtype = "tenney", wamount = 1, skew = 0, order = 2):
        self.wtype = wtype
        self.wamount = wamount
        self.skew = skew
        self.order = order

    def __get_tuning_weight (self, subgroup):
        match self.wtype:
            case "tenney":
                weight_vec = np.reciprocal (np.log2 (subgroup, dtype = float))
            case "wilson" | "benedetti":
                weight_vec = np.reciprocal (np.array (subgroup, dtype = float))
            case "equilateral":
                weight_vec = np.ones (len (subgroup))
            case _:
                warnings.warn ("weighter type not supported, using default (\"tenney\")")
                self.wtype = "tenney"
                return self.__get_tuning_weight (subgroup)
        return np.diag (weight_vec**self.wamount)

    def __get_tuning_skew (self, subgroup):
        if self.skew == 0:
            return np.eye (len (subgroup))
        elif self.order == 2:
            r = 1/(len (subgroup)*self.skew + 1/self.skew)
            kr = 1/(len (subgroup) + 1/self.skew**2)
        else:
            raise NotImplementedError ("Weil skew only works with Euclidean norm as of now.")
        return np.append (
            np.eye (len (subgroup)) - kr*np.ones ((len (subgroup), len (subgroup))),
            r*np.ones ((len (subgroup), 1)), axis = 1)

    def tuning_x (self, main, subgroup):
        return main @ self.__get_tuning_weight (subgroup) @ self.__get_tuning_skew (subgroup)

def __get_subgroup (main, subgroup):
    main = np.asarray (main)
    if subgroup is None:
        subgroup = PRIME_LIST[:main.shape[1]]
    elif main.shape[1] != len (subgroup):
        warnings.warn ("dimension does not match. Casting to the smaller dimension. ")
        dim = min (main.shape[1], len (subgroup))
        main = main[:, :dim]
        subgroup = subgroup[:dim]
    return main, subgroup

def optimizer_main (breeds, subgroup = None, norm = Norm (),
        cons_monzo_list = None, des_monzo = None, show = True):
    # NOTE: the mapping is called "breeds" since "map" shadows a Python builtin
    # optimization is preferably done in octaves, but cents are used here for precision reasons
    breeds, subgroup = __get_subgroup (breeds, subgroup)
    just_tuning_map = SCALAR.CENT*np.log2 (subgroup)
    breeds_x = norm.tuning_x (breeds, subgroup)
    just_tuning_map_x = norm.tuning_x (just_tuning_map, subgroup)
    if norm.order == 2 and cons_monzo_list is None: # simply using lstsq for better performance
        res = linalg.lstsq (breeds_x.T, just_tuning_map_x)
        gen = res[0]
        print ("Euclidean tuning without constraints, solved using lstsq. ")
    else:
        gen0 = [SCALAR.CENT]*breeds.shape[0] # initial guess
        cons = () if cons_monzo_list is None else {
            'type': 'eq',
            'fun': lambda gen: (gen @ breeds - just_tuning_map) @ cons_monzo_list
        }
        res = optimize.minimize (lambda gen: linalg.norm (gen @ breeds_x - just_tuning_map_x, ord = norm.order), gen0,
            method = "SLSQP", options = {'ftol': 1e-9}, constraints = cons)
        print (res.message)
        if res.success:
            gen = res.x
        else:
            raise ValueError ("infeasible optimization problem. ")
    if des_monzo is not None:
        if np.asarray (des_monzo).ndim > 1 and np.asarray (des_monzo).shape[1] != 1:
            raise IndexError ("only one destretch target is allowed. ")
        elif (tempered_size := gen @ breeds @ des_monzo) == 0:
            raise ZeroDivisionError ("destretch target is in the nullspace. ")
        else:
            gen *= (just_tuning_map @ des_monzo)/tempered_size
    tempered_tuning_map = gen @ breeds
    error_map = tempered_tuning_map - just_tuning_map
    if show:
        print (f"Generators: {gen} (¢)",
            f"Tuning map: {tempered_tuning_map} (¢)",
            f"Error map: {error_map} (¢)", sep = "\n")
    return gen, tempered_tuning_map, error_map

optimiser_main = optimizer_main
```

Constraints can be added via the `cons_monzo_list` parameter of the `optimizer_main` function. For example, to find the CTE tuning for septimal meantone, you type:

```
breeds = np.array ([[1, 0, -4, -13], [0, 1, 4, 10]])
optimizer_main (breeds, cons_monzo_list = np.transpose ([1] + [0]*(breeds.shape[1] - 1)))
```

You should get:

```
Optimization terminated successfully.
Generators: [1200.     1896.9521] (¢)
Tuning map: [1200.     1896.9521 2787.8085 3369.5214] (¢)
Error map: [ 0.     -5.0029  1.4948  0.6955] (¢)
```

Analytical solutions exist for Euclidean (*L*^{2}) tunings; see Constrained tuning/Analytical solution to constrained Euclidean tunings. The problem can also be solved by the method of Lagrange multipliers. The solution is given by

[math]\displaystyle \begin{bmatrix} G^{\mathsf T} \\ \mathit\Lambda^{\mathsf T} \end{bmatrix} = \begin{bmatrix} V_X V_X^{\mathsf T} & VM_I \\ (VM_I)^{\mathsf T} & O \end{bmatrix}^{-1} \begin{bmatrix} V_X J_X^{\mathsf T} \\ (JM_I)^{\mathsf T} \end{bmatrix} [/math]

which is almost an analytical solution. Notice we have introduced the vector of Lagrange multipliers *Λ*, whose length equals the number of constraints. The Lagrange multipliers have no concrete meaning for the resulting tuning, so they can be discarded.
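A minimal sketch of this block-matrix solve for the pure-octave CTE tuning of septimal meantone under Tenney weighting (variable names are illustrative):

```python
import numpy as np

subgroup = np.array ([2, 3, 5, 7])
just = 1200*np.log2 (subgroup)               # J, in cents
weight = np.diag (1/np.log2 (subgroup))      # X: Tenney weight matrix
mapping = np.array ([[1, 0, -4, -13], [0, 1, 4, 10]])  # V: septimal meantone
cons = np.array ([[1], [0], [0], [0]])       # M_I: pure octave

mapping_x = mapping @ weight                 # V_X
just_x = just @ weight                       # J_X
vm = mapping @ cons                          # V M_I

# assemble the bordered normal equations and solve for G and Λ at once
a = np.block ([[mapping_x @ mapping_x.T, vm],
               [vm.T, np.zeros ((1, 1))]])
b = np.concatenate ([mapping_x @ just_x, cons.T @ just])
gen = np.linalg.solve (a, b)[:mapping.shape[0]]  # Λ is discarded
```

The resulting generators agree with the optimizer output shown earlier, about (1200, 1896.9521) ¢.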

## CTE tuning vs CTWE tuning

TE tuning does not treat divisive ratios as more important than multiplicative ratios – 5/3 and 15/1, for example, are taken as equally important. To address this, a skew may be introduced on the space, resulting in TWE tuning. Constraining the equave to be pure on top of TWE gives CTWE, a.k.a. KE tuning.

POTE tuning works as a quick approximation to CTWE. As POTE destretches the equave, it keeps the angle of the tuning map in the tuning space unchanged, and thus sacrifices multiplicative ratios for divisive ratios. By contrast, CTE stays true to the original design of TE, as its result remains TE-optimal.

These tunings can be very different from each other. Take 7-limit meantone as an example. The POTE tuning map is a little bit flatter than quarter-comma meantone, with all the primes tuned flat:

[math]\langle \begin{matrix} 1200.000 & 1896.495 & 2785.980 & 3364.949 \end{matrix} ][/math]

The CTWE tuning map is a little bit sharper than quarter-comma meantone, with 5 tuned sharp and 3 and 7 flat:

[math]\langle \begin{matrix} 1200.000 & 1896.656 & 2786.625 & 3366.562 \end{matrix} ][/math]

The CTE tuning map is even sharper, with 3 tuned flat and 5 and 7 sharp:

[math]\langle \begin{matrix} 1200.000 & 1896.952 & 2787.809 & 3369.521 \end{matrix} ][/math]
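As a numerical cross-check of the POTE map above, POTE can be sketched as an unconstrained TE least-squares solve followed by destretching the equave (a sketch under Tenney weighting, not the canonical implementation):

```python
import numpy as np

subgroup = np.array ([2, 3, 5, 7])
just = 1200*np.log2 (subgroup)
weight = np.diag (1/np.log2 (subgroup))  # Tenney weight matrix
mapping = np.array ([[1, 0, -4, -13], [0, 1, 4, 10]])  # septimal meantone

# TE tuning: ordinary least squares in the weighted space
gen_te, *_ = np.linalg.lstsq ((mapping @ weight).T, just @ weight, rcond = None)
te_map = gen_te @ mapping
# POTE: scale the whole map so that the octave is exactly 1200 cents
pote_map = te_map*1200/te_map[0]
```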

## Special constraint

The special eigenmonzo *X***j**, where **j** is the all-ones monzo, has the effect of removing the weighted–skewed tuning bias. This eigenmonzo is actually proportional to the monzo of the extra dimension introduced by the skew. In other words, it forces the extra dimension to be pure, so the skew has no effect in this constrained tuning.

It can be regarded as a distinct optimum. In the case of Tenney weighting, it is the **TOCTE tuning** (**Tenney ones constrained Tenney-Euclidean tuning**).

### For equal temperaments

Since the one degree of freedom of equal temperaments is determined by the constraint, optimization is not involved. It is thus reduced to **TOC tuning** (**Tenney ones constrained tuning**). This constrained tuning demonstrates interesting properties.

The step size *g* can be found by

[math]\displaystyle g = 1/\operatorname {mean} (V_X)[/math]

The edo number *n* can be found by

[math]\displaystyle n = 1/g = \operatorname {mean} (V_X)[/math]

Unlike TE or TOP, the optimal edo number space in TOC is linear with respect to *V*. That is, if *V* = *αV*_{1} + *βV*_{2}, then

[math]\displaystyle \begin{align} n &= \operatorname {mean} (VX) \\ &= \operatorname {mean} ((\alpha V_1 + \beta V_2)X) \\ &= \operatorname {mean} (\alpha V_1 X) + \operatorname {mean} (\beta V_2 X) \\ &= \alpha n_1 + \beta n_2 \end{align} [/math]

As a result, the relative error space is also linear with respect to *V*.
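The step-size and edo-number formulas, together with the linearity claim, can be verified numerically; a sketch under Tenney weighting (the helper names are illustrative):

```python
import numpy as np

subgroup = np.array ([2, 3, 5])
weight = np.diag (1/np.log2 (subgroup))  # X: Tenney weight matrix

def toc_edo_number (val):
    """Optimal fractional edo number n = mean (V_X)."""
    return np.mean (val @ weight)

def toc_relative_error (val):
    """Per-prime errors as fractions of one step of the optimal tuning."""
    return val - toc_edo_number (val)*np.log2 (subgroup)

e12 = toc_relative_error (np.array ([12, 19, 28]))
e19 = toc_relative_error (np.array ([19, 30, 44]))
e31 = toc_relative_error (np.array ([31, 49, 72]))
# since ⟨31 49 72] = ⟨12 19 28] + ⟨19 30 44], the relative errors add up
```

Multiplying by 100 reproduces the percentages quoted below.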

For example, the relative errors of 12ettoc5 (12et in 5-limit TOC) are

[math]\displaystyle \mathcal{E}_\text {r}(12) = \langle \begin{matrix} -1.55\% & -4.42\% & +10.08\% \end{matrix} ][/math]

Those of 19ettoc5 are

[math]\displaystyle \mathcal{E}_\text {r}(19) = \langle \begin{matrix} +4.08\% & -4.97\% & -2.19\% \end{matrix} ][/math]

As 31 = 12 + 19, the relative errors of 31ettoc5 are

[math]\displaystyle \begin{align} \mathcal{E}_\text {r}(31) &= \mathcal{E}_\text {r}(12) + \mathcal{E}_\text {r}(19) \\ &= \langle \begin{matrix} +2.52\% & -9.38\% & +7.88\% \end{matrix} ] \end{align} [/math]

## Systematic name

In D&D's guide to RTT, the systematic name for the CTE tuning scheme is *held-octave minimax-ES*, and the systematic name for the CTWE tuning scheme is *held-octave minimax-E-lils-S*.