# Generator embedding optimization

When optimizing tunings of regular temperaments, it is fairly quick and easy to find approximate solutions, using (for example) the general method which is discussed in D&D's guide to RTT and available in D&D's RTT library in Wolfram Language. This RTT library also includes four other methods which quickly and easily find exact solutions. These four methods are further different from the general method insofar as they are not general; each one works only for certain optimization problems. It is these four specialized exact-solution methods which are the subject of this article.

Two of these four specialized methods were briefly discussed in D&D's guide, along with the general method, because these specialized methods are actually even quicker and easier than the general method. These two are the only held-intervals method and the pseudoinverse method. But there's still plenty more insight to be had into how and why exactly these methods work, in particular for the pseudoinverse method, so we'll be doing a much deeper dive into it here than was done in D&D's guide.

The other two of these four specialized methods — the zero-damage method, and the coinciding-damage method — are significantly more challenging to understand than the general method. Most students of RTT would not gain enough musical insight by familiarizing themselves with them to have justified the investment. This is why these two methods were not discussed in D&D's guide. However, if you feel compelled to understand the nuts and bolts of these methods anyway, then those sections of the article may well appeal to you.

This article is titled "Generator embedding optimization" because of a key feature these four specialized methods share: they can all give their solutions as generator embeddings, i.e. lists of prime-count vectors, one for each generator, where typically these prime-count vectors have non-integer entries (and are thus not JI). This is different from the general method, which can only give generator tuning maps, i.e. sizes in cents for each generator. As we'll see, a tuning optimization method's ability to give solutions as generator embeddings is equivalent to its ability to give solutions that are exact.

# Intro

## A summary of the methods

The three biggest sections of this article are dedicated to three specialized tuning methods, one for each of the three special optimization powers: the pseudoinverse method is used for $p = 2$ (miniRMS tuning schemes), the zero-damage method is used for $p = 1$ (miniaverage tuning schemes), and the coinciding-damage method is used for $p = ∞$ (minimax tuning schemes).

These three methods also work for all-interval tuning schemes, which by definition are all minimax tuning schemes (optimization power $∞$), differing instead by the power of the power norm used for the interval complexity by which they simplicity-weight damage. But it's not the interval complexity norm power $q$ which directly determines the method used, but rather its dual power, $\text{dual}(q)$: the power of the dual norm minimized on the retuning magnitude. So the pseudoinverse method is used for $\text{dual}(q) = 2$, the zero-damage method is used for $\text{dual}(q) = 1$, and the coinciding-damage method is used for $\text{dual}(q) = ∞$.
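Since the relationship between a norm power and its dual comes up repeatedly, here is a minimal sketch in Python (illustrative only, not code from the RTT library) of the dual-power relationship $\frac1q + \frac1{\text{dual}(q)} = 1$:

```python
# Illustrative sketch (not from the RTT library): dual norm powers satisfy
# 1/q + 1/dual(q) = 1, with the conventions dual(1) = inf and dual(inf) = 1.
import math

def dual_power(q):
    """Return the dual of the norm power q."""
    if q == 1:
        return math.inf
    if math.isinf(q):
        return 1
    return q / (q - 1)

assert dual_power(2) == 2            # pseudoinverse method
assert dual_power(math.inf) == 1     # zero-damage method
assert dual_power(1) == math.inf     # coinciding-damage method
```

Note that $2$ is self-dual, which is why the pseudoinverse method serves both ordinary miniRMS schemes and all-interval schemes with $q = 2$ interval complexity.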

If for some reason you've decided that you want to use a different optimization power than those three, then no exact solution in the form of a generator embedding is available, and you'll need to fall back to the general tuning computation method, linked above.

The general method also works for those special powers $1$, $2$, and $∞$, however, so if you're in a hurry, you should skip this article and lean on that method instead (though you should be aware that the general method offers less insight about each of those tuning schemes than their specialized methods do).

## Exact vs approximate solutions

Tuning computation methods can be classified by whether they give an approximate or exact solution.

The general method is an approximate type; it finds the generator tuning map $𝒈$ directly, using trial-and-error methods such as gradient descent or differential evolution whose details we won't go into. The accuracy of approximate types depends on how long you are willing to wait.

In contrast, exact types work by solving for a matrix $G$, the generator embedding.

We can calculate $𝒈$ from this $G$ via $𝒋G$, that is, the generator tuning map is obtained as the product of the just tuning map and the generator embedding.

Because $𝒈 = 𝒋G$, if $𝒈$ is the primary target, not $G$, and a formula for $G$ is known, then it is possible to substitute that into $𝒈 = 𝒋G$ and thereby bypass explicitly solving for $G$. For example, this was essentially what was done in the only held-intervals method and pseudoinverse method sections of [guide: tuning computation].

Note that with any exact type that solves for $G$, since it is possible to have an exact $𝒋$, it is also possible to find an exact $𝒈$. For example, the approximate value of the 5-limit $𝒋$ we're quite familiar with is ⟨1200.000 1901.955 2786.314], but its exact value is ⟨$1200×\log_2(2)$ $1200×\log_2(3)$ $1200×\log_2(5)$], so if the exact tuning of quarter-comma meantone is $G$ = {[1 0 0⟩ [0 0 ¼⟩], then this can be expressed as an exact generator tuning map $𝒈$ = {$(1200×\log_2(2))(1) + (1200×\log_2(3))(0) + (1200×\log_2(5))(0)$ $(1200×\log_2(2))(0) + (1200×\log_2(3))(0) + (1200×\log_2(5))(\frac14)$] = {$1200$ $\dfrac{1200×\log_2(5)}{4}$].
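We can sanity-check that arithmetic with a few lines of Python (an illustrative sketch of the $𝒈 = 𝒋G$ product, not code from the RTT library):

```python
# Sanity-checking the exact quarter-comma meantone computation above
# (an illustrative sketch, not code from the RTT library).
import math

j = [1200 * math.log2(2), 1200 * math.log2(3), 1200 * math.log2(5)]  # just tuning map
G = [[1, 0],
     [0, 0],
     [0, 0.25]]  # columns: [1 0 0⟩ and [0 0 ¼⟩

# 𝒈 = 𝒋G: each generator's size in cents is 𝒋 dotted with a column of G
g = [sum(j[i] * G[i][col] for i in range(3)) for col in range(2)]

assert g[0] == 1200.0                               # exactly a pure octave
assert abs(g[1] - 1200 * math.log2(5) / 4) < 1e-9   # ≈ 696.578 ¢
```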

Also note that any method which solves for $G$ can also produce $𝒈$ via this $𝒋G$ formula. But methods which solve directly for $𝒈$ cannot provide a $G$, even if a $G$ could have been computed for the given type of optimization problem (such as a minimax type, which notably accounts for the majority of tuning optimizations used on the wiki). In a way, tuning maps are like a lossily compressed form of the information in embeddings.

Here's a breakdown of which computation methods solve directly for $𝒈$, and which can solve for $G$ instead:

| optimization power | method | solution type | solves for |
|---|---|---|---|
| $2$ | pseudoinverse | exact | $G$ |
| $1$ | zero-damage | exact | $G$ |
| $∞$ | coinciding-damage | exact | $G$ |
| any power or power limit | general | approximate | $𝒈$ |
| n/a | only held-intervals | exact | $G$ |

## The generator embedding

Roughly speaking, if $M$ is the matrix which isolates the temperament information, and $𝒋$ is the matrix which isolates the sizing information, then $G$ is the matrix that isolates the tuning information. This is a matrix whose columns are prime-count vectors representing the generators of the temperament. For example, a Pythagorean tuning of meantone temperament would look like this:

$G = \left[ \begin{array} {rrr} 1 & {-1} \\ 0 & 1 \\ 0 & 0 \\ \end{array} \right]$

The first column is the vector [1 0 0⟩ representing $\frac21$, and the second column is the vector [-1 1 0⟩ representing $\frac32$. So generator embeddings will always have the shape $(d, r)$: one row for each prime harmonic in the domain basis (the dimensionality), one column for each generator (the rank).

Pythagorean tuning is not a common tuning of meantone, however; it is an extreme enough tuning of that temperament that it should be considered unreasonable. We gave it as our first example anyway in order to introduce the concept of generator embeddings more gently, because its prime-count vector columns are simple and familiar. In reality, most generator embeddings consist of prime-count vectors which do not have integer entries. These prime-count vectors therefore do not represent JI intervals, and are unlike any prime-count vectors we've worked with so far. For another example of a meantone tuning, then, one which is more common and reasonable, let's consider the quarter-comma tuning of meantone. Its generator embedding looks like this:

$G = \left[ \begin{array} {rrr} 1 & 0 \\ 0 & 0 \\ 0 & \frac14 \\ \end{array} \right]$

## Algebraic setup

The basic algebraic setup of tuning optimization looks like this:

$\textbf{d} = |\,𝒈M\mathrm{T}W - 𝒋\mathrm{T}W\,|$

When we break $𝒈$ down into $𝒋$ and a $G$ we're solving for, the algebraic setup of tuning optimization comes out like this:

$\textbf{d} = |\,𝒋GM\mathrm{T}W - 𝒋G_{\text{j}}M_{\text{j}}\mathrm{T}W\,|$

We can factor things in both directions this time (and we'll take $𝒋$ outside the absolute value bars since it's guaranteed to have no negative entries):

$\textbf{d} = 𝒋\,|\,(GM - G_{\text{j}}M_{\text{j}})\mathrm{T}W\,|$

But wait — there are actually two more matrices we haven't recognized yet, on the just side of things. These are $G_{\text{j}}$ and $M_{\text{j}}$. Unsurprisingly, these two are closely related to $G$ and $M$, respectively. The subscript $\text{j}$ stands for "just intonation", so this is intended to indicate that these are the generators and mapping for JI.

We can replace either or both of these matrices with $I$, an identity matrix, because both $G_{\text{j}}$ and $M_{\text{j}}$ are identity matrices. Making that substitution in our expression gives:

$\textbf{d} = 𝒋\,|\,(GM - II)\mathrm{T}W\,|$

Which reduces to:

$\textbf{d} = 𝒋\,|\,(P - I)\mathrm{T}W\,|$

Where $P$ is the projection matrix found as $P = GM$.

So why do we have $G_{\text{j}}$ and $M_{\text{j}}$ there at all? For maximal parallelism between the tempered side and the just side. In part this is a pragmatic decision, because as we work with these sorts of expressions moving forward, we'll prefer something rather than nothing in this position anyway. But there's also a pedagogical goal here, which is to convey how in JI, the mapping matrix and the generator embedding really are identity matrices, and it can be helpful to stay mindful of it.

You can imagine reading a $(3, 3)$-shaped identity matrix like a mapping matrix: how many generators does it take to approximate prime 2? One of the first generator, and nothing else. How many to approximate prime 3? One of the second generator, and nothing else. How many to approximate prime 5? One of the third generator, and nothing else. So this mapping is not much of a mapping at all. It shows us only that in this temperament, the first generator may as well be a perfect approximation of prime 2, the second generator may as well be a perfect approximation of prime 3, and the third generator may as well be a perfect approximation of prime 5. Any temperament which has as many generators as it has primes may as well be JI like this.

And then the fact that the generator embedding on the just side is also an identity matrix finishes the point. The vector for the first generator is [1 0 0⟩, a representation of the interval $\frac21$; the vector for the second generator is [0 1 0⟩, a representation of the interval $\frac31$; and the vector for the third generator is [0 0 1⟩, a representation of the interval $\frac51$.

We can even understand this in terms of a units analysis, where if $M_{\text{j}}$ is taken to have units of g/p, and $G_{\text{j}}$ is taken to have units of p/g, then together we find their units to be ... nothing. And an identity matrix that isn't even understood to have units is definitely useless and to be eliminated. Though it's actually not as simple as the $\small \sf p$'s and $\small \sf g$'s canceling out; for more details, see Dave Keenan & Douglas Blumeyer's guide to RTT: units analysis#The JI mapping times the JI generators embedding.

So when the interval vectors constituting the target-interval list $\mathrm{T}$ are multiplied by $G_{\text{j}}M_{\text{j}}$ they are unchanged, which means that multiplying the result by $𝒋$ simply computes their just sizes.

## Deduplication

### Between target-interval set and held-interval basis

Generally speaking, held-intervals should be removed if they also appear in the target-interval set. If these intervals are not removed, the correct tuning can still be computed; however, during optimization, effort will have been wasted on minimizing damage to these intervals, because their damage would have been held to 0 by other means anyway.

Of course, there is some cost to the deduplication itself, but in general, it should be more computationally efficient to remove these intervals from the target-interval set in advance, rather than submit them to the optimization procedures as-is.

Duplication of intervals between these two sets will most likely occur when using a target-interval set scheme (such as a TILT or OLD) that automatically chooses the target-interval set.

### Constant damage target-intervals

There is also a possibility, when holding intervals, that some target-intervals' damages will be constant everywhere within the tuning damage space to be searched, and thus these target-intervals will have no effect on the tuning. Their preservation in the target-interval set will only serve to slow down computation.

For example, in pajara temperament, with mapping [⟨2 3 5 6] ⟨0 1 -2 -2]}, if the octave is held unchanged, then there is no sense keeping $\frac75$ in the target-interval set. The octave [1 0 0 0⟩ maps to [2 0} in this temperament, and $\frac75$ [0 0 -1 1⟩ maps to [1 0}. So if the first generator is fixed in order to hold the octave unchanged, then ~$\frac75$'s tuning will also be fixed.

### Within target-interval set

We also note a potential for duplication within the target-interval set, irrespective of held-intervals: depending on the temperament, some target-intervals may map to the same tempered interval. For another pajara example, using the TILT as a target-interval set scheme, the target-interval set will contain $\frac{10}{7}$ and $\frac75$, but pajara maps both of those intervals to [1 0}, and thus the damage to these two intervals will always be the same.
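To see this concretely, here is a quick sketch (illustrative Python, not RTT library code) mapping both ratios through the pajara mapping:

```python
# Illustrative sketch (not RTT library code): pajara maps 7/5 and 10/7
# to the same tempered interval.
M = [[2, 3, 5, 6],
     [0, 1, -2, -2]]  # pajara mapping

def map_vector(M, v):
    """Map a prime-count vector to a generator-count vector."""
    return [sum(row[i] * v[i] for i in range(len(v))) for row in M]

seven_fifths = [0, 0, -1, 1]   # 7/5 as a prime-count vector
ten_sevenths = [1, 0, 1, -1]   # 10/7 = (2·5)/7

assert map_vector(M, seven_fifths) == [1, 0]
assert map_vector(M, ten_sevenths) == [1, 0]  # same tempered interval [1 0}
```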

However, critically, this is only truly redundant information in the case of a minimax tuning scheme, where the optimization power $p = ∞$. In this case, if the damage to $\frac75$ is the max, then it's irrelevant whether the damage to $\frac{10}{7}$ is also the max. But in the case of any other optimization power, both the presence of $\frac75$ and of $\frac{10}{7}$ in the target-interval set will have some effect. For example, with $p = 1$ (miniaverage tuning schemes), whatever the identical damage to this one mapped target-interval [1 0} may be, since two different target-intervals map to it, we care about its damage twice as much, and thus it essentially gets counted twice in our average damage computation.

Should redundant mapped target-intervals be removed when computing minimax tuning schemes? It's a reasonable consideration. The RTT Library in Wolfram Language does not do this. In general, this may add more complexity to the code than the benefit is worth; it requires minding the difference between the requested target-interval set count $k$ and the count of deduped mapped target-intervals, which would require a new variable.

# Only held-intervals method

The only held-intervals method was mostly covered here: Dave Keenan & Douglas Blumeyer's guide to RTT: tuning computation#Only held-intervals method. But there are a couple adjustments we'll make to how we talk about it here.

## Unchanged-interval basis

In the D&D's guide article, this method was discussed in terms of held-intervals, which are a trait of a tuning scheme, or in other words, a request that a person makes of a tuning optimization procedure which that procedure will then satisfy. But there's something interesting that happens once we request enough intervals to be held unchanged — that is, when our held-interval count $h$ reaches our generator count, also known as the rank $r$ — then we have no room left for optimization. At this point, the tuning is entirely determined by the held-intervals. And thus we get another, perhaps better, way to look at the interval basis: no longer in terms of a request on a tuning scheme, but as a characteristic of a specific tuning itself. Under this conceptualization, what we have is not a held-interval basis $\mathrm{H}$, but an unchanged-interval basis $\mathrm{U}$.

Because in the majority of cases within this article it will be more appropriate to conceive of this basis as a characteristic of a fully-determined tuning, as opposed to a request of a tuning scheme, we will henceforth be dealing with this method in terms of $\mathrm{U}$, not $\mathrm{H}$.

## Generator embedding

So, substituting $\mathrm{U}$ in for $\mathrm{H}$ in the formula we learned from the D&D's guide article:

$𝒈 = 𝒋\mathrm{U}(M\mathrm{U})^{-1}$

This tells us that if we know the unchanged-interval basis for a tuning, i.e. every unchanged-interval in the form of a prime-count vector, then we can get our generators. But the next difference we want to look at here is this: the formula has bypassed the computation of $G$! We can expand $𝒈$ to $𝒋G$:

$𝒋G = 𝒋\mathrm{U}(M\mathrm{U})^{-1}$

And cancel out:

$\cancel{𝒋}G = \cancel{𝒋}\mathrm{U}(M\mathrm{U})^{-1}$

To find:

$G = \mathrm{U}(M\mathrm{U})^{-1}$
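As a worked instance of this formula, here's a sketch in Python using exact fractions (illustrative only, not RTT library code). It assumes the meantone mapping in the form [⟨1 1 0] ⟨0 1 4]} and quarter-comma meantone's unchanged-interval basis, the octave $\frac21$ and prime $\frac51$, and recovers the quarter-comma generator embedding shown earlier:

```python
# A worked instance of G = U(MU)^{-1} with exact fractions (illustrative,
# not RTT library code). Assumes the meantone mapping [⟨1 1 0] ⟨0 1 4]}
# and quarter-comma meantone's unchanged-interval basis: 2/1 and 5/1.
from fractions import Fraction

def matmul(A, B):
    return [[sum(Fraction(A[i][k]) * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def inv2(A):
    """Exact inverse of a 2-by-2 matrix."""
    (a, b), (c, d) = A
    det = Fraction(a) * d - Fraction(b) * c
    return [[d / det, -b / det], [-c / det, a / det]]

M = [[1, 1, 0],
     [0, 1, 4]]
U = [[1, 0],   # columns: [1 0 0⟩ (the octave) and [0 0 1⟩ (prime 5)
     [0, 0],
     [0, 1]]

G = matmul(U, inv2(matmul(M, U)))

# Quarter-comma meantone's generator embedding, as shown earlier:
assert G == [[1, 0], [0, 0], [0, Fraction(1, 4)]]
```

Because everything here is rational, the result is exact, which is the whole point of solving for $G$ rather than approximating $𝒈$.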

# Pseudoinverse method

Similarly, we can take the pseudoinverse formula as presented in Dave Keenan & Douglas Blumeyer's guide to RTT: tuning computation#Pseudoinverse method, substitute $𝒋G$ for $𝒈$, and cancel out:

\begin{align} 𝒈 &= 𝒋\mathrm{T}W(M\mathrm{T}W)^{+} \\ 𝒋G &= 𝒋\mathrm{T}W(M\mathrm{T}W)^{+} \\ \cancel{𝒋}G &= \cancel{𝒋}\mathrm{T}W(M\mathrm{T}W)^{+} \\ G &= \mathrm{T}W(M\mathrm{T}W)^{+} \\ \end{align}

## Connection with the only held-intervals method

Note the similarity between the pseudoinverse formula $A^{+} = A^\mathsf{T}(AA^\mathsf{T})^{-1}$ and the only held-intervals formula $G = \mathrm{U}(M\mathrm{U})^{-1}$; in fact, it's the same formula, if we simply substitute $M^\mathsf{T}$ for $\mathrm{U}$ (with $A = M$).

What this tells us is that for any tuning of a temperament where $G = M^{+}$, the held-intervals are given by the transpose of the mapping, $M^\mathsf{T}$. (Historically this tuning scheme has been called "Frobenius", but we would call it "minimax-E-copfr-S".)

For example, in the $G = M^{+}$ tuning of meantone temperament, {1202.607 696.741], with mapping $M$ equal to:

$\left[ \begin{array} {r} 1 & 1 & 0 \\ 0 & 1 & 4 \\ \end{array} \right]$

The held-intervals are the columns of $M^\mathsf{T}$:

$\left[ \begin{array} {r} 1 & 0 \\ 1 & 1 \\ 0 & 4 \\ \end{array} \right]$

or in other words, the two held-intervals are [1 1 0⟩ and [0 1 4⟩, which as ratios are $\frac61$ and $\frac{1875}{1}$, respectively. Those may seem like some pretty strange intervals to be unchanged, for sure, but there is a way to think about it that makes it seem less strange. This tells us that whatever the error is on $\frac21$, it is the negation of the error on $\frac31$, because when those intervals are combined, we get a pure $\frac61$. This also tells us that whatever the error is on $\frac31$, it in turn is the negation of the error on $\frac{625}{1} = \frac{5^4}{1}$. Also, remember that these intervals form a basis for the unchanged-intervals; any interval that is a linear combination of them is also unchanged.
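We can verify exactly that these two intervals are unchanged, by checking that the projection $P = M^{+}M$ maps each column of $M^\mathsf{T}$ to itself (an illustrative Python sketch with exact fractions, not RTT library code; the entries of $M^{+}$ are worked out by hand from $M^\mathsf{T}(MM^\mathsf{T})^{-1}$):

```python
# Verifying exactly that the columns of M^T are unchanged: the projection
# P = M^+ M maps each of them to itself (illustrative sketch with exact
# fractions, not RTT library code).
from fractions import Fraction

def matmul(A, B):
    return [[sum(Fraction(A[i][k]) * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

M = [[1, 1, 0],
     [0, 1, 4]]
Mplus = [[Fraction(17, 33), Fraction(-1, 33)],   # M^T(MM^T)^{-1}; MM^T = [[2 1][1 17]]
         [Fraction(16, 33), Fraction(1, 33)],
         [Fraction(-4, 33), Fraction(8, 33)]]

P = matmul(Mplus, M)  # (3, 3)-shaped projection

MT = [[1, 0],
      [1, 1],
      [0, 4]]  # columns: the held-intervals [1 1 0⟩ and [0 1 4⟩

assert matmul(P, MT) == MT  # both intervals are exactly unchanged
```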

As another example, the unchanged-interval of the primes miniRMS-U tuning of 12-ET would be [12 19 28⟩. Don't mistake that for the 12-ET map ⟨12 19 28]; that's the prime-count vector you get from transposing it! That interval, while rational and thus theoretically JI, could not be heard directly by humans, considering that $2^{12}3^{19}5^{28}$ is over 107 octaves above unison and would typically call for scientific notation to express; it's 128553.929 ¢, which is exactly 1289 ($= 12^2+19^2+28^2$) iterations of the 99.732 ¢ generator for this tuning.

## Example

Let's refer back to the example given in Dave Keenan & Douglas Blumeyer's guide to RTT: tuning computation#Plugging back in, picking up from this point:

$\scriptsize 𝒈 = \begin{array} {ccc} 𝒋 \\ \left[ \begin{array} {rrr} 1200.000 & 1901.955 & 2786.314 \\ \end{array} \right] \end{array} \begin{array} {ccc} \mathrm{T}C \\ \left[ \begin{array} {r|r|r|r|r|r|r|r} \;\;1.000 & \;\;\;0.000 & {-2.585} & 7.170 & {-3.322} & 0.000 & {-8.644} & 4.907 \\ 0.000 & 1.585 & 2.585 & {-3.585} & 0.000 & {-3.907} & 0.000 & 4.907 \\ 0.000 & 0.000 & 0.000 & 0.000 & 3.322 & 3.907 & 4.322 & {-4.907} \\ \end{array} \right] \end{array} \begin{array} {ccc} (M\mathrm{T}C)^\mathsf{T} \\ \left[ \begin{array} {rrr} 1.000 & 0.000 \\ \hline 3.170 & {-4.755} \\ \hline 2.585 & {-7.755} \\ \hline 0.000 & 10.755 \\ \hline 6.644 & {-16.610} \\ \hline 3.907 & {-7.814} \\ \hline 4.322 & {-21.610} \\ \hline 0.000 & 9.814 \\ \end{array} \right] \end{array} \begin{array} {ccc} (M\mathrm{T}C(M\mathrm{T}C)^\mathsf{T})^{-1} \\ \left[ \begin{array} {rrr} 0.0336 & 0.00824 \\ 0.00824 & 0.00293 \\ \end{array} \right] \end{array}$

In the original article, we simply multiplied through the entire right half of this expression. But what if we stopped before multiplying in the $𝒋$ part, instead?

$𝒈 = \begin{array} {ccc} 𝒋 \\ \left[ \begin{array} {rrr} 1200.000 & 1901.955 & 2786.314 \\ \end{array} \right] \end{array} \begin{array} {ccc} \mathrm{T}C(M\mathrm{T}C)^\mathsf{T}(M\mathrm{T}C(M\mathrm{T}C)^\mathsf{T})^{-1} \\ \left[ \begin{array} {rrr} 1.003 & 0.599 \\ {-0.016} & 0.007 \\ 0.010 & {-0.204} \\ \end{array} \right] \end{array}$

The matrices with shapes $(3, 8)(8, 2)(2, 2)$ led us to a $(3, \cancel{8})(\cancel{8}, \cancel{2})(\cancel{2}, 2) = (3, 2)$-shaped matrix, and that's just what we want in a $G$ here. Specifically, we want a $(d, r)$-shaped matrix, one that will convert $(r, 1)$-shaped generator-count vectors — those that are results of mapping $(d, 1)$-shaped prime-count vectors by the temperament mapping matrix — back into $(d, 1)$-shaped prime-count vectors, but now representing the intervals as they sound under this tuning of this temperament.

And so we've found what we were looking for, $G = \mathrm{T}C(M\mathrm{T}C)^\mathsf{T}(M\mathrm{T}C(M\mathrm{T}C)^\mathsf{T})^{-1}$.

At first glance, this might seem surprising or crazy, that we find ourselves looking at musical intervals described by raising prime harmonics to powers that are precise fractions. But they do, in fact, work out to reasonable interval sizes. Let's check by actually working these generators out through their decimal powers.

This generator embedding $G$ is telling us that the tuning of our first generator may be represented by the prime-count vector [1.003 -0.016 0.010⟩, or in other words, it's the interval $2^{1.003}3^{-0.016}5^{0.010}$, which is equal to $2.00018$, or 1200.159 ¢. As for the second generator, then, we find that $2^{0.599}3^{0.007}5^{-0.204} = 1.0985$, or 162.664 ¢. By checking the porcupine article we can see that these are both reasonable generator sizes.

What we've just worked out with this sanity check is our generator tuning map, $𝒈$. In general we can find these by left-multiplying the generators $G$ by $𝒋$:

$\begin{array} {ccc} \begin{array} {ccc} 𝒋 \\ \left[ \begin{array} {rrr} 1200.000 & 1901.955 & 2786.314 \\ \end{array} \right] \end{array} \begin{array} {ccc} G \\ \left[ \begin{array} {rrr} 1.003 & 0.599 \\ {-0.016} & 0.007 \\ 0.010 & {-0.204} \\ \end{array} \right] \end{array} = \begin{array} {ccc} \mathbf{g} \\ \left[ \begin{array} {rrr} 1200.159 & 162.664 \\ \end{array} \right] \end{array} \end{array}$
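If you'd like to replicate this multiplication yourself, note that because $G$'s entries are shown rounded to three decimal places, the products only land within a cent or two of the exact values (an illustrative Python sketch, not RTT library code):

```python
# Replicating 𝒈 = 𝒋G from the rounded entries shown above (illustrative
# sketch; because G is rounded to three decimals, the result only lands
# within a couple cents of the exact 1200.159 ¢ and 162.664 ¢).
import math

j = [1200 * math.log2(2), 1200 * math.log2(3), 1200 * math.log2(5)]
G = [[1.003, 0.599],
     [-0.016, 0.007],
     [0.010, -0.204]]

g = [sum(j[i] * G[i][col] for i in range(3)) for col in range(2)]

assert abs(g[0] - 1200.159) < 2  # rounding error of about a cent
assert abs(g[1] - 162.664) < 2
```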

## Pseudoinverse: the "how"

Here we will investigate how, mechanically speaking, the pseudoinverse almost magically takes us straight to that answer we want.

### Like an inverse

As you might suppose — given a name like pseudoinverse — this thing is like a normal matrix inverse, but not exactly. True inverses are only defined for square matrices, so the pseudoinverse is essentially a way to make something similar available for non-square i.e. rectangular matrices. This is useful for RTT because the $M\mathrm{T}W$ matrices we use it on are usually rectangular; they are always $(r, k)$-shaped matrices.

But why would we want to take the inverse of $M\mathrm{T}W$ in the first place? To understand this, it will help to first simplify the problem.

1. Our first simplification will be to use unity-weight damage, meaning that the weight on each of the target-intervals is the same, and may as well be 1. This makes our weight matrix $W$ a matrix of all zeros with 1's running down the main diagonal, or in other words, it makes $W = I$. So we can eliminate it.
2. Our second simplification is to consider the case where the target-interval set $\mathrm{T}$ is the primes. This makes $\mathrm{T}$ also equal to $I$, so we can eliminate it as well.

At this point we're left with simply $M$. And this is still a rectangular matrix; it's $(r, d)$-shaped. So if we want to invert it, we'll only be able to pseudoinvert it. But we're still in the dark about why we would ever want to invert it.

To finally get to understanding why, let's look back to an expression discussed in the Algebraic setup section above:

$GM \approx G_{\text{j}}M_{\text{j}}$

This expression captures the idea that a tuning based on $G$ of a temperament $M$ (the left side of this) is intended to approximate just intonation, where both $G_{\text{j}} = I$ and $M_{\text{j}} = I$ (the right side of this).

So given some mapping $M$, which $G$ makes that happen? Well, based on the above, it should be the inverse of $M$! That's because anything times its own inverse equals an identity, i.e. $M^{-1}M = I$.

### Definition of inverse

Multiplying by something to give an identity is, in fact, the very definition of "inverse". To illustrate, here's an example of a true inverse, in the case of $(2, 2)$-shaped matrices:

$\begin{array} {c} A^{-1} \\ \left[ \begin{array} {rrr} 1 & \frac23 \\ 0 & {-\frac13} \\ \end{array} \right] \end{array} \begin{array} {c} A \\ \left[ \begin{array} {rrr} 1 & 2 \\ 0 & {-3} \\ \end{array} \right] \end{array} \begin{array} {c} \\ = \end{array} \begin{array} {c} I \\ \left[ \begin{array} {rrr} 1 & 0 \\ 0 & 1 \\ \end{array} \right] \end{array}$

So the point is, if we could plug $M^{-1}$ in for $G$ here, we'd match just intonation exactly, i.e. get an identity matrix $I$.

But the problem is, as we know already, that $M^{-1}$ doesn't exist, because $M$ is a rectangular matrix. That's why we use its pseudoinverse $M^{+}$ instead. Or to be absolutely clear, we choose our generator embedding $G$ to be $M^{+}$.

### Sometimes an inverse

Now to be completely accurate, when we multiply a rectangular matrix by its pseudoinverse, we can also get an identity matrix, but only if we do it a certain way. (And this fact that we can get an identity matrix at all is a critical example of how the pseudoinverse provides inverse-like powers for rectangular matrices.) But there are still a few key differences between this situation and the situation of a square matrix and its true inverse:

1. The first big difference is that in the case of square matrices, as we saw a moment ago, all the matrices have the same shape. However, for a non-square (rectangular) matrix with shape $(m, n)$, it will have a pseudoinverse with shape $(n, m)$. This difference perhaps could have gone without saying.
2. The second big difference is that in the case of square matrices, the multiplication order is irrelevant: you can either left-multiply the original matrix by its inverse or right-multiply it, and either way, you'll get the same identity matrix. But there's no way you could get the same identity matrix in the case of a rectangular matrix and its pseudoinverse; an $(m, n)$-shaped matrix times an $(n, m)$-shaped matrix gives an $(m, m)$-shaped matrix, while an $(n, m)$-shaped matrix times an $(m, n)$-shaped matrix gives an $(n, n)$-shaped matrix (the inner height and width always have to match, and the resulting matrix always has shape matching the outer width and height). So: either way we will get a square matrix, but one way we get an $(m, m)$ shape, and the other way we get an $(n, n)$ shape.
3. The third big difference — and this is probably the most important one, but we had to build up to it by looking at the other two big differences first — is that only one of those two possible results of multiplying a rectangular matrix by its pseudoinverse will actually even give an identity matrix! It will be the one of the two that gives the smaller square matrix.

### Example of when the pseudoinverse behaves like a true inverse

Here's an example with meantone temperament as $M$. Its pseudoinverse $M^{+} = M^\mathsf{T}(MM^\mathsf{T})^{-1}$ is {[17 16 -4⟩ [16 17 4⟩]/33. First, we'll look at the multiplication order that gives an identity matrix, when the $(2, 3)$-shaped rectangular matrix right-multiplied by its $(3, 2)$-shaped rectangular pseudoinverse gives a $(2, 2)$-shaped square identity matrix:

$\begin{array} {c} M \\ \left[ \begin{array} {r} 1 & 0 & {-4} \\ 0 & 1 & 4 \\ \end{array} \right] \end{array} \begin{array} {c} M^{+} \\ \left[ \begin{array} {c} \frac{17}{33} & \frac{16}{33} \\ \frac{16}{33} & \frac{17}{33} \\ {-\frac{4}{33}} & \frac{4}{33} \\ \end{array} \right] \end{array} \begin{array} {c} \\ = \end{array} \begin{array} {c} I \\ \left[ \begin{array} {rrr} 1 & 0 \\ 0 & 1 \\ \end{array} \right] \end{array}$
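Here's that identity-giving order checked with exact fractions, along with a peek at the flipped order (an illustrative Python sketch, not RTT library code):

```python
# Checking both multiplication orders with exact fractions (illustrative
# sketch, not RTT library code).
from fractions import Fraction

def matmul(A, B):
    return [[sum(Fraction(A[i][k]) * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

M = [[1, 0, -4],
     [0, 1, 4]]
Mplus = [[Fraction(17, 33), Fraction(16, 33)],
         [Fraction(16, 33), Fraction(17, 33)],
         [Fraction(-4, 33), Fraction(4, 33)]]

assert matmul(M, Mplus) == [[1, 0], [0, 1]]                   # (2, 2): an identity
assert matmul(Mplus, M) != [[1, 0, 0], [0, 1, 0], [0, 0, 1]]  # (3, 3): not an identity
```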

Let's give an RTT way to interpret this first result. Basically it tells us that $M^{+}$ might be a reasonable generator embedding $G$ for this temperament. First of all, let's note that $M$ was not specifically designed to handle non-JI intervals like those represented by the prime-count vector columns of $M^{+}$, like we are making it do here. But we can get away with it anyway. And in this case, $M$ maps the first column of $M^{+}$ to the generator-count vector [1 0}, and its second column to the generator-count vector [0 1}; we can find these two vectors as the columns of the identity matrix $I$.

Now, one fact we can take from this is that the first column of $M^{+}$ — the non-JI vector [$\frac{17}{33}$ $\frac{16}{33}$ $\frac{-4}{33}$⟩ — shares at least one thing in common with JI intervals such as $\frac21$ [1 0 0⟩, $\frac{81}{40}$ [-3 4 -1⟩, and $\frac{160}{81}$ [5 -4 1⟩: they all get mapped to [1 0} by this meantone mapping matrix $M$. Note that this is no guarantee that [$\frac{17}{33}$ $\frac{16}{33}$ $\frac{-4}{33}$⟩ is close to these intervals (in theory, we can add or subtract an indefinite number of temperament commas from an interval without altering what it maps to!), but it at least suggests that it's reasonably close to them, i.e. that it's about an octave in size.

And a similar statement can be made about the second column vector of $M^{+}$, [$\frac{16}{33}$ $\frac{17}{33}$ $\frac{4}{33}$⟩, with respect to $\frac31$ [0 1 0⟩ and $\frac{80}{27}$ [4 -3 1⟩, etc.: they all map to [0 1}, and so [$\frac{16}{33}$ $\frac{17}{33}$ $\frac{4}{33}$⟩ is probably about a perfect twelfth in size like the rest of them.

(In this case, both likelihoods are indeed true: our two tuned generators are 1202.607 ¢ and 696.741 ¢ in size.)

### Example of when the pseudoinverse does not behave like a true inverse

Before we get to why that is, we should finish what we've got going here, and show for contrast what happens when we flip-flop $M$ and $M^{+}$, so that the $(3, 2)$-shaped rectangular pseudoinverse times the original $(2, 3)$-shaped rectangular matrix leads to a $(3, 3)$-shaped matrix which is not an identity matrix:

$\begin{array} {c} M^{+} \\ \left[ \begin{array} {c} \frac{17}{33} & \frac{16}{33} \\ \frac{16}{33} & \frac{17}{33} \\ -{\frac{4}{33}} & \frac{4}{33} \\ \end{array} \right] \end{array} \begin{array} {c} M \\ \left[ \begin{array} {r} 1 & 0 & {-4} \\ 0 & 1 & 4 \\ \end{array} \right] \end{array} \begin{array} {c} \\ = \end{array} \begin{array} {c} M^{+}M \\ \left[ \begin{array} {c} \frac{17}{33} & \frac{16}{33} & {-\frac{4}{33}} \\ \frac{16}{33} & \frac{17}{33} & \frac{4}{33} \\ {-\frac{4}{33}} & \frac{4}{33} & \frac{32}{33} \\ \end{array} \right] \end{array}$

This matrix $M^{+}M$ clearly isn't an identity matrix, since it's not all zeros except for ones running along its main diagonal, and it doesn't look anything like an identity matrix from a superficial perspective, just judging by the numbers we can read off its entries. Yet behavior-wise, this matrix does actually work out to be as "close" to an identity matrix as we can get, at least in a certain sense. And since our goal with tuning this temperament was to approximate JI as closely as possible, from this certain mathematical perspective, this is the matrix that accomplishes that. We'll get to why exactly this matrix is the one that accomplishes that in a little bit.
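One way to make that "certain sense" concrete: $M^{+}M$ is the orthogonal projection onto the row space of $M$, so it is symmetric and idempotent, i.e. applying it twice is the same as applying it once. A sketch with exact fractions (illustrative only, not RTT library code):

```python
# The "certain sense" made concrete: M^+M is the orthogonal projection onto
# the row space of M, so it is symmetric and idempotent (P·P = P).
# Illustrative sketch with exact fractions, not RTT library code.
from fractions import Fraction

def matmul(A, B):
    return [[sum(Fraction(A[i][k]) * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

P = [[Fraction(17, 33), Fraction(16, 33), Fraction(-4, 33)],
     [Fraction(16, 33), Fraction(17, 33), Fraction(4, 33)],
     [Fraction(-4, 33), Fraction(4, 33), Fraction(32, 33)]]  # M^+M from above

assert matmul(P, P) == P                    # idempotent: projecting twice = once
assert P == [list(row) for row in zip(*P)]  # symmetric
```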

### Un-simplifying

First, let's un-simplify things. The insight leading to this choice of $G = M^{+}$ was made under the simplifying circumstances of $W = I$ (unity-weight damage) and $\mathrm{T} = \mathrm{T}_{\text{p}} = I$ (primes as target-intervals). But nothing about those choices of $W$ or $\mathrm{T}$ affects how this method works; setting them to $I$ only helped us humans see the way forward. There's nothing stopping us now from using any other weights and target-intervals for $W$ and $\mathrm{T}$; the concept behind this method still holds. That is, choosing $G = \mathrm{T}W(M\mathrm{T}W)^{+}$ still finds us the $p = 2$ optimum for the problem.

### Demystifying the formula

One way to think about what's happening in the formula of the pseudoinverse uses a technique we might call the "transform-act-antitransform technique": we want to take some action, but we can't do it in the current state, so we transform into a state where we can, then we take the action, and we finish off by performing the opposite of the initial transformation so that we get back to more of a similar state to the one we began with, yet having accomplished the action we intended.

In the case of the pseudoinverse, the action we want to take is inverting a matrix. But we can't exactly invert it, because $A$ is rectangular (to understand why, you can review the inversion process here: matrix inversion by hand). We happen to know that a matrix times its transpose is invertible, though (more on that in a moment), so:

1. Multiplying by the matrix's transpose, finding $AA^\mathsf{T}$, becomes our "transform" step.
2. Then we invert like we wanted to do originally, so that's the "act" step: $(AA^\mathsf{T})^{-1}$.
3. Finally, we might think that we should multiply by the inverse of the matrix's transpose in order to undo our initial transformation step; however, we actually simply repeat the same thing, that is, we multiply by the transpose again! This is because we've put the matrix into an inverted state, so actually multiplying by the original's transpose here is essentially the opposite transformation. So that's the whole formula, then: $A^\mathsf{T}(AA^\mathsf{T})^{-1}$.

Now, as for why we know a matrix times its own transpose is invertible (at least whenever the matrix has full row rank, as our temperament mappings do): there's a ton of little linear algebra facts that all converge to guarantee that this is so. Please consider the following diagram, which lays all these facts out at once.
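To see the three steps in action, here's a small Python sketch (exact `Fraction` arithmetic; the helper functions are our own, not from any RTT library) applying $A^\mathsf{T}(AA^\mathsf{T})^{-1}$ to meantone's $(2, 3)$-shaped mapping:

```python
from fractions import Fraction as F

def matmul(a, b):
    """Multiply two matrices given as lists of rows."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def transpose(a):
    return [list(row) for row in zip(*a)]

def inverse_2x2(a):
    (p, q), (r, s) = a
    det = p * s - q * r
    return [[s / det, -q / det], [-r / det, p / det]]

M = [[F(1), F(0), F(-4)], [F(0), F(1), F(4)]]

# "transform": M times its transpose is square, (2, 2)-shaped, and invertible
mmt = matmul(M, transpose(M))          # [[17, -16], [-16, 17]], det 33
# "act": invert; "antitransform": left-multiply by the transpose again
m_plus = matmul(transpose(M), inverse_2x2(mmt))
print(m_plus)
```

This reproduces exactly the $M^{+}$ with denominator-33 entries seen earlier, and one can further check that $MM^{+}$ comes out as the $(2, 2)$-shaped identity while $M^{+}M$ does not.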

## Pseudoinverse: the "why"

In the previous section we took a look at how, mechanically, the pseudoinverse gives the solution for optimization power $p = 2$. As for why, conceptually speaking, the pseudoinverse gives us the minimum point for the RMS graph in tuning damage space, it's sort of just one of those seemingly miraculously useful mathematical results. But we can try to give a basic explanation here.

### Derivative, for slope

First, let's briefly go over some math facts. For some readers, these will be review:

• The slope of a graph means its rate of change. When slope is positive, the graph is going up, and when negative, it's going down.
• Wherever a graph has a local minimum or maximum, the slope is 0. That's because that's the point where it changes direction, between going up or down.
• We can find the slope at every point of a graph by taking its derivative.

So, considering that we want to find the minimum of a graph, one approach is to find the derivative of this graph, then find the point(s) where its value is 0, which is where the slope is 0. Those are the possible locations of local minima, and therefore the candidates for the global minimum, which is what we're after.

### A unique minimum

As discussed in the tuning fundamentals article (in the section Non-unique tunings - power continuum), the graphs of mean damage and max damage — which are equivalent to the power means with powers $p = 1$ and $p = ∞$, respectively — consist of straight line segments connected by sharp corners, while all other optimization powers between $1$ and $∞$ form smooth curves. This is important because it is only for graphs with smooth curves that we can use the derivative to find the minimum point; at the sharp corners of the other type of graph, the function is still continuous, but it has no well-defined slope, so the derivative is undefined there. The simple mathematical methods we use to find slope for smooth graphs get confused and crash or give wrong results if we try to use them on these types of graphs.

So we can use the derivative slope technique for other powers $1 \lt p \lt ∞$, but the pseudoinverse will only match the solution when $p = 2$.

And, spoiler alert: another key thing that's true about the $2$-mean graph whose minimum point we seek: it has only one point where the slope is equal to 0, and it's our global minimum. Again, this is true of any of our curved $p$-mean graphs, but we only really care about it in the case of $p = 2$.

### A toy example using the derivative

To get our feet on solid ground, let's just work through the math for an equal temperament example, i.e. one with only a single generator.

Kicking off with the setup discussed here, we have:

$\textbf{d} = |\,𝒋(GM - G_{\text{j}}M_{\text{j}})\mathrm{T}W\,|$

Let's rewrite this a tad, using the fact that $𝒋G$ is our generator tuning map $𝒈$ and $𝒋G_{\text{j}}M_{\text{j}}$ is equivalent to simply $𝒋$:

$\textbf{d} = |\,(𝒈M - 𝒋)\mathrm{T}W\,|$

Let's say our rank-1 temperament is 12-ET, so our mapping $M$ is ⟨12 19 28]. And our target-interval set is the otonal triad, so $\{ \frac54, \frac65, \frac32 \}$. And let's say we're complexity-weighting, so $𝒄 = \left[ \begin{array}{rrr} 4.322 & 4.907 & 2.585 \end{array} \right]$, and our weight matrix is $C$, the diagonalized version of $𝒄$ (playing the role of $W$). As for $𝒈$: a generator tuning map is $(1, r)$-shaped, so for this rank-1 temperament it's actually a $(1, 1)$-shaped matrix, and since we don't know its value yet, its single entry is the variable $g_1$. This can be understood to represent the size of our ET generator in cents.

$\textbf{d} = \Huge | \normalsize \begin{array} {ccc} 𝒈 \\ \left[ \begin{array} {rrr} g_1 \\ \end{array} \right] \end{array} \begin{array} {ccc} M \\ \left[ \begin{array} {rrr} 12 & 19 & 28 \\ \end{array} \right] \end{array} - \begin{array} {ccc} 𝒋 \\ \left[ \begin{array} {rrr} 1200.000 & 1901.955 & 2786.314 \\ \end{array} \right] \end{array} \begin{array} {ccc} \mathrm{T} \\ \left[ \begin{array} {r|r|r} {-2} & 1 & {-1} \\ 0 & 1 & 1 \\ 1 & {-1} & 0 \\ \end{array} \right] \end{array} \begin{array} {ccc} C \\ \left[ \begin{array} {rrr} 4.322 & 0 & 0 \\ 0 & 4.907 & 0 \\ 0 & 0 & 2.585 \\ \end{array} \right] \end{array} \Huge | \normalsize$

Here's what that looks like graphed:

As alluded to earlier, for rank-1 cases it's pretty easy to read the value straight off the chart: clearly we're expecting a generator size that's just a smidge bigger than 100 ¢. The point here, though, is to understand the computation process.

So, let's simplify:

$\textbf{d} = \Huge | \normalsize \begin{array} {ccc} 𝒈M = 𝒕 \\ \left[ \begin{array} {rrr} 12g_1 & 19g_1 & 28g_1 \\ \end{array} \right] \end{array} - \begin{array} {ccc} 𝒋 \\ \left[ \begin{array} {rrr} 1200.000 & 1901.955 & 2786.314 \\ \end{array} \right] \end{array} \begin{array} {ccc} \mathrm{T}C \\ \left[ \begin{array} {r|r|r} {-8.644} & 4.907 & {-2.585} \\ 0 & 4.907 & 2.585 \\ 4.322 & {-4.907} & 0 \\ \end{array} \right] \end{array} \Huge | \normalsize$

Another pass:

$\textbf{d} = \Huge | \normalsize \begin{array} {ccc} 𝒕 - 𝒋 \\ \left[ \begin{array} {rrr} 12g_1 - 1200 & 19g_1 - 1901.955 & 28g_1 - 2786.314 \\ \end{array} \right] \end{array} \begin{array} {ccc} \mathrm{T}C \\ \left[ \begin{array} {r|r|r} {-8.644} & 4.907 & {-2.585} \\ 0 & 4.907 & 2.585 \\ 4.322 & {-4.907} & 0 \\ \end{array} \right] \end{array} \Huge | \normalsize$

And once more:

$\textbf{d} = \Huge | \normalsize \begin{array} {ccc} (𝒕 - 𝒋)\mathrm{T}C = 𝒓\mathrm{T}C = \textbf{e}C \\ \left[ \begin{array} {rrr} 17.288g_1 - 1669.605 & 14.721g_1 - 1548.835 & 18.095g_1 - 1814.526 \\ \end{array} \right] \end{array} \Huge | \normalsize$

And remember these bars are actually entry-wise absolute values, so we can put them on each entry. Though it won't matter much in a minute, since squaring makes every value positive anyway.

$\textbf{d} = \begin{array} {ccc} |\textbf{e}|C \\ \left[ \begin{array} {rrr} |17.288g_1 - 1669.605| & |14.721g_1 - 1548.835| & |18.095g_1 - 1814.526| \\ \end{array} \right] \end{array}$

$% \slant{} command approximates italics to allow slanted bold characters, including digits, in MathJax. \def\slant#1{\style{display:inline-block;margin:-.05em;transform:skew(-14deg)translateX(.03em)}{#1}} % Latex equivalents of the wiki templates llzigzag and rrzigzag for double zigzag brackets. \def\llzigzag{\hspace{-1.6mu}\style{display:inline-block;transform:scale(.62,1.24)translateY(.07em);font-family:sans-serif}{ꗨ\hspace{-3mu}ꗨ}\hspace{-1.6mu}} \def\rrzigzag{\hspace{-1.6mu}\style{display:inline-block;transform:scale(-.62,1.24)translateY(.07em);font-family:sans-serif}{ꗨ\hspace{-3mu}ꗨ}\hspace{-1.6mu}}$ Because what we're going to do now is change this to the formula for the SOS of damage, that is, $\llzigzag \textbf{d} \rrzigzag _2$:

$\llzigzag \textbf{d} \rrzigzag _2 = |17.288g_1 - 1669.605|^2 + |14.721g_1 - 1548.835|^2 + |18.095g_1 - 1814.526|^2$

So we can get rid of those absolute value signs:

$\llzigzag \textbf{d} \rrzigzag _2 = (17.288g_1 - 1669.605)^2 + (14.721g_1 - 1548.835)^2 + (18.095g_1 - 1814.526)^2$

Then we're just going to work these out:

$\llzigzag \textbf{d} \rrzigzag _2 = \small (17.288g_1 - 1669.605)(17.288g_1 - 1669.605) + (14.721g_1 - 1548.835)(14.721g_1 - 1548.835) + (18.095g_1 - 1814.526)(18.095g_1 - 1814.526)$

Distribute:

$\llzigzag \textbf{d} \rrzigzag _2 = \small (298.875g_1^2 - 57728.262g_1 + 2787580.856) + (216.708g_1^2 - 45600.800g_1 + 2398889.857) + (327.429g_1^2 - 65667.696g_1 + 3292504.605)$

Combine like terms:

$\llzigzag \textbf{d} \rrzigzag _2 = 843.012g_1^2 - 168996.758g_1 + 8478975.318$

At this point, we take the derivative. Basically, each term's exponent decreases by 1, and the old exponent multiplies into its coefficient; we won't be doing a full review of differentiation here, but good tutorials on it should be easy to find online.

$\dfrac{\partial}{\partial{g_1}} \llzigzag \textbf{d} \rrzigzag _2 = 2×843.012g_1 - 168996.758$

This is the formula for the slope of the graph, and we want to know where it's equal to zero.

$0 = 2×843.012g_1 - 168996.758$

So we can now solve for $g_1$:

\begin {align} 0 &= 1686.024g_1 - 168996.758 \\[4pt] 168996.758 &= 1686.024g_1 \\[6pt] \dfrac{168996.758}{1686.024} &= g_1 \\[6pt] 100.234 &= g_1 \\ \end {align}

Ta-da! There's our generator size: 100.234 ¢.
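The whole toy computation can be sketched in a few lines of Python (variable names are our own): build the coefficients of each weighted error term, then solve where the derivative of the sum of squares is zero. Carrying full precision rather than the hand-rounded intermediate constants, this lands at 100.236 ¢, agreeing with the hand answer up to accumulated rounding:

```python
M = [12, 19, 28]
j = [1200.0, 1901.955, 2786.314]
TC = [[-8.644, 4.907, -2.585],
      [0.0,    4.907,  2.585],
      [4.322, -4.907,  0.0]]

# each entry of the weighted error vector has the form (a_k·g1 - b_k),
# where a = M(TC) and b = j(TC)
a = [sum(M[i] * TC[i][k] for i in range(3)) for k in range(3)]
b = [sum(j[i] * TC[i][k] for i in range(3)) for k in range(3)]

# sum of squares = (Σ a_k²)g1² - 2(Σ a_k·b_k)g1 + Σ b_k²,
# whose derivative is zero at g1 = (Σ a_k·b_k) / (Σ a_k²)
g1 = sum(ak * bk for ak, bk in zip(a, b)) / sum(ak * ak for ak in a)
print(round(g1, 3))  # 100.236
```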

### Verifying the toy example with the pseudoinverse

Okay... but what the heck does this have to do with a pseudoinverse? Well, for a sanity check, let's double-check against our pseudoinverse method.

$G = \mathrm{T}C(M\mathrm{T}C)^{+} = \mathrm{T}C(M\mathrm{T}C)^\mathsf{T}(M\mathrm{T}C(M\mathrm{T}C)^\mathsf{T})^{-1}$

We already know $\mathrm{T}C$ from an earlier step above. And so $M\mathrm{T}C$ is:

$\begin{array} {ccc} M \\ \left[ \begin{array} {rrr} 12 & 19 & 28 \\ \end{array} \right] \end{array} \begin{array} {ccc} \mathrm{T}C \\ \left[ \begin{array} {r|r|r} {-8.644} & 4.907 & {-2.585} \\ 0 & 4.907 & 2.585 \\ 4.322 & {-4.907} & 0 \\ \end{array} \right] \end{array} = \begin{array} {ccc} M\mathrm{T}C \\ \left[ \begin{array} {r|r|r} 17.288 & 14.721 & 18.095 \\ \end{array} \right] \end{array}$

So plugging these in we get:

$G = \begin{array} {ccc} \mathrm{T}C \\ \left[ \begin{array} {r|r|r} {-8.644} & 4.907 & {-2.585} \\ 0 & 4.907 & 2.585 \\ 4.322 & {-4.907} & 0 \\ \end{array} \right] \end{array} \begin{array} {ccc} (M\mathrm{T}C)^\mathsf{T} \\ \left[ \begin{array} {rrr} 17.288 \\ \hline 14.721 \\ \hline 18.095 \\ \end{array} \right] \end{array} ( \begin{array} {ccc} M\mathrm{T}C \\ \left[ \begin{array} {r|r|r} 17.288 & 14.721 & 18.095 \\ \end{array} \right] \end{array} \begin{array} {ccc} (M\mathrm{T}C)^\mathsf{T} \\ \left[ \begin{array} {rrr} 17.288 \\ \hline 14.721 \\ \hline 18.095 \\ \end{array} \right] \end{array} )^{-1}$

Which works out to:

$G = \begin{array} {ccc} \mathrm{T}C(M\mathrm{T}C)^\mathsf{T} \\ \left[ \begin{array} {rrr} {-123.974} \\ 119.007 \\ 2.484 \\ \end{array} \right] \end{array} ( \begin{array} {ccc} M\mathrm{T}C(M\mathrm{T}C)^\mathsf{T} \\ \left[ \begin{array} {rrr} 842.983 \end{array} \right] \end{array} )^{-1}$

Then we take the inverse (interestingly, since this is a $(1, 1)$-shaped matrix, this is equivalent to the reciprocal; that is, we're just finding $\frac{1}{842.983} = 0.00119$):

$G = \begin{array} {ccc} \mathrm{T}C(M\mathrm{T}C)^\mathsf{T} \\ \left[ \begin{array} {rrr} {-123.974} \\ 119.007 \\ 2.484 \\ \end{array} \right] \end{array} \begin{array} {ccc} (M\mathrm{T}C(M\mathrm{T}C)^\mathsf{T})^{-1} \\ \left[ \begin{array} {rrr} 0.00119 \end{array} \right] \end{array}$

And finally multiply:

$G = \begin{array} {ccc} \mathrm{T}C(M\mathrm{T}C)^\mathsf{T}(M\mathrm{T}C(M\mathrm{T}C)^\mathsf{T})^{-1} \\ \left[ \begin{array} {rrr} {-0.147066} \\ 0.141174 \\ 0.002946 \\ \end{array} \right] \end{array}$

To compare with our 100.234 ¢ value, we'll have to convert this $G$ to a $𝒈$, but that's easy enough. As we demonstrated earlier, simply multiply by $𝒋$:

$𝒈 = \begin{array} {ccc} 𝒋 \\ \left[ \begin{array} {rrr} 1200.000 & 1901.955 & 2786.314 \\ \end{array} \right] \end{array} \begin{array} {ccc} G \\ \left[ \begin{array} {rrr} {-0.147066} \\ 0.141174 \\ 0.002946 \\ \end{array} \right] \end{array}$

When we work through that, we get 100.236 ¢. Close enough (shrugging off rounding errors). So we've sanity-checked at least.
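The same sanity check can be scripted in Python (our own variable names, not library calls): for a rank-1 temperament the pseudoinverse formula $\mathrm{T}C(M\mathrm{T}C)^\mathsf{T}(M\mathrm{T}C(M\mathrm{T}C)^\mathsf{T})^{-1}$ reduces to a dot product divided by a scalar, and it agrees with the derivative method to within rounding error:

```python
M = [12, 19, 28]
j = [1200.0, 1901.955, 2786.314]
TC = [[-8.644, 4.907, -2.585],
      [0.0,    4.907,  2.585],
      [4.322, -4.907,  0.0]]

# MTC, a (1, 3)-shaped matrix
mtc = [sum(M[i] * TC[i][k] for i in range(3)) for k in range(3)]

# TC(MTC)^T, a (3, 1)-shaped matrix, and the (1, 1)-shaped MTC(MTC)^T
tc_mtc_t = [sum(TC[i][k] * mtc[k] for k in range(3)) for i in range(3)]
scalar = sum(x * x for x in mtc)

# G = TC(MTC)^T times the reciprocal of that scalar; then 𝒈 = 𝒋G
G = [x / scalar for x in tc_mtc_t]
g1 = sum(j[i] * G[i] for i in range(3))
print(round(g1, 3))  # 100.236
```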

But if we really want to see the connection between the pseudoinverse and finding the zero of the derivative — how they both find the point where the slope of the RMS graph is zero and therefore where it is at its minimum — we're going to have to upgrade from an equal temperament (rank-1 temperament) to a rank-2 temperament. In other words, we need to address tunings with more than one generator, which can't be represented by a simple scalar anymore, but instead need to be represented with a vector.

### A demonstration using matrix calculus

Technically speaking, even with two generators, meaning two variables, we could take the derivative with respect to one, and then take the derivative with respect to the other. And with three generators we could take three derivatives. But this gets out of hand. And there's a cleverer way we can think about the problem, which involves treating the vector containing all the generators as a single variable. We can do that! But it involves matrix calculus. And in this section we'll work through how.

Graphing damage for a rank-2 temperament, as we've seen previously, means we'll be looking at 3D tuning damage space, with the $x$ and $y$ axes in perpendicular directions across the floor, and the $z$-axis coming up out of the floor, where the $x$-axis gives the tuning of one generator, the $y$-axis gives the tuning of the other generator, and the $z$-axis gives the temperament's damage as a function of those two generator tunings.

And while in 2D tuning damage space the RMS graph made something like a V-shape but with the tip rounded off, here it makes a cone, again with its tip rounded off.

Remember that although we like to think of it, and visualize it as minimizing the $2$-mean of damage, it's equivalent, and simpler computationally, to minimize the $2$-sum. So here's our function:

$f(x, y) = \llzigzag \textbf{d} \rrzigzag _2$

Which is the same as:

$f(x, y) = \textbf{d}\textbf{d}^\mathsf{T}$

Because:

$\textbf{d}\textbf{d}^\mathsf{T} = \\ \textbf{d}·\textbf{d} = \\ \mathrm{d}_1·\mathrm{d}_1 + \mathrm{d}_2·\mathrm{d}_2 + \mathrm{d}_3·\mathrm{d}_3 = \\ \mathrm{d}_1^2 + \mathrm{d}_2^2 + \mathrm{d}_3^2$

Which is the same thing as the $2$-sum: it's the sum of entries to the 2nd power.

Alright, but you may well be concerned: $x$ and $y$ do not even appear in the body of the formula! Well, we can fix that.

As a first step toward resolving this problem, let's choose some better variable names. We had only chosen $x$ and $y$ because those are the most generic variable names available. They're very typically used when graphing things in Euclidean space like this. But we can definitely do better than those names, if we bring in some information more specific to our problem.

One thing we know is that these $x$ and $y$ variables are supposed to represent the tunings of our two generators. So let's call them $g_1$ and $g_2$ instead:

$f(g_1, g_2) = \textbf{d}\textbf{d}^\mathsf{T}$

But we can do even better than this. We're in a world of vectors, so why not express $g_1$ and $g_2$ together as a vector, $𝒈$? In other words, they're just our generator tuning map.

$f(𝒈) = \textbf{d}\textbf{d}^\mathsf{T}$

You may not be comfortable with the idea of a function of a vector (Douglas: I certainly wasn't when I first saw this!) but after working through this example and meditating on it for a while, you may be surprised to find it ceasing to seem so weird after all.

So we're still trying to connect the left and right sides of this equation by showing explicitly how this is a function of $𝒈$, i.e. how $\textbf{d}$ can be expressed in terms of $𝒈$. And we promise, we will get there soon enough.

Next, let's substitute in $(𝒕 - 𝒋)\mathrm{T}W$ for $\textbf{d}$ (we can drop the absolute value bars because, as noted earlier, squaring makes everything positive anyway). In other words, the target-interval damage list is the difference between how the tempered-prime tuning map and the just-prime tuning map tune our target-intervals, weighted by each interval's weight. But the number of symbols necessary to represent this equation is going to get out of hand if we do it exactly like this, so we're actually going to distribute first, finding $𝒕\mathrm{T}W - 𝒋\mathrm{T}W$, and then we're going to start following a pattern here of using Fraktur-style letters to represent matrices that are multiplied by $\mathrm{T}W$, so that in our case $𝖙 = 𝒕\mathrm{T}W$ and $𝖏 = 𝒋\mathrm{T}W$:

$f(𝒈) = (𝖙 - 𝖏)(𝖙^\mathsf{T} - 𝖏^\mathsf{T})$

Now let's distribute these two binomials (you know, the old $(a + b)(c + d) = ac + ad + bc + bd$ trick, AKA "FOIL" = first, outer, inner, last).

$f(𝒈) = 𝖙𝖙^\mathsf{T} - 𝖙𝖏^\mathsf{T} - 𝖏𝖙^\mathsf{T} + 𝖏𝖏^\mathsf{T}$

Because both $𝖙𝖏^\mathsf{T}$ and $𝖏𝖙^\mathsf{T}$ correspond to the dot product of $𝖙$ and $𝖏$, we can consolidate the two inner terms. Let's change $𝖙𝖏^\mathsf{T}$ into $𝖏𝖙^\mathsf{T}$, so that we will end up with $2𝖏𝖙^\mathsf{T}$ in the middle:

$f(𝒈) = 𝖙𝖙^\mathsf{T} - 2𝖏𝖙^\mathsf{T} + 𝖏𝖏^\mathsf{T}$

Alright! We're finally ready to surface $𝒈$. It's been hiding in $𝒕$ all along; the tuning map is equal to the generator tuning map times the mapping, i.e. $𝒕 = 𝒈M$. So we can just substitute that in everywhere. Exactly what we'll do is $𝖙 = 𝒕\mathrm{T}W = (𝒈M)\mathrm{T}W = 𝒈(M\mathrm{T}W) = 𝒈𝔐$, that last step introducing a new Fraktur-style symbol.

$f(𝒈) = (𝒈𝔐)(𝒈𝔐)^\mathsf{T} - 2𝖏(𝒈𝔐)^\mathsf{T} + 𝖏𝖏^\mathsf{T}$

And that gets sort of clunky, so let's execute some of those transposes. Note that when we transpose, the order of things reverses, so $(𝒈𝔐)^\mathsf{T} = 𝔐^\mathsf{T}𝒈^\mathsf{T}$:

$f(𝒈) = 𝒈𝔐𝔐^\mathsf{T}𝒈^\mathsf{T} - 2𝖏𝔐^\mathsf{T}𝒈^\mathsf{T} + 𝖏𝖏^\mathsf{T}$
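If you'd like reassurance that this expansion is right, here's a quick numeric sanity check in Python, with made-up small integer matrices standing in for $𝒈$, $𝔐$, and $𝖏$ (nothing temperament-specific about these values):

```python
# check that (g𝔐 - 𝖏)(g𝔐 - 𝖏)^T = g𝔐𝔐^T g^T - 2𝖏𝔐^T g^T + 𝖏𝖏^T
g = [2, -3]                       # a (1, 2)-shaped stand-in for 𝒈
frak_m = [[1, 0, -4], [0, 1, 4]]  # a (2, 3)-shaped stand-in for 𝔐
frak_j = [5, 7, -2]               # a (1, 3)-shaped stand-in for 𝖏

t = [sum(g[k] * frak_m[k][i] for k in range(2)) for i in range(3)]  # g𝔐
lhs = sum((ti - ji) ** 2 for ti, ji in zip(t, frak_j))
rhs = (sum(ti * ti for ti in t)
       - 2 * sum(ji * ti for ji, ti in zip(frak_j, t))
       + sum(ji * ji for ji in frak_j))
print(lhs == rhs)  # True
```

Since both sides come out equal (here, both are 433), the FOIL expansion and the consolidation of the two middle terms check out.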

And now, we're finally ready to take the derivative!

$\dfrac{\partial}{\partial𝒈}f(𝒈) = \dfrac{\partial}{\partial𝒈}(𝒈𝔐𝔐^\mathsf{T}𝒈^\mathsf{T} - 2𝖏𝔐^\mathsf{T}𝒈^\mathsf{T} + 𝖏𝖏^\mathsf{T})$

And remember, we want to find the place where this derivative is equal to zero. So let's drop the $\dfrac{\partial}{\partial𝒈}f(𝒈)$ part on the left, and show the $= \textbf{0}$ part on the right instead (note the boldness of the $\textbf{0}$; this indicates that this is not simply a single zero, but a vector of all zeros, one for each generator).

$\dfrac{\partial}{\partial𝒈}(𝒈𝔐𝔐^\mathsf{T}𝒈^\mathsf{T} - 2𝖏𝔐^\mathsf{T}𝒈^\mathsf{T} + 𝖏𝖏^\mathsf{T}) = \textbf{0}$

Well, now we've come to it. We've run out of things we can do without confronting the question: how in the world do we take derivatives of matrices? This next part is going to require some of that matrix calculus we warned about. Fortunately, if one is previously familiar with normal algebraic differentiation rules, these will not seem too wild:

1. The last term, $𝖏𝖏^\mathsf{T}$, is going to vanish, because with respect to $𝒈$, it's a constant; there's no factor of $𝒈$ in it.
2. The middle term, $-2𝖏𝔐^\mathsf{T}𝒈^\mathsf{T}$, has a single factor of $𝒈$, so it will remain but with that factor gone. (Technically it's a factor of $𝒈^\mathsf{T}$, but for reasons that would probably require a deeper understanding of the subtleties of matrix calculus than the present author commands, it works out this way anyway. Perhaps we should have differentiated instead with respect to $𝒈^\mathsf{T}$, rather than $𝒈$?)
3. The first term, $𝒈𝔐𝔐^\mathsf{T}𝒈^\mathsf{T}$, can in a way be seen to have a $𝒈^2$, because it contains both a $𝒈$ as well as a $𝒈^\mathsf{T}$ (and we demonstrated earlier how for a vector $\textbf{v}$, there is a relationship between itself squared and it times its transpose); so, just as an $x^2$ differentiates to a $2x$, that is, the power is reduced by 1 and multiplies into any existing coefficient, this term becomes $2𝒈𝔐𝔐^\mathsf{T}$.

And so we find:

$2𝒈𝔐𝔐^\mathsf{T} - 2𝖏𝔐^\mathsf{T} = \textbf{0}$

That's much nicer to look at, huh. Well, what next? Our goal is to solve for $𝒈$, right? Then let's isolate the solitary remaining term with $𝒈$ as a factor on one side of the equation:

$2𝒈𝔐𝔐^\mathsf{T} = 2𝖏𝔐^\mathsf{T}$

Certainly we can cancel out the 2's on both sides; that's easy:

$𝒈𝔐𝔐^\mathsf{T} = 𝖏𝔐^\mathsf{T}$

And, as we proved in the earlier section "Demystifying the formula", $AA^\mathsf{T}$ is invertible, so we cancel that out on the left by multiplying both sides of the equation by $(𝔐𝔐^\mathsf{T})^{-1}$:

$𝒈𝔐𝔐^\mathsf{T}(𝔐𝔐^\mathsf{T})^{-1} = 𝖏𝔐^\mathsf{T}(𝔐𝔐^\mathsf{T})^{-1} \\ 𝒈\cancel{𝔐𝔐^\mathsf{T}}\cancel{(𝔐𝔐^\mathsf{T})^{-1}} = 𝖏𝔐^\mathsf{T}(𝔐𝔐^\mathsf{T})^{-1} \\ 𝒈 = 𝖏𝔐^\mathsf{T}(𝔐𝔐^\mathsf{T})^{-1}$

Finally, remember that $𝒈 = 𝒋G$ and $𝒋 = 𝒋G_{\text{j}}M_{\text{j}}$, so we can replace those and cancel out some more stuff (also remember that $𝖏 = 𝒋\mathrm{T}W$):

$(𝒋G) = (𝒋G_{\text{j}}M_{\text{j}})\mathrm{T}W𝔐^\mathsf{T}(𝔐𝔐^\mathsf{T})^{-1} \\ \cancel{𝒋}G = \cancel{𝒋}\cancel{I}\cancel{I}\mathrm{T}W𝔐^\mathsf{T}(𝔐𝔐^\mathsf{T})^{-1}$

And that part on the right looks pretty familiar...

$G = \mathrm{T}W𝔐^\mathsf{T}(𝔐𝔐^\mathsf{T})^{-1} \\ G = \mathrm{T}W𝔐^{+} \\ G = \mathrm{T}W(M\mathrm{T}W)^{+}$

Voilà! We've found our pseudoinverse-based $G$ formula, finding it to be the $G$ that gives the point of zero slope, i.e. the minimum point of the RMS damage graph.
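A tiny Python sketch (helper names are our own) can confirm the "minimum point" claim numerically for meantone with unity weights and the primes as targets: nudging the generators away from $𝒈 = 𝒋M^{+}$ in any direction strictly increases the sum of squared errors.

```python
M = [[1, 0, -4], [0, 1, 4]]
j = [1200.0, 1901.955, 2786.314]
m_plus = [[17/33, 16/33], [16/33, 17/33], [-4/33, 4/33]]  # M+ from earlier

# the pseudoinverse tuning 𝒈 = 𝒋M+
g_opt = [sum(j[i] * m_plus[i][k] for i in range(3)) for k in range(2)]

def sos(g):
    """Sum of squared errors of the tuned primes versus the just primes."""
    t = [sum(g[k] * M[k][i] for k in range(2)) for i in range(3)]
    return sum((ti - ji) ** 2 for ti, ji in zip(t, j))

best = sos(g_opt)
# perturb each generator by ±1 cent in every combination of directions
perturbed = [sos([g_opt[0] + dx, g_opt[1] + dy])
             for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]
print(all(p > best for p in perturbed))  # True
```

Because the sum-of-squares surface is a strictly convex bowl (its Hessian $2𝔐𝔐^\mathsf{T}$ is positive definite), every perturbation lands higher, exactly as the zero-slope argument predicts.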

If you're hungry for more information on these concepts, or even just another take on it, please see User:Sintel/Generator optimization#Least squares method.

## With held-intervals

The pseudoinverse method can be adapted to handle tuning schemes which have held-intervals. The basic idea here is that we can no longer simply grab the tuning found as the point at the bottom of the tuning damage graph bowl hovering above the floor, because that tuning probably doesn't also happen to be one that leaves the requested interval unchanged. We can imagine an additional feature in our tuning damage space: the line across this bowl which connects every point where the generator tunings work out such that our interval is indeed unchanged. Again, this line probably doesn't pass straight through the bottommost point of our RMS-damage graph. But that's okay. That just means we could still decrease the overall damage further if we didn't hold the interval unchanged. But assuming we're serious about holding this interval unchanged, we've simply modified the problem a bit. Now we're looking for the point along this new held-interval line which is closest to the floor. Simple enough to understand, in concept! The rest of this section is dedicated to explaining how, mathematically speaking, we're able to identify that point. It still involves matrix calculus — derivatives of vectors, and such — but now we also pull in some additional ideas. We hope you dig it.

We'll be talking through this problem assuming a three-dimensional tuning damage graph, which is to say, we're dealing with a rank-2 temperament (the two generator dimensions across the floor, and the damage dimension up from the floor). If we asked for more than one interval to be held unchanged, then we'd flip over to the "only held-intervals" method discussed later, because at that point there's only a single possible tuning. And if we asked for less than one interval to be held unchanged, then we'd be back to the ordinary pseudoinverse method which you've already learned. So for this extended example we'll be assuming one held-interval. But the principles discussed here generalize to higher dimensions of temperaments and more held-intervals, if the dimensionality supports them. These higher dimensional examples are more difficult to visualize, though, of course, and so we've chosen the simplest possibility that sufficiently demonstrates the ideas we need to learn.

### Topographic view

$% Latex equivalents of the wiki templates llzigzag and rrzigzag for double zigzag brackets. % Annoyingly, we need slightly different Latex versions for the different Latex sizes. \def\smallLLzigzag{\hspace{-1.4mu}\style{display:inline-block;transform:scale(.62,1.24)translateY(.05em);font-family:sans-serif}{ꗨ\hspace{-2.6mu}ꗨ}\hspace{-1.4mu}} \def\smallRRzigzag{\hspace{-1.4mu}\style{display:inline-block;transform:scale(-.62,1.24)translateY(.05em);font-family:sans-serif}{ꗨ\hspace{-2.6mu}ꗨ}\hspace{-1.4mu}} \def\llzigzag{\hspace{-1.6mu}\style{display:inline-block;transform:scale(.62,1.24)translateY(.07em);font-family:sans-serif}{ꗨ\hspace{-3mu}ꗨ}\hspace{-1.6mu}} \def\rrzigzag{\hspace{-1.6mu}\style{display:inline-block;transform:scale(-.62,1.24)translateY(.07em);font-family:sans-serif}{ꗨ\hspace{-3mu}ꗨ}\hspace{-1.6mu}} \def\largeLLzigzag{\hspace{-1.8mu}\style{display:inline-block;transform:scale(.62,1.24)translateY(.09em);font-family:sans-serif}{ꗨ\hspace{-3.5mu}ꗨ}\hspace{-1.8mu}} \def\largeRRzigzag{\hspace{-1.8mu}\style{display:inline-block;transform:scale(-.62,1.24)translateY(.09em);font-family:sans-serif}{ꗨ\hspace{-3.5mu}ꗨ}\hspace{-1.8mu}} \def\LargeLLzigzag{\hspace{-2.5mu}\style{display:inline-block;transform:scale(.62,1.24)translateY(.1em);font-family:sans-serif}{ꗨ\hspace{-4.5mu}ꗨ}\hspace{-2.5mu}} \def\LargeRRzigzag{\hspace{-2.5mu}\style{display:inline-block;transform:scale(-.62,1.24)translateY(.1em);font-family:sans-serif}{ꗨ\hspace{-4.5mu}ꗨ}\hspace{-2.5mu}}$ Back in the fundamentals article, we briefly demonstrated a special way to visualize a 3-dimensional tuning damage 2-dimensionally: in a topographic view, where the $z$-axis is pointing straight out of the page, and represented by contour lines tracing out the shapes of points which share the same $z$-value. 
In the case of a tuning damage graph, then, this will show concentric rings (not necessarily circles) around the lowest point of our damage bowl, representing how damage increases smoothly in any direction you take away from that minimum point. So far we haven't made much use of this visualization approach, but for tuning schemes with $p=2$ and at least one held-interval, it's the perfect tool for the job.

So now we draw our held-interval line across this topographic view.

Our first guess at the lowest point on this line might be the point closest to the actual minimum damage. Good guess, but not necessarily true. It would be true if the rings were exactly circles. But they're not necessarily; they might be oblong, and skewed at no obvious angle with respect to the held-interval line. So for a generalized means of finding the lowest point on the held-interval line, we need to think a bit more deeply about the problem.

The first step to understanding better is to adjust our contour lines. The obvious place to start was at increments of 1 damage. But we're going to want to rescale so that one of our contour lines exactly touches the held-interval line. To be clear, we're not changing the damage graph at all; we're simply changing how we visualize it on this topographic view.

The point where this contour line touches the held-interval line, then, is the lowest point on the held-interval line — that is, among all the points where the held-interval is indeed unchanged, the one where the overall damage to the target-intervals is least. This should be easy enough to see, because if you step just an infinitesimally small amount in either direction along the held-interval line, you will no longer be touching the contour line, but rather you will be just outside of it, which means you have slightly higher damage than whatever constant damage amount that contour traces.

Next, we need to figure out how to identify this point. It may seem frustrating, because we're looking right at it! But we don't already have formulas for these contour lines.

### Matching slopes

In order to identify this point, it's going to be more helpful to look at the entire graph of our held-interval's error. That is, rather than only drawing the line where it's zero:

$𝒕\mathrm{H} - 𝒋\mathrm{H} = 0$

We'll draw the whole thing:

$𝒕\mathrm{H} - 𝒋\mathrm{H}$

If the original graph was like a line drawn diagonally across the floor, the full graph looks like that line but with a plane running through it, tilted: on one side ascending up and out of the floor, on the other side descending down into the floor. In the topographic view, then, this graph will appear as equally-spaced lines parallel to the original line, emanating outwards in both directions from it.

The next thing we want to see are some little arrows along all of these contour lines, both for the damage graph and for the held-interval graph, which point perpendicularly to them.

What these little arrows represent are the derivatives of these graphs at those points, or in other words, the slope. If this isn't clear, it might help to step back for a moment to 2D, and draw little arrows in a similar fashion:

In higher dimensions, the generalized way to think about slope is that it's the vector pointing in the direction of steepest slope upwards from the given point.

Now, we're not attempting to distinguish the sizes of these slopes here. We could do that, perhaps by changing the relative scale of the arrows. But that's not particularly important for our purposes. We only need to notice the different directions these slopes point.

You may recall that in the simpler case — with no held-intervals — we identified the point at the bottom of the bowl using derivatives; this point is where the derivative (slope) is equal to zero. Well, what can we notice about the point we're seeking to identify? It's where the slopes of the RMS damage graph for the target-intervals and the error of the held-interval match!

So, our first draft of our goal might look something like this:

$\dfrac{\partial}{\partial{𝒈}}( \llzigzag \textbf{d} \rrzigzag _2) = \dfrac{\partial}{\partial{𝒈}}(𝒓\mathrm{H})$

But that's not quite specific enough. To ensure we grab a point satisfying that condition, but also ensure that it's on our held-interval line, we could simply add another equation:

\begin{align} \dfrac{\partial}{\partial{𝒈}}( \llzigzag \textbf{d} \rrzigzag _2) &= \dfrac{\partial}{\partial{𝒈}}(𝒓\mathrm{H}) \\[12pt] 𝒓\mathrm{H} &= 0 \end{align}

But there's another special way of asking for the same thing, that isn't as obvious-looking, but consolidates it all down to a single equation, which — due to some mathemagic — eventually works out to give us a really nice solution. Here's what that looks like:

$\dfrac{\partial}{\partial{𝒈, λ}}( \llzigzag \textbf{d} \rrzigzag _2) = \dfrac{\partial}{\partial{𝒈, λ}}(λ𝒓\mathrm{H})$

What we've done here is add a new variable $λ$, a multiplier which scales the error in the interval we want to be unchanged. We can visualize its effect as saying: we don't care about the relative lengths of these two vectors; we only care about where they point in exactly the same direction. This trick works as long as we take the derivative with respect to $λ$ as well, which you'll note we're doing now too. We don't expect this to be clear straight away; the reason this works will probably only become clear in later steps of working through the problem.

Let's rework our equation a bit to make things nicer. One thing we can do is put both terms on one side of the equation, equalling zero (rather, the zero vector, with a bolded zero):

$\dfrac{\partial}{\partial{𝒈, λ}}( \llzigzag \textbf{d} \rrzigzag _2) - \dfrac{\partial}{\partial{𝒈, λ}}(λ𝒓\mathrm{H}) = \textbf{0}$

And now we can consolidate the derivatives:

$\dfrac{\partial}{\partial{𝒈, λ}}( \llzigzag \textbf{d} \rrzigzag _2 - λ𝒓\mathrm{H}) = \textbf{0}$

We're going to switch from subtraction to addition here. How can we get away with that? Well, it just changes what $λ$ comes out to; it'll just flip the sign on it. But we'll get the same answer either way. And we won't actually need to do anything with the value of $λ$ in the end; we only need to know the answers to the generator sizes in $𝒈$.

$\dfrac{\partial}{\partial{𝒈, λ}}( \llzigzag \textbf{d} \rrzigzag _2 + λ𝒓\mathrm{H}) = \textbf{0}$

Similarly, we can do this without changing the result:

$\dfrac{\partial}{\partial{𝒈, λ}}(\frac12 \llzigzag \textbf{d} \rrzigzag _2 + λ𝒓\mathrm{H}) = \textbf{0}$

That'll make the maths work out nicer, and just means $λ$ will come out to half the size it would have been otherwise.

So: we're looking for the value of $𝒈$. But $𝒈$ doesn't appear in the equation yet. That's because it's hiding inside $\textbf{d}$ and $𝒓$. We won't bother repeating all the steps from the simpler case; we'll just replace $\llzigzag \textbf{d} \rrzigzag _2$ with $𝒈𝔐𝔐^\mathsf{T}𝒈^\mathsf{T} - 2𝖏𝔐^\mathsf{T}𝒈^\mathsf{T} + 𝖏𝖏^\mathsf{T}$. And as for $𝒓$, that's just $𝒕 - 𝒋$, or $𝒈M - 𝒋$. So we have:

$\dfrac{\partial}{\partial{𝒈, λ}}(\frac12(𝒈𝔐𝔐^\mathsf{T}𝒈^\mathsf{T} - 2𝖏𝔐^\mathsf{T}𝒈^\mathsf{T} + 𝖏𝖏^\mathsf{T}) + λ(𝒈M - 𝒋)\mathrm{H}) = \textbf{0}$

And let's just distribute stuff so we have a simple summation:

$\dfrac{\partial}{\partial{𝒈, λ}}(\frac12𝒈𝔐𝔐^\mathsf{T}𝒈^\mathsf{T} - 𝖏𝔐^\mathsf{T}𝒈^\mathsf{T} + \frac12𝖏𝖏^\mathsf{T} + λ𝒈M\mathrm{H} - λ𝒋\mathrm{H}) = \textbf{0}$

Everything in that expression other than $𝒈$ and $λ$ are known values; only $𝒈$ and $λ$ are variables.

As a final change, we're going to recognize the fact that for higher-dimensional temperaments, we might sometimes have multiple held-intervals. Which is to say that our new variable might actually itself be a vector! So we'll use a bold $\textbf{λ}$ here to capture that idea. (The example we'll periodically demonstrate with will still only have one held-interval, though; that's fine, since in that case this is simply a one-entry vector, whose only entry is $λ_1$.) Note that we need to locate $\textbf{λ}$ on the right side of each term now, so that its height $h$ matches up with the width $h$ of $\mathrm{H}$.

$\dfrac{\partial}{\partial{𝒈, \textbf{λ}}}(\frac12𝒈𝔐𝔐^\mathsf{T}𝒈^\mathsf{T} - 𝖏𝔐^\mathsf{T}𝒈^\mathsf{T} + \frac12𝖏𝖏^\mathsf{T} + 𝒈M\mathrm{H}\textbf{λ} - 𝒋\mathrm{H}\textbf{λ}) = \textbf{0}$

Now in the simpler case, when we took the derivative simply with respect to $𝒈$, we could almost treat the vectors and matrices like normal variables when taking derivatives: exponents came down as coefficients, and exponents decremented by 1. But now that we're taking the derivative with respect to both $𝒈$ and $\textbf{λ}$, the clearest way forward is to understand this in terms of a system of equations, rather than a single equation of matrices and vectors.

### Multiple derivatives

One way of thinking about what we're asking for with $\dfrac{\partial}{\partial{𝒈, \textbf{λ}}}$ is that we want the vector whose entries are partial derivatives with respect to each scalar entry of $𝒈$ and $\textbf{λ}$. We hinted at this earlier when we introduced the bold-zero vector $\textbf{0}$, which represented a zero for each generator. So if:

$\dfrac{\partial}{\partial{𝒈}} \llzigzag \textbf{d} \rrzigzag _2 = \\ \dfrac{\partial}{\partial{𝒈}} ( 𝒈𝔐𝔐^\mathsf{T}𝒈^\mathsf{T} - 2𝖏𝔐^\mathsf{T}𝒈^\mathsf{T} + 𝖏𝖏^\mathsf{T}) = \\ \dfrac{\partial}{\partial{𝒈}} f(𝒈) \\$

Then if we find that the minimum RMS damage occurs where $𝒈$ = {1198.857 162.966], that tells us that:

$\dfrac{\partial}{\partial{𝒈}} f(\left[ \begin{array} {c} 1198.857 & 162.966 \\ \end{array} \right]) = \textbf{0} = \left[ \begin{array} {c} 0 & 0 \\ \end{array} \right]$

Or in other words:

$\dfrac{\partial}{\partial{g_1}} f(\left[ \begin{array} {c} 1198.857 & 162.966 \\ \end{array} \right]) = 0 \\ \dfrac{\partial}{\partial{g_2}} f(\left[ \begin{array} {c} 1198.857 & 162.966 \\ \end{array} \right]) = 0$

And so if we plug in some other $𝒈$ to $f()$, what we get out is some vector $\textbf{v}$ telling us the slope of the damage graph at the tuning represented by that generator tuning map:

$\dfrac{\partial}{\partial{𝒈}} f(\left[ \begin{array} {c} 1200.000 & 163.316 \\ \end{array} \right]) = \textbf{v}$

Or in other words:

$\dfrac{\partial}{\partial{g_1}} f(\left[ \begin{array} {c} 1200.000 & 163.316 \\ \end{array} \right]) = v_1 \\ \dfrac{\partial}{\partial{g_2}} f(\left[ \begin{array} {c} 1200.000 & 163.316 \\ \end{array} \right]) = v_2 \\$

Now let's return to our full equation, the one with the held-interval terms, whose derivative is taken with respect to $𝒈$ and $\textbf{λ}$ together:

$\dfrac{\partial}{\partial{𝒈, \textbf{λ}}}(\frac12𝒈𝔐𝔐^\mathsf{T}𝒈^\mathsf{T} - 𝖏𝔐^\mathsf{T}𝒈^\mathsf{T} + \frac12𝖏𝖏^\mathsf{T} + 𝒈M\mathrm{H}\textbf{λ} - 𝒋\mathrm{H}\textbf{λ}) = \textbf{0} = \left[ \begin{array} {c} 0 & 0 & 0 \\ \end{array} \right]$

What we really want under the hood is the derivative with respect to $g_1$ to be 0, the derivative with respect to $g_2$ to be 0, and also the derivative with respect to $λ_1$ to be 0:

$\dfrac{\partial}{\partial{g_1}}(\frac12𝒈𝔐𝔐^\mathsf{T}𝒈^\mathsf{T} - 𝖏𝔐^\mathsf{T}𝒈^\mathsf{T} + \frac12𝖏𝖏^\mathsf{T} + 𝒈M\mathrm{H}\textbf{λ} - 𝒋\mathrm{H}\textbf{λ}) = 0 \\ \dfrac{\partial}{\partial{g_2}}(\frac12𝒈𝔐𝔐^\mathsf{T}𝒈^\mathsf{T} - 𝖏𝔐^\mathsf{T}𝒈^\mathsf{T} + \frac12𝖏𝖏^\mathsf{T} + 𝒈M\mathrm{H}\textbf{λ} - 𝒋\mathrm{H}\textbf{λ}) = 0 \\ \dfrac{\partial}{\partial{λ_1}}(\frac12𝒈𝔐𝔐^\mathsf{T}𝒈^\mathsf{T} - 𝖏𝔐^\mathsf{T}𝒈^\mathsf{T} + \frac12𝖏𝖏^\mathsf{T} + 𝒈M\mathrm{H}\textbf{λ} - 𝒋\mathrm{H}\textbf{λ}) = 0 \\$

So, this essentially gives us a vector whose entries are derivatives, and which can be thought of as an arrow in space pointing in the multidimensional direction of the slope of the graph at a point. Sometimes these vector derivatives are called "gradients" and notated with an upside-down triangle, but we're just going to stick with the more familiar algebraic terminology here for our purposes.

To give a quick and dirty answer to the question posed earlier, regarding why introducing $\textbf{λ}$ is a replacement of any sort for the obvious equation $𝒓\mathrm{H} = 0$, notice what the derivative of the third equation will be. We'll work it out in rigorous detail soon, but for now, let's just observe how $\dfrac{\partial}{\partial{λ_1}}(\frac12𝒈𝔐𝔐^\mathsf{T}𝒈^\mathsf{T} - 𝖏𝔐^\mathsf{T}𝒈^\mathsf{T} + \frac12𝖏𝖏^\mathsf{T} + 𝒈M\mathrm{H}\textbf{λ} - 𝒋\mathrm{H}\textbf{λ}) = 𝒈M\mathrm{H} - 𝒋\mathrm{H}$. So if that's equal to 0, and $𝒓$ can be rewritten as $𝒕 - 𝒋$ and further as $𝒈M - 𝒋$, then we can see how this covers our bases re: $𝒓\mathrm{H} = 0$. At the same time, it provides the connective tissue to the other equations re: using $𝒈$ and $\textbf{λ}$ to minimize damage to our target-intervals: $\textbf{λ}$ figures in terms in the first two equations which also have a $𝒈$ in them, so whatever it comes out to will affect those. This is how we achieve the offsetting from the actual bottom of the damage bowl.
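We can also verify this matching-slopes condition numerically. The following NumPy sketch borrows the porcupine example data and its solution from the worked example later in this article ($𝒈$ = {1200.000 163.316], $λ_1 = {-4.627}$), and checks via finite differences that the derivative of the (halved, squared) damage norm plus $λ_1$ times the derivative of the held-interval error comes out to the zero vector:

```python
import numpy as np

# Porcupine example data (from the worked example later in this article):
M = np.array([[1, 2, 3], [0, -3, -5]])
T = np.array([[1, 0, -1, 2, -1, 0, -2, 1],
              [0, 1, 1, -1, 0, -1, 0, 1],
              [0, 0, 0, 0, 1, 1, 1, -1]])
H = np.array([[1], [0], [0]])             # held interval: the octave
j = 1200 * np.log2([2, 3, 5])

def F(g):
    """Half the sum of squared damages (the expanded damage term)."""
    d = (g @ M - j) @ T
    return 0.5 * d @ d

def c(g):
    """The held-interval error rH, as a scalar."""
    return ((g @ M - j) @ H).item()

# The solution and multiplier found later in the article:
g_star = np.array([1200.000, 163.316])
lam = -4.627

# Central finite differences for the two gradients:
eps = 1e-6
grad_F = np.array([(F(g_star + eps * e) - F(g_star - eps * e)) / (2 * eps)
                   for e in np.eye(2)])
grad_c = np.array([(c(g_star + eps * e) - c(g_star - eps * e)) / (2 * eps)
                   for e in np.eye(2)])

print(grad_F + lam * grad_c)   # ≈ [0, 0] (up to the rounding of g* and λ₁)
print(c(g_star))               # ≈ 0: the held octave really is unchanged
```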

### Break down matrices

In order to work this out, though, we'll need to break our occurrences of $𝒈$ down into $g_1$ and $g_2$ (and $\textbf{λ}$ down into $λ_1$).

So let's take this daunting task on, one term at a time. Term one of five:

$\frac12𝒈𝔐𝔐^\mathsf{T}𝒈^\mathsf{T}$

Remember, $𝔐 = M\mathrm{T}W$. We haven't specified our target-interval count $k$. Whatever it is, though, if we were to drill all the way down to the $m_{ij}$, $t_{ij}$, and $w_{ij}$ level here as we are doing with $𝒈$, then the entries of $𝔐$ would be so complicated that they'd be hard to fit on the page, with dozens of summed up terms. And the entries of $𝔐𝔐^\mathsf{T}$ would be even crazier! So let's not.

Besides, we don't need to drill down into $M$, $\mathrm{T}$, or $W$ in the same way we need to drill down into $𝒈$ and $\mathbf{λ}$, because they're not variables we need to differentiate by; they're all just known constants, information about the temperament we're tuning and the tuning scheme according to which we're tuning it. So why would we drill down into those? Well, we won't.

Instead, let's take an approach where, in each term, we'll multiply together every matrix other than $𝒈$ and $\mathbf{λ}$, then use letters $\mathrm{A}$, $\mathrm{B}$, $\mathrm{C}$, $\mathrm{D}$, and $\mathrm{E}$ to identify the results as matrices of constants, one different letter of the alphabet for each term. And while we may not need to have drilled down to the matrix entry level in $M$, $\mathrm{T}$, or $W$, we do at least need to drill down to the entry level of these constant matrices.

So, in the case of our first term, we'll be replacing $𝔐𝔐^\mathsf{T}$ with $\mathrm{A}$. And if we've set $r=2$, then this is a matrix with shape $(2,2)$, so it'll have entries $\mathrm{a}_{11}$, $\mathrm{a}_{12}$, $\mathrm{a}_{21}$, and $\mathrm{a}_{22}$. We've indicated shapes below each matrix in the following:

\begin{align} \frac12 \begin{array} {c} 𝒈 \\ \left[ \begin{array} {r} g_1 & g_2 \\ \end{array} \right] \\ \small (1,2) \end{array} \begin{array} {c} \mathrm{A} \\ \left[ \begin{array} {r} \mathrm{a}_{11} & \mathrm{a}_{12} \\ \mathrm{a}_{21} & \mathrm{a}_{22} \\ \end{array} \right] \\ \small (2,2) \end{array} \begin{array} {c} 𝒈^\mathsf{T} \\ \left[ \begin{array} {r} g_1 \\ g_2 \\ \end{array} \right] \\ \small (2,1) \end{array} &= \\[12pt] \frac12 \begin{array} {c} 𝒈 \\ \left[ \begin{array} {r} g_1 & g_2 \\ \end{array} \right] \\ \small (1,2) \end{array} \begin{array} {c} \mathrm{A}𝒈^\mathsf{T} \\ \left[ \begin{array} {r} \mathrm{a}_{11}g_1 + \mathrm{a}_{12}g_2 \\ \mathrm{a}_{21}g_1 + \mathrm{a}_{22}g_2 \\ \end{array} \right] \\ \small (2,1) \end{array} &= \\[12pt] \frac12 \begin{array} {c} 𝒈\mathrm{A}𝒈^\mathsf{T} \\ \left[ \begin{array} {r} (\mathrm{a}_{11}g_1 + \mathrm{a}_{12}g_2)g_1 + (\mathrm{a}_{21}g_1 + \mathrm{a}_{22}g_2)g_2 \end{array} \right] \\ \small (1,1) \end{array} &= \\[12pt] \frac12\mathrm{a}_{11}g_1^2 + \frac12(\mathrm{a}_{12} + \mathrm{a}_{21})g_1g_2 + \frac12\mathrm{a}_{22}g_2^2 \end{align}

Yes, there's a reason we haven't pulled the $\frac12$ into the constant matrix, despite it clearly being a constant. It's the same reason we deliberately introduced it to our equation out of nowhere earlier. We'll see soon enough.

Now let's work out the second term, $𝖏𝔐^\mathsf{T}𝒈^\mathsf{T}$. Again, we should do as little as possible other than breaking down $𝒈$. So with $𝖏$ a $(1, k)$-shaped matrix and $𝔐^\mathsf{T}$ a $(k, r)$-shaped matrix, those two together are a $(1, r)$-shaped matrix, and $r=2$ in our example. And that's our $\mathrm{B}$. So:

\begin{align} \begin{array} {c} \mathrm{B} \\ \left[ \begin{array} {r} \mathrm{b}_{11} & \mathrm{b}_{12} \\ \end{array} \right] \\ \small (1,2) \end{array} \begin{array} {c} 𝒈^\mathsf{T} \\ \left[ \begin{array} {r} g_1 \\ g_2 \\ \end{array} \right] \\ \small (2,1) \end{array} &= \\[12pt] \begin{array} {c} \mathrm{B}𝒈^\mathsf{T} \\ \left[ \begin{array} {r} \mathrm{b}_{11}g_1 + \mathrm{b}_{12}g_2 \\ \end{array} \right] \\ \small (1,1) \end{array} &= \\[12pt] \mathrm{b}_{11}g_1 + \mathrm{b}_{12}g_2 \end{align}

Third term to break down: $\frac12𝖏𝖏^\mathsf{T}$. This one has neither a $𝒈$ nor a $\textbf{λ}$ in it, and is a $(1, 1)$-shaped matrix, so all we have to do is get it into our constant form: $\frac12\mathrm{c}_{11}$ (for consistency, leaving the $\frac12$ alone, though this one matters less).

Fourth term to break down: $𝒈M\mathrm{H}\textbf{λ}$. Well, $M\mathrm{H}$ is an $(r, d)(d, h) = (r, h)$-shaped matrix, and we know $r=2$ and $h=1$, so our constant matrix $\mathrm{D}$ is a $(2, 1)$-shaped matrix.

\begin{align} \begin{array} {c} 𝒈 \\ \left[ \begin{array} {r} g_1 & g_2 \\ \end{array} \right] \\ \small (1, 2) \end{array} \begin{array} {c} \mathrm{D} \\ \left[ \begin{array} {r} \mathrm{d}_{11} \\ \mathrm{d}_{12} \\ \end{array} \right] \\ \small (2, 1) \end{array} \begin{array} {c} \textbf{λ} \\ \left[ \begin{array} {r} λ_1 \\ \end{array} \right] \\ \small (1, 1) \end{array} &= \\[12pt] \begin{array} {c} 𝒈\mathrm{D} \\ \left[ \begin{array} {r} \mathrm{d}_{11}g_1 + \mathrm{d}_{12}g_2 \end{array} \right] \\ \small (1,1) \end{array} \begin{array} {c} \textbf{λ} \\ \left[ \begin{array} {r} λ_1 \\ \end{array} \right] \\ \small (1, 1) \end{array} &= \\[12pt] \begin{array} {c} 𝒈\mathrm{D}\textbf{λ} \\ \left[ \begin{array} {r} (\mathrm{d}_{11}g_1 + \mathrm{d}_{12}g_2)λ_1 \end{array} \right] \\ \small (1,1) \end{array} &= \\[12pt] \mathrm{d}_{11}g_1λ_1 + \mathrm{d}_{12}g_2λ_1 \end{align}

Okay, the fifth and final term to break down: $𝒋\mathrm{H}\textbf{λ}$. This one's on the quicker side: we can just rewrite it as $\mathrm{e}_{11}λ_1$.

Now we just have to put all five of those rewritten terms back together!

$\begin{array} {c} \frac12𝒈𝔐𝔐^\mathsf{T}𝒈^\mathsf{T} & - & 𝖏𝔐^\mathsf{T}𝒈^\mathsf{T} & + & \frac12𝖏𝖏^\mathsf{T} & + & 𝒈M\mathrm{H}\textbf{λ} & - & 𝒋\mathrm{H}\textbf{λ} & = \\ \frac12𝒈\mathrm{A}𝒈^\mathsf{T} & - & \mathrm{B}𝒈^\mathsf{T} & + & \frac12\mathrm{C} & + & 𝒈\mathrm{D}\textbf{λ} & - & \mathrm{E}\textbf{λ} & = \\ \frac12\mathrm{a}_{11}g_1^2 + \frac12(\mathrm{a}_{12} + \mathrm{a}_{21})g_1g_2 + \frac12\mathrm{a}_{22}g_2^2 & - & \mathrm{b}_{11}g_1 - \mathrm{b}_{12}g_2 & + & \frac12\mathrm{c}_{11} & + & \mathrm{d}_{11}g_1λ_1 + \mathrm{d}_{12}g_2λ_1 & - & \mathrm{e}_{11}λ_1 & \end{array}$
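As a sanity check on this breakdown, here's a quick NumPy sketch (with randomly generated stand-ins for the constant matrices; all variable names here are ours) confirming that the fully broken-down scalar expression in the last row equals the constant-matrix expression in the middle row:

```python
import numpy as np

rng = np.random.default_rng(0)

# Random stand-ins for the constant matrices; A is built as a product XXᵀ
# so that, like 𝔐𝔐ᵀ, it comes out symmetric:
X = rng.normal(size=(2, 5))
A = X @ X.T                      # plays 𝔐𝔐ᵀ, shape (2,2)
B = rng.normal(size=(1, 2))      # plays 𝖏𝔐ᵀ
C = rng.normal(size=(1, 1))      # plays 𝖏𝖏ᵀ
D = rng.normal(size=(2, 1))      # plays MH
E = rng.normal(size=(1, 1))      # plays 𝒋H
g = rng.normal(size=(1, 2))      # generator tuning map
lam = rng.normal(size=(1, 1))    # λ

# Constant-matrix form: ½gAgᵀ − Bgᵀ + ½C + gDλ − Eλ
matrix_form = (0.5 * g @ A @ g.T - B @ g.T + 0.5 * C
               + g @ D @ lam - E @ lam).item()

# Fully broken-down scalar form (NumPy is zero-indexed, so the article's
# d₁₁ and d₁₂ entries of the (2,1)-shaped D are D[0,0] and D[1,0] here):
g1, g2 = g[0]
l1 = lam.item()
scalar_form = (0.5 * A[0, 0] * g1**2 + 0.5 * (A[0, 1] + A[1, 0]) * g1 * g2
               + 0.5 * A[1, 1] * g2**2 - B[0, 0] * g1 - B[0, 1] * g2
               + 0.5 * C.item() + D[0, 0] * g1 * l1 + D[1, 0] * g2 * l1
               - E.item() * l1)

print(abs(matrix_form - scalar_form))  # ≈ 0
```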

Now that we've gotten our expression in terms of $g_1$, $g_2$, and $λ_1$, we are ready to take our three different derivatives of this, once with respect to each of those three scalar variables (and finally we can see why we introduced the factor of $\frac12$: so that when the exponents of 2 come down as coefficients, they cancel out; well, that's only a partial answer, we suppose, but suffice it to say that if we hadn't done this, later steps wouldn't match up quite right).

$\small \begin{array} {c} f(𝒈, \textbf{λ}) & = & \frac12\mathrm{a}_{11}g_1^2 & + & \frac12(\mathrm{a}_{12} + \mathrm{a}_{21})g_1g_2 & + & \frac12\mathrm{a}_{22}g_2^2 & - & \mathrm{b}_{11}g_1 & - & \mathrm{b}_{12}g_2 & + & \frac12\mathrm{c}_{11} & + & \mathrm{d}_{11}g_1λ_1 & + & \mathrm{d}_{12}g_2λ_1 & - & \mathrm{e}_{11}λ_1 & \\ \dfrac{\partial}{\partial{g_1}}f(𝒈, \textbf{λ}) & = & \mathrm{a}_{11}g_1 & + & \frac12(\mathrm{a}_{12} + \mathrm{a}_{21})g_2 & + & 0 & - & \mathrm{b}_{11} & - & 0 & + & 0 & + & \mathrm{d}_{11}λ_1 & + & 0 & - & 0 \\ \dfrac{\partial}{\partial{g_2}}f(𝒈, \textbf{λ}) & = & 0 & + & \frac12(\mathrm{a}_{12} + \mathrm{a}_{21})g_1 & + & \mathrm{a}_{22}g_2 & - & 0 & - & \mathrm{b}_{12} & + & 0 & + & 0 & + & \mathrm{d}_{12}λ_1 & - & 0 \\ \dfrac{\partial}{\partial{λ_1}}f(𝒈, \textbf{λ}) & = & 0 & + & 0 & + & 0 & - & 0 & - & 0 & + & 0 & + & \mathrm{d}_{11}g_1 & + & \mathrm{d}_{12}g_2 & - & \mathrm{e}_{11} \\ \end{array}$

And so, replacing the derivatives in our system, we find:

\begin{align} \mathrm{a}_{11}g_1 + \frac12(\mathrm{a}_{12} + \mathrm{a}_{21})g_2 - \mathrm{b}_{11} + \mathrm{d}_{11}λ_1 &= 0 \\ \frac12(\mathrm{a}_{12} + \mathrm{a}_{21})g_1 + \mathrm{a}_{22}g_2 - \mathrm{b}_{12} + \mathrm{d}_{12}λ_1 &= 0 \\ \mathrm{d}_{11}g_1 + \mathrm{d}_{12}g_2 - \mathrm{e}_{11} &= 0 \\ \end{align}

### Build matrices back up

In this section we'd like to work our way back from this rather clunky and tedious system of equations situation back to matrices. As our first step, let's space our derivative equations' terms out nicely so we can understand better the relationships between them:

$\begin{array} {c} \mathrm{a}_{11}g_1 & + & \frac12(\mathrm{a}_{12} + \mathrm{a}_{21})g_2 & + & \mathrm{d}_{11}λ_1 & - & \mathrm{b}_{11} & = & 0 \\ \frac12(\mathrm{a}_{12} + \mathrm{a}_{21})g_1 & + & \mathrm{a}_{22}g_2 & + & \mathrm{d}_{12}λ_1 & - & \mathrm{b}_{12} & = & 0\\ \mathrm{d}_{11}g_1 & + & \mathrm{d}_{12}g_2 & & & - & \mathrm{e}_{11} & = & 0\\ \end{array}$

Next, notice that all of the terms that contain none of our variables are negative. Let's get all of them to the other side of their respective equations:

$\begin{array} {c} \mathrm{a}_{11}g_1 & + & \frac12(\mathrm{a}_{12} + \mathrm{a}_{21})g_2 & + & \mathrm{d}_{11}λ_1 & = & \mathrm{b}_{11} \\ \frac12(\mathrm{a}_{12} + \mathrm{a}_{21})g_1 & + & \mathrm{a}_{22}g_2 & + & \mathrm{d}_{12}λ_1 & = & \mathrm{b}_{12} \\ \mathrm{d}_{11}g_1 & + & \mathrm{d}_{12}g_2 & & & = & \mathrm{e}_{11} \\ \end{array}$

Notice also that none of our terms contain more than one of our variables anymore. Let's reorganize these terms in a table according to which variable they contain:

| equation | $g_1$ | $g_2$ | $λ_1$ | (no variable, i.e. constants only) |
|---|---|---|---|---|
| 1 | $\mathrm{a}_{11}$ | $\frac12(\mathrm{a}_{12} + \mathrm{a}_{21})$ | $\mathrm{d}_{11}$ | $\mathrm{b}_{11}$ |
| 2 | $\frac12(\mathrm{a}_{12} + \mathrm{a}_{21})$ | $\mathrm{a}_{22}$ | $\mathrm{d}_{12}$ | $\mathrm{b}_{12}$ |
| 3 | $\mathrm{d}_{11}$ | $\mathrm{d}_{12}$ | - | $\mathrm{e}_{11}$ |

This reorganization is the first step to seeing how we can pull ourselves back into matrix form. Notice some patterns here. The constants are all grouped together by which term they came from. This means we can go back to thinking of this system of equations as a single equation of matrices, replacing these chunks with the original constant matrices:

| equation | $g_1$, $g_2$ | $λ_1$ | (no variable, i.e. constants only) |
|---|---|---|---|
| 1 & 2 | $\mathrm{A}$ | $\mathrm{D}$ | $\mathrm{B}^\mathsf{T}$ |
| 3 | $\mathrm{D}^\mathsf{T}$ | - | $\mathrm{E}^\mathsf{T}$ |

The replacements for $\mathrm{B}$ and $\mathrm{D}$ may seem obvious enough, but you may initially balk at the replacement of $\mathrm{A}$ here. There's a reason it works, though: the thing $\mathrm{A}$ represents is the product of a thing and its own transpose, which means entries mirrored across the main diagonal are equal to each other. So since $\mathrm{a}_{12} = \mathrm{a}_{21}$, we have $\frac12(\mathrm{a}_{12} + \mathrm{a}_{21}) = \mathrm{a}_{12} = \mathrm{a}_{21}$. Feel free to check this yourself, or compare with our work-through in the footnote here.

Also note that we made $\mathrm{E}$ transposed; it's hard to tell because it's a $(1, 1)$-shaped matrix, but if we did have more than one held-interval, this'd be more apparent.

And so now we can go back to our original variables.

| equation | $g_1$, $g_2$ | $λ_1$ | (no variable, i.e. constants only) |
|---|---|---|---|
| 1 & 2 | $𝔐𝔐^\mathsf{T}$ | $M\mathrm{H}$ | $(𝖏𝔐^\mathsf{T})^\mathsf{T}$ |
| 3 | $(M\mathrm{H})^\mathsf{T}$ | - | $(𝒋\mathrm{H})^\mathsf{T}$ |

And if we think about how matrix multiplication works, we can realize that the headings are just a vector containing our variables. And so the rest is just a couple of augmented matrices. We can fill the matrix with zeros where we don't have any constants. And remember, the data entries in the last column of this table are actually on the right side of the equals signs:

$\left[ \begin{array} {c|c} \\ \quad 𝔐𝔐^\mathsf{T} \quad & M\mathrm{H} \\ \hline \quad (M\mathrm{H})^\mathsf{T} \quad & 0 \\ \end{array} \right] \left[ \begin{array} {c} g_1 \\ g_2 \\ \hline λ_1 \\ \end{array} \right] = \left[ \begin{array} {c} \\ (𝖏𝔐^\mathsf{T})^\mathsf{T} \\ \hline (𝒋\mathrm{H})^\mathsf{T} \\ \end{array} \right]$

But we prefer to think of our generators in a row vector, or map. And everything on the right half is transposed. So we can address both of those issues by transposing everything. Remember, when we transpose, we also reverse the order. Conveniently, because the augmented matrix on the left side of the equation is symmetric across its main diagonal, transposing it does not change its value:

$\left[ \begin{array} {cc|c} g_1 & g_2 & λ_1 \\ \end{array} \right] \left[ \begin{array} {c|c} \\ \quad 𝔐𝔐^\mathsf{T} \quad & M\mathrm{H} \\ \hline \quad (M\mathrm{H})^\mathsf{T} \quad & 0 \\ \end{array} \right] = \left[ \begin{array} {c|c} \quad 𝖏𝔐^\mathsf{T} \quad & 𝒋\mathrm{H} \\ \end{array} \right]$

The big matrix is invertible, so we can multiply both sides by its inverse to move it to the other side, to help us solve for $g_1$ and $g_2$:

$\left[ \begin{array} {cc|c} g_1 & g_2 & λ_1 \\ \end{array} \right] = \left[ \begin{array} {c|c} \quad 𝖏𝔐^\mathsf{T} \quad & 𝒋\mathrm{H} \\ \end{array} \right] \left[ \begin{array} {c|c} \\ \quad 𝔐𝔐^\mathsf{T} \quad & M\mathrm{H} \\ \hline \quad (M\mathrm{H})^\mathsf{T} \quad & 0 \\ \end{array} \right]^{\large -1}$

And let's go back from $𝖏$ to $𝒋\mathrm{T}W$ and $𝔐$ to $M\mathrm{T}W$:

$\left[ \begin{array} {cc|c} g_1 & g_2 & λ_1 \\ \end{array} \right] = \left[ \begin{array} {c|c} 𝒋\mathrm{T}W(M\mathrm{T}W)^\mathsf{T} & 𝒋\mathrm{H} \\ \end{array} \right] \left[ \begin{array} {c|c} \\ M\mathrm{T}W(M\mathrm{T}W)^\mathsf{T} & M\mathrm{H} \\ \hline \quad (M\mathrm{H})^\mathsf{T} \quad & 0 \\ \end{array} \right]^{\large -1}$

And extract the $𝒋$ from the right:

$\left[ \begin{array} {cc|c} g_1 & g_2 & λ_1 \\ \end{array} \right] = 𝒋 \left[ \begin{array} {c|c} \mathrm{T}W(M\mathrm{T}W)^\mathsf{T} & \mathrm{H} \\ \end{array} \right] \left[ \begin{array} {c|c} \\ M\mathrm{T}W(M\mathrm{T}W)^\mathsf{T} & M\mathrm{H} \\ \hline \quad (M\mathrm{H})^\mathsf{T} \quad & 0 \\ \end{array} \right]^{\large -1}$
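Here's what this formula looks like as code: a sketch in Python with NumPy (the function name is our own), returning the row of generator sizes with $λ_1$ appended at the end.

```python
import numpy as np

def held_interval_rms_optimum(M, T, W, H, j):
    """A sketch of the augmented-matrix formula above (the name is ours):
    returns the row [g1 ... gr | λ1 ... λh]."""
    MTW = M @ T @ W
    MH = M @ H
    h = H.shape[1]
    # Left factor: [ 𝒋TW(MTW)ᵀ | 𝒋H ]
    left = np.hstack([j @ T @ W @ MTW.T, j @ H])
    # Augmented matrix: [[ MTW(MTW)ᵀ, MH ], [ (MH)ᵀ, 0 ]]
    aug = np.vstack([np.hstack([MTW @ MTW.T, MH]),
                     np.hstack([MH.T, np.zeros((h, h))])])
    return left @ np.linalg.inv(aug)

# Porcupine, 6-TILT, unity weights, held octave (the example worked below):
M = np.array([[1, 2, 3], [0, -3, -5]])
T = np.array([[1, 0, -1, 2, -1, 0, -2, 1],
              [0, 1, 1, -1, 0, -1, 0, 1],
              [0, 0, 0, 0, 1, 1, 1, -1]])
W = np.eye(8)
H = np.array([[1], [0], [0]])
j = 1200 * np.log2([2, 3, 5])

print(np.round(held_interval_rms_optimum(M, T, W, H, j), 3))
# ≈ [1200., 163.316, -4.627]
```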

At this point you may begin to notice the similarity between this and the pseudoinverse method. We looked at the pseudoinverse as $G = \mathrm{T}W(M\mathrm{T}W)^{+} = \mathrm{T}W(M\mathrm{T}W)^\mathsf{T}(M\mathrm{T}W(M\mathrm{T}W)^\mathsf{T})^{-1}$, but all we need to do is left-multiply both sides by $𝒋$ to get $𝒈 = 𝒋\mathrm{T}W(M\mathrm{T}W)^\mathsf{T}(M\mathrm{T}W(M\mathrm{T}W)^\mathsf{T})^{-1}$, which looks almost the same as the above, only without any of the augmentations that are there to account for the held-intervals:

$\left[ \begin{array} {cc} g_1 & g_2 \\ \end{array} \right] = 𝒋 \left[ \begin{array} {c} \quad \mathrm{T}W(M\mathrm{T}W)^\mathsf{T} \quad \\ \end{array} \right] \left[ \begin{array} {c} \quad M\mathrm{T}W(M\mathrm{T}W)^\mathsf{T} \quad \\ \end{array} \right]^{-1}$

And so, without held-intervals, the generators can be found using the pseudoinverse of $M\mathrm{T}W$ (left-multiplied by $\mathrm{T}W$); with held-intervals, they can be found in almost the same way, just with some augmentations to the matrices. This augmentation results in an extra value at the end, $λ_1$, but we don't need it and can just discard it. Ta da!

### Hardcoded example

At this point everything on the right side of this equation is known. Let's actually plug in some numbers to convince ourselves this makes sense. Suppose we go with an unchanged octave, porcupine temperament, the 6-TILT, and unity-weight damage (and of course, optimization power $2$). Then we have:

$\small \begin{array} {c} 𝒋 \\ \left[ \begin{array} {c} 1200.000 & 1901.955 & 2786.314 \\ \end{array} \right] \end{array} , \begin{array} {c} \mathrm{H} \\ \left[ \begin{array} {c} 1 \\ 0 \\ 0 \\ \end{array} \right] \end{array} , \begin{array} {c} M \\ \left[ \begin{array} {c} 1 & 2 & 3 \\ 0 & {-3} & {-5} \\ \end{array} \right] \end{array} , \begin{array} {ccc} \mathrm{T} \\ \left[ \begin{array} {r|r|r|r|r|r|r|r} \;\;1 & \;\;\;0 & {-1} & 2 & {-1} & 0 & {-2} & 1 \\ 0 & 1 & 1 & {-1} & 0 & {-1} & 0 & 1 \\ 0 & 0 & 0 & 0 & 1 & 1 & 1 & {-1} \\ \end{array} \right] \end{array} , \begin{array} {ccc} W \\ \left[ \begin{array} {c} 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\ \end{array} \right] \end{array}$

Before we can plug into our formula, we need to compute a few things. Let's start with $M\mathrm{H}$:

$\begin{array} {c} M \\ \left[ \begin{array} {c} 1 & 2 & 3 \\ 0 & {-3} & {-5} \\ \end{array} \right] \end{array} \begin{array} {c} \mathrm{H} \\ \left[ \begin{array} {c} 1 \\ 0 \\ 0 \\ \end{array} \right] \end{array} = \begin{array} {c} M\mathrm{H} \\ \left[ \begin{array} {c} 1 \\ 0 \\ \end{array} \right] \end{array}$

As for $\mathrm{T}W$, that's easy, because $W$ — being a unity-weight matrix — is an identity matrix, so it's equal simply to $\mathrm{T}$. But regarding $M\mathrm{T}W = M\mathrm{T}$, that would be helpful to compute in advance:

$\begin{array} {c} M \\ \left[ \begin{array} {c} 1 & 2 & 3 \\ 0 & {-3} & {-5} \\ \end{array} \right] \end{array} \begin{array} {ccc} \mathrm{T} \\ \left[ \begin{array} {r|r|r|r|r|r|r|r} \;\;1 & \;\;\;0 & {-1} & 2 & {-1} & 0 & {-2} & 1 \\ 0 & 1 & 1 & {-1} & 0 & {-1} & 0 & 1 \\ 0 & 0 & 0 & 0 & 1 & 1 & 1 & {-1} \\ \end{array} \right] \end{array} = \begin{array} {ccc} M\mathrm{T} \\ \left[ \begin{array} {r|r|r|r|r|r|r|r} \;\;1 & 2 & {1} & \;\;\;0 & 2 & 1 & 1 & \;\;0 \\ 0 & {-3} & {-3} & 3 & {-5} & {-2} & {-5} & 2 \\ \end{array} \right] \end{array}$

And so $M\mathrm{T}W(M\mathrm{T}W)^\mathsf{T}$ would be:

$\begin{array} {ccc} M\mathrm{T}W \\ \left[ \begin{array} {r|r|r|r|r|r|r|r} \;\;1 & 2 & {1} & \;\;\;0 & 2 & 1 & 1 & \;\;0 \\ 0 & {-3} & {-3} & 3 & {-5} & {-2} & {-5} & 2 \\ \end{array} \right] \end{array} \begin{array} {ccc} (M\mathrm{T}W)^\mathsf{T} \\ \left[ \begin{array} {c} 1 & 0 \\ \hline 2 & {-3} \\ \hline {1} & {-3} \\ \hline 0 & 3 \\ \hline 2 & {-5} \\ \hline 1 & {-2} \\ \hline 1 & {-5} \\ \hline 0 & 2 \\ \end{array} \right] \end{array} = \begin{array} {ccc} M\mathrm{T}W(M\mathrm{T}W)^\mathsf{T} \\ \left[ \begin{array} {c} 12 & {-26} \\ {-26} & 85 \\ \end{array} \right] \end{array}$

And finally, $\mathrm{T}W(M\mathrm{T}W)^\mathsf{T}$:

$\begin{array} {ccc} \mathrm{T}W \\ \left[ \begin{array} {r|r|r|r|r|r|r|r} \;\;1 & \;\;\;0 & {-1} & 2 & {-1} & 0 & {-2} & 1 \\ 0 & 1 & 1 & {-1} & 0 & {-1} & 0 & 1 \\ 0 & 0 & 0 & 0 & 1 & 1 & 1 & {-1} \\ \end{array} \right] \end{array} \begin{array} {ccc} (M\mathrm{T}W)^\mathsf{T} \\ \left[ \begin{array} {c} 1 & 0 \\ \hline 2 & {-3} \\ \hline {1} & {-3} \\ \hline 0 & 3 \\ \hline 2 & {-5} \\ \hline 1 & {-2} \\ \hline 1 & {-5} \\ \hline 0 & 2 \\ \end{array} \right] \end{array} = \begin{array} {ccc} \mathrm{T}W(M\mathrm{T}W)^\mathsf{T} \\ \left[ \begin{array} {c} {-4} & 26 \\ 2 & {-5} \\ 4 & {-14} \\ \end{array} \right] \end{array}$

Now we just have to plug all that into our formula for $𝒈$ (and $\textbf{λ}$, though again, we don't really care what it comes out to):

$\left[ \begin{array} {cc|c} g_1 & g_2 & λ_1 \\ \end{array} \right] = 𝒋 \left[ \begin{array} {c|c} \mathrm{T}W(M\mathrm{T}W)^\mathsf{T} & \mathrm{H} \\ \end{array} \right] \left[ \begin{array} {c|c} \\ M\mathrm{T}W(M\mathrm{T}W)^\mathsf{T} & M\mathrm{H} \\ \hline \quad (M\mathrm{H})^\mathsf{T} \quad & 0 \\ \end{array} \right]^{\large -1}$

So that's:

\begin{align} \left[ \begin{array} {cc|c} g_1 & g_2 & λ_1 \\ \end{array} \right] &= \begin{array} {c} 𝒋 \\ \left[ \begin{array} {c} 1200.000 & 1901.955 & 2786.314 \\ \end{array} \right] \end{array} \begin{array} {c} \begin{array} {c|c} \mathrm{T}W(M\mathrm{T}W)^\mathsf{T} & \mathrm{H} \\ \end{array} \\ \left[ \begin{array} {cc|c} {-4} & 26 & 1 \\ 2 & {-5} & 0 \\ 4 & {-14} & 0 \\ \end{array} \right] \end{array} \begin{array} {c} \begin{array} {c|c} M\mathrm{T}W(M\mathrm{T}W)^\mathsf{T} & M\mathrm{H} \\ \hline \quad (M\mathrm{H})^\mathsf{T} \quad & 0 \\ \end{array} \\ \left[ \begin{array} {cc|c} 12 & {-26} & 1 \\ {-26} & 85 & 0 \\ \hline 1 & 0 & 0 \\ \end{array} \right]^{\large -1} \end{array} \\ &= \left[ \begin{array} {cc|c} 1200.000 & 163.316 & {-4.627} \\ \end{array} \right] \end{align}

So as expected, our $λ_1$ value came out negative, because of our sign-switching earlier. But what we're really interested in are the first two entries of that map, which are $g_1$ and $g_2$. Our desired $𝒈$ is {1200.000 163.316]. Huzzah!
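If you'd like to reproduce that arithmetic yourself, here's the same computation as a few lines of NumPy, using the matrices exactly as displayed above:

```python
import numpy as np

# The matrices exactly as displayed above:
j = np.array([1200.000, 1901.955, 2786.314])
left = np.array([[-4, 26, 1],
                 [ 2, -5, 0],
                 [ 4, -14, 0]])    # [ TW(MTW)ᵀ | H ]
aug = np.array([[ 12, -26, 1],
                [-26,  85, 0],
                [  1,   0, 0]])    # [[ MTW(MTW)ᵀ, MH ], [ (MH)ᵀ, 0 ]]

result = j @ left @ np.linalg.inv(aug)
print(np.round(result, 3))  # ≈ [1200., 163.316, -4.627]
```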

For comparison's sake, we can repeat this, but without the unchanged octave:

\begin{align} \left[ \begin{array} {cc} g_1 & g_2 \\ \end{array} \right] &= \begin{array} {c} 𝒋 \\ \left[ \begin{array} {c} 1200.000 & 1901.955 & 2786.314 \\ \end{array} \right] \end{array} \begin{array} {c} \mathrm{T}W(M\mathrm{T}W)^\mathsf{T} \\ \left[ \begin{array} {c} {-4} & 26 \\ 2 & {-5} \\ 4 & {-14} \\ \end{array} \right] \end{array} \begin{array} {c} M\mathrm{T}W(M\mathrm{T}W)^\mathsf{T} \\ \left[ \begin{array} {cc} 12 & {-26} \\ {-26} & 85 \\ \end{array} \right]^{\large -1} \end{array} \\ &= \left[ \begin{array} {cc} 1198.857 & 162.966 \\ \end{array} \right] \end{align}

And that's all there is to it.
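Again for reproducibility, here's that held-interval-free computation in NumPy. Note that since $W$ is an identity here ($\mathrm{T}W = \mathrm{T}$ and $M\mathrm{T}W = M\mathrm{T}$), we can hand the product $M\mathrm{T}$ directly to a library pseudoinverse routine:

```python
import numpy as np

j = np.array([1200.000, 1901.955, 2786.314])
M = np.array([[1, 2, 3], [0, -3, -5]])
T = np.array([[1, 0, -1, 2, -1, 0, -2, 1],
              [0, 1, 1, -1, 0, -1, 0, 1],
              [0, 0, 0, 0, 1, 1, 1, -1]])

# g = j TW (MTW)⁺, with W an identity:
g = j @ T @ np.linalg.pinv(M @ T)
print(np.round(g, 3))  # ≈ [1198.857, 162.966]
```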

## For all-interval tuning schemes

So far we've looked at how to use the linear algebra operation called the pseudoinverse to compute miniRMS tunings. We can use a variation of that approach to solve Euclideanized all-interval tuning schemes. So where miniRMS tuning schemes are those whose optimization power $p$ is equal to $2$, all-interval minimax-ES tuning schemes are those whose dual norm power $\text{dual}(q)$ is equal to $2$.

### Setup

The pseudoinverse of a matrix $A$ is notated as $A^{+}$, and for convenience, here's its equation again:

$A^{+} = A^\mathsf{T}(AA^\mathsf{T})^{-1}$
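For a matrix with full row rank, this formula agrees with the pseudoinverse as computed by a standard library routine. A quick NumPy check:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(2, 5))   # a random matrix, full row rank almost surely

# Aᵀ(AAᵀ)⁻¹ versus the library's SVD-based pseudoinverse:
manual = A.T @ np.linalg.inv(A @ A.T)
print(np.allclose(manual, np.linalg.pinv(A)))  # → True
```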

For ordinary tunings, we find $G$ to be:

$G = \mathrm{T}W(M\mathrm{T}W)^{+} = \mathrm{T}W(M\mathrm{T}W)^\mathsf{T}(M\mathrm{T}W(M\mathrm{T}W)^\mathsf{T})^{-1}$

So for all-interval tunings, we simply substitute in our all-interval analogous objects, and find it to be:

$G = \mathrm{T}_{\text{p}}S_{\text{p}}(M\mathrm{T}_{\text{p}}S_{\text{p}})^{+} = \mathrm{T}_{\text{p}}S_{\text{p}}(M\mathrm{T}_{\text{p}}S_{\text{p}})^\mathsf{T}(M\mathrm{T}_{\text{p}}S_{\text{p}}(M\mathrm{T}_{\text{p}}S_{\text{p}})^\mathsf{T})^{-1}$

That's a lot of $\mathrm{T}_{\text{p}}$, though, and we know those are equal to $I$, so let's eliminate them:

$G = S_{\text{p}}(MS_{\text{p}})^{+} = S_{\text{p}}(MS_{\text{p}})^\mathsf{T}(MS_{\text{p}}(MS_{\text{p}})^\mathsf{T})^{-1}$

### Example

So suppose we want the minimax-ES tuning of meantone temperament, where $M$ = [{1 1 0] {0 1 4]} and $C_{\text{p}} = L$. Basically we just need to compute $MS_{\text{p}}$:

$\begin{array}{c} M \\ \left[ \begin{array} {r} 1 & 1 & 0 \\ 0 & 1 & 4 \\ \end{array} \right] \end{array} \begin{array}{c} S_{\text{p}} \\ \left[ \begin{array} {r} \frac{1}{\log_2(2)} & 0 & 0 \\ 0 & \frac{1}{\log_2(3)} & 0 \\ 0 & 0 & \frac{1}{\log_2(5)} \\ \end{array} \right] \end{array} = \begin{array}{c} MS_{\text{p}} \\ \left[ \begin{array} {r} \frac{1}{\log_2(2)} & \frac{1}{\log_2(3)} & 0 \\ 0 & \frac{1}{\log_2(3)} & \frac{4}{\log_2(5)} \\ \end{array} \right] \end{array}$

And plug that in a few times, two of them transposed:

$G = \begin{array}{c} S_{\text{p}} \\ \left[ \begin{array} {r} \frac{1}{\log_2(2)} & 0 & 0 \\ 0 & \frac{1}{\log_2(3)} & 0 \\ 0 & 0 & \frac{1}{\log_2(5)} \\ \end{array} \right] \end{array} \begin{array}{c} (MS_{\text{p}})^\mathsf{T} \\ \left[ \begin{array} {r} \frac{1}{\log_2(2)} & 0 \\ \frac{1}{\log_2(3)} & \frac{1}{\log_2(3)} \\ 0 & \frac{4}{\log_2(5)} \\ \end{array} \right] \end{array} \Huge ( \normalsize \begin{array}{c} MS_{\text{p}} \\ \left[ \begin{array} {r} \frac{1}{\log_2(2)} & \frac{1}{\log_2(3)} & 0 \\ 0 & \frac{1}{\log_2(3)} & \frac{4}{\log_2(5)} \\ \end{array} \right] \end{array} \begin{array}{c} (MS_{\text{p}})^\mathsf{T} \\ \left[ \begin{array} {r} \frac{1}{\log_2(2)} & 0 \\ \frac{1}{\log_2(3)} & \frac{1}{\log_2(3)} \\ 0 & \frac{4}{\log_2(5)} \\ \end{array} \right] \end{array} \Huge )^{\Large -1} \normalsize$

Work that out and you get (at this point we'll convert to decimal form):

$G = \left[ \begin{array} {r} 0.740 & {-0.088} \\ 0.260 & 0.088\\ {-0.065} & 0.228\\ \end{array} \right]$

And when we multiply $𝒋$ by that, we get the generator tuning map $𝒈$ for the minimax-ES tuning of meantone, ⟨1201.397 697.049].
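If you'd like to sanity-check this arithmetic without the Wolfram library, here's a short NumPy sketch that reproduces $G$ and $𝒈$. NumPy, and the variable names, are our assumptions here, not part of D&D's RTT library:

```python
import numpy as np

# Meantone mapping M, and simplicity pretransformer S_p = L^-1:
# the reciprocals of the log2-primes on the diagonal.
M = np.array([[1., 1., 0.],
              [0., 1., 4.]])
S_p = np.diag(1 / np.log2([2., 3., 5.]))

MS_p = M @ S_p
# G = S_p (MS_p)^T (MS_p (MS_p)^T)^-1
G = S_p @ MS_p.T @ np.linalg.inv(MS_p @ MS_p.T)

j = 1200 * np.log2([2., 3., 5.])  # just tuning map 𝒋
g = j @ G                          # generator tuning map 𝒈
print(np.round(G, 3))              # matches the matrix above
print(np.round(g, 3))              # ≈ [1201.397  697.049]
```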

## With alternative complexities

The following examples all pick up from a shared setup here: Dave Keenan & Douglas Blumeyer's guide to RTT: alternative complexities#Computing all-interval tuning schemes with alternative complexities.

For all of the complexities used here (at least the first several, more basic ones), our formula will be:

$G = S_{\text{p}}(MS_{\text{p}})^{+} = S_{\text{p}}(MS_{\text{p}})^\mathsf{T}(MS_{\text{p}}(MS_{\text{p}})^\mathsf{T})^{-1}$

### Minimax-E-S

This example specifically picks up from the setup laid out here: Dave Keenan & Douglas Blumeyer's guide to RTT: alternative complexities#Log-product2. Plugging $L^{-1}$ into our pseudoinverse method for $S_{\text{p}}$ we find:

$G = L^{-1}(ML^{-1})^\mathsf{T}(ML^{-1}(ML^{-1})^\mathsf{T})^{-1}$

We already have computed $ML^{-1}$, so plug that in a few times, two of them transposed:

$G = \begin{array}{c} L^{-1} \\ \left[ \begin{array} {r} \frac{1}{\log_2(2)} & 0 & 0 \\ 0 & \frac{1}{\log_2(3)} & 0 \\ 0 & 0 & \frac{1}{\log_2(5)} \\ \end{array} \right] \end{array} \begin{array}{c} (ML^{-1})^\mathsf{T} \\ \left[ \begin{array} {r} \frac{1}{\log_2(2)} & 0 \\ \frac{2}{\log_2(3)} & \frac{-3}{\log_2(3)} \\ \frac{3}{\log_2(5)} & \frac{-5}{\log_2(5)} \\ \end{array} \right] \end{array} \Huge ( \normalsize \begin{array}{c} ML^{-1} \\ \left[ \begin{array} {r} \frac{1}{\log_2(2)} & \frac{2}{\log_2(3)} & \frac{3}{\log_2(5)} \\ 0 & \frac{-3}{\log_2(3)} & \frac{-5}{\log_2(5)} \\ \end{array} \right] \end{array} \begin{array}{c} (ML^{-1})^\mathsf{T} \\ \left[ \begin{array} {r} \frac{1}{\log_2(2)} & 0 \\ \frac{2}{\log_2(3)} & \frac{-3}{\log_2(3)} \\ \frac{3}{\log_2(5)} & \frac{-5}{\log_2(5)} \\ \end{array} \right] \end{array} \Huge )^{\Large -1} \normalsize$

Work that out and you get (at this point we'll convert to decimal form):

$G = \left[ \begin{array} {r} 0.991 & 0.623 \\ 0.044 & {-0.117} \\ {-0.027} & {-0.129}\\ \end{array} \right]$

And when we multiply $𝒋$ by that, we get the generator tuning map $𝒈$ for the minimax-ES tuning of porcupine, ⟨1199.562 163.891].

This too can be computed easily with the Wolfram Library:

In:  optimizeGeneratorTuningMap["[⟨1 2 3] ⟨0 -3 -5]]", "minimax-ES"]
Out: ⟨1199.562 163.891]
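For an independent numerical check that doesn't rely on the Wolfram library, the formula can also be wrapped in a small helper and applied to porcupine. This is a NumPy sketch; the function name and structure are our own, not the library's:

```python
import numpy as np

def pseudoinverse_g(M, S_p):
    """Return 𝒈 = 𝒋 S_p (MS_p)^T (MS_p (MS_p)^T)^-1, given a
    5-limit mapping M and a simplicity pretransformer S_p."""
    j = 1200 * np.log2([2., 3., 5.])   # just tuning map 𝒋
    MS_p = M @ S_p
    G = S_p @ MS_p.T @ np.linalg.inv(MS_p @ MS_p.T)
    return j @ G

M = np.array([[1., 2., 3.],                   # porcupine
              [0., -3., -5.]])
L_inv = np.diag(1 / np.log2([2., 3., 5.]))    # S_p for minimax-ES
g = pseudoinverse_g(M, L_inv)
print(np.round(g, 3))                          # ≈ [1199.562  163.891]
```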


### Minimax-E-sopfr-S

This example specifically picks up from the setup laid out here: Dave Keenan & Douglas Blumeyer's guide to RTT: alternative complexities#Sum-of-prime-factors-with-repetition2. Plugging $\text{diag}(𝒑)^{-1}$ into our pseudoinverse method for $S_{\text{p}}$ we find:

$G = \text{diag}(𝒑)^{-1}(M\text{diag}(𝒑)^{-1})^\mathsf{T}(M\text{diag}(𝒑)^{-1}(M\text{diag}(𝒑)^{-1})^\mathsf{T})^{-1}$

We already have $M\text{diag}(𝒑)^{-1}$ computed, so we plug that in a few times, two of them transposed:

$G = \begin{array}{c} \text{diag}(𝒑)^{-1} \\ \left[ \begin{array} {r} \frac{1}{2} & 0 & 0 \\ 0 & \frac{1}{3} & 0 \\ 0 & 0 & \frac{1}{5} \\ \end{array} \right] \end{array} \begin{array}{c} (M\text{diag}(𝒑)^{-1})^\mathsf{T} \\ \left[ \begin{array} {r} \frac{1}{2} & 0 \\ \frac{2}{3} & \frac{-3}{3} \\ \frac{3}{5} & \frac{-5}{5} \\ \end{array} \right] \end{array} \Huge ( \normalsize \begin{array}{c} M\text{diag}(𝒑)^{-1} \\ \left[ \begin{array} {r} \frac{1}{2} & \frac{2}{3} & \frac{3}{5} \\ 0 & \frac{-3}{3} & \frac{-5}{5} \\ \end{array} \right] \end{array} \begin{array}{c} (M\text{diag}(𝒑)^{-1})^\mathsf{T} \\ \left[ \begin{array} {r} \frac{1}{2} & 0 \\ \frac{2}{3} & \frac{-3}{3} \\ \frac{3}{5} & \frac{-5}{5} \\ \end{array} \right] \end{array} \Huge )^{\Large -1} \normalsize$

Work that out and you get:

$G = \left[ \begin{array} {r} \frac{225}{227} & \frac{285}{454} \\ \frac{10}{227} & \frac{-63}{454} \\ \frac{-6}{227} & \frac{-53}{454} \\ \end{array} \right]$

And when we multiply $𝒋$ by that, we get the generator tuning map $𝒈$ for the minimax-E-sopfr-S tuning of porcupine, ⟨1199.567 164.102].

This too can be computed easily with the Wolfram Library:

In:  optimizeGeneratorTuningMap["[⟨1 2 3] ⟨0 -3 -5]]", "minimax-E-sopfr-S"]
Out: ⟨1199.567 164.102]
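The same kind of NumPy cross-check (a sketch, not library code) confirms both the exact fractions in $G$ and the resulting generator tuning map:

```python
import numpy as np

M = np.array([[1., 2., 3.],      # porcupine
              [0., -3., -5.]])
S_p = np.diag([1/2, 1/3, 1/5])   # diag(𝒑)^-1 for sopfr complexity

MS_p = M @ S_p
G = S_p @ MS_p.T @ np.linalg.inv(MS_p @ MS_p.T)
j = 1200 * np.log2([2., 3., 5.])
g = j @ G
print(np.round(g, 3))            # ≈ [1199.567  164.102]
# the top-left entry of G should be the exact fraction 225/227
print(np.isclose(G[0, 0], 225/227))  # True
```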


### Minimax-E-copfr-S

This example specifically picks up from the setup laid out here: Dave Keenan & Douglas Blumeyer's guide to RTT: alternative complexities#Count-of-prime-factors-with-repetition2. Plugging $I$ into our pseudoinverse method for $S_{\text{p}}$ we find:

$G = I(MI)^\mathsf{T}(MI(MI)^\mathsf{T})^{-1} = M^\mathsf{T}(MM^\mathsf{T})^{-1} = M^{+}$

That's right: our answer is simply the pseudoinverse of the mapping.

$G = \begin{array}{c} M^\mathsf{T} \\ \left[ \begin{array} {r} 1 & 0 \\ 2 & {-3} \\ 3 & {-5} \\ \end{array} \right] \end{array} \Huge ( \normalsize \begin{array}{c} M \\ \left[ \begin{array} {r} 1 & 2 & 3 \\ 0 & {-3} & {-5} \\ \end{array} \right] \end{array} \begin{array}{c} M^\mathsf{T} \\ \left[ \begin{array} {r} 1 & 0 \\ 2 & {-3} \\ 3 & {-5} \\ \end{array} \right] \end{array} \Huge )^{\Large -1} \normalsize$

Work that out and you get:

$G = \left[ \begin{array} {r} \frac{34}{35} & \frac{3}{5} \\ \frac{1}{7} & 0 \\ \frac{-3}{35} & \frac{-1}{5} \\ \end{array} \right]$

And when we multiply $𝒋$ by that, we get the generator tuning map $𝒈$ for the minimax-E-copfr-S tuning of porcupine, ⟨1198.595 162.737].

This too can be computed easily with the Wolfram Library:

In:  optimizeGeneratorTuningMap["[⟨1 2 3] ⟨0 -3 -5]]", "minimax-E-copfr-S"]
Out: ⟨1198.595 162.737]
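A NumPy sketch (an assumption, as before) makes this collapse vivid: we can call an off-the-shelf Moore–Penrose pseudoinverse directly, with no pretransformer at all:

```python
import numpy as np

M = np.array([[1., 2., 3.],      # porcupine
              [0., -3., -5.]])
# With S_p = I, the formula collapses to the plain pseudoinverse of M.
G = np.linalg.pinv(M)
j = 1200 * np.log2([2., 3., 5.])
g = j @ G
print(np.round(g, 3))            # ≈ [1198.595  162.737]
print(np.isclose(G[0, 0], 34/35))  # True: matches the exact fractions above
```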


### Minimax-E-lils-S

This example specifically picks up from the setup laid out here: Dave Keenan & Douglas Blumeyer's guide to RTT: alternative complexities#Log-integer-limit-squared2.

As for the minimax-E-lils-S tuning, we use the pseudoinverse method, but with the same augmented matrices as will be discussed for the minimax-lils-S tuning later in this article. Well, we've established our $MS_{\text{p}}$ equivalent, but we still need an equivalent for $S_{\text{p}}$ alone. This is $L^{-1}$, but with an extra 1 appended before the reciprocals of the log-primes are diagonalized:

$\begin{array} {c} \text{equiv. of} \; S_{\text{p}} \\ \left[ \begin{array} {ccc|c} \frac{1}{\log_2(2)} & 0 & 0 & \style{background-color:#FFF200;padding:5px}{0} \\ 0 & \frac{1}{\log_2(3)} & 0 & \style{background-color:#FFF200;padding:5px}{0} \\ 0 & 0 & \frac{1}{\log_2(5)} & \style{background-color:#FFF200;padding:5px}{0} \\ \hline \style{background-color:#FFF200;padding:5px}{0} & \style{background-color:#FFF200;padding:5px}{0} & \style{background-color:#FFF200;padding:5px}{0} & \style{background-color:#FFF200;padding:5px}{1} \\ \end{array} \right] \end{array}$

So plugging in to

$G = S_{\text{p}}(MS_{\text{p}})^\mathsf{T}(MS_{\text{p}}(MS_{\text{p}})^\mathsf{T})^{-1}$

We get:

$G = \begin{array}{c} S_{\text{p}} \\ \left[ \begin{array} {r} \frac{1}{\log_2(2)} & 0 & 0 & \style{background-color:#FFF200;padding:5px}{0} \\ 0 & \frac{1}{\log_2(3)} & 0 & \style{background-color:#FFF200;padding:5px}{0} \\ 0 & 0 & \frac{1}{\log_2(5)} & \style{background-color:#FFF200;padding:5px}{0} \\ \hline \style{background-color:#FFF200;padding:5px}{0} & \style{background-color:#FFF200;padding:5px}{0} & \style{background-color:#FFF200;padding:5px}{0} & \style{background-color:#FFF200;padding:5px}{1} \\ \end{array} \right] \end{array} \begin{array}{c} (MS_{\text{p}})^\mathsf{T} \\ \left[ \begin{array} {rr|r} \frac{1}{\log_2(2)} & 0 & \style{background-color:#FFF200;padding:5px}{1} \\ \frac{2}{\log_2(3)} & \frac{-3}{\log_2(3)} & \style{background-color:#FFF200;padding:5px}{1} \\ \frac{3}{\log_2(5)} & \frac{-5}{\log_2(5)} & \style{background-color:#FFF200;padding:5px}{1} \\ \hline \style{background-color:#FFF200;padding:5px}{0} & \style{background-color:#FFF200;padding:5px}{0} & \style{background-color:#FFF200;padding:5px}{-1} \\ \end{array} \right] \end{array} \Huge ( \normalsize \begin{array}{c} MS_{\text{p}} \\ \left[ \begin{array} {r} \frac{1}{\log_2(2)} & \frac{2}{\log_2(3)} & \frac{3}{\log_2(5)} & \style{background-color:#FFF200;padding:5px}{0} \\ 0 & \frac{-3}{\log_2(3)} & \frac{-5}{\log_2(5)} & \style{background-color:#FFF200;padding:5px}{0} \\ \hline \style{background-color:#FFF200;padding:5px}{1} & \style{background-color:#FFF200;padding:5px}{1} & \style{background-color:#FFF200;padding:5px}{1} & \style{background-color:#FFF200;padding:5px}{-1} \\ \end{array} \right] \end{array} \begin{array}{c} (MS_{\text{p}})^\mathsf{T} \\ \left[ \begin{array} {rr|r} \frac{1}{\log_2(2)} & 0 & \style{background-color:#FFF200;padding:5px}{1} \\ \frac{2}{\log_2(3)} & \frac{-3}{\log_2(3)} & \style{background-color:#FFF200;padding:5px}{1} \\ \frac{3}{\log_2(5)} & \frac{-5}{\log_2(5)} & \style{background-color:#FFF200;padding:5px}{1} \\ \hline 
\style{background-color:#FFF200;padding:5px}{0} & \style{background-color:#FFF200;padding:5px}{0} & \style{background-color:#FFF200;padding:5px}{-1} \\ \end{array} \right] \end{array} \Huge )^{\Large -1} \normalsize$

Work that out and you get (at this point we'll convert to decimal form):

$G = \left[ \begin{array} {rr|r} 0.991 & 0.623 & \style{background-color:#FFF200;padding:5px}{0.000} \\ 0.044 & {-0.117} & \style{background-color:#FFF200;padding:5px}{-0.002} \\ {-0.027} & {-0.129} & \style{background-color:#FFF200;padding:5px}{0.001} \\ \hline \style{background-color:#FFF200;padding:5px}{1.000} & \style{background-color:#FFF200;padding:5px}{0.137} & \style{background-color:#FFF200;padding:5px}{-1.000} \\ \end{array} \right]$

(Yet again, compare with the result for minimax-ES; same but augmented.)

And when we multiply the augmented version of our $𝒋$ by that, we get the generator tuning map $𝒈$ for the minimax-E-lils-S tuning of porcupine, ⟨1199.544 163.888 0.018]. Well, that last entry is only the $g_{\text{augmented}}$ result, which is junk, so we throw that part away.

This too can be computed easily with the Wolfram Library:

In:  optimizeGeneratorTuningMap["[⟨1 2 3] ⟨0 -3 -5]]", "minimax-E-lils-S"]
Out: ⟨1199.544 163.888]
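To check this augmented computation numerically, we can assemble the augmented matrices directly. This is a NumPy sketch of the setup described above, not the library's code, and the final assertion values are simply the article's:

```python
import numpy as np

log2p = np.log2([2., 3., 5.])
M = np.array([[1., 2., 3.],      # porcupine
              [0., -3., -5.]])

# lils augmentation: an extra 1 on the diagonal of S_p, and an extra
# all-ones row plus a -1 column on MS_p.
S_p = np.zeros((4, 4))
S_p[:3, :3] = np.diag(1 / log2p)
S_p[3, 3] = 1

MS_p = np.zeros((3, 4))
MS_p[:2, :3] = M @ np.diag(1 / log2p)
MS_p[2, :] = [1, 1, 1, -1]

G = S_p @ MS_p.T @ np.linalg.inv(MS_p @ MS_p.T)
j = np.append(1200 * log2p, 0)   # 𝒋 augmented with a junk 0
g = j @ G
print(np.round(g, 3))            # article's value: ⟨1199.544 163.888 0.018]
```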


### Minimax-E-lols-S

This example specifically picks up from the setup laid out here: Dave Keenan & Douglas Blumeyer's guide to RTT: alternative complexities#Log-odd-limit-squared2. We use the pseudoinverse method, with our same $MS_{\text{p}}$ and $S_{\text{p}}$ equivalents as from the minimax-E-lils-S examples:

$\begin{array}{c} \text{equiv. of} \; MS_{\text{p}} \\ \left[ \begin{array} {rrr|r} \frac{1}{\log_2(2)} & \frac{2}{\log_2(3)} & \frac{3}{\log_2(5)} & \style{background-color:#FFF200;padding:5px}{0} \\ 0 & \frac{-3}{\log_2(3)} & \frac{-5}{\log_2(5)} & \style{background-color:#FFF200;padding:5px}{0} \\ \hline \style{background-color:#FFF200;padding:5px}{1} & \style{background-color:#FFF200;padding:5px}{1} & \style{background-color:#FFF200;padding:5px}{1} & \style{background-color:#FFF200;padding:5px}{-1} \\ \end{array} \right] \end{array}$

$\begin{array} {c} \text{equiv. of} \; S_{\text{p}} \\ \left[ \begin{array} {ccc|c} \frac{1}{\log_2(2)} & 0 & 0 & \style{background-color:#FFF200;padding:5px}{0} \\ 0 & \frac{1}{\log_2(3)} & 0 & \style{background-color:#FFF200;padding:5px}{0} \\ 0 & 0 & \frac{1}{\log_2(5)} & \style{background-color:#FFF200;padding:5px}{0} \\ \hline \style{background-color:#FFF200;padding:5px}{0} & \style{background-color:#FFF200;padding:5px}{0} & \style{background-color:#FFF200;padding:5px}{0} & \style{background-color:#FFF200;padding:5px}{1} \\ \end{array} \right] \end{array}$

And we have our $\mathrm{U}$ = [1 0 0⟩, being the octave, but it's augmented to [1 0 0 1⟩, that last entry being its size. So this whole thing is blue on account of having to do with the held-interval augmentation, but its last entry is green because it's also yellow from the lils augmentation:

$\begin{array} {c} \mathrm{U} \\ \left[ \begin{array} {c} \style{background-color:#00AEEF;padding:5px}{1} \\ \style{background-color:#00AEEF;padding:5px}{0} \\ \style{background-color:#00AEEF;padding:5px}{0} \\ \style{background-color:#8DC73E;padding:5px}{1} \\ \end{array} \right] \end{array}$

And so we can think of our $M\mathrm{U}$ as our held-interval having been mapped. For this we must ask ourselves: what is $M$? We know what $MS_{\text{p}}$ is, but not really $M$ itself, i.e. in terms of its augmentation status. So, the present author is not sure, but is going with this: [1 0 0⟩ would normally map to [1 0⟩ in this temperament, and the third entry it needs in order to fit into the block matrices we're about to build would be mapped by the mapping's junk row, so why not just make it 0. That gives us:

$\begin{array} {c} M\mathrm{U} \\ \left[ \begin{array} {c} \style{background-color:#00AEEF;padding:5px}{1} \\ \style{background-color:#00AEEF;padding:5px}{0} \\ \style{background-color:#8DC73E;padding:5px}{0} \\ \end{array} \right] \end{array}$

Ah, and $𝒋$ is augmented with a 0 for the lils entry, which is just junk anyway:

$\begin{array} {c} 𝒋 \\ \left[ \begin{array} {cccc} 1200 & 1901.955 & 2786.314 & \style{background-color:#FFF200;padding:5px}{0} \\ \end{array} \right] \end{array}$

Now we need to plug this into the variation on the pseudoinverse formula that accounts for held-intervals:

$\left[ \begin{array} {cc|c|c} g_1 & g_2 & \style{background-color:#FFF200;padding:5px}{g_{\text{augmented}}} & \style{background-color:#00AEEF;padding:5px}{λ_1} \\ \end{array} \right] = 𝒋 \left[ \begin{array} {c|c} S_{\text{p}}(MS_{\text{p}})^\mathsf{T} & \style{background-color:#00AEEF;padding:5px}{\mathrm{U}} \\ \end{array} \right] \left[ \begin{array} {c|c} MS_{\text{p}}(MS_{\text{p}})^\mathsf{T} & \style{background-color:#00AEEF;padding:5px}{M\mathrm{U}} \\ \hline \quad \style{background-color:#00AEEF;padding:5px}{(M\mathrm{U})^\mathsf{T}} \quad & \style{background-color:#00AEEF;padding:5px}{0} \\ \end{array} \right]^{\large -1}$

So let's just start plugging in!

$\small \left[ \begin{array} {cc|c|c} g_1 & g_2 & \style{background-color:#FFF200;padding:5px}{g_{\text{augmented}}} & \style{background-color:#00AEEF;padding:5px}{λ_1} \\ \end{array} \right] = \begin{array} {c} 𝒋 \\ \left[ \begin{array} {cccc} 1200 & 1901.955 & 2786.314 & \style{background-color:#FFF200;padding:5px}{0} \\ \end{array} \right] \end{array} \left[ \begin{array} {c|c} \begin{array} {c} \text{equiv. of} \; S_{\text{p}} \\ \left[ \begin{array} {ccc|c} \frac{1}{\log_2(2)} & 0 & 0 & \style{background-color:#FFF200;padding:5px}{0} \\ 0 & \frac{1}{\log_2(3)} & 0 & \style{background-color:#FFF200;padding:5px}{0} \\ 0 & 0 & \frac{1}{\log_2(5)} & \style{background-color:#FFF200;padding:5px}{0} \\ \hline \style{background-color:#FFF200;padding:5px}{0} & \style{background-color:#FFF200;padding:5px}{0} & \style{background-color:#FFF200;padding:5px}{0} & \style{background-color:#FFF200;padding:5px}{1} \\ \end{array} \right] \end{array} \begin{array}{c} (MS_{\text{p}})^\mathsf{T} \\ \left[ \begin{array} {rr|r} \frac{1}{\log_2(2)} & 0 & \style{background-color:#FFF200;padding:5px}{1} \\ \frac{2}{\log_2(3)} & \frac{-3}{\log_2(3)} & \style{background-color:#FFF200;padding:5px}{1} \\ \frac{3}{\log_2(5)} & \frac{-5}{\log_2(5)} & \style{background-color:#FFF200;padding:5px}{1} \\ \hline \style{background-color:#FFF200;padding:5px}{0} & \style{background-color:#FFF200;padding:5px}{0} & \style{background-color:#FFF200;padding:5px}{-1} \\ \end{array} \right] \end{array} & \begin{array} {c} \mathrm{U} \\ \left[ \begin{array} {c} \style{background-color:#00AEEF;padding:5px}{1} \\ \style{background-color:#00AEEF;padding:5px}{0} \\ \style{background-color:#00AEEF;padding:5px}{0} \\ \style{background-color:#8DC73E;padding:5px}{1} \\ \end{array} \right] \end{array} \\ \end{array} \right] \left[ \begin{array} {c|c} \begin{array}{c} \text{equiv. of} \; MS_{\text{p}} \\ \left[ \begin{array} {rrr|r} \frac{1}{\log_2(2)} & \frac{2}{\log_2(3)} & \frac{3}{\log_2(5)} & \style{background-color:#FFF200;padding:5px}{0} \\ 0 & \frac{-3}{\log_2(3)} & \frac{-5}{\log_2(5)} & \style{background-color:#FFF200;padding:5px}{0} \\ \hline \style{background-color:#FFF200;padding:5px}{1} & \style{background-color:#FFF200;padding:5px}{1} & \style{background-color:#FFF200;padding:5px}{1} & \style{background-color:#FFF200;padding:5px}{-1} \\ \end{array} \right] \end{array} \begin{array}{c} (MS_{\text{p}})^\mathsf{T} \\ \left[ \begin{array} {rr|r} \frac{1}{\log_2(2)} & 0 & \style{background-color:#FFF200;padding:5px}{1} \\ \frac{2}{\log_2(3)} & \frac{-3}{\log_2(3)} & \style{background-color:#FFF200;padding:5px}{1} \\ \frac{3}{\log_2(5)} & \frac{-5}{\log_2(5)} & \style{background-color:#FFF200;padding:5px}{1} \\ \hline \style{background-color:#FFF200;padding:5px}{0} & \style{background-color:#FFF200;padding:5px}{0} & \style{background-color:#FFF200;padding:5px}{-1} \\ \end{array} \right] \end{array} & \begin{array} {c} M\mathrm{U} \\ \left[ \begin{array} {c} \style{background-color:#00AEEF;padding:5px}{1} \\ \style{background-color:#00AEEF;padding:5px}{0} \\ \style{background-color:#8DC73E;padding:5px}{0} \\ \end{array} \right] \end{array} \\ \hline \begin{array}{c} (M\mathrm{U})^\mathsf{T} \\ \left[ \begin{array} {r} \style{background-color:#00AEEF;padding:5px}{1} & \style{background-color:#00AEEF;padding:5px}{0} & \style{background-color:#8DC73E;padding:5px}{0} \\ \end{array} \right] \end{array} & \style{background-color:#00AEEF;padding:5px}{0} \\ \end{array} \right]^{\large -1}$

Now if you crunch all that on the right, you get ⟨1200 164.062 -0.211 -0.229]. So we can throw away both the lambda that helped us hold our octave unchanged, and the augmented generator that helped us account for the size of our intervals. We're left with our held-octave minimax-E-lils-S tuning, ⟨1200 164.062].

This too can be computed by the Wolfram Library:

In:  optimizeGeneratorTuningMap["[⟨1 2 3] ⟨0 -3 -5]]", "held-octave minimax-E-lils-S"]
Out: ⟨1200 164.062]
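That bordered (block-matrix) system can likewise be assembled numerically. The following NumPy sketch mirrors the formula worked through above; the variable names are our own, and the only hard guarantee we assert is the one the constraint provides, namely that the octave comes out at exactly 1200:

```python
import numpy as np

log2p = np.log2([2., 3., 5.])
M = np.array([[1., 2., 3.],      # porcupine
              [0., -3., -5.]])

# lils-augmented equivalents of S_p and MS_p
S_p = np.zeros((4, 4)); S_p[:3, :3] = np.diag(1/log2p); S_p[3, 3] = 1
MS_p = np.zeros((3, 4)); MS_p[:2, :3] = M @ np.diag(1/log2p); MS_p[2, :] = [1, 1, 1, -1]

U = np.array([[1.], [0.], [0.], [1.]])   # held octave, augmented with its size
MU = np.array([[1.], [0.], [0.]])        # the held-interval, mapped

j = np.append(1200 * log2p, 0)           # 𝒋 augmented with a junk 0

top = np.hstack([S_p @ MS_p.T, U])                   # [ S_p(MS_p)^T | U ]
bordered = np.block([[MS_p @ MS_p.T, MU],
                     [MU.T, np.zeros((1, 1))]])
x = j @ top @ np.linalg.inv(bordered)    # [g1, g2, g_augmented, λ1]
g = x[:2]                                # drop g_augmented and the lambda
print(np.round(x, 3))
print(np.round(g, 3))                    # held octave: g[0] is exactly 1200
```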


# Zero-damage method

The second optimization power we'll take a look at is $p = 1$, for miniaverage tuning schemes.

Note that miniaverage tunings have not been advocated by tuning theorists thus far. We've included this section largely in order to complete the set of methods with exact solutions, one for each of the key optimization powers $1$, $2$, and $∞$. So, you may prefer to skip ahead to the next section if you're feeling more practically minded. However, the method for $p = ∞$ is related but more complicated, and its explanation builds upon this method's explanation, so it may still be worth it to work through this one first.

The high-level summary here is that we're going to collect every tuning where one target-interval for each generator is tuned pure simultaneously. Then we will check each of those tunings' damages, and choose the tuning of those which causes the least damage.

## The zero-damage point set

The method for finding the miniaverage leverages the fact that the sum graph changes slope wherever a target-interval is tuned pure. The minimum must be found among the points where $r$ target-intervals are all tuned pure at once, where $r$ is the rank of the temperament. This is because this is the maximum number of linearly independent intervals that could be pure at once, given only $r$ generators to work with. You can imagine that for any point you could find where only $r - 1$ intervals were pure at once, that point would be found on a line along which all $r - 1$ of those intervals remain pure, but if you follow it far enough in one direction, you'll reach a point where one additional interval is also pure.

These points taken together are known as the zero-damage point set. This is the first of two methods we'll look at in this article which make use of a point set. The other is the method for finding the minimax, which uses a different point set called the "coinciding-damage point set"; this method is slightly trickier than the miniaverage one, though, and so we'll be looking at it next, right after we've covered the miniaverage method here.

So, in essence, this method works by narrowing the infinite space of tuning possibilities down to a finite set of points to check. We gather these zero-damage points, find the damage (specifically the sum of damages to the target-intervals, AKA the power sum where $p = 1$) at each point, and then choose the one with the minimum damage out of those. And that'll be our miniaverage tuning (unless there's a tie, but more on that later).
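Here is a sketch of that whole procedure in code, with its assumptions flagged loudly: NumPy; meantone with the 6-TILT target list laid out in the next section; and a plain unweighted sum of absolute errors as the damage statistic (actual miniaverage schemes may weight damages differently, so treat this as the skeleton of the method, not a reference implementation):

```python
import numpy as np
from itertools import combinations

j = 1200 * np.log2([2., 3., 5.])       # just tuning map 𝒋
M = np.array([[1., 1., 0.],            # meantone
              [0., 1., 4.]])
# 6-TILT target-intervals as prime-count vectors (columns of T):
# 2/1, 3/1, 3/2, 4/3, 5/2, 5/3, 5/4, 6/5
T = np.array([[1, 0, 0], [0, 1, 0], [-1, 1, 0], [2, -1, 0],
              [-1, 0, 1], [0, -1, 1], [-2, 0, 1], [1, 1, -1]]).T

best = None
for pair in combinations(range(T.shape[1]), 2):    # the 28 candidate points
    U = T[:, pair]                                  # unchanged-interval basis
    MU = M @ U
    if abs(np.linalg.det(MU)) < 1e-9:               # not actually a basis; skip
        continue
    g = (j @ U) @ np.linalg.inv(MU)                 # tunes both intervals of U pure
    damage = np.sum(np.abs((g @ M - j) @ T))        # sum of (unweighted) abs errors
    if best is None or damage < best[0]:
        best = (damage, g)
print(np.round(best[1], 4))                         # the miniaverage candidate
```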

## Gather and process zero-damage points

Let's practice this method by working through an example. For our target-interval list, we can use our recommended scheme, the truncated integer limit triangle (or "TILT" for short), colorized here so we'll be able to visualize their combinations better in the upcoming step. This is the 6-TILT, our default target list for 5-limit temperaments.

$\mathrm{T} = \begin{array} {c} \ \ \begin{array} {c} \textbf{i}_1 & \ \ \ \textbf{i}_2 & \ \ \ \textbf{i}_3 & \ \ \ \textbf{i}_4 & \ \ \ \textbf{i}_5 & \ \ \ \textbf{i}_6 & \ \ \ \textbf{i}_7 & \ \ \ \textbf{i}_8 \\ \frac21 & \ \ \ \frac31 & \ \ \ \frac32 & \ \ \ \frac43 & \ \ \ \frac52 & \ \ \ \frac53 & \ \ \ \frac54 & \ \ \ \frac65 \\ \end{array} \\ \left[ \begin{array} {r|r|r|r|r|r|r|r} \style{background-color:#F69289;padding:5px}{1} & \style{background-color:#FDBC42;padding:5px}{0} & \style{background-color:#FFF200;padding:5px}{-1} & \style{background-color:#98CC70;padding:5px}{2} & \style{background-color:#3FBC9D;padding:5px}{-1} & \style{background-color:#41B0E4;padding:5px}{0} & \style{background-color:#7977B8;padding:5px}{-2} & \style{background-color:#D883B7;padding:5px}{1} \\ \style{background-color:#F69289;padding:5px}{0} & \style{background-color:#FDBC42;padding:5px}{1} & \style{background-color:#FFF200;padding:5px}{1} & \style{background-color:#98CC70;padding:5px}{-1} & \style{background-color:#3FBC9D;padding:5px}{0} & \style{background-color:#41B0E4;padding:5px}{-1} & \style{background-color:#7977B8;padding:5px}{0} & \style{background-color:#D883B7;padding:5px}{1} \\ \style{background-color:#F69289;padding:5px}{0} & \style{background-color:#FDBC42;padding:5px}{0} & \style{background-color:#FFF200;padding:5px}{0} & \style{background-color:#98CC70;padding:5px}{0} & \style{background-color:#3FBC9D;padding:5px}{1} & \style{background-color:#41B0E4;padding:5px}{1} & \style{background-color:#7977B8;padding:5px}{1} & \style{background-color:#D883B7;padding:5px}{-1} \\ \end{array} \right] \end{array}$

And let's use a classic example for our temperament: meantone.

### Unchanged-interval bases

We can compute ahead of time how many points we should find in our zero-damage point set: it's simply the number of ways to choose $r$ target-intervals from our list of 8. With meantone being a rank-2 temperament, that's ${{8}\choose{2}} = 28$ points.

Each of these 28 points may be represented by an unchanged-interval basis, symbolized as $\mathrm{U}$. An unchanged-interval basis is simply a matrix where each column is a prime-count vector representing a different interval that the tuning of this temperament should leave unchanged. So for example, the matrix [[-1 1 0⟩ [0 -1 1⟩] tells us that $\frac32$ = [-1 1 0⟩ and $\frac53$ = [0 -1 1⟩ are to be left unchanged. (The "basis" part of the name tells us that furthermore every linear combination of these vectors is also left unchanged, such as 2×[-1 1 0⟩ + -1×[0 -1 1⟩ = [-2 3 -1⟩, AKA $\frac{27}{20}$. It also technically tells us that none of the vectors is already a linear combination of the others, i.e. that it is full-column-rank; this may not be true of all of these matrices we're assembling using this automatic procedure, but that's okay because any of these that aren't truly bases will be eliminated for that reason in the next step.)
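Here's a quick NumPy check (NumPy being our assumption, not part of the RTT library) of that linear-combination claim, and of the full-column-rank test that weeds out would-be bases:

```python
import numpy as np

# The example unchanged-interval basis: 3/2 and 5/3 as prime-count vectors.
U = np.array([[-1, 0],
              [1, -1],
              [0, 1]])

# Any linear combination is also unchanged: 2*(3/2) + -1*(5/3) = 27/20.
combo = U @ np.array([2, -1])
print(combo)                     # [-2  3 -1], i.e. 27/20
print(2**-2 * 3**3 * 5**-1)      # 1.35, which is 27/20

# Full-column-rank test: True means U really is a basis.
print(np.linalg.matrix_rank(U) == U.shape[1])  # True
```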

Note that this unchanged-interval basis $\mathrm{U}$ is different from our held-interval basis $\mathrm{H}$. There are a couple of main differences:

1. We didn't ask for these unchanged-interval bases $\mathrm{U}$; they're just coming up as part of this algorithm.
2. These unchanged-interval bases completely specify the tuning. A held-interval basis $\mathrm{H}$ has shape $(d, h)$ where $h \leq r$, but an unchanged-interval basis $\mathrm{U}$ always has shape $(d, r)$. (Remember, $r$ is the rank of the temperament, or in other words, the count of generators.)

So here's the full list of 28 unchanged-interval bases corresponding to the zero-damage points for any 5-limit rank-2 temperament (meantone or otherwise), given the 6-TILT as its target-interval set. Use the colorization to better understand the nature of these combinations:

$\begin{array} {c} \mathrm{U}_{(1,2)} \\ \ \ \begin{array} {rrr} \frac21 & \ \ \frac31 \\ \end{array} \\ \left[ \begin{array} {r|r} \style{background-color:#F69289;padding:5px}{1} & \style{background-color:#FDBC42;padding:5px}{0} \\ \style{background-color:#F69289;padding:5px}{0} & \style{background-color:#FDBC42;padding:5px}{1} \\ \style{background-color:#F69289;padding:5px}{0} & \style{background-color:#FDBC42;padding:5px}{0} \\ \end{array} \right] \end{array} , \begin{array} {c} \mathrm{U}_{(1,3)} \\ \ \ \begin{array} {rrr} \frac21 & \ \ \frac32 \\ \end{array} \\ \left[ \begin{array} {r|r} \style{background-color:#F69289;padding:5px}{1} & \style{background-color:#FFF200;padding:5px}{-1} \\ \style{background-color:#F69289;padding:5px}{0} & \style{background-color:#FFF200;padding:5px}{1} \\ \style{background-color:#F69289;padding:5px}{0} & \style{background-color:#FFF200;padding:5px}{0} \\ \end{array} \right] \end{array} , \begin{array} {c} \mathrm{U}_{(1,4)} \\ \ \ \begin{array} {rrr} \frac21 & \ \ \frac43 \\ \end{array} \\ \left[ \begin{array} {r|r} \style{background-color:#F69289;padding:5px}{1} & \style{background-color:#98CC70;padding:5px}{2} \\ \style{background-color:#F69289;padding:5px}{0} & \style{background-color:#98CC70;padding:5px}{-1} \\ \style{background-color:#F69289;padding:5px}{0} & \style{background-color:#98CC70;padding:5px}{0} \\ \end{array} \right] \end{array} , \begin{array} {c} \mathrm{U}_{(1,5)} \\ \ \ \begin{array} {rrr} \frac21 & \ \ \frac52 \\ \end{array} \\ \left[ \begin{array} {r|r} \style{background-color:#F69289;padding:5px}{1} & \style{background-color:#3FBC9D;padding:5px}{-1} \\ \style{background-color:#F69289;padding:5px}{0} & \style{background-color:#3FBC9D;padding:5px}{0} \\ \style{background-color:#F69289;padding:5px}{0} & \style{background-color:#3FBC9D;padding:5px}{1} \\ \end{array} \right] \end{array} , \begin{array} {c} \mathrm{U}_{(1,6)} \\ \ \ \begin{array} {rrr} \frac21 & \ \ \frac53 \\ \end{array} \\ \left[ 
\begin{array} {r|r} \style{background-color:#F69289;padding:5px}{1} & \style{background-color:#41B0E4;padding:5px}{0} \\ \style{background-color:#F69289;padding:5px}{0} & \style{background-color:#41B0E4;padding:5px}{-1} \\ \style{background-color:#F69289;padding:5px}{0} & \style{background-color:#41B0E4;padding:5px}{1} \\ \end{array} \right] \end{array} , \begin{array} {c} \mathrm{U}_{(1,7)} \\ \ \ \begin{array} {rrr} \frac21 & \ \ \frac54 \\ \end{array} \\ \left[ \begin{array} {r|r} \style{background-color:#F69289;padding:5px}{1} & \style{background-color:#7977B8;padding:5px}{-2} \\ \style{background-color:#F69289;padding:5px}{0} & \style{background-color:#7977B8;padding:5px}{0} \\ \style{background-color:#F69289;padding:5px}{0} & \style{background-color:#7977B8;padding:5px}{1} \\ \end{array} \right] \end{array} , \begin{array} {c} \mathrm{U}_{(1,8)} \\ \ \ \begin{array} {rrr} \frac21 & \ \ \frac65 \\ \end{array} \\ \left[ \begin{array} {r|r} \style{background-color:#F69289;padding:5px}{1} & \style{background-color:#D883B7;padding:5px}{1} \\ \style{background-color:#F69289;padding:5px}{0} & \style{background-color:#D883B7;padding:5px}{1} \\ \style{background-color:#F69289;padding:5px}{0} & \style{background-color:#D883B7;padding:5px}{-1} \\ \end{array} \right] \end{array} ,$

$\begin{array} {c} \mathrm{U}_{(2,3)} \\ \ \ \begin{array} {rrr} \frac31 & \ \ \frac32 \\ \end{array} \\ \left[ \begin{array} {r|r} \style{background-color:#FDBC42;padding:5px}{0} & \style{background-color:#FFF200;padding:5px}{-1} \\ \style{background-color:#FDBC42;padding:5px}{1} & \style{background-color:#FFF200;padding:5px}{1} \\ \style{background-color:#FDBC42;padding:5px}{0} & \style{background-color:#FFF200;padding:5px}{0} \\ \end{array} \right] \end{array} , \begin{array} {c} \mathrm{U}_{(2,4)} \\ \ \ \begin{array} {rrr} \frac31 & \ \ \frac43 \\ \end{array} \\ \left[ \begin{array} {r|r} \style{background-color:#FDBC42;padding:5px}{0} & \style{background-color:#98CC70;padding:5px}{2} \\ \style{background-color:#FDBC42;padding:5px}{1} & \style{background-color:#98CC70;padding:5px}{-1} \\ \style{background-color:#FDBC42;padding:5px}{0} & \style{background-color:#98CC70;padding:5px}{0} \\ \end{array} \right] \end{array} , \begin{array} {c} \mathrm{U}_{(2,5)} \\ \ \ \begin{array} {rrr} \frac31 & \ \ \frac52 \\ \end{array} \\ \left[ \begin{array} {r|r} \style{background-color:#FDBC42;padding:5px}{0} & \style{background-color:#3FBC9D;padding:5px}{-1} \\ \style{background-color:#FDBC42;padding:5px}{1} & \style{background-color:#3FBC9D;padding:5px}{0} \\ \style{background-color:#FDBC42;padding:5px}{0} & \style{background-color:#3FBC9D;padding:5px}{1} \\ \end{array} \right] \end{array} , \begin{array} {c} \mathrm{U}_{(2,6)} \\ \ \ \begin{array} {rrr} \frac31 & \ \ \frac53 \\ \end{array} \\ \left[ \begin{array} {r|r} \style{background-color:#FDBC42;padding:5px}{0} & \style{background-color:#41B0E4;padding:5px}{0} \\ \style{background-color:#FDBC42;padding:5px}{1} & \style{background-color:#41B0E4;padding:5px}{-1} \\ \style{background-color:#FDBC42;padding:5px}{0} & \style{background-color:#41B0E4;padding:5px}{1} \\ \end{array} \right] \end{array} , \begin{array} {c} \mathrm{U}_{(2,7)} \\ \ \ \begin{array} {rrr} \frac31 & \ \ \frac54 \\ \end{array} \\ \left[ 
\begin{array} {r|r} \style{background-color:#FDBC42;padding:5px}{0} & \style{background-color:#7977B8;padding:5px}{-2} \\ \style{background-color:#FDBC42;padding:5px}{1} & \style{background-color:#7977B8;padding:5px}{0} \\ \style{background-color:#FDBC42;padding:5px}{0} & \style{background-color:#7977B8;padding:5px}{1} \\ \end{array} \right] \end{array} , \begin{array} {c} \mathrm{U}_{(2,8)} \\ \ \ \begin{array} {rrr} \frac31 & \ \ \frac65 \\ \end{array} \\ \left[ \begin{array} {r|r} \style{background-color:#FDBC42;padding:5px}{0} & \style{background-color:#D883B7;padding:5px}{1} \\ \style{background-color:#FDBC42;padding:5px}{1} & \style{background-color:#D883B7;padding:5px}{1} \\ \style{background-color:#FDBC42;padding:5px}{0} & \style{background-color:#D883B7;padding:5px}{-1} \\ \end{array} \right] \end{array} ,$

$\begin{array} {c} \mathrm{U}_{(3,4)} \\ \ \ \begin{array} {rrr} \frac32 & \ \ \frac43 \\ \end{array} \\ \left[ \begin{array} {r|r} \style{background-color:#FFF200;padding:5px}{-1} & \style{background-color:#98CC70;padding:5px}{2} \\ \style{background-color:#FFF200;padding:5px}{1} & \style{background-color:#98CC70;padding:5px}{-1} \\ \style{background-color:#FFF200;padding:5px}{0} & \style{background-color:#98CC70;padding:5px}{0} \\ \end{array} \right] \end{array} , \begin{array} {c} \mathrm{U}_{(3,5)} \\ \ \ \begin{array} {rrr} \frac32 & \ \ \frac52 \\ \end{array} \\ \left[ \begin{array} {r|r} \style{background-color:#FFF200;padding:5px}{-1} & \style{background-color:#3FBC9D;padding:5px}{-1} \\ \style{background-color:#FFF200;padding:5px}{1} & \style{background-color:#3FBC9D;padding:5px}{0} \\ \style{background-color:#FFF200;padding:5px}{0} & \style{background-color:#3FBC9D;padding:5px}{1} \\ \end{array} \right] \end{array} , \begin{array} {c} \mathrm{U}_{(3,6)} \\ \ \ \begin{array} {rrr} \frac32 & \ \ \frac53 \\ \end{array} \\ \left[ \begin{array} {r|r} \style{background-color:#FFF200;padding:5px}{-1} & \style{background-color:#41B0E4;padding:5px}{0} \\ \style{background-color:#FFF200;padding:5px}{1} & \style{background-color:#41B0E4;padding:5px}{-1} \\ \style{background-color:#FFF200;padding:5px}{0} & \style{background-color:#41B0E4;padding:5px}{1} \\ \end{array} \right] \end{array} , \begin{array} {c} \mathrm{U}_{(3,7)} \\ \ \ \begin{array} {rrr} \frac32 & \ \ \frac54 \\ \end{array} \\ \left[ \begin{array} {r|r} \style{background-color:#FFF200;padding:5px}{-1} & \style{background-color:#7977B8;padding:5px}{-2} \\ \style{background-color:#FFF200;padding:5px}{1} & \style{background-color:#7977B8;padding:5px}{0} \\ \style{background-color:#FFF200;padding:5px}{0} & \style{background-color:#7977B8;padding:5px}{1} \\ \end{array} \right] \end{array} , \begin{array} {c} \mathrm{U}_{(3,8)} \\ \ \ \begin{array} {rrr} \frac32 & \ \ \frac65 \\ \end{array} \\ \left[ 
\begin{array} {r|r} \style{background-color:#FFF200;padding:5px}{-1} & \style{background-color:#D883B7;padding:5px}{1} \\ \style{background-color:#FFF200;padding:5px}{1} & \style{background-color:#D883B7;padding:5px}{1} \\ \style{background-color:#FFF200;padding:5px}{0} & \style{background-color:#D883B7;padding:5px}{-1} \\ \end{array} \right] \end{array} ,$

$\begin{array} {c} \mathrm{U}_{(4,5)} \\ \ \ \begin{array} {rrr} \frac43 & \ \ \frac52 \\ \end{array} \\ \left[ \begin{array} {r|r} \style{background-color:#98CC70;padding:5px}{2} & \style{background-color:#3FBC9D;padding:5px}{-1} \\ \style{background-color:#98CC70;padding:5px}{-1} & \style{background-color:#3FBC9D;padding:5px}{0} \\ \style{background-color:#98CC70;padding:5px}{0} & \style{background-color:#3FBC9D;padding:5px}{1} \\ \end{array} \right] \end{array} , \begin{array} {c} \mathrm{U}_{(4,6)} \\ \ \ \begin{array} {rrr} \frac43 & \ \ \frac53 \\ \end{array} \\ \left[ \begin{array} {r|r} \style{background-color:#98CC70;padding:5px}{2} & \style{background-color:#41B0E4;padding:5px}{0} \\ \style{background-color:#98CC70;padding:5px}{-1} & \style{background-color:#41B0E4;padding:5px}{-1} \\ \style{background-color:#98CC70;padding:5px}{0} & \style{background-color:#41B0E4;padding:5px}{1} \\ \end{array} \right] \end{array} , \begin{array} {c} \mathrm{U}_{(4,7)} \\ \ \ \begin{array} {rrr} \frac43 & \ \ \frac54 \\ \end{array} \\ \left[ \begin{array} {r|r} \style{background-color:#98CC70;padding:5px}{2} & \style{background-color:#7977B8;padding:5px}{-2} \\ \style{background-color:#98CC70;padding:5px}{-1} & \style{background-color:#7977B8;padding:5px}{0} \\ \style{background-color:#98CC70;padding:5px}{0} & \style{background-color:#7977B8;padding:5px}{1} \\ \end{array} \right] \end{array} , \begin{array} {c} \mathrm{U}_{(4,8)} \\ \ \ \begin{array} {rrr} \frac43 & \ \ \frac65 \\ \end{array} \\ \left[ \begin{array} {r|r} \style{background-color:#98CC70;padding:5px}{2} & \style{background-color:#D883B7;padding:5px}{1} \\ \style{background-color:#98CC70;padding:5px}{-1} & \style{background-color:#D883B7;padding:5px}{1} \\ \style{background-color:#98CC70;padding:5px}{0} & \style{background-color:#D883B7;padding:5px}{-1} \\ \end{array} \right] \end{array} ,$

$\begin{array} {c} \mathrm{U}_{(5,6)} \\ \ \ \begin{array} {rrr} \frac52 & \ \ \frac53 \\ \end{array} \\ \left[ \begin{array} {r|r} \style{background-color:#3FBC9D;padding:5px}{-1} & \style{background-color:#41B0E4;padding:5px}{0} \\ \style{background-color:#3FBC9D;padding:5px}{0} & \style{background-color:#41B0E4;padding:5px}{-1} \\ \style{background-color:#3FBC9D;padding:5px}{1} & \style{background-color:#41B0E4;padding:5px}{1} \\ \end{array} \right] \end{array} , \begin{array} {c} \mathrm{U}_{(5,7)} \\ \ \ \begin{array} {rrr} \frac52 & \ \ \frac54 \\ \end{array} \\ \left[ \begin{array} {r|r} \style{background-color:#3FBC9D;padding:5px}{-1} & \style{background-color:#7977B8;padding:5px}{-2} \\ \style{background-color:#3FBC9D;padding:5px}{0} & \style{background-color:#7977B8;padding:5px}{0} \\ \style{background-color:#3FBC9D;padding:5px}{1} & \style{background-color:#7977B8;padding:5px}{1} \\ \end{array} \right] \end{array} , \begin{array} {c} \mathrm{U}_{(5,8)} \\ \ \ \begin{array} {rrr} \frac52 & \ \ \frac65 \\ \end{array} \\ \left[ \begin{array} {r|r} \style{background-color:#3FBC9D;padding:5px}{-1} & \style{background-color:#D883B7;padding:5px}{1} \\ \style{background-color:#3FBC9D;padding:5px}{0} & \style{background-color:#D883B7;padding:5px}{1} \\ \style{background-color:#3FBC9D;padding:5px}{1} & \style{background-color:#D883B7;padding:5px}{-1} \\ \end{array} \right] \end{array} ,$

$\begin{array} {c} \mathrm{U}_{(6,7)} \\ \ \ \begin{array} {rrr} \frac53 & \ \ \frac54 \\ \end{array} \\ \left[ \begin{array} {r|r} \style{background-color:#41B0E4;padding:5px}{0} & \style{background-color:#7977B8;padding:5px}{-2} \\ \style{background-color:#41B0E4;padding:5px}{-1} & \style{background-color:#7977B8;padding:5px}{0} \\ \style{background-color:#41B0E4;padding:5px}{1} & \style{background-color:#7977B8;padding:5px}{1} \\ \end{array} \right] \end{array} , \begin{array} {c} \mathrm{U}_{(6,8)} \\ \ \ \begin{array} {rrr} \frac53 & \ \ \frac65 \\ \end{array} \\ \left[ \begin{array} {r|r} \style{background-color:#41B0E4;padding:5px}{0} & \style{background-color:#D883B7;padding:5px}{1} \\ \style{background-color:#41B0E4;padding:5px}{-1} & \style{background-color:#D883B7;padding:5px}{1} \\ \style{background-color:#41B0E4;padding:5px}{1} & \style{background-color:#D883B7;padding:5px}{-1} \\ \end{array} \right] \end{array} ,$

$\begin{array} {c} \mathrm{U}_{(7,8)} \\ \ \ \begin{array} {rrr} \frac54 & \ \ \frac65 \\ \end{array} \\ \left[ \begin{array} {r|r} \style{background-color:#7977B8;padding:5px}{-2} & \style{background-color:#D883B7;padding:5px}{1} \\ \style{background-color:#7977B8;padding:5px}{0} & \style{background-color:#D883B7;padding:5px}{1} \\ \style{background-color:#7977B8;padding:5px}{1} & \style{background-color:#D883B7;padding:5px}{-1} \\ \end{array} \right] \end{array}$

### Canonicalize and filter deficient matrices

But many of these unchanged-interval bases are actually redundant with each other, by which we mean that they correspond to the same tuning. Said another way, some of these unchanged-interval bases are different bases for the same set of unchanged-intervals.

In order to identify such redundancies, we put all of our unchanged-interval bases into canonical form, following the canonicalization process already described for comma bases; that process applies here because these, too, are bases given as tall matrices (more rows than columns) whose columns represent intervals. Putting matrices into canonical form is a way to determine whether, for some definition of "same", they represent the same information. Here's what they look like in that form (no more color from here on out; the point about combinations has been made):

$\scriptsize \begin{array} {c} \ \ \begin{array} {rrr} \frac21 & \ \ \frac31 \\ \end{array} \\ \left[ \begin{array} {r|r} 1 & 0 \\ 0 & 1 \\ 0 & 0 \\ \end{array} \right] \end{array} , \begin{array} {c} \ \ \begin{array} {rrr} \frac21 & \ \ \frac31 \\ \end{array} \\ \left[ \begin{array} {r|r} 1 & 0 \\ 0 & 1 \\ 0 & 0 \\ \end{array} \right] \end{array} , \begin{array} {c} \ \ \begin{array} {rrr} \frac21 & \ \ \frac31 \\ \end{array} \\ \left[ \begin{array} {r|r} 1 & 0 \\ 0 & 1 \\ 0 & 0 \\ \end{array} \right] \end{array} , \begin{array} {c} \ \ \begin{array} {rrr} \frac21 & \ \ \frac51 \\ \end{array} \\ \left[ \begin{array} {r|r} 1 & 0 \\ 0 & 0 \\ 0 & 1 \\ \end{array} \right] \end{array} , \begin{array} {c} \ \ \begin{array} {rrr} \frac21 & \ \ \frac53 \\ \end{array} \\ \left[ \begin{array} {r|r} 1 & 0 \\ 0 & {-1} \\ 0 & 1 \\ \end{array} \right] \end{array} , \begin{array} {c} \ \ \begin{array} {rrr} \frac21 & \ \ \frac51 \\ \end{array} \\ \left[ \begin{array} {r|r} 1 & 0 \\ 0 & 0 \\ 0 & 1 \\ \end{array} \right] \end{array} , \begin{array} {c} \ \ \begin{array} {rrr} \frac21 & \ \ \frac53 \\ \end{array} \\ \left[ \begin{array} {r|r} 1 & 0 \\ 0 & {-1} \\ 0 & 1 \\ \end{array} \right] \end{array} , \\[35pt] \scriptsize \begin{array} {c} \ \ \begin{array} {rrr} \frac21 & \ \ \frac31 \\ \end{array} \\ \left[ \begin{array} {r|r} 1 & 0 \\ 0 & 1 \\ 0 & 0 \\ \end{array} \right] \end{array} , \begin{array} {c} \ \ \begin{array} {rrr} \frac21 & \ \ \frac31 \\ \end{array} \\ \left[ \begin{array} {r|r} 1 & 0 \\ 0 & 1 \\ 0 & 0 \\ \end{array} \right] \end{array} , \begin{array} {c} \ \ \begin{array} {rrr} \frac31 & \ \ \frac52 \\ \end{array} \\ \left[ \begin{array} {r|r} 0 & {-1} \\ 1 & 0 \\ 0 & 1 \\ \end{array} \right] \end{array} , \begin{array} {c} \ \ \begin{array} {rrr} \frac31 & \ \ \frac51 \\ \end{array} \\ \left[ \begin{array} {r|r} 0 & 0 \\ 1 & 0 \\ 0 & 1 \\ \end{array} \right] \end{array} , \begin{array} {c} \ \ \begin{array} {rrr} \frac31 & \ \ \frac54 \\ \end{array} \\ 
\left[ \begin{array} {r|r} 0 & {-2} \\ 1 & 0 \\ 0 & 1 \\ \end{array} \right] \end{array} , \begin{array} {c} \ \ \begin{array} {rrr} \frac31 & \ \ \frac52 \\ \end{array} \\ \left[ \begin{array} {r|r} 0 & {-1} \\ 1 & 0 \\ 0 & 1 \\ \end{array} \right] \end{array} , \begin{array} {c} \ \ \begin{array} {rrr} \frac21 & \ \ \frac31 \\ \end{array} \\ \left[ \begin{array} {r|r} 1 & 0 \\ 0 & 1 \\ 0 & 0 \\ \end{array} \right] \end{array} , \\[35pt] \scriptsize \begin{array} {c} \ \ \begin{array} {rrr} \frac32 & \ \ \frac52 \\ \end{array} \\ \left[ \begin{array} {r|r} {-1} & {-1} \\ 1 & 0 \\ 0 & 1 \\ \end{array} \right] \end{array} , \begin{array} {c} \ \ \begin{array} {rrr} \frac32 & \ \ \frac52 \\ \end{array} \\ \left[ \begin{array} {r|r} {-1} & {-1} \\ 1 & 0 \\ 0 & 1 \\ \end{array} \right] \end{array} , \begin{array} {c} \ \ \begin{array} {rrr} \frac32 & \ \ \frac54 \\ \end{array} \\ \left[ \begin{array} {r|r} {-1} & {-2} \\ 1 & 0 \\ 0 & 1 \\ \end{array} \right] \end{array} , \begin{array} {c} \ \ \begin{array} {rrr} \frac32 & \ \ \frac54 \\ \end{array} \\ \left[ \begin{array} {r|r} {-1} & {-2} \\ 1 & 0 \\ 0 & 1 \\ \end{array} \right] \end{array} , \begin{array} {c} \ \ \begin{array} {rrr} \frac34 & \ \ \frac52 \\ \end{array} \\ \left[ \begin{array} {r|r} {-2} & {-1} \\ 1 & 0 \\ 0 & 1 \\ \end{array} \right] \end{array} , \begin{array} {c} \ \ \begin{array} {rrr} \frac34 & \ \ \frac54 \\ \end{array} \\ \left[ \begin{array} {r|r} {-2} & {-2} \\ 1 & 0 \\ 0 & 1 \\ \end{array} \right] \end{array} , \begin{array} {c} \ \ \begin{array} {rrr} \frac34 & \ \ \frac54 \\ \end{array} \\ \left[ \begin{array} {r|r} {-2} & {-2} \\ 1 & 0 \\ 0 & 1 \\ \end{array} \right] \end{array} , \\[35pt] \scriptsize \begin{array} {c} \ \ \begin{array} {rrr} \frac34 & \ \ \frac58 \\ \end{array} \\ \left[ \begin{array} {r|r} {-2} & {-3} \\ 1 & 0 \\ 0 & 1 \\ \end{array} \right] \end{array} , \begin{array} {c} \ \ \begin{array} {rrr} \frac32 & \ \ \frac52 \\ \end{array} \\ \left[ \begin{array} {r|r} {-1} & 
{-1} \\ 1 & 0 \\ 0 & 1 \\ \end{array} \right] \end{array} , \begin{array} {c} \ \ \begin{array} {rrr} \frac21 & \ \ \frac51 \\ \end{array} \\ \left[ \begin{array} {r|r} 1 & 0 \\ 0 & 0 \\ 0 & 1 \\ \end{array} \right] \end{array} , \begin{array} {c} \ \ \begin{array} {rrr} \frac31 & \ \ \frac52 \\ \end{array} \\ \left[ \begin{array} {r|r} 0 & {-1} \\ 1 & 0 \\ 0 & 1 \\ \end{array} \right] \end{array} , \begin{array} {c} \ \ \begin{array} {rrr} \frac34 & \ \ \frac54 \\ \end{array} \\ \left[ \begin{array} {r|r} {-2} & {-2} \\ 1 & 0 \\ 0 & 1 \\ \end{array} \right] \end{array} , \begin{array} {c} \ \ \begin{array} {rrr} \frac21 & \ \ \frac53 \\ \end{array} \\ \left[ \begin{array} {r|r} 1 & 0 \\ 0 & {-1} \\ 0 & 1 \\ \end{array} \right] \end{array} , \begin{array} {c} \ \ \begin{array} {rrr} \frac32 & \ \ \frac54 \\ \end{array} \\ \left[ \begin{array} {r|r} {-1} & {-2} \\ 1 & 0 \\ 0 & 1 \\ \end{array} \right] \end{array}$

Note, for example, that our matrix representing $\frac32$ and $\frac43$ (the 14th one here) has been simplified to a matrix representing $\frac21$ and $\frac31$. This is as if to say: why define the problem as tuning $\frac32$ and $\frac43$ pure, when only two distinct primes appear between these two intervals? We may as well just use our two generators to make both of those basis primes pure. In fact, any combination of intervals involving no prime 5 will have been simplified to this same unchanged-interval basis.

Also note that many intervals are now subunison (less than $\frac11$, with a denominator greater than the numerator; for example $\frac34$). While this may be unnatural for musicians to think about, it's just the way the canonicalization math works out, and is irrelevant to tuning, because any damage to an interval will be the same as to its reciprocal.

At this point we would, in some cases, eliminate those unchanged-interval bases that the canonicalization process simplified to fewer than $r$ intervals, i.e. that lost one or more columns. In this example, that has not occurred to any of our matrices; for it to occur, our target-interval set would need to include linearly dependent intervals. For example, the intervals $\frac32$ and $\frac94$ are linearly dependent, and we see both of them in the 10-TILT that's the default for a 7-limit temperament. So in that case, the unchanged-interval basis resulting from the combination of that pair of intervals would be eliminated. This captures the fact that if you were to purely tune the interval which the others are powers of, all the others would also be purely tuned, so this is not truly a combination of distinct intervals to tune purely.
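The canonicalize-and-filter step can be sketched in code. Below is a minimal pure-Python column-style Hermite normal form. Note that this is an illustrative stand-in, not the exact canonical form used by D&D's RTT library (which involves antitransposing before and after taking the HNF), so the sign conventions of the resulting columns may differ from the matrices displayed above. The properties that matter here still hold, though: two bases receive the same canonical matrix exactly when they span the same lattice of unchanged intervals, and a deficient basis comes back with fewer than $r$ columns.

```python
def col_hnf(mat):
    """Column-style Hermite normal form of an integer matrix whose columns
    are interval vectors. Zero columns are dropped at the end, so a
    deficient basis (linearly dependent intervals) loses columns."""
    m = [row[:] for row in mat]
    rows, cols = len(m), len(m[0])
    pc = 0  # next pivot column
    for r in range(rows):
        if pc >= cols:
            break
        # gcd-eliminate row r across columns pc..cols-1
        while True:
            nz = [c for c in range(pc, cols) if m[r][c] != 0]
            if not nz:
                break
            c0 = min(nz, key=lambda c: abs(m[r][c]))  # smallest entry becomes pivot
            for row in m:
                row[pc], row[c0] = row[c0], row[pc]
            done = True
            for c in range(pc + 1, cols):
                if m[r][c] != 0:
                    q = m[r][c] // m[r][pc]
                    for row in m:
                        row[c] -= q * row[pc]
                    if m[r][c] != 0:
                        done = False
            if done:
                break
        if m[r][pc] != 0:
            if m[r][pc] < 0:  # normalize pivot sign
                for row in m:
                    row[pc] = -row[pc]
            for c in range(pc):  # reduce columns left of the pivot
                q = m[r][c] // m[r][pc]
                for row in m:
                    row[c] -= q * row[pc]
            pc += 1
    return [row[:pc] for row in m]  # drop all-zero columns
```

For instance, the basis $\{\frac32, \frac43\}$ (columns $(-1,1,0)$ and $(2,-1,0)$) canonicalizes to the prime basis $\{\frac21, \frac31\}$, matching the article; the bases $\{\frac32, \frac65\}$ and $\{\frac32, \frac54\}$ canonicalize identically (so they'd be de-duped); and the dependent pair $\{\frac32, \frac94\}$ collapses to a single column.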

### De-dupe

We also see that our $\frac32$ and $\frac65$ matrix has been changed to one representing $\frac32$ and $\frac54$. This is less obviously a simplification, but it does illuminate how tuning $\frac32$ and $\frac65$ pure is no different from tuning $\frac32$ and $\frac54$ pure.

And so now it's time to actually eliminate those redundancies!

$\scriptsize \begin{array} {c} \ \ \begin{array} {rrr} \frac21 & \ \ \frac31 \\ \end{array} \\ \left[ \begin{array} {r|r} 1 & 0 \\ 0 & 1 \\ 0 & 0 \\ \end{array} \right] \end{array} , \begin{array} {c} \ \ \begin{array} {rrr} \frac21 & \ \ \frac51 \\ \end{array} \\ \left[ \begin{array} {r|r} 1 & 0 \\ 0 & 0 \\ 0 & 1 \\ \end{array} \right] \end{array} , \begin{array} {c} \ \ \begin{array} {rrr} \frac21 & \ \ \frac53 \\ \end{array} \\ \left[ \begin{array} {r|r} 1 & 0 \\ 0 & {-1} \\ 0 & 1 \\ \end{array} \right] \end{array} , \begin{array} {c} \ \ \begin{array} {rrr} \frac31 & \ \ \frac52 \\ \end{array} \\ \left[ \begin{array} {r|r} 0 & {-1} \\ 1 & 0 \\ 0 & 1 \\ \end{array} \right] \end{array} , \begin{array} {c} \ \ \begin{array} {rrr} \frac31 & \ \ \frac51 \\ \end{array} \\ \left[ \begin{array} {r|r} 0 & 0 \\ 1 & 0 \\ 0 & 1 \\ \end{array} \right] \end{array} , \begin{array} {c} \ \ \begin{array} {rrr} \frac31 & \ \ \frac54 \\ \end{array} \\ \left[ \begin{array} {r|r} 0 & {-2} \\ 1 & 0 \\ 0 & 1 \\ \end{array} \right] \end{array} , \begin{array} {c} \ \ \begin{array} {rrr} \frac32 & \ \ \frac52 \\ \end{array} \\ \left[ \begin{array} {r|r} {-1} & {-1} \\ 1 & 0 \\ 0 & 1 \\ \end{array} \right] \end{array} , \begin{array} {c} \ \ \begin{array} {rrr} \frac32 & \ \ \frac54 \\ \end{array} \\ \left[ \begin{array} {r|r} {-1} & {-2} \\ 1 & 0 \\ 0 & 1 \\ \end{array} \right] \end{array} , \begin{array} {c} \ \ \begin{array} {rrr} \frac34 & \ \ \frac52 \\ \end{array} \\ \left[ \begin{array} {r|r} {-2} & {-1} \\ 1 & 0 \\ 0 & 1 \\ \end{array} \right] \end{array} , \begin{array} {c} \ \ \begin{array} {rrr} \frac34 & \ \ \frac54 \\ \end{array} \\ \left[ \begin{array} {r|r} {-2} & {-2} \\ 1 & 0 \\ 0 & 1 \\ \end{array} \right] \end{array} , \begin{array} {c} \ \ \begin{array} {rrr} \frac34 & \ \ \frac58 \\ \end{array} \\ \left[ \begin{array} {r|r} {-2} & {-3} \\ 1 & 0 \\ 0 & 1 \\ \end{array} \right] \end{array}$

With only 11 matrices remaining, we must have eliminated 17 of our original 28 as redundant.
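Once everything is in canonical form, de-duping is mechanical: since the canonical form is unique, two bases are redundant exactly when their canonical matrices are identical, so we keep only the first occurrence of each. A quick sketch in Python, using the interval labels of the 28 canonicalized bases displayed above as stand-ins for the matrices themselves:

```python
# The 28 canonicalized bases from above, by their interval labels,
# reading the four displayed rows left to right:
canon = [
    ("2/1", "3/1"), ("2/1", "3/1"), ("2/1", "3/1"), ("2/1", "5/1"),
    ("2/1", "5/3"), ("2/1", "5/1"), ("2/1", "5/3"),
    ("2/1", "3/1"), ("2/1", "3/1"), ("3/1", "5/2"), ("3/1", "5/1"),
    ("3/1", "5/4"), ("3/1", "5/2"), ("2/1", "3/1"),
    ("3/2", "5/2"), ("3/2", "5/2"), ("3/2", "5/4"), ("3/2", "5/4"),
    ("3/4", "5/2"), ("3/4", "5/4"), ("3/4", "5/4"),
    ("3/4", "5/8"), ("3/2", "5/2"), ("2/1", "5/1"), ("3/1", "5/2"),
    ("3/4", "5/4"), ("2/1", "5/3"), ("3/2", "5/4"),
]
unique = list(dict.fromkeys(canon))  # keeps the first occurrence of each
# 28 bases collapse to 11 distinct ones, eliminating 17 redundancies
```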

### Convert to generators

Now we just need to convert each of these unchanged-interval bases $\mathrm{U}_{(i,j)}$ to a corresponding generator embedding $G$. To do this, we use the formula $G = \mathrm{U}(M\mathrm{U})^{-1}$, where $M$ is the temperament mapping (the derivation of this formula, and examples of working through this calculation, are both described later in this article here: #Only unchanged-intervals method).
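Here's a small sketch of that formula in Python, using exact rational arithmetic, applied to the unchanged-interval basis $\{\frac21, \frac51\}$ with the meantone mapping. (This turns out to be one of the more famous candidates: quarter-comma meantone.)

```python
from fractions import Fraction as F

def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def inv2(A):
    """Exact inverse of a 2x2 matrix."""
    (a, b), (c, d) = A
    det = F(a * d - b * c)
    return [[d / det, -b / det], [-c / det, a / det]]

M = [[1, 1, 0], [0, 1, 4]]    # meantone mapping
U = [[1, 0], [0, 0], [0, 1]]  # unchanged-interval basis {2/1, 5/1}

# G = U(MU)^(-1)
G = matmul(U, inv2(matmul(M, U)))
```

Running this gives $G$ with columns $(1, 0, 0)$ and $(0, 0, \frac14)$, i.e. a pure octave and a generator of $5^{\frac14}$: the quarter-comma meantone fifth.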

$\scriptsize \begin{array} {c} \ \ \begin{array} {rrr} \frac{2}{1} & \frac{3}{2} \\ \end{array} \\ \left[ \begin{array} {rrr} 1 & {-1} \\ 0 & 1 \\ 0 & 0 \\ \end{array} \right] \end{array} , \begin{array} {c} \ \ \begin{array} {rrr} \frac{2}{1} & \sqrt[4]{5} \\ \end{array} \\ \left[ \begin{array} {rrr} 1 & 0 \\ 0 & 0 \\ 0 & \frac14 \\ \end{array} \right] \end{array} , \begin{array} {c} \ \ \begin{array} {rrr} \frac21 & \sqrt[3]{\frac{10}{3}} \\ \end{array} \\ \left[ \begin{array} {rrr} 1 & \frac13 \\ 0 & {-\frac13} \\ 0 & \frac13 \\ \end{array} \right] \end{array} , \begin{array} {c} \ \ \begin{array} {rrr} \sqrt[5]{\frac{162}{5}} & \sqrt[5]{\frac{15}{2}} \\ \end{array} \\ \left[ \begin{array} {rrr} \frac15 & {-\frac15} \\ \frac45 & \frac15 \\ {-\frac15} & \frac15 \\ \end{array} \right] \end{array} , \begin{array} {c} \ \ \begin{array} {rrr} \frac{3}{\sqrt[4]{5}} & \sqrt[4]{5} \\ \end{array} \\ \left[ \begin{array} {rrr} 0 & 0 \\ 1 & 0 \\ {-\frac14} & \frac14 \\ \end{array} \right] \end{array} , \begin{array} {c} \ \ \begin{array} {rrr} \sqrt[6]{\frac{324}{5}} & \sqrt[6]{\frac{45}{4}} \\ \end{array} \\ \left[ \begin{array} {rrr} \frac13 & {-\frac13} \\ \frac23 & \frac13 \\ {-\frac16} & \frac16 \\ \end{array} \right] \end{array} , \begin{array} {c} \ \ \begin{array} {rrr} \frac{81}{40} & \frac{3}{2} \\ \end{array} \\ \left[ \begin{array} {rrr} {-3} & {-1} \\ 4 & 1 \\ {-1} & 0 \\ \end{array} \right] \end{array} , \begin{array} {c} \ \ \begin{array} {rrr} \frac{9}{2\sqrt{5}} & \frac{3}{2} \\ \end{array} \\ \left[ \begin{array} {rrr} {-1} & {-1} \\ 2 & 1 \\ {-\frac12} & 0 \\ \end{array} \right] \end{array} , \begin{array} {c} \ \ \begin{array} {rrr} \sqrt[3]{\frac{640}{81}} & \sqrt[3]{\frac{10}{3}} \\ \end{array} \\ \left[ \begin{array} {rrr} \frac73 & \frac13 \\ {-\frac43} & {-\frac13} \\ \frac13 & \frac13 \\ \end{array} \right] \end{array} , \begin{array} {c} \ \ \begin{array} {rrr} \frac{8\sqrt{5}}{9} & \frac{2\sqrt{5}}{3} \\ \end{array} \\ \left[ \begin{array} {rrr} 3 & 1 \\ {-2} & {-1} \\ \frac12 & \frac12 \\ \end{array} \right] \end{array} , \begin{array} {c} \ \ \begin{array} {rrr} \frac{160}{81} & \frac{40}{27} \\ \end{array} \\ \left[ \begin{array} {rrr} 5 & 3 \\ {-4} & {-3} \\ 1 & 1 \\ \end{array} \right] \end{array}$

Note that every one of those unusual-looking values above, whether in the first column ($\frac21$, $\frac{81}{40}$, $\frac{8\sqrt{5}}{9}$, and so on) or in the second column ($\frac32$, $\frac{40}{27}$, $\sqrt[3]{\frac{10}{3}}$, and so on), is an approximation of $\frac21$ or $\frac32$, respectively.

At this point, the only inputs affecting our results have been $M$ and $\mathrm{T}$: $M$ appears in our formula for $G$, and our target-interval set $\mathrm{T}$ was our source of intervals for our set of unchanged-interval bases. Notably, $W$ is missing from that list. So far, then, it doesn't matter what our damage weight slope is (or what complexity function is used for it, if other than log-product complexity); this list of candidate $G$'s is valid for any choice of $W$. But don't worry: $W$ comes into play in the very next step.

## Find damages at points

As the next step, we find the $1$-sum of the damages to the target-interval set for each of those candidate tunings. We'll work through one example: the third $G$, the one with generators $\frac21$ and $\sqrt[3]{\frac{10}{3}}$.

This is one way to write the formula for the damages of a tuning of a temperament, in weighted cents. You can see the close resemblance to the expression shared earlier in the #Basic algebraic setup section:

$\textbf{d} = |\,𝒋GM\mathrm{T}W - 𝒋G_{\text{j}}M_{\text{j}}\mathrm{T}W\,|$

As discussed in Dave Keenan & Douglas Blumeyer's guide to RTT: tuning fundamentals#Absolute errors, these vertical bars mean to take the absolute value of each entry of this vector, not to take its magnitude.

As discussed elsewhere, we can simplify this to:

$\textbf{d} = |\,𝒋(GM - G_{\text{j}}M_{\text{j}})\mathrm{T}W\,|$

So here it is, worked through for our example. Since we've gone with simplicity-weight damage here, we'll use $S$ to represent our simplicity-weight matrix, rather than the generic $W$ for weight matrix:

$\textbf{d} = \Huge | \scriptsize \begin{array} {ccc} 𝒋 \\ \left[ \begin{array} {rrr} 1200.000 & 1901.955 & 2786.314 \\ \end{array} \right] \end{array} ( \begin{array} {ccc} G \\ \left[ \begin{array} {rrr} 1 & \frac13 \\ 0 & {-\frac13} \\ 0 & \frac13 \\ \end{array} \right] \end{array} \begin{array} {ccc} M \\ \left[ \begin{array} {rrr} 1 & 1 & 0 \\ 0 & 1 & 4 \\ \end{array} \right] \end{array} - \begin{array} {ccc} I \\ \left[ \begin{array} {rrr} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ \end{array} \right] \end{array} ) \begin{array} {ccc} \mathrm{T} \\ \left[ \begin{array} {r|r|r|r|r|r|r|r} \;\;1 & \;\;\;0 & {-1} & 2 & {-1} & 0 & {-2} & 1 \\ 0 & 1 & 1 & {-1} & 0 & {-1} & 0 & 1 \\ 0 & 0 & 0 & 0 & 1 & 1 & 1 & {-1} \\ \end{array} \right] \end{array} \begin{array} {ccc} S \\ \left[ \begin{array} {rrr} \frac{1}{\log_2(2)} & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & \frac{1}{\log_2(3)} & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & \frac{1}{\log_2(6)} & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & \frac{1}{\log_2(12)} & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & \frac{1}{\log_2(10)} & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & \frac{1}{\log_2(15)} & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & \frac{1}{\log_2(20)} & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & \frac{1}{\log_2(30)} \\ \end{array} \right] \end{array} \Huge |$

Let's start chipping away at this from the left. As our first act, let's consolidate $𝒋$:

$\textbf{d} = \Huge | \scriptsize \begin{array} {ccc} 𝒋 \\ \left[ \begin{array} {rrr} 1200 & 1901.955 & 2786.314 \\ \end{array} \right] \end{array} ( \begin{array} {ccc} G \\ \left[ \begin{array} {rrr} 1 & \frac13 \\ 0 & {-\frac13} \\ 0 & \frac13 \\ \end{array} \right] \end{array} \begin{array} {ccc} M \\ \left[ \begin{array} {rrr} 1 & 1 & 0 \\ 0 & 1 & 4 \\ \end{array} \right] \end{array} - \begin{array} {ccc} I \\ \left[ \begin{array} {rrr} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ \end{array} \right] \end{array} ) \begin{array} {ccc} \mathrm{T} \\ \left[ \begin{array} {r|r|r|r|r|r|r|r} \;\;1 & \;\;\;0 & {-1} & 2 & {-1} & 0 & {-2} & 1 \\ 0 & 1 & 1 & {-1} & 0 & {-1} & 0 & 1 \\ 0 & 0 & 0 & 0 & 1 & 1 & 1 & {-1} \\ \end{array} \right] \end{array} \begin{array} {ccc} S \\ \left[ \begin{array} {rrr} \frac{1}{\log_2(2)} & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & \frac{1}{\log_2(3)} & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & \frac{1}{\log_2(6)} & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & \frac{1}{\log_2(12)} & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & \frac{1}{\log_2(10)} & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & \frac{1}{\log_2(15)} & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & \frac{1}{\log_2(20)} & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & \frac{1}{\log_2(30)} \\ \end{array} \right] \end{array} \Huge |$

Distribute the $𝒋$. We find $𝒋GM = 𝒕$, the tempered-prime tuning map, and $𝒋G_{\text{j}}M_{\text{j}} = 𝒋$, the just-prime tuning map.

$\textbf{d} = \Huge | \scriptsize ( \begin{array} {ccc} 𝒕 \\ \left[ \begin{array} {rrr} 1200.000 & 1894.786 & 2779.144 \\ \end{array} \right] \end{array} - \begin{array} {ccc} 𝒋 \\ \left[ \begin{array} {rrr} 1200 & 1901.955 & 2786.314 \\ \end{array} \right] \end{array} ) \begin{array} {ccc} \mathrm{T} \\ \left[ \begin{array} {r|r|r|r|r|r|r|r} \;\;1 & \;\;\;0 & {-1} & 2 & {-1} & 0 & {-2} & 1 \\ 0 & 1 & 1 & {-1} & 0 & {-1} & 0 & 1 \\ 0 & 0 & 0 & 0 & 1 & 1 & 1 & {-1} \\ \end{array} \right] \end{array} \begin{array} {ccc} S \\ \left[ \begin{array} {rrr} \frac{1}{\log_2(2)} & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & \frac{1}{\log_2(3)} & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & \frac{1}{\log_2(6)} & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & \frac{1}{\log_2(12)} & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & \frac{1}{\log_2(10)} & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & \frac{1}{\log_2(15)} & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & \frac{1}{\log_2(20)} & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & \frac{1}{\log_2(30)} \\ \end{array} \right] \end{array} \Huge |$

And now we can replace $𝒕 - 𝒋$ with a single variable $𝒓$, the retuning map, which (unsurprisingly) is just the map that tells us by how much to retune (mistune) each of the primes. This object will come up a lot more when working with all-interval tuning schemes.

$\textbf{d} = \Huge | \scriptsize \begin{array} {ccc} 𝒓 \\ \left[ \begin{array} {rrr} 0.000 & {-7.169} & {-7.169} \\ \end{array} \right] \end{array} \begin{array} {ccc} \mathrm{T} \\ \left[ \begin{array} {r|r|r|r|r|r|r|r} \;\;1 & \;\;\;0 & {-1} & 2 & {-1} & 0 & {-2} & 1 \\ 0 & 1 & 1 & {-1} & 0 & {-1} & 0 & 1 \\ 0 & 0 & 0 & 0 & 1 & 1 & 1 & {-1} \\ \end{array} \right] \end{array} \begin{array} {ccc} S \\ \left[ \begin{array} {rrr} \frac{1}{\log_2(2)} & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & \frac{1}{\log_2(3)} & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & \frac{1}{\log_2(6)} & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & \frac{1}{\log_2(12)} & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & \frac{1}{\log_2(10)} & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & \frac{1}{\log_2(15)} & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & \frac{1}{\log_2(20)} & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & \frac{1}{\log_2(30)} \\ \end{array} \right] \end{array} \Huge |$

And multiplying that by our $\mathrm{T}$ gives us $\textbf{e}$, the target-interval error list:

$\textbf{d} = \Huge | \scriptsize \begin{array} {ccc} \textbf{e} \\ \left[ \begin{array} {rrr} 0.000 & {-7.169} & {-7.169} & 7.169 & {-7.169} & 0.000 & {-7.169} & 0.000 \\ \end{array} \right] \end{array} \begin{array} {ccc} S \\ \left[ \begin{array} {rrr} \frac{1}{\log_2(2)} & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & \frac{1}{\log_2(3)} & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & \frac{1}{\log_2(6)} & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & \frac{1}{\log_2(12)} & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & \frac{1}{\log_2(10)} & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & \frac{1}{\log_2(15)} & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & \frac{1}{\log_2(20)} & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & \frac{1}{\log_2(30)} \\ \end{array} \right] \end{array} \Huge |$

Since our weights are all positive, applying them can't flip any signs; the important part is to take the absolute value of the errors. So we can take care of that now, giving $|\textbf{e}|S$:

$\textbf{d} = \scriptsize \begin{array} {ccc} |\textbf{e}| \\ \left[ \begin{array} {rrr} |0.000| & |{-7.169}| & |{-7.169}| & |7.169| & |{-7.169}| & |0.000| & |{-7.169}| & |0.000| \\ \end{array} \right] \end{array} \begin{array} {ccc} S \\ \left[ \begin{array} {rrr} \frac{1}{\log_2(2)} & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & \frac{1}{\log_2(3)} & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & \frac{1}{\log_2(6)} & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & \frac{1}{\log_2(12)} & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & \frac{1}{\log_2(10)} & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & \frac{1}{\log_2(15)} & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & \frac{1}{\log_2(20)} & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & \frac{1}{\log_2(30)} \\ \end{array} \right] \end{array}$

And now we multiply that by the weights to get the damages, $\textbf{d}$.

$\textbf{d} = \scriptsize \left[ \begin{array} {rrr} 0.000 & 4.523 & 2.773 & 2.000 & 2.158 & 0.000 & 1.659 & 0.000 \\ \end{array} \right]$

And finally, since this tuning scheme is all about the sum of damages, we're actually looking for $\llzigzag \textbf{d} \rrzigzag _1$. So we total these up, and get our final answer: 0.000 + 4.523 + 2.773 + 2.000 + 2.158 + 0.000 + 1.659 + 0.000 = 13.113. And that's in units of simplicity-weighted cents, ¢(S), by the way.
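The whole chain just walked through, from $𝒋$, $G$, and $M$ down to the damage list $\textbf{d}$ and its $1$-sum, is compact enough to check numerically. A minimal Python sketch, using floats and the article's own symbols as variable names:

```python
import math

j = [1200 * math.log2(p) for p in (2, 3, 5)]  # just-prime tuning map (cents)
M = [[1, 1, 0], [0, 1, 4]]                     # meantone mapping
G = [[1, 1/3], [0, -1/3], [0, 1/3]]            # candidate embedding {2/1, (10/3)^(1/3)}

# 6-TILT target-interval vectors (columns of T) and their ratios n/d,
# from which the log-product simplicity weights (diagonal of S) follow
T = [(1, 0, 0), (0, 1, 0), (-1, 1, 0), (2, -1, 0),
     (-1, 0, 1), (0, -1, 1), (-2, 0, 1), (1, 1, -1)]
ratios = [(2, 1), (3, 1), (3, 2), (4, 3), (5, 2), (5, 3), (5, 4), (6, 5)]
S = [1 / math.log2(n * d) for n, d in ratios]

g = [sum(j[p] * G[p][c] for p in range(3)) for c in range(2)]  # generator tuning map
t = [sum(g[c] * M[c][p] for c in range(2)) for p in range(3)]  # tempered-prime tuning map
r = [t[p] - j[p] for p in range(3)]                            # retuning map t - j
e = [sum(r[p] * v[p] for p in range(3)) for v in T]            # target-interval errors
d = [abs(ei) * s for ei, s in zip(e, S)]                       # damages |e|S
total = sum(d)                                                 # the 1-sum, in cents(S)
```

The intermediate values match the rounded ones shown above: $𝒓 ≈ (0, -7.169, -7.169)$, the damage to $\frac31$ is about 4.523 ¢(S), and the total is about 13.11 ¢(S).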

## Choose the winner

Now, if we repeat that entire damage calculation process for every one of the eleven tunings we identified as candidates for the miniaverage, we find the following list of tuning damages: 21.338, 9.444, 13.113, 10.461, 15.658, 10.615, 50.433, 26.527, 25.404, 33.910, and 80.393. So 13.113 isn't bad, but it's apparently not the best we can do. That honor goes to the second tuning there, which has only 9.444 ¢(S) total damage.

Lo and behold, if we cross-reference that with our list of $G$ candidates from earlier, the second one is quarter-comma meantone, the tuning where the fifth is exactly the fourth root of five:

$G = \left[ \begin{array} {rrr} 1 & 0 \\ 0 & 0 \\ 0 & \frac14 \\ \end{array} \right]$

Often people will prefer to have the tuning in terms of the cents sizes of the generators, which is our generator tuning map $𝒈$; but again, we can find that as easily as $𝒈 = 𝒋G$:

$𝒈 = \begin{array} {ccc} 𝒋 \\ \left[ \begin{array} {rrr} 1200.000 & 1901.955 & 2786.314 \\ \end{array} \right] \end{array} \begin{array} {ccc} G \\ \left[ \begin{array} {rrr} 1 & 0 \\ 0 & 0 \\ 0 & \frac14 \\ \end{array} \right] \end{array}$

And that works out to ⟨1200.000 696.578].
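As a quick numerical check of that multiplication, assuming nothing beyond the two matrices just shown:

```python
import math

j = [1200 * math.log2(p) for p in (2, 3, 5)]  # just-prime tuning map
G = [[1, 0], [0, 0], [0, 0.25]]               # quarter-comma meantone embedding

# generator tuning map g = jG: dot j with each column of G
g = [sum(jp * row[c] for jp, row in zip(j, G)) for c in range(2)]
# g ≈ [1200.000, 696.578]
```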

## Tie-breaking

With the 6-TILT miniaverage tuning of meantone (with simplicity-weight damage), we've solved for a unique tuning based on $G$ that miniaverages the damage to this temperament $M$.

But sometimes we have a tie between tunings for least average damage. For example, if we had done a unity-weight tuning, in which case $W = I$, and included the interval $\frac85$ in our set, we would have found that quarter-comma meantone tied with another tuning, one with generators of $\sqrt[5]{\frac{2560}{81}}$ and $\sqrt[5]{\frac{200}{27}}$, which are approximately 1195.699 ¢ and 693.352 ¢.

In this case, we fall back to our general method, which is equipped to find the true optimum somewhere in between these two tied tunings, albeit as an approximate solution. This method is discussed here: power limit method. Or, if you'd like a refresher on how to think about non-unique tunings, please see Dave Keenan & Douglas Blumeyer's guide to RTT: tuning fundamentals#Non-unique tunings.

We note that there may be a way to find an exact solution to a nested miniaverage, in a similar fashion to the nested minimax discussed in the coinciding-damage method section below, but it raises some conceptual issues about what a nested miniaverage even means. We have pondered this problem, but it remains open; we didn't prioritize solving it, since hardly anyone uses miniaverage tunings anyway.

## With held-intervals

The zero-damage method is easily modified to handle held-intervals along with target-intervals. In short, rather than assembling our set of unchanged-interval bases $\mathrm{U}_1$ through $\mathrm{U}_n$ (where $n = {{k}\choose{r}}$) corresponding to the zero-damage points by finding every combination of $r$ different ones of our $k$ target-intervals (one for each generator to be responsible for tuning exactly), we must first reserve $h$ columns of each $\mathrm{U}_n$ (where $h$ is the held-interval count) for the held-intervals, leaving only the remaining $r - h$ columns to be assembled from the target-intervals as before. So we'll have only ${{k}\choose{r - h}}$ candidate tunings / zero-damage points / unchanged-interval bases in this case.

In other words, if $\mathrm{U}_n$ is one of the unchanged-interval bases characterizing a candidate miniaverage tuning, then it must contain $\mathrm{H}$ itself, the held-interval basis, which does not yet fully characterize our tuning, leaving some wiggle room (otherwise we'd just use the "only held-intervals" approach, discussed later).

For example, if seeking a held-octave miniaverage tuning of a 5-limit, rank-2 temperament with the 6-TILT as our target-interval set, then $h = 1$ (only the octave), $k = 8$ (there are 8 target-intervals in the 6-TILT), and $r = 2$ (the meaning of "rank-2"). So we're looking at ${{k}\choose{r - h}} = {{(8)}\choose{(2) - (1)}} = {{8}\choose{1}} = 8$ unchanged-interval bases. That's significantly fewer than the ${{8}\choose{2}} = 28$ we had to slog through when $h = 0$ in the earlier example, so this will be much faster to compute. All we're doing here, really, is checking each possible tuning where one of our target-intervals is paired with the octave to form the unchanged-interval basis.
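The counting, and the assembly of the candidate bases, is plain combinatorics; a quick sketch using interval labels in place of vectors:

```python
from itertools import combinations
from math import comb

tilt6 = ["2/1", "3/1", "3/2", "4/3", "5/2", "5/3", "5/4", "6/5"]  # k = 8 targets
held = ["2/1"]                                                     # h = 1: the octave
r = 2                                                              # rank 2

# reserve h columns for the held-intervals; fill the remaining r - h from the targets
bases = [held + list(rest) for rest in combinations(tilt6, r - len(held))]
# len(bases) == C(8, 1) == 8, versus C(8, 2) == 28 in the unconstrained case
```

Note that the first candidate pairs the octave with itself, since the octave appears in both the held-interval basis and the target list; this is addressed at the end of this section.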

So, with our unchanged-interval basis (colorized to grey to help visualize its presence in the upcoming steps):

$\begin{array} {c} \mathrm{U} \\ \ \ \begin{array} {rrr} \frac21 \\ \end{array} \\ \left[ \begin{array} {rrr} \style{background-color:#D3D3D3;padding:5px}{1} \\ \style{background-color:#D3D3D3;padding:5px}{0} \\ \style{background-color:#D3D3D3;padding:5px}{0} \\ \end{array} \right] \end{array}$

We have the unchanged-interval bases for our zero-damage points:

$\small \begin{array} {c} \mathrm{U}_{(1)} \\ \ \ \begin{array} {rrr} \frac21 & \ \ \frac21 \\ \end{array} \\ \left[ \begin{array} {r|r} \style{background-color:#D3D3D3;padding:5px}{1} & \style{background-color:#F69289;padding:5px}{1} \\ \style{background-color:#D3D3D3;padding:5px}{0} & \style{background-color:#F69289;padding:5px}{0} \\ \style{background-color:#D3D3D3;padding:5px}{0} & \style{background-color:#F69289;padding:5px}{0} \\ \end{array} \right] \end{array} , \begin{array} {c} \mathrm{U}_{(2)} \\ \ \ \begin{array} {rrr} \frac21 & \ \ \frac31 \\ \end{array} \\ \left[ \begin{array} {r|r} \style{background-color:#D3D3D3;padding:5px}{1} & \style{background-color:#FDBC42;padding:5px}{0} \\ \style{background-color:#D3D3D3;padding:5px}{0} & \style{background-color:#FDBC42;padding:5px}{1} \\ \style{background-color:#D3D3D3;padding:5px}{0} & \style{background-color:#FDBC42;padding:5px}{0} \\ \end{array} \right] \end{array} , \begin{array} {c} \mathrm{U}_{(3)} \\ \ \ \begin{array} {rrr} \frac21 & \ \ \frac32 \\ \end{array} \\ \left[ \begin{array} {r|r} \style{background-color:#D3D3D3;padding:5px}{1} & \style{background-color:#FFF200;padding:5px}{-1} \\ \style{background-color:#D3D3D3;padding:5px}{0} & \style{background-color:#FFF200;padding:5px}{1} \\ \style{background-color:#D3D3D3;padding:5px}{0} & \style{background-color:#FFF200;padding:5px}{0} \\ \end{array} \right] \end{array} , \begin{array} {c} \mathrm{U}_{(4)} \\ \ \ \begin{array} {rrr} \frac21 & \ \ \frac43 \\ \end{array} \\ \left[ \begin{array} {r|r} \style{background-color:#D3D3D3;padding:5px}{1} & \style{background-color:#98CC70;padding:5px}{2} \\ \style{background-color:#D3D3D3;padding:5px}{0} & \style{background-color:#98CC70;padding:5px}{-1} \\ \style{background-color:#D3D3D3;padding:5px}{0} & \style{background-color:#98CC70;padding:5px}{0} \\ \end{array} \right] \end{array} , \begin{array} {c} \mathrm{U}_{(5)} \\ \ \ \begin{array} {rrr} \frac21 & \ \ \frac52 \\ \end{array} \\ \left[ \begin{array} 
{r|r} \style{background-color:#D3D3D3;padding:5px}{1} & \style{background-color:#3FBC9D;padding:5px}{-1} \\ \style{background-color:#D3D3D3;padding:5px}{0} & \style{background-color:#3FBC9D;padding:5px}{0} \\ \style{background-color:#D3D3D3;padding:5px}{0} & \style{background-color:#3FBC9D;padding:5px}{1} \\ \end{array} \right] \end{array} , \begin{array} {c} \mathrm{U}_{(6)} \\ \ \ \begin{array} {rrr} \frac21 & \ \ \frac53 \\ \end{array} \\ \left[ \begin{array} {r|r} \style{background-color:#D3D3D3;padding:5px}{1} & \style{background-color:#41B0E4;padding:5px}{0} \\ \style{background-color:#D3D3D3;padding:5px}{0} & \style{background-color:#41B0E4;padding:5px}{-1} \\ \style{background-color:#D3D3D3;padding:5px}{0} & \style{background-color:#41B0E4;padding:5px}{1} \\ \end{array} \right] \end{array} , \begin{array} {c} \mathrm{U}_{(7)} \\ \ \ \begin{array} {rrr} \frac21 & \ \ \frac54 \\ \end{array} \\ \left[ \begin{array} {r|r} \style{background-color:#D3D3D3;padding:5px}{1} & \style{background-color:#7977B8;padding:5px}{-2} \\ \style{background-color:#D3D3D3;padding:5px}{0} & \style{background-color:#7977B8;padding:5px}{0} \\ \style{background-color:#D3D3D3;padding:5px}{0} & \style{background-color:#7977B8;padding:5px}{1} \\ \end{array} \right] \end{array} , \begin{array} {c} \mathrm{U}_{(8)} \\ \ \ \begin{array} {rrr} \frac21 & \ \ \frac65 \\ \end{array} \\ \left[ \begin{array} {r|r} \style{background-color:#D3D3D3;padding:5px}{1} & \style{background-color:#D883B7;padding:5px}{1} \\ \style{background-color:#D3D3D3;padding:5px}{0} & \style{background-color:#D883B7;padding:5px}{1} \\ \style{background-color:#D3D3D3;padding:5px}{0} & \style{background-color:#D883B7;padding:5px}{-1} \\ \end{array} \right] \end{array}$

(Note that $\mathrm{U}_{(1)}$ here pairs $\frac21$ with $\frac21$. That's because the octave happens to appear both in our held-interval basis $\mathrm{H}$ and our target-interval list $\mathrm{T}$. We could have chosen to remove $\frac21$ from $\mathrm{T}$ upon adding it to $\mathrm{H}$, because once you're insisting a particular interval takes no damage there's no sense also including it in a list of intervals to minimize damage to. But we chose to leave $\mathrm{T}$ alone to make our points above more clearly, i.e. with $k$ remaining equal to $8$.)

Now we canonicalize (no need for color anymore; the point has been made about the combinations of target-intervals with held-intervals):

$\begin{array} {c} \ \ \begin{array} {rrr} \frac21 \\ \end{array} \\ \left[ \begin{array} {r|r} 1 \\ 0 \\ 0 \\ \end{array} \right] \end{array} , \begin{array} {c} \ \ \begin{array} {rrr} \frac21 & \ \ \frac31 \\ \end{array} \\ \left[ \begin{array} {r|r} 1 & 0 \\ 0 & 1 \\ 0 & 0 \\ \end{array} \right] \end{array} , \begin{array} {c} \ \ \begin{array} {rrr} \frac21 & \ \ \frac31 \\ \end{array} \\ \left[ \begin{array} {r|r} 1 & 0 \\ 0 & 1 \\ 0 & 0 \\ \end{array} \right] \end{array} , \begin{array} {c} \ \ \begin{array} {rrr} \frac21 & \ \ \frac31 \\ \end{array} \\ \left[ \begin{array} {r|r} 1 & 0 \\ 0 & 1 \\ 0 & 0 \\ \end{array} \right] \end{array} , \begin{array} {c} \ \ \begin{array} {rrr} \frac21 & \ \ \frac51 \\ \end{array} \\ \left[ \begin{array} {r|r} 1 & 0 \\ 0 & 0 \\ 0 & 1 \\ \end{array} \right] \end{array} , \begin{array} {c} \ \ \begin{array} {rrr} \frac21 & \ \ \frac53 \\ \end{array} \\ \left[ \begin{array} {r|r} 1 & 0 \\ 0 & {-1} \\ 0 & 1 \\ \end{array} \right] \end{array} , \begin{array} {c} \ \ \begin{array} {rrr} \frac21 & \ \ \frac51 \\ \end{array} \\ \left[ \begin{array} {r|r} 1 & 0 \\ 0 & 0 \\ 0 & 1 \\ \end{array} \right] \end{array} , \begin{array} {c} \ \ \begin{array} {rrr} \frac21 & \ \ \frac53 \\ \end{array} \\ \left[ \begin{array} {r|r} 1 & 0 \\ 0 & {-1} \\ 0 & 1 \\ \end{array} \right] \end{array}$

Note that $\mathrm{U}_{(1)}$, the basis which had two copies of the octave, has been canonicalized down to a single column, because its two vectors were obviously not linearly independent. So it will be filtered out in the next step. Actually, since that's the only point eliminated by filtering, let's go ahead and do the step after that too, which is deduping; we have a lot of dupes:

$\begin{array} {c} \ \ \begin{array} {rrr} \frac21 & \ \ \frac31 \\ \end{array} \\ \left[ \begin{array} {r|r} 1 & 0 \\ 0 & 1 \\ 0 & 0 \\ \end{array} \right] \end{array} , \begin{array} {c} \ \ \begin{array} {rrr} \frac21 & \ \ \frac51 \\ \end{array} \\ \left[ \begin{array} {r|r} 1 & 0 \\ 0 & 0 \\ 0 & 1 \\ \end{array} \right] \end{array} , \begin{array} {c} \ \ \begin{array} {rrr} \frac21 & \ \ \frac53 \\ \end{array} \\ \left[ \begin{array} {r|r} 1 & 0 \\ 0 & {-1} \\ 0 & 1 \\ \end{array} \right] \end{array}$
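
Translated into Python, the filter-and-dedupe steps might look like the following sketch (the representation is our own, not D&D's Wolfram Language library: each already-canonicalized basis is a tuple of prime-count-vector columns):

```python
# Canonicalized unchanged-interval bases, as tuples of column vectors
# (prime-count vectors over primes 2, 3, 5), in the order shown above
bases = [
    ((1, 0, 0),),                    # U(1): collapsed to a single column
    ((1, 0, 0), (0,  1, 0)),         # U(2): 2/1, 3/1
    ((1, 0, 0), (0,  1, 0)),         # U(3): 2/1, 3/1
    ((1, 0, 0), (0,  1, 0)),         # U(4): 2/1, 3/1
    ((1, 0, 0), (0,  0, 1)),         # U(5): 2/1, 5/1
    ((1, 0, 0), (0, -1, 1)),         # U(6): 2/1, 5/3
    ((1, 0, 0), (0,  0, 1)),         # U(7): 2/1, 5/1
    ((1, 0, 0), (0, -1, 1)),         # U(8): 2/1, 5/3
]

r = 2  # rank: a usable basis needs r linearly independent columns

# Filter out deficient bases (fewer than r columns survived
# canonicalization), then dedupe while preserving order
survivors = []
for basis in bases:
    if len(basis) == r and basis not in survivors:
        survivors.append(basis)

print(len(survivors))  # three distinct bases remain
```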

Now convert each $\mathrm{U}_{(i)}$ to a $G_{(i)}$, via $G = \mathrm{U}(M\mathrm{U})^{-1}$:

$\begin{array} {c} \ \ \begin{array} {rrr} \frac{2}{1} & \frac{3}{2} \\ \end{array} \\ \left[ \begin{array} {rrr} 1 & {-1} \\ 0 & 1 \\ 0 & 0 \\ \end{array} \right] \end{array} , \begin{array} {c} \ \ \begin{array} {rrr} \frac{2}{1} & \sqrt[4]{5} \\ \end{array} \\ \left[ \begin{array} {rrr} 1 & 0 \\ 0 & 0 \\ 0 & \frac14 \\ \end{array} \right] \end{array} , \begin{array} {c} \ \ \begin{array} {rrr} \frac21 & \sqrt[3]{\frac{10}{3}} \\ \end{array} \\ \left[ \begin{array} {rrr} 1 & \frac13 \\ 0 & {-\frac13} \\ 0 & \frac13 \\ \end{array} \right] \end{array}$

And convert those to generator tuning maps: ⟨1200 701.955], ⟨1200 696.578], and ⟨1200 694.786]. Note that every one of these has a pure-octave period. Then check the damage sums: 353.942 ¢(C), 89.083 ¢(C), and 110.390 ¢(C), respectively. So that tells us that we want the middle result of these three, ⟨1200 696.578], as the minimization of the $1$-mean of complexity-weight damage to the 6-TILT, when we're constrained to the octave being unchanged.
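
As a concrete check of the arithmetic, here's a short Python sketch (not from D&D's library; the octave-and-fifth form of the meantone mapping, [⟨1 1 0] ⟨0 1 4]], and the 6-TILT vectors are assumptions matching this example). It converts each $G$ to a generator tuning map, then totals each target's absolute error weighted by its log-product complexity $\log_2(n·d)$, which reproduces the three sums quoted above:

```python
import math

# JI prime tuning map, in cents: <1200 1901.955 2786.314]
j = [1200 * math.log2(p) for p in (2, 3, 5)]

# Meantone mapping with octave-and-fifth generators (assumed form)
M = [[1, 1, 0], [0, 1, 4]]

# The three candidate generator embeddings G (columns per generator)
Gs = [
    [[1, -1], [0, 1], [0, 0]],        # (2/1, 3/2)
    [[1, 0], [0, 0], [0, 1/4]],       # (2/1, 4th root of 5)
    [[1, 1/3], [0, -1/3], [0, 1/3]],  # (2/1, cube root of 10/3)
]

# 6-TILT target-intervals as (numerator, denominator): prime-count vector
targets = {
    (2, 1): [1, 0, 0],  (3, 1): [0, 1, 0],  (3, 2): [-1, 1, 0],
    (4, 3): [2, -1, 0], (5, 2): [-1, 0, 1], (5, 3): [0, -1, 1],
    (5, 4): [-2, 0, 1], (6, 5): [1, 1, -1],
}

def damage_sum(G):
    # generator tuning map g = jG, then tempered prime tuning map t = gM
    g = [sum(j[p] * G[p][c] for p in range(3)) for c in range(2)]
    t = [sum(g[i] * M[i][p] for i in range(2)) for p in range(3)]
    total = 0.0
    for (n, d), vec in targets.items():
        error = sum((t[p] - j[p]) * vec[p] for p in range(3))
        total += abs(error) * math.log2(n * d)  # complexity-weighted damage
    return total

sums = [round(damage_sum(G), 3) for G in Gs]  # middle candidate is smallest
```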

For a rank-3 temperament, with 2 held-intervals, we'd again have 8 choose 1 = 8 tunings to check. With 1 held-interval, we'd have 8 choose 2 = 28 tunings to check.

## For all-interval tuning schemes

We can adapt the zero-damage method to compute all-interval tuning schemes where the dual norm power $\text{dual}(q)$ is equal to $1$.

### Maxization

We might call these "minimax-MS" schemes, where the 'M' indicates that their interval complexity functions have been "maxized", as opposed to "Euclideanized"; that is, the power and matching root from their norm or summation form has been changed to $∞$ instead of to $2$. "Maxization" can be thought of as a reference to the fact that distance measured by $∞$-norms (maxes, remember) resembles distance traveled by "Max the magician" to get from point A to point B; he can teleport through all dimensions except the one he needs to travel furthest in, so the maximum distance he has to go in any one dimension is the defining distance. (To complete the set, the $1$-norms could be referred to as "taxicabized", referencing that this is the type of distance a taxicab on a grid of streets would travel… though would these tunings really be "-ized" if this is the logical starting point?)

And to be clear, the $\textbf{i}$-norm is maxized here — has norm power $∞$ — because the norm power on the retuning magnitude is $1$, and these norm powers must be duals.

Tuning schemes such as these are not very popular, because where Euclideanizing $\text{lp-C}()$ already makes tunings less psychoacoustically plausible, maxizing it makes tunings even less plausible.

### Example

Let's compute the minimax-MS tuning of meantone temperament. We begin by assembling our list of unchanged-interval bases. This list will be much shorter than it was with ordinary tuning schemes: its size increases combinatorially with the count of target-intervals, and here we have only three (proxy) target-intervals for a 5-limit temperament, namely the primes themselves.

$\begin{array} {c} \ \ \begin{array} {rrr} \frac21 & \ \ \frac31 \\ \end{array} \\ \left[ \begin{array} {r|r} 1 & 0 \\ 0 & 1 \\ 0 & 0 \\ \end{array} \right] \end{array} , \begin{array} {c} \ \ \begin{array} {rrr} \frac21 & \ \ \frac51 \\ \end{array} \\ \left[ \begin{array} {r|r} 1 & 0 \\ 0 & 0 \\ 0 & 1 \\ \end{array} \right] \end{array} , \begin{array} {c} \ \ \begin{array} {rrr} \frac31 & \ \ \frac51 \\ \end{array} \\ \left[ \begin{array} {r|r} 0 & 0 \\ 1 & 0 \\ 0 & 1 \\ \end{array} \right] \end{array}$

None of the canonicalizing, deficient-matrix filtering, or de-duping steps will have any effect for all-interval tuning computations. Any combination from the set of prime intervals will already be in canonical form, full-column-rank, and distinct from any other combination. Easy peasy.

So now we convert to generators, using the $G = \mathrm{U}(M\mathrm{U})^{-1}$ trick:

$\begin{array} {c} \ \ \begin{array} {rrr} \frac21 & \ \ \frac32 \\ \end{array} \\ \left[ \begin{array} {rrr} 1 & {-1} \\ 0 & 1 \\ 0 & 0 \\ \end{array} \right] \end{array} , \begin{array} {c} \ \ \begin{array} {rrr} \frac21 & \ \ \sqrt[4]{5} \\ \end{array} \\ \left[ \begin{array} {rrr} 1 & 0 \\ 0 & 0 \\ 0 & \frac14 \\ \end{array} \right] \end{array} , \begin{array} {c} \ \ \begin{array} {rrr} \frac{3}{\sqrt[4]{5}} & \ \ \sqrt[4]{5} \\ \end{array} \\ \left[ \begin{array} {rrr} 0 & 0 \\ 1 & 0 \\ {-\frac14} & \frac14 \\ \end{array} \right] \end{array}$

So these are our candidate generator embeddings. In other words, if we seek to minimize the $1$-norm of the retuning map for meantone temperament, these are 3 pairs of generators we should check. Though remember we can simplify to checking the $1$-sum, which is just another way of saying the sum of the retunings. So each of these generator pairs corresponds to a pair of primes being tuned pure, because these are the tunings where the sum of retunings is minimized.

If we want primes 2 and 3 to both be pure, we use generators of $\frac21$ and $\frac32$ (Pythagorean tuning). If we want primes 2 and 5 to be pure, we use generators of $\frac21$ and $\sqrt[4]{5}$ (quarter-comma tuning). If we want primes 3 and 5 to be pure, we use generators $\frac{3}{\sqrt[4]{5}} ≈ 2.006$ and $\sqrt[4]{5}$ (apparently named "quarter-comma 3eantone" tuning).

We note that at the analogous point in the zero-damage method for ordinary tunings, we pointed out that the choice of $W$ was irrelevant up to this point; similarly, here, the choice of $S$ has thus far been irrelevant, though it will certainly affect things in the next step.

To decide between these candidates, we check each of them for the magnitude of the error on the primes.

• Pythagorean tuning causes a magnitude of 9.262 ¢/oct of error (all on prime 5).
• Quarter-comma tuning causes a magnitude of 3.392 ¢/oct of error (all on prime 3).
• Quarter-comma 3eantone tuning causes a magnitude of 5.377 ¢/oct of error (all on prime 2).

And so quarter-comma tuning is our winner with the least retuning magnitude. That's the minimax-MS tuning of meantone.
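
These magnitudes can be verified with a small Python sketch (again assuming the octave-and-fifth form of the meantone mapping, [⟨1 1 0] ⟨0 1 4]]). It computes the $1$-norm of the retuning map, with each prime's error in cents scaled by that prime's size in octaves to give ¢/oct:

```python
import math

j = [1200 * math.log2(p) for p in (2, 3, 5)]  # JI prime tuning map, in cents
M = [[1, 1, 0], [0, 1, 4]]                    # assumed octave-fifth meantone mapping

# Candidate generator embeddings, from the matrices above
candidates = {
    "Pythagorean":            [[1, -1], [0, 1], [0, 0]],
    "quarter-comma":          [[1, 0], [0, 0], [0, 1/4]],
    "quarter-comma 3eantone": [[0, 0], [1, 0], [-1/4, 1/4]],
}

def retuning_magnitude(G):
    g = [sum(j[p] * G[p][c] for p in range(3)) for c in range(2)]
    t = [sum(g[i] * M[i][p] for i in range(2)) for p in range(3)]
    # 1-norm of the retuning map, each prime's error scaled from cents to ¢/oct
    return sum(abs(t[p] - j[p]) / math.log2(prime)
               for p, prime in enumerate((2, 3, 5)))

magnitudes = {name: round(retuning_magnitude(G), 3)
              for name, G in candidates.items()}
```

Each candidate here damages only one prime, so its $1$-norm is just that single prime's scaled error, matching the bulleted figures.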

## With alternative complexities

No examples will be given here, on account of the lack of popularity of these tunings.

# Coinciding-damage method

The third and final specific optimization power we'll take a look at in this article is $p = ∞$, for minimax tuning schemes.

The method for minimax tuning schemes is similar to the zero-damage method used for miniaverage tuning schemes, where $p = 1$. However, there are two key differences:

1. Instead of gathering only the points created where target-intervals' damage graphs coincide with zero damage, we also gather any points where target-intervals' damage graphs coincide with nonzero damage.
2. Where the $p=1$ method is not capable of tie-breaking when the basic mini-$p$-mean is a range of tunings rather than a single unique optimum tuning, this $p=∞$ method is capable of tie-breaking, to find the true single unique optimum tuning.

## History

This method was originally developed by Keenan Pepper in 2012, in a 142-line-long Python file called tiptop.py.

Keenan developed his algorithm specifically for the minimax-S tuning scheme (historically known as "TOP"), the original and quintessential all-interval tuning scheme. The all-interval use case is discussed below in the "For all-interval tuning schemes" section.

Specifically, Keenan's method was developed for its tie-breaking abilities, at a time when the power-limit method's ability to tie-break was unknown or not popular.

Keenan's method was modified in 2021-2023 by Douglas Blumeyer in order to accommodate ordinary tunings — those with target-interval sets where the optimization power is $∞$ and the norm power may be anything or possibly absent — and this is what will be discussed immediately below. Douglas's modifications also included support for held-intervals, and for alternative complexities, both of which are discussed in sections below, and also an improvement that both simplifies it conceptually and allows it to identify optimum tunings more quickly. Dave Keenan further modified Keenan's method during this time so that it can find exact solutions in the form of generator embeddings, which is also reflected in all the explanations below.

The explanation of how this method works is mostly by Douglas Blumeyer, but Dave Keenan and Keenan Pepper himself both helped tremendously with refining it. (Douglas takes credit for any shortcomings, however. In particular, he apologizes: he didn’t have time to make it shorter.)

## Coinciding-damage points

### Points for finding damage minimaxes

Damage minimaxes are always found at points in tuning damage space where individual target-interval hyper-V damage graphs intersect, or cross, to form a point.

This doesn't mean that every such point will be a damage minimax. It only means that every damage minimax will be such a point.

Now, the reason why a damage minimax point must be a point of intersection of target-interval damage graphs like this is because a minimax can only occur at a point on the max damage graph where it changes slope, and the max damage graph can only change slope where damage graphs cross. In other words, whenever damage graphs cross, then on one side of the crossing, one is on top, while on the other side, the other is on top.

(For the duration of this explanation, we'll be illustrating things in 2D tuning damage space, because it's simplest. We'll wait until the end to generalize these ideas to higher dimensions.)

But many times when damage graphs cross, while it remains true that which target-interval's damage is on top switches, both graphs slope with the same sign, i.e. both up, or both down:

And minimax points will never happen at these sorts of crossings. So we have to be more specific.

A minimax point cannot be just any point where the max damage graph changes slope. It must be at a point where the sign of the slope changes between positive and negative. This will create what we call a "local minimum", the sort of thing that could be our mini-max, or minimum maximum. ("Local minimum" is a technical term, but the "local" part of it turns out not to be relevant to this problem, and it may cause more confusion to attempt to explain why not, so we'll just ignore it.)

We might call this sort of point a $-+$ point, in reference to the signs of the slopes to either side. And by analogy, the other kinds would be $--$ or $++$ points.

When damage graphs cross while sloping in opposite directions, like this $-+$ point, then when we move in either direction away from such a coinciding-damage point, at least one of these target-intervals' damages will be going up. And by the nature of the maximum, all it takes is one of their damages going up in order for their max damage to go up.

And so if we look at it the other way around, it means that from any direction coming in toward this point, the maximum damage is going down, and that once we reach this point, there's nowhere lower to go. That's what we mean by a "minimum."

As for the "local" part of "local minimum", this only means that there might be other minima like this one. In order to deal with that part of the term better, we'll have to start looking not only at two target-intervals' damages at a time, but all of them at once.

When we zoom out and consider all the crossings among all our target-intervals, not just these two, we can see all sorts of different crossings. We have some $--$ points and $++$ on the periphery, and some $-+$ points in the middle. Notice that we included among those the zero-damage points on the floor, which aren't exactly crossings, per se, but they are closely related, as we'll see soon enough; they sort of make their own local minima all by themselves. (This isn't even all of the crossings, by the way; it's hard to tell, but the slopes of the red and green lines on the left are such that eventually they'll cross, but way, way off the left side of this view.)

Notice that most of the points are $-+$ type points (9 of them, including the zero-damage ones). Fewer of them are the mere change-of-slope types, the $--$ or $++$ type (5 of them, including the one off-screen to the left). However, of these 9 important $-+$ points, only one of them is the actual minimax! In other words, for every other $-+$ point, when we consider all the other target-intervals too, we find that at least one of their damage graphs passes above it. The minimax point is the only $-+$ that's on top.

So our minimax tuning is found at a point where:

• We have a crossing of target-interval damage graphs,
• But not just any one of those: it has to be a $-+$ type crossing,
• But not just any one of those: it has to be the one that's on top.*

Now, it might seem inefficient to check 14 points, the ones that only meet the first bullet's criterion, just to find the one we want that meets all three bullets' criteria. But actually, for a computer, 14 points is easy peasy. If we compared that with how many points it would check while following the general method for solving this, that could be thousands of times more, and it still would only be finding an approximate solution; the general method has a weaker understanding of the nature of the problem it's solving. In fact, this diagram shows a very simple 3-limit temperament with only 4 target-intervals, and the typical case is going to be 7-limit or higher with a dozen or more target-intervals, which gets combinatorially more complex. But even then it may still be fewer points for the computer to check overall, even though many of them are not going to work out.

And you might wonder: but why don't we just scan along the max graph and pick the $-+$ point? Well, the problem is: we don't have a function for the max damage graph, other than defining it in terms of all the other target-interval damage graphs. So it turns out that checking all of these crossing points is a more efficient way for a computer to find this point, than doing it the way that might seem more obvious to a human observer.

* Once we start learning about tie-breaking, we'll see that this is not always exactly the case. But it's fine for now.

### Points for finding damage miniaverages

In our explanation of the zero-damage method for $p=1$, we saw a similar thing in action for damage miniaverages. But these can only be found at a strict subset of such coinciding-damage points. Specifically, a damage miniaverage is found somewhere among the subset of coinciding-damage points wherever a sufficient count of individual target-intervals' hyper-V-shaped damage graphs intersect along the zero-damage floor of the world, in other words, along their creases.

We won't find miniaverages at any of the other coinciding-damage points, at various heights above the zero-damage floor wherever enough hyper-V's intersect to form a point. We do find minimaxes there, because (to review the previous section) in any direction away from such a point, at least one of the damages will be going up, and all it takes is one damage to go up to cause the max damage to go up. But the same fact is not true of average damage.

In 2D we can see plainly that we don't create any local minimum, or even any change of slope, in our average graph at just any crossing of two damage graphs. On one side of their intersection, one is going up and the other is going down. On the other side of their intersection, the same one is still going up and the same other one is still going down! The average is changing at the same rate.

So the only points where we can say for certain that in any direction no intersecting target-interval's damage has anywhere further down to go are the places where enough creases cross to make a point where they're already along the zero-damage floor. So these are the only points worth looking for a damage miniaverage:

### Zero-damage coincidings

So: both types of points are coinciding-damage points. In fact, it may be helpful for some readers to think of the zero-damage method and its zero-damage point set as the (coinciding-)zero-damage method and its (coinciding-)zero-damage point set. It simply uses only a specialized subset of coinciding-damage points.

Because both types of points are coinciding-damage points, both types are possible candidates for damage minimax tunings. We can see that zero-damage coincidings are just as valid for getting local minima in the max damage graph:

As we'll see in the next subsection, when intersecting on the zero-damage floor, we actually need one fewer target-interval to create a point. So $\textbf{i}_2$ isn't even really necessary here. We just thought it'd be more confusing to leave it off than it would be to keep it in, even though this means we have to accept that the target-intervals are multiples of each other, e.g. we could think of $\textbf{i}_1$ and $\textbf{i}_2$ here as $\frac32$ and $\frac94$, respectively, with prime-count vectors [-1 1⟩ and [-2 2⟩, though it's not like that's a problem or anything. In 3D tuning damage space we wouldn't have this multiples problem; there are a lot more natural and arbitrary-looking angles that creases can be made to intersect at. But in 3D we'd still have the one-fewer-necessary-on-the-floor problem, which is a bigger problem. And in general it's best to demonstrate ideas as simply as possible, so we stuck with 2D.

But this also creates the terminological problem whereby in 2D, a single target-interval damage graph bouncing off the floor is counted as a "coinciding-damage" point. In this case, we can reassure ourselves by imagining that the unison is always sort of in our target-interval set, and its graph is always the flat plane on the floor, since it can never be damaged. So in a way, the target-interval's damage coincides with the unison's, with the unison thought of as the "missing" interval, the one fewer graph required for an intersection here.

We may observe that the latter kind of point, the coinciding-zero-damage points — those where damage graphs intersect on the zero-damage floor — may seem to be less likely candidates for minimax tunings, considering that by the nature of being on the zero-damage floor, where no target-interval's damage could possibly be any lower, there's almost certainly some target-interval with higher damage (whose damage is still increasing in one direction or another). And this can clearly be seen on the diagram we included a bit earlier. However, as we'll find in the later section about tie-breaking, and as hinted at in an asterisked comment earlier, these points are often important for tie-breaking between tunings which are otherwise tied with each other when it comes to those higher-up damages (i.e. in at least one direction, a higher-up damage is neither going up nor going down, such as along an intersection of two damage graphs whose creases are parallel).

### Generalizing to higher dimensions: counts of target-intervals required to make the points

Another difference between the specific zero-damage points and general coinciding-damage points is that zero-damage points require one fewer target-interval damage graph to intersect in order to produce them. That's because a hyper-V's crease along the zero-damage floor has one fewer dimension than the hyper-V itself.

Perhaps this idea is best understood by explaining separately, for specific familiar dimensions of tuning damage space:

• In 3D tuning damage space, for every hyper-V, the main part of each of its two "wings" is a plane, and we know that it takes two intersecting planes to reduce us to a line, and three intersecting planes to reduce that line further to a single point. But each hyper-V's crease is already a line, so it only takes two intersecting hyper-V creases to reduce us to a point.
• In 2D tuning damage space, each wing of a hyper-V is a line, and it takes two intersecting lines to make a point. But each hyper-V's crease is already a single point, so we don't even need any intersections here to find points of possible minimax interest!

In general, the number of target-intervals whose graphs will intersect at a point — i.e., their damages will coincide — is equal to the dimension of the tuning damage space. So in 3D tuning damage, we need three hyper-V's to intersect. Think of it this way: a 3D point has a coordinate in the format $(x,y,z)$, and we need one plane, one target-interval, for each element of that coordinate. But for intersections among creases along the floor, we only need one less than the dimensionality of the tuning damage space to specify a point; that's because we already know that one of the coordinates is 0.

The dimension of the tuning damage space will be equal to the count of generators plus one, or in other words, $r + 1$, where $r$ is the rank. This is because tuning damage space has one dimension along the floor for each generator's tuning, and one additional dimension up off the floor for the damage amounts.

### Points vs. lines; tuning space vs. tuning damage space

Throughout the discussion of this method, we may sometimes refer to "points" and "tunings" almost interchangeably. We'll attempt to dispel some potential confusion.

In tuning damage space, a tuning corresponds with a vertical line, perpendicular to the zero-damage floor. Any point on this line identifies this same tuning. If we took an aerial view on tuning damage space, looking straight down on it — as we do in the topographic, contour-style graphs — these lines would look like points. Basically in this view, the only dimensions are for the generators, and the extra dimension for damage is collapsed. In other words, we go from tuning damage space back to simply tuning space.

So a 2D tuning damage space collapses to a 1D tuning space: a single line, a continuum of the single generator's size. And a 3D tuning damage space collapses to a 2D tuning space, with one generator's size per axis.

So what's tricky about this method in this regard is that to some extent we care about points in tuning damage space, because it's points where key intersections between tuning damage graphs come up. But when two such points fall on the same vertical line, they've identified the same exact tuning, and are thus redundant. So we should be careful to say these are the same tuning, not the same point, but occasionally it may make sense to call them the same point even if they're offset vertically in tuning damage space, because in tuning space they would be the same point.

It might seem wise to draw the vertical tuning lines that correspond with these points, but in general we've found that this is more noise than it's worth.

## How to gather coinciding-damage points

### For a general coinciding-damage point

The first step is to iterate over every combination of $r + 1$ target-intervals, and for each of those combinations, look at all permutations of their relative directions. The rank $r$ is the same as generator count in the basic case (later on we'll see how sometimes in this method it's different). And by "direction" we mean whether an interval is greater than or less than unison (the distinction that an "undirected value" ignores).

Each of these relative direction permutations of target-interval combinations (we can call these "ReDPOTICs", for short) corresponds with a coinciding-damage point, which means a different candidate generator tuning map $𝒈$. The candidate $𝒈$ which causes the least damage to the target-intervals (according to the $∞$-mean, i.e. the max statistic) will be elected as our minimax tuning.

Let's look at an example. Suppose our target-intervals are $\frac32$, $\frac54$, and $\frac53$. And suppose we are working with a rank-1 temperament, i.e. with one generator.

So our combinations of intervals would be: $\{ \{ \frac32, \frac54 \}, \{ \frac32, \frac53 \} , \{ \frac54, \frac53 \} \}$.

And each of these three combinations has two relative direction permutations: one where both intervals have the same direction, and one where both intervals have different directions. For the first combination, that is, we'd look at both $\{ \frac32, \frac54 \}$ and at $\{ \frac32, \frac45 \}$. As you can see, in the latter case, we've made one of the two intervals subunison (less than $\frac11$). To be clear, we're checking only permutations of relative direction here, by which we mean that there's no need to check the case where both intervals are subunison, or the case where which one of the two intervals is subunison and which one of them stays superunison is swapped.

We can see why we only worry about relative direction by explaining what we're going to do with these permutations of target-interval combinations: find the interval that is their product. The two permutations we've chosen above multiply to $\frac32 × \frac54 = \frac{15}{8}$ and $\frac32 × \frac45 = \frac65$. Had we chosen the other two permutations, they'd've multiplied to $\frac23 × \frac45 = \frac{8}{15}$ and $\frac23 × \frac54 = \frac56$. These second two intervals are simply the reciprocals of the first two results, and so in terms of tuning they are equivalent (technically speaking, we only care about undirected intervals, i.e. neither $\frac{15}{8}$ nor $\frac{8}{15}$ but rather $8:15$).

As for why we care about the intervals that are the products of these ReDPOTICs, we'll look into that in just a moment. In short, it has to do with our originally stated plan: to find places where target-intervals have coinciding amounts of damage. (If you're feeling bold, you might try to work out how this product could relate to that already; if not, don't worry, we'll eventually explain it all in detail.)

Keenan came up with a clever way to achieve this only-caring-about-relative-direction permutations effect: simply restrict the first element in each combination to the positive direction. This effortlessly eliminates exactly half of the possible permutations, namely, the ones that are reciprocals of all the others. Done.
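
This gathering step, including Keenan's first-element-superunison trick, can be sketched in a few lines of Python (the names here are our own, not from tiptop.py):

```python
from fractions import Fraction
from itertools import combinations, product

targets = [Fraction(3, 2), Fraction(5, 4), Fraction(5, 3)]
r = 1  # rank: each ReDPOTIC combines r + 1 target-intervals

redpotics = []
for combo in combinations(targets, r + 1):
    # Keenan's trick: fix the first interval superunison and vary only the
    # rest, so each relative-direction pattern appears exactly once
    for signs in product((1, -1), repeat=r):
        redpotics.append(tuple(t if s == 1 else 1 / t
                               for t, s in zip(combo, (1,) + signs)))
```

For our three targets and $r = 1$ this yields six ReDPOTICs: each of the three pairs, once with both intervals superunison and once with the second interval flipped subunison.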

### For a zero-damage point

Gathering the zero-damage points is much more straightforward. We don't need to build a ReDPOTIC. We don't need to worry about ReD (relative direction), or permutations of (PO) anything. We only need to worry about TIC (target-interval combinations).

But remember, these aren't combinations of the same count of target-intervals. They have one less target-interval each. So let's call them "smaller target-interval combinations", or STICs.

For each STIC, then, we simply want each combination of $r$ of our target-intervals. For our example, with $r=1$, that'd simply be $\{ \{ \frac32 \}, \{ \frac54 \} , \{ \frac53 \} \}$.
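
In Python, the STIC gathering really is just plain combinations (a sketch, with our own names again):

```python
from fractions import Fraction
from itertools import combinations

targets = [Fraction(3, 2), Fraction(5, 4), Fraction(5, 3)]
r = 1  # zero-damage points need only r target-intervals each

# STICs: plain combinations, no direction permutations needed
stics = list(combinations(targets, r))  # three singleton combinations
```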

## How to build constraint matrices

Once we've gathered all of our coinciding-damage points, both the general kind from ReDPOTICs and the zero-damage kind from STICs, we're ready to prepare constraint matrices. When we apply these to our familiar inequalities, we can convert them to solvable equalities. More on that in the next section, where we work through an example.

These constraint matrices are not themselves directly about optimizing tunings; they're simply about identifying tunings that meet these ReDPOTIC and STIC descriptions. Many of these tunings, as we saw in an earlier section, are completely awful! But that's just how it goes. The optimum tuning is among these, but many other tunings technically fit the description we use to find them.

Let's call these matrices $K$, for "konstraint" ("C" is taken for a more widely important matrix in RTT, the comma basis).

### For a general coinciding-damage point

With each of our general coinciding-damage points, we can build a constraint matrix from its ReDPOTIC. For some readers, the approach may be best summed up simply by listing what these constraint matrices would be for the example we've been going with so far:

$\begin{array} {c} \scriptsize 3/2 \\ \scriptsize 5/4 \\ \scriptsize 5/3 \end{array} \left[ \begin{array} {c} +1 \\ +1 \\ 0 \end{array} \right] , \left[ \begin{array} {c} +1 \\ {-1} \\ 0 \end{array} \right] , \left[ \begin{array} {c} +1 \\ 0 \\ +1 \end{array} \right] , \left[ \begin{array} {c} +1 \\ 0 \\ {-1} \end{array} \right] , \left[ \begin{array} {c} 0 \\ +1 \\ +1 \end{array} \right] , \left[ \begin{array} {c} 0 \\ +1 \\ {-1} \end{array} \right]$

Each constraint matrix is a $(k, r)$-shaped matrix, i.e. with one row for each target-interval and one column for each generator. Every entry in these constraint matrices will be either $0$, $+1$, or $-1$.

• If the value is $0$, it means that target-interval is not included in the combination.
• If the value is $+1$, we take the target-interval's superunison value.
• If the value is $-1$, we take the target-interval's subunison value.

Another way to look at these values is from the perspective of the product that we ultimately make from the combination of target-intervals: these values are the powers to which to raise each target-interval before multiplying everything in a column together. So a power of +1 includes the target-interval as is, a power of -1 reciprocates it, and a power of 0 sends it to unison (so multiplying it in with the rest has no effect).

So, for example, the last constraint matrix here, [0 +1 -1], means that with our example target-interval list [$\frac32$, $\frac54$, and $\frac53$] we've got a superunison $\frac54$ and a subunison $\frac53$ (and no $\frac32$), so that's $\frac54 × \frac35 = \frac34$, or in other words, $(\frac32)^{0}(\frac54)^{+1}(\frac53)^{-1}$. Yes, we're still in suspense about what the purpose of these products is, but we'll address that soon.

Notice that the first nonzero entry in each of these constraint matrices is $+1$, per the previous section's point about effecting relative direction.
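
Here's a short sketch (our own names, not from D&D's library) of how these constraint columns can be enumerated for rank $r$ and $k$ target-intervals, fixing the first nonzero entry at $+1$:

```python
from itertools import combinations, product

# Choose r+1 of the k target-intervals, then assign directions; fixing the
# first chosen target-interval's entry at +1 discards reciprocal duplicates.
k, r = 3, 1
columns = []
for chosen in combinations(range(k), r + 1):
    for signs in product([1, -1], repeat=r):
        col = [0] * k
        col[chosen[0]] = 1                      # anchored positive
        for t, s in zip(chosen[1:], signs):
            col[t] = s
        columns.append(col)
print(columns)
# [[1, 1, 0], [1, -1, 0], [1, 0, 1], [1, 0, -1], [0, 1, 1], [0, 1, -1]]
```

These six columns match the six constraint matrices listed above, in the same order.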

### For a zero-damage point

For each of our zero-damage points we can build a constraint matrix from its STIC. Here those are:

$\begin{array} {c} \scriptsize 3/2 \\ \scriptsize 5/4 \\ \scriptsize 5/3 \end{array} \left[ \begin{array} {c} +1 \\ 0 \\ 0 \end{array} \right] , \left[ \begin{array} {c} 0 \\ +1 \\ 0 \end{array} \right] , \left[ \begin{array} {c} 0 \\ 0 \\ +1 \end{array} \right]$

These tell us which interval will be unchanged in the corresponding tuning (take that as a hint for what will happen with the $K$ for ReDPOTICs!).

Note that since, as we noted earlier, relative direction is irrelevant for zero-damage points, these matrices will never contain -1 entries. They will only ever contain 0 and +1.

## A simple example

For our first overarching example, to help us intuit how this technique works, let's use target-intervals even simpler than the ones we've looked at so far: just the primes, and thus $\mathrm{T} = \mathrm{T}_{\text{p}} = I$, an identity matrix we can ignore.

Let's also leave damage unweighted, so the weight matrix $W = I$ too.

For our temperament, we'll go with the very familiar 12-ET, so $M$ = ⟨12 19 28]. Since this mapping is in the 5-prime-limit, our $𝒋$ = 1200 × ⟨$\log_2(2)$ $\log_2(3)$ $\log_2(5)$]. And since our mapping is an equal temperament, our generator tuning map $𝒈$ has only a single entry $g_1$.

### A system of approximations

We'll be applying a constraint matrix to our by-now familiar approximation $𝒈M \approx 𝒋$ in order to transform it from an approximation into an equality, that is, to be able to change its approximately-equals sign into an equals sign. This is how each of these constraints takes us to a single solution.

$\begin{array} {ccc} 𝒈 \\ \left[ \begin{array} {rrr} g_1 \\ \end{array} \right] \end{array} \begin{array} {ccc} M \\ \left[ \begin{array} {rrr} 12 & 19 & 28 \\ \end{array} \right] \end{array} \approx \begin{array} {ccc} 𝒋 \\ \left[ \begin{array} {rrr} 1200\log_2(2) & 1200\log_2(3) & 1200\log_2(5) \\ \end{array} \right] \end{array}$

Another way to view a matrix expression like this is as a system of multiple expressions — in this case, a system of approximations:

$12g_1 \approx 1200\log_2(2) \\ 19g_1 \approx 1200\log_2(3) \\ 28g_1 \approx 1200\log_2(5)$

One variable to satisfy three approximations… that's asking a lot of that one variable! We can see that if we tried to make these all equalities, it wouldn't be possible for all of them to be true at the same time:

$12g_1 = 1200\log_2(2) \\ 19g_1 = 1200\log_2(3) \\ 28g_1 = 1200\log_2(5)$

But this of course is the whole idea of tempering: when we approximate some number of primes with fewer generators, we can't approximate all of them exactly at once.

Constraints we apply to the problem, however, can simplify it to a point where there is an actual solution, i.e. where the count of equations matches the count of variables, AKA the count of generators.

### Apply constraint to system

So let's try applying one of our constraint matrices to this equation. Suppose we get the constraint matrix [+1 +1 0]. (We may notice this happens to be one of those made from a ReDPOTIC, for a general coinciding-damage point.) This constraint matrix tells us that the target-interval combination is $\frac21$ and $\frac31$, because those are the target-intervals corresponding to its nonzero entries. And both nonzero entries are $+1$ meaning that both target-intervals are combined in the same direction. In other words, $\frac21 × \frac31 = \frac61$ is going to have something to do with this (and we're finally about to find out what that is!).

We multiply both sides of our $𝒈M \approx 𝒋$ style setup by that constraint, to produce $𝒈MK \approx 𝒋K$:

$\begin{array} {ccc} 𝒈 \\ \left[ \begin{array} {rrr} g_1 \\ \end{array} \right] \end{array} \begin{array} {ccc} M \\ \left[ \begin{array} {rrr} 12 & 19 & 28 \\ \end{array} \right] \end{array} \begin{array} {ccc} K \\ \left[ \begin{array} {rrr} +1 \\ +1 \\ 0 \end{array} \right] \end{array} \approx \begin{array} {ccc} 𝒋 \\ \left[ \begin{array} {rrr} 1200\log_2(2) & 1200\log_2(3) & 1200\log_2(5) \\ \end{array} \right] \end{array} \begin{array} {ccc} K \\ \left[ \begin{array} {rrr} +1 \\ +1 \\ 0 \end{array} \right] \end{array}$

And now multiply that through, to get:

\begin{align} \begin{array} {ccc} 𝒈 \\ \left[ \begin{array} {rrr} g_1 \\ \end{array} \right] \end{array} \begin{array} {ccc} MK \\ \left[ \begin{array} {rrr} (12)(+1) + (19)(+1) + (28)(0) \\ \end{array} \right] \end{array} &\approx \begin{array} {ccc} 𝒋K \\ \left[ \begin{array} {rrr} (1200\log_2(2))(+1) + (1200\log_2(3))(+1) + (1200\log_2(5))(0) \\ \end{array} \right] \end{array} \\[15pt] \left[ \begin{array} {rrr} g_1 \\ \end{array} \right] \left[ \begin{array} {rrr} 31 \\ \end{array} \right] &= \left[ \begin{array} {rrr} 1200\log_2(2) + 1200\log_2(3) \\ \end{array} \right] \end{align}

So now we've simplified things down to a single equation with a single variable. All of our matrices are $(1,1)$-shaped, which is essentially the same thing as a scalar, so we can drop the brackets around them and just treat them as such. And we'll swap the $31$ and $g_1$ around to put constants and variables in the conventional order, since scalar multiplication is commutative. Finally, we can use a basic logarithmic identity to consolidate what we have on the right-hand side:

$31g_1 = 1200\log_2(6)$

So with our constraint matrix, we've achieved the situation we needed, where we have a matching count of equations and generators. We can solve for this generator tuning:

$g_1 = \dfrac{1200\log_2(6)}{31}$
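
As a quick numerical check (a sketch with our own variable names), the constrained system can be solved directly:

```python
from math import log2

# The constraint column [+1 +1 0] collapses the map [12, 19, 28] to 31 steps
# spanning a pure 6/1; solve (mK)·g1 = (jK) for g1.
m = [12, 19, 28]
j = [1200 * log2(p) for p in (2, 3, 5)]
K_col = [1, 1, 0]
mK = sum(mi * ki for mi, ki in zip(m, K_col))   # 31
jK = sum(ji * ki for ji, ki in zip(j, K_col))   # 1200·log2(6)
g1 = jK / mK
print(round(g1, 3))  # 100.063
```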

### The meaning of the ReDPOTIC product

And that's our tuning, the tuning found at this coinciding-damage point.

It's a tuning which makes $\frac61$ pure by dividing it into 31 equal parts.

In cents, our generator $g_1$ is equal to about 100.063 ¢, and indeed 31 × 100.063 ≈ 3101.955, which is $1200 × \log_2(6)$.

And so that's what the constraint matrix's ReDPOTIC product $\frac61$ had to do with things: this product is an unchanged-interval of this tuning (the only one, in fact).

But our original intention here was to find the tuning where $\frac21$ and $\frac31$ have coinciding damage. Well, it turns out this is an equivalent situation. If — according to this temperament's mapping ⟨12 19 28] — it takes 12 steps to reach $\frac21$ and 19 steps to reach $\frac31$, then it takes 31 steps to reach $\frac61$; and if $\frac61$ is also pure, then whatever error there is on $\frac21$ must be the exact opposite of whatever error there is on $\frac31$, since their errors evidently cancel out. And if their errors are exact opposites — negations — then their damages are the same. So we achieve coinciding target-interval damages via an unchanged-interval that they all relate to. Cue the success fanfare.
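
We can verify this cancellation numerically (a small sketch; the names are ours):

```python
from math import log2

# With 6/1 pure (31·g1 = 1200·log2(6)), the errors on 2/1 and 3/1
# must be exact negations, so their absolute damages coincide.
g1 = 1200 * log2(6) / 31
e2 = 12 * g1 - 1200 * log2(2)   # error on 2/1
e3 = 19 * g1 - 1200 * log2(3)   # error on 3/1
print(round(e2, 3), round(e3, 3))  # 0.757 -0.757
```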

### Applying a constraint for a zero-damage point

Let's also try applying a $K$ for a zero-damage point, i.e. one that came from a STIC.

Suppose we get the constraint matrix [0 0 +1]. This constraint matrix tells us that $\frac51$ will be unchanged, because that's the target-interval corresponding to its nonzero entry (all entries of these types of $K$ will only be 0 or +1, recall).

We multiply both sides of our $𝒈M \approx 𝒋$ style setup by that constraint, to produce $𝒈MK \approx 𝒋K$:

$\begin{array} {ccc} 𝒈 \\ \left[ \begin{array} {rrr} g_1 \\ \end{array} \right] \end{array} \begin{array} {ccc} M \\ \left[ \begin{array} {rrr} 12 & 19 & 28 \\ \end{array} \right] \end{array} \begin{array} {ccc} K \\ \left[ \begin{array} {rrr} 0 \\ 0 \\ +1 \end{array} \right] \end{array} \approx \begin{array} {ccc} 𝒋 \\ \left[ \begin{array} {rrr} 1200\log_2(2) & 1200\log_2(3) & 1200\log_2(5) \\ \end{array} \right] \end{array} \begin{array} {ccc} K \\ \left[ \begin{array} {rrr} 0 \\ 0 \\ +1 \end{array} \right] \end{array}$

And now multiply that through, to get:

\begin{align} \begin{array} {ccc} 𝒈 \\ \left[ \begin{array} {rrr} g_1 \\ \end{array} \right] \end{array} \begin{array} {ccc} MK \\ \left[ \begin{array} {rrr} (12)(0) + (19)(0) + (28)(+1) \\ \end{array} \right] \end{array} &\approx \begin{array} {ccc} 𝒋K \\ \left[ \begin{array} {rrr} (1200\log_2(2))(0) + (1200\log_2(3))(0) + (1200\log_2(5))(+1) \\ \end{array} \right] \end{array} \\[15pt] \left[ \begin{array} {rrr} g_1 \\ \end{array} \right] \left[ \begin{array} {rrr} 28 \\ \end{array} \right] &= \left[ \begin{array} {rrr} 1200\log_2(5) \\ \end{array} \right] \\[15pt] 28g_1 &= 1200\log_2(5) \\[15pt] g_1 &= \dfrac{1200\log_2(5)}{28} \\[15pt] g_1 &= 99.511 \end{align}
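
Again as a quick check (same sketch style as before, with our own variable names):

```python
from math import log2

# Zero-damage constraint [0 0 +1]: the generator divides a pure 5/1
# into 28 equal steps.
m = [12, 19, 28]
j = [1200 * log2(p) for p in (2, 3, 5)]
K_col = [0, 0, 1]
g1 = sum(ji * ki for ji, ki in zip(j, K_col)) / sum(mi * ki for mi, ki in zip(m, K_col))
print(round(g1, 3))  # 99.511
```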

### Comparing the zero-damage method's unchanged-interval bases with the coinciding-damage method's constraint matrices

If you recall, the zero-damage method for miniaverage tunings works by directly assembling unchanged-interval bases $\mathrm{U}$ out of combinations of target-intervals. The coinciding-damage method here, however, indirectly achieves unchanged-interval bases via constraint matrices $K$. It does this both for the zero-damage points such as are used by the zero-damage method, as well as for the general coinciding-damage points that the zero-damage method does not use.

Though we note that even the general coinciding-damage points, where $r + 1$ target-intervals take on some coinciding, possibly nonzero damage, are equivalent to zero-damage points where $r$ intervals take zero damage; the difference is that those unchanged-intervals are not actually target-intervals, but rather products of pairs of directional permutations of them.


The zero-damage method could have been designed to use constraint matrices too, but this would probably be overkill. When general coinciding-damage points are not needed, it's simpler to use unchanged-interval bases directly.

### Get damage lists

From here, we basically just need to take every tuning we find from linear solutions like this one, and for each one, find its target-interval damage list, and then from that find its maximum damage.

In an earlier section's example we found a candidate tuning where $g_1 = \frac{1200\log_2(6)}{31} \approx 100.0632$, so we could check damages for this one using our familiar formula:

$\textbf{d} = |\,𝒈M\mathrm{T}W - 𝒋\mathrm{T}W\,|$

And we said that both $\mathrm{T}$ and $W$ are identity matrices to simplify things so we can get rid of those.

$\textbf{d} = |\,𝒈M - 𝒋\,|$

And now substitute in the 100.0632:

$\textbf{d} = \Large | \normalsize \begin{array} {ccc} 𝒈 \\ \left[ \begin{array} {rrr} 100.0632 \\ \end{array} \right] \end{array} \begin{array} {ccc} M \\ \left[ \begin{array} {rrr} 12 & 19 & 28 \\ \end{array} \right] \end{array} - \begin{array} {ccc} 𝒋 \\ \left[ \begin{array} {rrr} 1200.000 & 1901.955 & 2786.314 \\ \end{array} \right] \end{array} \Large |$

Anyway, that's enough busywork for now. You can work that out if you like, and then you'll have to work it out in the same way for every single candidate tuning.

You'll end up with a ton of possible damage lists $\textbf{d}$, one for each generator tuning $𝒈$ (the $K$ have been transposed here to fit better):

$\begin{array} {c} \left[ \begin{array} {rrr} +1 & +1 & 0 \end{array} \right] & 𝒈_1 = \left[ \begin{array} {rrr} 100.063 \end{array} \right] & \textbf{d}_1 = \left[ \begin{array} {rrr} 0.757 & 0.757 & 15.452 \end{array} \right] \\ \left[ \begin{array} {rrr} +1 & {-1} & 0 \end{array} \right] & 𝒈_2 = \left[ \begin{array} {rrr} 100.279 \end{array} \right] & \textbf{d}_2 = \left[ \begin{array} {rrr} 3.351 & 3.351 & 21.506 \end{array} \right] \\ \left[ \begin{array} {rrr} +1 & 0 & +1 \end{array} \right] & 𝒈_3 = \left[ \begin{array} {rrr} 99.657 \end{array} \right] & \textbf{d}_3 = \left[ \begin{array} {rrr} 4.106 & 8.456 & 4.106 \end{array} \right] \\ \left[ \begin{array} {rrr} +1 & 0 & {-1} \end{array} \right] & 𝒈_4 = \left[ \begin{array} {rrr} 99.144 \end{array} \right] & \textbf{d}_4 = \left[ \begin{array} {rrr} 10.265 & 18.208 & 10.265 \end{array} \right] \\ \left[ \begin{array} {rrr} 0 & +1 & +1 \end{array} \right] & 𝒈_5 = \left[ \begin{array} {rrr} 99.750 \end{array} \right] & \textbf{d}_5 = \left[ \begin{array} {rrr} 2.995 & 6.697 & 6.697 \end{array} \right] \\ \left[ \begin{array} {rrr} 0 & +1 & {-1} \end{array} \right] & 𝒈_6 = \left[ \begin{array} {rrr} 98.262 \end{array} \right] & \textbf{d}_6 = \left[ \begin{array} {rrr} 20.855 & 34.976 & 34.976 \end{array} \right] \\ \left[ \begin{array} {rrr} +1 & 0 & 0 \end{array} \right] & 𝒈_7 = \left[ \begin{array} {rrr} 100.000 \end{array} \right] & \textbf{d}_7 = \left[ \begin{array} {rrr} 0.000 & 1.955 & 13.686\end{array} \right] \\ \left[ \begin{array} {rrr} 0 & +1 & 0 \end{array} \right] & 𝒈_8 = \left[ \begin{array} {rrr} 100.103 \end{array} \right] & \textbf{d}_8 = \left[ \begin{array} {rrr} 1.235 & 0.000 & 16.567 \end{array} \right] \\ \left[ \begin{array} {rrr} 0 & 0 & +1 \end{array} \right] & 𝒈_9 = \left[ \begin{array} {rrr} 99.511 \end{array} \right] & \textbf{d}_9 = \left[ \begin{array} {rrr} 5.866 & 11.242 & 0.000 \end{array} \right] \\ \end{array}$

The first six of these are from ReDPOTICs, for general coinciding-damage points. The last three are from STICs, for zero-damage points.
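
The whole table can be reproduced with a few lines (a sketch, using our own names; each candidate solves $(MK)g_1 = 𝒋K$ for $g_1$ and then takes the damages $|g_1M - 𝒋|$):

```python
from math import log2

m = [12, 19, 28]
j = [1200 * log2(p) for p in (2, 3, 5)]
Ks = [[1, 1, 0], [1, -1, 0], [1, 0, 1], [1, 0, -1], [0, 1, 1], [0, 1, -1],  # ReDPOTICs
      [1, 0, 0], [0, 1, 0], [0, 0, 1]]                                      # STICs
results = []
for K in Ks:
    g1 = sum(ji * ki for ji, ki in zip(j, K)) / sum(mi * ki for mi, ki in zip(m, K))
    d = [abs(g1 * mi - ji) for mi, ji in zip(m, j)]
    results.append((g1, d))
# Identify the minimax: the candidate whose largest damage is smallest.
g1_best, d_best = min(results, key=lambda gd: max(gd[1]))
print(round(g1_best, 3), round(max(d_best), 3))  # 99.75 6.697
```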

For each damage list, we can find the coinciding damages. In the first tuning, it's the first two target-intervals' damages, both with $0.757$. In the fifth tuning, it's the second and third target-intervals' damages, both with $6.697$. Etcetera. Note that these coinciding damages are not necessarily the max damages of the tuning; for example, the third tuning shows the first and third target-intervals both equal to $4.106$ damage, but the second interval has more than twice that, at $8.456$ damage. That's fine. In many cases, in fact, the tuning we ultimately want is one of these where the coinciding damages are not the max damages.

### Identify minimax

Identifying the minimax is generally pretty straightforward. We gather up all the maxes. And pick their min. That's the min-i-max.

So here are the maxes:

$\begin{array} {c} 𝒈_1 = \left[ \begin{array} {rrr} 100.063 \end{array} \right] & \text{max}(\textbf{d}_1) = 15.452 \\ 𝒈_2 = \left[ \begin{array} {rrr} 100.279 \end{array} \right] & \text{max}(\textbf{d}_2) = 21.506 \\ 𝒈_3 = \left[ \begin{array} {rrr} 99.657 \end{array} \right] & \text{max}(\textbf{d}_3) = 8.456 \\ 𝒈_4 = \left[ \begin{array} {rrr} 99.144 \end{array} \right] & \text{max}(\textbf{d}_4) = 18.208 \\ 𝒈_5 = \left[ \begin{array} {rrr} 99.750 \end{array} \right] & \text{max}(\textbf{d}_5) = 6.697 \\ 𝒈_6 = \left[ \begin{array} {rrr} 98.262 \end{array} \right] & \text{max}(\textbf{d}_6) = 34.976 \\ 𝒈_7 = \left[ \begin{array} {rrr} 100.000 \end{array} \right] & \text{max}(\textbf{d}_7) = 13.686 \\ 𝒈_8 = \left[ \begin{array} {rrr} 100.103 \end{array} \right] & \text{max}(\textbf{d}_8) = 16.567 \\ 𝒈_9 = \left[ \begin{array} {rrr} 99.511 \end{array} \right] & \text{max}(\textbf{d}_9) = 11.242 \\ \end{array}$

Out of these maximum values, 6.697 is the minimum. So that's our minimax tuning, $𝒈_5$, where the generator is 99.750 ¢ and the max damage to any of our target-intervals is 6.697 ¢(U).

Had there been a tie here, i.e. had some other tuning besides $𝒈_5$ also had 6.697 ¢ for its maximum damage, such that more than one tuning tied for minimax, then we would need to move on to tie-breaking. That gets very involved, so we'll look at it in detail in a later section.

## A bigger example

The rank-1 temperament case we've just worked through, which has just one generator, was a great introduction, but a bit too simple to demonstrate some of the ideas we want to touch upon here.

• Some aspects of the constraint matrices require multiple generators in order to illustrate effectively.
• And we didn't demonstrate with weighting yet.
• And we didn't demonstrate with a more interesting target-interval set yet.
• And we didn't compute an exact solution via generator embedding yet.

Dang!

So let's work through another example, this time with:

• a rank-3 temperament,
• complexity-weight damage,
• a more interesting target-interval set,
• and an answer in the form of a generator embedding.

### Prepare constraint matrix

If we have three generators, we will have many coinciding-damage points, each one corresponding to its own constraint matrix $K$. For this example, we're not going to bother showing all of them. It would be way too much to show. Let's just follow the logic from start to finish with a single constraint matrix.

Let's suppose our target-interval list is $\{ \frac65, \frac75, \frac85, \frac95, \frac76, \frac43, \frac32, \frac87, \frac97, \frac98 \}$. We've labeled each of the rows of our $K$ here with its corresponding target-interval:

$\begin{array} {rrr} \scriptsize{6/5} \\ \scriptsize{7/5} \\ \scriptsize{8/5} \\ \scriptsize{9/5} \\ \scriptsize{7/6} \\ \scriptsize{4/3} \\ \scriptsize{3/2} \\ \scriptsize{8/7} \\ \scriptsize{9/7} \\ \scriptsize{9/8} \\ \end{array} \left[ \begin{array} {rrr} 0 & 0 & 0 \\ +1 & +1 & +1 \\ +1 & 0 & 0 \\ 0 & {-1} & 0 \\ 0 & 0 & 0 \\ 0 & 0 & {-1} \\ 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \\ \end{array} \right]$

As we can see, this is one of the ReDPOTIC types of constraints, for a general coinciding-damage point. (We won't work through a STIC type for this example; there's actually nothing particularly helpful that we don't already understand that would be illustrated by that.)

So this constraint matrix makes three statements:

1. The first column tells us that the (possibly-weighted) errors for $\frac75$ and $\frac85$ are opposites (same value but opposite sign), because the damage to $\frac75 × \frac85$ is zero.
2. The second column tells us that the (possibly-weighted) errors for $\frac75$ and $\frac95$ are identical, because the damage to $\frac75 × \frac59$ is zero.
3. The third column tells us that the (possibly-weighted) errors for $\frac75$ and $\frac43$ are identical, because the damage to $\frac75 × \frac34$ is zero.

Here's something important to observe that we couldn't confront yet with the simpler single-generator example. Note that while there is always one column of the constraint matrix for each generator, each column of the constraint matrix has no particular association with any one of the generators. In other words, it wouldn't make sense for us to label the first column of this $K$ with $g_1$, the second with $g_2$, and the third with $g_3$ (or any other ordering of those); each column is as relevant to one of those generators as it is to any other. Any one of the generators may turn out to be the one which satisfies one of these constraints. Said another way, when we perform matrix multiplication between the $M\mathrm{T}W$ situation and this $K$ matrix, each column of $K$ touches each row of $M\mathrm{T}W$, so $K$'s influence is exerted across the board.

Another thing to note is that we set up the constraint matrix so that there's one target-interval that has a non-zero entry in every column, and that this is also the first target-interval with a non-zero entry in each column, i.e. the one that's been anchored to the positive direction. As we can see, in our case, that target-interval is $\frac75$. Setting up our constraint matrix in this way is how we establish — using the transitive property of equality — that all four of these target-intervals with non-zero entries somewhere in their rows will have coinciding (equal) damages. Because if A's damage equals B's, and B's damage equals C's, then we also know that A's damage equals C's. And same for D. So we end up with A's damage = B's damage = C's damage = D's damage. All four have coinciding damage.

Eventually we want to multiply this constraint matrix by $M\mathrm{T}W$ and by $\mathrm{T}W$. So let's look at those next.

### Prepare tempered and just sides of to-be equality

For our mapping, let's use the minimal generator form of breed temperament, and for weights, let's use complexity-weighted damage ($W = C$).

$\scriptsize \begin{array} {ccc} M \\ \left[ \begin{array} {rrr} 1 & 1 & 1 & 2 \\ 0 & 2 & 3 & 2 \\ 0 & 0 & 2 & 1 \end{array} \right] \end{array} \begin{array} {ccc} \mathrm{T} \\ \left[ \begin{array} {r|r|r|r|r|r|r|r|r|r} 1 & 0 & 3 & 0 & {-1} & 2 & {-1} & 3 & 0 & {-3} \\ 1 & 0 & 0 & 2 & {-1} & {-1} & 1 & 0 & 2 & 2 \\ {-1} & {-1} & {-1} & {-1} & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 1 & 0 & 0 & {-1} & {-1} & 0 \\ \end{array} \right] \end{array} \begin{array} {ccc} C \\ \text{diag}(\log_2(\left[ \begin{array} {rrr} 30 & 35 & 40 & 45 & 42 & 12 & 6 & 56 & 63 & 72 \\ \end{array} \right])) \end{array}$

And that resolves to the following:

$\scriptsize \begin{array} {ccc} M\mathrm{T}C \\ \left[ \begin{array} {r|r|r|r|r|r|r|r|r|r} \log_2(30) & \log_2(35) & 2\log_2(40) & \log_2(45) & 0 & \log_2(12) & 0 & \log_2(56) & 0 & {-\log_2(72)} \\ {-\log_2(30)} & {-\log_2(35)} & {-3\log_2(40)} & \log_2(45) & 0 & {-2\log_2(12)} & 2\log_2(6) & {-2\log_2(56)} & 2\log_2(63) & 4\log_2(72) \\ {-2\log_2(30)} & {-\log_2(35)} & {-2\log_2(40)} & {-2\log_2(45)} & \log_2(42) & 0 & 0 & {-\log_2(56)} & {-\log_2(63)} & 0 \\ \end{array} \right] \end{array}$

And what we've got on the other side of the equality is $\mathrm{T}W$. Note that we're not using $𝒋\mathrm{T}W$ here, since we're shooting for a generator embedding $G$ such that $GM\mathrm{T}W \approx \mathrm{T}W$; in other words, we took $𝒋GM\mathrm{T}W \approx 𝒋\mathrm{T}W$ and canceled out the $𝒋$ on both sides.

$\scriptsize \begin{array} {ccc} \mathrm{T} \\ \left[ \begin{array} {r|r|r|r|r|r|r|r|r|r} 1 & 0 & 3 & 0 & {-1} & 2 & {-1} & 3 & 0 & {-3} \\ 1 & 0 & 0 & 2 & {-1} & {-1} & 1 & 0 & 2 & 2 \\ {-1} & {-1} & {-1} & {-1} & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 1 & 0 & 0 & {-1} & {-1} & 0 \\ \end{array} \right] \end{array} \begin{array} {ccc} C \\ \text{diag}(\log_2(\left[ \begin{array} {rrr} 30 & 35 & 40 & 45 & 42 & 12 & 6 & 56 & 63 & 72 \\ \end{array} \right])) \end{array}$

And that resolves to the following:

$\begin{array} {ccc} \mathrm{T}C \\ \left[ \begin{array} {r|r|r|r|r|r|r|r|r|r} \log_2(30) & 0 & 3\log_2(40) & 0 & {-\log_2(42)} & 2\log_2(12) & {-\log_2(6)} & 3\log_2(56) & 0 & {-3\log_2(72)} \\ \log_2(30) & 0 & 0 & 2\log_2(45) & {-\log_2(42)} & {-\log_2(12)} & \log_2(6) & 0 & 2\log_2(63) & 2\log_2(72) \\ {-\log_2(30)} & {-\log_2(35)} & {-\log_2(40)} & {-\log_2(45)} & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & \log_2(35) & 0 & 0 & \log_2(42) & 0 & 0 & {-\log_2(56)} & {-\log_2(63)} & 0 \\ \end{array} \right] \end{array}$

### Apply constraint

Now we've got to constrain both sides of the problem. First the left side:

$\scriptsize \begin{array} {ccc} M\mathrm{T}C \\ \left[ \begin{array} {r|r|r|r|r|r|r|r|r} \log_2(30) & \log_2(35) & 2\log_2(40) & \log_2(45) & 0 & \log_2(12) & 0 & \log_2(56) & 0 & {-\log_2(72)} \\ {-\log_2(30)} & {-\log_2(35)} & {-3\log_2(40)} & \log_2(45) & 0 & {-2\log_2(12)} & 2\log_2(6) & {-2\log_2(56)} & 2\log_2(63) & 4\log_2(72) \\ {-2\log_2(30)} & {-\log_2(35)} & {-2\log_2(40)} & {-2\log_2(45)} & \log_2(42) & 0 & 0 & {-\log_2(56)} & {-\log_2(63)} & 0 \\ \end{array} \right] \end{array} \begin{array} {ccc} K \\ \left[ \begin{array} {rrr} 0 & 0 & 0 \\ +1 & +1 & +1 \\ +1 & 0 & 0 \\ 0 & {-1} & 0 \\ 0 & 0 & 0 \\ 0 & 0 & {-1} \\ 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \\ \end{array} \right] \end{array} → \\ \begin{array} {c} M\mathrm{T}CK \\ \left[ \begin{array} {c} \log_2(35·40^2) & {-\log_2(\frac{45}{35})} & \log_2(\frac{35}{12}) \\ {-\log_2(35·40^3)} & {-\log_2(35·45)} & \log_2(\frac{12^2}{35}) \\ {-\log_2(35·40^2)} & \log_2(\frac{45^2}{35}) & {-\log_2(35)} \\ \end{array} \right] \end{array}$

And now the right side:

$\scriptsize \begin{array} {ccc} \mathrm{T}C \\ \left[ \begin{array} {r|r|r|r|r|r|r|r|r|r} \log_2(30) & 0 & 3\log_2(40) & 0 & {-\log_2(42)} & 2\log_2(12) & {-\log_2(6)} & 3\log_2(56) & 0 & {-3\log_2(72)} \\ \log_2(30) & 0 & 0 & 2\log_2(45) & {-\log_2(42)} & {-\log_2(12)} & \log_2(6) & 0 & 2\log_2(63) & 2\log_2(72) \\ {-\log_2(30)} & {-\log_2(35)} & {-\log_2(40)} & {-\log_2(45)} & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & \log_2(35) & 0 & 0 & \log_2(42) & 0 & 0 & {-\log_2(56)} & {-\log_2(63)} & 0 \\ \end{array} \right] \end{array} \begin{array} {ccc} K \\ \left[ \begin{array} {rrr} 0 & 0 & 0 \\ +1 & +1 & +1 \\ +1 & 0 & 0 \\ 0 & {-1} & 0 \\ 0 & 0 & 0 \\ 0 & 0 & {-1} \\ 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \\ \end{array} \right] \end{array} → \\ \begin{array} {c} \mathrm{T}CK \\ \left[ \begin{array} {c} \log_2(40^3) & 0 & {-\log_2(12^2)} \\ 0 & {-\log_2(45^2)} & \log_2(12) \\ {-\log_2(35·40)} & \log_2(\frac{45}{35}) & {-\log_2(35)} \\ \log_2(35) & \log_2(35) & \log_2(35) \\ \end{array} \right] \end{array}$

So now we can put them together as an equality, making sure to include the generator embedding that we're solving for on the left-hand side:

$\small \begin{array} {c} G \\ \left[ \begin{array} {c} g_{11} & g_{12} & g_{13} \\ g_{21} & g_{22} & g_{23} \\ g_{31} & g_{32} & g_{33} \\ g_{41} & g_{42} & g_{43} \\ \end{array} \right] \end{array} \begin{array} {c} M\mathrm{T}CK \\ \left[ \begin{array} {c} \log_2(35·40^2) & {-\log_2(\frac{45}{35})} & \log_2(\frac{35}{12}) \\ {-\log_2(35·40^3)} & {-\log_2(35·45)} & \log_2(\frac{12^2}{35}) \\ {-\log_2(35·40^2)} & \log_2(\frac{45^2}{35}) & {-\log_2(35)} \\ \end{array} \right] \end{array} = \begin{array} {c} \mathrm{T}CK \\ \left[ \begin{array} {c} \log_2(40^3) & 0 & {-\log_2(12^2)} \\ 0 & {-\log_2(45^2)} & \log_2(12) \\ {-\log_2(35·40)} & \log_2(\frac{45}{35}) & {-\log_2(35)} \\ \log_2(35) & \log_2(35) & \log_2(35) \\ \end{array} \right] \end{array}$

### Solve for generator embedding

To solve for $G$, we take the inverse of $M\mathrm{T}CK$ and right-multiply both sides of the equation by it. This will cancel it out on the left-hand side, isolating $G$:

\begin{align} GM\mathrm{T}CK &= \mathrm{T}CK \\ GM\mathrm{T}CK(M\mathrm{T}CK)^{-1} &= \mathrm{T}CK(M\mathrm{T}CK)^{-1} \\ G\cancel{M\mathrm{T}CK}\cancel{(M\mathrm{T}CK)^{-1}} &= \mathrm{T}CK(M\mathrm{T}CK)^{-1} \\ G &= \mathrm{T}CK(M\mathrm{T}CK)^{-1} \end{align}

And now we just multiply those two things on the right-hand side together:

$\scriptsize \begin{array} {c} \mathrm{T}CK \\ \left[ \begin{array} {c} \log_2(40^3) & 0 & {-\log_2(12^2)} \\ 0 & {-\log_2(45^2)} & \log_2(12) \\ {-\log_2(35·40)} & \log_2(\frac{45}{35}) & {-\log_2(35)} \\ \log_2(35) & \log_2(35) & \log_2(35) \\ \end{array} \right] \end{array} \begin{array} {c} (M\mathrm{T}CK)^{-1} \\ \left[ \begin{array} {c} 3\log_2(35)\log_2(45) - \log_2(12)\log_2(\frac{405}{7}) & \log_2(35)\log_2(45) - \log_2(12)\log_2(\frac{405}{7}) & 2\log_2(35)\log_2(45) - \log_2(12)\log_2(\frac{18225}{7}) \\ -\log_2(35)\log_2(40) - \log_2(144)\log_2(56000) & -\log_2(12)\log_2(56000) & -\log_2(35)\log_2(40) - \log_2(12)\log_2(1400) \\ -8\log_2(40)\log_2(45) - \log_2(35)\log_2(\frac{18225}{8}) & -\log_2(45)\log_2(56000) & -\log_2(40)\log_2(45) - \log_2(35)\log_2(\frac{405}{8}) \\ \end{array} \right] \\ \hline \log_2(\frac98)\log_2(12)\log_2(35) - \log_2(40)\log_2(45)\log_2(\frac{20736}{35}) \end{array}$

To find $G$:

$\scriptsize \begin{array} {c} G \\ \left[ \begin{array} {c} g_{11} & g_{12} & g_{13} \\ g_{21} & g_{22} & g_{23} \\ g_{31} & g_{32} & g_{33} \\ g_{41} & g_{42} & g_{43} \\ \end{array} \right] \end{array} = \begin{array} {c} \mathrm{T}CK(M\mathrm{T}CK)^{-1} \\ \left[ \begin{array} {c} \begin{array} {c} 9\log_2(35)\log_2(40)\log_2(45) \\ + 2\log_2(12)\log_2(35)\log_2(\frac{18225}{8}) \\ + 2\log_2(12)\log_2(40)\log_2(86821875) \end{array} & & \begin{array} {c} 3\log_2(35)\log_2(40)\log_2(45) \\ - 3\log_2(12)\log_2(40)\log_2(\frac{405}{7}) \\ + 2\log_2(12)\log_2(45)\log_2(56000) \end{array} & & \begin{array} {c} 6\log_2(35)\log_2(40)\log_2(45) \\ + 2\log_2(35)\log_2(12)\log_2(\frac{405}{8}) \\ + \log_2(12)\log_2(40)\log_2(1929375) \end{array} \\[9pt] \begin{array} {c} 2\log_2(45)\log_2(35)\log_2(40) \\ + 3\log_2(45)\log_2(144)\log_2(56000) \\ - 8\log_2(12)\log_2(40)\log_2(45) \\ - \log_2(12)\log_2(35)\log_2(\frac{18225}{8}) \end{array} & & \begin{array} {c} \log_2(12)\log_2(45)\log_2(56000) \end{array} & & \begin{array} {c} \log_2(35)\log_2(40)\log_2(45) \\ - 5\log_2(12)\log_2(40)\log_2(45) \\ - \log_2(12)\log_2(35)\log_2(\frac{405}{8}) \\ - 2\log_2(12)\log_2(45)\log_2(1400) \end{array} \\[9pt] \begin{array} {c} \log_2(144)\log_2(40)\log_2(\frac{405}{7}) \\ - \log_2(144)\log_2(45)\log_2(56000) \\ + 4\log_2(35)\log_2(40)\log_2(45) \\ + \log_2(35)\log_2(144)\log_2(3240000) \end{array} & & \begin{array} {c} \log_2(12)\log_2(40)\log_2(\frac{405}{7}) \\ - \log_2(12)\log_2(45)\log_2(56000) \\ - \log_2(35)\log_2(40)\log_2(45) \\ + \log_2(35)\log_2(45)\log_2(56000) \\ + \log_2(12)\log_2(35)\log_2(3240000) \\ - \log_2(35)\log_2(35)\log_2(45) \end{array} & & \begin{array} {c} \log_2(12)\log_2(40)\log_2(\frac{18225}{7}) \\ + 2\log_2(35)\log_2(40)\log_2(45) \\ + \log_2(12)\log_2(35)\log_2(3645000) \\ - \log_2(12)\log_2(45)\log_2(1400) \end{array} \\[9pt] \begin{array} {c} -8\log_2(35)\log_2(40)\log_2(45) \\ -\log_2(35)\log_2(144)\log_2(3240000) \end{array} & & \begin{array} {c} 
\log_2(35)\log_2(35)\log_2(45) \\ - \log_2(35)\log_2(45)\log_2(5600) \\ - \log_2(12)\log_2(35)\log_2(3240000) \end{array} & & \begin{array} {c} -\log_2(35)\log_2(40)\log_2(45) \\ -\log_2(12)\log_2(35)\log_2(3645000) \end{array} \end{array}\right] \\ \hline \log_2(\frac98)\log_2(12)\log_2(35) - \log_2(40)\log_2(45)\log_2(\frac{20736}{35}) \end{array}$

### Convert generator embedding to generator map

Taking the values from the first column of this, we can find that our first generator, $\textbf{g}_1$, is exactly equal to:

$\small \sqrt[ \log_2(\frac98)\log_2(12)\log_2(35) - \log_2(40)\log_2(45)\log_2(\frac{20736}{35}) ] { \rule[15pt]{0pt}{0pt} 2^{( 9\log_2(35)\log_2(40)\log_2(45) + 2\log_2(12)\log_2(35)\log_2(\frac{18225}{8}) + 2\log_2(12)\log_2(40)\log_2(86821875) )} } \hspace{1mu} \overline{\rule[15pt]{0pt}{0pt}} \hspace{1mu} \overline{\rule[15pt]{0pt}{0pt}} \hspace{1mu} \overline{\rule[15pt]{0pt}{0pt}} \\ \quad\quad\quad \overline{\rule[11pt]{0pt}{0pt}} \hspace{1mu} \overline{\rule[11pt]{0pt}{0pt}} \hspace{1mu} \overline{\rule[11pt]{0pt}{0pt}} \hspace{1mu} \overline{\rule[11pt]{0pt}{0pt} · 3^{( 2\log_2(45)\log_2(35)\log_2(40) + 3\log_2(45)\log_2(144)\log_2(56000) - 8\log_2(12)\log_2(40)\log_2(45) - \log_2(12)\log_2(35)\log_2(\frac{18225}{8}) )} } \hspace{1mu} \overline{\rule[11pt]{0pt}{0pt}} \hspace{1mu} \overline{\rule[11pt]{0pt}{0pt}} \hspace{1mu} \overline{\rule[11pt]{0pt}{0pt}} \\ \quad\quad\quad \overline{\rule[11pt]{0pt}{0pt}} \hspace{1mu} \overline{\rule[11pt]{0pt}{0pt}} \hspace{1mu} \overline{\rule[11pt]{0pt}{0pt}} \hspace{1mu} \overline{\rule[11pt]{0pt}{0pt} · 5^{( \log_2(144)\log_2(40)\log_2(\frac{405}{7}) - \log_2(144)\log_2(45)\log_2(56000) + 4\log_2(35)\log_2(40)\log_2(45) + \log_2(35)\log_2(144)\log_2(3240000) )} } \hspace{1mu} \overline{\rule[11pt]{0pt}{0pt}} \hspace{1mu} \overline{\rule[11pt]{0pt}{0pt}} \hspace{1mu} \overline{\rule[11pt]{0pt}{0pt}} \\ \quad\quad\quad \overline{\rule[11pt]{0pt}{0pt}} \hspace{1mu} \overline{\rule[11pt]{0pt}{0pt}} \hspace{1mu} \overline{\rule[11pt]{0pt}{0pt}} \hspace{1mu} \overline{\rule[11pt]{0pt}{0pt} · 7^{( {-(8\log_2(35)\log_2(40)\log_2(45) + \log_2(35)\log_2(144)\log_2(3240000))} )} }$

Clearly, such an exact value is of dubious interest as is. But it may be nice for some types of personalities (including the present author) to know, theoretically speaking, that this expression gives us the truly optimal size of this generator, where the general method would only find a close approximation. It would look a little less insane if we were using unity-weight damage, or if our complexity didn't include logarithmic values.

At some point we do need to convert this to an inexact decimal form to make any practical use of it. But we should wait until the last possible moment, so as to not let rounding errors compound.

Well, this is that last possible moment. So this value works out to about 1.99847, just shy of 2, which is great because it's supposed to be a tempered octave. In cents it is 1198.679 ¢.

We won't show the exact exponential form for the other generators $\textbf{g}_2$ and $\textbf{g}_3$; the point has been made. The practical thing to do is simply multiply this $G$ by $𝒋$, to find $𝒈$. We'll go ahead and show things in decimals now:

$\begin{array} {ccc} 𝒋 \\ \left[ \begin{array} {rrr} 1200.000 & 1901.955 & 2786.314 & 3368.826 \\ \end{array} \right] \end{array} \begin{array} {ccc} G \\ \left[ \begin{array} {r} 10.156 & 2.701 & 5.530 \\ 1.831 & 1.140 & 0.306 \\ 3.662 & 1.281 & 2.612 \\ {-7.325} & {-2.561} & {-4.224} \\ \end{array} \right] \end{array} = \begin{array} {ccc} 𝒈 \\ \left[ \begin{array} {rrr} 1198.679 & 350.516 & 265.929 \\ \end{array} \right] \end{array}$

And so that's the tuning (in cents) we find for this constraint matrix! (But remember, this is only one of many candidates for the minimax tuning here — it is not necessarily the actual minimax tuning. We picked this particular ReDPOTIC / constraint matrix / coinciding-damage point / candidate tuning example basically at random.)
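As a quick check of this multiplication, here's a small Python sketch (ours, not part of D&D's RTT library). Note that since the entries of $G$ above are printed rounded to three decimal places, and its first column involves heavy cancellation, the recovered $g_1$ lands a couple of cents away from the exact 1198.679 ¢: the compounding rounding error mentioned above, in action.

```python
import math

# just tuning map for the 7-limit primes, in cents
j = [1200 * math.log2(p) for p in (2, 3, 5, 7)]

# G as printed above, rounded to three decimal places
G = [[10.156,  2.701,  5.530],
     [ 1.831,  1.140,  0.306],
     [ 3.662,  1.281,  2.612],
     [-7.325, -2.561, -4.224]]

# g = jG: one dot product per generator (per column of G)
g = [sum(j[p] * G[p][col] for p in range(4)) for col in range(3)]
print([round(x, 3) for x in g])
```

Run with the exact (unrounded) $G$, the same two lines would recover 1198.679 ¢, 350.516 ¢, and 265.929 ¢ on the nose.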

### System of equations style

In our simpler example, we looked at our matrix equation as a system of equations. It may be instructive to take that related approach to this result, too.

Suppose instead that rather than going for a matrix solution in $G$, we went straight for a single vector, our generator tuning map $𝒈$. In other words, we don't save the conversion from $G$ to $𝒈$ via $𝒋$ for the end; we build it into our solution. So rewind to before we took the matrix inverse, and instead multiply each side of the equation by $𝒋$. The $𝒋G$ goes to $𝒈$, and we just go ahead and multiply its entries $g_1$, $g_2$, and $g_3$ up with everything else:

$\begin{array} {ccc} 𝒈M\mathrm{T}CK \\ \left[ \begin{array} {rrr} 15.773g_1 & {-0.363g_1} & 1.544g_1 \\ {-21.095g_2} & {-10.621g_2} & 2.041g_2 \\ {-15.773g_3} & 5.854g_3 & {-5.129g_3} \\ \end{array} \right] \end{array} = \begin{array} {ccc} 𝒋\mathrm{T}CK \\ \left[ \begin{array} {rrr} 7318.250 & {-2600.619} & 1202.397 \\ \end{array} \right] \end{array}$

Now the columns of this can be viewed as a system of equations (so we essentially transpose everything to get this new look):

$\begin{array} {r} 15.773g_1 & + & {-21.095g_2} & + & {-15.773g_3} & = & 7318.250 \\ {-0.363}g_1 & + & {-10.621g_2} & + & 5.854g_3 & = & {-2600.619} \\ 1.544g_1 & + & 2.041g_2 & + & {-5.129g_3} & = & 1202.397 \\ \end{array}$

These can all be true at once now (again, before the constraint, they couldn't). The values come out to:

$g_1 = 1198.679 \\ g_2 = 350.516 \\ g_3 = 265.929$

This matches what we found for $𝒈$ via solving for $G$.
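Here is a minimal Python sketch of this solve (the `solve3` helper is ours, not from the RTT library). Because the printed coefficients are themselves rounded to three decimals, its solution only lands within about a cent of the exact generator sizes:

```python
def solve3(A, b):
    """Solve the 3x3 linear system A g = b by Gaussian elimination
    with partial pivoting. A toy helper, not from D&D's RTT library."""
    n = 3
    M = [row[:] + [bi] for row, bi in zip(A, b)]  # augmented matrix
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    g = [0.0] * n
    for r in reversed(range(n)):
        g[r] = (M[r][n] - sum(M[r][c] * g[c] for c in range(r + 1, n))) / M[r][r]
    return g

# the system of equations above (coefficients rounded to three decimals)
A = [[15.773, -21.095, -15.773],
     [-0.363, -10.621,   5.854],
     [ 1.544,   2.041,  -5.129]]
b = [7318.250, -2600.619, 1202.397]

g = solve3(A, b)
print([round(x, 3) for x in g])  # within about a cent of [1198.679, 350.516, 265.929]
```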

The system of equations style is how Keenan's original algorithm works. The matrix inverse style is Dave's modification; how it works may be less obvious, but it is capable of solving for generator embeddings.

### Sanity-check

We can sanity check it if we like. This was supposed to find us the tuning of breed temperament where $\frac75 × \frac85 = \frac{56}{25}$, $\frac75 × \frac59 = \frac79$ (or we might prefer to think of it in its superunison form, $\frac97$), and $\frac75 × \frac34 = \frac{21}{20}$ are pure.

Well, breed maps $\textbf{i}_1 = \frac{56}{25}$ [3 0 -2 1⟩ to the generator-count vector $\textbf{y}_1$ [3 -4 -3}. And $𝒈\textbf{y}_1$ looks like {1198.679 350.516 265.929][3 -4 -3} $= 1198.679 × 3 + 350.516 × {-4} + 265.929 × {-3} = 1396.186$. Its JI size is $1200 × \log_2(\frac{56}{25}) = 1396.198$, which is pretty close; close enough, perhaps, given all the rounding errors we've been accumulating.

And breed maps $\textbf{i}_2 = \frac{9}{7}$ [0 2 0 -1⟩ to the generator-count vector $\textbf{y}_2$ [0 2 -1}. And $𝒈\textbf{y}_2$ looks like {1198.679 350.516 265.929][0 2 -1} $= 1198.679 × 0 + 350.516 × 2 + 265.929 × {-1} = 435.103$. Its JI size is $1200 × \log_2(\frac97) = 435.084$, also essentially pure.

Finally breed maps $\textbf{i}_3 = \frac{21}{20}$ [-2 1 -1 1⟩ to the generator-count vector $\textbf{y}_3$ [0 1 -1}. And $𝒈\textbf{y}_3$ looks like {1198.679 350.516 265.929][0 1 -1} $= 1198.679 × 0 + 350.516 × 1 + 265.929 × {-1} = 84.587$. Its JI size is $1200 × \log_2(\frac{21}{20}) = 84.467$, again, essentially pure.
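This sanity check is easy to script; a short Python sketch (ours, not library code) reproduces all three comparisons:

```python
import math

g = [1198.679, 350.516, 265.929]  # generator tuning map, in cents

# (ratio, generator-count vector it maps to in breed)
checks = [((56, 25), [3, -4, -3]),
          ((9, 7),   [0,  2, -1]),
          ((21, 20), [0,  1, -1])]

results = []
for (n, d), y in checks:
    tempered = sum(gi * yi for gi, yi in zip(g, y))  # tempered size, in cents
    just = 1200 * math.log2(n / d)                   # JI size, in cents
    results.append((tempered, just))
    print(f"{n}/{d}: tempered {tempered:.3f} vs just {just:.3f} cents")
```

Each pair agrees to within about a tenth of a cent, the residue being the rounding of $𝒈$ to three decimal places.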

### Relation to only held-intervals method and zero-damage method

Note that this $G = \mathrm{T}CK(M\mathrm{T}CK)^{-1}$ formula is the same thing as the $G = U(MU)^{-1}$ formula used for the only held-intervals method; it's just that formula where $\mathrm{U} = \mathrm{T}CK$. In other words, we will find a basis for the unchanged intervals of this tuning of this temperament to be:

$\begin{array} {c} \mathrm{U} = \mathrm{T}CK \\ \left[ \begin{array} {c} \log_2(40^3) & 0 & {-\log_2(12^2)} \\ 0 & {-\log_2(45^2)} & \log_2(12) \\ {-\log_2(35·40)} & \log_2(\frac{45}{35}) & {-\log_2(35)} \\ \log_2(35) & \log_2(35) & \log_2(35) \\ \end{array} \right] \end{array}$

Owing to our choice to weight our absolute error to obtain damage, these intervals are quite strange. Not only do we have non-integer entries in our prime-count vectors here, we've gone beyond the rational entries we often find for generator embeddings and unchanged-interval bases, etc. and now have irrational entries with freaking logarithms in them. So these aren't particularly insight-giving unchanged-intervals, but they are what they are.

So, in effect, the coinciding-damage method is fairly similar to the zero-damage method. Each point in either method's point set corresponds to an unchanged-interval basis $\mathrm{U}$. For the zero-damage method, the members of this $\mathrm{U}$ are pulled directly from the target-interval set $\mathrm{T}$, whereas for the coinciding-damage method here, the members of each $\mathrm{U}$ have a more complex relationship with the members of $\mathrm{T}$, being products of relative direction pairs of them instead.

## With held-intervals

When a tuning scheme has optimization power $p = ∞$ and also specifies one or more held-intervals, we can adapt the coinciding-damage method to accommodate this. In short, we can no longer dedicate every generator toward our target-intervals; we must allocate one generator toward each interval to be held unchanged.

### Counts of target-intervals with held-intervals

In the earlier section "Generalizing to higher dimensions: counts of target-intervals required to make the points", we looked at how it generally takes $r + 1$ target-interval damage graphs to intersect to make a point, but only $r$ of them on the zero-damage floor.

When there are held-intervals, however, things get a little trickier.

Each held-interval added is like taking a cross-section through the tuning damage space, specifically, the cross-section wherever that interval is held unchanged. Tuning damage space is still $(r + 1)$-dimensional, but now we only care about a slice through that space, a slice with $h$ fewer dimensions. That is, it'll be an $(r + 1 - h)$-dimensional slice. And within this slice, then, we only need $r + 1 - h$ target-intervals' damages to coincide to make a point, and only $r - h$ of them to make a point on the floor.

For example, when tuning an octave-fifth form of meantone temperament, we'd know we'd be searching a 3D tuning damage space, with one floor axis for $g_1$ in the vicinity of 1200 ¢ and the other floor axis for $g_2$ in the vicinity of 701.955 ¢. But if we say it's a held-octave tuning we want, then all of our target-intervals' hyper-V's are still exactly as they were, fully occupying the three dimensions of this space, but now we only care about the 2D slice through it where $g_1 = 1200$.

In that very simple example, only one of the temperament's generators was involved in the generator-count vector that the held-interval maps to, so the cross-section is conveniently perpendicular to the axis for that generator, and thus the tuning damage graph with reduced dimension is easy to prepare. However, if we had instead requested a held-{5/4}, then since that maps to [-2 4}, using multiple different generators, the cross-section will be diagonal across the floor, perpendicular to no generator axis.

### Modified constraint matrices

We do this by changing our constraint matrices $K$; rather than building them to represent permutations of relative direction for combinations of $r + 1$ target-intervals, instead we only combine $r + 1 - h$ target-intervals for each of these constraint matrices, where $h$ is the count of held-intervals. As a result of this, each $K$ has $h$ fewer columns than before — or at least it would, if we didn't replace these columns with $h$ new columns, one for each of our $h$ held-intervals. Remember, each of these constraint matrices gets multiplied together with other matrices that represent information about our temperament and tuning scheme — our targeted intervals, and their weights (if any) — in order to take a system of approximations (represented by matrices) and crunch it down to a smaller system of equalities (still represented by matrices) that can be automatically solved (using a matrix inverse). So, instead of these constraint matrices doing only a single job (enforcing that $r + 1$ target-intervals receive coinciding damage), each constraint matrix now handles two jobs at once: enforcing that only $r + 1 - h$ target-intervals receive coinciding damage, and that $h$ held-intervals receive zero damage.

In order for the constraint matrices to handle their new second job, however, we must make further changes. The held-intervals must now be accessible in the other matrices that multiply together to form our solvable system of equations. In particular:

1. We concatenate the target-interval list $\mathrm{T}$ and the held-interval basis $\mathrm{H}$ together into a new matrix $\mathrm{T}|\mathrm{H}$.
2. We accordingly extend the weight matrix $W$ diagonally so that matrix shapes work out for multiplication purposes, or said another way, so that each held-interval appended to $\mathrm{T}$ gets matched up with a dummy weight.

### Prepare constraint matrix

Let's demonstrate this by example. We'll revise the example we looked at in the earlier "bigger example" section, with 3 generators ($r = 3$) and 10 target-intervals ($k = 10$). The specific example constraint matrix we looked at, therefore, was a $(k, r)$-shaped matrix, which related $r + 1 = 4$ of those target-intervals together with coinciding damage:

$\begin{array} {rrr} \scriptsize{6/5} \\ \scriptsize{7/5} \\ \scriptsize{8/5} \\ \scriptsize{9/5} \\ \scriptsize{7/6} \\ \scriptsize{4/3} \\ \scriptsize{3/2} \\ \scriptsize{8/7} \\ \scriptsize{9/7} \\ \scriptsize{9/8} \\ \end{array} \left[ \begin{array} {rrr} 0 & 0 & 0 \\ +1 & +1 & +1 \\ +1 & 0 & 0 \\ 0 & {-1} & 0 \\ 0 & 0 & 0 \\ 0 & 0 & {-1} \\ 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \\ \end{array} \right]$

For a tuning scheme with $h = 1$, however, we can only get three target-intervals to have coinciding damage at the same time. So we'd never see this constraint matrix for such a scheme. Instead, any example constraint matrix we'd pick would only have two such columns. In order to keep working with something similar to this example, then, let's just drop the last column:

$\begin{array} {rrr} \scriptsize{6/5} \\ \scriptsize{7/5} \\ \scriptsize{8/5} \\ \scriptsize{9/5} \\ \scriptsize{7/6} \\ \scriptsize{4/3} \\ \scriptsize{3/2} \\ \scriptsize{8/7} \\ \scriptsize{9/7} \\ \scriptsize{9/8} \\ \end{array} \left[ \begin{array} {rrr} 0 & 0 \\ +1 & +1 \\ +1 & 0 \\ 0 & {-1} \\ 0 & 0 \\ 0 & 0 \\ 0 & 0 \\ 0 & 0 \\ 0 & 0 \\ 0 & 0 \\ \end{array} \right]$

We still want a third column, though: we still have $r = 3$ generators in this example. But now we need to specify that one of these generators needs to accomplish the job of tuning our held-interval exactly. We do that by adding a column that's all zeros except for a single nonzero entry in the row for that held-interval (if it's not clear how this enforces that interval to be unchanged, don't worry; it will become clear in a later step when we translate this system of matrices to the system of linear equations which it is essentially shorthand notation for):

$\begin{array} {rrr} \scriptsize{6/5} \\[2pt] \scriptsize{7/5} \\[2pt] \scriptsize{8/5} \\[2pt] \scriptsize{9/5} \\[2pt] \scriptsize{7/6} \\[2pt] \scriptsize{4/3} \\[2pt] \scriptsize{3/2} \\[2pt] \scriptsize{8/7} \\[2pt] \scriptsize{9/7} \\[2pt] \scriptsize{9/8} \\[2pt] \style{background-color:#FFF200;padding:5px}{\scriptsize{5/3}} \\[2pt] \end{array} \left[ \begin{array} {rrr} 0 & 0 & \style{background-color:#FFF200;padding:5px}{0} \\ +1 & +1 & \style{background-color:#FFF200;padding:5px}{0}\\ +1 & 0 & \style{background-color:#FFF200;padding:5px}{0}\\ 0 & {-1} & \style{background-color:#FFF200;padding:5px}{0}\\ 0 & 0 & \style{background-color:#FFF200;padding:5px}{0} \\ 0 & 0 & \style{background-color:#FFF200;padding:5px}{0}\\ 0 & 0 & \style{background-color:#FFF200;padding:5px}{0}\\ 0 & 0 & \style{background-color:#FFF200;padding:5px}{0}\\ 0 & 0 & \style{background-color:#FFF200;padding:5px}{0}\\ 0 & 0 & \style{background-color:#FFF200;padding:5px}{0}\\ \style{background-color:#FFF200;padding:5px}{0} & \style{background-color:#FFF200;padding:5px}{0} & \style{background-color:#FFF200;padding:5px}{1} \\ \end{array} \right]$

Oh, right — we didn't have a row for this held-interval yet. All the rows we had before were for our target-intervals. No big deal, though. We just added an extra row for this, our held-interval, and we can fill out the new entries it creates in the other columns with zeros.
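As a sketch of this bookkeeping (ours, in plain Python lists; one row per target-interval in the order 6/5, 7/5, 8/5, 9/5, 7/6, 4/3, 3/2, 8/7, 9/7, 9/8, then the held-interval):

```python
# the (10, 2)-shaped constraint matrix from above, one row per target-interval
K = [[0, 0], [1, 1], [1, 0], [0, -1], [0, 0],
     [0, 0], [0, 0], [0, 0], [0, 0], [0, 0]]

# add a zero entry to every target-interval row, making room
# for the held-interval's new column...
for row in K:
    row.append(0)

# ...then add the held-interval's new row: zeros except for a
# single 1 in the new column
K.append([0, 0, 1])

for row in K:
    print(row)
```

The result is the $(k + h, r) = (11, 3)$-shaped matrix highlighted above.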

### Modified tempered and just sides of to-be equality

The consequences of adding this row are more far-reaching than our $K$ matrices, however. Remember, these multiply with $M\mathrm{T}C$ on the left-hand side of the equal sign and $\mathrm{T}C$ on the right-hand side. So matrix shapes have to keep matching for matrix multiplication to remain possible. We just changed the shape of $K$ from a $(k, r)$-shaped matrix to a $(k + h, r)$-shaped matrix. The shape of $M\mathrm{T}C$ is $(r, k)$ and the shape of $\mathrm{T}C$ is $(d, k)$. So if we want these shapes to keep matching, we need to change them to $(r, k + h)$ and $(d, k + h)$, respectively.

Now that's a rather dry way of putting it. Let's put it in more natural terms. We know what these additions to $M\mathrm{T}C$ and $\mathrm{T}C$ are about: they're the held-intervals that the new entries we added to the constraint matrices are referring to! In particular, we need to expand $\mathrm{T}$ (the actual target-intervals we chose here are arbitrary and don't really matter for this example):

$\begin{array} {ccc} \mathrm{T} \\ \left[ \begin{array} {r|r|r|r|r|r|r|r|r|r} 1 & 0 & 3 & 0 & {-1} & 2 & {-1} & 3 & 0 & {-3} \\ 1 & 0 & 0 & 2 & {-1} & {-1} & 1 & 0 & 2 & 2 \\ {-1} & {-1} & {-1} & {-1} & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 1 & 0 & 0 & {-1} & {-1} & 0 \\ \end{array} \right] \end{array}$

to include the held-intervals. We can just tack them on at the end. We said $h = 1$, that is, that we only have one held-interval, but we haven't picked what it is yet. This is also arbitrary for this example. How about we go with $\frac53$, with prime-count vector [0 -1 1 0⟩:

$\begin{array} {ccc} \mathrm{T|\style{background-color:#FFF200;padding:2px}{H}} \\ \left[ \begin{array} {r|r|r|r|r|r|r|r|r|r|r} 1 & 0 & 3 & 0 & {-1} & 2 & {-1} & 3 & 0 & {-3} & \style{background-color:#FFF200;padding:5px}{0} \\ 1 & 0 & 0 & 2 & {-1} & {-1} & 1 & 0 & 2 & 2 & \style{background-color:#FFF200;padding:5px}{-1} \\ {-1} & {-1} & {-1} & {-1} & 0 & 0 & 0 & 0 & 0 & 0 & \style{background-color:#FFF200;padding:5px}{1} \\ 0 & 1 & 0 & 0 & 1 & 0 & 0 & {-1} & {-1} & 0 & \style{background-color:#FFF200;padding:5px}{0} \\ \end{array} \right] \end{array}$

We've got to extend $C$ too. We can pick any weight we want, other than 0 (you'll see why when we work out the system of equations in a moment):

$\begin{array} {ccc} C \\ \text{diag}(\log_2(\left[ \begin{array} {rrr} 30 & 35 & 40 & 45 & 42 & 12 & 6 & 56 & 63 & 72 & \style{background-color:#FFF200;padding:5px}{2} \\ \end{array} \right])) \end{array}$

(Note that the last entry appears as 2 here because we placed the $\log_2$ outside the array brackets; the held-interval's actual dummy weight is $\log_2(2) = 1$.)

There's no need to mess with $M$ here.
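These two modifications are straightforward to express in code. A Python sketch (ours), using the example values settled on above (held-interval $\frac53$, dummy weight 2):

```python
import math

# target-interval list T: one column per interval, rows are primes 2, 3, 5, 7
T = [[ 1,  0,  3,  0, -1,  2, -1,  3,  0, -3],
     [ 1,  0,  0,  2, -1, -1,  1,  0,  2,  2],
     [-1, -1, -1, -1,  0,  0,  0,  0,  0,  0],
     [ 0,  1,  0,  0,  1,  0,  0, -1, -1,  0]]

held = [0, -1, 1, 0]  # prime-count vector of the held-interval, 5/3

# step 1: concatenate, appending the held-interval as one more column of T|H
TH = [row + [held[p]] for p, row in enumerate(T)]

# step 2: extend the weights with a dummy weight for the held-interval;
# any nonzero value works, and 2 gives log2(2) = 1
C_diag = [math.log2(w) for w in [30, 35, 40, 45, 42, 12, 6, 56, 63, 72, 2]]
```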

### Prepare tempered and just sides of to-be equality

As before, let's work out $M\mathrm{T}C$ and $\mathrm{T}C$ separately, then constrain each of them with $K$, then put them together in a linear system of equations, solving for the generator tuning map $𝒈$. Though this time around, all occurrences of $\mathrm{T}$ will be replaced with $\mathrm{T|\style{background-color:#FFF200;padding:2px}{H}}$.

First, $M(\mathrm{T|\style{background-color:#FFF200;padding:2px}{H}})C$:

$\scriptsize \begin{array} {ccc} M \\ \left[ \begin{array} {rrr} 1 & 1 & 1 & 2 \\ 0 & 2 & 3 & 2 \\ 0 & 0 & 2 & 1 \end{array} \right] \end{array} \begin{array} {ccc} \mathrm{T|\style{background-color:#FFF200;padding:2px}{H}} \\ \left[ \begin{array} {r|r|r|r|r|r|r|r|r|r|r} 1 & 0 & 3 & 0 & {-1} & 2 & {-1} & 3 & 0 & {-3} & \style{background-color:#FFF200;padding:5px}{0} \\ 1 & 0 & 0 & 2 & {-1} & {-1} & 1 & 0 & 2 & 2 & \style{background-color:#FFF200;padding:5px}{-1} \\ {-1} & {-1} & {-1} & {-1} & 0 & 0 & 0 & 0 & 0 & 0 & \style{background-color:#FFF200;padding:5px}{1} \\ 0 & 1 & 0 & 0 & 1 & 0 & 0 & {-1} & {-1} & 0 & \style{background-color:#FFF200;padding:5px}{0} \\ \end{array} \right] \end{array} \begin{array} {ccc} C \\ \text{diag}(\log_2(\left[ \begin{array} {rrr} 30 & 35 & 40 & 45 & 42 & 12 & 6 & 56 & 63 & 72 & \style{background-color:#FFF200;padding:5px}{2} \\ \end{array} \right])) \end{array}$

Which works out to:

$\small \begin{array} {ccc} M(\mathrm{T|\style{background-color:#FFF200;padding:2px}{H}})C \\ \left[ \begin{array} {r|r|r|r|r|r|r|r|r} \log_2(30) & \log_2(35) & 2\log_2(40) & \log_2(45) & 0 & \log_2(12) & 0 & \log_2(56) & 0 & {-\log_2(72)} & \style{background-color:#FFF200;padding:5px}{0} \\ {-\log_2(30)} & {-\log_2(35)} & {-3\log_2(40)} & \log_2(45) & 0 & {-2\log_2(12)} & 2\log_2(6) & {-2\log_2(56)} & 2\log_2(63) & 4\log_2(72) & \style{background-color:#FFF200;padding:5px}{1} \\ {-2\log_2(30)} & {-\log_2(35)} & {-2\log_2(40)} & {-2\log_2(45)} & \log_2(42) & 0 & 0 & {-\log_2(56)} & {-\log_2(63)} & 0 & \style{background-color:#FFF200;padding:5px}{2} \\ \end{array} \right] \end{array}$

So that's the same as before, but with the extra column at the right, which is alone in having integer entries owing to its weight being the integer 1. What we're seeing there is that [0 -1 1 0⟩ maps to [0 1 2} in this temperament, that's all.
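That mapping fact is quick to verify; a few lines of Python (ours):

```python
# mapping M for this temperament, and the held-interval 5/3's prime-count vector
M = [[1, 1, 1, 2],
     [0, 2, 3, 2],
     [0, 0, 2, 1]]
h = [0, -1, 1, 0]  # 5/3

# the generator-count vector that the held-interval maps to
y = [sum(M[r][c] * h[c] for c in range(4)) for r in range(3)]
print(y)  # [0, 1, 2]
```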

Now, we do $(\mathrm{T|\style{background-color:#FFF200;padding:2px}{H}})C$:

$\small \begin{array} {ccc} \mathrm{T|\style{background-color:#FFF200;padding:2px}{H}} \\ \left[ \begin{array} {r|r|r|r|r|r|r|r|r|r|r} 1 & 0 & 3 & 0 & {-1} & 2 & {-1} & 3 & 0 & {-3} & \style{background-color:#FFF200;padding:5px}{0} \\ 1 & 0 & 0 & 2 & {-1} & {-1} & 1 & 0 & 2 & 2 & \style{background-color:#FFF200;padding:5px}{-1} \\ {-1} & {-1} & {-1} & {-1} & 0 & 0 & 0 & 0 & 0 & 0 & \style{background-color:#FFF200;padding:5px}{1} \\ 0 & 1 & 0 & 0 & 1 & 0 & 0 & {-1} & {-1} & 0 & \style{background-color:#FFF200;padding:5px}{0} \\ \end{array} \right] \end{array} \begin{array} {ccc} C \\ \text{diag}(\log_2(\left[ \begin{array} {rrr} 30 & 35 & 40 & 45 & 42 & 12 & 6 & 56 & 63 & 72 & \style{background-color:#FFF200;padding:5px}{2} \\ \end{array} \right])) \end{array} = \\[50pt] \small \begin{array} {ccc} (\mathrm{T|\style{background-color:#FFF200;padding:2px}{H}})C \\ \left[ \begin{array} {r|r|r|r|r|r|r|r|r} \log_2(30) & 0 & \log_2(40) & 0 & {-\log_2(42)} & 2\log_2(12) & {-\log_2(6)} & 3\log_2(56) & 0 & {-3\log_2(72)} & \style{background-color:#FFF200;padding:5px}{0} \\ \log_2(30) & 0 & 0 & 2\log_2(45) & {-\log_2(42)} & {-\log_2(12)} & \log_2(6) & 0 & 2\log_2(63) & 2\log_2(72) & \style{background-color:#FFF200;padding:5px}{-1} \\ {-\log_2(30)} & {-\log_2(35)} & {-\log_2(40)} & {-\log_2(45)} & 0 & 0 & 0 & 0 & 0 & 0 & \style{background-color:#FFF200;padding:5px}{1} \\ 0 & \log_2(35) & 0 & 0 & \log_2(42) & 0 & 0 & {-\log_2(56)} & {-\log_2(63)} & 0 & \style{background-color:#FFF200;padding:5px}{0} \\ \end{array} \right] \end{array}$

Again, same as before, but with one more column on the right now.

### Apply constraint

Now, let's constrain both sides. First, the left side:

$\scriptsize \begin{array} {ccc} M(\mathrm{T|\style{background-color:#FFF200;padding:2px}{H}})C \\ \left[ \begin{array} {r|r|r|r|r|r|r|r|r} \log_2(30) & \log_2(35) & 2\log_2(40) & \log_2(45) & 0 & \log_2(12) & 0 & \log_2(56) & 0 & {-\log_2(72)} & \style{background-color:#FFF200;padding:5px}{0} \\ {-\log_2(30)} & {-\log_2(35)} & {-3\log_2(40)} & \log_2(45) & 0 & {-2\log_2(12)} & 2\log_2(6) & {-2\log_2(56)} & 2\log_2(63) & 4\log_2(72) & \style{background-color:#FFF200;padding:5px}{1} \\ {-2\log_2(30)} & {-\log_2(35)} & {-2\log_2(40)} & {-2\log_2(45)} & \log_2(42) & 0 & 0 & {-\log_2(56)} & {-\log_2(63)} & 0 & \style{background-color:#FFF200;padding:5px}{2} \\ \end{array} \right] \end{array} \begin{array} K \\ \left[ \begin{array} {rrr} 0 & 0 & \style{background-color:#FFF200;padding:5px}{0} \\ +1 & +1 & \style{background-color:#FFF200;padding:5px}{0}\\ +1 & 0 & \style{background-color:#FFF200;padding:5px}{0}\\ 0 & {-1} & \style{background-color:#FFF200;padding:5px}{0}\\ 0 & 0 & \style{background-color:#FFF200;padding:5px}{0} \\ 0 & 0 & \style{background-color:#FFF200;padding:5px}{0}\\ 0 & 0 & \style{background-color:#FFF200;padding:5px}{0}\\ 0 & 0 & \style{background-color:#FFF200;padding:5px}{0}\\ 0 & 0 & \style{background-color:#FFF200;padding:5px}{0}\\ 0 & 0 & \style{background-color:#FFF200;padding:5px}{0}\\ \style{background-color:#FFF200;padding:5px}{0} & \style{background-color:#FFF200;padding:5px}{0} & \style{background-color:#FFF200;padding:5px}{1} \\ \end{array} \right] \end{array} → \\ \begin{array} {c} M(\mathrm{T|\style{background-color:#FFF200;padding:2px}{H}})CK \\ \left[ \begin{array} {c} \log_2(35·40^2) & {-\log_2(\frac{45}{35})} & \style{background-color:#FFF200;padding:5px}{0} \\ {-\log_2(35·40^3)} & {-\log_2(35·45)} & \style{background-color:#FFF200;padding:5px}{1} \\ {-\log_2(35·40^2)} & \log_2(\frac{45^2}{35}) & \style{background-color:#FFF200;padding:5px}{2} \\ \end{array} \right] \end{array}$

Same as before, with the rightmost column replaced with a column for our held-interval.

And now the right side:

$\scriptsize \begin{array} {ccc} (\mathrm{T|\style{background-color:#FFF200;padding:2px}{H}})C \\ \left[ \begin{array} {r|r|r|r|r|r|r|r|r} \log_2(30) & 0 & \log_2(40) & 0 & {-\log_2(42)} & 2\log_2(12) & {-\log_2(6)} & 3\log_2(56) & 0 & {-3\log_2(72)} & \style{background-color:#FFF200;padding:5px}{0} \\ \log_2(30) & 0 & 0 & 2\log_2(45) & {-\log_2(42)} & {-\log_2(12)} & \log_2(6) & 0 & 2\log_2(63) & 2\log_2(72) & \style{background-color:#FFF200;padding:5px}{-1} \\ {-\log_2(30)} & {-\log_2(35)} & {-\log_2(40)} & {-\log_2(45)} & 0 & 0 & 0 & 0 & 0 & 0 & \style{background-color:#FFF200;padding:5px}{1} \\ 0 & \log_2(35) & 0 & 0 & \log_2(42) & 0 & 0 & {-\log_2(56)} & {-\log_2(63)} & 0 & \style{background-color:#FFF200;padding:5px}{0} \\ \end{array} \right] \end{array} \begin{array} K \\ \left[ \begin{array} {rrr} 0 & 0 & \style{background-color:#FFF200;padding:5px}{0} \\ +1 & +1 & \style{background-color:#FFF200;padding:5px}{0}\\ +1 & 0 & \style{background-color:#FFF200;padding:5px}{0}\\ 0 & {-1} & \style{background-color:#FFF200;padding:5px}{0}\\ 0 & 0 & \style{background-color:#FFF200;padding:5px}{0} \\ 0 & 0 & \style{background-color:#FFF200;padding:5px}{0}\\ 0 & 0 & \style{background-color:#FFF200;padding:5px}{0}\\ 0 & 0 & \style{background-color:#FFF200;padding:5px}{0}\\ 0 & 0 & \style{background-color:#FFF200;padding:5px}{0}\\ 0 & 0 & \style{background-color:#FFF200;padding:5px}{0}\\ \style{background-color:#FFF200;padding:5px}{0} & \style{background-color:#FFF200;padding:5px}{0} & \style{background-color:#FFF200;padding:5px}{1} \\ \end{array} \right] \end{array} → \\ \begin{array} {c} (\mathrm{T|\style{background-color:#FFF200;padding:2px}{H}})CK \\ \left[ \begin{array} {c} \log_2(40^3) & 0 & \style{background-color:#FFF200;padding:5px}{0} \\ 0 & {-\log_2(45^2)} & \style{background-color:#FFF200;padding:5px}{{-1}} \\ {-\log_2(35·40)} & \log_2(\frac{45}{35}) & \style{background-color:#FFF200;padding:5px}{1} \\ \log_2(35) & \log_2(35) & 
\style{background-color:#FFF200;padding:5px}{0} \\ \end{array} \right] \end{array}$

Again, same as before, but with the rightmost column replaced with a column for our held-interval.

Now, put them together, with $G$ on the left-hand side, as an equality:

$\small \begin{array} {c} G \\ \left[ \begin{array} {c} g_{11} & g_{12} & g_{13} \\ g_{21} & g_{22} & g_{23} \\ g_{31} & g_{32} & g_{33} \\ g_{41} & g_{42} & g_{43} \\ \end{array} \right] \end{array} \begin{array} {c} M(\mathrm{T|\style{background-color:#FFF200;padding:2px}{H}})CK \\ \left[ \begin{array} {c} \log_2(35·40^2) & {-\log_2(\frac{45}{35})} & \style{background-color:#FFF200;padding:5px}{0} \\ {-\log_2(35·40^3)} & {-\log_2(35·45)} & \style{background-color:#FFF200;padding:5px}{1} \\ {-\log_2(35·40^2)} & \log_2(\frac{45^2}{35}) & \style{background-color:#FFF200;padding:5px}{2} \\ \end{array} \right] \end{array} = \begin{array} {c} (\mathrm{T|\style{background-color:#FFF200;padding:2px}{H}})CK \\ \left[ \begin{array} {c} \log_2(40^3) & 0 & \style{background-color:#FFF200;padding:5px}{0} \\ 0 & {-\log_2(45^2)} & \style{background-color:#FFF200;padding:5px}{{-1}} \\ {-\log_2(35·40)} & \log_2(\frac{45}{35}) & \style{background-color:#FFF200;padding:5px}{1} \\ \log_2(35) & \log_2(35) & \style{background-color:#FFF200;padding:5px}{0} \\ \end{array} \right] \end{array}$

### Solve for generator embedding

At this point, following the pattern from above, we can solve for $G$ as $(\mathrm{T|\style{background-color:#FFF200;padding:2px}{H}})CK(M(\mathrm{T|\style{background-color:#FFF200;padding:2px}{H}})CK)^{-1}$. The matrix inverse is the step where the exact values in terms of logarithms start to get out of hand. Since we've already proven our point about exactness of the solutions from this method in the earlier "bigger example" section, for easier reading, let's lapse into decimal numbers from this point forward:

$\begin{array} {c} G \\ \left[ \begin{array} {c} g_{11} & g_{12} & g_{13} \\ g_{21} & g_{22} & g_{23} \\ g_{31} & g_{32} & g_{33} \\ g_{41} & g_{42} & g_{43} \\ \end{array} \right] \end{array} = \begin{array} {ccc} (\mathrm{T|\style{background-color:#FFF200;padding:2px}{H}})CK \\ \left[ \begin{array} {rrr} 15.966 & 0 & \style{background-color:#FFF200;padding:5px}{0} \\ 0 & {-10.984} & \style{background-color:#FFF200;padding:5px}{{-1}} \\ {-10.451} & 0.363 & \style{background-color:#FFF200;padding:5px}{1} \\ 5.129 & 5.129 & \style{background-color:#FFF200;padding:5px}{0} \\ \end{array} \right] \end{array} \begin{array} {ccc} (M(\mathrm{T|\style{background-color:#FFF200;padding:2px}{H}})CK)^{-1} \\ \left[ \begin{array} {rrr} {-27.097} & 0.725 & {-0.363} \\ 26.417 & 31.546 & {-15.773} \\ \style{background-color:#FFF200;padding:5px}{{-291.028}} & \style{background-color:#FFF200;padding:5px}{{-86.624}} & \style{background-color:#FFF200;padding:5px}{{-175.177}} \\ \end{array} \right] \\ \hline {-436.978} \end{array}$

Note where the yellow highlights went: inversion transposed them from the rightmost column to the bottom row. It's no longer super clear what these values have to do with the held-interval anymore, and that's okay. In the next step, notice that when we multiply $(\mathrm{T|\style{background-color:#FFF200;padding:2px}{H}})CK$ and $(M(\mathrm{T|\style{background-color:#FFF200;padding:2px}{H}})CK)^{-1}$ together, the yellow-highlighted entries will pair up in every individual dot product, and thus the final matrix product will have a "little bit of yellow" mixed into every entry. In other words, every entry of $G$ may potentially participate in achieving the effect that this interval is held unchanged.

So, actually multiplying those up now, we find $G$ =

$\begin{array} {c} (\mathrm{T|\style{background-color:#FFF200;padding:2px}{H}})CK(M(\mathrm{T|\style{background-color:#FFF200;padding:2px}{H}})CK)^{-1} \\ \left[ \begin{array} {c} 0.990 & {-0.0265} & {0.0132} \\ {-0.00199} & 0.595 & {-0.797} \\ {-0.00399} & 0.189 & 0.405 \\ 0.00798 & {-0.379} & 0.189 \\ \end{array} \right] \\ \end{array}$
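If you'd like to reproduce this result end to end, here's a self-contained Python sketch (ours, not D&D's Wolfram Language implementation). It forms $(\mathrm{T|H})CK$ and $M(\mathrm{T|H})CK$ exactly from logarithms, inverts the latter with the same adjugate-over-determinant form displayed above, and only rounds for display:

```python
import math

log2 = math.log2

def matmul(A, B):
    # plain-lists matrix product
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def inv3(A):
    # 3x3 inverse as adjugate over determinant, as displayed above
    (a, b, c), (d, e, f), (g, h, i) = A
    det = a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)
    adj = [[e * i - f * h, c * h - b * i, b * f - c * e],
           [f * g - d * i, a * i - c * g, c * d - a * f],
           [d * h - e * g, b * g - a * h, a * e - b * d]]
    return [[x / det for x in row] for row in adj]

M = [[1, 1, 1, 2], [0, 2, 3, 2], [0, 0, 2, 1]]
TH = [[1, 0, 3, 0, -1, 2, -1, 3, 0, -3, 0],
      [1, 0, 0, 2, -1, -1, 1, 0, 2, 2, -1],
      [-1, -1, -1, -1, 0, 0, 0, 0, 0, 0, 1],
      [0, 1, 0, 0, 1, 0, 0, -1, -1, 0, 0]]
C = [log2(w) for w in [30, 35, 40, 45, 42, 12, 6, 56, 63, 72, 2]]
K = [[0, 0, 0], [1, 1, 0], [1, 0, 0], [0, -1, 0], [0, 0, 0], [0, 0, 0],
     [0, 0, 0], [0, 0, 0], [0, 0, 0], [0, 0, 0], [0, 0, 1]]

THC = [[TH[r][c] * C[c] for c in range(11)] for r in range(4)]
THCK = matmul(THC, K)                     # (T|H)CK, shape (4, 3)
G = matmul(THCK, inv3(matmul(M, THCK)))   # (T|H)CK (M(T|H)CK)^-1

j = [1200 * log2(p) for p in (2, 3, 5, 7)]
g = [sum(j[p] * G[p][col] for p in range(4)) for col in range(3)]
print([round(x, 3) for x in g])  # ≈ [1200.000, 350.909, 266.725]
```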

### Convert generator embedding to generator tuning map

And from here we can find $𝒈 = 𝒋G$:

$\begin{array} {ccc} 𝒋 \\ \left[ \begin{array} {rrr} 1200.000 & 1901.955 & 2786.314 & 3368.826 \\ \end{array} \right] \end{array} \begin{array} {c} G \\ \left[ \begin{array} {c} 0.990 & {-0.0265} & {0.0132} \\ {-0.00199} & 0.595 & {-0.797} \\ {-0.00399} & 0.189 & 0.405 \\ 0.00798 & {-0.379} & 0.189 \\ \end{array} \right] \end{array} = \begin{array} {ccc} 𝒈 \\ \left[ \begin{array} {rrr} 1200.000 & 350.909 & 266.725 \\ \end{array} \right] \end{array}$

Confirming: yes, {1200.000 350.909 266.725][0 1 2} = $(1200.000 × 0) + (350.909 × 1) + (266.725 × 2) = 0 + 350.909 + 533.450 = 884.359$, which matches the just size of $\frac53$: $1200 × \log_2(\frac53) = 884.359$ ¢. So the constraint to hold that interval unchanged has been satisfied.
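A tiny script (ours) makes the same confirmation, including the comparison against the just size of $\frac53$:

```python
import math

g = [1200.000, 350.909, 266.725]  # generator tuning map, in cents
y = [0, 1, 2]                     # what 5/3 maps to in this temperament

tempered = sum(gi * yi for gi, yi in zip(g, y))
just = 1200 * math.log2(5 / 3)
print(f"tempered {tempered:.3f} vs just {just:.3f} cents")
```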

### System of equations style

It may be instructive again to consider the system of equations style, solving directly for $𝒈$. Rewinding to before we took our matrix inverse, introducing $𝒋$ to both sides, and multiplying the entries of $𝒈$ through, we find:

$\begin{array} {ccc} 𝒈M(\mathrm{T|\style{background-color:#FFF200;padding:2px}{H}})CK \\ \left[ \begin{array} {rrr} 15.773g_1 & {-0.363g_1} & \style{background-color:#FFF200;padding:5px}{0g_1} \\ {-21.095g_2} & {-10.621g_2} & \style{background-color:#FFF200;padding:5px}{1g_2} \\ {-15.773g_3} & 5.854g_3 & \style{background-color:#FFF200;padding:5px}{2g_3} \\ \end{array} \right] \end{array} = \begin{array} {ccc} 𝒋(\mathrm{T|\style{background-color:#FFF200;padding:2px}{H}})CK \\ \left[ \begin{array} {rrr} 7318.250 & {-2600.619} & \style{background-color:#FFF200;padding:5px}{884.359} \\ \end{array} \right] \end{array}$

Which gets viewed as a system of equations:

$\begin{array} {r} 15.773g_1 & + & {-21.095g_2} & + & {-15.773g_3} & = & 7318.250 \\ {-0.363}g_1 & + & {-10.621g_2} & + & 5.854g_3 & = & {-2600.619} \\ \style{background-color:#FFF200;padding:5px}{0g_1} & + & \style{background-color:#FFF200;padding:5px}{1g_2} & + & \style{background-color:#FFF200;padding:5px}{2g_3} & = & \style{background-color:#FFF200;padding:5px}{884.359} \\ \end{array}$

The third equation, which in the earlier "bigger example" was used to enforce that a fourth target-interval received damage coinciding with that of the other three target-intervals tapped by the constraint matrix, has here been replaced with an equation enforcing that [0 1 2}, the generator-count vector that $\frac53$ maps to in this temperament, is tuned to the just size of that same interval. And so, when we solve this system of equations, we now get a completely different set of generator tunings.

$g_1 = 1200.000 \\ g_2 = 350.909 \\ g_3 = 266.725 \\$

Which agrees with what we found by solving directly for $G$.

### Sanity-check

But is this still a minimax tuning candidate? That is, do we find, in the damage list for this tuning, that the three target-intervals whose damages were requested to coincide are still coinciding? Indeed we do:

$\begin{array} {c} \textbf{d} \\ \left[ \begin{array} {r} 0.007 & 0.742 & 0.742 & 0.742 & 0.788 & 0.495 & 0.353 & 1.650 & 0.057 & 1.694 \\ \end{array} \right] \end{array}$

We can see that the 2nd, 3rd, and 4th target-intervals, the ones with non-zero entries in their columns of $K$, all coincide with 0.742 damage, so this is a proper candidate for minimax tuning here. It's some point where three damage graphs are intersecting while within the held-interval plane. Rinse and repeat.

## Tie-breaking

We've noted that the zero-damage method for miniaverage tuning schemes cannot directly handle non-unique tunings itself. When it finds a non-unique miniaverage (more than one set of generator tunings causes the same minimum damage sum), the next step is to start over with a different method, specifically, the power-limit method, which can only provide an approximate solution.

On the other hand, the coinciding-damage method discussed here, the one for minimax tuning schemes, does have the built-in ability to find a unique true optimum tuning when the minimax tuning is non-unique. In fact, it has not just one, but two different techniques for tie-breaking when it encounters non-unique minimax tunings:

1. Comparing ADSLODs. This stands for "abbreviated descending-sorted list of damage". When we have a tied minimax, we essentially have a tie between the first entries of each candidate tuning's ADSLOD. But we can compare more than just the first entry (though we can't compare all entries; hence the "abbreviated" part). If we can break the tie with any of the remaining ADSLOD entries, then we've just done "basic tie-breaking".
2. Repeat iterations. When basic tie-breaking fails, we must perform a whole additional iteration of the entire coinciding-damage method, but this time only within a narrowed-down region of tuning damage space. Whenever this happens, we've moved on to advanced tie-breaking. More than one repeat iteration may be necessary before a true unique optimum tuning is found (but a round of basic tie-breaking will occur before each next, more advanced iteration of the method).

Here's a flow chart giving the overall picture:

### Basic tie-breaking example: setup

Let's just dive into an example.

The examples given in the diagrams back in article 3 here and article 6 here were a bit silly. The easiest way to break the tie in those cases would be to remove the offending target-interval from the set, since with constant damage, it will not aid in preferring one tuning to another. More natural examples of tied tunings — that cannot be resolved so easily — require 3D tuning damage space. So that's what we'll be looking at here.

Suppose we're doing a minimax-U tuning of blackwood temperament, and our target-interval set is $\{ \frac21, \frac31, \frac51, \frac65 \}$. This is a rank-2 temperament, so we're in 3D tuning damage space. The relevant region of its tuning damage space is visualized below. The yellow hyper-V is the damage graph for $\frac21$, the blue hyper-V is for $\frac31$, the green hyper-V is for $\frac51$, and the red hyper-V is for $\frac65$.

Because the mapped intervals for $\frac21$ and $\frac31$ both use only the first generator $g_1$, their hyper-V creases on the floor are parallel. A further consequence of this is that their hyper-Vs' line of intersection above the floor is parallel to the zero-damage floor and will never touch it. Said another way, there's no tuning of blackwood temperament where both primes $\frac21$ and $\frac31$ are tuned pure at the same time. (This can also be understood through the nature of the blackwood comma $\frac{256}{243}$, which contains only primes 2 and 3.)

So, instead of a unique minimax tuning point, we find this line-segment range of valid minimax tunings, bounded by the points on either end where the damage to the other target-intervals exceeds this coinciding damage to primes $\frac21$ and $\frac31$. We indicated this range on the diagram above. At this point, looking down on the surface of our max damage graph, we know the true optimum tuning is somewhere along this range. But it's hard to tell exactly where. In order to find it, let's gather ReDPOTICs and STICs, convert them to constraint matrices, and convert those in turn to tunings.

### Basic tie-breaking example: comparing tunings for minimax

With four target-intervals, we have 16 ReDPOTICs to check and 6 STICs to check, for a total of 22 points. Here are their constraints, tunings, and damage lists:

| constraint matrix $K$ | candidate $𝒈_i$ | generator tuning map {$g_1$ $g_2$] | target-interval damage list $\textbf{d}$ |
|---|---|---|---|
| $\left[ \begin{array} {rrr} +1 & +1 \\ +1 & 0 \\ 0 & +1 \\ 0 & 0 \\ \end{array} \right]$ | $𝒈_1$ | {238.612 2793.254] | [6.940 6.940 6.940 6.940] |
| $\left[ \begin{array} {rrr} +1 & +1 \\ +1 & 0 \\ 0 & -1 \\ 0 & 0 \\ \end{array} \right]$ | $𝒈_2$ | {238.612 2779.374] | [6.940 6.940 6.940 6.940] |
| $\left[ \begin{array} {rrr} +1 & +1 \\ -1 & 0 \\ 0 & +1 \\ 0 & 0 \\ \end{array} \right]$ | $𝒈_3$ | {233.985 2816.390] | [30.075 30.075 30.075 30.075] |
| $\left[ \begin{array} {rrr} +1 & +1 \\ -1 & 0 \\ 0 & -1 \\ 0 & 0 \\ \end{array} \right]$ | $𝒈_4$ | {233.985 2756.240] | [30.075 30.075 30.075 30.075] |
| $\left[ \begin{array} {rrr} +1 & +1 \\ +1 & 0 \\ 0 & 0 \\ 0 & +1 \\ \end{array} \right]$ | $𝒈_2$ | {238.612 2779.374] | [6.940 6.940 6.940 6.940] |
| $\left[ \begin{array} {rrr} +1 & +1 \\ +1 & 0 \\ 0 & 0 \\ 0 & -1 \\ \end{array} \right]$ | $𝒈_1$ | {238.612 2793.254] | [6.940 6.940 6.940 6.940] |
| $\left[ \begin{array} {rrr} +1 & +1 \\ -1 & 0 \\ 0 & 0 \\ 0 & +1 \\ \end{array} \right]$ | $𝒈_5$ | {233.985 2696.090] | [30.075 30.075 90.225 30.075] |
| $\left[ \begin{array} {rrr} +1 & +1 \\ -1 & 0 \\ 0 & 0 \\ 0 & -1 \\ \end{array} \right]$ | $𝒈_4$ | {233.985 2756.240] | [30.075 30.075 30.075 30.075] |
| $\left[ \begin{array} {rrr} +1 & +1 \\ 0 & 0 \\ +1 & 0 \\ 0 & +1 \\ \end{array} \right]$ | $𝒈_6$ | {239.215 2790.240] | [3.923 11.769 3.923 3.923] |
| $\left[ \begin{array} {rrr} +1 & +1 \\ 0 & 0 \\ +1 & 0 \\ 0 & -1 \\ \end{array} \right]$ | $𝒈_1$ | {238.612 2793.254] | [6.940 6.940 6.940 6.940] |
| $\left[ \begin{array} {rrr} +1 & +1 \\ 0 & 0 \\ -1 & 0 \\ 0 & +1 \\ \end{array} \right]$ | $𝒈_2$ | {238.612 2779.374] | [6.940 6.940 6.940 6.940] |
| $\left[ \begin{array} {rrr} +1 & +1 \\ 0 & 0 \\ -1 & 0 \\ 0 & -1 \\ \end{array} \right]$ | $𝒈_4$ | {233.985 2756.240] | [30.075 30.075 30.075 30.075] |
| $\left[ \begin{array} {rrr} 0 & 0 \\ +1 & +1 \\ +1 & 0 \\ 0 & +1 \\ \end{array} \right]$ | $𝒈_7$ | {238.133 2783.200] | [9.334 3.111 3.111 3.111] |
| $\left[ \begin{array} {rrr} 0 & 0 \\ +1 & +1 \\ +1 & 0 \\ 0 & -1 \\ \end{array} \right]$ | $𝒈_2$ | {238.612 2779.374] | [6.940 6.940 6.940 6.940] |
| $\left[ \begin{array} {rrr} 0 & 0 \\ +1 & +1 \\ -1 & 0 \\ 0 & +1 \\ \end{array} \right]$ | $𝒈_1$ | {238.612 2793.254] | [6.940 6.940 6.940 6.940] |
| $\left[ \begin{array} {rrr} 0 & 0 \\ +1 & +1 \\ -1 & 0 \\ 0 & -1 \\ \end{array} \right]$ | $𝒈_4$ | {233.985 2756.240] | [30.075 30.075 30.075 30.075] |
| $\left[ \begin{array} {rrr} +1 & 0 \\ 0 & +1 \\ 0 & 0 \\ 0 & 0 \\ \end{array} \right]$ | n/a | n/a | n/a |
| $\left[ \begin{array} {rrr} +1 & 0 \\ 0 & 0 \\ 0 & +1 \\ 0 & 0 \\ \end{array} \right]$ | $𝒈_8$ | {240.000 2786.314] | [0.000 18.045 0.000 18.045] |
| $\left[ \begin{array} {rrr} +1 & 0 \\ 0 & 0 \\ 0 & 0 \\ 0 & +1 \\ \end{array} \right]$ | $𝒈_9$ | {240.000 2804.360] | [0.000 18.045 18.045 0.000] |
| $\left[ \begin{array} {rrr} 0 & 0 \\ +1 & 0 \\ 0 & +1 \\ 0 & 0 \\ \end{array} \right]$ | $𝒈_{10}$ | {237.744 2786.314] | [11.278 0.000 0.000 11.278] |
| $\left[ \begin{array} {rrr} 0 & 0 \\ +1 & 0 \\ 0 & 0 \\ 0 & +1 \\ \end{array} \right]$ | $𝒈_{11}$ | {237.744 2775.040] | [11.278 0.000 11.278 0.000] |
| $\left[ \begin{array} {rrr} 0 & 0 \\ 0 & 0 \\ +1 & 0 \\ 0 & +1 \\ \end{array} \right]$ | $𝒈_{12}$ | {238.612 2786.314] | [6.940 6.940 0.000 0.000] |

Note that many of these constraint matrices ended up identifying the same tuning. Because of relationships between the prime factors of the target-intervals, some of these points in tuning damage space turned out to be the same, or at least vertically aligned, thus giving the same tuning. So, we didn't actually end up with 22 different candidate $𝒈_i$ to compare. We only got 12 different candidate tunings. (Also note that one of our constraint matrices failed to find any tuning at all! That's the one that tried to find the place where $\frac21$ and $\frac31$ are pure simultaneously. Like we said, it cannot be done.)

Let's rework this table a bit. We won't worry about the constraint matrices we used to get here anymore; done with those. We'll just worry about the unique candidate tunings, their damage lists, and we'll add a new column for their maximum values.

| candidate $𝒈_i$ | generator tuning map {$g_1$ $g_2$] | target-interval damage list $\textbf{d}$ | max damage $\max(\textbf{d})$ |
|---|---|---|---|
| $𝒈_1$ | {238.612 2793.254] | [6.940 6.940 6.940 6.940] | 6.940 |
| $𝒈_2$ | {238.612 2779.374] | [6.940 6.940 6.940 6.940] | 6.940 |
| $𝒈_3$ | {233.985 2816.390] | [30.075 30.075 30.075 30.075] | 30.075 |
| $𝒈_4$ | {233.985 2756.240] | [30.075 30.075 30.075 30.075] | 30.075 |
| $𝒈_5$ | {233.985 2696.090] | [30.075 30.075 90.225 30.075] | 90.225 |
| $𝒈_6$ | {239.215 2790.240] | [3.923 11.769 3.923 3.923] | 11.769 |
| $𝒈_7$ | {238.133 2783.200] | [9.334 3.111 3.111 3.111] | 9.334 |
| $𝒈_8$ | {240.000 2786.314] | [0.000 18.045 0.000 18.045] | 18.045 |
| $𝒈_9$ | {240.000 2804.360] | [0.000 18.045 18.045 0.000] | 18.045 |
| $𝒈_{10}$ | {237.744 2786.314] | [11.278 0.000 0.000 11.278] | 11.278 |
| $𝒈_{11}$ | {237.744 2775.040] | [11.278 0.000 11.278 0.000] | 11.278 |
| $𝒈_{12}$ | {238.612 2786.314] | [6.940 6.940 0.000 0.000] | 6.940 |

From here, we try to pick our minimax tuning. Skimming the last column of this table quickly, we can see that the minimum out of all of our maximum damages is 6.940.

However, alas! Not just one of these candidate generator tuning maps achieves this feat, and not just two of them, but three different ones. Well, we'll just have to tie-break between them, then.
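To see how a computer might carry out this scan, here's a minimal sketch in Python/NumPy. It assumes blackwood's mapping in the form [{5 8 0] {0 0 1]} (so that $g_2$ tunes prime 5 directly; this form is our inference from the tunings in the table, not stated in the text), and checks a handful of the candidate tunings:

```python
import numpy as np

# assumed blackwood mapping [{5 8 0] {0 0 1]}:
# prime 2 -> 5 generators, prime 3 -> 8 generators, prime 5 -> g2 directly
M = np.array([[5, 8, 0],
              [0, 0, 1]])
j = 1200 * np.log2([2, 3, 5])             # just tuning map, in cents

# target-intervals 2/1, 3/1, 5/1, 6/5 as prime-count vector columns
T = np.array([[1, 0, 0,  1],
              [0, 1, 0,  1],
              [0, 0, 1, -1]])

def damage_list(g):
    """Unity-weight (U) damage: absolute error of each mapped target-interval."""
    return np.abs((np.array(g) @ M - j) @ T)

candidates = {
    "g1":  (238.612, 2793.254),
    "g2":  (238.612, 2779.374),
    "g6":  (239.215, 2790.240),
    "g8":  (240.000, 2786.314),
    "g12": (238.612, 2786.314),
}
max_damage = {name: damage_list(g).max() for name, g in candidates.items()}
minimax = min(max_damage.values())        # close to 6.940
tied = {name for name, m in max_damage.items() if m < minimax + 0.01}
```

Running this reproduces the three-way tie between $𝒈_1$, $𝒈_2$, and $𝒈_{12}$ at a max damage of about 6.940 ¢(U).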

### Basic tie-breaking example: understanding the tie

The tied minimax tunings are $𝒈_1$, $𝒈_2$, and $𝒈_{12}$. These have tuning maps of {238.612 2793.254], {238.612 2779.374], and {238.612 2786.314], respectively. Notice that all three of these give the same tuning for the first generator, $g_1$, namely, 238.612 ¢. This tells us that these three tunings can be found on a line together, and it also tells us that this line is perpendicular to the axis for $g_1$. As for the tuning of the second generator $g_2$, then, there's going to be one of these three tuning maps which gives the value in-between the other two; that happens here to be the last one, $𝒈_{12}$. Its $g_2$ value is 2786.314 ¢, which is in fact exactly halfway between the other two tuning maps' tuning of $g_2$, at 2779.374 ¢ and 2793.254 ¢ (you may also notice that 2786.314 is the just tuning of $\frac51$).

Based on what we know from visualizing the tuning damage space earlier, we can suppose that candidate tunings $𝒈_1$ and $𝒈_2$ are the ones that bound the ends of the line segment region of tied minimax tunings. Remember, $𝒈_1$, $𝒈_2$, and $𝒈_{12}$ are not really the only tunings tied for minimax tuning; they're merely special/important/representative tunings that are tied for minimax damage. In concrete terms, it's not just {238.612 2793.254], {238.612 2779.374], and {238.612 2786.314] that deal the minimax damage of 6.940. Any tuning with $g_1 = 238.612$ and $2779.374 \leq g_2 \leq 2793.254$ deals this same minimax damage, such as {238.612 2784.000] or {238.612 2787.878]. Any of those will satisfy someone looking only to minimax damage. But we're looking for a true optimum in here. We know there will be some other reason to prefer one of these to all the others.

As for $𝒈_{12}$, it's halfway between $𝒈_1$ and $𝒈_2$, although from this view we can't specifically see it. That's because this tuning came from one of our constraint matrices we built from a STIC, which is to say that it's for one of our zero-damage points, where target-interval tuning damage graph creases cross each other along the zero-damage floor. So to see this point better, we'll need to rotate our view on tuning damage space, and take a look from below.

We've drawn a dashed white line segment to indicate the basic minimax tied range from the previous diagram, which is along the crease between the blue and yellow graphs. That part of their crease is not directly visible here, because it's exactly the part that's covered up by that green bit in the middle. Then, we've drawn a finely-dotted cyan line straight down from that basic minimax tied range's line segment at a special point: where the green and red creases cross.

Yes, that's right: it turns out that the place the green and red creases cross is directly underneath the line where the blue and yellow creases cross. This is no coincidence. As hinted at earlier, it's because of our choices of target-intervals, and in particular their prime compositions. If you think about it, along the blue/yellow crease, that's where blue $\frac21$ and yellow $\frac31$ have the same damage. But there are two places where that happens: where they have the same exact error, and where they have the same error but opposite sign (one error is positive and the other negative). This happens to be the latter type of crossing. And also remember that we're using a unity-weight damage here, i.e. where damage is equal to absolute error. So if $\frac21$ and $\frac31$ have opposite error, that means that their errors cancel out when you combine them to the interval $\frac61$. So $\frac61$ is tuned pure along this crease (if this isn't clear, please review #The meaning of the ReDPOTIC product). And along the green $\frac51$ damage graph's crease along the zero-damage floor, $\frac51$ has zero damage, i.e. is also pure. So wherever that green crease passes under the blue/yellow crease, that means that both $\frac61$ and $\frac51$ are pure there. So what should we expect to find with the red hyper-V here, the one for $\frac65$? Well, being composed exactly and only of $\frac61$ and $\frac51$, which are both tuned pure, it too must be tuned pure. So no surprise that the red graph crease crosses under the blue/yellow crease at the exact same point as the green crease does.
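This error-cancellation argument is easy to verify numerically. A small sketch (again assuming the mapping form [{5 8 0] {0 0 1]}, our inference from the table of tunings):

```python
import numpy as np

M = np.array([[5, 8, 0], [0, 0, 1]])      # assumed blackwood mapping
g12 = np.array([238.612, 2786.314])       # the tuning at the crossing point
j = 1200 * np.log2([2, 3, 5])             # just tuning map, in cents

def error(vector):
    """Signed tuning error of a prime-count vector, in cents."""
    return float((g12 @ M - j) @ np.array(vector))

e2, e3 = error([1, 0, 0]), error([0, 1, 0])   # opposite signs, roughly -6.94 and +6.94
e6 = error([1, 1, 0])                          # 6/1: the errors cancel, so ~0
e5 = error([0, 0, 1])                          # 5/1: pure at this tuning
e65 = error([1, 1, -1])                        # 6/5 = 6/1 minus 5/1: also pure
```

The errors of $\frac21$ and $\frac31$ are equal and opposite, so $\frac61$, $\frac51$, and hence $\frac65$ all come out pure at this one tuning, just as argued above.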

This — as you will probably not be surprised — is our third tied candidate tuning, $𝒈_{12}$, the one that also happens to be halfway between $𝒈_1$ and $𝒈_2$. We identified this as a point of interest on account of the fact that two damage graphs crossed along the zero-damage floor, thus giving us enough damage graphs to specify a point. This isn't just any point along the floor: again, this point came from one of our STICs.

Intuitively, we should know at this time that this is our true optimum tuning, and we have labeled it as such in the diagram. Any other point along the white dashed line, in our basic minimax region, will minimax damage to $\frac21$ and $\frac31$ at 6.940 ¢(U). But if we go anywhere along this line segment region other than this one special point, the damage to our other two target-intervals, $\frac51$ and $\frac65$, will be greater than 0. Sure, the main thing we've been tasked with is to minimize the maximum damage. But while we're at it, why not go the extra mile and tune both $\frac51$ and $\frac65$ as accurately as we can, too? We might as well. And so $𝒈_{12}$ is the best we can do, in a "nested minimax" sense; that's our true optimum tuning. In other words, the maximum damages in each position of these descending-sorted lists are minimized, at least as far down the descending-sorted damage list as they've been abbreviated. (We noted earlier that these zero-damage points might seem unlikely to be of much use in identifying a minimax tuning. Well, here's a perfect example of where one saved the day!)

But the question remains: how will the computer be best able to tell that this is the tuning to choose? For that, we're going to need ADSLODs.

Before we go any further, we should get straight about what an ADSLOD is. Firstly, like ReDPOTIC, it's another brutally long initialism for a brutally tough-to-name concept of the coinciding-damage method! Secondly, though, it's short for "abbreviated descending-sorted list of damage". Let's work up to that initialism bit by bit.

The "LOD" part is easy: these are "lists of damage". Specifically, they're our target-interval damage lists, which we notate as $\mathbf{d}$. (The reversal of "damage list" to "list of damage" is just to make the initialism pronounceable. Sorry.)

Next, they are "DS", for "descending-sorted". Note that sorting them causes us to lose track of which target-interval each damage in the list corresponds to. But that's okay; that information is not important for narrowing down to our optimum tuning. All of our target-intervals are treated equally at this level of the method (we may have weighted their absolute errors differently to obtain our damage, but that's not important here).

This descending-sorting can be thought of as a logical continuation of how we compared the max damage at each tuning. The first entry of each ADSLOD is the max damage done to any interval by that tuning. The second entry of each ADSLOD is the second-most max damage done to any interval by that tuning (which may be the same as the max damage, or less). The third entry is the third-most max damage. And so on.

So in basic tie-breaking, we can compare all the second entries of these ADSLODs to decide which tuning to prefer. And if that doesn't work, then we'll compare all the third entries. And so on. Until we run out of ADSLOD.

Which brings us, finally, to the "A" part of the initialism, for "abbreviated". Specifically, we only keep the first $r + 1$ entries in these lists, and throw out the rest. Here $r$ is the rank of the temperament, or in other words, the count of generators it has. Again, $r + 1$ is the dimension of the tuning damage space we're searching, and so it's the number of damage graphs required to specify a point within it.

Critically, for our purposes here, that means it's the maximum number of target-intervals we could possibly have minimaxed damage to at this time; if we look any further down this list, then we can't guarantee that the damage values we're looking at are as low as they could possibly be at their position in the list. That's why if these ADSLODs are identical all the way down, we need to try something else, in a new tuning damage space.
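The construction is short enough to sketch in code (Python here, with our own function names, not anything from D&D's library); note how tuple comparison gives us entry-by-entry tie-breaking for free:

```python
def adslod(damage_list, r):
    """Abbreviated descending-sorted list of damage: sort the damages
    descending, then keep only the first r + 1 entries."""
    return tuple(sorted(damage_list, reverse=True)[:r + 1])

# rank-2 temperament (r = 2), so ADSLODs keep 3 entries;
# damage lists for the three tied blackwood candidates from above
r = 2
adslods = {
    "g1":  adslod([6.940, 6.940, 6.940, 6.940], r),
    "g2":  adslod([6.940, 6.940, 6.940, 6.940], r),
    "g12": adslod([6.940, 6.940, 0.000, 0.000], r),
}
# Python tuples compare lexicographically: first entries first, then second,
# and so on, which is exactly the basic tie-breaking procedure
best = min(adslods, key=adslods.get)
```

Here the tie survives the first two entries but breaks at the third, selecting $𝒈_{12}$.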

### Basic tie-breaking example: using the ADSLODs

So, the maximum damages in each damage list weren't enough to choose a tuning. Three of them were tied. We're going to need to look at these three tunings in more detail. Here's our table again, reworked some more, so we only compare the three tied minimax tunings $𝒈_1$, $𝒈_2$, and $𝒈_{12}$, and so we show their whole ADSLODs now:

| candidate $𝒈_i$ | generator tuning map {$g_1$ $g_2$] | target-interval damage list $\textbf{d}$ | ADSLOD $\text{sort}_\text{dsc}(\textbf{d})_{1 \ldots r+1}$ |
|---|---|---|---|
| $𝒈_1$ | {238.612 2793.254] | [6.940 6.940 6.940 6.940] | [6.940 6.940 6.940] |
| $𝒈_2$ | {238.612 2779.374] | [6.940 6.940 6.940 6.940] | [6.940 6.940 6.940] |
| $𝒈_{12}$ | {238.612 2786.314] | [6.940 6.940 0.000 0.000] | [6.940 6.940 0.000] |

By comparing maximum damages between these candidate tunings, we've already checked the first entries in each of these ADSLODs. So we start with the second entries. Can we tie-break here? No. We've got 6.940, 6.940, and 6.940, respectively. Another three-way tie. Though we've noted that in ADSLODs it no longer matters which target-intervals the damages are dealt to, it may still help confirm understanding to think through which target-intervals these damages must be for. Well, these first two 6.940 entries must be for the coinciding blue and yellow damages above, the blue/yellow crease, that is, where the damage to $\frac21$ and $\frac31$ is the same.

Well, one last chance to tie-break before we may need to fall back to advanced tie-breaking. Can we break the tie using the third entries of these ADSLODs? Why, yes! We can! Phew. While $𝒈_1$ and $𝒈_2$ are still tied at 6.940 even in the third position, with $𝒈_{12}$ we get by with only 0.000. This could be thought of as corresponding either to the damage for $\frac51$ or for $\frac65$. It doesn't matter which one. At $𝒈_1$ and $𝒈_2$ the damage to one or both of these two other target-intervals has gotten so big that it's just surpassing that of the damage to $\frac21$ and $\frac31$ (that's why these are the bounds of the minimax range). But at $𝒈_{12}$ one or both of them (we know it's both) is zero.

Note that we refer to equal damages between tunings as "ties"; this is where we have a contest. We are careful to refer to equal damages within tunings as "coinciding", which does double-duty at representing equality and intersection of the target-interval damage graphs.

Thus concludes our demonstration of basic tie-breaking.

### When basic tie-breaking fails

We can modify just one thing about our example here in order to cause the basic tie-breaking to fail. Can you guess what it is? We'll give you a moment to pause the article and think. (Haha.)

The answer is: we would only need to remove $\frac65$ from our target-interval set. Thus, we'd be finding the {2/1, 3/1, 5/1} minimax-U tuning of blackwood, or more succinctly, its primes minimax-U tuning.

And why does basic tie-breaking fail without $\frac65$ in the set? This question is perhaps best answered by looking at the from-below view on tuning damage space. But for completeness, let's take a gander at the from-above view first, too.

This looks pretty much the same as before. It's just that there's no red hyper-V damage graph for $\frac65$ slicing through here anymore. But it's actually the same exact tied minimax range still.

So what can we see from below, then?

Ah, so there's the same dashed white line as before for the same basic minimax tied range on the other side. But do you notice what's different? Without the red hyper-V graph for $\frac65$ anymore, we have no crease coming through to cross the green crease at that magic point we care about! So, this point and its [6.940 6.940 0.000] ADSLOD never come up for consideration. And so when we get to the step where we compare ADSLODs, we'd only have the two that completely tie with [6.940 6.940 6.940], and get stuck. We would have needed a second crease along the floor to intersect, to cause the method to identify it as a point of interest.

Remember, this coinciding-damage method is only designed to gather points, specifically by checking all the possible places where enough of these damage graphs intersect at once in order to define a point. In 3D tuning damage space we need two creases to intersect at once along the floor to define a point. But here on the floor we've just got one graph, with its crease's line going one direction, and up above we've got two graphs meeting at a different crease, their line going the perpendicular direction. There is a tuning, i.e. a vertical line through tuning damage space, which both of these lines pass through, and that's the tuning we want, the true optimum tuning. But because these lines don't actually cross at a point in tuning damage space, one just passes right over the other, the method misses it.

While we as human observers can pick out right away that what we want is the tuning at that vertical line where the one crease passes over the other, in terms of the design of the computer algorithm this method uses, it turns out to be more efficient overall for it to reach that conclusion another way. And that's what we're going to look at next; that's the advanced tie-breaking.

As a sneak preview, the way the computer will do it is to take the cross-section of tuning damage space wherever that tied range occurs, and gather a new set of coinciding-damage points. From its cross-section view, a 2D tuning damage graph, the blue and yellow graphs will look like horizontal lines, and the green graph will look like a V underneath it. From that view, it will be obvious that the tip of this green V, where the damage to $\frac51$ is zero, will be the true optimum tuning. Something like this:

So: comparing ADSLODs (abbreviated descending-sorted lists of damage) is not always sufficient to identify a unique optimum tuning. Basic tie-breaking is insufficient whenever more than one coinciding-damage point has the same exact ADSLOD, and these identical ADSLODs also happen to be the best ones insofar as they give a nested minimax tuning, where by "nested minimax" we mean that the maximum damages in each position of these descending-sorted lists are minimized, at least as far down the descending-sorted damage list as they've been abbreviated. We might call these "TiNMADSLODs": tied nested minimax abbreviated descending-sorted lists of damage.

In such situations, what we need to do is gather a new set of points, for a new coinciding-damage point set. But it's not like we're starting over from scratch here; it just means that we need to perform another iteration of the same process, but this time searching for tunings in a more focused region of our tuning damage space. In other words, we didn't waste our time or effort; we did make progress with our first coinciding-damage point set. The tunings with TiNMADSLODs which we identified in the previous iteration are valuable: they're what we need to proceed. What these tell us, essentially, is that the true optimum tuning must be somewhere in between them. Our tie indicates not simply two, three, or any finite number of tied tunings; it indicates a continuous range of tunings which all satisfy this tie. The trick now is to define what range, or region, that we mean exactly by "in between", and describe it mathematically. But even more importantly, we've got to figure out how to narrow down the smaller (but still infinite!) set of tuning points within this region to a new set of candidate true optimum tunings.

A slightly oversimplified way to describe this type of tie-breaking would be: whenever we find a range of tunings tied for minimax damage, in order to tie-break within this range, we temporarily ignore the target-intervals in the set which received the actual minimaxed damage amount, then search this range for the tuning that minimizes the otherwise maximum damage. And we just keep repeating this process until we finally identify a single true optimum tuning.
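Here's a toy sketch of that oversimplified description. The damage functions below are illustrative stand-ins shaped like the blackwood cross-section (two V-graphs sharing a zero partway along the tied segment), not real computed graphs, and the parameter `t` simply runs along the tied range:

```python
import numpy as np

def tie_break_along_segment(free_damage_fns, ts):
    """Among sample tunings t along the tied range, minimize the maximum
    damage over the graphs NOT pinned at the tied minimax value."""
    worsts = [max(f(t) for f in free_damage_fns) for t in ts]
    i = int(np.argmin(worsts))
    return ts[i], worsts[i]

# the graphs tied flat at the minimax are temporarily ignored; these two
# V-shaped stand-ins mimic the remaining creases, sharing a zero at t = 0.5
free = [lambda t: 13.88 * abs(t - 0.5),
        lambda t: 13.88 * abs(t - 0.5)]
t_opt, residual = tie_break_along_segment(free, np.linspace(0.0, 1.0, 101))
```

The search lands on the midpoint of the segment, where the remaining (otherwise maximum) damage bottoms out, which is the one-iteration analogue of repeating the whole coinciding-damage method within the narrowed region.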

To help illustrate advanced tie-breaking, we're going to look at a minimax-C tuning of augmented temperament, with mapping [{3 0 7] {0 1 0]}. In particular, we're going to use the somewhat arbitrary target-interval set $\{ \frac32, \frac52, \frac53, \frac83, \frac95, \frac{16}{5}, \frac{15}{8}, \frac{18}{5} \}$. As a rank-2 temperament, we're going to be searching 3D tuning damage space. This temperament divides the octave into three parts, so our ballpark $g_1$ is 400 ¢, and our second generator $g_2$ is a free generator for prime 3, so it's going to be in the ballpark of its pure tuning of 1901.955 ¢.

Here's an aerial view on the tuning damage space, where we've "clipped" every damage graph hyper-V where it has gone out-of-bounds, above 150 ¢(C) damage; that is, we've colored it in grey and flattened it across the top of the box of our visualization. This lets us focus in clearly on the region of real interest, which is where all target-intervals' damages are less than this cap at the same time. This is the multicolored crater in the middle here:

Like the blackwood example we looked at in the basic tie-breaking section, this max damage graph also ends up with a line segment region on top that's exactly parallel to the floor, therefore every tuning along this range is tied for minimax. This happens to be 92.557 ¢(C) damage.

(Again, this was made possible by the prime compositions of our target-intervals, and how this temperament maps them. Specifically, the yellow graph here is $\frac{15}{8}$ and the blue graph is $\frac{18}{5}$. The latter interval $\frac{18}{5}$ is one augmented comma $\frac{128}{125}$ off from $(\frac{15}{8})^2 = \frac{225}{64}$. In vector form, we have [1 2 -1} = 2×[-3 1 1} - [-7 0 3}. So since $\frac{15}{8}$ maps to [-2 1}, that means that $\frac{18}{5}$ maps to [-4 2}. And since [-4 2} = 2×[-2 1}, a simple scalar relates these two mapped intervals; in other words, they have the same proportion of generators, which means their damage graph creases will be parallel.)

Every tuning along this range is tied for second-most minimax damage, still 92.557 ¢(C), on account of it being a crease between two target-interval damage graphs. But we can look 3 positions down the ADSLODs here, on account of it being a rank-2 ($r = 2$) temperament, so tuning damage space is $(r + 1 = 3)$-dimensional. But this doesn't help us, because this line segment range ends at the endpoints it does exactly where some third damage graph finally shoots above 92.557 ¢(C). So the third entries in these ADSLODs are also tied at 92.557 ¢(C).

And here's the key issue: this augmented example is more like the variation we briefly looked at for blackwood, where basic tie-breaking failed us, on account of there not happening to be any other identifiable points of interest immediately below the minimax line segment. Below this line segment, then, there must be only sloping planes, or possibly some other creases, but no points. So in order to nested minimax further, we'll have to create some points below it, by taking a cross-section right along that tied range, and seeing which damage graph V's we can find to intersect in that newly focused 2D tuning damage space.

The two tied minimax tunings we've identified are {399.809 1901.289] and {400.865 1903.401]. So, we're already very close to an answer: our $g_1$ has been narrowed down almost to the range of a single cent, while our $g_2$ is narrowed down almost to the range of two cents.

And here's what that cross-section looks like.

(Sorry for the changed colors of the graphs. Not enough time to nudge Wolfram Language into being nice.)

For now, don't worry too much about the horizontal axis, but know that the range from 0.0 to 1.0 is our actual tied minimax range. It's a little tricky to label this axis. It's not as simple as it was when we showed the cross-section for blackwood, because that cross-section was parallel to one of the generator axes, so we could simply label it with that generator's sizes. But this cross-section is diagonal through our tuning damage space (from above), so no such luck! (Well, we chose an example like this by design, to show how the method grapples with tricky situations like this. We'll get to it soon enough.)

We can see that 0.0 and 1.0 are the points where another damage graph crosses above the horizontal line otherwise on top, the horizontal line which corresponds to the crease between the damage graphs for $\frac{15}{8}$ and $\frac{18}{5}$, which appears as brown here. So those two target-intervals which cause the range to end at either end are $\frac{16}{5}$ and $\frac95$.

These two target-intervals $\frac{16}{5}$ and $\frac95$ also happen to be the two target-intervals whose crossing is going to yield the point we need to identify the true optimum tuning! Pretend that the horizontal $\frac{15}{8}$ and $\frac{18}{5}$ aren't here, and then just gather all the points in this view where target-intervals cross each other or bounce off the zero-damage floor as normal (that's our second iteration of the method, gathering another round of coinciding-damage points!). Then check the maximum damages at each of those tunings, and choose the tuning with the lowest maximum damage (possibly involving basic tie-breaking with ADSLODs, though we won't need it in this particular case). Doing so, we're going to find the point where $\frac{16}{5}$ and $\frac95$ cross. To be clear, those are the yellow and blue graphs here, and they cross about a third of the way from 0.0 to 1.0.

Spoiler alert: the tuning this crossing identifies is {400.171 1902.011], which, as it should be, is somewhere between our previously tied tunings of {399.809 1901.289] and {400.865 1903.401]. This is indeed our true optimum tuning. But in order to understand how we would determine these exact cents values from this cross-sectioning process, we're going to have to take a little detour. In order to understand how these further iterations of the coinciding-damage method work, we need to understand the concept of blends.
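Before the detour, here's a quick numerical check, in the spirit of the blends we're about to meet, that this tuning really does lie on the segment between the two tied tunings, about a third of the way along (Python/NumPy sketch; the tunings are the ones quoted above):

```python
import numpy as np

g_a = np.array([399.809, 1901.289])    # the two tied minimax tunings
g_b = np.array([400.865, 1903.401])
g_opt = np.array([400.171, 1902.011])  # the claimed true optimum

# solve for the fraction x with g_opt = (1 - x)*g_a + x*g_b using g1 alone,
# then confirm that g2 agrees, i.e. that g_opt lies on the segment
x = (g_opt[0] - g_a[0]) / (g_b[0] - g_a[0])
reconstructed = (1 - x) * g_a + x * g_b
```

Both generators agree to within rounding, so the true optimum is a single blend of the two endpoint tunings, with blending fraction roughly 0.34.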

### Blends: abstract concept

Let's begin to learn blends in the abstract (though you may well try to guess ahead as to the application of these concepts to the RTT problem at hand).

Suppose we want a good way to describe a line segment between two other points $\mathbf{A}$ and $\mathbf{B}$, or in other words, any point between these two points. We could describe this arbitrary point $\mathbf{P}$ as some blend of point $\mathbf{A}$ and point $\mathbf{B}$, like this: $\mathbf{P} = x\mathbf{A} + y\mathbf{B}$, where $x$ and $y$ are our blending variables.

For example, if $x=1$ and $y=0$, then $\mathbf{P} = (1)\mathbf{A} + (0)\mathbf{B} = \mathbf{A}$. And if $x=0$ and $y=1$, then similarly $\mathbf{P} = \mathbf{B}$. Now if both $x=0$ and $y=0$, we might say $\mathbf{P} = \mathbf{O}$, where this point $\mathbf{O}$ is our origin, out there in space somewhere; however, if we want to use this technique in a useful way, we don't actually need to worry about this origin, because it turns out that if we simply require that $x+y=1$, we ensure that we can only reach points along that line segment connecting $\mathbf{A}$ to $\mathbf{B}$! For example, with $x = \frac12$ and $y = \frac12$, we describe the point exactly halfway between $\mathbf{A}$ and $\mathbf{B}$.

We can generalize this idea to higher dimensions; that was the 1-dimensional case. Suppose instead we have three points: $\mathbf{A}$, $\mathbf{B}$, and $\mathbf{C}$. Now the region bounded by these points is no longer a 1D line segment. It's a 2D plane segment. Specifically, it's a triangle, with vertices at our three points. And now our point $\mathbf{P}$ that is somewhere inside this triangle can be described as $\mathbf{P} = x\mathbf{A} + y\mathbf{B} + z\mathbf{C}$, where now $x + y + z = 1$.

And so on to higher and higher dimensions.
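To make blends concrete, here's a tiny sketch in Python (a hypothetical helper with made-up points, not part of any RTT library):

```python
import numpy as np

def blend(points, weights):
    """Convex combination: weights must be nonnegative and sum to 1."""
    weights = np.asarray(weights, dtype=float)
    assert np.all(weights >= 0) and np.isclose(weights.sum(), 1.0)
    return weights @ np.asarray(points, dtype=float)

A = np.array([0.0, 0.0])
B = np.array([4.0, 0.0])
C = np.array([0.0, 4.0])

print(blend([A, B], [0.5, 0.5]))          # midpoint of segment AB: [2. 0.]
print(blend([A, B, C], [1/3, 1/3, 1/3]))  # centroid of triangle ABC
```

The assertion is what keeps $\mathbf{P}$ inside the segment (or triangle); loosen it and the same formula reaches points outside the bounded region.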

### One fewer blending variable; anchor

However, let's pause at the 2-dimensional case to make an important observation: we don't actually need one blending variable for each point being blended. In practice, since our blending variables are required to sum to 1, we only really need one fewer blending variable than we have points. When $x + y + z = 1$, then we know $x$ must equal $1 - y - z$, so that:

\begin{align} x + y + z &= 1 \\ (1 - y - z) + y + z &= 1 \\ 1 - \cancel{y} - \cancel{z} + \cancel{y} + \cancel{z} &= 1 \\ 1 &= 1 \end{align}

But how would we modify our $\mathbf{P} = x\mathbf{A} + y\mathbf{B} + z\mathbf{C}$ formula to account for $x$ being unnecessary?

We note that the choice of $x$ — and therefore $\mathbf{A}$ — was arbitrary; we simply felt that picking the first blending variable and point would be simplest, and provide the convenience of consistency when examples from different dimensions were compared.

Clearly we can't simply drop $x$ with no further modifications; in that case, we'd have $\mathbf{P} = \mathbf{A} + y\mathbf{B} + z\mathbf{C}$, so now every single point is going to essentially have $x = 1$, or in other words, a whole $\mathbf{A}$ mixed in.

Well, the key is to change what $y$ and $z$ blend in. Think of it this way: if we always have a full $\mathbf{A}$ mixed in to our $\mathbf{P}$, then all we need to worry about are the deviations from this $\mathbf{A}$. That is, we change the formula like so:

$\mathbf{P} = \mathbf{A} + y(\mathbf{B} - \mathbf{A}) + z(\mathbf{C} - \mathbf{A})$

So now when $y=1$ and $z=0$, we still find $\mathbf{P} = \mathbf{B}$ as before:

\begin{align} \mathbf{P} &= \mathbf{A} + y(\mathbf{B} - \mathbf{A}) + z(\mathbf{C} - \mathbf{A}) \\ \mathbf{P} &= \mathbf{A} + (1)(\mathbf{B} - \mathbf{A}) + (0)(\mathbf{C} - \mathbf{A}) \\ \mathbf{P} &= \mathbf{A} + \mathbf{B} - \mathbf{A} \\ \mathbf{P} &= \mathbf{B} \end{align}

And similarly $\mathbf{P} = \mathbf{C}$ for $y = 0$ and $z = 1$.
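Here's a quick numerical check of this anchored form, in Python (with made-up points):

```python
import numpy as np

A = np.array([0.0, 0.0])
B = np.array([4.0, 0.0])
C = np.array([0.0, 4.0])

def blend_full(x, y, z):
    # original form; requires x + y + z = 1
    return x*A + y*B + z*C

def blend_anchored(y, z):
    # anchored form; x is implied, as 1 - y - z
    return A + y*(B - A) + z*(C - A)

y, z = 0.2, 0.3
print(np.allclose(blend_full(1 - y - z, y, z), blend_anchored(y, z)))  # True
print(blend_anchored(1, 0))  # recovers B: [4. 0.]
print(blend_anchored(0, 1))  # recovers C: [0. 4.]
```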

For convenience, we could refer to this arbitrarily-chosen point $\mathbf{A}$ as our 'anchor'.

For a demonstration of the relationship between the formula with $\mathbf{A}$ extracted and the original formula, please see #Derivation of extracted anchor.

FYI, the general principle at work with blends here is technically called a "convex combination"; feel free to read more about them now if you're not comfortable with the idea yet.

### Combining insights

Okay, and we're almost ready to tie this back to our RTT application. Realize that back in 1D, we'd have

$\mathbf{P} = \mathbf{A} + y(\mathbf{B} - \mathbf{A})$

So now think back to the 2D damage graph for the second coinciding-damage point set we looked at in the augmented temperament example of the previous section. Every tuning damage graph we've ever looked at has had damage as the vertical axis, and every other axis corresponding to the tuning of a generator… except that graph! Recall that we took a cross-section of the original 3D tuning damage space. So the horizontal axis of that graph is not any one of our generators. As we mentioned, it runs diagonally across the zero-damage floor, since the sizes of the two target-intervals which define it depend on the sizes of both these generators.

What is the axis, then? Well, recalling that we took it as the cross-section between two TiNMADSLOD tunings, one way to put it would be this: it's a blending variable. It's just like our abstract case, but here, the points we're blending between are tunings. In one direction we increase the blend of the first of these two tunings, and in the other direction we increase the blend of the other tuning. Or, in recognition of the "one fewer" insight, we have one blending variable that controls by how much, from 0 (not at all) to 1 (completely), we blend away from our anchor tuning toward the other tuning. This is just like our point $\mathbf{P}$ which is a blend between $\mathbf{A}$ and $\mathbf{B}$ using blending variables $x$ and $y$.

But next, we've got to grow up, and stop using these vague, completely abstract points. We've got to commit to fully applying this blending concept to RTT. Let's start using tuning-related objects now.

### Substituting RTT objects in

So we're looking along this line segment for our true optimum tuning, our equivalent of point $\mathbf{P}$. Let's call it $𝒈$, simply our generator tuning map.

But our points $\mathbf{A}$ and $\mathbf{B}$ are also generator tuning maps. Let's call those $𝒈_0$ and $𝒈_1$, that is, with subscripted indices.

We've zero-indexed these because $𝒈_0$ will be our anchor tuning, and therefore will be to a large extent special and outside the normal situation; in particular, it won't need a corresponding blend variable, and it's better if our blend variables can be one-indexed like how we normally index things.

Right, so while the A, B, C and x, y, z thing worked fine for the abstract case, moving forward it'll be better if we use the same variable letter for the same type of thing, and use subscripted indices to distinguish them. So let's replace the blending variable that corresponded to point $\mathbf{B}$, which was $y$, with $b_1$, since that corresponds with $𝒈_1$, which is what $\mathbf{B}$ became.

So here's what we've got so far:

$𝒈 = 𝒈_{0} + b_1(𝒈_{1} - 𝒈_{0})$

### Generalizing to higher dimensions: the blend map

Now that's the formula for finding a generator tuning map $𝒈$ somewhere in the line segment between two other tied generator tuning maps $𝒈_0$ and $𝒈_1$. And that would work fine for our augmented temperament example. But before we crawl back into the weeds on that example, let's solidify our understanding of the concepts by generalizing them, so we can feel confident we could use them for any advanced tie-breaking situation.

It shouldn't be hard to see that for a 2D triangular case — if we wanted to find a $𝒈$ somewhere in between tied $𝒈_0$, $𝒈_1$, and $𝒈_2$ — we'd use the formula:

$𝒈 = 𝒈_{0} + b_1(𝒈_{1} - 𝒈_{0}) + b_2(𝒈_{2} - 𝒈_{0})$

Each new tied $𝒈_i$ adds a new blending variable, which scales its delta with the anchor tuning $𝒈_0$.

We should recognize now that we might have an arbitrarily large number of tied tunings. This is a perfect job for a vector, that is, we should gather up all our $b_i$ into one object, a vector called $𝒃$.

Well, it is a vector, but in particular it's a row vector, or covector, which we more commonly refer to as a map. It shouldn't be terribly surprising that this is a map, because we said a moment ago that while our tuning damage graph's axes (other than the damage axis) usually correspond to generator (tuning map)s, for our second iteration of this method here, in the cross-section view, those axes correspond to blending variables. So in general, the blend map $𝒃$ takes the place of the generator tuning map $𝒈$ in iterations of the coinciding-damage method beyond the first.

### The deltas matrix

Here's the full setup, for an arbitrary count of ties:

$𝒈 = \begin{array} {c} \text{anchor tuning} \\ 𝒈_{0} \\ \end{array} + \begin{array} {c} \text{blend map} \; 𝒃 \\ \left[ \begin{array} {c} b_1 & b_2 & \cdots & b_{τ-1} \end{array} \right] \\ \end{array} \begin{array} {c} \text{tuning deltas} \\ \left[ \begin{array} {c} 𝒈_{1} - 𝒈_{0} \\ 𝒈_{2} - 𝒈_{0} \\ \vdots \\ 𝒈_{τ-1} - 𝒈_{0} \\ \end{array} \right] \\ \end{array}$

We're using the variable $τ$ here for our count of tied tunings. (It's the Greek letter "tau". We're avoiding 't' because "tuning" and "temperament" both already use that letter.)

We'd like a simpler way to refer to the big matrix on the right. As we've noted above, it's a matrix of deltas. In particular, it's a matrix of deltas between the anchor tied tuning and the other tied tunings.

We don't typically see differences between generator tuning maps in RTT. These map differences are cousins to our retuning maps $𝒓$, we suppose, insofar as they're the difference between two tuning maps of some kind, but the comparison ends there, because:

• in the case of a retuning map, one of the maps is just and the other tempered, while in this case both are tempered, and
• in the case of a retuning map, both are prime tuning maps, while in this case both are generator tuning maps.

We can make use of the Greek letter delta and its association with differences. So let's use $𝜹_i$ as a substitute for $𝒈_i - 𝒈_{0}$. We may call it a delta of generator tuning maps. The delta $𝜹_i$ takes the index $i$ of whichever tied tuning is the one the anchor tuning is subtracted from. (More payoff for our zero-indexing of those; our deltas here, like our blend map entries, will therefore be one-indexed as per normal.)

Substituting back into our formula, then, we find:

$𝒈 = \begin{array} {c} \text{anchor tuning} \\ 𝒈_{0} \\ \end{array} + \begin{array} {c} \text{blend map} \; 𝒃 \\ \left[ \begin{array} {c} b_1 & b_2 & \cdots & b_{τ-1} \end{array} \right] \\ \end{array} \begin{array} {c} \text{tuning deltas} \\ \left[ \begin{array} {c} 𝜹_1 \\ 𝜹_2 \\ \vdots \\ 𝜹_{τ-1} \\ \end{array} \right] \\ \end{array}$

But we can do a bit better. Let's find a variable that could refer to this whole matrix, whose rows are each generator tuning map deltas. The natural thing seems to be to use the capital version of the Greek letter delta, which is $\textit{Δ}$. However, this letter is so strongly associated with use as an operator, for representing the difference in values of the thing just to its right, that probably this isn't the best idea. How about instead we just use the Latin letter $D$, for "delta". This is our (generator tuning map) deltas matrix.

This lets us simplify the formula down to this:

$𝒈 = 𝒈_{0} + 𝒃D$
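As a quick concrete sketch of this formula in Python (the tied tunings here are made up for illustration):

```python
import numpy as np

# Three hypothetical tied generator tuning maps, in cents; g0 is the anchor.
g0 = np.array([1200.0, 700.0])
g1 = np.array([1198.0, 698.0])
g2 = np.array([1202.0, 702.0])

# Deltas matrix D: one row per tied tuning past the anchor.
D = np.vstack([g1 - g0, g2 - g0])   # shape (τ-1, r) = (2, 2)

# Blend map 𝒃: one entry per delta.
b = np.array([0.5, 0.25])

g = g0 + b @ D
print(g)  # [1199.5  699.5]
```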

### How to identify tunings

This formula assumes we already have all of our tied tunings $𝒈_0$ through $𝒈_{τ - 1}$ from the previous coinciding-damage point set, i.e. the previous iteration of the algorithm. And so we already know the $𝒈_0$ and $D$ parts of this equation. This equation, then, gives us a way to find a tuning $𝒈$ given some blend map $𝒃$. But what we really want to do is identify not just any tunings that are such blends, but particular tunings that are such blends: we want to find the ones that are part of the next iteration's coinciding-damage point set, the ones where damage graphs intersect in our cross-sectional diagram.

We can accomplish this by solving for each $𝒃$ with respect to a given constraint matrix $K$. This is just as we solved for each $𝒈$ in the first iteration with respect to each $K$; again, $𝒃$ is filling the role of $𝒈$ here now.

So we've got our $𝒕\mathrm{T}WK = 𝒋\mathrm{T}WK$ setup. Remember that in the first iteration, $K$ had $r$ columns, one for each generator to solve for, since each column corresponds to an unchanged-interval of the tuning. In other words, one column of $K$ for each entry of $𝒈$. We're still going to use constraint matrices to identify tunings here, but now they're going to have $τ - 1$ columns, one for each entry in the blend map, which has one entry for each delta of a tied tuning past the anchor tied tuning (a delta with the anchor tied tuning). We can still use the symbol $K$ for these constraint matrices, even though it's a somewhat different sort of constraint, with a different shape $(k, τ-1)$.

Next, let's just unpack $𝒕$ to $𝒈M$:

$𝒈M\mathrm{T}WK = 𝒋\mathrm{T}WK$

And substitute $𝒈_{0} + 𝒃D$ in for $𝒈$:

$(𝒈_{0} + 𝒃D)M\mathrm{T}WK = 𝒋\mathrm{T}WK$

Distribute:

$𝒈_{0}M\mathrm{T}WK + 𝒃DM\mathrm{T}WK = 𝒋\mathrm{T}WK$

Work toward isolating $𝒃$.

$𝒃DM\mathrm{T}WK = 𝒋\mathrm{T}WK - 𝒈_{0}M\mathrm{T}WK$

Group on the right-hand side:

$𝒃DM\mathrm{T}WK = (𝒋 - 𝒈_{0}M)\mathrm{T}WK$

Replace $𝒈_{0}M$ with $𝒕_0$, the (prime) tuning map corresponding to $𝒈_{0}$:

$𝒃DM\mathrm{T}WK = (𝒋 - 𝒕_0)\mathrm{T}WK$

We normally see the just tuning map subtracted from the tempered tuning map, not the other way around as we have here. So let's just negate everything. This is no big deal, since $𝒃$ is an unknown variable after all, so we can essentially think of this $𝒃$ as a new $𝒃$ equal to the negation of our old $𝒃$ (we'll just need to remember this sign flip later, when we use $𝒃$ to recover $𝒈$).

$𝒃DM\mathrm{T}WK = (𝒕_0 - 𝒋)\mathrm{T}WK$

So that's just a (prime) retuning map on the right:

$𝒃DM\mathrm{T}WK = 𝒓_{0}\mathrm{T}WK$

We've now reached the point where Keenan's original version of this algorithm would solve directly for $𝒃$, analogous to how it solves directly (and approximately) for $𝒈$ in the first iteration. But when we use the matrix inverse technique — where instead of solving directly (and approximately) for a generator tuning map $𝒈$ we instead solve exactly for a generator embedding $G$ and can then later obtain $𝒈$ as $𝒈 = 𝒋G$ — then here we must be solving exactly for some matrix which we could call $B$, following the analogy $𝒈 : G :: 𝒃 : B$. (Not to be confused with the subscripted $B_s$ that we use for basis matrices; these two matrices will probably never meet, though). This will be a $(d, τ-1)$-shaped matrix, which we could call the blend matrix.

And this is why we noted the thing earlier about how constraint matrices are about identifying tunings, not optimizing them. If you come across this setup, and see that somehow, for some reason, $𝒓_0$ has replaced $𝒋$, you might want to try to answer the question: why are we trying to optimize things relative to some arbitrary retuning map, now, instead of JI? The problem with that is: it's the wrong question. It's not so much that $𝒓_0$ is a goal or even a central player in this situation. It just sort of works out this way.

It turns out that while $𝒋$ is what relates $G$ to $𝒈$, it's $𝒓_0$ which relates this $B$ to $𝒃$. This shouldn't be hugely surprising, since $𝒓_0$ is sort of "filling the role" of $𝒋$ there on the right-hand side, insofar as it finds itself in the same position as $𝒋$ did in the simpler case.

So we get:

$𝒓_{0}BDM\mathrm{T}WK = 𝒓_{0}\mathrm{T}WK$

And cancel out the $𝒓_0$ on both sides:

\begin{align} \cancel{𝒓_{0}}BDM\mathrm{T}WK &= \cancel{𝒓_{0}}\mathrm{T}WK \\ BDM\mathrm{T}WK &= \mathrm{T}WK \end{align}

Then we do our inverse. This is the exact analog of $G = \mathrm{T}WK(M\mathrm{T}WK)^{-1}$:

$B = \mathrm{T}WK(DM\mathrm{T}WK)^{-1}$

And once we have that, adapting our earlier formula for $𝒈$ from $𝒃$ to give us $𝒈$ from $B$ instead (minding that, to undo the cosmetic negation of $𝒃$ above, the $𝒓_0$ we apply here is $𝒋 - 𝒕_0$):

$𝒈 = 𝒈_{0} + 𝒓_{0}BD$

So all in one formula, substituting our formula for $B$ into that, we have:

$𝒈 = 𝒈_{0} + 𝒓_{0}\mathrm{T}WK(DM\mathrm{T}WK)^{-1}D$

And that's how you find the generators for a tuning corresponding to a coinciding-damage point described by $K$, at whichever point in a tied minimax tuning range it lies.

### Computing damage

We could compute the damage list from any $𝒈$, as normal: $𝐝 = |𝐞|W = |𝒓\mathrm{T}|W = |(𝒕 - 𝒋)\mathrm{T}|W = |(𝒈M - 𝒋)\mathrm{T}|W$. But actually we don't have to recover $𝒈$ from $B$ in order to compute damage. There's a more expedient way to compute it. If:

$𝒃DM\mathrm{T}WK = 𝒓_{0}\mathrm{T}WK$

then pulling away the constraint, we revert from an equality to an approximation:

$𝒃DM\mathrm{T}W ≈ 𝒓_{0}\mathrm{T}W$

And the analogous thing we minimize to make this approximation close (review the basic algebraic setup if need be) would be:

$|(𝒃DM - 𝒓_{0})\mathrm{T}|W$

So the damage caused by a blend map $𝒃$ is this quantity, $|(𝒃DM - 𝒓_{0})\mathrm{T}|W$. In other words, we can find it using the same formula as we normally use, $|(𝒈M - 𝒋)\mathrm{T}|W$, but using $𝒃$ instead of $𝒈$, $DM$ instead of $M$, and $𝒓_0$ instead of $𝒋$. Which is just what we end up with upon substituting $𝒈_0 + 𝒃D$ in for $𝒈$:

\begin{align} &|((𝒈_{0} + 𝒃D)M - 𝒋)\mathrm{T}|W \\ &|(𝒈_{0}M + 𝒃DM - 𝒋)\mathrm{T}|W \\ &|(𝒕_{0} + 𝒃DM - 𝒋)\mathrm{T}|W \\ &|(𝒃DM - 𝒓_{0})\mathrm{T}|W \end{align}
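Here's a numerical check of this equivalence, in Python. The setup is made up (a meantone-style mapping with the primes themselves as targets and unit weights), and we take $𝒓_0 = 𝒋 - 𝒕_0$, the sign convention under which $𝒈 = 𝒈_0 + 𝒃D$ and this damage formula hold simultaneously:

```python
import numpy as np

# Hypothetical setup: meantone-style mapping, primes as targets, unit weights.
M = np.array([[1.0, 0.0, -4.0],
              [0.0, 1.0,  4.0]])
T = np.eye(3)   # target the primes themselves, for simplicity
W = np.eye(3)
j = 1200 * np.log2([2.0, 3.0, 5.0])   # just tuning map

g0 = np.array([1200.0, 1900.0])   # made-up anchor tuning
D = np.array([[1.0, 2.0]])        # a single delta row
b = np.array([0.5])               # blend map

g = g0 + b @ D
t0 = g0 @ M
r0 = j - t0   # sign convention such that g = g0 + bD holds

damage_direct = np.abs((g @ M - j) @ T) @ W
damage_blend = np.abs((b @ D @ M - r0) @ T) @ W
print(np.allclose(damage_direct, damage_blend))  # True
```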

### Blends of blends

Okay then. So if we find the blend for each point of our next iteration's coinciding-damage point set, and use that to find the damage for that blend, then hopefully this time we find a unique minimax as far down the lists as we can validly compare.

And if not, we rinse and repeat. Which is to say, where here our generators are expressed in terms of a blend of other tunings, after another iteration of continued searching, our generators would be expressed as a blend of other tunings, where each blend was itself a blend of other tunings. And so on.

### Apply formula to example

To solidify our understanding, let's finally return to that real-life augmented example, and apply the concepts we learned to it!

At the point our basic tie-breaking failed, we found two tied tunings. Let the first one be our anchor tuning, $𝒈_0$, and the second be our $𝒈_1$:

$𝒈_0 = \left[ \begin{array} {r} 399.809 & 1901.289 \end{array} \right] \\ 𝒈_1 = \left[ \begin{array} {r} 400.865 & 1903.401 \end{array} \right] \\$

Let me drop the cross-section diagram here again, for conveniently close reference, and with some smarter stuff on top of it this time:

So our prediction looks like it should be about $b_1 = 0.35$ in order to nail that point we identified earlier where the red and olive lines cross at the triangle underneath the flat tie line across the top.

And here's the formula again for a tuning here from $K$, again, for conveniently close reference:

$B = \mathrm{T}WK(DM\mathrm{T}WK)^{-1}$

Let's get the easy stuff out of the way first. We know $M = \left[ \begin{array} {r} 3 & 0 & 7 \\ 0 & 1 & 0 \\ \end{array} \right]$. As for $\mathrm{T}$, in the beginning we gave it as $\{ \frac32, \frac52, \frac53, \frac83, \frac95, \frac{16}{5}, \frac{15}{8}, \frac{18}{5} \}$. And $W$ will just be the diagonal matrix of their log-product complexities, since we went with minimax-C tuning here.

Okay, how about $K$ next then. Since $τ = 2$ here — we have two tied tunings — we know $K$ will have only $τ - 1 = 1$ column. And $k = 8$, so that's its row count. In particular, we're looking for the tuning where $\frac95$ and $\frac{16}{5}$ have exactly equal errors, i.e. they even have the same sign, not opposite signs. So to get them to cancel out, we use a -1 as the nonzero entry of one of the two intervals, and conventionally we use 1 for the first one. So with $\frac95$ at index 5 of $\mathrm{T}$ and $\frac{16}{5}$ at index 6, we find $K$ = [0 0 0 0 1 -1 0 0].

Now for $D$. We know it's a $(τ - 1, 𝑟)$-shaped matrix: one row for each tied tuning past the first, and each row is the delta between generator tuning maps, so is $r$ long like any generator tuning map. In our case we have $τ = 2$ and $r = 2$, so it's a $(1,2)$-shaped matrix. That one row is $𝒈_1 - 𝒈_0$. So it's {400.865 1903.401] - {399.809 1901.289] = {1.056 2.112].

And that's everything we need to solve!

Work that out and we get:

$B = \left[ \begin{array} {r} 0.497891 \\ -0.216260 \\ -0.016343 \\ \end{array} \right]$

(We're showing a little more precision here than usual.) So we can recover $𝒈$ now as:

$𝒈 = 𝒈_0 + 𝒓_{0}BD$

We haven't worked out $𝒓_0$ yet. Remember that we negated $𝒃$ along the way, so the map we actually want to apply here is $𝒋 - 𝒕_0$, where $𝒋$ = {1200.000 1901.955 2786.314] and $𝒕_0 = 𝒈_{0}M$ = {1199.43 1901.29 2798.66]; so $𝒓_0$ = {0.573 0.666 -12.349].

Plugging everything in at once could be unwieldy. So let's just do the $𝒓_{0}B$ part to find $𝒃$. We might be curious about that anyway… how closely does it match our prediction of about 0.35? Well, it's $(0.573)(0.497891) + (0.666)(-0.216260) + (-12.349)(-0.016343) = 0.343$. Not bad!

So now plug $𝒃$ into $𝒈 = 𝒈_0 + 𝒃D$ and we find $𝒈$ = {399.809 1901.289] + 0.343 × {1.056 2.112] = {400.171 1902.011]. And that's what we were looking for!
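The whole computation above can be reproduced numerically. Here's a sketch in Python with NumPy (hypothetical code, not the actual RTT library; we apply $𝒓_0 = 𝒋 - 𝒕_0$, the pre-negation form, so that the recovery $𝒈 = 𝒈_0 + 𝒃D$ comes out with the correct sign, and small discrepancies in the last decimals come from using the rounded tied tunings):

```python
import numpy as np

M = np.array([[3, 0, 7],
              [0, 1, 0]])   # augmented temperament's mapping

# Target-intervals as prime-count vectors, in the order given above.
ratios = [(3,2), (5,2), (5,3), (8,3), (9,5), (16,5), (15,8), (18,5)]
T = np.array([
    [-1,  1,  0],   # 3/2
    [-1,  0,  1],   # 5/2
    [ 0, -1,  1],   # 5/3
    [ 3, -1,  0],   # 8/3
    [ 0,  2, -1],   # 9/5
    [ 4,  0, -1],   # 16/5
    [-3,  1,  1],   # 15/8
    [ 1,  2, -1],   # 18/5
]).T                # shape (3, 8): one column per target-interval

# minimax-C: weights are the log-product complexities log2(n*d)
W = np.diag([np.log2(n * d) for n, d in ratios])

j = 1200 * np.log2([2.0, 3.0, 5.0])   # just tuning map
g0 = np.array([399.809, 1901.289])    # anchor tied tuning
g1 = np.array([400.865, 1903.401])    # other tied tuning
D = (g1 - g0).reshape(1, 2)           # deltas matrix, shape (τ-1, r)

# K: a single column forcing equal, same-signed errors on 9/5 and 16/5
K = np.zeros((8, 1))
K[4, 0], K[5, 0] = 1, -1

t0 = g0 @ M
r0 = j - t0   # pre-negation form, so that g = g0 + bD holds directly

B = T @ W @ K @ np.linalg.inv(D @ M @ T @ W @ K)   # blend matrix, shape (3, 1)
b = r0 @ B                                         # blend map
g = g0 + b @ D                                     # true optimum generator tuning map
print(b)   # ≈ [0.343]
print(g)   # ≈ [400.171 1902.011] (to within rounding)
```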

The ADSLOD here, by the way, is [92.557 92.557 81.117 81.117 57.928 ... ] So it's a tie of 81.117 ¢(C) for the second-most minimax damage to $\frac95$ and $\frac{16}{5}$. No other tuning can beat this 81.117 number, even just three entries down the list. And so we're done.

### Exact solutions with advanced tie-breaking

But what about recovering $G$? Finding exact tunings was, you know, the whole point of this article; we wouldn't want to give up on that just because we had to use advanced tie-breaking, would we?

So we've been looking for $𝒈$'s which are blends of other $𝒈$'s. But we need to look for $G$'s that are blends of other $G$'s! Doing that directly would explode the dimensionality of the space we're searching, by the rank $r$ times the length of the blend vector $𝒃$, that is, $𝑟×(τ - 1)$. And would it even be meaningful to independently search the powers of the primes that comprise each entry of a $𝒈$? Probably not. The compressed information in $𝒈$ is all that really matters for defining the constrained search region. So what if we still search by $𝒈$, but then apply the blend we find for each $K$ to $G$'s instead of $𝒈$'s?

Let's test on an example.

$G_0 = \left[ \begin{array} {r} 1 & 0 \\ 0 & 0 \\ 0 & \frac14 \\ \end{array} \right] \quad 𝒈_0 = \left[ \begin{array} {r} 1200.000 & 696.578 \\ \end{array} \right] \\[20pt] G_1 = \left[ \begin{array} {r} \frac73 & \frac13 \\ -\frac43 & -\frac13 \\ \frac13 & \frac13 \\ \end{array} \right] \quad 𝒈_1 = \left[ \begin{array} {r} 1192.831 & 694.786 \\ \end{array} \right]$

So $𝜹_1 = 𝒈_1 - 𝒈_0$ = {-7.169 -1.792]. Suppose we get $𝒃$ = [0.5]. We know that $𝒃D$ = {-3.584 -0.896]. So $𝒈$ should be $𝒈_0 + 𝒃D$ = {1200 696.578] + {-3.584 -0.896] = {1196.416 695.682].

But do we find the same tuning with $G = G_0 + b_1(G_1 - G_0) + b_2(G_2 - G_0) + \ldots + b_{τ-1}(G_{τ-1} - G_0)$? That's the key question. (In this case, we have to bust the matrix multiplication up. That is, there's no way to replace the rows of D with entire matrices. Cumbersome, but reality.)

In this case we only have the one delta, $G_1 - G_0 =$

$\left[ \begin{array} {r} \frac73 & \frac13 \\ -\frac43 & -\frac13 \\ \frac13 & \frac13 \\ \end{array} \right] - \left[ \begin{array} {r} 1 & 0 \\ 0 & 0 \\ 0 & \frac14 \\ \end{array} \right] = \left[ \begin{array} {r} \frac43 & \frac13 \\ -\frac43 & -\frac13 \\ \frac13 & \frac{1}{12} \\ \end{array} \right]$

And so $b_1(G_1 - G_0)$, or half of that, is:

$\left[ \begin{array} {r} \frac23 & \frac16 \\ -\frac23 & -\frac16 \\ \frac16 & \frac{1}{24} \\ \end{array} \right]$

And add that to $G_0$ then:

$\left[ \begin{array} {r} 1 & 0 \\ 0 & 0 \\ 0 & \frac14 \\ \end{array} \right] + \left[ \begin{array} {r} \frac23 & \frac16 \\ -\frac23 & -\frac16 \\ \frac16 & \frac{1}{24} \\ \end{array} \right] = \left[ \begin{array} {r} \frac53 & \frac16 \\ -\frac23 & -\frac16 \\ \frac16 & \frac{7}{24} \\ \end{array} \right]$

So $\textbf{g}_1$ here, the first column, [$\frac53$ $-\frac23$ $\frac16$⟩, is $2^{\frac53}3^{-\frac23}5^{\frac16} \approx 1.996$. So $g_1$ = 1196.416 ¢.

And $\textbf{g}_2$ here, the second column, [$\frac16$ $-\frac16$ $\frac{7}{24}$⟩, is $2^{\frac16}3^{-\frac16}5^{\frac{7}{24}} \approx 1.495$. So $g_2$ = 695.682 ¢.

Perfect! We wanted {1196.416 695.682] and we got it.
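The reason this works is linearity: since $𝒈 = 𝒋G$, blending the embeddings and then converting gives the same thing as blending the tuning maps directly. A quick check in Python:

```python
import numpy as np

j = 1200 * np.log2([2.0, 3.0, 5.0])   # just tuning map; g = jG

G0 = np.array([[1, 0], [0, 0], [0, 1/4]])              # anchor generator embedding
G1 = np.array([[7/3, 1/3], [-4/3, -1/3], [1/3, 1/3]])  # other tied embedding
b1 = 0.5

g0, g1 = j @ G0, j @ G1
g_from_maps = g0 + b1 * (g1 - g0)   # blend the tuning maps...
G = G0 + b1 * (G1 - G0)             # ...or blend the embeddings,
g_from_embedding = j @ G            # then convert

print(g_from_maps)       # ≈ [1196.416  695.682]
print(g_from_embedding)  # same, since j(G0 + b1(G1 - G0)) = jG0 + b1(jG1 - jG0)
```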

Now maybe this doesn't fully test the system, since we only convexly combined two tunings, but this is probably sound for general use. At least, the test suite of the RTT Library in Wolfram Language included several examples that would have failed upon switching to this way of computing true optimum tunings, if this were a problem; but they did not.

### Misc. issues: polytope

Keenan Pepper named the file where he wrote the coinciding-damage method "tiptop.py", and along with that coined the tuning scheme name "TIP-TOP", for "Tiebreaker-in-polytope TOP". So what's up with this "polytope"?

Polytopes — they're nothing too scary, actually. The name seems imposing, but it's really just a name for a shape which is as generic as possible:

• The "poly-" part means generic to how many vertices/edges/faces/etc. the shape has. This prefix generalizes prefixes you may be familiar with like "tetra-", "penta-", "hexa-", etc., which are used for shapes where we do know exactly how many of the given type of feature a shape has, like a "penta-gon" (5 edges) or a "tetra-hedron" (4 faces).
• The "-tope" part means generic to how many dimensions these features occupy. This suffix generalizes suffixes you may be familiar with like "-gon", "-hedron", "-choron"/"-cell"/"-hedroid", etc., which are used for shapes where we do know exactly how many dimensions a shape occupies, again, like a "hexa-gon" (2 dimensions) or an "octa-hedron" (3 dimensions).

We note that the polytope Keenan refers to is not the full coinciding-damage point set.

It's not even the faceted bowl we see formed from the aerial view by the combination of all the individual target-intervals' tuning damage graph hyper-V's; that's an in-between sized point set we would call the "maximum damage graph vertices".

No, Keenan's polytope refers to an even smaller set of points. It contains either only a single point for the case of an immediately unique optimum, or more than one point which together bound the region (such as a line segment, triangle, etc.) within which the true optimum may be found in the next iteration of the algorithm, using blends. This is what we could call the minimax polytope.

### Misc. issues: major modification to Keenan's original method

In the beginning of our discussion of the coinciding-damage method, we mentioned that Douglas Blumeyer had modified Keenan Pepper's algorithm in a way that "simplifies it conceptually and allows it to identify optimum tunings more quickly". Here is an explanation of that change.

Keenan's original design was to only include the zero-damage points once tuning damage space had been reduced to 2D. This design does still eventually find the same true optimum tunings, but the problem is that it requires advanced tie-breaking to accomplish this, where basic tie-breaking could have worked had those points been included. Consider the case of {2/1,3/1,5/1,6/5} minimax-U blackwood we gave as our basic tie-breaking example here: with Keenan's design, the intersection between $\frac51$ and $\frac65$'s creases would not have been included; it would have been as if we were doing the primes minimax-U blackwood example we look at in the following section, where basic tie-breaking must fail.

Since advanced tie-breaking requires an entire new iteration of the algorithm, gathering a whole new coinciding-damage point set, it is more computationally expensive than handling a tie-break with the basic technique by simply including some more points in the current iteration.

Also, because always including zero-damage points is a conceptually purer way of presenting the concepts (it doesn't require an entire 7-page separate section to this article explaining the single-free-generator 2D tuning damage space exception, which as you might guess, I did write before having the necessary insight to simplify things), it is preferable for human understanding as well.

I also considered adding the unison to the target-interval set in order to capture the zero-damage points, but that was terribly inefficient and confusing. I also tried generalizing Keenan's code in a different way. Keenan's code includes $K$ which make a target-interval itself unchanged, but it only does that when the $K$ have only one column (meaning we're searching 2D tuning damage space). What if we think of it like this: our point set always includes $K$ which have one column for enforcing a target-interval itself, among any other unchanged-intervals, and we do this not only when $K$ is one column. That turned out to be an improvement, but it still resulted in redundant points, because we don't need direction permutations for the non-unison target-intervals when their errors are 0 either (e.g. if $\frac21$ is pure, and $\frac61$ is pure, that implies that $\frac31$ is pure; but if $\frac21$ is pure, and $\frac32$ is pure, that just as well implies that $\frac31$ is pure).

### Misc. issues: why we abbreviate

For some readers, it may seem pointless, or wasteful, to abbreviate DSLODs like this. Especially in those cases where you're tantalizingly close… you can see that you could break a tie, if only you were allowed to include one more entry in each ADSLOD. Or perhaps you could at least reduce the count of tunings that are tied.

Well, here's another way to think about the reason for this abbreviation, which may help you respect its purpose. If you falsely eliminate tunings that rightly should still have been tied at this stage, then you will be eliminating tunings that should have been helping to define the boundary of the region to check in the next iteration of the method.

So, if you erroneously reduced the search space down to two tuning points defining a line segment region when you should have been searching an entire triangle-shaped region, you might miss the true optimum somewhere in the middle of the area of that triangle, not merely along one of its three sides.

### Misc. issues: importance of deduplication

Note that a very good reason to perform the type of deduplication within the target-interval set discussed earlier in this article (here) is to prevent unnecessary failing of the basic tie-breaking mechanism. Suppose we have a case like our basic tie-breakable blackwood example, where two damage graphs' crease is parallel to the floor and forms the minimum of the max damage graph, but we can still look one more position down the ADSLODs to tie-break at some point in the middle of this line segment range which minimizes the third damage position. Well, now imagine that instead we clogged our ADSLODs with a duplicate target-interval, i.e. one whose damage graph is identical to one or the other of these two forming this tied minimax crease. Now we unnecessarily find ourselves with three coinciding damages up top instead of just two, and will be forced to dip into advanced tie-breaking. But if we had only de-duped the target-intervals which map to the same mapped interval up front, we wouldn't have had to do that.
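Checking for such duplicates is cheap: two target-intervals are duplicates, for tuning purposes, exactly when they map to the same mapped interval. A sketch in Python, using augmented, which tempers out 128/125, so that for example $\frac{16}{5}$ and $\frac{25}{8}$ collide (these particular intervals are our own hypothetical choices for illustration):

```python
import numpy as np

M = np.array([[3, 0, 7],
              [0, 1, 0]])   # augmented's mapping; its kernel includes 128/125

# Prime-count vectors for a few target-intervals.
intervals = {
    "16/5": np.array([ 4, 0, -1]),
    "25/8": np.array([-3, 0,  2]),   # (16/5) / (25/8) = 128/125, the tempered comma
    "9/5":  np.array([ 0, 2, -1]),
}

# Any two target-intervals sharing a mapped interval are duplicates,
# and one of each such pair should be dropped before tie-breaking.
mapped = {name: tuple(M @ v) for name, v in intervals.items()}
print(mapped["16/5"] == mapped["25/8"])  # True: same mapped interval
print(mapped["16/5"] == mapped["9/5"])   # False
```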

### Misc. issues: held-intervals with respect to advanced tie-breaking

In this section we discuss how to handle held-intervals with the coinciding-damage method. We note here that the extra steps involved — allocating columns of the constraint matrices to the held-intervals — are only necessary in the first iteration of the method.

Think of it this way: whichever generators were locked into the appropriate proportion in order to achieve the held-intervals at the top-level, main tuning damage space, those will remain locked in the blends of tunings at lower levels. In other words, whatever cross-section we take to capture the minimax polytope will already be within the held-interval subregion of tuning damage space.

### Misc. issues: a major flaw with the method

Keenan himself considers his algorithm to be pretty dumb. (It seems sort of fantastically powerful and genius to me, overall, but I can sort of see what he means a bit, now).

It has one core problem that makes it potentially quite inefficient, and also somewhat conceptually misleading. That's what we'll discuss here.

When we introduced the concept of blends in an earlier section, we noted how any point within the bounded region can be specified as a blend where the blending variables are positive and sum to 1. That's the key thing that keeps us within the region; if we could specify huge and/or negative blending variables, then the exercise would be moot, and we could specify points anywhere. Well, if we've got some 1D line segment embedded in a 2D plane, then without the sum-to-1 rule, we can use $\mathbf{A}$ and $\mathbf{B}$ to describe any point within that 2D plane anyway.

So, it turns out that this is essentially the same as how it works in advanced tie-breaking. When we take a cross-section of tuning damage space which contains the line segment of our tied basic minimax region, and gather a new coinciding-damage point set in terms of blending variables, we don't know if a point is going to fall within the line segment we care about or not until after we've already computed its blend variables. (Remember, the blend variable for the anchor tuning is always assumed to be whatever is required to get the sum exactly to 1, so all we care about is the other variables summing to something between 0 and 1.)

For example, consider the diagram we showed in this section. Note how the damage graphs for $\frac52$ and $\frac{16}{5}$ intersect (within this cross-section) but just outside the range where $0 \lt b_1 \lt 1$. Well, when we gather our coinciding-damage points, convert their ReDPOTICs and STICs to constraint matrices, and convert those to tunings, it's not until then that we'll realize this tuning is outside of bounds. We could filter it out at this point — it will never qualify as the true optimum tuning, because if you look straight up in the diagram, you can see that the damage to $\frac95$ is greater than the basic minimax could potentially be. But we already wasted a lot of resources finding it.

Essentially we search the whole cross-section, not just the minimax polytope we've identified.

And there's no particularly obvious way to rework this method to only find coinciding-damage points for $K$ where every entry of $𝒃$ is non-negative and $\llzigzag 𝒃 \rrzigzag_1 = 1$. To improve this part of the algorithm would require basically rethinking it from the inside out.

### Misc. issues: derivation of extracted anchor

$\mathbf{P} = x\mathbf{A} + y\mathbf{B} + z\mathbf{C}$

Add $\mathbf{A} - \mathbf{A}$ to this, which changes nothing.

$\mathbf{P} = \mathbf{A} - \mathbf{A} + x\mathbf{A} + y\mathbf{B} + z\mathbf{C}$

Recognize a coefficient of $1$ on the subtracted $\mathbf{A}$.

$\mathbf{P} = \mathbf{A} - 1\mathbf{A} + x\mathbf{A} + y\mathbf{B} + z\mathbf{C}$

We know $x + y + z = 1$, so we can substitute that in for this $1$.

$\mathbf{P} = \mathbf{A} - (x + y + z)\mathbf{A} + x\mathbf{A} + y\mathbf{B} + z\mathbf{C}$

Distribute.

$\mathbf{P} = \mathbf{A} - x\mathbf{A} - y\mathbf{A} - z\mathbf{A} + x\mathbf{A} + y\mathbf{B} + z\mathbf{C}$

Regroup by $x$, $y$, and $z$.

$\mathbf{P} = \mathbf{A} + x(\mathbf{A} - \mathbf{A}) + y(\mathbf{B} - \mathbf{A}) + z(\mathbf{C} - \mathbf{A})$

Cancel out these $\mathbf{A}$'s and thus $x$.

$\mathbf{P} = \mathbf{A} + x(\cancel{\mathbf{A}} - \cancel{\mathbf{A}}) + y(\mathbf{B} - \mathbf{A}) + z(\mathbf{C} - \mathbf{A})$

$\mathbf{P} = \mathbf{A} + x(0) + y(\mathbf{B} - \mathbf{A}) + z(\mathbf{C} - \mathbf{A})$

And so our final formula:

$\mathbf{P} = \mathbf{A} + y(\mathbf{B} - \mathbf{A}) + z(\mathbf{C} - \mathbf{A})$
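This identity is easy to sanity-check numerically. Here's a quick Python sketch with arbitrary made-up points and blending variables summing to 1:

```python
import numpy as np

# Numeric sanity check of P = A + y(B - A) + z(C - A): for any blending
# variables with x + y + z = 1, it agrees with P = xA + yB + zC.
# The points here are arbitrary made-up 2D tunings, just for illustration.
A = np.array([240.000, 2786.314])
B = np.array([240.000, 2795.336])
C = np.array([238.500, 2801.000])

x, y, z = 0.2, 0.5, 0.3                     # any blend with x + y + z = 1
P_blend    = x * A + y * B + z * C          # original three-variable form
P_anchored = A + y * (B - A) + z * (C - A)  # extracted-anchor form

print(np.allclose(P_blend, P_anchored))  # → True
```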

### Misc. issues: equivalence to power-limit approach

In Keenan's original Yahoo groups post, he claims that his method (the core idea of which is explained in a modified form here as the coinciding-damage method) is equivalent to the power-limit method for finding true optimums for minimax tunings: "This is equivalent to taking the limit of the Lp norm minimizer as p tends to infinity (exercise for the reader!)". Douglas Blumeyer has attempted this exercise, but failed. He pestered Keenan himself for the solution, but it had been so long (about 10 years) since Keenan wrote this that he could no longer reproduce it. So at this time, this remains an open problem: an exercise for you readers, now.

### Misc. issues: normalization required to handle exact tunings

One complication arose in the advanced tie-breaking part of the code (in Dave Keenan & Douglas Blumeyer's RTT Library in Wolfram Language, which was adapted from Keenan Pepper's original Python code) upon the switch from Keenan's original technique of computing approximate generator tuning maps $𝒈$ by solving linear systems of equations to Dave Keenan's technique of computing exact generator embeddings $G$ by doing matrix inverses. In some cases where Keenan's technique had worked fine, Dave's method would fall on its face. Here's what happened.

Essentially, the basic tie-breaking step would come back with a set of tied tunings such as this:

• $𝒈_0$ = {240.000 2786.314]
• $𝒈_1$ = {240.000 2795.336]
• $𝒈_2$ = {240.000 2804.359]

The problem is that when we have three points defining a convex hull, it's supposed to be a triangle! This is a degenerate case where all three points fall along the same line. Not only is this wasteful, but it also screws stuff up, because now there's essentially more than one way to blend $\mathbf{A}$, $\mathbf{B}$, and $\mathbf{C}$ together to get $\mathbf{P}$, because $\mathbf{B}$ and $\mathbf{C}$ pull us away from $\mathbf{A}$ in the exact same direction.

Note that the only thing that matters is the direction that the tied tunings are from each other, not the distance; the values in the blend map $𝒃$ are continuous and can be anything they need to be to reach a desired point. In other words, all that matters are the proportions of the entries of the deltas to each other. In still other words, different tunings on the same line are redundant.

It happens to be the case here that the $𝜹_i$ are not only on the same line, but simple multiples of each other:

• $𝜹_1 = 𝒈_1 - 𝒈_0$ = {240.000 2795.336] - {240.000 2786.314] = {0 9.0225]
• $𝜹_2 = 𝒈_2 - 𝒈_0$ = {240.000 2804.359] - {240.000 2786.314] = {0 18.045]

which is to say that $𝒈_1$ happens to be smack-dab halfway between $𝒈_0$ and $𝒈_2$. But that's just a distraction; that's not important. It could have been $ϕ$ of the way between them instead and the problem would have been the same.

Remember that these $𝜹_i$ get combined into one big $D$ matrix. In this case, that's

$\left[ \begin{array} {r} 0 & 9.0225 \\ 0 & 18.045 \\ \end{array} \right]$

Using this for $D$, however, causes every $B$ we try to find via

$B = \mathrm{T}WK(DM\mathrm{T}WK)^{-1}$

to fail, because the $DM\mathrm{T}WK$ matrix we try to invert is singular. (And for one reason or another, Keenan's way, using a LinearSolve[], handled this degenerate case without complaining.)

One might think the solution would be simply to canonicalize this $D$ matrix: HNF it, and delete the all-zero rows. But here's the thing: it's not an integer matrix. It's not even rational. Even though it seems obvious that, since $18.045 = 9.0225 × 2$, we should be able to reduce this thing to:

$\left[ \begin{array} {r} 0 & 1 \\ 0 & 2 \\ \end{array} \right]$

actually what we want to do here is different and maybe slightly simpler. At least, it's a different breed of normalization.

What the RTT Library in Wolfram Language does now is Normalize[] every $𝜹_i$ to a unit vector and then dedupe them according to whether they equal each other or each other's negation. This suits the design of the algorithm, namely how it doesn't actually restrict itself to searching the convex combination but instead searches the whole affine plane. And that works.

As a result, however, it doesn't always work directly in blend variables, but in scaled blend variables, scaled by the factor between the normalized and non-normalized deltas. For example, normalizing the above example would create a normalizing size factor of 9.0225. So now the tied minimax range wouldn't be from $0 \lt b_1 \lt 1$, but from $0 \lt b_1 \lt 9.0225$.
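A sketch of that normalization step (in Python; the actual implementation is in Wolfram Language, so this is only illustrative):

```python
import numpy as np

# Illustrative sketch of the delta normalization described above: each
# δᵢ is scaled to a unit vector, then deduped up to negation, so
# collinear deltas, like the degenerate {0 9.0225] and {0 18.045] pair
# above, collapse to one.
deltas = [np.array([0.0, 9.0225]),
          np.array([0.0, 18.045])]   # same direction as the first: redundant

seen, kept = set(), []
for d in deltas:
    u = d / np.linalg.norm(d)        # Normalize[] equivalent
    if u[u != 0][0] < 0:             # canonical sign, so δ and -δ match
        u = -u
    key = tuple(np.round(u, 12))     # tolerate floating-point fuzz
    if key not in seen:
        seen.add(key)
        kept.append(d)

print(len(kept))  # → 1
```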

## For all-interval tuning schemes

When computing an all-interval tuning where the dual norm power is $∞$, we use a variation on the method we used for ordinary tunings when the optimization power was $∞$.

In this case, our optimization power is also still $∞$. That is to say, in this case we're doing the same computation we would have been doing if we had a finite target-interval set, but now we're doing it as if the primes alone were our target-interval set.

Let's get the minimax-S tuning of meantone. With three proxy target-intervals and two generators, we end up with four constraint matrices:

$\left[ \begin{array} {rrr} +1 & +1 \\ +1 & 0 \\ 0 & +1 \\ \end{array} \right] , \left[ \begin{array} {rrr} +1 & +1 \\ +1 & 0 \\ 0 & {-1} \\ \end{array} \right] , \left[ \begin{array} {rrr} +1 & +1 \\ {-1} & 0 \\ 0 & +1 \\ \end{array} \right] , \left[ \begin{array} {rrr} +1 & +1 \\ {-1} & 0 \\ 0 & {-1} \\ \end{array} \right]$

These correspond to tunings with the following pairs (one for each generator) of unchanged-intervals:

1. $(\frac21)^1 × (\frac31)^1 = \frac61$ and $(\frac21)^1 × (\frac51)^1 = \frac{10}{1}$
2. $(\frac21)^1 × (\frac31)^1 = \frac61$ and $(\frac21)^1 × (\frac51)^{-1} = \frac{2}{5}$
3. $(\frac21)^1 × (\frac31)^{-1} = \frac23$ and $(\frac21)^1 × (\frac51)^1 = \frac{10}{1}$
4. $(\frac21)^1 × (\frac31)^{-1} = \frac23$ and $(\frac21)^1 × (\frac51)^{-1} = \frac{2}{5}$

Which in turn become the following tunings:

1. {1202.682 695.021]
2. {1201.699 697.564]
3. {1195.387 699.256]
4. {0.000 0.000] (Yup, not kidding. This tuning is probably not going to win…)

And these in turn give the following lists of absolute scaled prime errors (for each list, all three primes have coinciding absolute scaled errors, because minimax tunings lead to $r + 1$ of them coinciding):

1. {2.682 2.682 2.682]
2. {1.699 1.699 1.699]
3. {4.613 4.613 4.613]
4. {1200.000 1200.000 1200.000]

And so our second tuning wins, and that's our minimax-S tuning of meantone.
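The whole search above can be sketched in a few lines of Python (a sketch assuming NumPy; the variable names are our own). For each constraint matrix $K$ it solves for the generator embedding via $G = S_{\text{p}}K(MS_{\text{p}}K)^{-1}$, as in the worked examples later in this article, then keeps the candidate dealing the least maximum damage:

```python
import numpy as np

# Minimax-S search sketch for meantone, with mapping M = [⟨1 1 0] ⟨0 1 4]]
log2p = np.log2([2.0, 3.0, 5.0])
j = 1200.0 * log2p                       # just tuning map, in cents
M = np.array([[1, 1, 0], [0, 1, 4]])     # meantone mapping
S = np.diag(1.0 / log2p)                 # log-product pretransformer L⁻¹

Ks = [np.array([[1, 1], [s1, 0], [0, s2]])  # the four constraint matrices
      for s1 in (1, -1) for s2 in (1, -1)]

best_g, best_damage = None, np.inf
for K in Ks:
    G = S @ K @ np.linalg.inv(M @ S @ K)    # exact generator embedding
    g = j @ G                               # generator tuning map, in cents
    damage = np.abs((g @ M - j) @ S).max()  # max scaled prime error, ¢(S)
    if damage < best_damage:
        best_g, best_damage = g, damage

print(best_g, best_damage)  # ≈ {1201.699 697.564], max damage ≈ 1.699 ¢(S)
```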

## With alternative complexities

The following examples all pick up from a shared setup here: Dave Keenan & Douglas Blumeyer's guide to RTT: alternative complexities#Computing all-interval tuning schemes with alternative complexities.

So for all complexities used here — at least the first several simpler examples — our constraint matrices will be:

$\left[ \begin{array} {rrr} +1 & +1 \\ +1 & 0 \\ 0 & +1 \\ \end{array} \right] , \\ \left[ \begin{array} {rrr} +1 & +1 \\ +1 & 0 \\ 0 & -1 \\ \end{array} \right] , \\ \left[ \begin{array} {rrr} +1 & +1 \\ -1 & 0 \\ 0 & +1 \\ \end{array} \right] , \\ \left[ \begin{array} {rrr} +1 & +1 \\ -1 & 0 \\ 0 & -1 \\ \end{array} \right]$

### Minimax-S

This example specifically picks up from the setup laid out here: Dave Keenan & Douglas Blumeyer's guide to RTT: alternative complexities#Log-product2, by plugging $L^{-1}$ into our pseudoinverse method for $S_{\text{p}}$.

$% \slant{} command approximates italics to allow slanted bold characters, including digits, in MathJax. \def\slant#1{\style{display:inline-block;margin:-.05em;transform:skew(-14deg)translateX(.03em)}{#1}}$ Now we need to find the tunings corresponding to our series of constraint matrices $K$. Those constraint matrices apply to both sides of the approximation $GM\mathrm{T}_{\text{p}}S_{\text{p}} \approx \mathrm{T}_{\text{p}}S_{\text{p}}$, or simplified, $GMS_{\text{p}} \approx S_{\text{p}}$. So first we find $MS_{\text{p}} = ML^{-1} = \left[ \begin{array} {rrr} \frac{1}{\log_2(2)} & \frac{2}{\log_2(3)} & \frac{3}{\log_2(5)} \\ \frac{0}{\log_2(2)} & \frac{-3}{\log_2(3)} & \frac{-5}{\log_2(5)} \\ \end{array} \right]$. And then we find $S_{\text{p}} = L^{-1} = \text{diag}(\left[ \begin{array} {r} \frac{1}{\log_2(2)} & \frac{1}{\log_2(3)} & \frac{1}{\log_2(5)} \end{array} \right])$.

So here's our first constraint matrix:

$\begin{array} {c} K \\ \left[ \begin{array} {rrr} +1 & +1 \\ +1 & 0 \\ 0 & +1 \\ \end{array} \right] \end{array}$

Applying the constraint to get an equality:

$\begin{array} {c} G \\ \left[ \begin{array} {rrr} g_{11} & g_{12} \\ g_{21} & g_{22} \\ g_{31} & g_{32} \\ \end{array} \right] \end{array} \begin{array} {c} ML^{-1} \\ \left[ \begin{array} {rrr} \frac{1}{\log_2(2)} & \frac{2}{\log_2(3)} & \frac{3}{\log_2(5)} \\ \frac{0}{\log_2(2)} & \frac{-3}{\log_2(3)} & \frac{-5}{\log_2(5)} \\ \end{array} \right] \end{array} \begin{array} {c} K \\ \left[ \begin{array} {rrr} +1 & +1 \\ +1 & 0 \\ 0 & +1 \\ \end{array} \right] \end{array} = \begin{array} {c} L^{-1} \\ \left[ \begin{array} {rrr} \frac{1}{\log_2(2)} & 0 & 0 \\ 0 & \frac{1}{\log_2(3)} & 0 \\ 0 & 0 & \frac{1}{\log_2(5)} \\ \end{array} \right] \end{array} \begin{array} {c} K \\ \left[ \begin{array} {rrr} +1 & +1 \\ +1 & 0 \\ 0 & +1 \\ \end{array} \right] \end{array}$

Multiply:

$\begin{array} {c} G \\ \left[ \begin{array} {rrr} g_{11} & g_{12} \\ g_{21} & g_{22} \\ g_{31} & g_{32} \\ \end{array} \right] \end{array} \begin{array} {c} ML^{-1}K \\ \left[ \begin{array} {rrr} 1+\frac{2}{\log_2(3)} & 1+\frac{3}{\log_2(5)} \\ \frac{-3}{\log_2(3)} & \frac{-5}{\log_2(5)} \\ \end{array} \right] \end{array} = \begin{array} {c} L^{-1}K \\ \left[ \begin{array} {rrr} \frac{1}{\log_2(2)} & \frac{1}{\log_2(2)} \\ \frac{1}{\log_2(3)} & 0 \\ 0 & \frac{1}{\log_2(5)}\\ \end{array} \right] \end{array}$

Solve for $G$:

$\begin{array} {c} G \\ \left[ \begin{array} {rrr} g_{11} & g_{12} \\ g_{21} & g_{22} \\ g_{31} & g_{32} \\ \end{array} \right] \end{array} = \begin{array} {c} L^{-1}K \\ \left[ \begin{array} {rrr} \frac{1}{\log_2(2)} & \frac{1}{\log_2(2)} \\ \frac{1}{\log_2(3)} & 0 \\ 0 & \frac{1}{\log_2(5)}\\ \end{array} \right] \end{array} \begin{array} {c} (ML^{-1}K)^{-1} \\ \left[ \begin{array} {rrr} \frac{-5}{\log_2(5)} & {-1}-\frac{3}{\log_2(5)} \\ \frac{3}{\log_2(3)} & 1+\frac{2}{\log_2(3)} \\ \end{array} \right] \\ \hline (\frac{3}{\log_2(3)} - \frac{5}{\log_2(5)} - \frac{1}{\log_2(3)\log_2(5)}) \end{array} = \begin{array} {c} \\ \left[ \begin{array} {rrr} 0.490 & 0.0567 \\ 2.552 & 2.717 \\ -1.531 & -1.830 \\ \end{array} \right] \end{array}$

From that we can find $𝒈 = 𝒋G$ to get $g_1 = 1174.903$ and $g_2 = 136.024$.

Sure, that looks like a horrible tuning; it only minimizes the maximum damage across all intervals to about 25 ¢(S)! But don't worry yet. This is all part of the process. We've only checked our first of four constraint matrices. Certainly one of the other three will lead to a better candidate tuning. We won't work through these examples in detail; one illustrative example should be enough.

Indeed, we find the second one to be $𝒈 =$ {1196.906 162.318], dealing only 3 ¢(S) maximum damage. And the third $K$ leads to {1203.540 166.505], which deals just over 3 ¢(S) maximum damage. The fourth $K$ is a dud, sending the tuning to {0 0], dealing a whopping 1200 ¢(S) maximum damage.

And so the minimax-S tuning of this temperament is {1196.906 162.318]. We could compute this in the RTT Library in Wolfram Language with the following line of code:

In:  optimizeGeneratorTuningMap["[⟨1 2 3] ⟨0 -3 -5]]", "minimax-S"]
Out: {1196.906 162.318]


### Minimax-sopfr-S

This example specifically picks up from the setup laid out here: Dave Keenan & Douglas Blumeyer's guide to RTT: alternative complexities#Sum-of-prime-factors-with-repetition2, plugging $\text{diag}(𝒑)^{-1}$ in for $S_{\text{p}}$.

Now we need to find the tunings corresponding to our series of constraint matrices $K$. Those constraint matrices apply to both sides of the approximation $GM\mathrm{T}_{\text{p}}S_{\text{p}} \approx \mathrm{T}_{\text{p}}S_{\text{p}}$, or simplified, $GMS_{\text{p}} \approx S_{\text{p}}$. So first we find $MS_{\text{p}} = M\text{diag}(𝒑)^{-1} = \left[ \begin{array} {rrr} \frac12 & \frac23 & \frac35 \\ \frac02 & \frac{-3}{3} & \frac{-5}{5} \\ \end{array} \right]$. And then we find $S_{\text{p}} = \text{diag}(𝒑)^{-1} = \text{diag}(\left[ \begin{array} {r} \frac12 & \frac13 & \frac15 \end{array} \right])$.

So here's our first constraint matrix:

$\begin{array} {c} K \\ \left[ \begin{array} {rrr} +1 & +1 \\ +1 & 0 \\ 0 & +1 \\ \end{array} \right] \end{array}$

Applying the constraint to get an equality:

$\begin{array} {c} G \\ \left[ \begin{array} {rrr} g_{11} & g_{12} \\ g_{21} & g_{22} \\ g_{31} & g_{32} \\ \end{array} \right] \end{array} \begin{array} {c} M\text{diag}(𝒑)^{-1} \\ \left[ \begin{array} {rrr} \frac12 & \frac23 & \frac35 \\ \frac02 & \frac{-3}{3} & \frac{-5}{5} \\ \end{array} \right] \end{array} \begin{array} {c} K \\ \left[ \begin{array} {rrr} +1 & +1 \\ +1 & 0 \\ 0 & +1 \\ \end{array} \right] \end{array} = \begin{array} {c} \text{diag}(𝒑)^{-1} \\ \left[ \begin{array} {rrr} \frac12 & 0 & 0 \\ 0 & \frac13 & 0 \\ 0 & 0 & \frac15 \\ \end{array} \right] \end{array} \begin{array} {c} K \\ \left[ \begin{array} {rrr} +1 & +1 \\ +1 & 0 \\ 0 & +1 \\ \end{array} \right] \end{array}$

Multiply:

$\begin{array} {c} G \\ \left[ \begin{array} {rrr} g_{11} & g_{12} \\ g_{21} & g_{22} \\ g_{31} & g_{32} \\ \end{array} \right] \end{array} \begin{array} {c} M\text{diag}(𝒑)^{-1}K \\ \left[ \begin{array} {rrr} \frac76 & \frac{11}{10} \\ {-1} & {-1} \\ \end{array} \right] \end{array} = \begin{array} {c} \text{diag}(𝒑)^{-1}K \\ \left[ \begin{array} {rrr} \frac12 & \frac12 \\ \frac13 & 0 \\ 0 & \frac15 \\ \end{array} \right] \end{array}$

Solve for $G$:

$\begin{array} {c} G \\ \left[ \begin{array} {rrr} g_{11} & g_{12} \\ g_{21} & g_{22} \\ g_{31} & g_{32} \\ \end{array} \right] \end{array} = \begin{array} {c} \text{diag}(𝒑)^{-1}K \\ \left[ \begin{array} {rrr} \frac12 & \frac12 \\ \frac13 & 0 \\ 0 & \frac15 \\ \end{array} \right] \end{array} \begin{array} {c} (M\text{diag}(𝒑)^{-1}K)^{-1} \\ \left[ \begin{array} {rrr} 15 & \frac{33}{2} \\ {-15} & {-\frac{35}{2}} \\ \end{array} \right] \\ \end{array} = \begin{array} {c} \\ \left[ \begin{array} {rrr} 0 & {-\frac{1}{2}} \\ 5 & \frac{11}{2} \\ -3 & {-\frac{7}{2}} \\ \end{array} \right] \end{array}$

Note the tempered octave is exactly $3^{5}5^{-3} = \frac{243}{125}$! That sounds cool, but it's actually an entire quartertone narrow. We find $g_1 = 1150.834$ and $g_2 = 108.655$. Again, that looks like a horrible tuning; this first constraint matrix is seeming not so hot for tuning porcupine temperament, irrespective of our choice of complexity.

But again the second candidate tuning turns out to be much nicer, with $𝒈 =$ {1196.927 162.430], dealing only about 1.5 ¢(S) maximum damage. And the third $K$ leads to {1203.512 166.600], which deals about 1.8 ¢(S) maximum damage. The fourth $K$ is a dud, giving {1150.834 157.821], dealing a whopping 25 ¢(S) maximum damage.

And so the minimax-sopfr-S tuning of this temperament is {1196.927 162.430]. We could compute this in the RTT Library in Wolfram Language with the following line of code:

In:  optimizeGeneratorTuningMap["[⟨1 2 3] ⟨0 -3 -5]]", "minimax-sopfr-S"]
Out: {1196.927 162.430]
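Since the sopfr pretransformer $\text{diag}(𝒑)^{-1}$ is rational, the generator embeddings in this section come out exactly rational. Here's a sketch verifying the first constraint matrix's $G$ from above using Python's fractions module for exact arithmetic (the small matrix helpers are written out by hand to stay self-contained):

```python
from fractions import Fraction as F

# Verify G = S·K·(M·S·K)⁻¹ exactly, for porcupine with the sopfr
# pretransformer and the first constraint matrix.
M = [[1, 2, 3], [0, -3, -5]]          # porcupine mapping
S = [F(1, 2), F(1, 3), F(1, 5)]       # diagonal of diag(𝒑)⁻¹
K = [[1, 1], [1, 0], [0, 1]]          # first constraint matrix

SK = [[S[i] * K[i][j] for j in range(2)] for i in range(3)]
MSK = [[sum(F(M[i][k]) * SK[k][j] for k in range(3)) for j in range(2)]
       for i in range(2)]

det = MSK[0][0] * MSK[1][1] - MSK[0][1] * MSK[1][0]
MSK_inv = [[ MSK[1][1] / det, -MSK[0][1] / det],   # 2×2 inverse by hand
           [-MSK[1][0] / det,  MSK[0][0] / det]]

G = [[sum(SK[i][k] * MSK_inv[k][j] for k in range(2)) for j in range(2)]
     for i in range(3)]
print([[str(entry) for entry in row] for row in G])
# → [['0', '-1/2'], ['5', '11/2'], ['-3', '-7/2']]
```

Note how the tempered octave column (0, 5, −3) matches the exact $3^{5}5^{-3}$ observed above.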


### Minimax-copfr-S

This example specifically picks up from the setup laid out here: Dave Keenan & Douglas Blumeyer's guide to RTT: alternative complexities#Count-of-prime-factors-with-repetition2, plugging $I$ into our pseudoinverse method for $S_{\text{p}}$.

Now we need to find the tunings corresponding to our series of constraint matrices $K$. Those constraint matrices apply to both sides of the approximation $GM\mathrm{T}_{\text{p}}S_{\text{p}} \approx \mathrm{T}_{\text{p}}S_{\text{p}}$, or simplified, $GMS_{\text{p}} \approx S_{\text{p}}$. So first we find $MS_{\text{p}} = M = \left[ \begin{array} {rrr} 1 & 2 & 3 \\ 0 & {-3} & {-5} \\ \end{array} \right]$. And then we find $S_{\text{p}} = I$.

So here's our first constraint matrix:

$\begin{array} {c} K \\ \left[ \begin{array} {rrr} +1 & +1 \\ +1 & 0 \\ 0 & +1 \\ \end{array} \right] \end{array}$

Applying the constraint to get an equality:

$\begin{array} {c} G \\ \left[ \begin{array} {rrr} g_{11} & g_{12} \\ g_{21} & g_{22} \\ g_{31} & g_{32} \\ \end{array} \right] \end{array} \begin{array} {c} M \\ \left[ \begin{array} {rrr} 1 & 2 & 3 \\ 0 & {-3} & {-5} \\ \end{array} \right] \end{array} \begin{array} {c} K \\ \left[ \begin{array} {rrr} +1 & +1 \\ +1 & 0 \\ 0 & +1 \\ \end{array} \right] \end{array} = \begin{array} {c} I \\ \left[ \begin{array} {rrr} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ \end{array} \right] \end{array} \begin{array} {c} K \\ \left[ \begin{array} {rrr} +1 & +1 \\ +1 & 0 \\ 0 & +1 \\ \end{array} \right] \end{array}$

Multiply:

$\begin{array} {c} G \\ \left[ \begin{array} {rrr} g_{11} & g_{12} \\ g_{21} & g_{22} \\ g_{31} & g_{32} \\ \end{array} \right] \end{array} \begin{array} {c} MK \\ \left[ \begin{array} {rrr} 3 & 4 \\ {-3} & {-5} \\ \end{array} \right] \end{array} = \begin{array} {c} IK \\ \left[ \begin{array} {rrr} 1 & 1 \\ 1 & 0 \\ 0 & 1 \\ \end{array} \right] \end{array}$

Solve for $G$:

$\begin{array} {c} G \\ \left[ \begin{array} {rrr} g_{11} & g_{12} \\ g_{21} & g_{22} \\ g_{31} & g_{32} \\ \end{array} \right] \end{array} = \begin{array} {c} K \\ \left[ \begin{array} {rrr} 1 & 1 \\ 1 & 0 \\ 0 & 1 \\ \end{array} \right] \end{array} \begin{array} {c} (MK)^{-1} \\ \left[ \begin{array} {rrr} \frac53 & \frac43 \\ {-1} & {-1} \\ \end{array} \right] \end{array} = \begin{array} {c} \\ \dfrac13 \left[ \begin{array} {rrr} 2 & 1 \\ 5 & 4 \\ {-3} & {-3} \\ \end{array} \right] \end{array}$

So that's a tempered octave equal to $2^{\frac23}3^{\frac53}5^{-\frac33} = \sqrt[3]{\frac{972}{125}}$. Interesting, perhaps. But we find $g_1 = 1183.611$ and $g_2 = 149.626$. You know the drill by now. This one's a horrible tuning. It does 16 ¢(S) damage.

The second constraint gives $𝒈 =$ {1194.537 160.552], dealing only 5 ¢(S) maximum damage. And the third $K$ leads to {1207.024 168.356], which deals just over 7 ¢(S) maximum damage. The fourth $K$ is a dud, sending the tuning to {1249.166 182.404], dealing nearly 50 ¢(S) maximum damage.

And so the minimax-copfr-S tuning of this temperament is {1194.537 160.552]. We could compute this in the RTT Library in Wolfram Language with the following line of code:

In:  optimizeGeneratorTuningMap["[⟨1 2 3] ⟨0 -3 -5]]", "minimax-copfr-S"]
Out: {1194.537 160.552]


In the case of minimax-copfr-S with nullity-1 (only one comma) like this, we actually have a shortcut. First, take the size of the comma in cents, and divide it by its total count of primes. The porcupine comma is $\frac{250}{243}$, or in vector form [1 -5 3⟩, and so it has |1| + |-5| + |3| = 9 total primes. And being 49.166 ¢ in size, that gives us $\frac{49.166}{9} = 5.463$. What's this number for? That's the amount of cents to retune each prime by! If the count of a prime in the comma is positive, we tune narrow by that much, and if negative, we tune wide. So the map for the minimax-copfr-S tuning of porcupine is $𝒕$ = {1200 1901.955 2786.314] + {-5.463 5.463 -5.463] = {1194.537 1907.418 2780.851]. If you're not convinced this matches the $𝒈$ we found the long way, feel free to check via $𝒕 = 𝒈M$.
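The shortcut can be sketched in a few lines of Python (standard library only; the names are our own):

```python
import math

# Nullity-1 minimax-copfr-S shortcut: retune each prime by
# (comma size in cents) / (total count of primes), narrow where the
# comma's entry is positive, wide where negative.
comma = [1, -5, 3]                     # porcupine comma 250/243
primes = [2, 3, 5]

j = [1200 * math.log2(p) for p in primes]          # just tuning map
shift = (1200 * math.log2(250 / 243)               # comma size: 49.166 ¢
         / sum(abs(count) for count in comma))     # total primes: 9
t = [size - shift * math.copysign(1, count)        # narrow if +, wide if -
     for size, count in zip(j, comma)]

print([round(size, 3) for size in t])  # → [1194.537, 1907.418, 2780.851]
```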

### Minimax-lils-S

This example specifically picks up from the setup laid out here: Dave Keenan & Douglas Blumeyer's guide to RTT: alternative complexities#Log-integer-limit-squared2.

Now we need to find the tunings corresponding to our series of constraint matrices $K$. Those constraint matrices apply to both sides of the approximation $GM\mathrm{T}_{\text{p}}S_{\text{p}} \approx \mathrm{T}_{\text{p}}S_{\text{p}}$, or simplified, $GMS_{\text{p}} \approx S_{\text{p}}$, or equivalent thereof. So first we find $M\mathrm{T}_{\text{p}}S_{\text{p}}$. According to Mike's augmentation pattern, we get:

$\left[ \begin{array} {rrr|r} \frac{1}{\log_2(2)} & \frac{2}{\log_2(3)} & \frac{3}{\log_2(5)} & \style{background-color:#FFF200;padding:5px}{0} \\ 0 & \frac{-3}{\log_2(3)} & \frac{-5}{\log_2(5)} & \style{background-color:#FFF200;padding:5px}{0} \\ \hline \style{background-color:#FFF200;padding:5px}{1} & \style{background-color:#FFF200;padding:5px}{1} & \style{background-color:#FFF200;padding:5px}{1} & \style{background-color:#FFF200;padding:5px}{-1} \\ \end{array} \right]$

(Compare with the result for minimax-S, the same but without the augmentations.)

And then we find $S_{\text{p}}$ or equivalent thereof. It's an augmentation of $L^{-1}$:

$\left[ \begin{array} {rrr|r} \frac{1}{\log_2(2)} & 0 & 0 & \style{background-color:#FFF200;padding:5px}{0} \\ 0 & \frac{1}{\log_2(3)} & 0 & \style{background-color:#FFF200;padding:5px}{0} \\ 0 & 0 & \frac{1}{\log_2(5)} & \style{background-color:#FFF200;padding:5px}{0} \\ \hline \style{background-color:#FFF200;padding:5px}{0} & \style{background-color:#FFF200;padding:5px}{0} & \style{background-color:#FFF200;padding:5px}{0} & \style{background-color:#FFF200;padding:5px}{1} \\ \end{array} \right]$

This is an extrapolation from Mike's augmentation pattern. It's not actually directly any sort of inverse of the complexity pretransformer. In some sense, that effect has already been built into the augmentation of $M\mathrm{T}_{\text{p}}S_{\text{p}}$. (Again, it's the same as minimax-S, but with the augmentation.)

On account of the augmentation, our constraint matrices are a bit different here. Actually, we have twice as many candidate tunings to check this time (if you compare this list with the one given in the opening part of this supersection, the pattern relating them is fairly clear). The extra dimension is treated just like it would be otherwise. Here are all of our $K$'s:

$\left[ \begin{array} {rr|r} +1 & +1 & \style{background-color:#FFF200;padding:5px}{+1} \\ +1 & 0 & \style{background-color:#FFF200;padding:5px}{0} \\ 0 & +1 & \style{background-color:#FFF200;padding:5px}{0} \\ \hline \style{background-color:#FFF200;padding:5px}{0} & \style{background-color:#FFF200;padding:5px}{0} & \style{background-color:#FFF200;padding:5px}{+1} \\ \end{array} \right] , \\ \left[ \begin{array} {rr|r} +1 & +1 & \style{background-color:#FFF200;padding:5px}{+1} \\ +1 & 0 & \style{background-color:#FFF200;padding:5px}{0} \\ 0 & +1 & \style{background-color:#FFF200;padding:5px}{0} \\ \hline \style{background-color:#FFF200;padding:5px}{0} & \style{background-color:#FFF200;padding:5px}{0} & \style{background-color:#FFF200;padding:5px}{-1} \\ \end{array} \right] , \\ \left[ \begin{array} {rr|r} +1 & +1 & \style{background-color:#FFF200;padding:5px}{+1} \\ +1 & 0 & \style{background-color:#FFF200;padding:5px}{0} \\ 0 & -1 & \style{background-color:#FFF200;padding:5px}{0} \\ \hline \style{background-color:#FFF200;padding:5px}{0} & \style{background-color:#FFF200;padding:5px}{0} & \style{background-color:#FFF200;padding:5px}{+1} \\ \end{array} \right] , \\ \left[ \begin{array} {rr|r} +1 & +1 & \style{background-color:#FFF200;padding:5px}{+1} \\ +1 & 0 & \style{background-color:#FFF200;padding:5px}{0} \\ 0 & -1 & \style{background-color:#FFF200;padding:5px}{0} \\ \hline \style{background-color:#FFF200;padding:5px}{0} & \style{background-color:#FFF200;padding:5px}{0} & \style{background-color:#FFF200;padding:5px}{-1} \\ \end{array} \right] , \\ \left[ \begin{array} {rr|r} +1 & +1 & \style{background-color:#FFF200;padding:5px}{+1} \\ -1 & 0 & \style{background-color:#FFF200;padding:5px}{0} \\ 0 & +1 & \style{background-color:#FFF200;padding:5px}{0} \\ \hline \style{background-color:#FFF200;padding:5px}{0} & \style{background-color:#FFF200;padding:5px}{0} & \style{background-color:#FFF200;padding:5px}{+1} \\ \end{array} \right] , \\ \left[ \begin{array} {rr|r} 
+1 & +1 & \style{background-color:#FFF200;padding:5px}{+1} \\ -1 & 0 & \style{background-color:#FFF200;padding:5px}{0} \\ 0 & +1 & \style{background-color:#FFF200;padding:5px}{0} \\ \hline \style{background-color:#FFF200;padding:5px}{0} & \style{background-color:#FFF200;padding:5px}{0} & \style{background-color:#FFF200;padding:5px}{-1} \\ \end{array} \right] , \\ \left[ \begin{array} {rr|r} +1 & +1 & \style{background-color:#FFF200;padding:5px}{+1} \\ -1 & 0 & \style{background-color:#FFF200;padding:5px}{0} \\ 0 & -1 & \style{background-color:#FFF200;padding:5px}{0} \\ \hline \style{background-color:#FFF200;padding:5px}{0} & \style{background-color:#FFF200;padding:5px}{0} & \style{background-color:#FFF200;padding:5px}{+1} \\ \end{array} \right] , \\ \left[ \begin{array} {rr|r} +1 & +1 & \style{background-color:#FFF200;padding:5px}{+1} \\ -1 & 0 & \style{background-color:#FFF200;padding:5px}{0} \\ 0 & -1 & \style{background-color:#FFF200;padding:5px}{0} \\ \hline \style{background-color:#FFF200;padding:5px}{0} & \style{background-color:#FFF200;padding:5px}{0} & \style{background-color:#FFF200;padding:5px}{-1} \\ \end{array} \right] , \\$

So let's just work through one tuning with the first $K$. Note that we've also augmented $G$. This augmentation is necessary for the computation but will be thrown away once we have our result.

Applying the constraint to get an equality:

$\scriptsize \begin{array} {c} G \\ \left[ \begin{array} {rr|r} g_{1,1} & g_{1,2} & \style{background-color:#FFF200;padding:5px}{g_{1,\text{aug}}} \\ g_{2,1} & g_{2,2} & \style{background-color:#FFF200;padding:5px}{g_{2,\text{aug}}} \\ g_{3,1} & g_{3,2} & \style{background-color:#FFF200;padding:5px}{g_{3,\text{aug}}} \\ \hline \style{background-color:#FFF200;padding:5px}{g_{\text{aug},1}} & \style{background-color:#FFF200;padding:5px}{g_{\text{aug},2}} & \style{background-color:#FFF200;padding:5px}{g_{\text{aug},\text{aug}}} \\ \end{array} \right] \end{array} \begin{array} {c} \text{equiv. of} \; MS_{\text{p}} \\ \left[ \begin{array} {rrr|r} \frac{1}{\log_2(2)} & \frac{2}{\log_2(3)} & \frac{3}{\log_2(5)} & \style{background-color:#FFF200;padding:5px}{0} \\ 0 & \frac{-3}{\log_2(3)} & \frac{-5}{\log_2(5)} & \style{background-color:#FFF200;padding:5px}{0} \\ \hline \style{background-color:#FFF200;padding:5px}{1} & \style{background-color:#FFF200;padding:5px}{1} & \style{background-color:#FFF200;padding:5px}{1} & \style{background-color:#FFF200;padding:5px}{-1} \end{array} \right] \end{array} \begin{array} {c} K \\ \left[ \begin{array} {rr|r} +1 & +1 & \style{background-color:#FFF200;padding:5px}{+1} \\ +1 & 0 & \style{background-color:#FFF200;padding:5px}{0} \\ 0 & +1 & \style{background-color:#FFF200;padding:5px}{0} \\ \hline \style{background-color:#FFF200;padding:5px}{0} & \style{background-color:#FFF200;padding:5px}{0} & \style{background-color:#FFF200;padding:5px}{+1} \\ \end{array} \right] \end{array} = \begin{array} {c} \text{equiv. 
of} \; S_{\text{p}} \\ \left[ \begin{array} {rrr|r} \frac{1}{\log_2(2)} & 0 & 0 & \style{background-color:#FFF200;padding:5px}{0} \\ 0 & \frac{1}{\log_2(3)} & 0 & \style{background-color:#FFF200;padding:5px}{0} \\ 0 & 0 & \frac{1}{\log_2(5)} & \style{background-color:#FFF200;padding:5px}{0} \\ \hline \style{background-color:#FFF200;padding:5px}{0} & \style{background-color:#FFF200;padding:5px}{0} & \style{background-color:#FFF200;padding:5px}{0} & \style{background-color:#FFF200;padding:5px}{1} \\ \end{array} \right] \end{array} \begin{array} {c} K \\ \left[ \begin{array} {rr|r} +1 & +1 & \style{background-color:#FFF200;padding:5px}{+1} \\ +1 & 0 & \style{background-color:#FFF200;padding:5px}{0} \\ 0 & +1 & \style{background-color:#FFF200;padding:5px}{0} \\ \hline \style{background-color:#FFF200;padding:5px}{0} & \style{background-color:#FFF200;padding:5px}{0} & \style{background-color:#FFF200;padding:5px}{+1} \\ \end{array} \right] \end{array}$

Multiply:

$\begin{array} {c} G \\ \left[ \begin{array} {rr|r} g_{1,1} & g_{1,2} & \style{background-color:#FFF200;padding:5px}{g_{1,\text{aug}}} \\ g_{2,1} & g_{2,2} & \style{background-color:#FFF200;padding:5px}{g_{2,\text{aug}}} \\ g_{3,1} & g_{3,2} & \style{background-color:#FFF200;padding:5px}{g_{3,\text{aug}}} \\ \hline \style{background-color:#FFF200;padding:5px}{g_{\text{aug},1}} & \style{background-color:#FFF200;padding:5px}{g_{\text{aug},2}} & \style{background-color:#FFF200;padding:5px}{g_{\text{aug},\text{aug}}} \\ \end{array} \right] \end{array} \begin{array} {c} \text{equiv. of} \; MS_{\text{p}}K \\ \left[ \begin{array} {rr|r} 2.262 & 2.292 & \style{background-color:#FFF200;padding:5px}{1} \\ {-1.892} & {-2.153} & \style{background-color:#FFF200;padding:5px}{0} \\ \hline \style{background-color:#FFF200;padding:5px}{2} & \style{background-color:#FFF200;padding:5px}{2} & \style{background-color:#FFF200;padding:5px}{0} \\ \end{array} \right] \end{array} = \begin{array} {c} \text{equiv. of} \; S_{\text{p}}K \\ \left[ \begin{array} {rr|r} \frac{1}{\log_2(2)} & \frac{1}{\log_2(2)} & \style{background-color:#FFF200;padding:5px}{1} \\ \frac{1}{\log_2(3)} & 0 & \style{background-color:#FFF200;padding:5px}{0} \\ 0 & \frac{1}{\log_2(5)} & \style{background-color:#FFF200;padding:5px}{0} \\ \hline \style{background-color:#FFF200;padding:5px}{0} & \style{background-color:#FFF200;padding:5px}{0} & \style{background-color:#FFF200;padding:5px}{1} \\ \end{array} \right] \end{array}$

(Again, compare this with the minimax-S case. Same but augmented.) And now solve for $G$:

$\begin{array} {c} G \\ \left[ \begin{array} {rr|r} g_{1,1} & g_{1,2} & \style{background-color:#FFF200;padding:5px}{g_{1,\text{aug}}} \\ g_{2,1} & g_{2,2} & \style{background-color:#FFF200;padding:5px}{g_{2,\text{aug}}} \\ g_{3,1} & g_{3,2} & \style{background-color:#FFF200;padding:5px}{g_{3,\text{aug}}} \\ \hline \style{background-color:#FFF200;padding:5px}{g_{\text{aug},1}} & \style{background-color:#FFF200;padding:5px}{g_{\text{aug},2}} & \style{background-color:#FFF200;padding:5px}{g_{\text{aug},\text{aug}}} \\ \end{array} \right] \end{array} = \begin{array} {c} \text{equiv. of} \; S_{\text{p}}K \\ \left[ \begin{array} {rr|r} \frac{1}{\log_2(2)} & \frac{1}{\log_2(2)} & \style{background-color:#FFF200;padding:5px}{1} \\ \frac{1}{\log_2(3)} & 0 & \style{background-color:#FFF200;padding:5px}{0} \\ 0 & \frac{1}{\log_2(5)} & \style{background-color:#FFF200;padding:5px}{0} \\ \hline \style{background-color:#FFF200;padding:5px}{0} & \style{background-color:#FFF200;padding:5px}{0} & \style{background-color:#FFF200;padding:5px}{1} \\ \end{array} \right] \end{array} \begin{array} {c} (\text{equiv. of} \; MS_{\text{p}}K)^{-1} \\ \left[ \begin{array} {rrr} 0 & 3.837 & 4.131 \\ 0 & -3.837 & -3.632 \\ 1 & 0.116 & -1.021 \\ \end{array} \right] \end{array} = \begin{array} {c} \\ \left[ \begin{array} {rr|r} 1 & 0.116 & \style{background-color:#FFF200;padding:5px}{-0.521} \\ 0 & 2.421 & \style{background-color:#FFF200;padding:5px}{2.607} \\ 0 & -1.652 & \style{background-color:#FFF200;padding:5px}{-1.564} \\ \hline \style{background-color:#FFF200;padding:5px}{1} & \style{background-color:#FFF200;padding:5px}{0.116} & \style{background-color:#FFF200;padding:5px}{-1.021} \\ \end{array} \right] \end{array}$

From that we can find $𝒈 = 𝒋G$. But we need an augmented $𝒋$ to do this. This will work:

$\left[ \begin{array} {rrr|r} 1200 & 1901.955 & 2786.314 & \style{background-color:#FFF200;padding:5px}{0} \\ \end{array} \right]$

So that gives us $g_1 = 1200.000$, $g_2 = 138.930$, and $g_{\text{aug}} = -25.633$. The last term is junk. As stated previously, it's only a side-effect of the computation process and isn't part of the useful result. Instead we only care about $g_1$ and $g_2$, giving us the tuning {1200.000 138.930].
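This arithmetic is easy to spot-check outside the RTT library. Here's a minimal NumPy sketch (our own illustration, not library code) that rebuilds the augmented equivalents of $MS_{\text{p}}$ and $S_{\text{p}}$ and the first candidate $K$ from above, then solves for $G$ and $𝒈$. Note that the augmented $𝒋$ carries the primes' just sizes in cents, ⟨1200 1901.955 2786.314], whose log-prime weighting cancels against the $S_{\text{p}}$ factor baked into $G$:

```python
import numpy as np

log2 = np.log2
# porcupine M = [⟨1 2 3] ⟨0 -3 -5]], as the lil-augmented "equiv. of M·S_p"
MSp = np.array([
    [1/log2(2),  2/log2(3),  3/log2(5),  0],   # ⟨1 2 3] weighted by 1/log2(p)
    [0,         -3/log2(3), -5/log2(5),  0],   # ⟨0 -3 -5] weighted by 1/log2(p)
    [1,          1,          1,         -1],   # lil-augmentation row
])
# lil-augmented "equiv. of S_p": L^{-1} plus the placeholder row and column
Sp = np.diag([1/log2(2), 1/log2(3), 1/log2(5), 1])
K = np.array([                                 # first candidate K
    [1, 1, 1],
    [1, 0, 0],
    [0, 1, 0],
    [0, 0, 1],
])
# G·(MSp·K) = Sp·K  ⟹  G = (Sp·K)·(MSp·K)^{-1}
G = Sp @ K @ np.linalg.inv(MSp @ K)
# augmented just tuning map: primes' just sizes in cents, plus 0 for the lil dimension
j = np.array([1200*log2(2), 1200*log2(3), 1200*log2(5), 0])
g = j @ G
print(np.round(g, 3))  # ≈ [1200, 138.930, -25.633]
```

The first two entries reproduce the candidate tuning {1200.000 138.930], and the junk $g_{\text{aug}} \approx -25.633$ comes along for the ride.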

For this example we won't bother detailing all 8 candidate tunings. Too many. But we will at least note that not every tuning works out with an unchanged octave like this. And that this is not one of the better tunings; this one does about 26 ¢(S) damage, while half of the tunings are around only 3 ¢(S).

The best tuning we find from this set is {1193.828 161.900], and so that's our minimax-lils-S tuning of porcupine. We could compute this in the RTT Library in Wolfram Language with the following line of code:

In:  optimizeGeneratorTuningMap["[⟨1 2 3] ⟨0 -3 -5]]", "minimax-lils-S"]
Out: {1193.828 161.900]


### Minimax-lols-S

This example specifically picks up from the setup laid out here: Dave Keenan & Douglas Blumeyer's guide to RTT: alternative complexities#Log-odd-limit-squared2.

So for minimax-lols-S (AKA held-octave minimax-lils-S) we basically keep the same $MS_{\text{p}}$ as before. But now (as discussed here) we have to further augment it with the mapped held-interval [1 0 0⟩, i.e. what the octave maps to in this temperament, including its augmented row, so that we can match it up with its just size in the constrained linear system of equations, thereby enforcing that it is held unchanged:

$\begin{array}{c} \text{equiv. of} \; MS_{\text{p}} \\ \left[ \begin{array} {rrr|r|r} \frac{1}{\log_2(2)} & \frac{2}{\log_2(3)} & \frac{3}{\log_2(5)} & \style{background-color:#FFF200;padding:5px}{0} & \style{background-color:#00AEEF;padding:5px}{1} \\ 0 & \frac{-3}{\log_2(3)} & \frac{-5}{\log_2(5)} & \style{background-color:#FFF200;padding:5px}{0} & \style{background-color:#00AEEF;padding:5px}{0} \\ \hline \style{background-color:#FFF200;padding:5px}{1} & \style{background-color:#FFF200;padding:5px}{1} & \style{background-color:#FFF200;padding:5px}{1} & \style{background-color:#FFF200;padding:5px}{-1} & \style{background-color:#00AEEF;padding:5px}{0} \\ \end{array} \right] \end{array}$

And as for our equivalent of $S_{\text{p}}$: that's just $L^{-1}$, augmented first with the placeholder for the size dimension of the lil, and second with a placeholder for the just tuning of the held-interval. That just tuning will appear in the augmented $𝒋$ later, where it will be matched up with the interval's mapped form to ensure it is held unchanged.

$\left[ \begin{array} {rrr|r|r} \frac{1}{\log_2(2)} & 0 & 0 & \style{background-color:#FFF200;padding:5px}{0} & \style{background-color:#00AEEF;padding:5px}{0} \\ 0 & \frac{1}{\log_2(3)} & 0 & \style{background-color:#FFF200;padding:5px}{0} & \style{background-color:#00AEEF;padding:5px}{0} \\ 0 & 0 & \frac{1}{\log_2(5)} & \style{background-color:#FFF200;padding:5px}{0} & \style{background-color:#00AEEF;padding:5px}{0} \\ \hline \style{background-color:#FFF200;padding:5px}{0} & \style{background-color:#FFF200;padding:5px}{0} & \style{background-color:#FFF200;padding:5px}{0} & \style{background-color:#FFF200;padding:5px}{1} & \style{background-color:#00AEEF;padding:5px}{0} \\ \hline \style{background-color:#00AEEF;padding:5px}{0} & \style{background-color:#00AEEF;padding:5px}{0} & \style{background-color:#00AEEF;padding:5px}{0} & \style{background-color:#00AEEF;padding:5px}{0} & \style{background-color:#00AEEF;padding:5px}{1} \\ \end{array} \right]$

Our list of $K$'s here is the same as the list for minimax-lils-S, but now they've all got one of their rows dedicated to holding the octave unchanged. For example the first one was:

$\left[ \begin{array} {rr|r} +1 & +1 & \style{background-color:#FFF200;padding:5px}{+1} \\ +1 & 0 & \style{background-color:#FFF200;padding:5px}{0} \\ 0 & +1 & \style{background-color:#FFF200;padding:5px}{0} \\ \hline \style{background-color:#FFF200;padding:5px}{0} & \style{background-color:#FFF200;padding:5px}{0} & \style{background-color:#FFF200;padding:5px}{+1} \\ \end{array} \right]$

But now it's:

$\left[ \begin{array} {rr|r} +1 & +1 & \style{background-color:#8DC73E;padding:5px}{0} \\ +1 & 0 & \style{background-color:#8DC73E;padding:5px}{0} \\ 0 & +1 & \style{background-color:#8DC73E;padding:5px}{0} \\ \hline \style{background-color:#FFF200;padding:5px}{0} & \style{background-color:#FFF200;padding:5px}{0} & \style{background-color:#8DC73E;padding:5px}{0} \\ \hline \style{background-color:#00AEEF;padding:5px}{0} & \style{background-color:#00AEEF;padding:5px}{0} & \style{background-color:#00AEEF;padding:5px}{1} \end{array} \right]$

(To explain the green highlighting: those cells are pertinent to both augmentations. The yellow component of the green indicates that the lil-augmentation originally put a new column there; the blue component indicates that this column has now been repurposed for holding an interval unchanged. The held-octave augmentation did not add a new column here, only a new row.)

So let's just work through one tuning with the first $K$. Note that $𝒈$ is augmented as it was for the minimax-lils-S computation. Applying the constraint to get an equality:

$\scriptsize \begin{array} {c} G \\ \left[ \begin{array} {rr|r} g_{1,1} & g_{1,2} & \style{background-color:#FFF200;padding:5px}{g_{1,\text{aug}}} \\ g_{2,1} & g_{2,2} & \style{background-color:#FFF200;padding:5px}{g_{2,\text{aug}}} \\ g_{3,1} & g_{3,2} & \style{background-color:#FFF200;padding:5px}{g_{3,\text{aug}}} \\ \hline \style{background-color:#FFF200;padding:5px}{g_{\text{aug},1}} & \style{background-color:#FFF200;padding:5px}{g_{\text{aug},2}} & \style{background-color:#FFF200;padding:5px}{g_{\text{aug},\text{aug}}} \\ \hline \style{background-color:#00AEEF;padding:5px}{g_{\text{held},1}} & \style{background-color:#00AEEF;padding:5px}{g_{\text{held},2}} & \style{background-color:#8DC73E;padding:5px}{g_{\text{held},\text{aug}}} \\ \end{array} \right] \end{array} \begin{array}{c} \text{equiv. of} \; MS_{\text{p}} \\ \left[ \begin{array} {rrr|r|r} \frac{1}{\log_2(2)} & \frac{2}{\log_2(3)} & \frac{3}{\log_2(5)} & \style{background-color:#FFF200;padding:5px}{0} & \style{background-color:#00AEEF;padding:5px}{1} \\ 0 & \frac{-3}{\log_2(3)} & \frac{-5}{\log_2(5)} & \style{background-color:#FFF200;padding:5px}{0} & \style{background-color:#00AEEF;padding:5px}{0} \\ \hline \style{background-color:#FFF200;padding:5px}{1} & \style{background-color:#FFF200;padding:5px}{1} & \style{background-color:#FFF200;padding:5px}{1} & \style{background-color:#FFF200;padding:5px}{-1} & \style{background-color:#00AEEF;padding:5px}{0} \\ \end{array} \right] \end{array} \begin{array} {c} K \\ \left[ \begin{array} {rr|r} +1 & +1 & \style{background-color:#8DC73E;padding:5px}{0} \\ +1 & 0 & \style{background-color:#8DC73E;padding:5px}{0} \\ 0 & +1 & \style{background-color:#8DC73E;padding:5px}{0} \\ \hline \style{background-color:#FFF200;padding:5px}{0} & \style{background-color:#FFF200;padding:5px}{0} & \style{background-color:#8DC73E;padding:5px}{0} \\ \hline \style{background-color:#00AEEF;padding:5px}{0} & \style{background-color:#00AEEF;padding:5px}{0} & 
\style{background-color:#00AEEF;padding:5px}{1} \end{array} \right] \end{array} = \begin{array} {c} \text{equiv. of} \; S_{\text{p}} \\ \left[ \begin{array} {rrr|r|r} \frac{1}{\log_2(2)} & 0 & 0 & \style{background-color:#FFF200;padding:5px}{0} & \style{background-color:#00AEEF;padding:5px}{0} \\ 0 & \frac{1}{\log_2(3)} & 0 & \style{background-color:#FFF200;padding:5px}{0} & \style{background-color:#00AEEF;padding:5px}{0} \\ 0 & 0 & \frac{1}{\log_2(5)} & \style{background-color:#FFF200;padding:5px}{0} & \style{background-color:#00AEEF;padding:5px}{0} \\ \hline \style{background-color:#FFF200;padding:5px}{0} & \style{background-color:#FFF200;padding:5px}{0} & \style{background-color:#FFF200;padding:5px}{0} & \style{background-color:#FFF200;padding:5px}{1} & \style{background-color:#00AEEF;padding:5px}{0} \\ \hline \style{background-color:#00AEEF;padding:5px}{0} & \style{background-color:#00AEEF;padding:5px}{0} & \style{background-color:#00AEEF;padding:5px}{0} & \style{background-color:#00AEEF;padding:5px}{0} & \style{background-color:#00AEEF;padding:5px}{1} \\ \end{array} \right] \end{array} \begin{array} {c} K \\ \left[ \begin{array} {rr|r} +1 & +1 & \style{background-color:#8DC73E;padding:5px}{0} \\ +1 & 0 & \style{background-color:#8DC73E;padding:5px}{0} \\ 0 & +1 & \style{background-color:#8DC73E;padding:5px}{0} \\ \hline \style{background-color:#FFF200;padding:5px}{0} & \style{background-color:#FFF200;padding:5px}{0} & \style{background-color:#8DC73E;padding:5px}{0} \\ \hline \style{background-color:#00AEEF;padding:5px}{0} & \style{background-color:#00AEEF;padding:5px}{0} & \style{background-color:#00AEEF;padding:5px}{1} \end{array} \right] \end{array}$

Multiply:

$\begin{array} {c} G \\ \left[ \begin{array} {rr|r} g_{1,1} & g_{1,2} & \style{background-color:#FFF200;padding:5px}{g_{1,\text{aug}}} \\ g_{2,1} & g_{2,2} & \style{background-color:#FFF200;padding:5px}{g_{2,\text{aug}}} \\ g_{3,1} & g_{3,2} & \style{background-color:#FFF200;padding:5px}{g_{3,\text{aug}}} \\ \hline \style{background-color:#FFF200;padding:5px}{g_{\text{aug},1}} & \style{background-color:#FFF200;padding:5px}{g_{\text{aug},2}} & \style{background-color:#FFF200;padding:5px}{g_{\text{aug},\text{aug}}} \\ \hline \style{background-color:#00AEEF;padding:5px}{g_{\text{held},1}} & \style{background-color:#00AEEF;padding:5px}{g_{\text{held},2}} & \style{background-color:#8DC73E;padding:5px}{g_{\text{held},\text{aug}}} \\ \end{array} \right] \end{array} \begin{array} {c} \text{equiv. of} \; MS_{\text{p}}K \\ \left[ \begin{array} {rr|r} 2.262 & 2.292 & \style{background-color:#FFF200;padding:5px}{1} \\ {-1.892} & {-2.153} & \style{background-color:#FFF200;padding:5px}{0} \\ \hline \style{background-color:#8DC73E;padding:5px}{2} & \style{background-color:#8DC73E;padding:5px}{2} & \style{background-color:#8DC73E;padding:5px}{0} \\ \end{array} \right] \end{array} = \begin{array} {c} \text{equiv. of} \; S_{\text{p}}K \\ \left[ \begin{array} {rr|r} \frac{1}{\log_2(2)} & \frac{1}{\log_2(2)} & \style{background-color:#8DC73E;padding:5px}{0} \\ \frac{1}{\log_2(3)} & 0 & \style{background-color:#8DC73E;padding:5px}{0} \\ 0 & \frac{1}{\log_2(5)} & \style{background-color:#8DC73E;padding:5px}{0} \\ \hline \style{background-color:#FFF200;padding:5px}{0} & \style{background-color:#FFF200;padding:5px}{0} & \style{background-color:#8DC73E;padding:5px}{0} \\ \hline \style{background-color:#00AEEF;padding:5px}{0} & \style{background-color:#00AEEF;padding:5px}{0} & \style{background-color:#00AEEF;padding:5px}{1} \\ \end{array} \right] \end{array}$

Solve for $G$:

$\begin{array} {c} G \\ \left[ \begin{array} {rr|r} g_{1,1} & g_{1,2} & \style{background-color:#FFF200;padding:5px}{g_{1,\text{aug}}} \\ g_{2,1} & g_{2,2} & \style{background-color:#FFF200;padding:5px}{g_{2,\text{aug}}} \\ g_{3,1} & g_{3,2} & \style{background-color:#FFF200;padding:5px}{g_{3,\text{aug}}} \\ \hline \style{background-color:#FFF200;padding:5px}{g_{\text{aug},1}} & \style{background-color:#FFF200;padding:5px}{g_{\text{aug},2}} & \style{background-color:#FFF200;padding:5px}{g_{\text{aug},\text{aug}}} \\ \hline \style{background-color:#00AEEF;padding:5px}{g_{\text{held},1}} & \style{background-color:#00AEEF;padding:5px}{g_{\text{held},2}} & \style{background-color:#8DC73E;padding:5px}{g_{\text{held},\text{aug}}} \\ \end{array} \right] \end{array} = \begin{array} {c} \text{equiv. of} \; S_{\text{p}}K \\ \left[ \begin{array} {rr|r} \frac{1}{\log_2(2)} & \frac{1}{\log_2(2)} & \style{background-color:#8DC73E;padding:5px}{0} \\ \frac{1}{\log_2(3)} & 0 & \style{background-color:#8DC73E;padding:5px}{0} \\ 0 & \frac{1}{\log_2(5)} & \style{background-color:#8DC73E;padding:5px}{0} \\ \hline \style{background-color:#FFF200;padding:5px}{0} & \style{background-color:#FFF200;padding:5px}{0} & \style{background-color:#8DC73E;padding:5px}{0} \\ \hline \style{background-color:#00AEEF;padding:5px}{0} & \style{background-color:#00AEEF;padding:5px}{0} & \style{background-color:#00AEEF;padding:5px}{1} \\ \end{array} \right] \end{array} \begin{array} {c} (\text{equiv. 
of} \; MS_{\text{p}}K)^{-1} \\ \left[ \begin{array} {rrr} 0 & 3.838 & 4.132 \\ 0 & -3.838 & -3.632 \\ 1 & 0.116 & -1.021 \\ \end{array} \right] \end{array} = \begin{array} {c} \\ \left[ \begin{array} {rr|r} 0 & 0 & \style{background-color:#FFF200;padding:5px}{0.500} \\ 0 & 2.421 & \style{background-color:#FFF200;padding:5px}{2.607} \\ 0 & -1.653 & \style{background-color:#FFF200;padding:5px}{-1.564} \\ \hline \style{background-color:#FFF200;padding:5px}{0} & \style{background-color:#FFF200;padding:5px}{0} & \style{background-color:#FFF200;padding:5px}{0} \\ \hline \style{background-color:#00AEEF;padding:5px}{1} & \style{background-color:#00AEEF;padding:5px}{0.116} & \style{background-color:#8DC73E;padding:5px}{-1.021} \\ \end{array} \right] \end{array}$

From that we can find $𝒈 = 𝒋G$. But $𝒋$ needs to have been augmented accordingly: both for the lils and for the held octave. Specifically, for the held octave, we append its just tuning in cents, which is 1200. It works out to:

$\left[ \begin{array} {rrr|r|r} 1200 & 1901.955 & 2786.314 & \style{background-color:#FFF200;padding:5px}{0} & \style{background-color:#00AEEF;padding:5px}{1200} \\ \end{array} \right]$

So we find $g_1 = 1200$, $g_2 = 138.930$, and $g_{\text{aug}} = -25.633$. As stated previously, the result for $g_{\text{aug}}$ is just a side-effect of the computation process and isn't part of the useful result. Instead we only care about $g_1$ and $g_2$, giving us the tuning {1200.000 138.930]. (Yes, that's the same tuning as we found for minimax-lils-S; it happens that the octave was already pure for that one, and otherwise nothing about the tuning scheme changed.)
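The doubly-augmented version can be spot-checked the same way. In this NumPy sketch (again our own illustration, not library code), the augmented $𝒋$ carries the primes' just sizes in cents, plus 0 for the lil dimension and 1200 for the held octave:

```python
import numpy as np

log2 = np.log2
# equiv. of M·S_p: lil-augmentation column and row, plus the held-octave column [1 0 0⟩
MSp = np.array([
    [1/log2(2),  2/log2(3),  3/log2(5),  0,  1],
    [0,         -3/log2(3), -5/log2(5),  0,  0],
    [1,          1,          1,         -1,  0],
])
# equiv. of S_p: L^{-1} plus placeholders for the lil size and the held-interval
Sp = np.diag([1/log2(2), 1/log2(3), 1/log2(5), 1, 1])
K = np.array([          # first candidate K, held-octave version
    [1, 1, 0],
    [1, 0, 0],
    [0, 1, 0],
    [0, 0, 0],
    [0, 0, 1],
])
G = Sp @ K @ np.linalg.inv(MSp @ K)
# augmented 𝒋: just tuning map, 0 for the lil dimension, 1200 for the held octave
j = np.array([1200*log2(2), 1200*log2(3), 1200*log2(5), 0, 1200])
g = j @ G
print(np.round(g, 3))  # ≈ [1200, 138.930, -25.633]
```

Because the held-octave row of the system pins the mapped octave to 1200, $g_1$ comes out exactly pure.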

For this example we won't bother detailing all 8 candidate tunings. Too many. But we will at least note that not every tuning works out with a held octave like this. And that this is not one of the better tunings; this one does about 26 ¢(S) damage, while half of the tunings are around only 3 ¢(S).

The best augmented tuning we find from this set is {1200 162.737 -3.102], and so that's our held-octave minimax-lols-S tuning of porcupine. Well, once you throw away that final $g_{\text{aug}}$ entry, anyway, to get {1200 162.737].

We could compute this in the RTT Library in Wolfram Language with the following line of code:

In:  optimizeGeneratorTuningMap["[⟨1 2 3] ⟨0 -3 -5]]", "held-octave minimax-lols-S"]
Out: {1200 162.737]


# Footnotes

1. Gene Ward Smith discovering this relationship: https://yahootuninggroupsultimatebackup.github.io/tuning-math/topicId_16172#16172
2. The actual answer is more like 100.236. The result here is due to compounding rounding errors that I was too lazy to account for when preparing these materials. Sorry about that. ~Douglas
3. Ideally we'd've consistently applied the Fraktur-styling effect to each of these letters, changing no other properties, i.e. ended up with an uppercase italic M and lowercase bold italic j and t. Unfortunately, no consistent effect was available using Unicode and the wiki's $\LaTeX$ abilities that also satisfactorily captured the compound aspect of what these things represent.
4. Perhaps we could nail this down by re-running this process in recognition of the fact that these matrices are shorthand for an underlying system of equations, and that the derivative of $𝒈$ is, in fact, its gradient, in other words, the vector of partial derivatives with respect to each of its entries (as discussed in more detail in the later section, #Multiple derivatives).
5. If you don't dig it, please consider alternative attempts to explain these ideas here: User:Sintel/Generator_optimization#Constraints, here: Constrained_tuning/Analytical_solution_to_constrained_Euclidean_tunings, and here: Target tuning#Least squares tunings
6. This is a different lambda from the one conventionally used for eigenvalues, or as we call them, scaling factors. This lambda refers to Lagrange, the mathematician who developed this technique.
7. To help develop your intuition for these sorts of problems, we recommend Grant Sanderson's series of videos for Khan Academy's YouTube channel, about Lagrange multipliers for constrained optimizations: https://www.youtube.com/playlist?list=PLCg2-CTYVrQvNGLbd-FN70UxWZSeKP4wV
9. \begin{align} \begin{array} {c} 𝔐 \\ \left[ \begin{array} {c} 𝕞_{11} & 𝕞_{12} \\ 𝕞_{21} & 𝕞_{22} \\ \end{array} \right] \end{array} \begin{array} {c} 𝔐^\mathsf{T} \\ \left[ \begin{array} {c} 𝕞_{11} & 𝕞_{21} \\ 𝕞_{12} & 𝕞_{22} \\ \end{array} \right] \end{array} &= \\[12pt] \begin{array} {c} 𝔐𝔐^\mathsf{T} \\ \left[ \begin{array} {c} 𝕞_{11}^2 + 𝕞_{12}^2 & 𝕞_{11}𝕞_{21} + 𝕞_{12}𝕞_{22} \\ 𝕞_{11}𝕞_{21} + 𝕞_{12}𝕞_{22} & 𝕞_{21}^2 + 𝕞_{22}^2 \\ \end{array} \right] \end{array} &∴ \\[12pt] (𝔐𝔐^\mathsf{T})_{12} = (𝔐𝔐^\mathsf{T})_{21} \end{align}
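A quick numeric sanity check of this symmetry (our own throwaway example, with arbitrary entries):

```python
import numpy as np

# any 2x2 matrix will do for checking that 𝔐𝔐ᵀ is symmetric
M_frak = np.array([[0.3, -1.7],
                   [2.2,  0.5]])
prod = M_frak @ M_frak.T
print(prod[0, 1] == prod[1, 0])  # True: the off-diagonal entries coincide
```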
10. Writes the present author, Douglas Blumeyer, who is relieved to have completely demystified this process for himself, after being daunted by it for over a year and then struggling for a solid week to assemble it from the hints left by the better-educated tuning theorists who came before him.
11. Another reason we wrote up the method for this optimization power is that it was low-hanging fruit: it was already described on the wiki's Target tunings page, where it is presented as a method for finding "minimax" tunings, not miniaverage tunings. This is somewhat misleading, because while this method works for any miniaverage tuning scheme, it only works for minimax tuning schemes under very specific conditions (which that page does meet, and so it's not outright wrong). The conditions are: unity-weight damage (check), and all members of the target-interval set expressible as products of other members (check, due to their choice of target-interval set, closely related to a tonality diamond, plus octaves being constrained to be unchanged). Here's why these two conditions are necessary for the miniaverage method to work for a minimax tuning scheme. When solving for the minimax, you actually want the tunings where the target-intervals' damages equal each other, not where they are zero; these zero-damage tunings will only match the tunings where other intervals' damages equal each other in the case where two intervals' damages being equal implies that another target's damage is zero, because that other target is the product of those first two. And the unity-weight damage requirement ensures that the slopes of each target's hyper-V are all the same; otherwise, the points where two damages are equal like this will no longer line up directly over the point where the third target's damage is zero. For more information on this problem, please see the discussion page for the problematic wiki page, where we are currently requesting that the page be updated accordingly.
12. Note that this technique for converting zero-damage points to generator tunings is much simpler than the technique described on the Target tunings page. The Target tunings page uses eigendecomposition, which unnecessarily requires you to find the commas for the temperament, compute a full projection matrix $P$, and then when you need to spit a generator tuning map $𝒈$ out at the end, requires the computation of a generator detempering to do so (moreover, it doesn't explain or even mention eigendecomposition; it assumes the reader knows how and when to do them, cutting off at the point of listing the eigenvectors — a big thanks to Sintel for unpacking the thought process in that article for us). The technique described here skips the commas, computing the generator embedding $G$ directly rather than via $P = GM$, and then when you need to spit a generator tuning map out at the end, it's just $𝒈 = 𝒋G$, which is much simpler than the generator detempering computation.

The Target tunings approach and this approach are quite similar conceptually. Here's the Target tunings approach:

$\scriptsize \begin{array} {c} P \\ \left[ \begin{array} {rrr} 1 & 1 & 0 \\ 0 & 0 & 0 \\ 0 & \frac14 & 1 \\ \end{array} \right] \end{array} = \\ \scriptsize \begin{array} {c} \mathrm{V} \\ \left[ \begin{array} {r|r|r} \style{background-color:#98CC70;padding:5px}{1} & \style{background-color:#98CC70;padding:5px}{0} & \style{background-color:#F2B2B4;padding:5px}{-4} \\ \style{background-color:#98CC70;padding:5px}{0} & \style{background-color:#98CC70;padding:5px}{0} & \style{background-color:#F2B2B4;padding:5px}{4} \\ \style{background-color:#98CC70;padding:5px}{0} & \style{background-color:#98CC70;padding:5px}{1} & \style{background-color:#F2B2B4;padding:5px}{-1} \\ \end{array} \right] \end{array} \begin{array} {c} \textit{Λ} \\ \left[ \begin{array} {rrr} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \\ \end{array} \right] \end{array} \begin{array} {c} \mathrm{V} \\ \left[ \begin{array} {r|r|r} \style{background-color:#98CC70;padding:5px}{1} & \style{background-color:#98CC70;padding:5px}{0} & \style{background-color:#F2B2B4;padding:5px}{-4} \\ \style{background-color:#98CC70;padding:5px}{0} & \style{background-color:#98CC70;padding:5px}{0} & \style{background-color:#F2B2B4;padding:5px}{4} \\ \style{background-color:#98CC70;padding:5px}{0} & \style{background-color:#98CC70;padding:5px}{1} & \style{background-color:#F2B2B4;padding:5px}{-1} \\ \end{array} \right]^{{\Large-1}} \end{array}$

And the technique demonstrated here looks like this:

$\scriptsize \begin{array} {c} G \\ \left[ \begin{array} {rrr} 1 & 1 \\ 0 & 0 \\ 0 & \frac14 \\ \end{array} \right] \end{array} = \\ \scriptsize \begin{array} {c} \mathrm{U} \\ \left[ \begin{array} {r|r} \style{background-color:#98CC70;padding:5px}{1} & \style{background-color:#98CC70;padding:5px}{0} \\ \style{background-color:#98CC70;padding:5px}{0} & \style{background-color:#98CC70;padding:5px}{0} \\ \style{background-color:#98CC70;padding:5px}{0} & \style{background-color:#98CC70;padding:5px}{1} \\ \end{array} \right] \end{array} \Large ( \scriptsize \begin{array} {c} M \\ \left[ \begin{array} {rrr} 1 & 0 & {-4} \\ 0 & 1 & 4 \\ \end{array} \right] \end{array} \begin{array} {c} \mathrm{U} \\ \left[ \begin{array} {r|r} \style{background-color:#98CC70;padding:5px}{1} & \style{background-color:#98CC70;padding:5px}{0} \\ \style{background-color:#98CC70;padding:5px}{0} & \style{background-color:#98CC70;padding:5px}{0} \\ \style{background-color:#98CC70;padding:5px}{0} & \style{background-color:#98CC70;padding:5px}{1} \\ \end{array} \right] \end{array} \Large )^{-1} \scriptsize$

So in the Target tunings approach, $P$ is the projection matrix, $\mathrm{V}$ is a matrix consisting of a list of unrotated vectors — both ones with scaling factor 1 (unchanged-intervals) and those with scaling factor 0 (commas) — and $\textit{Λ}$ is a diagonal scaling factors matrix, where you can see along the main diagonal we have 1's paired with the unrotated vectors for unchanged-intervals and 0's paired with the unrotated vectors for commas.

In our approach, we instead solve for $G$ by leaving the commas out of the equation, and simply using the mapping $M$ instead.

In addition to being much more straightforward and easier to understand, our technique gives the same results and cuts computation time by about a third (it took the computation of a miniaverage-U tuning for a rank-3, 11-limit temperament with 15 target-intervals from 12 seconds down to 8).
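To see concretely that the two routes agree, here's a NumPy sketch (our illustration, using the example matrices above) confirming that $\mathrm{V}\textit{Λ}\mathrm{V}^{-1}$ equals $GM$ with $G = \mathrm{U}(M\mathrm{U})^{-1}$:

```python
import numpy as np

M = np.array([[1, 0, -4],    # meantone mapping [⟨1 0 -4] ⟨0 1 4]]
              [0, 1,  4]])
U = np.array([[1, 0],        # unchanged-interval vectors [1 0 0⟩ (2/1) and [0 0 1⟩ (5/1)
              [0, 0],
              [0, 1]])
V = np.array([[1, 0, -4],    # U's vectors plus the comma [-4 4 -1⟩ (81/80)
              [0, 0,  4],
              [0, 1, -1]])
Lam = np.diag([1, 1, 0])     # scaling factors: 1 per unchanged-interval, 0 for the comma

# Target tunings approach: projection matrix via the eigendecomposition-style V·Λ·V⁻¹
P_eigen = V @ Lam @ np.linalg.inv(V)

# this article's approach: generator embedding directly; the comma never enters into it
G = U @ np.linalg.inv(M @ U)
P_direct = G @ M

print(np.allclose(P_eigen, P_direct))  # True
```

And $G$ itself comes out as the [[1, 1], [0, 0], [0, ¼]] shown above.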
13. Technically this gives us the tunings of the generators, in ¢/g.
14. The article for the minimax tuning scheme, Target tunings, suggests that you fall back to the miniRMS method to tie-break between these, but that sort of misses the point of the problem. The two tied points are on extreme opposite ends of the slice of good solutions, and the optimum solution lies somewhere in between them. We don't want the tie-break to choose one or the other extreme; we want to find a better solution somewhere in between them.
15. There does not seem to be any consensus about how to identify a true optimum in the case of multiple solutions when $p=1$. See https://en.wikipedia.org/wiki/Least_absolute_deviations#Properties, https://www.researchgate.net/publication/223752233_Dealing_with_the_multiplicity_of_solutions_of_the_l1_and_l_regression_models, and https://stats.stackexchange.com/questions/275931/is-it-possible-to-force-least-absolute-deviations-lad-regression-to-return-the.
16. In fact, the Target tunings page of the wiki uses this more complicated approach in order to realize pure octaves, and so the authors of this page had to reverse engineer from it how to make it work without any held-intervals.
17. Held-intervals should generally be removed if they also appear in the target-interval list $\mathrm{T}$. If these intervals are not removed, the correct tuning can still be computed; however, during optimization, effort will have been wasted on minimizing damage to these intervals, because their damage would have been held to 0 by other means anyway. In general, it should be more computationally efficient to remove these intervals from $\mathrm{T}$ in advance, rather than submit them to the optimization procedures as-is. Duplication of intervals between these two sets will most likely occur when using a target-interval set scheme (such as a TILT or OLD) that automatically chooses the target-interval set.
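For instance, representing intervals as prime-count tuples (a hypothetical representation chosen just for this sketch), the removal is a simple filter:

```python
# hypothetical prime-count-tuple representation of 5-limit intervals
held_intervals = [(1, 0, 0)]                   # the octave, 2/1
T = [(1, 0, 0), (-1, 1, 0), (-2, 0, 1)]        # target list: 2/1, 3/2, 5/4
T = [t for t in T if t not in held_intervals]  # drop targets that are already held
print(T)  # [(-1, 1, 0), (-2, 0, 1)]
```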
18. were everything transposed, anyway; a superficial issue
19. This weight is irrelevant: since these aren't really target-intervals but held-intervals, the damage to them must be 0; we can choose any value for this weight other than 0 and the effect will be the same, so we may as well choose 1.
20. You may be unsurprised to learn that this example of basic tie-breaking success was actually developed from the case that requires advanced tie-breaking, i.e. by adding $\frac65$ to the target-interval set in order to produce the needed point at the true minimax, rather than the other way around (the advanced tie-breaking case being developed from the basic example).
21. Note that we can't rely on which half of the V-shaped graph we're on to tell whether a damage corresponds to a positive or negative error; that depends on various factors relating to $𝒈$, $M$, and the ties.
22. https://yahootuninggroupsultimatebackup.github.io/tuning-math/topicId_20405.html#20412
23. https://yahootuninggroupsultimatebackup.github.io/tuning-math/topicId_20405.html#20412
24. https://yahootuninggroupsultimatebackup.github.io/tuning-math/topicId_21029.html