Temperament addition is the general name for either the temperament sum or the temperament difference, which are two closely related operations on regular temperaments. Basically, to add or subtract temperaments means to match up the entries of temperament vectors and then add or subtract them individually. The result is a new temperament that has similar properties to the original temperaments.

## Introductory examples

For example, in the 5-limit, the sum of 12-ET and 7-ET is 19-ET because ⟨12 19 28] + ⟨7 11 16] = ⟨(12+7) (19+11) (28+16)] = ⟨19 30 44], and the difference of 12-ET and 7-ET is 5-ET because ⟨12 19 28] - ⟨7 11 16] = ⟨(12-7) (19-11) (28-16)] = ⟨5 8 12].

$\left[ \begin{array} {rrr} 12 & 19 & 28 \\ \end{array} \right] + \left[ \begin{array} {rrr} 7 & 11 & 16 \\ \end{array} \right] = \left[ \begin{array} {rrr} (12+7) & (19+11) & (28+16) \\ \end{array} \right] = \left[ \begin{array} {rrr} 19 & 30 & 44 \\ \end{array} \right]$

$\left[ \begin{array} {rrr} 12 & 19 & 28 \\ \end{array} \right] - \left[ \begin{array} {rrr} 7 & 11 & 16 \\ \end{array} \right] = \left[ \begin{array} {rrr} (12-7) & (19-11) & (28-16) \\ \end{array} \right] = \left[ \begin{array} {rrr} 5 & 8 & 12 \\ \end{array} \right]$

We can write these using wart notation as 12p + 7p = 19p and 12p - 7p = 5p, respectively. The similarity in these temperaments can be seen in how, like both 12-ET and 7-ET, 19-ET (their sum) and 5-ET (their difference) both also support meantone temperament.
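These ET computations are simple enough to sketch in code. Here is a minimal illustration (the function names are my own, not from any established RTT library), assuming maps are represented as plain lists of integers:

```python
# Entry-wise addition and subtraction of temperament vectors (here, 5-limit maps).
def temperament_sum(a, b):
    return [x + y for x, y in zip(a, b)]

def temperament_difference(a, b):
    return [x - y for x, y in zip(a, b)]

map_12p = [12, 19, 28]  # ⟨12 19 28], the 5-limit patent val for 12-ET
map_7p = [7, 11, 16]    # ⟨7 11 16], the 5-limit patent val for 7-ET

print(temperament_sum(map_12p, map_7p))         # the map for 19-ET
print(temperament_difference(map_12p, map_7p))  # the map for 5-ET
```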

Temperament sums and differences can also be found using commas; for example, meantone + porcupine = tetracot because [4 -4 1⟩ + [1 -5 3⟩ = [(4+1) (-4+-5) (1+3)⟩ = [5 -9 4⟩, and meantone - porcupine = dicot because [4 -4 1⟩ - [1 -5 3⟩ = [(4-1) (-4--5) (1-3)⟩ = [3 1 -2⟩.

$\left[ \begin{array} {rrr} 4 \\ -4 \\ 1 \\ \end{array} \right] + \left[ \begin{array} {rrr} 1 \\ -5 \\ 3 \\ \end{array} \right] = \left[ \begin{array} {rrr} (4+1) \\ (-4+-5) \\ (1+3) \\ \end{array} \right] = \left[ \begin{array} {rrr} 5 \\ -9 \\ 4 \\ \end{array} \right]$

$\left[ \begin{array} {rrr} 4 \\ -4 \\ 1 \\ \end{array} \right] - \left[ \begin{array} {rrr} 1 \\ -5 \\ 3 \\ \end{array} \right] = \left[ \begin{array} {rrr} (4-1) \\ (-4--5) \\ (1-3) \\ \end{array} \right] = \left[ \begin{array} {rrr} 3 \\ 1 \\ -2 \\ \end{array} \right]$

We could write this in quotient form — replacing addition with multiplication and subtraction with division — as 80/81 × 250/243 = 20000/19683 and 80/81 ÷ 250/243 = 24/25, respectively. The similarity in these temperaments can be seen in how all of them are supported by 7-ET. (Note that these examples are all given in canonical form, which is why we're seeing the meantone comma as 80/81 instead of the more common 81/80; for the reason why, see Temperament addition#Negation.)
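We can verify the quotient-form arithmetic with exact rational numbers. A quick sketch using Python's `fractions` module (my own illustration, not part of the article):

```python
from fractions import Fraction

meantone = Fraction(80, 81)     # canonical form of the meantone comma
porcupine = Fraction(250, 243)  # the porcupine comma

# Addition of prime-count vectors corresponds to multiplication of quotients,
# and subtraction corresponds to division.
print(meantone * porcupine)  # the tetracot comma
print(meantone / porcupine)  # the dicot comma (in the orientation the subtraction produces)
```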

Temperament addition is simplest for temperaments which can be represented by single vectors, as demonstrated in these examples. In other words, it is simplest for temperaments that are either rank-1 (equal temperaments, or ETs for short) or nullity-1 (having only a single comma). Because grade $g$ is the generic term for rank $r$ and nullity $n$, we can define the minimum grade $g_{\text{min}}$ of a temperament as the minimum of its rank and nullity, $\min(r,n)$, and so for convenience in this article we will refer to $r=1$ (read "rank-1") or $n=1$ (read "nullity-1") temperaments as $g_{\text{min}}=1$ (read "min-grade-1") temperaments. We'll also use $g_{\text{max}}$ (read "max-grade"), which naturally is equal to $\max(r,n)$.

For $g_{\text{min}}\gt 1$ temperaments, temperament addition gets a little trickier. This is discussed in the beyond $g_{\text{min}}=1$ section later.
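In code terms, these grade quantities follow directly from the rank-nullity theorem ($n = d - r$). A small sketch, with hypothetical names:

```python
def grades(d, r):
    """Given dimensionality d and rank r, return (g_min, g_max) per rank-nullity."""
    n = d - r  # nullity
    return min(r, n), max(r, n)

# 5-limit meantone: d=3, r=2, so n=1 -- a min-grade-1 temperament.
print(grades(3, 2))
```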

## Applications

The temperament that results from summing or differencing two temperaments, as stated above, has properties similar to those of the original two temperaments.

Take the case of meantone + porcupine = tetracot from the previous section. What this relationship means is that tetracot is the temperament which makes neither the meantone comma itself vanish, nor the porcupine comma itself, but instead makes whatever comma relates pitches that are exactly one meantone comma plus one porcupine comma apart vanish. And that's the tetracot comma! On the other hand, the temperament difference, dicot, is the temperament that makes neither meantone nor porcupine vanish, but instead the comma that's the size of the difference between them. And that's the dicot comma. So tetracot makes 80/81 × 250/243 vanish, and dicot makes 80/81 × 243/250 vanish.

Similar reasoning is possible for the mapping-rows of mappings — the analogs of the commas of comma bases — but it is less intuitive to describe. What's reasonably easy to understand, though, is that temperament addition on maps is essentially navigation of the scale tree for the rank-2 temperament they share; for more information on this, see Dave Keenan & Douglas Blumeyer's guide to RTT: exploring temperaments#Scale trees. So if you understand the effects on individual maps, then you can apply those to changes of maps within a more complex temperament.

Ultimately, these two effects are the primary applications of temperament addition.[1]

## A note on variance

For simplicity, this article will use the word "vector" in its general sense, that is, variance-agnostic. This means it includes either contravariant vectors (plain "vectors", such as prime-count vectors) or covariant vectors ("covectors", such as maps). However, the reader should assume that only one of the two types is being used at a given time, since the two variances do not mix. For more information, see Linear_dependence#Variance. The same variance-agnosticism holds for multivectors in this article as well.

*A and B are vectors representing temperaments (they could be maps or commas). A∧B is their wedge product, which gives a higher-grade temperament merging both A and B. A+B and A-B give the sum and difference, respectively.*

### Versus the wedge product

If the wedge product of two vectors represents the directed area of a parallelogram constructed with the vectors as its sides, then the temperament sum and difference are the vectors along the two diagonals of this parallelogram.

### Tuning and tone space

*A visualization of temperament addition on projective tuning space.*

One way we can visualize temperament addition is on projective tuning space.

This shows both the sum and the difference of porcupine and meantone. All four temperaments — the two input temperaments, porcupine and meantone, as well as their sum, tetracot, and their difference, dicot — can be seen to intersect at 7-ET. This is because all four temperaments' mappings can be expressed with the map for 7-ET as one of their mapping-rows.

These are all $r=2$ temperaments, so their mappings each have one other row besides the one reserved for 7-ET. Any line that we draw across these four temperament lines will strike four ETs whose maps have a sum and difference relationship. On this diagram, two such lines have been drawn. The first one runs through 5-ET, 20-ET, 15-ET, and 10-ET. We can see that 5 + 15 = 20, which corresponds to the fact that 20-ET is the ET on the line for tetracot, which is the sum of porcupine and meantone, while 5-ET and 15-ET are the ETs on their lines. Similarly, we can see that 15 - 5 = 10, which corresponds to the fact that 10-ET is the ET on the line for dicot, which is the difference of porcupine and meantone.

*A visualization of temperament addition on projective tone space.*

The other line runs through the ETs 12, 41, 29, and 17, and we can see again that 12 + 29 = 41 and 29 - 12 = 17.

We can also visualize temperament addition on projective tone space. Here relationships are inverted: points are lines, and lines are points. So all four temperaments are found along the line for 7-ET.

Note that when viewed in tuning space, the sum is found between the two input temperaments, while the difference is found outside of them, to one side or the other. In tone space it's the reverse: the difference is found between the two input temperaments, and it's the sum that's found outside. In either situation, when a temperament is on the outside and may be on one side or the other, the explanation can be inferred from the behavior of the scale tree on any temperament line: if, say, 5-ET and 7-ET support an $r=2$ temperament, then so will 5 + 7 = 12-ET, and then so will 5 + 12 and 7 + 12 in turn, and so on recursively. When you navigate like this, what we could call down the scale tree, children are always found between their parents. But when you try to go back up the scale tree, to one or the other parent, you may not immediately know on which side of the child to look.

### The temperaments have the same dimensions

Temperament addition is only possible for temperaments with the same dimensions, that is, the same rank and dimensionality (and therefore, by the rank-nullity theorem, also the same nullity). The reason for this is visually obvious: without the same $d$, $r$, and $n$ (dimensionality, rank, and nullity, respectively), the numeric representations of the temperament — such as matrices and multivectors — will not have the same proportions, and therefore their entries will be unable to be matched up one-to-one. From this condition it also follows that the result of temperament addition will be a new temperament with the same $d$, $r$, and $n$ as the input temperaments.
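This compatibility condition is easy to express as a guard in code. A sketch (the function name is my own):

```python
def check_matching_dimensions(m1, m2):
    """Raise unless two temperament matrices have the same proportions,
    i.e. the same number of vectors (grade) and entries per vector (d)."""
    if len(m1) != len(m2) or len(m1[0]) != len(m2[0]):
        raise ValueError("temperament addition requires matching dimensions")
    return True

# Two 5-limit (d=3) rank-1 mappings match, so addition may proceed:
print(check_matching_dimensions([[12, 19, 28]], [[7, 11, 16]]))
```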

### The temperaments share the same domain basis

If you're unfamiliar with domain bases, then you can probably safely assume your temperaments are in the same subspace, because they should be in the default, standard, prime-limit interval subspace. If they're not, change them to be on the same interval subspace if you can, and then come back to temperament addition.

In the first row, we see the sum of two vectors. In the second row, we see how a pair of temperaments each defined by 2 basis vectors may be added as long as the other basis vectors match. In the third row we see a continued development of this idea, where a pair of temperaments each defined by 3 basis vectors is able to be added by virtue of all other basis vectors being the same.

Matching the dimensions is only the first of two conditions on the possibility of temperament addition. The second condition is that the temperaments must all be addable. This condition is trickier, and so a detailed discussion of it will be deferred to a later section (here: Temperament addition#Addability). But let us at least say here what it essentially means. The basis vectors representing the summed or differenced temperaments must all match, except for one non-matching vector in each. Said another way, any number of matching vectors is allowed in the basis alongside, but ultimately we're only ever able to add (mono)vectors — the single non-matching vectors from each temperament.

We can gain some intuition about this addability condition by thinking about these non-matching vectors — the ones that are changing — as if they were themselves a basis for a temperament, and then recalling what we know about bases: that when a basis consists of two or more vectors, then an infinitude of other bases for the same subspace exist (such as how there are multiple forms for a rank-2 temperament mapping); whereas when a basis consists of only a single vector, then there is only one possible basis. Finally, we must recognize that entry-wise matrix addition is an operation defined on matrices, not bases; and so entry-wise matrix addition can give different results when done to different bases for the same subspace. The only way for temperament addition to work reliably, therefore, is to only do it on matrices where the basis for what is changing has only a single possible representation, and that is only the case when only one basis vector is changing.

Any set of $g_{\text{min}}=1$ temperaments are addable[2], because the side of duality where $g=1$ will satisfy this condition, so we don't need to worry in detail about it in that case. Or in other words, $g_{\text{min}}=1$ temperaments can be represented by monovectors, and we have no problem entry-wise adding those.

## Versus temperament merging

Like temperament merging, temperament addition takes temperaments as inputs and finds a new temperament sharing properties of the inputs. And they both can be understood as, in some sense, adding these input temperaments together.

But there is a big difference between temperament addition and merging. Temperament addition is done using entry-wise addition (or subtraction), whereas merging is done using concatenation. So the temperament sum of mappings with two rows each is a new mapping that still has exactly two rows, while on the other hand, the merging of mappings with two rows each is a new mapping that has a total of four rows[3].

### The linear dependence connection

Another connection between temperament addition and merging is that they may involve checks for linear dependence.

Temperament addition, as stated earlier, always requires addability, which is a more complex property involving linear dependence.

Merging does not necessarily involve linear dependence. Linear dependence only matters for merging when you attempt to do it using exterior algebra, that is, by using the wedge product, rather than the linear algebra approach, which is just to concatenate the vectors as a matrix and canonicalize. For more information on this, see Douglas Blumeyer and Dave Keenan's Intro to exterior algebra for RTT#The linearly dependent exception to the wedge product.

## $g_{\text{min}}=1$

As stated above, temperament addition is simplest for temperaments which can be represented by single vectors, or in other words, temperaments that are $g_{\text{min}}=1$, and for other temperaments, the computation gets a little trickier. Here we'll look at how to handle the simple case of $g_{\text{min}}=1$.

As shown in the introductory examples, $g_{\text{min}}=1$ examples are as easy as entry-wise addition or subtraction. But there are just a couple of tricks to it.

### Getting to the side of duality with $g_{\text{min}}=1$

We may be looking at a temperament representation which itself does not consist of a single vector, but whose dual does. For example, the meantone mapping [⟨1 0 -4] ⟨0 1 4]⟩ and the porcupine mapping [⟨1 2 3] ⟨0 3 5]⟩ each consist of two vectors. So these representations require additional labor to compute. But their duals are easy! If we simply find a comma basis for each of these mappings, we get [[4 -4 1⟩] and [[1 -5 3⟩]. In this form, the temperaments can be entry-wise added, to [[5 -9 4⟩], as we saw earlier. And if in the end we're still after a mapping, since we started with mappings, we can take the dual of this comma basis to find the mapping [⟨1 1 1] ⟨0 4 9]⟩.
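For a rank-2 temperament in three dimensions, like these 5-limit examples, the comma of a mapping can be found with a simple cross product, since the nullspace of two linearly independent 3-entry maps is spanned by their cross product. A sketch under that 5-limit assumption (my own illustration):

```python
def cross(u, v):
    """Nullspace vector of a 2x3 mapping: the cross product of its two rows."""
    return [u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0]]

meantone_comma = cross([1, 0, -4], [0, 1, 4])   # from the meantone mapping
porcupine_comma = cross([1, 2, 3], [0, 3, 5])   # from the porcupine mapping
tetracot_comma = [a + b for a, b in zip(meantone_comma, porcupine_comma)]
print(meantone_comma, porcupine_comma, tetracot_comma)

# Sanity check: the tetracot mapping given in the text annihilates the summed comma.
for row in ([1, 1, 1], [0, 4, 9]):
    assert sum(m * c for m, c in zip(row, tetracot_comma)) == 0
```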

### Negation

*Equivalences of temperament addition depending on negativity.*

There's just one other trick to it, and that's that we have to be mindful of negation.

The temperament difference can be understood as being the same operation as the temperament sum except with one of the two temperaments negated.

For single vectors (and multivectors), negation is as simple as changing the sign of every entry.

Suppose you have a matrix representing temperament $𝓣_1$ and another matrix representing $𝓣_2$. If you want to find both their sum and difference, you can calculate both $𝓣_1 + 𝓣_2$ and $𝓣_1 + -𝓣_2$. There's no need to also find $-𝓣_1 + 𝓣_2$; this will merely give the negation of $𝓣_1 + -𝓣_2$. The same goes for $-𝓣_1 + -𝓣_2$, which is the negation of $𝓣_1 + 𝓣_2$.

But a question remains: which result between $𝓣_1 + 𝓣_2$ and $𝓣_1 + -𝓣_2$ is actually the sum and which is the difference? This seems like an obvious question to answer, except for one key problem: how can we be certain that $𝓣_1$ or $𝓣_2$ wasn't already in negated form to begin with? We need to establish a way to check for matrix negativity.

The check is that the vectors must be in canonical form. For a contravariant vector, such as the kind that represents commas, canonical form means that the trailing entry (the final non-zero entry) must be positive. For a covariant vector, such as the kind that represents mapping-rows, canonical form means that the leading entry (the first non-zero entry) must be positive.

Sometimes the canonical form of a vector is not the most popular form. For instance, the meantone comma is usually expressed in positive form, that is, with its numerator greater than its denominator, so that its cents value is positive, or in other words, it's the meantone comma upwards in pitch, not downwards. But the prime-count vector for that form, 81/80, is [-4 4 -1⟩, and as we can see, its trailing entry -1 is negative. So the canonical form of meantone is actually [4 -4 1⟩.
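This negativity check and fix for single vectors can be sketched as follows (the function name is mine; `covariant=True` means a map, whose leading nonzero entry must be positive, while the default handles comma-style vectors, whose trailing nonzero entry must be positive):

```python
def canonical_vector(v, covariant=False):
    """Return v, negated if needed, so that its decisive nonzero entry is positive:
    the leading entry for maps (covariant), the trailing entry for commas."""
    nonzero = [x for x in v if x != 0]
    if not nonzero:
        return list(v)
    decisive = nonzero[0] if covariant else nonzero[-1]
    return list(v) if decisive > 0 else [-x for x in v]

# The popular form of the meantone comma, 81/80, flips to its canonical form:
print(canonical_vector([-4, 4, -1]))
```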

## $g_{\text{min}}\gt 1$

As stated above, temperament addition is simplest for temperaments which can be represented by single vectors, or in other words, temperaments that are $g_{\text{min}}=1$, and for other temperaments, the computation gets a little trickier. Here we'll look at how to handle the trickier cases of $g_{\text{min}}\gt 1$.

Throughout this section, we will be using a green color on linearly dependent objects and values, and a red color on linearly independent objects and values, to help differentiate between the two.

In order to understand how to do temperament addition on $g_{\text{min}}\gt 1$ temperaments, we must first understand addability.

In order to understand addability, we must work up to it, understanding these concepts in this order:

1. linear dependence
2. linear dependence between temperaments
3. linear independence between temperaments
4. linear independence between temperaments by only one basis vector (that's addability)

#### 1. Linear dependence

This is explained here: linear dependence.

#### 2. Linear dependence between temperaments

Linear dependence has been defined for the matrices and multivectors that represent temperaments, but it can also be defined for temperaments themselves. The conditions of temperament addition motivate a definition of linear dependence for temperaments whereby temperaments are considered linearly dependent if either of their mappings or their comma bases are linearly dependent[4].

For example, 5-limit 5-ET and 5-limit 7-ET, represented by the mappings [⟨5 8 12]⟩ and [⟨7 11 16]⟩, may at first seem to be linearly independent, because the basis vectors visible in their mappings are clearly linearly independent (when comparing two vectors, the only way they could be linearly dependent is if they are multiples of each other, as discussed here). And indeed their mappings are linearly independent. But these two temperaments are linearly dependent, because if we consider their corresponding comma bases, we will find that they share the basis vector of the meantone comma, [4 -4 1⟩.

To make this point visually, we could say that two temperaments are linearly dependent if they intersect in one or the other of tone space and tuning space. So you have to check both views.[5]

#### 3. Linear independence between temperaments

Linear dependence may be considered as a boolean (yes/no, linearly dependent/independent) or it may be considered as an integer count of linearly dependent basis vectors. In other words, it is the dimension of the linear-dependence basis $\dim(L_{\text{dep}})$. To refer to this count, we may hyphenate it as linear-dependence, and use the variable $l_{\text{dep}}$. For example, 5-ET and 7-ET, per the example in the previous section, are $l_{\text{dep}}=1$ (read "linear-dependence-1") temperaments.

It does not make sense to speak of linear dependence in this integer-count sense between temperaments, however. Here's an example that illustrates why. Consider two different $d=5$, $r=2$ temperaments. Both their mappings and comma bases are linearly dependent, but their mappings have $l_{\text{dep}}=1$, while their comma bases have $l_{\text{dep}}=2$. So what could the $l_{\text{dep}}$ of this pair of temperaments possibly be? We could define "min-linear-dependence" and "max-linear-dependence", as we define "min-grade" and "max-grade", but these do not turn out to be helpful.

On the other hand, it does make sense to speak of the linear-independence of the temperament as an integer count. This is because the count of linearly independent basis vectors of two temperaments' mappings and the count of linearly independent basis vectors of their comma bases will always be the same. So the temperament linear-independence is simply this number. In the $d=5$, $r=2$ example from the previous paragraph, these would be $l_{\text{ind}}=1$ (read "linear-independence-1") temperaments.

A proof of this conjecture is given here: Temperament addition#Sintel's proof of the linear-independence conjecture.

#### 4. Linear independence between temperaments by only one basis vector (i.e. addability)

Two temperaments are addable if they are $l_{\text{ind}}=1$. In other words, both their mappings and their comma bases share all but one basis vector.

And this is why $g_{\text{min}}=1$ temperaments are all addable: if $g_{\text{min}}=1$ and the temperaments are different from each other, then $l_{\text{ind}}$ is at least 1; and since $l_{\text{ind}}$ can't be greater than $g_{\text{min}}$, it must be exactly 1, and therefore the temperaments are addable.
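One way to sketch the addability check is to compute $l_{\text{ind}}$ as the rank of the two stacked matrices minus the grade of one of them, using exact rational elimination (my own illustration; per the conjecture above, computing this on either side of duality gives the same count):

```python
from fractions import Fraction

def rank(mat):
    """Matrix rank via Gaussian elimination over the rationals."""
    m = [[Fraction(x) for x in row] for row in mat]
    r = 0
    for c in range(len(m[0])):
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                f = m[i][c] / m[r][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def linear_independence(m1, m2):
    """l_ind between two same-grade temperament matrices (lists of rows)."""
    return rank(m1 + m2) - rank(m1)

# 5-ET and 7-ET: l_ind = 1, so they are addable.
print(linear_independence([[5, 8, 12]], [[7, 11, 16]]))
```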

### Multivector approach

The simplest approach to $g_{\text{min}}\gt 1$ temperament addition is to use multivectors. This is discussed in more detail here: Douglas Blumeyer and Dave Keenan's Intro to exterior algebra for RTT#Temperament addition.

### Matrix approach

Temperament addition for $g_{\text{min}}\gt 1$ temperaments (again, that's with both $r\gt 1$ and $n\gt 1$) can also be done using matrices, and it works in essentially the same way — entry-wise addition or subtraction — but there are some complications that make it significantly more involved than it is with multivectors. There's essentially five steps:

1. Find the linear-dependence basis $L_{\text{dep}}$
2. Put the matrices in a form with the $L_{\text{dep}}$
3. Check for enfactoring, and do an addabilization defactor (if necessary)
4. Check for negation, and change negation (if necessary)
5. Entry-wise add (or subtract) the linearly independent vectors, reintroduce the $L_{\text{dep}}$, and canonicalize

#### The steps

##### 1. Find the $L_{\text{dep}}$

For matrices, it is necessary to make explicit the basis for the linearly dependent vectors shared between the involved matrices before adding. In other words, any vectors that can be found through linear combinations of any of the involved matrices' basis vectors must appear explicitly and in the same position of each matrix before the sum or difference is taken. These vectors are called the linear-dependence basis, or $L_{\text{dep}}$.

Before this can be done, of course, we need to actually find the $L_{\text{dep}}$. This can be done using the technique described here: Linear dependence#For a given set of basis matrices, how to compute a basis for their linearly dependent vectors

##### 2. Put the matrices in a form with the $L_{\text{dep}}$

The $L_{\text{dep}}$ will always have one less vector than the original matrix, by the definition of addability as $l_{\text{ind}}=1$. And the $L_{\text{dep}}$ alone is not a full recreation of the original temperament; it needs that one extra vector to get back to representing it.

So, as a next step, we need to pad out the $L_{\text{dep}}$ by drawing vectors from the original matrices, starting from their first vectors. But if a drawn vector happens to be linearly dependent on the $L_{\text{dep}}$, it won't help us recover a representation of the original matrix; instead we'd produce a rank-deficient matrix that no longer represents the same temperament we started with. In that case we just skip it and keep drawing vectors until the matrix reaches its original grade.

##### 3. Addabilization defactor

But it is not quite as simple as determining the $L_{\text{dep}}$ and then supplying the remaining vectors necessary to match the grade of the original matrix, because the results may then be enfactored. And defactoring them without compromising the explicit $L_{\text{dep}}$ cannot be done using existing defactoring algorithms; it's a tricky, or at least computationally intensive, process. This is called addabilization defactoring.

Most established defactoring algorithms will alter any or all of the entries of a matrix. This is not an option if we still want to be able to add temperaments, however, because these matrices must retain their explicit $L_{\text{dep}}$. And we can't defactor and then paste the $L_{\text{dep}}$ back over the first vector or something, because then we might just be enfactored again. We need to find a defactoring algorithm that manages to work without altering any of the vectors in the $L_{\text{dep}}$.

The first step to addabilization defactoring is inspired by Pernet-Stein defactoring: we find the value of the enfactoring factor (the "greatest factor") by following that algorithm until the point where we have a square transformation matrix; but instead of inverting it and multiplying by it to remove the factor, we simply take this square matrix's determinant, which is the factor we were about to remove. If that determinant is 1, then we're already defactored; if not, then we need to do some additional steps.

It turns out that you can always[6] isolate the greatest factor in the single final vector of the matrix — the linearly independent vector — through linear combinations of the vectors in the $L_{\text{dep}}$.

The example that will be worked through in this section below is as simple as it can get: the $L_{\text{dep}}$ consists of only a single vector, so we simply add some number of this single linearly dependent vector to the linearly independent vector. However, if there are multiple vectors in the $L_{\text{dep}}$, the linear combination which surfaces the greatest factor may involve just one or potentially all of those vectors, and the best approach to finding this combination is simply an automatic solver. An example of this approach is demonstrated in the RTT library in Wolfram Language, here: https://github.com/cmloegcmluin/RTT/blob/main/main.m#L477

Another complication is that the greatest factor may be very large, or be a highly composite number. In this case, searching for the linear combination that isolates the greatest factor in its entirety directly may be intractable; it is better to eliminate it piecemeal, i.e., whenever the solver finds a factor of the greatest factor, eliminate it, and repeat until the greatest factor is fully eliminated. The RTT library code linked to above works in this way.
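For the single-vector $L_{\text{dep}}$ case, the whole procedure can be sketched briefly: read off the greatest factor (here taken as the GCD of the matrix's largest-minors, which agrees with the determinant method for a 2-row matrix), then search for the multiple of the linearly dependent vector that surfaces it. This is my own illustration, not the linked Wolfram implementation, using the numbers from the worked example later in this article:

```python
from math import gcd
from functools import reduce

def largest_minors_2row(m):
    """All 2x2 minors of a 2-row matrix, in column-pair order."""
    a, b = m
    n = len(a)
    return [a[i]*b[j] - a[j]*b[i] for i in range(n) for j in range(i + 1, n)]

def addabilization_defactor(v_dep, v_ind, search_limit=1000):
    """Add multiples of the linearly dependent vector to the linearly
    independent one until the greatest factor surfaces, then divide it out."""
    target = abs(reduce(gcd, largest_minors_2row([v_dep, v_ind])))
    for k in range(search_limit):
        candidate = [x + k * y for x, y in zip(v_ind, v_dep)]
        if abs(reduce(gcd, candidate)) == target:
            return [x // target for x in candidate]
    raise ValueError("no combination found within search limit")

# Septimal meantone, padded with its linear-dependence basis ⟨19 30 44 53]:
print(addabilization_defactor([19, 30, 44, 53], [1, 0, -4, -13]))
```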

##### 4. Negation

Temperament negation is more complex with matrices, both in terms of checking for it, as well as changing it.

For matrices, negation is accomplished by choosing a single vector and changing the sign of every entry in it. In the case of comma bases, a vector is a column, whereas in a mapping a vector (technically a row vector, or covector) is a row.

For matrices, the check for negation is related to canonicalization of multivectors as are used in exterior algebra for RTT. Essentially we take the largest possible minor determinants of the matrix (or "largest-minors" for short), and then look at their leading or trailing entry (leading in the case of a covariant matrix, like a mapping; trailing in the case of a contravariant matrix, like a comma basis): if this entry is positive, so is the temperament, and vice versa.
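For the 2-row (rank-2) case, that check can be sketched by computing the 2×2 largest-minors and inspecting the decisive one (the helper names are mine; `covariant=True` means a mapping, so we inspect the leading nonzero largest-minor):

```python
def largest_minors(m):
    """2x2 minors of a 2-row integer matrix, ordered by column pairs."""
    a, b = m
    n = len(a)
    return [a[i]*b[j] - a[j]*b[i] for i in range(n) for j in range(i + 1, n)]

def is_negative(m, covariant=True):
    """A matrix is negative if its decisive largest-minor is negative:
    the leading nonzero one for mappings, the trailing one for comma bases."""
    nonzero = [x for x in largest_minors(m) if x != 0]
    decisive = nonzero[0] if covariant else nonzero[-1]
    return decisive < 0

# The canonical meantone mapping [⟨1 0 -4] ⟨0 1 4]⟩ is positive:
print(is_negative([[1, 0, -4], [0, 1, 4]]))
```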

##### 5. Entry-wise addition

The entry-wise addition of elements works mostly the same as for vectors. But there's one catch: we only do it for the pair of linearly independent vectors. We set the $L_{\text{dep}}$ aside, and reintroduce it at the end.

When taking the sum, this is just for simplicity's sake. There's no sense in adding the two copies of the $L_{\text{dep}}$ together, as we'll just get the same vector but 2-enfactored. So we may as well set it aside, and deal only with the linearly independent vectors, and put it back at the end.

When taking the difference, though, it's essential that we set the $L_{\text{dep}}$ aside before entry-wise addition, because if we were to subtract it from itself, we'd end up with all zeros. Unlike the case of the sum, where we'd just end up with an enfactored version of the starting vectors, we couldn't even defactor to get back to where we started, because we'd have completely wiped out the relevant information by sending it all to zeros.

As a final step, as is always good to do when concluding temperament operations, put the result in canonical form.

#### Example

For our example, let's look at septimal meantone plus flattone. The canonical forms of these temperaments are [⟨1 0 -4 -13] ⟨0 1 4 10]⟩ and [⟨1 0 -4 17] ⟨0 1 4 -9]⟩.

0. Counterexample. Before we try following the detailed instructions just described above, let's do a counterexample, to illustrate why we have to follow them at all. Simple entry-wise addition of these two mapping matrices gives [⟨2 0 -8 4] ⟨0 2 8 1]⟩, which is not the correct answer:

$\left[ \begin{array} {rrr} 1 & 0 & -4 & -13 \\ 0 & 1 & 4 & 10 \\ \end{array} \right] + \left[ \begin{array} {rrr} 1 & 0 & -4 & 17 \\ 0 & 1 & 4 & -9 \\ \end{array} \right] = \left[ \begin{array} {rrr} 2 & 0 & -8 & 4 \\ 0 & 2 & 8 & 1 \\ \end{array} \right]$

And it's wrong not only because it is clearly enfactored (by at least a factor of 2, as can be seen in the first vector). The full explanation of why this is the wrong answer is beyond the scope of this example (the nature of correctness here is discussed in the section Temperament addition#Addition on non-addable temperaments). However, if we now follow through with the instructions described above, we can find the correct answer.

1. Find the linear-dependence basis. We know where to start: first find the $L_{\text{dep}}$ and put each of these two mappings into a form that includes it explicitly. In this case, their $L_{\text{dep}}$ consists of a single vector: ⟨19 30 44 53].

2. Reproduce the original temperament. The original matrices had two vectors, so as our second step, we pad out these matrices by drawing from vectors from the original matrices, starting from their first vectors, so now we have [⟨19 30 44 53] ⟨1 0 -4 -13]⟩ and [⟨19 30 44 53] ⟨1 0 -4 17]⟩. We could choose any vectors from the original matrices, as long as they are linearly independent from the ones we already have; if one is not, skip it and move on. In this case, the first vectors are both fine.

$\left[ \begin{array} {rrr} \color{OliveGreen}19 & \color{OliveGreen}30 & \color{OliveGreen}44 & \color{OliveGreen}53 \\ 1 & 0 & -4 & -13 \\ \end{array} \right] + \left[ \begin{array} {rrr} \color{OliveGreen}19 & \color{OliveGreen}30 & \color{OliveGreen}44 & \color{OliveGreen}53 \\ 1 & 0 & -4 & 17 \\ \end{array} \right]$

3. Defactor. Next, verify that both matrices are defactored. In this case, both matrices are enfactored, each by a factor of 30[7]. So we'll use addabilization defactoring. Since there's only a single vector in the $L_{\text{dep}}$, all we need to do is repeatedly add that one linearly dependent vector to the linearly independent vector until we find a vector with the target GCD, which we can then simply divide out to defactor the matrix. Specifically, we add 11 times the linearly dependent vector. For the first matrix, ⟨1 0 -4 -13] + 11⋅⟨19 30 44 53] = ⟨210 330 480 570], whose entries have a GCD of 30, so we can defactor the matrix by dividing that vector by 30, leaving us with ⟨7 11 16 19]. Therefore the final matrix here is [⟨19 30 44 53] ⟨7 11 16 19]⟩. The other matrix happens to defactor in the same way: ⟨1 0 -4 17] + 11⋅⟨19 30 44 53] = ⟨210 330 480 600], whose GCD is also 30, reducing to ⟨7 11 16 20], so the final matrix is [⟨19 30 44 53] ⟨7 11 16 20]⟩.

4. Check negativity. The next thing we need to do is check the negativity of these two temperaments. If either of the matrices we're adding is negative, then we'll have to change it (if both are negative, the negations cancel out, and we're fine as-is). We check negativity by using the largest-minors of these matrices. The first matrix's largest-minors are (-1, -4, -10, -4, -13, -12) and the second matrix's largest-minors are (-1, -4, 9, -4, 17, 32). What we're looking for here are their leading entries, because these are largest-minors of a mapping (if we were looking at largest-minors of comma bases, we'd be looking at the trailing entries instead). Specifically, we're looking to see if the leading entries are positive. They're not, which tells us that these matrices are both negative! But again, since they're both negative, the effect cancels out; there's no need to change anything (but if we wanted to, we could just take the linearly independent vector of each matrix and negate every entry in it).

$\left[ \begin{array} {rrr} \color{OliveGreen}19 & \color{OliveGreen}30 & \color{OliveGreen}44 & \color{OliveGreen}53 \\ \color{BrickRed}7 & \color{BrickRed}11 & \color{BrickRed}16 & \color{BrickRed}19 \\ \end{array} \right] + \left[ \begin{array} {rrr} \color{OliveGreen}19 & \color{OliveGreen}30 & \color{OliveGreen}44 & \color{OliveGreen}53 \\ \color{BrickRed}7 & \color{BrickRed}11 & \color{BrickRed}16 & \color{BrickRed}20 \\ \end{array} \right]$

We set the $L_{\text{dep}}$ aside, and deal only with the linearly independent vectors:

$\left[ \begin{array} {rrr} \color{BrickRed}7 & \color{BrickRed}11 & \color{BrickRed}16 & \color{BrickRed}19 \\ \end{array} \right] + \left[ \begin{array} {rrr} \color{BrickRed}7 & \color{BrickRed}11 & \color{BrickRed}16 & \color{BrickRed}20 \\ \end{array} \right] = \left[ \begin{array} {rrr} \color{BrickRed}14 & \color{BrickRed}22 & \color{BrickRed}32 & \color{BrickRed}39 \\ \end{array} \right]$

Then we can reintroduce the $L_{\text{dep}}$ afterwards:

$\left[ \begin{array} {rrr} \color{OliveGreen}19 & \color{OliveGreen}30 & \color{OliveGreen}44 & \color{OliveGreen}53 \\ \color{BrickRed}14 & \color{BrickRed}22 & \color{BrickRed}32 & \color{BrickRed}39 \\ \end{array} \right]$

And finally canonicalize:

$\left[ \begin{array} {rrr} 1 & 0 & -4 & 2 \\ 0 & 2 & 8 & 1 \\ \end{array} \right]$

So we can now see that meantone plus flattone is godzilla.

Since we've already done all this work to set these matrices up to find their sum, let's find their difference as well. Again, set aside the $L_{\text{dep}}$, and just entry-wise subtract the two linearly independent vectors:

$\left[ \begin{array} {rrr} \color{BrickRed}7 & \color{BrickRed}11 & \color{BrickRed}16 & \color{BrickRed}19 \\ \end{array} \right]$ - $\left[ \begin{array} {rrr} \color{BrickRed}7 & \color{BrickRed}11 & \color{BrickRed}16 & \color{BrickRed}20 \\ \end{array} \right] = \left[ \begin{array} {rrr} \color{BrickRed}0 & \color{BrickRed}0 & \color{BrickRed}0 & \color{BrickRed}-1 \\ \end{array} \right]$

And so, reintroducing the $L_{\text{dep}}$, we have:

$\left[ \begin{array} {rrr} \color{OliveGreen}19 & \color{OliveGreen}30 & \color{OliveGreen}44 & \color{OliveGreen}53 \\ \color{BrickRed}0 & \color{BrickRed}0 & \color{BrickRed}0 & \color{BrickRed}-1 \\ \end{array} \right]$

Which canonicalizes to:

$\left[ \begin{array} {rrr} 19 & 30 & 44 & 0 \\ 0 & 0 & 0 & 1 \\ \end{array} \right]$

And so we can see that meantone minus flattone is meanmag.
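Once both matrices share an explicit $L_{\text{dep}}$ and each carry a single linearly independent vector, the sum and difference themselves reduce to a couple of list operations. A minimal Python sketch (variable names hypothetical; canonicalization not shown):

```python
def add_vectors(v, w):
    return [a + b for a, b in zip(v, w)]

def sub_vectors(v, w):
    return [a - b for a, b in zip(v, w)]

l_dep = [19, 30, 44, 53]         # shared linearly dependent vector
meantone_ind = [7, 11, 16, 19]   # meantone's linearly independent vector
flattone_ind = [7, 11, 16, 20]   # flattone's linearly independent vector

# reintroduce the L_dep to form the (not yet canonicalized) results
temperament_sum = [l_dep, add_vectors(meantone_ind, flattone_ind)]
temperament_diff = [l_dep, sub_vectors(meantone_ind, flattone_ind)]
print(temperament_sum)   # → [[19, 30, 44, 53], [14, 22, 32, 39]]  (godzilla)
print(temperament_diff)  # → [[19, 30, 44, 53], [0, 0, 0, -1]]  (meanmag)
```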

#### Initial example: canonical form

Even when a pair of temperaments isn't addable, as long as they have the same dimensions, the matrices representing them have the same shape, so there's nothing stopping us from entry-wise adding them anyway. For example, the $L_{\text{dep}}$ for the canonical comma bases for septimal meantone [[4 -4 1 0 [13 -10 0 1] and septimal blackwood [[-8 5 0 0 [-6 2 0 1] is empty, meaning their $l_{\text{ind}} = 2$, and therefore they aren't addable. Yet we can still do entry-wise addition on the matrices acting as these temperaments' comma bases, as if the temperaments were addable:

$\left[ \begin{array} {r|r} 4 & 13 \\ -4 & -10 \\ 1 & 0 \\ 0 & 1 \\ \end{array} \right] + \left[ \begin{array} {r|r} -8 & -6 \\ 5 & 2 \\ 0 & 0 \\ 0 & 1 \\ \end{array} \right] = \left[ \begin{array} {r|r} (4+-8) & (13+-6) \\ (-4+5) & (-10+2) \\ (1+0) & (0+0) \\ (0+0) & (1+1) \\ \end{array} \right] = \left[ \begin{array} {r|r} -4 & 7 \\ 1 & -8 \\ 1 & 0 \\ 0 & 2 \\ \end{array} \right]$

And — at first glance — the result may even seem to be what we were looking for: a temperament which makes

1. neither the meantone comma [4 -4 1 0 nor the Pythagorean limma [-8 5 0 0 vanish, but does make the just diatonic semitone [-4 1 1 0 vanish; and
2. neither Harrison's comma [13 -10 0 1 nor Archytas' comma [-6 2 0 1 vanish, but does make the laruru negative second [7 -8 0 2 vanish.

But while these two monovector additions have worked out individually, the full result cannot truly be said to be the "temperament sum" of septimal meantone and blackwood. Here follows a demonstration of why not.

#### Second example: alternate form

Let's try summing two completely different comma bases for these temperaments and see what we get. Septimal meantone can also be represented by the comma basis consisting of the diesis [1 2 -3 1 and the hemimean comma [-6 0 5 -2 (which is another way of saying that septimal meantone also makes those commas vanish). And septimal blackwood can also be represented by the septimal third-tone [2 -3 0 1 and the cloudy comma [-14 0 0 5. So here is the entry-wise sum of those two bases:

$\left[ \begin{array} {rrr} 1 & -6 \\ 2 & 0 \\ -3 & 5 \\ 1 & -2 \\ \end{array} \right] + \left[ \begin{array} {rrr} 2 & -14 \\ -3 & 0 \\ 0 & 0 \\ 1 & 5 \\ \end{array} \right] = \left[ \begin{array} {rrr} (1+2) & (-6+-14) \\ (2+-3) & (0+0) \\ (-3+0) & (5+0) \\ (1+1) & (-2+5) \\ \end{array} \right] = \left[ \begin{array} {rrr} 3 & -20 \\ -1 & 0 \\ -3 & 5 \\ 2 & 3 \\ \end{array} \right]$

This works out for the individual monovectors too; that is, the result makes none of the input commas vanish anymore, but instead makes their sums vanish. But what we're looking at here is not a comma basis for the same temperament as we got the first time!

We can confirm this by putting both results into canonical form. That's exactly what canonical form is for: confirming whether or not two matrices are representations of the same temperament! The first result happens to already be in canonical form, so that's [[-4 1 1 0 [7 -8 0 2]. The second result [[3 -1 -3 2 [-20 0 5 3] doesn't look like a match, but we can't be sure until we put it into canonical form too. Its canonical form is [[-49 3 19 0 [-23 1 8 1], which doesn't match, and so these are decidedly not the same temperament.
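Short of a full canonicalization routine, we can at least confirm that the two results represent different temperaments by comparing the rational row spaces of their comma bases: two bases for the same temperament must span the same space, so stacking them together cannot increase the rank. A minimal sketch using exact arithmetic (helper name hypothetical):

```python
from fractions import Fraction

def rank(rows):
    """Rank over the rationals, via Gaussian elimination with exact arithmetic."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for col in range(len(m[0])):
        pivot = next((i for i in range(r, len(m)) if m[i][col] != 0), None)
        if pivot is None:
            continue
        m[r], m[pivot] = m[pivot], m[r]
        for i in range(len(m)):
            if i != r and m[i][col] != 0:
                f = m[i][col] / m[r][col]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

first = [[-4, 1, 1, 0], [7, -8, 0, 2]]      # first result's commas
second = [[3, -1, -3, 2], [-20, 0, 5, 3]]   # second result's commas
# the same temperament would require rank(first + second) to stay at 2
print(rank(first), rank(second), rank(first + second))  # → 2 2 4
```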

#### Third example: reordering of canonical form

In fact, we could even take the same sets of commas and merely reorder them to come up with a different result! Here, we'll just switch the order of the two commas in the representation of septimal blackwood:

$\left[ \begin{array} {rrr} 4 & 13 \\ -4 & -10 \\ 1 & 0 \\ 0 & 1 \\ \end{array} \right] + \left[ \begin{array} {rrr} -6 & -8 \\ 2 & 5 \\ 0 & 0 \\ 1 & 0 \\ \end{array} \right] = \left[ \begin{array} {rrr} (4+-6) & (13+-8) \\ (-4+2) & (-10+5) \\ (1+0) & (0+0) \\ (0+1) & (1+0) \\ \end{array} \right] = \left[ \begin{array} {rrr} -2 & 5 \\ -2 & -5 \\ 1 & 0 \\ 1 & 1 \\ \end{array} \right]$

And the canonical form of [[-2 -2 1 1 [5 -5 0 1] is [[-7 3 1 0 [5 -5 0 1], so that's yet another possible temperament resulting from attempting to add these non-addable temperaments.

#### Fourth example: other side of duality

We can even experience this without changing basis. Let's just compare the results we get from the canonical form of these two temperaments, on either side of duality. The first example we worked through happened to be their canonical comma bases. So now let's look at their canonical mappings. Septimal meantone's is [1 0 -4 -13] 0 1 4 10]} and septimal blackwood's is [5 8 0 14] 0 0 1 0]}. So what temperament do we get by summing these?

$\left[ \begin{array} {rrr} 1 & 0 & -4 & -13 \\ 0 & 1 & 4 & 10 \\ \end{array} \right] + \left[ \begin{array} {rrr} 5 & 8 & 0 & 14 \\ 0 & 0 & 1 & 0 \\ \end{array} \right] = \left[ \begin{array} {rrr} (1+5) & (0+8) & (-4+0) & (-13+14) \\ (0+0) & (1+0) & (4+1) & (10+0) \\ \end{array} \right] = \left[ \begin{array} {rrr} 6 & 8 & -4 & 1 \\ 0 & 1 & 5 & 10 \\ \end{array} \right]$

In order to compare this result directly with our other three results, let's take the dual of this [6 8 -4 1] 0 1 5 10]}, which is [[22 -15 3 0 [41 -30 2 2] (in canonical form), so we can see that's yet a fourth possible result.[8]

#### Summary

Here are the four different results we've found so far:

$\begin{array}{ccccccc} \text{canonical} & & \text{alternate} & & \text{reordered canonical} & & \text{other side of duality} \\ \left[ \begin{array} {rrr} -4 & 7 \\ 1 & -8 \\ 1 & 0 \\ 0 & 2 \\ \end{array} \right] & ≠ & \left[ \begin{array} {rrr} -49 & -23 \\ 3 & 1 \\ 19 & 8 \\ 0 & 1 \\ \end{array} \right] & ≠ & \left[ \begin{array} {rrr} -7 & 5 \\ 3 & -5 \\ 1 & 0 \\ 0 & 1 \\ \end{array} \right] & ≠ & \left[ \begin{array} {rrr} 22 & 41 \\ -15 & -30 \\ 3 & 2 \\ 0 & 2 \\ \end{array} \right] \end{array}$

What we're experiencing here is the effect first discussed in the earlier section Temperament addition#The temperaments are addable: since entry-wise addition is an operation defined on matrices, not on temperaments, we get different results for different bases.

This is in stark contrast to the situation when we have addable temperaments: once we get them into the form with the explicit $L_{\text{dep}}$ and only the single linearly independent basis vector, we will get the same resultant temperament regardless of which side of duality we add them on — the duals stay in sync, we could say — and regardless of which basis we choose.[9]

And so we can see that, despite immediate appearances, entry-wise addition of temperaments with more than one basis vector not in common does not give us reliable results per temperament.

#### How it looks with multivectors

We've now observed the outcome when adding non-addable temperaments using the matrix approach. It's instructive to observe how it works with multivectors as well. The canonical multicommas for septimal meantone and septimal blackwood are [[12 -13 4 10 -4 1⟩⟩ and [[14 0 -8 0 5 0⟩⟩, respectively. When we add these, we get [[26 -13 -4 10 1 1⟩⟩. What temperament is this — does it match with any of the four comma bases we've already found? Let's check by converting it back to matrix form. Oh, wait — we can't. This is what we call an indecomposable multivector. In other words, there is no set of vectors that could be wedged together to produce this multivector. This is the way that multivectors convey to us that there is no true temperament sum of these two temperaments.
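We can check this computationally. For $d=4$, a multicomma's six entries are the 2×2 minors of the comma basis taken in lexicographic order (this ordering is an assumption on our part, but it reproduces the multicommas quoted above), and a bivector in four dimensions is decomposable exactly when it satisfies the single Plücker relation. A minimal sketch:

```python
def wedge(v, w):
    """Plücker coordinates (2x2 minors, lexicographic order) of two 4D vectors."""
    return [v[i] * w[j] - v[j] * w[i]
            for i in range(4) for j in range(i + 1, 4)]

def is_decomposable(p):
    """A 4D bivector (p01, p02, p03, p12, p13, p23) is decomposable
    iff it satisfies the Plücker relation p01*p23 - p02*p13 + p03*p12 == 0."""
    p01, p02, p03, p12, p13, p23 = p
    return p01 * p23 - p02 * p13 + p03 * p12 == 0

meantone = wedge([4, -4, 1, 0], [13, -10, 0, 1])
blackwood = wedge([-8, 5, 0, 0], [-6, 2, 0, 1])
print(meantone)   # → [12, -13, 4, 10, -4, 1]
print(blackwood)  # → [14, 0, -8, 0, 5, 0]

total = [a + b for a, b in zip(meantone, blackwood)]
print(total, is_decomposable(total))  # → [26, -13, -4, 10, 1, 1] False
print(is_decomposable(meantone), is_decomposable(blackwood))  # → True True
```

The two inputs pass the Plücker check (as any wedge of two vectors must), while their sum fails it, which is exactly the indecomposability described above.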

### Further explanations

#### Diagrammatic explanation

##### Introduction

The diagrams used for this explanation were inspired in part by Kite's gencoms, and specifically how in his "twin squares" matrices — which have dimensions $d×d$ — one can imagine shifting a bar up and down to change the boundary between the vectors that form a basis for the commas and those that form a generator detempering. The count of the former is the nullity $n$, and the count of the latter is the rank $r$, and the shifting of the boundary bar within the total of $d$ vectors corresponds to the insight of the rank-nullity theorem, which states that $r + n = d$. And so this diagram's square grid has just the right amount of room to portray both the mapping and the comma basis for a given temperament (with the comma basis's vectors rotated 90 degrees to appear as rows, to match up with the rows of the mapping).

So consider this first example of such a diagram:

*[Diagram: a $d=4$ square grid; $g_{\text{min}}=1$ above the bar, $g_{\text{max}}=3$ below, with $l_{\text{ind}}=1$ marked on each side.]*

This represents a $d=4$ temperament. These diagrams are grade-agnostic, which is to say that they are agnostic as to which side counts the $r$ and which side counts the $n$. So we are showing them as $g_{\text{min}}$ and $g_{\text{max}}$ instead. We could say there's a variation on the rank-nullity theorem whereby $g_{\text{min}} + g_{\text{max}} = d$, just as $r + n = d$. So we can then say that this diagram represents either an $r=1$, $n=3$ temperament or an $r=3$, $n=1$ temperament.

But actually, this diagram represents more than just a single temperament. It represents a relationship between a pair of temperaments (which have the same dimensions, non-grade-agnostically, i.e. not a pairing of a $r=1$, $n=3$ temperament with a $r=3$, $n=1$ temperament). As elsewhere, green coloration indicates the linearly dependent basis vectors $L_{\text{dep}}$ between this pair of temperaments, and red coloration indicates linearly independent basis vectors $L_{\text{ind}}$ between the same pair of temperaments.

So, in this case, the two ET maps are linearly independent. This should be unsurprising: because ET maps are constituted by only a single vector (they're $r=1$ by definition), if they were linearly dependent, then they'd necessarily be the exact same ET! Temperament addition on two of the same ET is never interesting; $T_1$ plus $T_1$ simply equals $T_1$ again, and $T_1$ minus $T_1$ is undefined. That said, if we were to represent temperament addition between two of the same temperament on a diagram such as this, then every cell would be green. And this is true regardless of whether $r=1$ or otherwise.

From this information, we can see that the comma bases of any randomly selected pair of different $d=4$ ETs are going to share 2 vectors, or in other words, their $L_{\text{dep}}$ will have two basis vectors. In terms of the diagram, we're saying that they'll always have two green-colored vectors under the black bar.

These diagrams are a good way to understand which temperament relationships are possible and which aren't, where by a "relationship" here we mean a particular combination of their matching dimensions and their linear-independence integer count. A good way to use these diagrams for this purpose is to imagine the red coloration emanating away from the black bar in both directions simultaneously, one pair of vectors at a time. Doing it like this captures the fact, as previously stated, that the $l_{\text{ind}}$ on either side of duality is always equal. There's no notion of a max or min here, as there is with $g$ or $l_{\text{dep}}$; the $l_{\text{ind}}$ on either side is always the same, so we can capture it with a single number, which counts the red vectors on just one half (that is, half of the total count of red vectors, or half of the width of the red band in the middle of the grid).

There's no need to look at diagrams like this where the black bar is below the center. This is because, even though for convenience we're currently treating the top half as $r$ and the bottom half as $n$, these diagrams are ultimately grade-agnostic. So we could say that each one essentially represents not just one possibility for the relationship between two temperaments' dimensions and $l_{\text{ind}}$, but two such possibilities. Again, this diagram equally represents both $d=4, r=1, n=3, l_{\text{ind}}=1$ as well as $d=4, r=3, n=1, l_{\text{ind}}=1$. Which is another way of saying we could vertically mirror it without changing it.

With the black bar always either in the top half or exactly in the center, we can see that the emanating red band will always either hit the top edge of the square grid first, or it will hit both the top and bottom edges simultaneously. So this is how these diagrams visually convey the fact that the $l_{\text{ind}}$ between two temperaments will always be less than or equal to their $g_{\text{min}}$: a situation where $l_{\text{ind}} \gt g_{\text{min}}$ would visually look like the red band spilling past the edges of the square grid.

We could also say that two temperaments are linearly dependent on each other when $l_{\text{ind}} \lt g_{\text{max}}$, that is, their linear-independence is less than their max-grade.

Perhaps more importantly, we can also see from these diagrams that any pair of $g_{\text{min}}=1$ temperaments will be addable. Because if they are $g_{\text{min}}=1$, then the furthest the red band can extend from the black bar is 1 vector, and 1 mirrored set of red vectors means $l_{\text{ind}}=1$, and that's the definition of addability.

##### A simple $d=3$ example

Let's back-pedal to $d=3$ for a simple illustrative example.

*[Diagram: a $d=3$ square grid; $g_{\text{min}}=1$ above the bar, $g_{\text{max}}=2$ below, with $l_{\text{ind}}=1$ marked on each side.]*

This diagram shows us that any two $d=3$, $g_{\text{min}}=1$ temperaments (like 5-limit ETs) will be linearly dependent, i.e. their comma bases will share one vector. You may already know this intuitively if you are familiar with the 5-limit projective tuning space diagram from the Middle Path paper, which shows how we can draw a line through any two ETs and that line will represent a temperament, and the single comma that temperament makes to vanish is this shared vector. The diagram also tells us that any two 5-limit temperaments that make only a single comma vanish will also be linearly dependent, for the opposite reason: their mappings will always share one vector.

And we can see that there are no other diagrams of interest for $d=3$, because there's no sense in looking at diagrams with no red band, but we can't extend the red band any further than 1 vector on each side without going over the edge, and we can't lower the black bar any further without going below the center. So we're done. And our conclusion is that any pair of different $d=3$ temperaments that are nontrivial ($0 \lt n \lt d=3$ and $0 \lt r \lt d=3$) will be addable.

##### Completing the suite of $d=4$ examples

Okay, back to $d=4$. We've already looked at the $g_{\text{min}}=1$ possibility (which, for any $d$, there will only ever be one of). So let's start looking at the possibilities where $g_{\text{min}}=2$, which in the case of $d=4$ leaves us only one pair of values for $r$ and $n$: both being 2.

*[Diagram: a $d=4$ square grid; $g_{\text{min}}=2$ above the bar, $g_{\text{max}}=2$ below, with $l_{\text{ind}}=1$ marked on each side.]*

But even with $d$, $r$, and $n$ fixed, we still have more than one possibility for $L_{\text{dep}}$. The above diagram shows $l_{\text{ind}}=1$. The below diagram shows $l_{\text{ind}}=2$.

*[Diagram: a $d=4$ square grid; $g_{\text{min}}=2$ above the bar, $g_{\text{max}}=2$ below, with $l_{\text{ind}}=2$ marked on each side.]*

In the former possibility, where $l_{\text{ind}}=1$ (and therefore the temperaments are addable), we have a pair of different $d=4$, $r=2$ temperaments where we can find a single comma that both temperaments make to vanish, and — equivalently — we can find one ET that supports both temperaments.

In the latter possibility, where $l_{\text{ind}}=2$, neither side of duality shares any vectors in common. And so we've encountered our first example that is not addable. In other words, if the red band ever extends more than 1 vector away from the black bar, temperament addition is not possible. So $d=4$ is the first time we had enough room (half of $d$) to support that condition.

We have now exhausted the possibility space for $d=4$. We can't extend either the red band or the black bar any further.

##### $d=5$ diagrams finally reveal important relationships

So how about we go to $d=5$ (such as the 11-limit). As usual, starting with $g_{\text{min}}=1$:

*[Diagram: a $d=5$ square grid; $g_{\text{min}}=1$ above the bar, $g_{\text{max}}=4$ below, with $l_{\text{ind}}=1$ marked on each side.]*

Just as with the $l_{\text{ind}}=1$ diagrams given for $d=3$ and $d=4$, we can see these are addable temperaments.

Now let's look at $d=5$ but with $g_{\text{min}}=2$. This presents two possibilities. First, $l_{\text{ind}}=1$:

*[Diagram: a $d=5$ square grid; $g_{\text{min}}=2$ above the bar, $g_{\text{max}}=3$ below, with $l_{\text{ind}}=1$ marked on each side.]*

And second, $l_{\text{ind}}=2$:

*[Diagram: a $d=5$ square grid; $g_{\text{min}}=2$ above the bar, $g_{\text{max}}=3$ below, with $l_{\text{ind}}=2$ marked on each side.]*

Here's where things really get interesting. Because in both of these cases, the pairs of temperaments represented are linearly dependent on each other (i.e. either their mappings are linearly dependent, their comma bases are linearly dependent, or both). So far, in every possibility where the temperaments have been linearly dependent, they have also been $l_{\text{ind}}=1$, and therefore addable. But in the second case here, we have $l_{\text{ind}}=2$, and yet since $d=5$ gives $g_{\text{max}}=3 \gt l_{\text{ind}}$, the temperaments still manage to be linearly dependent. So this is the first example of a linearly dependent temperament pairing which is not addable.

##### Back to $d=2$, for a surprisingly tricky example

Beyond $d=5$, these diagrams get cumbersome to prepare, and cease to reveal further insights. But if we step back down to $d=2$, a place simpler than anywhere we've looked so far, we actually find another surprisingly tricky example, which is hopefully still illuminating.

So $d=2$ (such as the 3-limit) presents another case — similar to the $d=5$, $g_{\text{min}}=2$, $l_{\text{ind}}=2$ case explored most recently above — where the properties of linear dependence and addability do not match each other. But while in that case we had a temperament pair that was linearly dependent yet not addable, in this $d=2$ (and therefore $g_{\text{min}}=1$, $l_{\text{ind}}=1$) case, it is the other way around: addable yet linearly independent!

*[Diagram: a $d=2$ square grid; $g_{\text{min}}=1$ above the bar, $g_{\text{max}}=1$ below, with $l_{\text{ind}}=1$ marked on each side.]*

Basically, in the case of $d=2$, $g_{\text{max}}=1$ (in non-trivial cases, i.e. not JI or the unison temperament), so any two different ETs or commas you pick are going to be linearly independent (because the only way they could be linearly dependent would be to be the same temperament). And yet we know we can still entry-wise add them to get new vectors, and these are automatically decomposable, because they're already vectors (decomposing means to express a multivector in the form of a list of monovectors, so decomposing a multivector that's already a monovector like this is tantamount to merely putting array braces around it).

#### Geometric explanation

We've presented a diagrammatic illustration of the behavior of linear-independence $l_{\text{ind}}$ with respect to temperament dimensions. But some of the results might have seemed surprising. For instance, when looking at the diagram for $d=4, g_{\text{min}}=1, g_{\text{max}}=3$, it might have seemed intuitive enough that the red band could not extend beyond the square grid, but then again, why shouldn't it be possible to have, say, two 7-limit ETs which make only a single comma in common vanish? Perhaps it doesn't seem clear that this is impossible, and that they must make two commas in common vanish (and of course the infinitude of combinations of these two commas). If this is as unclear to you as it was to the author when exploring this topic, then this explanatory section is for you! Here, we will use geometrical representations of temperaments to hone our intuitions about the possible combinations of dimensions and linear-independence $l_{\text{ind}}$ of temperaments.

In this approach, we’re actually not going to focus directly on the linear-independence $l_{\text{ind}}$ of temperaments. Instead, we're going to look at the linear-dependence $l_{\text{dep}}$ of matrices representing temperaments such as mappings and comma bases, and then compute the linear-independence $l_{\text{ind}}$ from it and the grade $g$. As we’ve established, the linear-dependence $l_{\text{dep}}$ differs from one side of duality to the other, so we’ll only be looking at one side of duality at a time.

##### Introduction

In this geometric approach, we'll be imagining individual vectors as points (0D), sets of two vectors as lines (1D), sets of three as planes (2D), four as volumes (3D), and so forth, according to this table:

| vector count | geometric dimension | form |
|---|---|---|
| 0 | undefined | (emptiness) |
| 1 | 0 | point |
| 2 | 1 | line |
| 3 | 2 | plane |
| 4 | 3 | volume |
| 5 | 4 | hypervolume |

This is a "vector space", and these geometric dimensions are consistent with how temperaments represented by these counts of vectors appear in projective vector space, which reduces geometric dimensions by 1. For example, a vector has a geometric interpretation as a directed line segment, which is 1D, but a point is 0D, which is one dimension lower. Essentially what we're doing is assuming the origin.

Think of it this way: geometric points are zero-dimensional, simply representing a position in space, whereas linear algebra vectors are one-dimensional, representing both a magnitude and direction; the way vectors manage to encode this extra dimension without providing any additional information is by being understood to describe this position in space relative to an origin. Well, so we'll now switch our interpretation of these objects to the geometric one, where the vector's entries are nothing more than a coordinate for a point in space. And the "projection" involved in projective vector space essentially positions us at this discarded origin, looking out from it upon every individual point, which accomplishes the same feat, in a visual way.

Perhaps an example may help clarify this setup. Suppose we've got an (x,y,z) space, and two coordinates (5,8,12) and (7,11,16). You should recognize these as the simple maps for 5-ET and 7-ET, usually written as 5 8 12] and 7 11 16], respectively. Ask for the equation of the plane defined by the three points (5,8,12), (7,11,16), and the origin (0,0,0) and you'll get -4x + 4y -1z = 0, which clearly shows us the entries of the meantone comma. That's because meantone temperament can be defined by these two maps. 5-limit JI is a 3D space, and meantone temperament, as a rank-2 temperament, would be a 2D plane. But we don't normally need to think of the map corresponding to the origin, where everything is made to vanish, including meantone. So we can just assume it, and think of a 2D plane as being defined by only 2 points, which in a view projected (from the origin) will look like just the line connecting (5,8,12) and (7,11,16).
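This worked example can be reproduced directly: the normal to the plane through the origin and two points is their cross product, which here recovers the meantone comma. A minimal sketch (helper name hypothetical; the sign convention of the result may differ from other sources):

```python
def cross(u, v):
    """Cross product of two 3D vectors: a normal to the plane they span."""
    return [u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0]]

# the simple maps for 5-ET and 7-ET, treated as points (5,8,12) and (7,11,16)
normal = cross([5, 8, 12], [7, 11, 16])
print(normal)  # → [-4, 4, -1], i.e. the plane -4x + 4y - 1z = 0: the meantone comma
```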

So, we've set the stage for our projective vector spaces. We will now be looking at representations of temperaments as counts of vector sets, and then using this scheme to convert them to primitive geometric forms. We'll place two of each form into the space, representing the two temperaments whose addability is being checked. Then we will observe their possible intersections depending on how they're oriented in space, and it's these intersections that represent their linear-dependence. When the dimension of the intersection is converted back to a vector set count, we have their linear-dependence $l_{\text{dep}}$, for this side of duality, anyway (remember, unlike the linear-independence $l_{\text{ind}}$, this value isn't necessarily the same on both sides of duality). We can finally subtract the linear-dependence from the grade (vector count) to get the linear-independence, in order to determine if the two temperaments are addable.

In these examples, we'll be assuming that no two temperaments being compared are the same, because adding copies of the same temperament is not interesting. The other thing we'll be assuming is that no lines, planes, etc. are parallel to each other; this is due to a strange effect touched upon in footnote 4 whereby temperament geometry that appears parallel in projective space actually still intersects; the present author asks that if anyone is able to demystify this situation, they please do!

##### At $d=3$

First, let's establish the geometric dimension of the space. With $d=3$, we've got a 2D space (one less than 3), so the entire space can be visualized on a plane.

Our only possible values for $g_{\text{min}}$ and $g_{\text{max}}$ here are 1 and 2, respectively. So these are the two possible counts of vectors $g$ possessed by matrices representing temperaments here.

So let's look at temperaments represented by matrices with 1 vector first ($g=1$). In this case, each of the two temperaments is a point in the plane. Unless these two temperaments are the same temperament, the intersection of these two points is empty. Emptiness isn't even 0D! So that tells us that these temperaments have 0 vectors worth of linear dependence. With $g=1$, that gives us $l_{\text{ind}} = g - l_{\text{dep}} = 1 - 0 = 1$.

Next, let's look at temperaments represented by matrices with 2 vectors ($g=2$). In this case, each of the two temperaments is a line in the plane. Again, assuming the two lines are not the same line or parallel, their intersection is a point. Being 0D, that tells us that the linear-dependence of these matrices is 1. So that gives us $l_{\text{ind}} = g - l_{\text{dep}} = 2 - 1 = 1$. This matches the value we found via the $g=1$ case, so we've effectively checked our work.

##### At $d=2$

Let's step back to $d=2$. Here we've got a 1D space (one less than 2), so the entire space can be visualized on a single line (one direction corresponds to an increasing ratio between the two coordinates, and the other to a decreasing ratio).

We know our only possible value for $g_{\text{min}}$ and $g_{\text{max}}$ here is 1. So in either case, each of the two temperaments is a point on the line. As with two points in a plane — when $d=3$ — unless these two temperaments are the same temperament, the intersection of these two points is empty. So again $l_{\text{ind}} = g = 1$.

##### At $d=4$

First, let's establish the geometric dimension of the space. With $d=4$, we've got a 3D space (one less than 4), so the entire space can be visualized in a volume.

At $d=4$, we have a couple options for the grade: either $g_{\text{min}}=1$ and $g_{\text{max}}=3$, or both $g_{\text{min}}$ and $g_{\text{max}}$ equal 2.

Let's look at temperaments represented by matrices with 1 vector first ($g=1$). Yet again, we find ourselves with two separate points, but now we find them in a space that's not a line, not a plane, but a volume. This doesn't change $l_{\text{ind}} = g = 1$, so we're not even going to show it, or any further cases of $g=1$. These are all addable.

And when $g=3$, because this is paired with $g=1$ from the min and max values, we should expect to get the same answer as with $g=1$. And indeed, it checks out that way: two $g=3$ temperaments will be planes in this volume, and the intersection of two planes is a line, which means that $l_{\text{dep}} = 2$. And so $l_{\text{ind}} = g - l_{\text{dep}} = 3 - 2 = 1$. And here's where our geometric approach begins to pay off! This was the example given at the beginning that might seem unintuitive when relying only on the diagrammatic approach. But here we can see clearly that there would be no way for two planes in a volume to intersect only at a point, which proves the fact that two 7-limit ETs could never make only a single comma in common vanish.

Next let's look at temperaments represented by matrices with 2 vectors, that is, when both $g_{\text{min}}$ and $g_{\text{max}}$ are equal to 2. What are the possible ways two lines can occupy a volume together? In a plane, as it was with $d=3$ (and again assuming no parallel objects in these examples), they must intersect. But in a volume, here in $d=4$, it is possible for them not to intersect at all (they can be skew). So, with $g=2$, it is possible to have $l_{\text{dep}} = 0$, which leads to $l_{\text{ind}} = g - l_{\text{dep}} = 2 - 0 = 2$. Not addable in this case.

But we can also imagine two lines in a volume that do intersect at a point. This is the case where $l_{\text{ind}} = g - l_{\text{dep}} = 2 - 1 = 1$: addable!

##### At $d=5$

First, let's establish the geometric dimension of the space. With $d=5$, we've got a 4D space (one less than 5), so the entire space can be visualized in a hypervolume. We've now gone beyond the dimensionality of physical reality, so things get a little harder to conceptualize unfortunately. But $d=5$ is the first $d$ where we can make an important point about addability, so please bear with!

At $d=5$, we also have a couple options for the grade: either $g_{\text{min}}=1$ and $g_{\text{max}}=4$, or $g_{\text{min}}=2$ and $g_{\text{max}}=3$.

First we'll look at $g_{\text{min}}=1$ and $g_{\text{max}}=4$. Temperament matrices with $g=1$ are still addable. And temperament matrices with $g=4$ should be too. We can see this visually in how two volumes together in a hypervolume will have an intersection the shape of a plane. We can now see that there's a generalizable principle that any two $(d-1)$-dimensional objects will necessarily have a $(d-2)$-dimensional intersection, and thus have $l_{\text{ind}} = 1$ and be addable. So we won't need to show this one, or any further like it, either.

So let's look at temperament matrices with $g_{\text{min}}=2$ and $g_{\text{max}}=3$. For $g=2$, we have two possible values for $l_{\text{dep}}$: 0 or 1, meaning that either the two lines through this hypervolume do not intersect (0), or they intersect at a point (1). These diagrams would look very much like the corresponding diagrams for $d=4$, so we will not show them. But what about when $g=3$? We can certainly imagine two planes in a hypervolume intersecting at a line, just as they do in an ordinary volume; they're just not taking advantage of the additional geometric dimension. So we won't show that example either. But where it gets really interesting is imagining then taking one of these two planes and rotating it through that additional geometric dimension; this reduces the intersection between the two planes down to a single point. And this corresponds to the case of $l_{\text{dep}}=1$ here, which means $l_{\text{ind}} = g - l_{\text{dep}} = 3 - 1 = 2$, and therefore not addable.
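The point-intersection case can be made concrete with a small numeric sketch (my own illustration, using plain basis vectors rather than real temperaments): two $g=3$ matrices in $d=5$ that share exactly one vector give $l_{\text{dep}}=1$ and $l_{\text{ind}}=2$.

```python
import numpy as np

# two g = 3 "planes" in d = 5 that share exactly one basis vector (e1):
# their projective intersection is a single point
P1 = np.array([[1, 0, 0, 0, 0], [0, 1, 0, 0, 0], [0, 0, 1, 0, 0]])
P2 = np.array([[1, 0, 0, 0, 0], [0, 0, 0, 1, 0], [0, 0, 0, 0, 1]])

g = 3
# dimension of the shared span, by inclusion-exclusion on ranks
l_dep = 2 * g - np.linalg.matrix_rank(np.vstack([P1, P2]))
l_ind = g - l_dep

print(l_dep, l_ind)  # 1 2, so not addable
```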

So for $g_{\text{min}}=2$ and $g_{\text{max}}=3$ we found two different possibilities for $l_{\text{ind}}$, 1 and 2, and we found each of them twice. We can see then that these match up: the $g_{\text{min}}=2$ case with $l_{\text{ind}}=1$ matches with the $g_{\text{max}}=3$ case with $l_{\text{ind}}=1$, and the $l_{\text{ind}}=2$ cases match in the same way.

##### Summary table

Here's a summary table of our geometric findings so far:

| $d$ ($= g_{\text{min}} + g_{\text{max}}$) | $g_{\text{min}}$: $g$ | $g_{\text{min}}$: $l_{\text{dep}}$ | $g_{\text{max}}$: $g$ | $g_{\text{max}}$: $l_{\text{dep}}$ | $l_{\text{ind}}$ ($= g - l_{\text{dep}}$) |
|---|---|---|---|---|---|
| 2 | 1 | 0 | 1 | 0 | 1 |
| 3 | 1 | 0 | 2 | 1 | 1 |
| 4 | 1 | 0 | 3 | 2 | 1 |
| 4 | 2 | 0 | 2 | 0 | 2 |
| 4 | 2 | 1 | 2 | 1 | 1 |
| 5 | 1 | 0 | 4 | 3 | 1 |
| 5 | 2 | 0 | 3 | 1 | 2 |
| 5 | 2 | 1 | 3 | 2 | 1 |
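These $l_{\text{ind}}$ values can be computed directly with plain linear algebra. As a sketch (not part of the original article), here is the $d=3$ row checked from both sides, using 5-limit meantone and porcupine in what I assume are their canonical forms; the min-grade side (comma bases) and the max-grade side (mappings) agree, as the table's matching rows predict.

```python
import numpy as np

def l_ind(T1, T2):
    # l_dep = rank(T1) + rank(T2) - rank(stacked), by inclusion-exclusion;
    # l_ind = g - l_dep, where g is the shared grade
    g = np.linalg.matrix_rank(T1)
    stacked = np.linalg.matrix_rank(np.vstack([T1, T2]))
    l_dep = g + np.linalg.matrix_rank(T2) - stacked
    return g - l_dep

# 5-limit (d = 3) meantone and porcupine: mappings (g = 2) and comma bases (g = 1)
meantone_M, meantone_C = np.array([[1, 0, -4], [0, 1, 4]]), np.array([[4, -4, 1]])
porcupine_M, porcupine_C = np.array([[1, 2, 3], [0, 3, 5]]), np.array([[1, -5, 3]])

print(l_ind(meantone_C, porcupine_C), l_ind(meantone_M, porcupine_M))  # 1 1
```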

#### Algebraic explanation

This explanation compares the results of the multivector and matrix approaches to temperament addition, and shows algebraically that the matrix approach can achieve the same answer as the multivector approach only on the condition that all but one vector is kept the same between the added matrices; that is, not only must the temperaments be addable, but their $L_{\text{dep}}$ must appear explicitly in the added matrices.

To compare results, we eventually get both approaches into a multivector form. With the multivector approach, we wedge the vector set first and then add the resultant multivectors to get a new multivector. With the matrix approach, we treat the vector set as a matrix and add first, then treat the resultant matrix as a vector set and wedge those vectors to get a new multivector.

The diagrams below each compare the two approaches, the multivector approach and the matrix approach. First they show how the results of the two approaches match when the $L_{\text{dep}}$ is successfully explicit, and then how the results fail to match when it is not.

This first diagram demonstrates this situation for a $d=3, g=2$ case.

**Explicit $L_{\text{dep}}$:** both matrices contain the shared vector $(a\ b\ c)$ explicitly.

Multivector approach (wedge first, then add):

$$\begin{bmatrix} a & b & c \\ d & e & f \end{bmatrix} \rightarrow \begin{bmatrix} bf-ce & af-cd & ae-bd \end{bmatrix} \qquad \begin{bmatrix} a & b & c \\ g & h & i \end{bmatrix} \rightarrow \begin{bmatrix} bi-ch & ai-cg & ah-bg \end{bmatrix}$$

$$\begin{bmatrix} bf-ce & af-cd & ae-bd \end{bmatrix} + \begin{bmatrix} bi-ch & ai-cg & ah-bg \end{bmatrix} = \begin{bmatrix} b(f+i)-c(e+h) & a(f+i)-c(d+g) & a(e+h)-b(d+g) \end{bmatrix}$$

Matrix approach (add first, then wedge):

$$\begin{bmatrix} a & b & c \\ d & e & f \end{bmatrix} + \begin{bmatrix} a & b & c \\ g & h & i \end{bmatrix} = \begin{bmatrix} 2a & 2b & 2c \\ d+g & e+h & f+i \end{bmatrix} \rightarrow \begin{bmatrix} 2b(f+i)-2c(e+h) & 2a(f+i)-2c(d+g) & 2a(e+h)-2b(d+g) \end{bmatrix}$$

which simplifies (dividing out the common factor of 2) to

$$\begin{bmatrix} b(f+i)-c(e+h) & a(f+i)-c(d+g) & a(e+h)-b(d+g) \end{bmatrix}$$

matching the multivector approach's result.

**Hidden $L_{\text{dep}}$:** the second matrix instead has a first row $(j\ k\ l)$ that is not identical to $(a\ b\ c)$.

Multivector approach:

$$\begin{bmatrix} bf-ce & af-cd & ae-bd \end{bmatrix} + \begin{bmatrix} ki-lh & ji-lg & jh-kg \end{bmatrix} = \begin{bmatrix} bf-ce+ki-lh & af-cd+ji-lg & ae-bd+jh-kg \end{bmatrix}$$

Matrix approach:

$$\begin{bmatrix} a & b & c \\ d & e & f \end{bmatrix} + \begin{bmatrix} j & k & l \\ g & h & i \end{bmatrix} = \begin{bmatrix} a+j & b+k & c+l \\ d+g & e+h & f+i \end{bmatrix} \rightarrow \begin{bmatrix} (b+k)(f+i)-(c+l)(e+h) & (a+j)(f+i)-(c+l)(d+g) & (a+j)(e+h)-(b+k)(d+g) \end{bmatrix}$$

whose first entry expands to $bf+bi+kf+ki-ce-ch-le-lh$, which differs from the multivector approach's $bf-ce+ki-lh$ by the cross terms $bi+kf-ch-le$; the other entries fail to match similarly.
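Since in the $d=3$, $g=2$ case the wedge of a matrix's two rows is just their cross product, this situation is easy to check numerically. The following sketch is my own (not from the original), using 12 & 7 and 12 & 22 as the two temperaments; in the hidden case, the second matrix is rewritten in an equivalent basis that conceals the shared vector.

```python
import numpy as np

def wedge(M):
    # for a 2x3 matrix, the wedge of the rows is (up to sign) their cross product
    return np.cross(M[0], M[1])

shared = np.array([12, 19, 28])            # the explicit L_dep vector
M1 = np.vstack([shared, [7, 11, 16]])      # 12 & 7
M2 = np.vstack([shared, [22, 35, 51]])     # 12 & 22

# explicit L_dep: add-then-wedge equals wedge-then-add, up to a factor of 2
assert np.array_equal(wedge(M1 + M2), 2 * (wedge(M1) + wedge(M2)))

# hidden L_dep: same temperament as M2, but with the shared vector concealed
M2_hidden = np.vstack([shared + M2[1], M2[1]])
assert np.array_equal(wedge(M2_hidden), wedge(M2))          # same multivector
print(wedge(M1 + M2_hidden), 2 * (wedge(M1) + wedge(M2)))   # no longer equal
```

Note that wedging `M2_hidden` still gives the same multivector as wedging `M2` (the multivector approach is basis-invariant); only the matrix approach breaks when the shared vector is hidden.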

This second diagram demonstrates this situation for a $d=5, g=3$ case. In its hidden-$L_{\text{dep}}$ half, one pair of the $L_{\text{dep}}$ vectors matches explicitly, but the other does not, and that isn't enough.

Label the rows $r_1 = (a\ b\ c\ d\ e)$, $r_2 = (f\ g\ h\ i\ j)$, $r_3 = (k\ l\ m\ n\ o)$, and $r_3' = (p\ q\ r\ s\ t)$.

**Explicit $L_{\text{dep}}$:** the matrices are $[r_1\ r_2\ r_3]$ and $[r_1\ r_2\ r_3']$, sharing both $L_{\text{dep}}$ vectors explicitly. With the multivector approach, each matrix wedges to a trivector $(r_1∧r_2)∧r_3$; the first matrix's first entry is $k(bh-cg)-l(ah-cf)+m(ag-bf)$ and the second matrix's is $p(bh-cg)-q(ah-cf)+r(ag-bf)$, so adding gives $(k+p)(bh-cg)-(l+q)(ah-cf)+(m+r)(ag-bf)$, and the other nine entries follow the same pattern. With the matrix approach, the sum is $[2r_1\ 2r_2\ (r_3+r_3')]$; the doubled rows contribute a common factor of 4 to $r_1∧r_2$, which simplifies away, and the resulting trivector's entries (first entry $(k+p)(bh-cg)-(l+q)(ah-cf)+(m+r)(ag-bf)$) match the multivector approach's, entry for entry.

**Hidden $L_{\text{dep}}$:** now the second matrix is $[r_1\ r_2'\ r_3']$ with $r_2' = (u\ v\ w\ x\ y)$ in place of $r_2$, so only one of the two $L_{\text{dep}}$ vectors is explicit. The multivector sum's first entry is $k(bh-cg)-l(ah-cf)+m(ag-bf)+p(bw-cv)-q(aw-cu)+r(av-bu)$, while the matrix approach's first entry is $(k+p)(b(w+h)-c(g+v))-(l+q)(a(w+h)-c(f+u))+(m+r)(a(g+v)-b(f+u))$; expanding reveals cross terms that do not cancel, so the two approaches fail to match.
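The same check can be run at $d=5$, $g=3$ with a general wedge, computed as the vector of all $3×3$ minors. This sketch is my own illustration with arbitrary integer rows (not real temperaments); it confirms that the matrix approach agrees with the multivector approach, here up to a common factor of 4, exactly when both $L_{\text{dep}}$ vectors are explicit.

```python
import numpy as np
from itertools import combinations

def wedge(M):
    # the multivector: all maximal (here 3x3) minors of M, in column order
    g, d = M.shape
    return np.array([round(np.linalg.det(M[:, c])) for c in combinations(range(d), g)])

r1, r2 = np.array([1, 0, 0, 0, 0]), np.array([0, 1, 0, 0, 0])
r3a, r3b = np.array([0, 0, 1, 2, 3]), np.array([0, 0, 4, 5, 6])

M1 = np.vstack([r1, r2, r3a])
M2 = np.vstack([r1, r2, r3b])          # both L_dep vectors explicit
assert np.array_equal(wedge(M1 + M2), 4 * (wedge(M1) + wedge(M2)))

# conceal one of the two shared vectors (same row space as M2)
M2_hidden = np.vstack([r1, r2 + r3b, r3b])
assert np.array_equal(wedge(M2_hidden), wedge(M2))
print(np.array_equal(wedge(M1 + M2_hidden), 4 * (wedge(M1) + wedge(M2_hidden))))  # False
```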

These two examples are by no means a proof, but meditation on the patterns in the variables is at least fairly convincing.

#### Sintel's proof of the linear-independence conjecture

##### Sintel's original text
If A and B are mappings from Z^n to Z^m, with n > m, A, B full rank (using A and B as their rowspace equivalently):

dim(A + B) - m = dim(ker(A) + ker(B)) - (n-m)

>> dim(A)+dim(B)=dim(A+B)+dim(A∩B) => dim(A + B) = dim(A) + dim(B) - dim(A∩B)

dim(A) + dim(B) - dim(A∩B) - m = dim(ker(A) + ker(B)) - (n-m)

>> by duality of kernel, dim(ker(A) + ker(B))  = dim(ker(A ∩ B))

dim(A) + dim(B) - dim(A∩B) - m = dim(ker(A ∩ B))  - (n-m)

>> rank nullity: dim(ker(A ∩ B)) + dim(A ∩ B) = n

dim(A) + dim(B) - dim(A∩B) - m = n -  dim(A ∩ B)  - (n-m)

m + m - dim(A∩B) - m = n -  dim(A ∩ B)  - (n-m)

m + m - m = n - n + m

m = m

##### Douglas Blumeyer's interpretation

We're going to take the strategy of beginning with what we're trying to prove, then reducing it to an obvious equivalence, which will show that our initial statement must be just as true.

So here's the statement we're trying to prove; let's call it Equation A:

$\text{rank}(\text{union}(M_1, M_2)) - r = \text{nullity}(\text{union}(C_1, C_2)) - n$

$M_1$ and $M_2$ are mappings which both have dimensionality $d$, rank $r$, nullity $n$, and are full-rank, and $C_1$ and $C_2$ are their comma bases, respectively.

Technically since these matrices are representing subspace bases, the correct operation here is "sumset", not "union", but because "union" is a more commonly known opposite of intersection and would work for plain matrices, I've decided to stick with it here.

So, the left-hand side of this equation is a way to express the count of linearly independent basis vectors $L_{\text{ind}}$ existing between $M_1$ and $M_2$. The right-hand side tells you the same thing, but between $C_1$ and $C_2$. The fact that these two things are equal is the thing we're trying to prove. So let's go!

Let's call the following Equation B. This makes sense because basis vectors between $M_1$ and $M_2$ are either going to be linearly dependent or linearly independent. The union is going to be all of $M_1$'s independent vectors, all of $M_2$'s independent vectors, and all of $M_1$ and $M_2$'s dependent vectors but only one copy of them. The intersection, meanwhile, is going to be all of $M_1$ and $M_2$'s dependent vectors again, essentially the other copy of them. So the two sides sum to the same thing.

$\text{rank}(M_1) + \text{rank}(M_2) = \text{rank}(\text{union}(M_1, M_2)) + \text{rank}(\text{intersection}(M_1, M_2))$

Then this is just Equation B, rearranged.

$\text{rank}(\text{union}(M_1, M_2)) = \text{rank}(M_1) + \text{rank}(M_2) - \text{rank}(\text{intersection}(M_1, M_2))$

Now take that result (Equation B solved for $\text{rank}(\text{union}(M_1, M_2))$) and substitute it into Equation A:

$\text{rank}(M_1) + \text{rank}(M_2) - \text{rank}(\text{intersection}(M_1, M_2)) - r = \text{nullity}(\text{union}(C_1, C_2)) - n$

By the "duality of the comma basis", this is Equation C:

$\text{nullity}(\text{union}(C_1, C_2)) = \text{nullity}(\text{nullspace}(\text{intersection}(M_1, M_2)))$

Now substitute the right-hand side of Equation C in for $\text{nullity}(\text{union}(C_1, C_2))$ in the previous equation:

$\text{rank}(M_1) + \text{rank}(M_2) - \text{rank}(\text{intersection}(M_1, M_2)) - r = \text{nullity}(\text{nullspace}(\text{intersection}(M_1, M_2))) - n$

This is the rank nullity theorem where $\text{intersection}(M_1, M_2)$ is the temperament. Let's call it Equation D:

$\text{nullity}(\text{nullspace}(\text{intersection}(M_1, M_2))) + \text{rank}(\text{intersection}(M_1, M_2)) = d$

Now solve Equation D for $\text{nullity}(\text{nullspace}(\text{intersection}(M_1, M_2)))$, and substitute that result into the previous equation:

$\text{rank}(M_1) + \text{rank}(M_2) - \text{rank}(\text{intersection}(M_1, M_2)) - r = d - \text{rank}(\text{intersection}(M_1, M_2)) - n$

Now realize that $\text{rank}(M_1)$ and $\text{rank}(M_2)$ are both equal to $r$.

$r + r - \text{rank}(\text{intersection}(M_1, M_2)) - r = d - \text{rank}(\text{intersection}(M_1, M_2)) - n$

Now cancel the $\text{rank}(\text{intersection}(M_1, M_2))$ from both sides, and substitute in $(d - r)$ for $n$.

$r + r - r = d - d + r$

Now cancel the $r$'s on the left and the $d$'s on the right:

$r = r$

So we know this is true.
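As a sanity check, the statement just proved can also be verified numerically for a concrete pair. This sketch is my own (assuming the canonical 5-limit meantone and porcupine matrices), reading "union" as stacking the bases and "rank"/"nullity" as the dimension of the resulting span:

```python
import numpy as np

# 5-limit (d = 3): meantone and porcupine, mappings (r = 2) and comma bases (n = 1)
M1, C1 = np.array([[1, 0, -4], [0, 1, 4]]), np.array([[4, -4, 1]])
M2, C2 = np.array([[1, 2, 3], [0, 3, 5]]), np.array([[1, -5, 3]])
d, r = 3, 2
n = d - r

# rank(union(M1, M2)) - r  should equal  nullity(union(C1, C2)) - n  (both l_ind)
lhs = np.linalg.matrix_rank(np.vstack([M1, M2])) - r
rhs = np.linalg.matrix_rank(np.vstack([C1, C2])) - n
print(lhs, rhs)  # 1 1
```

Both sides come out to 1, consistent with meantone and porcupine being addable.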

## Glossary

• $d$: dimensionality, the dimension of a temperament's domain
• $r$: rank, the dimension of a temperament's mapping
• $n$: nullity, the dimension of a temperament's comma basis
• $g$: grade, the generic term for rank or nullity
• $g_{\text{min}}$: min-grade, the minimum of a temperament's rank and nullity $\min(r,n)$
• $g_{\text{max}}$: max-grade, the maximum of a temperament's rank and nullity $\max(r,n)$
• $L_{\text{dep}}$: linear-dependence basis, a basis for all the linearly dependent vectors between two temperaments
• $L_{\text{ind}}$: linear-independence basis, a basis for all the vectors of a temperament that are linearly independent from a specific other temperament
• $l_{\text{dep}}$: linear-dependence, the dimension of the $L_{\text{dep}}$
• $l_{\text{ind}}$: linear-independence, the dimension of the $L_{\text{ind}}$
• dimensions: the $d$, $r$, and $n$ of a temperament
• addable: two temperaments are addable when they have the same dimensions and have $l_{\text{ind}}$ $= 1$
• negation: a mapping is negated when the leading entry of its largest-minors is negative; a comma basis is negated when the trailing entry of its largest-minors is negative

## Wolfram implementation

Temperament arithmetic has been implemented as the functions sum and diff in the RTT library in Wolfram Language.

## Credits

This page is mostly the work of Douglas Blumeyer, and he assumes full responsibility for any inaccuracies or other shortcomings here. But he would like to thank Mike Battaglia, Dave Keenan, and Sintel for the huge amounts of counseling they provided. There's no way this page could have come together without their help. In particular, the page would not exist at all without the original spark of inspiration from Mike.

## Footnotes

1. It has also been asserted that there exists a connection between temperament addition and "Fokker groups" as discussed on this page: Fokker block, but the connection remains unclear to this author.
2. or they are all the same temperament, in which case they share all the same basis vectors and could perhaps be said to be completely linearly dependent.
3. At least, this mapping would have a total of four rows before it is canonicalized. After canonicalization, it may end up with only three (or two if you map-merged a temperament with itself for some reason).
4. or — equivalently, in EA — either their multimaps or their multicommas are linearly dependent
5. You may be wondering — what about two temperaments which are parallel in tone or tuning space, e.g. compton and blackwood in tuning space? Their comma bases are each $n=1$, and they merge to give a $n=2$ comma basis, which corresponds to a $r=1$ mapping, which means it should appear as an ET point on the PTS diagram. But how could that be? Well, here's their comma-merge: [[1 0 0 [0 1 0], and so that corresponding mapping is [0 0 1]}. So it's some degenerate ET. I suppose we could say it's the point at infinity away from the center of the diagram.
6. This conjecture was first suggested by Mike Battaglia, but it has not yet been mathematically proven. Sintel and Tom Price have done some experiments, but nothing complete yet. Douglas Blumeyer's test cases in the RTT library in Wolfram Language have empirically supported it, though.
7. or you may prefer to think of this as three different (prime) factors: 2, 3, 5 (which multiply to 30)
8. It is possible to find a pair of mapping forms for septimal meantone and septimal blackwood that sum to a mapping which is the dual of the comma basis found by summing their canonical comma bases. One example is [97 152 220 259] -30 -47 -68 -80]} + [-95 -152 -212 -266] 30 48 67 84]}.
9. Note that different bases are possible for addable temperaments, e.g. the simplest addable forms for 5-limit meantone and porcupine are [7 11 16] -2 -3 -4]⟩ + [7 11 16] 1 2 3]⟩ = [14 22 32] -1 -1 -1]} which canonicalizes to [1 1 1] 0 4 9]}. But [7 11 16] -9 -14 -20]⟩ + [7 11 16] 1 2 3]⟩ also works (in the meantone mapping, we've added one copy of the first vector to the second), giving [14 22 32] -8 -12 -17]} which also canonicalizes to [1 1 1] 0 4 9]}; in fact, as long as the $L_{\text{dep}}$ is explicit and neither matrix is enfactored, the entry-wise addition will work out fine.