Temperament addition
Temperament addition is the general name for either the temperament sum or the temperament difference, which are two closely related operations on regular temperaments. Basically, to add or subtract temperaments means to match up the entries of temperament vectors and then add or subtract them individually. The result is a new temperament that has similar properties to the original temperaments.
Introductory examples
For example, in the 5-limit, the sum of 12-ET and 7-ET is 19-ET because ⟨12 19 28] + ⟨7 11 16] = ⟨(12+7) (19+11) (28+16)] = ⟨19 30 44], and the difference of 12-ET and 7-ET is 5-ET because ⟨12 19 28] - ⟨7 11 16] = ⟨(12-7) (19-11) (28-16)] = ⟨5 8 12].
[math]\left[ \begin{array} {rrr}
12 & 19 & 28 \\
\end{array} \right]
+
\left[ \begin{array} {rrr}
7 & 11 & 16 \\
\end{array} \right]
=
\left[ \begin{array} {rrr}
(12+7) & (19+11) & (28+16) \\
\end{array} \right]
=
\left[ \begin{array} {rrr}
19 & 30 & 44 \\
\end{array} \right][/math]
[math]\left[ \begin{array} {rrr} 12 & 19 & 28 \\ \end{array} \right] - \left[ \begin{array} {rrr} 7 & 11 & 16 \\ \end{array} \right] = \left[ \begin{array} {rrr} (12-7) & (19-11) & (28-16) \\ \end{array} \right] = \left[ \begin{array} {rrr} 5 & 8 & 12 \\ \end{array} \right][/math]
We can write these using wart notation as 12p + 7p = 19p and 12p - 7p = 5p, respectively. The similarity of these temperaments can be seen in how, like both 12-ET and 7-ET, 19-ET (their sum) and 5-ET (their difference) also support meantone temperament.
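This entrywise arithmetic is trivial to sketch in code. A minimal Python illustration (ours, not part of the article) of the 12-ET and 7-ET example:

```python
# Maps (vals) as plain lists of integers, e.g. ⟨12 19 28] for 5-limit 12-ET.
twelve_et = [12, 19, 28]
seven_et = [7, 11, 16]

# The temperament sum and difference are just entrywise addition/subtraction.
et_sum = [a + b for a, b in zip(twelve_et, seven_et)]
et_diff = [a - b for a, b in zip(twelve_et, seven_et)]

print(et_sum)   # [19, 30, 44], i.e. 19-ET
print(et_diff)  # [5, 8, 12], i.e. 5-ET
```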
Temperament sums and differences can also be found using commas; for example, meantone + porcupine = tetracot because [4 -4 1⟩ + [1 -5 3⟩ = [(4+1) (-4+-5) (1+3)⟩ = [5 -9 4⟩, and meantone - porcupine = dicot because [4 -4 1⟩ - [1 -5 3⟩ = [(4-1) (-4--5) (1-3)⟩ = [3 1 -2⟩.
[math]\left[ \begin{array} {rrr}
4 \\
-4 \\
1 \\
\end{array} \right]
+
\left[ \begin{array} {rrr}
1 \\
-5 \\
3 \\
\end{array} \right]
=
\left[ \begin{array} {rrr}
(4+1) \\
(-4+-5) \\
(1+3) \\
\end{array} \right]
=
\left[ \begin{array} {rrr}
5 \\
-9 \\
4 \\
\end{array} \right][/math]
[math]\left[ \begin{array} {rrr} 4 \\ -4 \\ 1 \\ \end{array} \right] - \left[ \begin{array} {rrr} 1 \\ -5 \\ 3 \\ \end{array} \right] = \left[ \begin{array} {rrr} (4-1) \\ (-4--5) \\ (1-3) \\ \end{array} \right] = \left[ \begin{array} {rrr} 3 \\ 1 \\ -2 \\ \end{array} \right][/math]
We could write this in quotient form — replacing addition with multiplication and subtraction with division — as 80/81 × 250/243 = 20000/19683 and 80/81 ÷ 250/243 = 24/25, respectively. The similarity of these temperaments can be seen in how all of them are supported by 7-ET. (Note that these examples are all given in canonical form, which is why we're seeing the meantone comma as 80/81 instead of the more common 81/80; for the reason why, see Temperament addition#Negation.)
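This correspondence between entrywise vector arithmetic and quotient arithmetic is easy to check with Python's `fractions` module. A quick sketch (ours, not from the article):

```python
from fractions import Fraction

PRIMES = [2, 3, 5]  # 5-limit

def monzo_to_ratio(monzo):
    """Convert a prime-count vector to its frequency ratio."""
    ratio = Fraction(1)
    for prime, exponent in zip(PRIMES, monzo):
        ratio *= Fraction(prime) ** exponent
    return ratio

meantone = [4, -4, 1]   # 80/81 (canonical form)
porcupine = [1, -5, 3]  # 250/243

# Vector addition corresponds to quotient multiplication:
total = [a + b for a, b in zip(meantone, porcupine)]
assert monzo_to_ratio(total) == Fraction(80, 81) * Fraction(250, 243)

# Vector subtraction corresponds to quotient division:
diff = [a - b for a, b in zip(meantone, porcupine)]
assert monzo_to_ratio(diff) == Fraction(80, 81) / Fraction(250, 243)

print(monzo_to_ratio(total), monzo_to_ratio(diff))
```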
Temperament addition is simplest for temperaments which can be represented by single vectors, as demonstrated in these examples. In other words, it is simplest for temperaments that are either rank-1 (equal temperaments, or ETs for short) or nullity-1 (having only a single comma). Because grade [math]g[/math] is the generic term for rank [math]r[/math] and nullity [math]n[/math], we can define the minimum grade [math]g_{\text{min}}[/math] of a temperament as the minimum of its rank and nullity, [math]\min(r,n)[/math], and so for convenience in this article we will refer to [math]r=1[/math] (read "rank-1") or [math]n=1[/math] (read "nullity-1") temperaments as [math]g_{\text{min}}=1[/math] (read "min-grade-1") temperaments. We'll also use [math]g_{\text{max}}[/math] (read "max-grade"), which naturally is equal to [math]\max(r,n)[/math].
For [math]g_{\text{min}}\gt 1[/math] temperaments, temperament addition gets a little trickier. This is discussed in the [math]g_{\text{min}}\gt 1[/math] section later.
Applications
The temperament that results from taking the sum or difference of two temperaments, as stated above, has properties similar to those of the original two temperaments.
Take the case of meantone + porcupine = tetracot from the previous section. What this relationship means is that tetracot is the temperament which makes vanish neither the meantone comma itself nor the porcupine comma itself, but instead whatever comma relates pitches that are exactly one meantone comma plus one porcupine comma apart. And that's the tetracot comma! On the other hand, the temperament difference, dicot, is the temperament that makes neither meantone nor porcupine vanish, but instead the comma that's the size of the difference between them. And that's the dicot comma. So tetracot makes 80/81 × 250/243 vanish, and dicot makes 80/81 × 243/250 vanish.
Similar reasoning is possible for the mapping-rows of mappings — the analogs of the commas of comma bases — but it is less intuitive to describe. What's reasonably easy to understand, though, is how temperament addition on maps is essentially navigation of the scale tree for the rank-2 temperament they share; for more information on this, see Dave Keenan & Douglas Blumeyer's guide to RTT: exploring temperaments#Scale trees. So if you understand the effects on individual maps, then you can apply those to changes of maps within a more complex temperament.
Ultimately, these two effects are the primary applications of temperament addition.^{[1]}
A note on variance
For simplicity, this article will use the word "vector" in its general sense, that is, variance-agnostic. This means it includes either contravariant vectors (plain "vectors", such as prime-count vectors) or covariant vectors ("covectors", such as maps). However, the reader should assume that only one of the two types is being used at a given time, since the two variances do not mix. For more information, see Linear_dependence#Variance. The same variance-agnosticism holds for multivectors in this article as well.
Visualizing temperament addition
Versus the wedge product
If the wedge product of two vectors represents the directed area of a parallelogram constructed with the vectors as its sides, then the temperament sum and difference are the vectors along the two diagonals of this parallelogram.
Tuning and tone space
One way we can visualize temperament addition is on projective tuning space.
This shows both the sum and the difference of porcupine and meantone. All four temperaments — the two input temperaments, porcupine and meantone, as well as the sum, tetracot, and the difference, dicot — can be seen to intersect at 7-ET. This is because all four temperaments' mappings can be expressed with the map for 7-ET as one of their mapping-rows.
These are all [math]r=2[/math] temperaments, so their mappings each have one other row besides the one reserved for 7-ET. Any line that we draw across these four temperament lines will strike four ETs whose maps have a sum and difference relationship. On this diagram, two such lines have been drawn. The first one runs through 5-ET, 20-ET, 15-ET, and 10-ET. We can see that 5 + 15 = 20, which corresponds to the fact that 20-ET is the ET on the line for tetracot, which is the sum of porcupine and meantone, while 5-ET and 15-ET are the ETs on their lines. Similarly, we can see that 15 - 5 = 10, which corresponds to the fact that 10-ET is the ET on the line for dicot, which is the difference of porcupine and meantone.
The other line runs through the ETs 12, 41, 29, and 17, and we can see again that 12 + 29 = 41 and 29 - 12 = 17.
We can also visualize temperament addition on projective tone space. Here relationships are inverted: points are lines, and lines are points. So all four temperaments are found along the line for 7-ET.
Note that when viewed in tuning space, the sum is found between the two input temperaments, and the difference is found outside them, to one side or the other. In tone space, by contrast, it's the difference that's found between the two input temperaments, and it's the sum that's found outside. In either situation, when a temperament is on the outside and may be on one side or the other, the explanation for this can be inferred from the behavior of the scale tree on any temperament line: e.g. if 5-ET and 7-ET support an [math]r=2[/math] temperament, then so will 5 + 7 = 12-ET, and then so will 5 + 12 and 7 + 12 in turn, and so on recursively. When you navigate like this (what we could call down the scale tree), children are always found between their parents. But when you try to go back up the scale tree, to one or the other parent, you may not immediately know on which side of the child to look.
Conditions on temperament addition
The temperaments have the same dimensions
Temperament addition is only possible for temperaments with the same dimensions, that is, the same rank and dimensionality (and therefore, by the rank-nullity theorem, also the same nullity). The reason for this is visually obvious: without the same [math]d[/math], [math]r[/math], and [math]n[/math] (dimensionality, rank, and nullity, respectively), the numeric representations of the temperament — such as matrices and multivectors — will not have the same proportions, and therefore their entries will be unable to be matched up one-to-one. From this condition it also follows that the result of temperament addition will be a new temperament with the same [math]d[/math], [math]r[/math], and [math]n[/math] as the input temperaments.
If you're unfamiliar with domain bases, then you can probably safely assume your temperaments are in the same subspace, because they should be in the default, standard, prime-limit interval subspace. If they're not, change them to be on the same interval subspace if you can, and then come back to temperament addition.
The temperaments are addable
Matching the dimensions is only the first of two conditions for temperament addition to be possible. The second condition is that the temperaments must all be addable. This condition is trickier, and so a detailed discussion of it is deferred to a later section (here: Temperament addition#Addability). But let us at least say here what it essentially means: the basis vectors representing the summed or differenced temperaments must all match, except for one non-matching vector in each. Said another way, any number of matching vectors is allowed in the basis alongside, but ultimately we're only ever able to add (mono)vectors — the single non-matching vectors from each temperament.
We can gain some intuition about this addability condition by thinking about these non-matching vectors — the ones that are changing — as if they were themselves a basis for a temperament, and then recalling what we know about bases: when a basis consists of two or more vectors, an infinitude of other bases for the same subspace exists (such as how there are multiple forms of a rank-2 temperament mapping); whereas when a basis consists of only a single vector, there is only one possible basis. Finally, we must recognize that entrywise matrix addition is an operation defined on matrices, not bases, and so it can give different results when done to different bases for the same subspace. The only way for temperament addition to work reliably, therefore, is to do it only on matrices where the basis for what is changing has only a single possible representation, and that is the case only when a single basis vector is changing.
Any set of [math]g_{\text{min}}=1[/math] temperaments is addable^{[2]}, because the side of duality where [math]g=1[/math] will satisfy this condition, so we don't need to worry about it in detail in that case. In other words, [math]g_{\text{min}}=1[/math] temperaments can be represented by monovectors, and we have no problem entrywise adding those.
Versus temperament merging
Like temperament merging, temperament addition takes temperaments as inputs and finds a new temperament sharing properties of the inputs. And they both can be understood as, in some sense, adding these input temperaments together.
But there is a big difference between temperament addition and merging. Temperament addition is done using entrywise addition (or subtraction), whereas merging is done using concatenation. So the temperament sum of two mappings with two rows each is a new mapping that still has exactly two rows, while on the other hand, the merge of two mappings with two rows each is a new mapping with a total of four rows^{[3]}.
The linear dependence connection
Another connection between temperament addition and merging is that they may involve checks for linear dependence.
Temperament addition, as stated earlier, always requires addability, which is a more complex property involving linear dependence.
Merging does not necessarily involve linear dependence. Linear dependence only matters for merging when you attempt to do it using exterior algebra, that is, by using the wedge product, rather than the linear algebra approach, which is just to concatenate the vectors as a matrix and canonicalize. For more information on this, see Douglas Blumeyer and Dave Keenan's Intro to exterior algebra for RTT#The linearly dependent exception to the wedge product.
[math]g_{\text{min}}=1[/math]
As stated above, temperament addition is simplest for temperaments which can be represented by single vectors, or in other words, temperaments that are [math]g_{\text{min}}=1[/math], and for other temperaments, the computation gets a little trickier. Here we'll look at how to handle the simple case of [math]g_{\text{min}}=1[/math].
As shown in the introductory examples, [math]g_{\text{min}}=1[/math] cases are as easy as entrywise addition or subtraction. But there are just a couple of tricks to it.
Getting to the side of duality with [math]g_{\text{min}}=1[/math]
We may be looking at a temperament representation which itself does not consist of a single vector, but whose dual does. For example, the meantone mapping [⟨1 0 -4] ⟨0 1 4]} and the porcupine mapping [⟨1 2 3] ⟨0 3 5]} each consist of two vectors. So these representations require additional labor to compute with. But their duals are easy! If we simply find a comma basis for each of these mappings, we get [[4 -4 1⟩] and [[1 -5 3⟩]. In this form, the temperaments can be entrywise added, to [[5 -9 4⟩] as we saw earlier. And if in the end we're still after a mapping, since we started with mappings, we can take the dual of this comma basis, to find the mapping [⟨1 1 1] ⟨0 4 9]}.
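In the 5-limit ([math]d=3[/math]) case, this round-trip across duality is especially simple, because the nullspace of a two-row mapping is spanned by the cross product of its rows. A Python sketch of the forward direction (illustrative, not the article's method):

```python
def cross(a, b):
    """Cross product of two 5-limit maps: a vector orthogonal to both rows,
    i.e. a comma that both mapping-rows make vanish."""
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

# Comma bases recovered from the mappings:
meantone_comma = cross([1, 0, -4], [0, 1, 4])   # [4, -4, 1], i.e. 80/81
porcupine_comma = cross([1, 2, 3], [0, 3, 5])   # [1, -5, 3], i.e. 250/243

# The entrywise sum gives the tetracot comma:
tetracot_comma = [a + b for a, b in zip(meantone_comma, porcupine_comma)]
print(tetracot_comma)  # [5, -9, 4]

# Sanity check: the tetracot mapping [⟨1 1 1] ⟨0 4 9]} tempers it out.
for row in ([1, 1, 1], [0, 4, 9]):
    assert sum(m * c for m, c in zip(row, tetracot_comma)) == 0
```

In general (higher [math]d[/math]) the comma basis would be computed as a nullspace, but the cross product suffices for this example.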
Negation
There's just one other trick to it, and that's that we have to be mindful of negation.
The temperament difference can be understood as being the same operation as the temperament sum except with one of the two temperaments negated.
For single vectors (and multivectors), negation is as simple as changing the sign of every entry.
Suppose you have a matrix representing temperament [math]𝓣_1[/math] and another matrix representing [math]𝓣_2[/math]. If you want to find both their sum and difference, you can calculate both [math]𝓣_1 + 𝓣_2[/math] and [math]𝓣_1 + (-𝓣_2)[/math]. There's no need to also find [math](-𝓣_1) + 𝓣_2[/math]; this will merely give the negation of [math]𝓣_1 + (-𝓣_2)[/math]. The same goes for [math](-𝓣_1) + (-𝓣_2)[/math], which is the negation of [math]𝓣_1 + 𝓣_2[/math].
But a question remains: which result between [math]𝓣_1 + 𝓣_2[/math] and [math]𝓣_1 + (-𝓣_2)[/math] is actually the sum and which is the difference? This seems like an obvious question to answer, except for one key problem: how can we be certain that [math]𝓣_1[/math] or [math]𝓣_2[/math] wasn't already in negated form to begin with? We need to establish a way to check for matrix negativity.
The check is that the vectors must be in canonical form. For a contravariant vector, such as the kind that represents commas, canonical form means that the trailing entry (the final nonzero entry) must be positive. For a covariant vector, such as the kind that represents mapping-rows, canonical form means that the leading entry (the first nonzero entry) must be positive.
Sometimes the canonical form of a vector is not the most popular form. For instance, the meantone comma is usually expressed in positive form, that is, with its numerator greater than its denominator, so that its cents value is positive, or in other words, it's the meantone comma upwards in pitch, not downwards. But the prime-count vector for that form, 81/80, is [-4 4 -1⟩, and as we can see, its trailing entry -1 is negative. So the canonical form of the meantone comma is actually [4 -4 1⟩, i.e. 80/81.
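These single-vector canonicalization rules are easy to state in code. A hedged Python sketch (the function name is ours, not established RTT vocabulary):

```python
from math import gcd
from functools import reduce

def canonicalize_vector(v, covariant=False):
    """Put a vector in canonical form: divide out the entries' GCD, then
    negate if the pivot entry is negative. The pivot is the trailing nonzero
    entry for contravariant vectors (commas), or the leading nonzero entry
    for covariant vectors (mapping-rows)."""
    g = reduce(gcd, (abs(x) for x in v))
    v = [x // g for x in v]
    nonzero = [x for x in v if x != 0]
    pivot = nonzero[0] if covariant else nonzero[-1]
    return v if pivot > 0 else [-x for x in v]

# 81/80 is [-4 4 -1⟩; its trailing entry is negative, so the canonical
# form flips every sign, giving [4 -4 1⟩, i.e. 80/81.
print(canonicalize_vector([-4, 4, -1]))  # [4, -4, 1]
```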
[math]g_{\text{min}}\gt 1[/math]
As stated above, temperament addition is simplest for temperaments which can be represented by single vectors, or in other words, temperaments that are [math]g_{\text{min}}=1[/math], and for other temperaments, the computation gets a little trickier. Here we'll look at how to handle the trickier cases of [math]g_{\text{min}}\gt 1[/math].
Throughout this section, we will be using a green color on linearly dependent objects and values, and a red color on linearly independent objects and values, to help differentiate between the two.
Addability
In order to understand how to do temperament addition on [math]g_{\text{min}}\gt 1[/math] temperaments, we must first understand addability.
In order to understand addability, we must work up to it, understanding these concepts in this order:
 linear dependence
 linear dependence between temperaments
 linear independence between temperaments
 linear independence between temperaments by only one basis vector (that's addability)
1. Linear dependence
This is explained here: linear dependence.
2. Linear dependence between temperaments
Linear dependence has been defined for the matrices and multivectors that represent temperaments, but it can also be defined for temperaments themselves. The conditions of temperament addition motivate a definition of linear dependence for temperaments whereby temperaments are considered linearly dependent if either of their mappings or their comma bases are linearly dependent^{[4]}.
For example, 5-limit 5-ET and 5-limit 7-ET, represented by the mappings [⟨5 8 12]} and [⟨7 11 16]}, may at first seem to be linearly independent, because the basis vectors visible in their mappings are clearly linearly independent (when comparing two vectors, the only way they could be linearly dependent is if they are multiples of each other, as discussed here). And indeed their mappings are linearly independent. But these two temperaments are linearly dependent, because if we consider their corresponding comma bases, we will find that they share the basis vector of the meantone comma [4 -4 1⟩.
To make this point visually, we could say that two temperaments are linearly dependent if they intersect in one or the other of tone space and tuning space. So you have to check both views.^{[5]}
3. Linear independence between temperaments
Linear dependence may be considered as a boolean (yes/no, linearly dependent/independent) or it may be considered as an integer count of linearly dependent basis vectors. In other words, it is the dimension of the linear-dependence basis, [math]\dim(L_{\text{dep}})[/math]. To refer to this count, we may hyphenate it as linear-dependence, and use the variable [math]l_{\text{dep}}[/math]. For example, 5-ET and 7-ET, per the example in the previous section, are [math]l_{\text{dep}}=1[/math] (read "linear-dependence-1") temperaments.
It does not make sense to speak of linear dependence in this integer-count sense between temperaments, however. Here's an example that illustrates why. Consider two different [math]d=5[/math], [math]r=2[/math] temperaments. Both their mappings and comma bases are linearly dependent, but their mappings have [math]l_{\text{dep}}=1[/math], while their comma bases have [math]l_{\text{dep}}=2[/math]. So what could the [math]l_{\text{dep}}[/math] of this temperament pair possibly be? We could define "min-linear-dependence" and "max-linear-dependence", as we define "min-grade" and "max-grade", but these do not turn out to be helpful.
On the other hand, it does make sense to speak of the linear-independence of two temperaments as an integer count. This is because the count of linearly independent basis vectors of two temperaments' mappings and the count of linearly independent basis vectors of their comma bases will always be the same. So the temperament linear-independence is simply this number. In the [math]d=5[/math], [math]r=2[/math] example from the previous paragraph, these would be [math]l_{\text{ind}}=1[/math] (read "linear-independence-1") temperaments.
A proof of this conjecture is given here: Temperament addition#Sintel's proof of the linear-independence conjecture.
4. Linear independence between temperaments by only one basis vector (i.e. addability)
Two temperaments are addable if they are [math]l_{\text{ind}}=1[/math]. In other words, both their mappings and their comma bases share all but one basis vector.
And this is why [math]g_{\text{min}}=1[/math] temperaments are all addable: if [math]g_{\text{min}}=1[/math], then since the temperaments are different from each other, [math]l_{\text{ind}}[/math] is at least 1; and since [math]l_{\text{ind}}[/math] can't be greater than [math]g_{\text{min}}[/math], necessarily [math]l_{\text{ind}} = 1[/math] exactly, and therefore the temperaments are addable.
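The addability condition can be checked computationally: the rank of the two mappings concatenated equals [math]r + l_{\text{ind}}[/math], so two rank-[math]r[/math] temperaments are addable exactly when that concatenated rank is [math]r + 1[/math]. A Python sketch using exact rational elimination (helper names are ours):

```python
from fractions import Fraction

def rank(rows):
    """Rank of an integer matrix, via Gaussian elimination over the rationals."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for col in range(len(m[0])):
        pivot = next((i for i in range(r, len(m)) if m[i][col] != 0), None)
        if pivot is None:
            continue
        m[r], m[pivot] = m[pivot], m[r]
        for i in range(len(m)):
            if i != r and m[i][col] != 0:
                factor = m[i][col] / m[r][col]
                m[i] = [a - factor * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

# Septimal meantone and flattone (canonical mappings):
meantone = [[1, 0, -4, -13], [0, 1, 4, 10]]
flattone = [[1, 0, -4, 17], [0, 1, 4, -9]]

# l_ind is the rank of the concatenation minus the shared rank r.
l_ind = rank(meantone + flattone) - rank(meantone)
print(l_ind)  # 1, so these two temperaments are addable
```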
Multivector approach
The simplest approach to [math]g_{\text{min}}\gt 1[/math] temperament addition is to use multivectors. This is discussed in more detail here: Douglas Blumeyer and Dave Keenan's Intro to exterior algebra for RTT#Temperament addition.
Matrix approach
Temperament addition for [math]g_{\text{min}}\gt 1[/math] temperaments (again, that's with both [math]r\gt 1[/math] and [math]n\gt 1[/math]) can also be done using matrices, and it works in essentially the same way — entrywise addition or subtraction — but there are some complications that make it significantly more involved than it is with multivectors. There are essentially five steps:
 Find the linear-dependence basis [math]L_{\text{dep}}[/math]
 Put the matrices in a form with the [math]L_{\text{dep}}[/math]
 Check for enfactoring, and do an addabilization defactor (if necessary)
 Check for negation, and change negation (if necessary)
 Entrywise add, and canonicalize
The steps
1. Find the [math]L_{\text{dep}}[/math]
For matrices, it is necessary to make explicit the basis for the linearly dependent vectors shared between the involved matrices before adding. In other words, any vectors that can be found through linear combinations of the basis vectors of every involved matrix must appear explicitly and in the same position in each matrix before the sum or difference is taken. These vectors are called the linear-dependence basis, or [math]L_{\text{dep}}[/math].
Before this can be done, of course, we need to actually find the [math]L_{\text{dep}}[/math]. This can be done using the technique described here: Linear dependence#For a given set of basis matrices, how to compute a basis for their linearly dependent vectors
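For the two-mapping case, one concrete way to find the [math]L_{\text{dep}}[/math] is to compute the vectors lying in both row spaces: any solution of [math]x·M_1 = y·M_2[/math] corresponds to a nullspace vector of the stacked matrix [math][M_1^{\mathsf T} \,|\, -M_2^{\mathsf T}][/math]. A Python sketch of this idea (our own helper code, not the linked algorithm):

```python
from fractions import Fraction
from functools import reduce
from math import gcd

def nullspace(rows):
    """Basis of the rational nullspace of an integer matrix (RREF method)."""
    m = [[Fraction(x) for x in row] for row in rows]
    n, pivots, r = len(m[0]), [], 0
    for col in range(n):
        piv = next((i for i in range(r, len(m)) if m[i][col] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        m[r] = [x / m[r][col] for x in m[r]]
        for i in range(len(m)):
            if i != r and m[i][col] != 0:
                f = m[i][col]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        pivots.append(col)
        r += 1
    basis = []
    for free in (c for c in range(n) if c not in pivots):
        v = [Fraction(0)] * n
        v[free] = Fraction(1)
        for row_idx, pc in enumerate(pivots):
            v[pc] = -m[row_idx][free]
        basis.append(v)
    return basis

def linear_dependence_basis(m1, m2):
    """Integer vectors lying in the row spaces of both m1 and m2."""
    cols = len(m1[0])
    # Rows of the stacked system are columns of [M1^T | -M2^T]; a nullspace
    # solution (x, y) satisfies x·M1 = y·M2.
    stacked = [[row[c] for row in m1] + [-row[c] for row in m2]
               for c in range(cols)]
    result = []
    for sol in nullspace(stacked):
        x = sol[:len(m1)]  # coefficients on m1's rows
        v = [sum(xi * row[c] for xi, row in zip(x, m1)) for c in range(cols)]
        lcm = reduce(lambda a, b: a * b // gcd(a, b),
                     (f.denominator for f in v), 1)
        ints = [int(f * lcm) for f in v]
        g = reduce(gcd, (abs(i) for i in ints))
        result.append([i // g for i in ints])
    return result

meantone = [[1, 0, -4, -13], [0, 1, 4, 10]]
flattone = [[1, 0, -4, 17], [0, 1, 4, -9]]
print(linear_dependence_basis(meantone, flattone))  # [[19, 30, 44, 53]]
```

This reproduces the [math]L_{\text{dep}}[/math] used in the worked example below: the single shared vector ⟨19 30 44 53].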
2. Put the matrices in a form with the [math]L_{\text{dep}}[/math]
The [math]L_{\text{dep}}[/math] will always have one fewer vector than the original matrix, by the definition of addability as [math]l_{\text{ind}}=1[/math]. And the [math]L_{\text{dep}}[/math] alone is not a full recreation of the original temperament; it needs that one extra vector to get back to representing it.
So as a next step, we need to pad out the [math]L_{\text{dep}}[/math] by drawing vectors from the original matrices. We can start from their first vectors. But if a vector happens to be linearly dependent on the [math]L_{\text{dep}}[/math], then including it won't result in a representation of the original matrix; instead we'll produce a rank-deficient matrix that doesn't represent the same temperament we started with. So we just have to keep trying vectors until we get one that is linearly independent of the [math]L_{\text{dep}}[/math].
3. Addabilization defactoring
But it is not quite as simple as determining the [math]L_{\text{dep}}[/math] and then supplying the remaining vectors necessary to match the grade of the original matrix, because the results may then be enfactored. And defactoring them without compromising the explicit [math]L_{\text{dep}}[/math] cannot be done using existing defactoring algorithms; it's a tricky process, or at least computationally intensive. This is called addabilization defactoring.
Most established defactoring algorithms will alter any or all of the entries of a matrix. This is not an option if we still want to be able to add temperaments, however, because these matrices must retain their explicit [math]L_{\text{dep}}[/math]. And we can't defactor and then paste the [math]L_{\text{dep}}[/math] back over the first vector or something, because then we might just be enfactored again. We need to find a defactoring algorithm that manages to work without altering any of the vectors in the [math]L_{\text{dep}}[/math].
The first step of addabilization defactoring is inspired by Pernet-Stein defactoring: we find the value of the enfactoring factor (the "greatest factor") by following that algorithm up to the point where we have a square transformation matrix, but instead of inverting it and multiplying by it to remove the factor, we simply take this square matrix's determinant, which is the factor we were about to remove. If that determinant is 1, then we're already defactored; if not, then we need to do some additional steps.
It turns out that you can always^{[6]} isolate the greatest factor in the single final vector of the matrix — the linearly independent vector — through linear combinations of the vectors in the [math]L_{\text{dep}}[/math].
The example that will be worked through in this section below is as simple as it can get: the [math]L_{\text{dep}}[/math] consists of only a single vector, so we simply add some number of this single linearly dependent vector to the linearly independent vector. However, if there are multiple vectors in the [math]L_{\text{dep}}[/math], the linear combination which surfaces the greatest factor may involve just one or potentially all of those vectors, and the best approach to finding this combination is simply an automatic solver. An example of this approach is demonstrated in the RTT library in Wolfram Language, here: https://github.com/cmloegcmluin/RTT/blob/main/main.m#L477
Another complication is that the greatest factor may be very large, or a highly composite number. In this case, searching directly for the linear combination that isolates the greatest factor in its entirety may be intractable; it is better to eliminate it piecemeal, i.e., whenever the solver finds a factor of the greatest factor, eliminate it, and repeat until the greatest factor is fully eliminated. The RTT library code linked above works in this way.
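For the simplest case, where the [math]L_{\text{dep}}[/math] is a single vector, the search is just a loop over multiples of that vector. A Python sketch (assuming the greatest factor has already been found via the determinant, as described above; the function name is ours):

```python
from math import gcd
from functools import reduce

def addabilization_defactor(l_dep_vector, independent_vector, greatest_factor):
    """Add multiples of the (single) linearly dependent vector to the
    linearly independent vector until the greatest factor surfaces as the
    GCD of the entries, then divide it out."""
    for k in range(greatest_factor):
        candidate = [a + k * b
                     for a, b in zip(independent_vector, l_dep_vector)]
        if reduce(gcd, (abs(x) for x in candidate)) == greatest_factor:
            return [x // greatest_factor for x in candidate]
    raise ValueError("greatest factor could not be isolated")

l_dep = [19, 30, 44, 53]
# The worked example below: k = 11 surfaces the factor of 30 in both matrices.
print(addabilization_defactor(l_dep, [1, 0, -4, -13], 30))  # [7, 11, 16, 19]
print(addabilization_defactor(l_dep, [1, 0, -4, 17], 30))   # [7, 11, 16, 20]
```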
4. Negation
Temperament negation is more complex with matrices, both in terms of checking for it and in terms of changing it.
For matrices, negation is accomplished by choosing a single vector and changing the sign of every entry in it. In the case of comma bases, a vector is a column, whereas in a mapping a vector (technically a row vector, or covector) is a row.
For matrices, the check for negation is related to the canonicalization of multivectors as used in exterior algebra for RTT. Essentially we take the largest possible minor determinants of the matrix (or "largest-minors" for short), and then look at their leading or trailing entry (leading in the case of a covariant matrix, like a mapping; trailing in the case of a contravariant matrix, like a comma basis): if this entry is positive, so is the temperament, and vice versa.
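For two-row matrices, the largest-minors are just the 2×2 determinants over every column pair. A Python sketch of the check (function names ours):

```python
from itertools import combinations

def largest_minors(matrix):
    """All 2x2 minor determinants of a 2-row matrix, ordered by column pair."""
    top, bottom = matrix
    return [top[i] * bottom[j] - top[j] * bottom[i]
            for i, j in combinations(range(len(top)), 2)]

def is_negative(matrix, covariant=True):
    """A mapping (covariant) is in negated form if the leading nonzero
    largest-minor is negative; for a comma basis, check the trailing one."""
    minors = largest_minors(matrix)
    nonzero = [x for x in minors if x != 0]
    pivot = nonzero[0] if covariant else nonzero[-1]
    return pivot < 0

m = [[19, 30, 44, 53], [7, 11, 16, 19]]
print(largest_minors(m))  # [-1, -4, -10, -4, -13, -12]
print(is_negative(m))     # True: this mapping is in negated form
```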
5. Entrywise add
The entrywise addition of elements works mostly the same as for vectors. But there's one catch: we only do it for the pair of linearly independent vectors. We set the [math]L_{\text{dep}}[/math] aside, and reintroduce it at the end.
When taking the sum, this is just for simplicity's sake. There's no sense in adding the two copies of the [math]L_{\text{dep}}[/math] together, as we'd just get the same vectors but 2-enfactored. So we may as well set it aside, deal only with the linearly independent vectors, and put it back at the end.
When taking the difference, though, it's essential that we set the [math]L_{\text{dep}}[/math] aside before entrywise addition, because if we were to subtract it from itself, we'd end up with all zeros. Unlike the case of the sum, where we'd just end up with an enfactored version of the starting vectors, we couldn't even defactor to get back to where we started, because we would have completely wiped out the relevant information by sending it all to zeros.
As a final step, as is always good to do when concluding temperament operations, put the result in canonical form.
Example
For our example, let's look at septimal meantone plus flattone. The canonical forms of these temperaments are [⟨1 0 -4 -13] ⟨0 1 4 10]} and [⟨1 0 -4 17] ⟨0 1 4 -9]}.
0. Counterexample. Before we try following the detailed instructions described above, let's do a counterexample, to illustrate why we have to follow them at all. Simple entrywise addition of these two mapping matrices gives [⟨2 0 -8 4] ⟨0 2 8 1]}, which is not the correct answer:
[math]\left[ \begin{array} {rrr}
1 & 0 & -4 & -13 \\
0 & 1 & 4 & 10 \\
\end{array} \right]
+
\left[ \begin{array} {rrr}
1 & 0 & -4 & 17 \\
0 & 1 & 4 & -9 \\
\end{array} \right]
=
\left[ \begin{array} {rrr}
2 & 0 & -8 & 4 \\
0 & 2 & 8 & 1 \\
\end{array} \right][/math]
And it's wrong not only because it is clearly enfactored (at least one factor of 2 is visible in the first vector). The full explanation of why this is the wrong answer is beyond the scope of this example (the nature of correctness here is discussed in the section Temperament addition#Addition on non-addable temperaments). However, if we now follow the instructions described above, we can find the correct answer.
1. Find the linear-dependence basis. We know where to start: first find the [math]L_{\text{dep}}[/math] and put each of these two mappings into a form that includes it explicitly. In this case, their [math]L_{\text{dep}}[/math] consists of a single vector: [⟨19 30 44 53]}.
2. Reproduce the original temperament. The original matrices had two vectors, so as our second step, we pad out these matrices by drawing vectors from the original matrices, starting from their first vectors, so now we have [⟨19 30 44 53] ⟨1 0 -4 -13]} and [⟨19 30 44 53] ⟨1 0 -4 17]}. We could choose any vectors from the original matrices, as long as they are linearly independent from the ones we already have; if one is not, skip it and move on. In this case the first vectors are both fine, though.
[math]\left[ \begin{array} {rrr}
\color{OliveGreen}19 & \color{OliveGreen}30 & \color{OliveGreen}44 & \color{OliveGreen}53 \\
1 & 0 & -4 & -13 \\
\end{array} \right]
+
\left[ \begin{array} {rrr}
\color{OliveGreen}19 & \color{OliveGreen}30 & \color{OliveGreen}44 & \color{OliveGreen}53 \\
1 & 0 & -4 & 17 \\
\end{array} \right][/math]
3. Defactor. Next, verify that both matrices are defactored. In this case, both matrices are enfactored, each by a factor of 30^{[7]}. So we'll use addabilization defactoring. Since there's only a single vector in the [math]L_{\text{dep}}[/math], all we need to do is repeatedly add that one linearly dependent vector to the linearly independent vector until we find a vector whose entries' GCD is the greatest factor, which we can then simply divide out to defactor the matrix. Specifically, we add 11 times the linearly dependent vector. For the first matrix, ⟨1 0 -4 -13] + 11⋅⟨19 30 44 53] = ⟨210 330 480 570], whose entries have a GCD of 30, so we can defactor the matrix by dividing that vector by 30, leaving us with ⟨7 11 16 19]. Therefore the final matrix here is [⟨19 30 44 53] ⟨7 11 16 19]}. The other matrix happens to defactor in the same way: ⟨1 0 -4 17] + 11⋅⟨19 30 44 53] = ⟨210 330 480 600], whose GCD is also 30, reducing to ⟨7 11 16 20], so the final matrix is [⟨19 30 44 53] ⟨7 11 16 20]}.
4. Check negativity. The next thing we need to do is check the negativity of these two temperaments. If either of the matrices we're adding is negative, then we'll have to change it (if both are negative, the problem cancels out, and we go back to being right). We check negativity by using the largest-minors of these matrices. The first matrix's largest-minors are (-1, -4, -10, -4, -13, -12) and the second matrix's largest-minors are (-1, -4, 9, -4, 17, 32). What we're looking for here are their leading entries, because these are largest-minors of a mapping (if we were looking at largest-minors of comma bases, we'd be looking at the trailing entries instead). Specifically, we're looking to see if the leading entries are positive. They're not. Which tells us these matrices are both negative! But again, since they are both negative, the effect cancels out; no need to change anything (but if we wanted to, we could just take the linearly independent vector of each matrix and negate every entry in it).
5. Add. Now the matrices are ready to add:
[math]\left[ \begin{array} {rrr}
\color{OliveGreen}19 & \color{OliveGreen}30 & \color{OliveGreen}44 & \color{OliveGreen}53 \\
\color{BrickRed}7 & \color{BrickRed}11 & \color{BrickRed}16 & \color{BrickRed}19 \\
\end{array} \right]
+
\left[ \begin{array} {rrr}
\color{OliveGreen}19 & \color{OliveGreen}30 & \color{OliveGreen}44 & \color{OliveGreen}53 \\
\color{BrickRed}7 & \color{BrickRed}11 & \color{BrickRed}16 & \color{BrickRed}20 \\
\end{array} \right][/math]
We set the [math]L_{\text{dep}}[/math] aside, and deal only with the linearly independent vectors:
[math]\left[ \begin{array} {rrr}
\color{BrickRed}7 & \color{BrickRed}11 & \color{BrickRed}16 & \color{BrickRed}19 \\
\end{array} \right]
+
\left[ \begin{array} {rrr}
\color{BrickRed}7 & \color{BrickRed}11 & \color{BrickRed}16 & \color{BrickRed}20 \\
\end{array} \right]
=
\left[ \begin{array} {rrr}
\color{BrickRed}14 & \color{BrickRed}22 & \color{BrickRed}32 & \color{BrickRed}39 \\
\end{array} \right][/math]
Then we can reintroduce the [math]L_{\text{dep}}[/math] afterwards:
[math]\left[ \begin{array} {rrr}
\color{OliveGreen}19 & \color{OliveGreen}30 & \color{OliveGreen}44 & \color{OliveGreen}53 \\
\color{BrickRed}14 & \color{BrickRed}22 & \color{BrickRed}32 & \color{BrickRed}39 \\
\end{array} \right][/math]
And finally canonicalize:
[math]\left[ \begin{array} {rrr}
1 & 0 & -4 & 2 \\
0 & 2 & 8 & 1 \\
\end{array} \right][/math]
So we can now see that meantone plus flattone is godzilla.
Since we've already done all this work to set these matrices up to find their sum, let's find their difference as well. Again, set aside the [math]L_{\text{dep}}[/math], and just entrywise subtract the two linearly independent vectors:
[math]\left[ \begin{array} {rrr}
\color{BrickRed}7 & \color{BrickRed}11 & \color{BrickRed}16 & \color{BrickRed}19 \\
\end{array} \right][/math]
−
[math]\left[ \begin{array} {rrr}
\color{BrickRed}7 & \color{BrickRed}11 & \color{BrickRed}16 & \color{BrickRed}20 \\
\end{array} \right]
=
\left[ \begin{array} {rrr}
\color{BrickRed}0 & \color{BrickRed}0 & \color{BrickRed}0 & \color{BrickRed}{-1} \\
\end{array} \right][/math]
And so, reintroducing the [math]L_{\text{dep}}[/math], we have:
[math]\left[ \begin{array} {rrr}
\color{OliveGreen}19 & \color{OliveGreen}30 & \color{OliveGreen}44 & \color{OliveGreen}53 \\
\color{BrickRed}0 & \color{BrickRed}0 & \color{BrickRed}0 & \color{BrickRed}{-1} \\
\end{array} \right][/math]
Which canonicalizes to:
[math]\left[ \begin{array} {rrr}
19 & 30 & 44 & 0 \\
0 & 0 & 0 & 1 \\
\end{array} \right][/math]
And so we can see that meantone minus flattone is meanmag.
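As a sanity check on both results: two mappings represent the same temperament exactly when they have the same row space, which we can test by comparing reduced row echelon forms. A sketch using sympy (the helper name is ours):

```python
from sympy import Matrix

def same_temperament(a, b):
    """Two mappings represent the same temperament iff their reduced row
    echelon forms (hence row spaces) are equal."""
    return Matrix(a).rref()[0] == Matrix(b).rref()[0]

summed = [[19, 30, 44, 53], [14, 22, 32, 39]]
godzilla = [[1, 0, -4, 2], [0, 2, 8, 1]]
diffed = [[19, 30, 44, 53], [0, 0, 0, -1]]
meanmag = [[19, 30, 44, 0], [0, 0, 0, 1]]
print(same_temperament(summed, godzilla))  # True
print(same_temperament(diffed, meanmag))   # True
```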
Addition on non-addable temperaments
Initial example: canonical form
Clearly, two non-addable temperaments may still be entrywise added. For example, the [math]L_{\text{dep}}[/math] for the canonical comma bases for septimal meantone [[-4 4 -1 0⟩ [13 -10 0 1⟩] and septimal blackwood [[8 -5 0 0⟩ [-6 2 0 1⟩] is empty, meaning their [math]l_{\text{ind}}=2[/math], and therefore they aren't addable. Yet we can still do entrywise addition as if they were:
[math]\left[ \begin{array} {rrr}
-4 & 13 \\
4 & -10 \\
-1 & 0 \\
0 & 1 \\
\end{array} \right]
+
\left[ \begin{array} {rrr}
8 & -6 \\
-5 & 2 \\
0 & 0 \\
0 & 1 \\
\end{array} \right]
=
\left[ \begin{array} {rrr}
((-4)+8) & (13+(-6)) \\
(4+(-5)) & ((-10)+2) \\
((-1)+0) & (0+0) \\
(0+0) & (1+1) \\
\end{array} \right]
=
\left[ \begin{array} {rrr}
4 & 7 \\
-1 & -8 \\
-1 & 0 \\
0 & 2 \\
\end{array} \right][/math]
And — at first glance — the result may even seem to be what we were looking for: a temperament which makes
neither the meantone comma [-4 4 -1 0⟩ nor the Pythagorean limma [8 -5 0 0⟩ vanish, but does make the just diatonic semitone [4 -1 -1 0⟩ vanish; and
neither Harrison's comma [13 -10 0 1⟩ nor Archytas' comma [-6 2 0 1⟩ vanish, but does make the laruru negative second [7 -8 0 2⟩ vanish.
But while these two monovector additions have worked out individually, the full result cannot truly be said to be the "temperament sum" of septimal meantone and blackwood. Here follows a demonstration of why it cannot.
Second example: alternate form
Let's try summing two completely different comma bases for these temperaments and see what we get. Septimal meantone can also be represented by the comma basis consisting of the diesis [1 2 -3 1⟩ and the hemimean comma [-6 0 5 -2⟩ (which is another way of saying that septimal meantone also makes those commas vanish). And septimal blackwood can also be represented by the septimal third-tone [2 -3 0 1⟩ and the cloudy comma [-14 0 0 5⟩. So here's the entrywise sum of those two bases:
[math]\left[ \begin{array} {rrr}
1 & -6 \\
2 & 0 \\
-3 & 5 \\
1 & -2 \\
\end{array} \right]
+
\left[ \begin{array} {rrr}
2 & -14 \\
-3 & 0 \\
0 & 0 \\
1 & 5 \\
\end{array} \right]
=
\left[ \begin{array} {rrr}
(1+2) & ((-6)+(-14)) \\
(2+(-3)) & (0+0) \\
((-3)+0) & (5+0) \\
(1+1) & ((-2)+5) \\
\end{array} \right]
=
\left[ \begin{array} {rrr}
3 & -20 \\
-1 & 0 \\
-3 & 5 \\
2 & 3 \\
\end{array} \right][/math]
This works out for the individual monovectors too: it now makes none of the input commas vanish, but instead makes their sums vanish. But what we're looking at here is not a comma basis for the same temperament as we got the first time!
We can confirm this by putting both results into canonical form. That's exactly what canonical form is for: confirming whether or not two matrices are representations of the same temperament! The first result happens to already be in canonical form, so that's [[4 -1 -1 0⟩ [7 -8 0 2⟩]. This second result [[3 -1 -3 2⟩ [-20 0 5 3⟩] doesn't match that, but we can't be sure we don't have a match until we put it into canonical form. Its canonical form is [[49 -3 -19 0⟩ [-23 1 8 1⟩], which doesn't match, and so these are decidedly not the same temperament.
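We can corroborate this mismatch numerically: if the two results spanned the same comma space, stacking their bases side by side could not increase the rank. A sketch using sympy:

```python
from sympy import Matrix

# Comma vectors as columns; the same column space would mean the same temperament.
first = Matrix([[4, 7], [-1, -8], [-1, 0], [0, 2]])    # result of the first example
second = Matrix([[3, -20], [-1, 0], [-3, 5], [2, 3]])  # result of the second example
combined = first.row_join(second)
print(first.rank(), second.rank(), combined.rank())  # 2 2 4
# Since 4 > 2, the two column spaces are different subspaces: different temperaments.
```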
Third example: reordering of canonical form
In fact, we could even take the same sets of commas and merely reorder them to come up with a different result! Here, we'll just switch the order of the two commas in the representation of septimal blackwood:
[math]\left[ \begin{array} {rrr}
-4 & 13 \\
4 & -10 \\
-1 & 0 \\
0 & 1 \\
\end{array} \right]
+
\left[ \begin{array} {rrr}
6 & -8 \\
-2 & 5 \\
0 & 0 \\
-1 & 0 \\
\end{array} \right]
=
\left[ \begin{array} {rrr}
((-4)+6) & (13+(-8)) \\
(4+(-2)) & ((-10)+5) \\
((-1)+0) & (0+0) \\
(0+(-1)) & (1+0) \\
\end{array} \right]
=
\left[ \begin{array} {rrr}
2 & 5 \\
2 & -5 \\
-1 & 0 \\
-1 & 1 \\
\end{array} \right][/math]
And the canonical form of [[2 2 -1 -1⟩ [5 -5 0 1⟩] is [[7 -3 -1 0⟩ [5 -5 0 1⟩], so that's yet another possible temperament resulting from adding these non-addable temperaments.
Fourth example: other side of duality
We can even experience this without changing basis. Let's just compare the results we get from the canonical forms of these two temperaments, on either side of duality. The first example we worked through happened to use their canonical comma bases. So now let's look at their canonical mappings. Septimal meantone's is [⟨1 0 -4 -13] ⟨0 1 4 10]⟩ and septimal blackwood's is [⟨5 8 0 14] ⟨0 0 1 0]⟩. So what temperament do we get by summing these?
[math]\left[ \begin{array} {rrr}
1 & 0 & -4 & -13 \\
0 & 1 & 4 & 10 \\
\end{array} \right]
+
\left[ \begin{array} {rrr}
5 & 8 & 0 & 14 \\
0 & 0 & 1 & 0 \\
\end{array} \right]
=
\left[ \begin{array} {rrr}
(1+5) & (0+8) & ((-4)+0) & ((-13)+14) \\
(0+0) & (1+0) & (4+1) & (10+0) \\
\end{array} \right]
=
\left[ \begin{array} {rrr}
6 & 8 & -4 & 1 \\
0 & 1 & 5 & 10 \\
\end{array} \right][/math]
In order to compare this result directly with our other three results, let's take the dual of this [⟨6 8 -4 1] ⟨0 1 5 10]⟩, which is [[22 -15 3 0⟩ [41 -30 2 2⟩] (in canonical form), so we can see that's yet a fourth possible result.^{[8]}
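The dual here is just an integer basis for the mapping's null space. A sketch using sympy (the helper is ours; note that sympy may return a different, though equivalent, basis than the canonical one quoted above, i.e. one spanning the same comma space):

```python
from sympy import Matrix, lcm, gcd

def integer_nullspace(mapping):
    """Integer comma basis for a mapping: sympy's rational nullspace vectors,
    scaled to primitive integer vectors. A sketch, not a canonical-form routine."""
    vecs = []
    for v in Matrix(mapping).nullspace():
        v = v * lcm([entry.q for entry in v])  # clear denominators
        v = v / gcd(list(v))                   # reduce to coprime entries
        vecs.append(list(v))
    return vecs

print(integer_nullspace([[6, 8, -4, 1], [0, 1, 5, 10]]))
```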
Summary
Here are the four different results we've found so far:
[math]
\begin{array}{ccccccc}
\text{canonical} & & \text{alternate} & & \text{reordered canonical} & & \text{other side of duality} \\
\left[ \begin{array} {rrr}
4 & 7 \\
-1 & -8 \\
-1 & 0 \\
0 & 2 \\
\end{array} \right] &
≠ &
\left[ \begin{array} {rrr}
49 & -23 \\
-3 & 1 \\
-19 & 8 \\
0 & 1 \\
\end{array} \right] &
≠ &
\left[ \begin{array} {rrr}
7 & 5 \\
-3 & -5 \\
-1 & 0 \\
0 & 1 \\
\end{array} \right] &
≠ &
\left[ \begin{array} {rrr}
22 & 41 \\
-15 & -30 \\
3 & 2 \\
0 & 2 \\
\end{array} \right]
\end{array}
[/math]
What we're experiencing here is the effect first discussed in the early section Temperament addition#The temperaments are addable: since entrywise addition of matrices is an operation defined on matrices, not bases, we get different results for different bases.
This is in stark contrast to the situation when you have addable temperaments; once you get them into the form with the explicit [math]L_{\text{dep}}[/math] and only the single linearly independent basis vector, you will get the same resultant temperament regardless of which side of duality you add them on — the duals stay in sync, we could say — and regardless of which basis we choose.^{[9]}
And so we can see that, despite immediate appearances, while it may seem like we can simply do entrywise addition on temperaments with more than one basis vector not in common, this does not give reliable results per temperament.
How it looks with multivectors
We've now observed the outcome when adding non-addable temperaments using the matrix approach. It's instructive to observe how it works with multivectors as well. The canonical multicommas for septimal meantone and septimal blackwood are [[12 -13 4 10 -4 1⟩⟩ and [[14 0 -8 0 5 0⟩⟩, respectively. When we add these, we get [[26 -13 -4 10 1 1⟩⟩. What temperament is this — does it match any of the four comma bases we've already found? Let's check by converting it back to matrix form. Oh, wait — we can't. This is what we call an indecomposable multivector. In other words, there is no set of vectors that could be wedged together to produce this multivector. This is the way that multivectors convey to us that there is no true temperament sum of these two temperaments.
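For [math]d=4[/math] bivectors, decomposability is equivalent to the Plücker relation [math]p_{12}p_{34} - p_{13}p_{24} + p_{14}p_{23} = 0[/math]. A sketch with our own helpers (the raw wedges may differ from the canonical multicommas by an overall negation, which doesn't affect the check):

```python
from itertools import combinations

def wedge(u, v):
    """Bivector coordinates (p12, p13, p14, p23, p24, p34) of u wedge v."""
    return [u[i] * v[j] - u[j] * v[i] for i, j in combinations(range(4), 2)]

def decomposable(p):
    """Pluecker relation for d=4 bivectors."""
    p12, p13, p14, p23, p24, p34 = p
    return p12 * p34 - p13 * p24 + p14 * p23 == 0

meantone = wedge([-4, 4, -1, 0], [13, -10, 0, 1])   # septimal meantone's commas
blackwood = wedge([8, -5, 0, 0], [-6, 2, 0, 1])     # septimal blackwood's commas
total = [a + b for a, b in zip(meantone, blackwood)]
print(decomposable(meantone), decomposable(blackwood), decomposable(total))
# True True False: the sum fails the Pluecker relation, so it is indecomposable.
```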
Further explanations
Diagrammatic explanation
Introduction
The diagrams used for this explanation were inspired in part by Kite's gencoms, and specifically how in his "twin squares" matrices — which have dimensions [math]d×d[/math] — one can imagine shifting a bar up and down to change the boundary between the vectors that form a basis for the commas and those that form a generator detempering. The count of the former is the nullity [math]n[/math], and the count of the latter is the rank [math]r[/math], and the shifting of the boundary bar between them within the total [math]d[/math] vectors corresponds to the insight of the rank-nullity theorem, which states that [math]r + n=d[/math]. And so this diagram's square grid has just the right amount of room to portray both the mapping and the comma basis for a given temperament (with the comma basis's vectors rotated 90 degrees to appear as rows, to match up with the rows of the mapping).
So consider this first example of such a diagram:
(Diagram: a [math]d=4[/math] square grid with the black bar below the first row — [math]g_{\text{min}}=1[/math] above, [math]g_{\text{max}}=3[/math] below — and one red vector on each side of the bar: [math]l_{\text{ind}}=1[/math].)
This represents a [math]d=4[/math] temperament. These diagrams are grade-agnostic, which is to say that they are agnostic as to which side counts the [math]r[/math] and which side counts the [math]n[/math]. So we are showing them as [math]g_{\text{min}}[/math] and [math]g_{\text{max}}[/math] instead. We could say there's a variation on the rank-nullity theorem whereby [math]g_{\text{min}} + g_{\text{max}}=d[/math], just as [math]r + n=d[/math]. So we can then say that this diagram represents either an [math]r=1[/math], [math]n=3[/math] temperament, or an [math]n=1[/math], [math]r=3[/math] temperament.
But actually, this diagram represents more than just a single temperament. It represents a relationship between a pair of temperaments (which have the same dimensions, non-grade-agnostically, i.e. it is not a pairing of an [math]r=1[/math], [math]n=3[/math] temperament with an [math]r=3[/math], [math]n=1[/math] temperament). As elsewhere, green coloration indicates the linearly dependent basis vectors [math]L_{\text{dep}}[/math] between this pair of temperaments, and red coloration indicates the linearly independent basis vectors [math]L_{\text{ind}}[/math] between the same pair of temperaments.
So, in this case, the two ET maps are linearly independent. This should be unsurprising; because ET maps are constituted by only a single vector (they're [math]r=1[/math] by definition), if they were linearly dependent, then they'd necessarily be the same exact ET! Temperament addition on two of the same ET is never interesting; [math]T_1[/math] plus [math]T_1[/math] simply equals [math]T_1[/math] again, and [math]T_1[/math] minus [math]T_1[/math] is undefined. That said, if we were to represent temperament addition between two of the same temperament on such a diagram as this, then every cell would be green. And this is true regardless of whether [math]r=1[/math] or otherwise.
From this information, we can see that the comma bases of any randomly selected pair of different [math]d=4[/math] ETs are going to share 2 vectors; in other words, their [math]L_{\text{dep}}[/math] will have two basis vectors. In terms of the diagram, we're saying that they'll always have two green-colored vectors under the black bar.
These diagrams are a good way to understand which temperament relationships are possible and which aren't, where by a "relationship" here we mean a particular combination of their matching dimensions and their linear-independence count [math]l_{\text{ind}}[/math]. A good way to use these diagrams for this purpose is to imagine the red coloration emanating away from the black bar in both directions simultaneously, one pair of vectors at a time. Doing it like this captures the fact, as previously stated, that the [math]l_{\text{ind}}[/math] on either side of duality is always equal. There's no notion of a max or min here, as there is with [math]g[/math] or [math]l_{\text{dep}}[/math]; the [math]l_{\text{ind}}[/math] on either side is always the same, so we can capture it with a single number, which counts the red vectors on just one half (that is, half of the total count of red vectors, or half of the width of the red band in the middle of the grid).
There's no need to look at diagrams like this where the black bar is below the center. This is because, even though for convenience we're currently treating the top half as [math]r[/math] and the bottom half as [math]n[/math], these diagrams are ultimately grade-agnostic. So we could say that each one essentially represents not just one possibility for the relationship between two temperaments' dimensions and [math]l_{\text{ind}}[/math], but two such possibilities. Again, this diagram equally represents both [math]d=4, r=1, n=3, l_{\text{ind}}=1[/math] as well as [math]d=4, r=3, n=1, l_{\text{ind}}=1[/math]. Which is another way of saying we could vertically mirror it without changing it.
With the black bar always either in the top half or exactly in the center, we can see that the emanating red band will always either hit the top edge of the square grid first, or hit both the top and bottom edges of it simultaneously. So this is how these diagrams visually convey the fact that the [math]l_{\text{ind}}[/math] between two temperaments will always be less than or equal to their [math]g_{\text{min}}[/math]: a situation where [math]l_{\text{ind}} \gt g_{\text{min}}[/math] would visually look like the red band spilling past the edges of the square grid.
We could also say that two temperaments are linearly dependent on each other when [math]l_{\text{ind}} \lt g_{\text{max}}[/math], that is, their linear-independence is less than their max-grade.
Perhaps more importantly, we can also see from these diagrams that any pair of [math]g_{\text{min}}=1[/math] temperaments will be addable. Because if they are [math]g_{\text{min}}=1[/math], then the furthest the red band can extend from the black bar is 1 vector, and 1 mirrored set of red vectors means [math]l_{\text{ind}}=1[/math], and that's the definition of addability.
A simple [math]d=3[/math] example
Let's backpedal to [math]d=3[/math] for a simple illustrative example.
(Diagram: a [math]d=3[/math] grid with [math]g_{\text{min}}=1[/math] above the black bar, [math]g_{\text{max}}=2[/math] below, and one red vector on each side of the bar: [math]l_{\text{ind}}=1[/math].)
This diagram shows us that any two [math]d=3[/math], [math]g_{\text{min}}=1[/math] temperaments (like 5-limit ETs) will be linearly dependent, i.e. their comma bases will share one vector. You may already know this intuitively if you are familiar with the 5-limit projective tuning space diagram from the Middle Path paper, which shows how we can draw a line through any two ETs and that line will represent a temperament, and the single comma that temperament makes to vanish is this shared vector. The diagram also tells us that any two 5-limit temperaments that make only a single comma vanish will also be linearly dependent, for the opposite reason: their mappings will always share one vector.
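This shared-vector count can be computed directly: the dimension of the intersection of two subspaces is the sum of their dimensions minus the dimension of their union. A sketch for 12-ET and 7-ET using sympy (helper name is ours):

```python
from sympy import Matrix

def comma_basis(et_map):
    """Rational basis for an ET's comma space (two columns for a 5-limit ET)."""
    return Matrix([et_map]).nullspace()

c12 = comma_basis([12, 19, 28])
c7 = comma_basis([7, 11, 16])
union = Matrix.hstack(*(c12 + c7))
l_dep = len(c12) + len(c7) - union.rank()
print(l_dep)  # 1 — the one shared comma is (a multiple of) the meantone comma 81/80
```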
And we can see that there are no other diagrams of interest for [math]d=3[/math], because there's no sense in looking at diagrams with no red band, but we can't extend the red band any further than 1 vector on each side without going over the edge, and we can't lower the black bar any further without going below the center. So we're done. And our conclusion is that any pair of different [math]d=3[/math] temperaments that are nontrivial ([math]0 \lt n \lt d=3[/math] and [math]0 \lt r \lt d=3[/math]) will be addable.
Completing the suite of [math]d=4[/math] examples
Okay, back to [math]d=4[/math]. We've already looked at the [math]g_{\text{min}}=1[/math] possibility (which, for any [math]d[/math], there will only ever be one of). So let's start looking at the possibilities where [math]g_{\text{min}}=2[/math], which in the case of [math]d=4[/math] leaves us only one pair of values for [math]r[/math] and [math]n[/math]: both being 2.
(Diagram: a [math]d=4[/math] grid with the black bar in the center — [math]g_{\text{min}}=2[/math] and [math]g_{\text{max}}=2[/math] — and one red vector on each side of the bar: [math]l_{\text{ind}}=1[/math].)
But even with [math]d[/math], [math]r[/math], and [math]n[/math] fixed, we still have more than one possibility for [math]L_{\text{dep}}[/math]. The above diagram shows [math]l_{\text{ind}}=1[/math]. The below diagram shows [math]l_{\text{ind}}=2[/math].
(Diagram: a [math]d=4[/math] grid with the black bar in the center — [math]g_{\text{min}}=2[/math] and [math]g_{\text{max}}=2[/math] — and two red vectors on each side of the bar: [math]l_{\text{ind}}=2[/math].)
In the former possibility, where [math]l_{\text{ind}}=1[/math] (and therefore the temperaments are addable), we have a pair of different [math]d=4[/math], [math]r=2[/math] temperaments where we can find a single comma that both temperaments make to vanish, and — equivalently — we can find one ET that supports both temperaments.
In the latter possibility, where [math]l_{\text{ind}}=2[/math], neither side of duality shares any vectors in common. And so we've encountered our first example that is not addable. In other words, if the red band ever extends more than 1 vector away from the black bar, temperament addition is not possible. So [math]d=4[/math] is the first time we had enough room (half of [math]d[/math]) to support that condition.
We have now exhausted the possibility space for [math]d=4[/math]. We can't extend either the red band or the black bar any further.
[math]d=5[/math] diagrams finally reveal important relationships
So how about we go to [math]d=5[/math] (such as the 11-limit). As usual, starting with [math]g_{\text{min}}=1[/math]:
(Diagram: a [math]d=5[/math] grid with [math]g_{\text{min}}=1[/math] above the black bar, [math]g_{\text{max}}=4[/math] below, and one red vector on each side of the bar: [math]l_{\text{ind}}=1[/math].)
Just as with the [math]l_{\text{ind}}=1[/math] diagrams given for [math]d=3[/math] and [math]d=4[/math], we can see these are addable temperaments.
Now let's look at [math]d=5[/math] but with [math]g_{\text{min}}=2[/math]. This presents two possibilities. First, [math]l_{\text{ind}}=1[/math]:
(Diagram: a [math]d=5[/math] grid with [math]g_{\text{min}}=2[/math] above the black bar, [math]g_{\text{max}}=3[/math] below, and one red vector on each side of the bar: [math]l_{\text{ind}}=1[/math].)
And second, [math]l_{\text{ind}}=2[/math]:
(Diagram: a [math]d=5[/math] grid with [math]g_{\text{min}}=2[/math] above the black bar, [math]g_{\text{max}}=3[/math] below, and two red vectors on each side of the bar: [math]l_{\text{ind}}=2[/math].)
Here's where things really get interesting. In both of these cases, the pairs of temperaments represented are linearly dependent on each other (i.e. either their mappings are linearly dependent, their comma bases are linearly dependent, or both). So far, every pair of temperaments that has been linearly dependent has also had [math]l_{\text{ind}}=1[/math], and therefore been addable. But if you look at the second case here, we have [math]l_{\text{ind}}=2[/math], yet since [math]d=5[/math], the temperaments still manage to be linearly dependent. So this is the first example of a linearly dependent temperament pairing which is not addable.
Back to [math]d=2[/math], for a surprisingly tricky example
Beyond [math]d=5[/math], these diagrams get cumbersome to prepare, and cease to reveal further insights. But if we step back down to [math]d=2[/math], a place simpler than anywhere we've looked so far, we actually find another surprisingly tricky example, which is hopefully still illuminating.
So [math]d=2[/math] (such as the 3-limit) presents another case — similar to the [math]d=5[/math], [math]g_{\text{min}}=2[/math], [math]l_{\text{ind}}=2[/math] case explored most recently above — where the properties of linear dependence and addability do not match each other. But while in the other case we had a temperament pair that was linearly dependent yet not addable, in this [math]d=2[/math] (and therefore [math]g_{\text{min}}=1[/math], [math]l_{\text{ind}}=1[/math]) case, it is the other way around: addable yet linearly independent!
(Diagram: a [math]d=2[/math] grid with [math]g_{\text{min}}=1[/math] above the black bar, [math]g_{\text{max}}=1[/math] below, and one red vector on each side of the bar: [math]l_{\text{ind}}=1[/math].)
Basically, in the case of [math]d=2[/math], [math]g_{\text{max}}=1[/math] (in nontrivial cases, i.e. not JI or the unison temperament), so any two different ETs or commas you pick are going to be linearly independent (because the only way they could be linearly dependent would be to be the same temperament). And yet we know we can still entrywise add them to get new vectors that are decomposable, because they're already vectors (decomposing means expressing a multivector as a list of monovectors, so decomposing a multivector that's already a monovector like this is tantamount to merely putting array braces around it).
Geometric explanation
We've presented a diagrammatic illustration of the behavior of linear-independence [math]l_{\text{ind}}[/math] with respect to temperament dimensions. But some of the results might have seemed surprising. For instance, when looking at the diagram for [math]d=4, g_{\text{min}}=1, g_{\text{max}}=3[/math], it might have seemed intuitive enough that the red band could not extend beyond the square grid, but then again, why shouldn't it be possible to have, say, two 7-limit ETs which make only a single comma in common vanish? Perhaps it doesn't seem clear that this is impossible, and that they must make two commas in common vanish (and of course the infinitude of combinations of these two commas). If this is as unclear to you as it was to the author when exploring this topic, then this explanatory section is for you! Here, we will use geometrical representations of temperaments to hone our intuitions about the possible combinations of dimensions and linear-independence [math]l_{\text{ind}}[/math] of temperaments.
In this approach, we're actually not going to focus directly on the linear-independence [math]l_{\text{ind}}[/math] of temperaments. Instead, we're going to look at the linear-dependence [math]l_{\text{dep}}[/math] of matrices representing temperaments, such as mappings and comma bases, and then compute the linear-independence [math]l_{\text{ind}}[/math] from it and the grade [math]g[/math]. As we've established, the linear-dependence [math]l_{\text{dep}}[/math] differs from one side of duality to the other, so we'll only be looking at one side of duality at a time.
Introduction
In this geometric approach, we'll be imagining individual vectors as points (0D), sets of two vectors as lines (1D), sets of three as planes (2D), four as volumes (3D), and so forth, as according to this table:
vector count | geometric dimension | form
------------ | ------------------- | ----
0 | undefined | (emptiness)
1 | 0 | point
2 | 1 | line
3 | 2 | plane
4 | 3 | volume
5 | 4 | hypervolume
⋮ | ⋮ | ⋮
This is a "vector space", and these geometric dimensions are consistent with how temperaments represented by these counts of vectors appear in projective vector space, which reduces geometric dimensions by 1. For example, a vector has a geometric interpretation as a directed line segment, which is 1D, but a point is 0D, which is one dimension lower. Essentially what we're doing is assuming the origin.
Think of it this way: geometric points are zero-dimensional, simply representing a position in space, whereas linear algebra vectors are one-dimensional, representing both a magnitude and a direction; the way vectors manage to encode this extra dimension without providing any additional information is by being understood to describe this position in space relative to an origin. Well, we'll now switch our interpretation of these objects to the geometric one, where the vector's entries are nothing more than a coordinate for a point in space. And the "projection" involved in projective vector space essentially positions us at this discarded origin, looking out from it upon every individual point, which accomplishes the same feat, in a visual way.
Perhaps an example may help clarify this setup. Suppose we've got an (x,y,z) space, and two coordinates (5,8,12) and (7,11,16). You should recognize these as the simple maps for 5-ET and 7-ET, usually written as ⟨5 8 12] and ⟨7 11 16], respectively. Ask for the equation of the plane defined by the three points (5,8,12), (7,11,16), and the origin (0,0,0) and you'll get -4x + 4y - 1z = 0, which clearly shows us the entries of the meantone comma [-4 4 -1⟩. That's because meantone temperament can be defined by these two maps. 5-limit JI is a 3D space, and meantone temperament, as a rank-2 temperament, is a 2D plane within it. But we don't normally need to think of the map corresponding to the origin, where everything is made to vanish, including meantone. So we can just assume it, and think of a 2D plane as being defined by only 2 points, which in a view projected (from the origin) will look like just the line connecting (5,8,12) and (7,11,16).
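Since the normal of a plane through the origin and two points is the cross product of those points' coordinates, this example can be checked in one line:

```python
import numpy as np

# Normal of the plane through the origin, (5,8,12), and (7,11,16):
normal = np.cross([5, 8, 12], [7, 11, 16])
print(normal)  # [-4  4 -1], the meantone comma, giving the plane -4x + 4y - 1z = 0
```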
So, we've set the stage for our projective vector spaces. We will now be looking at representations of temperaments as sets of vectors, using the scheme above to convert their vector counts to primitive geometric forms. We'll place two of each form into the space, representing the two temperaments whose addability is being checked. Then we will observe their possible intersections depending on how they're oriented in space, and it's these intersections that represent their linear-dependence. When the dimension of the intersection is converted back to a vector count, we have their linear-dependence [math]l_{\text{dep}}[/math] — for this side of duality, anyway (remember, unlike the linear-independence [math]l_{\text{ind}}[/math], this value isn't necessarily the same on both sides of duality). We can finally subtract the linear-dependence from the grade (vector count) to get the linear-independence, in order to determine whether the two temperaments are addable.
In these examples, we'll be assuming that no two temperaments being compared are the same, because adding copies of the same temperament is not interesting. The other thing we'll be assuming is that no lines, planes, etc. are parallel to each other; this is due to a strange effect touched upon in footnote 4, whereby temperament geometry that appears parallel in projective space actually still intersects; the present author asks that if anyone is able to demystify this situation, they please do!
At [math]d=3[/math]
First, let's establish the geometric dimension of the space. With [math]d=3[/math], we've got a 2D space (one less than 3), so the entire space can be visualized on a plane.
Our only possible values for [math]g_{\text{min}}[/math] and [math]g_{\text{max}}[/math] here are 1 and 2, respectively. So these are the two possible counts of vectors [math]g[/math] possessed by matrices representing temperaments here.
So let's look at temperaments represented by matrices with 1 vector first ([math]g=1[/math]). In this case, each of the two temperaments is a point in the plane. Unless these two temperaments are the same temperament, the intersection of these two points is empty. Emptiness isn't even 0D! So that tells us that these temperaments have 0 vectors' worth of linear dependence. With [math]g=1[/math], that gives us [math]l_{\text{ind}} = g - l_{\text{dep}} = 1 - 0 = 1[/math].
Next, let's look at temperaments represented by matrices with 2 vectors ([math]g=2[/math]). In this case, each of the two temperaments is a line in the plane. Again, assuming the two lines are not the same line or parallel, their intersection is a point. Being 0D, that tells us that the linear-dependence of these matrices is 1. So that gives us [math]l_{\text{ind}} = g - l_{\text{dep}} = 2 - 1 = 1[/math]. This matches the value we found via the [math]g=1[/math] case, so we've effectively checked our work.
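This [math]g=2[/math] case can be confirmed with rank arithmetic; here we use the standard 5-limit mappings of meantone and porcupine (a sketch):

```python
from sympy import Matrix

# Standard 5-limit rank-2 mappings (tempering out 81/80 and 250/243 respectively).
meantone = Matrix([[1, 0, -4], [0, 1, 4]])
porcupine = Matrix([[1, 2, 3], [0, 3, 5]])
union_rank = Matrix.vstack(meantone, porcupine).rank()
l_dep = meantone.rank() + porcupine.rank() - union_rank
print(l_dep, 2 - l_dep)  # 1 1: one shared vector (the 7-ET map), so l_ind = 1
```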
At [math]d=2[/math]
Let's step back to [math]d=2[/math]. Here we've got a 1D space (2 minus 1), so the entire space can be visualized on a single line (one direction corresponds to an increasing ratio between the two coordinates, and the other to a decreasing ratio).
We know our only possible value for [math]g_{\text{min}}[/math] and [math]g_{\text{max}}[/math] here is 1. So in either case, each of the two temperaments is a point on the line. As with two points in a plane — when [math]d=3[/math] — unless these two temperaments are the same temperament, the intersection of these two points is empty. So again [math]l_{\text{ind}} = g = 1[/math].
At [math]d=4[/math]
First, let's establish the geometric dimension of the space. With [math]d=4[/math], we've got a 3D space (one less than 4), so the entire space can be visualized in a volume.
At [math]d=4[/math], we have a couple options for the grade: either [math]g_{\text{min}}=1[/math] and [math]g_{\text{max}}=3[/math], or both [math]g_{\text{min}}[/math] and [math]g_{\text{max}}[/math] equal 2.
Let's look at temperaments represented by matrices with 1 vector first ([math]g=1[/math]). Yet again, we find ourselves with two separate points, but now we find them in a space that's not a line or a plane, but a volume. This doesn't change [math]l_{\text{ind}} = g = 1[/math], so we're not even going to show it, or any further cases of [math]g=1[/math]. These are all addable.
And when [math]g=3[/math], because this is paired with [math]g=1[/math] from the min and max values, we should expect to get the same answer as with [math]g=1[/math]. And indeed, it checks out that way. Two [math]g=3[/math] temperaments will be planes in this volume, and the intersection of two planes is a line, which means that [math]l_{\text{dep}} = 2[/math]. And so [math]l_{\text{ind}} = g - l_{\text{dep}} = 3 - 2 = 1[/math]. And here's where our geometric approach begins to pay off! This was the example given at the beginning that might seem unintuitive when relying only on the diagrammatic approach. But here we can see clearly that there would be no way for two planes in a volume to intersect only at a point, which proves the fact that two 7-limit ETs could never make only a single comma in common vanish.
Next let's look at temperaments represented by matrices with 2 vectors, that is, when both [math]g_{\text{min}}[/math] and [math]g_{\text{max}}[/math] are equal to 2. What are the possible ways lines can occupy a volume together? In a plane, as it was with [math]d=3[/math] (and again assuming no parallel objects in these examples), two lines must intersect. But in a volume, here in [math]d=4[/math], it is possible for two lines to miss each other entirely. So, with [math]g=2[/math], it is possible to have [math]l_{\text{dep}} = 0[/math], which leads to [math]l_{\text{ind}} = g - l_{\text{dep}} = 2 - 0 = 2[/math]. Not addable in this case.
But we can also imagine two lines in a volume that do intersect at a point. This is the case where [math]l_{\text{ind}} = g - l_{\text{dep}} = 2 - 1 = 1[/math]: addable!
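These geometric cases can be checked numerically: [math]l_{\text{dep}}[/math] is the dimension of the intersection of the two temperaments' row spaces, which by the Grassmann relation equals the sum of their ranks minus the rank of the concatenated matrix. Here is a minimal Python sketch, assuming plain Gaussian elimination for rank; the vals used are the standard 7-limit patent vals, but the helper names are our own, not from any RTT library:

```python
from fractions import Fraction

def rank(rows):
    """Rank of an integer matrix, via Gaussian elimination over the rationals."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for col in range(len(m[0])):
        piv = next((i for i in range(r, len(m)) if m[i][col] != 0), None)
        if piv is None:
            continue  # no pivot in this column
        m[r], m[piv] = m[piv], m[r]
        for i in range(r + 1, len(m)):
            factor = m[i][col] / m[r][col]
            m[i] = [a - factor * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def l_ind(t1, t2):
    """Linear-independence between two same-grade temperament bases."""
    g = len(t1)  # grade: number of basis vectors in each matrix
    l_dep = rank(t1) + rank(t2) - rank(t1 + t2)  # dimension of the intersection
    return g - l_dep

# 7-limit (d=4) patent vals
val12, val19, val15, val22 = [12,19,28,34], [19,30,44,53], [15,24,35,42], [22,35,51,62]

# two g=2 temperaments sharing the 12-ET val: two lines meeting at a point
assert l_ind([val12, val19], [val12, val22]) == 1   # addable

# two g=2 temperaments with no shared direction: skew lines in the volume
assert l_ind([val12, val19], [val15, val22]) == 2   # not addable
```

This matches the diagrams: the shared val is the point where the two lines cross, and without it the lines pass each other without intersecting.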
At [math]d=5[/math]
First, let's establish the geometric dimension of the space. With [math]d=5[/math], we've got a 4D space (one less than 5), so the entire space can be visualized in a hypervolume. We've now gone beyond the dimensionality of physical reality, so unfortunately things get a little harder to conceptualize. But [math]d=5[/math] is the first [math]d[/math] where we can make an important point about addability, so please bear with us!
At [math]d=5[/math], we also have a couple options for the grade: either [math]g_{\text{min}}=1[/math] and [math]g_{\text{max}}=4[/math], or [math]g_{\text{min}}=2[/math] and [math]g_{\text{max}}=3[/math].
First we'll look at [math]g_{\text{min}}=1[/math] and [math]g_{\text{max}}=4[/math]. Temperament matrices with [math]g=1[/math] are still addable. And temperament matrices with [math]g=4[/math] should be too. We can see this visually in how two volumes in a hypervolume together will have an intersection in the shape of a plane. We can now see that there's a generalizable principle: any two [math](d-1)[/math]-dimensional objects will necessarily have a [math](d-2)[/math]-dimensional intersection, and thus have [math]l_{\text{ind}} = 1[/math] and be addable. So we won't need to show this one, or any further like it, either.
So let's look at temperament matrices with [math]g_{\text{min}}=2[/math] and [math]g_{\text{max}}=3[/math]. For [math]g=2[/math], we have two possible values for [math]l_{\text{dep}}[/math]: 0 or 1. Meaning that either the two lines through this hypervolume do not intersect (0), or they intersect at a point (1). These diagrams would look very much like the corresponding diagrams for [math]d=4[/math], so we will not be showing them. But what about when [math]g=3[/math]? We can certainly imagine two planes in a hypervolume intersecting at a line, just as they do in an ordinary volume — they're just not taking advantage of the additional geometric dimension. So we won't show that example either. But where it gets really interesting is imagining then taking one of these two planes and rotating it through that additional geometric dimension; this causes the intersection between the two planes to be reduced down to a single point. And this corresponds with the case of [math]l_{\text{dep}}=1[/math] here, which means [math]l_{\text{ind}} = g - l_{\text{dep}} = 3 - 1 = 2[/math], and is therefore not addable:
So for [math]g_{\text{min}}=2[/math] and [math]g_{\text{max}}=3[/math] we found two different possibilities for [math]l_{\text{ind}}[/math], namely 1 and 2, and we found each of them twice. These cases match up: the [math]g_{\text{min}}=2[/math] case with [math]l_{\text{ind}}=1[/math] corresponds to the [math]g_{\text{max}}=3[/math] case with [math]l_{\text{ind}}=1[/math], and the [math]l_{\text{ind}}=2[/math] cases correspond in the same way.
Summary table
Here's a summary table of our geometric findings so far:
| [math]d[/math] ([math]= g_{\text{min}} + g_{\text{max}}[/math]) | [math]g_{\text{min}}[/math] side: [math]g[/math] | [math]l_{\text{dep}}[/math] | [math]g_{\text{max}}[/math] side: [math]g[/math] | [math]l_{\text{dep}}[/math] | [math]l_{\text{ind}}[/math] ([math]= g - l_{\text{dep}}[/math]) |
| 2 | 1 | 0 | 1 | 0 | 1 |
| 3 | 1 | 0 | 2 | 1 | 1 |
| 4 | 1 | 0 | 3 | 2 | 1 |
| 4 | 2 | 0 | 2 | 0 | 2 |
| 4 | 2 | 1 | 2 | 1 | 1 |
| 5 | 1 | 0 | 4 | 3 | 1 |
| 5 | 2 | 0 | 3 | 1 | 2 |
| 5 | 2 | 1 | 3 | 2 | 1 |
Algebraic explanation
This explanation relies on comparing the results of the multivector and matrix approaches to temperament addition, showing algebraically that the matrix approach can only achieve the same answer as the multivector approach on the condition that all but one vector is kept the same between the added matrices; that is, not only must the temperaments be addable, but their [math]L_{\text{dep}}[/math] must appear explicitly in the added matrices.
To compare results, we eventually get both approaches into a multivector form. With the multivector approach, we wedge the vector set first and then add the resultant multivectors to get a new multivector. With the matrix approach, we treat the vector set as a matrix and add first, then treat the resultant matrix as a vector set and wedge those vectors to get a new multivector.
The demonstrations below are organized as follows: for each case, the multivector approach is shown first, then the matrix approach. The first part of each demonstration shows how the results of the two approaches match when the [math]L_{\text{dep}}[/math] is successfully explicit, and the second part shows how the results fail to match when it is not.
This first demonstration covers a [math]d=3, g=2[/math] case, where the [math]L_{\text{dep}}[/math] is the single vector [[math]a[/math] [math]b[/math] [math]c[/math]⟩.

Explicit [math]L_{\text{dep}}[/math]: both matrices contain [[math]a[/math] [math]b[/math] [math]c[/math]⟩ explicitly as their first row. With the multivector approach, we wedge first:

[math]\left[ \begin{array} {rrr} a & b & c \\ d & e & f \\ \end{array} \right] \rightarrow (bf-ce,\ af-cd,\ ae-bd) \qquad \left[ \begin{array} {rrr} a & b & c \\ g & h & i \\ \end{array} \right] \rightarrow (bi-ch,\ ai-cg,\ ah-bg)[/math]

and then add the resulting multivectors:

[math](bf-ce+bi-ch,\ af-cd+ai-cg,\ ae-bd+ah-bg)[/math]

With the matrix approach, we add first:

[math]\left[ \begin{array} {rrr} a & b & c \\ d & e & f \\ \end{array} \right] + \left[ \begin{array} {rrr} a & b & c \\ g & h & i \\ \end{array} \right] = \left[ \begin{array} {rrr} 2a & 2b & 2c \\ d+g & e+h & f+i \\ \end{array} \right][/math]

and then wedge, giving [math](2b(f+i)-2c(e+h),\ 2a(f+i)-2c(d+g),\ 2a(e+h)-2b(d+g))[/math], which simplifies (dividing out the common factor of 2) to [math](b(f+i)-c(e+h),\ a(f+i)-c(d+g),\ a(e+h)-b(d+g))[/math]. Distributing each entry, e.g. [math]b(f+i)-c(e+h) = bf-ce+bi-ch[/math], shows this is the same multivector found by the multivector approach: the results match.

Hidden [math]L_{\text{dep}}[/math]: the second matrix's first row is now a different vector [[math]j[/math] [math]k[/math] [math]l[/math]⟩. The multivector approach now gives

[math](bf-ce+ki-lh,\ af-cd+ji-lg,\ ae-bd+jh-kg)[/math]

while the matrix approach sums the matrices to [math]\left[ \begin{array} {rrr} a+j & b+k & c+l \\ d+g & e+h & f+i \\ \end{array} \right][/math] and wedges to [math]((b+k)(f+i)-(c+l)(e+h),\ (a+j)(f+i)-(c+l)(d+g),\ (a+j)(e+h)-(b+k)(d+g))[/math]. Expanding the first entry gives [math]bf+bi+kf+ki-ce-ch-le-lh[/math], which is not equal to [math]bf-ce+ki-lh[/math]: the cross terms do not cancel. The results fail to match.
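The [math]d=3[/math] comparison can be checked with concrete numbers. A minimal Python sketch, with arbitrary made-up entries (illustrative numbers only, not any particular temperament):

```python
def wedge(u, v):
    # bivector components of u∧v for 3-entry vectors, ordered as in the text:
    # (u2*v3 - u3*v2, u1*v3 - u3*v1, u1*v2 - u2*v1)
    return [u[1]*v[2] - u[2]*v[1], u[0]*v[2] - u[2]*v[0], u[0]*v[1] - u[1]*v[0]]

def add(p, q):
    return [a + b for a, b in zip(p, q)]

# explicit L_dep: both matrices share the vector [1 2 3⟩ as their first row
shared, v1, v2 = [1, 2, 3], [4, 5, 6], [7, 8, 10]
wedge_then_add = add(wedge(shared, v1), wedge(shared, v2))
add_then_wedge = wedge(add(shared, shared), add(v1, v2))  # wedge of the summed matrix
# the factor of 2 from the doubled shared vector divides out cleanly
assert add_then_wedge == [2 * x for x in wedge_then_add]

# hidden L_dep: replace one copy of the shared vector with a different vector
other = [2, 1, 3]
wedge_then_add = add(wedge(shared, v1), wedge(other, v2))
add_then_wedge = wedge(add(shared, other), add(v1, v2))
# now no scalar relates the two results: the approaches fail to match
assert wedge_then_add == [-17, -7, 6]
assert add_then_wedge == [-30, -18, 6]
```

With the shared row explicit, add-then-wedge is exactly twice wedge-then-add; without it, the two results aren't even proportional.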
This second demonstration covers a [math]d=5, g=3[/math] case. In the failing case here, one pair of the [math]L_{\text{dep}}[/math] vectors matches explicitly, but the other does not, and that isn't enough.
Explicit [math]L_{\text{dep}}[/math]: the first matrix has rows [math]r_1 = [a\ b\ c\ d\ e\rangle[/math], [math]r_2 = [f\ g\ h\ i\ j\rangle[/math], and [math]r_3 = [k\ l\ m\ n\ o\rangle[/math]; the second matrix shares [math]r_1[/math] and [math]r_2[/math] (these two vectors are the [math]L_{\text{dep}}[/math]) but has third row [math][p\ q\ r\ s\ t\rangle[/math].

With the multivector approach, we wedge within each matrix first. The shared part [math]r_1∧r_2[/math] has the ten components

[math](ag-bf,\ ah-cf,\ ai-df,\ aj-ef,\ bh-cg,\ bi-dg,\ bj-eg,\ ci-dh,\ cj-eh,\ dj-ei)[/math]

Wedging with each matrix's third row and then adding the two resulting multivectors gives components such as

[math]\big(k(bh-cg)-l(ah-cf)+m(ag-bf)\big)+\big(p(bh-cg)-q(ah-cf)+r(ag-bf)\big) = (k+p)(bh-cg)-(l+q)(ah-cf)+(m+r)(ag-bf)[/math]

(shown here for the first of the ten components; the other nine follow the same pattern).

With the matrix approach, we add first, giving the matrix with rows [math]2r_1[/math], [math]2r_2[/math], and [math][k+p\ \ l+q\ \ m+r\ \ n+s\ \ o+t\rangle[/math]. Wedging the first two rows gives [math]4(r_1∧r_2)[/math]; simplifying away the common factor of 4 and then wedging with the third row gives exactly the same ten components as above. The two approaches match.

Hidden [math]L_{\text{dep}}[/math]: now the second matrix shares only [math]r_1[/math] explicitly; its second row is a different vector [math][u\ v\ w\ x\ y\rangle[/math], and its third row is [math][p\ q\ r\ s\ t\rangle[/math]. The multivector approach's first component is

[math]\big(k(bh-cg)-l(ah-cf)+m(ag-bf)\big)+\big(p(bw-cv)-q(aw-cu)+r(av-bu)\big)[/math]

while the matrix approach, after summing the matrices (rows [math]2r_1[/math], [math][f+u\ \ g+v\ \ h+w\ \ i+x\ \ j+y\rangle[/math], [math][k+p\ \ l+q\ \ m+r\ \ n+s\ \ o+t\rangle[/math]) and dividing out the factor of 2 from the doubled [math]r_1[/math], gives first component

[math](k+p)\big(b(h+w)-c(g+v)\big)-(l+q)\big(a(h+w)-c(f+u)\big)+(m+r)\big(a(g+v)-b(f+u)\big)[/math]

Expanding shows these are not equal: the matrix approach's result contains cross terms such as [math]p(bh-cg)[/math] and [math]k(bw-cv)[/math] that the multivector approach's result does not. The two approaches fail to match.
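Both comparisons can also be checked mechanically, since the wedge of a matrix's rows is just its list of largest minors. A Python sketch, where the generic `wedge` and `det` helpers are our own illustrations, and the example vectors are deliberately simple basis-style vectors (not any particular temperament) chosen so the mismatch is easy to see:

```python
from itertools import combinations

def det(m):
    # Laplace-expansion determinant of a small square integer matrix
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j+1:] for row in m[1:]])
               for j in range(len(m)))

def wedge(mat):
    # multivector of a g×d matrix: its largest (g×g) minors, columns in lex order
    d = len(mat[0])
    return [det([[row[c] for c in cols] for row in mat])
            for cols in combinations(range(d), len(mat))]

def add(m1, m2):
    return [[a + b for a, b in zip(r1, r2)] for r1, r2 in zip(m1, m2)]

e1, e2, e3, e4, e5 = [[int(i == k) for i in range(5)] for k in range(5)]

# explicit L_dep: the d=5, g=3 matrices share their first two rows
m1, m2 = [e1, e2, e4], [e1, e2, e5]
vec_sum = [a + b for a, b in zip(wedge(m1), wedge(m2))]
# the two doubled shared rows contribute a clean factor of 2·2 = 4
assert wedge(add(m1, m2)) == [4 * x for x in vec_sum]

# hidden L_dep: only the first row is shared; the second rows differ
m1, m2 = [e1, e2, e4], [e1, e3, e5]
vec_sum = [a + b for a, b in zip(wedge(m1), wedge(m2))]
# no scalar multiple relates the two results: the approaches fail to match
assert not any(wedge(add(m1, m2)) == [s * x for x in vec_sum] for s in range(-8, 9))
```

Sharing all but one vector always works because the wedge is multilinear in each row; sharing fewer leaves uncancelled cross terms, exactly as in the algebra above.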
These two examples are by no means a proof, but meditation on the patterns in the variables is at least fairly convincing.
Sintel's proof of the linear-independence conjecture
Sintel's original text
If A and B are mappings from Z^n to Z^m, with n > m, A, B full rank (using A and B as their rowspace equivalently):

dim(A + B) - m = dim(ker(A) + ker(B)) - (n-m)

>> dim(A) + dim(B) = dim(A+B) + dim(A∩B)
=> dim(A + B) = dim(A) + dim(B) - dim(A∩B)

dim(A) + dim(B) - dim(A∩B) - m = dim(ker(A) + ker(B)) - (n-m)

>> by duality of kernel, dim(ker(A) + ker(B)) = dim(ker(A ∩ B))

dim(A) + dim(B) - dim(A∩B) - m = dim(ker(A ∩ B)) - (n-m)

>> rank nullity: dim(ker(A ∩ B)) + dim(A ∩ B) = n

dim(A) + dim(B) - dim(A∩B) - m = n - dim(A ∩ B) - (n-m)
m + m - dim(A∩B) - m = n - dim(A ∩ B) - (n-m)
m + m - m = n - n + m
m = m
Douglas Blumeyer's interpretation
We're going to take the strategy of beginning with what we're trying to prove, then reducing it to an obvious equivalence, which will show that our initial statement must be just as true.
So here's the statement we're trying to prove, which we'll call Equation A:
[math]\text{rank}(\text{union}(M_1, M_2)) - r = \text{nullity}(\text{union}(C_1, C_2)) - n[/math]
[math]M_1[/math] and [math]M_2[/math] are mappings which both have dimensionality [math]d[/math], rank [math]r[/math], and nullity [math]n[/math], and are full-rank; [math]C_1[/math] and [math]C_2[/math] are their respective comma bases.
Technically since these matrices are representing subspace bases, the correct operation here is "sumset", not "union", but because "union" is a more commonly known opposite of intersection and would work for plain matrices, I've decided to stick with it here.
So, the left-hand side of this equation is a way to express the count of linearly independent basis vectors, [math]l_{\text{ind}}[/math], existing between [math]M_1[/math] and [math]M_2[/math]. The right-hand side tells you the same thing, but between [math]C_1[/math] and [math]C_2[/math]. The fact that these two things are equal is the thing we're trying to prove. So let's go!
Let's call the following Equation B. This makes sense because basis vectors between [math]M_1[/math] and [math]M_2[/math] are either going to be linearly dependent or linearly independent. The union is going to be all of [math]M_1[/math]'s independent vectors, all of [math]M_2[/math]'s independent vectors, and all of [math]M_1[/math] and [math]M_2[/math]'s dependent vectors, but only one copy of them. The intersection, meanwhile, is going to be all of [math]M_1[/math] and [math]M_2[/math]'s dependent vectors again — essentially the other copy of them. So the two sides sum to the same thing.
[math]\text{rank}(M_1) + \text{rank}(M_2) = \text{rank}(\text{union}(M_1, M_2)) + \text{rank}(\text{intersection}(M_1, M_2))[/math]
Then this is just Equation B, rearranged.
[math]\text{rank}(\text{union}(M_1, M_2)) = \text{rank}(M_1) + \text{rank}(M_2) - \text{rank}(\text{intersection}(M_1, M_2))[/math]
This takes Equation B, solves it for [math]\text{rank}(\text{union}(M_1, M_2))[/math], then substitutes that result into Equation A, which is then flipped left/right, and then [math]r[/math] is subtracted from both sides.
[math]\text{rank}(M_1) + \text{rank}(M_2) - \text{rank}(\text{intersection}(M_1, M_2)) - r = \text{nullity}(\text{union}(C_1, C_2)) - n[/math]
By the "duality of the comma basis", this is Equation C:
[math]\text{nullity}(\text{union}(C_1, C_2)) = \text{nullity}(\text{nullspace}(\text{intersection}(M_1, M_2)))[/math]
Now substitute the right-hand side of Equation C for [math]\text{nullity}(\text{union}(C_1, C_2))[/math] in the equation above.
[math]\text{rank}(M_1) + \text{rank}(M_2) - \text{rank}(\text{intersection}(M_1, M_2)) - r = \text{nullity}(\text{nullspace}(\text{intersection}(M_1, M_2))) - n[/math]
This is the rank nullity theorem where [math]\text{intersection}(M_1, M_2)[/math] is the temperament. Let's call it Equation D:
[math]\text{nullity}(\text{nullspace}(\text{intersection}(M_1, M_2))) + \text{rank}(\text{intersection}(M_1, M_2)) = d[/math]
Now solve Equation D for [math]\text{nullity}(\text{nullspace}(\text{intersection}(M_1, M_2)))[/math], and substitute that result into the equation above:
[math]\text{rank}(M_1) + \text{rank}(M_2) - \text{rank}(\text{intersection}(M_1, M_2)) - r = d - \text{rank}(\text{intersection}(M_1, M_2)) - n[/math]
Now realize that [math]\text{rank}(M_1)[/math] and [math]\text{rank}(M_2)[/math] are both equal to [math]r[/math].
[math]r + r - \text{rank}(\text{intersection}(M_1, M_2)) - r = d - \text{rank}(\text{intersection}(M_1, M_2)) - n[/math]
Now cancel the [math]\text{rank}(\text{intersection}(M_1, M_2))[/math] from both sides, and substitute in [math](d - r)[/math] for [math]n[/math].
[math]r + r - r = d - d + r[/math]
Now cancel the [math]r[/math]'s on the left and the [math]d[/math]'s on the right:
[math]r = r[/math]
So we know this is true.
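The chain of reasoning above can be spot-checked numerically. A sketch using the standard 5-limit forms of meantone and porcupine (the rank routine is plain Gaussian elimination over the rationals, our own helper, not part of the RTT library):

```python
from fractions import Fraction

def rank(rows):
    # Gaussian elimination over the rationals
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for col in range(len(m[0])):
        piv = next((i for i in range(r, len(m)) if m[i][col] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(r + 1, len(m)):
            f = m[i][col] / m[r][col]
            m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

# 5-limit meantone: mapping M1, comma basis C1 (81/80)
M1, C1 = [[1, 0, -4], [0, 1, 4]], [[-4, 4, -1]]
# 5-limit porcupine: mapping M2, comma basis C2 (250/243)
M2, C2 = [[1, 2, 3], [0, 3, 5]], [[1, -5, 3]]

d, r, n = 3, 2, 1

# Equation A: rank(union(M1, M2)) - r = nullity(union(C1, C2)) - n
# (for a union of comma bases, the nullity is the number of independent commas,
# i.e. the rank of the stacked comma matrix)
lhs = rank(M1 + M2) - r   # 3 - 2 = 1
rhs = rank(C1 + C2) - n   # 2 - 1 = 1
assert lhs == rhs == 1    # one linearly independent vector on each side: addable
```

Both sides give 1, agreeing with the earlier finding that 5-limit meantone and porcupine are addable.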
Glossary
 [math]d[/math]: dimensionality, the dimension of a temperament's domain
 [math]r[/math]: rank, the dimension of a temperament's mapping
 [math]n[/math]: nullity, the dimension of a temperament's comma basis
 [math]g[/math]: grade, the generic term for rank or nullity
 [math]g_{\text{min}}[/math]: min-grade, the minimum of a temperament's rank and nullity, [math]\min(r,n)[/math]
 [math]g_{\text{max}}[/math]: max-grade, the maximum of a temperament's rank and nullity, [math]\max(r,n)[/math]
 [math]L_{\text{dep}}[/math]: linear-dependence basis, a basis for all the linearly dependent vectors between two temperaments
 [math]L_{\text{ind}}[/math]: linear-independence basis, a basis for all the vectors of a temperament that are linearly independent from a specific other temperament
 [math]l_{\text{dep}}[/math]: linear-dependence, the dimension of the [math]L_{\text{dep}}[/math]
 [math]l_{\text{ind}}[/math]: linear-independence, the dimension of the [math]L_{\text{ind}}[/math]
 dimensions: the [math]d[/math], [math]r[/math], and [math]n[/math] of a temperament
 addable: two temperaments are addable when they have the same dimensions and have [math]l_{\text{ind}} = 1[/math]
 negation: a mapping is negated when the leading entry of its largest-minors is negative; a comma basis is negated when the trailing entry of its largest-minors is negative
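The negation check can be sketched in code. The helpers below are hypothetical illustrations (not the RTT library's implementation), computing largest minors directly from column combinations and taking "leading entry" as the first nonzero minor; 5-limit meantone's mapping serves as the example:

```python
from itertools import combinations

def det2(m):
    # determinant of a 2×2 matrix
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def largest_minors(mat):
    # the g×g minors of a g×d matrix (g = 2 here), columns taken in lex order
    d = len(mat[0])
    return [det2([[row[c] for c in cols] for row in mat])
            for cols in combinations(range(d), len(mat))]

def is_negated_mapping(mat):
    # a mapping is negated when the leading (here: first nonzero) entry
    # of its largest-minors list is negative
    leading = next(x for x in largest_minors(mat) if x != 0)
    return leading < 0

meantone = [[1, 0, -4], [0, 1, 4]]
assert largest_minors(meantone) == [1, 4, 4]
assert not is_negated_mapping(meantone)
# negating one row negates every minor, flipping the leading entry's sign
assert is_negated_mapping([[-1, 0, 4], [0, 1, 4]])
```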
Wolfram implementation
Temperament arithmetic has been implemented as the functions sum and diff in the RTT library in Wolfram Language.
Credits
This page is mostly the work of Douglas Blumeyer, and he assumes full responsibility for any inaccuracies or other shortcomings here. But he would like to thank Mike Battaglia, Dave Keenan, and Sintel for the huge amounts of counseling they provided. There's no way this page could have come together without their help. In particular, the page would not exist at all without the original spark of inspiration from Mike.
Footnotes
 ↑ It has also been asserted that there exists a connection between temperament addition and "Fokker groups" as discussed on this page: Fokker block, but the connection remains unclear to this author.
 ↑ or they are all the same temperament, in which case they share all the same basis vectors and could perhaps be said to be completely linearly dependent.
 ↑ At least, this mapping would have a total of four rows before it is canonicalized. After canonicalization, it may end up with only three (or two if you map-merged a temperament with itself for some reason).
 ↑ or — equivalently, in EA — either their multimaps or their multicommas are linearly dependent
 ↑ You may be wondering — what about two temperaments which are parallel in tone or tuning space, e.g. compton and blackwood in tuning space? Their comma bases are each [math]n=1[/math], and they merge to give an [math]n=2[/math] comma basis, which corresponds to an [math]r=1[/math] mapping, which means it should appear as an ET point on the PTS diagram. But how could that be? Well, here's their comma-merge: [[1 0 0⟩ [0 1 0⟩], and so that corresponding mapping is [⟨0 0 1]}. So it's some degenerate ET. I suppose we could say it's the point at infinity away from the center of the diagram.
 ↑ This conjecture was first suggested by Mike Battaglia, but it has not yet been mathematically proven. Sintel and Tom Price have done some experiments but nothing complete yet. Douglas Blumeyer's test cases in the RTT library in Wolfram Language have empirically supported it, though.
 ↑ or you may prefer to think of this as three different (prime) factors: 2, 3, 5 (which multiply to 30)
 ↑ It is possible to find a pair of mapping forms for septimal meantone and septimal blackwood that sum to a mapping which is the dual of the comma basis found by summing their canonical comma bases. One example is [⟨97 152 220 259] ⟨30 47 68 80]} + [⟨95 152 212 266] ⟨30 48 67 84]}.
 ↑ Note that different bases are possible for addable temperaments, e.g. the simplest addable forms for 5-limit meantone and porcupine are [⟨7 11 16] ⟨2 3 4]} + [⟨7 11 16] ⟨-1 -2 -3]} = [⟨14 22 32] ⟨1 1 1]} which canonicalizes to [⟨1 1 1] ⟨0 4 9]}. But [⟨7 11 16] ⟨9 14 20]} + [⟨7 11 16] ⟨-1 -2 -3]} also works (in the meantone mapping, we've added one copy of the first vector to the second), giving [⟨14 22 32] ⟨8 12 17]} which also canonicalizes to [⟨1 1 1] ⟨0 4 9]}; in fact, as long as the [math]L_{\text{dep}}[/math] is explicit and neither matrix is enfactored, the entrywise addition will work out fine.