Temperament addition

'''Temperament arithmetic''' is the general name for either the '''temperament sum''' or the '''temperament difference''', which are two closely related operations on [[regular temperaments]]. Basically, to do temperament arithmetic means to match up the entries of temperament vectors and then add or subtract them individually. The result is a new temperament that has similar properties to the original temperaments.


For example, the sum of [[12-ET]] and [[7-ET]] is [[19-ET]] because {{map|12 19 28}} + {{map|7 11 16}} = {{map|(12+7) (19+11) (28+16)}} = {{map|19 30 44}}, and the difference of 12-ET and 7-ET is 5-ET because {{map|12 19 28}} - {{map|7 11 16}} = {{map|(12-7) (19-11) (28-16)}} = {{map|5 8 12}}. We can write these using [[wart notation]] as 12p + 7p = 19p and 12p - 7p = 5p, respectively. The similarity in these temperaments can be seen in how, like both 12-ET and 7-ET, 19-ET (their sum) and 5-ET (their difference) both also support [[meantone temperament]].
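This entry-wise arithmetic is easy to sketch in code. Here is a minimal Python illustration (the function names are our own, and the maps are written as plain integer lists):

```python
# Entry-wise sum and difference of two equal-temperament maps (sketch).
def map_sum(a, b):
    return [x + y for x, y in zip(a, b)]

def map_diff(a, b):
    return [x - y for x, y in zip(a, b)]

et12 = [12, 19, 28]  # 5-limit patent map of 12-ET
et7 = [7, 11, 16]    # 5-limit patent map of 7-ET

print(map_sum(et12, et7))   # [19, 30, 44], i.e. 19-ET
print(map_diff(et12, et7))  # [5, 8, 12], i.e. 5-ET
```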


Temperament sums and differences can also be found using commas; for example meantone + porcupine = tetracot because {{vector|4 -4 1}} + {{vector|1 -5 3}} = {{vector|(4+1) (-4+-5) (1+3)}} = {{vector|5 -9 4}} and meantone - porcupine = dicot because {{vector|4 -4 1}} - {{vector|1 -5 3}} = {{vector|(4-1) (-4--5) (1-3)}} = {{vector|3 1 -2}}. We could write this in ratio form — replacing addition with multiplication and subtraction with division — as 80/81 × 250/243 = 20000/19683 and 80/81 ÷ 250/243 = 25/24, respectively. The similarity in these temperaments can be seen in how all of them are supported by 7-ET.
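The same sketch works for commas, and the ratio form can be cross-checked with exact arithmetic. A hedged Python example (`monzo_sum`, `monzo_to_ratio`, and `PRIMES` are our own names; only the 5-limit is assumed):

```python
from fractions import Fraction

PRIMES = [2, 3, 5]  # 5-limit

def monzo_sum(a, b):
    return [x + y for x, y in zip(a, b)]

def monzo_to_ratio(monzo):
    # Multiply out the prime-count vector's entries into an exact ratio.
    r = Fraction(1)
    for p, e in zip(PRIMES, monzo):
        r *= Fraction(p) ** e
    return r

meantone = [4, -4, 1]   # 80/81
porcupine = [1, -5, 3]  # 250/243

tetracot = monzo_sum(meantone, porcupine)
print(tetracot)                  # [5, -9, 4]
print(monzo_to_ratio(tetracot))  # 20000/19683
```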


Temperament arithmetic is simplest for temperaments which can be represented by single vectors, as demonstrated in these examples. In other words, it is simplest for temperaments that are either rank-1 ([[equal temperament]]s, or ETs for short) or nullity-1 (having only a single comma). Because [[grade]] <math>g</math> is the generic term for rank <math>r</math> and nullity <math>n</math>, we could define the minimum grade <math>g_{\text{min}}</math> of a temperament as the minimum of its rank and nullity <math>\min(r,n)</math>, and so for convenience in this article we will refer to <math>r=1</math> (read "rank-1") or <math>n=1</math> (read "nullity-1") temperaments as <math>g_{\text{min}}=1</math> (read "min-grade-1") temperaments. We'll also use <math>g_{\text{max}}</math> (read "max-grade"), which naturally is equal to <math>\max(r,n)</math>.


For <math>g_{\text{min}}>1</math> temperaments, temperament arithmetic gets a little trickier. This is discussed in the [[Temperament_arithmetic#Beyond_.5Bmath.5D.5Cmin.28g.29.3D1.5B.2Fmath.5D|beyond <math>g_{\text{min}}=1</math> section]] later.


==Visualizing temperament arithmetic==
[[File:Visualization of temperament arithmetic.png|500px|right|thumb|A visualization of temperament arithmetic on projective tuning space.]]


This shows both the sum and the difference of porcupine and meantone. All four temperaments — the two input temperaments, porcupine and meantone, as well as the sum, tetracot, and the difference, dicot — can be seen to intersect at 7-ET. This is because all four temperaments' [[mapping]]s can be expressed with the map for 7-ET as one of their mapping rows.


These are all <math>r=2</math> temperaments, so their mappings each have one other row besides the one reserved for 7-ET. Any line that we draw across these four temperament lines will strike four ETs whose maps have a sum and difference relationship. On this diagram, two such lines have been drawn. The first one runs through 5-ET, 20-ET, 15-ET, and 10-ET. We can see that 5 + 15 = 20, which corresponds to the fact that 20-ET is the ET on the line for tetracot, which is the sum of porcupine and meantone, while 5-ET and 15-ET are the ETs on their lines. Similarly, we can see that 15 - 5 = 10, which corresponds to the fact that 10-ET is the ET on the line for dicot, which is the difference of porcupine and meantone.


The other line runs through the ETs 12, 41, 29, and 17, and we can see again that 12 + 29 = 41 and 29 - 12 = 17.


[[File:Visualization of temperament arithmetic on projective tone space.png|300px|thumb|right|A visualization of temperament arithmetic on projective tone space.]]


We can also visualize temperament arithmetic on [[projective tone space]]. Here relationships are inverted: points are lines, and lines are points. So all four temperaments are found along the line for 7-ET.


Note that when viewed in tuning space, the sum is found between the two input temperaments, and the difference is found on the outside of them, to one side or the other. In tone space, by contrast, it's the difference that's found between the two input temperaments, and it's the sum that's found outside. In either situation, when a temperament is on the outside and may be on one side or the other, the explanation for this can be inferred from the behavior of the scale tree on any temperament line, where e.g. if 5-ET and 7-ET support a <math>r=2</math> temperament, then so will 5 + 7 = 12-ET, and then so will 5 + 12 and 7 + 12 in turn, and so on recursively; when you navigate like this, in what we could call going ''down'' the scale tree, children are always found between their parents. But when you try to go back ''up'' the scale tree, to one or the other parent, you may not immediately know on which side of the child to look.
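The supporting-ETs claim can be spot-checked numerically: a map supports a temperament when it sends that temperament's comma to zero steps, and since entry-wise addition is linear, the sum of two such maps does too. A small Python sketch (patent maps hard-coded; the meantone comma 81/80 is used as the example):

```python
# A map "tempers out" a comma when the dot product of map and monzo is zero.
def steps(et_map, monzo):
    return sum(m * e for m, e in zip(et_map, monzo))

meantone_comma = [-4, 4, -1]        # 81/80 as a 5-limit monzo
et5, et7 = [5, 8, 12], [7, 11, 16]  # patent maps of 5-ET and 7-ET
et12 = [a + b for a, b in zip(et5, et7)]  # their sum: 12-ET

print(et12)  # [12, 19, 28]
print([steps(m, meantone_comma) for m in (et5, et7, et12)])  # [0, 0, 0]
```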


==Conditions on temperament arithmetic==


Temperament arithmetic is only possible for temperaments with the same [[dimensions]], that is, the same [[rank]] and [[dimensionality]] (and therefore, by the [[rank-nullity theorem]], also the same [[nullity]]). The reason for this is visually obvious: without the same <math>d</math>, <math>r</math>, and <math>n</math> (dimensionality, rank, and nullity, respectively), the numeric representations of the temperament — such as matrices and multivectors — will not have the same proportions, and therefore their entries cannot be matched up one-to-one. From this condition it also follows that the result of temperament arithmetic will be a new temperament with the same <math>d</math>, <math>r</math>, and <math>n</math> as the input temperaments.


Matching the dimensions is only the first of two conditions on the possibility of temperament arithmetic. The second condition is that the temperaments must all be '''addable'''. This condition is trickier, though, and so a detailed discussion of it will be deferred to a later section (here: [[Temperament arithmetic#Addability]]). But we can at least say here that any set of <math>g_{\text{min}}=1</math> temperaments are addable<ref>or they are all the same temperament, in which case they <span style="color: #3C8031;">share all the same basis vectors and could perhaps be said to be ''completely'' linearly dependent.</span></ref>, fortunately, so we don't need to worry about it in that case.


==Versus meet and join==


Like [[meet and join]], temperament arithmetic takes temperaments as inputs and finds a new temperament sharing properties of the inputs. And they both can be understood as, in some sense, ''adding'' these input temperaments together.


But there is a big difference between temperament arithmetic and meet/join. Temperament arithmetic is done using ''entry-wise'' addition (or subtraction), whereas meet/join are done using ''concatenation''. So the temperament sum of mappings with two rows each is a new mapping that still has exactly two rows, while, on the other hand, the join of mappings with two rows each is a new mapping that has a total of four rows<ref>At least, this mapping would have a total of four rows before it is reduced. After reduction, it may end up with only three (or two if you joined a temperament with itself for some reason).</ref>.
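The difference in shapes is easy to see in code. In this hedged sketch, `m2` is a made-up second mapping, used only to show the row counts:

```python
m1 = [[1, 0, -4], [0, 1, 4]]  # a two-row mapping (meantone)
m2 = [[1, 2, 4], [0, 3, 5]]   # hypothetical two-row mapping, for shape only

# Entry-wise sum: still two rows.
entry_wise_sum = [[a + b for a, b in zip(r1, r2)] for r1, r2 in zip(m1, m2)]
# Concatenation (join-style, before any reduction): four rows.
concatenated = m1 + m2

print(len(entry_wise_sum))  # 2
print(len(concatenated))    # 4
```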
===The linear dependence connection===


Another connection between temperament arithmetic and meet/join is that they ''may'' involve checks for linear dependence.


Temperament arithmetic, as stated earlier, always requires addability, which is a more complex property involving linear dependence.
[[File:Very simple illustration of temperament sum vs diff.png|500px|thumb|left|Equivalences of temperament arithmetic depending on negativity.]]


The temperament difference can be understood as being the same operation as the temperament sum except with one of the two temperaments negated.


For single vectors (and multivectors), negation is as simple as changing the sign of every entry; for matrices, negation is accomplished by choosing a single row (in the case of mappings) or column (in the case of comma bases) and changing the sign of every entry in it.


Suppose you have a matrix representing temperament <math>T_1</math> and another matrix representing <math>T_2</math>. If you want to find both their sum and difference, you can calculate both <math>T_1 + T_2</math> and <math>T_1 + -T_2</math>. There's no need to also find <math>-T_1 + T_2</math>; this will merely give the negation of <math>T_1 + -T_2</math>. The same goes for <math>-T_1 + -T_2</math>, which is the negation of <math>T_1 + T_2</math>.


But a question remains: which result between <math>T_1 + T_2</math> and <math>T_1 + -T_2</math> is actually the sum and which is the difference? This seems like an obvious question to answer, except for one key problem: how can we be certain that <math>T_1</math> or <math>T_2</math> wasn't already in negated form to begin with? We need to establish a way to check for matrix negativity.


The check is related to canonicalization of varianced multivectors as used in exterior algebra for RTT. Essentially, we take the minors of the matrix, and then look at their leading or trailing entry (leading in the case of a covariant matrix, like a mapping; trailing in the case of a contravariant matrix, like a comma basis): if this entry is positive, so is the temperament, and vice versa.
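As a sketch of that check (helper names are our own; canonicalization details such as removing a common factor from the minors are omitted): compute all <math>r \times r</math> minors in column order, then inspect the leading nonzero entry.

```python
from itertools import combinations

def det(m):
    # Laplace expansion; fine for the small matrices in RTT examples.
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def minors(matrix):
    # All r-by-r minors, ordered by column combination.
    r = len(matrix)
    return [det([[row[c] for c in cols] for row in matrix])
            for cols in combinations(range(len(matrix[0])), r)]

def is_negative_mapping(mapping):
    # Covariant case: look at the leading nonzero entry of the minors.
    leading = next(x for x in minors(mapping) if x != 0)
    return leading < 0

meantone = [[1, 0, -4], [0, 1, 4]]
print(minors(meantone))               # [1, 4, 4]
print(is_negative_mapping(meantone))  # False
```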


==Beyond <math>g_{\text{min}}=1</math>==


As stated above, temperament arithmetic is simplest for temperaments which can be represented by single vectors, or in other words, temperaments that are <math>g_{\text{min}}=1</math>, and for other temperaments, the computation gets a little trickier. Here we'll look at how to handle it.


===Multivector approach===
===Matrix approach===


Temperament arithmetic for temperaments with both <math>r>1</math> and <math>n>1</math> can also be done using matrices, but it's significantly more involved than it is with multivectors. It works in essentially the same way — entry-wise addition or subtraction — but for matrices, it is necessary to make explicit <span style="color: #3C8031;">the basis for the linearly dependent vectors shared</span> between the involved matrices before performing the arithmetic. In other words, any vectors that can be found through linear combinations of any of the involved matrices' basis vectors must appear explicitly and in the same position of each matrix before the sum or difference is taken. These vectors are called the <span style="color: #3C8031;">linear-dependence basis, or <math>L_{\text{dep}}</math></span>. But it is not as simple as determining <span style="color: #3C8031;"><math>L_{\text{dep}}</math></span> (using the technique described [[Linear_dependence#For_a_given_set_of_matrices.2C_how_to_compute_a_basis_for_their_linearly_dependent_vectors|here]]) and then supplying the remaining vectors necessary to match the grade of the original matrix, because the results may then be [[enfactored]]. And defactoring them without compromising the explicit <span style="color: #3C8031;"><math>L_{\text{dep}}</math></span> cannot be done using existing [[defactoring algorithms]]; it's a tricky process, or at least computationally intensive.
 
Throughout this section, we will be using <span style="color: #3C8031;">a green color on linearly dependent objects and values</span>, and <span style="color: #B6321C;">a red color on linearly independent objects and values</span>, to help differentiate between the two.


====Example====


For example, let’s look at septimal meantone plus flattone. The [[canonical form]]s of these temperaments are {{ket|{{map|1 0 -4 -13}} {{map|0 1 4 10}}}} and {{ket|{{map|1 0 -4 17}} {{map|0 1 4 -9}}}}. Simple entry-wise addition of these two mapping matrices gives {{ket|{{map|2 0 -8 4}} {{map|0 2 8 1}}}}, which is not the correct answer.


<math>\left[ \begin{array} {rrr}
1 & 0 & -4 & -13 \\
0 & 1 & 4 & 10 \\
\end{array} \right]
+
\left[ \begin{array} {rrr}
1 & 0 & -4 & 17 \\
0 & 1 & 4 & -9 \\
\end{array} \right]
=
\left[ \begin{array} {rrr}
2 & 0 & -8 & 4 \\
0 & 2 & 8 & 1 \\
\end{array} \right]</math>


And not only because it is enfactored. The full explanation of why it's the wrong answer is beyond the scope of this example. However, if we put each of these two mappings into a form that includes their <span style="color: #3C8031;"><math>L_{\text{dep}}</math></span> explicitly, the arithmetic can work out correctly.


In this case, their <span style="color: #3C8031;"><math>L_{\text{dep}}</math></span> consists of a single vector: <span style="color: #3C8031;">{{ket|{{map|19 30 44 53}}}}</span>. The original matrices had two vectors, so as a next step, we pad out these matrices by drawing from vectors from the original matrices, starting from their first vectors, so now we have [<span style="color: #3C8031;">{{map|19 30 44 53}}</span> {{map|1 0 -4 -13}}⟩ and [<span style="color: #3C8031;">{{map|19 30 44 53}}</span> {{map|1 0 -4 17}}⟩. We could choose any vectors from the original matrices, as long as they are linearly independent from the ones we already have; if one is not, skip it and move on (otherwise we'll produce a [[rank-deficient]] matrix that doesn't still represent the same temperament as we started with). In this case the first vectors are both fine, though.


<math>\left[ \begin{array} {rrr}
\color{OliveGreen}19 & \color{OliveGreen}30 & \color{OliveGreen}44 & \color{OliveGreen}53 \\
1 & 0 & -4 & -13 \\
\end{array} \right]</math>

<math>\left[ \begin{array} {rrr}
\color{OliveGreen}19 & \color{OliveGreen}30 & \color{OliveGreen}44 & \color{OliveGreen}53 \\
1 & 0 & -4 & 17 \\
\end{array} \right]</math>
All we have to do now before performing the entry-wise addition is verify that both matrices are defactored. The best way to do this is inspired by [[Pernet-Stein defactoring]]: we find the value of the enfactoring factor by following this algorithm until the point where we have a square transformation matrix, but instead of inverting it and multiplying by it to ''remove'' the enfactoring, we simply take this square matrix's determinant, which is the factor we were about to remove. If that determinant is 1, then we're already defactored; if not, then we need to take some additional steps. In this case, both matrices ''are'' enfactored, each by a factor of 30<ref>or you may prefer to think of this as three different (prime) factors: 2, 3, 5 (which multiply to 30)</ref>.
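As a cross-check, the factor found this way should agree with the GCD of the matrix's <math>r \times r</math> minors, which is straightforward to compute directly. A hedged sketch under that assumption (helper names are ours):

```python
from math import gcd
from itertools import combinations

def det(m):
    # Laplace expansion; fine for small matrices.
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def enfactoring_factor(matrix):
    # GCD of all r-by-r minors; 1 means the matrix is already defactored.
    r = len(matrix)
    g = 0
    for cols in combinations(range(len(matrix[0])), r):
        g = gcd(g, abs(det([[row[c] for c in cols] for row in matrix])))
    return g

padded_meantone = [[19, 30, 44, 53], [1, 0, -4, -13]]
padded_flattone = [[19, 30, 44, 53], [1, 0, -4, 17]]
print(enfactoring_factor(padded_meantone))  # 30
print(enfactoring_factor(padded_flattone))  # 30
```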


Our first thought may be to simply defactor these matrices, then. The problem with that is that most established defactoring algorithms will alter the first vector so that it's no longer <span style="color: #3C8031;">{{map|19 30 44 53}}</span>, in which case we won't be able to do temperament arithmetic with the matrices anymore, which is our goal. And we can't defactor and then paste <span style="color: #3C8031;">{{map|19 30 44 53}}</span> back over the first vector or something, because then we might just be enfactored again! We need to find a defactoring algorithm that manages to work without altering any of the vectors in the <span style="color: #3C8031;"><math>L_{\text{dep}}</math></span>.


It turns out that you can always isolate the enfactoring factor in the single final vector of the matrix — the <span style="color: #B6321C;">linearly independent vector</span> — through linear combinations of the vectors in the <span style="color: #3C8031;"><math>L_{\text{dep}}</math></span>. In this case, since there's only a single vector in the <span style="color: #3C8031;"><math>L_{\text{dep}}</math></span>, all we need to do is repeatedly add that <span style="color: #3C8031;">one linearly dependent vector</span> to the <span style="color: #B6321C;">linearly independent vector</span> until we find a vector with the target GCD, which we can then simply divide out to defactor the matrix.


In this case, we can accomplish this by adding 11 times the first vector. For the first matrix, {{map|1 0 -4 -13}} + 11⋅<span style="color: #3C8031;">{{map|19 30 44 53}}</span>={{map|210 330 480 570}}, whose entries have a GCD of 30, so we can defactor the matrix by dividing that vector by 30, leaving us with <span style="color: #B6321C;">{{map|7 11 16 19}}</span>. Therefore the final matrix here is [<span style="color: #3C8031;">{{map|19 30 44 53}}</span> <span style="color: #B6321C;">{{map|7 11 16 19}}</span>⟩. The other matrix happens to defactor in the same way: {{map|1 0 -4 17}} + 11⋅<span style="color: #3C8031;">{{map|19 30 44 53}}</span>={{map|210 330 480 600}}, whose GCD is also 30, reducing to <span style="color: #B6321C;">{{map|7 11 16 20}}</span>, so the final matrix is [<span style="color: #3C8031;">{{map|19 30 44 53}}</span> <span style="color: #B6321C;">{{map|7 11 16 20}}</span>⟩.
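
A quick way to sanity-check this defactoring move is to script it. The following Python sketch is not from this article — the helper names are ours — and it uses the fact that the enfactoring factor of a two-row matrix equals the GCD of its 2×2 minors to know when the search can stop:

```python
from math import gcd
from functools import reduce
from itertools import combinations

def vec_gcd(v):
    # GCD of all entries of a vector
    return reduce(gcd, (abs(x) for x in v))

def two_by_two_minors(row_a, row_b):
    # 2x2 determinants over all column pairs, in lexicographic order
    return [row_a[i]*row_b[j] - row_a[j]*row_b[i]
            for i, j in combinations(range(len(row_a)), 2)]

def defactor(dep, ind):
    """Add multiples of the linearly dependent vector to the linearly
    independent one until the entry GCD hits the target (the GCD of the
    matrix's 2x2 minors), then divide that GCD out."""
    target = vec_gcd(two_by_two_minors(dep, ind))
    for k in range(target + 1):
        w = [x + k*y for x, y in zip(ind, dep)]
        if vec_gcd(w) == target:
            return [x // target for x in w]
    raise ValueError("no defactoring multiple found")

dep = [19, 30, 44, 53]
print(defactor(dep, [1, 0, -4, -13]))  # [7, 11, 16, 19]
print(defactor(dep, [1, 0, -4, 17]))   # [7, 11, 16, 20]
```

With dep set to the shared vector {{map|19 30 44 53}}, this reproduces both defactored vectors found above, including the multiple 11 and the GCD 30.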


Now the matrices are ready to add:
<math>\left[ \begin{array} {rrrr}

\color{OliveGreen}19 & \color{OliveGreen}30 & \color{OliveGreen}44 & \color{OliveGreen}53 \\
\color{BrickRed}7 & \color{BrickRed}11 & \color{BrickRed}16 & \color{BrickRed}19 \\

\end{array} \right]</math>
<math>\left[ \begin{array} {rrrr}

\color{OliveGreen}19 & \color{OliveGreen}30 & \color{OliveGreen}44 & \color{OliveGreen}53 \\
\color{BrickRed}7 & \color{BrickRed}11 & \color{BrickRed}16 & \color{BrickRed}20 \\

\end{array} \right]</math>


Clearly, though, we can see that with the top vector — the <span style="color: #3C8031;"><math>L_{\text{dep}}</math></span> — there's no sense in adding its two copies together, as we'll just get the same vector but 2-enfactored. So we may as well set the <span style="color: #3C8031;"><math>L_{\text{dep}}</math></span> aside, and deal only with the <span style="color: #B6321C;">linearly independent vectors</span>:


<math>\left[ \begin{array} {rrrr}

\color{BrickRed}7 & \color{BrickRed}11 & \color{BrickRed}16 & \color{BrickRed}19 \\

\end{array} \right]</math>
<math>\left[ \begin{array} {rrrr}

\color{BrickRed}7 & \color{BrickRed}11 & \color{BrickRed}16 & \color{BrickRed}20 \\

\end{array} \right]</math>
<math>\left[ \begin{array} {rrrr}

\color{BrickRed}14 & \color{BrickRed}22 & \color{BrickRed}32 & \color{BrickRed}39 \\

\end{array} \right]</math>


Then we can reintroduce the <span style="color: #3C8031;"><math>L_{\text{dep}}</math></span> afterwards:


<math>\left[ \begin{array} {rrrr}

\color{OliveGreen}19 & \color{OliveGreen}30 & \color{OliveGreen}44 & \color{OliveGreen}53 \\
\color{BrickRed}14 & \color{BrickRed}22 & \color{BrickRed}32 & \color{BrickRed}39 \\

\end{array} \right]</math>
so we can now see that meantone plus flattone is [[godzilla]].
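
The whole set-aside, add, reintroduce procedure fits in a few lines of Python. This is only a sketch under the assumption that the temperaments are addable (a single linearly independent vector on each side), and `temperament_sum` is a name of our choosing:

```python
def temperament_sum(dep, ind_a, ind_b):
    """Set the shared linear dependence basis vector aside, add the two
    linearly independent vectors entry-wise, then reintroduce the
    shared vector as the top row of the result."""
    return [dep, [a + b for a, b in zip(ind_a, ind_b)]]

# meantone plus flattone, using the defactored forms found above:
print(temperament_sum([19, 30, 44, 53], [7, 11, 16, 19], [7, 11, 16, 20]))
# [[19, 30, 44, 53], [14, 22, 32, 39]]
```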


As long as we've done all this work to set these matrices up for arithmetic, let's check their difference as well. In the case of the difference, it's even more essential that we set the <span style="color: #3C8031;"><math>L_{\text{dep}}</math></span> aside before entry-wise arithmetic, because if we were to subtract it from itself, we'd end up with all zeros; unlike the case of the sum, where we'd just end up with an enfactored version of the starting vectors, we couldn't even defactor to get back to where we started if we completely wiped out the relevant information by sending it all to zeros. So let's just entry-wise subtract the two <span style="color: #B6321C;">linearly independent vectors</span>:


<math>\left[ \begin{array} {rrrr}

\color{BrickRed}7 & \color{BrickRed}11 & \color{BrickRed}16 & \color{BrickRed}19 \\

\end{array} \right]</math>
<math>\left[ \begin{array} {rrrr}

\color{BrickRed}7 & \color{BrickRed}11 & \color{BrickRed}16 & \color{BrickRed}20 \\

\end{array} \right]</math>
<math>\left[ \begin{array} {rrrr}

\color{BrickRed}0 & \color{BrickRed}0 & \color{BrickRed}0 & \color{BrickRed}-1 \\

\end{array} \right]</math>
<math>\left[ \begin{array} {rrrr}

\color{OliveGreen}19 & \color{OliveGreen}30 & \color{OliveGreen}44 & \color{OliveGreen}53 \\
\color{BrickRed}0 & \color{BrickRed}0 & \color{BrickRed}0 & \color{BrickRed}-1 \\

\end{array} \right]</math>


Which canonicalizes to:


<math>\left[ \begin{array} {rrrr}

\color{OliveGreen}19 & \color{OliveGreen}30 & \color{OliveGreen}44 & \color{OliveGreen}53 \\
\color{BrickRed}0 & \color{BrickRed}0 & \color{BrickRed}0 & \color{BrickRed}1 \\

\end{array} \right]</math>
We check negativity by using the minors of these matrices. The first matrix's minors are (-1, -4, -10, -4, -13, -12) and the second matrix's minors are (-1, -4, 9, -4, 17, 32). What we're looking for here are their leading entries, because these are minors of a mapping (if we were looking at minors of comma bases, we'd be looking at the trailing entries instead). Specifically, we're looking to see if the leading entries are positive. They're not, which tells us that these matrices, as we performed arithmetic on them, were both negative! But again, since they were ''both'' negative, the effect cancels out, and so the sum we computed is indeed the sum, and the difference is indeed the difference.
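
The minors quoted here are just the 2×2 determinants of each mapping over all column pairs, taken in lexicographic order. A small Python sketch (the helper name is ours) reproduces them:

```python
from itertools import combinations

def minors(mapping):
    # 2x2 determinants of a 2-row mapping, column pairs in lexicographic order
    row_a, row_b = mapping
    return [row_a[i]*row_b[j] - row_a[j]*row_b[i]
            for i, j in combinations(range(len(row_a)), 2)]

print(minors([[19, 30, 44, 53], [7, 11, 16, 19]]))  # [-1, -4, -10, -4, -13, -12]
print(minors([[19, 30, 44, 53], [7, 11, 16, 20]]))  # [-1, -4, 9, -4, 17, 32]
```

Both leading entries come out negative, confirming that both matrices were negative as we performed arithmetic on them.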


====Example with multiple vectors in the <span style="color: #3C8031;"><math>L_{\text{dep}}</math></span>====


(Examples WIP)


In order to understand addability, we must work up to it, understanding these concepts in this order:
#<span style="color: #3C8031;">linear dependence</span>
#<span style="color: #3C8031;">linear dependence</span> ''between temperaments''
#<span style="color: #B6321C;">linear ''in''dependence</span> between temperaments
#<span style="color: #B6321C;">linear independence</span> between temperaments by only one basis vector (that's addability)


====1. <span style="color: #3C8031;">Linear dependence</span>====


This is explained here: [[linear dependence]].


====2. <span style="color: #3C8031;">Linear dependence</span> between temperaments====


<span style="color: #3C8031;">Linear dependence</span> has been defined for the matrices and multivectors that represent temperaments, but it can also be defined for temperaments themselves. The conditions of temperament arithmetic motivate a definition of <span style="color: #3C8031;">linear dependence</span> for temperaments whereby temperaments are considered <span style="color: #3C8031;">linearly dependent</span> if ''either of their mappings or their comma bases are <span style="color: #3C8031;">linearly dependent</span>''<ref>or — equivalently, in EA — either their multimaps or their multicommas are <span style="color: #3C8031;">linearly dependent</span></ref>.


For example, 5-limit 5-ET and 5-limit 7-ET, represented by the mappings {{ket|{{map|5 8 12}}}} and {{ket|{{map|7 11 16}}}}, may at first seem to be <span style="color: #B6321C;">linearly independent</span>, because the basis vectors visible in their mappings are clearly <span style="color: #B6321C;">linearly independent</span> (when comparing two vectors, the only way they could be <span style="color: #3C8031;">linearly dependent</span> is if they are multiples of each other, as discussed [[Linear dependence#Linear dependence between individual vectors|here]]). And indeed their ''mappings'' are <span style="color: #B6321C;">linearly independent</span>. But these two ''temperaments'' are <span style="color: #3C8031;">linearly ''de''pendent</span>, because if we consider their corresponding comma bases, we will find that they <span style="color: #3C8031;">share</span> the basis vector of the meantone comma {{vector|4 -4 1}}.


To make this point visually, we could say that two temperaments are <span style="color: #3C8031;">linearly dependent</span> if they intersect in one or the other of tone space and tuning space. So you have to check both views.<ref>You may be wondering — what about two temperaments which are parallel in tone or tuning space, e.g. compton and blackwood in tuning space? Their comma bases are each <math>n=1</math>, and they meet to give a <math>n=2</math> [[comma basis]], which corresponds to a <math>r=1</math> mapping, which means it should appear as an ET point on the PTS diagram. But how could that be? Well, here's their meet: {{bra|{{vector|1 0 0}} {{vector|0 1 0}}}}, and so that corresponding mapping is {{ket|{{map|0 0 1}}}}. So it's some degenerate ET. I suppose we could say it's the point at infinity away from the center of the diagram.</ref>
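
The shared comma is easy to verify numerically, since an ET map tempers out a comma exactly when their dot product is zero. A minimal check (the helper name is our own):

```python
def tempers_out(et_map, comma):
    # an ET map tempers out a comma iff the dot product is zero
    return sum(m * c for m, c in zip(et_map, comma)) == 0

meantone_comma = [4, -4, 1]  # the vector for 81/80
print(tempers_out([5, 8, 12], meantone_comma))   # True
print(tempers_out([7, 11, 16], meantone_comma))  # True
```

Both maps send {{vector|4 -4 1}} to zero, so it belongs to both comma bases, which is exactly the linear dependence described above.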


====3. <span style="color: #B6321C;">Linear independence</span> between temperaments====


<span style="color: #3C8031;">Linear dependence</span> may be considered as a boolean (yes/no, linearly <span style="color: #3C8031;">dependent</span>/<span style="color: #B6321C;">independent</span>), or it may be considered as <span style="color: #3C8031;">an integer count of linearly dependent basis vectors</span>. In other words, it is the dimension of <span style="color: #3C8031;">the linear-dependence basis, <math>\dim(L_{\text{dep}})</math></span>. To refer to this count, we may hyphenate it as <span style="color: #3C8031;">'''linear-dependence'''</span>, and use the variable <span style="color: #3C8031;"><math>l_{\text{dep}}</math></span>. For example, 5-ET and 7-ET, per the example in the previous section, are <span style="color: #3C8031;"><math>l_{\text{dep}}=1</math></span> (read <span style="color: #3C8031;">"linear-dependence-1"</span>) temperaments.


It does not make sense to speak of <span style="color: #3C8031;">linear dependence in this integer count sense</span> between ''temperaments'', however. Here's an example that illustrates why. Consider two different <math>d=5</math>, <math>r=2</math> temperaments. Both their mappings and comma bases are <span style="color: #3C8031;">linearly dependent</span>, but their mappings have <span style="color: #3C8031;"><math>l_{\text{dep}}=1</math></span>, while their comma bases have <span style="color: #3C8031;"><math>l_{\text{dep}}=2</math></span>. So what could the <span style="color: #3C8031;"><math>l_{\text{dep}}</math></span> of such a pair of temperaments possibly be? We ''could'' define "min-linear-dependence" and "max-linear-dependence", as we define "min-grade" and "max-grade", but these do not turn out to be helpful.


On the other hand, it does make sense to speak of the <span style="color: #B6321C;">'''linear-independence'''</span> of the temperament as an integer count. This is because the count of <span style="color: #B6321C;">linearly independent</span> basis vectors of two temperaments' mappings and the count of <span style="color: #B6321C;">linearly independent</span> basis vectors of their comma bases will always be the same. So the temperament <span style="color: #B6321C;">linear-independence</span> is simply this number. In the <math>d=5</math>, <math>r=2</math> example from the previous paragraph, these would be <span style="color: #B6321C;"><math>l_{\text{ind}}=1</math></span> (read <span style="color: #B6321C;">"linear-independence-1"</span>) temperaments.
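
We can spot-check this equality on the 5-ET and 7-ET example. The sketch below computes the linear-independence of two equal-grade bases as the rank of their stacked vectors minus the grade; the particular comma-basis vectors chosen (256/243 and 81/80 for 5-ET; 25/24 and 81/80 for 7-ET) are our own choice, since any valid basis gives the same count:

```python
from fractions import Fraction

def rank(rows):
    # exact rank via Gauss-Jordan elimination over the rationals
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for col in range(len(m[0])):
        piv = next((i for i in range(r, len(m)) if m[i][col] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][col] != 0:
                f = m[i][col] / m[r][col]
                m[i] = [a - f*b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def lin_independence(basis_a, basis_b):
    # l_ind of two equal-grade bases: rank of the stacked basis minus the grade
    return rank(basis_a + basis_b) - len(basis_a)

# mappings of 5-limit 5-ET and 7-ET:
print(lin_independence([[5, 8, 12]], [[7, 11, 16]]))  # 1
# their comma bases (256/243 & 81/80, and 25/24 & 81/80):
print(lin_independence([[8, -5, 0], [-4, 4, -1]],
                       [[-3, -1, 2], [-4, 4, -1]]))   # 1
```

The counts agree on both sides of duality, as the claim requires.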


A proof of this conjecture is given here: [[Temperament arithmetic#Sintel's proof of the linear independence conjecture]].


====4. <span style="color: #B6321C;">Linear independence</span> between temperaments by only one basis vector (addability)====


Two temperaments are addable if they are <span style="color: #B6321C;"><math>l_{\text{ind}}=1</math></span>. In other words, both their mappings and their comma bases <span style="color: #3C8031;">share</span> all but one basis vector.


===Diagrammatic explanation===
====How to read the diagrams====


The diagrams used for this explanation were inspired in part by [[Kite Giedraitis|Kite]]'s [[gencom]]s, and specifically how in his "twin squares" matrices — which have dimensions <math>d \times d</math> — one can imagine shifting a bar up and down to change the boundary between vectors that form a basis for the commas and those that form a basis for preimage intervals (this basis is typically called "the [[generator]]s"). The count of the former is the nullity <math>n</math>, and the count of the latter is the rank <math>r</math>, and the shifting of the boundary bar between them within the total <math>d</math> rows corresponds to the insight of the rank-nullity theorem, which states that <math>r + n=d</math>. And so this diagram's square grid has just the right amount of room to portray both the mapping and the comma basis for a given temperament (with the comma basis's vectors rotated 90 degrees to appear as rows, to match up with the rows of the mapping).

So consider this first example of such a diagram:


{| class="wikitable"
|+
| rowspan="4" |<math>d=4</math>
| style="border-bottom: 3px solid black;"|<math>g_{\text{min}}=1</math>
| style="background-color: #E7BBB3; border-bottom: 3px solid black;" |  ↑  
| style="background-color: #E7BBB3; border-bottom: 3px solid black;" |  ↑  
| style="background-color: #E7BBB3; border-bottom: 3px solid black;" |  ↑  
| style="background-color: #E7BBB3; border-bottom: 3px solid black;" |  ↑  
|<span style="color: #B6321C;"><math>l_{\text{ind}}=1</math></span>
|-
| rowspan="3" |<math>g_{\text{max}}=3</math>
| style="background-color: #E7BBB3;" |  ↓  
| style="background-color: #E7BBB3;" |  ↓  
| style="background-color: #E7BBB3;" |  ↓  
| style="background-color: #E7BBB3;" |  ↓  
|<span style="color: #B6321C;"><math>l_{\text{ind}}=1</math></span>
|-
|style="background-color: #BED5BA;"|      
|style="background-color: #BED5BA;"|      
|style="background-color: #BED5BA;"|      
|style="background-color: #BED5BA;"|      
| rowspan="2" |
|-
|style="background-color: #BED5BA;"|      
|style="background-color: #BED5BA;"|      
|style="background-color: #BED5BA;"|      
|style="background-color: #BED5BA;"|      
|}


This represents a <math>d=4</math> temperament. These diagrams are grade-agnostic, which is to say that they are agnostic as to which side counts the <math>r</math> and which side counts the <math>n</math>. So we are showing them as <math>g_{\text{min}}</math> and <math>g_{\text{max}}</math> instead. We could say there's a variation on the rank-nullity theorem whereby <math>g_{\text{min}} + g_{\text{max}}=d</math>, just as <math>r + n=d</math>. So we can then say that this diagram represents either a <math>r=1</math>, <math>n=3</math> temperament, or perhaps a <math>n=1</math>, <math>r=3</math> temperament.


But actually, this diagram represents more than just a single temperament. It represents a relationship between a pair of temperaments (which have the same [[dimensions]], non-grade-agnostically, i.e. not a pairing of a <math>r=1</math>, <math>n=3</math> temperament with a <math>r=3</math>, <math>n=1</math> temperament). As elsewhere, <span style="color: #3C8031;">green coloration indicates the linearly dependent basis vectors <math>L_{\text{dep}}</math></span> between this pair of temperaments, and <span style="color: #B6321C;">red coloration indicates linearly ''in''dependent basis vectors <math>L_{\text{ind}}</math></span> between the same pair of temperaments.


So, in this case, the two ET maps are <span style="color: #B6321C;">linearly independent</span>. This should be unsurprising; because ET maps are constituted by only a single vector (they're <math>r=1</math> by definition), if they ''were'' <span style="color: #3C8031;">linearly dependent</span>, then they'd necessarily be the ''same'' exact ET! Temperament arithmetic on two of the same ET is never interesting; <math>T_1</math> plus <math>T_1</math> simply equals <math>T_1</math> again, and <math>T_1</math> minus <math>T_1</math> is undefined. That said, if we ''were'' to represent temperament arithmetic between two of the same temperament on such a diagram as this, then every cell would be green. And this is true regardless of whether <math>r=1</math> or otherwise.


From this information, we can see that the comma bases of any randomly selected pair of ''different'' <math>d=4</math> ETs are going to <span style="color: #3C8031;">share 2 vectors</span>, or in other words, their <span style="color: #3C8031;"><math>L_{\text{dep}}</math></span> will have two basis vectors. In terms of the diagram, we're saying that they'll always have two <span style="color: #3C8031;">green-colored rows</span> under the black bar.


These diagrams are a good way to understand which temperament relationships are possible and which aren't, where by a "relationship" here we mean a particular combination of their matching dimensions and their linear-independence integer count. A good way to use these diagrams for this purpose is to imagine the <span style="color: #B6321C;">red coloration</span> emanating away from the black bar in both directions simultaneously, one pair of rows at a time. Doing it like this captures the fact, as previously stated, that the <span style="color: #B6321C;"><math>l_{\text{ind}}</math></span> on either side of duality is always equal. There's no notion of a max or min here, as there is with <math>g</math> or <span style="color: #3C8031;"><math>l_{\text{dep}}</math></span>; the <span style="color: #B6321C;"><math>l_{\text{ind}}</math></span> on either side is always the same, so we can capture it with a single number, which counts the <span style="color: #B6321C;">red rows</span> on just one half (that is, half of the total count of <span style="color: #B6321C;">red rows</span>, or half of the width of the <span style="color: #B6321C;">red band</span> in the middle of the grid).


There's no need to look at diagrams like this where the black bar is below the center. This is because, even though for convenience we're currently treating the top half as <math>r</math> and the bottom half as <math>n</math>, these diagrams are ultimately grade-agnostic. So we could say that each one essentially represents not just one possibility for the relationship between two temperaments' dimensions and <span style="color: #B6321C;"><math>l_{\text{ind}}</math></span>, but ''two'' such possibilities. Again, this diagram equally represents both <math>d=4, r=1, n=3, </math><span style="color: #B6321C;"><math>l_{\text{ind}}=1</math></span> as well as <math>d=4, r=3, n=1, </math><span style="color: #B6321C;"><math>l_{\text{ind}}=1</math></span>. Which is another way of saying we could vertically mirror it without changing it.


With the black bar always either in the top half or exactly in the center, we can see that the emanating <span style="color: #B6321C;">red band</span> will always either hit the top edge of the square grid first, or it will hit both the top and bottom edges simultaneously. So this is how these diagrams visually convey the fact that the <span style="color: #B6321C;"><math>l_{\text{ind}}</math></span> between two temperaments will always be less than or equal to their <math>g_{\text{min}}</math>: because a situation where <span style="color: #B6321C;"><math>l_{\text{ind}}</math></span> <math>> g_{\text{min}}</math> would visually look like the <span style="color: #B6321C;">red band</span> spilling past the edges of the square grid.


We could also say that two temperaments are <span style="color: #3C8031;">linearly dependent</span> on each other when <span style="color: #B6321C;"><math>l_{\text{ind}}</math></span> <math>< g_{\text{max}}</math>, that is, when their <span style="color: #B6321C;">linear-independence</span> is less than their ''max''-grade.


Perhaps more importantly, we can also see from these diagrams that any pair of <math>g_{\text{min}}=1</math> temperaments will be addable. Because if they are <math>g_{\text{min}}=1</math>, then the furthest the <span style="color: #B6321C;">red band</span> can extend from the black bar is 1 row, and 1 mirrored set of <span style="color: #B6321C;">red rows</span> means <span style="color: #B6321C;"><math>l_{\text{ind}}=1</math></span>, and that's the definition of addability.
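To make the arithmetic itself concrete, here is a minimal sketch in Python (not part of this article's tooling; the function names are mine) of the entry-wise sum and difference from the introduction's 12-ET and 7-ET example:

```python
# Entry-wise temperament arithmetic on equal-temperament maps (vals).
# From the introduction: 12p + 7p = 19p and 12p - 7p = 5p.

def temperament_sum(v1, v2):
    """Entry-wise sum of two maps."""
    return [a + b for a, b in zip(v1, v2)]

def temperament_difference(v1, v2):
    """Entry-wise difference of two maps."""
    return [a - b for a, b in zip(v1, v2)]

map_12 = [12, 19, 28]  # 5-limit patent val of 12-ET
map_7 = [7, 11, 16]    # 5-limit patent val of 7-ET

print(temperament_sum(map_12, map_7))         # [19, 30, 44], i.e. 19-ET
print(temperament_difference(map_12, map_7))  # [5, 8, 12], i.e. 5-ET
```

Since any two (different) ETs have rank 1, they form a <math>g_{\text{min}}=1</math> pair, which is why this entry-wise arithmetic on their vals works directly.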


====A simple <math>d=3</math> example====

{| class="wikitable"
|+
| rowspan="3" |<math>d=3</math>
|style="border-bottom: 3px solid black;"|<math>g_{\text{min}}=1</math>
| style="background-color: #E7BBB3; border-bottom: 3px solid black;" |  ↑  
| style="background-color: #E7BBB3; border-bottom: 3px solid black;" |  ↑  
| style="background-color: #E7BBB3; border-bottom: 3px solid black;" |  ↑  
|<span style="color: #B6321C;"><math>l_{\text{ind}}=1</math></span>
|-
| rowspan="2" |<math>g_{\text{max}}=2</math>
| style="background-color: #E7BBB3;" |  ↓  
| style="background-color: #E7BBB3;" |  ↓  
| style="background-color: #E7BBB3;" |  ↓  
|<span style="color: #B6321C;"><math>l_{\text{ind}}=1</math></span>
|-
|style="background-color: #BED5BA;"|      
|style="background-color: #BED5BA;"|      
|style="background-color: #BED5BA;"|      
|
|}


This diagram shows us that any two <math>d=3</math>, <math>g_{\text{min}}=1</math> temperaments (like 5-limit ETs) will be <span style="color: #3C8031;">linearly dependent</span>, i.e. their comma bases will <span style="color: #3C8031;">share</span> one vector. You may already know this intuitively if you are familiar with the 5-limit [[projective tuning space]] diagram from the [[Paul_Erlich#Papers|Middle Path]] paper, which shows how we can draw a line through any two ETs and that line will represent a temperament, and the single comma that temperament tempers out is <span style="color: #3C8031;">this shared vector</span>. The diagram also tells us that any two 5-limit temperaments that temper out only a single comma will also be <span style="color: #3C8031;">linearly dependent</span>, for the opposite reason: their ''mappings'' will always <span style="color: #3C8031;">share</span> one vector.
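The shared vector itself is easy to compute: for <math>d=3</math>, the single comma tempered out by both ETs spans the intersection of their nullspaces, which in three dimensions is just the cross product of the two vals (up to sign). A minimal Python sketch (the helper name is mine, not from the article):

```python
# For two 5-limit (d=3) ET maps, the comma both temper out is the
# cross product of the two vals, up to sign.

def cross(u, v):
    return [u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0]]

map_12 = [12, 19, 28]  # 12-ET patent val
map_19 = [19, 30, 44]  # 19-ET patent val

comma = cross(map_12, map_19)
print(comma)  # [-4, 4, -1], i.e. 2^-4 * 3^4 * 5^-1 = 81/80, the meantone comma
```

As a sanity check, mapping the comma through either val gives 0, confirming both ETs temper it out.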


And we can see that there are no other diagrams of interest for <math>d=3</math>, because there's no sense in looking at diagrams with no <span style="color: #B6321C;">red band</span>, but we can't extend the <span style="color: #B6321C;">red band</span> any further than 1 row on each side without going over the edge, and we can't lower the black bar any further without going below the center. So we're done. And our conclusion is that any pair of different <math>d=3</math> temperaments that are nontrivial (<math>0 < n < d=3</math> and <math>0 < r < d=3</math>) will be addable.

====Completing the suite of <math>d=4</math> examples====

Okay, back to <math>d=4</math>. We've already looked at the <math>g_{\text{min}}=1</math> possibility (which, for any <math>d</math>, there will only ever be one of). So let's start looking at the possibilities where <math>g_{\text{min}}=2</math>, which in the case of <math>d=4</math> leaves us only one pair of values for <math>r</math> and <math>n</math>: both being 2.

{| class="wikitable"
|+
| rowspan="4" |<math>d=4</math>
| rowspan="2" style="border-bottom: 3px solid black;" |<math>g_{\text{min}}=2</math>
|style="background-color: #BED5BA;"|      
|style="background-color: #BED5BA;"|      
|style="background-color: #BED5BA;"|      
|style="background-color: #BED5BA;"|      
|
|-
|style="background-color: #E7BBB3; border-bottom: 3px solid black;"|  ↑  
|style="background-color: #E7BBB3; border-bottom: 3px solid black;"|  ↑  
|style="background-color: #E7BBB3; border-bottom: 3px solid black;"|  ↑  
|style="background-color: #E7BBB3; border-bottom: 3px solid black;"|  ↑  
|<span style="color: #B6321C;"><math>l_{\text{ind}}=1</math></span>
|-
| rowspan="2" |<math>g_{\text{max}}=2</math>
| style="background-color: #E7BBB3;" |  ↓  
| style="background-color: #E7BBB3;" |  ↓  
| style="background-color: #E7BBB3;" |  ↓  
| style="background-color: #E7BBB3;" |  ↓  
|<span style="color: #B6321C;"><math>l_{\text{ind}}=1</math></span>
|-
|style="background-color: #BED5BA;"|      
|style="background-color: #BED5BA;"|      
|style="background-color: #BED5BA;"|      
|style="background-color: #BED5BA;"|      
|
|}


But even with <math>d</math>, <math>r</math>, and <math>n</math> fixed, we still have more than one possibility for <span style="color: #B6321C;"><math>l_{\text{ind}}</math></span>. The above diagram shows <span style="color: #B6321C;"><math>l_{\text{ind}}=1</math></span>. The below diagram shows <span style="color: #B6321C;"><math>l_{\text{ind}}=2</math></span>.


{| class="wikitable"
|+
| rowspan="4" |<math>d=4</math>
| rowspan="2" style="border-bottom: 3px solid black;" |<math>g_{\text{min}}=2</math>
| style="background-color: #E7BBB3;" |  ↑  
| style="background-color: #E7BBB3;" |  ↑  
| style="background-color: #E7BBB3;" |  ↑  
| style="background-color: #E7BBB3;" |  ↑  
| rowspan="2" |<span style="color: #B6321C;"><math>l_{\text{ind}}=2</math></span>
|-
|style="background-color: #E7BBB3; border-bottom: 3px solid black;"|  ↑  
|style="background-color: #E7BBB3; border-bottom: 3px solid black;"|  ↑  
|style="background-color: #E7BBB3; border-bottom: 3px solid black;"|  ↑  
|style="background-color: #E7BBB3; border-bottom: 3px solid black;"|  ↑  
|-
| rowspan="2" |<math>g_{\text{max}}=2</math>
| style="background-color: #E7BBB3;" |  ↓  
| style="background-color: #E7BBB3;" |  ↓  
| style="background-color: #E7BBB3;" |  ↓  
| style="background-color: #E7BBB3;" |  ↓  
| rowspan="2" |<span style="color: #B6321C;"><math>l_{\text{ind}}=2</math></span>
|-
|style="background-color: #E7BBB3;"|  ↓  
|style="background-color: #E7BBB3;"|  ↓  
|style="background-color: #E7BBB3;"|  ↓  
|style="background-color: #E7BBB3;"|  ↓  
|}


In the former possibility, where <span style="color: #B6321C;"><math>l_{\text{ind}}=1</math></span> (and therefore the temperaments are addable), we have a pair of different <math>d=4</math>, <math>r=2</math> temperaments where we can find a single comma that both temperaments temper out, and — equivalently — we can find one ET that supports both temperaments.


In the latter possibility, where <span style="color: #B6321C;"><math>l_{\text{ind}}=2</math></span>, neither side of duality <span style="color: #3C8031;">shares</span> any vectors in common. And so we've encountered our first example that is not addable. In other words, if the <span style="color: #B6321C;">red band</span> ever extends more than 1 row away from the black bar, temperament arithmetic is not possible. So <math>d=4</math> is the first time we had enough room (half of <math>d</math>) to support that condition.
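These relationships can be checked numerically. Since both temperaments in a pair share the same rank <math>r</math>, the <math>l_{\text{ind}}</math> between them equals the rank of their stacked mappings minus <math>r</math>. Below is a hedged Python sketch; the function names and example matrices are mine (7-limit patent vals of 12-, 19-, and 22-ET, plus one toy mapping), chosen only for illustration:

```python
from fractions import Fraction

def rank(rows):
    """Exact rank of an integer matrix, via Gaussian elimination
    over the rationals (no floating-point error)."""
    m = [[Fraction(x) for x in row] for row in rows]
    rk = 0
    for col in range(len(m[0])):
        pivot = next((i for i in range(rk, len(m)) if m[i][col] != 0), None)
        if pivot is None:
            continue
        m[rk], m[pivot] = m[pivot], m[rk]
        for i in range(rk + 1, len(m)):
            f = m[i][col] / m[rk][col]
            m[i] = [a - f * b for a, b in zip(m[i], m[rk])]
        rk += 1
    return rk

def l_ind(m1, m2):
    """Linear-independence between two same-rank mappings:
    rank of the stacked mapping minus the common rank r."""
    return rank(m1 + m2) - rank(m1)

# Two d=4, r=2 mappings that share the 12-ET val: l_ind = 1, addable.
a = [[12, 19, 28, 34], [19, 30, 44, 53]]
b = [[12, 19, 28, 34], [22, 35, 51, 62]]
print(l_ind(a, b))  # 1

# A pair whose stacked mapping has full rank 4: l_ind = 2, not addable.
c = [[1, 0, 0, 0], [0, 1, 0, 0]]  # toy mapping, for illustration only
print(l_ind(a, c))  # 2
```

By duality, computing the same quantity from the comma bases instead of the mappings would give the same <math>l_{\text{ind}}</math>.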


We have now exhausted the possibility space for <math>d=4</math>. We can't extend either the <span style="color: #B6321C;">red band</span> or the black bar any further.


====<math>d=5</math> diagrams finally reveal important relationships====

So how about we go to <math>d=5</math> (such as the 11-limit). As usual, starting with <math>g_{\text{min}}=1</math>:
{| class="wikitable"
|+
| rowspan="5" |<math>d=5</math>
|style="border-bottom: 3px solid black;" |<math>g_{\text{min}}=1</math>
| style="background-color: #E7BBB3; border-bottom: 3px solid black;" |  ↑  
| style="background-color: #E7BBB3; border-bottom: 3px solid black;" |  ↑  
| style="background-color: #E7BBB3; border-bottom: 3px solid black;" |  ↑  
| style="background-color: #E7BBB3; border-bottom: 3px solid black;" |  ↑  
| style="background-color: #E7BBB3; border-bottom: 3px solid black;" |  ↑  
|<span style="color: #B6321C;"><math>l_{\text{ind}}=1</math></span>
|-
| rowspan="4" |<math>g_{\text{max}}=4</math>
| style="background-color: #E7BBB3;" |  ↓  
| style="background-color: #E7BBB3;" |  ↓  
| style="background-color: #E7BBB3;" |  ↓  
| style="background-color: #E7BBB3;" |  ↓  
| style="background-color: #E7BBB3;" |  ↓  
|<span style="color: #B6321C;"><math>l_{\text{ind}}=1</math></span>
|-
|style="background-color: #BED5BA;"|      
|style="background-color: #BED5BA;"|      
|style="background-color: #BED5BA;"|      
|style="background-color: #BED5BA;"|      
|style="background-color: #BED5BA;"|      
| rowspan="3" |
|-
|style="background-color: #BED5BA;"|      
|style="background-color: #BED5BA;"|      
|style="background-color: #BED5BA;"|      
|style="background-color: #BED5BA;"|      
|style="background-color: #BED5BA;"|      
|-
|style="background-color: #BED5BA;"|      
|style="background-color: #BED5BA;"|      
|style="background-color: #BED5BA;"|      
|style="background-color: #BED5BA;"|      
|style="background-color: #BED5BA;"|      
|}


Just as with the <span style="color: #B6321C;"><math>l_{\text{ind}}=1</math></span> diagrams given for <math>d=3</math> and <math>d=4</math>, we can see these are addable temperaments.


Now let's look at <math>d=5</math> but with <math>g_{\text{min}}=2</math>. This presents two possibilities. First, <span style="color: #B6321C;"><math>l_{\text{ind}}=1</math></span>:


{| class="wikitable"
|+
| rowspan="5" |<math>d=5</math>
| rowspan="2" style="border-bottom: 3px solid black;" |<math>g_{\text{min}}=2</math>
|style="background-color: #BED5BA;"|      
|style="background-color: #BED5BA;"|      
|style="background-color: #BED5BA;"|      
|style="background-color: #BED5BA;"|      
|style="background-color: #BED5BA;"|      
|
|-
|style="background-color: #E7BBB3; border-bottom: 3px solid black;"|  ↑  
|style="background-color: #E7BBB3; border-bottom: 3px solid black;"|  ↑  
|style="background-color: #E7BBB3; border-bottom: 3px solid black;"|  ↑  
|style="background-color: #E7BBB3; border-bottom: 3px solid black;"|  ↑  
|style="background-color: #E7BBB3; border-bottom: 3px solid black;"|  ↑  
|<span style="color: #B6321C;"><math>l_{\text{ind}}=1</math></span>
|-
| rowspan="3" |<math>g_{\text{max}}=3</math>
| style="background-color: #E7BBB3;" |  ↓  
| style="background-color: #E7BBB3;" |  ↓  
| style="background-color: #E7BBB3;" |  ↓  
| style="background-color: #E7BBB3;" |  ↓  
| style="background-color: #E7BBB3;" |  ↓  
|<span style="color: #B6321C;"><math>l_{\text{ind}}=1</math></span>
|-
|style="background-color: #BED5BA;"|      
|style="background-color: #BED5BA;"|      
|style="background-color: #BED5BA;"|      
|style="background-color: #BED5BA;"|      
|style="background-color: #BED5BA;"|      
|
|-
|style="background-color: #BED5BA;"|      
|style="background-color: #BED5BA;"|      
|style="background-color: #BED5BA;"|      
|style="background-color: #BED5BA;"|      
|style="background-color: #BED5BA;"|      
|
|}


And second, <span style="color: #B6321C;"><math>l_{\text{ind}}=2</math></span>:


{| class="wikitable"
|+
| rowspan="5" |<math>d=5</math>
| rowspan="2" style="border-bottom: 3px solid black;" |<math>g_{\text{min}}=2</math>
| style="background-color: #E7BBB3;" |  ↑  
| style="background-color: #E7BBB3;" |  ↑  
| style="background-color: #E7BBB3;" |  ↑  
| style="background-color: #E7BBB3;" |  ↑  
| style="background-color: #E7BBB3;" |  ↑  
| rowspan="2" |<span style="color: #B6321C;"><math>l_{\text{ind}}=2</math></span>
|-
|style="background-color: #E7BBB3; border-bottom: 3px solid black;"|  ↑  
|style="background-color: #E7BBB3; border-bottom: 3px solid black;"|  ↑  
|style="background-color: #E7BBB3; border-bottom: 3px solid black;"|  ↑  
|style="background-color: #E7BBB3; border-bottom: 3px solid black;"|  ↑  
|style="background-color: #E7BBB3; border-bottom: 3px solid black;"|  ↑  
|-
| rowspan="3" |<math>g_{\text{max}}=3</math>
| style="background-color: #E7BBB3;" |  ↓  
| style="background-color: #E7BBB3;" |  ↓  
| style="background-color: #E7BBB3;" |  ↓  
| style="background-color: #E7BBB3;" |  ↓  
| style="background-color: #E7BBB3;" |  ↓  
| rowspan="2" |<span style="color: #B6321C;"><math>l_{\text{ind}}=2</math></span>
|-
|style="background-color: #E7BBB3;"|  ↓  
|style="background-color: #E7BBB3;"|  ↓  
|style="background-color: #E7BBB3;"|  ↓  
|style="background-color: #E7BBB3;"|  ↓  
|style="background-color: #E7BBB3;"|  ↓  
|-
|style="background-color: #BED5BA;"|      
|style="background-color: #BED5BA;"|      
|style="background-color: #BED5BA;"|      
|style="background-color: #BED5BA;"|      
|style="background-color: #BED5BA;"|      
|
|}


Here's where things really get interesting. In both of these cases, the pairs of temperaments represented are <span style="color: #3C8031;">linearly dependent</span> on each other (i.e. either their mappings are <span style="color: #3C8031;">linearly dependent</span>, their comma bases are <span style="color: #3C8031;">linearly dependent</span>, or both). So far, every pair of temperaments that was <span style="color: #3C8031;">linearly dependent</span> has also been <span style="color: #B6321C;"><math>l_{\text{ind}}=1</math></span>, and therefore addable. But in the second case here, we have <span style="color: #B6321C;"><math>l_{\text{ind}}=2</math></span>, and yet, since <math>d=5</math>, the temperaments still manage to be <span style="color: #3C8031;">linearly dependent</span>. So this is the first example of a <span style="color: #3C8031;">linearly dependent</span> temperament pairing which is not addable.
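One way to see this split between the two properties numerically: compute <math>l_{\text{ind}}</math> as the rank of the stacked mappings minus their common rank, and then addable means <math>l_{\text{ind}}=1</math> while linearly dependent means <math>l_{\text{ind}} < g_{\text{max}}</math>. A Python sketch with toy <math>d=5</math>, <math>r=2</math> mappings (my own, chosen only to exhibit <math>l_{\text{ind}}=2</math>):

```python
from fractions import Fraction

def rank(rows):
    """Exact rank of an integer matrix over the rationals."""
    m = [[Fraction(x) for x in row] for row in rows]
    rk = 0
    for col in range(len(m[0])):
        pivot = next((i for i in range(rk, len(m)) if m[i][col] != 0), None)
        if pivot is None:
            continue
        m[rk], m[pivot] = m[pivot], m[rk]
        for i in range(rk + 1, len(m)):
            f = m[i][col] / m[rk][col]
            m[i] = [a - f * b for a, b in zip(m[i], m[rk])]
        rk += 1
    return rk

def l_ind(m1, m2):
    return rank(m1 + m2) - rank(m1)

def is_addable(m1, m2):
    return l_ind(m1, m2) == 1

def is_linearly_dependent(m1, m2, d):
    g_max = max(rank(m1), d - rank(m1))
    return l_ind(m1, m2) < g_max

# Toy d=5, r=2 mappings whose stacked mapping has rank 4, so l_ind = 2:
a = [[1, 0, 0, 0, 0], [0, 1, 0, 0, 0]]
b = [[0, 0, 1, 0, 0], [0, 0, 0, 1, 0]]
print(l_ind(a, b))                     # 2
print(is_linearly_dependent(a, b, 5))  # True: their comma bases share a vector
print(is_addable(a, b))                # False
```

Here the mappings share nothing, but since <math>n=3</math> on the comma side, the two comma bases still share one vector: dependent, yet not addable, exactly as in the diagram above.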


====Back to <math>d=2</math>, for a surprisingly tricky example====

Beyond <math>d=5</math>, these diagrams get cumbersome to prepare, and cease to reveal further insights. But if we step back down to <math>d=2</math>, a place simpler than anywhere we've looked so far, we actually find another surprisingly tricky example, which is hopefully still illuminating.


So <math>d=2</math> (such as the 3-limit) presents another case — similar to the <math>d=5</math>, <math>g_{\text{min}}=2</math>, <span style="color: #B6321C;"><math>l_{\text{ind}}=2</math></span> case explored most recently above — where the properties of <span style="color: #3C8031;">linear dependence</span> and addability do not match each other. But while in the other case, we had a temperament pair that was <span style="color: #3C8031;">linearly dependent</span> yet not addable, in this <math>d=2</math> (and therefore <math>g_{\text{min}}=1</math>, <span style="color: #B6321C;"><math>l_{\text{ind}}=1</math></span>) case, it is the other way around: addable yet <span style="color: #B6321C;">linearly independent</span>!


{| class="wikitable"
|+
| rowspan="2" |<math>d=2</math>
|style="border-bottom: 3px solid black;" |<math>g_{\text{min}}=1</math>
|style="background-color: #E7BBB3; border-bottom: 3px solid black;"|  ↑  
|style="background-color: #E7BBB3; border-bottom: 3px solid black;"|  ↑  
|<span style="color: #B6321C;"><math>l_{\text{ind}}=1</math></span>
|-
|<math>g_{\text{max}}=1</math>
|style="background-color: #E7BBB3;"|  ↓  
|style="background-color: #E7BBB3;"|  ↓  
|<span style="color: #B6321C;"><math>l_{\text{ind}}=1</math></span>
|}


Basically, in the case of <math>d=2</math>, <math>g_{\text{max}}=1</math> (in non-trivial cases, i.e. not JI or the unison temperament), so any two different ETs or commas you pick are going to be <span style="color: #B6321C;">linearly independent</span> (because the only way they could be <span style="color: #3C8031;">linearly dependent</span> would be to be the same temperament). And yet we know we can still entry-wise add them to new vectors that are [[Douglas_Blumeyer_and_Dave_Keenan%27s_Intro_to_exterior_algebra_for_RTT#Decomposability|decomposable]], because they're already vectors (decomposing means expressing a [[Douglas_Blumeyer_and_Dave_Keenan%27s_Intro_to_exterior_algebra_for_RTT#From_vectors_to_multivectors|multivector]] as a list of monovectors, so decomposing a multivector that's already a monovector like this is tantamount to merely putting array braces around it).


====Conclusion====


This explanation has hopefully helped you get a grip on what addability, AKA <span style="color: #B6321C;"><math>l_{\text{ind}}=1</math></span>, is like. But it still hasn't quite explained why <span style="color: #B6321C;"><math>l_{\text{ind}}=1</math></span> is one and the same thing as addability. We will look at this in another section soon.


===Geometric explanation===

== Applications ==


The temperament that results from summing or differencing two temperaments, as stated above, has properties similar to those of the original two temperaments. These properties are discussed in terms of "Fokker groups" at [[Fokker block]].


== Sintel's proof of the linear independence conjecture==