Temperament addition



For example, the sum of [[12-ET]] and [[7-ET]] is [[19-ET]] because {{map|12 19 28}} + {{map|7 11 16}} = {{map|(12+7) (19+11) (28+16)}} = {{map|19 30 44}}, and the difference of 12-ET and 7-ET is 5-ET because {{map|12 19 28}} - {{map|7 11 16}} = {{map|(12-7) (19-11) (28-16)}} = {{map|5 8 12}}.


<math>\left[ \begin{array} {rrr}
12 & 19 & 28 \\
\end{array} \right] + \left[ \begin{array} {rrr}
7 & 11 & 16 \\
\end{array} \right] = \left[ \begin{array} {rrr}
19 & 30 & 44 \\
\end{array} \right]</math>

<math>\left[ \begin{array} {rrr}
12 & 19 & 28 \\
\end{array} \right] - \left[ \begin{array} {rrr}
7 & 11 & 16 \\
\end{array} \right] = \left[ \begin{array} {rrr}
5 & 8 & 12 \\
\end{array} \right]</math>


We can write these using [[wart notation]] as 12p + 7p = 19p and 12p - 7p = 5p, respectively. The kinship among these temperaments can be seen in the fact that, like 12-ET and 7-ET themselves, both 19-ET (their sum) and 5-ET (their difference) support [[meantone temperament]].
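Since the operation is just entry-wise arithmetic on the maps, it is easy to sketch in code. Here is a minimal Python illustration (the helper names are mine, not from any temperament library):

```python
# Entry-wise sum and difference of equal-temperament maps, assuming
# plain Python lists such as [12, 19, 28] for the map ⟨12 19 28].

def map_add(a, b):
    """Entry-wise sum of two maps of equal length."""
    return [x + y for x, y in zip(a, b)]

def map_sub(a, b):
    """Entry-wise difference of two maps of equal length."""
    return [x - y for x, y in zip(a, b)]

et12 = [12, 19, 28]  # 5-limit map of 12-ET
et7 = [7, 11, 16]    # 5-limit map of 7-ET

print(map_add(et12, et7))  # [19, 30, 44], i.e. 19-ET
print(map_sub(et12, et7))  # [5, 8, 12], i.e. 5-ET
```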


Temperament sums and differences can also be found using commas; for example, meantone + porcupine = tetracot because {{vector|4 -4 1}} + {{vector|1 -5 3}} = {{vector|(4+1) (-4+(-5)) (1+3)}} = {{vector|5 -9 4}}, and meantone - porcupine = dicot because {{vector|4 -4 1}} - {{vector|1 -5 3}} = {{vector|(4-1) (-4-(-5)) (1-3)}} = {{vector|3 1 -2}}.


<math>\left[ \begin{array} {rrr}
4 & -4 & 1 \\
\end{array} \right] + \left[ \begin{array} {rrr}
1 & -5 & 3 \\
\end{array} \right] = \left[ \begin{array} {rrr}
5 & -9 & 4 \\
\end{array} \right]</math>

<math>\left[ \begin{array} {rrr}
4 & -4 & 1 \\
\end{array} \right] - \left[ \begin{array} {rrr}
1 & -5 & 3 \\
\end{array} \right] = \left[ \begin{array} {rrr}
3 & 1 & -2 \\
\end{array} \right]</math>


We could write this in ratio form — replacing addition with multiplication and subtraction with division — as 80/81 × 250/243 = 20000/19683 and 80/81 ÷ 250/243 = 24/25, respectively. The similarity in these temperaments can be seen in how all of them are supported by 7-ET.
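The correspondence between vector addition and ratio multiplication can be checked with exact rational arithmetic using only the Python standard library (a sketch; the function name is my own):

```python
from fractions import Fraction
from functools import reduce

PRIMES = [2, 3, 5]

def monzo_to_ratio(monzo):
    """Convert a prime-exponent vector such as [4, -4, 1] to an exact ratio."""
    return reduce(lambda acc, pe: acc * Fraction(pe[0]) ** pe[1],
                  zip(PRIMES, monzo), Fraction(1))

meantone = [4, -4, 1]   # 80/81
porcupine = [1, -5, 3]  # 250/243

print(monzo_to_ratio(meantone) * monzo_to_ratio(porcupine))  # 20000/19683 (tetracot)
print(monzo_to_ratio(meantone) / monzo_to_ratio(porcupine))  # 24/25 (dicot)
```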


For example, let’s look at septimal meantone plus flattone. The [[canonical form]]s of these temperaments are {{ket|{{map|1 0 -4 -13}} {{map|0 1 4 10}}}} and {{ket|{{map|1 0 -4 17}} {{map|0 1 4 -9}}}}. Simple entry-wise addition of these two mapping matrices gives {{ket|{{map|2 0 -8 4}} {{map|0 2 8 1}}}} which is not the correct answer.


<math>\left[ \begin{array} {rrrr}
1 & 0 & -4 & -13 \\
0 & 1 & 4 & 10 \\
\end{array} \right] + \left[ \begin{array} {rrrr}
1 & 0 & -4 & 17 \\
0 & 1 & 4 & -9 \\
\end{array} \right] = \left[ \begin{array} {rrrr}
2 & 0 & -8 & 4 \\
0 & 2 & 8 & 1 \\
\end{array} \right]</math>


And not only because it is enfactored. The full explanation of why it's the wrong answer is beyond the scope of this example. However, if we put each of these two mappings into a form that includes their <span style="color: #3C8031;"><math>L_{\text{dep}}</math></span> explicitly, the addition can be made to work out correctly.


In this case, their <span style="color: #3C8031;"><math>L_{\text{dep}}</math></span> consists of a single vector: <span style="color: #3C8031;">{{ket|{{map|19 30 44 53}}}}</span>. The original matrices had two vectors, so as a next step, we pad out these matrices by drawing vectors from the original matrices, starting with their first vectors; now we have [<span style="color: #3C8031;">{{map|19 30 44 53}}</span> {{map|1 0 -4 -13}}⟩ and [<span style="color: #3C8031;">{{map|19 30 44 53}}</span> {{map|1 0 -4 17}}⟩. We could choose any vectors from the original matrices, as long as they are linearly independent from the ones we already have; if one is not, skip it and move on (otherwise we'd produce a [[rank-deficient]] matrix that no longer represents the temperament we started with). In this case, though, the first vectors are both fine.


<math>\left[ \begin{array} {rrrr}
\color{OliveGreen}19 & \color{OliveGreen}30 & \color{OliveGreen}44 & \color{OliveGreen}53 \\
1 & 0 & -4 & -13 \\
\end{array} \right]</math>

<math>\left[ \begin{array} {rrrr}
\color{OliveGreen}19 & \color{OliveGreen}30 & \color{OliveGreen}44 & \color{OliveGreen}53 \\
1 & 0 & -4 & 17 \\
\end{array} \right]</math>
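The padding step can be sketched in Python with exact rational arithmetic; the rank check is a plain Gaussian elimination, and the function names are my own:

```python
from fractions import Fraction

def rank(rows):
    """Rank of an integer matrix over the rationals, via Gaussian elimination."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for col in range(len(m[0]) if m else 0):
        piv = next((i for i in range(r, len(m)) if m[i][col] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][col] != 0:
                f = m[i][col] / m[r][col]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def pad(l_dep, mapping):
    """Pad the linear-dependency rows with rows drawn from the original
    mapping, skipping any row that would leave the matrix rank-deficient."""
    out = [list(row) for row in l_dep]
    for row in mapping:
        if len(out) == len(mapping):
            break  # we only need as many rows as the original mapping had
        if rank(out + [list(row)]) > rank(out):
            out.append(list(row))
    return out

meantone = [[1, 0, -4, -13], [0, 1, 4, 10]]
l_dep = [[19, 30, 44, 53]]
print(pad(l_dep, meantone))  # [[19, 30, 44, 53], [1, 0, -4, -13]]
```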


All we have to do now, before performing the entry-wise addition, is verify that both matrices are defactored. The best way to do this is inspired by [[Pernet-Stein defactoring]]: we follow that algorithm until the point where we have a square transformation matrix, but instead of inverting it and multiplying by it to ''remove'' the enfactoring, we simply take this square matrix's determinant, which is the factor we were about to remove. If that determinant is 1, then the matrix is already defactored; if not, then we need to do some additional steps. In this case, both matrices ''are'' enfactored, each by a factor of 30<ref>or you may prefer to think of this as three different (prime) factors: 2, 3, 5 (which multiply to 30)</ref>. Defactoring each matrix while keeping its <span style="color: #3C8031;"><math>L_{\text{dep}}</math></span> row in place replaces the second row, giving [<span style="color: #3C8031;">{{map|19 30 44 53}}</span> {{map|7 11 16 19}}⟩ and [<span style="color: #3C8031;">{{map|19 30 44 53}}</span> {{map|7 11 16 20}}⟩.
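For two-row matrices like these, there is a convenient shortcut to the full square-transformation-matrix procedure just described (the shortcut and helper name are mine): the enfactoring factor is the greatest common divisor of all the 2×2 minors, provided the rows themselves share no common factor.

```python
# Requires Python 3.9+ for math.gcd with more than two arguments.
from itertools import combinations
from math import gcd

def enfactoring_factor(matrix):
    """GCD of all 2x2 minors of a two-row integer matrix."""
    r1, r2 = matrix
    minors = [r1[i] * r2[j] - r1[j] * r2[i]
              for i, j in combinations(range(len(r1)), 2)]
    return gcd(*(abs(d) for d in minors))

print(enfactoring_factor([[19, 30, 44, 53], [1, 0, -4, -13]]))  # 30
print(enfactoring_factor([[19, 30, 44, 53], [1, 0, -4, 17]]))   # 30
print(enfactoring_factor([[1, 0, -4, -13], [0, 1, 4, 10]]))     # 1 (defactored)
```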


Now the matrices are ready to add:


<math>\left[ \begin{array} {rrrr}
\color{OliveGreen}19 & \color{OliveGreen}30 & \color{OliveGreen}44 & \color{OliveGreen}53 \\
\color{BrickRed}7 & \color{BrickRed}11 & \color{BrickRed}16 & \color{BrickRed}19 \\
\end{array} \right] + \left[ \begin{array} {rrrr}
\color{OliveGreen}19 & \color{OliveGreen}30 & \color{OliveGreen}44 & \color{OliveGreen}53 \\
\color{BrickRed}7 & \color{BrickRed}11 & \color{BrickRed}16 & \color{BrickRed}20 \\
\end{array} \right]</math>


Clearly, though, we can see that with the top vector — the <span style="color: #3C8031;"><math>L_{\text{dep}}</math></span> — there's no sense in adding its two copies together, as we'd just get the same vector but 2-enfactored. So we may as well set the <span style="color: #3C8031;"><math>L_{\text{dep}}</math></span> aside, and deal only with the <span style="color: #B6321C;">linearly independent vectors</span>:


<math>\left[ \begin{array} {rrrr}
\color{BrickRed}7 & \color{BrickRed}11 & \color{BrickRed}16 & \color{BrickRed}19 \\
\end{array} \right] + \left[ \begin{array} {rrrr}
\color{BrickRed}7 & \color{BrickRed}11 & \color{BrickRed}16 & \color{BrickRed}20 \\
\end{array} \right] = \left[ \begin{array} {rrrr}
\color{BrickRed}14 & \color{BrickRed}22 & \color{BrickRed}32 & \color{BrickRed}39 \\
\end{array} \right]</math>


Then we can reintroduce the <span style="color: #3C8031;"><math>L_{\text{dep}}</math></span> afterwards:


<math>\left[ \begin{array} {rrrr}
\color{OliveGreen}19 & \color{OliveGreen}30 & \color{OliveGreen}44 & \color{OliveGreen}53 \\
\color{BrickRed}14 & \color{BrickRed}22 & \color{BrickRed}32 & \color{BrickRed}39 \\
\end{array} \right]</math>


And finally [[canonical form|canonicalize]]:


<math>\left[ \begin{array} {rrrr}
1 & 0 & -4 & 2 \\
0 & 2 & 8 & 1 \\
\end{array} \right]</math>


so we can now see that meantone plus flattone is [[godzilla]].
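One way to sanity-check a result like this: any comma tempered out by both input temperaments must also be tempered out by their sum. Both meantone and flattone temper out 81/80, so godzilla must as well. Here is a small Python check (the godzilla mapping used, [⟨1 0 -4 2] ⟨0 2 8 1]⟩, is my assumption of its canonical form):

```python
def maps_to_zero(mapping, monzo):
    """True if every row of the mapping sends the interval vector to zero."""
    return all(sum(m * v for m, v in zip(row, monzo)) == 0 for row in mapping)

syntonic = [-4, 4, -1, 0]  # 81/80 as a 7-limit vector

meantone = [[1, 0, -4, -13], [0, 1, 4, 10]]
flattone = [[1, 0, -4, 17], [0, 1, 4, -9]]
godzilla = [[1, 0, -4, 2], [0, 2, 8, 1]]  # assumed canonical mapping

print(all(maps_to_zero(m, syntonic) for m in (meantone, flattone, godzilla)))  # True
```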


As long as we've done all this work to set these matrices up for arithmetic, let's check their difference as well. In the case of the difference, it's even more essential that we set the <span style="color: #3C8031;"><math>L_{\text{dep}}</math></span> aside before the entry-wise arithmetic, because if we were to subtract it from itself, we'd end up with all zeros. Unlike the case of the sum, where we'd merely end up with an enfactored version of the starting vectors, here we couldn't even defactor our way back to where we started, because the relevant information would have been wiped out entirely. So let's entry-wise subtract just the two <span style="color: #B6321C;">linearly independent vectors</span>:


<math>\left[ \begin{array} {rrrr}
\color{BrickRed}7 & \color{BrickRed}11 & \color{BrickRed}16 & \color{BrickRed}19 \\
\end{array} \right] - \left[ \begin{array} {rrrr}
\color{BrickRed}7 & \color{BrickRed}11 & \color{BrickRed}16 & \color{BrickRed}20 \\
\end{array} \right] = \left[ \begin{array} {rrrr}
\color{BrickRed}0 & \color{BrickRed}0 & \color{BrickRed}0 & \color{BrickRed}-1 \\
\end{array} \right]</math>


And so, reintroducing the linear dependency basis, we have:


<math>\left[ \begin{array} {rrrr}
\color{OliveGreen}19 & \color{OliveGreen}30 & \color{OliveGreen}44 & \color{OliveGreen}53 \\
\color{BrickRed}0 & \color{BrickRed}0 & \color{BrickRed}0 & \color{BrickRed}-1 \\
\end{array} \right]</math>


Which canonicalizes to:


<math>\left[ \begin{array} {rrrr}
19 & 30 & 44 & 0 \\
0 & 0 & 0 & 1 \\
\end{array} \right]</math>


(almost the same matrix we just had: the -1 in the second vector becomes a 1, and the 53 in the first vector is zeroed out).


But the last thing we need to do is check the negativity of these two temperaments, so we can figure out which of these two results is truly the sum and which is truly the difference. If one of the matrices we performed arithmetic on was actually negative, then we have our results backwards (if both are negative, then the problem cancels out, and we go back to being right).