Temperament addition

===Matrix approach===


Temperament arithmetic for temperaments with both <math>r>1</math> and <math>n>1</math> can also be done using matrices, but it's significantly more involved than it is with multivectors. It works in essentially the same way — entry-wise addition or subtraction — but for matrices, it is necessary to make explicit the basis for the <span style="color: green;">linearly dependent vectors</span> shared between the involved matrices before performing the arithmetic. In other words, any vectors that can be found through linear combinations of any of the involved matrices' basis vectors must appear explicitly and in the same position of each matrix before the sum or difference is taken. But it is not as simple as determining the basis for these <span style="color: green;">linearly dependent vectors</span> (using the technique described [[Linear_dependence#For_a_given_set_of_matrices.2C_how_to_compute_a_basis_for_their_linearly_dependent_vectors|here]]) and then supplying the remaining vectors necessary to match the grade of the original matrix, because the results may then be [[enfactored]]. And defactoring them without compromising the explicit <span style="color: green;">linearly dependent basis vectors</span> cannot be done using existing [[defactoring algorithms]]; it's a tricky process, or at least computationally intensive.
 
====Example====
 
For example, let’s look at septimal meantone plus flattone. The [[canonical form]]s of these temperaments are {{ket|{{map|1 0 -4 -13}} {{map|0 1 4 10}}}} and {{ket|{{map|1 0 -4 17}} {{map|0 1 4 -9}}}}. Simple entry-wise addition of these two mapping matrices gives {{ket|{{map|2 0 -8 4}} {{map|0 2 8 1}}}} which is not the correct answer.
 
<math>\left[ \begin{array} {rrr}
 
1 & 0 & -4 & -13 \\
0 & 1 & 4 & 10 \\
 
\end{array} \right]</math>
+
<math>\left[ \begin{array} {rrr}
 
1 & 0 & -4 & 17 \\
0 & 1 & 4 & -9 \\
 
\end{array} \right]</math>
=
<math>\left[ \begin{array} {rrr}
 
2 & 0 & -8 & 4 \\
0 & 2 & 8 & 1 \\
 
\end{array} \right]</math>
 
And not only because it is enfactored. The full explanation of why it's the wrong answer is beyond the scope of this example. However, if we first put each of these two mappings into a form that includes their <span style="color: green;">linear dependence basis</span> explicitly, the arithmetic does work out correctly.
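To make the problem concrete, here is a quick check in Python (standard library only; it uses the fact that the common factor of a rank-2 mapping equals the GCD of its 2×2 minors, a shortcut this sketch assumes rather than the procedure described later in this section):

```python
from itertools import combinations
from math import gcd

meantone = [[1, 0, -4, -13], [0, 1, 4, 10]]
flattone = [[1, 0, -4, 17], [0, 1, 4, -9]]

# Naive entry-wise sum of the two canonical mappings
naive_sum = [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(meantone, flattone)]
print(naive_sum)  # [[2, 0, -8, 4], [0, 2, 8, 1]]

# The GCD of the 2x2 minors exposes any common factor of a rank-2 mapping
r1, r2 = naive_sum
common_factor = gcd(*(r1[i] * r2[j] - r1[j] * r2[i] for i, j in combinations(range(4), 2)))
print(common_factor)  # 2 -- the naive sum is 2-enfactored
```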
 
In this case, their <span style="color: green;">linear dependence basis</span> is the single vector <span style="color: green;">{{map|19 30 44 53}}</span>, while the original matrices each had two vectors. So as a next step, we pad out these matrices by drawing vectors from the original matrices, starting from their first vectors, so now we have [<span style="color: green;">{{map|19 30 44 53}}</span> {{map|1 0 -4 -13}}⟩ and [<span style="color: green;">{{map|19 30 44 53}}</span> {{map|1 0 -4 17}}⟩. We could choose any vectors from the original matrices, as long as they are linearly independent from the ones we already have; if one is not, skip it and move on (otherwise we'd produce a [[rank-deficient]] matrix that no longer represents the same temperament we started with). In this case the first vectors are both fine.
 
<math>\left[ \begin{array} {rrr}
 
\color{green}19 & \color{green}30 & \color{green}44 & \color{green}53 \\
1 & 0 & -4 & -13 \\
 
\end{array} \right]</math>
+
<math>\left[ \begin{array} {rrr}
 
\color{green}19 & \color{green}30 & \color{green}44 & \color{green}53 \\
1 & 0 & -4 & 17 \\
 
\end{array} \right]</math>
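The linear-independence requirement on padding vectors is easy to mechanize: a pair of 4-entry vectors is independent exactly when at least one of their 2×2 minors is nonzero. A minimal sketch (the helper name is illustrative):

```python
from itertools import combinations

def independent(v, w):
    """True if two 4-entry vectors are linearly independent,
    i.e. at least one 2x2 minor is nonzero."""
    return any(v[i] * w[j] - v[j] * w[i] != 0 for i, j in combinations(range(4), 2))

dep = [19, 30, 44, 53]  # the shared linear dependence basis vector
print(independent(dep, [1, 0, -4, -13]))  # True: usable as padding
print(independent(dep, [1, 0, -4, 17]))   # True: usable as padding
```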
 
All we have to do now before performing the entry-wise addition is verify that both matrices are defactored. The best way to do this is inspired by [[Pernet-Stein defactoring]]: we follow that algorithm up to the point where we have a square transformation matrix, but instead of inverting it and multiplying by it to ''remove'' the enfactoring, we simply take this square matrix's determinant, which is the factor we were about to remove. If that determinant is 1, then we're already defactored; if not, then we need to take some additional steps. In this case, both matrices ''are'' enfactored, each by a factor of 30<ref>or you may prefer to think of this as three different (prime) factors: 2, 3, 5 (which multiply to 30)</ref>.
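As a sketch of this check (substituting the GCD-of-minors shortcut for the determinant of the Pernet-Stein transformation matrix; for a rank-2 mapping the two yield the same factor):

```python
from itertools import combinations
from math import gcd

def enfactoring_factor(m):
    """GCD of all 2x2 minors of a 2x4 matrix; 1 means the matrix is defactored."""
    r1, r2 = m
    return gcd(*(r1[i] * r2[j] - r1[j] * r2[i] for i, j in combinations(range(4), 2)))

print(enfactoring_factor([[19, 30, 44, 53], [1, 0, -4, -13]]))  # 30
print(enfactoring_factor([[19, 30, 44, 53], [1, 0, -4, 17]]))   # 30
```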
 
Our first thought may be simply to defactor these matrices, then. The problem with that is that most established defactoring algorithms will alter the first vector so that it's no longer <span style="color: green;">{{map|19 30 44 53}}</span>, in which case we won't be able to do temperament arithmetic with the matrices anymore, which is our goal. And we can't simply defactor and then paste <span style="color: green;">{{map|19 30 44 53}}</span> back over the first vector, because the result might just be enfactored again! We need a defactoring algorithm that works without altering any of the vectors in the <span style="color: green;">linear dependence basis</span>.
 
It turns out that you can always isolate the enfactoring factor in the single final vector of the matrix — the <span style="color: red;">linearly independent vector</span> — through linear combinations of the vectors in the <span style="color: green;">linear dependence basis</span>. In this case, since there's only a single vector in the <span style="color: green;">linear dependence basis</span>, all we need to do is repeatedly add that <span style="color: green;">one linearly dependent vector</span> to the <span style="color: red;">linearly independent vector</span> until we find a vector whose entries' GCD is the target factor, which we can then simply divide out to defactor the matrix.
 
In this case, we can accomplish this by adding 11 times the first vector. For the first matrix, {{map|1 0 -4 -13}} + 11⋅<span style="color: green;">{{map|19 30 44 53}}</span> = {{map|210 330 480 570}}, whose entries have a GCD of 30, so we can defactor the matrix by dividing that vector by 30, leaving us with <span style="color: red;">{{map|7 11 16 19}}</span>. Therefore the final matrix here is [<span style="color: green;">{{map|19 30 44 53}}</span> <span style="color: red;">{{map|7 11 16 19}}</span>⟩. The other matrix happens to defactor in the same way: {{map|1 0 -4 17}} + 11⋅<span style="color: green;">{{map|19 30 44 53}}</span> = {{map|210 330 480 600}}, whose GCD is also 30, reducing to <span style="color: red;">{{map|7 11 16 20}}</span>, so the final matrix is [<span style="color: green;">{{map|19 30 44 53}}</span> <span style="color: red;">{{map|7 11 16 20}}</span>⟩.
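The search for the right multiple can be brute-forced. A minimal sketch (the function name and search bound are choices of this sketch; divisibility by the factor repeats with period equal to the factor, so checking multiples 0 through factor−1 suffices):

```python
from math import gcd

def defactor_preserving_basis(m, factor):
    """Add multiples of the linearly dependent first row to the second row until
    the second row's entries are all divisible by the factor, then divide it out."""
    dep, ind = m
    for k in range(factor):  # divisibility mod `factor` repeats with period `factor`
        candidate = [x + k * d for x, d in zip(ind, dep)]
        if gcd(*candidate) % factor == 0:
            return [dep, [c // factor for c in candidate]]
    raise ValueError("matrix is not enfactored by this factor")

print(defactor_preserving_basis([[19, 30, 44, 53], [1, 0, -4, -13]], 30))
# [[19, 30, 44, 53], [7, 11, 16, 19]]
print(defactor_preserving_basis([[19, 30, 44, 53], [1, 0, -4, 17]], 30))
# [[19, 30, 44, 53], [7, 11, 16, 20]]
```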
 
Now the matrices are ready to add:
 
<math>\left[ \begin{array} {rrr}
 
\color{green}19 & \color{green}30 & \color{green}44 & \color{green}53 \\
\color{red}7 & \color{red}11 & \color{red}16 & \color{red}19 \\
 
\end{array} \right]</math>
+
<math>\left[ \begin{array} {rrr}
 
\color{green}19 & \color{green}30 & \color{green}44 & \color{green}53 \\
\color{red}7 & \color{red}11 & \color{red}16 & \color{red}20 \\
 
\end{array} \right]</math>
 
Clearly, though, there's no sense in adding the two copies of the top vector — the <span style="color: green;">linear dependence basis</span> — together, as we'd just get the same vector but 2-enfactored. So we may as well set the <span style="color: green;">linear dependence basis</span> aside, and deal only with the <span style="color: red;">linearly independent vectors</span>:
 
<math>\left[ \begin{array} {rrr}
 
\color{red}7 & \color{red}11 & \color{red}16 & \color{red}19 \\
 
\end{array} \right]</math>
+
<math>\left[ \begin{array} {rrr}
 
\color{red}7 & \color{red}11 & \color{red}16 & \color{red}20 \\
 
\end{array} \right]</math>
=
<math>\left[ \begin{array} {rrr}
 
\color{red}14 & \color{red}22 & \color{red}32 & \color{red}39 \\
 
\end{array} \right]</math>
 
Then we can reintroduce the <span style="color: green;">linear dependence basis</span> afterwards:
 
<math>\left[ \begin{array} {rrr}
 
\color{green}19 & \color{green}30 & \color{green}44 & \color{green}53 \\
\color{red}14 & \color{red}22 & \color{red}32 & \color{red}39 \\
 
\end{array} \right]</math>
 
And finally [[canonical form|canonicalize]]:
 
<math>\left[ \begin{array} {rrr}
 
1 & 0 & -4 & 2 \\
0 & 2 & 8 & 1 \\
 
\end{array} \right]</math>
 
so we can now see that meantone plus flattone is [[godzilla]].
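The canonicalization step itself is beyond the scope of this example, but we can at least verify that the matrix before canonicalization and the canonical form represent the same temperament: two mappings do so exactly when their rows span the same space, which exact rational row reduction can confirm. A sketch, standard library only (`rref` is a helper written for this sketch, not an established routine):

```python
from fractions import Fraction

def rref(rows):
    """Reduced row echelon form over the rationals, using exact arithmetic."""
    m = [[Fraction(x) for x in row] for row in rows]
    pivot_row = 0
    for col in range(len(m[0])):
        # find a row at or below pivot_row with a nonzero entry in this column
        pr = next((i for i in range(pivot_row, len(m)) if m[i][col] != 0), None)
        if pr is None:
            continue
        m[pivot_row], m[pr] = m[pr], m[pivot_row]
        pivot = m[pivot_row][col]
        m[pivot_row] = [x / pivot for x in m[pivot_row]]
        for i in range(len(m)):
            if i != pivot_row and m[i][col] != 0:
                f = m[i][col]
                m[i] = [a - f * b for a, b in zip(m[i], m[pivot_row])]
        pivot_row += 1
    return m

before = [[19, 30, 44, 53], [14, 22, 32, 39]]
after = [[1, 0, -4, 2], [0, 2, 8, 1]]
print(rref(before) == rref(after))  # True: same row space, same temperament
```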
 
Since we've done all this work to set these matrices up for arithmetic, let's check their difference as well. In the case of the difference, it's even more essential that we set the <span style="color: green;">linear dependence basis</span> aside before the entry-wise arithmetic, because if we were to subtract it from itself, we'd end up with all zeros. Unlike the case of the sum, where we'd merely end up with an enfactored version of the starting vector, here we couldn't even defactor our way back to where we started, because sending everything to zero completely wipes out the relevant information. So let's just entry-wise subtract the two <span style="color: red;">linearly independent vectors</span>:
 
<math>\left[ \begin{array} {rrr}
 
\color{red}7 & \color{red}11 & \color{red}16 & \color{red}19 \\
 
\end{array} \right]</math>
-
<math>\left[ \begin{array} {rrr}
 
\color{red}7 & \color{red}11 & \color{red}16 & \color{red}20 \\
 
\end{array} \right]</math>
=
<math>\left[ \begin{array} {rrr}
 
\color{red}0 & \color{red}0 & \color{red}0 & \color{red}-1 \\
 
\end{array} \right]</math>
 
And so, reintroducing the <span style="color: green;">linear dependence basis</span>, we have:
 
<math>\left[ \begin{array} {rrr}
 
\color{green}19 & \color{green}30 & \color{green}44 & \color{green}53 \\
\color{red}0 & \color{red}0 & \color{red}0 & \color{red}-1 \\
 
\end{array} \right]</math>
 
Which canonicalizes to:
 
<math>\left[ \begin{array} {rrr}
 
\color{green}19 & \color{green}30 & \color{green}44 & \color{green}53 \\
\color{red}0 & \color{red}0 & \color{red}0 & \color{red}1 \\
 
\end{array} \right]</math>
 
(almost the same thing, just with a 1 instead of a -1 in the second vector).
 
But the last thing we need to do is check the negativity of these two temperaments, so we can figure out which of these two results is truly the sum and which is truly the difference. If one of the matrices we performed arithmetic on was actually negative, then we have our results backwards (if both are negative, then the problem cancels out, and we go back to being right).
 
We check negativity by using the minors of these matrices. The first matrix's minors are (-1, -4, -10, -4, -13, -12) and the second matrix's minors are (-1, -4, 9, -4, 17, 32). What we're looking for here are their leading entries, because these are minors of mappings (if we were looking at minors of comma bases, we'd look at the trailing entries instead). Specifically, we're looking to see if the leading entries are positive. They're not, which tells us that these matrices, as we performed arithmetic on them, were both negative! But again, since they were ''both'' negative, the effect cancels out, and so the sum we computed is indeed the sum, and the difference is indeed the difference.
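A sketch of this negativity check (the helper name is illustrative; minors are taken over column pairs in lexicographic order, matching the tuples above):

```python
from itertools import combinations

def minors_2x4(m):
    """The six 2x2 minors of a 2x4 matrix, column pairs in lexicographic order."""
    r1, r2 = m
    return [r1[i] * r2[j] - r1[j] * r2[i] for i, j in combinations(range(4), 2)]

prepped_meantone = [[19, 30, 44, 53], [7, 11, 16, 19]]
prepped_flattone = [[19, 30, 44, 53], [7, 11, 16, 20]]
print(minors_2x4(prepped_meantone))  # [-1, -4, -10, -4, -13, -12]
print(minors_2x4(prepped_flattone))  # [-1, -4, 9, -4, 17, 32]

# For mappings we inspect the leading entry's sign: negative means the matrix,
# as we performed arithmetic on it, was negative
print(minors_2x4(prepped_meantone)[0] < 0, minors_2x4(prepped_flattone)[0] < 0)  # True True
```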
 
====Example with multiple vectors in the linear dependence basis====


(Examples WIP)