==Matrix approach==


Temperament arithmetic for <math>g_{\text{min}}>1</math> temperaments (again, that's with both <math>r>1</math> and <math>n>1</math>) can also be done using matrices, and it works in essentially the same way — entry-wise addition or subtraction — but there are some complications that make it significantly more involved than it is with multivectors. There are essentially five steps:


# Find the linear dependence basis <span style="color: #3C8031;"><math>L_{\text{dep}}</math></span>
# Put the matrices in a form with the <span style="color: #3C8031;"><math>L_{\text{dep}}</math></span>
# Check for enfactoring, and perform addabilization defactoring (if necessary)
# Check for negation, and change negation (if necessary)
# Entry-wise add, and canonicalize
 
=== The steps ===
 
==== 1. Find the <span style="color: #3C8031;"><math>L_{\text{dep}}</math></span> ====
 
For matrices, it is necessary to make explicit <span style="color: #3C8031;">the basis for the linearly dependent vectors shared</span> between the involved matrices before performing the arithmetic. In other words, any vectors that can be found through linear combinations of any of the involved matrices' basis vectors must appear explicitly and in the same position of each matrix before the sum or difference is taken. These vectors are called the <span style="color: #3C8031;">linear-dependence basis, or <math>L_{\text{dep}}</math></span>.
 
Before this can be done, of course, we need to actually find the <span style="color: #3C8031;"><math>L_{\text{dep}}</math></span>. This can be done using the technique described here: [[Linear dependence#For a given set of basis matrices, how to compute a basis for their linearly dependent vectors]]
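
To illustrate, here is a minimal sketch of this computation in Python with sympy; the function name is hypothetical, and the article's reference implementation remains the [[RTT library in Wolfram Language]]. It finds the shared vectors by solving <math>uA = vB</math> for coefficient vectors <math>u</math> and <math>v</math> on the rows of the two matrices:

<syntaxhighlight lang="python">
from math import gcd, lcm
from sympy import Matrix

def linear_dependence_basis(a, b):
    """Basis for the vectors shared by the row spaces of a and b."""
    # u*a = v*b  <=>  [a.T | -b.T] * [u; v] = 0, so read u off the nullspace
    stacked = a.T.row_join(-b.T)
    basis = []
    for k in stacked.nullspace():
        u = k[:a.rows, :].T                        # coefficients on a's rows
        vec = u * a                                # a shared vector (may be rational)
        vec = vec * lcm(*[int(e.q) for e in vec])  # clear denominators
        g = gcd(*[int(e) for e in vec])            # remove any common factor
        basis.append([int(e) // g for e in vec])
    return basis

meantone = Matrix([[1, 0, -4, -13], [0, 1, 4, 10]])
flattone = Matrix([[1, 0, -4, 17], [0, 1, 4, -9]])
print(linear_dependence_basis(meantone, flattone))  # [[19, 30, 44, 53]]
</syntaxhighlight>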
 
==== 2. Put the matrices in a form with the <span style="color: #3C8031;"><math>L_{\text{dep}}</math></span> ====
 
The <span style="color: #3C8031;"><math>L_{\text{dep}}</math></span> will always have one fewer vector than the original matrix, by the definition of addability as <span style="color: #B6321C;"><math>L_{\text{ind}}=1</math></span>. And the <span style="color: #3C8031;"><math>L_{\text{dep}}</math></span> alone is not a full recreation of the original temperament; it needs that one extra vector to get back to representing it.
 
So, as a next step, we need to pad out the <span style="color: #3C8031;"><math>L_{\text{dep}}</math></span> by drawing vectors from the original matrices. We can start with their first vectors. But if a vector happens to be linearly dependent on the <span style="color: #3C8031;"><math>L_{\text{dep}}</math></span>, we must skip it; including it would produce a [[rank-deficient]] matrix that no longer represents the same temperament we started with. So we just keep going through the original matrix's vectors until we find one that is linearly independent.
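
A sketch of this padding step, in the same illustrative Python-with-sympy style (the helper name is hypothetical):

<syntaxhighlight lang="python">
from sympy import Matrix

def pad_to_full_grade(l_dep, original):
    """Extend the L_dep with rows of the original matrix, skipping any
    row that is linearly dependent on what we already have."""
    result = l_dep.copy()
    for i in range(original.rows):
        if result.rows == original.rows:
            break
        candidate = result.col_join(original.row(i))
        if candidate.rank() > result.rank():  # keep only independent rows
            result = candidate
    return result

l_dep = Matrix([[19, 30, 44, 53]])
meantone = Matrix([[1, 0, -4, -13], [0, 1, 4, 10]])
print(pad_to_full_grade(l_dep, meantone))
# Matrix([[19, 30, 44, 53], [1, 0, -4, -13]])
</syntaxhighlight>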
 
==== 3. Addabilization defactoring ====
 
But it is not quite as simple as determining the <span style="color: #3C8031;"><math>L_{\text{dep}}</math></span> and then supplying the remaining vectors necessary to match the grade of the original matrix, because the results may then be [[enfactored]]. And defactoring them without compromising the explicit <span style="color: #3C8031;"><math>L_{\text{dep}}</math></span> cannot be done using existing [[defactoring algorithms]]; it's a tricky process, or at least computationally intensive. This is called '''addabilization defactoring'''.
 
Most established defactoring algorithms will alter any or all of the entries of a matrix. This is not an option if we still want to be able to do temperament arithmetic, however, because these matrices must retain their explicit <span style="color: #3C8031;"><math>L_{\text{dep}}</math></span>. And we can't defactor and then paste the <span style="color: #3C8031;"><math>L_{\text{dep}}</math></span> back over the first vector or something, because then we might just be enfactored again. We need to find a defactoring algorithm that manages to work without altering any of the vectors in the <span style="color: #3C8031;"><math>L_{\text{dep}}</math></span>.
 
The first step of addabilization defactoring is inspired by [[Pernet-Stein defactoring]]: we find the value of the enfactoring factor (the "greatest factor") by following that algorithm up to the point where we have a square transformation matrix, but instead of inverting it and multiplying by it to ''remove'' the enfactoring, we simply take this square matrix's determinant, which is the factor we were about to remove. If that determinant is 1, then the matrix is already defactored; if not, then we need to do some additional steps.
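
One convenient way to get the same number is as the GCD of the matrix's largest minors (for full-rank integer matrices these agree). Here is an illustrative sketch in Python with sympy, with a hypothetical helper name:

<syntaxhighlight lang="python">
from itertools import combinations
from math import gcd
from sympy import Matrix

def greatest_factor(m):
    """GCD of the matrix's largest minors; 1 means already defactored."""
    minors = [m[:, list(cols)].det()
              for cols in combinations(range(m.cols), m.rows)]
    return gcd(*[int(d) for d in minors])

a1 = Matrix([[19, 30, 44, 53], [1, 0, -4, -13]])  # padded meantone
a2 = Matrix([[19, 30, 44, 53], [1, 0, -4, 17]])   # padded flattone
print(greatest_factor(a1), greatest_factor(a2))   # 30 30
</syntaxhighlight>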
 
It turns out that you can always<ref>This conjecture was first suggested by Mike Battaglia, but it has not yet been mathematically proven. Sintel and Tom Price have done some experiments but nothing complete yet. Douglas Blumeyer's test cases in the [[RTT library in Wolfram Language]] suggest this is true, though.</ref> isolate the greatest factor in the single final vector of the matrix — the <span style="color: #B6321C;">linearly independent vector</span> — through linear combinations of the vectors in the <span style="color: #3C8031;"><math>L_{\text{dep}}</math></span>.
 
The example that will be worked through in this section below is as simple as it can get: the <span style="color: #3C8031;"><math>L_{\text{dep}}</math></span> consists of only a single vector, so we simply add some number of this <span style="color: #3C8031;">single linearly dependent vector</span> to the <span style="color: #B6321C;">linearly independent vector</span>. However, if there are multiple vectors in the <span style="color: #3C8031;"><math>L_{\text{dep}}</math></span>, the linear combination which surfaces the greatest factor may involve just one or potentially all of those vectors, and the best approach to finding this combination is simply an automatic solver. An example of this approach is demonstrated in the [[RTT library in Wolfram Language]], here: https://github.com/cmloegcmluin/RTT/blob/main/main.m#L477
 
Another complication is that the greatest factor may be very large, or be a highly composite number. In this case, searching for the linear combination that isolates the greatest factor in its entirety directly may be intractable; it is better to eliminate it piecemeal, i.e., whenever the solver finds a factor of the greatest factor, eliminate it, and repeat until the greatest factor is fully eliminated. The RTT library code linked to above works in this way.
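
The following is a brute-force sketch of such a solver (illustrative Python with sympy, hypothetical names, not the RTT library's actual code; it repeats <code>greatest_factor</code> from the previous sketch for self-containment, and relies on the conjecture above that a suitable combination always exists). Because divisibility by a prime <math>p</math> only depends on the coefficients mod <math>p</math>, each coefficient only needs to be searched over the range 0 to <math>p-1</math>:

<syntaxhighlight lang="python">
from itertools import combinations, product
from math import gcd
from sympy import Matrix, primefactors, zeros

def greatest_factor(m):
    # repeated from the previous sketch, for self-containment
    return gcd(*[int(m[:, list(c)].det())
                 for c in combinations(range(m.cols), m.rows)])

def addabilization_defactor(m):
    """Remove the greatest factor using only linear combinations that leave
    the L_dep rows (all rows but the last) untouched."""
    m = m.copy()
    while (f := greatest_factor(m)) != 1:
        p = primefactors(f)[0]  # eliminate one prime factor at a time
        # divisibility by p depends only on the coefficients mod p,
        # so searching 0..p-1 per L_dep row is enough
        for coeffs in product(range(p), repeat=m.rows - 1):
            candidate = m[m.rows - 1, :] + sum(
                (c * m[i, :] for i, c in enumerate(coeffs)),
                zeros(1, m.cols))
            if gcd(*[int(e) for e in candidate]) % p == 0:
                m[m.rows - 1, :] = candidate / p  # divide the factor out
                break
        else:
            raise ValueError("no combination found; the conjecture failed?")
    return m

a1 = Matrix([[19, 30, 44, 53], [1, 0, -4, -13]])
print(addabilization_defactor(a1))
# Matrix([[19, 30, 44, 53], [7, 11, 16, 19]])
</syntaxhighlight>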
 
==== 4. Negation ====
 
Temperament negation is more complex with matrices, both in terms of checking for it and in terms of changing it.


For matrices, negation is accomplished by choosing a single vector and changing the sign of every entry in it. In the case of comma bases, a vector is a column, whereas in a mapping a vector (technically a row vector, or covector) is a row.


For matrices, the check for negation is related to the canonicalization of multivectors as used in exterior algebra for RTT. Essentially we take the minors of the matrix, and then look at their leading or trailing entry (leading in the case of a covariant matrix, like a mapping; trailing in the case of a contravariant matrix, like a comma basis): if this entry is positive, so is the temperament, and vice versa.
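
A sketch of both the check and the change (illustrative Python with sympy; it assumes "leading entry" means the first nonzero minor, and negates the last row, i.e. the linearly independent vector):

<syntaxhighlight lang="python">
from itertools import combinations
from sympy import Matrix

def minors(m):
    return [m[:, list(cols)].det()
            for cols in combinations(range(m.cols), m.rows)]

def is_negative(m, mapping=True):
    # leading entry for mappings, trailing entry for comma bases
    ms = minors(m) if mapping else list(reversed(minors(m)))
    leading = next(e for e in ms if e != 0)  # first nonzero entry (assumed)
    return leading < 0

def negate(m):
    """Change negation: flip the sign of a single vector (the last row here)."""
    m = m.copy()
    m[m.rows - 1, :] = -m[m.rows - 1, :]
    return m

a1 = Matrix([[19, 30, 44, 53], [7, 11, 16, 19]])
print(minors(a1))       # [-1, -4, -10, -4, -13, -12]
print(is_negative(a1))  # True
</syntaxhighlight>
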
==== 5. Entry-wise add ====

The entry-wise addition of elements works mostly the same as for vectors. But there's one catch: we only do it for the pair of <span style="color: #B6321C;">linearly independent vectors</span>. We set the <span style="color: #3C8031;"><math>L_{\text{dep}}</math></span> aside, and reintroduce it at the end.

When taking the sum, this is just for simplicity's sake. There's no sense in adding the two copies of the <span style="color: #3C8031;"><math>L_{\text{dep}}</math></span> together, as we'd just get the same vector but 2-enfactored. So we may as well set it aside, deal only with the <span style="color: #B6321C;">linearly independent vectors</span>, and put it back at the end.

When taking the difference, though, it's essential that we set the <span style="color: #3C8031;"><math>L_{\text{dep}}</math></span> aside before the entry-wise arithmetic, because if we were to subtract it from itself, we'd end up with all zeros. Unlike the case of the sum, where we'd merely end up with an enfactored version of the starting vectors, we couldn't even defactor our way back to where we started if we completely wiped out the relevant information by sending it all to zeros.

As a final step, as is always good to do when concluding temperament operations, put the result in [[canonical form]].
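
In code, this last step is small (illustrative Python with sympy, hypothetical names; canonicalization is left out — see [[canonical form]]):

<syntaxhighlight lang="python">
from sympy import Matrix

def entry_wise_arithmetic(m1, m2, n_dep, subtract=False):
    """Both matrices must share their first n_dep rows (the explicit L_dep),
    be addabilization-defactored, and agree in negation."""
    l_dep = m1[:n_dep, :]                      # set the L_dep aside
    ind = (m1[n_dep:, :] - m2[n_dep:, :] if subtract
           else m1[n_dep:, :] + m2[n_dep:, :])
    return l_dep.col_join(ind)                 # reintroduce the L_dep;
                                               # canonicalize the result afterward

a1 = Matrix([[19, 30, 44, 53], [7, 11, 16, 19]])
a2 = Matrix([[19, 30, 44, 53], [7, 11, 16, 20]])
print(entry_wise_arithmetic(a1, a2, 1))                 # sum
print(entry_wise_arithmetic(a1, a2, 1, subtract=True))  # difference
</syntaxhighlight>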


===Example===


For our example, let’s look at septimal meantone plus flattone. The canonical forms of these temperaments are {{ket|{{map|1 0 -4 -13}} {{map|0 1 4 10}}}} and {{ket|{{map|1 0 -4 17}} {{map|0 1 4 -9}}}}.
 
'''0. Counterexample.''' Before following the detailed instructions described above, let's work through a counterexample, to illustrate why we need them at all. Simple entry-wise addition of these two mapping matrices gives {{ket|{{map|2 0 -8 4}} {{map|0 2 8 1}}}}, which is not the correct answer:








And it's wrong not only because it is clearly enfactored (there's at least a factor of 2 visible in its first vector). The full explanation of why this is the wrong answer is beyond the scope of this example. However, if we now follow the instructions described above, we can find the correct answer.
 
'''1. Find the linear dependence basis.''' We know where to start: first find the <span style="color: #3C8031;"><math>L_{\text{dep}}</math></span> and put each of these two mappings into a form that includes it explicitly. In this case, their <span style="color: #3C8031;"><math>L_{\text{dep}}</math></span> consists of a single vector: <span style="color: #3C8031;">{{ket|{{map|19 30 44 53}}}}</span>.  


'''2. Reproduce the original temperament.''' The original matrices had two vectors, so as our second step we pad out these matrices by drawing vectors from the original matrices, starting with their first vectors; now we have [<span style="color: #3C8031;">{{map|19 30 44 53}}</span> {{map|1 0 -4 -13}}⟩ and [<span style="color: #3C8031;">{{map|19 30 44 53}}</span> {{map|1 0 -4 17}}⟩. We could choose any vectors from the original matrices, as long as they are <span style="color: #B6321C;">linearly independent</span> from the ones we already have; if one is not, skip it and move on. In this case the first vectors are both fine, though.








'''3. Defactor.''' Next, verify that both matrices are defactored. In this case, both matrices ''are'' enfactored, each by a factor of 30<ref>or you may prefer to think of this as three different (prime) factors: 2, 3, 5 (which multiply to 30)</ref>. So we'll use addabilization defactoring. Since there's only a single vector in the <span style="color: #3C8031;"><math>L_{\text{dep}}</math></span>, all we need to do is repeatedly add that <span style="color: #3C8031;">one linearly dependent vector</span> to the <span style="color: #B6321C;">linearly independent vector</span> until we find a vector with the target GCF, which we can then simply divide out to defactor the matrix. Specifically, we add 11 times the <span style="color: #3C8031;">linearly dependent vector</span>. For the first matrix, {{map|1 0 -4 -13}} + 11⋅<span style="color: #3C8031;">{{map|19 30 44 53}}</span> = {{map|210 330 480 570}}, whose entries have a GCF of 30, so we can defactor the matrix by dividing that vector by 30, leaving us with <span style="color: #B6321C;">{{map|7 11 16 19}}</span>. Therefore the final matrix here is [<span style="color: #3C8031;">{{map|19 30 44 53}}</span> {{map|7 11 16 19}}⟩. The other matrix happens to defactor in the same way: {{map|1 0 -4 17}} + 11⋅<span style="color: #3C8031;">{{map|19 30 44 53}}</span> = {{map|210 330 480 600}}, whose GCF is also 30, reducing to <span style="color: #B6321C;">{{map|7 11 16 20}}</span>, so the final matrix is [<span style="color: #3C8031;">{{map|19 30 44 53}}</span> <span style="color: #B6321C;">{{map|7 11 16 20}}</span>⟩.
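
A quick sanity check of this step's arithmetic (illustrative Python with sympy):

<syntaxhighlight lang="python">
from math import gcd
from sympy import Matrix

l_dep = Matrix([[19, 30, 44, 53]])
for ind in (Matrix([[1, 0, -4, -13]]), Matrix([[1, 0, -4, 17]])):
    combined = ind + 11 * l_dep
    f = gcd(*[int(e) for e in combined])
    print(list(combined), f, list(combined / f))
# [210, 330, 480, 570] 30 [7, 11, 16, 19]
# [210, 330, 480, 600] 30 [7, 11, 16, 20]
</syntaxhighlight>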


'''4. Check negativity.''' The next thing we need to do is check the negativity of these two temperaments. If either of the matrices we're performing arithmetic on is negative, then we'll have to change it (if both are negative, the effect cancels out, and we're fine as is). We check negativity by using the minors of these matrices. The first matrix's minors are (-1, -4, -10, -4, -13, -12) and the second matrix's minors are (-1, -4, 9, -4, 17, 32). What we're looking for here are their leading entries, because these are minors of a mapping (if we were looking at minors of comma bases, we'd be looking at the trailing entries instead). Specifically, we're looking to see if the leading entries are positive. They're not, which tells us these matrices are both negative! But again, since they're ''both'' negative, the effect cancels out; no need to change anything (if we wanted to, we could just take the <span style="color: #B6321C;">linearly independent vector</span> of each matrix and negate every entry in it).


'''5. Add.''' Now the matrices are ready to add:
 








We set the <span style="color: #3C8031;"><math>L_{\text{dep}}</math></span> aside, and deal only with the <span style="color: #B6321C;">linearly independent vectors</span>:




so we can now see that meantone plus flattone is [[godzilla]].


As long as we've done all this work to set these matrices up for arithmetic, let's check their difference as well. Again, set aside the <span style="color: #3C8031;"><math>L_{\text{dep}}</math></span>, and just entry-wise subtract the two <span style="color: #B6321C;">linearly independent vectors</span>:








And so, reintroducing the <span style="color: #3C8031;"><math>L_{\text{dep}}</math></span>, we have:








And so we can see that meantone minus flattone is [[meanmag]].
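
As a final check, the whole example can be verified end to end (illustrative Python with sympy). Two mappings represent the same temperament exactly when their row spaces agree, so that is all we test; the godzilla and meanmag mappings below are assumed canonical forms supplied for comparison, not quoted from their pages:

<syntaxhighlight lang="python">
from sympy import Matrix

def same_temperament(x, y):
    # same row space <=> stacking the two matrices adds no new rank
    return x.col_join(y).rank() == x.rank() == y.rank()

# the addabilized, defactored forms from the worked example
a1 = Matrix([[19, 30, 44, 53], [7, 11, 16, 19]])  # meantone
a2 = Matrix([[19, 30, 44, 53], [7, 11, 16, 20]])  # flattone

tsum = Matrix([[19, 30, 44, 53], [14, 22, 32, 39]])  # independent vectors added
tdiff = Matrix([[19, 30, 44, 53], [0, 0, 0, -1]])    # independent vectors subtracted

godzilla = Matrix([[1, 0, -4, 2], [0, 2, 8, 1]])     # assumed canonical mapping
meanmag = Matrix([[19, 30, 44, 0], [0, 0, 0, 1]])    # assumed canonical mapping

print(same_temperament(a1, Matrix([[1, 0, -4, -13], [0, 1, 4, 10]])))  # True
print(same_temperament(a2, Matrix([[1, 0, -4, 17], [0, 1, 4, -9]])))   # True
print(same_temperament(tsum, godzilla))   # True
print(same_temperament(tdiff, meanmag))   # True
</syntaxhighlight>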
 


= References =