Temperament addition: Difference between revisions

Cmloegcmluin (talk | contribs)

All we have to do now before performing the entry-wise addition is verify that both matrices are defactored. The best way to do this is inspired by [[Pernet-Stein defactoring]]: we find the value of the enfactoring factor (the "greatest factor") by following that algorithm up to the point where we have a square transformation matrix, but instead of inverting it and multiplying by it to ''remove'' the enfactoring, we simply take this square matrix's determinant, which is the factor we were about to remove. If that determinant is 1, then we're already defactored; if not, then we need to take some additional steps. In this case, both matrices ''are'' enfactored, each by a factor of 30<ref>Or you may prefer to think of this as three distinct prime factors: 2, 3, and 5 (which multiply to 30).</ref>.
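As a sketch of this check (in Python, not the Wolfram implementation), we can use an equivalent characterization of the greatest factor: it is the GCD of all of the mapping's largest (rank-sized) minors, which matches the determinant of the square transformation matrix described above. The second matrix below is a hypothetical 30-enfactored example pairing the <span style="color: #3C8031;">{{map|19 30 44 53}}</span> map with an altered second row; it is not the article's exact matrix.

```python
from itertools import combinations
from math import gcd

def det(m):
    # Laplace expansion; fine for the small matrices used in RTT examples
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def greatest_factor(mapping):
    # The enfactoring ("greatest") factor is the GCD of all r-by-r minors,
    # where r is the number of rows of the mapping
    r, d = len(mapping), len(mapping[0])
    g = 0
    for cols in combinations(range(d), r):
        sub = [[row[c] for c in cols] for row in mapping]
        g = gcd(g, abs(det(sub)))
    return g

# A 2-enfactored meantone-like mapping: the first row has been doubled
print(greatest_factor([[2, 0, -8], [0, 1, 4]]))  # → 2
# Hypothetical 30-enfactored matrix built on the 19-EDO map
print(greatest_factor([[19, 30, 44, 53], [49, 30, 44, 53]]))  # → 30
```

A result of 1 means the matrix is already defactored; anything larger is the factor we would need to remove.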


Our first thought may be to simply defactor these matrices, then. The problem with that is that most established defactoring algorithms will alter the first vector so that it's no longer <span style="color: #3C8031;">{{map|19 30 44 53}}</span>, in which case we won't be able to do temperament arithmetic with the matrices anymore, which is our goal. And we can't defactor and then paste <span style="color: #3C8031;">{{map|19 30 44 53}}</span> back over the first vector or something, because then we might just be enfactored again! We need to find a defactoring algorithm that manages to work without altering any of the vectors in the <span style="color: #3C8031;"><math>L_{\text{dep}}</math></span>.


We check negativity by using the minors of these matrices. The first matrix's minors are (-1, -4, -10, -4, -13, -12) and the second matrix's minors are (-1, -4, 9, -4, 17, 32). What we're looking for here are their leading entries, because these are minors of a mapping (if we were looking at minors of comma bases, we'd be looking at the trailing entries instead). Specifically, we're looking to see if the leading entries are positive. They're not, which tells us these matrices, as we performed arithmetic on them, were both negative! But again, since they were ''both'' negative, the effect cancels out, and so the sum we computed is indeed the sum, and the difference was indeed the difference.
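This negativity check can be sketched as follows (with hypothetical matrices, since the mappings behind the minor lists above aren't reproduced in this excerpt):

```python
from itertools import combinations

def det(m):
    # Laplace expansion; fine for small RTT matrices
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def minors(matrix):
    # All r-by-r minors, ordered by column combination
    r, d = len(matrix), len(matrix[0])
    return [det([[row[c] for c in cols] for row in matrix])
            for cols in combinations(range(d), r)]

def is_negative_mapping(mapping):
    # For a mapping, check the sign of the leading minor (a fuller
    # implementation would skip any leading zeros first); for a comma
    # basis one would check minors(basis)[-1], the trailing entry
    return minors(mapping)[0] < 0

print(is_negative_mapping([[1, 0, -4], [0, 1, 4]]))    # → False
print(is_negative_mapping([[1, 0, -4], [0, -1, -4]]))  # → True
```

A negative result on both operands cancels out, as described above; a negative result on just one would require negating it before trusting the sum and difference.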
===== Addabilization defactoring complications =====
This case was as simple as it can get: we simply needed to add some multiple of the single linearly dependent vector to the linearly independent vector. However, if there are multiple vectors in the <span style="color: #3C8031;"><math>L_{\text{dep}}</math></span>, the linear combination which surfaces the greatest factor may involve just one or potentially all of those vectors, and the best approach to finding this combination is simply to use an automatic linear solver. An example of this approach is demonstrated in the [[RTT library in Wolfram Language]], here: https://github.com/cmloegcmluin/RTT/blob/main/main.m#L477

Another complication is that the greatest factor may be very large, or a highly composite number. In such cases, directly searching for the linear combination that isolates the greatest factor in its entirety may be intractable; it is better to eliminate it piecemeal, i.e., whenever the solver finds a factor of the greatest factor, eliminate it, and repeat until the greatest factor is fully eliminated. The example linked above also does this.
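The search described above can be sketched with a brute-force solver (in Python rather than Wolfram; the matrices and function names are illustrative, not taken from the linked library). It looks for integer coefficients on the <math>L_{\text{dep}}</math> vectors such that adding that combination to the linearly independent vector makes it divisible entry-wise by the factor, which can then be divided out without touching any <math>L_{\text{dep}}</math> vector. The piecemeal variant removes one prime at a time, keeping each search small.

```python
from itertools import product

def addabilize(dep_rows, v, factor):
    # Search coefficients 0..factor-1 for each L_dep row so that v plus
    # that combination is divisible entry-wise by `factor`, then divide
    # it out -- leaving every L_dep row untouched
    for coeffs in product(range(factor), repeat=len(dep_rows)):
        w = [v[j] + sum(c * row[j] for c, row in zip(coeffs, dep_rows))
             for j in range(len(v))]
        if all(x % factor == 0 for x in w):
            return [x // factor for x in w]
    return None  # `factor` was not actually a factor of the matrix

def defactor_piecemeal(dep_rows, v, factor):
    # Eliminate the greatest factor prime by prime rather than all at once
    p = 2
    while factor > 1:
        while factor % p == 0:
            v = addabilize(dep_rows, v, p)
            factor //= p
        p += 1
    return v

# Hypothetical 30-enfactored case: L_dep holds the 19-EDO map,
# v is an altered second row (not the article's exact matrix)
print(addabilize([[19, 30, 44, 53]], [49, 30, 44, 53], 30))  # → [20, 30, 44, 53]
```

The all-at-once and piecemeal routes land on the same defactored vector here; the point of the piecemeal route is that each individual search only ranges over a single prime's worth of coefficients.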


===Proof that addabilization defactoring is always possible===