Temperament addition

For <math>g_{\text{min}}>1</math> temperaments, temperament arithmetic gets a little trickier. This is discussed in the [[Temperament_arithmetic#Beyond_.5Bmath.5D.5Cmin.28g.29.3D1.5B.2Fmath.5D|beyond <math>g_{\text{min}}=1</math> section]] later.


=Visualizing temperament arithmetic=


[[File:Sum diff and wedge.png|thumb|right|400px|A and B are vectors representing temperaments. They could be maps or prime count vectors. A∧B is their wedge product and gives a higher-[[grade]] temperament that [[merge]]s (sometimes called "meets" or "joins") both A and B. A+B and A-B give the sum and difference, respectively.]]


==Versus the wedge product==


If the [[wedge product]] of two vectors represents the directed area of a parallelogram constructed with the vectors as its sides, then the temperament sum and difference are the vectors that connect the diagonals of this parallelogram.
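To make the diagram concrete, here is a minimal sketch in Python, using two hypothetical two-entry vectors so that the wedge product reduces to a single number, the parallelogram's signed area:

<syntaxhighlight lang="python">
# Two hypothetical vectors standing in for temperaments A and B.
A = [3, 1]
B = [1, 2]

# The diagonals of the parallelogram with sides A and B:
vector_sum = [a + b for a, b in zip(A, B)]   # A+B -> [4, 3]
vector_diff = [a - b for a, b in zip(A, B)]  # A-B -> [2, -1]

# In two dimensions the wedge product A∧B is just the 2x2
# determinant, i.e. the parallelogram's signed area:
wedge = A[0] * B[1] - A[1] * B[0]            # -> 5
</syntaxhighlight>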


==Tuning and tone space==


One way we can visualize temperament arithmetic is on [[projective tuning space]].
Note that when viewed in tuning space, the sum is found between the two input temperaments, and the difference is found on the outside of them, to one side or the other. In tone space it's the other way around: the difference is found between the two input temperaments, and the sum is found outside. In either situation, when a temperament is on the outside and may be on one side or the other, the explanation can be inferred from the behavior of the scale tree on any temperament line: if, say, 5-ET and 7-ET support an <math>r=2</math> temperament, then so will 5 + 7 = 12-ET, and then so will 5 + 12 = 17-ET and 7 + 12 = 19-ET in turn, and so on recursively. When you navigate like this, going what we could call ''down'' the scale tree, children are always found between their parents. But when you try to go back ''up'' the scale tree, to one or the other parent, you may not immediately know on which side of the child to look.
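For example, 5-ET and 7-ET both support meantone, and (using their 5-limit [[patent val]]s) their maps sum entry-wise to 12-ET's: {{map|5 8 12}} + {{map|7 11 16}} = {{map|12 19 28}}.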


=Conditions on temperament arithmetic=


Temperament arithmetic is only possible for temperaments with the same [[dimensions]], that is, the same [[rank]] and [[dimensionality]] (and therefore, by the [[rank-nullity theorem]], also the same [[nullity]]). The reason for this is visually obvious: without the same <math>d</math>, <math>r</math>, and <math>n</math> (dimensionality, rank, and nullity, respectively), the numeric representations of the temperaments — such as matrices and multivectors — will not have the same proportions, and so their entries cannot be matched up one-to-one. From this condition it also follows that the result of temperament arithmetic is a new temperament with the same <math>d</math>, <math>r</math>, and <math>n</math> as the input temperaments.
Matching the dimensions is only the first of two conditions on the possibility of temperament arithmetic. The second condition is that the temperaments must all be '''addable'''. This condition is trickier, so a detailed discussion of it is deferred to a later section (here: [[Temperament arithmetic#Addability]]). But we can at least say here that any set of <math>g_{\text{min}}=1</math> temperaments is addable<ref>or they are all the same temperament, in which case they <span style="color: #3C8031;">share all the same basis vectors and could perhaps be said to be ''completely'' linearly dependent.</span></ref>, fortunately, so we don't need to worry about it in that case.
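Here is a minimal sketch of this first condition as a check on mapping matrices (the helper name is our own; the 7-limit mappings are meantone's and flattone's from the example later on this page, and the 5-limit one is porcupine's):

<syntaxhighlight lang="python">
def same_dimensions(mapping_a, mapping_b):
    """First condition on temperament arithmetic: same rank r (row
    count) and same dimensionality d (column count). By rank-nullity,
    the nullity n = d - r then matches automatically as well."""
    r_a, d_a = len(mapping_a), len(mapping_a[0])
    r_b, d_b = len(mapping_b), len(mapping_b[0])
    return (r_a, d_a) == (r_b, d_b)

meantone = [[1, 0, -4, -13], [0, 1, 4, 10]]
flattone = [[1, 0, -4, 17], [0, 1, 4, -9]]
porcupine = [[1, 2, 3], [0, 3, 5]]

print(same_dimensions(meantone, flattone))   # True: both d=4, r=2
print(same_dimensions(meantone, porcupine))  # False: different d
</syntaxhighlight>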


=Versus meet and join=


Like [[meet and join]], temperament arithmetic takes temperaments as inputs and finds a new temperament sharing properties of the inputs. And they both can be understood as, in some sense, ''adding'' these input temperaments together.
But there is a big difference between temperament arithmetic and meet/join. Temperament arithmetic is done using ''entry-wise'' addition (or subtraction), whereas meet/join are done using ''concatenation''. So the temperament sum of mappings with two rows each is a new mapping that still has exactly two rows, while, on the other hand, the join of mappings with two rows each is a new mapping that has a total of four rows<ref>At least, this mapping would have a total of four rows before it is reduced. After reduction, it may end up with only three (or two, if you joined a temperament with itself for some reason).</ref>.
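A minimal sketch of the contrast, using the septimal meantone and flattone mappings from the worked example later on this page (raw results, before any reduction or canonicalization):

<syntaxhighlight lang="python">
meantone = [[1, 0, -4, -13], [0, 1, 4, 10]]
flattone = [[1, 0, -4, 17], [0, 1, 4, -9]]

# Temperament arithmetic: entry-wise addition; still two rows.
entrywise_sum = [[a + b for a, b in zip(row_a, row_b)]
                 for row_a, row_b in zip(meantone, flattone)]
print(entrywise_sum)  # [[2, 0, -8, 4], [0, 2, 8, 1]]

# Join: concatenation; four rows (before reduction).
join = meantone + flattone
print(len(join))  # 4
</syntaxhighlight>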


==The linear dependence connection==


Another connection between temperament arithmetic and meet/join is that they ''may'' involve checks for linear dependence.
Meet and join do not ''necessarily'' involve linear dependence. Linear dependence only matters for meet and join when you attempt to do them using ''exterior'' algebra, that is, by using the wedge product, rather than the ''linear'' algebra approach, which is simply to concatenate the vectors as a matrix and reduce. For more information on this, see [[Douglas Blumeyer and Dave Keenan's Intro to exterior algebra for RTT#The linearly dependent exception to the wedge product]].


=Negation=


[[File:Very simple illustration of temperament sum vs diff.png|500px|thumb|left|Equivalences of temperament arithmetic depending on negativity. ]]
The check is related to the canonicalization of varianced multivectors as used in exterior algebra for RTT. Essentially, we take the minors of the matrix and then look at their leading or trailing entry (leading in the case of a covariant matrix, like a mapping; trailing in the case of a contravariant matrix, like a comma basis): if this entry is positive, so is the temperament, and vice versa.
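Here is a minimal sketch of that check for a rank-2 mapping; the function name is our own, and the 2×2 minors are taken in lexicographic column order:

<syntaxhighlight lang="python">
from itertools import combinations

def is_positive(mapping):
    """Negativity check for a rank-2 mapping: compute its 2x2 minors
    (lexicographic column order) and test the sign of the leading
    entry. For a comma basis, test the trailing entry instead."""
    row1, row2 = mapping
    minors = [row1[i] * row2[j] - row1[j] * row2[i]
              for i, j in combinations(range(len(row1)), 2)]
    return minors[0] > 0

meantone_5_limit = [[1, 0, -4], [0, 1, 4]]
print(is_positive(meantone_5_limit))  # True: minors are (1, 4, 4)
</syntaxhighlight>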


=Beyond <math>g_{\text{min}}=1</math>=


As stated above, temperament arithmetic is simplest for temperaments which can be represented by single vectors, or in other words, temperaments that are <math>g_{\text{min}}=1</math>, and for other temperaments, the computation gets a little trickier. Here we'll look at how to handle it.


==Addability==
On the other hand, it does make sense to speak of the <span style="color: #B6321C;">'''linear-independence'''</span> of the temperament as an integer count. This is because the count of <span style="color: #B6321C;">linearly independent</span> basis vectors of two temperaments' mappings and the count of <span style="color: #B6321C;">linearly independent</span> basis vectors of their comma bases will always be the same. So the temperament <span style="color: #B6321C;">linear-independence</span> is simply this number. In the <math>d=5</math>, <math>r=2</math> example from the previous paragraph, these would be <span style="color: #B6321C;"><math>l_{\text{ind}}=1</math></span> (read <span style="color: #B6321C;">"linear-independence-1"</span>) temperaments.
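A minimal sketch of computing this count from the mappings, using the SymPy library (the helper is our own; it uses the fact that the rows of both mappings together span a space of dimension <math>r + l_{\text{ind}}</math>):

<syntaxhighlight lang="python">
from sympy import Matrix

def linear_independence(mapping_a, mapping_b):
    """l_ind: the count of basis vectors NOT shared by the two
    temperaments. Stacking the two mappings spans r + l_ind
    dimensions, so subtract the rank r of one mapping."""
    stacked = Matrix(mapping_a).col_join(Matrix(mapping_b))
    return stacked.rank() - Matrix(mapping_a).rank()

meantone = [[1, 0, -4, -13], [0, 1, 4, 10]]
flattone = [[1, 0, -4, 17], [0, 1, 4, -9]]
print(linear_independence(meantone, flattone))  # 1: addable
</syntaxhighlight>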


A proof of this conjecture is given here: [[Temperament arithmetic#Sintel's proof of the linear-independence conjecture]].


====4. <span style="color: #B6321C;">Linear independence</span> between temperaments by only one basis vector (addability)====


Two temperaments are addable if they are <span style="color: #B6321C;"><math>l_{\text{ind}}=1</math></span>. In other words, both their mappings and their comma bases <span style="color: #3C8031;">share</span> all but one basis vector.
(WIP)


===Sintel's proof of the linear-independence conjecture===


If <math>A</math> and <math>B</math> are mappings from <math>\mathbb{Z}^n</math> to <math>\mathbb{Z}^m</math>, with <math>n > m</math> and <math>A</math>, <math>B</math> full-rank (we'll use <math>A</math> and <math>B</math> to refer equivalently to their row spaces):
m = m


==Multivector approach==
The simplest approach is to use multivectors. This is discussed in more detail here: [[Douglas Blumeyer and Dave Keenan's Intro to exterior algebra for RTT#Temperament arithmetic]].
==Matrix approach==
Temperament arithmetic for temperaments with both <math>r>1</math> and <math>n>1</math> can also be done using matrices, but it's significantly more involved than it is with multivectors. It works in essentially the same way — entry-wise addition or subtraction — but for matrices, it is necessary to make explicit <span style="color: #3C8031;">the basis for the linearly dependent vectors shared</span> between the involved matrices before performing the arithmetic. In other words, any vectors that can be found through linear combinations of any of the involved matrices' basis vectors must appear explicitly, and in the same position, in each matrix before the sum or difference is taken. These vectors are called the <span style="color: #3C8031;">linear-dependence basis, or <math>L_{\text{dep}}</math></span>. But it is not as simple as determining <span style="color: #3C8031;"><math>L_{\text{dep}}</math></span> (using the technique described [[Linear_dependence#For_a_given_set_of_matrices.2C_how_to_compute_a_basis_for_their_linearly_dependent_vectors|here]]) and then supplying the remaining vectors necessary to match the grade of the original matrix, because the results may then be [[enfactored]]. And defactoring them without compromising the explicit <span style="color: #3C8031;"><math>L_{\text{dep}}</math></span> cannot be done using existing [[defactoring algorithms]]; it's a tricky, or at least computationally intensive, process.
Throughout this section, we will be using <span style="color: #3C8031;">a green color on linearly dependent objects and values</span>, and <span style="color: #B6321C;">a red color on linearly independent objects and values</span>, to help differentiate between the two.
===Example===
For example, let's look at septimal meantone plus flattone. The [[canonical form]]s of these temperaments are {{ket|{{map|1 0 -4 -13}} {{map|0 1 4 10}}}} and {{ket|{{map|1 0 -4 17}} {{map|0 1 4 -9}}}}, respectively. Simple entry-wise addition of these two mapping matrices gives {{ket|{{map|2 0 -8 4}} {{map|0 2 8 1}}}}, which is not the correct answer:
<math>\left[ \begin{array} {rrrr}
1 & 0 & -4 & -13 \\
0 & 1 & 4 & 10 \\
\end{array} \right]</math>
+
<math>\left[ \begin{array} {rrrr}
1 & 0 & -4 & 17 \\
0 & 1 & 4 & -9 \\
\end{array} \right]</math>
=
<math>\left[ \begin{array} {rrrr}
2 & 0 & -8 & 4 \\
0 & 2 & 8 & 1 \\
\end{array} \right]</math>
And not only because it is enfactored; the full explanation of why it's the wrong answer is beyond the scope of this example. However, if we put each of these two mappings into a form that includes their <span style="color: #3C8031;"><math>L_{\text{dep}}</math></span> explicitly, the arithmetic will work out correctly.
In this case, their <span style="color: #3C8031;"><math>L_{\text{dep}}</math></span> consists of a single vector: <span style="color: #3C8031;">{{ket|{{map|19 30 44 53}}}}</span>. The original matrices had two vectors each, so as a next step we pad out these matrices by drawing vectors from the original matrices, starting with their first vectors; now we have [<span style="color: #3C8031;">{{map|19 30 44 53}}</span> {{map|1 0 -4 -13}}⟩ and [<span style="color: #3C8031;">{{map|19 30 44 53}}</span> {{map|1 0 -4 17}}⟩. We could choose any vectors from the original matrices, as long as they are linearly independent from the ones we already have; if one is not, skip it and move on (otherwise we'd produce a [[rank-deficient]] matrix that no longer represents the temperament we started with). In this case, the first vectors are both fine.
<math>\left[ \begin{array} {rrrr}
\color{OliveGreen}19 & \color{OliveGreen}30 & \color{OliveGreen}44 & \color{OliveGreen}53 \\
1 & 0 & -4 & -13 \\
\end{array} \right]</math>
+
<math>\left[ \begin{array} {rrrr}
\color{OliveGreen}19 & \color{OliveGreen}30 & \color{OliveGreen}44 & \color{OliveGreen}53 \\
1 & 0 & -4 & 17 \\
\end{array} \right]</math>
All we have to do now, before performing the entry-wise addition, is verify that both matrices are defactored. The best way to do this is inspired by [[Pernet-Stein defactoring]]: we follow that algorithm until the point where we have a square transformation matrix, but instead of inverting it and multiplying by it to ''remove'' the common factor, we simply take this square matrix's determinant, which ''is'' the factor we were about to remove. If that determinant is 1, then we're already defactored; if not, then we need to do some additional steps. In this case, both matrices ''are'' enfactored, each by a factor of 30<ref>or you may prefer to think of this as three different (prime) factors: 2, 3, and 5 (which multiply to 30)</ref>.
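As a sanity check, here is a minimal sketch of an equivalent test (our own formulation, not the Pernet-Stein routine itself): for these rank-2 matrices, the enfactoring factor shows up as the GCD of the 2×2 minors.

<syntaxhighlight lang="python">
from itertools import combinations
from functools import reduce
from math import gcd

def enfactoring_factor(mapping):
    """GCD of the 2x2 minors of a rank-2 mapping: 1 if the matrix is
    defactored, otherwise the factor that needs to be divided out."""
    row1, row2 = mapping
    minors = [row1[i] * row2[j] - row1[j] * row2[i]
              for i, j in combinations(range(len(row1)), 2)]
    return reduce(gcd, minors)

print(enfactoring_factor([[19, 30, 44, 53], [1, 0, -4, -13]]))  # 30
print(enfactoring_factor([[19, 30, 44, 53], [1, 0, -4, 17]]))   # 30
</syntaxhighlight>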
Our first thought may be simply to defactor these matrices, then. The problem with that is that most established defactoring algorithms will alter the first vector so that it's no longer <span style="color: #3C8031;">{{map|19 30 44 53}}</span>, in which case we won't be able to do temperament arithmetic with the matrices anymore, which is our goal. And we can't defactor and then paste <span style="color: #3C8031;">{{map|19 30 44 53}}</span> back over the first vector or something, because then we might just be enfactored again! We need a defactoring algorithm that manages to work without altering any of the vectors in the <span style="color: #3C8031;"><math>L_{\text{dep}}</math></span>.
It turns out that you can always isolate the enfactoring factor in the single final vector of the matrix (the <span style="color: #B6321C;">linearly independent vector</span>) through linear combinations of the vectors in the <span style="color: #3C8031;"><math>L_{\text{dep}}</math></span>. In this case, since there's only a single vector in the <span style="color: #3C8031;"><math>L_{\text{dep}}</math></span>, all we need to do is repeatedly add that <span style="color: #3C8031;">one linearly dependent vector</span> to the <span style="color: #B6321C;">linearly independent vector</span> until we find a vector whose entries' GCD is the enfactoring factor, which we can then simply divide out to defactor the matrix.
In this case, we can accomplish this by adding 11 times the first vector. For the first matrix, {{map|1 0 -4 -13}} + 11⋅<span style="color: #3C8031;">{{map|19 30 44 53}}</span> = {{map|210 330 480 570}}, whose entries have a GCD of 30, so we can defactor the matrix by dividing that vector by 30, leaving us with <span style="color: #B6321C;">{{map|7 11 16 19}}</span>. The final matrix here is therefore [<span style="color: #3C8031;">{{map|19 30 44 53}}</span> <span style="color: #B6321C;">{{map|7 11 16 19}}</span>⟩. The other matrix happens to defactor in the same way: {{map|1 0 -4 17}} + 11⋅<span style="color: #3C8031;">{{map|19 30 44 53}}</span> = {{map|210 330 480 600}}, whose GCD is also 30, reducing to <span style="color: #B6321C;">{{map|7 11 16 20}}</span>, so the final matrix is [<span style="color: #3C8031;">{{map|19 30 44 53}}</span> <span style="color: #B6321C;">{{map|7 11 16 20}}</span>⟩.
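A minimal sketch of this search (the helper is our own; it tries successive multiples of the <span style="color: #3C8031;">linearly dependent vector</span> until the target GCD appears, then divides it out):

<syntaxhighlight lang="python">
from functools import reduce
from math import gcd

def defactor_final_vector(ind_vector, dep_vector, factor):
    """Add k copies of the linearly dependent vector to the linearly
    independent one until the entries' GCD equals the enfactoring
    factor, then divide that factor out."""
    for k in range(factor + 1):
        candidate = [v + k * d for v, d in zip(ind_vector, dep_vector)]
        if reduce(gcd, candidate) == factor:
            return [v // factor for v in candidate]
    raise ValueError("no suitable multiple found")

dep = [19, 30, 44, 53]
print(defactor_final_vector([1, 0, -4, -13], dep, 30))  # [7, 11, 16, 19]
print(defactor_final_vector([1, 0, -4, 17], dep, 30))   # [7, 11, 16, 20]
</syntaxhighlight>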
Now the matrices are ready to add:
<math>\left[ \begin{array} {rrrr}
\color{OliveGreen}19 & \color{OliveGreen}30 & \color{OliveGreen}44 & \color{OliveGreen}53 \\
\color{BrickRed}7 & \color{BrickRed}11 & \color{BrickRed}16 & \color{BrickRed}19 \\
\end{array} \right]</math>
+
<math>\left[ \begin{array} {rrrr}
\color{OliveGreen}19 & \color{OliveGreen}30 & \color{OliveGreen}44 & \color{OliveGreen}53 \\
\color{BrickRed}7 & \color{BrickRed}11 & \color{BrickRed}16 & \color{BrickRed}20 \\
\end{array} \right]</math>
Clearly, though, there's no sense in adding the two copies of the top vector, the <span style="color: #3C8031;"><math>L_{\text{dep}}</math></span>, together: we'd just get the same vector, but 2-enfactored. So we may as well set the <span style="color: #3C8031;"><math>L_{\text{dep}}</math></span> aside, and deal only with the <span style="color: #B6321C;">linearly independent vectors</span>:
<math>\left[ \begin{array} {rrrr}
\color{BrickRed}7 & \color{BrickRed}11 & \color{BrickRed}16 & \color{BrickRed}19 \\
\end{array} \right]</math>
+
<math>\left[ \begin{array} {rrrr}
\color{BrickRed}7 & \color{BrickRed}11 & \color{BrickRed}16 & \color{BrickRed}20 \\
\end{array} \right]</math>
=
<math>\left[ \begin{array} {rrrr}
\color{BrickRed}14 & \color{BrickRed}22 & \color{BrickRed}32 & \color{BrickRed}39 \\
\end{array} \right]</math>
Then we can reintroduce the <span style="color: #3C8031;"><math>L_{\text{dep}}</math></span> afterwards:
<math>\left[ \begin{array} {rrrr}
\color{OliveGreen}19 & \color{OliveGreen}30 & \color{OliveGreen}44 & \color{OliveGreen}53 \\
\color{BrickRed}14 & \color{BrickRed}22 & \color{BrickRed}32 & \color{BrickRed}39 \\
\end{array} \right]</math>
And finally [[canonical form|canonicalize]]:
<math>\left[ \begin{array} {rrrr}
1 & 0 & -4 & 2 \\
0 & 2 & 8 & 1 \\
\end{array} \right]</math>
So we can now see that meantone plus flattone is [[godzilla]].
As long as we've done all this work to set these matrices up for arithmetic, let's check their difference as well. In the case of the difference, it's even more essential that we set the <span style="color: #3C8031;"><math>L_{\text{dep}}</math></span> aside before the entry-wise arithmetic, because if we were to subtract it from itself, we'd end up with all zeros. In the case of the sum we'd merely end up with an enfactored version of the starting vectors, but here we couldn't even defactor our way back to where we started, because we'd have completely wiped out the relevant information by sending it all to zeros. So let's entry-wise subtract just the two <span style="color: #B6321C;">linearly independent vectors</span>:
<math>\left[ \begin{array} {rrrr}
\color{BrickRed}7 & \color{BrickRed}11 & \color{BrickRed}16 & \color{BrickRed}19 \\
\end{array} \right]</math>
-
<math>\left[ \begin{array} {rrrr}
\color{BrickRed}7 & \color{BrickRed}11 & \color{BrickRed}16 & \color{BrickRed}20 \\
\end{array} \right]</math>
=
<math>\left[ \begin{array} {rrrr}
\color{BrickRed}0 & \color{BrickRed}0 & \color{BrickRed}0 & \color{BrickRed}-1 \\
\end{array} \right]</math>
And so, reintroducing the linear-dependence basis, we have:
<math>\left[ \begin{array} {rrrr}
\color{OliveGreen}19 & \color{OliveGreen}30 & \color{OliveGreen}44 & \color{OliveGreen}53 \\
\color{BrickRed}0 & \color{BrickRed}0 & \color{BrickRed}0 & \color{BrickRed}-1 \\
\end{array} \right]</math>
Which canonicalizes to:
<math>\left[ \begin{array} {rrrr}
\color{OliveGreen}19 & \color{OliveGreen}30 & \color{OliveGreen}44 & \color{OliveGreen}53 \\
\color{BrickRed}0 & \color{BrickRed}0 & \color{BrickRed}0 & \color{BrickRed}1 \\
\end{array} \right]</math>
(almost the same thing, just with a 1 instead of a -1 in the second vector).
But the last thing we need to do is check the negativity of these two temperaments, so we can figure out which of these two results is truly the sum and which is truly the difference. If one of the matrices we performed arithmetic on was actually negative, then we have our results backwards (if both are negative, the problem cancels itself out, and we're back to being right).
We check negativity by using the minors of these matrices. The first matrix's minors are (-1, -4, -10, -4, -13, -12) and the second matrix's minors are (-1, -4, 9, -4, 17, 32). What we're looking at here are their leading entries, because these are minors of mappings (if we were looking at minors of comma bases, we'd look at the trailing entries instead). Specifically, we're checking whether the leading entries are positive. They're not, which tells us that these matrices, as we performed arithmetic on them, were both negative! But since they were ''both'' negative, the effect cancels out, and so the sum we computed is indeed the sum, and the difference is indeed the difference.
===Example with multiple vectors in the <span style="color: #3C8031;"><math>L_{\text{dep}}</math></span>===
(Examples WIP)
= Applications =
The temperament that results from summing or taking the difference of two temperaments, as stated above, has similar properties to the original two temperaments. These properties are reportedly discussed in terms of "Fokker groups" on the page [[Fokker block]].
= References =


<references />