Defactoring

Due to complications associated with enfactored matrices which we'll get into later in this article, we discourage treating them as representations of true temperaments.<ref>As Graham Breed writes [http://x31eq.com/temper/method.html here], "Whether temperaments with contorsion should even be thought of as temperaments is a matter of debate."</ref> Instead we recommend that they be considered to represent mere "temperoids": temperament-like structures.


= More specific definition =


A mapping is enfactored if linear combinations of its rows can produce another row whose elements have a common factor (other than 1).<ref>The coefficients used in such a linear combination (the counts of each row being combined) must not themselves share a common factor (again, other than 1), or else the combination itself would introduce enfactoring.</ref>
This definition includes mappings whose rows themselves include a common factor, such as {{ket|{{map|24 38 56}}}}, which already has a clearly visible common factor of 2.  
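
For a mapping with a single row like this one, the check is as simple as taking the GCD of the row's entries. A quick illustration in Wolfram Language (the language used for code examples later on this page):

<pre>
(* the visible common factor of a single row is just the GCD of its entries *)
GCD @@ {24, 38, 56}   (* 2 *)
</pre>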


= Motivation =


A major use case for defactoring is to enable a [[canonical form]] for temperament mappings, or in other words, to achieve for the linear-algebra-only school of RTT practitioners a unique ID for temperaments. Previously this was only available by using lists of minor determinants, AKA the wedge product of the mapping's rows, which, by virtue of reducing the information down to a single list of numbers, could be checked for enfactoring simply by taking that list's GCD. For more information on this historical situation, see [[Varianced Exterior Algebra#lack of importance to RTT]], and for more information on the canonical form developed, see [[defactored Hermite form]].


= Terminology change proposal =


If you've studied RTT extensively, you've probably encountered the terms [[Saturation|"saturated" and "contorted"]] that are sometimes used to describe mappings. These two terms each have several flaws, and so this article presents alternative terms that are clearer and more descriptive: "defactored" and "enfactored", respectively. These new terms were coined by [[Dave Keenan]] in collaboration with [[Douglas Blumeyer]] in June of 2021<ref>Many, many other terms were considered before arriving at defactored and enfactored, including but not limited to: repeated, (up/down)sampled, decimated, divided/multiplied, divisive/multiplicative, completed/depleted/repleted/pleated, efficient, brown, dry, spongy/holey, fluffy, corrugated, copied, shredded, tupled, tupletted, enphactored (where the ph stood for possibly hidden)...</ref>.  


== Defactored, to replace saturated ==


Several concerns with the term "saturation" may be identified:
# Furthermore, there is another common but conflicting sense of saturation for matrices which clamps entry values to between -1 and 1<ref>See https://math.stackexchange.com/questions/1964814/linear-transformation-of-a-saturated-vector and https://faculty.uml.edu//thu/tcs01-june.pdf</ref>.  


== Enfactored, to replace contorted ==


As for the term "contorsion", the concerns with it are:
# Due to its similarity with the word "contortion", the word contorsion evokes bending, twisting, and knotting. But there is nothing bendy, twisty, or knotted about the effect it has on JI lattices or tuning space.


== Conclusion ==


In accordance with this research and reasoning, this article henceforth will eschew the terms saturation and contorsion in favor of defactored and enfactored.<ref>I am starting a conversation about how to relate this page to the existing page for saturation and contortion on its discussion page, here: https://en.xen.wiki/w/Talk:Saturation</ref>


= The pathology of enfactoredness =


In this section, we will use lattices to visualize enfactored temperaments, to demonstrate the musical implications of mappings with common factors, and the lack of musical implications of comma-bases with common factors.


== Defactored case ==


[[File:Unenfactored mapping.png|365px|thumb|right|A 3-limit tempered lattice, superimposed on the JI lattice]]
All of this so far is actually only just explaining the basic setup for any tempered lattice. But we've got to lay the basics down first in order to discuss the effect of enfactoring. We'll do that now!


== Enfactored mapping ==


[[File:2-enfactored mapping.png|365px|thumb|right|A 2-enfactored mapping represents a temperoid for which every other step of its generator lands on a pitch which no JI interval would ever temper to.]]
And so this 4-ET doesn't bring anything to the table that isn't already brought by 2-ET, and it is therefore fitting to consider it only a temperoid, rather than a true temperament. Were this as bad as things got, it might not be worth pushing to distinguish temperoids from temperaments. But once we look at enfactored comma-bases, we'll see why things get pretty pathological.


== Enfactored comma-bases ==


[[File:2-enfactored comma-basis.png|365px|thumb|left|enfactored comma-bases are garbage]]
And so our lattice for an enfactored comma-basis looks almost identical to the original defactored lattice. The only difference here is that we've drawn a "supposed (but false)" tube circumference out to {{vector|-6 4}}, while the half of this length which is real is now labelled the "true" circumference.


== Enfactored comma-bases vs. periodicity blocks with torsion ==


[[File:Torsion.png|400px|thumb|right|a reworking of the classic torsion example from Tonalsoft to reveal the twinned generator paths]]
So we can see how tempting the duality can be here. In the case of a 2-enfactored mapping, the generator path reaches ''twice'' as many nodes as there were JI nodes. But in the case of a 2-enfactored comma-basis — if we could legitimately extend the width of the block, as we do in untempered periodicity blocks! — we would reach ''half'' as many nodes. But this duality just is not musically, audibly real.


== Enfactored mappings vs. enfactored comma-bases ==


One may pose the question: what is the relationship between an enfactored mapping and an enfactored comma-basis? Can you have one but not the other? Must you? Or must you not? Or does the question even make sense? Certainly at least some have suggested these cases are meaningfully independent<ref>such as the page [[Color_notation/Temperament_Names|color notation]], which reads "it's possible that there is both torsion and contorsion"</ref>.  
The conclusion we arrive at here is that, because enfactored comma-bases don't make any sense, or at least don't represent any legitimately new musical information that their defactored versions don't already represent, it is not generally useful to think of enfactored mappings and enfactored comma-bases as independent phenomena. It only makes sense to speak of enfactored temperaments. Of course, one will often use the term "enfactored mapping", because enfactored mappings are the kind which do have some musical purpose, and often an enfactored mapping will be used to represent the enfactored temperament (or temperoid, that is).


== Conclusion ==


So due to the high likelihood for confusion when conceptualizing enfactored temperaments, we believe that using HNF as the unique ID for temperaments, i.e. treating temperoids as true temperaments, is a dangerous and unhelpful road. Instead, canonical form should be used, essentially HNF but with defactoring built in, so that practitioners of RTT can focus on working with true temperaments.


= Identifying enfactored mappings =


== Immediately apparent enfactoring ==


Sometimes the enfactoring of a mapping is immediately apparent. For example:
Think of it this way: because every element is even, any [[JI]] interval you'd map with the mapping must come out as an even number of steps of 24-ET, by the definition of the dot product, and every even step of 24-ET is just a step of 12-ET. Examples: {{vector|1 -2 1}}.{{map|24 38 56}} = 24 - 76 + 56 = 4, {{vector|1 1 -1}}.{{map|24 38 56}} = 24 + 38 - 56 = 6.
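
A quick numerical check of those two examples in Wolfram Language (nothing beyond the arithmetic just shown):

<pre>
(* the dot products above: both come out even *)
map = {24, 38, 56};
{1, -2, 1} . map   (* 24 - 76 + 56 = 4 *)
{1, 1, -1} . map   (* 24 + 38 - 56 = 6 *)
</pre>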


== Hidden enfactoring ==


Other times, enfactoring is less apparent. Consider this example:
If you're unsure why this {{map|3 -3 -6}} matters despite not being a row of {{ket|{{map|3 0 -1}} {{map|0 3 5}}}}, we may need to quickly review some linear algebra fundamentals. It may take some getting used to, but a mapping can be changed into an equivalent mapping (one which tempers out exactly the same intervals) by elementary row operations: adding an integer multiple of one row to another, negating a row, or reordering rows. That is, we could replace either {{map|3 0 -1}} or {{map|0 3 5}} in our original matrix {{ket|{{map|3 0 -1}} {{map|0 3 5}}}} to get {{ket|{{map|3 -3 -6}} {{map|0 3 5}}}} or {{ket|{{map|3 0 -1}} {{map|3 -3 -6}}}}, and all of these mappings represent the same temperament.
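
As a concrete check in Wolfram Language, subtracting the second row from the first produces the row with the hidden common factor:

<pre>
(* the hidden common factor of 3, exposed by a linear combination of rows *)
m = {{3, 0, -1}, {0, 3, 5}};
m[[1]] - m[[2]]             (* {3, -3, -6} *)
GCD @@ (m[[1]] - m[[2]])    (* 3 *)
</pre>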


== Well-hidden enfactoring ==


Sometimes the hidden common factor is even harder to find. Consider the mapping:
To find this common factor, we need to linearly combine two of the first row {{map|6 5 -4}} with negative three of the 2nd row {{map|4 -4 1}} to produce {{map|0 22 -11}}. So we can see here that its common factor is 11. But there's no clear relationship between the numbers 2 and 3 and the number 11. And so we can begin to see that the problem of identifying enfactored mappings may not be very simple or straightforward.
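
A quick check of that combination in Wolfram Language:

<pre>
(* two of the first row minus three of the second row *)
m = {{6, 5, -4}, {4, -4, 1}};
2 m[[1]] - 3 m[[2]]   (* {0, 22, -11}: every entry divisible by 11 *)
</pre>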


= Defactoring methods =


Even better than identifying enfactored mappings is actually full-on defactoring them. Here are two methods that do just that: Smith defactoring, developed by Gene Ward Smith<ref>but the name comes from a different Smith: [https://en.wikipedia.org/wiki/Henry_John_Stephen_Smith Henry John Stephen Smith], for whom the Smith normal form is named, which this method uses</ref>, and column Hermite defactoring, developed by Dave and Douglas (the name comes, of course, from Hermite normal form, which it uses<ref>named for Charles Hermite, who was French, by the way, and so his name is pronounced more like err-MEET, not like HER-might</ref>).  
Column Hermite defactoring could not have been developed, however, were it not for Gene's pioneering work with the Smith defactoring (what he calls the process of "saturating" a mapping). At first Dave and Douglas had no idea what the right reducing matrix of the Smith decomposition (the process which also provides the Smith normal form) had to do with common factors, only that it somehow magically worked. So they analyzed the Smith decomposition until they isolated its key actions which actually effect the defactoring, and then honed their method down to do only these necessary actions. Again, they wouldn't have known where to start were it not for Gene.


== Precedent: Smith defactoring ==


Dave and Douglas did much of their work in [https://www.wolfram.com/language/ Wolfram Language] (formerly Mathematica), a popular programming language used for math problems. In this section we'll give examples using it.
And that result matches what Gene finds in that xen wiki article. Defactoring and normalizing is equivalent to canonicalization. 
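
For orientation, the overall shape of Smith defactoring can be sketched as follows. This is a minimal reconstruction for illustration, not necessarily the exact code used above: it keeps the top <math>r</math> rows of the inverse of the right unimodular matrix returned by <code>SmithDecomposition[]</code>.

<pre>
(* a minimal sketch of Smith defactoring (a reconstruction for illustration):
   keep the first r rows of the inverse of the right unimodular matrix *)
smithDefactor[m_] := Module[{u, s, v},
  {u, s, v} = SmithDecomposition[m];   (* u.m.v == s *)
  Take[Inverse[v], MatrixRank[m]]
]

smithDefactor[{{24, 38, 56}}]   (* a basis equivalent to ⟨12 19 28], possibly up to sign *)
</pre>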


== New development: column Hermite defactoring ==


Here is the implementation for column Hermite defactoring:
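
A minimal sketch of the procedure described in this section follows; the actual implementation may differ in details, but the steps are the same: transpose, Hermite decompose, keep the unimodular part, invert it, and take the top <math>r</math> rows.

<pre>
(* column Hermite defactoring, sketched:
   transpose, Hermite decompose, keep the unimodular part,
   invert it, and take the top r rows *)
colHermiteDefactor[m_] := Module[{unimodular},
  unimodular = First[HermiteDecomposition[Transpose[m]]];
  Take[Inverse[Transpose[unimodular]], MatrixRank[m]]
]

colHermiteDefactor[{{6, 5, -4}, {4, -4, 1}}]   (* a defactored basis for this 11-enfactored mapping *)
</pre>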
Finally we take only the top <math>r</math> rows of this, where <math>r</math> is the rank of the original matrix. That's found with <code>MatrixRank[m]</code>.


=== How/why it works ===


The basic idea is that the column Hermite decomposition leaves any common row factor the mapping might have had in the HNF part, while preserving in the unimodular part everything that's still meaningful about how the mapping works. So that's why we throw the column HNF away and keep the unimodular part. The rest of the algorithm is basically just "undoing" stuff so we get back to the structure of the matrix that we input.
Finally, we take only the top <math>r</math> rows, which again is an "undo" type operation. Here what we're undoing is that we had to graduate from a rectangle to a square temporarily, storing our important information in the form of an invertible square unimodular matrix so that we could invert it while keeping it integer; but now we need to get it back into the same rectangular shape as the matrix we put in. So that's what this part is for.<ref>There is probably some special meaning or information in the rows you throw away here, but we're not sure what it might be.</ref>


=== Various additional ways of thinking about how/why it works ===


==== Relationship with Smith defactoring ====


If you describe column Hermite defactoring as "column-Hermite inverse take-r-rows", the analogous way to describe Smith defactoring would be like "row-Hermite column-Hermite row-Hermite column-Hermite row-Hermite column-Hermite ... inverse take-r-rows". In other words, the nature of Smith decomposition is to essentially repeatedly Hermite decompose from either angle until you've achieved Smith normal form, which is wasteful and unnecessary in this context, when all that is required is a single column-Hermite decomposition. This helps explain why column Hermite defactoring is more performant in general than Smith defactoring, when code is run against thousands of examples at a time.
According to Wolfram<ref>See: https://reference.wolfram.com/language/ref/SmithDecomposition.html</ref>, "the Smith decomposition can often be found by two iterations of the Hermite decomposition". This notion is echoed in the Sage docs<ref>See: https://doc.sagemath.org/html/en/reference/matrices/sage/matrix/matrix_integer_dense.html</ref>, which read, "Use Hermite normal form twice to find an invertible matrix whose inverse transforms a matrix with the same row span as self to its saturation," remembering that saturation is the same as defactoring. The reason for the multiple applications of Hermite decomposition to achieve Smith normal form is that you won't necessarily get a diagonal matrix on the first go. But as we know from Smith defactoring, the center matrix of the Smith decomposition (the Smith normal form one) is not the target matrix, so its exact form need not be achieved in order to accomplish defactoring.


==== Relationship with VEA ====


Another thought that might help congeal the notion of column Hermite defactoring for you is to use what you know about multimaps (AKA "wedgies"), in particular a) what they are, and b) how to defactor them. The answer to a) is that they are just the minor determinants (or "minors" for short) of rectangular matrices, or in other words, the closest thing rectangular matrices such as mappings have to a real determinant. And the answer to b) is that you simply extract the GCD of the entries in this list of minors. So if defactoring a list of minor determinants means dividing common factors out, then it should be little surprise that achieving a real determinant of ±1 is equivalent to defactoring, and therefore that leveraging the unimodularity of the other matrix produced by the Hermite decomposition should be valuable in this capacity.
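
As a small illustration of this connection, reusing the 11-enfactored example from earlier:

<pre>
(* the largest minor determinants of an 11-enfactored mapping share the factor 11 *)
m = {{6, 5, -4}, {4, -4, 1}};
minors = Flatten[Minors[m, MatrixRank[m]]];   (* the 2×2 minors *)
GCD @@ minors   (* 11; dividing it out defactors the multimap *)
</pre>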


==== Unimodular matrix size ====


One reason for doing a column Hermite decomposition on a mapping can be understood by comparing the sizes of the unimodular matrices. Matrices are often described as <math>m \times n</math>, where <math>m</math> is the row count and <math>n</math> is the column count. In the case of mappings it may be superior to use variable names corresponding to the domain concepts of rank <math>r</math> and dimension <math>d</math>, i.e. to speak of <math>r \times d</math> mappings. The key bit of info here is that, for non-trivial mappings anyway, <math>d</math> is always greater than <math>r</math>. So a standard row-based Hermite decomposition, i.e. to the right, is going to produce an <math>r \times r</math> unimodular matrix, while a column-based Hermite decomposition, i.e. to the bottom, is going to produce a <math>d \times d</math> unimodular matrix. For example, 5-limit meantone has <math>r=2</math> and <math>d=3</math>, so a standard row-based Hermite decomposition is going to produce a unimodular matrix that is only 2×2, while the column-based Hermite decomposition will produce one that is 3×3. With <math>d>r</math>, it's clear that the column-based decomposition will always produce the larger unimodular matrix. In fact, the row-based decomposition's unimodular matrix is too small to hold as many entries as the original mapping, and therefore it could never preserve the entirety of the important information from the input (in terms of our example, a 3×3 matrix can hold a 2×3 matrix, but a 2×2 matrix cannot).
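
This size difference is easy to check directly, assuming the familiar mapping {{ket|{{map|1 0 -4}} {{map|0 1 4}}}} for 5-limit meantone:

<pre>
(* comparing the unimodular matrices from row-style vs. column-style decomposition *)
meantone = {{1, 0, -4}, {0, 1, 4}};
Dimensions[First[HermiteDecomposition[meantone]]]             (* {2, 2} *)
Dimensions[First[HermiteDecomposition[Transpose[meantone]]]]  (* {3, 3} *)
</pre>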


=== By hand ===


In an effort to demystify the effects of column Hermite defactoring, let's walk through an example manually.
And that's all there is to the Hermite decomposition.


==== Inversion by hand ====


Now let's take a look at how to do inversion by hand. The first thing to note is that this process only works for square matrices, so you will not be using it on non-trivial mappings or comma-bases directly. But, as you know, there is a useful application for this process on the unimodular matrix which is the other result of the Hermite decomposition besides the HNF, and unimodular matrices are always square.
I chose this matrix specifically to demonstrate the importance of the unimodularity of the other matrix produced by the Hermite decomposition. A unimodular matrix is defined by having a determinant of ±1. And what does this have to do with inverses? Well, take a look at the determinant of our original matrix here, {{ket|{{map|3 -2 4}} {{map|1 0 2}} {{map|0 1 0}}}}. Its absolute value is 2. The absolute value of the determinant of an invertible integer matrix tells you a number which every denominator in its inverse must divide<ref>If you're familiar with the formula for the Moore-Penrose pseudoinverse of rectangular matrices, you may recognize this fact as akin to how you multiply the outside of the pseudoinverse by the reciprocal of the determinant of the matrix.</ref><ref>This may also shed some light on the fact that the only square matrices that are not invertible are those with determinants equal to 0.</ref>. And so, the fact that the other matrix produced by the Hermite decomposition is unimodular means not only that it is invertible, but that, since its entries are integers (which they will be, being involved in HNF), its inverse will also have only integer entries. And this is important because the inverse of a Hermite unimodular matrix is just one step away from the defactored form of an input matrix.
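
A quick check of these claims for the example matrix:

<pre>
(* the determinant's absolute value bounds the denominators in the inverse *)
m = {{3, -2, 4}, {1, 0, 2}, {0, 1, 0}};
Det[m]       (* -2 *)
Inverse[m]   (* {{1, -2, 2}, {0, 0, 1}, {-1/2, 3/2, -1}}: every denominator divides 2 *)
</pre>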


==== Putting it all together ====


In the Wolfram Language algorithm described above, the input matrix is first transposed and then Hermite decomposed. In fact, this is due to a limitation of the built-in <code>HermiteDecomposition[]</code> function in Wolfram Language. The HNF is well understood both in its more common, default row-style and in a column-style, so you might expect Wolfram Language to provide an option for the <code>HermiteDecomposition[]</code> function to perform it by columns. Alas, it does not. So the transposition our code does is a workaround for this lack.
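
In other words, a column-style decomposition can be built from the row-style built-in by transposing on the way in and transposing the results on the way out. A hypothetical helper (the name is made up for illustration) might look like:

<pre>
(* a column-style Hermite decomposition built from the row-style built-in;
   "columnHermiteDecomposition" is a made-up name for illustration *)
columnHermiteDecomposition[m_] := Module[{u, h},
  {u, h} = HermiteDecomposition[Transpose[m]];   (* u.Transpose[m] == h *)
  {Transpose[u], Transpose[h]}                   (* m.Transpose[u] == Transpose[h] *)
]
</pre>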
The pivots are 1 and 11, and that 11 tells us that we had a common factor of 11<ref>In the doubly-enfactored case of {{ket|{{map|17 16 -4}} {{map|4 -4 1}}}}, i.e. with a common factor of 33 = 3 × 11, the two pivots of the HNF are 3 and 11, putting each of them on display separately.</ref><ref>It's interesting to observe that while the 11-enfactoring can be observed in the original matrix as a linear combination of 2 of the 1st row with -3 of the 2nd row, i.e. 2{{map|6 5 -4}} + -3{{map|4 -4 1}} = {{map|0 22 -11}}, the linear combination of ''columns'', i.e. slicing the original {{ket|{{map|6 5 -4}} {{map|4 -4 1}}}} mapping the other direction like {{bra|{{vector|6 4}} {{vector|5 -4}} {{vector|-4 1}}}}, that leads to the revelation of this 11 is completely different: -1{{vector|6 4}} + 2{{vector|5 -4}} + 1{{vector|-4 1}} = {{vector|0 11}}.</ref>. You could say that the HNF is useful for identifying common factors, but not for removing them. But if you leave them behind in the column-style HNF, the information retained in the unimodular matrix (the other product of the Hermite decomposition) is enough to preserve everything important about the temperament, to get you back to where you started via an inverse and a trimming of extraneous rows.
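
Those pivots can also be read directly off the column-style HNF; a quick check, reusing the 11-enfactored example from earlier:

<pre>
(* the diagonal of the column-style HNF of the 11-enfactored mapping *)
m = {{6, 5, -4}, {4, -4, 1}};
colHnf = Transpose[Last[HermiteDecomposition[Transpose[m]]]];
{colHnf[[1, 1]], colHnf[[2, 2]]}   (* {1, 11} *)
</pre>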


== Other defactoring methods ==


While developing an ideal defactoring method (the effort which culminated in column Hermite defactoring), Dave and Douglas experimented with other methods:
* [[MADAM defactoring]]


= Canonical comma-bases =


Canonical form is not only for mappings; comma-bases may also be put into canonical form. The only difference is that they must be put in an "antitranspose sandwich", or in other words, antitransposed<ref>See a discussion of the antitranspose here: [[Douglas_Blumeyer%27s_RTT_How-To#null-space]]</ref> once at the beginning, and then antitransposed again at the end.
And there's our canonical comma-basis.
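
The overall shape of this procedure can be sketched as follows, reusing the column Hermite defactoring sketched earlier and a plain row-style HNF as the normalization step; the actual canonicalization routine may differ in details:

<pre>
(* antitranspose = reflection across the antidiagonal *)
antitranspose[m_] := Reverse[Transpose[m], {1, 2}]

(* hypothetical helpers: defactor (column-Hermite style), then normalize with row-style HNF *)
defactor[m_] := Take[
  Inverse[Transpose[First[HermiteDecomposition[Transpose[m]]]]],
  MatrixRank[m]]
canonicalForm[m_] := Last[HermiteDecomposition[defactor[m]]]

(* comma-bases get the antitranspose sandwich around the same canonicalization *)
canonicalCommaBasis[b_] := antitranspose[canonicalForm[antitranspose[b]]]
</pre>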


= References =