Defactoring
Dave and Douglas did much of their work in [https://www.wolfram.com/language/ Wolfram Language] (formerly Mathematica), a popular programming language used for math problems. In this section we'll give examples using it.
An input mapping <math>m</math>, such as the example Gene gives [[saturation|on the xen wiki page for Saturation]], {{ket|{{map|12 19 28 34}} {{map|26 41 60 72}}}}, would be written in Wolfram Language as a nested list:
<nowiki>m = {{12,19,28,34},{26,41,60,72}};</nowiki>
<nowiki>rightReducingMatrix[m_] := Last[SmithDecomposition[m]]
smithDefactor[m_] := Take[Inverse[rightReducingMatrix[m]], MatrixRank[m]]</nowiki>
So the first thing that happens to <math>m</math> when you pass it into <code>smithDefactor[]</code> is that it calls <code>rightReducingMatrix[]</code> on it. This finds the Smith decomposition (using a function built into Wolfram Language), which gives you three outputs: the Smith normal form, flanked by its left and right reducing matrices. Gene asks only for the right reducing matrix, so we grab that with <code>Last[]</code>. That's what the function on the first line, <code>rightReducingMatrix[]</code>, does.
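For instance, here's a quick way to see those three outputs for yourself (an illustrative snippet of our own, not part of Gene's code, assuming the <math>m</math> defined above):

<nowiki>(* the decomposition returns the left reducing matrix, the Smith normal form,
   and the right reducing matrix, in that order *)
{leftReducer, smithNormalForm, rightReducer} = SmithDecomposition[m];
rightReducer === Last[SmithDecomposition[m]]  (* True: Last[] grabs the right reducing matrix *)</nowiki>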
Then Gene asks us to invert this result and take its first <math>r</math> rows, where <math>r</math> is the rank of the temperament. <code>Inverse[]</code> takes care of the inversion, of course. <code>MatrixRank[m]</code> gives the count of linearly independent rows of the mapping, AKA the rank, or count of generators in this temperament. In this case that's 2. And so <code>Take[list, 2]</code> simply returns the first 2 entries of the list.
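Putting those pieces together on the example mapping (again just a sketch, assuming the definitions above):

<nowiki>v = Last[SmithDecomposition[m]];  (* the right reducing matrix: square and unimodular *)
inverseV = Inverse[v];            (* all integer entries, since Det[v] is ±1 *)
MatrixRank[m]                     (* 2: the rank r of this temperament *)
Take[inverseV, MatrixRank[m]]     (* keep only the first r rows: the defactored mapping *)</nowiki>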
Almost done! Except Gene not only defactors, he also calls for HNF, as we would, to achieve canonical (unique ID) form.
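One way to tack that on in Wolfram Language (our own sketch, with a hypothetical <code>canonicalForm[]</code> name; not necessarily Gene's exact code) is to take the second output of the built-in <code>HermiteDecomposition[]</code>, which is the Hermite normal form:

<nowiki>canonicalForm[m_] := Last[HermiteDecomposition[smithDefactor[m]]]</nowiki>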
The next step is to invert that matrix, which is doable because it is unimodular; a key property of unimodular matrices is that they are always invertible, and because their determinant is ±1, if they contain all integer entries, their inverse will also contain all integer entries (which it does, and we need it to)<ref>Interesting tidbit regarding full-rank matrices and unimodular matrices: every unimodular matrix is square and full-rank, though a full-rank square integer matrix is only unimodular if its determinant happens to be ±1.</ref>.
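As a tiny demonstration of that integer-inverse property (an example of our own, not from the worked example below):

<nowiki>u = {{2, 1}, {1, 1}};  (* unimodular: integer entries and Det[u] == 1 *)
Inverse[u]             (* {{1, -1}, {-1, 2}}: the inverse is also all integers *)</nowiki>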
Finally we take only the top <math>r</math> rows of this, where <math>r</math> is the rank of the original matrix. That's found with <code>MatrixRank[m]</code>.
=== how/why it works ===
So inverting is one of those "undo" type operations. To understand why, we have to understand the nature of this decomposition. What the Hermite decomposition does is return a unimodular matrix <math>U</math> and a Hermite normal form matrix <math>H</math> such that if you left-multiply your original matrix <math>A</math> by the unimodular matrix <math>U</math> you get the normal form matrix <math>H</math>; in other words, <math>UA = H</math>. So, think of it this way: if <math>A</math> is what we input, and we want something sort of like <math>A</math>, but <math>U</math> is what we've taken, and <math>U</math> is multiplied with <math>A</math> in this equality to get <math>H</math>, where <math>H</math> is also kind of like <math>A</math>, then probably what we really want is something like <math>U</math>, but inverted.
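You can check that relationship directly in Wolfram Language (a sketch using an arbitrary integer matrix of our own choosing):

<nowiki>a = {{12, 19, 28}, {26, 41, 60}};  (* any integer matrix will do *)
{u, h} = HermiteDecomposition[a];  (* u is unimodular, h is the Hermite normal form *)
u . a == h                         (* True: left-multiplying a by u gives h *)</nowiki>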
Finally, we take only the top <math>r</math> rows, which again is an "undo" type operation. Here what we're undoing is that we had to graduate from a rectangle to a square temporarily, storing our important information in the form of this invertible square unimodular matrix, so we could invert it while keeping it integer; but now we need to get it back into the same rectangular shape as what we put in. So that's what this part is for.<ref>There is probably some special meaning or information in the rows you throw away here, but we're not sure what it might be.</ref>
=== various additional ways of thinking about how/why it works ===
==== unimodular matrix size ====
One reason for doing a column Hermite decomposition on a mapping can be understood by comparing the sizes of the unimodular matrices. Matrices are often described as <math>m \times n</math>, where <math>m</math> is the row count and <math>n</math> is the column count. In the case of mappings it may be superior to use variable names corresponding to the domain concepts of rank <math>r</math> and dimension <math>d</math>, i.e. to speak of <math>r \times d</math> mappings. The key bit of info here is that (for non-trivial mappings, anyway) <math>d</math> is always greater than <math>r</math>. So a standard row-based Hermite decomposition, i.e. to the right, is going to produce an <math>r \times r</math> unimodular matrix, while a column-based Hermite decomposition, i.e. to the bottom, is going to produce a <math>d \times d</math> unimodular matrix. For example, 5-limit meantone has <math>r=2</math> and <math>d=3</math>, so a standard row-based Hermite decomposition is going to produce a unimodular matrix that is only 2×2, while the column-based Hermite decomposition will produce one that is 3×3. With <math>d>r</math>, it's clear that the column-based decomposition will always produce the larger unimodular matrix. In fact, the row-based decomposition's unimodular matrix is too small to hold as many entries as the original mapping, and therefore it could never preserve the entirety of the important information from the input (in terms of our example, a 3×3 matrix can hold a 2×3 matrix, but a 2×2 matrix cannot).
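For the 5-limit meantone example, you can see the size difference directly (a sketch of our own, assuming the column-based decomposition is done by decomposing the transpose):

<nowiki>meantoneMap = {{1, 0, -4}, {0, 1, 4}};                          (* 5-limit meantone: r = 2, d = 3 *)
Dimensions[First[HermiteDecomposition[meantoneMap]]]            (* {2, 2}: row-based unimodular matrix *)
Dimensions[First[HermiteDecomposition[Transpose[meantoneMap]]]] (* {3, 3}: column-based unimodular matrix *)</nowiki>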
=== by hand ===
\end{array} \right]</math>
And we take from this thing the top <math>r</math> rows, where <math>r</math> is the rank of the input matrix, which in this case is 2:
<math>\left[ \begin{array} {rrr}