Tenney–Euclidean temperament measures
Our first complexity measure of a temperament is given by the ''L''<sup>2</sup> norm of the Tenney-weighted wedgie ''M''<sub>''W''</sub>, which can in turn be obtained from the Tenney-weighted mapping matrix ''V''<sub>''W''</sub>. This complexity can be easily computed either from the wedgie or from the mapping matrix, using the {{w|Gramian matrix|Gramian}}:
$$ \norm{M_W}_2 = \sqrt {\det(V_W V_W^\mathsf{T})} $$
where det(·) denotes the determinant, and {{t}} denotes the transpose.
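The computation is easy to carry out numerically. The following is a minimal sketch in Python using NumPy; the function name <code>te_complexity</code> and the choice of weighting matrix {{nowrap|''W'' {{=}} diag(1/log<sub>2</sub> ''p''<sub>''i''</sub>)}} are illustrative assumptions rather than part of the text above.

<syntaxhighlight lang="python">
import numpy as np

def te_complexity(mapping, primes):
    """L2 norm of the weighted wedgie, computed as sqrt(det(V_W V_W^T))."""
    V = np.asarray(mapping, dtype=float)
    W = np.diag(1.0 / np.log2(primes))  # assumed Tenney weighting: divide column i by log2(p_i)
    VW = V @ W                          # Tenney-weighted mapping matrix
    return np.sqrt(np.linalg.det(VW @ VW.T))

# Example: 5-limit meantone, with mapping rows <1 1 0] and <0 1 4]
print(te_complexity([[1, 1, 0], [0, 1, 4]], primes=[2, 3, 5]))
</syntaxhighlight>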
Graham Breed and [[Gene Ward Smith]] have proposed different RMS norms. Let us denote the RMS norm of ''M'' as ‖''M''‖<sub>RMS</sub>. In Graham's paper<ref name="primerr">Graham Breed. [http://x31eq.com/temper/primerr.pdf ''Prime Based Error and Complexity Measures''], often referred to as ''primerr.pdf''.</ref>, an RMS norm is proposed as
$$ \norm{M_W}_\text{RMS} = \sqrt {\det \left( \frac {V_W V_W^\mathsf{T}}{n} \right)} = \frac {\norm{M_W}_2}{\sqrt {n^r}} $$
where ''n'' is the number of primes up to the prime limit ''p'', and ''r'' is the rank of the temperament. Thus ''n''<sup>''r''</sup> is the number of permutations of ''n'' things taken ''r'' at a time with repetition, which equals the number of entries of the wedgie in its full tensor form.
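As a sketch of this normalization (assuming the <code>te_complexity</code> helper from the example above), Graham's RMS norm simply rescales the ''L''<sup>2</sup> norm by √(''n''<sup>''r''</sup>):

<syntaxhighlight lang="python">
import numpy as np

def te_complexity_rms_graham(mapping, primes):
    # Assumes te_complexity() from the previous sketch is in scope.
    n = len(primes)                          # number of primes
    r = int(np.linalg.matrix_rank(mapping))  # rank of the temperament
    return te_complexity(mapping, primes) / np.sqrt(float(n) ** r)
</syntaxhighlight>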
Gene Ward Smith's RMS norm is given as
$$ \norm{M_W}_\text{RMS}' = \sqrt {\frac{\det(V_W V_W^\mathsf{T})}{C(n, r)}} = \frac {\norm{M_W}_2}{\sqrt {C(n, r)}} $$
where {{nowrap|C(''n'', ''r'')}} is the number of combinations of ''n'' things taken ''r'' at a time without repetition, which equals the number of entries of the wedgie in the usual, compressed form.
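A corresponding sketch for this normalization (again assuming the <code>te_complexity</code> helper from the first example) divides by √C(''n'', ''r'') instead:

<syntaxhighlight lang="python">
from math import comb
import numpy as np

def te_complexity_rms_gene(mapping, primes):
    # Assumes te_complexity() from the first sketch is in scope.
    n = len(primes)
    r = int(np.linalg.matrix_rank(mapping))
    return te_complexity(mapping, primes) / np.sqrt(comb(n, r))
</syntaxhighlight>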
Gene Ward Smith defines the simple badness of ''M'' as {{nowrap|‖''J''<sub>''W''</sub> ∧ ''M''<sub>''W''</sub>‖<sub>RMS</sub>}}, where {{nowrap|''J''<sub>''W''</sub> {{=}} {{val| 1 1 … 1 }}}} is the JIP in weighted coordinates. Once again, if we have a list of vectors we may use a Gramian to compute it. First we note that {{nowrap|''a''<sub>''i''</sub> {{=}} ''J''<sub>''W''</sub>·('''v'''<sub>''w''</sub>)<sub>''i''</sub>/''n''}} is the mean value of the entries of ('''v'''<sub>''w''</sub>)<sub>''i''</sub>. Then note that {{nowrap|''J''<sub>''W''</sub> ∧ (('''v'''<sub>''w''</sub>)<sub>1</sub> − ''a''<sub>1</sub>''J''<sub>''W''</sub>) ∧ (('''v'''<sub>''w''</sub>)<sub>2</sub> − ''a''<sub>2</sub>''J''<sub>''W''</sub>) ∧ … ∧ (('''v'''<sub>''w''</sub>)<sub>''r''</sub> − ''a''<sub>''r''</sub>''J''<sub>''W''</sub>) {{=}} ''J''<sub>''W''</sub> ∧ ('''v'''<sub>''w''</sub>)<sub>1</sub> ∧ ('''v'''<sub>''w''</sub>)<sub>2</sub> ∧ … ∧ ('''v'''<sub>''w''</sub>)<sub>''r''</sub>}}, since any wedge product containing ''J''<sub>''W''</sub> more than once is zero. The Gram matrix of the vectors ''J''<sub>''W''</sub> and {{nowrap|('''v'''<sub>''w''</sub>)<sub>''i''</sub> − ''a''<sub>''i''</sub>''J''<sub>''W''</sub>}} will have ''n'' as the {{nowrap|(1, 1)}} entry, and 0's in the rest of the first row and column. Hence we obtain:
$$ \norm{ J_W \wedge M_W }'_\text {RMS} = \sqrt{\frac{n}{C(n, r + 1)} \det \left( (\vec{v_w})_i \cdot (\vec{v_w})_j - n a_i a_j \right)} $$
A perhaps simpler way to view this is to start with a mapping matrix ''V''<sub>''W''</sub> and add an extra row ''J''<sub>''W''</sub> corresponding to the JIP; we will label this matrix ''V''<sub>''J''</sub>. Then the simple badness is:
$$ \norm{ J_W \wedge M_W }'_\text {RMS} = \sqrt{\frac{\det(V_J V_J^\mathsf{T})}{C(n, r + 1)}} $$
In other words, we can essentially view the simple badness as the TE complexity of the "pseudo-temperament" formed by adding the JIP to the mapping matrix as if it were another val.
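Both routes to the simple badness can be sketched in the same style as the complexity examples above (NumPy again; the function names and the diag(1/log<sub>2</sub> ''p''<sub>''i''</sub>) weighting are assumptions for illustration). The two functions below should agree, since {{nowrap|det(''V''<sub>''J''</sub>''V''<sub>''J''</sub>{{t}})}} equals ''n'' times the determinant of the centered Gram matrix:

<syntaxhighlight lang="python">
from math import comb
import numpy as np

def te_simple_badness(mapping, primes):
    """RMS norm of J_W ^ M_W via the augmented matrix V_J (JIP added as a row)."""
    V = np.asarray(mapping, dtype=float)
    W = np.diag(1.0 / np.log2(primes))   # assumed Tenney weighting
    VW = V @ W
    n = len(primes)
    r = VW.shape[0]                      # assumes the mapping rows are independent
    VJ = np.vstack([np.ones(n), VW])     # weighted JIP J_W = <1 1 ... 1] as an extra row
    return np.sqrt(np.linalg.det(VJ @ VJ.T) / comb(n, r + 1))

def te_simple_badness_gram(mapping, primes):
    """Same quantity via the Gram matrix (v_w)_i . (v_w)_j - n * a_i * a_j."""
    V = np.asarray(mapping, dtype=float)
    W = np.diag(1.0 / np.log2(primes))
    VW = V @ W
    n = len(primes)
    r = VW.shape[0]
    a = VW.mean(axis=1)                  # a_i = J_W . (v_w)_i / n
    G = VW @ VW.T - n * np.outer(a, a)
    return np.sqrt(n * np.linalg.det(G) / comb(n, r + 1))

# Example: both should print the same value for 5-limit meantone
print(te_simple_badness([[1, 1, 0], [0, 1, 4]], primes=[2, 3, 5]))
print(te_simple_badness_gram([[1, 1, 0], [0, 1, 4]], primes=[2, 3, 5]))
</syntaxhighlight>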