Tenney–Euclidean temperament measures
Line 46:
Gene Ward Smith's RMS norm is given as
$$ \norm{M_W}_\text{RMS'} = \sqrt {\frac{\det(V_W V_W^\mathsf{T})}{C(n, r)}} = \frac {\norm{M_W}_2}{\sqrt {C(n, r)}} $$
where {{nowrap|C(''n'', ''r'')}} is the number of combinations of ''n'' things taken ''r'' at a time without repetition, which equals the number of entries of the wedgie in the usual, compressed form.
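As a concrete illustration (not from the original page; it assumes numpy, primes measured in octaves, and the 5-limit vals 12 and 19 purely as an example), the following sketch computes this norm from the Gram determinant:

<syntaxhighlight lang="python">
# Sketch: Gene's RMS' norm via the Gram determinant, using the identity
# det(V_W V_W^T) = ||M_W||_2^2 for the wedgie of the rows of V_W.
import numpy as np
from math import comb, log2

primes = [2, 3, 5]
V = np.array([[12, 19, 28],
              [19, 30, 44]])                # example vals: 12 & 19 (5-limit)
W = np.diag([1 / log2(p) for p in primes])  # Tenney weights, primes in octaves
VW = V @ W                                  # weighted mapping V_W

r, n = VW.shape
l2_norm = np.sqrt(np.linalg.det(VW @ VW.T))   # ||M_W||_2
rms_norm = l2_norm / np.sqrt(comb(n, r))      # ||M_W||_RMS'
print(rms_norm)
</syntaxhighlight>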
Line 86:
: '''Note''': this is the definition used by Graham Breed's temperament finder.
Gene Ward Smith defines TE error as the ratio ‖''M''<sub>''W''</sub> ∧ ''J''<sub>''W''</sub>‖/‖''M''<sub>''W''</sub>‖, derived from the relationship of TE simple badness and TE complexity. See the next section. We denote this definition of TE error ''Ψ''.
From the ratio {{nowrap|(‖''M''<sub>''W''</sub> ∧ ''J''<sub>''W''</sub>‖/‖''M''<sub>''W''</sub>‖)<sup>2</sup>}} we obtain {{nowrap|{{sfrac|''C''(''n'', ''r'' + 1)|''n''⋅''C''(''n'', ''r'')}} {{=}} {{sfrac|''n'' − ''r''|''n''(''r'' + 1)}}}}. If we take the ratio of this for rank 1 with this for rank ''r'', the ''n'' cancels, and we get {{nowrap|{{sfrac|''n'' − 1|2}} · {{sfrac|''r'' + 1|''n'' − ''r''}} {{=}} {{sfrac|(''r'' + 1)(''n'' − 1)|2(''n'' − ''r'')}}}}. It follows that dividing TE error by the square root of this ratio gives a constant of proportionality such that if ''Ψ'' is the TE error of a rank-''r'' temperament, then
$$ \psi = \sqrt{\frac{2(n - r)}{(r + 1)(n - 1)}} \Psi $$
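As a quick numerical check of this rescaling factor (a sketch, not from the original page), note that it reduces to 1 when ''r'' = 1, so ''ψ'' and ''Ψ'' coincide for rank-1 temperaments, the reference point of the derivation above:

<syntaxhighlight lang="python">
# The rescaling factor sqrt(2(n - r) / ((r + 1)(n - 1))) relating psi to Psi.
from math import sqrt

def psi_factor(n, r):
    return sqrt(2 * (n - r) / ((r + 1) * (n - 1)))

print(psi_factor(3, 1))  # 1.0: rank 1 is the reference point, psi = Psi
print(psi_factor(3, 2))  # ~0.577: a 5-limit rank-2 Psi is scaled by sqrt(1/3)
</syntaxhighlight>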
Line 107:
$$ B = C \cdot E $$
Gene Ward Smith defines the simple badness of ''M'' as {{nowrap|‖''M''<sub>''W''</sub> ∧ ''J''<sub>''W''</sub>‖<sub>RMS′</sub>}}. A perhaps simpler way to view this is to start with a mapping matrix ''V''<sub>''W''</sub> and add an extra row ''J''<sub>''W''</sub> corresponding to the just tuning map; we will label this matrix ''Ṽ''<sub>''W''</sub>. Then the simple badness is:
$$ \norm{ M_W \wedge J_W }_\text {RMS'} = \sqrt{\frac{\det(\tilde V_W \tilde V_W^\mathsf{T})}{C(n, r + 1)}} $$
Thus we can view the simple badness as the TE complexity of the "pseudo-temperament" formed by adding the JIP to the mapping matrix as if it were another val.
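Continuing the illustrative sketch from the complexity section (same assumed vals and Tenney weighting, with primes in octaves so that ''J''<sub>''W''</sub> is the all-ones vector), the augmented-matrix form is a one-line extension:

<syntaxhighlight lang="python">
# Sketch: Gene's simple badness as the "complexity" of the mapping augmented
# with the weighted just tuning map J_W (all ones when primes are in octaves).
import numpy as np
from math import comb, log2

primes = [2, 3, 5]
V = np.array([[12, 19, 28],
              [19, 30, 44]])
W = np.diag([1 / log2(p) for p in primes])
VW = V @ W
r, n = VW.shape

VW_tilde = np.vstack([VW, np.ones(n)])      # V-tilde_W: append J_W as a row
badness = np.sqrt(np.linalg.det(VW_tilde @ VW_tilde.T) / comb(n, r + 1))
print(badness)
</syntaxhighlight>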
Graham Breed defines the simple badness slightly differently, again equivalent to a choice of scaling; we skip it here because it follows from the general formula.
Sintel has likewise given a simple badness as
$$ \norm{ M_U \wedge J_U }_2 = \sqrt{\det(\tilde V_U \tilde V_U^\mathsf{T})} $$
where {{nowrap| ''J''<sub>''U''</sub> {{=}} ''J''<sub>''W''</sub>/det(''W'')<sup>1/''n''</sup> }} is the ''U''-weighted just tuning map.
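For comparison, here is a sketch of Sintel's version, under the assumption that the ''U''-weighting is the determinant-normalized Tenney weighting {{nowrap|''U'' {{=}} ''W''/det(''W'')<sup>1/''n''</sup>}}:

<syntaxhighlight lang="python">
# Sketch: Sintel's simple badness, assuming U = W / det(W)^(1/n) (so det(U) = 1)
# and hence J_U = J_W / det(W)^(1/n), matching the definition above.
import numpy as np
from math import log2

primes = [2, 3, 5]
V = np.array([[12, 19, 28],
              [19, 30, 44]])
n = len(primes)
W = np.diag([1 / log2(p) for p in primes])
U = W / np.linalg.det(W) ** (1 / n)         # determinant-normalized weights

VU = V @ U
JU = np.log2(primes) @ U                    # U-weighted just tuning map
VU_tilde = np.vstack([VU, JU])
print(np.sqrt(np.linalg.det(VU_tilde @ VU_tilde.T)))
</syntaxhighlight>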
Line 133:
The exponent is chosen such that if we set a cutoff margin for logflat badness, infinitely many new temperaments still appear as complexity goes up, at a lower rate which is approximately logarithmic in complexity.
In Graham's and Gene's derivations,
$$ L = \norm{ M_W \wedge J_W } \norm{M_W}^{r/(n - r)} $$
In Sintel's Dirichlet coefficients, or Dirichlet badness,
$$ L = \norm{ M_U \wedge J_U } \norm{M_U}^{r/(n - r)} / \norm{J_U} $$
Notice the extra factor 1/‖''J''<sub>''U''</sub>‖, which is to say we divide by the norm of the just tuning map. For comparison, Gene's derivation does not have this factor, whereas with Tenney weights, whether this factor is omitted or not has no effect on Graham's derivation, since ‖''J''<sub>''W''</sub>‖<sub>RMS</sub> is unity.
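Putting the pieces together, here is a sketch of both logflat badness variants (same illustrative vals as above; treating the unsubscripted norms in these formulas as plain L2 norms is an assumption of this sketch):

<syntaxhighlight lang="python">
# Sketch: both logflat badness variants, with all norms taken as L2 norms.
import numpy as np
from math import log2

def gram_norm(M):
    """L2 norm of the wedgie of the rows of M, via the Gram determinant."""
    return np.sqrt(np.linalg.det(M @ M.T))

primes = [2, 3, 5]
V = np.array([[12, 19, 28],
              [19, 30, 44]])
n, r = len(primes), V.shape[0]
W = np.diag([1 / log2(p) for p in primes])
U = W / np.linalg.det(W) ** (1 / n)

# Graham's and Gene's form: ||M_W ^ J_W|| * ||M_W||^(r/(n - r))
VW, JW = V @ W, np.ones(n)
L_gene = gram_norm(np.vstack([VW, JW])) * gram_norm(VW) ** (r / (n - r))

# Sintel's Dirichlet badness: U-weighted, with the extra division by ||J_U||
VU, JU = V @ U, np.log2(primes) @ U
L_sintel = (gram_norm(np.vstack([VU, JU])) * gram_norm(VU) ** (r / (n - r))
            / np.linalg.norm(JU))
print(L_gene, L_sintel)
</syntaxhighlight>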