Tenney–Euclidean temperament measures
Graham Breed defines the simple badness slightly differently, again equivalent to a choice of scaling. This is skipped here because, by that definition, it is easier to find TE complexity and TE error first and multiply them together to get the simple badness.
Sintel has likewise given a simple badness as

$$ \norm{ J_U \wedge M_U }_2 $$

where {{nowrap| ''J''<sub>''U''</sub> {{=}} ''J''<sub>''W''</sub>/det(''W'')<sup>1/''n''</sup> }} is the ''U''-weighted just tuning map.
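As a numerical check on this definition, the simple badness of 5-limit meantone can be sketched as below. The mapping, the plain Tenney weighting matrix, and all names are illustrative assumptions rather than code from any reference implementation; the multivector norm is computed via the Gram determinant.

```python
import numpy as np

# Illustrative sketch: Sintel's simple badness for 5-limit meantone.
primes = np.array([2.0, 3.0, 5.0])
J = np.log2(primes)                    # just tuning map, in octaves
W = np.diag(1 / J)                     # Tenney weighting matrix (assumption)
n = len(primes)
U = W / np.linalg.det(W) ** (1 / n)    # det-normalized weights, so det(U) = 1
M = np.array([[1, 1, 0],
              [0, 1, 4]])              # meantone mapping (assumption)

J_U = J @ U                            # U-weighted just tuning map
M_U = M @ U                            # U-weighted mapping

# ||J_U ^ M_U||_2 via the Gram determinant: for vectors stacked as rows of A,
# the norm of their wedge product equals sqrt(det(A A^T)).
A = np.vstack([J_U, M_U])
simple_badness = np.sqrt(np.linalg.det(A @ A.T))
print(simple_badness)
```

Because meantone is an accurate, low-complexity temperament, the value comes out small.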
=== Reduction to the span of a comma ===
This relationship also holds if TOP is used rather than TE, as the TOP damage associated with tempering out some comma ''n''/''d'' is log(''n''/''d'')/(''nd''), and if we multiply by the complexity ''nd'', we simply get log(''n''/''d'') as our result.
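The cancellation in the last sentence can be checked directly for a concrete comma; here the syntonic comma 81/80 is taken as an illustrative example, with logarithms base 2 (an assumption):

```python
import math

# Illustrative check with the syntonic comma 81/80, following the
# formulas in the text (log taken base 2 by assumption).
n, d = 81, 80
damage = math.log2(n / d) / (n * d)   # TOP damage of tempering out n/d
complexity = n * d                    # complexity of the comma
product = damage * complexity         # equals log2(n/d) exactly
print(product, math.log2(n / d))
```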
== TE logflat badness ==
Some consider the simple badness to be a sort of badness which favors complex temperaments. The '''logflat badness''' was developed to address that. If we define ''B'' to be the simple badness (relative error) of a temperament, and ''C'' to be the complexity, then the logflat badness ''L'' is defined by the formula
$$ L = B \cdot C^{r/(n - r)} $$
The exponent is chosen such that, if we set a cutoff margin for logflat badness, infinitely many new temperaments keep appearing as complexity goes up, at a lower rate which is approximately logarithmic in complexity.
In Graham's and Gene's derivation,
$$ L = \norm{ J_W \wedge M_W } \norm{M_W}^{r/(n - r)} $$
In Sintel's Dirichlet coefficients, or Dirichlet badness,

$$ L = \norm{ J_U \wedge M_U } \norm{M_U}^{r/(n - r)} / \norm{J_U} $$
Notice the extra factor 1/‖''J''<sub>''U''</sub>‖, which is to say we divide by the norm of the just tuning map. For comparison, Gene's derivation does not have this factor, whereas with Tenney weights, omitting this factor has no effect on Graham's derivation since ‖''J''<sub>''W''</sub>‖<sub>RMS</sub> is unity.
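The two formulas above can be compared numerically, again using 5-limit meantone; the mapping, the choice of plain Euclidean multivector norms, and all names are assumptions for illustration, not any canonical implementation:

```python
import numpy as np

# Illustrative sketch: logflat badness of 5-limit meantone under both
# conventions described in the text.
primes = np.array([2.0, 3.0, 5.0])
J = np.log2(primes)
W = np.diag(1 / J)                               # Tenney weights (assumption)
U = W / np.linalg.det(W) ** (1 / len(primes))    # det-normalized weights
M = np.array([[1, 1, 0], [0, 1, 4]])             # meantone mapping (assumption)
r, n = M.shape                                   # rank 2, dimension 3

def wedge_norm(rows):
    """||v_1 ^ ... ^ v_k||_2 as the Gram-determinant volume sqrt(det(A A^T))."""
    A = np.vstack(rows)
    return np.sqrt(np.linalg.det(A @ A.T))

# Graham's and Gene's derivation: L = ||J_W ^ M_W|| ||M_W||^(r/(n-r))
J_W, M_W = J @ W, M @ W
L_gene = wedge_norm([J_W, M_W]) * wedge_norm([M_W]) ** (r / (n - r))

# Sintel's Dirichlet badness: same form with U-weighting, divided by ||J_U||
J_U, M_U = J @ U, M @ U
L_sintel = (wedge_norm([J_U, M_U]) * wedge_norm([M_U]) ** (r / (n - r))
            / np.linalg.norm(J_U))
print(L_gene, L_sintel)
```

The two conventions differ only by overall scaling, so they rank temperaments of a given rank and limit identically.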
== Examples ==