Tenney–Euclidean temperament measures: Difference between revisions
Write the weighted variables explicitly with subscript W, and get rid of A in favor of V accordingly. Some misc. cleanup
Move TE error above simple badness (for obvious reasons)
Line 55:
We may also note {{nowrap|{{!}}''V''<sub>''W''</sub>''V''<sub>''W''</sub>{{t}}{{!}} {{=}} {{!}}''VW''<sup>2</sup>''V''{{t}}{{!}}}}. This may be related to the [[Tenney–Euclidean metrics|TE tuning projection matrix]] ''P''<sub>''W''</sub>, which is ''V''<sub>''W''</sub>{{t}}(''V''<sub>''W''</sub>''V''<sub>''W''</sub>{{t}}){{inv}}''V''<sub>''W''</sub>, and the corresponding matrix for unweighted monzos {{nowrap|''P'' {{=}} ''V''{{t}}(''VW''<sup>2</sup>''V''{{t}}){{inv}}''V''}}.
== TE error ==
Line 116: | Line 86:
''G'' and ''ψ'' error both have the advantage that higher-rank temperament error corresponds directly to rank-1 error, but the RMS normalization has the further advantage that in the rank-1 case, {{nowrap|''G'' {{=}} sin ''θ''}}, where ''θ'' is the angle between ''J''<sub>''W''</sub> and the val in question. Multiplying by 1200 to obtain a result in cents leads to 1200 sin(''θ''), the TE error in cents.
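As a numeric sketch of the rank-1 relation, the following pure-Python fragment computes the angle between the weighted JIP and a Tenney-weighted val, then evaluates 1200 sin(''θ''). The 12edo patent val {{val| 12 19 28 }} in the 5-limit is an assumed example chosen for this sketch, not taken from the text.

```python
import math

# Assumed example: 12edo patent val <12 19 28] in the 5-limit.
primes = [2, 3, 5]
val = [12, 19, 28]

vw = [v / math.log2(p) for v, p in zip(val, primes)]  # Tenney-weighted val
jw = [1.0] * len(primes)                              # weighted JIP <1 1 1]

dot = sum(a * b for a, b in zip(jw, vw))
norm_j = math.sqrt(sum(a * a for a in jw))
norm_v = math.sqrt(sum(b * b for b in vw))

# Angle between J_W and the val; clamp the cosine to guard against rounding.
theta = math.acos(min(1.0, dot / (norm_j * norm_v)))
error_cents = 1200 * math.sin(theta)  # TE error in cents
```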
== TE simple badness ==
The '''TE simple badness''' of a temperament, which we may also call the '''relative error''' of a temperament, may be considered error relativized to the complexity of the temperament. It is error proportional to the complexity, or size, of the multival; in particular, for a 1-val, it is the (weighted) error relative to the size of a step.
Gene Ward Smith defines the simple badness of ''M'' as {{nowrap|‖''J''<sub>''W''</sub> ∧ ''M''‖<sub>RMS</sub>}}, where {{nowrap|''J''<sub>''W''</sub> {{=}} {{val| 1 1 … 1 }}}} is the JIP in weighted coordinates. Once again, if we have a list of vectors we may use a Gramian to compute it. First we note that {{nowrap|''a''<sub>''i''</sub> {{=}} ''J''<sub>''W''</sub>·('''v'''<sub>''w''</sub>)<sub>''i''</sub>/''n''}} is the mean value of the entries of ('''v'''<sub>''w''</sub>)<sub>''i''</sub>. Then note that {{nowrap|''J''<sub>''W''</sub> ∧ (('''v'''<sub>''w''</sub>)<sub>1</sub> − ''a''<sub>1</sub>''J''<sub>''W''</sub>) ∧ (('''v'''<sub>''w''</sub>)<sub>2</sub> − ''a''<sub>2</sub>''J''<sub>''W''</sub>) ∧ … ∧ (('''v'''<sub>''w''</sub>)<sub>''r''</sub> − ''a''<sub>''r''</sub>''J''<sub>''W''</sub>) {{=}} ''J''<sub>''W''</sub> ∧ ('''v'''<sub>''w''</sub>)<sub>1</sub> ∧ ('''v'''<sub>''w''</sub>)<sub>2</sub> ∧ … ∧ ('''v'''<sub>''w''</sub>)<sub>''r''</sub>}}, since wedge products with more than one factor of ''J''<sub>''W''</sub> are zero. The Gram matrix of the vectors ''J''<sub>''W''</sub> and {{nowrap|('''v'''<sub>''w''</sub>)<sub>''i''</sub> − ''a''<sub>''i''</sub>''J''<sub>''W''</sub>}} will have ''n'' as the {{nowrap|(1, 1)}} entry, and 0's in the rest of the first row and column. Hence we obtain:
$$ \norm{ J_W \wedge M }'_\text{RMS} = \sqrt{\frac{n}{C(n, r + 1)} \abs{(\vec{v_w})_i \cdot (\vec{v_w})_j - n a_i a_j}} $$
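The Gramian formula above can be sketched in pure Python. The recursive determinant and the choice of 12edo in the 5-limit are assumptions of this sketch; the matrix entries are exactly the {{nowrap|('''v'''<sub>''w''</sub>)<sub>''i''</sub>·('''v'''<sub>''w''</sub>)<sub>''j''</sub> − ''na''<sub>''i''</sub>''a''<sub>''j''</sub>}} of the formula.

```python
import math

def det(m):
    # Laplace expansion along the first row; adequate for small matrices.
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def simple_badness_gramian(vals, primes):
    # Entries (v_w)_i . (v_w)_j - n a_i a_j, where a_i is the mean of the
    # entries of (v_w)_i; the leading factor n is the (1, 1) entry of the
    # full Gram matrix, and C(n, r + 1) is the RMS normalization.
    n, r = len(primes), len(vals)
    vw = [[x / math.log2(p) for x, p in zip(v, primes)] for v in vals]
    a = [sum(row) / n for row in vw]
    g = [[sum(x * y for x, y in zip(vw[i], vw[j])) - n * a[i] * a[j]
          for j in range(r)] for i in range(r)]
    return math.sqrt(n * det(g) / math.comb(n, r + 1))

# Assumed example: the rank-1 temperament given by the 12edo patent val.
badness_12 = simple_badness_gramian([[12, 19, 28]], [2, 3, 5])
```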
A perhaps simpler way to view this is to start with a mapping matrix ''V''<sub>''W''</sub> and add an extra row ''J''<sub>''W''</sub> corresponding to the JIP; we will label this matrix ''V''<sub>''J''</sub>. Then the simple badness is:
$$ \norm{ J_W \wedge M }'_\text{RMS} = \sqrt{\frac{\abs{V_J V_J^\mathsf{T}}}{C(n, r + 1)}} $$
Thus we can view the simple badness as the TE complexity of the "pseudo-temperament" formed by adding the JIP to the mapping matrix as if it were another val.
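The ''V''<sub>''J''</sub> view translates directly into code: append the weighted JIP as a row and take the Gram determinant of the augmented matrix. This is a sketch under assumptions (pure-Python determinant; the 5-limit meantone mapping [{{val| 1 1 0 }}, {{val| 0 1 4 }}] is an example chosen here, not drawn from the text).

```python
import math

def det(m):
    # Laplace expansion; fine for the small matrices used here.
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def simple_badness_vj(vals, primes):
    # Append the weighted JIP <1 1 ... 1] as an extra row of the weighted
    # mapping, then evaluate sqrt(|V_J V_J^T| / C(n, r + 1)).
    n, r = len(primes), len(vals)
    vj = [[1.0] * n] + [[x / math.log2(p) for x, p in zip(v, primes)]
                        for v in vals]
    gram = [[sum(a * b for a, b in zip(u, w)) for w in vj] for u in vj]
    return math.sqrt(det(gram) / math.comb(n, r + 1))

# Assumed example: 5-limit meantone, mapping [<1 1 0], <0 1 4]].
badness_meantone = simple_badness_vj([[1, 1, 0], [0, 1, 4]], [2, 3, 5])
```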
Graham Breed defines the simple badness slightly differently, again equivalent up to a choice of scaling. That definition is skipped here because, under it, it is easier to find the TE complexity and TE error first and multiply them together to get the simple badness.
=== Reduction to the span of a comma ===
It is notable that if ''M'' has codimension 1, we may view it as representing [[the dual]] of a single comma. In this situation, the simple badness reduces to the [[Interval span|span]] of the comma, up to a constant multiplicative factor, so that the span of any comma can itself be thought of as measuring the error of the temperament vanishing that comma, scaled by its complexity.
This relationship also holds if TOP is used rather than TE: the TOP damage associated with tempering out some comma ''n''/''d'' is log(''n''/''d'')/log(''nd''), and if we multiply by the complexity log(''nd''), we simply get log(''n''/''d'') as our result.
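The TOP arithmetic can be checked directly. The syntonic comma 81/80 is an assumed example for this sketch (it is the comma tempered out by meantone):

```python
import math

# Assumed example: the syntonic comma 81/80.
n, d = 81, 80

damage = math.log2(n / d) / math.log2(n * d)  # TOP damage of tempering out n/d
complexity = math.log2(n * d)                 # Tenney-height complexity
span = math.log2(n / d)                       # span of the comma

recovered = damage * complexity  # damage times complexity recovers the span
```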
=== TE logflat badness ===
Some consider the simple badness to be a sort of badness which favors complex temperaments too heavily. The '''logflat badness''' was developed to address that. If we define S(''A'') to be the simple badness (relative error) of a temperament ''A'', and C(''A'') to be the complexity of ''A'', then the logflat badness is defined by the formula
<math>\displaystyle
S(A)C(A)^{r/(n - r)} = \norm{ J_W \wedge M } \norm{M}^{r/(n - r)}
</math>
If we set a cutoff margin for logflat badness, infinitely many new temperaments still appear as complexity goes up, but at a lower rate which is approximately logarithmic in terms of complexity.
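Putting the pieces together, logflat badness can be sketched as simple badness times complexity raised to ''r''/(''n'' − ''r''). This sketch assumes RMS-normalized norms throughout (one choice of scaling) and reuses the 5-limit meantone mapping [{{val| 1 1 0 }}, {{val| 0 1 4 }}] as an assumed example.

```python
import math

def det(m):
    # Laplace expansion along the first row.
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def rms_norm(vecs, n):
    # RMS norm of the wedge of vecs: sqrt of the Gram determinant over
    # C(n, grade), where the grade is the number of vectors wedged.
    gram = [[sum(a * b for a, b in zip(u, w)) for w in vecs] for u in vecs]
    return math.sqrt(det(gram) / math.comb(n, len(vecs)))

def logflat_badness(vals, primes):
    # S(A) C(A)^{r/(n-r)}: simple badness times complexity^(r/(n-r)).
    n, r = len(primes), len(vals)
    vw = [[x / math.log2(p) for x, p in zip(v, primes)] for v in vals]
    complexity = rms_norm(vw, n)                 # TE complexity ||M||
    simple = rms_norm([[1.0] * n] + vw, n)       # ||J_W ^ M|| with JIP row
    return simple * complexity ** (r / (n - r))

# Assumed example: 5-limit meantone, mapping [<1 1 0], <0 1 4]].
lf_meantone = logflat_badness([[1, 1, 0], [0, 1, 4]], [2, 3, 5])
```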
== Examples ==