Tenney–Euclidean temperament measures: Difference between revisions
ArrowHead294 (talk | contribs) m (→TE error: Space out a little. Write the weighted variables explicitly with subscript W, and get rid of A in favor of V accordingly. Some misc. cleanup.)
{{Texops}}
The '''Tenney–Euclidean temperament measures''' ('''TE temperament measures''') consist of TE complexity, TE error, and TE simple badness. These are evaluations of a temperament's [[complexity]], [[error]], and [[badness]], respectively, and they follow the identity
<math>\displaystyle
\text{TE simple badness} = \text{TE complexity} \times \text{TE error}
</math>
There have been several minor variations in the definition of TE temperament measures, which differ from each other only in their choice of multiplicative scaling factor. Each of these variations will be discussed below.
TE temperament measures have been extensively studied by [[Graham Breed]] (see [http://x31eq.com/temper/primerr.pdf ''Prime Based Error and Complexity Measures''], often referred to as ''primerr.pdf''), who also proposed [[Cangwu badness]], an important derived measure, which adds a free parameter to TE simple badness that enables one to specify a tradeoff between complexity and error.
== Note on scaling factors ==
Given a [[wedgies and multivals|multival]] or multimonzo which is a {{w|wedge product}} of weighted vals or monzos (where the weighting factors are 1/log<sub>2</sub>(''p'') for the entry corresponding to ''p''), we may define a norm by means of the usual {{w|norm (mathematics)#Euclidean norm|Euclidean norm}} (aka ''L''<sup>2</sup> norm or ℓ<sub>2</sub> norm). We can rescale this in several ways, for example by taking a {{w|root mean square}} (RMS) average of the entries of the multivector. These metrics are mainly used to rank temperaments relative to one another. In that regard, it does not matter much whether an RMS or an ''L''<sup>2</sup> norm is used, because the two are equivalent up to a scaling factor, so they rank temperaments identically. As a result, it is somewhat common to equivocate between the various choices of scaling factor and treat the entire thing as "the" Tenney–Euclidean norm, so that we are really only concerned with the results of these metrics up to that equivalence.
Because of this, there are different "standards" for scaling that are commonly in use:
Below are various definitions of TE complexity. All of them can be computed easily, either from the multivector or from the mapping matrix, using the {{w|Gramian matrix|Gramian}}.
Let us define the val weighting matrix ''W'' to be the {{w|diagonal matrix}} with values 1, 1/log<sub>2</sub>3, 1/log<sub>2</sub>5, …, 1/log<sub>2</sub>''p'' along the diagonal. For the prime basis {{nowrap|''Q'' {{=}} {{val| 2 3 5 … ''p'' }}}},
<math>\displaystyle
W = \operatorname{diag} (1 / \log_2 (Q))
</math>
If ''V'' is the mapping matrix of a temperament, then {{nowrap|''V''<sub>''W''</sub> {{=}} ''VW''}} is the mapping matrix in the weighted space, its rows being the weighted vals ('''v'''<sub>''w''</sub>)<sub>''i''</sub>.
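As a minimal sketch of these definitions in Python with NumPy, assuming 5-limit meantone with mapping rows {{val| 1 1 0 }} and {{val| 0 1 4 }} as an example (the example temperament is an assumption for illustration, not part of the definitions):

```python
import numpy as np

# Assumed example: 5-limit meantone, mapping rows <1 1 0] and <0 1 4].
V = np.array([[1, 1, 0],
              [0, 1, 4]], dtype=float)

Q = np.array([2, 3, 5], dtype=float)  # prime basis
W = np.diag(1 / np.log2(Q))           # val weighting matrix, diag(1/log2(Q))
V_W = V @ W                           # mapping matrix in the weighted space
```

Each row of <code>V_W</code> is a weighted val; the first weight is exactly 1 since log<sub>2</sub>(2) = 1.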
The ''L''<sup>2</sup> norm is one of the standard complexity measures:
<math>\displaystyle
\norm{M}_2 = \sqrt{\abs{V_W V_W^\mathsf{T}}}
</math>
where {{!}}''A''{{!}} denotes the determinant of ''A'', and ''A''{{t}} denotes the transpose of ''A''.
We denote the RMS norm as ‖''M''‖<sub>RMS</sub>. In Graham Breed's paper, an RMS norm is proposed as
<math>\displaystyle
\norm{M}_\text{RMS} = \sqrt{\abs{\frac{V_W V_W^\mathsf{T}}{n}}} = \frac{\norm{M}_2}{\sqrt{n^r}}
</math>
where ''n'' is the number of primes up to the prime limit ''p'', and ''r'' is the rank of the temperament, which equals the number of vals wedged together to compute the wedgie.
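Both norms can be computed from the Gramian; a sketch, again assuming 5-limit meantone as the example:

```python
import numpy as np

# Assumed example: 5-limit meantone in weighted coordinates.
V = np.array([[1, 1, 0], [0, 1, 4]], dtype=float)
W = np.diag(1 / np.log2([2, 3, 5]))
V_W = V @ W

n, r = V.shape[1], V.shape[0]                   # number of primes, rank
l2_norm = np.sqrt(np.linalg.det(V_W @ V_W.T))   # L2 norm ||M||_2
rms_norm = l2_norm / np.sqrt(n ** r)            # Breed's RMS: ||M||_2 / sqrt(n^r)
```

Dividing the Gram matrix by ''n'' before taking the determinant gives the same result, since the determinant of an ''r'' × ''r'' matrix scales by ''n''<sup>''r''</sup>.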
[[Gene Ward Smith]] has recognized that TE complexity can be interpreted as the RMS norm of the wedgie. That defines another RMS norm,
<math>\displaystyle
\norm{M}_\text{RMS}' = \sqrt{\frac{\abs{V_W V_W^\mathsf{T}}}{C(n, r)}} = \frac{\norm{M}_2}{\sqrt{C(n, r)}}
</math>
where {{nowrap|''C''(''n'', ''r'')}} is the number of combinations of ''n'' things taken ''r'' at a time, which equals the number of entries of the wedgie.
: '''Note''': this is the definition currently used throughout the wiki, unless stated otherwise.
We may also note {{nowrap|{{!}}''V''<sub>''W''</sub>''V''<sub>''W''</sub>{{t}}{{!}} {{=}} {{!}}''VW''<sup>2</sup>''V''{{t}}{{!}}}}. This may be related to the [[Tenney–Euclidean metrics|TE tuning projection matrix]] ''P''<sub>''W''</sub>, which is ''V''<sub>''W''</sub>{{t}}(''V''<sub>''W''</sub>''V''<sub>''W''</sub>{{t}}){{inv}}''V''<sub>''W''</sub>, and the corresponding matrix for unweighted monzos {{nowrap|''P'' {{=}} ''V''{{t}}(''VW''<sup>2</sup>''V''{{t}}){{inv}}''V''}}.
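A sketch of this version of the norm, together with a numerical check of the identity {{nowrap|{{!}}''V''<sub>''W''</sub>''V''<sub>''W''</sub>{{t}}{{!}} {{=}} {{!}}''VW''<sup>2</sup>''V''{{t}}{{!}}}} (5-limit meantone is an assumed example):

```python
import numpy as np
from math import comb

# Assumed example: 5-limit meantone.
V = np.array([[1, 1, 0], [0, 1, 4]], dtype=float)
W = np.diag(1 / np.log2([2, 3, 5]))
V_W = V @ W
n, r = 3, 2

gram = np.linalg.det(V_W @ V_W.T)        # |V_W V_W^T|, equal to |V W^2 V^T|
complexity = np.sqrt(gram / comb(n, r))  # RMS norm of the wedgie, ||M||'_RMS
```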
== TE simple badness ==
The '''TE simple badness''' of a temperament, also called its '''relative error''', may be considered error relativized to the complexity of the temperament. It is error proportional to the complexity, or size, of the multival; in particular for a 1-val, it is (weighted) error compared to the size of a step.
Gene Ward Smith defines the simple badness of ''M'' as {{nowrap|‖''J''<sub>''W''</sub> ∧ ''M''‖<sub>RMS</sub>}}, where {{nowrap|''J''<sub>''W''</sub> {{=}} {{val| 1 1 … 1 }}}} is the JIP in weighted coordinates. Once again, if we have a list of vectors we may use a Gramian to compute it. First we note that {{nowrap|''a''<sub>''i''</sub> {{=}} ''J''<sub>''W''</sub>·('''v'''<sub>''w''</sub>)<sub>''i''</sub>/''n''}} is the mean value of the entries of ('''v'''<sub>''w''</sub>)<sub>''i''</sub>. Then note that {{nowrap|''J''<sub>''W''</sub> ∧ (('''v'''<sub>''w''</sub>)<sub>1</sub> − ''a''<sub>1</sub>''J''<sub>''W''</sub>) ∧ (('''v'''<sub>''w''</sub>)<sub>2</sub> − ''a''<sub>2</sub>''J''<sub>''W''</sub>) ∧ … ∧ (('''v'''<sub>''w''</sub>)<sub>''r''</sub> − ''a''<sub>''r''</sub>''J''<sub>''W''</sub>) {{=}} ''J''<sub>''W''</sub> ∧ ('''v'''<sub>''w''</sub>)<sub>1</sub> ∧ ('''v'''<sub>''w''</sub>)<sub>2</sub> ∧ … ∧ ('''v'''<sub>''w''</sub>)<sub>''r''</sub>}}, since wedge products with more than one term ''J''<sub>''W''</sub> are zero.

The Gram matrix of the vectors ''J''<sub>''W''</sub> and {{nowrap|('''v'''<sub>''w''</sub>)<sub>''i''</sub> − ''a''<sub>''i''</sub>''J''<sub>''W''</sub>}} will have ''n'' as the {{nowrap|(1, 1)}} entry, and 0's in the rest of the first row and column. Hence we obtain:
<math>\displaystyle
\norm{J_W \wedge M}'_\text{RMS} = \sqrt{\frac{n}{C(n, r + 1)} \abs{(\vec{v_w})_i \cdot (\vec{v_w})_j - n a_i a_j}}
</math>
A perhaps simpler way to view this is to start with a mapping matrix ''V''<sub>''W''</sub> and add an extra row ''J''<sub>''W''</sub> corresponding to the JIP; we will label this matrix ''V''<sub>''J''</sub>. Then the simple badness is:
<math>\displaystyle
\norm{J_W \wedge M}'_\text{RMS} = \sqrt{\frac{\abs{V_J V_J^\mathsf{T}}}{C(n, r + 1)}}
</math>
Thus we can view the simple badness as the TE complexity of the "pseudo-temperament" formed by adding the JIP to the mapping matrix as if it were another val.
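A sketch of this computation, stacking the weighted JIP on top of the weighted mapping (5-limit meantone is an assumed example):

```python
import numpy as np
from math import comb

# Assumed example: 5-limit meantone.
V = np.array([[1, 1, 0], [0, 1, 4]], dtype=float)
W = np.diag(1 / np.log2([2, 3, 5]))
V_W = V @ W
n, r = 3, 2

J_W = np.ones(n)             # JIP in weighted coordinates, <1 1 ... 1]
V_J = np.vstack([J_W, V_W])  # "pseudo-temperament": JIP added as an extra val
badness = np.sqrt(np.linalg.det(V_J @ V_J.T) / comb(n, r + 1))
```

The Gramian {{!}}''V''<sub>''J''</sub>''V''<sub>''J''</sub>{{t}}{{!}} equals ''n'' times the determinant of the matrix with entries {{nowrap|('''v'''<sub>''w''</sub>)<sub>''i''</sub>·('''v'''<sub>''w''</sub>)<sub>''j''</sub> − ''na''<sub>''i''</sub>''a''<sub>''j''</sub>}}, matching the Gram-matrix derivation above.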
It is notable that if ''M'' is codimension-1, we may view it as representing [[the dual]] of a single comma. In this situation, the simple badness happens to reduce to the [[Interval span|span]] of the comma, up to a constant multiplicative factor, so that the span of any comma can itself be thought of as measuring the complexity relative to the error of the temperament vanishing that comma.
This relationship also holds if TOP is used rather than TE, as the TOP damage associated with tempering out some comma ''n''/''d'' is log(''n''/''d'')/(''nd''), and if we multiply by the complexity ''nd'', we simply get log(''n''/''d'') as our result.
=== TE logflat badness ===
Some consider the simple badness to be a sort of badness which heavily favors complex temperaments. The '''logflat badness''' is developed to address that. If we define S(''A'') to be the simple badness (relative error) of a temperament ''A'', and C(''A'') to be the complexity of ''A'', then the logflat badness is defined by the formula
<math>\displaystyle
S(A)C(A)^{r/(n - r)} \\
= \norm{J_W \wedge M} \norm{M}^{r/(n - r)}
</math>
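A sketch of the formula with the RMS′ scaling used for both factors (5-limit meantone is an assumed example; for {{nowrap|''n'' {{=}} 3}}, {{nowrap|''r'' {{=}} 2}} the exponent ''r''/(''n'' − ''r'') is 2):

```python
import numpy as np
from math import comb

# Assumed example: 5-limit meantone.
V = np.array([[1, 1, 0], [0, 1, 4]], dtype=float)
W = np.diag(1 / np.log2([2, 3, 5]))
V_W = V @ W
n, r = 3, 2

complexity = np.sqrt(np.linalg.det(V_W @ V_W.T) / comb(n, r))
V_J = np.vstack([np.ones(n), V_W])
badness = np.sqrt(np.linalg.det(V_J @ V_J.T) / comb(n, r + 1))
logflat = badness * complexity ** (r / (n - r))  # S(A) C(A)^(r/(n-r))
```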
We can consider '''TE error''' to be a weighted average of the errors of the prime harmonics in TE tuning. Multiplying it by 1200, we get a figure in cents.
By Graham Breed's definition, TE error may be accessed via the [[Tenney–Euclidean tuning|TE tuning map]]. If ''T''<sub>''W''</sub> is the Tenney-weighted tuning map, then the TE error ''G'' can be found by
<math>\displaystyle
\begin{align}
G &= \norm{T_W - J_W}_\text{RMS} \\
&= \norm{J_W (V_W^+ V_W - I)}_\text{RMS} \\
&= \sqrt{J_W (V_W^+ V_W - I)(V_W^+ V_W - I)^\mathsf{T} J_W^\mathsf{T} / n}
\end{align}
</math>
If ''T''<sub>''W''</sub> is denominated in cents, then ''J''<sub>''W''</sub> should be also, so that {{nowrap|''J''<sub>''W''</sub> {{=}} {{val| 1200 1200 … 1200 }}}}. Here {{nowrap|''T''<sub>''W''</sub> − ''J''<sub>''W''</sub>}} is the list of weighted mistunings of each prime harmonic.
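A sketch of this computation using NumPy's pseudoinverse, denominated in cents (5-limit meantone is an assumed example):

```python
import numpy as np

# Assumed example: 5-limit meantone.
V = np.array([[1, 1, 0], [0, 1, 4]], dtype=float)
W = np.diag(1 / np.log2([2, 3, 5]))
V_W = V @ W
n = 3

J_W = np.full(n, 1200.0)                # weighted JIP, denominated in cents
T_W = J_W @ np.linalg.pinv(V_W) @ V_W   # weighted TE tuning map
G = np.sqrt(np.mean((T_W - J_W) ** 2))  # RMS of the weighted mistunings, in cents
```

Here <code>J_W @ np.linalg.pinv(V_W)</code> is the TE generator tuning, so <code>T_W</code> is its image under the weighted mapping.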
: '''Note''': this is the definition used by Graham Breed's temperament finder.
By Gene Ward Smith's definition, the TE error is derived from the relationship of TE simple badness and TE complexity. We denote this definition of TE error ''Ψ''.
From the ratio {{nowrap|(‖''J''<sub>''W''</sub> ∧ ''M''‖ / ‖''M''‖)<sup>2</sup>}} we obtain {{nowrap|{{sfrac|''C''(''n'', ''r'' + 1)|''n''⋅''C''(''n'', ''r'')}} {{=}} {{sfrac|''n'' − ''r''|''n''(''r'' + 1)}}}}. If we take the ratio of this for rank 1 with this for rank ''r'', the ''n'' cancels, and we get {{nowrap|{{sfrac|''n'' − 1|2}} · {{sfrac|''r'' + 1|''n'' − ''r''}} {{=}} {{sfrac|(''r'' + 1)(''n'' − 1)|2(''n'' − ''r'')}}}}. It follows that dividing TE error by the square root of this ratio gives a constant of proportionality such that if ''Ψ'' is the TE error of a rank-''r'' temperament then
<math>\displaystyle
\psi = \sqrt{\frac{2(n - r)}{(r + 1)(n - 1)}} \Psi
</math>
is an '''adjusted error''' which makes the error of a rank-''r'' temperament correspond to the errors of the edo vals which support it, so that requiring the edo val error to be less than {{nowrap|(1 + ''ε'')''ψ''}} for any positive ''ε'' results in an infinite set of vals supporting the temperament.
''Ψ'', ''ψ'', and ''G'' error can be related as follows:
<math>\displaystyle
G = \sqrt{\frac{n - 1}{2n}} \psi = \sqrt{\frac{n - r}{(r + 1)n}} \Psi
</math>
''G'' and ''ψ'' error both have the advantage that higher-rank temperament error corresponds directly to rank-1 error, but the RMS normalization has the further advantage that in the rank-1 case, {{nowrap|''G'' {{=}} sin ''θ''}}, where ''θ'' is the angle between ''J''<sub>''W''</sub> and the val in question. Multiplying by 1200 to obtain a result in cents leads to 1200 sin(''θ''), the TE error in cents.
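The stated relation between ''G'' and ''Ψ'' can be checked numerically; a sketch under the same assumed meantone example, with ''Ψ'' computed as simple badness over complexity in the RMS′ scaling:

```python
import numpy as np
from math import comb

# Assumed example: 5-limit meantone.
V = np.array([[1, 1, 0], [0, 1, 4]], dtype=float)
W = np.diag(1 / np.log2([2, 3, 5]))
V_W = V @ W
n, r = 3, 2

# Graham Breed's G: RMS mistuning of the weighted TE tuning map.
J_W = np.ones(n)
T_W = J_W @ np.linalg.pinv(V_W) @ V_W
G = np.sqrt(np.mean((T_W - J_W) ** 2))

# Gene Ward Smith's Psi: simple badness divided by complexity (RMS' scaling).
V_J = np.vstack([J_W, V_W])
badness = np.sqrt(np.linalg.det(V_J @ V_J.T) / comb(n, r + 1))
complexity = np.sqrt(np.linalg.det(V_W @ V_W.T) / comb(n, r))
Psi = badness / complexity
```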
== Examples ==