Tenney–Euclidean temperament measures
== TE complexity ==
Given a [[wedgie]] ''M'', that is, a canonically reduced ''r''-val corresponding to a temperament of rank ''r'', the norm ‖''M''‖ is a measure of the complexity of ''M'': roughly, how many notes, in some weighted-average sense, it takes to reach the temperament's intervals. For 1-vals, for instance, it is approximately equal to the number of scale steps it takes to reach an octave.
Let us define the val weighting matrix ''W'' to be the {{w|diagonal matrix}} with values 1, 1/log<sub>2</sub>3, 1/log<sub>2</sub>5 … 1/log<sub>2</sub>''p'' along the diagonal. For the prime basis {{nowrap|''Q'' {{=}} {{val| 2 3 5 … ''p'' }} }},
$$ W = \operatorname{diag} (1/\log_2 (Q)) $$
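To make the construction concrete, here is a minimal numeric sketch in Python with NumPy, specializing to the 5-limit prime basis {{nowrap|''Q'' {{=}} {{val| 2 3 5 }} }}; the variable names are illustrative only:

<syntaxhighlight lang="python">
import numpy as np

# 5-limit prime basis Q = <2 3 5]
Q = np.array([2, 3, 5])

# Val weighting matrix W = diag(1/log2(Q)) = diag(1, 1/log2(3), 1/log2(5))
W = np.diag(1 / np.log2(Q))
</syntaxhighlight>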
If ''V'' is the mapping matrix of a temperament, then ''V<sub>W</sub>'' {{=}} ''VW'' is the mapping matrix in the weighted space, its rows being the weighted vals (''v''<sub>''w''</sub>)<sub>''i''</sub>.
Our first complexity measure of a temperament is given by the ''L''<sup>2</sup> norm of the Tenney-weighted wedgie ''M''<sub>''W''</sub>, which can in turn be obtained from the Tenney-weighted mapping matrix ''V''<sub>''W''</sub>. This complexity can be easily computed either from the wedgie or from the mapping matrix, using the {{w|Gramian matrix|Gramian}}.
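For instance, continuing the sketch above with the familiar 5-limit meantone mapping {{nowrap|[{{val| 1 1 0 }}, {{val| 0 1 4 }}]}}, the Gram-determinant computation of ‖''M''<sub>''W''</sub>‖ looks like this (shown unnormalized; conventions differ on scaling factors):

<syntaxhighlight lang="python">
# 5-limit meantone mapping: rows are the vals <1 1 0] and <0 1 4]
V = np.array([[1, 1, 0],
              [0, 1, 4]])
V_W = V @ W  # Tenney-weighted mapping matrix

# Unnormalized L2 complexity: sqrt of the Gram determinant det(V_W V_W^T)
complexity = np.sqrt(np.linalg.det(V_W @ V_W.T))
</syntaxhighlight>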
== TE error ==
We can consider TE error to be a weighted average of the errors of the [[prime harmonic]]s in [[TE tuning]], that is, a weighted average of the [[error map]] in the tuning where it is minimized. In this regard, TE error may be expressed in any logarithmic [[interval size unit]], such as [[cent]]s or [[octave]]s.
As with complexity, we may simply define the TE error as the ''L''<sup>2</sup> norm of the weighted TE error map. If {{nowrap| ''T''<sub>''W''</sub> {{=}} ''TW'' }} is the weighted TE tuning map and {{nowrap| ''J''<sub>''W''</sub> {{=}} ''JW'' {{=}} {{val| 1 1 … 1 }} }} is the weighted just tuning map, then the TE error ''E'' is given by
$$
\begin{align}
E &= \norm{T_W - J_W}_2 \\
&= \norm{J_W(V_W^+ V_W - I)}_2 \\
&= \sqrt{J_W(V_W^+ V_W - I)(V_W^+ V_W - I)^\mathsf{T} J_W^\mathsf{T}}
\end{align}
$$
where <sup>+</sup> denotes the [[pseudoinverse]].
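A minimal sketch of this computation, continuing the example above; since ''J''<sub>''W''</sub> is measured in octaves, the result is in octaves (multiply by 1200 for cents):

<syntaxhighlight lang="python">
n = len(Q)
J_W = np.ones(n)  # weighted just tuning map <1 1 1]

# V_W^+ V_W is the orthogonal projection onto the row space of V_W
P = np.linalg.pinv(V_W) @ V_W

# TE error E = ||J_W (V_W^+ V_W - I)||_2, in octaves
E = np.linalg.norm(J_W @ (P - np.eye(n)))
</syntaxhighlight>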
Often, it is desirable to know the average of the errors (in the root-mean-square sense) rather than their sum, which corresponds to Graham Breed's definition<ref name="primerr"/>. This error figure, ''G'', can be found by
$$
\begin{align}
G &= \norm{T_W - J_W}_\text{RMS} \\
&= E / \sqrt{n}
\end{align}
$$
: '''Note''': this is the definition used by Graham Breed's temperament finder.
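In the running sketch, this is a one-line rescaling:

<syntaxhighlight lang="python">
# RMS-normalized error G = E / sqrt(n), per Graham Breed's definition
G = E / np.sqrt(n)
print(1200 * G)  # in cents
</syntaxhighlight>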
Gene Ward Smith derives TE error from the relationship between TE simple badness and TE complexity (see the next section). We denote this definition of TE error ''Ψ''.
From the ratio {{nowrap|(‖''J''<sub>''W''</sub> ∧ ''M''<sub>''W''</sub>‖ / ‖''M''<sub>''W''</sub>‖)<sup>2</sup>}} we obtain {{nowrap|{{sfrac|''C''(''n'', ''r'' + 1)|''n''⋅''C''(''n'', ''r'')}} {{=}} {{sfrac|''n'' − ''r''|''n''(''r'' + 1)}}}}. If we take the ratio of this for rank 1 with this for rank ''r'', the ''n'' cancels, and we get {{nowrap|{{sfrac|''n'' − 1|2}} · {{sfrac|''r'' + 1|''n'' − ''r''}} {{=}} {{sfrac|(''r'' + 1)(''n'' − 1)|2(''n'' − ''r'')}}}}. It follows that dividing TE error by the square root of this ratio gives a constant of proportionality such that if ''Ψ'' is the TE error of a rank-''r'' temperament then
$$ \psi = \sqrt{\frac{2(n - r)}{(r + 1)(n - 1)}} \Psi $$
is an '''adjusted error''' which makes the error of a rank-''r'' temperament correspond to the errors of the edo vals which support it, so that requiring the edo val error to be less than {{nowrap|(1 + ''ε'')''ψ''}} for any positive ''ε'' results in an infinite set of vals supporting the temperament.
To express ''Ψ'' and ''ψ'' in terms of ''E'':
$$ \Psi = \sqrt{\frac{r + 1}{n - r}} E, \ \psi = \sqrt{\frac{2}{n - 1}} E $$
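Continuing the sketch, both figures follow directly from ''E'' (here {{nowrap|''r'' {{=}} 2}} and {{nowrap|''n'' {{=}} 3}} for 5-limit meantone):

<syntaxhighlight lang="python">
r = V.shape[0]  # rank of the temperament

Psi = np.sqrt((r + 1) / (n - r)) * E  # Gene Ward Smith's TE error
psi = np.sqrt(2 / (n - 1)) * E        # adjusted error
</syntaxhighlight>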
''G'' and ''ψ'' error both have the advantage that higher-rank temperament error corresponds directly to rank-1 error, but the RMS normalization has the further advantage that in the rank-1 case, {{nowrap|''G'' {{=}} sin ''θ''}}, where ''θ'' is the angle between ''J''<sub>''W''</sub> and the val in question. Multiplying by 1200 to obtain a result in cents leads to 1200 sin(''θ''), the TE error in cents.
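This identity is easy to check numerically; a sketch for the patent val of 12edo, assuming the setup above:

<syntaxhighlight lang="python">
v = np.array([12, 19, 28])  # 5-limit patent val of 12edo
v_W = v @ W

# Angle between J_W and the weighted val; G = sin(theta) in the rank-1 case
cos_theta = (J_W @ v_W) / (np.linalg.norm(J_W) * np.linalg.norm(v_W))
print(1200 * np.sin(np.arccos(cos_theta)))  # TE error of 12edo in cents
</syntaxhighlight>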
$$ B = C \cdot E $$
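In the running sketch this is simply the product of the two figures computed earlier; note that the absolute scale depends on which normalizations of ''C'' and ''E'' are chosen:

<syntaxhighlight lang="python">
# Simple badness as the complexity-error product
B = complexity * E
</syntaxhighlight>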
Gene Ward Smith defines the simple badness of ''M'' as {{nowrap|‖''J''<sub>''W''</sub> ∧ ''M''<sub>''W''</sub>‖<sub>RMS</sub>}}, where ''J''<sub>''W''</sub> is the JIP in weighted coordinates. Once again, if we have a list of vectors we may use a Gramian to compute it. First we note that {{nowrap|''a''<sub>''i''</sub> {{=}} ''J''<sub>''W''</sub>·(''v''<sub>''w''</sub>)<sub>''i''</sub>/''n''}} is the mean value of the entries of (''v''<sub>''w''</sub>)<sub>''i''</sub>. Then note that {{nowrap|''J''<sub>''W''</sub> ∧ ((''v''<sub>''w''</sub>)<sub>1</sub> − ''a''<sub>1</sub>''J''<sub>''W''</sub>) ∧ ((''v''<sub>''w''</sub>)<sub>2</sub> − ''a''<sub>2</sub>''J''<sub>''W''</sub>) ∧ … ∧ ((''v''<sub>''w''</sub>)<sub>''r''</sub> − ''a''<sub>''r''</sub>''J''<sub>''W''</sub>) {{=}} ''J''<sub>''W''</sub> ∧ (''v''<sub>''w''</sub>)<sub>1</sub> ∧ (''v''<sub>''w''</sub>)<sub>2</sub> ∧ … ∧ (''v''<sub>''w''</sub>)<sub>''r''</sub>}}, since wedge products containing ''J''<sub>''W''</sub> more than once are zero. The Gram matrix of the vectors ''J''<sub>''W''</sub> and {{nowrap|(''v''<sub>''w''</sub>)<sub>''i''</sub> − ''a''<sub>''i''</sub>''J''<sub>''W''</sub>}} will have ''n'' as the {{nowrap|(1, 1)}} entry, and 0s in the rest of the first row and column. Hence we obtain:
$$ \norm{ J_W \wedge M_W }'_\text{RMS} = \sqrt{\frac{n}{C(n, r + 1)} \det \left( (v_w)_i \cdot (v_w)_j - n a_i a_j \right)} $$
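A sketch of this Gramian computation, continuing the example above (here {{nowrap|''C''(''n'', ''r'' + 1)}} is computed with <code>math.comb</code>):

<syntaxhighlight lang="python">
from math import comb

a = (V_W @ J_W) / n  # mean entry of each weighted val, a_i = J_W.(v_w)_i / n
g = V_W @ V_W.T - n * np.outer(a, a)  # Gram matrix of (v_w)_i - a_i J_W

badness_rms = np.sqrt(n / comb(n, r + 1) * np.linalg.det(g))
</syntaxhighlight>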
A perhaps simpler way to view this is to start with a mapping matrix ''V''<sub>''W''</sub> and add an extra row ''J''<sub>''W''</sub> corresponding to the JIP; we will label this matrix ''V''<sub>''J''</sub>. Then the simple badness is: