Tenney–Euclidean temperament measures
== TE complexity ==
Given a [[wedgie]] ''M'', that is, a canonically reduced ''r''-val corresponding to a temperament of rank ''r'', the norm ‖''M''‖ is a measure of the [[complexity]] of ''M''; that is, of how many notes, in some suitably weighted average, it takes to reach intervals. For 1-vals, for instance, it is approximately equal to the number of scale steps it takes to reach an octave. We may call it '''Tenney–Euclidean complexity''', or '''TE complexity''', since it can be defined in terms of the [[Tenney–Euclidean metrics|Tenney–Euclidean norm]].


Let us define the val weighting matrix ''W'' to be the {{w|diagonal matrix}} with values 1, 1/log<sub>2</sub>3, 1/log<sub>2</sub>5 … 1/log<sub>2</sub>''p'' along the diagonal. For the prime basis {{nowrap|''Q'' {{=}} {{val| 2 3 5 … ''p'' }} }},
$$ W = \operatorname {diag} (1/\log_2 (Q)) $$
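As a concrete illustration, the weighting matrix can be built directly from this definition. The following is a minimal sketch; the choice of the 5-limit prime basis is only for the example:

```python
import math

# Prime basis Q = <2 3 5]; any prime limit works the same way.
primes = [2, 3, 5]
n = len(primes)

# W = diag(1/log2(q)): octave-weighted, so the entry for prime 2 is exactly 1.
W = [[1.0 / math.log2(q) if i == j else 0.0 for j, q in enumerate(primes)]
     for i in range(n)]

diagonal = [W[i][i] for i in range(n)]  # 1, 1/log2(3), 1/log2(5)
```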


If ''V'' is the mapping matrix of a temperament, then ''V<sub>W</sub>'' {{=}} ''VW'' is the mapping matrix in the weighted space, its rows being the weighted vals ('''v'''<sub>''w''</sub>)<sub>''i''</sub>.


Our first complexity measure of a temperament is given by the ''L''<sup>2</sup> norm of the Tenney-weighted wedgie ''M''<sub>''W''</sub>, which can in turn be obtained from the Tenney-weighted mapping matrix ''V''<sub>''W''</sub>. This complexity can be easily computed either from the wedgie or from the mapping matrix, using the {{w|Gramian matrix|Gramian}}:


$$ \norm{M_W}_2 = \sqrt {\abs{V_W V_W^\mathsf{T}}} $$


where {{!}}''A''{{!}} denotes the determinant of ''A'', and ''A''{{t}} denotes the transpose of ''A''.
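The Gramian computation can be sketched in a few lines of Python. The 5-limit meantone mapping used below is only an illustrative choice, not something taken from the text:

```python
import math

def weighted_mapping(V, primes):
    """Scale each column j of the mapping V by 1/log2(primes[j])."""
    return [[x / math.log2(q) for x, q in zip(row, primes)] for row in V]

def gram(M):
    """Gram matrix M M^T of the rows of M."""
    return [[sum(a * b for a, b in zip(r1, r2)) for r2 in M] for r1 in M]

def det(A):
    """Determinant by cofactor expansion (fine for small matrices)."""
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det([row[:j] + row[j + 1:] for row in A[1:]])
               for j in range(len(A)))

# Illustrative example: a 5-limit meantone mapping (rows are vals).
V = [[1, 1, 0],
     [0, 1, 4]]
V_W = weighted_mapping(V, [2, 3, 5])

te_complexity = math.sqrt(det(gram(V_W)))  # ‖M_W‖₂, about 2.13 here
```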


We denote the RMS norm of ''M'' as ‖''M''‖<sub>RMS</sub>. In Graham Breed's paper<ref name="primerr">[http://x31eq.com/temper/primerr.pdf ''Prime Based Error and Complexity Measures''], often referred to as ''primerr.pdf''</ref>, an RMS norm is proposed as


$$ \norm{M_W}_\text{RMS} = \sqrt {\abs{\frac {V_W V_W^\mathsf{T}}{n}}} = \frac {\norm{M_W}_2}{\sqrt {n^r}} $$


where ''n'' is the number of primes up to the prime limit ''p'', and ''r'' is the rank of the temperament, which equals the number of vals wedged together to compute the wedgie.
[[Gene Ward Smith]] has recognized that TE complexity can be interpreted as the RMS norm of the wedgie. That defines another RMS norm,


$$ \norm{M_W}_\text{RMS}' = \sqrt {\frac{\abs{V_W V_W^\mathsf{T}}}{C(n, r)}} = \frac {\norm{M_W}_2}{\sqrt {C(n, r)}} $$


where {{nowrap|C(''n'', ''r'')}} is the number of combinations of ''n'' things taken ''r'' at a time, which equals the number of entries of the wedgie.
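Both normalizations are simple rescalings of the ''L''<sup>2</sup> norm, so they can be sketched together. The 5-limit meantone mapping here is again just an assumed example:

```python
import math

# Illustrative 5-limit meantone mapping, Tenney-weighted
# (each column scaled by 1/log2 of its prime).
V_W = [[v / math.log2(p) for v, p in zip(row, (2, 3, 5))]
       for row in ([1, 1, 0], [0, 1, 4])]

# 2x2 Gram determinant |V_W V_W^T|.
g = [[sum(a * b for a, b in zip(r1, r2)) for r2 in V_W] for r1 in V_W]
gram_det = g[0][0] * g[1][1] - g[0][1] * g[1][0]

n, r = 3, 2                               # number of primes; rank
l2 = math.sqrt(gram_det)                  # ‖M_W‖₂
rms = l2 / math.sqrt(n ** r)              # Breed's normalization
rms_gws = l2 / math.sqrt(math.comb(n, r)) # Smith's normalization, C(3, 2) = 3
```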
The '''TE simple badness''' of a temperament, which we may also call its '''relative error''', may be considered error relativized to the complexity of the temperament: it is error in proportion to the complexity, or size, of the multival. In particular, for a 1-val it is the (weighted) error compared to the size of a step.


Gene Ward Smith defines the simple badness of ''M'' as {{nowrap|‖''J''<sub>''W''</sub> ∧ ''M''<sub>''W''</sub>‖<sub>RMS</sub>}}, where {{nowrap|''J''<sub>''W''</sub> {{=}} {{val| 1 1 … 1 }}}} is the JIP in weighted coordinates. Once again, if we have a list of vectors we may use a Gramian to compute it. First we note that {{nowrap|''a''<sub>''i''</sub> {{=}} ''J''<sub>''W''</sub>·('''v'''<sub>''w''</sub>)<sub>''i''</sub>/''n''}} is the mean value of the entries of ('''v'''<sub>''w''</sub>)<sub>''i''</sub>. Then note that {{nowrap|''J''<sub>''W''</sub> ∧ (('''v'''<sub>''w''</sub>)<sub>1</sub> − ''a''<sub>1</sub>''J''<sub>''W''</sub>) ∧ (('''v'''<sub>''w''</sub>)<sub>2</sub> − ''a''<sub>2</sub>''J''<sub>''W''</sub>) ∧ … ∧ (('''v'''<sub>''w''</sub>)<sub>''r''</sub> − ''a''<sub>''r''</sub>''J''<sub>''W''</sub>) {{=}} ''J''<sub>''W''</sub> ∧ ('''v'''<sub>''w''</sub>)<sub>1</sub> ∧ ('''v'''<sub>''w''</sub>)<sub>2</sub> ∧ … ∧ ('''v'''<sub>''w''</sub>)<sub>''r''</sub>}}, since wedge products with more than one term ''J''<sub>''W''</sub> are zero. The Gram matrix of the vectors ''J''<sub>''W''</sub> and {{nowrap|('''v'''<sub>''w''</sub>)<sub>''i''</sub> − ''a''<sub>''i''</sub>''J''<sub>''W''</sub>}} will have ''n'' as the {{nowrap|(1, 1)}} entry, and 0's in the rest of the first row and column. Hence we obtain:


$$ \norm{ J_W \wedge M_W }'_\text {RMS} = \sqrt{\frac{n}{C(n, r + 1)} \abs{(\vec{v_w})_i \cdot (\vec{v_w})_j - n a_i a_j}} $$


A perhaps simpler way to view this is to start with a mapping matrix ''V''<sub>''W''</sub> and add an extra row ''J''<sub>''W''</sub> corresponding to the JIP; we will label this matrix ''V''<sub>''J''</sub>. Then the simple badness is:


$$ \norm{ J_W \wedge M_W }'_\text {RMS} = \sqrt{\frac{\abs{V_J V_J^\mathsf{T}}}{C(n, r + 1)}} $$


Thus we can view the simple badness as the TE complexity of the "pseudo-temperament" formed by adding the JIP to the mapping matrix as if it were another val.
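Under that reading, simple badness is computed exactly like TE complexity, only with the JIP appended as an extra row. This sketch reuses an illustrative 5-limit meantone mapping (an assumption, not taken from the text):

```python
import math

def det(A):
    """Determinant by cofactor expansion (fine for small matrices)."""
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det([row[:j] + row[j + 1:] for row in A[1:]])
               for j in range(len(A)))

primes = [2, 3, 5]
n, r = len(primes), 2

# Weighted 5-limit meantone mapping with the JIP J_W = <1 1 ... 1] appended.
V_W = [[v / math.log2(p) for v, p in zip(row, primes)]
       for row in ([1, 1, 0], [0, 1, 4])]
V_J = V_W + [[1.0] * n]

gram = [[sum(a * b for a, b in zip(r1, r2)) for r2 in V_J] for r1 in V_J]
# C(3, 3) = 1 here; the result is small, since meantone approximates
# 5-limit JI closely (J_W is nearly in the row space of V_W).
badness = math.sqrt(det(gram) / math.comb(n, r + 1))
```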
<math>\displaystyle
S(A)C(A)^{r/(n - r)} \\
= \norm{ J_W \wedge M_W } \norm{M_W}^{r/(n - r)}
</math>
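This quantity multiplies the simple badness by a power of the complexity. The following sketch evaluates it for an illustrative 5-limit meantone mapping, using plain ''L''<sup>2</sup> norms for both factors (an assumed normalization); with {{nowrap|''n'' {{=}} 3}} and {{nowrap|''r'' {{=}} 2}}, the exponent {{nowrap|''r''/(''n'' − ''r'')}} is 2:

```python
import math

def det(A):
    """Determinant by cofactor expansion (fine for small matrices)."""
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det([row[:j] + row[j + 1:] for row in A[1:]])
               for j in range(len(A)))

def gram_det(M):
    """|M M^T| for a list of row vectors M."""
    return det([[sum(a * b for a, b in zip(r1, r2)) for r2 in M] for r1 in M])

primes = [2, 3, 5]
n, r = len(primes), 2
V_W = [[v / math.log2(p) for v, p in zip(row, primes)]
       for row in ([1, 1, 0], [0, 1, 4])]  # illustrative 5-limit meantone

complexity = math.sqrt(gram_det(V_W))              # C(A) = ‖M_W‖₂
badness = math.sqrt(gram_det(V_W + [[1.0] * n]))   # S(A) = ‖J_W ∧ M_W‖₂
logflat = badness * complexity ** (r / (n - r))    # exponent r/(n-r) = 2 here
```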