Tenney–Euclidean temperament measures


== Introduction ==
Given a [[Wedgies and multivals|multival]] or multimonzo which is a {{w|Exterior algebra|wedge product}} of weighted vals or monzos (where the weighting factors are 1/log<sub>2</sub>(''p'') for the entry corresponding to ''p''), we may define a norm by means of the usual {{w|Norm (mathematics) #Euclidean norm|Euclidean norm}} (aka ''L''<sup>2</sup> norm or ℓ<sub>2</sub> norm). We can rescale this by taking the sum of squares of the entries of the multivector, dividing by the number of entries, and taking the square root. This will give a norm which is the RMS ({{w|Root mean square|root mean square}}) average of the entries of the multivector. The point of this normalization is that measures of corresponding temperaments in different prime limits can be meaningfully compared. If M is a multivector, we denote the RMS norm as ‖M‖<sub>RMS</sub>.
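As a concrete illustration, here is a short Python sketch of the weighting and the two norms for a single val. The 5-limit patent val for 12edo, {{val| 12 19 28 }}, is an assumed example; none of this code comes from the original article.

```python
import math

# Weighted 5-limit patent val for 12edo: the entry for prime p is divided
# by log2(p), so each weighted entry approximates the steps per octave.
primes = [2, 3, 5]
val = [12, 19, 28]
weighted = [v / math.log2(p) for v, p in zip(val, primes)]

# Euclidean (L2) norm versus the RMS norm: they differ only by sqrt(n).
l2 = math.sqrt(sum(x * x for x in weighted))
rms = math.sqrt(sum(x * x for x in weighted) / len(weighted))
```

Each weighted entry is close to 12, so the RMS norm also comes out close to 12, matching the statement that for 1-vals the complexity approximates the number of steps to an octave.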


=== Preliminary note on scaling factors ===
These metrics are mainly used to rank temperaments relative to one another. In that regard, it doesn't matter much if an RMS or an ''L''<sup>2</sup> norm is used, because these two are equivalent up to a scaling factor, so they will rank temperaments identically. As a result, it is somewhat common to equivocate between the various choices of scaling factor, and treat the entire thing as "the" Tenney-Euclidean norm, so that we are really only concerned with the results of these metrics up to that equivalence.
 


Because of this, there are different "standards" for scaling that are commonly in use:


== TE complexity ==
Given a [[wedgie]] M, that is, a canonically reduced ''r''-val corresponding to a temperament of rank ''r'', the norm ‖M‖ is a measure of the [[complexity]] of M; that is, of how many notes, in some weighted-average sense, it takes to reach the temperament's intervals. For 1-vals, for instance, it is approximately equal to the number of scale steps it takes to reach an octave. We may call it '''Tenney-Euclidean complexity''', or '''TE complexity''', since it can be defined in terms of the [[Tenney-Euclidean metrics|Tenney-Euclidean norm]].  


Below are various definitions of TE complexity. All of them can be computed either from the multivector or from the mapping matrix, using the [[wikipedia: Gramian matrix|Gramian]].  


Let us denote a weighted mapping matrix, whose rows are the weighted vals ''v<sub>i</sub>'', as V. The ''L''<sup>2</sup> norm is one of the standard complexity measures:  


<math>\displaystyle
\lVert M \rVert_2 = \sqrt {\operatorname{det} (VV^\mathsf{T})}</math>

This may be rescaled to an RMS norm:

<math>\displaystyle
\lVert M \rVert_\text{RMS} = \sqrt {\operatorname{det} (\frac {VV^\mathsf{T}}{n})} = \frac {\lVert M \rVert_2}{\sqrt {n^r}}</math>


where ''n'' is the number of primes up to the prime limit ''p'', and ''r'' is the rank of the temperament, which equals the number of vals wedged together to compute the wedgie.  
 
: '''Note''': that is the definition used by Graham Breed's temperament finder.  


[[Gene Ward Smith]] has recognized that TE complexity can be interpreted as the RMS norm of the wedgie. That defines another RMS norm,  
<math>\displaystyle
\lVert M \rVert_\text{RMS}' = \sqrt {\frac{\operatorname{det} (VV^\mathsf{T})}{C(n, r)}} = \frac {\lVert M \rVert_2}{\sqrt {C(n, r)}}</math>


where C(''n'', ''r'') is the number of combinations of ''n'' things taken ''r'' at a time, which equals the number of entries of the wedgie.  
 
: '''Note''': that is the definition currently used throughout the wiki, unless stated otherwise.  
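Both scalings are easy to compute from the mapping matrix via the Gramian. The following Python fragment is illustrative only; the 5-limit meantone mapping, with rows ⟨1 1 0] and ⟨0 1 4], is an assumed example.

```python
import math

# TE complexity of an assumed example (5-limit meantone) from its mapping
# matrix A, using det(V V^T) where V = A W is the weighted mapping.
primes = [2, 3, 5]
A = [[1, 1, 0],   # octave map
     [0, 1, 4]]   # generator map
n, r = len(primes), len(A)

V = [[a / math.log2(p) for a, p in zip(row, primes)] for row in A]

def gram_det_2x2(M):
    """det(M M^T) for a 2-row matrix M."""
    g = [[sum(x * y for x, y in zip(u, v)) for v in M] for u in M]
    return g[0][0] * g[1][1] - g[0][1] * g[1][0]

d = gram_det_2x2(V)
l2 = math.sqrt(d)                            # ||M||_2
rms_breed = math.sqrt(d / n**r)              # ||M||_RMS  = ||M||_2 / sqrt(n^r)
rms_wedgie = math.sqrt(d / math.comb(n, r))  # ||M||'_RMS = ||M||_2 / sqrt(C(n, r))
```

The two RMS figures differ only by the fixed factor √(''n''<sup>''r''</sup>/C(''n'', ''r'')), so either ranks temperaments of the same rank and limit identically.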


If W is a [[Wikipedia: Diagonal matrix|diagonal matrix]] with 1, 1/log<sub>2</sub>3, …, 1/log<sub>2</sub>''p'' along the diagonal and A is the matrix corresponding to V with unweighted vals as rows, then V = AW and det(VV<sup>T</sup>) = det(AW<sup>2</sup>A<sup>T</sup>). This may be related to the [[Tenney-Euclidean metrics|TE tuning projection matrix]] P, which is V<sup>T</sup>(VV<sup>T</sup>)<sup>-1</sup>V, and the corresponding matrix for unweighted monzos '''P''' = A<sup>T</sup>(AW<sup>2</sup>A<sup>T</sup>)<sup>-1</sup>A.
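The projection matrix can likewise be sketched numerically. In this illustrative Python fragment (the 5-limit meantone mapping is again an assumed example), P = V<sup>T</sup>(VV<sup>T</sup>)<sup>-1</sup>V is built from the weighted mapping and checked to be idempotent, as a projection must be.

```python
import math

# Assumed example: weighted 5-limit meantone mapping V = A W.
primes = [2, 3, 5]
A = [[1, 1, 0], [0, 1, 4]]
V = [[a / math.log2(p) for a, p in zip(row, primes)] for row in A]

def matmul(X, Y):
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)]
            for row in X]

def transpose(X):
    return [list(col) for col in zip(*X)]

# P = V^T (V V^T)^{-1} V, the TE tuning projection matrix.
G = matmul(V, transpose(V))                  # 2x2 Gram matrix V V^T
det = G[0][0] * G[1][1] - G[0][1] * G[1][0]
Ginv = [[ G[1][1] / det, -G[0][1] / det],
        [-G[1][0] / det,  G[0][0] / det]]
P = matmul(matmul(transpose(V), Ginv), V)

P2 = matmul(P, P)  # projecting twice changes nothing: P P = P
```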
== TE simple badness ==
The '''TE simple badness''' of M, which we may also call the '''relative error''' of M, may be considered error relativized to the complexity of the temperament. It is error proportional to the complexity, or size, of the multival; in particular for a 1-val, it is (weighted) error compared to the size of a step.  


Gene Ward Smith defines the simple badness of M as ‖J∧M‖<sub>RMS</sub>, where J = {{val| 1 1 … 1 }} is the JIP in weighted coordinates. Once again, if we have a list of vectors we may use a Gramian to compute it. First we note that a<sub>''i''</sub> = J·v<sub>''i''</sub>/''n'' is the mean value of the entries of v<sub>''i''</sub>. Then note that J∧(v<sub>1</sub> - a<sub>1</sub>J)∧(v<sub>2</sub> - a<sub>2</sub>J)∧…∧(v<sub>''r''</sub> - a<sub>''r''</sub>J) = J∧v<sub>1</sub>∧v<sub>2</sub>∧…∧v<sub>''r''</sub>, since wedge products with more than one factor of J are zero. The Gram matrix of the vectors J and v<sub>''i''</sub> - a<sub>''i''</sub>J will have ''n'' as the (1, 1) entry, and 0s in the rest of the first row and column. Hence we obtain:


<math>\displaystyle
\lVert J \wedge M \rVert_\text{RMS} = \sqrt {\operatorname{det} (\frac {V_J V_J^\mathsf{T}}{n})}</math>

where V<sub>''J''</sub> is the matrix whose rows are the vectors v<sub>''i''</sub> - a<sub>''i''</sub>J. In terms of the wedgie RMS norm, this becomes


<math>\displaystyle
\lVert J \wedge M \rVert'_\text {RMS} = \sqrt{\frac{n \operatorname {det} (V_J V_J^\mathsf{T})}{C(n, r+1)}}</math>


Thus we can view the simple badness as the TE complexity of the "pseudo-temperament" formed by adding the JIP to the mapping matrix as if it were another val.
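That pseudo-temperament reading can be sketched in Python. Here the 5-limit meantone mapping is an assumed example: the weighted JIP is appended to the weighted mapping as an extra row, and the square root of the Gramian determinant gives ‖J∧M‖<sub>2</sub>.

```python
import math

# Assumed example: weighted 5-limit meantone mapping, plus J as an extra "val".
primes = [2, 3, 5]
A = [[1, 1, 0], [0, 1, 4]]
V = [[a / math.log2(p) for a, p in zip(row, primes)] for row in A]
J = [1.0] * len(primes)   # weighted JIP <1 1 ... 1]

def gram(M):
    return [[sum(x * y for x, y in zip(u, v)) for v in M] for u in M]

def det3(g):
    return (g[0][0] * (g[1][1] * g[2][2] - g[1][2] * g[2][1])
          - g[0][1] * (g[1][0] * g[2][2] - g[1][2] * g[2][0])
          + g[0][2] * (g[1][0] * g[2][1] - g[1][1] * g[2][0]))

VJ = [J] + V                              # pseudo-temperament mapping
badness_l2 = math.sqrt(det3(gram(VJ)))    # ||J ^ M||_2
```

The result is tiny (on the order of 0.005 for this example), reflecting the fact that J very nearly lies in the row space of a good temperament's mapping.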
== TE error ==
We can consider '''TE error''' to be a weighted average of the errors of the prime harmonics in TE tuning. Multiplying it by 1200, we get a figure in cents.  


By Graham Breed's definition, TE error may be accessed via [[Tenney-Euclidean tuning|TE tuning map]]. If T is the tuning map, then the TE error G can be found by
 


<math>\displaystyle
G = \sqrt{\frac{(T - J)(T - J)^\mathsf{T}}{n}}
</math>


If T is denominated in cents, then J should be also, so that J = {{val| 1200 1200 … 1200 }}. Here T - J is the list of weighted mistunings of the prime harmonics.  
 
: '''Note''': that is the definition used by Graham Breed's temperament finder.
 
By Gene Ward Smith's definition, the TE error is derived from the relationship of TE simple badness and TE complexity. We denote this definition of TE error Ψ.
 
From the ratio (‖J∧M‖/‖M‖)<sup>2</sup> we obtain C(''n'', ''r'' + 1)/(''n'' C(''n'', ''r'')) = (''n'' - ''r'')/(''n'' (''r'' + 1)). If we take the ratio of this for rank one with this for rank ''r'', the ''n'' cancels, and we get (''n'' - 1)/2 · (''r'' + 1)/(''n'' - ''r'') = (''r'' + 1)(''n'' - 1)/(2(''n'' - ''r'')). It follows that dividing TE error by the square root of this ratio gives a constant of proportionality such that if Ψ is the TE error of a rank ''r'' temperament then
 
<math>\displaystyle
\psi = \sqrt{\frac{2(n-r)}{(r+1)(n-1)}} \Psi</math>
 
is an '''adjusted error''' which makes the error of a rank ''r'' temperament correspond to the errors of the edo vals which support it; so that requiring the edo val error to be less than (1 + ε)ψ for any positive ε results in an infinite set of vals supporting the temperament.  
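The combinatorial step and the resulting adjustment factor can be checked with a few lines of Python (an illustrative sketch, not part of the original text):

```python
import math

# Check C(n, r+1) / (n C(n, r)) == (n - r) / (n (r + 1)) for several cases.
for n in (3, 4, 7, 10):
    for r in range(1, n):
        lhs = math.comb(n, r + 1) / (n * math.comb(n, r))
        rhs = (n - r) / (n * (r + 1))
        assert math.isclose(lhs, rhs)

# Adjustment factor psi / Psi = sqrt(2 (n - r) / ((r + 1) (n - 1))),
# e.g. for a rank-2 temperament in the 5-limit (n = 3, r = 2):
factor = math.sqrt(2 * (3 - 2) / ((2 + 1) * (3 - 1)))
```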


Ψ, ψ and G error can be related as follows:  
G and ψ error both have the advantage that higher rank temperament error corresponds directly to rank one error, but the RMS normalization has the further advantage that in the rank one case, G = sin ''θ'', where ''θ'' is the angle between J and the val in question. Multiplying by 1200 to obtain a result in cents leads to 1200 sin ''θ'', the TE error in cents.
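The rank-one claim is easy to verify numerically. In this illustrative Python sketch (the 5-limit 12edo patent val is an assumed example), the TE tuning map is taken as the least-squares projection of J onto the weighted val, and G, the RMS of T - J, is compared against sin ''θ''.

```python
import math

# Assumed example: weighted 5-limit 12edo patent val, with J in octaves.
primes = [2, 3, 5]
val = [12, 19, 28]
n = len(primes)
v = [x / math.log2(p) for x, p in zip(val, primes)]
J = [1.0] * n

# TE tuning map: least-squares projection of J onto the line spanned by v.
scale = sum(j * x for j, x in zip(J, v)) / sum(x * x for x in v)
T = [scale * x for x in v]

# G is the RMS of the weighted mistuning list T - J.
G = math.sqrt(sum((t - j) ** 2 for t, j in zip(T, J)) / n)

# Angle between J and the val: G should equal sin(theta).
cos_theta = sum(j * x for j, x in zip(J, v)) / (
    math.sqrt(n) * math.sqrt(sum(x * x for x in v)))
sin_theta = math.sqrt(1 - cos_theta ** 2)

cents = 1200 * G  # TE error in cents
```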


== Examples ==
The different definitions yield different results, but they are related to each other by a factor derived only from the rank and limit. A meaningful comparison of temperaments in the same rank and limit can be provided by picking any one of them.  


! TE simple badness
|-
! {{W|Norm (mathematics) #Euclidean norm|Standard ''L''<sup>2</sup> norm}}
| 7.195 : 5.400
| 2.149 : 2.763