Tenney–Euclidean temperament measures

__FORCETOC__
The '''Tenney-Euclidean temperament measures''' (or '''TE temperament measures''') consist of TE complexity, TE error, and TE simple badness.
 
There have been several ways to define TE temperament measures, and each will be discussed below. Nonetheless, the following relationship always holds:
 
<math>\displaystyle
\text{TE simple badness} = \text{TE complexity} \times \text{TE error} </math>
 
== Introduction ==
Given a [[Wedgies_and_Multivals|multival]] or multimonzo which is a [http://en.wikipedia.org/wiki/Exterior_algebra wedge product] of weighted vals or monzos, we may define a norm by means of the usual Euclidean norm. We can rescale this by taking the sum of squares of the entries of the multivector, dividing by the number of entries, and taking the square root. This will give a norm which is the RMS ([http://en.wikipedia.org/wiki/Root_mean_square root mean square]) average of the entries of the multivector. The point of this normalization is that measures of corresponding temperaments in different prime limits can be meaningfully compared. If M is a multivector, we denote the RMS norm as ||M||<sub>RMS</sub>.
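To make the RMS norm concrete, here is a sketch (not part of the original article) that computes it for the weighted wedgie of 5-limit meantone, ⟨⟨1 4 4]]; the choice of example temperament is an assumption of this illustration. Each wedgie entry is associated with a pair of primes (''p'', ''q'') and is weighted by 1/(log<sub>2</sub>''p'' · log<sub>2</sub>''q''), since it is a 2×2 minor of the weighted mapping.

```python
import numpy as np
from math import log2

# Weighted entries of the 5-limit meantone wedgie <<1 4 4]] -- this example
# temperament is an assumption for illustration. The entry paired with
# primes (p, q) is divided by log2(p) * log2(q).
entries = np.array([1 / (log2(2) * log2(3)),
                    4 / (log2(2) * log2(5)),
                    4 / (log2(3) * log2(5))])

# RMS norm: square root of the mean of the squares of the entries
rms_norm = np.sqrt(np.mean(entries ** 2))
print(round(rms_norm, 4))
```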


== TE Complexity ==
Given a [[Wedgies_and_Multivals|wedgie]] M, that is, a canonically reduced ''r''-val corresponding to a temperament of rank ''r'', the norm ||M|| is a measure of the ''complexity'' of M: roughly, how many notes, in some weighted-average sense, it takes to get to intervals. For 1-vals, for instance, it is approximately equal to the number of scale steps it takes to reach an octave. This complexity and related measures have been [http://x31eq.com/temper/primerr.pdf extensively studied] by [[Graham_Breed|Graham Breed]], and we may call it '''Tenney-Euclidean complexity''', or '''TE complexity''', since it can be defined in terms of the [[Tenney-Euclidean_metrics|Tenney-Euclidean norm]].


There are several ways to define TE complexity, each discussed below. All of them can be easily computed either from the multivector or from the mapping matrix, using the [http://en.wikipedia.org/wiki/Gramian_matrix Gramian].


Let us denote by V the weighted mapping matrix whose rows are the weighted vals ''v<sub>i</sub>''. The L<sup>2</sup> norm is one of the standard measures,
<math>\displaystyle
\| M \|_2 = \sqrt{\det (V V^\mathsf{T})} </math>

Dividing the sum of squares by the number of entries before taking the square root gives the RMS norm:

<math>\displaystyle
\| M \|_\text{RMS} = \sqrt{\frac{\det (V V^\mathsf{T})}{C(n, r)}} </math>
where C(''n'', ''r'') is the number of combinations of ''n'' things taken ''r'' at a time, which equals the number of entries of the wedgie. Note: this is the definition currently used throughout the wiki, unless stated otherwise.


The definitions clearly differ from each other only by a factor depending on the rank and the limit. For the same rank and limit, though, any of them provides a meaningful comparison.


If W is a [http://en.wikipedia.org/wiki/Diagonal_matrix diagonal matrix] with 1, 1/log<sub>2</sub>3, …, 1/log<sub>2</sub>''p'' along the diagonal and A is the matrix corresponding to V with unweighted vals as rows, then V = AW and det(VV<sup>T</sup>) = det(AW<sup>2</sup>A<sup>T</sup>). This may be related to the [[Tenney-Euclidean_metrics|TE tuning projection matrix]] P, which is V<sup>T</sup>(VV<sup>T</sup>)<sup>-1</sup>V, and the corresponding matrix for unweighted monzos '''P''' = A<sup>T</sup>(AW<sup>2</sup>A<sup>T</sup>)<sup>-1</sup>A.
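To make the Gramian computation concrete, the sketch below (using 5-limit meantone, an example assumed for illustration and not given in this section) computes the RMS-normalized TE complexity from an unweighted mapping matrix A:

```python
import numpy as np
from math import comb, log2

# 5-limit meantone mapping, unweighted vals as rows -- assumed for illustration
A = np.array([[1, 0, -4],
              [0, 1,  4]], dtype=float)
primes = [2, 3, 5]
n = len(primes)          # number of primes in the limit
r = A.shape[0]           # rank of the temperament

# Tenney weighting: W = diag(1/log2(p)); weighted mapping V = A W
W = np.diag([1 / log2(p) for p in primes])
V = A @ W

# RMS-normalized TE complexity: sqrt(det(V V^T) / C(n, r))
complexity = np.sqrt(np.linalg.det(V @ V.T) / comb(n, r))
print(round(complexity, 4))
```

By the Cauchy–Binet formula, det(VV<sup>T</sup>) equals the sum of squares of the weighted wedgie entries, so this agrees with computing the RMS norm from the multivector directly.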


== TE simple badness ==
The '''TE simple badness''' of M, which we may also call the '''relative error''' of M, may be considered error relativized to the complexity of the temperament: it is error proportional to the complexity, or size, of the multival; in particular for a 1-val, it is (weighted) error compared to the size of a step. It may be considered a sort of badness which heavily favors complex temperaments.

Gene Ward Smith defines the simple badness of M as ||J∧M||, where J = {{val|1 1 ... 1}} is the JI point in weighted coordinates. Once again, if we have a list of vectors we may use a Gramian to compute it. First we note that a<sub>''i''</sub> = J·v<sub>''i''</sub>/''n'' is the mean value of the entries of v<sub>''i''</sub>. Then note that J∧(v<sub>1</sub> - a<sub>1</sub>J)∧(v<sub>2</sub> - a<sub>2</sub>J)∧...∧(v<sub>''r''</sub> - a<sub>''r''</sub>J) = J∧v<sub>1</sub>∧v<sub>2</sub>∧...∧v<sub>''r''</sub>, since wedge products with more than one factor of J vanish. The Gram matrix of the vectors J and v<sub>''i''</sub> - a<sub>''i''</sub>J has ''n'' as its (1,1) entry and 0s in the rest of the first row and column. Hence we obtain:
 
<math>\displaystyle
||J \wedge M|| = \sqrt{\frac{n}{C(n, r+1)} \det ([v_i \cdot v_j - n a_i a_j])} </math>


Graham Breed defines the simple badness differently. It is skipped here because, by that definition, it is easier to find TE complexity and TE error first and derive the simple badness from their relationship.
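As a sketch of Gene Ward Smith's Gramian formula (again assuming the 5-limit meantone mapping as an illustrative example):

```python
import numpy as np
from math import comb, log2

# 5-limit meantone mapping, unweighted vals as rows -- assumed for illustration
A = np.array([[1, 0, -4],
              [0, 1,  4]], dtype=float)
primes = [2, 3, 5]
n, r = len(primes), A.shape[0]

W = np.diag([1 / log2(p) for p in primes])
V = A @ W                          # weighted vals v_1 ... v_r as rows
J = np.ones(n)                     # JI point in weighted coordinates

a = V @ J / n                      # a_i = J.v_i / n, mean of the entries of v_i
G = V @ V.T - n * np.outer(a, a)   # the matrix [v_i.v_j - n a_i a_j]

# ||J ^ M|| = sqrt(n det(G) / C(n, r+1)), RMS normalization
badness = np.sqrt(n * np.linalg.det(G) / comb(n, r + 1))
print(round(badness, 6))
```

The small result reflects how accurate meantone is relative to its complexity.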


== TE error ==
We can consider '''TE error''' to be a weighted average of the errors of the prime harmonics in TE tuning. Multiplying it by 1200, we get a figure in cents.

By Gene Ward Smith's definition, the TE error is derived from the relationship between TE simple badness and TE complexity. We denote this definition of TE error Ψ.


From the ratio (||J∧M||/||M||)<sup>2</sup> we obtain C(''n'', ''r'' + 1)/(''n'' C(''n'', ''r'')) = (''n'' - ''r'')/(''n'' (''r'' + 1)). If we take the ratio of this for rank one with this for rank ''r'', the ''n'' cancels, and we get (''n'' - 1)/2 · (''r'' + 1)/(''n'' - ''r'') = (''r'' + 1)(''n'' - 1)/(2(''n'' - ''r'')). It follows that dividing TE error by the square root of this ratio gives a constant of proportionality such that if Ψ is the TE error of a rank ''r'' temperament then
<math>\displaystyle \psi = \sqrt{\frac{2(n-r)}{(r+1)(n-1)}} \Psi</math>


is an '''adjusted error''' which makes the error of a rank ''r'' temperament correspond to the errors of the edo vals which support it; so that requiring the edo val error to be less than (1 + ε)ψ for any positive ε results in an infinite set of vals supporting the temperament.
 
By Graham Breed's definition, TE error may be computed directly from the [[TE tuning map]]. If T is the tuning map, then the TE error G can be found by


<math>\displaystyle
G = || T - J ||_\text{RMS} = \sqrt{\frac{(T - J) \cdot (T - J)^\mathsf{T}}{n}}</math>
 
where the dot represents the ordinary dot product. If T is denominated in cents, then J should be also, so that J = {{val|1200 1200 … 1200}}. Here T - J is the list of weighted mistunings of the prime harmonics. Note: this is the definition used by the temperament finder.
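As a sketch of this computation (5-limit meantone assumed as the example), the tuning map can be obtained from the TE tuning projection matrix P = V<sup>T</sup>(VV<sup>T</sup>)<sup>-1</sup>V described under TE complexity:

```python
import numpy as np
from math import log2

# 5-limit meantone mapping, unweighted vals as rows -- assumed for illustration
A = np.array([[1, 0, -4],
              [0, 1,  4]], dtype=float)
primes = [2, 3, 5]
n = len(primes)

W = np.diag([1 / log2(p) for p in primes])
V = A @ W
J = np.full(n, 1200.0)                      # JI point in cents, weighted coordinates

# TE tuning map: least-squares projection of J onto the row space of V,
# i.e. T = J P with P = V^T (V V^T)^-1 V
T = J @ V.T @ np.linalg.inv(V @ V.T) @ V

# G = ||T - J||_RMS, here in cents since J is in cents
G = np.sqrt((T - J) @ (T - J) / n)
print(round(G, 3))
```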


Ψ, ψ and G can be related as follows:


<math>\displaystyle G = \sqrt{\frac{n-1}{2n}} \psi = \sqrt{\frac{n-r}{(r+1)n}} \Psi</math>
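These relationships can be checked numerically. The sketch below (5-limit meantone assumed, as in the earlier examples) computes Ψ as simple badness over complexity via the Gramians, converts it to G, and confirms the result against the tuning-map definition:

```python
import numpy as np
from math import comb, log2

# 5-limit meantone mapping, unweighted vals as rows -- assumed for illustration
A = np.array([[1, 0, -4],
              [0, 1,  4]], dtype=float)
primes = [2, 3, 5]
n, r = len(primes), A.shape[0]

W = np.diag([1 / log2(p) for p in primes])
V = A @ W
J = np.ones(n)                              # JI point in weighted coordinates

# TE complexity and simple badness via Gramians, RMS normalization
complexity = np.sqrt(np.linalg.det(V @ V.T) / comb(n, r))
a = V @ J / n
Gram = V @ V.T - n * np.outer(a, a)
badness = np.sqrt(n * np.linalg.det(Gram) / comb(n, r + 1))

Psi = badness / complexity                  # TE error, Gene Ward Smith's definition
G = np.sqrt((n - r) / ((r + 1) * n)) * Psi  # converted to Graham Breed's G

# Cross-check against the tuning-map form G = ||T - J||_RMS
T = J @ V.T @ np.linalg.inv(V @ V.T) @ V
print(np.isclose(G, np.sqrt((T - J) @ (T - J) / n)))
```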


G and ψ both have the advantage that higher-rank temperament error corresponds directly to rank-one error, but the RMS normalization has the further advantage that in the rank-one case, G = sin θ, where θ is the angle between J and the val in question. Multiplying by 1200 to obtain a result in cents gives 1200 sin θ, the TE error as it appears on the temperament finder pages.


[[Category:math]]