Tenney–Euclidean temperament measures



where C(''n'', ''r'') is the number of combinations of ''n'' things taken ''r'' at a time, which equals the number of entries of the wedgie. Note: this is the definition currently used throughout the wiki, unless stated otherwise.
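For instance, for a rank-2 temperament in the 7-limit (''n'' = 4, ''r'' = 2), C(4, 2) = 6, matching the six entries of its wedgie.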
It is clear that the definitions differ from each other only by a factor determined by the rank and the limit. Within the same rank and limit, any of them provides a meaningful comparison.


If W is a [http://en.wikipedia.org/wiki/Diagonal_matrix diagonal matrix] with 1, 1/log<sub>2</sub>3, …, 1/log<sub>2</sub>''p'' along the diagonal and A is the matrix corresponding to V with unweighted vals as rows, then V = AW and det(VV<sup>T</sup>) = det(AW<sup>2</sup>A<sup>T</sup>). This may be related to the [[Tenney-Euclidean_metrics|TE tuning projection matrix]] P, which is V<sup>T</sup>(VV<sup>T</sup>)<sup>-1</sup>V, and the corresponding matrix for unweighted monzos '''P''' = A<sup>T</sup>(AW<sup>2</sup>A<sup>T</sup>)<sup>-1</sup>A.
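These identities are straightforward to check numerically. The following is a minimal sketch (assuming [http://en.wikipedia.org/wiki/NumPy NumPy]); the 7-limit [[meantone]] mapping used for A is chosen only for concreteness.

<syntaxhighlight lang="python">
import numpy as np

# Tenney weights for the 7-limit: W = diag(1, 1/log2(3), 1/log2(5), 1/log2(7))
primes = np.array([2.0, 3.0, 5.0, 7.0])
W = np.diag(1 / np.log2(primes))

# A: unweighted vals as rows (here, a mapping for 7-limit meantone)
A = np.array([[1, 0, -4, -13],
              [0, 1,  4,  10]])

V = A @ W  # weighted vals: V = AW

# det(VV^T) agrees with det(AW^2A^T)
print(np.linalg.det(V @ V.T))         # ≈ 29.16
print(np.linalg.det(A @ W**2 @ A.T))  # ≈ 29.16

# TE tuning projection matrix P = V^T (VV^T)^-1 V; being a projection, it satisfies PP = P
P = V.T @ np.linalg.inv(V @ V.T) @ V
print(np.allclose(P @ P, P))          # True

# the corresponding matrix for unweighted monzos given above
P_monzo = A.T @ np.linalg.inv(A @ W**2 @ A.T) @ A
</syntaxhighlight>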


G and ψ error both have the advantage that higher-rank temperament error corresponds directly to rank-one error, but the RMS normalization has the further advantage that in the rank-one case, G = sin θ, where θ is the angle between J and the val in question. Multiplying by 1200 to obtain a result in cents gives 1200 sin θ, the TE error as it appears on the temperament finder pages.
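As a concrete illustration of the rank-one case, the following minimal sketch (assuming [http://en.wikipedia.org/wiki/NumPy NumPy]) computes 1200 sin θ for the 7-limit patent val of [[12edo]]; the choice of val is for illustration only.

<syntaxhighlight lang="python">
import numpy as np

primes = np.array([2.0, 3.0, 5.0, 7.0])
J = np.ones(4)                                    # JIP in weighted coordinates: ⟨1 1 1 1]
v = np.array([12, 19, 28, 34]) / np.log2(primes)  # weighted 7-limit val of 12edo

# sine of the angle between the weighted val and J
cos_theta = (v @ J) / (np.linalg.norm(v) * np.linalg.norm(J))
sin_theta = np.sqrt(1 - cos_theta**2)
print(1200 * sin_theta)                           # TE error in cents (roughly 4.9 here)
</syntaxhighlight>

For a higher-rank temperament, replacing the single val with the span of the weighted vals gives the corresponding angle; that computation is used in the sketch after the table below.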
== Example in different definitions ==
The different definitions yield different results, but they are related to each other by a factor determined by the rank and the limit. Picking any one of them provides a meaningful comparison of temperaments in the same rank and limit.
Here is a demonstration in which [[7-limit]] [[magic]] and [[meantone]] are compared under each definition. Each cell gives the value for magic, the value for meantone, and their quotient.
{| class="wikitable center-all"
|+7-limit magic vs meantone in TE temperament measures
! Norm
! TE complexity
! TE error (¢)
! TE simple badness
|-
! Standard L2 norm
| 7.195 : 5.400 = 1.332
| 2.149 : 2.763 = 0.777
| 12.882×10<sup>-3</sup> : 12.435×10<sup>-3</sup> = 1.036
|-
! Breed's RMS norm
| 1.799 : 1.350 = 1.332
| 1.074 : 1.382 = 0.777
| 1.610×10<sup>-3</sup> : 1.554×10<sup>-3</sup> = 1.036
|-
! Smith's RMS norm
| 2.937 : 2.204 = 1.332
| 2.631 : 3.384 = 0.777
| 6.441×10<sup>-3</sup> : 6.218×10<sup>-3</sup> = 1.036
|}
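The values in the Breed's RMS row can be reproduced with a short script. The sketch below (assuming [http://en.wikipedia.org/wiki/NumPy NumPy]) uses the mappings [⟨1 0 2 -1], ⟨0 5 1 12]] for magic and [⟨1 0 -4 -13], ⟨0 1 4 10]] for meantone, computes the error as 1200 sin θ with θ the angle between J and the span of the weighted vals, and takes simple badness to be complexity × error/1200, in agreement with the tabulated values.

<syntaxhighlight lang="python">
import numpy as np

def te_measures_rms(mapping, primes=(2.0, 3.0, 5.0, 7.0)):
    """TE complexity, error (in cents) and simple badness, Breed's RMS normalization."""
    n, r = len(primes), len(mapping)
    W = np.diag(1 / np.log2(np.array(primes)))
    V = np.array(mapping) @ W                   # weighted vals
    J = np.ones(n)                              # weighted JIP

    complexity = np.sqrt(np.linalg.det(V @ V.T) / n**r)

    # error as 1200 sin(angle between J and the span of the weighted vals)
    P = V.T @ np.linalg.inv(V @ V.T) @ V        # TE tuning projection matrix
    error = 1200 * np.linalg.norm(J - J @ P) / np.linalg.norm(J)

    badness = complexity * error / 1200         # consistent with the table above
    return complexity, error, badness

magic    = [[1, 0,  2,  -1], [0, 5, 1, 12]]
meantone = [[1, 0, -4, -13], [0, 1, 4, 10]]

print(te_measures_rms(magic))     # ≈ (1.799, 1.074, 0.00161)
print(te_measures_rms(meantone))  # ≈ (1.350, 1.382, 0.00155)

# The other rows follow from fixed factors: multiplying the RMS complexity by
# sqrt(n**r) = 4 gives the standard L2 value, and dividing that by
# sqrt(C(n, r)) = sqrt(6) gives Smith's RMS value for these rank-2, 7-limit cases.
</syntaxhighlight>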


[[Category:math]]
[[Category:measure]]
[[Category:todo:reduce_mathslang]]