Tenney–Euclidean temperament measures
== TE Complexity ==
Given a [[Wedgies_and_Multivals|wedgie]] M, that is a canonically reduced ''r''-val corresponding to a temperament of rank ''r'', the norm ||M|| is a measure of the ''complexity'' of M; that is, roughly how many notes, in a suitably weighted average, it takes to reach intervals. For 1-vals, for instance, it is approximately equal to the number of scale steps it takes to reach an octave. This complexity and related measures have been [http://x31eq.com/temper/primerr.pdf extensively studied] by [[Graham_Breed|Graham Breed]], and we may call it '''Tenney-Euclidean complexity''', or '''TE complexity''', since it can be defined in terms of the [[Tenney-Euclidean_metrics|Tenney-Euclidean norm]].
There have been several ways to define TE complexity, and each will be discussed below. All of them can easily be computed either from the multivector or from the mapping matrix, using the [http://en.wikipedia.org/wiki/Gramian_matrix Gramian].

Let us denote by V the weighted mapping matrix whose rows are the weighted vals ''v<sub>i</sub>''. The L<sup>2</sup> norm is one of the standard measures:

<math>\displaystyle
||M||_2 = \sqrt {\operatorname{det} (VV^\mathsf{T})}</math> | |||
where det () denotes the determinant, and V<sup>T</sup> denotes the transpose of V. | |||
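As a concrete illustration (not from the article), the following Python sketch computes ||M||<sub>2</sub> for 5-limit meantone; the mapping {{val|1 0 -4}}, {{val|0 1 4}} used here is the standard meantone mapping, assumed as an example:

```python
import numpy as np

# 5-limit meantone: unweighted vals as rows of A (assumed example mapping).
A = np.array([[1, 0, -4],
              [0, 1,  4]], dtype=float)

# Tenney weighting: divide the i-th coordinate by log2 of the i-th prime.
W = np.diag(1 / np.log2([2.0, 3.0, 5.0]))
V = A @ W  # weighted mapping matrix

# L2 complexity: square root of the Gram determinant of the weighted vals.
l2_complexity = np.sqrt(np.linalg.det(V @ V.T))
print(l2_complexity)  # ≈ 2.13
```

By Cauchy–Binet, this equals the L<sup>2</sup> norm of the vector of weighted wedgie entries, so the same number comes out whether one starts from the mapping or the multivector.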
In Graham Breed's paper, an RMS norm is proposed as | |||
<math>\displaystyle
||M||_\text{RMS} = \sqrt {\operatorname{det} \left( \frac {VV^\mathsf{T}}{n} \right)} = \frac {||M||_2}{\sqrt {n^r}}</math>
where ''n'' is the number of primes up to the prime limit ''p'', and ''r'' is the rank of the temperament, which equals the number of vals wedged together to compute the wedgie. Note: this is the definition used by [http://x31eq.com/temper/ the temperament finder].
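A small self-contained Python check of this definition, using the standard 5-limit meantone mapping {{val|1 0 -4}}, {{val|0 1 4}} as an assumed example:

```python
import numpy as np

A = np.array([[1, 0, -4],
              [0, 1,  4]], dtype=float)        # 5-limit meantone mapping
V = A @ np.diag(1 / np.log2([2.0, 3.0, 5.0]))  # Tenney-weighted vals
n, r = V.shape[1], V.shape[0]                  # n primes, rank r

rms = np.sqrt(np.linalg.det(V @ V.T / n))
l2 = np.sqrt(np.linalg.det(V @ V.T))

# The two forms agree: det(VV^T / n) = det(VV^T) / n^r for an r x r matrix.
assert np.isclose(rms, l2 / np.sqrt(n ** r))
```

The scalar 1/''n'' pulled inside the determinant of an ''r''&times;''r'' matrix comes out as 1/''n''<sup>''r''</sup>, which is why the two expressions in the formula coincide.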
[[Gene Ward Smith]] has recognized that TE complexity can be interpreted as the RMS norm of the wedgie. That defines another RMS norm,

<math>\displaystyle
||M||_\text{RMS}' = \sqrt {\frac{\operatorname{det} (VV^\mathsf{T})}{C(n, r)}} = \frac {||M||_2}{\sqrt {C(n, r)}}</math> | |||
where C(''n'', ''r'') is the number of combinations of ''n'' things taken ''r'' at a time, which equals the number of entries of the wedgie. Note: this is the definition currently used throughout the wiki, unless stated otherwise.
These definitions differ from one another only by factors depending on the rank and the prime limit. For a fixed rank and limit, however, any of them provides a meaningful basis for comparison.
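The relation between the two RMS variants can be checked numerically; this hedged sketch again assumes the standard 5-limit meantone mapping {{val|1 0 -4}}, {{val|0 1 4}} as an example:

```python
import math
import numpy as np

A = np.array([[1, 0, -4],
              [0, 1,  4]], dtype=float)        # 5-limit meantone mapping
V = A @ np.diag(1 / np.log2([2.0, 3.0, 5.0]))  # Tenney-weighted vals
n, r = V.shape[1], V.shape[0]                  # n = 3 primes, r = 2

l2 = np.sqrt(np.linalg.det(V @ V.T))
rms_wedgie = l2 / math.sqrt(math.comb(n, r))   # RMS norm of the wedgie
rms_breed = l2 / math.sqrt(n ** r)             # RMS norm from Breed's paper

# The definitions differ only by a constant factor for fixed n and r:
ratio = rms_wedgie / rms_breed                 # = sqrt(n^r / C(n, r))
```

Here the ratio is &radic;(3<sup>2</sup>/3) = &radic;3, so either definition ranks 5-limit rank-2 temperaments identically; only cross-rank or cross-limit comparisons are affected by the choice.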
If W is a [http://en.wikipedia.org/wiki/Diagonal_matrix diagonal matrix] with 1, 1/log<sub>2</sub>3, …, 1/log<sub>2</sub>''p'' along the diagonal and A is the matrix corresponding to V with unweighted vals as rows, then V = AW and det(VV<sup>T</sup>) = det(AW<sup>2</sup>A<sup>T</sup>). This may be related to the [[Tenney-Euclidean_metrics|TE tuning projection matrix]] P, which is V<sup>T</sup>(VV<sup>T</sup>)<sup>-1</sup>V, and the corresponding matrix for unweighted monzos '''P''' = A<sup>T</sup>(AW<sup>2</sup>A<sup>T</sup>)<sup>-1</sup>A.
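These identities can be verified directly; the sketch below (with the standard 5-limit meantone mapping assumed as an example) checks that V = AW gives det(VV<sup>T</sup>) = det(AW<sup>2</sup>A<sup>T</sup>), and that P is a symmetric idempotent annihilating the weighted monzo of a tempered-out comma:

```python
import numpy as np

A = np.array([[1, 0, -4],
              [0, 1,  4]], dtype=float)   # unweighted vals (meantone)
W = np.diag(1 / np.log2([2.0, 3.0, 5.0]))
V = A @ W

# det(VV^T) = det(A W^2 A^T), since V = AW.
assert np.isclose(np.linalg.det(V @ V.T), np.linalg.det(A @ W @ W @ A.T))

# TE tuning projection matrix in weighted coordinates.
P = V.T @ np.linalg.inv(V @ V.T) @ V
assert np.allclose(P @ P, P)  # idempotent: projecting twice changes nothing
assert np.allclose(P, P.T)    # symmetric: an orthogonal projection

# 81/80 = [-4 4 -1> is tempered out by meantone; its weighted monzo
# (entries multiplied by log2 of each prime) lies in the kernel of V,
# so the projection sends it to zero.
comma = np.array([-4.0, 4.0, -1.0])
weighted_comma = np.log2([2.0, 3.0, 5.0]) * comma
assert np.allclose(P @ weighted_comma, 0)
```

Note the opposite weighting conventions: vals are divided by log<sub>2</sub>''p'' and monzos multiplied by it, so that the pairing between them is unchanged.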
== TE simple badness ==
If J = {{val|1 1 ... 1}} is the JI point in weighted coordinates, then the '''simple badness''' of M, which we may also call the '''relative error''' of M, is defined as ||J∧M||. This may be considered a sort of badness which heavily favors complex temperaments, or it may be considered error relativized to the complexity of the temperament: relative error is error in proportion to the complexity, or size, of the multival; in particular, for a 1-val it is (weighted) error compared to the size of a step. Once again, if we have a list of vectors we may use a Gramian to compute relative error. First we note that a<sub>''i''</sub> = J·v<sub>''i''</sub>/''n'' is the mean value of the entries of v<sub>''i''</sub>. Then note that J∧(v<sub>1</sub> - a<sub>1</sub>J)∧(v<sub>2</sub> - a<sub>2</sub>J)∧...∧(v<sub>''r''</sub> - a<sub>''r''</sub>J) = J∧v<sub>1</sub>∧v<sub>2</sub>∧...∧v<sub>''r''</sub>, since wedge products with more than one factor of J vanish. The Gram matrix of the vectors J and v<sub>''i''</sub> - a<sub>''i''</sub>J will have ''n'' as its (1,1) entry, and 0s in the rest of the first row and column. Hence we obtain:
<math>\displaystyle ||J \wedge M|| = \sqrt{\frac{n \operatorname{det}([v_i \cdot v_j - na_ia_j])}{C(n,r+1)}}</math>
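As a hedged numerical check (again assuming the standard 5-limit meantone mapping as an example), the Gramian formula can be compared against a direct computation of J∧v<sub>1</sub>∧v<sub>2</sub>, which for ''n'' = 3 and ''r'' = 2 has a single component, the determinant of the matrix stacking J on top of V:

```python
import math
import numpy as np

A = np.array([[1, 0, -4],
              [0, 1,  4]], dtype=float)        # 5-limit meantone mapping
V = A @ np.diag(1 / np.log2([2.0, 3.0, 5.0]))  # weighted vals v_i as rows
n, r = V.shape[1], V.shape[0]
J = np.ones(n)                                 # JI point <1 1 ... 1]

a = (V @ J) / n                                # mean entry of each val
G = V @ V.T - n * np.outer(a, a)               # [v_i . v_j - n a_i a_j]
badness_gram = math.sqrt(n * np.linalg.det(G) / math.comb(n, r + 1))

# Direct check: with n = r + 1 the wedge J ^ v_1 ^ v_2 is a single
# determinant, normalized by the same C(n, r + 1) factor (here 1).
M_full = np.vstack([J, V])
badness_direct = abs(np.linalg.det(M_full)) / math.sqrt(math.comb(n, r + 1))
```

The two routes agree because the Gram determinant of {J, v<sub>''i''</sub> − a<sub>''i''</sub>J} equals ''n'' det([v<sub>''i''</sub>·v<sub>''j''</sub> − ''n''a<sub>''i''</sub>a<sub>''j''</sub>]), and also equals the squared determinant of the stacked matrix; the tiny result reflects meantone's small relative error.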