Tenney–Euclidean temperament measures

The '''TE simple badness''' of M, which we may also call the '''relative error''' of M, may be considered error relativized to the complexity of the temperament. It is error proportional to the complexity, or size, of the multival; in particular, for a 1-val, it is (weighted) error compared to the size of a step. This may be considered a sort of badness which heavily favors complex temperaments.


Gene Ward Smith defines the simple badness of M as ||J∧M||<sub>RMS</sub>, where J = {{val|1 1 ... 1}} is the JI point in weighted coordinates. Once again, if we have a list of vectors we may use a Gramian to compute it. First we note that a<sub>''i''</sub> = J·v<sub>''i''</sub>/''n'' is the mean value of the entries of v<sub>''i''</sub>. Then note that J∧(v<sub>1</sub> - a<sub>1</sub>J)∧(v<sub>2</sub> - a<sub>2</sub>J)∧...∧(v<sub>''r''</sub> - a<sub>''r''</sub>J) = J∧v<sub>1</sub>∧v<sub>2</sub>∧...∧v<sub>''r''</sub>, since wedge products with more than one factor of J vanish. The Gram matrix of the vectors J and v<sub>''i''</sub> - a<sub>''i''</sub>J will have ''n'' as the (1,1) entry, and 0s in the rest of the first row and column. Hence we obtain:


<math>\displaystyle
||J \wedge M||_{RMS} = \sqrt{\frac{n}{C(n, r+1)} \det \left( \left[ v_i \cdot v_j - n a_i a_j \right] \right)}</math>
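As a concrete illustration, here is a minimal Python sketch (not from the article; the function names are ours) of this Gramian computation, applied to the hypothetical example of 5-limit meantone spanned by the 12 and 19 equal-temperament vals:

```python
import math

def det(m):
    """Determinant by cofactor expansion along the first row."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def te_simple_badness(vals, primes):
    """RMS simple badness ||J ^ M|| of the temperament spanned by the
    given (unweighted) vals, computed via the Gramian formula above."""
    n, r = len(primes), len(vals)
    # Tenney weighting: divide the k-th entry of each val by log2 of the k-th prime
    w = [[v[k] / math.log2(primes[k]) for k in range(n)] for v in vals]
    # a_i = J . v_i / n is the mean of the weighted entries of v_i
    a = [sum(row) / n for row in w]
    # Matrix with entries v_i . v_j - n a_i a_j (dot products of weighted vals)
    g = [[sum(w[i][k] * w[j][k] for k in range(n)) - n * a[i] * a[j]
          for j in range(r)] for i in range(r)]
    return math.sqrt(n * det(g) / math.comb(n, r + 1))

# 5-limit meantone from the 12edo and 19edo patent vals
badness = te_simple_badness([[12, 19, 28], [19, 30, 44]], primes=[2, 3, 5])
```

As a sanity check, when r + 1 = n the multival J∧M has a single component, so the badness should equal the absolute determinant of the matrix whose rows are J and the weighted vals, and the Gramian route reproduces that value.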


Again, Graham Breed defines the simple badness differently; that definition is skipped here because, under it, it is easier to find TE complexity and TE error first and derive the simple badness from their relationship.


== TE error ==