Tenney–Euclidean temperament measures

From Xenharmonic Wiki
Revision as of 21:49, 5 February 2011 by Wikispaces>guest (imported revision 199013098)


Given a [[Wedgies and Multivals|multival]] or multimonzo which is a [[http://en.wikipedia.org/wiki/Exterior_algebra|wedge product]] of weighted vals or monzos, we may define a norm by means of the usual Euclidean norm. We can rescale this by taking the sum of squares of the entries of the multivector, dividing by the number of entries, and taking the square root. This will give a norm which is the RMS ([[http://en.wikipedia.org/wiki/Root_mean_square|root mean square]]) average of the entries of the multivector. The point of this normalization is that measures of corresponding temperaments in different prime limits can be meaningfully compared. If M is a multivector, we denote the RMS norm as ||M||.
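As a minimal sketch of this normalization (the function name rms_norm is illustrative, not from any library), assuming the multivector's entries are supplied as a flat sequence of weighted components:

```python
# Minimal sketch of the RMS norm of a multivector. Assumption: the
# entries are given as a flat sequence of weighted components.
import numpy as np

def rms_norm(M):
    """Sum the squared entries, divide by the number of entries,
    and take the square root."""
    M = np.asarray(M, dtype=float)
    return float(np.sqrt(np.mean(M ** 2)))
```

Because the sum of squares is divided by the number of entries before the square root is taken, the result does not grow merely because a higher prime limit produces a longer multivector.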

===TE Complexity=== 
Given a [[Wedgies and Multivals|wedgie]] M, that is, a canonically reduced r-val corresponding to a temperament of rank r, the norm ||M|| is a measure of the //complexity// of M; roughly, how many notes, in some weighted average sense, it takes to reach the intervals of the temperament. For 1-vals, for instance, it is approximately equal to the number of scale steps it takes to reach an octave. This complexity and related measures have been [[http://x31eq.com/temper/primerr.pdf|extensively studied]] by [[Graham Breed]], and we may call it TE or Tenney-Euclidean complexity, as it may also be defined in terms of the [[Tenney-Euclidean metrics|Tenney-Euclidean norm]].

In fact, while TE complexity is easily computed if a routine for computing multivectors is available, such a routine is not required, as it can also be computed using the [[http://en.wikipedia.org/wiki/Gramian_matrix|Gramian]]. This is the determinant of the square matrix, called the Gram matrix, defined from a list of r vectors, whose (i,j)-th entry is vi.vj, the [[http://en.wikipedia.org/wiki/Dot_product|dot product]] of the ith vector with the jth vector. The square of the ordinary Euclidean norm of a multivector is the Gramian of the vectors wedged together to define it, and hence in terms of the TE norm we have 

[[math]]
\displaystyle ||M|| = ||v_1 \wedge v_2 \wedge \ldots \wedge v_r|| = \sqrt{\frac{\det([v_i \cdot v_j])}{C(n, r)}}
[[math]]

where C(n, r) is the number of combinations of n things taken r at a time, and vi.vj is the TE [[http://en.wikipedia.org/wiki/Symmetric_bilinear_form|symmetric form]] on vals, which in weighted coordinates is simply the ordinary dot product. Here n is the number of primes up to the prime limit p, and r is the rank of the temperament, which equals the number of vals wedged together to compute the wedgie.
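As an illustrative sketch of this computation (the choice of 5-limit meantone, with vals <12 19 28| and <7 11 16|, is an assumption for demonstration, not part of the definition):

```python
# Illustrative sketch: TE complexity from the Gramian, with no wedge
# product routine needed. Assumed example: 5-limit meantone, generated
# by the vals <12 19 28| and <7 11 16|.
import numpy as np
from math import comb, log2

primes = [2, 3, 5]
A = np.array([[12, 19, 28],
              [7, 11, 16]], dtype=float)    # unweighted vals as rows
W = np.diag([1 / log2(p) for p in primes])  # Tenney weighting matrix
V = A @ W                                   # weighted vals
n, r = len(primes), A.shape[0]              # n primes, rank r

gram = V @ V.T                              # Gram matrix: (i,j) entry is v_i . v_j
complexity = np.sqrt(np.linalg.det(gram) / comb(n, r))
```

By the Cauchy-Binet formula, the Gramian det(VV*) equals the sum of squares of the entries of the wedgie, so this agrees with the RMS norm of v1^v2.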

If V is a matrix whose rows are the weighted vals vi, we may write det([vi.vj]) as det(VV*), where V* denotes the transpose. If W is the [[http://en.wikipedia.org/wiki/Diagonal_matrix|diagonal matrix]] with 1, 1/log2(3), ..., 1/log2(p) along the diagonal and A is the matrix corresponding to V with unweighted vals as rows, then V = AW and det(VV*) = det(AW^2A*). This may be related to the [[Tenney-Euclidean metrics|TE tuning projection matrix]] P, which is V*(VV*)^(-1)V, and the corresponding matrix for unweighted monzos **P** = A*(AW^2A*)^(-1)A.
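A sketch of these matrices in code, using 5-limit meantone vals <12 19 28| and <7 11 16| as an assumed example:

```python
# Illustrative sketch of the TE tuning projection matrix P = V*(VV*)^(-1)V
# and the corresponding matrix for unweighted monzos, A*(AW^2A*)^(-1)A.
# Assumed example: 5-limit meantone vals <12 19 28| and <7 11 16|.
import numpy as np
from math import log2

primes = [2, 3, 5]
A = np.array([[12, 19, 28],
              [7, 11, 16]], dtype=float)     # unweighted vals as rows
W = np.diag([1 / log2(p) for p in primes])   # Tenney weighting matrix
V = A @ W                                    # V = AW, weighted vals

P = V.T @ np.linalg.inv(V @ V.T) @ V         # projection in weighted coordinates
P_monzo = A.T @ np.linalg.inv(A @ W @ W @ A.T) @ A  # matrix for unweighted monzos
```

In weighted coordinates P is a symmetric idempotent, i.e. an orthogonal projection onto the row space of V.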

===TE simple badness=== 
If J = <1 1 ... 1| is the JI point in weighted coordinates, then the simple badness of M, which we may also call the relative error of M, is defined as ||J^M||. This may be considered a sort of badness which heavily favors complex temperaments, or it may be considered error relativized to the complexity of the temperament: relative error is error in proportion to the complexity, or size, of the multival; in particular for a 1-val, it is (weighted) error compared to the size of a step. Once again, if we have a list of vectors we may use a Gramian to compute relative error. First note that ai = J.vi/n is the mean value of the entries of vi. Then note that J^(v1-a1J)^(v2-a2J)^...^(vr-arJ) = J^v1^v2^...^vr, since any term containing J wedged with itself vanishes. The Gram matrix of the vectors J and vi-aiJ will have n as its (1,1) entry, and 0s in the rest of the first row and column, since J.(vi-aiJ) = nai-nai = 0. Hence we obtain:

[[math]]
\displaystyle ||J \wedge M|| = \sqrt{\frac{n \det([v_i \cdot v_j - n a_i a_j])}{C(n, r+1)}}
[[math]]
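A sketch of this Gramian computation of simple badness, using 5-limit meantone vals <12 19 28| and <7 11 16| as an assumed example:

```python
# Illustrative sketch: TE simple badness ||J ^ M|| computed from the
# Gram matrix [v_i . v_j - n a_i a_j]. Assumed example: 5-limit
# meantone vals <12 19 28| and <7 11 16|.
import numpy as np
from math import comb, log2

primes = [2, 3, 5]
A = np.array([[12, 19, 28],
              [7, 11, 16]], dtype=float)    # unweighted vals as rows
W = np.diag([1 / log2(p) for p in primes])  # Tenney weighting matrix
V = A @ W                                   # weighted vals
n, r = len(primes), A.shape[0]

a = V.sum(axis=1) / n                       # a_i = J.v_i / n, mean of entries of v_i
G = V @ V.T - n * np.outer(a, a)            # [v_i . v_j - n a_i a_j]
badness = np.sqrt(n * np.linalg.det(G) / comb(n, r + 1))
```

For this rank-2, 5-limit example n = 3 and r + 1 = 3, so ||J^M|| reduces to the absolute value of the determinant of the matrix with rows J, v1, v2, which gives an independent check of the formula.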
