Tenney–Euclidean temperament measures

Revision as of 05:38, 28 July 2010 by Wikispaces>genewardsmith (**Imported revision 154322913 - Original comment: **)


Original Wikitext content:

Given a [[Wedgies and Multivals|multival]] or multimonzo which is a [[http://en.wikipedia.org/wiki/Exterior_algebra|wedge product]] of weighted vals or monzos, we may define a norm by means of the usual Euclidean norm. We can rescale this by taking the sum of squares of the entries of the multivector, dividing by the number of entries, and taking the square root. This will give a norm which is the RMS ([[http://en.wikipedia.org/wiki/Root_mean_square|root mean square]]) average of the entries of the multivector. The point of this normalization is that measures of corresponding temperaments in different prime limits can be meaningfully compared. If W is a multivector, we denote the RMS norm as ||W||.
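As a concrete sketch of the RMS norm (the helper name and the 12edo example here are illustrative, not part of the original text), assuming Tenney weighting divides each val entry by log2 of its prime:

```python
import numpy as np

def rms_norm(entries):
    """RMS norm: sum the squares of the entries, divide by the
    number of entries, and take the square root."""
    w = np.asarray(entries, dtype=float)
    return float(np.sqrt(np.mean(w ** 2)))

# Example: the 5-limit patent val <12 19 28| for 12edo,
# weighted by dividing each entry by log2 of its prime.
weighted = np.array([12, 19, 28]) / np.log2([2, 3, 5])
print(rms_norm(weighted))  # about 12
```

For a plain val the RMS norm is close to the number of steps to the octave, as the section below notes.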

===Wedgie Complexity===
Given a [[Wedgies and Multivals|wedgie]] W, that is, a canonically reduced r-val corresponding to a temperament of rank r, the norm ||W|| is a measure of the //complexity// of W; that is, of how many notes, in some sort of weighted average, it takes to reach intervals. For 1-vals, for instance, it is approximately equal to the number of scale steps it takes to reach an octave.

Wedgie complexity is easily computed if a routine for computing multivectors is available. However, such a routine is not required, as the complexity can also be computed using the [[http://en.wikipedia.org/wiki/Gramian_matrix|Gramian]]. Given a list of r vectors, the Gram matrix is the square matrix whose (i,j)-th entry is vi.vj, the [[http://en.wikipedia.org/wiki/Dot_product|dot product]] of the ith vector with the jth vector; the Gramian is its determinant. The square of the ordinary Euclidean norm of a multivector is the Gramian of the vectors wedged together to define it, and hence in terms of the RMS norm we have

||W|| = ||v1^v2^...^vr|| = sqrt(det([vi.vj])/C(n, r))

where C(n, r) is the number of combinations of n things taken r at a time. Here n is the number of primes up to the prime limit p, and r is the rank of the temperament, which equals the number of vals wedged together to compute the wedgie.
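The Gramian computation above can be sketched as follows (the choice of vals, the Tenney weighting, and the function name are our own illustration; this computes ||v1^...^vr||, which is the wedgie complexity when the wedge of the chosen vals is already the canonically reduced wedgie):

```python
import numpy as np
from math import comb

def wedgie_complexity(vals, primes):
    """||v1^...^vr|| as sqrt(det([vi.vj]) / C(n, r)), where the
    Gram matrix [vi.vj] is built from the weighted vals."""
    V = np.array(vals, dtype=float) / np.log2(primes)  # weight each entry
    gram = V @ V.T                # (i,j) entry is the dot product vi.vj
    n, r = len(primes), len(vals)
    return float(np.sqrt(np.linalg.det(gram) / comb(n, r)))

# Example: 5-limit meantone from the patent vals for 12edo and 19edo.
print(wedgie_complexity([[12, 19, 28], [19, 30, 44]], [2, 3, 5]))
```

For a single val (r = 1) this reduces to sqrt(v.v/n), the RMS norm of the weighted val, as expected.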

===Relative error===
If J = <1 1 ... 1| is the JI point, then the relative error of W is defined as ||J^W||. Relative error is error proportional to the complexity, or size, of the multival; in particular for a 1-val, it is (weighted) error compared to the size of a step. Once again, if we have a list of vectors we may use a Gramian to compute relative error. First we note that ai = J.vi/n is the mean value of the entries of vi. Then note that J^(v1-a1*J)^(v2-a2*J)^...^(vr-ar*J) = J^v1^v2^...^vr, since any term of the expansion containing J more than once is zero. The Gram matrix of J together with the vectors vi-ai*J will have n as the (1,1) entry, and 0s in the rest of the first row and column. Hence we obtain

||J^W|| = sqrt(n det([vi.vj/n - ai*aj]))
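A direct transcription of this formula (the val chosen and the Tenney weighting below are our own illustrative assumptions):

```python
import numpy as np

def relative_error(vals, primes):
    """||J^W|| = sqrt(n * det([vi.vj/n - ai*aj])) for weighted vals vi,
    where ai = J.vi/n is the mean value of the entries of vi."""
    V = np.array(vals, dtype=float) / np.log2(primes)  # weight each entry
    n = len(primes)
    a = V.mean(axis=1)                    # ai = J.vi / n
    M = V @ V.T / n - np.outer(a, a)      # (i,j) entry: vi.vj/n - ai*aj
    return float(np.sqrt(n * np.linalg.det(M)))

# Example: relative error of the 5-limit patent val for 12edo.
print(relative_error([[12, 19, 28]], [2, 3, 5]))
```

For a single val the matrix is 1x1 and the formula reduces to sqrt(v.v - n*a^2), i.e. sqrt(n) times the standard deviation of the weighted val's entries, which is small exactly when the weighted entries are nearly equal.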
