Tenney–Euclidean temperament measures: Difference between revisions
<h2>IMPORTED REVISION FROM WIKISPACES</h2>
This is an imported revision from Wikispaces. The revision metadata is included below for reference:<br>
: This revision was by author [[User:genewardsmith|genewardsmith]] and made on <tt>2010-07-28 05:38:09 UTC</tt>.<br>
: The original revision id was <tt>154322913</tt>.<br>
: The revision comment was: <tt></tt><br>
The revision contents are below.<br>
Given a [[Wedgies and Multivals|wedgie]] W, that is, a canonically reduced r-val corresponding to a temperament of rank r, the norm ||W|| is a measure of the //complexity// of W; that is, how many notes, in some weighted-average sense, it takes to reach intervals. For 1-vals, for instance, it is approximately equal to the number of scale steps it takes to reach an octave.
Wedgie complexity is easily computed if a routine for computing multivectors is available. However, such a routine is not required, as the complexity can also be computed using the [[http://en.wikipedia.org/wiki/Gramian_matrix|Gramian]]. This is the determinant of the square matrix, called the Gram matrix, defined from a list of r vectors, whose (i,j)-th entry is vi.vj, the [[http://en.wikipedia.org/wiki/Dot_product|dot product]] of the ith vector with the jth vector. The square of the ordinary Euclidean norm of a multivector is the Gramian of the vectors wedged together to define it, and hence in terms of the RMS norm we have
||W|| = ||v1^v2^...^vr|| = sqrt(det([vi.vj])/C(n, r))
where C(n, r) is the number of combinations of n things taken r at a time. Here n is the number of primes up to the prime limit p, and r is the rank of the temperament, which equals the number of vals wedged together to compute the wedgie.
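As a rough illustration of this Gramian route, here is a minimal Python sketch. It assumes the vals are Tenney-weighted (each coordinate divided by log2 of its prime), a convention not spelled out in this excerpt, and the 12- and 19-ET vals in the example are supplied only to show the call:

<pre>
import numpy as np
from math import comb

def te_complexity(vals, primes):
    """||W|| = sqrt(det([vi.vj]) / C(n, r)) for a list of r vals.

    Assumes Tenney weighting: each val coordinate is divided by log2
    of the corresponding prime before the dot products are taken.
    """
    V = np.array(vals, dtype=float) / np.log2(np.array(primes, dtype=float))
    n, r = len(primes), len(vals)
    gram = V @ V.T                      # Gram matrix: (i, j) entry is vi.vj
    return float(np.sqrt(np.linalg.det(gram) / comb(n, r)))

# Example: the 7-limit 12- and 19-ET patent vals (septimal meantone)
print(te_complexity([[12, 19, 28, 34], [19, 30, 44, 53]], [2, 3, 5, 7]))
</pre>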
<h3>Relative error</h3>
If J = <1 1 ... 1| is the JI point, then the relative error of W is defined as ||J^W||. Relative error is error in proportion to the complexity, or size, of the multival; in particular, for a 1-val it is (weighted) error compared to the size of a step. Once again, if we have a list of vectors we may use a Gramian to compute relative error. First we note that ai = J.vi/n is the mean value of the entries of vi. Then note that J^(v1-a1*J)^(v2-a2*J)^...^(vr-ar*J) = J^v1^v2^...^vr, since any wedge product containing J more than once is zero. The Gram matrix of the vectors J and vi-ai*J will have n as its (1,1) entry, and 0s in the rest of the first row and column. Hence we obtain

||J^W|| = sqrt(n det([vi.vj/n - ai*aj]))
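A matching Python sketch of this Gramian route to relative error, under the same Tenney-weighting assumption as the complexity sketch above, and implementing the formula exactly as stated here:

<pre>
import numpy as np

def te_relative_error(vals, primes):
    """||J^W|| = sqrt(n * det([vi.vj / n - ai * aj])).

    ai = J.vi / n is the mean of the (Tenney-weighted) entries of vi;
    the weighting convention is assumed, as in the complexity sketch.
    """
    V = np.array(vals, dtype=float) / np.log2(np.array(primes, dtype=float))
    n = len(primes)
    a = V.mean(axis=1)                          # ai = J.vi / n
    M = V @ V.T / n - np.outer(a, a)            # (i, j) entry: vi.vj/n - ai*aj
    return float(np.sqrt(n * np.linalg.det(M)))

# Example: same 12- and 19-ET vals as above
print(te_relative_error([[12, 19, 28, 34], [19, 30, 44, 53]], [2, 3, 5, 7]))
</pre>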