Tenney–Euclidean temperament measures: Difference between revisions
Wikispaces>genewardsmith **Imported revision 175005563 - Original comment: ** |
Wikispaces>genewardsmith **Imported revision 175571321 - Original comment: ** |
||
Line 1: | Line 1: | ||
<h2>IMPORTED REVISION FROM WIKISPACES</h2> | <h2>IMPORTED REVISION FROM WIKISPACES</h2> | ||
This is an imported revision from Wikispaces. The revision metadata is included below for reference:<br> | This is an imported revision from Wikispaces. The revision metadata is included below for reference:<br> | ||
: This revision was by author [[User:genewardsmith|genewardsmith]] and made on <tt>2010- | : This revision was by author [[User:genewardsmith|genewardsmith]] and made on <tt>2010-11-02 00:26:38 UTC</tt>.<br> | ||
: The original revision id was <tt> | : The original revision id was <tt>175571321</tt>.<br> | ||
: The revision comment was: <tt></tt><br> | : The revision comment was: <tt></tt><br> | ||
The revision contents are below, presented both in the original Wikispaces Wikitext format, and in HTML exactly as Wikispaces rendered it.<br> | The revision contents are below, presented both in the original Wikispaces Wikitext format, and in HTML exactly as Wikispaces rendered it.<br> | ||
Line 8: | Line 8: | ||
<div style="width:100%; max-height:400pt; overflow:auto; background-color:#f8f9fa; border: 1px solid #eaecf0; padding:0em"><pre style="margin:0px;border:none;background:none;word-wrap:break-word;white-space: pre-wrap ! important" class="old-revision-html">Given a [[Wedgies and Multivals|multival]] or multimonzo which is a [[http://en.wikipedia.org/wiki/Exterior_algebra|wedge product]] of weighted vals or monzos, we may define a norm by means of the usual Euclidean norm. We can rescale this by taking the sum of squares of the entries of the multivector, dividing by the number of entries, and taking the square root. This will give a norm which is the RMS ([[http://en.wikipedia.org/wiki/Root_mean_square|root mean square]]) average of the entries of the multivector. The point of this normalization is that measures of corresponding temperaments in different prime limits can be meaningfully compared. If M is a multivector, we denote the RMS norm as ||M||. | <div style="width:100%; max-height:400pt; overflow:auto; background-color:#f8f9fa; border: 1px solid #eaecf0; padding:0em"><pre style="margin:0px;border:none;background:none;word-wrap:break-word;white-space: pre-wrap ! important" class="old-revision-html">Given a [[Wedgies and Multivals|multival]] or multimonzo which is a [[http://en.wikipedia.org/wiki/Exterior_algebra|wedge product]] of weighted vals or monzos, we may define a norm by means of the usual Euclidean norm. We can rescale this by taking the sum of squares of the entries of the multivector, dividing by the number of entries, and taking the square root. This will give a norm which is the RMS ([[http://en.wikipedia.org/wiki/Root_mean_square|root mean square]]) average of the entries of the multivector. The point of this normalization is that measures of corresponding temperaments in different prime limits can be meaningfully compared. If M is a multivector, we denote the RMS norm as ||M||. | ||
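To make the RMS normalization concrete, here is a minimal Python sketch (an illustration added here, not part of either revision): it divides the sum of squared entries by the number of entries before taking the square root. The example uses the 5-limit meantone wedgie ⟨⟨1 4 4]], with each (i,j) entry divided by log2(pi)·log2(pj) as the assumed Tenney weighting of a bival.

```python
import math

def rms_norm(multivector):
    """RMS norm: square root of the mean of the squared entries."""
    return math.sqrt(sum(x * x for x in multivector) / len(multivector))

# Weighted entries of the 5-limit meantone wedgie <<1 4 4]]:
# the entry for the prime pair (p_i, p_j) is divided by log2(p_i)*log2(p_j),
# with pairs (2,3), (2,5), (3,5) and log2(2) = 1.
log3, log5 = math.log2(3), math.log2(5)
weighted = [1 / log3, 4 / log5, 4 / (log3 * log5)]
print(rms_norm(weighted))  # about 1.2311
```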
=== | ===TE Complexity=== | ||
Given a [[Wedgies and Multivals|wedgie]] M, that is a canonically reduced r-val corresponding to a temperament of rank r, the norm ||M|| is a measure of the //complexity// of M; that is, how many notes in some sort of weighted average it takes to get to intervals. For 1-vals, for instance, it is approximately equal to the number of scale steps it takes to reach an octave. This complexity and related measures have been [[http://x31eq.com/temper/primerr.pdf|extensively studied]] by [[Graham Breed]]. | Given a [[Wedgies and Multivals|wedgie]] M, that is a canonically reduced r-val corresponding to a temperament of rank r, the norm ||M|| is a measure of the //complexity// of M; that is, how many notes in some sort of weighted average it takes to get to intervals. For 1-vals, for instance, it is approximately equal to the number of scale steps it takes to reach an octave. This complexity and related measures have been [[http://x31eq.com/temper/primerr.pdf|extensively studied]] by [[Graham Breed]], and we may call it TE or Tenney-Euclidean complexity, as it may also be defined in terms of the [[Tenney-Euclidean metrics|Tenney-Euclidean norm]]. | ||
In fact, while TE complexity is easily computed if a routine for computing multivectors is available, such a routine is not required, as it can also be computed using the [[http://en.wikipedia.org/wiki/Gramian_matrix|Gramian]]. This is the determinant of the square matrix, called the Gram matrix, defined from a list of r vectors, whose (i,j)-th entry is vi.vj, the [[http://en.wikipedia.org/wiki/Dot_product|dot product]] of the ith vector with the jth vector. The square of the ordinary Euclidean norm of a multivector is the Gramian of the vectors wedged together to define it, and hence in terms of the TE norm we have | |||
||M|| ``=`` ||v1^v2^...^vr|| ``=`` sqrt(det([vi.vj])/C(n, r)) | ||M|| ``=`` ||v1^v2^...^vr|| ``=`` sqrt(det([vi.vj])/C(n, r)) | ||
where C(n, r) is the number of combinations of n things taken r at a time. Here n is the number of primes up to the prime limit p, and r is the rank of the temperament, which equals the number of vals wedged together to compute the wedgie. | where C(n, r) is the number of combinations of n things taken r at a time, and vi.vj is the TE [[http://en.wikipedia.org/wiki/Symmetric_bilinear_form|symmetric form]] on vals, which in weighted coordinates is simply the ordinary dot product. Here n is the number of primes up to the prime limit p, and r is the rank of the temperament, which equals the number of vals wedged together to compute the wedgie. | ||
If V is a matrix whose rows are the weighted vals vi, we may write det([vi.vj]) as det(VV*), where V* denotes the transpose. If W is a [[http://en.wikipedia.org/wiki/Diagonal_matrix|diagonal matrix]] with 1, 1/log2(3), ..., 1/log2(p) along the diagonal and A is the matrix corresponding to V with unweighted vals as rows, then V = AW and det(VV*) = det(AW^2A*). This may be related to the [[Tenney-Euclidean metrics| | If V is a matrix whose rows are the weighted vals vi, we may write det([vi.vj]) as det(VV*), where V* denotes the transpose. If W is a [[http://en.wikipedia.org/wiki/Diagonal_matrix|diagonal matrix]] with 1, 1/log2(3), ..., 1/log2(p) along the diagonal and A is the matrix corresponding to V with unweighted vals as rows, then V = AW and det(VV*) = det(AW^2A*). This may be related to the [[Tenney-Euclidean metrics|TE tuning projection matrix]] P, which is V*(VV*)^(-1)V, and the corresponding matrix for unweighted monzos **P** = A*(AW^2A*)^(-1)A. | ||
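The Gramian route can be sketched in a few lines of Python (an illustrative addition, assuming Tenney weighting 1/log2(p); the function names are ours, not from the article). It builds the weighted val matrix V = AW, forms the Gram matrix of dot products, and applies ||M|| = sqrt(det([vi.vj])/C(n, r)).

```python
import math

def det(m):
    """Determinant by Laplace expansion along the first row;
    fine for the small matrices that occur here."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j]
               * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def te_complexity(vals, primes):
    """||M|| = sqrt(det([vi.vj]) / C(n, r)) for the weighted vals vi."""
    n, r = len(primes), len(vals)
    # Weight each val entry by 1/log2(p), i.e. form V = AW.
    w = [[v[i] / math.log2(p) for i, p in enumerate(primes)] for v in vals]
    gram = [[sum(x * y for x, y in zip(w[i], w[j])) for j in range(r)]
            for i in range(r)]
    return math.sqrt(det(gram) / math.comb(n, r))

# 5-limit meantone from the 12- and 19-note vals.
print(te_complexity([[12, 19, 28], [19, 30, 44]], [2, 3, 5]))
```

By the Cauchy–Binet formula the Gramian equals the sum of the squared entries of the weighted wedgie, so this agrees with the RMS norm of ⟨⟨1 4 4]] computed directly.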
=== | ===TE simple badness=== | ||
If J = <1 1 ... 1| is the JI point, then the relative error of M is defined as ||J^M||. This may | If J = <1 1 ... 1| is the JI point in weighted coordinates, then the simple badness of M, which we may also call the relative error of M, is defined as ||J^M||. This may be considered a sort of badness which heavily favors complex temperaments, or it may be considered error relativized to the complexity of the temperament: relative error is error proportional to the complexity, or size, of the multival; in particular for a 1-val, it is (weighted) error compared to the size of a step. Once again, if we have a list of vectors we may use a Gramian to compute relative error. First we note that ai = J.vi/n is the mean value of the entries of vi. Then note that J^(v1-a1J)^(v2-a2J)^...^(vr-arJ) = J^v1^v2^...^vr, since any wedge product in which J appears more than once is zero. The Gram matrix of the vectors J and vi-aiJ will have n as the (1,1) entry, and 0s in the rest of the first row and column. Hence we obtain | ||
||J^M|| = sqrt((n/C(n, r+1)) det([vi.vj - n ai aj])) | ||J^M|| = sqrt((n/C(n, r+1)) det([vi.vj - n ai aj])) | ||
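The simple badness formula can be sketched the same way (again an illustrative addition with assumed Tenney weighting and our own function names): compute the means ai = J.vi/n of the weighted vals, form the matrix [vi.vj - n ai aj], and apply the formula above.

```python
import math

def det(m):
    """Determinant by Laplace expansion; adequate for small matrices."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j]
               * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def te_simple_badness(vals, primes):
    """||J^M|| = sqrt((n / C(n, r+1)) * det([vi.vj - n*ai*aj]))."""
    n, r = len(primes), len(vals)
    w = [[v[i] / math.log2(p) for i, p in enumerate(primes)] for v in vals]
    a = [sum(row) / n for row in w]  # ai = J.vi / n, the mean weighted entry
    g = [[sum(x * y for x, y in zip(w[i], w[j])) - n * a[i] * a[j]
          for j in range(r)] for i in range(r)]
    return math.sqrt((n / math.comb(n, r + 1)) * det(g))

print(te_simple_badness([[12, 19, 28], [19, 30, 44]], [2, 3, 5]))
```

As a cross-check, in the 5-limit with r = 2 the trivector J^v1^v2 has the single entry det of the matrix with rows J, v1, v2 in weighted coordinates, and C(3, 3) = 1, so ||J^M|| is just the absolute value of that determinant.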
Line 28: | Line 28: | ||
<div style="width:100%; max-height:400pt; overflow:auto; background-color:#f8f9fa; border: 1px solid #eaecf0; padding:0em"><pre style="margin:0px;border:none;background:none;word-wrap:break-word;width:200%;white-space: pre-wrap ! important" class="old-revision-html"><html><head><title>Tenney-Euclidean temperament measures</title></head><body>Given a <a class="wiki_link" href="/Wedgies%20and%20Multivals">multival</a> or multimonzo which is a <a class="wiki_link_ext" href="http://en.wikipedia.org/wiki/Exterior_algebra" rel="nofollow">wedge product</a> of weighted vals or monzos, we may define a norm by means of the usual Euclidean norm. We can rescale this by taking the sum of squares of the entries of the multivector, dividing by the number of entries, and taking the square root. This will give a norm which is the RMS (<a class="wiki_link_ext" href="http://en.wikipedia.org/wiki/Root_mean_square" rel="nofollow">root mean square</a>) average of the entries of the multivector. The point of this normalization is that measures of corresponding temperaments in different prime limits can be meaningfully compared. If M is a multivector, we denote the RMS norm as ||M||.<br /> | <div style="width:100%; max-height:400pt; overflow:auto; background-color:#f8f9fa; border: 1px solid #eaecf0; padding:0em"><pre style="margin:0px;border:none;background:none;word-wrap:break-word;width:200%;white-space: pre-wrap ! important" class="old-revision-html"><html><head><title>Tenney-Euclidean temperament measures</title></head><body>Given a <a class="wiki_link" href="/Wedgies%20and%20Multivals">multival</a> or multimonzo which is a <a class="wiki_link_ext" href="http://en.wikipedia.org/wiki/Exterior_algebra" rel="nofollow">wedge product</a> of weighted vals or monzos, we may define a norm by means of the usual Euclidean norm. We can rescale this by taking the sum of squares of the entries of the multivector, dividing by the number of entries, and taking the square root. This will give a norm which is the RMS (<a class="wiki_link_ext" href="http://en.wikipedia.org/wiki/Root_mean_square" rel="nofollow">root mean square</a>) average of the entries of the multivector. The point of this normalization is that measures of corresponding temperaments in different prime limits can be meaningfully compared. If M is a multivector, we denote the RMS norm as ||M||.<br /> | ||
<br /> | <br /> | ||
<!-- ws:start:WikiTextHeadingRule:2:&lt;h3&gt; --><h3 id="toc0"><a name="x-- | <!-- ws:start:WikiTextHeadingRule:2:&lt;h3&gt; --><h3 id="toc0"><a name="x--TE Complexity"></a><!-- ws:end:WikiTextHeadingRule:2 -->TE Complexity</h3> | ||
Given a <a class="wiki_link" href="/Wedgies%20and%20Multivals">wedgie</a> M, that is a canonically reduced r-val corresponding to a temperament of rank r, the norm ||M|| is a measure of the <em>complexity</em> of M; that is, how many notes in some sort of weighted average it takes to get to intervals. For 1-vals, for instance, it is approximately equal to the number of scale steps it takes to reach an octave. This complexity and related measures have been <a class="wiki_link_ext" href="http://x31eq.com/temper/primerr.pdf" rel="nofollow">extensively studied</a> by <a class="wiki_link" href="/Graham%20Breed">Graham Breed</a>.<br /> | Given a <a class="wiki_link" href="/Wedgies%20and%20Multivals">wedgie</a> M, that is a canonically reduced r-val corresponding to a temperament of rank r, the norm ||M|| is a measure of the <em>complexity</em> of M; that is, how many notes in some sort of weighted average it takes to get to intervals. For 1-vals, for instance, it is approximately equal to the number of scale steps it takes to reach an octave. This complexity and related measures have been <a class="wiki_link_ext" href="http://x31eq.com/temper/primerr.pdf" rel="nofollow">extensively studied</a> by <a class="wiki_link" href="/Graham%20Breed">Graham Breed</a>, and we may call it TE or Tenney-Euclidean complexity, as it may also be defined in terms of the <a class="wiki_link" href="/Tenney-Euclidean%20metrics">Tenney-Euclidean norm</a>.<br /> | ||
<br /> | <br /> | ||
In fact, while TE complexity is easily computed if a routine for computing multivectors is available, such a routine is not required, as it can also be computed using the <a class="wiki_link_ext" href="http://en.wikipedia.org/wiki/Gramian_matrix" rel="nofollow">Gramian</a>. This is the determinant of the square matrix, called the Gram matrix, defined from a list of r vectors, whose (i,j)-th entry is vi.vj, the <a class="wiki_link_ext" href="http://en.wikipedia.org/wiki/Dot_product" rel="nofollow">dot product</a> of the ith vector with the jth vector. The square of the ordinary Euclidean norm of a multivector is the Gramian of the vectors wedged together to define it, and hence in terms of the TE norm we have <br /> | |||
<br /> | <br /> | ||
||M|| <!-- ws:start:WikiTextRawRule:00:``=`` -->=<!-- ws:end:WikiTextRawRule:00 --> ||v1^v2^...^vr|| <!-- ws:start:WikiTextRawRule:01:``=`` -->=<!-- ws:end:WikiTextRawRule:01 --> sqrt(det([vi.vj])/C(n, r))<br /> | ||M|| <!-- ws:start:WikiTextRawRule:00:``=`` -->=<!-- ws:end:WikiTextRawRule:00 --> ||v1^v2^...^vr|| <!-- ws:start:WikiTextRawRule:01:``=`` -->=<!-- ws:end:WikiTextRawRule:01 --> sqrt(det([vi.vj])/C(n, r))<br /> | ||
<br /> | <br /> | ||
where C(n, r) is the number of combinations of n things taken r at a time. Here n is the number of primes up to the prime limit p, and r is the rank of the temperament, which equals the number of vals wedged together to compute the wedgie.<br /> | where C(n, r) is the number of combinations of n things taken r at a time, and vi.vj is the TE <a class="wiki_link_ext" href="http://en.wikipedia.org/wiki/Symmetric_bilinear_form" rel="nofollow">symmetric form</a> on vals, which in weighted coordinates is simply the ordinary dot product. Here n is the number of primes up to the prime limit p, and r is the rank of the temperament, which equals the number of vals wedged together to compute the wedgie.<br /> | ||
<br /> | <br /> | ||
If V is a matrix whose rows are the weighted vals vi, we may write det([vi.vj]) as det(VV*), where V* denotes the transpose. If W is a <a class="wiki_link_ext" href="http://en.wikipedia.org/wiki/Diagonal_matrix" rel="nofollow">diagonal matrix</a> with 1, 1/log2(3), ..., 1/log2(p) along the diagonal and A is the matrix corresponding to V with unweighted vals as rows, then V = AW and det(VV*) = det(AW^2A*). This may be related to the <a class="wiki_link" href="/Tenney-Euclidean%20metrics"> | If V is a matrix whose rows are the weighted vals vi, we may write det([vi.vj]) as det(VV*), where V* denotes the transpose. If W is a <a class="wiki_link_ext" href="http://en.wikipedia.org/wiki/Diagonal_matrix" rel="nofollow">diagonal matrix</a> with 1, 1/log2(3), ..., 1/log2(p) along the diagonal and A is the matrix corresponding to V with unweighted vals as rows, then V = AW and det(VV*) = det(AW^2A*). This may be related to the <a class="wiki_link" href="/Tenney-Euclidean%20metrics">TE tuning projection matrix</a> P, which is V*(VV*)^(-1)V, and the corresponding matrix for unweighted monzos <strong>P</strong> = A*(AW^2A*)^(-1)A.<br /> | ||
<br /> | <br /> | ||
<!-- ws:start:WikiTextHeadingRule:4:&lt;h3&gt; --><h3 id="toc1"><a name="x-- | <!-- ws:start:WikiTextHeadingRule:4:&lt;h3&gt; --><h3 id="toc1"><a name="x--TE simple badness"></a><!-- ws:end:WikiTextHeadingRule:4 -->TE simple badness</h3> | ||
If J = &lt;1 1 ... 1| is the JI point, then the relative error of M is defined as ||J^M||. This may | If J = &lt;1 1 ... 1| is the JI point in weighted coordinates, then the simple badness of M, which we may also call the relative error of M, is defined as ||J^M||. This may be considered a sort of badness which heavily favors complex temperaments, or it may be considered error relativized to the complexity of the temperament: relative error is error proportional to the complexity, or size, of the multival; in particular for a 1-val, it is (weighted) error compared to the size of a step. Once again, if we have a list of vectors we may use a Gramian to compute relative error. First we note that ai = J.vi/n is the mean value of the entries of vi. Then note that J^(v1-a1J)^(v2-a2J)^...^(vr-arJ) = J^v1^v2^...^vr, since any wedge product in which J appears more than once is zero. The Gram matrix of the vectors J and vi-aiJ will have n as the (1,1) entry, and 0s in the rest of the first row and column. Hence we obtain<br /> | ||
<br /> | <br /> | ||
||J^M|| = sqrt((n/C(n, r+1)) det([vi.vj - n ai aj]))</body></html></pre></div> | ||J^M|| = sqrt((n/C(n, r+1)) det([vi.vj - n ai aj]))</body></html></pre></div> |