Tenney–Euclidean temperament measures: Difference between revisions
Wikispaces>mbattaglia1 **Imported revision 386803464 - Original comment: ** |
Wikispaces>guest **Imported revision 419018098 - Original comment: ** |
||
Line 1: | Line 1: | ||
<h2>IMPORTED REVISION FROM WIKISPACES</h2> | <h2>IMPORTED REVISION FROM WIKISPACES</h2> | ||
This is an imported revision from Wikispaces. The revision metadata is included below for reference:<br> | This is an imported revision from Wikispaces. The revision metadata is included below for reference:<br> | ||
: This revision was by author [[User: | : This revision was by author [[User:guest|guest]] and made on <tt>2013-03-31 04:08:15 UTC</tt>.<br> | ||
: The original revision id was <tt> | : The original revision id was <tt>419018098</tt>.<br> | ||
: The revision comment was: <tt></tt><br> | : The revision comment was: <tt></tt><br> | ||
The revision contents are below, presented both in the original Wikispaces Wikitext format, and in HTML exactly as Wikispaces rendered it.<br> | The revision contents are below, presented both in the original Wikispaces Wikitext format, and in HTML exactly as Wikispaces rendered it.<br> | ||
Line 12: | Line 12: | ||
=TE Complexity= | =TE Complexity= | ||
Given a [[Wedgies and Multivals|wedgie]] M, that is a canonically reduced r-val corresponding to a temperament of rank r, the norm ||M|| is a measure of the //complexity// of M; that is, how many notes, in some sort of weighted average, it takes to reach its intervals. For 1-vals, for instance, it is approximately equal to the number of scale steps it takes to reach an octave. This complexity and related measures have been [[http://x31eq.com/temper/primerr.pdf|extensively studied]] by [[Graham Breed]], and we may call it TE or Tenney-Euclidean complexity, as it may also be defined in terms of the [[Tenney-Euclidean metrics|Tenney-Euclidean norm]]. | Given a [[Wedgies and Multivals|wedgie]] M, that is a canonically reduced r-val corresponding to a temperament of rank r, the norm ||M|| is a measure of the //complexity// of M; that is, how many notes, in some sort of weighted average, it takes to reach its intervals. For 1-vals, for instance, it is approximately equal to the number of scale steps it takes to reach an octave. This complexity and related measures have been [[http://x31eq.com/temper/primerr.pdf|extensively studied]] by [[Graham Breed]], and we may call it TE or Tenney-Euclidean complexity, as it may also be defined in terms of the [[Tenney-Euclidean metrics|Tenney-Euclidean norm]]. | ||
In fact, while TE complexity is easily computed if a routine for computing multivectors is available, such a routine is not required, as it can also be computed using the [[http://en.wikipedia.org/wiki/Gramian_matrix|Gramian]]. This is the determinant of the square matrix, called the Gram matrix, defined from a list of r vectors, whose (i,j)-th entry is vi.vj, the [[http://en.wikipedia.org/wiki/Dot_product|dot product]] of the ith vector with the jth vector. The square of the ordinary Euclidean norm of a multivector is the Gramian of the vectors wedged together to define it, and hence in terms of the TE norm we have | In fact, while TE complexity is easily computed if a routine for computing multivectors is available, such a routine is not required, as it can also be computed using the [[http://en.wikipedia.org/wiki/Gramian_matrix|Gramian]]. This is the determinant of the square matrix, called the Gram matrix, defined from a list of r vectors, whose (i,j)-th entry is vi.vj, the [[http://en.wikipedia.org/wiki/Dot_product|dot product]] of the ith vector with the jth vector. The square of the ordinary Euclidean norm of a multivector is the Gramian of the vectors wedged together to define it, and hence in terms of the TE norm we have | ||
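The Gramian computation just described can be sketched in a few lines of NumPy. The example temperament below (5-limit meantone taken as the span of the 12- and 19-note equal vals) is an assumption chosen for illustration, and the result is the unnormalized norm sqrt(det(VV*)); published TE complexity figures may differ from this by a normalizing factor.

```python
import numpy as np

def te_complexity(vals, primes):
    """Unnormalized sqrt(det(V V*)) of the temperament spanned by the
    given unweighted vals; normalization conventions vary by author."""
    A = np.array(vals, dtype=float)       # rows: unweighted vals
    W = np.diag(1 / np.log2(primes))      # Tenney weighting matrix
    V = A @ W                             # rows: weighted vals vi
    G = V @ V.T                           # Gram matrix [vi.vj]
    return np.sqrt(np.linalg.det(G))      # = ||v1 ^ v2 ^ ... ^ vr||

# 5-limit meantone as the span of <12 19 28| and <19 30 44|
print(te_complexity([[12, 19, 28], [19, 30, 44]], [2, 3, 5]))  # about 2.132
```

By the Cauchy–Binet formula, this determinant equals the sum of squared minors of V, which is exactly the squared Euclidean norm of the weighted wedgie, so no multivector routine is needed.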
Line 23: | Line 23: | ||
If V is a matrix whose rows are the weighted vals vi, we may write det([vi.vj]) as det(VV*), where V* denotes the transpose. If W is the [[http://en.wikipedia.org/wiki/Diagonal_matrix|diagonal matrix]] with 1, 1/log2(3), ..., 1/log2(p) along the diagonal and A is the matrix corresponding to V with unweighted vals as rows, then V = AW and det(VV*) = det(AW^2A*). This may be related to the [[Tenney-Euclidean metrics|TE tuning projection matrix]] P, which is V*(VV*)^(-1)V, and the corresponding matrix for unweighted monzos **P** = A*(AW^2A*)^(-1)A. | If V is a matrix whose rows are the weighted vals vi, we may write det([vi.vj]) as det(VV*), where V* denotes the transpose. If W is the [[http://en.wikipedia.org/wiki/Diagonal_matrix|diagonal matrix]] with 1, 1/log2(3), ..., 1/log2(p) along the diagonal and A is the matrix corresponding to V with unweighted vals as rows, then V = AW and det(VV*) = det(AW^2A*). This may be related to the [[Tenney-Euclidean metrics|TE tuning projection matrix]] P, which is V*(VV*)^(-1)V, and the corresponding matrix for unweighted monzos **P** = A*(AW^2A*)^(-1)A. | ||
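These identities can be checked numerically. The sketch below uses the same assumed example (5-limit meantone from the 12- and 19-equal vals) and verifies that det(VV*) = det(AW^2A*) and that P = V*(VV*)^(-1)V is an orthogonal projection fixing the weighted vals.

```python
import numpy as np

A = np.array([[12, 19, 28], [19, 30, 44]], dtype=float)  # unweighted vals
W = np.diag(1 / np.log2([2, 3, 5]))                      # weighting matrix
V = A @ W                                                # V = AW

# det(VV*) = det(AW^2A*)
assert np.isclose(np.linalg.det(V @ V.T), np.linalg.det(A @ W @ W @ A.T))

P = V.T @ np.linalg.inv(V @ V.T) @ V                 # P = V*(VV*)^(-1)V
P_unw = A.T @ np.linalg.inv(A @ W @ W @ A.T) @ A     # **P** = A*(AW^2A*)^(-1)A

# P is an orthogonal projection onto the row space of V:
assert np.allclose(P, P.T)      # symmetric
assert np.allclose(P @ P, P)    # idempotent
assert np.allclose(V @ P, V)    # fixes every weighted val
```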
=TE simple badness= | =TE simple badness= | ||
If J = <1 1 ... 1| is the JI point in weighted coordinates, then the simple badness of M, which we may also call the relative error of M, is defined as ||J∧M||. This may be considered a sort of badness which heavily favors complex temperaments, or it may be considered error relativized to the complexity of the temperament: relative error is error proportional to the complexity, or size, of the multival; in particular for a 1-val, it is (weighted) error compared to the size of a step. Once again, if we have a list of vectors we may use a Gramian to compute relative error. First we note that ai = J.vi/n is the mean value of the entries of vi. Then note that J∧(v1-a1J)∧(v2-a2J)∧...∧(vr-arJ) = J∧v1∧v2∧...∧vr, since any wedge product containing J more than once is zero. The Gram matrix of the vectors J and vi-aiJ will have n as the (1,1) entry, and 0s in the rest of the first row and column. Hence we obtain: | If J = <1 1 ... 1| is the JI point in weighted coordinates, then the simple badness of M, which we may also call the relative error of M, is defined as ||J∧M||. This may be considered a sort of badness which heavily favors complex temperaments, or it may be considered error relativized to the complexity of the temperament: relative error is error proportional to the complexity, or size, of the multival; in particular for a 1-val, it is (weighted) error compared to the size of a step. Once again, if we have a list of vectors we may use a Gramian to compute relative error. First we note that ai = J.vi/n is the mean value of the entries of vi. Then note that J∧(v1-a1J)∧(v2-a2J)∧...∧(vr-arJ) = J∧v1∧v2∧...∧vr, since any wedge product containing J more than once is zero. The Gram matrix of the vectors J and vi-aiJ will have n as the (1,1) entry, and 0s in the rest of the first row and column. Hence we obtain: | ||
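The centering trick above gives ||J∧M||^2 = n·det(Gram(vi − aiJ)). A sketch, again using the assumed 12&19 meantone example and leaving the result unnormalized (conventions vary by author), with a cross-check against the direct Gramian of J together with the vals:

```python
import numpy as np

def te_simple_badness(vals, primes):
    """Unnormalized ||J ^ v1 ^ ... ^ vr|| via the mean-centering trick:
    ||J ^ M||^2 = n * det(Gram(vi - ai*J)) with ai = J.vi/n."""
    A = np.array(vals, dtype=float)
    W = np.diag(1 / np.log2(primes))
    V = A @ W                                   # weighted vals vi
    n = V.shape[1]
    C = V - V.mean(axis=1, keepdims=True)       # rows: vi - ai*J
    return np.sqrt(n * np.linalg.det(C @ C.T))

# Cross-check: the direct Gram determinant of the rows J, v1, v2
A = np.array([[12, 19, 28], [19, 30, 44]], dtype=float)
W = np.diag(1 / np.log2([2, 3, 5]))
B = np.vstack([np.ones(3), A @ W])              # J stacked on the vals
direct = np.sqrt(np.linalg.det(B @ B.T))
assert np.isclose(te_simple_badness([[12, 19, 28], [19, 30, 44]], [2, 3, 5]),
                  direct)
```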
Line 40: | Line 41: | ||
<!-- ws:start:WikiTextHeadingRule:4:&lt;h1&gt; --><h1 id="toc1"><a name="TE Complexity"></a><!-- ws:end:WikiTextHeadingRule:4 -->TE Complexity</h1> | <!-- ws:start:WikiTextHeadingRule:4:&lt;h1&gt; --><h1 id="toc1"><a name="TE Complexity"></a><!-- ws:end:WikiTextHeadingRule:4 -->TE Complexity</h1> | ||
Given a <a class="wiki_link" href="/Wedgies%20and%20Multivals">wedgie</a> M, that is a canonically reduced r-val corresponding to a temperament of rank r, the norm ||M|| is a measure of the <em>complexity</em> of M; that is, how many notes, in some sort of weighted average, it takes to reach its intervals. For 1-vals, for instance, it is approximately equal to the number of scale steps it takes to reach an octave. This complexity and related measures have been <a class="wiki_link_ext" href="http://x31eq.com/temper/primerr.pdf" rel="nofollow">extensively studied</a> by <a class="wiki_link" href="/Graham%20Breed">Graham Breed</a>, and we may call it TE or Tenney-Euclidean complexity, as it may also be defined in terms of the <a class="wiki_link" href="/Tenney-Euclidean%20metrics">Tenney-Euclidean norm</a>.<br /> | Given a <a class="wiki_link" href="/Wedgies%20and%20Multivals">wedgie</a> M, that is a canonically reduced r-val corresponding to a temperament of rank r, the norm ||M|| is a measure of the <em>complexity</em> of M; that is, how many notes, in some sort of weighted average, it takes to reach its intervals. For 1-vals, for instance, it is approximately equal to the number of scale steps it takes to reach an octave. This complexity and related measures have been <a class="wiki_link_ext" href="http://x31eq.com/temper/primerr.pdf" rel="nofollow">extensively studied</a> by <a class="wiki_link" href="/Graham%20Breed">Graham Breed</a>, and we may call it TE or Tenney-Euclidean complexity, as it may also be defined in terms of the <a class="wiki_link" href="/Tenney-Euclidean%20metrics">Tenney-Euclidean norm</a>.<br /> | ||
<br /> | <br /> | ||
In fact, while TE complexity is easily computed if a routine for computing multivectors is available, such a routine is not required, as it can also be computed using the <a class="wiki_link_ext" href="http://en.wikipedia.org/wiki/Gramian_matrix" rel="nofollow">Gramian</a>. This is the determinant of the square matrix, called the Gram matrix, defined from a list of r vectors, whose (i,j)-th entry is vi.vj, the <a class="wiki_link_ext" href="http://en.wikipedia.org/wiki/Dot_product" rel="nofollow">dot product</a> of the ith vector with the jth vector. The square of the ordinary Euclidean norm of a multivector is the Gramian of the vectors wedged together to define it, and hence in terms of the TE norm we have<br /> | In fact, while TE complexity is easily computed if a routine for computing multivectors is available, such a routine is not required, as it can also be computed using the <a class="wiki_link_ext" href="http://en.wikipedia.org/wiki/Gramian_matrix" rel="nofollow">Gramian</a>. This is the determinant of the square matrix, called the Gram matrix, defined from a list of r vectors, whose (i,j)-th entry is vi.vj, the <a class="wiki_link_ext" href="http://en.wikipedia.org/wiki/Dot_product" rel="nofollow">dot product</a> of the ith vector with the jth vector. The square of the ordinary Euclidean norm of a multivector is the Gramian of the vectors wedged together to define it, and hence in terms of the TE norm we have<br /> | ||
<br /> | <br /> | ||
Line 53: | Line 54: | ||
<br /> | <br /> | ||
If V is a matrix whose rows are the weighted vals vi, we may write det([vi.vj]) as det(VV*), where V* denotes the transpose. If W is the <a class="wiki_link_ext" href="http://en.wikipedia.org/wiki/Diagonal_matrix" rel="nofollow">diagonal matrix</a> with 1, 1/log2(3), ..., 1/log2(p) along the diagonal and A is the matrix corresponding to V with unweighted vals as rows, then V = AW and det(VV*) = det(AW^2A*). This may be related to the <a class="wiki_link" href="/Tenney-Euclidean%20metrics">TE tuning projection matrix</a> P, which is V*(VV*)^(-1)V, and the corresponding matrix for unweighted monzos <strong>P</strong> = A*(AW^2A*)^(-1)A.<br /> | If V is a matrix whose rows are the weighted vals vi, we may write det([vi.vj]) as det(VV*), where V* denotes the transpose. If W is the <a class="wiki_link_ext" href="http://en.wikipedia.org/wiki/Diagonal_matrix" rel="nofollow">diagonal matrix</a> with 1, 1/log2(3), ..., 1/log2(p) along the diagonal and A is the matrix corresponding to V with unweighted vals as rows, then V = AW and det(VV*) = det(AW^2A*). This may be related to the <a class="wiki_link" href="/Tenney-Euclidean%20metrics">TE tuning projection matrix</a> P, which is V*(VV*)^(-1)V, and the corresponding matrix for unweighted monzos <strong>P</strong> = A*(AW^2A*)^(-1)A.<br /> | ||
<br /> | <br /> | ||
<!-- ws:start:WikiTextHeadingRule:6:&lt;h1&gt; --><h1 id="toc2"><a name="TE simple badness"></a><!-- ws:end:WikiTextHeadingRule:6 -->TE simple badness</h1> | <!-- ws:start:WikiTextHeadingRule:6:&lt;h1&gt; --><h1 id="toc2"><a name="TE simple badness"></a><!-- ws:end:WikiTextHeadingRule:6 -->TE simple badness</h1> | ||
If J = &lt;1 1 ... 1| is the JI point in weighted coordinates, then the simple badness of M, which we may also call the relative error of M, is defined as ||J∧M||. This may be considered a sort of badness which heavily favors complex temperaments, or it may be considered error relativized to the complexity of the temperament: relative error is error proportional to the complexity, or size, of the multival; in particular for a 1-val, it is (weighted) error compared to the size of a step. Once again, if we have a list of vectors we may use a Gramian to compute relative error. First we note that ai = J.vi/n is the mean value of the entries of vi. Then note that J∧(v1-a1J)∧(v2-a2J)∧...∧(vr-arJ) = J∧v1∧v2∧...∧vr, since any wedge product containing J more than once is zero. The Gram matrix of the vectors J and vi-aiJ will have n as the (1,1) entry, and 0s in the rest of the first row and column. Hence we obtain:<br /> | If J = &lt;1 1 ... 1| is the JI point in weighted coordinates, then the simple badness of M, which we may also call the relative error of M, is defined as ||J∧M||. This may be considered a sort of badness which heavily favors complex temperaments, or it may be considered error relativized to the complexity of the temperament: relative error is error proportional to the complexity, or size, of the multival; in particular for a 1-val, it is (weighted) error compared to the size of a step. Once again, if we have a list of vectors we may use a Gramian to compute relative error. First we note that ai = J.vi/n is the mean value of the entries of vi. Then note that J∧(v1-a1J)∧(v2-a2J)∧...∧(vr-arJ) = J∧v1∧v2∧...∧vr, since any wedge product containing J more than once is zero. The Gram matrix of the vectors J and vi-aiJ will have n as the (1,1) entry, and 0s in the rest of the first row and column. Hence we obtain:<br /> | ||