Tenney–Euclidean temperament measures: Difference between revisions

Wikispaces>mbattaglia1
**Imported revision 386803464 - Original comment: **
Wikispaces>guest
**Imported revision 419018098 - Original comment: **
<h2>IMPORTED REVISION FROM WIKISPACES</h2>
This is an imported revision from Wikispaces. The revision metadata is included below for reference:<br>
: This revision was by author [[User:mbattaglia1|mbattaglia1]] and made on <tt>2012-11-28 03:17:36 UTC</tt>.<br>
: This revision was by author [[User:guest|guest]] and made on <tt>2013-03-31 04:08:15 UTC</tt>.<br>
: The original revision id was <tt>386803464</tt>.<br>
: The original revision id was <tt>419018098</tt>.<br>
: The revision comment was: <tt></tt><br>
: The revision comment was: <tt></tt><br>
The revision contents are below, presented in the original Wikispaces Wikitext format.<br>
=TE Complexity=  
Given a [[Wedgies and Multivals|wedgie]] M, that is, a canonically reduced r-val corresponding to a temperament of rank r, the norm ||M|| is a measure of the //complexity// of M: roughly, how many notes, in a suitably weighted average, it takes to reach the intervals the temperament approximates. For 1-vals, for instance, it is approximately equal to the number of scale steps it takes to reach an octave. This complexity and related measures have been [[http://x31eq.com/temper/primerr.pdf|extensively studied]] by [[Graham Breed]], and we may call it TE or Tenney-Euclidean complexity, as it may also be defined in terms of the [[Tenney-Euclidean metrics|Tenney-Euclidean norm]].
 
In fact, while TE complexity is easily computed if a routine for computing multivectors is available, such a routine is not required, as the complexity can also be computed using the [[http://en.wikipedia.org/wiki/Gramian_matrix|Gramian]]. This is the determinant of the Gram matrix, the square matrix defined from a list of r vectors whose (i,j)-th entry is vi.vj, the [[http://en.wikipedia.org/wiki/Dot_product|dot product]] of the i-th vector with the j-th vector. The square of the ordinary Euclidean norm of a multivector is the Gramian of the vectors wedged together to define it, and hence in terms of the TE norm we have
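As a concrete sketch of this Gramian computation (the choice of 5-limit meantone and the vals &lt;12 19 28| and &lt;7 11 16| is an assumed example, not taken from the text):

```python
import math

# Hypothetical example: 5-limit meantone from the vals <12 19 28| and <7 11 16|.
A = [[12, 19, 28], [7, 11, 16]]                       # unweighted vals as rows
w = [1.0, 1.0 / math.log2(3), 1.0 / math.log2(5)]     # Tenney weights 1/log2(p)
V = [[a * wj for a, wj in zip(row, w)] for row in A]  # weighted vals

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

# Gram matrix [vi.vj] and its determinant (the Gramian)
G = [[dot(vi, vj) for vj in V] for vi in V]
gramian = G[0][0] * G[1][1] - G[0][1] * G[1][0]

# Cross-check: the Gramian equals the squared Euclidean norm of the
# weighted wedgie v1 ∧ v2, computed here coefficient by coefficient
wedgie = [V[0][i] * V[1][j] - V[0][j] * V[1][i]
          for (i, j) in ((0, 1), (0, 2), (1, 2))]
norm_sq = dot(wedgie, wedgie)
print(math.sqrt(gramian))   # the (unnormalized) norm ||M||
```

Breed's published complexity figures apply a further normalization depending on the number of primes, so the value printed here should be read as the unnormalized norm.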




If V is a matrix whose rows are the weighted vals vi, we may write det([vi.vj]) as det(VV*), where V* denotes the transpose. If W is the [[http://en.wikipedia.org/wiki/Diagonal_matrix|diagonal matrix]] with 1, 1/log2(3), ..., 1/log2(p) along the diagonal and A is the matrix corresponding to V with unweighted vals as rows, then V = AW and det(VV*) = det(AW^2A*). This may be related to the [[Tenney-Euclidean metrics|TE tuning projection matrix]] P, which is V*(VV*)^(-1)V, and the corresponding matrix for unweighted monzos **P** = A*(AW^2A*)^(-1)A.
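A small sketch of the projection-matrix formula, reusing the same assumed meantone example; since the rank is 2, only a 2x2 inverse is needed:

```python
import math

# Hypothetical example: 5-limit meantone, vals <12 19 28| and <7 11 16|
A = [[12, 19, 28], [7, 11, 16]]
w = [1.0, 1.0 / math.log2(3), 1.0 / math.log2(5)]
V = [[a * wj for a, wj in zip(row, w)] for row in A]  # V = AW

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def transpose(X):
    return [list(col) for col in zip(*X)]

G = matmul(V, transpose(V))                # VV*, the 2x2 Gram matrix
d = G[0][0] * G[1][1] - G[0][1] * G[1][0]  # det(VV*), the Gramian
Ginv = [[ G[1][1] / d, -G[0][1] / d],
        [-G[1][0] / d,  G[0][0] / d]]      # (VV*)^(-1) by the 2x2 formula
P = matmul(matmul(transpose(V), Ginv), V)  # P = V*(VV*)^(-1)V, a 3x3 matrix
P2 = matmul(P, P)                          # a projection satisfies P^2 = P
```

Since P is an orthogonal projection onto the row space of V, it is symmetric and idempotent, which gives a quick sanity check on the computation.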
 
=TE simple badness=  
If J = &lt;1 1 ... 1| is the JI point in weighted coordinates, then the simple badness of M, which we may also call the relative error of M, is defined as ||J∧M||. This may be considered a sort of badness which heavily favors complex temperaments, or it may be considered error relativized to the complexity of the temperament: relative error is error scaled by the complexity, or size, of the multival; in particular for a 1-val, it is (weighted) error compared to the size of a step. Once again, if we have a list of vectors we may use a Gramian to compute relative error. First we note that ai = J.vi/n is the mean value of the entries of vi. Then note that J∧(v1-a1J)∧(v2-a2J)∧...∧(vr-arJ) = J∧v1∧v2∧...∧vr, since any wedge product in which J appears more than once is zero. The Gram matrix of the vectors J and vi-aiJ will have n as the (1,1) entry, and 0s in the rest of the first row and column. Hence we obtain:
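The Gramian route to ||J∧M|| can be sketched as follows (same assumed meantone example; in the 5-limit, J∧v1∧v2 has a single coefficient, the 3x3 determinant of the stacked vectors, which provides an independent check):

```python
import math

# Hypothetical example: 5-limit meantone, vals <12 19 28| and <7 11 16|
A = [[12, 19, 28], [7, 11, 16]]
w = [1.0, 1.0 / math.log2(3), 1.0 / math.log2(5)]
V = [[a * wj for a, wj in zip(row, w)] for row in A]  # weighted vals
J = [1.0, 1.0, 1.0]                       # JI point in weighted coordinates

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def det3(M):
    # cofactor expansion along the first row
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
          - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
          + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

vecs = [J] + V                                   # J together with the vals
G = [[dot(u, v) for v in vecs] for u in vecs]    # Gram matrix of J, v1, v2
badness = math.sqrt(det3(G))                     # ||J ∧ v1 ∧ v2|| = ||J∧M||

# a1 = J.v1/n is the mean of the entries of v1, so J.(v1 - a1*J) = 0
a1 = dot(J, V[0]) / 3
print(badness)
```

The assertion that the Gramian equals the squared determinant of the stacked vectors is just det(MM*) = det(M)^2 for a square M, which is why the two computations must agree.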