Tenney–Euclidean temperament measures
{{Texops}}
The '''Tenney–Euclidean temperament measures''' ('''TE temperament measures''') consist of TE complexity, TE error, and TE simple badness. These are evaluations of a temperament's [[complexity]], [[error]], and [[badness]], respectively, and they follow the identity
$$ \text{TE simple badness} = \text{TE complexity} \times \text{TE error} $$


== Preliminaries ==
There have been several minor variations in the definition of TE temperament measures, which differ from each other only in their choice of multiplicative scaling factor. These differences arise because different averaging methods are adopted for the entries of a multivector.


To start with, we may define a norm by means of the usual {{w|norm (mathematics) #Euclidean norm|Euclidean norm}}, a.k.a. ''L''<sup>2</sup> norm or ℓ<sub>2</sub> norm. The result is, in effect, a kind of sum over all the entries. We can rescale this in several ways, for example by taking a {{w|root mean square}} (RMS) average of the entries.


Here are the different standards for scaling that are commonly in use:
# Taking the simple ''L''<sup>2</sup> norm
# Taking an RMS
# Taking an RMS and also normalizing for the temperament rank
# Any of the above and also dividing by the norm of the just intonation point ([[JIP]]).


As these metrics are mainly used to rank temperaments within the same [[rank]] and [[just intonation subgroup]], it does not matter much which scheme is used, because they are equivalent up to a scaling factor, so they will rank temperaments identically. As a result, it is somewhat common to equivocate between the various choices of scaling factor, and treat the entire thing as "the" Tenney–Euclidean norm, so that we are really only concerned with the results of these metrics up to that equivalence.


Graham Breed's original definitions from his ''primerr.pdf'' paper tend to use the third definition, as do parts of his [https://x31eq.com/temper/ temperament finder], although other scaling and normalization methods are sometimes used as well.


It is also possible to normalize the metrics to allow us to meaningfully compare temperaments across subgroups and even ranks. [[Sintel]]'s scheme in 2023 is the first attempt at this goal<ref name="sintel">Sintel. [https://github.com/Sin-tel/temper/blob/c0d5c36e3c189f64860f4aea288ff3ff3bc34982/lib_temper/temper.py "Collection of functions for dealing with regular temperaments"], Temperament Calculator.</ref>.


== TE complexity ==
{{Todo|rework|inline=1|text=Explain without wedgies}}
 
Given a [[wedgie]] ''M'', that is a canonically reduced ''r''-val corresponding to a temperament of rank ''r'', the norm ‖''M''‖ is a measure of the complexity of ''M''; that is, how many notes in some sort of weighted average it takes to get to intervals. For 1-vals, for instance, it is approximately equal to the number of scale steps it takes to reach an octave.
 
Let us define the val weighting matrix ''W'' to be the {{w|diagonal matrix}} with values 1, 1/log<sub>2</sub>3, 1/log<sub>2</sub>5 … 1/log<sub>2</sub>''p'' along the diagonal. For the prime basis {{nowrap|''Q'' {{=}} {{val| 2 3 5 … ''p'' }} }},
 
$$ W = \operatorname {diag} (1/\log_2 (Q)) $$
 
If ''V'' is the mapping matrix of a temperament, then ''V<sub>W</sub>'' {{=}} ''VW'' is the mapping matrix in the weighted space, its rows being the weighted vals (''v''<sub>''w''</sub>)<sub>''i''</sub>.
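
As a concrete illustration (a minimal numpy sketch, not code from any of the temperament finders mentioned here), ''W'' and ''V''<sub>''W''</sub> can be computed for 5-limit [[meantone]], whose mapping matrix has rows {{val| 1 1 0 }} and {{val| 0 1 4 }}:

```python
import numpy as np

# 5-limit prime basis Q = (2, 3, 5)
primes = np.array([2, 3, 5])

# Val weighting matrix W = diag(1/log2(Q))
W = np.diag(1 / np.log2(primes))

# Mapping matrix of 5-limit meantone: rows are the vals <1 1 0] and <0 1 4]
V = np.array([[1, 1, 0],
              [0, 1, 4]])

# Weighted mapping matrix; its rows are the weighted vals
Vw = V @ W
```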
 
Our first complexity measure of a temperament is given by the ''L''<sup>2</sup> norm of the Tenney-weighted wedgie ''M''<sub>''W''</sub>, which can in turn be obtained from the Tenney-weighted mapping matrix ''V''<sub>''W''</sub>. This complexity can be easily computed either from the wedgie or from the mapping matrix, using the {{w|Gramian matrix|Gramian}}:  
 
$$ \norm{M_W}_2 = \sqrt {\det(V_W V_W^\mathsf{T})} $$
 
where det(·) denotes the determinant, and {{t}} denotes the transpose.
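
For example (an illustrative numpy sketch using 5-limit meantone; with {{nowrap|''n'' {{=}} 3}} and {{nowrap|''r'' {{=}} 2}} the wedge product reduces to a cross product, so the Gramian formula can be checked directly):

```python
import numpy as np

W = np.diag(1 / np.log2([2, 3, 5]))        # Tenney weighting for primes 2, 3, 5
V = np.array([[1, 1, 0],                   # 5-limit meantone mapping
              [0, 1, 4]])
Vw = V @ W                                 # weighted mapping

# L2 complexity from the Gramian of the weighted vals
complexity_l2 = np.sqrt(np.linalg.det(Vw @ Vw.T))

# With n = 3 and r = 2, the wedgie of the two weighted vals is their
# cross product, and its Euclidean norm agrees with the Gramian formula
wedgie = np.cross(Vw[0], Vw[1])
```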
 
Graham Breed and [[Gene Ward Smith]] have proposed different RMS norms. Let us denote the RMS norm of ''M'' as ‖''M''‖<sub>RMS</sub>. In Graham's paper<ref name="primerr">Graham Breed. [http://x31eq.com/temper/primerr.pdf ''Prime Based Error and Complexity Measures''], often referred to as ''primerr.pdf''.</ref>, an RMS norm is proposed as
 
$$ \norm{M_W}_\text{RMS} = \sqrt {\det \left( \frac {V_W V_W^\mathsf{T}}{n} \right)} = \frac {\norm{M_W}_2}{\sqrt {n^r}} $$
 
where ''n'' is the number of primes up to the prime limit ''p'', and ''r'' is the rank of the temperament. Thus ''n''<sup>''r''</sup> is the number of permutations of ''n'' things taken ''r'' at a time with repetition, which equals the number of entries of the wedgie in its full tensor form.
 
: '''Note''': that is the definition used by Graham Breed's temperament finder.
 
Gene Ward Smith's RMS norm is given as
 
$$ \norm{M_W}_\text{RMS'} = \sqrt {\frac{\det(V_W V_W^\mathsf{T})}{C(n, r)}} = \frac {\norm{M_W}_2}{\sqrt {C(n, r)}} $$
 
where {{nowrap|C(''n'', ''r'')}} is the number of combinations of ''n'' things taken ''r'' at a time without repetition, which equals the number of entries of the wedgie in the usual, compressed form.
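
The two RMS rescalings differ from the plain ''L''<sup>2</sup> norm only by the constant factors given above, which can be checked numerically (an illustrative sketch using 5-limit meantone, where {{nowrap|''n'' {{=}} 3}} and {{nowrap|''r'' {{=}} 2}}):

```python
import numpy as np
from math import comb

W = np.diag(1 / np.log2([2, 3, 5]))
V = np.array([[1, 1, 0],
              [0, 1, 4]])
Vw = V @ W
r, n = Vw.shape                            # rank r = 2, n = 3 primes

l2 = np.sqrt(np.linalg.det(Vw @ Vw.T))     # plain L2 norm of the wedgie
rms_breed = np.sqrt(np.linalg.det(Vw @ Vw.T / n))   # Graham Breed's RMS
rms_smith = l2 / np.sqrt(comb(n, r))       # Gene Ward Smith's RMS
```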
 
We may also note {{nowrap| det(''V''<sub>''W''</sub>''V''<sub>''W''</sub>{{t}}) {{=}} det(''VW''<sup>2</sup>''V''{{t}}) }}. This may be related to the [[Tenney–Euclidean metrics|TE tuning projection matrix]] ''P''<sub>''W''</sub>, which is ''V''<sub>''W''</sub>{{t}}(''V''<sub>''W''</sub>''V''<sub>''W''</sub>{{t}}){{inv}}''V''<sub>''W''</sub>, and the corresponding matrix for unweighted monzos {{nowrap|''P'' {{=}} ''V''{{t}}(''VW''<sup>2</sup>''V''{{t}}){{inv}}''V''}}.
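
Both identities in this paragraph can be verified numerically (an illustrative sketch; the projection matrix ''P''<sub>''W''</sub> should be symmetric and idempotent):

```python
import numpy as np

W = np.diag(1 / np.log2([2, 3, 5]))
V = np.array([[1, 1, 0],
              [0, 1, 4]])
Vw = V @ W

# det(V_W V_W^T) = det(V W^2 V^T)
g1 = np.linalg.det(Vw @ Vw.T)
g2 = np.linalg.det(V @ W @ W @ V.T)

# TE tuning projection matrix P_W = V_W^T (V_W V_W^T)^-1 V_W
Pw = Vw.T @ np.linalg.inv(Vw @ Vw.T) @ Vw
```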
 
Sintel has defined a complexity measure that serves as an intermediate step for his badness metric<ref name="sintel"/>, which we will get to later. To obtain this complexity, we normalize the Tenney-weighting matrix ''W'' to ''U'' such that {{nowrap| det(''U'') {{=}} 1 }}, and then take the ''L''<sup>2</sup> norm of ''M''<sub>''U''</sub>. It can be shown that
 
$$ U = W / \det(W)^{1/n} $$
 
and so the complexity is
 
$$ \norm{M_U}_2 = \norm{M_W}_2 / \det(W)^{r/n} $$
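
This normalization can be sketched as follows (illustrative numpy code using 5-limit meantone, not Sintel's actual implementation):

```python
import numpy as np

W = np.diag(1 / np.log2([2, 3, 5]))
V = np.array([[1, 1, 0],
              [0, 1, 4]])
n, r = 3, 2

U = W / np.linalg.det(W) ** (1 / n)        # rescaled so that det(U) = 1
complexity_W = np.sqrt(np.linalg.det(V @ W @ W @ V.T))   # ||M_W||_2
complexity_U = np.sqrt(np.linalg.det(V @ U @ U @ V.T))   # ||M_U||_2
```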
== TE error ==
We can consider TE error to be a weighted average of the errors of the [[prime harmonic]]s in [[TE tuning]], that is, a weighted average of the [[error map]] in the tuning where it is minimized. In this regard, TE error may be expressed in any logarithmic [[interval size unit]]s such as [[cent]]s or [[octave]]s.
 
As with complexity, we may simply define the TE error as the ''L''<sup>2</sup> norm of the weighted TE error map. If {{nowrap| ''T''<sub>''W''</sub> {{=}} ''TW'' }} is the weighted TE tuning map and {{nowrap| ''J''<sub>''W''</sub> {{=}} ''JW'' {{=}} {{val| 1 1 … 1 }} }} is the weighted just tuning map, then the TE error ''E'' is given by
 
$$
\begin{align}
E &= \norm{T_W - J_W}_2 \\
&= \norm{J_W(V_W^+ V_W - I) }_2 \\
&= \sqrt{J_W(V_W^+ V_W - I)(V_W^+ V_W - I)^\mathsf{T} J_W^\mathsf{T}}
\end{align}
$$
 
where <sup>+</sup> denotes the [[pseudoinverse]].
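
For example, the weighted error map and the resulting ''E'' can be computed with numpy's pseudoinverse (an illustrative sketch using 5-limit meantone; since the TE tuning is a least-squares solution, the error map should be orthogonal to the rows of ''V''<sub>''W''</sub>):

```python
import numpy as np

W = np.diag(1 / np.log2([2, 3, 5]))
V = np.array([[1, 1, 0],
              [0, 1, 4]])
Vw = V @ W
Jw = np.ones(3)                            # weighted just tuning map <1 1 1]

# Weighted error map of the TE tuning: J_W (V_W^+ V_W - I)
residual = Jw @ (np.linalg.pinv(Vw) @ Vw - np.eye(3))
E = np.linalg.norm(residual)               # TE error, in octaves
cents = 1200 * E                           # the same figure in cents
```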
 
Often, it is desirable to know the average of errors instead of the sum, which corresponds to Graham Breed's definition<ref name="primerr"/>. This error figure, ''G'', can be found by
 
$$
\begin{align}
G &= \norm{T_W - J_W}_\text{RMS} \\
&= E / \sqrt{n}
\end{align}
$$
 
: '''Note''': that is the definition used by Graham Breed's temperament finder.  
 
Gene Ward Smith defines the TE error as the ratio ‖''M''<sub>''W''</sub> ∧ ''J''<sub>''W''</sub>‖/‖''M''<sub>''W''</sub>‖, derived from the relationship of TE simple badness and TE complexity. See the next section. We denote this definition of TE error ''Ψ''. From {{nowrap|‖''M''<sub>''W''</sub> ∧ ''J''<sub>''W''</sub>‖/‖''M''<sub>''W''</sub>‖}} we can extract a coefficient {{nowrap| sqrt(''C''(''n'', ''r'' + 1)/''C''(''n'', ''r'')) {{=}} sqrt((''n'' − ''r'')/(''r'' + 1)) }}, which relates ''Ψ'' with ''E'' as follows:
 
$$ \Psi = \sqrt{\frac{r + 1}{n - r}} E $$
 
Also, if we set the rank ''r'' to 1, we get {{nowrap| (''n'' − 1)/2 }}. It follows that dividing the TE error by the square root of this value gives a constant of proportionality such that
 
$$ \psi = \sqrt{\frac{2}{n - 1}} E $$
 
gives another error, called the ''adjusted error'', which makes the error of a rank-''r'' temperament correspond to the errors of the edo vals which support it; so that requiring the edo val error to be less than {{nowrap|(1 + ''ε'')''ψ''}} for any positive ''ε'' results in an infinite set of vals supporting the temperament.
 
''G'' and ''ψ'' error both have the advantage that higher-rank temperament error corresponds directly to rank-1 error, but the RMS normalization has the further advantage that in the rank-1 case, {{nowrap| ''G'' {{=}} sin ''θ'' }} octaves, where ''θ'' is the angle between ''J''<sub>''W''</sub> and the val in question.  
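
The rank-1 case can be checked directly (an illustrative sketch using the 5-limit patent val of [[12edo]], {{val| 12 19 28 }}):

```python
import numpy as np

W = np.diag(1 / np.log2([2, 3, 5]))
v = np.array([12, 19, 28]) @ W             # weighted patent val of 12edo
Jw = np.ones(3)

# RMS error G: for rank 1, V_W^+ V_W projects J_W onto the val
residual = Jw @ (np.outer(v, v) / (v @ v) - np.eye(3))
G = np.linalg.norm(residual) / np.sqrt(3)  # in octaves

# Angle between the weighted val and the weighted just tuning map
cos_theta = (Jw @ v) / (np.linalg.norm(Jw) * np.linalg.norm(v))
sin_theta = np.sqrt(1 - cos_theta ** 2)
```

Here ''G'' and sin ''θ'' agree, as the text states.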
 
== TE simple badness ==
The '''TE simple badness''' of a temperament, which we may also call the '''relative error''' of a temperament, may be considered error relativized to the complexity of the temperament. It is error proportional to the complexity, or size, of the multival; in particular for a 1-val, it is (weighted) error compared to the size of a step.  
 
In general, if ''C'' is the complexity and ''E'' is the error of a temperament, then TE simple badness ''B'' is found by
 
$$ B = C \cdot E $$
 
Gene Ward Smith defines the simple badness of ''M'' as {{nowrap|‖''M''<sub>''W''</sub> ∧ ''J''<sub>''W''</sub>‖<sub>RMS</sub>}}. A perhaps simpler way to view this is to start with a mapping matrix ''V''<sub>''W''</sub> and add an extra row ''J''<sub>''W''</sub> corresponding to the just tuning map; we will label this matrix ''Ṽ''<sub>''W''</sub>. Then the simple badness is:
 
$$ \norm{ M_W \wedge J_W }_\text {RMS'} = \sqrt{\frac{\det(\tilde V_W \tilde V_W^\mathsf{T})}{C(n, r + 1)}} $$
 
Thus we can view the simple badness as the TE complexity of the "pseudo-temperament" formed by adding the JIP to the mapping matrix as if it were another val.
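
A numerical cross-check (an illustrative sketch, 5-limit meantone): computing the simple badness from the augmented matrix and dividing by the complexity should reproduce {{nowrap|''Ψ'' {{=}} sqrt((''r'' + 1)/(''n'' − ''r'')) ''E''}} from the section on TE error.

```python
import numpy as np
from math import comb

W = np.diag(1 / np.log2([2, 3, 5]))
V = np.array([[1, 1, 0],
              [0, 1, 4]])
Vw = V @ W
Jw = np.ones(3)
n, r = 3, 2

# Simple badness from the augmented ("pseudo-temperament") matrix
Vw_tilde = np.vstack([Vw, Jw])
badness = np.sqrt(np.linalg.det(Vw_tilde @ Vw_tilde.T) / comb(n, r + 1))

# RMS' complexity, and Gene's error Psi as their ratio
complexity = np.sqrt(np.linalg.det(Vw @ Vw.T) / comb(n, r))
Psi = badness / complexity

# L2 error E from the pseudoinverse definition
E = np.linalg.norm(Jw @ (np.linalg.pinv(Vw) @ Vw - np.eye(n)))
```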
 
Graham Breed defines the simple badness slightly differently, again equivalent to a choice of scaling, skipped here because it is derived from the general formula.
 
Sintel has likewise given a simple badness as
 
$$ \norm{ M_U \wedge J_U }_2 = \sqrt{\det(\tilde V_U \tilde V_U^\mathsf{T})} $$
 
where {{nowrap| ''J''<sub>''U''</sub> {{=}} ''J''<sub>''W''</sub>/det(''W'')<sup>1/''n''</sup> }} is the ''U''-weighted just tuning map.
 
=== Reduction to the span of a comma ===
It is notable that if ''M'' has codimension 1, we may view it as representing [[the dual]] of a single comma. In this situation, the simple badness happens to reduce to the [[Interval span|span]] of the comma, up to a constant multiplicative factor, so that the span of any comma can itself be thought of as measuring the error relative to the complexity of the temperament vanishing that comma.
 
This relationship also holds if TOP is used rather than TE, as the TOP damage associated with tempering out some comma ''n''/''d'' is log(''n''/''d'')/log(''nd''), and if we multiply by the complexity log(''nd''), we simply get log(''n''/''d'') as our result.
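
For instance (a quick numerical check, taking the syntonic comma 81/80 as the example), the TOP damage of tempering out 81/80 comes out near 1.7 ¢, and multiplying it by the span recovers the size of the comma:

```python
import numpy as np

n, d = 81, 80                              # the syntonic comma 81/80
span = np.log2(n * d)                      # Tenney height (span) of the comma
top_damage = np.log2(n / d) / np.log2(n * d)   # TOP damage in octaves

# damage times complexity gives back the (log) size of the comma
recovered = top_damage * span
```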
 
== TE logflat badness ==
Some consider the simple badness to be a sort of badness which favors complex temperaments. The '''logflat badness''' (called ''Dirichlet coefficients'' in Sintel's scheme) was developed to address that. If we define ''B'' to be the simple badness (relative error) of a temperament, and ''C'' to be the complexity, then the logflat badness ''L'' is defined by the formula
 
$$ L = B \cdot C^{r/(n - r)} $$
 
The exponent is chosen such that if we set a cutoff margin for logflat badness, infinitely many new temperaments still appear as complexity goes up, at a lower rate which is approximately logarithmic in terms of complexity.
 
In Graham's and Gene's derivations,
 
$$ L = \norm{ M_W \wedge J_W } \norm{M_W}^{r/(n - r)} $$
 
In Sintel's derivation,
 
$$ L = \norm{ M_U \wedge J_U } \norm{M_U}^{r/(n - r)} / \norm{J_U} $$
 
Notice the extra factor 1/‖''J''<sub>''U''</sub>‖, which is to say we divide by the norm of the just tuning map. For comparison, Gene's derivation does not have this factor, whereas with Tenney weights, whether this factor is omitted or not has no effect on Graham's derivation, since ‖''J''<sub>''W''</sub>‖<sub>RMS</sub> is unity.
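
A minimal sketch of the logflat badness in Gene's derivation (illustrative, 5-limit meantone; the exponent {{nowrap|''r''/(''n'' − ''r'')}} is 2 here):

```python
import numpy as np
from math import comb

W = np.diag(1 / np.log2([2, 3, 5]))
V = np.array([[1, 1, 0],
              [0, 1, 4]])
Vw = V @ W
Jw = np.ones(3)
n, r = 3, 2

Vw_tilde = np.vstack([Vw, Jw])             # mapping augmented with J_W
B = np.sqrt(np.linalg.det(Vw_tilde @ Vw_tilde.T) / comb(n, r + 1))   # simple badness
C = np.sqrt(np.linalg.det(Vw @ Vw.T) / comb(n, r))                   # complexity
L = B * C ** (r / (n - r))                 # logflat badness
```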
 
== Examples ==
While the different definitions yield different results, they are related to each other by a factor derived only from the rank and subgroup. A meaningful comparison of temperaments in the same rank and subgroup is provided by picking any one of them. Here, we consider septimal [[magic]] and [[meantone]], as follows.
 
{| class="wikitable center-all left-1"
|+ style="font-size: 105%;" | ''L''<sup>2</sup> norm
|-
! Temperament
! Complexity
! Error (¢)
! Simple badness
|-
| Septimal meantone
| 5.400
| 2.763
| 12.435×10<sup>−3</sup>
|-
| Septimal magic
| 7.195
| 2.149
| 12.882×10<sup>−3</sup>
|}
{| class="wikitable center-all left-1"
|+ style="font-size: 105%;" | Breed's RMS norm
|-
! Temperament
! Complexity
! Error (¢)
! Simple badness
|-
| Septimal meantone
| 1.350
| 1.382
| 1.554×10<sup>−3</sup>
|-
| Septimal magic
| 1.799
| 1.074
| 1.610×10<sup>−3</sup>
|}
{| class="wikitable center-all left-1"
|+ style="font-size: 105%;" | Smith's RMS norm
|-
! Temperament
! Complexity
! Error (¢)
! Simple badness
|-
| Septimal meantone
| 2.204
| 3.384
| 6.218×10<sup>−3</sup>
|-
| Septimal magic
| 2.937
| 2.631
| 6.441×10<sup>−3</sup>
|}
 
== See also ==
* [[Cangwu badness]] – a derived badness measure with a free parameter that enables one to specify a tradeoff between complexity and error
 
== Notes ==
<references/>
 
[[Category:Regular temperament theory]]
[[Category:Math]]
[[Category:Temperament complexity measures]]
[[Category:Badness]]

Latest revision as of 14:25, 25 June 2025

[math]\displaystyle{ \def\hs{\hspace{-3px}} \def\lvsp{{}\mkern-5.5mu}{} \def\rvsp{{}\mkern-2.5mu}{} \def\llangle{\left\langle\lvsp\left\langle} \def\lllangle{\left\langle\lvsp\left\langle\lvsp\left\langle} \def\llllangle{\left\langle\lvsp\left\langle\lvsp\left\langle\lvsp\left\langle} \def\llbrack{\left[\left[} \def\lllbrack{\left[\left[\left[} \def\llllbrack{\left[\left[\left[\left[} \def\llvert{\left\vert\left\vert} \def\lllvert{\left\vert\left\vert\left\vert} \def\llllvert{\left\vert\left\vert\left\vert\left\vert} \def\rrangle{\right\rangle\rvsp\right\rangle} \def\rrrangle{\right\rangle\rvsp\right\rangle\rvsp\right\rangle} \def\rrrrangle{\right\rangle\rvsp\right\rangle\rvsp\right\rangle\rvsp\right\rangle} \def\rrbrack{\right]\right]} \def\rrrbrack{\right]\right]\right]} \def\rrrrbrack{\right]\right]\right]\right]} \def\rrvert{\right\vert\right\vert} \def\rrrvert{\right\vert\right\vert\right\vert} \def\rrrrvert{\right\vert\right\vert\right\vert\right\vert} }[/math][math]\displaystyle{ \def\abs#1{\left|{#1}\right|} \def\norm#1{\left\|{#1}\right\|} \def\floor#1{\left\lfloor{#1}\right\rfloor} \def\ceil#1{\left\lceil{#1}\right\rceil} \def\round#1{\left\lceil{#1}\right\rfloor} \def\rround#1{\left\lfloor{#1}\right\rceil} }[/math] The Tenney–Euclidean temperament measures (TE temperament measures) consist of TE complexity, TE error, and TE simple badness. These are evaluations of a temperament's complexity, error, and badness, respectively, and they follow the identity

$$ \text{TE simple badness} = \text{TE complexity} \times \text{TE error} $$

Preliminaries

There have been several minor variations in the definition of TE temperament measures, which differ from each other only in their choice of multiplicative scaling factor. The reason these differences come up is because we are adopting different averaging methods for the entries of a multivector.

To start with, we may define a norm by means of the usual Euclidean norm, a.k.a. L2 norm or ℓ2 norm. The result of this is a kind of a sum of all the entries. We can rescale this in several ways, for example by taking a root mean square (RMS) average of the entries.

Here are the different standards for scaling that are commonly in use:

  1. Taking the simple L2 norm
  2. Taking an RMS
  3. Taking an RMS and also normalizing for the temperament rank
  4. Any of the above and also dividing by the norm of the just intonation points (JIP).

As these metrics are mainly used to rank temperaments within the same rank and just intonation subgroup, it does not matter much which scheme is used, because they are equivalent up to a scaling factor, so they will rank temperaments identically. As a result, it is somewhat common to equivocate between the various choices of scaling factor, and treat the entire thing as "the" Tenney–Euclidean norm, so that we are really only concerned with the results of these metrics up to that equivalence.

Graham Breed's original definitions from his primerr.pdf paper tend to use the third definition, as do parts of his temperament finder, although other scaling and normalization methods are sometimes used as well.

It is also possible to normalize the metrics to allow us to meaningfully compare temperaments across subgroups and even ranks. Sintel's scheme in 2023 is the first attempt at this goal[1].

TE complexity

Given a wedgie M, that is, a canonically reduced r-val corresponding to a temperament of rank r, the norm ‖M‖ is a measure of the complexity of M; that is, how many notes, in some sort of weighted average, it takes to reach intervals. For 1-vals, for instance, it is approximately equal to the number of scale steps it takes to reach an octave.

Let us define the val weighting matrix W to be the diagonal matrix with values 1, 1/log₂3, 1/log₂5, …, 1/log₂p along the diagonal. For the prime basis Q = [2 3 5 … p],

$$ W = \operatorname {diag} (1/\log_2 (Q)) $$
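As a quick illustration (a sketch assuming numpy, not code from this page), the weighting matrix for the 7-limit prime basis Q = [2 3 5 7] can be built as:

```python
import numpy as np

# Tenney weighting matrix for the 7-limit prime basis Q = [2 3 5 7]
Q = np.array([2.0, 3.0, 5.0, 7.0])
W = np.diag(1.0 / np.log2(Q))
print(np.diag(W))  # ≈ [1, 0.631, 0.431, 0.356]
```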

If V is the mapping matrix of a temperament, then V_W = VW is the mapping matrix in the weighted space, its rows being the weighted vals (v_W)_i.

Our first complexity measure of a temperament is given by the L2 norm of the Tenney-weighted wedgie M_W, which can in turn be obtained from the Tenney-weighted mapping matrix V_W. This complexity can be easily computed either from the wedgie or from the mapping matrix, using the Gramian:

$$ \norm{M_W}_2 = \sqrt {\det(V_W V_W^\mathsf{T})} $$

where det(·) denotes the determinant and (·)^T the transpose.
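For example, here is a minimal numpy sketch of this Gramian computation, using the standard septimal meantone mapping ⟨1 0 -4 -13], ⟨0 1 4 10]; the result matches the L2 complexity quoted in the example tables at the end of this page:

```python
import numpy as np

# Mapping for septimal meantone: <1 0 -4 -13], <0 1 4 10]
V = np.array([[1, 0, -4, -13],
              [0, 1,  4,  10]])
# Tenney weighting: diag(1/log2(q)) over the prime basis 2.3.5.7
W = np.diag(1.0 / np.log2([2.0, 3.0, 5.0, 7.0]))
VW = V @ W                                       # weighted mapping matrix
complexity = np.sqrt(np.linalg.det(VW @ VW.T))   # Gramian formula
print(complexity)  # ≈ 5.400
```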

Graham Breed and Gene Ward Smith have proposed different RMS norms. Let us denote the RMS norm of M as ‖M‖_RMS. In Graham's paper[2], an RMS norm is proposed as

$$ \norm{M_W}_\text{RMS} = \sqrt {\det \left( \frac {V_W V_W^\mathsf{T}}{n} \right)} = \frac {\norm{M_W}_2}{\sqrt {n^r}} $$

where n is the number of primes up to the prime limit p, and r is the rank of the temperament. Thus n^r is the number of permutations of n things taken r at a time with repetition, which equals the number of entries of the wedgie in its full tensor form.

Note: that is the definition used by Graham Breed's temperament finder.

Gene Ward Smith's RMS norm is given as

$$ \norm{M_W}_\text{RMS'} = \sqrt {\frac{\det(V_W V_W^\mathsf{T})}{C(n, r)}} = \frac {\norm{M_W}_2}{\sqrt {C(n, r)}} $$

where C(n, r) is the number of combinations of n things taken r at a time without repetition, which equals the number of entries of the wedgie in the usual, compressed form.
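Continuing the septimal meantone example, both RMS rescalings can be obtained from the L2 norm; a sketch assuming numpy, with math.comb supplying C(n, r):

```python
import numpy as np
from math import comb

V = np.array([[1, 0, -4, -13],
              [0, 1,  4,  10]])   # septimal meantone
W = np.diag(1.0 / np.log2([2.0, 3.0, 5.0, 7.0]))
VW = V @ W
n, r = 4, 2                        # 4 primes, rank 2
l2 = np.sqrt(np.linalg.det(VW @ VW.T))
breed = l2 / np.sqrt(n ** r)       # RMS over the full tensor form
smith = l2 / np.sqrt(comb(n, r))   # RMS over the compressed form
print(breed, smith)  # ≈ 1.350, ≈ 2.205
```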

We may also note det(V_W V_W^T) = det(V W² V^T). This may be related to the TE tuning projection matrix P_W = V_W^T (V_W V_W^T)⁻¹ V_W, and the corresponding matrix for unweighted monzos, P = V^T (V W² V^T)⁻¹ V.

Sintel has defined a complexity measure that serves as an intermediate step for his badness metric[1], which we will get to later. To obtain this complexity, we normalize the Tenney weighting matrix W to U such that det(U) = 1, and then take the L2 norm of M_U. It can be shown that

$$ U = W / \det(W)^{1/n} $$

and so the complexity is

$$ \norm{M_U}_2 = \norm{M_W}_2 / \det(W)^{r/n} $$
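A sketch of this normalization for septimal meantone (numpy assumed); the rescaled complexity agrees with dividing the W-weighted complexity by det(W)^(r/n):

```python
import numpy as np

V = np.array([[1, 0, -4, -13],
              [0, 1,  4,  10]])   # septimal meantone
W = np.diag(1.0 / np.log2([2.0, 3.0, 5.0, 7.0]))
n, r = 4, 2
U = W / np.linalg.det(W) ** (1 / n)   # rescale so that det(U) = 1
VU = V @ U
c_U = np.sqrt(np.linalg.det(VU @ VU.T))
# consistency with the rescaling formula above:
VW = V @ W
c_W = np.sqrt(np.linalg.det(VW @ VW.T))
print(c_U, c_W / np.linalg.det(W) ** (r / n))  # the two agree
```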

TE error

We can consider TE error to be a weighted average of the errors of the prime harmonics in TE tuning, that is, a weighted average of the error map in the tuning where it is minimized. In this regard, TE error may be expressed in any logarithmic interval size units, such as cents or octaves.

As with complexity, we may simply define the TE error as the L2 norm of the weighted TE error map. If T_W = TW is the weighted TE tuning map and J_W = JW = ⟨1 1 … 1] is the weighted just tuning map, then the TE error E is given by

$$ \begin{align} E &= \norm{T_W - J_W}_2 \\ &= \norm{J_W(V_W^+ V_W - I) }_2 \\ &= \sqrt{J_W(V_W^+ V_W - I)(V_W^+ V_W - I)^\mathsf{T} J_W^\mathsf{T}} \end{align} $$

where + denotes the pseudoinverse.
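Here is a minimal numpy sketch of this computation for septimal meantone; converted to cents, the result matches the L2 error quoted in the example tables at the end of this page:

```python
import numpy as np

V = np.array([[1, 0, -4, -13],
              [0, 1,  4,  10]])   # septimal meantone
W = np.diag(1.0 / np.log2([2.0, 3.0, 5.0, 7.0]))
VW = V @ W
JW = np.ones(4)                   # weighted just tuning map, in octaves
# JW projected onto the row space of VW gives the TE tuning map;
# the residual is the weighted error map
err_map = JW @ (np.linalg.pinv(VW) @ VW - np.eye(4))
E = np.linalg.norm(err_map)       # TE error in octaves
print(E * 1200)  # ≈ 2.76 cents
```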

Often, it is desirable to know the average of errors instead of the sum, which corresponds to Graham Breed's definition[2]. This error figure, G, can be found by

$$ \begin{align} G &= \norm{T_W - J_W}_\text{RMS} \\ &= E / \sqrt{n} \end{align} $$

Note: that is the definition used by Graham Breed's temperament finder.

Gene Ward Smith defines the TE error as the ratio ‖M_W ∧ J_W‖/‖M_W‖, derived from the relationship of TE simple badness and TE complexity; see the next section. We denote this definition of TE error Ψ. From ‖M_W ∧ J_W‖/‖M_W‖ we can extract a coefficient sqrt(C(n, r + 1)/C(n, r)) = sqrt((n − r)/(r + 1)), which relates Ψ with E as follows:

$$ \Psi = \sqrt{\frac{r + 1}{n - r}} E $$

Also, if we set the rank r to 1 in this coefficient, we get sqrt((n − 1)/2). It follows that dividing TE error by this value gives a constant of proportionality such that

$$ \psi = \sqrt{\frac{2}{n - 1}} E $$

gives another error, called the adjusted error, which makes the error of a rank-r temperament correspond to the errors of the edo vals which support it; so that requiring the edo val error to be less than (1 + ε)ψ for any positive ε results in an infinite set of vals supporting the temperament.

G and ψ error both have the advantage that higher-rank temperament error corresponds directly to rank-1 error, but the RMS normalization has the further advantage that in the rank-1 case, G = sin θ octaves, where θ is the angle between J_W and the val in question.

TE simple badness

The TE simple badness of a temperament, which we may also call the relative error of a temperament, may be considered error relativized to the complexity of the temperament. It is error proportional to the complexity, or size, of the multival; in particular for a 1-val, it is (weighted) error compared to the size of a step.

In general, if C is the complexity and E is the error of a temperament, then TE simple badness B is found by

$$ B = C \cdot E $$
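For septimal meantone, using the L2 definitions from the sections above (a numpy sketch), this product reproduces the simple badness quoted in the example tables at the end of this page:

```python
import numpy as np

V = np.array([[1, 0, -4, -13],
              [0, 1,  4,  10]])   # septimal meantone
W = np.diag(1.0 / np.log2([2.0, 3.0, 5.0, 7.0]))
VW = V @ W
C = np.sqrt(np.linalg.det(VW @ VW.T))   # TE complexity (L2)
JW = np.ones(4)
E = np.linalg.norm(JW @ (np.linalg.pinv(VW) @ VW - np.eye(4)))  # TE error, octaves
B = C * E
print(B)  # ≈ 12.4e-3
```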

Gene Ward Smith defines the simple badness of M as ‖M_W ∧ J_W‖_RMS′. A perhaps simpler way to view this is to start with a mapping matrix V_W and add an extra row J_W corresponding to the just tuning map; we will label this matrix Ṽ_W. Then the simple badness is:

$$ \norm{ M_W \wedge J_W }_\text {RMS'} = \sqrt{\frac{\det(\tilde V_W \tilde V_W^\mathsf{T})}{C(n, r + 1)}} $$

Thus we can view the simple badness as the TE complexity of the "pseudo-temperament" formed by adding the JIP to the mapping matrix as if it were another val.
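A sketch of this view for septimal meantone (numpy assumed): appending the weighted just tuning map to V_W and taking the Gramian reproduces complexity times error:

```python
import numpy as np

V = np.array([[1, 0, -4, -13],
              [0, 1,  4,  10]])   # septimal meantone
W = np.diag(1.0 / np.log2([2.0, 3.0, 5.0, 7.0]))
VW = V @ W
JW = np.ones((1, 4))
VW_aug = np.vstack([VW, JW])      # the matrix with the just tuning map appended
badness = np.sqrt(np.linalg.det(VW_aug @ VW_aug.T))  # L2 flavor, no divisor
# identical to complexity × error:
C = np.sqrt(np.linalg.det(VW @ VW.T))
E = np.linalg.norm(JW @ (np.linalg.pinv(VW) @ VW - np.eye(4)))
print(badness, C * E)  # the two agree, ≈ 12.4e-3
```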

Graham Breed defines the simple badness slightly differently, again equivalent up to a choice of scaling; we skip it here since it follows from the general formula.

Sintel has likewise given a simple badness as

$$ \norm{ M_U \wedge J_U }_2 = \sqrt{\det(\tilde V_U \tilde V_U^\mathsf{T})} $$

where J_U = J_W/det(W)^(1/n) is the U-weighted just tuning map.

Reduction to the span of a comma

It is notable that if M is codimension-1, we may view it as representing the dual of a single comma. In this situation, the simple badness happens to reduce to the span of the comma, up to a constant multiplicative factor, so that the span of any comma can itself be thought of as measuring the complexity relative to the error of the temperament vanishing that comma.

This relationship also holds if TOP is used rather than TE, as the TOP damage associated with tempering out some comma n/d is log(n/d)/log(nd), and if we multiply by the complexity log(nd), we simply get log(n/d) as our result.
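For instance, with the syntonic comma 81/80 (a hypothetical choice for illustration):

```python
from math import log2

n, d = 81, 80                       # the syntonic comma
damage = log2(n / d) / log2(n * d)  # TOP damage of tempering out n/d
complexity = log2(n * d)            # the comma's complexity (Tenney height)
print(damage * complexity, log2(n / d))  # identical
```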

TE logflat badness

Some consider the simple badness to be a sort of badness that favors complex temperaments. The logflat badness (called Dirichlet coefficients in Sintel's scheme) is developed to address that. If we define B to be the simple badness (relative error) of a temperament, and C to be the complexity, then the logflat badness L is defined by the formula

$$ L = B \cdot C^{r/(n - r)} $$

The exponent is chosen such that if we set a cutoff margin for logflat badness, infinitely many new temperaments keep appearing as complexity goes up, at a lower rate which is approximately logarithmic in complexity.
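Putting the pieces together for septimal meantone (a numpy sketch), where the exponent r/(n − r) happens to equal 1:

```python
import numpy as np

V = np.array([[1, 0, -4, -13],
              [0, 1,  4,  10]])   # septimal meantone
W = np.diag(1.0 / np.log2([2.0, 3.0, 5.0, 7.0]))
VW = V @ W
n, r = 4, 2
C = np.sqrt(np.linalg.det(VW @ VW.T))   # TE complexity (L2)
JW = np.ones(4)
E = np.linalg.norm(JW @ (np.linalg.pinv(VW) @ VW - np.eye(4)))  # TE error, octaves
B = C * E                          # simple badness
L = B * C ** (r / (n - r))         # logflat badness; exponent is 2/(4-2) = 1
print(L)  # ≈ 0.067
```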

In Graham's and Gene's derivations,

$$ L = \norm{ M_W \wedge J_W } \norm{M_W}^{r/(n - r)} $$

In Sintel's derivation,

$$ L = \norm{ M_U \wedge J_U } \norm{M_U}^{r/(n - r)} / \norm{J_U} $$

Notice the extra factor 1/‖J_U‖, which is to say we divide by the norm of the just tuning map. For comparison, Gene's derivation does not have this factor, whereas with Tenney weights, whether this factor is included or omitted has no effect on Graham's derivation, since ‖J_W‖_RMS is unity.

Examples

While the different definitions yield different results, they are related to each other by a factor determined only by the rank and subgroup, so picking any one of them provides a meaningful comparison of temperaments of the same rank and subgroup. Here we consider septimal meantone and septimal magic, as follows.

L2 norm

| Temperament | Complexity | Error (¢) | Simple badness |
| --- | --- | --- | --- |
| Septimal meantone | 5.400 | 2.763 | 12.435 × 10⁻³ |
| Septimal magic | 7.195 | 2.149 | 12.882 × 10⁻³ |

Breed's RMS norm

| Temperament | Complexity | Error (¢) | Simple badness |
| --- | --- | --- | --- |
| Septimal meantone | 1.350 | 1.382 | 1.554 × 10⁻³ |
| Septimal magic | 1.799 | 1.074 | 1.610 × 10⁻³ |

Smith's RMS norm

| Temperament | Complexity | Error (¢) | Simple badness |
| --- | --- | --- | --- |
| Septimal meantone | 2.204 | 3.384 | 6.218 × 10⁻³ |
| Septimal magic | 2.937 | 2.631 | 6.441 × 10⁻³ |

See also

  • Cangwu badness – a derived badness measure with a free parameter that enables one to specify a tradeoff between complexity and error

Notes

  1. Sintel. "Collection of functions for dealing with regular temperaments", Temperament Calculator.
  2. Graham Breed. Prime Based Error and Complexity Measures, often referred to as primerr.pdf.