User:Sintel/CTE tuning


== Definition ==
Given a temperament [[mapping]] M, the CTE tuning is equivalent to the following optimization problem:


<math>
\begin{align}
\underset{g}{\text{minimize}}  & \quad \| gMW - jW \|^2 \\
\text{subject to} & \quad (gM - j)V = 0 \\
\end{align}
</math>


where ''g'' is the (unknown) generator list, W the diagonal Tenney-Euclidean weight matrix, ''j'' is the [[JIP]], and V is the matrix whose columns are the monzos that we want to be pure. This problem is feasible if rank(V) ≤ rank(M).
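
For concreteness, here is a small sketch in Python with NumPy. The choice of 5-limit meantone, the pure-octave constraint, and all variable names are illustrative assumptions, not part of the definition; it simply builds M, W, ''j'' and V and checks the rank condition:

<syntaxhighlight lang="python">
import numpy as np

# 5-limit primes and their sizes in octaves
log2p = np.log2([2.0, 3.0, 5.0])

# Meantone mapping M: rows are vals for the (octave, fifth) generator pair,
# columns are the primes 2, 3, 5
M = np.array([[1, 1, 0],
              [0, 1, 4]])

W = np.diag(1 / log2p)      # Tenney-Euclidean weight matrix
j = 1200 * log2p            # JIP in cents

# Columns of V are the monzos to be kept pure; here only the octave 2/1
V = np.array([[1, 0, 0]]).T

# Feasibility check: rank(V) <= rank(M)
assert np.linalg.matrix_rank(V) <= np.linalg.matrix_rank(M)
</syntaxhighlight>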


== Computation ==

Since this is a convex problem, it can be solved using the method of Lagrange multipliers. Let's first simplify the notation:
 
<math>
\begin{align}
A &= (MW)^{\mathsf T}
&b &= (jW)^{\mathsf T} \\
C &= (MV)^{\mathsf T}
&d &= (jV)^{\mathsf T} \\
\end{align}
</math>
 
The problem then becomes:
 
<math>
\begin{align}
\underset{g}{\text{minimize}}  & \quad  \left\|  Ag^{\mathsf T} - b  \right\|^2  \\
\text{s.t.} & \quad \phantom{\|} Cg^{\mathsf T} - d  = 0 \\
\end{align}
</math>
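
To check that this substitution recovers the original problem, note that transposition preserves the norm:

<math>
\left\| A g^{\mathsf T} - b \right\| = \left\| (MW)^{\mathsf T} g^{\mathsf T} - (jW)^{\mathsf T} \right\| = \left\| (gMW - jW)^{\mathsf T} \right\| = \left\| gMW - jW \right\|
</math>

and likewise <math>Cg^{\mathsf T} - d = (gMV - jV)^{\mathsf T}</math>, so the constraint is exactly <math>(gM - j)V = 0</math>.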
 
The solution can be found by solving the associated KKT (Karush-Kuhn-Tucker) system:
 
<math>
\begin{bmatrix}
g^{\mathsf T}  \\
\lambda^{\mathsf T}
\end{bmatrix}
=
\begin{bmatrix}
A^{\mathsf T}A & C^{\mathsf T} \\
C & 0
\end{bmatrix}^{-1}
\begin{bmatrix}
A^{\mathsf T} b\\
d
\end{bmatrix}
</math>
 
where we introduced the vector of Lagrange multipliers <math>\lambda</math>, whose length equals the number of constraints. The Lagrange multipliers have no concrete meaning for the resulting tuning, so they can be ignored.
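
As a minimal sketch, assuming the same 5-limit meantone setup with a pure octave as in the example above, the block system can be assembled and solved directly with NumPy; only ''g'' is read off the solution vector:

<syntaxhighlight lang="python">
import numpy as np

# 5-limit meantone with a pure-octave constraint (see the Definition section)
log2p = np.log2([2.0, 3.0, 5.0])
M = np.array([[1, 1, 0],
              [0, 1, 4]])
W = np.diag(1 / log2p)
j = 1200 * log2p
V = np.array([[1, 0, 0]]).T

# Simplified quantities: A = (MW)^T, b = (jW)^T, C = (MV)^T, d = (jV)^T
A = (M @ W).T
b = (j @ W).T
C = (M @ V).T
d = (j @ V).T

r = M.shape[0]   # number of generators
k = C.shape[0]   # number of constraints

# KKT block system: [[A^T A, C^T], [C, 0]] [g^T; lambda^T] = [A^T b; d]
K = np.block([[A.T @ A, C.T],
              [C, np.zeros((k, k))]])
rhs = np.concatenate([A.T @ b, d])

sol = np.linalg.solve(K, rhs)
g = sol[:r]        # generators in cents
lam = sol[r:]      # Lagrange multipliers, not needed afterwards

print(g)           # octave exactly 1200, fifth somewhere around 697 cents
</syntaxhighlight>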


As a standard optimization problem, this tuning can also be found with numerous general-purpose algorithms, such as [[Wikipedia: Sequential quadratic programming|sequential quadratic programming]], to name one.
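
For instance, the same 5-limit meantone problem can be handed to SciPy's SLSQP routine, a sequential quadratic programming method; this is only an illustrative choice of solver, and the result should agree with the closed-form solution above to within solver tolerance:

<syntaxhighlight lang="python">
import numpy as np
from scipy.optimize import minimize

# Same 5-limit meantone data as in the examples above
log2p = np.log2([2.0, 3.0, 5.0])
M = np.array([[1, 1, 0],
              [0, 1, 4]])
W = np.diag(1 / log2p)
j = 1200 * log2p
V = np.array([[1, 0, 0]]).T

def objective(g):
    # Squared Tenney-weighted error of the tuning map gM against the JIP
    return np.sum((g @ M @ W - j @ W) ** 2)

def pure_octave(g):
    # Equality constraint (gM - j)V = 0
    return (g @ M - j) @ V

res = minimize(objective, x0=np.array([1200.0, 700.0]),
               method="SLSQP",
               constraints=[{"type": "eq", "fun": pure_octave}])
print(res.x)   # should match the KKT solution above to within solver tolerance
</syntaxhighlight>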


It can be speculated that POTE tends to result in biased tunings, whereas CTE less so.


[[Category:Regular temperament tuning]]
[[Category:Terms]]
[[Category:Math]]
[[Category:Tuning technique]]