User:Sintel/CTE tuning

<math>
\begin{align}
\underset{g}{\text{minimize}}  & \quad \|  gMW - jW   \|^2  \\
\text{subject to} & \quad ( gM - j )V = 0 \\
\end{align}
</math>


where ''g'' is the (unknown) generator list, W is the diagonal Tenney-Euclidean weight matrix, ''j'' is the [[JIP]], and V is a matrix obtained by stacking the monzos that we want to be pure. This problem is feasible if rank(V) ≤ rank(M).
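As an illustrative example (a typical choice, not fixed by the definition): to constrain pure octaves in the 5-limit, V consists of the single monzo for 2/1, so the constraint forces the tempered octave to equal exactly 1200 cents:

<math>
V = \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}
</math>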
 
== Computation ==
 
Since this is a convex problem, it can be solved using the method of Lagrange multipliers. Let's first simplify:


<math>
\begin{align}
A &= (MW)^{\mathsf T}  &b &= (jW)^{\mathsf T} \\
C &= (MV)^{\mathsf T}  &d &= (jV)^{\mathsf T} \\
\end{align}
</math>
The problem can then be rewritten as:
<math>
\begin{align}
\underset{g}{\text{minimize}}  & \quad  \left\|  Ag^{\mathsf T} - b  \right\|^2  \\
\text{s.t.} & \quad \phantom{\|} Cg^{\mathsf T} - d  = 0 \\
\end{align}
</math>
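Explicitly, this is done by forming the Lagrangian and setting its gradient to zero (a factor of 2 is absorbed into <math>\lambda</math>):

<math>
\begin{align}
\mathcal{L}(g, \lambda) &= \left\| Ag^{\mathsf T} - b \right\|^2 + \lambda^{\mathsf T} \left( Cg^{\mathsf T} - d \right) \\
A^{\mathsf T}A\, g^{\mathsf T} + C^{\mathsf T}\lambda &= A^{\mathsf T} b \\
Cg^{\mathsf T} &= d
\end{align}
</math>

These two stationarity conditions can be written as a single block linear system.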
The solution is then given by:


<math>
\begin{bmatrix}
g^{\mathsf T}  \\
\lambda
\end{bmatrix}
=
\begin{bmatrix}
A^{\mathsf T}A & C^{\mathsf T} \\
C & 0
\end{bmatrix}^{-1}

\begin{bmatrix}
A^{\mathsf T} b\\
d
\end{bmatrix}
</math>


where we introduced the vector of Lagrange multipliers <math>\lambda</math>, with length equal to the number of constraints. The Lagrange multipliers have no concrete meaning for the resulting tuning, so they can be ignored.
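The block system above can be assembled and solved directly. A minimal numerical sketch, assuming numpy, for a hypothetical 5-limit meantone example (mapping M = [[1, 1, 0], [0, 1, 4]]) with the octave constrained pure; all names are illustrative:

```python
import numpy as np

# Illustrative 5-limit meantone setup (an assumed example, not from the text):
# M is the mapping, j the JIP in cents, W the diagonal Tenney-Euclidean
# weight matrix, and V a single monzo constraining the octave 2/1 to be pure.
primes = np.array([2.0, 3.0, 5.0])
M = np.array([[1, 1, 0],
              [0, 1, 4]], dtype=float)   # meantone: period = octave, generator = fifth
j = 1200.0 * np.log2(primes)             # JIP in cents
W = np.diag(1.0 / np.log2(primes))       # Tenney-Euclidean weights
V = np.array([[1.0], [0.0], [0.0]])      # monzo of 2/1: pure-octave constraint

# Simplified quantities A, b, C, d as defined above
A = (M @ W).T            # shape (3, 2)
b = j @ W                # shape (3,)
C = (M @ V).T            # shape (1, 2)
d = j @ V                # shape (1,)

# Assemble and solve the KKT block system
n_cons = C.shape[0]
K = np.block([[A.T @ A, C.T],
              [C, np.zeros((n_cons, n_cons))]])
rhs = np.concatenate([A.T @ b, d])
sol = np.linalg.solve(K, rhs)
g = sol[:M.shape[0]]     # generator tuning in cents; the rest are the multipliers
```

With this setup the first generator is exactly 1200 cents (the constraint is satisfied by construction) and the second comes out near 697.2 cents, the pure-octave constrained TE meantone fifth.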


As a standard optimization problem, numerous algorithms exist to solve for this tuning, such as [[Wikipedia: Sequential quadratic programming|sequential quadratic programming]], to name one.
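For instance, a general constrained solver can be handed the problem in its original form. A sketch using scipy's SLSQP implementation (an SQP method) on the same hypothetical 5-limit meantone example with a pure-octave constraint; names are illustrative:

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative 5-limit meantone setup (assumed example): mapping M, JIP j,
# Tenney-Euclidean weights W, and a pure-octave constraint monzo V.
primes = np.array([2.0, 3.0, 5.0])
M = np.array([[1, 1, 0],
              [0, 1, 4]], dtype=float)
j = 1200.0 * np.log2(primes)
W = np.diag(1.0 / np.log2(primes))
V = np.array([[1.0], [0.0], [0.0]])

def objective(g):
    # || gMW - jW ||^2, the TE error being minimized
    return np.sum((g @ M @ W - j @ W) ** 2)

def constraint(g):
    # (gM - j)V = 0, the purity constraint
    return (g @ M - j) @ V

res = minimize(objective,
               x0=j @ np.linalg.pinv(M),          # unconstrained guess
               method='SLSQP',
               constraints=[{'type': 'eq', 'fun': constraint}])
g = res.x                                          # generator tuning in cents
```

This converges to the same tuning as the closed-form KKT solution, at the cost of an iterative solve.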


It can be speculated that POTE tends to result in biased tunings, whereas CTE less so.


[[Category:Regular temperament tuning]]
[[Category:Terms]]
[[Category:Math]]
[[Category:Tuning technique]]