Constrained tuning: Difference between revisions
→Computation: I'm no longer using the SQP routine
(5 intermediate revisions by the same user not shown)
Line 32:
The problem is feasible if
# rank(''M'') ≤ rank(''V''), and
# the subgroups of ''M'' and nullspace(''V'') are {{w|linear independence|linearly independent}}.
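These two conditions can be checked mechanically. Below is a minimal NumPy/SciPy sketch (not part of the article's library; the function name and the convention that the constrained monzos are the columns of ''M'' are assumptions for illustration):

```python
import numpy as np
from scipy.linalg import null_space

def is_feasible (M, V):
    # Condition 1: no more independent constraints than the temperament's rank
    rank_M = np.linalg.matrix_rank (M)
    if rank_M > np.linalg.matrix_rank (V):
        return False
    # Condition 2: the subgroup spanned by M's columns must be linearly
    # independent from nullspace(V), i.e. stacking the two bases loses no rank
    N = null_space (V)
    if N.size == 0:
        return True
    return bool (np.linalg.matrix_rank (np.hstack ([M, N])) == rank_M + N.shape[1])

# 5-limit meantone: constraining the octave [1 0 0> is feasible,
# but constraining the tempered-out comma 81/80 is not
V = np.array ([[1, 1, 0], [0, 1, 4]])
print (is_feasible (np.array ([[1], [0], [0]]), V))   # True
print (is_feasible (np.array ([[-4], [4], [-1]]), V)) # False
```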
== Computation ==
Line 41:
# © 2020-2025 Flora Canou
# This work is licensed under the GNU General Public License version 3.
# Version 0.28.1
import warnings
Line 117:
        res = linalg.lstsq (breeds_x.T, just_tuning_map_x)
        gen = res[0]
        if show:
            print ("Euclidean tuning without constraints, solved using lstsq. ")
    else:
        gen0 = just_tuning_map[:breeds.shape[0]] #initial guess
        if cons_monzo_list is None:
            cons = ()
        else:
            cons = optimize.LinearConstraint ((breeds @ cons_monzo_list).T,
                lb = (just_tuning_map @ cons_monzo_list).T,
                ub = (just_tuning_map @ cons_monzo_list).T)
        res = optimize.minimize (
            lambda gen: linalg.norm (gen @ breeds_x - just_tuning_map_x, ord = norm.order),
            gen0, method = "COBYQA", constraints = cons)
        if show:
            print (res.message)
        if res.success:
            gen = res.x
Line 137:
        raise ValueError ("infeasible optimization problem. ")
    if des_monzo is not None:
        if np.asarray (des_monzo).ndim > 1 and np.asarray (des_monzo).shape[1] != 1:
            raise IndexError ("only one destretch target is allowed. ")
Line 200:
Notice we introduced the vector of Lagrange multipliers ''Λ'', whose length equals the number of constraints. The Lagrange multipliers have no concrete meaning for the resulting tuning, so they can be discarded.
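In the Euclidean case the constrained problem is linear, so the Lagrangian route can be carried out directly by solving the KKT system and then dropping the multipliers. Here is an illustrative sketch; blackwood's mapping, Tenney weights, and a pure-octave constraint are assumptions chosen for the demo, not the article's code:

```python
import numpy as np

V = np.array ([[5, 8, 12], [0, 0, 1]])       # blackwood mapping
J = 1200 * np.log2 ([2.0, 3.0, 5.0])         # just tuning map
W = np.diag (1 / np.log2 ([2.0, 3.0, 5.0]))  # Tenney-Euclidean weights
Mc = np.array ([[1.0], [0.0], [0.0]])        # constrained monzo: the octave

A = V @ W   # weighted mapping V_X
b = J @ W   # weighted just tuning map J_X
C = V @ Mc  # constraint: G @ C must equal J @ Mc
d = J @ Mc
r, p = V.shape[0], Mc.shape[1]

# KKT system: stationarity rows on top, constraint rows below
K = np.block ([[2 * A @ A.T, C], [C.T, np.zeros ((p, p))]])
sol = np.linalg.solve (K, np.concatenate ([2 * A @ b, d]))
G, lam = sol[:r], sol[r:]  # lam holds Λ; it can simply be discarded
T = G @ V
print (T)  # 2 and 5 come out pure; 3 is tuned to 1920 cents
```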
==== Simple fast closed-form algorithm ====
Another way to compute the CTE and CWE tunings, and the CTWE tuning in general, is to use the pseudoinverse.
The basic idea is that the set of all pure-octave tuning maps of some temperament will be the intersection of a linear subspace and a shifted hyperplane, and thus will be a shifted subspace. This means that any pure-octave tuning map can be expressed as the sum of some arbitrary "reference" pure-octave tuning map for the temperament, plus some other one also in the temperament whose octave-coordinate is 0. The set of all such tuning maps of the latter category forms a linear subspace.
We have the same thing with generator maps, meaning that any pure-octave generator map ''G'' can be expressed as:
$$ G = HB + G_0 $$
where
* ''G''<sub>0</sub> is any arbitrary generator map giving pure octaves;
* ''B'' is an (''r'' - 1)×''r'' matrix whose rows are a basis for the subspace of generator maps with octave coordinate set to 0;
* ''H'' is an (''r'' - 1)-element covector free variable.
Given that, and assuming ''V'' is our mapping matrix, ''X'' our transformation matrix, and ''J'' our just tuning map, we can solve for the best possible ''G'' in closed form:
$$ GV_X \approx J_X $$
which becomes
$$
\begin{align}
\left( HB + G_0 \right) V_X &\approx J_X \\
H &= \left( J_X - G_0 V_X \right) \cdot \left( BV_X \right)^+
\end{align}
$$
We note that this also works for any transformation matrix, and so we can use this to compute an arbitrary TWE norm very quickly in closed-form. Here is some Python code:
{{Databox|Code|
Line 260:
    # All pure-octave generator maps are just pure_octave_start + something in
    # the above row space. Now we have to solve
    # (h @ B + x) @ M @ W ≈ j @ W
    # which, solving for h and doing the algebra out gives:
    h = (j - x @ M) @ W @ pinv(B @ M @ W)
    g = h @ B + x
    t = g @ M
    return g, t
Line 285:
}}
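Since the diff elides the setup around that excerpt, here is a self-contained reconstruction of the same projection idea. The meantone mapping is an assumption for the demo, and the reference map `x` plays the role of the excerpt's `pure_octave_start`:

```python
import numpy as np
from scipy.linalg import null_space

M = np.array ([[1, 1, 0], [0, 1, 4]])        # meantone mapping
j = 1200 * np.log2 ([2.0, 3.0, 5.0])         # just tuning map
W = np.diag (1 / np.log2 ([2.0, 3.0, 5.0]))  # Tenney-Euclidean weights

# x: one reference generator map with a pure octave (minimal-norm solution
# of x @ M[:, 0] = 1200)
octave = M[:, 0]
x = 1200 * octave / (octave @ octave)

# B: rows spanning the generator maps whose octave coordinate is 0
B = null_space (octave[np.newaxis, :]).T

# Solve (h @ B + x) @ M @ W ≈ j @ W for h, then assemble the results
h = (j - x @ M) @ W @ np.linalg.pinv (B @ M @ W)
g = h @ B + x
t = g @ M
print (g)  # octave exactly 1200; fifth generator ≈ 697.2 cents
```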
==== Interpolating TE/CTE ====
We can also interpolate between the TE and CTE tunings, if we want. To do this, we modify the TE tuning so that the weighting of the 2's coefficient is very large. As the weighting goes to infinity, we get the CTE tuning. Thus, we can set it to some sufficiently large number, so that we get whatever numerical precision we want, and compute the result in closed form using the pseudoinverse. Without comments, docstrings, etc., the calculation is only about five lines of Python code:
Line 306:
J = 1200*np.log2(limit)
# Main calculation: get the generator map g such that g @ M @ W ≈ J @ W. Use pinv
G = (J @ W) @ np.linalg.pinv(M @ W)
T = G @ M
Line 334:
</pre>
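For instance, applying the same calculation to porcupine (the mapping and the 10<sup>7</sup> octave weight here are demo assumptions, not values from the article) drives the octave to purity while the generator settles near its CTE value:

```python
import numpy as np

M = np.array ([[1, 2, 3], [0, -3, -5]])      # porcupine mapping
J = 1200 * np.log2 ([2.0, 3.0, 5.0])         # just tuning map
W = np.diag (1 / np.log2 ([2.0, 3.0, 5.0]))  # Tenney-Euclidean weights
W[0, 0] *= 1e7  # inflate the octave's weight to approximate the constraint

# Closed-form least squares via the pseudoinverse: G @ M @ W ≈ J @ W
G = (J @ W) @ np.linalg.pinv (M @ W)
T = G @ M
print (G)  # octave ≈ 1200 to high precision; generator ≈ 164.2 cents
```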
== Comparison of tunings ==
{{Todo|inline=1| rework |comment=More properly and concisely summarize each side's POV. }}
=== Criticism of CTE ===
People have long noted, since the early days of the tuning list, that the CTE tuning, despite having very nice qualities on paper, can give surprisingly strange results.{{citation needed}} One good example is blackwood, where the 4:5:6 chord is tuned to 0–386–720 cents, so that the error is not even close to evenly divided between the 5/4, 6/5, and 3/2. The reasons for this are subtle.
Line 384:
So, one simple solution is to interpolate between the two, giving the '''Tenney–Weil–Euclidean norm''': a weighted average of the TE and WE norms, with free weighting parameter k. This can be thought of as adjusting how much we care about the span: {{nowrap|k {{=}} 0}} is the TE norm, {{nowrap|k {{=}} 1}} is the WE norm, and in between we have intermediate norms. This also gives a '''Constrained Tenney–Weil–Euclidean''' or '''CTWE''' tuning as a result, which interpolates between CTE and CKE.
=== Examples ===
These tunings can be very different from each other.