Generator embedding optimization: Difference between revisions
Cmloegcmluin (talk | contribs) – Exact solutions with advanced tie-breaking
The ADSLOD here, by the way, is [92.557 92.557 81.117 81.117 57.928 ... ] So it's a tie of 81.117 ¢(C) for the second-most minimax damage to <math>\frac95</math> and <math>\frac{16}{5}</math>. No other tuning can beat this 81.117 number, even just three entries down the list. And so we're done.
===Exact solutions with advanced tie-breaking===
As for recovering <math>G</math>, though: the whole point of this article is finding exact tunings, and we wouldn't want to give up on that just because we had to use advanced tie-breaking, would we?
So we've been looking for <math>𝒈</math>'s which are blends of other <math>𝒈</math>'s. But we need to look for <math>G</math>'s that are blends of other <math>G</math>'s! Doing that directly would explode the dimensionality of the space we're searching, to the rank <math>r</math> times the length of the blend vector <math>𝒃</math>, that is, <math>r×(τ - 1)</math>. And would it even be meaningful to independently search the powers of the primes that comprise each entry of a <math>𝒈</math>? Probably not. The compressed information in <math>𝒈</math> is all that really matters for defining the constrained search region. So what if we still searched by <math>𝒈</math>, but applied the blend we find for each <math>K</math> to <math>G</math>'s instead of <math>𝒈</math>'s?
Let's test on an example.
<math> | |||
G_0 = | |||
\left[ \begin{array} {r} | |||
1 & 0 \\ | |||
0 & 0 \\ | |||
0 & \frac14 \\ | |||
\end{array} \right] | |||
\quad | |||
𝒈_0 = | |||
\left[ \begin{array} {r} | |||
1200.000 & 696.578 \\ | |||
\end{array} \right] | |||
\\[20pt] | |||
G_1 = | |||
\left[ \begin{array} {r} | |||
\frac73 & \frac13 \\ | |||
-\frac43 & -\frac13 \\ | |||
\frac13 & \frac13 \\ | |||
\end{array} \right] | |||
\quad | |||
𝒈_1 = | |||
\left[ \begin{array} {r} | |||
1192.831 & 694.786 \\ | |||
\end{array} \right] | |||
</math> | |||
So <math>𝜹_1 = 𝒈_1 - 𝒈_0</math> = {{rbra|-7.169 -1.792}}. Suppose we get <math>𝒃</math> = [0.5]. We know that <math>𝒃D</math> = {{rbra|-3.584 -0.896}}. So <math>𝒈</math> should be <math>𝒈_0 + 𝒃D</math> = {{rbra|1200 696.578}} + {{rbra|-3.584 -0.896}} = {{rbra|1196.416 695.682}}.
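To make that vector-level arithmetic concrete, here's a minimal sketch in Python. The variable names are ours, not anything from the RTT Library; it just blends <math>𝒈_0</math> toward <math>𝒈_1</math> by the single blend value 0.5.

```python
# A minimal sketch (names are ours) of the vector-level blend:
# g = g0 + b * (g1 - g0)
g0 = [1200.000, 696.578]  # starting tuning map, in cents
g1 = [1192.831, 694.786]  # the other tied tuning map, in cents

# the delta row of D: g1 - g0, approx [-7.169, -1.792]
delta1 = [x1 - x0 for x1, x0 in zip(g1, g0)]

blend = 0.5  # the single entry of the blend vector b
g = [x0 + blend * d for x0, d in zip(g0, delta1)]

print(g)  # ≈ [1196.4155, 695.682]
```

Rounding the first entry to three places gives the 1196.416 quoted above.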
But do we find the same tuning with <math>G = G_0 + b_1(G_1 - G_0) + b_2(G_2 - G_0) + \ldots + b_{τ-1}(G_{τ-1} - G_0)</math>? That's the key question. (In this case, we have to break the matrix multiplication up; there's no way to replace the rows of <math>D</math> with entire matrices. Cumbersome, but that's reality.)
In this case we only have the one delta, <math>G_1 - G_0 =</math> | |||
<math> | |||
\left[ \begin{array} {r} | |||
\frac73 & \frac13 \\ | |||
-\frac43 & -\frac13 \\ | |||
\frac13 & \frac13 \\ | |||
\end{array} \right] | |||
- | |||
\left[ \begin{array} {r} | |||
1 & 0 \\ | |||
0 & 0 \\ | |||
0 & \frac14 \\ | |||
\end{array} \right] | |||
= | |||
\left[ \begin{array} {r} | |||
\frac43 & \frac13 \\ | |||
-\frac43 & -\frac13 \\ | |||
\frac13 & \frac{1}{12} \\ | |||
\end{array} \right] | |||
</math> | |||
And so <math>b_1(G_1 - G_0)</math>, or half of that, is: | |||
<math> | |||
\left[ \begin{array} {r} | |||
\frac23 & \frac16 \\ | |||
-\frac23 & -\frac16 \\ | |||
\frac16 & \frac{1}{24} \\ | |||
\end{array} \right] | |||
</math> | |||
And add that to <math>G_0</math> then: | |||
<math> | |||
\left[ \begin{array} {r} | |||
1 & 0 \\ | |||
0 & 0 \\ | |||
0 & \frac14 \\ | |||
\end{array} \right] | |||
+ | |||
\left[ \begin{array} {r} | |||
\frac23 & \frac16 \\ | |||
-\frac23 & -\frac16 \\ | |||
\frac16 & \frac{1}{24} \\ | |||
\end{array} \right] | |||
= | |||
\left[ \begin{array} {r} | |||
\frac53 & \frac16 \\ | |||
-\frac23 & -\frac16 \\ | |||
\frac16 & \frac{7}{24} \\
\end{array} \right] | |||
</math> | |||
So <math>\textbf{g}_1</math> here, the first column, {{vector|<math>\frac53</math> <math>-\frac23</math> <math>\frac16</math>}}, is <math>2^{\frac53}3^{-\frac23}5^{\frac16} \approx 1.996</math>. So <math>g_1</math> = 1196.416 ¢. | |||
And <math>\textbf{g}_2</math> here, the second column, {{vector|<math>\frac16</math> <math>-\frac16</math> <math>\frac{7}{24}</math>}}, is <math>2^{\frac16}3^{-\frac16}5^{\frac{7}{24}} \approx 1.495</math>. So <math>g_2</math> = 695.682 ¢.
Perfect! We wanted {{rbra|1196.416 695.682}} and we got it. | |||
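As a check on the matrix-level arithmetic, here's a small Python sketch (the names and the <code>cents</code> helper are ours, not from the RTT Library). It blends the embeddings entrywise with exact rational arithmetic, then reads each column as a prime-count vector over 2, 3, and 5 to recover the tuning map {{rbra|1196.416 695.682}}.

```python
from fractions import Fraction as F
from math import log2

# Our own sketch of the matrix-level blend: G = G0 + b1 * (G1 - G0)
G0 = [[F(1), F(0)],
      [F(0), F(0)],
      [F(0), F(1, 4)]]
G1 = [[F(7, 3), F(1, 3)],
      [F(-4, 3), F(-1, 3)],
      [F(1, 3), F(1, 3)]]

b1 = F(1, 2)
G = [[x0 + b1 * (x1 - x0) for x0, x1 in zip(r0, r1)]
     for r0, r1 in zip(G0, G1)]
# G == [[5/3, 1/6], [-2/3, -1/6], [1/6, 7/24]]

# Each column holds exponents (a, b, c); its size in cents is
# 1200 * log2(2^a * 3^b * 5^c).
PRIMES = (2, 3, 5)
def cents(col):
    return 1200 * sum(float(e) * log2(p) for e, p in zip(col, PRIMES))

g = [cents([row[j] for row in G]) for j in range(2)]
print(g)  # ≈ [1196.416, 695.682] to three places
```

Note that the exact-rational route lands within a fraction of a thousandth of a cent of the vector-level blend, the tiny gap coming only from the rounded cents values we started from.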
Now maybe this doesn't fully test the system, since we only convexly combined two tunings, but it's probably sound for general use. At least, the test suite of the RTT Library in Wolfram Language included several examples that would have failed upon switching to this way of computing true optimum tunings if there were a problem, and they did not.
==For all-interval tuning schemes==