Talk:Tenney–Euclidean tuning: Difference between revisions
Cmloegcmluin (talk | contribs)
:::More related thoughts:
:::We don’t weight intervals. <s>We don’t weight errors.</s> ''(edit: We don't weight interval errors.)''
:::<s>We weight primes, when optimizing.</s> ''(edit: We weight optimization targets. For tunings that optimize a set of target intervals, such as a tonality diamond, those targets are of course those intervals. For tunings that optimize across all intervals (in the given interval subspace), those targets are the primes.)''
:::So when someone says “TE norm”, okay, that’s perfectly fine. But it’s not a TE-''weighted'' interval just because it’s divided by that norm. That’s an abuse of the word “weight”. Weighting is about importance in an optimizer’s priorities, not about an increase or decrease of size of error or interval.
::::: And I've moved "Tenney-Euclidean-normed" up into the completely okay category. I suggested it myself in my previous post but just hadn't updated my first post yet. I realized I have another way to explain why that one's okay. If you look at the Tenney-Euclidean error as <math>(eW)(i/||i||)</math>, then that's a primes error map <math>e</math> with Tenney-weighted primes per the <math>W</math>, multiplied with <math>i/||i||</math>, which could also be written <math>\hat{i}</math> ("i-hat"), being the "normalized vector" or "unit vector" for the interval, specifically using the Euclidean norm.
::::: But we can group things a slightly different way, too: <math>(e)(Wi/||i||)</math>, in which case you've just got a plain old primes error map, but then the other thing you've got is a ''Tenney''-Euclidean-normed interval, because the <math>W</math> ("Tenney") matrix just has a diagonal of <math>1/\log_2 p</math>, so it puts stuff in the denominator along with that <math>||i||</math>, resulting in an interval essentially divided by its TE-height AKA TE-norm. So that's basically just a mathematical rendition of the fact I stated in the previous post that "Tenney" in our context simply means that <math>1/\log_2 p</math> thing, and is not necessarily bound to weight. --[[User:Cmloegcmluin|Cmloegcmluin]] ([[User talk:Cmloegcmluin|talk]]) 16:32, 27 January 2022 (UTC)
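The equality of the two groupings above follows from associativity of matrix multiplication, and can be checked numerically. A minimal sketch, assuming hypothetical 5-limit values (the error map and interval here are made-up examples, not taken from the discussion):

```python
# Sketch of the two groupings of the Tenney-Euclidean error product.
# All concrete numbers below are hypothetical illustrations.
import numpy as np

primes = np.array([2.0, 3.0, 5.0])
W = np.diag(1.0 / np.log2(primes))     # "Tenney" matrix: diagonal of 1/log2(p)

e = np.array([[0.5, -1.2, 0.3]])       # hypothetical primes error map (row vector)
i = np.array([[-1.0], [1.0], [0.0]])   # interval 3/2 as a vector (column)

norm_i = np.linalg.norm(i)             # Euclidean norm of the interval vector

grouping_1 = (e @ W) @ (i / norm_i)    # Tenney-weighted error map times unit vector
grouping_2 = e @ (W @ i / norm_i)      # plain error map times Tenney-normed interval

print(np.allclose(grouping_1, grouping_2))  # True: same product, grouped differently
```

The two results agree to machine precision, which is the point of the argument: the "Tenney" factor <math>W</math> can be read as weighting the error map or as norming the interval, depending only on where the parentheses go.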
::::: I've made another slight modification to something I wrote above, in order to generalize it to apply not only to all-interval optimization tunings like TE and TIPTOP, but also to [[target tunings]]. --[[User:Cmloegcmluin|Cmloegcmluin]] ([[User talk:Cmloegcmluin|talk]]) 22:24, 27 January 2022 (UTC)