== Crazy math theory's dominating the article ==
Can anybody read this article in its current shape and learn how to derive the TE tuning, TE generators, etc.? I can't. I learned it by coming up with the idea of RMS-error tuning myself, posting it on reddit, and getting told that was actually called TE tuning.

That said, TE tuning is an easy problem if you break it down this way.

It's a least-squares problem over the following system of linear equations:

<math>(VW)^\mathsf{T} \vec{g} = W\vec{p}</math>

where ''V'' is the known mapping of the temperament, '''g''' the column vector of the generators in cents, '''p''' the column vector of the targeted intervals in cents, usually prime harmonics, and ''W'' the weighting matrix.

This is an overdetermined system saying that, for each ''i'', the sum over ''j'' of (''VW'')<sup>T</sup><sub>''ij''</sub> steps of generator '''g'''<sub>''j''</sub> equals the corresponding weighted interval (''W'''''p''')<sub>''i''</sub>.

'''How to solve it?'''

The only thing that matters is to identify the problem as a least-squares problem. The rest is nothing but manual labor.

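To make the recipe concrete, here is a minimal numeric sketch (my addition, assuming NumPy; 5-limit meantone is used purely as an example temperament, and Tenney weighting ''W'' = diag(1/log<sub>2</sub>&#8201;''p'') is assumed):

```python
import numpy as np

# Example: 5-limit meantone, mapping V over primes 2, 3, 5
# (generators are the octave and the fifth).
V = np.array([[1, 1, 0],
              [0, 1, 4]])
primes = np.array([2, 3, 5])
p = 1200 * np.log2(primes)        # targeted intervals in cents
W = np.diag(1 / np.log2(primes))  # Tenney weighting (assumed here)

# Overdetermined system (VW)^T g = Wp, solved in the least-squares sense.
A = (V @ W).T
b = W @ p
g, *_ = np.linalg.lstsq(A, b, rcond=None)
print(g)  # roughly [1201.4, 697.0]: a slightly stretched octave, flat fifth
```

The same setup works for any temperament: only the mapping ''V'' and the list of targeted primes change.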
I'm gonna try improving the readability of this article by adding my thoughts and probably clearing it up.

[[User:FloraC|FloraC]] ([[User talk:FloraC|talk]]) 18:52, 24 June 2020 (UTC) (updated [[User:FloraC|FloraC]] ([[User talk:FloraC|talk]]) 16:07, 13 July 2025 (UTC))

: Update: I gave the article some rework to bring its level to my standard.

: The conventional way to write the equation is:

: <math>GVW = JW</math>

| : <math>G(AW) = J_0 W</math> | | : The targeted interval list is known as ''JIP'' and is denoted ''J'' here. The main difference from my previous comment is that the generator list and the JIP are presented as row vectors. It can be further simplified to |
|
| |
|
| : The targeted interval list is known as ''JIP'' and is denoted J<sub>0</sub> here. The main difference from my previous comment is that the generator list and the JIP are presented as row vectors. It can be further simplified to | | : <math>GV_W = J_W</math> |
|
| |
|
: which is pretty clearly presented in the article now.

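: A quick numeric check that this row-vector form agrees with the column-vector form in the opening comment (a sketch, assuming NumPy; 5-limit meantone is again just an example):

```python
import numpy as np

# G(VW) = JW in the row-vector convention, solved via the pseudoinverse.
V = np.array([[1, 1, 0],
              [0, 1, 4]])           # example: 5-limit meantone
primes = np.array([2, 3, 5])
J = 1200 * np.log2(primes)          # JIP as a row vector, in cents
W = np.diag(1 / np.log2(primes))    # Tenney weighting

V_W = V @ W                         # weighted mapping
J_W = J @ W                         # weighted JIP
G = J_W @ np.linalg.pinv(V_W)       # least-squares solution of G V_W = J_W
print(G)  # roughly [1201.4, 697.0], same generators as the column form
```

: The pseudoinverse gives the least-squares solution directly, since ''V<sub>W</sub>'' has full row rank here.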
: [[User:FloraC|FloraC]] ([[User talk:FloraC|talk]]) 17:39, 16 December 2021 (UTC) (updated [[User:FloraC|FloraC]] ([[User talk:FloraC|talk]]) 16:07, 13 July 2025 (UTC))

== Damage, not error? ==

:: - [[User:Sintel|Sintel]] ([[User talk:Sintel|talk]]) 02:23, 19 December 2021 (UTC)

::: [REDACTED] --[[User:Cmloegcmluin|Cmloegcmluin]] ([[User talk:Cmloegcmluin|talk]]) 08:14, 27 January 2022 (UTC)

:::: Whoa, your table is really enlightening! I mostly agree with you. Since I figured "Tenney" was the weighting method and "Euclidean" was the norm method, on that basis I'd be more lax about the naming. I think Tenney-weighted-Euclidean tuning and Tenney-Euclidean-normed tuning are both OK. [[User:FloraC|FloraC]] ([[User talk:FloraC|talk]]) 12:26, 27 January 2022 (UTC)

:::::: I've been studying these tuning techniques a lot recently and think I may end up wanting to revise some of my statements above. Some of them may be just straight up wrong. Sorry for any confusion in the meantime, but I'll share my conclusions as soon as I can, when they're ready for prime time. Ha. Get it, "prime time". --[[User:Cmloegcmluin|Cmloegcmluin]] ([[User talk:Cmloegcmluin|talk]]) 00:53, 23 February 2022 (UTC)

::::::: Ah, I had forgotten about this monstrosity of a post I made on a discussion page. Unfortunately, reviewing it from my present vantage, having studied this extensively for the entire year and dramatically refined my take on these things, what I wrote above is beyond salvaging and I feel I must simply delete it all out of embarrassment and to spare anyone else from getting misled by it. Of course, it's still there in the edit history if anyone needs to understand what Flora was reacting to, etc.

::::::: Also, per Sintel's original comment, I have by now realized that the Frobenius tuning is the one which minimizes the Frobenius norm of ''the projection matrix'' (''not'' the mapping matrix), by defining the projection matrix as the mapping matrix left-multiplied by (a generator embedding matrix equal to) its own pseudoinverse. So the name does make sense, but I think it should be clarified where it is mentioned. --[[User:Cmloegcmluin|Cmloegcmluin]] ([[User talk:Cmloegcmluin|talk]]) 19:38, 11 December 2022 (UTC)
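::::::: For comparison with the weighted case, here is a sketch of the unweighted ("Frobenius") tuning (assuming NumPy; 5-limit meantone as the example, with ''W'' taken as the identity), plus a check that the projection matrix described above is idempotent, as a projection must be:

```python
import numpy as np

# Unweighted ("Frobenius") tuning: the same least-squares setup with W = I.
V = np.array([[1, 1, 0],
              [0, 1, 4]])   # example: 5-limit meantone
J = 1200 * np.log2([2, 3, 5])

G = J @ np.linalg.pinv(V)   # minimizes the unweighted Euclidean error
print(G)                    # roughly [1202.6, 696.7] cents

# Projection matrix: the mapping left-multiplied by its own pseudoinverse.
# Idempotence (P @ P == P) confirms it really is a projection.
P = np.linalg.pinv(V) @ V
print(np.allclose(P @ P, P))  # True
```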

== Motivation & "weaknesses" ==
We'll need to review these sections. They're written way too vaguely yet still have too many judgements baked in. [[User:FloraC|FloraC]] ([[User talk:FloraC|talk]]) 16:08, 13 July 2025 (UTC)