Talk:Tenney–Euclidean tuning: Difference between revisions
Cmloegcmluin (talk | contribs)
:: - [[User:Sintel|Sintel]] ([[User talk:Sintel|talk]]) 02:23, 19 December 2021 (UTC)
::: Here's where Gene defined Frobenius. He doesn't clarify his exact meaning though:
::: https://yahootuninggroupsultimatebackup.github.io/tuning-math/topicId_12836.html#12836
::: I agree that the Frobenius norm is the wrong choice here. It's a matrix norm which treats a matrix like a vector, flattening it, so to speak, row by row into one single long row, and then taking essentially the L2/Euclidean norm of that. Why not just say Euclidean, then, since we're doing it on vectors?
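::: To make this concrete, here's a quick numerical check (a Python/NumPy sketch; the matrix values are arbitrary, not from the discussion) that the Frobenius norm of a matrix is just the Euclidean norm of its flattened entries:

```python
import numpy as np

# Arbitrary 2x3 matrix standing in for a mapping-shaped object.
M = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 4.0]])

frob = np.linalg.norm(M, 'fro')     # Frobenius norm of the matrix
flat = np.linalg.norm(M.flatten())  # L2/Euclidean norm of the flattened vector

# The two agree: "Frobenius" on a matrix is "Euclidean" on its entries.
assert np.isclose(frob, flat)
```

::: So on a single vector, such as a tuning map, "Frobenius" adds nothing over plain "Euclidean".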
::: I can see here:
::: https://yahootuninggroupsultimatebackup.github.io/tuning-math/topicId_12834#12841
::: that Gene is talking about using it on matrices. Perhaps in earlier times they were doing it to mappings rather than to tuning maps? Dunno.
::: So I've realized that I have another related terminological bone to pick here.
::: I think it's okay to call it "Tenney-Euclidean error", or "TE error" for short.
::: And I think it's okay to call it "Tenney-Euclidean tuning", or "TE tuning" for short.
::: (Each tuning is the one that minimizes its corresponding error, so from this point on I'll just write error/tuning.)
::: But here's my concern. I think it's ''not'' okay to call it "Tenney-Euclidean-weighted error/tuning", or "TE-weighted error/tuning" for short.
::: Here's the reason: the "Euclidean" part of these names is not about ''weighting''. It is about the ''minimized norm''.
::: Errors/tunings have two separate defining characteristics: their weighting, and their minimized norm. The weighting applies to the primes error map. The minimized norm applies to the intervals and is different for each one. Here's a quick pic:
[[File:Weight vs norm.png|frameless|center]]
::: Yes, there is a norm which is minimized over the primes error map, and it is the dual of the norm minimized over the intervals, but it's the norm minimized over the intervals which defines the error/tuning. (In the case of L2, which is self-dual, the norm is the same for the primes error map and the intervals, but for L1 tuning and L∞ tuning it's flipped.)
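::: As a numerical sanity check on that duality claim, here's a sketch (the test vector and candidate set are my own illustrative choices) showing that the dual norm sup ⟨x, y⟩/‖y‖ comes out as L∞ for L1, as L1 for L∞, and as L2 for itself:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=6)  # arbitrary test vector

def dual_norm(x, p):
    """Estimate the dual norm sup_{y != 0} <x, y> / ||y||_p by searching
    candidates that include the known maximizers for p = 1, 2, and inf."""
    n = len(x)
    candidates = [np.sign(x), x]                                         # maximizers for p = inf, 2
    candidates += [s * np.eye(n)[i] for i in range(n) for s in (1, -1)]  # maximizers for p = 1
    candidates += list(rng.normal(size=(200, n)))                        # random extras
    return max(np.dot(x, y) / np.linalg.norm(y, p) for y in candidates)

assert np.isclose(dual_norm(x, 1), np.linalg.norm(x, np.inf))  # dual of L1 is L-infinity
assert np.isclose(dual_norm(x, np.inf), np.linalg.norm(x, 1))  # dual of L-infinity is L1
assert np.isclose(dual_norm(x, 2), np.linalg.norm(x, 2))       # L2 is self-dual
```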
::: Both defining characteristics have 3 possible values, so in total we have 3×3 of these types of errors/tunings. For weighting we have unweighted, (Tenney-)weighted, and Partch-weighted (inverse Tenney weighted). For norms we have L1 (Minkowskian), L2 (Euclidean), and L∞ (Chebyshevian). But not all of these types are popular: almost no one uses Partch weighting, and almost no one uses Chebyshevian norm minimization.
::: {| class="wikitable"
|+
!
!
! colspan="3" |norm
|-
!
!
!L1 (Minkowskian)
!L2 (Euclidean)
!L∞ (Chebyshevian)
|-
! rowspan="3" |weight
!unweighted
|
|"Frobenius"
|
|-
!(Tenney-)weighted
|TIPTOP
|TOP-RMS, TE
|
|-
!Partch (inverse Tenney)
|
|
|
|}
::: I'm just taking the names Minkowski and Chebyshev because they're associated with L1/taxicab/Manhattan distance and L∞/king/chessboard distance, respectively.
::: Unweighted is the default weighting (assume it if not specified). L1 is the default norm (assume it if not specified).
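::: To tie the table to an actual computation, here is a sketch of the Tenney-weighted, Euclidean-normed (TE) cell, assuming the usual least-squares formulation (minimize the Tenney-weighted L2 norm of the primes error map); the meantone mapping is purely an illustrative choice:

```python
import numpy as np

primes = np.array([2.0, 3.0, 5.0])
j = 1200.0 * np.log2(primes)        # just tuning map, in cents
W = np.diag(1.0 / np.log2(primes))  # Tenney weighting: 1/log2(p) per prime

M = np.array([[1, 1, 0],            # meantone mapping: octave row,
              [0, 1, 4]])           # fifth (generator) row

# Minimize ||(g @ M - j) @ W||_2 over the generator tuning map g:
g = (j @ W) @ np.linalg.pinv(M @ W)
t = g @ M                           # tempered tuning map over the primes

# g[0] is an octave near 1200 cents, g[1] a fifth near 697 cents,
# and the primes errors t - j all stay within a few cents.
```

::: Swapping W for the identity matrix would give the unweighted-L2 cell, i.e. the "Frobenius" tuning discussed above.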
::: For whatever reason, historically, we've simply used Tenney's surname without adjectivizing it, but Euclid's surname with adjectivizing it. Perhaps this is simply an artifact of the weighting part of the name coming first, though it's clearly the case that sometimes it also comes at the end of the name, so it's hardly a complete explanation. In any case, I don't see a major problem with proceeding with this pattern.
::: So we can name errors/tunings in this format:
::: '''([weight eponym](-weighted))(-)([norm eponym adjective](-normed))'''
::: And if you use the word "weighted" or "normed" explicitly and the other term is present, you should use "weighted" or "normed" explicitly for that term too.
::: I'll give some examples.
::: '''These are all the same.'''
::: - Euclidean tuning
::: - Euclidean-normed tuning
::: - unweighted-Euclidean-normed tuning
::: (Since there's no eponym in the case of unweighted, I agree with Sintel that "unweighted Euclidean tuning" is an acceptable replacement name for "Frobenius tuning". I personally would just call it "Euclidean tuning", though.)
::: '''These are all the same.'''
::: - '''okay names:'''
::: - - Tenney tuning
::: - - Tenney-weighted tuning
::: - - Tenney-weighted-Minkowskian-normed tuning
::: - '''not okay names:'''
::: - - Tenney-weighted-Minkowskian tuning
::: - - Tenney-Minkowskian-normed tuning
::: '''These are all the same.'''
::: - '''okay names:'''
::: - - Tenney-Euclidean tuning
::: - - Tenney-weighted-Euclidean-normed tuning
::: - '''not okay names:'''
::: - - Tenney-weighted-Euclidean tuning
::: - - Tenney-Euclidean-normed tuning
::: - - Tenney-Euclidean-weighted tuning
::: - - Tenney-normed-Euclidean tuning
::: Sorry, I know that's a lot, but hopefully it might help things click for some people. I can say that, in my personal case, the name "TE-weighted" caused me months of flailing agony until I was finally able to put all these insights together at once and see what was off about it. --[[User:Cmloegcmluin|Cmloegcmluin]] ([[User talk:Cmloegcmluin|talk]]) 06:33, 27 January 2022 (UTC)