Talk:Tenney-Euclidean tuning


This page also contains archived Wikispaces discussion.

Crazy math theory's dominating the article

Can anybody read this article in its current shape and learn how to derive the TE tuning, TE generators, etc.? I can't. How I learned it was by coming up with the idea of RMS-error tuning, posting it on reddit, and getting told that it was actually called TE tuning.

That said, TE tuning is an easy problem if you break it down this way.

What's the problem?

It's a least-squares problem over the following system of linear equations:

[math](AW)^\mathsf{T} \vec{g} = W\vec{p}[/math]

where A is the known mapping of the temperament, g the column vector of the generators in cents, p the column vector of the target intervals in cents (usually the prime harmonics), and W the weighting matrix.

This is an overdetermined system, saying that the sum over j of [math]((AW)^\mathsf{T})_{ij}[/math] steps of generator [math]g_j[/math] equals the corresponding weighted interval [math](W\vec{p})_i[/math].
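For concreteness, here's a minimal numpy sketch of setting this system up, using 5-limit meantone as the example temperament (my choice, purely for illustration):

<syntaxhighlight lang="python">
import numpy as np

# Mapping A of 5-limit meantone; the generators are an octave and a fifth,
# and the columns correspond to primes 2, 3, 5.
A = np.array([[1, 1, 0],
              [0, 1, 4]])

# Target intervals p: the just primes 2, 3, 5 in cents.
p = 1200 * np.log2([2, 3, 5])

# Tenney weighting W: each prime is weighted by 1 / log2(prime).
W = np.diag(1 / np.log2([2, 3, 5]))

# (AW)^T is 3x2 and Wp is a 3-vector, but g has only 2 entries:
# three equations in two unknowns, hence overdetermined.
print((A @ W).T.shape, (W @ p).shape)  # (3, 2) (3,)
</syntaxhighlight>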

How to solve it?

The pseudoinverse is a common means of solving least-squares problems.

We don't need to document what a pseudoinverse is, at least not in so much detail, because it's not a concept specific to tuning, and it's well documented on Wikipedia. Nor do we need to document why pseudoinverses solve least-squares problems. Again, that's not a question specific to tuning.

The only thing that matters is to identify the problem as a least-squares problem. The rest is nothing but manual labor.
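Here's the solving step as a sketch, continuing the meantone example from above (numpy's np.linalg.pinv computes the pseudoinverse):

<syntaxhighlight lang="python">
import numpy as np

# Same 5-limit meantone setup as in the sketch above.
A = np.array([[1, 1, 0], [0, 1, 4]])
p = 1200 * np.log2([2, 3, 5])
W = np.diag(1 / np.log2([2, 3, 5]))

# Least-squares solution of (AW)^T g = W p via the pseudoinverse.
g = np.linalg.pinv((A @ W).T) @ (W @ p)
print(g)  # the TE-optimal octave and fifth, in cents
</syntaxhighlight>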

I'm gonna try improving the readability of this article by adding my thoughts and probably clearing it up. FloraC (talk) 18:52, 24 June 2020 (UTC)

Update: the page is clear enough now.
The standard way to write the equation is:
[math]G(AW) = J_0 W[/math]
The targeted interval list is known as the JIP and is denoted [math]J_0[/math] here. The main difference from my previous comment is that the generator list and the JIP are presented as row vectors. It can be further simplified to
[math]GV = J[/math]
where [math]V = AW[/math] and [math]J = J_0 W[/math], which is pretty clearly displayed in the article. FloraC (talk) 17:39, 16 December 2021 (UTC)
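For anyone who wants to check the row-vector form numerically: for a row system [math]GV = J[/math], the least-squares solution is [math]G = JV^+[/math]. A sketch, again using 5-limit meantone purely as an example:

<syntaxhighlight lang="python">
import numpy as np

A = np.array([[1, 1, 0], [0, 1, 4]])  # mapping
J0 = 1200 * np.log2([2, 3, 5])        # JIP as a row vector, in cents
W = np.diag(1 / np.log2([2, 3, 5]))   # Tenney weighting

V = A @ W    # weighted mapping
J = J0 @ W   # weighted JIP

# Least squares for the row-vector system GV = J.
G = J @ np.linalg.pinv(V)
print(G)  # same generators as the column-vector solution
</syntaxhighlight>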

Damage, not error?

The article says, "Just as TOP tuning minimizes the maximum Tenney-weighted (L1) error of any interval, TE tuning minimizes the maximum TE-weighted (L2) error of any interval." But shouldn't it be "damage", not "error"? As far as I understand it, there would be no way to minimize the maximum error of any interval under a tuning, because you could always find a more complex interval with more error; minimaxing only makes sense for damage, which scales proportionally with the complexity of the interval. Or am I misunderstanding these concepts? --Cmloegcmluin (talk) 16:50, 28 July 2021 (UTC)

Ah, I think I see. "Damage" may be a bit of an outdated term. It's what Paul Erlich uses in his Middle Path paper. But it means error weighted (divided) by the Tenney height, which is equivalent to the L1 norm, and so "Tenney-weighted (L1) error" is the same thing as damage. And "TE-weighted (L2) error" means error weighted by the TE height, which is equivalent to the L2 norm, so it's similar to damage. --Cmloegcmluin (talk) 19:04, 28 July 2021 (UTC)
Corrections:
* The term damage is not outdated.
* My quotations from the article are out of date. They show "L1" and "L2" in parentheses, which implies that "Tenney-weighted error" is the same thing as "L1 error" and that "TE-weighted error" is the same thing as "L2 error". Those statements would both be incorrect. "Tenney-weighted error" is the "T1 error" and "TE-weighted error" is the "T2 error". I see that as of Jan '22, Flora has fixed this by removing the parentheses, so that it's clear that the L1 error is being Tenney-weighted (to become T1) and the L2 error is being Tenney-weighted (to become T2). My previous comment did not reflect this understanding, stating that Tenney height was the L1 norm (it's actually the T1 norm) and that TE height was the L2 norm (it's actually the T2 norm). Erlich's "damage" is the T1-weighted absolute value of error, though, so it is closely related to the T2-weighted absolute value of error.
* But "error" should not be replaced with "damage" as I'd suggested. Damage is a weighted abs val of error. So the article states things with respect to "error" correctly.
--Cmloegcmluin (talk) 23:53, 5 March 2022 (UTC)
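For concreteness, the distinction discussed in this thread can be written out explicitly (my summary, which may not match the article's exact notation): for an interval n/d with monzo [math]\vec{m}[/math] and error [math]e[/math],

[math]\text{T1-weighted error} = \frac{|e|}{\sum_p |m_p| \log_2 p} = \frac{|e|}{\log_2 (nd)}, \qquad \text{T2-weighted error} = \frac{|e|}{\sqrt{\sum_p (m_p \log_2 p)^2}}[/math]

where the denominators are the Tenney height (T1 norm) and the TE height (T2 norm) respectively.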

"Frobenius" tuning

Frobenius tuning has nothing to do with the Frobenius norm. It simply uses the unweighted Euclidean norm. I propose renaming it to exactly that: "unweighted Euclidean tuning".

The article also says:

This leads to a different tuning, the Frobenius tuning, which is perfectly functional but has less theoretical justification than TE tuning.

What theoretical justifications? This is ironic since the next paragraph proceeds to list several theoretical advantages of this tuning.

Not weighting the primes leads to errors that are, on average, the same across primes. It is the Tenney-Euclidean tuning that is biased towards lower primes, not the opposite. This is not a problem at all, but the article is in no way clear on this. (In fact, even unweighted norms usually result in temperaments with a slight bias towards low primes, simply because the way temperaments are usually constructed, e.g. by stacking edo maps, already has this bias, especially with respect to octaves.)

-Sintel (talk) 19:37, 18 December 2021 (UTC)

I'm not sure if the name Frobenius tuning is derived from the Frobenius norm.
The next part could be explained more clearly, but I'd like to remind you that "the same error across primes" is itself a bias towards higher primes. Notice that if 2 and 7 are equally weighted, 8 would get about thrice the error of 7 (since [math]8 = 2^3[/math], its error is three times the error of 2; see Graham's primerr.pdf). And no, I haven't observed the bias towards lower primes due to the way temperaments are constructed. FloraC (talk) 01:06, 19 December 2021 (UTC)
I've read Breed's paper and I think it's very good work. Let me be clear: I think giving more weight to lower primes is a very good idea. But it seems obvious that this is explicitly introducing a certain bias to get better results in practice.
8 is not a prime, so when talking about average errors for the primes it is kind of irrelevant. In the case where you work in some subgroup like 2.5.9, I don't see why you would tolerate twice the error in 9 as you do for 3 in 2.3.5, as we are treating 9 here as a 'formal prime' and not even considering 3. Reading Breed's arguments further he does actually imply that his weighting is biased towards lower primes, but that this is a good thing.
I realize that this is just arguing semantics so I will not lose any sleep over it.
- Sintel (talk) 02:23, 19 December 2021 (UTC)
[REDACTED] --Cmloegcmluin (talk) 08:14, 27 January 2022 (UTC)
Woah, your table is really enlightening! I mostly agree with you. Since I figured "Tenney" was the weighting method and "Euclidean" was the norm method, on that basis I'd be more lax about naming them. I think "Tenney-weighted-Euclidean tuning" and "Tenney-Euclidean-normed tuning" are okay. FloraC (talk) 12:26, 27 January 2022 (UTC)
Nice! I'm glad it was helpful. And thanks for making the requested change. (Nice work on the Tenney height article yesterday, too.)
You're right about "Tenney-weighted-Euclidean tuning". I realized that my recommendation against that type of name was way less important, merely a stylistic concern, so I made changes in my previous post here accordingly.
And I've moved "Tenney-Euclidean-normed" up into the completely okay category. I suggested it myself in my previous post but just hadn't updated my first post yet. I realized I have another way to explain why that one's okay. If you look at the Tenney-Euclidean error as [math](eW)(i/||i||)[/math], then that's a primes error map [math]e[/math] with Tenney-weighted primes per the [math]W[/math], multiplied by [math]i/||i||[/math], which could also be written [math]\hat{i}[/math] ("i-hat"), the normalized vector or unit vector for the interval, specifically using the Euclidean norm. But we can group things a slightly different way, too: [math](e)(Wi/||i||)[/math], in which case you've just got a plain old primes error map, but the other thing you've got is a Tenney-Euclidean-normed interval, because the [math]W[/math] ("Tenney") matrix just has a diagonal of [math]1/\log_2 p[/math], so it puts stuff in the denominator along with that [math]||i||[/math], resulting in an interval essentially divided by its TE height, AKA its TE norm. So that's basically just a mathematical rendition of the fact I stated in the previous post: that "Tenney" in our context simply means that [math]1/\log_2 p[/math] factor, and is not necessarily bound to weighting. --Cmloegcmluin (talk) 16:32, 27 January 2022 (UTC)
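A quick numerical check of those two groupings (a sketch with an arbitrary made-up error map; the point is just that the two expressions differ only by associativity):

<syntaxhighlight lang="python">
import numpy as np

primes = np.array([2, 3, 5])
W = np.diag(1 / np.log2(primes))  # the "Tenney" matrix
e = np.array([1.7, -2.9, 4.3])    # some primes error map, in cents
i = np.array([-2.0, 0.0, 1.0])    # monzo for 5/4

norm_i = np.linalg.norm(i)        # Euclidean norm of the interval

# Grouping 1: Tenney-weighted error map times the unit vector.
x1 = (e @ W) @ (i / norm_i)
# Grouping 2: plain error map times the weighted, normalized interval.
x2 = e @ (W @ i / norm_i)

print(np.isclose(x1, x2))  # True
</syntaxhighlight>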
I've made another slight modification to something I wrote above, in order to generalize it to apply not only to all-interval optimization tunings like TE and TIPTOP, but also to target tunings. --Cmloegcmluin (talk) 22:24, 27 January 2022 (UTC)
I've been studying these tuning techniques a lot recently and think I may end up wanting to revise some of my statements above. Some of them may be just straight up wrong. Sorry for any confusion in the meantime, but I'll share my conclusions as soon as I can, when they're ready for prime time. Ha. Get it, "prime time". --Cmloegcmluin (talk) 00:53, 23 February 2022 (UTC)
Ah, I had forgotten about this monstrosity of a post I made on a discussion page. Unfortunately, reviewing it from my present vantage, having studied this extensively for the entire year and dramatically refined my take on these things, what I wrote above is beyond salvaging and I feel I must simply delete it all out of embarrassment and to spare anyone else from getting misled by it. Of course, it's still there in the edit history if anyone needs to understand what Flora was reacting to, etc.
Also, per Sintel's original comment, I have by now realized that the Frobenius tuning is the one which minimizes the Frobenius norm of the projection matrix (not the mapping matrix), by defining the projection matrix as the mapping matrix left-multiplied by (a generator embedding matrix equal to) its own pseudoinverse. So the name does make sense, but I think it should be clarified where it is mentioned. --Cmloegcmluin (talk) 19:38, 11 December 2022 (UTC)
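A sketch of that construction as described (using the 5-limit meantone mapping as a stand-in example; not necessarily the article's exact formulation):

<syntaxhighlight lang="python">
import numpy as np

A = np.array([[1, 1, 0], [0, 1, 4]])  # a mapping matrix (5-limit meantone)

# Generator embedding: the pseudoinverse of the mapping.
B = np.linalg.pinv(A)

# Projection matrix: the mapping left-multiplied by its own pseudoinverse.
P = B @ A
print(np.linalg.norm(P, 'fro'))  # the Frobenius norm in question

# The resulting Frobenius tuning map: the JIP sent through P.
J0 = 1200 * np.log2([2, 3, 5])
print(J0 @ P)  # tempered sizes of primes 2, 3, 5, in cents
</syntaxhighlight>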