Xenharmonic Wiki:Cross-platform dialogue

== Ranking poll for "good" EDOs from 27 to 140 ==
Ranking poll is here: https://strawpoll.com/wAg3A53dMy8
== Reactions to Dirichlet badness ==
Sintel's Dirichlet badness is a TE logflat badness that claims to be able to meaningfully compare temperaments across ranks and subgroups.
Try it on Sintel's temperament calculator: https://sintel.pythonanywhere.com/
Within each rank and subgroup, this badness measure is proportional to Gene Ward Smith's logflat badness (hence it is also a TE logflat badness, differing only by a scaling factor), but unlike Gene's it comes with an absolute scale: the general rule is that a temperament is good if its Dirichlet badness is below 1.
Technically, it normalizes the weight matrix W to U such that det(U) = 1. It can be shown that
U = W/det(W)^(1/d)
where d is the dimensionality of the subgroup.
No other RMS-like normalization is applied, so the complexity C is simply
C = ||M(U)||
where || || denotes the standard L2 norm and M(U) means the mapping M weighted by U. It can be shown that this equals
C = ||M(W)||/det(W)^(r/d)
where r and d are the rank and dimensionality of the temperament.
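A minimal numeric sketch of these formulas (assuming Tenney weights W = diag(1/log2 p), the standard L2 wedge norm taken as a Gram determinant, and, for concreteness, the 2.3.5 meantone mapping; numpy only):
<syntaxhighlight lang="python">
import numpy as np

# Tenney weights for the 2.3.5 subgroup: W = diag(1/log2(p))
primes = np.array([2.0, 3.0, 5.0])
W = np.diag(1.0 / np.log2(primes))
d = len(primes)

# Normalize W to U so that det(U) = 1
U = W / np.linalg.det(W) ** (1.0 / d)
assert np.isclose(np.linalg.det(U), 1.0)

# Meantone mapping (rank r = 2, tempering out 81/80)
M = np.array([[1.0, 0.0, -4.0],
              [0.0, 1.0, 4.0]])
r = M.shape[0]

# C = ||M(U)||: L2 norm of the weighted multival, via the Gram determinant
MU = M @ U
C = np.sqrt(np.linalg.det(MU @ MU.T))

# Same value from the unnormalized weights: ||M(W)|| / det(W)^(r/d)
MW = M @ W
assert np.isclose(C, np.sqrt(np.linalg.det(MW @ MW.T)) / np.linalg.det(W) ** (r / d))
print(C)  # ~5.08
</syntaxhighlight>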
The simple badness is likewise given as ||M(U) ∧ J(U)||, where J(U) = J(W)/det(W)^(1/d) is the weighted just tuning map and ∧ is the wedge product. The logflat badness is given as
L = ||M(U) ∧ J(U)|| · ||M(U)||^(r/(d - r)) / ||J(U)||
Notice the extra factor of 1/||J(U)||, which is to say we divide by the norm of the just tuning map. For comparison, Gene's derivation omits the factor of 1/||J(W)||, whereas in Graham Breed's derivation, with Tenney weights, including or omitting this factor makes no difference, since ||J(W)|| is unity there.
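Continuing the sketch above, the simple and logflat badness follow the same pattern, with the wedge norm again computed as a Gram determinant (meantone once more as the assumed example):
<syntaxhighlight lang="python">
import numpy as np

primes = np.array([2.0, 3.0, 5.0])
W = np.diag(1.0 / np.log2(primes))
d = len(primes)
U = W / np.linalg.det(W) ** (1.0 / d)

M = np.array([[1.0, 0.0, -4.0],   # meantone again, tempering out 81/80
              [0.0, 1.0, 4.0]])
r = M.shape[0]

MU = M @ U
JU = np.log2(primes) @ U          # weighted just tuning map J(U)

# ||M(U) ∧ J(U)||: Gram determinant of the stacked weighted mapping and J(U)
A = np.vstack([MU, JU])
simple = np.sqrt(np.linalg.det(A @ A.T))

C = np.sqrt(np.linalg.det(MU @ MU.T))
L = simple * C ** (r / (d - r)) / np.linalg.norm(JU)
print(round(L, 3))  # 0.173 -- the meantone figure quoted below
</syntaxhighlight>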
Another interesting property it demonstrates is invariance under skew, which makes TE logflat badness and WE (Weil-Euclidean) logflat badness identical.
I'll be documenting it on the wiki in the next few days, which means I'll be taking over this page https://en.xen.wiki/w/Tenney-Euclidean_temperament_measures.
Some of us in the XA Discord server are fond of this measure and are proposing it as a replacement for Gene's across the wiki.
> I think Gene's definition is fine for ranking, but if you see a temperament with badness 0.0013 it tells you absolutely nothing, because that number only makes sense if you know both the dimension and rank
> what's the point of all temperaments having 0.00xxx badness lol
-- Flora Canou, 15 Jul 2024 (imported from Facebook)
: I think the basic idea is good, but the way it scales between subgroups in its current form seems to be off. I had played around with something similar in the past, though, and I think this is in the right direction, although I only had partial results (and I was more focused on generalizing Cangwu badness than logflat badness).
: The way that it's being normalized doesn't seem to make much sense for inter-subgroup temperament comparisons. For instance, you may note that 2.3.5 meantone has a badness of 0.173. On the other hand, here is the 2.11/5.13/5 "Petrtri" temperament:
: https://sintel.pythonanywhere.com/result?subgroup=2.11%2F5.13%2F5&reduce=on&weights=weil&target=&edos=&commas=2200%2F2197&submit_comma=submit
: The badness here is 0.018, so it's better than meantone. Here's some random temperament on a hugely complex subgroup:
: https://sintel.pythonanywhere.com/result?subgroup=1234.12312.1231&reduce=on&weights=weil&target=&edos=12%2C+31&submit_edo=submit&commas=
: And now the badness is 0.000.
: This is assuming the thing called "badness" on Sintel's temperament finder is the same thing you are talking about.
: -- Mike Battaglia, 17 Jul 2024 (imported from Facebook)
:: I'm actually not sure how it's supposed to work on degenerate subgroups like 2.11/5.13/5 ("degenerate": see my last post about subgroups). My own implementation doesn't support them because I'm not sure how complexity is defined on such subgroups.
:: For the next example, I reckon it's because you're using a very simple mapping for very large primes. I think it's fair to say the measure makes sense for most practical purposes -- subgroups like the 7-limit, the 13-limit, 3.5.7, etc.
:: -- Flora Canou, 15 Jul 2024 (imported from Facebook)
::: Are you sure that what's on Sintel's website is the thing you are talking about? Because the results are not making much sense here. For instance, we have
::: 1.496: 2.3.5 mod 256/243
::: 1.234: 2.3.7 mod 256/243
::: 0.818: 2.3.19 mod 256/243
::: 0.701: 2.3.31 mod 256/243
::: So we have the dimension, codimension, and kernel all staying the same, but as the subgroup complexity increases, the badness gets lower and lower. Shouldn't it be getting higher?
::: For reference, 2.3.5 porcupine is 0.722, so this ranks 2.3.31 mod 256/243 as better than porcupine. And here we have two very similar commas (256/243 and 250/243), with porcupine much better in terms of error -- but 2.3.31 mod 256/243 is ranked better because of this weird inverse weighting on subgroup complexity that you seem to be doing.
::: -- Mike Battaglia, 16 Jul 2024 (imported from Facebook)
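The figures above appear to follow from the formulas at the top of this thread. A sketch assuming Tenney weights, with the mapping taken as the 5-val on 2.3 plus a free generator for the third prime (an assumed basis; the badness should depend only on the temperament, not the basis chosen):
<syntaxhighlight lang="python">
import numpy as np

def dirichlet_logflat(M, primes):
    """Logflat badness per the formulas at the top of this thread
    (Tenney weights, standard L2 norm, wedge norms as Gram determinants)."""
    M = np.asarray(M, dtype=float)
    r, d = M.shape
    W = np.diag(1.0 / np.log2(primes))
    U = W / np.linalg.det(W) ** (1.0 / d)
    MU = M @ U
    JU = np.log2(primes) @ U
    A = np.vstack([MU, JU])
    simple = np.sqrt(np.linalg.det(A @ A.T))
    C = np.sqrt(np.linalg.det(MU @ MU.T))
    return simple * C ** (r / (d - r)) / np.linalg.norm(JU)

# 2.3.p tempering out 256/243: the 5-val on 2.3 plus a free generator for p
M = [[5, 8, 0],
     [0, 0, 1]]
for p in [5, 7, 19, 31]:
    print(p, round(dirichlet_logflat(M, np.array([2.0, 3.0, float(p)])), 3))
# 5: 1.496 / 7: 1.237 / 19: 0.818 / 31: 0.701
# close to the figures quoted above, and steadily decreasing as described
</syntaxhighlight>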
:::: Why, I've reproduced Sintel's results. I believe the example you're showing here is but an artifact of the Tenney weighting.
:::: To be clear, it's not the subgroup's complexity that contributes to the badness; it's the temperament's, which decreases as the subgroup's complexity increases. That's what I observed. You're weighting the mapping such that the last prime gets less and less weight. Even though the weight matrix is normalized, its complexity still goes down (I'm not sure why, but it is what it is).
:::: -- Flora Canou, 16 Jul 2024 (imported from Facebook)
::::: Right, and that is the problem. The behavior you're describing is also what happens if you just use the naive badness without trying to normalize at all: TE-weighted error and complexity both decrease as the subgroup gets more complex, so super-complex temperaments get ranked very low in badness. This is not usually what you want musically. The goal of a subgroup temperament badness function, or "superbadness" as I was calling it at one point, is to normalize things between subgroups so that this doesn't happen.
::::: -- Mike Battaglia, 16 Jul 2024 (imported from Facebook)
::::: The reason it lowers is very subtle, but a good starting point is to remember that there are two ways to take subgroup complexity: as the norm of a multimonzo K representing the kernel, or of the multival V representing the subspace of vals supporting the temperament. These are proportional but not equal, and in general we have
::::: ||V|| = ||K||/||S||
::::: where ||S|| is the determinant of the weighting matrix in *monzo space* (so the inverse of the matrix you have been using, meaning its determinant is the reciprocal of the one you have been using). You can see that using ||K|| directly doesn't have the problem with Blackwood above.
::::: I think, though I don't remember 100%, that there is a similar thing with the relative error / simple badness. Remember that relative error can also be thought of as ||V||, but with ||...|| now representing a kind of seminorm. There is a kind of dual to that (handwaving a bit about what "dual" means) which, for single commas, is basically the seminorm measuring the span of the comma, and again I think you get a similar kind of equation to the one above. But it's been a while since I looked at it, so I don't remember the details.
::::: The point, though, is that if you compute these quantities on the "monzo" side, you get things that are at least an improvement, in that they don't decrease with increasing subgroup complexity! What you have, on the other hand, is the val side divided by a certain fractional power of the val-weighting determinant, which is equivalent to multiplying by some fractional power of ||S|| -- but that isn't enough to counteract the implicit division by a much larger power of ||S|| that you get by using the TE versions of these metrics in the first place. So you get something which decreases as subgroup complexity increases.
::::: -- Mike Battaglia, 16 Jul 2024 (imported from Facebook)
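A quick numeric check of this identity on the 2.3.p family above, assuming val weights W = diag(1/log2 p) so that the monzo-space weighting is W^(-1). The kernel here is a single comma, so the multimonzo norm is just the weighted vector norm:
<syntaxhighlight lang="python">
import numpy as np

# 2.3.p tempering out 256/243: mapping M and kernel monzo K
M = np.array([[5.0, 8.0, 0.0],
              [0.0, 0.0, 1.0]])
K = np.array([8.0, -5.0, 0.0])    # 256/243 = |8 -5 0>

for p in [5.0, 7.0, 19.0, 31.0]:
    primes = np.array([2.0, 3.0, p])
    W = np.diag(1.0 / np.log2(primes))           # val-space weighting
    S = np.linalg.inv(W)                         # monzo-space weighting
    MW = M @ W
    norm_V = np.sqrt(np.linalg.det(MW @ MW.T))   # multival norm ||V||
    norm_K = np.linalg.norm(S @ K)               # (multi)monzo norm ||K||
    assert np.isclose(norm_V, norm_K / np.linalg.det(S))  # ||V|| = ||K||/||S||
    print(p, round(norm_V, 3), round(norm_K, 3))
# ||V|| shrinks as p grows, but ||K|| stays constant: 256/243 does not involve p
</syntaxhighlight>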
:::::: Hmm, alright. Technical parts aside (I'll be experimenting with it some time), I personally don't have a strong intuition that this badness figure must be constant or increasing on increasingly complex subgroups, but let me relay your concern to the Discord users and see if they can (or will) fix it.
:::::: -- Flora Canou, 17 Jul 2024 (imported from Facebook)