Xenharmonic Wiki:Cross-platform dialogue: Difference between revisions
== Reactions to Dirichlet badness ==
:::::: -- Flora Canou, 17 Jul 2024 (imported from Facebook)
: the badness measure is only to be taken seriously on p-limit subgroups, otherwise it's too easy to game. previously i did 'penalize' weird subgroups, but it's not very satisfying to do so arbitrarily. i personally only care about temperaments that are good in common subgroups, and i support weird subgroups only bc some people requested it
: as for the comment on e.g. Petrtri, I *do* think it should have a much lower badness than meantone! look at the mapping matrix: it's about the same complexity as something like Dicot, yet its errors are all ~0.05 cents! that's amazing honestly! but of course that only really matters if you take the subgroup seriously (i don't)
: as for the other thing about subgroup complexity with the 256/243 example: that's actually a good point, had not noticed this before. I will investigate the cause!
: [[User:Sintel|Sintel]] ([[User talk:Sintel|talk]]) 21:06, 17 July 2024 (UTC) [copypasta from facebook]
:: Why not just penalize it so that [subgroup].p has increasing badness for increasing complexity of p? Or if that leads to a strange normalizer for whatever reason, why not just have all [subgroup].p have equivalent badness, assuming that p is unconstrained in all of them? I don't think "the exact normalizer expression that would make all extensions equal badness assuming everything else stays the same" is arbitrary. --[[User:Godtone|Godtone]] ([[User talk:Godtone|talk]]) 18:38, 3 October 2024 (UTC)
::: I attached another piece of information below. [[User:FloraC|FloraC]] ([[User talk:FloraC|talk]]) 18:46, 3 October 2024 (UTC)
: Sintel Microtones Why don't you try your metric on the "monzo" version of logflat badness? Just use the L2 norm on the multimonzo for complexity and simple badness, and this problem may partly solve itself. I think it'll be the same thing you have, but times the subgroup determinant raised to some slightly different power than what you are currently doing.
: When you say that you are mainly trying to compare p-limit subgroups, are you also trying to compare subgroup temperaments of different ranks, or different codimensions, or what? Or just, like, 5-limit vs 7-limit meantone, for instance? Because I noticed some weirdness also with p-limit subgroup temperaments - it rates 7-limit meantone as much worse than 5-limit meantone, which is another thing I'm not sure I agree with, as 5-limit meantone practically gives you all of these ratios of 7 for free, with barely any added error!
: -- Mike Battaglia, 18 Jul 2024 (imported from Facebook)
:: Regarding meantone, since septimal meantone is about twice as complex as 5-limit meantone with virtually the same avg error, it should be normal that septimal meantone has about twice the badness.
:: -- Flora Canou, 20 Jul 2024 (imported from Facebook)
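The "about twice as complex" estimate above can be checked numerically. The sketch below uses a Tenney-weighted L2 norm of the multival normalized by det(W) — one plausible reading of the complexity under discussion, not necessarily the exact metric Sintel's tool implements; the function name and normalization are illustrative assumptions.

```python
import numpy as np

def complexity(mapping, primes):
    # Tenney weighting: W = diag(log2(p)) for each basis prime.
    W = np.diag(np.log2(primes))
    MW = np.asarray(mapping, dtype=float) @ W
    # L2 norm of the weighted multival via the Gram determinant,
    # normalized by det(W) (illustrative choice).
    return np.sqrt(np.linalg.det(MW @ MW.T)) / np.linalg.det(W)

# 5-limit meantone: <1 0 -4], <0 1 4]
c5 = complexity([[1, 0, -4], [0, 1, 4]], [2, 3, 5])
# septimal meantone: <1 0 -4 -13], <0 1 4 10]
c7 = complexity([[1, 0, -4, -13], [0, 1, 4, 10]], [2, 3, 5, 7])
print(c7 / c5)  # roughly 2, consistent with the "about twice" estimate
```

With this normalization the ratio comes out close to 2, which would indeed roughly double the badness at equal error.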
:: I noticed if instead of
:: U = W/det(W)^(1/d)
:: you go for
:: U = W/det(W)^(1/r)
:: and thus
:: C = ||M(W)||/det(W)
:: you get invariant complexities and therefore invariant badnesses on the same kernel in arbitrary subgroups. However, comparison of temps across ranks/dimensionalities no longer makes sense now, so some other factor needs to be introduced.
:: -- Flora Canou, 20 Jul 2024 (imported from Facebook)
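One property of C = ||M(W)||/det(W) that is easy to verify directly is that it does not depend on which mapping basis you pick for the same temperament: a unimodular change of the mapping rows (same row space, same kernel) leaves the Gram determinant unchanged. A minimal sketch, with illustrative names and assuming ||.|| is the L2 multival norm computed via the Gram determinant:

```python
import numpy as np

def invariant_complexity(mapping, primes):
    # W = diag(log2(p)); C = ||M(W)|| / det(W), with ||.|| taken as the
    # L2 norm of the weighted multival via the Gram determinant.
    W = np.diag(np.log2(primes))
    MW = np.asarray(mapping, dtype=float) @ W
    return np.sqrt(np.linalg.det(MW @ MW.T)) / np.linalg.det(W)

# Meantone in two equivalent mapping bases (second row added to the first):
a = invariant_complexity([[1, 0, -4], [0, 1, 4]], [2, 3, 5])
b = invariant_complexity([[1, 1, 0], [0, 1, 4]], [2, 3, 5])
print(abs(a - b) < 1e-9)  # True: unimodular row changes don't affect C
```

This covers invariance over mapping bases; checking the stronger claim of invariance across different subgroup bases for the same kernel would additionally require rebuilding W for the new basis.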
::: right, this is the metric I was talking about above regarding the "monzo" version of logflat badness. This is a pretty good start already, but now we have this issue of normalizing things of different rank.
::: The starting point is: how can one meaningfully assign a complexity to two arbitrary JI *subgroups* of different rank? This is useful not just for ranking subgroups themselves to derive temperaments from, but also ranking kernels, which are also just JI subgroups. We could use the Tenney norm of the multimonzo of the subgroup - this is the weighting matrix determinant you have been using - but now we're comparing lengths to areas to volumes, etc, so how do we do that?
::: This is something I put together many years ago but just haven't put on the Wiki, although I never put it all together into any combined metric. But basically the way I did it is to start with what's called the "theta function" or "theta series" of the subgroup as a lattice, but using the L1 norm instead of the L2 norm. We instead just look at the subgroup as a set of intervals, rather than as any kind of geometric object. The theta function associated to any lattice is the sum of 1/(n*d)^s over all n/d in the subgroup, where s is a free parameter that determines how much we care about more complex intervals - how fast they should roll off, similar to the zeta function. This can be thought of as representing a kind of "strength" for the subgroup, so that its inverse is a kind of complexity.
::: We do a bunch of analysis with rational approximations at s -> 0 and derive the "lambda function" for the subgroup, which is basically the multimonzo norm/subgroup determinant, but normalized in a certain magic way that makes it all work out. There is one free parameter that simultaneously measures how fast complex intervals should roll off, how much you care about rank, how much you care about larger chords, how simple a new JI interval should be for it to be worth it if you go up a rank and add it, and even the base of the logarithm. There is a closed-form expression relating all of these things and I had some recommended values for this one parameter.
::: Anyway, if you're interested I can write this all up at some point, but if you want to start playing around, you can do so right now just by taking what you already have but changing the base of the logarithm.
::: -- Mike Battaglia, 20 Jul 2024 (imported from Facebook)
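The theta series Mike describes is easy to experiment with numerically. The sketch below truncates the sum of 1/(n*d)^s to subgroup intervals with bounded exponents; for p1^e1 * p2^e2 * ... we have n*d = prod(p^|e|), so log2(n*d) is the weighted L1 (Tenney) norm of the monzo, matching the "L1 instead of L2" choice. The truncation bound and names are my own illustrative choices:

```python
from itertools import product
from math import prod

def theta_series(primes, s, bound=4):
    # Truncated theta series of a JI subgroup: sum of 1/(n*d)^s over all
    # intervals n/d whose exponent on each basis element lies in
    # [-bound, bound]. The 1/1 interval contributes the constant term 1.
    total = 0.0
    for exps in product(range(-bound, bound + 1), repeat=len(primes)):
        nd = prod(p ** abs(e) for p, e in zip(primes, exps))
        total += nd ** (-s)
    return total

# Larger s makes complex intervals roll off faster, shrinking the sum:
print(theta_series([2, 3, 5], s=2.0) > theta_series([2, 3, 5], s=3.0))  # True
```

With this in hand, "strength" is the series value and a complexity is its inverse; the s -> 0 analysis Mike mentions would need the untruncated, regularized version rather than this direct sum.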