Talk:Marvel

::::: I understand and can sympathize about your frustration in the XA Discord. Thanks for the detailed explanation. [[User:FloraC|FloraC]] ([[User talk:FloraC|talk]]) 16:23, 18 January 2025 (UTC)
=== Continued discussion of odd-limit complexity weighting ===
"if each individual case were impractical, they'd prolly not magically combine to something practical" to be honest I don't see why not; if the same EDO appears in the sequence for multiple tonality diamonds you're interested in (both simple and complex), how is it flawed to pick that EDO as your tuning of interest for trying out/further investigation? (This is especially true if it keeps appearing when you investigate different punishment strategies, because that means a more diverse sample of tuning philosophies agrees on the same tuning.)
"its identity is evident if you approach it by two steps of 5/4" to some degree yes but you shouldn't always need to "explain" the stacking an interval is made of to a listener IMO. If I want to use 32/25 in some chord I don't necessarily want the chord to also have 5/4 present just to make the 32/25 interpretation clearer. (For this purpose it seems to me that the tuning should at least not be closer to 9/7 which is very temperable dyadically by comparison, so that even a very undertempered ~32/25 can work as ~9/7.)
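For concreteness, the two interval sizes being compared can be checked directly (a minimal sketch; the <code>cents</code> helper is my own, not part of any tool mentioned here):

```python
from math import log2

def cents(num, den):
    """Size of the ratio num/den in cents (1200 cents per octave)."""
    return 1200 * log2(num / den)

# 32/25 and 9/7 are under 8 cents apart, so an undertempered ~32/25
# does not have far to drift before it works as a ~9/7.
print(round(cents(32, 25), 2))  # 427.37
print(round(cents(9, 7), 2))    # 435.08
```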
"I have no idea how you got to complexity squared or even the fourth" I think you misunderstood. Multiplying the ''squared error'' by the ''fourth power'' of the odd-limit complexity is ''mathematically equivalent'' to taking the mean squared error (MSE) where the "error" is the absolute or relative error ''times'' the square of the odd-limit complexity. That is, if e is our (absolute or relative) error and c is our odd-limit complexity, then the "punishment" we give each interval is e<sup>2</sup> * c<sup>4</sup> = (e * c<sup>2</sup>)<sup>2</sup>, which means we are doing MSE on e * c<sup>2</sup>. Does that make sense? (I may change this behaviour in the future, so that the weighting is applied directly to the error rather than after the error function, to avoid confusion.) As for ''why use the square of the odd-limit complexity'': the answer is that it gives "dyadic" tuning fidelity, which is the harshest kind, demanding that an interval be recognisable ''in isolation''. In other words, it's because the number of intervals in the k-odd-limit grows approximately with k<sup>2</sup>.'''*''' As I found by ear, this means the required tuning fidelity goes up very quickly as the odd-limit grows: already by the time you get to 19/16, for me only about 2 cents of mistuning is allowed, simply due to lack of harmonic context.
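The equivalence described above can be checked numerically; this is a minimal sketch with invented example values (the function names are mine), not the actual optimizer:

```python
def mse_weighted_after(errors, complexities):
    """Mean of squared error times the fourth power of odd-limit complexity."""
    return sum(e**2 * c**4 for e, c in zip(errors, complexities)) / len(errors)

def mse_weighted_before(errors, complexities):
    """MSE where each error is first weighted by the square of the complexity."""
    return sum((e * c**2)**2 for e, c in zip(errors, complexities)) / len(errors)

errors = [1.5, -0.8, 2.1]   # hypothetical errors in cents
complexities = [3, 5, 9]    # hypothetical odd-limit complexities
# The two formulations agree term by term: e**2 * c**4 == (e * c**2)**2.
assert abs(mse_weighted_after(errors, complexities)
           - mse_weighted_before(errors, complexities)) < 1e-9
```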
: '''*''' I'm fairly sure of this claim intuitively, though I didn't know how to prove it and asked someone else about it; let me know if you can find a proper bound on the number of intervals in the k-odd-limit. The curve should be similar for the k-integer-limit as well; the general idea is that the ranges of the numerator and denominator both grow linearly, so the number of possibilities grows with the square.
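A quick empirical check of the roughly quadratic growth (a sketch of my own; it counts ordered coprime pairs of odds, which is only a proxy for the exact definition of the odd-limit):

```python
from math import gcd

def odd_limit_pairs(k):
    """Count ordered pairs (a, b) of odd numbers <= k with gcd(a, b) == 1,
    a rough proxy for the number of intervals in the k-odd-limit."""
    odds = range(1, k + 1, 2)
    return sum(1 for a in odds for b in odds if gcd(a, b) == 1)

# The count grows roughly with k**2: doubling k roughly quadruples it.
for k in (9, 19, 39, 79):
    print(k, odd_limit_pairs(k))
```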
Clearly, considering intervals in isolation rather than as suggested through the harmonic context of chords gives the wrong answer, but I'm not sure how one would mathematically quantify the harmonic ambiguity of a contextualized interval. It's clear that context makes the required tuning fidelity less harsh, so the dyadic punishment e<sup>2</sup> * c<sup>4</sup> = (e * c<sup>2</sup>)<sup>2</sup> (square of the odd-limit) is inappropriate. My best guess has been to weight the error-before-squaring by the square root of the odd-limit complexity, e<sup>2</sup> * c = (e * c<sup>1/2</sup>)<sup>2</sup>, which is actually quite forgiving in the tuning of more complex intervals. The hypothesis is that context is almost but not fully capable of making tuning fidelity the same regardless of complexity (this is the most forgiving sensitivity I can allow, so really the question is whether it's too insensitive, not whether it's too sensitive). It doesn't seem like the answer could be less sensitive than this, but it's admittedly a strange answer I'd like more justification for (though I gave some in the above paragraphs already and you gave more). Intuitively, context should allow the sensitivity to go from e<sup>2</sup> * c<sup>4</sup> = (e * c<sup>2</sup>)<sup>2</sup> (which AFAIK is the precise dyadic precision) to e<sup>2</sup> * c<sup>2</sup> = (e * c)<sup>2</sup> (weighting the error-before-squaring proportional to the odd-limit); the reason I didn't pick the latter (which is more elegant) is that there still seemed to be a dominance phenomenon.
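To make the three sensitivity regimes concrete, here's a sketch (my own illustration, not anyone's tool) of the per-interval punishment for a fixed 1-cent error as complexity grows:

```python
def punishment(e, c, exponent):
    """Squared-error punishment e**2 * c**exponent.
    exponent = 4: dyadic fidelity, (e * c**2)**2
    exponent = 2: proportional-to-odd-limit, (e * c)**2
    exponent = 1: square-root weighting, (e * c**0.5)**2"""
    return e**2 * c**exponent

# For a fixed 1-cent error, the dyadic regime punishes a 19-odd-limit
# interval thousands of times more than the square-root regime does.
for c in (3, 9, 19):
    print(c, [punishment(1.0, c, p) for p in (4, 2, 1)])
```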
BTW, it's easy to use unweighted error by just feeding <code>weighting=1</code> as a parameter. I just feel that the results are biased too much towards simple intervals, so that it starts to feel like there's no point in including the more complex stuff if it's not gonna be tuned well enough.
--[[User:Godtone|Godtone]] ([[User talk:Godtone|talk]]) 23:14, 18 January 2025 (UTC)
: You said: "if the same edo appears in the sequence for multiple tonality diamonds you're interested in (both simple and complex), how is it flawed to pick that edo as your tuning of interest for trying out/further investigation?" My answer is: becuz how many times an edo appears in those sequences is an arbitrary metric you made up. Each sequence represents the scenario where one is interested, and only interested, in a specific odd limit, as you have weight for those intervals and only those (becuz you use complexity weighting, this can't be explained as discarding negligible weights, or gating). Counting the time of appearance thru multiple sequences would imply that I'm interested in a specific small odd limit, ''and'' I'm also interested in another larger odd limit, which contradicts the premise that I should be uninterested in it. So this is a self-contradicting/schizophrenic metric. It doesn't represent or translate to a realistic scenario.
: Note that this isn't the same as being unsure whether I'm interested in something, having a probability of being interested in something, or being interested but only using something at a low frequency. If I'm unsure, I might not want to be using the limit as a hard cutoff. More likely I'll be configuring the weighting curve to reflect the fuzziness of the thought.
: The idea that the interval should be tuned closer to 32/25 than 9/7 sounds insane to me. To be clear, it is a valid tuning and can be useful to those who specifically target intervals of 25 for whatever reason. But that's a personal artistic choice. To talk about it like an optimal tuning is wild. Imo complex intervals like 32/25 should always get some harmonic support. Even if you use it alone, consider how much more significant 9/7 is: it appears everywhere in basic-level septimal harmony. In comparison, the only utility of 32/25 that I can think of is in 5-limit JI purist approaches, or maybe primodalist approaches, in which case it gets plenty of harmonic support anyway.
: Okay, maybe your dyadic tuning fidelity and/or contextualized tuning fidelity make some sense. Becuz I also take account of harmonic significance and frequency of use, I still have plenty of cases where I'd turn your complexity weighting into a simplicity weighting. [[User:FloraC|FloraC]] ([[User talk:FloraC|talk]]) 12:34, 19 January 2025 (UTC)
:: "Each sequence represents the scenario where one is interested and only interested in a specific odd limit" is possibly a bit of a misunderstanding. This reasoning would make more sense if the sequence were still subject to the "largest odds harmonic complexity based dominance" phenomenon/issue I spoke of, but I fixed that using the very forgiving complexity weighting. If we look at the (2k+1)-odd-limit, then really we are ''also'' simultaneously looking at the (2k-1)-odd-limit, as well as every subset, because the 2k+1 odd has not "dominated"/warped the results so that previous harmonies are discarded. Rather, what's happened is that a ''little'' more error is allowed on smaller odds so that the larger odds have even a chance of being kept in tune (on average), and because everything is by patent val, this error is felt even less, since chords are constructed internally consistently. If anything, you can think of the square-root-of-odd-limit complexity weighting as a counterweight to small odds' natural dominance: their presence in composite odds strengthens their tuning requirements indirectly (so the tuning fidelity of 15 improves the fidelity of 3 and 5, for example). An unweighted scheme would therefore bias unambiguously and unfairly in favour of small odds. If that's your priority, that's fine; I just don't see what the point of targeting more complex harmonies at all is in that case.
:: "Counting the time of appearance thru multiple sequences would imply that I'm interested in a specific small odd limit, ''and'' I'm also interested in another larger odd limit, which contradicts the premise that I should be uninterested in it. So this is a self-contradicting/schizophrenic metric. It doesn't represent or translate to a realistic scenario." I'm afraid this doesn't make any sense to me at all. Could you explain how it makes sense? As I see it, if I am looking at multiple odd-limits, then an EDO appearing in all or most of them shows that, regardless of which odd-limit I care more about, that EDO is a good choice for its size. Wanting it to appear in many odd-limits is an indirect way of achieving the "complexity falloff" rather than a hard cutoff; it's just that the human is aggregating the results, which is more reliable than an algorithm trying to do the same, because the human can judge based on more information and considerations that the algorithm might not account for, such as (importantly) personal taste in tuning.
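The aggregation being described might be sketched like this (the sequences below are invented placeholders purely to show the counting, not computed optimal sequences):

```python
from collections import Counter

# Hypothetical optimal-EDO sequences for three odd-limits (made-up data).
sequences = {
    9:  [12, 19, 22, 31, 41],
    15: [19, 22, 31, 41, 72],
    21: [22, 31, 41, 53, 72],
}

counts = Counter(edo for seq in sequences.values() for edo in seq)
# An EDO appearing in every sequence is a good choice regardless of which
# odd-limit you end up caring about most.
robust = sorted(edo for edo, n in counts.items() if n == len(sequences))
print(robust)  # → [22, 31, 41]
```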
:: I wanted to elaborate on something I said: "It doesn't seem like the answer could be less sensitive than this, but it's admittedly a strange answer I'd like more justification for". The weird thing about using the square root of the odd-limit is that when I calibrated it on the most contentious interval whose tempering I accepted as psychoacoustically convincing (the ~3/2 and ~4/3 of 7edo), the implied bounds for all the odd-limits seemed to match my experience with a suspicious degree of accuracy: recognising, for example, that [[80edo]]'s very off [[~]][[15/13]] is tempered with basically exactly as much damage as I can accept for harmonically-contextualised purposes.
:: "Becuz I also take account of harmonic significance and frequency of use" I think this is maybe where the disagreement is arising from. I fundamentally don't agree that you can weight in this way. You can't say "because I use 3/2 often, 3/2 is the most important to have low error on", because that disregards the tuning fidelity required for more complex intervals, and disregards that your 3/2 may already be good enough for every practical purpose (which is especially likely if you look at larger odd-limits including composite odds, because of the frequency of 3 appearing in the factorisation). It doesn't matter how infrequently you use something; if you do use it, then having it be too damaged will have consequences for its sound (specifically its capability of concordance), so you either do or don't care about whether it concords. Plus, if you wanted to adopt that philosophy, then ironically [[53edo]] is ''definitely'' optimal for marvel, because it cares first and foremost about the 2-limit, then 3-limit, then 5-limit, then 7-limit, then 11-limit, ''strictly'' in that order, which is exactly the frequency falloff you are advocating for. So by your own reasoning it should be the best tuning, because it tunes primes better the smaller they are. Discarding it due to the uneven tuning of the full 9-odd-limit is indirectly an admission of complexity weighting, at which point you can't avoid the fact that more complex intervals need to be tuned better to concord. How much better is up for debate ofc, but it's definitely not unweighted, for the reasons I gave.
:: --[[User:Godtone|Godtone]] ([[User talk:Godtone|talk]]) 17:32, 20 January 2025 (UTC)
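The claim about 53edo tuning smaller primes strictly better is easy to verify with a minimal sketch (my own helper, using the patent val, i.e. each prime rounded to its nearest step):

```python
from math import log2

def patent_val_error(edo, p):
    """Signed error in cents of prime p under the patent val of the given edo
    (the prime mapped to its nearest whole number of steps)."""
    just = 1200 * log2(p)
    steps = round(edo * log2(p))
    return steps * 1200 / edo - just

# In 53edo the error magnitude grows strictly with the prime:
# roughly -0.07 cents on 3, -1.4 on 5, +4.8 on 7, -7.9 on 11.
for p in (3, 5, 7, 11):
    print(p, round(patent_val_error(53, p), 2))
```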
::: I don't think I've misunderstood. I specifically meant that when you pick a limit, you care about that limit (which includes smaller limits) and do not care about any intervals outside of it. For example when you pick 25 as a limit, you absolutely don't care about intervals of 27. That doesn't happen in reality cuz if I care about 25 so much that I put it at the local peak of the weighting curve, I have no reason to completely dismiss 27.
::: Next you said: "I just don't see what the point of targeting more complex harmonies at all is in that case." My answer is: it's not a given that there should be a point in targeting more complex harmonies. Its worth is something that needs proof. I've been holding that the objectively best, thus optimal and recommendable, weighting curve is one where you don't have to choose a target, and where it just does the rolloff for you.
::: Note that looking at multiple sequences and counting the times an edo appears isn't the same as interpolating the scores of the edo across limits. Interpolation would make some sense, actually, since it can translate to a configuration of the weighting curve. Counting the times cannot. It implies a completely different mindset, which I've described: you care about a limit and specifically not any intervals outside of it, and then you care about a larger limit, which defies your previous choice; then you care about yet another larger limit, which defies your previous two choices. Now, I don't think this is utterly wrong, but it's a new, opaque metric layered on the old metric. Same problem as POTE, if that means something to you. You could say there's some sort of "black magic" that somehow makes it close enough to what you want, but as I said, it doesn't translate to or represent a realistic scenario. Generally you have a scenario in your mind and find a mathematical model for it. I find a model hard to believe in if it doesn't correspond to a scenario.
::: I'll keep holding that the optimal tuning should take account of harmonic significance and frequency of use. First, I disagree that this disregards the tuning fidelity required for more complex intervals. It just trades that against the other concerns, which are just as pressing if not more so. Second, by frequency I do imply probability, cuz frequency is the expected value of use/unuse of the interval. If you're only gonna use a certain complex interval like once in a hundred chords, then giving it a high weight is a waste of optimization resources. So it's not that I either care or don't care about whether it concords. More like I care, but the amount of care is in a way proportional to its harmonic significance and frequency of use.
::: 53edo is clearly undertempered cuz it trades simple 7-odd-limit concords in favor of complex 25-odd-limit wolves. It might score better in wilson metrics than tenney, cuz in wilson 9 is simpler than 7, but 25 is still more complex than both 7 and 9, in addition to the fact, recognized by euclidean metrics, that trading 25 for 7 in marvel is a more efficient use of optimization resources, in the same way as trading 3 for 5 is in meantone. Ftr, here's the BE-optimal GPV sequence for septimal marvel:
::: 10, 12, 19, 31, 41, 53, 72, 125, 197.
::: For undecimal marvel:
::: 10, 12e, 19, 22, 31, 41, 53, 72, 125, 166, 197e, 269ce, 435cce.
::: [[User:FloraC|FloraC]] ([[User talk:FloraC|talk]]) 11:01, 21 January 2025 (UTC)
:::: There is a reason that counting the times is way better than building in interpolation in the way you suggest: the tuning fidelity issue. If I build in a weighting curve, then I am algorithmically giving permission for the most complex harmonies to be the most off, which I've argued is objectively the wrong choice if you want to target those harmonies. By contrast, by not having a falloff, I am ensuring that if those harmonies are too off, the rest of the tuning had better be worth the sacrifice in tuning fidelity where it's most needed. This allows systems that are biased to simpler harmonies to appear. I've already expressed that I'm not sure whether weighting proportional to the square root of the odd-limit is too forgiving for complex harmonies, but some strange edos can start appearing if you bias too strongly towards large odd-limits. For example, weighting proportional to the odd-limit implies [[67edo]] is a lower-absolute-error system for the [[17-odd-limit]] than [[58edo]], which I suspect happens because the sharp 11 and 13 of 58edo are punished more strongly as a result, and I don't find that convincing. --[[User:Godtone|Godtone]] ([[User talk:Godtone|talk]]) 15:59, 21 January 2025 (UTC)
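One way to experiment with these weightings is a sketch like the following. Everything here is my own simplification: intervals are approximated directly to the nearest step rather than via the patent val, so results may differ from the tool discussed above.

```python
from math import gcd, log2

def odd_limit_ratios(k):
    """(num, den, complexity) for each octave-reduced ratio a/b with
    odd a > b, a <= k, gcd(a, b) == 1; complexity is the larger odd."""
    out = []
    for a in range(1, k + 1, 2):
        for b in range(1, a, 2):
            if gcd(a, b) == 1:
                num, den = a, b
                while num / den >= 2:  # octave-reduce into [1, 2)
                    den *= 2
                out.append((num, den, a))
    return out

def weighted_error(edo, k, exponent):
    """Mean of (e * c**(exponent/2))**2 over the k-odd-limit, where e is
    the direct nearest-step approximation error in cents (a simplification,
    not the patent-val mapping)."""
    ratios = odd_limit_ratios(k)
    total = 0.0
    for num, den, c in ratios:
        just = 1200 * log2(num / den)
        e = round(edo * log2(num / den)) * 1200 / edo - just
        total += (e * c ** (exponent / 2)) ** 2
    return total / len(ratios)

# Compare two systems under two weighting regimes, e.g.:
print(weighted_error(58, 17, 1), weighted_error(67, 17, 1))
print(weighted_error(58, 17, 2), weighted_error(67, 17, 2))
```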
::::: You've already given permission for whatever intervals beyond the limit you specify to be the most off. By specifying the 25-odd-limit, you've already allowed intervals of 27 to be the most off, which is "objectively the wrong choice" by your logic. So I don't get what you're insisting on. [[User:FloraC|FloraC]] ([[User talk:FloraC|talk]]) 10:10, 2 February 2025 (UTC)