Talk:Marvel: Difference between revisions

Godtone (talk | contribs)
section & reply
::::: I understand and can sympathize with your frustration in the XA Discord. Thanks for the detailed explanation. [[User:FloraC|FloraC]] ([[User talk:FloraC|talk]]) 16:23, 18 January 2025 (UTC)


:::::: "if each individual case were impractical, they'd prolly not magically combine to something practical" To be honest, I don't see why not: if the same EDO appears in the sequence for multiple tonality diamonds you're interested in (both simple and complex), how is it flawed to pick that EDO as your tuning of interest for trying out or further investigation? (This is especially true if it keeps appearing when you investigate different punishment strategies, because that ensures a more diverse sample of tuning philosophies agreeing on the same tuning.)


:::::: "its identity is evident if you approach it by two steps of 5/4" To some degree, yes, but IMO you shouldn't always need to "explain" the stacking an interval is made of to a listener. If I want to use 32/25 in some chord, I don't necessarily want the chord to also contain 5/4 just to make the 32/25 interpretation clearer. (For this purpose, it seems to me that the tuning should at least not be closer to 9/7, which is very temperable dyadically by comparison, so that even a very undertempered ~32/25 can work as ~9/7.)


:::::: "I have no idea how you got to complexity squared or even the fourth" I think you misunderstood. Multiplying the ''squared error'' by the ''fourth power'' of the odd-limit complexity is ''mathematically equivalent'' to doing the mean squared error where the "error" is the absolute or relative error ''times'' the square of the odd-limit complexity. That is, if e is our (absolute or relative) error and c is our odd-limit complexity, then the "punishment" we give each interval is e<sup>2</sup> * c<sup>4</sup> = (e * c<sup>2</sup>)<sup>2</sup>, which means we are doing MSE on e * c<sup>2</sup>. Does that make sense? (I may change this behaviour in the future to avoid confusion, so that the weighting is applied directly to the error rather than after the error function.) As for ''why use the square of the odd-limit complexity'': the answer is that you want "dyadic" tuning fidelity, the harshest kind, meaning an interval should be recognisable ''in isolation''. In other words, it's because the number of intervals in the k-odd-limit grows approximately with k<sup>2</sup>.'''*''' As I found by ear, the required tuning fidelity goes up very quickly as the odd-limit grows, so that already by the time you get to 19/16, for me there is only about 2 cents of mistuning allowed, simply due to lack of harmonic context.
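The algebra here is easy to check numerically. A minimal sketch of the e<sup>2</sup> * c<sup>4</sup> = (e * c<sup>2</sup>)<sup>2</sup> equivalence (the interval list, the error values, and the helper name are illustrative, not taken from any actual script in the discussion):

```python
import math

def odd_limit(n, d):
    """Odd-limit complexity of n/d: the larger of the odd parts of n and d."""
    while n % 2 == 0:
        n //= 2
    while d % 2 == 0:
        d //= 2
    return max(n, d)

# Hypothetical target intervals with made-up tuning errors in cents.
intervals = [(3, 2), (5, 4), (7, 4), (9, 8)]
errors = [1.2, -3.0, 2.5, 0.7]

# Punishment per interval: e^2 * c^4, which equals (e * c^2)^2, so
# minimising the mean punishment is just MSE on the weighted error e * c^2.
punishments = [e**2 * odd_limit(n, d)**4 for (n, d), e in zip(intervals, errors)]
weighted_mse = sum((e * odd_limit(n, d)**2)**2
                   for (n, d), e in zip(intervals, errors)) / len(errors)

assert math.isclose(sum(punishments) / len(punishments), weighted_mse)
```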


::::::: '''*''' I'm fairly sure of this claim intuitively, though I didn't know how to prove it and asked someone else about it; let me know if you can find a proper bound on the number of intervals in the k-odd-limit. The curve should be similar for the k-integer-limit as well; the general idea is that the ranges of the numerator and denominator both grow linearly, so the number of possibilities grows with the square.
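For what it's worth, the roughly quadratic growth is easy to observe by brute force. A sketch counting the distinct octave-reduced intervals of the k-odd-limit tonality diamond (the function name is mine):

```python
from math import gcd

def diamond_size(k):
    """Number of distinct octave-reduced ratios a/b with odd a, b <= k."""
    seen = set()
    for a in range(1, k + 1, 2):
        for b in range(1, k + 1, 2):
            g = gcd(a, b)
            n, d = a // g, b // g
            # Octave-reduce so that 1 <= n/d < 2.
            while n < d:
                n *= 2
            while n >= 2 * d:
                d *= 2
            seen.add((n, d))
    return len(seen)

for k in (9, 19, 39, 79):
    print(k, diamond_size(k))  # counts grow roughly with k^2
```

For example, `diamond_size(9)` gives 19, the familiar size of the 9-odd-limit diamond (counting 1/1).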


:::::: Clearly, though, considering intervals in isolation rather than through the harmonic context of chords gives the wrong answer. I'm not sure how one would mathematically quantify the harmonic ambiguity of a contextualized interval, but it's clear that context relaxes the required tuning fidelity, so that the dyadic punishment e<sup>2</sup> * c<sup>4</sup> = (e * c<sup>2</sup>)<sup>2</sup> (square of the odd-limit) is inappropriate. My best guess has been to weight the error-before-squaring by the square root of the odd-limit complexity, e<sup>2</sup> * c = (e * c<sup>1/2</sup>)<sup>2</sup>, which is actually quite forgiving in the tuning of more complex intervals. The hypothesis is that context is almost, but not fully, capable of making the tuning fidelity the same regardless of complexity (this is the most forgiving sensitivity I can allow, so really the question is whether it's too insensitive, not whether it's too sensitive). It doesn't seem like the answer could be less sensitive than this, but it's admittedly a strange answer I'd like more justification for (though I gave some in the paragraphs above and you gave more). Intuitively, context should allow the sensitivity to go from e<sup>2</sup> * c<sup>4</sup> = (e * c<sup>2</sup>)<sup>2</sup> (which AFAIK is the precise dyadic precision) down to e<sup>2</sup> * c<sup>2</sup> = (e * c)<sup>2</sup> (weighting the error-before-squaring proportionally to the odd-limit); the reason I didn't pick the latter (which is more elegant) is that there still seemed to be a dominance phenomenon.
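To make the difference between these sensitivities concrete, here is a hypothetical comparison of how much more accurately a complex interval (odd limit 25, e.g. 32/25) must be tuned than a simple one (odd limit 3, e.g. 3/2) to incur equal punishment under each exponent (the framing is mine, as an illustration, not from the original discussion):

```python
# Punishment is (e * c**p)**2 for weighting exponent p, so for equal
# punishment, e_complex / e_simple = (c_simple / c_complex)**p.
c_simple, c_complex = 3, 25

for p, name in [(2.0, "dyadic (e*c^2)"),
                (1.0, "proportional (e*c)"),
                (0.5, "contextual (e*c^0.5)")]:
    factor = (c_complex / c_simple) ** p
    print(f"{name}: complex interval must be tuned {factor:.1f}x more accurately")
```

So the dyadic scheme demands roughly 69x better tuning of the 25-odd-limit interval, while the square-root scheme demands only about 2.9x.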


:::::: BTW, it's easy to use unweighted error by just feeding <code>weighting=1</code> as a parameter. I just feel that the results are biased toward simple intervals so much that it starts to feel like there's no point in including the more complex stuff if it isn't going to be tuned well enough.


:::::: --[[User:Godtone|Godtone]] ([[User talk:Godtone|talk]]) 23:14, 18 January 2025 (UTC)


::::::: You said: "if the same edo appears in the sequence for multiple tonality diamonds you're interested in (both simple and complex), how is it flawed to pick that edo as your tuning of interest for trying out/further investigation?" My answer is: because how many times an edo appears in those sequences is an arbitrary metric you made up. Each sequence represents the scenario where one is interested in, and only in, a specific odd limit, since you have weight for those intervals and only those (because you use complexity weighting, this can't be explained by discarding negligible weights, i.e. gating). Counting appearances across multiple sequences would imply that I'm interested in a specific small odd limit ''and'' also in another, larger odd limit, which contradicts the premise that I should be uninterested in it. So this is a self-contradictory metric. It doesn't represent or translate to a realistic scenario.


::::::: Note that this isn't the same as being unsure whether I'm interested in something, having only some probability of being interested in it, or being interested but using it at a low frequency. If I'm unsure, I might not want to use the limit as a hard cutoff; more likely I'll configure the weighting curve to reflect the fuzziness of the thought.


::::::: The idea that the interval should be tuned closer to 32/25 than 9/7 sounds insane to me. To be clear, it is a valid tuning and can be useful to those who specifically target intervals of 25 for whatever reason, but that's a personal artistic choice; to talk about it as an optimal tuning is wild. IMO complex intervals like 32/25 should always get some harmonic support. Even if you use it alone, consider how much more significant 9/7 is: it appears everywhere in basic-level septimal harmony. In comparison, the only utility of 32/25 I can think of is in 5-limit JI purist approaches, or maybe primodalist approaches, in which case it gets plenty of harmonic support anyway.


::::::: Okay, maybe your dyadic tuning fidelity and/or contextualized tuning fidelity make some sense. Because I also take into account harmonic significance and frequency of use, I still have plenty of cases where I'd turn your complexity weighting into a simplicity weighting. [[User:FloraC|FloraC]] ([[User talk:FloraC|talk]]) 12:34, 19 January 2025 (UTC)
 
:: "Each sequence represents the scenario where one is interested and only interested in a specific odd limit" is possibly a bit of a misunderstanding. This reasoning would make more sense if the sequence were still subject to the "largest odds harmonic complexity based dominance" phenomenon/issue I spoke of, but I fixed that using the very forgiving complexity weighting: if we look at the (2k+1)-odd-limit, then really we are ''also'' simultaneously looking at the (2k-1)-odd-limit, as well as every subset, because the odd 2k+1 has not "dominated"/warped the results in a way that discards previous harmonies. Rather, what happens is that a ''little'' more error is allowed on smaller odds so that the larger odds have even a chance of being kept in tune (on average), and because everything is by patent val, this error is felt even less, since chords are constructed internally consistently. If anything, you can think of the square-root-of-odd-limit complexity weighting as a counterweight to small odds' natural dominance through their presence in composite odds, which strengthens their tuning requirements indirectly (so that the tuning fidelity of 15 improves the fidelity of 3 and 5, for example). An unweighted weighting would therefore bias unambiguously and unfairly in favour of small odds; if that's your priority, that's fine, I just don't see the point of targeting more complex harmonies at all in that case.
 
:: "Counting the time of appearance thru multiple sequences would imply that I'm interested in a specific small odd limit, ''and'' I'm also interested in another larger odd limit, which contradicts the premise that I should be uninterested in it. So this is a self-contradicting/schizophrenic metric. It doesn't represent or translate to a realistic scenario." I'm afraid this doesn't make any sense to me at all; could you explain? As I see it, if I am looking at multiple odd-limits, then an EDO appearing in all or most of them shows that, regardless of which odd-limit I care more about, that EDO is a good choice for its size. Wanting it to appear in many odd-limits is an indirect way of achieving the "complexity falloff" rather than a hard cutoff; it's just that the human is aggregating the results, which is more reliable than an algorithm trying to do the same, because the human can judge based on more information and considerations that the algorithm might not account for, such as (importantly) personal taste in tuning.
 
:: I wanted to elaborate a little on something I said: "It doesn't seem like the answer could be less sensitive than this, but it's admittedly a strange answer I'd like more justification for". The weird thing about using the square root of the odd-limit was that when I calibrated on the most contentious interval whose tempering I accept as psychoacoustically convincing (the ~3/2 and ~4/3 in 7edo), the implied bounds for all the odd-limits seemed to match my experience with a suspicious degree of accuracy, recognising, for example, that [[80edo]]'s very off [[~]][[15/13]] is tempered with basically exactly as much damage as I can accept for harmonically-contextualised purposes.
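The 80edo figure is straightforward to verify; a quick sketch mapping 15/13 through the patent val, i.e. the nearest-step mapping of each integer (the helper name is mine):

```python
import math

def patent_error(edo, n, d):
    """Error in cents of n/d mapped through the patent val of `edo`."""
    steps = round(edo * math.log2(n)) - round(edo * math.log2(d))
    return steps * 1200 / edo - 1200 * math.log2(n / d)

# 80edo maps 15/13 to 17 steps (255 cents) against a just value of
# ~247.7 cents, i.e. roughly 7.3 cents of damage.
print(round(patent_error(80, 15, 13), 2))
```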
 
:: "Becuz I also take account of harmonic significance and frequency of use" I think this is maybe where the disagreement arises. I fundamentally don't agree that you can weight this way: you can't say "because I use 3/2 often, 3/2 is the most important to have low error on", because that disregards the tuning fidelity required for more complex intervals. It doesn't matter how infrequently you use something; if you do use it, then having it be too damaged will have consequences for its sound (specifically its capability of concordance), so you either care about whether it concords or you don't. Plus, if you wanted to adopt that philosophy, then ironically [[53edo]] is ''definitely'' optimal for marvel, because it cares first and foremost about the 2-limit, then the 3-limit, then the 5-limit, then the 7-limit, then the 11-limit, ''strictly'' in that order, which is exactly the frequency falloff you are advocating: it tunes primes better the smaller they are, so by your own reasoning it should be the best tuning. Discarding it due to the uneven tuning of the full 9-odd-limit is indirectly an admission of complexity weighting, at which point you can't avoid the fact that more complex intervals need to be tuned better to concord. How much better is up for debate ofc, but it's definitely not unweighted, for the reasons I gave.
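The claim about 53edo's strict prime ordering can be checked directly; a sketch computing the patent-val (nearest-step) error of each prime (the helper name is mine):

```python
import math

def prime_error(edo, p):
    """Signed error in cents of prime p under the patent val of `edo`."""
    steps = round(edo * math.log2(p))
    return steps * 1200 / edo - 1200 * math.log2(p)

# In 53edo the absolute error grows strictly with the prime,
# i.e. smaller primes are tuned better.
for p in (3, 5, 7, 11):
    print(p, round(prime_error(53, p), 2))
```

The absolute errors come out roughly 0.07, 1.4, 4.8, and 7.9 cents for primes 3, 5, 7, and 11 respectively.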
 
:: --[[User:Godtone|Godtone]] ([[User talk:Godtone|talk]]) 17:32, 20 January 2025 (UTC)