Talk:Marvel



::::: I understand and can sympathize with your frustration in the XA Discord. Thanks for the detailed explanation. [[User:FloraC|FloraC]] ([[User talk:FloraC|talk]]) 16:23, 18 January 2025 (UTC)
:::::: "if each individual case were impractical, they'd prolly not magically combine to something practical" to be honest I don't see why not: if the same EDO appears in the sequence for multiple tonality diamonds you're interested in (both simple and complex), how is it flawed to pick that EDO as your tuning of interest for trying out and further investigation? (This is especially true if it keeps appearing as you investigate different punishment strategies, because that means a more diverse sample of tuning philosophies agrees on the same tuning.)
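:::::: ''(To make that selection idea concrete, a minimal Python sketch follows; the EDO lists in it are purely illustrative placeholders, not the output of any actual script.)''
<syntaxhighlight lang="python">
# A sketch of the "keep what keeps appearing" selection idea: given one list of
# recommended EDOs per tonality diamond and/or punishment strategy (the lists
# below are illustrative placeholders), rank EDOs by how many lists contain them.
from collections import Counter

def recurring_edos(edo_lists):
    counts = Counter()
    for edos in edo_lists:
        counts.update(set(edos))  # set() so that one list can't vote twice
    return counts.most_common()

example_lists = [
    [12, 19, 22, 31, 41, 53],   # e.g. sequence for a simple diamond
    [12, 22, 31, 41, 72],       # e.g. sequence for a more complex diamond
    [19, 31, 41, 53, 72],       # e.g. same diamond, different punishment strategy
]
print(recurring_edos(example_lists))  # EDOs appearing in every list rank first
</syntaxhighlight>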
:::::: "its identity is evident if you approach it by two steps of 5/4" to some degree yes, but IMO you shouldn't always need to "explain" to the listener the stacking an interval is made of. If I want to use 32/25 in some chord, I don't necessarily want the chord to also have 5/4 present just to make the 32/25 interpretation clearer. (For this purpose it seems to me that the tuning should at least not be closer to 9/7, which is very temperable dyadically by comparison, so that even a very undertempered ~32/25 can work as ~9/7.)
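:::::: ''(For reference, a small sketch of the arithmetic behind how close these intervals sit: 32/25 and 9/7 are under 8 cents apart.)''
<syntaxhighlight lang="python">
# Cents values of the ratios mentioned above, showing how close 32/25 and 9/7 sit.
from math import log2

def cents(ratio):
    return 1200 * log2(ratio)

print(round(cents(5/4), 2))    # ~386.31 c
print(round(cents(32/25), 2))  # ~427.37 c  (an octave minus two 5/4s)
print(round(cents(9/7), 2))    # ~435.08 c  (about 7.7 c above 32/25)
</syntaxhighlight>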
:::::: "I have no idea how you got to complexity squared or even the fourth" I think you misunderstood. Multiplying the ''squared error'' by the ''fourth power'' of the odd-limit complexity is ''mathematically equivalent'' to taking the Mean Square Error where the "error" is the absolute or relative error ''times'' the square of the odd-limit complexity. That is, if e is our (absolute or relative) error and c is our odd-limit complexity, then we are giving a "punishment" for each interval of e<sup>2</sup> * c<sup>4</sup> = (e * c<sup>2</sup>)<sup>2</sup>, which means we are doing MSE on e * c<sup>2</sup>. Does that make sense? (I may change this behaviour in the future to avoid confusion, so that the weighting is applied directly to the error rather than after the error function.) As for ''why use the square of the odd-limit complexity'': because you want "dyadic" tuning fidelity, which is the harshest kind, meaning that an interval should be recognisable ''in isolation''; in other words, because the number of intervals in the k-odd-limit grows approximately with k<sup>2</sup>.'''*''' As I found by ear, this means the required tuning fidelity goes up very quickly as the odd-limit grows, so already by the time you get to 19/16 there is, for me, only about 2 cents of mistuning allowed, simply due to the lack of harmonic context.
::::::: '''*''' Though I'm fairly sure of this claim intuitively, I didn't know how to prove it (I asked someone else about it), so let me know if you can find a proper bound on the number of intervals in the k-odd-limit. The curve should be similar for the k-integer-limit as well; the general idea is that the ranges of the numerator and denominator both grow linearly, so the number of possibilities grows with the square (see the counting sketch below).
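::::::: ''(A rough numeric check of that growth claim, assuming the k-odd-limit means all octave-reduced ratios n/d with n and d odd and at most k; a sketch, not a proof.)''
<syntaxhighlight lang="python">
# Count the distinct octave-reduced ratios n/d with n, d odd and <= k
# (the k-odd-limit tonality diamond) and compare the count to k^2.
from math import gcd

def k_odd_limit(k):
    seen = set()
    for n in range(1, k + 1, 2):
        for d in range(1, k + 1, 2):
            g = gcd(n, d)
            n2, d2 = n // g, d // g
            # octave-reduce into [1/1, 2/1)
            while n2 / d2 >= 2:
                d2 *= 2
            while n2 / d2 < 1:
                n2 *= 2
            seen.add((n2, d2))
    return seen

for k in (9, 15, 21, 27, 81):
    count = len(k_odd_limit(k))
    print(k, count, round(count / k**2, 3))  # the last column stays roughly constant
</syntaxhighlight>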
:::::: Clearly, though, considering intervals in isolation rather than as suggested through the harmonic context of chords gives the wrong answer. I'm not sure how one would mathematically quantify the harmonic ambiguity of a contextualized interval, but it's clear that context reduces the required tuning fidelity, so that the dyadic tuning fidelity e<sup>2</sup> * c<sup>4</sup> = (e * c<sup>2</sup>)<sup>2</sup> (weighting the error-before-squaring by the square of the odd-limit) is inappropriate. My best guess has been to weight the error-before-squaring by the square root of the odd-limit complexity, e<sup>2</sup> * c = (e * c<sup>1/2</sup>)<sup>2</sup>, which is actually quite forgiving in the tuning of more complex intervals; it hypothesizes that the context is almost, but not fully, capable of making the tuning fidelity the same regardless of complexity. (This is the most forgiving sensitivity I can allow, so really the question is whether it's too insensitive, not whether it's too sensitive.) It doesn't seem like the answer could be less sensitive than this, but it's admittedly a strange answer that I'd like more justification for (though I gave some in the above paragraphs already and you gave more). Intuitively, context should allow the sensitivity to go from e<sup>2</sup> * c<sup>4</sup> = (e * c<sup>2</sup>)<sup>2</sup> (which AFAIK is the precise dyadic sensitivity) down to e<sup>2</sup> * c<sup>2</sup> = (e * c)<sup>2</sup> (weighting the error-before-squaring proportional to the odd-limit); the reason I didn't pick the latter (which is more elegant) is that there still seemed to be a dominance phenomenon.
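:::::: ''(A sketch comparing the three weightings just discussed, taking the error to be the absolute error in cents of an interval's nearest approximation in a given EDO; the choice of 31edo and the 9-odd-limit is purely illustrative, and this is not the actual script being discussed.)''
<syntaxhighlight lang="python">
# Punish each interval of the k-odd-limit by (error * c**exponent)**2, where
# error is the absolute error in cents of the nearest EDO step and c is the
# odd-limit complexity of the interval, then report the RMS of the punishments.
from math import gcd, log2

def odd_part(n):
    while n % 2 == 0:
        n //= 2
    return n

def k_odd_limit(k):
    # same enumeration as in the counting sketch above
    seen = set()
    for n in range(1, k + 1, 2):
        for d in range(1, k + 1, 2):
            g = gcd(n, d)
            n2, d2 = n // g, d // g
            while n2 / d2 >= 2:
                d2 *= 2
            while n2 / d2 < 1:
                n2 *= 2
            seen.add((n2, d2))
    return seen

def weighted_rms(edo, k, exponent):
    step = 1200 / edo
    punishments = []
    for n, d in k_odd_limit(k):
        target = 1200 * log2(n / d)
        error = abs(target - round(target / step) * step)  # nearest step of the EDO
        c = max(odd_part(n), odd_part(d))                  # odd-limit complexity
        punishments.append((error * c ** exponent) ** 2)
    return (sum(punishments) / len(punishments)) ** 0.5    # MSE -> RMS

# exponent 2 = dyadic (c^4 on the squared error), 1 = proportional, 0.5 = square root
for exponent in (2, 1, 0.5):
    print(exponent, round(weighted_rms(31, 9, exponent), 2))
</syntaxhighlight>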
:::::: BTW, it's easy to use it unweighted by just feeding <code>weighting=1</code> as a parameter. I just feel that the results then bias towards simple intervals too much, so it starts to feel like: what's the point of including the more complex stuff if it isn't going to be tuned well enough?
:::::: --[[User:Godtone|Godtone]] ([[User talk:Godtone|talk]]) 23:14, 18 January 2025 (UTC)