Talk:Marvel

From Xenharmonic Wiki

Subheadings for scales

I suggest this although I realize that an extensive table of contents has a certain repellent effect on some readers. What do you think? --Xenwolf (talk) 09:42, 1 June 2021 (UTC)

They don't look that different to me on this specific page. FloraC (talk) 13:43, 1 June 2021 (UTC)
In that case I'd consider it a structural improvement. Thanks for stopping by to take a look. --Xenwolf (talk) 14:24, 1 June 2021 (UTC)

Challenge on optimality of 53edo for FloraC

53edo is consistent in the 7-limited 105-odd-limit except for two interval pairs (50/49 and 75/49, together with their octave complements). Can any other edo tuning of marvel even come close to so faithful a representation of the 7-limit lattice? 72edo does better, with only one inconsistent interval pair in the 125-odd-limit (128/125 and its octave complement, unsurprisingly), but it's optimized for different things than just pure marvel. Similarly, 41edo does even better in terms of consistency, but it's clearly more overtempered than 72edo. I also don't believe that the inconsistency of 50/49 and 75/49 is particularly important, except for the damage on 7/5 and 10/7, which as far as I can tell is the only real flaw of 53edo's marvel.
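For anyone who wants to cross-check this claim without the custom library used below, here is a minimal self-contained sketch in plain Python (all helper names here are my own, not the wiki code's): it builds the 7-limited 105-odd-limit and lists every octave-reduced interval whose patent-val mapping in an edo disagrees with the edo's best direct approximation.

```python
from fractions import Fraction
from math import log2

PRIMES = (2, 3, 5, 7)

def patent_val(edo):
    """Nearest-step mapping of each prime 2, 3, 5, 7 in this edo."""
    return [round(edo * log2(p)) for p in PRIMES]

def mapped_steps(val, iv):
    """Steps the val assigns to a 7-smooth ratio iv, via its factorization."""
    steps = 0
    for n, sign in ((iv.numerator, 1), (iv.denominator, -1)):
        for p, v in zip(PRIMES, val):
            while n % p == 0:
                n //= p
                steps += sign * v
    return steps

def octave_reduce(iv):
    while iv >= 2:
        iv /= 2
    while iv < 1:
        iv *= 2
    return iv

def inconsistent_ivs(edo, odds):
    """Octave-reduced odd-limit intervals whose patent-val mapping
    disagrees with the best direct approximation in the edo."""
    val = patent_val(edo)
    bad = set()
    for a in odds:
        for b in odds:
            iv = octave_reduce(Fraction(a, b))
            if iv != 1 and mapped_steps(val, iv) != round(edo * log2(iv)):
                bad.add(iv)
    return sorted(bad)

def seven_smooth(n):
    for p in (3, 5, 7):
        while n % p == 0:
            n //= p
    return n == 1

# the 7-limited 105-odd-limit: odd 7-smooth numbers up to 105
odds = [k for k in range(1, 106, 2) if seven_smooth(k)]
print(inconsistent_ivs(53, odds))
# → [Fraction(50, 49), Fraction(98, 75), Fraction(75, 49), Fraction(49, 25)]
```

Running it for 53 reproduces exactly the four intervals named above: 50/49 and 75/49 plus their octave complements 49/25 and 98/75.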

Here is how every edo up to and including 240 that tempers out 225/224 with a consistent 9-odd-limit performs in the 105-odd-limit, which seems to be the largest 7-limited odd-limit that is reasonable to consider: 125 is obviously gonna cause inconsistencies in most tunings, as 5 is the most tempered prime in marvel cuz of 32/25 ~ 9/7 and 7/6 ~ 75/64, among others.

>>> for edo in range(1,241): # using https://en.xen.wiki/w/User:Godtone#My_Python_3_code
...   if inconsistent_ivs_by_val(odd_lim(9),val(lim(7),ed(edo)))==[] and pval(S(15),ed(edo))==0:
...     print(edo,'EDO:',', '.join([ striv(x) for x in inconsistent_ivs_by_val( odd_lim(9,[],[15,21,25,27,35,45,49,63,75,81,105]), val(lim(7),ed(edo)) ) ])+'\n')
... 
12 EDO: 49/48, 49/45, 54/49, 81/70, 98/81, 60/49, 35/27, 64/49, 49/36, 72/49, 49/32, 54/35, 49/30, 81/49, 140/81, 49/27, 90/49, 96/49

19 EDO: 64/63, 49/48, 128/105, 49/40, 64/49, 49/32, 80/49, 105/64, 96/49, 63/32

22 EDO: 81/80, 50/49, 27/25, 25/21, 49/40, 100/81, 63/50, 81/64, 80/63, 98/75, 75/49, 63/40, 128/81, 100/63, 81/50, 80/49, 42/25, 50/27, 49/25, 160/81

29 EDO: 49/48, 36/35, 28/27, 25/24, 27/25, 49/45, 35/32, 54/49, 81/70, 75/64, 98/81, 128/105, 60/49, 100/81, 32/25, 35/27, 64/49, 49/36, 48/35, 112/81, 25/18, 36/25, 81/56, 35/24, 72/49, 49/32, 54/35, 25/16, 81/50, 49/30, 105/64, 81/49, 128/75, 140/81, 49/27, 64/35, 90/49, 50/27, 48/25, 27/14, 35/18, 96/49

31 EDO: 81/80, 81/70, 100/81, 81/64, 112/81, 81/56, 128/81, 81/50, 140/81, 160/81

41 EDO: 

50 EDO: 81/80, 64/63, 50/49, 21/20, 27/25, 81/70, 32/27, 128/105, 49/40, 100/81, 63/50, 81/64, 80/63, 64/49, 21/16, 27/20, 112/81, 45/32, 64/45, 81/56, 40/27, 32/21, 49/32, 63/40, 128/81, 100/63, 81/50, 80/49, 105/64, 27/16, 140/81, 50/27, 40/21, 49/25, 63/32, 160/81

53 EDO: 50/49, 98/75, 75/49, 49/25

60 EDO: 64/63, 49/48, 36/35, 25/24, 35/32, 54/49, 75/64, 128/105, 49/40, 32/25, 64/49, 21/16, 49/36, 48/35, 45/32, 64/45, 35/24, 72/49, 32/21, 49/32, 25/16, 80/49, 105/64, 128/75, 49/27, 64/35, 48/25, 35/18, 96/49, 63/32

72 EDO: 

82 EDO: 81/80, 36/35, 25/24, 27/25, 35/32, 54/49, 28/25, 81/70, 75/64, 25/21, 98/81, 128/105, 100/81, 63/50, 32/25, 35/27, 75/56, 48/35, 25/18, 36/25, 35/24, 112/75, 54/35, 25/16, 100/63, 81/50, 105/64, 81/49, 42/25, 128/75, 140/81, 25/14, 49/27, 64/35, 50/27, 48/25, 35/18, 160/81

84 EDO: 81/80, 49/48, 28/27, 49/45, 54/49, 81/70, 98/81, 60/49, 81/64, 35/27, 98/75, 49/36, 112/81, 81/56, 72/49, 75/49, 54/35, 128/81, 49/30, 81/49, 140/81, 49/27, 90/49, 27/14, 96/49, 160/81

91 EDO: 81/80, 64/63, 49/48, 16/15, 35/32, 75/64, 32/27, 128/105, 49/40, 81/64, 80/63, 32/25, 64/49, 21/16, 48/35, 45/32, 64/45, 35/24, 32/21, 49/32, 25/16, 63/40, 128/81, 80/49, 105/64, 27/16, 128/75, 64/35, 15/8, 96/49, 63/32, 160/81

94 EDO: 50/49, 25/24, 27/25, 28/25, 75/64, 25/21, 100/81, 63/50, 32/25, 98/75, 75/56, 25/18, 36/25, 112/75, 75/49, 25/16, 100/63, 81/50, 42/25, 128/75, 25/14, 50/27, 48/25, 49/25

113 EDO: 25/24, 35/32, 28/25, 75/64, 128/105, 32/25, 75/56, 48/35, 25/18, 45/32, 64/45, 36/25, 35/24, 112/75, 25/16, 105/64, 128/75, 25/14, 64/35, 48/25

125 EDO: 50/49, 49/45, 54/49, 28/25, 75/64, 98/81, 60/49, 56/45, 98/75, 75/56, 112/81, 81/56, 112/75, 75/49, 45/28, 49/30, 81/49, 128/75, 25/14, 49/27, 90/49, 49/25

144 EDO: 81/80, 64/63, 16/15, 35/32, 75/64, 32/27, 128/105, 56/45, 81/64, 32/25, 64/49, 75/56, 112/81, 45/32, 64/45, 81/56, 112/75, 49/32, 25/16, 128/81, 45/28, 105/64, 27/16, 128/75, 64/35, 15/8, 63/32, 160/81

166 EDO: 50/49, 25/24, 16/15, 15/14, 27/25, 49/45, 28/25, 75/64, 25/21, 128/105, 60/49, 56/45, 63/50, 32/25, 98/75, 75/56, 25/18, 45/32, 64/45, 36/25, 112/75, 75/49, 25/16, 100/63, 45/28, 49/30, 105/64, 42/25, 128/75, 25/14, 90/49, 50/27, 28/15, 15/8, 48/25, 49/25

197 EDO: 81/80, 64/63, 50/49, 28/27, 25/24, 16/15, 15/14, 49/45, 54/49, 28/25, 75/64, 32/27, 25/21, 98/81, 128/105, 60/49, 56/45, 81/64, 32/25, 98/75, 75/56, 112/81, 45/32, 64/45, 81/56, 112/75, 75/49, 25/16, 128/81, 45/28, 49/30, 105/64, 81/49, 42/25, 27/16, 128/75, 25/14, 49/27, 90/49, 28/15, 15/8, 48/25, 27/14, 49/25, 63/32, 160/81

--Godtone (talk) 21:57, 15 January 2025 (UTC)

Because the consistency argument may not be sufficiently convincing, here is optimal_edo_sequences (minimising the mean square cent error on the tonality diamond, with cent errors weighted by the square root of the odd-limit of each interval, which is about the most forgiving tuning-fidelity weighting that seems reasonable) for edos tempering out S15 = 225/224:

>>> odds = [k for k in range(1,125,2) if len(fact_int(k))<=4]
>>> odds
[1, 3, 5, 7, 9, 15, 21, 25, 27, 35, 45, 49, 63, 75, 81, 105]
>>> for i in range(3,len(odds)): # 0th odd is 1, 1st odd is 3, 2nd odd is 5, 3rd odd is 7
...   print('7-limited '+str(odds[i])+'-odd-limit:',optimal_edo_sequence(odds[:i+1],[edo for edo in range(1,312) if pval(S(15),ed(edo))==0]))
... 
7-limited 7-odd-limit: [2, 9, 10, 12, 19, 22, 31, 72, 103, 175, 228]
7-limited 9-odd-limit: [2, 9, 10, 12, 19, 31, 41, 53, 72, 125, 166]
7-limited 15-odd-limit: [2, 9, 10, 12, 19, 22, 31, 41, 53, 72, 125, 166]
7-limited 21-odd-limit: [2, 9, 10, 12, 19, 22, 29, 31, 41, 72, 113, 125, 166, 197]
7-limited 25-odd-limit: [2, 9, 10, 12, 19, 31, 53, 72, 84, 156, 240]
7-limited 27-odd-limit: [2, 9, 10, 12, 19, 31, 41, 53, 72, 125, 197, 281]
7-limited 35-odd-limit: [2, 9, 10, 12, 19, 31, 41, 53, 72, 125]
7-limited 45-odd-limit: [2, 9, 10, 12, 19, 31, 41, 53, 72, 125]
7-limited 49-odd-limit: [2, 9, 10, 12, 19, 22, 31, 41, 72, 197]
7-limited 63-odd-limit: [2, 9, 10, 12, 19, 22, 31, 41, 72, 197]
7-limited 75-odd-limit: [2, 9, 10, 12, 19, 22, 31, 41, 72, 197]
7-limited 81-odd-limit: [2, 9, 10, 12, 19, 31, 41, 53, 72, 113, 166]
7-limited 105-odd-limit: [2, 9, 10, 12, 19, 22, 31, 41, 53, 72, 113, 125, 166]
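The metric described above can be sketched in plain Python. This is a reconstruction under stated assumptions, not the actual library code: squared cent error times each interval's odd-limit (equivalently, MSE of the error weighted by the square root of the odd-limit), averaged over the deduplicated octave-reduced diamond, using the patent-val mapping; all helper names are mine.

```python
from fractions import Fraction
from math import log2

PRIMES = (2, 3, 5, 7)

def patent_val(edo):
    return [round(edo * log2(p)) for p in PRIMES]

def exps(n):
    """Exponents of 2, 3, 5, 7 in a 7-smooth integer n."""
    out = []
    for p in PRIMES:
        e = 0
        while n % p == 0:
            n //= p
            e += 1
        out.append(e)
    return out

def odd_part(n):
    while n % 2 == 0:
        n //= 2
    return n

def diamond(odds):
    """Deduplicated, octave-reduced tonality diamond over the given odds."""
    ivs = set()
    for a in odds:
        for b in odds:
            iv = Fraction(a, b)
            while iv >= 2:
                iv /= 2
            while iv < 1:
                iv *= 2
            if iv != 1:
                ivs.add(iv)
    return sorted(ivs)

def weighted_mse(edo, ivs):
    """Mean of (cents error)^2 * odd-limit over the diamond, i.e. the MSE
    of the cent error weighted by sqrt of each interval's odd-limit."""
    val = patent_val(edo)
    total = 0.0
    for iv in ivs:
        steps = sum(v * (a - b) for v, a, b in
                    zip(val, exps(iv.numerator), exps(iv.denominator)))
        err = steps * 1200 / edo - 1200 * log2(iv)
        weight = max(odd_part(iv.numerator), odd_part(iv.denominator))
        total += err * err * weight
    return total / len(ivs)

ivs9 = diamond([1, 3, 5, 7, 9])  # the 9-odd-limit diamond
print(round(weighted_mse(12, ivs9), 1), round(weighted_mse(53, ivs9), 1))
```

An optimal_edo_sequence is then just the sequence of edos whose score beats every smaller edo's; details like the exact normalization may differ from the original code.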

Notice that we haven't put any constraints on tempering or consistency (other than tempering out 225/224 by patent val), and 53edo still shows up almost everywhere: going by the lists above, it is absent only from the 7-limited 7-, 21-, 49-, 63- and 75-odd-limits. 31edo shows up everywhere, simply by the absence of good-enough smaller competitors; much the same goes for 41edo, though it disappears from the 7-limited 25-odd-limit due to the overflat 25. 72edo is very good, as it appears everywhere. I also want to point out that 240edo is not only arguably too many notes for marvel but also appears only a single time! I really doubt that 240edo is optimal in any meaningful sense (except being a nice composite number of notes, I guess), because it already has 4 inconsistent interval pairs in the 9-odd-limit (which is almost half of all interval pairs of the 9-odd-limit). By contrast, 166edo and 197edo both appear 5 times each, so they appear to be well justified in terms of absolute error at least. 125edo is even better, and it interestingly disappears in practically the same places that 53edo disappears: in the 49- to 75- 7-limited odd-limits (though 53edo reappears in the 81-odd-limit, while 125edo appears one later, in the 105-odd-limit). 84edo appears only once, but it appears in a theoretically notable odd-limit for marvel: the 25-odd-limit, which is notable for being challenging because of marvel's inclination to temper 5 significantly flat (this is also where 41edo disappears). So IMO 84edo isn't so bad either: we know it satisfies the strict requirements, it appears in the optimal_edo_sequence for all full odd-limits 23 thru 51, and it appears in the strict_optimal_edo_sequence (identical except based on relative error instead of absolute error, so that the list is a strict subset) for a lot of those too, making it a natural tuning to consider for high-limit marvel.

--Godtone (talk) 22:52, 15 January 2025 (UTC)

A clarification on why the 7-limited 25-odd-limit is important to analyse for marvel: it is the smallest odd-limit which introduces a tempered equivalence within the interval set of the odd-limit other than the trivial ~16/15~15/14, and the 25-odd-limit demands significantly higher tuning fidelity than anything in the 9-odd-limit: the square root of 25/9 is 5/3, so the tuning fidelity required is almost double even if we use the very forgiving (nonstrict) "square root of the odd-limit" as a weighting for cent error, and almost triple otherwise, or even more if you are concerned with pure dyadic convincingness. Therefore, an optimized marvel tuning must clearly tune closer to 32/25 than to 9/7, because there is no good reason that the musically useful augmented fifth ~25/16 should be discarded as a target, given how naturally marvel extends a 5-limit lattice into the 7-limit, giving rise to things like the marveldene. There is also ~28/25~9/8 in the 7-limited 25-odd-limit; the usefulness of that seems more dubious, but it does show why ideally prime 3 should be tuned flat, hence systems like 72edo and 84edo.
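The conflations at issue here can be checked with elementary arithmetic: each pair below differs by exactly 225/224, so tempering out the marvel comma equates them.

```python
from fractions import Fraction

S15 = Fraction(225, 224)  # the marvel comma, 15^2 / (15^2 - 1)

# Pairs conflated by tempering out 225/224; each quotient is exactly 225/224.
pairs = [
    (Fraction(15, 14), Fraction(16, 15)),  # the "trivial" equivalence
    (Fraction(25, 16), Fraction(14, 9)),   # augmented fifth ~ subminor sixth
    (Fraction(9, 7),  Fraction(32, 25)),   # supermajor third ~ diminished fourth
    (Fraction(9, 8),  Fraction(28, 25)),
]
for big, small in pairs:
    assert big / small == S15
print("all pairs differ by exactly 225/224")
```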

(I think the one thing I do agree with, though, is that 16/15 is obviously undertempered in 53edo, but it seems to come about as a result of other considerations, so I'm not fully sure it can be evaded, because only a single 3 and 5 are involved. If you ask me, the smallest edo that is obviously "more optimized" (in terms of tuning) than 53edo for marvel is 125edo; its tuning profile looks just about exactly correct as far as I can tell. But that is over double the notes! I wouldn't dare add a single note more, because there are already a lot of inconsistencies in higher 7-limited odd-limits, as I've shown; I'll elaborate a little in the next (and final) post.)

--Godtone (talk) 23:42, 15 January 2025 (UTC)

So here's one important reason why I think probably only 125edo is a more optimized marvel tuning:

>>> [edo for edo in range(1,312) if pval(S(15),ed(edo))==0 and len(inconsistent_ivs_by_val(odd_lim(9,[],[15,21,25,27,35,45]),val(lim(7),ed(edo)))) <= 6]
[12, 19, 22, 31, 41, 53, 72, 84, 125] # all of these have no more than 3 inconsistent interval pairs in the 7-limited 45-odd-limit

(The list here is the same if we instead use "no more than 2 inconsistent interval pairs in the 7-limited 45-odd-limit".)
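For completeness, this filter can be reconstructed in plain Python without the custom library (helper names are mine; the semantics assumed are patent-val consistency, and allowing at most 6 inconsistent intervals corresponds to at most 3 inconsistent pairs, since an interval and its octave complement always fail together under pure octaves):

```python
from fractions import Fraction
from math import log2

PRIMES = (2, 3, 5, 7)

def patent_val(edo):
    return [round(edo * log2(p)) for p in PRIMES]

def mapped_steps(val, iv):
    """Steps the val assigns to a 7-smooth ratio, via its factorization."""
    steps = 0
    for n, sign in ((iv.numerator, 1), (iv.denominator, -1)):
        for p, v in zip(PRIMES, val):
            while n % p == 0:
                n //= p
                steps += sign * v
    return steps

def octave_reduce(iv):
    while iv >= 2:
        iv /= 2
    while iv < 1:
        iv *= 2
    return iv

def inconsistent_count(edo, odds):
    """Number of inconsistent octave-reduced intervals among the given odds."""
    val = patent_val(edo)
    bad = set()
    for a in odds:
        for b in odds:
            iv = octave_reduce(Fraction(a, b))
            if iv != 1 and mapped_steps(val, iv) != round(edo * log2(iv)):
                bad.add(iv)
    return len(bad)

odds45 = [1, 3, 5, 7, 9, 15, 21, 25, 27, 35, 45]  # 7-limited 45-odd-limit
marvel = [e for e in range(1, 312)
          if mapped_steps(patent_val(e), Fraction(225, 224)) == 0  # tempers 225/224
          and inconsistent_count(e, odds45) <= 6]                  # ≤ 3 pairs
print(marvel)
```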

Also just intuitively its tuning profile looks about as optimized as could be for marvel, and regardless, the step size is getting close to being the same size as 225/224 so that by the time we get to 130edo and 140edo we clearly wanna detemper it.
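For scale, the arithmetic behind the step-size remark is a one-liner: the marvel comma 225/224 is about 7.7 cents, and edo steps in this size range shrink toward it.

```python
from math import log2

comma = 1200 * log2(225 / 224)   # size of the marvel comma in cents
print(round(comma, 2))           # → 7.71
for edo in (125, 130, 140):
    # step size in cents, approaching the size of the comma itself
    print(edo, round(1200 / edo, 2))
```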

--Godtone (talk) 23:51, 15 January 2025 (UTC)

Consistency is irrelevant for multirank temps. I'm sure I've related this to you many times. If you care about efficiency, the most efficient way to use these temps is always to only take the notes closest to the tonic on the lattice. None of the larger edos is "too many notes" if you're using the same scale/block in the lattice as they only differ by intonation.
I strongly disagree about the metric you use to derive the "optimal edo sequence" and I've said this a few times too. I know you spent lots of time on it but first of all pls stop citing it as if it was some kind of objective metric. I've been skeptical about any claims involving taking average values from these diamonds. There's lots of open questions, like how you choose intervals from a tonality diamond. A tonality diamond has duplicate, unreduced intervals, e.g. for 3/2 there's 6/4, 9/6, and so on. Did you include these, reduced or unreduced? I guess you didn't, but why not? To be fair I don't think there's an answer to the best practice. The choices one makes here only represent how they see it and not others.
For those reasons, I think metrics based on tonality diamonds are more questionable than prime-based/all-interval optimization schemes, and you can still give complex intervals more weight in these schemes. It's just that all complexity weighting faces the paradox that a growing weight suddenly plunges to zero at the edge of the limit, plus that tuning the octave pure no longer makes sense cuz if tuning sensitivity grows with complexity, the octave is supposed to be the least sensitive to mistuning.
105-odd-limit makes no sense for marvel as 25/16 is already conflated with 14/9. If I may propose an odd limit to look at, I'd say 21. But again I certainly won't recommend the metric you used. The very point of these temps is to trade high-odd-limit low-prime-limit intervals for low-odd-limit high-prime-limit intervals, in this case high-odd-limit HC5 intervals for low-odd-limit HC7 intervals. Your metric favors intervals like 75/64 over 7/6 which is a huge failure in assessing optimal tunings. They are the same one note in the temp and when playing it, no one cares it's out of tune from 75/64 as much as from 7/6. Similar story for 25/16 vs 14/9, or 32/25 vs 9/7 (ofc 25/16 isn't totally discarded: it's normally targeted more than for example 75/64; I just strongly disagree it should take priority over 14/9).
As for the larger edos, they don't differ that much. I'm considering random readers here. It's best to offer them a number of choices without judging too much cuz they can judge by themselves. For 240edo specifically I think its tuning profile falls into the fuzzy optimal region as it has a flat 3, flat 5, and sharp 7, tuned pretty evenly out. Whether it's strictly optimal by certain criteria doesn't matter much.
FloraC (talk) 08:27, 16 January 2025 (UTC)
In response to the choice of odd-limits/tonality diamonds, the answer is very simple: we assume pure octaves, so that all octave-equivalents and octave-inversions of every interval in the tonality diamond are included implicitly with equal weighting. In other words, part of the rationale for using pure octaves is that it makes the use of odd-limits perfectly natural: there is no bias towards specific octave-equivalents or octave-inversions, so we can truly just think about the approximations of pairs of odd numbers, which vastly simplifies analysis (the usefulness of which should not be underestimated). This is also important because any nonzero tempering of the octave causes an infinite number of intervals of any odd-limit to become arbitrarily inconsistent, though in practice this usually doesn't matter. In other words, it's a far clearer direction to optimize for an explicit set of harmonies and then consider octave-tempering afterwards than to first allow octave-tempering and then be unnecessarily paralysed by choice. IMO that paralysis causes things like TE to look artificially natural as solutions: because the choice of targets becomes very arbitrary once you allow tempered octaves, that arbitrariness gets used to justify a disembodied method of optimization that is purely in terms of the basis. That's not to say such a scheme of optimization has no value, of course, but it's important to acknowledge the limitations, and especially the assumptions, of any optimization scheme.
In response to "all complexity weighting faces the paradox that a growing weight suddenly plunges to zero at the edge of the limit": this is not a paradox; it is completely expected, and it is exactly why I showed a large range of different odd-limits you could consider: to show that generally the same systems appear for basically all the 7-limited odd-limit targets one might practically want to consider.
In response to "[the 105-odd-limit] makes no sense for marvel as 25/16 is already conflated with 14/9. If I may propose an odd limit to look at, I'd say 21": it kinda feels like you ignored my paragraph explaining why the 25-odd-limit is the minimum odd-limit to consider for judging how truly optimized a marvel tuning really is.
In response to "pls stop citing it as if it was some kind of objective metric": it's highly customizable, and in my experience the lists it gives are pretty universally both interesting and convincing, so I encourage anyone interested in tuning optimization to explore with it (and to raise any concerns on my talk page), as I think most of the lists produced are better and more practically useful than the current standard on the wiki (depending on the exact settings used, ofc, but I tried to create the most neutral default I could think of); you are doing the same thing with the metric you wrote, no? And more importantly, I don't think it's reasonable to assume all readers are interested in the optimization philosophy you are proposing, where only the simplest intervals are cared about by the optimizer. I don't even think 25/16 is that complex; it's clearly musically relevant and corresponds to the fourth-simplest composite odd (third-simplest in the 5-limit), after 9/8 (which is very simple), 15/8 (which is not usually considered particularly complex) and 21/16 (which is similar in complexity). So if 21/16 is included, I don't see why 25/16 should be excluded (3*7 vs 5*5), and symmetrically, why aren't we excluding 21/16 if we're excluding 25/16? (I personally don't buy that 21/16 is more important than 25/16 musically, so I'm wondering if there is a strong argument for 21/16 being preferred to 25/16 other than being maybe slightly lower complexity.)
(And before/in case you ask: yes, I am aware that it only searches patent vals; you can edit the val used for an edo manually thru e.g. patent_vals[80] = val(lim(255),ed(80),'koprsuvBC'). I don't have the skill or energy to write code that auto-deduces the best val, and that process seems to be something that requires human judgement anyways IMO, so it seems better to leave it up to the user to "fix" the vals from patent in whatever way they deem reasonable/optimal. Plus, the best val depends on the set of target harmonies, meaning that the val would potentially have to be deduced every time the function is run, which I suspect would slow down the search a lot, as I don't think a proper solution to the problem of finding the right val would be light on computation (I don't think that simply taking the patent val thru a stretched octave is the right solution in general, even if it usually gives the right results). Plus, I'm admittedly suspicious of using warted vals by default because of two considerations: for small edos, warting the val causes implausible amounts of damage, and for large edos, one would hope that the edo is good enough that you don't need to use a non-patent val, except for warting maybe one prime; and if it really is that good, then you'd hope that it can afford to take the damage from using the patent mapping for that prime and still appear in whatever optimal_edo_sequence you are considering.)
I'm okay with leaving the page as is at the time of writing though; I just wanted to respond to your points.
--Godtone (talk) 17:51, 16 January 2025 (UTC)
Oh, and yes, I understand and accept the rationale of allowing arbitrarily large edo tunings; it just seems a bit arbitrary to me, is all, because at that point, why not hand-optimize rather than accept a specific edo's tuning? (For this reason I do think that consistency is an important factor to consider if you do want to use an edo tuning specifically.) --Godtone (talk) 17:56, 16 January 2025 (UTC)
You completely missed my specific question about using odd-limit tonality diamonds. I never questioned anything about octave equivalence. My question was about what intervals were chosen and how they were processed from the tonality diamonds, a set of intervals with duplicate/unreduced elements.
I suppose you were giving an answer to my argument that tuning the octave pure no longer made sense with complexity weighting. Since your reasons just defend pure octaves alone without addressing anything in the context of complexity weighting, my argument still holds. I mean, pure octaves and complexity weighting are at odds. All your reasons for pure octaves just add up to my point that complexity weighting isn't right.
A growing weight suddenly plunging to zero at the edge of the limit is a paradox becuz the limit a user cares about is supposed to be fuzzy, so that there's supposed to be a fairly smooth rolloff. I can't imagine that one cares about intervals of an odd number so much that they put them at the peak of the weighting curve, and then doesn't care at all when it comes to the very next odd number just becuz those intervals are more complex by a very tiny margin. So altho you measured a variety of odd limits to seemingly give results for a variety of use cases, if each individual case were impractical, they'd prolly not magically combine to something practical.
You said: "[25-odd-limit] is the smallest odd-limit which introduces a tempered equivalence within the interval set of the odd-limit other than the trivial [16/15~15/14]". First of all, I don't share the view that 15/14~16/15 is a trivial equivalence. It is an equivalence just like any other. The exception you made for it was your own artistic choice. Next, I said 25/16 was conflated with 14/9, and that no sane person would measure how off the tempered interval was from 25/16 as much as from 14/9. I said 25/16 wasn't completely discarded in any prime-optimized tuning. All of them were addressing the topic of tempered equivalence you raised.
I believe I've written plenty about why intervals of 25 are best tempered to something simpler. For one thing, they were considered wolf during the meantone era. 32/25 was considered a wolf version of 5/4, tho some used it artistically, possibly thanks to how close it was to none other than 9/7. I've also written plenty about how 21 is the septimal analog of 15, whereas it's more reasonable to group 25 with the very next odd number, 27, which is clearly wolf.
I feel whether your algorithm only searches patent vals is comparatively a minor issue. I felt like pointing it out but omitted it in my first reply. I mean, perhaps it's highly customizable, as you said, but did you ever customize how it chooses and processes intervals from the tonality diamonds? Did you ever customize the weighting curve? Also, when you said it was universally interesting and convincing, I wonder who you've tested it with. Why not get back to the XA Discord server? Everyone misses you (for a debate with you :d). FloraC (talk) 12:03, 17 January 2025 (UTC)
Re: the missed question: sorry, yes, you are right, I missed it. I falsely assumed you were talking about octave-equivalents and octave-inversions; there are no duplicates counted in the weighting. 3/2 is a single interval; 6/4, 9/6 and so on should be implicitly taken into account by how much you care about 3/2 alone, AKA as currently coded it should be a consideration of the weighting you use, because it would be unintuitive for odd_lim to spit out duplicate intervals in proportion to their occurrence. I think it's most intuitive (though I agree not the most correct) not to count duplicates, because IMO it's the most similar to how a human would analyse a tuning, but it gives me an idea worth trying. Speaking of the weighting...
Re: "A growing weight suddenly plunging to zero at the edge of the limit is a paradox becuz the limit a user cares about is supposed to be fuzzy, so that there's supposed to be a fairly smooth rolloff." I agree; that's why I highly recommend looking at multiple optimal_edo_sequences, and also why I made the weighting parameter/curve easy to customize for both optimal_edo_sequence and strict_optimal_edo_sequence. But the fact remains that the user is forced to make a choice about what kinds of harmony they care about in order to pick a system; this is not a flaw, it's an advantage, because you can use your own experience of what sort of things you like and care about to customize it. The default is to multiply the squared error by the odd-limit thru weighting=lambda x: iv_complexity(x), so that it's equivalent to taking the MSE of the (absolute (normal) or relative (strict)) error times the square root of the odd-limit, as I said. There are lots of reasons to prefer a harsher weighting thru iv_complexity(x)**2 instead, but I found the results somewhat dubious: usually, when one cares about larger odd-limits, one is considering more precise tuning systems and more harmonically forcing chords to justify those harmonies, so the tuning fidelity required is lessened a lot compared to the dyadic tuning fidelity. I also chose this weak weighting as a compromise towards the perspective that we shouldn't be damaging the simple intervals too much.
Maybe most importantly, this weighting allows the newly introduced odds not to "dominate" the previous considerations through required tuning fidelity, so that if an edo is good in some set of odds, having one bad odd won't necessarily cause it to be omitted from the sequence if the rest make up for it. By contrast, I found that MSE of error weighted proportional to the odd-limit (rather than its square root) caused a mild "complexity dominance" phenomenon, and with no weighting at all the results aren't related enough to the set of harmonies you chose, which is even worse than the case of "mild complexity dominance". In retrospect this is likely also an implicit way of accounting for what you mentioned. In other words, if we were looking at MSE of pure dyadic tuning fidelity, the weighting would actually be quite absurd, at iv_complexity(x)**4 (cuz it's multiplied after the squaring step; maybe I should change that, but it makes more sense to me to do it after, so that the way the algorithm works is easy to understand, cuz you won't necessarily be looking at the squared error, for example). This would lead to complete dominance, thus creating the "paradox" you raised, so clearly the harmonic context required for more complex harmonies results in a reduction of the tuning fidelity required, down to iv_complexity(x)**2, though I'm not sure how to prove the exact dimensionality of the reduction.
If I made the weighting include duplicates in an odd-limit, the default complexity weighting would definitely need to increase to be proportional to the odd-limit complexity (which I kinda suspect is what it should be, and that the reason the results weren't good enough is because of not accounting for duplicates). Would you then trust the results more? And to be clear, for the reasons I gave above, having a "falloff" isn't an option as a default, cuz it doesn't reflect an explicit set of targeted harmonies; that's why such a weighting option is left to the user to devise instead, cuz there's lots of ways of going about it too. I do have some concerns though which I think should be addressed first, and which, together with everything I already said, are I think why I forgot about the duplicates:
I worry that there will be an "unevenness" introduced whenever a new duplicate is implicitly introduced in an odd-limit, where suddenly an interval has double the weighting, which could cause unexpected results for a user who might not care that 15/10 = 3/2 when inspecting some subset of the 15-odd-limit including 3, 5 and 15. In other words, it sort of assumes unnecessarily that the only context you care about is complete harmonic-series otonalities, as opposed to essentially tempered chords, where the duplicate behaviour likely doesn't matter at all, and which, along with comma pumps and a finite number of notes, is arguably the main point of tempering to an edo. But compounding this, there's also a sense in which the duplicates are already being accounted for (other than the indirect way of weighting): since everything needs to obey the patent val mapping, when we introduce odd 15 to the no-7's 9-odd-limit, we are already creating more tuning constraints on 3/2 and 5/4, simply because 3/2 * 5/4 = 15/8 is required to be consistent, so the error weighting on 15/8 is already adding additional tuning fidelity (15/8 is weighted more strongly than 3/2 or 5/4); in fact, every composite that can be reached will be creating more constraints on the tuning.
(Also, it's pretty easy to write a trivial version of odd_lim yourself that doesn't exclude duplicates and then feed that in as the target.)
Re "I said 25/16 was conflated with 14/9, and that no sane person would measure how off the tempered interval was from 25/16 as much as from 14/9": while it's true that often you want to destroy the harmony of more complex intervals in favour of simpler intervals, many of the best temperaments equating two intervals want both interpretations to be usable, so that you can use the ambiguity for new progressions not possible in JI. So to reason that it's absurd to consider 25/16's mistuning a priority is itself absurd to me, when it's the simplest odd-limit equivalence other than the one I called trivial; and I didn't call that one trivial for no reason either: generally I find that equating two small consecutive superparticulars, in isolation from the other, more nontrivial consequences usually implied by them (which often require higher odd-limits!), doesn't lead to as many musically useful results, because you are equating dissonances; for example, you'd have to change from 28/15 to 15/8 or vice versa, and stacking 16/15~15/14 doesn't give a very pleasing cluster IMO. For 225/224 the otonal stacking-based equivalence would be (15/8)^2 ~ 7/2, which tbh doesn't seem very inviting or inspiring to me subjectively (tho let me know if you can think of any cool chords that use this), so the first instance of obvious "musical gold" to me is switching between 14/9 and 25/16 interpretations in progressions.
Re: "Why not getting back to the XA Discord server?" I don't really want to go back on my word and regardless I left for a variety of reasons which are kinda hard to summarize in a short message, including that I'm tired of arguing for things that seem really obvious to me like I shouldn't have to say them out loud let alone reiterate them let alone push for them; even when some agree no music gets made most of the time anyways, so I'd rather just explore the possibilities of xenharmonic music at my own pace in peace. Basically it feels like noone actually shares my perspective or cares about the things I do in tuning theory and in music making, and due to personal circumstance I have very little energy for making music so I can't be a good example to other people either (especially when I frequently waste my time talking); the launchpad noodles are about as much as I can manage, and I don't think I've seen a single person post a noodle of them trying the isomorphic keyboard code after months of advertising and talking about it; the soonest is in March so maybe I'll rejoin temporarily then. Even the launchpad thing is an example of my experience more generally; AFAIK (which is hard to tell cuz of complete lack of examples posted) usually people just use the launchpad as is without customizing the layout or even considering how much potential is unlocked by easily changing the isomorphic layout with a single run of a Python 3 function. Also, I'm on basically everything else anyways (feel free to invite me to something I'm not on) and I don't really agree with how XA is run so I don't feel good about encouraging it as being the "only" place to talk about xen online (other than FB which I am not touching). Also, I have kinda an increasingly bad feeling about Discord tbh, for example the new ignore feature is absolutely terrible and kinda seals the deal that the right move is to move to revolt.chat (even though it's got less features and is a little less reliable).
--Godtone (talk) 18:23, 17 January 2025 (UTC)
You said: "I highly recommend looking at multiple optimal_edo_sequences". To this I had answered: "Altho you measured a variety of odd limits to seemingly give results for a variety of use cases, if each individual case were impractical, they'd prolly not magically combine to something practical."
Say I have a 50% probability of using intervals of 27 (cuz I can't know what chords I'm gonna use before composing); you believe I can extract something useful from your 25- and 27-odd-limit results by interpolating them or some other means. I mean, no. Your 25-odd-limit result is based on 100% for 25 and 0% for 27, and your 27-odd-limit result is based on 100% for 27. Neither is realistic. I'd much prefer a result that takes account of how much probability/frequency these intervals appear with, and generally it's a rapidly decreasing probability/frequency with increasing complexity. If I use 27, I'll be using it like once in 100 times I use 3, 5, 7, and 9; I'll be using it like once in 10 times I use 15 and 21; and I might as well just insert 35 instead. So a chord being out of tune once in 100 times is generally not as bad as not tuning the other 99 chords well. Simplicity weighting is just natural in these scenarios. Even equal weighting looks alright. I absolutely don't recommend complexity weighting and I have no idea how you got to complexity squared or even the fourth. You know it's absurd and I do too. In fact the paradox I raised happens with all complexity weighting. You start with infinite weight for the octaves. Then you have a valley. It grows to a peak at the limit, after which it's zero. We're staring at a zigzag when a function on the harmonic series is supposed to be smooth.
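For illustration, this expected-use argument might be sketched numerically as follows; the errors and usage probabilities below are hypothetical placeholders, not measured data:

```python
# Hypothetical per-interval tuning errors (cents) and usage probabilities.
errors_cents = {"3/2": 0.1, "5/4": 1.4, "7/4": 4.8, "27/16": 5.0}
usage_prob = {"3/2": 0.40, "5/4": 0.30, "7/4": 0.25, "27/16": 0.05}

# Expected squared damage: a rarely used interval contributes little even
# when badly mistuned, which is the core of the simplicity-weighting case.
expected_damage = sum(usage_prob[iv] * errors_cents[iv] ** 2
                      for iv in errors_cents)
```

Under these made-up numbers, the badly mistuned but rarely used 27/16 contributes less expected damage than the moderately mistuned but common 7/4.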
You said: "Many of the best temperaments equating two intervals want both of the interpretations to be usable so that you can use the ambiguity for new progressions not possible in JI". The complex intervals we discussed (25/16, 75/64, etc.) can all be made evident by stacking simpler intervals, so even if the 25/16 is tuned to pure 14/9 (or even flat of 14/9), its identity is evident if you approach it by two steps of 5/4. That's also the most sane approach. I'm not saying you should use those intervals like that; I just mean those intervals can use those extra supports. In a way the tuning requirement of 25/16 is already covered by that of 5/4, so that's another reason I don't believe in complexity weighting.
On the topic of duplicates in the tonality diamonds, you're right I won't trust the results more if you include them. I've said I don't think there's an objectively right answer and that I believe it only reflects personal artistic choices. I've said I don't like having this extra variable compared to the simplicity of prime-based/all-interval metrics.
I understand and can sympathize with your frustration in the XA Discord. Thanks for the detailed explanation. FloraC (talk) 16:23, 18 January 2025 (UTC)

Continued discussion of odd-limit complexity weighting

"if each individual case were impractical, they'd prolly not magically combine to something practical" to be honest I don't see why not; if the same EDO appears in the sequence for multiple tonality diamonds you're interested in (both simple and complex), how is it flawed to pick that EDO as your tuning of interest for trying out/further investigation? (This is especially true if you investigate different punishment strategies and it generally keeps appearing, because that ensures a more diverse sample of philosophies for tuning agreeing on the same tuning.)

"its identity is evident if you approach it by two steps of 5/4" to some degree yes but you shouldn't always need to "explain" the stacking an interval is made of to a listener IMO. If I want to use 32/25 in some chord I don't necessarily want the chord to also have 5/4 present just to make the 32/25 interpretation clearer. (For this purpose it seems to me that the tuning should at least not be closer to 9/7 which is very temperable dyadically by comparison, so that even a very undertempered ~32/25 can work as ~9/7.)

"I have no idea how you got to complexity squared or even the fourth" I think you misunderstood. Multiplying the squared error by the fourth power of the odd-limit complexity is mathematically equivalent to doing the mean-squared-error where the "error" is the absolute or relative error times the square of the odd-limit complexity. That is, if e is our (absolute or relative) error and c is our odd-limit complexity, then what we're doing is giving a "punishment" for each interval of e^2 * c^4 = (e * c^2)^2, which means we are doing MSE on e * c^2. Does that make sense? (I may change this behaviour in the future to avoid confusion, so that the weighting is applied directly to the error rather than after the error function.) As for why use the square of the odd-limit complexity, the answer is: if you want "dyadic" tuning fidelity, which is the harshest kind, meaning that an interval should be recognisable in isolation. In other words, it's because the number of intervals in the k-odd-limit grows approximately with k^2.* As I found by ear, this results in the required tuning fidelity going up very quickly as the odd-limit grows, so that already by the time you get to 19/16, for me there is about 2 cents of mistuning allowed, simply due to lack of harmonic context.

* Though I'm fairly sure of this claim intuitively (I asked someone else about it, as I didn't know how to prove it), let me know if you can find a proper bound on the number of intervals in the k-odd-limit. The curve should be similar for the k-integer-limit as well; the general idea is that the ranges of the numerator and denominator both grow linearly, so the number of possibilities grows with the square.
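The quadratic-growth claim is easy to check empirically. The sketch below (my own illustration; `odd_limit_pairs` is a hypothetical helper, not code from this page) counts coprime odd pairs up to k, which correspond to the octave-reduced interval classes of the k-odd-limit:

```python
from math import gcd

def odd_limit_pairs(k):
    """Count interval classes a/b with a > b, both odd, coprime, <= k.
    Octave complements double this and the unison adds one, so e.g. the
    19 pitches of the 9-odd-limit diamond correspond to 9 such pairs."""
    odds = range(1, k + 1, 2)
    return sum(1 for a in odds for b in odds if a > b and gcd(a, b) == 1)

# Rough quadratic growth: tripling k multiplies the count by roughly 3^2 = 9.
growth = odd_limit_pairs(81) / odd_limit_pairs(27)
```

There is no exact closed form because of the coprimality condition, but the growth factor stays close to the square of the scaling of k, consistent with the claim above.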

Clearly, considering intervals in isolation rather than as suggested through the harmonic context of chords gives the wrong answer, but I'm not sure how one would mathematically quantify the harmonic ambiguity of a contextualized interval. It's clear that context reduces the required tuning fidelity, so that the dyadic punishment e^2 * c^4 = (e * c^2)^2 (the square of the odd-limit) is inappropriately harsh. My best guess has been to weight the error-before-squaring by the square root of the odd-limit complexity, e^2 * c = (e * c^(1/2))^2, which is actually quite forgiving in the tuning of more complex intervals, hypothesizing that context is almost, but not fully, capable of making their tuning fidelity the same regardless of their complexity (which is the most forgiving sensitivity I can allow, so really the question is whether it's too insensitive, not whether it's too sensitive). It doesn't seem like the answer could be less sensitive than this, but it's admittedly a strange answer I'd like more justification for (though I gave some in the above paragraphs already and you gave more). Intuitively, context should allow the sensitivity to go from e^2 * c^4 = (e * c^2)^2 (which AFAIK is the precise dyadic precision) down to e^2 * c^2 = (e * c)^2 (weighting the error-before-squaring proportional to the odd-limit); the reason I didn't pick the latter (which is more elegant) is that there still seemed to be a dominance phenomenon.
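As an illustrative sketch (my own naming, not code from this page), the three punishment schemes discussed differ only in the exponent applied to the odd-limit complexity before squaring:

```python
def punishment(error, complexity, exponent=2.0):
    """Per-interval punishment (e * c**exponent)**2.
    exponent=2   -> dyadic fidelity, e^2 * c^4;
    exponent=1   -> e^2 * c^2 (proportional to the odd-limit);
    exponent=0.5 -> the forgiving square-root variant, e^2 * c."""
    return (error * complexity ** exponent) ** 2
```

For the same 1-cent error, an odd-limit-9 interval is punished 9^4 = 6561 times harder than a complexity-1 interval under the dyadic scheme, but only 9 times harder under the square-root scheme, which is why the latter is so much more forgiving of complex intervals.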

BTW, it's easy to use unweighted error by just feeding weighting=1 as a parameter. I just feel the results bias toward simple intervals too much, so that it starts to feel like: what's the point of including the more complex stuff if it's not gonna be tuned well enough?

--Godtone (talk) 23:14, 18 January 2025 (UTC)

You said: "if the same edo appears in the sequence for multiple tonality diamonds you're interested in (both simple and complex), how is it flawed to pick that edo as your tuning of interest for trying out/further investigation?" My answer is: becuz how many times an edo appears in those sequences is an arbitrary metric you made up. Each sequence represents the scenario where one is interested and only interested in a specific odd limit, as you have and only have weight for them (becuz you use complexity weighting, this can't be explained by discarding negligible weights, or gating). Counting the time of appearance thru multiple sequences would imply that I'm interested in a specific small odd limit, and I'm also interested in another larger odd limit, which contradicts the premise that I should be uninterested in it. So this is a self-contradicting/schizophrenic metric. It doesn't represent or translate to a realistic scenario.
Note that this isn't the same as I'm being unsure if I'm interested in something, I have a probability of being interested in something, or I'm interested but only have a low frequency of using something. If I'm unsure, I might not want to be using the limit as a hard cutoff. More likely I'll be configuring the weighting curve to reflect the fuzziness of the thought.
The idea that the interval should be tuned closer to 32/25 than 9/7 sounds insane to me. To be clear, it is a valid tuning and can be useful to those who specifically target intervals of 25 for whatever reason. But that's a personal artistic choice. To talk about it like an optimal tuning is wild. Imo complex intervals like 32/25 should always get some harmonic support. Even if you use it alone, consider how much more significant 9/7 is. It appears everywhere in basic-level septimal harmony. In comparison, 32/25 only has some utility that I can think of in 5-limit JI purist approaches, or maybe primodalist approaches, in which case they get plenty of harmonic support anyway.
Okay, maybe your dyadic tuning fidelity and/or contextualized tuning fidelity make some sense. Becuz I also take account of harmonic significance and frequency of use, I still have plenty of cases to turn your complexity weighting into a simplicity weighting. FloraC (talk) 12:34, 19 January 2025 (UTC)
"Each sequence represents the scenario where one is interested and only interested in a specific odd limit" is possibly a bit of a misunderstanding. This reasoning would make more sense if the sequence were still subject to the "largest odds' harmonic-complexity-based dominance" phenomenon/issue I spoke of, but I fixed that using the very forgiving complexity weighting, so that if we look at the (2k+1)-odd-limit then really we are also simultaneously looking at the (2k-1)-odd-limit, as well as every subset, because the odd 2k+1 has not "dominated"/warped the results so that previous harmonies are discarded. Rather, what's happened is that a little more error is being allowed on smaller odds so that the larger odds have even a chance of being kept in tune (on average), and because everything is by patent val, this error is felt even less, because it is internally consistent when constructing chords. If anything you can think of the square-root-of-odd-limit complexity weighting as a counterweight to small odds' natural dominance thru their presence in composite odds strengthening their tuning requirements indirectly (so that tuning fidelity of 15 improves fidelity of 3 and 5, for example); no weighting at all would bias unambiguously and unfairly in favour of small odds, though if that's your priority that's fine, I just don't see the point of targeting more complex harmonies at all in that case.
"Counting the time of appearance thru multiple sequences would imply that I'm interested in a specific small odd limit, and I'm also interested in another larger odd limit, which contradicts the premise that I should be uninterested in it. So this is a self-contradicting/schizophrenic metric. It doesn't represent or translate to a realistic scenario." I'm afraid this doesn't make any sense to me at all. Could you explain how it makes sense? As I see it, if I am looking at multiple odd-limits then an EDO appearing in all or most of them shows that regardless of which odd-limit I care more about that EDO is a good choice for its size, so if I want it to appear in many odd-limits, that's an indirect way of achieving the "complexity falloff" rather than a hard cutoff, it's just that it is the human aggregating the results, which is more reliable than an algorithm trying to do the same because the human can judge based on more information and considerations that it might not be accounting for, such as (importantly) personal taste for tuning.
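For illustration, the aggregation-by-appearance idea might be sketched like this; the sequences below are made-up placeholders, not actual optimal-EDO results:

```python
from collections import Counter

# Placeholder optimal-EDO sequences for three odd-limits (illustrative only).
sequences = {
    9:  [12, 19, 22, 31, 41, 72],
    15: [19, 31, 41, 53, 72],
    21: [31, 41, 72, 94],
}

# Tally how many odd-limit sequences each EDO appears in.
appearances = Counter(edo for seq in sequences.values() for edo in seq)

# EDOs that show up in every surveyed odd-limit are the robust candidates.
robust = sorted(e for e, n in appearances.items() if n == len(sequences))
```

The human then judges the surviving candidates against further considerations (size, personal taste for tuning) that the tally itself doesn't capture.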
I wanted to elaborate on something I said a little: "It doesn't seem like the answer could be less sensitive than this, but it's admittedly a strange answer I'd like more justification for". The weird thing about using the square root of the odd-limit was that when I calibrated it against the most contentious interval whose tempering I accept as psychoacoustically convincing (the ~3/2 and ~4/3 in 7edo), the implied bounds for all the odd-limits seemed to match my experience with a suspicious degree of accuracy, recognising, for example, that 80edo's very off ~15/13 is tempered with basically exactly as much damage as I can accept for harmonically-contextualised purposes.
"Becuz I also take account of harmonic significance and frequency of use" I think this is maybe where the disagreement is arising from. I fundamentally don't agree that you can weight in this way. You can't say "because I use 3/2 often, 3/2 is the most important to have low error on", because that disregards the tuning fidelity required for more complex intervals, and disregards that your 3/2 may already be good enough for every practical purpose (which is especially likely if you look at larger odd-limits including composite odds, because of how frequently 3 appears in the factorisation). It doesn't matter how infrequently you use something; if you do use it, then having it be too damaged will have consequences for its sound (specifically its capability of concordance), so you either do or don't care about whether it concords. Plus, if you wanted to adopt that philosophy, then ironically 53edo is definitely optimal for marvel, because it cares first and foremost about the 2-limit, then the 3-limit, then the 5-limit, then the 7-limit, then the 11-limit, strictly in that order, which is exactly the frequency falloff you are advocating for. So by your own reasoning it should be the best tuning, because it tunes primes better the smaller they are. Discarding it due to the uneven tuning of the full 9-odd-limit is indirectly an admission of complexity weighting, at which point you can't avoid the fact that more complex intervals need to be tuned better to concord. How much better is up for debate ofc, but it's definitely not unweighted, for the reasons I gave.
--Godtone (talk) 17:32, 20 January 2025 (UTC)
I don't think I've misunderstood. I specifically meant that when you pick a limit, you care about that limit (which includes smaller limits) and do not care about any intervals outside of it. For example when you pick 25 as a limit, you absolutely don't care about intervals of 27. That doesn't happen in reality cuz if I care about 25 so much that I put it at the local peak of the weighting curve, I have no reason to completely dismiss 27.
Next you said: "I just don't see what the point of targeting more complex harmonies at all is in that case." My answer is: it's not a given that there should be a point to targeting more complex harmonies. The worth of it is something that needs proof. I've been holding that the objectively best, thus optimal and recommendable, weighting curve is one where you don't have to choose a target, and where it just does the rolloff for you.
Note that looking at multiple sequences and counting the times an edo appears isn't the same as interpolating the edo's scores across limits. Interpolation would make some sense, actually, since that can translate to a configuration of the weighting curve. Counting the times cannot. It implies a completely different mindset, which I've described: you care about a limit and specifically not any intervals outside of it, and then you care about a larger limit, which defies your previous choice. Then you care about yet another larger limit, which defies your previous two choices. Now I don't think this is utterly wrong, but it's a new, opaque metric layered on the old metric. Same problem as POTE, if that means something to you. You could say there's some sort of "black magic" that somehow makes it close enough to what you want, but as I said, it doesn't translate to or represent a realistic scenario. Generally you have a scenario in your mind and find a mathematical model for it. I find it hard to believe in a model that doesn't correspond to a scenario.
I'll keep holding that the optimal tuning should take account of harmonic significance and frequency of use. First, I disagree that it disregards the tuning fidelity required for more complex intervals. It just trades that against the other concerns, which are just as pressing if not more so. Second, by frequency I do imply probability, cuz frequency is the expected value of use/unuse of the interval. If you're not sure you're gonna use a certain complex interval more than like once in a hundred chords, then giving it a high weight is a waste of optimization resources. So it's not that I either care or don't care about whether it concords. More like I care, but the amount of care is in a way proportional to its harmonic significance and frequency of use.
53edo is clearly undertempered cuz it trades simple 7-odd-limit concords in favor of complex 25-odd-limit wolves. It might score better in Wilson metrics than Tenney cuz Wilson is where 9 is simpler than 7, but 25 is still more complex than both 7 and 9, in addition to the fact, recognized by Euclidean metrics, that trading 25 for 7 in marvel is a more efficient use of optimization resources, in the same way as trading 3 for 5 is in meantone. Ftr, here's the BE-optimal GPV sequence for septimal marvel:
10, 12, 19, 31, 41, 53, 72, 125, 197.
For undecimal marvel:
10, 12e, 19, 22, 31, 41, 53, 72, 125, 166, 197e, 269ce, 435cce.
FloraC (talk) 11:01, 21 January 2025 (UTC)
There is a reason that counting the times is way better than building in interpolation in the way you suggest: the tuning fidelity issue. If I build in a weighting curve, then I am algorithmically giving permission for the most complex harmonies to be the most off, which I've argued is objectively the wrong choice if you want to target those harmonies. By contrast, by not having a falloff, I am ensuring that if those harmonies are too off, the rest of the tuning had better be worth the sacrifice in tuning fidelity where it's most needed. This still allows systems that are biased toward simpler harmonies to appear. I've expressed already that I'm not sure whether weighting proportional to the square root of the odd-limit is too forgiving for complex harmonies, but some strange edos can start appearing if you bias too strongly for large odd-limits. For example, weighting proportional to the odd-limit implies 67edo is a lower-absolute-error system for the 17-odd-limit than 58edo, which I suspect happens because the sharp 11 and 13 of 58edo are punished more strongly as a result, and which I don't find convincing. --Godtone (talk) 15:59, 21 January 2025 (UTC)