Talk:Tenney–Euclidean tuning
: The next part may be explained more clearly, but I'd like to remind you "the same error across primes" itself is a bias towards higher primes. Notice if 2 and 7 are equally weighted, 8 would get about thrice the error of 7's (see Graham's primerr.pdf). And no, I haven't observed the bias towards lower primes due to the way temperaments are constructed. [[User:FloraC|FloraC]] ([[User talk:FloraC|talk]]) 01:06, 19 December 2021 (UTC)
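: ''(Worked arithmetic for the 8-vs-7 point above, added for illustration: write <math>e_p = T(p) - 1200\log_2 p</math> for the error in cents that a tuning map <math>T</math> puts on prime <math>p</math>. Since <math>8 = 2^3</math>, the error of 8/1 is <math>3e_2</math>, so an optimum that makes the prime errors equal in size, <math>|e_2| \approx |e_7|</math>, leaves 8/1 with about three times the error of 7/1 even though <math>\log_2 8 = 3</math> is barely larger than <math>\log_2 7 \approx 2.807</math>. Tenney weighting divides each prime's error by <math>\log_2 p</math>, which is what makes the errors of 7/1 and 8/1 directly comparable.)''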
:: I've read Breed's paper and I think it's very good work. Let me be clear: I think giving more weight to lower primes is a very good idea. But it seems obvious that this is explicitly introducing a certain bias to get better results in practice.
:: 8 is not a prime, so when talking about average errors for the primes it is kind of irrelevant. In the case where you work in some subgroup like 2.5.9, I don't see why you would tolerate twice the error in 9 as you do for 3 in 2.3.5, as we are treating 9 here as a 'formal prime' and not even considering 3. Reading Breed's arguments further, he does actually imply that his weighting is biased towards lower primes, but that this is a good thing.
:: I realize that this is just arguing semantics, so I will not lose any sleep over it.
:: - [[User:Sintel|Sintel]] ([[User talk:Sintel|talk]]) 02:23, 19 December 2021 (UTC)
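:: ''(On the 2.5.9 example, for concreteness: the Tenney weight of the formal prime 9 is <math>\log_2 9 = 2\log_2 3 \approx 3.170</math>, twice that of prime 3 in 2.3.5, so at equal weighted error the tuning is allowed twice as many cents of error on 9 as it would be on 3; that is the factor of two being questioned above.)''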