Xenharmonic Wiki talk:Things to do: Difference between revisions

To use this to find a reasonably objective measure of which subgroups are best, we can add a few logical restrictions to this rather general definition:<br/>
* Consider the monzos of the harmonics in any S as r-dimensional vectors (that is, as members of N^r), corresponding to the p_r-prime-limit, where p_r is the r-th prime and p_r does not exceed L. These vectors must be linearly independent, so as not to represent a "pathological" subgroup, which can have multiple mappings for the same positive integer.
* Then, if we assume that all harmonics in the subgroup are harmonics we want to approximate, we can think of the logarithmic size of each harmonic as the amount of information it generates, because smaller harmonics generate more of the harmonic series, especially when combined with other small harmonics. This leads to prime limits as the most efficient subgroup representations of the harmonic series, with "efficient" defined as "generates the most harmonics relative to the number of generators". The most natural formulation I can currently think of that is relatively straightforward, and (as a sanity check) is also used on the page for [[The Riemann zeta function and tuning]], is weighting each generator by the reciprocal of the log of its size. To make the definition invariant to the number of generators, you can then make the weightings sum to 1 by multiplying by an appropriate scalar.
* Then, to find the subgroups that nEDk best approximates relative to its step size, simply look at all subsets of L whose harmonics are linearly independent and whose error is low enough to guarantee a good level of [[consistency]], and sort the results by increasing error. Note that this becomes very computationally intensive for large L, so L=30, L=42, L=58, L=96 and at most L=126 are all good restrictions, depending on what is computationally feasible in a reasonable amount of time.<br/>(The choices of L listed here are based on prime limits (specifically, record prime gaps; 30 is also significant as 2*3*5), with the exception of 58, which is based on the 53-prime-limit being the highest limit available on x31eq. Note that larger L can be used for small ETs if we restrict accuracy sufficiently or consider only lower-prime-limit subsets of L.)
* As for making the search more computationally feasible, there is an easy way to eliminate possibilities: add harmonics in order of increasing error relative to the error of some starting harmonic, until none are left in L or none are left that wouldn't introduce too much error. This provides an easy way to define "families of subgroup interpretations" by increasing error, through superset/subset relationships, and through compatibility relations, which could be an interesting direction to take this in and of itself.<br/>(I wonder how related it'd be to [[Xenharmonic_Wiki_talk:Things_to_do#13-Limit, 17-Limit and 19-Limit Comma Pages|families of temperaments]]? It seems like it'd be strongly related and, better yet, could suggest ways of organising relatively unknown temperaments.)
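The restrictions above can be sketched in code. This is a minimal illustration, not anything fixed by the text: all function names are mine, and I use the squared signed step-error with 1/log-size weights normalised to sum to 1, as the bullets suggest. It builds monzos for harmonics up to L, checks their linear independence, and scores a candidate subgroup against nEDk.

```python
from fractions import Fraction
from math import log2

def primes_upto(limit):
    """Primes up to `limit`, by sieve of Eratosthenes."""
    sieve = [True] * (limit + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [p for p in range(2, limit + 1) if sieve[p]]

def monzo(h, primes):
    """Exponent vector of harmonic h over the given primes."""
    v = []
    for p in primes:
        e = 0
        while h % p == 0:
            h //= p
            e += 1
        v.append(e)
    assert h == 1, "harmonic outside the prime limit"
    return v

def independent(vectors):
    """True iff the monzos are linearly independent (rank == count),
    via exact Gaussian elimination over the rationals."""
    rows = [[Fraction(x) for x in v] for v in vectors]
    rank = 0
    for col in range(len(rows[0]) if rows else 0):
        pivot = next((r for r in range(rank, len(rows)) if rows[r][col]), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for r in range(len(rows)):
            if r != rank and rows[r][col]:
                f = rows[r][col] / rows[rank][col]
                rows[r] = [a - f * b for a, b in zip(rows[r], rows[rank])]
        rank += 1
    return rank == len(rows)

def weighted_error(n, k, harmonics):
    """Normalised 1/log-weighted sum of squared signed errors,
    measured in steps of n-ED-k."""
    weights = [1 / log2(h) for h in harmonics]
    total = sum(weights)
    err = 0.0
    for h, w in zip(harmonics, weights):
        steps = n * log2(h) / log2(k)   # exact position of h in steps
        e = steps - round(steps)        # signed error to nearest step
        err += (w / total) * e * e
    return err
```

A search could then iterate over `itertools.combinations` of harmonics up to L, keep only the combinations passing `independent`, and sort by `weighted_error` under some error cap.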
A few notes on the mathematics:
* I pick the variance over the standard deviation because squaring the error leads to a "least-squares" optimisation, which is then much more "compatible" with the tuning optimisations represented by the Riemann zeta function.
* We can take an alternative strategy for tuning a subgroup, less focused on the regular temperament theory interpretation and more focused on which of the consonant chords and intervals you want to use are approximated. In such a case, you pick ''any'' subset of X corresponding to ''any'' subset of L, which is to say that the r-dimensional vectors ''are not'' required (or even recommended) to be linearly independent. The subset of L then represents a generalisation of [[odd limit]]s: odd limits are the specific case where your subset of L contains only odd harmonics, because factors of 2 are discarded from the prime factorisations when working specifically with ED2s. This interpretation/use fits very nicely with the notion of [[Consistent#Consistency_to_distance_d|consistency to distance d]], with the standard deviation serving as an "expected overall consistency" that is less discrete/rigid. The only potential problem is that a very large number of possibilities can result, with different subsets being preferable for subjective reasons.
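A toy illustration of this alternative, odd-limit-style strategy (the particular subset {3, 5, 7, 9} and the use of population variance are my own choices for the sketch): measure each harmonic's direct-mapping error in steps of nEDk, allowing linearly dependent harmonics like 9 = 3*3, and take the variance of those errors as the less rigid "expected overall consistency" spread.

```python
from math import log2
from statistics import pvariance

def step_errors(n, k, harmonics):
    # Signed error, in steps of n-ED-k, of each harmonic's direct mapping
    # to its nearest step; dependent harmonics (e.g. 9 alongside 3) are fine.
    return [n * log2(h) / log2(k) - round(n * log2(h) / log2(k))
            for h in harmonics]

# Example: 12edo against the odd-harmonic subset {3, 5, 7, 9}.
errs = step_errors(12, 2, [3, 5, 7, 9])
spread = pvariance(errs)  # variance, per the least-squares preference above
```

A low `spread` suggests the mapping stays consistent across the whole chosen subset, in the spirit of consistency to distance d, rather than merely each harmonic being individually acceptable.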
--[[User:Godtone|Godtone]] ([[User talk:Godtone|talk]]) 04:01, 22 January 2021 (UTC)