User:Sintel/Validation of common consonance measures


In their test, subjects rated the consonance of 38 just intervals.
They used a harmonic timbre with 10 harmonics, amplitudes proportional to 1/''n'', and exponential decays mimicking a plucked string.
Each participant rated the consonance of simultaneous intervals on a five-point scale: very dissonant, mildly dissonant, neither dissonant nor consonant, mildly consonant, very consonant.
These ratings were then normalized to the range (0, 1).
* The Wilson norm is the sum of prime factors (with repetition). That gives <math>\text{Wilson}\left(\tfrac{15}{8}\right) = 5 + 3 + 2 + 2 + 2 = 14</math>.
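The Wilson norm is easy to compute directly. A minimal sketch using trial-division factorization (the function names are my own, not from the notebook):

```python
def prime_factors(m: int) -> list[int]:
    """Prime factors of m, with repetition, by trial division."""
    factors = []
    p = 2
    while p * p <= m:
        while m % p == 0:
            factors.append(p)
            m //= p
        p += 1
    if m > 1:
        factors.append(m)
    return factors

def wilson_norm(n: int, d: int) -> int:
    """Wilson norm of n/d: sum of the prime factors of n and d, with repetition."""
    return sum(prime_factors(n)) + sum(prime_factors(d))

print(wilson_norm(15, 8))  # 5 + 3 + 2 + 2 + 2 = 14
```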


I have also included Euler's ''[[gradus suavitatis]]'',<ref>Leonhard Euler (1739) ''Tentamen novae theoriae musicae'' (Attempt at a New Theory of Music), St. Petersburg.</ref> which is probably the first complexity measure historically.
It is somewhat similar to the Wilson norm, in that it depends on the prime factorization.
Given ''s'', the sum of prime factors (i.e. the Wilson norm), and ''n'', the number of prime factors, Euler's gradus function is {{nowrap|''s'' - ''n'' + 1}}.
For example <math>\text{Gradus}\left(\tfrac{15}{8}\right) = 14 - 5 + 1 = 10</math>.
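The gradus function can be sketched the same way (the factorization helper is repeated so the snippet is self-contained; names are my own):

```python
def prime_factors(m: int) -> list[int]:
    """Prime factors of m, with repetition, by trial division."""
    factors = []
    p = 2
    while p * p <= m:
        while m % p == 0:
            factors.append(p)
            m //= p
        p += 1
    if m > 1:
        factors.append(m)
    return factors

def gradus(num: int, den: int) -> int:
    """Euler's gradus suavitatis of num/den: the sum of the prime factors
    of num*den, minus their count, plus 1."""
    f = prime_factors(num * den)
    return sum(f) - len(f) + 1

print(gradus(15, 8))  # 14 - 5 + 1 = 10
```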


First proposed by [[Helmholtz]], these models are quite popular in the psychoacoustics literature, and many variations have been developed over the years.
Here, I will use the classic roughness curve as derived by Plomp and Levelt in 1965.<ref>R. Plomp, W. J. M. Levelt (1965) [https://doi.org/10.1121/1.1909741 ''Tonal Consonance and Critical Bandwidth'']. J. Acoust. Soc. Am.</ref>
<ref group="note">This model does not take into account the amplitude of the partials. One may object that we should weight the contribution of beating according to the amplitude of the harmonics (i.e. 1/''n'') but this only makes the model worse (R<sup>2</sup> = 0.652).</ref>


[[File:Consonance_ratings_roughness.png|500px|thumb|none|Roughness model from Plomp and Levelt. They consider beating between harmonic tones with 6 partials.]]
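This page uses the Plomp–Levelt curve directly; as an illustration, the sketch below instead uses Sethares' widely cited parametric fit to that curve, with his published constants. It is an approximation of the same roughness idea, not necessarily the exact curve used here:

```python
import math

# Sethares' parametric fit to the Plomp-Levelt roughness curve.
# The constants are Sethares' published values, not taken from this page.
D_STAR, S1, S2 = 0.24, 0.0207, 18.96   # locate the point of maximal roughness
A, B = 3.51, 5.75                      # decay rates of the two exponentials

def pair_roughness(f1: float, f2: float) -> float:
    """Roughness contribution of two pure partials at f1 and f2 (Hz)."""
    s = D_STAR / (S1 * min(f1, f2) + S2)
    x = s * abs(f2 - f1)
    return math.exp(-A * x) - math.exp(-B * x)

def total_roughness(f0: float, ratio: float, n_partials: int = 6) -> float:
    """Sum pairwise roughness over two harmonic tones with n_partials each."""
    partials = [f0 * k for k in range(1, n_partials + 1)]
    partials += [f0 * ratio * k for k in range(1, n_partials + 1)]
    return sum(pair_roughness(p, q)
               for i, p in enumerate(partials)
               for q in partials[i + 1:])

# The fifth (3/2) comes out smoother than the nearby tritone (45/32).
print(total_roughness(261.6, 3/2) < total_roughness(261.6, 45/32))  # True
```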


To validate these measures, a linear model was fitted to the perceptual data, using weighted least squares to account for the variance.
Since all models have only two parameters (slope and intercept), we can directly compare their R-squared coefficients, which measure how well each model correlates with the data.
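The fitting procedure can be sketched with NumPy. The data below is made up for illustration; the actual ratings and variances are in the linked notebook:

```python
import numpy as np

# Illustrative stand-in data (not the actual study ratings):
x = np.array([1.0, 4.0, 7.0, 10.0, 14.0])       # complexity scores
y = np.array([0.95, 0.80, 0.55, 0.40, 0.20])    # mean normalized ratings
var = np.array([0.01, 0.02, 0.02, 0.03, 0.02])  # per-interval rating variance

w = 1.0 / var  # inverse-variance weights
# np.polyfit multiplies residuals by its w argument, so pass sqrt(w)
# to minimize sum(w * (y - pred)**2).
slope, intercept = np.polyfit(x, y, deg=1, w=np.sqrt(w))
pred = slope * x + intercept

# Weighted R^2: fraction of the weighted variance explained by the fit.
ybar = np.average(y, weights=w)
r2 = 1 - np.sum(w * (y - pred) ** 2) / np.sum(w * (y - ybar) ** 2)
print(round(r2, 3))
```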


{| class="wikitable"
|-
! Measure !! R<sup>2</sup>
''k'' controls how quickly the measure increases as we move away from simple ratios.
It roughly corresponds to the amount of tolerance we allow when we consider an interval as approximating a nearby just interval.
I will use ''k'' = 20{{c}}. This parameter was not optimized on the current dataset, as doing so would give this measure an unfair advantage over the other models, which use their standard parameters.
Note that increasing the set of intervals to a higher cutoff does not change the curve, since it already reaches a maximum below a Tenney norm of 10.
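The full definition of this measure lies outside this excerpt, so the sketch below shows only one plausible form of a tolerance-based complexity: over all just intervals below a Tenney-norm cutoff, take the minimum of the interval's Tenney norm plus a mistuning penalty scaled by ''k''. The penalty shape and all names here are my own assumptions, not necessarily what this page uses:

```python
import math
from fractions import Fraction

def tenney_norm(r: Fraction) -> float:
    """Tenney norm of n/d: log2(n * d)."""
    return math.log2(r.numerator * r.denominator)

def candidates(cutoff: float = 10.0) -> list[Fraction]:
    """All ratios in (1, 2] with Tenney norm below the cutoff."""
    limit = int(2 ** (cutoff / 2)) + 1  # since n > d, d**2 < n*d < 2**cutoff
    ratios = set()
    for d in range(1, limit):
        for n in range(d + 1, 2 * d + 1):
            r = Fraction(n, d)
            if tenney_norm(r) < cutoff:
                ratios.add(r)
    return sorted(ratios)

def complexity(cents: float, k: float = 20.0) -> float:
    """Smallest (Tenney norm + mistuning / k) over nearby just intervals.
    Purely illustrative; k is in cents per unit of added complexity."""
    return min(tenney_norm(r) + abs(cents - 1200 * math.log2(r)) / k
               for r in candidates())

print(complexity(1200.0))  # the octave 2/1 hits exactly: Tenney norm 1.0
```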


== Code availability ==
All figures and models presented here can be replicated by running the notebook available at: https://gist.github.com/Sin-tel/59ea5446a90119c4bfe5467f4198c9b5.
== Notes ==
<references group = "note"/>


== References ==
<references />