Douglas Blumeyer's RTT How-To
Note the different direction of the brackets between covectors and vectors: covectors {{val|}} point left, vectors {{monzo|}} point right.
[[File:Map and vector.png|500px|thumb|right|'''Figure 2a.''' Mapping example]]
Covectors and vectors give us a way to bridge JI and EDOs. If the vector gives us a list of primes in a JI interval, and the covector tells us how many steps it takes to reach the approximation of each of those primes individually in an EDO, then when we put them together, we can see what step of the EDO should give the closest approximation of that JI interval. We say that the JI interval '''maps''' to that number of steps in the EDO. Calculating this looks like {{val|12 19 28}}{{monzo|4 -1 -1}}, and all that means is to multiply matching terms and sum the results (this is called the dot product).
So, 16/15 maps to one step in 12-EDO ''(see Figure 2a)''.
For another example, we can quickly find the size of the fifth for 12-EDO from its map, because 3/2 is {{monzo|-1 1 0}}, and so {{val|12 19 28}}{{monzo|-1 1 0}} = (12 × -1) + (19 × 1) = 7. Similarly, the major third — 5/4, or {{monzo|-2 0 1}} — is simply 28 - 12 - 12 = 4.
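This dot product is one line of code. A minimal sketch in plain Python (the helper name <code>map_interval</code> is my own):

```python
def map_interval(covector, vector):
    """Dot product of a map (covector) and a prime-count list (vector):
    multiply matching terms and sum the results."""
    return sum(m * v for m, v in zip(covector, vector))

twelve = [12, 19, 28]  # the 12-EDO map

print(map_interval(twelve, [4, -1, -1]))  # 16/15 maps to 1 step
print(map_interval(twelve, [-1, 1, 0]))   # 3/2, the fifth, maps to 7 steps
print(map_interval(twelve, [-2, 0, 1]))   # 5/4, the major third, maps to 4 steps
```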
=== tempering out commas ===
[[File:Meantone temper out.png|200px|frame|right|'''Figure 2b.''' meantone equates four fifths (3/2) with one major third (5/4)]]
Here’s where things start to get really interesting.
The immediate conclusion is that 12-EDO is not equipped to approximate the meantone comma directly as a melodic or harmonic interval, and this shouldn’t be surprising because 81/80 is only around 20¢, while the (smallest) step in 12-EDO is five times that.
But a more interesting way to think about this result involves treating {{monzo|-4 4 -1}} not as a single interval, but as the end result of moving by a combination of intervals. For example, moving up four fifths, 4 × {{monzo|-1 1 0}} = {{monzo|-4 4 0}}, and then moving down one pentave {{monzo|0 0 -1}}, gets you right back where you started in 12-EDO. Or, in other words, moving by one pentave is the same thing as moving by four fifths ''(see Figure 2b)''. One can make compelling music that [[Keenan's comma pump page|exploits such harmonic mechanisms]].
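You can confirm that 81/80 vanishes under this map with the same dot-product arithmetic from earlier; a minimal sketch in plain Python (the function name is my own):

```python
def map_interval(covector, vector):
    # multiply matching terms and sum the results (the dot product)
    return sum(m * v for m, v in zip(covector, vector))

meantone_comma = [-4, 4, -1]  # 81/80 as a prime-count list

# (12 x -4) + (19 x 4) + (28 x -1) = -48 + 76 - 28 = 0 steps:
# 12-EDO tempers out the meantone comma
print(map_interval([12, 19, 28], meantone_comma))  # 0
```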
From this perspective, the disappearance of 81/80 is not a shortcoming, but a fascinating feature of 12-EDO; we say that 12-EDO '''supports''' the meantone temperament. And 81/80 in 12-EDO is only the beginning of that journey. For many people, tempering commas is one of the biggest draws to RTT.
</ol>
[[File:Approximation of logs.png|600px|thumb|left|'''Figure 2c.''' visualization of an ET as a logarithmic approximation. The curve of the blue line is the familiar logarithmic curve of the harmonic series (harmonic 4 was skipped because it's not prime). Each rectangular brick is one of our generators, or in other words, one step of the ET. The goal is to choose a size of brick that allows us to build stacks which most closely match the position of the blue line at all three of these primes' positions.]]
So when I say 12:19:28 ≈ log(2:3:5) what I’m saying is that there is indeed some shared generator g for which log<sub>g</sub>2 ≈ 12, log<sub>g</sub>3 ≈ 19, and log<sub>g</sub>5 ≈ 28 are all good approximations at the same time, or, equivalently, a shared generator g for which g¹² ≈ 2, g¹⁹ ≈ 3, and g²⁸ ≈ 5 are all good approximations at the same time ''(see Figure 2c)''. And that’s a pretty cool thing to find! To be clear, with g = 1.059, we get g¹² ≈ 1.9982, g¹⁹ ≈ 2.9923, and g²⁸ ≈ 5.0291.
Another glowing example is the map {{val|53 84 123}}, for which a good generator will give you g⁵³ ≈ 2.0002, g⁸⁴ ≈ 3.0005, and g¹²³ ≈ 4.9974. This speaks to the historical attention given to [[53edo|53-ET]]. So while 53:84:123 is an even better approximation of log(2:3:5) (and [https://en.xen.wiki/images/a/a2/Generalized_Patent_Vals.png you won’t find a better one] until 118:187:274), of course its integers aren’t as low, so that lessens its appeal.
Why is this rare? Well, it’s like a game of trying to get these numbers to line up ''(see Figure 2d)'':
[[File:Near linings up rare2.png|600px|thumb|right|'''Figure 2d.''' Texture of ETs approximating prime harmonics. Where the ''numerals'' line up, all primes are well-approximated by a single step size (the boundaries between cells are midpoints between perfect approximations, or in other words, the points where the closest approximation switches over from one generator count to the next). Nudging one of the maps' vertical lines to the right would mean decreasing the generator size, flattening the tunings of all the primes; vice versa, nudging it to the left would mean increasing the generator size, sharpening the tunings of all the primes. You can visualize this on Figure 2c. as shrinking or growing the height of the rectangular bricks. The position of each map's vertical line, or in other words the tuning of its generator, has been optimized using some formula to distribute the detuning amongst the three primes; that's why you do not see any vertical line here for which the closest step counts for each prime are all on one side of it.]]
If entries in the row for prime 2 are defined as 1 unit apart, then entries in the row for prime 3 are 1/log₂3 units apart, and entries in the row for prime 5 are 1/log₂5 units apart. So, near line-ups don’t happen all that often!<ref>For more information, see: [[The_Riemann_zeta_function_and_tuning|The Riemann zeta function and tuning]].</ref> (By the way, any vertical line drawn through a chart like this is called a GPV, or “[[generalized patent val]]”; I think the association with "[[patent val]]" is confused, and "patent" isn't a good word for it in the first place, and I would prefer to characterize it as a “generatable map” myself.)
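If you want to find these lined-up maps yourself, one approach is simply to round n·log₂p for each prime, which yields the map whose step counts are each as close as possible to just when prime 2 is exactly n steps. A sketch in plain Python (the helper name <code>default_map</code> is my own):

```python
import math

def default_map(n, primes=(2, 3, 5)):
    """Closest step count to each prime when prime 2 is exactly n steps
    (the map sometimes called the "patent val")."""
    return [round(n * math.log2(p)) for p in primes]

print(default_map(12))  # [12, 19, 28]
print(default_map(17))  # [17, 27, 39]
print(default_map(53))  # [53, 84, 123]
```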
Well, you’ll notice that in the previous section, we did approximate the octave, using 1.998 instead of 2. But another thing {{val|12 19 28}} has going for it is that it excels at approximating 5-limit JI even if we constrain ourselves to pure octaves, locking g¹² to exactly 2: (¹²√2)¹⁹ ≈ 2.997 and (¹²√2)²⁸ ≈ 5.040. You can see that actually the approximation of 3 is even better here, marginally; it’s the damage to 5 which is lamentable.
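These pure-octave figures are easy to reproduce yourself, since the generator is now exactly the twelfth root of 2 (plain Python):

```python
g = 2 ** (1 / 12)  # pure-octave 12-ET generator: the twelfth root of 2

print(g ** 12)  # exactly 2 (up to floating point)
print(g ** 19)  # ~2.997, the pure-octave approximation of prime 3
print(g ** 28)  # ~5.040, the pure-octave approximation of prime 5
```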
When we don’t enforce pure octaves, tuning becomes a more interesting problem. Approximating all three primes at once with the same generator is a balancing act. At least one of the primes will be tuned a bit sharp while at least one of them will be tuned a bit flat. In the case of {{val|12 19 28}}, the 5 is a bit sharp, and the 2 and 3 are each a tiny bit flat ''(as you can see in Figure 2c)''.
[[File:Why not just srhink every block.png|thumb|left|600px|'''Figure 2e.''' Visualization of the pointlessness of tuning all primes sharp (or all flat, as you could imagine)]]
If you think about it, you would never want to tune all the primes sharp at the same time, or all of them flat; if you care about this particular proportion of their tunings, why wouldn’t you shift them all in the same direction, toward accuracy, while maintaining that proportion? ''(see Figure 2e)''
This matter of choosing the exact generator for a map is called '''tuning''', and if you’ll believe it, we won’t actually talk about that in detail again until much later. Tempering — the second ‘T’ in “RTT” — is the discipline concerned with choosing an interesting map, and tuning can remain largely independent from it. The temperament is only concerned with the fact that — no matter what exact size you ultimately make the generator — it is the case e.g. that 12 of them make a 2, 19 of them make a 3, and 28 of them make a 5. So, for now, whenever we show a value for g, assume we’ve given a computer a formula for optimizing the tuning to approximate all three primes equally well. As for us humans, let’s stay focused on tempering.
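Such an optimization can be surprisingly small in code. Here is a hedged sketch of one possible formula, an unweighted least-squares fit of the generator to all three primes in log space; this is my own illustrative choice, not necessarily the formula behind the numbers quoted in this article:

```python
import math

def least_squares_generator(m, primes=(2, 3, 5)):
    """Generator g minimizing the sum over primes of (m_i*log2(g) - log2(p_i))^2.
    Setting the derivative to zero gives the closed form
    log2(g) = sum(m_i * log2(p_i)) / sum(m_i^2)."""
    logs = [math.log2(p) for p in primes]
    log2_g = sum(mi * li for mi, li in zip(m, logs)) / sum(mi * mi for mi in m)
    return 2 ** log2_g

g = least_squares_generator([12, 19, 28])
print(g)        # ~1.0593, a hair under the pure-octave generator 2**(1/12)
print(g ** 12)  # slightly flat of 2
print(g ** 28)  # slightly sharp of 5
```

Note that this simple fit reproduces the qualitative balance described above: 2 and 3 come out a touch flat while 5 comes out sharp.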
If your goal is to evoke JI-like harmony, then, {{val|12 20 28}} is not your friend. Feel free to work out some other variations on {{val|12 19 28}} if you like, such as {{val|12 19 29}} maybe, but I guarantee you won’t find a better one that starts with 12 than {{val|12 19 28}}.
[[File:17-ET mistunings.png|thumb|600px|right|'''Figure 2f.''' Detunings of various 17-ET maps, showing how the supposed "patent" val's total error can be improved upon by allowing flexible octaves and second-closest mappings of primes. Also shows that the pure octave 17c is improper insofar as all primes are detuned in one direction.]]
So the case is cut-and-dried for {{val|12 19 28}}, and therefore from now on I'm simply going to refer to this ET by "12-ET" rather than spelling out its map. But other ETs find themselves in trickier situations. Consider [[17edo|17-ET]]. One option we have is the map {{val|17 27 39}}, with a generator of about 1.0418, and prime approximations of 2.0045, 3.0177, and 4.9302. But we have a second reasonable option here, too, where {{val|17 27 40}} gives us a generator of 1.0414, and prime approximations of 1.9929, 2.9898, and 5.0659. In either case, the approximations of 2 and 3 are close, but the approximation of 5 is way off. For {{val|17 27 39}}, it’s way small, while for {{val|17 27 40}} it’s way big.
The conundrum could be described like this: any generator we could find that divides 2 into about 17 equal steps can do a good job dividing 3 into about 27 equal steps, too, but it will not do a good job of dividing 5 into equal steps; 5 is going to land, unfortunately, right about in the middle between the 39th and 40th steps, as far as possible from either of these two nearest approximations. To do a good job approximating prime 5, we’d really just want to subdivide each of these steps in half, or in other words, we’d want [[34edo|34-ET]].
[[File:17-ET.png|thumb|400px|left|'''Figure 2g.''' Visualization of the 17-ETs on the GPV continuum, showing how the pure octave 17c is a "lie" insofar as there exists no generator that exactly reaches prime 2 in 17 steps while more closely approximating prime 5 as 40 steps than 39 steps.]]
Curiously, {{val|17 27 39}} is the map for which each prime individually is as closely approximated as possible when prime 2 is exact, so it is in a sense the naively best map for 17-ET. However, if that constraint is lifted, and we’re allowed to detune prime 2 and/or choose the next-closest approximation for prime 5, the overall approximation can be improved; in other words, even though 39 steps can take you just a tiny bit closer to prime 5 than 40 steps can, the tiny amount by which it is closer is less than the improvement to the tuning of primes 2 and 3 you can get by using {{val|17 27 40}}. So again, the choice is not always cut-and-dried; there’s still a lot of personal preference involved in the tempering process.
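You can quantify the pure-octave side of this trade-off in cents; a sketch in plain Python (the helper name <code>cents_error</code> is my own):

```python
import math

def cents_error(n_steps, edo, p):
    """Error in cents of n_steps of an edo relative to just prime p,
    assuming pure octaves (each step is exactly 1200/edo cents)."""
    return n_steps * 1200 / edo - 1200 * math.log2(p)

# 17-ET with pure octaves: prime 5 lands near the midpoint of steps 39 and 40
print(cents_error(39, 17, 5))  # ~-33.4 cents (flat)
print(cents_error(40, 17, 5))  # ~+37.2 cents (sharp)
print(cents_error(27, 17, 3))  # ~+3.9 cents: prime 3, by contrast, is close
```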
=== intro to PTS ===
[[File:Pts-2-3-5-e2-twtop-tlin.jpg|center|thumb|800px|'''Figure 3a.''' 5-limit projective tuning space]]
This is 5-limit [[projective tuning space]], or PTS for short ''(see Figure 3a)''. This diagram was created by RTT pioneer [[Paul Erlich]]. It compresses a huge amount of valuable information into a small space. If at first it looks overwhelming or insane, do not despair. It may not be instantly easy to understand, but once you learn the tricks for navigating it from these materials, you will find it is very powerful. Perhaps you will even find patterns in it which others haven’t found yet.
I suggest you open this diagram in another window and keep it open as you proceed through these next few sections, as we will be referring to it frequently.
[[File:JI scale 2.png|thumb|right|150px|'''Figure 3b.''' Just an example JI scale]]
If you’ve worked with 5-limit JI before, you’re probably aware that it is three-dimensional. You’ve probably reasoned about it as a 3D lattice, where one axis is for the factors of prime 2, one axis is for the factors of prime 3, and one axis is for the factors of prime 5. This way, you can use vectors, such as {{monzo|-4 4 -1}} or {{monzo|1 -2 1}}, just like coordinates.
PTS can be thought of as a projection of 5-limit JI map space, which similarly has one axis each for 2, 3, and 5. But it is no JI pitch lattice. In fact, in a sense, it is the opposite! This is because the coordinates in map space aren’t prime count lists, but maps, such as {{val|12 19 28}}. That particular map is seen here as the biggish, slightly tilted numeral 12 just to the left of the center point.
[[File:PTS with axes.png|300px|left|thumb|'''Figure 3c.''' PTS, with axes]]
And the two 17-ETs we looked at can be found here too. {{val|17 27 40}} is the slightly smaller numeral 17 found on the line labeled “meantone” which the 12 is also on, thus representing the fact we mentioned earlier that they both temper out the meantone comma. The other 17, {{val|17 27 39}}, is found on the other side of the center point, aligned horizontally with the first 17. So you could say that map space plots ETs, showing how they are related to each other.
Of course, PTS looks nothing like this JI lattice ''(see Figure 3b)''. This diagram has a ton more information, and as such, Paul needed to get creative about how to structure it. It’s a little tricky, but we’ll get there. For starters, the axes are not actually shown on the PTS diagram; if they were, they would look like this ''(see Figure 3c)''.
The 2-axis points toward the bottom right, the 3-axis toward the top right, and the 5-axis toward the left. These are the positive halves of each of these axes; we don’t need to worry about the negative halves of any of them, because every term of every ET map is positive.
=== scaled axes ===
You might guess that to arrive at that tilted numeral 12, you would start at the origin in the center, move 12 steps toward the bottom right (along the 2-axis), 19 steps toward the top right (not along, but parallel to the 3-axis), and then 28 steps toward the left (parallel to the 5-axis). And if you guessed this, you’d probably also figure that you could perform these moves in any order, because you’d arrive at the same ending position regardless ''(see Figure 3d)''.
[[File:PTS with finding 12-ET.png|400px|right|thumb|'''Figure 3d.''' arriving at 12-ET by moving in any of the 6 possible axis orders (Note: this is a visualization of an early guess at how things work. They're different and more complicated than this. Keep reading!)]]
If you did guess this, you are on the right track, but the full truth is a bit more complicated than that.
The first difference to understand is that each axis’s steps have been scaled proportionally according to their prime ''(see Figure 3e)''. To illustrate this, let’s highlight an example ET and compare its position with the positions of three other ETs:
# the one which is one step away from it on the 5-axis,
# the one which is one step away from it on the 3-axis, and
# the one which is one step away from it on the 2-axis.
[[File:Shape_of_scale_of_movements_on_axes.png|thumb|left|200px|'''Figure 3e.''' the basic shape the scaled axes make between neighbor maps (maps with only 1 difference between their terms)]]
Our example ET will be 40. We'll start out at the map {{val|40 63 93}}. This map is a default of sorts for 40-ET, because it’s the map where all three terms are as close as possible to JI when prime 2 is exact (sometimes unfortunately called a "[[patent val]]", as referenced earlier).
Finally, let’s move by a single step on the 2-axis, from 40 to 41, moving to the map {{val|41 63 93}}, which unsurprisingly is in the direction the 2-axis points. This move actually takes us off the chart, way down here.
[[File:40-ET distances example.png|400px|right|thumb|'''Figure 3f.''' Distances between 40-ET's neighbors in PTS]]
Now let’s observe the difference in distances ''(see Figure 3f)''. Notice how the distance between the maps separated by a change in the 5-term is the smallest, the maps separated by a change in the 3-term have the medium-sized distance, and the maps separated by a change in the 2-term have the largest distance. This tells us that steps along the 3-axis are larger than steps along the 5-axis, and steps along the 2-axis are larger still. The relationship between these sizes is that the 3-axis step has been divided by the binary logarithm of 3, written log₂3, which is approximately 1.585, while the 5-axis step has been divided by the binary logarithm of 5, written log₂5, which is approximately 2.322. The 2-axis step can also be thought of as having been divided by the binary logarithm of its prime, but because log₂2 is exactly 1, and dividing by 1 does nothing, the scaling has no effect on the 2-axis.
The reason Paul chose this particular scaling scheme is that it causes those ETs which are closer to JI to appear closer to the center of the diagram (and this is a useful property to organize ETs by). How does this work? Well, let’s look into it.
Remember that near-just ETs have maps whose terms are in close proportion to log(2:3:5). ET maps use only integers, so they can only approximate this ideal, but a theoretical pure JI map would be {{val|log₂2 log₂3 log₂5}}. If we scaled this theoretical JI map by this scaling scheme, then, we’d get 1:1:1, because we’re just dividing things by themselves: log₂2/log₂2:log₂3/log₂3:log₂5/log₂5 = 1:1:1. This tells us that we should find this theoretical JI map at the point arrived at by moving exactly the same amount along the 2-axis, 3-axis, and 5-axis. Well, if we tried that, these three movements would cancel each other out: we’d draw an equilateral triangle and end up exactly where we started, at the origin, or in other words, at pure JI.
Any other ET approximating but not exactly log(2:3:5) will be scaled to proportions not exactly 1:1:1, but approximately so, like maybe 1:0.999:1.002, and so you’ll move in something close to an equilateral triangle, but not exactly, and land in some interesting spot that’s not quite in the center. In other words, we scale the axes this way so that we can compare the maps not in absolute terms, but in terms of what direction and by how much they deviate from JI ''(see Figure 3g)''.
[[File:Scaling.png|600px|right|thumb|'''Figure 3g.''' a visualization of how scaling axes illuminates deviations from JI]]
For example, let’s scale our 12-ET example:
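To see the arithmetic of this scaling, here is a small Python sketch (not part of Paul’s diagram itself; the function name is just for illustration). We divide each map term by the log₂ of its prime and compare the resulting proportions:

```python
import math

def scale_map(m, primes=(2, 3, 5)):
    # Divide each map term by log2 of its prime, per the axis-scaling scheme
    return [term / math.log2(p) for term, p in zip(m, primes)]

# The theoretical pure JI map scales to exactly 1:1:1...
ji = scale_map([math.log2(2), math.log2(3), math.log2(5)])
print(ji)  # [1.0, 1.0, 1.0]

# ...while 12-ET's map scales to proportions only approximately 1:1:1:
et12 = scale_map([12, 19, 28])
print([round(term / et12[0], 4) for term in et12])  # [1.0, 0.999, 1.0049]
```

So 12-ET lands very close to, but not exactly at, the center of the diagram.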
The truth about distances between related ETs on the PTS diagram is actually even slightly more complicated than that, though; as we mentioned, the scaled axes are only the first difference from our initial guess. In addition to the effect of the scaling of the axes, there is another effect, which is like a perspective effect. Basically, as ETs get more complex, you can think of them as getting farther and farther away; to suggest this, they are printed smaller and smaller on the page, and the distances between them appear smaller and smaller too.
Remember that 5-limit JI is 3D, but we’re viewing it on a 2D page. It’s not the case that its axes are flat on the page. They’re not literally occupying the same plane, 120° apart from each other. That’s just not how axes normally work, and it’s not how they work here either! The 5-axis is perpendicular to the 2-axis and 3-axis just like normal Cartesian space. Again, we’re looking only at the positive coordinates, which is to say that this is only the [https://en.wikipedia.org/wiki/Octant_(solid_geometry) +++ octant] of space, which comes to a point at the origin (0,0,0) like the corner of a cube. So you should think of this diagram as showing that cubic octant sticking its corner straight out of the page at us, like a triangular pyramid.
So we’re like a tiny little bug, situated right at the tip of that corner, pointing straight down the octant’s interior diagonal, or in other words the line equidistant from the three axes, the line which we understand represents theoretically pure JI. So we see that in the center of the page, represented as a red hexagram, and then toward the edges of the page is our peripheral vision. ''(See Figure 3h.)''
[[File:Understanding projection.png|600px|thumb|left|'''Figure 3h.''' Visualization of the projection process. (In real life, the cube is infinite in size. I made it smaller here to help make the shape clearer.)]]
PTS doesn’t show the entire tuning cube. You can see evidence of this in the fact that some numerals have been cut off on its edges. We’ve cropped things around the central region of information, which is where the ETs best approximating JI are found (note how close 53-ET is to the center!). Paul added some concentric hexagons to the center of his diagram, which you could think of as concentric around that interior diagonal, or in other words, as defined by gradually increasing thresholds of deviation from pure JI for any one prime at a time.
Okay, but what about the perspective effect? Right. So every step further away on any axis, then, appears a bit smaller than the previous step, because it’s just a bit further away from us. And how much smaller? Well, the perspective effect is such that, as seen on this diagram, the distances between n-ETs are twice the size of the distances between 2n-ETs.
Moreover, there’s a special relationship between the positions of n-ETs and 2n-ETs, and indeed between n-ETs and 3n-ETs, 4n-ETs, etc. To understand why, it’s instructive to plot it out ''(see Figure 3i)''.
[[File:Hiding vals.png|500px|thumb|right|'''Figure 3i.''' redundant maps hiding behind their simpler counterparts]]
For simplicity, we’re looking at the octant cube here from the angle straight on to the 2-axis, so changes to the 2-terms don’t matter here. In the bottom left is the origin; that’s the point at the center of PTS. Close by, we can see the map {{val|3 5 7}}, and two closely related maps {{val|3 4 7}} and {{val|3 5 8}}. Colored lines have been drawn from the origin through these points to the black line in the top right, which represents the page; this portrays where on the page these points would appear to be, if our eye is at that origin.
Now, to find a 6-ET with anything new to bring to the table, we’ll need to find one whose terms don’t share a common factor. That’s not hard. We’ll just take one of the ones halfway between the ones we just looked at. How about {{val|6 11 14}}, which is halfway between {{val|6 10 14}} and {{val|6 12 14}}? Notice that the purple line that runs through it lands halfway between the red and blue lines on the page. Similarly, {{val|6 10 15}} is halfway between {{val|6 10 14}} and {{val|6 10 16}}, and its yellow line appears halfway between the red and green lines on the page. What this is demonstrating is that halfway between any pair of n-ETs on the diagram, whether this pair is separated along the 3-axis or 5-axis, you will find a 2n-ET. We can’t really demonstrate this with 3-ET and 6-ET on the diagram, because those ETs are too inaccurate; they’ve been cropped off. But if we return to our 40-ET example, that will work just fine.
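In val arithmetic, “halfway between” on the diagram corresponds to a termwise sum: the point midway between two n-ET maps is where their sum, a 2n-ET map, projects. Here’s a minimal Python sketch (the particular 40-ET vals in the second example are my own illustrative choices, not taken from the diagram):

```python
def midpoint_val(m1, m2):
    # The point midway between two n-ET vals on the page is where their
    # termwise sum -- a 2n-ET val -- projects
    return [a + b for a, b in zip(m1, m2)]

# Halfway between the two 3-ET vals from the figure sits a 6-ET val:
print(midpoint_val([3, 5, 7], [3, 4, 7]))        # [6, 9, 14]

# Halfway between two 40-ET vals (differing in their 3-term) sits an 80-ET val:
print(midpoint_val([40, 63, 93], [40, 64, 93]))  # [80, 127, 186]
```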
[[File:Plot of 5 10 20 40 80.png|800px|thumb|left|'''Figure 3j.''' Plot of 40-ETs with 80-ETs halfway between each pair, including the contorted 40-ETs hiding behind 20-ET and 10-ET]]
I’ve circled every 40-ET visible in the chart ''(see Figure 3j)''. And you can see that halfway between each one, there’s an [[80edo|80-ET]] too. Well, sometimes it’s not actually printed on the diagram<ref>The reason is that Paul’s diagram, in addition to cutting off beyond 99-ET, also filters out maps that aren’t GPVs.</ref>, but it’s still there. You will also notice that we land right about on top of [[20edo|20-ET]] and [[10edo|10-ET]]. That’s no coincidence! Hiding behind that 20-ET is a redundant 40-ET whose terms are all 2x the 20-ET’s terms, and hiding behind the 10-ET is a redundant 40-ET whose terms are all 4x the 10-ET’s terms (and also a redundant 20-ET, and a [[30edo|30-ET]], [[50edo|50-ET]], [[60edo|60-ET]], etc.)
Also, check out the spot halfway between our two 17-ETs: there’s the 34-ET we briefly mused about earlier, which would solve 17’s problem of approximating prime 5 by subdividing each of its steps in half. We can confirm now that this 34-ET does a superb job at approximating prime 5, because it is almost vertically aligned with the JI red hexagram.
The first key difference to notice is that we can normalize coordinates in tuning space, so that the first term of every coordinate is the same, namely, one octave, or 1200 cents. For example, note that while in map space, {{val|3 5 7}} is located physically in front of {{val|6 10 14}}, in tuning space, these two points collapse to literally the same point, {{val|1200 2000 2800}}. This can be helpful in a similar way to how the scaled axes of PTS help us visually compare maps’ proximity to the central JI spoke: the coordinates are now expressed more nearly in terms of their deviation from JI, so we can more immediately compare maps to each other, as well as directly to the pure JI primes, as long as we memorize the cents values of those (they’re 1200, 1901.955, and 2786.314).
For example, in map space, it may not be immediately obvious that {{val|6 9 14}} is halfway between {{val|3 5 7}} and {{val|3 4 7}}, but in tuning space it is immediately obvious that {{val|1200 1800 2800}} is halfway between {{val|1200 2000 2800}} and {{val|1200 1600 2800}}.
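This normalization is simple enough to sketch in Python (the function name is just for illustration): scale every val so its first term becomes exactly 1200 cents.

```python
def to_tuning_space(val):
    # Normalize so the first term (the octave) is exactly 1200 cents
    return [1200 * term / val[0] for term in val]

print(to_tuning_space([3, 5, 7]))    # [1200.0, 2000.0, 2800.0]
print(to_tuning_space([6, 10, 14]))  # [1200.0, 2000.0, 2800.0] -- same point!
print(to_tuning_space([6, 9, 14]))   # [1200.0, 1800.0, 2800.0]
```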
So if we take a look at a cross-section of projection again, but in terms of tuning space now ''(see Figure 3k)'', we can see how every point is about the same distance from us.
[[File:Tuning space version.png|300px|thumb|right|'''Figure 3k.''' Demonstration of projection in terms of ''tuning'' space (compare with Figure 3i).]]
The other major difference is that tuning space is continuous, where map space is discrete. In other words, to find a map between {{val|6 10 14}} and {{val|6 9 14}}, you’re subdividing it by 2 or 3 and picking a point in between, that sort of thing. But between {{val|1200 2000 2800}} and {{val|1200 1800 2800}} you’ve got an infinitude of choices smoothly transitioning between each other; you’ve basically got knobs you can turn on the proportions of the tuning of 2, 3, and 5. Everything from {{val|1200 1999.999 2800}} to {{val|1200 1901.955 2800}} to {{val|1200 1817.643 2800}} is along the way.
[[File:Tuning projection.png|300px|thumb|left|'''Figure 3l.''' Demonstration of tuning projection.]]
But perhaps even more interesting than this continuous tuning space that appears in PTS between points is the continuous tuning space that does not appear in PTS because it exists within each point, that is, exactly out from and deeper into the page at each point. In tuning space, as we’ve just established, there are no maps in front of or behind each other that get collapsed to a single point. But many things still do get collapsed to a single point like this; it’s just that in tuning space, they are different tunings ''(see Figure 3l)''. For example, {{val|1200 1900 2800}} is the way we’d write 12-ET in tuning space. But there are other tunings represented by this same point in PTS, such as {{val|1200.12 1900.19 2800.28}} (note that in order to remain at the same point, we’ve maintained the exact proportions of all the prime tunings). That tuning might not be of particular interest; I just used it as a simple example to illustrate the point. A more useful example would be {{val|1198.440 1897.531 2796.361}}, which by some algorithm is the optimal tuning for 12-ET (minimizing damage across primes or intervals); it may not be as obvious from looking at that one, but if you check the proportions of those terms with each other, you will find they are still exactly 12:19:28.
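We can verify that last claim with a couple of lines of Python, using the tuning values quoted above:

```python
optimal_12 = [1198.440, 1897.531, 2796.361]  # the optimized 12-ET tuning above

# Dividing each term by the matching term of <12 19 28] gives a (nearly)
# constant cents-per-step value, confirming the 12:19:28 proportions:
print([round(t / s, 3) for t, s in zip(optimal_12, [12, 19, 28])])
# [99.87, 99.87, 99.87]
```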
The key point here is that, as we mentioned before, the problems of tuning and tempering are largely separate. PTS projects all tunings of the same temperament to the same point. This way, issues of tuning are completely hidden and ignored on PTS, so we can focus instead on tempering.
“[[Rank]]” has a slightly different meaning than dimension, but that’s not important yet. We’ll define rank, and discuss what exactly a rank-2 or -3 temperament means later. For now, it’s enough to know that each temperament line on this 5-limit PTS diagram is defined by tempering out a comma which has the same name. For now, we’re still focusing on visually how to navigate PTS. So the natural thing to wonder next, then, is what’s up with the slopes of all these temperament lines?
[[File:Diagrams to understand PTS for RTT.png|thumb|left|400px|'''Figure 4a.''' How the tempered comma affects slope on PTS. A temperament defined by a comma with a 0 for a prime will be perpendicular to that prime's axis, because the tuning of that prime does not affect whether or not the comma is tempered out. Therefore the prime corresponding to the 0 in the comma is represented by x, which can be anything, while the proportion between the other two primes must remain fixed.]]
Let’s begin with a simple example: the perfectly horizontal line that runs through just about the middle of the page, through the numeral 12, labelled “[[compton]]”. What’s happening along this line? Well, as we know, moving to the left means tuning 5 sharper, and moving to the right means tuning 5 flatter. But what about 2 and 3? Well, they are changing as well: 2 is sharp in the bottom right, and 3 is sharp in the top right, so when we move exactly rightward, 2 and 3 are both getting sharper (though not as directly as 5 is getting flatter). But the critical thing to observe here is that 2 and 3 are sharpening at the exact same rate. Therefore the approximations of primes 2 and 3 are in a constant ratio with each other along horizontal lines like this. Said another way, if you look at the 2- and 3-terms of any ET map on this line, the ratio between them will be the same for every map.
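One way to see this numerically: compton’s comma is the Pythagorean comma {{monzo|-19 12 0}} (that identification is standard, though I’m supplying it here rather than taking it from the text above), and a map lies on the compton line exactly when its dot product with that comma is zero, which happens exactly when its 2- and 3-terms are in the ratio 12:19. A quick Python sketch:

```python
def tempers_out(val, comma):
    # A val tempers out a comma iff their dot product is zero
    return sum(v * c for v, c in zip(val, comma)) == 0

pythagorean_comma = [-19, 12, 0]  # compton's comma (standard identification)

print(tempers_out([12, 19, 28], pythagorean_comma))  # True
print(tempers_out([24, 38, 56], pythagorean_comma))  # True
print(tempers_out([12, 19, 27], pythagorean_comma))  # True: the 5-term is free
print(tempers_out([12, 20, 28], pythagorean_comma))  # False: 12:20, not 12:19
```

Since the comma’s 5-term is 0, the 5-term of the val never matters, which is exactly why the line runs horizontally through every possible tuning of 5.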
There are even temperaments whose comma includes only 3’s and 5’s, such as “[[bug]]” temperament, which tempers out [[27/25]], or {{monzo|0 3 -2}}. If you look on this PTS diagram, however, you won’t find bug. Paul chose not to draw it. There are infinite temperaments possible here, so he had to set a threshold somewhere on which temperaments to show, and bug just didn’t make the cut in terms of how much it distorts harmony from JI. If he had drawn it, it would have been way out on the left edge of the diagram, completely outside the concentric hexagons. It would run parallel to the 2-axis, or from top-left to bottom-right, and it would connect the 5-ET (the huge numeral which is cut off the left edge of the diagram so that we can only see a sliver of it) to the [[9edo|9-ET]] in the bottom left, running through the 19-ET and [[14edo|14-ET]] in-between.
Indeed, these ET maps — {{val|9 14 21}}, {{val|5 8 12}}, {{val|19 30 45}}, and {{val|14 22 33}} — lock the ratio between their 3-terms and 5-terms, in this case to 2:3.
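That locked 2:3 ratio is equivalent to each of these maps sending bug’s comma {{monzo|0 3 -2}} to zero steps, which a few lines of Python can confirm:

```python
def tempers_out(val, comma):
    # A val tempers out a comma iff their dot product is zero
    return sum(v * c for v, c in zip(val, comma)) == 0

bug_comma = [0, 3, -2]  # 27/25

# All four maps on the undrawn bug line temper it out:
for val in ([9, 14, 21], [5, 8, 12], [19, 30, 45], [14, 22, 33]):
    print(val, tempers_out(val, bug_comma))  # each prints True
```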
Those are the three simplest slopes to consider, i.e. the ones which are exactly parallel to the axes ''(see Figure 4a)''. But all the other temperament lines follow a similar principle. Their slopes are a manifestation of the prime factors in their defining comma. If having zero 5’s means you are perfectly horizontal, then having only one 5 means your slope will be close to horizontal, such as meantone {{monzo|-4 4 -1}} or [[helmholtz]] {{monzo|-15 8 1}}. Similarly, magic {{monzo|-10 -1 5}} and [[würschmidt]] {{monzo|17 1 -8}}, having only one 3 apiece, are close to parallel with the 3-axis, while porcupine {{monzo|1 -5 3}} and [[ripple]] {{monzo|-1 8 -5}}, having only one 2 apiece, are close to parallel with the 2-axis.
Think of it like this: for meantone, a change to the mapping of 5 doesn’t make nearly as much of a difference to the outcome as does a change to the mapping of 2 or 3; therefore, changes along the 5-axis don’t have nearly as much of an effect on that line, so it ends up roughly parallel to it.
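This intuition can be made concrete (a Python sketch; the function name is just for illustration): the dot product of a map with meantone’s comma {{monzo|-4 4 -1}} measures how far that map is from tempering it out, and a unit change in the 5-term moves that dot product by only 1, versus 4 for the 3-term.

```python
meantone_comma = [-4, 4, -1]  # 81/80

def comma_steps(val, comma):
    # How many steps the comma maps to: zero means it's tempered out
    return sum(v * c for v, c in zip(val, comma))

print(comma_steps([12, 19, 28], meantone_comma))  # 0: 12-ET tempers out 81/80
print(comma_steps([19, 30, 44], meantone_comma))  # 0: so does 19-ET
# Nudging the 5-term changes the result by only 1; nudging the 3-term, by 4.
# That asymmetry is why meantone's line is close to horizontal:
print(comma_steps([12, 19, 29], meantone_comma))  # -1
print(comma_steps([12, 20, 28], meantone_comma))  # 4
```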
We mentioned that the generator value changes continuously as we move along a temperament line. So just to either side of 12-ET along the meantone line, the tuning of 2, 3, and 5 supports a generator size which in turn supports meantone, but it wouldn’t support augmented. And just to either side of 12-ET along the augmented line, the tuning of 2, 3, and 5 supports a generator which still supports augmented, but not meantone. 12-ET, we could say, is a convergence point between the meantone generator and the augmented generator. But it is not a convergence point because the two generators become identical in 12-ET, but rather because they can both be achieved in terms of 12-ET’s generator. In other words, 5\12 ≠ 4\12, but they are both multiples of 1\12.
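The arithmetic of that last sentence, as a tiny Python check:

```python
step = 1200 / 12  # 12-ET's own generator, 1\12, in cents

five_steps = 5 * step  # 5\12 = 500.0 cents
four_steps = 4 * step  # 4\12 = 400.0 cents

# Distinct sizes, yet both are whole multiples of 1\12:
print(five_steps, four_steps)                             # 500.0 400.0
print(five_steps % step == 0 and four_steps % step == 0)  # True
```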
Here’s a diagram that shows how the generator size changes gradually across each line in PTS. It may seem weird how the same generator size appears in multiple different places across the space. But keep in mind that pretty much any generator is possible pretty much anywhere here. This is simply the generator size pattern you get when you lock the period to exactly 1200 cents, to establish a common basis for comparison. These are called [[Tour_of_Regular_Temperaments#Rank-2_temperaments|linear temperaments]]. This is what enables us to produce maps of temperaments such as the one found at [[Map_of_linear_temperaments|this Xen wiki page]], or this chart here ''(see Figure 4b)''.
[[File:Generator sizes in PTS.png|800px|thumb|'''Figure 4b.''' Generator sizes of linear temperaments in PTS]]
And note that I didn’t break down what’s happening along the blackwood, compton, augmented, dimipent, and some other lines which are labelled on the original PTS diagram. In some cases, it’s just because I got lazy and didn’t want to deal with fitting more numbers on this thing. But in the case of all those that I just listed, it’s because those temperaments all have non-octave periods.
When written with the {{val|}} notation, we’re expressing maps in “covector” form, or in other words, as the opposite of vectors. But we can also think of maps in terms of matrices. If vectors are like matrix columns, maps are like matrix rows. So while we have to write {{monzo|-4 4 -1}} vertically when in matrix form, {{val|19 30 44}} stays horizontal.
[[File:Different nestings.png|400px|thumb|left|'''Figure 5a.''' How to write matrices in terms of either columns/vectors/commas or rows/covectors/maps.]]
We can extend our angle bracket notation (technically called [https://en.wikipedia.org/wiki/Bra%E2%80%93ket_notation bra-ket notation, or Dirac notation]) to handle matrices by nesting rows inside columns, or columns inside rows ''(see Figure 5a)''. For example, we could have written our comma basis like this: {{val|{{monzo|-4 4 -1}} {{monzo|-10 -1 5}}}}. Starting from the outside, the {{val|}} tells us to think in terms of a row. It's just that this covector isn't a covector of numbers, like the ones we've gotten used to by now, but rather a covector of ''columns of'' numbers. So this row houses two such columns. Alternatively, we could have written this same matrix like {{monzo|{{val|-4 -10}} {{val|4 -1}} {{val|-1 5}}}}, but that would obscure the fact that it is the combination of two familiar commas (but that notation ''would'' be useful for expressing a matrix built out of multiple maps, as we will soon see).
Sometimes a comma basis may have only a single comma. That’s okay. A single vector can become a matrix. To disambiguate this situation, you could put the vector inside a covector, like this: {{val|{{monzo|-4 4 -1}}}}. Similarly, a single covector can become a matrix, by nesting inside a vector, like this: {{monzo|{{val|19 30 44}}}}.
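If it helps to see these nestings concretely, here’s a Python sketch representing the earlier examples as plain lists of lists, with commas as matrix columns and a map as a matrix row. The loop at the end checks the relationship that makes this basis and map a matching pair: the map sends every comma in the basis to zero steps.

```python
# <[-4 4 -1> [-10 -1 5>] : commas as matrix columns (3 primes x 2 commas)
comma_basis = [[-4, -10],
               [ 4,  -1],
               [-1,   5]]

# [<19 30 44]> : a single map as a matrix row (1 generator x 3 primes)
mapping = [[19, 30, 44]]

# The map sends each comma in the basis to 0 steps:
for col in range(2):
    comma = [comma_basis[row][col] for row in range(3)]
    print(sum(m * c for m, c in zip(mapping[0], comma)))  # 0, then 0
```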
The next step is to understand our primes in terms of this temperament’s generators. Meantone’s mapping is {{monzo|{{val|1 0 -4}} {{val|0 1 4}}}}. This maps prime 2 to one of the first generator and zero of the second generator. This can be seen plainly by slicing the first column from the matrix; we could even write it as the vector {{monzo|1 0}}. Similarly, this mapping maps prime 3 to zero of the first generator and one of the second generator, or in vector form {{monzo|0 1}}. Finally, this mapping maps prime 5 to negative four of the first generator and four of the second generator, or {{monzo|-4 4}}.
So we could label the nodes with a list of approximations. For example, the node at {{monzo|-4 4}} would be ~5. We could label ~9/8 on {{monzo|-3 2}} just the same as we could label {{monzo|-3 2}} 9/8 in JI; however, here we can also label that node ~10/9, because {{monzo|1 -2 1}} → 1×{{monzo|1 0}} + -2×{{monzo|0 1}} + 1×{{monzo|-4 4}} = {{monzo|1 0}} + {{monzo|0 -2}} + {{monzo|-4 4}} = {{monzo|-3 2}}. Cool, huh? Because conflating 9/8 and 10/9 is a quintessential example of the effect of tempering out the meantone comma ''(see Figure 5b)''.
[[File:Mapping to tempered vector.png|400px|thumb|right|'''Figure 5b.''' Converting from a JI interval vector to a tempered interval vector, with one less rank, conflating intervals related by the tempered out comma.]]
Sometimes it may be more helpful to imagine slicing your mapping matrix the other way, by columns (vectors) corresponding to the different primes, rather than rows (covectors) corresponding to generators.
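A quick way to see both slicings, and the conflation of 9/8 with 10/9, is to treat the mapping as a plain matrix and multiply. Here's a sketch in Python with NumPy (an illustration of the arithmetic above, not a standard RTT library):

```python
import numpy as np

# Meantone's mapping [<1 0 -4] <0 1 4]>: rows are the generator maps,
# columns say how each prime is reached in terms of the two generators.
M = np.array([[1, 0, -4],
              [0, 1,  4]])

# Slicing by columns: prime 5's column is the tempered vector [-4 4>.
print(M[:, 2])                     # [-4  4]

# Mapping JI vectors: 9/8 = [-3 2 0> and 10/9 = [1 -2 1> land on the
# same tempered node [-3 2>, since they differ by the meantone comma.
print(M @ np.array([-3,  2,  0]))  # [-3  2]
print(M @ np.array([ 1, -2,  1]))  # [-3  2]
print(M @ np.array([-4,  4, -1]))  # [0 0]  (the comma itself vanishes)
```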
Let’s review what we’ve seen so far. 5-limit JI is 3-dimensional. When we have a rank-3 temperament of 5-limit JI, 0 commas are tempered out. When we have a rank-2 temperament of 5-limit JI, 1 comma is tempered out. When we have a rank-1 temperament of 5-limit JI, 2 commas are tempered out.<ref>Probably, a rank-0 temperament of 5-limit JI would temper 3 commas out. All I can think a rank-0 temperament could be is a single pitch, or in other words, everything is tempered out. So perhaps in some theoretical sense, a comma basis in 5-limit made out of 3 vectors, thus a square 3×3 matrix, as long as none of the lines are parallel, should minimally represent every interval in the space.</ref>
There’s a straightforward formula here: <span><math>d - n = r</math></span>, where <span><math>d</math></span> is dimensionality, <span><math>n</math></span> is nullity, and <span><math>r</math></span> is rank. We’ve seen every one of those words so far except '''nullity'''. [[Nullity]] simply means the count of commas tempered out, or in other words, the count of commas in a basis for the null-space ''(see Figure 5c)''.
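You can watch <span><math>d - n = r</math></span> fall out of basic linear algebra. Here's a sketch using SymPy (a general-purpose math library, used here only for illustration): dimensionality is the mapping's column count, rank is its matrix rank, and nullity is the size of a basis for its null-space, which for meantone is a single comma proportional to {{monzo|-4 4 -1}}.

```python
from sympy import Matrix

# Meantone's mapping as a 2x3 matrix over 5-limit JI (d = 3).
M = Matrix([[1, 0, -4],
            [0, 1,  4]])

d = M.cols                # dimensionality: one column per prime
r = M.rank()              # rank: number of independent generator rows
nullspace = M.nullspace()
n = len(nullspace)        # nullity: independent commas tempered out

assert d - n == r         # 3 - 1 = 2

# The null-space basis vector is (a multiple of) the meantone comma:
comma = nullspace[0]
assert list(M * comma) == [0, 0]  # the mapping sends it to zero
```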
So far, everything we’ve done has been in terms of 5-limit, which has dimensionality of 3. Before we generalize our knowledge upwards, into the 7-limit, let’s take a look at how things work one step downwards, in the simpler direction: the 3-limit, which is only 2-dimensional.
But now you shouldn’t be afraid even of 11-limit or beyond. 11-limit is 5D. So if you temper 2 commas there, you’ll have a rank-3 temperament.
[[File:Mapping and comma basis dnr.png|400px|thumb|right|'''Figure 5c.''' The relationship between dimensionality d, rank r, and nullity n]]
=== beyond the 5-limit ===
As you can see from the 2.15.7 example, you don't even have to use primes. Simple and common examples of this situation are the 2.9.5 or the 2.3.25 groups, where you're targeting powers of the same prime, rather than combinations of different primes.
And these are no longer ''JI'' groups, of course, but you can even use irrationals, like the 2.π.5.7 group! The sky is the limit. Whatever you choose, though, this core structural rule <span><math>d - n = r</math></span> holds strong ''(see Figure 5d)''.
The order you list the pitches you're approximating with your temperament is not standardized; generally you increase them in size from left to right, though as you can see from the 2.9.5 and 2.15.7 examples above, it can often be less surprising to list the numbers in prime limit order instead. Whatever order you choose, the important thing is that you stay consistent about it, because that's the only way any of your vectors and covectors are going to match up correctly!
[[File:Temperaments by rnd.png|400px|thumb|left|'''Figure 5d.''' Some temperaments by dimension, rank, and nullity]]
Alright, here’s where things start to get pretty fun. 7-limit JI is 4D. We can no longer refer to our 5-limit PTS diagram for help. Maps and vectors here will have four terms, the new fourth term being for prime 7. So the map for 12-ET here is {{val|12 19 28 34}}.
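Where does a map like {{val|12 19 28 34}} come from? Each term is just the count of EDO steps closest to the corresponding prime. Here's a sketch of that calculation in Python (the function name <code>patent_map</code> is my own label for this illustration):

```python
from math import log2

def patent_map(edo, primes=(2, 3, 5, 7)):
    """Round each prime's size, measured in EDO steps, to the nearest step."""
    return [round(edo * log2(p)) for p in primes]

print(patent_map(12))             # [12, 19, 28, 34]  (7-limit map for 12-ET)
print(patent_map(19, (2, 3, 5)))  # [19, 30, 44]      (5-limit map for 19-ET)
```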
# We can convert between multicovectors and multivectors using an operation called “taking the '''complement'''”<ref>Elsewhere on the wiki you may find the complement operation called "taking [[the dual]]", or even the dual of a multicovector being called simply "the dual". An alternate name for "taking the complement" may be "taking the Hodge dual", but in these materials, I am using the dual to refer to the general case, while the specific case of the dual of a multicovector is a multivector and the operation to get from one of these to its dual is called taking the complement (whereas to get to the dual of a mapping, which is a comma basis, the operation is called taking the null-space).</ref>, which basically involves reversing the order of terms and negating some of them.
[[File:Algebra notation.png|300px|thumb|right|'''Figure 6a.''' RTT bracket notation comparison.]]
To demonstrate these points, let’s first calculate the multivector from a comma basis, and then confirm it by calculating the same multivector as the complement of its dual multicovector.
What does that mean? Who cares? The motivation here is that it’s a good thing to be able to generate the entire lattice. We may be looking for JI intervals we could use as generators for our temperament, and if so, we need to know what primes to build them out of so that we can make full use of the temperament. So this tells us that if we try to build generators out of primes 2 and 3, we will succeed in generating <span><math>\frac 11</math></span>, or in other words all, of the tempered lattice. Whereas if we try to build the generators out of primes 2 and 5, or 3 and 5, we will fail; we will only be able to generate <span><math>\frac 14</math></span> of the lattice. In other words, prime 5 is the bad seed here; it messes things up.
[[File:Generating lattice (2) (2).png|thumb|left|400px|'''Figure 6b.''' Visualization of how primes 2 and 3 are capable of generating the entire tempered lattice for meantone, whether as generators 2/1 and 3/1, or 2/1 and 3/2]]
It’s easy to see why this is the case if you know how to visualize it on the tempered lattice. Let’s start with the happy case: primes 2 and 3. Prime 2 lets us move one step up (or down). Prime 3 lets us move one step right (or left). Clearly, with these two primes, we’d be able to reach any node in the lattice. We could do it with generators 2/1 and 3/1, in the most straightforward case. But we can also do it with 2/1 and 3/2: that just means one generator moves us down and to the right (or the opposite), and the other moves us straight up by one (or the opposite) ''(see Figure 6b)''. 2/1 and 4/3 works too: one moves us to the left and up two (or… you get the idea) and the other moves us straight up by one. Heck, even 3/2 and 4/3 work; try it yourself.
[[File:Generating lattice (2) (1).png|thumb|right|400px|'''Figure 6c.''' Visualization of how neither primes 2 and 5 nor 3 and 5 are capable of generating the entire tempered lattice for meantone; they can only generate <span><math>\frac 14</math></span> of it]]
But now try it with only prime 5 and one of primes 2 or 3. Prime 5 takes you over 4 in both directions. But if you have only prime 2 otherwise, then you can only move up or down from there, so you’ll only cover every fourth vertical line through the tempered lattice. Or if you only had prime 3 otherwise, then you could only move left and right from there; you’d only cover every fourth horizontal line ''(see Figure 6c)''.
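You can compute these fractions directly: each pair of primes corresponds to a 2×2 minor of the mapping (one term of the multicovector {{val|rank=2|1 4 4}}), and that pair generates 1/|minor| of the tempered lattice. Here's a sketch in Python with NumPy (an illustration of the claim above, not a standard tool):

```python
import numpy as np
from itertools import combinations

# Meantone's mapping; pick two columns (two primes), take the determinant.
M = np.array([[1, 0, -4],
              [0, 1,  4]])
primes = [2, 3, 5]

minors = {}
for i, j in combinations(range(3), 2):
    minors[(primes[i], primes[j])] = round(np.linalg.det(M[:, [i, j]]))

print(minors)  # {(2, 3): 1, (2, 5): 4, (3, 5): 4}
# Primes 2 and 3 generate the whole lattice (1/1);
# 2 and 5, or 3 and 5, generate only 1/4 of it.
```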
One day you might come across a multicovector which has a term equal to zero. If you tried to interpret this term using the information here so far, you'd think it must generate <span><math>\frac 10</math></span>th of the tempered lattice. That's not easy to visualize or reason about. Does that mean it generates essentially infinity lattices? No, not really. More like the opposite. The question itself is somewhat undefined here. If anything, it's more like that combination of primes generates approximately ''none'' of the lattice.

Because in this situation, the combination of primes whose multicovector term is zero generates so little of the tempered lattice that it's completely missing an entire dimension of it; the amount it generates is infinitesimal. For example, the 11-limit temperament 7&12&31 has multicovector {{val|rank=3|0 1 1 4 4 -8 4 4 -12 -16}} and mapping {{monzo|{{val|1 0 -4 0 -12}} {{val|0 1 4 0 8}} {{val|0 0 0 1 1}}}}; we can see from this how primes <span><math>(2,3,5)</math></span> can only generate a rank-2 cross-section of the full rank-3 lattice, because while 2 and 3 do the trick of generating that rank-2 part (exactly as they do in 5-limit meantone), prime 5 doesn't bring anything to the table here so that's all we get.
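We can check that zero term the same way, under the assumption (which matches the meantone example above) that this rank-3 multicovector's terms are the ordered 3×3 minors of the mapping. A sketch in Python with NumPy:

```python
import numpy as np
from itertools import combinations

# The 7&12&31 mapping over the 11-limit (d = 5, r = 3).
M = np.array([[1, 0, -4, 0, -12],
              [0, 1,  4, 0,   8],
              [0, 0,  0, 1,   1]])

# One 3x3 minor per triple of primes, in combination order:
# (2,3,5), (2,3,7), (2,3,11), (2,5,7), (2,5,11), ...
terms = [round(np.linalg.det(M[:, list(cols)]))
         for cols in combinations(range(5), 3)]

print(terms)  # [0, 1, 1, 4, 4, -8, 4, 4, -12, -16]
# The first term, for primes (2, 3, 5), is 0: those primes span only
# a rank-2 slice of the rank-3 lattice, as described above.
```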
=== big RTT examples chart ===
[[File:RTT clean 3.png|800px|center|thumb|'''Figure 7a.''' Diagram of core RTT concepts.]]
=== tuning ===