Douglas Blumeyer's RTT How-To

Until stated otherwise, this material will assume the [[5-limit|5 prime-limit]].


If you’ve previously worked with JI, you may already be familiar with vectors. Vectors are a compact way to express JI intervals in terms of their prime factorization, or in other words, their harmonic building blocks. In JI, and in most contexts in RTT, vectors simply list the count of each prime, in order. For example, 16/15 is {{vector|4 -1 -1}} because it has four 2’s in its numerator, one 3 in its denominator, and also one 5 in its denominator. You can look at each term as an exponent: 2⁴ × 3⁻¹ × 5⁻¹ = 16/15.
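If you’d like to double-check a vector against its ratio, a few lines of Python will do it (Python is just my choice of tool here, and the helper name <code>vector_to_ratio</code> is made up for this sketch):

```python
from fractions import Fraction

def vector_to_ratio(vector, primes=(2, 3, 5)):
    """Rebuild a JI ratio from its prime-count vector.

    A sketch: the prime basis defaults to the 5 prime-limit
    assumed in this section; each entry is an exponent.
    """
    ratio = Fraction(1)
    for prime, count in zip(primes, vector):
        ratio *= Fraction(prime) ** count
    return ratio

print(vector_to_ratio([4, -1, -1]))  # 16/15
```

Running it on {{vector|4 -1 -1}} gives back 16/15, exactly as described above.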


And if you’ve previously worked with EDOs, you may already be familiar with covectors. Covectors are a compact way to express EDOs in terms of how many of their steps it takes to reach their approximation of each prime harmonic, in order. For example, 12-EDO is {{map|12 19 28}}. The first term is the same as the name of the EDO, because the first prime harmonic is 2/1, or in other words: the octave. So this covector tells us that it takes 12 steps to reach 2/1 (the [[octave]]), 19 steps to reach 3/1 (the [[tritave]]), and 28 steps to reach 5/1 (the [[pentave]]). Any or all of those intervals may be approximate.
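You can recover these map entries yourself by measuring each prime in steps of the EDO and rounding to the nearest whole step. A Python sketch (the name <code>nearest_step_map</code> is mine, and as we’ll see later, this naive nearest-step choice isn’t always the best one):

```python
import math

def nearest_step_map(edo, primes=(2, 3, 5)):
    # Measure each prime's position in EDO steps (edo * log2(prime)),
    # then round to the nearest whole step.
    return [round(edo * math.log2(p)) for p in primes]

print(nearest_step_map(12))  # [12, 19, 28]
```

Try it with 53 as well, and you’ll get the {{map|53 84 123}} we meet later in this section.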


If the musical structure that the mathematical structure called a vector represents is an '''interval''', the musical structure that the mathematical structure called a covector represents is called a '''map'''.


Note the different direction of the brackets between covectors and vectors: covectors {{map|}} point left, vectors {{vector|}} point right.


[[File:Map and vector.png|500px|thumb|right|'''Figure 2a.''' Mapping example]]


Covectors and vectors give us a way to bridge JI and EDOs. If the vector gives us a list of primes in a JI interval, and the covector tells us how many steps it takes to reach the approximation of each of those primes individually in an EDO, then when we put them together, we can see what step of the EDO should give the closest approximation of that JI interval. We say that the JI interval '''maps''' to that number of steps in the EDO. Calculating this looks like {{map|12 19 28}}{{vector|4 -1 -1}}, and all that means is to multiply matching terms and sum the results (this is called the dot product).


So, 16/15 maps to one step in 12-EDO ''(see Figure 2a)''.


For another example, we can quickly find the fifth size for 12-EDO from its map, because 3/2 is {{vector|-1 1 0}}, and so {{map|12 19 28}}{{vector|-1 1 0}} = (12 × -1) + (19 × 1) + (28 × 0) = 7. Similarly, the major third — 5/4, or {{vector|-2 0 1}} — is simply 28 - 12 - 12 = 4.
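If you’d like to verify these dot products, here’s a tiny Python sketch (the function name <code>map_interval</code> is just mine):

```python
def map_interval(m, v):
    # The dot product: multiply matching terms and sum the results.
    return sum(mi * vi for mi, vi in zip(m, v))

m12 = [12, 19, 28]
print(map_interval(m12, [4, -1, -1]))  # 16/15 maps to 1 step
print(map_interval(m12, [-1, 1, 0]))   # 3/2 (the fifth) maps to 7 steps
print(map_interval(m12, [-2, 0, 1]))   # 5/4 (the major third) maps to 4 steps
```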


WolframAlpha's syntax is slightly different from what we use in RTT, but it's pretty good for a free online tool capable of handling most of the math we need to do in RTT, so we're going to be supplementing several topics with WolframAlpha examples as we go. Here's the first:
Here’s where things start to get really interesting.


We can also see that the JI interval 81/80 maps to zero steps in 12-EDO, because {{map|12 19 28}}{{vector|-4 4 -1}} = 0; we therefore say this JI interval '''vanishes''' in 12-EDO, or that it is '''[[Tempered out|tempered out]]'''. This type of JI interval is called a '''[[comma]]''', and this particular one is called the [[meantone comma]].


The immediate conclusion is that 12-EDO is not equipped to approximate the meantone comma directly as a melodic or harmonic interval, and this shouldn’t be surprising, because 81/80 is only around 21.5¢, while the (smallest) step in 12-EDO is nearly five times that.


But a more interesting way to think about this result involves treating {{vector|-4 4 -1}} not as a single interval, but as the end result of moving by a combination of intervals. For example, moving up four fifths, 4 × {{vector|-1 1 0}} = {{vector|-4 4 0}}, and then moving down one pentave {{vector|0 0 -1}}, gets you right back where you started in 12-EDO. Or, in other words, moving by one pentave is the same thing as moving by four fifths ''(see Figure 2b)''. One can make compelling music that [[Keenan's comma pump page|exploits such harmonic mechanisms]].
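Here’s that combination worked out in Python, as a quick sketch (the names are mine):

```python
def map_interval(m, v):
    # Dot product of a map and a vector.
    return sum(mi * vi for mi, vi in zip(m, v))

fifth, pentave = [-1, 1, 0], [0, 0, 1]
# Up four fifths, then down one pentave:
comma = [4 * f - p for f, p in zip(fifth, pentave)]
print(comma)                              # [-4, 4, -1], i.e. 81/80
print(map_interval([12, 19, 28], comma))  # 0 steps: it vanishes in 12-EDO
```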


From this perspective, the disappearance of 81/80 is not a shortcoming, but a fascinating feature of 12-EDO; we say that 12-EDO '''supports''' the meantone temperament. And 81/80 in 12-EDO is only the beginning of that journey. For many people, tempering commas is one of the biggest draws to RTT.


But we’re still only talking about JI and EDOs. If you’re familiar with meantone as a historical temperament, you may be aware already that it is neither JI nor an EDO. Well, we’ve got a ways to go yet before we get there.


One thing we can easily begin to do now, though, is this: refer to EDOs instead as ETs, or equal temperaments. The two terms are [[EDO_vs_ET|roughly synonymous]], but have different implications and connotations. To put it briefly, the difference can be found in the names: 12 '''E'''qual '''D'''ivisions of the '''O'''ctave suggests only that your goal is equally dividing the octave, while 12 '''E'''qual '''T'''emperament suggests that your goal is to temper and that you have settled on a single equal step to accomplish that. Because we’re learning about temperament theory here, it would be more appropriate and accurate to use the local terminology. 12-ET it is, then.
=== approximating JI ===


If you’ve seen one map before, it’s probably {{map|12 19 28}}. That’s because this map is the foundation of conventional Western tuning: [[12edo|12 equal temperament]]. A major reason it stuck is because — for its low complexity — it can closely approximate all three of the 5 prime-limit harmonics 2, 3, and 5 at the same time.


One way to think of this is that 12:19:28 is an excellent low integer approximation of log(2:3:5). That's a really compact way of saying that each of these sets of three numbers has the same ratio between each pair of them:
So when I say 12:19:28 ≈ log(2:3:5) what I’m saying is that there is indeed some shared generator g for which log<sub>g</sub>2 ≈ 12, log<sub>g</sub>3 ≈ 19, and log<sub>g</sub>5 ≈ 28 are all good approximations at the same time, or, equivalently, a shared generator g for which g¹² ≈ 2, g¹⁹ ≈ 3, and g²⁸ ≈ 5 are all good approximations at the same time ''(see Figure 2c)''. And that’s a pretty cool thing to find! To be clear, with g ≈ 1.059, we get g¹² ≈ 1.9982, g¹⁹ ≈ 2.9923, and g²⁸ ≈ 5.0291.
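If you want to hunt for such a generator yourself, one simple choice — not the only one, and not necessarily the method behind the decimals quoted above — is a least-squares fit to the primes in log space. A Python sketch:

```python
import math

def least_squares_generator(m, primes=(2, 3, 5)):
    # Fit one generator g so that g**m[i] ≈ primes[i] for every i,
    # by least squares on the logarithms: minimize
    # sum((m[i]*ln(g) - ln(primes[i]))**2) over ln(g).
    num = sum(k * math.log(p) for k, p in zip(m, primes))
    den = sum(k * k for k in m)
    return math.exp(num / den)

g = least_squares_generator([12, 19, 28])
print(round(g, 4))          # ~1.0593
print(g**12, g**19, g**28)  # all close to 2, 3, and 5 at once
```

Other optimization schemes (weighted, minimax, pure-octave-constrained, and so on) give slightly different generators, which is why the exact decimals vary from source to source.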


Another glowing example is the map {{map|53 84 123}}, for which a good generator will give you g⁵³ ≈ 2.0002, g⁸⁴ ≈ 3.0005, g¹²³ ≈ 4.9974. This speaks to historical attention given to [[53edo|53-ET]]. So while 53:84:123 is an even better approximation of log(2:3:5) (and [https://en.xen.wiki/images/a/a2/Generalized_Patent_Vals.png you won’t find a better one] until 118:187:274), of course its integers aren’t as low, so that lessens its appeal.


Why is this rare? Well, it’s like a game of trying to get these numbers to line up ''(see Figure 2d)'':
If the distances between entries in the row for prime 2 are defined as 1 unit apart, then the entries in the row for prime 3 are 1/log₂3 units apart, and those for prime 5 are 1/log₂5 units apart. So, near-line-ups don’t happen all that often!<ref>For more information, see: [[The_Riemann_zeta_function_and_tuning|The Riemann zeta function and tuning]].</ref> (By the way, any vertical line drawn through a chart like this is called a GPV, or “[[generalized patent val]]”.)


And why is this cool? Well, if {{map|12 19 28}} approximates the harmonic building blocks well individually, then JI intervals built out of them, like 16/15, 5/4, 10/9, etc. should also be reasonably well-approximated overall, and thus recognizable as their JI counterparts in musical context. You could think of it like taking all the primes in a prime factorization and swapping in their approximations. For example, if 16/15 = 2⁴ × 3⁻¹ × 5⁻¹ ≈ 1.067, and {{map|12 19 28}} approximates 2, 3, and 5 by 1.059¹² ≈ 1.998, 1.059¹⁹ ≈ 2.992, and 1.059²⁸ ≈ 5.029, respectively, then {{map|12 19 28}} maps 16/15 to 1.998⁴ × 2.992⁻¹ × 5.029⁻¹ ≈ 1.059, which is indeed pretty close to 1.067. Of course, we should also note that 1.059 is the same as one step of {{map|12 19 28}}, which checks out with the calculation we made in the previous section that the best approximation of 16/15 in {{map|12 19 28}} would be 1 step.
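Here’s that substitution carried out in Python, using the rounded approximations quoted above:

```python
# Swap each prime in 16/15's factorization for its tempered value
# (the three-decimal approximations from the text):
two, three, five = 1.998, 2.992, 5.029

tempered = two**4 * three**-1 * five**-1
print(round(tempered, 3))        # ~1.059, i.e. one step of the tuning
print(round(2**4 / (3 * 5), 3))  # just 16/15 ≈ 1.067, for comparison
```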


=== tuning & pure octaves ===
Now, because the octave is the [[interval of equivalence]] in terms of human pitch perception, it’s a major convenience to enforce pure octaves, and so many people prefer the first term to be exact. In fact, I’ll bet many readers have never even heard of or imagined impure octaves, if my own anecdotal experience is any indicator; the idea that I could temper octaves to optimize tunings came rather late to me.


Well, you’ll notice that in the previous section, we did approximate the octave, using 1.998 instead of 2. But another thing {{map|12 19 28}} has going for it is that it excels at approximating 5-limit JI even if we constrain ourselves to pure octaves, locking g¹² to exactly 2: (¹²√2)¹⁹ ≈ 2.997 and (¹²√2)²⁸ ≈ 5.040. You can see that actually the approximation of 3 is even better here, marginally; it’s the damage to 5 which is lamentable.
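You can confirm those pure-octave figures in a couple of lines:

```python
# With the octave locked pure, the generator is exactly 2**(1/12),
# and primes 3 and 5 land wherever 19 and 28 steps put them:
g = 2 ** (1 / 12)
print(round(g**19, 3))  # ~2.997
print(round(g**28, 3))  # ~5.040
```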


When we don’t enforce pure octaves, tuning becomes a more interesting problem. Approximating all three primes at once with the same generator is a balancing act. At least one of the primes will be tuned a bit sharp while at least one of them will be tuned a bit flat. In the case of {{map|12 19 28}}, the 5 is a bit sharp, and the 2 and 3 are each a tiny bit flat ''(as you can see in Figure 2c)''.


[[File:Why not just srhink every block.png|thumb|left|600px|'''Figure 2e.''' Visualization of the pointlessness of tuning all primes sharp (you should be able to imagine the opposite case, where all primes are tuned flat). To be completely accurate, there may be cases where tuning all the primes sharp (or pure) is not pointless, depending on which combinations of primes you use in your pitches and, in particular, which sides of the fraction bar they're on: if they are on opposite sides, their temperings may be proportional, and thus damage cancels out rather than compounds. But in general, this diagram sends the right message.]]
=== a multitude of maps ===


Suppose we want to experiment with {{map|12 19 28}} a bit. We’ll change one of the terms by 1, so now we have {{map|12 20 28}}. Because the previous map did such a great job of approximating the 5-limit (i.e. log(2:3:5)), though, it should be unsurprising that this new map cannot achieve that feat. The proportions, 12:20:28, should now be about as out of whack as they can get. The best generator we can find here is about 1.0583 (getting a little more precise now), and 1.0583¹² ≈ 1.9738 which isn’t so bad, but 1.0583²⁰ ≈ 3.1058 and 1.0583²⁸ ≈ 4.8870 which are both way off! And they’re way off in opposite directions — 3.1058 is too big and 4.8870 is too small — which is why our tuning formula for g, which is designed to make the approximation good for every prime at once, can’t improve the situation: either sharpening or flattening helps one but hurts the other.
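You can check these figures yourself, using the rounded generator of 1.0583 (note that with this map, prime 3 is reached at 20 steps, not 19):

```python
# Approximations of the primes under the map ⟨12 20 28] with g ≈ 1.0583:
g = 1.0583
print(round(g**12, 4))  # ~1.9738: prime 2, tuned a bit flat
print(round(g**20, 4))  # ~3.1058: prime 3, tuned way sharp
print(round(g**28, 4))  # ~4.8870: prime 5, tuned way flat
```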


The results of such inaccurate approximation are a bit chaotic. A ratio like 16/15 — where the factors of 3 and 5 are on the same side of the fraction bar and therefore cancel out each other’s damage — fares relatively alright, if by “alright” we mean it gets tempered out despite being about 112¢ in JI. On the other hand, an interval like 27/25 where the factors of 3 and 5 are on opposite sides of the fraction bar and thus their damages compound, gets mapped to a whopping 4 steps, despite only being about 133¢ in JI.


If your goal is to evoke JI-like harmony, then, {{map|12 20 28}} is not your friend. Feel free to work out some other variations on {{map|12 19 28}} if you like, such as {{map|12 19 29}} maybe, but I guarantee you won’t find a better one that starts with 12 than {{map|12 19 28}}.


[[File:17-ET mistunings.png|thumb|600px|right|'''Figure 2f.''' Deviations from JI for various 17-ET maps, showing how the supposed "patent" val's total error can be improved upon by allowing tempered octaves and second-closest mappings of primes. It also shows how pure octave 17c has no primes tuned flat.]]


So the case is cut-and-dry for {{map|12 19 28}}, and therefore from now on I'm simply going to refer to this ET by "12-ET" rather than spelling out its map. But other ETs find themselves in trickier situations. Consider [[17edo|17-ET]]. One option we have is the map {{map|17 27 39}}, with a generator of about 1.0418, and prime approximations of 2.0045, 3.0177, 4.9302. But we have a second reasonable option here, too, where {{map|17 27 40}} gives us a generator of 1.0414, and prime approximations of 1.9929, 2.9898, and 5.0659. In either case, the approximations of 2 and 3 are close, but the approximation of 5 is way off. For {{map|17 27 39}}, it’s way small, while for {{map|17 27 40}} it’s way big. The conundrum could be described like this: any generator we could find that divides 2 into about 17 equal steps can do a good job dividing 3 into about 27 equal steps, too, but it will not do a good job of dividing 5 into equal steps; 5 is going to land, unfortunately, right about in the middle between the 39th and 40th steps, as far as possible from either of these two nearest approximations. To do a good job approximating prime 5, we’d really just want to subdivide each of these steps in half, or in other words, we’d want [[34edo|34-ET]].
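You can see exactly why prime 5 causes the trouble by measuring where each prime lands, in 17ths of an octave:

```python
import math

# Where does each prime land, measured in steps of 17-ET?
for p in (2, 3, 5):
    print(p, round(17 * math.log2(p), 2))
# prime 2 lands at exactly 17.0 steps, prime 3 at ~26.94 (close to 27),
# but prime 5 at ~39.47: almost exactly halfway between the 39th and
# 40th steps, hence the two rival maps.
```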


[[File:17-ET.png|thumb|400px|left|'''Figure 2g.''' Visualization of the 17-ETs on the GPV continuum, showing how for the pure octave 17c there exists no generator that exactly reaches prime 2 in 17 steps while more closely approximating prime 5 as 40 steps than 39 steps. (If this diagram is unclear, please refer back to Figure 2d., which has the same type of information but with more thorough labelling.)]]


Curiously, {{map|17 27 39}} is the map for which each prime individually is as closely approximated as possible when prime 2 is exact, so it is in a sense the naively best map for 17-ET. However, if that constraint is lifted, and we’re allowed to temper prime 2 and/or choose the next-closest approximation for prime 5, the overall approximation can be improved; in other words, even though 39 steps can take you just a tiny bit closer to prime 5 than 40 steps can, the tiny amount by which it is closer is less than the improvements to the tuning of primes 2 and 3 you can get by using {{map|17 27 40}}. So again, the choice is not always cut-and-dry; there’s still a lot of personal preference going on in the tempering process.


So some musicians may conclude “17-ET is clearly not cut out for 5-limit music,” and move on to another ET. Other musicians may snicker maniacally, and choose one or the other map, and begin exploiting the profound and unusual 5-limit harmonic mechanisms it affords. {{map|17 27 40}}, like {{map|12 19 28}}, tempers out the meantone comma {{vector|-4 4 -1}}, so even though fifths and major thirds are different sizes in these two ETs, the relationship that four fifths equals one major third is shared. {{map|17 27 39}}, on the other hand, does not work like that, but what it does do is temper out 25/24, {{vector|-3 -1 2}}, or in other words, it equates one fifth with two major thirds.


If you’re enforcing pure octaves, the difference between {{map|17 27 39}} and {{map|17 27 40}} is nominal, or contextual. The steps in either case are identical: exactly ¹⁷√2, or 1200/17 ≈ 70.588¢. You simply choose to think of 5 as being approximated by either 39 or 40 of those steps, or imply it in your composition. But when octaves are freed to temper, then the difference between these two maps becomes pronounced. When optimizing for {{map|17 27 39}}, the best step size is more like 70.820¢ (stretching the steps to pull the 39th closer to prime 5), but when optimizing for {{map|17 27 40}}, the best step size is more like 70.225¢.
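Here’s the pure-octave arithmetic in cents, showing just how evenly prime 5 splits the difference between the two candidate step counts:

```python
import math

# Pure-octave 17-ET step size, and where 39 vs 40 of those steps
# land relative to a just 5/1 (all in cents):
step = 1200 / 17
print(round(step, 3))                 # ~70.588
print(round(39 * step, 1))            # ~2752.9
print(round(40 * step, 1))            # ~2823.5
print(round(1200 * math.log2(5), 1))  # just 5/1: ~2786.3, near the midpoint
```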

You will sometimes see maps like 17-ET’s distinguished from each other using names like 17p and 17c. This is called [[wart notation]].
[[File:JI scale 2.png|thumb|right|150px|'''Figure 3b.''' Just an example JI scale]]

If you’ve worked with 5-limit JI before, you’re probably aware that it is three-dimensional. You’ve probably reasoned about it as a 3D lattice, where one axis is for the factors of prime 2, one axis is for the factors of prime 3, and one axis is for the factors of prime 5. This way, you can use vectors, such as {{vector|-4 4 -1}} or {{vector|1 -2 1}}, just like coordinates.
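
Treating each term as the exponent on its prime (as with the 16/15 example earlier), these coordinates convert straightforwardly back to ratios. A minimal sketch:

```python
from fractions import Fraction

def vector_to_ratio(v, primes=(2, 3, 5)):
    """Multiply out each prime raised to its vector term."""
    r = Fraction(1)
    for p, e in zip(primes, v):
        r *= Fraction(p) ** e
    return r

print(vector_to_ratio([4, -1, -1]))  # 16/15
print(vector_to_ratio([-4, 4, -1]))  # 81/80, the meantone comma
print(vector_to_ratio([1, -2, 1]))   # 10/9
```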

PTS can be thought of as a projection of 5-limit JI map space, which similarly has one axis each for 2, 3, and 5. But it is no JI pitch lattice. In fact, in a sense, it is the opposite! This is because the coordinates in map space aren’t prime count lists, but maps, such as {{map|12 19 28}}. That particular map is seen here as the biggish, slightly tilted numeral 12 just to the left of the center point.

[[File:PTS with axes.png|300px|left|thumb|'''Figure 3c.''' PTS, with axes]]

And the two 17-ETs we looked at can be found here too. {{map|17 27 40}} is the slightly smaller numeral 17 found on the line labeled “meantone” which the 12 is also on, thus representing the fact we mentioned earlier that they both temper it out. The other 17, {{map|17 27 39}}, is found on the other side of the center point, aligned horizontally with the first 17. So you could say that map space plots ETs, showing how they are related to each other.

Of course, PTS looks nothing like this JI lattice ''(see Figure 3b)''. This diagram has a ton more information, and as such, Paul needed to get creative about how to structure it. It’s a little tricky, but we’ll get there. For starters, the axes are not actually shown on the PTS diagram; if they were, they would look like this ''(see Figure 3c)''.
The 2-axis points toward the bottom right, the 3-axis toward the top right, and the 5-axis toward the left. These are the positive halves of each of these axes; we don’t need to worry about the negative halves of any of them, because every term of every ET map is positive.

And so it makes sense that {{map|17 27 40}} and {{map|17 27 39}} are aligned horizontally, because the only difference between their maps is in the 5-term, and the 5-axis is horizontal.

[[File:Shape_of_scale_of_movements_on_axes.png|thumb|left|200px|'''Figure 3e.''' The basic shape the scaled axes make between neighbor maps (maps with only 1 difference between their terms)]]

Our example ET will be 40. We'll start out at the map {{map|40 63 93}}. This map is a default of sorts for 40-ET, because it’s the map where all three terms are as close as possible to JI when prime 2 is exact (sometimes unfortunately called a "[[patent val]]", which is related to the generalized patent val concept referenced earlier).
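
This default can be computed by rounding n × log₂(p) to the nearest whole number of steps for each prime p. A sketch (the function name is mine):

```python
from math import log2

def default_map(n, primes=(2, 3, 5)):
    """Map each prime to the nearest whole number of steps of n-ET,
    with prime 2 mapped exactly to n steps."""
    return [round(n * log2(p)) for p in primes]

print(default_map(40))  # [40, 63, 93]
print(default_map(12))  # [12, 19, 28]
print(default_map(17))  # [17, 27, 39]; swapping 39 for 40 gives the other 17-ET map
```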

From here, let’s move by a single step on the 5-axis by adding 1 to the 5-term of our map, from 93 to 94, therefore moving to the map {{map|40 63 94}}. This map is found directly to the left. This makes sense because the orientation of the 5-axis is horizontal, and the positive direction points out from the origin toward the left, so increases to the 5-term move us in that direction.

Back from our starting point, let’s move by a single step again, but this time on the 3-axis, by adding 1 to the 3-term of our map, from 63 to 64, therefore moving to the map {{map|40 64 93}}. This map is found up and to the right. Again, this direction makes sense, because it’s the direction the 3-axis points.

Finally, let’s move by a single step on the 2-axis, from 40 to 41, moving to the map {{map|41 63 93}}, which unsurprisingly is in the direction the 2-axis points. This move actually takes us off the chart, way down here.

[[File:40-ET distances example.png|400px|right|thumb|'''Figure 3f.''' Distances between 40-ET's neighbors in PTS]]
The reason Paul chose this particular scaling scheme is that it causes those ETs which are closer to JI to appear closer to the center of the diagram (and this is a useful property to organize ETs by). How does this work? Well, let’s look into it.

Remember that near-just ETs have maps whose terms are in close proportion to log(2:3:5). ET maps use only integers, so they can only approximate this ideal, but a theoretical pure JI map would be {{map|log₂2 log₂3 log₂5}}. If we scaled this theoretical JI map by this scaling scheme, we’d get 1:1:1, because we’d just be dividing things by themselves: log₂2/log₂2 : log₂3/log₂3 : log₂5/log₂5 = 1:1:1. This tells us that we should find this theoretical JI map at the point arrived at by moving exactly the same amount along the 2-axis, 3-axis, and 5-axis. Well, if we tried that, these three movements would cancel each other out: we’d draw an equilateral triangle and end up exactly where we started, at the origin, or in other words, at pure JI. Any other ET map approximates log(2:3:5) without matching it exactly, so it scales to proportions that are only approximately 1:1:1, like maybe 1:0.999:1.002; you’ll move in something close to an equilateral triangle, but not exactly, and land in some interesting spot that’s not quite in the center. In other words, we scale the axes this way so that we can compare the maps not in absolute terms, but in terms of what direction and by how much they deviate from JI ''(see Figure 3g)''.

[[File:Scaling.png|600px|right|thumb|'''Figure 3g.''' a visualization of how scaling axes illuminates deviations from JI]]
Clearly, 12:11.988:12.059 is quite close to 1:1:1. This checks out with our knowledge that it is close to JI, at least in the 5-limit.

But if instead we picked some random alternate mapping of 12-ET, like {{map|12 23 25}}, it may not be obvious from those integer terms alone how close to JI this map is. However, upon scaling them:

* 12/log₂2 = 12
* 23/log₂3 ≈ 14.511
* 25/log₂5 ≈ 10.767
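
A sketch of this scaling, dividing each map term by the base-2 log of its prime:

```python
from math import log2

def scaled(m, primes=(2, 3, 5)):
    """Divide each map term by log2 of its prime; near-JI maps
    come out in nearly equal proportions."""
    return [round(mi / log2(p), 3) for mi, p in zip(m, primes)]

print(scaled([12, 19, 28]))  # [12.0, 11.988, 12.059]: close to equal
print(scaled([12, 23, 25]))  # [12.0, 14.511, 10.767]: nowhere near
```
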
[[File:Hiding vals.png|500px|thumb|right|'''Figure 3i.''' Redundant maps hiding behind their simpler counterparts. The eye is the origin; the same as in Figure 3h. Projective tuning space is the plane resting at the bottom that we are projecting onto. The portion we see in the Middle Path version is only a tiny part right in the middle. The dotted lines just above where the PTS plane is drawn are there to indicate the elision of an infinitude of space; potentially you could go way up to insanely large ETs and they would all be between the origin-eye and this projective plane.]]

For simplicity, we’re looking at the octant cube here from the angle straight on to the 2-axis, so changes to the 2-terms don’t matter here. At the top is the origin; that’s the point at the center of PTS. Close by, we can see the map {{map|3 5 7}}, and two closely related maps {{map|3 4 7}} and {{map|3 5 8}}. Colored lines have been drawn from the origin through these points to the black line in the top-right, which represents the page; this portrays where on the page these points would appear to be if our eye were at that origin.

In between where the colored lines touch the maps themselves and the page, we see a cluster of more maps, each of which starts with 6. In other words, these maps are about twice as far away from us as the others. Let’s consider {{map|6 10 14}} first. Notice that each of its terms is exactly 2x the corresponding term in {{map|3 5 7}}. In effect, {{map|6 10 14}} is redundant with {{map|3 5 7}}. If you imagine doing a mapping calculation or two, you can easily convince yourself that you’ll get the same answer as if you’d just done it with {{map|3 5 7}} instead and then simply divided by 2 one time at the end. It behaves in the exact same way as {{map|3 5 7}} in terms of the relationships between the intervals it maps, the only difference being that it needlessly includes twice as many steps to do so, never using every other one. So we don’t really care about {{map|6 10 14}}. Which is great, because it’s hidden exactly behind {{map|3 5 7}} from where we’re looking.
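
A sketch of such a mapping calculation, using 3/2 = {{vector|-1 1 0}} as the example interval (my own choice):

```python
def steps(m, v):
    """How many steps map m assigns to the interval with vector v."""
    return sum(mi * vi for mi, vi in zip(m, v))

fifth = [-1, 1, 0]  # 3/2
print(steps([3, 5, 7], fifth))    # 2
print(steps([6, 10, 14], fifth))  # 4: exactly double, nothing new
```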

The same is true of the map pair {{map|3 4 7}} and {{map|6 8 14}}, as well as of {{map|3 5 8}} and {{map|6 10 16}}. Any map whose terms have a common divisor other than 1 is going to be redundant in this sense, and therefore hidden. You can imagine that even further past {{map|3 5 7}} you’ll find {{map|9 15 21}}, {{map|12 20 28}}, and so on; these are called contorted maps<ref>On some versions of PTS which Paul prepared, these contorted ETs are actually printed on the page.</ref>. More on those later. What’s important to realize here is that Paul found a way to collapse 3 dimensions worth of information down to 2 dimensions without losing anything important. Each of these lines connecting redundant ETs has been [https://en.wikipedia.org/wiki/Projection_(mathematics) projected] onto the page as a single point. That’s why the diagram is called "projective" tuning space.

Now, to find a 6-ET with anything new to bring to the table, we’ll need to find one whose terms don’t share a common factor. That’s not hard. We’ll just take one of the ones halfway between the ones we just looked at. How about {{map|6 11 14}}, which is halfway between {{map|6 10 14}} and {{map|6 12 14}}? Notice that the purple line that runs through it lands halfway between the red and blue lines on the page. Similarly, {{map|6 10 15}} is halfway between {{map|6 10 14}} and {{map|6 10 16}}, and its yellow line appears halfway between the red and green lines on the page. What this is demonstrating is that halfway between any pair of n-ETs on the diagram, whether this pair is separated along the 3-axis or 5-axis, you will find a 2n-ET. We can’t really demonstrate this with 3-ET and 6-ET on the diagram, because those ETs are too inaccurate; they’ve been cropped off. But if we return to our 40-ET example, that will work just fine.
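
Arithmetically, the 2n-ET at the midpoint of a pair of n-ET maps is just their elementwise sum (which may itself reduce, if it turns out contorted). A sketch:

```python
from functools import reduce
from math import gcd

def midpoint_et(m1, m2):
    """Sum two n-ET maps elementwise to get the 2n-ET map at their
    midpoint, dividing out any common factor."""
    s = [a + b for a, b in zip(m1, m2)]
    g = reduce(gcd, s)
    return [x // g for x in s]

print(midpoint_et([40, 63, 93], [40, 63, 94]))  # [80, 126, 187], an 80-ET
print(midpoint_et([6, 10, 14], [6, 12, 14]))    # reduces back down to [6, 11, 14]
```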

[[File:Plot of 5 10 20 40 80.png|800px|thumb|left|'''Figure 3j.''' Plot of 40-ETs with 80-ETs halfway between each pair, including the contorted 40-ETs hiding behind 20-ET and 10-ET]]
=== map space vs. tuning space ===

So far, we’ve been describing PTS as a projection of map space, which is to say that we’ve been thinking of maps as the coordinates. We should be aware that tuning space is a slightly different structure. In tuning space, coordinates are not maps, but tunings, specified in cents, octaves, or some other unit of pitch. So a coordinate might be {{map|6 10 14}} in map space, but {{map|1200 2000 2800}} in tuning space.

Both tuning space and map space project to the identical result as seen in Paul’s diagram, which is how we’ve been able to get away without distinguishing them thus far.
Why did I do this to you? Well, I decided map space was conceptually easier to introduce than tuning space. Paul himself prefers to think of this diagram as a projection of tuning space, however, so I don’t want to leave this material before clarifying the difference. Also, there are different helpful insights you can get from thinking of PTS as tuning space. Let’s consider those now.

The first key difference to notice is that we can normalize coordinates in tuning space, so that the first term of every coordinate is the same, namely, one octave, or 1200 cents. For example, note that while in map space, {{map|3 5 7}} is located physically in front of {{map|6 10 14}}, in tuning space, these two points collapse to literally the same point, {{map|1200 2000 2800}}. This can be helpful in a similar way to how the scaled axes of PTS help us visually compare maps’ proximity to the central JI spoke: the coordinates are now expressed more nearly in terms of their deviation from JI, so we can more immediately compare maps to each other, as well as directly to the pure JI primes, as long as we memorize the cents values of those (they’re 1200, 1901.955, and 2786.314). For example, in map space, it may not be immediately obvious that {{map|6 9 14}} is halfway between {{map|3 5 7}} and {{map|3 4 7}}, but in tuning space it is immediately obvious that {{map|1200 1800 2800}} is halfway between {{map|1200 2000 2800}} and {{map|1200 1600 2800}}.
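
A sketch of this normalization, rescaling any map so its first term becomes exactly 1200 cents:

```python
def tuning_coords(m, octave=1200):
    """Tuning-space coordinates of a map: scale so prime 2 gets one octave."""
    return [octave * mi / m[0] for mi in m]

print(tuning_coords([3, 5, 7]))    # [1200.0, 2000.0, 2800.0]
print(tuning_coords([6, 10, 14]))  # [1200.0, 2000.0, 2800.0]: same point
print(tuning_coords([6, 9, 14]))   # [1200.0, 1800.0, 2800.0]
```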

So if we take a look at a cross-section of projection again, but in terms of tuning space now ''(see Figure 3k)'', we can see how every point is about the same distance from us.
[[File:Tuning space version.png|400px|thumb|right|'''Figure 3k.''' Demonstration of projection in terms of ''tuning'' space (compare with Figure 3i, which shows projection in terms of ''map'' space). As you can see here, all the points are in about the same region of space, since tuning space normalizes nearby JI.]]

The other major difference is that tuning space is continuous, where map space is discrete. In other words, to find a map between {{map|6 10 14}} and {{map|6 9 14}}, you’re subdividing by 2 or 3 and picking a point in between, that sort of thing. But between {{map|1200 2000 2800}} and {{map|1200 1800 2800}} you’ve got an infinitude of choices smoothly transitioning between each other; you’ve basically got knobs you can turn on the proportions of the tuning of 2, 3, and 5. Everything from {{map|1200 1999.999 2800}} to {{map|1200 1901.955 2800}} to {{map|1200 1817.643 2800}} is along the way.

[[File:Tuning projection.png|400px|thumb|left|'''Figure 3l.''' Demonstration of tuning projection. As long as the tunings change in a fixed proportion, the tuning will project to the same point on PTS.]]

But perhaps even more interesting than the continuous tuning space that appears in PTS between points is the continuous tuning space that does not appear in PTS because it exists within each point, that is, exactly out from and deeper into the page at each point. In tuning space, as we’ve just established, there are no maps in front of or behind each other that get collapsed to a single point. But many things still get collapsed to a single point like this; in tuning space, they are different tunings ''(see Figure 3l)''. For example, {{map|1200 1900 2800}} is the way we’d write 12-ET in tuning space. But there are other tunings represented by this same point in PTS, such as {{map|1200.12 1900.19 2800.28}} (note that in order to remain at the same point, we’ve maintained the exact proportions of all the prime tunings). That tuning might not be of particular interest; I just used it as a simple example to illustrate the point. A more useful example would be {{map|1198.440 1897.531 2796.361}}, which by some algorithm is the optimal tuning for 12-ET (minimizing damage across primes or intervals); it may not be as obvious from looking at that one, but if you check the proportions of those terms with each other, you will find they are still exactly 12:19:28.
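
Checking whether a tuning sits at a given map’s point in PTS amounts to checking that every tuned prime is the same multiple of its map term. A sketch (the tolerance is an arbitrary choice of mine):

```python
def same_pts_point(tuning, m, tol=1e-3):
    """True if the tuning's terms are all in the same proportion
    to the map's terms (so both project to the same PTS point)."""
    ratios = [t / mi for t, mi in zip(tuning, m)]
    return max(ratios) - min(ratios) < tol

print(same_pts_point([1200, 1900, 2800], [12, 19, 28]))              # True
print(same_pts_point([1198.440, 1897.531, 2796.361], [12, 19, 28]))  # True
print(same_pts_point([1198.440, 1897.531, 2796.361], [12, 19, 29]))  # False
```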

The key point here is that, as we mentioned before, the problems of tuning and tempering are largely separate. PTS projects all tunings of the same temperament to the same point. This way, issues of tuning are completely hidden and ignored on PTS, so we can focus instead on tempering.
We’ve shown that ETs with the same number that are horizontally aligned differ in their mapping of 5, and ETs with the same number that are aligned on the 3-axis running bottom left to top right differ in their mapping of 3. These basic relationships can be extrapolated to be understood in a general sense. ETs found in the center-left map 5 relatively big and 2 and 3 relatively small. ETs found in the top-right map 3 relatively big and 2 and 5 relatively small. ETs found in the bottom-right map 2 relatively big and 3 and 5 relatively small. And for each of these three statements, the region on the opposite side maps things in the opposite way.

So: we now know which point is {{map|12 19 28}}, and we know a couple of 17’s, 40’s and a 41. But can we answer in the general case? Given an arbitrary map, like {{map|7 11 16}}, can we find it on the diagram? Well, you may look to the first term, 7, which tells you it’s [[7edo|7-ET]]. There’s only one big 7 on this diagram, so it’s probably that. (You’re right). But that one’s easy. The 7 is huge.


What if I gave you {{map|43 68 100}}? Where’s [[43edo|43-ET]]? I’ll bet you’re still complaining: the map expresses the tempering of 2, 3, and 5 in terms of their shared generator, but it doesn’t tell us directly which primes are sharp and which primes are flat, so how could we know in which region to look for this ET?

The answer to that is, unfortunately: that’s just how it is. It can be a bit of a hunt sometimes. But the chances are, in the real world, if you’re looking for a map or thinking about it, then you probably already have at least some other information about it to help you find it, whether it’s memorized in your head, or you’re reading it off the results page for an automatic temperament search tool.

Probably you have information about the primes’ tempering. Maybe you get lucky and a 43 jumps out at you; even if it’s not the one you’re looking for, you can use what you know about the perspectival scaling, axis directions, and log-of-prime scaling to find other 43’s relative to it.

Or maybe you know which commas {{map|43 68 100}} tempers out, so you can find it along the line for that comma’s temperament.

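If you do know a comma and want to check a candidate map against it, the test is the zero-steps check from the very first section: a map tempers out a comma exactly when the comma maps to zero steps, which is just a dot product. A minimal sketch in Python (the choice of commas here is only for illustration):

```python
def steps(m, comma):
    """How many ET steps the comma spans under map m: the dot product."""
    return sum(mi * ci for mi, ci in zip(m, comma))

m43 = [43, 68, 100]
meantone = [-4, 4, -1]   # 81/80
magic = [-10, -1, 5]     # 3125/3072

print(steps(m43, meantone))  # → 0: 43-ET tempers out meantone
print(steps(m43, magic))     # → 2: 43-ET does not temper out magic
```
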
== linear temperaments ==

For another example, the line on the right side of the diagram running almost vertically, which has the other 17-ET we looked at as well as 10-ET and 7-ET, is labeled “dicot”. So this line represents the dicot temperament, and unsurprisingly all of these ETs temper out the dicot comma.

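We can run the same zero-steps check here. The dicot comma is [[25/24]], or {{vector|-3 -1 2}}; note that the 17-ET map below is my assumption for which 17 sits on the dicot line. A sketch:

```python
def steps(m, comma):
    # Dot product: how many ET steps the comma spans under this map.
    return sum(mi * ci for mi, ci in zip(m, comma))

dicot = [-3, -1, 2]  # 25/24, the dicot comma

# 7-ET, 10-ET, and (assumed) the dicot-line 17-ET
for m in ([7, 11, 16], [10, 16, 23], [17, 27, 39]):
    print(steps(m, dicot))  # → 0 each time: all temper out dicot
```
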
Simply put, lines on PTS are '''temperaments'''. Specifically, they are [[abstract regular temperaments]]. If you are a student of historical temperaments, you may be familiar with e.g. [[quarter-comma meantone]]; to an RTT practitioner, this is actually a specific tuning of the meantone temperament. Meantone is an abstract temperament, which encompasses a range of other possible temperaments and tunings.

If you’re new to RTT, all of the other temperaments besides meantone, like “[[dicot]]”, “[[porcupine]]”, and “[[mavila]]”, are probably unfamiliar, and their names may seem sort of random or bizarre looking. Well, you’re not wrong about the names being random and bizarre. But mathematically and musically, these temperaments are every bit as real and as interesting as meantone. One day you too may compose a piece or write an academic paper about porcupine temperament.

Let’s begin with a simple example: the perfectly horizontal line that runs through just about the middle of the page, through the numeral 12, labelled “[[compton]]”. What’s happening along this line? Well, as we know, moving to the left means tuning 5 sharper, and moving to the right means tuning 5 flatter. But what about 2 and 3? Well, they are changing as well: 2 is sharp in the bottom right, and 3 is sharp in the top right, so when we move exactly rightward, 2 and 3 are both getting sharper (though not as directly as 5 is getting flatter). But the critical thing to observe here is that 2 and 3 are sharpening at the exact same rate. Therefore the approximations of primes 2 and 3 are in a constant ratio with each other along horizontal lines like this. Said another way, if you look at the map of any ET on this line, the ratio between its 2-term and 3-term will be the same.

Let’s grab some samples to confirm. We already know that 12-ET here looks like {{map|12 19}} (I’m dropping the 5-term for now). The 24-ET here looks like {{map|24 38}}, which is simply 2×{{map|12 19}}. The 36-ET here looks like {{map|36 57}} = 3×{{map|12 19}}. And so on. And that’s why we only see multiples of 12 along this line: 12 and 19 are co-prime, so the only maps which could have their terms in the same ratio are multiples of {{map|12 19}}.

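One way to confirm that a set of maps locks its 2-term and 3-term into a single ratio is to reduce each map by its greatest common divisor and see that the same primitive map comes back every time. A sketch:

```python
from math import gcd

# Maps along the compton line (5-terms dropped, as in the text)
line = [[12, 19], [24, 38], [36, 57]]

for a, b in line:
    g = gcd(a, b)
    print([a // g, b // g])  # → [12, 19] every time
```
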
Let’s look at the other perfectly horizontal line on this diagram. It’s found about a quarter of the way down the diagram, and runs through the 10-ET and 20-ET we looked at earlier. This one’s called “[[blackwood]]”. Here, we can see that its ETs are all multiples of 5. In fact, [[5edo|5-ET]] itself is on this line, though we can only see a sliver of its giant numeral off the left edge of the diagram. Again, all of its maps have locked ratios between their mappings of prime 2 and prime 3: {{map|5 8}}, {{map|10 16}}, {{map|15 24}}, {{map|20 32}}, {{map|40 64}}, {{map|80 128}}, etc. You get the idea.

So what do these two temperaments have in common such that their lines are parallel? Well, they’re defined by commas, so why don’t we compare their commas? The compton comma is {{vector|-19 12 0}}, and the blackwood comma is {{vector|8 -5 0}}<ref>Yes, these are the same as the [[Pythagorean comma]] and [[Pythagorean diatonic semitone]], respectively.</ref>. What sticks out about these two commas is that they both have a 5-term of 0. This means that when we ask the question “how many steps does this comma map to in a given ET”, the ET’s mapping of 5 is irrelevant. Whether we check it in {{map|40 63 93}} or {{map|40 63 94}}, the result is going to be the same. So if {{map|40 63 93}} tempers out the blackwood comma, then {{map|40 63 94}} also tempers out the blackwood comma. And if {{map|24 38 56}} tempers out compton, then {{map|24 38 55}} tempers out compton. And so on.

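Here is that irrelevance made concrete: with a zero 5-term in the comma, the map's 5-term is multiplied by zero in the dot product, so it cannot affect the result. A sketch:

```python
def steps(m, comma):
    # Dot product: how many ET steps the comma spans under this map.
    return sum(mi * ci for mi, ci in zip(m, comma))

compton = [-19, 12, 0]  # the Pythagorean comma; its 5-term is zero

# Two maps differing only in their 5-term give the same result:
print(steps([24, 38, 56], compton))  # → 0
print(steps([24, 38, 55], compton))  # → 0
```
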
Similar temperaments can be found which include only 2 of the 3 primes at once. Take “[[augmented]]”, for instance, running from bottom-left to top-right. This temperament is aligned with the 3-axis. This tells us several equivalent things: that relative changes to the mapping of 3 are irrelevant for augmented temperament, that the augmented comma has no 3’s in its prime factorization, and that the ratio of the mappings of 2 and 5 is the same for any ET along this line. Indeed we find that the augmented comma is {{vector|7 0 -3}}, or [[128/125]], which has no 3’s. And if we sample a few maps along this line, we find {{map|12 19 28}}, {{map|9 14 21}}, {{map|15 24 35}}, {{map|21 33 49}}, {{map|27 43 63}}, etc., for which there is no pattern to the 3-term, but the 2- and 5-terms for each are in a 3:7 ratio.

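The same kind of check works for augmented. A sketch verifying both claims at once (zero steps, and the 3:7 lock between the 2- and 5-terms) for a few of these maps:

```python
def steps(m, comma):
    # Dot product: how many ET steps the comma spans under this map.
    return sum(mi * ci for mi, ci in zip(m, comma))

augmented = [7, 0, -3]  # 128/125; note the zero 3-term

maps = [[12, 19, 28], [9, 14, 21], [15, 24, 35], [27, 43, 63]]
for m in maps:
    # Tempered out, and 2-term : 5-term is 3 : 7 (i.e. 7·m[0] == 3·m[2])
    print(steps(m, augmented), 7 * m[0] == 3 * m[2])  # → 0 True each time
```
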
There are even temperaments whose comma includes only 3’s and 5’s, such as “[[bug]]” temperament, which tempers out [[27/25]], or {{vector|0 3 -2}}. If you look on this PTS diagram, however, you won’t find bug. Paul chose not to draw it. There are infinite temperaments possible here, so he had to set a threshold somewhere on which temperaments to show, and bug just didn’t make the cut in terms of how much it distorts harmony from JI. If he had drawn it, it would have been way out on the left edge of the diagram, completely outside the concentric hexagons. It would run parallel to the 2-axis, or from top-left to bottom-right, and it would connect the 5-ET (the huge numeral which is cut off the left edge of the diagram so that we can only see a sliver of it) to the [[9edo|9-ET]] in the bottom left, running through the 19-ET and [[14edo|14-ET]] in-between. Indeed, these ET maps — {{map|9 14 21}}, {{map|5 8 12}}, {{map|19 30 45}}, and {{map|14 22 33}} — lock the ratio between their 3-terms and 5-terms, in this case to 2:3.

Those are the three simplest slopes to consider, i.e. the ones which are exactly parallel to the axes ''(see Figure 4a)''. But all the other temperament lines follow a similar principle. Their slopes are a manifestation of the prime factors in their defining comma. If having zero 5’s means you are perfectly horizontal, then having only one 5 means your slope will be close to horizontal, such as meantone {{vector|-4 4 -1}} or [[helmholtz]] {{vector|-15 8 1}}. Similarly, magic {{vector|-10 -1 5}} and [[würschmidt]] {{vector|17 1 -8}}, having only one 3 apiece, are close to parallel with the 3-axis, while porcupine {{vector|1 -5 3}} and [[ripple]] {{vector|-1 8 -5}}, having only one 2 apiece, are close to parallel with the 2-axis.

Think of it like this: for meantone, a change to the mapping of 5 doesn’t make nearly as much of a difference to the outcome as a change to the mapping of 2 or 3 does. Therefore, changes along the 5-axis don’t have nearly as much of an effect on that line, so it ends up roughly parallel to it.

Cents and Hertz values can readily be converted between one form and the other, so it’s the second difference which is more important. It’s their size. If we do convert 12-ET’s generator to cents so we can compare it with meantone’s generator at 12-ET, we can see that 12-ET’s generator is 100¢ (log₂1.059 × 1200¢ = 100¢) while meantone’s generator at 12-ET is 500¢ (5/12 × 1200¢ = 500¢). What is the explanation for this difference?

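Both conversions are one-liners. A sketch, using the exact frequency ratio 2^(1/12) rather than the rounded 1.059:

```python
from math import log2

# 12-ET's own generator, as a frequency ratio, converted to cents
ratio = 2 ** (1 / 12)              # ≈ 1.059
print(round(log2(ratio) * 1200))   # → 100 (¢)

# meantone's generator at 12-ET, as a fraction of the octave, in cents
print(round(5 / 12 * 1200))        # → 500 (¢)
```
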
Well, notice that meantone is not the only temperament which passes through 12-ET. Consider augmented temperament. Its generator at 12-ET is 400¢. What's key here is that all three of these generators — 100¢, 500¢, and 400¢ — are multiples of 100¢.

Let’s put it this way. When we look at 12-ET in terms of itself, rather than in terms of any particular rank-2 temperament, its generator is 1\12. That’s the simplest, smallest generator which if we iterate it 12 times will touch every pitch in 12-ET. But when we look at 12-ET not as the end goal, but rather as a foundation upon which we could work with a given temperament, things change. We don’t necessarily need to include every pitch in 12-ET to realize a temperament it supports.

=== mappings and comma bases ===

Consider 19-ET again. Its map is {{map|19 30 44}}. We also now know that we could call it “meantone|magic”, because we find it at the intersection of the meantone and magic temperament lines. But how would we make this connection mathematically, rather than visually?

The first critical step is to recall that temperaments are defined by commas, which can be expressed as vectors. So, we can represent meantone using the meantone comma, {{vector|-4 4 -1}}, and magic using the magic comma, {{vector|-10 -1 5}}.

The intersection of two vectors can be represented as a matrix. If a vector is like a list of numbers, a matrix is a table of them. Technically, vectors are vertical lists of numbers, or columns, so when we put meantone and magic together, we get a matrix that looks like this:

We call such a matrix a '''comma basis'''. The plural of “basis” is “bases”, but pronounced /ˈbeɪ siz/.

Now how in the world could that matrix represent the same temperament as {{map|19 30 44}}? Well, they’re two different ways of describing it. {{map|19 30 44}}, as we know, tells us how many generator steps it takes to reach each prime approximation. This matrix, it turns out, is an equivalent way of stating the same information. This matrix is a minimal representation of the null-space of that mapping, or in other words, of all the commas it tempers out.

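We can at least verify that both columns of this comma basis lie in the map's null-space, using the familiar zero-steps check. A sketch:

```python
def steps(m, comma):
    # Dot product: how many ET steps the comma spans under this map.
    return sum(mi * ci for mi, ci in zip(m, comma))

m19 = [19, 30, 44]
meantone = [-4, 4, -1]
magic = [-10, -1, 5]

# Both columns of the comma basis map to zero steps under the map:
print(steps(m19, meantone), steps(m19, magic))  # → 0 0
```
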
This was a bit tricky for me to get my head around, so let me hammer this point home: when you say "the null-space", you're referring to ''the entire infinite set of all commas that a mapping tempers out'', ''not only'' the two commas you see in any given basis for it. Think of the comma basis as one of many valid sets of instructions to find every possible comma, by adding or subtracting these two commas from each other<ref>To be clear, because what you are adding and subtracting in interval vectors are exponents (as you know), the commas are actually being multiplied by each other; e.g. {{vector|-4 4 -1}} + {{vector|10 1 -5}} = {{vector|6 5 -6}}, which is the same thing as <span><math>\frac{81}{80} × \frac{3072}{3125} = \frac{15552}{15625}</math></span></ref>. The math term for adding and subtracting vectors like this, which you will certainly see plenty of as you explore RTT, is "linear combination". It should be visually clear from the PTS diagram that this 19-ET comma basis couldn't be listing every single comma 19-ET tempers out, because we can see there are at least four temperament lines that pass through it (there are actually infinitely many of them!). But it turns out that picking two commas is enough; every other comma that 19-ET tempers out can be expressed in terms of these two!

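The identity in the footnote (adding vectors multiplies their ratios) can be checked mechanically. A sketch:

```python
from fractions import Fraction

def ratio(vector):
    """Convert a 5-limit vector of prime exponents to its frequency ratio."""
    r = Fraction(1)
    for prime, exponent in zip((2, 3, 5), vector):
        r *= Fraction(prime) ** exponent
    return r

a = [-4, 4, -1]   # 81/80
b = [10, 1, -5]   # 3072/3125

# Adding the vectors multiplies the ratios:
summed = [x + y for x, y in zip(a, b)]
print(summed, ratio(summed))  # → [6, 5, -6] 15552/15625
print(ratio(a) * ratio(b))    # → 15552/15625
```
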
Try one. How about the hanson comma, {{vector|6 5 -6}}? Well that one’s too easy! Clearly if you go down by one magic comma to {{vector|10 1 -5}} and then up by one meantone comma you get one hanson comma. What you’re doing when you’re adding and subtracting multiples of commas from each other like this is technically called “[https://en.wikipedia.org/wiki/Gaussian_elimination Gaussian elimination]”. Feel free to work through any other examples yourself.

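As a toy stand-in for that elimination process, we can brute-force the integer combination of the two basis commas that produces hanson. A sketch:

```python
# Express the hanson comma as an integer combination a·meantone + b·magic
# by searching over small coefficients (a toy stand-in for Gaussian
# elimination, which would find the coefficients systematically).
meantone = [-4, 4, -1]
magic = [-10, -1, 5]
hanson = [6, 5, -6]

for a in range(-5, 6):
    for b in range(-5, 6):
        combo = [a * x + b * y for x, y in zip(meantone, magic)]
        if combo == hanson:
            print(a, b)  # → 1 -1, i.e. hanson = meantone - magic
```
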
A good way to explain why we don’t need three of these commas is this: if you had three of them, you could use any two of them to create the third. You could then subtract that result from the third comma, turning it into a zero vector, i.e. a vector with only zeroes, which is pretty useless, so we could just discard it.

And a potentially helpful way to think about why any other interval arrived at through linear combinations of the commas in a basis would also be a valid column in the basis is this: any of these interval vectors, by definition, is mapped to zero steps by the mapping. So any combination of them will also map to zero steps, and thus be a comma that is tempered out by the temperament.

When written with the {{map|}} notation, we’re expressing maps in “covector” form, or in other words, as the row-based counterparts of vectors. But we can also think of maps in terms of matrices. If vectors are like matrix columns, maps are like matrix rows. So while we have to write {{vector|-4 4 -1}} vertically when in matrix form, {{map|19 30 44}} stays horizontal.

[[File:Different nestings.png|400px|thumb|left|'''Figure 5a.''' How to write matrices in terms of either columns/vectors/commas or rows/covectors/maps.]]

We can extend our angle bracket notation (technically called [https://en.wikipedia.org/wiki/Bra%E2%80%93ket_notation bra-ket notation, or Dirac notation]<ref>Dirac notation comes to RTT from quantum mechanics, not algebra.</ref>) to handle matrices by nesting rows inside columns, or columns inside rows ''(see Figure 5a)''. For example, we could have written our comma basis like this: {{map|{{vector|-4 4 -1}} {{vector|-10 -1 5}}}}. Starting from the outside, the {{map|}} tells us to think in terms of a row. It's just that this covector isn't a covector of numbers, like the ones we've gotten used to by now, but rather a covector of ''columns of'' numbers. So this row houses two such columns. Alternatively, we could have written this same matrix like {{vector|{{map|-4 -10}} {{map|4 -1}} {{map|-1 5}}}}, but that would obscure the fact that it is the combination of two familiar commas (but that notation ''would'' be useful for expressing a matrix built out of multiple maps, as we will soon see).

Sometimes a comma basis may have only a single comma. That’s okay. A single vector can become a matrix. To disambiguate this situation, you could put the vector inside a covector, like this: {{map|{{vector|-4 4 -1}}}}. Similarly, a single covector can become a matrix, by nesting inside a vector, like this: {{vector|{{map|19 30 44}}}}.

If a comma basis is the name for the matrix made out of commas, then we could say a “'''mapping'''” is the name for the matrix made out of maps.

You will regularly see matrices across the wiki that use only square brackets on the outside, e.g. [{{map|5 8 12}} {{map|7 11 16}}] or [{{vector|-4 4 -1}} {{vector|-10 -1 5}}]. That's fine because it's unambiguous; if you have a list of rows, it's fairly obvious you've arranged them vertically, and if you've got a list of columns, it's fairly obvious you've arranged them horizontally. I personally prefer the style of using angle brackets at both levels — for slightly more effort, it raises slightly fewer questions — but using only square brackets on the outside should not be said to be wrong.

=== null-space ===

There’s nothing special about the pairing of meantone and magic. We could have chosen meantone|hanson, or magic|negri, etc. A matrix formed out of the intersection of any two of these commas will capture the same exact null-space of {{vector|{{map|19 30 44}}}}.

We already have the tools to check that each of these commas’ vectors is tempered out individually by the map {{map|19 30 44}}; we learned this bit in the very first section: all we have to do is make sure that the comma maps to zero steps in this ET. But that's not a special relationship between 19-ET and any of these commas ''individually''; each of these commas is tempered out by many different ETs, not just 19-ET. The special relationship 19-ET has is to a null-space which can be expressed in basis form as the intersection of ''two'' commas (at least in the 5-limit; more on this later). In this way, the comma basis matrices which represent the intersections of two commas are greater than the sum of their individual parts.
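This zero-check is easy to script. Here is a minimal sketch in plain Python; the helper name and the hard-coded comma vectors are my own, taken from the commas named above:

```python
# Mapping a JI interval vector with an ET map is a dot product:
# the result is how many ET steps the interval maps to.
def map_interval(et_map, vector):
    return sum(m * v for m, v in zip(et_map, vector))

et_19 = [19, 30, 44]

commas = {
    "meantone": [-4, 4, -1],   # 81/80
    "magic":    [-10, -1, 5],  # 3125/3072
    "hanson":   [-6, -5, 6],   # 15625/15552
    "negri":    [-14, 3, 4],   # 16875/16384
}

for name, comma in commas.items():
    print(name, map_interval(et_19, comma))  # 0 for each: all tempered out
```

By contrast, a non-comma such as the octave {{vector|1 0 0}} maps to 19 steps, as it should.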


We can confirm the relationship between an ET and its null-space by converting back and forth between them. There exists a mathematical function which, when given any one of these comma basis matrices as input, will output {{vector|{{map|19 30 44}}}}, thus demonstrating the various bases' equivalence with respect to it. If the operation called "taking the null-space" is what gets you from {{vector|{{map|19 30 44}}}} to one basis for the null-space, then ''this'' mathematical function is in effect ''undoing'' the null-space operation.


And interestingly enough, as you'll soon see, the process is almost the same to take the null-space as it is to undo it.


Working this out by hand goes like this (it is a standard linear algebra operation, so if you're comfortable with it already, you can skip this and other similar parts of these materials):
</math>


And ta-da! You’ve found the mapping for which the comma basis we started with is a basis for the null-space, and it is {{vector|{{map|19 30 44}}}}. Feel free to try this with any other combination of two commas tempered out by this map.
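Since the 5-limit is 3-dimensional, this "undoing" function can also be sketched with an ordinary cross product: the cross product of two linearly independent comma vectors is orthogonal to both, which is exactly the map that sends both to zero. This is a 3-dimensional shortcut for illustration only, not the general method (that is the elimination above), and depending on argument order the result may come out negated or with a common factor; the function name is mine:

```python
# Cross product of two 3-dimensional comma vectors recovers the
# map whose null-space they span (up to sign and a common factor).
def cross(u, v):
    return [u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0]]

meantone = [-4, 4, -1]  # 81/80
magic = [-10, -1, 5]    # 3125/3072

print(cross(meantone, magic))  # [19, 30, 44]
```

Swapping in the hanson comma [-6 -5 6⟩ for magic gives the same map, illustrating the equivalence of the different comma bases.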


{| class="wikitable"
|}


Now the null-space function, to take you from {{vector|{{map|19 30 44}}}} back to the matrix, is pretty much the same thing, but a bit simpler. No need to transpose or reverse. Just start at the augmentation step:


<math>
</math>


So that’s not any of the commas we’ve looked at so far (it’s the [[19-comma]] and the [[acute limma]]). But it is easy to see that either of them would be tempered out by 19-ET (no need to map by hand — just look at these commas side-by-side with the map {{vector|{{map|19 30 44}}}} and it should be apparent).
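For a one-row 5-limit map ⟨a b c] there is also a direct sketch of taking the null-space: the vectors [-b a 0⟩ and [-c 0 a⟩ are always sent to zero, and dividing out common factors keeps them primitive. The helper names are mine, and this basis will not necessarily be the exact pair the elimination produces; it just spans the same null-space:

```python
from math import gcd

def reduce_vec(v):
    # divide out the greatest common factor to keep the vector primitive
    g = gcd(gcd(abs(v[0]), abs(v[1])), abs(v[2]))
    return [x // g for x in v] if g else v

def null_basis(a, b, c):
    # both vectors dot to zero against the map <a b c]
    return [reduce_vec([-b, a, 0]), reduce_vec([-c, 0, a])]

print(null_basis(19, 30, 44))  # [[-30, 19, 0], [-44, 0, 19]]
```

The first basis vector, [-30 19 0⟩, is the 19-comma just mentioned.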


{| class="wikitable"
{| class="wikitable"
Line 695: Line 695:
=== the other side of duality ===


So we can now convert back and forth between a mapping and a comma basis. We could imagine drawing a diagram with a line of duality down the center, with a temperament's mapping on the left, and its comma basis on the right. Either side ultimately gives the same information, but sometimes you want to come at it in terms of the maps, and sometimes in terms of the commas.


So far we've looked at how to intersect comma vectors to form a comma basis. Next, let's look at the other side of duality, and see how to form a mapping out of unioning maps. In many ways, the approaches are similar; the line of duality is a lot like a mirror in that way.


When we union two maps, we put them together into a matrix, just like how we put two vectors together into a matrix. But again, where vectors are vertical columns, maps are horizontal rows. So when we combine {{map|5 8 12}} and {{map|7 11 16}}, we get a matrix that looks like


<math>
</math>


This matrix represents meantone. In our angle bracket notation, we would write it as two covectors inside a vector (one column of two rows), like this: {{vector|{{map|5 8 12}} {{map|7 11 16}}}}.


Again, we find ourselves in the position where we must reconcile a strange new representation of an object with an existing one. We already know that meantone can be represented by the vector for the comma it tempers out, {{vector|-4 4 -1}}. How are these two representations related?


Well, it’s actually quite simple! They’re related in the same way as {{vector|{{map|19 30 44}}}} was related to {{map|{{vector|-4 4 -1}} {{vector|-10 -1 5}}}}: by the null-space operation. Specifically, {{map|{{vector|-4 4 -1}}}} is a basis for the null-space of the mapping {{vector|{{map|5 8 12}} {{map|7 11 16}}}}, because it is the minimal representation of all the commas tempered out by meantone temperament.


We can work this one out by hand too:
|}


And there’s our {{map|{{vector|4 -4 1}}}}. Feel free to try reversing the operation by working out the mapping from this if you like. And/or you could try working out that {{map|{{vector|4 -4 1}}}} is a basis for the null-space of any other combination of ETs we found that could specify meantone, such as 7&12, or 12&19.
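As a quick mechanical check of this result, with the relevant ET maps hard-coded from earlier in the text:

```python
def map_steps(et_map, vector):
    return sum(m * v for m, v in zip(et_map, vector))

comma = [4, -4, 1]  # 80/81, the (negated) meantone comma found above
for et_map in ([5, 8, 12], [7, 11, 16], [12, 19, 28], [19, 30, 44]):
    print(et_map, map_steps(et_map, comma))  # 0 every time
```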


It’s worth noting that, just as 2 commas were exactly enough to define a rank-1 temperament (with an infinitude of equivalent pairs of commas to choose from), something similar is happening here: 2 maps are exactly enough to define a rank-2 temperament, again with an infinitude of equivalent pairs. We can even see that we can convert between these maps using Gaussian addition and subtraction, just like we could manipulate commas to get from one to the other. For example, the map for 12-ET {{map|12 19 28}} is exactly what you get from summing the terms of 5-ET {{map|5 8 12}} and 7-ET {{map|7 11 16}}: {{map|5+7 8+11 12+16}} = {{map|12 19 28}}. Cool!
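That "Gaussian addition" of maps is plain entry-by-entry addition; a sketch (the helper name is mine):

```python
def add_maps(m1, m2):
    return [a + b for a, b in zip(m1, m2)]

print(add_maps([5, 8, 12], [7, 11, 16]))    # [12, 19, 28], i.e. 12-ET
print(add_maps([12, 19, 28], [7, 11, 16]))  # [19, 30, 44], i.e. 19-ET
```

The second line shows the same trick one rung further along: 12-ET plus 7-ET gives 19-ET.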


Probably the biggest thing you’re in suspense about now, though, is: how the heck is
Let’s consider some facts:


* {{vector|{{map|19 30 44}}}} is the mapping for a rank-1 temperament.
* {{vector|{{map|5 8 12}} {{map|7 11 16}}}} is the mapping for a rank-2 temperament.
* A rank-1 temperament has one generator.
* A rank-2 temperament has two generators.
* {{map|19 30 44}} asks us to imagine a generator g for which g¹⁹ ≈ 2, g³⁰ ≈ 3, and g⁴⁴ ≈ 5.
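A quick numeric check of that last fact, taking g to be one step of 19-ET (floating-point, so the approximations are visible):

```python
g = 2 ** (1 / 19)  # one 19-ET step, as a frequency ratio

print(g ** 19)  # 2.0 (the octave, up to rounding)
print(g ** 30)  # about 2.988, close to 3
print(g ** 44)  # about 4.979, close to 5
```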


From these facts, we can see that what the mapping matrix
And somehow… from this… we can generate meantone?! This is true, but it’s not immediately easy to see how that would happen.


First we should show how to actually use rank-2 mappings. It’s actually not that complicated. It’s just like using a rank-1 mapping, except you apply each of the two maps separately, and then put the results back together at the end. Let’s see how this plays out for 10/9, or {{vector|1 -2 1}}.


'''{{map|5 8 12}}:'''
* {{map|5 8 12}}{{vector|1 -2 1}}
* 5×1 + 8×-2 + 12×1
* 5 + -16 + 12
* 1


'''{{map|7 11 16}}:'''
* {{map|7 11 16}}{{vector|1 -2 1}}
* 7×1 + 11×-2 + 16×1
* 7 + -22 + 16
* 1


So in this meantone mapping, the best approximation of the JI interval 10/9 is found by moving 1 step in each generator. We could write this in vector form as {{vector|1 1}}.
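The two separate map applications above are just a matrix-vector product, one row (map) at a time; a minimal sketch, with the helper name my own:

```python
def apply_mapping(mapping, vector):
    # apply each map (row) separately, then collect the results
    return [sum(m * v for m, v in zip(row, vector)) for row in mapping]

meantone = [[5, 8, 12],
            [7, 11, 16]]

print(apply_mapping(meantone, [1, -2, 1]))   # [1, 1]: 10/9 takes 1 step of each generator
print(apply_mapping(meantone, [-4, 4, -1]))  # [0, 0]: 81/80 is tempered out
```

The second line previews the point made below: a comma tempered out by both maps individually comes out to zero steps overall.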


If the familiar usage of vectors has been as prime count lists, we can now generalize that definition to things like this {{vector|1 1}}: generator count lists. Since interval vectors are often called monzos, you’ll often see these called tempered monzos or [[Tmonzos_and_Tvals|tmonzos]] for short. There’s very little difference. We can use these vectors as coordinates in a lattice just the same as before. The main difference is that the nodes we visit on this lattice aren’t pure JI intervals; the lattice is tempered.


We haven’t specified the size of either of these generators, but that’s not important here. These mappings are just like a set of requirements for any pair of generators that might implement this temperament. This is as good a time as any to emphasize the fact that temperaments are abstract; they are not ready-to-go tunings, but more like instructions for a tuning to follow. This can sometimes feel frustrating or hard to understand, but ultimately it’s a big part of the power of temperament theory.


The critical thing here is that if {{vector|-4 4 -1}} is mapped to 0 steps by {{map|5 8 12}} individually and to 0 steps by {{map|7 11 16}} individually, then in total it comes out to 0 steps in the temperament, and thus is tempered out, or has vector {{vector|0 0}}.


Previously we mentioned that any given rank-2 temperament can be generated by a wide variety of combinations of intervals. In other words, the absolute size of the intervals is not the important part, in terms of their potential for generating the temperament; only their relative size matters. However, for us humans, it’s much easier to make sense of these things if we get them in a good old standard form, by locking one generator to the octave to establish a common basis for comparison, and the other generator to a size less than half of the octave (because anything past the halfway point would be the octave-complement of a smaller and therefore in some sense simpler interval). And there’s a way to find this form by transforming our matrix. In fact, it also uses Gaussian elimination, though in this case, we do it by columns. Our target this time is a bit harder to explain ahead of time, so this first time through, just watch, and we’ll review the result.
</math>


So this is still meantone! But now it’s a bit more practical to think about. Because notice what happens to the octave, {{vector|1 0 0}}. To approximate the octave, you simply move by one of the first generator, or {{vector|1 0}}. The second generator has nothing to do with it. And how about the fifth, {{vector|-1 1 0}}? Well, the first generator maps that to 0 steps, and the second generator maps that to 1 step, or {{vector|0 1}}. So that tells us our second generator is the fifth. Which is… almost perfect! I would have preferred a fourth, the octave-complement of the fifth, since it is less than half of an octave. But it’s basically the same thing. Good enough.
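We can verify that the reduced rows really are integer combinations of the original rows, so both matrices present the same temperament. The coefficients below were worked out for this particular example, and the helper name is mine:

```python
def lin_comb(c1, row1, c2, row2):
    # integer combination of two mapping rows
    return [c1 * x + c2 * y for x, y in zip(row1, row2)]

five_et = [5, 8, 12]
seven_et = [7, 11, 16]

print(lin_comb(-4, five_et, 3, seven_et))  # [1, 1, 0]
print(lin_comb(7, five_et, -5, seven_et))  # [0, 1, 4]
```

Both new rows still send the meantone comma [-4 4 -1⟩ to 0 steps, as any valid meantone mapping must.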


Hopefully manipulating these rows like this gives you some sort of feel for how what matters in a temperament mapping is not so much the absolute values but their relationship with each other.


* We’ve made it to a critical point here: we are now able to explain why RTT is called “regular” temperament theory. Regular here is a mathematical term, and I don’t have a straightforward definition of it for you, but it apparently refers to the fact that all intervals in the tuning are combinations of only these specified generators. So there you go.
* Both {{vector|{{map|5 8 12}} {{map|7 11 16}}}} and {{vector|{{map|1 1 0}} {{map|0 1 4}}}} are equivalent mappings, then. Converting between them we could call a change of basis. This makes more sense, of course, when speaking about converting between equivalent bases; I’ve been cautioned against referring to maps as “bases” despite the label seeming appropriate from an analogy standpoint.
* Note well: this is not to say that {{map|1 1 0}} or {{map|0 1 4}} ''are'' the generators for meantone. They are generator ''mappings'': when assembled together, they collectively describe the behavior of the generators, but they are ''not'' themselves the generators. This situation can be confusing; it confused me for many weeks. I thought of it this way: because the first generator is 2/1 — i.e. {{vector|1 0 0}} maps to {{vector|1 0}} — referring to {{map|1 1 0}} as the octave or period seems reasonable and is effective when the context is clear. And similarly, because the second generator is 3/2 — i.e. {{vector|-1 1 0}} maps to {{vector|0 1}} — referring to {{map|0 1 4}} as the fifth or the generator seems reasonable and is effective when the context is clear. But it's critical to understand that the first generator "being" the octave here is ''contingent upon the definition of the second generator'', and vice versa, the second generator "being" the fifth here is ''contingent upon the definition of the first generator''. Considering {{map|1 1 0}} or {{map|0 1 4}} individually, we cannot say what intervals the generators are. What if the mapping was {{vector|{{map|1 1 0}} {{map|1 2 4}}}} instead? We'd still have the first generator mapping as {{map|1 1 0}}, but now that the second generator mapping is {{map|1 2 4}}, the two generators must be the fourth and the fifth. In summary, neither mapping row describes a generator in a vacuum, but does so in the context of all the other mapping rows.
* This also gives us a new way to think about the scale tree patterns. Remember how earlier we pointed out that {{map|12 19 28}} was simply {{map|5 8 12}} + {{map|7 11 16}}? Well, if {{vector|{{map|5 8 12}} {{map|7 11 16}}}} is a way of expressing meantone in terms of its two generators, you can imagine that 12-ET is the point where those two generators converge on being the same exact size<ref>For real numbers <span><math>p,q</math></span> we can make the two generators respectively <span><math>\frac{p}{5p+7q}</math></span> and <span><math>\frac{q}{5p+7q}</math></span> of an octave, e.g. <span><math>(p,q)=(1,0)</math></span> for 5-ET, <span><math>(0,1)</math></span> for 7-ET, <span><math>(1,1)</math></span> for 12-ET, and many other possibilities.</ref>. If they become the same size, then they aren’t truly two separate generators, or at least there’s no effect in thinking of them as separate. And so for convenience you can simply combine their mappings into one.
* Technically speaking, when we first learned how to map vectors with ETs, we could think of those outputs as vectors too, but they'd be 1-dimensional vectors, i.e. if 12-ET maps 16/15 to 1 step, we could write that as {{map|12 19 28}}{{vector|4 -1 -1}} = {{vector|1}}, where writing the answer as {{vector|1}} expresses that 1 step as 1 of the only generator in this equal temperament.


=== JI as a temperament ===
</math>


That looks like an identity matrix! Well, in this case the best interpretation can be found by checking its mapping of 2/1, 3/1, and 5/1, or in other words {{vector|1}}, {{vector|0 1}}, and {{vector|0 0 1}}. Each prime is generated by a different generator, independently. And if you think about the implications of that, you’ll realize that this is simply another way of expressing the idea of 5-limit JI! Because the three generators are entirely independent, we are capable of exactly generating literally any 5-limit interval. Which is another way of confirming our hypothesis that no commas are tempered out.




In this rank-2 example of 5-limit meantone, we have 2 generators, so the lattice is 2D, and can therefore be viewed on a simple square grid on the page. Up and down correspond to movements by one generator, and left and right correspond to movements by the other generator.


The next step is to understand our primes in terms of this temperament’s generators. Meantone’s mapping is {{vector|{{map|1 0 -4}} {{map|0 1 4}}}}. This maps prime 2 to one of the first generator and zero of the second generator. This can be seen plainly by slicing the first column from the matrix; we could even write it as the vector {{vector|1 0}}. Similarly, this mapping maps prime 3 to zero of the first generator and one of the second generator, or in vector form {{vector|0 1}}. Finally, this mapping maps prime 5 to negative four of the first generator and four of the second generator, or {{vector|-4 4}}.


So we could label the nodes with a list of approximations. For example, the node at {{vector|-4 4}} would be ~5. We could label ~9/8 on {{vector|-3 2}} just the same as we could label {{vector|-3 2}} 9/8 in JI; however, here we can also label that node ~10/9, because {{vector|1 -2 1}} → 1×{{vector|1 0}} + -2×{{vector|0 1}} + 1×{{vector|-4 4}} = {{vector|1 0}} + {{vector|0 -2}} + {{vector|-4 4}} = {{vector|-3 2}}. Cool, huh? Because conflating 9/8 and 10/9 is a quintessential example of the effect of tempering out the meantone comma ''(see Figure 5b)''.
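The conflation can be checked mechanically: mapping both 9/8 and 10/9 through this mapping lands them on the same tempered node. A sketch, with the helper name my own:

```python
def temper(mapping, vector):
    # matrix-vector product: JI vector in, generator-count vector out
    return [sum(m * v for m, v in zip(row, vector)) for row in mapping]

meantone = [[1, 0, -4],
            [0, 1, 4]]

nine_eight = [-3, 2, 0]  # 9/8
ten_nine = [1, -2, 1]    # 10/9

print(temper(meantone, nine_eight))  # [-3, 2]
print(temper(meantone, ten_nine))    # [-3, 2], the same node
```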


[[File:Mapping to tempered vector.png|400px|thumb|right|'''Figure 5b.''' Converting from a JI interval vector to a tempered interval vector, with one less rank, conflating intervals related by the tempered out comma.]]
So far, everything we’ve done has been in terms of the 5-limit, which has dimensionality of 3. Before we generalize our knowledge upwards, into the 7-limit, let’s take a look at how things work one step downwards, in the simpler direction: the 3-limit, which is only 2-dimensional.


We don’t have a ton of options here! The PTS diagram for 3-limit JI could be a simple line. This axis would define the relative tuning of primes 2 and 3, which are the only harmonic building blocks available. Along this line we’ll find some points, which familiarly are ETs. For example, we find 12-ET. Its map here is {{map|12 19}}; no need to mention the 5-term, because we have no vectors that will use it here. This ET, being a rank-1 temperament, has <span><math>r</math></span> = 1. So if <span><math>d</math></span> = 2, then solving for <span><math>n</math></span> we find that it only tempers out a single comma (unlike the rank-1 temperaments in 5-limit JI, which tempered out two commas). We can use our familiar null-space function to find what this comma is:


<math>
\text{nullSpace}\left( \begin{bmatrix} 12 & 19 \end{bmatrix} \right) = \begin{bmatrix} -19 \\ 12 \end{bmatrix}
</math>
|}


Unsurprisingly, the comma is {{vector|-19 12}}, the compton comma. Basically, any comma we could temper out in 3-limit JI is going to be obvious from the ET’s map. Another option would be the blackwood comma, {{vector|-8 5}}, tempered out in 5-ET, {{map|5 8}}. Exciting stuff! Okay, not really. But good to ground yourself with.
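In the 2-dimensional case the null-space is easy enough to sketch in code: for a map ⟨a b], the reduced comma is just [-b a⟩. A quick sketch (the helper name "et_comma_3_limit" is my own, for illustration):

```python
from math import gcd

# For a 3-limit ET map <a b], the null-space is spanned by [-b a>,
# reduced by the GCD of the map's terms. Name is mine, for illustration.
def et_comma_3_limit(a, b):
    g = gcd(a, b)
    return [-b // g, a // g]

et_comma_3_limit(12, 19)  # [-19, 12], the compton comma
et_comma_3_limit(5, 8)    # [-8, 5], the blackwood comma
```

You can check the result by dotting the map back against the comma: 12×(-19) + 19×12 = 0, as it must be for a tempered-out comma.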


But now you shouldn’t be afraid even of 11-limit or beyond. 11-limit is 5D. So if you temper 2 commas there, you’ll have a rank-3 temperament.
=== beyond the 5-limit ===


So far we’ve only been dealing with RTT in terms of prime limits, which is by far the most common and simplest way to use it. But nothing is stopping you from using other JI groups. What is a JI group? Well, I'll explain in terms of what we already know: prime limits. Prime limits are basically the simplest type of JI group. A prime limit is shorthand for the JI group consisting of all the primes up to that prime which is your limit; for example, the 7-limit is the same thing as the JI group "2.3.5.7". So JI groups are just sets of harmonics, and they are notated by separating the selected harmonics with dots.


Sometimes you may want to use a JI [[Just intonation subgroup|subgroup]]. For example, you could create a 3D tuning space out of primes 2, 3, and 7 instead, skipping prime 5. You would call it “the 2.3.7 subgroup”. Or you could just call it "the 2.3.7 group", really. Nobody really cares that it's a subgroup of another group.


You could even choose a JI group with combinations of primes, such as the 2.5/3.7 group. Here, we still care about approximating primes 2, 3, 5, and 7, however there's something special about 3 and 5: we don't specifically care about approximating 3 or 5 individually, but only about approximating their combination. Note that this is different yet from the 2.15.7 group, where the combinations of 3 and 5 we care about approximating are when they're on the same side of the fraction bar.


As you can see from the 2.15.7 example, you don't even have to use primes. Simple and common examples of this situation are the 2.9.5 or the 2.3.25 groups, where you're targeting multiples of the same prime, rather than combinations of different primes.
[[File:Temperaments by rnd.png|400px|thumb|left|'''Figure 5d.''' Some temperaments by dimension, rank, and nullity]]


Alright, here’s where things start to get pretty fun. 7-limit JI is 4D. We can no longer refer to our 5-limit PTS diagram for help. Maps and vectors here will have four terms; the new fourth term being for prime 7. So the map for 12-ET here is {{map|12 19 28 34}}.


Because we're starting in 4D here, if we temper out one comma, we still have a rank-3 temperament, with 3 independent generators. Temper out two commas, and we have a rank-2 temperament, with 2 generators (remember, one of them is the period, which is usually the octave). And we’d need to temper out 3 commas here to pinpoint a single ET.
Septimal meantone may be thought of as the temperament which tempers out the meantone comma and the starling comma (126/125), or “meantone|starling”. But it may also be thought of as “meantone|marvel”, where the marvel comma is 225/224. We don’t even necessarily need the meantone comma at all: it can even be “starling|marvel”! This speaks to the fact that any temperament with a nullity greater than 1 has an infinitude of equivalent comma bases. It’s up to you which one to use.
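We can sanity-check that these comma bases all describe the same temperament numerically: a comma is tempered out exactly when the mapping sends its vector to all zeros. Here is a sketch using the 7-limit maps of 12-ET and 19-ET stacked into one (non-normalized) mapping, with the three commas written as vectors from their prime factorizations (the function name "tempers_out" is my own):

```python
# A comma is tempered out when every row of the mapping maps it to 0.
# "tempers_out" is my own illustrative name, not standard RTT vocabulary.
def tempers_out(mapping, comma):
    return all(sum(m * c for m, c in zip(row, comma)) == 0 for row in mapping)

# 12-ET and 19-ET maps in the 7-limit, stacked (one valid form of septimal meantone):
septimal_meantone = [[12, 19, 28, 34],
                     [19, 30, 44, 53]]

meantone_comma = [-4, 4, -1, 0]  # 81/80
starling_comma = [1, 2, -3, 1]   # 126/125
marvel_comma   = [-5, 2, 2, -1]  # 225/224

all(tempers_out(septimal_meantone, c)
    for c in (meantone_comma, starling_comma, marvel_comma))  # True
```

All three commas vanish under the same mapping, which is why any two of them can serve as a comma basis for septimal meantone.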


On the other side of duality, septimal meantone’s mapping has two rows, corresponding to its two generators. We don’t have PTS for 7-limit JI handy, but because septimal meantone includes, or extends plain meantone, we can still refer to 5-limit PTS, and pick ETs from the meantone line there. The difference is that this time we need to include their 7-term. So the union of {{map|12 19 28 34}} and {{map|19 30 44 53}} would work. But so would {{map|19 30 44 53}} and {{map|31 49 72 87}}. We have an infinitude of options on this side of duality too, but here it’s not because our nullity is greater than 1, but because our rank is greater than 1.


=== normal form ===
<math>
\begin{bmatrix} 1 & 2 & 4 \\ 0 & -1 & -4 \end{bmatrix}
</math>


maps the fourth (4/3, {{vector|2 -1 0}}) to {{vector|0 1}}.


It’s often the case that a temperament’s nullity is greater than 1 or its rank is greater than 1, and therefore we have an infinitude of equivalent ways of expressing the comma basis or the mapping. This can be problematic, if we want to efficiently communicate about and catalog temperaments. It’s good to have a standardized form in these cases. The approach RTT takes here is to get these matrices into '''“normal” form'''. In plain words, this just means: we have a function which takes in a matrix and spits out a matrix of the same shape, and no matter which matrix we input from a set of matrices which we consider all to be equivalent to each other, it will spit out the same result. This output is what we call the “normalized” matrix, and it can therefore uniquely identify a temperament.
<math>
\begin{bmatrix} 1 & 0 & -4 \\ 0 & 1 & 4 \end{bmatrix}
</math>


If you take the HNF of {{vector|{{map|5 8 12}} {{map|7 11 16}}}}, that’s what you get. It’s also what you get if you take the HNF of {{vector|{{map|12 19 28}} {{map|19 30 44}}}}, etc. That’s the power of normalization.


Finding the HNF is not all too different from the other matrix transformations we’ve practiced so far. Basically you just perform Gaussian elimination until you reach your target. The target in this case requires that the biggest square submatrix you can fit in the top-left corner is an identity matrix. In other words, the top-left corner is 1, you have all 1’s along the main diagonal, and 0’s between any of these 1’s and the top or left of the matrix. If you can’t get a 1 along the main diagonal, shoot for a number whose absolute value is as low as possible, and lower than or equal to any further numbers down the diagonal.
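Here is a rough sketch of that procedure in code. It performs repeated Euclidean row reduction rather than following a textbook HNF algorithm step for step, but it reproduces the examples in this section. Treat it as an illustration under those assumptions, not a production implementation:

```python
# Hermite-normal-form sketch via integer row operations (Euclidean reduction).
# Illustrative only; a robust HNF routine needs more care about edge cases.
def hnf(mapping):
    rows = [list(r) for r in mapping]
    n_rows, n_cols = len(rows), len(rows[0])
    pivot = 0
    for col in range(n_cols):
        if pivot >= n_rows:
            break
        def nonzero():
            return [i for i in range(pivot, n_rows) if rows[i][col] != 0]
        if not nonzero():
            continue
        # Subtract multiples of the smallest entry until one nonzero remains.
        while len(nonzero()) > 1:
            i, j = sorted(nonzero(), key=lambda k: abs(rows[k][col]))[:2]
            q = rows[j][col] // rows[i][col]
            rows[j] = [a - q * b for a, b in zip(rows[j], rows[i])]
        i = nonzero()[0]
        rows[pivot], rows[i] = rows[i], rows[pivot]
        if rows[pivot][col] < 0:  # make the pivot positive
            rows[pivot] = [-a for a in rows[pivot]]
        for i in range(pivot):    # clear entries above the pivot
            q = rows[i][col] // rows[pivot][col]
            rows[i] = [a - q * b for a, b in zip(rows[i], rows[pivot])]
        pivot += 1
    return rows

hnf([[5, 8, 12], [7, 11, 16]])     # [[1, 0, -4], [0, 1, 4]]
hnf([[12, 19, 28], [19, 30, 44]])  # [[1, 0, -4], [0, 1, 4]] again
```

Feeding it the 7-limit pair from earlier, hnf([[12, 19, 28, 34], [19, 30, 44, 53]]), likewise lands on a single canonical form, [[1, 0, -4, -13], [0, 1, 4, 10]].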


* They both represent information written horizontally, as a row.
* They both use left angle brackets on the left and square brackets on the right, {{map|}}, to enclose their contents.
* They both exist on the left half of tuning duality, on the side built up out of ETs which concerns rank (not the side built up out of commas, which uses vectors/columns, and concerns nullity).


The main difference between the two is superficial, and has to do with the “multi” part of the name. A plain covector comes in only one type, but a multicovector can be a bicovector, tricovector, tetracovector, etc. Yes, a multicovector can even be a monocovector. Depending on its numeric prefix, a multicovector will be written with a different count of brackets on either side. For example, a bicovector uses two of each: {{multicovector|}}. A tricovector uses three of each: {{multicovector|rank=3|}}. A monocovector, written with one of each, like {{map|}}, is indistinguishable from a plain covector, and that’s okay.


In order to make these materials as accessible as possible, I have been doing what I can to lean away from RTT jargon and instead toward generic, previously established mathematical and/or musical concepts. That is why I have avoided the terms "monzo", "val", "breed", and why I will now avoid "wedgie". When established mathematical and/or musical concepts are unavailable, we can at least use unmistakable analogs built upon what we do have. In this case, if in RTT we use covectors to represent maps, then analogously, we can refer to the thing multicovectors represent in RTT as a '''multimap'''.


So we mentioned that a multimap is an alternative way to represent a temperament. Let's look at an example now. Meantone's multimap looks like this: {{multicovector|1 4 4}}. As you can see, it is a bicovector, or bimap, because it has two of each bracket.


Why care about multimaps? Well, a key reason is that they can serve the same purpose as the normal form of a temperament’s mapping: the process for converting a mapping to a multimap will convert any equivalent mapping to the same exact multimap. In other words, a multimap can serve as a unique identifier for its temperament. And, unlike normal forms for matrices, there is no question about which normal form to use.
First I’ll list the steps. Don’t worry if it doesn’t all make sense the first time. We’ll work through an example and go into more detail as we do. To be clear, what we're doing here is both more and less than, and in different ways from, the strict definition of the exterior product as you may see it elsewhere; I'm specifically describing the process for finding the multimap in the form you're going to be interested in for RTT purposes.


# Take each combination of <span><math>r</math></span> primes where <span><math>r</math></span> is the rank, sorted in [https://en.wikipedia.org/wiki/Lexicographic_order lexicographic order], e.g. if we're in the 7-limit, we'd have <span><math>(2,3,5)</math></span>, <span><math>(2,3,7)</math></span>, <span><math>(2,5,7)</math></span>, and <span><math>(3,5,7)</math></span>.
# Convert each of those combinations to a square <span><math>r×r</math></span> matrix by slicing a column for each prime out of the mapping and putting them together.
# Take each matrix's determinant.
# Flip the sign of every result if the first result is negative.
# Extract the GCD from these results.
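The five steps above can be sketched directly in code (the function names are mine; determinants are done by naive cofactor expansion, which is fine for matrices this small):

```python
from itertools import combinations
from functools import reduce
from math import gcd

def det(m):
    # Naive cofactor expansion along the first row; fine for small matrices.
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def multimap(mapping):
    r, d = len(mapping), len(mapping[0])
    # Steps 1-3: lexicographic prime combinations, column slices, determinants.
    minors = [det([[row[c] for c in cols] for row in mapping])
              for cols in combinations(range(d), r)]
    if minors[0] < 0:  # step 4: flip every sign if the first result is negative
        minors = [-m for m in minors]
    g = reduce(gcd, (abs(m) for m in minors))  # step 5: extract the GCD
    return [m // g for m in minors] if g > 1 else minors

multimap([[1, 0, -4], [0, 1, 4]])       # [1, 4, 4], meantone's multimap
multimap([[12, 19, 28], [19, 30, 44]])  # [1, 4, 4] again: same temperament
multimap([[12, 19, 28]])                # [12, 19, 28], an ET's monomap
```

Note how two different meantone mappings come out as the same multimap, which is exactly the normalization property discussed above.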
<math>
\begin{vmatrix} 1 & 0 \\ 0 & 1 \end{vmatrix} = 1 \qquad
\begin{vmatrix} 1 & -4 \\ 0 & 4 \end{vmatrix} = 4 \qquad
\begin{vmatrix} 0 & -4 \\ 1 & 4 \end{vmatrix} = 4
</math>


These have no GCD, or in other words, their GCD is 1. So just set these inside a number of brackets equal to the rank, and we’ve got {{multicovector|1 4 4}}.


This method even works on an equal temperament, e.g. {{map|12 19 28}}. The rank is 1 so each combination of primes has only a single prime: they’re <span><math>(2)</math></span>, <span><math>(3)</math></span>, and <span><math>(5)</math></span>. The square matrices are therefore <span><math>\begin{bmatrix}12\end{bmatrix} \begin{bmatrix}19\end{bmatrix} \begin{bmatrix}28\end{bmatrix}</math></span>. The determinant of a 1×1 matrix is defined as the value of its single term, so now we have 12 19 28. No GCD, and <span><math>r</math></span> = 1, so we set the answer inside one layer of brackets, so our monocovector is {{map|12 19 28}}. Yes, it looks the same as what we started with, which is fine.


Let’s try a slightly harder example now: a rank-3 temperament, and in the 7-limit. There are four different ways to take 3 of 4 primes: <span><math>(2,3,5)</math></span>, <span><math>(2,3,7)</math></span>, <span><math>(2,5,7)</math></span>, and <span><math>(3,5,7)</math></span>.
</math>


In natural language, that’s each element of the first row times the determinant of the square matrix from the other two columns and the other two rows, summed but with an alternating pattern of negation beginning with positive. If you ever need to do determinants of matrices bigger than 3×3, see [https://www.mathsisfun.com/algebra/matrix-determinant.html this webpage]. Or, you can just use an online calculator.


{| class="wikitable"
|}


And so our results are <span><math>-2</math></span>, <span><math>3</math></span>, <span><math>1</math></span>, <span><math>-11</math></span>. There's no GCD to extract. We prefer for the first term to be positive; this doesn’t make a difference in how things behave, but is done because it normalizes things (we could have found the result where the first term came out positive by simply changing the order of the rows of our mapping, which doesn’t affect how the mapping works at all). And so we flip the signs<ref>If it helps you, you could think of this sign-flipping step as paired with the GCD extraction step, if you think of it like extracting a GCD of -1.</ref>, and our list ends up as <span><math>2</math></span>, <span><math>-3</math></span>, <span><math>-1</math></span>, <span><math>11</math></span>. Finally, set these inside triply-nested brackets, because it’s a trimap for a rank-3 temperament, and we get {{multicovector|rank=3|2 -3 -1 11}}.


As for getting from the multimap back to the mapping, you can solve a system of equations for that. Though it’s not easy and there may not be a unique solution. And you probably will never have the multimap without the mapping anyway.
=== multicommas ===


You may have noticed that the multimap for meantone, {{multicovector|1 4 4}}, looks really similar to the meantone comma, {{vector|-4 4 -1}}. This is not a coincidence.


To understand why, we have to cover a few key points:
To demonstrate these points, let’s first calculate the multicomma from a comma basis, and then confirm it by calculating the same multicomma as the complement of its dual multimap.


Here’s the comma basis for meantone: {{map|{{vector|-4 4 -1}}}}. Calculating the multicomma is almost the same as calculating the multimap. The only difference is that as a preliminary step you must transpose the matrix, or in other words, exchange rows and columns. In our bracket notation, that just looks like replacing {{map|{{vector|-4 4 -1}}}} with {{vector|{{map|-4 4 -1}}}}. Now we can see that this is just like our ET map example from the previous section: basically an identity operation, breaking the thing up into three 1×1 matrices <span><math>\begin{bmatrix}-4\end{bmatrix} \begin{bmatrix}4\end{bmatrix} \begin{bmatrix}-1\end{bmatrix}</math></span>, which are their own determinants, and then nesting back inside one layer of brackets because the nullity is 1. So we have {{vector|-4 4 -1}}. Except, be careful! We can’t skip the step where we extract the GCD, which in this case is -1 again, so the multicomma (a monocomma) is actually {{vector|4 -4 1}}. By the way, we write this operation with a different symbol. It’s upside down from the exterior product, and so is called the interior product, using ∨, which is fitting because ∨ is also the logical operator for “or”, matching the intersection of commas using the “|” operator, which also means “or”.


Now let’s see how to do the complement operation.
# Multiply each term of the multimap by these values: 1×1, 4×(-1), 4×1 = 1, -4, 4.
# Reverse the order: 4, -4, 1.
# Set the result in the proper count of brackets: {{vector|4 -4 1}}.


Ta-da! Both operations get us to the same result: {{vector|4 -4 1}}.
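A generic way to get those sign values is the parity of the permutation formed by concatenating a prime combination with its complement. This sketch (my own formulation, not established RTT code) reproduces the 1, -1, 1 pattern and the final result for meantone:

```python
from itertools import combinations

def perm_sign(p):
    # Parity of a permutation, by counting inversions.
    return (-1) ** sum(1 for i in range(len(p))
                       for j in range(i + 1, len(p)) if p[i] > p[j])

def complement(terms, d, grade):
    # "complement" and "perm_sign" are my own illustrative names.
    ins = list(combinations(range(d), grade))       # lexicographic input combos
    outs = list(combinations(range(d), d - grade))  # lexicographic output combos
    result = dict.fromkeys(outs, 0)
    for coeff, combo in zip(terms, ins):
        rest = tuple(i for i in range(d) if i not in combo)
        result[rest] = coeff * perm_sign(combo + rest)
    return [result[c] for c in outs]

complement([1, 4, 4], 3, 2)  # [4, -4, 1], meantone's multicomma
```

The reversal of order in the steps above falls out automatically here, because the output terms are re-sorted by their complementary prime combinations.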


What’s the proper count of brackets though? Well, the total count of brackets on the multicomma and multimap for a temperament must always sum to the dimensionality of the system from which you tempered. It’s the same thing as <span><math>d - n = r</math></span>, just phrased as <span><math>r + n = d</math></span>, where <span><math>r</math></span> is the bracket count for the multimap and <span><math>n</math></span> is the bracket count for the multicomma. So with 5-limit meantone, with dimensionality 3, there should be 3 total pairs of brackets. If 2 are on the multimap, then only 1 is on the multicomma.
So we now understand how to get to multimaps. And we understand that they uniquely identify the temperament. But what about the individual terms — do they mean anything in and of themselves? It turns out: yes!


The first thing to understand is that each term of the multimap pertains to a different combination of primes. We already know this: it’s how we calculated it from the mapping matrix. For example, in the multimap for meantone, {{multicovector|1 4 4}}, the 1 is for <span><math>(2,3)</math></span>, the first 4 is for <span><math>(2,5)</math></span>, and the second 4 is for <span><math>(3,5)</math></span>.
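To make that calculation concrete, here's a small sketch that recovers the multimap of 5-limit meantone from its mapping matrix by taking the 2×2 minor for each pair of primes. The particular mapping rows used here (⟨1 0 -4] and ⟨0 1 4]) are one standard form for meantone; any valid meantone mapping gives the same multimap up to sign.

```python
from itertools import combinations

# Recover the multimap [[1 4 4>> of 5-limit meantone by taking the
# 2x2 minors of its mapping matrix, one per pair of primes.

mapping = [
    [1, 0, -4],  # first mapping row
    [0, 1, 4],   # second mapping row
]
primes = [2, 3, 5]

multimap = []
for i, j in combinations(range(len(primes)), 2):  # (2,3), (2,5), (3,5)
    minor = mapping[0][i] * mapping[1][j] - mapping[0][j] * mapping[1][i]
    multimap.append(minor)
    print(f"primes ({primes[i]},{primes[j]}): minor = {minor}")

print(multimap)  # [1, 4, 4]
```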
Now, let’s convert every term of the multimap by taking its absolute value and its inverse. In this case, each of our terms is already positive, so that has no effect. But taking the inverse converts us to <span><math>\frac 11</math></span>, <span><math>\frac 14</math></span>, <span><math>\frac 14</math></span>. These values tell us what fraction of the tempered lattice we can generate using the corresponding combination of primes.
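That conversion is a one-liner; a sketch, continuing with the meantone multimap from above:

```python
from fractions import Fraction

# Convert each multimap term into the fraction of the tempered lattice
# the corresponding pair of primes can generate: absolute value, then reciprocal.

multimap = [1, 4, 4]  # [[1 4 4>>, 5-limit meantone
pairs = ["(2,3)", "(2,5)", "(3,5)"]

fractions = [Fraction(1, abs(t)) for t in multimap]
for pair, frac in zip(pairs, fractions):
    print(f"primes {pair} generate {frac} of the tempered lattice")
```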
But now try it with only prime 5 and one of the other primes, 2 or 3. Prime 5 takes you over 4 in both directions. But if you have only prime 2 otherwise, then you can only move up or down from there, so you’ll only cover every fourth vertical line through the tempered lattice. Or if you only had prime 3 otherwise, then you could only move left and right from there, so you’d only cover every fourth horizontal line ''(see Figure 6c)''.


One day you might come across a multimap which has a term equal to zero. If you tried to interpret this term using the information here so far, you'd think it must generate <span><math>\frac 10</math></span>th of the tempered lattice. That's not easy to visualize or reason about. Does that mean it generates infinitely many lattices? No, not really. More like the opposite. The question itself is somewhat undefined here. If anything, that combination of primes generates approximately ''none'' of the lattice. In this situation, the combination of primes whose multimap term is zero generates so little of the tempered lattice that it's completely missing an entire dimension of it; the amount it generates is infinitesimal. For example, the 11-limit temperament 7&12&31 has multimap {{map|rank=3|0 1 1 4 4 -8 4 4 -12 -16}} and mapping {{vector|{{map|1 0 -4 0 -12}} {{map|0 1 4 0 8}} {{map|0 0 0 1 1}}}}; we can see from this how primes <span><math>(2,3,5)</math></span> can only generate a rank-2 cross-section of the full rank-3 lattice, because while 2 and 3 do the trick of generating that rank-2 part (exactly as they do in 5-limit meantone), prime 5 brings nothing to the table here, so that's all we get.
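You can check this numerically: that leading zero is the 3×3 minor of the mapping restricted to the columns for primes 2, 3, and 5, and a zero minor means those columns only span rank 2. A sketch, using the 7&12&31 mapping quoted above:

```python
# Why the (2,3,5) term of 7&12&31's multimap is zero: the 3x3 minor of
# the mapping restricted to the columns for primes 2, 3, and 5 vanishes,
# because those columns miss a whole dimension of the rank-3 lattice.

mapping = [
    [1, 0, -4, 0, -12],
    [0, 1,  4, 0,   8],
    [0, 0,  0, 1,   1],
]

# Columns for primes 2, 3, 5 (indices 0, 1, 2).
m = [[row[c] for c in (0, 1, 2)] for row in mapping]

det = (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
     - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
     + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

print(det)  # 0
```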


We’ll look in more detail later at how exactly to best find these generators, once you know which primes to make them out of.
=== lattices ===


=== finding approximate JI generators ===


=== (con)torsion ===
* [[Stephen Weigel]]


plus many many more. And of course I owe a big debt to [[Gene Ward Smith]].


I take full responsibility for any errors or shortcomings of this work. Please feel free to edit this stuff yourself if you have something you'd like to correct, revise, or contribute.