Temperament addition
'''Temperament addition''' is the general name for either the '''temperament sum''' or the '''temperament difference''', which are two closely related operations on [[regular temperaments]]. Basically, to add or subtract temperaments means to match up the entries of temperament vectors and then add or subtract them individually. The result is a new temperament that has similar properties to the original temperaments.

== Introductory examples ==

For example, in the [[5-limit]], the sum of [[12-ET]] and [[7-ET]] is [[19-ET]] because {{map|12 19 28}} + {{map|7 11 16}} = {{map|(12+7) (19+11) (28+16)}} = {{map|19 30 44}}, and the difference of 12-ET and 7-ET is 5-ET because {{map|12 19 28}} - {{map|7 11 16}} = {{map|(12-7) (19-11) (28-16)}} = {{map|5 8 12}}.
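For readers who like to check this sort of arithmetic programmatically, here is a minimal sketch of entry-wise addition and subtraction of maps in plain Python (nothing here is specific to any particular RTT library):

<syntaxhighlight lang="python">
# Entry-wise sum and difference of two maps, represented as plain lists of integers.
def entrywise(a, b, op):
    assert len(a) == len(b), "maps must have the same dimensionality"
    return [op(x, y) for x, y in zip(a, b)]

et12 = [12, 19, 28]  # 5-limit patent val of 12-ET
et7 = [7, 11, 16]    # 5-limit patent val of 7-ET

print(entrywise(et12, et7, lambda x, y: x + y))  # [19, 30, 44], i.e. 19-ET
print(entrywise(et12, et7, lambda x, y: x - y))  # [5, 8, 12], i.e. 5-ET
</syntaxhighlight>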
The same arithmetic works on commas: the 5-limit meantone comma (80/81 in canonical form) and the porcupine comma (250/243) sum to the tetracot comma and differ by the dicot comma. We could write this in quotient form — replacing addition with multiplication and subtraction with division — as 80/81 × 250/243 = 20000/19683 and 80/81 ÷ 250/243 = 25/24, respectively. The similarity in these temperaments can be seen in how all of them are supported by 7-ET. (Note that these examples are all given in canonical form, which is why we're seeing the meantone comma as 80/81 instead of the more common 81/80; for the reason why, see [[Temperament addition#Negation]].)
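Spelled out entry-wise on the prime-count vectors (assuming the canonical comma vectors {{vector|4 -4 1}} for meantone and {{vector|1 -5 3}} for porcupine):

{{vector|4 -4 1}} + {{vector|1 -5 3}} = {{vector|(4+1) (-4+-5) (1+3)}} = {{vector|5 -9 4}}, the tetracot comma 20000/19683; and {{vector|4 -4 1}} - {{vector|1 -5 3}} = {{vector|(4-1) (-4--5) (1-3)}} = {{vector|3 1 -2}}, whose canonical (negated) form {{vector|-3 -1 2}} is the dicot comma 25/24.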
Temperament addition is simplest for temperaments which can be represented by single vectors such as demonstrated in these examples. In other words, it is simplest for temperaments that are either rank-1 ([[equal temperament]]s, or ETs for short) or nullity-1 (having only a single comma). Because [[grade]] <math>g</math> is the generic term for rank <math>r</math> and nullity <math>n</math>, we could define the minimum grade <math>g_{\text{min}}</math> of a temperament as the minimum of its rank and nullity <math>\min(r,n)</math>, and so for convenience in this article we will refer to <math>r=1</math> (read "rank-1") or <math>n=1</math> (read "nullity-1") temperaments as <math>g_{\text{min}}=1</math> (read "min-grade-1") temperaments. We'll also use <math>g_{\text{max}}</math> (read "max-grade"), which naturally is equal to <math>\max(r,n)</math>.
For <math>g_{\text{min}}>1</math> temperaments, temperament addition gets a little trickier. This is discussed in the [[Temperament_addition#Beyond_.5Bmath.5D.5Cmin.28g.29.3D1.5B.2Fmath.5D|beyond <math>g_{\text{min}}=1</math> section]] later.

== Applications ==

The temperament that results from summing or diffing two temperaments, as stated above, has similar properties to the original two temperaments.
Take the case of meantone + porcupine = tetracot from the previous section. What this relationship means is that tetracot is the temperament which doesn't make the meantone comma itself [[vanish]], nor the porcupine comma itself, but instead makes whatever comma relates pitches that are exactly one meantone comma plus one porcupine comma apart vanish. And that's the tetracot comma! And on the other hand, for the temperament difference, dicot, this is the temperament that makes neither meantone nor porcupine vanish, but instead the comma that's the size of the difference between them. And that's the dicot comma. So tetracot makes 80/81 × 250/243 vanish, and dicot makes 80/81 × 243/250 vanish.
Similar reasoning is possible for the mapping-rows of mappings — the analogs of the commas of comma bases — though it is less intuitive to describe. What's reasonably easy to understand, though, is how temperament addition on maps is essentially navigation of the scale tree for the rank-2 temperament they share; [[Dave Keenan & Douglas Blumeyer's guide to RTT/Exploring temperaments#Scale trees|see here]] for more information on this. So if you understand the effects on individual maps, then you can apply those to changes of maps within a more complex temperament.
Ultimately, these two effects are the primary applications of temperament addition.<ref>It has also been asserted that there exists a connection between temperament addition and "Fokker groups" as discussed on this page: [[Fokker block]], but the connection remains unclear to this author.</ref>

== A note on variance ==

For simplicity, this article will use the word "vector" in its general sense, that is, [[variance]]-agnostic. This means it includes either contravariant vectors (plain "vectors", such as [[prime-count vector]]s) or covariant vectors ("''co''vectors", such as [[map]]s). However, the reader should assume that only one of the two types is being used at a given time, since the two variances do not mix. For more information, see [[Linear_dependence#Variance]]. The same variance-agnosticism holds for [[multivector|''multi''vector]]s in this article as well.

== Visualizing temperament addition ==

[[File:Sum diff and wedge.png|thumb|left|300px|A and B are vectors representing temperaments. They could be maps or commas. A∧B is their wedge product and gives a higher-[[grade]] temperament that [[temperament merging|merge]]s both A and B. A+B and A-B give the sum and difference, respectively.]]

=== Versus the wedge product ===

If the [[wedge product]] of two vectors represents the directed area of a parallelogram constructed with the vectors as its sides, then the temperament sum and difference are the vectors that connect the diagonals of this parallelogram.

=== Tuning and tone space ===

[[File:Visualization of temperament arithmetic.png|300px|right|thumb|A visualization of temperament addition on projective tuning space.]]
Note that when viewed in tuning space, the sum is found between the two input temperaments, and the difference is found on the outside of them, to one side or the other. While in tone space, it's the difference that's found between the two input temperaments, and it's the sum that's found outside. In either situation where a temperament is on the outside and may be on one side or the other, the explanation for this can be inferred from the behavior of the scale tree on any temperament line, where e.g. if 5-ET and 7-ET support a <math>r=2</math> temperament, then so will 5 + 7 = 12-ET, and then so will 5 + 12 and 7 + 12 in turn, and so on recursively; when you navigate like this, what we could call ''down'' the scale tree, children are always found between their parents. But when you try to go back ''up'' the scale tree, to one or the other parent, you may not immediately know on which side of the child to look.
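As a concrete instance of this recursive navigation (assuming patent vals, and taking the shared rank-2 temperament here to be 5-limit meantone):

{{map|5 8 12}} + {{map|7 11 16}} = {{map|12 19 28}}, then {{map|12 19 28}} + {{map|7 11 16}} = {{map|19 30 44}}, and {{map|12 19 28}} + {{map|19 30 44}} = {{map|31 49 72}}; each child ET lands between its two parents on the temperament line.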
== Conditions on temperament addition ==

=== The temperaments have the same dimensions ===

Temperament addition is only possible for temperaments with the same [[dimensions]], that is, the same [[rank]] and [[dimensionality]] (and therefore, by the [[rank-nullity theorem]], also the same [[nullity]]). The reason for this is visually obvious: without the same <math>d</math>, <math>r</math>, and <math>n</math> (dimensionality, rank, and nullity, respectively), the numeric representations of the temperament — such as matrices and multivectors — will not have the same proportions, and therefore their entries will be unable to be matched up one-to-one. From this condition it also follows that the result of temperament addition will be a new temperament with the same <math>d</math>, <math>r</math>, and <math>n</math> as the input temperaments.

=== The temperaments share the same domain basis ===

If you're unfamiliar with [[domain bases]], then you can probably safely assume your temperaments are in the same subspace, because they should be in the default, standard, [[prime-limit]] interval subspace. If they're not, change them to be on the same interval subspace if you can, and then come back to temperament addition.

=== The temperaments are addable ===

[[File:Addability.png|300px|thumb|left|In the first row, we see the sum of two vectors. In the second row, we see how a pair of temperaments each defined by 2 basis vectors may be added as long as the other basis vectors match. In the third row we see a continued development of this idea, where a pair of temperaments each defined by 3 basis vectors is able to be added by virtue of all other basis vectors being the same.]]
Any set of <math>g_{\text{min}}=1</math> temperaments is addable<ref>or they are all the same temperament, in which case they <span style="color: #3C8031;">share all the same basis vectors and could perhaps be said to be ''completely'' linearly dependent.</span></ref>, because the side of duality where <math>g=1</math> will satisfy this condition, so we don't need to worry in detail about it in that case. Or in other words, <math>g_{\text{min}}=1</math> temperaments can be represented by monovectors, and we have no problem entry-wise adding those.
== Versus temperament merging ==

Like [[temperament merging]], temperament addition takes temperaments as inputs and finds a new temperament sharing properties of the inputs. And they both can be understood as, in some sense, ''adding'' these input temperaments together.

But there is a big difference between temperament addition and merging. Temperament addition is done using ''entry-wise'' addition (or subtraction), whereas merging is done using ''concatenation''. So the temperament sum of mappings with two rows each is a new mapping that still has exactly two rows, while on the other hand, the merging of mappings with two rows each is a new mapping that has a total of four rows<ref>At least, this mapping would have a total of four rows before it is canonicalized. After canonicalization, it may end up with only three (or two if you map-merged a temperament with itself for some reason).</ref>.
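A quick way to see this shape difference is to compare entry-wise addition with concatenation on a pair of mappings. Below is a minimal sketch using NumPy, on the addable forms of 5-limit meantone and porcupine given in a footnote later in this article; the canonicalization step is omitted:

<syntaxhighlight lang="python">
import numpy as np

# Addable forms of 5-limit meantone and porcupine, sharing the explicit vector <7 11 16].
meantone = np.array([[7, 11, 16], [-2, -3, -4]])
porcupine = np.array([[7, 11, 16], [1, 2, 3]])

summed = meantone + porcupine               # entry-wise: still a 2x3 mapping
merged = np.vstack([meantone, porcupine])   # concatenated: a 4x3 mapping (before canonicalization)

print(summed)        # [[14 22 32], [-1 -1 -1]], which canonicalizes to [<1 1 1] <0 4 9]]
print(summed.shape)  # (2, 3)
print(merged.shape)  # (4, 3)
</syntaxhighlight>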
=== The linear dependence connection ===

Another connection between temperament addition and merging is that they ''may'' involve checks for linear dependence.
Merging does not ''necessarily'' involve linear dependence. Linear dependence only matters for merging when you attempt to do it using ''exterior'' algebra, that is, by using the wedge product, rather than the ''linear'' algebra approach, which is just to concatenate the vectors as a matrix and canonicalize. For more information on this, see [[Douglas Blumeyer and Dave Keenan's Intro to exterior algebra for RTT#The linearly dependent exception to the wedge product]].

== <math>g_{\text{min}}=1</math> ==

As stated above, temperament addition is simplest for temperaments which can be represented by single vectors, or in other words, temperaments that are <math>g_{\text{min}}=1</math>, and for other temperaments, the computation gets a little trickier. Here we'll look at how to handle the simple case of <math>g_{\text{min}}=1</math>.

As shown in the [[Temperament_addition#Introductory_examples|introductory examples]], <math>g_{\text{min}}=1</math> examples are as easy as entry-wise addition or subtraction. But there are just a couple of tricks to it.

=== Getting to the side of duality with <math>g_{\text{min}}=1</math> ===

We may be looking at a temperament representation which itself does not consist of a single vector, but its dual does. For example, the meantone mapping {{rket|{{map|1 0 -4}} {{map|0 1 4}}}} and the porcupine mapping {{rket|{{map|1 2 3}} {{map|0 3 5}}}} each consist of two vectors. So these representations require additional labor to compute. But their duals are easy! If we simply find a comma basis for each of these mappings, we get [{{vector|4 -4 1}}] and [{{vector|1 -5 3}}]. In this form, the temperaments can be entry-wise added, to [{{vector|5 -9 4}}] as we saw earlier. And if in the end we're still after a mapping, since we started with mappings, we can take the dual of this comma basis, to find the mapping {{rket|{{map|1 1 1}} {{map|0 4 9}}}}.
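As a sanity check on this example, a few dot products confirm that each original mapping makes its own comma vanish and that the resulting mapping makes the summed comma vanish; a minimal sketch using NumPy:

<syntaxhighlight lang="python">
import numpy as np

meantone_map = np.array([[1, 0, -4], [0, 1, 4]])
porcupine_map = np.array([[1, 2, 3], [0, 3, 5]])
tetracot_map = np.array([[1, 1, 1], [0, 4, 9]])

meantone_comma = np.array([4, -4, 1])
porcupine_comma = np.array([1, -5, 3])
summed_comma = meantone_comma + porcupine_comma  # [5, -9, 4]

print(meantone_map @ meantone_comma)    # [0 0]: meantone makes its own comma vanish
print(porcupine_map @ porcupine_comma)  # [0 0]: likewise for porcupine
print(tetracot_map @ summed_comma)      # [0 0]: the summed comma vanishes under the summed temperament's mapping
</syntaxhighlight>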
=== Negation ===

[[File:Very simple illustration of temperament sum vs diff.png|500px|thumb|left|Equivalences of temperament addition depending on negativity.]]
For single vectors (and multivectors), negation is as simple as changing the sign of every entry.

Suppose you have a matrix representing temperament <math>𝓣_1</math> and another matrix representing <math>𝓣_2</math>. If you want to find both their sum and difference, you can calculate both <math>𝓣_1 + 𝓣_2</math> and <math>𝓣_1 + -𝓣_2</math>. There's no need to also find <math>-𝓣_1 + 𝓣_2</math>; this will merely give the negation of <math>𝓣_1 + -𝓣_2</math>. The same goes for <math>-𝓣_1 + -𝓣_2</math>, which is the negation of <math>𝓣_1 + 𝓣_2</math>.

But a question remains: which result between <math>𝓣_1 + 𝓣_2</math> and <math>𝓣_1 + -𝓣_2</math> is actually the sum and which is the difference? This seems like an obvious question to answer, except for one key problem: how can we be certain that <math>𝓣_1</math> or <math>𝓣_2</math> wasn't already in negated form to begin with? We need to establish a way to check for matrix negativity.

The check is that the vectors must be in [[canonical form]]. For a contravariant vector, such as the kind that represent commas, canonical form means that the trailing entry (the final non-zero entry) must be positive. For a covariant vector, such as the kind that represent mapping-rows, canonical form means that the leading entry (the first non-zero entry) must be positive.
Sometimes the canonical form of a vector is not the most popular form. For instance, the meantone comma is usually expressed in positive form, that is, with its numerator greater than its denominator, so that its cents value is positive, or in other words, it's the meantone comma upwards in pitch, not downwards. But the prime-count vector for that form, 81/80, is {{vector|-4 4 -1}}, and as we can see, its trailing entry -1 is negative. So the canonical form of meantone is actually {{vector|4 -4 1}}.
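Here is a minimal sketch of that sign check and of negation itself, for single vectors, in plain Python (it handles only the sign, not full canonicalization; the `is_covariant` flag distinguishes mapping-rows from comma vectors):

<syntaxhighlight lang="python">
def is_negative(vector, is_covariant):
    """Covariant vectors (maps) are negative if their leading nonzero entry is negative;
    contravariant vectors (commas) are negative if their trailing nonzero entry is negative."""
    nonzero = [entry for entry in vector if entry != 0]
    pivot = nonzero[0] if is_covariant else nonzero[-1]
    return pivot < 0

def canonicalize_sign(vector, is_covariant):
    return [-entry for entry in vector] if is_negative(vector, is_covariant) else list(vector)

print(canonicalize_sign([-4, 4, -1], is_covariant=False))  # [4, -4, 1]: 81/80 becomes 80/81
</syntaxhighlight>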
== <math>g_{\text{min}}>1</math> ==

As stated above, temperament addition is simplest for temperaments which can be represented by single vectors, or in other words, temperaments that are <math>g_{\text{min}}=1</math>, and for other temperaments, the computation gets a little trickier. Here we'll look at how to handle the trickier cases of <math>g_{\text{min}}>1</math>.

Throughout this section, we will be using <span style="color: #3C8031;">a green color on linearly dependent objects and values</span>, and <span style="color: #B6321C;">a red color on linearly independent objects and values</span>, to help differentiate between the two.

=== Addability ===

In order to understand how to do temperament addition on <math>g_{\text{min}}>1</math> temperaments, we must first understand addability.
Addability builds up through four concepts:

#<span style="color: #3C8031;">linear dependence</span>
#<span style="color: #3C8031;">linear dependence</span> between temperaments
#<span style="color: #B6321C;">linear independence</span> between temperaments
#<span style="color: #B6321C;">linear independence</span> between temperaments by only one basis vector (that's addability)
==== 1. <span style="color: #3C8031;">Linear dependence</span> ====

This is explained here: [[linear dependence]].

==== 2. <span style="color: #3C8031;">Linear dependence</span> between temperaments ====

<span style="color: #3C8031;">Linear dependence</span> has been defined for the matrices and multivectors that represent temperaments, but it can also be defined for temperaments themselves. The conditions of temperament addition motivate a definition of <span style="color: #3C8031;">linear dependence</span> for temperaments whereby temperaments are considered <span style="color: #3C8031;">linearly dependent</span> if ''either of their mappings or their comma bases are <span style="color: #3C8031;">linearly dependent</span>''<ref>or — equivalently, in EA — either their multimaps or their multicommas are <span style="color: #3C8031;">linearly dependent</span></ref>.

For example, 5-limit 5-ET and 5-limit 7-ET, represented by the mappings {{rket|{{map|5 8 12}}}} and {{rket|{{map|7 11 16}}}} may at first seem to be <span style="color: #B6321C;">linearly independent</span>, because the basis vectors visible in their mappings are clearly <span style="color: #B6321C;">linearly independent</span> (when comparing two vectors, the only way they could be <span style="color: #3C8031;">linearly dependent</span> is if they are multiples of each other, as discussed [[Linear dependence#Linear dependence between individual vectors|here]]). And indeed their ''mappings'' are <span style="color: #B6321C;">linearly independent</span>. But these two ''temperaments'' are <span style="color: #3C8031;">linearly ''de''pendent</span>, because if we consider their corresponding comma bases, we will find that they <span style="color: #3C8031;">share</span> the basis vector of the meantone comma {{vector|4 -4 1}}.

To make this point visually, we could say that two temperaments are <span style="color: #3C8031;">linearly dependent</span> if they intersect in one or the other of tone space and tuning space. So you have to check both views.<ref>You may be wondering — what about two temperaments which are parallel in tone or tuning space, e.g. compton and blackwood in tuning space? Their comma bases are each <math>n=1</math>, and they merge to give a <math>n=2</math> [[comma basis]], which corresponds to a <math>r=1</math> mapping, which means it should appear as an ET point on the PTS diagram. But how could that be? Well, here's their comma-merge: [{{vector|1 0 0}} {{vector|0 1 0}}], and so that corresponding mapping is {{rket|{{map|0 0 1}}}}. So it's some degenerate ET. I suppose we could say it's the point at infinity away from the center of the diagram.</ref>
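In code, the easiest way to see this particular dependence is to confirm that the meantone comma lies in the null space of both maps, i.e. that it can serve as a shared basis vector of their comma bases; a minimal sketch with NumPy:

<syntaxhighlight lang="python">
import numpy as np

et5 = np.array([5, 8, 12])
et7 = np.array([7, 11, 16])
meantone_comma = np.array([4, -4, 1])

# The two maps are linearly independent (neither is a multiple of the other),
# but both send the meantone comma to 0, so their comma bases share a vector.
print(et5 @ meantone_comma)  # 0
print(et7 @ meantone_comma)  # 0
</syntaxhighlight>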
==== 3. <span style="color: #B6321C;">Linear independence</span> between temperaments ====

<span style="color: #3C8031;">Linear dependence</span> may be considered as a boolean (yes/no, linearly <span style="color: #3C8031;">dependent</span>/<span style="color: #B6321C;">independent</span>) or it may be considered as <span style="color: #3C8031;">an integer count of linearly dependent basis vectors</span>. In other words, it is the dimension of <span style="color: #3C8031;">the linear-dependence basis <math>\dim(L_{\text{dep}})</math></span>. To refer to this count, we may hyphenate it as <span style="color: #3C8031;">'''linear-dependence'''</span>, and use the variable <span style="color: #3C8031;"><math>l_{\text{dep}}</math></span>. For example, 5-ET and 7-ET, per the example in the previous section, are <span style="color: #3C8031;"><math>l_{\text{dep}}=1</math></span> (read <span style="color: #3C8031;">"linear-dependence-1"</span>) temperaments.
A proof of this conjecture is given here: [[Temperament addition#Sintel's proof of the linear-independence conjecture]].

==== 4. <span style="color: #B6321C;">Linear independence</span> between temperaments by only one basis vector (i.e. addability) ====

Two temperaments are addable if they are <span style="color: #B6321C;"><math>l_{\text{ind}}=1</math></span>. In other words, both their mappings and their comma bases <span style="color: #3C8031;">share</span> all but one basis vector.

And so this is why <math>g_{\text{min}}=1</math> temperaments are all addable. If <math>g_{\text{min}}=1</math>, then since the temperaments are different from each other, <span style="color: #B6321C;"><math>l_{\text{ind}}</math></span> is at least 1; and since <span style="color: #B6321C;"><math>l_{\text{ind}}</math></span> can't be greater than <math>g_{\text{min}}</math>, it must be exactly 1, and therefore the temperaments are addable.
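Stated symbolically, for two distinct <math>g_{\text{min}}=1</math> temperaments:

<math>1 \leq l_{\text{ind}} \leq g_{\text{min}} = 1 \implies l_{\text{ind}} = 1</math>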
=== Multivector approach ===

The simplest approach to <math>g_{\text{min}}>1</math> temperament addition is to use multivectors. This is discussed in more detail here: [[Douglas Blumeyer and Dave Keenan's Intro to exterior algebra for RTT#Temperament addition]].

=== Matrix approach ===

Temperament addition for <math>g_{\text{min}}>1</math> temperaments (again, that's with both <math>r>1</math> and <math>n>1</math>) can also be done using matrices, and it works in essentially the same way — entry-wise addition or subtraction — but there are some complications that make it significantly more involved than it is with multivectors. There are essentially five steps:
#Find the <span style="color: #3C8031;"><math>L_{\text{dep}}</math></span>
#Put the matrices in a form with the <span style="color: #3C8031;"><math>L_{\text{dep}}</math></span>
#Addabilization defactoring
#Negation
#Entry-wise add, and canonicalize
==== The steps ====

===== 1. Find the <span style="color: #3C8031;"><math>L_{\text{dep}}</math></span> =====

For matrices, it is necessary to make explicit <span style="color: #3C8031;">the basis for the linearly dependent vectors shared</span> between the involved matrices before adding. In other words, any vectors that can be found through linear combinations of any of the involved matrices' basis vectors must appear explicitly and in the same position of each matrix before the sum or difference is taken. These vectors are called the <span style="color: #3C8031;">linear-dependence basis, or <math>L_{\text{dep}}</math></span>.

Before this can be done, of course, we need to actually find the <span style="color: #3C8031;"><math>L_{\text{dep}}</math></span>. This can be done using the technique described here: [[Linear dependence#For a given set of basis matrices, how to compute a basis for their linearly dependent vectors]]
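One way to sketch such a computation with SymPy: a vector lies in both row spaces exactly when it equals <math>a^\mathsf{T} M_1 = b^\mathsf{T} M_2</math> for some coefficient vectors <math>a</math> and <math>b</math>, which can be read off from the null space of the matrix <math>[M_1^\mathsf{T} \; {-M_2^\mathsf{T}}]</math>. This is only a rough illustration of one possible method, not the linked reference technique, and the function name is ours:

<syntaxhighlight lang="python">
from sympy import Matrix, ilcm

def dependence_basis(m1: Matrix, m2: Matrix):
    """Integer basis vectors shared by the row spaces of m1 and m2 (the L_dep)."""
    # (a; b) is in the null space of [m1^T | -m2^T] exactly when a^T m1 = b^T m2.
    stacked = m1.T.row_join(-m2.T)
    basis = []
    for z in stacked.nullspace():
        a = z[:m1.rows, 0]             # coefficients on m1's rows
        shared = a.T * m1              # a vector lying in both row spaces
        shared = shared * ilcm(*[entry.q for entry in shared])  # clear denominators
        basis.append(list(shared))
    return basis

meantone = Matrix([[1, 0, -4, -13], [0, 1, 4, 10]])
flattone = Matrix([[1, 0, -4, 17], [0, 1, 4, -9]])
print(dependence_basis(meantone, flattone))  # [[19, 30, 44, 53]], up to scale and sign
</syntaxhighlight>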
===== 2. Put the matrices in a form with the <span style="color: #3C8031;"><math>L_{\text{dep}}</math></span> =====

The <span style="color: #3C8031;"><math>L_{\text{dep}}</math></span> will always have one less vector than the original matrix, by the definition of addability as <span style="color: #B6321C;"><math>L_{\text{ind}}=1</math></span>. And the <span style="color: #3C8031;"><math>L_{\text{dep}}</math></span> is not a full recreation of the original temperament; it needs that one extra vector to get back to representing it.

So as a next step, we need to pad out the <span style="color: #3C8031;"><math>L_{\text{dep}}</math></span> by drawing from vectors from the original matrices. We can start from their first vectors. But if a chosen vector happens to be linearly dependent on the <span style="color: #3C8031;"><math>L_{\text{dep}}</math></span>, then it won't result in a representation of the original matrix; instead we'd produce a [[rank-deficient]] matrix that no longer represents the same temperament as we started with. So we just have to keep going through the original matrix's vectors until we find one that works.
===== 3. Addabilization defactoring =====

But it is not quite as simple as determining the <span style="color: #3C8031;"><math>L_{\text{dep}}</math></span> and then supplying the remaining vectors necessary to match the grade of the original matrix, because the results may then be [[enfactoring|enfactored]]. And defactoring them without compromising the explicit <span style="color: #3C8031;"><math>L_{\text{dep}}</math></span> cannot be done using existing [[defactoring algorithms]]; it's a tricky process, or at least computationally intensive. This is called '''addabilization defactoring'''.
Another complication is that the greatest factor may be very large, or be a highly composite number. In this case, searching for the linear combination that isolates the greatest factor in its entirety directly may be intractable; it is better to eliminate it piecemeal, i.e., whenever the solver finds a factor of the greatest factor, eliminate it, and repeat until the greatest factor is fully eliminated. The RTT library code linked to above works in this way.

===== 4. Negation =====

Temperament negation is more complex with matrices, both in terms of checking for it, as well as changing it.
For matrices, the check for negation is related to canonicalization of multivectors as are used in exterior algebra for RTT. Essentially we take the largest possible minor determinants of the matrix (or "largest-minors" for short), and then look at their leading or trailing entry (leading in the case of a covariant matrix, like a mapping; trailing in the case of a contravariant matrix, like a comma basis): if this entry is positive, so is the temperament, and vice versa.
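That check can be sketched with SymPy: enumerate the largest minors in column order and look at the sign of the first nonzero one (for a mapping) or the last nonzero one (for a comma basis). The helper functions here are ours, written for illustration:

<syntaxhighlight lang="python">
from itertools import combinations
from sympy import Matrix

def largest_minors(m: Matrix):
    """All r x r minor determinants of an r x d matrix, taken in column order."""
    r = m.rows
    return [m[:, list(cols)].det() for cols in combinations(range(m.cols), r)]

def is_negative(m: Matrix, is_covariant: bool):
    nonzero = [minor for minor in largest_minors(m) if minor != 0]
    pivot = nonzero[0] if is_covariant else nonzero[-1]
    return pivot < 0

meantone = Matrix([[1, 0, -4, -13], [0, 1, 4, 10]])
print(largest_minors(meantone))                  # [1, 4, 10, 4, 13, 12]
print(is_negative(meantone, is_covariant=True))  # False: the leading largest-minor is positive
</syntaxhighlight>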
===== 5. Entry-wise add =====

The entry-wise addition of elements works mostly the same as for vectors. But there's one catch: we only do it for the pair of <span style="color: #B6321C;">linearly independent vectors</span>. We set the <span style="color: #3C8031;"><math>L_{\text{dep}}</math></span> aside, and reintroduce it at the end.
As a final step, as is always good to do when concluding temperament operations, put the result in [[canonical form]].

==== Example ====

For our example, let’s look at septimal meantone plus flattone. The canonical forms of these temperaments are {{rket|{{map|1 0 -4 -13}} {{map|0 1 4 10}}}} and {{rket|{{map|1 0 -4 17}} {{map|0 1 4 -9}}}}.

'''0. Counterexample.''' Before we try following the detailed instructions just described above, let's do the counterexample, to illustrate why we have to follow them at all. Simple entry-wise addition of these two mapping matrices gives {{rket|{{map|2 0 -8 4}} {{map|0 2 8 1}}}}, which is not the correct answer:
And it's wrong not only because it is clearly enfactored (there's at least one factor of 2 visible in the first vector). The full explanation of why this is the wrong answer is beyond the scope of this example (the nature of correctness here is discussed in the section [[Temperament addition#Addition on non-addable temperaments]]). However, if we now follow through with the instructions described above, we can find the correct answer.

'''1. Find the linear-dependence basis.''' We know where to start: first find the <span style="color: #3C8031;"><math>L_{\text{dep}}</math></span> and put each of these two mappings into a form that includes it explicitly. In this case, their <span style="color: #3C8031;"><math>L_{\text{dep}}</math></span> consists of a single vector: <span style="color: #3C8031;">{{rket|{{map|19 30 44 53}}}}</span>.

'''2. Reproduce the original temperament.''' The original matrices had two vectors, so as our second step, we pad out these matrices by drawing from vectors from the original matrices, starting from their first vectors, so now we have [<span style="color: #3C8031;">{{map|19 30 44 53}}</span> {{map|1 0 -4 -13}}⟩ and [<span style="color: #3C8031;">{{map|19 30 44 53}}</span> {{map|1 0 -4 17}}⟩. We could choose any vectors from the original matrices, as long as they are <span style="color: #B6321C;">linearly independent</span> from the ones we already have; if one is not, skip it and move on. In this case the first vectors are both fine, though.
And so we can see that meantone minus flattone is [[meanmag]].

=== Addition on non-addable temperaments ===

==== Initial example: canonical form ====

Even when a pair of temperaments isn’t addable, if they have the same dimensions, that means the matrices representing them have the same shape, and so then there’s nothing stopping us from entry-wise adding them. For example, the <span style="color: #3C8031;"><math>L_{\text{dep}}</math></span> for the canonical comma bases for septimal meantone [{{vector|4 -4 1 0}} {{vector|13 -10 0 1}}] and septimal blackwood [{{vector|-8 5 0 0}} {{vector|-6 2 0 1}}] is empty, meaning their <span style="color: #B6321C;"><math>l_{\text{ind}}</math></span> <math>=2</math>, and therefore they aren't addable. Yet we can still do entry-wise addition on the matrices that are acting as these temperaments’ comma bases as if the temperaments were addable:
<math>\left[ \begin{array} {r|r}
4 & 13 \\
-4 & -10 \\
1 & 0 \\
0 & 1 \\
\end{array} \right]
+
\left[ \begin{array} {r|r}
-8 & -6 \\
5 & 2 \\
0 & 0 \\
0 & 1 \\
\end{array} \right]
=
\left[ \begin{array} {r|r}
(4+-8) & (13+-6) \\
(-4+5) & (-10+2) \\
(1+0) & (0+0) \\
(0+0) & (1+1) \\
\end{array} \right]
=
\left[ \begin{array} {r|r}
-4 & 7 \\
1 & -8 \\
1 & 0 \\
0 & 2 \\
\end{array} \right]</math>
And — at first glance — the result may even seem to be what we were looking for: a temperament which makes
# neither the meantone comma {{vector|4 -4 1 0}} nor the Pythagorean limma {{vector|-8 5 0 0}} vanish, but does make the just diatonic semitone {{vector|-4 1 1 0}} vanish; and
# neither Harrison's comma {{vector|13 -10 0 1}} nor Archytas' comma {{vector|-6 2 0 1}} vanish, but does make the laruru negative second {{vector|7 -8 0 2}} vanish.

But while these two monovector additions have worked out individually, the full result cannot truly be said to be the "temperament sum" of septimal meantone and blackwood. Here follows a demonstration of why it cannot.
==== Second example: alternate form ====

Let's try summing two completely different comma bases for these temperaments and see what we get. So septimal meantone can also be represented by the comma basis consisting of the diesis {{vector|1 2 -3 1}} and the hemimean comma {{vector|-6 0 5 -2}} (which is another way of saying that septimal meantone also makes those commas vanish). And septimal blackwood can also be represented by the septimal third-tone {{vector|2 -3 0 1}} and the cloudy comma {{vector|-14 0 0 5}}. So here's those two bases' entry-wise sum:
This works out for the individual monovectors too, that is, it now makes none of the input commas vanish anymore, but instead their sums. But what we're looking at here ''is not a comma basis for the same temperament'' as we got the first time!

We can confirm this by putting both results into [[canonical form]]. That's exactly what canonical form is for: confirming whether or not two matrices are representations of the same temperament! The first result happens to already be in canonical form, so that's [{{vector|-4 1 1 0}} {{vector|7 -8 0 2}}]. This second result [{{vector|3 -1 -3 2}} {{vector|-20 0 5 3}}] doesn't match that, but we can't be sure we don't have a match until we put it into canonical form. So its canonical form is [{{vector|-49 3 19 0}} {{vector|-23 1 8 1}}], which doesn't match, and so these are decidedly not the same temperament.
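Short of computing full canonical forms, a quick way to confirm that two comma bases do not represent the same temperament is to check whether stacking them together increases the rank: if they spanned the same space, it wouldn't. This only tests equality of the spanned space over the rationals (canonical form additionally handles enfactoring), but it suffices here; a sketch with SymPy, treating each comma as a row:

<syntaxhighlight lang="python">
from sympy import Matrix

def same_span(basis_a, basis_b):
    a, b = Matrix(basis_a), Matrix(basis_b)
    return a.col_join(b).rank() == a.rank() == b.rank()

first_result = [[-4, 1, 1, 0], [7, -8, 0, 2]]
second_result = [[3, -1, -3, 2], [-20, 0, 5, 3]]
print(same_span(first_result, second_result))  # False: not the same temperament
</syntaxhighlight>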
==== Third example: reordering of canonical form ====

In fact, we could even take the same sets of commas and merely reorder them to come up with a different result! Here, we'll just switch the order of the two commas in the representation of septimal blackwood:
And the canonical form of [{{vector|-2 -2 1 1}} {{vector|5 -5 0 1}}] is [{{vector|-7 3 1 0}} {{vector|5 -5 0 1}}], so that's yet another possible temperament resulting from attempting to add these non-addable temperaments.

==== Fourth example: other side of duality ====

We can even experience this without changing basis. Let's just compare the results we get from the canonical form of these two temperaments, on either side of duality. The first example we worked through happened to be their canonical comma bases. So now let's look at their canonical mappings. Septimal meantone's is {{rket|{{map|1 0 -4 -13}} {{map|0 1 4 10}}}} and septimal blackwood's is {{rket|{{map|5 8 0 14}} {{map|0 0 1 0}}}}. So what temperament do we get by summing these?
In order to compare this result directly with our other three results, let's take the dual of this {{rket|{{map|6 8 -4 1}} {{map|0 1 5 10}}}}, which is [{{vector|22 -15 3 0}} {{vector|41 -30 2 2}}] (in canonical form), so we can see that's yet a fourth possible result.<ref>It is possible to find a pair of mapping forms for septimal meantone and septimal blackwood that sum to a mapping which is the dual of the comma basis found by summing their canonical comma bases. One example is {{rket|{{map|97 152 220 259}} {{map|-30 -47 -68 -80}}}} + {{rket|{{map|-95 -152 -212 -266}} {{map|30 48 67 84}}}}.</ref>

==== Summary ====
Here are the four different results we've found so far:

* [{{vector|-4 1 1 0}} {{vector|7 -8 0 2}}] (from summing the canonical comma bases)
* [{{vector|-49 3 19 0}} {{vector|-23 1 8 1}}] (from summing the alternate comma bases)
* [{{vector|-7 3 1 0}} {{vector|5 -5 0 1}}] (from summing the comma bases with septimal blackwood's commas reordered)
* [{{vector|22 -15 3 0}} {{vector|41 -30 2 2}}] (the dual of the result of summing the canonical mappings)
What we're experiencing here is the effect first discussed in the early section [[Temperament addition#The temperaments are addable]]: since entry-wise addition of matrices is an operation defined on matrices, not on the temperaments they represent, we get different results for different bases.

This is in stark contrast to the situation when you have addable temperaments; once you get them into the form with the explicit <span style="color: #3C8031;"><math>L_{\text{dep}}</math></span> and only the single <span style="color: #B6321C;">linearly independent basis vector</span>, you will get the same resultant temperament regardless of which side of duality you add them on — the duals stay in sync, we could say — and regardless of which basis we choose.<ref>Note that different bases ''are'' possible for addable temperaments, e.g. the simplest addable forms for 5-limit meantone and porcupine are [<span style="color: #3C8031;">{{map|7 11 16}}</span> <span style="color: #B6321C;">{{map|-2 -3 -4}}</span>⟩ + [<span style="color: #3C8031;">{{map|7 11 16}}</span> <span style="color: #B6321C;">{{map|1 2 3}}</span>⟩ = {{rket|{{map|14 22 32}} {{map|-1 -1 -1}}}} which canonicalizes to {{rket|{{map|1 1 1}} {{map|0 4 9}}}}. But [<span style="color: #3C8031;">{{map|7 11 16}}</span> <span style="color: #B6321C;">{{map|-9 -14 -20}}</span>⟩ + [<span style="color: #3C8031;">{{map|7 11 16}}</span> <span style="color: #B6321C;">{{map|1 2 3}}</span>⟩ also works (in the meantone mapping, we've added one copy of the first vector to the second), giving {{rket|{{map|14 22 32}} {{map|-8 -12 -17}}}} which also canonicalizes to {{rket|{{map|1 1 1}} {{map|0 4 9}}}}; in fact, as long as the <span style="color: #3C8031;"><math>L_{\text{dep}}</math></span> is explicit and neither matrix is enfactored, the entry-wise addition will work out fine.</ref>

And so we can see that despite immediate appearances, while it seems like we can simply do entry-wise addition on temperaments with more than one <span style="color: #B6321C;">basis vector not in common</span>, this does not give us reliable results per temperament.
==== How it looks with multivectors ====
We've now observed the outcome when adding non-addable temperaments using the matrix approach. It's instructive to observe how it works with multivectors as well. The canonical multicommas for septimal meantone and septimal blackwood are {{multicomma|12 -13 4 10 -4 1}} and {{multicomma|14 0 -8 0 5 0}}, respectively. When we add these, we get {{multicomma|26 -13 -4 10 1 1}}. What temperament is this — does it match any of the four comma bases we've already found? Let's check by converting it back to matrix form. Oh, wait — we can't. This is what we call an [[decomposability|indecomposable]] multivector. In other words, there is no set of vectors that could be wedged together to produce this multivector. This is the way that multivectors convey to us that there is no true temperament sum of these two temperaments.
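As a quick numeric cross-check, here is a rough Wolfram Language sketch (not part of the article's formal development; the helper name <code>decomposableQ</code> is ad hoc, and it assumes the multicomma entries are listed in lexicographic order of their index pairs) which tests the standard Plücker condition for a 6-entry bivector:
<pre>
(* A 6-entry bivector (d = 4) is decomposable exactly when the Pluecker
   relation p01 p23 - p02 p13 + p03 p12 = 0 holds; entries are assumed
   to be in lexicographic index-pair order. *)
decomposableQ[{p01_, p02_, p03_, p12_, p13_, p23_}] :=
  p01*p23 - p02*p13 + p03*p12 == 0

decomposableQ[{12, -13, 4, 10, -4, 1}]  (* True: septimal meantone *)
decomposableQ[{14, 0, -8, 0, 5, 0}]     (* True: septimal blackwood *)
decomposableQ[{26, -13, -4, 10, 1, 1}]  (* False: their entry-wise sum *)
</pre>
The first two calls return True, as they must for genuine wedges of two commas, while the entry-wise sum returns False, which is the multivector way of saying there is no true temperament sum here.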
=== Further explanations ===
==== Diagrammatic explanation ====
===== Introduction =====
The diagrams used for this explanation were inspired in part by [[Kite Giedraitis|Kite]]'s [[gencom]]s, and specifically how in his "twin squares" matrices — which have dimensions <math>d \times d</math> — one can imagine shifting a bar up and down to change the boundary between the vectors that form a basis for the commas and those that are a [[generator detempering]]. The count of the former is the nullity <math>n</math>, and the count of the latter is the rank <math>r</math>, and the shifting of the boundary bar within the total of <math>d</math> vectors corresponds to the insight of the rank-nullity theorem, which states that <math>r + n = d</math>. And so this diagram's square grid has just the right amount of room to portray both the mapping and the comma basis for a given temperament (with the comma basis's vectors rotated 90 degrees to appear as rows, to match up with the rows of the mapping).
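If you'd like to see that rank-nullity bookkeeping numerically, here is a minimal Wolfram Language sketch (an added illustration using a 5-limit meantone mapping; the variable names are ad hoc):
<pre>
(* Rank-nullity bookkeeping for a single temperament: r + n = d.
   Illustrated with a mapping for 5-limit meantone. *)
m = { {1, 0, -4}, {0, 1, 4} };  (* a 5-limit meantone mapping *)
r = MatrixRank[m];              (* rank r = 2 *)
n = Length[NullSpace[m]];       (* nullity n = 1: one comma (the meantone comma, up to sign) *)
d = Length[First[m]];           (* dimensionality d = 3 *)
r + n == d                      (* True *)
</pre>
Swapping in any other mapping (or comma basis) gives the same r + n = d balance.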
So consider this first example of such a diagram:
Perhaps more importantly, we can also see from these diagrams that any pair of <math>g_{\text{min}}=1</math> temperaments will be addable: if they are <math>g_{\text{min}}=1</math>, then the furthest the <span style="color: #B6321C;">red band</span> can extend from the black bar is 1 vector, and 1 mirrored set of <span style="color: #B6321C;">red vectors</span> means <span style="color: #B6321C;"><math>l_{\text{ind}}=1</math></span>, and that's the definition of addability.
===== A simple <math>d=3</math> example =====
Let's back-pedal to <math>d=3</math> for a simple illustrative example.
{| class="wikitable"
|}
This diagram shows us that any two <math>d=3</math>, <math>g_{\text{min}}=1</math> temperaments (like 5-limit ETs) will be <span style="color: #3C8031;">linearly dependent</span>, i.e. their comma bases will <span style="color: #3C8031;">share</span> one vector. You may already know this intuitively if you are familiar with the 5-limit [[projective tuning space]] diagram from the [[Paul_Erlich#Papers|Middle Path]] paper, which shows how we can draw a line through any two ETs and that line will represent a temperament, and the single comma that temperament makes to vanish is <span style="color: #3C8031;">this shared vector</span>. The diagram also tells us that any two 5-limit temperaments that make only a single comma vanish will also be <span style="color: #3C8031;">linearly dependent</span>, for the opposite reason: their ''mappings'' will always <span style="color: #3C8031;">share</span> one vector.
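A rough Wolfram Language sketch of this fact (an added illustration, using the patent maps of 12-ET and 19-ET):
<pre>
(* The commas made to vanish by both 12-ET and 19-ET in the 5-limit are
   the null space of their stacked maps: a single vector, proportional
   to the meantone comma. *)
NullSpace[{ {12, 19, 28}, {19, 30, 44} }]
</pre>
The single basis vector returned is the <span style="color: #3C8031;">shared vector</span> of the diagram; any other pair of distinct, nontrivial 5-limit ETs behaves the same way.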
And we can see that there are no other diagrams of interest for <math>d=3</math>, because there's no sense in looking at diagrams with no <span style="color: #B6321C;">red band</span>, but we can't extend the <span style="color: #B6321C;">red band</span> any further than 1 vector on each side without going over the edge, and we can't lower the black bar any further without going below the center. So we're done. And our conclusion is that any pair of different <math>d=3</math> temperaments that are nontrivial (<math>0 < n < d=3</math> and <math>0 < r < d=3</math>) will be addable.
===== Completing the suite of <math>d=4</math> examples =====
Okay, back to <math>d=4</math>. We've already looked at the <math>g_{\text{min}}=1</math> possibility (which, for any <math>d</math>, there will only ever be one of). So let's start looking at the possibilities where <math>g_{\text{min}}=2</math>, which in the case of <math>d=4</math> leaves us only one pair of values for <math>r</math> and <math>n</math>: both being 2.
|}
In the former possibility, where <span style="color: #B6321C;"><math>l_{\text{ind}}=1</math></span> (and therefore the temperaments are addable), we have a pair of different <math>d=4</math>, <math>r=2</math> temperaments where we can find a single comma that both temperaments make to vanish, and — equivalently — we can find one ET that supports both temperaments.
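For one concrete (added, not from the article's worked examples) illustration of this situation, take septimal meantone with comma basis {81/80, 126/125} and miracle with comma basis {225/224, 1029/1024}; a rough Wolfram Language sketch shows their comma bases overlap in exactly one comma, and that 31-ET, with map {{map|31 49 72 87}}, supports both:
<pre>
(* Two d = 4, r = 2 temperaments with a single comma in common: their
   stacked comma bases have rank 3 rather than 4, and the 31-ET map
   sends every comma of both temperaments to 0. *)
meantoneCommas = { {-4, 4, -1, 0}, {1, 2, -3, 1} };    (* 81/80, 126/125 *)
miracleCommas  = { {-5, 2, 2, -1}, {-10, 1, 0, 3} };   (* 225/224, 1029/1024 *)
MatrixRank[Join[meantoneCommas, miracleCommas]]        (* 3, so one shared comma *)
Join[meantoneCommas, miracleCommas] . {31, 49, 72, 87} (* {0, 0, 0, 0} *)
</pre>
This is exactly the <span style="color: #B6321C;"><math>l_{\text{ind}}=1</math></span> situation pictured above.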
In the latter possibility, where <span style="color: #B6321C;"><math>l_{\text{ind}}=2</math></span>, neither side of duality <span style="color: #3C8031;">shares</span> any vectors in common. And so we've encountered our first example that is not addable. In other words, if the <span style="color: #B6321C;">red band</span> ever extends more than 1 vector away from the black bar, temperament addition is not possible. So <math>d=4</math> is the first time we've had enough room (half of <math>d</math>) to support that condition.
We have now exhausted the possibility space for <math>d=4</math>. We can't extend either the <span style="color: #B6321C;">red band</span> or the black bar any further.
===== <math>d=5</math> diagrams finally reveal important relationships =====
So how about we go to <math>d=5</math> (such as the 11-limit)? As usual, we start with <math>g_{\text{min}}=1</math>:
{| class="wikitable"
Here's where things really get interesting. Because in both of these cases, the pairs of temperaments represented are <span style="color: #3C8031;">linearly dependent</span> on each other (i.e. either their mappings are <span style="color: #3C8031;">linearly dependent</span>, their comma bases are <span style="color: #3C8031;">linearly dependent</span>, or both). And so far, in every possibility where temperaments have been <span style="color: #3C8031;">linearly dependent</span>, they have also been <span style="color: #B6321C;"><math>l_{\text{ind}}=1</math></span>, and therefore addable. But if you look at the second case here, we have <span style="color: #B6321C;"><math>l_{\text{ind}}=2</math></span>, and yet, since <math>d=5</math>, the temperaments still manage to be <span style="color: #3C8031;">linearly dependent</span>. So this is the first example of a <span style="color: #3C8031;">linearly dependent</span> temperament pairing which is not addable.
===== Back to <math>d=2</math>, for a surprisingly tricky example =====
Beyond <math>d=5</math>, these diagrams get cumbersome to prepare, and cease to reveal further insights. But if we step back down to <math>d=2</math>, a place simpler than anywhere we've looked so far, we actually find another surprisingly tricky example, which is hopefully still illuminating.
Basically, in the case of <math>d=2</math>, <math>g_{\text{max}}=1</math> (in non-trivial cases, i.e. not [[JI]] or the [[unison temperament]]), so any two different ETs or commas you pick are going to be <span style="color: #B6321C;">linearly independent</span> (because the only way they could be <span style="color: #3C8031;">linearly dependent</span> would be to be the same temperament). And yet we know we can still entry-wise add them to new vectors that are [[Douglas_Blumeyer_and_Dave_Keenan%27s_Intro_to_exterior_algebra_for_RTT#Decomposability|decomposable]], because they're already vectors (decomposing means to express a [[Douglas_Blumeyer_and_Dave_Keenan%27s_Intro_to_exterior_algebra_for_RTT#From_vectors_to_multivectors|multivector]] in the form of a list of monovectors, so decomposing a multivector that's already a monovector like this is tantamount to merely putting array braces around it).
==== Geometric explanation ====
We've presented a diagrammatic illustration of the behavior of <span style="color: #B6321C;">linear-independence <math>l_{\text{ind}}</math></span> with respect to temperament dimensions. But some of the results might have seemed surprising. For instance, when looking at the diagram for <math>d=4, g_{\text{min}}=1, g_{\text{max}}=3</math>, it might have seemed intuitive enough that the <span style="color: #B6321C;">red band</span> could not extend beyond the square grid, but then again, why shouldn't it be possible to have, say, two 7-limit ETs which make only a single comma in common vanish? Perhaps it doesn't seem clear that this is impossible, and that they must make two commas in common vanish (and of course the infinitude of combinations of these two commas). If this is as unclear to you as it was to the author when exploring this topic, then this explanatory section is for you! Here, we will use geometrical representations of temperaments to hone our intuitions about the possible combinations of dimensions and <span style="color: #B6321C;">linear-independence <math>l_{\text{ind}}</math></span> of temperaments.
In this approach, we're actually not going to focus directly on the <span style="color: #B6321C;">linear-independence <math>l_{\text{ind}}</math></span> of temperaments. Instead, we're going to look at the <span style="color: #3C8031;">linear-''de''pendence <math>l_{\text{dep}}</math></span> of matrices representing temperaments such as mappings and comma bases, and then compute the <span style="color: #B6321C;">linear-independence <math>l_{\text{ind}}</math></span> from it and the grade <math>g</math>. As we've established, the <span style="color: #3C8031;">linear-dependence <math>l_{\text{dep}}</math></span> differs from one side of duality to the other, so we'll only be looking at one side of duality at a time.
===== Introduction =====
In this geometric approach, we'll be imagining individual vectors as points (0D), sets of two vectors as lines (1D), sets of three as planes (2D), four as volumes (3D), and so forth, according to this table:
{| class="wikitable center-all"
Think of it this way: geometric points are zero-dimensional, simply representing a position in space, whereas linear algebra vectors are one-dimensional, representing both a magnitude and direction; the way vectors manage to encode this extra dimension without providing any additional information is by being understood to describe this position in space ''relative to an origin''. Well, so we'll now switch our interpretation of these objects to the geometric one, where the vector's entries are nothing more than a coordinate for a point in space. And the "projection" involved in projective vector space essentially positions us at this discarded origin, looking out from it upon every individual point, which accomplishes the same feat, in a visual way.
Perhaps an example may help clarify this setup. Suppose we've got an (x,y,z) space, and two coordinates (5,8,12) and (7,11,16). You should recognize these as the simple maps for 5-ET and 7-ET, usually written as {{map|5 8 12}} and {{map|7 11 16}}, respectively. Ask for the equation of the plane defined by the three points (5,8,12), (7,11,16), and the origin (0,0,0) and you'll get -4x + 4y -1z = 0, which clearly shows us the entries of the meantone comma. That's because meantone temperament can be defined by these two maps. 5-limit JI is a 3D space, and meantone temperament, as a rank-2 temperament, would be a 2D plane. But we don't normally need to think of the map corresponding to the origin, where everything is made to vanish, including meantone. So we can just assume it, and think of a 2D plane as being defined by only 2 points, which in a view projected (from the origin) will look like just the line connecting (5,8,12) and (7,11,16).
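That computation is easy to reproduce; here is a rough Wolfram Language sketch (an added illustration, not the article's own method): the cross product of the two maps gives the normal of the plane through them and the origin, and its entries are the meantone comma.
<pre>
(* The normal of the plane through (0,0,0), (5,8,12), and (7,11,16). *)
Cross[{5, 8, 12}, {7, 11, 16}]   (* {-4, 4, -1}, i.e. -4x + 4y - 1z = 0 *)
{-4, 4, -1} . {5, 8, 12}         (* 0: (5,8,12) lies on the plane *)
{-4, 4, -1} . {7, 11, 16}        (* 0: so does (7,11,16) *)
</pre>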
So, we've set the stage for our projective vector spaces. We will now be looking at representations of temperaments as counts of vector sets, and then using this scheme to convert them to primitive geometric forms. We'll place two of each form into the space, representing the two temperaments whose addability is being checked. Then we will observe their possible <span style="color: #3C8031;">''intersections''</span> depending on how they're oriented in space, and it's these <span style="color: #3C8031;">intersections that represent their linear-dependence</span>. When the dimension of the <span style="color: #3C8031;">intersection</span> is then converted back to a vector set count, then we have their <span style="color: #3C8031;">linear-dependence <math>l_{\text{dep}}</math></span>, for this side of duality, anyway (remember, unlike the <span style="color: #B6321C;">linear-independence <math>l_{\text{ind}}</math></span>, this value isn't necessarily the same on both sides of duality). We can finally subtract the <span style="color: #3C8031;">linear-dependence</span> from the grade (vector count) to get the <span style="color: #B6321C;">linear-independence</span>, in order to determine if the two temperaments are addable.
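Here is that recipe as a rough Wolfram Language sketch, working with the matrices themselves rather than the geometric picture (the helper names <code>lDep</code> and <code>lInd</code> are ad hoc, and the comma bases for septimal meantone and septimal blackwood are assumed to be {81/80, 126/125} and {256/243, 64/63}):
<pre>
(* Linear-dependence of two same-grade matrices is the overlap of their
   row spaces; linear-independence is the grade minus that. *)
lDep[a_, b_] := MatrixRank[a] + MatrixRank[b] - MatrixRank[Join[a, b]]
lInd[a_, b_] := MatrixRank[a] - lDep[a, b]  (* assumes both matrices have the same grade *)

meantoneCommas  = { {-4, 4, -1, 0}, {1, 2, -3, 1} };  (* 81/80, 126/125 *)
blackwoodCommas = { {8, -5, 0, 0}, {6, -2, 0, -1} };  (* 256/243, 64/63 *)
lInd[meantoneCommas, blackwoodCommas]                 (* 2, so this pair is not addable *)
</pre>
Getting 2 rather than 1 here is the matrix-side counterpart of the indecomposable multicomma sum we found earlier.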
In these examples, we'll be assuming that no two temperaments being compared are the same, because adding copies of the same temperament is not interesting. The other thing we'll be assuming is that no lines, planes, etc. are parallel to each other; this is due to a strange effect touched upon in footnote 4 whereby temperament geometry that appears parallel in projective space actually still intersects; the present author asks that if anyone is able to demystify this situation, that they please do!
===== At <math>d=3</math> =====
First, let's establish the geometric dimension of the space. With <math>d=3</math>, we've got a 2D space (one less than 3), so the entire space can be visualized on a plane.
[[File:D3 g2 dep1.png|200px|none]]
===== At <math>d=2</math> =====
Let's step back to <math>d=2</math>. Here we've got a 1D space (2 minus 1), so the entire space can be visualized on a single line (one direction corresponds to an increasing ratio between the two coordinates, and the other to a decreasing ratio).
[[File:D2 g1 dep0.png|200px|none]]
===== At <math>d=4</math> =====
First, let's establish the geometric dimension of the space. With <math>d=4</math>, we've got a 3D space (one less than 4), so the entire space can be visualized in a volume.
Let's look at temperaments represented by matrices with 1 vector first (<math>g=1</math>). Yet again, we find ourselves with two separate points, but now we find them in a space that's not a line, not a plane, but a volume. This doesn't change <span style="color: #B6321C;"><math>l_{\text{ind}}</math></span><math> = g = 1</math>, so we're not even going to show it, or any further cases of <math>g=1</math>. These are all addable.
And when <math>g=3</math>, because this is paired with <math>g=1</math> from the min and max values, we should expect to get the same answer as with <math>g=1</math>. And indeed, it will check out that way. Because two <math>g=3</math> temperaments will be planes in this volume, and the intersection of two planes is a line. Which means that <span style="color: #3C8031;"><math>l_{\text{dep}}</math></span><math> = 2</math>. And so <span style="color: #B6321C;"><math>l_{\text{ind}}</math></span><math> = g</math> <math> - </math> <span style="color: #3C8031;"><math>l_{\text{dep}}</math></span> <math>= 3 -</math> <span style="color: #3C8031;"><math>2</math></span> <math>= 1</math>. And here's where our geometric approach begins to pay off! This was the example given at the beginning that might seem unintuitive when relying only on the diagrammatic approach. But here we can see clearly that there would be no way for two planes in a volume to intersect only at a point, which proves the fact that two 7-limit ETs could never make only a single comma in common vanish.
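A rough Wolfram Language sketch confirms this numerically (an added illustration, assuming the patent 7-limit maps {{map|12 19 28 34}} and {{map|19 30 44 53}}):
<pre>
(* The commas made to vanish by both 12-ET and 19-ET in the 7-limit form
   the null space of their stacked maps, and that space is 2-dimensional. *)
Length[NullSpace[{ {12, 19, 28, 34}, {19, 30, 44, 53} }]]   (* 2 *)
</pre>
So the two planes of the diagram really do meet in a full line's worth of shared commas.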
[[File:D4 g3 dep2.png|200px]]
[[File:D4 g2 dep1.png|200px]]
===== At <math>d=5</math> =====
First, let's establish the geometric dimension of the space. With <math>d=5</math>, we've got a 4D space (one less than 5), so the entire space can be visualized in a hypervolume. We've now gone beyond the dimensionality of physical reality, so things get a little harder to conceptualize, unfortunately. But <math>d=5</math> is the first <math>d</math> where we can make an important point about addability, so please bear with us!
So for <math>g_{\text{min}}=2</math> and <math>g_{\text{max}}=3</math> we got two different possibilities for <span style="color: #B6321C;"><math>l_{\text{ind}}</math></span>: 1 and 2, and for each of these two possibilities, we found it twice. We can see then that these match up, that is, that the <math>g_{\text{min}}=2</math> case with <span style="color: #B6321C;"><math>l_{\text{ind}}=1</math></span> matches with the <math>g_{\text{max}}=3</math> case with <span style="color: #B6321C;"><math>l_{\text{ind}}=1</math></span>, and the <span style="color: #B6321C;"><math>l_{\text{ind}}=2</math></span> cases match in the same way.
===== Summary table =====
Here's a summary table of our geometric findings so far:
{| class="wikitable center-all"
|}
==== Algebraic explanation ====
This explanation relies on comparing the results of the multivector and matrix approaches to temperament addition, and showing algebraically how the matrix approach can only achieve the same answer as the multivector approach on the condition that it keeps all but one vector between the added matrices the same, that is, not only are the temperaments addable, but their <span style="color: #3C8031;"><math>L_{\text{dep}}</math></span> appears explicitly in the added matrices.
| colspan="1" rowspan="5" |explicit <span style="color: #3C8031;"><math>L_{\text{dep}}</math></span>
[{{vector|<math>a</math> <math>b</math> <math>c</math>}}]
! rowspan="5" |
| style="background-color: #BED5BA;"|<math>a</math>
!
|}
This second diagram demonstrates this situation for a <math>d=5, g=3</math> case. One pair of the <span style="color: #3C8031;"><math>L_{\text{dep}}</math></span> vectors is explicitly matching, but not the other, which isn't enough.
{| class="wikitable center-all"
|+
| colspan="1" rowspan="7" |explicit <span style="color: #3C8031;"><math>L_{\text{dep}}</math></span>
[{{vector|<math>a</math> <math>b</math> <math>c</math> <math>d</math> <math>e</math>}}
{{vector|<math>f</math> <math>g</math> <math>h</math> <math>i</math> <math>j</math>}}]
| style="background-color: #BED5BA;"|<math>r_1</math>
!
|}
These two examples are by no means a proof, but meditation on the patterns in the variables is at least fairly convincing.
==== Sintel's proof of the <span style="color: #B6321C;">linear-independence</span> conjecture ====
===== Sintel's original text =====
<nowiki>If A and B are mappings from Z^n to Z^m, with n > m, A, B full rank (using A and B as their rowspace equivalently):
m = m</nowiki>
===== Douglas Blumeyer's interpretation =====
We're going to take the strategy of beginning with what we're trying to prove, then reducing it to an obvious equivalence, which will show that our initial statement must be just as true.
<math>\text{nullity}(\text{union}(C_1, C_2)) = \text{nullity}(\text{nullspace}(\text{intersection}(M_1, M_2)))</math>
<math>\text{rank}(M_1) + \text{rank}(M_2) - \text{rank}(\text{intersection}(M_1, M_2)) - r = \text{nullity}(\text{nullspace}(\text{intersection}(M_1, M_2))) - n</math>
<math>\text{nullity}(\text{nullspace}(\text{intersection}(M_1, M_2))) + \text{rank}(\text{intersection}(M_1, M_2)) = d</math>
Now solve Equation D for <math>\text{nullity}(\text{nullspace}(\text{intersection}(M_1, M_2)))</math>, and substitute that result into Equation B:
So we know this is true.
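For readers who prefer a numeric spot check to symbol-pushing, here is a rough Wolfram Language sketch (an added illustration; the helper name <code>lDep</code> is ad hoc, and the comma bases are the ones assumed earlier for septimal meantone and septimal blackwood) confirming, for this one pair, that the linear-independence comes out the same on both sides of duality:
<pre>
(* Spot check: l_ind agrees on both sides of duality for the septimal
   meantone / septimal blackwood pair. NullSpace of a comma basis returns
   vectors spanning the row space of a mapping for the same temperament,
   which is all we need for rank arithmetic. *)
lDep[a_, b_] := MatrixRank[a] + MatrixRank[b] - MatrixRank[Join[a, b]]

c1 = { {-4, 4, -1, 0}, {1, 2, -3, 1} };  (* septimal meantone commas  *)
c2 = { {8, -5, 0, 0}, {6, -2, 0, -1} };  (* septimal blackwood commas *)
m1 = NullSpace[c1];                      (* spans meantone's mapping rows *)
m2 = NullSpace[c2];                      (* spans blackwood's mapping rows *)

MatrixRank[c1] - lDep[c1, c2] == MatrixRank[m1] - lDep[m1, m2]   (* True: both sides give 2 *)
</pre>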
== Glossary ==
* <math>d</math>: [[dimensionality]], the dimension of a temperament's domain
* <math>r</math>: [[rank]], the dimension of a temperament's [[mapping]]
* '''negation''': a mapping is negated when the leading entry of its [[minors|largest-minors]] is negative; a comma basis is negated when the trailing entry of its largest-minors is negative
== Wolfram implementation ==
Temperament arithmetic has been implemented as the functions <code>sum</code> and <code>diff</code> in the [[RTT library in Wolfram Language]].
== Credits ==
This page is mostly the work of [[Douglas Blumeyer]], and he assumes full responsibility for any inaccuracies or other shortcomings here. But he would like to thank [[Mike Battaglia]], [[Dave Keenan]], and [[Sintel]] for the huge amounts of counseling they provided. There's no way this page could have come together without their help. In particular, the page would not exist at all without the original spark of inspiration from Mike.
== Footnotes ==
<references />
[[Category:Terms]]
[[Category:Math]]
[[Category:Pages with proofs]] |