Linear algebra formalism
Aspects of tuning theory are often described in the language of linear algebra. This is because the space of just intervals (and, as it turns out, the space of radical intervals) constitutes a vector space. This can be verified by checking that intervals satisfy the vector space axioms:
- Because stacking corresponds to multiplication of rational numbers:
  - Stacking intervals is associative. For example, (3/2 * 5/4) * 2/1 is the same as 3/2 * (5/4 * 2/1).
  - Stacking intervals is commutative. For example, 3/2 * 5/4 is the same as 5/4 * 3/2.
  - The unison, 1/1, is the identity for stacking: for any interval v, 1/1 * v = v.
  - Every interval has a descending counterpart which, when stacked with that interval, produces the unison (1/1). For example, 5/4 * 4/5 = 1/1.
- Because exponentiation by integers is well-defined for rational numbers:
  - Stacking x copies of an interval, then stacking y copies of the result, gives the same interval in either order. For example, ((3/2)^2)^3 is the same as ((3/2)^3)^2.
  - Stacking an interval only once just gives that interval. For example, (3/2)^1 = 3/2.
  - More mathematically, since exponentiation distributes over multiplication, for intervals u and v and integer exponents x and y:
    - (u * v)^x = u^x * v^x,
    - v^(x+y) = v^x * v^y.
Note that this is the fundamental definition of what it means for something to be "a vector"; vectors are defined as objects in spaces where these axioms apply.
Additionally, the vector space axioms include the group axioms, so the just intervals under stacking can also be considered a group.
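The axioms listed above can be checked numerically. Here is a minimal sketch (not from the original text) that verifies them for a few just intervals using exact rational arithmetic:

```python
# Check the vector space axioms for just intervals, using exact
# rational arithmetic so there is no floating-point error.
from fractions import Fraction as F

fifth, third, octave = F(3, 2), F(5, 4), F(2, 1)

# Stacking (multiplication) is associative and commutative:
assert (fifth * third) * octave == fifth * (third * octave)
assert fifth * third == third * fifth

# The unison 1/1 is the identity, and every interval has an inverse:
assert F(1, 1) * third == third
assert third * F(4, 5) == F(1, 1)

# Integer "scalar" exponents commute and distribute:
assert (fifth ** 2) ** 3 == (fifth ** 3) ** 2
assert (fifth * third) ** 2 == fifth ** 2 * third ** 2
assert fifth ** (2 + 3) == fifth ** 2 * fifth ** 3
```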
Note that what we've described as multiplication is actually vector addition, and what we've described as exponentiation is actually multiplication of a vector (the interval) by a scalar (the exponent). Additionally, the unison is actually a zero vector. This makes sense if we think of intervals logarithmically, where multiplication of ratios becomes addition of cent values, the unison is 0 cents, and exponents become scale factors.
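The logarithmic picture can be made concrete with the standard cents formula, 1200 · log2(ratio) (a sketch, assuming only that definition):

```python
# Multiplying ratios corresponds to adding cent values, the unison is
# 0 cents, and integer exponents become scale factors.
import math

def cents(ratio):
    """Logarithmic size of a frequency ratio, in cents."""
    return 1200 * math.log2(ratio)

# Vector addition <-> multiplication of ratios:
assert math.isclose(cents(3/2 * 5/4), cents(3/2) + cents(5/4))
# Zero vector <-> the unison:
assert cents(1/1) == 0
# Scalar multiplication <-> exponentiation:
assert math.isclose(cents((3/2) ** 4), 4 * cents(3/2))
```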
Conventionally, vectors are notated as lists of numbers representing coordinates in space. In linear algebra, these coordinates are interpreted as scale factors on several arbitrarily chosen "basis vectors" representing the space, which are scaled and added together to produce the vector in question. The standard and most intuitive way of notating intervals as vectors is to take the entries of that interval's monzo and interpret them as vector coordinates, where each basis vector represents a different generator in the monzo's subgroup (usually a prime harmonic). Because the entries of a monzo represent exponents on a prime, the intuition of stacking as addition carries over. For example, the interval 5/4, which has a monzo of 2.3.5 [-2 0 1], may be interpreted as the vector [-2 0 1⟩, where we use an angle bracket ⟩ on the right to indicate that it is a vector. This notation is called the generator-count vector, and may in this case be called the prime-count vector, since the generators are the primes. However, it is also common to call the vector itself a monzo (and hence, terms like "eigenmonzo").
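The monzo-to-vector correspondence can be computed directly. The helper below is a hypothetical illustration (the name `monzo` and the trial-division approach are not from the original text): it factorizes a ratio over the primes of a chosen subgroup and returns the exponents as vector coordinates.

```python
# Compute the prime-count vector (monzo) of a ratio by trial division
# over the primes of a chosen subgroup. `monzo` is a hypothetical helper.
from fractions import Fraction

def monzo(ratio, primes=(2, 3, 5)):
    """Exponent of each prime in the factorization of `ratio`."""
    ratio = Fraction(ratio)
    exponents = []
    for p in primes:
        e = 0
        while ratio.numerator % p == 0:
            ratio /= p
            e += 1
        while ratio.denominator % p == 0:
            ratio *= p
            e -= 1
        exponents.append(e)
    assert ratio == 1, "ratio lies outside the chosen subgroup"
    return exponents

print(monzo(Fraction(5, 4)))  # → [-2, 0, 1]
```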
Mappings and matrices
A temperament mapping, in linear algebra, is represented by a matrix. A matrix is a grid of numbers, written like so:
[math]\displaystyle{ \begin{bmatrix} 1 & 0 & -4\\ 0 & 1 & 4 \end{bmatrix} }[/math]
This matrix can be thought of as a "function" that you apply to a vector (in this case, a prime-count vector) to get out another vector (here, a generator-count vector in the tempered space). This matrix has 3 columns, meaning the vector it takes as an "input" will have 3 elements (representing a rank-3 subgroup of JI), and it has 2 rows, meaning the vector you get out will have 2 elements (representing a rank-2 temperament). So, this is a "function" down from 3-dimensional space to 2-dimensional space. The columns tell us where to find the generators of our original subgroup in the space of tempered intervals. One of the key properties of linear algebra is that knowing the columns alone lets us tell where every other interval ends up, which is why regular temperaments, as opposed to irregular ones, receive so much attention in xenharmonic theory.
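The "columns determine everything" point can be sketched as follows (an illustration, not from the original text): once we know where the mapping above sends each prime, the image of any interval is the same linear combination of those columns as its monzo.

```python
# The image of any interval is determined by where the generators
# (here, the primes 2, 3, 5) are sent -- i.e. by the matrix columns.
cols = {2: (1, 0), 3: (0, 1), 5: (-4, 4)}  # columns of the matrix above
primes = (2, 3, 5)

def image(monzo):
    """Linear combination of the columns, weighted by the monzo entries."""
    x = sum(e * cols[p][0] for e, p in zip(monzo, primes))
    y = sum(e * cols[p][1] for e, p in zip(monzo, primes))
    return (x, y)

# 5/4 has monzo [-2, 0, 1]:
print(image([-2, 0, 1]))  # → (-6, 4)
```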
What follows is an explanation of several matrix operations in the mathematical language.
Matrix operations
Dot product
The dot product is a way to combine two vectors to get out a single number. Say we want to take the dot product of the vectors [math]\displaystyle{ \begin{pmatrix}12\\19\\28\end{pmatrix} }[/math] and [math]\displaystyle{ \begin{pmatrix}-2\\0\\1\end{pmatrix} }[/math]. To do so, follow these steps:
- Write the vectors separated by a dot to denote the dot product: [math]\displaystyle{
\begin{pmatrix}
12\\
19\\
28\\
\end{pmatrix}
\cdot
\begin{pmatrix}
-2\\
0\\
1
\end{pmatrix}
}[/math]
- This may also be notated [math]\displaystyle{ \langle 12, 19, 28 \vert -2, 0, 1\rangle }[/math]; from this derives the notation for vals and monzos.
- Multiply the corresponding elements, and add the results together: [math]\displaystyle{ \left(12\cdot-2\right)+\left(19\cdot0\right)+\left(28\cdot1\right) = -24 + 0 + 28 = 4 }[/math]
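The steps above amount to a few lines of plain Python (a minimal sketch; the function name `dot` is our own):

```python
# Dot product: multiply corresponding elements, add the results.
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

print(dot([12, 19, 28], [-2, 0, 1]))  # → 4
```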
Multiply matrix by vector
We write the "application" of a matrix like so:
[math]\displaystyle{ \begin{bmatrix} 1 & 0 & -4\\ 0 & 1 & 4 \end{bmatrix} \begin{bmatrix} -2\\ 0\\ 1 \end{bmatrix} }[/math]
where the second object is the vector.
To write the first element of our output, we take the dot product of the first row of our matrix with our vector: [math]\displaystyle{ \begin{pmatrix} 1\\ 0\\ -4\\ \end{pmatrix} \cdot \begin{pmatrix} -2\\ 0\\ 1 \end{pmatrix} = \left(1\cdot-2\right)+\left(0\cdot0\right)+\left(-4\cdot1\right) = -2 + 0 - 4 = -6 }[/math]
We do the same thing for the second element of our output, computing [math]\displaystyle{ \begin{pmatrix} 0\\ 1\\ 4\\ \end{pmatrix} \cdot \begin{pmatrix} -2\\ 0\\ 1 \end{pmatrix} = 4 }[/math].
Thus, our output is [math]\displaystyle{ \begin{bmatrix} -6\\ 4\\ \end{bmatrix} }[/math] .
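The same computation can be sketched in code (the function name `mat_vec` is our own): each output element is the dot product of one matrix row with the input vector.

```python
# Matrix-vector multiplication: one dot product per row of the matrix.
def mat_vec(M, v):
    return [sum(a * b for a, b in zip(row, v)) for row in M]

M = [[1, 0, -4],
     [0, 1, 4]]
print(mat_vec(M, [-2, 0, 1]))  # → [-6, 4]
```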
Multiply matrix by matrix
A matrix can act on another matrix as well. In this case, the matrix on the right is simply treated as several column vectors placed side by side; the columns of the product are the results of applying the left matrix to each of them.
[math]\displaystyle{ \begin{bmatrix} 1 & 0 & -4\\ 0 & 1 & 4 \end{bmatrix} \begin{bmatrix} 1 & -1 & -2\\ 0 & 1 & 0\\ 0 & 0 & 1 \end{bmatrix} = \begin{bmatrix} 1 & -1 & -6\\ 0 & 1 & 4 \end{bmatrix} }[/math]
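A short sketch of the computation above (the function name `mat_mul` is our own):

```python
# Matrix-matrix multiplication: apply the left-hand matrix to each
# column of the right-hand matrix, treated as a separate vector.
def mat_mul(A, B):
    B_cols = list(zip(*B))  # columns of B
    return [[sum(a * b for a, b in zip(row, col)) for col in B_cols]
            for row in A]

A = [[1, 0, -4],
     [0, 1, 4]]
B = [[1, -1, -2],
     [0, 1, 0],
     [0, 0, 1]]
print(mat_mul(A, B))  # → [[1, -1, -6], [0, 1, 4]]
```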
Multiply row vector by matrix
Multiplying a row vector by a matrix relates to the dot product as matrix-matrix multiplication relates to matrix-vector multiplication: take the dot product of the row vector with each successive column of the matrix, and write the results as another row vector. Any matrix-vector operation can be rewritten in this form by swapping rows and columns (transposing). The two forms are distinguished because it is conventional to represent certain objects as column vectors and others as row vectors (e.g. monzos and vals, respectively); vectors represented as rows are called "covectors".
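As a hypothetical example (not from the original text, though the val ⟨12 19 28] appears earlier in this article), we can multiply that val, as a row vector, by a matrix whose columns are the monzos of 3/2 and 5/4, obtaining how many steps of 12edo each interval spans:

```python
# Row vector times matrix: dot the row with each successive column.
def row_times_matrix(row, M):
    return [sum(a * b for a, b in zip(row, col)) for col in zip(*M)]

val = [12, 19, 28]
M = [[-1, -2],   # exponents of 2 in 3/2 and 5/4
     [1, 0],     # exponents of 3
     [0, 1]]     # exponents of 5
print(row_times_matrix(val, M))  # → [7, 4]
```

That is, 3/2 spans 7 steps and 5/4 spans 4 steps, consistent with the dot product computed earlier.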