User:Sintel/Dual Weil-Euclidean norm


Derivation

On some [math]p[/math]-limit subgroup with [math]n[/math] primes, define the [math]n \times n[/math] Tenney weighting matrix [math]W[/math]:

$$ W = \begin{bmatrix} \log_2 2 & 0 & \cdots & 0 \\ 0 & \log_2 3 & \cdots & 0\\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \log_2 p \end{bmatrix} $$

And the row vector [math]j[/math] containing the log-primes (aka the JIP): [math]j = \begin{bmatrix} \log_2 2 & \log_2 3 & \cdots & \log_2 p \\ \end{bmatrix} [/math]

Then the block matrix [math]X[/math] obtained from these:

$$ X = \begin{bmatrix} W \\ \hline j \end{bmatrix} $$

defines an inner product, with positive definite [math]G = X^{\mathsf T} X [/math]:

$$ \left\langle x,y \right\rangle = x^{\mathsf T} X^{\mathsf T} X y = x^{\mathsf T} G y $$

and an induced norm [math]||x|| = \sqrt{\left\langle x,x \right\rangle}[/math], which is the Weil-Euclidean norm.
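
As an illustration, here is a minimal numerical sketch of this construction (assuming numpy, the 5-limit as an example subgroup, and 81/80 as an example interval):

```python
import numpy as np

# Example subgroup: the 5-limit primes 2, 3, 5
primes = np.array([2, 3, 5])
n = len(primes)

# Tenney weighting matrix W and the JIP row vector j
W = np.diag(np.log2(primes))
j = np.log2(primes).reshape(1, n)

# Stack W and j into the (n+1) x n block matrix X, then form G = X^T X
X = np.vstack([W, j])
G = X.T @ X

# Weil-Euclidean norm of a monzo, e.g. 81/80 = |-4 4 -1>
x = np.array([-4, 4, -1])
print(np.sqrt(x @ G @ x))
```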

The inner product on the dual space can then be derived by simply inverting [math]G[/math][1], which gives the dual norm:

$$ \left\langle \alpha, \beta \right\rangle^{\ast} = \alpha G^{-1} \beta^{\mathsf T} \\ ||\alpha||^{\ast} = \sqrt{\left\langle \alpha,\alpha \right\rangle^{\ast}} = \sqrt{\alpha G^{-1} \alpha^{\mathsf T}} $$
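
Continuing the sketch above, the dual norm of a covector can be evaluated by inverting [math]G[/math] numerically; the 12edo patent val is used here purely as an example:

```python
import numpy as np

primes = np.array([2, 3, 5])
n = len(primes)
logs = np.log2(primes)

# G = W^2 + j^T j for the 5-limit example
G = np.diag(logs) @ np.diag(logs) + np.outer(logs, logs)

# Dual norm of a covector, e.g. the 12edo patent val <12 19 28|
a = np.array([12, 19, 28])
G_inv = np.linalg.inv(G)          # numerical inverse; a closed form is derived below
print(np.sqrt(a @ G_inv @ a))
```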

The goal is now to find an expression for [math]G^{-1}[/math].

First, note that:

$$ G = X^{\mathsf T} X = W^2 + j^{\mathsf T}j $$

Since the outer product [math]j^{\mathsf T}j[/math] is rank-1 we can use a theorem on the inverse of matrix sums which states: [2]

If [math]A[/math] and [math]A+B[/math] are invertible, and [math]B[/math] has rank 1, then with [math]g = \text{tr}(BA^{-1})[/math] we have [math]g \neq -1[/math] and
[math](A+B)^{-1} = A^{-1} - \frac{1}{1+g}A^{-1}BA^{-1}[/math]
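
As a quick sanity check, the identity can be verified numerically on an arbitrary invertible [math]A[/math] and a random rank-1 [math]B[/math] (the matrices below are just an example):

```python
import numpy as np

rng = np.random.default_rng(0)

# Random A, shifted by a multiple of the identity to keep it well-conditioned,
# and a random rank-1 B = u v^T
A = rng.normal(size=(4, 4)) + 4 * np.eye(4)
B = np.outer(rng.normal(size=4), rng.normal(size=4))

A_inv = np.linalg.inv(A)
g = np.trace(B @ A_inv)

# Rank-1 update formula vs. direct inversion
lhs = np.linalg.inv(A + B)
rhs = A_inv - (1.0 / (1.0 + g)) * A_inv @ B @ A_inv
print(np.allclose(lhs, rhs))   # True
```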

Identifying [math]A = W^2[/math] and [math]B = j^{\mathsf T}j[/math], we can see that

$$ G^{-1} = (W^2 + j^{\mathsf T}j)^{-1} = W^{-2} - \frac{1}{1+g} W^{-2}j^{\mathsf T}jW^{-2} $$

Now let [math]l = \begin{bmatrix} \frac{1}{\log_2 2} & \frac{1}{\log_2 3} & \cdots & \frac{1}{\log_2 p} \\ \end{bmatrix} [/math], then

$$ l = jW^{-2} \\ G^{-1} = W^{-2} - \frac{1}{1+g} l^{\mathsf T}l $$

Now we only need to find [math]g[/math]. The trace of a product of two matrices, one of which is symmetric, equals the sum of the elements of their Hadamard (elementwise) product. Since [math]W^{-2}[/math] is diagonal, we get:

$$ j^{\mathsf T}j \circ W^{-2} = I_n \\ g = \text{tr}(BA^{-1}) = \text{tr}(j^{\mathsf T}j W^{-2}) = n $$

This leads to the final expression:

$$ G^{-1} = W^{-2} - \frac{1}{n+1} l^{\mathsf T}l $$
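
A short numerical check of this closed form against direct inversion of [math]G[/math] (taking the 7-limit as an arbitrary example):

```python
import numpy as np

primes = np.array([2, 3, 5, 7])    # arbitrary example: the 7-limit
n = len(primes)
logs = np.log2(primes)

W = np.diag(logs)
j = logs.reshape(1, n)
l = (1.0 / logs).reshape(1, n)     # l = j W^{-2}

G = W @ W + j.T @ j
closed_form = np.diag(1.0 / logs**2) - (1.0 / (n + 1)) * l.T @ l
print(np.allclose(np.linalg.inv(G), closed_form))   # True
```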

Relation to other metrics

Graham Breed gives the following formula (adapted for the notation introduced here):[3]

$$ \begin{aligned} G_a(\lambda) &= W^{-2} - \lambda \frac{W^{-2}j^{\mathsf T}jW^{-2}}{jW^{-2}j^{\mathsf T}} \\ &= W^{-2} - \lambda \frac{l^{\mathsf T}l}{n} \end{aligned} $$

So this is equivalent to [math]G^{-1}[/math] when we pick [math]\lambda = \frac{n}{n+1}[/math].
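
This equivalence is straightforward to confirm numerically (a sketch, with an arbitrary choice of prime limit):

```python
import numpy as np

primes = np.array([2, 3, 5, 7, 11])   # arbitrary example limit
n = len(primes)
logs = np.log2(primes)

W2_inv = np.diag(1.0 / logs**2)       # W^{-2}
l = (1.0 / logs).reshape(1, n)

G_inv = W2_inv - (1.0 / (n + 1)) * l.T @ l

def G_a(lam):
    # Breed's cross-weighted metric, in the notation used on this page
    return W2_inv - lam * l.T @ l / n

print(np.allclose(G_a(n / (n + 1)), G_inv))   # True
```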

His parametric badness is given by:[4]

$$ \begin{aligned} G_b(E) &= \frac{W^{-2}}{jW^{-2}j^{\mathsf T}} (1+E^2) - \frac{W^{-2}j^{\mathsf T}jW^{-2}}{(jW^{-2}j^{\mathsf T})^2} \\ &= \frac{W^{-2}}{n} (1+E^2) - \frac{l^{\mathsf T}l}{n^2} \end{aligned} $$

Since the metric is only defined up to an overall scaling, we can multiply by [math]\frac{n}{1+E^2}[/math] to obtain:

$$ G^{\prime}_b(E) = W^{-2} - \frac{1}{n(1+E^2)}l^{\mathsf T}l $$

Again, this is equivalent to [math]G^{-1}[/math] when we pick [math]E = \sqrt{\frac{1}{n}}[/math].
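
The same kind of check works here (again a sketch with an arbitrary prime limit):

```python
import numpy as np

primes = np.array([2, 3, 5, 7])   # arbitrary example limit
n = len(primes)
logs = np.log2(primes)

W2_inv = np.diag(1.0 / logs**2)
l = (1.0 / logs).reshape(1, n)

G_inv = W2_inv - (1.0 / (n + 1)) * l.T @ l

def G_b_scaled(E):
    # The rescaled parametric-badness metric G'_b(E) from above
    return W2_inv - l.T @ l / (n * (1 + E**2))

print(np.allclose(G_b_scaled(np.sqrt(1 / n)), G_inv))   # True
```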

Generalized norm

For some parameter [math]k \gt 0[/math], set:

$$ X_k = \begin{bmatrix} W \\ \hline k\cdot j \end{bmatrix}\\ G(k) = X_k^{\mathsf T} X_k = W^2 + k^2j^{\mathsf T}j $$

Going through the same derivation, we find:

$$ G^{-1}(k) = W^{-2} - \frac{k^2}{nk^2+1} l^{\mathsf T}l $$

Comparing this with [math]G^{\prime}_b(E)[/math] above leads to a simple relation between [math]k[/math] and [math]E[/math]:

$$ nk^2E^2 = 1\\ E = \sqrt{\frac{1}{nk^2}}\\ k = \sqrt{\frac{1}{nE^2}} $$
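
These relations can also be checked numerically; the value of [math]k[/math] below is an arbitrary example:

```python
import numpy as np

primes = np.array([2, 3, 5])      # arbitrary example limit
n = len(primes)
logs = np.log2(primes)

W = np.diag(logs)
j = logs.reshape(1, n)
l = (1.0 / logs).reshape(1, n)

def G_inv_k(k):
    # Closed form for G^{-1}(k) derived above
    return np.diag(1.0 / logs**2) - (k**2 / (n * k**2 + 1)) * l.T @ l

def G_b_scaled(E):
    # Rescaled parametric-badness metric from the previous section
    return np.diag(1.0 / logs**2) - l.T @ l / (n * (1 + E**2))

k = 0.7                            # arbitrary positive parameter
E = np.sqrt(1 / (n * k**2))        # corresponding E from n k^2 E^2 = 1

print(np.allclose(G_inv_k(k), np.linalg.inv(W @ W + k**2 * j.T @ j)))   # True
print(np.allclose(G_inv_k(k), G_b_scaled(E)))                           # True
```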

References

  1. Taking the natural map to the dual space [math]\Gamma: V\to V^{\ast}: x \mapsto \left\langle x, \cdot \right\rangle[/math], we require [math]\left\langle \Gamma(x),\Gamma(y) \right\rangle^{\ast} = \left\langle x,y \right\rangle[/math].
  2. Miller, K. S. (1981). On the Inverse of the Sum of Matrices. Mathematics Magazine, 54(2), 67–72. https://doi.org/10.2307/2690437
  3. See formula (16) in section 3.1, "Cross-Weighted Metrics", in Breed, G. (2008). RMS-Based Error and Complexity Measures Involving Composite Intervals. http://x31eq.com/composite.pdf
  4. Breed, G. (2016). http://x31eq.com/badness.pdf