User:Sintel/Dual Weil-Euclidean norm

From Xenharmonic Wiki

Derivation

On some [math]\displaystyle{ p }[/math]-limit subgroup with [math]\displaystyle{ n }[/math] primes, define the [math]\displaystyle{ n \times n }[/math] Tenney weighting matrix [math]\displaystyle{ W }[/math]:

[math]\displaystyle{ W = \begin{bmatrix} \log_2 2 & 0 & \cdots & 0 \\ 0 & \log_2 3 & \cdots & 0\\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \log_2 p \end{bmatrix} }[/math]

And the row vector [math]\displaystyle{ j }[/math] containing the log-primes (aka the JIP): [math]\displaystyle{ j = \begin{bmatrix} \log_2 2 & \log_2 3 & \cdots & \log_2 p \\ \end{bmatrix} }[/math]

The block matrix [math]\displaystyle{ X }[/math] obtained by stacking these:

[math]\displaystyle{ X = \begin{bmatrix} W \\ \hline j \end{bmatrix} }[/math]

defines an inner product, with positive definite [math]\displaystyle{ G = X^{\mathsf T} X }[/math]:

[math]\displaystyle{ \left\langle x,y \right\rangle = x^{\mathsf T} X^{\mathsf T} X y = x^{\mathsf T} G y }[/math]

and an induced norm [math]\displaystyle{ ||x|| = \sqrt{\left\langle x,x \right\rangle} }[/math], which is the Weil-Euclidean norm.
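As a numerical sanity check, the construction above can be sketched in Python with NumPy (the 5-limit primes 2, 3, 5 and the interval 3/2 are illustrative choices, not part of the derivation):

```python
import numpy as np

# 5-limit example: primes 2, 3, 5 (an illustrative small case)
primes = np.array([2.0, 3.0, 5.0])
logp = np.log2(primes)           # log-primes
W = np.diag(logp)                # Tenney weighting matrix
j = logp.reshape(1, -1)          # the JIP as a row vector

X = np.vstack([W, j])            # block matrix X = [W; j]
G = X.T @ X                      # Gram matrix of the inner product

x = np.array([-1.0, 1.0, 0.0])   # monzo of 3/2 = 2^-1 * 3^1
norm = np.sqrt(x @ G @ x)        # Weil-Euclidean norm of 3/2
```

Since [math]\displaystyle{ X^{\mathsf T} X = W^2 + j^{\mathsf T}j }[/math], the computed [math]\displaystyle{ G }[/math] agrees with the expression derived below.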

The inner product on the dual space can then be derived by simply inverting [math]\displaystyle{ G }[/math],[1] which gives the dual norm:

[math]\displaystyle{ \left\langle \alpha, \beta \right\rangle^{\ast} = \alpha G^{-1} \beta^{\mathsf T} }[/math]
[math]\displaystyle{ ||\alpha||^{\ast} = \sqrt{\left\langle \alpha,\alpha \right\rangle^{\ast}} = \sqrt{\alpha G^{-1} \alpha^{\mathsf T}} }[/math]

The goal is now to find an expression for [math]\displaystyle{ G^{-1} }[/math].

First, note that:

[math]\displaystyle{ G = X^{\mathsf T} X = W^2 + j^{\mathsf T}j }[/math]

Since the outer product [math]\displaystyle{ j^{\mathsf T}j }[/math] is rank-1 we can use a theorem on the inverse of matrix sums which states: [2]

If [math]\displaystyle{ A }[/math] and [math]\displaystyle{ A+B }[/math] are invertible, and [math]\displaystyle{ B }[/math] has rank 1, let [math]\displaystyle{ g = \text{tr}(BA^{-1}) }[/math]. Then [math]\displaystyle{ g \neq -1 }[/math] and
[math]\displaystyle{ (A+B)^{-1} = A^{-1} - \frac{1}{1+g}A^{-1}BA^{-1} }[/math]
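This identity (a form of the Sherman-Morrison formula) can be checked numerically; the random matrices below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = np.diag(rng.uniform(1.0, 2.0, n))  # invertible (positive diagonal) A
u = rng.standard_normal((n, 1))
B = u @ u.T                            # rank-1 matrix

g = np.trace(B @ np.linalg.inv(A))     # g = tr(B A^-1)
lhs = np.linalg.inv(A + B)
rhs = (np.linalg.inv(A)
       - (1.0 / (1.0 + g)) * np.linalg.inv(A) @ B @ np.linalg.inv(A))
```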

Identifying [math]\displaystyle{ A = W^2 }[/math] and [math]\displaystyle{ B = j^{\mathsf T}j }[/math], we can see that

[math]\displaystyle{ G^{-1} = (W^2 + j^{\mathsf T}j)^{-1} = W^{-2} - \frac{1}{1+g} W^{-2}j^{\mathsf T}jW^{-2} }[/math]

Now let [math]\displaystyle{ l = \begin{bmatrix} \frac{1}{\log_2 2} & \frac{1}{\log_2 3} & \cdots & \frac{1}{\log_2 p} \\ \end{bmatrix} }[/math], then

[math]\displaystyle{ l = jW^{-2} }[/math]
[math]\displaystyle{ G^{-1} = W^{-2} - \frac{1}{1+g} l^{\mathsf T}l }[/math]

Now we only need to find [math]\displaystyle{ g }[/math]. The trace of a product of two symmetric matrices is equal to the sum of the elements of their Hadamard (elementwise) product. Since

[math]\displaystyle{ j^{\mathsf T}j \circ W^{-2} = I_n }[/math]
[math]\displaystyle{ g = \text{tr}(BA^{-1}) = \text{tr}(j^{\mathsf T}j W^{-2}) = n }[/math]

This leads to the final expression:

[math]\displaystyle{ G^{-1} = W^{-2} - \frac{1}{n+1} l^{\mathsf T}l }[/math]
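The closed form can be verified against direct matrix inversion, again sketched with NumPy in the 5-limit (an illustrative choice):

```python
import numpy as np

primes = np.array([2.0, 3.0, 5.0])    # 5-limit, n = 3
n = len(primes)
logp = np.log2(primes)
W = np.diag(logp)
j = logp.reshape(1, -1)               # the JIP
l = (1.0 / logp).reshape(1, -1)       # l = j W^-2

G = W @ W + j.T @ j
Winv2 = np.diag(1.0 / logp**2)        # W^-2

g = np.trace(j.T @ j @ Winv2)         # should equal n
G_inv = Winv2 - l.T @ l / (n + 1)     # closed-form inverse
```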

Relation to other metrics

Graham Breed gives the following formula (adapted for the notation introduced here):[3]

[math]\displaystyle{ \begin{aligned} G_a(\lambda) &= W^{-2} - \lambda \frac{W^{-2}j^{\mathsf T}jW^{-2}}{jW^{-2}j^{\mathsf T}} \\ &= W^{-2} - \lambda \frac{l^{\mathsf T}l}{n} \end{aligned} }[/math]

So this is equivalent to [math]\displaystyle{ G^{-1} }[/math] when we pick [math]\displaystyle{ \lambda = \frac{n}{n+1} }[/math].

His parametric badness is given by:[4]

[math]\displaystyle{ \begin{aligned} G_b(E) &= \frac{W^{-2}}{jW^{-2}j^{\mathsf T}} (1+E^2) - \frac{W^{-2}j^{\mathsf T}jW^{-2}}{(jW^{-2}j^{\mathsf T})^2} \\ &= \frac{W^{-2}}{n} (1+E^2) - \frac{l^{\mathsf T}l}{n^2} \end{aligned} }[/math]

Since metrics that differ only by scaling are equivalent, we multiply by [math]\displaystyle{ \frac{n}{1+E^2} }[/math] to obtain:

[math]\displaystyle{ G^{\prime}_b(E) = W^{-2} - \frac{1}{n(1+E^2)}l^{\mathsf T}l }[/math]

Again, this is equivalent to [math]\displaystyle{ G^{-1} }[/math] when we pick [math]\displaystyle{ E = \sqrt{\frac{1}{n}} }[/math].
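This equivalence can also be confirmed numerically (5-limit values are again an illustrative choice):

```python
import numpy as np

primes = np.array([2.0, 3.0, 5.0])
n = len(primes)
logp = np.log2(primes)
Winv2 = np.diag(1.0 / logp**2)
l = (1.0 / logp).reshape(1, -1)

E = np.sqrt(1.0 / n)                                # pick E = sqrt(1/n)
Gb_prime = Winv2 - l.T @ l / (n * (1.0 + E**2))     # Breed's rescaled metric
G_inv = Winv2 - l.T @ l / (n + 1)                   # dual Weil-Euclidean metric
```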

Generalized norm

For some parameter [math]\displaystyle{ k \gt 0 }[/math], set:

[math]\displaystyle{ X_k = \begin{bmatrix} W \\ \hline k\cdot j \end{bmatrix} }[/math]
[math]\displaystyle{ G(k) = X_k^{\mathsf T} X_k = W^2 + k^2j^{\mathsf T}j }[/math]

Going through the same derivation, we find:

[math]\displaystyle{ G^{-1}(k) = W^{-2} - \frac{k^2}{nk^2+1} l^{\mathsf T}l }[/math]

This leads to a simple relation to [math]\displaystyle{ E }[/math]:

[math]\displaystyle{ \begin{aligned} nk^2E^2 &= 1\\ E &= \sqrt{\frac{1}{nk^2}}\\ k &= \sqrt{\frac{1}{nE^2}} \end{aligned} }[/math]
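The generalized inverse and the relation between [math]\displaystyle{ k }[/math] and [math]\displaystyle{ E }[/math] can be checked the same way (the value [math]\displaystyle{ k = 0.7 }[/math] is an arbitrary illustrative choice):

```python
import numpy as np

primes = np.array([2.0, 3.0, 5.0])
n = len(primes)
logp = np.log2(primes)
W = np.diag(logp)
j = logp.reshape(1, -1)
l = (1.0 / logp).reshape(1, -1)

k = 0.7                                             # arbitrary k > 0
Gk = W @ W + k**2 * (j.T @ j)                       # G(k)
Gk_inv = (np.diag(1.0 / logp**2)
          - (k**2 / (n * k**2 + 1)) * (l.T @ l))    # closed-form G^-1(k)
E = np.sqrt(1.0 / (n * k**2))                       # corresponding E
```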

References

  1. Taking the natural map to the dual space [math]\displaystyle{ \Gamma: V\to V^{\ast}: x \mapsto \left\langle x, \cdot \right\rangle }[/math], we require [math]\displaystyle{ \left\langle \Gamma(x),\Gamma(y) \right\rangle^{\ast} = \left\langle x,y \right\rangle }[/math].
  2. Miller, K. S. (1981). On the Inverse of the Sum of Matrices. Mathematics Magazine, 54(2), 67–72. https://doi.org/10.2307/2690437
  3. See formula (16) in section 3.1 "Cross-Weighted Metrics" in Breed, G. (2008). RMS-Based Error and Complexity Measures Involving Composite Intervals. http://x31eq.com/composite.pdf
  4. Breed, G. (2016). http://x31eq.com/badness.pdf