Talk:Interior product
You can take the progressive product of two things with the same variance and get something of greater grade (or equal to the greater of the two input grades) but the same variance.
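
In symbols, assuming the product doesn't vanish, the grade bookkeeping is just <math>\operatorname{grade}(a \wedge b) = \operatorname{grade}(a) + \operatorname{grade}(b) \geq \max(\operatorname{grade}(a), \operatorname{grade}(b))</math>.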


Progressing two temperaments' multivectors together gives a multivector for the same temperament that's associated with the comma basis you'd find by concatenating and reducing the comma bases for the two original temperaments, so that's like [[meet]], but instead of being defined for temperaments in the abstract, it's the operation you perform on multivectors to achieve it. The grade of the output is equal to the sum of the two input multivectors' grades (or less, if they share a comma in common), so it's equal to or greater than the greater of the two inputs' grades, and it caps out at the dimensionality of the system.
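
To make that concrete, here's a minimal sketch (just an illustration, not anyone's reference implementation) of the wedge product on multivectors stored as dicts from sorted basis-index tuples to coefficients, wedging the monzos for 81/80 and 128/125 with d = 3; both commas vanish in 12-ET, so the grade-2 result should be read as the multivector for that temperament, in line with the paragraph above.

<pre>
def sort_sign(indices):
    """Sort basis indices; return (sign of the permutation, sorted tuple).
    The sign is 0 if any index repeats (the blade collapses)."""
    idx = list(indices)
    sign = 1
    for i in range(len(idx)):          # simple bubble sort, counting swaps
        for j in range(len(idx) - 1 - i):
            if idx[j] > idx[j + 1]:
                idx[j], idx[j + 1] = idx[j + 1], idx[j]
                sign = -sign
    if len(set(idx)) != len(idx):
        return 0, tuple(idx)
    return sign, tuple(idx)

def wedge(a, b):
    """Progressive (wedge) product of multivectors given as
    {(sorted basis indices): coefficient} dicts."""
    out = {}
    for ia, ca in a.items():
        for ib, cb in b.items():
            sign, idx = sort_sign(ia + ib)
            if sign:
                out[idx] = out.get(idx, 0) + sign * ca * cb
    return {k: v for k, v in out.items() if v}

# grade-1 multivectors (monzos) for 81/80 and 128/125 in d = 3 (the 5-limit)
meantone_comma  = {(0,): -4, (1,): 4, (2,): -1}
augmented_comma = {(0,): 7, (1,): 0, (2,): -3}

bimonzo = wedge(meantone_comma, augmented_comma)
print(bimonzo)   # {(0, 1): -28, (0, 2): 19, (1, 2): -12}: grade 1 + 1 = 2, capped by d = 3
</pre>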


Progressing two temperaments' multicovectors together gives a multicovector for the same temperament that's associated with the mapping you'd find by concatenating and reducing the mappings for the two original temperaments, so that's like [[join]], but instead of being defined for temperaments in the abstract, it's the operation you perform on multicovectors to achieve it. The grade of the output works the same way as in the previous statement (although in this case it's less if they share a mapping row in common).
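
The same sketch covers the covariant side: reusing wedge() from the snippet above on two vals (the 5-limit patent vals for 12 and 19, used purely as an example) reproduces, up to an overall sign, the familiar meantone multival ⟨⟨1 4 4]].

<pre>
# reusing wedge() from the sketch above, now on grade-1 multicovectors (vals)
val12 = {(0,): 12, (1,): 19, (2,): 28}
val19 = {(0,): 19, (1,): 30, (2,): 44}

print(wedge(val12, val19))   # {(0, 1): -1, (0, 2): -4, (1, 2): -4}, i.e. -1 * ⟨⟨1 4 4]]
</pre>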
Regressing two temperaments' multivectors together gives a multivector for the same temperament that's associated with the mapping you'd find by concatenating and reducing the mappings for the two original temperaments, so that's like join, but instead of being defined for temperaments in the abstract, it's the operation you perform on multivectors to achieve it. The grade of the output is equal to... well, in the case of two multivectors with n = 2 and d = 3, you take both of their duals, so those grades are d - n = 1, then wedge those, which sums the grades, so that's 2, and then take the dual of that again, so it's back to 1. This time it must be equal to or less than the lesser of the two inputs' grades (depending on whether they share a comma in common), and it bottoms out at 0 (you can't go to negative grade; negative grade is a different idea from the sign we're adding to the grade in Wolfram to pack the variance information into the same handy package).
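
Continuing the same toy sketch, the dual-wedge-dual recipe can be written out directly; the complement below is one particular sign convention (a blade times its complement gives the pseudoscalar), so results should be read up to sign. Generically the output grade is <math>\operatorname{grade}(a) + \operatorname{grade}(b) - d</math>, which matches the d = 3, n = 2 walk-through above.

<pre>
def complement(a, d):
    """One convention for the dual: e_S -> sign * e_(complement of S),
    with the sign chosen so that e_S ∧ complement(e_S) = e_(0..d-1)."""
    out = {}
    for idx, c in a.items():
        comp = tuple(i for i in range(d) if i not in idx)
        sign, _ = sort_sign(idx + comp)
        out[comp] = out.get(comp, 0) + sign * c
    return {k: v for k, v in out.items() if v}

def regressive(a, b, d):
    """Regressive (vee) product: dual both inputs, wedge them, dual again."""
    return complement(wedge(complement(a, d), complement(b, d)), d)

# two grade-2 multivectors in d = 3 that share the basis vector e1
a = {(0, 1): 1}              # e0 ∧ e1
b = {(1, 2): 1}              # e1 ∧ e2
print(regressive(a, b, 3))   # {(1,): 1}: grade 2 + 2 - 3 = 1, the shared factor e1
</pre>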


Regressing two temperaments' multicovectors together gives a multicovector for the same temperament that's associated with the comma basis you'd find by concatenating and reducing the comma bases for the two original temperaments, so that's like meet, but instead of being defined for temperaments in the abstract, it's the operation you perform on multicovectors to achieve it. The grade of the output works the same way as in the previous statement.
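
And on the covariant side, a quick check with regressive() from the snippet above: regressing the usual 5-limit multivals for meantone, ⟨⟨1 4 4]], and augmented, ⟨⟨3 0 -7]], should land on the 12-ET val (up to sign), since 12-ET is their meet.

<pre>
meantone  = {(0, 1): 1, (0, 2): 4, (1, 2): 4}    # ⟨⟨1 4 4]]
augmented = {(0, 1): 3, (0, 2): 0, (1, 2): -7}   # ⟨⟨3 0 -7]]

meet = regressive(meantone, augmented, 3)
print(sorted(meet.items()))   # [((0,), -12), ((1,), -19), ((2,), -28)]: -1 * ⟨12 19 28]
</pre>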


So the progressive and regressive products are flip-flopped in that way. And it makes sense, because the regressive product takes the duals of both inputs, so it does the opposite of what a straight wedging would do, and then takes the dual again at the end so that you get back something with the variance you put in.
|+
!operations
!progressive product (AKA wedge product, exterior product)<br>
a ∧ b
!regressive product (AKA vee product)<br>
a ∨ b = ∗(∗a ∧ ∗b)
!right interior product<br>
a ⨽ b = ∗(∗a ∧ b)<br>
examples given where grade(a) ≥ grade(b)
!(left) interior product<br>
a ⨼ b = ∗(a ∧ ∗b)<br>
examples given where grade(a) < grade(b)
!symmetrical interior product<br>
a • b = if grade(a) ≥ grade(b), a ⨽ b; else a ⨼ b
|-
|(in terms of other two interior products)
|}
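
Sticking with the same toy representation, the three interior products in the table above can be sketched directly from wedge() and complement() in the earlier snippets; as with the rest of these snippets, this is just a grade-bookkeeping illustration under one sign convention, not a reference implementation.

<pre>
def grade(a):
    """Grade of a (homogeneous) multivector in this dict representation."""
    return len(next(iter(a))) if a else 0

def right_interior(a, b, d):    # a ⨽ b = ∗(∗a ∧ b), for grade(a) ≥ grade(b)
    return complement(wedge(complement(a, d), b), d)

def left_interior(a, b, d):     # a ⨼ b = ∗(a ∧ ∗b), for grade(a) < grade(b)
    return complement(wedge(a, complement(b, d)), d)

def symmetric_interior(a, b, d):
    """a • b: dispatch on the grades, per the table above."""
    return right_interior(a, b, d) if grade(a) >= grade(b) else left_interior(a, b, d)

# grade check: a grade-2 input against a grade-1 input leaves grade 2 - 1 = 1
a = {(0, 1): 1}                       # e0 ∧ e1
b = {(0,): 1}                         # e0
print(symmetric_interior(a, b, 3))    # {(1,): 1} (up to the sign convention)
</pre>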
== Notation ==
As Cmloegcmluin observes above, the <math>\vee</math> notation is not really standard, and this product is usually written as <math>\alpha \mathbin{\lrcorner} \beta</math>.
That it is 'dual' to the wedge product is too vague (there are at least 3 different notions of duality here). One might say it's the adjoint of the wedge product, as <math>\left\langle \alpha \mathbin{\lrcorner} \gamma , \beta \right\rangle = \left\langle \alpha , \beta \wedge \gamma\right\rangle</math>.
So my suggestion is to just use the standard notation.
– [[User:Sintel|Sintel🎏]] ([[User_talk:Sintel|talk]]) 13:04, 19 April 2025 (UTC)