Talk:Interior product

You can take the progressive product of two things with the same variance and get something with greater grade (or equal to the greater input's grade) but the same variance.
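To put that grade claim as a formula (assuming the two inputs are linearly independent, i.e. they share no commas or maps): <math>\operatorname{grade}(A \wedge B) = \operatorname{grade}(A) + \operatorname{grade}(B) \leq d</math>, and the variance of <math>A \wedge B</math> matches that of <math>A</math> and <math>B</math>.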


Progressing two temperaments' multivectors together gives a multivector for the same temperament associated with the comma basis you'd find by concatenating and reducing the comma bases for the same two original temperaments, so that's like [[meet]], but instead of being defined for temperaments in the abstract, it's the operation you perform on multivectors to achieve it. The grade of the output is equal to the sum of the two input multivectors' grades (or less, if they share a comma in common), so it's either the same as or greater than the greater of the two inputs' grades, and it caps out at the dimensionality of the system.
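Here's a concrete 5-limit (d = 3) sketch of that, using meantone (comma 81/80, monzo [-4 4 -1⟩) and augmented (comma 128/125, monzo [7 0 -3⟩). The numbers are standard, but the minor ordering and overall signs are convention-dependent, and NumPy's cross product is used only as a d = 3 shortcut for "wedge, then dual":

<syntaxhighlight lang="python">
import numpy as np

meantone_comma = np.array([-4, 4, -1])   # 81/80 as a monzo (grade-1 multivector)
augmented_comma = np.array([7, 0, -3])   # 128/125 as a monzo

# Progressive product (wedge) of the two monzos: a grade-2 multivector,
# written here as its three 2x2 minors in lexicographic order.
wedge = np.array([
    meantone_comma[0]*augmented_comma[1] - meantone_comma[1]*augmented_comma[0],  # (0,1)
    meantone_comma[0]*augmented_comma[2] - meantone_comma[2]*augmented_comma[0],  # (0,2)
    meantone_comma[1]*augmented_comma[2] - meantone_comma[2]*augmented_comma[1],  # (1,2)
])
print(wedge)  # [-28 19 -12]

# In d = 3 the dual of that bivector is (up to sign) just the cross product,
# which identifies the temperament: 12-ET's map <12 19 28].
print(np.cross(meantone_comma, augmented_comma))  # [-12 -19 -28]
</syntaxhighlight>

That matches the comma-basis picture: stacking 81/80 and 128/125 gives the comma basis of 12-ET, so the meet of meantone and augmented is 12-ET.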


Progressing two temperaments' multicovectors together gives a multicovector for the same temperament associated with the mapping you'd find by concatenating and reducing the mappings for the same two original temperaments, so that's like [[join]], but instead of being defined for temperaments in the abstract, it's the operation you perform on multicovectors to achieve it. The grade of the output works the same way as in the previous statement (although here it's less if they share a mapping row in common).
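Same kind of check from the covariant side, wedging the 5-limit maps of 12-ET and 19-ET (again, minor ordering and signs are convention-dependent; this is just a sketch):

<syntaxhighlight lang="python">
import numpy as np

et12 = np.array([12, 19, 28])  # <12 19 28], a grade-1 multicovector
et19 = np.array([19, 30, 44])  # <19 30 44]

# Progressive product (wedge) of the two maps: a grade-2 multicovector.
wedge = np.array([
    et12[0]*et19[1] - et12[1]*et19[0],  # (0,1): 12*30 - 19*19 = -1
    et12[0]*et19[2] - et12[2]*et19[0],  # (0,2): 12*44 - 28*19 = -4
    et12[1]*et19[2] - et12[2]*et19[1],  # (1,2): 19*44 - 28*30 = -4
])
print(wedge)  # [-1 -4 -4], i.e. <<1 4 4]] up to sign: meantone
</syntaxhighlight>

Which is what concatenating and reducing the two maps gives you too: the rank-2 mapping of meantone, so the join of 12-ET and 19-ET is meantone.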
Regressing two temperaments' multivectors together gives a multivector for the same temperament associated with the mapping you'd find by concatenating and reducing the mappings for the same two original temperaments, so that's like join, but instead of being defined for temperaments in the abstract, it's the operation you perform on multivectors to achieve it. As for the grade of the output: take the case of two multivectors with n = 2 in d = 3. You take both of their duals, so those grades are d − n = 1 each, then wedge those, which sums them to 2, then take the dual of that again, which brings it back to 1. In general it must be equal to or less than the lesser of the two inputs' grades (depending on whether they share a comma in common), and it bottoms out at 0 (you can't go to negative grade; negative grade is a different idea than the sign we're adding to our grade in Wolfram to supply the variance information too in a single handy package).
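Here's the same 12-ET and 19-ET example from the contravariant side. The grade-2 multivectors of 12-ET and 19-ET are the duals of their maps, so in d = 3 the whole dual, wedge, dual chain collapses to a single cross product of the two maps (a d = 3 shortcut only; sign conventions as above):

<syntaxhighlight lang="python">
import numpy as np

# Duals of the two ETs' grade-2 multivectors, i.e. their maps.
dual_of_et12_multivector = np.array([12, 19, 28])
dual_of_et19_multivector = np.array([19, 30, 44])

# Regressive product = dual, wedge, dual. In d = 3 the "wedge, then dual"
# step is the cross product, so the whole chain is:
regressive = np.cross(dual_of_et12_multivector, dual_of_et19_multivector)
print(regressive)  # [-4 4 -1]: the monzo of 81/80, meantone's comma
</syntaxhighlight>

Grade bookkeeping: d − ((d − 2) + (d − 2)) = 3 − 2 = 1, and indeed the output is a single comma, the grade-1 multivector for meantone. So the join of 12-ET and 19-ET is meantone again, just computed on the multivector side.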


Regressing two temperaments' multicovectors together gives a multicovector for the same temperament associated with the comma basis you'd find by concatenating and reducing the comma bases for the same two original temperaments, so that's like meet, but instead of being defined for temperaments in the abstract, it's the operation you perform on multicovectors to achieve it. The grade of the output works the same way as in the previous statement.
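And the mirror image on the covariant side: regressing the grade-2 multicovectors (wedgies) of meantone and augmented. Their duals are the comma monzos, so again dual, wedge, dual reduces to one cross product in d = 3 (same hedges about conventions as before):

<syntaxhighlight lang="python">
import numpy as np

# Duals of the two wedgies, i.e. the temperaments' comma monzos.
dual_of_meantone_wedgie = np.array([-4, 4, -1])   # 81/80
dual_of_augmented_wedgie = np.array([7, 0, -3])   # 128/125

regressive = np.cross(dual_of_meantone_wedgie, dual_of_augmented_wedgie)
print(regressive)  # [-12 -19 -28]: up to sign, <12 19 28], the 12-ET map
</syntaxhighlight>

So the meet of meantone and augmented is 12-ET, the same answer as wedging their monzos above, just reached from the multicovector side.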


So the progressive and regressive products are flip-flopped in that way. And it makes sense, because the regressive product takes the duals of both inputs, so it does the opposite operation to what a straight wedging would do, and then takes the dual again at the end just so you get back something with the variance you put in.