Talk:Tenney–Euclidean tuning: Difference between revisions
:::So I take back one thing I said just above: you ''could'' call it “TE-normed error/tuning”, I suppose. Because “Tenney” isn’t bound to weight in the same way that “Euclidean” is bound to norm. Tenney just means divide by log of prime. So in that context it is being used for a norm, not a weight. --[[User:Cmloegcmluin|Cmloegcmluin]] ([[User talk:Cmloegcmluin|talk]]) 08:14, 27 January 2022 (UTC)
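To make the distinction above concrete, here is a minimal sketch (not part of the original discussion) of the two pieces being named: “Tenney” supplies the weights, 1/log2(p) for each prime p, and “Euclidean” supplies the norm, the ordinary L2 norm applied to the weighted monzo. The function names <code>tenney_weights</code> and <code>te_norm</code> are illustrative only.

<syntaxhighlight lang="python">
import math

def tenney_weights(primes):
    # Tenney weighting: divide each monzo coordinate by log2 of its prime
    return [1.0 / math.log2(p) for p in primes]

def te_norm(monzo, primes):
    # Euclidean (L2) norm of the Tenney-weighted monzo
    w = tenney_weights(primes)
    return math.sqrt(sum((m * wi) ** 2 for m, wi in zip(monzo, w)))

# Example: 5/4 = 2^-2 * 5^1, i.e. monzo [-2, 0, 1] over primes (2, 3, 5)
print(te_norm([-2, 0, 1], (2, 3, 5)))  # about 2.05
</syntaxhighlight>

Swapping the L2 norm for another norm while keeping the same weights would change only the “Euclidean” half of the name, which is the point being made above.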
:::: Whoa, your table is really enlightening! I mostly agree with you. Since I figured “Tenney” was the weighting method and “Euclidean” was the norm method, on that basis I’d be more lax about naming them. I think “Tenney-weighted-Euclidean tuning” and “Tenney-Euclidean-normed tuning” are both OK. [[User:FloraC|FloraC]] ([[User talk:FloraC|talk]]) 12:26, 27 January 2022 (UTC)