It's an interesting idea, and in my view one that calls for great suspicion. "Let *me* be the one to tell you when *other* people are trying to brainwash, hypnotize, and manipulate you!" rings all my alarm bells, because 1) that's one hell of an opportunity for a Trojan horse if swallowed hook, line, and sinker, and 2) the *way* they're presenting it seems awfully slimy and manipulative to me. To their credit, feeding their own sales pitch back into it shows that their algorithm agrees with me and will admit it.

Emotional appeals are a necessary part of learning, so you wouldn't *want* to avoid them all. The important part is that the writer isn't trying to *bury* the emotional appeals so that you can't notice when they're made with dishonest intent and/or don't hold up to scrutiny.
I haven't gotten to play with it enough yet to figure out *exactly* what it's looking for. It does not seem to make this distinction and grade on the honesty of the emotional appeals (unsurprisingly), but it does seem to pick up on something real about how much emotional appeal there is in the text.
I'm sure I'll play with it more, and if anything interesting comes of it I'll report back. Thanks for sharing!
META: The above text scores "Final conclusion: Be careful! The suggestive potential of this text is very high!" on leegle.me, so be careful or I might actually influence how you see it

META2: With the above meta note included, it drops to "Final conclusion: Under certain conditions, the text can be used for subliminal suggestion. It is worth paying attention to."
META3: With the above, it drops to "Final conclusion: Great! You can enjoy reading this text without the risk of being influenced by coercive artificial persuasion.", but when I include *this* note that says "without the risk of being influenced by coercive artificial persuasion", it jumps back up to "worth paying attention to"