
Good point! Is the AI that good, or are the graders that bad (or just uninterested)?

Also, I was thinking after I posted that it might be easier to differentiate between a human and an AI if you had multiple works to examine. Maybe the AI can get past a single paper, but can it do it consistently if the grader sees ten papers? Or a hundred? It might also be possible to put AI detection tools into the hands of graders.

On a platform like Steem, I'd expect to see particular themes and unique writing styles emerge repeatedly over time from a human author. I'm not sure if we'd see that from an AI, so that might be something that an effective curator could look for.
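As a toy illustration of that idea, here's a minimal stylometric sketch (my own assumption about how a curator's tool might start, not any real detection method): it builds a simple style profile from function-word frequencies for each of an author's posts, then measures how far two profiles differ. A human author's posts would plausibly cluster; wide swings might warrant a closer look.

```python
# Toy stylometric consistency check (illustrative sketch only).
# Function-word frequencies are a classic, crude style signal.
import re
from collections import Counter

# A small, arbitrary set of common English function words (an assumption).
FUNCTION_WORDS = ["the", "of", "and", "to", "a", "in", "that", "is", "it", "for"]

def style_profile(text):
    """Relative frequency of each function word in the text."""
    words = re.findall(r"[a-z']+", text.lower())
    total = max(len(words), 1)
    counts = Counter(words)
    return [counts[w] / total for w in FUNCTION_WORDS]

def profile_distance(p, q):
    """Manhattan distance between two style profiles (0 = identical)."""
    return sum(abs(a - b) for a, b in zip(p, q))

posts = [
    "The cat sat on the mat, and it looked to the window for a while.",
    "The dog ran to the park, and it barked at the birds in a tree.",
]
profiles = [style_profile(t) for t in posts]
print(round(profile_distance(profiles[0], profiles[1]), 3))
```

A real curator tool would need far richer features (vocabulary, sentence rhythm, topic drift over time), but even this crude distance shows the shape of the idea: compare many works, not one.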

On another note, I was wondering if curators should even care if an article is written by a human or not? If the article's job is to draw readers, then maybe that's all the voters should care about... whether or not it attracts an audience? I'm not sure what I think about that argument.

Tonight's link is related: The value of your humanity in an automated future | Kevin Roose - TED

"I'd expect to see particular themes and unique writing styles emerge repeatedly over time from a human author. I'm not sure if we'd see that from an AI"

On the other hand, a bulk of submissions from a natural-language AI might carry an all-too-obvious tell: trending artifacts, including recurring themes that act as a giveaway. I wonder: 1) whether a sort of anti-aliasing would be a useful addition to the algorithms producing the papers (if it isn't already a feature), and 2) whether that would successfully mitigate the risk of recognition, if, as in your example, a professor were indeed likely to grade AI papers out of line with human submissions.
