Found 2 comments on HN
cdbattags · 2019-01-23 · Original thread
I know this might be fuel to the fire on here, but I think Tanushree's work at Georgia Tech in her paper "A Parsimonious Language Model of Social Media Credibility Across Disparate Events" [1] is a good stab at the "fake news" problem with a lens of "credibility".

As a disclaimer, we tried to get this model off the ground in the YC summer 2017 batch but were rejected after the phone interview. I did not assist in the research; I only helped publicize it.

I wholeheartedly believe this is our best bet at combating "fake news" online for the moment. Taking the lessons of "The Most Human Human: What Artificial Intelligence Teaches Us About Being Alive" [2], we might tease out credibility, rather than a binary fact-or-fiction verdict, to help guide our biased/opinionated labels.

From the abstract:

"In other words, the language used by millions of people on Twitter has considerable information about an event's credibility. For example, hedge words and positive emotion words are associated with lower credibility."
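To make the idea concrete, here is a toy sketch of that kind of lexical-cue counting. This is not the paper's actual model or its lexicons; the word lists and function name below are made up for illustration:

```python
# Toy illustration (NOT the paper's model): count hedge words and
# positive-emotion words, two lexical cues the abstract says are
# associated with *lower* credibility. Word lists are placeholders,
# not the lexicons used in the paper.

HEDGE_WORDS = {"maybe", "possibly", "apparently", "allegedly", "supposedly"}
POSITIVE_EMOTION_WORDS = {"great", "amazing", "love", "wonderful", "happy"}

def credibility_cues(text: str) -> dict:
    """Count hedge and positive-emotion words in a whitespace-tokenized text."""
    tokens = [t.strip(".,!?") for t in text.lower().split()]
    return {
        "hedges": sum(t in HEDGE_WORDS for t in tokens),
        "positive": sum(t in POSITIVE_EMOTION_WORDS for t in tokens),
    }

print(credibility_cues("Apparently the building maybe collapsed, amazing footage!"))
# → {'hedges': 2, 'positive': 1}
```

The real model regresses many such lexical-category counts against crowdsourced credibility labels; this sketch only shows the feature-extraction step.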

[1]: https://credcount.com/whitepaper.pdf

[2]: https://www.amazon.com/Most-Human-Artificial-Intelligence-Te...

lmm · 2015-06-01 · Original thread
There are some, with certain (low) success rates. Expectations on IRC are weaker than those for an in-person conversation, though; many people are inattentive or have limited English skills. I read an interesting excerpt from http://www.amazon.com/The-Most-Human-Artificial-Intelligence... , arguing that the Turing test would become much more interesting once the humans started playing to win.
