dvt · 2013-08-17 · Original thread
Levesque saves his most damning criticism for the end of his paper. It’s not just that contemporary A.I. hasn’t solved these kinds of problems yet; it’s that contemporary A.I. has largely forgotten about them. In Levesque’s view, the field of artificial intelligence has fallen into a trap of “serial silver bulletism,” always looking to the next big thing, whether it’s expert systems or Big Data, but never painstakingly analyzing all of the subtle and deep knowledge that ordinary human beings possess.

This is very true. But I think recent AI experts (as opposed to those doing this work in the '70s) have realized that trying to tackle linguistic analysis is very, very (very) hard. The problem with language analysis (or, more precisely, discourse analysis) is that it hasn't been fully worked out even outside the realm of computing.

A couple of months ago I took a graduate philosophy of language seminar (taught by the brilliant Sam Cumming at UCLA) in which we looked at various theories of discourse. It would be an understatement to say that these theories vary wildly. We have the classical Rhetorical Structure Theory (RST) by Mann and Thompson[0] (renowned linguists at USC and UCSB), Jan van Kuppevelt's erotetic model[1], Andrew Kehler's coherence-based theory from Coherence, Reference, and the Theory of Grammar[2], and a half-dozen or so more that I don't even remember.

So let's forget about computers for a second. We don't even know how humans process discourse. My term paper was about the parallel relation, which is a much-discussed topic in the academic community (almost as much as anaphora; see the New Yorker article); not only are such linguistic phenomena difficult to model theoretically, they are nigh impossible to implement practically (say, in some sort of AI system).
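To make the difficulty concrete, here's a toy Python sketch (entirely my own illustration; the heuristic and candidate list are made up for the example): a naive "most recent noun" rule for pronoun resolution, run on a Winograd-style sentence pair of the sort the New Yorker article discusses.

    # Toy anaphora resolver: assume a pronoun refers to the closest
    # preceding candidate noun. (Illustrative only -- not a real parser.)
    def resolve_pronoun(tokens, pronoun_index, candidates):
        for tok in reversed(tokens[:pronoun_index]):
            if tok in candidates:
                return tok
        return None

    candidates = {"trophy", "suitcase"}
    for ending in ("big", "small"):
        tokens = ("the trophy does not fit in the suitcase "
                  "because it is too " + ending).split()
        antecedent = resolve_pronoun(tokens, tokens.index("it"), candidates)
        print(ending, "->", antecedent)  # prints "suitcase" both times

The heuristic answers "suitcase" in both cases, but a human resolves "it" to the trophy when the sentence ends in "big" and to the suitcase when it ends in "small". The right answer turns on world knowledge about sizes and fitting, which no amount of surface-level syntax supplies.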

So I'm hardly surprised that most AI folks just started doing work on SVMs, ANNs, Markov chains, or what have you. It seems more practical to work on things that can actually benefit from machine learning than to try to solve incredibly difficult (and mostly theoretical) problems like discourse analysis.
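For contrast, here's a minimal sketch of that kind of statistical work (again my own toy example, not any particular system): a word-bigram Markov chain that generates locally plausible word transitions while carrying no representation of discourse at all.

    import random
    from collections import defaultdict

    # Bigram Markov chain: map each word to the words seen following it.
    def train(text):
        words = text.split()
        chain = defaultdict(list)
        for a, b in zip(words, words[1:]):
            chain[a].append(b)
        return chain

    # Generate by repeatedly sampling a random observed successor.
    def generate(chain, start, length=12):
        out = [start]
        while len(out) < length and chain[out[-1]]:
            out.append(random.choice(chain[out[-1]]))
        return " ".join(out)

    corpus = ("the council denied the marchers a permit because "
              "the council feared violence because the marchers "
              "advocated violence")
    print(generate(train(corpus), "the"))

Every transition in the output occurs somewhere in the corpus, so it reads as locally fluent, but the model has no idea who feared or advocated anything. That trade-off (tractable statistics over brittle understanding) is exactly why this direction won out.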

The bottom line is that we're still a ways off from having computers like those in Star Trek - computers that understand anaphora, parallelism, ellipsis, and so on.

[0] http://www.sfu.ca/rst/

[1] http://www.jstor.org/stable/4176301

[2] http://www.amazon.com/Coherence-Reference-Theory-Grammar-And...
