[edit: removed wikipedia link... too many spoilers :(]
At one point it talks about the Reticulum (the Internet), botnet ecologies, and how some of these would subtly modify information so that you couldn't really trust anything you read. I find this an intriguing and somewhat alarming idea.
Facebook was good early on because bots weren't sophisticated enough to create realistic-looking Facebook profiles, so if you had a Facebook profile, you were a person. Bots have since improved, but a human can still pick one out pretty easily.
But bots will only get better.
One idea that's been raised is the notion of trust: usage patterns are analyzed to identify bots or bot-like behaviour. You see this in the identification of sock puppets and meat puppets on reddit, HN and other social news sites.
People have talked about hordes of Twitter accounts being used to make content appear more relevant than it is. This sort of thing is already happening.
But how do you pick out the bots when they're the majority, and there is thus little to no statistical norm of real people to compare them against?
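To make the pattern-analysis idea concrete, here is a minimal sketch of one such heuristic. Everything in it is a hypothetical illustration, not any site's actual detection method: it flags an account as bot-like when its posting intervals are suspiciously regular, on the assumption that simple bots post on a near-fixed schedule while humans don't.

```python
from statistics import mean, pstdev

def looks_bot_like(post_times, max_interval_cv=0.1):
    """Hypothetical heuristic: flag accounts whose posting intervals
    are too regular. Computes the coefficient of variation (stdev/mean)
    of the gaps between posts; a very low value suggests a scheduled bot.

    post_times: timestamps (seconds) in ascending order.
    """
    intervals = [b - a for a, b in zip(post_times, post_times[1:])]
    if len(intervals) < 2:
        return False  # not enough data to judge either way
    cv = pstdev(intervals) / mean(intervals)
    return cv < max_interval_cv

# A bot posting exactly every 60 seconds vs. a human with irregular gaps
bot_times = [0, 60, 120, 180, 240]
human_times = [0, 45, 300, 310, 900]
print(looks_bot_like(bot_times))    # True  (perfectly regular schedule)
print(looks_bot_like(human_times))  # False (irregular, human-looking)
```

Of course, this only works while bots are naive; once a bot adds jitter to its schedule, the signal vanishes, which is exactly the arms race described above.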
At what point will bots start modifying Wikipedia articles and have other bots or, more likely, some stooges, approve those changes? What about establishing fake sites with wrong information and having them appear valid to search engines?
This, I believe, will be a big problem.
One particularly problematic aspect is that site owners themselves have, within reason, little incentive to expose bots (or inactive accounts, for that matter), because so much importance is placed on metrics like the number of active accounts.
This only becomes a problem for them if the user experience suffers beyond a certain threshold.
It's going to be interesting to see just how much of a problem this becomes and what we do to solve or at least mitigate it.