Found 2 comments on HN
api · 2015-08-27 · Original thread
Free will is typically defined to mean you[n+1] != classical_function(you[n]), at least sometimes or with some probability. That does not necessarily imply anything supernatural, though if the supernatural (or any kind of dualism) exists that would explain it. It could also arise due to quantum noise or any other process that violates classical determinism.
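That definition can be made concrete with a toy sketch (everything here is hypothetical illustration, not anything from the thread): a deterministic update rule always maps the same state to the same successor, while a rule with injected noise can violate `you[n+1] == classical_function(you[n])` with some probability.

```python
import random

def classical_function(state):
    # A deterministic update rule: the same input state always
    # produces the same successor state.
    return (3 * state + 1) % 100

def noisy_function(state):
    # An indeterministic update: some noise source (quantum or
    # otherwise) means the successor can differ from the classical
    # prediction with some probability.
    return (classical_function(state) + random.choice([0, 1])) % 100

you = 42
predicted = classical_function(you)
# The deterministic rule always agrees with its own prediction...
assert all(classical_function(you) == predicted for _ in range(10))
# ...while the noisy rule has more than one possible successor.
print({noisy_function(you) for _ in range(100)})
```

The point is only that indeterminism is a property of the update rule, not evidence of anything supernatural.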

There are those who define free will a bit differently though. It's not a precise term. Another definition is that you[n+1] cannot be computed from any function other than you or something isomorphic with you -- in other words you are not coarse-grainable or predictable using any subset of your state. Someone would have to literally make a copy of you to predict your behavior, possibly down to the atomic or quantum level.

I've heard functions/processes with this property called computationally irreducible:

Basically the computational irreducibility definition of free will just means nothing outside of you can predict what you're going to do unless it has an exact copy of you or something functionally equivalent (uploaded mind, etc.).
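Wolfram's stock example of computational irreducibility is the Rule 30 cellular automaton (my illustration, not the commenter's): no known closed-form shortcut gives the state at step n, so the only way to "predict" it is to run a faithful copy of the system through every intermediate step.

```python
def rule30_step(cells):
    # Rule 30 on a ring: next cell = left XOR (center OR right).
    n = len(cells)
    return [cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n])
            for i in range(n)]

def run(cells, steps):
    # No shortcut is known: to learn the state after `steps` updates,
    # you simulate every one of them. That is computational
    # irreducibility in practice.
    for _ in range(steps):
        cells = rule30_step(cells)
    return cells

initial = [0] * 31
initial[15] = 1  # single live cell in the middle
print(run(initial, 10))
```

The system is fully deterministic yet unpredictable to any observer who lacks an exact copy of it, which is the distinction the comment is drawing.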

Another variation on the same idea is the "arrow of time" view put forward by Ilya Prigogine:

This is IMHO very close to if not identical to Wolfram's computational irreducibility, but framed a bit differently.

Obviously humans are somewhat predictable, but somewhat predictable doesn't imply deterministic. My personal opinion is that the second theory (irreducibility/arrow of time) is almost certainly true, and the first is also probably true. So we are probably both irreducible and indeterminate. I'd say the same is likely true of any living thing and possibly other complex natural processes.

cageface · 2010-08-17 · Original thread
It has to do with entropy and thermodynamics and the fact that living systems are so far removed from thermodynamic equilibrium that they generate unpredictable, emergent behaviors.

The math behind it is pretty gnarly, but if you want to understand it I recommend his book: .

A CA is not a good model for this.
