Found in 2 comments on Hacker News
zackmorris · 2023-12-14 · Original thread
FunSearch is more along the lines of how I wanted AI to evolve over the last 20 years or so, after reading Genetic Programming III by John Koza:

https://www.amazon.com/Genetic-Programming-III-Darwinian-Inv...

I wanted to use genetic algorithms (GAs) to evolve random programs against unit tests that specify the expected behavior. It sounds like they are doing something similar, generating candidate solutions with neural nets (NNs)/LLMs and grading them against an "evaluator" (I wish they had added more detail about how it works).
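
To make that concrete, here is a toy version of the loop in Python: candidate programs are random arithmetic expressions, the "evaluator" is just a handful of unit tests, and selection keeps whatever passes the most of them. The operator set, fitness function, and parameters are all invented for illustration; the article doesn't spell out how FunSearch's evaluator actually works.

    import random

    # Toy program-evolution sketch: evolve an arithmetic expression f(x)
    # that passes the unit tests below. Everything here is illustrative.
    OPS = ['+', '-', '*']
    UNIT_TESTS = [(0, 1), (1, 3), (2, 7), (3, 13)]  # satisfied by x*x + x + 1

    def random_expr(depth=3):
        if depth == 0 or random.random() < 0.3:
            return random.choice(['x', str(random.randint(1, 3))])
        left, right = random_expr(depth - 1), random_expr(depth - 1)
        return f'({left} {random.choice(OPS)} {right})'

    def fitness(expr):
        # The "evaluator": count how many unit tests the candidate passes.
        # (eval is fine for a toy; a real system would sandbox this.)
        score = 0
        for x, expected in UNIT_TESTS:
            try:
                if eval(expr, {'x': x}) == expected:
                    score += 1
            except Exception:
                pass
        return score

    population = [random_expr() for _ in range(200)]
    for generation in range(100):
        population.sort(key=fitness, reverse=True)
        if fitness(population[0]) == len(UNIT_TESTS):
            break
        survivors = population[:50]
        # Crude variation: fresh random trees (real GP mutates/crosses subtrees).
        population = survivors + [random_expr() for _ in range(150)]

    print(population[0], fitness(population[0]))  # best candidate found and its score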

What the article didn't mention is that above a certain level of complexity, this method begins to pull away from human supervisors, creating and verifying programs faster than we can review them. When they were playing with Lisp GAs on Beowulf clusters back in the 1990s, they found that the technique works extremely well, but that it's difficult to tune GA parameters to reliably evolve the best solutions in the fastest time. So volume III was about re-running those experiments many times in the 2000s, on clusters roughly 1000 times faster, to find correlations between parameters and outcomes. Something similar is also needed to understand how tuning NN parameters affects outcomes, but I haven't seen a good paper on whether that relationship is any better understood today.
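
That kind of study is mostly a harness: run the same GA many times per parameter setting and tabulate how often and how fast it converges. A minimal sketch on a deliberately trivial task (one-max), with a made-up parameter grid:

    import random, statistics

    # Re-run a GA many times per parameter cell and correlate parameters
    # with outcomes, in the spirit of the GP III experiments. The task
    # (one-max: evolve an all-ones bitstring) is trivial on purpose.

    def run_ga(pop_size, mutation_rate, bits=32, max_gens=200):
        pop = [[random.randint(0, 1) for _ in range(bits)] for _ in range(pop_size)]
        for gen in range(max_gens):
            pop.sort(key=sum, reverse=True)
            if sum(pop[0]) == bits:
                return gen                      # solved: generations used
            parents = pop[:pop_size // 2]
            pop = [[bit ^ (random.random() < mutation_rate)
                    for bit in random.choice(parents)]
                   for _ in range(pop_size)]
        return None                             # failed within budget

    for pop_size in (50, 200):
        for mutation_rate in (0.001, 0.01, 0.05):
            runs = [run_ga(pop_size, mutation_rate) for _ in range(20)]
            solved = [g for g in runs if g is not None]
            mean = f'{statistics.mean(solved):.1f}' if solved else 'n/a'
            print(pop_size, mutation_rate, f'{len(solved)}/20 solved, mean gens {mean}')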

Also, GPU/SIMD hardware isn't good for GAs: video cards are designed to run one wide algorithm, not thousands or millions of narrow ones with subtle differences, the way a cluster of CPUs can. So I feel that progress on that front has been hindered for about 25 years, since I first started looking at programming FPGAs to run thousands of MIPS cores (probably ARM or RISC-V today). In other words, the perpetual AI winter we've been in for 50 years is more about poor hardware decisions and socioeconomic factors than about technical challenges with the algorithms.
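
The cluster-of-CPUs model looks like this in miniature: each individual is a different little program, so every worker follows different control flow. Independent CPU cores don't care, whereas on a GPU warp those divergent branches would serialize. The instruction set and programs below are invented for illustration.

    import random
    from multiprocessing import Pool

    # MIMD-style evaluation: thousands of *different* programs, one per
    # task, spread across CPU cores. A toy three-instruction language.

    def run_program(program, x=1):
        for op in program:
            if op == 'inc':
                x += 1
            elif op == 'dbl':
                x *= 2
            elif op == 'neg':
                x = -x
        return x

    def make_program():
        return [random.choice(['inc', 'dbl', 'neg'])
                for _ in range(random.randint(5, 50))]

    if __name__ == '__main__':
        programs = [make_program() for _ in range(10000)]
        with Pool() as pool:            # each core runs divergent control flow
            results = pool.map(run_program, programs)
        print(max(results))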

So I'm certain now that some combination of these old approaches will deliver AGI within 10 years. I'm just frustrated with myself that I never got to participate, since I spent all of those years writing CRUD apps or otherwise hustling to make rent, with nothing to show for it except a roof over my head. And I'm disappointed in the wealthy for hoarding their money and not seeing the potential of the countless millions of people just as smart as they are who are trapped in wage slavery. IMHO this is the great problem of our time (explained by the pumping gas scene in Fight Club), although since AGI is the last problem in computer science, we might even see wealth inequality defeated sometime in the 2030s. Either that, or we become Borg!

zackmorris · 2017-03-25 · Original thread
I have read and own Genetic Programming III by John Koza https://www.amazon.com/Genetic-Programming-III-Darwinian-Inv... and the best part about it was that it revisited problems that had only been superficially explored with GP a decade before. The increased computing power available allowed for multiple runs, provided insights into which parameters to tune, and gave hard numbers on how much computation was needed to solve various classes of problems.
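
Those hard numbers come from Koza's "computational effort" metric: estimate the cumulative probability P(M, i) that a run with population size M has solved the problem by generation i, work out how many independent runs give a z = 99% chance of at least one success, and minimize the total individuals processed over i. A sketch, with a made-up success curve:

    import math

    # Koza's computational-effort metric from the GP books. Given an
    # empirical cumulative success probability P(M, i), the number of
    # independent runs needed for confidence z is
    #     R = ceil(log(1 - z) / log(1 - P(M, i)))
    # and the effort at generation i is I(M, i, z) = M * (i + 1) * R.

    def computational_effort(M, cumulative_success, z=0.99):
        best = None
        for i, p in enumerate(cumulative_success):
            if p > 0:
                runs = 1 if p >= 1 else math.ceil(math.log(1 - z) / math.log(1 - p))
                effort = M * (i + 1) * runs
                best = effort if best is None else min(best, effort)
        return best  # E = min over generations i

    # Invented curve: 9% of runs solved by gen 10, 30% by gen 25, 60% by gen 50.
    P = [0.0] * 10 + [0.09] * 15 + [0.30] * 25 + [0.60]
    print(computational_effort(M=500, cumulative_success=P))  # ~153,000 individuals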

In the end, it doesn't matter much which approach is taken, because these are all classification problems. We just need the solution matrix, and ideally a record of the computation that went into solving it. I feel that this simple fact gets lost amidst the complexity of how ML is taught today.

ML isn't accelerating because of better code or research breakthroughs either. It's happening because the big CPU manufacturers didn't do anything for 20 years and the GPU manufacturers ate their lunch. ML is straightforward, even trivial in some cases, given effectively unlimited cores and bandwidth. We're just rediscovering parallelization algorithms that were well known in functional programming generations ago. These discoveries are inevitable in a suitable playground.
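
For instance, a minibatch gradient step is literally a map (per-example gradients) followed by a reduce (sum) and a pure update, the same shape functional programmers have used for decades. A sketch on a toy 1-D linear model, illustrative only:

    from functools import reduce

    # One gradient step as map + reduce: the old functional-programming
    # pattern that data-parallel ML keeps rediscovering.
    DATA = [(x, 2.0 * x + 1.0) for x in range(10)]  # ground truth: y = 2x + 1

    def grad(params, example):
        (w, b), (x, y) = params, example
        err = (w * x + b) - y
        return (2 * err * x, 2 * err)   # d/dw and d/db of squared error

    def step(params, data, lr=0.02):
        grads = map(lambda ex: grad(params, ex), data)                    # map
        gw, gb = reduce(lambda a, g: (a[0] + g[0], a[1] + g[1]), grads)   # reduce
        n = len(data)
        return (params[0] - lr * gw / n, params[1] - lr * gb / n)         # pure update

    params = (0.0, 0.0)
    for _ in range(2000):
        params = step(params, DATA)
    print(params)  # converges toward (2.0, 1.0)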

I used to have this fantasy that I would get far enough ahead of the curve to dabble in the last human endeavor, but I'm beginning to realize that that's probably never going to happen. Machines will soon beat humans in pretty much every category, not because someone figures out how to make it all work, but because there simply isn't enough time to stop it now. There are a dozen teams around the world racing to solve any given problem, and anyone's odds of being first are perhaps 10% at best. Compounded with Darwinian capitalism, the risk/reward equation is heading towards infinity so fast that it's looking like the smartest move is not to play.

Barring a dystopian future or cataclysm, I give us 10 years, certainly no more than 20, before computers can do anything people can do, at least economically. And the really eerie thing is that this won't be the most impressive thing happening, because kids will know it's all just hill climbing and throwing hardware at problems. It will be all the other associated technologies that come about as people abandon the old, hard ways of doing things.
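
For what it's worth, "hill climbing plus hardware" fits in a dozen lines: keep only uphill moves, and buy reliability with independent restarts rather than cleverness. The objective and parameters here are invented for illustration.

    import random

    # Random-restart hill climbing: the "throwing hardware at it" part
    # is just running many independent restarts. Toy 1-D objective.

    def objective(x):
        return -(x - 3.14) ** 2         # single peak at x = 3.14

    def hill_climb(start, step=1.0, iters=2000):
        x = start
        for _ in range(iters):
            candidate = x + random.uniform(-step, step)
            if objective(candidate) > objective(x):
                x = candidate           # keep only uphill moves
        return x

    restarts = [hill_climb(random.uniform(-100, 100)) for _ in range(64)]
    print(max(restarts, key=objective))  # approximately 3.14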
