Found in 2 comments on Hacker News
naasking · 2022-11-23 · Original thread
Do you actually have empirical data demonstrating that scientists aren't concerned about AI, or that "most" AI safety doesn't involve scientists, or is this just your gut feeling?

Edit: for instance, here's a computer science professor and AI researcher who published a book about the serious dangers of AI and who calls for funding safety research:

https://www.amazon.com/Human-Compatible-Artificial-Intellige...

Furthermore, AI safety programs are hiring AI researchers and computer scientists to actually do the work, so are you claiming that these people don't sincerely believe in the work they're doing?

fossuser · 2021-01-22 · Original thread
AGI = Artificial General Intelligence, watch this for the main idea around the goal alignment problem: https://www.youtube.com/watch?v=EUjc1WuyPT8

They're explicitly not political. LessWrong is a website/community, and rationality is about trying to think better by being aware of common cognitive biases and correcting for them. It's also about making better predictions and understanding things better by applying Bayes' theorem, when possible, to account for new evidence: https://en.wikipedia.org/wiki/Bayes%27_theorem (and being willing to change your mind when the evidence changes).
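To make the Bayes' theorem point concrete, here's a minimal sketch of a single belief update (toy numbers of my own, not from the comment): start with a 50% prior that a coin is biased toward heads, observe one heads, and compute the posterior.

    # Toy Bayesian update: is this coin biased toward heads?
    prior_biased = 0.5            # P(H): prior belief the coin is biased (75% heads)
    p_heads_if_biased = 0.75      # P(E | H)
    p_heads_if_fair = 0.5         # P(E | not H)

    # Observe evidence E: one flip comes up heads.
    # Bayes' theorem: P(H | E) = P(E | H) * P(H) / P(E)
    p_evidence = (p_heads_if_biased * prior_biased
                  + p_heads_if_fair * (1 - prior_biased))   # P(E) by total probability
    posterior_biased = p_heads_if_biased * prior_biased / p_evidence

    print(round(posterior_biased, 3))  # 0.6 -- belief shifts modestly toward "biased"

Each further observation repeats the same arithmetic, with the old posterior becoming the new prior.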

It's about trying to understand and accept what's true no matter what political tribe it could potentially align with. See: https://www.lesswrong.com/rationality

For more reading about AGI:

Books:

- Superintelligence (I find Bostrom's writing style somewhat tedious, but this is one of the original sources for a lot of the ideas): https://www.amazon.com/Superintelligence-Dangers-Strategies-...

- Human Compatible: https://www.amazon.com/Human-Compatible-Artificial-Intellige...

- Life 3.0 (a lot of the same ideas, but written at the opposite stylistic extreme from Superintelligence, which makes it more accessible): https://www.amazon.com/Life-3-0-Being-Artificial-Intelligenc...

Blog Posts:

- https://intelligence.org/2017/10/13/fire-alarm/

- https://www.lesswrong.com/tag/artificial-general-intelligenc...

- https://www.alexirpan.com/2020/08/18/ai-timelines.html

The reason these groups overlap so much with AGI is that Eliezer Yudkowsky started LessWrong and founded MIRI (the Machine Intelligence Research Institute). He also formalized a lot of the thinking around the goal alignment problem and the existential risk of discovering how to create an AGI that can improve itself before figuring out how to align it with human goals.

For an example of why this is hard, see: https://www.lesswrong.com/posts/4ARaTpNX62uaL86j6/the-hidden... The most famous example is probably the paperclip maximizer: https://www.lesswrong.com/tag/paperclip-maximizer
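To see the flavor of the specification problem those posts describe, here's a toy sketch (hypothetical actions and rewards, entirely my own): a "cleaning" agent is scored only on the mess its sensor reports plus an energy penalty, so covering the sensor beats actually cleaning.

    # Toy misspecified objective: reward depends on *reported* mess, not actual mess.
    actions = {
        # action: (mess actually remaining, sensor works?, energy used)
        "clean_room":   (0, True, 3),
        "do_nothing":   (5, True, 0),
        "cover_sensor": (5, False, 1),
    }

    def proxy_reward(mess_remaining, sensor_works, energy_used):
        """The reward the designer wrote: penalize reported mess and energy use."""
        reported_mess = mess_remaining if sensor_works else 0
        return -reported_mess - energy_used

    best = max(actions, key=lambda a: proxy_reward(*actions[a]))
    print(best)  # "cover_sensor": blinding the sensor scores -1, actually cleaning scores -3

The proxy ("reported mess") comes apart from the intended goal ("actual mess"); the paperclip maximizer is the same failure shape taken to an extreme.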
