They're explicitly not political. LessWrong is a website/community, and rationality is about trying to think better by being aware of common cognitive biases and correcting for them. It's also about trying to make better predictions and understand things better by applying Bayes' theorem, where possible, to account for new evidence: https://en.wikipedia.org/wiki/Bayes%27_theorem (and being willing to change your mind when the evidence changes).
It's about trying to understand and accept what's true, no matter which political tribe that truth might align with. See: https://www.lesswrong.com/rationality
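To make the "updating on new evidence" part concrete, here's a toy Bayesian update (the hypothesis and all the numbers are made up purely for illustration):

    # Toy Bayesian update, with made-up numbers.
    prior_h = 0.30          # P(H): prior belief that hypothesis H is true
    p_e_given_h = 0.80      # P(E|H): chance of seeing evidence E if H is true
    p_e_given_not_h = 0.20  # P(E|~H): chance of seeing E if H is false

    # Total probability of observing E (law of total probability).
    p_e = p_e_given_h * prior_h + p_e_given_not_h * (1 - prior_h)

    # Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E)
    posterior_h = p_e_given_h * prior_h / p_e
    print(round(posterior_h, 3))  # 0.632 -- belief moves from 0.30 up to ~0.63

The point is just that the posterior follows mechanically from the prior and the likelihoods; "changing your mind when the evidence changes" means recomputing it rather than defending the old number.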
For more reading about AGI:
Books:
- Superintelligence, by Nick Bostrom (I find his writing style somewhat tedious, but this is one of the original sources for a lot of these ideas): https://www.amazon.com/Superintelligence-Dangers-Strategies-...
- Human Compatible: https://www.amazon.com/Human-Compatible-Artificial-Intellige...
- Life 3.0 (a lot of the same ideas, but the writing style is at the opposite extreme from Superintelligence, which makes it more accessible): https://www.amazon.com/Life-3-0-Being-Artificial-Intelligenc...
Blog Posts:
- https://intelligence.org/2017/10/13/fire-alarm/
- https://www.lesswrong.com/tag/artificial-general-intelligenc...
- https://www.alexirpan.com/2020/08/18/ai-timelines.html
The reason these groups overlap so much with AGI is that Eliezer Yudkowsky started LessWrong and founded MIRI (the Machine Intelligence Research Institute). He has also formalized a lot of the thinking around the goal alignment problem and the existential risk of discovering how to create an AGI that can improve itself without first figuring out how to align it with human goals.
For an example of why this is hard: https://www.lesswrong.com/posts/4ARaTpNX62uaL86j6/the-hidden... Probably the most famous illustration is the paperclip maximizer: https://www.lesswrong.com/tag/paperclip-maximizer
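To gesture at why a misspecified objective goes wrong, here's a toy sketch (not taken from the linked posts; the world model, resource names, and numbers are invented): the agent is scored only on its paperclip count, so a greedy optimizer converts everything else, because nothing in the objective says it shouldn't.

    # Toy "paperclip maximizer": the only thing being scored is the
    # paperclip count, so the optimizer trades away everything else.
    world = {"spare_steel": 100, "cars": 50, "hospitals": 10, "paperclips": 0}

    def best_action(w):
        # Greedy agent: convert whichever remaining resource yields the
        # most paperclips. Any conversion increases the score, and the
        # objective says nothing about the value of what gets converted.
        convertible = [k for k, v in w.items() if k != "paperclips" and v > 0]
        return max(convertible, key=lambda k: w[k], default=None)

    while (target := best_action(world)) is not None:
        world["paperclips"] += world[target]  # turn the resource into paperclips
        world[target] = 0

    print(world)  # {'spare_steel': 0, 'cars': 0, 'hospitals': 0, 'paperclips': 160}

Nothing here is smart, and that's the point: the failure comes from the objective we wrote down, not from the optimizer misunderstanding it.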
Also
> We don’t live in the world of Neuromancer and we never will. 99.9% of everything is mathematically invulnerable to hacking.
Seriously made me chuckle.
[1]: https://slatestarcodex.com/2015/04/07/no-physical-substrate-...
[2]: https://www.amazon.com/Life-3-0-Being-Artificial-Intelligenc...