...and to reply to your second question, one thing I find interesting and want to explore further is how (and when) to best leverage what the LLM has memorized.
The way humans do math in our heads is an interesting analog: our brain (mind?) uses two types of rules that we have memorized:
1. algebraic rules for rewriting (part of) the math problem
2. atomic rules, like 2+2=4
So I'm wondering whether we could write a "recursive" LLM prompt that does something similar; a rough sketch of the idea is below.
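To make that concrete, here's a minimal Python sketch of the two-rule idea, assuming a toy expression format of nested (op, left, right) tuples. The llm_rewrite() function is hypothetical: in a real system it would be a prompt asking the LLM to apply one algebraic rewrite and hand back the smaller problem; here it's hand-coded with a single distribution rule just to show the shape of the recursion.

    # Toy sketch: two kinds of "memorized rules" driving a recursive solver.
    # ATOMIC_FACTS plays the role of memorized facts like 2+2=4;
    # llm_rewrite() is a hypothetical stand-in for an LLM prompt that
    # applies one algebraic rewrite and returns the smaller problem.

    ATOMIC_FACTS = {
        ("+", 2, 2): 4,
        ("*", 3, 2): 6,
        ("+", 6, 6): 12,
    }

    def llm_rewrite(expr):
        """Rule type 1: algebraic rewriting. A hand-coded distribution rule
        standing in for what the LLM would actually be prompted to do."""
        op, a, b = expr
        if op == "*" and isinstance(b, tuple) and b[0] == "+":
            _, x, y = b
            return ("+", ("*", a, x), ("*", a, y))   # a*(x+y) -> a*x + a*y
        return expr

    def solve(expr):
        """Recursively reduce expr, preferring memorized knowledge."""
        if isinstance(expr, int):
            return expr
        rewritten = llm_rewrite(expr)        # try an algebraic rewrite first
        if rewritten != expr:
            return solve(rewritten)
        op, a, b = expr
        a, b = solve(a), solve(b)            # recurse into sub-problems
        if (op, a, b) in ATOMIC_FACTS:       # rule type 2: atomic recall
            return ATOMIC_FACTS[(op, a, b)]
        return a + b if op == "+" else a * b # fall back to real arithmetic

    print(solve(("*", 3, ("+", 2, 2))))      # 3 * (2 + 2) -> 12

In the LLM version, both steps would be prompts (or one prompt applied recursively): one asking for a single rewrite, one asking for recall of an atomic fact, with an outer loop driving the recursion instead of Python.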
Related to this: in another classic CMU AI research project on cognitive architectures, John R. Anderson's group explored how humans do math in their heads as part of the ACT-R project: https://www.amazon.com/Soar-Cognitive-Architecture-MIT-Press...
The ACT-R group partnered with cognitive scientists and neuroscientists and ran fMRI studies on students while they were doing math problems.