> Because refactoring requires understanding, which LLMs completely lack.
<demonstration that an LLM can refactor code>
> Cleaning up code also follows some well established patterns, performance work is much less pattern-y.
Just as writing shitty React apps follows patterns, low-level performance and concurrency work also follows patterns. See [0] for a sample.
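To make that concrete, here's a minimal sketch of one such well-worn pattern: a bounded producer/consumer queue with a sentinel for shutdown. (The book at [0] is about C++; this Python version is just an illustration of the pattern's shape, not anything from the book.)

```python
import queue
import threading

def producer(q: queue.Queue, items):
    for item in items:
        q.put(item)      # blocks if the queue is full (backpressure)
    q.put(None)          # sentinel: no more work coming

def consumer(q: queue.Queue, results: list):
    while True:
        item = q.get()
        if item is None: # sentinel seen -> shut down cleanly
            break
        results.append(item * 2)

q = queue.Queue(maxsize=4)   # bounded: producer can't run ahead unboundedly
results = []
t1 = threading.Thread(target=producer, args=(q, range(8)))
t2 = threading.Thread(target=consumer, args=(q, results))
t1.start(); t2.start()
t1.join(); t2.join()
print(results)  # [0, 2, 4, 6, 8, 10, 12, 14]
```

The bounded queue, the sentinel, the join-before-read: all of it is formulaic, which is exactly the point being made.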
> I bet you need 10 or 100 times more understanding
Okay, so a 10 or 100 times larger model? Sounds like something we'll have next year, and certainly within a decade.
> One day maybe AI can do it, but it probably won't be LLM. It would be something which can understand symbols and math.
You do understand that the reason some of the earlier GPTs had trouble with symbols and math was their tokenization scheme, a preprocessing choice entirely separate from how the models work in general, right?
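A toy illustration of the tokenization point (this is not any real tokenizer or vocabulary, just the greedy longest-match shape of BPE-style tokenization): when frequent digit sequences get merged into vocabulary entries, a number splits into chunks that don't line up with place value, which makes digit-level arithmetic needlessly hard for the model.

```python
def toy_tokenize(number: str, vocab: set) -> list:
    """Greedy longest-match tokenization, as a stand-in for BPE merges."""
    tokens, i = [], 0
    while i < len(number):
        # take the longest vocab entry matching at position i
        for j in range(len(number), i, -1):
            if number[i:j] in vocab:
                tokens.append(number[i:j])
                i = j
                break
        else:
            tokens.append(number[i])  # fall back to a single character
            i += 1
    return tokens

# Hypothetical vocab containing some "frequent" digit chunks
vocab = {"123", "45", "00", "1", "2", "3", "4", "5"}
print(toy_tokenize("12345", vocab))  # ['123', '45'] -- chunks, not digits
print(toy_tokenize("12300", vocab))  # ['123', '00']
```

Same model architecture, different splits: tokenize digit-by-digit (as later schemes effectively do for numbers) and the place-value structure is exposed again.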
[0]: C++ Concurrency in Action: Practical Multithreading 1st Edition https://www.amazon.com/C-Concurrency-Action-Practical-Multit...