The way I handled it fell out of how tokens were parsed. Each token was hashed once; that hash was first used to check one hash table to see if the token was a keyword, and then the same hash was reused to look the token up in the symbol table. That made classification easy and cheap.
I don't think it's hard in practice if you use the right approach. More complex from a theory point of view, sure.
I am serious when I say that the Annotated ANSI C Standard book made this easy to understand. Without that book, parsing C types did not make much sense to me either. It can be found here: https://www.amazon.com/Annotated-ANSI-Standard-Programming-L...