Google Colab and Paperspace have free tiers, and for only $10/month on Paperspace you can get access to GPUs with enough capacity to train decently sized models.
There is a wealth of intro material ([1] is pretty good) to get you started.
Even for the largest models that are not feasible for individuals to train (Large Language Models, for example, which are so expensive that even most companies can't afford to build them), there are refinement techniques that let you take an existing model and fine-tune it for a specific task. That kind of training can run on most higher-end consumer-grade GPUs.
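The core idea behind one popular refinement technique (low-rank adaptation, as in LoRA) is to freeze the pretrained weights and train only a small low-rank correction on top of them. Here's a toy NumPy sketch of that idea; all the names, sizes, and initializations below are illustrative assumptions, not any particular library's API:

```python
# Toy sketch of low-rank fine-tuning (LoRA-style). The pretrained weight W
# stays frozen; only the small adapter matrices A and B would be trained.
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, rank = 512, 512, 8  # illustrative dimensions

# Pretrained weight: frozen, never updated during fine-tuning.
W = rng.standard_normal((d_in, d_out)) * 0.02

# Trainable low-rank adapter. B starts at zero so the adapted layer
# initially behaves exactly like the pretrained one.
A = rng.standard_normal((d_in, rank)) * 0.01
B = np.zeros((rank, d_out))

def forward(x):
    # Frozen base output plus the low-rank update (x @ A) @ B.
    return x @ W + (x @ A) @ B

# Only a tiny fraction of the parameters needs gradients:
full_params = W.size                  # 262144
adapter_params = A.size + B.size      # 8192
print(adapter_params / full_params)   # → 0.03125
```

Training roughly 3% of the parameters is what makes this fit in consumer GPU memory, since optimizer state and gradients are only kept for the adapter.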
I'll admit I am a bit biased since I have worked in the field of AI for 20 years now, but I really don't see anything stopping even a mildly committed individual from getting into this area, even if only as a hobby.
[1]: https://www.oreilly.com/library/view/hands-on-machine-learni...
The C64 was incredible, but I thought skipping BASIC and going straight to 6510 assembly was the way to go. Got all those extra sprites by interrupting on the scan lines, the huge performance advantage, etc.
More than once I wiped my source code while zeroing memory.
I enjoyed Byte magazine but Compute! was my favorite:
https://www.commodore.ca/commodore-gallery/compute-magazines...
In my opinion, learning machine learning today is even more exciting than learning 6510 assembler on the C64 was.
I’d skip the early-1980s nostalgia and begin the next quest. Kaggle is a good place to start:
https://www.kaggle.com/mmellinger66
And this book:
https://www.oreilly.com/library/view/hands-on-machine-learni...