No, not even close. We are not a "tabula rasa"[1], or blank slate*. If you would actually like to understand why, two good books on this are "The Self-Assembling Brain"[2] and "The Archaeology of Mind"[3].
[*] One of the things that frustrates me most in the discourse on LLMs is that people who should know better deliberately mislead others into believing that something akin to "intelligence" is going on with LLMs -- because they are heavily financially incentivized to do so. Comparisons with humans are category errors in everything but metaphor. They call them "neural networks" instead of "systems of nonlinear equations", because "neural network" sounds way sexier than vectorized y = f(mx + b).
[1] https://en.wikipedia.org/wiki/Tabula_rasa
[2] https://press.princeton.edu/books/hardcover/9780691181226/th...
[3] https://www.amazon.com/Archaeology-Mind-Neuroevolutionary-In...
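The "vectorized y = f(mx + b)" jab can be made concrete. A sketch of what a "neural network layer" actually computes, using NumPy (the shapes, the tanh nonlinearity, and the two-layer stacking are just illustrative choices):

```python
import numpy as np

# A "layer" is a vectorized nonlinear equation: y = f(Wx + b),
# where W is a weight matrix, b a bias vector, and f an
# elementwise nonlinearity.
def layer(x, W, b, f=np.tanh):
    return f(W @ x + b)

# A "network" is just a composition of such equations.
rng = np.random.default_rng(0)
x = rng.standard_normal(4)                                   # input vector
W1, b1 = rng.standard_normal((3, 4)), rng.standard_normal(3)
W2, b2 = rng.standard_normal((2, 3)), rng.standard_normal(2)

y = layer(layer(x, W1, b1), W2, b2)  # a two-layer "network"
print(y.shape)  # (2,)
```

There is no biology anywhere in it: just matrix multiplies, additions, and a pointwise nonlinearity, composed.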