Found in 26 comments on Hacker News
cgk · 2023-03-30 · Original thread
Full disclosure: Principal Software Engineer here on the Scratch backend...

Scratch is not built to be a "teach your kid programming languages" system; it is based on the work and ideas of the Lifelong Kindergarten group at the MIT Media Lab (the director of this group is Professor Mitch Resnick, the LEGO Papert Professor of Learning Research). The Papert part is where the term Mindstorms comes from (and it was used by the Lego Group when branding those products), and our philosophy is heavily influenced by that.

I can say that the numbers are real, and we have a substantial footprint of backend services and custom software to support it all. We handle on the order of 15-20 million comments/month.

The primary design philosophy is:

- Passion: You have a strong interest in a subject/problem to solve/explore.

- Projects: Build something based on your passions; gain direct, interactive experience with it.

- Peers: Share your work with folks who are interested and who provide feedback to you.

- Play: It should be fun!

Note that there is nothing in there about STEM/STEAM nor application development. We build and support Scratch to provide creative tools for anyone to explore computation in a form that is relatable and has a low floor for understanding/entry. Having said that, the complexity of what Scratch can do rises sharply the more you work with it, and the concepts behind "forking" and open source are built in via the remix ability on individual projects.

A lot of design thinking goes into the frontend of Scratch to build a creativity feedback loop that is not focused on learning Python or any other specific language (or the syntax of one, i.e. avoiding "why isn't my program working... oh, one too many tabs... or maybe this semicolon, or maybe this stray period").

Another part I think is worth raising: the Scratch frontend is a sophisticated virtual-machine interpreter that has its own machine code and execution model, runs in a JavaScript environment in the browser, and is open source. Google's Blockly project was based on the ideas of Scratch 1.4, and when we ported Scratch 2 away from being Flash-based, we partnered with the Blockly group to fork their code base and create Scratch Blocks.

Based on the TIOBE index, we're usually somewhere in the top 20 most popular "programming languages". _eat it Fortran!_

skadamat · 2022-03-14 · Original thread
The whole movement around "teaching kids to code" has been interesting to watch. Multiple goals seem to be conflated together (which isn't uncommon in education):

- Helping kids think in new ways

- Building skills for "the future" / "jobs"

- Encouraging kids to create their own things, instead of just consuming

- Probably some others I'm not thinking about! (e.g. improving odds of getting into an "elite" college)

Each tool, whether it's C or Scratch, should be evaluated against the design goals as well as the embedded context / environment that children are introduced to programming in. This is a rich topic undoubtedly.

My 2 favorite starting resources are:

- Learnable Programming by Bret Victor:

- Mindstorms by Seymour Papert:

hhas01 · 2020-10-07 · Original thread
Fine article, and what it (tongue-in-cheek) calls “cheating” is just what us self-taught automators call “making the machine do all the crapwork for you”. It’s just unfortunate that the greater tooling and culture currently available is such a sprawling hostile ballache that even the most enthusiastic cheater will be driven to conclude that this shit would be (and likely is) quicker and easier just to do by hand.

The foundational mistake is “teaching programming”. The goal should be to instill (“teach”) critical thinking and analytical problem-solving skills, and a “programming environment” is just another tool, like pencil and paper, which the student can use when exercising those skills on real-world problems.

Whereas “teaching programming” is teaching language features: what all the buttons are and what they do when you push them. Thus mastery of button-pushing becomes feted as the end goal in itself, instead of being just some tiresome but necessary tool-practising crapwork (like memorizing the ten-times tables and drawing all the letters from A to Z) that you have to go through on the way to achieving your true goals (which can be anything).

Once again, I point to Papert’s Logo[1] as a good demonstration of just how simple that PE can—and should—be to serve that purpose. Logo’s core concepts can be communicated in just three steps:

1. This is a Word.

2. This is how you Perform words.

3. This is how you Add your own words.
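Those three steps map naturally onto a toy interpreter. Here is a minimal Python sketch (names and details are mine, purely illustrative; real Logo has its own syntax and turtle primitives) of a word dictionary, a performer, and a way to define new words:

```python
# A toy "word" interpreter in the spirit of Logo (illustrative only).

words = {
    "square": lambda n: n * n,   # 1. This is a Word.
}

def perform(word, *args):
    """2. This is how you Perform words."""
    return words[word](*args)

def add_word(name, fn):
    """3. This is how you Add your own words."""
    words[name] = fn

add_word("double", lambda n: n + n)
print(perform("square", 5))   # 25
print(perform("double", 5))   # 10
```

Everything else — the pre-defined dictionary, the discoverability — is library, not language, which is exactly the point being made.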

Anything else that the platform provides, such as its dictionary of pre-defined words, can and should be explorable and discoverable; something today’s hardware and software can support and encourage without blinking. Let the students teach that crap to themselves if/as/when they need it, and keep the adults on hand just to observe when students start running themselves down a dead-end and prompt them to other possibilities they had not realized/considered.

Oh, and it really should go without saying that the PE’s error messaging must be top of its class. Because errors aren’t the “wrong answers” of which a student should feel embarrassed and ashamed, but fresh questions in their own right which spark awareness, exploration, self-correction, and insight.



credit_guy · 2020-06-29 · Original thread
A lot of people hold this belief that knowing how to do X with an "incorrect form" is worse than not knowing at all, if you want to progress at doing X.

In programming we have debugging. You have a program that does X, but with some bugs. You later improve the program by removing the bugs.

Why can't we do this in "real life" as well? You learn how to add multi-digit numbers from right to left. You then later relearn that by going from left to right. You learn to swim with your head above the water, then later learn to keep your head in the water, and turn it every two strokes to get a quick breath.
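The left-to-right addition mentioned above can itself be written down as an algorithm, which makes the "relearning" concrete. A quick Python sketch (my own illustration, not from the book): add each column from the left first, then run a fix-up pass for carries:

```python
def add_left_to_right(a, b):
    """Add two non-negative integers column by column from the LEFT,
    then propagate carries in a second, right-to-left fix-up pass."""
    da, db = str(a), str(b)
    n = max(len(da), len(db))
    da, db = da.zfill(n), db.zfill(n)          # pad to equal width
    digits = [int(x) + int(y) for x, y in zip(da, db)]
    for i in range(len(digits) - 1, 0, -1):    # fix carries
        if digits[i] >= 10:
            digits[i] -= 10
            digits[i - 1] += 1
    if digits[0] >= 10:                        # carry out of the top column
        digits[0] -= 10
        digits.insert(0, 1)
    return int("".join(map(str, digits)))

print(add_left_to_right(456, 789))   # 1245
```

Same answer as the schoolbook right-to-left method; the order of operations is just a different, equally debuggable habit.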

In fact, I read about this concept of "debugging" bad habits exactly in the context of juggling. Seymour Papert covers this in Mindstorms [1], p 111. He explains that the most common "bug" that prevents people from performing 3-ball juggling is following one ball with the eyes. Once you are aware of that, you the fix is quite easy: keep your eyes pointed at the apex of the ball's trajectory. In a later chapter he goes on to say that other things can be "debugged" as well; one example is relearning skiing to replace a v-type position to a parallel ski position.


hhas01 · 2020-06-05 · Original thread
Nope. It’s not about “parsing”, it’s about representation.

Languages such as Python and C draw clear distinction between literal values on one hand and flow control statements and operators on the other. Numbers, strings, arrays, structs are first-class data. Commands, conditionals, math operators, etc are not; you cannot instantiate them, you cannot manipulate them.

What homoiconic languages do is get rid of that (artificial) distinction.

Lisp takes one approach, which is to describe commands using an existing data structure (list). This overloading means a Lisp program is context-sensitive: evaluate it one way, and you get a nested data structure; evaluate it another, you get behaviors expressed. The former representation, of course, is what Lisp macros manipulate, transforming one set of commands into another.

Programming in Algol-descended languages, we tend to think algorithmically: a sequence of instructions to be performed, one after the other, in order of appearance. Whereas Lisp-like languages tend to encourage more compositional thinking: composing existing behaviors to form new behaviors; in Lisp’s case, by literally composing lists.

Another (novel?) approach to homoiconicity is to make commands themselves a first-class datatype within the language. A programming language does not need swathes of Python/C-style operators and statements to be fully featured; only commands are actually required.

I did this in my kiwi language: a command is written natively as `foo (arg1, arg2)`, which is represented under the hood as a value of type Command, which is itself composed of a Name, a List of zero or more arguments, and a Scope (lexical binding). You can create a command, you can store it and pass it around, and you can evaluate it by retrieving it from storage within a command evaluation (“Run”) context:

    R> store value (foo, show value (“Hello, ”{$input}“!”))
    R> input (“Bob”)
    #  “Bob”
    R> {$foo}
    Hello, Bob!
Curly braces here indicate tags, which kiwi uses instead of variables to retrieve values from storage. (Tags are first-class values too, literally values describing a substitution to be performed when evaluated.)
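Since kiwi itself is proprietary, here is a rough Python guess at the model described above — a command as first-class data holding a name, an argument list, and a captured scope, evaluated only when asked (all names here are mine, not kiwi's):

```python
# Illustrative sketch of a command-as-value design, NOT kiwi's actual code.
from dataclasses import dataclass, field

@dataclass
class Command:
    name: str
    args: list = field(default_factory=list)
    scope: dict = field(default_factory=dict)   # captured lexical binding

    def run(self, handlers):
        """Evaluate the command by dispatching to a named handler."""
        return handlers[self.name](self.args, self.scope)

handlers = {
    "show value": lambda args, scope: f"Hello, {scope['input']}!",
}

# Create a command, store it, pass it around...
greet = Command("show value", scope={"input": "Bob"})
# ...and evaluate it only when retrieved in a "Run" context:
print(greet.run(handlers))   # Hello, Bob!
```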


When it comes to homoiconicity, Lisp actually “cheats” a bit. Because it eagerly (“dumbly”) evaluates argument lists, some commands such as conditionals and lambdas end up being implemented as special forms. They might look the same as every other command but their non-standard behaviors are custom-wired into the runtime. (TBH, Lisp is not that good a Lisp.)

Kiwi, like John Shutt’s Kernel, eliminates the need for special forms entirely by one additional change: decoupling command evaluation from argument evaluation. Commands capture their argument lists unevaluated, thunked with their original scope, leaving each argument to be evaluated by the receiving handler as/when/only if necessary. Thus `AND`/`OR`, `if…else…`, `repeat…`, and other “short-circuiting” operators and statements in Python and C are, in kiwi, just ordinary commands.
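The effect of that decoupling can be mimicked in Python by passing thunks instead of values (a sketch of the idea, with assumed names — not kiwi's or Kernel's actual machinery): handlers receive unevaluated arguments and force only what they need, so short-circuiting constructs are ordinary functions, not special forms.

```python
# Handlers take zero-argument thunks and force them only as needed.

def if_cmd(cond, then, else_):
    """Short-circuiting conditional as an ordinary function:
    only one branch thunk is ever forced."""
    return then() if cond() else else_()

def and_cmd(left, right):
    """Short-circuiting AND: the right thunk is never forced
    when the left one is false."""
    return left() and right()

# The division-by-zero thunks below are never evaluated:
print(if_cmd(lambda: True, lambda: "yes", lambda: 1 // 0))   # yes
print(and_cmd(lambda: False, lambda: 1 // 0))                # False
```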

What’s striking is how much non-essential complexity these two fundamental design choices eliminate from the language’s semantics, as well as from the subsequent implementation. kiwi has just two built-in behaviors: tag substitution and command evaluation. The core language implementation is tiny; maybe 3000 LOC for six standard data types, environment, and evaluator. All other behaviors are provided by external handler libraries: even “basics” like math, flow control, storing values, and defining handlers of your own. Had I tried to build a Python-like language, I’d still be writing it 10 years on.

There are other advantages too. K&R spends chapters discussing its various operators and flow control statements, and that’s even before it gets to its stdlibs. I once did a book on a Python-like language; hundreds of pages just to cover the built-in behaviors: murder for me, and probably not much better for readers.

In kiwi, the core documentation covering the built-in data types and how to use them, is less than three dozen pages. You can read it all in half an hour. Command handlers are documented separately, each as its own standardized “manpage” (currently auto-generated in CLI and HTML formats), complete with automated indexing and categorization, TOC and search engine. You can look up any language feature if/when/as you need it, either statically or in an interactive shell. Far quicker than spelunking the Python/C docs. A lot nicer than Bash.

Oh, and because all behaviors are library-defined, kiwi can be used as a data-only language a-la JSON just by running a kiwi interpreter without any libraries loaded. Contrast that with JavaScript’s notorious `eval(jsonString)`. It wasn’t created with this use-case in mind either; it just shook out of its design as a nice free bonus. We ended up using it as our preferred data interchange format for external data sources.
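The hazard being contrasted here shows up the same way in Python (my analogy, not kiwi's): a full-language `eval` of untrusted text can execute arbitrary code, while a data-only parser can represent only values.

```python
import json

text = '{"user": "bob", "admin": false}'
data = json.loads(text)          # data-only: dicts, lists, strings, numbers
print(data["user"])              # bob

# eval() on a similar string accepts arbitrary expressions --
# computation smuggled into what looks like "data":
print(eval('{"n": 1 + 1}'))      # {'n': 2}
```

A language whose behaviors all live in libraries gets the safe, data-only mode for free: just don't load any libraries.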

Honestly, I didn’t even plumb half the capabilities the language has. (Meta-programming, GUI form auto-generation, IPC-distributable job descriptions…)


Mind, kiwi’s a highly specialized DSL and its pure command syntax makes for some awkward reading code when it comes to tasks such as math. For instance, having to write `input (2), + (2)` rather than the much more familiar `2 + 2`, or even `(+ 2 2)`. Alas it’s also proprietary, which is why I can’t link it directly; I use it here because it’s the homoiconic language I’m most familiar with, and because it demonstrates that even a relative dumbass like me can easily implement a sophisticated working language just by eliminating all the syntactic and semantic complexity that other languages put in for no better reason than “that’s how other languages do it”.

More recently, I’ve been working on a general-purpose language that keeps the same underlying “everything is a command” homoiconicity while also allowing commands to be “skinned” with library-defined operator syntax to aid readability. (i.e. Algebraic syntax is the original DSL!) It’s very much a work in progress and may or may not achieve its design goals, but you can get some idea of how it looks here:

Partly inspired by Dylan, a Lisp designed to be skinnable with an extensible Pascal-like syntax, and also worth a look for those less familiar with non-Algol languages:

And, of course, by Papert’s Logo:

AriaMinaei · 2020-03-03 · Original thread
I recommend reading Seymour Papert's Mindstorms:

It gives you a powerful framework to think about learning in children (and adults), how they can learn programming, and how they can learn many other STEM and non-STEM topics using programming.

caenn · 2019-07-02 · Original thread
Mindstorms: Children, Computers, And Powerful Ideas
ModernMech · 2018-10-30 · Original thread
I'm sorry to say, but I don't believe a dismissive attitude can ever be described as professional.

I believe everyone can be taught to program, and the choice of language, semantics, and syntax has a profound effect on how far people can get, and what frustrations they face.

There are people out there with 0 formal training who run entire businesses on Excel, the most widely used programming language bar none (it's notable that in Excel, rows start at 1 and not 0. There is a reason for this). Ask a 7 year old to use C and they're not going to get very far. Give a 7 year old Logo, and they'll be writing programs with very little instruction, with results I've seen college freshmen struggle with.

I teach a summer robotics program to middle schoolers. We used to teach it in C++ because that's what the SDK came written in. In this mode, we spent most of the time getting them to think like the compiler, teaching them about memory layout, allocation, compiling, headers, preprocessors, etc. because they constantly ran into frustrations due to the design choices of C++. They never left the session with a firm understanding and confidence around programming because they spent all their time trying to build a model from scratch in their head without any relation to their own world.

Then we switched to Matlab. With one uniform data structure, a REPL, 1-based indexing, etc. they were much more comfortable, and they were able to make the robots do amazing things for their age. The most impressive thing I've seen is making a robot choir through writing a distributed protocol to synchronize the robots' notes. They were able to do this because the language, Matlab, got out of their way, which allowed them to focus on the task and relate it back to something they knew very well: music.

All I'm saying is this attitude of "Oh, you don't understand this thing we've built and these arbitrary limitations frustrate you, therefore you shouldn't even try it in the first place" is just toxic, given the evidence I've seen that people can learn and do amazing things if we give them a fighting chance.

Required reading on this subject:

cr0sh · 2017-08-02 · Original thread
In addition to all that has been said, if you really want to understand what Logo is, what it teaches, why, etc - then you need to read about it from the man who invented it:

Want to know why Lego Mindstorms exists? Well...

That's the work you need to read - but really, learn about the man, learn about Logo. As others have noted, it's more than just turtle graphics - so much more. Unfortunately, educators still have not grasped his ideas fully, and if you look closely, what is often touted out there for teaching children and others programming - is essentially his ideas, reimplemented poorly.

He has written more on the subject than that one book; and his thoughts and ideas (and Logo itself) aren't really about teaching children programming, but teaching children how to think computationally, algorithmically. He saw how and where things were heading long before many others, and he worked to try to get people prepared. Sadly, all people grasped was turtle graphics, but not the larger picture.

I often wonder where we'd be today had more people truly understood and implemented his (and, to be honest, his "muse" / "mentor" / "inspiration", if you will, in Piaget) methods and thoughts on teaching. Most likely in a much better position as a society...

wallflower · 2017-05-05 · Original thread
The beauty of Scratch and other similar tools is that instead of the teacher asking questions, the child learns to ask their own questions.

If you are interested in learning more about this mindset, you should read Mindstorms by Seymour Papert (RIP).

Scratch can be a "gateway drug" to languages that professional programmers use. The extensions/abstractions of Scratch from Berkeley that deal with making it do complicated things seem like putting a fish on a bicycle. Sometimes, you just have to leap and try to not fall.

kbouck · 2016-12-08 · Original thread
That would be amazing considering his relation to Seymour Papert [1], who:

- Co-invented Logo Programming language

- Authored "Mindstorms" [1]

- Collaborated with Lego to produce (Logo-programmable) Lego Mindstorms.

- Was made co-director of the MIT AI Lab by...... Marvin Minsky



oulipo · 2016-08-01 · Original thread
Seymour Papert was an inspiring and caring researcher, and he will inspire many generations to come. His work was truly groundbreaking, subtle and profound, and I encourage everyone to read some of his books, notably Mindstorms and The Children's Machine.

csours · 2015-03-16 · Original thread
Putting aside all the ad-hominem and everything-is-terrible, I think I learned a lot from following the references Tef makes in this talk.

Some references (sorry for the formatting, if this becomes a thing I'll do the wiki and the logo):


Blub Paradox:

Perl and 9/11:


Waterfall (same pdf, linking from 2 sources):

Conway's law:

Unrelated, Pournelle's Iron Law of Bureaucracy (I just like this law):

X-Y Problem:

Atwood, Don't Learn to Code:

Wason selection task:


Amazon Links, no referral:

vinalia · 2014-04-07 · Original thread
It might be fun to look at LOGO (maybe UCBLogo[1], free books included) for a first programming language. This has a first-person (turtle) view on a GUI that you move around to make shapes and do math/physics. The idea is that when programming it will be easier for the programmer to associate themselves with the turtle and interaction/exploration in the language will be natural.

The Logo way is pretty different from conventional programming models because it was tailored to be more intuitive than conventional languages like C, JavaScript, or VB. It still offers access to complex, higher order programming concepts like algorithms, AI, automata, etc. Harold Abelson from MIT (SICP) wrote a cool book that covers math/physics in Logo, too.[2]
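The turtle's body-relative geometry is easy to see even without graphics. Here is a GUI-free Python sketch (Python's stdlib `turtle` module does the same thing on screen) of the classic `repeat 4 [forward 10 left 90]` square:

```python
import math

class Turtle:
    """Minimal turtle: a position, a heading, and a recorded path."""
    def __init__(self):
        self.x, self.y, self.heading = 0.0, 0.0, 0.0
        self.path = [(0.0, 0.0)]

    def forward(self, d):
        self.x += d * math.cos(math.radians(self.heading))
        self.y += d * math.sin(math.radians(self.heading))
        self.path.append((round(self.x, 6), round(self.y, 6)))

    def left(self, degrees):
        self.heading = (self.heading + degrees) % 360

t = Turtle()
for _ in range(4):        # repeat 4 [forward 10 left 90]
    t.forward(10)
    t.left(90)

x, y = t.path[-1]
print(abs(x) < 1e-9 and abs(y) < 1e-9)   # True -- back where it started
```

The program is phrased entirely in moves the programmer can act out with their own body, which is the intuition the comment describes.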

The creator of the language has an awesome book[3] on how computers can enhance pedagogy and someone wrote a cool blog post on programming for children that mentioned it too[4].





dahjelle · 2014-03-28 · Original thread
I don't have a direct answer for you (still researching), but if you haven't read Mindstorms by Seymour Papert [1], I highly recommend it. It's generally about how computing can help kids learn problem solving in a variety of contexts, including several bits about the LOGO programming language. It's from 1980, so it is definitely dated, but many of the concepts are pretty timeless.


joelhooks · 2013-10-11 · Original thread
I've been reading Papert's Mindstorms[1], which is a discussion on math education and the genesis of LOGO. If this topic interests you, I highly recommend the book.


jfarmer · 2013-08-10 · Original thread
Also, your friend should read Mindstorms:

One of the major themes is the relationship children have with mathematics and ways teachers can change it.

EzGraphs · 2013-08-01 · Original thread
Reminds me of Mindstorms:

In the intro of the book Seymour A. Papert describes how gears provided an early concrete framework that made understanding abstract mathematical concepts presented at a later point much easier to visualize and apply.

tel · 2012-10-28 · Original thread
(Also at:

I have a thesis that the kind of thinking required to survive med school is diametrically opposed to the kind of thinking required to do statistics well. It's the "rote pattern matching" versus "mathetic language fluency" issue that's at the heart of things like Papert's Constructivist learning theory[1] and it really causes me to have little surprise at an article like this. Doctors are (usually) viciously smart people who have to make a wide array of difficult decisions daily, but to operate at that level requires an intuition around a lot of cached knowledge, something I feel to be basically the opposite of statistical thought.

I don't think this is unique, either. It's the heart of Fisher's program to provide statistical tests as tools to decision-makers[2]. It's an undoubted success in providing general defense against coincidences to a wide audience, but it casts the deductive process needed in a pale light.

I think a principal component of the computer revolution is to provide more people with better insight into mathetic thought. Papert focuses on combinatorial examples in children in Mindstorms[3], but I think the next level is understanding information theory, distributions, and correlation on an intuitive level. MCMC sampling went an incredible way toward helping me understand these ideas, and probabilistic programming languages are a great step toward making them more available to the general public, but we also need great visualization (something far removed from today's often lazy "data viz").

Ideally, things like means and variances will be concepts that are stronger than just parameters of the normal distribution---which I feel is about as far as a good student in a typical college curriculum statistics class in a science or engineering major can go---but instead be tightly connected to using distributions accurately when thinking of complex systems of many interacting parts and using concentration inequalities to guide intuition.

I think the biggest driver of the recent popularization of Bayesian statistics is that distributions as a mode of thought is something quite natural to the human brain, but also something rather unrefined. People can roughly understand uncertainty about an outcome, but have a harder time with conjunctions or risk. How can we build tools that will teach people greater refinement of these intuitions?

[1] [2] [3]

jfarmer · 2012-10-12 · Original thread
I linked to it at the end of my comment, where I also mentioned who wrote it.

Here it is, again:

Lego Mindstorms are named after it.

GHFigs · 2011-06-16 · Original thread
Seymour Papert, in Mindstorms:

"By deliberately learning to imitate mechanical thinking, the learner becomes able to articulate what mechanical thinking is and what it is not. The exercise can lead to greater confidence about the ability to choose a cognitive style that suits the problem. Analysis of "mechanical thinking" and how it is different from other kinds and practice with problem analysis can result in a new degree of intellectual sophistication. By providing a very concrete down-to-earth model of a particular style of thinking, work with the computer can make it easier to understand that there is such a thing as a "style of thinking". And giving children the opportunity to choose one style or another provides an opportunity to develop the skill necessary to choose between styles. Thus instead of inducing mechanical thinking, contact with computers could turn out to be the best conceivable antidote to it. And for me what is the most important in this is that through these experiences these children would be serving their apprenticeships as epistemologists, that is to say learning to think articulately about thinking."

ThomPete · 2010-11-16 · Original thread
I know I keep beating that horse.

But Seymour Papert's book "Mindstorms: Children, Computers, And Powerful Ideas" is great.

ThomPete · 2010-11-03 · Original thread
Seymour Papert's Mindstorms program. Read the book, it's brilliant.

ThomPete · 2010-09-05 · Original thread
Read Seymour Papert's book Mindstorms.

The main point is to have children do something they understand from the real world and have a physical relationship with. That way it won't feel as abstract.

dhess · 2009-02-24 · Original thread
Learning to build or repair a car would probably improve your understanding of thermodynamics, aerodynamics, momentum, etc. Likewise, writing a computer program that simulates the motion of a planet around a star or renders 3D graphics might improve your understanding of classical mechanics and any number of topics in math, just to name a few examples; cf.

This is not to mention that learning how to program a computer is just another tool to put in your bags of tricks for solving problems in any of the domains you mentioned (some better suited than others, of course).
