Quite a few times I have been asked, "What should I memorize when learning Python?" I always answered: "Nothing! You will naturally memorize the things that matter and forget the ones that don't." Obviously, it is contextual - a backend developer will memorize different things than a machine learning researcher.
When we actually use something, spaced repetition comes naturally (which is also why our brain is tuned to it!). Artificial spaced repetition is often helpful when we learn in an artificial environment - a new human language with little opportunity to practice it, material for an exam, etc.
With programming languages, as long as you have a computer, there is no reason to learn it without actually using it.
My answer would be to memorize useful facts that can't be derived from other knowledge, but put most of the effort into building up fundamentals from which most usage can be synthesized.
Arbitrary things like obscure names, weird parameter orderings, mutated inputs, etc. are all the sorts of gotchas that can be learned up front if you care to know them before being bitten.
An example in Python is the del statement, `del d[key]`, which I find arbitrary and non-intuitive.
Actually, when learning any language I usually learn the collection usages early - these are worth memorizing rather than repeatedly looking up case by case, as a time-saving (non-flow-breaking) measure.
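For comparison, a quick illustration of the statement form next to the dict method that covers the same ground (standard Python semantics, nothing more):

```python
d = {"a": 1, "b": 2}

# Statement form: removes the key, raises KeyError if it is missing.
del d["a"]

# Method form: also removes the key, and can return a default instead of raising.
d.pop("b")        # -> 2
d.pop("b", None)  # -> None rather than KeyError, since "b" is already gone
```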
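A few examples of the kind of collection usages meant here (illustrative Python only; every language has its own list):

```python
from collections import Counter, defaultdict

d = {"a": 1}
d.get("b", 0)                          # lookup with a default -> 0

groups = defaultdict(list)
groups["evens"].append(2)              # group items without key-existence checks

Counter("abracadabra").most_common(2)  # count occurrences -> [('a', 5), ('b', 2)]

{1, 2, 3} & {2, 3, 4}                  # set intersection -> {2, 3}
```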
For Swift it was all the weird call forms, with keywords/symbols that move around rather than being additional parts of a complete form.
Learning parameter ordering is precisely the kind of thing I advise against memorizing. Just use an IDE.
Other things - well, there is Google, there is StackOverflow, and now - also ChatGPT with GPT-4.
Sure, that might not be enough for learning (at least, not for everyone), but it is well enough to avoid needless memorization. Memorization always comes at an opportunity cost - time (and, well, brain capacity) that could go into something more fruitful, e.g., learning good programming patterns, wise abstractions, etc.
The 'parameter ordering' bit was specifically from using PHP for a stint. The standard library goes out of its way to make parameter ordering non-standard and inconsistent (the classic example being needle/haystack order, which flips between strpos() and in_array()). Having to look it up constantly was such a flow-kill. My text editor back then wasn't so smart.
It's not an either/or.
I'm fluent in Python, but had spent most of my time in 2.x. When I finally had to switch to 3.x, I went through all the 3.x release notes for new capabilities. One thing stood out: os.scandir() is now the preferred way to iterate over directories (os.walk() itself was reimplemented on top of it).
There's no way I'll remember that and I'd be too lazy to Google it. So it went into SRS.
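For reference, the kind of thing such a card might capture (a minimal illustration of os.scandir(); the exact card content above is the commenter's, not shown here):

```python
import os

# os.scandir() yields DirEntry objects; is_file()/is_dir()/stat() can reuse
# information cached from the directory read instead of extra system calls.
with os.scandir(".") as entries:
    for entry in entries:
        if entry.is_file():
            print(entry.name, entry.stat().st_size)
```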
Ditto for concurrent.futures (broke my habit of using multiprocessing directly).
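For context, the habit in question looks roughly like this (a minimal sketch of the executor interface that replaces managing multiprocessing.Pool by hand):

```python
from concurrent.futures import ProcessPoolExecutor

def square(x):
    return x * x

if __name__ == "__main__":
    # The same interface also works for ThreadPoolExecutor; results, timeouts
    # and exceptions are handled more uniformly than with multiprocessing.Pool.
    with ProcessPoolExecutor(max_workers=4) as pool:
        print(list(pool.map(square, range(10))))
```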
The other use case for SRS and programming languages: There are always languages I use only occasionally (e.g. Emacs Lisp), so I'll never develop muscle memory. Using SRS significantly boosted my Elisp capabilities.
I don't code in JS, but I decided to take a course on it and put a lot of the stuff in SRS. I've almost never used JS since, but a coworker is using it in a project we're both working on. I was looking at his code, and pointed out to him various alternatives that he wasn't aware of (e.g. newer features since the time he learned it). Definitely would not have been able to do it without SRS. I can mostly read/understand his code. Again, almost entirely due to SRS.
> There are always languages I use only occasionally (e.g. Emacs Lisp), so I'll never develop muscle memory.
This is a fantastic use case that I hadn’t thought of.
One big problem with DSLs is that unless you use them on a near daily basis, you’ll keep forgetting how to use them. They really need to be worth the memorization cost that they introduce to a system. But SRSs can help.
I mostly agree, except that if you only learn this way, you'll end up missing a lot of non-obvious features of the language - e.g. writing your own function to do something you could have done with a single standard library call, if only you knew it existed.
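A typical instance of that failure mode (hand-rolled counting next to the standard library call it duplicates; purely illustrative):

```python
from collections import Counter

# The function you end up writing when you don't know Counter exists.
def count_items(items):
    counts = {}
    for item in items:
        counts[item] = counts.get(item, 0) + 1
    return counts

words = ["spam", "eggs", "spam"]
count_items(words)  # -> {'spam': 2, 'eggs': 1}
Counter(words)      # -> Counter({'spam': 2, 'eggs': 1})
```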
I programmed a spaced repetition system that integrates the doing, so that my cards aren't just memorization and theory. Each flashcard is a kata I have to program, and the program checks if my output is correct.
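One way something like this could be wired up - a hypothetical sketch, not the actual system described above; the card format, file name, and checker are all made up:

```python
import importlib
import random

# Hypothetical card format: a prompt plus test cases; the answer is written
# as a function solve() in a file called answer.py.
CARDS = [
    {
        "prompt": "Flatten a list of lists into a single list.",
        "tests": [([[1, 2], [3]], [1, 2, 3]), ([[]], [])],
    },
]

def review():
    card = random.choice(CARDS)
    print(card["prompt"])
    input("Write solve() in answer.py, then press Enter to check... ")
    answer = importlib.import_module("answer")
    importlib.reload(answer)  # pick up edits made between reviews
    ok = all(answer.solve(arg) == expected for arg, expected in card["tests"])
    print("Correct!" if ok else "Not yet - review again tomorrow.")

if __name__ == "__main__":
    review()
```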
How did you implement this? It's exactly what I've been wanting for ages.
Also curious
Perhaps, as long as you have a computer with access to chat AI, there is no reason to memorize the whole language.
Without chat AI, or an expert Q&A forum, it is difficult to search for the best solution for a complex problem articulated in its entirety.
What will end up happening is that you will convert your Y problem into multiple smaller X problems and search for how to solve those. You will design the solution to the problem in the abstract language of your mind using these smaller steps, and then map those steps to the programming language. You can much more easily look up smaller steps such as "find the index of a character in a string".
By the time you get to these small steps, you're imagining a detailed solution, whereas the language could offer something more direct.
One telltale sign of this is code that contains functions which have exact equivalents in the standard library.
When you ask the programmer why they didn't use the library function or built-in syntax, they will often say: "Oh, I looked for something like that but didn't find it; it was faster to just write the code than to keep looking for it."
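A small illustration of that pattern - the decomposed version you reach by searching for the small steps, next to the more direct call the standard library already offers (the task here is just an example):

```python
import os

filename = "report.final.pdf"

# Decomposed solution: "find the index of a character in a string", then slice.
dot = filename.rfind(".")
stem = filename[:dot] if dot != -1 else filename

# What the standard library already offers for this particular job.
stem_direct, ext = os.path.splitext(filename)  # -> ('report.final', '.pdf')

assert stem == stem_direct
```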
> With programming languages, as long as you have a computer, there is no reason to learn it without actually using it.
Spaced repetition isn't about learning, it's about remembering. From the article:
> Flash cards are for remembering what you’ve learned.
Flash cards are great for things that you don't use often but still want to remember. I use them a lot for CLI options (docker, ripgrep, etc.), parts of the standard library that are useful once in a while but not all the time, algorithms, and editor shortcuts. It also means that when you switch environments you can still remember everything.