Thursday, September 24, 2020

the importance of rapid iteration and proximate feedback

A couple of things I've been experiencing lately (reading, listening to, and programming) have some synchronicity:

Daniel Kahneman, talking with Sam Harris on what it takes to develop intuition - to impress knowledge into your immediate, System 1 processing vs your slower, rational System 2:

Now, in order to recognize patterns in reality, which is what true intuitions are, the world has to be regular enough so that there are regularities to be picked up. Then you have to have enough exposure to those regularities to have a chance to learn them. And third, it turns out that intuition depends critically on the time between when you're making a guess and a judgement, and when you get feedback about it. The feedback has to be rapid. And if those three conditions are satisfied, then eventually people develop intuition. 

Maddy Myers, talking on the Triple Click podcast episode Which Games Do We Wish We Were Better At? - specifically around 16:00, on playing fighting games at local competitions:

And then hearing from people, in person, feedback on what I was doing wrong and then using that immediately to improve...you really can't beat that [...] because you're getting that hands-on lesson constantly... and matches are short enough that you can actually quickly implement that! Which I feel is another reason why I could more immediately see improvements there than at something like Starcraft, where I feel like I did 600 things wrong, and the next match is going to be completely different than this, so who frickin' knows! - But at least in fighting games you can change one thing and see a big difference and that always worked better for my learning style.

And then I'm back to my own question: why does my System 1 have such a dislike of unit tests, and such disinterest in locking down JavaScript via TypeScript, when so many other smart developers love both so much?

I think I answered this question on this blog in 2016:

I've found SOME parallels in the way I write code relative to the Proper Unit Tester (PUTter). I'm a ridiculously incremental coder; I hate being "wrong", and I'm kind of scared of making a block of code so big that when I'm finally ready to run it I don't know what part might be going wrong, and so I've always relied on tight code/run/eval loops... lots of manual inspection and testing of assumptions at each little baby step. Often I'd even write little homebrew test functions for the various edge cases. The difference between old me and a true PUTter, then, is I would then throw away those little functions and logging statements in the same way a building site takes down the scaffolding once the building is standing. But PUTting says - look, that's how you know this code still works in the future, nurture that scaffolding, keep it around... make it survive any refactoring, fix all the mocks when the endpoint input or output gets added to, etc etc. That's a huge cost, and I'm still trying to convince myself it's worth it.
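To make that scaffolding metaphor concrete, here's a minimal sketch (parseDuration and its edge cases are made up for illustration, not from that 2016 post) of the kind of throwaway check I mean, next to the Jest test a PUTter would keep around instead:

```typescript
// Hypothetical function-in-progress, made up for illustration.
function parseDuration(input: string): number {
  // "1:30" -> 90 seconds, "45" -> 45 seconds
  const parts = input.split(":").map(Number);
  return parts.length === 2 ? parts[0] * 60 + parts[1] : parts[0];
}

// The throwaway "scaffolding": a quick check I run after each tweak,
// eyeball the output, and delete once the function feels solid.
function checkParseDuration(): void {
  console.log(parseDuration("1:30"), "should be 90");
  console.log(parseDuration("45"), "should be 45");
  console.log(parseDuration("0:05"), "should be 5");
}
checkParseDuration();

// The PUTter keeps the same checks alive forever instead, e.g. as a Jest test:
//   test("parseDuration handles mm:ss and bare seconds", () => {
//     expect(parseDuration("1:30")).toBe(90);
//     expect(parseDuration("45")).toBe(45);
//   });
```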

TypeScript is a bit of friction on that kind of iterating axle... it tends to violate DRY; if you are messing with the interface of a function, and especially if your TypeScript setup is a bit hair-trigger (like yelling at you for unused values), you might have to make changes in 3 or 4 places. Yeah, it's better than an undocumented "oh we assumed this random key was in the props object but forgot to tell you", but it's also not much of a replacement for updated real documentation.
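A contrived sketch of that friction (all the names here are hypothetical, not from any real codebase): adding one optional flag to a function means touching the type declaration, the implementation, the call sites, and any mocks that stub it.

```typescript
// 1. The options type, often living in its own types.ts
interface FetchUserOptions {
  includeAvatar: boolean;
  includeHistory?: boolean; // <-- new flag: change #1
}

// 2. The implementation's signature and body
function fetchUser(id: string, opts: FetchUserOptions): Promise<unknown> {
  const params = new URLSearchParams({ avatar: String(opts.includeAvatar) });
  if (opts.includeHistory) params.set("history", "true"); // change #2
  return fetch(`/api/users/${id}?${params}`).then((r) => r.json());
}

// 3. Every call site that builds the options object
fetchUser("abc123", { includeAvatar: true, includeHistory: false }); // change #3

// 4. ...plus any test mocks still stubbing fetchUser with the old shape (change #4)
```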

Unit tests can be similar, especially for UI. I'm not sure if I've just never landed in a place that was doing it right, but at every place I've been, the correct response to "oh, the unit tests broke!" is not "time to fix the code!" but "time to update the unit tests!" In general, basic developer testing confirms the new feature is working and the old functionality isn't blatantly busted - and unit tests are often awkward enough to set up (in terms of mocking an environment for the code to run in) that they are almost always more fragile than the core code... they are canaries in the coal mine that keel over if a miner passes wind.
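Here's a hedged sketch of what I mean by the setup being more fragile than the code (getUserName and the endpoint are made up for illustration): the Jest test has to fake the whole environment the code runs in, and that fake is usually the part that keels over.

```typescript
// Hypothetical code under test - a small fetch wrapper.
async function getUserName(id: string): Promise<string> {
  const res = await fetch(`/api/users/${id}`);
  const data = await res.json();
  return data.profile.name;
}

test("shows the user's name", async () => {
  // The mock hand-builds the endpoint's exact response shape; any change to the
  // real endpoint means updating this fixture - the mock, not the code, is
  // usually what breaks.
  global.fetch = jest.fn(async () => ({
    ok: true,
    json: async () => ({ profile: { name: "Kirk" } }),
  })) as unknown as typeof fetch;

  expect(await getUserName("abc123")).toBe("Kirk");
});
```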

The other thing about UI is that dependencies often aren't well described. Today we ran into one where a change to the CSS line-height property of a container label was making our checkboxes break. There wasn't really a way to capture that in a unit test, unless you were doing screenshot-y ones. (We use text-based snapshots... a few years back I interviewed at one place that was using graphical screenshots, which might be more revealing.)
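For context, a text-based snapshot test looks roughly like this (the component name and markup are hypothetical); nothing in the serialized output references the stylesheet, which is why that line-height regression sailed right through.

```tsx
import React from "react";
import renderer from "react-test-renderer";

// Hypothetical component, made up for illustration - the real one lives elsewhere.
const CheckboxRow = ({ label }: { label: string }) => (
  <label className="checkbox-row">
    <input type="checkbox" />
    {label}
  </label>
);

test("CheckboxRow renders", () => {
  const tree = renderer.create(<CheckboxRow label="Remember me" />).toJSON();
  // The snapshot serializes tags, props, and class names - but not computed CSS,
  // so a line-height change in a parent's stylesheet never shows up here.
  expect(tree).toMatchSnapshot();
});
```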

A unit test is setting up a fragile glass ant farm to make sure a few little ants are still toiling away - but that's no way to test the health of the colony, and I don't understand why so many developers don't see the cost of them in terms of developer time, both at the outset and as the codebase grows.

The errors that do show up as a codebase grows are emergent... most well-sized units are too small to fail; the trouble happens at higher levels. And yadda yadda, what if you have to refactor the unit and want to make sure it still works? Again, I could count on one hand the number of subtle problems I've seen introduced by that kind of unit refactoring in real life.
