Saturday, January 21, 2017

tidying the dropbox root folder and using it to restore settings and other mac tips

Lost in Mobile posted Jacob Salmela's Using Dropbox For An Easy Restore Of All Your Computer’s Settings, but I was more intrigued by his suggestion Unclutter your Dropbox Root Folder using chflags on Mac - many applications that have a Dropbox storage option tend to jam a new folder into the root level of Dropbox, which is not a good look, and this lets you tuck them out of sight.
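The gist is setting the BSD "hidden" flag on the offending folder, so Finder stops displaying it while Dropbox keeps syncing it. Here's a minimal sketch in Python (the folder name is a made-up example; os.chflags wraps the same system call the chflags command uses):

```python
import os
import stat

# Hypothetical app-created folder cluttering the Dropbox root
folder = os.path.expanduser("~/Dropbox/SomeAppFolder")

# Preserve any existing flags, then add the macOS/BSD "hidden" flag -
# equivalent to running `chflags hidden ~/Dropbox/SomeAppFolder` in Terminal
flags = os.stat(folder).st_flags
os.chflags(folder, flags | stat.UF_HIDDEN)
```

(Running "chflags nohidden" on the folder makes it visible again.)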

Anyway, Jacob Salmela's blog might be worth keeping an eye on; skimming through the archive, there's lots of Raspberry Pi / AdBlocker / malware stuff, with some Star Trek tomfoolery as well. Some fun entries: macOS's "say" command with various voices is worth keeping in mind, though I haven't thought of a terrific use for it (I did use it to make the samples for my 7-minute workout app). Also, opendiff is something I remember wanting way back when - a visual "diff" of two text files.
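Both are easy to poke at from Terminal; here's a quick sketch driving them from Python (the voice name and file paths are arbitrary examples - running "say -v '?'" lists every voice installed on your machine):

```python
import subprocess

# Have macOS read a line aloud in one of its novelty voices
subprocess.run(["say", "-v", "Zarvox", "We come in peace"])

# Launch FileMerge (requires Xcode) to visually diff two text files
subprocess.run(["opendiff", "old_notes.txt", "new_notes.txt"])
```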

Wednesday, January 18, 2017

further thoughts on innovation

This Lost in Mobile article about a Cult of Mac piece - saying maybe Tim Cook's non-combative style is leading to a less innovative-feeling Apple - led me to this ramble:
Arguing with my sparring partner, we got to thinking about what innovation is - like the iPhone launch: was it more the thunderbolt of a new idea, or incremental progress suddenly revealed? I think it's both - someone high up gets a vision of "hey, maybe we could do this if we have the tech" and then a team has to put in the hard, slow trudge of all the steps to make that happen. (Or maybe someone at the middle-to-high level gets the idea and pitches it to the very highest, even ruffling some peers' feathers - the process that this article says might be breaking at Apple.)
I think usually that vision takes the form of a new interaction, something that wasn’t possible with the current configuration of stuff.
I think some of the current state of Apple is the lack of a big idea. Look at Jobs' last big 3: iMac was a matter of presentation and wrapping - actually a freshening of the very original information-appliance concept, but redone beautifully. iPod's innovation was the clickwheel - and it was a great one. iPhone's interaction innovation was taking the new type of touch sensitivity (already used in, say, laptop trackpads) and putting it behind glass. (And visual voicemail.)
In this view, iPad really didn't represent interaction innovation (to be fair, it represents the innovation that then got diverted into the iPhone, so by the time it came around it was kind of ho-hum, just larger). And the same for the Apple Watch: the interaction of "smaller and on your wrist instead of in your pocket" doesn't involve all that many new forms of interaction. What will the next interaction innovation be? Well, if I could say for sure, I might be rich. It might be in voice assist, where Apple seems to be lagging on execution a bit (some argue it's because they're more privacy-conscious than their rivals?). Random pipe dream: what if clear touchscreens could remold themselves slightly to provide tactile bumps? Like telling your thumb where the virtual cross pad is, or offering faux physical slider points... no idea if the supporting tech for that is even on the horizon, but it certainly sounds cooler than edge-to-edge curved screens, doesn't it?
So we turn to Microsoft. They made a bet that the future of laptops and tablets might be doable with one OS, and they paid the price for that: some of their earlier attempts were really painful to use, and even now the legacy aspect they lug along is off-putting for some. But now there is some exciting interaction innovation: giant desktop-workspace touchscreens and intriguing tactile physical dials are making a hard press for "creatives" - it's a historical shame Apple is falling behind in supporting that group. (Compare to the iPad Pro message, where Apple is saying "you can do all your pro work without a real filesystem", which honestly I'm not sure I believe.)
If I thought Windows was anywhere near as acceptable as macOS I might be tempted to swap back, but I'm not willing to gamble 800 bucks to find out it's not. (And that's another way Windows might suffer with people like me who could potentially be persuaded to "switch back" - I tend to compare the hardware of my mom's $250 cheapie Windows box to the $1000 hardware of my Macbook Air, and that's clearly not fair.)
Sorry, a bit long-winded there 😀

Tuesday, January 17, 2017

Raymond Loewy's MAYA: better! different! but not TOO different...

The Atlantic had a nice piece on Raymond Loewy's design aesthetic:
Loewy had an uncanny sense of how to make things fashionable. He believed that consumers are torn between two opposing forces: neophilia, a curiosity about new things; and neophobia, a fear of anything too new. As a result, they gravitate to products that are bold, but instantly comprehensible. Loewy called his grand theory “Most Advanced Yet Acceptable”—MAYA. He said to sell something surprising, make it familiar; and to sell something familiar, make it surprising.
Interesting stuff. It reminds me of "Zen and the Art of Motorcycle Maintenance" and its description of how we know "Quality", the Tao, how something is good at being whatever it is, in a circular way: we learn to define the quality as we recognize the quality in instances of the thing we're defining.

Monday, January 16, 2017

game design ala nintendo

Kottke had a post about Nintendo's game design which featured two videos.

This is the one that resonated more deeply for me:

I'm bummed I don't make many microgames any more, but when I did, they followed that idea of "make a new mechanic, have some fun with it". (I used to do Glorious Trainwrecks' two-hour game jams, where you'd make the best worst game you could in 2 hours, and then hop online and play what everyone else made as well - a new fun mechanic was about all I could hope for. You can see some of the best results on my game page (currently my 2015 advent calendar) and then a pile of other stuff here.)

There should never be Game Police saying what games can and can't be, should and shouldn't be, or are and aren't, but I do think you can make an argument that video games as a medium are especially interesting when they're playing to their unique strengths - things you can't easily do in other media, like making "physically" interactive microworlds. Lots of formats can tell stories, and many of them can even bring the reader/viewer into the story... (and video games always have to deal with "ludonarrative dissonance", where what the player wants to do may or may not make sense with what the character wants to do). And many, many types of games let you address strategic fun, and even model their own little "worlds" in the process. But making an entertaining interactive/reactive new reality... that came first for games, and it's where my focus tends to be drawn.

The other video presented another Nintendo-ish view, and reminded me what my attempts at gamemaking sometimes lack:


For me the most important quote was Miyamoto saying this:
I think that first is that a game needs a sense of accomplishment. And you have to have a sense that you've done something.
Challenge and accomplishment do bring a lot to a game. (Of course, games have gotten a bit more friendly and forgiving over the years - sometimes I worry that they reward time spent rather than skill built...)

I have mixed but mostly sad feelings about not making or playing games much these days. I have some friends who argue I've spent too much of my life with games already, and that they serve as a distraction from the important things; and certainly some of the things I've been doing more of (especially playing in some street bands) have given me great rewards as well.

Sigh, being a grown-up.

Thursday, January 12, 2017

the stuck-in-traffic problem

tl;dr: The traffic isn't against you. It's just the traffic.

In Cat's Cradle, Kurt Vonnegut introduces the concept of a "wrang-wrang": a person who steers people away from a line of thinking by reducing that line, with the example of the wrang-wrang's own life, to an absurdity.

I'm trying to make Homer Simpson my wrang-wrang. Specifically this clip:


That clip captures a sudden, irrational, and disproportionate fury at somewhat trivial things that are out of my control. In some circumstances I'm almost too controlled - many of my potential feelings of desire have to be vetted by my inner judge before they're allowed - but the feeling of "this is just wrong" rises up in a sudden furious tantrum, and I don't like that about myself. (It's gotten me into trouble in previous jobs; it's not that I rant and rave endlessly, it's just that one moment of exposed anger, even if directed at a system and not an object, can make people very uncomfortable.)

The issue has been on my mind for a while. In 2008 I wrote
"C'est la Vie!" / accepting that / "this should not be!" / but coping / more stoically; / philosophically-- / "C'est la vie..."

A few years later I read about William Irvine's modern application of classical Stoicism in "A Guide to the Good Life": protecting one's equanimity and contentment at all costs, in part by triaging the world into things one has complete control over, no control over, and somewhere in between, and attending only to the first and last categories; along with "negative visualization" - a meditative technique of thinking about how bad things could get, then being happy when they're better than that, and realizing that you'd be able to cope even if they were that bad. So that was helpful, but just recognizing that a situation was out of my control didn't actually help my equanimity all that much.

Other approaches suggested themselves. I wrote this in 2015:
Recently a conversation with Derek gave me the idea of approaching the world with a kind of cheerful pessimism - assume that "a bit screwed up and annoying" is kind of the natural state of the universe, that things WILL be messed up, but generally not irretrievably so, and then be extra cheerful when the dice roll your way. "Lousy minor setbacks" that could otherwise be absolutely and inappropriately infuriating become almost soothing reminders that Murphy's in His Heaven and all's right, or wrong in the right way, with the world.

Again, that sounded better on paper than in real life, in terms of not being upset. I don't really want to be all that dour all the time.

In early 2016, I stumbled on "Amor Fati" - still a concept that resonates for me: a call to cultivate love of one's fate, even the parts that are unpleasant, to the point that you wouldn't have it any other way. As Nietzsche put it:
"My formula for greatness in a human being is amor fati: that one wants nothing to be different, not forward, not backward, not in all eternity. Not merely bear what is necessary, still less conceal it--all idealism is mendacity in the face of what is necessary--but love it."

I felt - still feel - that much of the problem is that our monkey brains are so good at daydreaming up these alternate realities that are just like this one, but better - this same roadway, this same car, not all these other cars - but those realities don't exist in our world, except for the power we give them to make us unhappy.

Later in the fall I also stumbled on the idea of using empathy to make situations more palatable. In its more extreme form, this is a kind of hippy-dippy "we are all one thing", but even without going to that extreme, if you see yourself on a common team of humanity, someone cutting you off might be a win you can share in. Of course, this doesn't apply to traffic jams so much, at least when everyone is equally stuck. (Remember- you're not 'in' a traffic jam, you 'are' the traffic jam)

But now I've found what seems like the strongest counter-formula yet: the recognition of this weird animism humans tend to have, where we look for intent and purpose even in things that are just accidental and emergent. The first stage of this realization was that "it is absurd to take traffic personally". And yet I do. Later, in the movie "Mistress America", I found the even wider application: "The path isn't against you. It's just the path." I've been finding that a very useful mantra lately.

The other nice thing is that these various viewpoints are complementary; they don't really undercut each other that much. (I've been told that's characteristic of Eastern religions - in general they are less combative, and less defensive of their "unique path to truth", than many Western outlooks.)

The traffic isn't against you. It's just the traffic.

Thursday, December 29, 2016

the digital library of babel

Jorge Luis Borges introduced the concept of the Library of Babel, a "vast library containing all possible 410-page books of a certain format and character set." To further quote Wikipedia,
Though the order and content of the books is random and apparently completely meaningless, the inhabitants believe that the books contain every possible ordering of just 25 basic characters (22 letters, the period, the comma, and the space). Though the vast majority of the books in this universe are pure gibberish, the library also must contain, somewhere, every coherent book ever written, or that might ever be written, and every possible permutation or slightly erroneous version of every one of those books.
The other day, my company CarGurus had a lunch-and-learn about the internals of git. I've always been impressed with how quick updates are once you've cloned a repository. In part that's because git stores an archive with a compressed version of every version of every file and folder your project has generated, so chances are it doesn't have to pull down that much fresh data. What's really clever is how it stores them: each is in a physical file named after the SHA-1 hash of the file contents (each physical file sits in a folder named for the first two hex digits of the 40-hex-digit hash; you can see those folders in the .git/objects/ dir of your git project).
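Here's a sketch of that naming scheme in Python - git hashes a small header plus the file contents, then splits the 40 hex digits into a folder name and a file name (this mirrors what "git hash-object" computes for a blob, before zlib compression):

```python
import hashlib

content = b"hello world\n"  # example file contents

# Git hashes a blob as: "blob <size in bytes>\0" + contents
header = b"blob %d\0" % len(content)
sha1 = hashlib.sha1(header + content).hexdigest()

# First two hex digits name the folder, the remaining 38 name the file:
# .git/objects/3b/18e512dba79e4c8300dd08aeb37f8e728b8dad
print(f".git/objects/{sha1[:2]}/{sha1[2:]}")
```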

SHA-1 is really amazing, because it's SO unlikely that two different files will generate the same hash. This page puts it this way:
Here’s an example to give you an idea of what it would take to get a SHA-1 collision. If all 6.5 billion humans on Earth were programming, and every second, each one was producing code that was the equivalent of the entire Linux kernel history (3.6 million Git objects) and pushing it into one enormous Git repository, it would take roughly 2 years until that repository contained enough objects to have a 50% probability of a single SHA-1 object collision. A higher probability exists that every member of your programming team will be attacked and killed by wolves in unrelated incidents on the same night.
So. Many years ago, an awesome experimental site, word.com (now sadly defunct, the domain bought out by Merriam-Webster), ran a subsite called Pixeltime - you can read about it at my tribute page, but the upshot was that it was an online graphic editor slash contest, with an emcee, the Pixel Master, whom I've described as "a cross of Mr. Rogers and Max Headroom via Blue Man Group". Each image was 45x45, with a palette of 16 colors. (I made some Visual Basic hacks that let me essentially upload photos in 5 shades of gray by grabbing the mouse and clicking each pixel.)


That one on the right is a little joke - I realized there was a maximum number of images that could be made in that format... at first I badly underestimated how many that is, but it turns out it's 16^2025 (one square could be any of 16 colors, two squares give 16 * 16 combinations, and so on up through all 45 * 45 = 2025 squares). Anyway, most calculators don't even try to figure out what that is; they just call it "infinity".

So here's the thing: that "infinity" is much, much, much bigger than the number of unique SHA-1 hashes - SHA-1 is a 160-bit hash, so there are only 2^160 of those, about 1.5 * 10^48. If you were to make a hash for each image, you would certainly get a large number of collisions. In fact, 45x45 is extravagant - by my reckoning you could flood SHA-1 with a simple 16 colors at 10x10, which gives you 16^100, about 2.6 * 10^120 pictures. (I encourage people to check my math - I've certainly got it wrong before.)
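Python's integers have arbitrary precision, so you can sanity-check that arithmetic directly instead of trusting a calculator's "infinity" - a quick sketch:

```python
# SHA-1 produces a 160-bit hash, so there are "only" 2^160 possible values
sha1_space = 2 ** 160

# Distinct 16-color images at Pixeltime's 45x45 size, and at a tiny 10x10
pixeltime_images = 16 ** (45 * 45)  # 16^2025
tiny_images = 16 ** (10 * 10)       # 16^100

print(f"SHA-1 space:  ~{float(sha1_space):.1e}")   # ~1.5e+48
print(f"10x10 images: ~{float(tiny_images):.1e}")  # ~2.6e+120
print(f"45x45 images: a {len(str(pixeltime_images))}-digit number")
```

Since there are far more images than hashes, the pigeonhole principle says collisions must exist somewhere in that space - even though actually constructing one is another matter entirely.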

So the SHA-1 hashspace is so much bigger than what humanity could conceivably generate, and yet the universe of everything - if you don't put many restrictions on the grammar of the "everything" you're generating - is so much larger than that.

I don't think our brains can even deal with a million, never mind billions or trillions. (My 6th grade math teacher had a book of a thousand pages with a thousand dots each, with certain amusing values labeled.) Hell, get a dollar's worth of pennies, lay 'em out in an uneven sprawl on a flat surface, and I'll bet you'll think it looks more like 40 cents.

Or, just watch this: