Saturday, January 9, 2016

on "How Apple is Giving Design a Bad Name"

A collaborator of mine wrote me:

Not sure if you saw this.
https://www.fastcodesign.com/3053406/how-apple-is-giving-design-a-bad-name

My response was as follows:

Interesting!

There are some parts I agree with, and some I'd go even further with -- for instance, rumors are strong that Apple will ditch the traditional headphone jack with this autumn's new phone release, ticking off many, many people, mostly in the name of making the phone slimmer by, like, a millimeter or two. (Now, Apple has led the way in ditching things before people realized they didn't need them - see the iMac without a floppy drive in 1998 - but this feels like one of the worst cases, because I use that jack in a number of different scenarios that will now need some kind of dongle.) I think the parallel with design leading the parade, and minimalism above everything, is pretty clear.

Of course, Apple is special because they're about the only ones who have really been doing both the hardware and the software, though Microsoft has been getting back to that lately. There's an integration there that the other makers generally can't match.

One instance of the Apple vs. Android design aesthetic: I heard one Android fan argue how much more sophisticated Android's home screens look, because the icons can be any shape, while Apple constrains icons to be little jewel-like rounded rectangles - while Apple fans might feel the opposite: that the constraints still allow artistic freedom of color and design while providing a sophisticated, consistent look. In some ways that's the Android vs. Apple thing in a nutshell.

I've read Bruce Tognazzini and Don Norman, the article's authors, for a while. Both are great, but I have issues with each of them.

Don Norman wrote the famous The Design of Everyday Things... in it he chastises objects that are overdesigned and "probably won lots of awards" but don't think enough about how the thing gets used in the real world. Ironically, that's a cobbler's-children-have-no-shoes scenario, because his book - which won lots of awards - was originally called The PSYCHOLOGY of Everyday Things -- a title he loved because of the well-designed abbreviation "POET" but which didn't think enough about actual use in the real world, like whether booksellers would know what section to shelve it in!

Bruce Tognazzini... my main issue with him is that he is an "if you can't meter it, it doesn't matter" guy. Maybe his best-known advocacy is for Fitts's Law: "The time to acquire a target is a function of the distance to and size of the target." So he advocates making buttons big and sticking them right in the corners of screens, so that the mouse can just be shoved there and the cursor-targeting time minimized. He leads what I think of as the "stopwatch brigade" of interaction design: the faster you can click something (as often demonstrated in some arbitrary, not terribly real-world testing mockup), the faster you're operating the machine, the happier you are, etc.
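(For the curious, here's a minimal sketch of the usual Shannon formulation of Fitts's Law in Python. The constants a and b are made up for illustration - in real studies they're fit to a particular device and user - and the larger "effective width" in the second call is just my rough stand-in for the way a screen edge or corner stops the cursor so you can't overshoot it.)

    import math

    def fitts_time(distance, width, a=0.0, b=0.1):
        # Shannon formulation: time = a + b * log2(distance / width + 1).
        # a and b are device/user constants; these values are illustrative only.
        return a + b * math.log2(distance / width + 1)

    # A small toolbar button 800 pixels away and 20 pixels wide:
    print(fitts_time(800, 20))    # ~0.54 (arbitrary units)

    # Same distance, but an edge/corner target the cursor can't overshoot,
    # so its effective width is much larger:
    print(fitts_time(800, 200))   # ~0.23 (arbitrary units)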

For instance, that Fitts's Law bit rips into Windows' tendency to give each window its own menu bar, vs. the Mac habit of putting a single menu bar at the top of the screen. I thought it was a wash: it might be easier to land the mouse right on a menu item, but it's also much easier to accidentally think you're dealing with one program's menu when the context has actually switched to another. Combined with the modern tendency toward multiple, extremely large monitors, his advice seems increasingly out of date - sometimes on a Mac there's just too much screen geography between the window with the content and the menu items that apply to it. (And in general, I think you see the more important stuff moved to toolbars anyway.)

What Tog doesn't talk about (as much) is how easy or hard it is for a user to keep a mental model of the system in his or her head. That's a concept that resists easy measurement, so it gets less play. I think it's critical, though, and something the minimalists get more right than he does. Actually, the Tog/Norman article does talk about "mental load," but again it oversimplifies, implying that every gesture must be independently memorized (rather than making sense in a larger physical metaphor) and that every gesture is as essential as every other one. Many good UIs allow basic interaction with onscreen components and reserve the gestures for "expert mode" stuff.

Discoverability is important, and I'm not a big fan of having a large library of gestures (mostly because I tend to trigger them accidentally all the time, and so find them violating The Tao of Programming: a program should always do that which startles the user the least). I'd say it's nice if the road to learning those expert gestures is embedded in the program, but not necessary.

I also feel this article implies that the same use cases and patterns apply regardless of the physical attributes of the device (and that all users are expert users) -- but laptops are used in very different ways than phones and tablets. Some principles apply across them, and while we might expect the two to keep drawing together - so you can get more work-work done on a touchscreen device, and a laptop picks up more of the fun and simplicity - there's still a big divide.
