Thursday, December 29, 2011

osx vs windows redux

Ranting on obscure blogs is a bit like tossing nickels into the Grand Canyon, but... hey, a nickel!

I sometimes jump the gun in criticizing Apple and OSX. Too often that comes from inexperience with OSX. In many cases, a serious criticism of "you can't do this on a Mac" (for example, easily copying the path information from an open Finder window into a "Save As" dialog) runs head-on into a "Mac just does it a different way, you Windows-muddled fool" (you can drag the little folder icon on top of the Finder window!) and gets diluted into vaguer criticisms about "UI emphasis" and "potential confusion" (the way that dragging and dropping that icon feels too much like a file system manipulation... and darn it, using string-based paths is sensible, if a bit nerdy.)

So once more into the breach...

One thing holding me back from switching to OSX for my work machine is a feeling that the keyboard support isn't up to snuff... specifically on Windows, nearly every text editor I use maps ctrl-left arrow and ctrl-right arrow to jumping words. Now there is some inconsistency to it: some editors think an underscore is a word break, others don't, some jump to the start of the next word immediately, others jump to the end of the current word first. But at least the standard is there, and I don't have to interrupt my typing flow with mouse movements, or play "press and hold the arrow key 'til the cursor finally gets there".

I thought this problem sprang from Apple's pro-mouse, anti-keyboard stance. The original Mac keyboard didn't even have arrow keys! See for yourself:

But of course, I was wrong. I was expecting that all the keys I was looking for would be mapped to cmd-, which is (roughly) the Mac equivalent of "ctrl". On OSX, however, the cmd-arrow keys jump either to the beginning/end of the line (left and right) or of the whole document (up and down). But the "option" key WAS mapped to what I wanted, with option-left/right jumping by words.

Arguably, Mac's use of these keys is more efficient and logical than the PC standards. There's an intuitive hook to how cmd-left/right goes to line endings, and cmd-up/down means the whole document. These keys can then play the role of home/end on PCs: home/end = line, ctrl-home/ctrl-end = whole document. Furthermore, home/end are two of the most wandering keys on laptop and compact PC keyboards: there has been no definitive consensus on where they should go, and sometimes they are mapped to special, laptop-only "fn-" combinations. So points to Apple for battening down the hatches on that.

So I was wrong. Mostly. But can I say Mac has too many of the wrong kind of keys? Look at this:

 "fn" "control" "option" and "command"! 4 different keys that mean roughly the same thing, "make the other key I'm pressing do something else". To make things worse, the little used key "fn" gets the most prime real estate-- the place where Fitt's Law implies the most important key should go. (To be fair, IBM/Lenovo Thinkpads make the same mistake, and it's even worse because a PC's ctrl key is much more important than the Mac's.)

Here's the same image from a typical Windows keyboard:

It's the same number of keys, but I think handled much more gracefully, with better differentiation. "ctrl" and "alt" are far away from each other. Plus, in Windows there's a stronger convention about which one is used when: ctrl- combinations tend to be "do something now": copy, paste, save, new, etc. Alt- combinations are mostly used to pull down menus. (Windows used to give that feature greater discoverability by underlining the accelerator letter on the menubar, e.g. File and Edit, but now (by default) those underlines are hidden until the user holds the alt key.)

The windows-key is special, in most senses of the word: usually it means a quick hop to the start menu (and I love the way they put the cursor in a search box there... windows-key, "program name fragment", return is a very quick way to start a program that is not frequently used enough to merit a pin on the task bar.) The Windows key also has a few obscure key combos that are all OS-wide, like win-m or win-d for hiding all windows and showing the desktop. (Though weirdly, win-d is reversible by hitting it again, and the older win-m is not.)

(And it still seems odd to me that Microsoft managed to get their logo on all that hardware by all those manufacturers...)

So wrapping up my arrow key rant, Mac feels a bit klutzy to me, and it's awkward to use and annoying to remember the distinction between "command" and "option" when going back and highlighting and copying words that I just typed. To make the whole scene worse, why isn't the Option key (formerly the Closed Apple key) labeled with its ⌥ icon? (And what kind of symbol is that, anyway?) To be fair, I guess the Option key has its uses, like typing letters with accents and the like, but still there are aspects to it that seem half-baked.

Rant over! I'm still on the fence about making the "switch" (in part because I worry about Apple being such a dominant monoculture of computing), but thinking through the UI/interface it provides makes the potential transition easier for me.

Thursday, December 22, 2011

bonus: fireworks cheatsheet notes

So the designers at my work use a lot of Adobe Fireworks. It's a cool program. Its raw format is .png, but a cranked-up .png that holds lots of stuff like layers, alpha transparency, and even vector information. For enginerds like me who tend to think in terms of pixels only (and maybe diving into crazy complexities like "layers") it's a lot to wrap our heads around... and watching a skilled artisan use it, and its ability to treat text and lines and parts as objects, even when they're not explicitly set on a different layer... it's eye-opening.

So, as notes to my future self, I thought I'd write out just what I had to go through to make up the basic white-rectangle-with-drop-shadow images... in particular, how to use Fireworks to slice it into the top, middle, and bottom pieces the final example used.

Here are those notes:


  • make a new image: ctrl-n, 500x500...
  • On the toolbar, set the stroke to null (the red-line X) and the fill to white
  • on the toolbar under vector, make the shape tool do rectangles, not ellipses
  • draw the shape, then hand-fix the W and H under Properties at the bottom (mostly the width)
  • Under Properties|Filters, Shadow and Glow | Drop Shadow
  • This pulls up some Filter properties. (By default the dropshadow falls to the southeast.)
    • so Filters is a list of effects added to this selection; you have to be careful not to make extra ones...
    • the arrows control distance, how far away the center of the shadow is from the center of the item... I want this at zero, centered, not off to one side
      • (which means I can ignore the angle for which direction it should go)
    • I set the fuzziness bigger than 4, like 10 or so
    • you don't see changes applied 'til you close the filter mini-properties box
  • Make sure the canvas itself has the null background selected (indicated with the red line) so you can see the gray and white (transparency) boxes
  • Here I could select the rectangle with the selection tool, ctrl-c copy, ctrl-n (which autodefaults to the size of the thing in the clipboard), paste, then save. But of course I wanted my 3 slices...
  • now we need to use the slicing tool. You draw bounding boxes (including the dropshadow) and rely on the AI thing to find the right edges for you... I drew one each for the top, bottom, and middle. On my first attempt I tried to close-crop the top and bottom without interior padding, but that messed up the shading of the corners, and I had to redo it.
  • Then I went to Export... the settings are QUITE fiddly; what the helpful designer helped me figure out I wanted was "Export: Images Only", "HTML: [None]", "Slices: Export Slices", "Pages: Current Pages", and "Include Areas without Slices" left unchecked
And that was it!

Sometimes I despair of ever deeply learning Adobe stuff. It seems like such a different, non-engineery, don't-try-to-understand-everything-we're-doing / intuitive system... or maybe the designers are just more used to it. (GIMP, in contrast, is pure-engineer, we-don't-give-you-any-convenient-defaults, no matter how logical a default might be.)

ie dropshadows with scale9grid... or not

(This is another one of those "see how the sausages get made", tales-from-the-trenches posts, vs a bright shining tale of victory and Zen-like CSS perfection.)

So, dropshadows. Designers love 'em! And why not, they bring an illusion of depth to the page and help set things apart. But man, they are a pain in the butt to code sometimes... at least they are if you want to support IE. (I guess IE9 is starting to support this as well.)

First, a note: the drop shadows I'm aiming for are kind of a fuzzy border all around the div, vs the kind hanging out on one corner (usually the lower-right.)

Anyway, the CSS is still in that funky "each browser uses a prefix" state, so the code is like this:

    box-shadow: 0px 0px 15px rgba(0, 0, 0, 0.80);
    -moz-box-shadow: 0px 0px 15px rgba(0, 0, 0, 0.80);
    -webkit-box-shadow: 0px 0px 15px rgba(0, 0, 0, 0.80);
Not too bad, the result is like:


Hi there!

But of course that does nothing for IE <= 8. I fiddled around with some of IE's filters,

filter:progid:DXImageTransform.Microsoft.dropshadow(OffX=0, OffY=0, Color='gray', Positive='true');
filter: progid:DXImageTransform.Microsoft.Shadow(color='#969696', Direction=135, Strength=3);
but nothing was coming out right: either it made everything blurry, or it was an old Win 3.11-looking flat gray background, or the content of the div was rendering weirdly non-anti-aliased... we ended up going with a variation of this, and for a while letting IE users just hang with a thin gray 1px border.

But I remembered this past spring when one of my co-UI guys was really excited about a plugin called scale9grid. It's a pretty sweet way of styling a div (as long as you don't mind depending on javascript and non-semantic CSS and images to do so) that can be any width or height, based on using an image as a model for the borders (an image that presumably has a flat-colored center section where the content actually goes.) So you can have arbitrarily weird corners and sides, and this plugin takes care of all the busy work.

So in Fireworks (more on that in a bit) I styled up a nice alpha-channel (transparency) preserving PNG that is shadowboxer.png:

This can be used to make a box of any size. The code for this is pretty simple: in the CSS you need to set the background-image (something the plugin's page is a little slow to mention), the dimensions (I think), and some padding, so that your content isn't over the shadow itself:


  background-image:url("/shadowboxer.png");
  padding:12px;

Then in jQuery, probably in the $(document).ready() you tell it how much of the sides of the original image you want to respect and use as the border:
$('#element_id').scale9Grid({top:12,bottom:12,left:12,right:12});

Hi there again!

So, that's kinda nice. The trouble is, the panel we wanted it for was all animated. The width was fixed but the height was changing, and that goes beyond what scale9grid was designed for. It didn't change when the child content did, and you couldn't just reapply it once the animation was done.

So to let IE users back in on the fun, I decided to go with the old, slightly-gruesome, tried-and-mostly-true technique for fixed width things with funky borders: a special graphic-only div on top, the main content div with a repeating background, and a graphic-only div on the bottom. Sigh.

The images I made up in Fireworks, and most of the other external files for this blog entry, are at https://kirk.is/m/files/kirkdev/shadowbox/. I imaginatively called them "top" "mid" and "bot", and they can be used like this:


Are we having fun yet?
The CSS then is something like:


.top_box{
  width:398px;
  height:14px;
  background-image:url("top.png");
}
.mid_box{
  width:398px;
  height:218px;
  padding:10px;
  background-image:url("mid.png");
  background-repeat:repeat-y;
}
.bot_box{
  width:398px;
  height:13px;
  background-image:url("bot.png");
}
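And the markup is just three stacked divs -- a rough sketch, with the class names from the CSS above and placeholder content inside:

<div class="top_box"></div>
<div class="mid_box">
  Are we having fun yet?
</div>
<div class="bot_box"></div>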


What can I say, it works, and animates fine. One irritation was I first tried to cut the top and the bottom with no interior white padding, but that messed up the fading on the corners.

It's depressing to get back to such a mundane solution, but at least I've reawakened my awareness of the potential of scale9grid, one more weapon in my UI Engineer arsenal.

Thursday, December 15, 2011

image magick and text macros

After developing professionally for 15 years, I can appreciate when it's time to apply a quick and dirty solution.

Sometimes it's my only option, a way of covering gaps in my knowledge-- for instance, I've never really had the chance to learn any Adobe product deeply, whether it's Flash, Photoshop, or Fireworks. And I've stuck with manipulating tab-delimited files in Perl where more normal people might cozy up to Excel.

So at http://alleyoop.com/ we had a widget that would display sample math problems for various subtopics. Each image was a 380x136 png, but most had plenty of whitespace:
We decided we didn't want so much whitespace... but ideally we wouldn't have to have a design person resize all 204 images. What to do?

Enter ImageMagick. This software has been around for a long while I think. A while back I used its Perl module as part of my image upload feature for my personal blog, so I could crop out extra white space on doodles and resize images to be more web-friendly on the server, rather than doing it locally and then reloading. And there's a handy Windows client.

The syntax is a little strange, but for trimming the file "Volume.png" I'd just do
convert Volume.png -trim Volume.png
I chose the same file name for the output as the input. (Weirdly, if I didn't specify the destination filename, it would give me a file named "-trim".)

So far, so good. But what's the easiest way to apply this command to all 204 files?

I'm on Windows, and haven't bothered with CygWin, so I've learned a few coping tricks... like how "dir /b" gives you a "bare" directory listing of just the filenames. I ran the following command:
dir /b *.png > trim.bat
So now I have trim.bat, a flat file of filenames. Then I pulled up trim.bat in an editor* and recorded a macro that would:
  • home to the start of the line and start the macro recording
  • shift-select to the end of the line
  • ctrl-x to cut the filename
  • type 'convert "'
  • ctrl-v to paste the filename
  • type '" -trim "'
  • ctrl-v to paste the filename
  • type '"'
  • down arrow to the next line, then home to jump to the start of it
  • end macro
Then just hold down the macro replay key, and BAM, I have my ready-to-run batch file. Run it and my job here is done. (Macros are great: super fast to make, with almost nothing to learn; just sometimes you have to think about the general case of what you're trying to do. Every developer should intimately know at least one editor with one-button macro playback, as well as a small set of text manipulation tools, like either Perl or awk and sed.)
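For the record, the generated trim.bat ends up as a pile of lines like these (filenames besides Volume.png are made up for illustration):

convert "Volume.png" -trim "Volume.png"
convert "Area.png" -trim "Area.png"
convert "Circumference.png" -trim "Circumference.png"

...and so on for all 204 files.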

* on editors... lately I've been using Komodo Edit, despite its surprisingly long loading times. I kept my ancient copy of TextPad handy though, in part because its macro recording and playback was rock-solid and I sometimes had issues in Komodo Edit. But now it looks like Notepad++ is what the cool kids are using, at least the ones not "cool" enough to be developing on a Mac. Can I say, though, that given most Mac laptops' half-assed keyboard support (the lack of equivalents, or at least general obscurity, of simple keystrokes for Home, End, and ctrl-arrow to jump words) I find this kind of thing much easier to do on Windows, and it's yet another reason I'm unlikely to switch, at least for my work stuff...



Monday, December 12, 2011

animation nation part 2: introduction to raphael.js

So the other day I was talking about making an animation for my company's registration process.

I quickly put together an animation proof-of-concept in ProcessingJS. I didn't go too far with it in part because IE is still in our target browsers, and ProcessingJS relies on the canvas object, so we couldn't use the result on the site.

My thoughts turned to Raphaël in part because I knew it would work with IE. Superficially, Raphael is very similar to canvas-based APIs (in fact I thought it used a "shim" to let older versions of IE do canvas things) but really it's profoundly different... rather than a canvas of pixels, Raphael deals with SVG (Scalable Vector Graphics) vector objects. These objects are great because they deal with twisty lines and curves rather than squares, and you can scale them to any size without them getting pixel-y.

Vector graphics require a different mindset than "regular". Specifically, Raphael is object oriented and once you've created your "paper" you add shapes and lines to it. Those shapes and lines then have attributes, similar to CSS attributes, that you can modify. The examples on the Raphael homepage are impressive, and it's easy to make juicy effects, especially in terms of bouncy scaling and rotation.

I wasn't crazy about the homepage's boilerplate for two reasons. One is that I prefer something I can copy and paste into a blank document and hit the ground running. Here's some code that let me do that (once I pulled down a local copy of raphael-min.js):
<head>
<script src="raphael-min.js"></script>
<script>
window.onload = function(){
  var paper = Raphael("thepaper", 320, 200);
  var circle = paper.circle(50, 40, 10);
  circle.attr("fill", "#f00");
  circle.attr("stroke", "#fff");
}
</script>
</head>
<body>
<div id="thepaper"></div>
</body>

The second improvement I made was to use a named div as the canvas, rather than ask Raphael to provide the div and position it absolutely... I think it's more common that people will be embedding a bit of Raphael in a larger webpage, so an id'd div seems the way to go for that. However, that meant I couldn't let the Raphael commands just run directly in the script tags; I had to wrap them in a window.onload function. (Obviously $(document).ready() is an even better bet if you're already using jQuery; I just wanted to remove the dependency.)

The Raphael documentation is a reference, not a tutorial, so you have to poke around... it probably helps if you know some of the basics of SVG. 

In Processing, I had made up a sine wave as a series of short line segments. In Raphael, I figured I'd want a Path: a path is a series of line and curve segments. Entertainingly, you define a path via a string of text commands. It reminded me of drawing with the language Logo and its turtle graphics in my youth.

So to see a Path in action, you can add this line to the boilerplate, which will draw a rotated chunky path.
var p = paper.path("M50,50 L100,50 L100,100 L150,100 L150,50").rotate(45,50,50);
Some things to note:
  • Raphael uses the same kind of command chaining as jQuery, so I added the rotate command to the end. 
  • You can use commas or spaces as delimiters between commands and the arguments.
  • I used capital letters, so all the coordinates were absolute... if I had used lowercase, they would be relative to where we last drew.
  • By default angles are in degrees (i.e. 360 = circle). 
  • If I didn't specify the coordinates to rotate around, the Path would have rotated around its own center.
  • Raphael uses "painter order" rather than Z-indexing... the last thing you add to the Paper is what appears at front.
Once I was here, it still wasn't clear how to go forward in terms of making the animation I wanted. There's a pretty extensive set of animation features, but I couldn't figure out if there was a way to make the minor adjustments I needed to the Path. (In fact, it turns out I can't even rotate a Path as an animation! Hmm.) So while I thought it would be "more Raphaelish" to make the objects once and then animate their attributes, it turned out I was going to have to do it similarly to the Processing version, and constantly create and destroy my path objects each tick.
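The shape of that approach is roughly this -- a sketch only, with a made-up angle step and the chunky path from above standing in for the real squiggle math:

var paper = Raphael("thepaper", 320, 200);
var squiggle = null;
var angle = 0;
setInterval(function(){
  if(squiggle){ squiggle.remove(); }  // throw away last tick's path
  angle += 5;
  // rebuild the path from scratch for this frame
  squiggle = paper.path("M50,50 L100,50 L100,100 L150,100 L150,50")
                  .rotate(angle, 50, 50);
}, 50);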

To cut to the chase, here's what I ended up with; click "kick" to see it with some inertia, or "spin" for a constant rotation:
You can go to the webpage to view the source though it's a bit hacky and ugly...
So comparing that animation to the original graphic:
It's not 100%, but with the tweaking I did it's not bad. Our art guy was very pleased with the result.
As the animation works, you can see where I had to add in additional "crossbeams" - the 2D original didn't have to worry about that. Also I couldn't quite figure out how to duplicate the "over under" cheat in the original, where they put a break in the underlying squiggle, so working with our art guy we decided to color one of the squiggles a bit darker.

The thing is, the animation flickered a bit on IE, and even on Firefox the constant motion was pushing the CPU to around 7-10%. I decided to punt-- I added a step button to move the animation one frame at a time, and then I laboriously constructed an animated GIF:

Neat! Not quite as 3D as the Processing version, alas. (And since I was "cheating" and making a GIF anyway, I could have stuck with Processing... in fact I might go and try that, since it's more powerful and it's easier to push out a series of TIFs, rather than relying on screenshots like I did.)

Finally, we were looking for the animation to run for a bit, then stop, then restart. I made the GIF animate once, and then used jQuery like this when we wanted to see it spin:

//first set up an offscreen image with the animated GIF:
this.dnaImg =  new Image();
this.dnaImg.src = '/components/onboarding/images/DNAIconAnimate.gif';
//later when we want it to animate, (re)set the 
//src of the onscreen image to the offscreen one:
$('.dnago').attr('src',this.dnaImg.src);

Easy-peasy, and it added a lot of visual pizazz to that part of the site.

Wednesday, December 7, 2011

animation nation part 1: processing.js


At work we thought it might be cool if we could jazz up the following bit of pseudo DNA (using Alleyoop's colors as the crossbars) that we are using during our signup process.

I wondered if we could get a nice little 3D-ish effect by treating the helix strands as sine curves and then animating them by increasing the angle. (Hey, remember the intro to Superman II? Around 9 seconds in, the awesome spinning prison rings are actually 2 rings permanently welded together and then rotated as a single unit... an awesome, economical visual effect.)

My go-to language for this kind of thing is Processing. Processing is a little Java IDE and API that makes making applets really easy, and lets me leverage my 10-odd years of Java experience in a way that works on most any browser, to make gamejam games and my own toys. (With applet support waning over the years, I'm happy to see stuff like Minecraft exercising Java as a viable game platform.)

An applet would be overkill for the task (not to mention raising the spectre of consent-y and plugin warnings) so I turned to Processing's little brother Processing.js. It's an HTML5, javascript/canvas based version of Processing. It has 2 modes: one where it can (try to) run the exact same ".pde" files as the Java version, and another where it acts as a high-powered API to the canvas object for more traditional javascript code. I knew Processing.js might not be acceptable for use on our actual site, since it depends on the canvas object that is only now getting support in IE, but I decided to give it a whirl anyway.

The pde "run the Java code" mode is the preferred one, but for my money it's not ready for prime time. It is essentially using a preprocessor to translate Java code into javascript, and the results aren't always pretty, especially for stuff involving classes and collections of mixed object types. The error messages are often extremely opaque or absent altogether.

Despite the problems, Processing.js is still a lot of fun, especially with one of the in-browser IDEs like sketchpad.cc. You can type code and almost instantly see results, part of the charm of both versions of Processing.

Nearly every Processing program has two main parts: setup(), where one time activities are performed, and draw() which is called every frame. Most Processing programs (at least the ones I've written) clear the background every frame and draw the entire frame from scratch, but some just keep drawing on the same canvas.

Here's what I came up with. Apologies for the code that follows -- it's mostly hacking-around proof-of-concept stuff, and there are a lot of "magic numbers" I tweaked to make it look good... not exactly like the model, but enough to see that a spinning 3D effect could emerge from a sine wave fragment placed against a flipped version of itself, with its starting angle constantly increased. Essentially the variable x1 runs through the horizontal values, x2 is the next value (for drawing line segments), and we get the two y values by running the sin() function. And then at certain x1 values, we draw the cross pieces.


void setup() {
    size(200, 200);
    smooth();
    frameRate(30);
    strokeWeight(10);
}

float off = 0; // starting angle offset, bumped each frame to "spin" the helix

void draw() {
    background(255);
    pushMatrix();
    translate(120, 0);
    rotate(3.14 / 4); // tilt the whole drawing 45 degrees

    off += .1;
    for (float x1 = 0; x1 < 120; x1++) {
        float x2 = x1 + 1;
        float a1 = off + x1 / 40;
        float a2 = off + x2 / 40;
        float y1 = 20 * sin(a1);
        float y2 = 20 * sin(a2);

        // every 31 pixels, draw the red cross-pieces from the center line out to each strand
        if ((x1 + 1) % 31 == 0) {
            stroke(255, 0, 0);
            line(x1, 100, x2, y2 + 100);
            line(x1, 100, x2, -y2 + 100);
        }
        // the two strands: the sine wave and its mirror image
        stroke(128);
        line(x1, y1 + 100, x2, y2 + 100);
        line(x1, -y1 + 100, x2, -y2 + 100);
    }
    popMatrix();
}


Here's the result (IE users are out of luck, in this way and in so many others...)


Not half bad if I do say so myself! The effect was very sensitive to little tweaks.

Next up: the same idea in Raphael.js... a very different tool that works on all the major browsers.

Monday, December 5, 2011

worthy read

Ask Tog about Search vs Browser on iOS. I need to figure out how to push myself ever more firmly into the "HCI expert" camp.

I have an ancient copy of "Tog on Interface" on my bookshelf, I should dust it off and give it a browse. (Or a search.)

Friday, December 2, 2011

javadvent 2011 and processing

So, some years I make an Advent Calendar, with a different unique miniature videogame or virtual toy for every day of December up 'til Christmas... for this year it's at http://kirk.is/java/advent2011/

It's a fun exercise in UI design, trying to come up with 25 fun but fairly intuitive interactions. (I still provide instructions, but I kind of wish I didn't have to.) Also, the style of Advent Calendars, with a small distinct treat every day, makes a kind of delightful "interface". The calendar itself is basically a copy and paste of the 2009 Edition (except I made it so only one entry is visible at a time, since multiple open instances tend to bog down machines.) It's HTML and CSS and Perl from when I didn't know any jQuery, and it kind of shows, though I think it gets the job done, and is fairly sophisticated at making it difficult to peek ahead.

The entries themselves are written in a Java IDE/library called Processing. I love keeping my Java chops maintained a bit, and these little bite-sized morsels were perfect to code on the subway, to and from work. Unfortunately, Java applet support seems to get shakier and shakier as time goes on...

There is a javascript version called processing.js which in theory runs the same programs, is all HTML5-y, and works natively in browsers. It has some awesome things like in-browser IDEs (http://sketchpad.cc/ was my favorite). There are some drawbacks though: it doesn't support IE (at least before 9? Not sure) and it's far from 100% code compatible with the Java version.

But Processing is a fun language, especially for quick and impressive prototyping. I also use it for the 2-hour "Klik of the Month" game jams at Glorious Trainwrecks, and even for the longer 48-hour "Global Game Jams". You can see some of both at my portfolio site alienbill.com.

Thursday, December 1, 2011

preload your images


Sometimes when building a dynamic site, you might want to load some images after the page has loaded but before the images are shown, so that when they do show they appear instantly.

I forget where I found this paradigm, probably somewhere on stackoverflow, but it seemed to work a treat:
function preload(sources, prefix)
{
  var images = [];
  for (var i = 0; i < sources.length; ++i) {
    images[i] = new Image();
    var src = sources[i];
    if (prefix != undefined) {
      src = prefix + src;
    }
    images[i].src = src;
  }
}

(Ha, I can tell it's not all my code, I never do "++i".) Sources is an array of image names, prefix is an optional parameter that might contain the URL for the directory where the images are located, so you don't have to include the path on each image name. Then when you actually show the image it should be ready to go, instantly.
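For instance, a call might look like this (image names and path are hypothetical):

preload(["logo.png", "badge_gold.png", "badge_silver.png"], "/images/");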

Wednesday, November 30, 2011

windows kudos, osx phoodos

So here's something from Windows that I think belongs in a UI Hall of Fame. It's the Location bar of the Windows 7 File Explorer:

It shows you the current folder location in a logical, breadcrumb manner. Hover over part of the current location:




BAM! It fades into a button, and you can click it and hop the folder's view to that location. That's brilliant! (Somewhat less brilliant because it's more obscure -- AKA I didn't realize it 'til just now: click the arrow to the right to hop to siblings of that folder.)

(UPDATE: OSX has a "Show Path Bar" option that brings up a similar breadcrumb-y view. You can even hop up to any of the parents, but you have to double click, which seems unfriendly to me.)

But the really great part is clicking on the location bar just outside of the visible path:




DOUBLE BAM! That is a text version of the path, pre-selected for your Ctrl-C copying convenience. Now, even if you're not using the dear old command line much, it's still hugely useful: in any standard file dialog, you can paste that into the filename, press return, and now you're ready to save your file in that location. (OSX sort of does something similar if you start typing with a slash, but its heart isn't really in it.)

So I don't know if that is a "poweruser" usecase, but I find it hugely intuitive and fast, fast, fast. Now, there are imperfections in this system of text versions of the path-- if you click for a file path when you happen to be doing a search, the result is an unreadable mess that you can't really usefully paste anywhere. But there are other nice touches to the widget that I'm not even going into here.

Speaking of standard file dialogs, here's a bit of crapness from OSX. Here's a file save as dialog.
So that "Where" interaction, showing a small list of common spaces you might want to Save To,  is something relatively new in UI land. Windows does it too, but the dropdown menu is used to control a larger folder view. So you can jump to one of the standard areas, and from there, say, create a new folder, or go up a level, or do the usual folder manipulations:


And so I figured that that functionality must be somewhere in the OSX file dialog, that I could save somewhere other than exactly one of the pre-ordained file locations... but where? How do I get to it?

Answer: that small triangle to the right of the filename. That changes the look of the dialog to this:

That's better... but why is it so hard to find, and why is it like that in the first place? I like thinking through the challenges UI designers must have faced, because sometimes you realize there's a complicating detail... Why isn't the more flexible view part of the dialog to begin with? I guess to simplify things, to stop people from getting overwhelmed with options, while satisfying an 80/20 (70/30?) rule about where a user wants to put something. So why is the arrow to the right of the filename, and not on the "Where" dropdown? (It is functionality related to the location, not the filename...) I guess because the "Where" dropdown changes form enough in the new mode that the arrow to put it back might get lost. But it seems unintuitive to me. (Meaning I had to go ask a more experienced OSX user where to find it.) Combine that with the way I've lost my ability to copy and paste a path here, and I have to say, I find the Windows experience superior.

(I know Mac fans stuck working on Windows probably miss the 3-column view of its folder viewer, along with the too-cool-by-half "Cover Flow", but for me, I'd rather have the ability to easily transfer path information around. UPDATE: OSX has this functionality, by dragging the icon at the top of a Finder window onto a dialog. Even given my lack of experience with OSX, this seems a little fiddly to me, since I might think I was trying to move the folder to whatever location the dialog was open to.)

Another OSX-ism I find irksome: the default Preview program is great for quick viewing, except it has no concept of "go to the next file in this folder". What's weird is that the basic functionality is in place: it does the right thing if you drag and drop a bunch of files onto it at once, or select the files, right click, and hit "Open with Preview.app". (Sorry, I was wrong: Preview doesn't support drag and drop.) I have had friends who are bigger fans of OSX say I was expecting the wrong thing, that file systems are arbitrary ways of holding a bunch of files anyway, that a more realistic usecase is just using iPhoto for all this stuff, but, whatever man. It's a detail I think they could have gotten better, and it makes my life on OSX less easy than my life with Windows.



Monday, November 28, 2011

why windows' taskbar beats osx' dock

OS preference is, generally, a subjective thing. With the gradual ascendancy of Apple underway, I'd like to take a second and analyze how the two OSes relate, and where I think Windows has done a consistently better job. This will reflect some of my personal preferences (and, possibly, the way my mind has been warped by 15-odd years of Windows-isms-- any analysis like this has to recognize that we tend to like what we find familiar) but I will point to some objective differences between the systems.

First, a random note on the screenshots: I'm one of those quirky people who puts the app controller on the side of the screen. I wrote up why on my blog about 5 years ago: there are a few advantages, first and foremost how it makes better use of the "widescreen" format laptops and monitors now sport (I mean, have you seen how short the screen is on an 11" MacBook Air?)

So to the left is the Windows taskbar and the right is the OSX dock... both have their graphical bells and whistles. Windows (at least as of 7) has some translucency going on - on my work machine it's really sophisticated, with an "active window" highlight color and some pretty light effects as the mouse passes over it. The Dock has that terrifically fun "magnify" effect, where the icons fluidly grow and shrink as you pass the mouse cursor over it.

The primary difference between these two bars is this: The Taskbar is Window based, the Dock is Application based. This is why I think the Windows approach is superior for a multitasking operating system: each window maps to a task, a bit of state I might want to return to, and the Taskbar offers a passive, unobtrusive reminder of each window and a way to get back to it. (Kind of a dynamic "todo list") With the Dock, each icon is an application. There is no direct jump to a given window, just that application. (OSX offers some other paradigms for getting back to where you were, more on that in a bit.)

Let's start with Windows. Every window gets its own place in the task bar... (before Windows 7, the "launch a new program" icons were either hidden behind the Start button, or later, on a little specialized "quick launch" piece of the taskbar. Nowadays you always have the option of "pinning" an application to the taskbar, which commingles the running and launching icons.) The difference between a window of a running program and an icon to launch a new instance is obvious: the former is a button with a caption, the latter is just an icon. (The disadvantage is launching a new window of a running program from the taskbar is a bit awkward; it's the first option on the right click menu. It's not that big of an issue though, because browsers and most document editors use File|New or Ctrl-N to open a new workspace. UPDATE: WorldMaker points out you can middle click on a running task button to pop open a new window.)

With the Dock, the visual difference between a running and launchable program is minimized: a small white dot. Click on the icon and its windows move to the front. Again, to me there's a huge difference between a running program (carrying state I want to be reminded of or get back to) and a launchable program (which is a clean slate) and so I find Windows' approach to be superior.

This App-centric approach carries through to the quick switching. With Windows, I hit alt-Tab and I see a sorted list of my windows. OSX has a similar function, but again I'm looking at placeholders for whole apps, not a window at a time, once I switch to the right app I still have to find the window I'm thinking of. I know there are different apps to adjust the OSX environment, but I think getting the defaults right is crucially important-- having to relearn how to get around a friend's computer running the same OS because they don't have the same "fix" installed stinks.

It could be argued that I'm doing it wrong, expecting OSX to act like Windows instead of adapting to what OSX offers. For a long while OSX has had exposé, a single button that temporarily resizes and repositions all of your windows (or makes snapshot thumbnails, based on how you think of it) so that they are all visible at once. OSX Lion's Mission Control furthers that paradigm. While I might be getting old and curmudgeonly, I don't like exposé as much as Windows' system, in part because it lacks the "quick bounce back" of alt-tab, where a quick tap of alt-tab brings me back to what I was last working on (Windows has a really good "most recently used" algorithm for tasks, an easy to miss but hugely important detail that Just Works.)

UPDATE: OSX does have cmd-` (very easy to find above cmd-tab, kudos for that choice) that cycles through open windows of the current application. But this interaction is application oriented; you can't gracefully leap back to what you were doing in a different application like you can with Windows' alt-tab.

(I guess I should say, Windows 7's defaults are a bit different than what I'm describing here... I set Windows to "Never combine" Taskbar buttons, and so I don't see the roughly exposé-like thumbnails Windows makes now.)

These differences harken back to one of the oldest differences between Windows and MacOS-- with Mac, there's one place on the whole screen where the "File" menus will appear: each app takes over that space. (It's kind of ironic that it took OSX so long to get fullscreen mode right.) This kind of points to the idea of Mac as "information appliance"-- the whole machine is dedicated to that app. On iOS I find this perfect and relaxing, but I find it irksome on a multitasking system. (I know people argue that there are advantages to the menubar-on-top paradigm of Mac, that you always know where to look, and your mouse can target the menu faster, but for me those bonuses are outweighed by having to figure out which menu bar applies to which window.)

I have some more thoughts on Windows vs OSX, but that can wait 'til next time...

Monday, November 21, 2011

room to write

Apologies for the less-than-fully-assed entry that follows:

I thought a pretty common task for jQuery would be to make a textarea automatically expand as content was added into it. And a bit of googling shows that a number of people have taken a stab at it. But it looks like no one has gotten it quite right, and I don't have enough time and energy to fix that.

The best I found is padolsey's jQuery.fn.autoResize. Once you include the .js file, usage is trivial:

<script>
$(document).ready(function() {
  $('textarea').autoResize();
});
</script>

There are some configuration options as well.

I'm not 100% enamored of it: if you set the value programmatically (like by calling $("textarea").val(...)), the thing is not resized, and while it has basic coverage for resizing when users paste in text, it gets a bit wonky.




Thursday, November 17, 2011

prettier json

Yesterday I talked about JSON, and the inspect() method I use. While that example actually did some nice indenting, sometimes I have a mass of untabbed JSON I'd like to visually inspect. The best tool I've found for that is http://json.parser.online.fr/ . Slightly confusingly it shows both the "String parse" and the "JS eval" versions of the string, but usually it doesn't matter which one you look at. The way it builds a treeview of the code is really nice.

Before finding that page I used to just Google for "json pretty print", which would lead me to Cerny.js' entry as well as curiousconcept's. Both are OK in a pinch; I think the former sometimes got confused by special characters in the content, even if they were properly escaped.
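(In a pinch, modern browsers can also do the indenting themselves; a quick sketch, assuming the blob is sitting in a string called uglyJson:

console.log(JSON.stringify(JSON.parse(uglyJson), null, "  "));

It's no treeview, but it gets you readable output without leaving the console.)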

Wednesday, November 16, 2011

loggin'

Yesterday I mentioned the importance of "log-based debugging". It's crucial to browser-based work, in part because alert()s and stepping through with the debugger mess up the timing of stuff, so they're not as much help for timing-related issues. I also think the act of putting in log statements forces a programmer to challenge their assumptions.

For reasons that are obscure to me, at work we call our main debug function "ccdebug". Its code is like this:

function ccdebug(s) {
  if (typeof console != "undefined" && typeof console.log != "undefined") {
    console.log(s);
  } 
}

Pretty simple! The check for console existing is necessary for preventing errors when the console isn't around...

I've been using that for just under 100% of my log statements... poking around, I realize we have equivalents for console.warn() and console.error(). In Firebug, the former has a nice yellow highlight, and the latter even includes a stack trace. So to improve my practice, I should probably start differentiating my informational and debugging messages from messages for more serious problems.

One other function I find useful, with both console.log() and good old alert():

function inspect(thing, indent){
  if (indent == undefined || indent == true) {
    return JSON.stringify(thing, undefined, " ");
  }
  return JSON.stringify(thing);
}
That's just a great way to see the contents of data structures and whatnot. The indenting makes it a lot more readable, so I made it the default.
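So a typical line in my code ends up looking something like this (the object here is just a made-up example):

ccdebug("got profile back: " + inspect({user: {grade: "GRADE10", firstName: "Kirk"}}));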

Tuesday, November 15, 2011

simplicity simplicity simplicity -- shouldn't that just be 'simplicity'?

The other day I was thinking of Richard Gabriel's classic The Rise of 'Worse is Better'. The article's context is a bit dated, but I think its core is still valid. (I also appreciate the vacillation within the article; these are issues of style and mood where there won't be a single correct answer.)

As the Wikipedia Page reiterates, the core components of "Worse is Better" are, in roughly descending order:
Simplicity
The design must be simple, both in implementation and interface. It is more important for the implementation to be simple than the interface. Simplicity is the most important consideration in a design.
Correctness
The design must be correct in all observable aspects. It is slightly better to be simple than correct.
Consistency
The design must not be overly inconsistent. Consistency can be sacrificed for simplicity in some cases, but it is better to drop those parts of the design that deal with less common circumstances than to introduce either complexity or inconsistency in the implementation.
Completeness
The design must cover as many important situations as is practical. All reasonably expected cases should be covered. Completeness can be sacrificed in favor of any other quality. In fact, completeness must be sacrificed whenever implementation simplicity is jeopardized. Consistency can be sacrificed to achieve completeness if simplicity is retained; especially worthless is consistency of interface.
The essay goes on to explain why these factors can be crucial to a technology really taking off. (Also it talks about the "New Jersey approach" vs the "MIT approach", terms I am including here for my future grepping needs.)

I find a rough analogy in my professional life as a UI Engineer when it comes to the selection of toolkits. I had a miserable time with "Wicket" and its bizarro blend of delicate-flower Java class/HTML pairs. (It was barely ok for light work, but god help you in having to learn that whole lifecycle flow if something went wrong or you wanted to make your own component.) These days I've been arguing for sticking with basic jQuery rather than taking on some heavier-duty toolset like YUI or jsMVC.

So I ask myself, am I just a stick-in-the-mud curmudgeon who hates to learn new stuff? That's at least partially true: I'm much more interested in what technology can empower me to build than in the tools themselves, so I'm only interested in new toolsets when they let me do something new or in a vastly easier way, not just something old in a slightly more concise way. But I think there's something deeper in my preference for light weight toolkits, more profound...

I've said "People and computers should be judged by what they do, not by what (you think) they are." No one cares how finely crafted the objects in your code are, they just want to know, does it do the job well, and have you set yourself up to do the next few, slightly different jobs well. When I'm debugging, it's because the system is DOING something wrong, and I need to correct that behavior. My tools are-- lo and behold-- a debugger! and thoughtful logging... (good old log-based debugging is a crucial tool. I say that's because it forces you to think about what your preconceptions are at each step of the way, and then show them in action. But there's a chance it might also be because the debugging tools I used in college were so awful and opaque. Also, debuggers mess with the timing of stuff.)

Anyway, when I'm debugging, in a debugger, if I'm mostly looking at other people's code in the stacktrace, my job is much, much tougher. Most toolkits are extremely configuration heavy. (Toolkits by their nature don't just solve YOUR problem, but a hundred other problems as well, so you have to specify lots of things.) It takes a fair amount of expertise and learning to set them up right, which costs time-- time you often don't have; the whole reason you're using the toolkit is for it to quickly solve your issue-- and when things break, there's often a huge gap between the misbehaving code and your error. The lighter the toolkit, the more likely the mistake is near where the debugger is showing the problem, the nearer the observed effect is to the code that caused it.

I hardly ever have super-tough-to-understand problems with jQuery. The few times the issue has shown up in "jquery.min.js" itself, it's been where I'm setting some property IE thinks should be read-only or some such. With other tools I've used, like Java RichFaces, the situation was much, much hairier. But with basic Javascript calling jQuery when needed, I can get to the problematic code easily.

Now this argument could be taken to an illogical extreme. I'm not saying you should code in assembly or anything like that... but your toolkits should be simple and reliable, and your widgets and libraries should be of the "do one (hopefully difficult) thing and do it well" variety.

(Of course, someone pointed out another reason to hate toolkits is they put a coating over the 90% of coding that is easy and pleasant and 'hey I built this!' and leave behind the 10% that is difficult anyway, and how you earn your pay... in fact they usually make that 10% significantly harder.)

Continuing the comparison to assembly, you still get people trying to spin Javascript as the assembly language of the web: the code that all browsers speak, and that shows up in your debugger, but not what you want to be coding in. This kind of thinking is weak, because the additional layers, while slick and clever (especially in the early baby tutorials), don't actually let you say anything new. A concise essay about that in terms of toolkits is Uriel Katz's Why JavaScript is NOT the new Assembly, and a longer one against CoffeeScript and other pre-processor type languages is blog.izs.me's "JavaScript is Not Web Assembly".

Friday, November 11, 2011

not so angry birds

Today, our UX guy posted a link to Why Angry Birds is so successful and popular: a cognitive teardown of the user experience. This is the bulk of my response.

Cool article, though I disagree with what it assumes makes Angry Birds so popular.
The author is remiss in not pointing out the obvious: some big chunk of Angry Birds' addictiveness is because the base activity is a fun thing: it is fun and pleasant to launch a slingshot at a building of blocks.

At [Alleyoop, my College Readiness webcompany], we aren’t based on a core activity that offers such a low learning curve / high satisfaction + feedback . I think at the best points of [our Math practice and quiz subcomponent], kids can get that extra Zing! of satisfaction of a problem mastered, but in most other ways it’s a reach for us.

So with the article, I agree with some bits (the importance of pleasant and detailed visual design, even the inclusion of extra details like the chattering birds) and disagree with others (iPad icon spacing provoking a sense of tantalizing mystery? Puhleeze)

I thought margie's comment was more insightful about the charm of this particular game (highlighting mine):
AB is a game for non-gamers. Gameplay is simple, the rewards are many and often, hence, continued play from all players. Any game that increases in difficulty and/or timing & speed too quickly I’ll drop out of […]. ~ AB is always the same: slingshot, birds, structures w/pigs.

I think the fact that players are frequently rewarded (it’s easy to pass a level, not so easy to 3 star a level) plays a huge part in why AB is so popular, especially with people who are not hardcore gamers.
Other commentators talk about how "quick retrying 'til I get it right" is a big aspect, maybe one we can learn from. Not punishing failure so much is a hallmark of modern gaming.

It’s funny too, the opening paragraphs:
Surprisingly, it is a rare client indeed who asks the opposing question: why is an interface so engaging that users cannot stop interacting with it?
The funny thing is, that’s not “engagement” so much as “low level addiction”. And frankly, popular games use some of the same pattern as drugs—an initial big rush (of success, in the case of games), a long haul of trying to recapture that high, and it being made harder to do so.

(It’s funny contrasting that with the addictiveness of say Farmville—there the addiction comes from a web of social obligations. The gameplay itself is decidedly NOT very fun in the way slingshots-at-buildings is, though it does carry a pleasant sense of “I Made This” construction. (I haven’t gotten into it either but I think Empires & Allies has those social and building aspects, along with an empowering “strong kid on the block” aspect in the fighting)

So I think [our company] would be well served if we could capture some of this addictiveness:
  • well balanced challenges with quick redo 
  • a carefully ramped increasing difficulty 
  • pleasant and juicy UI 
  • social obligations 
  • a sense of building 
The single biggest thing that Angry Birds and Farmville have that we don't is a real feeling and visual model of steady progress. (With Angry Birds it's getting through a series of levels, with the chance to go back and do better; with Farmville, a nicely expanded and built-up farm.) [Our former point system is] now a currency. Badges were nice, but were always more of a novelty than a core "gotta catch 'em all!" experience.

BONUS:
Some more thoughts that were a bit too specific about Angry Birds as a game to be relevant to my company:
I think the article missed out one of the best UI bits for the "try again" factor: you get a little dotted line showing you what your last trajectory was, thus enabling a higher degree of fine tuning. This little dotted line is more significant than a lot of the things the article discusses.

Another thought I had: if I was designing Angry Birds, I might try to give it a split view: a zoomed-in view of the current bird (allowing more fine control of the trajectory at the launcher, and then a fun closeup view of the structure being destroyed, or possibly a pan back to the smug pigs if you totally miss) and then a view of the entire playfield, visible at all times. 

The thing is, I'm not sure if this system (more complex in its display, but simpler in its control scheme than panning-and-zooming Angry Birds) would be more or less satisfying than the current scheme. But when thinking about what deliberate choices Rovio made (vs, say, the delays once the bird has hit the building, which I think owe more to giving the physics engine time to work things out than to a deliberate design decision), alternative solutions to the challenges they faced should be discussed.

Wednesday, November 9, 2011

jsonp for cross-site fun

So, today's fun:
We run a few different servers for testing: www is our live site, cert is for final testing, test is for preliminary testing, and we hack DNS a bit so that dev loops back to each developer's machine.


We have a single WordPress install that we use for our blog, as well as to store the content for certain activities on the site. My most recent project was to add a simple tag system that would change a WP slug like 
[lookup value="user.profile.firstName"] 
into a rest call to /rest/user/profile, substituting in the value of the named field. (I also added conditional tags; something like
[start_if value="user.grade" is="GRADE10"]Welcome to Tenth[end_if]
would display "Welcome to Tenth" only if the endpoint said that was their grade.


So when you just have the one WP install, there's an obvious small problem when you want to test before pushing live: while WP (which I keep accidentally calling "WordPerfect" instead of "WordPress", a habit that is catching on at work) can do its own preview, it's not part of the whole dev/test/cert/www cycle. We added a "src=http://dev.alleyoop.com" type parameter, telling the plugin where to pull its javascript guts and then rest calls from, but... d'ohh... the rest calls were failing, losing out to the browser's draconian cross-site (same-origin) rules.


The cool kids' solution to this is JSONP, JSON-with-Padding. (The wikipedia page for it is a decent introduction.) The server needs to be configured to recognize a jsonp request, and then it forms a response of the form "yourcallback(THE DATA)", which the browser pulls in via a script tag-- and script tags aren't beholden to the usual cross-site rules. (Which is kind of a headscratcher, given that running code from another site seems to be a lot more dangerous than just grabbing data, but sometimes you have to take what you can get.)


So, in practice, I made a version of our wrapper for getting JSON that added a dataType: "jsonp" parameter to the jQuery.ajax call. We then ran smack-dab into a kind-of-known Firefox issue (I'm not sure whether using a more up-to-date version of jQuery would have helped or not) where Firefox decides that the JSON, what with its hashes and colons and all, looks like it's trying to define labels (what is this, BASIC?) and spits up "invalid label"... our fix for that was in the middle tier; a known solution is that wrapping the data in an additional set of parentheses fixes it, so that's what we did.
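Stripped of the wrapper, the call looks roughly like this -- a sketch, with the success handler body left out and one of our rest endpoints as the example URL:

$.ajax({
  url: "http://dev.alleyoop.com/rest/user/profile",
  dataType: "jsonp",   // jQuery adds the callback parameter and the script tag for us
  success: function(data){
    // hand the profile data off to the [lookup] substitution code
  }
});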


There's of course a bit of a security concern when you're allowing shenanigans like this-- especially if we were showing content from one user to another user. We'll look for some ways to tighten that up.

mobile flash in the pan

So the news is making the rounds, Adobe to stop doing mobile browser Flash.

For some Apple fans, this is a giant validation of Apple's view that Flash was the wrong technology for portable devices. (And as some have tweeted ActionScript might still be viable for making standalone apps.)

At work, our designer's response was "Apple wins!" and our lean-startup-guru marketing guy responded "The Internet wins!". That, along with Gruber's tweet thinking 2015 would mark the end of desktop Flash players, got me thinking...

What's next?

The most common opponent for Flash is usually described as "HTML5" which seems a little funny to me-- technically, HTML5 is about the semantics of markup, more of a topic for wonks than anyone else.

In practice, then, it feels like there are two fronts for "not Flash", both powered by javascript. One, and this is what I'm making my living with now, is what used to be called "DHTML"-- pages full of divs and graphics and whatnot, providing a faster and more interactive experience than the static pages of yore.

The other is stuff that makes heavy use of the "canvas" object. A lot of my gaming buddies are way into this as a technology. It's pretty cool, but there are two giant challenges for it to overcome to take the place of Flash: one is that support for sound is still kind of iffy-- you really have to dig if you want something that is very cross-browser and can provide "real time", synchronized sound effects. The other is that support in IE (version 8 and earlier) tends to be lacking. (There are some libraries that use shims to do a good job of faking canvas support.)

I'm not sure what this means for me... Flash was on my "I should learn this" list, and now it's a little lower down.  (I took one 2 day intro class and it was frustrating because the class was geared at designers, and focused on the timeline, which STILL confuses me, as opposed to ActionScript, which felt very familiar when I helped a friend recode a few things.)

(UPDATE: Gruber also pointed to this CNET article that points out some things Flash still does better than its alternatives...)

Tuesday, November 8, 2011

scope-a-dope

This is one of those posts that I'm a little nervous about writing, because I worry I might look a bit bad, since it represents a small bit of "trial and error" coding along with a slightly iffy theoretical understanding of the problem. Still, in the interest of being useful to my future self and maybe showing other people they're in the same boat, I'll put the process here.

I was trying to cache some data asynchronously. I had "neededEndpoints", a hash of the endpoints I needed to hit, and I wanted to store each endpoint's data in another hash, call it "cache".

So here was my first attempt: (it didn't look so blatantly wrong when I coded it, I've stripped out stuff in the name of simplicity)

for (var endpoint in neededEndpoints) {
  jsonGet("/rest/" + endpoint, function (res) {
    cache[endpoint] = res;
  });
}

The trouble with this code is that javascript is function scoped, not block scoped like some other C-like languages: the anonymous callback closes over the variable "endpoint" itself, not its value at that iteration. The callbacks don't fire until the responses come back, well after the loop has finished, so they all see the last value of "endpoint", and that last value got used for all the cache storing.

I remembered in previous work we'd done here, we had a CreateDelegate function:
function CreateDelegate(scope, fn) {
    return function () {
        if (fn != undefined) {
            fn.apply(scope, arguments);
        }
    };
}
Usually we'd call it with "this" as the argument for scope, but I wasn't getting the results I expected. Googling I found this page that gave me the nudge I needed to come up with this:
for (var endpoint in this.neededEndpoints) {
  jsonGet("/rest/" + endpoint,
    CreateDelegate({"ep": endpoint}, function (res) {
      cache[this.ep] = res;
    })
  );
}
So what's going on? The mental model I've come up with: CreateDelegate is, in effect, making a snapshot. The {"ep":endpoint} object is built at the moment CreateDelegate is called-- with that iteration's value of endpoint-- and the function it returns runs the callback with that object as its "this". So it's the act of calling CreateDelegate, not the later callback, that captures the value, and the code inside gets exactly the context we give it.
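(For what it's worth, another way to skin this cat -- a sketch, not what we actually shipped -- is to wrap the loop body in an immediately-invoked function, so each callback gets its own copy of the endpoint name; jsonGet and cache are the same as in the snippets above:

for (var endpoint in neededEndpoints) {
  (function (ep) {                      // ep is a fresh variable for this iteration
    jsonGet("/rest/" + ep, function (res) {
      cache[ep] = res;
    });
  })(endpoint);
}

Same idea as CreateDelegate, really: the value gets captured at the moment the wrapper function is called.)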


thoughts on keeping the global namespace clean

So one well known issue with javascript is that it's a bit too easy to pollute the global namespace.

Personally I find this is a bigger issue in theory than in practice: choose reasonably distinct function and variable names, and you hardly ever have a problem.

Half the problem is that if you just start using a variable without a "var" declaration, it gets slapped into the global namespace. Douglas Crockford, among others, considers it a bit of misstep in the design of the language.

(The other half of the problem is the same "bootstrapping" problem any language faces: how do you tell the system where code execution starts, and where is the tree of objects and functions rooted? C and Java have their main(); in Javascript, you can start putting function declarations and even calls anywhere, though in practice there's a lot of $(document).ready(function(){}); )

So, you can go a little crazy with loaders and tools to control your namespace and keep your private variables and functions private etc., and also with CreateDelegate() functions to pass things to page elements and whatnot. This is overkill for most projects where you're more interested in the functioning of the site than in making a reusable toolkit. (YMMV.) A good compromise is "make one global variable per functional grouping". For example, today I was working on a "madlibs" controller (for eventual integration as a WordPress plugin). Part of the code for that was:
var ao_madlibs = new function(){
  this.neededEndpoints = {};
  this.doMadLibs = function(){
    //CODE GOES HERE
  }; //end doMadLibs
}; //end ao_madlibs


In practice I find this a good balance of control and ease of reading. One caveat: in the doMadLibs code I used jQuery's .each() function, and inside the anonymous function there, "this" had a different meaning, so I had to use the global reference ao_madlibs.
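Roughly what that caveat looks like in practice -- the .madlib selector and data-endpoint attribute here are hypothetical stand-ins for the real plugin markup:

this.doMadLibs = function(){
  $(".madlib").each(function(){
    // inside .each(), "this" is the current DOM element, not ao_madlibs,
    // so reach for the global reference to get back at our own state:
    ao_madlibs.neededEndpoints[$(this).attr("data-endpoint")] = true;
  });
}; //end doMadLibs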


I was looking at a bit of a coworker's code... he's more of an architect, and cribbed this from FB's api code:

if (!window.AO) {
    window.AO = {
        processElements : function(){
            //do something
        },

        someVariable: {}
    };
}
This code is using more of the object-literal (associative array) syntax, plus doing that check to make sure it's only defined once. It looks a little more "foreign" to me, but makes a good amount of sense if you're used to JS's object syntax. Pick your poison, I guess! Sometimes I do miss Java's class structure syntax...


The nice thing about using a global variable, too, is that you can really easily and concisely refer to your "functional grouping" in page elements and in other parts of your system, again without worrying about the ever-changing meaning of "this" and using tons of oddball CreateDelegate functions... I find KISS (Keep It Simple, Stupid) to be an important principle in making code I can read in the future, and in communicating my intent to other people looking at my code.