Thursday, February 27, 2014

super pretty toggle checkboxes in css

Unfortunately I couldn't jam enough CSS into Blogger/Blogspot to make this work inline in this blog entry, so I direct you to: http://alienbill.com/kirkdev/toggles.html and please "view source".

There you see 10 checkboxes.

The first is a boring checkbox: click on the box, a check appears.

The second adds the checkbox caption in a <label> tag. I give the checkbox an id and then use the for attribute of the label to tell it what checkbox it refers to. But now you can click on the caption, which makes a much bigger hit area, and is good UI practice. (It also makes things better for screen readers and other software, since the reader knows exactly what the caption refers to.)

The third actually simplifies things: by putting the checkbox inside the label, we get rid of the need to id the checkbox or use the for attribute. (I didn't realize before today that you could do this nesting.)
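As a quick sketch (the id and caption here are made up, not taken from the demo page), the two association styles look like:

```html
<!-- explicit association: the label's "for" matches the checkbox's id -->
<input type="checkbox" id="opt-a">
<label for="opt-a">Option A</label>

<!-- implicit association: the checkbox is nested inside the label -->
<label><input type="checkbox"> Option A</label>
```

Both give you the bigger click target; the nested form just saves you the bookkeeping of unique ids.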

The fourth just shows that the caption appearing to the right of the checkbox is merely a convention; either way, the whole labeled area is clickable.

Example Five has us putting an arbitrary div inside the label, and showing that it too is part of clickable space.

Example Six gets interesting again.  The CSS for the clickable div is:
.example6 input:checked~.show { 
    background-color:green;
}

The "~" tilde (the general sibling combinator) is less used than some selectors, and a bit odd. It matches "any sibling following this one"-- order is important! (And :checked is a pseudo-class that applies while the checkbox is checked.) Net-net: a sibling with class "show" that follows the checked input gets a green background, not red. We've visually reinforced the state set by the checkbox.
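For that selector to match, the markup has to keep the input and the "show" element as siblings, with the input first. The structure is presumably something like this (a sketch inferred from the demo, not copied from it):

```html
<label class="example6">
  <input type="checkbox">
  <div class="show">I go green when the box is checked</div>
</label>
```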

Example 7 shows that we can hide the checkbox itself. The code I'm borrowing from didn't use display:none, which I think might give a screen reader the wrong idea. Instead, it uses the purely visual property "opacity". It also changes the input to "position:absolute" so that it doesn't disrupt the flow of the other elements.
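So the hiding rule is roughly this (selector names assumed to match the demo's conventions):

```css
.example7 input {
    opacity: 0;          /* invisible, but still focusable and announced */
    position: absolute;  /* out of normal flow, so nothing else shifts */
}
```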

Example 8 makes the display into a proper toggle. I had to do a little layout fudging to get this to work (in part to ensure the sibling relationship was preserved for both the background "show" and the moving toggle... CSS doesn't really support "parent" selectors, so there's no such thing as the "uncle" selector I really wanted, which would have let the toggle be a child of the show).

Example 9 specifies a CSS "transition" for any changes to the left property (in a "cover all your bases for older browsers" sort of way):
.example9 .toggle {
    -webkit-transition: left 0.3s ease-out;
    -moz-transition: left 0.3s ease-out;
    -ms-transition: left 0.3s ease-out;
    -o-transition: left 0.3s ease-out;
    transition: left 0.3s ease-out;
}
I've talked about jQuery transitions before... CSS offers fewer options, but it's still nice to be able to make these effects in pure CSS.

Finally, 10 uses some rounded corners to make a slightly less edgy switch.

So what I found less familiar was the use of labels to create clever clickable areas, and the use of a sibling relation to make an alternate representation of checkbox state; I'm also slightly playing catch-up with CSS transitions. (Most of my time diving into HTML5 was spent where jQuery was readily at hand, but I absolutely see the appeal of keeping such gloss in the CSS and out of the JavaScript code.)

Monday, February 24, 2014

.03% more famous!

http://www.lostinmobile.com/ - heh, I've loved this little UK-based mobile/gadget blog for a while, and they promoted a longish comment I made (on a previous story about who is "more influential", Steve Jobs vs Bill Gates) into a top level story, reprinted here just because:
In some ways it seems unfair, because the jury is still out on Gates, but certainly his foundation is out to make some awesome change.
In terms of "computers to the masses"-- the thing is, maybe there's more a feel of inevitability about what he did? IBM decided to make a "Personal" computer, risking their golden goose of big hardware to make sure they didn't get left behind by home computers. (Which, come to think of it, primarily meant the Apple II.) Gates was savvy enough to catch that train with super clever licensing of someone else's DOS... but someone else would have done that if he hadn't, right? Similarly, it seems likely some form of Xerox -> Macintosh WIMP interface would have gained traction in the 90s on PCs even in a Gatesless world.
So looking at what Gates did, it was that clever licensing, where he could make money selling DOS to PC clone manufacturers... that was the world-changing bit, perhaps? This was all in the wake of the Great Video Game Crash of 1983, which provided a window for home computers to really take off. But the Apples and Commodores and Atari 8-bits (while running rings around PCs in terms of fun, graphics, and sound) lacked the gravitas of IBM for business. So it was a combination of the reputation of IBM, Gates' clever licensing, and good ol' free market competition on the hardware that pushed to make computers so ubiquitous.
But Jobs did more at the leading edge of technology -- all with a little (lot of) help from his friends. With Woz, the Apple II made the home computer happen. With Xerox, the Macintosh brought WIMP UI to the peoples. Jump forward 2 decades, and he made the next level of touch screen computing on ubiquitously connected devices occur. Jobs led Gates et al on all these things.
From the first world perspective then, Jobs without a doubt - if Gates hadn't existed, someone would have done most of the same stuff, but Jobs changed things with a personal vision and sense of design. (who knows, maybe a world where IBM clones hadn't strangled the market in the 80s and 90s, with a richer variety of products from Amiga and Atari and others, would have been cooler?) From a global perspective, the Gates Foundation will really help more people, with the focus on medicines and education. So is that "influential"? Maybe. Mostly it was one great idea, licensing the software so the hardware could have competition, that made him a ton of money, and that he then turned into helping people.
(Side note, it's interesting thinking of that summary and, say, the launch of Windows 95, and the INSANE amounts of testing of Win 3.1 software they did, and the hacks they put in place, to ensure that no one would have "well my program doesn't work on the new system" as an excuse not to upgrade. That was a consequence of "Microsoft on All Hardware". It's also important to remember how untouchably powerful Microsoft seemed in the late 90s, that they had enough cash to buy anyone who seemed like a threat. Luckily, they never saw the threat the Internet would be...)

Sunday, February 23, 2014

goto fail;

Several Apple products have a really dumb and nasty bug that might be what let the NSA do some "man in the middle" attacks. (The conspiracy-minded will think it was a deliberate plant.)

Wired had some great coverage of the bug, including the block in question.

As might be apparent on visual inspection, the duplicate "goto fail;" is the issue.

On my first reading of the code, though, that wasn't clear. "It's goto 'fail', what's the problem-- it'll just fail." But, of course, "fail" doesn't guarantee a "return fail;"; it returns whatever the error code is. And if that's still set to zero (which is the shorthand for "no error" in this context), that's what gets returned. So "fail" is more "clean up and return the error code", but that's hard to type as a label. (I tend to write really long variable names and constants, so CLEANUP_AND_RETURN_ERROR_CODE is not outside the realm of possibility were I writing it.)

This code is also shown as an example of the danger of not using curly braces in your conditionals. Personally I'm OK with skipping the curly braces if and only if the whole thing fits on one line and forms a kind of logical block. This is a bit of an idiosyncratic rule of mine, but I find
if(condition) doSomething();
fine, but
if(condition)
     doSomething();
bad, for the usual "what if you put in another line of code" etc, and situations like the goto fail.

I think the other reason I couldn't immediately spot the badness is the use of variable assignment and conditional test all at once. I'm more prone to assign to a named variable first:
err = someTest();
if(err != 0) return cleanUpForFailure(err);
is how I might do that sequence, which would also get away from the ugly goto. GOTO CONSIDERED HARMFUL, as they say.

So in short, the bad practices:

  • use of goto
  • not using braces for conditionals (when not on one line)
  • poor naming conventions
  • all in one assignment and compare
All in all, a nasty piece of work.

Monday, February 17, 2014

animation spinners and blame

Interesting finding from Facebook... iOS users tended to blame Facebook itself when presented with the scene on the left, and the device itself when presented with the one on the right.


The logic behind it seems easy to figure out: iOS uses something like the animation on the right under many circumstances (especially when powering up/down), while the left is more app-specific. Screenshot and more discussion available at mercury.io.

Saturday, February 8, 2014

joel on software and the quixotic nature of complete testing

UPDATE: I (re-)found some links that make great, thoughtful points about the topic of unit testing: Contrarian Software Development's "Unit Testing Sucks" (that site is also home to a great quote by Bob Walsh: "Test Driven Development is like grammar driven literature.") Writing Great Unit Tests: Best and Worst Practices is a more optimistic view about doing it right, while acknowledging the giant gulf of doing it wrong that must be avoided...

For a while now I've been trying to reconcile my skepticism about most automated testing with the importance many smart people (including hiring-type folks!) place on it. Some of the problem is that I've never seen it done really well, so at this point I lack the experience to find a way to those safe, refreshing waters between
  • the Scylla of tests that only exercise trivial functionality and 
  • the Charybdis of tests that blow up even with the most well-meaning of refactorings. 
In practice I've seen tests as a way of multiplying the workload of active development, and then teams stubbing them out or otherwise ignoring lots of "red" results because while the software is actually doing its job, the tests aren't being kept meaningfully aligned with that.

I keep on pondering... though I live in fear that I'll never know if I'm just rationalizing my own laziness and prejudice for "making stuff!" over "engineering", or if I can be confident enough in my own experience to really justify a stance that doesn't love unit tests.

Thinking about my own coding process: I break what I want the software to do into almost ridiculously small pieces and so have a super tight code-run-inspect-results loop... (I really hate situations where a whole aggregate of code is not working but I have no idea where my assumptions are incorrect. (Pretty much all debugging is finding out which of the assumptions you made in code form is wrong, and when.)) So as I code up a complex bit, I write a lot of "runner" code that exercises what I'm doing, so that I can get to a point where I'm looking at the results as quickly as possible. This might be where I part ways from the Flock: I view this code as scaffolding to be thrown away once the core is working, but the Unit Testing faithful would have me change it into a test that can be run at a later date. There are two challenges to that: one is that most of my scaffolding relies on my human judgment to see if it's right or wrong, and the other is that my scaffolding is designed for the halfway points of my completed code. Parlaying it into a test for the final result that gives a yay or a nay seems tough; doing so in a way that survives refactorings and also does a good job of evaluating the UI aspect (often a big part of what I'm working on) seems almost impossible.

Some of my skepticism, too, comes from the idea that... small bits of code tend to be trivial. It's only when they're working together that complexities arise, chaos gets raised, and assumptions get challenged. So I'm more of a fan of component testing. And it's great that you've made your code so modular that you can replace the database call with a mock object but... you know, when I think about the problems I've seen on products I've built, it's never stuff in these little subroutines or even the larger components. It's because the database on production has some old wonky data from 3 iterations ago that never got pushed to the DBs the programmers are using. It's because IE8 has this CSS quirk that IE9 doesn't. In other words, stuff that automation is just terrible at finding.

Two other things add to the challenge:
  1. A coder can only write tests for things he or she can imagine going wrong. But if they could imagine it going wrong, they would have coded around that anyway.
  2. It's very hard to get someone to find something they don't really want to find. (i.e. a bug in their code.) This is why I only put limited faith in coders doing their own testing, at least when it's time to get real.
So, this is where I am now. But I'm defensive about it, worried it makes me look like a hack... and to some extent I can be a hack, sometimes, but I'm also a hack who is good at writing robust, extensible, relatively easy to understand code. Anyway, to defend myself, I've sometimes paraphrased this idea that "any sufficiently powerful testing system is as complex (and so prone to failure!) as the system it's trying to test." But I may have just found the original source of this kind of thinking, or at least one of the two... it comes from part of a talk Joel Spolsky gave at Yale:

In fact what you'll see is that the hard-core geeks tend to give up on all kinds of useful measures of quality, and basically they get left with the only one they can prove mechanically, which is, does the program behave according to specification. And so we get a very narrow, geeky definition of quality: how closely does the program correspond to the spec. Does it produce the defined outputs given the defined inputs.
The problem, here, is very fundamental. In order to mechanically prove that a program corresponds to some spec, the spec itself needs to be extremely detailed. In fact the spec has to define everything about the program, otherwise, nothing can be proven automatically and mechanically. Now, if the spec does define everything about how the program is going to behave, then, lo and behold, it contains all the information necessary to generate the program! And now certain geeks go off to a very dark place where they start thinking about automatically compiling specs into programs, and they start to think that they've just invented a way to program computers without programming.
Now, this is the software engineering equivalent of a perpetual motion machine. It's one of those things that crackpots keep trying to do, no matter how much you tell them it could never work. If the spec defines precisely what a program will do, with enough detail that it can be used to generate the program itself, this just begs the question: how do you write the spec? Such a complete spec is just as hard to write as the underlying computer program, because just as many details have to be answered by spec writer as the programmer. To use terminology from information theory: the spec needs just as many bits of Shannon entropy as the computer program itself would have. Each bit of entropy is a decision taken by the spec-writer or the programmer.
So, the bottom line is that if there really were a mechanical way to prove things about the correctness of a program, all you'd be able to prove is whether that program is identical to some other program that must contain the same amount of entropy as the first program, otherwise some of the behaviors are going to be undefined, and thus unproven. So now the spec writing is just as hard as writing a program, and all you've done is moved one problem from over here to over there, and accomplished nothing whatsoever.
This seems like a kind of brutal example, but nonetheless, this search for the holy grail of program quality is leading a lot of people to a lot of dead ends.

I would really appreciate feedback from veterans of the testing wars here, either people who see where I'm coming from, or who vehemently disagree with me, or, best yet, who see where I'm coming from but can guide me to the Promised Land where testing feels like a good way of knowing code is working and of communicating with other developers, rather than an endless burden of writing everything twice and still having the thing flop in production.

InOneFolderNotTheOther.pl

Found this Perl script on a laptop I'm prepping to hand over... I remember hacking it together, a simple "which files are in one folder but not the other", kind of a half-baked folder sync.

#!/usr/bin/perl

$dir1 = $ARGV[0];
$dir2 = $ARGV[1];


@a = getFilesInDir($dir1);
@b = getFilesInDir($dir2);

print "In $dir1 ::\n";
foreach $thing (findElementsInANotB(\@a, \@b)){
    print "$thing\n";
}

print "--\n";

@a = getFilesInDir($dir2);
@b = getFilesInDir($dir1);

print "In $dir2 ::\n";
foreach $thing (findElementsInANotB(\@a, \@b)){
    print "$thing\n";
}


exit;



sub getFilesInDir {
    my($dirname) = @_;
    my $file;
    my @files;
    opendir(DIR, $dirname) or die "can't open $dirname\n";
    while(defined($file = readdir(DIR))) {
        if($file ne "." && $file ne ".."){
            push @files, $file;
        }
    }
    closedir(DIR);
    return @files;
}




sub findElementsInBoth {
    my($refA, $refB) = @_;

    my @a = @$refA;
    my @b = @$refB;
    my %hashB = ();
    my @result = ();
    my $thing;

    foreach $thing (@b){
        $hashB{$thing} = 1;
    }
    foreach $thing (@a){
        if(defined($hashB{$thing})){
            push @result, $thing;
        }
    }
    return @result;
}

sub findElementsInANotB {
    my($refA, $refB) = @_;

    my @a = @$refA;
    my @b = @$refB;
    my %hashB = ();
    my @result = ();
    my $thing;

    foreach $thing (@b){
        $hashB{$thing} = 1;
    }
    foreach $thing (@a){
        if(! defined($hashB{$thing})){
            push @result, $thing;
        }
    }
    return @result;
}

Friday, February 7, 2014

keeping your place in the unix shell

Last year I wrote about some OSX Terminal tips. One was putting the full path in the bash prompt... besides my personal preference for seeing the whole thing, it means if I scroll back up in the window, I can always get back to that location via copy and paste.

There's another older trick for keeping your place when navigating folders via the command line: instead of using "cd NEWLOCATION", try "pushd NEWLOCATION". This puts a virtual bookmark at where you were before the directory change. (Actually it's more akin to sticking your thumb at the location, since it's pretty ephemeral.) Use "popd" to get back to that spot. As an added bonus, it exercises thinking in terms of a stack.

Also, I just yesterday noticed that OSX's filesystem is case-insensitive, despite being case-preserving. I'm not sure how I feel about that...

Saturday, February 1, 2014