Tuesday, February 26, 2013

reverse proxies: put my thing down, flip it and reverse it

The other day I realized the web page I made to test my mockup RESTful server wouldn't allow me to make the same requests against the real server, because of the browser's cross-origin restrictions (the same-origin policy). I figured I had 4 options:
  1. Some kind of JSONP Solution, which probably wasn't going to happen, since it would require changing the server.
  2. Putting the testing web page on that server, which was also a bit unlikely.
  3. Forget testing from webpage, use curl instead.
  4. Some kind of proxy, so the webpage could hit the remote server while thinking it was talking to the same server and port it was served off of.
So 4 was the most promising solution. It turns out this is called a "reverse proxy"- that means not only is the request to a server passed on to a different server, but when the response comes back, headers are munged so that the response looks like it came from that "server in the middle" (or else the web browser would get balky.)

There is a node.js reverse proxy system available, but to be honest I'm not yet fluent enough in node to be confident I could get it working in a timely way.

A better solution seemed to be using the Apache webserver that comes built in (but not on by default) with Macs.

Even though I wasn't setting up the proxy to talk to a Tomcat server, this page had one of the more concise descriptions of how to set up the reverse proxy. The only problem I had with it was I had to use more wildcards for my service to work...

(A note, I had to use "sudo" for a lot of these editing and server starting operations, since I was using protected files and ports.)

Anyway, on a mac the main config file for Apache is /etc/apache2/httpd.conf . The first few lines of code listed (the LoadModule) on that Tomcat page were already taken care of, and I added the following near the end:

# mod_proxy setup.
ProxyRequests Off
# (the host and port here are stand-ins; point them at the real server)
ProxyPass /myapp/ http://remotehost:8080/myapp/
ProxyPassReverse /myapp/ http://remotehost:8080/myapp/

<Location "/myapp/*">
  # Configurations specific to this location. Add what you need.
  # For instance, you can add mod_proxy_html directives to fix
  # links in the HTML code. See link at end of this page about using 
  # mod_proxy_html.

  # Allow access to this proxied URL location for everyone. 
  Order allow,deny
  Allow from all
</Location>

So I had to specify the IP address and port and add in my own app name; the Location then needed to be a wildcard so the actual calls would work.

The document root for the default OSX Apache server was /Library/WebServer/Documents/  (and I had to change some permissions so I could easily work in that folder.) That's where I put my little web client page.

Finally, the best way of starting/restarting/stopping the server was apachectl, e.g.
sudo apachectl start
BAM! Proxy Reversed!

For what it's worth, "reverse proxy" seems like a misleading name. To me it feels more like a "masked proxy" -- still passing requests on to a remote server, but then hiding the fact that the information isn't coming from the local server.

Sunday, February 24, 2013

teaching the padawans

We have some smart but newb-ish interns at work; their experience with computers has been a lot different than mine because of the 20-odd years between us.

Still, when one of them remarks, "Oh yeah, the command line; I was thinking I should learn how to use that," it gives one pause. (This was from a guy who had already done some neat work in Eclipse and shown good potential.)

I mean, how would you describe a command line to someone who had grown up with mouse-and-windows interfaces all their life? Without making too many assumptions about what they know? The line I came up with was:

"Well, it's kind of like a chat program, except you're texting with a REALLY dumb robot."

Saturday, February 23, 2013

osx protips, terminal and otherwise

Now that my main work machine is a Mac I had to review what I do to make it more comfortable for me.

I am ok with the "natural" reverse scrolling with the trackpad, because I can mentally model it like I'm shoving stuff around on an iPad -- but applying that reversal to the mouse scrollwheel just feels wrong. Scroll Reverser takes care of that, putting a simple icon on the taskbar and letting me keep the trackpad reverse scroll but lose it for the scrollwheel.

Similarly, sometimes I like to use my big old Microsoft split keyboard. Now, it's tough enough to keep my head around using cmd-C instead of ctrl-C, but reversing the location of option and command is just mean. Double Command adds a pane to the System Preferences allowing for some simple keyboard remapping. Here are the settings I find useful:
Also in preferences, I always thought it was a weird state of denial that OSX only allowed tabbing between text boxes and list controls and not buttons and the like, but there's a simple setting for that, under System Preferences | Keyboard | Keyboard Shortcuts. 

Finally, I do a lot of things in Terminal. As an old school unix guy I like editing a ".profile" file in my home directory, with the following:
export PS1="\w$ "
alias ls="ls -F"
That makes the prompt more concise, and lets me visually identify directories vs files when I ls.
(Also, I could have specified my startup script under Terminal | Preferences | Shell | Startup | Run Command.)

Other Preferences are useful when I ssh to my webserver; under Preferences | Advanced I get better results declaring the terminal to be vt100 and checking "Delete sends Control-H".

Anyway, I think it's good for my mental elasticity to learn the Mac. I've adapted to its app-based (rather than windows-based) way of dealing with running tasks. With the Dock, I like to stick it to one side (since screens are wider than they are tall) and then remove all non-running apps from it. (So I also remove those "lights up if app is running" dots, since the only icons there are things that I'm using.)

Wednesday, February 20, 2013

quick and dirty header and body echo in node.js and express.js

I was having some problems getting a client to POST JSON properly, so for diagnostics I came up with the following node.js:

Here's showpost.js:

var express = require('express');
var app = express();

// accumulate the raw request body by hand, so we can see exactly
// what the client sent, unparsed
app.use(function(req, res, next) {
    var data = '';
    req.on('data', function(chunk) {
        data += chunk;
    });
    req.on('end', function() {
        req.body = data;
        next();
    });
});

app.get('/', function (req, res) {
   res.sendfile(__dirname + '/index.html');
});

app.post('/peek', function (req, res) {
    console.log(JSON.stringify(req.headers,null," "));
    console.log(req.body);
    res.json({ headers: req.headers, body: req.body });
});

app.listen(3001);
console.log('Listening on port 3001');

Here's index.html in the same directory:
<script src="http://ajax.googleapis.com/ajax/libs/jquery/1.9.1/jquery.min.js" type="text/javascript"></script>
<script type="text/javascript">
function check(){
    var url = "/peek";
    var payload = JSON.stringify({"foo":"bar","hoo":["ha","ver"]});
    $.ajax({
        url: url,
        type: "post",
        data: payload,
        dataType: "json",
        success: function(response) {
            $(".out").text(JSON.stringify(response,null," "));
        },
        error: function(e){
            alert("ERROR\n\n"+JSON.stringify(e,null," "));
        }
    });
}
</script>
<input type="button" value="peek" onClick="check();">
<pre class="out"></pre>

package.json is something like
{
  "name": "peek",
  "description": "quick and dirty post peeker",
  "version": "0.1.0",
  "private": true,
  "dependencies": {
    "express": "3.x"
  }
}
Copy these files into a directory, run "npm install", then "nodemon showpost.js" (or plain old "node showpost.js"). Then localhost:3001/ will have a button to press, or you can POST to localhost:3001/peek

Wednesday, February 13, 2013

a POST about node.js (get it?)

Very often in coding, you want to do something in a hurry, pick up some new whizbang piece of technology and just start using it without taking the time to read all the documentation. At my new job, that's what I decided to do with "node.js" to write a little RESTful server in a hurry.

node.js lets you write server code in javascript, and its event-driven structure has a reputation for being very fast and scalable. express.js is a common lightweight library for node to handle a lot of the server basics.

First step: install node.js.  (All these following specifics are for Macs) This is as simple as going to nodejs.org, hitting download, and running the .pkg. At this point I could run "node" from the terminal (ctrl-c twice to end) to see that it worked. 

I'd suggest running the first example there on the nodejs.org page; it's kind of mind-blowing to get a real live server working with so little code or fuss!

In that example, though, you do see there is a little fuss, having to manually set Content-Type and whatnot. Express.js is key to taking care of that kind of thing, and dozens of other little details as you try to do more complex stuff. (To be honest, I found out about Express by googling something like "node read post" (I knew I was going to have to read POST data) and found this stackoverflow article -- amazing what a crucial resource that site is.)

You don't even have to explicitly download express -- you should let "npm", the "node package manager" that you got for free when you installed node.js, do the work. In fact, the first part of the express.js guide page has a hello world program that explains things better than I would -- the idea is you make a "package.json" file that describes what your app will need, and then type "npm install", and it takes care of the rest. UPDATE: that page has kind of gone away. The getting started page kind of picks up from that... 

So if it's not clear from the guide, after you've done the package.json step, enter this as hello-world.js:
var express = require('express');
var app = express();
app.get('/hello.txt', function(req, res){
  res.send('Hello World');
});
app.listen(3000);
console.log('Listening on port 3000');
Then type "node hello-world.js", and you should then be able to go to a browser and navigate to http://localhost:3000/hello.txt and see a nice greeting.

At this point, I would suggest grabbing nodemon. This little wonder saves you from having to break out and restart your server after every change to your js file. You can get it with npm, but you probably want to install it with the "-g" option so you can run it from the command line, and that likely means you want to run the command as Administrator via sudo... so that's "sudo npm install nodemon -g" in all, and then you can type "nodemon hello-world.js", and every time you change the file (or another file in the directory) it restarts the server. This means you can get the "save file, switch to browser, hit reload" loop going, and quickly iterate as you develop your program -- POWERFUL STUFF.

So, with that out of the way, I had to address my main task: making a nice fake server to act as a testbed. I had to implement a RESTful state service. Two of the slight challenges for that (at least challenges in that they were beyond the vanilla "read a fixed URL, spit out a fixed response") were A. I have to cope with an identifier embedded in the URL, so I can't set up a fixed pattern for GETs and B. I need to be able to read a POST body that was JSON.

Both of these tasks were "more normal" these days than they were when I was last coding in server land, when every script or servlet had a unique URL and all extra information was CGI parameters, whether it was GET or POST. (I realize I'm taking a lot of things for granted as I describe this; I guess my blog isn't as newbie-friendly as I'd like it to be.) And besides the server stuff, I also had to make sure I knew the right jQuery to POST raw JSON.

To cut to the chase, the jQuery was pretty easy:

$.ajax({
    url: "/foo",
    type: "post",
    data: {'first':"do no harm"},
    dataType: "json",
    success: function(response) {
        // do something with the response here
    },
    error: function(e){
        alert("ERROR\n\n"+JSON.stringify(e,null," "));
    }
});

(forgive the nonsense I use for data in these cases (first do no harm?) -- I just don't want to have to think about it much.)

My node code wasn't much harder... my "final" learning app does three things:

  1. if "/" is requested (i.e. the root of the site) it serves up a static index.html file (that's where I ended up putting a form with the above jQuery) That's what the "sendfile()" code is doing.
  2. if "/test/FOOBAR" is requested, where FOOBAR can be pretty much anything, it makes a trivial page to say what was passed as the second part of the URL. The secret sauce there is using ":somename" in the URL matcher, which then makes the value available at req.params.somename 
  3. if "/foo" is POST'd to, the data is parsed (and it doesn't matter if it's CGI or JSON; express takes care of abstracting that detail out.) Here, that requires a piece of what express calls middleware; the line is "app.use(express.bodyParser());". (I don't quite understand how multiple calls to .use work, but I had a problem because it didn't work until I moved app.use above the first app.get, not just before the first post.) Once that middleware is applied, the request then has a "body" key that is a map of the parameters passed in.
  4. My code also shows an example of using a "global" variable ("count")-- but that value only lasts as long as the server is running! If you restart (like because you just changed the app) it will be reset. (The Express Guide page's "Users online count" introduced me to the hash-key database Redis-- the running and installation of that was almost as easy as the rest of node put together, but that's a lesson for another post. Anyway, obviously that kind of db would let you preserve data across server restarts.)
Here is the code:

var express = require('express');
var app = express();
app.use(express.bodyParser());
var count = 0;

app.get('/', function (req, res) {
   res.sendfile(__dirname + '/index.html');
});
app.get('/test/:id', function (req, res) {
    res.send(" I GOT "+req.params.id);
});
app.post('/foo', function (req, res) {
    count++;
    res.json({'posted': req.body, 'count': count}); // (response shape is representative)
});
app.listen(3000);
console.log('Listening on port 3000');

This stuff is empowering! Furthering the empowerment, there's also a company called heroku that lets you install stuff on the interwebs for free. I watched a great O'Reilly Webcast by Peter Cooper, How to Build a Chat Room in JavaScript in Under an Hour that details both the node.js and the heroku aspect. I should probably watch that again now that I've done this today.

(Incidentally, Peter Cooper curates Javascript Weekly -- I remember at a new job in 2009 or so, my fellow coder showed me "prototype.js" -- besides the coolness of that language, I had to wonder where I could go to learn about what new stuff was out there! It can be a challenge learning about technologies that you don't happen to be using at work. A few years later this newsletter came out, and could have answered my question. If you're doing UI in a browser, you should get and read this newsletter.)

Wednesday, February 6, 2013

GGJ: what I knew to make Heartchers

The other weekend I was part of the Global Game Jam. The theme was a heartbeat, and my team made Heartchers, a head-to-head HTML5 game for iPad (or other large-ish multitouch systems). Here it is in action:

You can play it online at heartchers.alienbill.com (the ratio might be off in a desktop web browser, and the controls won't be as pleasant, but you can still get the idea.)

The core of the game is Processing.js -- I've had great luck with Java-based Processing at past jams, and was able to transfer my experience to its crazy fun "write java, we'll turn it to javascript" outlook. It was also a good test of my own lowLag.js sound wrapper.

My team was me as the programmer, Bart Cusick as the "art guy", and Ken Snyder for sound. So this entry here can either be seen as a postmortem of our weekend, or me tooting my own horn about how darn clever I was.

The Shape of the Game
I pitched the game during the brainstorming session (the name was suggested by Ryan Kahn)... I had the vision of a game with joust-like controls, maybe with one person doing left and right flap, and a second player aiming and firing the arrows with the mouse. I had used the left/right flap control scheme in an earlier two-hour game jam game and I knew it had some untapped potential.

For a while we toyed with having a server-based game, which would let us hit the "play over 2 devices" diversifier (optional challenges a team can take on), but given my inexperience with that kind of game making we decided to keep it simple and local. The 2 player aspect came out as we developed playable prototypes... it's a classic, solid game paradigm harkening back to Spacewar!-- appropriate, because we were just down the street from where that pioneering game was made.
The end result was pretty close to what I had in mind when I pitched it, I'd say. I think members of my team were pretty decent at listening to one another during the inevitable small disagreements that emerged.

The Build Process
Since we were targeting iPad (using multitouch as a way to get past the multiplayer interface limitations of a typical PC+mouse setup), we had to figure out how to get builds onto devices. I figured it would be easier to throw builds up onto my webserver. (I thought I was pretty clever for making a dedicated subdomain for this, heartchers.alienbill.com, but I was surprised how many teams either did that as well or grabbed a new domain entirely!)

A typical large(ish) Processing program will be broken into multiple files, and then those files are concatenated into a single ".pjs" or ".pde". I recreated this process with a Perl script... my index.cgi actually glued the pieces-parts into a big file every time it was reloaded. (Actually shelling out -- something like

`cat src/file1.pjs src/file2.pjs src/file3.pjs > heartchers.pjs`;

-- to do the work.)

Of course, one of the charms of writing HTML5 stuff is when you eliminate the build process entirely: just save, switch over to the browser, and run. I recreated that ease with the program WinSCP... I set it up to monitor my src directory. Every time I saved a file, it noticed the update and pushed the changed files to the server. No source control to speak of, but we had continuous deployment up the wazoo, baby! (I should have written a few more lines of script in index.cgi to take snapshots, come to think of it.) Dropbox worked well for sharing art and sound, and with one programmer we dodged much of the usual file conflict issue.

So Processing.js supports multitouch, but I couldn't find solid documentation for it, in particular the contents of the "touchEvent" object... I had hoped/assumed that if there was a touch event there was a way of identifying the "new" touch, but I didn't know what it was called (the "changedTouches" array, it turns out).
I tried to use JSON.stringify on it, but its structure was circular, so stringify() threw an error. I ended up writing my own inspector of the object's keys, and figured it out from there.
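The inspector itself didn't survive, but the idea is simple enough to sketch (the function name here is my own, not the original code): walk an object's own keys one level deep, so even circular structures can be examined.

```javascript
// A sketch of a one-level key inspector, for objects (like touchEvent)
// whose circular structure makes JSON.stringify throw.
function inspectKeys(obj) {
    var lines = [];
    for (var key in obj) {
        var val = obj[key];
        // note array-ish values, since those hint at lists of touches
        var note = (val && typeof val.length === 'number') ? ' (length ' + val.length + ')' : '';
        lines.push(key + ': ' + (typeof val) + note);
    }
    return lines.join('\n');
}
```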

Another important step was making it so I could still do most playtesting on my laptop -- I wrote an abstraction layer so both mouseEvents and touchEvents would be treated the same:

void mousePressed() {
  controlPress(mouseX, mouseY);
}
void mouseDragged() {
  controlMove(mouseX, mouseY);
}
void mouseReleased() {
  controlRelease(mouseX, mouseY);
}

void touchStart(TouchEvent t) {
  for (int i = 0; i < t.changedTouches.length; i++) {
    controlPress(t.changedTouches[i].offsetX, t.changedTouches[i].offsetY);
  }
}
void touchMove(TouchEvent t) {
  for (int i = 0; i < t.changedTouches.length; i++) {
    controlMove(t.changedTouches[i].offsetX, t.changedTouches[i].offsetY);
  }
}
void touchEnd(TouchEvent t) {
  for (int i = 0; i < t.changedTouches.length; i++) {
    controlRelease(t.changedTouches[i].offsetX, t.changedTouches[i].offsetY);
  }
}

I was surprised to find out I didn't have to weed out mouse events on the iPad; I get the feeling once the system knows you're reading multitouch it stops doing its usual "treat touch events like the mouse" thing.

Misc Points
I've always been a big believer in log-based debugging; especially in a system with as big a gulf between source code and running code as Processing.js, the process of "figure out your assumptions, then see what you can print to tell you which one is incorrect" is even more crucial than usual. Processing.js offers a fakey console with "println()" support, but it interferes with the game display; I ended up writing my own "Msg()" function to do an onscreen log display.
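Msg() itself isn't reproduced here, but the gist is a small ring buffer of recent lines that the draw loop paints on top of the game each frame -- something like this (in javascript terms; the names and buffer size are mine):

```javascript
// Keep only the last few messages; the draw loop can then render
// msgLines over the game each frame instead of fighting the display.
var MAX_LINES = 8;
var msgLines = [];
function Msg(s) {
    msgLines.push(String(s));
    if (msgLines.length > MAX_LINES) msgLines.shift(); // drop the oldest
}
```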

There were some problems with stopping iOS "select text" feature, and an annoying little attempt at scrolling it would do. This link on select helped a bit as did this one on the bounce effect.

There's actually a bug on iOS and the "first touch event", see this thread which has a link to the custom bugfix processing.js file we actually used.

I think I played a little fast and loose with the Processing/Javascript divide; for instance I called my lowLag library directly, and I think after a while I got sloppy to the point where I couldn't use the default Processing IDE; I switched over to Komodo Edit in Java mode then.

The game does some funky scaling to always fit the available real estate, but it also has some hard-coded size constants, so it's not really playable on iPhone. Also, there have been times I left it running where it kind of locked up the iPad; the game was still running and responsive, but I couldn't get back to the home screen or even turn it off without doing the "hard shutdown" of the device. (!) I don't think a web-based game should be able to do that on iOS...

The old adage about building playable prototypes ASAP certainly came into play; of course, that's my dark secret as a gamemaker: I'm actually a toymaker who then builds games on top of the toy. For me, that's where the heart of the joy of video games lies; not in story, or character, or rules or system, but in virtual toys that might not even be possible in real life.

The controls were kind of fun... a virtual dial to aim, release to fire, and then virtual left and right flap buttons for each player (it was surprisingly easy to make the "upside down" version for player 2; my art guy had been skeptical). Of course the challenge with multitouch is you don't necessarily know which player is doing which touch, so we took the simple approach of splitting the screen, and having static controls separate from the action.
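That split-screen routing can be sketched in a line or two (this helper is hypothetical, not the actual game code -- it just illustrates the "ownership by screen half" idea):

```javascript
// Route a touch to a player purely by which half of the screen it hit:
// player 2 plays "upside down" on the top half, player 1 on the bottom.
function playerForTouch(touchY, screenHeight) {
    return (touchY < screenHeight / 2) ? 2 : 1;
}
```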

So overall I am happy with the way it turned out! It's one of the most fun games I've ever made. I might try to do a native iOS version at some point, or experiment with other variations, like a one player mode. I'm also delighted to know how to do a multitouch game now-- I'm eyeing a version of Atari 2600 Warlords next...