node.js scope weirdity

So I’ve been using Node.js a lot in my new job. Quick note: it’s super awesome. The job, and node.js. Anyways. I’ve put together a couple of non-trivial pieces with it and one thing that keeps tripping me up is: when is my variable in and out of scope? So I thought I’d write this up to see if I can explain it.

First example

Let’s look at this simple server code – it’s just a dumb webserver (shamelessly stolen from the node.js home page) that says ‘hello world’ and spits out a connection count:

var http = require('http');
var conncount=0;
http.createServer(function (req, res) {
  conncount++;
  num=conncount;
  res.writeHead(200, {'Content-Type': 'text/plain'});
  res.write("Here is some stuff\n");
  res.write("And the connection count is: "+conncount+"\n");
  setTimeout(function() {
    res.end('Hello World: conn count was: '+conncount+' and my connection # is: '+num+'\n');
    conncount--;
  },5000);
}).listen(1337, "127.0.0.1");


console.log('Server running at http://127.0.0.1:1337/');

So if I just curl that (curl http://localhost:1337/), I get:

Here is some stuff
And the connection count is: 1

…5 seconds pass, and then…

Hello World: conn count was: 1 and my connection # is: 1

So that seems to make some sense. However, what happens if I hit it with curl 12 times in quick succession (within that 5-second window)? This:


Here is some stuff
And the connection count is: 1
Here is some stuff
And the connection count is: 2
Here is some stuff
And the connection count is: 3
Here is some stuff
And the connection count is: 4
Here is some stuff
And the connection count is: 5
Here is some stuff
And the connection count is: 6
Here is some stuff
And the connection count is: 7
Here is some stuff
And the connection count is: 8
Here is some stuff
And the connection count is: 9
Here is some stuff
And the connection count is: 10
Here is some stuff
And the connection count is: 11
Here is some stuff
And the connection count is: 12


…then 5 seconds elapse, then…

Hello World: conn count was: 12 and my connection # is: 12
Hello World: conn count was: 11 and my connection # is: 12
Hello World: conn count was: 10 and my connection # is: 12
Hello World: conn count was: 9 and my connection # is: 12
Hello World: conn count was: 8 and my connection # is: 12
Hello World: conn count was: 7 and my connection # is: 12
Hello World: conn count was: 6 and my connection # is: 12
Hello World: conn count was: 5 and my connection # is: 12
Hello World: conn count was: 4 and my connection # is: 12
Hello World: conn count was: 3 and my connection # is: 12
Hello World: conn count was: 2 and my connection # is: 12
Hello World: conn count was: 1 and my connection # is: 12

So my question is, why does it do that? Each execution of my function _should_ have its own stack, no? And so wouldn’t each stack have its own variables?

Now, mind you – I know a (horrible) way to fix this – wrap my setTimeout call in an anonymous function and pass ‘num’ as a parameter – but what I don’t really get is ‘why’? I threw this line all the way at the end (with apologies to Haddaway) –

setTimeout(function() { sys.debug("What is num! Baby don't hurt me, don't hurt me, no more..."+num)},10000);

(And I had to require('sys') at the top too)

And, in my terminal with Node running, I got:


DEBUG: What is num! Baby don't hurt me, don't hurt me, no more...12

What?! I would’ve expected ‘num’ to fall out of scope?! Why wouldn’t that function scope up there make ‘num’ only exist for this execution? Is there no concept of ‘stack’ or anything? And even if there wasn’t any, each execution of my function is an execution and should ‘freeze’ the variable or something, right? Apparently not.

So what happened? Well, I can tell you – that variable ‘num’ that I referenced, since I *didn’t* define it using ‘var’, is GLOBAL. So that’s why it’s acting so global. Simply adding ‘var’ to the definition (var num=conncount;) made it start working properly. E.g., after the delay, my output became:

Hello World: conn count was: 12 and my connection # is: 1
Hello World: conn count was: 11 and my connection # is: 2
Hello World: conn count was: 10 and my connection # is: 3
Hello World: conn count was: 9 and my connection # is: 4
Hello World: conn count was: 8 and my connection # is: 5
Hello World: conn count was: 7 and my connection # is: 6
Hello World: conn count was: 6 and my connection # is: 7
Hello World: conn count was: 5 and my connection # is: 8
Hello World: conn count was: 4 and my connection # is: 9
Hello World: conn count was: 3 and my connection # is: 10
Hello World: conn count was: 2 and my connection # is: 11
Hello World: conn count was: 1 and my connection # is: 12
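
For reference, here’s roughly what the fixed handler ends up looking like – the only change from the code up top is that one ‘var’:

var http = require('http');
var conncount=0;
http.createServer(function (req, res) {
  conncount++;
  var num=conncount; // 'var' makes num local to this request's closure
  res.writeHead(200, {'Content-Type': 'text/plain'});
  res.write("Here is some stuff\n");
  res.write("And the connection count is: "+conncount+"\n");
  setTimeout(function() {
    res.end('Hello World: conn count was: '+conncount+' and my connection # is: '+num+'\n');
    conncount--;
  },5000);
}).listen(1337, "127.0.0.1");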

Such a terribly easy way to blow up your javascript! So apparently node.js supports “strict mode” – just make the first line of your javascript code say:

"use strict";

(Note, that’s just a string, with the quotes. A javascript parser will just ignore it if it doesn’t understand it. You could also put a line in the middle of your code saying "poop"; and it would be ignored the same way).
Now with strict mode enabled, the previous version of my code (without the ‘var’ declaration) says:


ReferenceError: num is not defined

So I think I’ll be using this from now on. Unless ‘strict’ mode starts making me crazy – which is certainly also possible.

Next Example

"use strict";
var sys=require('sys');
for(var i=0;i<10;i++) {
 setTimeout(function() {sys.debug("I is now: "+i)},1000);
}

(Notice how I've learned my lesson? Yeah, I don't need concurrency bugs biting me in the ass, thankyouverymuch.)

The output is, unfortunately:

DEBUG: I is now: 10
DEBUG: I is now: 10
DEBUG: I is now: 10
DEBUG: I is now: 10
DEBUG: I is now: 10
DEBUG: I is now: 10
DEBUG: I is now: 10
DEBUG: I is now: 10
DEBUG: I is now: 10
DEBUG: I is now: 10

So this one - I definitely know how to fix. The problem is that by the time the timeout actually _fires_, the value of 'i' will be different - in this case, incremented all the way to 10. I need to somehow 'freeze' the value of i within the timeout.

So I would do:

"use strict";
var sys=require('sys');
for(var i=0;i<10;i++) {
 setTimeout((function(number) {return function() {sys.debug("I is now: "+number)}})(i),1000);
}

Which results in:

DEBUG: I is now: 0
DEBUG: I is now: 1
DEBUG: I is now: 2
DEBUG: I is now: 3
DEBUG: I is now: 4
DEBUG: I is now: 5
DEBUG: I is now: 6
DEBUG: I is now: 7
DEBUG: I is now: 8
DEBUG: I is now: 9

The problem is, that's ugly as shit. What better way is there to do it – something more readable, maintainable, debuggable, etc.? And that function instantiation thing gives me the willies. Well, I don't know the best answer for that yet. How about this:

"use strict";
var sys=require('sys');
for(var i=0;i<10;i++) {
 (function(number) {
  setTimeout(function() {sys.debug("I is now: "+number)},1000);
 })(i);
}

(The output is still the same). That feels a little less awful and unreadable - and doesn't give me the anonymous-function-returning-function yucky feelings that the previous one did. (Though it still is effectively doing that, isn't it?) The crazy squiggly brace, close-paren, open-paren business is still a little awkward though.

A piece of advice I got from the node.js group seemed pretty sage, in terms of making this stuff more readable:

"use strict";
var sys=require('sys');

function make_timeout_num(number)
{
 setTimeout(function() {sys.debug("I is now: "+number)},1000);
}

for(var i=0;i<10;i++) {
 make_timeout_num(i);
}

(output is still the same again). And, wow, yeah, that's a hell of a lot more readable, at the expense of 4 or so more lines. But sometimes, logically, you don't want to split out your functions like that - if every time you need to freeze something in a scope you have to declare a function somewhere, your eyes will have to scan all over the place, and that could be ugly. So you could maybe declare the function within the for loop - though that's still in the global scope, it would just be for readability's sake.
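
Just to sketch that last idea (and it is only a sketch): a function expression inside the loop does the job, since strict mode – at least back then – complained about actual function declarations inside blocks. The behavior is the same as the helper-function version above; as noted, the variable itself still isn't scoped to the loop, it just reads more locally:

"use strict";
var sys=require('sys');
for(var i=0;i<10;i++) {
 // a function expression rather than a declaration; the parameter 'number'
 // is still what actually freezes the value of i for each iteration
 var make_timeout_num = function(number) {
  setTimeout(function() {sys.debug("I is now: "+number)},1000);
 };
 make_timeout_num(i);
}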

I think I'll probably stick with the in-line version above, with the anonymous function declared right in the loop. It's not too insanely unreadable, and it's compact enough. If the contents of my anonymous function get a few lines long, or a few variables deep, I might split it out into its own function for readability's sake.

Firefox 5 extensions compatibility woes

Didn’t find this on the intertubes anywhere, so I thought I would write it down to help somebody else with the problem.

The new Firefox 5 disabled a lot of my extensions. You can disable the add-on compatibility check by doing the following:

Launch FF5, go to “about:config”

Right click to add a new property, make it of type ‘Boolean’, name: extensions.checkCompatibility.5.0

Set it to ‘false’.

Restart.

You’re welcome.

(Oh, and I should mention, any previously disabled extensions are now re-enabled. Good point, Beckley.)

A Modest Proposal…

Had a great discussion via Twitter and my blog’s comments about IPv6 and other ways of handling the IPv4 address-space shortage. Over the course of discussing and arguing and going back and forth, I think I’ve sharpened my idea.

IPv4+

IPv4+ is an extension of the IPv4 address space via backwards-compatible use of the IPv4 ‘Options’ fields. Two such options shall be used: one for an ‘extended-address-space’ destination, one for an ‘extended-address-space’ source. If these options are used, they MUST be the first two options in the packet. If both extended-source and extended-destination options are used, the destination MUST be first. This enables (eventual) hardware-assisted routing at a fixed offset within the IPv4 header. Extended addresses grant an additional 16 bits of address space. If any routing decisions are to be made based upon the extended address space, those SHOULD only be done at an intra-network layer, within one autonomous system. The extended source and extended destination options are exactly 32 bits long, each. The format is as follows:

Bits:    0       1-2    3-7     8-15    16-31
Field:   Copied  Class  Number  Length  Data
Values:  1       0      5/6     3       Address Data

Option 5 will be used for Extended Destination, and Option 6 will be used for Extended Source. Perhaps additional options could be reserved and specified for future use as “Super-Extended Destination” and “Super-extended Source”.

IPv4+ addresses will be specified in text as having two additional octets – e.g. 72.14.204.99.56.43. The extra octets on the right-hand side correspond to the extra 16 bits of addressing data. An address with both additional octets of zero is understood to mean a legacy IPv4 node at that address. E.g. 72.14.204.99.0.0 means the IPv4 node at 72.14.204.99.
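
Just to make the encoding concrete, here’s a rough Node sketch of how that dotted form might be split into a legacy destination plus a packed extended-destination option, using the field values from the table above (purely illustrative – the function name and layout here are mine, not part of the spec):

// e.g. "72.14.204.99.56.43" -> legacy destination 72.14.204.99, plus a
// 4-byte option carrying the extra 16 bits of address data (56, 43)
function splitIPv4Plus(addr) {
  var octets = addr.split('.').map(Number);
  var legacy = octets.slice(0, 4);   // goes in the normal IPv4 destination field
  var extra  = octets.slice(4, 6);   // the 16 bits of extended address data

  // option-type byte: copied=1, class=0, number=5 (Extended Destination)
  var optionType = (1 << 7) | (0 << 5) | 5;          // 0x85
  var option = [optionType, 3, extra[0], extra[1]];  // length value of 3, per the table

  return { legacy: legacy, option: option };
}

console.log(splitIPv4Plus("72.14.204.99.56.43"));
// { legacy: [ 72, 14, 204, 99 ], option: [ 133, 3, 56, 43 ] }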

Operation of the protocol will be designed to be as backwards-compatible with Unextended IPv4 as possible.

IPv4+ nodes have both an IPv4 address – which may be an RFC1918 non-routable address, or a link-local address – as well as an extended IPv4+ address, which SHOULD be a routable IPv4 address plus 16 additional bits of identifying data. The legacy IPv4 address MUST be locally unique within its network segment. A backwards-compatible IPv4+ that uses an RFC1918 address for its legacy IPv4 address SHOULD (MUST?) be connected to a Router or Gateway that is capable of Network Address Translation.

As the protocol gains acceptance, core BGP routes MAY be extended to full /32 networks.

An IPv4+ node learns it is on an IPv4+ compatible network through an extra DHCP option, or it may be statically configured as such.

IPv4+ ARP protocol is not currently defined.

An IPv4+-aware gateway OR node MUST be aware of the mapping from IPv4+ addresses to legacy IPv4 addresses. The mapping SHOULD be programmatic – e.g. 192.168.1.2 corresponds to 72.14.204.99.1.2.
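
A sketch of what that programmatic mapping might look like – the 72.14.204.99 prefix is just the example routable address from earlier, and the assumption (mine, not the spec’s) is that the last two octets of the RFC1918 address become the 16 extension bits:

// hypothetical gateway configuration: the gateway's routable IPv4 address
var ROUTABLE_PREFIX = "72.14.204.99";

// 192.168.1.2 -> 72.14.204.99.1.2
function legacyToExtended(legacy) {
  var o = legacy.split('.');
  return ROUTABLE_PREFIX + "." + o[2] + "." + o[3];
}

// 72.14.204.99.1.2 -> 192.168.1.2 (assuming a 192.168.x.x internal network)
function extendedToLegacy(extended) {
  var o = extended.split('.');
  return "192.168." + o[4] + "." + o[5];
}

console.log(legacyToExtended("192.168.1.2"));       // 72.14.204.99.1.2
console.log(extendedToLegacy("72.14.204.99.1.2"));  // 192.168.1.2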

Most implementations will likely use RFC1918 addresses for legacy IPv4, and routable IPv4 addresses as the first four octets of the IPv4+ address. So-called “Public Hosts” MAY exist at some point, in which case they have both a routable IPv4 address AND an IPv4+ address. The only purpose of such a host would be future-proofing – no real benefit is conferred, other than ensuring that software stacks can utilize extended addressing.

An IPv4+ gateway MAY define a ‘default host’ which should receive all unidentified legacy IPv4 traffic, or it may drop any such packets, or it may use a simple heuristic such as ‘lowest address wins’.

“IPv4+ ONLY” hosts cannot exist. Non-legacy-routable IPv4+ hosts could exist by the local gateway refusing to NAT addresses by responding with ICMP Destination Unreachable or a new ICMP message.

Software implementations SHOULD embed 48-bit IPv4+ addresses in their existing IPv6 software stacks – which have already been implemented and rolled out. A special segment of IPv6 space SHOULD be allocated and reserved for this embedding to ensure no collisions occur if IPv6 were to become more widespread.
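
A quick sketch of that embedding – the prefix here is a made-up placeholder out of the IPv6 documentation range (2001:db8::/32), standing in for whatever segment would actually get reserved:

// HYPOTHETICAL prefix – a real deployment would use the reserved segment
var EMBED_PREFIX = "2001:db8:44::";

// 72.14.204.99.56.43 -> an IPv6 address carrying the 48 bits of IPv4+ address
function embedInIPv6(ipv4plus) {
  var o = ipv4plus.split('.').map(Number);
  // pack the six octets into three 16-bit groups
  var groups = [
    (o[0] << 8 | o[1]).toString(16),
    (o[2] << 8 | o[3]).toString(16),
    (o[4] << 8 | o[5]).toString(16)
  ];
  return EMBED_PREFIX + groups.join(':');
}

console.log(embedInIPv6("72.14.204.99.56.43"));
// 2001:db8:44::480e:cc63:382b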

Interoperability Scenarios

two IPv4+ nodes on the same network

MUST use their legacy IPv4 addresses to communicate. An IPv4+ node can identify the legacy IPv4 address that corresponds to the other node because of the node’s required knowledge of the mapping between IPv4+ and legacy IPv4 addresses.

two IPv4+ nodes on different networks with IPv4+ transit between them

IPv4+ packets are sent and received through extended IPv4+ addresses.

IPv4 node and IPv4+ node on same network

An IPv4+ node will use its IPv4 address and the IPv4 protocols to contact the ‘legacy’ node. The IPv4+ node can identify the IPv4 node via its legacy 32-bit address.

IPv4 node and IPv4+ node on different networks, IPv4+ aware router on IPv4+ network

The IPv4+ node uses its legacy IPv4 address to talk to the IPv4 node. The IPv4+-aware router NATs the traffic to the IPv4 node.

IPv4 node and IPv4+ node on different networks, IPv4-only router on IPv4+ node’s network

Interesting – at some point the IPv4+ node MUST become aware that it does not have IPv4+ through-connectivity to the other node, and must fall back to using its legacy address. The IPv4-only gateway or router will not be able to communicate with the IPv4+ node because the IPv4+ node will ‘appear’ to be transmitting packets from an incorrect address. The IPv4+ node SHOULD eventually disable IPv4+ connectivity, as it will be unable to communicate with any other devices. An ICMP ‘Ping’ exchange may be employed to determine that the gateway is not IPv4+ aware, by examining the return packet’s IPv4 options (if any).

Migration Scheme

“Access” routers (small office, home office) need new firmware to handle IPv4+. IP stacks in major operating systems need extensions for IPv4+. ISPs that do not firewall their customers do not need to do anything. IPv4+-compatible applications then have restored end-to-end connectivity.

Eventually, DNS extensions SHOULD be created to permit returning extended IPv4+ addresses for services.

An ARP extension MAY at some point be required for nodes that do not require legacy connectivity.

BGP routing will eventually need to be broadened to support /32 extended networks (a /32 network could correspond to a full 65,536-host internetwork).

Interior routing protocols MAY need to be extended to make routing decisions based on extended IPv4+ addresses. The header order and length have been optimized to make this as painless as possible, but it may still be painful.

An additional 8 bits of address space can be reclaimed once all routers are compatible with IPv4+ – the length byte can be counted as an address-space indicator, which is 3 for all initial IPv4+ networks, but could be allowed to vary once the entire Internet is switched over to IPv4+.

Concerns

This is a stopgap.

Will routers in the ‘wild’ strip options they do not understand? Will they ever reorder or muck with options? Do we really have to steal that one byte in the option to say ‘3’?

Will IPv4 live on forever? Will routers always have to handle all the crazy NAT and other stuff that will be a legacy of ‘legacy’ IPv4?

Will the only additional ‘feature’ of this IPv4+ thing be better BitTorrent nodes?

BIG CONCERN: will network devices get confused about seeing IP packets with “apparently” identical IPv4 addresses (really extended IPv4+ addresses) and freak out?

The 10.0.0.0/8 network is too large to map all of its hosts as IPv4+ addresses behind one legacy IPv4 address. It might require a full Class-C allocation of legacy IPv4 addresses to map every possible host. Yuck.

Should this protocol actually attempt to map full IPv6 addresses into appropriate extension headers?

IPv6

So, two things about IPv6 – first, a little bit about how to do it if you’re all Mac’ed up like me, and then, a little rant.

The easiest way to get IPv6 working is to grab a copy of Miredo for OS X. This lets your Mac, pretty much automagically, get a connection to the IPv6 Internet via an IPv4 tunnel anywhere that you have IPv4 connectivity. It’s nearly painless, and at that point, you can start to at least do some basic playing around with IPv6 stuff. I’ve since enabled IPv6 on my home network, but I keep Miredo installed (though deactivated) in case I want to use it at a coffee shop or on some other network.

The good way to do it is to go to tunnelbroker.net and sign up (it’s free!). Then configure your Airport Extreme to do tunneling by following these directions. Voila. Now you have IPv6 connectivity to the intarwebs…or the ip6ernet. Whatever.

The best way to do it – and I haven’t done it this way – is to actually get IPv6 connectivity from your ISP – no tunneling or anything, just native connectivity. I can’t do this because Time Warner doesn’t give me that, or maybe my Airport isn’t so good at doing that. I don’t really know.

So far, the one thing I can see here is that you could begin to use this IPv6 connectivity to work around the general destruction of the internet’s any-to-any principle – the idea that any IP address on the internet should be able to contact any other. This is basically no longer the case, as many people use RFC1918 addresses behind NAT to conserve IP addresses (and there are also some positive security implications). So my computer at 10.0.1.2 can’t necessarily talk directly to your computer at 192.168.1.2 (or, even worse, your computer at 10.0.1.2, behind your NAT and not mine). The way we work around this type of thing is with all kinds of magical firewall port-mapping and other such tricks. It’s a pain in the butt. Services like AIM’s ability to send files, or various screen-sharing utilities, all now require some kind of centralized server that everyone can connect to, because just about every network-connected computer tends to be behind a NAT. That centralization is unfortunate, and a drain on services that really should just be about connecting anyone to anyone.

But if you have IPv6 set up in the ‘good’ way listed above (or the ‘best’), you actually have a new option. You can un-check “block incoming IPv6 connections” on your Airport, and now have access to anything in your network that speaks IPv6 from the outside world (again, so long as the outside world is IPv6). Of course, there are big security implications here, but that could actually be a way of making IPv6 somewhat (remotely) useful. Things that like this type of connectivity might be: BitTorrent-esque things…peer-to-peer video applications…some kinds of home-hosting things…I dunno. That type of stuff. But, in short, while at Starbucks, I could fire up my Miredo-for-OS-X client and connect to various things in my home. That could be useful for some people.

My experience with this new setup is rather underwhelming. I can go to ipv6.google.com. I guess on World IPv6 day I’ll be able to…somehow…enjoy some festivities or something. I don’t really have any home servers nowadays.

<Begin Rant>

Who the fuck came up with this stupid-ass migration plan? It has to be one of the dumbest things I have ever seen. IPv6 the protocol is OK (at best)…it really feels pretty close to IPv4, except with a bigger address space. OK, I guess. DJB (who is brilliant, but I think may be batshit insane) sums up the problem really well.

In short, there’s negligible benefit for going to IPv6. You can’t really get anywhere you couldn’t get to anyways. If IPv6 had been designed to interoperate with IPv4, we would be far closer to being in a happy IPv6 world – think about how many machines are dual-stacked right now? Those machines would instead be single-stacked, and some early adopters, or price conscious people (think: Web startup types who like to skip vowels in their domain names) might be able to start offering IPv6 only services, and would be able to start hitting users right now. But, no. The migration scheme seems to be:

  1. Migrate everyone and everything to IPv6 now

And you’re done! Isn’t that easy? The standard has been out for a bajillion years. The IPv4 shortage has been a problem for a bajillion years. And we’re still here. Not because the IPv6 protocol is flawed, but because there’s no migration scheme at all. There’s no backwards compatibility. This whole infrastructure has to layer over the entire internet. Who the hell thought this was a good idea? I mean, sure, it’s “simpler”, protocol-wise, to do that…but a few more years of protocol engineering and a truly backwards-compatible solution, and we would’ve had people switching ages ago. Go look at how many transition mechanisms are in place for IPv4-to-IPv6. It’s stupid. That alone indicates the level of FAIL that is likely here.

The other problem I have with IPv6 has to do with routing tables. And protocol stacks. Right now, to do any non-trivial amount of TCP/IP networking (let’s imagine HTTP for this example), you need:

  • DNS
  • some kind of routing protocol working correctly
  • ARP to figure out how to get to your local endpoint
  • DHCP to figure out what your IP address is going to be

Network troubleshooting ends up being an interesting and non-trivial problem of figuring out who can ping who (whom? Grammar fail. Sorry), what routing tables look like on various intermediate devices, what IP address you get from DNS, is your DNS server working, etc, etc. It’s a muddle, but it’s a muddle that’s been treating us well on this whacky internet of ours.

However, in the IPv6 world, we now have – the entire protocol stack for IPv4, PLUS a protocol stack for IPv6, and a crazy autotunneling doodad with a weird anycast IPv4 address (oh, that’ll be fun). And a routing table that is exploding out of control. I mean, my dinky little home network (theoretically) gets a /64 network. If every Time Warner customer gets a /64 – and there’s not some means of aggregating routes together – the routing table completely goes insane. Now I’d assume that TW would aggregate its customers into a /48 or something – god, I hope so! But still, we’re talking about a world where there are networks all over the place.

There’s a big question as to whether or not people ought to get provider-independent network addresses or not. I think I know the answer to this: No, they should not. It’s suicide. I think the real solution for this is at the DNS level – you should get addresses that correspond to your rough physical place on the internet to keep the routing tables somewhat simple, and if you want to move endpoints around, you change DNS entries. Get away from thinking of IP’s as static. If DNS were baked deeper into the protocol stack, this could work extremely well. Want to have a webserver at www.whatever.com? If you have some kind of authorization, your webserver would come up and use some kind of key exchange to somehow tell DNS that it is www.whatever.com. If you move, you just move your webserver. Your keys still work. If you set up a webserver in your house – same thing. Anyways, that’s just hand-waving. There still would have to be some way of bootstrapping that (like, what IP address do I contact the webserver at? Maybe you find that out by talking to your local gateway? Dunno)

<End Rant>

I guess that 1) wasn’t a little rant and 2) was a little rambly. So sue me.

ucspi-tcp and stupid errno.h (CentOS and ucspi-tcp)

I keep running into this and doing my standard google-up-the-answer-routine didn’t seem to be working.

In short, ucspi-tcp doesn’t compile on CentOS boxes (or RedHat boxes). Cuz DJB doesn’t “believe in” RedHat’s “you must have an errno.h” thing. Hey, I love DJB, and his software, but I also think he’s impractical and a nutjob sometimes. This would be one of those times.

Lots of folks had patch-based ways of fixing the problem; those seemed rather laborious to me. I just stole The Internet’s method for another DJB package.

Just append -include /usr/include/errno.h at the end of the first line of conf-cc so it looks like this:

gcc -O2 -include /usr/include/errno.h

This will be used to compile .c files.

Boom, everything works now.

Even Mo’ Math…

So Beckley got a hold of the MetroCard Math site and built on top of David’s fantastic work to build even more prettiness, neat-workingness, and general niftitude into the site.

We also put in a thingee – well, by ‘we’ I mean ‘he’ – he put in a thingee that lets you see how the new price changes will affect you. For me, I definitely will be sticking with the pay-per-ride.

And another thing – I actually tested the new (divisible-by-a-nickel) magic number, and it *does* work. My MetroCard has an exactly even number of rides on it. Cool. Now I just have to do something with all these MetroCards that have 10 or 20 cents on them – perhaps a new part of the site that lets you put in how much money is on your cards, and then it tells you how much more to put on to get it ‘even’? Not a bad idea…

Gory Details: so, talk to any computer-sciencey person and they will always tell you that Floating Point Math is Hard. I have only rarely run into this, but the rounding algorithms are very specific when you buy stuff, and if you’re off by a penny, then, well, you’re off by a penny, and things stop working. We found a couple of minor (off-by-one) bugs here and there, and every time it seemed like I fixed one, the rest of the results would start to go haywire. The real problem is that I am trying to ‘move’ the rounding around the formula:

round_for_money($x * 1.15) = n * $2.25

Now solve for ‘x’, and let ‘n’ be any integer – well, that pesky ’round()’ is in the way, and if you just try to move it to the other side, or round at some random and/or inopportune time, then when you get back to the original equation, sometimes the numbers don’t work out anymore. It sucks.

So I racked and racked my brain trying to figure out a way to do my simple solve-for-x routine. I really just want to try different integers for ‘n’ until I find an answer that’s “acceptable.” But that doesn’t work. At all. Or at least, I don’t know what mathematical operation I can do to move that round() function off the left side so I can try to have a formula that points to ‘x’.

What did I do finally? I gave up. I left the formula as it is above, and just run ‘x’ from 0 to “a lot” (a thousand bucks or a hundred bucks I think?). The answer I get is going to be completely accurate, but it wastes computing power. Well, too bad, your browser has to do a little bit of multiplication in a loop. My condolences. But! The result is, I’m pretty convinced my answers are to-the-penny accurate now. We’ll see when the big price change kicks in.
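
For the curious, here’s a rough sketch of that brute-force loop. The rounding – nearest cent on the 15% bonus – is my guess, and as the paragraph above says, the exact MTA rounding is precisely the fiddly part, so treat this as illustrative:

var FARE = 2.25;   // base fare
var BONUS = 1.15;  // 15% pay-per-ride bonus

// ASSUMPTION: the bonus value gets rounded to the nearest cent
function round_for_money(x) {
  return Math.round(x * 100) / 100;
}

var magic = [];
// brute force: try every penny amount from $0.01 up to $100
for (var cents = 1; cents <= 10000; cents++) {
  var x = cents / 100;
  var value = round_for_money(x * BONUS);
  // keep x if the card ends up holding an exact number of fares
  if (Math.round(value * 100) % Math.round(FARE * 100) === 0) {
    magic.push(x.toFixed(2));
  }
}
console.log(magic.join(', ')); // the original $9.78 magic number shows up in this list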

Thanks again to David Dominguez for the initial switch to jQuery-powered MetroCard Math, and thanks to Beckley for the full re-skinning he pulled off.

More Metrocard Math…

So I’ve updated my Metrocard Math site.

First, my friend David Dominguez helped out to make it much, much prettier. He also added some jQuery magic, and changed up a significant amount of how the site is structured. I was trying a weird idea – where I would strip the markup down to its most basic elements, and style it from there using cleverly-constructed css selectors, but I don’t think it worked out. My friend Bryan tried to restyle it as well, and the rigidity of the markup basically stopped him in his tracks. So, anyways, now it looks prettier and is definitely more usable on my phone.

I had also tried to buy a metrocard for one of the Magic Number amounts the other day at a vending machine, and it was rejected due to “invalid amount.” Stupid. It had worked before. I tried the small number. I tried the big number. Nothing worked. On a hunch, I tried $11.75 instead of $11.74. Success. And of course, I will eventually have a metrocard with a penny on it. So apparently, the amount has to be divisible by a nickel? I’ve added that to the site, and we’ll see when I next buy a metrocard if the new system actually works. I hope they don’t make it so it has to be divisible by $0.25 – that would really sting.

I still want to do something where you can toggle between the current prices and the newly announced ones. But right now you can just type in the new numbers – here’s what they are according to the Queens Chronicle (which I used to consult for, a million years ago!): $104 is the new 30-day, $2.50 is the new single ride, and $29 is the new 7-day. The one-day funpass is going to be eliminated, and so will the 14-day unlimited. Oh, and I hadn’t seen this before – there’s now going to be a $1 surcharge every time you pick up a new metrocard (though that doesn’t start till some time in 2011). OUCH. That means when you leave your metrocard at home and have to buy another one it’s *really* going to sting. One more extra buck. Damn. I mean, you can still use the latest magic number ($15.65, I believe? Though I worry my rounding might not match the MTA’s…), but you definitely will not want to be throwing out your metrocards anymore.

Metrocards and Math

I work from home, mostly, so I don’t usually need an unlimited metrocard. Every time the MTA changes the prices on everything, I have to go through and write another stupid spreadsheet to figure out what costs what. And I hate the fact that when you buy a $10 or $20 or $8 metrocard, you get a number of rides and some stupid amount of money left over. I was actually juggling 5 different metrocards a few weeks back, each with slightly different amounts on them. Just stupid.

So I finally gave in and made a Web site about Metrocard Math. It has a thing where you can experiment with what-if scenarios about fare hikes and stuff (it’s kinda like a javascript spreadsheet). The interesting thing I found was this: $9.78. Buy a metrocard for that much and you will have exactly 5 rides, with no money left over. Of course, if you’re buying your metrocard via Credit Card, they won’t let you use an amount less than $10, so you have to use the next magic number: $11.74.
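
(The arithmetic, for the curious – assuming the 15% pay-per-ride bonus and rounding to the nearest cent, which is my guess at how the machines do it: $9.78 * 1.15 = $11.247, which rounds to $11.25 = 5 * $2.25. And $11.74 * 1.15 = $13.501, which rounds to $13.50, exactly 6 rides.)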

I was thinking I might put in something about the proposed ‘cap’ that the MTA is talking about doing for their unlimiteds; I just don’t know what to do with it. I guess I could do a “maximal theoretical value”? Or you can just look at the number and compare it to ‘rides needed to beat pay-per-ride’…

As an aside, the site looks like absolute shit. I still am the worst web designer in the known universe. But I don’t mind much; the only thing I do mind is that it is hard to read on an iPhone. And that’s usually when I want the damned site – when I’m trying to get on the N train, my metrocard has run out, and I forgot the Magic Number. Anyways, I experimented with keeping the presentation, content, and behavior all separate (and yet all inline on the page). If I ever get to styling it, it’ll be interesting to see how I can do that. For instance, especially on the iPhone, the disabled fields don’t look very different from the enabled ones. I don’t remember whether CSS3 lets you specify a style for a disabled field – but I would have to imagine that you can, right? Well, when I next feel like poking at it, maybe if I add in ‘swipe-cap’ support like the MTA is proposing, I might try and throw some iPhone-specific styling on there to make it useful for me (the only time I actually use it, in fact).

Blogspot and Tumblr

Well, for those of you sick of hearing the trivial minutiae about how nifty LightDesktop is, never fear! Your prayers have been answered. I made a Tumblr Blog thingee just for LightDesktop stuff, so I can yammer on endlessly about file system optimizations and other such crap.

So now when I talk about LD here – it will hopefully be coming from a more personal perspective. In that vein, a few things to mention – one is that LightDesktop got mentioned on DistroWatch. It was just a little teensy one-sentence blurb, but I wasn’t quite ready for this. Whoops! I did send an email to the distrowatch people saying, “Hey guys, probably a bit early to mention me anywhere on your site or anything, but just wanted to let you know I’m around…” and I expected they might ask me a question, send some generic message that was like, “Hey, sounds good, good luck, let us know when you’re ready” or anything like that.

And I was troubleshooting something the next day or two and tailing the server logs…strangely enough I kept finding new people hitting the informational web site. I looked into the referer tags, and lo and behold, they’re clicking over from the DW article. Awesome!

So I went from getting one hit a day, up to 60, up to 800 the next day. So I’ve had to go run around and make sure my Google Analytics tags and such are working, and I realized the worst thing – actual downloads weren’t being tracked at all. So I had to build a little downloader script so I could track that, too. Hopefully, I got it. We’ll see.

And there have been a couple of little tiny things I wanted to mention here or there about LD, but I felt like I might be spamming to put them here. So, the Tumblr thing. First off, I have to say – man, coming back here to Blogger feels like going back in time 10 years. Tumblr has their shit together. It has nice, big pretty fields, beautiful stuff everywhere, insanely easy. It feels a little sluggish here and there, and feels all railsey all over the place – even though it may or may not be built on that. So I pop back in here to my old Blogger thing to check out what’s up – and wow. It feels old.

So within half an hour of setting up on Tumblr, I found a theme that just makes me happy every time I look at it. Gotta have it. Knocks it out of the park (well, for me). Gotta get comments going, so I’m signing up for a Disqus account and trying to hook that in. Generally it’s working pretty well. One thing I didn’t like was that when you look at a list of posts, it didn’t show anything about comments – and I wanted a comment count to be listed there – I’m hoping to have people comment all the time. So now I have to customize my theme. And I’ve gotta say, not all that hard. A little poking around, a little documentation, and I’m done.

I can definitely say that if I were starting up a new Blog or whatever, I would, 100%, do it on Tumblr. This Blogger thing has been pretty good to me, but it’s definitely got its problems. And they’ve been the same problems for years and years and years. If I could find a nice way of exporting/importing articles…who knows, I might do it?

Food. I have made a really concerted effort to make sure to eat my full three meals a day today – I’ve been busy lately, so I’ve been skipping quite a few meals. And I’m embarrassed at the improvement to my mood and my energy levels from this relatively simple change. I’ve been plowing through feeling hungry, and smashing over actually feeling down and slightly depressed from not having eaten enough. Man, if I just ate normally, imagine what I could accomplish? I’m going to keep making a real effort.

Lightdesktop now self-hosting (ish!)

So my nifty LightDesktop project has (almost, kinda, sorta) hit a new milestone – I’m writing you this blog post from it right now!

I have officially transferred all the development files into the filesystem (in Rackspace Cloud Files), and should be able to develop it…from it. I will be getting a ‘dev’ version vs. a ‘prod’ version distinction going so I don’t destroy the filesystem for everyone when I botch something (usually the CREST-fs filesystem) and post it up. Considering my development environment is it, itself – that makes sense.

So no more CentOS box (or VM, actually) for a while. And, man, does dogfooding pop all kinds of bugs that I want fixed ASAP. Window management is pretty horrible.

I am REALLY impressed with the browser. It has been able to handle nasty Javascript-heavy sites with relative ease. AOL – not usually a company I associate with doing things right – has some kind of insane Web 2.0 AIM client hooked into their webmail that works surprisingly well. I’m shocked they made it so well, and even more shocked that it runs in my slightly janky browser. But that’s all due to the WebKit people, and, indirectly at least, Apple.

One thing that I’ve really enjoyed is how lightning-quick everything is. When you make something as super-minimalistic as this thing is, there’s not a lot of stuff going on to slow things down. I have done enough testing (though not quite ‘living’) in the new system that when I get back to using my Mac normally, it feels sluggish. And that thing has 4 gigs of RAM and a Core 2 Duo and whatnot! This thing has – crap, I don’t even know (poking through /proc…)…a 2.2GHz Celeron, single core. 2 gigs of RAM though. And I bought it at Best Buy for $300 or $400! They of course didn’t want to sell it to me – I had to go to a second Best Buy to find one where they would. Must’ve been set up as a bait-n-switch or something. Or maybe they were legitimately out of stock, who knows.

Oh, another fun anecdote – I have Windows (Vista, ugh) installed on here too. And at one point I inadvertently let it reboot into Windows. I figured, well, let me grab all my software updates and stuff….nope! Didn’t work. The wireless had mysteriously stopped working for no discernible reason. I wondered if the hardware was broken. Rebooted into Lightdesktop, and the wireless came right up. Love it!