More on (Moron?) Ted Dziuba

(Math and pedantry ahead. Feel free to skip if that’s not your thing. More stuff about Node.js and this guy Ted Dziuba, who I hate.) To start, read Mr. Dziuba’s latest blog rant about Node.js.

I will make two main points here. First, that Mr. Dziuba does not intend to be understood; he is deliberately phrasing his points so as to confuse. Second, I will prove that his arguments are invalid. It appears to me that he has no interest in arriving at any kind of truth or shedding light on anything; he is merely trying to draw attention to himself, or drive blog traffic, or just make noise. I don’t know what his reasons are. I just know what he’s doing, and what he’s doing is deliberately misleading people. I would not feed the troll, as it were, except that his blog post is permanently on the internet, and I’m going to be pointed at it someday if I ever propose an event-loop-based solution for anything.

First off, let’s try to decipher what he’s saying; he does not do a very good job of being clear.

Let’s look at Theorem 1.

What does it actually say? Here’s an attempt at deciphering it.

He asks us to look at how things work when you have something that’s heavily CPU-biased (i.e., relatively little I/O).

Note: the value ‘k’ is the ratio I/C – in other words, for something really I/O-intensive (I > C), k will be ‘big’ (greater than one), and for I < C, k will be small (less than one). You can think of ‘k’ as the “I/O-ish-ness” factor: it’s big for something that’s very I/O-ish, and little if it’s not. Why doesn’t he explicitly state the definition k = I/C? Because he has no desire to be understood; he’s hand-waving. Everywhere he uses this ‘k’ construct he could just as easily use I and C.

The important definitions are: k = I/C (equivalently, I = kC), and W = I + C = kC + C = (k+1)C.

In other words, (k+1)C is just W, the wall-clock time of the I/O plus the CPU work.
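Spelled out in standard notation (nothing new here, just his definitions made explicit; the times are in milliseconds, per his choice of units):

```latex
% C = CPU time per request (ms), I = I/O time per request (ms),
% W = wall-clock time per request (ms)
\[
  k \equiv \frac{I}{C} \;\Longrightarrow\; I = kC,
  \qquad
  W \;=\; I + C \;=\; kC + C \;=\; (k+1)\,C
\]
```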

His theorem begins with the supposition:

1000/C > 1000N / ((k+1)C)

What does that mean?

Let’s rewrite his equation to make more sense of it. Since (k+1)C = I + C by definition, he’s really just saying:

1000/C > 1000N / (C+I)
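Written out as a single algebra step (again, nothing new, just the substitution made explicit; the 1000s are there only because he works in milliseconds):

```latex
% His supposition, with (k+1)C replaced by its definition I + C:
\[
  \frac{1000}{C} \;>\; \frac{1000\,N}{(k+1)\,C}
  \quad\Longleftrightarrow\quad
  \frac{1000}{C} \;>\; \frac{1000\,N}{C + I}
\]
```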

He’s supposing that *IF* the rate at which I can execute just the CPU part of my event-loop code is greater than the rate at which N threads can do the same work with the I/O time taken into account, *THEN* the number of threads I am using must be one. Why would you make the argument like that? The same argument can be made much more simply by saying:

1000/(C+I) > 1000N/(C+I), which can only hold if N is one. But stated that way, you can see exactly what he’s doing, and he doesn’t want that; hence the pointless variable substitution.

Notice that ‘N’ factor in there? He is saying that a system with 2 threads runs twice as fast as a system with one thread, and, apparently, that a system with 100 threads runs 100 times as fast as a system with one thread. I’ve worked with a lot of software in my personal and professional life, and that supposition is not true. Under this assumption, of course threads will always outperform event-loop software.
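To make the implicit assumption explicit, his threaded throughput (requests per second) scales perfectly linearly with the thread count:

```latex
% T(N) is just shorthand for "requests per second with N threads"
% under his implicit linear-scaling assumption.
\[
  T(N) \;=\; \frac{1000\,N}{C + I}
\]
```

That quantity grows without bound as N grows; there is no term anywhere for context switching, scheduling, memory, or contention.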

He attempts the same song-and-dance in Theorem 2: he is still assuming that N threads give you N times single-threaded performance.

It’s clearest in his “Practical Example,” where he makes the N-threads-means-N-times-the-performance assumption most explicitly. If that were true, why stop there? Why not 1000 threads? Why not a million?

Another point here. Why threads? Why not fully fork()’ed processes? His math (such as it is) holds up just the same whether you assume forked processes or threads; nothing in it actually requires threads instead of forks.

Effectively, he has proven that in a system with an infinite number of infinitely fast CPUs, infinite RAM, zero context-switch time, and zero thread-accounting overhead, threads are faster than events. Congratulations.

So those are my arguments for why he is incorrect. Now I want to ask some questions about how he seems to be deliberately misleading.

First off – some stylistic questions. Why has he written his argument so obtusely? Why has he not shown his work? Why does he not explain what he’s supposing? He just throws symbols down, in beautiful little .PNG files, and runs off manipulating them with algebra. That seems like an “appeal to authority” via jargon. And why all the milliseconds everywhere? We’re in theoretical comp-sci land now; why pick those units? It would appear that he has done so specifically to throw 1000s into his equations everywhere, just to confuse things further.

Next – a more theoretical question. Why do things like the select() or poll() system calls exist? Or epoll or /dev/poll? Since they’re so “obviously” inferior to threading-based solutions, they shouldn’t exist at all, right? There should be no use for them. If I can always just use threaded I/O instead of event-looped, why use event-looped at all? It is, after all, very difficult to program.

And finally – why did Dziuba himself advocate for an event-based I/O solution – “eventlet” – in one of his own blog posts? He seems to have gotten quite the performance boost –

…but the one that really stands out in the group is Eventlet. Why is that? Two reasons:

1. You don’t need to get balls deep in theory to be productive with Eventlet.
2. You need to modify very little pre-existing code to adapt a program to be event-driven.

This all sounds great in theory, but I have actually made a large I/O bound program work using monkey patching and changing the driver. It is a piece of software that reads jobs from a queue and processes them, putting the result in memcached. For esoteric reasons I will not go into, the job processors could not thread the work, they had to fork. Using this setup, one production box with 8GB of RAM was consistently 7.5GB full. After a less than 5 line code change to the driver, that same production box uses only around 1GB of RAM consistently, and can handle 5 to 10x the throughput of the old system.

The answers to these questions I cannot be sure of. As much as I would like to imagine that Mr. Dziuba is simply terribly ignorant, it would seem to be far worse: that he just intends to say things that are untrue for the purpose of drawing attention to himself.

Node.js is not a cancer, you are just a moron

My tone is going to seem strangely even and un-ranty. This is because I am doing everything I can to keep myself from completely exploding when I read this bullshit that this moron is spewing. OK, that was a little ranty, but the rest will read evenly. Maybe.

So one of my programming friends posts an article at http://teddziuba.com/2011/10/node-js-is-cancer.html and says, “Ah, here’s what’s wrong with Node.js!”

The article is rather strongly written – “Node.js is Cancer”, “node.js nonsense”, “Node.js is a tumor on the programming community”, “completely braindead”, “Scalability disaster”, etc.

He then shows a recursive Fibonacci calculation and how badly it performs under Node.

The problem he has posed is, fundamentally, CPU-bound. I wrote a version of it in C, and it did perform faster than the Node version, but both took real time: my command-line Node.js version calculated the answer in 8 seconds, and the C version did it in 4. I was rather impressed that JavaScript (Node.js’s V8 engine) came as close as it did to C’s performance in pure CPU-bound execution.
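For reference, the kind of benchmark being discussed looks roughly like this. This is a sketch, not the exact code from either post; the choice of fib(40) follows the “fortieth member of the Fibonacci sequence” example mentioned below.

```javascript
// fib.js -- a sketch of a naive, purely CPU-bound recursive Fibonacci,
// run from the command line with `node fib.js`. Nothing here ever yields
// to the event loop, so Node gets no chance to do what it is good at.
function fib(n) {
  if (n < 2) return n;
  return fib(n - 1) + fib(n - 2);
}

var start = Date.now();
var result = fib(40); // fib(40) is an illustrative choice, not taken from the original post
console.log('fib(40) = ' + result + ' in ' + (Date.now() - start) + ' ms');
```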

The problem, and what the author perhaps misunderstands, is that this is not the situation in which Node is an ideal solution. I use Node.js in production for work, and I know of many other shops that do too. If the problems you are dealing with are CPU-bound, Node.js will not help you. Node.js works well when your problems are I/O-bound: reading something out of a database, running web servers, reading and writing files, writing to and reading from queues, calling other web services, aggregating several web services together, and so on. The reason this solution has become so popular of late is that these are the types of problems that are most common in web development today. Thus Node.js becomes a helpful arrow in one’s quiver for solving these kinds of problems.
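To make the contrast concrete, here is a minimal sketch of the I/O-bound shape being described: a server whose handler spends nearly all of its time waiting on an upstream service. The URL and the aggregation are placeholders for illustration; the point is only that the event loop stays free while the I/O is in flight.

```javascript
// proxy.js -- a sketch of an I/O-bound Node.js handler. While the upstream
// request is pending, the event loop is free to accept and service other
// requests, which is where a single Node process earns its keep.
var http = require('http');

http.createServer(function (req, res) {
  // Hypothetical upstream service; the hostname and path are placeholders.
  http.get('http://example.com/api/data', function (upstream) {
    var body = '';
    upstream.on('data', function (chunk) { body += chunk; });
    upstream.on('end', function () {
      res.writeHead(200, { 'Content-Type': 'application/json' });
      res.end(body);
    });
  }).on('error', function () {
    res.writeHead(502);
    res.end();
  });
}).listen(8000);
```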

Considering that the article’s author seems to have some level of experience, I wonder if his choice of skewed example was perhaps deliberate. He has other articles on his blog about other event-loop libraries. His comment at the bottom – “tl;dr – Node.js is an unpleasant software library and I will not use it” – is possibly the real source of his anger. And – an irrefutable point – if you don’t like something, you don’t want to use it, and he obviously doesn’t. That’s fine.

Node is a tool; one of many, and no panacea. If you’re dealing with ‘slow’ services that need to wait on various bits of I/O before they can return a result, it can be a very powerful and useful tool. If you’re computing the fortieth member of the Fibonacci sequence recursively, it won’t be.

The sad fact is that the author’s completely valid point – that Node.js isn’t a good tool for CPU-bound problems – is buried in his bile, because he never actually states it explicitly. Node.js has other drawbacks as well: it’s very easy to end up in callback spaghetti, it’s very minimal, and it’s very, very, very young. The database integration libraries have some pretty serious immaturity issues to work through, and I’ve had to code around a good deal of that.
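For anyone who hasn’t run into it, “callback spaghetti” looks roughly like this: every dependent I/O step nests one level deeper. The three “services” here are hypothetical and stubbed out so the snippet runs on its own; the shape of the nesting is the point.

```javascript
// A sketch of callback nesting. getUser/getOrders/getShipping are made-up
// async functions, stubbed with setImmediate, following the usual
// (err, result) callback convention.
function getUser(id, cb)          { setImmediate(cb, null, { id: id, name: 'user' }); }
function getOrders(userId, cb)    { setImmediate(cb, null, [{ id: 1 }]); }
function getShipping(orderId, cb) { setImmediate(cb, null, { status: 'shipped' }); }

function done(err, result) {
  if (err) return console.error(err);
  console.log(result);
}

// Each dependent step nests one level deeper; three levels here,
// and real handlers routinely go much deeper than this.
getUser(42, function (err, user) {
  if (err) return done(err);
  getOrders(user.id, function (err, orders) {
    if (err) return done(err);
    getShipping(orders[0].id, function (err, shipping) {
      if (err) return done(err);
      done(null, { user: user, orders: orders, shipping: shipping });
    });
  });
});
```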

It’s a tool that’s good at particular things, and I will continue to use it for those things. Those ‘things’ happen to be the bulk of what web development and web-services development actually are. So when I can write a two-hundred-line program that replaces entire arrays of servers and interconnected services with just one server, I am going to do that, and I won’t feel particularly braindead doing so.