More on (Moron?) Ted Dziuba

(Math and pedantry ahead. Feel free to skip if that’s not your thing. More stuff about Node.js and this guy Ted Dziuba, whom I hate.) To start, read Mr. Dziuba’s latest blog rant about Node.js.

I will make two main points here. First, that Mr. Dziuba does not intend to be comprehended; he is deliberately phrasing points so as to confuse. Second, I will prove that his arguments are invalid. It appears to me that he has no interest in deriving any kind of truth or shedding any light on anything; he is merely trying to draw attention to himself, or draw blog traffic, or just make noise. I don’t know what his reasons are. I just know what he’s doing, and what he’s doing is deliberately misleading. I would not feed the troll, as it were, except that his blog post is permanently on the internet, and someday I’m going to get pointed at it if I ever propose an event-loop based solution for anything.

First off, let’s try to decipher what he’s saying; he does not do a very good job of being clear.

Let’s look at Theorem 1.

What does it actually say? Here’s an attempt at deciphering it.

He asks: let’s check out how things work when you have something that’s heavily CPU-biased (little I/O).

Note: the value ‘k’ is the ratio I/C. In other words, for something really, really I/O-intensive (I > C), k is ‘big’ (greater than one), and for I < C, k is small (less than one). You can think of ‘k’ as the “I/O-ish-ness” factor: it’s big for something that’s very I/O-ish, and little if it’s not. Why doesn’t he explicitly state the definition k = I/C? Because he has no desire to be understood; he’s attempting hand-waving. Everywhere he uses this ‘k’ construct he could just as easily use I and C.

The important definitions are: k = I/C (so I = kC), and therefore W = I + C = kC + C = (k+1)C.

In other words, (k+1)C is equal to W, the wall-clock time of I plus C.
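To make the definitions concrete, here’s a quick sanity check with numbers I made up (10ms of CPU work and 90ms of I/O per request, purely illustrative):

```python
C = 10   # CPU time per request, in ms (made-up number)
I = 90   # I/O wait per request, in ms (made-up number)

k = I / C            # the "I/O-ish-ness" factor: 9.0, very I/O-heavy
W = (k + 1) * C      # wall-clock time per request: 100.0 ms

assert W == I + C    # (k+1)C really is just I + C
```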

His theorem begins with the supposition:

1000/C > 1000N/((k+1)C)

What does that mean?

Let’s change his equation to make more sense of it. Since (k+1)C=(I+C) by definition, he’s really just saying:

1000/C > 1000N/(C+I)

He’s supposing that *IF* the number of times per second I can execute just the CPU part of my event-loop code (that’s 1000/C, since C is in milliseconds) is greater than the number of requests per second N threads can handle with the I/O time taken into account (that’s 1000N/(C+I)), *THEN* it must be the case that the number of threads I am using is one. Why would you make the argument like that? The same argument can be made, much more simply, by saying:

1000/(C+I) > 1000N/(C+I), which holds only if N is one. But then you could see exactly what he’s doing, and he doesn’t want that; hence the pointless variable substitution.

Notice that ‘N’ factor in there? He is saying that a system with 2 threads runs twice as fast as a system with one thread, and that a system with 100 threads apparently runs 100 times as fast as a system with one. I’ve worked with a lot of software throughout my personal and professional life, and this supposition is not true. Under this assumption, of course threads will always outperform event-loop software.

He attempts the same song-and-dance in Theorem 2. He is still assuming that N threads deliver N times single-threaded performance.

It’s clearest in his “Practical Example.” There you can see him making the N-threads-means-N-times-performance argument most plainly. If that were true, why not 1000 threads? Why not a million?
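Here is his model written out as a quick sketch (the values for C and I are mine, purely illustrative). Because the formula 1000N/(C+I) charges nothing for context switching, scheduling, or memory, it happily predicts exactly that:

```python
C, I = 10, 90   # made-up per-request CPU and I/O times, in ms

def threaded_throughput(n_threads):
    """Requests per second under his model: 1000 * N / (C + I)."""
    return 1000 * n_threads / (C + I)

for n in (1, 2, 100, 1000, 1_000_000):
    print(n, threaded_throughput(n))
# 1 thread:          10 req/s
# 1,000,000 threads: 10,000,000 req/s -- linear scaling forever, because
# the model has no cost for context switches, scheduling, or RAM
```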

Another point here: why threads? Why not fully fork()’ed processes? His math (such as it is) holds up just the same if you substitute forked processes for threads; nothing in it requires threads rather than forks.

Effectively, he has proven that in a system with an infinite number of infinitely fast CPUs, infinite RAM, zero thread context-switch time, and zero thread-accounting time, threads are faster than events. Congratulations.

So there are my arguments as to why he is incorrect. Now I wish to ask questions about how he seems to be deliberately misleading.

First off, some stylistic questions. Why has he written his argument so obtusely? Why has he not shown his work? Why does he not explain what he’s supposing? He just throws symbols down, in beautiful little .PNG files, and runs off manipulating them with algebra. That seems like an “appeal to authority” via jargon. And why all the milliseconds everywhere? We’re in theoretical Comp Sci world now; why pick those units? It would appear that he has done so specifically to throw 1000s into his equations everywhere, just to confuse things further.

Next, a more theoretical question: why do things like the select() or poll() system calls exist? Or epoll, or /dev/poll? Since they’re so “obviously” inferior to threading-based solutions, they shouldn’t exist at all, right? There should be no use for them. If I could always just use threaded I/O instead of an event loop, why use event loops at all? They are, after all, very difficult to program.
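They exist because one thread can wait on many file descriptors at once. Here’s a minimal sketch of the kind of server select()/epoll make possible, using Python’s standard selectors module (which picks epoll, kqueue, or poll for you); the port number is arbitrary:

```python
import selectors
import socket

sel = selectors.DefaultSelector()

def accept(server_sock):
    # A new client is connecting; register it for read events.
    conn, _addr = server_sock.accept()
    conn.setblocking(False)
    sel.register(conn, selectors.EVENT_READ, echo)

def echo(conn):
    # A client socket is readable: echo back what we got, or clean up on EOF.
    data = conn.recv(1024)
    if data:
        conn.sendall(data)
    else:
        sel.unregister(conn)
        conn.close()

server = socket.socket()
server.bind(("localhost", 12345))  # arbitrary port for the sketch
server.listen()
server.setblocking(False)
sel.register(server, selectors.EVENT_READ, accept)

# One thread, many connections: the loop sleeps in select()/epoll until
# some socket is actually ready, then dispatches the stored callback.
while True:
    for key, _events in sel.select():
        key.data(key.fileobj)
```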

And finally: why did Dziuba himself advocate an event-based I/O solution, eventlet, in one of his own blog posts? He seems to have gotten quite the performance boost:

…but the one that really stands out in the group is Eventlet. Why is that? Two reasons:

1. You don’t need to get balls deep in theory to be productive with Eventlet.
2. You need to modify very little pre-existing code to adapt a program to be event-driven.

This all sounds great in theory, but I have actually made a large I/O bound program work using monkey patching and changing the driver. It is a piece of software that reads jobs from a queue and processes them, putting the result in memcached. For esoteric reasons I will not go into, the job processors could not thread the work, they had to fork. Using this setup, one production box with 8GB of RAM was consistently 7.5GB full. After a less than 5 line code change to the driver, that same production box uses only around 1GB of RAM consistently, and can handle 5 to 10x the throughput of the old system.
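For the record, the kind of change he’s describing really is tiny. This is my own illustrative sketch, not his code (the hostnames and the fetch job are made up); eventlet’s monkey_patch() swaps the blocking stdlib I/O for cooperative green versions:

```python
import eventlet
eventlet.monkey_patch()  # the whole "less than 5 line" trick: stdlib
                         # sockets now yield to the event loop instead
                         # of blocking an OS thread each

import socket  # after the patch, this module is green under the hood

def fetch_banner(host):
    # Made-up stand-in for one of his queue jobs: do some network I/O.
    conn = socket.create_connection((host, 80))
    conn.sendall(b"HEAD / HTTP/1.0\r\nHost: " + host.encode() + b"\r\n\r\n")
    first_line = conn.recv(256).split(b"\r\n")[0]
    conn.close()
    return host, first_line

pool = eventlet.GreenPool(100)  # 100 concurrent green threads, one OS thread
for host, status in pool.imap(fetch_banner, ["example.com", "example.org"]):
    print(host, status)
```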

The answers to these questions I cannot be sure of. As much as I would like to imagine that Mr. Dziuba is simply terribly ignorant, it would seem far worse: he just intends to say things that are untrue for the purpose of drawing attention to himself.

5 thoughts on “More on (Moron?) Ted Dziuba”

  1. A fair point; I could probably have cleaned up that argument quite a bit. I was trying to imply that the existence of the select() loop, evolving toward poll, epoll, kqueue, and /dev/poll, meant that there must be something there.

    But, true enough, that is still an appeal to authority. 20 or 30 years of Unix authority, surely, but still an appeal to authority.

  2. Yet another anonymous wimp with not enough balls (or ovaries) to say who he/she is, and not enough brains to make an actual point.
