Wednesday, August 02, 2017

How important is decidability in programming tools?

Hoisted from the drafts folder because the discovery that TypeScript's type system is Turing-complete is making the rounds.

It's a common assumption in the programming language community that practical type systems should be decidable. Or, at least, it used to be: in the early 2000s, I had a grad school colleague who spent a lot of time trying to invent restrictions that made the type system of Cecil decidable, and people seemed to feel that his formalization of Cecil would be unreasonable if he could not solve this problem.

Over a decade later, it's well-known that the type system of Java with generics and wildcards is undecidable, and that it shipped that way for years before people even realized it. Furthermore, devising restrictions to the type system that achieve decidability seems to be an open research problem.

Did hundreds of thousands of Java programmers experience an existential crisis when Java with generics was found to be undecidable? Are they waiting with bated breath for researchers to solve this urgent problem? I'm pretty sure 99.99% of them don't know that the problem exists and won't notice if it's ever fixed.

Meanwhile, the PL research community has mostly gotten bored of object-oriented languages and type systems thereof, and moved on to other things. And the functional programming community (or some fraction thereof) has embraced Scala, whose type system is also undecidable.

Conversely, consider ML, whose type system is decidable. It is well-known that you can write by hand, on a single sheet of paper, a program that will take longer than the lifetime of the universe to typecheck, because Hindley-Milner inference is superexponential in the worst case. (The classic construction is a chain of let bindings, each of which pairs the previous binding with itself, so the size of the inferred type doubles with every line.)

What is the practical difference between a typechecker that can hang until the universe is a cold, featureless void of uniformly distributed entropy and a typechecker that can hang forever? In both cases, you would hit Ctrl-C after a decent interval and rewrite your program. Would it make much difference if ML's type system were undecidable? Probably not.

I remember standing up at an ECOOP talk one year and asking a question along those lines. The researchers had made a big deal about the importance of decidability for the problem they were solving, and I asked why it was so important. The presenters seemed to think the question was ridiculous and I'm pretty sure everyone else in the room thought so too. I probably didn't word my question well — I wasn't a good researcher and I'm not proud of my time in academia — but I still think I was basically right that one time.

The important thing is that tools have tractable runtimes for the programs that people want to write. Unfortunately, "tractable runtime for reasonable programs" is much harder to demonstrate using the formal and empirical tools that exist today than "decidable for all syntactically valid programs". And the gap in available intellectual tools has led a research community to handcuff itself needlessly in hunting for useful results.

Sunday, May 07, 2017

On the efficacy of online flamewars

Excavated from the drafts folder for no particular reason.

Isn't it great how, since the Brendan Eich affair, all his online defenders have become active labor rights organizers, fighting for workplace political freedom for all? Galvanized by the realization that not only CEOs but all workers deserve robust protections for their political beliefs, Eich's erstwhile defenders channeled all their passion into effective political action. Which is why Congress will vote this week on a bill with three key provisions: first, it outlaws any form of workplace discrimination based on political speech or activity conducted outside the office; second, it handsomely funds an investigative division of the FBI tasked specifically with working with the NLRB to track down and prosecute violators; third, by analogy with the Foreign Corrupt Practices Act and the 2003 PROTECT Act (which grants American prosecutors broad latitude to charge Americans who molest children while abroad), it makes it illegal for companies operating on American soil to subcontract work to overseas employers which restrict their workers' political rights.

There was never any danger that Eich's defenders would just basically forget about the whole affair and get on with their lives. Ha ha! Yeah that definitely couldn't have happened, given how deeply committed these people were to the principle of workplace political freedom. It's not like they only care about workers' rights when it's an incredibly wealthy white male celebrity who is being criticized.

Likewise, when the dead bloated corpse of patriarchy is laid to rest this fall, everyone will have to recognize that the great Twitter Flamewar of March 2014 was really the spike through its heart. No, not that one, I mean the other one, the one where we all wrote angry all-caps tweets at that one dude, he was totally mansplaining and stuff, you remember the one I'm thinking of.

Wednesday, June 22, 2016

We must reject watchlists if we are a nation of laws

I favor strict national gun regulation. There are bad ways to do it, and good ways, but almost any consistent regime would be an improvement over the status quo. If you had asked me the day before yesterday, I would have said that I'd support nearly any gun control measure that was brought before the U.S. Congress.

Well, congratulations, Democrats! You've discovered a way to do it, maybe the only way, that I find stupid and unconscionable: extending the power of "terrorist watch lists" into the realm of gun control. That you should stage a theatrical stunt on the House floor over this measure, of all measures, after you've spent my entire adult life caving cravenly to one right-wing authoritarian demand after another, from illegal war to the normalization of torture, beggars the imagination.

Right off the bat, let's be clear that this measure will have almost no effect on gun violence. But unlike most gun control measures, this one is not merely feeble; it is actively malign, because it further empowers an evil institution.

The so-called "terrorist watch list" is a fundamentally broken idea that is both impossible to implement well and a moral catastrophe for the rule of law and for human equality and liberty.

There are over a million identities on the watchlist, mashed together from many sources of unknown provenance and overseen by nobody with any accountability to disinterested review. Anybody who has worked with database integrations and human organizations of any size knows with total certainty that this watchlist is full of nonsense. Ted Kennedy was on the watchlist. Bollywood movie star Shah Rukh Khan was on the watchlist. The "terrorist watchlist" is a pile of garbage wrapped in a tire fire.

It can't help but be so: the number of people worldwide who engage in non-state terrorism against civilians is minuscule. It is a numerical certainty that nearly all of the people who are on a million-member watchlist are entirely innocent and have no potential to commit a terrorist act.

And how did the data come to be such a fetid swamp of nonsense? Nobody will tell you. The contents of the watchlist are secret; the processes for getting an identity onto the watchlist are secret; the criteria for getting your identity off the watchlist are secret. Good luck if you end up on it, and you're somebody with less clout than a sitting United States Senator or an international celebrity.

Maybe it's time for a brief refresher on how a nation of laws is supposed to work. Laws must be known to the people whom they govern. There must be an agreed-upon process — a due process, you might say — by which people are deprived of their rights under those laws. Once you have been convicted of a crime, the state's carceral machinery may act upon you, but until then, merely suspected persons retain their rights. And you don't lose your rights in secret, whereupon you may sue to get them back; the burden of proof is on the state to exercise due process before taking your rights away.

Making legal processes open to public inspection is the most powerful way that we know to ensure that their operation is just; that, for example, we are not merely turning people into second-class citizens for being Muslim or black or whatever the current least-favored category of citizens is. One would think that this argument, at least — that, as Black Lives Matter and related movements have made undeniably clear in recent years, the state operates in flagrantly discriminatory ways when it operates without scrutiny — would carry some currency, even in a left-of-center discourse that values civil liberties less and less as the Bush years recede into the distance.

Yet through the depressing machinery of tribalism, it has suddenly become conventional wisdom in progressive circles that all right-thinking people shall support a bill that further entrenches the influence of the racist, unaccountable, unconstitutional no-fly list. Otherwise-intelligent people in my Twitter feed have even argued that passing this legislation is the first step towards "fixing" the watchlist.

Astonishing. Astonishing. I have no way of even processing the level of wishful thinking and partisan groupthink necessary for a reasonable person to make this argument in good faith. Can you think of a single example in history where giving a Kafkaesque bureaucratic apparatus additional unaccountable power led to its reform, let alone any such apparatus connected with the security state? I doubt it; the ratchet turns in the other direction.

I have an alternate prediction. Should this legislation be passed by Democrats, we can look forward to the positions around the watchlist becoming crystallized along partisan lines. Since empowering the watchlist will count as a signature political achievement for Democrats, attained through a highly memorable media stunt, henceforth Democrats will fight any efforts that even remotely smell like dismantling the watchlist; this will, of course, include any efforts at meaningful reform. As reverence for the immaculate watchlist turns into a shibboleth of partisan identification, enterprising Democrats will eventually start to propose even more unconstitutional measures to extend its influence into other areas of public life. The extensions will come with, at best, token reforms; perhaps these reforms will moderate its effects on upper-class and upper-middle-class people with surplus time and financial resources, but leave the overall system fundamentally unaccountable and outside any recognizable form of due process for nearly everybody on it. Millions more human beings will suddenly become second-class in the eyes of the state; their circle of rights will gradually shrink. The normalization of secret law in the United States will accelerate. Neither Democrats nor Republicans will have the stomach to turn this ratchet backwards.

Furthermore, having discovered again that "terrorism" is the magic word which can rally even the most spineless legislators into action and cow even the most intransigent opponents, Democrats will use this handy rhetorical cudgel to beat anybody who disagrees with them, just as Republicans did during the Bush years, until it is nearly meaningless with overuse. This will be used to pass all manner of additional legislation, equally stupid. Meanwhile, progressives like myself will be excoriated for allegedly being on the side of terrorists and right-wing loons, simply for opposing stupid and malign laws.

Friday, November 14, 2014

The goTo object pattern in Node.js code

I've been messing around with Node.js lately. Like everyone using Node.js, I've been wrestling with the fact that it forces programmers to use hand-rolled continuation-passing style for all I/O. Of course, I could use something like async.waterfall() to eliminate boilerplate and deeply indented nested callbacks. However, since I am crazy, I am using Closure Compiler to statically typecheck my server code, and I don't like the way async.waterfall() defeats the typechecker.

You can stop reading here if you're perfectly happy with using async for everything, or perhaps if you're one of those people convinced that static typing is a boondoggle.

Anyway, I've been using a "goTo object" for my callbacks instead. The essence of the pattern is as follows:

  • Define a local variable named goTo whose members are your callbacks.
  • Asynchronous calls use goTo.functionName as the callback expression (usually the final argument to the asynchronous function).
  • At the end of the function, outside the goTo object, start the callback chain by calling goTo.start().

Here is an example, loosely patterned after some database code I recently wrote against the pg npm module. In raw nested-callback style, you would write the following:

var upsert = function(client, ..., cb) {
  client.connect(function(err, conn) {
    if (err) {
      cb(err);
      return;
    }
    
    conn.insert(..., function(err, result) {
      if (err) {
        if (isKeyConflict(err)) {
          conn.update(..., function(err, result) {
            conn.done();
            if (err) {
              cb(err);
              return;
            }
            cb(null, 'updated');
          });
          return;
        }
          
        conn.done();
        cb(err);
        return;
      }

      conn.done();
      cb(null, 'inserted');
    });
  });
};

With a goTo object, you write this:

var upsert = function(client, ..., cb) {
  var conn = null;

  var goTo = {
    start: function() {
      client.connect(..., goTo.onConnect);
    },

    onConnect: function(err, conn_) {
      if (err) {
        goTo.finish(err);
        return;
      }
      conn = conn_;  // Stash for later callbacks.
      conn.insert(..., goTo.onInsert);
    },

    onInsert: function(err, result) {
      if (err) {
        if (isKeyConflict(err)) {
          conn.update(..., goTo.onUpdate);
          return;
        }
        goTo.finish(err);
        return;
      }
      goTo.finish(null, 'inserted');
    },

    onUpdate: function(err, result) {
      if (err) {
        goTo.finish(err);
        return;
      }
      goTo.finish(null, 'updated');
    },

    finish: function(err, result) {
      if (conn) {
        conn.done();
      }
      cb(err, result);
    }
  };
  goTo.start();
};

This pattern is easy to annotate with accurate static types:

var upsert = function(...) {
  /** @type {?db.Connection} */
  var conn = null;

  var goTo = {
    ...

    /**
     * @param {db.Error} err
     * @param {db.Connection} conn_
     */
    onConnect: function(err, conn_) { ... },

    /**
     * @param {db.Error} err
     * @param {db.ResultSet} result 
     */
    onInsert: function(err, result) { ... },

    /**
     * @param {db.Error} err
     * @param {db.ResultSet} result
     */
    onUpdate: function(err, result) { ... },

    /**
     * @param {?db.Error} err
     * @param {string=} result
     */
    finish: function(err, result) { ... }
  };
  goTo.start();
};

Some notes on this pattern:

  • It is slightly more verbose than the naive nested-callback version. However, the indentation level does not grow linearly in the length of the call chain, so it scales better with the complexity of the operation. If you add a couple more levels to the nested-callback version, you have "Tea-Party Code", whereas the goTo object version stays the same nesting depth.
  • As in the nested-callback style, error handlers must be written by hand in each callback, which is still verbose and repetitive: the phrase if (err) { goTo.finish(err); return; } occurs repeatedly (one way to factor this out is sketched just after this list). On the other hand, you retain the ability to handle errors differently in one callback, as we do here with key conflicts on insertion.
  • Callbacks in the goTo object can have different types, and they will still be precisely typechecked.
  • The pattern generalizes easily to branches and loops (not surprising: it's just an encoding of old-school goto); see the second sketch after this list.
  • Data that is initialized during one callback and then used in later callbacks must be declared at top-level as a nullable variable. The top-level variable list can therefore get cluttered. More annoyingly, if your code base heavily uses non-nullable type annotations (Closure's !T), you will have to insert casts or checks when you use the variable, even if you can reason from the control flow that it will be non-null by the time you use it.
  • Sometimes I omit goTo.start(), and just write its contents at top-level, using the goTo object for callbacks only. This makes the code slightly more compact, but has the downside that the code no longer reads top-down.
  • There is no hidden control flow. The whole thing is just a coding idiom, not a library, and you don't have to reason about any complex combinator application going on behind the scenes. Therefore, for example, exceptions propagate exactly as you'd expect just from reading the code.
  • The camelCase identifier goTo is used because goto was a reserved word in JavaScript for years (reserved for future use in older editions of the standard; it has never had any semantics).
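
As promised above, the repeated error check can be factored into a tiny combinator. This is only a sketch: orFail is a hypothetical name of my own, and it assumes the standard Node.js (err, result) callback convention used throughout this post.

var orFail = function(finish, onSuccess) {
  // Adapts a success-only handler to the standard (err, result)
  // callback signature, routing any error to the shared finish().
  return function(err, result) {
    if (err) {
      finish(err);
      return;
    }
    onSuccess(result);
  };
};

You would then write, say, conn.update(..., orFail(goTo.finish, goTo.onUpdateOk)), where onUpdateOk receives only the result. Note that this surrenders the per-callback flexibility mentioned above, so it is no help for callbacks like onInsert that need to inspect the error themselves.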
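
And here is the promised sketch of a loop: a bounded retry, written as a backward "jump" to an earlier member of the goTo object. The names connectWithRetry and maxAttempts are invented for the occasion, and client.connect is assumed to take a plain callback, as in the example above.

var connectWithRetry = function(client, maxAttempts, cb) {
  var attempts = 0;

  var goTo = {
    start: function() {
      attempts++;
      client.connect(goTo.onConnect);
    },

    onConnect: function(err, conn) {
      if (err && attempts < maxAttempts) {
        goTo.start();  // The backward branch of the loop.
        return;
      }
      cb(err, conn);
    }
  };
  goTo.start();
};

Branches work the same way: each callback simply chooses which goTo member to call next, and the indentation stays flat however the control flow twists.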

For comparison, here is the example rewritten with async.waterfall():

var async = require('async');

var upsert = function(client, ..., finalCb) {
  var conn = null;
  async.waterfall([
    function(cb) {
      client.connect(..., cb);
    },

    function(conn_, cb) {
      conn = conn_;
      conn.insert(..., cb);
    }

  ], function(err, result) {
    if (err && isKeyConflict(err)) {
      conn.update(..., function(err, result) {
        conn.done();
        if (err) {
          finalCb(err);
          return;
        }
        finalCb(null, 'updated');
      });
      return;
    }

    if (conn) {
      conn.done();
    }
    if (err) {
      finalCb(err);
      return;
    }
    finalCb(null, 'inserted');
  });
};

In some ways, this is better and terser than the goTo object. Most importantly, error handling and operation completion are isolated in one location. Also, blocks in the waterfall are anonymous, so you're not cluttering up your code with extra identifiers.

On the other hand, this has some downsides, which are mostly the flip sides of some properties of goTo objects:

  • Closure, like most generic type systems, only supports arrays of homogeneous element type (AFAIK TypeScript shared this limitation; update: fixed in TypeScript 1.3, see comments). Therefore, the callbacks in async.waterfall()'s first argument must be typed with their least upper bound, function(...[*]), thus losing any useful static typing for the callbacks and their arguments.
  • Any custom error handling for a particular callback must be performed in the shared "done" callback. Note that for the above example to work, the error object must carry enough information so that isKeyConflict() (whose implementation is not shown) can return true for insertion conflicts only. Otherwise, we have introduced a defect.
  • Only a linear chain of calls is supported. Branches and loops must be hand-rolled, or you have to use additional nested combinators. This doesn't matter for this example, but branches and loops aren't uncommon in interesting application code.

Now, goTo objects are not strictly superior to the alternatives in all situations. The pattern still has some overhead and boilerplate. For one or two levels of callbacks, you should probably just write in the naive nested callback style. If you have a linear callback chain of homogeneous type, or if you just don't care about statically typing the code, async.waterfall() has some advantages.

Plus, popping up a level, if you are writing lots of complex logic in your server, I'm not sure Node.js is even the right technology base. Languages where you don't have to write in continuation-passing style in the first place may be more pleasurable, terse, and straightforward. I mean, look: I've been reduced to programming with goto, the original harmful technology. By writing up this post, I'm trying to make the best of a bad situation, not leaping out of my bathtub crying eureka.

Anyway, caveats aside, I just thought I'd share this pattern in case anyone finds it useful. Yesterday I was chatting about Node.js with a friend and when I mentioned how I was handling callback hell, he seemed mildly surprised. I thought everybody was using some variant of this already, at least wherever they weren't using async. Apparently not.


p.s. The above pattern is, of course, not confined to Node.js. It could be used in any codebase written in CPS, in a language that has letrec or an equivalent construct. It's hard to think of another context where people intentionally write CPS by hand though.

Monday, November 10, 2014

A lottery is a tax on... people who are good at reasoning about risk-adjusted returns?

Rescued from the drafts folder because John Oliver has rendered it timely.

People who consider themselves smart sometimes joke that a lottery is a tax on people who are bad at math.

The root of this reasoning is that the expected return on investment for a lottery ticket is negative. It can't be otherwise: the lottery turns a profit, hence the payout multiplied by the probability of winning must be less than the price of the ticket. Therefore, the reasoning goes, the people who buy lottery tickets must be incapable of figuring this out. Ha ha, let us laugh at the rubes and congratulate ourselves on our superiority.
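
To spell the arithmetic out (with invented numbers; the odds below are merely Powerball-ish, for illustration):

var price = 2;               // hypothetical ticket price, in dollars
var jackpot = 100e6;         // hypothetical jackpot
var pWin = 1 / 292e6;        // illustrative long odds
var expectedReturn = pWin * jackpot - price;
console.log(expectedReturn); // about -1.66: negative, as it must be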

One rejoinder is that the true value of a lottery ticket, to the buyer, is the entertainment value of the fantasy of winning. One obtains the fantasy with probability 1, so as long as the entertainment value of this fantasy exceeds the price of the ticket, the buyer comes out ahead.

There is something to this. But I think a stronger claim can be made.

A lottery is the mirror image of catastrophe insurance. Note that buying insurance also has a negative expected return. Provided their actuarial tables are accurate, insurance companies turn a profit. Therefore the probability of being compensated for your loss, multiplied by the compensation, must be less than the cost of the insurance premiums. But nobody says that insurance is a tax on people who are bad at math. Quite the opposite: buying insurance is viewed as a sign of prudence.

The issue, of course, is that the naive expected return calculation fails to adequately consider the nature of risk and catastrophe. At certain thresholds, financial consequences as experienced by human beings become strongly nonlinear, probably due to the declining marginal utility of money. Suffering complete ruin due to, say, a car accident entails such severe consequences that we are willing to accept a modestly negative expected return in exchange for the assurance that it will simply never occur.
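
A toy calculation makes the point concrete. Every number here is invented for illustration, and log utility stands in for any concave utility function:

var u = Math.log;       // concave: each additional dollar matters less
var wealth = 50000;
var loss = 40000;       // the catastrophe
var pLoss = 0.01;
var premium = 500;      // premium > pLoss * loss, so the expected dollar return is negative

var euUninsured = (1 - pLoss) * u(wealth) + pLoss * u(wealth - loss);
var euInsured = u(wealth - premium);
console.log(euInsured > euUninsured);  // true: insuring wins anyway, in utility terms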

A lottery is simply the flip side of this coin. The extremely rare event is a hugely positive one, instead of a hugely negative one, huge enough to produce qualitative rather than merely quantitative changes to your lifestyle. And one accepts a modestly negative expected return, not so that one can avoid the risk, but so that one can be exposed to it.

If you are a member of the educated, affluent middle class, there is an excellent chance that your instinct rebels, and you're already hunting for the flaw in this reasoning. Surely there's something sad, and not prudent, about those largely working-class souls who buy a lottery ticket every week, rather than rolling that $52 per year into a safe index fund at a post-tax rate of return of roughly 2.5% per annum, at which rate their investment, if it somehow survived unperturbed the ups and downs of a life that is considerably more exposed to financial risk than a middle-class person's, might compound to the princely sum of a couple months' rent by the time they die, or alternatively enough to pay for a slightly nicer coffin.
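
If you want to check that compounding arithmetic, here it is, assuming (illustratively) fifty years of ticket-buying:

var annual = 52, rate = 0.025, years = 50;
var balance = 0;
for (var i = 0; i < years; i++) {
  balance = (balance + annual) * (1 + rate);  // deposit, then a year of growth
}
console.log(Math.round(balance));  // 5196: a couple months' rent, maybe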

And maybe there is a flaw in this reasoning. I'm not completely convinced myself and I'm not going to start buying lottery tickets. But I'm honestly having trouble finding the flaw. Any indictment of spending (small amounts of) money on lottery tickets must surely also apply to buying insurance. If it is worth overpaying a little bit to eliminate the possibility of a hugely negative outcome, surely it is worth overpaying a little bit to create the possibility of a hugely positive outcome. The situations are symmetric, and I think one can only break the symmetry by admitting the validity of loss-aversion (usually viewed as irrational by economists) or something similar.

Alternatively, if you have access to the actuarial tables of your insurance company, then perhaps you can argue that a lottery is, quantitatively, simply a worse deal than your insurance typically is... but you probably don't have access to those actuarial tables. And I sincerely doubt that the widespread middle-class snobbery towards lottery players is based on quantitative calculations of this sort. (Actually, I strongly suspect that it is based on a fallacious, gut-instinct "folk probability" feeling that gambling on any extremely remote event, like winning the lottery, is somehow inherently foolish.)

So what's the flaw? I ask this question non-rhetorically; that is, I am genuinely curious about the answer.


p.s. None of the above, of course, is to say that I think the taxation effect of lotteries, which is staggeringly regressive, is a good thing. I would strongly support the replacement of lotteries with progressive tax increases plus transfer payments! Gambling addiction is bad too. But these things seem distinct from the notion that playing the lottery in small amounts is irrational in some game-theoretic sense.