Thursday, November 17, 2011

The social graph is...

"...neither social nor a graph..."

...provided you redefine the words "social" and "graph" to mean something other than what they mean to everyone else.

M. Ceglowski is just being deliberately obtuse, or more precisely he is taking a wild excess of rhetorical license in order to make his statements seem more profound and unconventional. For example, he writes:

We nerds love graphs because they are easy to represent in a computer and there is a vast literature on how to do useful things with them. . . . In order to model something as a graph, you have to have a clear definition of what its nodes and edges represent.

Well, that's actually bullshit. In a dynamic Bayesian network, you don't have a complete a priori definition of what the nodes and edges represent. Or rather, you do, in that the nodes represent variables and the edges represent relationships between those variables, but the weights on the edges are learned statistically from data. An edge may represent a meaningful connection, or it may mean nothing at all. The graph precedes the semantics, not vice versa. Likewise with the social graph: people are connected, and you don't necessarily know what each connection means. But it's still a graph.
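
To make that concrete, here is a minimal sketch of my own (nobody's production schema, and the names are purely illustrative): the connections live in one structure, and whatever semantics the edges turn out to carry can be attached later, or never, without the thing being any less of a graph.

#include <iostream>
#include <map>
#include <string>
#include <unordered_map>
#include <utility>

int main() {
    // The structure: who is connected to whom. No semantics attached yet.
    std::unordered_multimap<std::string, std::string> edges;
    edges.insert({"alice", "bob"});
    edges.insert({"alice", "carol"});
    edges.insert({"bob", "carol"});

    // The semantics: learned (or surveyed, or guessed) afterwards, and possibly
    // never known for some edges. The graph exists either way.
    std::map<std::pair<std::string, std::string>, double> learned_weight;
    learned_weight[{"alice", "bob"}] = 0.9;  // placeholder "learned" value
    // No entry for ("bob", "carol"): we don't yet know what that edge means.

    std::cout << "alice has " << edges.count("alice") << " connections\n";
    return 0;
}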

The labels on the social graph's edges may be subtler and more multidimensional than the simple weights you put on Bayesian network edges. And we don't have a good handle on how to learn those labels, or even what the labels should be. However, calling for the abandonment of a useful mathematical construction in an emerging field of science because it's incomplete is something that you do when you want to convince people that you're smarter than the people working in that field. It's not something you do when you want people to become better-informed.

Ceglowski also writes that the social graph is "not social" because... well, actually, I have trouble even locating a coherent argument in that part of the essay. He seems to be confusing "social" with "sociable". The social graph is social, since it describes relationships between people. Perhaps some activity involved in digitally reifying the social graph is anti-social (Note that anti-social is not the opposite of social — anti-social behaviors are social behaviors!). But that doesn't make the social graph "not social". By that standard, sociology is not a social science because sociologists spend a lot of time by themselves in libraries.

Incidentally, social scientists have been modeling social connections as graphs for decades.

Here is a short list of the valid points Ceglowski makes:

  1. FOAF relationship labels are kind of dumb and embarrassing.
  2. Manually maintaining anything other than a very coarse-grained digital reification of a social graph is a tedious chore.
  3. Making your social network and behavior the property of a company whose revenue model is not aligned with your long-term interests is a bad idea.

And here is a short list of other, non-terminological points that Ceglowski just gets wrong:

  1. Social networks do "[g]ive people something cool to do and a way to talk to each other". It turns out that sharing photos, videos, and links is one of the most broadly appealing online activities, and social networking sites seem to do this better (along some dimensions) than dedicated photo-, video-, and link-sharing sites.
  2. Judging communities by the outward-facing cultural artifacts they produce is a radically inadequate measure of value. The vast majority of communication is point-to-point, not broadcast, and the vast majority of interpersonal interactions are social grooming. Social grooming is a deep-seated primate instinct which nerds devalue at their peril. Social networks have made online social grooming far easier than their predecessors did.
  3. People on WoW, Eve Online, and 4chan have healthier social lives than people on Facebook? Really?

Note that I write all the above as someone who dislikes Facebook and is skeptical of reductive approaches to modeling social relationships. And I've been advocating* an end to proprietary social networks for years — long before I started working at Google, and in fact before Facebook was even the predominant social network. So I'm broadly sympathetic to Ceglowski's aims. But I don't like at all the way that he goes about explaining them.


*Incidentally, rereading this old post, I realize that I completely missed the possibility that the dominant social network site would simply become a huge platform for third-party applications. I guess it never occurred to me that serious companies would bet their livelihoods on being sharecroppers in the walled garden. Go figure. I could speculate that this willingness can be traced directly to the Valley vogue for building companies to flip rather than to create sustainable, decades-long sources of enduring value — if you're just holding on until your "liquidity event" then it doesn't matter that your business is built on the fickle forbearance of your platform landlord — but I'm not sure how right that is.

Monday, October 31, 2011

Occupy .* and the Iraq War

Then, as now, conservative opinion and elite bipartisan opinion were mostly contemptuous of the protesters. Well-fed, well-educated, well-salaried pundits looked upon the shaggy protesters and remarked: how unsophisticated were their opinions, how disorganized their complaints! Fortunately, the nation was run by a select few who understood the harsh realities of a world where some suffering was necessary (for other people) so that the existing order could be maintained. And the march to war rolled on.

In all likelihood, the protests today will be as futile as those were. It's taken a couple of hundred years, but the system of governance by elected representatives has evolved an immune system with nearly impervious defenses against street protests. Nevertheless, in a society supported by the many and operated for the few, it is perhaps useful, for aesthetic reasons if nothing else, to have some people calling attention to that fact. If you, as a critic, imagine yourself on the side of the angels in damning the protesters, then you should perhaps reconsider.

Thursday, October 13, 2011

Cynicism and libertarian ends (again)

Why M. Yglesias is a nationally acclaimed writer and I am not, exhibit #7572: a couple of years ago I wrote a somewhat convoluted post about cynicism and libertarianism, whereas today Yglesias wrote this, which is elegant, readable, and much more worth your time.

Wednesday, August 03, 2011

g++ unordered_multimap: an exercise

I discovered this randomly several weeks ago while debugging something else at work, and I thought it was worth sharing since the g++ STL is widely used.

Step 1: Save the following file as mapdemo.cpp:

#include <iostream>
#include <unordered_map>
// Fill a multimap with 10,000 values under one key, then dereference null so gdb stops with the map live.
int main() {
    typedef std::unordered_multimap<int, int> int_multimap;
    int_multimap map;
    for (int i = 0; i < 10000; ++i) {
        map.insert(int_multimap::value_type(17, i));
    }
    std::cerr << *static_cast<int*>(0);
    return 0;
}

Step 2: Compile the file:

g++ --std=c++0x -g mapdemo.cpp

Step 3: Load the file in gdb and examine the buckets:

$ gdb -silent ./a.out
Reading symbols from xxx/a.out...done.
(gdb) run
Starting program: xxx/a.out 

Program received signal SIGSEGV, Segmentation fault.
0x0000000000400b4d in main () at mapdemo.cpp:10
10     std::cerr << *static_cast<int*>(0);
(gdb) p map._M_bucket_count
$1 = 15173
(gdb) p map._M_buckets[0]
$2 = (std::__detail::_Hash_node<std::pair<int const, int>, false> *) 0x0
(gdb) p map._M_buckets[15172]
$3 = (std::__detail::_Hash_node<std::pair<int const, int>, false> *) 0x0
(gdb) p map._M_buckets[17]
$4 = (std::__detail::_Hash_node<std::pair<int const, int>, false> *) 0x605080
(gdb) p *map._M_buckets[17]
$5 = {_M_v = {first = 17, second = 0}, _M_next = 0x670c90}

Yes, g++'s implementation of unordered_multimap (the standardized successor to the non-standard hash_multimap extension of pre-C++0x days) uses bucket hashing, but the size of the bucket array is proportional to the number of elements in the multimap, not the number of distinct keys.
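
(Incidentally, you don't need gdb to watch this happen; the standard bucket interface exposes the same information. A minimal sketch, with the exact numbers depending on your libstdc++ version:)

#include <iostream>
#include <unordered_map>
#include <utility>

int main() {
    std::unordered_multimap<int, int> map;
    for (int i = 0; i < 10000; ++i) {
        map.insert(std::make_pair(17, i));  // 10,000 values, one distinct key
    }
    std::cout << "size:         " << map.size() << "\n"          // 10000
              << "bucket_count: " << map.bucket_count() << "\n"  // 15173 in the gdb session above
              << "load_factor:  " << map.load_factor() << "\n";  // size / bucket_count, well under 1
    return 0;
}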

Exercise for the reader: Explain what I just did; explain why the result of step 3 is curious; and then explain why the authors might have chosen to do it this way anyway.

It occurs to me that this would have made a decent interview question if I hadn't written it up here. Oh well, I have others.

Monday, August 01, 2011

Acer Aspire TimelineX 1830T semi-review

Attention conservation notice: Google-food for a gadget you will probably never need to know about.

I recently bought an Acer 1830T ultra-compact notebook (11.6" screen, 3.1 lbs, Core i5 470UM, ~$540 street price). I have neither the time nor (yet) the data to review it comprehensively, but here are a few observations that I didn't find in other online reviews, with a focus on the physical design. This review is intended to complement other information online, not replace it, and is offered in the hope that it's useful to other people who may be in the market for a very small laptop.

Size

You can read on the spec sheet that the laptop is 285 x 204 x 28 mm (11.22 x 8.03 x 1.1 inches) at its widest point, but that doesn't give you a visceral feel for its size. Here it is next to a few objects with which you may be familiar.

Counterclockwise from upper left: Kindle 2G, 12oz. Diet Coke, Nexus S, $1 Federal Reserve Note, 15" Macbook Pro (unibody), and TimelineX 1830T. You can see that it's quite small. It feels qualitatively similar in the hand to other ultra-compact notebooks I've handled, such as smaller Lenovo X series laptops, although it's not nearly as blade-thin as a Macbook Air. Below are a couple of detail on-edge shots to illustrate the device thickness.

The top shot is the same 15" Macbook Pro and Kindle stacked next to the TimelineX 1830T, with the other props on the edges. The bottom shot is the Macbook Pro alone next to the 1830T. Subjectively, I will say that its small size and light weight make it feel thinner than it is. I'll toss this in my bag as a second laptop without hesitation.

Input devices

The keyboard is OK but not stellar. Here it is, compared to the Macbook Pro keyboard, using my left hand as a reference object:

You'll notice that it's marginally smaller — and for me, this is enough that it does subjectively feel more cramped — but it's much closer than I would have expected, and about as good as I'd expect from a machine this size. It's not a Lenovo keyboard but then nothing is.

The trackpad is quite small and almost invisible, marked only by three fine raised lines on the palmrest's metal surface:

Overall, the small palmrest and trackpad are this device's biggest ergonomic shortcomings. I've concluded that they shrank the palmrest to make room for the (huge) battery on the top side of the keyboard. After a brief acclimation period, I find the trackpad acceptable for casual use, but I expect I'll still carry around a mouse when I want to do serious work.

Power adapter

The TimelineX has an interesting power adapter. First, here is a size illustration.

Counterclockwise from upper left: Nexus S, Macbook Pro power adapter, TimelineX 1830T power adapter, 12oz. Diet Coke, and $1 bill. Note that the adapter is closer in size and weight to a cell phone power adapter than a traditional laptop power adapter. Furthermore the plug prongs are detachable:

Cleverly, the prongs can be attached in either orientation:

Obviously this is useful for power strips or other situations where the area around the cord might be crowded. Most laptops deal with this problem by attaching the plug head via a separate cord, but I would like to see this design become more widespread for cell-phone-style power adapters. Rotating the prongs is a simple 10-second operation:

Miscellany

A few other brief observations:

  • 1080p video plays perfectly fine, either downsampled on the native 1366x768 screen or at full resolution on an external 1080p television via HDMI.
  • 64-bit Ubuntu Maverick (10.10) under VMware Player works adequately for casual coding. I have not yet set up dual-boot and perhaps I won't need to.
  • Resume from suspend is quite quick in Windows; a full boot is slow but rare.
  • The battery is huge; the screen is small; the CPU is an ultra-low-voltage Core i5. The net result is that battery life goes all day for practical purposes, and this is the rare laptop that I will not bother to plug in during use most of the time.

Finally, here's a close-up shot of the laptop cover, which has a nice, grippy embossed cross-hatched texture.


p.s. Incidentally, having shopped for a laptop recently, I have to say that typical review sites do not pay nearly enough attention to laptops as physical objects. The physical reality of the laptop is one of its most crucial characteristics; it's not like a workstation which you just leave under your desk and hook up to the keyboard, mouse, and monitor of your choice. People who do this for a living should be able to provide more useful and objective information than "it feels light". Also, laptop review sites overall strike me as quite lazy. Why would you show useless white-background product shots from the PR kit, when digital cameras are so ubiquitous, and it's trivial to take your own much more useful photos, as this blog post shows? Grrr.

Saturday, July 02, 2011

Two white dresses

Striking juxtaposition (intentional?) on the CNN International home page just now:

One woman is the subject of the photo, and you see her eyes looking fearlessly at her vanquished opponent. Furthermore, she is important for having excelled in a worldwide competition of objective achievement. The other is a secondary subject, her eyes invisible but her gaze clearly directed at the primary subject of the photo, a man — whom, I suppose, she has also conquered, in a sense, although her fellow competitors for the prize are (as in the first photo) outside the frame. Furthermore she is newsworthy only because her new husband happens to be of royal birth. Wittstock is of course more conventionally beautiful than Kvitova as well.

I won't claim that sports are categorically more important than weddings of ceremonial heads of state, but someday I'd hope that the positions of these images would be reversed.

(Posted from lounge in BGI airport while waiting for a flight.)

Sunday, April 03, 2011

Tree structure and comment threads (a brief observation)

It has been claimed that flat, linear presentation of comments appears to work better for humans than tree-structured comment threads. Without getting too deeply into whether this is true (and if so, why), I would like to offer an observation.

Conversation is never a tree; it is a general directed acyclic graph. In reality, in the commenter's mind, every comment potentially implicitly responds to an arbitrary subset of preceding comments, not to a unique parent and its chain of unique transitive ancestors.

Tree-structured threading — sometimes (erroneously!) called "true threading" — artificially imposes a tree structure on this graph. Flat, linear comment systems do not: each comment appears after all those that precede it topologically in the DAG, and it is up to the reader to reassemble the DAG based on the comments' contents.

It is true, of course, that flat comment systems fail to reify all DAG edges as explicit metadata. However, the nature of these edges is quite subtle and capturing them all explicitly is intractable. Often a comment "responds" to previous comments in indirect ways — for example, simply by omitting some aspect of the argument that has been covered by a previous comment.
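
For concreteness, here is a minimal sketch (the struct and field names are mine, purely illustrative). Each comment records which earlier comments it responds to; that relation is a DAG, not a tree, and the flat presentation is just posting order, which is automatically a valid topological order of the DAG.

#include <cstddef>
#include <iostream>
#include <string>
#include <vector>

// A comment may respond to any subset of earlier comments, not a single parent.
struct Comment {
    std::string text;
    std::vector<std::size_t> replies_to;  // indices of earlier comments
};

int main() {
    std::vector<Comment> thread;
    thread.push_back({"Original post", {}});
    thread.push_back({"First objection", {0}});
    thread.push_back({"Second, unrelated objection", {0}});
    thread.push_back({"One reply addressing both objections", {1, 2}});  // impossible in a tree

    // Flat presentation: posting order already respects every edge in the DAG,
    // because you can only respond to comments that existed when you wrote yours.
    for (const Comment& c : thread) {
        std::cout << c.text << "\n";
    }
    return 0;
}

A tree-structured UI would force that last comment to pick either the first or the second objection as its sole parent, silently discarding the other edge.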

(Prompted by a TC article linked off HN.)

Sunday, March 27, 2011

Ubuntu has not solved the agency problem in community software development for the Linux desktop

Four and a half years ago, I wrote:

In an open source project with a dictatorial or committee-led governance structure, somebody would long ago have cracked some heads and gotten this feature implemented. In a commercial software project, open source or non-, some engineer would be assigned ownership of this feature; and goddammit, if that feature didn't get implemented and maintained, that engineer would be fired and the feature would be assigned to someone else. But KDE's headless. It's less like a mammal with a central nervous system than an enormous amoeba whose various pseudopodia ooze tropically in the direction of "developer itches" and "coolest implementation hacks" (hence the recent proliferation of "hugely ambitious infrastructure refactoring" subprojects like Plasma or Solid) rather than unsexy, annoying-to-implement features that merely provide value to end users.

I genuinely thought Ubuntu had a fighting chance of resolving this agency problem. Surely, with a dictator taking responsibility for the entire desktop stack, there would be progress: when there was a bad corner of usability for a common user task, somebody would crack some heads and get it fixed.

Today, I tried to share a file over my local network between my Ubuntu desktop and my Ubuntu laptop. And I ran into this defect which has been open for a year. In the year 2011, the easiest way to copy a file from one Linux computer on your local network to another Linux computer on your local network is still either (a) copy it onto a USB stick or (b) upload it to the Internet (e.g. by attaching it to a draft Gmail message).

Meanwhile, Ubuntu's burning huge numbers of developer and UI designer cycles on stuff like Unity ("We're too impatient to fix the rough edges in our existing desktop which has a decade of developer investment behind it; therefore we will design a brand new desktop, because there definitely won't be any rough edges in that.").

It turns out that, in fact, Ubuntu has not solved the agency problem in Linux desktop software development. Sigh.

Thursday, March 24, 2011

Federal budgets and applied public choice theory

Atrios writes something so true you should affix it in your memory for the next two decades:

There's literally nothing that this Congress today can do to reduce the deficit 20 years from now. What you can do is sign into law legislation which reduces granny's pension 20 years from now. And, yes, given the way our system works it wouldn't necessarily be easy to reverse that decision 20 years from now depending on the politics and who is in power. But what will still be easy to do 20 years from now is cutting taxes on rich people and writing giant checks to defense contractors. Those things are always easy to do when Congress and their donors are mostly rich people. Reduce the deficit by cutting granny's pension, increase it again by cutting taxes on rich people. Rinse repeat.

. . .

. . . just to remind us of history I'm sure we all remember. That Democratic Socialist Bill Clinton got rid of the deficit. Alan Greenspan, who spent years fretting about the deficit, suddenly decided the great danger we faced was not having a deficit. And Bush tax cuts, and too and such.

Once this really sinks in, you realize that the only measure which leads to long-term balanced budgets is reform which changes the configuration of political power so that budget deficits no longer benefit the powerful. Going by today's projections, the main contributors to long-term budget deficits are rising medical costs, the Bush tax cuts, and war (and preparation for war). Therefore the principal budget-balancing methods that might work in the long term are:

  1. health care reform that reduces the power of people and industries that make medical care expensive.
  2. campaign finance reform and progressive taxation, which reduces the power of rich people to demand tax cuts (campaign finance reform directly, and progressive taxation indirectly by simply making rich people less rich).
  3. reducing funding for the military and defense contractors, which reduces the political power of the military-industrial complex by reducing the number of people dependent upon it.

You may think some of these changes would be bad. That's OK; it just means these reforms conflict with your political values, and I'm not even trying to argue you out of your political values. Just recognize, then, that budget deficits will forever be the price of maintaining policies consistent with your political values. Relax; it's not so bad; maybe it's even a price worth paying; but no amount of cleverness on your part will allow you to wriggle out of paying it.

Your faction may exercise heroic effort, tenacity, and ingenuity to bring the budget into balance. And if you succeed, the current political economy of the United States simply means that the politically powerful health care sector, the politically powerful overclass of wealthy people, or the politically powerful military-industrial complex will find some way to squander your effort and throw the government back into deficit, with the balance of money going into their pockets. This is not a piece of political polemic; it is simply an observation that I offer about the world, with recent history as my evidence.

And as for the widespread conceit of upper-middle-class liberals that there's some purely technocratic way to fix long-term deficit problems by twiddling around with the retirement age and the like — that's just an exercise in hopeless naivete.

Sunday, March 13, 2011

The moral case for sending money to Japan (rather than somewhere else)

Japan is one of the richest countries in the world, both in absolute and per capita terms. The earthquake and tsunami have inflicted terrible damage on the country, leading most likely to thousands of deaths and many millions of dollars of property damage. However, as a nation, Japan has ample financial resources to recover from the disaster. Japan was a healthy and prosperous society one week ago, and a year from now the overwhelming majority of Japan's people will still be alive and healthy, and they will still be relatively prosperous in global terms.

By contrast, there are still hundreds of millions of other people around the globe living in conditions of persistent poverty and immiseration. Just to take one random example, one year after Haiti's 2010 earthquake, Haiti is still a complete basket case.

So, the Japanese earthquake has no doubt pricked your conscience. You have been reminded that there are people in faraway places who direly need assistance. Your moral intuitions are worthy, but if they lead you to donate money to Japanese relief, then you are probably doing something non-optimal from the point of view of improving human welfare. Donate, instead, to an international relief organization that consistently directs its efforts to the most needy worldwide: Oxfam International, Unicef, etc. It is even possible that these organizations will spend some of their resources to help Japan now; but they're in a much better position to analyze the situation and direct the appropriate quantity of resources in that direction than you are.

Accordingly, I just donated to Oxfam, and I urge you to do the same.

On the other hand, of course there are some forms of aid specific to the immediate aftermath of natural disasters for which money is not a substitute. Obama has directed the U.S. Navy to station aircraft carriers in Japan to help airlift relief supplies and such. Google has launched the People Finder for Japan. Etc. These organizations are uniquely situated to help in ways that no amount of money can purchase on the open market. And if you know of some similarly specific aid effort for which equivalents cannot be obtained via market mechanisms, then you should support that effort however you can.

A final caveat is that resources within Japan are unequally distributed. There are usually some very poor and miserable people even within rich societies. If you know of some specific subgroup within Japan which is unlikely to receive assistance due to the structure of Japanese society, then again go ahead and donate to help them as well.

But to a first approximation, the logical response to natural disasters in wealthy countries is to donate money to aid organizations generally, not to donate to aid for those countries. And yes, when The Big One hits the Bay Area (where I currently live), I'll say the same thing.


UPDATE 2011-03-16: Via MR, charity ratings organization GiveWell agrees that donations should not be sent to funds earmarked for Japanese disaster relief. See also F. Salmon, and F. Salmon again.

Tuesday, February 15, 2011

A simple handwave that makes The Matrix tolerable for the scientifically literate

Suppose that controlled fusion power requires real-time control computations which can be much more efficiently implemented in neurological hardware than in silicon.

(That Second Law of Thermodynamics thing was bothering the hell out of you, wasn't it?)

Note that neurological hardware really is exceptionally power-efficient for certain classes of computations. Common estimates of the human brain's power consumption are 20-25 watts; this is roughly the wattage of a Mobile Intel Core i5 processor, which (as far as we know) appears to be a much less powerful computer for many purposes. By contrast Watson runs on 90 IBM Power750 servers, filling ten racks, whose power draw is something like 80 kilowatts. In other words, Watson consumed about four thousand times more power than either of its meat-based competitors.

Here in reality, I think it's unlikely that any fixed class of computations can be efficiently implemented in neurons but not in silicon — see Carver Mead and his academic descendants' work on analog silicon circuits. But positing that such computations may exist seems within the realm of acceptable science-fictional handwaving.


UPDATE: Yes this is close to the standard handwave that the humans are being kept as computing devices, not power sources. But I think you need to draw the connection explicitly to power generation; otherwise there's just too much narrative in the film and animated shorts that makes no sense.

UPDATE': Never mind, I just remembered Morpheus's exact wording from the first film's voice over and I don't think it's salvageable. Oh well.

Sunday, February 06, 2011

Why iPads' role in education should be limited

This morning I came across this story about a Georgia state senator proposing to replace textbooks with iPads. This strikes me as a bad idea for several reasons.

First, the iOS developer agreement means that iOS will forever be a platform that teaches children to consume rather than to create, at least with respect to my discipline (computer programming).

If you think this doesn't matter because "kids don't program", I would like to disabuse you of that notion, hard. The iPad could have been a great device for running Squeak, which includes environments like Etoys and Scratch. These systems teach children to be producers rather than merely consumers of computing, which I think is no small matter given the importance of computers in society today. But, as a programming language interpreter and compiler, Squeak is prohibited by the iOS developer agreement.

Many adults grew up as passive consumers of technology — for example, as television watchers or video game players, rather than people who shoot videos or program games. This blinds them to the possibility that things could be profoundly different. Ubiquitous digital video cameras and video sharing sites like YouTube have already transformed the relationship between people (especially young people) and video. Video is no longer solely the thing you sit on the couch and watch. It's something everyone can and does create. Despite the proliferation of astonishing banality on YouTube, I claim that this is a positive development. Making and editing a video of your cat is still more rewarding, and exercises more cognitive skills, than passively watching almost anything on television.* Programming could be the same way. No, the average person will never build complex software from the ground up. But the average person can, with appropriate support, learn to write small programs, and to modify big programs around the edges, and (crucially) to enjoy doing so. The iOS developer agreement simply shuts down this avenue of creativity, and that's the wrong thing to be doing to children.

Well, OK, so maybe you don't care about ideological issues like "freedom" and "creativity" (although I claim that if you do not, then you have no business working in education). Let's get down to practicalities.

The general attitude of Apple towards iOS device "owners" has been that the end-user does not truly own the device — Steve is just letting you hold it for a while. Note Apple's hostility to jailbroken devices and even its use of nonstandard screws for its cases. What does this mean in the education world? A school district that buys a thousand iPads will be completely at Apple's mercy w.r.t. software upgrades, hardware repairs, and general system maintenance.

The article notes that Georgia is "currently spending about $40 million a year on books [that] last about seven years". Apple is a consumer company and builds its products to last about 2 years. Ask people with vintage 2007 iPhones how well their devices are coping with iOS 4. Furthermore, this 2-year product cycle is not an accident; it's built into the foundation of Apple's current business model, which is to ride the leading edge of technology so that they can always sell a device that's shinier than their competitors' devices (and even their own devices from a couple of years ago). Have the school districts really worked out the implications of this? I doubt it.

Furthermore, the article notes: "Textbook publishers are eyeing the potential for moving their content to the digital world, enabling them to update material rapidly and include interactivity." The unspoken subtext is that school districts will move from purchasing textbooks to purchasing subscriptions to textbook content. Ask university libraries with electronic journal subscriptions how well that's working out for them. And the iPad textbook age will, in the long run, be worse than that. Electronic journals at least distribute their articles as DRM-free PDFs: if you find a paper you need, you can save it to your local hard drive and have it forever. Does anyone think that grade school textbook publishers are going to release their iPad content without DRM? I laugh mordantly in your general direction. I pity the school district that has a budget crisis and cannot afford a textbook publisher's access fees for the year. I can only hope that most districts keep the old dead-tree textbooks in the basement for emergencies.

In summary, iPads in education will

  • Teach children to consume programs rather than to create them.
  • Lock schools into a closed computing ecosystem that they do not control.
  • Lock schools into a continual treadmill of costly hardware upgrades.
  • Lock schools into electronic subscriptions to DRM-encumbered textbook content.

All that said, dead tree textbooks are expensive and heavy and troublesome to update, and there's tremendous potential for electronic educational materials to improve the situation. And the iPad, being an excellently made device with an active developer community, is a fruitful context in which to experiment with new educational software. But until/unless some of the above features change, it's extremely premature for an entire state to make the iPad the center of its educational curriculum.


*Of course, there are a few great shows on television and people still watch those. Homemade video does not replace that and never will. But great shows constitute a tiny fraction of televisual content.

Thursday, January 13, 2011

The (logarithmic) calendar I want

I was looking at my Google Calendar tonight and I realized that this software was encouraging a way of thinking about time which has significant drawbacks. Namely, it only allows one to look at a small, fixed time window at once: a day, one or two weeks, or a month at a time.* But the question I was asking myself at the time was, what is the shape of my year going to be?

When am I going to make time to visit friends and family in faraway places? How should I spend the 20 vacation days per annum that my employer gives me? What sort of personal projects will I realistically have time to pursue over the course of this year?

Being the morbid type of person I am, these thoughts then led me to other questions about even longer time spans. What do I want my life to look like in 5 years? By contrast, what is the realistic outcome of extrapolating 5 years from the way that my life is moving now? What do I hope to accomplish before I die, and am I going to have time to do it?

All these types of questions cannot be visualized on a month-at-a-time calendar, whether it's Google Calendar, or some other software, or a typical monthly wall calendar printed on paper. Now, the mere fact of having a proper calendar scarcely leads to satisfactory answers to the questions I'm pondering. But clearly the artifacts we use to track time influence our thoughts, our emotions and ultimately our actions. In this light, the limited scope of our calendars seems like a cognitive handicap with potentially huge effects on our lives.

So, here is what I want. I want a logarithmic calendar. I want this week to be visualized large. I want the rest of this month and the next to be visualized somewhat smaller. I want larger and more distant time intervals to be visualized as progressively smaller boxes. And I want the scope of the calendar to be decades — at least as long as my remaining life expectancy, or perhaps a bit longer so that I'm forced to think about posterity.

With a little work, I could write a bit of software that visualized time like this. There's nothing particularly earth-shattering about the code required. It is, as we say in the industry, a Mere Matter of Programming.
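
For instance, here is a minimal sketch of the core scaling computation, assuming you lay time out on a logarithmic axis of days-from-now; the band boundaries below are just ones I picked for illustration.

#include <cmath>
#include <cstdio>

int main() {
    // Bands of future time, measured in days from now. On a log axis, each
    // band's share of the wall is log(far edge / near edge), normalized.
    const char* names[] = {"this week",         "rest of this month", "next month",
                           "rest of this year", "next 5 years",       "years 6-40"};
    const double edges[] = {1, 7, 30, 60, 365, 5 * 365.0, 40 * 365.0};
    const int bands = 6;

    const double total = std::log(edges[bands] / edges[0]);
    for (int i = 0; i < bands; ++i) {
        const double share = std::log(edges[i + 1] / edges[i]) / total;
        std::printf("%-20s %4.1f%% of the wall\n", names[i], 100.0 * share);
    }
    return 0;
}

With boundaries like these, this week gets roughly a fifth of the wall, about as much as the final few decades put together, which is the sort of distortion I'm after.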

In fact, doing it in software is so straightforward that it's almost more fun to puzzle out how to do it with just paper. An argument can be made that paper would be fundamentally better anyway, in at least a few ways. A wall-sized poster is a tremendous display technology, with better resolution, pixel density, and variety of supported input methods than even a 30" touchscreen monitor. And a wall poster doesn't get covered up whenever you open a web browser while craving a moment's distraction. At best, you can ignore it via the old-fashioned method of averting your eyes.

So, let's do the exercise. How would you lay out a logarithmic wall calendar using only layers of preprinted paper? Here's my stab (apologies for the roughness of the sketch):

For every year, you mount a new spiral-bound calendar at the top. The spiral-bound annual calendar has ~52 leaves (give or take depending on the number of calendar weeks in the year). The week currently on top has a lot of writeable area per day, but the bottoms of the pages for the rest of this month peek out from beneath the current week. Below the current month's week pages, there are tabs for each month, which are large enough that you can write notes in them. The months go across from left to right. And beneath that, there's a large writeable area for annual goals, observations, etc. for each of the next 5 years. These are not part of the spiral-bound calendar; instead they are pinned to the wall, and when you reach a year's end, you unpin that year and pin up a new strip. And finally below that, there are undifferentiated strips for each semi-decade following the current semi-decade.

Suggested customization: next to each year, write down how old you will be when that year begins.

Admittedly, it's all a bit ad hoc. Randall Munroe would no doubt have devised some much more rigorously consistent mathematical visualization. But this is just my idle evening doodling, and at each level of time granularity I just chose something that looked good to me.

Now, to start thinking about those actual goals...


*Incidentally, there are deep architectural reasons that interactive calendar software which aims for predictable latency per user gesture will tend to offer (visualizations of) fixed-time-window queries, rather than queries over windows of unbounded size. If you're software-minded you can probably figure these reasons out, and also some ideas for working around them.


UPDATE 2011-10-31: Hacker News reactions.

Saturday, January 08, 2011

Sexual desire, authenticity, and Internet business models

Among the many errors in N. Vargas-Cooper's Atlantic article this month on Internet porn, one stands out as the major fallacy. The article's entire argument seems to rest on the assumption that Internet porn reflects a more authentic view of human desire than other forms of cultural production. But the prevalence of low-budget amateur porn on the Internet says less about the sexual stimulation that people most fervently desire than about the economic reality that it's vastly easier to build an Internet business on crowdsourced amateur porn than on any other kind of porn.*

To conclude, from the consumption of bad porn, that everyone prefers bad porn when better porn can ostensibly be had if one looks harder, is roughly like deducing from the success of McDonald's that everybody prefers Big Macs to every other type of food. Again this is a question of costs, not authenticity. Sexual desire, like hunger, is an extraordinarily robust urge and can be stimulated (and even, to some extent, satisfied) by relatively low-quality fare. Therefore, people settle. If I offered you a free dinner at McDonald's or a free dinner at French Laundry, you know which you'd pick. And yet McDonald's is a much larger business, for reasons that have nothing to do with McDonald's satisfying a more authentic or deeper hunger.

If you think I'm comparing apples and oranges with McDonald's and French Laundry, substitute In-N-Out for the latter and the answer remains the same. Even when you want a burger, you want a good one; but sometimes you settle for less, because it's cheaper or more convenient.

The analogy becomes even better if you imagine that hamburgers were culturally marginalized and frequently outlawed, such that many people were embarrassed to admit in public that they liked a good burger. Such a social and legal environment would make it hard to build a business around providing high-quality burgers, and In-N-Out or other good burger places would not exist. And cultural critics would conclude that there was some inherent property of human hunger that led us to prefer crappy burgers.

Which is exactly the leap Vargas-Cooper makes: from a particular configuration of economic incentives, in a particular technological and social context, to a ridiculously broad claim about the ultimate nature of human desire.


*The proliferation of highly specialized niche porn is also a consequence of Internet economics, albeit for different reasons, which I leave as an exercise to the reader.

Monday, January 03, 2011

The types of bestselling free Kindle books

Periodically I go on binges where I browse the Kindle bestsellers list and download most of the top 100 free books, more or less indiscriminately, without consideration for quality. I mean, what the hell, it's free and my Kindle 2 still has over 1.2GB of free storage (out of 1.4GB user space). Even the worst piece of formulaic pulp trash might be funny in a so-bad-it's-good kind of way; or at least there may be some anthropological interest (oh, so this is what women fantasize about?). Most of the stuff goes totally unread of course — I don't have time to even glance at a tenth of it — but I suppose I like having the option.

Anyway, it's interesting to note that as of January 2011, the top 100 free Kindle ebooks list consists of the following:

  • 50 Gutenberg ebooks (e.g. The Adventures of Sherlock Holmes): Kindle conversions of public domain etexts from Project Gutenberg; mostly classics.
  • 7 Games (e.g. Every Word): IMO Amazon should segregate these in their own section.
  • 5 Thriller/Mystery (e.g. The Perfect Woman): Mostly in the gruesome-crimes subgenre, not the sleuthing subgenre.
  • 8 Erotica/Romance (e.g. Rough Cut): Romance readers might claim that these are two categories but I defy you to draw the line among these titles.
  • 2 ChickLit (e.g. Stuck in the Middle (Sister-to-Sister Book 1)): Apologies for the derogatory label but what do you want me to do with a cover and title like that?
  • 11 Christian Fiction (e.g. Fools Rush In (Weddings By Bella, Book 1)): Often disguises itself quite stealthily as other genre fiction.
  • 6 Other Fantasy (e.g. Don't Die, Dragonfly): Mostly spirits-and-vampires stuff, not Heroic Medieval Fantasy Product.
  • 8 Alleged Nonfiction (e.g. The Winners Manual): Includes many crappy cookbooks and the Bible.
  • 3 Other Fiction (e.g. The Stolen Crown): Arguably the bravest authors here, as non-series non-genre fiction has the least "author stickiness" of any fiction. Which isn't to say the writing's any good necessarily.