Apply, Part 1: Why build it in the first place?

In the spirit of stating the need clearly, this first part will cover the rationale for building the Apply system in the first place. This is indispensable, insofar as it's the set of premises for the design decisions to be detailed later. It's also my own effort to keep this grounded in a real-world need; software engineers are often trained to prefer the most general representation of a problem available. But that's only beneficial from certain perspectives, and I think it's good to remember that real software generally doesn't solve abstract problems.

An Apply project page, here from the Native Voices traveling exhibit.

Exposition

I work for the grantmaking wing of a good-sized nonprofit. The Public Programs Office at ALA partners with a wide variety of funders to put programming (in the sense of "cultural programming") in libraries across the USA. PPO provides a wide range of expertise in the library field: knowledge of what makes a successful program, how to quantify impact, and so on. We also provide the logistics of running the grant: accepting applications, coordinating reviewers, receiving reports, and so on. When I was hired (in 2009, which seems unimaginably distant now), it was to manage the consultants building our online grant interfaces. (The implication being: outsourcing undoubtedly saved my employers quite a lot of money for some time, but the cost of outsourcing had now grown into that of a new full-time employee.)

Even with a full-time project manager, though, the arrangement was a bottleneck in the process: we were paying relatively large amounts of money for external developers to build one-off application sites for each project. The duplication of effort was costly—smaller programs with low/no web budgets were still being conducted via mail. And the resulting data was typically a spreadsheet, siloing each project's data until someone manually merged them. So, two distinct bottlenecks really:

1. Cost: every project paid anew for essentially the same one-off site, and smaller programs were priced out of the web entirely.
2. Data: each project's applications lived in their own spreadsheet, invisible to the others until someone manually merged them.

What to do about it?

The need for a permanent and central system was clear, but it was an open question what form it would take. In the first two years, I managed to supplant the consultants by writing the one-off sites myself, but these suffered from the same essential defects as above, even if they were cheaper.

The question was—all risks/rewards considered, with said consideration being the most difficult part—what sort of solution represented the best long-term value? My bosses and I considered a number of options:

- subscribing to an established paid service;
- paying an outside firm to build a system to our specs;
- adapting an existing codebase in-house;
- building our own system from scratch.

Spoiler alert, given that you’re reading a blog about software design: We decided to build from the ground up. But the reasons why are as important as any other part of the story. If our particular needs had been different, this system might not have been built in the first place.

As noted, the most attractive initial option (my preference for coding over managing aside) was a paid service. Economically, the major asset of such a service is its system, which a whole bunch of people are paid to develop and maintain. Because they directly own it, they extract the most benefit from the economy of scale. On paper it sure seemed like the path to the best system for the least money.

This became less attractive with investigation. There were a few established services that looked quite flexible and reliable; they were priced accordingly. The rest—the price brackets where we would still actually save money over the old arrangement—either lacked the flexibility we required (more on this directly below) or did not seem like the sort of thing on which I wanted to bet my reputation.

Know the domain

The above paragraph implies an interesting economic fact: assuming my assessment was reasonable, it means we were sitting in an underserved locale of the market. It looked like the Invisible Hand had not yet taken its (invisible, natch) feather duster to our needs. My first thought was that this could just as easily imply a fault in my own market research. The market for specialized web services is very, very large, and we've all probably considered ourselves exceptional based on incomplete information from time to time. We needed a reasonable theory of our own market exceptionality in order to proceed.

As noted, I had been building our one-offs for around two years at this point. This was incredibly helpful, because I picked up a whole bunch of domain-specific knowledge—i.e., how our specific grants actually work in the real world—without which this all would have been very difficult.

It turned out we were "exceptional" for two reasons: the volume of applications we needed to deal with, and the level of customization we required. These factors are right in the profit-making wheelhouse of most of the existing services. If you needed a lot of something generic, or a little of something custom, some of the services actually represented a pretty good value. Combined, though, they suggested the same kind of cost curve as our old arrangement—and in the old arrangement, while we didn't really get anything reusable for our money, we at least weren't creating new investments or dependencies. Paying that same money—likely more—for the Platinum Cadillac Package at one of these services would tie all of our customizations to that platform.

A good example of the intersection of volume and customization is the first project used for the Apply system. (I had the good fortune of a hard test case; as will be detailed in later installments, this eliminated a number of dubious design possibilities before they could become a problem.) The project was (and is) a national, NEH-funded initiative on Islamic culture and history that was eventually featured in over 950 libraries across the USA. The project was not only large in terms of having thousands of applicants and running for several years; we also required administrative access for nearly twenty people, and reviewer access for even more. As it turns out, many of the services featured per-login pricing that quickly got expensive, given that aside from our staff there was little overlap from project to project; we work with different funders, reviewers, and so on.
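
To make the per-login problem concrete, here is a back-of-envelope sketch in Ruby. Every number in it is hypothetical (illustrative seat prices and headcounts, not quotes from any vendor we evaluated); the point is the shape of the curve: when seats can't be reused across projects, the bill scales linearly with the number of active projects.

    # Hypothetical per-seat pricing; every number here is illustrative,
    # not a quote from any actual vendor.
    price_per_seat_per_month = 40   # dollars
    admins_per_project       = 20
    reviewers_per_project    = 30
    months                   = 12

    # With little overlap in funders and reviewers between projects,
    # seats can't be shared, so costs grow linearly with project count.
    (1..5).each do |projects|
      seats = projects * (admins_per_project + reviewers_per_project)
      cost  = seats * price_per_seat_per_month * months
      puts "#{projects} project(s): #{seats} seats, $#{cost}/year"
    end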

The project also required a number of features which were either expensive add-ons or purely custom: sophisticated page-branching behavior to route users through their proposal based on their institution type; multiple follow-up documents after the initial proposal, for reporting and additional awards; ad hoc reporting; mail merges based on arbitrary application data; etc. The "base price" of most of the reputable services quickly became a vanishing point in the rearview mirror.
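
For a sense of what that page-branching means in practice, here is a minimal sketch of the idea in Ruby. The institution types, page names, and the next_page helper are all hypothetical stand-ins, not the actual Apply implementation (later installments will get into the real design).

    # A minimal, hypothetical sketch of routing applicants through
    # different page sequences by institution type.
    PAGE_SEQUENCES = {
      public_library:   [:contact_info, :branch_details, :programming_plan, :budget],
      academic_library: [:contact_info, :campus_details, :programming_plan, :budget],
      tribal_library:   [:contact_info, :community_details, :partner_orgs, :budget]
    }.freeze

    def next_page(institution_type, current_page)
      sequence = PAGE_SEQUENCES.fetch(institution_type)
      index    = sequence.index(current_page)
      sequence[index + 1] # nil once the applicant reaches the end
    end

    next_page(:academic_library, :contact_info)    #=> :campus_details
    next_page(:tribal_library, :community_details) #=> :partner_orgs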

None of this would have been apparent if we couldn't cogently report our own needs, rather than the needs of an average consumer in our market. We really were a little weird, it turns out. For me, that meant holding off on the question of a central system while I learned exactly what was going on, and the decision paid off.

Paying for development

The next option in order of apparent sensibility was paying somebody else to build a system to our specs. In the end—especially considering the growth we were all hoping for—I think this probably would have represented a better value than the paid services above, but it suffered from sticker shock. Where we could have gotten into a paid service with a pricey but reasonable initial investment, I didn't (and don't) see how this could have been done for any less than $150K or so. Even pooling money from the big web budgets, this would have been difficult. (It bears mentioning here that these costs and efforts were not, and could not have been, supported by our IT department. Even on a good day that's not the economic reality of a lot of the nonprofit world, doubly so in the dark days after the 2008 crisis.)

The upside of this approach would have been ownership and control of the resulting code and data. But cost, not to mention the development time required, made this a backup scenario, pending the feasibility of in-house development.

We have burgers at home

In-house development, if feasible, offered a number of obvious advantages. First, my employers were going to pay me regardless of whether they were also paying someone else to do the coding. Limiting development costs to my own salary would address at least one of the major issues. Second, it would represent a truly custom solution supported onsite by the author; in terms of responsiveness, that's nearly ideal.

There were additionally a few existing software bases to consider for adaptation. This may just reflect my own lack of experience, but each time I dove into one, the list of necessary customizations and how-do-I-do-thats quickly boggled me. Learning any given codebase well enough to rewire it seemed daunting enough to negate the advantage of responsiveness. (That said, having been through the process of writing and supporting a reasonably large application, I feel I would be much better qualified for it now than I was five years ago. I suppose there's some irony in lack of experience turning into an argument in favor of in-house development, but you go with the best option at the time—knowing the grant applications well, I felt I could model those on my own better than I could interpret an alien codebase.)

You can actually do this, right?

Obviously part of the value you get for consultant $$ is a general assurance that the project is within the capabilities of the bidder. (How reliable that assurance is varies, of course, but still.) This wasn't some ancillary web resource—this was core infrastructure for our unit. It had to be (over)built correctly, laid out rationally, ready to scale, etc. It was, I thought, a good time to have a frank discussion with myself about my capabilities. I had been building Rails applications for a couple of years at that point, and generally felt comfortable, but I was leveling up, and if it didn't go well, a lot of people stood to look pretty bad, starting with me.

I think it’s natural and necessary for programmers to assume, by default, that they can handle unknown problems. That’s a balancing act against the fact that when you tell someone they’re covered, you have to mean it. This seems like a good juncture to thank my boss for trusting me with this project; it would have never gotten done without her, and the rest of our unit, being behind it. It was certainly a big learning experience for me.

In the end, I felt I could do it, as long as (a) I was very careful in the initial design stages to understand the consequences of my decisions and (b) those design stages could largely occur before I proposed my services. I had been accumulating notes on how such a thing might be done, so I holed up in my apartment for a few nights and started wrapping my head around the design. I gave my anxieties a fair hearing, and they were unable to convince me it was a stupid idea, so at least there was that.

Box score

Paid services looked great up front, but their costs and dependencies quickly ballooned under a sober view of our needs. There was approximately zero chance of being handed a $150K budget to work with. And for various reasons, if it was going to fall to me, building from scratch seemed more reliable than depending on an alien codebase. Thankfully, my bosses agreed, and so we decided to build in-house.

Same bat time, same bat channel

With the skunkworks plan settled, the next installment will be an accounting (I hope) of the actual design of the software. There will probably be bits of Ruby scattered around, but in general we won't be too heavy on specific code. (At least not yet?)