Ferret-induced developer fatigue

There’s been interesting conversation in the corners of the blogosphere I inhabit recently about developer life and motivations for staying in it or leaving it, depending on one’s outlook. Ed Finkler kicked it off with the rather forebodingly titled “The Developer’s Dystopian Future,” in which he admitted:

My tolerance for learning curves grows smaller every day. New technologies, once exciting for the sake of newness, now seem like hassles. I’m less and less tolerant of hokey marketing filled with superlatives. I value stability and clarity.

Marco Arment responded: “I feel the same way, and it’s one of the reasons I’ve lost almost all interest in being a web developer.” And former developer Matt Gemmell chimed in with “Confessions of an ex-developer.” He writes:

I’m in an intriguing position on this subject, because I’m not a developer anymore. My mind still works in a systemic, algorithmic way—and it probably always will. I still automatically try to discern the workings of things and diagnose their problems. You don’t let go of 20+ years of programming in just a few months. But it never gets as far as launching an IDE.

While I don’t want to add a mere “me too” to the chorus, these posts strike chords. I’ve been a web developer off and on since the late ’90s and, up until about 2012, enjoyed it. But in the last few years I’ve found it not only harder to keep up but, frankly, a little harder to care. In 2010 or 2011 I might have started a project in PHP/Symfony or Python/Flask on the server side and used jQuery on the front end and been happy.1

But at this point if you do two projects in a row with the same technology stack you’re behind the times. You’re still using Node? So last month. And everything should be a client-side app, because modern. And responsive! Mobile first! Here, use this framework in a language that these Stanford grads wrote last year. It “compiles” to JavaScript. It’s at version 0.7.3-alpha and most of the docs are still on version 0.6, but hang out in the IRC channel to ask about backward compatibility, and make sure you have Haskell installed because it’s a build prerequisite. And the deployment tools only work for Heroku and Docker. Wait, you’re not developing using Vagrant? I give up, old man. Go back to COBOL and punch cards or whatever it was people did last April!

A lot of web developers in Silicon Valley—and aspiring competitors—are terribly smart people. But the emphasis on Keeping Up With The Buzzwords means none of them can master all the technologies they’re using. They’re using technologies developed on top of other technologies they haven’t had time to master. And they’re constantly enticed to add even more new technologies promising even more new features to make development easier—layering on even more abstractions and magic that make it even harder for programmers to fully grasp what they’re working with. Our bold new future of Big Data and Ubiquitous Presence approaches, and it’s being built by crack-addled ferrets.

Before you object that these things do help you be more productive: I believe you. I do. But when you’re writing a new web app on top of two frameworks, two template systems, three “pre-compilers,” a half-dozen “helper” modules and everything all of those rely on, you’re taking a lot on faith. Prioritizing development speed over all else is a fantastic way to accumulate technical debt. I’ve heard the problems this causes described as “changing your plane’s engines while you’re in flight.” In a couple years the #1 killer for companies in this situation may just be “slower” startups that took the time to get the damn engine design right in the first place.

Matt Gemmell finds software development “really frightening” for other reasons—because it’s progressively more difficult for small software houses to stay solvent; they end up as sole proprietorships or get “acquihired” into big corporations. Frankly, I don’t mind big corporations, and I’m okay with not being independent. But in the web industry itself you won’t escape Ferret Syndrome by heading to a big company. Matt may well be right that he could keep up with all that if he wanted to, but I’m not sure I can. I’m pretty sure I don’t want to.

Marco and Matt have both solved this by going independent—Marco as a developer and Matt as a writer. Making a living as an independent anything, though, is hard. They’ve both outlined the challenges for developers. I don’t want to say it’s harder for fiction authors, but I can say with some confidence it’s not any easier; exceedingly few authors make the bulk of their income from writing.2

For my part, I’ve moved back into technical writing. And, yes, at a Silicon Valley startup—but one doing solid stuff with, dare I say, good engine design. I’m surrounded by better programmers than I ever was, and my impression is that they’re interested in being here for the long haul. And I like technical writing; it pings both my writer and techie sides, without leaving me too mentally exhausted to work on my own stuff at other times.

As with Matt and (no longer tech) blogger Harry Marks, “my own stuff” is mostly fiction, even though I don’t often write about it here.3 Unlike Matt, I’d be happy to have some of my own stuff be development work—not just tinkering—again. I’d like to learn Elixir, which looks like a fascinating language to get in on the ground floor of (assuming it takes off). I’d like to do a project using RethinkDB. Maybe in Elixir!

But it might be a while. I’m still recovering from ferret-induced development fatigue.


  1. Or Python/Django, or PHP/Laravel. If you argue too much over the framework you’re part of the problem. 

  2. My own creative writing income remains at the “enough to buy a cup of coffee a month” level, but I’m happy to report that not only is it now high-end coffee, I can usually afford a donut, too. 

  3. Which may eventually change, but “blogger with unsold novel” is the new “waiter with unsold screenplay.” (I do have a Goodreads author page.) 

The sort-of new Apple

I apologize for being back to sporadic post mode yet again. I started a new technical writing job about three and a half weeks ago and left for a writing workshop about a week and a half ago. While it’s not as if Lawrence, Kansas, has fallen off the edge of the Internet (web development nerds may know it’s the original home of Django), I haven’t had time to see the Apple WWDC keynote yet. I’m not sure I’ve even finished the Accidental Tech Podcast episode about it yet.

If you care about this stuff you’ve probably already seen all the pertinent information: iOS 8 is coming with a lot of the under-the-hood features that iOS needed, OS X Yosemite is coming with excessive Helvetica, those new releases will work together in ways that no other combination of desktop and mobile operating systems pulls off (at least in theory), and oh hey, we’ve been working on a new programming language for four years that nobody outside Apple knew about, so in your face, Mark Gurman.

Developers have generally reacted to this WWDC as if it’s the most exciting one since the switch from PowerPC to Intel, if not the switch from MacOS 9 to OS X, and there’s a lot of understandable tea leaf reading going on about what this all means. If you’re Yukari Iwatani Kane, it means Apple is doomed, but if you’re Yukari Iwatani Kane everything means Apple is doomed. If you’re an Android fan, it means you get to spend the next year bitching at Apple fans about how none of this is innovative and we’re all sheep, so you should be just as happy as we are.

A lot of this stuff is about giving iOS capabilities that other mobile operating systems already had. I’m surprised by how many people seem flabbergasted by this, as if they never expected Apple to allow better communication between apps and better document handling. The thinking seemed to be that even though those were obvious deficiencies, the fact that they hadn’t been addressed yet must mean Apple didn’t think they were important, and we’d all better just work harder at rationalizing why that functionality didn’t fit into The Grand Apple Vision.

But that never made a lot of sense to me. Apple certainly has a generous share of hubris,1 but they’ve never deliberately frozen their systems in amber. People describe Apple’s takeaway from their dark times in the ’90s as “own every part of your stack that you can,” but I’m pretty sure another takeaway from the Copland fiasco was “don’t assume you’re so far ahead of everyone else that you have a decade to dick around with shit.” MacOS System 7 arguably had a better GUI than Windows 95, but in terms of nearly everything else, Win95 kicked System 7’s ass. And kicked System 8’s ass. (It didn’t kick System 9’s ass, but System 9 came after Windows 2000, which did in fact kick System 9’s ass.)

I still prefer the iOS UI to any of the Android UIs that I’ve used, but that can only hold for so long. Around the time iOS 7 launched I’d heard a lot of under-the-hood stuff intended for that release got pushed to iOS 8 after they decided iOS 7’s priority was the new UI. This is a defensible position, but dangerous. So a lot of stuff that came out in iOS 8 didn’t surprise me. If Apple tried to maintain iOS indefinitely without something like extensions and iCloud Drive, they’d lose power users in droves. And Apple does care about those users. If Apple is in part a fashion company, as critics often charge, then power users are tastemakers. They may not be a huge bloc, but they’re a hugely influential bloc.

The biggest potential sea change, though, is what this may mean about Apple’s relationship with the development community. Many years ago, Jean-Louis Gassée described the difference between Be Inc., the OS company he founded, and Apple with, “We don’t shit on our developers,” which succinctly captures the relationship Apple has had with their software ecosystem for a long time.2 I don’t know that Apple has ever addressed so many developer complaints at once with a major release.

That’s potentially the most exciting thing about this. While Android has had a lot of these capabilities for years, I haven’t found anything like Launch Center Pro or Editorial in terms of automation and scriptability, and those apps manage to do what they do on today’s iOS. Imagine what we’ll see when these arbitrary restrictions get lifted.3


  1. Apple’s hubris is carved from a solid block of brushed aluminum. 

  2. One might argue that Be went on to do exactly that, but that’s poop under the bridge now. 

  3. I know “arbitrary” will be contentious, but I’m sticking by it. 

Could editors stop being quite so minimal?

On Hacker News, I came across mention of Earnest, a “writing tool for first drafts.” It follows the lead of minimal text editors everywhere—no real preferences to speak of and no editing tools beyond what you’d get in Notepad or TextEdit. Well, no: less than that, because most of these don’t allow any formatting, except for the ones that have thankfully adopted Markdown.

Ah! But Earnest goes out of its way to give you less than that because it prevents you from deleting anything. Because first draft.

The first Hacker News comment at the time I write this is from someone who wrote a similar program called “Typewriter”; he says, “It’s a little more restrictive since you can’t move the caret.”

Okay. Look. Stop. Just stop.

Someone also commented that it’s reminiscent of George R. R. Martin writing A Song of Ice and Fire on a DOS machine. To which I replied: no, it isn’t. Because George’s copy of WordStar 4? It has a metric ton more functionality than any goddamn minimalist text editor I’ve seen. Yes, even the ones using Helvetica Neue.

In 1994, I was using a DOS word processor, Nota Bene. (It has a Windows descendant that’s still around.) It supported movement, selection, deletion and transposition by word, sentence, line, paragraph and phrase. It could near-instantly index and do Google-esque boolean searches on entire project folders, showing you search terms in context. It had its own support for multiple files and windows and, as I recall, multiple clipboards. It had multiple overwrite modes, so you could say “overwrite text I’m typing until I hit the end of a line” or “until I hit any whitespace” and then switch back to inserting. It had oodles of little features that were all about making editing easier.
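
(For the curious, here’s a rough Python sketch of what an “overwrite until a delimiter” mode amounts to. It’s purely illustrative and has nothing to do with Nota Bene’s actual implementation; the function and its behavior are my own toy version of the feature described above.)

    # Purely illustrative: a toy model of "overwrite until a delimiter" editing
    # modes. This is NOT Nota Bene's implementation, just the behavior described.

    def overwrite(buffer: str, pos: int, typed: str, stop: str = "line") -> str:
        """Type `typed` at `pos`, replacing existing characters until the stop
        condition (end of line, or any whitespace) is hit, then insert."""
        end = pos
        while end < len(buffer):
            ch = buffer[end]
            if stop == "line" and ch == "\n":
                break
            if stop == "whitespace" and ch.isspace():
                break
            end += 1
        room = end - pos  # characters available to overwrite before the delimiter
        return buffer[:pos] + typed + buffer[pos + min(len(typed), room):]

    text = "teh quick brown fox\njumps over the lazy dog"
    fixed = overwrite(text, 0, "the", stop="whitespace")
    print(fixed.splitlines()[0])  # the quick brown fox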

For some reason, this is a task we’ve all kind of given up on in the prose world. Code editors can do a lot of this. But when it comes to prose, we’re all about making beautiful editors whose core functionality is preventing me from editing. Because that frees my creativity!

Except that writing gets better through editing. I could use both Nota Bene and Earnest for my first draft—but I could only use NB for any future drafts.

I don’t mean to suggest that Earnest—or any of these Aren’t I A Pretty Pretty Minimalist programs—is a terrible idea. If Earnest helps free your inner Hemingway, awesome. Knock back a daiquiri for me.

But couldn’t a few people start working on tools to make the second draft better?

The reports of the iPad’s death may be premature

I’ve seen a number of articles recently declaring the iPad to be, in effect, a failed experiment: not powerful enough to replace a full computer for “power user” productivity tasks, and with little utility that separates it from phones with oversized screens. Yet two or three years ago many of these same writers were declaring the iPad—and tablets in general, but really just the iPad—to be the next evolution in personal computing, as big a leap forward as the GUI and mouse.

I was a little contrarian about that, writing this time last year that Steve Jobs’s famous analogy of PCs being trucks and tablets being cars might be wrong:

While each new iteration of iOS […] will give us more theoretical reasons to leave our laptops at home or the office, I’m not convinced this is solely a matter of missing operating system functionality. It may be that some tasks just don’t map that well to a user experience designed around direct object manipulation.

While I still think this is true, I’m feeling a little contrarian again, because I believe the first sentence in that paragraph is still true, too. Tasks that do map well to a user experience designed around direct object manipulation are likely to get easier with each new release of iOS (and Android and Windows Whatever), and with each release there will be some tasks that weren’t doable before that are now.

I’ve also felt for some time—and I believe I’ve written this in the past, although I’m not going to dig for a link—that ideas can and will migrate from OS X to iOS over time, not just the reverse. And it’s always struck me as a little bonkers when people say that Apple’s “post-PC vision” will never include better communication between apps, smarter document sharing, and other power user features. OS X is friendlier than Windows and Linux, but it’s also a far better OS for power users than either. While I think it’s true that Apple has no intention of exposing the file system on iOS—let alone exposing shell scripting—that doesn’t mean that their answer to all shortcomings of iOS for power users is going to be “That’s not what iOS devices are for.”

So, this brings us around to Mark Gurman’s report that iOS 8 will have split-screen multitasking. I don’t know whether it’s true (although Gurman’s track record has by and large been pretty good), but the lack of it is one of the biggest shortcomings for power users on an iPad—as much as we may celebrate the wonders of focus that one app at a time gives us, in real world practice, working in one window while referring to the contents of another is something that happens all the time.

The big question for me is the feasibility of the “hybrid” model: tablets with hardware keyboards or laptops with touch screens. We don’t have the latter in the Mac world, but the Windows world not only has them, it has made touch pretty common on new, not-too-expensive models. Anecdotally, people like them. To me, a tablet with a hardware keyboard makes more sense than the touch laptop, simply because you can take the keyboard off and not bother with it most of the time. While I’ve joined in mocking the Surface’s implementation of this, it may truly be Microsoft’s implementation that’s at fault, not the whole concept.

At any rate, I’m not only not expecting the iPad to be subsumed by “phablets,” I’m expecting iOS to start delivering on distinctly “power user” features over the next few versions. I’m happy with my MacBook Pro in a way I wouldn’t be with only an iPad, and there will always be people who feel that way. But the future of computing is a trend toward computing as appliance. Right now, there’s a measurable gap between what computing appliances do and what general computers do. And it sells Apple short to suggest that they never plan to address that gap with anything more than “we don’t think our customers need to do any of that.”

Pampero Manhattan

There’s an argument to be made that if you know a few basic cocktails, you actually know a lot of cocktails. There’s a whole family of drinks that stem from the Martinez: one part each gin and sweet vermouth, with a dash of maraschino liqueur (a clear, dry cherry concoction). If you look at that as a template—base spirit, vermouth and an optional dash of bitters or liqueur—you can immediately see both the martini and the Manhattan there.

The Manhattan itself—classically two parts rye whiskey to one part sweet vermouth with a dash of bitters—can be squished into all sorts of interesting shapes. You can use dry vermouth with it and have a Dry Manhattan, or use a half-part each of both dry and sweet vermouth and have a “Perfect” Manhattan. Swap the rye for scotch and you have a Rob Roy. Swap the rye for Canadian whiskey, as many bars will when you ask for a Manhattan, and you have a bad Manhattan.

There’s a variant I love that Singlebarrel introduced me to. It’s called the Cuban Manhattan, which is a Perfect Manhattan made with dark rum. (To live up to the name, I’d suggest Flor de Caña Grand Reserve.) This is my take on it, though, using one of my favorite dark rums: a Venezuelan rum called Pampero Aniversario.

Pampero Manhattan

  • 2 oz. Pampero Aniversario rum
  • 1 oz. Carpano Antica sweet vermouth
  • 1–2 dashes chocolate bitters

Stir with ice and strain into a chilled cocktail glass. Garnish with a twist of orange peel.

(I used Fee’s Aztec Chocolate Bitters because that’s what I happen to have, although the general consensus seems to be that Bittermens Xocolatl Mole Bitters are the best of the lot.)

Private clubs and open bars: on App.net vs. Twitter

App.net is an ambitious experiment to create a for-profit “social infrastructure” system for the Internet: you could use it to build something like Twitter, or a file syncing or storage service, or a notification service. In practice the only one that got any attention, about a year ago, was App.net’s vision of a better Twitter. This came shortly after it became clear that Twitter had decided it was in their own best interest to knife their development community.

Yet like it or not they’re still competing against Twitter, and Twitter is “free.” (The air quotes are important.) App.net started out at a price of $50 a year, then dropped to $36, then added a free tier of deliberately minimal utility. $36/year isn’t an outrageous fee, but it’s infinitely higher than free. It’s too high to even be a credible impulse buy.

Even if you didn’t see the stories yesterday you can write the next part. App.net discovered this model isn’t working well after all, and they’re keeping the service open but laying off everyone. This seemingly crazy move may work, but it reduces App.net to a side project.

Maybe App.net needed cheaper pricing, or maybe it just couldn’t work competing with “free” no matter what. Maybe focusing on the kinds of people who give a poop about Twitter’s relationship to its development community wasn’t the right tack. Maybe its problem is that it was a private haven for rich white tech dudes, as some critics snarked.

Maybe, although I’ll admit the last one grates on me. We’re mocking App.net for having a cover charge—the only legitimate beef, as there’s no evidence they discriminate based on gender, ethnicity or bank statements—and then going on to the huge club down the street, the one with ads on every available bit of wall, because they give us free beer.

But maybe App.net has misdiagnosed the problem.

What’s Twitter’s functionality? Broadcast notifications sent to people who specifically subscribe to them, with simple mechanisms for replying and for making them private (i.e., direct messages) rather than public. That mechanism can act as text messaging, group chat, link sharing, image sharing, status updates, site update notifications, news alerts, and more. It’s a terrific concept.
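
(To make that concrete, here’s a minimal Python sketch of the broadcast-notification model I’m describing. The names and structures are hypothetical; this is not Twitter’s or App.net’s actual API, just the shape of the idea.)

    # Minimal sketch of the broadcast-notification model described above.
    # All names are hypothetical; this is not Twitter's or App.net's actual API.
    from dataclasses import dataclass, field
    from typing import Optional

    @dataclass
    class Post:
        author: str
        body: str
        public: bool = True             # False behaves like a direct message
        reply_to: Optional[int] = None  # index of the post being replied to

    @dataclass
    class Network:
        posts: list = field(default_factory=list)
        followers: dict = field(default_factory=dict)  # author -> set of subscribers

        def follow(self, who: str, target: str) -> None:
            self.followers.setdefault(target, set()).add(who)

        def publish(self, post: Post) -> list:
            """Deliver a post to everyone subscribed to its author."""
            self.posts.append(post)
            audience = self.followers.get(post.author, set())
            if not post.public:
                # crude stand-in for direct messaging: only named recipients get it
                audience = {name for name in audience if "@" + name in post.body}
            return sorted(audience)

    net = Network()
    net.follow("alice", "bob")
    net.follow("carol", "bob")
    print(net.publish(Post("bob", "site update: new post is live")))  # ['alice', 'carol']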

App.net’s best insight was that making notifications “infrastructure” the way email and HTTP are has amazing potential. Twitter has no interest in letting other people use their infrastructure except under the strictest terms. That’s the problem App.net’s model solves. Good for them.

But as much as this is anathema to the Valley’s technolibertarian mindset, infrastructure only works as a common good. Suppose CERN had spun off WebCorp to monetize HTTP. They could offer “free” web with tightly dictated terms on how we interact with their ecosystem, or they could be liberal with those terms and exact a connection toll. But neither of those scenarios would get us to where we are now. The Internet is the Internet because it’s built on protocols that are free. Not free-with-air-quotes, just free period.

"Wait, but if there was free infrastructure to do what Twitter does, how would anyone make money?" At first, by selling commercial clients and servers, although that market would be likely to decline over time. Some companies could run commercial notification networks with access charges to operators. (This vision needs a decentralized network like email to spread operational costs out as broadly as possible, but that network must deliver notifications in fairly close to real time and in the proper order, which is a huge problem.)

In the long run, broadcast notification services only survive if they do become like email services. App.net isn’t making enough money to sustain a full-time business, but so far Twitter isn’t either. They both believe the value is in the infrastructure, and they’re both wrong. The value comes from making the infrastructure free.

Meanwhile, I find the bits of schadenfreude I’m seeing on this—not the “I didn’t expect this to work and I’m disappointed I was right” posts, but the “ha ha, private club goes under” posts—to be a little disheartening. I like free beer, too, but the managers at Club Tweet are starting to look a bit desperate.

Foursquare de-gamifies →

As Foursquare added more Yelp-like features and kept de-emphasizing the game aspects it had always been built around, I asked somewhat sardonically how long the check-ins and badges had left.

Turns out the answer is: next month. Ben Popper and Ellis Hamburger, The Verge:

Today, the company is announcing a brand new app called Swarm that will exist alongside the current Foursquare app. Swarm will be a social heat map, helping users find friends nearby and check in to share their location. A completely rewritten Foursquare app will launch in a month or so. The new Foursquare will ditch the check-in and focus solely on exploration and discovery, finally positioning itself as a true Yelp-killer in the battle to provide great local search.

Foursquare’s CEO has been saying for the past year that the check-ins and points and badges and all were never their real goal, but I’m dubious. The big question ahead is whether the “new” Foursquare is really competitive with Yelp. Foursquare has listings for just as many businesses as Yelp, but it doesn’t have ratings for nearly as many. But worse, for an app that wants to focus on “exploration and discovery,” the algorithm that drives Foursquare’s ratings shows a noticeable bias toward places that are both popular and get a lot of regulars. That sounds reasonable, but a lot of times you’re a regular at a place because it’s close to your office. Both Yelp and Foursquare are going to be able to tell you about that huge Mexican place that’s popular with all the tech workers for business lunches, but Yelp’s a lot more likely to tell you about the dive taqueria worth going out of your way for.
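
(To illustrate the bias I mean, here’s a made-up scoring function. It is emphatically not Foursquare’s actual algorithm; it just shows how rewarding repeat check-ins on top of raw popularity pushes the office-adjacent lunch spot above the out-of-the-way gem.)

    # A made-up scoring function to illustrate the popularity-plus-regulars bias.
    # This is NOT Foursquare's actual algorithm.

    def score(total_checkins: int, repeat_checkins: int) -> float:
        """Reward raw popularity, and reward a high share of regulars even more."""
        repeat_share = repeat_checkins / max(total_checkins, 1)
        return total_checkins * (1 + 2 * repeat_share)

    office_mexican = score(total_checkins=900, repeat_checkins=700)  # lunch-rush regulars
    dive_taqueria = score(total_checkins=150, repeat_checkins=30)    # destination spot
    print(office_mexican > dive_taqueria)  # True: the convenient place wins the ranking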

An insider-based tech bubble? →

Noam Scheiber, writing in The New Republic:

The great bubbles in history, right up through the dotcom fiasco and last decade’s real estate unpleasantness, have typically been mass phenomena. […] But just because bubbles typically don’t inflate until the small-timers get involved doesn’t mean you can’t have a bubble without them.

A short but interesting piece which fits with my own somewhat cynical musings on this topic. Also:

The good news is that most of the money lost when the bubble bursts will come from private investors, not people invested in the stock market. The only average folks likely to suffer are those who make their living in the [San Francisco] Bay Area.

Awesome.

No Exit →

Gideon Lewis-Kraus writes a brilliant piece of long-form journalism in Wired this month, “No Exit,” following the path of a more typical startup in San Francisco than the ones that get huge payoffs.

As a self-taught programmer with no college degree at all, this passage particularly resonated with me:

It’s extremely difficult to hire talented engineers in the Valley unless you’ve got incredible PR, can pay a fortune, or are offering the chance to work on an unusually difficult problem. Nobody was buzzing about them, and they had no money, but the upside of having a business that relied on serious machine learning was that they had worthy challenges on the table. On January 4, they made an offer to exactly the sort of engineer they needed, Tevye. He had a PhD in AI from MIT. Just to contextualize what that means in Silicon Valley, an MIT AI PhD can generally walk alone into an investor meeting wearing a coconut-shell bra, perform a series of improvised birdcalls, and walk out with $1 million. Nick and Chris had gone to good schools of modest profile—Nick to the University of Puget Sound, Chris to the University of Vermont—and while Nick also had a Harvard business degree, both were skeptical about the credential fetish of the Valley. They were happy to play the game when they could, though.

The assumption has clearly become that it’s easier to teach good coding practices to people who already know all the algorithms than vice versa. I can’t definitively say that’s the wrong approach, although I have my doubts. I can definitively say that it’s pushing me more toward pursuing technical writing than I was even considering a half-year ago.

In any case, while my own experience with startups hasn’t been nearly as stressful as what these guys are going through, I’ve seen just enough of that angle to make me wonder whether SF/Silicon Valley’s “rock stars only” mindset is healthy in the long run. (Ironically, I’m getting more contacts than ever from people inexplicably convinced I’m a Rock Star Ninja Brogrammer looking to wrangle Big Data High Scalability DevOps Buzzword Bleepbloop problems the likes of which the world has never seen. Sorry. I know enough of the words you are using to help you document your brilliant stuff, though.)