The next small thing

While Apple’s introduction of a “smart watch” yesterday couldn’t have surprised anyone with a pulse, three things about it did surprise me:

  1. That it’s coming out of the gate with so many models.
  2. That the UX centers on the “digital crown.”
  3. That they’re flat-out calling it a watch.

Arguably all of those points connect. The Apple Watch is just as capable a device as the Moto 360 and other “wearables”; if anything, it’s more so, since it has hardware capabilities none of them (currently) do. But this is firmly a fashion accessory. You know, kind of like a watch.

From the moment they introduced it I started seeing criticisms of—surprise—its shape. “It’s square! Not round! Apple blew it!” Well, maybe, but nearly all of the people harping on the shape are tech nerds. We claim that “normal people want round watches,” but what we mean is whoa, man, Motorola made a round display. That’s an impressive technical achievement, but square watch faces have been a thing for almost as long as watch faces have. What “normal people” want is wristwatches that aren’t the size of pocketwatches. The Galaxy Gear is a staggering 37 mm × 57 mm rectangle; even the comparatively svelte Moto 360 has a 46 mm diameter—making it larger than either of the two Apple Watches.1

The fashion world, as Reuters put it, is “divided,” but none of them seem to be saying, “Square? Ew.” As “watch guy” Benjamin Clymer writes in Hodinkee (“one of the most widely read wristwatch publications in the world”), while the Apple Watch “is not perfect, by any means,” they get “more details right on their watch than the vast majority of Swiss and Asian brands do with similarly priced watches, and those details add up to a really impressive piece of design.” He praises it for both restraint and respect: “in its own way, [the Apple Watch] really pays great homage to traditional watchmaking and the environment in which horology was developed.”

At risk of getting even more meta, though, here are two other intriguing things about the Apple Watch, neither of which is truly about the watch.

Apple has long been one of the few tech companies that pays the same respect to exterior design as they do to internal engineering. This is something Apple fans love about the company—and something Apple critics hate. They prioritize industrial design in a way no one else does: they want their laptops to be as thin and light as possible, and if that means having no optical drive, no swappable battery, few USB ports, and even redesigning the damn power connector twice, so be it. Is that no-compromise design, or is it all-compromise design? You can make a case for both, but it’s a mistake to treat this as an either/or scenario. Design is a feature, and it’s one a sizable minority is willing to pay for.2 If it’s not a feature you’re willing to pay for, though, it’s suspect—and techie culture has long been deeply suspicious of prioritizing aesthetics. You can tell a techie that Apple does material science like no other consumer company and they’ll appreciate it, but they’ll still think the only reason you bought that 13″ MacBook Pro instead of the Dell laptop that’s got a 17″ screen and 37 USB ports for half the price is because you’re an iSheep.

But with recent moves—hires from TAG Heuer and Burberry, buying Beats, and now the watch—Apple is embracing the fashion label like never before. I’ve thought for years that Apple’s true target wasn’t Microsoft or IBM or Google but Sony. Now that Sony’s all but abandoned the “lifestyle electronics” market, well, capitalism abhors a vacuum.3

This gives Apple’s critics ammunition like never before, too. The digs against the Apple Watch from the nerd herd have been fast and furious. (“Not round. Less battery life than a Pebble. Doesn’t work with Android. Lame.”) It may give Apple’s longtime fans some qualms as well—those who understand the Mac understand that it’s not just the “computer for the rest of us,” it’s the kickass Unix workstation for the rest of us, and it’s fair to wonder how much attention a fashion company will keep paying to that. The silver lining is that Macs are selling at higher volume than ever, and that iPads are not going to eat the Mac’s lunch any time soon.

The second intriguing thing is how much Tim Cook has staked on the Apple Watch. He’s introduced it in such a way that clearly places it on the same level as the Mac and iOS devices. This is something I’m not so sanguine about. My own biggest criticism of smartwatches is that they seem to be solutions in search of problems. Apple has a remarkable track record of solving problems that people didn’t know they had (which is another thing that makes the most hardcore techies hate them, as they still can’t forgive Apple for moving the world away from the command line), but just what problem have we had that the Apple Watch solves? I don’t see an obvious answer to that.

But I don’t think Apple does, either. Go to Apple’s web site and click on the various product categories. All of them have headlines: “The notebook people love” (MacBook Air), “More power behind every pixel” (MacBook Pro), “Bigger than Bigger” (iPhone 6), “Small wonder” (iPad mini), “Engineered for maximum funness” (iPod touch). Except one: the watch. As terrific as the Apple Watch’s design is (and it is), Apple needs to be able to tell us why we want it. Right now, they can’t. And that’s not a good enough foundation to build the “next chapter of Apple” on.


  1. In terms of area, things are murkier. The Moto 360 covers 16.62 cm²; if the 42 mm Apple Watch were square it would cover 17.64 cm², but it looks taller than it is wide. I suspect the actual area it covers is about the same as the 360’s. (A quick sketch of the math follows these footnotes.)

  2. I’d also argue that part of the reason Macs cost more than PCs is not because Macs are overpriced but because PCs are underpriced, but that’s another topic. 

  3. The one big market that Sony is in that Apple isn’t is, yes, game consoles. I think 2015 may be interesting. 
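For anyone who wants to check footnote 1’s numbers, here’s the back-of-the-envelope math as a short Python sketch. Treating the 42 mm Apple Watch as a perfect 42 mm square is the footnote’s own simplification, not a published dimension.

    import math

    # Moto 360: a round face with a 46 mm diameter.
    moto_360_area = math.pi * (46 / 2) ** 2   # ≈ 1661.9 mm²

    # 42 mm Apple Watch, treated (generously) as a 42 mm square.
    apple_42_area = 42 ** 2                   # = 1764 mm²

    print(f"Moto 360:          {moto_360_area / 100:.2f} cm²")  # 16.62 cm²
    print(f"42 mm Apple Watch: {apple_42_area / 100:.2f} cm²")  # 17.64 cm²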

Twitter fans lash out at changes that may never happen →

If you’re a Twitter user you’ve almost certainly seen dire warnings today about Twitter planning to implement a “Facebook-style filtered feed, whether you like it or not,” according to GigaOM. This is apparently based on Twitter’s CFO saying that tweets in raw, chronological order aren’t “the most relevant experience” for users who don’t check Twitter obsessively — which is, whether you like it or not, true. However, at the same speaking engagement, the CFO also said “Individual users are not going to wake up one day and find their timeline completely ranked by an algorithm.”

What’s interesting about this kerfuffle is not that the Internet may have rushed to judgment without all the information, because the Internet has pretty much become optimized for that. It’s that so few people even stopped to wonder whether Twitter would actually do such a thing. Twitter, as a company, so fundamentally misunderstands the value in what they’ve created that we just assume the worse a rumor sounds, the more likely it is to be true. A rumor that soon only “approved partners” will be able to send more than 50 tweets a day sounds much more likely than a rumor that, say, Twitter will ease restrictions on third-party clients.

Personally, I don’t think Twitter will mess with the unfiltered stream any time soon — but I think they’re going to introduce “alternative” ways to view Twitter that will be much closer in spirit to Facebook’s news feed. I wouldn’t be at all surprised if, in fact, they introduce an entirely new suite of applications to do this, the way Facebook did with Paper. (If they go this route, I believe they should call it “Standard Twitterrific.”)

The economics of a web-based book →

Matthew Butterick believes in “taking the web seriously as a book publishing medium,” and put his book Butterick’s Practical Typography online with a page called “How to Pay for This Book” asking for donations (or purchases of his font packages). Recently he wrote a fascinating update about the results, “The economics of a web-based book: year one.” He’s made $3,676 in direct donations—he estimates 1 out of every 650 readers has paid. While this may sound dismal in some ways, he points out that with traditional publishing this equals the royalties from first-year sales of 1,200–1,800 copies. Great. However:

For web-based book publishing to be viable, authors need to be able to attain paperback-level financial rewards with considerably fewer than 650,000 readers.
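Butterick doesn’t show the arithmetic behind that 650,000 figure, so here is my own back-of-the-envelope reading of his numbers as a short Python sketch; the implied per-copy royalty and the link to 650,000 are my inference, not his.

    # Inputs from the article: $3,676 in year-one donations, roughly one
    # paying reader per 650, and an equivalence to 1,200-1,800 copies sold.
    donations = 3676
    readers_per_payer = 650
    equivalent_copies = (1200, 1800)

    # Implied royalty per copy, if $3,676 equals 1,200-1,800 copies' worth:
    per_copy = [round(donations / c, 2) for c in equivalent_copies]  # [3.06, 2.04]

    # At one payer per 650 readers, every 1,000 paying readers requires
    # 650,000 total readers, which is presumably where the quote's figure comes from.
    readers_needed = 1000 * readers_per_payer  # 650,000

    print(per_copy, readers_needed)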

My concern (naturally) is what happens to fiction in this brave new world. I have Thoughts on this, but I’m still getting them together. (I’m impressed by Butterick’s Pollen “typesetting” system, but as yet I’m unconvinced it’s a good way to go for fiction authors—and there are big problems in domains beyond presentation, like hosting and discovery.)

Butterick also has some acerbic but well-considered comments about the combined effects of not paying directly for content and blocking ads:

Unless you’re really slow on the uptake, you’ve outfitted your web browser with an ad blocker. Ha ha, you win! But wait—that means most web ads are only reaching those who are really slow on the uptake. So their dollars are disproportionately important in supporting the content you’re getting ad-free. “Not my problem,” you say. Oh really? Since those people are the only ones financially supporting the content, publishers increasingly are shaping their stories to appeal to them. Eventually, the content you liked—well, didn’t like it enough to pay for it—will be gone. Why? Because you starved it to death.

I’ve grumbled about “who do you expect to pay for it, if it’s not you and not the advertisers” before, but hadn’t looked at it from that disturbing—and sadly plausible—angle. Yes, I know: we’ve gotten to the point where ad banners literally shove the content you’re trying to read out of the way, and both advertisers and the web sites that allow advertisers to do this to their content deserve a lot of the blame. But Butterick’s salient point is inescapable: We have met the Buzzfeed audience, and they are us.

Techmeme and the Verge demonstrate truth vs. clickbait

This is The Verge’s headline for a review of the new Lytro Illum camera:

This is Techmeme’s rewrite of the headline, to more accurately match what the Verge says about the Illum:

Personally, I’d be more inclined to click on Techmeme’s headline anyway.

A brief look at Twitter’s financials

Ars Technica’s Megan Geuss:

Revenue for the social media company was up 124 percent year-over-year to $312 million. Twitter lost $145 million according to GAAP (Generally Accepted Accounting Principles) numbers, but made a non-GAAP net income of $15 million.

Let me take a red pen and cross out everything that involves non-GAAP numbers. “Twitter lost $145 million.” In fact, compared to last quarter, while their revenue has increased, so has their loss.
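To put a number on what gets crossed out, here’s that arithmetic as a short Python sketch. The $160 million gap follows directly from the quoted figures; what fills it (typically stock-based compensation and similar adjustments) isn’t broken out in the Ars summary.

    # Figures from the Ars quote; the subtraction is mine.
    revenue = 312_000_000       # quarterly revenue, up 124 percent year-over-year
    gaap_net = -145_000_000     # GAAP net loss
    non_gaap_net = 15_000_000   # non-GAAP "net income"

    # Costs excluded to turn a $145M GAAP loss into a $15M non-GAAP profit:
    excluded = non_gaap_net - gaap_net
    print(f"Excluded from the non-GAAP number: ${excluded:,}")  # $160,000,000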

Whee! Pop the champagne corks!

This must be a naïve reading on my part, given how rational and well-considered Wall Street historically is, right? Right?

I’m just saying, if they’re still losing money by GAAP measures (i.e., real numbers) at the end of this fiscal year—well, there’s a reason they’ve made sure AdBlock for Tweets isn’t happening.

Sorry about your startup

So I came across yet another article on Hacker News running a post-mortem on a failed startup. Right off the bat it asserts:

Entrepreneurs often write about what’s going right, but too rarely write about what’s gone wrong.

I’m sure there was a point in tech bubble history when this was true. But that point was a long time ago. Startup guys write about what went wrong all the damn time. I am pretty sure if I started a startup pitched as a platform for other startups to explain why they tanked, I would get VC money for it, especially if I could get a good deal on failr.com.

And I don’t want to pick on the author of this particular article. He’s a solid enough writer, his startup looks genuinely interesting rather than stupid, and hey, he sure went above and beyond in mistake-making. (“And so I learned that we hadn’t been paying payroll taxes for almost three years.” Oopsie!) But these articles have become post hoc navel-gazing, bemoaning subsets of the same problems over and over. They’re Chinese-American takeout menus of fail: I’ll have the “hired too many people too soon” and “didn’t scale fast enough” from Column A, “poor communication” and “ill-defined cofounder roles” from Column B, and some extra sweet-and-sour sauce.

And that makes the soul-baring part of the performance, like consciously dressing down and overpaying for loft space. These pieces are written for potential investors and employees who hang around sites like Hacker News. They’re spin. Bob’s heart-rending tale of how he spent ten months with no income so he could avoid laying off his last three employees as long as possible, eventually living under his office desk and subsisting entirely on chocolate chai and teriyaki jerky, ensures you remember him as a naïve but sincere and selfless CEO who gave his all and shared his mistakes with the world. Without this bracing splash of sincerity aftershave, you might instead remember Bob as a twenty-two-year-old who burned through $7M of VC money creating an iOS app that sends the word “Meh” to selected friends on your contact list. (“We see an immense upside potential vis-à-vis the growing Irony as a Service (IaaS) market.”)

Ultimately, most startups in the current tech boom are going to fail for one of three reasons:

  1. The core idea of the company isn’t that good. Maybe it’s Pets.com, or Color. And if you’re pitching your startup as “like X for Y”—like Facebook for geek girls! like Instagram but only for cat pictures!—you have a problem.
  2. The core idea won’t generate more income than outgo before you run out of money. Why, yes, everyone loved StoreYourHugeFilesforFree.com, but it turns out you can’t make it up in volume after all.
  3. Your team is collectively not experienced enough to see mistakes before they kill you.

Your idea wasn’t that good, you didn’t have the capital, or you didn’t have the experience. Those three reasons cover 99% of why businesses fail. Yet they almost never appear in “Why My Startup Failed” articles except as hidden subtext. The stories Silicon Valley most likes to hear about itself are stories about why outliers aren’t outliers—why anyone can move right out of college into founding the new Facebook or Google. And so when we fail, we tell ourselves stories that don’t disrupt that myth. It’s absolute heresy to suggest that real-world experience often outweighs youthful energy and a degree from Stanford, but most of the “mistakes you should never make” aren’t mistakes someone with the appropriate work experience would have made.

If your startup fails and it helps you to write about it, write about it. But don’t write about it because you want to prove to the world and future investors that you’re a cool guy after all. Write about it with brutal honesty. Get it out of your system. Then don’t put it online. We love you, but we’ve heard it already. Next time, hire an accountant.

Ferret-induced developer fatigue

There’s been interesting conversation recently in the corners of the blogosphere I inhabit about developer life and motivations for staying in it or leaving it, depending on one’s outlook. Ed Finkler kicked it off with the rather forebodingly titled “The Developer’s Dystopian Future,” in which he admitted:

My tolerance for learning curves grows smaller every day. New technologies, once exciting for the sake of newness, now seem like hassles. I’m less and less tolerant of hokey marketing filled with superlatives. I value stability and clarity.

Marco Arment responded: “I feel the same way, and it’s one of the reasons I’ve lost almost all interest in being a web developer.” And former developer Matt Gemmell chimed in with “Confessions of an ex-developer.” He writes:

I’m in an intriguing position on this subject, because I’m not a developer anymore. My mind still works in a systemic, algorithmic way—and it probably always will. I still automatically try to discern the workings of things and diagnose their problems. You don’t let go of 20+ years of programming in just a few months. But it never gets as far as launching an IDE.

While I don’t want to add a mere “me too” to the chorus, these posts strike chords. I’ve been a web developer off and on since the late ’90s and, up until about 2012, enjoyed it. But in the last few years I’ve found it harder not only to keep up but frankly a little harder to care. In 2010 or 2011 I might have started a project in PHP/Symfony or Python/Flask on the server side and used jQuery on the front end and been happy.1

But at this point if you do two projects in a row with the same technology stack you’re behind the times. You’re still using Node? So last month. And everything should be a client-side app, because modern. And responsive! Mobile first! Here, use this framework in a language that these Stanford grads wrote last year. It “compiles” to JavaScript. It’s at version 0.7.3-alpha and most of the docs are still for version 0.6, but hang out in the IRC channel to ask about backward compatibility, and make sure you have Haskell installed because it’s a build prerequisite. And the deployment tools only work with Heroku and Docker. Wait, you’re not developing using Vagrant? I give up, old man. Go back to COBOL and punch cards or whatever it was people did last April!

A lot of web developers in Silicon Valley—and aspiring competitors—are terribly smart people. But the emphasis on Keeping Up With The Buzzwords means none of them can master all the technologies they’re using. They’re using technologies developed on top of other technologies they haven’t had time to master. And they’re constantly enticed to add even more new technologies promising even more new features to make development easier—layering on even more abstractions and magic that make it even harder for programmers to fully grasp what they’re working with. Our bold new future of Big Data and Ubiquitous Presence approaches, and it’s being built by crack-addled ferrets.

Before you object that these things do help you be more productive: I believe you. I do. But when you’re writing a new web app on top of two frameworks, two template systems, three “pre-compilers,” a half-dozen “helper” modules and everything all of those rely on, you’re taking a lot on faith. Prioritizing development speed over all else is a fantastic way to accumulate technical debt. I’ve heard the problems this causes described as “changing your plane’s engines while you’re in flight.” In a couple years the #1 killer for companies in this situation may just be “slower” startups that took the time to get the damn engine design right in the first place.

Matt Gemmell finds software development “really frightening” for other reasons—because it’s progressively more difficult for small software houses to stay solvent, ending up as sole proprietorships or “acquihired” into big corporations. Frankly, I don’t mind big corporations, and I’m okay with not being independent. But in the web industry itself you won’t escape Ferret Syndrome by heading to a big company. Matt may well be right that he could keep up with all that if he wanted to, but I’m not sure I can. I’m pretty sure I don’t want to.

Marco and Matt have both solved this by going independent—Marco as a developer and Matt as a writer. Making a living as an independent anything, though, is hard. They’ve both outlined the challenges for developers. I don’t want to say it’s harder for fiction authors, but I can say with some confidence it’s not any easier; exceedingly few authors make the bulk of their income from writing.2

For my part, I’ve moved back into technical writing. And, yes, at a Silicon Valley startup—but one doing solid stuff with, dare I say, good engine design. I’m surrounded by better programmers than I ever was, and my impression is that they’re interested in being here for the long haul. And I like technical writing; it pings both my writer and techie sides, without leaving me too mentally exhausted to work on my own stuff at other times.

As with Matt and (no longer tech) blogger Harry Marks, “my own stuff” is mostly fiction, even though I don’t often write about it here.3 Unlike Matt, I’d be happy to have some of my own stuff be development work—not just tinkering—again. I’d like to learn Elixir, which looks like a fascinating language to get in on the ground floor of (assuming it takes off). I’d like to do a project using RethinkDB. Maybe in Elixir!

But it might be a while. I’m still recovering from ferret-induced development fatigue.


  1. Or Python/Django, or PHP/Laravel. If you argue too much over the framework you’re part of the problem. 

  2. My own creative writing income remains at the “enough to buy a cup of coffee a month” level, but I’m happy to report that not only is it now high-end coffee, I can usually afford a donut, too. 

  3. Which may eventually change, but “blogger with unsold novel” is the new “waiter with unsold screenplay.” (I do have a Goodreads author page.) 

The sort-of new Apple

I apologize for being back to sporadic post mode yet again. I started a new technical writing job about three and a half weeks ago and left for a writing workshop about a week and a half ago. While it’s not as if Lawrence, Kansas, has fallen off the edge of the Internet (web development nerds may know it’s the original home of Django), I haven’t had time to see the Apple WWDC keynote yet. I’m not sure I’ve even finished the Accidental Tech Podcast episode about it.

If you care about this stuff you’ve probably already seen all the pertinent information. iOS 8 is coming with a lot of the under-the-hood features that iOS needed, OS X Yosemite is coming with excessive Helvetica, those new releases will work together in ways that no other combination of desktop and mobile operating systems pulls off (at least in theory), and oh hey, we’ve been working on a new programming language for four years that nobody outside Apple knew about, so in your face, Mark Gurman.

Developers have generally reacted to this WWDC as if it’s the most exciting one since the switch from PowerPC to Intel, if not the switch from Mac OS 9 to OS X, and there’s a lot of understandable tea-leaf reading going on about what this all means. If you’re Yukari Iwatani Kane, it means Apple is doomed, but if you’re Yukari Iwatani Kane, everything means Apple is doomed. If you’re an Android fan, it means you get to spend the next year bitching at Apple fans about how none of this is innovative and we’re all sheep, so you should be just as happy as we are.

A lot of this stuff is about giving iOS capabilities that other mobile operating systems already had. I’m surprised by how many people seem flabbergasted by this, as if they never expected Apple to allow better communication between apps and better document handling. Sure, those were obvious deficiencies, but the fact that they hadn’t been addressed yet must have meant Apple didn’t think they were important, and we’d all better just work harder at rationalizing why that functionality didn’t fit into The Grand Apple Vision.

But that never made a lot of sense to me. While Apple certainly has a generous share of hubris,1 they’ve never deliberately frozen their systems in amber. People describe Apple’s takeaway from their dark times in the ’90s as “own every part of your stack that you can,” but I’m pretty sure another takeaway from the Copland fiasco was “don’t assume you’re so far ahead of everyone else that you have a decade to dick around with shit.” While System 7 was arguably better than Windows 95 in terms of its GUI, in terms of nearly everything else Win95 kicked System 7’s ass. And kicked System 8’s ass. (It didn’t kick System 9’s ass, but System 9 arrived alongside Windows 2000, which did in fact kick System 9’s ass.)

I still prefer the iOS UI to any of the Android UIs I’ve used, but that advantage can only hold for so long. Around the time iOS 7 launched, I heard that a lot of under-the-hood work intended for that release got pushed to iOS 8 once Apple decided iOS 7’s priority was the new UI. That’s a defensible call, but a dangerous one. So a lot of what came out in iOS 8 didn’t surprise me. If Apple tried to maintain iOS indefinitely without something like extensions and iCloud Drive, they’d lose power users in droves. And Apple does care about those users. If Apple is in part a fashion company, as critics often charge, then power users are its tastemakers. They may not be a huge bloc, but they’re a hugely influential one.

The biggest potential sea change, though, is what this may mean about Apple’s relationship with the development community. Many years ago, Jean-Louis Gassée described the difference between Be Inc., the OS company he founded, and Apple with, “We don’t shit on our developers,” which, by implication, succinctly captures the relationship Apple has had with their software ecosystem for a long time.2 I don’t know that Apple has ever addressed so many developer complaints at once with a major release.

That’s potentially the most exciting thing about this. While Android has had a lot of these capabilities for years, I haven’t found anything on it like Launch Center Pro or Editorial in terms of automation and scriptability, and those apps manage to do what they do on today’s iOS. Imagine what we’ll see when these arbitrary restrictions get lifted.3


  1. Apple’s hubris is carved from a solid block of brushed aluminum. 

  2. One might argue that Be went on to do exactly that, but that’s poop under the bridge now. 

  3. I know “arbitrary” will be contentious, but I’m sticking by it. 

Could editors stop being quite so minimal?

On Hacker News, I came across mention of Earnest, a “writing tool for first drafts.” It follows the lead of minimal text editors everywhere—no real preferences to speak of and no editing tools beyond what you’d get in Notepad or TextEdit. Well, no: less than that, because most of these don’t allow any formatting, except for the ones that have thankfully adopted Markdown.

Ah! But Earnest goes out of its way to give you even less than that: it prevents you from deleting anything. Because first draft.

The first Hacker News comment as I write this is from someone who wrote a similar program called “Typewriter”; he notes, “It’s a little more restrictive since you can’t move the caret.”

Okay. Look. Stop. Just stop.

Someone also commented that it’s reminiscent of George R. R. Martin writing A Song of Ice and Fire on a DOS machine. To which I replied: no, it isn’t. Because George’s copy of WordStar 4? It has a metric ton more functionality than any goddamn minimalist text editor I’ve seen. Yes, even the ones using Helvetica Neue.

In 1994, I was using a DOS word processor, Nota Bene. (It has a Windows descendant that’s still around.) It supported movement, selection, deletion, and transposition by word, sentence, line, paragraph, and phrase. It could near-instantly index entire project folders and run Google-esque Boolean searches on them, showing you the search terms in context. It had its own multiple-file and multiple-window support and, as I recall, multiple clipboards. It had multiple overwrite modes, so you could say “overwrite the text I’m typing until I hit the end of the line” or “until I hit any whitespace” and then drop back to insert. It had oodles of little features that were all about making editing easier.

For some reason, this is a task we’ve all kind of given up on in the prose world. Code editors can do a lot of this. But when it comes to prose, we’re all about making beautiful editors whose core functionality is preventing me from editing. Because that frees my creativity!

Except that writing gets better through editing. I could use both Nota Bene and Earnest for my first draft—but I could only use NB for any future drafts.

I don’t mean to suggest that Earnest—or any of these Aren’t I A Pretty Pretty Minimalist programs—is a terrible idea. If Earnest helps free your inner Hemingway, awesome. Knock back a daiquiri for me.

But couldn’t a few people start working on tools to make the second draft better?