This is The Verge’s headline for a review of the new Lytro Illum camera:
This is Techmeme’s rewrite of the headline, to more accurately match what the Verge says about the Illum:
Personally, I’d be more inclined to click on Techmeme’s headline anyway.
Ars Technica’s Megan Geuss:
Revenue for the social media company was up 124 percent year-over-year to $312 million. Twitter lost $145 million according to GAAP (Generally Accepted Accounting Principles) numbers, but made a non-GAAP net income of $15 million.
Let me take a red pen and cross out everything that involves non-GAAP numbers. “Twitter lost $145 million.” In fact, compared to last quarter, while their revenue has increased, so has their loss.
Whee! Pop the champagne corks!
This must be a naïve reading on my part, given how rational and well-considered Wall Street historically is, right? Right?
I’m just saying, if they’re still losing money by GAAP measures (i.e., real numbers) at the end of this fiscal year—well, there’s a reason they’ve made sure AdBlock for Tweets isn’t happening.
So I came across yet another article on Hacker News running a post-mortem on a failed startup. Right off the bat it asserts:
Entrepreneurs often write about what’s going right, but too rarely write about what’s gone wrong.
I’m sure there was a point in tech bubble history when this was true. But that point was a long time ago. Startup guys write about what went wrong all the damn time. I am pretty sure if I started a startup pitched as a platform for other startups to explain why they tanked, I would get VC money for it, especially if I could get a good deal on the domain name.
And I don’t want to pick on the author of this particular article. He’s a solid enough writer, his startup looks genuinely interesting rather than stupid, and hey, he sure went above and beyond in mistake-making. (“And so I learned that we hadn’t been paying payroll taxes for almost three years.” Oopsie!) But these articles are becoming post hoc navel-gazing bemoaning subsets of the same problems, over and over. They’re Chinese-American takeout menus of fail: I’ll have the “hired too many people too soon” and “didn’t scale fast enough” from Column A, “poor communication” and “ill-defined cofounder roles” from Column B, and some extra sweet and sour sauce.
And this makes the soul-baring part of the performance, like consciously dressing down and overpaying for loft space. They’re written for potential investors and employees who hang around sites like Hacker News. They’re spin. Bob’s heart-rending tale of how he spent ten months with no income so he could avoid laying off his last three employees as long as possible, eventually living under his office desk and subsisting entirely on chocolate chai and teriyaki jerky, ensures you remember him as a naïve but sincere and selfless CEO who gave his all and shared his mistakes with the world. Without this bracing splash of sincerity aftershave, you might instead remember Bob as a twenty-two-year-old who burned through $7M of VC money creating an iOS app that sends the word “Meh” to selected friends on your contact list. (“We see an immense upside potential vis-à-vis the growing Irony as a Service (IaaS) market.”)
Ultimately, most startups in the current tech boom are going to fail for one of three reasons:
- The core idea of the company isn’t that good. Maybe it’s Pets.com, or Color. And if you’re pitching your startup as “like X for Y”—like Facebook for geek girls! like Instagram but only for cat pictures!—you have a problem.
- The core idea won’t generate more income than outcome before you run out of money. Why, yes, everyone loved StoreYourHugeFilesforFree.com, but it turns out you can’t make it up in volume after all.
- Your team is collectively not experienced enough to see mistakes before they kill you.
Your idea wasn’t that good, you didn’t have the capital, or you didn’t have the experience. That’s 99% of why all businesses fail. Yet those reasons are almost never in “Why My Startup Failed” articles except as hidden subtext. The stories Silicon Valley most likes to hear about itself are stories about why outliers aren’t outliers—why anyone can move right out of college into founding the new Facebook or Google. And so when we fail, we tell ourselves stories that don’t disrupt that myth. It’s absolute heresy to suggest that real world experience often outweighs youthful energy and a degree from Stanford, but most of the “mistakes you should never make” aren’t mistakes someone with the appropriate work experience would have made.
If your startup fails and it helps you to write about it, write about it. But don’t write about it because you want to prove to the world and future investors that you’re a cool guy after all. Write about it brutally honestly. Get it out of your system. Then don’t put it online. We love you, but we’ve heard it already. Next time hire an accountant.
There’s been interesting conversation in the corners of the blogosphere I inhabit recently about developer life and motivations for staying in it or leaving it, depending on one’s outlook. Ed Finkler kicked it off with the rather forebodingly titled “The Developer’s Dystopian Future,” in which he admitted:
My tolerance for learning curves grows smaller every day. New technologies, once exciting for the sake of newness, now seem like hassles. I’m less and less tolerant of hokey marketing filled with superlatives. I value stability and clarity.
Marco Arment responded: “I feel the same way, and it’s one of the reasons I’ve lost almost all interest in being a web developer.” And former developer Matt Gemmell chimed in with “Confessions of an ex-developer.” He writes:
I’m in an intriguing position on this subject, because I’m not a developer anymore. My mind still works in a systemic, algorithmic way—and it probably always will. I still automatically try to discern the workings of things and diagnose their problems. You don’t let go of 20+ years of programming in just a few months. But it never gets as far as launching an IDE.
While I don’t want to add a mere “me too” to the chorus, these posts strike chords. I’ve been a web developer off and on since the late ’90s and, up until about 2012, enjoyed it. But in the last few years I’ve found it harder not only to keep up but frankly a little harder to care. In 2010 or 2011 I might have started a project in PHP/Symfony or Python/Flask on the server side and used jQuery on the front end and been happy.1
A lot of web developers in Silicon Valley—and aspiring competitors—are terribly smart people. But the emphasis on Keeping Up With The Buzzwords means none of them can master all the technologies they’re using. They’re using technologies developed on top of other technologies they haven’t had time to master. And they’re constantly enticed to add even more new technologies promising even more new features to make development easier—layering on even more abstractions and magic that make it even harder for programmers to fully grasp what they’re working with. Our bold new future of Big Data and Ubiquitous Presence approaches, and it’s being built by crack-addled ferrets.
Before you object that these things do help you be more productive: I believe you. I do. But when you’re writing a new web app on top of two frameworks, two template systems, three “pre-compilers,” a half-dozen “helper” modules and everything all of those rely on, you’re taking a lot on faith. Prioritizing development speed over all else is a fantastic way to accumulate technical debt. I’ve heard the problems this causes described as “changing your plane’s engines while you’re in flight.” In a couple years the #1 killer for companies in this situation may just be “slower” startups that took the time to get the damn engine design right in the first place.
Matt Gemmell finds software development “really frightening” for other reasons—because it’s progressively more difficult for small software houses to stay solvent, ending up as sole proprietorships or “acquihired” into big corporations. Frankly, I don’t mind big corporations, and I’m okay with not being independent. But in the web industry itself you won’t escape Ferret Syndrome by heading to a big company. Matt may well be right that he could keep up with all that if he wanted to, but I’m not sure I can. I’m pretty sure I don’t want to.
Marco and Matt have both solved this by going independent—Marco as a developer and Matt as a writer. Making a living as an independent anything, though, is hard. They’ve both outlined the challenges for developers. I don’t want to say it’s harder for fiction authors, but I can say with some confidence it’s not any easier; exceedingly few authors make the bulk of their income from writing.2
For my part, I’ve moved back into technical writing. And, yes, at a Silicon Valley startup—but one doing solid stuff with, dare I say, good engine design. I’m surrounded by better programmers than I ever was, and my impression is that they’re interested in being here for the long haul. And I like technical writing; it pings both my writer and techie sides, without leaving me too mentally exhausted to work on my own stuff at other times.
As with Matt and (no longer tech) blogger Harry Marks, “my own stuff” is mostly fiction, even though I don’t often write about it here.3 Unlike Matt, I’d be happy to have some of my own stuff be development work—not just tinkering—again. I’d like to learn Elixir, which looks like a fascinating language to get in on the ground floor of (assuming it takes off). I’d like to do a project using RethinkDB. Maybe in Elixir!
But it might be a while. I’m still recovering from ferret-induced development fatigue.
Or Python/Django, or PHP/Laravel. If you argue too much over the framework you’re part of the problem. ↩
My own creative writing income remains at the “enough to buy a cup of coffee a month” level, but I’m happy to report that not only is it now high-end coffee, I can usually afford a donut, too. ↩
I apologize for being back to sporadic post mode yet again. I started a new technical writing job about three and a half weeks ago and left for a writing workshop about a week and a half ago. While it’s not as if Lawrence, Kansas, has fallen off the edge of the Internet (web development nerds may know it’s the original home of Django), I haven’t had time to see the Apple WWDC keynote yet. I’m not sure I’ve even finished the Accidental Tech Podcast episode about it yet.
If you care about this stuff you’ve probably already seen all the pertinent information. iOS 8 is coming with a lot of the under-the-hood features that iOS needed, OS X Yosemite is coming with excessive Helvetica, those new releases will work together in ways that no other combination of desktop and mobile operating systems pulls off (at least in theory), and oh hey we’ve been working on a new programming language for four years that nobody outside Apple knew about so in your face Mark Gurman.
Developers have generally reacted to this WWDC as if it’s the most exciting one since the switch from PowerPC to Intel, if not the switch from MacOS 9 to OS X, and there’s a lot of understandable tea leaf reading going on about what this all means. If you’re Yukari Iwatani Kane, it means Apple is doomed, but if you’re Yukari Iwatani Kane everything means Apple is doomed. If you’re an Android fan, it means you get to spend the next year bitching at Apple fans about how none of this is innovative and we’re all sheep, so you should be just as happy as we are.
A lot of this stuff is about giving iOS capabilities that other mobile operating systems already had. I’m surprised by how many people seem flabbergasted by this, as if they never expected Apple would ever allow better communication between apps and better document handling. The reasoning seemed to be that even though those were obvious deficiencies, the fact that they hadn’t been addressed yet must mean Apple didn’t think they were important, and we’d all better just work harder at rationalizing why that functionality didn’t fit into The Grand Apple Vision.
But that never made a lot of sense to me. While Apple certainly has a generous share of hubris,1 they’ve never deliberately frozen their systems in amber. While people describe Apple’s takeaway from their dark times in the ’90s as “own every part of your stack that you can,” I’m pretty sure that another takeaway from the Copland fiasco was “don’t assume you’re so far ahead of everyone else that you have a decade to dick around with shit.” While MacOS System 7 was arguably better than Windows 95 in terms of its GUI, in terms of nearly everything else, Win95 kicked System 7’s ass. And kicked System 8’s ass. (It didn’t kick System 9’s ass, but System 9 arrived right alongside Windows 2000, which did in fact kick System 9’s ass.)
I still prefer the iOS UI to any of the Android UIs that I’ve used, but that can only hold for so long. Around the time iOS 7 launched I’d heard that a lot of under-the-hood stuff intended for that release got pushed to iOS 8 after they decided iOS 7’s priority was the new UI. That’s a defensible position, but a dangerous one. So a lot of what was announced for iOS 8 didn’t surprise me. If Apple tried to maintain iOS indefinitely without something like extensions and iCloud Drive, they’d lose power users in droves. And Apple does care about those users. If Apple is in part a fashion company, as critics often charge, then power users are tastemakers. They may not be a huge bloc, but they’re a hugely influential bloc.
The biggest potential sea change, though, is what this may mean about Apple’s relationship with the development community. Many years ago, Jean-Louis Gassée described the difference between Be Inc., the OS company he founded, and Apple with, “We don’t shit on our developers,” which succinctly captures the relationship Apple has had with their software ecosystem for a long time.2 I don’t know that Apple has ever addressed so many developer complaints at once with a major release.
That’s potentially the most exciting thing about this. While Android has had a lot of these capabilities for years, I haven’t found anything on Android like Launch Center Pro or Editorial in terms of automation and scriptability, and those apps manage to do what they do on today’s iOS. Imagine what we’ll see when these arbitrary restrictions get lifted.3
On Hacker News, I came across mention of Earnest, a “writing tool for first drafts.” It follows the lead of minimal text editors everywhere—no real preferences to speak of and no editing tools beyond what you’d get in Notepad or TextEdit. Well, no: less than that, because most of these don’t allow any formatting, except for the ones that have thankfully adopted Markdown.
Ah! But Earnest goes out of its way to give you less than that because it prevents you from deleting anything. Because first draft.
The first Hacker News comment at the time I write this is from someone who wrote a similar program called “Typewriter”; he says, “It’s a little more restrictive since you can’t move the caret.”
Okay. Look. Stop. Just stop.
Someone also commented that it’s reminiscent of George R. R. Martin writing A Song of Ice and Fire on a DOS machine. To which I replied: no, it isn’t. Because George’s copy of WordStar 4? It has a metric ton more functionality than any goddamn minimalist text editor I’ve seen. Yes, even the ones using Helvetica Neue.
In 1994, I was using a DOS word processor, Nota Bene. (It has a Windows descendant that’s still around.) It supported movement, selection, deletion and transposition by word, sentence, line, paragraph and phrase. It could near-instantly index and do Google-esque boolean searches on entire project folders, showing you search terms in context. It had its own multiple-file and multiple-window support and, as I recall, multiple clipboards. It had multiple overwrite modes, so you could say “overwrite text I’m typing until I hit the end of a line” or “until I hit any whitespace” and then drop back to insert mode. It had oodles of little features that were all about making editing easier.
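If you’ve never used a tool that does this, here’s a rough sketch in Python of what an AND-style project search with context amounts to. This is purely my illustration—the folder and search terms are made up, and it’s nothing like Nota Bene’s actual implementation, which built a real index rather than re-scanning files on every query:

```python
#!/usr/bin/env python3
"""Illustrative sketch only: an AND-style boolean search across a folder of
prose files, printing each matching line in context. Not Nota Bene's code."""

import sys
from pathlib import Path


def search_project(folder, terms, extensions=(".txt", ".md")):
    """Yield (path, line number, line text) for lines containing every term."""
    wanted = [t.lower() for t in terms]
    for path in sorted(Path(folder).rglob("*")):
        if not path.is_file() or path.suffix.lower() not in extensions:
            continue
        text = path.read_text(errors="ignore")
        for lineno, line in enumerate(text.splitlines(), 1):
            lowered = line.lower()
            if all(term in lowered for term in wanted):
                yield path, lineno, line.strip()


if __name__ == "__main__":
    # Hypothetical usage: python search.py ~/novel-drafts coyote moonlight
    folder, *terms = sys.argv[1:]
    for path, lineno, context in search_project(folder, terms):
        print(f"{path}:{lineno}: {context}")
```

The point isn’t the fifteen lines of script; it’s that a 1994 DOS word processor shipped this, plus indexing, as a built-in editing feature.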
For some reason, this is a task we’ve all kind of given up on in the prose world. Code editors can do a lot of this. But when it comes to prose, we’re all about making beautiful editors whose core functionality is preventing me from editing. Because that frees my creativity!
Except that writing gets better through editing. I could use both Nota Bene and Earnest for my first draft—but I could only use NB for any future drafts.
I don’t mean to suggest that Earnest—or any of these Aren’t I A Pretty Pretty Minimalist programs—is a terrible idea. If Earnest helps free your inner Hemingway, awesome. Knock back a daiquiri for me.
But couldn’t a few people start working on tools to make the second draft better?
I’ve seen a number of articles recently declaring the iPad to be, in effect, a failed experiment: not powerful enough to replace a full computer for “power user” productivity tasks, and with little utility that separates it from phones with oversized screens. Yet two or three years ago many of these same writers were declaring the iPad—and tablets in general, but really just the iPad—to be the next evolution in personal computing, as big a leap forward as the GUI and mouse.
I was a little contrarian about that, writing this time last year that Steve Jobs’s famous analogy of PCs being trucks and tablets being cars might be wrong:
While each new iteration of iOS […] will give us more theoretical reasons to leave our laptops at home or the office, I’m not convinced this is solely a matter of missing operating system functionality. It may be that some tasks just don’t map that well to a user experience designed around direct object manipulation.
While I still think this is true, I’m feeling a little contrarian again, because I believe the first sentence in that paragraph is still true, too. Tasks that do map well to a user experience designed around direct object manipulation are likely to get easier with each new release of iOS (and Android and Windows Whatever), and with each release there will be some tasks that weren’t doable before that are now.
I’ve also felt for some time—and I believe I’ve written this in the past, although I’m not going to dig for a link—that ideas can and will migrate from OS X to iOS over time, not just the reverse. And it’s always struck me as a little bonkers when people say that Apple’s “post-PC vision” will never include better communication between apps, smarter document sharing, and other power user features. OS X is friendlier than Windows and Linux, but it’s a far better OS for power users than either Windows or Linux, too. While I think it’s true that Apple has no intention of exposing the file system on iOS—let alone exposing shell scripting—that doesn’t mean that their answer to all shortcomings of iOS for power users is going to be “That’s not what iOS devices are for.”
So, this brings us around to Mark Gurman’s report that iOS 8 will have split-screen multitasking. I don’t know whether it’s true (although Gurman’s track record has by and large been pretty good), but its absence is one of the biggest shortcomings for power users on an iPad—as much as we may celebrate the wonders of focus that one app at a time gives us, in real world practice, working in one window while referring to the contents of another is something that happens all the time.
The big question for me is the feasibility of the “hybrid” model: tablets with hardware keyboards or laptops with touch screens. We don’t have the latter in the Mac world, but the Windows world not only has them, touch screens have become pretty common even on new, not-too-expensive models. Anecdotally, people like them. To me, a tablet with a hardware keyboard makes more sense than the touch laptop, simply because you can take the keyboard off and not bother with it most of the time. While I’ve joined in mocking the Surface’s implementation of this, it may truly be Microsoft’s implementation that’s at fault, not the whole concept.
At any rate, I’m not only not expecting the iPad to be subsumed by “phablets,” I’m expecting iOS to start delivering on distinctly “power user” features over the next few versions. I’m happy with my MacBook Pro in a way I wouldn’t be with only an iPad, and there will always be people who can say that. But the future of computing is a trend toward computing as appliance. Right now, there’s a measurable gap between what computing appliances do and what general computers do. And it sells Apple short to suggest that they never plan to address that gap with anything more than “we don’t think our customers need to do any of that.”
There’s an argument to be made that if you know a few basic cocktails, you actually know a lot of cocktails. There’s a whole family of drinks that stem from the Martinez: one part each gin and sweet vermouth, with a dash of maraschino liqueur (a clear dry cherry concoction). If you look at that as a template—base spirit, vermouth and an optional dash of bitters or liqueur—you can immediately see both the martini and the Manhattan there.
The Manhattan itself—classically two parts rye whiskey to one part sweet vermouth with a dash of bitters—can be squished into all sorts of interesting shapes. You can use dry vermouth with it and have a Dry Manhattan, or use a half-part each of both dry and sweet vermouth and have a “Perfect” Manhattan. Swap the rye for scotch and you have a Rob Roy. Swap the rye for Canadian whiskey, as many bars will when you ask for a Manhattan, and you have a bad Manhattan.
There’s a variant I love that Singlebarrel introduced me to. It’s called the Cuban Manhattan, which is a Perfect Manhattan made with dark rum. (To live up to the name, I’d suggest Flor de Caña Grand Reserve.) This is my take on it, though, using one of my favorite dark rums: a Venezuelan rum called Pampero Aniversario.
- 2 oz. Pampero Aniversario rum
- 1 oz. Carpano Antica sweet vermouth
- 1–2 dashes chocolate bitters
Stir with ice and strain into a chilled cocktail glass. Garnish with a twist of orange peel.
(I used Fee’s Aztec Chocolate Bitters because that’s what I happen to have, although the general consensus seems to be that Bittermens Xocolatl Mole Bitters are the best of the lot.)