There’s been an interesting conversation recently in the corners of the blogosphere I inhabit about developer life and the motivations for staying in it or leaving it, depending on one’s outlook. Ed Finkler kicked it off with the rather forebodingly titled “The Developer’s Dystopian Future,” in which he admitted:
My tolerance for learning curves grows smaller every day. New technologies, once exciting for the sake of newness, now seem like hassles. I’m less and less tolerant of hokey marketing filled with superlatives. I value stability and clarity.
Marco Arment responded: “I feel the same way, and it’s one of the reasons I’ve lost almost all interest in being a web developer.” And former developer Matt Gemmell chimed in with “Confessions of an ex-developer.” He writes:
I’m in an intriguing position on this subject, because I’m not a developer anymore. My mind still works in a systemic, algorithmic way—and it probably always will. I still automatically try to discern the workings of things and diagnose their problems. You don’t let go of 20+ years of programming in just a few months. But it never gets as far as launching an IDE.
While I don’t want to add a mere “me too” to the chorus, these posts strike chords. I’ve been a web developer off and on since the late ’90s and, up until about 2012, enjoyed it. But in the last few years I’ve found it harder not only to keep up but frankly a little harder to care. In 2010 or 2011 I might have started a project in PHP/Symfony or Python/Flask on the server side and used jQuery on the front end and been happy.1
A lot of web developers in Silicon Valley—and aspiring competitors—are terribly smart people. But the emphasis on Keeping Up With The Buzzwords means none of them can master all the technologies they’re using. They’re using technologies developed on top of other technologies they haven’t had time to master. And they’re constantly enticed to add even more new technologies promising even more new features to make development easier—layering on even more abstractions and magic that make it even harder for programmers to fully grasp what they’re working with. Our bold new future of Big Data and Ubiquitous Presence approaches, and it’s being built by crack-addled ferrets.
Before you object that these things do help you be more productive: I believe you. I do. But when you’re writing a new web app on top of two frameworks, two template systems, three “pre-compilers,” a half-dozen “helper” modules and everything all of those rely on, you’re taking a lot on faith. Prioritizing development speed over all else is a fantastic way to accumulate technical debt. I’ve heard the problems this causes described as “changing your plane’s engines while you’re in flight.” In a couple years the #1 killer for companies in this situation may just be “slower” startups that took the time to get the damn engine design right in the first place.
Matt Gemmell finds software development “really frightening” for other reasons—because it’s progressively more difficult for small software houses to stay solvent, ending up as sole proprietorships or “acquihired” into big corporations. Frankly, I don’t mind big corporations, and I’m okay with not being independent. But in the web industry itself you won’t escape Ferret Syndrome by heading to a big company. Matt may well be right that he could keep up with all that if he wanted to, but I’m not sure I can. I’m pretty sure I don’t want to.
Marco and Matt have both solved this by going independent—Marco as a developer and Matt as a writer. Making a living as an independent anything, though, is hard. They’ve both outlined the challenges for developers. I don’t want to say it’s harder for fiction authors, but I can say with some confidence it’s not any easier; exceedingly few authors make the bulk of their income from writing.2
For my part, I’ve moved back into technical writing. And, yes, at a Silicon Valley startup—but one doing solid stuff with, dare I say, good engine design. I’m surrounded by better programmers than I ever was, and my impression is that they’re interested in being here for the long haul. And I like technical writing; it pings both my writer and techie sides, without leaving me too mentally exhausted to work on my own stuff at other times.
As with Matt and (no longer tech) blogger Harry Marks, “my own stuff” is mostly fiction, even though I don’t often write about it here.3 Unlike Matt, I’d be happy to have some of my own stuff be development work—not just tinkering—again. I’d like to learn Elixir, which looks like a fascinating language to get in on the ground floor of (assuming it takes off). I’d like to do a project using RethinkDB. Maybe in Elixir!
But it might be a while. I’m still recovering from ferret-induced development fatigue.
Or Python/Django, or PHP/Laravel. If you argue too much over the framework you’re part of the problem. ↩
My own creative writing income remains at the “enough to buy a cup of coffee a month” level, but I’m happy to report that not only is it now high-end coffee, I can usually afford a donut, too. ↩
Which may eventually change, but “blogger with unsold novel” is the new “waiter with unsold screenplay.” (I do have a Goodreads author page.) ↩
I apologize for being back to sporadic post mode yet again. I started a new technical writing job about three and a half weeks ago and left for a writing workshop about a week and a half ago. While it’s not as if Lawrence, Kansas, has fallen off the edge of the Internet (web development nerds may know it’s the original home of Django), I haven’t had time to see the Apple WWDC keynote yet. I’m not sure I’ve even finished the Accidental Tech Podcast episode about it yet.
If you care about this stuff you’ve probably already seen all the pertinent information. iOS 8 is coming with a lot of the under-the-hood features that iOS needed, OS X Yosemite is coming with excessive Helvetica, those new releases will work together in ways that no other combination of desktop and mobile operating system pulls off (at least in theory), and oh hey we’ve been working on a new programming language for four years that nobody outside Apple knew about so in your face Mark Gurman.
Developers have generally reacted to this WWDC as if it’s the most exciting one since the switch from PowerPC to Intel, if not the switch from MacOS 9 to OS X, and there’s a lot of understandable tea leaf reading going on about what this all means. If you’re Yukari Iwatani Kane, it means Apple is doomed, but if you’re Yukari Iwatani Kane everything means Apple is doomed. If you’re an Android fan, it means you get to spend the next year bitching at Apple fans about how none of this is innovative and we’re all sheep, so you should be just as happy as we are.
A lot of this stuff is about giving iOS capabilities that other mobile operating systems already had. I’m surprised by how many people seem flabbergasted by this, as if they never expected Apple to allow better communication between apps and better document handling—as if, even though those were obvious deficiencies, the fact that they hadn’t been addressed yet must mean Apple didn’t think they were important, and we’d all better just work harder at rationalizing why that functionality didn’t fit into The Grand Apple Vision.
But that never made a lot of sense to me. While Apple certainly has a generous share of hubris,1 they’ve never deliberately frozen their systems in amber. While people describe Apple’s takeaway from their dark times in the ’90s as “own every part of your stack that you can,” I’m pretty sure that another takeaway from the Copland fiasco was “don’t assume you’re so far ahead of everyone else that you have a decade to dick around with shit.” While MacOS System 7 was arguably better than Windows 95 in terms of its GUI, in terms of nearly everything else, Win95 kicked System 7’s ass. And kicked System 8’s ass. (It didn’t kick System 9’s ass, but System 9 was followed within months by Windows 2000, which did in fact kick System 9’s ass.)
I still prefer the iOS UI to any of the Android UIs that I’ve used, but that can only hold for so long. Around the time iOS 7 launched, I heard that a lot of under-the-hood stuff intended for that release got pushed to iOS 8 after Apple decided iOS 7’s priority was the new UI—a defensible call, but a dangerous one. So a lot of stuff that came out in iOS 8 didn’t surprise me. If Apple tried to maintain iOS indefinitely without something like extensions and iCloud Drive, they’d lose power users in droves. And Apple does care about those users. If Apple is in part a fashion company, as critics often charge, then power users are tastemakers. They may not be a huge bloc, but they’re a hugely influential bloc.
The biggest potential sea change, though, is what this may mean about Apple’s relationship with the development community. Many years ago, Jean-Louis Gassée described the difference between Be Inc., the OS company he founded, and Apple with, “We don’t shit on our developers”—and the implied contrast succinctly captures the relationship Apple has had with their software ecosystem for a long time.2 I don’t know that Apple has ever addressed so many developer complaints at once with a major release.
That’s potentially the most exciting thing about this. While Android has had a lot of these capabilities for years, I haven’t found anything like Launch Center Pro or Editorial in terms of automation and scriptability, and those apps manage to do what they do on today’s iOS. Imagine what we’ll see when these arbitrary restrictions get lifted.3
Apple’s hubris is carved from a solid block of brushed aluminum. ↩
One might argue that Be went on to do exactly that, but that’s poop under the bridge now. ↩
I know “arbitrary” will be contentious, but I’m sticking by it. ↩
On Hacker News, I came across mention of Earnest, a “writing tool for first drafts.” It follows the lead of minimal text editors everywhere—no real preferences to speak of and no editing tools beyond what you’d get in Notepad or TextEdit. Well, no: less than that, because most of these don’t allow any formatting, except for the ones that have thankfully adopted Markdown.
Ah! But Earnest goes out of its way to give you less than that because it prevents you from deleting anything. Because first draft.
The first Hacker News comment as I write this is from someone who wrote a similar program called “Typewriter”; he says, “It’s a little more restrictive since you can’t move the caret.”
Okay. Look. Stop. Just stop.
Someone also commented that it’s reminiscent of George R. R. Martin writing A Song of Ice and Fire on a DOS machine. To which I replied: no, it isn’t. Because George’s copy of WordStar 4? It has a metric ton more functionality than every goddamn minimalist text editor I’ve seen. Yes, even the ones using Helvetica Neue.
In 1994, I was using a DOS word processor, Nota Bene. (It has a Windows descendant that’s still around.) It supported movement, selection, deletion and transposition by word, sentence, line, paragraph and phrase. It could near-instantly index and do Google-esque boolean searches on entire project folders, showing you search terms in context. It had its own multiple-file and multiple-window support and, as I recall, multiple clipboards. It had multiple overwrite modes, so you could say “overwrite text I’m typing until I hit the end of a line” or “until I hit any whitespace” and then go to insert. It had oodles of little features that were all about making editing easier.
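None of this is rocket science to implement, either. Here’s a toy sketch in Python of the kind of sentence-boundary movement primitive I mean—an illustration of the concept, not anything resembling Nota Bene’s actual internals:

```python
import re

# Toy sketch: jump the caret to the start of the next sentence.
# Real editors also handle abbreviations, ellipses and dialogue;
# here, a ., ! or ? (plus closing quotes/brackets) followed by
# whitespace counts as a sentence boundary.
SENTENCE_END = re.compile(r'[.!?]["\')\]]*\s+')

def next_sentence(text: str, caret: int) -> int:
    """Return the caret position at the start of the next sentence."""
    match = SENTENCE_END.search(text, caret)
    return match.end() if match else len(text)

prose = "Call me Ishmael. Some years ago I went to sea. It was damp."
caret = next_sentence(prose, 0)
print(prose[caret:])  # "Some years ago I went to sea. It was damp."
```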
For some reason, this is a task we’ve all kind of given up on in the prose world. Code editors can do a lot of this. But when it comes to prose, we’re all about making beautiful editors whose core functionality is preventing me from editing. Because that frees my creativity!
Except that writing gets better through editing. I could use both Nota Bene and Earnest for my first draft—but I could only use NB for any future drafts.
I don’t mean to suggest that Earnest—or any of these Aren’t I A Pretty Pretty Minimalist programs—is a terrible idea. If Earnest helps free your inner Hemingway, awesome. Knock back a daiquiri for me.
But couldn’t a few people start working on tools to make the second draft better?
I’ve seen a number of articles recently declaring the iPad to be, in effect, a failed experiment: not powerful enough to replace a full computer for “power user” productivity tasks, and with little utility to separate it from phones with oversized screens. Yet two or three years ago many of these same writers were declaring the iPad—and tablets in general, but really just the iPad—to be the next evolution in personal computing, as big a leap forward as the GUI and mouse.
I was a little contrarian about that, writing this time last year that Steve Jobs’s famous analogy of PCs being trucks and tablets being cars might be wrong:
While each new iteration of iOS […] will give us more theoretical reasons to leave our laptops at home or the office, I’m not convinced this is solely a matter of missing operating system functionality. It may be that some tasks just don’t map that well to a user experience designed around direct object manipulation.
While I still think this is true, I’m feeling a little contrarian again, because I believe the first sentence in that paragraph is still true, too. Tasks that do map well to a user experience designed around direct object manipulation are likely to get easier with each new release of iOS (and Android and Windows Whatever), and with each release there will be some tasks that weren’t doable before that are now.
I’ve also felt for some time—and I believe I’ve written this in the past, although I’m not going to dig for a link—that ideas can and will migrate from OS X to iOS over time, not just the reverse. And it’s always struck me as a little bonkers when people say that Apple’s “post-PC vision” will never include better communication between apps, smarter document sharing, and other power user features. OS X is friendlier than Windows and Linux, but it’s a far better OS for power users than either Windows or Linux, too. While I think it’s true that Apple has no intention of exposing the file system on iOS—let alone exposing shell scripting—that doesn’t mean that their answer to all shortcomings of iOS for power users is going to be “That’s not what iOS devices are for.”
So, this brings us around to Mark Gurman’s report that iOS 8 will have split-screen multitasking. I don’t know whether it’s true (although Gurman’s track record has by and large been pretty good), but this is one of the biggest shortcomings for power users on an iPad—as much as we may celebrate the wonders of focus that one app at a time gives us, in real world practice, working in one window while referring to the contents of another is something that happens all the time.
The big question for me is the feasibility of the “hybrid” model: tablets with hardware keyboards, or laptops with touch screens. We don’t have the latter in the Mac world, but the Windows world not only has them; touch has become pretty common on new and not-too-expensive models. Anecdotally, people like them. To me, a tablet with a hardware keyboard makes more sense than the touch laptop, simply because you can take the keyboard off and not bother with it most of the time. While I’ve joined in mocking the Surface’s implementation of this, it may truly be Microsoft’s implementation that’s at fault, not the whole concept.
At any rate, I’m not only not expecting the iPad to be subsumed by “phablets,” I’m expecting iOS to start delivering on distinctly “power user” features over the next few versions. I’m happy with my MacBook Pro in a way I wouldn’t be with only an iPad, and there will always be people who feel that way. But the future of computing is a trend toward computing as appliance. Right now, there’s a measurable gap between what computing appliances do and what general computers do. And it sells Apple short to suggest that they never plan to address that gap with anything more than “we don’t think our customers need to do any of that.”
There’s an argument to be made that if you know a few basic cocktails, you actually know a lot of cocktails. There’s a whole family of drinks that stem from the Martinez: one part each gin and sweet vermouth, with a dash of maraschino liqueur (a clear dry cherry concoction). If you look at that as a template—base spirit, vermouth and an optional dash of bitters or liqueur—you can immediately see both the martini and the Manhattan there.
The Manhattan itself—classically two parts rye whiskey to one part sweet vermouth with a dash of bitters—can be squished into all sorts of interesting shapes. You can use dry vermouth with it and have a Dry Manhattan, or use a half-part each of both dry and sweet vermouth and have a “Perfect” Manhattan. Swap the rye for scotch and you have a Rob Roy. Swap the rye for Canadian whiskey, as many bars will when you ask for a Manhattan, and you have a bad Manhattan.
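If you want to be insufferably nerdy about it, the whole family fits in one small data structure. A throwaway sketch in Python, with proportions in parts as given above (martini ratios are eternally contested; take those as placeholders):

```python
from collections import namedtuple

# The Martinez-family template: base spirit, vermouth(s), and an
# optional dash of bitters or liqueur. Proportions are in parts.
Cocktail = namedtuple("Cocktail", ["base", "vermouth", "dash"])

family = {
    "Martinez":          Cocktail({"gin": 1},    {"sweet": 1},               "maraschino liqueur"),
    "Martini":           Cocktail({"gin": 2},    {"dry": 1},                 None),
    "Manhattan":         Cocktail({"rye": 2},    {"sweet": 1},               "bitters"),
    "Dry Manhattan":     Cocktail({"rye": 2},    {"dry": 1},                 "bitters"),
    "Perfect Manhattan": Cocktail({"rye": 2},    {"sweet": 0.5, "dry": 0.5}, "bitters"),
    "Rob Roy":           Cocktail({"scotch": 2}, {"sweet": 1},               "bitters"),
}
```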
There’s a variant that I’ve been introduced to by Singlebarrel that I love. It’s called the Cuban Manhattan, which is a Perfect Manhattan made with dark rum. (To live up to the name, I’d suggest Flor de Caña Grand Reserve.) This is my take on it, though, using one of my favorite dark rums: a Venezuelan rum called Pampero Aniversario.
2 oz. Pampero Aniversario rum
1 oz. Carpano Antica sweet vermouth
1–2 dashes chocolate bitters
Stir with ice and strain into a chilled cocktail glass. Garnish with a twist of orange peel.
(I used Fee’s Aztec Chocolate Bitters because that’s what I happen to have, although the general consensus seems to be that Bittermens Xocolatl Mole Bitters are the best of the lot.)
Private clubs and open bars: on App.net vs. Twitter
App.net is an ambitious experiment to create a for-profit “social infrastructure” system for the Internet: you could use it to build something like Twitter, or a file syncing or storage service, or a notification service. In practice the only one that got any attention, about a year ago, was App.net’s vision of a better Twitter. This came shortly after it became clear that Twitter had decided it was in their own best interest to knife their development community.
Yet like it or not they’re still competing against Twitter, and Twitter is “free.” (The air quotes are important.) App.net started out at a price of $50 a year, then dropped to $36, then added a free tier of deliberately minimal utility. $36/year isn’t an outrageous fee, but it’s infinitely higher than free. It’s too high to even be a credible impulse buy.
Even if you didn’t see the stories yesterday you can write the next part. App.net discovered this model isn’t working well after all, and they’re keeping the service open but laying off everyone. This seemingly crazy move may work, but it reduces App.net to a side project.
Maybe App.net needed cheaper pricing, or maybe it just couldn’t work competing with “free” no matter what. Maybe focusing on the kinds of people who give a poop about Twitter’s relationship to its development community wasn’t the right tack. Maybe its problem is that it was a private haven for rich white tech dudes, as some critics snarked.
Maybe, although I’ll admit the last one grates on me. We’re mocking App.net for having a cover charge—the only legitimate beef, as there’s no evidence they discriminate based on gender, ethnicity or bank statements—then going on to the huge club down the street, the one with ads on every available bit of wall, because they give us free beer.
But maybe App.net has misdiagnosed the problem.
What’s Twitter’s functionality? Broadcast notifications sent to people who specifically subscribe to them, with simple mechanisms for setting them private or public (i.e., direct messaging) and replying. That mechanism can act as text messaging, group chat, link sharing, image sharing, status updates, site update notifications, news alerts, and more. It’s a terrific concept.
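Strip away the scale problems and the core mechanism is almost embarrassingly small. A toy sketch in Python—nothing like Twitter’s actual architecture, just the concept:

```python
from collections import defaultdict

# Toy model of broadcast notifications: users subscribe to a source,
# and a post fans out to every subscriber's inbox unless it's
# addressed to one person (a direct message).
subscribers = defaultdict(set)   # source -> subscriber ids
inboxes = defaultdict(list)      # user -> received notifications

def follow(user, source):
    subscribers[source].add(user)

def post(source, text, to=None):
    """Public broadcast by default; pass `to` for a direct message."""
    recipients = [to] if to else subscribers[source]
    for user in recipients:
        inboxes[user].append((source, text))

follow("alice", "nasa")
post("nasa", "Launch at 9:00 UTC.")                 # broadcast
post("nasa", "Press pass confirmed.", to="alice")   # direct message
print(inboxes["alice"])
```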
App.net’s best insight was that making notifications “infrastructure” the way email and HTTP are has amazing potential. Twitter has no interest in letting other people use their infrastructure except under the strictest terms. That’s the problem App.net’s model solves. Good for them.
But as much as this is anathema to the Valley’s technolibertarian mindset, infrastructure only works as a common good. Suppose CERN had spun off WebCorp to monetize HTTP. They could offer “free” web with tightly dictated terms on how we interact with their ecosystem, or they could be liberal with those terms and exact a connection toll. But neither of those scenarios would get us to where we are now. The Internet is the Internet because it’s built on protocols that are free. Not free-with-air-quotes, just free period.
"Wait, but if there was free infrastructure to do what Twitter does, how would anyone make money?" At first, by selling commercial clients and servers, although that market would be likely to decline over time. Some companies could run commercial notification networks with access charges to operators. (This vision needs a decentralized network like email to spread operational costs out as broadly as possible, but that network must deliver notifications in fairly close to real time and in the proper order, which is a huge problem.)
In the long run, broadcast notification services only survive if they do become like email services. App.net isn’t making enough money to sustain a full-time business, but so far Twitter isn’t either. They both believe the value is in the infrastructure, and they’re both wrong. The value comes from making the infrastructure free.
Meanwhile, I find the bits of schadenfreude I’m seeing on this—not the “I didn’t expect this to work and I’m disappointed I was right” posts, but the “ha ha, private club goes under” posts—to be a little disheartening. I like free beer, too, but the managers at Club Tweet are starting to look a bit desperate.
Turns out the answer is: next month. Ben Popper and Ellis Hamburger, The Verge:
Today, the company is announcing a brand new app called Swarm that will exist alongside the current Foursquare app. Swarm will be a social heat map, helping users find friends nearby and check in to share their location. A completely rewritten Foursquare app will launch in a month or so. The new Foursquare will ditch the check-in and focus solely on exploration and discovery, finally positioning itself as a true Yelp-killer in the battle to provide great local search.
Foursquare’s CEO has been saying for the past year that the check-ins and points and badges and all were never their real goal, but I’m dubious. The big question ahead is whether the “new” Foursquare is really competitive with Yelp. Foursquare has listings for just as many businesses as Yelp, but it doesn’t have ratings for nearly as many. But worse, for an app that wants to focus on “exploration and discovery,” the algorithm that drives Foursquare’s ratings shows a noticeable bias toward places that are both popular and get a lot of regulars. That sounds reasonable, but a lot of times you’re a regular at a place because it’s close to your office. Both Yelp and Foursquare are going to be able to tell you about that huge Mexican place that’s popular with all the tech workers for business lunches, but Yelp’s a lot more likely to tell you about the dive taqueria worth going out of your way for.
The great bubbles in history, right up through the dotcom fiasco and last decade’s real estate unpleasantness, have typically been mass phenomena. […] But just because bubbles typically don’t inflate until the small-timers get involved doesn’t mean you can’t have a bubble without them.
A short but interesting piece which fits with my own somewhat cynical musings on this topic. Also:
The good news is that most of the money lost when the bubble bursts will come from private investors, not people invested in the stock market. The only average folks likely to suffer are those who make their living in the [San Francisco] Bay Area.
Gideon Lewis-Kraus writes a brilliant piece of long-form journalism in Wired this month, “No Exit,” following the path of a more typical startup in San Francisco than the ones that get huge payoffs.
As a self-taught programmer with no college degree at all, this passage particularly resonated with me:
It’s extremely difficult to hire talented engineers in the Valley unless you’ve got incredible PR, can pay a fortune, or are offering the chance to work on an unusually difficult problem. Nobody was buzzing about them, and they had no money, but the upside of having a business that relied on serious machine learning was that they had worthy challenges on the table. On January 4, they made an offer to exactly the sort of engineer they needed, Tevye. He had a PhD in AI from MIT. Just to contextualize what that means in Silicon Valley, an MIT AI PhD can generally walk alone into an investor meeting wearing a coconut-shell bra, perform a series of improvised birdcalls, and walk out with $1 million. Nick and Chris had gone to good schools of modest profile—Nick to the University of Puget Sound, Chris to the University of Vermont—and while Nick also had a Harvard business degree, both were skeptical about the credential fetish of the Valley. They were happy to play the game when they could, though.
The assumption has clearly become that it’s easier to teach good coding practices to people who know all the algorithms already than vice-versa. I can’t definitively say that’s the wrong approach, although I have my doubts. I can definitively say that it’s pushing me more toward pursuing technical writing than I was thinking about even a half-year ago.
In any case, while my own experience with startups hasn’t been nearly as stressful as what these guys are going through, I’ve seen just enough of that angle to make me wonder whether SF/Silicon Valley’s “rock stars only” mindset is healthy in the long run. (Ironically, I’m getting more contacts than ever from people inexplicably convinced I’m a Rock Star Ninja Brogrammer looking to wrangle Big Data High Scalability DevOps Buzzword Bleepbloop problems the likes of which the world has never seen. Sorry. I know enough of the words you are using to help you document your brilliant stuff, though.)
Eich’s defense — from himself and others — largely boiled down to “well, you have to include all viewpoints if you’re going to call yourself inclusive,” which is a fine argument until one considers that including the side that says you can’t include the other side is the rhetorical equivalent of a division by zero error. Given how strongly Eich had been digging in his heels, one suspects he didn’t step down as much as had the ladder yanked out from under him.
The net is abruptly abuzz with news that Facebook bought Oculus VR, the partially-Kickstarted virtual reality company backed by game engine wunderkind John Carmack. And, it’s abuzz with a lot of fairly predictable hair-pulling and shirt-rending.
It’s certainly interesting news, and on the surface bemusing—although no more so than half of what Google buys these days. Facebook seems to be pretty interested in keeping abreast or ahead of Google, too. Hmm. Does Google have any VR product that’s sort of like Oculus Rift? Something that rhymes with “crass”? I’m sure there’s something along those lines I’ve been hearing about.
Frankly, despite all the hair-pulling I don’t think this is going to make a lot of difference to Oculus Rift users. When it comes to handling acquisitions Facebook seems to be more like Microsoft than Google or Apple. And that’s actually a good thing. Microsoft has certainly done in good products through acquisitions, but look at Bungie and, before them, Softimage, one of the leading high-end 3D animation programs of the 1990s. In both cases, the companies were given a great deal of autonomy—what Microsoft wanted them for actually required that. Bungie’s Halo essentially defined the Xbox gaming platform, and Softimage got NT into animation and movie studios. (I suspect this was a much bigger nail in SGI’s coffin than it’s usually given credit for.)
Facebook needs the Rift to be a successful product, and for it to be a successful product they have to not screw with it. They don’t want to take it away from gamers—what they want is, well, pretty much what Zuckerberg wrote:
Imagine enjoying a court side seat at a game, studying in a classroom of students and teachers all over the world or consulting with a doctor face-to-face — just by putting on goggles in your home.
What Facebook wants, ultimately, is to build the kind of communication device that up until now only existed in science fiction. I’m not sure any company can actually pull that off, but I can’t think of another company that genuinely has a better shot. And as much as many things about Facebook continue to exasperate me, I’ve been coming to a somewhat reluctant admiration not only of their ambition but their engineering.
I can’t say this purchase makes me more likely to use either Oculus products or Facebook, but it’s a very interesting milepost. I’ve thought for years that Facebook the product has a limited lifespan, but Facebook the company may have a much longer—and far more interesting—one than I would ever have guessed.
Hundreds of open source packages, including the Red Hat, Ubuntu, and Debian distributions of Linux, are susceptible to attacks that circumvent the most widely used technology to prevent eavesdropping on the Internet, thanks to an extremely critical vulnerability in a widely used cryptographic code library.
Goodin argues that the bug is worse than Apple’s highly publicized “goto fail” bug, as it appears that it may have gone undetected since 2005.
I’d like to pretend I’m above feeling a bit of schadenfreude given that some of the loudest critics of Apple tend to be the open source zealots, but I’m not. One of the more religious aspects of Open Sourcitude is the insistence that all software ills stem from proprietary, closed source code. If the code is open, then in theory there should be fewer bugs, more opportunity for new features, and a lower chance of the software dying due to abandonment by its original developers for whatever reason.
But in actual, real world practice, software with few users tends to stagnate; software that becomes popular tends to keep being developed. This holds true regardless of the license and access to the source code. There are a lot of fossilized open source projects out there, and a lot of commercial products with vibrant communities. Being open source helps create such communities for certain kinds of applications (mostly developer tools), but it’s neither necessary nor, in and of itself, sufficient. And no one—not even the most passionate open source developer—ever says something like, “You know what I’d like to do tonight? Give GnuTLS a code security audit.”
While I only used the service briefly, this is still unfortunate news. Editorially is (was) a collaborative online writing tool using Markdown that I was planning to use for—yes—editorial work.
I’m not sure what to take away from this; Editorially was one of those free services that planned to figure out how to make money later, and that’s always a dicey business. Yet as Everpix’s failure demonstrated, being a paid service from the get-go isn’t automatically a win. Editorially’s management concluded that “even if all our users paid up, it wouldn’t be enough”; would it have been if they’d been a paid service from the beginning? Maybe, but of course they’d have had fewer users—probably far fewer—and so that might not have been sustainable, either.
I was going to express some dubiousness about the headline of this WSJ article—at first it suggests revamping a mythical set-top box that Apple has yet to produce, rather than the one they’ve been producing for years—but that probably isn’t fair. It could be the actual Apple TV that they’ve been selling.
So instead, knock them for the first sentence.
"Apple Inc. appears to be scaling back its lofty TV industry plans."
In theory what this means is that Apple “envisages working with cable companies, rather than competing against them.” Well, yes. As much as some may dream of Apple displacing cable companies, that’s going to be pretty hard to do in the near-to-mid term for financial reasons alone. Netflix and Amazon are starting to compete with cable networks by paying for original content, but they established themselves first. Apple will need to do the same thing. (If they even want to get into that side of the business.)
If the WSJ report is correct, Apple’s service sounds like it will be nerfed to the point of bringing nothing new to the table—but that doesn’t mean it will remain nerfed forever. Even so, for right now they have the same problem that always faced Hulu: a profound disconnect between what their customers want to get, and what the media companies are comfortable giving. Apple got nearly everything they wanted when they launched the iPhone because Cingular was desperate enough to agree to their demands. As much as techies believe Hollywood should be that desperate, they’re not.
Did Apple ban a Bitcoin wallet because they fear competition?
Apple pulled Blockchain, a Bitcoin “wallet” app, from their store, and Blockchain’s CEO is sure this is because Apple sees them as a potential competitor. “I think that Apple is positioning itself to take on mobile payments in a way they haven’t described to the public and they’re being anti-competitive,” he told Wired.
As usual, though, this comes up because it fits into the default narrative about What Apple Is Like, not because there’s either direct or even circumstantial evidence to support it. While Apple may not welcome competition with open arms, there are dozens of applications on both the iOS and Mac app stores that directly compete with Apple’s products and services. Pandora, Netflix, Hulu, just about every app Amazon makes—by this logic, shouldn’t these all be gone?
For that matter, there are already mobile payment applications in the store that are far more direct competition for this still-hypothetical Apple payment system than Blockchain is. I can walk into (some) stores and pay directly with the Square or Paypal apps, but I can’t walk into a store and pay directly with Blockchain; it has no infrastructure specifically designed for point-of-sale, and even if the store took Bitcoin, I’d rather just swipe a credit card than diddle around with paying by SMS.
So why did Apple pull Blockchain? As usual, we don’t know, because Apple is about as communicative as a sack of flour. But given that they pulled Coinbase a couple months ago, the signs point to this having little to do with competition and a lot to do with Bitcoin. Apple has a rapidly expanding presence in China—whose antipathy toward Bitcoin is crystal clear—and they’re a strikingly conservative company when it comes to financial and legal matters; Bitcoin’s reputation as a haven for illegal activity may not sit well with them.
Twitter reported a net loss of $511 million, compared with an $8.7 million loss a year ago.
However, using a closely watched measure of income that excludes stock-based compensation and other expenses, the company reported a profit of $9.7 million, or 2 cents a share, in the quarter, compared with a loss of $271,000 a year ago.
Oh, they’re on the page that looks at actual numbers. Must be some old media thing!
Revenue, of course, isn’t profit, so the headlines arguably aren’t contradictory. But they’re telling of what the different media outlets are focusing on. To the tech press, slowing user growth is the real concern—a per-quarter loss of over half a billion dollars, no big deal. Most of it’s just stock-based compensation, right?
The NYT goes on to report that Twitter projects full year revenue in 2014 to be $1.15B, and adjusted EBITDA (the “closely watched measure of income”) to be $150–180M. That does sound promising, but I honestly can’t tell whether it means they’ll still actually be losing money. I’m fairly sure most network providers don’t accept payment in EBITDAcoin yet.
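For what it’s worth, the size of the exclusion is easy to back out from the quoted figures, even if the conclusion isn’t. A back-of-the-envelope sketch in Python:

```python
# Back-of-the-envelope from the figures quoted above (in $ millions).
net_loss = -511.0        # GAAP net loss for the quarter
adjusted_profit = 9.7    # the "closely watched measure of income"

# Everything excluded to get from one number to the other--mostly
# stock-based compensation, per the Times:
excluded = adjusted_profit - net_loss
print(f"Excluded expenses: ~${excluded:.1f}M")   # ~$520.7M
```

So whether that projected $150–180M of adjusted EBITDA translates into an actual profit depends entirely on how big the excluded pile turns out to be in 2014.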
As expected, white smoke came out of Microsoft’s chimney early this morning and the board named Satya Nadella their new CEO. Microsoft watchers, both inside and outside the company, all seem to like him and agree he’s the best choice, but I’m not sure that’s actually a good sign. Microsoft may not have needed a “turnaround artist” like some had suggested, but they do need someone willing to make risky, potentially unpopular moves—something the company historically tends to bungle through half-measures when they attempt it (Windows 8).
Nadella is deeply steeped in Microsoft’s current company culture, promoted to CEO from EVP of “Cloud and Enterprise.” Is this a guy who’s really willing to make bold moves? If he is, I’d be surprised if they aren’t moves toward doubling down on what he knows best: business services. Earlier I wrote:
While picking [Nadella] might be a statement about re-invigorating Microsoft’s engineering culture, it might also be a statement about re-invigorating their focus on big business: all of Nadella’s background seems to be in enterprise computing.
In that same article, I wrote about my favorite also-ran, Stephen Elop. Those who felt sure that an Elop-led Microsoft would crash and burn are no doubt breathing a sigh of relief, but I still wonder about this:
The big difference Elop might bring to Microsoft is—I’d like to find a word that’s both less polarizing and less squishy than this one, but I can’t—a sense of taste. Elop understands the importance of user experience. I’m not sure there’s anyone at Microsoft who would have come up with devices as, well, cool as the Nokia Lumia line. Not even the Surface tablets have the same quality industrial design.
I don’t want to imply that Nadella is tasteless, but I would be honestly surprised if user experience is something he really gets. It’s not a prized skill in the enterprise world. We’ll see.
Google selling Motorola Mobility to Lenovo for a substantial loss seems to say something significant about Google, but nobody seems to be entirely sure what. Short attention span? Finally coming to their senses? That they no longer need Motorola’s hardware expertise now that they have Nest’s? (Bingo.)
One aspect that’s been widely reported is that Google is keeping Motorola’s patents with them, which seems to be yet another baffling move. From the standpoint of “patents as offensive weapons,” Motorola’s are certainly a failure, and it seems Google put a lot more stock in that aspect than they should have.
But that’s a new fashion, pushed by the rise of companies whose business model is essentially to flood the patent office with claims of dubious “inventions” that examiners have neither the time nor training to evaluate properly. Whether you approve of patents conceptually or not, the vast majority of the ones owned by a company like Motorola—in the mobile communications space from the very beginning—aren’t the dippy kinds we roll our eyes at when they get in the news. They really did create a lot of the technology in widespread use in the industry.
Yet this is a double-edged sword. It turns out that it’s the stupid patents that are most easily weaponized. Patents that are so valuable they define industries tend to get quickly and widely licensed. Such patents are subject to FRAND, “fair, reasonable and non-discriminatory” licensing rules, and companies that Google could plausibly sue—excuse me, defend against—likely already have license agreements.
So what is Google left with by hanging onto these patents? Nothing? Well, no. It’s still nearly 25,000 patents. It’s hard to figure out how much licensing they’re already bringing in because it doesn’t seem that Google breaks out their income that way, but when Motorola Mobility was an independent company it was in the neighborhood of $250 million a year. Is it possible they could get more money out of that by carefully reviewing the portfolio? Sure. They couldn’t increase it ten-fold, but doubling it isn’t wildly unreasonable.
Google estimated the patent portfolio as being worth $5.5B. Again, that doesn’t seem ridiculous; after all, a patent’s total value is what you get from it during the duration of its licensing. (And what you get from it when you’re actually using it, of course; if Google makes products that are covered by these patents, they’re saving money by not paying royalties to Lenovo.) The press has made a big deal about Motorola being smacked down earlier this year in their quest to get Microsoft to pay $4B a year in royalties, but that was pretty clearly a pie-in-the-sky shot: if the estimation of the portfolio had been based on such claims being successful, it’d have been a lot higher. Google probably needs to double the royalty revenue from patents over what Motorola was getting to hit that $5.5B valuation, but it’s not nuts.
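The back-of-the-envelope math—mine, to be clear, not Google’s—works out neatly. Utility patents run 20 years from filing, so call the portfolio’s remaining useful life roughly a decade:

```python
# Rough sanity check of the $5.5B valuation--my arithmetic, not
# anything Google has published.
current_royalties = 0.25         # $B/year, Motorola's old neighborhood
doubled = current_royalties * 2  # the "not nuts" doubling
remaining_life = 11              # years of useful portfolio life, roughly

print(f"~${doubled * remaining_life:.1f}B")   # ~$5.5B
```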
The most interesting question to me moving forward from this point is what Google actually does with hardware. It seems certain that the Nest design team will be put to work on more than thermostats and smoke detectors. Stratechery’s Ben Thompson wrote (quoted by John Gruber) that it’s a “tiresome myth” that Google wants to be more like Apple, i.e., building their own hardware, and that selling Motorola puts that “to bed once and for all.” Really? I normally find Thompson pretty savvy, but I think we’re going to see Nest-designed Nexus phones and tablets within a year.
Columbia University student Kevin Chen attempted to sign up to be able to process credit cards with Square, and was put through one of those somewhat dubious “identity verification” quizzes that credit bureaus provide (“which of the following addresses have you lived at”). It didn’t work—he doesn’t have much of a financial history to draw on—and he discovered that when it comes to actually solving problems like this, Square isn’t any better than PayPal and other competitors. (He wrote that Amazon Payments and Citibank required him to fax copies of his driver’s license and Social Security card and sniffed, “how backwards,” although that strikes me as an understandable hoop to jump through when online verification fails for exactly this reason.)
Next, I tried to contact customer service. After all, there’s plenty of evidence that I’m real: I visited their office once, they gave my friends a ton of Square Readers, and I even interviewed for a job there. A week later, nobody had responded.
"I’ve honestly never seen such a user-hostile design in financial services," Chen writes, "and I use PayPal! Design is how it works, not how it looks."
Well… no. As far as financial services go, the design—and I do mean “how it works”—of Square is actually still pretty damn good. And despite all the crap that gets thrown at PayPal, its user experience (UX) design is also pretty damn good.1
PayPal’s weak spot is their policies, which frequently come across as “the customer is always wrong.” You do anything that deviates from their concept of ordinary usage and you get shut down. What tends to get them in news stories as a mustache-twirling villain is how frequently this policy shuts down things like charity drives. It may be unusual to see $40K of transfers from thousands of people stuffed into a single PayPal account, but by now you’d think PayPal would flag those usage patterns and have a human check to see whether it’s the annual Cute Puppies For Cancer-Stricken Orphans drive before pulling the plug.
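That flag isn’t hard to write, either. A crude sketch in Python of the heuristic I mean—thresholds invented for illustration, and resembling nothing inside PayPal’s actual fraud systems:

```python
# Crude sketch: route suspicious-but-plausibly-legitimate activity to
# a human instead of freezing the account outright. All thresholds
# here are made up for illustration.
def triage(inbound_count, inbound_total_usd, typical_monthly_usd):
    spike = inbound_total_usd > 10 * max(typical_monthly_usd, 1)
    many_small_senders = inbound_count > 1000
    if spike and many_small_senders:
        # Thousands of small payments usually means a charity drive
        # or group fundraiser, not a fraud ring: ask a person first.
        return "hold for human review"
    if spike:
        return "automated limits"
    return "ok"

print(triage(inbound_count=3500, inbound_total_usd=40_000,
             typical_monthly_usd=800))   # hold for human review
```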
What Kevin Chen is having an issue with here isn’t the part of UX that people think about when they usually think of UX. The problem is how these services handle failure points. While fraud is a very big deal for financial service companies—to a point where they’re better off being painfully conservative—there are counter-examples extant to show that they’re not required to be jerks about it.
The new banking service Simple rightly touts their design and usability, but what sets them apart is how good the experience is when stuff doesn’t work. It’s rare for them to take more than a couple hours to respond to support requests, and they make it very easy to get an actual person on the phone if you want to. PayPal isn’t as good, but they’ve still got a customer support department. For all of the bitching we do about them, they not only work in the vast majority of cases, they’ll at least try to fix things when they don’t, however lumbering they may be about it.
Square, however, doesn’t have a support phone number, and their official attitude seems to be that if their service works as intended—that is, nothing that you experienced is a result of a bug on their end—then that’s that. This is part of the same attitude I’ve accused Google of having: all problems are ultimately engineering problems. (Google is notoriously bad for customer service, even with their paying customers.) Kevin Chen’s problem is not an engineering problem, and thus might as well not exist.
I suspect this comes across as more disappointing than it might otherwise be because it’s Square, the company that’s supposed to be revolutionizing the payment industry. We expect this from PayPal, but not Square. And it’s grating to realize that we probably have those expectations backward.
Its developer experience is pretty dreadful, although their new REST API looks like it may not suck. ↩
The Federal Communication Commission’s net neutrality rules were partially struck down today by the US Court of Appeals for the District of Columbia Circuit, which said the Commission did not properly justify its anti-discrimination and anti-blocking rules.
Those rules in the Open Internet Order, adopted in 2010, forbid ISPs from blocking services or charging content providers for access to the network. Verizon challenged the entire order and got a big victory in today’s ruling. While it could still be appealed to the Supreme Court, the order today would allow pay-for-prioritization deals that could let Verizon or other ISPs charge companies like Netflix for a faster path to consumers.
I’m conflicted about net neutrality. Not about the notion that the Internet should remain open to everyone, even if ensuring that requires (gasp) regulatory interference. But I’m not so sure that “pay-for-prioritization deals” are intrinsically evil.
Way back in the dark ages—the late 1990s—I managed capacity on a major data network. Even back then, our switches had the ability to prioritize traffic whose packets were tagged as being certain kinds of data. (We called it “QoS,” quality of service.) The technology we were expecting to replace frame relay—asynchronous transfer mode—had this functionality built in at the protocol level. The rationale was that some kinds of traffic really do need higher priority. You see, IP was designed with no guarantees of low latency or even of packets arriving in the right order. For a terminal connection, a web page (or a Gopher page!) or a file transfer—for any of the kinds of stuff the Internet was originally being used for in the early ’90s and before—this was okay. For anything that involves real-time streaming, though, the results can range from suboptimal to unusable. However, ATM never truly took off; during the original dotcom boom, telecom companies and the equipment manufacturers selling to them raced to build so much big-pipe infrastructure that it became easier and cheaper to solve network congestion and latency problems by just throwing more bandwidth at them.
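The mechanics of prioritization are simple enough to show in a few lines. A toy sketch in Python—real QoS lives in switch hardware with per-class queues and drop policies, not in a heap:

```python
import heapq

# Toy QoS scheduler: lower number = higher priority, so tagged
# voice/video packets jump ahead of bulk transfers. The sequence
# number keeps packets of the same class in arrival order.
PRIORITY = {"voice": 0, "video": 1, "default": 2, "bulk": 3}

queue, seq = [], 0

def enqueue(packet, traffic_class="default"):
    global seq
    heapq.heappush(queue, (PRIORITY[traffic_class], seq, packet))
    seq += 1

for pkt, cls in [("iso chunk", "bulk"), ("rtp frame", "voice"),
                 ("web page", "default")]:
    enqueue(pkt, cls)

while queue:
    print(heapq.heappop(queue)[2])   # rtp frame, web page, iso chunk
```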
At least, throwing bandwidth at the problem worked for a decade or so. But bandwidth consumed per user has grown far faster than the absolute number of users has. You’re using ten times the bandwidth you were using five or six years ago. The per-kilobyte cost of getting you that bandwidth has dropped substantially, but your ISP is still making less off of you in 2014 than they did in 2007. This isn’t a problem for the ISPs as long as new customers are being added like mad. Through the 2000s they were, but not anymore—yet demand per user is still climbing steadily. Where does the new revenue to offset the new infrastructure costs come from? Content providers have historically never been charged a premium based on the amount and kind of content they’re providing—but we may be at the point where we can’t keep kicking the can down the road by making the pipes fatter.
The worry about creating a “class divide” between content providers who can pay for better service and those who can’t is valid, but the real battle being fought isn’t about free speech and open access. It’s about how to spread the cost of building the infrastructure we need for our increasingly everything-over-IP future. And no one involved is really neutral.
While this sounds like “well, duh” news at first glance, the interesting thing about it is that nearly all of the loss came from B&N’s Nook ebook division; sales in their actual physical stores aren’t growing, but they only declined by 6.6%. I wouldn’t spin that as “essentially flat” the way B&N’s new CEO is, though. (“Officer, I can’t imagine why my car rolled down that hill when I left it in neutral—a 7% grade is essentially flat!”)
Physical books aren’t going away any time soon, if ever, but ebooks are effectively replacing mass-market paperbacks, bringing back something that B&N and their ilk effectively killed: the “mid-list title,” books that sell small but consistent amounts from which many authors, particularly of genre fiction, made the bulk of their royalties. My guess is that B&N will survive, but they’ll do so by reversing not just the last five years but the last fifty: losing their digital division entirely, closing many of their stores, and re-emphasizing hardbacks and limited editions.
It’s become a thing lately to bemoan the tech bubble centered around San Francisco by castigating its perceived drivers: young, arrogant nerds paid exorbitant salaries and paying exorbitant rents, with no interest in SF’s culture or history, swaggering into town and driving all the good people out. This has prompted the arrogant nerds to write their own responses, but it’s usually the ones who really are assholes who are motivated to do so.
The real problem, as Priceonomics explained, is our old f(r)iend supply and demand. Between the start of 2010 and the start of 2013 the unemployment rate almost halved and more than 20,000 new residents arrived—and in 2012 only 269 new housing units entered the market. They helpfully chirp that 8,000 new units will come on the market between 2013 and 2015—but that’s clearly not enough to help matters. Geography and San Francisco politics combine to make new housing difficult and expensive; with the new technonerd-driven demand being concentrated on the high end of the market, things get even worse.
Yet the vilification of the techie seems misguided. I’m not a disinterested party; I moved to the Bay Area at the tail end of 2002, after the dotcom crash, with no job waiting. I had friends here—importantly, including one who let me crash with him, which morphed into renting a room in his house for five years—and I already loved the area. Almost everyone I know in this area lives here because they want, specifically, to be in the Bay Area. The people I know who live in “The City” live there because they love San Francisco. I suspect that’s true of most of the people inside those Google buses people are throwing (organic and sustainably-grown) tomatoes at.1 Valleywag loves to trot out pissy clueless technolibertarians as the True Face of Tech Startups, but let’s not pretend that Valleywag is a shining beacon of unbiased journalism.2
Yes, techies are paid exorbitant salaries and paying exorbitant rents. Yes, this is a problem for people being squeezed out. “No, I can’t accept your offer of a dream job in one of the great cities of the world because doing so incrementally contributes to a growing class divide” is a lot easier to say when you’re not the twenty-something from Suburban Elsewhere actually being given that offer.
Of course, all respect to Marc Andreessen and his insistence that those who say this is a bubble are misguided—but then give me another word for it. Housing costs are rising faster than salaries; the median home price in San Francisco is, if you run the numbers, twice as high as what a household with a median income in San Francisco can afford. This is happening in the Valley, too. My share of the rent on the nice two-bed/two-bath apartment in Santa Clara I’m in would more than cover the entire rent of a comparable apartment—or the mortgage of a respectable house—in Sacramento. I could actually commute from that area to here two days a week and still come out ahead. Some friends and acquaintances of mine are planning moves out of the area already, and the more expensive things get, the more will follow.
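About that “run the numbers” bit—here’s the shape of the arithmetic, with deliberately round, illustrative figures rather than official 2014 medians:

```python
# Illustrative, deliberately round numbers--not official medians.
median_income = 75_000    # $/year, SF household
median_price = 900_000    # $, SF home

# Standard affordability: ~28% of gross income on housing, 30-year
# mortgage at ~4.5% (about $5.07/month per $1,000 borrowed), 20% down.
monthly_budget = median_income * 0.28 / 12      # ~$1,750
loan = monthly_budget / 5.07 * 1000             # ~$345K
affordable = loan / 0.8                         # ~$431K incl. down payment

print(f"Affordable: ~${affordable:,.0f} vs. median ${median_price:,}")
# Affordable: ~$431,460 vs. median $900,000 -- roughly half
```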
So there’s a silver lining for the “nerds go home” crowd: when even the techies decide they can’t afford to live here anymore, they’ll leave. In the long term, that might be better for San Francisco’s health as a city. But that might be a much longer term than critics think: the result of the high end of the tax base crumbling—especially in as tax-driven a city as SF is—won’t be very pretty.
While the Google Buses have fueled the perception that the techies live in SF and commute to Silicon Valley, that’s far from universal; while there are a few huge employers down here, the majority of startups are clustering around SOMA in SF. ↩
Having said that, it would sure be nice if sites like PandoDaily evolved enough self-awareness to stop letting pissy clueless technolibertarians write responses to Valleywag. ↩
While I may be the last person in the tech sphere to bother linking to Re/code, the new post-WSJ incarnation of Walt Mossberg and Kara Swisher’s AllThingsD, I’m surprised how few people have pointed out that it’s a really ugly web site. It’s like they’ve taken the worst bits from news sites of five years ago—tiny type, boxes everywhere—and merged in a couple of annoying recent trends, like presenting the articles in an unevenly staggered multi-column “river.” While I like Futura as a headline font, they’re not using it very well. And the body typeface is (yawn) Georgia.
All poking aside, the big problem with this design—which is not unique to them—is that it’s awfully hard to figure out what’s important here. If layout is supposed to lead the eye, this is a hedge maze painted bright red.
A lot of web design through the 2000s borrowed the wrong things from print design: it forced layout into fixed widths, because that makes it much easier to place page elements, and it tended toward small print that “felt” like the 10–12 point body type we already knew. On 1024×768 displays, this worked well enough. But we’ve learned since then that it’s much better for viewers if the web site adapts to the display it’s on rather than decreeing “Thy content shall be 960 pixels wide”—and that, paradoxically, the 16-pixel standard type size browsers inherited from Netscape and Mosaic is, if anything, a little too small for comfortable long-form reading, not too large.
But there’s a lot of good to be learned from print, and so far the only thing we’ve universally adopted is that typefaces are important. (Well, most of us have.) Sites like Medium not only demonstrate well-chosen type, but demonstrate something many sites still ignore: white space is also important. A good case could be made that it’s even more important in a browser window than it is on the page.
Print magazines have well over a century of experience in balancing the demands of display ads with the absolute necessity of making the magazine a pleasant reading experience. This is a lesson that for the most part we haven’t figured out online; Medium doesn’t have ads, so they don’t face this dilemma. I have faith we’ll get there, but an important first step is making the content itself pleasant to read.
Ironically, when you actually get to a Re/code article, they do a pretty good job of this. If they can only make the front page as streamlined as the rest of the reading experience, it’d be terrific—or at least a good base from which we could start thinking about the ad problem.
Seriously, though. Georgia? In 2014? Who uses that?
We are what we pretend to be, so we must be careful about what we pretend to be.
— Kurt Vonnegut, Mother Night
A podcast I used to listen to, Angry Mac Bastards, recently flamed out spectacularly. The show’s premise was simple: the three hosts found stupid things people said on the Internet—as the name implies, mostly things about Apple—and ranted about them in deliberately profane fashion. Sometimes its segments were damn funny, and sometimes they seemed to be little more than shouting “you fucking cunt” into a microphone for twenty minutes.
Comedy is, almost universally, observational. The best comedy is built on observations we nod our heads to, even if we may feel a little guilty for doing so. And it’s often mean. What separates funny mean from just mean mean is hard to define, but funny tends to be either aimed at ourselves or at those who are deserving of scorn and ridicule. And as much as the phrase speaking truth to power may be a liberal cliché, it’s a pretty good barometer for who is—and isn’t—deserving.
The other interesting thing about comedy has its roots in something it shares with all performance: the stage persona. We adopt personas in all our interactions to varying degrees, of course, but performances—even when we’re performing “as ourselves”—heighten this. We’re going to exaggerate some aspects of the way we are and downplay others.
Comedy, though, is singular in that we’re rewarded for exaggerating the aspects of our personality that, in other circumstances, kind of make us assholes. Funny mean is saying those head-nodding things in a way that only an asshole would. Mean mean is just being an asshole.
AMB found a software developer’s “hire me” page and went to town on not just the page but developer Aaron Vegh personally. A lot of it was juvenile (capping on his photograph), founded on extremely ungenerous readings (equating self-proclaimed flexibility with poor problem-solving and being a prima donna), or just plain wrong (describing his web development book as “self-published”).1 What they did, in short, looked a whole damn lot like the woefully superficial and ill-informed trollbait they’re theoretically supposed to be skewering.
They were clearly trying for funny mean. Some of it was. But a lot just came across as mean mean.
There were several reasons I’d checked out of AMB sometime last year. One of them, though, was that I just don’t like punching down. Beating up on Seth Godin for spouting out Deep Thoughts on marketing strangely reminiscent of Deep Thoughts by Jack Handey is one thing; beating up on a guy we’ve never heard of—especially when, between kicks, you’re admitting his web page isn’t really that bad—is quite another. But something unexpected happened this time: it went viral. Suddenly a whole lot of people learned of AMB not as a podcast about flinging sporadically funny crap at people who deserve to be taken down a peg or two, but one about verbally brutalizing someone whose worst crime was overly twee self-promotion.
And the Internet Outrage Machine went to work.
The fallout AMB’s hosts have suffered from this has been serious, including costing one her job. Vegh didn’t deserve to be beaten down, but neither did they—they arguably earned a rebuke, but not virtual destruction. We tend to force stories into “hero vs. villain” narratives: either everybody deserved what they got and good triumphed, or the forces of evil beat the profane but good-hearted heroes thanks to Vegh’s dastardly move of calling the public’s attention to this. But that’s not the way any of this really went down.
I’m not sure if there’s an object lesson for either side. I’m not sure there are even really sides. If there is a lesson, though, maybe it’s this: how we say what we say matters, no matter what it is we’re saying. It applies equally to pompous “hire me” web pages and ranting “fuck you all” podcasts. This is not a kind of censorship, not a form of political correctness, not an example of Orwellian thought control—it’s an axiomatic truth of any language that has more than one way to say the same thing. Loudly proclaiming “I say things that way because it’s who I am so just deal with it” does not grant you a Get Out of Shitstorm Free card; we all know—and you do, too—that your word choice is not preordained by your immutable nature.
So I’d think twice about deciding your online persona is “righteous asshole.” If it seems like a good idea, think two more times. You are not speaking truth to power. It is not a litmus test for determining your true friends. You are not guaranteed that only the “right” people will be pissed off. And you will build an audience that rewards you for being unkind—which makes it all too easy to cross lines you shouldn’t. When you get called on it, it’s too late to rip off your asshole mask and protest that’s not who you really are.
They finally noticed it wasn’t self-published, but weirdly described Wiley—the original publishers of Herman Melville and Edgar Allan Poe, founded in 1807—as “semi-legitimate.” Random House is presumably “a step up from a mimeograph.” ↩
MG Siegler at TechCrunch makes the case that “the death of Nintendo has been greatly under-exaggerated”:
Nintendo no longer is just competing with the others in their direct space—meaning gaming consoles and handhelds—meaning Sony and Microsoft. They are competing with every single smartphone and tablet currently on the market. And soon, they’ll likely be competing with a number of other set-top boxes as well entering the gaming space. Nintendo’s greatest weakness is the illusion (bolstered by years of reality) that they have years to figure out their competition and outlast them. They do not.
I’d been thinking off and on about writing something about Nintendo which would have annoyed people in much the same way that MG’s piece will, although mine would not annoy nearly as many people because (a) my audience is smaller and (b) a much smaller percentage of my readers make it their life’s work to tell me why I am wrong about everything than is true of MG’s readers. Also, I was never really a gamer, not in the way we mean it now.1
When people like John Gruber made the case that Nintendo was in trouble earlier this year, the general reaction from gamers—and very smart gamers, mind you—was, essentially, you say that because you don’t understand Nintendo. Perhaps not, but MG puts his finger on what bothered me about that response. Nintendo’s problem isn’t like Apple’s in the late ’90s—it’s like Nokia’s in the late 2000s. People were describing Apple back then as on death’s door because it was. Nokia, though, was the #1 cell phone manufacturer by a wide margin, flush with cash, stuffed with smart engineers and designers. And Nokia was supported by tens of thousands of fans ready to explain in impatient detail why the iPhone—let alone Android, pff!—only looked like a threat to people who didn’t understand Nokia.
They were right. The changes in the market did only look like a threat to people who didn’t “get” Nokia. That was the problem.
In terms of technology, Nintendo simply isn’t competitive with the PlayStation 4 and the Xbox One. Fans maintain that the brand strength is more than enough to offset that deficiency. Maybe, but that bet didn’t work out so well for Sega. Maybe Nintendo will come out with a Wii U successor in a few years that’s at least on a par with Microsoft and Sony’s offerings—but that just sticks them on the horns of a different dilemma: either they’ll end up in exactly the same position when the PlayStation 5 and Xbox π come out, or they’ll have zigged when the market has zagged. There’s a non-zero chance that the current generation of consoles is the last generation of Consoles As We Know Them.
Like everyone else bloviating about this I can’t come up with guaranteed good advice for Mario and friends. I don’t think the advice to drop all the hardware and port their games to everything else as fast as possible is particularly sound. But if I ran Nintendo, I’d probably quietly exit the console business and partner with Microsoft or Sony to make them the exclusive platform for our next-generation “living room” games. I’d stay in the handheld hardware business for the time being, for much the same reason it makes sense for Amazon to keep producing e-ink Kindles: smartphones and tablets have become fine gaming devices, but a handheld game console is still a better experience at a lower price point. And I wouldn’t definitively rule out original games—not ports—for iOS and Android.
Do I really think Nintendo is doomed? No. But they—and their fans—need to come to grips with the truth that what they’re doing right now is not working. It’s not going to suddenly start working when a new Zelda game ships for the Wii U. They need to change, and they need to figure out those changes while they still have the resources to make bold moves rather than desperate ones.
I grew up with pen-and-paper role playing games, but the aspects of RPGs easiest for computers to model—dice, numbers and dungeon crawling—were the aspects I found the least interesting. My favorite games have always been adventure games, and to me the “storytelling” in modern big-budget productions, even critically acclaimed ones like BioShock and The Last of Us, tends to have a lot less to it than meets the eye. ↩
As you might have seen around the Internets, Google is making a concerted effort to tie YouTube identities to G+ identities, and this has brought another round of arguments about what the relative harm of this may be—and that in turn always gets into questions of just what identity means on the Internet. Should you be allowed to be anonymous? Is anonymous bad but pseudonymous okay, or do we treat them as essentially the same thing?
Usually when this comes up, the arguments on the “pro-pseudonym” side circle around the notion that people sometimes have very good reasons to hide their identity. They might be under an oppressive authoritarian regime. They might be hiding from an ex-lover. They might be gay, or trans, or atheist, or Muslim. There could be any number of reasons to keep their identity secret. “If you don’t have anything to hide you shouldn’t be worried” is, after more than a moment’s thought, rather glib.
This seems to me, though, to be setting an unnecessarily high bar for when we deem it “permissible” for someone to participate pseudonymously in an online community. Historically, people who are not public figures have legally and socially been held to have a fairly broad right to privacy. You don’t get to go into your neighbor’s house at your leisure to see what magazines they subscribe to, what kind of food they keep in their refrigerator, and what kind of porn they’ve saved on their TiVo. As long as there’s no reasonable suspicion of illegal shenanigans going on, what they do isn’t any of your damn business. By extension, as long as there’s no reasonable suspicion of illegal online shenanigans, what forums your neighbors hang out on and what kind of porn they’re saving to their hard drive isn’t any of your damn business, either.
There are many, many people you interact with on a professional basis every day whose personal lives you don’t need to know a thing about. This not only includes sales clerks and bartenders and waiters and bank tellers, it by and large includes your boss and your coworkers and the guy at the Starbucks who recognizes you when you come in every morning and has your three shot skinny grande vanilla latte waiting by the time you get to the register. It includes the people who hire you, who approve your rent or mortgage application, who fill your prescriptions, and who file your tax returns. To a large degree it even includes your family. (Don’t pretend you share everything about your life with your parents.)
The deep underlying problem with the relentless drive to give us One Identity Everywhere that both Google and Facebook are so enamored with is that—whether or not it’s the intent—it’s a drive to largely erase the distinction between the professional and the personal in the online world.
I don’t think this is the intent; Google almost certainly believes that this is a problem—like every single problem, from managing your calendars to the Israel-Palestine conflict—which can be solved by code running in their data centers. It’s just a question of getting the code right. But the belief that the separation between personal and private identities—between our various “circles”—can be adequately managed by twiddling settings in your G+ account betrays a lack of understanding not of what privacy is, but of where privacy is.
Privacy—both in the legal sense and in the mores and folkways sense—is something that by definition can’t be outsourced. You can’t put your privacy in the cloud; it has to live at an endpoint, in personal devices we have physical control over. In some future iteration of the Internet, it could be possible to bake privacy right into the infrastructure—but even then, that can only reliably happen at the endpoints.
In practice a lot of privacy standards are convention rather than law; they work in large part because someone trying to find out what you do outside the public eye has to put some effort into doing so. In the first decade and a half or so of the Internet age, this was true online as well. Despite the oft-made argument that anything you do on the Internet is de facto being performed in public, it really wasn’t. You had to have some idea what you were looking for to find someone. This is becoming progressively less true, and we need to start asking ourselves whether enshrining the principle of “if you don’t want anyone to know about it, it shouldn’t be online” truly leads somewhere we want to go. I don’t think it does.
For the immediate future, though, we already have a straightforward way to manage our privacy on our endpoints, where that management needs to happen: separate identities that aren’t tied together anywhere off our personal devices. The problem here isn’t how Google (or Facebook or anyone else) handles our privacy; the problem is that Google shouldn’t be managing our privacy. And Google (and others) need to stop demanding otherwise.
Microsoft Corp has narrowed its list of external candidates to replace Chief Executive Steve Ballmer to about five people, including Ford Motor Co chief Alan Mulally and former Nokia CEO Stephen Elop, according to sources familiar with the matter.
Assuming the sources are correct, this just formalizes what we’ve already heard. According to Damouni, investors want a “turnaround expert,” which is usually taken to mean someone who can bring an ailing company back to profitability. That’s what Mulally did at Ford: cutting tens of thousands of jobs, closing plants, ending the Mercury brand, and selling off recently acquired “premier” brands like Jaguar, Land Rover, and Volvo. It’s been brutal by some measures, but Ford is—by American car company standards, especially—doing well.
But Microsoft is already profitable. What is it they actually need from a leader?
A friend who works at Microsoft thinks they need a guy who understands tech—they don’t need Bill Gates back, they need a new Bill Gates. I can certainly see the argument, but I’m not entirely sure I agree. They need someone who understands technology deeply—but that isn’t the same thing as needing an engineer.
Elop seems to be a more popular choice with Microsoft’s board than he is with pundits, of both the professional and armchair variety. Most of the criticism of him is a variant of c’mon, look what he did to Nokia. Well, okay, let’s: what he did to Nokia was to get them to start producing high-end phones that people paid attention to. They get good reviews. They’re in stores near you. (In 2010, finding a Nokia touchscreen phone in the United States was like finding a unicorn.) And while it’s taken until 2013, I’m seeing more Nokia phones in the wild again—five years ago their smartphones were essentially invisible.1 Yes, Windows Phone still lags far behind Android and iOS, but for all practical purposes, Nokia didn’t get its shit together until 2011—the other guys had a three-year head start. I’m not convinced that Nokia could realistically have done any better.
So do I think he’d be a good choice for Microsoft? Maybe. Elop would be different from their usual executives—he’s charismatic in interviews and is well-spoken, and you’re not going to get crazy Ballmer chanting from him; on the downside, he sometimes hasn’t known when not to talk. Telling the world that they were switching to Windows Phone over half a year before they could ship anything on it was a terrible idea.
But the big difference Elop might bring to Microsoft is—I’d like to find a word that’s both less polarizing and less squishy than this one, but I can’t—a sense of taste. While I’d never compare Elop to Steve Jobs—their styles are so different it would only cause giggling—my impression is that, like Jobs (and unlike many Microsoft executives, past and present), Elop understands the importance of user experience. Bill Gates is a brilliant guy but he never had any interest in being a tastemaker, and on a good day Steve Ballmer has the taste of a Nacho Cheese Doritos Locos Taco. I’m not sure there’s anyone at Microsoft who would have come up with devices as, well, cool as the Nokia Lumia line. Not even the Surface tablets have the same quality industrial design.
Microsoft needs someone who can come in and get rid of things that aren’t working, which appears to be the main appeal of Ford’s Mulally. But Elop has certainly demonstrated a willingness—some would say an unseemly eagerness—to shitcan things that don’t align with his chosen direction.
Of the two internal candidates we know about, there’s former Skype CEO Tony Bates, and “cloud and enterprise chief” Satya Nadella. I don’t know a lot about either of them; Bates joined Skype in late 2010, which suggests his chief accomplishment as CEO was selling them to Microsoft seven months later. Nadella may be the closest to the kind of candidate my Microsoft friend wants: he’s an EVP who started as an engineer at Sun, and appears to have been involved deeply in moving Microsoft online. While picking him might be a statement about re-invigorating Microsoft’s engineering culture, it might also be a statement about re-invigorating their focus on big business: all of Nadella’s background seems to be in enterprise computing.
Whenever an American writes this people inevitably bring up India, where Nokia’s S40 phones are still #1 with a bullet. Yes. We know. They’re not smartphones. No, not even the Asha line. ↩
BlackBerry, it’s noted, has appointed John Chen to be their “interim CEO.” Chen is the former CEO of database company Sybase, and was presumably chosen to lead BlackBerry back to the same kind of high brand recognition and market leadership that Sybase enjoys today.
The Cinderella story has two morals. The light moral is that when you have grace and beauty, it will shine through and elevate you to your proper station. The dark moral is that this only happens if you are lucky enough to have a Fairy Godparent.
— Tim Maly
While I’m not a podcaster (yet?), I follow several people who are, and lately I’ve seen a few talk about how there isn’t an audio editing app specifically dedicated to the spoken word. You can use Garageband or Logic Pro or the like, but those are designed for music production, and there are things they could be doing that might make a podcaster’s life easier.
But there’s a podcast I listen to, Andrea Seabrook’s “DecodeDC,” that has two interesting features. One, the production quality is at the level of “This American Life”: we’re talking very high. Two, it tells you what software it’s being produced on at the end of each episode: Hindenburg Journalist Pro.
After hearing this for a dozen episodes or so I finally went to the web site for the software. It’s aimed at radio journalists, and is, in fact, an audio editing app specifically dedicated to the spoken word. And it has podcast-specific publishing features. It lacks many features that are in the bigger-name “pro” audio apps—but most of what’s been left out has, from what I can tell, been left out by design, because it doesn’t fit their niche.
Again, I’m not a podcaster—but if I were, I’d at least check this thing out.
Thanks, Apple! All our fingerprints are going to be available to the NSA now!
What? That’s wrong on so many levels.
I know, right? Can you believe it?
No. I mean it’s just not correct. For a start, the iPhone 5S doesn’t even store your fingerprints.
C’mon, of course it does. How else could it work?
A web site doesn’t actually store your password, it stores a one-way hash of your password. Likewise, the iPhone 5S doesn’t store an image of your fingerprint, it creates data about your fingerprint and then stores a hash of that.
Oh, sure, that’s what they tell you.
That’s what anyone who understands programming would tell you. Abstracting your fingerprint to data rather than storing an image is just as secure, much less computationally expensive, and probably more accurate.
Even if that’s true, we know passwords can be broken, so we can’t know for sure that someone couldn’t reconstruct your real fingerprint from the data.
You can’t reconstruct a password from its hash. Password cracking tools don’t reverse the hash; they guess, running lists of common words and permutations through the same hash and comparing the results. There’s no equivalent dictionary of likely fingerprints to guess from. So, yes, in fact, we do know that someone can’t reconstruct your real fingerprint from the data. (There’s a sketch of the general idea at the end of this exchange.)
So why would Apple do this if it’s not to collect your fingerprints?
How about the rationale they’ve already given us? What’s important on your phone isn’t your passcode, it’s everything the passcode is protecting. But for a lot of users—even technically savvy ones—passcodes are sufficiently irritating that they don’t use them. With a fingerprint as a passcode, this barrier is removed, making it much more likely people will start protecting their phones.
There are really only two ways for your phone to be compromised: someone getting physical access to it, and someone siphoning data out of it remotely. Touch ID greatly reduces risks from the former, and doesn’t affect the latter at all.
But the fingerprints are still being read and that means they’re being turned into data and that data must be being sent to the NSA in some way you just haven’t thought of yet. Look at everything the NSA is collecting already!
Yes, let’s. Everything the NSA collects goes over networks. They’re a signals intelligence organization. They analyze signals—which is to say, communications. Phone calls. Email messages. IM messages. Bank transactions. There are a lot of things that constitute signals. But you know one thing that doesn’t?
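For the curious, here’s a minimal sketch of the store-a-hash, never-the-secret idea the exchange above leans on, in Python. It illustrates the general technique only: the stand-in bytes, salt size, and iteration count are my inventions, not a description of how Touch ID actually works.

```python
# Toy illustration: keep a salted, one-way hash and throw the secret away.
import hashlib
import hmac
import os

def enroll(secret: bytes) -> tuple[bytes, bytes]:
    """Store only (salt, digest); the secret itself is never persisted."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", secret, salt, 100_000)
    return salt, digest

def verify(candidate: bytes, salt: bytes, digest: bytes) -> bool:
    """Hash a fresh reading the same way and compare the digests."""
    attempt = hashlib.pbkdf2_hmac("sha256", candidate, salt, 100_000)
    return hmac.compare_digest(attempt, digest)

salt, stored = enroll(b"feature data derived from a fingerprint")
print(verify(b"feature data derived from a fingerprint", salt, stored))  # True
print(verify(b"someone else's finger", salt, stored))                    # False
```

The asymmetry is the whole point: going from secret to digest is cheap, but going from digest back to secret isn’t a computation anyone knows how to do. The best an attacker can manage is guessing inputs, and fingerprints aren’t in anyone’s wordlist.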
Could Nokia nerds knock it off with the “Trojan Horse” thing?
Yes, I know, Betteridge’s Law strikes again. Isn’t it possible that Stephen Elop was secretly dispatched from Microsoft to join Nokia as part of a three-year plan to make them Windows Phone’s biggest hardware partner and then return to the fold with Nokia’s smartphone division after Steve Ballmer was unexpectedly forced out?
Actually, when you put it that way, no. Jesus Christ, people, take off the tinfoil hats. Or put them on. Or something.
As I’ve pointed out before, Nokia went with Windows Phone because Google wouldn’t let them both use Android branding and Nokia service branding, and they considered that a deal-breaker. Elop also gave another reason to The Guardian back in July which I’d missed:
What we were worried about a couple years ago was the very high risk that one hardware manufacturer could come to dominate Android. We had a suspicion of who it might be, because of the resources available, the vertical integration, and we were respectful of the fact that we were quite late in making that decision. […] Now fast forward to today and examine the Android ecosystem. There’s a lot of good devices from many different companies, but one company has essentially now become the dominant player.
Call me crazy, but it seems the man was right. What he doesn’t say—but you can clearly take away from this—is that he foresaw there would only be room for three major smartphone operating systems, and that Nokia’s own MeeGo was not going to be the third one. That meant Nokia needed to be for Windows Phone what Samsung was for Android. They may be a distant third now, but if they’d stuck to their original course they’d be a distant fourth, with HTC having taken the dominant role with Windows Phone—an outcome that probably wouldn’t have been much better for Microsoft than it would have been for Nokia.
It’s true that Nokia hasn’t had any out-of-the-ballpark hits under Elop, but it’s ridiculous to think that he didn’t want one. It’s just not realistic to have expected one. Nokia’s substantial fanbase didn’t want to believe—and still doesn’t, apparently—that he was handed the reins of a company that, like RIM and Palm, was already in the process of collapsing due to entirely misreading where the market was going. It’s easy to accuse Elop of being Nokia’s Adam Osborne rather than their Steve Jobs. But keeping his predecessor, Olli-Pekka Kallasvuo, would have left them with their Mike Lazaridis. How’d that work out for BlackBerry?
Years ago I was trying to formulate a maxim that would have gone something like this:
Any given technology will be put to all possible uses that have a greater predicted value than cost.
There’s a science fiction novel I’ve been working on fitfully for about two years in which this concept drives the background; closer to the foreground is the concept of ubiquitous presence, a consequence of being always online and, whether by choice or just passively, leaving digital footprints everywhere. The characters have far less privacy than we’re used to, but not due to intrusive government surveillance. Societal expectations have shifted from “private unless explicitly shared” to “public unless explicitly hidden.” Like all societal shifts, that wasn’t a collective, conscious decision, but more of a response to pressures over time—specifically, the pressure of applying that maxim.
There are a lot of things about the novel I expect to remain science fiction—it’s set around the asteroid belt, for a start—but ubiquitous presence? You’re soaking in it. It’s not as pervasive now as it is in the novel, but it will be (and long before we’re living among the space rocks). This is the dawn of the age of Big Data. If you’re in the first world, there’s likely more data generated and stored about you annually now than about anyone who died before the year 2000 over their entire lives. Not all of that data is being actively generated by you, very little of it is being actively stored by you, and the uses that it’s being put to are largely not under your control even when they benefit you.
The current NSA scandals have been the catalyst for a lot of people to finally start thinking about this, but most of us aren’t thinking about it very deeply. We know we’re outraged by the government collecting all this data on us without our consent; most of us feel “this isn’t a bad thing because our intentions are good” is not reassuring. History is, by and large, not at all supportive of the “just trust us” defense.
Even so, there are better and worse reasons for not trusting an organization, and “because government,” in and of itself, is a poor one. The state—like a hammer, gun or Tim Tebow—is a tool. When a tool isn’t producing a good outcome, there are many possible reasons why, but “this tool is intrinsically evil and nothing good will ever come from it” isn’t one of them.
The signal that’s somewhat lost in the noise is this: the government isn’t collecting the data. Corporations are. What the NSA is doing is correlating it with metadata to form complex diagrams of our personal relationships, performing deep data mining to create potentially very invasive profiles, and taking an extremely expansive view of what they’re allowed to do under the law.
That’s just as much of a problem! Yes, it is, but why was the data being collected in the first place? To allow private entities, taking an extremely expansive view of what they’re allowed to do under the law, to correlate it with metadata to form complex diagrams of our personal relationships and perform deep data mining to create potentially very invasive profiles. And it’s important to understand that they’re not doing it because they’re evil, or stupid, or in league with the Rothschilds to bring about the New World Order.1
They’re doing it because it helps them do the jobs we want them to do.
If you’re going to show me ads at all, I’d much rather those ads be relevant. I want Netflix to recommend cool stuff to me. Don’t you? That requires it to have some idea of what you and I consider cool. Your phone may already have the ability to know an email is a meeting request, offer to make the calendar appointment for you, and remind you when you need to get in the car and head out based on your time, location and the traffic conditions along the way. Jeeves was an excellent valet because he knew Bertie Wooster better than Bertie knew himself in some respects. So it will be with our electronic valets—and a lot of what they know is going to be accessible to others.
That tends to lead to thoughts of Orwellian dystopia, but even our optimistic science fiction implies this kind of data collection. Analysts back at Starfleet HQ can easily find out how much Earl Grey Captain Picard drinks and at what temperature, can get to everyone’s medical records through the transporter logs, and can look up just what you’ve been doing on the Holodeck in your spare time. (Oh, myyyyy.) “Star Trek” depicts, intentionally or not, a society that long ago set a different balance between privacy and presence than we did.
We’re at the point where we need to pay attention to how we’re setting that balance ourselves, though. The problem isn’t that we’re laying the foundations for a “public unless explicitly hidden” society; the problem is that we don’t seem to be fully aware we’re doing it. We’re ecstatic about finding ever more value in ubiquitous presence, but we’ve put very little effort into understanding the costs to our privacy and how to address them. We need to build accountability into the infrastructure. In the long term we may need to become, as science fiction author and Guy Who Thinks About This Stuff David Brin puts it, a transparent society.
As civil libertarian Jennifer Granick wrote, “good people will do bad things when a system is poorly designed, no matter how well-intentioned they may be.” Right now, the systems are being designed by people for whom all the value lies in presence rather than privacy. The state isn’t the architect of many of these systems, and Orwell’s future isn’t the only dystopian outcome possible. As vital as the debate over how much data-driven surveillance the government should be permitted to perform may be, it’s part of a much larger set of issues.
"Rothschild and the New World Order" is my Information Society cover band. ↩
Touch’s premium continues to scare off buyers who have been trained by years of cut-rate PC deals, but the prices themselves are not entirely to blame. Even if the gap between touch and non-touch PCs was significantly smaller, customers would still pass because they don’t see much value in having touch on a PC.
I remember Steve Jobs being dismissive of the notion of putting touch screens on laptops and desktops—especially desktops—and have gotten the impression that it was kind of a controversial position, not just among Apple skeptics but even among a few Apple watchers who were expecting that to be one of Jobs’ infamous “that’s a stupid idea until we do it” moves. But I always thought that he actually meant that one. He specifically called out the ergonomic problems in having to lean forward and reach across your keyboard to do things. He was right: it’s awkward even when you’re just talking about an iPad and an external keyboard. And even if you believe that a pointing device is often actually faster than a keyboard if the UI’s sufficiently well-designed—which I do—the added time in making a much bigger motion between pointing device (screen) and keyboard negates that advantage.
Touch works fantastically well as an interaction mechanism with handheld screens, but it’s not being used as a replacement for mice in those contexts—it’s being used as a replacement for styluses and the handful of crappy ways we previously had to interact with our phones (buttons, control wheels and thumb trackballs). Something better will probably come along to replace mice, but touch may not be it.
Windows 8 is the 3-D TV of operating systems: great in demos, fantastic at dazzling pundits, and beloved of accountants, but just not something buyers actually wanted.
The Edge, a smartphone that runs a mobile edition of the popular desktop OS Ubuntu, will only get made if would-be users pledge $32 million via the crowdfunding site Indiegogo [within 30 days]. In its 15 days on Indiegogo, the Edge project has attracted $8.3 million in pledges, leaving it nearly $24 million short.
The editor of the annoyingly named “OMG! Ubuntu!” site still holds out hope, but “does note that the low introductory tiers of the edge—the handset was priced at $600 for the first 5,000 backers—may have ‘possibly set unrealistic expectations about the cost of the device.’” See, the longer you waited to back it, the more expensive it got. Two weeks ago, it was that apparently unrealistically low $600; now it’s $730. (Note: it’s apparently been dropped to “only” $695 for everyone now, but my thoughts here still apply.)
While one can try to read this as a slam against the viability of open source hardware or of crowdfunding massively expensive projects, I suspect the real story is simply this: $700 out-of-pocket for a phone is too much. You often hear arguments—usually aimed at Apple—that the off-contract price is the true price of the phone, and that therefore whatever phone they want to argue is overpriced is way overpriced. How out of bounds Apple is depends on who you compare it to: an HTC One with 32GB is $600 off-contract compared to the 32GB iPhone 5’s $750, but a Galaxy S 4 with the same memory is $695.1
But these aren’t the prices most people pay. I don’t mean “most Americans” due to our carrier subsidies, I mean “most people.” European operators offer “handset financing” over 12- or 24-month periods, which has very much the same effect. Orange UK offers that 32GB iPhone 5 for £100 up front plus £42/month for two years, on the 5GB/mo data plan—as of this writing, roughly $155 and $73. (While I haven’t done the currency conversions for Deutsche Telekom’s offer, it looks comparable—and again a lower overall cost than I’d get here!)
Back here in the States, if I pay $300 for the iPhone and $85 a month for service the next two years, I’ve paid $450 less than someone who’s paid $750 for the same phone and $85 a month for service for the next two years. While I’ve paid $100 more than someone who bought an HTC One on contract, I’ve paid $300 less than someone who bought the HTC One off contract.
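To make that arithmetic explicit, here’s the comparison as a quick back-of-the-envelope calculation. Note that the $200 on-contract price for the HTC One is my assumption; it’s what the $100 difference above implies.

```python
# Two-year cost of ownership under the US pricing described above.
# The service plan is the same $85/month in every case; only the
# upfront phone price changes. The $200 on-contract HTC One is assumed.
service = 85 * 24  # $2,040 over two years

iphone_on_contract   = 300 + service  # $2,340
iphone_off_contract  = 750 + service  # $2,790
htc_one_on_contract  = 200 + service  # $2,240
htc_one_off_contract = 600 + service  # $2,640

print(iphone_off_contract - iphone_on_contract)   # 450: saved vs. off-contract iPhone
print(iphone_on_contract - htc_one_on_contract)   # 100: more than on-contract HTC One
print(htc_one_off_contract - iphone_on_contract)  # 300: less than off-contract HTC One
```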
This is real, non-imaginary money being saved. To the buyer, the true price of a phone is the price actually paid. And this means that the true difference in cost between that HTC One and the Ubuntu Edge is $530. The Edge is an objectively better phone in some ways.2 But ultimately a lot of what you’re being asked to pay for is a warm fuzzy feeling—and a warm fuzzy feeling that costs you a lot more in real, actual money than that Apple (or Samsung or HTC or…) logo does.
The Edge’s biggest claim to fame would have been the 128G storage capacity and the ability to dual boot between Ubuntu and Android. As we move more and more data into the cloud the first seems less interesting, though, and anyone who wants to dual boot their phone—including the mid-90s version of me who’d probably have thought that was awesome—seriously needs to get out more. ↩
What’s that—you require a different form of payment? Well, in that case, how about my high-level exposure to a wide online audience? Can I pay with that? Listen, maybe you don’t understand how “exposure” works: It means my writing is now reaching more readers than ever before, many of whom are currently enjoying and sharing my work, for free. This in turn brings increased web traffic and attention to the site that has published it, which compensates me for my labor and time with the feeling of deep fulfillment and validation I derive from being a part of said site. And you’re saying that when it comes to paying for some of my most basic living expenses, that counts for nothing?
The Verge’s Jeff Blagdon reports—or more accurately is one of the first to link to the company’s press release—that FileMaker is dropping Bento, their interesting but perhaps too odd-duck “consumer database” program. Everyone who played with it seemed to love it in theory, but I know I could never quite figure out what I’d do with it in practice. Apparently I wasn’t alone.
I confess the main thing that surprised me about this announcement wasn’t the announcement, it was going to FileMaker’s web site and discovering the cheapest product that will be left—FileMaker Pro—is $299. Access 2013’s street price is around $99, and the whole of Microsoft Office Professional (the version that comes with Access) is around $275. Yes, Access is hateful and FileMaker is a pretty pretty flower, but wow.
Hello, and thank you for sending me your job posting for a JAVA DEVELOPER in GODFORSAKEN GULCH, IA. I regret to inform you that this does not meet my needs for the following reasons:
☒ This position is not in my area, and does not mention relocation
☒ This is a contract position for under a year and makes no mention of contract-to-hire
☒ The position’s main technology requirement isn’t even on my résumé
☐ The position’s main technology requirement is for “5–7 years” of a technology that burst onto the scene two years ago
☐ Your client’s business model is a transparent attempt to clone someone else’s success
☐ Your client’s business model is a transparent attempt to clone someone else’s failure (N.B.: ask for your full fee up front)
☐ I don’t expect you to be a programmer but you apparently can’t even use your mail client correctly and for God’s sake turn on spell-check
I understand that it saves you time to do a keyword search on a résumé database followed by blasting out form letters that specifically target those who are:
Just as willing to do a half-assed version of their job as you are to do a half-assed version of yours
I understand this works for many clients: there are enough underqualified, desperate people with half an ass to go around, and recruiters who care about their clients’ needs frankly charge more than you do. However, any company that would use someone like you for recruiting is a company I wouldn’t want to work for (and would be hesitant to use products/services from, since I know the quality of their employees), and so I respectfully ask you to remove me from your contact database.
So this letter isn’t as much a waste of your time as yours was of mine, I’d like to offer a new slogan for your firm, free of charge: “Matching miserable people with miserable jobs at miserable companies since YEAR.” It has a nice ring to it, and it’s exclusively yours, RECRUITER NAME of BUZZWORDS STRUNG TOGETHER INC!
Netflix has had HTML5-based streaming for a while now—that’s how they get to iOS-based devices and anything else that can’t run Silverlight, the Flash competitor from Microsoft that never really took off. (It joins Microsoft’s long parade of chart failures in their attempt to take over Adobe’s markets.) Now, Netflix is ready to move forward with plans to make everything HTML5-based. They’re abandoning proprietary protocols to go with standards! Hooray!
So naturally, GigaOM reports, “free software” activists are greeting this with… a call to boycott.
Why? Netflix is relying on HTML5’s upcoming video standards to allow content to be saddled with DRM. The free software crowd isn’t upset that Netflix is adopting HTML5 video, it’s upset that Netflix is helping to drive “non-free” web video.
Except, of course, that it isn’t. Netflix is helping to drive a “non-free” wrapper around an actual open standard, the way MP3, AAC and MP4 already work.1 They’re not doing this because they hate freedom, they’re doing it because it’s required by their business partners.
Furthermore, for most users this is simply a non-issue. I’d vastly prefer to own media and software without DRM (and I have no compunction about removing DRM for my own purposes), but if I’m watching streaming video I don’t care whether the stream is encrypted. The people who care are people who either don’t have the decryption plugin available or who refuse on general principle to run any closed source software. (If you said “and pirates,” bad answer. Pirates don’t care whether your shit is encrypted to begin with.)
In other words, Linux users. Not all Linux users, either, but the ones who correct you with, “Ah, that’s GNU/Linux.”
So: sorry, guys, but your reality check bounced.2 You know full well that the choice here is between Netflix using HTML5 video with encryption or using something completely proprietary with encryption—there is no option that lets you get your “House of Cards” fix without using closed source. (Or pirating, which of course none of you would ever do, right?) You can have Kevin Spacey doing a Southern drawl or your GNU/Principles, but not both.
I’m sure someone wants to bring up H.264 versus WebM here, but when we talk about standards, “open” doesn’t require free in any sense of free, it just means that it’s not exclusive. The Mini-USB connector is an open standard and Apple’s Lightning connector is not, but both are patented and require royalties. ↩
While using “guys” as a collective is sexist, percentage-wise it’s pretty accurate; Linux kernel developer Sarah Sharp estimates that while the proprietary software world has about 20–30% women contributing, the open source software world is at a dismal 1–3%. I presume they’re waiting for a GPL-licensed implementation of equality. ↩
According to sources cited by The Verge, the Finnish phone maker is readying a software update that will sweep many of its higher-end Lumia devices, turning on Bluetooth 4.0 capability that resides within the chipset.
Okay, so: Nokia is late to the party, since everyone else has this. Bluetooth 4.0 is two years old. (The first device on the market supporting it was the iPhone 4S.)
What makes this interesting to me is that Bluetooth 4.0 is actually a rebranding of a technology called “Wibree,” released in 2006 and based on Bluetooth. While I’m sure there were a few differences between Wibree and the final standard, you’d think the company that invented Wibree would have a bit of a head start on Bluetooth 4.0 devices. And who invented Wibree?
While “Nokia phones finally get Bluetooth 4.0” isn’t really a big story even to most Nokia users, it’s a made-to-order illustration of how Nokia reached the market position they’re in today.
[Dropbox CEO Drew] Houston wants Dropbox to become “the spiritual successor to the hard drive.” Users don’t interact with files on iOS, Android, or the web the way they do on PCs. Developers can now make Dropbox the “open” and “save” windows in their apps. But Houston and company don’t want to stop at files. In large part because mobile devices have spawned a less file-centric user experience, much of the data we generate and access doesn’t have a separate life as a document. You don’t play Angry Birds and then save your game as a “.ang” file. If Dropbox’s gambit works, however, you’ll be able to pick up the game on your iPad where you left off playing on your Galaxy S4.
This is a really smart move—ambitious, but over the long run necessary. Right now we’re still talking about how much trust to put in the cloud, but sooner than we think the notion that any data has a “primary” physical location is going to seem as quaint as mechanical hard drives.
The bigger problem is that [big web players have] abandoned interoperability. RSS, semantic markup, microformats, and open APIs all enable interoperability, but the big players don’t want that — they want to lock you in, shut out competitors, and make a service so proprietary that even if you could get your data out, it would be either useless (no alternatives to import into) or cripplingly lonely (empty social networks).
Inevitably when this is brought up, people conflate “interoperability” and “open data” with “open source.” (Then they rant about how Marco is a hypocrite for making observations like this while using Apple products.) The free software community seems to have had, for a long time, an assumption that open data is a natural consequence of open source and that those of us who want to see more of the former should be pushing the latter. This isn’t unreasonable at first glance—an open source word processor can’t be using a proprietary data format.1 The problem is that open data and open source are increasingly orthogonal issues.
So let’s state this very baldly. Suppose I come up with what turns out to be a popular online social network, a successful “walled garden” like Facebook. It’s a walled garden because I don’t give you any APIs to get your data back out once you’ve put it in. That’s all that matters.
What doesn’t matter is the license of the software it runs on. I could be running WalledGardenBook on 100% free software, put its source code on GitHub, do all of the coding in Emacs and genuflect to a photo of Richard Stallman daily, and it would still be a walled garden.
Your choice of software ultimately has very little bearing on how easy it is to access your data in the cloud. What matters is access to that data—certainly import/export functions (and you want both), but also open, well-documented and, when possible, standardized APIs—and, just as importantly, terms of service that let you treat your data as your data. Twitter, by making an active choice to be evil, has done us all a favor in a strange way: we know what to watch out for next time.2
Within the next decade the computing model will flip to cloud-first—the choices of who we trust with our data will become all-important. Making that choice based on who makes our favorite mobile operating system—or on who’s screaming “open!” the loudest—is probably unwise.
This doesn’t mean that an open source data format can’t be so convoluted it’s extremely difficult to use in anything else. ↩
As much as I rail against Facebook’s walled garden, I don’t think of them as evil the way I think of Twitter as evil—they’re actually pretty good about giving you access to your data. Their primary fault is assuming that society will eventually just redefine the concept of “privacy.” Of course, there’s a non-zero chance they’re right. ↩
There’s a lot that’s been written over the past couple of years about battles over online music royalties, most recently in the form of David Lowery’s article, “My Song Got Played On Pandora 1 Million Times and All I Got Was $16.89.” Michael DeGusta produced a well-researched rebuttal estimating that Pandora actually paid over $1,300 for that, and points out that when you consider this on a basis of listeners per play Pandora (and by extension other similar services like Spotify) are paying far more than terrestrial radio: generally, only one person was listening to each of those million plays on Pandora, while tens of thousands of people are hearing each song play on commercial FM.
Of course, if you’re a performer or songwriter, what matters to you is how much money you’re actually getting in total, not per play, and a future in which you’re making a fraction of the revenue you are now is not one you’re likely to be enthusiastic about. Even if DeGusta is correct and Lowery actually made $234 for those plays, that’s still less than one-fifth of what he’s making from conventional radio airplay. If you assume Lowery is making hundreds of thousands a year that doesn’t sound so bad, but the median income for a composer is about $46,000.
Pandora, for their part, has pointed out that they pay more money per song than any other radio service “broadcasting” on any media—by more than four times on average—and while they didn’t call attention to this, Pandora is, despite their revenue and subscriber growth, losing money. They’re not losing money as fast as Spotify is losing money, so, uh, that’s something. The main source of Spotify’s massive bleeding? You guessed it—royalty payments.
Of course, Pandora and Spotify don’t have the same business model, not exactly: Pandora is kind of a radio station, whereas Spotify is “subscribe to your whole music library.” Because of this, Spotify actually pays a higher rate per song play than Pandora does. But it’s still not very much, so it leaves us in the same place, with musicians and songwriters screaming at them for being cheap bastards and them screaming back that they’re losing money as it is and they can’t possibly be expected to pay more.
And while we’re all primed to look for villains in any story like this, they’re both right. It may not be possible for services like Pandora or Spotify to simultaneously pay as much money as non-Internet radio stations do and turn a profit. This suggests that this is a lousy business model. Spotify in particular reminds me of the old joke about selling goods at a per-unit loss, believing you can make it up in volume. It’s the Webvan of streaming services.
Apple’s soon-to-launch iTunes Radio is following the Pandora model—I suspect it’s “iTunes Genius Mix” with Apple’s library rather than your library. It will, according to Billboard, pay “slightly more than the pure-play rate that Pandora pays,” but slightly is slightly. Yet unlike everybody else, Apple can tie their “radio” directly to their online music store, and you can bet that when the first middle-aged rocker writes a whiny anti-Apple editorial Eddy Cue will hold up a balance sheet showing just how many songs have been sold directly from iTunes Radio.
Apple has long maintained that people would rather own their music than subscribe to it. I’ve known more than a few users who’ve disagreed—people who use Spotify for all their music listening—and, of course, Spotify and their competitors are betting Apple is wrong, too. In the final analysis, though, the question may not be about what consumers want after all: you can go broke giving people what they want.
Sad that Game of Thrones has wrapped up its third season? Looking for some drama to fill the time? We’ve got just the thing for you. One of the Internet’s longest-running and most-hated lawsuits is back: SCO v. IBM has been reopened by Utah district court judge David Nuffer.
If you’re not familiar with this case, it really is one of the most fascinatingly bizarre ones in the annals of computer history—and it’s just gotten more bizarre. You may remember “SCO Unix” from many years ago; SCO made a Unix clone and then bought the rights to the Unix name from AT&T. The thing is, the SCO in this lawsuit really isn’t the SCO that sold SCO Unix—that SCO actually sold off all their Unix rights and changed their name to Tarantella. One of the companies that got some of those rights was Caldera, a spinoff from Novell; Caldera then changed their name to SCO and started suing people.
The sad footnote in that story is that the people who started Caldera — none of whom were involved by the time the lawsuits were flying — were arguably years ahead of their time in the Linux world. If they’d actually gotten their shit together Linux might have actually taken over the desktop after all ha ha ha ha hahaha okay but they were actually pretty good.
There are over 1,000 flashlight apps in the App Store, 50 in our own CrunchBase, and even one that’s venture funded. They’re all in trouble now. Why download or waste home screen space on an app when you can use Apple’s native version with just a swipe up and a tap? Sure, these apps probably didn’t take long to develop, and maybe their makers should have predicted Apple might launch its own flashlight, but still, they deserve a little sympathy.
I love this article. See, the basic premise boils down to “it’s unfair for a major vendor to integrate functionality that is or could be served by third-party developers.” But that’s always been tendentious bullshit dressed up in revolutionary fight-for-the-underdog language. At various points in the history of computing, everything from spell checkers to TCP/IP stacks to GUIs was a third-party product.
The thing is, it’s often easy to miss how stupid the argument is, but Constine has—dare I say—shined a bright light on it. “How dare Apple bring down its iron heel on hundreds of developers,” he asks, “competing in the vibrant flashlight application space!” Yes! Yes, Josh, we all shudder to think how much innovation in the field of turning the camera flash LED on and off might be lost forever!
Like I said, I love this article. Its revolutionary clothes are so clearly a Che Guevara T-shirt it bought at Hot Topic.
On John Gruber’s “Talk Show” last week, he said that he’d heard from his sources that the new iOS 7 interface was “polarizing.” Given the reactions I’ve seen around the web in the last 24 hours, his sources hit it on the nose.
There’s more than a little grumbling from designers I’ve seen. Mike Rundle proclaims it’s “anti-Apple,” whereas Ryan Katkov asks in his subhead, “Did Apple fall victim to design by committee?”
The icons for the new iOS 7 apps feel like they were designed by individual app teams without overall editorial direction. Compare them to the Nokia N9—in some ways a closer visual kin to iOS 7 than Windows Phone or Android—and Nokia’s icons look like they’re all from the same set in a way that iOS 7’s don’t.
But iOS 7 has “harsh colors”? Its entire color palette is pastels. It’s easy—possibly unavoidable—to make comparisons between it and Windows Phone, but the thing that most immediately stands out as a difference is that iOS 7 is not full of big squares of solid primary colors. iOS 7’s aesthetic is a post-modern love child of Dieter Rams and Georgia O’Keeffe. Windows Phone’s aesthetic is that of injection-molded plastic blocks.
The design of iOS 7 is based on rules. There’s an intricate system at work, a Z-axis of layers organized in a logical way. [It] is anything but flat. It is three dimensional not just visually but logically. It uses translucency not to show off, but to provide you with a sense of place.
This new look is extremely deliberate. This doesn’t mean that a lot of changes aren’t controversial—it’s pretty much killed the button as we’ve come to know it—but they’re never being done just for the hell of it. They’re questioning assumptions we take for granted. (Do we really still need buttons? After 15–20 years of using the web, we’re accustomed to taking action by clicking/tapping frameless icons and words differentiated only by color.) It’s not entirely consistent, but they’re trying to replace three decades of design language. That’s not something you’re going to knock out of the ballpark on your first try, especially if you have less than a year to do it in.
The question critics are understandably harping on is how much of the iOS 7 interface is influenced by Windows Phone and Android. If I were running Microsoft’s PR Twitter account, I admit I’d have a lot of trouble not tweeting things like “How about those photocopiers, Cupertino” and “Don’t worry, we won’t sue over the non-rounded corners.”
Yet I think iOS 7 reflects Jony Ive’s sensibilities rather than merely keeping up with competitors. Android and iOS have always shared a “grid of icons” look; the ways in which Android varies from that—such as widgets—are ways iOS still doesn’t attempt. Windows Phone goes a markedly different way. We’ve seen things like the Control Center on Android devices first, but we’ve also seen things like it on Macs in the 1990s. The fonts do look similar, but Android moved toward iOS, not the reverse—iOS has always used Helvetica or Helvetica Neue, while Android 4.0 introduced the Helvetica-like Roboto. (Again, Microsoft stands out by using Segoe.) Ive definitely does share some sensibilities with the folks at Microsoft, but no one’s going to mistake an iOS 7 device for a WP8 device.
I wouldn’t claim that iOS 7’s design language is bolder than WP8’s, but it’s certainly a radical break from the past. Which is why Mike Rundle is completely wrong to say it’s anti-Apple. A radical break from the past is just about the most goddamn Apple thing that Apple could do.
One of the many neat little things about Scrivener is its ability to set “target” word counts. Personally I waffle about what amount to shoot for all the time—my current feeling is that 500 is a good minimum because it’s fairly easily achievable (even easily surpassable).
But, I’m frequently not writing in Scrivener. Blog posts like this one tend to be written in Byword, and lately I’ve been using Ulysses 3 for shorter fiction. (Scrivener’s ability to serve as a collection bin for all kinds of research bits still gives it an edge over U3 for, well, long stuff that needs research bits.)
I realized not too long ago that I could hack together a “daily word count” system with Keyboard Maestro. It’s not as elegant as Scrivener’s; it doesn’t reset automatically, and it doesn’t give you neat progress bars. But it gets the job done the way I need it to. I originally wrote it for Byword but it should work in any plain text editor. (I organize my KM macros into application-specific groups, and keep these in an “editor” group.)
This consists of two separate macros. Use the admittedly misnamed Set Target to set your baseline when you start writing, and Check Target to see how many words you’ve added.
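The macros themselves live inside Keyboard Maestro, so I can’t usefully paste them here, but the logic is simple enough to sketch as a stand-alone script. Consider this an illustrative equivalent under my own assumptions (the baseline file location, the plain-whitespace word-count rule, the 500-word goal), not the macros:

```python
# A rough stand-alone equivalent of the two macros described above:
# "set" records a file's current word count as the baseline, and
# "check" reports how many words have been added since.
import sys
from pathlib import Path

BASELINE = Path.home() / ".daily_wordcount"  # assumed location
GOAL = 500  # my current daily minimum

def count_words(path: str) -> int:
    # Plain whitespace split; as noted below, the choice of rule is
    # exactly where different apps' counts start to disagree.
    return len(Path(path).read_text().split())

def set_target(path: str) -> None:
    BASELINE.write_text(str(count_words(path)))

def check_target(path: str) -> None:
    added = count_words(path) - int(BASELINE.read_text())
    status = "goal met" if added >= GOAL else f"{GOAL - added} to go"
    print(f"{added} words added ({status})")

if __name__ == "__main__":
    command, filename = sys.argv[1], sys.argv[2]
    if command == "set":
        set_target(filename)
    else:
        check_target(filename)
```

Run it with `set draft.txt` when you sit down to write and `check draft.txt` whenever you want a progress report.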
Interestingly, I’ve found that Keyboard Maestro’s word counts differ from Byword’s and Ulysses’s: they’re always a little higher. In practice they’re not far enough off to be bothersome for this use case, but I’m curious what the difference in the algorithm is.
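If I had to guess, the apps simply disagree about what a “word” is: hyphens, dashes, and URLs all split differently depending on the rule. A quick illustration of how much the rule alone matters (I don’t know which rules Byword, Ulysses, or Keyboard Maestro actually use):

```python
import re

text = "A well-known problem: word counters disagree. See example.com/docs today."

print(len(text.split()))                 # 9  -- split on whitespace only
print(len(re.findall(r"[\w']+", text)))  # 12 -- count runs of word characters
```

A rule that breaks hyphenated words and URLs apart will always count higher, which would be consistent with numbers that run “a little higher.”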