Greatest Hits

Coyote Tracks

If you are drinking to forget, please pay in advance
A collection of thoughts and shiny objects, mostly (but not always) related to computers and technology. And cocktails. Brought to you by Watts Martin (@chipotlecoyote).


Coyote Prints (writing blog)

Why Coyotes Howl, a short story collection: EPUB · Kindle/Print

Indigo Rain, an anthropomorphic fantasy/suspense novella: Print

  • January 14, 2014 11:34 am

    "Net neutrality is half-dead"

    Jon Brodkin, Ars Technica:

The Federal Communications Commission’s net neutrality rules were partially struck down today by the US Court of Appeals for the District of Columbia Circuit, which said the Commission did not properly justify its anti-discrimination and anti-blocking rules.

    Those rules in the Open Internet Order, adopted in 2010, forbid ISPs from blocking services or charging content providers for access to the network. Verizon challenged the entire order and got a big victory in today’s ruling. While it could still be appealed to the Supreme Court, the order today would allow pay-for-prioritization deals that could let Verizon or other ISPs charge companies like Netflix for a faster path to consumers.

    I’m conflicted about net neutrality. Not about the notion that the Internet should remain open to everyone, even if ensuring that requires (gasp) regulatory interference. But I’m not so sure that “pay-for-prioritization deals” are intrinsically evil.

Way back in the dark ages—the late 1990s—I managed capacity on a major data network. Even back then, our switches had the ability to prioritize traffic whose packets were tagged as being certain kinds of data. (We called it “QoS,” quality of service.) The technology we were expecting to replace frame relay—asynchronous transfer mode—had this functionality built in at the protocol level. The rationale for this was that some kinds of traffic really need higher priority. You see, IP was designed with no guarantees of low latency or even of packets arriving in the right order. For a terminal connection, a web page (or a Gopher page!) or a file transfer—for any of the kinds of stuff the Internet was originally being used for in the early ’90s and before—this was okay. For anything that involves real-time streaming, though, the results can range from suboptimal to unusable. However, ATM never truly took off; during the original dotcom boom, telecom companies and the equipment manufacturers selling to them raced to build so much big-pipe infrastructure that it became easier and cheaper to solve network congestion and latency problems by just throwing more bandwidth at them.
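The core idea behind that kind of QoS can be sketched in a few lines. This is a toy strict-priority scheduler, not any real switch’s implementation, and the traffic classes are made up for illustration: tagged real-time packets jump the queue ahead of bulk traffic.

```python
import heapq

# Hypothetical traffic classes; lower number = higher priority.
PRIORITY = {"voice": 0, "video": 1, "web": 2, "bulk": 3}

class QoSQueue:
    """Toy strict-priority packet scheduler."""
    def __init__(self):
        self._heap = []
        self._seq = 0  # tie-breaker: preserves FIFO order within a class

    def enqueue(self, traffic_class, packet):
        heapq.heappush(self._heap, (PRIORITY[traffic_class], self._seq, packet))
        self._seq += 1

    def dequeue(self):
        return heapq.heappop(self._heap)[2]

q = QoSQueue()
q.enqueue("bulk", "iso-chunk-1")
q.enqueue("voice", "rtp-frame-1")
q.enqueue("web", "http-response")
q.enqueue("voice", "rtp-frame-2")

# Real-time traffic drains first, regardless of arrival order:
print([q.dequeue() for _ in range(4)])
# ['rtp-frame-1', 'rtp-frame-2', 'http-response', 'iso-chunk-1']
```

This is the same logic, in miniature, as the “pay-for-prioritization” debate: nothing about the mechanism is evil, but whoever assigns the tags decides whose packets wait.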

At least, it did for a decade or so. But bandwidth consumed per user has grown far faster than the absolute number of users has. You are using ten times the bandwidth now that you were using five or six years ago. The cost to get you that bandwidth has dropped substantially on a per-kilobyte basis, but your ISP is making less off of you in 2014 than they did in 2007. This isn’t a problem for the ISPs as long as new customers are being added like mad. Through the 2000s they were, but not anymore—yet demand per user is still climbing steadily. Where does the new revenue to offset the new infrastructure costs come from? Content providers have historically never been charged a premium based on the amount and kind of content they’re providing—but we may be at the point where we can’t keep kicking the can down the road by making the pipes fatter.
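The arithmetic here is worth making explicit. These numbers are entirely made up (the post cites no figures); the point is the shape of the curves: per-gigabyte delivery cost falls, usage grows faster, and the flat monthly fee barely moves, so margin per subscriber shrinks.

```python
# Back-of-the-envelope sketch with illustrative, invented numbers.
def margin(monthly_fee, usage_gb, cost_per_gb):
    """Profit per subscriber per month, before fixed costs."""
    return monthly_fee - usage_gb * cost_per_gb

m_2007 = margin(monthly_fee=40.0, usage_gb=2.0,  cost_per_gb=5.0)   # $30
m_2014 = margin(monthly_fee=45.0, usage_gb=20.0, cost_per_gb=1.0)   # $25

# Delivery is 5x cheaper per GB, yet the subscriber is less profitable,
# because usage grew 10x while the fee grew ~12%.
assert m_2014 < m_2007
```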

    The worry about creating a “class divide” between content providers who can pay for better service and those who can’t is valid, but the real battle being fought isn’t about free speech and open access. It’s about how to spread the cost of building the infrastructure we need for our increasingly everything-over-IP future. And no one involved is really neutral.

  • January 9, 2014 9:37 am

Barnes & Noble reports steep sales drop

    While this sounds like “well, duh” news at first glance, the interesting thing about it is that nearly all of the loss came from B&N’s Nook ebook division; sales in their actual physical stores aren’t growing, but they only declined by 6.6%. I wouldn’t spin that as “essentially flat” the way B&N’s new CEO is, though. (“Officer, I can’t imagine why my car rolled down that hill when I left it in neutral—a 7% grade is essentially flat!”)

    Physical books aren’t going away any time soon, if ever, but ebooks are effectively replacing mass-market paperbacks, bringing back something that B&N and their ilk effectively killed: the “mid-list title,” books that sell small but consistent amounts from which many authors, particularly of genre fiction, made the bulk of their royalties. My guess is that B&N will survive, but they’ll do so by reversing not just the last five years but the last fifty: losing their digital division entirely, closing many of their stores, and re-emphasizing hardbacks and limited editions.

  • January 8, 2014 9:15 am

    Nerds Go Home

It’s currently fashionable to bemoan the tech bubble centered around San Francisco by castigating its perceived drivers: young, arrogant nerds paid exorbitant salaries and paying exorbitant rents, with no interest in SF’s culture or history, swaggering into town and driving all the good people out. This has prompted the arrogant nerds to write their own responses, but it’s usually the ones who really are assholes who are motivated to do so.

    The real problem, as Priceonomics explained, is our old f(r)iend supply and demand. Between the start of 2010 and the start of 2013 the unemployment rate almost halved and more than 20,000 new residents arrived—and in 2012 only 269 new housing units entered the market. They helpfully chirp that 8,000 new units will come on the market between 2013 and 2015—but that’s clearly not enough to help matters. Geography and San Francisco politics combine to make new housing difficult and expensive; with the new technonerd-driven demand being concentrated on the high end of the market, things get even worse.

    Yet the vilification of the techie seems misguided. I’m not a disinterested party; I moved to the Bay Area at the tail end of 2002, after the dotcom crash, with no job waiting. I had friends here—importantly, including one who let me crash with him, which morphed into renting a room in his house for five years—and I already loved the area. Almost everyone I know in this area lives here because they want, specifically, to be in the Bay Area. The people I know who live in “The City” live there because they love San Francisco. I suspect that’s true of most of the people inside those Google buses people are throwing (organic and sustainably-grown) tomatoes at.1 Valleywag loves to trot out pissy clueless technolibertarians as the True Face of Tech Startups, but let’s not pretend that Valleywag is a shining beacon of unbiased journalism.2

    Yes, techies are paid exorbitant salaries and paying exorbitant rents. Yes, this is a problem for people being squeezed out. “No, I can’t accept your offer of a dream job in one of the great cities of the world because doing so incrementally contributes to a growing class divide” is a lot easier to say when you’re not the twenty-something from Suburban Elsewhere actually being given that offer.

With all respect to Marc Andreessen and his insistence that those who say this is a bubble are misguided: give me another word for it, then. Housing costs are rising faster than salaries; the median home price in San Francisco is, if you run the numbers, twice as high as what a household with a median income in San Francisco can afford. This is happening in the Valley, too. My share of the rent on the nice two-bed/two-bath apartment in Santa Clara I’m in would more than cover the entire rent of a comparable apartment—or the mortgage of a respectable house—in Sacramento. I could actually commute from that area to here two days a week and still come out ahead. Some friends and acquaintances of mine are planning moves out of the area already, and the more expensive things get, the more will follow.
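“Run the numbers” looks roughly like this. A sketch only, using the conventional rule that housing should take no more than ~30% of income, and illustrative inputs (the income, rate, and down-payment figures are placeholders, not the post’s data):

```python
def max_affordable_price(annual_income, rate=0.045, years=30, down=0.20):
    """Rough max home price under the 30%-of-income rule."""
    budget = annual_income * 0.30 / 12            # max monthly payment
    r, n = rate / 12, years * 12
    max_loan = budget * (1 - (1 + r) ** -n) / r   # present value of payments
    return max_loan / (1 - down)                  # gross up for the down payment

price = max_affordable_price(75_000)   # hypothetical median household income
print(round(price))                    # roughly $460k on these assumptions
```

Against a median sale price anywhere near twice that, you get the “twice as high as affordable” gap described above.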

    So there’s a silver lining for the “nerds go home” crowd: when even the techies decide they can’t afford to live here anymore, they’ll leave. In the long term, that might be better for San Francisco’s health as a city. But that might be a much longer term than critics think: the result of the high end of the tax base crumbling—especially in as tax-driven a city as SF is—won’t be very pretty.

    1. While the Google Buses have fueled the perception that the techies live in SF and commute to Silicon Valley, that’s far from universal; while there are a few huge employers down here, the majority of startups are clustering around SOMA in SF. 

    2. Having said that, it would sure be nice if sites like PandoDaily evolved enough self-awareness to stop letting pissy clueless technolibertarians write responses to Valleywag. 

  • January 3, 2014 9:30 am

    Re/code and web design

While I may be the last person in the tech sphere to bother linking to Re/code, the new post-WSJ incarnation of Walt Mossberg and Kara Swisher’s AllThingsD, I’m surprised how few people have pointed out that it’s a really ugly web site. It’s like they’ve taken the worst bits from news sites of five years ago—tiny type, boxes everywhere—and merged in a couple of annoying recent trends like presenting the articles in an unevenly staggered multi-column “river.” While I like Futura as a headline font, they’re not using it very well. And the body typeface is (yawn) Georgia.

    All poking aside, the big problem with this design—which is not unique to them—is that it’s awfully hard to figure out what’s important here. If layout is supposed to lead the eye, this is a hedge maze painted bright red.

    A lot of web design through the 2000s borrowed the wrong things from print design: it forced layout into fixed widths, because that makes it much easier to place page elements, and it tended toward small print that “felt” like the 10–12 point body type that we already knew. On 1024×768 displays, this worked well enough, but we’ve learned since then that it’s much better for viewers if the web site adapts to the display that it’s on rather than decreeing “Thy content shall be 960 pixels wide” and that, paradoxically, the 16-pixel standard type size browsers inherited from Netscape and Mosaic is, if anything, a little too small to be comfortable for long-form reading rather than too large.

    But there’s a lot of good to be learned from print, and so far the only thing we’ve universally adopted is that typefaces are important. (Well, most of us have.) Sites like Medium not only demonstrate well-chosen type, but demonstrate something many sites still ignore: white space is also important. A good case could be made that it’s even more important in a browser window than it is on the page.

    Print magazines have well over a century of experience in balancing the demands of display ads with the absolute necessity of making the magazine a pleasant reading experience. This is a lesson that for the most part we haven’t figured out online; Medium doesn’t have ads, so they don’t face this dilemma. I have faith we’ll get there, but an important first step is making the content itself pleasant to read.

    Ironically, when you actually get to a Re/code article, they do a pretty good job of this. If they can only make the front page as streamlined as the rest of the reading experience, it’d be terrific—or at least a good base from which we could start thinking about the ad problem.

    Seriously, though. Georgia? In 2014? Who uses that?


  • December 30, 2013 9:57 am

    Hunting of the Snark

    We are what we pretend to be, so we must be careful about what we pretend to be.

    — Kurt Vonnegut, Mother Night

    A podcast I used to listen to, Angry Mac Bastards, recently flamed out spectacularly. The show’s premise was simple: the three hosts found stupid things people said on the Internet—as the name implies, mostly things about Apple—and ranted about them in deliberately profane fashion. Sometimes its segments were damn funny, and sometimes they seemed to be little more than shouting you fucking cunt into a microphone for twenty minutes.

    Comedy is, almost universally, observational. The best comedy is built on observations we nod our heads to, even if we may feel a little guilty for doing so. And it’s often mean. What separates funny mean from just mean mean is hard to define, but funny tends to be either aimed at ourselves or at those who are deserving of scorn and ridicule. And as much as the phrase speaking truth to power may be a liberal cliché, it’s a pretty good barometer for who is—and isn’t—deserving.

    The other interesting thing about comedy has its roots in something it shares with all performance: the stage persona. We adopt personas in all our interactions to varying degrees, of course, but performances—even when we’re performing “as ourselves”—heighten this. We’re going to exaggerate some aspects of the way we are and downplay others.

    Comedy, though, is singular in that we’re rewarded for exaggerating the aspects of our personality that, in other circumstances, kind of make us assholes. Funny mean is saying those head-nodding things in a way that only an asshole would. Mean mean is just being an asshole.

AMB found a software developer’s “hire me” page and went to town on not just the page but developer Aaron Vegh personally. A lot of it was juvenile (capping on his photograph), founded on extremely ungenerous readings (equating self-proclaimed flexibility with poor problem-solving and being a prima donna), or just plain wrong (describing his web development book as “self-published”).1 What they did, in short, looked a whole damn lot like the woefully superficial and ill-informed trollbait they’re theoretically supposed to be skewering.

    They were clearly trying for funny mean. Some of it was. But a lot just came across as mean mean.

    There were several reasons I’d checked out of AMB sometime last year. One of them, though, was that I just don’t like punching down. Beating up on Seth Godin for spouting out Deep Thoughts on marketing strangely reminiscent of Deep Thoughts by Jack Handey is one thing; beating up on a guy we’ve never heard of—especially when, between kicks, you’re admitting his web page isn’t really that bad—is quite another. But something unexpected happened this time: it went viral. Suddenly a whole lot of people learned of AMB not as a podcast about flinging sporadically funny crap at people who deserve to be taken down a peg or two, but one about verbally brutalizing someone whose worst crime was overly twee self-promotion.

    And the Internet Outrage Machine went to work.

    The fallout AMB’s hosts have suffered from this has been serious, including costing one her job. Vegh didn’t deserve to be beaten down, but they didn’t, either—they arguably earned a rebuke, but not virtual destruction. We tend to force stories into “hero vs. villain” narratives: either everybody deserved what they got and good triumphed, or the forces of evil beat the profane but good-hearted heroes thanks to Vegh’s dastardly move of calling the public’s attention to this. But that’s not the way any of this really went down.

    I’m not sure if there’s an object lesson for either side. I’m not sure there are even really sides. If there is a lesson, though, maybe it’s this: how we say what we say matters, no matter what it is we’re saying. It applies equally to pompous “hire me” web pages and ranting “fuck you all” podcasts. This is not a kind of censorship, not a form of political correctness, not an example of Orwellian thought control—it’s an axiomatic truth of any language that has more than one way to say the same thing. Loudly proclaiming “I say things that way because it’s who I am so just deal with it” does not grant you a Get Out of Shitstorm Free card; we all know—and you do, too—that your word choice is not preordained by your immutable nature.

    So I’d think twice about deciding your online persona is “righteous asshole.” If it seems like a good idea, think two more times. You are not speaking truth to power. It is not a litmus test for determining your true friends. You are not guaranteed that only the “right” people will be pissed off. And you will build an audience that rewards you for being unkind—which makes it all too easy to cross lines you shouldn’t. When you get called on it, it’s too late to rip off your asshole mask and protest that’s not who you really are.

    1. They finally noticed it wasn’t self-published, but weirdly described Wiley—the original publishers of Herman Melville and Edgar Allan Poe, founded in 1807—as “semi-legitimate.” Random House is presumably “a step up from a mimeograph.” 

  • December 17, 2013 8:38 pm

"The NSA didn’t wake up and say, ‘Let’s just spy on everybody.’ They looked up and said, ‘Wow, corporations are spying on everybody. Let’s get ourselves a copy.’"

    — Bruce Schneier, Cryptographer and security specialist, via Reform Corporate Surveillance, a parody site of Reform Government Surveillance, created by Aral Balkan, Founder of Indie Phone.  (via futurejournalismproject)

  • December 4, 2013 11:02 am

    Mario on Skid Row

    MG Siegler at TechCrunch makes the case that “the death of Nintendo has been greatly under-exaggerated”:

    Nintendo no longer is just competing with the others in their direct space—meaning gaming consoles and handhelds—meaning Sony and Microsoft. They are competing with every single smartphone and tablet currently on the market. And soon, they’ll likely be competing with a number of other set-top boxes as well entering the gaming space. Nintendo’s greatest weakness is the illusion (bolstered by years of reality) that they have years to figure out their competition and outlast them. They do not.

    I’d been thinking off and on about writing something about Nintendo which would have annoyed people in much the same way that MG’s piece will, although mine would not annoy nearly as many people because (a) my audience is smaller and (b) a much smaller percentage of my readers make it their life’s work to tell me why I am wrong about everything than is true of MG’s readers. Also, I was never really a gamer, not in the way we mean it now.1

    When people like John Gruber made the case that Nintendo was in trouble earlier this year, the general reaction from gamers—and very smart gamers, mind you—was, essentially, you say that because you don’t understand Nintendo. Perhaps not, but MG puts his finger on what bothered me about that response. Nintendo’s problem isn’t like Apple’s in the late ’90s—it’s like Nokia’s in the late 2000s. People were describing Apple back then as on death’s door because it was. Nokia, though, was the #1 cell phone manufacturer by a wide margin, flush with cash, stuffed with smart engineers and designers. And Nokia was supported by tens of thousands of fans ready to explain in impatient detail why the iPhone—let alone Android, pff!—only looked like a threat to people who didn’t understand Nokia.

    They were right. The changes in the market did only look like a threat to people who didn’t “get” Nokia. That was the problem.

    In terms of technology, Nintendo simply isn’t competitive with the PlayStation 4 and the Xbox One. Fans maintain that the brand strength is more than enough to offset that deficiency. Maybe, but that bet didn’t work out so well for Sega. Maybe Nintendo will come out with a Wii U successor in a few years that’s at least on a par with Microsoft and Sony’s offerings—but that just sticks them on the horns of a different dilemma: either they’ll end up in exactly the same position when the PlayStation 5 and Xbox π come out, or they’ll have zigged when the market has zagged. There’s a non-zero chance that the current generation of consoles is the last generation of Consoles As We Know Them.

Like everyone else bloviating about this I can’t come up with guaranteed good advice for Mario and friends. I don’t think the advice to drop all the hardware and port their games to everything else as fast as possible is particularly sound. But if I ran Nintendo, I’d probably quietly exit the console business and partner with Microsoft or Sony to make them the exclusive platform for our next generation “living room” games. I’d stay in the handheld hardware business for the time being, for much the same reason it makes sense for Amazon to keep producing e-ink Kindles: smartphones and tablets have become fine gaming devices, but a handheld game console is still a better experience at a lower price point. And I wouldn’t definitively rule out original games—not ports—for iOS and Android.

    Do I really think Nintendo is doomed? No. But they—and their fans—need to come to grips with the truth that what they’re doing right now is not working. It’s not going to suddenly start working when a new Zelda game ships for the Wii U. They need to change, and they need to figure out those changes while they still have the resources to make bold moves rather than desperate ones.

    1. I grew up with pen-and-paper role playing games, but the aspects of RPGs easiest for computers to model—dice, numbers and dungeon crawling—were the aspects I found the least interesting. My favorite games have always been adventure games, and to me the “storytelling” in modern big-budget productions, even critically-acclaimed ones like Bioshock and The Last of Us, tends to have a lot less to it than meets the eye. 

  • November 11, 2013 10:40 am

    Privacy happens at the endpoints

As you might have seen around the Internets, Google is making a concerted effort to tie YouTube identities to G+ identities, and this has brought another round of arguments about what the relative harm of this may be—and that in turn always gets into questions of just what identity means on the Internet. Should you be allowed to be anonymous? Is anonymous bad but pseudonymous okay, or do we treat them as essentially the same thing?

    Usually when this comes up, the arguments on the “pro-pseudonym” side circle around the notion that people sometimes have very good reasons to hide their identity. They might be under an oppressive authoritarian regime. They might be hiding from an ex-lover. They might be gay, or trans, or atheist, or Muslim. There could be any number of reasons to keep their identity secret. “If you don’t have anything to hide you shouldn’t be worried” is, after more than a moment’s thought, rather glib.

    This seems to me, though, to be setting an unnecessarily high bar for when we deem it “permissible” for someone to participate pseudonymously in an online community. Historically, people who are not public figures have legally and socially been held to have a fairly broad right to privacy. You don’t get to go into your neighbor’s house at your leisure to see what magazines they subscribe to, what kind of food they keep in their refrigerator, and what kind of porn they’ve saved on their TiVo. As long as there’s no reasonable suspicion of illegal shenanigans going on, what they do isn’t any of your damn business. By extension, as long as there’s no reasonable suspicion of illegal online shenanigans, what forums your neighbors hang out on and what kind of porn they’re saving to their hard drive isn’t any of your damn business, either.

    There are many, many people you interact with on a professional basis every day whose personal lives you don’t need to know a thing about. This not only includes sales clerks and bartenders and waiters and bank tellers, it by and large includes your boss and your coworkers and the guy at the Starbucks who recognizes you when you come in every morning and has your three shot skinny grande vanilla latte waiting by the time you get to the register. It includes the people who hire you, who approve your rent or mortgage application, who fill your prescriptions, and who file your tax returns. To a large degree it even includes your family. (Don’t pretend you share everything about your life with your parents.)

    The deep underlying problem with the relentless drive to give us One Identity Everywhere that both Google and Facebook are so enamored with is that—whether or not it’s the intent—it’s a drive to largely erase the distinction between the professional and the personal in the online world.

I don’t think this is the intent; Google almost certainly believes that this is a problem—like every single problem, from managing your calendars to the Israel-Palestine conflict—which can be solved by code running in their data centers. It’s just a question of getting the code right. But the belief that the separation between our professional and personal identities—between our various “circles”—can be adequately managed by twiddling settings in your G+ account betrays a lack of understanding not of what privacy is, but of where privacy is.

    Privacy—both in the legal sense and in the mores and folkways sense—is something that by definition can’t be outsourced. You can’t put your privacy mirror in the cloud; it has to be at an endpoint, in our own personal devices we have physical control over. In some future iteration of the Internet, it could be possible to bake privacy right into the infrastructure—but even then, that can only reliably happen at the endpoints.

    In practice a lot of privacy standards are convention rather than law; they work in large part because someone trying to find out what you do outside the public eye has to put some effort into doing so. In the first decade and a half or so of the Internet age, this was true online as well. Despite the oft-made argument that anything you do on the Internet is de facto being performed in public, it really wasn’t. You had to have some idea what you were looking for to find someone. This is becoming progressively less true, and we need to start asking ourselves whether enshrining the principle of “if you don’t want anyone to know about it, it shouldn’t be online” truly leads somewhere we want to go. I don’t think it does.

    For the immediate future, though, we already have a straightforward way to manage our privacy on our endpoints, where that management needs to happen: separate identities that aren’t tied together anywhere off our personal devices. The problem here isn’t how Google (or Facebook or anyone else) handles our privacy; the problem is that Google shouldn’t be managing our privacy. And Google (and others) need to stop demanding otherwise.

  • November 6, 2013 4:54 pm

    Blockbuster to close 300 remaining U.S. stores

    I’d like to say that I feel sad or nostalgic, even in a vague way, but I’d be lying. I never got into video rentals when they required physical media.

    The interesting hidden question here is what other businesses are going to follow suit over the next decade.

  • 10:06 am

    Does Microsoft need a turnaround expert?

    Nadia Damouni, Reuters:

    Microsoft Corp has narrowed its list of external candidates to replace Chief Executive Steve Ballmer to about five people, including Ford Motor Co chief Alan Mulally and former Nokia CEO Stephen Elop, according to sources familiar with the matter.

    Assuming the sources are correct, this just formalizes what we’ve already heard. According to Damouni, investors want a “turnaround expert,” which is usually taken to mean someone who can bring an ailing company back to profitability. That’s what Mulally did at Ford, cutting tens of thousands of jobs, closing plants, ending the Mercury brand, and selling off recently-acquired “premier” brands like Jaguar, Land Rover, and Volvo. It’s been brutal by some measures, but Ford is—by American car company standards, especially—doing well.

    But Microsoft is already profitable. What is it they actually need from a leader?

    A friend who works at Microsoft thinks they need a guy who understands tech—they don’t need Bill Gates back, they need a new Bill Gates. I can certainly see the argument, but I’m not entirely sure I agree. They need someone who understands technology deeply—but that isn’t the same thing as needing an engineer.

Elop seems to be a more popular choice with Microsoft’s board than he is with pundits of both the professional and armchair varieties. Most of the criticism of him is a variant of c’mon, look what he did to Nokia. Well, okay, let’s: what he did to Nokia was to get them to start producing high-end phones that people paid attention to. They get good reviews. They’re in stores near you. (In 2010, finding a Nokia touchscreen phone in the United States was like finding a unicorn.) And while it’s taken until 2013, I’m seeing more Nokia phones in the wild again—five years ago their smartphones were essentially invisible.1 Yes, Windows Phone still lags far behind Android and iOS, but for all practical purposes, Nokia didn’t get its shit together until 2011—the other guys had a three-year head start. I’m not convinced that Nokia could realistically have done any better.

    So do I think he’d be a good choice for Microsoft? Maybe. Elop would be different from their usual executives—he’s charismatic in interviews and is well-spoken, and you’re not going to get crazy Ballmer chanting from him; on the downside, he sometimes hasn’t known when not to talk. Telling the world that they were switching to Windows Phone over half a year before they could ship anything on it was a terrible idea.

    But the big difference Elop might bring to Microsoft is—I’d like to find a word that’s both less polarizing and less squishy than this one, but I can’t—a sense of taste. While I’d never compare Elop to Steve Jobs—their styles are so different it would only cause giggling—my impression is that, like Jobs (and unlike many Microsoft executives, past and present), Elop understands the importance of user experience. Bill Gates is a brilliant guy but he never had any interest in being a tastemaker, and on a good day Steve Ballmer has the taste of a Nacho Cheese Doritos Locos Taco. I’m not sure there’s anyone at Microsoft who would have come up with devices as, well, cool as the Nokia Lumia line. Not even the Surface tablets have the same quality industrial design.

    Microsoft needs someone who can come in and get rid of things that aren’t working, which appears to be the main appeal of Ford’s Mulally. But Elop has certainly demonstrated a willingness—some would say an unseemly eagerness—to shitcan things that don’t align with his chosen direction.

    Of the two internal candidates we know about, there’s former Skype CEO Tony Bates, and “cloud and enterprise chief” Satya Nadella. I don’t know a lot about either of them; Bates joined Skype in late 2010, which suggests his chief accomplishment as CEO was selling them to Microsoft seven months later. Nadella may be the closest to the kind of candidate my Microsoft friend wants: he’s an EVP who started as an engineer at Sun, and appears to have been involved deeply in moving Microsoft online. While picking him might be a statement about re-invigorating Microsoft’s engineering culture, it might also be a statement about re-invigorating their focus on big business: all of Nadella’s background seems to be in enterprise computing.

    1. Whenever an American writes this people inevitably bring up India, where Nokia’s S40 phones are still #1 with a bullet. Yes. We know. They’re not smartphones. No, not even the Asha line.