MG Siegler at TechCrunch makes the case that “the death of Nintendo has been greatly under-exaggerated”:
Nintendo no longer is just competing with the others in their direct space—meaning gaming consoles and handhelds—meaning Sony and Microsoft. They are competing with every single smartphone and tablet currently on the market. And soon, they’ll likely be competing with a number of other set-top boxes as well entering the gaming space. Nintendo’s greatest weakness is the illusion (bolstered by years of reality) that they have years to figure out their competition and outlast them. They do not.
I’d been thinking off and on about writing something about Nintendo which would have annoyed people in much the same way that MG’s piece will, although mine would not annoy nearly as many people because (a) my audience is smaller and (b) a much smaller percentage of my readers make it their life’s work to tell me why I am wrong about everything than is true of MG’s readers. Also, I was never really a gamer, not in the way we mean it now.1
When people like John Gruber made the case that Nintendo was in trouble earlier this year, the general reaction from gamers—and very smart gamers, mind you—was, essentially, you say that because you don’t understand Nintendo. Perhaps not, but MG puts his finger on what bothered me about that response. Nintendo’s problem isn’t like Apple’s in the late ’90s—it’s like Nokia’s in the late 2000s. People were describing Apple back then as on death’s door because it was. Nokia, though, was the #1 cell phone manufacturer by a wide margin, flush with cash, stuffed with smart engineers and designers. And Nokia was supported by tens of thousands of fans ready to explain in impatient detail why the iPhone—let alone Android, pff!—only looked like a threat to people who didn’t understand Nokia.
They were right. The changes in the market did only look like a threat to people who didn’t “get” Nokia. That was the problem.
In terms of technology, Nintendo simply isn’t competitive with the PlayStation 4 and the Xbox One. Fans maintain that the brand strength is more than enough to offset that deficiency. Maybe, but that bet didn’t work out so well for Sega. Maybe Nintendo will come out with a Wii U successor in a few years that’s at least on a par with Microsoft and Sony’s offerings—but that just sticks them on the horns of a different dilemma: either they’ll end up in exactly the same position when the PlayStation 5 and Xbox π come out, or they’ll have zigged when the market has zagged. There’s a non-zero chance that the current generation of consoles is the last generation of Consoles As We Know Them.
Like everyone else bloviating about this I can’t come up with guaranteed good advice for Mario and friends. I don’t think the advice to drop all the hardware and port their games to everything else as fast as possible is particularly sound. But if I ran Nintendo, I’d probably quietly exit the console business and partner with Microsoft or Sony to make them the exclusive platform for our next-generation “living room” games. I’d stay in the handheld hardware business for now, for much the same reason it makes sense for Amazon to keep producing e-ink Kindles: smartphones and tablets have become fine gaming devices, but a handheld game console is still a better experience at a lower price point. And I wouldn’t definitively rule out original games—not ports—for iOS and Android.
Do I really think Nintendo is doomed? No. But they—and their fans—need to come to grips with the truth that what they’re doing right now is not working. It’s not going to suddenly start working when a new Zelda game ships for the Wii U. They need to change, and they need to figure out those changes while they still have the resources to make bold moves rather than desperate ones.
I grew up with pen-and-paper role playing games, but the aspects of RPGs easiest for computers to model—dice, numbers and dungeon crawling—were the aspects I found the least interesting. My favorite games have always been adventure games, and to me the “storytelling” in modern big-budget productions, even critically-acclaimed ones like Bioshock and The Last of Us, tends to have a lot less to it than meets the eye. ↩
As you might have seen about the Internets, Google is making a concerted effort to tie YouTube identities to G+ identities, and this has brought another round of arguments about what the relative harm of this may be—and that in turn always gets into questions of just what identity means on the Internet. Should you be allowed to be anonymous? Is anonymous bad but pseudonymous okay, or do we treat them as essentially the same thing?
Usually when this comes up, the arguments on the “pro-pseudonym” side circle around the notion that people sometimes have very good reasons to hide their identity. They might be under an oppressive authoritarian regime. They might be hiding from an ex-lover. They might be gay, or trans, or atheist, or Muslim. There could be any number of reasons to keep their identity secret. “If you don’t have anything to hide you shouldn’t be worried” is, after more than a moment’s thought, rather glib.
This seems to me, though, to be setting an unnecessarily high bar for when we deem it “permissible” for someone to participate pseudonymously in an online community. Historically, people who are not public figures have legally and socially been held to have a fairly broad right to privacy. You don’t get to go into your neighbor’s house at your leisure to see what magazines they subscribe to, what kind of food they keep in their refrigerator, and what kind of porn they’ve saved on their TiVo. As long as there’s no reasonable suspicion of illegal shenanigans going on, what they do isn’t any of your damn business. By extension, as long as there’s no reasonable suspicion of illegal online shenanigans, what forums your neighbors hang out on and what kind of porn they’re saving to their hard drive isn’t any of your damn business, either.
There are many, many people you interact with on a professional basis every day whose personal lives you don’t need to know a thing about. This not only includes sales clerks and bartenders and waiters and bank tellers, it by and large includes your boss and your coworkers and the guy at the Starbucks who recognizes you when you come in every morning and has your three shot skinny grande vanilla latte waiting by the time you get to the register. It includes the people who hire you, who approve your rent or mortgage application, who fill your prescriptions, and who file your tax returns. To a large degree it even includes your family. (Don’t pretend you share everything about your life with your parents.)
The deep underlying problem with the relentless drive to give us One Identity Everywhere that both Google and Facebook are so enamored with is that—whether or not it’s the intent—it’s a drive to largely erase the distinction between the professional and the personal in the online world.
I don’t think this is the intent; Google almost certainly believes that this is a problem—like every single problem, from managing your calendars to the Israel-Palestine conflict—which can be solved by code running in their data centers. It’s just a question of getting the code right. But the belief that the separation between personal and professional identities—between our various “circles”—can be adequately managed by twiddling settings in your G+ account betrays a lack of understanding not of what privacy is, but of where privacy is.
Privacy—both in the legal sense and in the mores and folkways sense—is something that by definition can’t be outsourced. You can’t put your privacy mirror in the cloud; it has to be at an endpoint, in our own personal devices we have physical control over. In some future iteration of the Internet, it could be possible to bake privacy right into the infrastructure—but even then, that can only reliably happen at the endpoints.
In practice a lot of privacy standards are convention rather than law; they work in large part because someone trying to find out what you do outside the public eye has to put some effort into doing so. In the first decade and a half or so of the Internet age, this was true online as well. Despite the oft-made argument that anything you do on the Internet is de facto being performed in public, it really wasn’t. You had to have some idea what you were looking for to find someone. This is becoming progressively less true, and we need to start asking ourselves whether enshrining the principle of “if you don’t want anyone to know about it, it shouldn’t be online” truly leads somewhere we want to go. I don’t think it does.
For the immediate future, though, we already have a straightforward way to manage our privacy on our endpoints, where that management needs to happen: separate identities that aren’t tied together anywhere off our personal devices. The problem here isn’t how Google (or Facebook or anyone else) handles our privacy; the problem is that Google shouldn’t be managing our privacy. And Google (and others) need to stop demanding otherwise.
Microsoft Corp has narrowed its list of external candidates to replace Chief Executive Steve Ballmer to about five people, including Ford Motor Co chief Alan Mulally and former Nokia CEO Stephen Elop, according to sources familiar with the matter.
Assuming the sources are correct, this just formalizes what we’ve already heard. According to Damouni, investors want a “turnaround expert,” which is usually taken to mean someone who can bring an ailing company back to profitability. That’s what Mulally did at Ford, cutting tens of thousands of jobs, closing plants, ending the Mercury brand, and selling off recently-acquired “premier” brands like Jaguar, Land Rover, and Volvo. It’s been brutal by some measures, but Ford is—by American car company standards, especially—doing well.
But Microsoft is already profitable. What is it they actually need from a leader?
A friend who works at Microsoft thinks they need a guy who understands tech—they don’t need Bill Gates back, they need a new Bill Gates. I can certainly see the argument, but I’m not entirely sure I agree. They need someone who understands technology deeply—but that isn’t the same thing as needing an engineer.
Elop seems to be a more popular choice with Microsoft’s board than he is with pundits, of both the professional and armchair varieties. Most of the criticism of him is a variant of c’mon, look what he did to Nokia. Well, okay, let’s: what he did to Nokia was to get them to start producing high-end phones that people paid attention to. They get good reviews. They’re in stores near you. (In 2010, finding a Nokia touchscreen phone in the United States was like finding a unicorn.) And while it’s taken until 2013, I’m seeing more Nokia phones in the wild again—five years ago their smartphones were essentially invisible.1 Yes, Windows Phone still lags far behind Android and iOS, but for all practical purposes, Nokia didn’t get its shit together until 2011—the other guys had a three-year head start. I’m not convinced that Nokia could realistically have done any better.
So do I think he’d be a good choice for Microsoft? Maybe. Elop would be different from their usual executives—he’s charismatic in interviews and is well-spoken, and you’re not going to get crazy Ballmer chanting from him; on the downside, he sometimes hasn’t known when not to talk. Telling the world that they were switching to Windows Phone over half a year before they could ship anything on it was a terrible idea.
But the big difference Elop might bring to Microsoft is—I’d like to find a word that’s both less polarizing and less squishy than this one, but I can’t—a sense of taste. While I’d never compare Elop to Steve Jobs—their styles are so different it would only cause giggling—my impression is that, like Jobs (and unlike many Microsoft executives, past and present), Elop understands the importance of user experience. Bill Gates is a brilliant guy but he never had any interest in being a tastemaker, and on a good day Steve Ballmer has the taste of a Nacho Cheese Doritos Locos Taco. I’m not sure there’s anyone at Microsoft who would have come up with devices as, well, cool as the Nokia Lumia line. Not even the Surface tablets have the same quality industrial design.
Microsoft needs someone who can come in and get rid of things that aren’t working, which appears to be the main appeal of Ford’s Mulally. But Elop has certainly demonstrated a willingness—some would say an unseemly eagerness—to shitcan things that don’t align with his chosen direction.
Of the two internal candidates we know about, there’s former Skype CEO Tony Bates, and “cloud and enterprise chief” Satya Nadella. I don’t know a lot about either of them; Bates joined Skype in late 2010, which suggests his chief accomplishment as CEO was selling them to Microsoft seven months later. Nadella may be the closest to the kind of candidate my Microsoft friend wants: he’s an EVP who started as an engineer at Sun, and appears to have been involved deeply in moving Microsoft online. While picking him might be a statement about re-invigorating Microsoft’s engineering culture, it might also be a statement about re-invigorating their focus on big business: all of Nadella’s background seems to be in enterprise computing.
Whenever an American writes this people inevitably bring up India, where Nokia’s S40 phones are still #1 with a bullet. Yes. We know. They’re not smartphones. No, not even the Asha line. ↩
BlackBerry, it’s noted, has appointed John Chen to be their “interim CEO.” Chen is the former CEO of database company Sybase, and was presumably chosen to lead BlackBerry back to the same kind of high brand recognition and market leadership that Sybase enjoys today.
"The Cinderella story has two morals. The light moral is that when you have grace and beauty, it will shine through and elevate you to your proper station. The dark moral is that this only happens if you are lucky enough have a Fairy Godparent."
While I’m not a podcaster (yet?), I follow several people who are, and lately I’ve seen a few talk about how there isn’t an audio editing app specifically dedicated to the spoken word. You can use Garageband or Logic Pro or the like, but those are designed for music production, and there are things they could be doing that might make a podcaster’s life easier.
But there’s a podcast I listen to, Andrea Seabrook’s “DecodeDC,” that has two interesting features. One, the production quality is at the level of “This American Life”: we’re talking very high. Two, it tells you what software it’s being produced on at the end of each episode: Hindenburg Journalist Pro.
After hearing this for a dozen episodes or so I finally went to the web site for the software. It’s aimed at radio journalists, and is, in fact, an audio editing app specifically dedicated to the spoken word. And it has podcast-specific publishing features. It lacks many features that are in the bigger-name “pro” audio apps—but most of what’s been left out has, from what I can tell, been left out by design, because it doesn’t fit their niche.
Again, I’m not a podcaster—but if I were, I’d at least check this thing out.
Thanks, Apple! All our fingerprints are going to be available to the NSA now!
What? That’s wrong on so many levels.
I know, right? Can you believe it?
No. I mean it’s just not correct. For a start, the iPhone 5S doesn’t even store your fingerprints.
C’mon, of course it does. How else could it work?
A web site doesn’t actually store your password, it stores a one-way hash of your password. Likewise, the iPhone 5S doesn’t store an image of your fingerprint, it creates data about your fingerprint and then stores a hash of that.
Oh, sure, that’s what they tell you.
That’s what anyone who understands programming would tell you. Abstracting your fingerprint to data rather than storing an image is just as secure, much less computationally expensive, and probably more accurate.
Even if that’s true, we know passwords can be broken, so we can’t know for sure that someone couldn’t reconstruct your real fingerprint from the data.
You can’t reconstruct a password from its hash. Password cracking tools don’t reverse anything; they guess, running lists of common words and various permutations through the same hash function and checking for matches. That only works when the input is something guessable, like a weak password; there’s no dictionary of fingerprints to guess from. So, yes, in fact, we do know that someone can’t reconstruct your real fingerprint from the data.
So why would Apple do this if it’s not to collect your fingerprints?
How about the rationale they’ve already given us? What’s important on your phone isn’t your passcode, it’s everything the passcode is protecting. But for a lot of users—even technically savvy ones—passcodes are sufficiently irritating they’re not using them. With a fingerprint as a passcode, this barrier is removed, making it much more likely people will start protecting their phones.
There are really only two ways for your phone to be compromised: someone getting physical access to it, and someone siphoning data out of it remotely. Touch ID greatly reduces risks from the former, and doesn’t affect the latter at all.
But the fingerprints are still being read and that means they’re being turned into data and that data must be being sent to the NSA in some way you just haven’t thought of yet. Look at everything the NSA is collecting already!
Yes, let’s. Everything the NSA collects goes over networks. They’re a signals intelligence organization. They analyze signals—which is to say, communications. Phone calls. Email messages. IM messages. Bank transactions. There are a lot of things that constitute signals. But you know one thing that doesn’t?
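The one-way-hash argument in the exchange above can be sketched in a few lines of Python. This is an illustration of the general technique, not of how Touch ID actually works; the salt value and the wordlist here are made up for the example:

```python
import hashlib

def hash_secret(secret: bytes, salt: bytes) -> str:
    """One-way hash: cheap to compute, infeasible to invert."""
    return hashlib.sha256(salt + secret).hexdigest()

salt = b"per-device-salt"  # hypothetical salt, unique per device
stored = hash_secret(b"hunter2", salt)

# Verification just recomputes the hash and compares --
# the secret itself is never stored anywhere.
assert hash_secret(b"hunter2", salt) == stored

# "Cracking" can only guess candidate inputs and test each one.
# It finds weak, guessable secrets; it reverses nothing.
wordlist = [b"password", b"letmein", b"hunter2"]
cracked = next(
    (word for word in wordlist if hash_secret(word, salt) == stored),
    None,
)
print(cracked)  # b'hunter2' -- found only because it was in the list
```

Swap the stored secret for something that isn’t in any wordlist—say, a blob of data derived from a fingerprint sensor—and the guessing strategy has nothing to work with.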
Yes, I know, Betteridge’s Law strikes again. Isn’t it possible that Stephen Elop was secretly dispatched from Microsoft to join Nokia as part of a three-year plan to make them Windows Phone’s biggest hardware partner and then return to the fold with Nokia’s smartphone division after Steve Ballmer was unexpectedly forced out?
Actually, when you put it that way, no. Jesus Christ, people, take off the tinfoil hats. Or put them on. Or something.
As I’ve pointed out before, Nokia went with Windows Phone because Google wouldn’t let them both use Android branding and Nokia service branding, and they considered that a deal-breaker. Elop also gave another reason to The Guardian back in July which I’d missed:
What we were worried about a couple years ago was the very high risk that one hardware manufacturer could come to dominate Android. We had a suspicion of who it might be, because of the resources available, the vertical integration, and we were respectful of the fact that we were quite late in making that decision. […] Now fast forward to today and examine the Android ecosystem. There’s a lot of good devices from many different companies, but one company has essentially now become the dominant player.
Call me crazy, but it seems the man was right. What he doesn’t say—but you can clearly take away from this—is that he foresaw there would only be room for three major smartphone operating systems and that Nokia’s own MeeGo was not going to be the third one. That meant Nokia needed to be for Windows Phone what Samsung was for Android. They may be a distant third now, but if they’d stuck on their original course they’d be a distant fourth, with HTC having taken the dominant role with Windows Phone—an outcome that probably wouldn’t have been much better for Microsoft than it would have been for Nokia.
It’s true that Nokia hasn’t had any out-of-the-ballpark hits under Elop, but it’s ridiculous to think that he didn’t want one. It’s just not realistic to have expected that. Nokia’s substantial fanbase didn’t want to believe—and still doesn’t, apparently—that he was handed the reins of a company that, like RIM and Palm, was already in the process of collapsing due to entirely misreading where the market was going. It’s easy to accuse Elop of being Nokia’s Adam Osborne rather than their Steve Jobs. But keeping his predecessor, Olli-Pekka Kallasvuo, would have left them with their Mike Lazaridis. How’d that work out for BlackBerry?
Years ago I was trying to formulate a maxim that would have gone something like this:
Any given technology will be put to all possible uses that have a greater predicted value than cost.
There’s a science fiction novel I’ve been working on fitfully for about two years in which this concept drives the background; closer to the foreground is the concept of ubiquitous presence, a consequence of being always online and, whether by choice or just passively, leaving digital footprints everywhere. The characters have far less privacy than we’re used to, but not due to intrusive government surveillance. Societal expectations have shifted from “private unless explicitly shared” to “public unless explicitly hidden.” Like all societal shifts, that wasn’t a collective, conscious decision, but more of a response to pressures over time—specifically, the pressure of applying that maxim.
There are a lot of things about the novel I expect to remain science fiction—it’s set around the asteroid belt, for a start—but ubiquitous presence? You’re soaking in it. It’s not as pervasive now as it is in the novel, but it will be (and long before we’re living among the space rocks). This is the dawn of the age of Big Data. If you’re in the first world, there’s likely more data generated and stored about you annually now than about anyone who died before the year 2000 over their entire lives. Not all of that data is being actively generated by you, very little of it is being actively stored by you, and the uses that it’s being put to are largely not under your control even when they benefit you.
The current NSA scandals have been the catalyst for a lot of people to finally start thinking about this, but most of us aren’t thinking about it very deeply. We know we’re outraged by the government collecting all this data on us without our consent; most of us feel “this isn’t a bad thing because our intentions are good” is not reassuring. History is, by and large, not at all supportive of the “just trust us” defense.
Even so, there are better and worse reasons for not trusting an organization, and “because government,” in and of itself, is a poor one. The state—like a hammer, gun or Tim Tebow—is a tool. When a tool isn’t producing a good outcome, there are many possible reasons why, but “this tool is intrinsically evil and nothing good will ever come from it” isn’t one of them.
The signal that’s somewhat lost in the noise is this: the government isn’t collecting the data. Corporations are. What the NSA is doing is correlating it with metadata to form complex diagrams of our personal relationships, performing deep data mining to create potentially very invasive profiles, and taking an extremely expansive view of what they’re allowed to do under the law.
That’s just as much of a problem! Yes, it is, but why was the data being collected in the first place? To allow private entities, taking an extremely expansive view of what they’re allowed to do under the law, to correlate it with metadata to form complex diagrams of our personal relationships and perform deep data mining to create potentially very invasive profiles. And it’s important to understand that they’re not doing it because they’re evil, or stupid, or in league with the Rothschilds to bring about the New World Order.1
They’re doing it because it helps them do the jobs we want them to do.
If you’re going to show me ads at all, I’d much rather those ads be relevant. I want Netflix to recommend cool stuff to me. Don’t you? That requires it to have some idea of what you and I consider cool. Your phone may already have the ability to know an email is a meeting request, offer to make the calendar appointment for you, and remind you when you need to get in the car and head out based on your time, location and the traffic conditions along the way. Jeeves was an excellent valet because he knew Bertie Wooster better than Bertie knew himself in some respects. So it will be with our electronic valets—and a lot of what they know is going to be accessible to others.
That tends to lead to thoughts of Orwellian dystopia, but even our optimistic science fiction implies this kind of data collection. Analysts back at Starfleet HQ can easily find out how much Earl Grey Captain Picard drinks and at what temperature, can get to everyone’s medical records through the transporter logs, and can look up just what you’ve been doing on the Holodeck in your spare time. (Oh, myyyyy.) “Star Trek” depicts, intentionally or not, a society that long ago set a different balance between privacy and presence than we did.
We’re at the point where we need to pay attention to how we’re setting that balance ourselves, though. The problem isn’t that we’re laying the foundations for a “public unless explicitly hidden” society; the problem is that we don’t seem to be fully aware we’re doing it. We’re ecstatic about finding ever more value in ubiquitous presence, but we’ve put very little effort into understanding the costs to our privacy and how to address them. We need to build accountability into the infrastructure. In the long term we may need to become, as science fiction author and Guy Who Thinks About This Stuff David Brin puts it, a transparent society.
As civil libertarian Jennifer Granick wrote, “good people will do bad things when a system is poorly designed, no matter how well-intentioned they may be.” Right now, the systems are being designed by people for whom all the value lies in presence rather than privacy. The state isn’t the architect of many of these systems, and Orwell’s future isn’t the only dystopian outcome possible. As vital as the debate over how much data-driven surveillance the government should be permitted to perform may be, it’s part of a much larger set of issues.
"Rothschild and the New World Order" is my Information Society cover band. ↩