I’ve been watching the rollout of “Google+”, the new don’t-call-it-a-social-network from everyone’s favorite advocate of open. Since I’m not involved in the beta I can’t comment directly, but thanks to the design work of longtime user interface designer Andy Hertzfeld (Apple fans know him as one of the original Mac OS guys), it really does look good. That’s not something I customarily say about Google products. The nerd comic XKCD made a more trenchant point, though: Google+ is “like Facebook but not Facebook,” and for some of the audience that would be enough.
This isn’t Google’s first attempt to move into the “social” space, of course; they recently rolled out the peculiar “+1” button that lets you indicate that a search result—or a web page that’s added the button—is, um, one arbitrary unit better than other results or web pages. (I presume this came from the habit of people showing agreement in comments by replying “+1.” We can be glad Google didn’t name the service “This.”) And “social” in various guises has run through a lot of Google products: not just Buzz and Wave and Orkut before it, but the haphazardly integrated Mail and Talk and Voice.
Google is clearly trying to take on Facebook, but they’ve been trying to take on everyone. At various points they’ve been trying to compete with AIM, Wikipedia, PayPal, desktop email clients, Microsoft Office, PriceGrabber, YouTube (until they bought them), hosted blogging services, Flickr, MapQuest, desktop RSS readers, web portals (remember those?), and oh yes, search engines.
All these attempts are, of course, built on the back of that last one. In terms of web search, there’s Google and there’s everybody else; their market share, at 65%, is more than four times that of their closest competitor. Google is basically making the bet that if they put everything you need in front of your face, you’re going to stick with them rather than use their competitors.
This isn’t, on its face, much different from what Microsoft did a little over a decade ago to promote Internet Explorer. This gets described as Microsoft trying to “kill” Netscape by “locking them out,” but what Gates & Company did was devious mostly in its simplicity. They just started to ship Internet Explorer as part of Windows, making its icon appear on the desktop by default. Nothing prevented users from installing Netscape Communicator or another competing browser, and it was easy to make the IE icon go away.¹ Microsoft’s bet was simply that You, the General Public, were mostly too damn lazy to do that and would use what they set in front of you if it was good enough. It worked. Never bet against lazy. IE’s share of the browser market went from 20% to 75% in three years, and kept climbing.
Yet the way this tale is typically told neglects the inconvenient truth that Microsoft deviously tricked us all into using what was, at least until Firefox 1.0, a genuinely better product. Netscape Navigator 4’s rendering engine was slow in general, even worse on tables (which were being used for page layout, not just little charts, at that point), and as Cascading Style Sheets started to take off, Navigator just completely went to shit: pages rendered incorrectly, sometimes to the point of being unreadable. We all tend to focus on the things Microsoft screws up—the security bug of the week, how much we hate the ribbon UI—but most of their software, most of the time, at least rises to that all-important “good enough” level. In more than a few cases, their software was better than the competition in the ways that ended up mattering.
Likewise, being better than the competition is how Google got where they are now. They were better than the search engines that came before them. People moved to Gmail because it’s better than other free email systems.
I’ve said before that I don’t think Apple wants to be the next Microsoft, they want to be the next Sony. Google wants to be the next Microsoft. In Google’s vision—and I think they’re correct in this—the focus of computing has moved from the personal computer to the network, and they want to be the glue that ties it all together. They want to be where you not only search but communicate and plan and share and work. Like Microsoft, they’d rather you be there by choice—but they’ll be perfectly happy if you’re there because you’re too lazy to go anywhere else.
The problem is that so far Google just hasn’t been very good at it. They’re always fast followers and sometimes genuinely innovative, but with rare exceptions they just don’t make good products.
So far, I think they’ve believed, not that they’d ever put it this way, that they didn’t really need to make good products. Most of what they do is released under the principle that it’s bringing some amazing new thing to the task at hand—something that could only be done by shoving the task in question into the browser—and that all the old functionality you’re losing in the shift is largely incidental. Gmail’s big innovation was giving you lots of free storage and telling you to “archive” instead of delete. (Some would argue that replacing folders with topic keywords is a great innovation, but I see them as two separate things.) Google Docs is great for collaboration, but little else. Google Reader is more useful as a syncing service for RSS readers than it is for actually reading RSS feeds. And for some of their other services, there’s little innovation other than, well, just happening to be on Google. “You’ll use this because you’re already using our search site and mail program and it’s just one more click. You can leverage your Google Contacts to get up and running on this new thing quick. And we’re not evil, by which we mean ‘not Facebook.’”
While I’ll cop to being cynical about Google+, it has a good chance of taking off. Whether it has a good chance of dethroning Facebook is another question entirely. Just like Google search is sufficiently “good enough” that it’s really damn hard for Microsoft to make inroads with Bing, for the kinds of people who use Facebook—over ten percent of the world’s population²—Facebook is clearly “good enough.” Never bet against lazy—but unless it becomes easier to get to Google+ than it is to get to Facebook, lazy still favors Facebook.
The elephant in the room? Google may get progressively more competent at being a monopolist. An awful lot of users actually get to Facebook through Google. While I can’t find a citation, anecdotal evidence suggests that bookmarks—like FTP, desktop email clients and punctuation—are increasingly only used by old-school nerds, and a lot of users “navigate” by just typing what they’re looking for, even domain names, into a search bar. In theory, just like Microsoft, shall we say, encouraged us all to just use IE by putting it prominently before us and making it a little more work to go get a competing browser, Google could make sure that Google+ shows up prominently in search results for Facebook. That’s easy. The hard part would be avoiding a shitstorm that would make the Microsoft case look like something from Judge Judy.
¹ Of course, “making the icon go away” didn’t actually remove Internet Explorer, which became a key point of contention in the antitrust trial.
² Seriously. Facebook has around 750 million users—and that’s defined as someone who’s logged in within the last 30 days, so it minimizes inactive accounts—and the world has just under 7 billion people.
Over at Daring Fireball, John Gruber wonders about the N9 and MeeGo:
It could be that in practice, the N9 and MeeGo are nowhere near as polished or useful as these demos make it seem. And maybe the problem is with the MeeGo APIs and developer tools — that they’re just not up to snuff with iOS, Android, and Windows Phone. I.e., it could be that the N9 will be a good phone out of the box — as compared to, say, an out-of-the-box iPhone or Android phone — but that MeeGo is not a competitive software platform. The whole point of these app phones is that you can add software to them.
This is mostly my take, too—Nokia’s development tools have historically been, well, a little wonky. (Anyone who’s used the Symbian toolchain understands that I am being very diplomatic, although Qt’s system is considerably nicer.) One thing has been glossed over in the reporting, though—including in my own tidbit a few days ago, when I dutifully relayed the same wrong bit everybody else is relaying.
From what I gather now, the Nokia N9 does not run MeeGo.
It’s being described as running “MeeGo 1.2 Harmattan,” but this was described at the MeeGo Conference last month a little differently:
Nokia is working on the release of a product this year, expected to raise the interest of the MeeGo community. It comes with an OS codenamed Harmattan that is API compatible with MeeGo 1.2. Let’s have a look at the peculiarities of Harmattan, and how to overcome them for the benefit of the MeeGo community.
From all appearances, Harmattan is essentially Maemo 6 with a MeeGo API compatibility layer. Maemo is the OS that Nokia was ostensibly replacing with MeeGo—we last saw it on the Nokia N900, a smartphone that’s the spiritual descendant of the old Nokia Communicator line.
Gruber wonders if Nokia’s CEO “determined that MeeGo would never have a competitive third-party library of apps.” I suspect he did determine that, but I think he also determined that MeeGo as it existed at the end of last year just wasn’t capable of moving the company forward. As a really fabulous article in Businessweek’s June 2nd issue, “Stephen Elop’s Nokia Adventure,” pointed out, at the beginning of this year MeeGo was a shambles. That the N9 really isn’t a “pure MeeGo” device suggests that it’s, well, still kind of a shambles.
This still leaves Gruber’s last question—“why are they bothering to finish the N9 and ship this”—unanswered. Here’s a thought, though. We know the N9 is a testbed for the hardware design of their Windows Phone handsets. But it may also be a testbed for new user experience design.
Watchers all seem to be asking, “Why would Nokia put all that effort into making a cool-looking new user experience paradigm that they’re never going to use again?” The logical answer is: they wouldn’t. So where will it resurface? In Windows Phone Mango (or Papaya, or whatever)? The Businessweek article makes the point that Nokia’s deal with Microsoft lets them customize Windows Phone in ways other handset manufacturers can’t. Or maybe in a successor to both MeeGo and Maemo from the “New Disruptions” skunkworks project? Who knows. But even though the N9’s OS is effectively a one-off, the genetics of its UX may be farther-reaching than we’re assuming right now.
Bloomberg’s Hugo Miller and Danielle Kucera, “RIM’s Plunge May Beckon Microsoft, Dell”:
Research In Motion Ltd. (RIMM) has lost so much value that an acquirer could pay a 50 percent premium and still buy the BlackBerry maker for a lower multiple than any company in the industry.
Makes sense so far.
“Given how significant the deterioration of the stock price has been, that alone will cause interest,” said Paul Taylor, who oversees $14.5 billion, including RIM shares, as chief investment officer at BMO Harris in Toronto. “RIM still has meaningful market share in the U.S. and meaningful market share internationally, and RIM has an iconic brand.”
Okay, I guess.
Among potential acquirers, Microsoft could build its share in smartphones and gain a device to complement its Windows Phone 7 mobile-phone platform, said BMO Harris’s Taylor.
Microsoft has made it quite clear they’re betting on Nokia to save Windows Phone, and—over the loud wailing of the Symbian fans who are convinced this is a horrible, horrible mistake—vice versa. What the hell does Taylor think Microsoft would do? Put Windows Phone 7 on RIM hardware? For that to make any sense they’d have to tie it into the whole BlackBerry Enterprise Server, and wait, that still doesn’t make any sense. Nokia is going to have Windows Phone hardware out considerably sooner than any Microsoft/RIM alliance† could—and say what you will about the N9’s odd swipe-to-switch UI, Nokia really does have some of the best industrial designers in the business. Think about an N9 running Windows Phone “Mango,” then think about a BlackBerry Torch running it, and think about which one you see people actually leaving the store with.
Dell is less crazy to the degree that they don’t have an existing phone business to overturn. I mean, yes, technically, they sell phones, but nobody uses them. I’m not sure anyone who doesn’t work for Dell or a gadget site even knows they sell phones. I could possibly—possibly—see that happening. But Microsoft?
To be fair, not all analysts are idiots. Walter Todd at Greenwood Capital, quoted in the same Bloomberg piece:
“It’s easy to say the stock is cheap and somebody should buy it,” said Todd. “In reality, the options are a lot less than people think. You see them missing the boat and being killed by Apple and Google’s Android. It’s hard to catch up when you miss the boat.”
I’d actually use a different nautical metaphor for RIM—I’d go with Nokia CEO Stephen Elop’s “burning platform.” Whether or not one agrees with the way Elop jumped, it’s at least understandable. RIM’s response has been to break out the marshmallows.
†I really wanted to say “job” here. Don’t hate me.
Ben Brooks linked to this Business Insider piece by Jay Yarow claiming that RIM’s PlayBook didn’t ship with native email support “because its architecture can’t support two devices with one person’s account.” What’s not obvious unless you follow the link is that it’s not the PlayBook’s architecture that’s lacking, it’s the server-based BlackBerry email system itself.
Here’s how our source explains it: “The BlackBerry Email System server has the concept of one user = one device.” When RIM built its system it didn’t see ahead to realize there would be a time when a user could have a smartphone and a tablet.
I don’t blame RIM for not foreseeing a time when a user could have both a smartphone and a tablet. What I blame them for is not foreseeing a time when a user might have any reason at all to access their email through BlackBerry’s secure mail system with anything other than one and only one registered device. I understand that this (theoretically) makes the system more secure—the BlackBerry becomes, in effect, its own hardware dongle—but I have to wonder: is this really the first time that this issue has come up for RIM, or just the first time that they couldn’t blow it off?
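To make that design assumption concrete, here’s a minimal sketch of a one-user-one-device registry, the invariant the Business Insider source describes. The class and names are mine, not RIM’s actual code:

```python
# A minimal sketch, assuming a BES-style design, of the "one user =
# one device" invariant. It just illustrates why a second device
# can't piggyback on an account the server believes belongs to
# exactly one BlackBerry.

class SecureMailRegistry:
    def __init__(self):
        self._device_for_user = {}  # user -> the single registered device

    def register(self, user, device):
        existing = self._device_for_user.get(user)
        if existing is not None and existing != device:
            raise ValueError(f"{user!r} is already bound to {existing!r}")
        self._device_for_user[user] = device

registry = SecureMailRegistry()
registry.register("alice", "BlackBerry Bold")
try:
    registry.register("alice", "PlayBook")  # one user = one device
except ValueError as err:
    print(err)  # 'alice' is already bound to 'BlackBerry Bold'
```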
Free Software pundit Eric Raymond, preaching on the report that Apple “has filed for a patent on a system for disabling the video camera on an iPhone or iPad when its user attempts to film a concert or other interdicted live event”:
On their past record, can there be any doubt of Apple’s willingness to quietly slipstream this technology into a future release of iOS, leaving its victims unaware that their ability to record a police action or a political demonstration is now conditional on whether the authorities have deployed the right sort of IR flasher to invisibly censor the event?
Um, yes. Yes, Eric, there can be doubt.
Raymond’s argument about Apple’s past record is buried in the comments:
DRM. Their whole corporate history indicates a willingness to design iPhones and iPads so they are instruments of RIAA/MPAA control rather than the purchaser’s.
Sigh. Okay, let’s actually look at their whole corporate history. Apple’s CEO has stated in the past that the reason Apple doesn’t do subscription services like Spotify is that he believes people want to own their music. Is the fact that they started out offering DRM-encumbered music at odds with that? In part, yes—but at the time, the choice was offering music with DRM or not offering music at all. The iTunes Music Store was successful where others had failed because iTunes’ DRM was far and away the least onerous: music was keyed to an iTunes account, up to five computers could be linked to that account and an unlimited number of devices could be linked to it through the computers. Neither iTunes nor any Apple device had to “re-authorize” playback; the media you’d bought couldn’t be “de-authorized” by Apple, and Apple couldn’t mysteriously reach back onto your device to delete it (hello, Amazon).
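Those rules are simple enough to sketch. This toy model is my own reconstruction from the description above, not Apple’s code; it shows the key asymmetry, that computers are capped while devices inherit authorization from them:

```python
# A toy reconstruction of the authorization rules described above.
# Computers are capped at five per account; devices are unlimited
# because they authorize through a computer rather than directly.

MAX_COMPUTERS = 5

class ITunesAccount:
    def __init__(self):
        self.computers = set()
        self.devices = set()

    def authorize_computer(self, computer):
        if computer not in self.computers and len(self.computers) >= MAX_COMPUTERS:
            raise RuntimeError("account already authorized on five computers")
        self.computers.add(computer)

    def sync_device(self, device, computer):
        # Devices don't count toward the cap; they inherit authorization
        # from whichever authorized computer they sync with.
        if computer not in self.computers:
            raise RuntimeError("sync through an authorized computer first")
        self.devices.add(device)

account = ITunesAccount()
account.authorize_computer("home-imac")
for n in range(20):                    # no limit on synced devices
    account.sync_device(f"ipod-{n}", "home-imac")
print(len(account.devices))           # 20
```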
And from very early on, Apple was apparently campaigning behind the scenes to get rid of DRM. They weren’t the first DRM-free music store, but they were certainly instrumental in getting major record companies to back off DRM, first by getting EMI to agree to the iTunes Plus format—higher bitrate and no DRM—which in turn set the stage for Amazon’s DRM-free music store.
There’s something important to note here: I saw arguments at the time that Apple had no compelling business interest in supporting DRM, but by the time the Amazon MP3 store opened, Apple had such a commanding lead in both the MP3 player and digital download markets that arguably Apple had a very compelling business interest in supporting DRM: vendor lock-in. By getting rid of DRM, they would make it possible for you to take your music to any software/hardware combination that could play AAC files.
And they did it anyway. This strongly suggests that Jobs was not blowing smoke out his ass when he said that he felt people wanted to own their own music.
Apple still sells DRM-encumbered media, to be sure—movies and TV shows. But again, this is due to the content providers: it’s either sell (or rent) them with DRM or not sell them at all. The Eric Raymonds of the world argue that this is proof positive that Apple is evil. I’d argue that this is proof positive that Apple is pragmatic. If people like Raymond ran Apple, there would never have been an iTunes Store. If there had never been an iTunes Store, there wouldn’t be an Amazon MP3 Store, either. Raymond is a radical of a certain stripe, and in some ways that’s a good thing—but nearly all radicals of nearly all stripes tend to regard compromise as pure evil. With all respect to the radicals of the world, this is why you guys tend to never actually get shit done.
So what about Apple’s patent here? What’s actually being addressed in the patent is using a phone’s camera to
…determine whether each image detected by the camera includes an infrared signal with encoded data. If the image processing circuitry determines that an image includes an infrared signal with encoded data, the circuitry may route at least a portion of the image (e.g., the infrared signal) to circuitry operative to decode the encoded data.
This could be used in museums, for instance, to trigger a display of information about what the camera’s pointed at, or to get tourist information from a display stand in a town square or a theme park. And, yes, the patent does suggest the signal could be used to “disable the recording functions of devices.” The examples actually listed in the patent are all about museums, which gives us some insight into what the inventors were actually thinking—they’re imagining “photography prohibited” signs at MoMA.
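For the curious, here’s a toy sketch of the detect-and-decode pipeline the patent describes. The payload format and “kinds” are invented for illustration; none of this is Apple’s implementation:

```python
from dataclasses import dataclass

# Scan each camera frame for an encoded infrared signal and act on
# its payload. The payload kinds ("info", "disable-recording") are
# made up for this sketch.

@dataclass
class IRPayload:
    kind: str       # "info" or "disable-recording" in this sketch
    text: str = ""

def find_infrared_payload(frame):
    # Stand-in for the image-processing circuitry: a real implementation
    # would look for a modulated IR pattern in the sensor data.
    return frame.get("ir_payload")

def process_frame(frame):
    payload = find_infrared_payload(frame)
    if payload is None:
        return "record frame normally"
    if payload.kind == "info":
        return f"overlay: {payload.text}"   # the museum use case
    if payload.kind == "disable-recording":
        return "recording disabled"         # the use Raymond fears
    return "record frame normally"

print(process_frame({}))  # no IR signal -> business as usual
print(process_frame({"ir_payload": IRPayload("info", "Starry Night, 1889")}))
```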
Naturally, that doesn’t mean that the nefarious uses Raymond imagines for this technology aren’t possible. (If this were deployed, they would almost certainly happen, in fact—that’s simply the way of technology.) However, using the methods in this patent to restrict device functionality is less practical than using them to enhance device functionality. In the latter case, it’s an added bonus you get for having an iThing; in the former case, it’s a penalty for having an iThing, at least to the exclusion of other portable cameras. As a company which is primarily interested in selling you iThings, which of those do you think Apple would have a stronger interest in promoting?
Lastly, Raymond is, of course, setting up Google as “less evil” than Apple. Again from the comments:
Google—so far—doesn’t have a history that indicates a willingness to seize control of Android handsets in order to protect someone’s DRM. So, even though Apple and Google both have products that are DRMed, Google is objectively less evil about it.
What exactly has Apple done to “seize control” of my iPhone? Raymond’s argument seems to be, as expressed still later in the comments, that “Apple’s closed source means I never had control of my handset in the first place”: in other words, it’s the same specious reasoning Richard Stallman uses to argue that running Microsoft Windows is slavery. Right then. It doesn’t matter that Google has actually remotely deleted Android apps; somehow, Google’s business model—in which users like Raymond are manifestly not Google’s customers, and are arguably part of what Google is actually selling to its real customers—is still seen as more conducive to liberty. I think that’s extremely short-sighted—but Google has mastered the language of open, and for people like ESR, so far that’s enough.
Daring Fireball links somewhat offhandedly to Sepia Labs as “Brent Simmons’ new gig.” Following the link reveals that it’s not only Brent Simmons’ new gig, it’s Nick Bradbury’s new gig, too.
Mac users know—or should know—Simmons’ work as the creator of NetNewsWire and MarsEdit, for a long time the best RSS reader and blog editor, respectively, on the platform. (I’d say MarsEdit, now developed by Daniel Jalkut, is still the best at what it does; for me, NetNewsWire has been eclipsed by Reeder, but that may yet change back.) Bradbury created the best RSS reader on Windows, FeedDemon—and before that, authored HomeSite and its spiritual successor TopStyle. HomeSite was far and away the best “code-centric” HTML editor that I’ve ever used, and for years it was the thing that I missed the most when I switched back from Windows to the Mac in the late ’90s. It’s hard to describe how cool HomeSite was, but the best way I can describe it to Mac nerds is this: take Panic’s Coda with its live previews and built-in reference documentation, then give it BBEdit’s text editing engine. It’s only a mild exaggeration to say I owe my career to HomeSite.
At any rate, I can’t wait to see just what it is they’re working on; they describe their forthcoming app, “Glassboard,” as a social sharing app that takes privacy seriously (the tagline is “know who you’re sharing with”).
(Factoid: when I got my first Mac I wrote to Allaire and asked them if there was any chance that they could port HomeSite to the Mac. They wrote back and explained that HomeSite was written in Borland Delphi and was thus completely unportable—and suggested I should check out BBEdit instead.)
DC and Marvel’s flagship titles don’t have narrative arcs anymore. If you believe the two biggest comic publishers on Earth, the life of a superhero is incident after nightmarish incident, with no logical progression. And not even death can end the eternal parade of horrors, because dead heroes only get a few precious months of rest before their hideously contrived resurrections.
What Resnikoff gets at is that building a publishing business around characters who must ultimately be tied to a given set of behaviors and places and themes requires that nothing that happens to the characters has consequence over the long run. Batman and Spider-Man not only can’t ever stay dead, they can’t ever fundamentally change. They’re little different from most newspaper comic strips or the “Archie” titles in that regard, but they don’t want to be seen that way—so instead they put their characters through elaborately torturous mega-plots every decade or so: “After Ultimate Crisis War, nothing will ever be the same again!” But the entire point is to make sure everything is the same again. And again. And again.
I tend to come down in favor of reboots but DC does them every five years. Part of the answer, to me at least, is not junking up your marquee titles by having 8 versions running at any given time with no discernible story arc and countless cross-title runs that are just shameless attempts to try to get people to buy titles they wouldn’t normally.
I agree with the latter part, but I suspect this only gets “fixed” by doing something way too radical to be considered. If Grant Morrison wants to tell an eight-issue story with Batman, he writes an eight-issue story with Batman which stands alone. Instead of handling the multiverse like a soap opera, handle it like a shared world. The only continuity you have to worry about is when stories flatly contradict one another, but issues #100–#103 of a title might tell a story that comes before the one in issues #97–#99 and the two have nothing to do with one another.
At any rate, Resnikoff asks the bottom line question that kept me from ever being a superhero comics collector, even though I was prime collecting age right around the time of some of the most vaunted titles like Watchmen and The Sandman (both of which, one should note, avoid these problems):
If you live forever and nothing fundamental in your circumstances can ever permanently change, how can anything mean anything? And if there’s no room for meaning, then why is your story worth telling?
Analyst Gene Munster reported the surprising-or-is-it tidbit that only 7% of iOS developers are also Mac developers, “down from 50% in 2008.” The first comment on The Loop is typical of the comments you get on this sort of survey:
The fact that the study was done on 45 people discredits it completely. Also, percentages do not make any sense giving that the sample was inferior to 100. These numbers might be right (though highly imprecise), but they could also be totally wrong.
Let’s find out, shall we?
A lot of people—and I include myself—don’t understand statistical sampling very well. (Neither does Munster, as we’ll see in a moment.) This is one of those places where Wikipedia does a good job of presenting a high-level overview of the concept. The takeaway point is that for a given population size, desired level of confidence, and desired margin of error, you can compute a necessary sample size. You can solve for one of the other figures—say, you have the population size and the sample size, so you can determine the margin of error if you assume a 95% confidence level (which seems to be pretty standard).
The formulas make my head spin, but fortunately a nice group called the Research Advisors has made a table for us, with a downloadable spreadsheet. You usually want a margin of error to be within a few percent—what “margin of error” means, after all, is that if you say something is 10% likely with a 2% margin of error, it’s actually 8% to 12% likely. (There are some interesting things about statistical sampling you can notice from the chart, the most fascinating one being that the bigger the population, the smaller the percentage of it you need to sample for a statistically valid result.)
Plug in Munster’s sample size of 45 for a population of 5,000, and you determine that for a 95% confidence level, he has a rather remarkable 14.5% margin of error. So—yes, this is close to statistically meaningless.
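If you want to check that arithmetic yourself, here’s a minimal sketch using the standard finite-population formula, the same math behind the Research Advisors table:

```python
import math

def margin_of_error(n, N, z=1.96, p=0.5):
    """Margin of error for a simple random sample of n from a population
    of N, at the confidence level implied by z (1.96 ~ 95%, 2.576 ~ 99%),
    assuming the worst-case proportion p = 0.5."""
    finite_population_correction = math.sqrt((N - n) / (N - 1))
    return z * math.sqrt(p * (1 - p) / n) * finite_population_correction

# Munster's survey: 45 developers out of a population of ~5,000
print(f"{margin_of_error(45, 5000):.1%}")                   # -> 14.5%

# A bigger survey: 16,000 people out of a million, at 99% confidence
print(f"{margin_of_error(16000, 1_000_000, z=2.576):.2%}")  # -> 1.01%
```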
But wait—actually, it’s not quite that accurate.
The figures above are for probability sampling, in which (quoting our friend Wikipedia) “every unit in the population has a chance (greater than zero) of being selected in the sample, and this probability can be accurately determined.” If you want to survey a million people, you can talk to just sixteen thousand of them and get a margin of error around 1% at 99% confidence if your sample is systematically selected. But if you just wander out into the street and talk to the first 16,000 people you meet, you’re doing convenience sampling, not systematic sampling. You’ve only learned about those sixteen thousand people.
Guess which of these Munster was doing. That’s right, you win a statistically random jelly bean! (It’s purple, within a 5% margin of error.)
It’s not that his extremely damn anecdotal data isn’t interesting, but don’t let anyone spin it as if it were a sign of the decline of the Mac. It may be a sign that at this point there are a lot of mobile developers who are only developing for mobile platforms, and a couple years ago iOS developers were often Mac developers moving into the mobile space.
MacRumors reports that Apple has “borrowed” iOS 5’s wi-fi syncing ability from an app rejected in April 2010. I’m going out on a limb here, Apple apologist and all that, but I’m betting that (1) the app was simply a wireless equivalent to the USB syncing that iTunes already did, and didn’t involve syncing with a remote data center and keeping multiple devices synced (hopefully) seamlessly, which is what Apple’s new system is all about; and (2) Apple was working on iCloud and associated paraphernalia back then and ultimately rejected the app because they didn’t want to deal with the possible headaches created when users tried to use both the third-party solution and the new native solution simultaneously. You know that would happen, and when it turned people’s data stores into new “Will It Blend?” videos, they’d blame Apple.
TUAW dares to ask the probing questions, though:
Is it a coincidence that the Apple Wi-Fi Sync icon is almost identical to the one that Hughes had a designer create for him last year? Check out Hughes’ icon below at left, and Apple’s new icon at right.
Okay—again, this is wild speculation—but this is not a coincidence. See, the icon Apple has always been using for wi-fi is the little wedge with the wavy lines, and the icon Apple has always been using for syncing is the two arrows in a circle. Darned if you won’t see the former in the icon for “AirPort Utility” and the latter in the icon for “iSync.”
Now, I suppose you could argue that Apple should have looked at the other program’s icon and said, “Damn, they’ve already appropriated our existing icons for this, so we’d better come up with something entirely different so that way nobody accuses us of appropriating this guy’s icon.”
You could. But you’d kinda be an idiot.