Eich’s defense — from himself and others — largely boiled down to “well, you have to include all viewpoints if you’re going to call yourself inclusive,” which is a fine argument until one considers that including the side that says you can’t include the other side is the rhetorical equivalent of a division by zero error. Given how strongly Eich had been digging in his heels, one suspects he didn’t so much step down as have the ladder yanked out from under him.
The net is abruptly abuzz with news that Facebook bought Oculus VR, the partially-Kickstarted virtual reality company backed by game engine wunderkind John Carmack. And, it’s abuzz with a lot of fairly predictable hair-pulling and shirt-rending.
It’s certainly interesting news, and on the surface bemusing—although no more so than half of what Google buys these days. Facebook seems to be pretty interested in keeping abreast or ahead of Google, too. Hmm. Does Google have any VR product that’s sort of like Oculus Rift? Something that rhymes with “crass?” I’m sure there’s something along those lines I’ve been hearing about.
Frankly, despite all the hair-pulling I don’t think this is going to make a lot of difference to Oculus Rift users. When it comes to handling acquisitions Facebook seems to be more like Microsoft than Google or Apple. And that’s actually a good thing. Microsoft has certainly done in good products through acquisitions, but look at Bungie and, before them, Softimage, maker of one of the leading high-end 3D animation packages of the 1990s. In both cases, the companies were given a great deal of autonomy—what Microsoft wanted them for actually required that. Bungie’s Halo essentially defined the Xbox gaming platform, and Softimage got NT into animation and movie studios. (I suspect this was a much bigger nail in SGI’s coffin than it’s usually given credit for.)
Facebook needs the Rift to be a successful product, and for it to be a successful product they have to not screw with it. They don’t want to take it away from gamers—what they want is, well, pretty much what Zuckerberg wrote:
Imagine enjoying a court side seat at a game, studying in a classroom of students and teachers all over the world or consulting with a doctor face-to-face — just by putting on goggles in your home.
What Facebook wants, ultimately, is to build the kind of communication device that up until now only existed in science fiction. I’m not sure any company can actually pull that off, but I can’t think of another company that genuinely has a better shot. And as much as many things about Facebook continue to exasperate me, I’ve been coming to a somewhat reluctant admiration not only of their ambition but their engineering.
I can’t say this purchase makes me more likely to use either Oculus products or Facebook, but it’s a very interesting milepost. I’ve thought for years that Facebook the product has a limited lifespan, but Facebook the company may have a much longer—and far more interesting—one than I would ever have guessed.
Hundreds of open source packages, including the Red Hat, Ubuntu, and Debian distributions of Linux, are susceptible to attacks that circumvent the most widely used technology to prevent eavesdropping on the Internet, thanks to an extremely critical vulnerability in a widely used cryptographic code library.
Goodin argues that the bug is worse than Apple’s highly publicized “goto fail” bug, as it appears that it may have gone undetected since 2005.
I’d like to pretend I’m above feeling a bit of schadenfreude given that some of the loudest critics of Apple tend to be the open source zealots, but I’m not. One of the more religious aspects of Open Sourcitude is the insistence that all software ills stem from proprietary, closed source code. If the code is open, then in theory there should be fewer bugs, more opportunity for new features, and a lower chance of the software dying due to abandonment by its original developers for whatever reason.
But in actual, real world practice, software with few users tends to stagnate; software that becomes popular tends to keep being developed. This holds true regardless of the license and access to the source code. There are a lot of fossilized open source projects out there, and a lot of commercial products with vibrant communities. Being open source helps create such communities for certain kinds of applications (mostly developer tools), but it’s neither necessary nor, in and of itself, sufficient. And no one—not even the most passionate open source developer—ever says something like, “You know what I’d like to do tonight? Give GnuTLS a code security audit.”
While I only used the service briefly, this is still unfortunate news. Editorially is (was) a collaborative online writing tool using Markdown that I was planning to use for—yes—editorial work.
I’m not sure what to take away from this; Editorially was one of those free services they were planning to figure out how to make money from later, and that’s always a dicey business. Yet as Everpix’s failure demonstrated, being a paid service from the get-go isn’t automatically a win. Editorially’s management concluded that “even if all our users paid up, it wouldn’t be enough”; would it have been if they’d been a paid service from the beginning? Maybe, but of course they’d have had fewer users—probably far fewer—and so that might not have been sustainable, either.
I was going to express some dubiousness about the headline of this WSJ article—at first it suggests revamping a mythical set-top box that Apple has yet to produce, rather than the one they’ve been selling for years—but that probably isn’t fair. It could well mean the actual Apple TV.
So instead, knock them for the first sentence.
“Apple Inc. appears to be scaling back its lofty TV industry plans.”
In theory what this means is that Apple “envisages working with cable companies, rather than competing against them.” Well, yes. As much as some may dream of Apple displacing cable companies, that’s going to be pretty hard to do in the near-to-mid term for financial reasons alone. Netflix and Amazon are starting to compete with cable networks by paying for original content, but they established themselves first. Apple will need to do the same thing. (If they even want to get into that side of the business.)
If the WSJ report is correct, Apple’s service sounds like it will be nerfed to the point of bringing nothing new to the table—but that doesn’t mean it will remain nerfed forever. Even so, for right now they have the same problem that always faced Hulu: a profound disconnect between what their customers want to get, and what the media companies are comfortable giving. Apple got nearly everything they wanted when they launched the iPhone because Cingular was desperate enough to agree to their demands. As much as techies believe Hollywood should be that desperate, they’re not.
Did Apple ban a Bitcoin wallet because they fear competition?
Apple pulled Blockchain, a Bitcoin “wallet” app, from their store, and Blockchain’s CEO is sure this is because Apple sees them as a potential competitor. “I think that Apple is positioning itself to take on mobile payments in a way they haven’t described to the public and they’re being anti-competitive,” he told Wired.
As usual, though, this comes up because it fits into the default narrative about What Apple Is Like, not because there’s either direct or even circumstantial evidence to support it. While Apple may not welcome competition with open arms, there are dozens of applications on both the iOS and Mac app stores that directly compete with Apple’s products and services. Pandora, Netflix, Hulu, just about every app Amazon makes—by this logic, shouldn’t these all be gone?
For that matter, there are already mobile payment applications in the store that are far more direct competition for this still-hypothetical Apple payment system than Blockchain is. I can walk into (some) stores and pay directly with the Square or Paypal apps, but I can’t walk into a store and pay directly with Blockchain; it has no infrastructure specifically designed for point-of-sale, and even if the store took Bitcoin, I’d rather just swipe a credit card than diddle around with paying by SMS.
So why did Apple pull Blockchain? As usual, we don’t know, because Apple is about as communicative as a sack of flour. But given that they pulled Coinbase a couple months ago, the signs point to this having little to do with competition and a lot to do with Bitcoin. Apple has a rapidly expanding presence in China—whose antipathy toward Bitcoin is crystal clear—and they’re a strikingly conservative company when it comes to financial and legal matters; Bitcoin’s reputation as a haven for illegal activity may not sit well with them.
Twitter reported a net loss of $511 million, compared with an $8.7 million loss a year ago.
However, using a closely watched measure of income that excludes stock-based compensation and other expenses, the company reported a profit of $9.7 million, or 2 cents a share, in the quarter, compared with a loss of $271,000 a year ago.
Oh, they’re on the page that looks at actual numbers. Must be some old media thing!
Revenue, of course, isn’t profit, so the headlines arguably aren’t contradictory. But they’re telling of what the different media outlets are focusing on. To the tech press, slowing user growth is the real concern—a per-quarter loss of over half a billion dollars, no big deal. Most of it’s just stock-based compensation, right?
The NYT goes on to report that Twitter projects full year revenue in 2014 to be $1.15B, and adjusted EBITDA (the “closely watched measure of income”) to be $150–180M. That does sound promising, but I honestly can’t tell whether it means they’ll still actually be losing money. I’m fairly sure most network providers don’t accept payment in EBITDAcoin yet.
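The gap between those two headlines is easy to reconcile with back-of-the-envelope arithmetic. A quick sketch (the excluded-expense total below is simply inferred as the difference between the two reported figures, not a number Twitter broke out as such):

```python
# Reconciling the GAAP loss with the adjusted "profit" from the
# reporting above. The excluded-expense total is just the gap
# between the two headline numbers.
gaap_net_loss = -511_000_000
excluded_expenses = 520_700_000   # mostly stock-based compensation
adjusted_profit = gaap_net_loss + excluded_expenses
print(adjusted_profit)  # 9700000 — the "2 cents a share" profit
```

In other words, the "closely watched measure" flips a half-billion-dollar loss to a profit by waving away more than half a billion dollars of expenses.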
As expected, white smoke came out of Microsoft’s chimney early this morning and the board named Satya Nadella their new CEO. Microsoft watchers, both inside and outside the company, all seem to like him and agree he’s the best choice, but I’m not sure that’s actually a good sign. Microsoft may not have needed a “turnaround artist” like some had suggested, but they do need someone willing to make risky, potentially unpopular moves—the kind the company has historically bungled through half-measures when it does attempt them (Windows 8).
Nadella is deeply steeped in Microsoft’s current company culture, promoted to CEO from being EVP of “Cloud and Enterprise.” Is this a guy who’s really willing to make bold moves? If he is, I’d be surprised if they aren’t moves toward doubling down on what he knows best: business services. Earlier I wrote:
While picking [Nadella] might be a statement about re-invigorating Microsoft’s engineering culture, it might also be a statement about re-invigorating their focus on big business: all of Nadella’s background seems to be in enterprise computing.
In that same article, I wrote about my favorite also-ran, Stephen Elop. Those who felt sure that an Elop-led Microsoft would crash and burn are no doubt breathing a sigh of relief, but I still wonder about this:
The big difference Elop might bring to Microsoft is—I’d like to find a word that’s both less polarizing and less squishy than this one, but I can’t—a sense of taste. Elop understands the importance of user experience. I’m not sure there’s anyone at Microsoft who would have come up with devices as, well, cool as the Nokia Lumia line. Not even the Surface tablets have the same quality industrial design.
I don’t want to imply that Nadella is tasteless, but I would be honestly surprised if user experience is something he really gets. It’s not a prized skill in the enterprise world. We’ll see.
Google selling Motorola Mobility to Lenovo for a substantial loss seems to say something significant about Google, but nobody seems to be entirely sure what. Short attention span? Finally coming to their senses? That they no longer need Motorola’s hardware expertise now that they have Nest’s? (Bingo.)
One aspect that’s been widely reported is that Google is keeping Motorola’s patents, which seems like yet another baffling move. From the standpoint of “patents as offensive weapons,” Motorola’s are certainly a failure, and it seems Google put a lot more stock in that aspect than they should have.
But that’s a new fashion, pushed by the rise of companies whose business model is essentially to flood the patent office with claims of dubious “inventions” that examiners have neither the time nor training to evaluate properly. Whether you approve of patents conceptually or not, the vast majority of the ones owned by a company like Motorola—in the mobile communications space from the very beginning—aren’t the dippy kinds we roll our eyes at when they get in the news. They really did create a lot of the technology in widespread use in the industry.
Yet this is a double-edged sword. It turns out that it’s the stupid patents that are most easily weaponized. Patents valuable enough to get baked into industry standards tend to be quickly and widely licensed. Such patents are subject to FRAND—“fair, reasonable and non-discriminatory”—licensing rules, and the companies that Google could plausibly sue—excuse me, defend against—likely already have license agreements.
So what is Google left with by hanging onto these patents? Nothing? Well, no. It’s still nearly 25,000 patents. It’s hard to figure out how much licensing revenue they’re already bringing in, because Google doesn’t seem to break out their income that way, but when Motorola Mobility was an independent company it was in the neighborhood of $250 million a year. Is it possible they could get more money out of the portfolio by carefully reviewing it? Sure. They couldn’t increase it ten-fold, but doubling it isn’t wildly unreasonable.
Google estimated the patent portfolio as being worth $5.5B. Again, that doesn’t seem ridiculous; after all, a patent’s total value is what you get from it during the duration of its licensing. (And what you get from it when you’re actually using it, of course; if Google makes products that are covered by these patents, they’re saving money by not paying royalties to Lenovo.) The press has made a big deal about Motorola being smacked down earlier this year in their quest to get Microsoft to pay $4B a year in royalties, but that was pretty clearly a pie-in-the-sky shot: if the estimation of the portfolio had been based on such claims being successful, it’d have been a lot higher. Google probably needs to double the royalty revenue from patents over what Motorola was getting to hit that $5.5B valuation, but it’s not nuts.
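A quick sanity check on that claim (the doubled royalty figure and the remaining patent life are assumptions carried over from the argument above, not reported numbers):

```python
# If Google doubles Motorola's historical ~$250M/year licensing
# take, how many years of royalties does it take to cover the
# $5.5B valuation Google put on the portfolio?
historical_royalties = 250_000_000       # Motorola's rough annual take
doubled_royalties = 2 * historical_royalties
valuation = 5_500_000_000
years_to_cover = valuation / doubled_royalties
print(years_to_cover)  # 11.0
```

Eleven years is plausibly within the remaining life of a portfolio whose core patents were filed in the 1990s and 2000s, which is why the valuation isn't nuts even without any Microsoft-sized windfall.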
The most interesting question to me moving forward from this point is what Google actually does with hardware. It seems certain that the Nest design team will be put to work on more than thermostats and smoke detectors. Stratēchery’s Ben Thompson wrote (quoted by John Gruber) that it’s a “tiresome myth” that Google wants to be more like Apple, i.e., building their own hardware, and that selling Motorola puts that “to bed once and for all.” Really? I normally find Thompson pretty savvy, but I think we’re going to see Nest-designed Nexus phones and tablets within a year.
Columbia University student Kevin Chen attempted to sign up to be able to process credit cards with Square, and was put through one of those somewhat dubious “identity verification” quizzes that credit bureaus provide (“which of the following addresses have you lived at”). It didn’t work—he doesn’t have much of a financial history to draw on—and he discovered that when it comes to actually solving problems like this, Square isn’t any better than Paypal and other competitors. (He wrote that Amazon Payments and Citibank required him to fax copies of his driver’s license and Social Security card and sniffed, “how backwards,” although that strikes me as an understandable hoop to jump through when online verification fails for exactly this reason.)
Next, I tried to contact customer service. After all, there’s plenty of evidence that I’m real: I visited their office once, they gave my friends a ton of Square Readers, and I even interviewed for a job there. A week later, nobody had responded.
“I’ve honestly never seen such a user-hostile design in financial services,” Chen writes, “and I use PayPal! Design is how it works, not how it looks.”
Well… no. As far as financial services go, the design—and I do mean “how it works”—of Square is actually still pretty damn good. And despite all the crap PayPal gets thrown at it, its user experience (UX) design is also pretty damn good.1
PayPal’s weak spot is their policies, which frequently come across as “the customer is always wrong.” You do anything that deviates from their concept of ordinary usage and you get shut down. What tends to get them in news stories as a mustache-twirling villain is how frequently this policy shuts down things like charity drives. It may be unusual to see $40K of transfers from thousands of people stuffed into a single PayPal account, but by now you’d think PayPal would flag those usage patterns and have a human check to see whether it’s the annual Cute Puppies For Cancer-Stricken Orphans drive before pulling the plug.
What Kevin Chen is having an issue with here isn’t the part of UX that people think about when they usually think of UX. The problem is how these services handle failure points. While fraud is a very big deal for financial service companies—to a point where they’re better off being painfully conservative—there are counter-examples extant to show that they’re not required to be jerks about it.
The new banking service Simple rightly touts their design and usability, but what sets them apart is how good the experience is when stuff doesn’t work. It’s rare for them to take more than a couple hours to respond to support requests, and they make it very easy to get an actual person on the phone if you want to. PayPal isn’t as good, but they’ve still got a customer support department. For all of the bitching we do about them, they not only work in the vast majority of cases, they’ll at least try to fix things when they don’t, as lumbering about it as they may be.
Square, however, doesn’t have a support phone number, and their official attitude seems to be that if their service works as intended—that is, nothing that you experienced is a result of a bug on their end—then that’s that. This is part of the same attitude I’ve accused Google of having: all problems are ultimately engineering problems. (Google is notoriously bad for customer service, even with their paying customers.) Kevin Chen’s problem is not an engineering problem, and thus might as well not exist.
I suspect this comes across as more disappointing than it might otherwise be because it’s Square, the company that’s supposed to be revolutionizing the payment industry. We expect this from PayPal, but not Square. And it’s grating to realize that we probably have those expectations backward.
Its developer experience is pretty dreadful, although their new REST API looks like it may not suck. ↩
The Federal Communication Commission’s net neutrality rules were partially struck down today by the US Court of Appeals for the District of Columbia Circuit, which said the Commission did not properly justify its anti-discrimination and anti-blocking rules.
Those rules in the Open Internet Order, adopted in 2010, forbid ISPs from blocking services or charging content providers for access to the network. Verizon challenged the entire order and got a big victory in today’s ruling. While it could still be appealed to the Supreme Court, the order today would allow pay-for-prioritization deals that could let Verizon or other ISPs charge companies like Netflix for a faster path to consumers.
I’m conflicted about net neutrality. Not about the notion that the Internet should remain open to everyone, even if ensuring that requires (gasp) regulatory interference. But I’m not so sure that “pay-for-prioritization deals” are intrinsically evil.
Way back in the dark ages—the late 1990s—I managed capacity on a major data network. Even back then, our switches had the ability to prioritize traffic whose packets were tagged as being certain kinds of data. (We called it “QoS,” quality of service.) The technology we were expecting to replace frame relay—asynchronous transfer mode—had this functionality built in at the protocol level. The rationale for this was that some kinds of traffic really need higher priority. You see, IP was designed with no guarantees of low latency or even of packets arriving in the right order. For a terminal connection, a web page (or a Gopher page!) or a file transfer—for any of the kinds of stuff the Internet was originally being used for in the early ’90s and before—this was okay. For anything that involves real-time streaming, though, the results can range from suboptimal to unusable.

However, ATM never truly took off; during the original dotcom boom, telecom companies and the equipment manufacturers selling to them raced to build so much big-pipe infrastructure that it became easier and cheaper to solve network congestion and latency problems by just throwing more bandwidth at them.
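That kind of packet tagging never went away, incidentally; it’s baked into ordinary IP stacks as the DSCP bits in the header. A minimal sketch of what an application can do today (this assumes Linux, and assumes routers along the path actually honor the marking, which most consumer ISPs don’t):

```python
import socket

# "Expedited Forwarding" (DSCP 46) is the class conventionally
# used for real-time traffic like voice; QoS-aware routers can
# prioritize packets marked with it.
DSCP_EF = 46

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# DSCP occupies the upper six bits of the IP TOS byte, so the
# value passed to IP_TOS is the DSCP shifted left two bits.
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, DSCP_EF << 2)
```

Anything this socket sends is now tagged as high-priority; whether any router in the path cares is exactly the business question net neutrality is arguing about.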
At least, it did for a decade or so. But bandwidth consumed per user has grown far faster than the absolute number of users has. You are using ten times the bandwidth now that you were using five or six years ago. The cost to deliver that bandwidth has dropped substantially on a per-kilobyte basis, but your ISP is making less off of you in 2014 than they did in 2007. This isn’t a problem for the ISPs as long as new customers are being added like mad. Through the 2000s they were, but not anymore—yet demand per user is still climbing steadily. Where does the new revenue to offset the new infrastructure costs come from? Content providers have historically never been charged a premium based on the amount and kind of content they’re providing—but we may be at the point where we can’t keep kicking the can down the road by making the pipes fatter.
The worry about creating a “class divide” between content providers who can pay for better service and those who can’t is valid, but the real battle being fought isn’t about free speech and open access. It’s about how to spread the cost of building the infrastructure we need for our increasingly everything-over-IP future. And no one involved is really neutral.
While this sounds like “well, duh” news at first glance, the interesting thing about it is that nearly all of the loss came from B&N’s Nook ebook division; sales in their actual physical stores aren’t growing, but they only declined by 6.6%. I wouldn’t spin that as “essentially flat” the way B&N’s new CEO is, though. (“Officer, I can’t imagine why my car rolled down that hill when I left it in neutral—a 7% grade is essentially flat!”)
Physical books aren’t going away any time soon, if ever, but ebooks are effectively replacing mass-market paperbacks, bringing back something that B&N and their ilk effectively killed: the “mid-list title,” books that sell small but consistent amounts from which many authors, particularly of genre fiction, made the bulk of their royalties. My guess is that B&N will survive, but they’ll do so by reversing not just the last five years but the last fifty: losing their digital division entirely, closing many of their stores, and re-emphasizing hardbacks and limited editions.
It’s currently fashionable to bemoan the tech bubble centered on San Francisco by castigating its perceived drivers: young, arrogant nerds paid exorbitant salaries and paying exorbitant rents, with no interest in SF’s culture or history, swaggering into town and driving all the good people out. This has caused the arrogant nerds to write their own responses, but it’s usually the ones who really are assholes who are motivated to do so.
The real problem, as Priceonomics explained, is our old f(r)iend supply and demand. Between the start of 2010 and the start of 2013 the unemployment rate almost halved and more than 20,000 new residents arrived—and in 2012 only 269 new housing units entered the market. They helpfully chirp that 8,000 new units will come on the market between 2013 and 2015—but that’s clearly not enough to help matters. Geography and San Francisco politics combine to make new housing difficult and expensive; with the new technonerd-driven demand being concentrated on the high end of the market, things get even worse.
Yet the vilification of the techie seems misguided. I’m not a disinterested party; I moved to the Bay Area at the tail end of 2002, after the dotcom crash, with no job waiting. I had friends here—importantly, including one who let me crash with him, which morphed into renting a room in his house for five years—and I already loved the area. Almost everyone I know in this area lives here because they want, specifically, to be in the Bay Area. The people I know who live in “The City” live there because they love San Francisco. I suspect that’s true of most of the people inside those Google buses people are throwing (organic and sustainably-grown) tomatoes at.1 Valleywag loves to trot out pissy clueless technolibertarians as the True Face of Tech Startups, but let’s not pretend that Valleywag is a shining beacon of unbiased journalism.2
Yes, techies are paid exorbitant salaries and paying exorbitant rents. Yes, this is a problem for people being squeezed out. “No, I can’t accept your offer of a dream job in one of the great cities of the world because doing so incrementally contributes to a growing class divide” is a lot easier to say when you’re not the twenty-something from Suburban Elsewhere actually being given that offer.
With all respect to Marc Andreessen and his insistence that those who say this is a bubble are misguided: give me another word for it. Housing costs are rising faster than salaries; the median home price in San Francisco is, if you run the numbers, twice as high as what a household with a median income in San Francisco can afford. This is happening in the Valley, too. My share of the rent on the nice two-bed/two-bath apartment in Santa Clara I’m in would more than cover the entire rent of a comparable apartment—or the mortgage on a respectable house—in Sacramento. I could actually commute from that area to here two days a week and still come out ahead. Some friends and acquaintances of mine are planning moves out of the area already, and the more expensive things get, the more will follow.
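“Run the numbers” can be made concrete with the standard affordability rule of thumb. Every input in this sketch is an illustrative assumption (rough income and rate figures of the era), not a number from any report:

```python
# Back-of-the-envelope mortgage affordability: cap housing payments
# at 28% of gross income, 30-year fixed loan, 20% down.
# All inputs are illustrative assumptions.
median_income = 75_000          # rough SF household median, mid-2010s
annual_rate, years = 0.045, 30  # conventional 30-year fixed
down_payment_frac = 0.20

monthly_budget = median_income * 0.28 / 12
r, n = annual_rate / 12, years * 12
# Present value of an n-payment annuity at monthly rate r:
# the largest loan the monthly budget can service.
loan = monthly_budget * (1 - (1 + r) ** -n) / r
affordable_price = loan / (1 - down_payment_frac)
print(round(affordable_price))  # roughly $430K
```

Against a median home price pushing $900K at the time, that’s the “twice as high as affordable” gap in one screenful.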
So there’s a silver lining for the “nerds go home” crowd: when even the techies decide they can’t afford to live here anymore, they’ll leave. In the long term, that might be better for San Francisco’s health as a city. But that might be a much longer term than critics think: the result of the high end of the tax base crumbling—especially in as tax-driven a city as SF is—won’t be very pretty.
While the Google Buses have fueled the perception that the techies live in SF and commute to Silicon Valley, that’s far from universal; while there are a few huge employers down here, the majority of startups are clustering around SOMA in SF. ↩
Having said that, it would sure be nice if sites like PandoDaily evolved enough self-awareness to stop letting pissy clueless technolibertarians write responses to Valleywag. ↩
While I may be the last person in the tech sphere to bother linking to Re/code, the new post-WSJ incarnation of Walt Mossberg and Kara Swisher’s AllThingsD, I’m surprised how few people have pointed out that it’s a really ugly web site. It’s as if they’ve taken the worst bits from news sites of five years ago—tiny type, boxes everywhere—and merged in a couple of annoying recent trends like presenting the articles in an unevenly staggered multi-column “river.” While I like Futura as a headline font, they’re not using it very well. And the body typeface is (yawn) Georgia.
All poking aside, the big problem with this design—which is not unique to them—is that it’s awfully hard to figure out what’s important here. If layout is supposed to lead the eye, this is a hedge maze painted bright red.
A lot of web design through the 2000s borrowed the wrong things from print design: it forced layout into fixed widths, because that makes it much easier to place page elements, and it tended toward small print that “felt” like the 10–12 point body type we already knew. On 1024×768 displays this worked well enough, but we’ve learned since then that it’s much better for viewers if the web site adapts to the display it’s on rather than decreeing “Thy content shall be 960 pixels wide.” We’ve also learned that, paradoxically, the 16-pixel standard type size browsers inherited from Netscape and Mosaic is, if anything, a little too small for comfortable long-form reading rather than too large.
But there’s a lot of good to be learned from print, and so far the only thing we’ve universally adopted is that typefaces are important. (Well, most of us have.) Sites like Medium not only demonstrate well-chosen type, but demonstrate something many sites still ignore: white space is also important. A good case could be made that it’s even more important in a browser window than it is on the page.
Print magazines have well over a century of experience in balancing the demands of display ads with the absolute necessity of making the magazine a pleasant reading experience. This is a lesson that for the most part we haven’t figured out online; Medium doesn’t have ads, so they don’t face this dilemma. I have faith we’ll get there, but an important first step is making the content itself pleasant to read.
Ironically, when you actually get to a Re/code article, they do a pretty good job of this. If they can only make the front page as streamlined as the rest of the reading experience, it’d be terrific—or at least a good base from which we could start thinking about the ad problem.
Seriously, though. Georgia? In 2014? Who uses that?
We are what we pretend to be, so we must be careful about what we pretend to be.
— Kurt Vonnegut, Mother Night
A podcast I used to listen to, Angry Mac Bastards, recently flamed out spectacularly. The show’s premise was simple: the three hosts found stupid things people said on the Internet—as the name implies, mostly things about Apple—and ranted about them in deliberately profane fashion. Sometimes its segments were damn funny, and sometimes they seemed to be little more than shouting you fucking cunt into a microphone for twenty minutes.
Comedy is, almost universally, observational. The best comedy is built on observations we nod our heads to, even if we may feel a little guilty for doing so. And it’s often mean. What separates funny mean from just mean mean is hard to define, but funny tends to be either aimed at ourselves or at those who are deserving of scorn and ridicule. And as much as the phrase speaking truth to power may be a liberal cliché, it’s a pretty good barometer for who is—and isn’t—deserving.
The other interesting thing about comedy has its roots in something it shares with all performance: the stage persona. We adopt personas in all our interactions to varying degrees, of course, but performances—even when we’re performing “as ourselves”—heighten this. We’re going to exaggerate some aspects of the way we are and downplay others.
Comedy, though, is singular in that we’re rewarded for exaggerating the aspects of our personality that, in other circumstances, kind of make us assholes. Funny mean is saying those head-nodding things in a way that only an asshole would. Mean mean is just being an asshole.
AMB found a software developer’s “hire me" page and went to town on not just the page but developer Aaron Vegh personally. A lot of it was juvenile (capping on his photograph), founded on extremely ungenerous readings (equating self-proclaimed flexibility with poor problem-solving and being a prima donna), or just plain wrong (describing his web development book as “self-published”).1 What they did, in short, looked a whole damn lot like the woefully superficial and ill-informed trollbait they’re theoretically supposed to be skewering.
They were clearly trying for funny mean. Some of it was. But a lot just came across as mean mean.
There were several reasons I’d checked out of AMB sometime last year. One of them, though, was that I just don’t like punching down. Beating up on Seth Godin for spouting out Deep Thoughts on marketing strangely reminiscent of Deep Thoughts by Jack Handey is one thing; beating up on a guy we’ve never heard of—especially when, between kicks, you’re admitting his web page isn’t really that bad—is quite another. But something unexpected happened this time: it went viral. Suddenly a whole lot of people learned of AMB not as a podcast about flinging sporadically funny crap at people who deserve to be taken down a peg or two, but one about verbally brutalizing someone whose worst crime was overly twee self-promotion.
And the Internet Outrage Machine went to work.
The fallout AMB’s hosts have suffered from this has been serious, including costing one her job. Vegh didn’t deserve to be beaten down, but they didn’t, either—they arguably earned a rebuke, but not virtual destruction. We tend to force stories into “hero vs. villain” narratives: either everybody deserved what they got and good triumphed, or the forces of evil beat the profane but good-hearted heroes thanks to Vegh’s dastardly move of calling the public’s attention to this. But that’s not the way any of this really went down.
I’m not sure if there’s an object lesson for either side. I’m not sure there are even really sides. If there is a lesson, though, maybe it’s this: how we say what we say matters, no matter what it is we’re saying. It applies equally to pompous “hire me” web pages and ranting “fuck you all” podcasts. This is not a kind of censorship, not a form of political correctness, not an example of Orwellian thought control—it’s an axiomatic truth of any language that has more than one way to say the same thing. Loudly proclaiming “I say things that way because it’s who I am so just deal with it” does not grant you a Get Out of Shitstorm Free card; we all know—and you do, too—that your word choice is not preordained by your immutable nature.
So I’d think twice about deciding your online persona is “righteous asshole.” If it seems like a good idea, think two more times. You are not speaking truth to power. It is not a litmus test for determining your true friends. You are not guaranteed that only the “right” people will be pissed off. And you will build an audience that rewards you for being unkind—which makes it all too easy to cross lines you shouldn’t. When you get called on it, it’s too late to rip off your asshole mask and protest that’s not who you really are.
They finally noticed it wasn’t self-published, but weirdly described Wiley—the original publishers of Herman Melville and Edgar Allan Poe, founded in 1807—as “semi-legitimate.” Random House is presumably “a step up from a mimeograph.” ↩
MG Siegler at TechCrunch makes the case that “the death of Nintendo has been greatly under-exaggerated”:
Nintendo no longer is just competing with the others in their direct space—meaning gaming consoles and handhelds—meaning Sony and Microsoft. They are competing with every single smartphone and tablet currently on the market. And soon, they’ll likely be competing with a number of other set-top boxes as well entering the gaming space. Nintendo’s greatest weakness is the illusion (bolstered by years of reality) that they have years to figure out their competition and outlast them. They do not.
I’d been thinking off and on about writing something about Nintendo which would have annoyed people in much the same way that MG’s piece will, although mine would not annoy nearly as many people because (a) my audience is smaller and (b) a much smaller percentage of my readers make it their life’s work to tell me why I am wrong about everything than is true of MG’s readers. Also, I was never really a gamer, not in the way we mean it now.1
When people like John Gruber made the case that Nintendo was in trouble earlier this year, the general reaction from gamers—and very smart gamers, mind you—was, essentially, you say that because you don’t understand Nintendo. Perhaps not, but MG puts his finger on what bothered me about that response. Nintendo’s problem isn’t like Apple’s in the late ’90s—it’s like Nokia’s in the late 2000s. People were describing Apple back then as on death’s door because it was. Nokia, though, was the #1 cell phone manufacturer by a wide margin, flush with cash, stuffed with smart engineers and designers. And Nokia was supported by tens of thousands of fans ready to explain in impatient detail why the iPhone—let alone Android, pff!—only looked like a threat to people who didn’t understand Nokia.
They were right. The changes in the market did only look like a threat to people who didn’t “get” Nokia. That was the problem.
In terms of technology, Nintendo simply isn’t competitive with the PlayStation 4 and the Xbox One. Fans maintain that the brand strength is more than enough to offset that deficiency. Maybe, but that bet didn’t work out so well for Sega. Maybe Nintendo will come out with a Wii U successor in a few years that’s at least on a par with Microsoft and Sony’s offerings—but that just sticks them on the horns of a different dilemma: either they’ll end up in exactly the same position when the PlayStation 5 and Xbox π come out, or they’ll have zigged when the market has zagged. There’s a non-zero chance that the current generation of consoles is the last generation of Consoles As We Know Them.
Like everyone else bloviating about this I can’t come up with guaranteed good advice for Mario and friends. I don’t think the advice to drop all the hardware and port their games to everything else as fast as possible is particularly sound. But if I ran Nintendo, I’d probably be quietly exiting the console business and partnering with Microsoft or Sony to make them the exclusive platform for our next generation “living room” games. I’d stay in the handheld hardware business for the time being, for much the same reason it makes sense for Amazon to keep producing e-ink Kindles: smartphones and tablets have become fine gaming devices, but a handheld game console is still a better experience at a lower price point. And I wouldn’t definitively rule out original games—not ports—for iOS and Android.
Do I really think Nintendo is doomed? No. But they—and their fans—need to come to grips with the truth that what they’re doing right now is not working. It’s not going to suddenly start working when a new Zelda game ships for the Wii U. They need to change, and they need to figure out those changes while they still have the resources to make bold moves rather than desperate ones.
I grew up with pen-and-paper role playing games, but the aspects of RPGs easiest for computers to model—dice, numbers and dungeon crawling—were the aspects I found the least interesting. My favorite games have always been adventure games, and to me the “storytelling” in modern big-budget productions, even critically-acclaimed ones like Bioshock and The Last of Us, tends to have a lot less to it than meets the eye. ↩
As you might have seen about the Internets, Google is making a concerted effort to tie YouTube identities to G+ identities, and this has brought another round of arguments about what harm this may cause—and that in turn always gets into questions of just what identity means on the Internet. Should you be allowed to be anonymous? Is anonymous bad but pseudonymous okay, or do we treat them as essentially the same thing?
Usually when this comes up, the arguments on the “pro-pseudonym” side circle around the notion that people sometimes have very good reasons to hide their identity. They might be under an oppressive authoritarian regime. They might be hiding from an ex-lover. They might be gay, or trans, or atheist, or Muslim. There could be any number of reasons to keep their identity secret. “If you don’t have anything to hide you shouldn’t be worried” is, after more than a moment’s thought, rather glib.
This seems to me, though, to be setting an unnecessarily high bar for when we deem it “permissible” for someone to participate pseudonymously in an online community. Historically, people who are not public figures have legally and socially been held to have a fairly broad right to privacy. You don’t get to go into your neighbor’s house at your leisure to see what magazines they subscribe to, what kind of food they keep in their refrigerator, and what kind of porn they’ve saved on their TiVo. As long as there’s no reasonable suspicion of illegal shenanigans going on, what they do isn’t any of your damn business. By extension, as long as there’s no reasonable suspicion of illegal online shenanigans, what forums your neighbors hang out on and what kind of porn they’re saving to their hard drive isn’t any of your damn business, either.
There are many, many people you interact with on a professional basis every day whose personal lives you don’t need to know a thing about. This not only includes sales clerks and bartenders and waiters and bank tellers, it by and large includes your boss and your coworkers and the guy at the Starbucks who recognizes you when you come in every morning and has your three shot skinny grande vanilla latte waiting by the time you get to the register. It includes the people who hire you, who approve your rent or mortgage application, who fill your prescriptions, and who file your tax returns. To a large degree it even includes your family. (Don’t pretend you share everything about your life with your parents.)
The deep underlying problem with the relentless drive to give us One Identity Everywhere that both Google and Facebook are so enamored with is that—whether or not it’s the intent—it’s a drive to largely erase the distinction between the professional and the personal in the online world.
I don’t think this is the intent; Google almost certainly believes that this is a problem—like every single problem, from managing your calendars to the Israel-Palestine conflict—which can be solved by code running in their data centers. It’s just a question of getting the code right. But the belief that the separation between personal and private identities—between our various “circles”—can be adequately managed by twiddling settings in your G+ account betrays a lack of understanding not of what privacy is, but of where privacy is.
Privacy—both in the legal sense and in the mores and folkways sense—is something that by definition can’t be outsourced. You can’t put your privacy mirror in the cloud; it has to be at an endpoint, in our own personal devices we have physical control over. In some future iteration of the Internet, it could be possible to bake privacy right into the infrastructure—but even then, that can only reliably happen at the endpoints.
In practice a lot of privacy standards are convention rather than law; they work in large part because someone trying to find out what you do outside the public eye has to put some effort into doing so. In the first decade and a half or so of the Internet age, this was true online as well. Despite the oft-made argument that anything you do on the Internet is de facto being performed in public, it really wasn’t. You had to have some idea what you were looking for to find someone. This is becoming progressively less true, and we need to start asking ourselves whether enshrining the principle of “if you don’t want anyone to know about it, it shouldn’t be online” truly leads somewhere we want to go. I don’t think it does.
For the immediate future, though, we already have a straightforward way to manage our privacy on our endpoints, where that management needs to happen: separate identities that aren’t tied together anywhere off our personal devices. The problem here isn’t how Google (or Facebook or anyone else) handles our privacy; the problem is that Google shouldn’t be managing our privacy. And Google (and others) need to stop demanding otherwise.
Microsoft Corp has narrowed its list of external candidates to replace Chief Executive Steve Ballmer to about five people, including Ford Motor Co chief Alan Mulally and former Nokia CEO Stephen Elop, according to sources familiar with the matter.
Assuming the sources are correct, this just formalizes what we’ve already heard. According to Damouni, investors want a “turnaround expert,” which is usually taken to mean someone who can bring an ailing company back to profitability. That’s what Mulally did at Ford, cutting tens of thousands of jobs, closing plants, ending the Mercury brand, and selling off recently-acquired “premier” brands like Jaguar, Land Rover, and Volvo. It’s been brutal by some measures, but Ford is—by American car company standards, especially—doing well.
But Microsoft is already profitable. What is it they actually need from a leader?
A friend who works at Microsoft thinks they need a guy who understands tech—they don’t need Bill Gates back, they need a new Bill Gates. I can certainly see the argument, but I’m not entirely sure I agree. They need someone who understands technology deeply—but that isn’t the same thing as needing an engineer.
Elop seems to be a more popular choice with Microsoft’s board than he is with pundits, of both the professional and armchair varieties. Most of the criticism of him is a variant of c’mon, look what he did to Nokia. Well, okay, let’s: what he did to Nokia was to get them to start producing high-end phones that people paid attention to. They get good reviews. They’re in stores near you. (In 2010, finding a Nokia touchscreen phone in the United States was like finding a unicorn.) And while it’s taken until 2013, I’m seeing more Nokia phones in the wild again—five years ago their smartphones were essentially invisible.1 Yes, Windows Phone still lags far behind Android and iOS, but for all practical purposes, Nokia didn’t get its shit together until 2011—the other guys had a three-year head start. I’m not convinced that Nokia could realistically have done any better.
So do I think he’d be a good choice for Microsoft? Maybe. Elop would be different from their usual executives—he’s charismatic and well-spoken in interviews, and you’re not going to get crazy Ballmer chanting from him; on the downside, he sometimes hasn’t known when not to talk. Telling the world that Nokia was switching to Windows Phone over half a year before they could ship anything on it was a terrible idea.
But the big difference Elop might bring to Microsoft is—I’d like to find a word that’s both less polarizing and less squishy than this one, but I can’t—a sense of taste. While I’d never compare Elop to Steve Jobs—their styles are so different it would only cause giggling—my impression is that, like Jobs (and unlike many Microsoft executives, past and present), Elop understands the importance of user experience. Bill Gates is a brilliant guy but he never had any interest in being a tastemaker, and on a good day Steve Ballmer has the taste of a Nacho Cheese Doritos Locos Taco. I’m not sure there’s anyone at Microsoft who would have come up with devices as, well, cool as the Nokia Lumia line. Not even the Surface tablets have the same quality industrial design.
Microsoft needs someone who can come in and get rid of things that aren’t working, which appears to be the main appeal of Ford’s Mulally. But Elop has certainly demonstrated a willingness—some would say an unseemly eagerness—to shitcan things that don’t align with his chosen direction.
Of the two internal candidates we know about, there’s former Skype CEO Tony Bates, and “cloud and enterprise chief” Satya Nadella. I don’t know a lot about either of them; Bates joined Skype in late 2010, which suggests his chief accomplishment as CEO was selling them to Microsoft seven months later. Nadella may be the closest to the kind of candidate my Microsoft friend wants: he’s an EVP who started as an engineer at Sun, and appears to have been involved deeply in moving Microsoft online. While picking him might be a statement about re-invigorating Microsoft’s engineering culture, it might also be a statement about re-invigorating their focus on big business: all of Nadella’s background seems to be in enterprise computing.
Whenever an American writes this people inevitably bring up India, where Nokia’s S40 phones are still #1 with a bullet. Yes. We know. They’re not smartphones. No, not even the Asha line. ↩
BlackBerry, it’s noted, has appointed John Chen to be their “interim CEO.” Chen is the former CEO of database company Sybase, and was presumably chosen to lead BlackBerry back to the same kind of high brand recognition and market leadership that Sybase enjoys today.
“The Cinderella story has two morals. The light moral is that when you have grace and beauty, it will shine through and elevate you to your proper station. The dark moral is that this only happens if you are lucky enough to have a Fairy Godparent.” — Tim Maly
While I’m not a podcaster (yet?), I follow several people who are, and lately I’ve seen a few talk about how there isn’t an audio editing app specifically dedicated to the spoken word. You can use Garageband or Logic Pro or the like, but those are designed for music production, and there are things they could be doing that might make a podcaster’s life easier.
But there’s a podcast I listen to, Andrea Seabrook’s “DecodeDC,” that has two interesting features. One, the production quality is at the level of “This American Life”: we’re talking very high. Two, it tells you what software it’s being produced on at the end of each episode: Hindenburg Journalist Pro.
After hearing this for a dozen episodes or so I finally went to the web site for the software. It’s aimed at radio journalists, and is, in fact, an audio editing app specifically dedicated to the spoken word. And it has podcast-specific publishing features. It lacks many features that are in the bigger-name “pro” audio apps—but most of what’s been left out has, from what I can tell, been left out by design, because it doesn’t fit their niche.
Again, I’m not a podcaster—but if I were, I’d at least check this thing out.
Thanks, Apple! All our fingerprints are going to be available to the NSA now!
What? That’s wrong on so many levels.
I know, right? Can you believe it?
No. I mean it’s just not correct. For a start, the iPhone 5S doesn’t even store your fingerprints.
C’mon, of course it does. How else could it work?
A web site doesn’t actually store your password, it stores a one-way hash of your password. Likewise, the iPhone 5S doesn’t store an image of your fingerprint, it creates data about your fingerprint and then stores a hash of that.
Oh, sure, that’s what they tell you.
That’s what anyone who understands programming would tell you. Abstracting your fingerprint to data rather than storing an image is just as secure, much less computationally expensive, and probably more accurate.
Even if that’s true, we know passwords can be broken, so we can’t know for sure that someone couldn’t reconstruct your real fingerprint from the data.
You can’t reconstruct a password from its hash. Password cracking tools use lists of common words and various permutations, testing them against the password hashes. So, yes, in fact, we do know that someone can’t reconstruct your real fingerprint from the data.
So why would Apple do this if it’s not to collect your fingerprints?
How about the rationale they’ve already given us? What’s important on your phone isn’t your passcode, it’s everything the passcode is protecting. But for a lot of users—even technically savvy ones—passcodes are sufficiently irritating they’re not using them. With a fingerprint as a passcode, this barrier is removed, making it much more likely people will start protecting their phones.
There are really only two ways for your phone to be compromised: someone getting physical access to it, and someone siphoning data out of it remotely. Touch ID greatly reduces risks from the former, and doesn’t affect the latter at all.
But the fingerprints are still being read and that means they’re being turned into data and that data must be being sent to the NSA in some way you just haven’t thought of yet. Look at everything the NSA is collecting already!
Yes, let’s. Everything the NSA collects goes over networks. They’re a signals intelligence organization. They analyze signals—which is to say, communications. Phone calls. Email messages. IM messages. Bank transactions. There are a lot of things that constitute signals. But you know one thing that doesn’t?
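The password-hash analogy in that exchange can be sketched in a few lines of Python. This is a toy illustration of one-way hashing in general—not a description of how Touch ID actually works, since Apple hasn’t published the Secure Enclave’s internals—and the function and secrets here are invented for the example:

```python
import hashlib
import os

def hash_secret(secret: bytes, salt: bytes) -> bytes:
    # One-way: we store only the digest, never the secret itself.
    return hashlib.pbkdf2_hmac("sha256", secret, salt, 100_000)

salt = os.urandom(16)
stored = hash_secret(b"correct horse battery staple", salt)

# Verification recomputes the hash and compares digests...
assert hash_secret(b"correct horse battery staple", salt) == stored
# ...a wrong guess produces a different digest...
assert hash_secret(b"tr0ub4dor&3", salt) != stored
# ...and nothing in `stored` lets you read the original secret back out.
```

Cracking tools work exactly as described above: they run candidate inputs forward through the hash and compare, because running the hash backward isn’t possible.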
Could Nokia nerds knock it off with the “Trojan Horse” thing?
Yes, I know, Betteridge’s Law strikes again. Isn’t it possible that Stephen Elop was secretly dispatched from Microsoft to join Nokia as part of a three-year plan to make them Windows Phone’s biggest hardware partner and then return to the fold with Nokia’s smartphone division after Steve Ballmer was unexpectedly forced out?
Actually, when you put it that way, no. Jesus Christ, people, take off the tinfoil hats. Or put them on. Or something.
As I’ve pointed out before, Nokia went with Windows Phone because Google wouldn’t let them both use Android branding and Nokia service branding, and they considered that a deal-breaker. Elop also gave another reason to The Guardian back in July which I’d missed:
What we were worried about a couple years ago was the very high risk that one hardware manufacturer could come to dominate Android. We had a suspicion of who it might be, because of the resources available, the vertical integration, and we were respectful of the fact that we were quite late in making that decision. […] Now fast forward to today and examine the Android ecosystem. There’s a lot of good devices from many different companies, but one company has essentially now become the dominant player.
Call me crazy, but it seems the man was right. What he doesn’t say—but you can clearly take away from this—is that he foresaw there would only be room for three major smartphone operating systems and that Nokia’s own MeeGo was not going to be the third one. That meant Nokia needed to be for Windows Phone what Samsung was for Android. They may be a distant third now, but if they’d stuck on their original course they’d be a distant fourth, with HTC having taken the dominant role with Windows Phone—an outcome that probably wouldn’t have been much better for Microsoft than it would have been for Nokia.
It’s true that Nokia hasn’t had any out-of-the-ballpark hits under Elop, but it’s ridiculous to think that he didn’t want one. It’s just not realistic to have expected that. Nokia’s substantial fanbase didn’t want to believe—and still doesn’t, apparently—that he was handed the reins of a company that, like RIM and Palm, was already in the process of collapsing due to entirely misreading where the market was going. It’s easy to accuse Elop of being Nokia’s Adam Osborne rather than their Steve Jobs. But keeping his predecessor, Olli-Pekka Kallasvuo, would have left them with their Mike Lazaridis. How’d that work out for BlackBerry?
Years ago I was trying to formulate a maxim that would have gone something like this:
Any given technology will be put to all possible uses that have a greater predicted value than cost.
There’s a science fiction novel I’ve been working on fitfully for about two years in which this concept drives the background; closer to the foreground is the concept of ubiquitous presence, a consequence of being always online and, whether by choice or just passively, leaving digital footprints everywhere. The characters have far less privacy than we’re used to, but not due to intrusive government surveillance. Societal expectations have shifted from “private unless explicitly shared” to “public unless explicitly hidden.” Like all societal shifts, that wasn’t a collective, conscious decision, but more of a response to pressures over time—specifically, the pressure of applying that maxim.
There are a lot of things about the novel I expect to remain science fiction—it’s set around the asteroid belt, for a start—but ubiquitous presence? You’re soaking in it. It’s not as pervasive now as it is in the novel, but it will be (and long before we’re living among the space rocks). This is the dawn of the age of Big Data. If you’re in the first world, there’s likely more data generated and stored about you annually now than about anyone who died before the year 2000 over their entire lives. Not all of that data is being actively generated by you, very little of it is being actively stored by you, and the uses that it’s being put to are largely not under your control even when they benefit you.
The current NSA scandals have been the catalyst for a lot of people to finally start thinking about this, but most of us aren’t thinking about it very deeply. We know we’re outraged by the government collecting all this data on us without our consent; most of us feel “this isn’t a bad thing because our intentions are good” is not reassuring. History is, by and large, not at all supportive of the “just trust us” defense.
Even so, there’s better and worse reasons for not trusting an organization, and “because government,” in and of itself, is a poor one. The state—like a hammer, gun or Tim Tebow—is a tool. When a tool isn’t producing a good outcome, there are many possible reasons why, but “this tool is intrinsically evil and nothing good will ever come from it” isn’t one of them.
The signal that’s somewhat lost in the noise is this: the government isn’t collecting the data. Corporations are. What the NSA is doing is correlating it with metadata to form complex diagrams of our personal relationships, performing deep data mining to create potentially very invasive profiles, and taking an extremely expansive view of what they’re allowed to do under the law.
That’s just as much of a problem! Yes, it is, but why was the data being collected in the first place? To allow private entities, taking an extremely expansive view of what they’re allowed to do under the law, to correlate it with metadata to form complex diagrams of our personal relationships and perform deep data mining to create potentially very invasive profiles. And it’s important to understand that they’re not doing it because they’re evil, or stupid, or in league with the Rothschilds to bring about the New World Order.1
They’re doing it because it helps them do the jobs we want them to do.
If you’re going to show me ads at all, I’d much rather those ads be relevant. I want Netflix to recommend cool stuff to me. Don’t you? That requires it to have some idea of what you and I consider cool. Your phone may already have the ability to know an email is a meeting request, offer to make the calendar appointment for you, and remind you when you need to get in the car and head out based on your time, location and the traffic conditions along the way. Jeeves was an excellent valet because he knew Bertie Wooster better than Bertie knew himself in some respects. So it will be with our electronic valets—and a lot of what they know is going to be accessible to others.
That tends to lead to thoughts of Orwellian dystopia, but even our optimistic science fiction implies this kind of data collection. Analysts back at Starfleet HQ can easily find out how much Earl Grey Captain Picard drinks and at what temperature, can get to everyone’s medical records through the transporter logs, and can look up just what you’ve been doing on the Holodeck in your spare time. (Oh, myyyyy.) “Star Trek” depicts, intentionally or not, a society that long ago set a different balance between privacy and presence than we did.
We’re at the point where we need to pay attention to how we’re setting that balance ourselves, though. The problem isn’t that we’re laying the foundations for a “public unless explicitly hidden” society; the problem is that we don’t seem to be fully aware we’re doing it. We’re ecstatic about finding ever more value in ubiquitous presence, but we’ve put very little effort into understanding the costs to our privacy and how to address them. We need to build accountability into the infrastructure. In the long term we may need to become, as science fiction author and Guy Who Thinks About This Stuff David Brin puts it, a transparent society.
As civil libertarian Jennifer Granick wrote, “good people will do bad things when a system is poorly designed, no matter how well-intentioned they may be.” Right now, the systems are being designed by people for whom all the value lies in presence rather than privacy. The state isn’t the architect of many of these systems, and Orwell’s future isn’t the only dystopian outcome possible. As vital as the debate over how much data-driven surveillance the government should be permitted to perform may be, it’s part of a much larger set of issues.
“Rothschild and the New World Order” is my Information Society cover band. ↩
Touch’s premium continues to scare off buyers who have been trained by years of cut-rate PC deals, but the prices themselves are not entirely to blame. Even if the gap between touch and non-touch PCs was significantly smaller, customers would still pass because they don’t see much value in having touch on a PC.
I remember Steve Jobs being dismissive of the notion of putting touch screens on laptops and desktops—especially desktops—and have gotten the impression that it was kind of a controversial position, not just among Apple skeptics but even among a few Apple watchers who were expecting that to be one of Jobs’ infamous “that’s a stupid idea until we do it” moves. But I always thought that he actually meant that one. He specifically called out the ergonomic problems in having to lean forward and reach across your keyboard to do things. He was right. It’s awkward even when you’re just talking about an iPad and an external keyboard. And even if you believe that a pointing device is often actually faster than a keyboard if the UI’s sufficiently well-designed—which I do—the added time in making a much bigger motion between pointing device (screen) and keyboard negates that advantage.
Touch works fantastically well as an interaction mechanism with handheld screens, but it’s not being used as a replacement for mice in those contexts—it’s being used as a replacement for styluses and the handful of crappy ways we previously had to interact with our phones (buttons, control wheels and thumb trackballs). Something better will probably come along to replace mice, but touch may not be it.
Windows 8 is the 3-D TV of operating systems: great in demos, fantastic at dazzling pundits, and beloved of accountants, but just not something buyers actually wanted.
The Edge, a smartphone that runs a mobile edition of the popular desktop OS Ubuntu, will only get made if would-be users pledge $32 million via the crowdfunding site Indiegogo [within 30 days]. In its 15 days on Indiegogo, the Edge project has attracted $8.3 million in pledges, leaving it nearly $24 million short.
The editor of the annoyingly named “OMG! Ubuntu!” site still holds out hope, but “does note that the low introductory tiers of the Edge—the handset was priced at $600 for the first 5,000 backers—may have ‘possibly set unrealistic expectations about the cost of the device.’” See, the longer you waited to back, the more expensive it got. Two weeks ago, it was that apparently unrealistically low $600; now it’s $730. (Note: it’s apparently been dropped down to “only” $695 for everyone now, but my thoughts here still apply.)
While one can try to read this as a slam against the viability of open source hardware or of crowdfunding massively expensive projects, I suspect the real story is simply this: $700 out-of-pocket for a phone is too much. You often hear arguments—usually aimed at Apple—that the off-contract price is the true price of the phone, and that therefore whatever phone they want to argue is overpriced is way overpriced. How out of bounds Apple is depends on who you compare it to; an HTC One with 32GB would be $600 off-contract compared to the 32G 5’s $750, but a Galaxy S 4 with the same memory is $695.1
But these aren’t the prices most people pay. I don’t mean “most Americans” due to our carrier subsidies, I mean “most people.” European operators offer “handset financing” over 12 or 24 month periods which have very much the same effect. Orange UK offers that 32G iPhone 5 for £100 up front plus £42/month for two years, on the 5GB/mo data plan—as of this writing, roughly $155 and $73. (While I haven’t done the currency conversions for Deutsche Telekom’s offer it looks comparable—and again a lower overall cost than I’d get here!)
Back here in the States, if I pay $300 for the iPhone and $85 a month for service the next two years, I’ve paid $450 less than someone who’s paid $750 for the same phone and $85 a month for service for the next two years. While I’ve paid $100 more than someone who bought an HTC One on contract, I’ve paid $300 less than someone who bought the HTC One off contract.
This is real, non-imaginary money being saved. To the buyer, the true price of a phone is the price actually paid. And this means that the true difference in cost between that HTC One and the Ubuntu Edge is $530. The Edge is an objectively better phone in some ways.2 But ultimately a lot of what you’re being asked to pay for is a warm fuzzy feeling—and a warm fuzzy feeling that costs you a lot more in real, actual money than that Apple (or Samsung or HTC or…) logo does.
The Edge’s biggest claim to fame would have been the 128G storage capacity and the ability to dual boot between Ubuntu and Android. As we move more and more data into the cloud the first seems less interesting, though, and anyone who wants to dual boot their phone—including the mid-90s version of me who’d probably have thought that was awesome—seriously needs to get out more. ↩
What’s that—you require a different form of payment? Well, in that case, how about my high-level exposure to a wide online audience? Can I pay with that? Listen, maybe you don’t understand how “exposure” works: It means my writing is now reaching more readers than ever before, many of whom are currently enjoying and sharing my work, for free. This in turn brings increased web traffic and attention to the site that has published it, which compensates me for my labor and time with the feeling of deep fulfillment and validation I derive from being a part of said site. And you’re saying that when it comes to paying for some of my most basic living expenses, that counts for nothing?
The Verge’s Jeff Blagdon reports—or more accurately is one of the first to link to the company’s press release—that FileMaker is dropping Bento, their interesting but perhaps too odd-duck “consumer database” program. Everyone who played with it seemed to love it in theory, but I know I could never quite figure out what I’d do with it in practice. Apparently I wasn’t alone.
I confess the main thing that surprised me about this announcement wasn’t the announcement, it was going to FileMaker’s web site and discovering the cheapest product that will be left—FileMaker Pro—is $299. Access 2013’s street price is around $99, and the whole of Microsoft Office Professional (the version that comes with Access) is around $275. Yes, Access is hateful and FileMaker is a pretty pretty flower, but wow.
Hello, and thank you for sending me your job posting for a JAVA DEVELOPER in GODFORSAKEN GULCH, IA. I regret to inform you that this does not meet my needs for the following reasons:
☒ This position is not in my area, and does not mention relocation
☒ This is a contract position for under a year and makes no mention of contract-to-hire
☒ The position’s main technology requirement isn’t even on my résumé
☐ The position’s main technology requirement is for “5–7 years” of a technology that burst onto the scene two years ago
☐ Your client’s business model is a transparent attempt to clone someone else’s success
☐ Your client’s business model is a transparent attempt to clone someone else’s failure (N.B.: ask for your full fee up front)
☐ I don’t expect you to be a programmer but you apparently can’t even use your mail client correctly and for God’s sake turn on spell-check
I understand that it saves you time to do a keyword search on a résumé database followed by blasting out form letters that specifically target those who are:
Just as willing to do a half-assed version of their job as you are to do a half-assed version of yours
I understand this works for many clients: there are enough underqualified, desperate people with half an ass to go around, and recruiters who care about their clients’ needs frankly charge more than you do. However, any company that would use someone like you for recruiting is a company I wouldn’t want to work for (and would be hesitant to use products/services from, since I know the quality of their employees), and so I respectfully ask you to remove me from your contact database.
So that this letter isn’t as much a waste of your time as yours was of mine, I’d like to offer a new slogan for your firm, free of charge: “Matching miserable people with miserable jobs at miserable companies since YEAR.” It has a nice ring to it, and it’s exclusively yours, RECRUITER NAME of BUZZWORDS STRUNG TOGETHER INC!
Netflix has had HTML5-based streaming for a while now—that’s how they get to iOS-based devices and anything else that can’t run Silverlight, the Flash competitor from Microsoft that never really took off. (It joins Microsoft’s long parade of chart failures in their attempt to take over Adobe’s markets.) Now, Netflix is ready to move forward with plans to make everything HTML5-based. They’re abandoning proprietary protocols to go with standards! Hooray!
So naturally, GigaOM reports, “free software” activists are greeting this with… a call to boycott.
Why? Netflix is relying on HTML5’s upcoming video standards to allow content to be saddled with DRM. The free software crowd isn’t upset that Netflix is adopting HTML5 video, it’s upset that Netflix is helping to drive “non-free” web video.
Except, of course, that it isn’t. Netflix is helping to drive a “non-free” wrapper around an actual open standard, the way MP3, AAC and MP4 already work.1 They’re not doing this because they hate freedom, they’re doing it because it’s required by their business partners.
Furthermore, for most users this is simply a non-issue. I’d vastly prefer to own media and software without DRM (and I have no compunction about removing DRM for my own purposes), but if I’m watching streaming video I don’t care whether the stream is encrypted. The people who care are people who either don’t have the decryption plugin available or who refuse on general principle to run any closed source software. (If you said “and pirates,” bad answer. Pirates don’t care whether your shit is encrypted to begin with.)
In other words, Linux users. Not all Linux users, either, but the ones who correct you with, “Ah, that’s GNU/Linux.”
So: sorry, guys, but your reality check bounced.2 You know full well that the choice here is between Netflix using HTML5 video with encryption or using something completely proprietary with encryption—there is no option that lets you get your “House of Cards” fix without using closed source. (Or pirating, which of course none of you would ever do, right?) You can have Kevin Spacey doing a Southern drawl or your GNU/Principles, but not both.
I’m sure someone wants to bring up H.264 versus WebM here, but when we talk about standards, “open” doesn’t require free in any sense of free, it just means that it’s not exclusive. The Mini-USB connector is an open standard and Apple’s Lightning connector is not, but both are patented and require royalties. ↩
While using “guys” as a collective is sexist, percentage-wise it’s pretty accurate; Linux kernel developer Sarah Sharp estimates that while the proprietary software world has about 20–30% women contributing, the open source software world is at a dismal 1–3%. I presume they’re waiting for a GPL-licensed implementation of equality. ↩
According to sources cited by The Verge, the Finnish phone maker is readying a software update that will sweep many of its higher-end Lumia devices, turning on Bluetooth 4.0 capability that resides within the chipset.
Okay, so: Nokia is late to the party, since everyone else has this. Bluetooth 4.0 is two years old. (The first device on the market supporting it was the iPhone 4S.)
What makes this interesting to me is that Bluetooth 4.0 is actually a rebranding of a technology called “Wibree” released in 2006 based on Bluetooth. While I’m sure there were a few differences between Wibree and the final standard, you’d think the company that invented Wibree would have a bit of a head start on Bluetooth 4.0 devices. And who invented Wibree?
While “Nokia phones finally get Bluetooth 4.0” isn’t really a big story even to most Nokia users, it’s a made-to-order illustration of how Nokia reached the market position they’re in today.
[Dropbox CEO Drew] Houston wants Dropbox to become “the spiritual successor to the hard drive.” Users don’t interact with files on iOS, Android, or the web the way they do on PCs. Developers can now make Dropbox the “open” and “save” windows in their apps. But Houston and company don’t want to stop at files. In large part because mobile devices have spawned a less file-centric user experience, much of the data we generate and access doesn’t have a separate life as a document. You don’t play Angry Birds and then save your game as a “.ang” file. If Dropbox’s gambit works, however, you’ll be able to pick up the game on your iPad where you left off playing on your Galaxy S4.
This is a really smart move—ambitious, but over the long run necessary. Right now we’re still talking about how much trust to put in the cloud, but sooner than we think the notion that any data has a “primary” physical location is going to seem as quaint as mechanical hard drives.
The bigger problem is that [big web players have] abandoned interoperability. RSS, semantic markup, microformats, and open APIs all enable interoperability, but the big players don’t want that — they want to lock you in, shut out competitors, and make a service so proprietary that even if you could get your data out, it would be either useless (no alternatives to import into) or cripplingly lonely (empty social networks).
Inevitably when this is brought up, people conflate “interoperability” and “open data” with “open source.” (Then they rant about how Marco is a hypocrite for making observations like this while using Apple products.) The free software community seems to have had, for a long time, an assumption that open data is a natural consequence of open source and that those of us who want to see more of the former should be pushing the latter. This isn’t unreasonable at first glance—an open source word processor can’t be using a proprietary data format.1 The problem is that open data and open source are increasingly orthogonal issues.
So let’s state this very baldly. Suppose I come up with what turns out to be a popular online social network, a successful “walled garden” like Facebook. It’s a walled garden because I don’t give you any APIs to get your data back out once you’ve put it in. That’s all that matters.
What doesn’t matter is the license on the software it runs. I could be running WalledGardenBook on 100% free software, put its source code on GitHub, do all of the coding in Emacs and genuflect to a photo of Richard Stallman daily, and it would still be a walled garden.
Your choice of software ultimately has very little bearing on how easy it is to access your data in the cloud. What matters is access to that data—certainly import/export functions (and you want both), but also open, well-documented and, when possible, standardized APIs—and, just as importantly, terms of service that let you treat your data as your data. Twitter, by making an active choice to be evil, has done us all a favor in a strange way: we know what to watch out for next time.2
Within the next decade the computing model will flip to cloud-first—the choices of who we trust with our data will become all-important. Making that choice based on who makes our favorite mobile operating system—or on who’s screaming “open!” the loudest—is probably unwise.
This doesn’t mean that an open source data format can’t be so convoluted it’s extremely difficult to use in anything else. ↩
As much as I rail against Facebook’s walled garden, I don’t think of them as evil the way I think of Twitter as evil—they’re actually pretty good about giving you access to your data. Their primary fault is assuming that society will eventually just redefine the concept of “privacy.” Of course, there’s a non-zero chance they’re right. ↩
There’s a lot that’s been written over the past couple of years about battles over online music royalties, most recently in the form of David Lowery’s article, “My Song Got Played On Pandora 1 Million Times and All I Got Was $16.89.” Michael DeGusta produced a well-researched rebuttal estimating that Pandora actually paid over $1,300 for that, and points out that when you consider this on a basis of listeners per play Pandora (and by extension other similar services like Spotify) are paying far more than terrestrial radio: generally, only one person was listening to each of those million plays on Pandora, while tens of thousands of people are hearing each song play on commercial FM.
Of course, if you’re a performer or songwriter, what matters to you is how much money you’re actually getting in total, not per play, and a future in which you’re making a fraction of the revenue you are now is not one you’re likely to be enthusiastic about. Even if DeGusta is correct and Lowery actually made $234 for those plays, that’s still less than one-fifth of what he’s making from conventional radio airplay. If you assume Lowery is making hundreds of thousands a year that doesn’t sound so bad, but the median income for a composer is about $46,000.
Pandora, for their part, has pointed out that they pay more money per song than any other radio service “broadcasting” on any media—by more than four times on average—and while they didn’t call attention to this, Pandora is, despite their revenue and subscriber growth, losing money. They’re not losing money as fast as Spotify is losing money, so, uh, that’s something. The main source of Spotify’s massive bleeding? You guessed it—royalty payments.
Of course, Pandora and Spotify don’t have the same business model, not exactly: Pandora is kind of a radio station, whereas Spotify is “subscribe to your whole music library.” Because of this, Spotify actually pays a higher rate per song play than Pandora does. But it’s still not very much, so it leaves us in the same place, with musicians and songwriters screaming at them for being cheap bastards and them screaming back that they’re losing money as it is and they can’t possibly be expected to pay more.
And while we’re all primed to look for villains in any story like this, they’re both right. It may not be possible for services like Pandora or Spotify to simultaneously pay as much money as non-Internet radio stations do and turn a profit. This suggests that this is a lousy business model. Spotify in particular reminds me of the old joke about selling goods at a per-unit loss, believing you can make it up in volume. It’s the Webvan of streaming services.
Apple’s soon-to-launch iTunes Radio is following the Pandora model—I suspect it’s “iTunes Genius Mix” with Apple’s library rather than your library. It will, according to Billboard, pay “slightly more than the pure-play rate that Pandora pays,” but slightly is slightly. Yet unlike everybody else, Apple can tie their “radio” directly to their online music store, and you can bet that when the first middle-aged rocker writes a whiny anti-Apple editorial Eddy Cue will hold up a balance sheet showing just how many songs have been sold directly from iTunes Radio.
Apple has long maintained that people would rather own their music than subscribe to it. I’ve known more than a few users who’ve disagreed—people who use Spotify for all their music listening—and, of course, Spotify and their competitors are betting Apple is wrong, too. In the final analysis, though, the question may not be about what consumers want after all: you can go broke giving people what they want.
Sad that Game of Thrones has wrapped up its third season? Looking for some drama to fill the time? We’ve got just the thing for you. One of the Internet’s longest-running and most-hated lawsuits is back: SCO v. IBM has been reopened by Utah district court judge David Nuffer.
If you’re not familiar with this case, it really is one of the most fascinatingly bizarre ones in the annals of computer history — and it’s just gotten more bizarre. You may remember “SCO Unix” from many years ago; SCO made a Unix clone and then bought the rights to the Unix name from AT&T. The thing is, the SCO in this lawsuit really isn’t the SCO that sold SCO Unix — that SCO actually sold off all their Unix rights and changed their name to Tarantella. One of the companies that got some of those rights was Caldera, a spinoff from Novell; Caldera then changed their name to SCO and started suing people.
The sad footnote in that story is that the people who started Caldera — none of whom were involved by the time the lawsuits were flying — were arguably years ahead of their time in the Linux world. If they’d actually gotten their shit together Linux might have actually taken over the desktop after all ha ha ha ha hahaha okay but they were actually pretty good.
There are over 1,000 flashlight apps in the App Store, 50 in our own CrunchBase, and even one that’s venture funded. They’re all in trouble now. Why download or waste home screen space on an app when you can use Apple’s native version with just a swipe up and a tap? Sure, these apps probably didn’t take long to develop, and maybe their makers should have predicted Apple might launch its own flashlight, but still, they deserve a little sympathy.
I love this article. See, the basic premise boils down to “it’s unfair for a major vendor to integrate functionality that is or could be served by third-party developers.” But that’s always been tendentious bullshit dressed up in revolutionary fight-for-the-underdog language. At various points in the history of computing, everything from spell checkers to TCP/IP stacks to GUIs were third-party applications.
The thing is, it’s often easy to miss how stupid the argument is, but Constine has—dare I say—shined a bright light on it. “How dare Apple bring down its iron heel on hundreds of developers,” he asks, “competing in the vibrant flashlight application space!” Yes! Yes, Josh, we all shudder to think how much innovation in the field of turning the camera flash LED on and off might be lost forever!
Like I said, I love this article. Its revolutionary clothes are so clearly a Che Guevara T-shirt it bought at Hot Topic.
On his “Talk Show” last week, John Gruber said that he’d heard from his sources that the new iOS 7 interface was “polarizing.” Given the reactions I’ve seen around the web in the last 24 hours, his sources hit it on the nose.
There’s more than a little grumbling from designers I’ve seen. Mike Rundle proclaims it’s “anti-Apple,” whereas Ryan Katkov asks in his subhead, “Did Apple fall victim to design by committee?”
The icons for the new iOS 7 apps feel designed by individual app teams without overall editorial direction. Compare them to the Nokia N9—in some ways a closer visual kin to iOS 7 than Windows Phone or Android—and Nokia’s icons look like they’re all from the same set in a way that iOS 7’s don’t.
But iOS 7 has “harsh colors?” Its entire color palette is pastels. It’s easy—possibly unavoidable—to make comparisons between it and Windows Phone, but the thing that most immediately stands out as a difference is that iOS 7 is not full of big squares of solid primary colors. iOS 7’s aesthetic is a post-modern love child of Dieter Rams and Georgia O’Keeffe. Windows Phone’s aesthetic is that of injection-molded plastic blocks.
The design of iOS 7 is based on rules. There’s an intricate system at work, a Z-axis of layers organized in a logical way. [It] is anything but flat. It is three dimensional not just visually but logically. It uses translucency not to show off, but to provide you with a sense of place.
This new look is extremely deliberate. This doesn’t mean that a lot of changes aren’t controversial—it’s pretty much killed the button as we’ve come to know it—but they’re never being done just for the hell of it. They’re questioning assumptions we take for granted. (Do we really still need buttons? After 15–20 years of using the web, we’re accustomed to taking action by clicking/tapping frameless icons and words differentiated only by color.) It’s not entirely consistent, but they’re trying to replace three decades of design language. That’s not something you’re going to knock out of the ballpark on your first try, especially if you have less than a year to do it in.
The question critics are understandably harping on is how much of the iOS 7 interface is influenced by Windows Phone and Android. If I were running Microsoft’s PR Twitter account, I admit I’d have a lot of trouble not tweeting things like “How about those photocopiers, Cupertino” and “Don’t worry, we won’t sue over the non-rounded corners.”
Yet I think iOS 7 reflects Jony Ive’s sensibilities rather than merely keeping up with competitors. Android and iOS have always shared a “grid of icons” look; the ways in which Android varies from that—such as widgets—are ways iOS still doesn’t attempt. Windows Phone goes a markedly different way. We’ve seen things like the Control Center on Android devices first, but we’ve also seen things like it on Macs in the 1990s. The fonts do look similar, but Android moved toward iOS, not the reverse—iOS has always used Helvetica or Helvetica Neue, while Android 4.0 introduced the Helvetica-like Roboto. (Again, Microsoft stands out by using Segoe.) Ive definitely does share some sensibilities with the folks at Microsoft, but no one’s going to mistake an iOS 7 device for a WP8 device.
I wouldn’t claim that iOS 7’s design language is bolder than WP8’s, but it’s certainly a radical break from the past. Which is why Mike Rundle is completely wrong to say it’s anti-Apple. A radical break from the past is just about the most goddamn Apple thing that Apple could do.
One of the many neat little things about Scrivener is its ability to set “target” word counts. Personally I waffle about what amount to shoot for all the time—my current feeling is that 500 is a good minimum because it’s fairly easily achievable (even easily surpassable).
But, I’m frequently not writing in Scrivener. Blog posts like this one tend to be written in Byword, and lately I’ve been using Ulysses 3 for shorter fiction. (Scrivener’s ability to serve as a collection bin for all kinds of research bits still gives it an edge over U3 for, well, long stuff that needs research bits.)
I realized not too long ago that I could hack together a “daily word count” system with Keyboard Maestro. It’s not as elegant as Scrivener’s; it doesn’t reset automatically, and it doesn’t give you neat progress bars. But it gets the job done the way I need it. I originally wrote it for Byword but it should work in any plain text editor. (I organize my KM macros into application-specific groups, and keep these in an “editor” group.)
This consists of two separate macros. Use the admittedly misnamed Set Target to set your baseline when you start writing, and Check Target to see how many words you’ve added.
Interestingly, I’ve found that Keyboard Maestro’s word counts differ from Byword’s and Ulysses’s: they’re always a little higher. In practice they’re not far enough off to be bothersome for this use case, but I’m curious what the difference in the algorithm is.
The company I’m with—TRAIL, developers of an online “Internet literacy” platform called JobScout—needs to bring on another developer, at least on a contract basis, to help with both maintenance/improvements to their existing product and a barely-started rewrite to bring the platform to the next level.
While remote working is possible (and what I do most of the time, in fact), ideally I’d like someone around the San Francisco Bay Area who can come into the office once a week with me and the rest of the staff who mostly work on site there.
If you think you might be interested in helping out, contact me here with a bit of an introduction and ideally a pointer to projects you’ve worked on, Github repositories, the usual drill.
My first post on Medium, in part to see what editing on the thing is like (nice) and in part because I may want to have a place where I can write the occasional thing that doesn’t seem like it quite fits here.
Sarkeesian’s [“Tropes vs. Women in Video Games”] videos aren’t fearful or phobic; instead, they extend hope that video games and other media live up to their promise. But that hasn’t stopped some video game fans and men’s rights advocacy collectives from repeatedly decrying Sarkeesian’s work.
Tellingly, Sarkeesian’s game videos themselves go into little depth at all. Their arguments are radically nonconfrontational, and also limited by time constraints. There’s barely anything about Sarkeesian’s takeaway message to cause real affront to anybody. The disproportionately angry reaction to the videos, however, is due cause for alarm. If people, especially female people, can literally say nothing in criticism of lazy game narratives, what hope do we have?
Unlike Jenn Frank, I have relatively little interest in either video games or horror films, but I’ve been watching the drama around Sarkeesian with bemusement and more than a little horror myself. There may be valid criticisms to be made of her own critiques, but so far I haven’t found one. The best attempts—by which I mean the ones that aren’t overtly misogynist death and rape threats—seem to always circle back to “but that’s the only way to make game stories exciting.” If you honestly think that, you may have glaucoma of the imagination.
Thanks to a friend I acquired an iPad mini for an Android tablet price. I was one of many Apple-leaning folks who was originally pretty skeptical of the 7″ form factor. (The ghost of Steve Jobs compels me to point out that the iPad mini is really 8″, thus not technically betraying his own anti-7″ preference.) But it really is better for most of the things that I do with a tablet, which mostly involve content consumption rather than creation.
You may have winced at that last phrase, content consumption rather than creation. I wince when I read it (or type it); it’s the standard knock against iPads, the thing Microsoft wants us to believe isn’t true of the Microsoft Surface, the thing Amazon embraces wholeheartedly. The last couple of years are full of Apple-leaning pundits, including me, demonstrating all the ways in which we can create on iPads, dammit. I’m using Byword on the iPad mini right now!
But if I was using Byword on the Mac, it’d be showing formatting as I typed. I’d be able to prepare “final copy” in HTML or RTF. I’d be able to use Brett Terpstra’s Markdown services. And I’d have already linked the phrase Markdown services because I could have a browsing window and an editing window open at the same time.
Let’s stop praising how the iPad makes us focus on single tasks. No. It doesn’t. It makes us focus on a single application, and if it’s a document-based application, just a single document. A Twitter window next to a window I’m writing a short story in is a distraction, but a window with my story notes next to the short story window is not—it is, in fact, part of my single task.
You’re just clinging to the old way of doing things. Computers are trucks and most people don’t need trucks, they need cars. Tablet sales are going up up up and traditional PC sales are falling. Way of the future.
Yes, true. Tablet sales are rising as PC sales fall. But are you sure that means what you think it means?
How many people buying iPads are buying them as their first computer? There are charming anecdotes about 90-year-old grandmothers who’ve done that, but tablets seem to overwhelmingly be in the hands of people who already own computers. Maybe they’re buying an iPad instead of a laptop, but it seems most of them are buying an iPad to have an iPad. PCs and tablets are related markets, but not the same market. The PC market hasn’t stopped expanding because iPads are cannibalizing it; it’s stopped expanding because it’s saturated.
Am I saying Steve Jobs’ famous cars and trucks analogy was wrong, then? Sort of. Desktop computers may be trucks, but the laptops are the cars. That’s why they’ve been outselling desktops for years. Tablets are motorcycles. Maybe Vespas. They’re fun and in some circumstances they’re genuinely your best choice, but most people just aren’t going to get by with them as their only vehicle.
This isn’t a knock against tablets (or Vespas). It’s just a recognition that you have to use the right product for the right task. Using an iPad for reading and web browsing and casual gaming feels natural, more so than using a laptop. For writing (let alone coding), though, this isn’t the case. And while each new iteration of iOS, Android and Windows God We Gave You Back The Start Button Please Love Us Again1 will give us more theoretical reasons to leave our laptops at home or the office, I’m not convinced this is solely a matter of missing operating system functionality. It may be that some tasks just don’t map that well to a user experience designed around direct object manipulation. It may be that spreadsheets and word processors, for instance, will always be better with a hardware keyboard and a trackpad—and I remain dubious that the Microsoft Surface represents the best way to solve that.
While I thought that was a catchy name, too, I’m told it’s been renamed due to a copyright infringement claim. ↩
While there’s an unavoidable undertone of glib hipsterism in the whole “offline for a year” experiment Miller conducted, this insight is one that goes too often unrecognized:
My plan was to leave the internet and therefore find the “real” Paul and get in touch with the “real” world, but the real Paul and the real world are already inextricably linked to the internet. Not to say that my life wasn’t different without the internet, just that it wasn’t real life.
For both good and ill, the Internet really has changed everything. Very few of us in the first world are living lives in 2013 that bear much resemblance to the life we would have had—or even could have had—were we the same age in 1988. (And life outside our first world privilege sphere has undergone radical change due to technology in the last quarter-century, too.)
I joked on Twitter last night that I should follow up PHP is not an acceptable COBOL with “PSR-0 makes PHP a completely adequate FORTRAN” and let people guess whether that’s actually a compliment. This is not exactly that post, but what’s been going on in the PHP world has been kind of interesting.
First off, a lot of people get really defensive when you attack PHP, or indeed whatever their language of choice is. (Except Lisp, because Lispers are secure enough not to give a shit what you think.)
Eevee’s epic rant PHP: a fractal of bad design came along not too long after my rant and took most of the heat away. (Thanks!) The most common defense against critics seems to boil down to “a good programmer will get used to the quirks of his tools and work around them.” Absolutely. But that doesn’t mean someone saying, “Hey, your tools are quirky in vastly unnecessary ways” is wrong. Two similar functions that take the same arguments in a different order? Bad design. Requiring you to explicitly name the variables you want your closure to close over? Bad design. And as Eevee noted, PHP makes it easy to write code that does the wrong thing with no warning: where a failure to do proper error checking in Python or Ruby will always cause an exception, PHP might chug merrily along and “work” incorrectly. Bad design.
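To make those complaints concrete, here’s a quick illustrative sketch (my examples, not Eevee’s):

```php
<?php
// Argument-order inconsistency: string functions take the haystack
// first, but array functions take the needle first.
$pos   = strpos('grass', 'ass');              // strpos($haystack, $needle)
$found = array_search('ass', ['ass', 'gr']);  // array_search($needle, $haystack)

// Closures must explicitly name the variables they close over.
$multiplier = 3;
$triple = function ($n) use ($multiplier) {
    return $n * $multiplier;
};

// Silently doing the wrong thing: strpos() returns 0 for a match at
// the start of the string and false for no match, and a loose ==
// comparison can't tell the two apart.
if (strpos('grass', 'gr') == false) {
    echo 'not found'; // runs anyway: 0 == false is true in PHP
}
```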
Yet! I come to praise PHP rather than bury it. Eevee complained “the language, the framework, the ecosystem are all just bad.” What’s changed in the last year is the ecosystem. And it’s changed a lot.
PSR-0 sets up a standard for what PHP calls “autoloading,” a way to automatically load a class when your code references it. In theory, this is “better” than a language like Python, which requires you to import foo when you want to use a “module.” On the surface, that sounds just like the old PHP way of using require to load files, right?
No. In practice, Python’s structure for how you organize modules into packages—basically, easily-installable libraries—is a huge win over PHP’s laissez-faire attitude. (PHP tried to have this with PEAR, which would be fine if PEAR wasn’t opaque, fragile, out-of-date and just all around annoying.) PSR-0 now gives us that package structure. More or less.
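The mechanics are simple enough to sketch in a few lines. This is a simplified PSR-0-style resolver, not the reference implementation — real PSR-0 only treats underscores specially in the class-name portion, and the src/ base directory is my assumption:

```php
<?php
// Simplified PSR-0-style mapping: Vendor\Package\ClassName
// resolves to <baseDir>/Vendor/Package/ClassName.php, with
// namespace separators (and, here, underscores) becoming
// directory separators.
function psr0_path($class, $baseDir = 'src/') {
    return $baseDir . str_replace(['\\', '_'], '/', $class) . '.php';
}

// Hook the mapping into PHP's autoload chain, so referencing an
// unloaded class triggers a require of the matching file.
spl_autoload_register(function ($class) {
    $path = psr0_path($class, __DIR__ . '/src/');
    if (is_file($path)) {
        require $path;
    }
});
```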
That brings us to Composer. Which is not directly related to PSR-0, but bear with me.
Using Composer requires you to structure a PHP project in a specific way—not an overly restrictive way, but there are a few standards, just like you’d need for Node, or for Ruby’s bundler package manager. Composer, among other things, provides an autoload component that handles not only PSR-0 style files but can easily “map” anything else you want to use. You can include that in your PHP project with one line:
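That one line, assuming Composer’s default vendor/ directory, is the require of Composer’s generated autoloader:

```php
<?php
// Composer writes vendor/autoload.php for you; requiring it
// registers an autoloader covering every installed package.
require __DIR__ . '/vendor/autoload.php';
```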
The magic happens, though, when you bring Composer and PSR-0 together. Because PSR-0 is a standard, it lets you create packages and use packages other people have created. Because of that, you can include new packages in your project by adding them to your project’s Composer configuration file and running composer install.
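For instance, pulling in a package is just a matter of listing it in your composer.json — here using Monolog, the popular logging library, purely as an example:

```json
{
    "require": {
        "monolog/monolog": "1.*"
    }
}
```

Run composer install and Composer fetches the package from Packagist and wires it into the autoloader; its classes are then available without a single require of your own.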
Yes, this is stuff that Ruby and Node have had for a couple years, and is the rough equivalent of stuff that Python has had for over a decade and Perl has had since the 1880s. One can argue about whether the Composer/npm/bundler approach is better or worse than the CPAN/PyPI approach—as much of a fan of Python as I am, I lean toward “better,” honestly—but the main thing is that it not only exists, it works, and it works well.
Right now, there are nearly 10,000 packages at Packagist, the repository that Composer draws from; I’m sure that just like npm’s repository a lot of those packages are absolute crap, but again, not the real point. The point is that thanks to all this infrastructure that’s sprung up, PHP is very quickly starting to feel, well, modern. A lot of interesting stuff was always out there, but now it’s there in one place, following one standard, (mostly) guaranteed to be interoperable. A lot of things that Rails and Django developers have been rightly smug about for the last few years, PHP developers finally get to share in.
Sure, this doesn’t fix a lot of PHP weirdness. (I’ve recently been fighting with json_decode and whoever thought that error-checking scheme was a good idea needs to be smacked with a dead salmon a few times.) But developing with a framework like Laravel 4 is… pleasant. I may still like Flask more, but I’m no longer pining for it.
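To see what the json_decode complaint is about: on malformed input it just returns null, which is also a perfectly valid decoded value, so you’re forced to check a separate global error flag. A small sketch:

```php
<?php
// json_decode() returns null on failure—but decoding the literal
// string "null" also returns null. The only reliable check is the
// out-of-band json_last_error() flag.
$data = json_decode('{"bad json', true);
if (json_last_error() !== JSON_ERROR_NONE) {
    // json_last_error_msg() requires PHP >= 5.5.
    echo 'JSON error: ' . json_last_error_msg(), "\n";
}
```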
I have a confession to make that may get my B-List Nerd Card revoked, but I don’t think the Aeropress coffee brewer makes great coffee.
It makes good coffee, and when it comes to brewing speed and cleanup it’s unbeatable. But I’ve been using one off and on for nearly four years now, generally aiming to brew what I consider to be a standard twelve-ounce cup of coffee, and there are a lot of…quirky things about it. You can’t brew a twelve-ounce cup of coffee with it. Instead, you’re supposed to brew it at about triple-strength, which is what the Aeropress is designed to do, and dilute it. (They call the triple-strength coffee “espresso,” which it manifestly is not.)
Yet when I do that—even using (at least!) as much ground coffee as I would for twelve ounces of water—I often get watery coffee. I can get consistently good eight ounce cups of coffee by using enough water to make the concentrate about double-strength instead, but twelve is hit or miss at best. I’ve talked with baristas at good local coffee shops like Chromatic and Bellano and they’ve said they had similar experiences.
The other thing is that even a good cup of coffee on the Aeropress has, well, a distinctly Aeropress flavor, in much the same way that cold brew coffee tends to taste distinctly like cold brew coffee. The sweetness gets highlighted, the bitterness is muted, and two very different coffees—say, a dark roast Sumatran and a light roast Ethiopian—have their similarities accented more than their differences. This isn’t bad, but it’s noticeable.
I think the Aeropress appeals to so many geeks because it’s simple yet endlessly fiddly: change the grind, change the steeping time, change when and for how long you stir, brew it inverted, so on and so forth. You can’t match that level of fiddliness with anything but a pour-over cone.
But I don’t always want fiddly. A lot of times I just want coffee, consistently brewed well and full-flavored. Geeky is still a plus. And this led me a few months ago to the Clever Coffee Dripper.
Basically, what the Clever does is act as a hybrid of filter pour-over and French press. It’s a big cone that takes #4 coffee filters (the ones that you’d use for most electric coffee makers that have cones rather than baskets—bigger than what other “single cup” cone brewers use). Put the filter in, put the coffee in, then pour the water in. Don’t “pre-soak” the grounds, stir after a minute and a half or so, and wait about four minutes.
Then, just set the Clever on your coffee cup. A stopper in the bottom gets opened when it’s on a cup; when it’s on the counter, letting the coffee steep, it’s closed. The coffee pours out of the Clever over a minute or so.
I can’t say the Clever gives me the best coffee I’ve ever had, or even the best coffee I’ve ever made. And, yes, I’ve had (eight-ounce) cups from the Aeropress which are as good as the (twelve-ounce) cups from the Clever. But the Clever brews a great cup of coffee every single time.
I’ll still keep the Aeropress and the other coffee torture devices I’ve accumulated over the years. There really isn’t much tweaking you can do with the Clever, and hey, tweaking is fun. Also, when I want more than a cup at a time, I need something else. (More about which another time, maybe.) But for my standard, go-to single cup brewer, the Clever has won me over.
Earlier this week the web was abuzz with Apple’s latest clear strike against freedom: they refused to allow the newest issue of the independent comic Saga to be distributed through ComiXology due to “postage-stamp sized” images of gay sex. Even though ComiXology is a third-party application, it uses Apple’s in-app purchase system—as it must, due to Apple’s iron fist of doom—and thus is subject to Apple’s doomy iron-fisted policies. And this did seem pretty outrageous—and possibly homophobic, given that previous issues of the very same comic had explicit images of straight sex.
Of course, the web was abuzz a day later—in subdued fashion—with ComiXology’s admission that, actually, they just didn’t submit Saga #12 to Apple because they interpreted Apple’s rules as prohibiting it. When they actually asked, it turned out Apple didn’t have a problem with it after all.
I’m curious how widespread the second report is compared to the first; my suspicion is that it hasn’t traveled as far, for much the same reason that ComiXology made the mistake in the first place. “Apple is Big Brother” has become a default narrative about the company. Apple stands for closed systems, proprietary everything, and a level of control over the way their customers use their products that would send us all fleeing for the hills if we had any common sense.
At first glance this is a baffling take. If there’s something I could do with OS X 10.6 that I can’t do with OS X 10.8, I haven’t found it yet. My software all still works. The Unix shell is still there. AppleScript is still there. I can still use utilities like LaunchBar and Keyboard Maestro that are so absurdly powerful that I giggle like a Japanese schoolgirl when some yoyo spouts off with the old “Macs are just toys” trope. While iOS is locked down by comparison—and there are some things that definitely do need to be opened up with respect to inter-application communication—an iOS device is an application console. We don’t complain (much) about a PlayStation 3 being “locked down” because it’s a game console. That’s what they do.
Yet it’s a take even long-time Apple users fall into. Ted Landau comes across this way at times. Leo Laporte is positively cranky about it. The “iOSification” of OS X surely leads to OS X either becoming just as locked down as iOS or simply merging with iOS in a few years! Again, no real evidence supports this—the iOS elements that have been migrated to OS X have not resulted in OS X becoming more locked down. And there’s no reason to think that more OS X technologies won’t move to iOS, making it less locked down.
No reason, except that wouldn’t fit the default narrative.
Apple isn’t the first company to be saddled with this—most companies of any consequence are. Microsoft’s default narrative for about a decade, starting in the late ’90s, was that they were slow, stodgy and frankly kind of thuggish: you dealt with them because you had to. Before them, IBM had the slow and stodgy reputation, although less thug than “man in the dark blue business suit.” And, of course, Apple’s default narrative in the late ’90s can be summed up in one word: “beleaguered.”
These narratives aren’t pure nonsense. Apple was on the verge of going under for about a decade, and Microsoft and IBM really were eight thousand pound gorillas that weren’t, by most measures, very good winners. Microsoft’s contract with OEMs, for instance, disallowed shipping machines with Windows pre-installed that could “dual boot” into another operating system, effectively ensuring that only tech heads would ever see BeOS, OS/2 or Linux.
So how much of the default narrative they’re stuck with now is Apple’s fault, and how much of it is fair? Apple is interested in controlling everything they possibly can, sometimes to their obvious detriment (see: anything that involves a server-side component ever). But no market Apple competes in, with the possible exception of the dwindling MP3 player market, affords them a position comparable to Microsoft’s in 1999. Apple’s market share of smartphones can’t smother Android (or vice-versa; unlike PCs, that market simply doesn’t work that way). And frankly, no one’s ecosystem is as closed as it once was—data portability is king. Mac users are often happy Android users; Windows users are often happy iOS users.
There’s a temptation for those of us who generally like Apple to ascribe the default narrative to Tall Poppy Syndrome, which is, in Wikipedia’s words, “a social phenomenon in which people of genuine merit are resented, attacked, cut down, or criticized because their talents or achievements elevate them above or distinguish them from their peers.” The Apple of 15 years ago was just as controlling, arrogant, and likely to overuse the word “magical” in advertising as the Apple of today, but now they can claim to have effectively reinvented the smartphone market and the tablet market. This is infuriating not because it’s transparent bullshit, but because it isn’t: they have a pretty good case for those claims.
That isn’t to say Apple isn’t controlling and arrogant, or that their choices don’t create genuine problems worth bitching about. Nobody should pretend it’s good for consumers that Apple doesn’t let Nuance make Swype for iOS and doesn’t let us set Chrome as our default iPhone browser, or that it’s to my benefit that iCloud makes it impossible to use Byword on the Mac and iA Writer on the iPad to edit the same plain text documents. Apple definitely contributes to their own reputation.
But if Apple did “open up” everything, would the default narrative change? At this point, I don’t think so. Look at how persistent bits of the previous Apple narrative have been: that their operating system isn’t powerful enough, that their hardware is massively overpriced, that their main audience is technically illiterate fashionistas. It doesn’t matter what facts are marshaled against this. People hate Apple because Apple. As Marco Arment wrote, Apple “could release a revolutionary 60-inch 4K TV for $99 with built-in nanobots to assemble and dispense free smartwatches, and people would complain that it should cost $49 and the nanobots aren’t open enough.”
And, at least when I ran a Google search just now for news on Apple’s Saga saga, the first hit was from the Guardian: “Apple didn’t ban Saga comic, but censorship questions remain.”
Apple does make content decisions that at various points have seemed anti-competitive, puritanical, politically correct, strangely prissy or just plain baffling. So does Google, and if people buy Windows Phone in enough numbers for it to matter, so will Microsoft. And in cases involving content rather than applications, at the least, it won’t matter that much, because Apple, Google and Microsoft won’t be your exclusive content providers. (On iOS, I can buy from ComiXology directly, I can load DRM-Free EPUBs into iBooks by merely clicking on a link in Safari, and so on.)
Apple is neither alone in making weird “curation” decisions, nor are they the inescapable arbiter of what data you load up your iPad with. But most reports, for the foreseeable future, will be written from the perspective that both of those facts are false or, at best, irrelevant. After all, you gotta stick to the narrative.
“Explore” is the direction Foursquare wants to go in for the future. Over the past year, the company has slowly moved away from its heavy emphasis on badges, mayorships and “gamification” (my least favorite word in tech), instead moving toward a mobile discovery service somewhat akin to the space Yelp currently inhabits.
Foursquare derives 1–10 ratings with an opaque calculation based, presumably, on the only quantitative information it has: how many check-ins there are at a given location, how many of those are returning customers, and how many of them “liked” the spot. They claim this is a better method than Yelp’s approach of asking users to rate things on a 1–5 star scale.
But here’s my bet: Foursquare is going to have trouble distinguishing between “places everyone goes to because they’re great” and “places everyone goes to because they’re convenient.” I can’t prove that, but Foursquare’s top-rated Mexican restaurant in Santa Clara is Pedro’s, a solid but uninspired place most notable for being really convenient for margarita-fueled Silicon Valley power lunches. (It’s within walking distance of Intel, Broadcom, EMC and McAfee.)
Their second highest rated Mexican place here? Taco Bell.
Nothing against those Cool Ranch Dorito Tacos, but I’m not convinced Foursquare has really cracked this yet.
Kent Tessman's name keeps coming up, at least if you're me
From late 1998 through early 2001, I was using BeOS as my primary operating system. Yes, it was really possible to do that—between Gobe Productive (a great “Works”-style app written by the team that wrote the original ClarisWorks), the BBEdit-ish Pe text editor, and the slightly buggy but awesome-when-it-worked “object-oriented” graphics program e-Picture (a lot like Macromedia Fireworks, but in some ways better), I had the tools I needed. For the era, Be’s native web browser, NetPositive, wasn’t bad—and, unlike iOS (nudge nudge), it actually had a Flash player available. Not an official one, but one from the oddly named “General Coffee Company Film Productions.”
Later, as I got (back) into text adventures and interactive fiction, I discovered a pretty neat but underutilized language for it called HUGO. Same guys. Actually, just one guy, singular: Kent Tessman. There were a couple of commercial games done with this thing—at least one by Tessman himself.
When I Googled him a few years ago out of curiosity, I discovered that, yes, General Coffee is actually an independent film company. They’ve had at least two films on the festival circuit. By “they” I again mean “Kent Tessman,” since he wrote and directed them.
Right then. I forgot about this strange renaissance gentleman until today, while following up on some links from Brett Terpstra’s recent episode of Systematic with comedian/actor/tech-nerd Rob Corddry. They talked about Markdown—I suspect it’s impossible to have a conversation with Brett about anything that doesn’t touch on Markdown—and on its screenwriting relative, Fountain. Corddry lamented how much the market leader in screenwriting software, Final Draft, sucks, and enthused at some length about a new up-and-coming program called Fade In, which looks like it’s shaping up to be the first serious competitor since the late Movie Magic Screenwriter1.
So I went to Fade In’s web site, and apparently, this amazing thing is written by just one guy.
Yes. Kent Tessman.
If he and Brett Terpstra are ever in a room together, it may cause an awesome nerd singularity. That’s all I wanted to say.
Sure, Movie Magic Screenwriter is still being sold. But Dramatica Pro, their story plot development tool, was still PowerPC code until mid-2012, parts of Screenwriter still are—and all their software looks like Word 95. Even on the Mac. But not as pretty. ↩