The Microsoft Surface Scandal Intensifies
August 15th, 2017

When I wrote that Microsoft had a potential Consumer Reports problem on its hands, I didn't realize it might be only the tip of an iceberg. Clearly the story has taken hold and has garnered attention in lots of places. It also raises the specter of a wider range of product defects that Microsoft should take seriously.
Now I've made it clear that I have problems with the way CR does its reliability surveys. Manufacturers will probably not provide such data, since it is proprietary, and independent repair shops probably won't have much information on recent models. Instead, CR relies on its own readers for this data. They receive questionnaires covering a whole range of product categories and, with simple checkboxes and brief descriptions, list their own experiences.
CR does explain online how it applies its own sampling methods so that the results represent the U.S. population. I'll accept that as accurate for the sake of argument. But that doesn't necessarily validate the results. It doesn't mean the information collected is accurate, since CR is relying on tens of thousands of strangers to decide what information to enter, and on whether their perceptions are correct.
How is that scientific?
Now a published report quoting analyst Ben Bajarin, of Creative Strategies, provides a further reality check on CR's conclusion that 25% of Microsoft Surface tablets and notebooks develop serious defects during the first two years of ownership. As a result, CR will no longer recommend them.
The survey covers 91,741 owners who responded to CR's questionnaires. But how many of those people actually own a Surface? Good question. Bajarin suggests it could be fewer than 50, based on the Surface's low market share. Is that sufficient to provide an accurate picture of the reliability of these PCs? What sort of sampling error should you expect? Remember that political polls surveying hundreds or a few thousand people will list a sampling error of several points. How about 50? Really?
Now consider Microsoft's response to the CR decision not to recommend the Surface: one of denial, an insistence that its gear is reliable. Compare that to Apple's response when it was confronted with inconsistent battery life tests for the Late 2016 MacBook Pro last December. Apple worked with CR to isolate the cause, which turned out to be an obscure Safari bug triggered only by the publication's peculiar testing scheme.
Apple fixed the problem, received a Recommended rating, and that was it.
CR got the opportunity to earn clickbait headlines for two stories: the original story reporting the problem, and the updated report about a macOS Sierra update that fixed it. Apple didn't engage in corporate spin; it did what it had to do, which was to find out what went wrong and fix it.
And now there’s yet another published report claiming that a leaked memo from Microsoft lists unusually high return rates for the Surface Book and the Surface Pro 4.
This is not just about owners having problems; it's about customers actually sending the products back. The memo was reportedly written by Panos Panay, a Microsoft VP, and published by Paul Thurrott, a known Windows advocate. If anything, you might expect Thurrott to skew his coverage positively towards Microsoft, so it's particularly damning that he'd be the one to reveal such damaging news.
The memo lists 16% returns at the launch of the Surface Pro 4, which declined to below 10% after the first month on sale. Microsoft supposedly traced the high returns to problems caused by early bugs in Intel’s Skylake processors, but it’s also reported that, when Microsoft CEO Satya Nadella contacted people at Lenovo to see how they were dealing with those issues, they were told there were no issues.
Now it may well be that these glitches were the result of software or settings issues. Problems with an unresponsive touchscreen or constant freezes might be about the software rather than the hardware. Such ills are not unusual in the Windows world, and they can usually be fixed with patches.
From a customer’s standpoint, though, it doesn’t matter. If it doesn’t work, they want the problem fixed, or they might just return it and choose something else. Problems with Surface gear are legion, and evidently widely documented online.
It’s not the sort of news Microsoft would want to confront, since poor reliability ratings will just dissuade customers from buying a Surface. Even though the Surface usually earns stellar reviews, the people who write them rarely have enough time to evaluate reliability unless they have to deal with an obvious defect with a test sample. Some issues may not appear until after days or weeks of regular use.
Even then, you’d think that CR should have encountered similar issues during their own product tests. After all, they buy everything they test; they do not rely on manufacturer samples except, perhaps, to write a preliminary report. But a final review will always be based on a shipping product purchased anonymously at retail.
But even if CR's reliability survey doesn't accurately reflect Surface reliability, due to its small sample size and the unreliability of its data, it does seem that Microsoft may have real problems on its hands. With relatively low Surface sales, maybe it's time to figure out what's really going on. The corporate spin game is getting old.
What sort of sampling error should you expect
If only there were a way to figure that out…
Oh, wait, there is. Ignoring for the moment that Bajarin's comment is rank unsourced speculation, the margin of error for a sample of 50 drawn from a population of 300 million (roughly the population of the US) is plus or minus 14 points. So the potential range is 11% (25-14) to 39% (25+14; yes, it could be worse than CR is making it out).
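That back-of-envelope figure is easy to check. A minimal sketch, assuming the standard 95% normal-approximation formula for a sample proportion (the worst-case p = 0.5 is what pollsters usually quote):

```python
from math import sqrt

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a sample proportion (normal approximation).

    n: sample size
    p: assumed proportion; 0.5 is the worst case pollsters typically quote
    z: critical value, 1.96 for 95% confidence
    """
    return z * sqrt(p * (1 - p) / n)

# Worst-case margin for a sample of 50 Surface owners
print(round(margin_of_error(50) * 100))    # about 14 points

# For comparison, a typical political poll of 1,000 respondents
print(round(margin_of_error(1000) * 100))  # about 3 points
```

Note that for any large population, the margin depends almost entirely on the sample size, not the population size, which is why 50 respondents is so shaky while 1,000 is serviceable.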
11% failure rate is still really high. 39% is catastrophic. I’m not sure you’re making the argument you want to make.
You've merely confirmed the story, which is that the sample is too small to produce an accurate representation of Surface reliability. CR should have labeled the results as "insufficient data" instead of engaging in clickbait headlines.
Peace,
Gene