Consumer Reports Still Lives in a Bubble
February 8th, 2012

For several years, I have written at great length about the problems that arise when Consumer Reports tackles tech gear. Yes, the magazine has a stellar reputation, in large part because tested products are all bought at retail, the magazine accepts no advertising, and it doesn't even allow manufacturers to quote CR reviews in promoting their products. CR is also run by a non-profit corporation funded by sales, subscriptions, and even reader donations.
This veneer of incorruptibility means that most everyone takes the magazine’s ratings seriously. That would be a good thing if the reviews were thorough, and the ratings made sense. Quite often they do. So the information in the March 2012 issue about the excessively high calorie and fat count of the buttered popcorn you buy for inflated prices at the local multiplex shouldn’t come as a surprise, but maybe moviegoers will pay attention and seek healthier refreshments. The roundup of LCD and plasma TV sets accurately described the differences between the two technologies, and why plasma is often better unless you want the brightest possible picture.
But when it comes to such personal tech gear as smartphones, tablets, and particularly personal computers, CR falls down on the job. Way down.
A notable example is the curious way in which they handled the alleged "Antennagate" scandal, involving the original iPhone 4 and the possibility that you could kill reception with what became known as a "death grip." Despite all the visual evidence that a similar phenomenon could easily be duplicated on other phones when held in somewhat different ways, CR decided that only the iPhone 4 was at fault and, even though the phone earned the highest numeric rating among the smartphones tested, still wouldn't recommend it. CR was even oblivious to the manufacturer warning labels and printed documentation that cautioned against holding other makers' handsets the wrong way.
To be perfectly fair, yes, the iPhone 4S did get a recommendation, owing to its superior antenna system and relatively high test results. But other models got higher scores for having larger displays and/or 3D. It doesn't seem as if CR bothered to consider the usability of a larger form factor, how the gadget might fit in pants and shirt pockets, or the ease of one-handed manipulation. Larger must be good, period. CR also didn't bother to compare the user friendliness of the various smartphone operating systems, nor the various app stores in terms of the selection and quality of their software.
When CR reviews personal computers, it's not at all clear how closely they try to match the various specs, or whether the basics, such as display size, hard drive capacity, and memory, are sufficient for them to put products in the same overall category. Although CR is aware that Mac OS X and Windows are separate, distinct platforms, they do not actually compare the two in any meaningful way that would help you decide whether to go with Apple or with one of the Windows PC makers.
The same issue that carries those reports about movie theater snacks and high definition TVs also contains an article titled "Light & lively laptops." Here, CR seems oblivious to the difference between traditional notebooks and so-called "Ultrabooks," since they rate them all in the same category, defined strictly by display size.
Even then, Macs rate at or near the top in the 11-inch and 13-inch categories. Why a 13-inch Samsung scores a tad higher than a 13-inch MacBook Air isn't really explained, other than the former having a lower price. Curiously, CR compares two versions of the 13-inch MacBook Pro, which appear to be from different generations, rating one better than the other when it comes to something called "Versatility." How so? With CR, you never know, because they don't explain such fine distinctions.
At least on this occasion, the tested Macs were priced comparably to the Windows notebooks, except for the lower-rated models. And these ratings do indicate that, yes, you get more for your money at a higher price. That's good as far as it goes, but the ultimate questions of usability and OS elegance are off the table for CR, which only seems to understand specs and raw benchmarks. To them, it appears that there is no Mac versus Windows question, nor any reason why, for example, Apple is gaining sales while most PC companies are suffering from flat or declining sales. Certainly you cannot attribute this to the alleged Steve Jobs reality distortion field, since he's no longer here, yet Apple's sales are better than ever.
On the other hand, if I were in the market for a new vacuum cleaner, I suppose CR would be a good place to compare the various models, although most are at least good enough. For autos, CR isn't all that interested in the "fun to drive" factor, since they are more attuned to basic ride, handling, safety, and comfort issues. A car may be supremely comfortable in all respects, an efficient people hauler that gets you from here to there without guzzling a lot of fuel, yet be a total bore to drive. But that's not CR's market.
Now what’s unfortunate about all this is that CR seems tone deaf to the problems with their reviews. They aren’t asked the hard questions by a fawning media, and thus have nothing to explain. But with all the resources at their disposal, they should do a better job than anyone. Too bad they haven’t figured that out.
The problem with CR and its reviews is that the magazine is based on paper and printing. As a result, it can take many months before reviews are published. In the tech world, that can mean two generations of gear. When it comes to tech, CR is always seriously behind the curve.
They aren’t bad when it comes to reviewing refrigerators or lawn mowers (although I wonder why Sears products are always so highly rated when I don’t think they’re that good). But when it comes to tech gear, ignore them.
@Don108, They do have an online portal where they publish blogs and preliminary reviews. They posted a fairly quick evaluation of the iPhone 4s, but it doesn’t matter if the content of their reviews is deficient.
One area where I suppose CR does a good job is in publishing reliability ratings. But those are based on reader surveys, which are returned voluntarily and are therefore not terribly scientific. Still, if a particular product appears to be especially prone to trouble, that news is significant. The issue, though, is that the questionnaires are often so general that it's not at all clear whether problems reported with a specific make or model indicate a serious defect, or just some minor glitch you would have ignored if there hadn't been a question about it.
Peace,
Gene
My experience is that everyone thinks CR is great "except for [x]," where x happens to be the area of the person's own expertise.
I think, really, they aren't experts in much (if anything), but their guidance is generally enough to help you avoid some really flawed products, if not necessarily enough to truly guide you toward the best.
@Scott, Well, if you want to know whether the family SUV will tip over when you take a turn a little too quickly, yes, I suppose that's a good thing. But CR also refused to recommend the iPhone 4 because it exhibited symptoms that other smartphones suffered from as well. Unfortunately, CR's test method was too flawed to detect those symptoms on other models, evidently because it was focused on the way the Death Grip manifests itself on the iPhone. A real-world test would have been far better, but that's merely logical.
Peace,
Gene
Quantitative analysis is relatively easy. It involves unambiguous measurements that can be independently verified. But issues of quality, such as the consistency of the user interface, ease of use, esthetics, etc., are harder to measure and subject to opinion. Of course, it is in these areas that Apple excels. Here is where user satisfaction surveys could be of value.
Unfortunately, Consumer Reports has chosen not to venture into this area of analysis (to the detriment of their readers).
You are right: CR is hopeless as a serious guide to tech gear, as well as cars (cupholders are ranked so much higher than ergonomics or "fun to drive").
But I've also noticed that even where CR should excel (for example, when buying a toaster oven or a vacuum cleaner), the model they test is frequently either no longer available by the time CR reports on it, or a newer model has already appeared.
I gave up my CR subscription ages ago, and rely much more on web reviews and recommendations from magazines like Car and Driver, Cooks Illustrated, and webcasts like MacBreak Weekly. I also find consumer reviews on Amazon very helpful.
There are so many ways to triangulate recommendations these days, that CR’s narrow focus on features makes it the not-to-go-to source…
Sean
@Sean, I read a piece in Car & Driver this week that pointed to the handling and steering shortcomings of the latest cars from Hyundai and Kia. The author suggested that Hyundai and its sister company could learn a thing or two from Honda and Mazda. Meantime, Hyundai's flashy Sonata family car got high marks from CR for those very capabilities. With lots of Honda Accord experience, I took a test drive in the Sonata (the one without the turbo engine) and found that, yes, Car & Driver was on the money. The steering is somewhat sloppy, even on a straightaway, and I'm no automotive expert. A current Accord's steering is smooth, linear, and predictable, the result of decades of hard-won engineering experience. How could CR miss this?
Peace,
Gene
Consumer Reports lives in a bubble called ‘reality’.
If one lawn mower cuts the grass better than the rest, costs less AND has proven reliability, CR rates it higher. What else do you expect? It's a publication geared to consumers, not tech geeks.
As you pointed out, nowhere else can you get detailed reliability data on such a huge range of products, including computers. This is simply unavailable from any other (trustworthy) source. My family has relied on CR for decades.
However, I don't know anybody who thinks CR is "incorruptible" any more than they think Hondas NEVER break down. CR (and Honda) have solid reputations for good reason, but neither is perfect.
Finally, technology is a personal thing, and I think it is wholly unreasonable to expect perfection from CR. The Mac and iOS are more usable (in our opinion), but this is not necessarily quantifiable. A PC laptop with 5 USB ports IS more flexible than a MacBook with only two, regardless of your view of W7.
Respectfully
Geddy.
@GL, Claiming the original iPhone 4 was the only phone in their tests that was susceptible to a Death Grip is not reality. There's plenty of visual evidence online to demonstrate that pretty much all phones of the period suffered from similar symptoms. The reality is that CR was oblivious not only to this evidence, but also to the fact that some handset makers put labels on the "sensitive" regions of their products, and warnings in the instruction manuals about holding them the wrong way. How could CR miss that reality?
I also think it's not unreasonable to have user panels examine key features of Mac OS and Windows, and of iOS, Android, Windows Phone, etc., to see which ones score better from a usability standpoint; scoring overall preferences is also not a bad idea. Indeed, the people who are not "geeks" will benefit most from understanding that there are different platforms for personal computers and mobile devices that may or may not suit their needs. CR does none of this and hence fails the reader.
Peace,
Gene
I think GL has a point: CR is not for geeks or specialists of any sort, really. If you're really into some field, you'll know more about it than they do. I would not count on their reviews to get you the best product, but a product at the top of their recommendations probably won't disappoint the average consumer, and the items at the bottom usually do have flaws you'd want to avoid (iPhone debacle aside). So it at least leads people to reasonably safe bets. (But don't get me started on how they would rank loudspeakers by how close they could get them to flat frequency response by manipulating the bass and treble controls of a receiver… omg…)
@Scott, The so-called “geek” can easily look at the appropriate literature, or do their own tests to make decisions about PC and mobile platforms. They don’t need to read CR.
The regular person needs guidance, and CR fails in providing that guidance. Is the PC as safe a choice as the Mac? Is the iPhone as safe a choice as a Droid? That depends on many factors, and CR doesn’t explain any of them.
Yes, Scott, if CR is trying to flatten the response with tone controls, they clearly don't get it about loudspeakers. They need to spend a few hours in the same room with a loudspeaker designer to set their test engineers straight. Maybe I should have them call up Bob Carver. 🙂
Peace,
Gene
@immovableobject
Quantitative analysis is easy enough. RELEVANT quantitative analysis is hard, as hard as qualitative analysis. Is the best place to measure a speaker's response an anechoic chamber? Or a living room? But whose living room? Or, sure, water temperature is important to good coffee, but so are the speed of the brew and the distribution of water over ALL the grounds.
The problem is that in attempting to create an objective environment, you immediately introduce subjectivity the moment you decide one set of criteria is more important than another.
The other problem is, as Mr. Steinberg points out, the implicit "incorruptibility" or the more explicit "objectivity" attributed to CR. Freedom from undue influence does not automatically equate to accuracy or reliability, even if the measurements are verifiable.
Joe