Let me put this in perspective. Consumer Reports magazine, supposedly an incorruptible source of product reviews and advice, is often at odds with other product reviewers. CR more or less implies that they have the advantage over other publications by dint of the fact that they buy all the products they test, usually anonymously. That way they cannot get a “ringer,” a product that may be specially modified by the manufacturer and thus is not a true example of what customers will get when purchasing the product via the usual channels.
That said, having reviewed consumer products for over 25 years, I have never encountered any evidence that I received anything different from what a regular customer would buy.
I will also grant that CR, when publishing reviews that involve a subjective factor, such as the sound quality of an audio system or the picture quality of a TV set, may not reach the same conclusions as others.
Take my VIZIO SB3621 sound bar, the 2017 version, which comes with a wireless Bluetooth subwoofer. According to CNET, it’s a game-changer, matching or exceeding gear costing twice its average $150 purchase price. CR tested similar sound bars from VIZIO, but none of them rose beyond a “good” rating, though they still ended up among the top ten. CR’s description of the sound quality of a similar sound bar, the SB3821, cited many flaws. But I’ll grant that the two models are different: models with a “38” designation are 38 inches wide, whereas the one I have is 36 inches wide. While they ought to be similar, they appear to come from different years, and that could also account for some of the differences beyond the subjective factors.
CR’s review of the TV I own, a 55-inch M-Series, gave it a lower rating than CNET awarded; CNET considered it one of the best sets they’ve tested at its price point, and even when compared to gear costing a lot more.
Long and short: is CNET simply less critical, or are its priorities different? Slightly different models and/or firmware versions may also account for some of the differences in performance.
Now when it comes to Apple’s HomePod, CR has published its preliminary review [1], which includes listening tests by a panel in a dedicated listening room of unspecified size and design.
But for Apple’s smart speaker, that shouldn’t matter. Most reviews gave it high marks, citing amazing sound quality for its size in different listening areas. One review compared it to a KEF studio monitor costing almost three times as much. Another reviewer, as quoted in AppleInsider [2], made detailed measurements and came up with results that were about as flat as any speaker gets across the frequency range. The reviewer who conducted those tests characterized the highs as “exceptionally crisp” and the bass as “incredibly impressive,” meaning tight and natural. Take note of this.
Now columnist Kirk McElhearn is very much into a variety of musical styles, from classical to jazz to rock. He listens carefully, and he’s someone I take very seriously. His experience with a brand new HomePod was very different from the crowd’s. Sometimes it didn’t sound so good: muddy and bassy. He tried it with a TV set, and it failed on all counts.
CR’s first look was more measured. Pitted against the $400 Google Home Max and the $200 Sonos One, the HomePod received a “very good” rating, but the other two were still judged superior. To CR, the HomePod’s bass was “boomy and overemphasized,” the midrange was “somewhat hazy,” and the treble was “underemphasized,” the equivalent of sounding distant or less clear. Doesn’t sound very good to me. But the magazine also concluded that all of these smart speakers fell “reasonably short” of regular wireless speakers.
Again, we know nothing of the design of CR’s listening room or the abilities of its listening panel, though I would hope they are skilled at such tasks. In large part, CR’s results weren’t far off from Kirk’s. But the reviewer quoted by AppleInsider, as indicated above, came up with decidedly different conclusions.
I’d normally be skeptical of CR’s results, since it wouldn’t be the first time they differed from the consensus. But because the listening results were similar to what Kirk reported on some musical material and TV shows, I take them seriously too.
It’s quite possible much of the disparity is simply due to the fact that different people hear things differently, and they are entitled to their preferences. I suppose it’s also possible that the HomePods that CR and Kirk tested were defective in some fashion, or that the software needs to be adjusted to better configure the system to deliver consistent sonic qualities in a wider variety of listening environments.
And don’t forget the problems CR had when testing the 2016 MacBook Pro with Touch Bar. Battery life results were all over the place until Apple got involved, and an obscure caching bug was discovered, triggered when CR used Safari’s Develop menu to turn off that feature. Once the bug was fixed, the MacBook Pro passed the retests and garnered high ratings from CR.
Since Apple claims that the HomePod can adapt itself to a variety of listening areas, maybe they’ll enter the fray to see what’s up. Or maybe all this is due to the fact that no active speaker system can be the perfect solution for every listening area. That’s one reason I suggest you listen to a speaker before you buy it, if you can. Either way, if it doesn’t work the way you want, don’t assume it’ll be fixed eventually. Nothing wrong with returning a product that doesn’t satisfy you, even if it’s from Apple.