Our piece on that subject has stimulated some heated replies, here and elsewhere. That wasn't entirely unexpected. Any concisely argued position lends itself to a polarization of opinion. Because our position has been stated already, the only thing worth adding is this: we have published our share of critical reviews, and anyone not predisposed to denying their existence can easily find them. That's not the point. The point is that there already have been -- and predictably will be again -- highly favorable reviews elsewhere of the very products we criticized here on the moons. That's worth stressing again. As I said, a reviewer should only feel compelled to pen a so-called bad review if he's sure beyond a reasonable doubt that the product is mediocre or flawed per se (and not just performing in a handicapped fashion due to poor synergy or apparent listener bias).


Since such bad reviews do crop up here and there in the audio press, we are left to believe their writers were beyond any such reasonable doubt. How, then, does one reconcile the very favorable reviews those same products receive elsewhere? Examples of such polarized reviewer findings are legion. 99% of the time, negative findings must remain qualified as a function of listener bias and the limits of a reviewer's ability to create enough varying system contexts. Unless a manufacturer brought a product to market prematurely, failed to do sufficient beta-testing or dispatched malfunctioning review units, it should be an extremely rare occurrence that a group of independent reviewers working for different publications would all arrive at an identical finding of "below par and beyond any possible recommendation". If the requirement for a bad review to really stand were confirmation by multiple publications and writers -- a de facto but unorchestrated group consensus, in other words -- how many bad reviews actually exist that can be considered confirmed and fully deserved?


Measurements such as those published by Stereophile and SoundStage! can certainly shed light on a product's test-bench performance. Recent high-profile cable and speaker measurements clearly called those products' engineering into question. Alas, the subjective review findings bore little if any resemblance to the criticisms implied by the measurements. A hi-tech cable clearly measured non-linear, yet the reviewer praised its sonics. A top-brand flagship speaker measured rather mediocre, yet the review was glowing. Variations on this theme are as common and old as the appearance of measurements in subjective audio reviewing. A reader keeping all of the above in mind can only conclude one thing: outside of basic information and entertainment, reviews can only serve their additionally hoped-for function -- of halfway reliably suggesting a product for acquisition or not -- if the reader has followed a particular writer's work for a while; has had the opportunity to test personal findings against review findings; and possesses compelling evidence that the reviewer is consistent in "calling apples yellow and red, sweet and sour, small for their kind or large, pricey or a steal". Even if one's personal system of categorizing apples were different, as long as the reviewer remained consistent and one had learned his sorting system, his writings could become a useful tool to pre-qualify components for closer personal investigation.


Expecting anything more out of audio reviews or reviewers is, to my mind, delusional. Any publication claiming a higher level of objectivity, truth or infallibility -- beyond consistency applied without prejudice to whatever comes through a reviewer's room -- strikes me as a bright blue apple. I've never yet seen or tasted one. Anyone reading us will know that we claim no such thing for ourselves. We're just another ordinary apple. You have to decide whether we taste good or not and whether our writers are consistent in applying their personal yardsticks to the components they review.