Audio Asylum Thread Printer
I would like to see the discussion of the listening results in the review divided into two parts: 1. A purely descriptive discussion of what the reviewer heard on various pieces of music, and 2. A subjective rating system, used by all reviewers, at the end, to express how satisfied the reviewer was with the performance of the piece of equipment in their system.
I would also like to see a tighter relationship (dialogue) between the measurements and what the reviewer heard. There is often nothing like this, except for a sentence at the very end by JA.
"You don't need to be a Weatherman to know which way the wind blows"
Follow Ups:
1. I have somewhat reluctantly moved away from specific musical references over the last few years, choosing more general terms. This is because I, like a number of UK reviewers, have received a lot of criticism about musical references in recent years - it's a no-win situation: keep citing the same benchmark discs and you risk being labelled decades behind the times; use new ones and you are just trying to be 'street' and fashionable. Use popular music and you are 'trying to be an everyman'; use less well known material and you are 'showing off'. In addition, as we move into a more international market, we risk citing references that are unknown to many readers, and readers can fail to see the general trend behind a specific reference. For example, if I say that a particular loudspeaker thickens out the mid-range of a tenor singer, it's not as specific as saying "this loudspeaker thickens out Siegfried's voice, as portrayed by Windgassen in the classic Solti rendition of Götterdämmerung on Decca", but the response can be "I don't like opera, does that still count?" or even "I have the Barenboim/Teldec version. Does the problem still apply?" - particularly from those for whom English is not their first or even second language.

2. I have worked for many years on magazines that use subjective scoring, and I feel they create more problems than they solve. People tend to build systems on the basis of percentage points instead of compatibility and system performance; the same people struggle to reconcile scores with prices, no matter how elegant the scoring system ("is a $1,000 product at 78% better or worse than a $1,200 product at 77%?"). Scores inevitably creep upwards, and you pretty much end up with a lot of 95% products if you aren't unbelievably diligent about score-keeping (or you end up with a score of 128 out of a possible 100), and you quickly create an underclass of very good products that might not achieve the best score for reasons not connected to performance. This is a major problem with star-system scoring mechanisms: the Five Star review means a Four Star product is essentially dead in the water, even if the reason for its Four Star status is as trivial as only having four line inputs where the rest of the products under test have five.
I strongly disagree with closer links between listening test and measurement. Ideally, the two should have no point of contact until the review hits the page. Suggesting there should be greater dialogue between the two sides risks 'leading' the review. If the listener hears something that corresponds to a measured aspect of the performance... hurrah, we got correlation! If not... it's either down to the listener legitimately hearing something that didn't register on measurement, legitimately not hearing something that did register on measurement, or listener error. If it's the first two, it's far better to make this public than try to force them to change their findings to fit the measurements. If it's the latter, and it happens repeatedly, chances are they will be quietly dropped from the review roster. Generally, whenever I've worked with independent objective testing, I've found broad correlation between ears and meters at a better than 80% hit rate. I'd say of the times there was no correlation, it's mostly "we're missing something" and not "I messed up".
No-one's infallible. But I'd rather see the occasional foul-up in my own writing in print, than have to reject what I know I can hear because it doesn't tally with what can be measured.
-
Editor, Hi-Fi Plus magazine, Lun-duhnn, Ingerland, innit
Edits: 03/13/12
If it's the first two, it's far better to make this public than try to force them to change their findings to fit the measurements.
I think in essence this approach is likely to make me trust the review.
Keeping the subjective assessment and the objective measurements separate until the actual review is put on paper is, in my opinion, the best way to review a component.
navman
.
"In this land right now, some are insane and they're in charge. To hell with poverty, we'll get drunk on cheap wine."
... how does "said" product compare to other alternative products within various systems, under various systems, conditions, rooms, and ears.
Other than using a standard "reference" system as a benchmark, or a group test scenario, the above process would represent a more stringent and useful test for any product, or set of products.
But don't expect this type of review anytime soon.
Hence ...
Stereophile's and TAS's "recommended" component lists hint at this kind of comparative activity; placing a particular set of products in "classes" does provide an atmosphere in which all these products are tested against one another in order to be "classed".
Of course ... this is just an illusion.
tb1
Besides room interactions, magazines like Stereophile and TAS take potential consumers away from what should be their goal: system synergy. High end consumers buy 3 or 4 great individual components and will likely build themselves a crappy system.
It also causes the potential consumer to hear items out of context. Better reviews smack the reader over the head by repeatedly emphasizing the fact that there are many different "contexts" to plug the component into: some of which will necessarily be better than others. The "goodness" or "badness" of something like the BenchMark DAC varies greatly within different systems.
No one should care whether or not the BenchMark DAC is a good DAC. They should care about how well the BenchMark integrates with the other components in their system, room, and musical tastes.
One of my favorite examples of this was a pair of $30,000 Karma speakers being reviewed with $1500 piece-of-shit Musical Fidelity amps.
"In this land right now, some are insane and they're in charge. To hell with poverty, we'll get drunk on cheap wine."
On the one hand you suggest that components can be good or bad depending on the system, and therefore magazines should not declare a component bad or good, and on the other hand, you then declare the Musical Fidelity stuff pieces of shit, and the Rogue components on another forum 'muddy.' A case of do as I say, not as I do? I find it hard to believe you don't see the irony here.
Yes,
Within the realm of subjectivity, deductive logic isn't as present - but that doesn't mean there's a contradiction.
I attached enough qualifiers to my statements in the amp/preamp post that most folks would recognize that it couldn't have only been the amps that made the SYSTEM sound the way it did, and/or that other factors could've been at play to distort my viewpoint (i.e. I had one experience that was a "bad" experience, or one that I didn't like).
Actually, your post reminded me of a time when I heard Rogue amps at CES with some metal Gallo speakers and a Jolida player, and my friend and I thought that system sounded pretty good.
One can also deduce from my bad language that no one would ever buy MF amps with Karma speakers - especially amps that other folks in the industry noted were "light in bass impact." I recall that it was even mentioned in the review. Yes, I think we can all say, subjectively at least, that there are going to be components out there that are "bad" enough to influence the system.
Of course I said things with perhaps too much drama. But it's all art. I hope that my point wasn't lost: that sometimes one can get a "bad" system by combining 4 great individual components that do not work well together.
Thank you for answering my post, and thank you for triggering a memory of the Rogue/Gallo system that I heard. And, yes, you are correct in your assessment.
"In this land right now, some are insane and they're in charge. To hell with poverty, we'll get drunk on cheap wine."
Whether the market could support an audio publication with a narrower focus I do not pretend to know, but what I would most appreciate in an audio publication is one that stayed focused on equipment and equipment reviews, and did not incorporate non-audio products, lifestyle content, or opinion-oriented articles and commentary outside of audio equipment and reviews; there are plenty of publications which offer those things and do them better.
Edits: 03/04/12
comparing products being reviewed to products six times their price.
That would be helpful.
"Lock up when you're done and don't touch the piano."
-Dr. Greg House
> if only stereophile could stop comparing products being reviewed to
> products six times their price. That would be helpful.

Does that happen very often? We do do a lot of comparisons, and do try to put
the component being reviewed in its context. But this is not as
straightforward as some believe because you can't do meaningful comparisons
with products with which you are not familiar. Take my review of the
Musical Fidelity AMS100 amplifier, linked below. This amplifier costs
$19,999. In the review I compared it with two amplifiers, the sonic
signatures of which are very familiar to me, the Classe CTM-600 monoblock
($13,000/pair) and the MBL 9007 monoblock ($42,800/pair). Neither
competes directly on price with the Musical Fidelity, but as they bracket
the price of the amplifier under review, I felt that the comparisons were
both valid and useful.
John Atkinson
Editor, Stereophile
Edits: 03/02/12
But I think it is okay.

What really irks me is the "review within a review" tactic. You know the drill: "...but it wasn't until I inserted brand x cables that things really came into focus...blah blah blah."
Nice "product placement" but a bore.
Edits: 03/02/12
Readers are quick to criticize reviews in which there is not full disclosure of all the details of the test system context, and rightly so. In the context of an amplifier or speaker review, cables are ancillary support items, but essential ones.
However, the reviewer is pulled in two directions when comparing a product under test with competitors. The more "scientific" direction is to compare with everything else in the system maintained constant. The other is to optimize, as much as possible, the performance of each product by switching support devices (like cables). Surely, the reader would like to know about the latter, as well.
If police officers protecting each other is called a "blue wall," I'd like to know what color the Stereophile wall is.
"Apparently, people now believe that mental telepathy is the foundation of communication and magic is the source of daily events. Consequently, we no longer have to participate in our own lives."
That's your response? I expect more, but you never know.
"Apparently, people now believe that mental telepathy is the foundation of communication and magic is the source of daily events. Consequently, we no longer have to participate in our own lives."
> What really irks me is the "review within a review" tactic. You know the drill:
> "...but it wasn't until I inserted brand x cables that things really came into
> focus...blah blah blah."
We try not to do that. In general, the only products that can be mentioned
in a Stereophile review - and especially used for comparison - are those that
have already been reviewed and are therefore known to the reader.
John Atkinson
Editor, Stereophile
...ideally it should be compared to a similarly priced, highly rated component of its type, but that isn't always, or even often, possible.
A comparison to something much more expensive can help you decide whether the difference in performance is worth the difference in price - and whether the less expensive DUT is a bargain.
"...nearly as good as components six times the price."
It's a great way to point out where pieces of gear succeed or fall short on the price/performance curve.
At least many of those who write for Stereophile. Group tests are more popular in the UK and, I agree, more helpful to the average hi-fi enthusiast.
However, the group reviews I have read by writers for UK magazines are a lot shorter and more factual in their comparisons.
So, how do you get a "dilettante" reviewer to use a budget reference that isn't personally enjoyable? The review of the test component might actually be worse than if it were compared to a much better piece of equipment.
"You don't need to be a Weatherman to know which way the wind blows"
The perspective is what we need, or a reasonable review of $1900 speakers might be confused with a reasonable review of $5000 speakers.
P
As I slowly slip into the dark cesspool of audiophalia neurosis. . . .
My speaker building site
Hi, mbnx01,
It never made any sense to me that a product is compared to something completely out of its league. Are they just uncomfortable comparing similar products? It seems that it's easier to say things like "it sounds great for the money" or "it sounds great for less than a third the price" than to have to compare a product to others in the same price range. I respect a review much more when they use similarly priced products for comparison.
Regards,
Tom
I bought two speakers based on very good reviews. I found them no good after bringing them home. They were famous names, so the chance of their being of poor quality control was remote, even though both were made in China. The problem could have been the source and amp that I was using. The reviewer had sources like Mark Levinson, but I had a low-end NAD.
Now I want to buy a Harbeth P3ESR because of the great 'affection' shown by reviewers. But I have only a humble NAD to drive it with. My point is that there is a labyrinth of equipment where one can easily get lost if reviews are followed. I don't think the reviewers have the time or the will to guide prospective buyers through this maze to salvation.
Cheers
Bill
Bill
If you have a local dealer, take your amp in and go listen to the speakers with the amp you're going to use. I haven't listened to the P3ESRs, but the Compact 7s were very impressive. I can't say that I would spend that kind of money unless I could listen to the equipment myself.
When they discover the center of the universe, a lot of people will be disappointed to discover they are not it. ~ Bernard Bailey
...there are only so many associated components a reviewer can try with a DUT.
Hopefully, it's more than one and of a different configuration.
But there is no way a reviewer can find THE best synergistic amplifier for a pair of speakers under review, for example.
Unless you have a similar amp to the one used in the review, your results will vary.
So no matter how much a reviewer likes a particular product, or how high he rates it using whatever rating scale one can develop, there is no substitute for listening yourself and working with a good dealer.
Too bad they are so few and far between these days.
If only audiophiles had been religiously patronizing them instead of AudioGon over the past 10 to 20 years, things might be very different now.
Here is an interesting rating, up for discussion, from SonicFlare.com
These are the kinds of terms I don't understand. What the hell does "emotional" have to do with audio equipment? It could mean anything. By way of contrast, words like "grainy" or "dark" or "liquid" evoke, for me, a certain kind of sound.
The emotion comes from the music and the musicians, or more precisely, from some music, some musicians, some of the time. When this happens in a live concert it can be magical. When it happens through the medium of a recording and playback system it can be equally magical. But this isn't going to happen if the recording or the playback system don't measure up to a certain standard. So, for example, if a bright recording is played back on a flat or brighter than flat system the resulting screeching may cause pain, and the listener might be left with the wrong emotion: pain rather than joy or bliss.
One can judge a system by the fraction of reference recordings that allow the emotions in the music to pass through unimpeded. That to me is what "emotional" means in the context of playback systems.
Tony Lauck
"Diversity is the law of nature; no two entities in this universe are uniform." - P.R. Sarkar
I can be extremely moved by a symphony on a car radio, and my favorite recordings of all time are the Schnabel 32, which are 78's. Of course, I'd rather listen on a good system, and in my experience, doing so can magnify the experience in many ways -- more beautiful, more exciting, more emotional. But it really depends. I could listen to a superb recording of a work I didn't much care for on a superb system and not be moved. Or a bad recording of a powerful work on a bad system, and be moved powerfully. Or a good recording of a not as great work on a good system and also be moved -- the Telarc recording of the Organ Symphony that I used to blast on my 1-D's, say. All of which makes the term problematic for me, because emotionality isn't an intrinsic quality of the speaker. Accuracy, sure. But emotionality seems too subjective, particularly since I'm not sure that emotionality and fidelity are inevitably linked. On some pop recordings, for example, exaggerated bass, or boomy midbass, might prove emotionally engaging.
What I do understand is J. Gordon Holt's "goosebump test."
Yes, the "goose bump" test works. Or in the case of a recording I played recently, the "tear" test. The orchestra and singing were so beautiful both sonically and musically, that I found myself crying.
Tony Lauck
"Diversity is the law of nature; no two entities in this universe are uniform." - P.R. Sarkar
Yes, the tear test works as well, although I don't think it's a very good way to sell speakers. :-)
I still remember discovering the great Klemperer EMI recording of the St. Matthew Passion, listening to it on my 1-D's. Talk about emotionally overpowering, every time I listened I sobbed all the way through.
I'll admit it. More than once, Bonnie Raitt singing Richard Thompson's "Dimming of the Day" has done a number on my tear ducts.
See ya. Dave
...as audiophiles - a closer emotional connection to the music.
That's why we keep changing equipment and spend so much time and money at our hobby.
That is a very subjective quality which will be extremely difficult, if not impossible, to convert into some objective rating scale.
Which is why we have all of the prose equipment reviews.
nt
I've seen what those types of rating systems have done for wine doofuses and it ain't pretty.
I figure if I can't figure out what a reviewer thought of a piece of gear by reading the review, then that ain't something a letter or number grade will fix.
The ongoing bottom line is that a review should lead one to an audition, not a single blind purchase.
A sad trend in the hobby (generalizing here, and no offense to anyone in particular) is the movement away from audiophiles assigning any monetary value to getting to a dealer or a show to actually touch and hear the stuff we buy before buying it.
I saw a poll at another site and something north of 60-ish percent of gear was bought sound unheard - by freaking "audiophiles!"
We are so cracked, no reviewing system will be able to put us back together again.
Tee hee.
JM
They'll be lined up three deep to grab the Sansuchi while the Ahamay goes begging.
It could be a Buridan's Ass situation. :-)
Tony Lauck
"Diversity is the law of nature; no two entities in this universe are uniform." - P.R. Sarkar
It has been pointed out to me many times that dealers are disappearing, so people feel they're forced to buy based on reviews. I personally don't want people to buy anything based on my review - I want them to audition the stuff I like, because my hope is to save people time and possibly money.
I am personally more interested in the affordable end of the spectrum - finding superior gear at sane prices. Rich people can fend for themselves, since if they buy a $100,000 amplifier, chances are that's just a "blip" in their bank statement. Kind of like the Yankees - if they buy a free agent at $20 million a year and he gets hurt, they just pick up the phone for the next one.
Normal people can't make those mistakes.
Numbers are not so useful on an individual review basis, but they could be if you looked at reviews over several years. Was XYZ speaker better in soundstaging than ABC speaker? Well, the same reviewer gave the former a 5/10 and the latter a 9/10, so you really don't have to re-read both reviews all over again - you've got a pretty clear indicator from the numbers.
There are problems with numbers - one reviewer might give out 9/10s like candy while another reviewer may give something a 7/10 but actually like it more.
I think numbers can work, but they really need to be very clear as to what the grade represents - what is an A or 5/5 or 10/10, why is a 9 not a 10, and why is it better than an 8. Otherwise reviewers can't get on the same page, and readers might think a 7 is bad when in reality it could be very high.
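As an aside - and purely as an illustration of that calibration problem, not anything any magazine actually does - here is a minimal sketch in Python of one way to put reviewers on the same page: rescale each reviewer's mark against their own scoring history before comparing across reviewers. The reviewer names and score histories below are made up.

# Hypothetical example: normalize each reviewer's scores against their own
# history so a "tough" 7/10 and a "generous" 9/10 can be compared.
from statistics import mean, pstdev

history = {
    "reviewer_a": [9, 9, 8, 9, 10],   # hands out 9/10s like candy
    "reviewer_b": [5, 6, 7, 6, 5],    # a 7 from this reviewer is high praise
}

def calibrated(reviewer, raw_score):
    scores = history[reviewer]
    mu, sigma = mean(scores), pstdev(scores) or 1.0
    return (raw_score - mu) / sigma   # distance above/below that reviewer's norm

print(calibrated("reviewer_a", 9))    # about 0.0: merely average for reviewer A
print(calibrated("reviewer_b", 7))    # about 1.6: well above average for reviewer B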
For years I subscribed to Consumers Report and never bought anything they rated highly. I decided ultimately that they were worthless and that Amazon's consumer ratings were more valuable.
I see nothing wrong with multiple listeners rating a speaker, amp, etc., but given what I have seen here and elsewhere, there is little agreement. If we ended up with ten speakers with four-star ratings, what would a consumer conclude? He or she is on their own - or would they give greater trust to the guy who gave out several one-star ratings? Road & Track often does this also, with some makes often getting the highest nod without much that would distinguish them from the others.
Basically, I cannot imagine anything better than we have now.
But it would be nice to get some more consistency between reviewers.
"You don't need to be a Weatherman to know which way the wind blows"
I look first of all for reviewers who hear what I've heard on equipment I own. When they do, it gives me faith in both my judgement and theirs, since it's unlikely that we'd both notice the same phenomenon as a matter of chance. (With the obvious caveats: the phenomenon must be a bit unusual, and not a common stereotype.)
Then, I look for a reviewer with tastes similar to my own. I've seen two reviewers give essentially the same description of the sonics of a component, but reach different conclusions about its relative worth. I don't think there's a right or a wrong here, since most components are imperfect. Or maybe it's that I don't think there's a right or wrong as long as the reviewer's criterion is fidelity to an acoustical performance, rather than to studio creations.
It's that their meaning is often hard to understand.
"You don't need to be a Weatherman to know which way the wind blows"
You do have to read between the lines a lot.
"For years I subscribed to Consumers Report and never bought anything they rated highly. I decided ultimately that they were worthless"
That would be a tautology...
Of course it's a waste of money to buy advice you plan to ignore!
I, on the other hand, still enjoy the lawnmower I bought solely on their recommendation decades ago. I live in a wet climate and I wanted something that would bag wet grass without choking for a change. I now own the #1 rated wet-grass-bagger, and let me tell you it's fantastic - it can pack so much soggy grass into its crop that it takes two men and a small boy to empty it!
Rainy day Rick
I spent my youth in Germany and in the '80s there was a mag that did just that.
Every review was a group test and the listening was done blind by a group of reviewers who graded the items according to their own ears.
Whoever did the write-up did not participate in the listening tests, and the listeners did not know any of the measurements. The magazine's listening room was modelled on the average German living room.
In the end everything was correlated and the tested items given a score out of 100.
Made it easy to match amps with speakers, tuners etc of similar ability.
...back in the early 1980s there was a publication called High Performance Review, which Stereophile reviewer Larry Greenhill was part of. They used the same pieces of music in each review to help illustrate the DUT's sonic signature, along with measurements and graphs.
Unfortunately, the writing was pretty structured and came across very dry and uninteresting.
Writing in TAS back in the 1990s, Tony Cordesman (AHC) used a laundry list of criteria in each of his reviews for a while. Again, it got to be pretty repetitive and boring, so he dropped it.
Martin Colloms developed a point scale which he has used in his reviews. Then he modified it which made all of his prior reviews using the old point system impossible to compare with the new one.
The problem with a "subjective rating system" is that each reviewer, like each reader, values various aspects of the musical reproduction differently (listening biases).
It's difficult to improve on Stereophile's prose reviews, measurements and then ranking in the different classes in the periodic RCL.
Personally, I want the reviews to be interesting and well written. I read them for entertainment, and if I were interested in buying a component, a guide to help me make a short list from.
Enjoy The Music.com reviews used to include a rating system in chart form. The rating system could be expanded to include a brief description of why the product got a certain rating. Here's an example (converted to text) from a Dynavector Karat 17D3 review...
Tonality 90
Sub-bass (10 Hz - 60 Hz) 95
Mid-bass (80 Hz - 200 Hz) 95
Midrange (200 Hz - 3,000 Hz) 95
High-frequencies (3,000 Hz on up) 95
Attack 95
Decay 95
Inner Resolution 85
Soundscape width front 85
Soundscape width rear 85
Soundscape depth behind speakers 85
Soundscape extension into the room 85
Imaging 90
Fit and Finish 85
Self Noise 95
Value for the Money 95
Regards,
Tom
I rather like that. Not because I could necessarily choose something based on it, but because it does a pretty good job of reporting some important sonic characteristics and common strengths and weaknesses. As long as it doesn't become some kind of US News college report -- a list like this doesn't touch on all attributes (is it ugly? big? does the treble beam? etc.), and even if it did, the neural net is better at judgment than a simple linear weighting.
1. I bought it
2. It is among the very best components of its kind
3. It was very enjoyable in my system
4. It did not work well in my system
But I am not clever enough to invent more subjective categories, since the default above would always be 3.
So, maybe it wasn't such a hot idea as I originally thought.
"You don't need to be a Weatherman to know which way the wind blows"
...Sound and Vision uses?
Might be closer to what you want and more manageable.
I had a rating system that I made up in 2004 over at audioreview.
I modified three rating systems and blended them together - Stereophile's LLF categorizing and enjoythemusic's numerical system (the third is mentioned below).
What I was planning, though, had 11 categories, each out of ten. Each reviewer could add modifiers to what they felt was most important.
So if a reviewer placed a premium on soundstaging and awarded the speaker an 8/10, he could add an X2 or an X3 modifier so it would weigh more heavily on the overall mark - while I as a reviewer would not place any modifier on that, but would have an X4 modifier on cohesiveness, because if it doesn't sound "of a piece" then IMO all is lost (which hurts most large multi-way speakers), so they really have to be good everywhere else to make up the difference.
But this allows the reader to know instantly what the reviewer values.
The overall score was out of 200 and a percentage would be calculated. And speakers, for example, were compared against ALL speakers at all price points. So if an Acapella Violoncello scores 90%, a Paradigm Atom might get 22%.
But I had a further factor involving set price ranges - so at $200, if the expected percentage or average was 15% and the Atom scored 22%, it would be well above average for the price, and so it would be awarded a Best Buy or Recommended tag (the third magazine was Hi-Fi Choice).
If I ran my own magazine I would incorporate this - but I don't.
And speakers that got 90% or above would be classed as world-class speakers regardless of price - though they may not be awarded top marks in the value-for-the-dollar camp.
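For what it's worth, here is a minimal sketch in Python of the arithmetic described above. The category marks, the X2/X4 modifiers and the 15% price-band figure are all hypothetical, and for simplicity the percentage is taken against the weighted maximum rather than against every speaker ever reviewed, which is not quite the full system.

# Hypothetical sketch: categories out of ten, reviewer-chosen multipliers,
# an overall percentage, and a price-band check for a Best Buy tag.
scores = {                     # made-up category marks out of 10
    "soundstaging": 8,
    "cohesiveness": 7,
    "bass extension": 6,
    "dynamics": 7,
}

modifiers = {                  # reviewer-chosen weights; default weight is 1
    "soundstaging": 2,         # an "X2" modifier
    "cohesiveness": 4,         # an "X4" modifier, as in the post above
}

def overall_percentage(scores, modifiers):
    earned = sum(scores[c] * modifiers.get(c, 1) for c in scores)
    possible = sum(10 * modifiers.get(c, 1) for c in scores)
    return 100.0 * earned / possible

pct = overall_percentage(scores, modifiers)

expected_at_price = 15.0       # illustrative average for the price bracket
tag = "Best Buy candidate" if pct > expected_at_price else "no tag"
print(f"{pct:.0f}% overall - {tag}")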
I worked on it a few years back. The idea was that one could put this at the bottom of a review, and you could see the score and a visual that would quickly show the positives and negatives. Note the lack of an overall score.
I still like it, but I still think about the reviewers and wine and the overall scoring. I know JA and a number of people don't like this idea, but if you look at the core value of a score - this is where this reviewer places this item at this point in time - it works.
I think in wine the score is given at a set point and at a specific time. So why not with equipment? We know that X Merlot was a 91 back in 2004, and we also know it may be a 97 or an 83 today. Scoring tells us what that reviewer thought at that moment and forces them to place this item in a line against other items they have reviewed. Is it better or is it worse? I don't care if it was against an amp from 2005 (or even 1968) or not. If you think this one is better, then put the number down. It will say a lot about the reviewer and the equipment. And just like wine, one reviewer may give it a 91 and another a 72. Adds value to the review.
P
As I slowly slip into the dark cesspool of audiophalia neurosis. . . .
My speaker building site
Hi, P,
I like the concept and visual representation of spider graphing, but the graphs aren't as useful with disparate categories of ranking. That is, if the categories included in the web do not have context or relevance to each other, it is more difficult to interpret what is being conveyed. In the graph you provide, "Setup ease" is sandwiched between "Image size" and "Forwardness" and has no relevance to its neighbors, as one example.

If, however, the categories do have context, then the graph allows the viewer to quickly see the relative qualities of the component. For example, if four broad themes with applicable categories were provided, there would be a visual representation of the overall characteristics that could be easily compared to other components. Following are some themes and categories borrowing from your example, with possible additional categories. These are only quickly borrowed examples and not thoroughly considered recommendations:
Soundstage/Presentation: Image Size - Lack of Harshness - Warmth - etc.
Frequency/Dynamics: Treble Extension - Bass Extension - Tonal Balance - Dynamics
Detail/Clarity: Overall Detail - Treble Clarity - Bass Clarity - Separation of Instruments - etc.
Operation/Aesthetics: Efficiency - Ease of Setup - Attractiveness - etc.

The more I think about your suggested rating format, the more I like the possibilities!
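Purely as a sketch of how such a themed spider graph might be drawn in Python with matplotlib - the 0-10 ratings below are invented, and the ordering simply follows the themes above so neighboring spokes stay related:

import numpy as np
import matplotlib.pyplot as plt

# Made-up ratings, ordered so categories from the same theme sit together
categories = [
    "Image size", "Lack of harshness", "Warmth",                        # Soundstage/Presentation
    "Treble extension", "Bass extension", "Tonal balance", "Dynamics",  # Frequency/Dynamics
    "Overall detail", "Treble clarity", "Bass clarity",                 # Detail/Clarity
    "Efficiency", "Ease of setup", "Attractiveness",                    # Operation/Aesthetics
]
ratings = [8, 7, 6, 9, 6, 8, 7, 8, 9, 6, 5, 9, 7]

angles = np.linspace(0, 2 * np.pi, len(categories), endpoint=False)
theta = np.concatenate([angles, angles[:1]])      # repeat the first point to
values = np.concatenate([ratings, ratings[:1]])   # close the polygon

fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
ax.plot(theta, values, linewidth=1)
ax.fill(theta, values, alpha=0.25)
ax.set_xticks(angles)
ax.set_xticklabels(categories, fontsize=7)
ax.set_ylim(0, 10)
ax.set_title("Hypothetical component, categories grouped by theme")
plt.show()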
Regards,
Tom
Edits: 02/26/12
I worked on that for a while, and the last version is a little different, but I did not have the time this morning to redo the pic. The original idea was just what you said. The problem would be the looooooong debate over what attributes should be listed. To my mind this covers a lot of it, but I am sure others would have more. The key is that the graph cannot have too many points, or the purpose - to quickly see highs and lows - gets mushed. I really wanted the chart to have a top section with one set, like listening, and a bottom set, like setup. I added the average, but I really don't think it adds value here. I would rather force the reviewer to pick a number that fits into their lifetime of reviews.
P
As I slowly slip into the dark cesspool of audiophalia neurosis. . . .
My speaker building site
nt
As I slowly slip into the dark cesspool of audiophalia neurosis. . . .
My speaker building site
Since it is purely based on what the reviewer experiences in their system and room, any reviewer should have no trouble completing the checklist. I think 0-50 is a little too much, though; 1-10 is plenty and less intimidating.
Bass Clarity
Midrange Clarity
Tweeter Clarity
Freedom from Distortion
Bass Extension
Tweeter Extension
Low-Mid-High Balance
Image size
Forward/back image placement
Separation of instruments
Overall Detail
Dynamics
Regards,
Geoff