Metal guitar recording methods used by the majority are far from optimal?

  • Thread starter: Kraku
The simple reason they use 4x12s with V30s is that they like the sound.

Perhaps the sound of a mic'd cab is preferred in a mix vs. an accurate 1:1 of the same cab in the room. I don't know if you can assume a perfect replication would be better.
Perfect replication would be the optimal way of doing things for studio work. What-you-hear-is-what-you-get. That's always the best way when doing sound design. If you can't hear what you're doing, that complicates everything a lot and makes things really slow and cumbersome. So the logical way around that is to figure out how to get that what-you-hear-is-what-you-get situation.
 
I don't think it's the same as "only what the cabinet puts forward matters". I think that 99.999999999% of what audience members, and an only slightly smaller percentage of what guitar players themselves, ever hear as a "guitar sound" is the transfer function of a mic'd cab.

Listening at a show big enough that they need to mic the cymbals? You are listening to a guitar amp thru a mic, thru the channel FX, thru the group/master FX (or the transfer function of these first three), and finally whatever coloration the reproduction speakers have.

Listening to your favorite song on Spotify thru your earbuds? You are listening to a guitar amp thru a mic, thru the channel FX, thru the group/master FX (or the transfer function of these first three), and finally whatever coloration the reproduction system has (your earbuds, any artefacts of mp3 data reduction, etc.).

Only when you are sitting there in your space playing thru an amp and cabinet do you hear the actual cabinet sound, and that sound is not only totally unfamiliar in quantity to what your audience thinks you sound like, it's also usually pretty unfamiliar to what you expect a guitar to sound like at the point it matters: the final delivery to the audience.
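If it helps to see that "transfer function of these first three" idea written down, here's a rough toy sketch (made-up impulse responses, and it treats the FX as linear, which real compressors obviously aren't):

```python
import numpy as np

# Toy impulse responses standing in for each stage of the chain.
# These are random placeholders for illustration, not real captures.
rng = np.random.default_rng(0)
ir_mic     = rng.normal(size=256)   # mic + its position on the cab
ir_channel = rng.normal(size=256)   # channel FX, approximated as linear
ir_bus     = rng.normal(size=256)   # group/master bus processing, likewise

# The three stages collapse into one combined transfer function...
ir_combined = np.convolve(np.convolve(ir_mic, ir_channel), ir_bus)

# ...so what the listener gets is the cab signal run through that one IR,
# plus whatever the playback speakers or earbuds add on top.
cab_signal = rng.normal(size=48000)          # stand-in for the raw cab output
heard = np.convolve(cab_signal, ir_combined)
```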
 
Perfect replication would be the optimal way of doing things for studio work. What-you-hear-is-what-you-get. That's always the best way when doing sound design. If you can't hear what you're doing, that complicates everything a lot and makes things really slow and cumbersome. So the logical way around that is to figure out how to get that what-you-hear-is-what-you-get situation.
I think it's true that in many cases, if we really love the source of the audio, we definitely want to capture it 10000%.

But let's say pop vocals: you want nothing even slightly resembling the actual sound of that artist's voice in the vast majority of cases. The dynamic range alone would make it unusable in a modern mix. And then think about the insane volume swing of a palm-muted guitar vs. some chords.

So many albums today are mastered to around -8 LUFS. That is far less than a four-bit dynamic range. There's no way that chug vs. chords is being represented in four bits.

There are some free bit reducer plugins out there, and some DAWs come with them (not sure what the rules are exactly here about company affiliations and disclosures, so not being too specific). It's a really eye-opening experiment to take what seems to be a decent-dynamic-range mix and start reducing. It's frightening how, except maybe for some fade-outs, you can easily get to 8 bits and not be able to successfully ABX between that and the original. And anything at the top of the Spotify pop charts will go much lower.
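If you want to try that experiment without hunting for a plugin, a bare-bones bit reducer is only a few lines. This is just a sketch (uniform quantization, no dither, helper name made up for illustration), not any particular plugin:

```python
import numpy as np

def reduce_bits(x, bits):
    """Quantize a float signal in [-1, 1] to the given bit depth (no dither)."""
    levels = 2 ** (bits - 1)          # e.g. 8 bits -> 128 steps per polarity
    return np.round(x * levels) / levels

# Example: quantize a test tone to 8 bits and compare against the original.
# In practice you'd load your mix as a float array instead of a sine wave.
t = np.linspace(0, 1, 44100, endpoint=False)
x = 0.5 * np.sin(2 * np.pi * 440 * t)
x8 = reduce_bits(x, 8)
```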

TLDR: it is IMPERATIVE for the recording engineer to be able to pass a WYSIWYG signal, so they know and can account for any change they make, among a billion other reasons, but likely the sound the audience expects to come out of the speakers bears as much resemblance to the original as any of the Wings Of Pegasus cases resemble the un-pitch-corrected vocals claimed by the shills.
 
You're idealistic but your sig line tells me you will eventually be more realistic once you accrue some time with this.

His sig says "electronic music" which may explain some of it. I don't equate electronic music with electric guitar. :dunno:

If, as you say, "only what the guitar cab / studio monitor projects forward matters,"

then the standard way of micing cabinets everyone has been using for 50 years is, by your own definition, the most efficient way.

Kind of reminds me of that guy from about 5 years ago. What was his name? True Tone or something?
 
Perfect replication would be the optimal way of doing things for studio work. What-you-hear-is-what-you-get. That's always the best way when doing sound design. If you can't hear what you're doing, that complicates everything a lot and makes things really slow and cumbersome. So the logical way around that is to figure out how to get that what-you-hear-is-what-you-get situation.
If you're dialing in the amp, then getting to the mixing desk and becoming shocked because it's different, you're doing it wrong. You monitor through what the mic hears. That's the point you're missing: the in-the-room sound you personally may hear is not the goal for the record. The chain is another instrument.

Who's to say people don't have slight differences in eardrum thickness or wax build-up and can't ever 'accurately' hear the reproduction regardless of possibility. Everything in the chain, not just mics, applies an EQ curve, and you're just trying to achieve a pleasant average across the board.

I wouldn't be surprised if there are differences in neurons and some people are more sensitive to certain frequencies than others. In fact, how do you account for frequency loss due to age?

Pursue this if you must, but the point of music is emotional regardless of the frequencies involved.
 
Perfect replication would be the optimal way of doing things for studio work. What-you-hear-is-what-you-get. That's always the best way when doing sound design. If you can't hear what you're doing, that complicates everything a lot and makes things really slow and cumbersome. So the logical way around that is to figure out how to get that what-you-hear-is-what-you-get situation.

Hard disagree. No, it wouldn’t be the “optimal way,” at least not in my opinion. The purpose of recording high gain guitar is not necessarily achieving total accuracy in capturing the sound of the cab in the room. The purpose is whatever the people doing the recording want it to be. In my case, the purpose isn’t accuracy, it’s “getting cool tones that sound good.” And make no mistake those are entirely different pursuits.

You assume “directly translating the sound of the cab in the room as it hits your ears while you’re in the same room” directly to the mixing console is in any way desirable. For me it is not. At all. Serious question, have you ever been in the room with a multi-speaker guitar cab being driven by a high gain tone and tracked for recording?

Personally, I really like the tones I get in my mixes. But when I record cabs, the direct sounds from the cabs I hear in the room pretty much suck ass. In the room, cabs are garbled, overly directional, mid heavy to the point of sounding small, congested, overly gained up, and just… bad. But in the mix, through a mic and monitors? Totally different story. Those same cabs come through much more balanced, bigger and wider, and much more articulate. It’s an entirely different sound, and miles better than anything I hear in the room.

As far as your concern about me hearing what I’m doing? I record guitar at the console, listening through monitors, because that’s the best way to hear how the tone is going to sound to the listener when it’s done. What I hear in the control room *is* exactly what I get.
 
You assume “directly translating the sound of the cab in the room as it hits your ears while you’re in the same room” directly to the mixing console is in any way desirable. For me it is not. At all. Serious question, have you ever been in the room with a multi-speaker guitar cab being driven by a high gain tone and tracked for recording?


Also, the presumption that a flat-EQ mic-to-reference capture would be optimal is a huge one.

The whole point is coloring the sound with different mics, because they change the end result.
 
Hard disagree. No, it wouldn’t be the “optimal way,” at least not in my opinion. The purpose of recording high gain guitar is not necessarily achieving total accuracy in capturing the sound of the cab in the room. The purpose is whatever the people doing the recording want it to be. In my case, the purpose isn’t accuracy, it’s “getting cool tones that sound good.” And make no mistake those are entirely different pursuits.
This, in 99.999999% of the cases!

I got booted from another popular gear forum for pointing out that, no, the mic'd-up amp, with zero changes to it, is NOT in any way, shape, or form what you hear at the end of a mix on a typical modern record you are listening to on Spotify or whatever.

Never mind all the compression, filtering, and other wizardry we do to every track, which would be much more than enough to kill that claim: an album mastered to -8 LUFS (which you could nearly represent in a two-bit digital signal!) cannot possibly even begin to hope to contain the dynamic range of your typical chugga-chugga djent track, never mind some edge-of-breakup tone that you would struggle to contain to 8 bits of dynamic range.
 
I think it's true that in many cases, if we really love the source of the audio, we definitely want to capture it 10000%.

But let's say pop vocals: you want nothing even slightly resembling the actual sound of that artist's voice in the vast majority of cases. The dynamic range alone would make it unusable in a modern mix. And then think about the insane volume swing of a palm-muted guitar vs. some chords.

So many albums today are mastered to around -8 LUFS. That is far less than a four-bit dynamic range. There's no way that chug vs. chords is being represented in four bits.

There are some free bit reducer plugins out there, and some DAWs come with them (not sure what the rules are exactly here about company affiliations and disclosures, so not being too specific). It's a really eye-opening experiment to take what seems to be a decent-dynamic-range mix and start reducing. It's frightening how, except maybe for some fade-outs, you can easily get to 8 bits and not be able to successfully ABX between that and the original. And anything at the top of the Spotify pop charts will go much lower.

TLDR: it is IMPERATIVE for the recording engineer to be able to pass a WYSIWYG signal, so they know and can account for any change they make, among a billion other reasons, but likely the sound the audience expects to come out of the speakers bears as much resemblance to the original as any of the Wings Of Pegasus cases resemble the un-pitch-corrected vocals claimed by the shills.
I'm not 100% following what you mean by the above explanation, so I'll confirm the points one by one:

Regarding the 8 LUFS vs. 4 bits:
I assume the LUFS unit uses decibels, but with the added idea of different frequencies sounding louder to the human ear than others? So 8 LUFS would be about a 2.5x amplitude change, which should be just a little less than what 2 bits would be capable of representing as numbers. With the 4 bits you mentioned you could represent numbers up to 15 (1/16th of the full range). So if the dynamics difference between the loudest and softest parts of the song was 1/16th of the full range, that would give a fair amount of dynamic range already. But since it's only about 2.5x (8 LUFS), that's really squished.
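For what it's worth, a quick back-of-the-envelope check of those numbers, using the usual ~6 dB-per-bit rule of thumb and treating an 8 LU difference as roughly 8 dB (both of which are approximations):

```python
import math

# Each bit of a linear PCM word buys roughly 6.02 dB of dynamic range.
db_per_bit = 20 * math.log10(2)          # ~6.02 dB

print(2 * db_per_bit)                    # 2 bits  -> ~12 dB
print(4 * db_per_bit)                    # 4 bits  -> ~24 dB

# An ~8 dB loudness range corresponds to roughly a 2.5x amplitude ratio:
print(10 ** (8 / 20))                    # ~2.51
```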

I'm not sure what you are after with the bit reducer plugin example. The bit depth of the audio signal has very little to do with the dynamic range of the song itself, i.e. the range into which someone has limited/compressed the audio's peaks and lows.

And yes, we're talking purely about studio work here. Not gigs or anything, as I mentioned earlier in the thread.
 
If you're dialing in the amp, then getting to the mixing desk and becoming shocked because it's different, you're doing it wrong. You monitor through what the mic hears. That's the point you're missing: the in-the-room sound you personally may hear is not the goal for the record. The chain is another instrument.

Who's to say people don't have slight differences in eardrum thickness or wax build-up and can't ever 'accurately' hear the reproduction regardless of possibility. Everything in the chain, not just mics, applies an EQ curve, and you're just trying to achieve a pleasant average across the board.

I wouldn't be surprised if there are differences in neurons and some people are more sensitive to certain frequencies than others. In fact, how do you account for frequency loss due to age?

Pursue this if you must, but the point of music is emotional regardless of the frequencies involved.
You're using circular reasoning. The whole point of my OP is to get rid of the very issue you're describing (not being able to hear the sound of the cab before it's monitored in a separate room).

The recording engineer is the one whose ears/brain/etc. perceive the audio, regardless of ear wax and degraded ears. So that doesn't matter, as it's his decision what to capture on the record and what to deliver to the audience. So that part is not relevant to this topic of discussion.

However, the idea that everything in the signal chain is part of the sound is absolutely true. But it is also true that the more signal-altering filters/steps there are between the sound source and the recorded audio signal, the more convoluted the process becomes. It's like trying to paint a painting with beautiful colors while wearing strongly color-tinted sunglasses. Sure, it's probably doable, but why not make things more straightforward and easy? Why use those sunglasses in the first place, and why not look directly at what you're actually working on?

As I mentioned before, to be able to monitor and capture the guitar cab sound easily with the current processes, you have to locate the cabinet in a different room which has been dedicated only to capturing such guitar cabinets. Probably you need a robot arm which then moves the mic(s), so you can decide where exactly to put the mic to capture the sound you're after. This mic movement is mostly necessary because the method doesn't capture the actual sound of the speaker, but mostly tiny surgical areas of it, which you'll later use to construct a completely new sound that may or may not resemble the actual sound coming out of the cabinet. So why not come up with a process that accurately captures the actual sound coming out of the speaker, and then just tweak the knobs on your amp/pedals/guitar to get the sound you want? This way you don't really need the separate room to capture the cabinet, but you can still use it if you want. This would make things much more flexible from the perspective of workflow options.


As a general note: what is going on here? Am I really that bad at conveying technical ideas in written form, such that I get a thread filled with answers that continuously misunderstand what is even being discussed here? Or are people just lazy readers who give answers based on what they think I'm probably talking about, without actually reading what I wrote?
 
Regarding the 8 LUFS vs. 4 bits:
I assume the LUFS unit uses decibels, but with the added idea of different frequencies sounding louder to the human ear than others? So 8 LUFS would be about a 2.5x amplitude change, which should be just a little less than what 2 bits would be capable of representing as numbers. With the 4 bits you mentioned you could represent numbers up to 15 (1/16th of the full range). So if the dynamics difference between the loudest and softest parts of the song was 1/16th of the full range, that would give a fair amount of dynamic range already. But since it's only about 2.5x (8 LUFS), that's really squished.

You have this right. Aside from fade-outs and maybe some quiet spots (as shown by dropping many nu metal songs with very distinct stops and starts down to 8 bits without being able to ABX between 8 bits and 16), I'm saying four bits to be generous. But as you say, it's more than likely going to fit in the theoretical dynamic range of 2 bits (though there are confounding factors in dropping it that low, as, for instance, quantization error becomes a much larger percentage of the signal).

I'm not sure what you are after with the bit reducer plugin example. The bit depth of the audio signal has very little to do with the dynamic range of the song itself, i.e. the range into which someone has limited/compressed the audio's peaks and lows.
The point is to show just how counterintuitively low a bit depth you can reduce a modern album to without being able to ABX the difference (again, aside from fade-outs and similar), which is a world apart from the giant dynamic range of even a distorted guitar part featuring chords interspersed with palm muting.
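If you want to put a rough number on how far down the difference sits, you can look at the level of the quantization residual (quantized minus original). A sketch only, assuming a float mix in [-1, 1], uniform quantization with no dither, and made-up helper names:

```python
import numpy as np

def reduce_bits(x, bits):
    """Quantize a float signal in [-1, 1] to the given bit depth (no dither)."""
    levels = 2 ** (bits - 1)
    return np.round(x * levels) / levels

def residual_dbfs(x, bits):
    """RMS level of (quantized minus original), in dB relative to full scale."""
    err = reduce_bits(x, bits) - x
    rms = np.sqrt(np.mean(err ** 2))
    return 20 * np.log10(rms + 1e-12)

# Dense, loud noise standing in for a heavily limited mix (for illustration).
rng = np.random.default_rng(1)
mix = np.clip(rng.normal(scale=0.3, size=44100), -1, 1)
for b in (16, 8, 5, 4):
    print(b, round(residual_dbfs(mix, b), 1))
```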
 
The point is to show just how counterintuitively low a bit depth you can reduce a modern album to without being able to ABX the difference (again, aside from fade-outs and similar), which is a world apart from the giant dynamic range of even a distorted guitar part featuring chords interspersed with palm muting.
Ah OK, so you mean the quantization noise you'll get if you reduce the bit depth of the audio low enough? With 8 bits, originally smooth-sounding old recordings would sound audibly sizzly. With 8 bits on most new, bright-sounding recordings, you might not notice much difference. Any recording at 4 bits would sound really crunchy.
 
I'm firmly in the "no single imperfect overused suboptimal" camp.

The only "absolute" is that there must be splatter from freshly-squeezed limes on your grill-cloth.

(attached image: 1956 Deluxe.jpg)
 
Where did you conjure up the V30s from his answer?
Here you go. Have fun.

 
Ah OK, so you mean the quantization noise you'll get if you reduce the bit depth of the audio low enough? With 8 bits, originally smooth-sounding old recordings would sound audibly sizzly. With 8 bits on most new, bright-sounding recordings, you might not notice much difference. Any recording at 4 bits would sound really crunchy.
You can get most modern pop recordings down to 5 bits without people really noticing the difference, but the point here is that the less-than-8-bit dynamic range of most modern albums is WAY less than the dynamic range of the average guitar part. This is to counter claims that somehow the original guitar recording is pristinely reproduced at the end of the album chain.
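If anyone wants to sanity-check the "less than 8 bits of range" idea on an actual track, one crude way is to look at the spread between the loudest and quietest short-term RMS windows. This is only a sketch with made-up helper names, not a proper LUFS meter (real loudness measurement uses K-weighting and gating per ITU-R BS.1770), and fade-outs will drag the minimum down:

```python
import numpy as np

def rms_db(frame):
    """RMS level of a frame in dB relative to full scale."""
    return 20 * np.log10(np.sqrt(np.mean(frame ** 2)) + 1e-12)

def level_spread(x, sr, window_s=0.4):
    """Spread between the loudest and quietest short-term RMS windows, in dB."""
    hop = int(sr * window_s)
    levels = [rms_db(x[i:i + hop]) for i in range(0, len(x) - hop, hop)]
    return max(levels) - min(levels)

# Dummy stand-in for a decoded track (real use: load your mix as mono floats).
sr = 44100
x = 0.5 * np.sin(2 * np.pi * 220 * np.linspace(0, 5, 5 * sr))
print(round(level_spread(x, sr), 1))
```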
 
You can get most modern pop recordings down to 5 bits without people really noticing the difference, but the point here is that the less-than-8-bit dynamic range of most modern albums is WAY less than the dynamic range of the average guitar part. This is to counter claims that somehow the original guitar recording is pristinely reproduced at the end of the album chain.
There is some conceptual weirdness going on with what you're saying, but I'm not sure what exactly. I'm not sure you're using the concepts of bit count and the dynamic range of the song correctly to describe what you're trying to describe, which makes it hard for me to understand what you actually mean.

By saying "get most modern pop recordings down to 5 bits", do you mean the average audio signal amplitude difference between the quiet parts of the song vs. the loudest parts of the song? If so, decibels would be a much easier way to convey what you're saying. If you mean something else, then I need clarification on what you actually meant.

Or do you mean that when some clean guitar tone is playing by itself really quietly in the song, you can hear a lot of quiet noise along with that guitar sound, because of the insufficient number of bits used to encode the audio file? This should not happen, as all audio formats (as far as I know) use at least 16 bits per audio sample, some even 24 bits per sample. Both of those are far more than enough to store the audio without any sort of audible noise/artefacts.
 