Nephilim_BC
Murikkka
...wtf is tidal
A regular person is not gonna hear a noise-floor difference between 24-bit and 16-bit audio. Even people with high-end sound systems shouldn't be able to, if their DACs have proper dithering going on. With 24-bit you're just adding dynamic range and headroom for recording/mixing purposes; the final output should sound no different than if you were doing it in 16. Now there are probably some engineers and weirdo audiophiles who might be able to spot the difference, but they're few and far between.
You can only tell the difference if it's real instrumentation and the song/album is mastered properly. Think about it: if you're using low-quality samples and a bunch of synths that aren't mastered properly, it won't matter if it's 16-bit or 24-bit; you won't get any more clarity in the music's highs, lows, or mids. But high-quality real instruments, recorded properly and mastered right, will let you tell the difference, even as a regular listener. The same goes if the samples you use come from a high-quality source, or if the synths are producing high-quality sounds: then yes, the jump from 16 to 24 will matter to your ears.
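To see what "proper dithering" buys you, here's a minimal Python sketch (NumPy assumed; the quiet 440 Hz sine and the 16-bit depth are just illustrative numbers, not anyone's actual test signal). A tone sitting below one quantization step vanishes entirely when quantized plainly, but survives as a noisy-but-present signal once TPDF dither is added before rounding:

```python
import numpy as np

def quantize(x, bits, dither=False):
    """Quantize a [-1, 1] float signal to the given bit depth.
    Optionally applies TPDF dither (difference of two uniform random
    values) before rounding, which decorrelates the quantization error."""
    lsb = 2.0 / (2 ** bits)  # size of one quantization step
    if dither:
        x = x + (np.random.rand(len(x)) - np.random.rand(len(x))) * lsb
    return np.round(x / lsb) * lsb

fs = 44100
t = np.arange(fs) / fs
# A 440 Hz sine whose amplitude is only 0.4 LSB at 16-bit depth
quiet = 0.4 * (2.0 / 2 ** 16) * np.sin(2 * np.pi * 440 * t)

plain = quantize(quiet, 16)                 # rounds to all zeros: tone is gone
dithered = quantize(quiet, 16, dither=True) # tone survives inside the noise

print("undithered RMS:", np.sqrt(np.mean(plain ** 2)))
print("dithered RMS:  ", np.sqrt(np.mean(dithered ** 2)))
```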
Deep Bits
When it comes to judging digital music quality, the discussion usually begins and ends with bitrate. A song encoded at 320 kilobits per second is going to sound a whole lot better than a song with a 128kbps bitrate, right? Well, sure, but it's a bit more complicated than that. Bitrate stems from two different elements: bit depth and sample rate. Here's where the difference between 16-bit and 24-bit audio comes in.
Bit depth is essentially the number of bits available to describe each sample of audio: the range from the imperceptible whispers of virtually no sound to the loudest noise a piece of audio gear can crank out. The difference between 16-bit audio and 24-bit audio isn't just a matter of eight bits. As TweakHeadz explains,
“The easiest way to envision this is as a series of levels, that audio energy can be sliced at any given moment in time. With 16 bit audio, there are 65,536 possible levels. With every bit of greater resolution, the number of levels double. By the time we get to 24 bit, we actually have 16,777,216 levels. Remember we are talking about a slice of audio frozen in a single moment of time.”
Sampling 4-bit audio (2^4) gives us only 16 values, a far cry from 16-bit audio's 65,536!
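The doubling-per-bit math in that quote is easy to verify; here's a quick Python sketch (purely illustrative) printing the number of quantization levels, 2^bits, at each depth mentioned:

```python
# Each extra bit doubles the number of levels: 2 ** bits.
for bits in (4, 8, 12, 16, 24):
    print(f"{bits:2d}-bit audio: {2 ** bits:>10,} possible levels")
# 4-bit -> 16, 16-bit -> 65,536, 24-bit -> 16,777,216
```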
The other element is sample rate, which refers to the number of samples or measurements taken each second from a recording. The typical CD sample rate is 44.1kHz, or 44,100 samples per second. High-end audio gear often samples at an even higher rate, and DVD-Audio quality, which employs 24-bit audio, samples at 96kHz or even 192kHz.
Without turning to compression formats, those sample rates mean big file sizes. A 16-bit, 44.1kHz song requires a bitrate of 1.35 megabits per second, and a single minute of stereo audio takes up about 10 megabytes of space. A 24-bit song with a 96kHz sample rate, by contrast, requires a bitrate of 4.39Mbps and 33 megabytes of storage for a single minute of stereo audio. Now you can see why MP3 file sizes are so appealing.
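Those figures are just bit depth × sample rate × channels; this small sketch reproduces the article's numbers (note that it uses binary prefixes, 1024² bits or bytes per mega, which is how the 1.35 Mbit/s and roughly 33 MB per minute figures fall out):

```python
def pcm_stats(sample_rate, bit_depth, channels=2):
    """Bitrate and storage for uncompressed stereo PCM audio."""
    bits_per_sec = sample_rate * bit_depth * channels
    mbps = bits_per_sec / (1024 ** 2)                 # megabits per second
    mb_per_min = bits_per_sec * 60 / 8 / (1024 ** 2)  # megabytes per minute
    return mbps, mb_per_min

for rate, depth in ((44100, 16), (96000, 24)):
    mbps, mb = pcm_stats(rate, depth)
    print(f"{depth}-bit / {rate} Hz: {mbps:.2f} Mbit/s, "
          f"{mb:.1f} MB per minute of stereo")
# 16-bit / 44100 Hz: 1.35 Mbit/s, 10.1 MB per minute of stereo
# 24-bit / 96000 Hz: 4.39 Mbit/s, 33.0 MB per minute of stereo
```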
But How Does It Sound?
24-bit sound is a tricky thing to gauge. Does it provide greater resolution of sound? Definitely: it has room for 256 times the data, remember. Are you going to be able to hear that difference? That's harder to judge. Human hearing supposedly tops out at 20kHz, but that doesn't make higher sample rates useless. According to the Nyquist theorem, to fully capture a wave it must be sampled at at least twice its highest frequency. In other words, a higher sample rate and a greater bit depth give your sound more wiggle room, meaning sound peaks are less likely to be truncated and the subtleties of the music are less likely to be drowned out.
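The Nyquist point is easy to demonstrate numerically. In this sketch (the 30kHz tone is an arbitrary choice for illustration), a frequency above the 22.05kHz Nyquist limit of CD audio produces exactly the same samples as a 14.1kHz tone, so out-of-range content folds back down into the audible band instead of simply disappearing:

```python
import numpy as np

fs = 44100           # CD sample rate; Nyquist limit = fs / 2 = 22,050 Hz
n = np.arange(1000)  # sample indices

tone_30k = np.cos(2 * np.pi * 30000 * n / fs)  # above the Nyquist limit
tone_14k = np.cos(2 * np.pi * 14100 * n / fs)  # its alias: 44100 - 30000

# True: once sampled, the two tones are indistinguishable
print(np.allclose(tone_30k, tone_14k))
```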
AQ UNIVERSITY
24-BIT AUDIO EXPLAINED BY SEAN BEAVAN
By Skwerl @skwerl · On February 28, 2011
Here comes the longest interview intro ever, but this is a special one, addressing a very specific topic, and a bit of context is in order…
Last week, CNN reported that Interscope Records CEO Jimmy Iovine is pushing for the sale of 24-bit audio on iTunes and other online retailers. In a news conference for Hewlett-Packard, Iovine said: “We’ve gone back now at Universal, and we’re changing our pipes to 24-bit. And Apple has been great. We’re working with them and other digital [download] services to change to 24-bit. And some of their electronic devices are going to be changed as well. So we have a long road ahead of us.”
The term 24-bit refers to the “depth” of an audio recording. We can explain this technical dimension somewhat by comparing it to video; most of us have bought at least one HD television after reading a little bit on video resolution. Audio CDs are limited to 16-bit, which you might compare to a basic cable channel coming in at 480p. 24-bit is theoretically audio’s equivalent to the 720p or 1080p video coming from our Blu-Ray players or digital cable services.
However, that’s a bit of an oversimplification, and it might be a little more accurate to compare the depth of an audio signal to the number of different colors each single pixel on a television could be (rather than the total number of them). Audio sample rate (which we’ll get to later) is somewhat equivalent to video frame rate.
For those of you not aware of the so-called "Loudness War" that has been raging since the late 1980s: the music most of us listen to is almost without exception "compressed" to be as loud as God allows, to compete for our attention next to whatever came before on the radio or in our playlist. And when everything (all of the colors, so to speak) is so relentlessly saturated to be as bright and loud as possible, the question arises: are we even using 16 bits' worth? Why do we need 24?
Our own Tom Davenport, in a recent editorial for Gizmodo entitled "Why 24-bit Audio Will Be Bad For Users," presented the theory that 24-bit audio is a consumer con, and a format that regular consumers will "never need." This sparked a debate here at Antiquiet. There was the speculation that Iovine's idea is simply savvy marketing designed to turn audiophiles into even bigger suckers to sell his possibly overhyped Beats Audio headphones. There was my confidence in my own precious hi-fi system and my few 24-bit audio sources (I was ecstatic to read that 24-bit audio could be coming to iTunes). And of course, as always, there was the passion we all share for cutting through bullshit.
So we had a bit of a debate, and finally Saturday night we sat down with the most experienced professional we could blackmail, Sean Beavan of the band 8mm. Over the course of an extremely enviable engineering career spanning two decades, Sean has had a hand in the mix of several favorite albums of yours and ours, including Nine Inch Nails' The Downward Spiral, and Marilyn Manson's Antichrist Superstar and Mechanical Animals. Also possibly Chinese Democracy.
The bottom line is that while it may not be appropriate for every consumer, the 24-bit audio format has at least the potential, for those that care, to be the best thing to happen to the art form of recorded music since the CD. I choose these words carefully, because there’s as much subjectivity involved as science, as Sean explains in much depth.
It absolutely must be understood that 24-bit audio isn't just about "ripping" the fully mastered recordings we're all familiar with to a slightly different digital format. Just as the mastering process for vinyl is (and in fact must be) completely different from the mastering process for 16-bit CDs or radio, to bring 24-bit audio to market it's nearly guaranteed that the labels will locate the pre-mastered tapes (or Pro Tools sessions, or what have you) and remaster them responsibly to take proper advantage of the benefits of the 24-bit format. And you may or may not be surprised to learn that these differences aren't so subtle; even a non-audiophile can hear them on a regular consumer stereo system. You may also be surprised to learn that the infrastructure to properly bring what is essentially an entirely new format to market is already in place (more or less) at the labels.
What you say is true for the most part, but it's also true that 24 is more bits than 16. That's just a fact.
If you look at sound as a picture, the more bits you have to play with, the better replication of the sound you can produce, especially once you get into the digital realm.
For instance, let's say I have a hi-hat; you know that high-pitched "chhhhh" noise it makes. If I were to play back a live hi-hat hit in 4-bit, 8-bit, 12-bit, 16-bit... 24-bit... the "chhhh" sound would be more distorted in 8-bit than in 12, 16, and so on. By the time I get to 24, it's a very close replication of the real "chhhh" sound with little to no recognizable distortion.
But a hi-hat being played behind tons of other music is where it will get lost anyway, and as you say, who's really going to hear that? Not many.
But here's the thing: if I'm recording in 24-bit, and let's say everyone had access to 2-gigabit internet speeds and everyone now had 50-terabyte drives in their homes, why would the recording industry downconvert it to a 16-bit file (CD quality) when they could just output it the exact same way they recorded it?
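As a rough illustration of that hi-hat progression, here's a small Python sketch (NumPy assumed; white noise stands in for the "chhhh", since a real hi-hat sample isn't available here) that quantizes the same burst at increasing bit depths and prints the signal-to-error ratio, which improves by roughly 6 dB per bit:

```python
import numpy as np

rng = np.random.default_rng(0)
hat = rng.uniform(-1, 1, 44100)  # 1 second of noise as a hi-hat stand-in

for bits in (4, 8, 12, 16, 24):
    lsb = 2.0 / (2 ** bits)                  # quantization step size
    quantized = np.round(hat / lsb) * lsb    # snap to the nearest level
    err = hat - quantized                    # what quantization threw away
    snr = 10 * np.log10(np.mean(hat ** 2) / np.mean(err ** 2))
    print(f"{bits:2d}-bit: ~{snr:.0f} dB signal-to-error")
# roughly 24, 48, 72, 96, and 144 dB respectively
```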
Most people couldn't tell you if I was playing a CD, a high-quality mix straight from Pro Tools, or an actual record on a record player.
Here's an excellent, legit breakdown of the point I made, which agrees with your point as well:
http://www.tested.com/tech/1905-the-real-differences-between-16-bit-and-24-bit-audio/
And this is another good article on the subject from a few years ago:
http://antiquiet.com/aqu/2011/02/24-bit-audio-explained-by-sean-beavan/