Is the Pentatonic scale ingrained?

I’m sure most of you have seen this: apparently non-musicians were able to follow/predict the pitch as he jumped around.

Does this mean in our inner voice we hum along in pentatonic?
For me, the speed and uniformity with which the audience follows seems too good to be true… though if pentatonic is what we naturally default to in our heads, perhaps not? Idk.

I’m interested because I put a lot of effort into playing what’s in my head, over a jam track for example, though often what’s in my head is very difficult to find on the fretboard.

Short answer: I think it’s all just learned via pop culture.

The western world is absolutely full of kids who want to be on “The Voice”, and who can do full-on Stevie Wonder-level melismatic blues pentatonic vocal improv, but have no formal music training. They mimic the pitches, the vibrato, even the accents. (Google “cursive singing” if you want to go into full “old man yells at cloud” mode.) I don’t think this is really any different from the fact that every rock guitar player can do box pentatonic licks.

I would add that this video is probably an even better sample set than most. It’s a conference with an audience of educated people, many of whom probably have some music background and interests. When I was in college it was really amazing how many people there had taken piano or violin lessons at some point. I would guess 80% in total, just in the dorm I lived in, which had no musical affiliation of any kind. It’s the overachiever thing to do.

1 Like

Well, how is the guitar tuned? EADGBE usually, right?

I guess that would give us one of two chords: 1) an Emin7add11, or 2) a Gmaj6add9.
Arpeggiate those and you get either an E minor pentatonic “scale” or a G major pentatonic “scale”. Every time you strum your open strings. Yep, barring straight across in standard tuning gives you a pentatonic “scale”.
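
If anyone wants to check that, here’s a quick Python sketch (just an illustration; the note names are the usual ones):

```python
# Open strings in standard tuning and the pitch classes they contain.
OPEN_STRINGS = ["E", "A", "D", "G", "B", "E"]

# E minor pentatonic: E G A B D. G major pentatonic: G A B D E (same set, different root).
E_MINOR_PENT = {"E", "G", "A", "B", "D"}
G_MAJOR_PENT = {"G", "A", "B", "D", "E"}

open_pitch_classes = set(OPEN_STRINGS)
print(open_pitch_classes == E_MINOR_PENT)  # True
print(open_pitch_classes == G_MAJOR_PENT)  # True
```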

Now if you take a look at a lot of indigenous music, I think one might find that pentatonics are quite common. But, what do I know.

3 Likes

Without getting into the adjustments of tempered tuning:

The pentatonic and diatonic scale (and their modes) can be generated by stacking fifths or fourths. Fifths occur early in the overtone series after the octave.
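
To make the stacking idea concrete, here’s a rough sketch in 12-tone equal temperament (spelling simplified to pitch classes; just an illustration, not a claim about how the scales historically arose):

```python
# Stack perfect fifths (7 semitones) from a root and collect the pitch classes.
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def stack_fifths(root_pc, count):
    """Pitch classes reached by stacking `count` fifths starting from root_pc."""
    return sorted({(root_pc + 7 * i) % 12 for i in range(count)})

# 5 fifths from C (C G D A E) collapse to C D E G A: the C major pentatonic.
print([NOTE_NAMES[pc] for pc in stack_fifths(0, 5)])
# 7 fifths from F (F C G D A E B) give the full C major (diatonic) scale.
print([NOTE_NAMES[pc] for pc in stack_fifths(5, 7)])
```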

These scales are easy to generate using the most basic of harmonic ratios. Basic harmonic ratios are easier to understand and are perceived as more consonant because it’s easier for our brains to calculate them. The frequency relationships of dissonant harmonies are more difficult to discern.

So the scales aren’t innate, and aren’t exactly universal. But they are generated efficiently out of natural phenomena and principles.

Personally, I have a distaste for minor pentatonic that goes back to my childhood. I favor major pentatonic, the major scale and its modes, the Barry Harris diminished 6th scales (“bebop scales”), and the diminished scale as a way of building tension.

2 Likes

So for example, most people can hum a note, and then hum the octave of that note. I think the majority of people can do this by nature. And the extrapolation of that brings about the pentatonic?

Just like how it’s easy to add in halves, like 10, 15, 20, etc., rather than odd numbers?

Well, sort of. But I think that kind of math is more conscious, whereas hearing harmony isn’t. Our brains just do complex Fourier analysis without any kind of conceptual thinking needed. We hear formants in vowels really well as part of our inborn “language acquisition device”, as Chomsky put it.

Another way to think about it is that harmonies generate polyrhythms. Polyrhythms that take longer for our auditory system to process are the ones we refer to as dissonant or unstable.
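
One way to see that concretely (a toy calculation, not a model of the auditory system): treat two tones in a simple frequency ratio as two pulse streams and ask how long the combined pattern takes to repeat.

```python
from fractions import Fraction

def realignment_cycles(ratio):
    """For two tones at frequency ratio p/q (lowest terms), the combined waveform
    repeats after q cycles of the lower tone and p cycles of the higher one."""
    f = Fraction(ratio)
    return f.denominator, f.numerator

print(realignment_cycles("3/2"))    # (2, 3): a perfect fifth realigns almost immediately
print(realignment_cycles("16/15"))  # (15, 16): a just minor second takes much longer
```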

1 Like

Ha, he is amazing for sure. I have never in my life even begun a sentence with “My favorite polyrhythm is…”

I’d rather say: our brains are good at solving the kind of simplification and generalization problems that can be solved by Fourier analysis, without doing Fourier analysis.

Our brains are great at simplification/reduction and pattern recognition.

I don’t see any reason not to call what our auditory system does Fourier analysis, even if we haven’t quantified the algorithm at work.

Pitch differentiation actually occurs mainly in the ear itself, not in the brain. It is largely mechanical. The brain doesn’t need to do Fourier analysis on audio signals because the cochlea gives pitch information directly via mechanical methods.

This is heavily simplified, but I think it’s accurate enough for this discussion: The cochlea is a long tube filled with fluid and a long basilar membrane with hairs all over it. The near end of the membrane is narrow and stiff, while the far end is wider and looser. When the eardrum vibrates, the frequency of that vibration creates resonances inside the cochlea. The location of the resonances stimulates motion in the basilar membrane, causing the hairs to move. The motion of the hairs sends nerve impulses to the brain. The brain translates these impulses to pitch on the basis of which hairs are moving, not the frequency of the impulses. Near end of membrane: higher frequencies, far end: lower frequencies. The spacing of frequencies on the basilar membrane is logarithmic, just like the relationship between frequency and pitch.
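
To illustrate just that last point, the logarithmic relationship between frequency and pitch (not the cochlear mechanics themselves), a tiny sketch:

```python
import math

def semitones_above(freq, ref=440.0):
    """Pitch distance in equal-tempered semitones between freq and a reference (A4 by default)."""
    return 12 * math.log2(freq / ref)

# Every doubling of frequency is the same pitch distance: one octave, 12 semitones.
print(semitones_above(880.0))   # 12.0
print(semitones_above(1760.0))  # 24.0
print(semitones_above(220.0))   # -12.0
```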

Obviously there is plenty of computation for the brain to do for interpretation purposes, converting these signals into meaningful sounds. But the pitch analysis part is done mechanically before the signal reaches the brain. No FFT required.

Fun fact: We lose high frequencies first with age because the near side of the basilar membrane is more susceptible to mechanical damage from high amplitude signals.

Edit to add:

This explanation meshes with the physical description pretty well. Two notes in a consonant interval have cochlear resonances in some of the same places, providing fewer pitch signals to the brain. Likewise, dissonant intervals will have more simultaneous resonances for the brain to process as pitches.
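
A crude way to see that is to count how many harmonics of two idealized tones land on top of each other (real cochlear filtering is messier, so treat this as a sketch):

```python
def shared_harmonics(f1, f2, n=12, tolerance=0.005):
    """Count pairs of harmonics (up to the nth of each tone) within `tolerance` (relative) of each other."""
    h1 = [f1 * k for k in range(1, n + 1)]
    h2 = [f2 * k for k in range(1, n + 1)]
    return sum(1 for a in h1 for b in h2 if abs(a - b) / a < tolerance)

# A perfect fifth (3:2) shares several harmonics; a tritone shares essentially none.
print(shared_harmonics(220.0, 330.0))           # fifth above A3: several coincidences
print(shared_harmonics(220.0, 220.0 * 2**0.5))  # tritone above A3: none within tolerance
```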

Another fun fact (many of you may already know): The harmonic series of a single open string spells out a dominant 7 chord.
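
If you want to see that spelled out, here’s a sketch naming the nearest equal-tempered pitch for the first seven harmonics of a low E string (the 7th harmonic is famously flat of the tempered b7, so “nearest” is doing some work):

```python
import math

NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def nearest_note(freq, ref=440.0):
    """Name of the equal-tempered pitch class closest to freq (A4 = ref)."""
    semis = round(12 * math.log2(freq / ref))
    return NOTE_NAMES[(semis + 9) % 12]  # A sits 9 semitones above C

low_e = 82.41  # approximate frequency of the open low E string in Hz
for k in range(1, 8):
    print(k, nearest_note(low_e * k))
# Harmonics 1 through 7 come out as E, E, B, E, G#, B, D: the notes of an E7 chord.
```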

By “auditory system” I’m referring to the ear as well as the brain. All of this together functions as a parallel processing system. I’m also not talking about pitch differentiation per se, I’m talking about experiencing harmony as consonance and dissonance based on beat relationships.

But consider that we can imagine complex sounds. No sounds have to hit the ear. Spend some time doing additive synthesis and you can imagine what sine waves added together sound like. If you can do formant shifting in singing, and you can audiate it, then you’re mixing sine waves and predicting the results in real time.
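
For anyone who hasn’t tried it, a minimal additive-synthesis sketch (NumPy assumed; write the samples to a WAV file or feed them to whatever audio library you like):

```python
import numpy as np

SAMPLE_RATE = 44100

def additive_tone(f0, partial_amps, duration=2.0):
    """Sum sine-wave partials at integer multiples of f0, weighted by partial_amps."""
    t = np.linspace(0, duration, int(SAMPLE_RATE * duration), endpoint=False)
    tone = sum(amp * np.sin(2 * np.pi * f0 * k * t)
               for k, amp in enumerate(partial_amps, start=1))
    return tone / np.max(np.abs(tone))  # normalize to the range [-1, 1]

# A 220 Hz tone with gently decaying partials sounds closer to a plucked string
# than a bare sine; changing the amplitude recipe changes the perceived timbre.
samples = additive_tone(220.0, [1.0, 0.5, 0.33, 0.25, 0.2])
```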

I wasn’t disagreeing with you. I was providing the missing information about how the brain ‘does Fourier Transforms’, and pointing out that your previous explanation is grounded pretty well in physics.

I don’t know if beat relationships are specifically responsible for how the brain determines consonance and dissonance, but I will note that beat relationships and overlapping harmonics are two sides of the same coin.

I’m also not saying that consonance and dissonance are determined in the cochlea. The brain certainly has to interpret the frequency information it receives by the ear to use it to form such judgements. Certainly the brain can simulate the effect of physical signals that it’s not actually receiving (to some extent), but it’s not going to be able to do that without a long prior history of actually receiving such input from the ears. I am skeptical that a person born deaf would ever intuit the existence of beat frequencies without doing the math. In formant shifting, you predict the results of mixing sine waves in real time by listening (with your ears) to the results of your efforts and cataloguing them for future use, not by blind calculation.

Newborns and people who are deaf from birth and get cochlear implants to hear for the first time have to train their brains to interpret sounds by association with phenomena. It’s not automatic. Your ears can’t be taken out of the equation.

While a deaf person probably wouldn’t have an understanding of harmony, they do experience rhythm and beat. It’s also interesting to note that neural entrainment has been observed at harmonic intervals in response to periodic events.

Decomposing a sound into sine waves (and vice versa) is often referred to as Fourier analysis; more generally it fits into the area of harmonic analysis. It seems reasonable to suggest that the heuristic involved in the actions I’ve mentioned is analogous. Whether it’s trained through a process of experience and comparison is sort of irrelevant; conceivably we could train a neural network to do harmonic analysis using such inputs too.

" The subject of Fourier analysis encompasses a vast spectrum of mathematics. In the sciences and engineering, the process of decomposing a function into oscillatory components is often called Fourier analysis, while the operation of rebuilding the function from these pieces is known as Fourier synthesis . For example, determining what component frequencies are present in a musical note would involve computing the Fourier transform of a sampled musical note. One could then re-synthesize the same sound by including the frequency components as revealed in the Fourier analysis. In mathematics, the term Fourier analysis often refers to the study of both operations."

The music of the Aka is complex polyphonic vocal music which uses an anhemitonic pentatonic scale. Not quite the “western pentatonic scale standard”, but close. One of the variation techniques they use is substituting a note for its fifth.

That link doesn’t work for some reason.

Definitely sounds a bit weird. I like it lol

To my mind, Fourier Analysis is only relevant to human perception on the input side. The ear detects a time-series of eardrum deviation due to air pressure waves and must somehow convert that information into frequency. This is what I commented on above.

On the internal and output side, Fourier Analysis would seem to be unnecessary because the mind can operate natively in frequency-space. That’s why we need the input FFT in the first place. We have a mental sensation of pitch, not eardrum deviation or air pressure. There is no need to produce an output waveform unless you want to sing aloud or play an instrument. Most musical instruments, including the human voice and the guitar, are operated natively in frequency-space.

I’m not disagreeing with the beat relationships as part of the decoding system of human hearing, but for thinking about sound and producing sound I’m not clear why beat relationships would come into it. (Except for programming synthesizers, of course.)

<This has been “Nerds Passing in the Night”. Brought to you by CTC>

1 Like

A complex sound isn’t just a frequency. Experiencing the timbre of a single note already involves harmonic analysis. So does discerning between vowels, and between different emotional inflections in speech.

I simplified quite a bit by saying “beat relationship”. Amplitude matters a lot. I tried to condense a lot of reading in psychoacoustics without writing a research paper here.

There’s a better example of decomposing a sound than programming a synthesizer: a common practice for ear training is to learn to hear and reproduce the distinct partials in a note. You can sit and listen to a vibrating string, then learn to “hear”, and sing, the octave above the fundamental, the fifth, the third, etc.

(The affinity for that fifth is strong. When I was first learning songs “by ear”, I’d sometimes make the common mistake of confusing a note with the one a fifth above.)

Operating in frequency-space doesn’t at all imply that complex sounds are just frequencies. Fourier components have amplitudes. On the input side, the ear performs this decomposition mechanically, amplitudes included, and passes the frequency and amplitude information to the brain. It does this in a way that the time-series information itself does not need to be visible to the brain. No Fourier Analysis required on the brain’s end.

Ear training is an excellent example of the mind operating directly in frequency space. No Fourier decomposition or synthesis is required.

So far, in this thread at least, we have not identified any problem that the human hearing system needs to solve that involves the brain doing Fourier Analysis or anything similar.

I guess I’m having a hard time figuring out what claim you are trying to support, other than ‘the brain is capable of receiving, interpreting, processing, and generating frequency information in the audio spectrum’, i.e. ‘we can think about sound’, which I doubt many people disagree with. So I suspect you have a more specific point, but I am unable to figure out what it is. This is probably my failure not yours.

I hope I’m not coming across as rude or attacking you. I’m enjoying this conversation. But I’m genuinely curious what point you are trying to support.

For one, you keep separating the mind and the ear, which I didn’t do.

The example of being able to decompose a vibrating string sound into constituent partials seems a pretty straightforward case of harmonic analysis. How do you suggest that happens otherwise?

There’s a video from “3Blue1Brown” titled “Music and Measure Theory”. In the first couple of minutes he goes over how harmoniousness as a phenomenon is largely captured by simple rational-number ratios with detectable beat relationships.
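
The beat side of that is easy to put numbers on (nothing specific to the video, just the usual |f1 - f2| rule):

```python
# The beat rate between two nearby frequencies is simply their difference.
def beat_rate(f1, f2):
    return abs(f1 - f2)

# A slightly mistuned unison beats slowly and audibly...
print(beat_rate(440.0, 443.0))  # 3.0 Hz
# ...and an equal-tempered major third beats between its clashing partials:
# the 5th harmonic of C4 against the 4th harmonic of E4.
print(beat_rate(261.63 * 5, 261.63 * 2 ** (4 / 12) * 4))  # roughly 10 Hz
```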

Ok. I think we’re talking in circles. I’ll leave you to it. Thanks for the discussion. It was fun.