Basilar motion at 1600 and 2000Hz. Time for demos: Experiences, Copenhagen New Stage, Half-Size Boston, Williams Hall, NEC


The Relationship Between Audience Engagement and Our Ability to Perceive the Pitch, Timbre, Azimuth and Envelopment of Multiple Sources
David Griesinger, Consultant, Cambridge MA USA

Overview

This talk consists of an introduction followed by sections from three fields. The introduction states the goal of the talk: to help build better concert halls and opera houses. To do this we need to understand how acoustics affects the perception of sound.

Part one – Physics – describes a physical mechanism by which human hearing may detect the pitch, timbre, azimuth and distance (near/far) of several sound sources at the same time, using frequencies in the range of vocal formants (1000Hz to 4000Hz). The acuity of these perceptions is reduced in the presence of reflections and reverberation in a consistent and predictable fashion. The consequence of this reduction in acuity is the perception of distance from the source, and a loss of excitement or engagement with the performance. A computer model of this mechanism can be used to measure the psychological clarity of a hall from recordings of live music.

Part two – Psychology – discusses the psychological importance of the perception of "near" and "far" for the ability of a sound to hold the attention of the audience, and makes a plea for hall and opera designs that maximize audience engagement.

Part three – Acoustics – looks at the acoustic reasons certain concert halls are more engaging than others. Hall shape does not scale.
A shoebox shape that works for a hall with 2000 seats will produce muddy sound over a wide range of seats if it is scaled down to 1000 seats. Rectangular diffusing elements – coffers and niches – act as frequency-dependent retro-reflectors, and are an essential ingredient in maintaining high clarity over a large number of seats.

Warning! Radical Concepts Ahead!

The critical issue is the amount, the time delay, and the frequency content of early reflections relative to the direct sound. If the direct-to-reverberant ratio above 700Hz is above a critical threshold, early energy and late reverberation can enhance the listening experience. But if not:

Reflections in the time range of 10 to 100ms reduce clarity, envelopment, and engagement – whether lateral or not – and the earliest reflections are the most problematic. Reflections off the back wall of a stage or shell decrease clarity: they are typically both early and strong, and interfere with the direct sound. Side-wall reflections are desirable in the front of a hall, but reduce engagement in the rear seats, where they are earlier and stronger relative to the direct sound.

Reflections above 700Hz directed into audience sections close to the sound sources have the effect of reducing the reflected energy in other areas of the hall, with beneficial results. These features increase the direct/reverberant ratio in the rear seats, and attenuate the upper frequencies from side-wall reflections in the rear. Coffers, niches, and/or open ceiling reflectors are invariably present in the best shoebox halls.
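The direct-to-reverberant criterion above can be illustrated numerically. The sketch below is not the author's measurement procedure: the 5ms direct-sound window, the FFT brick-wall high-pass at 700Hz, and the function name are all illustrative assumptions.

```python
import numpy as np

def direct_to_reverberant_db(ir, fs, split_ms=5.0, hp_hz=700.0):
    """Rough direct-to-reverberant ratio above hp_hz, in dB.

    Direct sound is taken as everything up to split_ms after the
    impulse-response peak; the remainder counts as reflected energy.
    """
    # Keep only energy above hp_hz with a crude FFT brick-wall filter.
    spec = np.fft.rfft(ir)
    freqs = np.fft.rfftfreq(len(ir), 1.0 / fs)
    spec[freqs < hp_hz] = 0.0
    hp = np.fft.irfft(spec, n=len(ir))

    peak = int(np.argmax(np.abs(hp)))
    split = peak + int(fs * split_ms / 1000.0)
    direct = np.sum(hp[:split] ** 2)
    reverb = np.sum(hp[split:] ** 2)
    return 10.0 * np.log10(direct / reverb)

# Toy impulse response: a strong direct spike plus one weaker
# reflection arriving 20 ms later.
fs = 48000
ir = np.zeros(fs // 2)
ir[100] = 1.0              # direct sound
ir[100 + fs // 50] = 0.3   # reflection at -10.5 dB
drr = direct_to_reverberant_db(ir, fs)
```

With a dense reverberant tail instead of a single reflection, the same function drops below 0 dB, the regime where the talk argues engagement suffers.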


A few that work – note the rectangular coffers and niches: Boston Symphony Hall, Amsterdam Concertgebouw, Vienna Grosser Musikvereinssaal, Tanglewood Music Shed.

Nice try – and there are plenty more: Avery Fisher Hall, New York; Alice Tully Hall, New York; Kennedy Center, Washington, DC; Salle Pleyel, Paris.

Introduction

This talk is centered on the properties of sound that promote engagement – the focused attention of a listener. Engagement is usually subconscious, and the study of its dependence on acoustics has been neglected in most acoustic research. At some level the phenomenon is well known: drama and film directors insist that performance venues be acoustically dry, with excellent speech clarity and intelligibility. So do producers and listeners of popular music, and customers of electronically reproduced music of all genres. The same acoustic properties that create excitement in a play or film can increase the impact of live classical music – but many current halls and opera houses are not acoustically engaging in a wide range of seats. Halls with poor engagement decrease audiences for live classical music.

Engagement is associated with sonic clarity – but currently there is no standard method to quantify the acoustic properties that promote it. Acoustic measures such as "Clarity 80" (C80) were developed to quantify intelligibility, not engagement. Venues often have adequate intelligibility – particularly for music – but poor engagement. Acoustic engineers and architects cannot design better halls and opera houses without being able to specify and verify the properties they are looking for. So we desperately need measures for the kind of clarity that leads to engagement.

The story of "near", "far", and harmonic coherence

The author has been fascinated with engagement for a long time, particularly with the perception of muddiness in a recording, and the lack of dramatic clarity in a hall or opera house. This fascination led to a discovery that a major determinant of engagement is the perception of "near" and "far", which humans can determine immediately on hearing a sound, even with only one ear, or with a single channel of recorded sound. The perception has vital importance, as it subconsciously determines the amount of attention we will pay to a sound event. The importance of this perception, and the speed with which we make it, argue that determining "near" and "far" is a fundamental property of sound perception. But how do we perceive it, and how can it be measured?

In searching for the answer, the author found that engagement, "near" and "far", pitch perception, timbre perception, and direction detection are all related to the same property of sound: the phase coherence of harmonics in the vocal formant range, ~1000Hz to 4000Hz. Example: the syllables one to ten with four different degrees of phase coherence. The sound power and spectrum of each group is identical.

Near, far, and sound localization

The first step toward realizing the fundamental importance of phase coherence came from the author's listening experience, which suggested that the perception of "near" and "far" is closely related to the ability to accurately identify the direction of a sound source. When individual musicians in a small classical music ensemble sounded engaging and close to the listener, they could be accurately localized. And when they sounded distant and non-engaging, they were difficult to localize.
Engagement is mostly subconscious and difficult to quantify – but localization experiments are relatively easy to perform – so I studied localization. Experiments with several subjects showed that the ability to localize sounds in a reverberant environment depends on frequencies between 700Hz and 4000Hz, and that poor localization occurs when the sum of early reflections in the time range from 5ms to 100ms, from any direction, becomes stronger than the direct sound. The earlier a reflection comes, the larger is its detrimental effect. With the help of localization data it was possible to construct a measure for the ability to localize sound in a reverberant environment. The input to the measure is a measured or calculated binaural impulse response at a particular seat, ideally with an occupied hall and stage.

Equation for localizability – 700 to 4000Hz

We can use a simple model to derive an equation that gives us a decibel value for the ease of perceiving the direction of direct sound. The input p(t) is the sound pressure of the source-side channel of a binaural impulse response. We propose that the threshold for localization is 0dB, and that clear localization and engagement occur at a localizability value of +3dB. Where D is the window width (~0.1s), and S is a scale factor:

Localizability (LOC) in dB =

The scale factor S and the window width D interact to set the slope of the threshold as a function of added time delay. The values I have chosen (100ms and -20dB) fit my personal data. The extra factor of +1.5dB is added to match my personal thresholds. Further description of this equation is beyond the scope of this talk, but it is explained on the author's web page. S is the zero-nerve-firing line; it is 20dB below the maximum loudness.
POS in the equation means that negative values of the sum of S and the cumulative log pressure are ignored.
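The full LOC formula is deferred to the author's web page and is not reproduced in this transcript. As a heavily hedged sketch of the ingredients that are described verbally here – a ~5ms direct-sound window, a cumulative log pressure floored by POS at the zero-nerve-firing line 20dB below maximum loudness, a D = 100ms averaging window, and a +1.5dB offset – one might write the following; the integration scheme, normalization, and names are guesses, not the published equation.

```python
import numpy as np

def loc_db(p, fs, D=0.1, S_floor=20.0, offset=1.5):
    """Rough sketch of the LOC idea, NOT the published equation.

    p: source-side channel of a binaural impulse response.
    D: averaging window (~100 ms). S_floor: the zero-nerve-firing
    line sits 20 dB below maximum loudness.
    """
    p = np.abs(np.asarray(p, float))
    p = p / p.max()                          # 0 dB = maximum loudness
    n5, nD = int(0.005 * fs), int(D * fs)

    # Direct sound: everything within 5 ms of the start.
    direct_db = 20.0 * np.log10(np.sum(p[:n5]) + 1e-12)

    # Cumulative log pressure of reflections from 5 ms to D, floored
    # by POS: values below the nerve-firing line are ignored.
    cum_db = 20.0 * np.log10(np.cumsum(p[n5:n5 + nD]) + 1e-12)
    firing = np.maximum(cum_db + S_floor, 0.0)

    late_db = np.mean(firing) - S_floor      # back onto the dB scale
    return offset + direct_db - late_db

# Toy cases: a dry seat (one weak reflection) versus a wet seat
# (dense reverberant tail swamping the direct sound).
fs = 48000
n = int(0.105 * fs)
dry = np.zeros(n); dry[0] = 1.0; dry[480] = 0.3
wet = np.zeros(n); wet[0] = 1.0; wet[480:] = 0.02
loc_dry = loc_db(dry, fs)
loc_wet = loc_db(wet, fs)
```

The sketch reproduces only the qualitative behavior claimed in the talk: the dry seat scores above the 0dB localization threshold, the reverberant one well below it.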

Broadband speech data verifies the LOC equation. Blue: experimental thresholds for alternating speech with a 1 second reverb time. Red: the threshold predicted by the localization equation. Black: experimental thresholds for RT = 2 seconds. Cyan: thresholds predicted by the localization equation.

Measures from live music

Binaural impulse responses from occupied halls and stages are very difficult to obtain! But if you can hear something, there must be a way to measure it. Part one of this talk describes a physiologically derived model of human hearing. The model arose from the search for a measure of "near" and "far", but it is capable of explaining (and measuring) far more. The model provides a means of separating sounds from multiple sources into independent neural streams, and allows independent analysis of each stream for pitch, timbre, and azimuth. The model may not be neurologically correct in detail, but it predicts many known properties of human hearing, and shows that all of them depend on the phase coherence of the incoming sound. It provides a method by which this phase coherence can be measured from binaural recordings of live music. This model is the subject of part one of this talk. Parts two and three show why the model is needed.

Part one – Physics

Part one describes a physical mechanism by which human hearing could detect pitch, timbre, azimuth and distance (near/far) of several sound sources at the same time, using the phase coherence of harmonics in the range of vocal formants (1000Hz to 4000Hz). The model is built from functions that are known to be present in human hearing. Signals from the basilar membrane are analyzed not just for their average amplitude, but for modulations produced by interference between harmonics.
This information derives from the phase relationships between harmonics. A conceptually simple mechanism is suggested that allows the information from these modulations to be separated into independent neural streams, one for each sound source. This mechanism explains our uncanny ability to detect the pitch, timbre, azimuth and distance of several sources at the same time, and it also predicts the observed decrease in these abilities in the presence of reflections.

The model need not be entirely correct to support the main point of this talk: its success in predicting what is and is not audible strongly supports the conclusions of parts two and three. The phase coherence of harmonics in the vocal formant range gives rise to our ability to separate sounds from multiple sources, and to independently perceive pitch, timbre, azimuth, and distance for each source. The acuity of these perceptions is reduced in the presence of reflections and reverberation in a consistent, predictable, and measurable fashion. In the absence of this acuity, sources become psychologically distant and non-engaging. The model provides a means for measuring the degree of phase coherence – and thus the engagement – in an individual seat using only live sound as an input.

Perplexing phenomena

The frequency selectivity of the basilar membrane is approximately 1/3 octave (~25%, or 4 semitones), but musicians routinely hear pitch differences of a quarter of a semitone (~1.5%). Clearly there are additional frequency-selective mechanisms in the human ear. The fundamentals of musical instruments common in Western music lie between 60Hz and 800Hz, as do the fundamentals of human voices. But the sensitivity of human hearing is greatest between 500Hz and 4000Hz, as can be seen from the ISO equal-loudness curves. Blue: 80dB SPL ISO equal-loudness curve. Red: 60dB equal-loudness curve. The peak sensitivity of the ear lies at about 3kHz. Why? Is it possible that important information lies in this frequency range?

More perplexing phenomena

Analysis of frequencies above 2kHz would seem to be hindered by the maximum nerve firing rate of about 1kHz. Why has evolution placed such emphasis on a frequency range that is difficult to analyze directly? A typical basilar membrane filter above 2kHz has three or more harmonics from each instrument within its bandwidth. How can we possibly separate them? How is it possible that in a good hall we can routinely detect the azimuth, pitch, and timbre of three or more sound sources (musicians) at the same time – even in a concert where a string quartet subtends an angle of +-5 degrees or less? (The ITDs and ILDs are minuscule.) Why do some concert halls prevent you from hearing several musical lines at once, and what can be done about it? The hair cells in the basilar membrane respond mainly to negative pressure: they approximate half-wave rectifiers, which are strongly non-linear devices.
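The mismatch between filter bandwidth and pitch acuity quoted above is easy to check with equal-tempered ratios (the specific variable names are illustrative):

```python
# The basilar membrane's ~1/3-octave filters are about four semitones
# wide, yet musicians resolve about a quarter of a semitone.
semitone = 2.0 ** (1.0 / 12.0)            # equal-tempered semitone ratio
filter_width = 2.0 ** (1.0 / 3.0) - 1.0   # 1/3 octave = 4 semitones, ~26%
pitch_acuity = 2.0 ** (1.0 / 48.0) - 1.0  # quarter semitone, ~1.5%
ratio = filter_width / pitch_acuity       # acuity far finer than the filter
```

The resolved pitch step is nearly twenty times finer than the filter bandwidth, which is the puzzle the amplitude-modulation model set out in this talk is meant to resolve.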
How can we claim to hear distortion at levels below 0.1%? Why do many creatures – certainly all mammals – communicate with sounds that have a defined pitch? Is it possible that pitched sounds have special importance to the separation and analysis of sound?

Answers

Answers become clear with two basic realizations:
1. The phase relationships of harmonics from a complex tone contain more information about the sound source than the fundamentals.
2. These phase relationships are scrambled by early reflections.

For example: my speaking voice has a fundamental of 125Hz. The sound is created by pulses of air when the vocal cords open, which means that exactly once in a fundamental period all the harmonics are in phase. A typical basilar membrane filter at 2000Hz contains at least 4 of these harmonics. The pressure on the membrane is a maximum when these harmonics are in phase, and reduces as they drift out of phase. The result is a strong amplitude modulation in that band at the fundamental frequency of the source. When this strong modulation is absent, or noise-like, the sound is perceived as distant.
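The 125Hz example can be simulated directly. The sketch below sums five harmonics that fall in a band near 2kHz, once with coherent (zero) phases and once with deterministically scrambled quadratic (Schroeder-style) phases; the harmonic count, band placement, and peak-to-mean metric are illustrative choices, not the author's analysis.

```python
import numpy as np

fs, f0, n = 8000, 125.0, 8000          # 1 s of signal, 64 samples per period
t = np.arange(n) / fs
harmonics = range(14, 19)              # 1750..2250 Hz, near a 2 kHz band

def band_envelope(phases):
    """Sum the harmonics with given phases; return the Hilbert envelope."""
    sig = sum(np.cos(2 * np.pi * k * f0 * t + ph)
              for k, ph in zip(harmonics, phases))
    spec = np.fft.fft(sig)
    h = np.zeros(n)
    h[0] = h[n // 2] = 1.0
    h[1:n // 2] = 2.0                  # analytic-signal weights
    return np.abs(np.fft.ifft(spec * h))

coherent = band_envelope([0.0] * 5)
# Schroeder-style quadratic phases: a deterministic "scrambled" case.
scrambled = band_envelope([-np.pi * k * k / 5.0 for k in range(5)])

# Coherent harmonics pile up once per fundamental period, so the
# envelope is much peakier than with scrambled phases.
peakiness_coherent = coherent.max() / coherent.mean()
peakiness_scrambled = scrambled.max() / scrambled.mean()
```

Both signals have identical power spectra; only the envelope peakiness – the cue the talk attributes to phase coherence – separates them.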

Basilar motion at 1600 and 2000Hz

Top trace: a segment of the motion of the basilar membrane at 1600Hz when excited by the word "two". Bottom trace: the motion of a 2000Hz portion of the membrane with the same excitation. The modulation is different because there are more harmonics in this band. In both bands there is strong amplitude modulation of the carrier, and the modulation is largely synchronous. When we listen to these signals the fundamental is easily heard. In this example the phases have been garbled.

Nerve firing rates

Nerve cells act like an AM radio detector, which recovers the frequency and amplitude of the modulation while filtering away the frequency of the carrier. This picture shows the amplitude envelope of the previous picture, plotted in dB. It represents the rate of nerve firings from each band. The rate varies over a sound pressure range of 20dB. Like the detectors in AM radios, the hair cells (probably) include AGC (automatic gain control), with about a 10ms time constant. The response over short times is linear, but appears logarithmic over longer periods.

AM radio

AM radio consists of a carrier at a fixed high frequency that has been linearly modulated by low-frequency signals. An AM receiver half-wave rectifies the carrier and filters out the high-frequency components. What remains is the recovered low-frequency signal. So an AM radio receiver uses a strongly non-linear device to recover a linear signal. But the rectification process can be viewed as a kind of sampling: it also produces aliases of the modulation. In the case of an AM radio the aliases are at very high frequencies, and can be easily filtered away. In the basilar membrane the carrier is close to the frequencies of the modulation, and the aliases can be problematic.
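The rectify-then-filter detection described above can be sketched in a few lines. The carrier and modulation frequencies, the brick-wall 500Hz cutoff, and the omission of AGC are all simplifying assumptions for illustration.

```python
import numpy as np

fs, n = 8000, 8000                       # 1 s at 8 kHz
t = np.arange(n) / fs
# A 2 kHz "carrier" amplitude-modulated at a 125 Hz "fundamental".
x = (1.0 + 0.8 * np.cos(2 * np.pi * 125 * t)) * np.cos(2 * np.pi * 2000 * t)

half_wave = np.maximum(x, 0.0)           # hair-cell-like rectification

# Low-pass away the carrier with a crude FFT brick-wall at 500 Hz.
spec = np.fft.rfft(half_wave)
freqs = np.fft.rfftfreq(n, 1.0 / fs)
spec[freqs > 500.0] = 0.0
recovered = np.fft.irfft(spec, n=n)

# The 125 Hz modulation survives rectification and filtering
# (roughly 0.8/pi in amplitude, reduced a little by aliasing),
# while the 2 kHz carrier is gone.
amp_125 = 2.0 * np.abs(np.fft.rfft(recovered)[125]) / n
amp_2000 = 2.0 * np.abs(np.fft.rfft(recovered)[2000]) / n
```

The strongly non-linear rectifier thus hands back a (nearly) linear copy of the modulation, exactly the trick the slide attributes to AM receivers and, by analogy, to the auditory nerve.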

Amplitude modulation of a noisy carrier

The motion of the basilar membrane when excited by phase-coherent harmonics appears to be an amplitude-modulated carrier – but the carrier is not at a fixed frequency; it is an artifact of a filter with a finite bandwidth, and its frequency is within the audio band. Thus rectification by the hair cells produces aliases that are both broadband and highly audible. Spectrum of the syllable "three" from the rectified and filtered 2000Hz 1/3-octave band (blue) and the 2500Hz 1/3-octave band (red): note that the fundamental frequency and its second harmonic are the same in both bands. The garbage is different.

Recovering a linear signal

To recover a linear signal from these hair cells we need to combine and compare the outputs from many overlapping critical bands. The aliases in each band are different because the carriers have different frequencies, but the modulations we wish to hear are nearly the same. Since for most signals the artifacts are not constant in time, we must also average the hair-cell firings over a period of time; my data suggests an averaging time of 100ms. Because the carrier is broadband, the aliases are also broadband. The signals are generally narrowband, so broadband signals may be ignored. Our hearing mechanism does all of these things.

An amplitude-modulation based basilar membrane model

A pitch detection model

A neural daisy-chain delays the output of the basilar membrane model by 22us at each step. Dendrites from summing neurons tap into the line at regular intervals, with one summing neuron for each fundamental frequency of interest. Two of these sums are shown: one for a period of 88us, and one for a period of 110us. Each sum produces an independent stream of nerve fluctuations, each identified by the fundamental pitch of the source.

Pitch acuity – a major triad in two inversions

Solid line: pitch detector output for a major triad – 200Hz, 250Hz, 300Hz. Dotted line: pitch detector output for the same major triad with the fifth lowered by an octave: 200Hz, 250Hz and 150Hz. Note the high degree of similarity, the strong signal at the root frequency, and the sub-harmonic at 100Hz.

Summary of the model

We have used a physiological model of the basilar membrane to convert sound pressure into demodulated fluctuations in nerve firing rates for a large number of overlapping (critical) bands. Our physiological model of the frequency separation mechanism is capable of analyzing the modulations in each band into perhaps hundreds of frequency bins. Strong, narrowband signals at particular frequencies are selected for further processing. The result: we have separated signals from a number of sources into separate neural streams, each containing the modulations received from that source. These modulations can then be compared across bands to detect timbre, and ITDs and ILDs can be found for each source to determine azimuth.
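A toy version of the tapped-delay-line detector can be sketched as follows, using a discrete sample step in place of the 22us neural delay; the tap count, candidate periods, and mean-square score are illustrative assumptions, not the author's implementation.

```python
import numpy as np

def period_scores(env, fs, periods_s, taps=8):
    """Sum the envelope signal at `taps` points spaced one candidate
    period apart (a 'summing neuron' per period). Taps align in phase
    only at the true fundamental period, giving a large score."""
    env = env - env.mean()               # keep firing-rate fluctuations only
    scores = {}
    for period in periods_s:
        d = int(round(period * fs))      # tap spacing in samples
        span = len(env) - taps * d
        acc = np.zeros(span)
        for k in range(taps):            # dendrites tapping the delay line
            acc += env[k * d : k * d + span]
        scores[period] = np.mean(acc ** 2)
    return scores

# Envelope fluctuating at a 125 Hz fundamental (period 8 ms).
fs = 8000
t = np.arange(2 * fs) / fs
env = 1.0 + 0.8 * np.cos(2 * np.pi * 125.0 * t)

scores = period_scores(env, fs, [0.007, 0.008, 0.009])
best = max(scores, key=scores.get)       # the 8 ms candidate wins
```

Each candidate period behaves as an independent comb filter over the demodulated envelope, which is one way the separated neural streams described in the summary could be labeled by fundamental pitch.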

Advantages

The separated streams from each source can be easily analyzed for timbre, ITD and ILD with known neural circuits. The model is conceptually simple: it is built out of a (large) number of building blocks that are known to exist in human neurology, and it is easy to see how it could have evolved. The circuit is fast: useful data on timbre, ITD, and ILD is available within 20ms of the first input. As the sound is held, pitch and azimuth acuity increases. Because the ILD is created by high-frequency harmonics, small differences in azimuth can create large differences in level. Thus azimuth acuity is high enough to explain our ability to localize musicians.

Speech without reverberation: 1.6kHz-5kHz

Note that the voiced pitches of each syllable are clearly seen. Since the frequencies are not constant, the peaks are broadened – but the frequency grid is 0.5%, so you can see that the discrimination is not shabby.

Speech with reverberation: RT=2s, D/R -10dB

If we convolve speech with a binaural reverberation of 2 seconds RT and a direct/reverberant ratio of -10dB, the pitch discrimination is reduced – but still pretty good! The binaural audio sounds clear and close.

Speech with reverberation: RT=1s, D/R -10dB

When we convolve with a reverberation of 1 second RT and a D/R of -10dB, the brief slides in pitch are no longer audible. Although most of the pitches are still discernible, roughly half the pitch information is lost. This type of picture could be used as a measure of distance or engagement. The binaural audio sounds distant and muddy.

Two violins recorded binaurally, +-15 degrees azimuth

Left ear – middle phrase. Right ear – middle phrase. Note the huge difference in the ILD of the two violins: clearly the lower-pitched violin is on the right, the higher on the left. Note also the very clear discrimination of pitch. The frequency grid is 0.5%.

The violins in the left ear – 1s RT, D/R -10dB

When we add reverberation typical of a small hall, the pitch acuity is reduced, and the pitches of the lower-pitched violin on the right are nearly gone. But there is still some discrimination for the higher-pitched violin on the left. Both violins sound muddy, and the timbre is poor!

Localization – a poor seat

Here is a similar diagram for a solo violin in row 11 of the same hall. The sound here is unclear, and the localization of the violin is poor. As can be seen, the number of localizations per second is low (in this case the value really depends on the setting of the threshold in the software). Perhaps more tellingly, the azimuth detected seems random. This is really just noise, and is perceived as such.

Measures based on harmonic coherence

In the absence of reflections, the formant frequencies above 1000Hz are amplitude-modulated by the phase coherence of the upper harmonics. This modulation is easily heard, creating the perception of "roughness" (Zwicker). Reflections randomize the phase of these harmonics. The result is highly audible, and is a primary cue for the distance of an actor, singer, or soloist. This effect can be measured with live recordings, and is sensitive to both medial and lateral reflections.

This graph shows the frequency and amplitude of the amplitude modulation of a voice fundamental in the 2kHz 1/3-octave band. The vertical axis shows the effective D/R ratio at the beginning of two notes from an opera singer in Oslo, heard from the front of the third balcony (fully occupied). The sound there is often muddy, but the fundamental pitch of this singer came through strongly at the beginning of these two notes. He seemed to be speaking directly to me, and I liked it.

Another singer

From the same seat, the king (in Verdi's Don Carlos) was not able to reach the third balcony with the same strength. Like the localization graph shown in a previous slide, this graph seems to be mostly noise. The fundamental pitches are not well defined. The singer seemed muddy and far away. His aria can be heart-rending, but here it was somewhat muted by the acoustics.
We were watching the king feel powerless and forlorn. But we were not engaged.
