Reading Activity Week #11 (Due Monday)


Please read chapter 10. After reading chapter 10, please respond to the following questions:

What were three things from the chapter that you found interesting? Why were they interesting to you? Which one thing did you find the least interesting? Why? What did you read in the chapter that you think will be most useful to you in understanding Sensation & Perception? Finally, indicate two topics or concepts that you might like more information about.

Note: Keep in mind that there are no scheduled exams. When you make your posts, make sure they are of sufficient caliber that they could be used as notes for a test - since the posts are what we are doing in lieu of an exam. Be sure to use the terms and terminology in your posts.

Once you are done with your post, make a list of the terms and terminology you used in your post.

36 Comments

While reading chapter 10, I found many concepts interesting to learn about, but a few were particularly interesting. I enjoyed reading about how simple sounds like sine waves are useful for exploring fundamental operating characteristics of auditory systems, just as sinusoidal contrast gratings and single-wavelength light sources are vital tools for researchers studying vision. Pure sine wave tones are uncommon in the real auditory world, in which the things that matter to listeners are more complex and more difficult for researchers to study. Many sounds, like the human voice and musical instruments, possess harmonic structure. These sounds are among the most common types of sounds in our environment. The lowest-frequency component of a harmonic spectrum, or complex periodic sound, is called the fundamental frequency. There is also energy at frequencies that are integer multiples of the fundamental frequency. The textbook used the example of a female speaker producing a vowel sound with a fundamental frequency of 250 Hz. The vocal cords will produce the greatest energy at 250 Hz, less at 500 Hz, even less at 750 Hz, and so on. In this example, 500 Hz is the second harmonic and 750 Hz the third harmonic. For harmonic complexes, the perceived pitch of the complex is determined by the fundamental frequency, and the harmonics add to the perceived richness of the sound. Because our auditory system is acutely sensitive to the natural relationships between harmonics, if the first harmonic (the fundamental frequency) is eliminated from a series of harmonics and only the remaining harmonics are presented, the pitch that listeners hear corresponds to the fundamental frequency, despite it not being part of the sound. I found this concept interesting because it means the listener still hears the missing fundamental, and it's not necessary to have all the harmonics present to hear it - only a few are needed.
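
To make the missing-fundamental idea concrete, here is a minimal Python sketch (my own illustration, not from the textbook) that builds a 250 Hz harmonic complex and then removes the fundamental. The 1/n amplitudes are an arbitrary choice standing in for the "greatest energy at 250 Hz, less at 500 Hz" pattern described above.

    # A minimal sketch of a harmonic complex with a "missing fundamental":
    # energy at 500, 750, and 1000 Hz but none at 250 Hz, yet listeners
    # still report a pitch of 250 Hz. Sample rate and duration are arbitrary.
    import numpy as np

    fs = 44100                      # sample rate (Hz)
    t = np.arange(0, 0.5, 1 / fs)   # 0.5 s of time samples
    f0 = 250                        # fundamental frequency (Hz)

    # Full harmonic complex: fundamental plus harmonics, with decreasing energy.
    full = sum((1 / n) * np.sin(2 * np.pi * n * f0 * t) for n in range(1, 5))

    # Remove the fundamental (n = 1); only harmonics 2-4 remain.
    missing = sum((1 / n) * np.sin(2 * np.pi * n * f0 * t) for n in range(2, 5))

    # Both waveforms still repeat every 1/250 s = 4 ms, which is why the
    # perceived pitch corresponds to 250 Hz even with no energy at 250 Hz.
    print(f"Waveform period: {1000 / f0:.1f} ms -> perceived pitch {f0} Hz")

The key observation is that removing the 250 Hz component does not change the waveform's repetition period, which stays at 4 ms; the pitch the listener hears tracks that periodicity.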

I also found the psychological sensation of timbre interesting to learn about, because I think it's amazing that our auditory system can judge that two sounds with the same loudness and pitch are dissimilar. Timbre quality is conveyed by harmonics and other high frequencies. Loudness and pitch are easy to describe because they correspond well to simple acoustic dimensions, which we learned are amplitude and frequency. However, the richness of complex sounds depends on more than the simple sensations of loudness and pitch. The textbook offered a helpful example for understanding this concept: a trombone and a tenor saxophone might play the same note (same fundamental frequency) at exactly the same loudness (sound waves with identical intensities), but a person would have no trouble discerning that two different instruments were being played. The perceptual quality that differs between these two musical instruments, as well as between vowel sounds like those in the words hot, heat, and hoot, is referred to as timbre. Differences in timbre between musical instruments or vowel sounds can be estimated fairly closely by comparing the overall spectra of the two sounds. Thus, timbre must involve the relative energy of spectral components, and perception of timbre depends on the context in which a sound is heard. The way a complex sound begins, called the attack of the sound, and ends, called the sound's decay, is another important quality. Auditory systems are sensitive to attack and decay characteristics. A helpful example from the textbook showed that important contrasts between speech sounds, such as bill and will, or cheat and sheet, relate to differences in how quickly sound energy increases during the attack. How quickly a sound decays depends on how long it takes for the vibrating object creating the sound to dissipate energy and stop moving.
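
A small sketch can show the spectral side of this. The harmonic weightings below are invented for illustration (not actual trombone or saxophone spectra): two tones share a fundamental frequency and matched overall level, yet differ in the relative energy across harmonics - the difference we hear as timbre.

    import numpy as np

    fs = 44100
    t = np.arange(0, 0.5, 1 / fs)
    f0 = 233  # one shared note for both tones

    def complex_tone(amps, f0, t):
        """Sum harmonics n = 1, 2, ... with the given relative amplitudes."""
        return sum(a * np.sin(2 * np.pi * n * f0 * t)
                   for n, a in enumerate(amps, start=1))

    # Hypothetical weightings: "bright" puts more energy in high harmonics.
    mellow = complex_tone([1.0, 0.5, 0.2, 0.1, 0.05], f0, t)
    bright = complex_tone([0.3, 0.6, 0.9, 0.7, 0.5], f0, t)

    # Match RMS level so loudness cues are (roughly) equalized.
    bright *= np.sqrt(np.mean(mellow**2) / np.mean(bright**2))

    # Same f0 (same pitch), same RMS (similar loudness), different spectra.
    print(np.sqrt(np.mean(mellow**2)), np.sqrt(np.mean(bright**2)))

Plotting the spectra of the two signals would show the same component frequencies but different relative energy across them - exactly the comparison of overall spectra described above.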

The concept of restoration of complex sounds was also interesting to learn about, because listeners are able to hear a sentence, piece of music, or speech as intact even if a component of the complex sound is replaced with silence or a burst of noise. What I find especially interesting is that this phenomenon occurs even if the listener knows that a small part of the sentence, music, etc. will be removed or replaced. If familiar music is played and a portion of the notes is replaced by noise, listeners still perceive the missing notes as present, and cannot report which notes were replaced by the noise. This perceptual restoration also occurs in speech. In 1971, Warren and Obusek played the sentence "The state governors met with their respective legi*latures convening in the capital city" with the first "s" in legislatures removed and replaced by silence, a cough, a burst of noise, or any of a few other sounds. Despite the missing "s," listeners heard the sentence as if it were intact and complete, even when they were warned that a component of the sentence had been removed and replaced with silence or another sound. The listeners were unable to accurately report where the sentence had been changed, except when the missing "s" was replaced with silence. Meaningful sentences actually become more intelligible when gaps are filled with intense noise than when gaps are left silent. In other words, filling gaps with noise supports comprehension better than leaving them silent.
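
The stimulus manipulation in studies like Warren and Obusek's is easy to mock up. The sketch below is schematic (white noise stands in for a recorded sentence, and the gap position and duration are arbitrary choices), but it shows the two conditions: a gap left silent versus a gap filled with louder noise.

    import numpy as np

    rng = np.random.default_rng(0)
    fs = 44100
    speech = rng.standard_normal(fs)  # stand-in for 1 s of recorded speech

    # Excise a 50 ms segment (the "s" in "legislatures", say).
    start, length = int(0.40 * fs), int(0.05 * fs)

    silent_version = speech.copy()
    silent_version[start:start + length] = 0.0  # gap left silent

    noise_version = speech.copy()
    noise_version[start:start + length] = 3.0 * rng.standard_normal(length)  # gap filled with intense noise

    # The finding: listeners hear the noise version as intact and cannot
    # locate the replacement, but they do notice the silent gap.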

I found the concept of grouping by timbre to be the least interesting to read about in this chapter, because it seemed to be more common sense than other concepts in the chapter that required more explanation. When a sequence of tones that have increasing and decreasing frequencies is presented, tones that deviate from the rising and falling pattern are heard to “pop out” of the sequence. If the tones are simple sine waves, two streams of sound are heard without overlapping pitches. One of the streams includes all of the high tones, and the other has all of the low tones. However, if harmonics are added to one of these sequences, which creates a richer timbre, two overlapping patterns are heard as distinct. I know the concept is important for the understanding of how our auditory system helps us to perceive our environment, but it wasn’t that interesting for me to read.

I think the concept of restoration of complex sounds will be the most useful to my greater understanding of sensation and perception, because there are so many instances in our daily lives where bursts of noise replace components of complex sounds that we are focusing our auditory attention on, and it is beneficial that we can still perceive the complex sound as if nothing had been removed or replaced at all. This can be important in our daily lives when we are trying to listen to instructions involving our safety, or a lecture by a professor about why a concept is important, or a news reporter relaying updates on a big news story we have been following. All of these activities could be happening while the phone is ringing, the dog is barking, friends are yelling and laughing, and so on. It's beneficial that our auditory system, in conjunction with our brain, allows us to accurately piece together a sentence, song, or set of instructions without having to hear every phoneme, note, or word.

I would like more information about the restoration of complex sounds because the textbook was very informative, but I would like more explanation of how our brain and auditory system work together to fill in the missing gaps accurately. I would also like more information about the concept of auditory stream segregation. I know that it is the perceptual organization of a complex acoustic signal into separate auditory events, each of which is heard as a separate stream, but I don't fully understand how this works. Some examples, or maybe just a more detailed explanation, would be helpful to me.

Terms: sine waves, auditory systems, sinusoidal contrast gratings, single-wavelength, sine wave tones, harmonic structure, harmonic spectrum, complex periodic sound, fundamental frequency, vocal cords, harmonic complexes, pitch, timbre, loudness, attack, decay, restoration of complex sounds, perceptual restoration, grouping by timbre, tones, streams of sound, overlapping pitches, phoneme, auditory stream segregation, acoustic signal, auditory events

Pretty amazing that all these sounds are just combinations or reverb of some predictable and detectable range of frequencies. Good post.

The first topic I found interesting in chapter 10 was interaural time difference. ITD is the difference in time between a sound arriving at one ear versus the other. Obviously, if a sound hits the right ear first, we will hear it in the right ear first, and so on. The way we describe the angle at which a sound reaches the ears is azimuth. Azimuth is the angle of a sound source on the horizontal plane relative to a point in the center of the head between the ears. Azimuth is measured in degrees, with 0 degrees being straight ahead. The angle increases clockwise toward the right, with 180 degrees being directly behind. Sounds from both ears are relayed to the medial superior olive. At this spot, the brain determines the time difference between sound arriving at one ear compared to the other. Next, sound is judged by the difference in its intensity from one ear to the other - the interaural level difference. Finally, the sound goes through the lateral superior olive (in the brain stem), where inputs from both ears contribute to the detection of the interaural level difference.
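
For a sense of the magnitudes involved, here is a sketch using the classic Woodworth spherical-head approximation - a standard textbook model, though not necessarily the one in this chapter - which estimates ITD from azimuth, an assumed head radius, and the speed of sound.

    import numpy as np

    def itd_seconds(azimuth_deg, head_radius=0.0875, c=343.0):
        """Woodworth spherical-head approximation of interaural time difference.

        azimuth_deg: source angle (0 = straight ahead, 90 = directly right).
        head_radius: meters (roughly an adult head); c: speed of sound in m/s.
        """
        theta = np.radians(azimuth_deg)
        return (head_radius / c) * (theta + np.sin(theta))

    for az in (0, 30, 60, 90):
        print(f"azimuth {az:3d} deg -> ITD {itd_seconds(az) * 1e6:6.1f} microseconds")

At 90 degrees azimuth this gives roughly 650 microseconds, about the largest time difference a human head produces; listeners can detect ITDs far smaller than that.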

Another topic I found of interest was complex sounds. Complex sounds make up most of what we hear in our environment. Complex sounds are made up of harmonics. A harmonic series begins with the fundamental frequency: the lowest frequency component of a complex periodic sound. The cool thing about complex sounds is that two sounds might have the same loudness and fundamental frequency but sound completely different. This is called timbre: the psychological sensation by which a listener can judge that two sounds with the same loudness and pitch are dissimilar. Timbre quality is conveyed by harmonics and other high frequencies. All sounds start with an attack and end with decay. Attack is the part of the sound during which amplitude increases, and decay is the part during which amplitude decreases.
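
A brief sketch of how attack and decay shape a sound (my own illustration; the envelope times are arbitrary): the same 440 Hz tone with a fast attack and gradual decay sounds roughly like a plucked string, while stretching the attack to a couple hundred milliseconds makes the same tone sound more bowed.

    import numpy as np

    fs = 44100
    t = np.arange(0, 1.0, 1 / fs)
    tone = np.sin(2 * np.pi * 440 * t)

    # Fast attack (10 ms) followed by a slow exponential decay.
    attack_samples = int(0.010 * fs)
    envelope = np.ones_like(t)
    envelope[:attack_samples] = np.linspace(0, 1, attack_samples)       # attack: amplitude rises
    envelope[attack_samples:] *= np.exp(-3 * t[:len(t) - attack_samples])  # decay: amplitude falls

    plucked = tone * envelope

    # Same frequency and similar decay with a 200 ms attack would sound
    # noticeably different -- attack and decay are part of what we hear.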

One last topic I found interesting was spatial, spectral, and temporal segregation. This is how we segregate sounds from one another. Sounds that stay in one place are easier to separate out than sounds that are moving, and sounds with the same or similar pitch are more likely to be treated as coming from the same source. Dividing the world into separate auditory objects is known as auditory stream segregation. What I find cool about all this is the fact that we have many different ways to determine and decipher sounds from one another.

One thing I didn't like was the inverse square law. It is a principle stating that as distance from a source increases, intensity decreases such that the decrease in intensity is proportional to the distance squared. This general law also applies to optics and other forms of energy. I was a little confused about how this works and how exactly these things are measured.
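
A few lines of arithmetic may help with the confusion: the law just says intensity is proportional to 1/distance², so each doubling of distance quarters the intensity (about a 6 dB drop). A minimal sketch:

    import numpy as np

    def relative_intensity(d, d_ref=1.0):
        """Intensity at distance d relative to intensity at d_ref (inverse-square law)."""
        return (d_ref / d) ** 2

    for d in (1, 2, 4, 8):
        ratio = relative_intensity(d)
        print(f"{d} m: {ratio:.4f} of reference ({10 * np.log10(ratio):+.1f} dB)")

So a source at 2 m delivers a quarter of the intensity it delivered at 1 m, and at 8 m only 1/64 of it - which is how distance gets "measured" by the ear from intensity alone.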

I think the most useful thing I learned in this chapter was about complex sounds. It was interesting to figure out how similar sounds can be different at the same time. Also, it was useful to learn what most sounds in the environment are made up of.

I would like to learn more about inverse square law and head-related transfer function.

Terms: HRTF, inverse square law, auditory stream segregation, attack, decay, timbre, complex sounds, harmonics, fundamental frequency, interaural level difference, lateral superior olive, medial superior olive, azimuth, interaural time difference.

It's cool that we are able to localize sounds just based on the physics that sound coming from various locations gets to one ear before the other. Good post.


The previous chapter was on the components of hearing; this chapter is more about how we hear and the ways we can locate sounds. I hadn't thought a lot about it personally - as I'm learning, I've been taking my hearing for granted. The interaural time difference is the difference in time between a sound arriving at one ear versus the other. Which, when thought of in simple terms, makes sense: a sound on the left side will reach the left ear a lot quicker than it will the right, thus providing a way to locate said sound. The term azimuth applies to that sound's location, as the azimuth is the angle of a sound source on the horizontal plane relative to a point in the center of the head between the ears. The azimuth is measured in degrees, with 0 being straight ahead, going clockwise, with 180 degrees being directly behind. Within the brain, the medial superior olive (MSO) is a relay station in the brain stem where inputs from both ears contribute to the detection of the interaural time difference. This reminds me somewhat of sonar, only the sound comes from somewhere else and our ears are the ones receiving it. Another interesting thing that relates to sound location is the interaural level difference, another means of sound localization. ILD is the difference in level (intensity) between a sound arriving at one ear versus the other. So a sound closer to the left or right ear is more intense at that ear than at the other; ILD is at its highest potential at 90 and -90 degrees and its lowest at 0 or 180 degrees, which is straight ahead or directly behind. That makes sense, since to sneak up on a person you usually come up behind them and watch them jump. But another difference between the interaural time and level differences is that ILDs are relayed to the lateral superior olive, a station in the brain stem where inputs from both ears contribute to the detection of the interaural level difference.

The third interesting thing in this chapter was the attack and decay of sounds: how sounds begin and end. Attack is the part of a sound during which amplitude increases - the beginning or onset of the sound. It is how a person hears the differences in words from the simple onset differences between words like 'bill' and 'will'. Decay is the part of a sound during which amplitude decreases, ends, or offsets. These simple features distinguish differences between words, musical instruments, and similar sounds in the environment.

The least interesting thing in this chapter would have to be the cone of confusion, a region of positions in space where all sounds produce the same time and level differences (ITDs and ILDs). I sort of grasp the idea of it but can't quite follow it, and wouldn't mind some more detail and understanding. There is also the inverse-square law, a principle stating that as distance from a source increases, intensity decreases such that the decrease in intensity is proportional to the distance squared. This general law, which also applies to optics and other forms of energy, is a hard thing to understand and confuses me, thus making it less interesting.

Terms: inverse-square law, attack, decay, interaural level difference, interaural time difference, medial superior olive, lateral superior olive, azimuth

Sound is difficult for me to understand as well. Some of the concepts I just don't experience that often or think about regularly, so I think we feel removed from them, even though we experience them daily.

The first set of concepts I really enjoyed from this week's chapter was the section on complex sounds. I have been very involved in singing and other musical pursuits for as long as I can remember, therefore terms like harmonics and timbre are nothing new to me. However, reading about them from a purely scientific perspective is nothing short of fascinating. The study of harmonics is particularly neat because it recognizes more of a real-world set of environmental sounds, with more complex spectra of frequencies for certain noisemakers. Whether it be a plucked guitar string or the human voice, a specific note actually produces a wide range of audible frequencies that occur at multiples of the lowest tone, or fundamental frequency. A similar concept can make this a bit easier to understand - when singing a cappella (most notably in barbershop music), oftentimes when all the notes of a chord line up perfectly in tune, an additional pitch can be heard above those that are being sung by the vocalists. This "phantom pitch" is called an overtone, and is directly related to the harmonics discussed in our textbook.

I also really enjoyed the topic of continuity effects/perceptual restoration. Given our earlier discussion of similar restorative properties in the visual system, it is not surprising to hear these sorts of phenomena exist, but it is a bit harder a pill to swallow this time around. I would say this is probably the case because we do not typically think of auditory stimuli in the same way as most visual ones. In a majority of cases, sound is fleeting and mutable, while most of what we see is fairly structured and unchanging. Therefore it does not seem that unreasonable for our visual system to "fill in the gaps" of a fairly constant environment, but (at least to me) establishing corresponding perceptual continuity within the auditory system seems much more remarkable. Despite my initial reservations, it is pretty obvious that even without the scientific evidence for such functions, such a system must operate to some degree. We must fill in the gaps for unwanted interruptions; otherwise we would only understand conversations that occurred amidst near or complete silence! It is also quite interesting to note that interruptive noise is actually better than intermittent periods of silence when it comes to auditory restoration.

And finally, I really enjoyed one of the more fundamental sections of the chapter - namely, the portion about auditory distance perception. This was one of the areas I felt was key to understanding the rest of the chapter. We frequently take this ability for granted, but we human beings are actually fairly adept at identifying the relative distances of sound sources within a reasonable range. Taking this into account, it is helpful to know that the intensity of a sound decreases on a magnitude of the distance squared, a concept known as the inverse-square law. This explains why it is usually pretty easy to pick out relative distances between objects at close range, but at an equal distance difference further away this task is much more difficult. This is also rather intuitive from an evolutionary perspective, because 99 times out of 100 a person's survival is likely to depend on relative sound differences nearer to them than those that are further away.

I would say the least interesting part of the chapter was probably the section on the "cone of confusion". This appropriately-named concept is equally tough to understand, though it probably would be interesting if I could fully wrap my brain around it. I think the most useful thing about this chapter was how it related isolated effects that could be created and observed in a laboratory to those that are more common in the real world, because most of these synthetic phenomena are only valuable so long as they can be applied to actual settings. I would enjoy learning more about perceptual restoration and some clarification on the cone of confusion.

Terms: complex sound, harmonic, timbre, fundamental frequency, overtone, continuity effect, perceptual restoration, relative intensity, inverse-square law, cone of confusion

Great post. I'm glad you related it to your own life and realized how integrated these concepts are in a lot of what we experience on a regular basis.

Chapter 10 was an interesting chapter, and even though it included hearing, I enjoyed it much better than chapter 9. First of all, I was interested in how the brain can determine where an object is using only the sound it creates. The brain uses multiple sources to determine the position of the object in space. The first is called an interaural time difference. This is how the brain knows what side of the body a sound is on, due to the small difference in time that the sound takes to get to each ear. This is one reason why the auditory system must be very quick (discussed in chapter 9). If the auditory system were any slower, it wouldn't be able to detect a difference between the two flows of neural information coming in. As discussed in chapter 9, the visual system can perceive a flowing scene from a rapid display of stationary images, but if the auditory system were presented stimuli at equal time intervals, it would detect individual stimuli, not a continuous scene. This is because the auditory system is much faster in receiving and sending signals to its respective cortex. Furthermore, interaural time difference is a result of this blazing fast processing. Other cues that the brain uses to create "depth" from the auditory system were also very interesting to me. Another cue that is intensely related to the ITD is the ILD, or interaural level difference. This cue also helps to detect where the source of the sound is in space; however, this cue uses differences in sound intensity rather than time differences.

The most interesting part of this chapter was the cones of confusion. Using basic geometry, if the entire area around the head (360 degrees) is taken in by only two points (the ears), there will be pairs of places that any one sound could be coming from. Imagine a line going straight out from your ear as an axis of symmetry: every point could be confused as a source with its mirror point across that line. If the source of a sound is at 45 degrees from your head, it could be mistakenly deduced that the source of the sound is at 135 degrees as well. These points around your head combine to form geometric cones, which are called the cones of confusion. One way your body helps with accuracy concerning the cones of confusion is the pinnae of your ears. As you have probably noticed, your pinnae are not symmetrical front to back; this asymmetry creates small differences in the sound that goes into your ear canal. These small differences help pin down where the source of the sound is coming from.

One area of this chapter that I found interesting, but want to learn more about, is the brain's analysis of harmonics. One way your brain can perceive complex sounds is through harmonics. A harmonic complex consists of a fundamental frequency - a sine wave - combined with other sine waves at integer multiples of that frequency. All of these sine waves repeat together over the period of the fundamental frequency. One thing that interested me, but I wish I knew more about, is the phenomenon of the missing fundamental. I understand that an individual will perceive a note as having its fundamental frequency even when that frequency is absent. However, what I don't understand is how we perceive notes in the natural world higher than their fundamental frequency. The chapter was very brief about this subject, and left me feeling clueless.

I know this sounds like a ridiculous answer, but I didn't really have a part of this chapter that I didn't like. There are sections that I enjoyed, but there aren't any sections that stand out for being unenjoyable.

Terms: Interaural Time Difference, Interaural Level Difference, Cones of Confusion, pinnae, harmonics, fundamental frequency.

Glad you enjoyed finding out more about sound.

After reading chapter 10, I found the majority of the chapter interesting and easy to understand. I thought this chapter really went over some of the key points in sensation and perception. All of the information seemed to relate to how we perceive sound, and what makes it easier or more difficult to perceive different or identical sounds.

I thought the section about sound localization was interesting. I found it fascinating that humans can detect and know where a sound is coming from so quickly. This concept really adds to what was discussed in class last week and what was in chapter 9, because our auditory system is one of our most important senses. We do not need to see a sound's source to know where the sound is coming from. When we hear sounds, we usually automatically know where they are coming from. Although we can detect sound so quickly, it does arrive at one ear before the other. Also, wherever the sound is coming from, it will be more intense at whichever ear is closer. The two cues that are most important in sound localization are interaural level difference and interaural time difference. Interaural time difference is the difference in time between the sound reaching one ear and reaching the other ear. Interaural level difference is the difference in intensity from one ear to the other.

One other section I found to be interesting was the section on timbre. This was interesting to me because as humans we can determine the difference between sounds that are very similar to one another. Timbre is what lets someone determine the difference between two sounds that have the same loudness and pitch. The example in the chapter really helped me understand the concept of timbre: if a trombone and a saxophone play the same note at the same level, they will have the same fundamental frequency and loudness, but we can still perceive each note as sounding different. Timbre is also affected by the environment. Depending on the environment we are in, we can perceive sounds that are the same as being different.

I also found the section on restoration of complex sounds to be interesting. This section caught my attention because even if we don't hear a sound that is supposed to be there, we can still determine what the sound is. The example in the book was taking one letter out of a sentence; although the letter was missing, people could perceive what the sentence was saying. This only worked if the missing sound was "covered" by another, like a closing door or a cough. If the sound was just replaced by silence, it was much easier for someone to determine that something was missing. I also found it interesting that depending on the context in which the sentence is said, we fill in the sound that we think fits best. I thought the chapter and authors did a good job of explaining this. The example they gave was "the eel fell off the car" and "the eel fell off the table." More than likely we fill the first missing sound with "wheel" and the second with "meal."

One of the areas I found to be uninteresting in chapter 10 was the section about cones of confusion. The main reason I found this uninteresting was because I thought the chapter should have gone into more detail about it. I thought this might be something important, because it was stated that there are many different cones and the widest cone is the hardest to understand, but they didn't really explain it.

I thought the majority of the chapter was important in understanding sensation and perception. One of the main sections I found important was the section about pinna and head cues. I found this section important because it really went into detail about how each person can perceive sound differently, since each person's pinna is different from the next person's. Also, our perception of sound differs with the way we hear the sound, like listening to a concert live versus listening to the same songs through headphones. When listening to music through headphones we do not use the pinna, so our perception of the sound is different. I also thought the head-related transfer function was important, because it takes most of our body, like our head and torso, into consideration in how we perceive a sound and how the frequencies differ.

I would like to know more about attack and decay, and how a sound starts and stops. This was interesting to me because our auditory systems are so sensitive to attack and decay. I would like to know more about this because this chapter did not go into much detail about the subject. I would also like to know more about the concept of timbre and how we determine the difference between the “same sounds.” Also I would like to know more about what exactly makes those “same” sounds so different when we perceive them.

Terms: sound localization, auditory system, interaural level difference and interaural time difference, timbre, loudness and pitch, restoration of complex sound, cones of confusion, pinna and head cues, head related transfer function, attack and decay

Interesting how there is so much to know about what we hear and experience every day!


Three things I found interesting in chapter 10 were interaural time difference or ITD, head-related transfer function or HRTF, and auditory scene analysis. ITD is the difference in time between a sound arriving at one ear versus the other. A way to describe a sound's location is by its position on an imaginary circle that extends around us in the horizontal plane; this angle is called the azimuth. Azimuth is measured in degrees, with 0 being straight ahead. The angle increases clockwise toward the right, with 180 degrees being directly behind. Our two ears receive slightly different inputs when the sound source is located to one side or the other. Interestingly enough, for frequencies above about 1000 hertz, the head blocks some of the energy from reaching the far ear. Binaural input enters almost every stage of the auditory nervous system after the auditory nerve. The medial superior olives or MSOs are the first places in the auditory system where inputs from both ears converge, so this is the first place to look for ITD detectors. The second cue for sound localization is the interaural level difference or ILD: the difference in sound intensity between the ears. Sounds are more intense at the closer ear because the head partially blocks the sound pressure wave from reaching the opposite ear. Neurons that are sensitive to intensity differences between the two ears can be found in the lateral superior olives or LSOs. These receive both excitatory and inhibitory inputs.

Head-related transfer function describes how the pinna, ear canal, head, and torso change the intensity of sounds with different frequencies that arrive at each ear from different locations in space. The importance of HRTF for sound localization is easy to appreciate when we compare being at a live concert with listening to one through headphones. It is possible to simulate HRTFs as well: two microphones are placed near the eardrums, and then the sound source is recorded from these two microphones. Listeners learn how HRTFs relate to places in the environment through extensive experience listening to sounds while other sources of information provide feedback about location. The inverse-square law is a principle stating that as distance from a source increases, intensity decreases such that the decrease in intensity is proportional to the distance squared.

For an auditory scene, the situation is greatly complicated by the fact that all the sound waves from all the sound sources in the environment are summed together in a single complex sound wave. Somehow the auditory system contends quite well with this, perceiving a world of easily separable sounds. This distinction of auditory events or objects in the broader auditory environment is commonly referred to as source segregation or auditory scene analysis.

One thing that I did not like about the chapter was the section about timbre. I find timbre interesting, but in the book I was confused by all the graphs and the sections about test vowel violations. I also didn't find it interesting because of the small amount of information that was given. Something that I would like to learn more about is the restoration of complex sounds and cones of confusion.

TERMS: cones of confusion, restoration of complex sounds, auditory scene analysis, head-related transfer function, pinna, ear canal, head, torso, inverse-square law, interaural time difference, azimuth, binaural input

People who study sound and audition really get intense about it and have a really deep understanding of it.

The two auditory localization cues were very interesting to me because they are crucial to our functioning. Interaural time difference is one way we tell which direction a sound is coming from. It works by determining which ear heard the sound first. At first glance, ITD seems like a common sense principle, but think of a time when it was hard to tell where a sound was coming from. The azimuth helps to determine where the sound is located in space by measuring the angle of the sound source relative to one's head. More specifically, 0 degrees azimuth is straight in front of the person and 180 degrees azimuth is directly behind the person; the other directions vary between these two points. The development of the ITD is crucial for its functioning. ITD processing must develop during the first few months of life; it cannot fully develop before the infant is born. The reason for this is that the size of the head affects the ITD, and if a person stayed adapted to the size of an infant's head, they would have a very difficult time, because their sound localization abilities would decline as their head reached adult size. This is comparable to specific functions within the visual system that intentionally develop late. The second auditory localization cue is interaural level difference, ILD. The ILD is similar to the ITD in the sense that it helps to determine which direction the sound is coming from, but unlike the ITD, the ILD measures the difference in the intensity of the sound at the two ears. ILD is strongest at 90 degrees and -90 degrees (directly to either side of the head) and virtually useless at 0 and 180 degrees. ITDs are also ambiguous at 0 and 180 degrees, since the sound reaches both ears at the same time, which is part of why front/back positions are hard to localize.
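
A toy calculation (an idealized two-point model, an assumption of my own rather than the textbook's) illustrates this geometry: the ITD grows roughly with the sine of the azimuth, peaking at plus or minus 90 degrees and vanishing at 0 and 180 degrees.

    import numpy as np

    # Idealized two-ear model: ITD ~ (ear separation / speed of sound) * sin(azimuth).
    ear_sep, c = 0.18, 343.0   # approximate inter-ear distance (m), speed of sound (m/s)

    for az in (0, 45, 90, 135, 180):
        itd = (ear_sep / c) * np.sin(np.radians(az))
        print(f"azimuth {az:3d} deg -> ITD ~ {itd * 1e6:5.1f} microseconds")

Note that 45 and 135 degrees give the same ITD in this model - exactly the front/back ambiguity of the cone of confusion, which head turns and pinna cues resolve.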

I found it useful when the book used the metaphor of listening to music live versus through headphones to describe head-related transfer functions (HRTF). The HRTF is useful for describing changes in the intensity of sounds with different frequencies. The HRTF uses the pinnae to determine the direction that the sound is coming from, providing an auditory experience similar to a live concert. If the sounds are heard through headphones, then there is no natural sense of sound direction. The direction is determined by ITDs and ILDs, which can be manipulated to provide a "directional" experience. But this is different, because the direction comes from inside the head rather than from the pinnae interacting with the sound itself. The pinna is the outer portion of the ear that collects sounds, and when it is combined with the other localization cues, it provides the direction and intensity of the sound.

After reading about ITDs and ILDs, I encountered a confusing section about microseconds. Perhaps this confused me because when terms become mathematically technical, I tend to check out. So it was not only difficult for me to focus on this section, it was also hard to understand. I understand that degrees measure where the sound is in relation to the head, but I do not understand how the differences come to be measured in millionths of a second. This section also mentioned that head size affects how sound is heard. Does this mean that someone with a larger head than mine experiences sound in a different way than I do? Does our head size really have a significant impact on our auditory experience? I am not sure if these questions can be answered, but I had to share my confusion.

Terms: auditory localization cues, interaural time difference, azimuth, sound localization, visual system, interaural level difference, intensity, head-related transfer functions, frequency, pinnae, microseconds

Events, especially in sound, occur very quickly, and there are aspects of these events that we can measure with respect to a very small scale. Hence, microseconds.

(1) All the different ways of identifying how we pinpoint where a sound is coming from. I always figured that we knew where a sound was coming from through azimuth (how much each ear hears based on the degree the sound is from the head). I never thought that the time difference between when the left ear and the right ear hear a noise (interaural time difference (ITD)) was big enough to be taken into account when finding where a noise came from. Interaural level difference (ILD) makes sense when you are walking around trying to find something that is making a noise and the intensity of the sound changes as you get closer or farther away. All three of these work along with the two relay stations: the medial superior olive (MSO) for finding direction and the lateral superior olive (LSO) for finding both direction and intensity. (2) Head-related transfer function (HRTF), or the ear's version of depth perception. I knew that the pinna played a part in picking up sounds, but I didn't know that it went as far as having its own function that determines how it affects intensity. (3) Source segregation or auditory scene analysis, which I have seen used in movies (and at least one episode of CSI: Crime Scene Investigation). For me, the video about the two brothers who had to wear hearing aids that didn't have source segregation was very interesting. I just thought that all hearing aids worked the same way or were at least similar in function. I didn't know that some didn't have source segregation.

Cone of confusion - I get why we have it, but it can easily be corrected by turning your head or moving to a different position from before.

I would have to say all the parts that contribute to finding the source of a noise (part 1 of my first paragraph). We don't just hear a noise; our brains calculate its position relative to us based on its intensity, its direction from the head, and how loud it is. So much goes on without us even thinking about it, and we go on like it's nothing special.

Source segregation (it was the only one I really had any question about): is it possible to use it with machines the way they do in movies? If so, how similar is it to how we naturally do it?

Terms: interaural time difference (ITD), azimuth, medial superior olive (MSO), interaural level difference (ILD), lateral superior olive (LSO), head-related transfer function (HRTF), pinna, source segregation or auditory scene analysis, and cone of confusion.

Interesting questions you posed. I know there are pretty high-tech sound detection devices around that record complex sounds, but I don't know about the answer to your question.


Sound localization started off the chapter, and it was interesting to me. I think I was interested because it is based on two main factors, the ITD and the ILD. The ITD is the interaural time difference. What that means is the difference in time at which a sound reaches each ear. Since sound does not always come from the same place, this difference will vary. The angle at which a sound is located is called the azimuth. The other factor that helps with the localization of sound is called the interaural level difference. This is the difference in the intensity of a sound at each ear. The intensity is different because sounds are more intense the closer they are. The head is one thing that can take away some of the intensity before the sound reaches the farther ear.
The concept of head-related transfer function was intriguing to me. The HRTF describes how the pinnae, ear canal, head, and torso change the intensity of sounds with different frequencies that arrive at each ear from different locations in space. This section talked about how, as we grow and experience more things, we learn more about how this relates to the environment. Research was done placing molds into adults' pinnae, and it changed their hearing for the worse. The interesting thing was that after some time they began to get used to the change and their hearing improved again.
Auditory Scene analysis was also of some interest to me. As I read this chapter and learned more about how we hear this became more fascinating to me. We live in such a fast paced world and sometimes end up in places where there is nothing but noise everywhere. It is a great feat that we are able to process through and separate all of the different sound sources that we encounter. This reminded me of the YouTube video that we watched in class last week. In it the brothers talked about how they had a lot of sounds jumbled all together before they received their new hearing devices.
The cone of confusion was not interesting, ironically, because it was confusing. The book said that it consists of places in which all sounds produce the same ITDs and ILDs. It was confusing because it said that many books talk as if there is only one, but there are actually many of them. That itself is confusing to me.
What I believe is essential to my knowledge of sensation and perception from this chapter is some of the similarities between the visual and auditory systems. I think that it is important to know about the neurons that fire in the medial superior olive and also the lateral superior olive. They fire to help with sound localization, and I believe they play a great role in what we hear and how we react to those things.
The two things that I would like to learn more about are the research that has been done with pinnae adaptation and also timbre. With pinnae adaptation, I would like to learn how long it takes for an individual to begin adapting and whether hearing returns to a normal level. With timbre, I am just interested in fully understanding why it is possible to judge two sounds with the same pitch and loudness as different.


Key Terms: auditory segregation, head-related transfer function, pinnae, interaural time difference, interaural level difference, azimuth, cone of confusion, medial superior olive, lateral superior olive, timbre

Timbre is an interesting game to play with your auditory abilities. See if you can tell which instruments are being played in an orchestra or even some music you like. It's difficult these days because so much music has electronically derived components that don't necessarily sound like any one instrument.

From this chapter, I found the information on sound localization interesting. It takes a different amount of time for a sound to reach each ear depending on the location of the sound, as long as the sound is not coming from directly in front of or behind you. This is known as interaural time difference. ITD is measured in angles from the center of your face, called azimuth. The biggest time difference occurs when a sound comes from directly left or right of the center of your face. Sound can also be localized by different levels of intensity, which is known as interaural level difference. Sounds are more intense when heard up close. The biggest intensity difference between the two ears also occurs directly left or right of the center of the face. The pinna also helps us understand the location of a sound. I thought this was interesting because it is something that I do in my everyday life and had never taken the time to stop and think about how it actually works.

I also liked the concept of attack and decay. This concept highlights the fact that sounds have different levels of emphasis at their beginnings and ends. These help us determine what word is being said or what sound is being made. I found this concept interesting because it is a technique I have used for years to help sound things out when spelling and to tell how many syllables a word has.

I also liked the section on timbre. Timbre is being able to interpret sounds differently even though they may be played at the same pitch and intensity. People can tell the difference between two sounds identical in pitch and intensity because of the relative energy across the different spectral components. Timbre can also be affected by the environment that a person is in. I thought this was interesting because it really shows just how sensitive our hearing is.

There wasn't really anything that I didn't find interesting in this chapter, but I would like more information on head-related transfer function. I understand that you can learn and adjust your hearing if parts get changed, but I would like to know more about how it works.

Terms: interaural time difference, azimuth, interaural level difference, pinna, attack, decay, pitch, head-related transfer function,

Interesting stuff.

Our auditory system as humans is really quite fascinating. As we learned in the last chapter, there are many different anatomical components that make up the ear and allow us the great variety of noises we hear. I was able to easily find interest in sound localization because of its contrast and comparison with localization within the visual field. Our sense of perception within the visual field is slightly different, however. With our eyes we are able to detect where an object is, though some problems arise when trying to determine how far away it is. Auditory location is similar to some extent. Sound does not reach both ears at the same time unless the sound is coming from directly behind at 180 degrees or straight in front at 0 degrees. Even with these measurements and the consideration of ear shape, sound may reach the two ears at slightly different times. This part helped me to further understand localization and kept my attention because of the importance of the degree the sound comes from, or the azimuth. When we are placed into a situation of complete silence and we detect a loud noise, more than likely we can determine which direction we heard it from, whether it be left, right, behind, ahead, or even up and down. The detection of a sound's location from the amount of time it takes to reach one ear as opposed to the opposite ear is captured in the concept of interaural time difference (ITD). With ITDs there is an important physiological component known as the medial superior olive (MSO). The MSO is important because it's the first spot where the inputs of both ears join together. Similarly, another component is important for the localization of sound in the auditory system. This component is the interaural level difference (ILD), which also compares sound arriving at one ear versus the other, though it is intensity focused. The lateral superior olive (LSO) receives both excitatory and inhibitory inputs and is an essential part of localizing sound via the ILD. When I began reading the section on pinna and head cues, I found interest in the examples that were given and the importance of people's pinnae as the very first part of the ear involved in hearing. The head cues section explained how shifts of the head affect sound localization as well, incorporating the usage of ITDs and ILDs. Something important that I took away from this section was the term head-related transfer function (HRTF). It involves how the pinna and other parts of the body, such as the head, change the intensity of sound when different frequencies are received by both ears from different locations. One example of HRTF that the book gave was comparing an actual in-person concert versus listening to that music through headphones. At a concert there is a difference in localization because of where particular musicians are standing, allowing for different times and frequencies reaching the ears. Listening to music through speakers, such as headphones, doesn't give you as great an effect and reaches the ears differently. I also took interest in learning about the two terms attack and decay. Attack refers to the beginning of a sound, while decay refers to the part of a sound where it ends and amplitude decreases. The section on auditory distance perception was kind of boring.
It mostly just talked about how, when a noise is closer to our physical body, its intensity is much easier to distinguish; the further away a noise is, the harder that ability becomes and the more difficult the intensity is to judge. Honestly, I believe the very first section of the chapter, on sound localization, is and will be the most useful for my understanding of sensation and perception. I believe it's because the visual field allowed us to understand localization through one sense, and learning about localization through the auditory sense allows for a greater expansion when combining these two forms as well as learning how to use them separately. I would like to learn more about timbre. There was some good information, but I still want to look up more than what the book has to offer. I would also like to learn more about harmonics, because it is another form of complex sound that intrigues and grabs my attention as well.
Terms: Azimuth, ITD, MSO, ILD, LSO, HRTF, Attack, Decay

Timbre is cool because it makes it so we can appreciate the combination of different instruments and differentiate between them. I think this concept interests most of us who love music.

In the reading blog this week I came across more interesting things than I thought I would. In the beginning I found interaural time difference (ITD) and interaural level difference (ILD) very interesting and thought provoking. To me, this is a main reason that we have the ability to hear (besides hearing language to communicate). Sound is one of the senses used to send important messages to the brain to protect us from harm. Interaural time difference is the idea that our brain hears sounds and bases its judgment of where they come from on the time that the sound wave took to reach each ear. Interaural level difference, on the other hand, is how we discover where a specific sound came from based on its intensity at each ear - whether it was below us, above us, or wherever it was in relation to our body. What amazes me about this is that we do it almost subconsciously. Never have I heard a noise and then quickly looked over my left shoulder and thought, WOW, that sound must have reached my left ear first! This phenomenon applies to sounds all around the body, whether through ITD or ILD.
What interested me next was the ability of this concept of ITD or ILD to become confused by the brain. I learned that our body can mix these signals up and sometimes have a hard time figuring out where a sound came from. Researcher Wallach examined this in 1940 when he did an experiment that basically mixed up people's perceptions of reality by making them feel as if they were spinning around. He then played sounds and asked people where they thought the sounds were coming from. Through this he explored the cones of confusion. This idea is based on invisible cone-shaped regions projecting from our ears within which sounds are ambiguous. In the experiment, the sound was coming from directly in front of each participant's nose, yet it seemed as if it were coming from above or below them. The question that came to mind for me is: why would this help us as a species? I do not really have a good conclusion for this question, but I think it is an important one worth asking.
The topics I learned that seemed most important for my knowledge of sensation and perception were attack and decay. This basically just lays out when a sound begins (attack) and when a sound stops (decay). Nearly every sound stops eventually, and parallel to this we know that each sound must thus have a beginning. I had never really thought about this having a name. A related topic that I also found interesting was source segregation. Source segregation is important because, just as sounds have a beginning and an end, they also often must be separated by the ear when they occur close together. An example of this would be stopping at a crosswalk: the person next to you is chewing gum very loudly and cars are cruising by. If your ear were not complex and could not separate the sounds and focus on which sound is of more importance, then you could be in danger. Luckily, with our ability to utilize ITD, ILD, attack, decay, and source segregation, we can figure out how far away the cars are, which sounds are which, and when the cars stop zooming by and begin driving again.
Another part of the chapter that I did not especially like was about common fate, an early Gestalt principle, and how it relates to sensation and perception. The example in the book was of a bottle breaking and/or bouncing, and how this principle can help us determine outcomes based on sound. This just seemed a little obvious and confusing. Two concepts that I am interested in learning more about are azimuth and the medial superior olive.
Terms: interaural time difference, interaural level difference, cones of confusion, Wallach, attack, decay, source segregation, common fate, gestalt principle, azimuth, and medial superior olive.

I think the cool thing about these Gestalt principles is that they facilitate our perception and allow us to perceive sounds that co-occur, or at least get grouped together so that it seems as though they co-occur. It makes the brain's job of processing all of these aspects of sound much easier.

I found it interesting that we use ITDs to locate where a sound is coming from. I didn't know, but it obviously makes perfect sense, that whichever way the sound is coming from, we know it because we hear it sooner and more strongly in one ear than the other. It never occurred to me that we don't just hear the same amount in each ear. The azimuth idea is cool too - the idea of the shadow behind the head that keeps one ear from hearing as clearly. My grandfather lost his hearing in one ear because he rode a tractor for long periods of time when he was younger. His hearing is worse on one side, but I wasn't sure why it really mattered that he was sitting on a tractor - wouldn't the sound disturb both his ears equally? It turns out that this isn't true, because he would turn his head to one side, making the noise from the engine more intense at one ear.
This leads to my next topic I found interesting, which was the information on ILDs. They are a lot like ITDs, in that the level of sound intensity is different at the two ears depending on which direction the sound is coming from. The most interesting part about ILDs is that our head blocks high-frequency sounds much more effectively than low-frequency sounds for the ear that isn't facing the noise. So, when my grandpa was on the tractor and turning his head, the ear facing the loud noise was getting almost all of the high-frequency energy of the sound, because his head was actually blocking his other ear from the most intense components. If the sound had a longer wavelength (a lower frequency), though, it would have bent around his head, and the ear facing away from the sound would have received more of it. All of this input is passed on to the lateral superior olive (LSO), which is where our brain contributes to the detection of ILDs.
Thirdly, I found the directional transfer function (DTF) very interesting. It really interested me because I realized just how differently people actually hear things, depending on their pinna shape, head shape, and torso shape! Being at different heights (torso) can really make a difference in hearing. Learning to hear can change throughout your life. The book talks about putting plastic molds into the pinnae of adults' ears; they actually became less able to detect where sound was coming from, although with time (six weeks) they became accustomed to the molds and got their localization back!
I did not find attack and decay to be very interesting. I appreciate the musicality of it and how it shapes the way music sounds, but it just didn't seem very important or interesting to me.
One thing from this chapter that will help me understand sensation and perception is ITDs and ILDs. The way we hear things is very important to understanding how we sense and perceive our world. The MSO and LSO are also important, because they are where the brain makes sense of what we hear: they combine the information carried by ITDs and ILDs.
One topic I would really like to research further is auditory stream segregation. I am really interested in learning more about it because, as I have mentioned before, I play the cello and know exactly what this refers to when you try to bring out a melody. Very interesting.
Another topic I would like to research further is the ear-mold experiment and how differently shaped ears pick up sounds differently. The book mentioned the character from Star Trek and wondered how he would localize sound in comparison to a "normal" ear. That would be interesting to know!
Terms: ITD, MSO, ILD, LSO, DTF, azimuth, attack, decay, auditory stream segregation.

I sort of agree with you on some of the attack and decay stuff. I love music, and I understand that it is a complex thing, but I really am just more concerned with my subjective experience of it and what that means to me, rather than how it actually works. That's my lazy way out!

In chapter 10, I found the section about localization interesting. It's amazing how quickly we can detect where sounds are coming from. We do not need to see a sound's source to know where the sound is coming from; when we hear a sound we usually know its location automatically. Two cues are most important in sound localization: interaural time difference and interaural level difference. Interaural time difference is the difference in time between the sound reaching one ear and reaching the other. Interaural level difference is the difference in the sound's intensity at one ear versus the other.

Attack and decay was another interesting concept. It highlights the fact that sounds have different patterns of emphasis at their beginnings and ends, which help us determine what word is being said or what sound is being made. Like everything else in hearing, this comes down to the vibrations that our inner ear picks up.

Another interesting concept in chapter 10 was the section about cones of confusion. The ambiguity can easily be corrected by turning your head or moving to a different position. One way your body helps with accuracy concerning the cones of confusion is through the pinnae. As you have probably noticed, your pinnae are not symmetrical front to back; this asymmetry creates small differences in the sound that enters your ear canal, and those small differences help pinpoint where the source of the sound is.

The least interesting thing would have to be timbre. It is an interesting topic in general, but the book made it very confusing with all of the graphs and the rambling about test vowel violations. It didn't go into great detail, and we were left with very little information about it, so this would be a good topic to cover in class.

Terms: localization, interaural level difference, interaural time difference, attack and decay, vibrations, inner ear, cones of confusion, pinnae, ear canal, sound, timbre, test vowel violations

Hopefully you guys got to cover more about timbre in class!

When reading the chapter I found the interaural time difference (ITD) concept interesting. I think it is fascinating that we can detect where a sound is coming from; this is something I had never really considered before. Technically all sounds reach our ears at the same two points, so it is pretty amazing that we can determine a sound's general location. ITD is one of the main ways we do it: when a sound is made, it reaches one ear slightly before the other. The azimuth is an imaginary 360-degree circle around the head that can be used to measure where a sound is coming from. Depending on the sound's position on the azimuth, it takes a different amount of time to reach each ear, and we interpret that difference to locate the source. This is done by the medial superior olive (MSO), a part of the brain stem where information from both ears is combined and ITDs can be detected. I think this is pretty cool.
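As a rough numerical illustration (my own sketch, not an example from the textbook), the classic Woodworth spherical-head approximation relates azimuth to ITD; the head radius and speed of sound below are assumed values:

```python
import math

def woodworth_itd_s(azimuth_deg, head_radius_m=0.0875, speed_of_sound_m_s=343.0):
    """Approximate ITD in seconds for a spherical-head model
    (Woodworth's formula); the parameter values are assumptions."""
    theta = math.radians(azimuth_deg)
    return (head_radius_m / speed_of_sound_m_s) * (theta + math.sin(theta))

# Straight ahead (0 degrees) gives no ITD; a source directly to one
# side (90 degrees) gives the maximum, roughly 0.65 milliseconds.
for az in (0, 30, 60, 90):
    print(f"{az:3d} degrees -> {woodworth_itd_s(az) * 1e6:6.0f} microseconds")
```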
I also thought the Hofman experiment was interesting. It has been shown that our ears change throughout development, and research suggests that people can learn, over time, how their HRTFs relate to places in the environment. Basically, we can relearn how to hear through experience. Hofman changed the way a group of people heard by placing molds into their pinnae. The participants were not able to localize sound as well with the molds in their ears, but after six weeks they had relearned how to localize sound with the molds in. I also thought it was really interesting that when the molds came out after the six weeks, they could still localize sound with their regular ears. I think that is pretty cool.
I also thought the part of the chapter about restoration of complex sounds was really interesting. Sasaki played songs on the piano with some of the notes replaced by noise, and listeners did not notice a difference. Likewise, in normal speech, when one of the sounds in a word is replaced with a random noise, the word is still understood, which is really cool. It was also interesting that when hearing the sentences "the *eel fell off the car" and "the *eel fell off the table," listeners heard different words in place of the *eel, even though the same sound was played in each sentence: "wheel" for the first and "meal" for the second. This is an example of how our brains fill in the blanks when we do not have enough information to make sense of something. One other note: people restored the missing sound less reliably when it was replaced by silence than when it was replaced by some other noise, so adding noise actually improves comprehension of the disrupted word.
I was kind of confused by the cone of confusion... haha, so it wasn't my favorite part of the chapter. A cone of confusion is a set of positions in space from which all sounds produce the same interaural time and level differences. I think what the book is trying to say is that there are certain locations whose sounds are hard to localize if our heads are stationary, including directly in front of you, directly over your head, directly behind you, and directly below you.
I think the most important things to understand in this chapter, in relation to sensation and perception, are complex sounds and timbre.
I would like to learn more about timbre and the inverse square law.
Terms: interaural time difference, azimuth, medial superior olive, HRTF, pinnae, cone of confusion, timbre, inverse square law

Some of this stuff is pretty complex and confusing, because you almost have to understand certain types of physics and psychoacoustics, which are unfamiliar to those of us who haven't had much exposure to them.

The chapter on hearing in the environment is very interesting and very useful. We are able to determine what may be in our surroundings by using our auditory system, without having to see what is near us. The human auditory system is amazing in its ability to transform air pressure changes into perceived sound, and many of us take this remarkable mechanism for granted. Unfortunately, not everyone processes sound with the same sharpness or tone, and some individuals cannot hear at all.

Auditory distance perception was interesting: listeners can tell not only where a sound is coming from but how far away it is. The relative intensity of the sound helps in determining its distance and movement. If you have ever gone to a car race, you can tell exactly where the cars are on the track. The sound is quite intense when they are in front of the grandstands, fades and becomes less intense as they round the corners, and gradually regains intensity as they come back around. The interaural level differences between the sounds arriving at one ear versus the other also vary as the cars circle the track; the sound intensity shifts from one ear to the other as your head partially blocks the pressure waves, especially at these high frequencies. Interaural time differences provide a cue as well: you hear the cars approach on your left first, so you can determine which direction they are coming from. I also found the attack and decay of sounds interesting: how sounds begin and end, and how we can tell words apart, such as cheat and sheet, just from the contrast in how quickly their sound energy rises and falls.

The least interesting thing in this chapter for me was the cone of confusion, a region of positions in space where sounds produce the same interaural time and level differences. As soon as you move your head, the cone shifts and the ambiguity can be resolved. This is confusing and may need more detail for a better understanding. Harmonics and the fundamental frequency are interesting, but confusing as well. Harmonic sounds, pitches, and overtones are heard by everyone but may be perceived differently. I wonder whether those who are more musically inclined interpret the sounds and pitches of harmonics differently from others. It seems like some people are definitely tone deaf and cannot carry a tune, while others are very much in tune and can hear all the correct pitches. I wonder if this is because one person's auditory system interprets harmonic sounds and structure less well than another's, or if other factors are involved in the perception process.

Terms: sound localization, interaural level difference, interaural time difference, sound pressure, air pressure changes, sound, auditory system, harmonics, fundamental frequency, and pitch.

Chapter 10 - Depth and Size

1. After reading chapter 10, I thought it was interesting to learn about binocular depth information. Whenever we look at something, the left and right eyes receive slightly different views of it. I tried a demonstration from the book to understand the two viewpoints: I covered my right eye and held my finger up so that it covered a door knob in front of me. When I then switched eyes, the finger was positioned differently; with my left eye it covered the door knob, but with my right eye it did not. I would love to try more everyday experiments to see how each eye perceives things while driving or swimming, to find out whether my eyes still view things separately during different activities.

2. The second topic I liked reading about was pictorial cues: depth information that can be conveyed in a flat picture, such as in a picture book or a magazine. When one object partially hides another, that is called occlusion. Objects whose bases sit higher in the visual field, closer to the horizon, appear more distant; this cue is called relative height. Relative size is the cue that, when two objects are assumed to be the same size, the one that is farther away looks smaller than the closer one, which takes up more of your view. There is also familiar size, which draws on prior experience, like with coins: if a person draws a dime, a nickel, and a quarter all the same size, you know that is wrong from dealing with money every day.
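The geometry behind the relative-size cue can be shown with the standard visual-angle formula (my own illustration, not a calculation from the book); the object size and distances below are made up:

```python
import math

def visual_angle_deg(object_size_m, distance_m):
    """Visual angle subtended by an object (standard trigonometry;
    the example values below are illustrative)."""
    return math.degrees(2 * math.atan(object_size_m / (2 * distance_m)))

# The same 1 m object subtends a much smaller angle farther away,
# which is why the more distant of two same-sized objects looks smaller.
print(round(visual_angle_deg(1.0, 2.0), 1))   # ~28.1 degrees at 2 m
print(round(visual_angle_deg(1.0, 10.0), 1))  # ~5.7 degrees at 10 m
```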

3. The third interesting topic I found in this chapter was depth information across species. Monkeys, humans, and cats all have frontal eyes whose overlapping fields of view let them use binocular disparity along with many of the other depth cues in their visual range. Rabbits, by contrast, have lateral eyes, which give them a much wider field of vision, an important adaptation for keeping a lookout for predators. Frogs judge the angle and distance to a target before jumping. Perceiving depth is crucially important for animals as well as humans, but I had never taken the time to understand how different species perceive depth, which was very interesting to me as an animal lover.

One thing I did not enjoy reading was all of the technical/medical detail about the neurons involved in perceiving depth. The illustrations in the book help, but I could not follow the wording and explanation of the process.

One thing I found useful is understanding depth in artwork and visual angles in the environment. I frequently go to art museums, and now I can understand abstract artwork and paintings in terms of depth perception and what my eyes do to perceive the image. Artwork is very unique and creates illusions in depth and color. Judging visual angles in the environment matters too, whether it is sitting down on a park bench or using depth perception to jump over a hole.

I would like to learn more about visual illusions in art and about how motion and depth perception are used in gymnastics on the beam, the bars, and floor routines.

Terms: binocular depth, pictorial cues, angle of distance, relative height, relative size, familiar size, occlusion, lateral eyes, frontal eyes, visual illusions

In my book, chapter 10 is entitled Hearing in the Environment. I thought this would be a chapter I would enjoy more than the previous one, because it doesn't have as much of a biological emphasis.

I really liked learning about the interaural time difference. This was a new term for me, and I enjoyed learning about it because it describes something I had experienced but never really noticed in myself. The interaural time difference, or ITD, is the difference in time between a sound arriving at one ear versus the other, based on where the sound is located. This was interesting to me because I notice it in my own hearing. To me, it's not even that the sound arrives late, but that it's louder in one ear than the other. Just as I was thinking about this, the book explained that a sound is also a different loudness in each ear depending on which ear is closer; this is called the interaural level difference, or ILD. It is cool that my own suspicions were correct.

I thought the cones of confusion were interesting to learn about. A cone of confusion is where ITDs and ILDs give no useful information because each ear receives the same time and level as the other. I interpret this as having a sound directly in front of you, no closer to one side than the other, so that you confuse yourself because you don't know which side the sound is coming from. I would like to look more into this topic for my topical blog. I also think that ITDs and ILDs, along with the cones of confusion, are very important to perception, because you will perceive things differently with each cue.

I enjoyed learning about the inverse-square law, which states that as the distance from a source increases, the intensity decreases with the square of that distance. This makes a lot of sense: the farther away a sound is, the less intense or loud it will be. I thought distance perception was just as interesting as sound localization. I liked the point that if you are seated closer to the front in a theater, it is easier to hear, because you are nearer the stage and don't have people blocking the sound on its way to you.
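To make the inverse-square law concrete (a quick sketch of my own, not an example from the book): because intensity falls as 1/r², the sound level drops about 6 dB every time the distance doubles. The reference level and distances below are assumed:

```python
import math

def spl_at_distance_db(spl_ref_db, ref_distance_m, distance_m):
    """Inverse-square law: intensity falls as 1/r^2, so the level in dB
    drops by 20 * log10(r / r_ref). All values here are illustrative."""
    return spl_ref_db - 20 * math.log10(distance_m / ref_distance_m)

# Starting from an assumed 80 dB at 1 m, each doubling of distance
# lowers the level by about 6 dB.
for r in (1.0, 2.0, 4.0, 8.0):
    print(f"{r:4.1f} m -> {spl_at_distance_db(80.0, 1.0, r):.1f} dB")
```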

I didn’t enjoy learning about complex sounds as much. It was almost too much review for me and I didn’t like learning about part of it.

Terms: interaural time difference, interaural level difference, cone of confusion, inverse-square law
