Riders of mass transit are exposed to noise at levels that may exceed recommended limits, and may thus experience noise-induced hearing loss given sufficient exposure duration, reports a new study. Researchers evaluated the noise levels of a representative sample of New York City mass transit systems (subways, buses, ferries, tramways and commuter railways) during June and July 2007. Subway cars and platforms had the highest equivalent continuous average and maximum noise levels, but all systems showed some potential for excessive noise exposure. The study's authors suggest, "Engineering noise-control efforts, including increased transit infrastructure maintenance and the use of quieter equipment, should be given priority over use of hearing protection, which requires rider motivation and knowledge of how and when to wear it." Source: American Journal of Public Health
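The "equivalent continuous average" level the researchers report (commonly written Leq) averages acoustic energy rather than decibel values over the measurement period. A minimal sketch of that calculation in Python (the function name and the assumption of equal-interval readings are illustrative, not from the study):

```python
import math

def equivalent_continuous_level(levels_db):
    """Equivalent continuous sound level (Leq) of a series of
    short-term dB readings taken at equal intervals.

    Decibels are logarithmic, so Leq averages the underlying
    sound intensities, not the dB values themselves:
    Leq = 10 * log10(mean(10 ** (L / 10))).
    """
    mean_intensity = sum(10 ** (level / 10) for level in levels_db) / len(levels_db)
    return 10 * math.log10(mean_intensity)
```

Because the average is taken over intensities, a single loud interval dominates: one 100 dB reading among nine 70 dB readings yields an Leq near 90 dB, far above the arithmetic mean of the dB values.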
We humans prefer to be addressed in our right ear and are more likely to perform a task when we receive the request in our right ear rather than our left. In a series of three studies looking at ear preference in communication between humans, Dr. Luca Tommasi and Daniele Marzoli from the University "Gabriele d'Annunzio" in Chieti, Italy, show that a natural side bias, arising from hemispheric asymmetry in the brain, manifests itself in everyday human behavior. Their findings were recently published online in Springer's journal Naturwissenschaften. One of the best-known asymmetries in humans is the right-ear dominance for listening to verbal stimuli, which is believed to reflect the left hemisphere's superiority for processing verbal information. However, until now, most studies of ear preference in human communication have been controlled laboratory studies, and there is very little published observational evidence of spontaneous ear dominance in everyday human behavior.
New technology to hear vibrations through the skull bone has been developed at Chalmers University of Technology. Besides investigating the function of a new implantable bone conduction hearing aid, Sabine Reinfeldt has studied sensitivity to bone-conducted sound and examined the possibilities for a two-way communication system that utilizes bone conduction in noisy environments. Reinfeldt investigated a new Bone Conduction Implant (BCI) hearing system: "This hearing aid does not require a permanent skin penetration, in contrast to the Bone-Anchored Hearing Aids (BAHAs) used today." Measurements showed that the new BCI hearing system can be a realistic alternative to the BAHA. Sound is normally perceived through Air Conduction (AC), which means that sound waves in the air enter the ear canal and are transmitted to the cochlea in the inner ear. However, sound can also be perceived via Bone Conduction (BC), in which vibrations are transmitted to the cochleae through the skull bone from one's own voice, the surrounding sound field, or a BC transducer.
HearAtLast To Launch Exclusive Groundbreaking Neuro-Compensator™ Technology Hearing Aids From VitaSound
HearAtLast Holdings, Inc. (PINKSHEETS: HRAL), a leading provider of affordable solutions for clients with hearing needs in the billion-dollar hearing loss market, announced, in keeping with its tradition of bringing innovative new products to consumers, the unveiling of breakthrough hearing products based on the Neuro-Compensator™ algorithm technology from VitaSound Audio. The Neuro-Compensator™ hearing instruments are powered by new neuro-biological technology designed to optimize auditory nerve output. Based on many years of research at McMaster University into the electrical signals transmitted to the brain by the auditory nerves in healthy and impaired ears, this patented technology is designed to significantly improve perceived audio quality in hearing devices. Using standard audiometric test data, the algorithm engine derives a customized map of an individual's auditory system and configures the hearing device to optimize auditory nerve output for that individual.
Doctors may get a new arsenal for meningitis treatment and the war on drug-resistant bacteria and fungal infections with novel peptide nanoparticles developed by scientists at the Institute of Bioengineering and Nanotechnology (IBN) of Singapore and reported in Nature Nanotechnology. The stable bioengineered nanoparticles devised at IBN effectively seek out and destroy bacteria and fungal cells that could cause fatal infections. Major brain infections such as meningitis and encephalitis are a leading cause of death, hearing loss, learning disability and brain damage in patients. The blood-brain barrier is impenetrable to most conventional antibiotics because the molecular structure of most drugs is too large to cross it. IBN's peptide nanoparticles, by contrast, contain a membrane-penetrating component that enables them to pass through the blood-brain barrier to the infected areas of the brain that require treatment, offering a superior alternative to existing treatments for brain infections.
Parents and children giving or receiving an electronic device with music this holiday season should give their ears a gift as well by pre-setting the maximum volume to somewhere between one-half and two-thirds of maximum. Any sound over 85 decibels (dB) exceeds what hearing experts consider a safe level, and some MP3 players are programmed to reach levels as high as 120 dB at maximum volume. Vanderbilt Bill Wilkerson Center Director Ron Eavey, M.D., who also chairs the Department of Otolaryngology, says the new generation is especially susceptible to hearing loss when they listen to music with headphones or earbuds either too long or too loud. One preventive measure is to pre-set the device so that it cannot be turned up to damaging levels. "As parents, we can't hear how loud their music is when they have the earbuds in, so this is an important step," Eavey said. "I can tell you that if you hear the music coming from their headphones it is too loud, but an easier way to know for sure is to preset the device."
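The 85 dB threshold above is usually paired with an exchange rate: under the NIOSH recommended exposure limit, permissible daily listening starts at 8 hours at 85 dB and halves for every additional 3 dB. A rough sketch of that rule (the function name is illustrative; the 3-dB exchange rate is the NIOSH convention, not a figure from the article):

```python
def safe_listening_hours(level_db):
    """Permissible daily exposure under the NIOSH criterion:
    8 hours at 85 dB, halved for every 3 dB above that."""
    return 8.0 / (2.0 ** ((level_db - 85.0) / 3.0))
```

By this rule a player running at its 120 dB maximum would exceed the daily limit in well under a minute, while levels below 85 dB permit more than a full day of listening.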
A front portion of the brain that handles tasks like decision-making also helps decipher different phonetic sounds, according to new Brown University research. This section of the brain - the left inferior frontal sulcus - treats different pronunciations of the same speech sound (such as a 'd' sound) the same way. In determining this, scientists have solved a mystery. "No two pronunciations of the same speech sound are exactly alike. Listeners have to figure out whether two different pronunciations are the same speech sound, such as a 'd', or two different sounds, such as a 'd' sound and a 't' sound," said Emily Myers, assistant professor (research) of cognitive and linguistic sciences at Brown University. "No one has shown before what areas of the brain are involved in these decisions." Sheila Blumstein, the study's principal investigator, said the findings provide a window into how the brain processes speech. "As human beings we spend much of our lives categorizing the world, and it appears as though we use the same brain areas for language that we use for categorizing non-language things like objects," said Blumstein, the Albert D.
A team of researchers from the University of Alcalá de Henares (UAH) has shown scientifically that human beings can develop echolocation, the system of acoustic signals used by dolphins and bats to explore their surroundings. Producing certain kinds of tongue clicks helps people to identify objects around them without needing to see them, something which would be especially useful for the blind. "In certain circumstances, we humans could rival bats in our echolocation or biosonar capacity", Juan Antonio Martínez, lead author of the study and a researcher at the Superior Polytechnic School of the UAH, tells SINC. The team led by this scientist has started a series of tests, the first of their kind in the world, to make use of human beings' under-exploited echolocation skills. In the first study, published in the journal Acta Acustica united with Acustica, the team analyses the physical properties of various sounds, and proposes the most effective of these for use in echolocation.
A new study from Canada shows that our skin helps us hear speech by sensing the puffs of air that the speaker produces with certain sounds. The study is the first to show that when we are in conversation with another person we don't just hear their sounds with our ears and use our eyes to interpret facial expressions and other cues (a fact that is already well researched), but we also use our skin to "perceive" their speech. The study is the work of professor Bryan Gick from the Department of Linguistics, University of British Columbia, in Vancouver, Canada and PhD student Donald Derrick. A paper on their work was published in Nature on 26 November. Gick and Derrick found that pointing puffs of air at the skin can bias the hearer's perception of spoken syllables. Gick, who is also a member of Haskins Laboratories, an affiliate of Yale University in the US, told the media that their findings suggest: "We are much better at using tactile information than was previously thought." We are already aware of using our eyes to help us interpret speech, such as when we lip-read or observe facial features and gestures.
It is relatively common for listeners to "hear" sounds that are not really there. In fact, it is the brain's ability to reconstruct fragmented sounds that allows us to successfully carry on a conversation in a noisy room. Now, a new study helps to explain what happens in the brain that allows us to perceive a physically interrupted sound as continuous. The research, published by Cell Press in the November 25 issue of Neuron, provides fascinating insight into the constructive nature of human hearing. "In our day-to-day lives, sounds we wish to pay attention to may be distorted or masked by background noise, which means that some of the information gets lost. In spite of this, our brains manage to fill in the information gaps, giving us an overall 'image' of the sound," explains senior study author Dr. Lars Riecke from the Department of Cognitive Neuroscience at Maastricht University in the Netherlands. Dr. Riecke and colleagues were interested in unraveling the neural mechanisms behind this auditory continuity illusion, in which a physically interrupted sound is heard as continuing through background noise.