User:Nickmillan/sandbox

Sign language refers to a mode of communication, distinct from spoken languages, that uses visual gestures of the hands accompanied by body language to express meaning. The brain's left hemisphere is dominant for producing and understanding sign language, just as it is for speech.[1] Signers with damage in Wernicke's area, in the left temporal lobe near its junction with the parietal lobe and including the superior temporal gyrus, have problems comprehending signed languages, while those with damage in Broca's area, in the left inferior frontal gyrus, have problems producing signs. In the 1980s, deaf patients who had suffered left-hemisphere strokes were studied to explore the brain's connection with signed languages. The left perisylvian region was found to be functionally critical for language, spoken and signed.[2] Its location near several key auditory processing regions had led to the belief that language processing required auditory input, a belief that was used to discredit signed languages as "real languages." This research opened the door to linguistic analysis and further study of signed languages. Despite complex differences between spoken and signed languages, the associated brain areas are closely related.[3]

Hemispheric differences between spoken and signed languages

Spoken and signed languages depend on the same cortical substrate.[2] The neural organization underlying sign language has more in common with that of spoken language than with the neural organization underlying visuospatial processing, which is dominated by the right hemisphere.[2] This indicates that the left hemisphere is responsible for processing all facets of language, not just speech. Nevertheless, the view persists that signed languages engage the right hemisphere more than spoken languages do. In addition to its general roles in discourse cohesion and prosody, the right hemisphere has been proposed to assist in the detection, processing, and discrimination of visual movement.[2] It has also been shown to play a role in the perception of body movements and positions.[2] All of these right-hemisphere functions are more important for signed languages than for spoken languages, hence the argument that signed languages engage the right hemisphere more heavily.

As recording technologies such as EEG became more developed and commonplace, they were eventually applied to sign language comprehension. Using EEG to record event-related potentials (ERPs) allows specific brain activity to be correlated with language processing in real time. Earlier ERP work with hearing subjects showed left-hemisphere anomalies related to syntactic errors.[2] When deaf native signers were recorded, similar syntactic anomalies were associated with event-related potentials across both the left and right hemispheres. This suggests that syntactic processing for ASL, and possibly for all signed languages, is not lateralized to the left hemisphere.[2]
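
The basic logic of an ERP analysis like the one described above can be illustrated with a minimal sketch: time-locked EEG epochs are averaged per condition and per electrode, and the difference between conditions is compared across left- and right-hemisphere sites. The channel names, sampling rate, and simulated data below are illustrative assumptions, not the recordings from the study cited.

```python
# Minimal sketch of an event-related potential (ERP) analysis (illustrative only).
# Uses simulated data; a real study would use EEG recorded while deaf native signers
# view grammatical vs. syntactically anomalous signed sentences.
import numpy as np

rng = np.random.default_rng(0)

n_trials, n_channels, n_samples = 100, 4, 300  # ~1.2 s of data at 250 Hz
channels = ["F7 (left)", "F8 (right)", "T7 (left)", "T8 (right)"]  # hypothetical sites

def simulate_epochs(effect_amplitude):
    """Simulate time-locked EEG epochs with a broad deflection around 600 ms."""
    t = np.arange(n_samples) / 250.0                      # seconds after stimulus onset
    waveform = effect_amplitude * np.exp(-((t - 0.6) ** 2) / 0.01)
    noise = rng.normal(0.0, 5.0, size=(n_trials, n_channels, n_samples))
    return noise + waveform                               # same deflection on all channels

grammatical = simulate_epochs(effect_amplitude=0.0)
anomalous = simulate_epochs(effect_amplitude=3.0)         # anomaly adds a late positivity

# Averaging across trials cancels random noise and leaves the event-related potential.
erp_grammatical = grammatical.mean(axis=0)                # shape: (channels, samples)
erp_anomalous = anomalous.mean(axis=0)

# The anomaly effect is the difference wave; comparing left vs. right channels is how
# lateralization (or, as reported for deaf signers, its absence) would be assessed.
difference_wave = erp_anomalous - erp_grammatical
for name, wave in zip(channels, difference_wave):
    print(f"{name}: peak difference {wave.max():.2f} uV at {wave.argmax() / 250.0:.2f} s")
```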

When communicating in their respective languages, deaf and hearing subjects activate similar brain regions, with a few exceptions. During the processing of auditory stimuli for spoken languages there is detectable activity within Broca's area, Wernicke's area, the angular gyrus, the dorsolateral prefrontal cortex, and the superior temporal sulcus.[4] Right-hemisphere activity was detectable in fewer than 50% of trials for hearing subjects reciting English sentences.[4] When deaf subjects were tasked with reading English, none of the left-hemisphere structures seen in hearing subjects were activated. Deaf subjects also displayed pronounced middle and posterior temporal-parietal activation within the right hemisphere.[4] When hearing subjects were presented with signs designed to evoke emotion in native signers, there were no clear changes in brain activity in traditional language-processing centers. The brain activity of deaf native signers processing signs was similar to that of hearing subjects processing English. However, processing of ASL extensively recruited right-hemisphere structures, including significant activation of the entire superior temporal lobe, the angular region, and the inferior prefrontal cortex. Since hearing native signers also exhibited this right-hemisphere activation when processing ASL, it has been proposed that it reflects the temporal and visuospatial decoding necessary to process signed languages.[4]

Neurological differences between deaf and hearing humans

Only a handful of studies have tried to detail the neurological differences between hearing and deaf people, and even fewer have examined people who learned ASL after adolescence. Neurologists at Georgetown University Medical Center published a study using deaf and hearing populations in which half of each group grew up with English as their first language while the other half grew up using ASL. Structural brain imaging has commonly shown that the white matter volume of the auditory cortices differs between deaf and hearing subjects, regardless of the first language learned.[5] Deaf people are thought to have a larger ratio of gray matter to white matter in certain auditory cortices, such as the left and right Heschl's gyrus and the superior temporal gyrus.[6] This heightened ratio is thought to reflect less overall white matter in Heschl's gyrus and the superior temporal gyrus among deaf people.[6] Overall, the auditory cortices of deaf people have an increased gray-to-white matter ratio as a result of the lack of auditory stimuli, which is commonly thought to lead to less myelination and fewer projections to and from the auditory cortices.[6] Congenitally deaf people may therefore provide insight into brain plasticity: the decreased auditory connectivity and reduced brain volume devoted to auditory processing provide an opportunity for enhancement in the visual cortices, which are of greater importance to deaf people.[7] The calcarine sulcus acts as the hub of the primary visual cortex in humans, and congenitally deaf people have a measurably higher volume of calcarine cortex than hearing people.[7] The increased volume of the visual cortices in deaf individuals can lead to heightened visual processing. Deaf people have demonstrated, via event-related potentials, increased sensitivity and reactivity to new visual stimuli, evidence of brain plasticity leading to behavioral enhancement.[8]
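
As a rough illustration of the kind of morphometric comparison described above, the sketch below computes gray-to-white matter ratios for an auditory region in a deaf and a hearing group and tests whether they differ. All volumes are invented for illustration; they are not the measurements reported in the cited studies.

```python
# Illustrative sketch: comparing gray-to-white matter ratios in an auditory region
# (e.g., Heschl's gyrus) between deaf and hearing groups. All volumes are invented.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def gray_white_ratio(gray_cm3, white_cm3):
    """Gray-to-white matter ratio for a region of interest."""
    return gray_cm3 / white_cm3

# Hypothetical regional volumes (cm^3): the deaf group has similar gray matter but
# less white matter, which is what drives the higher ratio described above.
deaf_gray = rng.normal(1.7, 0.15, size=20)
deaf_white = rng.normal(1.0, 0.12, size=20)
hearing_gray = rng.normal(1.7, 0.15, size=20)
hearing_white = rng.normal(1.3, 0.12, size=20)

deaf_ratio = gray_white_ratio(deaf_gray, deaf_white)
hearing_ratio = gray_white_ratio(hearing_gray, hearing_white)

# Simple two-sample t-test on the ratios (real studies use more careful statistics).
t_stat, p_value = stats.ttest_ind(deaf_ratio, hearing_ratio)
print(f"deaf mean ratio    {deaf_ratio.mean():.2f}")
print(f"hearing mean ratio {hearing_ratio.mean():.2f}")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```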

Brain centers responsible for language processing

In 1861, Paul Broca studied patients who could understand spoken language but could not produce it. The damaged region was named Broca's area and is located in the left hemisphere's inferior frontal gyrus (Brodmann areas 44 and 45). Soon after, in 1874, Carl Wernicke studied patients with the reverse deficit: they could produce spoken language but could not comprehend it. The damaged region was named Wernicke's area and is located in the left hemisphere's posterior superior temporal gyrus (Brodmann area 22).

Early on, it was noted that Broca's area lies near the part of the motor cortex controlling the face and mouth, while Wernicke's area lies near the auditory cortex. These motor and auditory areas are important in spoken language production and processing, but the connection to signed languages had yet to be uncovered. For this reason, the left hemisphere was described as the verbal hemisphere, with the right hemisphere deemed responsible for spatial tasks. These criteria were used to deny that signed languages were equal to their spoken counterparts, before it became more widely accepted that, given the similarities in cortical connectivity, spoken and signed languages are linguistically and cognitively equivalent.

Before signed languages received recognition, a debate arose among linguists over their validity. What was the brain organization of these languages? It was hypothesized that the deaf equivalent of Broca's aphasia arose from damage near the cortex controlling the movement of the hands, and that the deaf equivalent of Wernicke's aphasia arose from damage near the visual cortex. That has not turned out to be the correct description of sign language in the brain, a finding which has changed our understanding of how language is organized in the brain more generally. The discovery that the left perisylvian region, which lies close to several auditory processing regions, does not rely on auditory processing helped validate signed languages as "real languages."

Furthermore, as science and medicine progressed and more advanced research, such as lesion studies, could be carried out, the scientific community began to arrive at a consensus that the brain functions very similarly for spoken and signed languages.[9] Around the same time, theories emerged that the right hemisphere might be involved in signed languages in ways not seen in spoken languages. Earlier right-hemisphere studies of spoken languages had led to prevailing theories about its role in discourse cohesion and prosody. These theories were also adopted by sign language linguists, and further imaging studies and neuropsychological testing confirmed the presence of right-hemisphere activity.[10] While it has become evident that the right hemisphere plays a similar role in both spoken and signed languages, it is suspected that it may play a more significant role in signed languages than in spoken languages.[1]

References

  1. ^ a b Campbell, Ruth (June 29, 2007). "Sign Language and the Brain". Oxford Academic. Retrieved March 19, 2017.
  2. ^ a b c d e f g Campbell, Ruth; et al. (2008). "Sign Language and the Brain: A Review". Journal of Deaf Studies and Deaf Education. 13 (1): 3–20. JSTOR 42658909.
  3. ^ Poizner, H; Klima, ES; Bellugi, U (1987). What the Hands Reveal About the Brain. Cambridge, MA: The MIT Press.
  4. ^ a b c d Neville, Helen (February 3, 1998). "Cerebral organization for language in deaf and hearing subjects: Biological constraints and effects of experience". Proceedings of the National Academy of Sciences of the United States of America. 95 (3): 922–929. doi:10.1073/pnas.95.3.922. PMC 33817. PMID 9448260.
  5. ^ Olulade, Olumide. "Brain anatomy differences between deaf, hearing depend on first language learned". Georgetown University Medical Center. Retrieved May 5, 2017.
  6. ^ a b c Emmorey, Karen; et al. (2003). "A Morphometric Analysis of Auditory Brain Regions in Congenitally Deaf Adults". Proceedings of the National Academy of Sciences of the United States of America. 100 (17): 10049–10054. JSTOR 3147660.
  7. ^ a b Allen, JS; Emmorey, K; Bruss, J; Damasio, H (2013). "Neuroanatomical differences in visual, motor, and language cortices between congenitally deaf signers, hearing signers, and hearing non-signers". Frontiers in Neuroanatomy. 7: 26. doi:10.3389/fnana.2013.00026.
  8. ^ Bottari, D; Caclin, A; Giard, M-H; Pavani, F (2011). "Changes in Early Cortical Visual Processing Predict Enhanced Reactivity in Deaf Individuals". PLoS ONE. 6 (9): e25607. doi:10.1371/journal.pone.0025607.
  9. ^ Hickok, G (August 2002). "Role of the left hemisphere in sign language comprehension". Brain and Language. 82 (2): 167–178. doi:10.1016/s0093-934x(02)00013-5. PMID 12096874. S2CID 11950135.
  10. ^ Hickok, G (February 1999). "Discourse deficits following right hemisphere damage in deaf signers". Brain and Language. 66 (2): 233–248. doi:10.1006/brln.1998.1995. PMID 10190988. S2CID 12070728.