This chapter focuses on an area that has been at the center of the debate between the two approaches: processing ambiguous words and sentences. Interestingly, an important factor in ambiguity resolution appears to be the frequency of the different meanings of ambiguous words. The subordinate-bias effect is as follows: in a neutral, nonbiasing context, words that are balanced cause longer reading times than words that are either unbalanced or unambiguous. Different languages impose different rules about how grammatical categories may be combined. In the garden path model, sentence processing happens in two stages: an initial structure-building stage in which the only information used is syntactic, followed by a second stage in which the structure is checked against semantic and pragmatic information. Constraint-based models take a very different approach to how sentences are initially parsed and how mistakes are sometimes made.
The researchers were specifically interested in whether the number of incorrect responses depended on the type of sentence. From a certain perspective, passive sentences are more complicated than active sentences, so perhaps passives are more difficult simply because they are more complex. The important difference between subject clefts and actives, on the one hand, and passives, on the other, appears to be that the order of the thematic roles is reversed: in active sentences, the agent comes first. Indeed, there is a growing body of evidence that languages allow speakers to structure their utterances in ways that flag certain parts of a sentence as particularly important or worthy of special attention. Recently, psycholinguists have also become interested in how information structure influences language processing.
The study of the properties of language can be divided into roughly five somewhat overlapping categories: sound system, word structure, sentence structure, meaning, and real-world use. In spoken languages, segments are sounds; each language has a set of sounds that are produced by changing the positions of various parts of the vocal tract. The sound system of language is studied in two main parts: phonetics and phonology. Phonemes can be combined to make words, and words themselves have an internal structure and can even be ambiguous because of this structure. Syntax is the study of how sentences are formed. For example, a sentence containing the phrases the artist and a paintbrush has two noun phrases (NPs). The field of semantics is concerned with meaning in language and can be divided into two major parts: lexical semantics and propositional semantics.
This chapter addresses questions about how speakers and hearers influence each other. It reviews research on dialogue, and especially how a dialogue context influences speakers. Speakers, in turn, have an impact on their listeners. The goal of a dialogue is successful communication, so it would make sense for a speaker to pay careful attention to the needs of the listener: to avoid ambiguity, for example, and to package information in a way that flags particular information as important or new to the listener. Whether an utterance is ambiguous can depend on the speaker's choice of words, so a natural question is whether, and when, speakers actually avoid ambiguous language. In terms of pronunciation, speakers reduce articulation and intelligibility over the course of a dialogue. There are also constraints and preferences on how pronouns and other coreferring expressions are interpreted that appear to be structural or syntactic in nature.
This chapter gives an overview of the techniques that are used to measure language processing. It looks at what psycholinguists do when designing experiments to ensure that their results are valid. Online measures include any measure considered to give information about language processing as it happens. The prototypical offline measure is the questionnaire: literally asking people for their judgments about what they have just encountered. In fact, all kinds of data can be collected from questionnaire studies. The button-press task is perhaps the most versatile way to collect response-time data. Among the conscious responses discussed here are vocal responses. As with eye-tracking, it helps in understanding event-related brain potentials (ERPs) to know a bit about the response being measured, in this case activity in the brain. In many ways, functional magnetic resonance imaging (fMRI) can be considered the complement to ERPs.
A psycholinguist is someone who studies phenomena at the intersection of linguistics and psychology. The endeavor of psycholinguistics often finds a home in the broader research field of cognitive science, an interdisciplinary field that addresses the difficult question of how animals, people, and even computers think. The centrality of language in our daily lives means that any disruption to the ability to use it is keenly felt: the worse the disruption, the more devastating the impact. From the beginning of psychology, there has been an interest in language. Behaviorism was a movement in psychology that more or less rejected the study of mental states and discounted the idea that human behavior could be explained in terms of mental states or representations. This book covers a number of topics that are very much relevant in current psycholinguistics, including child language acquisition, sign language, language perception, and grammatical structure.
This book explores a set of key topics that have shaped research and given us a much better understanding of how language processing works. The study of language involves examining sounds, structure, and meaning, and the book covers the aspects of each of these areas that are most relevant to psycholinguistics. It then covers research methods, from relatively low-tech ones that involve only pencil and paper to very high-tech ones, such as functional magnetic resonance imaging (fMRI), that use advanced technology to measure brain activity in response to language. It also discusses a topic that has dominated the field for over two decades: how people handle ambiguity in language. The book describes how language is represented, both in the brain itself and in how multiple languages interact; which parts of the brain are critical for the basics of language; and how language ability can be disrupted when the brain is damaged. It further discusses progressive language disorders such as semantic dementia and what the study of disordered language can tell us about the neurological basis of language. Finally, it looks at sign language research to see whether, and how, sign language processing differs from speech processing, and it considers a relatively new hypothesis: most previous work has taken for granted that comprehenders (and speakers) fully process language, that is, that we try to build complete representations of what we hear, read, or produce.
This chapter discusses the representation of language in the brain, including what parts of the brain are known to be involved in language. It also discusses how multiple languages are represented and interact in bilingual speakers. The most important lobes for language are the temporal lobe and the frontal lobe. In right-handed people, it is the left hemisphere that supports the majority of language function. Two areas in particular appear to be especially important for language: an area toward the front of the brain, in the frontal lobe, that includes Broca's area, and an area more or less beneath and behind the ear, toward the back of the temporal lobe, called Wernicke's area. Broca's aphasia is characterized by difficulty with language production: effortful, slow speech and a striking absence of function words such as prepositions, determiners, conjunctions, and grammatical inflections.
Research on sign language and how it is processed has grown quickly over the last decade, with researchers from a number of different fields increasingly interested in it. To clarify exactly what sign language is, this chapter addresses two common misconceptions. One misconception is that French Sign Language is just a version of spoken French, that British Sign Language (BSL) is just a version of English, and so on. Variations in hand shape and other features can differentiate dialects of a sign language. Sound symbolism shows that there are cases in spoken language in which sounds are linked in a nonarbitrary way to meaning. Further, there are phonotactic rules, which differ from language to language, about how signs may be formed. Speech errors are mistakes that speakers make when they intend to say one thing but something else comes out instead.