The Language Familiarity Hypothesis and Delayed Auditory Feedback: Speech Disruptions in Typical Kannada (L1)-English (L2) Bilinguals

Abstract

Background and Objectives

This study investigated the influence of the language familiarity hypothesis on speech disruptions in Kannada (L1)-English (L2) bilinguals using delayed auditory feedback (DAF).

Methods

Fourteen participants were divided into high and low L2 proficiency groups (n=7 each). In both L1 and L2, participants read standard passages and answered questions under 150 ms and 250 ms of delayed auditory feedback.

Results

Speech disruptions in the experimental tasks were classified into articulatory, repetition, and other errors. Task-specific speech errors appeared at the shorter delay (150 ms) but not at the longer delay (250 ms). In the passage reading task, repetition errors and articulatory errors appeared in L1 and L2, respectively, whereas other errors were more frequent in the answering questions task. Interestingly, speakers with low L2 proficiency showed more articulatory errors in L1 across the experimental tasks. Between-language comparisons revealed greater speech disruptions in L1, and these were task specific: in the native language (L1), articulatory errors were observed during passage reading and repetition errors while answering questions.

Discussion and Conclusion

Although these findings argue against the language familiarity hypothesis in bilingual acquisition, this study offers alternative explanations, including the possibility that attention control influences speech disruptions under delayed auditory feedback. The results also carry clinical implications for bilingual speakers who stutter.

Abstract

Objectives

The study examined the language familiarity hypothesis on speech disruptions in Kannada (L1)-English (L2) bilinguals using delayed auditory feedback (DAF).

Methods

Fourteen participants were classified into high and low L2 proficiency bilingual groups (n=7 each). The experimental tasks involved reading standard passages and answering questions in both L1 and L2 under 150 ms and 250 ms DAF. Speech disruptions in the experimental tasks were grouped into articulatory, repetition, and other errors.

Results

A shorter delay of 150 ms, but not the longer delay (250 ms), produced task-specific speech errors. Reading passages showed repetition and articulatory errors in L1 and L2, respectively, whereas other errors were more common while answering questions. Interestingly, low L2 proficiency speakers showed more articulatory errors in L1 across the experimental tasks. Between-language comparisons revealed greater speech disruptions in L1, which were specific to the experimental task: articulatory errors during reading and repetition errors while answering questions were observed in the native language (L1). While refuting the language familiarity hypothesis in bilinguals, the current findings favour task-specific variation in speech disruptions in typical bilinguals.

Conclusion

The current findings suggest alternative explanations, including the influence of attention control on speech disruptions under auditory delay. The findings also offer implications for current research on bilingual speakers who stutter.

Speech in typical speakers is relatively effortless, as the synchronized sub-systems of speech render a continuous stream of acoustic segments. Although the effort behind speech production is not obvious, many investigations agree that a continuous chain of feedback from various sensory systems enables undisrupted speech. In particular, auditory and proprioceptive feedback from the vocal tract facilitates continuous speech flow without interruptions (Liebenthal & Möttönen, 2018; Sato & Shiller, 2018). Some contemporary models of speech production detail the importance of auditory feedback and feed-forward loops in the acquisition and maintenance of speech motor control (Guenther, 2006; Guenther & Vladusich, 2012).
Feed-forward control predicts the expected sensory consequence of a planned motor input prior to its articulatory execution. Such prediction is possible because of the effective auditory-articulatory mappings acquired during speech motor development. It is also hypothesized that sensory feedback loops facilitate the acquisition of speech motor control and thereby strengthen the feed-forward mechanism, so that dependency on feedback decreases with age. The neural architecture also favours feed-forward loops, as error identification and subsequent correction of the motor plan/program through feedback loops are relatively slow (20–45 ms) (Civier, Tasko, & Guenther, 2010). As explained in the Directions into Velocities of Articulators (DIVA) model of speech production, dedicated centres in the cerebral cortex mediate the feed-forward and feedback loops (Guenther, Ghosh, & Tourville, 2006; Tourville, Reilly, & Guenther, 2008). The ‘speech sound map’ that encodes the ‘motor plan/program’ of a speech segment also sends the expected sensory consequences (an efference copy) to the cerebellum before the speech is articulated. When the expected sensory consequence matches the incoming sensory feedback, no corrections are appended to the motor plan and the forward flow of speech is facilitated. A mismatch in the sensory consequences generates an error signal that is sent to the premotor areas, which direct a corrective motor plan/program to the ‘articulation maps’, thereby precisely executing the intended speech motor plan. Past studies have demonstrated the discrepancy between expected and incoming sensory information under DAF by reporting changes in the temporal dimension of the acoustic signal (Cai, Ghosh, Guenther, & Perkell, 2011; Mitsuya, MacDonald, & Munhall, 2014) and in kinematic measures (Mochida, Gomi, & Kashino, 2010; Sasisekaran, 2012).
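The comparator logic described above can be made concrete with a short sketch. The Python snippet below only illustrates the general predict-compare-correct loop under a feedback delay; the segment arrays, the two-segment delay, and the mismatch threshold are hypothetical choices for the example, not parameters of the DIVA model.

import numpy as np

# Illustrative predict-compare-correct loop under delayed feedback.
# All values (segments, delay, threshold) are hypothetical, not DIVA parameters.

def corrective_signal(predicted, received, threshold=0.1):
    """Return a correction only when expected and incoming feedback mismatch."""
    error = received - predicted
    return error if np.abs(error).mean() > threshold else np.zeros_like(error)

rng = np.random.default_rng(0)
planned = rng.normal(size=(10, 5))   # 10 speech segments, 5 "sensory" dimensions
expected = planned.copy()            # efference copy: predicted sensory consequences

delay = 2                            # feedback arrives two segments late
for t in range(len(planned)):
    fed_back = planned[t - delay] if t >= delay else np.zeros(5)
    correction = corrective_signal(expected[t], fed_back)
    if correction.any():
        print(f"segment {t}: mismatch detected, corrective signal generated")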
Under the influence of delayed auditory feedback (DAF), typical speakers show stuttering-like dysfluencies, a reduced speaking rate, and increased loudness in speech segments (Bradshaw, Nettleton, & Geffen, 1971; Fabbro & Darò, 1995; Fairbanks & Guttman, 1958; Harrington, 1988; Jones & Striemer, 2007; Lee, 1950a, 1950b; MacKay, 1970; Stephen & Haggard, 1980; Van Borsel, Sunaert, & Engelen, 2005), whereas persons with stuttering (PWS) are known to show improved fluency under the same conditions with a short auditory delay (Kalinowski, Stuart, Sark, & Armson, 1996; Sparks, Grant, Millay, Walker-Baston, & Hynan, 2002). Several factors that influence speech disruptions under DAF have been deliberated in previous research. These include (a) age (MacKay, 1968; Siegel, Fehst, Garber, & Pick, 1980), (b) auditory delay (Howell & Powell, 1987; MacKay, 1968; Stuart, Kalinowski, Rastatter, & Lynch, 2002), (c) gender (Bachrach, 1964; Ball & Code, 1997; Corey & Cuddapah, 2008; Fukawa, Yoshioka, Ozawa, & Yoshida, 1988), (d) familiarity of the stimulus material (Fukawa, 1980; Harano & Tagami, 1976), and (e) bilingualism/language familiarity (Fabbro & Darò, 1995; MacKay, 1970; Van Borsel et al., 2005). Nevertheless, high individual variability has been reported among typical speakers with respect to their susceptibility to speech disruptions under DAF (Agnew, McGettigan, Banks, & Scott, 2018; Chon, Kraft, Zhang, Loucks, & Ambrose, 2013).
Language familiarity is reported to be an important factor in bilinguals that influences speech disruptions under DAF. According to the language familiarity hypothesis, DAF exerts less influence on the more familiar language than on the less familiar language of bilinguals (MacKay, 1970). The hypothesis was formulated by MacKay (1970), who studied speech errors in native German and English speakers who also had varying degrees of knowledge of Congolese (L3). Speech errors were less common in the participants' more familiar language (German or English) than in the least familiar language (Congolese). The native language also had a higher speech rate, irrespective of the auditory delay. This is further supported by investigations showing that speech disruptions in children improve as they grow older (MacKay, 1968; Siegel et al., 1980); this age effect on speech disruptions under DAF was also attributed to increased language familiarity.
Some investigations have produced equivocal evidence for the language familiarity hypothesis on speech disruptions of bilinguals under DAF. Rouse and Tucker (1966) examined speech disruptions under DAF in three groups of participants who read prose either in their native language (English) or in a foreign language (English/French). Findings revealed that the interference of DAF was less on the non-native languages than on the native language that was familiar to the participants. Unequivocal interpretation of these results is difficult because a single auditory delay (225 ms) was employed, and less familiar languages may require longer auditory delays than usual for maximum speech interference to be observed (MacKay, 1970).
Kvavik, Katsuki-Nakamuri, Siegel, and Pick (1991) reported the speech disturbances of 38 native English speakers (L1) with varied levels of mastery of Spanish/Japanese (L2). Based on university credit scores for Spanish or Japanese, the participants were divided into beginning and advanced learners. Participants read 20 simple declarative sentences that were balanced for word choice and syntax across the languages by translating the originally constructed L1 sentences into L2. Results revealed that vocal intensity increased while reading the Japanese (L2) and Spanish (L2) sentences compared to English; however, language mastery did not influence this parameter. Measured sentence durations also failed to show a language familiarity effect. The authors attributed the lack of differences between languages to the small gap in language experience between the novice and advanced foreign language learners.
Fabbro and Darò (1995) studied speech disruptions under DAF during verbal fluency tasks in polyglot simultaneous interpreters and an equal number of monolinguals. Speech errors were compared between a ‘no delay’ condition and three ‘delay’ conditions (150, 200, and 250 ms). In the control group, no speech disruptions were observed in the ‘no delay’ condition compared to the delay conditions. However, the polyglot interpreters did not show variations in speech disruptions across languages or delay conditions. It was opined that the external attention focus of the polyglot interpreters allowed them to depend less on auditory feedback, making them equally resistant to speech disruptions in L1 and L2. Such a hypothesis is reasonable, as language interpreters are engaged in simultaneous translation of others' speech without paying attention to their own verbal output.
A more recent report by Van Borsel et al. (2005) supported the language familiarity hypothesis using DAF measures in native Dutch speakers (17 males, 13 females) who were proficient in both French and English. Participants read meaningful and nonsense text materials under 200 ms DAF. Results confirmed the language familiarity hypothesis: participants had fewer speech disruptions and a faster speech rate in their mother tongue than in the later acquired languages. A stimulus effect was also observed, with nonsense texts eliciting more errors than meaningful texts across languages.

Purpose of the Study

Due to methodological drawbacks, previous research on speech disruptions under DAF has led to equivocal results. Reported studies did not adequately measure the language abilities of bi/multilingual speakers (MacKay, 1970; Rouse & Tucker, 1966); stimulus materials were largely restricted to reading texts, which undermines the language formulation process (MacKay, 1970; Rouse & Tucker, 1966; Van Borsel et al., 2005); the inclusion of trained bilinguals who could handle complex speaking environments may have skewed the results (Fabbro & Darò, 1995); and questionable classification methods were used to group bilinguals (Kvavik et al., 1991). Additionally, most of the reported studies were on Indo-European language families, whose language structures remain relatively similar to each other.
The current study focused on Kannada (L1)-English (L2) bilingual speakers. Kannada is a Dravidian language recognized as the official and administrative language of the state of Karnataka. Kannada has approximately 56.6 million speakers, including those who speak the language in the neighbouring states of Andhra Pradesh, Kerala, Maharashtra, Tamil Nadu, and Telangana (Ministry of Home Affairs, 2001). Kannada is an agglutinative language that follows subject-object-verb order in its written form, whereas spoken Kannada is generally agreed to allow relatively free word order. Kannada differs from English in its phonology, as it contains compound letters and aspirated sounds; the aspirated sounds (including voiced classes) are borrowed from Sanskrit. Most commonly, native Kannada speakers learn English as their second language (L2) in schools that offer instruction in English. English, the lingua franca of the world, differs from Kannada in phonology and word order, strictly following a subject-verb-object order in both oral and written expression.
We propose to test the language familiarity hypothesis in Kannada (L1)-English (L2) bilinguals on two sets of stimuli: reading passages and answering questions. The inclusion of two different types of stimuli is justified by previous research, as some past studies have shown an effect of stimuli on the overall speech disruptions of bilinguals under auditory delay (Van Borsel et al., 2005). Reading passages generally reduces the language formulation load, whereas answering questions may increase it. Although language familiarity could be superficially judged from self-rating alone, this is a less straightforward measure. Hence, stringent measures were adopted to group the participants into high and low L2 proficiency categories using both self-rating and language performance scores. Additionally, the bilingual environment varies considerably in the Indian context, as most second language learners are embedded in an L1-speaking environment and the second language is learnt sequentially during the schooling years. It would be interesting to study how such variations in the bilingual environment and language learning patterns influence speech disruptions in typical Kannada (L1)-English bilinguals.
Furthermore, this study aims to measure speech disruptions across 150 and 250 ms auditory delays, as previous research has suggested that an auditory delay around 200 ms can significantly impact the speech production of adult speakers under DAF (Chase, Sutton, First, & Zubin, 1961; Hashimoto & Sakai, 2003; MacKay, 1968; Stuart et al., 2002).
As the previous research results are equivocal, the current study proposes and examines the following null hypotheses:
  • a) There will be no significant difference in the speech disruptions of typical Kannada (L1)-English (L2) bilinguals between 150 and 250 ms DAF.

  • b) There will be no significant difference in the speech disruptions of high and low L2 proficient typical Kannada (L1)-English (L2) bilingual speakers across DAF delay conditions.

  • c) There will be no significant difference in the speech disruptions of typical Kannada (L1)-English (L2) bilingual speakers between L1 and L2 across DAF delay conditions.

METHODS

Participants

The study included a total of 14 participants (n=14; 12 female, 2 male) who were typical Kannada-English bilingual adults in the age range of 18–35 years (mean 20.07±1.20). All participants were native Kannada speakers who had been residents of the state of Karnataka since birth, and none of them had changed their state of residence. English was learnt as a second language (L2) in school and was the medium of instruction. As part of the school curriculum, all participants learnt Hindi as a third language (L3) from the 5th grade onwards; however, none of them reported having studied Hindi after the 10th grade. A clinical examination was carried out on all participants, in which their history of speech-language development and their emotional and psychological well-being were documented. None of the participants had any history of spoken language delay/disorder, hearing loss, or psychological illness, and none were on medication for chronic illness. Written informed consent was obtained from the participants before enrolment in the study. The participant data are presented in Table 1.
Table 1.
Characteristics of the participants included in the study
Participant no. | Years of exposure to L2 | Most familiar language | Age of exposure to L2 | LEAP-Q scores for L2 (S/U/R/W) | Cloze test score | Proficiency category
1 | 12 | L1 | 4 | 2/3/3/2 | 13 | LP
2 | 15 | L1 | 6 | 2/3/4/4 | 14 | LP
3 | 17 | L1 | 3 | 3/3/4/4 | 25 | HP
4 | 14 | L1 | 5 | 4/4/4/4 | 27 | HP
5 | 15 | L1 | 5 | 3/4/4/4 | 26 | HP
6 | 15 | L1 | 5 | 4/4/4/4 | 28 | HP
7 | 15 | L1 | 5 | 3/4/4/3 | 24 | HP
8 | 16 | L1 | 4 | 2/2/3/2 | 15 | LP
9 | 18 | L1 | 3 | 1/2/3/2 | 13 | LP
10 | 15 | L1 | 3 | 2/2/2/3 | 14 | LP
11 | 17 | L1 | 4 | 3/4/4/3 | 24 | HP
12 | 13 | L1 | 4 | 4/4/3/4 | 26 | HP
13 | 13 | L1 | 5 | 2/2/2/2 | 12 | LP
14 | 14 | L1 | 5 | 2/3/2/3 | 15 | LP

LEAP-Q=Language Experience and Proficiency Questionnaire; S=speaking; U=understanding; R=reading; W=writing; LP=low proficiency; HP=high proficiency.

Although 16 participants were initially recruited, the samples of two participants were removed: one could not adequately recollect his bilingual language history, and the recorded speech of the other was of indiscernible quality.

Materials

Assessment of language proficiency

To document the language history and proficiency of the participants, an Indian version of the Language Experience and Proficiency Questionnaire (LEAP-Q) was used (Maitreyee, 2009; Marian, Blumenfeld, & Kaushanskaya, 2007). The Indian version of the LEAP-Q was standardized on a Kannada-English-Hindi trilingual population (Maitreyee, 2009). The LEAP-Q is a self-rated language proficiency measure that collects detailed language histories of bi/multilingual populations. It gathers information on participants' knowledge of languages, age of acquisition, self-rated proficiency levels across language domains (speaking, understanding, reading, and writing), and usage patterns. Self-rating is carried out on a 4-point rating scale, where 0 indicates ‘poor’ and 3 indicates ‘native-like’ proficiency. As most participants had learnt Hindi as a third language (L3) during their schooling years, they were encouraged to document this language history as well while filling out the LEAP-Q. The LEAP-Q details are displayed in Table 1.
To reduce the subjective bias of the self-rated L2 proficiency, a cloze test in English was administered to all participants. A cloze test (Taylor, 1953) is a language performance measure in which participants are instructed to fill in the missing words of a passage based on contextual cues (Appendix 1). A total score of 30 was allotted for this task, and each participant's score was based on the number of correct answers provided. Based on the self-rated speaking proficiency on the LEAP-Q together with the cloze test scores, participants were grouped into high or low L2 proficiency categories. The participants' performance on the cloze test is provided in Table 1.

Apparatus

An Android-based ‘Delayed Auditory Feedback’ application was used in the current study. The DAF application developed by Boostlabz software (http://boostlabz.com/daf/download) can be freely downloaded from the Google Play Store. The application was installed on a Motorola G5 Plus Android smartphone with a 2 GHz octa-core processor. Compatible headsets were used to deliver the auditory delay at the listener's comfortable loudness level. The application ran in the background without affecting the performance of the smartphone; however, to reduce the load on the system, incoming calls and text messages, along with internet data, were disabled while conducting the experiments. The application offers an auditory delay that can be varied from 0 to 500 ms, and speech can be sampled from 8 to 48 kHz. In the current study, a sampling rate of 44.1 kHz and auditory delays of 250 ms and 150 ms were chosen. An Olympus digital sound recorder (Model WS-550M) was used to record the participants' speech. The recorder was placed 15 cm from the participants' mouths while collecting the speech samples. Speech was sampled at 16 bits with a sampling rate of 44.1 kHz, recorded in wave format (.wav), and transferred to a personal computer (PC) for analysis.
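For reference, the delay settings above correspond to fixed sample offsets at the chosen sampling rate. The following numpy sketch shows that conversion and how a recorded signal would be shifted by it; it illustrates the arithmetic only and is not the implementation used by the Boostlabz application.

import numpy as np

def delay_signal(signal: np.ndarray, delay_ms: float, sample_rate: int = 44100) -> np.ndarray:
    """Return a copy of `signal` delayed by `delay_ms`, zero-padded at the start."""
    offset = int(round(delay_ms * sample_rate / 1000))  # 150 ms at 44.1 kHz -> 6,615 samples
    return np.concatenate([np.zeros(offset, dtype=signal.dtype), signal])

# Example: a 250 ms delay corresponds to 11,025 samples at 44.1 kHz.
dummy = np.zeros(44100, dtype=np.float32)               # 1 s of silence as a stand-in signal
print(len(delay_signal(dummy, 250)) - len(dummy))       # 11025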

Stimuli

The study used standard Kannada and English reading passages as the first set of stimuli. A standard reading passage in Kannada developed by Savithri and Jayaram (2004), which includes all the sounds of the Kannada language, was chosen. For English (L2), the Rainbow Passage (Fairbanks, 1960), which includes all the English phonemes except /z/ and /h/, was selected. Even though the Kannada reading passage was more than 300 words in length, only the first 256 syllables were selected, matching the syllable length of the Rainbow Passage in English.
A list of questions was developed as the second set of stimuli, since reading passages do not tax the language formulation process (Appendix 2). The questions were framed in English and rated by 5 SLPs on 3-point scales for grammatical correctness (0=grammatically incorrect, 1=grammatically fair, 2=grammatically correct), commonality (0=uncommon, 1=common, 2=very common), and quality of responses (0=evokes poorly elaborated verbal responses, 1=evokes fairly elaborated verbal responses, 2=evokes well elaborated verbal responses). Only the questions that received a rating of ‘2’ on all of the above parameters were retained. Accordingly, the 10 questions that formed the final set were translated from English to Kannada to maintain content similarity across languages.

Procedure

Establishing the language proficiency groups

All participants provided details of their language history by filling out a written version of the LEAP-Q (L1, L2, and L3). As the questionnaire was self-explanatory, no specific instructions were provided; however, any clarifications requested by the participants were addressed by the second and third investigators of this study. This was followed by administration of the cloze test, in which participants were instructed to fill in the incomplete letters/words of an English reading passage using contextual cues. To familiarize the participants with the task, a ‘practice phrase’ with missing letters/words was provided, and the participants were encouraged to complete the phrase using the cues. This was followed by the ‘cloze test’ passage, which consisted of a total of 30 incomplete words. Completion of the cloze test was not timed. Administration of these two tests took approximately 30 minutes. For the current study, we operationally defined high-proficiency L2 speakers as those who rated their speaking proficiency on the LEAP-Q as ≥3 and obtained a score of ≥16 on the cloze test. Participants who rated their speaking proficiency as ≤2 on the LEAP-Q and scored ≤15 on the cloze test were categorized as low-proficiency L2 speakers.
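The grouping rule just described can be written as a small decision function. The sketch below is an illustrative helper based on the criteria reported above; the function name and the 'unclassified' fallback for combinations the criteria do not cover are our own assumptions, not part of the study materials.

def proficiency_group(leapq_speaking: int, cloze_score: int) -> str:
    """Apply the operational grouping criteria described above (illustrative only)."""
    if leapq_speaking >= 3 and cloze_score >= 16:
        return "HP"            # high L2 proficiency
    if leapq_speaking <= 2 and cloze_score <= 15:
        return "LP"            # low L2 proficiency
    return "unclassified"      # combinations not covered by the stated criteria

print(proficiency_group(4, 27))  # HP (e.g., participant 4 in Table 1)
print(proficiency_group(2, 13))  # LP (e.g., participant 1 in Table 1)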

Procedure to study speech disruptions under DAF

For the experimental tasks, participants were comfortably seated in a noise-free room. The smartphone loaded with the DAF application, along with a compatible headset, was provided to the participants. For the passage reading task, a printed copy of the reading passages written in Kannada and English orthography was handed to the participants. Participants were instructed to read the passages at their habitual speaking rate and loudness, as accurately as possible and without interruptions. Reading passages were recorded with a gap of 3 minutes between languages. The task was counterbalanced for languages and auditory delay conditions across participants.
Questions were written in the respective orthographies and presented on a personal computer using Microsoft PowerPoint. Participants were instructed to read the questions and answer them at their habitual speaking rate and loudness. Answering the questions was an untimed task, as the open-ended questions evoked elaborated responses from the participants. The order of questions, languages, and auditory delay conditions was counterbalanced across participants. Instructions were provided in either L1 or L2, depending on the language in which the task was performed. Although open- and closed-ended questions evoked answers of different sentence lengths, the participants were encouraged to answer the questions in sentences. The speech disruptions that occurred during the experimental tasks were captured using the Olympus digital sound recorder.

Analysis

Speech disruptions of the bilinguals were analysed according to the framework provided by Kvavik et al. (1991), in which errors are classified into articulatory errors (substitutions, omissions, distortions, and additions), repetition errors (repetition of sounds, syllables, words, and phrases), and other errors (interjections, prolongations, and pauses within and between words). Speech disruptions were analysed by two co-investigators of this study, who served as judges, listened to the speech samples of all participants, and counted the errors independently. Cronbach's alpha was used to assess inter-judge reliability for the counted speech disruptions. Agreement between the judges was high, with Cronbach's alpha coefficients of .85, .92, and .87 for articulatory, repetition, and other errors, respectively.
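To illustrate the reliability computation, the sketch below applies the standard Cronbach's alpha formula to a participants-by-judges matrix of error counts; the counts are hypothetical examples, not the study data.

import numpy as np

def cronbach_alpha(ratings: np.ndarray) -> float:
    """Cronbach's alpha for a participants x judges matrix of counts."""
    k = ratings.shape[1]                            # number of judges
    item_vars = ratings.var(axis=0, ddof=1)         # variance of each judge's counts
    total_var = ratings.sum(axis=1).var(ddof=1)     # variance of the summed counts
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical counts from two judges for 14 participants (not the study data)
judge_a = np.array([1, 2, 0, 3, 1, 2, 2, 0, 1, 3, 2, 1, 0, 2])
judge_b = np.array([1, 2, 1, 3, 1, 2, 2, 0, 1, 2, 2, 1, 0, 2])
print(round(cronbach_alpha(np.column_stack([judge_a, judge_b])), 2))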

Statistical Analysis

Non-parametric statistical tests were used as the speech disruption data were non-normally distributed. The study analysed the main effects of auditory delay, L2 language proficiency, and language. As auditory delay and language were within-subject factors, a Wilcoxon signed-rank test was used to analyse their effects. A Mann-Whitney U-test was used to analyse the main effect of L2 language proficiency on speech disruptions under DAF. Within-proficiency-group and within-language comparisons were carried out using a Friedman test, and any differences were followed up with post hoc Wilcoxon signed-rank tests.
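A minimal sketch of this analysis plan using scipy.stats is given below on hypothetical error counts; it only shows which test addresses which factor and is not the study's actual analysis script.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical per-participant error counts (n = 14), not the study data.
errors_150ms = rng.poisson(2.0, 14)
errors_250ms = rng.poisson(1.5, 14)

# Within-subject factor (auditory delay): Wilcoxon signed-rank test
print(stats.wilcoxon(errors_150ms, errors_250ms))

# Between-group factor (L2 proficiency, n = 7 per group): Mann-Whitney U test
high_prof, low_prof = errors_150ms[:7], errors_150ms[7:]
print(stats.mannwhitneyu(high_prof, low_prof))

# Within-group comparison of the three error types: Friedman test,
# followed by pairwise Wilcoxon signed-rank tests if significant.
articulatory, repetition, other = rng.poisson([1.0, 2.0, 2.5], size=(14, 3)).T
print(stats.friedmanchisquare(articulatory, repetition, other))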

RESULTS

The study had three objectives. The first objective examined the effect of auditory delay on speech disruptions under DAF in the experimental tasks of reading passages and answering questions in typical Kannada (L1)-English (L2) bilinguals. The second objective examined the main effect of second language (L2) proficiency on speech disruptions. The third objective compared speech disruptions under DAF between the languages. As the speech error data were non-normally distributed across languages, proficiency groups, and auditory delays (Shapiro-Wilk test, p < .05), non-parametric tests were used to analyse the differences.

Effect of Auditory Delay on Speech Errors

Reading task

Speech disruptions categorized as articulatory, repetition, and other errors in typical Kannada (L1)-English (L2) bilinguals were compared using a Wilcoxon signed-rank test for Kannada and English separately. In L1, repetition errors were higher at the 150 ms DAF delay than at the 250 ms delay (|z|=2.28, p<.02). No other comparisons in L1 revealed statistically significant differences. As a trend, Table 2 shows that overall speech disruptions under DAF in L1 were higher at the 150 ms delay than at the 250 ms delay.
Table 2.
Speech disruptions across DAF delay conditions (n = 14) in Kannada (L1) and English (L2) for the reading task
Type of errors | 150 ms DAF delay: Mean (SD) / Median / No. of errors | 250 ms DAF delay: Mean (SD) / Median / No. of errors
Kannada (L1)
   Articulatory error | 1.14 (.86) / 1 / 16 | .57 (1.08) / 0 / 8
   Repetition error | 2.43 (1.82) / 2 / 14 | 1.50 (1.60) / 1 / 21
   Other error | 2.50 (1.28) / 3 / 35 | 2 (1.35) / 2.50 / 28
English (L2)
   Articulatory error | .57 (.93) / 0 / 8 | 0 (0) / 0 / 0
   Repetition error | 1.71 (1.59) / 2 / 24 | 1.36 (1.15) / 2 / 19
   Other error | 1.79 (1.31) / 2 / 25 | 1.86 (1.29) / 1.50 / 26

DAF = delayed auditory feedback.

The effect of DAF delay was significant only for articulatory errors in L2, which were greater at the 150 ms auditory delay (|z|=2.06, p<.03). The effect of auditory delay was not significant for repetition and other errors. Descriptive statistics of speech errors in L1 and L2 across DAF conditions for the reading task are presented in Table 2.

Task of answering questions

The effect of auditory delay on speech disruptions was not significant for the task of answering questions in L1. However, differences were significant in L2 for other errors, which were higher in the 150 ms delay condition (|z|=2.65, p=.008). Descriptive statistics of speech disruptions under DAF in L1 and L2 for the task of answering questions are displayed in Table 3.
Table 3.
Speech disruptions across DAF delay conditions (n = 14) in Kannada (L1) and English (L2) for the task of answering questions
Type of errors | 150 ms DAF delay: Mean (SD) / Median / No. of errors | 250 ms DAF delay: Mean (SD) / Median / No. of errors
Kannada (L1)
   Articulatory error | .43 (.75) / 0 / 6 | .29 (.72) / 0 / 4
   Repetition error | 1.86 (1.02) / 2 / 26 | 1.50 (2.34) / 1 / 21
   Other error | 2.14 (1.79) / 1.5 / 30 | 1.07 (1.32) / .5 / 15
English (L2)
   Articulatory error | .21 (.57) / 0 / 3 | .36 (.84) / 0 / 5
   Repetition error | .93 (.82) / 1 / 13 | .86 (1.23) / 0 / 12
   Other error | 1.79 (1.42) / 2 / 25 | .79 (1.05) / 0 / 11

DAF = delayed auditory feedback.

Effect of L2 Language Proficiency on Speech Errors

Between proficiency group comparisons: reading task

A Mann-Whitney U-test was used to analyse the effect of L2 language proficiency on mean speech disruptions under DAF for the 150 ms and 250 ms delay conditions separately. For the 150 ms delay condition, results revealed a marginally significant difference for articulatory errors in L1, which were higher in the low proficiency group than in the high proficiency group (|z|=1.86, p=.06). For the 250 ms delay condition, no comparisons of speech disruptions were statistically significant (p>.05). Descriptive statistics of speech errors under DAF by language proficiency are shown in Table 4.
Table 4.
Speech disruptions under 150 ms and 250 ms auditory delay between high and low L2 proficient speakers for reading task
Type of errors | High proficiency group: Mean (SD) / Median / No. of errors | Low proficiency group: Mean (SD) / Median / No. of errors
Auditory delay of 150 ms
   Kannada (L1)
      Articulatory error | .71 (.75) / 1 / 5 | 1.57 (.78) / 1 / 11
      Repetition error | 2.57 (1.90) / 2 / 18 | 2.29 (1.89) / 2 / 16
      Other error | 2.43 (1.27) / 3 / 17 | 2.57 (1.39) / 3 / 18
   English (L2)
      Articulatory error | .29 (.48) / 0 / 2 | .86 (1.21) / 0 / 6
      Repetition error | 1.86 (1.86) / 2 / 13 | 1.57 (1.39) / 2 / 11
      Other error | 1.43 (1.27) / 1 / 10 | 2.14 (1.34) / 2 / 15
Auditory delay of 250 ms
   Kannada (L1)
      Articulatory error | .29 (.48) / 0 / 2 | .86 (1.46) / 0 / 6
      Repetition error | 1.57 (2.07) / 1 / 11 | 1.43 (1.13) / 2 / 10
      Other error | 2.14 (1.57) / 3 / 15 | 1.86 (1.21) / 2 / 13
   English (L2)
      Articulatory error | 0 (0) / 0 / 0 | 0 (0) / 0 / 0
      Repetition error | 1.43 (1.39) / 2 / 10 | 1.29 (.95) / 2 / 9
      Other error | 1.57 (1.13) / 1 / 11 | 2.14 (1.46) / 3 / 15

Within proficiency group comparisons: reading task

Speech disruptions were also compared within the high and low proficiency groups separately using a Friedman test. Speech errors differed significantly for the high proficiency group in L1 at the 150 ms delay (χ2=7.0, p=.03), but no differences were observed for the low proficiency group (χ2=4.0, p=.13). Post hoc analysis of the high proficiency group revealed fewer articulatory errors compared to repetition (|z|=2.22, p=.02) and other errors (|z|=2.05, p=.04).
Differences between speech disruptions at 250 ms DAF in L2 were found in both the high (χ2=7.30, p=.02) and low proficiency groups (χ2=9.65, p=.008). Overall, articulatory errors were less frequent than repetition and other errors in both the high (|z|=1.85, p=.06; |z|=2.33, p=.02) and low proficiency (|z|=2.12, p=.03; |z|=2.23, p=.02) groups. Repetition and other errors were comparable within each proficiency category (|z|=.33, p=.73; |z|=1.73, p=.08).

Between proficiency group comparisons: task of answering questions

The effect of language proficiency on speech disruptions in the 150 ms delay condition revealed differences only for articulatory errors in L1 (|z|=2.24, p=.02), with high proficiency speakers showing fewer speech disruptions than low proficiency L2 speakers. The effect of language proficiency on speech disruptions at the 250 ms delay did not reveal statistically significant differences. Descriptive statistics of speech disruptions while answering questions across proficiency groups are presented in Table 5.
Table 5.
Speech disruptions under 150 ms and 250 ms auditory delay between high and low L2 proficiency speakers for the task of answering questions
Type of errors | High proficiency group: Mean (SD) / Median / No. of errors | Low proficiency group: Mean (SD) / Median / No. of errors
Auditory delay of 150 ms
   Kannada (L1)
      Articulatory error | 0 (0) / 0 / 0 | .86 (.90) / 1 / 6
      Repetition error | 1.86 (.69) / 2 / 13 | 1.86 (1.34) / 2 / 13
      Other error | 1.57 (1.13) / 1 / 11 | 2.71 (2.21) / 2 / 19
   English (L2)
      Articulatory error | .14 (.38) / 0 / 1 | .29 (.75) / 0 / 2
      Repetition error | 1.29 (.95) / 1 / 9 | .57 (.53) / 1 / 4
      Other error | 1.43 (.97) / 1 / 10 | 2.14 (1.77) / 2 / 15
Auditory delay of 250 ms
   Kannada (L1)
      Articulatory error | .29 (.75) / 0 / 2 | .29 (.75) / 0 / 2
      Repetition error | 1.17 (3.30) / 0 / 12 | 1.29 (.95) / 1 / 9
      Other error | 1.43 (1.61) / 1 / 10 | .71 (.95) / 0 / 5
   English (L2)
      Articulatory error | .14 (.37) / 0 / 1 | .57 (1.13) / 0 / 4
      Repetition error | 1.00 (1.52) / 0 / 7 | .71 (.95) / 0 / 5
      Other error | .71 (.95) / 0 / 5 | .86 (1.21) / 0 / 6

Within proficiency group differences: answering questions

In the high proficiency group, differences were observed between speech disruptions in L1 (χ2=11.21, p=.004) and L2 (χ2=7.75, p=.02) at the 150 ms auditory delay. Post hoc analysis revealed that articulatory errors were significantly less frequent than repetition and other errors in both L1 (|z|=2.41, p=.01; |z|=2.23, p=.02) and L2 (|z|=2.33, p=.02; |z|=1.98, p=.04). No such differences were evident in the low proficiency group, and differences between speech disruptions were not significant at the 250 ms auditory delay.

Effect of Language on Speech Errors

Between language comparisons: reading task

Comparison of speech errors between L1 and L2 across the auditory delay conditions (150 ms, 250 ms) using a Wilcoxon signed-rank test revealed significant differences only for articulatory errors (|z|=2.12, p=.03; |z|=2.11, p=.03), which were more frequent in L1 than in L2. None of the other comparisons reached statistical significance. The mean speech disruptions between languages across auditory delay conditions are represented in Figure 1.
Figure 1.
Between-language comparison of mean speech disruptions between L1 and L2 for the reading task.

Within language comparisons: reading task

A Friedman test and Wilcoxon signed-rank tests were used to examine the speech errors within L1 and L2 for each auditory delay condition. In L1, at the 150 ms auditory delay, articulatory errors were less frequent than repetition (|z|=2.57, p=.01) and other errors (|z|=2.35, p=.01). L2 showed no such differences between speech disruptions at the 150 ms delay (χ2=5.48, p=.06). In both L1 and L2, at the 250 ms auditory delay, articulatory errors were the least frequent compared to repetition (L1, |z|=2.23, p=.02; L2, |z|=2.75, p<.01) and other errors (L1, |z|=2.62, p<.01; L2, |z|=3.10, p<.01).

Between language comparisons: answering questions

In this comparison, only repetition errors differed significantly between the languages at the 150 ms auditory delay (|z|=2.21, p=.02). No appreciable differences were observed between languages at the 250 ms delay. The mean speech disruptions while answering questions are represented in Figure 2.
Figure 2.
Between-language comparison of mean speech disruptions between L1 and L2 for the task of answering questions.

Within language comparisons: answering questions

Comparison of mean speech disruptions under DAF was significant at the 150 ms delay for both L1 (χ2=6.83, p=.03) and L2 (χ2=11.87, p<.01), but no such differences were seen at 250 ms. In both L1 and L2, fewer articulatory errors were observed compared to repetition and other errors, but no differences were observed between repetition and other errors.

DISCUSSION & CONCLUSION

Effects of DAF on Speech Disruptions

There was a significant main effect of auditory delay, with speech disruptions higher at 150 ms than at 250 ms DAF. The findings corroborate earlier DAF studies and theoretical models of speech production, which have shown greater speech disruptions at shorter auditory delays, indicative of less overall reliance on feedback mechanisms in favour of pre-existing internal feed-forward controls (Guenther, 2006; Guenther & Vladusich, 2012). The total number of speech disruptions at 150 ms DAF outnumbered the errors obtained at 250 ms DAF. With the above findings, we reject our first null hypothesis and accept a main effect of auditory delay on speech disruptions in typical Kannada (L1)-English (L2) bilingual speakers.
Interestingly, the effect of auditory delay did not evoke the same types of speech errors across the reading and answering questions tasks, and the type of speech errors also varied across languages. In reading, repetition errors were high in L1, whereas articulatory errors were high in L2. While answering questions, none of the speech disruptions differed in L1, whereas ‘other errors’ were more frequent in L2. This indicates an interaction between auditory delay and error type in L2: the shorter auditory delay of 150 ms produced more interjections, prolongations, and within/between-word pauses in English, which was not observed for Kannada. Mixed ANOVA analysis computed for the variables also supported an interaction (F(1,12)=.79, p=.39). This also supports the notion that differences in language formulation could induce different types of speech errors in bilinguals under DAF. Repetition and articulatory errors may indicate language formulation difficulties, whereas other errors (interjections, prolongations, and pauses within and between words) may signify errors independent of language formulation. Task-dependent speech errors have been reported in very few studies, which commonly employed reading (non-propositional) along with conversation (propositional) tasks under DAF (Boller, Vrtunski, Kim, & Mack, 1978; Burke, 1975; Corey & Cuddapah, 2008). Collectively, these studies propose that monitoring semantic content becomes necessary for propositional speech tasks, which places greater linguistic demand on speakers than reading tasks that do not necessarily require such monitoring. We further propose that not only the monitoring of semantic content but also the construction of motor plans/programs becomes laborious when the auditory feedback is faulty. Teasing apart which speech and/or language processes are taxed under disruptive auditory feedback, for example by comparing nonsense words/texts with meaningful words, and examining task-dependent speech errors under DAF more broadly, would be interesting directions for future study.
The effect of auditory delay was not significant while answering questions in Kannada (L1), but it was observed in L2 for other errors. It can be speculated that answering some of the common questions may not have considerably taxed the language formulation process of our participants. It can also be reasoned that the length of the answer to a question depends solely on the participant's discretion, and short answers generally reduce the number of opportunities to make an error. Therefore, answering in a familiar language with short, pre-programmed phrases/sentences may have reduced the overall errors of our participants. Analysing the participants' mean length of utterance was not an objective of this study and can be taken up in future work.

Effects of Language on Speech Disruptions under DAF

Between-language comparisons clearly revealed greater speech disruptions in L1, the native language of all the participants, although the type of errors differed across the experimental tasks. This finding goes against the language familiarity hypothesis supported by some earlier studies (MacKay, 1970; Van Borsel et al., 2005). It is intriguing to note such high errors in L1, as all our participants were native Kannada speakers who used L1 on a day-to-day basis. We speculate that the current findings may reflect variation in attention regulation across the languages of bilinguals. Even though our participants had varied proficiency in L2, they were at a higher level of mastery in L1 than in L2. When a language has reached a high level of mastery, individuals can construct and vary semantic and syntactic structures to express any concept, and such abilities may necessitate conscious control of attention while producing utterances in a language of high familiarity. More speech disruptions can be expected when the attention focus is internalized while uttering a group of segments, and this may have increased the overall number of speech disruptions in L1. These findings are consistent with some physiological studies on bilinguals that attributed the higher variability of native-language spoken tokens to increased flexibility in the motor planning of speech (Chakraborty, Goffman, & Smith, 2008; Mahesh & Manjula, 2016; Sharkey & Folkins, 1985).
Alternatively, it can be speculated that when auditory feedback is abruptly altered for a linguistically and motorically well-practised language, the flow of feed-forward plans is temporarily halted and the whole system begins to depend on feedback control. As feedback control had previously been used only in a limited way to monitor this language, greater speech errors could have occurred in L1. These speculations need to be confirmed in prospective studies by manipulating attention control in bilinguals across tasks that demand linguistic formulation. Our finding is in agreement with earlier reports that demonstrated higher speech disruptions in the L1 of typical bilinguals under the influence of DAF (Rouse & Tucker, 1966; Yeni-Komshian, Chase, & Mobley, 1968).
Task-specific variation in the type of speech disruptions was also noted: articulatory errors were more frequent in reading passages, whereas repetition errors were higher while answering questions in L1. We attribute this difference to variations in language formulation load across the experimental tasks. In addition, differential neural circuitries are hypothesized to operate for speaking tasks that are or are not guided by external sensory stimuli (Ritto, Costa, Juste, & Andrade, 2016). Ritto et al. (2016) hypothesized that propositional speech, such as simple conversation, requires initiation controlled by dorsal premotor systems, whereas speech guided by external sensory stimuli, such as choral reading (auditory) and oral reading (visual), may involve circuitries of the lateral premotor cortex. Our findings provide partial behavioural support for this hypothesis, as propositional (answering questions) and less propositional (oral reading) tasks were employed in the current study, and we posit that a change in the recruitment of neural circuitries may also change the behavioural error types.
Within-language comparisons revealed high repetition and other errors across languages and delay conditions in the reading task; answering questions showed a similar trend only at the 150 ms auditory delay. At longer auditory delays, the boundaries between articulatory and fluency-like disruptions could become blurred, as participants might monitor their speech productions more consciously.

Proficiency Differences

The effect of language proficiency was significant only for Kannada (L1) at 150 ms DAF across the experimental tasks: articulatory errors in L1 were higher at 150 ms DAF in the low proficiency group. The effect of L2 language proficiency on articulatory errors was independent of the experimental tasks but specific to the language and auditory delay, indicating an auditory delay × language × L2 proficiency interaction. This was further substantiated by a mixed ANOVA of the data, in which a clear interaction was observed among auditory delay, language, and L2 proficiency (F(1,12)=5.18, p=.04).
The results on the effect of L2 language proficiency pose interesting questions. Why did native Kannada (L1) speakers with low English (L2) proficiency show more speech errors in Kannada? This cannot be attributed to low proficiency or language attrition in L1, as all were native Kannada speakers who rated themselves as using Kannada in day-to-day communication. None of them reported a change in their native state of residence in the previous 10 years, which also indirectly supports their constant daily exposure to Kannada. Can the findings be attributed to the language interference seen in bilinguals? The answer is less straightforward, as interference of L2 on L1 is less common in low-proficiency L2 speakers; in fact, the opposite is true, as the more experienced L1 is known to influence the language structure of L2 (Grosjean, 1982). If this line of argument is accepted, it would be reasonable to expect greater errors in L1 for highly proficient speakers but not for their low-proficiency counterparts. Language usage could be an important factor that was not controlled for; however, all the participants reported in their self-rating that they used Kannada (L1) for most of their communication purposes. If they use L1 for most of their communication, why were overall speech errors higher in Kannada (L1) and not in L2?
With the above findings, we reject our second null hypothesis and state that second language (L2) proficiency influences speech disruptions under DAF in typical Kannada-English bilingual speakers only for L1 utterances at the shorter auditory delay. Based on previous investigations, it can be hypothesized that an increase in auditory delay produces so many speech errors that our participants may have been prompted to ignore the auditory feedback and depend on the intact somatosensory feedback, thereby nullifying the effects of L2 language proficiency on speech disruptions under DAF across languages and experimental tasks (Mitsuya, Munhall, & Purcell, 2017). The role of attention, although uncontrolled in this study, cannot be ignored for its influence on the current findings. It is speculated that highly proficient L2 speakers who also use Kannada (L1) to a large extent may have developed adequate cognitive control to shift their attention flexibly, whereas this could be limited in low-proficiency L2 bilinguals who most commonly use L1 for speaking. As flexibility in attention control is a skill known to influence speech production, it partly explains the current results. Investigations of speech and non-speech tasks have suggested that the number of errors decreases when individuals shift their attention away from their verbal output (Freedman, Maas, Caligiuri, Wulf, & Robin, 2007; Lisman & Sadagopan, 2013; MacKay, 1970).
Various types of speech disruptions were compared within the high and low L2 proficiency groups across languages and auditory delay conditions. Articulatory inaccuracies, which comprised the traditional substitution, omission, distortion, and addition (SODA) errors, were less frequent than repetition and other errors in the high proficiency group; this pattern was most consistent in L1, for the answering questions task, and at 150 ms DAF. As a whole, errors were undifferentiated in the low proficiency group across experimental tasks and delay conditions. Together, these findings indicate that stuttering-like dysfluencies (which included the repetition and other errors of this study) are more common under the influence of DAF than articulatory errors (Fabbro & Darò, 1995; Fairbanks & Guttman, 1958; Harrington, 1988; Jones & Striemer, 2007; Lee, 1950a, 1950b). As a corollary, the findings also point out that the less proficient group could show both articulatory errors and stuttering-like dysfluencies (SLDs). These findings may have implications for stuttering research on bilinguals, as the DAF effect mimics stuttering; we propose that SLDs could be more commonly expected in the L1 of high-proficiency speakers, whereas low-proficiency speakers are vulnerable to both articulatory and fluency disruptions across languages.
Studies of bilinguals with stuttering (BWS) have identified the influence of language-related variables on speech fluency (see Van Borsel, Maes, & Foulon, 2001 for a review of bilingualism and stuttering). Language proficiency has been studied in many recent investigations as an important factor that dictates the amount and type of dysfluencies across the languages of BWS. Although the reported severity and types of dysfluencies across the languages of BWS are equivocal, a parallel can be drawn between the findings of the current study and the available literature. The current findings corroborate investigations that have shown increased dysfluencies in the native language of BWS (Howell et al., 2004; Jayaram, 1983). Additionally, the current results support articulatory kinematic research on bilinguals, which has shown increased token-to-token variability in native-language utterances of highly L2-proficient Bengali (L1)-English (L2) and Kannada (L1)-English (L2) bilinguals (Chakraborty et al., 2008; Mahesh & Manjula, 2016). Semantic satiation and cross-linguistic interference that consequently allocate fewer cognitive resources to producing L1 utterances were cited as reasons for the higher dysfluencies/variability in the native languages of BWS.
Although the auditory processing areas are reported to be structurally different in bilinguals compared to monolinguals (Golestani, Price, & Scott, 2011; Ressel et al., 2012; Wong et al., 2008), very few studies have investigated changes in the auditory processing areas of the brain as a function of bilingual language acquisition. Gresele, Garcia, Torres, Santos, and Costa (2013) showed that successive Portuguese-Italian typical bilinguals had better scores on the dichotic digit test and the staggered spondaic word test than simultaneous bilinguals. The auditory processing differences observed between simultaneous and successive bilinguals in that study hint that the type of bilingual language acquisition may influence speech disruptions under auditory delay. As the current study was carried out on sequential bilinguals who were exposed to English in their schooling years, it would be interesting in future work to compare how simultaneous and successive bilinguals respond across a range of auditory delay conditions and speaking tasks.

Limitations and Future Directions

This study included a small sample, and most of the participants were female. Even though the effects of gender on DAF-related disruptions are equivocal, it is recommended to examine the validity of such a phenomenon in typical bilinguals. To further motivate such investigations, it should be noted that the mean speech disruptions were very minimal in the current study, probably due to the inclusion of a high number of female participants. Hence, exploring the effects of gender on speech errors under DAF in typical bilinguals would be an interesting endeavour.
Other sensitive measures, such as the total time taken to read a passage or answer the questions across the languages of a bilingual, could be evaluated, as this could not be addressed in the current study. Controlling both language proficiency and usage of L2, and correlating these with the type of speech disruptions, can also be examined in future studies. We used two types of tasks to understand the influence of language load on speech errors, and such task-related variability in speech errors (e.g., propositional vs. non-propositional) under DAF in typical bilinguals could be further explored. Though we have speculated on the role of attention in speech disruptions, this can be systematically controlled and its effects documented in upcoming studies.
A self-rating proficiency scale along with a simple language performance measure was used to subgroup the participants into high and low L2 proficiency groups. Instead, future studies could use language achievement tests, particularly at the syntax level, which could help to differentiate participants more clearly into language proficiency categories.
As the study included both within-subject (languages, auditory delays, error types) and between-subject factors (L2 language proficiency), it was difficult to capture interaction effects using the non-parametric tests employed in the current study. Therefore, a mixed ANOVA was run to examine the interactions, if any. No interactions were observed between within- and between-subject factors for the reading task, whereas interactions were observed for the task of answering questions: auditory delay × language × L2 language proficiency, and delay × error type. These interactions were elaborated upon for the task of answering questions earlier in the discussion.
In conclusion, the language familiarity hypothesis was examined in the current study by varying the L2 proficiency of typical Kannada (L1)-English (L2) bilinguals. Results revealed a trend opposite to the language familiarity hypothesis proposed in previous studies: Kannada (L1), the most familiar language, showed more speech disruptions under DAF when L2 proficiency was varied. The low L2 proficiency group displayed more errors than the high L2 proficiency group in L1 at 150 ms DAF across the experimental tasks. Within-proficiency-group comparisons revealed more errors in L1 for the high L2 proficiency group across experimental tasks at 150 ms DAF. Findings revealed greater repetition errors (repetition of sounds, syllables, words, and phrases) and other errors (interjections, prolongations, and pauses within and between words) compared to articulatory errors (substitutions, omissions, distortions, and additions) in typical bilinguals under DAF. Overall, errors in L1 outnumbered errors in L2 when the languages were compared. This is the only study to date to report the influence of DAF on speech disruptions in typical bilinguals from the Indian context. The findings pave the way for further studies that exert control on language usage patterns in bilinguals and explore the effect of attention control on speech disruptions under DAF.

Appendices

Appendix 1.

Cloze test

In the following passage, the blank spaces indicate that words are incomplete. Please fill in the necessary letters in order to make the words, as well as the passage, linguistically correct.
Example: In order to bake a cake you need fl___r, e__s, m__k, bak__ so_a, and su__r.
The house I live in is not very big, but it is comfortable. There i_ a gard__ in fr__t of t_ _ house. Wh___ you o___ the fr___ door, y___ are in___ the li___ room. Wh___ you wa___ through t___ living r___, you en___ t___ kitchen. T___ backyard i___ through t___ kitchen do___. Th___ are thr___ bedrooms a___ one ba___ in t___ house. Y___ reach th___m through t___ door nea___ the ki___.
Appendix 2.

A list of questions developed as a second group of stimuli

Questions

1. Where do you live?
2. What do you do for your living?
3. How do you spend your leisure time?
4. Who all are there at your home?
5. Which is your favourite sport?
6. At what time do you go for your college/ work?
7. In which all languages you are a fluent speaker?
8. What do you aspire to become?
9. What do you do during the weekends?
10. Which is your favourite hangout place with your friends?
