English [ɫ] exhibits a retracted tongue dorsum and low F2 frequencies compared with Korean [l], but it is frequently asserted to be perceptually similar to Korean [l] and therefore difficult for Korean learners to acquire because of articulatory transfer. This study examines the articulatory and acoustic characteristics of Korean and English word-final laterals produced by Korean learners. The learners' productions of English [ɫ] were systematically different from Korean [l], with a retracted tongue dorsum and low F2 similar to L1 English [ɫ]. The findings suggest that Korean learners form a distinct phonetic category for English [ɫ] rather than modifying an existing Korean category.

Traditional textbook descriptions of laterals in English distinguish between “light” [l] and “dark” [ɫ], with many descriptions associating the dark variant with syllable-final position in British varieties and with American English productions in all contexts. The dark variety has been characterized as involving two gestures: a primary alveolar contact gesture and a secondary dorsal retraction gesture (Sproat and Fujimura, 1993). However, English [ɫ] is often vocalized, especially in some varieties of American English. In such cases, the lateral is produced as a uvular approximant with a single vocalic gesture and no anterior contact. This articulation is pervasive among speakers of a Midwestern American dialect (Berkson et al., 2017). Thus, the general articulatory characteristic of English [ɫ] is a narrowed upper pharyngeal area with a retracted tongue dorsum and, depending on variety and context, a second gesture creating alveolar contact.

In contrast to the dark English [ɫ], the Korean lateral has different articulatory and acoustic characteristics and is considered “clear” or light. Previous articulatory studies of the Korean lateral using two-dimensional (2D) ultrasound (Gick et al., 2006) and real-time magnetic resonance imaging (Lee et al., 2015) have supported an analysis in which the Korean lateral is composed of two gestures: (1) tongue tip closure and (2) tongue body raising (palatalization). However, studies using electropalatography (Umeda, 1980; Lee, 1980) and three-dimensional/four-dimensional (3D/4D) ultrasound (Hwang et al., 2019) provide evidence that tongue body raising is highly variable, indicating possible vowel-context-dependent variation as well as individual variation. Thus, the general articulatory characteristics of the Korean lateral appear to be more appropriately described as an anterior occlusion without tongue body lowering or tongue backing gestures.

The articulatory differences between the Korean and American English laterals also affect their acoustic characteristics. According to Stevens (1998, p. 546), the second formant (F2) in laterals is related to the resonance of the back cavity, and the tongue backing gesture in English [ɫ] causes selective lowering of F2. Charles and Lulich (2019) reported consistent results, finding that F2 frequency was most closely affiliated with the length of the back cavity in Brazilian Portuguese alveolar and palatal laterals. When the tongue body lowers as a consequence of a tongue backing gesture, the back cavity lengthens to include both the pharyngeal and oral cavities up to the point of tongue blade constriction. Since the tongue body is not retracted in the articulation of Korean [l], the F2 frequency for Korean [l] is much higher than for English [ɫ]. Kwon (2005) reports that F2 frequency values for Korean [l] are about 600 Hz higher for male speakers and around 800 Hz higher for female speakers than the F2 frequency values for English [ɫ] in syllable-final position. Despite these noticeable articulatory and acoustic differences between Korean [l] and English [ɫ], when English words are adapted into Korean as loanwords, English [ɫ] is mapped onto Korean [l].
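To make the direction of this effect concrete, consider a simplified illustration of our own (not drawn from the sources above): if the back cavity is approximated as a uniform tube of length L that is acoustically closed at the glottis and open at the constriction, its resonance frequencies are approximately

F_n ≈ (2n − 1) c / (4L),  n = 1, 2, …,

where c ≈ 35 000 cm/s is the speed of sound. Lengthening L from roughly 8 cm to 12 cm lowers the lowest resonance from about 1100 Hz to about 730 Hz, illustrating why a longer back cavity is associated with a lower resonance frequency; the actual boundary conditions in lateral production are more complex (Stevens, 1998).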

The Speech Learning Model (SLM; Flege, 1987, 1995) of second language production posits that the degree of perceived cross-language phonetic dissimilarity determines how successfully sounds in a second language (L2) will be produced. If an L2 sound is perceptually similar to an existing sound in the first language (L1), it is more difficult for learners to acquire a native-like production, because the perceptual system is not sensitive to its difference from the L1 sound; the articulatory production regimes of the L1 are therefore carried over into L2 production. By contrast, if an L2 sound is perceptually dissimilar from the closest L1 sound, it will more readily exhibit the effects of learning: it may be produced poorly in the earliest stages of acquisition, but more accurately as L2 acquisition advances. Parallel research in the framework of the Perceptual Assimilation Model (Best, 1995; Best and Tyler, 2007) points to additional factors to consider in perception, in particular that production regimes need to be considered as part of a larger phonological system with contrasting elements.

Previous studies that adopted the SLM, such as Kwon (2006) and Jun (2004), considered English [ɫ] to be perceptually similar to Korean [l] and asserted that Korean learners transfer their articulatory strategies for the Korean lateral without creating a new phonetic category for the English lateral. If the English lateral were perceptually assimilated to the Korean lateral, however, it should be produced fairly well in the early stages of L2 acquisition, but production accuracy should not greatly improve even for advanced learners. Various studies instead indicate that Korean learners' productions of English /l/ are poor at the initial stage but progress toward more native-like productions as the learners gain more exposure to English. For instance, Yang (2008) found that Korean learners with more exposure to English produced word-final English [ɫ] much more accurately than those with less exposure. Kwon (2006) likewise found that F2 frequencies of English [ɫ] produced by advanced Korean learners are much lower than those of intermediate learners and thus are less influenced by the phonetic properties of Korean [l], which has a higher F2 frequency.

The aim of this study is to examine whether advanced Korean female learners of English use similar or different strategies to produce Korean [l] and English [ɫ]. If different strategies are used, it may be because advanced learners establish a new phonetic category for English [ɫ] and thus develop native-like productions rather than continuing to rely on L1 strategies. To answer this question, this study examines both the articulatory and the acoustic characteristics of English [ɫ] produced by Korean learners of English and compares them with those of Korean [l] produced by the same speakers. Specifically, we propose [following Yang (2008) and Kwon (2006)] that English [ɫ] is perceptually dissimilar from Korean [l], and thus that advanced Korean female learners of English use different strategies to produce Korean [l] and English [ɫ]. This paper tests the hypotheses that (1) English [ɫ] produced by Korean learners has a more retracted and lowered tongue body than Korean [l], (2) English [ɫ] produced by Korean learners has a lower F2 than Korean [l], and (3) English [ɫ] produced by Korean learners is qualitatively similar to L1 English [ɫ].

Data were collected from six female native speakers of Korean. Participant ages ranged from 22 to 29 yr (mean: 24.5 yr). At the time of data collection, they were studying at Indiana University, and their length of residence in the U.S. was between 3 months and 5 yr (mean: 27.3 months). All of the participants reported that they had begun learning English before the age of 10 and had between 12 and 19 yr of experience learning English (mean: 15 yr). All participants had been admitted to Indiana University with internet-based Test of English as a Foreign Language (ETS, 2012) scores higher than 90 and had no difficulty communicating in English during data collection. Participant P6 is the first author.

Stimuli included five near-minimal pairs of monosyllabic Korean and English words ending in a syllable-final lateral; they are presented in Table 1. To prevent productions in one language from being affected by the other, data collection was divided into two phases: Korean words were presented in the first phase and English words in the second. In each phase, eight filler words containing no lateral sound were included. The participants were seated in a double-walled sound-attenuating booth and were instructed to pronounce each word naturally. The stimuli were presented one at a time on a computer monitor located in front of the participants, and each stimulus word was presented twice. Thus a total of 26 stimuli were recorded from each participant in each language (52 stimuli in total), of which 10 were target words in each language (20 target stimuli in total). The 26 stimuli were randomized separately for each language, and the same ordering was used for each participant.

Table 1.

Stimuli.

Korean   [tal] "moon"    [mil] "wheat"   [phul] "grass"   [tʌl] "less"   [pol] "cheek"
English  [daɫ] "doll"    [miɫ] "meal"    [phuɫ] "pool"    [dʌɫ] "dull"   [phoɫ] "pole"

3D wedge-shaped volumetric images of the tongue were recorded using a Philips 3D/4D EPIQ 7 G ultrasound machine with a Philips xMatrix X6-1 digital 3D/4D transducer (Lulich et al., 2018) (Philips: Bothell, WA; Articulate Instruments: Edinburgh, UK; NextEngine: Santa Monica, CA; SHURE: Niles, IL). The transducer was stabilized under the jaw with a customized Articulate Instruments Ltd. headset, which restricts probe movement relative to the head (Scobbie et al., 2008). Ultrasound frame rates were between 14.07 and 19.19 fps (mean: 16.47 fps), depending on the size of the speaker's tongue and oral cavity. A palate impression was made using dental alginate and subsequently scanned and digitized using a NextEngine Desktop 3D Scanner. The digitized palate impression was later registered with the ultrasound data using the method described by Charles and Lulich (2018, 2019) (see supplementary material for details1). Audio signals were recorded simultaneously with the ultrasound data using a SHURE KSM32 microphone placed approximately 1.5 m in front of the participant. Recordings were made with a 48 kHz sampling rate and 16-bit quantization. Ultrasound and audio recordings were synchronized using the method described in Lulich et al. (2018) (see supplementary material1).

The articulatory data collected in this study were analyzed in two different ways. First, using a custom-built toolbox for Matlab (The MathWorks Inc., 2017), the tongue surface was visually identified and manually segmented in every fifth sagittal and coronal slice. This formed a regular grid of tongue surface contours, which were used to interpolate the full 3D tongue surface (similar to the segmentation method used in Charles and Lulich, 2018). The manual segmentation was done for the middle frame of the target segments. After segmentation, the palate impression was registered with the ultrasound data.
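As a rough sketch of this interpolation step (not the custom toolbox itself; the variable name, the 'natural' interpolation method, and the assumption that the surface is single-valued over the horizontal plane are ours), the traced contour points could be gridded in Matlab along the following lines:

% pts: N-by-3 matrix of manually traced tongue-surface points (x, y, z), pooled
% from the segmented sagittal and coronal slices (hypothetical input variable).
F = scatteredInterpolant(pts(:,1), pts(:,2), pts(:,3), 'natural', 'none');

% Evaluate the fitted surface on a regular grid and render it.
[xg, yg] = meshgrid(linspace(min(pts(:,1)), max(pts(:,1)), 100), ...
                    linspace(min(pts(:,2)), max(pts(:,2)), 100));
zg = F(xg, yg);
surf(xg, yg, zg), shading interp, axis equal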

Second, anatomical distance measures were defined for the height of the tongue blade and the retraction of the tongue dorsum as follows. Measurements were made within the mid-sagittal plane and in the middle frame of the target segments. For the height of the tongue blade, the vertical distance from the anterior end of the tendon of the genioglossus (GG) to the tongue surface was measured in centimeters. For the retraction of the tongue dorsum, the distance from the tendon of the GG to the tongue dorsum was measured in centimeters at an angle of 45°. These measures are illustrated in Fig. 1 for participant P1.
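A minimal sketch of how these two distances could be computed from a traced midsagittal contour is given below (the variable names, the coordinate convention, and the assumption that the dorsum measurement ray points 45° upward and backward from the tendon are ours; the toolbox actually used may differ):

% tongueContour: M-by-2 matrix of midsagittal tongue-surface points [x y] in cm;
% ggTendon: 1-by-2 location of the anterior end of the GG tendon (hypothetical
% inputs). Convention assumed: x increases toward the front, y increases upward.

% Tongue blade height: vertical distance from the tendon straight up to the
% contour (assumes the contour is single-valued in x near the tendon).
bladeHeight = interp1(tongueContour(:,1), tongueContour(:,2), ggTendon(1)) - ggTendon(2);

% Tongue dorsum retraction: distance from the tendon to the contour along a ray
% angled 45 degrees upward and backward.
theta = 135 * pi/180;                                      % 45 deg above the backward direction
ray = ggTendon + (0:0.01:8)' * [cos(theta) sin(theta)];    % sample the ray in 0.01 cm steps
d = pdist2(ray, tongueContour);                            % distances between ray and contour points
[~, iRay] = min(min(d, [], 2));                            % ray sample closest to the contour
dorsumDist = norm(ray(iRay,:) - ggTendon);                 % distance along the ray to that point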

Fig. 1.

Two intersecting 2D planes (the midsagittal plane and a coronal plane passing through the anterior end of the tendon of the GG) of each Korean learner's production of Korean [l] and English [ɫ] following the vowel /a/. The cross-hairs on each sagittal panel indicate the location of the coronal slice in the neighboring panel; the cross-hairs in each coronal panel indicate the location of the sagittal slice in the neighboring panel. Example measurements of the anatomical distance to the tongue blade and to the tongue dorsum are shown for Korean and English, respectively, in the midsagittal plane for learner P1.


For the visualization in Fig. 1, the data were not rotated in any way; the “up” direction was referenced to how the ultrasound probe helmet fit on the participant's head. The data are therefore not referenced to the occlusal plane, which is probably the most common alternative way to define “up.” Nevertheless, in Fig. 1 it is the within-speaker differences that are important, so a common reference to the occlusal plane is not necessary. The articulatory measures presented in Fig. 2 were taken 45° apart, with a common origin at the anterior end of the tendon of the GG. This may introduce some additional between-speaker variability depending on probe orientation, but it is probably not large, at least in comparison with variability due to speaker-specific anatomical and articulatory differences.

Fig. 2.

(Color online) (a) Distance measures for the height of the tongue blade (y axis) and retraction of the tongue dorsum (x axis). The x axis is reversed, indicating a tongue tip position to the right, with larger positive values of tongue dorsum retraction indicated further to the left. (b) F2 (x axis) and F3 (y axis) measures. (c) F2 (x axis) and the difference between tongue blade and tongue dorsum measurements (y axis). In (a) and (b), each participant is represented by symbols with a different color and shape, and each symbol represents an average over two repetitions (data were not averaged for statistical analyses). Filled symbols represent English laterals and open symbols represent Korean laterals.


The acoustic recordings were analyzed using the Praat phonetics software (Boersma and Weenink, 2019). The beginning and end of the lateral segments were determined by a change in waveform intensity, a sudden wideband change in the spectrogram, and changes in formant trajectories. Praat was used to automatically generate formant tracks [Maximum Formant = 6500 Hz; Window Length = 0.025 s; Number of Formants = 5 (speakers P1, P3, P4, P5) or 6 (speakers P2, P6)], and the frequencies of F1, F2, and F3 were recorded at the center of the target segment. Each formant frequency measurement was compared with the spectrogram and manually corrected as needed (an estimated 5% of measurements required correction).
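The formant measurements themselves were made in Praat; purely as an illustration of the underlying idea (an LPC-based estimate at the segment midpoint, with parameter values we have assumed, not the procedure used in the study), a comparable measurement could be sketched in Matlab as:

% x: audio samples of the lateral segment; fs: sampling rate in Hz (hypothetical inputs).
fsA = 10000;                                    % analysis rate so ~5 formants lie below fsA/2
y = resample(x(:), fsA, fs);
mid = round(numel(y)/2);
idx = max(1, mid-125) : min(numel(y), mid+125); % ~25 ms window centered on the midpoint
a = lpc(y(idx) .* hamming(numel(idx)), 12);     % 12th-order LPC fit
r = roots(a);
r = r(imag(r) > 0);                             % keep one root per complex-conjugate pair
f = sort(angle(r) * fsA / (2*pi));              % convert root angles to frequencies in Hz
f = f(f > 90 & f < fsA/2 - 90);                 % discard near-DC and near-Nyquist roots
% The lowest remaining values of f approximate F1, F2, and F3.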

All anatomical distances and formant frequencies were measured a second time in order to assess measurement reliability. Coefficients of variation (COVs) for the two anatomical distance measures were smaller than 3%, COVs for F2 and F3 were smaller than 6.5%, and the COV for F1 was 15.52% (see supplementary material1).
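As an example of how such coefficients of variation could be computed from the two measurement passes (the exact formula used in the study is not specified here, so the per-token definition below is an assumption):

% m1, m2: first- and second-pass measurements of the same tokens (hypothetical
% input vectors). COV is taken per token as std/mean of the two values, then
% averaged across tokens and expressed in percent.
pairs = [m1(:) m2(:)];
covPerToken = std(pairs, 0, 2) ./ mean(pairs, 2);
meanCOV = 100 * mean(covPerToken);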

The tongue blade and dorsum measurements and the formant frequency measurements were statistically analyzed using a two-way analysis of variance with language (Korean vs English) and vowel ([i, u, ʌ, o, a]) as independent factors. An unbiased Pearson's correlation coefficient was used to estimate the correlation between articulatory and acoustic measures. Data were not averaged across the two repetitions for the statistical analyses. Raw data and Matlab scripts for the statistical analyses are provided in the supplementary material.1
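The actual scripts are provided in the supplementary material; the following is only a minimal sketch of analyses of this kind, with variable names we have assumed:

% y: one dependent measure (e.g., F2 in Hz) for all tokens; language and vowel
% are cell arrays of factor labels of the same length (hypothetical names).
[p, tbl] = anovan(y, {language, vowel}, 'model', 'linear', ...
                  'varnames', {'language', 'vowel'});

% Pearson correlation between an articulatory measure and an acoustic measure.
[r, pr] = corr(articulatoryMeasure(:), F2(:));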

Figure 1 displays example ultrasound images of each participant's Korean and English laterals with traced tongue surfaces. The Korean lateral showed high between-speaker variation, and each participant exhibited a characteristic tongue shape suggesting a different type of constriction gesture, including lamino-postalveolar (P1, P4, P6), apico-dental (P2), retroflex (P3), and lamino-alveolar (P5). Although four participants (P1, P4, P5, P6) produced the Korean [l] with a tongue shape suggesting a palatalized lateral with a high tongue body and an advanced tongue root, the other two participants (P2, P3) exhibited a neutral tongue body and tongue dorsum position. The most remarkable tongue shape for the Korean lateral was observed for P3, who exhibited a retroflex articulation. Although the place of articulation and the extent of the occlusion varied across speakers, all participants displayed an anterior occlusion, and none lowered the tongue body or retracted the tongue dorsum.

The articulatory gestures of the word-final English lateral were also variable. Three participants (P1, P3, P5) exhibited very similar tongue gestures involving multiple articulations: heavy tongue dorsum retraction, tongue body lowering, and tongue blade constriction. For tongue dorsum retraction, they exhibited prominent dorsal arching suggestive of a uvular constriction. For the tongue body, a cupped shape was observed, with a dip along the mid-line of the anterior portion of the tongue that did not extend to the margins of the tongue. In addition, the tongue tip was extended to make contact with the alveolar ridge. Two participants (P4, P6) displayed similar tongue gestures, but their tongue backing and tongue body lowering gestures were somewhat less prominent than those of participants P1, P3, and P5. The most remarkable tongue shape for the English lateral was observed for P2, who produced the English word-final lateral as a uvular approximant: although she exhibited tongue backing and tongue body lowering gestures like the other participants, her lateral had no anterior contact.

Figure 2(a) displays the anatomical distance measures for the Korean and English laterals in five different vowel contexts (averaged across two repetitions for visualization purposes; data were not averaged for statistical analyses). The articulatory differences between the Korean and English laterals seen in the ultrasound images are also evident in these measures. The axes of the figure show the distances from the tendon of the GG to the tongue dorsum (x axis) and to the tongue blade (y axis). The distance measures in Fig. 2(a) reveal that, compared to the Korean lateral (open symbols), the English lateral (filled symbols) generally exhibits a shorter distance to the tongue blade [F(1,114) = 131.55, p ≪ 0.05] and a longer distance to the tongue dorsum [F(1,114) = 137.64, p ≪ 0.05], meaning that the English lateral has a lower tongue blade/body and a more retracted tongue dorsum. Both the Korean and English laterals exhibited little vowel-dependent variation in their tongue shapes [tongue blade: F(4,114) = 0.38, p = 0.82; tongue dorsum: F(4,114) = 0.27, p = 0.90] for the measurements taken at the middle frame of each token. Because the shape and size of the tongue and oral cavity vary from person to person, the measured anatomical distances differ somewhat for each participant, but overall the participants all exhibited similar patterns within a given language and completely different tongue shapes for their Korean and English laterals.

The acoustic analysis revealed that the articulatory differences between the Korean and English laterals also influence their acoustic characteristics [first formant: F(1,114) = 16.27, p ≪ 0.05; second formant: F(1,114) = 900.56, p ≪ 0.05; third formant: F(1,114) = 24.98, p ≪ 0.05]. Figure 2(b) presents F2 (x axis) and F3 (y axis) values of the Korean and English laterals in five different vowel contexts (averaged across two repetitions). The Korean and English laterals are clearly differentiated by F2, though their F3 values are in a similar range.

Since the measures of tongue blade and dorsum position both contributed to distinguishing Korean and English laterals articulatorily, we calculated the difference between the two measures (distance to tongue blade minus distance to tongue dorsum) in order to define a single metric for the correlated dorsum-backing and blade-lowering gestures. We calculated the Pearson correlation coefficient between this metric and F2. The metric was highly correlated with F2 [Pearson r = 0.883, p ≪ 0.05]. Figure 2(c) shows the scatter plot of the data (not averaged across repetitions) together with a first-order polynomial fit and the R2 value.
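A minimal sketch of this computation (with variable names we have assumed; data are not averaged across repetitions, as in the analysis above):

% bladeDist, dorsumDist, F2: per-token measurements (hypothetical input vectors).
metric = bladeDist(:) - dorsumDist(:);       % blade distance minus dorsum distance
[r, p] = corr(metric, F2(:));                % Pearson correlation with F2

% First-order polynomial (straight-line) fit and its R^2 value.
b = polyfit(metric, F2(:), 1);
F2hat = polyval(b, metric);
R2 = 1 - sum((F2(:) - F2hat).^2) / sum((F2(:) - mean(F2(:))).^2);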

This study revealed that advanced Korean female learners of English use completely different articulatory tongue shapes when producing English [ɫ] and Korean [l], in support of our first and third hypotheses. Unlike for the Korean lateral, participants displayed a low tongue body and a heavily retracted tongue dorsum for the English word-final lateral, similar to native English speakers. These different articulatory shapes result in very different formant patterns, with F2 values significantly lower for the English lateral, in support of our second hypothesis. Thus, two clearly different patterns corresponding to the two languages were observed for both tongue gestures and formant frequencies. Although English [ɫ] maps onto Korean [l] in loanwords, the results of this study suggest that the two laterals are perceptually dissimilar to Korean listeners and form two distinct phonetic categories in the production systems of advanced Korean learners of English. These findings are consistent with predictions of the SLM.

The question this raises, in turn, is why previous work has assumed that the Korean and English laterals are perceptually assimilated, given the large discrepancy between the English and Korean productions, a difference reflected in every L2 speaker examined here. We believe this may be because Korean learners perceptually assimilate the English lateral to a category other than the Korean lateral, specifically one of the vowel categories, based on the similarity of the English secondary articulation to the dorsal gestures used for the Korean unrounded, non-low, non-front vowels (e.g., as shown in Yun, 2005).

If this is the case, it supports the contention of Best and Tyler (2007) that the larger phonological frame surrounding phonetic elements needs to be taken into account. Here, the mapping between the Korean and English laterals may be suggested by the phonetic alignment of both as consonants, as well as by orthographic usage. However, this does not preclude learners from utilizing gestural regimes from vowels to approximate English productions. If this is what is occurring in the present case, it in turn raises the question of whether Korean productions of the English lateral exhibit aspects of vowel dynamics and coordination from Korean that differ from what is found in native English speakers. Examining the dynamics of production is an important next step in this research.

We would like to thank Sherman Charles for assistance with the data collection for this study. We also thank the Associate Editor and one anonymous reviewer for their feedback.

1See supplementary material at https://doi.org/10.1121/1.5134656 for additional methodological details and raw data measurements.

1. Berkson, K. H., De Jong, K., and Lulich, S. M. (2017). "Three dimensional ultrasound imaging of pre- and post-vocalic liquid consonants in American English: Preliminary observations," in The 42nd IEEE International Conference on Acoustics, Speech and Signal Processing, New Orleans, Louisiana, pp. 5080–5084.
2. Best, C. T. (1995). "A direct realist view of cross-language speech perception," in Speech Perception and Linguistic Experience: Issues in Cross-Language Research, edited by W. Strange (York Press, Timonium, MD), pp. 171–204.
3. Best, C. T., and Tyler, M. D. (2007). "Nonnative and second-language speech perception: Commonalities and complementarities," in Language Experience in Second Language Speech Learning: In Honor of James Emil Flege, edited by O.-S. Bohn and M. J. Munro (John Benjamins, Amsterdam), pp. 13–34.
4. Boersma, P., and Weenink, D. (2019). "Praat: Doing phonetics by computer," [Computer program]. Version 6.0.52, http://www.praat.org/ (Last viewed May 2, 2019).
5. Charles, S., and Lulich, S. M. (2018). "Case study of Brazilian Portuguese laterals using a novel articulatory-acoustic methodology with 3D/4D ultrasound," Speech Commun. 103, 37–48.
6. Charles, S., and Lulich, S. M. (2019). "Articulatory-acoustic relations in the production of alveolar and palatal lateral sounds in Brazilian Portuguese," J. Acoust. Soc. Am. 145(6), 3269–3288.
7. ETS (2012). The Official Guide to the TOEFL® Test (McGraw Hill, New York, NY).
8. Flege, J. E. (1987). "The production of 'new' and 'similar' phones in a foreign language: Evidence for the effect of equivalence classification," J. Phonetics 15(1), 47–65.
9. Flege, J. E. (1995). "Second language speech learning: Theory, findings, and problems," in Speech Perception and Linguistic Experience: Issues in Cross-linguistic Research, edited by W. Strange (York Press, Timonium, MD), pp. 233–277.
10. Gick, B., Campbell, F., Oh, S., and Tamburri-Watt, L. (2006). "Toward universals in the gestural organization of syllables: A cross-linguistic study of liquids," J. Phonetics 34(1), 49–72.
11. Hwang, Y., Charles, S., and Lulich, S. M. (2019). "Articulatory characteristics and variation of Korean laterals," Phonetics Speech Sci. 11(1), 19–27.
12. Jun, E. (2004). "Korean speakers' production of /r/ and /l/," English Teach. 59(1), 43–57.
13. Kwon, B. Y. (2005). "Articulatory and acoustic correlates of Korean /l/," Malsori 56, 76–101.
14. Kwon, B. Y. (2006). "Features of first language transfer in Korean speakers' production of English /l/," English Teach. 61(2), 179–207.
15. Lee, H. B. (1980). "A study of Korean speech sounds using electropalatography and its application to speech pathology," Hangul 170, 443–487.
16. Lee, Y., Goldstein, L., and Narayanan, S. (2015). "Systematic variation in the articulation of the Korean liquid across prosodic positions," in Proceedings of the International Congress of Phonetic Sciences (ICPhS 2015), Glasgow, UK. https://www.internationalphoneticassociation.org/icphs-proceedings/ICPhS2015/Papers/ICPHS0923.pdf
17. Lulich, S. M., Berkson, K. H., and de Jong, K. (2018). "Acquiring and visualizing 3D/4D ultrasound recordings of tongue motion," J. Phonetics 71, 410–424.
18. Scobbie, J. M., Wrench, A. A., and van der Linden, M. (2008). "Head-probe stabilization in ultrasound tongue imaging using a headset to permit natural head movement," in Proceedings of the 8th International Seminar on Speech Production, pp. 373–376. https://eresearch.qmu.ac.uk/bitstream/handle/20.500.12289/1099/eResearch_1099.pdf?sequence=1
19. Sproat, R., and Fujimura, O. (1993). "Allophonic variation in English /l/ and its implications for phonetic implementation," J. Phonetics 21(3), 291–311.
20. Stevens, K. N. (1998). Acoustic Phonetics (MIT Press, Cambridge, MA).
21. The MathWorks, Inc. (2017). Matlab [Computer program] (The MathWorks, Inc., Natick, MA).
22. Umeda, H. (1980). "Observation of some selected articulation in Korean and Japanese by use of dynamic palatography," in Papers of the 1st International Conference on Korean Studies, The Academy of Korean Studies, Seoul, Korea, pp. 869–880.
23. Yang, B. (2008). "An analysis of the English l sound produced by Korean students," Speech Sci. 15(1), 53–62.
24. Yun, G. (2005). "An ultrasound study of coarticulation and vowel assimilation in Korean," in Coyote Working Papers in Linguistics, Linguistic Theory at the University of Arizona, Vol. 14, pp. 196–216.
