Assuming that the rate of variation of articulatory movements is slower than that of acoustic signals, it is hypothesized that tactile speech transmission systems should be based on articulatory rather than acoustic parameters of speech. The “System for Electrocutaneous Stimulation SEHR‐2,” which produces current‐controlled bipolar electrical impulses with freely variable intervals, was used to code tactile equivalents of CV and VC syllables. In experiment 1, patterns were delivered to the left forearm in such a way that “place of articulation” was transformed quasi‐isomorphically to “locus of tactile stimulation.” Without previous training, identification tests showed good recognition rates, averaging 65.83%, for six vowels coded in a two‐dimensional space (upper–lower, distal–proximal), and fairly good identification (38.33%) for six (tactile) consonantal places of articulation coded only in the distal–proximal dimension. In experiment 2, differences in “manner of articulation” were transformed to differences in the phenomenal “gestalt” of the tactile patterns, created by variation of the interpulse intervals. Recognition of (tactile) manner of articulation was poorer (33.75% for each of four consonants), but clearly above chance level.
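To make the coding scheme concrete, the following minimal sketch illustrates the kind of mapping the abstract describes: place of articulation mapped to a stimulation locus on the forearm, and manner of articulation mapped to an interpulse‐interval pattern. The electrode layout, phoneme sets, and interval values are illustrative assumptions only; the abstract does not specify the actual SEHR‐2 parameters.

```python
from dataclasses import dataclass

@dataclass
class TactilePattern:
    electrode_row: int                 # upper-lower position on the forearm
    electrode_col: int                 # distal-proximal position on the forearm
    interpulse_intervals_ms: list[int] # temporal "gestalt" coding manner of articulation

# Place of articulation -> stimulation locus (quasi-isomorphic spatial mapping).
# Hypothetical two-dimensional layout for vowels:
VOWEL_LOCUS = {
    "i": (0, 0), "e": (0, 1), "a": (0, 2),
    "o": (1, 0), "u": (1, 1), "y": (1, 2),
}
# Hypothetical one-dimensional (distal-proximal) layout for consonantal places:
CONSONANT_LOCUS = {
    "p": 0, "t": 1, "k": 2, "s": 3, "f": 4, "x": 5,
}

# Manner of articulation -> interpulse-interval pattern (hypothetical values, ms).
MANNER_INTERVALS = {
    "plosive":   [5, 5, 5],            # short, dense burst
    "fricative": [15, 15, 15, 15],     # longer, more continuous train
    "nasal":     [10, 20, 10],
    "lateral":   [20, 10, 20],
}

def encode_consonant(consonant: str, manner: str) -> TactilePattern:
    """Map a consonant's place and manner to a tactile stimulation pattern."""
    col = CONSONANT_LOCUS[consonant]
    return TactilePattern(
        electrode_row=0,
        electrode_col=col,
        interpulse_intervals_ms=MANNER_INTERVALS[manner],
    )

# Example: a hypothetical encoding of /t/ as a plosive.
print(encode_consonant("t", "plosive"))
```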
