Wednesday, December 16, 2009

Scientists begin to understand how sign language can be spoken as fast as speech

From PhysOrg.com:

Scientists have known for 40 years that although individual words take longer to sign than to say, sentences can, on average, be signed in the same time it takes to speak them. Until now, they have not understood how this is possible.

Sign languages such as American Sign Language (ASL) use hand gestures to indicate words, and are used by millions of deaf people around the world for communication. In ASL, every sign is made up of a combination of hand gestures and handshapes. (British Sign Language is quite different from ASL, and the two sign languages are not mutually intelligible.)

Scientists Andrew Chong and colleagues at Princeton University in New Jersey have been studying the empirical entropy and redundancy in American Sign Language handshapes to find an answer to the puzzle. The term entropy is used in the research as a measure of the average information content of a unit of data.
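In the standard information-theoretic sense, the entropy of a source that produces symbols x with probabilities p(x) is H = −Σ p(x) log₂ p(x) bits per symbol. It reaches its maximum, log₂ N for an alphabet of N symbols, only when every symbol is equally likely, and redundancy is the gap between that maximum and the measured entropy.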

The fundamental unit of data in ASL is the handshape, while for spoken languages the fundamental units are phonemes. A handshape is a particular configuration of the hand and fingers; signs combine handshapes with movements and locations of the hand.

Their results show that the information carried by the 45 handshapes making up American Sign Language is higher, per unit, than the information carried by English phonemes. This means spoken English has more redundancy than its signed equivalent.

The researchers reached this conclusion by measuring the frequency of handshapes in videos of signing uploaded by deaf people to the websites YouTube, DeafRead, and DeafVideo.tv, as well as videos of conversations in sign language recorded on campus. They found that the entropy (information content) of the handshapes averages about 0.5 bits per handshape below the theoretical maximum, while the entropy per phoneme in speech is around three bits below the maximum possible.
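As a rough illustration of this kind of measurement, the sketch below estimates the empirical entropy of a set of handshape frequency counts and compares it with the theoretical maximum of log₂ 45 ≈ 5.49 bits. The counts and function names are purely illustrative assumptions, not the researchers' data or code.

```python
import math
from collections import Counter

def empirical_entropy(counts):
    """Shannon entropy in bits per symbol, estimated from observed frequency counts."""
    total = sum(counts.values())
    return -sum((n / total) * math.log2(n / total) for n in counts.values() if n > 0)

# Purely illustrative tallies of a few handshapes; the study transcribed
# all 45 handshapes from online and campus signing videos.
handshape_counts = Counter({"B": 412, "5": 388, "1": 305, "A": 270, "S": 150})

h = empirical_entropy(handshape_counts)
h_max = math.log2(45)     # maximum possible entropy for 45 equally likely handshapes (~5.49 bits)
redundancy = h_max - h    # with real ASL data the study found a gap of only about 0.5 bits

print(f"entropy {h:.2f} bits/handshape, maximum {h_max:.2f}, redundancy {redundancy:.2f}")
```

The same kind of calculation on the full frequency data is what yields the roughly 0.5-bit gap the researchers report; the toy counts above will not reproduce that figure.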

This means that even though producing individual signs is slower, signers can keep up with speakers because the lower redundancy of signing compensates for its slower rate.
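One way to frame the trade-off: if signers produce r_sign handshapes per second carrying H_sign bits each, and speakers produce r_speech phonemes per second carrying H_speech bits each, the two channels deliver information at comparable rates whenever r_sign × H_sign ≈ r_speech × H_speech, so a slower symbol rate can be offset by more bits per symbol. (These rate symbols are only a way of restating the claim; the study reports entropies, not production rates.)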

Chong believes signed language has less redundancy than spoken language because less is needed. The redundancy in spoken language allows speech to be understood in a noisy environment, but, as Chong explains, the "visual channel is less noisy than the auditory channel", so there is less chance of being misunderstood.

The researchers also speculated that errors are handled differently in signing and speaking. If a hand gesture is not understood, the problem can be overcome by slowing the transition between gestures, but if speech is not understood, drawing out phonemes for longer does not always help.

Understanding sign language and its information content is essential for developing automated sign-recognition technology, and for encoding and transmitting sign language electronically by means other than video recordings.