This article is presented by:
DEPARTMENT OF E & TC ENGINEERING
ARMY INSTITUTE OF TECHNOLOGY
DIGHI HILLS, PUNE 411015
Gesture recognition is a topic in computer science and language technology with the goal of interpreting human gestures via mathematical algorithms. Gestures can originate from any bodily motion or state, but commonly originate from the face or hands. Current foci in the field include emotion recognition from the face and hand gesture recognition.
Using a neural network, a simple and fast algorithm will be developed to run on a workstation. It will recognize static hand gestures, namely a subset of American Sign Language (ASL).
The pattern recognition system will use a transform that converts an image into a feature vector, which will then be compared with the feature vectors of a training set of gestures. The final system will be implemented with a perceptron network.
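The classification step described above can be illustrated with a minimal sketch. This is not the project's actual implementation, only a toy single-layer perceptron trained with the classic error-driven update rule on hypothetical two-dimensional "feature vectors" standing in for two gesture classes:

```python
import numpy as np

def train_perceptron(X, y, epochs=50, lr=1.0):
    """Train a single-layer perceptron for binary gesture labels.
    X: (n_samples, n_features) feature vectors (e.g. orientation
    histograms); y: labels in {0, 1}. Returns (weights, bias)."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            pred = 1 if xi @ w + b > 0 else 0
            err = yi - pred           # in {-1, 0, 1}
            w += lr * err * xi        # classic perceptron update
            b += lr * err
    return w, b

def predict(w, b, x):
    """Classify one feature vector with the learned weights."""
    return 1 if x @ w + b > 0 else 0

# Toy, linearly separable stand-ins for two gestures' feature vectors.
X = np.array([[1.0, 0.0], [0.9, 0.1], [0.1, 0.9], [0.0, 1.0]])
y = np.array([0, 0, 1, 1])
w, b = train_perceptron(X, y)
```

On linearly separable data such as this, the perceptron learning rule is guaranteed to converge; a multi-class system would train one such unit per gesture.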
The scope of this project is to create a method to recognize hand gestures, based on a pattern recognition technique developed by McConnell that employs histograms of local orientation. The orientation histogram will be used as a feature vector for gesture classification and interpolation.
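The idea of the orientation-histogram feature can be sketched as follows. This is a simplified, hypothetical version (finite-difference gradients, a fixed contrast threshold) rather than McConnell's exact formulation: each pixel's local edge orientation is computed from the image gradient, and a histogram over those orientations becomes the feature vector.

```python
import numpy as np

def orientation_histogram(image, n_bins=36):
    """Histogram of local edge orientations for a 2-D grayscale
    array; returns a length-n_bins feature vector."""
    gy, gx = np.gradient(image.astype(float))  # finite differences
    angles = np.arctan2(gy, gx)                # orientation in [-pi, pi]
    magnitudes = np.hypot(gx, gy)
    # Keep only pixels with significant contrast, so flat background
    # does not dominate the histogram.
    mask = magnitudes > 0.1 * magnitudes.max()
    hist, _ = np.histogram(angles[mask], bins=n_bins, range=(-np.pi, np.pi))
    # Normalize so the feature is invariant to hand size / image area.
    total = hist.sum()
    return hist / total if total else hist.astype(float)

# Example: a vertical step edge yields purely horizontal gradients,
# so all the histogram mass falls in the bin containing angle 0.
img = np.zeros((32, 32))
img[:, 16:] = 1.0
feat = orientation_histogram(img)
```

Normalizing the histogram makes the feature insensitive to image size, and using orientations rather than raw pixels gives some robustness to illumination changes.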
A high priority for the system is simplicity, without the use of any special hardware: all computation should occur on a workstation or PC. Special hardware would be used only to digitize the image (a scanner or digital camera).
Little has changed since the introduction of the most common computer input devices, probably because the existing devices are adequate. Yet computers are now so tightly integrated with everyday life that new applications and hardware are constantly introduced. The means of communicating with computers are at present limited to keyboards, mice, light pens, trackballs, keypads, etc. These devices have grown familiar, but they inherently limit the speed and naturalness with which we interact with the computer.
As the computer industry has followed Moore's Law since the mid-1960s, increasingly powerful machines have been built and equipped with more peripherals. Vision-based interfaces are now feasible: the computer is able to "see", allowing richer and more user-friendly human-machine interaction. This can lead to new interfaces that support commands not possible with current input devices, and can save considerable time. Recently there has been a surge of interest in recognizing human hand gestures. Hand gesture recognition has various applications, such as computer games, machinery control (e.g. of a crane), and complete mouse replacement. One of the most structured sets of gestures belongs to sign language, where each gesture has an assigned meaning (or meanings).
Computer recognition of hand gestures may provide a more natural human-computer interface, allowing people, for example, to point at or rotate a CAD model by rotating their hands. Hand gestures can be classified into two categories: static and dynamic. A static gesture is a particular hand configuration and pose, represented by a single image. A dynamic gesture is a moving gesture, represented by a sequence of images. This project focuses on the recognition of static gestures.
Interactive applications pose particular challenges. The response time should be very fast. The user should sense no appreciable delay between when he or she makes a gesture or motion and when the computer responds. The computer vision algorithms should be reliable and work for different people.
There are also economic constraints: vision-based interfaces will be replacing existing ones, which are often very low cost. A hand-held video game controller and a television remote control each cost about $40; even for added functionality, consumers may not want to spend more. When additional hardware is needed, the cost is considerably higher. Academic and industrial researchers have recently been focusing on analyzing images of people.