Sign Language Interpreter

Developed by Joshua Mathew Philip from Cochin University of Science and Technology.

Made as a UG mini project.

In this project, we propose a marker-free, visual sign language recognition system that uses image processing, computer vision, and neural network methodologies to identify the characteristics of the hand in images taken from a web-camera video.

Features

  • Sign Language Interpretation: translates the given sign language into ordinary text. Input: American Sign Language gestures. Output: the corresponding letters.
  • Speech Synthesis: converts the translated gestures into speech (see the sketch below). Input: American Sign Language gestures. Output: the corresponding sound of the letters.
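
As a minimal illustration of the speech-synthesis feature, the sketch below uses the pyttsx3 text-to-speech package; pyttsx3 and the speak helper are assumptions for this example, and the project itself may use a different library.

    # Minimal sketch of the speech-synthesis step. pyttsx3 is an assumption;
    # the project may use a different text-to-speech library.
    import pyttsx3

    def speak(text):
        """Speak the recognized letters out loud."""
        engine = pyttsx3.init()   # initialise the offline TTS engine
        engine.say(text)          # queue the recognized text
        engine.runAndWait()       # block until speech has finished

    speak("HELLO")                # e.g. letters spelled out through ASL gestures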

How does this work?

This approach converts video of frequently used, full-sentence gestures into text and speech. The hand shape is identified from continuous frames using a series of image processing operations, and the signs and their corresponding meanings are interpreted with OpenCV and the trained neural network.
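
The sketch below outlines that pipeline. The HSV bounds, the 64x64 input size, and the printed class index are illustrative assumptions; only the model file name, mymo.h5, comes from the repository.

    import cv2
    import numpy as np
    from tensorflow.keras.models import load_model

    model = load_model("mymo.h5")            # pre-trained gesture model from the repo
    cap = cv2.VideoCapture(0)                # web camera

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        # Isolate the hand by thresholding a skin-colour range in HSV space
        # (in the real program these bounds come from the trackbars).
        mask = cv2.inRange(hsv, (0, 30, 60), (20, 150, 255))
        roi = cv2.resize(mask, (64, 64)).reshape(1, 64, 64, 1) / 255.0
        scores = model.predict(roi)
        print("predicted class index:", int(np.argmax(scores)))
        cv2.imshow("hand mask", mask)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break

    cap.release()
    cv2.destroyAllWindows()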

Workflow:

  • Set up the environment.
  • Download/update the latest data.
  • Activate the virtual environment.
  • Start the program.

Hardware and Software Requirements

Hardware:

  • No special hardware is required beyond a web camera; the application runs on any computer with standard processing power.

Software:

  • Tkinter: The front end is built entirely with Tkinter, the standard Python binding to the Tk GUI toolkit and Python's de facto standard GUI library (a minimal sketch follows this list).
  • Python 3.7: The entire application is written in Python; the deep learning modules used are also Python based.
  • TensorFlow v2.1: The backbone of the back end, used for training and inference of the neural network.
  • OpenCV: Also part of the back end, used for real-time computer vision.
  • Keras: Provides a Python interface for artificial neural networks and acts as an interface to the TensorFlow library.
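
As a rough illustration of the Tkinter front end, the following sketch opens a window with a single button that would start detection; the real gui.py is more elaborate, and start_detection here is only a stand-in.

    import tkinter as tk

    def start_detection():
        # In the real application, this is where gesture recognition would begin.
        print("Starting gesture detection...")

    root = tk.Tk()
    root.title("Sign Language Interpreter")
    tk.Button(root, text="Start Detection", command=start_detection).pack(padx=40, pady=20)
    root.mainloop()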

STEPS

Installing Prerequisites

  • 1.1 - Install Anaconda (with Python 3.7).
  • 1.2 - Extract virtual.zip to set up the virtual environment containing the required packages (TensorFlow 2.1, OpenCV, Keras).

Running the Program

  • 2.1 - Open the Anaconda prompt.
  • 2.2 - Run the command: activate virtual
  • 2.3 - Change directory to the project folder.
  • 2.4 - Run the command: main.py
  • 2.5 - Click the button on the pop-up window to start detection.
  • 2.6 - Set the values of the trackbars as given in the screenshot below.

NOTE: Please run the project in a plain environment (an uncluttered background). A portion of the detection window is attached below; set the trackbars as shown in the screenshot. The _ symbol can be used to trigger speech. A generic trackbar example follows the screenshot.

TrackBar
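
For reference, the snippet below shows the usual way OpenCV trackbars drive HSV thresholds for isolating the hand; the window and trackbar names are illustrative and not necessarily those used in recognize.py.

    import cv2

    cv2.namedWindow("Trackbars")
    # Lower and upper HSV bounds, each controlled by one trackbar.
    for name, start, maximum in [("LH", 0, 179), ("LS", 0, 255), ("LV", 0, 255),
                                 ("UH", 179, 179), ("US", 255, 255), ("UV", 255, 255)]:
        cv2.createTrackbar(name, "Trackbars", start, maximum, lambda v: None)

    cap = cv2.VideoCapture(0)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        lower = tuple(cv2.getTrackbarPos(n, "Trackbars") for n in ("LH", "LS", "LV"))
        upper = tuple(cv2.getTrackbarPos(n, "Trackbars") for n in ("UH", "US", "UV"))
        cv2.imshow("mask", cv2.inRange(hsv, lower, upper))
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()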

Brief description of the given files

  1. main.py - runs the program.
  2. training.py - trains the model (an illustrative sketch follows this list).
  3. recognize.py - recognizes gestures.
  4. gui.py - creates the user interface.
  5. mymo.h5 - a pre-trained model.
  6. virtual.zip - sets up the virtual environment.
  7. mydata - contains the dataset.
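
As an illustration of what training.py does, the sketch below trains a small convolutional network on the images in mydata and saves it as mymo.h5. The architecture, image size, and directory layout shown here are assumptions, not the repository's actual settings.

    from tensorflow.keras import layers, models
    from tensorflow.keras.preprocessing.image import ImageDataGenerator

    # One sub-folder per gesture/letter inside mydata/ is assumed.
    train_gen = ImageDataGenerator(rescale=1.0 / 255).flow_from_directory(
        "mydata", target_size=(64, 64), color_mode="grayscale",
        class_mode="categorical")

    model = models.Sequential([
        layers.Conv2D(32, (3, 3), activation="relu", input_shape=(64, 64, 1)),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(64, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(train_gen.num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(train_gen, epochs=10)
    model.save("mymo.h5")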

Screenshots

Dataset Creation

Gesture Recognition

Any improvements to this documentation and any new contributions are always welcome!

Thanks, Joshua Mathew Philip
