FitPose-Detector

A real-time exercise pose detection system based on MediaPipe and an LSTM network, capable of recognizing different exercise movements including push-ups, squats, and sit-ups.

FitPose Detector Demo

Click on the image above to watch the demo video

Training Performance

Features

  • Real-time pose detection and tracking
  • Recognition of three different exercises:
    • Push-ups
    • Squats
    • Sit-ups
  • LSTM deep learning model for accurate movement classification (see the feature-extraction sketch below)
  • Live skeleton tracking visualization
  • Real-time display of recognition results with confidence scores
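Conceptually, each webcam frame is passed to MediaPipe Pose, the 33 detected landmarks are flattened into a feature vector, and a sliding window of these vectors becomes the LSTM's input sequence. The snippet below is a minimal sketch of that feature-extraction step; the function name and the use of all four landmark values are illustrative assumptions, not the repository's exact code:

import cv2
import mediapipe as mp
import numpy as np

mp_pose = mp.solutions.pose

def extract_landmarks(frame, pose):
    """Flatten one frame's MediaPipe pose landmarks into a (33 * 4,) vector."""
    results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.pose_landmarks is None:
        return np.zeros(33 * 4, dtype=np.float32)  # no person detected in this frame
    return np.array(
        [[lm.x, lm.y, lm.z, lm.visibility] for lm in results.pose_landmarks.landmark],
        dtype=np.float32,
    ).flatten()

# Example: extract features from a single image (path is illustrative)
# with mp_pose.Pose(static_image_mode=True) as pose:
#     features = extract_landmarks(cv2.imread("frame_00000.jpg"), pose)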

Installation

Prerequisites

  • Python 3.8+ (recommended)
  • Webcam for real-time detection

Steps to Install

  1. Clone this repository:

    git clone https://github.com/timchen1015/FitPose-Detector.git
    cd FitPose-Detector
  2. Set up a virtual environment (recommended):

    # Create a virtual environment
    python -m venv venv
    
    # Activate the virtual environment (Windows)
    venv\Scripts\activate
  3. Install required dependencies:

    pip install -r requirements.txt

    Required packages:

    • tensorflow==2.18.0
    • keras==3.6.0
    • mediapipe==0.10.9
    • opencv-python==4.10.0.84
    • numpy==1.26.4
    • scikit-learn==1.3.2
    • matplotlib==3.7.3
    • albumentations==1.4.22

Usage

Running the Exercise Recognition System

To start the real-time exercise recognition:

# Make sure your virtual environment is activated (Windows)
# venv\Scripts\activate

python main.py

This will:

  1. Open your webcam
  2. Detect and track your body movements
  3. Recognize and classify exercises in real-time
  4. Display the detected exercise type and confidence score (the overall loop is sketched below)
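For orientation, the real-time loop in main.py follows the general pattern below. This is a hedged sketch: the 30-frame window length, the label-encoder loading, and the drawing details are assumptions based on the project layout, not a copy of the actual script.

import collections
import cv2
import mediapipe as mp
import numpy as np
from tensorflow import keras

WINDOW = 30  # assumed number of frames per LSTM input sequence

model = keras.models.load_model("trained_pose_model/best_model.keras")
labels = np.load("trained_pose_model/label_encoder.npy", allow_pickle=True)
buffer = collections.deque(maxlen=WINDOW)

cap = cv2.VideoCapture(0)  # open the default webcam
with mp.solutions.pose.Pose() as pose:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.pose_landmarks:
            # draw the live skeleton overlay
            mp.solutions.drawing_utils.draw_landmarks(
                frame, results.pose_landmarks, mp.solutions.pose.POSE_CONNECTIONS)
            buffer.append(np.array(
                [[lm.x, lm.y, lm.z, lm.visibility]
                 for lm in results.pose_landmarks.landmark]).flatten())
        if len(buffer) == WINDOW:
            probs = model.predict(np.expand_dims(np.array(buffer), 0), verbose=0)[0]
            text = f"{labels[int(np.argmax(probs))]}: {probs.max():.2f}"
            cv2.putText(frame, text, (10, 30),
                        cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
        cv2.imshow("FitPose-Detector", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
cap.release()
cv2.destroyAllWindows()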

Project Structure

project_root/
│
├── main.py                  # Main application for real-time detection
├── requirements.txt         # Required Python packages
│
├── trained_pose_model/      # Pre-trained model files
│   ├── train_model.py       # Script for training the model
│   ├── best_model.keras     # Trained LSTM model
│   ├── label_encoder.npy    # Class labels for exercises
│   └── training_history.png # Model training performance
│
└── exercise_dataset/        # Dataset folder 
    ├── extract_video.py     # Script to extract frames from videos
    ├── image_dataset/       # Processed image frames for training (generated by extract_video.py)
    └── video_dataset/       # Raw video datasets (must download from Google Drive and place here)
        ├── push_up_video/   # Push-up exercise videos
        ├── sit_up_video/    # Sit-up exercise videos
        └── squat_video/     # Squat exercise videos

Dataset and Data Processing

Due to GitHub file size limitations, the video dataset files are not included in this repository. Instead, you can download them from this Google Drive link:

Download Video Dataset

Setting up the Dataset

  1. Download the video dataset files from the Google Drive link
  2. Place the downloaded video files in the appropriate folders under exercise_dataset/video_dataset/
  3. Run the frame extraction script to generate the training data:
# Make sure your virtual environment is activated (Windows)
# venv\Scripts\activate

# Run the frame extraction script
python exercise_dataset/extract_video.py

This script will:

  • Process all videos in the push_up_video, sit_up_video, and squat_video folders
  • Extract frames at 10 FPS (sketched below)
  • Save the frames to the image_dataset directory
  • Create all necessary folders automatically
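The core idea in extract_video.py amounts to sampling each video at a reduced frame rate with OpenCV and writing the sampled frames to disk. The sketch below illustrates that idea; the function name, file-naming scheme, and example paths are assumptions, not the script's exact code:

import os
import cv2

def extract_frames(video_path, out_dir, target_fps=10):
    """Sample a video at roughly target_fps and save the frames as JPEGs."""
    os.makedirs(out_dir, exist_ok=True)  # create output folders automatically
    cap = cv2.VideoCapture(video_path)
    src_fps = cap.get(cv2.CAP_PROP_FPS) or target_fps
    step = max(int(round(src_fps / target_fps)), 1)  # keep every `step`-th frame
    index = saved = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % step == 0:
            cv2.imwrite(os.path.join(out_dir, f"frame_{saved:05d}.jpg"), frame)
            saved += 1
        index += 1
    cap.release()
    return saved

# Example (paths are illustrative):
# extract_frames("exercise_dataset/video_dataset/squat_video/squat_01.mp4",
#                "exercise_dataset/image_dataset/squat")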

Creating Your Own Dataset

You can also use your own exercise videos:

  • Place your videos in the corresponding folders under exercise_dataset/video_dataset/
  • Run the extraction script to process your videos

Training the Model

After processing the dataset, train the model with:

# Make sure your virtual environment is activated (Windows)
# venv\Scripts\activate

python trained_pose_model/train_model.py
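For reference, a Keras model along the following lines can classify the three exercises from landmark windows. The layer sizes, window length, and checkpoint setup are illustrative assumptions rather than the exact contents of train_model.py:

import numpy as np
from tensorflow import keras

WINDOW, FEATURES, NUM_CLASSES = 30, 33 * 4, 3  # assumed shape: 33 landmarks x (x, y, z, visibility)

model = keras.Sequential([
    keras.layers.Input(shape=(WINDOW, FEATURES)),
    keras.layers.LSTM(64, return_sequences=True),
    keras.layers.LSTM(32),
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Saving the best epoch mirrors the best_model.keras artifact kept in trained_pose_model/
checkpoint = keras.callbacks.ModelCheckpoint(
    "trained_pose_model/best_model.keras", monitor="val_accuracy", save_best_only=True)

# X: (num_sequences, WINDOW, FEATURES) landmark windows; y: integer class labels
# model.fit(X_train, y_train, validation_data=(X_val, y_val),
#           epochs=50, batch_size=32, callbacks=[checkpoint])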

Demo

Check out our demonstration video to see FitPose-Detector in action:

FitPose Detector Demo Video

The demo shows:

  • Real-time detection of push-ups, squats, and sit-ups
  • Skeleton tracking and visualization
  • Exercise classification with confidence scores
  • Performance in different lighting conditions and angles

Contact

For questions or collaboration opportunities, please reach out to timchen1015.
