A real-time exercise pose detection system based on MediaPipe and LSTM that recognizes three exercise movements: push-ups, squats, and sit-ups.
Click on the image above to watch the demo video
- Real-time pose detection and tracking
- Recognition of three different exercises:
  - Push-ups
  - Squats
  - Sit-ups
- LSTM deep learning model for accurate movement classification
- Live skeleton tracking visualization
- Real-time display of recognition results with confidence scores
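MediaPipe Pose reports 33 body landmarks per frame, each with `x`, `y`, `z`, and `visibility` values; a common way to feed them to an LSTM is to flatten each frame into a fixed-length feature vector. A minimal sketch of that conversion (the `landmarks_to_vector` helper and the dummy `Landmark` type are illustrative, not part of this repository):

```python
from collections import namedtuple

import numpy as np

def landmarks_to_vector(landmarks):
    """Flatten pose landmarks into one feature vector.

    `landmarks` is a sequence of objects with x, y, z, visibility
    attributes (33 of them for MediaPipe Pose)."""
    return np.array(
        [[lm.x, lm.y, lm.z, lm.visibility] for lm in landmarks],
        dtype=np.float32,
    ).flatten()  # shape: (33 * 4,) = (132,)

# Stand-in for MediaPipe's landmark objects, for demonstration only
Landmark = namedtuple("Landmark", "x y z visibility")
frame = [Landmark(0.5, 0.5, 0.0, 1.0) for _ in range(33)]
vec = landmarks_to_vector(frame)
print(vec.shape)  # (132,)
```

In the real pipeline, `frame` would come from `results.pose_landmarks.landmark` after running a frame through MediaPipe's pose estimator.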
- Python 3.8+ (recommended)
- Webcam for real-time detection
- Clone this repository:

  ```bash
  git clone https://github.com/timchen1015/FitPose-Detector.git
  cd FitPose-Detector
  ```
- Set up a virtual environment (recommended):

  ```bash
  # Create a virtual environment
  python -m venv venv
  # Activate the virtual environment (Windows)
  venv\Scripts\activate
  ```
- Install required dependencies:

  ```bash
  pip install -r requirements.txt
  ```
Required packages:
- tensorflow==2.18.0
- keras==3.6.0
- mediapipe==0.10.9
- opencv-python==4.10.0.84
- numpy==1.26.4
- scikit-learn==1.3.2
- matplotlib==3.7.3
- albumentations==1.4.22
To start the real-time exercise recognition:
```bash
# Make sure your virtual environment is activated (Windows)
# venv\Scripts\activate
python main.py
```
This will:
- Open your webcam
- Detect and track your body movements
- Recognize and classify exercises in real-time
- Display the detected exercise type and confidence score
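Real-time LSTM classification typically works on a sliding window: each webcam frame's feature vector is appended to a fixed-length buffer, and the model predicts once the buffer is full. A hedged sketch of that buffering logic (the 30-frame window length and helper names are assumptions, not taken from `main.py`):

```python
from collections import deque

import numpy as np

SEQUENCE_LENGTH = 30       # assumed window length, in frames
FEATURES_PER_FRAME = 132   # 33 landmarks x (x, y, z, visibility)

window = deque(maxlen=SEQUENCE_LENGTH)

def push_frame(features):
    """Append one frame's feature vector and, once the window is full,
    return a model-ready batch of shape
    (1, SEQUENCE_LENGTH, FEATURES_PER_FRAME); otherwise return None."""
    window.append(features)
    if len(window) < SEQUENCE_LENGTH:
        return None
    return np.expand_dims(np.array(window, dtype=np.float32), axis=0)

# Feed dummy frames until the window fills up
batch = None
for _ in range(SEQUENCE_LENGTH):
    batch = push_frame(np.zeros(FEATURES_PER_FRAME, dtype=np.float32))
print(batch.shape)  # (1, 30, 132)
```

Because `deque(maxlen=...)` discards the oldest frame automatically, the window slides forward one frame at a time, so a prediction can be made on every new webcam frame after the initial warm-up.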
```
project_root/
│
├── main.py                   # Main application for real-time detection
├── requirements.txt          # Required Python packages
│
├── trained_pose_model/       # Pre-trained model files
│   ├── train_model.py        # Script for training the model
│   ├── best_model.keras      # Trained LSTM model
│   ├── label_encoder.npy     # Class labels for exercises
│   └── training_history.png  # Model training performance
│
└── exercise_dataset/         # Dataset folder
    ├── extract_video.py      # Script to extract frames from videos
    ├── image_dataset/        # Processed image frames for training (generated by extract_video.py)
    └── video_dataset/        # Raw video datasets (download from Google Drive and place here)
        ├── push_up_video/    # Push-up exercise videos
        ├── sit_up_video/     # Sit-up exercise videos
        └── squat_video/      # Squat exercise videos
```
Due to GitHub file size limitations, the video dataset files are not included in this repository. Instead, you can download them from this Google Drive link:
- Download the video dataset files from the Google Drive link
- Place the downloaded video files in the appropriate folders under `exercise_dataset/video_dataset/`
- Run the frame extraction script to generate the training data:
```bash
# Make sure your virtual environment is activated (Windows)
# venv\Scripts\activate
# Run the frame extraction script
python exercise_dataset/extract_video.py
```
This script will:
- Process all videos in the push_up_video, sit_up_video, and squat_video folders
- Extract frames at 10 FPS
- Save the frames to the image_dataset directory
- Create all necessary folders automatically
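Extracting at a fixed 10 FPS regardless of the source video's frame rate amounts to keeping every Nth frame, where N is the source FPS divided by the target rate. A simplified sketch of that sampling logic (the repository's `extract_video.py` may differ in details):

```python
def sample_indices(total_frames, source_fps, target_fps=10):
    """Return the indices of the frames to keep so that the kept
    frames approximate `target_fps`."""
    step = max(1, round(source_fps / target_fps))
    return list(range(0, total_frames, step))

# A 3-second clip recorded at 30 FPS yields ~30 frames at 10 FPS
indices = sample_indices(total_frames=90, source_fps=30)
print(len(indices))  # 30
```

With OpenCV, the source frame rate and frame count come from `cv2.VideoCapture`'s `CAP_PROP_FPS` and `CAP_PROP_FRAME_COUNT` properties, and a frame is written out with `cv2.imwrite` only when its index is in this set.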
You can also use your own exercise videos:
- Place your videos in the corresponding folders under `exercise_dataset/video_dataset/`
- Run the extraction script to process your videos
After processing the dataset, train the model with:
```bash
# Make sure your virtual environment is activated (Windows)
# venv\Scripts\activate
python trained_pose_model/train_model.py
```
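Training also produces `label_encoder.npy`, which maps the three exercise names to the integer class indices the LSTM predicts. A sketch of how such a file is typically written and read back, using scikit-learn's `LabelEncoder` (the exact serialization in `train_model.py` may differ):

```python
import numpy as np
from sklearn.preprocessing import LabelEncoder

# One label per training sequence (illustrative data)
labels = ["push_up", "sit_up", "squat", "push_up"]

encoder = LabelEncoder()
y = encoder.fit_transform(labels)  # integer class indices, e.g. [0, 1, 2, 0]

# Persist the class names; the array index is the class index
np.save("label_encoder.npy", encoder.classes_)

# At inference time, turn a predicted class index back into a name
classes = np.load("label_encoder.npy", allow_pickle=True)
print(classes[0])  # push_up
```

`LabelEncoder` sorts class names alphabetically, so the saved array's order is stable across runs as long as the label set is unchanged.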
Check out our demonstration video to see FitPose-Detector in action:
The demo shows:
- Real-time detection of push-ups, squats, and sit-ups
- Skeleton tracking and visualization
- Exercise classification with confidence scores
- Performance in different lighting conditions and angles
For questions or collaboration opportunities, please reach out to timchen1015.