A comprehensive markerless gait analysis system that uses computer vision and machine learning to analyze human walking patterns.
This system provides a complete pipeline for markerless gait analysis using:
- Computer Vision: Real-time pose estimation with MediaPipe
- Machine Learning: Temporal Convolutional Networks (TCN) for gait pattern analysis
- Data Processing: Advanced preprocessing and feature extraction
- Visualization: Real-time pose visualization with trail effects
- 🔄 Unified Pose Estimation: Extensible architecture supporting multiple pose estimation backends
- 🧠 TCN Architecture: Temporal sequence modeling for gait analysis
- 📊 Advanced Analytics: Gait event detection, phase analysis, and performance metrics
- 🎨 Real-time Visualization: Interactive pose visualization with trail effects
- 🔧 Modular Design: Easy to extend with new pose models and analysis methods
- 📈 Cross-validation: Robust evaluation pipeline with comprehensive metrics
- 📁 Organized Outputs: All results centralized in the outputs/ directory
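The TCN listed above models gait as a temporal sequence whose effective context window grows with the number of dilated convolution blocks. A quick sketch of that relationship, using the block count and kernel size from the example configuration later in this README; the doubling dilation schedule and one convolution per block are assumptions, not taken from this codebase:

```python
# Receptive field of a TCN stack: with kernel size k and dilations d_1..d_n,
# the last output sample "sees" 1 + (k - 1) * sum(d_i) input samples.
# The doubling dilation schedule below is an illustrative assumption.

def tcn_receptive_field(kernel_size: int, num_blocks: int) -> int:
    dilations = [2 ** i for i in range(num_blocks)]  # 1, 2, 4, 8, ...
    return 1 + (kernel_size - 1) * sum(dilations)

# With kernel_size=3 and num_blocks=4 (as in the example config),
# each prediction can draw on the preceding 31 frames.
print(tcn_receptive_field(kernel_size=3, num_blocks=4))  # → 31
```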
The unified pose processor manager makes it easy to add new pose estimation models:
- Create a new processor class inheriting from `PoseProcessor`
- Implement the required abstract methods
- Add the model to the `AVAILABLE_MODELS` dictionary
- Update the `create_processor` method
gait_analysis/
├── core/ # Core system modules (pose processing, TCN model, training)
├── usecases/ # Use case implementations (gait analysis, testing)
├── scripts/ # Utility scripts
├── configs/ # Configuration files
├── docs/ # Documentation
├── archive/ # Legacy scripts (see archive/README.md)
├── data/ # Input data and trained models
├── videos/ # Video files
└── outputs/ # Output results (analysis, logs, models, visualizations)
On macOS/Linux:
./setup_environment.sh

On Windows:
setup_environment.bat

Activate the virtual environment:
source .venv/bin/activate   # macOS/Linux
# or
.venv\Scripts\activate      # Windows

# Test the complete system
python3 usecases/testing/test_system.py
# Test pose models specifically
python3 usecases/testing/test_pose_models.py
# Show available models
python3 scripts/pose_model_comparison.py --info

# Basic gait analysis with MediaPipe
python3 usecases/gait_analysis/main_gait_analysis.py \
--videos videos/raw/sample.mp4 \
--output outputs/gait_analysis/
# Pose detection only
python3 usecases/gait_analysis/main_gait_analysis.py \
--videos videos/raw/sample.mp4 \
--pose-detection-only
# With real-time visualization
python3 usecases/gait_analysis/main_gait_analysis.py \
--videos videos/raw/sample.mp4 \
--with-visualization

| Framework | Status | Notes |
|---|---|---|
| MediaPipe | ✅ Implemented | Default, actively used |
| OpenPose | ⚠️ Archived | Code in archive/, not integrated |
| YOLO-Pose | ❌ Not implemented | Architecture ready for integration |
| MMPose | ❌ Not implemented | Architecture ready for integration |
| ViTPose | ❌ Not implemented | Architecture ready for integration |
- Single-person detection only: The system is currently configured with `num_poses=1`, detecting only one person per frame. Multi-person detection is supported by MediaPipe but not yet implemented in this codebase.
- Speed: Fast, real-time processing
- Accuracy: Good for most applications
- Resource Usage: Low, works on CPU
- Best For: Real-time applications, mobile/edge devices
- Landmarks: 33 pose landmarks, converted to BODY_25 format (25 keypoints)
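The 33-to-25 landmark conversion mentioned above can be sketched as an index remapping plus two derived points. The exact mapping below is an illustrative assumption, not the one used by this codebase: BODY_25 defines a neck and mid-hip that have no direct MediaPipe counterpart (here derived as shoulder/hip midpoints), and MediaPipe has no small-toe landmark (here the foot-index point is reused):

```python
def mediapipe_to_body25(lm):
    """lm: 33 (x, y) MediaPipe landmarks -> 25 (x, y) BODY_25-style points.

    Illustrative mapping only; see the project's pose processing code for
    the actual conversion.
    """
    assert len(lm) == 33
    mid = lambda a, b: ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)
    neck = mid(lm[11], lm[12])      # derived: midpoint of the shoulders
    mid_hip = mid(lm[23], lm[24])   # derived: midpoint of the hips
    # BODY_25 slot order: nose, neck, R arm, L arm, mid-hip, R leg, L leg,
    # eyes, ears, L foot (big toe, small toe, heel), R foot.
    idx = [0, None, 12, 14, 16, 11, 13, 15, None, 24, 26, 28,
           23, 25, 27, 5, 2, 8, 7, 31, 31, 29, 32, 32, 30]
    out = []
    for slot, i in enumerate(idx):
        if slot == 1:
            out.append(neck)
        elif slot == 8:
            out.append(mid_hip)
        else:
            out.append(lm[i])
    return out
```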
The system architecture is designed to easily support additional pose estimation models:
- Create a new processor class that inherits from the `PoseProcessor` abstract base class
- Implement the required abstract methods (`process_video`, `process_webcam`, `cleanup`)
- Add the model to the `AVAILABLE_MODELS` dictionary in `PoseProcessorManager`
- Update the `create_processor` method to handle the new model type
See core/pose_processor_manager.py for the extensible architecture and core/mediapipe_integration.py for an implementation example.
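The extension steps above follow a standard abstract-base-class pattern. A minimal self-contained sketch of that pattern — the method signatures, the `MyPoseProcessor` class, and the manager shown here are illustrative assumptions; the real interface lives in core/pose_processor_manager.py:

```python
from abc import ABC, abstractmethod

class PoseProcessor(ABC):
    """Sketch of the abstract base class; signatures are assumptions."""
    @abstractmethod
    def process_video(self, video_path: str, output_dir: str): ...
    @abstractmethod
    def process_webcam(self): ...
    @abstractmethod
    def cleanup(self): ...

class MyPoseProcessor(PoseProcessor):
    """Hypothetical new backend implementing the required methods."""
    def process_video(self, video_path: str, output_dir: str):
        return f"processed {video_path} -> {output_dir}"
    def process_webcam(self):
        return "webcam stream started"
    def cleanup(self):
        return "resources released"

class PoseProcessorManager:
    # Registering the new model makes it selectable by name.
    AVAILABLE_MODELS = {"my_model": MyPoseProcessor}

    @classmethod
    def create_processor(cls, model_name: str) -> PoseProcessor:
        if model_name not in cls.AVAILABLE_MODELS:
            raise ValueError(f"Unknown pose model: {model_name}")
        return cls.AVAILABLE_MODELS[model_name]()
```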
# Compare available models on the same video
python3 scripts/pose_model_comparison.py --video videos/raw/sample.mp4 --compare
# Process with specific model
python3 usecases/gait_analysis/main_gait_analysis.py \
--videos videos/raw/sample.mp4 \
--pose-model mediapipe

The system includes an interactive real-time pose visualization tool that displays pose keypoints as colored dots with trail effects.
# Basic visualization with trail effect
python3 usecases/gait_analysis/features/realtime_pose_visualization.py videos/raw/sample.mp4
# Show confidence values
python3 usecases/gait_analysis/features/realtime_pose_visualization.py videos/raw/sample.mp4 --show-confidence
# Fast performance mode
python3 usecases/gait_analysis/features/realtime_pose_visualization.py videos/raw/sample.mp4 --model-complexity 0 --no-trail

Keyboard controls:
- 'q': Quit visualization
- 't': Toggle trail effect
- 'c': Toggle connections
- 'r': Reset trail
- SPACE: Pause/resume
- '1', '2', '3': Change model complexity
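The controls above amount to a small keypress dispatch. A sketch of that logic, decoupled from the video loop so it is testable on its own — in the real tool the key codes would come from something like `cv2.waitKey()`, and the '1'–'3' to complexity 0–2 mapping is an assumption:

```python
def handle_key(key: int, state: dict) -> bool:
    """Apply one keypress to the visualization state; return False to quit."""
    if key == ord('q'):
        return False                                   # quit the loop
    elif key == ord('t'):
        state["trail"] = not state["trail"]            # toggle trail effect
    elif key == ord('c'):
        state["connections"] = not state["connections"]
    elif key == ord('r'):
        state["trail_points"].clear()                  # reset trail history
    elif key == ord(' '):
        state["paused"] = not state["paused"]
    elif key in (ord('1'), ord('2'), ord('3')):
        # Assumed mapping: '1'..'3' -> MediaPipe model complexity 0..2.
        state["model_complexity"] = key - ord('1')
    return True
```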
All results are organized in the outputs/ directory:
outputs/
├── gait_analysis/ # Main gait analysis results
├── mediapipe/ # MediaPipe pose detection outputs
├── test_results/ # Testing and validation results
├── logs/ # Application logs
├── visualizations/ # Charts, graphs, and visual outputs
└── models/ # Trained models and artifacts
- Real-time Visualization: Interactive pose visualization guide
- TCN Gait Analysis: Comprehensive TCN system documentation
- Installation Guide: Detailed setup instructions
- Core Modules: Core system modules documentation
- Changelog: Project history and changes
- Archive: Legacy scripts and migration notes
The system uses JSON configuration files for customization:
{
"pose_model": "mediapipe",
"task_type": "phase_detection",
"num_classes": 4,
"num_filters": 64,
"kernel_size": 3,
"num_blocks": 4,
"dropout_rate": 0.2,
"learning_rate": 0.001,
"epochs": 100,
"batch_size": 32
}

- Fork the repository
- Create a feature branch
- Make your changes
- Add tests for new functionality
- Submit a pull request
This project is licensed under the MIT License - see the LICENSE file for details.
- MediaPipe team for the pose estimation framework
- TensorFlow/Keras community for the deep learning framework
- OpenCV community for computer vision tools
Note: Legacy scripts from the initial development phase have been moved to the archive/ directory. See archive/README.md for details about the archived files and migration notes.

