NeuroLab: EEG & Voice Analysis Platform

AI-powered EEG analysis engine for real-time brain-computer interfaces, neurological research, and clinical monitoring. Built with CNN-LSTM architecture, FastAPI, and MQTT streaming support.

🔭 Overview

NeuroLab is a sophisticated multimodal analysis platform that combines EEG (electroencephalogram) data processing with voice emotion detection to provide comprehensive mental state classification. The system leverages machine learning to identify mental states such as relaxed, focused, and stressed, making it valuable for applications in mental health monitoring, neurofeedback, and brain-computer interfaces.

✨ Features

Core Capabilities

  • Real-time EEG Processing: Stream and analyze EEG data in real-time
  • Voice Emotion Detection: TensorFlow-based audio analysis with rule-based fallback
  • Multimodal Analysis: Combine EEG and voice data for comprehensive assessment
  • Multiple File Format Support: Compatible with EDF, BDF, GDF, CSV, WAV, MP3, and more
  • Advanced Signal Processing: Comprehensive preprocessing and feature extraction
  • Machine Learning Integration: TensorFlow/Keras models with graceful degradation
  • NLP-based Recommendations: AI-driven personalized insights and recommendations
  • RESTful API: FastAPI-powered endpoints for seamless integration
  • Interactive Web UI: Gradio interface for easy testing and demonstration
  • Scalable Architecture: Modular design for easy extension and maintenance

Mental State Classification

  • Relaxed (State 0): Calm, neutral emotional states
  • Focused (State 1): Alert, positive, engaged states
  • Stressed (State 2): Anxious, fearful, negative states
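
These codes recur throughout the API responses and examples below. As a quick illustrative sketch (not the project's actual constants):

# Illustrative lookup for the mental-state codes described above
MENTAL_STATES = {0: "relaxed", 1: "focused", 2: "stressed"}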

πŸ— System Architecture

neurolab_model/
β”œβ”€β”€ api/                    # API endpoints and routing
β”‚   β”œβ”€β”€ auth.py            # Authentication endpoints
β”‚   β”œβ”€β”€ training.py        # Model training endpoints
β”‚   β”œβ”€β”€ voice.py           # Voice processing endpoints
β”‚   └── streaming_endpoint.py
β”œβ”€β”€ config/                # Configuration files
β”‚   β”œβ”€β”€ database.py
β”‚   └── settings.py
β”œβ”€β”€ core/                  # Core functionality
β”‚   β”œβ”€β”€ config/
β”‚   β”œβ”€β”€ data/
β”‚   β”œβ”€β”€ ml/
β”‚   β”œβ”€β”€ models/
β”‚   └── services/
β”œβ”€β”€ preprocessing/         # Data preprocessing modules
β”‚   β”œβ”€β”€ features.py
β”‚   β”œβ”€β”€ labeling.py
β”‚   β”œβ”€β”€ load_data.py
β”‚   └── preprocess.py
β”œβ”€β”€ utils/                 # Utility functions
β”‚   β”œβ”€β”€ ml_processor.py
β”‚   β”œβ”€β”€ nlp_recommendations.py
β”‚   β”œβ”€β”€ voice_processor.py
β”‚   └── model_manager.py
β”œβ”€β”€ data/                  # Raw data storage
β”œβ”€β”€ processed/             # Processed data and trained models
β”œβ”€β”€ main.py               # Application entry point
β”œβ”€β”€ requirements.txt      # Project dependencies
└── README.md

🚀 Installation

Prerequisites

  • Python 3.8+
  • pip package manager
  • (Optional) MongoDB for data storage
  • (Optional) InfluxDB for time-series data

Setup Steps

  1. Clone the Repository

    git clone https://github.com/neurolab-0x/ai.neurolab.git neurolab_model
    cd neurolab_model
  2. Create a Virtual Environment

    python -m venv venv
    source venv/bin/activate  # On Windows: venv\Scripts\activate
  3. Install Dependencies

    pip install -r requirements.txt
  4. Install Additional Audio Libraries (Recommended for voice processing)

    pip install librosa soundfile
  5. Environment Setup

    cp .env.example .env
    # Configure your .env file with appropriate settings
  6. Verify Installation

    python -c "import tensorflow as tf; print(f'TensorFlow: {tf.__version__}')"
    python -c "import torch; print(f'PyTorch: {torch.__version__}')"

🎯 Quick Start

Option 1: FastAPI Server

Start the API server:

uvicorn main:app --reload

Server will run on: http://localhost:8000

Access the API documentation (served by FastAPI at the default paths):

  • Swagger UI: http://localhost:8000/docs
  • ReDoc: http://localhost:8000/redoc

Option 2: Gradio Web Interface

Launch the interactive web UI:

python gradio_app.py

Interface will run on: http://localhost:7860

Features:

  • πŸ“ Manual EEG input with sliders
  • 🎲 Sample data generation and testing
  • πŸ“ CSV file upload and analysis
  • ℹ️ Model information and status

Option 3: Quick API Test

Test EEG Analysis:

import requests

eeg_data = {
    "alpha": 10.5,
    "beta": 15.2,
    "theta": 6.3,
    "delta": 2.1,
    "gamma": 30.5
}

response = requests.post('http://localhost:8000/analyze', json=eeg_data)
print(response.json())

🚀 Hugging Face Deployment

Deploy NeuroLab to Hugging Face Spaces for easy testing and API access.

Quick Deploy

1. Install Hugging Face CLI:

pip install huggingface_hub
huggingface-cli login

2. Prepare deployment:

python scripts/prepare_hf_space.py

3. Create and deploy Space:

cd neurolab-hf-space
git init
git add .
git commit -m "Deploy NeuroLab"
git remote add origin https://huggingface.co/spaces/YOUR_USERNAME/neurolab-eeg-analysis
git push -u origin main

4. Access your Space:

https://huggingface.co/spaces/YOUR_USERNAME/neurolab-eeg-analysis

Deployment Options

  • Gradio Space: Interactive web interface (recommended for testing)
  • Docker Space: Full FastAPI backend with all endpoints
  • Model Hub: Upload trained models for inference

Test Deployed API

from gradio_client import Client

client = Client("YOUR_USERNAME/neurolab-eeg-analysis")
result = client.predict(
    alpha=10.5, beta=15.2, theta=6.3, delta=2.1, gamma=30.5
)
print(result)

📚 API Documentation

Core Endpoints

Health & Status

  • GET /health - System health check and diagnostics
  • GET / - API information and available endpoints

EEG Analysis

  • POST /upload - Upload and process EEG files

    • Supports files up to 500MB
    • Returns mental state classification and analysis
  • POST /analyze - Analyze EEG data

    • Real-time EEG data processing
    • Returns mental state, confidence, and metrics
  • POST /detailed-report - Generate comprehensive analysis report

    • Includes cognitive metrics
    • Provides NLP-based recommendations
    • Optional report saving

Recommendations

  • POST /recommendations - Get personalized recommendations
    • Based on mental state analysis
    • NLP-powered insights
    • Customizable recommendation count

Model Management

  • POST /calibrate - Calibrate model with new data
  • POST /train - Train model with custom dataset (requires auth)
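
The snippet below is a minimal client session against these endpoints, assuming the server from the Quick Start is running locally; the /recommendations payload shape is an assumption based on the descriptions above, so check the interactive docs at /docs for the exact schema.

import requests

BASE = 'http://localhost:8000'

# Health check and API info
print(requests.get(f'{BASE}/health').json())

# Upload an EEG recording for processing (multipart upload, up to 500MB)
with open('session.edf', 'rb') as f:
    result = requests.post(f'{BASE}/upload', files={'file': f}).json()
print(result)

# Request personalized recommendations (field names are assumptions)
rec = requests.post(f'{BASE}/recommendations',
                    json={'mental_state': 2, 'count': 3})
print(rec.json())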

🎤 Voice Processing

Overview

The voice processing module analyzes audio for emotion detection and maps emotions to mental states compatible with EEG analysis.

Supported Emotions

  • Angry → Stressed (State 2)
  • Fear → Stressed (State 2)
  • Sad → Stressed (State 2)
  • Neutral → Relaxed (State 0)
  • Calm → Relaxed (State 0)
  • Happy → Focused (State 1)
  • Surprise → Focused (State 1)
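
Expressed as a lookup table, this mapping looks roughly like the sketch below (illustrative, not the module's internal representation):

# Emotion -> mental state code (0 = relaxed, 1 = focused, 2 = stressed)
EMOTION_TO_STATE = {
    "angry": 2, "fear": 2, "sad": 2,
    "neutral": 0, "calm": 0,
    "happy": 1, "surprise": 1,
}

print(EMOTION_TO_STATE["happy"])  # 1 (focused)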

Voice API Endpoints

Health Check

GET /voice/health

Check if voice processor is initialized and ready.

Get Supported Emotions

GET /voice/emotions

List all supported emotions and their mental state mappings.

Analyze Audio File

POST /voice/analyze

Upload and analyze an audio file for emotion detection.

Example:

import requests

with open('audio.wav', 'rb') as f:
    files = {'file': ('audio.wav', f, 'audio/wav')}
    response = requests.post('http://localhost:8000/voice/analyze', files=files)
    result = response.json()
    
print(f"Emotion: {result['data']['emotion']}")
print(f"Mental State: {result['data']['mental_state']}")
print(f"Confidence: {result['data']['confidence']}")

Batch Analysis

POST /voice/analyze-batch

Analyze multiple audio files with pattern analysis.

Features:

  • Process up to 50 files simultaneously
  • Aggregate emotion distribution
  • Calculate average mental state
  • Identify dominant emotions
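
A minimal batch request might look like the sketch below; the multipart field name 'files' is an assumption, so verify it against the endpoint's schema in the interactive docs.

import requests

paths = ['clip1.wav', 'clip2.wav', 'clip3.wav']
handles = [open(p, 'rb') for p in paths]
try:
    # Attach every clip to a single multipart request
    files = [('files', (p, h, 'audio/wav')) for p, h in zip(paths, handles)]
    response = requests.post('http://localhost:8000/voice/analyze-batch',
                             files=files)
    print(response.json())
finally:
    for h in handles:
        h.close()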

Raw Audio Analysis

POST /voice/analyze-raw

Analyze raw audio data (base64 or bytes array).

Example:

import base64
import requests

with open('audio.wav', 'rb') as f:
    audio_bytes = f.read()
    audio_base64 = base64.b64encode(audio_bytes).decode()

payload = {
    "audio_data": {
        "data": audio_base64,
        "format": "base64"
    },
    "sample_rate": 16000
}

response = requests.post('http://localhost:8000/voice/analyze-raw', json=payload)

Multimodal Analysis

Combine EEG and voice data for comprehensive mental state assessment:

import requests

# EEG band powers (same schema as the Quick Start example)
eeg_data = {
    "alpha": 10.5,
    "beta": 15.2,
    "theta": 6.3,
    "delta": 2.1,
    "gamma": 30.5
}

# Analyze EEG data
eeg_response = requests.post('http://localhost:8000/analyze', json=eeg_data)
eeg_state = eeg_response.json()['mental_state']

# Analyze voice data
with open('audio.wav', 'rb') as f:
    voice_response = requests.post('http://localhost:8000/voice/analyze',
                                   files={'file': f})
voice_state = voice_response.json()['data']['mental_state']

# Combine results: the states are numeric codes (0-2), so a simple
# average gives a crude fused score across the two modalities
combined_state = (eeg_state + voice_state) / 2
print(f"Combined Mental State: {combined_state}")

πŸ” Model Interpretability

SHAP (SHapley Additive exPlanations)

  • Explains model predictions by attributing feature importance
  • Identifies which EEG features contribute most to classifications
  • Available via: /interpretability/explain?explanation_type=shap

LIME (Local Interpretable Model-agnostic Explanations)

  • Provides local explanations for individual predictions
  • Available via: /interpretability/explain?explanation_type=lime
  • Can be included in streaming responses with include_interpretability=true

Confidence Calibration

  • Ensures confidence scores accurately reflect true probabilities
  • Methods: temperature scaling, Platt scaling, isotonic regression
  • Available via: /interpretability/calibrate?method=temperature_scaling

Usage Example:

from utils.interpretability import ModelInterpretability

interpreter = ModelInterpretability(model)

# Get SHAP explanations
shap_results = interpreter.explain_with_shap(X_data)

# Calibrate confidence
cal_results = interpreter.calibrate_confidence(X_val, y_val, 
                                               method='temperature_scaling')

# Make predictions with calibrated confidence
predictions = interpreter.predict_with_calibration(X_test)

🔄 Data Processing Pipeline

EEG Processing

  1. Data Loading - File validation and format checking
  2. Preprocessing - Artifact removal, filtering, normalization
  3. Feature Extraction - Temporal, frequency domain, statistical features
  4. State Classification - Mental state prediction with confidence scoring
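
As an illustration of steps 2-3, here is a minimal band-power extraction built on scipy's Welch periodogram; the band edges and sampling rate are conventional values, not necessarily those used by the preprocessing modules.

import numpy as np
from scipy.signal import welch

def band_powers(signal, fs=256):
    """Average power in the classic EEG bands via Welch's method."""
    bands = {'delta': (0.5, 4), 'theta': (4, 8), 'alpha': (8, 13),
             'beta': (13, 30), 'gamma': (30, 45)}
    freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)
    return {name: float(np.mean(psd[(freqs >= lo) & (freqs < hi)]))
            for name, (lo, hi) in bands.items()}

# Example: one channel of synthetic data, 10 seconds at 256 Hz
print(band_powers(np.random.randn(2560)))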

Voice Processing

  1. Audio Loading - Multiple format support (WAV, MP3, etc.) using scipy, soundfile, or fallback methods
  2. Preprocessing - Normalization, resampling to 16kHz
  3. Feature Extraction - RMS energy, zero-crossing rate, spectral centroid, spectral rolloff
  4. Emotion Detection - TensorFlow-based model or rule-based classification fallback
  5. State Mapping - Convert emotions to mental states (7 emotions → 3 states)
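
A minimal sketch of steps 2-3 using librosa (one of the optional audio libraries from the installation section); the feature set mirrors the list above, though the module's exact parameters may differ.

import numpy as np
import librosa

def voice_features(path, sr=16000):
    """RMS energy, zero-crossing rate, spectral centroid and rolloff."""
    y, sr = librosa.load(path, sr=sr)  # resamples to 16 kHz (step 2)
    return {
        'rms': float(np.mean(librosa.feature.rms(y=y))),
        'zcr': float(np.mean(librosa.feature.zero_crossing_rate(y))),
        'centroid': float(np.mean(librosa.feature.spectral_centroid(y=y, sr=sr))),
        'rolloff': float(np.mean(librosa.feature.spectral_rolloff(y=y, sr=sr))),
    }

print(voice_features('audio.wav'))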

🧠 Model Training

Training Process

  1. Data preparation and splitting
  2. Feature engineering
  3. Model selection and hyperparameter tuning
  4. Cross-validation
  5. Model calibration
  6. Performance evaluation
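
The repository description mentions a CNN-LSTM architecture; the sketch below shows what steps 1 and 3 could look like in Keras, with synthetic data and illustrative layer sizes (this is not the project's actual model definition).

import numpy as np
import tensorflow as tf
from sklearn.model_selection import train_test_split

# Synthetic stand-in data: 1000 windows of 256 samples x 5 band features
X = np.random.randn(1000, 256, 5).astype('float32')
y = np.random.randint(0, 3, size=1000)

# Step 1: data preparation and splitting
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

# Step 3: a small CNN-LSTM classifier (layer sizes are illustrative)
model = tf.keras.Sequential([
    tf.keras.layers.Conv1D(32, 5, activation='relu', input_shape=(256, 5)),
    tf.keras.layers.MaxPooling1D(2),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(3, activation='softmax'),
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

model.fit(X_train, y_train, validation_data=(X_val, y_val), epochs=10)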

Evaluation Metrics

  • Accuracy
  • Precision
  • Recall
  • F1 Score
  • ROC-AUC
  • Confidence calibration metrics
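
These can all be computed with scikit-learn; the sketch below uses macro averaging for the three-class setting, which is one reasonable choice rather than necessarily the project's.

from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score)

def evaluation_report(y_true, y_pred, y_prob):
    """y_true/y_pred: integer state codes; y_prob: per-class probabilities."""
    return {
        'accuracy': accuracy_score(y_true, y_pred),
        'precision': precision_score(y_true, y_pred, average='macro'),
        'recall': recall_score(y_true, y_pred, average='macro'),
        'f1': f1_score(y_true, y_pred, average='macro'),
        'roc_auc': roc_auc_score(y_true, y_prob, multi_class='ovr'),
    }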

🎨 Gradio Web Interface

NeuroLab includes a user-friendly Gradio interface for easy testing and demonstration.

Features

Manual Input Tab:

  • Interactive sliders for each EEG frequency band
  • Real-time analysis as you adjust values
  • Visual feedback on mental state

Sample Data Tab:

  • Pre-generated data for different mental states
  • Quick testing without manual input
  • Demonstrates expected outputs

CSV Upload Tab:

  • Upload CSV files with EEG data
  • Automatic processing and analysis
  • Supports multiple rows (uses mean values)

Model Info Tab:

  • View model status and configuration
  • Check TensorFlow availability
  • Model architecture details

Launch Gradio Interface

python gradio_app.py

Access at: http://localhost:7860

🔧 Troubleshooting

Common Issues

1. TensorFlow GPU not detected:

# Check GPU availability
python -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"

# Install CUDA-enabled TensorFlow if needed
pip install tensorflow[and-cuda]

2. Voice processing errors:

# Install audio processing libraries
pip install librosa soundfile scipy

3. Model not found:

  • Ensure ./processed/trained_model.h5 exists for EEG analysis
  • Ensure ./model/voice_emotion_model.h5 exists for voice processing
  • System will use rule-based fallback if models are missing
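
A quick way to verify both paths (a convenience sketch, not part of the project tooling):

from pathlib import Path

for p in ['./processed/trained_model.h5', './model/voice_emotion_model.h5']:
    status = 'found' if Path(p).exists() else 'missing (fallback will be used)'
    print(f'{p}: {status}')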

4. Port already in use:

# Use a different port
uvicorn main:app --port 8001
# or for Gradio
python gradio_app.py  # Edit server_port in the file

5. Import errors:

# Reinstall dependencies
pip install -r requirements.txt --force-reinstall

🤝 Contributing

We welcome contributions! Please follow these steps:

  1. Fork the repository
  2. Create a feature branch (git checkout -b feature/AmazingFeature)
  3. Commit your changes (git commit -m 'Add some AmazingFeature')
  4. Push to the branch (git push origin feature/AmazingFeature)
  5. Open a Pull Request

📄 License

This project is licensed under the MIT License - see the LICENSE file for details.

📞 Contact

AI Model Maintainer: Mugisha Prosper
Email: nelsonprox92@gmail.com

Project: Neurolabs Inc
Repository: https://github.com/neurolab-0x/ai.neurolab


Built with ❀️ by the NeuroLab Team
