A collection of intelligent agents that play the classic Snake game using different AI approaches. This project demonstrates various AI techniques from simple reflex agents to reinforcement learning.
This research was conducted as part of the AI Laboratory coursework, demonstrating the practical application of intelligent agent architectures in game environments. The implementations serve as educational resources for understanding the evolution from reactive systems to learning-based artificial intelligence.
The Snake game implementation includes multiple AI agents:
- Simple Reflex Agent: Direct reactions to immediate perceptions
- Goal-Based Agent: Uses A* pathfinding algorithm
- Model-Based Agent: Maintains internal world model with prediction
- Utility-Based Agent: Maximizes expected utility across multiple criteria
- Q-Learning Agent: Reinforcement learning with Q-table
- Python 3.8 or higher
- pip (Python package installer)
- Clone the repository:

  ```bash
  git clone https://github.com/Krish-Om/agents-in-ai.git
  cd SnakeAgents
  ```

- Create a virtual environment (recommended):

  ```bash
  python -m venv .venv
  source .venv/bin/activate  # On Windows: .venv\Scripts\activate
  ```

- Install dependencies:

  ```bash
  pip install -r requirements.txt
  ```

  Or install pygame directly:

  ```bash
  pip install pygame
  ```
Execute the main runner script:
```bash
python run_agents.py
```
This will display a menu where you can choose which AI agent to run.
You can also run agents directly:
```bash
# Simple Reflex Agent
python agents/simple_reflex/simple_reflex_player.py

# Goal-Based Agent
python agents/goal_based/goal_based_player.py

# Model-Based Agent
python agents/model_based/model_based_player.py

# Utility-Based Agent
python agents/utility_based/utility_based_player.py

# Q-Learning Agent
python agents/q_learning/trained_player.py
```
```
├── agents/                 # AI agent implementations
│   ├── simple_reflex/      # Simple reflex agent
│   ├── goal_based/         # Goal-based agent with A*
│   ├── model_based/        # Model-based agent
│   ├── utility_based/      # Utility-based agent
│   └── q_learning/         # Q-learning agent
├── core_game/              # Core game engine
├── resources/              # Game assets (images, sounds)
├── reports/                # Performance reports
├── ss/                     # Screenshots
└── run_agents.py           # Main runner script
```
Simple Reflex Agent:
- Strategy: Direct reactions to current state
- Complexity: Low
- Best for: Understanding basic AI concepts
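To make the reflex strategy above concrete, a decision rule of this kind can be sketched as below. This is illustrative only; the helper names (`is_blocked`, the direction strings) are assumptions, not the repository's code.

```python
# Hypothetical sketch of a reflex-style move rule: react only to the
# current head position, food position, and immediately blocked cells.
def reflex_move(head, food, is_blocked):
    """head/food are (x, y) grid cells; is_blocked(cell) -> bool."""
    candidates = {
        "UP": (head[0], head[1] - 1),
        "DOWN": (head[0], head[1] + 1),
        "LEFT": (head[0] - 1, head[1]),
        "RIGHT": (head[0] + 1, head[1]),
    }
    # Keep only safe moves, then pick the one closest to the food.
    safe = {d: c for d, c in candidates.items() if not is_blocked(c)}
    if not safe:
        return "UP"  # no safe move exists; any choice loses
    return min(safe, key=lambda d: abs(safe[d][0] - food[0]) + abs(safe[d][1] - food[1]))
```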
Goal-Based Agent:
- Strategy: A* pathfinding to reach food
- Complexity: Medium
- Best for: Learning search algorithms
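For reference, a grid-based A* search toward the food can be sketched as follows. This is a generic implementation under assumed inputs (a set of blocked cells and the grid size), not the code in `agents/goal_based/`.

```python
import heapq

def a_star(start, goal, blocked, width, height):
    """Shortest grid path from start to goal using A* with a
    Manhattan-distance heuristic. blocked is a set of impassable cells."""
    def h(cell):
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    open_heap = [(h(start), 0, start)]
    came_from, g = {}, {start: 0}
    while open_heap:
        _, cost, cur = heapq.heappop(open_heap)
        if cur == goal:
            path = [cur]
            while cur in came_from:          # walk back to the start
                cur = came_from[cur]
                path.append(cur)
            return list(reversed(path))
        for dx, dy in ((0, 1), (0, -1), (1, 0), (-1, 0)):
            nxt = (cur[0] + dx, cur[1] + dy)
            if not (0 <= nxt[0] < width and 0 <= nxt[1] < height) or nxt in blocked:
                continue
            if cost + 1 < g.get(nxt, float("inf")):
                g[nxt] = cost + 1
                came_from[nxt] = cur
                heapq.heappush(open_heap, (cost + 1 + h(nxt), cost + 1, nxt))
    return None  # no path to the food
```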
Model-Based Agent:
- Strategy: Maintains world model and predicts outcomes
- Complexity: High
- Best for: Understanding predictive AI
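A hedged sketch of the "predict before moving" idea, assuming a simple list-of-cells body representation (not necessarily how the project stores its internal model):

```python
# Hypothetical sketch: keep an internal copy of the snake's body and
# check what each candidate move would lead to before committing.
def predict_outcome(body, move, width, height):
    """body is a list of (x, y) cells, head first; returns (safe, new_body).
    Growth on the tick the food is eaten is ignored for simplicity."""
    dx, dy = {"UP": (0, -1), "DOWN": (0, 1), "LEFT": (-1, 0), "RIGHT": (1, 0)}[move]
    new_head = (body[0][0] + dx, body[0][1] + dy)
    new_body = [new_head] + body[:-1]          # tail cell is vacated this tick
    hits_wall = not (0 <= new_head[0] < width and 0 <= new_head[1] < height)
    hits_self = new_head in new_body[1:]
    return (not hits_wall and not hits_self), new_body
```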
Utility-Based Agent:
- Strategy: Evaluates multiple criteria and maximizes utility
- Complexity: High
- Best for: Multi-objective optimization
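One standard way to maximize utility across multiple criteria is a weighted sum. The weights and feature names below are placeholders for illustration, not the project's actual values:

```python
# Hypothetical utility function: weigh several criteria and pick the
# move with the highest combined score. Weights are illustrative only.
WEIGHTS = {"food_distance": -1.0, "free_space": 0.5, "safety": 5.0}

def utility(move_features):
    """move_features maps criterion name -> numeric value for one candidate move."""
    return sum(WEIGHTS[name] * value for name, value in move_features.items())

def best_move(candidates):
    """candidates maps move name -> feature dict; returns the highest-utility move."""
    return max(candidates, key=lambda m: utility(candidates[m]))
```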
Q-Learning Agent:
- Strategy: Reinforcement learning with experience replay
- Complexity: Very High
- Best for: Machine learning applications
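The core of tabular Q-learning is an epsilon-greedy action choice plus a per-step table update. A minimal generic sketch follows; the hyperparameters and state encoding are placeholders, not the project's settings:

```python
import random
from collections import defaultdict

ALPHA, GAMMA = 0.1, 0.9          # placeholder learning rate / discount factor
ACTIONS = ["UP", "DOWN", "LEFT", "RIGHT"]
q_table = defaultdict(lambda: {a: 0.0 for a in ACTIONS})

def choose_action(state, epsilon):
    """Epsilon-greedy: explore with probability epsilon, otherwise exploit."""
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(q_table[state], key=q_table[state].get)

def update(state, action, reward, next_state):
    """Standard Q-learning update for one observed transition."""
    best_next = max(q_table[next_state].values())
    q_table[state][action] += ALPHA * (reward + GAMMA * best_next - q_table[state][action])
```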
To train the Q-learning agent:
```bash
python agents/q_learning/auto_trainer.py
```

Training data is saved to `trained_q_table.json` and can be used by the trained player.
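Since the Q-table is persisted as JSON, saving and loading it can be as simple as the sketch below (the actual schema of `trained_q_table.json` may differ):

```python
import json

def save_q_table(q_table, path="trained_q_table.json"):
    # JSON object keys must be strings, so stringify state keys before dumping.
    with open(path, "w") as f:
        json.dump({str(state): actions for state, actions in q_table.items()}, f)

def load_q_table(path="trained_q_table.json"):
    # Note: state keys come back as strings and may need to be re-parsed.
    with open(path) as f:
        return json.load(f)
```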
- ESC: Quit game
- Space: Pause/Resume (in some modes)
Check the `reports/` directory for detailed performance analysis of each agent.
- Create a new directory in `agents/`
- Implement your agent following the existing pattern (see the hypothetical skeleton below)
- Add the agent to `run_agents.py`
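The precise interface expected by `run_agents.py` is defined by the existing agents, so treat this skeleton as purely hypothetical; it only illustrates the general shape of an agent module:

```python
# agents/my_agent/my_agent_player.py  (hypothetical layout)
class MyAgent:
    """Skeleton agent: decide() receives whatever game state the engine
    exposes and returns one of the four move directions."""

    def decide(self, state):
        # Replace with real decision logic (rules, search, learning, ...).
        return "UP"
```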
- `pygame`: Game engine and graphics
- `json`: For saving/loading training data (Q-learning)
- Standard Python libraries: `os`, `sys`, `math`, `random`, `time`
- Location: `agents/simple_reflex/`
- Basic reactive behavior based on immediate perceptions
- No internal state or planning
- Location: `agents/goal_based/`
- Uses goal information to guide decision making
- Plans actions to achieve specific objectives
- Location: `agents/model_based/`
- Maintains internal model of the world
- Uses model for planning and decision making
- Location: `agents/utility_based/`
- Uses utility functions to evaluate outcomes
- Makes decisions based on expected utility
- Location: `agents/q_learning/`
- Reinforcement learning using Q-learning algorithm
- Learns optimal policy through trial and error
- Includes both training and playing scripts
```bash
# Train the agent
cd agents/q_learning
python auto_trainer.py
```

```bash
# Play using the trained Q-table
cd agents/q_learning
python trained_player.py
```

```bash
# Screenshot mode
cd agents/q_learning
python screenshot_trainer.py  # Training with overlay
python screenshot_player.py   # Playing with stats
```
The main Snake game implementation is in `core_game/source.py` and provides:
- Game mechanics (movement, collision detection, scoring)
- Rendering and display
- External agent interface
- Audio and visual effects
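Conceptually, the external agent interface means an agent supplies moves to the engine's loop. The sketch below shows only that idea; the function and attribute names are hypothetical and do not mirror `core_game/source.py`:

```python
# Conceptual loop only -- the names below are hypothetical placeholders.
def play(game, agent):
    state = game.reset()
    while not game.is_over():
        move = agent.decide(state)   # agent chooses a direction
        state = game.step(move)      # engine applies it and returns the new state
    return game.score
```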
The Q-learning agent shows clear improvement:
- Training performance: ~0.30 average score
- Trained performance: Consistently achieves scores of 7+
- Learning uses epsilon-greedy strategy with decay (0.9 → 0.01)
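The decay from 0.9 to 0.01 means exploration gradually shrinks over training. With a multiplicative schedule (the decay factor and episode count here are illustrative, not the project's values), it would look like:

```python
# Illustrative epsilon schedule: start exploring 90% of the time and
# decay toward a 1% floor.
epsilon, epsilon_min, decay = 0.9, 0.01, 0.995

for episode in range(2000):
    # ... run one training episode using the current epsilon ...
    epsilon = max(epsilon_min, epsilon * decay)
```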
The full technical report and analysis are available in `reports/snake_agents_report.tex`.