willmacd/dql_snake_game

Deep Q-Learning & Uncertainty-Aware Action Advising Based Approach to the Snake Game


Implementation of Deep Q-Learning and Uncertainty-Aware Action Advising algorithms to train Deep Reinforcement Learning agents to play the Snake game

This repository hosts the Queen's University CISC 856 - Reinforcement Learning course project of Will Macdonald & Ekin Tureoglu


Abstract — Although Q-Learning is considered a standard approach in Machine Learning for games, it may not be the best choice in complex cases where the state and action spaces are very large. One popular solution is to go deep and incorporate a Neural Network architecture into the Q-Learning algorithm as an approximation model. In this work, we replicate the paper "A Deep Q-Learning Based Approach Applied to the Snake Game" by Sebastianelli et al., in which the Q-values for the snake are predicted with a Deep Q-Network. To expand upon this Deep Q-Network model, this report also presents work on replicating the paper "Uncertainty-Aware Action Advising for Deep Reinforcement Learning Agents" by Da Silva et al. By asking for advice when its uncertainty is deemed high, the resulting agent is able to address the exploration-exploitation trade-off while remaining more efficient than tabular Q-learning. The training results of both the Deep Q-Network and Action Advising models are explored and compared to the baseline performances reported in their respective original works.
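
To make the two ideas in the abstract concrete, the sketch below shows a small Deep Q-Network that predicts one Q-value per action, together with an uncertainty-gated action-selection rule that defers to an advisor when the agent's own estimate is uncertain. This is a minimal illustration only: the PyTorch framework, layer sizes, Monte Carlo dropout uncertainty estimate, and advisor interface are assumptions made for this example and are not taken from this repository's code.

```python
# Illustrative sketch (not the repository's implementation):
# 1) a Deep Q-Network mapping a state vector to one Q-value per action, and
# 2) an uncertainty-gated action selection rule that queries an advisor
#    (e.g. a pre-trained teacher policy) when the estimate is uncertain.
import torch
import torch.nn as nn


class DQN(nn.Module):
    def __init__(self, state_dim: int = 11, n_actions: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 128), nn.ReLU(),
            nn.Dropout(p=0.1),          # dropout doubles as an uncertainty probe
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, n_actions),  # one Q-value per action
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)


def select_action(model, state, advisor=None, threshold=0.5, n_samples=10):
    """Act greedily on the predicted Q-values, but ask the advisor for an
    action when the Monte Carlo dropout estimate of uncertainty is high."""
    model.train()  # keep dropout active so repeated forward passes differ
    with torch.no_grad():
        samples = torch.stack([model(state) for _ in range(n_samples)])
    uncertainty = samples.var(dim=0).mean().item()
    if advisor is not None and uncertainty > threshold:
        return advisor(state)                    # defer to the teacher/advisor
    return samples.mean(dim=0).argmax().item()   # otherwise act greedily
```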


Model_Training.mp4
