
Welcome to the tensorflow-kubernetes-art-classification wiki!

Short Name

Classify Art using a TensorFlow model on Kubernetes

Short Description

Train an image classification model with TensorFlow, running on a Kubernetes cluster.

Offering Type

Cognitive

Introduction

This Code Pattern shows how to build your own data set and train a model for image classification. The models are implemented in TensorFlow and run on a Kubernetes cluster. The demonstration code pulls data and labels from the New York Metropolitan Museum of Art website and from Google BigQuery. The IBM Cloud Container Service provides the Kubernetes cluster. Developers can modify the code to build different image data sets and select from a collection of public models such as Inception, VGG, ResNet, AlexNet, MobileNet, etc.
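To explore the metadata that drives the labels, a short BigQuery query is enough. The sketch below is a minimal illustration in Python, assuming the public `bigquery-public-data.the_met.objects` table and its `culture` column; verify the dataset and column names in the BigQuery console before relying on them.

```python
# Minimal sketch: list the most common culture labels in the Met metadata.
# Assumptions: the table `bigquery-public-data.the_met.objects` and its
# `culture` column; requires google-cloud-bigquery and GCP credentials.
from google.cloud import bigquery

client = bigquery.Client()

query = """
    SELECT culture, COUNT(*) AS num_objects
    FROM `bigquery-public-data.the_met.objects`
    WHERE culture IS NOT NULL
    GROUP BY culture
    ORDER BY num_objects DESC
    LIMIT 20
"""

# Print the 20 most common culture labels to help choose training classes.
for row in client.query(query).result():
    print(f"{row.culture}: {row.num_objects}")
```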

Author

By Ton Ngo, Winnie Tsang

Code

https://github.com/IBM/tensorflow-kubernetes-art-classification

Demo

N/A

Video

https://youtu.be/I-8xmMxo-RQ

Overview

In this Code Pattern, we will use Deep Learning to train an image classification model. The data comes from the art collection at the New York Metropolitan Museum of Art, and the metadata comes from Google BigQuery. We will use the Inception model implemented in TensorFlow, and we will run the training on a Kubernetes cluster. We will save the trained model and load it later to perform inference. To use the model, we provide as input a picture of a painting, and the model returns the likely culture, for instance Italian art from Florence. The user can choose other attributes to classify the art collection, for instance artist, time period, etc. Depending on the compute resources available, the user can choose the number of images to train on, the number of classes to use, etc. In this Code Pattern, we select a small set of images and a small number of classes so that the training can complete within a reasonable amount of time. With a large data set, the training may take days or weeks.
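As a rough illustration of the final inference step, the sketch below loads a trained Inception-style classifier and predicts a label for a single painting. It uses the Keras InceptionV3 application rather than the pattern's exact TF-Slim code, and the weights path, image path, and class names are placeholders.

```python
# Rough sketch of inference with a trained Inception-style classifier.
# The Keras InceptionV3 model, file paths, and class names are assumptions
# for illustration; the pattern itself uses TF-Slim Inception.
import numpy as np
import tensorflow as tf

CLASS_NAMES = ["American", "Egyptian", "French", "Italian, Florence", "Japanese"]  # placeholders

# Rebuild the network with the number of classes used in training,
# then load the weights saved at the end of the training run.
model = tf.keras.applications.InceptionV3(weights=None, classes=len(CLASS_NAMES))
model.load_weights("trained_model/inception_art.weights.h5")  # placeholder path

# Load and preprocess one painting image to the 299x299 Inception input size.
img = tf.keras.utils.load_img("painting.jpg", target_size=(299, 299))
x = tf.keras.applications.inception_v3.preprocess_input(
    tf.keras.utils.img_to_array(img)[np.newaxis, ...])

# Report the most likely culture.
probs = model.predict(x)[0]
print("Predicted culture:", CLASS_NAMES[int(np.argmax(probs))])
```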

When the reader has completed this Code Pattern, they will understand how to:

  • Collect and process the data for Deep Learning in TensorFlow
  • Configure and deploy TensorFlow to run on a Kubernetes cluster
  • Train an advanced image classification Neural Network
  • Use TensorBoard to visualize and understand the training process (see the sketch after this list)
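
For the TensorBoard step, the training script writes TensorFlow summaries to a log directory, and TensorBoard reads that directory to plot loss and other metrics. The snippet below is a minimal sketch using the TF 2 summary API; the pattern's own script does this inside its TF-Slim training loop, and the log directory name here is arbitrary.

```python
# Minimal sketch of writing summaries that TensorBoard can visualize.
# The TF 2 summary API and the "logs/art-training" directory are assumptions;
# the pattern's training script handles this in its own loop.
import tensorflow as tf

writer = tf.summary.create_file_writer("logs/art-training")

with writer.as_default():
    for step in range(100):
        # In real training this value comes from the optimizer; here it is faked.
        loss = 1.0 / (step + 1)
        tf.summary.scalar("loss", loss, step=step)
    writer.flush()

# Then launch TensorBoard against the same directory:
#   tensorboard --logdir logs/art-training
```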

Flow

  1. Inspect the available attributes in the Google BigQuery database for the Met art collection
  2. Create the labeled dataset using the selected attribute (see the sketch after this list)
  3. Select a model for image classification from the set of available public models and deploy it to IBM Cloud
  4. Run the training on Kubernetes, optionally using GPUs if available
  5. Save the trained model and logs
  6. Visualize the training with TensorBoard
  7. Load the trained model in Kubernetes and run inference on a new piece of art to see the classification
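
For step 2, the labeled images are typically packed into TFRecord files that the TensorFlow input pipeline reads during training. The sketch below is a minimal, assumed example of writing one image/label pair per record; the file names, label mapping, and feature keys are placeholders rather than the pattern's exact conventions.

```python
# Minimal sketch of packing labeled images into a TFRecord file.
# The file names, label mapping, and feature keys are assumptions for
# illustration; the pattern ships its own dataset-building script.
import tensorflow as tf

# Placeholder mapping from downloaded image file to integer class label.
labeled_images = {"met_12345.jpg": 0, "met_67890.jpg": 1}

with tf.io.TFRecordWriter("art_train.tfrecord") as writer:
    for path, label in labeled_images.items():
        with open(path, "rb") as f:
            encoded = f.read()
        example = tf.train.Example(features=tf.train.Features(feature={
            "image/encoded": tf.train.Feature(bytes_list=tf.train.BytesList(value=[encoded])),
            "image/class/label": tf.train.Feature(int64_list=tf.train.Int64List(value=[label])),
        }))
        writer.write(example.SerializeToString())
```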

Included components

Featured technologies

Blog

Visualizing High Dimension data for Deep Learning

Links
