Merged
Changes from all commits
45 commits
49a5870
1. draft create binary tensor
Knight-X May 11, 2018
7be1355
1. draft create binary tensor
Knight-X May 11, 2018
6a0d958
1. extend createSnippet class to generate the block code for binary…
Knight-X Jun 4, 2018
e5ee094
1. generate weight header to become global constant array
Knight-X Jun 5, 2018
7dbb505
Merge branch 'feature/binary_tensor' of https://github.com/Knight-X/u…
Knight-X Jun 5, 2018
91234da
1. fix the tensor array format bug, which is from different type of
Knight-X Jun 15, 2018
e0ac005
1. implement binary tensor and test passed for now
Knight-X Jun 24, 2018
71e88f6
Merge remote-tracking branch 'up/pytest/ir' into feature/binary_tensor
Knight-X Jun 24, 2018
e8876b8
Merge remote-tracking branch 'up/pytest/ir' into feature/binary_tensor
Knight-X Jun 24, 2018
8de2c62
1. merge pytest branch and modify for inlinetensor (from kwargs to
Knight-X Jun 24, 2018
26452fe
1. add comments for createOperatorSnippet interface
Knight-X Jun 24, 2018
bfb44fb
Merge remote-tracking branch 'up/pytest/ir' into feature/binary_tensor
Knight-X Jun 24, 2018
1e23a7d
1. inline transform test complete
Knight-X Jul 1, 2018
efed4dc
1. fix snippet template error
Knight-X Jul 7, 2018
41f093b
Sketch cli code
dboyliao Jul 20, 2018
ec8a944
cli with click done
dboyliao Jul 20, 2018
383cbf9
Merge branch 'develop' into feature/binary_tensor
Knight-X Jul 23, 2018
d381bd7
Merge pull request #36 from Knight-X/feature/binary_tensor
dboyliao Jul 25, 2018
a60b6cd
Update lock file and requirements.txt
dboyliao Jul 28, 2018
60053e2
Fix conflict with develop
dboyliao Jul 28, 2018
54beff6
Update README.md
dboyliao Jul 28, 2018
1a759c0
Add input_nodes property
dboyliao Jul 29, 2018
021dd26
input output nodes
dboyliao Aug 1, 2018
c42cbfd
move setup input output nodes procedure to base class of transformer
dboyliao Aug 1, 2018
dec26e4
Fix refcnt for binary tensor
dboyliao Aug 5, 2018
dab3ccf
Merge branch 'develop' into click-cli
dboyliao Aug 5, 2018
205220f
Make inline default optimization pass
dboyliao Aug 5, 2018
d31f396
Flexible transform methods parser
dboyliao Aug 5, 2018
985dc6d
Add help string for subcmds
dboyliao Aug 7, 2018
5978aa0
Update example notebook (cnn)
dboyliao Aug 9, 2018
238a0f2
Clear notebook output
dboyliao Aug 9, 2018
a7c5635
Minor files update
dboyliao Aug 10, 2018
0004478
remove old cnn pb file
dboyliao Aug 10, 2018
b7aaee6
Fix minor inline bug
dboyliao Aug 11, 2018
8a29186
Change error type for shallow copy
dboyliao Aug 11, 2018
d3b6508
update pbfile and notebook
dboyliao Aug 11, 2018
9db39ab
Update notebook and pb file (small model)
dboyliao Aug 11, 2018
d3d7279
Update lock file and README.md
dboyliao Aug 13, 2018
8110e38
Fix skip pattern bug (there are node names in TF graph which start wit…
dboyliao Aug 19, 2018
33747a4
Fix template bug
dboyliao Aug 20, 2018
b0d7a14
Fix no inline pass error
dboyliao Aug 22, 2018
c5430b7
Fix snippet bug
dboyliao Aug 22, 2018
ea7c5fc
Add input/output nodes property to uTensorGraph
dboyliao Sep 3, 2018
367735d
Fix transformer bug
dboyliao Sep 3, 2018
ac9cf3e
Merge pull request #40 from uTensor/click-cli
dboyliao Sep 20, 2018
7 changes: 2 additions & 5 deletions Pipfile
@@ -4,15 +4,12 @@ verify_ssl = true
name = "pypi"

[packages]
"jinja2" = "*"
tensorflow = ">=1.6"
numpy = "*"
"idx2numpy" = "*"
"e1839a8" = {path = ".", editable = true}
attrs = "*"

[dev-packages]
pylint = "*"
"flake8" = "*"
pytest = "*"
rope = "*"
pillow = "*"
scipy = "*"
292 changes: 213 additions & 79 deletions Pipfile.lock

Large diffs are not rendered by default.

3 changes: 2 additions & 1 deletion README.md
@@ -52,9 +52,10 @@ Following steps are a general guide for users on how to port a `TensorFlow` proto

1. install `utensor_cgent`
- run `python3 setup.py install`
2. run `utensor-cli graph.pb --output-nodes=NODE,NODE,...`
2. run `utensor-cli convert --output-nodes='NODE,NODE,...' graph.pb`
- run `utensor-cli -h` for help
- the `graph.pb` is the pb file of *original* graph (not quantized)
3. If you want to see what ops/nodes are in the pb file, you can run `utensor-cli show <pbfile>`

# How to test (for Developer)

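As a quick illustration of the CLI workflow above, here is a minimal sketch that drives both subcommands from Python; the pb file name and the output node names are hypothetical placeholders, not taken from this PR:

# Sketch: invoking the click-based CLI from Python (placeholders throughout).
import subprocess

# list the ops/nodes contained in the pb file
subprocess.check_call(["utensor-cli", "show", "graph.pb"])

# generate code for the original (non-quantized) graph
subprocess.check_call([
    "utensor-cli", "convert",
    "--output-nodes=pred,logits",  # hypothetical node names
    "graph.pb",
])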
30 changes: 19 additions & 11 deletions requirements.txt
@@ -1,34 +1,42 @@
absl-py==0.2.2
astor==0.6.2
astor==0.7.1
astroid==1.6.5
atomicwrites==1.1.5
attrs==18.1.0
bleach==1.5.0
backports.functools-lru-cache==1.5
backports.weakref==1.0.post1
click==6.7
configparser==3.5.0
enum34==1.1.6
flake8==3.5.0
funcsigs==1.0.2
futures==3.2.0
gast==0.2.0
grpcio==1.12.1
html5lib==0.9999999
grpcio==1.13.0
idx2numpy==1.2.2
isort==4.3.4
Jinja2==2.10
lazy-object-proxy==1.3.1
Markdown==2.6.11
MarkupSafe==1.0
mccabe==0.6.1
mock==2.0.0
more-itertools==4.2.0
numpy==1.14.3
numpy==1.14.5
pbr==4.1.1
pluggy==0.6.0
protobuf==3.5.2.post1
py==1.5.3
protobuf==3.6.0
py==1.5.4
pycodestyle==2.3.1
pyflakes==1.6.0
pylint==1.9.2
pytest==3.6.1
pytest==3.6.3
rope==0.10.7
singledispatch==3.4.0.3
six==1.11.0
tensorboard==1.8.0
tensorflow==1.8.0
tensorboard==1.9.0
tensorflow==1.9.0
termcolor==1.1.0
-e git+https://github.com/uTensor/utensor_cgen.git@f7ff03eef8653818aa47652f673509daa9b7a8f1#egg=utensor_cgen
-e git+https://github.com/uTensor/utensor_cgen.git@ec8a9444a52a280473ae56d86a73a66d4b188699#egg=utensor_cgen
Werkzeug==0.14.1
wrapt==1.10.11
6 changes: 3 additions & 3 deletions setup.py
@@ -25,14 +25,14 @@
package_data={"utensor_cgen": ["templates/*"]},
entry_points={
"console_scripts": [
"utensor-cli=utensor_cgen.__main__:cli"
"utensor-cli=utensor_cgen.cli:cli"
]},
install_requires=[
'Jinja2',
'tensorflow',
'numpy',
'idx2numpy',
'attrs'
'attrs',
'click'
],
extras_require={
'dev': ['pytest']
1 change: 1 addition & 0 deletions tests/deep_cnn/.gitignore
@@ -0,0 +1 @@
data
4 changes: 4 additions & 0 deletions tests/deep_cnn/cifar/__init__.py
@@ -0,0 +1,4 @@
# -*- coding: utf8 -*-
from __future__ import absolute_import

from ._cifar import *
100 changes: 100 additions & 0 deletions tests/deep_cnn/cifar/_cifar.py
@@ -0,0 +1,100 @@
# -*- coding: utf8 -*-
from __future__ import print_function
from __future__ import absolute_import
import os
import tarfile

import numpy as np
from tensorflow.python.platform import gfile
from tensorflow.python.framework import dtypes
from tensorflow.contrib.learn.python.learn.datasets import base
from .dataset import DataSet, dense_to_one_hot
from .cs231n.data_utils import load_CIFAR10

__all__ = ["read_data_sets", "get_class_names", "onehot_to_names"]

_SOURCE_URL = "http://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz"
_LABELS_MAP = {0: 'plane', 1: 'car', 2: 'bird',
               3: 'cat', 4: 'deer', 5: 'dog',
               6: 'frog', 7: 'horse', 8: 'ship',
               9: 'truck'}


def read_data_sets(work_dir,
                   fake_data=False,
                   one_hot=False,
                   dtype=dtypes.float32,
                   reshape=True,
                   validation_size=None,
                   seed=None):
    if fake_data:
        def fake():
            return DataSet([], [],
                           fake_data=True,
                           image_dims=32*32*3,
                           num_class=10,
                           one_hot=one_hot,
                           dtype=dtype,
                           seed=seed)

        train = fake()
        validation = fake()
        test = fake()
        return base.Datasets(train=train, validation=validation, test=test)

    root_data_dir = os.path.join(work_dir, "cifar-10-batches-py")
    if not os.path.exists(root_data_dir):
        # no data directory found
        # download gz file
        print("Trying to download cifar data (if the tar.gz file is not available)")
        gz_fpath = base.maybe_download("cifar-10-python.tar.gz",
                                       work_dir,
                                       _SOURCE_URL)
        print("Extracting data in {}".format(root_data_dir))
        with tarfile.open(gz_fpath) as tar:
            tar.extractall(work_dir)
    else:
        print("cifar data directory found {}".format(root_data_dir))
    print("loading data...")
    X_train, Y_train, X_test, Y_test = load_CIFAR10(root_data_dir)
    if one_hot:
        num_class_train = len(np.unique(Y_train))
        num_class_test = len(np.unique(Y_test))
        assert num_class_test == num_class_train, \
            "number of classes mismatch: {} and {}".format(num_class_train, num_class_test)
        Y_train = dense_to_one_hot(Y_train, num_class_train)
        Y_test = dense_to_one_hot(Y_test, num_class_test)
    if validation_size is None:
        validation_size = int(X_train.shape[0]/10)
    valid_idx = np.random.choice(range(X_train.shape[0]), validation_size)
    mask = np.array([True if row_idx in valid_idx else False for row_idx in range(X_train.shape[0])])
    X_train, X_valid = X_train[~mask], X_train[mask]
    Y_train, Y_valid = Y_train[~mask], Y_train[mask]

    train_dataset = DataSet(X_train, Y_train,
                            one_hot=one_hot,
                            dtype=dtype,
                            reshape=reshape,
                            seed=seed)
    valid_dataset = DataSet(X_valid, Y_valid,
                            one_hot=one_hot,
                            dtype=dtype,
                            reshape=reshape,
                            seed=seed)
    test_dataset = DataSet(X_test, Y_test,
                           one_hot=one_hot,
                           dtype=dtype,
                           reshape=reshape,
                           seed=seed)
    return base.Datasets(train=train_dataset,
                         validation=valid_dataset,
                         test=test_dataset)


def get_class_names(labels):
    return np.vectorize(_LABELS_MAP.get)(labels)


def onehot_to_names(one_hot):
    labels = np.argmax(one_hot, axis=1)
    return get_class_names(labels)
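For context, a minimal usage sketch for the module above (not part of the diff): it assumes the cifar package is importable, e.g. when running from tests/deep_cnn, and uses "data" as the work directory, matching the .gitignore entry added in this PR.

# Sketch: load CIFAR-10 with one-hot labels and map a batch back to class names.
from cifar import read_data_sets, onehot_to_names

datasets = read_data_sets("data", one_hot=True, reshape=False)
images, labels = datasets.train.next_batch(32)
print(images.shape)             # (32, 32, 32, 3) since reshape=False
print(onehot_to_names(labels))  # class names such as 'cat' or 'ship'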
0 changes: 0 additions & 0 deletions tests/deep_cnn/cifar/cs231n/__init__.py
Empty file.
40 changes: 40 additions & 0 deletions tests/deep_cnn/cifar/cs231n/data_utils.py
@@ -0,0 +1,40 @@
from __future__ import print_function

from six.moves import cPickle as pickle
import numpy as np
import os
from scipy.misc import imread
import platform

def load_pickle(f):
    version = platform.python_version_tuple()
    if version[0] == '2':
        return pickle.load(f)
    elif version[0] == '3':
        return pickle.load(f, encoding='latin1')
    raise ValueError("invalid python version: {}".format(version))

def load_CIFAR_batch(filename):
    """ load single batch of cifar """
    with open(filename, 'rb') as f:
        datadict = load_pickle(f)
        X = datadict['data']
        Y = datadict['labels']
        X = X.reshape(10000, 3, 32, 32).transpose(0,2,3,1).astype("float")
        Y = np.array(Y)
        return X, Y

def load_CIFAR10(ROOT):
    """ load all of cifar """
    xs = []
    ys = []
    for b in range(1,6):
        f = os.path.join(ROOT, 'data_batch_%d' % (b, ))
        X, Y = load_CIFAR_batch(f)
        xs.append(X)
        ys.append(Y)
    Xtr = np.concatenate(xs)
    Ytr = np.concatenate(ys)
    del X, Y
    Xte, Yte = load_CIFAR_batch(os.path.join(ROOT, 'test_batch'))
    return Xtr, Ytr, Xte, Yte
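A short sanity check for the loader above, as a sketch (not part of the diff; the import assumes tests/deep_cnn is the working directory and the archive is already extracted):

# Sketch: the CIFAR-10 python archive holds 5 training batches plus 1 test
# batch of 10000 images each, so the loader should return these shapes.
from cifar.cs231n.data_utils import load_CIFAR10

Xtr, Ytr, Xte, Yte = load_CIFAR10("data/cifar-10-batches-py")
assert Xtr.shape == (50000, 32, 32, 3)
assert Xte.shape == (10000, 32, 32, 3)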
126 changes: 126 additions & 0 deletions tests/deep_cnn/cifar/dataset.py
@@ -0,0 +1,126 @@
# -*- coding: utf8 -*-
# this file is (mostly) adapted from Tensorflow source code
from __future__ import print_function
from functools import reduce
import numpy
from tensorflow.python.framework import dtypes
from tensorflow.python.framework import random_seed

def dense_to_one_hot(labels_dense, num_classes):
    """Convert class labels from scalars to one-hot vectors."""
    num_labels = labels_dense.shape[0]
    index_offset = numpy.arange(num_labels) * num_classes
    labels_one_hot = numpy.zeros((num_labels, num_classes))
    labels_one_hot.flat[index_offset + labels_dense.ravel()] = 1
    return labels_one_hot

class DataSet(object):

    def __init__(self,
                 images,
                 labels,
                 fake_data=False,
                 image_dims=None,
                 num_class=None,
                 one_hot=False,
                 dtype=dtypes.float32,
                 reshape=True,
                 seed=None):
        """Construct a DataSet.
        one_hot arg is used only if fake_data is true. `dtype` can be either
        `uint8` to leave the input as `[0, 255]`, or `float32` to rescale into
        `[0, 1]`. Seed arg provides for convenient deterministic testing.
        """
        seed1, seed2 = random_seed.get_seed(seed)
        # If op level seed is not set, use whatever graph level seed is returned
        numpy.random.seed(seed1 if seed is None else seed2)
        dtype = dtypes.as_dtype(dtype).base_dtype
        if dtype not in (dtypes.uint8, dtypes.float32):
            raise TypeError('Invalid image dtype %r, expected uint8 or float32' %
                            dtype)
        if fake_data:
            self._num_examples = 10000
            self.one_hot = one_hot
            assert image_dims is not None, \
                "must give image_dims if fake_data is True: get {}".format(image_dims)
            self._image_dims = image_dims
            assert num_class is not None, \
                "must give num_class if fake_data is True: get {}".format(num_class)
            self._num_class = num_class
        else:
            assert images.shape[0] == labels.shape[0], (
                'images.shape: %s labels.shape: %s' % (images.shape, labels.shape))
            self._num_examples = images.shape[0]

            # Convert shape from [num examples, rows, columns, depth]
            # to [num examples, rows*columns*depth]
            if reshape:
                images = images.reshape(images.shape[0], -1)
            if dtype == dtypes.float32:
                # Convert from [0, 255] -> [0.0, 1.0].
                images = images.astype(numpy.float32)
                images = numpy.multiply(images, 1.0 / 255.0)
        self._images = images
        self._labels = labels
        self._epochs_completed = 0
        self._index_in_epoch = 0

    @property
    def images(self):
        return self._images

    @property
    def labels(self):
        return self._labels

    @property
    def num_examples(self):
        return self._num_examples

    @property
    def epochs_completed(self):
        return self._epochs_completed

    def next_batch(self, batch_size, fake_data=False, shuffle=True):
        """Return the next `batch_size` examples from this data set."""
        if fake_data:
            fake_image = [1] * self._image_dims
            if self.one_hot:
                fake_label = [1] + [0] * (self._num_class - 1)
            else:
                fake_label = 0
            # range (not xrange) so the fake-data path also works under python 3
            return [fake_image for _ in range(batch_size)], [
                fake_label for _ in range(batch_size)
            ]
        start = self._index_in_epoch
        # Shuffle for the first epoch
        if self._epochs_completed == 0 and start == 0 and shuffle:
            perm0 = numpy.arange(self._num_examples)
            numpy.random.shuffle(perm0)
            self._images = self.images[perm0]
            self._labels = self.labels[perm0]
        # Go to the next epoch
        if start + batch_size > self._num_examples:
            # Finished epoch
            self._epochs_completed += 1
            # Get the rest examples in this epoch
            rest_num_examples = self._num_examples - start
            images_rest_part = self._images[start:self._num_examples]
            labels_rest_part = self._labels[start:self._num_examples]
            # Shuffle the data
            if shuffle:
                perm = numpy.arange(self._num_examples)
                numpy.random.shuffle(perm)
                self._images = self.images[perm]
                self._labels = self.labels[perm]
            # Start next epoch
            start = 0
            self._index_in_epoch = batch_size - rest_num_examples
            end = self._index_in_epoch
            images_new_part = self._images[start:end]
            labels_new_part = self._labels[start:end]
            return numpy.concatenate((images_rest_part, images_new_part), axis=0), \
                numpy.concatenate((labels_rest_part, labels_new_part), axis=0)
        else:
            self._index_in_epoch += batch_size
            end = self._index_in_epoch
            return self._images[start:end], self._labels[start:end]
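To show how dense_to_one_hot and DataSet fit together, a minimal sketch with random arrays (not part of the diff; the sizes are arbitrary and the import assumes tests/deep_cnn is on the path):

# Sketch: build a DataSet from random uint8 images and draw one mini-batch.
import numpy as np
from cifar.dataset import DataSet, dense_to_one_hot

images = np.random.randint(0, 256, size=(100, 32, 32, 3)).astype(np.uint8)
labels = dense_to_one_hot(np.random.randint(0, 10, size=100), num_classes=10)
ds = DataSet(images, labels)         # reshape=True flattens images to (100, 3072)
batch_x, batch_y = ds.next_batch(25)
print(batch_x.shape, batch_y.shape)  # (25, 3072) (25, 10)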
Binary file added tests/deep_cnn/cifar10_cnn.pb
Binary file not shown.
Binary file added tests/deep_cnn/cnn_weights.pkl
Binary file not shown.