
Commit 354d30d (parent: ba61253)

Update version to 0.33.0

28 files changed (+77, -77 lines)


build/build-image.sh (1 addition, 1 deletion)

@@ -19,7 +19,7 @@ set -euo pipefail
 
 ROOT="$(cd "$(dirname "${BASH_SOURCE[0]}")"/.. >/dev/null && pwd)"
 
-CORTEX_VERSION=master
+CORTEX_VERSION=0.33.0
 
 image=$1
 
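Both build scripts resolve the repository root with the same `ROOT="$(cd ...)"` idiom before reading `CORTEX_VERSION`. As context, a minimal self-contained sketch of that idiom (the `repo/build/show-root.sh` layout below is a hypothetical stand-in), showing it yields the repo root regardless of the caller's working directory:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Recreate the layout the build scripts assume: <repo>/build/<script>.sh
demo="$(mktemp -d)"
mkdir -p "$demo/repo/build"
cat > "$demo/repo/build/show-root.sh" <<'EOF'
# Same idiom as the build scripts: resolve the parent of the directory
# containing this script, independent of the caller's $PWD.
ROOT="$(cd "$(dirname "${BASH_SOURCE[0]}")"/.. >/dev/null && pwd)"
echo "$ROOT"
EOF

cd /tmp                                  # run from an unrelated directory
bash "$demo/repo/build/show-root.sh"     # prints ".../repo"
rm -rf "$demo"
```

Because the path is derived from `BASH_SOURCE[0]` rather than `$PWD`, the scripts behave the same whether invoked from the repo root, from `build/`, or from anywhere else.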

build/cli.sh (1 addition, 1 deletion)

@@ -19,7 +19,7 @@ set -euo pipefail
 
 ROOT="$(cd "$(dirname "${BASH_SOURCE[0]}")"/.. >/dev/null && pwd)"
 
-CORTEX_VERSION=master
+CORTEX_VERSION=0.33.0
 
 arg1=${1:-""}
 upload="false"

build/push-image.sh (1 addition, 1 deletion)

@@ -17,7 +17,7 @@
 
 set -euo pipefail
 
-CORTEX_VERSION=master
+CORTEX_VERSION=0.33.0
 
 host=$1
 image=$2

dev/registry.sh (1 addition, 1 deletion)

@@ -14,7 +14,7 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 
-CORTEX_VERSION=master
+CORTEX_VERSION=0.33.0
 
 set -eo pipefail
 
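The scripts touched above all begin with `set -eo pipefail` or `set -euo pipefail`. A small sketch of why `pipefail` matters here: without it, a pipeline's exit status is that of its last command, so a failure earlier in the pipe would slip past `set -e`.

```shell
#!/usr/bin/env bash

# Without pipefail: `false | cat` exits 0 because `cat` (the last stage)
# succeeded, masking the failure of `false`.
bash -c 'false | cat; echo "exit=$?"'                    # prints exit=0

# With pipefail: the pipeline's status is that of the last failing stage,
# so the failure surfaces (and would abort a script run under `set -e`).
bash -c 'set -o pipefail; false | cat; echo "exit=$?"'   # prints exit=1
```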

docs/clients/install.md (4 additions, 4 deletions)

@@ -9,10 +9,10 @@ pip install cortex
 ```
 
 <!-- CORTEX_VERSION_README x2 -->
-To install or upgrade to a specific version (e.g. v0.32.0):
+To install or upgrade to a specific version (e.g. v0.33.0):
 
 ```bash
-pip install cortex==0.32.0
+pip install cortex==0.33.0
 ```
 
 To upgrade to the latest version:

@@ -25,8 +25,8 @@ pip install --upgrade cortex
 
 <!-- CORTEX_VERSION_README x2 -->
 ```bash
-# For example to download CLI version 0.32.0 (Note the "v"):
-bash -c "$(curl -sS https://raw.githubusercontent.com/cortexlabs/cortex/v0.32.0/get-cli.sh)"
+# For example to download CLI version 0.33.0 (Note the "v"):
+bash -c "$(curl -sS https://raw.githubusercontent.com/cortexlabs/cortex/v0.33.0/get-cli.sh)"
 ```
 
 By default, the Cortex CLI is installed at `/usr/local/bin/cortex`. To install the executable elsewhere, export the `CORTEX_INSTALL_PATH` environment variable to your desired location before running the command above.
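The `CORTEX_INSTALL_PATH` override described in that doc combines with the pinned installer like so (a sketch; the destination path below is an arbitrary example, and the installer line is left commented since it downloads over the network):

```shell
# Export the destination before running the installer so get-cli.sh
# installs the v0.33.0 CLI somewhere user-writable instead of /usr/local/bin.
export CORTEX_INSTALL_PATH="$HOME/.local/bin/cortex"

# Then run the pinned installer from the docs above:
# bash -c "$(curl -sS https://raw.githubusercontent.com/cortexlabs/cortex/v0.33.0/get-cli.sh)"

echo "CLI will be installed at: $CORTEX_INSTALL_PATH"
```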

docs/clients/python.md (1 addition, 1 deletion)

@@ -88,7 +88,7 @@ Deploy an API.
 
 **Arguments**:
 
-- `api_spec` - A dictionary defining a single Cortex API. See https://docs.cortex.dev/v/master/ for schema.
+- `api_spec` - A dictionary defining a single Cortex API. See https://docs.cortex.dev/v/0.33/ for schema.
 - `predictor` - A Cortex Predictor class implementation. Not required for TaskAPI/TrafficSplitter kinds.
 - `task` - A callable class/function implementation. Not required for RealtimeAPI/BatchAPI/TrafficSplitter kinds.
 - `requirements` - A list of PyPI dependencies that will be installed before the predictor class implementation is invoked.

docs/clusters/management/create.md (23 additions, 23 deletions)

@@ -96,27 +96,27 @@ The docker images used by the cluster can also be overridden. They can be config
 
 <!-- CORTEX_VERSION_BRANCH_STABLE -->
 ```yaml
-image_operator: quay.io/cortexlabs/operator:master
-image_manager: quay.io/cortexlabs/manager:master
-image_downloader: quay.io/cortexlabs/downloader:master
-image_request_monitor: quay.io/cortexlabs/request-monitor:master
-image_image_async_gateway: quay.io/cortexlabs/async-gateway:master
-image_cluster_autoscaler: quay.io/cortexlabs/cluster-autoscaler:master
-image_metrics_server: quay.io/cortexlabs/metrics-server:master
-image_inferentia: quay.io/cortexlabs/inferentia:master
-image_neuron_rtd: quay.io/cortexlabs/neuron-rtd:master
-image_nvidia: quay.io/cortexlabs/nvidia:master
-image_fluent_bit: quay.io/cortexlabs/fluent-bit:master
-image_istio_proxy: quay.io/cortexlabs/istio-proxy:master
-image_istio_pilot: quay.io/cortexlabs/istio-pilot:master
-image_prometheus: quay.io/cortexlabs/prometheus:master
-image_prometheus_config_reloader: quay.io/cortexlabs/prometheus-config-reloader:master
-image_prometheus_operator: quay.io/cortexlabs/prometheus-operator:master
-image_prometheus_statsd_exporter: quay.io/cortexlabs/prometheus-statsd-exporter:master
-image_prometheus_dcgm_exporter: quay.io/cortexlabs/prometheus-dcgm-exporter:master
-image_prometheus_kube_state_metrics: quay.io/cortexlabs/prometheus-kube-state-metrics:master
-image_prometheus_node_exporter: quay.io/cortexlabs/prometheus-node-exporter:master
-image_kube_rbac_proxy: quay.io/cortexlabs/kube-rbac-proxy:master
-image_grafana: quay.io/cortexlabs/grafana:master
-image_event_exporter: quay.io/cortexlabs/event-exporter:master
+image_operator: quay.io/cortexlabs/operator:0.33.0
+image_manager: quay.io/cortexlabs/manager:0.33.0
+image_downloader: quay.io/cortexlabs/downloader:0.33.0
+image_request_monitor: quay.io/cortexlabs/request-monitor:0.33.0
+image_image_async_gateway: quay.io/cortexlabs/async-gateway:0.33.0
+image_cluster_autoscaler: quay.io/cortexlabs/cluster-autoscaler:0.33.0
+image_metrics_server: quay.io/cortexlabs/metrics-server:0.33.0
+image_inferentia: quay.io/cortexlabs/inferentia:0.33.0
+image_neuron_rtd: quay.io/cortexlabs/neuron-rtd:0.33.0
+image_nvidia: quay.io/cortexlabs/nvidia:0.33.0
+image_fluent_bit: quay.io/cortexlabs/fluent-bit:0.33.0
+image_istio_proxy: quay.io/cortexlabs/istio-proxy:0.33.0
+image_istio_pilot: quay.io/cortexlabs/istio-pilot:0.33.0
+image_prometheus: quay.io/cortexlabs/prometheus:0.33.0
+image_prometheus_config_reloader: quay.io/cortexlabs/prometheus-config-reloader:0.33.0
+image_prometheus_operator: quay.io/cortexlabs/prometheus-operator:0.33.0
+image_prometheus_statsd_exporter: quay.io/cortexlabs/prometheus-statsd-exporter:0.33.0
+image_prometheus_dcgm_exporter: quay.io/cortexlabs/prometheus-dcgm-exporter:0.33.0
+image_prometheus_kube_state_metrics: quay.io/cortexlabs/prometheus-kube-state-metrics:0.33.0
+image_prometheus_node_exporter: quay.io/cortexlabs/prometheus-node-exporter:0.33.0
+image_kube_rbac_proxy: quay.io/cortexlabs/kube-rbac-proxy:0.33.0
+image_grafana: quay.io/cortexlabs/grafana:0.33.0
+image_event_exporter: quay.io/cortexlabs/event-exporter:0.33.0
 ```

docs/workloads/async/configuration.md (4 additions, 4 deletions)

@@ -26,15 +26,15 @@ predictor:
 shell: <string> # relative path to a shell script for system package installation (default: dependencies.sh)
 config: <string: value> # arbitrary dictionary passed to the constructor of the Predictor (optional)
 python_path: <string> # path to the root of your Python folder that will be appended to PYTHONPATH (default: folder containing cortex.yaml)
-image: <string> # docker image to use for the Predictor (default: quay.io/cortexlabs/python-predictor-cpu:master, quay.io/cortexlabs/python-predictor-gpu:master-cuda10.2-cudnn8, or quay.io/cortexlabs/python-predictor-inf:master based on compute)
+image: <string> # docker image to use for the Predictor (default: quay.io/cortexlabs/python-predictor-cpu:0.33.0, quay.io/cortexlabs/python-predictor-gpu:0.33.0-cuda10.2-cudnn8, or quay.io/cortexlabs/python-predictor-inf:0.33.0 based on compute)
 env: <string: string> # dictionary of environment variables
 log_level: <string> # log level that can be "debug", "info", "warning" or "error" (default: "info")
 shm_size: <string> # size of shared memory (/dev/shm) for sharing data between multiple processes, e.g. 64Mi or 1Gi (default: Null)
 ```
 
 ### Tensorflow Predictor
 
-<!-- CORTEX_VERSION_BRANCH_STABLE x3 -->
+<!-- CORTEX_VERSION_BRANCH_STABLE x4 -->
 
 ```yaml
 predictor:

@@ -49,8 +49,8 @@ predictor:
 signature_key: # name of the signature def to use for prediction (required if your model has more than one signature def)
 config: <string: value> # arbitrary dictionary passed to the constructor of the Predictor (optional)
 python_path: <string> # path to the root of your Python folder that will be appended to PYTHONPATH (default: folder containing cortex.yaml)
-image: <string> # docker image to use for the Predictor (default: quay.io/cortexlabs/tensorflow-predictor:master)
-tensorflow_serving_image: <string> # docker image to use for the TensorFlow Serving container (default: quay.io/cortexlabs/tensorflow-serving-cpu:master, quay.io/cortexlabs/tensorflow-serving-gpu:master, or quay.io/cortexlabs/tensorflow-serving-inf:master based on compute)
+image: <string> # docker image to use for the Predictor (default: quay.io/cortexlabs/tensorflow-predictor:0.33.0)
+tensorflow_serving_image: <string> # docker image to use for the TensorFlow Serving container (default: quay.io/cortexlabs/tensorflow-serving-cpu:0.33.0, quay.io/cortexlabs/tensorflow-serving-gpu:0.33.0, or quay.io/cortexlabs/tensorflow-serving-inf:0.33.0 based on compute)
 env: <string: string> # dictionary of environment variables
 log_level: <string> # log level that can be "debug", "info", "warning" or "error" (default: "info")
 shm_size: <string> # size of shared memory (/dev/shm) for sharing data between multiple processes, e.g. 64Mi or 1Gi (default: Null)

docs/workloads/async/predictors.md (1 addition, 1 deletion)

@@ -135,7 +135,7 @@ class TensorFlowPredictor:
 <!-- CORTEX_VERSION_MINOR -->
 
 Cortex provides a `tensorflow_client` to your Predictor's constructor. `tensorflow_client` is an instance
-of [TensorFlowClient](https://github.com/cortexlabs/cortex/tree/master/pkg/cortex/serve/cortex_internal/lib/client/tensorflow.py)
+of [TensorFlowClient](https://github.com/cortexlabs/cortex/tree/0.33/pkg/cortex/serve/cortex_internal/lib/client/tensorflow.py)
 that manages a connection to a TensorFlow Serving container to make predictions using your model. It should be saved as
 an instance variable in your Predictor, and your `predict()` function should call `tensorflow_client.predict()` to make
 an inference with your exported TensorFlow model. Preprocessing of the JSON payload and postprocessing of predictions

docs/workloads/batch/configuration.md (3 additions, 3 deletions)

@@ -19,7 +19,7 @@ predictor:
 path: <string> # path to a python file with a PythonPredictor class definition, relative to the Cortex root (required)
 config: <string: value> # arbitrary dictionary passed to the constructor of the Predictor (can be overridden by config passed in job submission) (optional)
 python_path: <string> # path to the root of your Python folder that will be appended to PYTHONPATH (default: folder containing cortex.yaml)
-image: <string> # docker image to use for the Predictor (default: quay.io/cortexlabs/python-predictor-cpu:master or quay.io/cortexlabs/python-predictor-gpu:master-cuda10.2-cudnn8 based on compute)
+image: <string> # docker image to use for the Predictor (default: quay.io/cortexlabs/python-predictor-cpu:0.33.0 or quay.io/cortexlabs/python-predictor-gpu:0.33.0-cuda10.2-cudnn8 based on compute)
 env: <string: string> # dictionary of environment variables
 log_level: <string> # log level that can be "debug", "info", "warning" or "error" (default: "info")
 shm_size: <string> # size of shared memory (/dev/shm) for sharing data between multiple processes, e.g. 64Mi or 1Gi (default: Null)

@@ -49,8 +49,8 @@ predictor:
 batch_interval: <duration> # the maximum amount of time to spend waiting for additional requests before running inference on the batch of requests
 config: <string: value> # arbitrary dictionary passed to the constructor of the Predictor (can be overridden by config passed in job submission) (optional)
 python_path: <string> # path to the root of your Python folder that will be appended to PYTHONPATH (default: folder containing cortex.yaml)
-image: <string> # docker image to use for the Predictor (default: quay.io/cortexlabs/tensorflow-predictor:master)
-tensorflow_serving_image: <string> # docker image to use for the TensorFlow Serving container (default: quay.io/cortexlabs/tensorflow-serving-cpu:master or quay.io/cortexlabs/tensorflow-serving-gpu:master based on compute)
+image: <string> # docker image to use for the Predictor (default: quay.io/cortexlabs/tensorflow-predictor:0.33.0)
+tensorflow_serving_image: <string> # docker image to use for the TensorFlow Serving container (default: quay.io/cortexlabs/tensorflow-serving-cpu:0.33.0 or quay.io/cortexlabs/tensorflow-serving-gpu:0.33.0 based on compute)
 env: <string: string> # dictionary of environment variables
 log_level: <string> # log level that can be "debug", "info", "warning" or "error" (default: "info")
 shm_size: <string> # size of shared memory (/dev/shm) for sharing data between multiple processes, e.g. 64Mi or 1Gi (default: Null)

0 commit comments
