
Commit 3f05631

Update version to 0.31.0

1 parent 16cebef commit 3f05631

29 files changed: +93 -93 lines changed


build/build-image.sh

Lines changed: 1 addition & 1 deletion
@@ -19,7 +19,7 @@ set -euo pipefail

 ROOT="$(cd "$(dirname "${BASH_SOURCE[0]}")"/.. >/dev/null && pwd)"

-CORTEX_VERSION=master
+CORTEX_VERSION=0.31.0

 image=$1

build/cli.sh

Lines changed: 1 addition & 1 deletion
@@ -19,7 +19,7 @@ set -euo pipefail

 ROOT="$(cd "$(dirname "${BASH_SOURCE[0]}")"/.. >/dev/null && pwd)"

-CORTEX_VERSION=master
+CORTEX_VERSION=0.31.0

 arg1=${1:-""}
 upload="false"

build/push-image.sh

Lines changed: 1 addition & 1 deletion
@@ -17,7 +17,7 @@

 set -euo pipefail

-CORTEX_VERSION=master
+CORTEX_VERSION=0.31.0

 image=$1

dev/registry.sh

Lines changed: 1 addition & 1 deletion
@@ -14,7 +14,7 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.

-CORTEX_VERSION=master
+CORTEX_VERSION=0.31.0

 set -eo pipefail

docs/clients/install.md

Lines changed: 4 additions & 4 deletions
@@ -9,10 +9,10 @@ pip install cortex
 ```

 <!-- CORTEX_VERSION_README x2 -->
-To install or upgrade to a specific version (e.g. v0.30.0):
+To install or upgrade to a specific version (e.g. v0.31.0):

 ```bash
-pip install cortex==0.30.0
+pip install cortex==0.31.0
 ```

 To upgrade to the latest version:

@@ -25,8 +25,8 @@ pip install --upgrade cortex

 <!-- CORTEX_VERSION_README x2 -->
 ```bash
-# For example to download CLI version 0.30.0 (Note the "v"):
-bash -c "$(curl -sS https://raw.githubusercontent.com/cortexlabs/cortex/v0.30.0/get-cli.sh)"
+# For example to download CLI version 0.31.0 (Note the "v"):
+bash -c "$(curl -sS https://raw.githubusercontent.com/cortexlabs/cortex/v0.31.0/get-cli.sh)"
 ```

 By default, the Cortex CLI is installed at `/usr/local/bin/cortex`. To install the executable elsewhere, export the `CORTEX_INSTALL_PATH` environment variable to your desired location before running the command above.
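For instance, a minimal sketch of installing to a custom location (assuming the variable takes the full executable path, mirroring the default `/usr/local/bin/cortex`):

```bash
# install the CLI to $HOME/bin/cortex instead of /usr/local/bin/cortex
export CORTEX_INSTALL_PATH="$HOME/bin/cortex"
bash -c "$(curl -sS https://raw.githubusercontent.com/cortexlabs/cortex/v0.31.0/get-cli.sh)"
```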

docs/clients/python.md

Lines changed: 1 addition & 1 deletion
@@ -89,7 +89,7 @@ Deploy an API.

 **Arguments**:

-- `api_spec` - A dictionary defining a single Cortex API. See https://docs.cortex.dev/v/master/ for schema.
+- `api_spec` - A dictionary defining a single Cortex API. See https://docs.cortex.dev/v/0.31/ for schema.
 - `predictor` - A Cortex Predictor class implementation. Not required for TaskAPI/TrafficSplitter kinds.
 - `task` - A callable class/function implementation. Not required for RealtimeAPI/BatchAPI/TrafficSplitter kinds.
 - `requirements` - A list of PyPI dependencies that will be installed before the predictor class implementation is invoked.
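For context, a minimal sketch of deploying with these arguments (the deploy method is shown here as `create_api`, the Python client's method name in Cortex releases of this era; the environment name and spec contents are illustrative):

```python
import cortex

class PythonPredictor:
    # a Cortex Predictor class implementation
    def __init__(self, config):
        self.greeting = config.get("greeting", "hello")

    def predict(self, payload):
        return {"message": self.greeting}

# a dictionary defining a single Cortex API; see https://docs.cortex.dev/v/0.31/ for the schema
api_spec = {
    "name": "hello-api",
    "kind": "RealtimeAPI",
    "predictor": {"type": "python"},
}

cx = cortex.client("aws")  # assumes a configured environment named "aws"
cx.create_api(api_spec, predictor=PythonPredictor)
```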

docs/clusters/aws/install.md

Lines changed: 22 additions & 22 deletions
@@ -92,26 +92,26 @@ The docker images used by the Cortex cluster can also be overridden, although th

 <!-- CORTEX_VERSION_BRANCH_STABLE -->
 ```yaml
-image_operator: quay.io/cortexlabs/operator:master
-image_manager: quay.io/cortexlabs/manager:master
-image_downloader: quay.io/cortexlabs/downloader:master
-image_request_monitor: quay.io/cortexlabs/request-monitor:master
-image_cluster_autoscaler: quay.io/cortexlabs/cluster-autoscaler:master
-image_metrics_server: quay.io/cortexlabs/metrics-server:master
-image_inferentia: quay.io/cortexlabs/inferentia:master
-image_neuron_rtd: quay.io/cortexlabs/neuron-rtd:master
-image_nvidia: quay.io/cortexlabs/nvidia:master
-image_fluent_bit: quay.io/cortexlabs/fluent-bit:master
-image_istio_proxy: quay.io/cortexlabs/istio-proxy:master
-image_istio_pilot: quay.io/cortexlabs/istio-pilot:master
-image_prometheus: quay.io/cortexlabs/prometheus:master
-image_prometheus_config_reloader: quay.io/cortexlabs/prometheus-config-reloader:master
-image_prometheus_operator: quay.io/cortexlabs/prometheus-operator:master
-image_prometheus_statsd_exporter: quay.io/cortexlabs/prometheus-statsd-exporter:master
-image_prometheus_dcgm_exporter: quay.io/cortexlabs/prometheus-dcgm-exporter:master
-image_prometheus_kube_state_metrics: quay.io/cortexlabs/prometheus-kube-state-metrics:master
-image_prometheus_node_exporter: quay.io/cortexlabs/prometheus-node-exporter:master
-image_kube_rbac_proxy: quay.io/cortexlabs/kube-rbac-proxy:master
-image_grafana: quay.io/cortexlabs/grafana:master
-image_event_exporter: quay.io/cortexlabs/event-exporter:master
+image_operator: quay.io/cortexlabs/operator:0.31.0
+image_manager: quay.io/cortexlabs/manager:0.31.0
+image_downloader: quay.io/cortexlabs/downloader:0.31.0
+image_request_monitor: quay.io/cortexlabs/request-monitor:0.31.0
+image_cluster_autoscaler: quay.io/cortexlabs/cluster-autoscaler:0.31.0
+image_metrics_server: quay.io/cortexlabs/metrics-server:0.31.0
+image_inferentia: quay.io/cortexlabs/inferentia:0.31.0
+image_neuron_rtd: quay.io/cortexlabs/neuron-rtd:0.31.0
+image_nvidia: quay.io/cortexlabs/nvidia:0.31.0
+image_fluent_bit: quay.io/cortexlabs/fluent-bit:0.31.0
+image_istio_proxy: quay.io/cortexlabs/istio-proxy:0.31.0
+image_istio_pilot: quay.io/cortexlabs/istio-pilot:0.31.0
+image_prometheus: quay.io/cortexlabs/prometheus:0.31.0
+image_prometheus_config_reloader: quay.io/cortexlabs/prometheus-config-reloader:0.31.0
+image_prometheus_operator: quay.io/cortexlabs/prometheus-operator:0.31.0
+image_prometheus_statsd_exporter: quay.io/cortexlabs/prometheus-statsd-exporter:0.31.0
+image_prometheus_dcgm_exporter: quay.io/cortexlabs/prometheus-dcgm-exporter:0.31.0
+image_prometheus_kube_state_metrics: quay.io/cortexlabs/prometheus-kube-state-metrics:0.31.0
+image_prometheus_node_exporter: quay.io/cortexlabs/prometheus-node-exporter:0.31.0
+image_kube_rbac_proxy: quay.io/cortexlabs/kube-rbac-proxy:0.31.0
+image_grafana: quay.io/cortexlabs/grafana:0.31.0
+image_event_exporter: quay.io/cortexlabs/event-exporter:0.31.0
 ```

docs/clusters/gcp/install.md

Lines changed: 17 additions & 17 deletions
@@ -71,21 +71,21 @@ The docker images used by the Cortex cluster can also be overridden, although th

 <!-- CORTEX_VERSION_BRANCH_STABLE -->
 ```yaml
-image_operator: quay.io/cortexlabs/operator:master
-image_manager: quay.io/cortexlabs/manager:master
-image_downloader: quay.io/cortexlabs/downloader:master
-image_request_monitor: quay.io/cortexlabs/request-monitor:master
-image_istio_proxy: quay.io/cortexlabs/istio-proxy:master
-image_istio_pilot: quay.io/cortexlabs/istio-pilot:master
-image_google_pause: quay.io/cortexlabs/google-pause:master
-image_prometheus: quay.io/cortexlabs/prometheus:master
-image_prometheus_config_reloader: quay.io/cortexlabs/prometheus-config-reloader:master
-image_prometheus_operator: quay.io/cortexlabs/prometheus-operator:master
-image_prometheus_statsd_exporter: quay.io/cortexlabs/prometheus-statsd-exporter:master
-image_prometheus_dcgm_exporter: quay.io/cortexlabs/prometheus-dcgm-exporter:master
-image_prometheus_kube_state_metrics: quay.io/cortexlabs/prometheus-kube-state-metrics:master
-image_prometheus_node_exporter: quay.io/cortexlabs/prometheus-node-exporter:master
-image_kube_rbac_proxy: quay.io/cortexlabs/kube-rbac-proxy:master
-image_grafana: quay.io/cortexlabs/grafana:master
-image_event_exporter: quay.io/cortexlabs/event-exporter:master
+image_operator: quay.io/cortexlabs/operator:0.31.0
+image_manager: quay.io/cortexlabs/manager:0.31.0
+image_downloader: quay.io/cortexlabs/downloader:0.31.0
+image_request_monitor: quay.io/cortexlabs/request-monitor:0.31.0
+image_istio_proxy: quay.io/cortexlabs/istio-proxy:0.31.0
+image_istio_pilot: quay.io/cortexlabs/istio-pilot:0.31.0
+image_google_pause: quay.io/cortexlabs/google-pause:0.31.0
+image_prometheus: quay.io/cortexlabs/prometheus:0.31.0
+image_prometheus_config_reloader: quay.io/cortexlabs/prometheus-config-reloader:0.31.0
+image_prometheus_operator: quay.io/cortexlabs/prometheus-operator:0.31.0
+image_prometheus_statsd_exporter: quay.io/cortexlabs/prometheus-statsd-exporter:0.31.0
+image_prometheus_dcgm_exporter: quay.io/cortexlabs/prometheus-dcgm-exporter:0.31.0
+image_prometheus_kube_state_metrics: quay.io/cortexlabs/prometheus-kube-state-metrics:0.31.0
+image_prometheus_node_exporter: quay.io/cortexlabs/prometheus-node-exporter:0.31.0
+image_kube_rbac_proxy: quay.io/cortexlabs/kube-rbac-proxy:0.31.0
+image_grafana: quay.io/cortexlabs/grafana:0.31.0
+image_event_exporter: quay.io/cortexlabs/event-exporter:0.31.0
 ```

docs/workloads/batch/configuration.md

Lines changed: 4 additions & 4 deletions
@@ -19,7 +19,7 @@ predictor:
   path: <string> # path to a python file with a PythonPredictor class definition, relative to the Cortex root (required)
   config: <string: value> # arbitrary dictionary passed to the constructor of the Predictor (can be overridden by config passed in job submission) (optional)
   python_path: <string> # path to the root of your Python folder that will be appended to PYTHONPATH (default: folder containing cortex.yaml)
-  image: <string> # docker image to use for the Predictor (default: quay.io/cortexlabs/python-predictor-cpu:master or quay.io/cortexlabs/python-predictor-gpu:master-cuda10.2-cudnn8 based on compute)
+  image: <string> # docker image to use for the Predictor (default: quay.io/cortexlabs/python-predictor-cpu:0.31.0 or quay.io/cortexlabs/python-predictor-gpu:0.31.0-cuda10.2-cudnn8 based on compute)
   env: <string: string> # dictionary of environment variables
   log_level: <string> # log level that can be "debug", "info", "warning" or "error" (default: "info")
   shm_size: <string> # size of shared memory (/dev/shm) for sharing data between multiple processes, e.g. 64Mi or 1Gi (default: Null)

@@ -49,8 +49,8 @@ predictor:
   batch_interval: <duration> # the maximum amount of time to spend waiting for additional requests before running inference on the batch of requests
   config: <string: value> # arbitrary dictionary passed to the constructor of the Predictor (can be overridden by config passed in job submission) (optional)
   python_path: <string> # path to the root of your Python folder that will be appended to PYTHONPATH (default: folder containing cortex.yaml)
-  image: <string> # docker image to use for the Predictor (default: quay.io/cortexlabs/tensorflow-predictor:master)
-  tensorflow_serving_image: <string> # docker image to use for the TensorFlow Serving container (default: quay.io/cortexlabs/tensorflow-serving-cpu:master or quay.io/cortexlabs/tensorflow-serving-gpu:master based on compute)
+  image: <string> # docker image to use for the Predictor (default: quay.io/cortexlabs/tensorflow-predictor:0.31.0)
+  tensorflow_serving_image: <string> # docker image to use for the TensorFlow Serving container (default: quay.io/cortexlabs/tensorflow-serving-cpu:0.31.0 or quay.io/cortexlabs/tensorflow-serving-gpu:0.31.0 based on compute)
   env: <string: string> # dictionary of environment variables
   log_level: <string> # log level that can be "debug", "info", "warning" or "error" (default: "info")
   shm_size: <string> # size of shared memory (/dev/shm) for sharing data between multiple processes, e.g. 64Mi or 1Gi (default: Null)

@@ -75,7 +75,7 @@ predictor:
   ...
   config: <string: value> # arbitrary dictionary passed to the constructor of the Predictor (can be overridden by config passed in job submission) (optional)
   python_path: <string> # path to the root of your Python folder that will be appended to PYTHONPATH (default: folder containing cortex.yaml)
-  image: <string> # docker image to use for the Predictor (default: quay.io/cortexlabs/onnx-predictor-cpu:master or quay.io/cortexlabs/onnx-predictor-gpu:master based on compute)
+  image: <string> # docker image to use for the Predictor (default: quay.io/cortexlabs/onnx-predictor-cpu:0.31.0 or quay.io/cortexlabs/onnx-predictor-gpu:0.31.0 based on compute)
   env: <string: string> # dictionary of environment variables
   log_level: <string> # log level that can be "debug", "info", "warning" or "error" (default: "info")
   shm_size: <string> # size of shared memory (/dev/shm) for sharing data between multiple processes, e.g. 64Mi or 1Gi (default: Null)
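To make the schema concrete, here is a hypothetical minimal BatchAPI spec that pins the default CPU predictor image explicitly (the API name and predictor path are illustrative):

```yaml
# cortex.yaml - a hypothetical minimal batch API spec
- name: my-batch-api
  kind: BatchAPI
  predictor:
    type: python
    path: predictor.py
    image: quay.io/cortexlabs/python-predictor-cpu:0.31.0
```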

docs/workloads/batch/predictors.md

Lines changed: 2 additions & 2 deletions
@@ -143,7 +143,7 @@ class TensorFlowPredictor:
 ```

 <!-- CORTEX_VERSION_MINOR -->
-Cortex provides a `tensorflow_client` to your Predictor's constructor. `tensorflow_client` is an instance of [TensorFlowClient](https://github.com/cortexlabs/cortex/tree/master/pkg/cortex/serve/cortex_internal/lib/client/tensorflow.py) that manages a connection to a TensorFlow Serving container to make predictions using your model. It should be saved as an instance variable in your Predictor, and your `predict()` function should call `tensorflow_client.predict()` to make an inference with your exported TensorFlow model. Preprocessing of the JSON payload and postprocessing of predictions can be implemented in your `predict()` function as well.
+Cortex provides a `tensorflow_client` to your Predictor's constructor. `tensorflow_client` is an instance of [TensorFlowClient](https://github.com/cortexlabs/cortex/tree/0.31/pkg/cortex/serve/cortex_internal/lib/client/tensorflow.py) that manages a connection to a TensorFlow Serving container to make predictions using your model. It should be saved as an instance variable in your Predictor, and your `predict()` function should call `tensorflow_client.predict()` to make an inference with your exported TensorFlow model. Preprocessing of the JSON payload and postprocessing of predictions can be implemented in your `predict()` function as well.

 When multiple models are defined using the Predictor's `models` field, the `tensorflow_client.predict()` method expects a second argument `model_name` which must hold the name of the model that you want to use for inference (for example: `self.client.predict(payload, "text-generator")`).

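A minimal sketch of this pattern (abridged from the full class template referenced above; batch predictors may receive additional constructor and `predict()` arguments, and the model name is illustrative):

```python
class TensorFlowPredictor:
    def __init__(self, tensorflow_client, config):
        # save the client; it manages the connection to the TensorFlow Serving container
        self.client = tensorflow_client

    def predict(self, payload):
        # preprocessing of the JSON payload could happen here
        # the second argument is only needed when multiple models are defined
        return self.client.predict(payload, "text-generator")
```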
@@ -204,7 +204,7 @@ class ONNXPredictor:
 ```

 <!-- CORTEX_VERSION_MINOR -->
-Cortex provides an `onnx_client` to your Predictor's constructor. `onnx_client` is an instance of [ONNXClient](https://github.com/cortexlabs/cortex/tree/master/pkg/cortex/serve/cortex_internal/lib/client/onnx.py) that manages an ONNX Runtime session to make predictions using your model. It should be saved as an instance variable in your Predictor, and your `predict()` function should call `onnx_client.predict()` to make an inference with your exported ONNX model. Preprocessing of the JSON payload and postprocessing of predictions can be implemented in your `predict()` function as well.
+Cortex provides an `onnx_client` to your Predictor's constructor. `onnx_client` is an instance of [ONNXClient](https://github.com/cortexlabs/cortex/tree/0.31/pkg/cortex/serve/cortex_internal/lib/client/onnx.py) that manages an ONNX Runtime session to make predictions using your model. It should be saved as an instance variable in your Predictor, and your `predict()` function should call `onnx_client.predict()` to make an inference with your exported ONNX model. Preprocessing of the JSON payload and postprocessing of predictions can be implemented in your `predict()` function as well.

 When multiple models are defined using the Predictor's `models` field, the `onnx_client.predict()` method expects a second argument `model_name` which must hold the name of the model that you want to use for inference (for example: `self.client.predict(model_input, "text-generator")`).

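And the equivalent sketch for ONNX (same caveats as the TensorFlow example above):

```python
class ONNXPredictor:
    def __init__(self, onnx_client, config):
        # save the client; it manages the ONNX Runtime session
        self.client = onnx_client

    def predict(self, payload):
        model_input = payload  # preprocessing of the JSON payload could go here
        # the second argument is only needed when multiple models are defined
        return self.client.predict(model_input, "text-generator")
```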