Commit bc7afd8

Update version to 0.25.0
1 parent ef6dbc6 commit bc7afd8

26 files changed (+75, -77 lines)

build/build-image.sh (1 addition, 1 deletion)

```diff
@@ -19,7 +19,7 @@ set -euo pipefail
 
 ROOT="$(cd "$(dirname "${BASH_SOURCE[0]}")"/.. >/dev/null && pwd)"
 
-CORTEX_VERSION=master
+CORTEX_VERSION=0.25.0
 
 image=$1
 dir="${ROOT}/images/${image/-slim}"
```

build/cli.sh (1 addition, 1 deletion)

```diff
@@ -19,7 +19,7 @@ set -euo pipefail
 
 ROOT="$(cd "$(dirname "${BASH_SOURCE[0]}")"/.. >/dev/null && pwd)"
 
-CORTEX_VERSION=master
+CORTEX_VERSION=0.25.0
 
 arg1=${1:-""}
 upload="false"
```

build/push-image.sh (1 addition, 1 deletion)

```diff
@@ -17,7 +17,7 @@
 
 set -euo pipefail
 
-CORTEX_VERSION=master
+CORTEX_VERSION=0.25.0
 
 image=$1
 
```

docs/clients/install.md (4 additions, 4 deletions)

````diff
@@ -9,10 +9,10 @@ pip install cortex
 ```
 
 <!-- CORTEX_VERSION_README x2 -->
-To install or upgrade to a specific version (e.g. v0.24.1):
+To install or upgrade to a specific version (e.g. v0.25.0):
 
 ```bash
-pip install cortex==0.24.1
+pip install cortex==0.25.0
 ```
 
 To upgrade to the latest version:
@@ -25,8 +25,8 @@ pip install --upgrade cortex
 
 <!-- CORTEX_VERSION_README x2 -->
 ```bash
-# For example to download CLI version 0.24.1 (Note the "v"):
-$ bash -c "$(curl -sS https://raw.githubusercontent.com/cortexlabs/cortex/v0.24.1/get-cli.sh)"
+# For example to download CLI version 0.25.0 (Note the "v"):
+$ bash -c "$(curl -sS https://raw.githubusercontent.com/cortexlabs/cortex/v0.25.0/get-cli.sh)"
 ```
 
 By default, the Cortex CLI is installed at `/usr/local/bin/cortex`. To install the executable elsewhere, export the `CORTEX_INSTALL_PATH` environment variable to your desired location before running the command above.
````

docs/clients/python.md (1 addition, 1 deletion)

```diff
@@ -114,7 +114,7 @@ Deploy an API.
 
 **Arguments**:
 
-- `api_spec` - A dictionary defining a single Cortex API. See https://docs.cortex.dev/v/master/ for schema.
+- `api_spec` - A dictionary defining a single Cortex API. See https://docs.cortex.dev/v/0.25/ for schema.
 - `predictor` - A Cortex Predictor class implementation. Not required when deploying a traffic splitter.
 - `requirements` - A list of PyPI dependencies that will be installed before the predictor class implementation is invoked.
 - `conda_packages` - A list of Conda dependencies that will be installed before the predictor class implementation is invoked.
```
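
The argument list above can be illustrated with a minimal sketch. The `Predictor` class and `api_spec` dictionary below are hypothetical examples, not the official schema (see the versioned docs link in the diff for that), and the deploy call itself is left commented out because it requires the `cortex` package and a running cluster:

```python
# Minimal Predictor-style class (illustrative only; the real interface is
# defined by Cortex's Predictor documentation).
class Predictor:
    def __init__(self, config):
        # `config` would come from the api_spec's predictor config section
        self.greeting = config.get("greeting", "hello")

    def predict(self, payload):
        return f"{self.greeting}, {payload}"


# Hypothetical api_spec; field names here are illustrative.
api_spec = {
    "name": "text-generator",
    "kind": "RealtimeAPI",
    "predictor": {
        "type": "python",
        "config": {"greeting": "hi"},
    },
}

# Deploying would look roughly like this (requires a cluster; invocation
# details are an assumption, not confirmed by this diff):
#   import cortex
#   client = cortex.client()
#   client.deploy(api_spec=api_spec, predictor=Predictor)
```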

docs/clusters/aws/install.md (13 additions, 13 deletions)

````diff
@@ -81,17 +81,17 @@ The docker images used by the Cortex cluster can also be overridden, although th
 
 <!-- CORTEX_VERSION_BRANCH_STABLE -->
 ```yaml
-image_operator: quay.io/cortexlabs/operator:master
-image_manager: quay.io/cortexlabs/manager:master
-image_downloader: quay.io/cortexlabs/downloader:master
-image_request_monitor: quay.io/cortexlabs/request-monitor:master
-image_cluster_autoscaler: quay.io/cortexlabs/cluster-autoscaler:master
-image_metrics_server: quay.io/cortexlabs/metrics-server:master
-image_inferentia: quay.io/cortexlabs/inferentia:master
-image_neuron_rtd: quay.io/cortexlabs/neuron-rtd:master
-image_nvidia: quay.io/cortexlabs/nvidia:master
-image_fluentd: quay.io/cortexlabs/fluentd:master
-image_statsd: quay.io/cortexlabs/statsd:master
-image_istio_proxy: quay.io/cortexlabs/istio-proxy:master
-image_istio_pilot: quay.io/cortexlabs/istio-pilot:master
+image_operator: quay.io/cortexlabs/operator:0.25.0
+image_manager: quay.io/cortexlabs/manager:0.25.0
+image_downloader: quay.io/cortexlabs/downloader:0.25.0
+image_request_monitor: quay.io/cortexlabs/request-monitor:0.25.0
+image_cluster_autoscaler: quay.io/cortexlabs/cluster-autoscaler:0.25.0
+image_metrics_server: quay.io/cortexlabs/metrics-server:0.25.0
+image_inferentia: quay.io/cortexlabs/inferentia:0.25.0
+image_neuron_rtd: quay.io/cortexlabs/neuron-rtd:0.25.0
+image_nvidia: quay.io/cortexlabs/nvidia:0.25.0
+image_fluentd: quay.io/cortexlabs/fluentd:0.25.0
+image_statsd: quay.io/cortexlabs/statsd:0.25.0
+image_istio_proxy: quay.io/cortexlabs/istio-proxy:0.25.0
+image_istio_pilot: quay.io/cortexlabs/istio-pilot:0.25.0
 ```
````

docs/clusters/aws/update.md (0 additions, 2 deletions)

````diff
@@ -8,8 +8,6 @@ cortex cluster configure # or: cortex cluster configure --config cluster.yaml
 
 ## Upgrade to a newer version of Cortex
 
-<!-- CORTEX_VERSION_MINOR -->
-
 ```bash
 # spin down your cluster
 cortex cluster down
````

docs/clusters/gcp/install.md (7 additions, 7 deletions)

````diff
@@ -45,11 +45,11 @@ The docker images used by the Cortex cluster can also be overridden, although th
 
 <!-- CORTEX_VERSION_BRANCH_STABLE -->
 ```yaml
-image_operator: quay.io/cortexlabs/operator:master
-image_manager: quay.io/cortexlabs/manager:master
-image_downloader: quay.io/cortexlabs/downloader:master
-image_statsd: quay.io/cortexlabs/statsd:master
-image_istio_proxy: quay.io/cortexlabs/istio-proxy:master
-image_istio_pilot: quay.io/cortexlabs/istio-pilot:master
-image_pause: quay.io/cortexlabs/pause:master
+image_operator: quay.io/cortexlabs/operator:0.25.0
+image_manager: quay.io/cortexlabs/manager:0.25.0
+image_downloader: quay.io/cortexlabs/downloader:0.25.0
+image_statsd: quay.io/cortexlabs/statsd:0.25.0
+image_istio_proxy: quay.io/cortexlabs/istio-proxy:0.25.0
+image_istio_pilot: quay.io/cortexlabs/istio-pilot:0.25.0
+image_pause: quay.io/cortexlabs/pause:0.25.0
 ```
````

docs/workloads/batch/configuration.md (4 additions, 4 deletions)

```diff
@@ -11,7 +11,7 @@
 path: <string> # path to a python file with a PythonPredictor class definition, relative to the Cortex root (required)
 config: <string: value> # arbitrary dictionary passed to the constructor of the Predictor (can be overridden by config passed in job submission) (optional)
 python_path: <string> # path to the root of your Python folder that will be appended to PYTHONPATH (default: folder containing cortex.yaml)
-image: <string> # docker image to use for the Predictor (default: quay.io/cortexlabs/python-predictor-cpu:master or quay.io/cortexlabs/python-predictor-gpu:master based on compute)
+image: <string> # docker image to use for the Predictor (default: quay.io/cortexlabs/python-predictor-cpu:0.25.0 or quay.io/cortexlabs/python-predictor-gpu:0.25.0 based on compute)
 env: <string: string> # dictionary of environment variables
 networking:
 endpoint: <string> # the endpoint for the API (default: <api_name>)
@@ -45,8 +45,8 @@
 batch_interval: <duration> # the maximum amount of time to spend waiting for additional requests before running inference on the batch of requests
 config: <string: value> # arbitrary dictionary passed to the constructor of the Predictor (can be overridden by config passed in job submission) (optional)
 python_path: <string> # path to the root of your Python folder that will be appended to PYTHONPATH (default: folder containing cortex.yaml)
-image: <string> # docker image to use for the Predictor (default: quay.io/cortexlabs/tensorflow-predictor:master)
-tensorflow_serving_image: <string> # docker image to use for the TensorFlow Serving container (default: quay.io/cortexlabs/tensorflow-serving-gpu:master or quay.io/cortexlabs/tensorflow-serving-cpu:master based on compute)
+image: <string> # docker image to use for the Predictor (default: quay.io/cortexlabs/tensorflow-predictor:0.25.0)
+tensorflow_serving_image: <string> # docker image to use for the TensorFlow Serving container (default: quay.io/cortexlabs/tensorflow-serving-gpu:0.25.0 or quay.io/cortexlabs/tensorflow-serving-cpu:0.25.0 based on compute)
 env: <string: string> # dictionary of environment variables
 networking:
 endpoint: <string> # the endpoint for the API (default: <api_name>)
@@ -75,7 +75,7 @@
 ...
 config: <string: value> # arbitrary dictionary passed to the constructor of the Predictor (can be overridden by config passed in job submission) (optional)
 python_path: <string> # path to the root of your Python folder that will be appended to PYTHONPATH (default: folder containing cortex.yaml)
-image: <string> # docker image to use for the Predictor (default: quay.io/cortexlabs/onnx-predictor-gpu:master or quay.io/cortexlabs/onnx-predictor-cpu:master based on compute)
+image: <string> # docker image to use for the Predictor (default: quay.io/cortexlabs/onnx-predictor-gpu:0.25.0 or quay.io/cortexlabs/onnx-predictor-cpu:0.25.0 based on compute)
 env: <string: string> # dictionary of environment variables
 networking:
 endpoint: <string> # the endpoint for the API (default: <api_name>)
```
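
Read together, the fields above compose into a spec along these lines. This is a minimal sketch, not a verified example: the API name, file path, and config values are illustrative, and only the default image tag (pinned by this commit) is taken from the diff:

```yaml
- name: example-batch-api             # hypothetical API name
  kind: BatchAPI
  predictor:
    type: python
    path: predictor.py                # file containing a PythonPredictor class
    config: {greeting: hello}         # illustrative; passed to the Predictor constructor
    image: quay.io/cortexlabs/python-predictor-cpu:0.25.0  # default shown above
```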

docs/workloads/batch/predictors.md (5 additions, 5 deletions)

````diff
@@ -161,7 +161,7 @@ torchvision==0.6.1
 ```
 
 <!-- CORTEX_VERSION_MINOR x3 -->
-The pre-installed system packages are listed in [images/python-predictor-cpu/Dockerfile](https://github.com/cortexlabs/cortex/tree/master/images/python-predictor-cpu/Dockerfile) (for CPU), [images/python-predictor-gpu/Dockerfile](https://github.com/cortexlabs/cortex/tree/master/images/python-predictor-gpu/Dockerfile) (for GPU), or [images/python-predictor-inf/Dockerfile](https://github.com/cortexlabs/cortex/tree/master/images/python-predictor-inf/Dockerfile) (for Inferentia).
+The pre-installed system packages are listed in [images/python-predictor-cpu/Dockerfile](https://github.com/cortexlabs/cortex/tree/0.25/images/python-predictor-cpu/Dockerfile) (for CPU), [images/python-predictor-gpu/Dockerfile](https://github.com/cortexlabs/cortex/tree/0.25/images/python-predictor-gpu/Dockerfile) (for GPU), or [images/python-predictor-inf/Dockerfile](https://github.com/cortexlabs/cortex/tree/0.25/images/python-predictor-inf/Dockerfile) (for Inferentia).
 
 ## TensorFlow Predictor
 
@@ -216,7 +216,7 @@ class TensorFlowPredictor:
 ```
 
 <!-- CORTEX_VERSION_MINOR -->
-Cortex provides a `tensorflow_client` to your Predictor's constructor. `tensorflow_client` is an instance of [TensorFlowClient](https://github.com/cortexlabs/cortex/tree/master/pkg/cortex/serve/cortex_internal/lib/client/tensorflow.py) that manages a connection to a TensorFlow Serving container to make predictions using your model. It should be saved as an instance variable in your Predictor, and your `predict()` function should call `tensorflow_client.predict()` to make an inference with your exported TensorFlow model. Preprocessing of the JSON payload and postprocessing of predictions can be implemented in your `predict()` function as well.
+Cortex provides a `tensorflow_client` to your Predictor's constructor. `tensorflow_client` is an instance of [TensorFlowClient](https://github.com/cortexlabs/cortex/tree/0.25/pkg/cortex/serve/cortex_internal/lib/client/tensorflow.py) that manages a connection to a TensorFlow Serving container to make predictions using your model. It should be saved as an instance variable in your Predictor, and your `predict()` function should call `tensorflow_client.predict()` to make an inference with your exported TensorFlow model. Preprocessing of the JSON payload and postprocessing of predictions can be implemented in your `predict()` function as well.
 
 When multiple models are defined using the Predictor's `models` field, the `tensorflow_client.predict()` method expects a second argument `model_name` which must hold the name of the model that you want to use for inference (for example: `self.client.predict(payload, "text-generator")`).
 
@@ -242,7 +242,7 @@ tensorflow==2.3.0
 ```
 
 <!-- CORTEX_VERSION_MINOR -->
-The pre-installed system packages are listed in [images/tensorflow-predictor/Dockerfile](https://github.com/cortexlabs/cortex/tree/master/images/tensorflow-predictor/Dockerfile).
+The pre-installed system packages are listed in [images/tensorflow-predictor/Dockerfile](https://github.com/cortexlabs/cortex/tree/0.25/images/tensorflow-predictor/Dockerfile).
 
 ## ONNX Predictor
 
@@ -297,7 +297,7 @@ class ONNXPredictor:
 ```
 
 <!-- CORTEX_VERSION_MINOR -->
-Cortex provides an `onnx_client` to your Predictor's constructor. `onnx_client` is an instance of [ONNXClient](https://github.com/cortexlabs/cortex/tree/master/pkg/cortex/serve/cortex_internal/lib/client/onnx.py) that manages an ONNX Runtime session to make predictions using your model. It should be saved as an instance variable in your Predictor, and your `predict()` function should call `onnx_client.predict()` to make an inference with your exported ONNX model. Preprocessing of the JSON payload and postprocessing of predictions can be implemented in your `predict()` function as well.
+Cortex provides an `onnx_client` to your Predictor's constructor. `onnx_client` is an instance of [ONNXClient](https://github.com/cortexlabs/cortex/tree/0.25/pkg/cortex/serve/cortex_internal/lib/client/onnx.py) that manages an ONNX Runtime session to make predictions using your model. It should be saved as an instance variable in your Predictor, and your `predict()` function should call `onnx_client.predict()` to make an inference with your exported ONNX model. Preprocessing of the JSON payload and postprocessing of predictions can be implemented in your `predict()` function as well.
 
 When multiple models are defined using the Predictor's `models` field, the `onnx_client.predict()` method expects a second argument `model_name` which must hold the name of the model that you want to use for inference (for example: `self.client.predict(model_input, "text-generator")`).
 
@@ -320,4 +320,4 @@ requests==2.24.0
 ```
 
 <!-- CORTEX_VERSION_MINOR x2 -->
-The pre-installed system packages are listed in [images/onnx-predictor-cpu/Dockerfile](https://github.com/cortexlabs/cortex/tree/master/images/onnx-predictor-cpu/Dockerfile) (for CPU) or [images/onnx-predictor-gpu/Dockerfile](https://github.com/cortexlabs/cortex/tree/master/images/onnx-predictor-gpu/Dockerfile) (for GPU).
+The pre-installed system packages are listed in [images/onnx-predictor-cpu/Dockerfile](https://github.com/cortexlabs/cortex/tree/0.25/images/onnx-predictor-cpu/Dockerfile) (for CPU) or [images/onnx-predictor-gpu/Dockerfile](https://github.com/cortexlabs/cortex/tree/0.25/images/onnx-predictor-gpu/Dockerfile) (for GPU).
````
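
The multi-model calling convention in the predictor docs above (`client.predict(payload, model_name)`) can be sketched with a stand-in client. `FakeModelClient` and its models are purely illustrative substitutes for the real `TensorFlowClient`/`ONNXClient`, which run actual model inference rather than plain callables:

```python
class FakeModelClient:
    """Illustrative stand-in for TensorFlowClient/ONNXClient: it mimics the
    calling convention only, backing each model with a plain callable."""

    def __init__(self, models):
        self.models = models  # model name -> callable

    def predict(self, payload, model_name=None):
        # With a single model the name can be omitted; with multiple models
        # the second argument selects which one runs the inference.
        if model_name is None:
            if len(self.models) != 1:
                raise ValueError("model_name is required when multiple models are defined")
            model_name = next(iter(self.models))
        return self.models[model_name](payload)


client = FakeModelClient({
    "text-generator": lambda payload: payload + "...",
    "echo": lambda payload: payload,
})
print(client.predict("once upon a time", "text-generator"))  # -> once upon a time...
```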
