`docs/workloads/batch/configuration.md` (4 additions, 4 deletions)

```diff
@@ -11,7 +11,7 @@
 path: <string> # path to a python file with a PythonPredictor class definition, relative to the Cortex root (required)
 config: <string: value> # arbitrary dictionary passed to the constructor of the Predictor (can be overridden by config passed in job submission) (optional)
 python_path: <string> # path to the root of your Python folder that will be appended to PYTHONPATH (default: folder containing cortex.yaml)
-image: <string> # docker image to use for the Predictor (default: quay.io/cortexlabs/python-predictor-cpu:0.24.0 or quay.io/cortexlabs/python-predictor-gpu:0.24.0 based on compute)
+image: <string> # docker image to use for the Predictor (default: quay.io/cortexlabs/python-predictor-cpu:0.24.1 or quay.io/cortexlabs/python-predictor-gpu:0.24.1 based on compute)
 env: <string: string> # dictionary of environment variables
 networking:
   endpoint: <string> # the endpoint for the API (default: <api_name>)
@@ -44,8 +44,8 @@
 batch_interval: <duration> # the maximum amount of time to spend waiting for additional requests before running inference on the batch of requests
 config: <string: value> # arbitrary dictionary passed to the constructor of the Predictor (can be overridden by config passed in job submission) (optional)
 python_path: <string> # path to the root of your Python folder that will be appended to PYTHONPATH (default: folder containing cortex.yaml)
-image: <string> # docker image to use for the Predictor (default: quay.io/cortexlabs/tensorflow-predictor:0.24.0)
-tensorflow_serving_image: <string> # docker image to use for the TensorFlow Serving container (default: quay.io/cortexlabs/tensorflow-serving-gpu:0.24.0 or quay.io/cortexlabs/tensorflow-serving-cpu:0.24.0 based on compute)
+image: <string> # docker image to use for the Predictor (default: quay.io/cortexlabs/tensorflow-predictor:0.24.1)
+tensorflow_serving_image: <string> # docker image to use for the TensorFlow Serving container (default: quay.io/cortexlabs/tensorflow-serving-gpu:0.24.1 or quay.io/cortexlabs/tensorflow-serving-cpu:0.24.1 based on compute)
 env: <string: string> # dictionary of environment variables
 networking:
   endpoint: <string> # the endpoint for the API (default: <api_name>)
@@ -74,7 +74,7 @@
 ...
 config: <string: value> # arbitrary dictionary passed to the constructor of the Predictor (can be overridden by config passed in job submission) (optional)
 python_path: <string> # path to the root of your Python folder that will be appended to PYTHONPATH (default: folder containing cortex.yaml)
-image: <string> # docker image to use for the Predictor (default: quay.io/cortexlabs/onnx-predictor-gpu:0.24.0 or quay.io/cortexlabs/onnx-predictor-cpu:0.24.0 based on compute)
+image: <string> # docker image to use for the Predictor (default: quay.io/cortexlabs/onnx-predictor-gpu:0.24.1 or quay.io/cortexlabs/onnx-predictor-cpu:0.24.1 based on compute)
 env: <string: string> # dictionary of environment variables
 networking:
   endpoint: <string> # the endpoint for the API (default: <api_name>)
```
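To illustrate how these fields fit together, here is a minimal sketch of a batch API configuration that overrides the default predictor image (the API name and `path` value are illustrative, not from the source):

```yaml
# cortex.yaml — hypothetical batch API pinning the bumped predictor image
- name: my-batch-api          # illustrative name
  kind: BatchAPI
  predictor:
    type: python
    path: predictor.py        # illustrative path to the PythonPredictor definition
    image: quay.io/cortexlabs/python-predictor-cpu:0.24.1
```

If `image` is omitted, Cortex falls back to the version-matched default listed in the diff above.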
`docs/workloads/realtime/configuration.md` (4 additions, 4 deletions)

```diff
@@ -22,7 +22,7 @@
 threads_per_process: <int> # the number of threads per process (default: 1)
 config: <string: value> # arbitrary dictionary passed to the constructor of the Predictor (optional)
 python_path: <string> # path to the root of your Python folder that will be appended to PYTHONPATH (default: folder containing cortex.yaml)
-image: <string> # docker image to use for the Predictor (default: quay.io/cortexlabs/python-predictor-cpu:0.24.0 or quay.io/cortexlabs/python-predictor-gpu:0.24.0 based on compute)
+image: <string> # docker image to use for the Predictor (default: quay.io/cortexlabs/python-predictor-cpu:0.24.1 or quay.io/cortexlabs/python-predictor-gpu:0.24.1 based on compute)
 env: <string: string> # dictionary of environment variables
 networking:
   endpoint: <string> # the endpoint for the API (aws only) (default: <api_name>)
@@ -82,8 +82,8 @@
 threads_per_process: <int> # the number of threads per process (default: 1)
 config: <string: value> # arbitrary dictionary passed to the constructor of the Predictor (optional)
 python_path: <string> # path to the root of your Python folder that will be appended to PYTHONPATH (default: folder containing cortex.yaml)
-image: <string> # docker image to use for the Predictor (default: quay.io/cortexlabs/tensorflow-predictor:0.24.0)
-tensorflow_serving_image: <string> # docker image to use for the TensorFlow Serving container (default: quay.io/cortexlabs/tensorflow-serving-gpu:0.24.0 or quay.io/cortexlabs/tensorflow-serving-cpu:0.24.0 based on compute)
+image: <string> # docker image to use for the Predictor (default: quay.io/cortexlabs/tensorflow-predictor:0.24.1)
+tensorflow_serving_image: <string> # docker image to use for the TensorFlow Serving container (default: quay.io/cortexlabs/tensorflow-serving-gpu:0.24.1 or quay.io/cortexlabs/tensorflow-serving-cpu:0.24.1 based on compute)
 env: <string: string> # dictionary of environment variables
 networking:
   endpoint: <string> # the endpoint for the API (aws only) (default: <api_name>)
@@ -138,7 +138,7 @@
 threads_per_process: <int> # the number of threads per process (default: 1)
 config: <string: value> # arbitrary dictionary passed to the constructor of the Predictor (optional)
 python_path: <string> # path to the root of your Python folder that will be appended to PYTHONPATH (default: folder containing cortex.yaml)
-image: <string> # docker image to use for the Predictor (default: quay.io/cortexlabs/onnx-predictor-gpu:0.24.0 or quay.io/cortexlabs/onnx-predictor-cpu:0.24.0 based on compute)
+image: <string> # docker image to use for the Predictor (default: quay.io/cortexlabs/onnx-predictor-gpu:0.24.1 or quay.io/cortexlabs/onnx-predictor-cpu:0.24.1 based on compute)
 env: <string: string> # dictionary of environment variables
 networking:
   endpoint: <string> # the endpoint for the API (aws only) (default: <api_name>)
```
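The `path` field in these configurations points at a Python file defining a `PythonPredictor` class. As a rough sketch of what such a file contains (the realtime form; the field names and threshold logic here are illustrative assumptions, not from the source):

```python
# predictor.py — hypothetical minimal PythonPredictor for a realtime API
class PythonPredictor:
    def __init__(self, config):
        # `config` is the arbitrary dictionary from the API configuration's
        # `config:` field (an illustrative "threshold" key is assumed here)
        self.threshold = config.get("threshold", 0.5)

    def predict(self, payload):
        # `payload` is the request body; the return value becomes the response
        score = float(payload["score"])
        return {"label": "positive" if score >= self.threshold else "negative"}
```

The constructor runs once per replica at startup, which is why per-API settings are passed through `config` rather than re-read on every request.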
Cortex's base Docker images are listed below. Depending on the Cortex Predictor and compute type specified in your API configuration, choose one of these images to use as the base for your Docker image:
Note: the images listed above use the `-slim` suffix; Cortex's default API images are not `-slim`, since they have additional dependencies installed to cover common use cases. If you are building your own Docker image, starting with a `-slim` Predictor image will result in a smaller image size.
```diff
@@ -69,7 +69,7 @@ The sample Dockerfile below inherits from Cortex's Python CPU serving image, and
 ```dockerfile
 # Dockerfile

-FROM quay.io/cortexlabs/python-predictor-cpu-slim:0.24.0
+FROM quay.io/cortexlabs/python-predictor-cpu-slim:0.24.1
```
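Once an image is built from a Dockerfile like the sample above and pushed to a registry, the API configuration's `image` field can reference it instead of the default. A hedged sketch (the registry, image name, and tag are placeholders, not from the source):

```yaml
  predictor:
    type: python
    path: predictor.py                       # illustrative
    image: <registry>/my-python-predictor:latest  # custom image built from the -slim base
```

Pinning an explicit tag rather than `latest` is generally safer, so that replicas do not silently pick up a different image on restart.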