34 changes: 0 additions & 34 deletions .github/workflows/kfp-integration-tests.yaml

This file was deleted.

5 changes: 0 additions & 5 deletions CONTRIBUTING.md
@@ -39,11 +39,6 @@ lintrunner init
lintrunner -a
```

## Integration Tests

See the [KFP integration test](scripts/kfpint.py) file for more details on setup
and running them.

## License
By contributing to TorchX, you agree that your contributions will be licensed
under the LICENSE file in the root directory of this source tree.
3 changes: 0 additions & 3 deletions README.md
@@ -55,9 +55,6 @@ pip install torchx
# install torchx sdk and CLI -- all dependencies
pip install "torchx[dev]"

# install torchx kubeflow pipelines (kfp) support
pip install "torchx[kfp]"

# install torchx Kubernetes / Volcano support
pip install "torchx[kubernetes]"

4 changes: 0 additions & 4 deletions dev-requirements.txt
@@ -12,9 +12,6 @@ google-cloud-logging==3.10.0
google-cloud-runtimeconfig==0.34.0
hydra-core
ipython
kfp==1.8.22
# pin protobuf to the version that is required by kfp
protobuf==3.20.3
mlflow-skinny
moto~=5.0.8
pyre-extensions
@@ -45,4 +42,3 @@ grpcio==1.62.1
grpcio-status==1.48.1
googleapis-common-protos==1.63.0
google-api-core==2.18.0
protobuf==3.20.3 # kfp==1.8.22 needs protobuf < 4
35 changes: 3 additions & 32 deletions docs/source/basics.rst
@@ -14,8 +14,7 @@ The top level modules in TorchX are:
4. :mod:`torchx.cli`: CLI tool
5. :mod:`torchx.runner`: given an app spec, submits the app as a job on a scheduler
6. :mod:`torchx.schedulers`: backend job schedulers that the runner supports
7. :mod:`torchx.pipelines`: adapters that convert the given app spec to a "stage" in an ML pipeline platform
8. :mod:`torchx.runtime`: util and abstraction libraries you can use in authoring apps (not app spec)
7. :mod:`torchx.runtime`: util and abstraction libraries you can use in authoring apps (not app spec)

Below is a UML diagram

Expand All @@ -32,8 +31,7 @@ the actual application. In scheduler lingo, this is a ``JobDefinition`` and a
similar concept in Kubernetes is the ``spec.yaml``. To disambiguate between the
application binary (logic) and the spec, we typically refer to a TorchX
``AppDef`` as an "app spec" or ``specs.AppDef``. It
is the common interface understood by ``torchx.runner``
and ``torchx.pipelines`` allowing you to run your app as a standalone job
is the common interface understood by ``torchx.runner`` allowing you to run your app as a standalone job
or as a stage in an ML pipeline.

Below is a simple example of a ``specs.AppDef`` that echoes "hello world"
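
That example is collapsed in this diff view. A minimal sketch of such an app spec, assuming a generic ``alpine:latest`` image and an ``echo`` entrypoint rather than the exact contents of the collapsed example, might look like:

```python
# Hedged sketch, not the collapsed example from basics.rst: a minimal
# specs.AppDef with a single role that runs `echo "hello world"`.
from torchx import specs

echo_app = specs.AppDef(
    name="echo",
    roles=[
        specs.Role(
            name="echo",
            image="alpine:latest",  # assumed image; the real example may differ
            entrypoint="echo",
            args=["hello world"],
            num_replicas=1,
        )
    ],
)
```
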
@@ -119,10 +117,6 @@ can be achieved through python function composition rather than object composition
However **we do not recommend component composition** for maintainability
purposes.
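
As a rough illustration of what function composition of components means here (both component functions below are hypothetical, written only for this sketch, and are not taken from the TorchX components library):

```python
# Hypothetical components used only to illustrate function composition;
# neither echo() nor echo_twice() is claimed to ship with TorchX.
from torchx import specs


def echo(msg: str = "hello world") -> specs.AppDef:
    return specs.AppDef(
        name="echo",
        roles=[
            specs.Role(
                name="echo",
                image="alpine:latest",  # assumed image
                entrypoint="echo",
                args=[msg],
            )
        ],
    )


def echo_twice(msg: str = "hello world") -> specs.AppDef:
    # Compose by calling the other component function and adjusting its spec.
    app = echo(msg)
    app.roles[0].args = [msg, msg]
    return app
```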

**PROTIP 2:** To define dependencies between components, use a pipelining DSL.
See :ref:`basics:Pipeline Adapters` section below to understand how TorchX components
are used in the context of pipelines.

Before authoring your own component, browse through the library of
:ref:`Components` that are included with TorchX
to see if one fits your needs.
@@ -141,34 +135,11 @@ There are two ways to access runners in TorchX:
See :ref:`Schedulers` for a list of schedulers that the runner can
launch apps to.
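
The two access paths themselves are collapsed in this view. A sketch of the programmatic path, where the scheduler name and the toy app spec are assumptions rather than content from the collapsed text, might look like:

```python
# Hedged sketch of submitting an app spec through the TorchX runner;
# the app spec and the local_docker scheduler choice are assumptions.
from torchx import specs
from torchx.runner import get_runner

echo_app = specs.AppDef(
    name="echo",
    roles=[
        specs.Role(
            name="echo",
            image="alpine:latest",
            entrypoint="echo",
            args=["hello world"],
        )
    ],
)

runner = get_runner()
handle = runner.run(echo_app, scheduler="local_docker")  # submit as a standalone job
print(runner.status(handle))
```

The CLI path would be along the lines of ``torchx run --scheduler local_docker utils.echo --msg "hello world"``.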

Pipeline Adapters
~~~~~~~~~~~~~~~~~~~~~~
While runners launch components as standalone jobs, ``torchx.pipelines``
makes it possible to plug components into an ML pipeline/workflow. For a
specific target pipeline platform (e.g. kubeflow pipelines), TorchX
defines an adapter that converts a TorchX app spec to whatever the
"stage" representation is in the target platform. For instance,
the ``torchx.pipelines.kfp`` adapter for kubeflow pipelines converts an
app spec to a ``kfp.ContainerOp`` (or more accurately, a kfp "component spec" yaml).
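
As a concrete sketch of that conversion, written against the ``torchx.pipelines.kfp`` API this PR removes (the app spec and output file name below are assumptions):

```python
# Hedged sketch of the kfp adapter usage described above, based on the
# kfp v1 + torchx.pipelines.kfp API that this PR removes.
import kfp
from torchx import specs
from torchx.pipelines.kfp.adapter import container_from_app

echo_app = specs.AppDef(
    name="echo",
    roles=[
        specs.Role(
            name="echo",
            image="alpine:latest",
            entrypoint="echo",
            args=["hello world"],
        )
    ],
)


def echo_pipeline() -> None:
    # Converts the TorchX app spec into a kfp ContainerOp (a pipeline "stage").
    container_from_app(echo_app)


kfp.compiler.Compiler().compile(echo_pipeline, "echo_pipeline.yaml")
```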


In most cases an app spec would map to a "stage" (or node) in a pipeline.
However, advanced components, especially those that have a mini control flow
of their own (e.g. HPO), may map to a "sub-pipeline" or an "inline-pipeline".
The exact semantics of how these advanced components map to the pipeline
is dependent on the target pipeline platform. For example, if the
pipeline DSL allows dynamically adding stages to a pipeline from an upstream
stage, then TorchX may take advantage of such a feature to "inline" the
sub-pipeline to the main pipeline. TorchX generally tries its best to adapt
app specs to the **most canonical** representation in the target pipeline platform.

See :ref:`Pipelines` for a list of supported pipeline platforms.

Runtime
~~~~~~~~
.. important:: ``torchx.runtime`` is by no means a requirement to use TorchX.
If your infrastructure is fixed and you don't need your application
to be portable across different types of schedulers and pipelines,
to be portable across different types of schedulers,
you can skip this section.

Your application (not the app spec, but the actual app binary) has **ZERO** dependencies
3 changes: 1 addition & 2 deletions docs/source/conf.py
@@ -341,7 +341,7 @@ def handle_item(fieldarg, content):
code_url = f"https://github.com/pytorch/torchx/archive/refs/heads/{notebook_version}.tar.gz"

first_notebook_cell = f"""
!pip install torchx[kfp]
!pip install torchx
!wget --no-clobber {code_url}
!tar xf {notebook_version}.tar.gz --strip-components=1

@@ -351,7 +351,6 @@ def handle_item(fieldarg, content):
sphinx_gallery_conf = {
    "examples_dirs": [
        "../../torchx/examples/apps",
        "../../torchx/examples/pipelines",
    ],
    "gallery_dirs": [
        "examples_apps",
12 changes: 0 additions & 12 deletions docs/source/index.rst
@@ -4,8 +4,6 @@ TorchX
==================

TorchX is a universal job launcher for PyTorch applications.
TorchX is designed to have fast iteration time for training/research and support
for E2E production ML pipelines when you're ready.

**GETTING STARTED?** Follow the :ref:`quickstart guide<quickstart:Quickstart>`.

@@ -91,14 +89,6 @@ Works With

schedulers/fb/*

.. _Pipelines:
.. toctree::
:maxdepth: 1
:caption: Pipelines

pipelines/kfp
pipelines/airflow.md

.. fbcode::

.. toctree::
@@ -116,7 +106,6 @@ Examples
:caption: Examples

examples_apps/index
examples_pipelines/index


Components Library
@@ -165,7 +154,6 @@ Reference
runner
schedulers
workspace
pipelines

.. toctree::
:maxdepth: 1
15 changes: 0 additions & 15 deletions docs/source/pipelines.rst

This file was deleted.

104 changes: 0 additions & 104 deletions docs/source/pipelines/airflow.md

This file was deleted.

24 changes: 0 additions & 24 deletions docs/source/pipelines/kfp.rst

This file was deleted.

Binary file removed docs/source/pipelines/kfp_diagram.jpg
Empty file.
1 change: 0 additions & 1 deletion docs/source/quickstart.md
@@ -236,5 +236,4 @@ The `slurm` and `local_cwd` use the current environment so you can use `pip` and
1. Checkout other features of the [torchx CLI](cli.rst)
2. Take a look at the [list of schedulers](schedulers.rst) supported by the runner
3. Browse through the collection of [builtin components](components/overview.rst)
4. See which [ML pipeline platforms](pipelines.rst) you can run components on
5. See a [training app example](examples_apps/index.rst)