I have tabular data that is quite large (a few GB). I have been using Databricks and PySpark for data exploration, analysis, and training. PySpark ML pipelines (from pyspark.ml import Pipeline) are similar to scikit-learn pipelines but leverage Spark's distributed processing capabilities.
The issue I am facing is that I can't save a model trained with this pipeline in a format loadable by joblib, FIL, or any other Triton-supported backend.
How efficiently does Triton's Python backend run PySpark pipelines?