LLVM ERROR: out of memory #368

@sandeepb2013

Description

root@2ff024ed2346:/opt/tritonserver/tmp/simple-xgboost# python3 sample.py
Test Accuracy: 51.24
/usr/local/lib/python3.10/dist-packages/xgboost/core.py:160: UserWarning: [09:16:55] WARNING: /workspace/src/c_api/c_api.cc:1240: Saving into deprecated binary model format, please consider using json or ubj. Model format will default to JSON in XGBoost 2.2 if not specified.
warnings.warn(smsg, UserWarning)
root@2ff024ed2346:/opt/tritonserver/tmp/simple-xgboost# WARNING: [Torch-TensorRT] - Unable to read CUDA capable devices. Return status: 35
I1030 09:17:00.890915 1358 libtorch.cc:2507] TRITONBACKEND_Initialize: pytorch
I1030 09:17:00.892801 1358 libtorch.cc:2517] Triton TRITONBACKEND API version: 1.15
I1030 09:17:00.893583 1358 libtorch.cc:2523] 'pytorch' TRITONBACKEND API version: 1.15
W1030 09:17:00.895411 1358 pinned_memory_manager.cc:237] Unable to allocate pinned system memory, pinned memory pool will not be available: CUDA driver version is insufficient for CUDA runtime version
I1030 09:17:00.896514 1358 cuda_memory_manager.cc:117] CUDA memory pool disabled
I1030 09:17:00.933129 1358 model_lifecycle.cc:462] loading: fil:1
I1030 09:17:00.947223 1358 initialize.hpp:43] TRITONBACKEND_Initialize: fil
I1030 09:17:00.948097 1358 backend.hpp:47] Triton TRITONBACKEND API version: 1.15
I1030 09:17:00.948809 1358 backend.hpp:52] 'fil' TRITONBACKEND API version: 1.15
I1030 09:17:00.950459 1358 model_initialize.hpp:37] TRITONBACKEND_ModelInitialize: fil (version 1)
I1030 09:17:00.988559 1358 instance_initialize.hpp:46] TRITONBACKEND_ModelInstanceInitialize: fil_0_0 (CPU device 0)
LLVM ERROR: out of memory
