@@ -34,7 +34,7 @@ Below are some notable libraries available in Conan Center Index. These libraries
 from running large language models locally to optimizing model inference on edge devices
 or using specialized toolkits for tasks like computer vision and numerical optimization.
 
-#### LLaMA.cpp
+#### [LLaMA.cpp](https://conan.io/center/recipes/llama-cpp)
 
 **LLaMA.cpp** is a C/C++ implementation of [Meta’s LLaMA models](https://www.llama.com/)
 and others, enabling local inference with minimal dependencies and high performance. It
@@ -100,7 +100,7 @@ integrate LLMs into your own applications. For example, here is the code for the
 we just executed. For more information on the LLaMA.cpp project, please [check their
 repository on GitHub](https://github.com/ggerganov/llama.cpp).
 
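The post walks through a complete program; purely for orientation, a stripped-down sketch of the C API's setup and teardown is shown below. Treat it as a sketch under assumptions: llama.cpp's API shifts between releases (newer tags rename `llama_load_model_from_file` and `llama_new_context_with_model`, for example), and the GGUF path is a placeholder.

```cpp
#include <cstdio>
#include "llama.h"

int main() {
    // Initialize the llama.cpp backend once per process.
    llama_backend_init();

    // Load a quantized GGUF model from disk (the path is a placeholder).
    llama_model_params model_params = llama_model_default_params();
    llama_model * model = llama_load_model_from_file("model.gguf", model_params);
    if (model == nullptr) {
        fprintf(stderr, "failed to load model\n");
        return 1;
    }

    // Create an inference context; generation then proceeds by tokenizing
    // a prompt and feeding batches to llama_decode().
    llama_context_params ctx_params = llama_context_default_params();
    llama_context * ctx = llama_new_context_with_model(model, ctx_params);

    // ... tokenize, decode, and sample here ...

    llama_free(ctx);
    llama_free_model(model);
    llama_backend_free();
    return 0;
}
```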
-#### TensorFlow Lite
+#### [TensorFlow Lite](https://conan.io/center/recipes/tensorflow-lite)
 
 **TensorFlow Lite** is a specialized version of [TensorFlow](https://www.tensorflow.org/)
 designed for deploying machine learning models on mobile, embedded systems, and other
@@ -128,7 +128,7 @@ on platforms like [Kaggle Models](https://www.kaggle.com/models) for various tasks
 can be easily integrated into your code. For more information on TensorFlow Lite, please
 [check their documentation](https://www.tensorflow.org/lite/guide).
 
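To give a feel for the C++ side, here is a minimal sketch of loading a `.tflite` model and running a single inference. The model path and the float input/output tensors are assumptions for illustration; a real model dictates its own shapes and types.

```cpp
#include <cstdio>
#include <memory>

#include "tensorflow/lite/interpreter.h"
#include "tensorflow/lite/kernels/register.h"
#include "tensorflow/lite/model.h"

int main() {
    // Load a FlatBuffer model from disk (the path is a placeholder).
    auto model = tflite::FlatBufferModel::BuildFromFile("model.tflite");
    if (!model) {
        fprintf(stderr, "failed to load model\n");
        return 1;
    }

    // Build an interpreter that resolves the built-in operators.
    tflite::ops::builtin::BuiltinOpResolver resolver;
    std::unique_ptr<tflite::Interpreter> interpreter;
    tflite::InterpreterBuilder(*model, resolver)(&interpreter);

    // Allocate tensor buffers, fill the first input, and run inference.
    interpreter->AllocateTensors();
    float* input = interpreter->typed_input_tensor<float>(0);  // assumes a float input
    input[0] = 1.0f;
    interpreter->Invoke();

    // Read back the first output value (assumes a float output tensor).
    float* output = interpreter->typed_output_tensor<float>(0);
    printf("output[0] = %f\n", output[0]);
    return 0;
}
```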
-#### ONNX Runtime
+#### [ONNX Runtime](https://conan.io/center/recipes/onnxruntime)
 
 **ONNX Runtime** is a high-performance inference engine designed to run models in the
 [ONNX](https://onnx.ai/) format, an open standard for representing neural network models across
@@ -150,7 +150,13 @@ runtime configurations or hardware accelerators. Explore [the Performance section
 documentation](https://onnxruntime.ai/docs/performance/) for more details. For more
 information, visit the [ONNX Runtime documentation](https://onnxruntime.ai/docs/).
 
-#### OpenVINO
+Check all available versions in the Conan Center Index by running:
+
+```shell
+conan search onnxruntime
+```
+
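Before moving on, a minimal sketch of what running an ONNX model looks like with the C++ API. The model path, the 1x4 float input, and the tensor names `input`/`output` are assumptions for illustration and must match the actual model (note that on Windows the session constructor takes a wide-character path).

```cpp
#include <array>
#include <cstdint>
#include <cstdio>

#include <onnxruntime_cxx_api.h>

int main() {
    // Create the runtime environment and load a model (path is a placeholder).
    Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "demo");
    Ort::SessionOptions options;
    Ort::Session session(env, "model.onnx", options);

    // Wrap caller-owned input data in a tensor (a 1x4 float input is assumed).
    std::array<float, 4> input_data{0.f, 1.f, 2.f, 3.f};
    std::array<int64_t, 2> shape{1, 4};
    auto memory_info = Ort::MemoryInfo::CreateCpu(OrtArenaAllocator, OrtMemTypeDefault);
    Ort::Value input = Ort::Value::CreateTensor<float>(
        memory_info, input_data.data(), input_data.size(), shape.data(), shape.size());

    // Run inference; the input/output names used here are assumed, not universal.
    const char* input_names[] = {"input"};
    const char* output_names[] = {"output"};
    auto outputs = session.Run(Ort::RunOptions{nullptr},
                               input_names, &input, 1, output_names, 1);

    printf("output[0] = %f\n", outputs[0].GetTensorData<float>()[0]);
    return 0;
}
```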
+#### [OpenVINO](https://conan.io/center/recipes/openvino)
 
 **OpenVINO** (Open Visual Inference and Neural Network Optimization) is an
 [Intel-developed toolkit](https://docs.openvino.ai/) that accelerates deep learning
@@ -165,7 +171,13 @@ examples to see how you can integrate OpenVINO into your projects.
 
 For more details, visit the [OpenVINO documentation](https://docs.openvino.ai/2024/).
 
-#### mlpack
+Check all available versions in the Conan Center Index by running:
+
+```shell
+conan search openvino
+```
+
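A minimal sketch of OpenVINO's C++ inference flow follows; it assumes a single-input, single-output model in IR format at a placeholder path, with float tensors.

```cpp
#include <cstdio>

#include <openvino/openvino.hpp>

int main() {
    // Read a model; IR (.xml/.bin), ONNX, and other formats are supported.
    ov::Core core;
    auto model = core.read_model("model.xml");  // path is a placeholder

    // Compile the model for a target device ("CPU", "GPU", ...).
    ov::CompiledModel compiled = core.compile_model(model, "CPU");

    // Create an inference request, fill the input tensor, and run.
    ov::InferRequest request = compiled.create_infer_request();
    ov::Tensor input = request.get_input_tensor();  // assumes a single input
    input.data<float>()[0] = 1.0f;                  // assumes a float input

    request.infer();

    // Read the first output value (assumes a float output).
    ov::Tensor output = request.get_output_tensor();
    printf("output[0] = %f\n", output.data<float>()[0]);
    return 0;
}
```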
+#### [mlpack](https://conan.io/center/recipes/mlpack)
 
 **mlpack** is a fast, flexible, and lightweight header-only C++ library for machine
 learning. It is ideal for lightweight deployments and prototyping. It offers a broad range
@@ -180,7 +192,13 @@ healthcare data.
 
 For further details, visit the [mlpack documentation](https://www.mlpack.org/).
 
-#### Dlib
+Check all available versions in the Conan Center Index by running:
+
+```shell
+conan search mlpack
+```
+
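Since mlpack builds on Armadillo, complete examples stay short. Here is a minimal sketch of k-means clustering; the toy data and the choice of two clusters are made up for illustration.

```cpp
#include <mlpack.hpp>

int main() {
    // mlpack uses Armadillo matrices with one column per data point:
    // six 2-D points forming two loose clusters.
    arma::mat data = {{1.0, 1.2, 0.8, 8.0, 8.3, 7.9},
                      {1.0, 0.9, 1.1, 8.0, 7.8, 8.2}};

    // Partition the points into two clusters with k-means.
    arma::Row<size_t> assignments;
    mlpack::KMeans<> kmeans;
    kmeans.Cluster(data, 2, assignments);

    // Print which cluster each point was assigned to.
    assignments.print("cluster assignments:");
    return 0;
}
```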
+#### [Dlib](https://conan.io/center/recipes/dlib)
 
 **Dlib** is a modern C++ library widely used in research and industry for advanced machine
 learning algorithms and computer vision tasks. Its comprehensive documentation and
@@ -192,6 +210,12 @@ object classification, and tracking. Examples of these functionalities can be found
 
 For more information, visit the [Dlib official site](http://dlib.net/).
 
+Check all available versions in the Conan Center Index by running:
+
+```shell
+conan search dlib
+```
+
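As a minimal sketch of one such computer-vision task, the snippet below runs Dlib's bundled HOG-based frontal face detector on an image. The image path is a placeholder, and loading JPEG/PNG files requires Dlib to have been built with the corresponding support.

```cpp
#include <iostream>
#include <vector>

#include <dlib/image_io.h>
#include <dlib/image_processing/frontal_face_detector.h>

int main() {
    // Dlib ships a pre-trained HOG-based frontal face detector.
    dlib::frontal_face_detector detector = dlib::get_frontal_face_detector();

    // Load a grayscale image from disk (the path is a placeholder).
    dlib::array2d<unsigned char> img;
    dlib::load_image(img, "photo.jpg");

    // Run the detector; each rectangle is one detected face.
    std::vector<dlib::rectangle> faces = detector(img);
    std::cout << "faces found: " << faces.size() << "\n";
    return 0;
}
```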
 ## Conclusion
 
 C++ offers high-performance AI libraries and the flexibility to optimize for your