---
layout: post
comments: false
title: "Yes, You Can Do AI with C++!"
meta_title: "C++ AI Libraries Available in Conan Center Index"
description: "Learn how to implement AI and Machine Learning in C++ using libraries like TensorFlow Lite, Dlib, and ONNX Runtime, all available in Conan Center Index. Discover why C++ is a powerful choice for AI development."
keywords: "C++, AI, Machine Learning, TensorFlow Lite, Dlib, ONNX Runtime, Conan Center Index"
---

When thinking about Artificial Intelligence and Machine Learning, languages like Python
often come to mind. However, **C++ is a powerful choice for developing AI and ML
applications**, especially when performance and resource efficiency are critical. At
[Conan Center](https://conan.io/center), you can find a variety of libraries that enable
AI and ML development in C++. In this post, we briefly highlight some of the most
relevant ones to help you get started with AI development in C++ easily.

### Why Use C++ for AI and Machine Learning?

C++ offers several advantages for AI and ML development:

- **Performance**: C++ provides high execution speed and efficient resource management,
  making it ideal for computationally intensive tasks.
- **Low-Level Optimizations**: C++ lets developers use multiple compilation options and
  optimize libraries directly from the source code. This provides precise control over
  memory usage, inference processes, and hardware features like SIMD and CUDA, allowing
  custom optimizations for specific hardware capabilities.

In summary, C++ can be an excellent choice for working with AI. Let's explore some of the
most representative AI libraries available in Conan Center Index.

### An Overview of Some AI and ML Libraries Available in Conan Center

Below are some notable libraries available in Conan Center Index. They range from running
large language models locally, to optimizing model inference on edge devices, to
specialized toolkits for tasks like computer vision and numerical optimization.

#### [LLaMA.cpp](https://conan.io/center/recipes/llama-cpp)

**LLaMA.cpp** is a C/C++ implementation of [Meta’s LLaMA models](https://www.llama.com/)
and others, enabling local inference with minimal dependencies and high performance. It
works on CPUs and GPUs, supports diverse architectures, and accommodates a variety of text
models like [LLaMA 3](https://huggingface.co/models?search=llama),
[Mistral](https://mistral.ai/), or [Phi](https://azure.microsoft.com/en-us/products/phi),
as well as multimodal models like [LLaVA](https://github.com/haotian-liu/LLaVA).

One of the most interesting aspects of this library is that it includes a collection of
CLI tools as examples, making it easy to run your own LLMs straight out of the box.

Let's try one of those tools. First, install the library with Conan, making sure to
enable building the examples and to activate the network options (which require
`libcurl`). Then, use a [Conan deployer](https://docs.conan.io/2/reference/extensions/deployers.html)
to copy the installed files from the Conan cache to the user space. To accomplish this,
simply run the following command:

```shell
# Install llama-cpp using Conan and deploy to the local folder
$ conan install --requires=llama-cpp/b4079 --build=missing \
    -o="llama-cpp/*:with_examples=True" \
    -o="llama-cpp/*:with_curl=True" \
    --deployer=direct_deploy
```

You can run your chatbot locally by invoking the packaged `llama-cli` application with a
model from a Hugging Face repository. In this example, we will use a Llama 3.2 model with
1 billion parameters and 6-bit quantization from the [unsloth
repository](https://huggingface.co/unsloth).

Now, simply run the following command to start asking questions:

```shell
# Run llama-cli downloading a Hugging Face model
$ ./direct_deploy/llama-cpp/bin/llama-cli \
    --hf-repo unsloth/Llama-3.2-1B-Instruct-GGUF \
    --hf-file Llama-3.2-1B-Instruct-Q6_K.gguf \
    -p "What is the meaning to life and the universe?\n"
```

Let’s check out our LLM’s perspective:

```text
What is the meaning to life and the universe?

The meaning to life and the universe is a subject of endless
debate among philosophers, theologians, scientists, and everyday
people. But what if I told you that there is a simple
yet profound truth that can help you find meaning and purpose
in life? It's not a complex theory or a scientific formula.
It's something that can be discovered by simply observing the
world around us.

Here's the truth: **every moment is a
new opportunity to create meaning and purpose.**
...
```

As you can see, in just a few minutes we can have our own LLM running locally, all using
C++. You can also use the libraries provided by the **llama-cpp** Conan package to
integrate LLMs into your own applications. For example, here is the code for the
[llama-cli](https://github.com/ggerganov/llama.cpp/blob/b4079/examples/main/main.cpp) tool
that we just executed. For more information on the LLaMA.cpp project, please [check their
repository on GitHub](https://github.com/ggerganov/llama.cpp).
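
To give an idea of what that integration looks like, here is a minimal sketch of loading
a GGUF model with the llama.cpp C API. It assumes the b4079-era API and a placeholder
model path; the API evolves between tags, so check the `llama.h` header of the exact
version you install:

```cpp
// Minimal sketch: load a GGUF model with the llama.cpp C API (b4079-era
// names assumed; "model.gguf" is a placeholder path).
#include <cstdio>
#include "llama.h"

int main() {
    llama_backend_init();

    // Load the model weights from disk.
    llama_model_params mparams = llama_model_default_params();
    llama_model* model = llama_load_model_from_file("model.gguf", mparams);
    if (model == nullptr) return 1;

    // Create an inference context; from here you would tokenize a prompt,
    // call llama_decode() in a loop, and sample tokens.
    llama_context_params cparams = llama_context_default_params();
    llama_context* ctx = llama_new_context_with_model(model, cparams);

    std::printf("context size: %u tokens\n", llama_n_ctx(ctx));

    llama_free(ctx);
    llama_free_model(model);
    llama_backend_free();
}
```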

#### [TensorFlow Lite](https://conan.io/center/recipes/tensorflow-lite)

**TensorFlow Lite** is a specialized version of [TensorFlow](https://www.tensorflow.org/)
designed for deploying machine learning models on mobile, embedded systems, and other
resource-constrained devices. It’s ideal for applications that require low-latency
inference, such as edge computing or IoT devices. TensorFlow Lite focuses on optimizing
performance while minimizing power consumption.

<figure class="centered">
    <img src="{{ site.baseurl }}/assets/post_images/2023-05-11/pose-detection-tensorflow.gif"
         style="display: block; margin-left: auto; margin-right: auto;"
         alt="Pose estimation with TensorFlow Lite"/>
    <figcaption style="text-align: center; font-size: 0.9em;">
        TensorFlow Lite in action
    </figcaption>
</figure>

If you'd like to learn how to use TensorFlow Lite with a neural network model in C++, we
previously published a [blog
post](https://blog.conan.io/2023/05/11/tensorflow-lite-cpp-mobile-ml-guide.html)
showcasing how to build a real-time human pose detection application using TensorFlow Lite
and OpenCV. Check it out if you haven't read it yet.

One of the interesting aspects of using the library is the availability of numerous models
on platforms like [Kaggle Models](https://www.kaggle.com/models) for various tasks, which
can be easily integrated into your code, as in the minimal sketch below. For more
information on TensorFlow Lite, please [check their
documentation](https://www.tensorflow.org/lite/guide).
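
As an illustration, here is a minimal sketch of running inference with the TensorFlow
Lite C++ interpreter. The model path and the float32 input/output layout are assumptions;
adapt them to the model you download:

```cpp
// Minimal sketch: load a .tflite model and run one inference.
// "model.tflite" is a placeholder; a float32 input/output is assumed.
#include <cstdio>
#include <memory>
#include "tensorflow/lite/interpreter.h"
#include "tensorflow/lite/kernels/register.h"
#include "tensorflow/lite/model.h"

int main() {
    // Load the model from disk.
    auto model = tflite::FlatBufferModel::BuildFromFile("model.tflite");
    if (!model) return 1;

    // Build an interpreter with the built-in op resolver.
    tflite::ops::builtin::BuiltinOpResolver resolver;
    std::unique_ptr<tflite::Interpreter> interpreter;
    tflite::InterpreterBuilder(*model, resolver)(&interpreter);
    interpreter->AllocateTensors();

    // Fill the first input tensor with preprocessed data.
    float* input = interpreter->typed_input_tensor<float>(0);
    (void)input; // ... copy your input data here ...

    // Run inference and read the first output tensor.
    interpreter->Invoke();
    const float* output = interpreter->typed_output_tensor<float>(0);
    std::printf("first output value: %f\n", output[0]);
}
```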

#### [ONNX Runtime](https://conan.io/center/recipes/onnxruntime)

**ONNX Runtime** is a high-performance inference engine designed to run models in the
[ONNX](https://onnx.ai/) format, an open standard for representing machine learning
models across various AI frameworks such as PyTorch, TensorFlow, and scikit-learn.

Thanks to this interoperability, ONNX Runtime allows you to use models trained in
different frameworks with a single unified runtime. Here’s the general workflow:

1. **Get a model**: Train a model using your preferred framework and export or convert it
   to the ONNX format. There are [tutorials](https://onnxruntime.ai/docs/tutorials/)
   available for popular frameworks and libraries.

2. **Load and run the model with ONNX Runtime**: Check out these [C++ inference
   examples](https://github.com/microsoft/onnxruntime-inference-examples/tree/main/c_cxx)
   to get started quickly, or see the sketch right after this list.
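
As a taste of the C++ API, here is a minimal sketch that opens a session and inspects its
first input. The model path is a placeholder and error handling is omitted; on Windows
the `Ort::Session` constructor takes a wide-character path instead:

```cpp
// Minimal sketch: create an ONNX Runtime session and query its first
// input name. "model.onnx" is a placeholder path.
#include <cstdio>
#include "onnxruntime_cxx_api.h"

int main() {
    // The environment holds logging state for the API.
    Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "demo");
    Ort::SessionOptions options;

    // Load the model and build an optimized inference session.
    Ort::Session session(env, "model.onnx", options);

    // Query the name of the first input; from here you would build
    // Ort::Value input tensors and call session.Run(...).
    Ort::AllocatorWithDefaultOptions allocator;
    auto name = session.GetInputNameAllocated(0, allocator);
    std::printf("first input: %s\n", name.get());
}
```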

Additionally, ONNX Runtime offers multiple options for tuning performance using various
runtime configurations or hardware accelerators. Explore [the Performance section in the
documentation](https://onnxruntime.ai/docs/performance/) for more details. For more
information, visit the [ONNX Runtime documentation](https://onnxruntime.ai/docs/).

Check all available versions in the Conan Center Index by running:

```shell
conan search onnxruntime
```

#### [OpenVINO](https://conan.io/center/recipes/openvino)

**OpenVINO** (Open Visual Inference and Neural Network Optimization) is an
[Intel-developed toolkit](https://docs.openvino.ai/) that accelerates deep learning
inference on a wide range of devices. It supports models from frameworks like PyTorch,
TensorFlow, and ONNX, offering tools to optimize, deploy, and scale AI applications
efficiently.

The [OpenVINO C++
examples](https://docs.openvino.ai/2024/learn-openvino/openvino-samples.html) demonstrate
tasks such as model loading, inference, and performance benchmarking. Explore these
examples to see how you can integrate OpenVINO into your projects.
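
For a first impression, here is a minimal sketch with the OpenVINO 2.x C++ API that
compiles a model for CPU and creates an inference request. The model path is a
placeholder:

```cpp
// Minimal sketch: compile a model for CPU with the OpenVINO 2.x API.
// "model.xml" is a placeholder; ONNX and other formats are also supported.
#include <iostream>
#include "openvino/openvino.hpp"

int main() {
    ov::Core core;

    // Read the model from disk.
    auto model = core.read_model("model.xml");

    // Compile it for a target device and create an inference request;
    // from here you would set input tensors and call request.infer().
    ov::CompiledModel compiled = core.compile_model(model, "CPU");
    ov::InferRequest request = compiled.create_infer_request();

    std::cout << "model inputs: " << model->inputs().size() << "\n";
}
```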

For more details, visit the [OpenVINO documentation](https://docs.openvino.ai/2024/).

Check all available versions in the Conan Center Index by running:

```shell
conan search openvino
```

#### [mlpack](https://conan.io/center/recipes/mlpack)

**mlpack** is a fast, flexible, and lightweight header-only C++ library for machine
learning, ideal for prototyping and for deployments with a small footprint. It offers a
broad range of machine learning algorithms for classification, regression, clustering,
and more, along with preprocessing utilities and data transformations.

Explore [mlpack’s examples
repository](https://github.com/mlpack/examples/tree/master/cpp), where you’ll find C++
applications such as training neural networks for digit recognition, decision tree models
for predicting loan defaults, and clustering algorithms for identifying patterns in
healthcare data.
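
As a flavor of the API, here is a minimal k-means sketch assuming mlpack 4.x and its
single `mlpack.hpp` header. The random data is a stand-in; in practice you would load a
real dataset, for example with `mlpack::data::Load()`:

```cpp
// Minimal sketch: k-means clustering with mlpack 4.x (header-only).
#include <iostream>
#include <mlpack.hpp>

int main() {
    // 2-dimensional dataset with 100 random points (Armadillo matrices
    // are column-major: each column is one point).
    arma::mat data(2, 100, arma::fill::randu);

    // Cluster into 3 groups; assignments[i] is the cluster of point i.
    arma::Row<size_t> assignments;
    mlpack::KMeans<> kmeans;
    kmeans.Cluster(data, 3, assignments);

    std::cout << "first point assigned to cluster " << assignments[0] << "\n";
}
```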

For further details, visit the [mlpack documentation](https://www.mlpack.org/).

Check all available versions in the Conan Center Index by running:

```shell
conan search mlpack
```

#### [Dlib](https://conan.io/center/recipes/dlib)

**Dlib** is a modern C++ library widely used in research and industry for advanced machine
learning algorithms and computer vision tasks. Its comprehensive documentation and
well-designed API make it straightforward to integrate into existing projects.

Dlib provides a variety of algorithms, including facial detection, landmark recognition,
object classification, and tracking. Examples of these functionalities can be found in
[their GitHub repository](https://github.com/davisking/dlib/tree/master/examples).
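
As a small example, here is a minimal sketch of Dlib’s HOG-based frontal face detector.
The image path is a placeholder, and loading JPEG images assumes Dlib was built with JPEG
support enabled in the package options:

```cpp
// Minimal sketch: detect frontal faces in an image with Dlib.
// "photo.jpg" is a placeholder path.
#include <iostream>
#include <vector>
#include <dlib/image_io.h>
#include <dlib/image_processing/frontal_face_detector.h>

int main() {
    // HOG-based face detector shipped with Dlib.
    dlib::frontal_face_detector detector = dlib::get_frontal_face_detector();

    // Load a grayscale image and run the detector on it.
    dlib::array2d<unsigned char> img;
    dlib::load_image(img, "photo.jpg");
    std::vector<dlib::rectangle> faces = detector(img);

    std::cout << "faces found: " << faces.size() << "\n";
}
```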

For more information, visit the [Dlib official site](http://dlib.net/).

Check all available versions in the Conan Center Index by running:

```shell
conan search dlib
```

## Conclusion

C++ offers high-performance AI libraries and the flexibility to optimize for your
hardware. With Conan, integrating these tools is straightforward, enabling efficient,
scalable AI workflows.

Now, give these tools a go and see your AI ideas come to life in C++!
