Commit 5f9dcb9

Merge pull request #2 from A2R-Lab/main
Added background section, fixed logo and plots
2 parents d07abb5 + 3f247bc commit 5f9dcb9

8 files changed: +277 −53 lines

docs/index.md

Lines changed: 43 additions & 31 deletions
@@ -6,15 +6,15 @@ description: TinyMPC description and overview
 # Welcome to TinyMPC's documentation!
 
 <p align="center">
-<img width="50%" src="media/lightmode-banner.png#only-light" />
-<img width="50%" src="media/darkmode-banner.png#only-dark" />
+<img width="50%" src="media/tinympc-light-logo.png#only-light" />
+<img width="50%" src="media/tinympc-dark-logo.png#only-dark" />
 </p>
 
 <p align="center" markdown>
 [Get Started :material-arrow-right-box:](get-started/installation.md){.md-button}
 </p>
 
-TinyMPC is an open-source solver tailored for convex model-predictive control that delivers high-speed computation with a small memory footprint. Implemented in C++ with minimal dependencies, TinyMPC is particularly suited for embedded control and robotics applications on resource-constrained platforms. TinyMPC can handle state and input bounds and second-order cone constraints. A Python interface is available to aid in generating code for embedded systems.
+TinyMPC is an open-source solver tailored for convex model-predictive control that delivers high-speed computation with a small memory footprint. Implemented in C++ with minimal dependencies, TinyMPC is particularly suited for embedded control and robotics applications on resource-constrained platforms. TinyMPC can handle state and input bounds and second-order cone constraints. [Python](https://github.com/TinyMPC/tinympc-python), [MATLAB](https://github.com/TinyMPC/tinympc-matlab), and [Julia](https://github.com/TinyMPC/tinympc-julia) interfaces are available to aid in generating code for embedded systems.
 
 !!! success ""

@@ -75,11 +75,11 @@ TinyMPC outperforms state-of-the-art solvers in terms of speed and memory footpr
 </figure>
 
 <figure markdown="span">
-![CDC24 MCU benchmarks](media/cdc_bench.png){ width=60% align=left}
+![ICRA26 benchmarks](media/icra_2026_benchmarks.png){ width=60% align=left}
 <div style="text-align: left;">
 <br>
 <br>
-TinyMPC is also capable of handling conic constraints. In (b), we benchmarked TinyMPC against two existing conic solvers with embedded support, [SCS](https://www.cvxgrp.org/scs/){:target="_blank"} and [ECOS](https://web.stanford.edu/~boyd/papers/ecos.html){:target="_blank"}, on the rocket soft-landing problem. TinyMPC achieves an average speed-up of 13x over SCS and 137x over ECOS.
+TinyMPC is also capable of handling conic constraints. Conic-TinyMPC outperforms [SCS](https://www.cvxgrp.org/scs/){:target="_blank"} and [ECOS](https://web.stanford.edu/~boyd/papers/ecos.html){:target="_blank"} in execution time and memory, achieving an average speed-up of 13.8x over SCS and 142.7x over ECOS.
 <!-- #gain, because of its lack of generality, TinyMPC is orders of magnitudes faster than SCS and ECOS. -->
 </div>
 </figure>
@@ -98,76 +98,79 @@ TinyMPC outperforms state-of-the-art solvers in terms of speed and memory footpr
 
 ## Made by
 
+<!-- First row: Khai, Sam, Ishaan -->
 <div style="display: flex;">
 <div style="flex: 1;">
 <p align="center">
-<a href="https://www.linkedin.com/in/anoushka-alavilli-89586b178/" target="_blank"><img style="border-radius: 0%;" width="60%" src="media/contributors/anoushka_alavilli.jpg" /></a>
+<a href="https://xkhainguyen.github.io/" target="_blank"><img style="border-radius: 0%;" width="60%" src="media/contributors/khai_nguyen.jpg" /></a>
 </p>
 <h4 align="center">
-Anoushka Alavilli
+Khai Nguyen
 </h4>
-<!-- <h6 align="center">
-Main developer
-</h6> -->
 </div>
 <div style="flex: 1;">
 <p align="center">
-<a href="https://xkhainguyen.github.io/" target="_blank"><img style="border-radius: 0%;" width="60%" src="media/contributors/khai_nguyen.jpg" /></a>
+<a href="https://samschoedel.com/" target="_blank"><img style="border-radius: 0%;" width="60%" src="media/contributors/sam_schoedel.jpg" /></a>
 </p>
 <h4 align="center">
-Khai Nguyen
+Sam Schoedel
 </h4>
-<!-- <h6 align="center">
-Main developer
-</h6> -->
 </div>
 <div style="flex: 1;">
 <p align="center">
-<a href="https://samschoedel.com/" target="_blank"><img style="border-radius: 0%;" width="60%" src="media/contributors/sam_schoedel.jpg" /></a>
+<a href="https://ishaanmahajan.com/" target="_blank"><img style="border-radius: 0%;" width="60%" src="media/contributors/ishaan_mahajan.jpg" /></a>
 </p>
 <h4 align="center">
-Sam Schoedel
+Ishaan Mahajan
 </h4>
-<!-- <h6 align="center">
-Main developer
-</h6> -->
 </div>
 </div>
 
-
+<!-- Second row: Anoushka, Elakhya, Moises -->
 <div style="display: flex;">
+<div style="flex: 1;">
+<p align="center">
+<a href="https://www.linkedin.com/in/anoushka-alavilli-89586b178/" target="_blank"><img style="border-radius: 0%;" width="60%" src="media/contributors/anoushka_alavilli.jpg" /></a>
+</p>
+<h4 align="center">
+Anoushka Alavilli
+</h4>
+</div>
 <div style="flex: 1;">
 <p align="center">
 <a href="https://www.linkedin.com/in/elakhya-nedumaran/" target="_blank"><img style="border-radius: 0%;" width="60%" src="media/contributors/elakhya_nedumaran.png" /></a>
 </p>
 <h4 align="center">
 Elakhya Nedumaran
 </h4>
-<!-- <h6 align="center">
-Code generation and interfaces
-</h6> -->
 </div>
 <div style="flex: 1;">
+<p align="center">
+<a href="https://www.linkedin.com/in/moises-mata-cu/" target="_blank"><img style="border-radius: 0%;" width="60%" src="media/contributors/moises_mata.jpg" /></a>
+</p>
+<h4 align="center">
+Moises Mata
+</h4>
+</div>
+</div>
+
+<!-- Third row: Brian and Zac (centered with proper sizing) -->
+<div style="display: flex; justify-content: center;">
+<div style="flex: 0 0 33.33%; max-width: 33.33%;">
 <p align="center">
 <a href="https://brianplancher.com/" target="_blank"><img style="border-radius: 0%;" width="60%" src="media/contributors/brian_plancher.jpg" /></a>
 </p>
 <h4 align="center">
 Prof. Brian Plancher
 </h4>
-<!-- <h6 align="center">
-Math and advice
-</h6> -->
 </div>
-<div style="flex: 1;">
+<div style="flex: 0 0 33.33%; max-width: 33.33%;">
 <p align="center">
 <a href="https://www.linkedin.com/in/zacmanchester/" target="_blank"><img style="border-radius: 0%;" width="60%" src="media/contributors/zac_manchester.jpg" /></a>
 </p>
 <h4 align="center">
 Prof. Zac Manchester
 </h4>
-<!-- <h6 align="center">
-Math and advice
-</h6> -->
 </div>
 </div>

@@ -192,4 +195,13 @@ TinyMPC outperforms state-of-the-art solvers in terms of speed and memory footpr
 eprint={2403.18149},
 archivePrefix={arXiv},
 }
+```
+
+```latex
+@article{mahajan2025robust,
+title={Robust and Efficient Embedded Convex Optimization through First-Order Adaptive Caching},
+author={Mahajan, Ishaan and Plancher, Brian},
+journal={arXiv preprint arXiv:2507.03231},
+year={2025}
+}
 ```
(binary file, 3.71 MB)

(binary file, 31.3 KB)

(binary file, 243 KB)

docs/media/tinympc-dark-logo.png (binary file, 96.3 KB)

docs/media/tinympc-light-logo.png (binary file, 61.2 KB)

docs/solver/background.md

Lines changed: 142 additions & 11 deletions
@@ -15,10 +15,10 @@ The alternating direction method of multipliers algorithm was developed in the 1
 We want to solve optimization problems in which our cost function $f$ and set of valid states $\mathcal{C}$ are both convex:
 
 $$
-\begin{alignat}{2}
-\min_x & \quad f(x) \\
-\text{subject to} & \quad x \in \mathcal{C}.
-\end{alignat}
+\begin{aligned}
+\min_x \quad & f(x) \\
+\text{subject to} \quad & x \in \mathcal{C}
+\end{aligned}
 $$
 
 We define an indicator function for the set $\mathcal{C}$:
@@ -35,12 +35,29 @@ The indicator function says simply that there is infinite additional cost when $
 
 We modify the generic optimization problem to include the indicator function by adding it to the cost. We introduce a new state variable $z$, called the slack variable, to describe the constrained version of the original state variable $x$, which we will now call the primal variable.
 
-$$
+Our approach leverages the ADMM framework to separate the dynamics constraints from the other convex constraints, such as torque limits and obstacle avoidance. This separation is crucial because it allows us to:
+
+1. Handle the dynamics through efficient LQR techniques in the primal update
+2. Handle the remaining convex constraints through simple projections in the slack update
+
+Since both the state constraints ($\mathcal{X}$) and input constraints ($\mathcal{U}$) are convex, this decomposition works by projecting the primal variables ($x, u$) onto the constraint sets in the slack update. The projection enforces constraint satisfaction while exploiting the separability of the constraint structure, significantly reducing computational cost compared to solving the fully constrained problem directly.
+
+<!-- $$
 \begin{alignat}{2}
 \min_x & \quad f(x) + I_\mathcal{C}(z) \\
 \text{subject to} & \quad x = z.
 \end{alignat}
+$$ -->
+
 $$
+\begin{alignat}{2}
+\min_{x, u} & \quad f(x, u) + I_\mathcal{X}(z_x) + I_\mathcal{U}(z_u) \\
+\text{subject to} & \quad x = z_x, \quad u = z_u.
+\end{alignat}
+$$
 
 At minimum cost, the primal variable $x$ must be equal to the slack variable $z$, but during each solve they will not necessarily be equal. This is because the slack variable $z$ manifests in the algorithm as the version of the primal variable $x$ that has been projected onto the feasible set $\mathcal{C}$, and thus whenever the primal variable $x$ violates any constraint, the slack variable at that iteration will be projected back onto $\mathcal{C}$ and thus differ from $x$. To push the primal variable $x$ back to the feasible set $\mathcal{C}$, we introduce a third variable, $\lambda$, called the dual variable. This method is referred to as the [augmented Lagrangian](https://en.wikipedia.org/wiki/Augmented_Lagrangian_method){:target="_blank"} (originally named the method of multipliers), and introduces a scalar penalty parameter $\rho$ alongside the dual variable $\lambda$ (also known as a Lagrange multiplier). The penalty parameter $\rho$ is the augmentation to what would otherwise just be the Lagrangian of our constrained optimization problem above. $\lambda$ and $\rho$ work together to force $x$ closer to $z$ by increasing the cost of the augmented Lagrangian the more $x$ and $z$ differ.

@@ -51,18 +68,132 @@
 Our optimization problem has now been divided into two variables: the primal $x$ and slack $z$, and we can optimize over each one individually while holding all of the other variables constant. To get the ADMM algorithm, all we have to do is alternate between solving for the $x$ and then for the $z$ that minimizes our augmented Lagrangian. After each set of solves, we then update our dual variable $\lambda$ based on how much $x$ differs from $z$.
 
 $$
-\begin{alignat}{3}
-\text{primal update: } & x^+ & ={} & \underset{x}{\arg \min} \hspace{2pt} \mathcal{L}_A(x,z,\lambda), \\
-\text{slack update: } & z^+ & ={} & \underset{z}{\arg \min} \hspace{2pt} \mathcal{L}_A(x^+,z,\lambda), \\
-\text{dual update: } & \lambda^+ & ={} & \lambda + \rho(x^+ - z^+),
-\end{alignat}
+\begin{aligned}
+\text{primal update: } x^+ &= \underset{x}{\arg \min} \, \mathcal{L}_A(x,z,\lambda), \\
+\text{slack update: } z^+ &= \underset{z}{\arg \min} \, \mathcal{L}_A(x^+,z,\lambda), \\
+\text{dual update: } \lambda^+ &= \lambda + \rho(x^+ - z^+)
+\end{aligned}
 $$
 
 where $x^+$, $z^+$, and $\lambda^+$ refer to the primal, slack, and dual variables to be used in the next iteration.
 
 Now all we have to do is solve a few unconstrained optimization problems!
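The three alternating updates can be sketched on a toy scalar problem, minimizing $f(x)=\tfrac{1}{2}(x-3)^2$ subject to $x\in[-1,1]$, using the scaled dual $y=\lambda/\rho$. The problem and variable names below are illustrative, not part of TinyMPC:

```python
# ADMM on: minimize 0.5*(x - 3)**2  subject to  x in [-1, 1].
# Scaled form: y = lambda / rho, so the dual update is y += x - z.
rho = 1.0
x, z, y = 0.0, 0.0, 0.0

for _ in range(100):
    # Primal update: argmin_x 0.5*(x - 3)^2 + (rho/2)*(x - z + y)^2,
    # solved in closed form by setting the derivative to zero.
    x = (3.0 + rho * (z - y)) / (1.0 + rho)
    # Slack update: project x + y onto the feasible interval [-1, 1].
    z = min(1.0, max(-1.0, x + y))
    # Dual update: accumulate the constraint violation x - z.
    y += x - z

print(round(x, 6))  # → 1.0, the constrained minimizer
```

Note how the unconstrained primal solve and the cheap projection never need to agree mid-run; the dual variable $y$ pulls them together over the iterations.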

-## TODO: primal and slack update and discrete algebraic riccati equation
+---
+
+## Primal and slack update
+
+The primal update in TinyMPC takes advantage of the special structure of model-predictive control (MPC) problems. The optimization problem can be written as:
+
+$$
+\min_{x_{1:N}, u_{1:N-1}} J = \frac{1}{2}x_N^\intercal Q_f x_N + q_f^\intercal x_N + \sum_{k=1}^{N-1} \frac{1}{2}x_k^\intercal Q x_k + q_k^\intercal x_k + \frac{1}{2}u_k^\intercal R u_k + r_k^\intercal u_k
+$$
+
+$$
+\text{subject to: } x_{k+1} = Ax_k + Bu_k \quad \forall k \in [1,N)
+$$
+
+In addition to the dynamics constraints, the optimization problem includes convex state and input constraints:
+
+$$
+x_k \in \mathcal{X}, \quad u_k \in \mathcal{U} \quad \forall k \in [1,N)
+$$
+
+where $\mathcal{X}$ and $\mathcal{U}$ are convex sets representing the feasible state and input regions, respectively. These constraints ensure that the solution remains within feasible bounds for both the states and the control inputs at every time step.
+
+When we apply ADMM to this problem, the primal update becomes an equality-constrained quadratic program with modified cost matrices:
+
+$$
+\begin{aligned}
+\tilde{Q}_f &= Q_f + \rho I, \quad \tilde{q}_f = q_f + \lambda_N - \rho z_N \\
+\tilde{Q} &= Q + \rho I, \quad \tilde{q}_k = q_k + \lambda_k - \rho z_k \\
+\tilde{R} &= R + \rho I, \quad \tilde{r}_k = r_k + \mu_k - \rho w_k
+\end{aligned}
+$$
+
+This modified LQR problem has a closed-form solution through the discrete Riccati equation. The feedback law takes the form:
+
+$$
+u_k^* = -K_k x_k - d_k
+$$
+
+where $K_k$ is the feedback gain and $d_k$ is the feedforward term. These are computed through the backward Riccati recursion:
+
+$$
+\begin{aligned}
+K_k &= (R + B^\intercal P_{k+1} B)^{-1}(B^\intercal P_{k+1} A) \\
+d_k &= (R + B^\intercal P_{k+1} B)^{-1}(B^\intercal p_{k+1} + r_k) \\
+P_k &= Q + K_k^\intercal R K_k + (A - B K_k)^\intercal P_{k+1} (A - B K_k) \\
+p_k &= q_k + (A - B K_k)^\intercal (p_{k+1} - P_{k+1} B d_k) + K_k^\intercal (R d_k - r_k)
+\end{aligned}
+$$
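The backward recursion above can be sketched for a scalar system; the system and cost numbers below are arbitrary illustration values, not taken from TinyMPC:

```python
# Backward Riccati recursion for a scalar system x_{k+1} = A*x_k + B*u_k with
# stage cost 0.5*Q*x^2 + q*x + 0.5*R*u^2 + r*u (illustrative toy numbers).
A, B = 1.0, 0.1
Q, R = 1.0, 0.1
q, r = 0.1, 0.05
N = 100

P, p = Q, q                # terminal cost-to-go: P_N, p_N
K = [0.0] * (N - 1)        # feedback gains K_k
d = [0.0] * (N - 1)        # feedforward terms d_k
for k in range(N - 2, -1, -1):
    Pn, pn = P, p          # P_{k+1}, p_{k+1} before overwriting
    K[k] = (B * Pn * A) / (R + B * Pn * B)
    d[k] = (B * pn + r) / (R + B * Pn * B)
    Acl = A - B * K[k]     # closed-loop dynamics A - B*K_k
    P = Q + K[k] * R * K[k] + Acl * Pn * Acl
    p = q + Acl * (pn - Pn * B * d[k]) + K[k] * (R * d[k] - r)

# The control at each step is then u_k = -K[k] * x_k - d[k].
```

With a horizon this long, the gains near the start of the trajectory have already converged to their infinite-horizon values, which is exactly the property the caching below exploits.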
+The slack update is simpler, requiring only projection onto the constraint sets:
+
+$$
+\begin{aligned}
+z_k^+ &= \text{proj}_{\mathcal{X}}(x_k^+ + y_k) \\
+w_k^+ &= \text{proj}_{\mathcal{U}}(u_k^+ + g_k)
+\end{aligned}
+$$
+
+where $\mathcal{X}$ and $\mathcal{U}$ are the feasible sets for states and inputs respectively, and $y_k, g_k$ are scaled dual variables.
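These projections are cheap closed-form operations: a clip for box constraints, and the standard cone projection for the second-order cone constraints the solver supports. The function names below are illustrative, not TinyMPC's API:

```python
import math

def proj_box(v, lo, hi):
    """Project each coordinate of v onto the interval [lo, hi] (a clip)."""
    return [min(hi, max(lo, vi)) for vi in v]

def proj_soc(v, s):
    """Project the point (v, s) onto the second-order cone {(v, s) : ||v|| <= s}."""
    norm = math.sqrt(sum(vi * vi for vi in v))
    if norm <= s:                       # already inside the cone
        return v, s
    if norm <= -s:                      # inside the polar cone: projects to the origin
        return [0.0] * len(v), 0.0
    a = (norm + s) / 2.0                # otherwise scale onto the cone boundary
    return [a * vi / norm for vi in v], a
```

For example, `proj_box([2.0, -3.0, 0.5], -1.0, 1.0)` clips to `[1.0, -1.0, 0.5]`, and `proj_soc([3.0, 4.0], 0.0)` lands on the cone boundary at `([1.5, 2.0], 2.5)`.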
+A key optimization in TinyMPC is the pre-computation of matrices that remain constant throughout the iterations. Given a sufficiently long horizon, the Riccati recursion converges to the infinite-horizon solution, allowing us to cache:
+
+$$
+\begin{aligned}
+C_1 &= (R + B^\intercal P_{\text{inf}} B)^{-1} \\
+C_2 &= (A - B K_{\text{inf}})^\intercal
+\end{aligned}
+$$
+
+This significantly reduces the online computational burden while maintaining the algorithm's effectiveness.
+---
+
+## Discrete Algebraic Riccati Equation (DARE)
+
+For long time horizons, the Riccati recursion converges to a steady-state solution given by the discrete algebraic Riccati equation:
+
+$$
+P_{\text{inf}} = Q + A^\intercal P_{\text{inf}} A - A^\intercal P_{\text{inf}} B(R + B^\intercal P_{\text{inf}} B)^{-1} B^\intercal P_{\text{inf}} A
+$$
+
+This steady-state solution $P_{\text{inf}}$ yields a constant feedback gain:
+
+$$
+K_{\text{inf}} = (R + B^\intercal P_{\text{inf}} B)^{-1} B^\intercal P_{\text{inf}} A
+$$
+
+TinyMPC leverages this property by pre-computing these steady-state matrices offline, significantly reducing the online computational burden. The only online updates needed are for the time-varying linear terms in the cost function.
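A minimal sketch of this offline step, iterating the scalar DARE to a fixed point and then forming the cached terms defined earlier (the toy numbers are illustrative):

```python
# Fixed-point iteration on the scalar DARE, followed by the cached terms
# C1 = (R + B*Pinf*B)^-1 and C2 = A - B*Kinf (transposed in the matrix case).
A, B, Q, R = 1.0, 0.1, 1.0, 0.1   # illustrative scalar system and cost

Pinf = Q
for _ in range(500):
    Pnew = Q + A * Pinf * A - (A * Pinf * B) ** 2 / (R + B * Pinf * B)
    if abs(Pnew - Pinf) < 1e-12:  # converged to the steady-state cost-to-go
        Pinf = Pnew
        break
    Pinf = Pnew

Kinf = (B * Pinf * A) / (R + B * Pinf * B)   # constant feedback gain
C1 = 1.0 / (R + B * Pinf * B)                # cached inverse
C2 = A - B * Kinf                            # cached closed-loop term
```

For these numbers the fixed point satisfies $P^2 - P - 10 = 0$, so the iteration settles at $P_{\text{inf}} = (1+\sqrt{41})/2 \approx 3.7016$; everything here can be computed once, offline.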
+---
+
+## Dual Updates and Convergence
+
+The dual update step in ADMM pushes the solution toward constraint satisfaction:
+
+$$
+\begin{aligned}
+y_k^+ &= y_k + x_k^+ - z_k^+ \\
+g_k^+ &= g_k + u_k^+ - w_k^+
+\end{aligned}
+$$
+
+where $y_k$ and $g_k$ are the scaled dual variables ($y_k = \lambda_k/\rho$ and $g_k = \mu_k/\rho$).
+
+The algorithm terminates when both primal and dual residuals are sufficiently small:
+
+$$
+\begin{aligned}
+\text{primal residual: } & \|x_k^+ - z_k^+\|_2 \leq \epsilon_{\text{pri}} \\
+\text{dual residual: } & \rho\|z_k^+ - z_k\|_2 \leq \epsilon_{\text{dual}}
+\end{aligned}
+$$
+
+where $\epsilon_{\text{pri}}$ and $\epsilon_{\text{dual}}$ are user-defined tolerance parameters.
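The stopping test can be sketched as a small helper over stacked iterates; the function name and default tolerances below are illustrative, not TinyMPC's API:

```python
import math

def converged(x, z, z_prev, rho, eps_pri=1e-4, eps_dual=1e-4):
    """ADMM stopping test on stacked primal/slack iterates (illustrative sketch)."""
    # primal residual: ||x - z||_2
    pri = math.sqrt(sum((xi - zi) ** 2 for xi, zi in zip(x, z)))
    # dual residual: rho * ||z - z_prev||_2
    dual = rho * math.sqrt(sum((zi - zp) ** 2 for zi, zp in zip(z, z_prev)))
    return pri <= eps_pri and dual <= eps_dual
```

In a solver loop this check would run once per iteration after the dual update, comparing the current slack iterate against the previous one.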
 
 <!--
 this is an example of `code` in markdown
