[Get Started :material-arrow-right-box:](get-started/installation.md){.md-button}
TinyMPC is an open-source solver tailored for convex model-predictive control that delivers high-speed computation with a small memory footprint. Implemented in C++ with minimal dependencies, TinyMPC is particularly suited for embedded control and robotics applications on resource-constrained platforms. TinyMPC can handle state and input bounds and second-order cone constraints. [Python](https://github.com/TinyMPC/tinympc-python), [MATLAB](https://github.com/TinyMPC/tinympc-matlab), and [Julia](https://github.com/TinyMPC/tinympc-julia) interfaces are available to aid in generating code for embedded systems.
!!! success ""
TinyMPC outperforms state-of-the-art solvers in terms of speed and memory footprint.

TinyMPC is also capable of handling conic constraints. Conic-TinyMPC outperforms [SCS](https://www.cvxgrp.org/scs/){:target="_blank"} and [ECOS](https://web.stanford.edu/~boyd/papers/ecos.html){:target="_blank"} in execution time and memory, achieving an average speed-up of 13.8x over SCS and 142.7x over ECOS.
<!-- #gain, because of its lack of generality, TinyMPC is orders of magnitudes faster than SCS and ECOS. -->
</div>
</figure>
---

**docs/solver/background.md**
We want to solve optimization problems in which our cost function $f$ and set of valid states $\mathcal{C}$ are both convex:
$$
\begin{aligned}
\min_x \quad & f(x) \\
\text{subject to} \quad & x \in \mathcal{C}
\end{aligned}
$$
We define an indicator function for the set $\mathcal{C}$:
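$$
I_{\mathcal{C}}(x) =
\begin{cases}
0 & x \in \mathcal{C} \\
+\infty & \text{otherwise}
\end{cases}
$$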
Our optimization problem has now been divided between two variables, the primal $x$ and the slack $z$, and we can optimize over each one individually while holding the other variables constant. To get the ADMM algorithm, all we have to do is alternate between solving for the $x$ and then for the $z$ that minimizes our augmented Lagrangian. After each pair of solves, we update our dual variable $\lambda$ based on how much $x$ differs from $z$.
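Writing the augmented Lagrangian as $\mathcal{L}_A(x, z, \lambda)$ with penalty weight $\rho$ (the symbols here follow the standard ADMM presentation), one iteration is:

$$
\begin{aligned}
x^+ &= \arg\min_x \ \mathcal{L}_A(x, z, \lambda) \\
z^+ &= \arg\min_z \ \mathcal{L}_A(x^+, z, \lambda) \\
\lambda^+ &= \lambda + \rho\,(x^+ - z^+)
\end{aligned}
$$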
where $x^+$, $z^+$, and $\lambda^+$ refer to the primal, slack, and dual variables to be used in the next iteration.
Now all we have to do is solve a few unconstrained optimization problems!
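As a concrete illustration of these three updates (not TinyMPC's implementation), here is a minimal NumPy sketch for a small quadratic cost $f(x) = \tfrac{1}{2}x^\intercal H x + q^\intercal x$ with a box set $\mathcal{C}$; all problem data below are made up for the example.

```python
import numpy as np

# Toy data (illustrative only): f(x) = 0.5 x'Hx + q'x,  C = {x : lo <= x <= hi}
H = np.array([[4.0, 1.0],
              [1.0, 3.0]])
q = np.array([-1.0, -2.0])
lo, hi = np.zeros(2), np.array([0.3, 0.4])

rho = 1.0                          # ADMM penalty weight
x, z, lam = np.zeros(2), np.zeros(2), np.zeros(2)

for _ in range(100):
    # Primal update: unconstrained minimization of the augmented Lagrangian over x
    x = np.linalg.solve(H + rho * np.eye(2), rho * z - lam - q)
    # Slack update: minimizing over z reduces to projection onto C (a clip for a box)
    z = np.clip(x + lam / rho, lo, hi)
    # Dual update: gradient ascent on the dual variable
    lam = lam + rho * (x - z)

print(x, z, lam)
```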
---
## Primal and slack update
The primal update in TinyMPC takes advantage of the special structure of Model Predictive Control (MPC) problems. The optimization problem can be written as:
$$
\begin{aligned}
\min_{x_{1:N},\, u_{1:N-1}} \quad & \frac{1}{2} x_N^\intercal Q_f x_N + q_f^\intercal x_N + \sum_{k=1}^{N-1} \left( \frac{1}{2} x_k^\intercal Q x_k + q_k^\intercal x_k + \frac{1}{2} u_k^\intercal R u_k + r_k^\intercal u_k \right) \\
\text{subject to} \quad & x_{k+1} = A x_k + B u_k \quad \forall k \in [1, N), \\
& x_k \in \mathcal{X}, \quad u_k \in \mathcal{U}
\end{aligned}
$$
where $x_k \in \mathbb{R}^n$, $u_k \in \mathbb{R}^m$ are the state and control input at time step $k$, $N$ is the number of time steps (also referred to as the horizon), $A \in \mathbb{R}^{n \times n}$ and $B \in \mathbb{R}^{n \times m}$ define the system dynamics, $Q \succeq 0$, $R \succ 0$, and $Q_f \succeq 0$ are symmetric cost weight matrices, and $q_k$, $r_k$, $q_f$ are linear cost vectors. The convex sets $\mathcal{X}$ and $\mathcal{U}$ represent state and input constraints respectively.
For long horizons, the Riccati recursion used in the primal update converges to steady-state matrices (see the Algorithm section below). TinyMPC leverages this property by pre-computing these steady-state matrices offline, significantly reducing the online computational burden. The only online updates needed are for the time-varying linear terms in the cost function.
---
## Algorithm
TinyMPC employs the **Alternating Direction Method of Multipliers (ADMM)** to efficiently solve convex quadratic MPC problems. The algorithm separates dynamics constraints from other convex constraints, enabling the use of specialized techniques for each type.
### ADMM Framework
The algorithm reformulates the MPC problem by introducing slack variables and solving three iterative update steps:
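With slack copies $z_{1:N}$, $w_{1:N-1}$ of the states and inputs and corresponding scaled dual variables $y_{1:N}$, $g_{1:N-1}$ (the notation used in the pseudocode below), one iteration takes the form:

$$
\begin{aligned}
\text{primal update:} \quad & x^+, u^+ = \arg\min_{x, u} \ \mathcal{L}_A(x, u, z, w, y, g) \\
\text{slack update:} \quad & z^+ = \Pi_{\mathcal{X}}(x^+ + y), \quad w^+ = \Pi_{\mathcal{U}}(u^+ + g) \\
\text{dual update:} \quad & y^+ = y + x^+ - z^+, \quad g^+ = g + u^+ - w^+
\end{aligned}
$$

where $\mathcal{L}_A$ is the augmented Lagrangian of the MPC problem and $\Pi$ denotes projection onto the corresponding feasible set.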
The **primal update** step is reformulated as a Linear Quadratic Regulator (LQR) problem, which has a closed-form solution through the **discrete Riccati recursion**. This is the computational bottleneck that TinyMPC optimizes.
The modified cost matrices for the LQR problem become:
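Assuming the scaled ADMM form written above, with penalty weight $\rho$, expanding the penalty terms into the stage costs gives:

$$
\begin{aligned}
\tilde{Q} = Q + \rho I, \quad & \tilde{q}_k = q_k + \rho\,(y_k - z_k), \\
\tilde{R} = R + \rho I, \quad & \tilde{r}_k = r_k + \rho\,(g_k - w_k), \\
\tilde{Q}_f = Q_f + \rho I, \quad & \tilde{q}_f = q_f + \rho\,(y_N - z_N),
\end{aligned}
$$

so only the linear terms $\tilde{q}_k$, $\tilde{r}_k$ change between ADMM iterations.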
The **slack update** then projects the primal iterates onto $\mathcal{X}$ and $\mathcal{U}$, the feasible sets for states and inputs respectively.
### Computational Optimization
For long horizons, TinyMPC pre-computes matrices that converge to steady-state values:
$$
\begin{aligned}
P_{\text{inf}} &= Q + A^\intercal P_{\text{inf}} A - A^\intercal P_{\text{inf}} B(R + B^\intercal P_{\text{inf}} B)^{-1} B^\intercal P_{\text{inf}} A \\
K_{\text{inf}} &= (R + B^\intercal P_{\text{inf}} B)^{-1} B^\intercal P_{\text{inf}} A
\end{aligned}
$$
This significantly reduces online computation by caching expensive matrix operations.
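As an illustration of this offline step, the following NumPy sketch iterates the Riccati recursion above until $P$ stops changing; the double-integrator dynamics and weights are assumptions made up for the example, not TinyMPC defaults.

```python
import numpy as np

# Hypothetical double-integrator dynamics and cost weights (illustrative only)
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.5 * dt**2], [dt]])
Q = np.diag([10.0, 1.0])
R = np.array([[0.1]])

# Iterate the Riccati recursion until P converges to its steady-state value P_inf
P = Q.copy()
for _ in range(1000):
    P_next = Q + A.T @ P @ A - A.T @ P @ B @ np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    if np.max(np.abs(P_next - P)) < 1e-10:
        P = P_next
        break
    P = P_next

P_inf = P
K_inf = np.linalg.solve(R + B.T @ P_inf @ B, B.T @ P_inf @ A)
print("K_inf =", K_inf)
```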
### Algorithm Pseudocode
```
Algorithm 1: TinyMPC
function TINY_SOLVE(input)
    while not converged do
        // Primal update
        p_{1:N-1}, d_{1:N-1} ← Backward pass via Riccati recursion
        x_{1:N}, u_{1:N-1} ← Forward pass via feedback law

        // Slack update
        z_{1:N}, w_{1:N-1} ← Project to feasible set

        // Dual update
        y_{1:N}, g_{1:N-1} ← Gradient ascent update
        q_{1:N}, r_{1:N-1}, p_N ← Update linear cost terms
    end while
    return x_{1:N}, u_{1:N-1}
end function
```
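The following NumPy sketch shows one way this loop can be realized for box constraints on states and inputs. It is an illustration of the structure above rather than TinyMPC's C++ implementation: the dynamics, weights, bounds, horizon, and penalty weight `rho` are made-up assumptions, and the cached matrices are the steady-state Riccati quantities computed for the $\rho$-augmented cost.

```python
import numpy as np

# --- Illustrative problem data (assumptions for this sketch, not TinyMPC defaults) ---
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])            # double-integrator dynamics
B = np.array([[0.5 * dt**2], [dt]])
n, m, N = 2, 1, 20
rho = 5.0                                        # ADMM penalty weight
Q = np.diag([10.0, 1.0])
R = 0.1 * np.eye(m)
x_min, x_max = -5.0 * np.ones(n), 5.0 * np.ones(n)
u_min, u_max = -0.5 * np.ones(m), 0.5 * np.ones(m)
x0 = np.array([2.0, 0.0])

# --- Offline: cache steady-state Riccati matrices for the rho-augmented cost ---
Qt, Rt = Q + rho * np.eye(n), R + rho * np.eye(m)
P = Qt.copy()
for _ in range(500):
    K = np.linalg.solve(Rt + B.T @ P @ B, B.T @ P @ A)
    P = Qt + A.T @ P @ A - A.T @ P @ B @ K
K_inf = np.linalg.solve(Rt + B.T @ P @ B, B.T @ P @ A)
C1 = np.linalg.inv(Rt + B.T @ P @ B)             # reused in every backward pass
C2 = (A - B @ K_inf).T

# --- Online: ADMM iterations following the pseudocode structure ---
x, u = np.zeros((n, N)), np.zeros((m, N - 1))    # primal trajectory
z, w = np.zeros((n, N)), np.zeros((m, N - 1))    # slack copies of states / inputs
y, g = np.zeros((n, N)), np.zeros((m, N - 1))    # scaled dual variables
q, r = np.zeros((n, N)), np.zeros((m, N - 1))    # linear cost terms (zero reference)
p, d = np.zeros((n, N)), np.zeros((m, N - 1))    # backward-pass terms

for _ in range(50):
    # Linear cost terms of the rho-augmented cost (the "update linear cost terms" step)
    qt = q + rho * (y - z)
    rt = r + rho * (g - w)

    # Primal update: backward Riccati pass on the linear terms only ...
    p[:, N - 1] = qt[:, N - 1]
    for k in range(N - 2, -1, -1):
        d[:, k] = C1 @ (B.T @ p[:, k + 1] + rt[:, k])
        p[:, k] = qt[:, k] + C2 @ p[:, k + 1] - K_inf.T @ rt[:, k]
    # ... then a forward rollout with the cached feedback gain
    x[:, 0] = x0
    for k in range(N - 1):
        u[:, k] = -K_inf @ x[:, k] - d[:, k]
        x[:, k + 1] = A @ x[:, k] + B @ u[:, k]

    # Slack update: project onto the box constraints
    z = np.clip(x + y, x_min[:, None], x_max[:, None])
    w = np.clip(u + g, u_min[:, None], u_max[:, None])

    # Dual update: gradient ascent on the scaled duals
    y = y + x - z
    g = g + u - w

print("first control input:", u[:, 0])
```

Note that the online backward pass touches only vectors; the expensive matrix factorizations were all done offline.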
---
## Implementations
The TinyMPC library offers a C++ implementation of the algorithm described above, along with [interfaces to several high-level languages](../get-started/examples.md). These interfaces allow you to set up and solve optimal control problems with TinyMPC seamlessly from those languages.
We have made available [Python](https://github.com/TinyMPC/tinympc-python), [MATLAB](https://github.com/TinyMPC/tinympc-matlab), and [Julia](https://github.com/TinyMPC/tinympc-julia) interfaces.
There are also community-developed implementations of this algorithm, including [Rust](https://github.com/peterkrull/tinympc-rs).
Numerical benchmarks against other solvers on microcontrollers are available at [this repository](https://github.com/RoboticExplorationLab/mcu-solver-benchmarks).