docs/get-started/examples.md (52 additions, 53 deletions)

Check out our GitHub repository for [examples in C++](https://github.com/TinyMPC/TinyMPC).

TinyMPC requires four matrices (A, B, Q, and R) and one number (N). A and B describe the linearized system dynamics, Q and R are the costs on the state and control inputs, and N is the length of the prediction horizon (the number of time steps in the problem). This page assumes you already have a discrete, linearized model of your system dynamics (A and B). [The next page](./model.md) walks through obtaining these starting from a nonlinear model.
=== "Quadrotor"
For the quadrotor, we use the linearized model of the discretized quadrotor dynamics to stabilize about a hovering position. The state is composed of twelve variables: the three-dimensional position, orientation, translational velocity, and angular velocity, which looks like $x = [p_x, p_y, p_z, \theta_x, \theta_y, \theta_z, v_x, v_y, v_z, \omega_x, \omega_y, \omega_z]^T$. The control input is a four-dimensional vector describing the thrust of each motor, and looks like $u = [u_1, u_2, u_3, u_4]^T$.
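The quadrotor's setup code is elided in this diff, so the sketch below is only a placeholder reconstruction assuming the same API as the cart-pole example; the `A`, `B`, `Q`, and `R` values here are illustrative stand-ins, not the real hover linearization.

``` py
import tinympc
import numpy as np

# Placeholder dynamics: substitute the 12x12 A and 12x4 B you obtain by
# linearizing the discretized quadrotor dynamics about hover.
A = np.eye(12)
B = np.zeros((12, 4))

Q = np.diag([10.0] * 6 + [1.0] * 6)  # stage cost on the state (example weights)
R = np.diag([1.0] * 4)               # (4) stage cost on the input
N = 20                               # (5) length of the horizon

# Set up the problem (meaningful only once real A and B are substituted in)
prob = tinympc.TinyMPC()
prob.setup(A, B, Q, R, N)
```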
4. This is the stage cost for the input, and defines how much to penalize the input for deviating from the reference control at each time step in the horizon. Change this to modify the controller's behavior.
5. This is the length of the horizon, and can be anything greater than one. The problem size scales linearly with this variable.
=== "Cart-pole"
For the cart-pole, we use the linearized model of the discretized cart-pole dynamics to stabilize about the upright position. The state is the position of the cart, the angle of the pole, the velocity of the cart, and the angular velocity of the pole, which looks like $x = [p, \theta, v, \omega]^T$. The control input is a single force $u$ acting on the cart. (1)

{.annotate}
1. TinyMPC always produces a $\Delta u$ and $\Delta x$ about the linearization point. Because we linearized the cart-pole about an equilibrium position that required no control input, $\Delta u = u$. Additionally, as discussed in [this page](./model.md), because we defined the coordinate frame of our cart-pole system such that the vertical equilibrium position (which is where we linearized) corresponds to a state of all zeros, $\Delta x = x$. This is irrelevant for the following example, but is important to keep in mind when simulating the system with its full dynamics or applying a control input when the linearization point is not at $x=0$ or $u=0$.
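As a worked version of that note (general case; the equilibrium notation is ours): if the model is linearized about an equilibrium $(x_\mathrm{eq}, u_\mathrm{eq})$, the solver's outputs relate to the physical state and input by

$$x = x_\mathrm{eq} + \Delta x, \qquad u = u_\mathrm{eq} + \Delta u,$$

which reduces to $x = \Delta x$ and $u = \Delta u$ for this cart-pole because it was linearized about $x_\mathrm{eq} = 0$ and $u_\mathrm{eq} = 0$.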
``` py
import tinympc
import numpy as np

# Define necessary data
A = np.array([[1.0, 0.01, 0.0,   0.0  ],  # (1)
              [0.0, 1.0,  0.039, 0.0  ],
              [0.0, 0.0,  1.002, 0.01 ],
              [0.0, 0.0,  0.458, 1.002]])
B = np.array([[0.0  ],  # (2)
              [0.02 ],
              [0.0  ],
              [0.067]])
Q = np.diag([10.0, 1, 10, 1])  # (3)
R = np.diag([1.0])  # (4)

N = 20  # (5)

# Set up the problem
prob = tinympc.TinyMPC()
prob.setup(A, B, Q, R, N)

# Define initial condition
x0 = np.array([0.5, 0, 0, 0])
```

1. This is the state transition matrix, which you get when linearizing the discretized version of your model's full nonlinear dynamics (in this case the cart-pole dynamics, described on [this page](./model.md)) with respect to the state.
2. This is the input or control matrix, which you get when linearizing the discretized version of your model's full nonlinear dynamics (in this case the cart-pole dynamics, described on [this page](./model.md)) with respect to the input.
3. This is the stage cost for the state, and defines how much to penalize the state for deviating from the reference state at each time step in the horizon. Change this to modify the controller's behavior.
4. This is the stage cost for the input, and defines how much to penalize the input for deviating from the reference control at each time step in the horizon. Change this to modify the controller's behavior.
5. This is the length of the horizon, and can be anything greater than one. The problem size scales linearly with this variable.
---
## Solve problem
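The body of this section is elided in this diff; the only surviving fragment is the loop header `for i in range(Nsim):`. The sketch below is our reconstruction under assumptions: `set_x0` and `solve` are taken as the TinyMPC Python calls for setting the current state and solving, and the `"controls"` key and the `Nsim` value are illustrative. The loop propagates the same linear model used in setup.

``` py
Nsim = 100  # number of closed-loop simulation steps (assumed value)

x = x0      # start from the initial condition defined above
xs = [x]    # log of visited states, used for plotting below
for i in range(Nsim):
    prob.set_x0(x)            # current state becomes the initial condition
    solution = prob.solve()   # solve the MPC problem over the N-step horizon
    u = solution["controls"]  # first control input of the horizon (assumed key)
    x = A @ x + B @ u         # propagate the linear model one step
    xs.append(x)
```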
Run `pip install matplotlib` first if it is not already installed.
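A minimal plotting sketch, assuming the loop above logged states into `xs` as shown:

``` py
import matplotlib.pyplot as plt
import numpy as np

xs_arr = np.array(xs)  # one row per time step
plt.plot(xs_arr[:, 0], label=r"cart position $p$")
plt.plot(xs_arr[:, 1], label=r"pole angle $\theta$")
plt.xlabel("time step")
plt.legend()
plt.show()
```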