A lightweight Linear Quadratic Regulator (LQR) controller library for Arduino and compatible microcontrollers. AutoLQR provides an easy-to-use implementation of LQR control theory for small embedded systems.
Linear Quadratic Regulators are a form of optimal control that balances system performance against control effort. The AutoLQR library handles the complex matrix operations required to compute optimal feedback gains, allowing you to implement sophisticated control systems on resource-constrained platforms.
- Minimal Dependencies: Designed to work with the Arduino core libraries only
- Dual Allocation Modes: Choose between dynamic (`AutoLQR`) or static (`AutoLQRStatic`) memory allocation
- Flexible System Sizing: Support for systems with multiple states and inputs
- Simplified API: Easy-to-use interface for setting up system dynamics and cost matrices
- Pre-computed Gains: Option to use offline-computed gains for complex systems
- Optimal Control: Implementation of discrete-time LQR control theory
- Matrix Operations: Built-in matrix utilities for embedded control applications
- Configurable Solver: Adjustable iteration count and tolerance for DARE solver (static version)
- Download this repository as a ZIP file
- In the Arduino IDE, navigate to Sketch > Include Library > Add .ZIP Library
- Select the downloaded ZIP file
- After installation, go to File > Examples > AutoLQR to view the examples
- Download or clone this repository
- Copy the contents to your Arduino libraries directory (usually `Documents/Arduino/libraries/`)
The Linear Quadratic Regulator solves the optimal control problem by minimizing the infinite-horizon cost function:

$$J = \sum_{k=0}^{\infty} \left( x_k^T Q x_k + u_k^T R u_k \right)$$

where:

- $x_k$ is the state vector at time $k$
- $u_k$ is the control input at time $k$
- $Q$ is the state cost matrix (penalizes state error)
- $R$ is the control cost matrix (penalizes control effort)
For a discrete-time linear system represented by:

$$x_{k+1} = A x_k + B u_k$$
The LQR controller produces the control law:

$$u_k = -K x_k$$

where $K$ is the optimal feedback gain matrix, computed by solving the Discrete Algebraic Riccati Equation (DARE).
The AutoLQR library solves the DARE iteratively:

$$P_{i+1} = Q + A^T P_i A - A^T P_i B \left( R + B^T P_i B \right)^{-1} B^T P_i A$$
Once $P$ converges, the optimal gain matrix $K$ is calculated as:

$$K = \left( R + B^T P B \right)^{-1} B^T P A$$
AutoLQR provides two variants:
| Variant | Header | Memory | Control Size Limit | Use Case |
|---|---|---|---|---|
| `AutoLQR` | `AutoLQR.h` | Dynamic (heap) | Unlimited | Flexible sizing, MIMO systems, prototyping |
| `AutoLQRStatic<N,M>` | `AutoLQRStatic.h` | Static (stack/global) | M <= 3 | Production, memory-constrained |
Recommendation: Use `AutoLQRStatic` for production deployments to avoid heap fragmentation. Use `AutoLQR` for MIMO systems requiring more than 3 control inputs.
```cpp
#include <AutoLQR.h>

#define STATE_SIZE 2
#define CONTROL_SIZE 1

// Create controller instance (allocates memory on heap)
AutoLQR controller(STATE_SIZE, CONTROL_SIZE);
```

```cpp
#include <AutoLQRStatic.h>

// Create controller with compile-time fixed sizes
// No heap allocation - all memory is statically allocated
AutoLQRStatic<2, 1> controller;  // 2 states, 1 control
```

Benefits of static allocation:

- No heap fragmentation
- Predictable memory usage at compile time
- Slightly faster (no allocation overhead)
- Configurable solver parameters
```cpp
// Define system size
#define STATE_SIZE 2    // Number of state variables
#define CONTROL_SIZE 1  // Number of control inputs

// Create controller instance
AutoLQR controller(STATE_SIZE, CONTROL_SIZE);
```

The system dynamics are defined by two matrices:
- State matrix A describes how the system evolves on its own
- Input matrix B describes how control inputs affect the system
For example, a simple position-velocity system with discretized dynamics:
```cpp
// For a system with dt = 0.1 s
float A[STATE_SIZE * STATE_SIZE] = {
  1.0, 0.1,   // Position += velocity * dt
  0.0, 0.95   // Velocity with damping factor
};

float B[STATE_SIZE * CONTROL_SIZE] = {
  0.005,  // Effect of control on position (0.5 * dt^2)
  0.1     // Effect of control on velocity (dt)
};

// Set matrices in controller
controller.setStateMatrix(A);
controller.setInputMatrix(B);
```

The cost matrices Q and R define the trade-off between performance and control effort:
- Q penalizes state errors (higher values = faster response)
- R penalizes control effort (higher values = smoother control)
```cpp
// Q matrix - state cost
float Q[STATE_SIZE * STATE_SIZE] = {
  10.0, 0.0,  // Position error cost
  0.0,  1.0   // Velocity error cost
};

// R matrix - control cost
float R[CONTROL_SIZE * CONTROL_SIZE] = {
  0.1  // Control effort cost
};

// Set cost matrices
controller.setCostMatrices(Q, R);
```

Once the system is defined, compute the optimal gains:
```cpp
// Dynamic version (AutoLQR)
if (controller.computeGains()) {
  Serial.println("Gain computation successful");
} else {
  Serial.println("Error: Failed to compute gains");
}

// With custom solver parameters (both versions)
if (controller.computeGains(150, 1e-7f)) {  // 150 iterations, tighter tolerance
  Serial.println("Gain computation successful");
}

// Or set parameters persistently (AutoLQR only)
controller.setSolverParameters(150, 1e-7f);
controller.computeGains();  // Uses saved parameters
```

In each control loop:
```cpp
// 1. Calculate state error (current state - desired state)
float state_error[STATE_SIZE] = {
  current_position - desired_position,
  current_velocity - desired_velocity
};

// 2. Update the controller with current state error
controller.updateState(state_error);

// 3. Calculate optimal control input
float control[CONTROL_SIZE];
controller.calculateControl(control);

// 4. Apply control to your system
motor_power = control[0];
```

For complex systems or microcontrollers with limited computational resources:
```cpp
// Pre-computed optimal gain matrix (can be calculated offline)
float K[CONTROL_SIZE * STATE_SIZE] = {
  4.47, 2.28  // Example values
};

// Set pre-computed gains
controller.setGains(K);
```

For setpoint tracking, you can calculate feedforward gains:
```cpp
float desired_state[STATE_SIZE] = {target_position, 0.0};
float feedforward[CONTROL_SIZE];
controller.estimateFeedforwardGain(feedforward, desired_state);

// Apply both feedback and feedforward control
float u_total = control[0] + feedforward[0];
```

Estimate how long the system will take to converge:
```cpp
float convergence_time = controller.estimateConvergenceTime(0.05);  // 5% threshold
if (convergence_time > 0) {
  Serial.print("Estimated convergence time: ");
  Serial.println(convergence_time);
}
```

Calculate the expected cost from the current state:
```cpp
float expected_cost = controller.calculateExpectedCost();
if (expected_cost >= 0) {
  Serial.print("Expected cost: ");
  Serial.println(expected_cost);
}
```

Verify whether your system is controllable using the full controllability matrix:
```cpp
if (controller.isSystemControllable()) {
  Serial.println("System is controllable");
} else {
  Serial.println("Warning: System is not controllable");
}

// Get the controllability rank
int rank = controller.getControllabilityRank();
Serial.print("Controllability rank: ");
Serial.print(rank);
Serial.print(" / ");
Serial.println(controller.getStateSize());

// Export the full controllability matrix C = [B, AB, A^2B, ..., A^(n-1)B]
float C[STATE_SIZE * STATE_SIZE * CONTROL_SIZE];
controller.getControllabilityMatrix(C);
```

The static version provides compile-time memory size calculation:
```cpp
// Access compile-time constants
constexpr int n = AutoLQRStatic<4, 2>::StateSize;        // 4
constexpr int m = AutoLQRStatic<4, 2>::ControlSize;      // 2
constexpr size_t mem = AutoLQRStatic<4, 2>::MemorySize;  // Memory in bytes

// Or use the method
Serial.print("Controller memory usage: ");
Serial.print(controller.getMemorySize());
Serial.println(" bytes");
```

All matrices are represented as flattened 1D arrays in row-major order:
```
[0,0] [0,1] [0,2]      A[0] A[1] A[2]
[1,0] [1,1] [1,2]  →   A[3] A[4] A[5]
[2,0] [2,1] [2,2]      A[6] A[7] A[8]
```
For example, a 3×3 state matrix A would be stored in a 9-element array.
Memory allocation is proportional to:
- State matrix A: stateSize² floats
- Input matrix B: stateSize × controlSize floats
- Cost matrices Q and R: stateSize² and controlSize² floats
- Internal buffers for matrix operations
For a 2-state, 1-control system, approximately 72 bytes of RAM are used.
- Matrix multiplication: O(n³) for n×n matrices
- DARE solver: O(k×n³) where k is the number of iterations
- Matrix inversion (dynamic version):
- 1x1, 2x2, 3x3: O(n²) closed-form solutions (fast, no extra memory)
- 4x4 and larger: O(n³) Gauss-Jordan elimination with partial pivoting
- Systems with more than 3 states may be computationally intensive
- Pre-compute gains for larger systems
- Use fixed-point arithmetic for memory-constrained systems
- Consider decreasing DARE solver iteration count for faster computation
- Tune Q and R matrices to achieve desired control performance
The AutoLQR library is suitable for a variety of control applications:
- Motor Position Control: Precisely control motor position with minimal oscillation
- Self-balancing Robots: Maintain stability using multiple sensor inputs
- Temperature Control Systems: Optimize heating/cooling with minimal energy usage
- Drone Flight Control: Stabilize attitude with optimal control authority
- Servo Control: Smooth and precise control of servo mechanisms
See the examples folder for detailed implementation examples.
- Ensure matrices A, B, Q, and R have appropriate dimensions
- Check that Q and R are symmetric and positive definite
- Verify the system is controllable
- Try adjusting Q and R values (start with diagonal matrices)
- Check sign conventions in control output application
- Ensure state error is properly calculated (current - desired)
- Verify system model (A and B matrices) accurately represents the physical system
- Gradually increase Q values for more aggressive control
- Gradually increase R values for smoother control action
- Reduce system size if possible
- Use pre-computed gains instead of on-board computation
- Use `AutoLQRStatic<N,M>` instead of `AutoLQR` to avoid heap fragmentation
- Check compile-time memory usage with static allocation
Dynamic Allocation (AutoLQR)

```cpp
AutoLQR(int stateSize, int controlSize)
```

- `stateSize`: Number of state variables
- `controlSize`: Number of control inputs

Static Allocation (AutoLQRStatic)

```cpp
AutoLQRStatic<N, M>()
```

- `N`: State size (template parameter, compile-time constant)
- `M`: Control size (template parameter, compile-time constant, max 3)
```cpp
bool setStateMatrix(const float* A)
```

Sets the system dynamics matrix A. Returns success/failure.

```cpp
bool setInputMatrix(const float* B)
```

Sets the input matrix B. Returns success/failure.

```cpp
bool setCostMatrices(const float* Q, const float* R)
```

Sets the state cost matrix Q and the control cost matrix R. Returns success/failure.
```cpp
bool computeGains()                                    // Uses current solver parameters
bool computeGains(int maxIterations, float tolerance)  // With custom parameters
```

Computes the optimal feedback gains by solving the discrete algebraic Riccati equation. Returns success/failure. Both versions support configurable solver parameters.
```cpp
void setSolverParameters(int maxIterations, float tolerance)  // AutoLQR only
int getMaxIterations() const
float getTolerance() const
```

Configure and query the DARE solver parameters. Default: 100 iterations, 1e-6 tolerance.
```cpp
void updateState(const float* currentState)
```

Updates the controller with the current system state or state error.

```cpp
void calculateControl(float* controlOutput)
```

Calculates the optimal control inputs for the current state.

```cpp
void setGains(const float* K)
```

Sets pre-computed gain values directly.

```cpp
bool isSystemControllable()
```

Checks whether the system is controllable using the full controllability matrix rank. Returns true if the rank equals the state size.

```cpp
bool getControllabilityMatrix(float* C)
```

Exports the controllability matrix C = [B, AB, A²B, ..., A^(n-1)B]. Array size: stateSize × (stateSize × controlSize).

```cpp
int getControllabilityRank()
```

Returns the rank of the controllability matrix. The system is controllable if the rank equals stateSize.
```cpp
const float* getRiccatiSolution() const  // AutoLQRStatic (correct spelling)
const float* getRicattiSolution() const  // AutoLQR (legacy spelling)
```

Gets the solution of the Riccati equation (the P matrix).
AutoLQRStatic Additional Methods

```cpp
constexpr int getStateSize() const
constexpr int getControlSize() const
bool hasValidGains() const
```

```cpp
void estimateFeedforwardGain(float* ffGain, const float* desiredState)
```

Estimates the feedforward gain for steady-state tracking.

```cpp
float estimateConvergenceTime(float convergenceThreshold = 0.05f)
```

Estimates time to convergence. Returns time in seconds, or -1 if estimation fails.

```cpp
bool exportGains(float* exportedK)
```

Exports computed gains to an external array. Returns success/failure.

```cpp
float calculateExpectedCost()
```

Calculates the expected cost from the current state. Returns the cost value, or -1 if the calculation fails.
Contributions to improve AutoLQR are welcome! Please follow these steps:
- Fork the repository
- Create a feature branch
- Implement your changes
- Submit a pull request
Please include tests and example code when adding new features.
This library is released under the MIT License. See LICENSE for details.
Created by 1999AZZAR.
For questions or support, please contact: [email protected]