diff --git a/.github/workflows/Documenter.yml b/.github/workflows/Documenter.yml
index 6882936c..b12a6305 100644
--- a/.github/workflows/Documenter.yml
+++ b/.github/workflows/Documenter.yml
@@ -2,15 +2,12 @@ name: Documenter
 on:
   push:
     branches:
-      - master
       - main
-      - v0.2-main
-    tags: '*'
+    tags:
+      - 'v*'
   pull_request:
     branches:
-      - master
       - main
-      - v0.2-main
 jobs:
   build:
     runs-on: ubuntu-latest
diff --git a/docs/make.jl b/docs/make.jl
index dcc31176..1a600c93 100644
--- a/docs/make.jl
+++ b/docs/make.jl
@@ -8,17 +8,16 @@ bib = CitationBibliography(
 )
 makedocs(
     sitename = "RLinearAlgebra",
-    format = Documenter.HTML(
-        collapselevel=1,
-    ),
     plugins=[bib],
     modules = [RLinearAlgebra],
     pages = [
         "Home" => "index.md",
-#        "Tutorials" => [
-#            "Introduction" => "tutorials/introduction.md",
-#            "Getting started" => "tutorials/getting_started.md"
-#        ],
+        "Tutorials" => [
+            "Introduction" => "tutorials/introduction.md",
+            "Consistent Linear System" => [
+                "tutorials/consistent_system/consistent_system.md",
+                "tutorials/consistent_system/consistent_system_compressor.md",
+            ],
         "Manual" => [
             "Introduction" => "manual/introduction.md",
             "Compression" => "manual/compression.md",
@@ -40,6 +39,7 @@ makedocs(
             ],
         ],
         "References" => "references.md",
+        ],
     ]
)
diff --git a/docs/src/tutorials/consistent_system/consistent_system.md b/docs/src/tutorials/consistent_system/consistent_system.md
new file mode 100644
index 00000000..a9a8b36f
--- /dev/null
+++ b/docs/src/tutorials/consistent_system/consistent_system.md
@@ -0,0 +1,143 @@
+# Solving a Consistent Linear System
+
+This guide demonstrates how to use the `RLinearAlgebra.jl` package to solve a
+**consistent linear system**—a system where at least one solution
+exists—expressed in the form:
+
+$$Ax = b$$
+
+We'll walk through setting up the problem, running a solver, and verifying the result.
+
+---
+## Setting up and solving the system
+
+First, let's define our linear system $Ax = b$.
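As a brief aside on the definition above: a system is consistent exactly when $b$ lies in the column space of $A$, which can be checked numerically by comparing ranks. The following is a small self-contained sketch (illustrative only; it is not part of the tutorial's workflow, and the matrices here are hypothetical):

```julia
using LinearAlgebra

# A tiny example: b built from a known solution lies in the column space of A,
# so the system Ax = b is consistent.
A = [1.0 0.0; 0.0 1.0; 1.0 1.0]   # 3x2 matrix of rank 2
b = A * [2.0, -1.0]               # right-hand side constructed from a known x

# A system is consistent exactly when rank(A) == rank([A b])
println("Consistent: ", rank(A) == rank(hcat(A, b)))

# A right-hand side outside the column space gives an inconsistent system
b_bad = [1.0, 0.0, 0.0]
println("Perturbed system consistent: ", rank(A) == rank(hcat(A, b_bad)))
```

Because we construct $b$ as $Ax_{\text{true}}$ below, the tutorial's system is consistent by construction.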
+
+To easily verify the accuracy of our solver, we'll construct a problem where the true
+solution, $x_{\text{true}}$, is known beforehand. We'll start by creating a random
+matrix $A$ and a known solution vector $x_{\text{true}}$. Then, we can generate the
+right-hand side vector $b$ by computing $b = Ax_{\text{true}}$.
+
+The following Julia code imports the necessary libraries, sets up the dimensions, and
+creates $A$, $x_{\text{true}}$, and $b$. We also initialize a starting guess, `x_init`,
+for our iterative solver.
+
+```julia
+using LinearAlgebra
+num_rows, num_cols = 1000, 20;
+A = randn(Float64, num_rows, num_cols);
+x_true = randn(Float64, num_cols);
+x_init = zeros(Float64, num_cols);
+b = A * x_true;
+```
+
+```@setup ConsistentExample
+using LinearAlgebra
+num_rows, num_cols = 1000, 20;
+A = randn(Float64, num_rows, num_cols);
+x_true = randn(Float64, num_cols);
+x_init = zeros(Float64, num_cols);
+b = A * x_true;
+```
+
+`RLinearAlgebra.jl` can solve this system efficiently in just a few lines of code:
+
+```@example ConsistentExample
+using RLinearAlgebra
+logger = BasicLogger(max_it = 300)
+kaczmarz_solver = Kaczmarz(log = logger)
+solver_recipe = complete_solver(kaczmarz_solver, x_init, A, b)
+rsolve!(solver_recipe, x_init, A, b)
+
+solution = x_init;
+println("Solution to the system: \n", solution)
+```
+**Done! It's that simple!**
+
+Let's check how close our calculated `solution` is to the known `x_true`.
+We can measure the accuracy by calculating the Euclidean norm of the difference
+between the two vectors. A small norm indicates that our solver found a good approximation.
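As an aside before we verify: for intuition, the classical randomized Kaczmarz update that this solver family builds on can be written in a few lines of stdlib Julia. This is an illustrative sketch only; `RLinearAlgebra.jl`'s implementation is more general and differs in its details:

```julia
using LinearAlgebra, Random

# Classical randomized Kaczmarz, written by hand: at each step, project the
# current iterate onto the hyperplane defined by one randomly chosen row equation.
Random.seed!(0)
m, n = 200, 10
A = randn(m, n)
x_true = randn(n)
b = A * x_true

x = zeros(n)
for k in 1:5000
    i = rand(1:m)                                 # pick a random row
    a = view(A, i, :)                             # i-th row of A
    x .+= ((b[i] - dot(a, x)) / dot(a, a)) .* a   # project onto {x : a'x = b[i]}
end
println("Error after 5000 steps: ", norm(x - x_true))
```

For a consistent system, these projections converge linearly in expectation to a solution, which is why a modest iteration budget suffices in the example above.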
+
+```@example ConsistentExample
+# Calculate the norm of the error
+error_norm = norm(solution - x_true)
+println(" - Norm of the error between the solution and x_true: ", error_norm)
+```
+As you can see, the Kaczmarz solver found a solution vector that is very close to
+the true solution.
+
+Let's break down the solver code line by line to understand what each part does.
+
+---
+## Code breakdown
+
+As shown in the code, we used the [`Kaczmarz` solver](@ref Kaczmarz). A key feature of
+**RLinearAlgebra.jl** is its modularity; you can customize the solver's behavior by passing
+in different "component" objects for tasks such as system compression, progress logging,
+and termination checks.
+
+For this example, we kept it simple by only customizing the maximum number of iterations
+in the [`Logger`](@ref Logger) component. Let's break down each step.
+
+### Configure the logger
+
+We start with the simplest component: the [`Logger`](@ref Logger). The
+[`Logger`](@ref Logger) is
+responsible for tracking metrics (such as the error history) and telling the solver
+when to stop. For this guide, we use the default [`BasicLogger`](@ref BasicLogger)
+and configure
+it with a single stopping criterion: a maximum number of iterations.
+
+```julia
+# Configure the maximum number of iterations to be 300
+logger = BasicLogger(max_it = 300)
+```
+
+### Create the solver
+
+Before running the solver on our specific problem (`A, b, x_init`), we must prepare it
+using the [`complete_solver`](@ref complete_solver) function. This function creates
+a [`KaczmarzRecipe`](@ref KaczmarzRecipe), which combines the solver
+configuration with the problem data.
+
+Crucially, this "recipe" pre-allocates all necessary memory buffers, which is a
+key step for ensuring efficient, high-performance computation.
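The benefit of pre-allocation is that no new arrays are created inside the iteration loop. The following stdlib-only sketch shows the general pattern (an illustration of the motivation, not the package's actual internals): allocate work buffers once up front, then reuse them with in-place kernels such as `mul!`:

```julia
using LinearAlgebra

# Allocate problem data and a work buffer once, up front.
A = randn(1000, 20)
x = randn(20)
b = randn(1000)
residual = zeros(1000)        # buffer allocated a single time

for iter in 1:3
    mul!(residual, A, x)       # residual .= A * x, reusing the buffer
    residual .= b .- residual  # residual .= b - A*x, still no allocation
    # ... an iterative method would use `residual` to update `x` here ...
end
println("Residual norm: ", norm(residual))
```

Inside the loop, `mul!` and the broadcast assignment `.=` overwrite the existing buffer instead of allocating fresh vectors on every iteration.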
+
+```julia
+# Create the Kaczmarz solver object by passing in the ingredients
+kaczmarz_solver = Kaczmarz(log = logger)
+# Create the solver recipe by combining the solver and the problem data
+solver_recipe = complete_solver(kaczmarz_solver, x_init, A, b)
+```
+
+### Solve the system
+
+Finally, we call [`rsolve!`](@ref rsolve!) to execute the algorithm. The `!` at the end
+of the function name is a Julia convention indicating that the function updates some of
+its arguments in place. In this case, `rsolve!` modifies `x_init` in place,
+filling it with the final solution vector. The solver will iterate until a stopping
+criterion in the `logger` is met, i.e. until the iteration count reaches $300$.
+
+```julia
+# Run the in-place solver!
+rsolve!(solver_recipe, x_init, A, b)
+
+# The solution is now stored in the updated x_init vector
+solution = x_init;
+```
diff --git a/docs/src/tutorials/consistent_system/consistent_system_compressor.md b/docs/src/tutorials/consistent_system/consistent_system_compressor.md
new file mode 100644
index 00000000..e5f6ab6a
--- /dev/null
+++ b/docs/src/tutorials/consistent_system/consistent_system_compressor.md
@@ -0,0 +1,217 @@
+# Deeper Dive: Modular Components
+
+In the previous guide, we showed how to solve a consistent linear system in just a few
+lines of code. That example used the default configuration of the
+[`Kaczmarz` solver](@ref Kaczmarz), which is highly effective for many standard
+problems.
+
+However, the true power of **RLinearAlgebra.jl** lies in its high degree of modularity
+and flexibility. You can fine-tune the solver's behavior by combining different
+ingredients, much like cooking a fine dish, to tackle specific challenges, improve
+performance, or implement more complex algorithms.
+
+We will follow the design philosophy of **RLinearAlgebra.jl** by composing different
+modules ([`Compressor`](@ref Compressor), [`Logger`](@ref Logger), [`Solver`](@ref Solver),
+etc.)
+to customize a solver and solve the same consistent linear system,
+
+$$Ax = b.$$
+
+```@setup ConsistentExample
+# Import relevant libraries
+using RLinearAlgebra, Random, LinearAlgebra
+
+# Define the dimensions of the linear system
+num_rows, num_cols = 1000, 20
+
+# Create the matrix A and a known true solution x_true
+A = randn(Float64, num_rows, num_cols);
+x_true = randn(Float64, num_cols);
+
+# Calculate the right-hand side vector b from A and x_true
+b = A * x_true;
+
+# Set an initial guess for the solution vector x (typically a zero vector)
+x_init = zeros(Float64, num_cols);
+```
+
+---
+## 1. Configure [`Compressor`](@ref Compressor)
+
+For large-scale problems, the matrix $A$ can be massive, slowing down iterative algorithms.
+We can use a randomized sketching technique to "compress" $A$ and $b$ to a lower dimension
+while preserving the essential information of the system, so that we can solve the system
+quickly without a significant loss of accuracy.
+
+Here, we'll configure a [`SparseSign`](@ref SparseSign) compressor as an example.
+This compressor generates a sparse matrix $S$ whose non-zero elements are +1 or -1
+(up to scaling).
+
+### (a) Configure the [`SparseSign`](@ref SparseSign) Compressor
+
+We will configure a compression matrix that reduces the 1000 rows of our original
+system down to a more manageable 300 rows.
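To build intuition for what such an $S$ looks like, here is a hand-rolled sketch using `SparseArrays`. This follows the common sparse-sign construction in which each column of a left sketch gets a few signed entries so that every row of $A$ can contribute; it is illustrative only, and the package's own constructor and conventions may differ:

```julia
using SparseArrays, LinearAlgebra, Random

# Hand-rolled sparse sign sketch (illustrative; not the package's constructor).
# S is 300 x 1000; each column gets `nnz_per_col` entries of ±1/sqrt(nnz_per_col).
Random.seed!(0)
n_rows_S, n_cols_S, nnz_per_col = 300, 1000, 5
scale = 1 / sqrt(nnz_per_col)

rows = Int[]; cols = Int[]; vals = Float64[]
for j in 1:n_cols_S
    for i in randperm(n_rows_S)[1:nnz_per_col]   # nnz distinct rows per column
        push!(rows, i); push!(cols, j)
        push!(vals, rand(Bool) ? scale : -scale) # random sign, equal probability
    end
end
S = sparse(rows, cols, vals, n_rows_S, n_cols_S)

A = randn(1000, 20)
SA = S * A
println("Size of SA: ", size(SA))
```

Applying `S` on the left maps the 1000-row system to a 300-row sketch while each compressed row mixes only a handful of original rows, which is what keeps the sketch cheap to apply.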
+
+```@example ConsistentExample
+# The goal is to compress the 1000 rows of A to 300 rows
+compression_dim = 300
+# We want each row of the compression matrix S to have 5 non-zero elements
+non_zeros = 5
+
+# Configure the SparseSign compressor
+sparse_compressor = SparseSign(
+    cardinality=Left(),              # S will be left-multiplied: SAx = Sb
+    compression_dim=compression_dim, # The compressed dimension (number of rows)
+    nnz=non_zeros,                   # The number of non-zero elements per row in S
+    type=Float64                     # The element type for the compression matrix
+)
+```
+
+If a compression dimension of `300` rows is too large, it can be changed, say to `10`,
+by updating the compressor configuration:
+
+```@example ConsistentExample
+# Change the dimension of the compressor. The same idea applies to
+# changing the other configuration fields.
+sparse_compressor.compression_dim = 10
+```
+
+The `sparse_compressor` now contains all the [`SparseSign`](@ref SparseSign)
+configuration that we need.
+
+Since the solver can use the `sparse_compressor` to perform the compression
+on the fly, we could stop here and configure the other "ingredients". However,
+it can sometimes be useful to form the compression matrix and the compressed
+system explicitly in order to inspect them, so we will continue exploring.
+
+---
+### (b) Build the [`SparseSignRecipe`](@ref SparseSignRecipe)
+
+After defining the compressor's parameters, we combine it with our matrix $A$ to
+create a [`SparseSignRecipe`](@ref SparseSignRecipe). This "recipe"
+pre-calculates the sparse matrix `S` and prepares everything needed for
+efficient compression.
+
+```@example ConsistentExample
+# Pass the compressor configuration and the original matrix A to
+# create the final compression recipe.
+S = complete_compressor(sparse_compressor, A)
+
+# You can take a closer look at the compression recipe you created.
+println("Configurations of the compression matrix:")
+println(" - Compression matrix is applied to the left or right: ", S.cardinality)
+println(" - Compression matrix's number of rows: ", S.n_rows)
+println(" - Compression matrix's number of columns: ", S.n_cols)
+println(" - Number of nonzeros in each column (left) / row (right): ", S.nnz)
+println(" - Compression matrix's nonzero entry values: ", S.scale)
+println(" - Compression matrix: ", S.op)
+```
+
+### (c) Apply the sparse sign matrix to the system
+
+We can use `*` to apply the sparse matrix `S` to the system.
+
+```@example ConsistentExample
+# Form the compressed system SAx = Sb
+SA = S * A
+Sb = S * b
+
+println("Dimensions of the compressed system:")
+println(" - Matrix SA: ", size(SA))
+println(" - Vector Sb: ", size(Sb))
+```
+
+---
+## 2. Configure [`Logger`](@ref Logger)
+
+To monitor the solver and control its execution, we will configure a
+[`BasicLogger`](@ref BasicLogger).
+This object serves two purposes: tracking metrics (like the error history) and
+defining stopping rules.
+
+We'll configure it to stop after a maximum of `50` iterations or if the residual
+error drops below `1e-6`. We will also set `collection_rate = 5` to record the
+error every $5$ iterations.
+
+```@example ConsistentExample
+# Configure the logger to control the solver's execution
+logger = BasicLogger(
+    max_it = 50,
+    threshold = 1e-6,
+    collection_rate = 5
+)
+```
+
+---
+## 3. Configure [`Solver`](@ref Solver)
+
+Now we assemble our configured ingredients—the `sparse_compressor` and the `logger`—into
+the main [`Kaczmarz` solver](@ref Kaczmarz) object. For any component we don't specify,
+a default will be used. Here, we'll explicitly specify the [`LQSolver`](@ref LQSolver)
+as our sub-solver.
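As background, an LQ factorization is the standard tool for computing the minimum-norm solution of a short, wide (underdetermined) system, which is the kind of sub-problem a compressed row block produces. The sketch below shows the idea with the stdlib `lq`; it is an assumption about what an LQ-based sub-solver does conceptually, not the package's exact code:

```julia
using LinearAlgebra, Random

# Sketch: minimum-norm solution of a short, wide system G y = h via LQ.
Random.seed!(0)
G = randn(10, 20)              # a compressed row block: 10 equations, 20 unknowns
h = randn(10)

F = lq(G)                      # G = L * Q, L lower-triangular, Q has orthonormal rows
y = Matrix(F.Q)' * (F.L \ h)   # y lies in the row space of G, so it has minimum norm

println("Residual norm of G*y - h: ", norm(G * y - h))
```

Because `y` is built from the rows of `G` (through `Q`), it is the unique solution of smallest Euclidean norm, which keeps the iterates of a block method well behaved.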
+
+### (a) Configure the [`Kaczmarz` solver](@ref Kaczmarz)
+```@example ConsistentExample
+# Create the Kaczmarz solver object by passing in the ingredients
+kaczmarz_solver = Kaczmarz(
+    compressor = sparse_compressor,
+    log = logger,
+    sub_solver = LQSolver()
+)
+```
+
+### (b) Create the solver recipe
+
+Just as with the compressor, we must call [`complete_solver`](@ref complete_solver) to
+create a final "recipe". This function takes the solver configuration and
+the specific problem data (`A, b, x_init`) and pre-allocates all memory needed
+for an efficient run.
+
+```@example ConsistentExample
+# Create the solver recipe by combining the solver and the problem data
+solver_recipe = complete_solver(kaczmarz_solver, x_init, A, b)
+```
+
+---
+## 4. Solve and Verify the Result
+
+### (a) Solve the system
+
+We call [`rsolve!`](@ref rsolve!) to run the [`Kaczmarz` solver](@ref Kaczmarz).
+The `!` in the name indicates that the function modifies its arguments in place.
+Here, `x_init` will be updated with the solution vector.
+The algorithm will run until a stopping criterion from our
+`logger` is met.
+
+```@example ConsistentExample
+# Run the solver!
+rsolve!(solver_recipe, x_init, A, b)
+
+# The solution is now stored in the updated x_init vector
+solution = x_init;
+```
+
+### (b) Verify the result
+
+Finally, let's check how close our calculated solution is to the known
+`x_true` and inspect the `logger` to see how the solver performed.
+
+```@example ConsistentExample
+# We can inspect the logger's history to see the convergence
+error_history = solver_recipe.log.hist;
+println(" - Solver stopped at iteration: ", solver_recipe.log.iteration)
+println(" - Final residual error, ||Ax-b||_2: ", error_history[solver_recipe.log.record_location])
+
+# Calculate the norm of the error
+error_norm = norm(solution - x_true)
+println(" - Norm of the error between the solution and x_true: ", error_norm)
+```
+
+As you can see, by composing different modules, we configured a randomized
+solver that found a solution vector very close to the true solution.
\ No newline at end of file
diff --git a/docs/src/tutorials/introduction.md b/docs/src/tutorials/introduction.md
new file mode 100644
index 00000000..3dec10c4
--- /dev/null
+++ b/docs/src/tutorials/introduction.md
@@ -0,0 +1,13 @@
+# Introduction
+
+The purpose of these tutorials is to use examples to help new users quickly gain
+hands-on experience with `RLinearAlgebra.jl`.
+
+## How the tutorials are structured
+
+Problem sets:
+- Consistent linear system: solve for $x$ such that $Ax = b$.
diff --git a/src/Compressors/sparse_sign.jl b/src/Compressors/sparse_sign.jl
index 93de96b2..46568abe 100644
--- a/src/Compressors/sparse_sign.jl
+++ b/src/Compressors/sparse_sign.jl
@@ -26,7 +26,7 @@
 with dimension ``n \\times s``, where ``s`` is the compression dimension that is
 supplied by the user. In this case, each row of ``S`` is generated independently
 by the following steps:
-1. Randomly choose `nnz` components fo the ``s`` components of the row. Note, `nnz`
+1. Randomly choose `nnz` components of the ``s`` components of the row. Note, `nnz`
 is supplied by the user.
 2. For each selected component, randomly set it either to ``-1/\\sqrt{\\text{nnz}}``
 or ``1/\\sqrt{\\text{nnz}}`` with equal probability.
diff --git a/src/Solvers/Loggers/basic_logger.jl b/src/Solvers/Loggers/basic_logger.jl
index 21cbcdd5..6aebae49 100644
--- a/src/Solvers/Loggers/basic_logger.jl
+++ b/src/Solvers/Loggers/basic_logger.jl
@@ -93,7 +93,7 @@ function update_logger!(logger::BasicLoggerRecipe, error::Float64, iteration::In
     logger.error = error
     # Always check max_it stopping criterion
     # Compute in this way to avoid bounds error from searching in the max_it + 1 location
-    logger.converged = iteration <= logger.max_it ?
+    logger.converged = iteration < logger.max_it ?
         logger.stopping_criterion(logger) :
         true
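The one-character change above matters when the stopping criterion indexes a history buffer at the current iteration, as the in-code comment suggests. The following simplified sketch uses hypothetical toy types to illustrate the guard; the indexing convention here is an assumption, not the package's actual layout:

```julia
# Hypothetical toy logger, for illustration only.
mutable struct ToyLogger
    hist::Vector{Float64}
    max_it::Int
end

# A criterion that reads the history at an index derived from the iteration.
criterion(lg::ToyLogger, it::Int) = lg.hist[it + 1] < 1e-6

function converged(lg::ToyLogger, it::Int)
    # With `it <= max_it`, the call at it == max_it would read hist[max_it + 1]
    # and throw a BoundsError. Using `it < max_it` consults the criterion only
    # while the index is in range, and otherwise stops unconditionally.
    return it < lg.max_it ? criterion(lg, it) : true
end

lg = ToyLogger(ones(5), 5)   # hist has exactly max_it entries
println(converged(lg, 2))    # criterion consulted safely
println(converged(lg, 5))    # max_it reached: stop without touching hist[6]
```

With the old `<=` comparison, the call at `it == max_it` would attempt the out-of-range read that the comment in the patch warns about.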