Explanations about three projects of Linear Algebra course; including LU decompositions, denoising signals, and SVD decompositions

ZahraRahimii/Linear-Algebra-Course-Projects


Linear Algebra Course Projects

Here are explanations of the three projects from the Linear Algebra course:

  • LU decomposition
  • Least Squares based Signal Denoising
  • Using SVD decomposition for image compression

LU decomposition

Solving Ax = b by row reduction takes ${O(n^3)}$ time. If we already have the LU decomposition of the matrix, the system can be solved in ${O(n^2)}$ by forward and back substitution. However, computing the factors L and U themselves also takes ${O(n^3)}$, so solving a single linear system by first computing the LU decomposition of its matrix and then solving from the factors is not more efficient on its own. But if we need to solve a large number of equations of the form Ax = b in which A is constant and only b varies, we can factor A once and then solve for each new b in ${O(n^2)}$ time, which is much more efficient.
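The factor-once, solve-many pattern above can be sketched in Python with SciPy's LU routines (the matrix and right-hand sides here are arbitrary stand-ins, not data from the project):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))  # example system matrix (assumed invertible)

# One-time O(n^3) factorization: PA = LU, stored compactly in (lu, piv)
lu, piv = lu_factor(A)

# Each subsequent right-hand side costs only O(n^2):
# forward substitution with L, then back substitution with U.
for _ in range(3):
    b = rng.standard_normal(4)
    x = lu_solve((lu, piv), b)
    assert np.allclose(A @ x, b)  # verify the solve
```

Reusing `(lu, piv)` is what makes the amortized cost per right-hand side quadratic rather than cubic.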

Least Squares based Signal Denoising

The diagram shows the price of Bitcoin every 2 hours from the end of 2020 to the 20th of May. Suppose that the vector 𝒚 is the vector of observed Bitcoin prices, the unknown vector 𝒙 is the noise-free price vector we are looking for, and 𝒗 is the unknown noise vector. That is, we have: ${y = Ix + v}$

The denoising results for different regularization weights λ are shown below:

Figure: λ=10 (under-denoised), λ=100 (well denoised), λ=10000 (over-smoothed).
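A minimal sketch of regularized least-squares denoising, assuming the standard Tikhonov formulation (minimize ‖x − y‖² + λ‖Dx‖² with D the first-difference operator); the signal here is synthetic, and the exact objective used in the project may differ:

```python
import numpy as np

def denoise(y, lam):
    """Estimate a smooth x from noisy y = x + v by minimizing
    ||x - y||^2 + lam * ||D x||^2, where D is the first-difference
    matrix. Closed-form solution: x = (I + lam * D^T D)^{-1} y."""
    n = len(y)
    D = np.diff(np.eye(n), axis=0)  # (n-1) x n first-difference matrix
    return np.linalg.solve(np.eye(n) + lam * D.T @ D, y)

# Noisy samples of a smooth signal (stand-in for the price data)
t = np.linspace(0.0, 1.0, 200)
y = np.sin(2 * np.pi * t) + 0.3 * np.random.default_rng(1).standard_normal(200)

x_smooth = denoise(y, 100.0)  # larger lam -> smoother estimate
```

Increasing λ penalizes variation between neighboring samples more heavily, which reproduces the under-/over-smoothing trade-off seen in the figure.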

Using SVD decomposition for image compression

We can decompose a given image into its three color channels: red, green, and blue. Each channel can be represented as an (m × n) matrix with values ranging from 0 to 255. We now compress the matrix A representing one of the channels by computing an approximation to A that takes only a fraction of the space to store. Here is the great thing about SVD: the data in the matrices U, Σ, and V is sorted by how much it contributes to the product that reconstructs A. That lets us get quite a good approximation by simply keeping only the first k terms, the most important parts of the matrices.

Figure: rank-k reconstructions for k=50, k=250, and k=750.
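The truncation described above can be sketched with NumPy's SVD; the `channel` matrix here is random stand-in data, not an actual image channel from the project:

```python
import numpy as np

def rank_k_approx(A, k):
    """Best rank-k approximation of A: keep only the k largest
    singular values and their vectors from A = U S V^T."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k, :]

# Stand-in for one 0-255 color channel of an image
rng = np.random.default_rng(0)
channel = rng.integers(0, 256, size=(100, 80)).astype(float)

A50 = rank_k_approx(channel, 50)
# Storage drops from m*n values to k*(m + n + 1), and the
# reconstruction error shrinks as k grows:
err = lambda k: np.linalg.norm(channel - rank_k_approx(channel, k))
```

Because the singular values are sorted in decreasing order, truncating after the first k terms discards the parts that contribute least to A, which is why small k already gives a recognizable image.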
