- R1 Matrix Util Unsafe Power (1, 2, 3)
- R1 Matrix Util Product Power (4, 5, 6)
- R1 Square Matrix k Power (37, 38, 39)
- Numerical Estimation R^k To R^1 Series (40, 41, 42)
- Implementation of Jordan Normal Form #1 (43, 44, 45)
- Numerical Jordan Form JSubM Shell (46, 47, 48)
- Numerical Jordan Form JSubM mSubI (49, 50)
- Numerical Jordan Form JSubM Lambda (51, 52)
- Numerical Jordan Form JSubM R1 Grid (53, 54)
- Numerical Jordan Form JSubM Constructor #1 (55, 56, 57)
- Numerical Jordan Form JSubM Constructor #2 (58, 59, 60)
- Numerical Jordan Form JSubM Power #1 (61, 62, 63)
- Numerical Jordan Form JSubM Power #2 (64, 65, 66)
- Numerical Jordan Form J Shell (67, 68, 69)
- Numerical Jordan Form J jSubMArray (70, 71, 72)
- Numerical Jordan Form J R1 Grid (73, 74, 75)
- Numerical Jordan Form J Constructor #1 (76, 77, 78)
- Numerical Jordan Form J Constructor #2 (79, 80, 81)
- Numerical Jordan Form J Power #1 (82, 83, 84)
- Numerical Jordan Form J Power #2 (85, 86, 87)
- Jordan Form VJ Decomposition J (91, 92)
- Jordan Form VJ Decomposition V (93, 94)
- Jordan Form VJ Decomposition Constructor (95, 96, 97)
- Numerical Jordan Form J Diagonal (98, 99)
- Numerical Jordan Form VJ Decomposition Recover Original (100, 101, 102)
- Implementation of Matrix Norm Variants (104, 105)
- R1 Square Matrix Norm #1 (106, 107, 108)
- R1 Square Matrix Norm #2 (109, 110, 111)
- R1 Square Matrix Norm #3 (112, 113, 114)
- R1 Square Evaluator Validator Shell (115, 116, 117)
- R1 Square Positive Valued Norm (118, 119, 120)

Bug Fixes/Re-organization:

- Implementation of Jordan Normal Form #2 (103)

Samples:

- Matrix Power Series #1 (7, 8, 9)
- Matrix Power Series #2 (10, 11, 12)
- Matrix Power Series #3 (13, 14, 15)
- Matrix Power Series #4 (16, 17, 18)
- Matrix Power Series #5 (19, 20, 21)
- Matrix Power Series #6 (22, 23, 24)
- Matrix Power Series #7 (25, 26, 27)
- Matrix Power Series #8 (28, 29, 30)
- Matrix Power Series #9 (31, 32, 33)
- Matrix Power Series #10 (34, 35, 36)

IdeaDRIP:

--------------------------
#1 - Successive Over-Relaxation (sketch below)
--------------------------
--------------------------
1.1) Successive Over-Relaxation Method for A.x = b; A - n x n square matrix, x - unknown vector, b - RHS vector
1.2) Decompose A into Diagonal, Lower, and Upper Triangular Matrices; A = D + L + U
1.3) SOR Scheme uses omega input
1.4) Forward substitution scheme to iteratively determine the x_i's
1.5) SOR Scheme Linear System Convergence: Inputs A and D, Jacobi Iteration Matrix Spectral Radius, omega
- Construct Jacobi Iteration Matrix: C_Jacobian = I - (Inverse D).A
- Convergence Verification #1: Ensure that the Jacobi Iteration Matrix Spectral Radius is < 1
- Convergence Verification #2: Ensure omega is between 0 and 2
- Optimal Relaxation Parameter Expression in terms of Jacobi Iteration Matrix Spectral Radius
- Omega Based Convergence Rate Expression
- Gauss-Seidel omega = 1; corresponding Convergence Rate
- Optimal Omega Convergence Rate
1.6) Generic Iterative Solver Method:
- Inputs: Iterator Function f(x) and omega
- Unrelaxed Iterated Variable: x_n+1 = f(x_n)
- SOR Based Iterated Variable: x_n+1 = (1 - omega).x_n + omega.f(x_n)
- SOR Based Iterated Variable for Unknown Vector x: x_n+1 = (1 - omega).x_n + omega.(L_star inverse).(b - U.x_n)
--------------------------
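
A minimal NumPy sketch of the SOR sweep outlined in #1; `sor_solve`, the sample matrix, and the tolerance/iteration-cap parameters are illustrative assumptions, not the library's implementation.

```python
# Illustrative SOR sketch (not the library's API): solve A.x = b with
# relaxation parameter omega, assuming A has non-zero diagonal entries.
import numpy as np

def sor_solve(A, b, omega=1.5, tol=1e-10, max_iterations=10_000):
    """Successive Over-Relaxation via forward substitution on x in place."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    n = len(b)
    x = np.zeros(n)
    for _ in range(max_iterations):
        x_prev = x.copy()
        for i in range(n):
            # Gauss-Seidel style sweep: newest x values are used immediately
            sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_prev[i + 1:]
            x[i] = (1.0 - omega) * x_prev[i] + omega * (b[i] - sigma) / A[i, i]
        if np.linalg.norm(x - x_prev, ord=np.inf) < tol:
            break
    return x

# Convergence check per 1.5: spectral radius of the Jacobi iteration matrix
A = np.array([[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]])
b = np.array([2.0, 4.0, 10.0])
D_inv = np.diag(1.0 / np.diag(A))
c_jacobi = np.eye(3) - D_inv @ A
print(max(abs(np.linalg.eigvals(c_jacobi))))   # must be < 1
print(sor_solve(A, b), np.linalg.solve(A, b))  # the two should agree
```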

--------------------------
#2 - Symmetric Successive Over-Relaxation (sketch below)
--------------------------
--------------------------
2.1) SSOR Algorithm - Inputs: A, omega, and gamma
- Decompose into D and L
- Pre-conditioner Matrix: Expression from SSOR
- Finally the SSOR Iteration Formula
--------------------------
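
A sketch of the SSOR pre-conditioner mentioned in 2.1, assuming the standard form for a symmetric A with parameter omega; `ssor_preconditioner` and the example matrix are assumptions, and the gamma input above is not modeled here.

```python
# Illustrative SSOR pre-conditioner sketch (assumed form for symmetric A,
# not the library's API): M = w/(2-w) * (D/w + L) D^{-1} (D/w + L)^T.
import numpy as np

def ssor_preconditioner(A, omega=1.2):
    D = np.diag(np.diag(A))          # diagonal part
    L = np.tril(A, k=-1)             # strictly lower triangular part
    P = D / omega + L
    return (omega / (2.0 - omega)) * P @ np.linalg.inv(D) @ P.T

A = np.array([[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]])
M = ssor_preconditioner(A)
# The pre-conditioned operator M^{-1} A is typically better conditioned than A
print(np.linalg.cond(A), np.linalg.cond(np.linalg.solve(M, A)))
```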

----------------------------
#7 - Tridiagonal matrix algorithm (sketch below)
----------------------------
----------------------------
7.1) Is Tridiagonal Check
7.2) Core Algorithm:
- c Prime's and d Prime's Calculation
- Back Substitution for the Result
- Modified Algorithm with Better Book-keeping
7.3) Sherman-Morrison Algorithm:
- Choice of gamma
- Construct Tridiagonal B from A and gamma
- u Column from gamma and c_n
- v Column from a_1 and gamma
- Solve for y from B.y = d
- Solve for q from B.q = u
- Use Sherman-Morrison Formula to extract x
7.4) Alternate Boundary Condition Algorithm:
- Solve A.u = d for u
- Solve A.v = {-a_2, 0, ..., -c_n} for v
- Full solution is x_i = u_i + x_1 * v_i
- x_1 is computed using a closed-form formula
----------------------------
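
A minimal sketch of the core algorithm in 7.2 (the Thomas forward sweep for c'/d' followed by back substitution); `thomas_solve` and the test system are illustrative assumptions, not the library's implementation.

```python
# Illustrative Thomas-algorithm sketch for the tridiagonal solve in 7.2.
# a: sub-diagonal, b: diagonal, c: super-diagonal, d: right-hand side;
# assumes no zero pivots arise during the sweep.
import numpy as np

def thomas_solve(a, b, c, d):
    n = len(d)
    c_prime = np.zeros(n)
    d_prime = np.zeros(n)
    c_prime[0] = c[0] / b[0]
    d_prime[0] = d[0] / b[0]
    for i in range(1, n):                # forward sweep: c' and d'
        denominator = b[i] - a[i] * c_prime[i - 1]
        c_prime[i] = c[i] / denominator if i < n - 1 else 0.0
        d_prime[i] = (d[i] - a[i] * d_prime[i - 1]) / denominator
    x = np.zeros(n)
    x[-1] = d_prime[-1]
    for i in range(n - 2, -1, -1):       # back substitution
        x[i] = d_prime[i] - c_prime[i] * x[i + 1]
    return x

# Verify against a dense solve
a = np.array([0.0, -1.0, -1.0, -1.0])    # a[0] unused
b = np.array([4.0, 4.0, 4.0, 4.0])
c = np.array([-1.0, -1.0, -1.0, 0.0])    # c[-1] unused
d = np.array([5.0, 5.0, 5.0, 5.0])
A = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
print(thomas_solve(a, b, c, d), np.linalg.solve(A, d))
```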

----------------------------
#8 - Triangular Matrix (sketch below)
----------------------------
----------------------------
8.1) Description:
- Lower/Left Triangular Verification
- Upper/Right Triangular Verification
- Diagonal Matrix Verification
- Upper/Lower Trapezoidal Verification
8.2) Forward/Back Substitution:
- Inputs => L and b
- Forward Substitution
- Inputs => U and b
- Back Substitution
8.3) Properties:
- Is Matrix Normal, i.e., A times A transpose = A transpose times A
- Characteristic Polynomial
- Determinant/Permanent
8.4) Special Forms:
- Upper/Lower Unitriangular Matrix Verification
- Upper/Lower Strictly Triangular Matrix Verification
- Upper/Lower Atomic Triangular Matrix Verification
----------------------------
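
A sketch of the forward/back substitution step in 8.2; the helpers and the example matrix are illustrative assumptions, not the library's implementation.

```python
# Illustrative forward/back substitution for triangular systems (8.2);
# L is lower triangular, U is upper triangular, both with non-zero diagonals.
import numpy as np

def forward_substitution(L, b):
    n = len(b)
    x = np.zeros(n)
    for i in range(n):
        x[i] = (b[i] - L[i, :i] @ x[:i]) / L[i, i]
    return x

def back_substitution(U, b):
    n = len(b)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

L = np.array([[2.0, 0.0, 0.0], [1.0, 3.0, 0.0], [4.0, -1.0, 5.0]])
b = np.array([2.0, 5.0, 11.0])
print(forward_substitution(L, b), np.linalg.solve(L, b))
print(back_substitution(L.T, b), np.linalg.solve(L.T, b))
```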

----------------------------
#9 - Sylvester Equation (sketch below)
----------------------------
----------------------------
9.1) Matrix Form:
- Inputs: A, B, and C
- Size Constraints Verification
9.2) Solution Criteria:
- Eigenspectra of A and -B: a unique solution exists if and only if they share no eigenvalue
9.3) Numerical Solution:
- Decomposition of A/B using Schur Decomposition into Triangular Form
- Forward/Back Substitution
----------------------------
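
A small check of the setup in #9 using SciPy's Sylvester solver (scipy.linalg.solve_sylvester solves A.X + X.B = C); the matrices are arbitrary examples, and the eigenvalue check mirrors the solution criterion in 9.2.

```python
# Illustrative Sylvester equation example (A.X + X.B = C) with a spectrum check.
import numpy as np
from scipy.linalg import solve_sylvester

A = np.array([[3.0, 1.0], [0.0, 2.0]])
B = np.array([[-1.0, 0.0], [2.0, -4.0]])
C = np.array([[1.0, 2.0], [3.0, 4.0]])

# Solution criterion (9.2): A and -B must share no eigenvalue
shared = set(np.round(np.linalg.eigvals(A), 10)) & set(np.round(np.linalg.eigvals(-B), 10))
print(len(shared) == 0)                 # unique solution exists when True

X = solve_sylvester(A, B, C)
print(np.allclose(A @ X + X @ B, C))    # True
```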

----------------------------
#10 - Bartels-Stewart Algorithm (sketch below)
----------------------------
----------------------------
10.1) Matrix Form:
- Inputs: A, B, and C
- Size Constraints Verification
10.2) Schur Decompositions:
- R = U^T A U - emits U and R
- S = V^T B^T V - emits V and S
- F = U^T C V
- Y = U^T X V
- Solution to R.Y - Y.S^T = F
- Finally X = U.Y.V^T
10.3) Computational Costs:
- Flops cost for Schur decomposition
- Flops cost overall
10.4) Hessenberg-Schur Decompositions:
- R = U^T A U becomes H = Q^T A Q - thus emits Q and H (Upper Hessenberg)
- Computational Costs
----------------------------
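
A walk-through of the 10.2 steps on assumed example matrices; for brevity the quasi-triangular system R.Y - Y.S^T = F is solved here with a Kronecker-product linear solve rather than the back-substitution pass the outline describes, so this is a sketch of the transformations, not of the production substitution kernel.

```python
# Illustrative Bartels-Stewart steps: Schur-transform, solve, transform back.
import numpy as np
from scipy.linalg import schur

def bartels_stewart(A, B, C):
    """Solve A.X - X.B = C following the steps in 10.2."""
    R, U = schur(A)          # R = U^T A U (real Schur form)
    S, V = schur(B.T)        # S = V^T B^T V
    F = U.T @ C @ V          # transformed right-hand side
    m, n = F.shape
    # R.Y - Y.S^T = F, vectorized: (I_n kron R - S kron I_m) vec(Y) = vec(F)
    K = np.kron(np.eye(n), R) - np.kron(S, np.eye(m))
    Y = np.linalg.solve(K, F.flatten(order='F')).reshape((m, n), order='F')
    return U @ Y @ V.T       # transform back: X = U.Y.V^T

A = np.array([[4.0, 1.0], [0.0, 3.0]])
B = np.array([[1.0, 2.0], [0.0, -2.0]])
C = np.array([[1.0, 0.0], [2.0, 1.0]])
X = bartels_stewart(A, B, C)
print(np.allclose(A @ X - X @ B, C))   # True
```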

----------------------------
#11 - Gershgorin Circle Theorem (sketch below)
----------------------------
----------------------------
11.1) Gershgorin Disc:
- Diagonal Entry
- Radius
- One Disc per Row/Column in Square Matrix
- Optimal Disc based on Row/Column
11.2) Tolerance based Gershgorin Convergence Criterion
11.3) Joint and Disjoint Discs
11.4) Gershgorin Strengthener
11.5) Row/Column Diagonal Dominance
----------------------------
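
A sketch of the row-based discs in 11.1: one disc per row, centered at the diagonal entry with radius equal to the off-diagonal absolute row sum; the helper and example matrix are illustrative assumptions.

```python
# Illustrative Gershgorin disc computation and containment check.
import numpy as np

def gershgorin_discs(A):
    A = np.asarray(A)
    centers = np.diag(A)
    radii = np.sum(np.abs(A), axis=1) - np.abs(centers)
    return list(zip(centers, radii))

A = np.array([[10.0, 1.0, 0.5], [0.2, 8.0, 0.2], [1.0, 1.0, 2.0]])
discs = gershgorin_discs(A)
# Every eigenvalue lies in at least one disc
for lam in np.linalg.eigvals(A):
    print(lam, any(abs(lam - center) <= radius for center, radius in discs))
```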

----------------------------
#12 - Condition Number (sketch below)
----------------------------
----------------------------
12.1) Condition Number Calculation:
- Absolute Error
- Relative Error
- Absolute Condition Number
- Relative Condition Number
12.2) Matrix Condition Number Calculation:
- Condition Number as a Product of Regular and Inverse Norms
- L2 Norm
- L2 Norm for Normal Matrix
- L2 Norm for Unitary Matrix
- Default Condition Number
- L Infinity Norm
- L Infinity Norm Triangular Matrix
12.3) Non-linear:
- One Variable
- Basic Formula
- Condition Numbers for Common Functions
- Multi Variables
----------------------------
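
A sketch of the matrix condition number in 12.2 as the product of the norm and the inverse norm, checked against the singular-value ratio for the L2 case; the example matrix is arbitrary.

```python
# Illustrative condition-number calculations: kappa(A) = ||A|| * ||A^-1||.
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
A_inv = np.linalg.inv(A)

kappa_2 = np.linalg.norm(A, 2) * np.linalg.norm(A_inv, 2)
kappa_inf = np.linalg.norm(A, np.inf) * np.linalg.norm(A_inv, np.inf)

# For the L2 norm, kappa equals sigma_max / sigma_min
singular_values = np.linalg.svd(A, compute_uv=False)
print(kappa_2, singular_values[0] / singular_values[-1])   # agree
print(kappa_inf, np.linalg.cond(A, np.inf))                # agree
```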

----------------------------
#13 - Unitary Matrix (sketch below)
----------------------------
----------------------------
13.1) Properties:
- U.U conjugate transpose = I defines a unitary matrix
- U.U conjugate transpose = U conjugate transpose.U = I
- |det U| = 1
- Eigenvectors are orthogonal
- U conjugate transpose = U inverse
- Norm (U.x) = Norm (x)
- Normal Matrix U
13.2) 2x2 Elementary Matrices:
- Wiki Representation #1
- Wiki Representation #2
- Wiki Representation #3
- Wiki Representation #4
----------------------------
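
A quick numerical check of the properties in 13.1 on an assumed 2x2 example (a rotation scaled by a unit-modulus phase), not tied to the library's representations.

```python
# Illustrative unitary-matrix property checks.
import numpy as np

theta = 0.3
U = np.exp(1j * 0.7) * np.array([[np.cos(theta), -np.sin(theta)],
                                 [np.sin(theta),  np.cos(theta)]])

UH = U.conj().T                                                        # conjugate transpose
print(np.allclose(U @ UH, np.eye(2)), np.allclose(UH @ U, np.eye(2)))  # U U^H = U^H U = I
print(abs(np.linalg.det(U)))                                           # |det U| = 1
x = np.array([1.0 + 2.0j, -0.5j])
print(np.isclose(np.linalg.norm(U @ x), np.linalg.norm(x)))            # norm preserved
```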

----------------------------
#14 - Matrix Norm (sketch below)
----------------------------
----------------------------
14.1) Properties:
- ||A|| >= 0
- ||A|| = 0 if and only if every element is zero
- ||cA|| = |c|.||A||
- ||A + B|| <= ||A|| + ||B||
- ||A . B|| <= ||A|| . ||B||
14.2) Matrix norms induced by vector p-norms
- p = 1, ∞
- p = 1 => maximum absolute column sum
- p = ∞ => maximum absolute row sum
- Spectral norm (p = 2)
- ||A||_2 = sqrt(lambda_max(A^* A)) = sigma_max(A)
- Properties:
- ||A||_2 = sup {x^* A y : x in K^m, y in K^n with ||x||_2 = ||y||_2 = 1}
- ||A^* A||_2 = ||A A^*||_2 = ||A||_2^2
- ||A||_2 = sigma_max(A) <= ||A||_F = sqrt(sum_i sigma_i^2(A))
- ||A||_2 = sqrt(rho(A^* A)) <= sqrt(||A^* A||_∞) <= sqrt(||A||_1 . ||A||_∞)
- Matrix Norms Induced by Vector α- and β- Norms
- ||A||_(2,∞) = max over 1 <= i <= m of ||A_(i:)||_2
- ||A||_(1,2) = max over 1 <= j <= n of ||A_(:j)||_2
- Properties Verification
- Square Matrix Verification
- Consistent and Compatible Norms Verification
- "Entry-wise" matrix norms
- ||A||_(p,p) = ||vec(A)||_p = (sum_(i=1..m) sum_(j=1..n) |a_ij|^p)^(1/p)
- Entry-wise L_(2,1) and L_(p,q) Matrix Norms
- ||A||_(2,1) = sum_(j=1..n) ||a_j||_2 = sum_(j=1..n) sqrt(sum_(i=1..m) |a_ij|^2)
- ||A||_(p,q) = (sum_(j=1..n) (sum_(i=1..m) |a_ij|^p)^(q/p))^(1/q)
- Frobenius norm (check the wiki elements)
- Max norm (check the wiki elements)
- Schatten norms
- ||A||_p = (sum_(i=1..min(m,n)) sigma_i(A)^p)^(1/p)
- Ky-Fan / Nuclear norm - ||A||_* = trace(sqrt(A^* A)) = sum_(i=1..min(m,n)) sigma_i(A)
- Hölder's Inequality
- Schatten Inequality
- Monotone norms
- Cut norms (check the wiki elements)
- Equivalence of Norms (check the wiki elements)
- Examples of Norm Equivalence (check the wiki elements)
----------------------------
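
A check of the induced and entry-wise norms in 14.2 on an arbitrary example matrix, including one of the listed bounds; the values are illustrative only.

```python
# Illustrative matrix norm calculations and a bound from 14.2.
import numpy as np

A = np.array([[1.0, -2.0], [3.0, 4.0]])

norm_1 = np.abs(A).sum(axis=0).max()       # maximum absolute column sum
norm_inf = np.abs(A).sum(axis=1).max()     # maximum absolute row sum
sigma = np.linalg.svd(A, compute_uv=False)
norm_2 = sigma[0]                          # spectral norm = sigma_max(A)
norm_frobenius = np.sqrt((sigma ** 2).sum())

print(norm_1, np.linalg.norm(A, 1))
print(norm_inf, np.linalg.norm(A, np.inf))
print(norm_2, np.linalg.norm(A, 2))
print(norm_frobenius, np.linalg.norm(A, 'fro'))
# Bound from 14.2: ||A||_2 <= sqrt(||A||_1 * ||A||_inf)
print(norm_2 <= np.sqrt(norm_1 * norm_inf))
```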

----------------------------
#15 - Spectral Radius (sketch below)
----------------------------
----------------------------
15.1) Definition - Matrix:
- Spectral Radius - Max of the Absolute Values of the Eigenvalues
15.2) Definition - Bounded Complex Linear Operators:
- Spectral Radius - Supremum of the Absolute Values of the Spectrum
- Gelfand Formula applies to Bounded Complex Linear Operators
15.3) Definition - Graph:
- Function on Graph Vertex
- Square Integrability of Function across Vertices
- Graph Function Map - Square Integrability Function Space across Vertices 1 -> 2
- Graph Adjacency Operator - Function Sum over edges of a vertex
15.4) Upper Bounds:
- Upper bound on the Spectral Radius of a Matrix: rho(A) <= (Norm (A^k))^(1/k) for every k >= 1
- Upper bounds for the Spectral Radius of a Graph
15.5) Jordan Normal Power Sequence
- J (m_i, lambda_i)
- J (m_i, lambda_i) power k
- Jordan Normal J Matrix
- Jordan Normal J Matrix power k
- A = V.J.V^-1
- A^k = V.J^k.V^-1
15.6) Gelfand's formula
- The Formula
- Gelfand's formula for Commuting Matrices
----------------------------
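
A check of the upper bound in 15.4 and of Gelfand's formula in 15.6 on an assumed example matrix: rho(A) <= ||A^k||^(1/k) for every k, with the bound tightening toward rho(A) as k grows.

```python
# Illustrative spectral radius bound / Gelfand's formula check.
import numpy as np

A = np.array([[0.6, 0.3], [0.2, 0.5]])
rho = max(abs(np.linalg.eigvals(A)))       # spectral radius

for k in (1, 2, 5, 20, 100):
    bound = np.linalg.norm(np.linalg.matrix_power(A, k), 2) ** (1.0 / k)
    print(k, rho, bound, rho <= bound + 1e-12)   # bound tightens toward rho
```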

--------------------------
#16 - Crank–Nicolson method (sketch below)
--------------------------
--------------------------
16.1) von Neumann Stability Validator - Inputs: time step, diffusivity, space step
- 1D => time step * diffusivity / space step ^ 2 < 0.5
- nD => time step * diffusivity / (space step hypotenuse) ^ 2 < 0.5
16.2) Set up:
- Input: Spatial variable x
- Input: Time variable t
- Inputs: State variable u, du/dx, d2u/dx2 - all at time t
- Second Order, 1D => du/dt = F (u, x, t, du/dx, d2u/dx2)
16.3) Finite Difference Evolution Scheme:
- Time Step delta_t, Space Step delta_x
- Forward Difference: F calculated at time t
- Backward Difference: F calculated at time t + delta_t
- Crank-Nicolson: Average of Forward/Backward
16.4) 1D Diffusion:
- Inputs: 1D von Neumann Stability Validator, Number of time/space steps
- Time Index n, Space Index i
- Explicit Tridiagonal form for the discretization - State concentration at n+1 given state concentration at n
- Non-linear diffusion coefficient:
- Linearization across x_i and x_i+1
- Quasi-explicit forms accommodation
16.5) 2D Diffusion:
- Inputs: 2D von Neumann Stability Validator, Number of time/space steps
- Time Index n, Space Indexes i, j
- Explicit Tridiagonal form for the discretization - State concentration at n+1 given state concentration at n
- Explicit Solution using the Alternating Direction Implicit Scheme
16.6) Extension to Iterative Solver Schemes above:
- Input: State Space "velocity" dF/du
- Input: State "Step Size" delta_u
- Fixed Point Iterative Location Scheme
- Relaxation Scheme based Robustness => Input: Relaxation Parameter
--------------------------
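
A sketch of the 1D diffusion stepping in 16.4 under assumed conditions (constant diffusivity, zero Dirichlet boundaries, a sine initial profile); the grid, the function name, and the dense linear solve standing in for the tridiagonal solve are all illustrative, not the library's implementation.

```python
# Illustrative Crank-Nicolson step for 1D diffusion du/dt = D * d2u/dx2.
# Each step solves (I - r/2 * L) u_new = (I + r/2 * L) u_old, r = D*dt/dx^2.
import numpy as np

def crank_nicolson_diffusion(u0, diffusivity, dx, dt, steps):
    n = len(u0)
    r = diffusivity * dt / dx ** 2
    L = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
         + np.diag(np.ones(n - 1), -1))          # second-difference stencil
    A_implicit = np.eye(n) - 0.5 * r * L
    A_explicit = np.eye(n) + 0.5 * r * L
    u = np.array(u0, dtype=float)
    for _ in range(steps):
        u = np.linalg.solve(A_implicit, A_explicit @ u)
    return u

x = np.linspace(0.0, 1.0, 51)[1:-1]              # interior points, dx = 0.02
u0 = np.sin(np.pi * x)                           # decays like exp(-pi^2 * D * t)
# 16.1 validator passes here: r = 1.0 * 0.0001 / 0.02^2 = 0.25 < 0.5
u = crank_nicolson_diffusion(u0, diffusivity=1.0, dx=x[1] - x[0], dt=1e-4, steps=1000)
print(u.max(), np.exp(-np.pi ** 2 * 1.0 * 0.1))  # close agreement at t = 0.1
```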