14 changes: 14 additions & 0 deletions docs/Project.toml
@@ -1,6 +1,20 @@
[deps]
DiffEqFlux = "aae7a2af-3d4f-5e19-a356-7da93b79d9d0"
Documenter = "e30172f5-a6a5-5a46-863b-614d45cd2de4"
DomainSets = "5b8099bc-c8ec-5219-889f-1d9e522a28bf"
Flux = "587475ba-b771-5e3f-ad9e-33799f191a9c"
Integrals = "de52edbc-65ea-441a-8357-d3a637375a31"
IntegralsCubature = "c31f79ba-6e32-46d4-a52f-182a8ac42a54"
ModelingToolkit = "961ee093-0014-501f-94e3-6117800e7a78"
Optimization = "7f7a1694-90dd-40f0-9382-eb1efda571ba"
OptimizationFlux = "253f991c-a7b2-45f8-8852-8b9a9df78a86"
OptimizationOptimJL = "36348300-93cb-4f02-beb5-3c3902f8871e"
OptimizationPolyalgorithms = "500b13db-7e66-49ce-bda4-eed966be6282"
OrdinaryDiffEq = "1dea7af3-3e70-54e6-95c3-0bf5283fa5ed"
Plots = "91a5bcdd-55d7-5caf-9e0b-520d859cae80"
QuasiMonteCarlo = "8a4e6c94-4038-4cdc-81c3-7e6ffdb2a71b"
Roots = "f2b01f46-fcfa-551c-844a-d8ac1e96c665"
SpecialFunctions = "276daf66-3868-5448-9aa4-cd146d93841b"

[compat]
Documenter = "0.27"
3 changes: 3 additions & 0 deletions docs/make.jl
@@ -1,5 +1,8 @@
using Documenter, NeuralPDE

ENV["GKSwstype"] = "100" # run GR plotting headlessly for the docs build
using Plots

include("pages.jl")

makedocs(sitename = "NeuralPDE.jl",
4 changes: 2 additions & 2 deletions docs/pages.jl
@@ -8,10 +8,9 @@ pages = [
"pinn/low_level.md",
"pinn/ks.md",
"pinn/fp.md",
"pinn/parm_estim.md",
"pinn/param_estim.md",
"pinn/heterogeneous.md",
"pinn/integro_diff.md",
"pinn/debugging.md",
"pinn/neural_adapter.md"],
"Specialized Neural PDE Tutorials" => Any["examples/kolmogorovbackwards.md",
"examples/optimal_stopping_american.md"],
@@ -23,4 +22,5 @@ pages = [
"solvers/kolmogorovbackwards_solver.md",
"solvers/optimal_stopping.md",#TODO
"solvers/nnrode.md"],
"Developer Documentation" => Any["developer/debugging.md"],
]
@@ -1,5 +1,7 @@
# Debugging PINN Solutions

#### Note: this page is currently out of date!

Let's walk through debugging functions for the physics-informed neural network
PDE solvers.
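As a minimal sketch of the kind of inspection discussed on this page (assuming a `pdesystem` and `discretization` built as in the other PINN tutorials, and the `symbolic_discretize` API used in the Fokker–Planck example), the individual loss components can be pulled out and evaluated separately:

```julia
using NeuralPDE

# Assumes `pdesystem` and `discretization` are already defined,
# as in the other PINN tutorials in these docs.
sym_prob = NeuralPDE.symbolic_discretize(pdesystem, discretization)

# Inner loss functions for each PDE and boundary condition; evaluating
# them at the initial parameter vector shows which term dominates the
# total loss before training even starts.
pde_losses = sym_prob.loss_functions.pde_loss_functions
bc_losses = sym_prob.loss_functions.bc_loss_functions

initθ = discretization.init_params
println("pde losses: ", map(l -> l(initθ), pde_losses))
println("bc losses:  ", map(l -> l(initθ), bc_losses))
```

Whether the parameter vector needs to be flattened first depends on how many networks the discretization uses; the single-network case is shown here.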

5 changes: 2 additions & 3 deletions docs/src/pinn/2D.md
@@ -27,9 +27,8 @@ x \in [0, 2] \, ,\ y \in [0, 2] \, , \ t \in [0, 2] \, ,
with physics-informed neural networks.

```julia
using NeuralPDE, Flux, ModelingToolkit, Optimization, OptimizationOptimJL, DiffEqFlux
using Quadrature, Cuba, CUDA, QuasiMonteCarlo
import ModelingToolkit: Interval, infimum, supremum
using NeuralPDE, Flux, Optimization, OptimizationOptimJL, DiffEqFlux
import ModelingToolkit: Interval

@parameters t x y
@variables u(..)
4 changes: 2 additions & 2 deletions docs/src/pinn/3rd.md
@@ -14,7 +14,7 @@ x &\in [0, 1] \, ,

We will use physics-informed neural networks.

```julia
```@example 3rdDerivative
using NeuralPDE, Flux, ModelingToolkit, Optimization, OptimizationOptimJL, DiffEqFlux
import ModelingToolkit: Interval, infimum, supremum

@@ -52,7 +52,7 @@ phi = discretization.phi

We can plot the predicted solution of the ODE and its analytical solution.

```julia
```@example 3rdDerivative
using Plots

analytic_sol_func(x) = (π*x*(-x+(π^2)*(2*x-3)+1)-sin(π*x))/(π^3)
17 changes: 10 additions & 7 deletions docs/src/pinn/fp.md
@@ -20,8 +20,9 @@ p(-2.2) = p(2.2) = 0

with Physics-Informed Neural Networks.

```julia
```@example fokkerplank
using NeuralPDE, Flux, ModelingToolkit, Optimization, OptimizationOptimJL, DiffEqFlux
using Integrals, IntegralsCubature
import ModelingToolkit: Interval, infimum, supremum
# the example is taken from this article https://arxiv.org/abs/1910.10503
@parameters x
@@ -69,13 +70,15 @@ discretization = PhysicsInformedNN(chain,
init_params = initθ,
additional_loss=norm_loss_function)

@named pde_system = PDESystem(eq,bcs,domains,[x],[p(x)])
prob = discretize(pde_system,discretization)
@named pdesystem = PDESystem(eq,bcs,domains,[x],[p(x)])
prob = discretize(pdesystem,discretization)
phi = discretization.phi

pde_inner_loss_functions = prob.f.f.loss_function.pde_loss_function.pde_loss_functions.contents
bcs_inner_loss_functions = prob.f.f.loss_function.bcs_loss_function.bc_loss_functions.contents
sym_prob = NeuralPDE.symbolic_discretize(pdesystem, discretization)

phi = discretization.phi
pde_inner_loss_functions = sym_prob.loss_functions.pde_loss_functions
bcs_inner_loss_functions = sym_prob.loss_functions.bc_loss_functions

cb_ = function (p,l)
println("loss: ", l )
@@ -92,7 +95,7 @@ res = Optimization.solve(prob,BFGS(),callback = cb_,maxiters=2000)

And some analysis:

```julia
```@example fokkerplank
using Plots
C = 142.88418699042 #fitting param
analytic_sol_func(x) = C*exp((1/(2*_σ^2))*(2*α*x^2 - β*x^4))
17 changes: 13 additions & 4 deletions docs/src/pinn/heterogeneous.md
@@ -1,14 +1,17 @@
# Differential Equations with Heterogeneous Inputs
# Differential Equations with Heterogeneous Domains

A differential equation is said to have heterogeneous inputs when its dependent variables depend on different independent variables:
A differential equation is said to have heterogeneous domains when its dependent variables depend on different independent variables:

```math
u(x) + w(x, v) = \frac{\partial w(x, v)}{\partial v}
```

Here, we write an arbitrary heterogeneous system:

```julia
```@example heterogeneous
using NeuralPDE, DiffEqFlux, Flux, ModelingToolkit, Optimization, OptimizationOptimJL
import ModelingToolkit: Interval

@parameters x y
@variables p(..) q(..) r(..) s(..)
Dx = Differential(x)
@@ -29,9 +32,15 @@ domains = [x ∈ Interval(0.0, 1.0),
numhid = 3
fastchains = [[FastChain(FastDense(1, numhid, Flux.σ), FastDense(numhid, numhid, Flux.σ), FastDense(numhid, 1)) for i in 1:2];
[FastChain(FastDense(2, numhid, Flux.σ), FastDense(numhid, numhid, Flux.σ), FastDense(numhid, 1)) for i in 1:2]]
discretization = NeuralPDE.PhysicsInformedNN(fastchains, QuadratureTraining()
discretization = NeuralPDE.PhysicsInformedNN(fastchains, QuadratureTraining())

@named pde_system = PDESystem(eq, bcs, domains, [x,y], [p(x), q(y), r(x, y), s(y, x)])
prob = SciMLBase.discretize(pde_system, discretization)

callback = function (p,l)
println("Current loss is: $l")
return false
end

res = Optimization.solve(prob, BFGS(); callback = callback, maxiters=100)
```
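As a follow-up sketch (not part of the original tutorial), the trained sub-networks can be evaluated individually through `discretization.phi`; the parameter slicing below follows the `sep` indexing pattern used in the parameter-estimation tutorial and is an assumption here:

```julia
# Hypothetical post-processing: evaluate the first network, p(x), on a grid.
# Assumes `discretization` and `res` from the example above.
initθ = discretization.init_params
acum = [0; accumulate(+, length.(initθ))]
sep = [acum[i]+1:acum[i+1] for i in 1:length(acum)-1]

phi = discretization.phi
xs = 0.0:0.01:1.0
p_predict = [first(phi[1]([x], res.minimizer[sep[1]])) for x in xs]
```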
4 changes: 2 additions & 2 deletions docs/src/pinn/integro_diff.md
@@ -44,7 +44,7 @@ and boundary condition
u(0) = 0
```

```julia
```@example integro
using NeuralPDE, Flux, ModelingToolkit, Optimization, OptimizationOptimJL, DiffEqFlux, DomainSets
import ModelingToolkit: Interval, infimum, supremum

@@ -75,7 +75,7 @@ res = Optimization.solve(prob, BFGS(); callback = callback, maxiters=100)

Plotting the final solution and analytical solution

```julia
```@example integro
ts = [infimum(d.domain):0.01:supremum(d.domain) for d in domains][1]
phi = discretization.phi
u_predict = [first(phi([t],res.minimizer)) for t in ts]
4 changes: 2 additions & 2 deletions docs/src/pinn/ks.md
@@ -26,7 +26,7 @@ where `\theta = 1 - x/2` and with initial and boundary conditions:

We use physics-informed neural networks.

```julia
```@example ks
using NeuralPDE, Flux, ModelingToolkit, Optimization, OptimizationOptimJL, DiffEqFlux
import ModelingToolkit: Interval, infimum, supremum

@@ -77,7 +77,7 @@ phi = discretization.phi

And some analysis:

```julia
```@example ks
using Plots

xs,ts = [infimum(d.domain):dx:supremum(d.domain) for (d,dx) in zip(domains,[dx/10,dt])]
14 changes: 7 additions & 7 deletions docs/src/pinn/parm_estim.md → docs/src/pinn/param_estim.md
@@ -1,6 +1,6 @@
# Optimising Parameters of a Lorenz System

Consider a Lorenz System ,
Consider a Lorenz System,

```math
\begin{align*}
@@ -14,7 +14,7 @@ with Physics-Informed Neural Networks. Now we would consider the case where we w

We start by defining the problem,

```julia
```@example param_estim
using NeuralPDE, Flux, ModelingToolkit, Optimization, OptimizationOptimJL, DiffEqFlux, OrdinaryDiffEq, Plots
import ModelingToolkit: Interval, infimum, supremum
@parameters t, σ_, β, ρ
@@ -31,7 +31,7 @@ dt = 0.01

And the neural networks as,

```julia
```@example param_estim
input_ = length(domains)
n = 8
chain1 = FastChain(FastDense(input_,n,Flux.σ),FastDense(n,n,Flux.σ),FastDense(n,n,Flux.σ),FastDense(n,1))
@@ -43,7 +43,7 @@ We will add an additional loss term based on the data that we have in order to o

Here we simply calculate the solution of the Lorenz system with [OrdinaryDiffEq.jl](https://diffeq.sciml.ai/v1.10/tutorials/ode_example.html#In-Place-Updates-1), relying on the adaptivity of the ODE solver to introduce non-uniformity into the time series.

```julia
```@example param_estim
function lorenz!(du,u,p,t)
du[1] = 10.0*(u[2]-u[1])
du[2] = u[1]*(28.0-u[3]) - u[2]
@@ -71,15 +71,15 @@ len = length(data[2])

Then we define the additional loss function `additional_loss(phi, θ, p)`. The function has three arguments: `phi`, the trial solution; `θ`, the parameters of the neural networks; and the hyperparameters `p`.

```julia
```@example param_estim
function additional_loss(phi, θ , p)
return sum(sum(abs2, phi[i](t_ , θ[sep[i]]) .- u_[[i], :])/len for i in 1:1:3)
end
```

Finally, we define and optimise using the `PhysicsInformedNN` interface.

```julia
```@example param_estim
discretization = NeuralPDE.PhysicsInformedNN([chain1 , chain2, chain3],NeuralPDE.GridTraining(dt), param_estim=true, additional_loss=additional_loss)
@named pde_system = PDESystem(eqs,bcs,domains,[t],[x(t), y(t), z(t)],[σ_, ρ, β], defaults=Dict([p .=> 1.0 for p in [σ_, ρ, β]]))
prob = NeuralPDE.discretize(pde_system,discretization)
@@ -93,7 +93,7 @@ p_ = res.minimizer[end-2:end] # p_ = [9.93, 28.002, 2.667]

And finally, some analysis by plotting.

```julia
```@example param_estim
initθ = discretization.init_params
acum = [0;accumulate(+, length.(initθ))]
sep = [acum[i]+1 : acum[i+1] for i in 1:length(acum)-1]
24 changes: 13 additions & 11 deletions docs/src/pinn/poisson.md
@@ -27,9 +27,9 @@ with grid discretization `dx = 0.1` using physics-informed neural networks.

## Copy-Pastable Code

```julia
using NeuralPDE, Flux, ModelingToolkit, Optimization, OptimizationOptimJL, DiffEqFlux
import ModelingToolkit: Interval, infimum, supremum
```@example
using NeuralPDE, Flux, Optimization, OptimizationOptimJL, DiffEqFlux
import ModelingToolkit: Interval

@parameters x y
@variables u(..)
@@ -90,9 +90,9 @@ plot(p1,p2,p3)

The ModelingToolkit PDE interface for this example looks like this:

```julia
```@example poisson
using NeuralPDE, Flux, ModelingToolkit, Optimization, OptimizationOptimJL, DiffEqFlux
import ModelingToolkit: Interval, infimum, supremum
import ModelingToolkit: Interval

@parameters x y
@variables u(..)
@@ -113,39 +113,39 @@ domains = [x ∈ Interval(0.0,1.0),
Here, we define the neural network, whose input size equals the number of dimensions of the problem and whose output size equals the number of equations in the system.


```julia
```@example poisson
# Neural network
dim = 2 # number of dimensions
chain = FastChain(FastDense(dim,16,Flux.σ),FastDense(16,16,Flux.σ),FastDense(16,1))
```

Convert the weights of the neural network from Float32 to Float64 so that all inner calculations are carried out in Float64.

```julia
```@example poisson
# Initial parameters of Neural network
initθ = Float64.(DiffEqFlux.initial_params(chain))
```

Here, we build the `PhysicsInformedNN` algorithm, where `dx` is the discretization step, `strategy` stores the information for choosing a training strategy, and `init_params = initθ` sets the initial parameters of the neural network.

```julia
```@example poisson
# Discretization
dx = 0.05
discretization = PhysicsInformedNN(chain, GridTraining(dx),init_params =initθ)
```

As described in the API docs, we now need to define the `PDESystem` and create PINNs problem using the `discretize` method.

```julia
```@example poisson
@named pde_system = PDESystem(eq,bcs,domains,[x,y],[u(x, y)])
prob = discretize(pde_system,discretization)
```

Here, we define the callback function and the optimizer, and then solve the PDE using PINNs (with `maxiters=1000` optimizer iterations).

```julia
```@example poisson
#Optimizer
opt = OptimizationOptimJL.BFGS()

@@ -160,14 +160,16 @@ phi = discretization.phi

We can plot the predicted solution of the PDE and compare it with the analytical solution in order to visualize the error.

```julia
```@example poisson
xs,ys = [infimum(d.domain):dx/10:supremum(d.domain) for d in domains]
analytic_sol_func(x,y) = (sin(pi*x)*sin(pi*y))/(2pi^2)

u_predict = reshape([first(phi([x,y],res.minimizer)) for x in xs for y in ys],(length(xs),length(ys)))
u_real = reshape([analytic_sol_func(x,y) for x in xs for y in ys], (length(xs),length(ys)))
diff_u = abs.(u_predict .- u_real)

using Plots

p1 = plot(xs, ys, u_real, linetype=:contourf,title = "analytic");
p2 = plot(xs, ys, u_predict, linetype=:contourf,title = "predict");
p3 = plot(xs, ys, diff_u,linetype=:contourf,title = "error");