This repository was archived by the owner on Sep 28, 2024. It is now read-only.

Commit 6c2cd97

Merge pull request #114 from ArnoStrouwen/md
[skip ci] LanguageTool
2 parents d2262cf + 090156b

File tree: 7 files changed (+28 −28 lines)

docs/src/apis.md

Lines changed: 5 additions & 5 deletions
@@ -17,8 +17,8 @@ v'(x) = \mathcal{F}^{-1} \{ F'(s) \}
 ```
 
 where ``v(x)`` and ``v'(x)`` denotes input and output function,
-``\mathcal{F} \{ \cdot \}``, ``\mathcal{F}^{-1} \{ \cdot \}`` are transform,
-inverse transform, respectively.
+``\mathcal{F} \{ \cdot \}``, ``\mathcal{F}^{-1} \{ \cdot \}`` are the transform and
+the inverse transform, respectively.
 
 Function ``g`` is a linear transform for lowering spectrum modes.
 
 ```@docs
@@ -35,9 +35,9 @@ Reference: [FNO2021](@cite)
 v_{t+1}(x) = \sigma(W v_t(x) + \mathcal{K} \{ v_t(x) \} )
 ```
 
-where ``v_t(x)`` is the input function for ``t``-th layer and
+where ``v_t(x)`` is the input function for the ``t``'th layer and
 ``\mathcal{K} \{ \cdot \}`` denotes spectral convolutional layer.
-Activation function ``\sigma`` can be arbitrary non-linear function.
+Activation function ``\sigma`` can be an arbitrary non-linear function.
 
 ```@docs
 OperatorKernel
@@ -56,7 +56,7 @@ v_{t+1}(x_i) = \sigma(W v_t(x_i) + \frac{1}{|\mathcal{N}(x_i)|} \sum_{x_j \in \m
 where ``v_t(x_i)`` is the input function for ``t``-th layer,
 ``x_i`` is the node feature for ``i``-th node and
 ``\mathcal{N}(x_i)`` represents the neighbors for ``x_i``.
-Activation function ``\sigma`` can be arbitrary non-linear function.
+Activation function ``\sigma`` can be an arbitrary non-linear function.
 
 ```@docs
 GraphKernel
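For concreteness, here is a minimal sketch of constructing the operator kernel layer documented above, ``v_{t+1}(x) = \sigma(W v_t(x) + \mathcal{K} \{ v_t(x) \})``. The `OperatorKernel(ch, modes, transform, σ)` call and the `FourierTransform` argument follow this package's docstrings as I understand them; treat the exact signature as an assumption.

```julia
# A sketch, not verified against this exact revision: an operator kernel layer
# with 64 input/output channels, 16 retained Fourier modes, and gelu activation.
using Flux, NeuralOperators

layer = OperatorKernel(64 => 64, (16,), FourierTransform, gelu)

# With the default permuted=false, data is ordered (x_1, ..., x_d, ch, batch):
v = rand(Float32, 100, 64, 5)   # 100 grid points, 64 channels, batch of 5
v′ = layer(v)                   # same layout as the input: (100, 64, 5)
```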

docs/src/index.md

Lines changed: 4 additions & 4 deletions
@@ -6,10 +6,10 @@ CurrentModule = NeuralOperators
 
 | ![](https://github.com/foldfelis/NeuralOperators.jl/blob/main/example/FlowOverCircle/gallery/ans.gif?raw=true) | ![](https://github.com/foldfelis/NeuralOperators.jl/blob/main/example/FlowOverCircle/gallery/inferenced.gif?raw=true) |
 |:----------------:|:--------------:|
-| **Ground Truth** | **Inferenced** |
+| **Ground Truth** | **Inferred** |
 
-The demonstration shown above is Navier-Stokes equation learned by the `MarkovNeuralOperator` with only one time step information.
-Example can be found in [`example/FlowOverCircle`](https://github.com/SciML/NeuralOperators.jl/tree/main/example/FlowOverCircle).
+The demonstration shown above is the Navier-Stokes equation learned by the `MarkovNeuralOperator` with only one time step information.
+The example can be found in [`example/FlowOverCircle`](https://github.com/SciML/NeuralOperators.jl/tree/main/example/FlowOverCircle).
 
 ## Quick start
 
@@ -73,7 +73,7 @@ trunk = Chain(Dense(24, 64, tanh), Dense(64, 72, tanh))
 model = DeepONet(branch, trunk)
 ```
 
-You can again specify loss, optimization and training parameters just as you would for a simple neural network with Flux.
+You can again specify loss, optimization, and training parameters just as you would for a simple neural network with Flux.
 
 ```julia
 loss(xtrain, ytrain, sensor) = Flux.Losses.mse(model(xtrain, sensor), ytrain)
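Continuing the quick start shown in this hunk, a training loop might look as follows. This is a hedged sketch: the branch width (32) and the data arrays are invented placeholders, and the implicit-parameter `Flux.train!` style assumes a Flux version contemporary with this commit.

```julia
using Flux, NeuralOperators

branch = Chain(Dense(32, 64, tanh), Dense(64, 72, tanh))   # assumed widths
trunk  = Chain(Dense(24, 64, tanh), Dense(64, 72, tanh))   # as in the diff above
model  = DeepONet(branch, trunk)

# Placeholder data: 100 IC instances sampled at 32 sensors, queried at 40
# locations described by 24 coordinates each; one target per (instance, location).
xtrain = rand(Float32, 32, 100)
sensor = rand(Float32, 24, 40)
ytrain = rand(Float32, 100, 40)

loss(x, y, s) = Flux.Losses.mse(model(x, s), y)
Flux.train!(loss, Flux.params(model), [(xtrain, ytrain, sensor)], ADAM(1.0f-3))
```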

docs/src/introduction.md

Lines changed: 5 additions & 5 deletions
@@ -21,11 +21,11 @@ by linking the operators into a Markov chain.
 
 ## [Deep Operator Network](https://github.com/SciML/NeuralOperators.jl/blob/main/src/DeepONet/DeepONet.jl)
 
-Deep operator network (DeepONet) learns a neural operator with the help of two sub-neural network structures described as the branch and the trunk network.
-The branch network is fed the initial conditions data, whereas the trunk is fed with the locations where the target(output) is evaluated from the corresponding initial conditions.
-It is important that the output size of the branch and trunk subnets is same so that a dot product can be performed between them.
+Deep operator network (DeepONet) learns a neural operator with the help of two sub-neural network structures, described as the branch and the trunk network.
+The branch network is fed the initial condition data, whereas the trunk is fed with the locations where the target (output) is evaluated from the corresponding initial conditions.
+It is important that the output size of the branch and trunk subnets is the same so that a dot product can be performed between them.
 
 ## [Nonlinear Manifold Decoders for Operator Learning](https://github.com/SciML/NeuralOperators.jl/blob/main/src/NOMAD/NOMAD.jl)
 
-Nonlinear Manifold Decoders for Operator Learning (NOMAD) learns a neural operator with a nonlinear decoder parameterized by a deep neural network which jointly takes output of approximator and the locations as parameters.
-The approximator network is fed with the initial conditions data. The output-of-approximator and the locations are then passed to a decoder neural network to get the target (output). It is important that the input size of the decoder subnet is sum of size of the output-of-approximator and number of locations.
+Nonlinear Manifold Decoders for Operator Learning (NOMAD) learns a neural operator with a nonlinear decoder parameterized by a deep neural network which jointly takes the output of the approximator and the locations as parameters.
+The approximator network is fed with the initial condition data. The output-of-approximator and the locations are then passed to a decoder neural network to get the target (output). It is important that the input size of the decoder subnet is the sum of size of the output-of-approximator and number of locations.
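The NOMAD sizing rule in the last hunk is easy to check with a small sketch (all sizes below are illustrative, not from the package):

```julia
using Flux

approximator_out = 16   # size of the approximator's output vector
n_locations      = 2    # e.g. one (x, t) query location
decoder_in       = approximator_out + n_locations   # 16 + 2 = 18

# Any decoder whose first layer accepts `decoder_in` inputs satisfies the rule:
decoder = Chain(Dense(decoder_in, 64, relu), Dense(64, 1))
```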

src/DeepONet/DeepONet.jl

Lines changed: 5 additions & 5 deletions
@@ -22,7 +22,7 @@ x --- branch --
 y --- trunk ---
 
 Where `x` represents the input function, discretely evaluated at its respective sensors.
-So the ipnut is of shape [m] for one instance or [m x b] for a training set.
+So, the input is of shape [m] for one instance or [m x b] for a training set.
 `y` are the probing locations for the operator to be trained. It has shape [N x n] for
 N different variables in the PDE (i.e. spatial and temporal coordinates) with each n distinct evaluation points.
 `u` is the solution of the queried instance of the PDE, given by the specific choice of parameters.
@@ -38,16 +38,16 @@ You can set up this architecture in two ways:
 flexibility and e.g. use an RNN or CNN instead of simple `Dense` layers.
 
 Strictly speaking, DeepONet does not imply either of the branch or trunk net to be a simple
-DNN. Usually though, this is the case which is why it's treated as the default case here.
+DNN. Usually this is the case, which is why it's treated as the default case here.
 
 # Example
 
 Consider a transient 1D advection problem ∂ₜu + u ⋅ ∇u = 0, with an IC u(x,0) = g(x).
-We are given several (b = 200) instances of the IC, discretized at 50 points each and want
+We are given several (b = 200) instances of the IC, discretized at 50 points each, and want
 to query the solution for 100 different locations and times [0;1].
 
-That makes the branch input of shape [50 x 200] and the trunk input of shape [2 x 100]. So the
-input for the branch net is 50 and 100 for the trunk net.
+That makes the branch input of shape [50 x 200] and the trunk input of shape [2 x 100]. So, the
+input for the branch net is 50 and 100 for the trunk net.
 
 # Usage
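The advection example in this docstring translates to roughly the following. The tuple-based `DeepONet` constructor and the output layout are assumptions based on the surrounding documentation; the hidden widths (64, 72) are arbitrary.

```julia
using NeuralOperators

# Branch input width 50 (sensor points), trunk input width 2 (a location and a time).
model = DeepONet((50, 64, 72), (2, 64, 72), tanh, tanh)

x = rand(Float32, 50, 200)   # b = 200 IC instances, discretized at 50 sensors each
y = rand(Float32, 2, 100)    # 100 (x, t) query points
u = model(x, y)              # one solution value per (instance, query): 200 × 100
```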

src/DeepONet/subnets.jl

Lines changed: 1 addition & 1 deletion
@@ -2,7 +2,7 @@
 Construct a Chain of `Dense` layers from a given tuple of integers.
 
 Input:
-A tuple (m,n,o,p) of integer type numbers that each describe the width of the i-th Dense layer to Construct
+A tuple (m,n,o,p) of integer type numbers that each describe the width of the i'th Dense layer to Construct
 
 Output:
 A `Flux` Chain with length of the input tuple and individual width given by the tuple elements
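As a sketch of what such a tuple-to-`Chain` helper does (the function name below is illustrative, not this file's API):

```julia
using Flux

# Build one Dense layer between each pair of consecutive widths in the tuple.
function dense_chain(widths::Tuple, σ = identity)
    Chain([Dense(widths[i], widths[i + 1], σ) for i in 1:(length(widths) - 1)]...)
end

net = dense_chain((50, 64, 72), tanh)   # Chain(Dense(50, 64, tanh), Dense(64, 72, tanh))
```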

src/FNO/FNO.jl

Lines changed: 3 additions & 3 deletions
@@ -18,14 +18,14 @@ Flux.@functor FourierNeuralOperator
 
 Fourier neural operator is a operator learning model that uses Fourier kernel to perform
 spectral convolutions.
-It is a promissing way for surrogate methods, and can be regarded as a physics operator.
+It is a promising way for surrogate methods, and can be regarded as a physics operator.
 
 The model is comprised of
 a `Dense` layer to lift (d + 1)-dimensional vector field to n-dimensional vector field,
 and an integral kernel operator which consists of four Fourier kernels,
 and two `Dense` layers to project data back to the scalar field of interest space.
 
-The role of each channel size described as follow:
+The role of each channel size described as follows:
 
 ```
 [1] input channel number
@@ -133,7 +133,7 @@ a `Dense` layer to lift d-dimensional vector field to n-dimensional vector field
 and an integral kernel operator which consists of four Fourier kernels,
 and a `Dense` layers to project data back to the scalar field of interest space.
 
-The role of each channel size described as follow:
+The role of each channel size described as follows:
 
 ```
 [1] input channel number
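To tie the channel list above to a concrete call, the sketch below uses the default channel sizes from the package README as I recall them; treat the keyword names and default values as assumptions.

```julia
using Flux, NeuralOperators

# ch[1] = 2 input channels are lifted to 64, passed through the four Fourier
# kernels, projected through a 128-wide Dense layer, and reduced to 1 output channel.
model = FourierNeuralOperator(ch = (2, 64, 64, 64, 64, 64, 128, 1),
                              modes = (16,),
                              σ = gelu)
```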

src/operator_kernel.jl

Lines changed: 5 additions & 5 deletions
@@ -31,10 +31,10 @@ end
 ## Keyword Arguments
 
 * `init`: Initial function to initialize parameters.
-* `permuted`: Whether the dim is permuted. If `permuted=true`, layer accepts
-    data in the order of `(ch, x_1, ... , x_d , batch)`,
-    otherwise the order is `(x_1, ... , x_d, ch, batch)`.
-* `T`: Data type of parameters.
+* `permuted`: Whether the dim is permuted. If `permuted=true`, the layer accepts
+    data in the order of `(ch, x_1, ... , x_d , batch)`.
+    Otherwise the order is `(x_1, ... , x_d, ch, batch)`.
+* `T`: Datatype of parameters.
 
 ## Example
 
@@ -132,7 +132,7 @@ end
 
 ## Keyword Arguments
 
-* `permuted`: Whether the dim is permuted. If `permuted=true`, layer accepts
+* `permuted`: Whether the dim is permuted. If `permuted=true`, the layer accepts
 data in the order of `(ch, x_1, ... , x_d , batch)`,
 otherwise the order is `(x_1, ... , x_d, ch, batch)`.
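The `permuted` keyword in these hunks is easiest to see with shapes. The `OperatorConv(2 => 5, (16,), FourierTransform)` form follows the example elsewhere in this file's docstrings; treat the exact signature as an assumption.

```julia
using NeuralOperators

# Default permuted=false: data ordered (x_1, ..., x_d, ch, batch).
conv = OperatorConv(2 => 5, (16,), FourierTransform)
x = rand(Float32, 1024, 2, 8)    # 1024 grid points, 2 channels, batch of 8
size(conv(x))                    # -> (1024, 5, 8)

# permuted=true: data ordered (ch, x_1, ..., x_d, batch).
conv_p = OperatorConv(2 => 5, (16,), FourierTransform, permuted = true)
xp = rand(Float32, 2, 1024, 8)
size(conv_p(xp))                 # -> (5, 1024, 8)
```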
