pihnn.nn.DD_PIHNN#
- class pihnn.nn.DD_PIHNN(PDE, units, boundary, material={'lambda': 1, 'mu': 1}, activation=torch.exp, has_bias=True, rhs_solution=None)#
Bases: PIHNN
Domain-decomposition physics-informed holomorphic neural networks (DD-PIHNNs).
DD-PIHNNs were introduced in Calafà et al. [2024], Section 4.3, to solve problems on multiply-connected domains. The structure is similar to pihnn.nn.PIHNN but includes multiple stacked networks, one for each function \(\varphi,\psi\) and each subdomain.
- Parameters:
PDE (str) – Problem to solve, either ‘laplace’, ‘biharmonic’, ‘km’ or ‘km-so’.
units (list of int) – List containing number of units at each layer, e.g., [1,10,10,1].
material (dict) – Properties of the material, dictionary with ‘lambda’ (first Lamé coefficient), ‘mu’ (second Lamé coefficient).
boundary (pihnn.geometries.boundary) – Geometry of the domain, needed for information regarding the DD partition.
activation (callable) – Activation function, by default the complex exponential.
has_bias (bool) – True if the linear layers include bias vectors.
rhs_solution (callable) – Particular solution to the non-homogeneous problem. E.g., \(x^2+y^2\) for \(\nabla^2u=4\).
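A minimal construction sketch (not taken from the original documentation) is shown below. It assumes a `boundary` object describing the multiply-connected domain and its DD partition has already been built with pihnn.geometries; that step is omitted here, and all values are purely illustrative.

```python
import torch
import pihnn.nn as nn_pihnn

# Assumption: `boundary` is a pihnn.geometries.boundary instance carrying the
# DD partition, built elsewhere:
# boundary = pihnn.geometries.boundary(...)

model = nn_pihnn.DD_PIHNN(
    PDE='km',                         # 'laplace', 'biharmonic', 'km' or 'km-so'
    units=[1, 10, 10, 1],             # units at each layer
    boundary=boundary,                # geometry with DD information
    material={'lambda': 1, 'mu': 1},  # Lamé coefficients
    activation=torch.exp,             # complex exponential (default)
)
```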
- unflatten(z_flat, domains)#
Internal operation to transform a 1D batch of coordinates of dimension \(N\) to a 2D matrix of dimension \([D,N_D]\), where \(D\) is the number of subdomains and \(N_D\) is approximately the number of points per subdomain. This operation is needed to increase the efficiency of the network. See pihnn.geometries.boundary.extract_points_dd() for further details.
- Parameters:
z_flat (torch.tensor) – 1D input vector.
domains (torch.tensor) – A 2D tensor such that \(T_{i,j}=1\) if and only if the \(j\)-th point belongs to the \(i\)-th domain, \(0\) otherwise.
- Returns:
z (torch.tensor) - 2D output vector.
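The shape bookkeeping is easiest to see on a toy example. The sketch below is not the library implementation (which groups points according to pihnn.geometries.boundary.extract_points_dd()); it only illustrates the \(N \to [D,N_D]\) regrouping driven by the membership matrix.

```python
import torch

z_flat = torch.tensor([0.1+0.2j, 0.5+0.1j, 0.9+0.4j, 0.3+0.7j])
domains = torch.tensor([[1, 0, 1, 0],   # subdomain 0 owns points 0 and 2
                        [0, 1, 0, 1]])  # subdomain 1 owns points 1 and 3

D = domains.shape[0]
N_D = int(domains.sum(dim=1).max())     # max number of points per subdomain
z = torch.zeros(D, N_D, dtype=z_flat.dtype)
for i in range(D):
    pts = z_flat[domains[i].bool()]     # points belonging to subdomain i
    z[i, :pts.numel()] = pts            # one row per subdomain
print(z.shape)                          # torch.Size([2, 2])
```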
- flatten(z, domains)#
Inverse operation of pihnn.nn.DD_PIHNN.unflatten().
- Parameters:
z (torch.tensor) – 2D input vector.
domains (torch.tensor) – A 2D tensor such that \(T_{i,j}=1\) if and only if the \(j\)-th point belongs to the \(i\)-th domain, \(0\) otherwise.
- Returns:
z_flat (torch.tensor) - 1D output vector.
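Continuing the toy sketch above (and reusing its `z`, `z_flat`, `domains` and `D`), flattening amounts to scattering each subdomain row back to its original positions in the flat batch; this is only an illustration, not the library code.

```python
z_back = torch.zeros_like(z_flat)
for i in range(D):
    mask = domains[i].bool()
    z_back[mask] = z[i, :int(mask.sum())]   # scatter row i back to its points
assert torch.allclose(z_back, z_flat)       # round trip recovers the flat batch
```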
- forward(z, flat_output=True, real_output=False)#
Forward step, i.e., compute:
\[\mathcal{L}_{L,t,d} \circ \phi \circ \mathcal{L}_{L-1,t,d} \circ \phi \circ \dots \circ \mathcal{L}_{1,t,d} (z)\]
where \(z\) is the input, \(\phi\) the activation function, \(d=1,\dots,D\) the domain to which \(z\) belongs, and \(\{\mathcal{L}_{l,t,d}\}\) the complex linear layers (pihnn.nn.ComplexLinear) for each layer \(l=1,\dots,L\) and stacked network \((t,d)\).
- Parameters:
z (torch.tensor) – Input of the network, typically a batch of coordinates from the domain boundary.
flat_output (bool) – If True, the output of the network is a 1D/flat vector. Otherwise, the output is a 2D tensor whose first dimension is the number of domains and whose second dimension is the number of points per domain. The second option is necessary during training, whereas a flat output suffices in other circumstances. Notice that the output is flat only if the input is also flat.
real_output (bool) – Whether to provide the output in the real-valued representation.
- Returns:
phi (torch.tensor) - Output of the network.
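A usage sketch, reusing the hypothetical `model` from the constructor example above; the coordinates are illustrative only, and `real_output=True` simply requests the real-valued representation described above.

```python
z = torch.tensor([0.3+0.2j, 0.7+0.5j])       # complex coordinates in the domain
with torch.no_grad():
    out_flat = model(z)                      # flat 1D output (flat_output=True default)
    out_real = model(z, real_output=True)    # real-valued representation
```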
- initialize_weights(method, beta=0.5, sample=None, gauss=None)#
Equivalent to pihnn.nn.PIHNN.initialize_weights().
- Parameters:
method (str) – Either ‘he’, ‘he_scaled’, ‘exp’, see description above.
beta (float) – Scaling coefficient in the scaled He initialization, \(\beta\) coefficient in the Calafà initialization, not used in He initialization.
sample (torch.tensor) – Initial sample \(x_0\) in the Calafà initialization, not used in the other methods.
gauss (int) – \(M_e\) coefficient in the Calafà initialization, not used in the other methods.
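A usage sketch, again reusing the hypothetical `model` from the constructor example above; the `sample` points and `gauss` value are illustrative only.

```python
# Calafà ('exp') initialization: requires an initial sample x_0 and the M_e coefficient.
sample = torch.tensor([0.2+0.1j, 0.6+0.3j, 0.4+0.8j])  # initial sample x_0 (illustrative)
model.initialize_weights('exp', beta=0.5, sample=sample, gauss=10)

# He-type initializations need neither `sample` nor `gauss`:
# model.initialize_weights('he')
```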