pihnn.nn.DD_PIHNN#

class pihnn.nn.DD_PIHNN(PDE, units, boundary, material={'lambda': 1, 'mu': 1}, activation=torch.exp, has_bias=True)#

Bases: PIHNN

Domain-decomposition physics-informed holomorphic neural networks (DD-PIHNNs).

DD-PIHNNs were introduced in Calafà et al. [2024], Section 4.3, to solve problems on multiply-connected domains. The structure is similar to pihnn.nn.PIHNN but includes multiple stacked networks, one for each function \(\varphi,\psi\) and each subdomain.

Parameters:
  • PDE (str) – Problem to solve, either ‘laplace’, ‘biharmonic’, ‘km’ or ‘km-so’.

  • units (list of int) – List containing number of units at each layer, e.g., [1,10,10,1].

  • boundary (pihnn.geometries.boundary) – Geometry of the domain, required for the domain-splitting information.

  • material (dict) – Properties of the material, dictionary with ‘lambda’ (first Lamé coefficient), ‘mu’ (second Lamé coefficient).

  • activation (callable) – Activation function, by default the complex exponential.

  • has_bias (bool) – True if the linear layers include bias vectors.

forward(z, flat_output=True, real_output=False)#

Forward step, i.e., compute for \(j=1,2\):

\[\mathcal{L}_{N,j,d} \circ \phi \circ \mathcal{L}_{N-1,j,d} \circ \phi \dots \circ \mathcal{L}_{1,j,d} (z)\]

where \(z\) is the input, \(\phi\) the activation function, \(d\in \mathbb{N}\) the domain to which \(z\) belongs and \(\{\mathcal{L}_{i,j,d}\}\) the complex linear layers (pihnn.nn.ComplexLinear) for each layer \(i\) and stacked network \((j,d)\).
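The composition above can be sketched in plain Python. This is a minimal illustration of the layer structure only, not the library's implementation: the helper names, the `units = [1, 3, 3, 1]` widths, and the random weights are all made up for the example, and the domain/stack indices \(j,d\) are dropped (a single stack is shown).

```python
import cmath
import random

random.seed(0)

def rand_c():
    """Random complex number with real/imaginary parts in [-1, 1] (illustrative only)."""
    return complex(random.uniform(-1, 1), random.uniform(-1, 1))

def make_layer(n_in, n_out):
    """A complex linear layer: complex weight matrix and complex bias vector."""
    weights = [[rand_c() for _ in range(n_in)] for _ in range(n_out)]
    bias = [rand_c() for _ in range(n_out)]
    return weights, bias

def apply_layer(layer, x):
    """y_i = sum_j W_ij * x_j + b_i, all in complex arithmetic."""
    W, b = layer
    return [sum(w * xj for w, xj in zip(row, x)) + bi for row, bi in zip(W, b)]

def forward(layers, z):
    """L_N ∘ φ ∘ L_{N-1} ∘ φ ∘ ... ∘ L_1 (z), with φ = complex exponential.

    The activation is applied after every layer except the last,
    matching the composition in the formula above."""
    x = [z]
    for layer in layers[:-1]:
        x = [cmath.exp(v) for v in apply_layer(layer, x)]
    return apply_layer(layers[-1], x)[0]

units = [1, 3, 3, 1]  # same convention as the `units` argument: widths per layer
layers = [make_layer(a, b) for a, b in zip(units[:-1], units[1:])]
out = forward(layers, 0.5 + 0.2j)  # a single complex output for a complex input
```

Because each complex linear layer and the complex exponential are holomorphic, the whole composition is holomorphic in \(z\), which is the property the network exploits.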

Parameters:
  • z (torch.tensor) – Input of the network, typically a batch of coordinates from the domain boundary.

  • flat_output (bool) – If True, the output of the network is a 1D (flat) vector. Otherwise, the output is a 2D tensor whose first dimension indexes the domains and whose second dimension indexes the points per domain. The 2D output is required during training, whereas the flat output suffices in other circumstances. Notice that the output is flat only if the input is also flat.

  • real_output (bool) – Whether to provide the output in the real-valued representation.

Returns:

phi (torch.tensor) - Output of the network. It has the same shape as the input for the Laplace problem and twice its size for the other problems.

initialize_weights(method, beta=0.5, sample=None, gauss=None)#

Equivalent to pihnn.nn.PIHNN.initialize_weights().

Parameters:
  • method (str) – Either ‘he’, ‘he_scaled’, ‘exp’, see description above.

  • beta (float) – Scaling coefficient in the scaled He initialization, \(\beta\) coefficient in the Calafà initialization, not used in He initialization.

  • sample (torch.tensor) – Initial sample \(x_0\) in the Calafà initialization, not used in the other methods.

  • gauss (int) – \(M_e\) coefficient in the Calafà initialization, not used in the other methods.
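For intuition, a plain He-style initialization of a complex weight matrix can be sketched as follows. This is only an illustration of the 'he' flavor under the common recipe of drawing real and imaginary parts independently with variance \(2/n_{in}\); it is not the library's exact 'he_scaled' or 'exp' (Calafà) schemes, which additionally use \(\beta\), the sample \(x_0\), and \(M_e\). The function name and the widths are hypothetical.

```python
import math
import random

random.seed(0)

def he_complex(n_in, n_out, beta=1.0):
    """He-style init for a complex weight matrix (illustrative sketch).

    Real and imaginary parts are drawn independently from
    N(0, beta * 2 / n_in); beta = 1 gives plain He initialization,
    beta < 1 a scaled variant."""
    std = math.sqrt(beta * 2.0 / n_in)
    return [[complex(random.gauss(0.0, std), random.gauss(0.0, std))
             for _ in range(n_in)] for _ in range(n_out)]

# One 10x10 layer, as in units = [1, 10, 10, 1], with the scaling beta = 0.5.
W = he_complex(10, 10, beta=0.5)
```

Keeping the layer-wise variance of order \(2/n_{in}\) is what prevents activations from exploding or vanishing through the stacked layers; the scaling coefficient \(\beta\) simply shrinks or enlarges that variance.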