dair_pll.deep_support_function
Modelling and manipulation of convex support functions.
- dair_pll.deep_support_function.extract_obj(support_function)[source]
Given a support function, extracts a Wavefront obj representation.
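A minimal usage sketch, assuming the returned representation is the .obj file contents as a string (the network sizes and file name here are illustrative):

```python
from dair_pll.deep_support_function import HomogeneousICNN, extract_obj

# An untrained network stands in for a learned support function here.
support_function = HomogeneousICNN(depth=2, width=64)

# Assumed: extract_obj returns the Wavefront .obj contents as a string.
obj_text = extract_obj(support_function)
with open("support_geometry.obj", "w", encoding="utf-8") as obj_file:
    obj_file.write(obj_text)
```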
- dair_pll.deep_support_function.extract_outward_normal_hyperplanes(vertices, faces)[source]
Extracts the hyperplane representation of a convex hull from its vertex-face representation.
Constructs a set of (outward) normal vectors and intercept values. Additionally, notes a boolean value that is True iff the face vertices are in counter-clockwise order when viewed from the outside.
Mathematically, for a face \((v_1, v_2, v_3)\) in counter-clockwise order, this function returns \(\hat n\), the unit vector in the \((v_2 - v_1) \times (v_3 - v_1)\) direction, and the intercept \(d = \hat n \cdot v_1\).
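The construction above is easy to reproduce directly. Below is a minimal sketch (not the library implementation), assuming vertices is a (V, 3) float tensor, faces is an (F, 3) integer tensor of vertex indices, and the triple return order suggested by the description:

```python
import torch


def outward_normal_hyperplanes_sketch(vertices, faces):
    """Sketch of the hyperplane extraction described above."""
    v1, v2, v3 = (vertices[faces[:, i]] for i in range(3))
    # Unit normal in the (v2 - v1) x (v3 - v1) direction.
    raw = torch.cross(v2 - v1, v3 - v1, dim=-1)
    raw = raw / raw.norm(dim=-1, keepdim=True)
    intercepts = (raw * v1).sum(-1)
    # For an outward normal n with intercept d = n . v1, every hull point x
    # satisfies n . x <= d; testing the centroid detects inward normals.
    centroid = vertices.mean(dim=0)
    is_ccw = raw @ centroid < intercepts
    # Flip inward normals so all returned normals point outward, and
    # recompute the intercepts to match.
    sign = torch.where(is_ccw, torch.ones_like(intercepts),
                       -torch.ones_like(intercepts))
    normals = raw * sign.unsqueeze(-1)
    return normals, (normals * v1).sum(-1), is_ccw
```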
- dair_pll.deep_support_function.extract_mesh(support_function)[source]
Given a support function, extracts a vertex/face mesh.
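A short composition sketch, assuming extract_mesh returns a (vertices, faces) pair and extract_outward_normal_hyperplanes returns the triple described above; both return conventions are assumptions and should be checked against the actual return types:

```python
from dair_pll.deep_support_function import (extract_mesh,
                                            extract_outward_normal_hyperplanes)

# ``support_function`` as in the extract_obj example above.
vertices, faces = extract_mesh(support_function)
normals, intercepts, is_ccw = extract_outward_normal_hyperplanes(
    vertices, faces)
```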
- class dair_pll.deep_support_function.HomogeneousICNN(depth, width, negative_slope=0.5, scale=1.0)[source]
Bases: Module
Homogeneous Input-convex Neural Networks.
Implements a positively-homogeneous version of an ICNN [AXK17].
These networks have the structure \(f(d)\) where
\[\begin{split}\begin{align} h_0 &= \sigma(W_{d,0} d),\\ h_i &= \sigma(W_{d,i} d + W_{h,i} h_{i-1}),\\ f(d) &= W_{h,D} h_{D-1}, \end{align}\end{split}\]
where each \(W_{h,i} \geq 0\) and \(\sigma\) is a convex and monotonic LeakyReLU (see the sketch after the parameter list below).
- Parameters:
  - hidden_weights (torch.nn.modules.container.ParameterList): Scale of hidden weight matrices \(W_{h,i} \geq 0\).
  - input_weights (torch.nn.modules.container.ParameterList): List of input-injection weight matrices \(W_{d,i}\).
  - output_weight (torch.nn.parameter.Parameter): Output weight vector \(W_{h,D} \geq 0\).
  - activation (torch.nn.modules.module.Module): Activation module (LeakyReLU).
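As promised above, a self-contained sketch of the recursion as a plain PyTorch module (an illustrative stand-in, not the library class; the scale argument is omitted, initialization is arbitrary, and nonnegativity of \(W_{h,i}\) is enforced by absolute value as abs_weights below suggests):

```python
import torch
from torch import Tensor
from torch.nn import LeakyReLU, Module, Parameter, ParameterList


class HomogeneousICNNSketch(Module):
    """Illustrative stand-in for HomogeneousICNN, not the library class."""

    def __init__(self, depth: int, width: int, negative_slope: float = 0.5,
                 dim: int = 3) -> None:
        super().__init__()
        # Input-injection weights W_{d,i}, one per hidden layer.
        self.input_weights = ParameterList(
            [Parameter(torch.randn(width, dim)) for _ in range(depth)])
        # Hidden-to-hidden weights W_{h,i} for i = 1, ..., D - 1.
        self.hidden_weights = ParameterList(
            [Parameter(torch.randn(width, width)) for _ in range(depth - 1)])
        # Output weight W_{h,D}.
        self.output_weight = Parameter(torch.randn(width))
        self.activation = LeakyReLU(negative_slope)

    def forward(self, directions: Tensor) -> Tensor:
        # h_0 = sigma(W_{d,0} d); no bias terms anywhere.
        hidden = self.activation(directions @ self.input_weights[0].T)
        for w_d, w_h in zip(self.input_weights[1:], self.hidden_weights):
            # h_i = sigma(W_{d,i} d + W_{h,i} h_{i-1}); taking |W_{h,i}|
            # keeps the hidden weights nonnegative, preserving convexity in d.
            hidden = self.activation(directions @ w_d.T + hidden @ w_h.abs().T)
        # f(d) = W_{h,D} h_{D-1}.
        return hidden @ self.output_weight.abs()
```

Because there are no bias terms and LeakyReLU satisfies \(\sigma(\alpha x) = \alpha \sigma(x)\) for \(\alpha \geq 0\), this gives \(f(\alpha d) = \alpha f(d)\), i.e. positive homogeneity.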
- abs_weights()[source]
Returns non-negative version of hidden weight matrices by taking absolute value of hidden_weights and output_weight.
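A sketch of the idea: the stored parameters stay unconstrained for the optimizer, and nonnegative versions are produced on demand, so training never needs an explicit projection step (standalone function for illustration; the method itself is on the class):

```python
def abs_weights_sketch(hidden_weights, output_weight):
    """Nonnegative copies of the weights, as described above."""
    return [w.abs() for w in hidden_weights], output_weight.abs()
```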
- activation_jacobian(activations)[source]
Returns flattened diagonal Jacobian of LeakyReLU activation.
The Jacobian is simply 1 at indices where the activations are positive and self.activation.negative_slope otherwise.
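Because LeakyReLU acts elementwise, its Jacobian is diagonal and only the diagonal needs storing. A sketch, using the class's default negative_slope of 0.5 from the signature above:

```python
import torch


def activation_jacobian_sketch(activations, negative_slope=0.5):
    """Flattened diagonal of the LeakyReLU Jacobian, per the description
    above: 1 where activations are positive, negative_slope otherwise."""
    return torch.where(activations > 0,
                       torch.ones_like(activations),
                       torch.full_like(activations, negative_slope))
```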