dair_pll.deep_support_function

Modelling and manipulation of convex support functions.

dair_pll.deep_support_function.extract_obj(support_function)[source]

Given a support function, extracts a Wavefront .obj representation.

Parameters:

support_function (Callable[[Tensor], Tensor]) – Callable support function.

Return type:

str

Returns:

Wavefront .obj string
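
A minimal usage sketch; pairing extract_obj with the HomogeneousICNN class documented below is an assumption made here because its forward() matches the expected Callable[[Tensor], Tensor] signature:

    import torch

    from dair_pll.deep_support_function import HomogeneousICNN, extract_obj

    torch.manual_seed(0)
    support_function = HomogeneousICNN(depth=2, width=64)  # random convex shape

    obj_string = extract_obj(support_function)
    with open("support_shape.obj", "w", encoding="utf-8") as obj_file:
        obj_file.write(obj_string)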

dair_pll.deep_support_function.extract_outward_normal_hyperplanes(vertices, faces)[source]

Extracts the hyperplane representation of a convex hull from its vertex-face representation.

Constructs a set of (outward) normal vectors and intercept values. Additionally, returns a boolean for each face that is True iff the face's vertices are listed in counter-clockwise order when viewed from outside the polytope.

Mathematically, for a face \((v_1, v_2, v_3)\) in counter-clockwise order, this function returns \(\hat n\), the unit vector in the \((v_2 - v_1) \times (v_3 - v_1)\) direction, and the intercept \(d = \hat n \cdot v_1\).

Parameters:
  • vertices (Tensor) – (*, N, 3) batch of polytope vertices.

  • faces (Tensor) – (*, M, 3) batch of polytope triangle face vertex indices.

Returns:

  • (*, M, 3) face outward normals.

  • (*, M) whether each face's vertices are in counter-clockwise order.

  • (*, M) face hyperplane intercepts.
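
The cross-product formula can be checked by hand. Below is a sketch for one face of a unit tetrahedron; the tensors are illustrative inputs, not part of the API:

    import torch

    vertices = torch.tensor([[0., 0., 0.],
                             [1., 0., 0.],
                             [0., 1., 0.],
                             [0., 0., 1.]])
    face = torch.tensor([3, 1, 2])  # counter-clockwise when viewed from outside

    v1, v2, v3 = vertices[face]
    normal = torch.linalg.cross(v2 - v1, v3 - v1)
    normal_hat = normal / normal.norm()    # outward unit normal, here (1,1,1)/sqrt(3)
    intercept = torch.dot(normal_hat, v1)  # d = n_hat . v1, here 1/sqrt(3)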

dair_pll.deep_support_function.extract_mesh(support_function)[source]

Given a support function, extracts a vertex/face mesh.

Parameters:

support_function (Callable[[Tensor], Tensor]) – Callable support function.

Return type:

MeshSummary

Returns:

Object vertices and face indices.
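
A companion sketch to the extract_obj example above; the vertices/faces field names on MeshSummary are assumptions suggested by the return description:

    import torch

    from dair_pll.deep_support_function import HomogeneousICNN, extract_mesh

    torch.manual_seed(0)
    support_function = HomogeneousICNN(depth=2, width=64)

    mesh = extract_mesh(support_function)
    print(mesh.vertices.shape, mesh.faces.shape)  # assumed field names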

class dair_pll.deep_support_function.HomogeneousICNN(depth, width, negative_slope=0.5, scale=1.0)[source]

Bases: Module

Homogeneous Input-convex Neural Networks.

Implements a positively-homogeneous version of an ICNN [AXK17].

These networks have the structure \(f(d)\) where

\[\begin{aligned} h_0 &= \sigma(W_{d,0} d),\\ h_i &= \sigma(W_{d,i} d + W_{h,i} h_{i-1}),\\ f(d) &= W_{h,D} h_D, \end{aligned}\]

where each \(W_{h,i} \geq 0\) and \(\sigma\) is a convex and monotonic LeakyReLU.
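
A minimal sketch of this recursion in plain torch, using illustrative weight shapes rather than the module's actual parameter layout:

    import torch

    def homogeneous_icnn_value(direction, input_weights, hidden_weights,
                               output_weight, negative_slope=0.5):
        """Evaluates the displayed recursion for a single (3,) direction.

        Illustrative shapes: input_weights[i] is (width, 3); hidden_weights[i]
        is (width, width) with non-negative entries; output_weight is (width,)
        with non-negative entries.
        """
        sigma = torch.nn.LeakyReLU(negative_slope)
        hidden = sigma(input_weights[0] @ direction)        # h_0
        for w_d, w_h in zip(input_weights[1:], hidden_weights):
            hidden = sigma(w_d @ direction + w_h @ hidden)  # h_i
        return output_weight @ hidden                       # f(d) = W_{h,D} h_D

Positive homogeneity of degree one follows because each layer is linear in its inputs and \(\sigma\) commutes with positive scaling: \(\sigma(\alpha x) = \alpha\, \sigma(x)\) for \(\alpha > 0\).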

Parameters:
  • depth (int) – Network depth \(D\).

  • width (int) – Network width.

  • negative_slope (float) – Negative slope of LeakyReLU activation.

  • scale (float) – Length scale of the object in meters.

hidden_weights: torch.nn.modules.container.ParameterList

Unconstrained parameters underlying the non-negative hidden weight matrices \(W_{h,i} \geq 0\); the non-negative versions are obtained via abs_weights().

input_weights: torch.nn.modules.container.ParameterList

List of input-injection weight matrices \(W_{d,i}\).

output_weight: torch.nn.parameter.Parameter

Output weight vector \(W_{h,D} \geq 0\).

activation: torch.nn.modules.module.Module

Activation module (LeakyReLU).

abs_weights()[source]

Returns a non-negative version of the hidden weight matrices by taking the absolute value of hidden_weights and output_weight.

Return type:

Tuple[List[Tensor], Tensor]
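
Usage sketch of the described behavior:

    from dair_pll.deep_support_function import HomogeneousICNN

    network = HomogeneousICNN(depth=2, width=64)
    nonneg_hidden, nonneg_output = network.abs_weights()
    # Equivalent in effect to:
    #   [w.abs() for w in network.hidden_weights], network.output_weight.abs()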

activation_jacobian(activations)[source]

Returns the diagonal of the LeakyReLU activation's Jacobian, flattened.

Since the activation acts elementwise, its Jacobian is diagonal: the diagonal entry is 1 at indices where the activations are positive and self.activation.negative_slope otherwise.

Parameters:

activations (Tensor) – (*, width) output of activation function for some layer.

Return type:

Tensor

Returns:

(*, width) activation jacobian.
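
The stated rule is a one-liner in torch; a sketch, not necessarily the module's implementation:

    import torch

    def leaky_relu_jacobian_diagonal(activations, negative_slope=0.5):
        # 1 where the activation is positive, negative_slope elsewhere.
        return torch.where(activations > 0,
                           torch.ones_like(activations),
                           torch.full_like(activations, negative_slope))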

network_activations(directions)[source]

Forward evaluation of the network activations.

Parameters:

directions (Tensor) – (*, 3) network inputs.

Return type:

Tuple[List[Tensor], Tensor]

Returns:

  • List of (*, width) hidden layer activations.

  • (*,) network output.
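
Usage sketch (shapes in the comment follow the description above):

    import torch

    from dair_pll.deep_support_function import HomogeneousICNN

    network = HomogeneousICNN(depth=2, width=64)
    directions = torch.nn.functional.normalize(torch.randn(8, 3), dim=-1)

    hidden_activations, support_values = network.network_activations(directions)
    # hidden_activations: list of (8, 64) tensors; support_values: (8,) tensor.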

forward(directions)[source]

Evaluates the support function Jacobian at the provided inputs. For a convex, positively homogeneous support function, the gradient with respect to the input direction is a point on the body's surface attaining the support value, so the Jacobian, rather than the scalar network output, is the geometric quantity of interest.

Parameters:

directions (Tensor) – (*, 3) network inputs.

Return type:

Tensor

Returns:

(*, 3) network input Jacobian.
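
Usage sketch; per the note above, each output row can be read as a support point on the represented body's surface:

    import torch

    from dair_pll.deep_support_function import HomogeneousICNN

    network = HomogeneousICNN(depth=2, width=64)
    directions = torch.nn.functional.normalize(torch.randn(8, 3), dim=-1)

    support_points = network(directions)  # (8, 3): one support point per direction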