vegans.utils package

Submodules

vegans.utils.layers module

class vegans.utils.layers.LayerInception(in_channels, out_channels)[source]

Bases: torch.nn.modules.module.Module

Implementation of the inception layer architecture.

Uses a network in network (NIN) architecture to make networks wider and deeper.

forward(x)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
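
A minimal usage sketch; the channel sizes and input shape are illustrative, not prescribed by the API:

import torch
from vegans.utils.layers import LayerInception

inception = LayerInception(in_channels=64, out_channels=128)
x = torch.randn(8, 64, 32, 32)   # batch of 8 feature maps
out = inception(x)               # call the module itself so registered hooks run
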
class vegans.utils.layers.LayerPrintSize[source]

Bases: torch.nn.modules.module.Module

Prints the size of a layer without performing any operation.

Mainly used for debugging to find the layer shape at a certain depth of the network.

forward(x)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
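
A minimal debugging sketch: drop the layer between two others to see the intermediate shape (the surrounding layers are illustrative):

import torch
import torch.nn as nn
from vegans.utils.layers import LayerPrintSize

debug_net = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    LayerPrintSize(),     # prints the shape and passes the tensor through unchanged
    nn.Flatten(),
)
_ = debug_net(torch.randn(1, 3, 32, 32))
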
class vegans.utils.layers.LayerReshape(shape)[source]

Bases: torch.nn.modules.module.Module

Reshape a tensor.

Might be used in a densely connected network in the last layer to produce an image output.

forward(x)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
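
A minimal sketch of the use case described above, assuming shape excludes the batch dimension:

import torch
import torch.nn as nn
from vegans.utils.layers import LayerReshape

# Tail of a densely connected generator producing 28x28 grayscale images.
generator_tail = nn.Sequential(
    nn.Linear(128, 1 * 28 * 28),
    LayerReshape(shape=(1, 28, 28)),
)
images = generator_tail(torch.randn(16, 128))   # expected shape: [16, 1, 28, 28]
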
class vegans.utils.layers.LayerResidualConvBlock(in_channels, out_channels, skip_layers, kernel_size)[source]

Bases: torch.nn.modules.module.Module

Implementation of a residual convolutional block.

Adds a skip connection so that the input can bypass a number of convolutional layers, easing gradient flow in deeper networks.

forward(x)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
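
A minimal usage sketch; the semantics of skip_layers (assumed here: the number of convolutions bridged by one skip connection) and all sizes are assumptions:

import torch
from vegans.utils.layers import LayerResidualConvBlock

block = LayerResidualConvBlock(in_channels=32, out_channels=32, skip_layers=2, kernel_size=3)
out = block(torch.randn(4, 32, 16, 16))
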

vegans.utils.networks module

class vegans.utils.networks.Adversary(network, input_size, adv_type, device, ngpu, secure=True)[source]

Bases: vegans.utils.networks.NeuralNetwork

Implements the adversary architecture.

Can act either as a discriminator (output in [0, 1]) or as a critic (output in [-Inf, Inf]).

predict(x)[source]
training: bool
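
A minimal construction sketch. The adv_type value "Discriminator" is an assumption based on the class description; the toy network and input size are illustrative:

import torch.nn as nn
from vegans.utils.networks import Adversary

net = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 1), nn.Sigmoid())
adversary = Adversary(
    network=net, input_size=(1, 28, 28),
    adv_type="Discriminator", device="cpu", ngpu=0,
)
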
class vegans.utils.networks.Autoencoder(encoder, decoder)[source]

Bases: torch.nn.modules.module.Module

forward(x)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

get_number_params()[source]

Returns the number of parameters in the model.

Returns

Dictionary containing the number of parameters per network.

Return type

dict

summary()[source]
training: bool
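
A minimal sketch, assuming plain torch modules are accepted as encoder and decoder (the library may instead expect the Encoder / Decoder wrappers documented below):

import torch.nn as nn
from vegans.utils.networks import Autoencoder

encoder = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 32))
decoder = nn.Sequential(nn.Linear(32, 28 * 28))
autoencoder = Autoencoder(encoder=encoder, decoder=decoder)
print(autoencoder.get_number_params())   # dict with parameter counts per network
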
class vegans.utils.networks.Decoder(network, input_size, device, ngpu, secure=True)[source]

Bases: vegans.utils.networks.NeuralNetwork

training: bool
class vegans.utils.networks.Encoder(network, input_size, device, ngpu, secure=True)[source]

Bases: vegans.utils.networks.NeuralNetwork

training: bool
class vegans.utils.networks.Generator(network, input_size, device, ngpu, secure=True)[source]

Bases: vegans.utils.networks.NeuralNetwork

training: bool
class vegans.utils.networks.NeuralNetwork(network, name, input_size, device, ngpu, secure)[source]

Bases: torch.nn.modules.module.Module

Basic abstraction for single networks.

These networks form the building blocks for the generative adversarial networks. Mainly responsible for consistency checks.

forward(x)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

get_number_params()[source]
summary()[source]
training: bool

vegans.utils.plot2DModel module

vegans.utils.plot2DModel.onclick(event)[source]
vegans.utils.plot2DModel.plot_2d_grid(model, nr_images=10, show=True)[source]
vegans.utils.plot2DModel.plot_on_click(model)[source]

vegans.utils.torchsummary module

Full credit for this module goes to github user sksq96 (Shubham Chandel) and the other authors of the pytorch-summary package.

Check out their implementation on github: https://github.com/sksq96/pytorch-summary.

Unfortunately, the package is (as of 2021-05-04) no longer under development, and no conda version exists, which would block us from creating a conda distribution. Their package is published under the MIT License, so we forked their code (2021-05-04) and use it in this module.

vegans.utils.torchsummary.summary(model, input_size, batch_size=-1, device=device(type='cuda', index=0), dtypes=None)[source]
vegans.utils.torchsummary.summary_string(model, input_size, batch_size=-1, device=device(type='cuda', index=0), dtypes=None)[source]
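
A minimal sketch of calling summary on a small model; the model itself is illustrative:

import torch
import torch.nn as nn
from vegans.utils.torchsummary import summary

model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(8 * 28 * 28, 10),
)
summary(model, input_size=(1, 28, 28), device=torch.device("cpu"))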

vegans.utils.utils module

class vegans.utils.utils.DataSet(X, y=None)[source]

Bases: Generic[torch.utils.data.dataset.T_co]

class vegans.utils.utils.KLLoss(eps)[source]

Bases: object

__call__(input, target)[source]

Compute the Kullback-Leibler loss for GANs.

Parameters

  • input (torch.Tensor) – Input tensor. Output of a critic.

Returns

KL divergence

Return type

torch.Tensor
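
A minimal calling sketch; the eps value and tensor shapes are illustrative, and the target argument is assumed to follow the common loss interface shown by WassersteinLoss below:

import torch
from vegans.utils.utils import KLLoss

loss_fn = KLLoss(eps=1e-7)               # eps guards numerical stability
critic_output = torch.rand(8, 1)         # hypothetical critic output
target = torch.ones_like(critic_output)
loss = loss_fn(critic_output, target)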

class vegans.utils.utils.NormalNegativeLogLikelihood[source]

Bases: object

class vegans.utils.utils.WassersteinLoss[source]

Bases: object

__call__(input, target)[source]

Compute the Wasserstein loss / divergence.

Also known as the earth mover's distance.

Parameters
  • input (torch.Tensor) – Input tensor. Output of a critic.

  • target (torch.Tensor) – Label, either 1 or -1. Zeros are translated to -1.

Returns

Wasserstein divergence

Return type

torch.Tensor
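
A minimal calling sketch based on the parameter description above; shapes are illustrative:

import torch
from vegans.utils.utils import WassersteinLoss

loss_fn = WassersteinLoss()
critic_output = torch.randn(8, 1)              # critic outputs are unbounded
real_labels = torch.ones_like(critic_output)   # label 1 for real samples
fake_labels = torch.zeros_like(critic_output)  # zeros are translated to -1
real_loss = loss_fn(critic_output, real_labels)
fake_loss = loss_fn(critic_output, fake_labels)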

vegans.utils.utils.concatenate(tensor1, tensor2)[source]

Concatenates two 2D or 4D tensors.

Parameters
  • tensor1 (torch.Tensor) – 2D or 4D tensor.

  • tensor2 (torch.Tensor) – 2D or 4D tensor.

Returns

Concatenation of tensor1 and tensor2.

Return type

torch.Tensor

Raises

NotImplementedError – If tensors do not have 2 or 4 dimensions.
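
A minimal sketch with two 2D tensors; the expected output shape assumes concatenation along the feature dimension, consistent with get_input_dim below:

import torch
from vegans.utils.utils import concatenate

noise = torch.randn(16, 100)    # 2D: [nr_samples, latent_dim]
labels = torch.randn(16, 10)    # 2D: [nr_samples, nr_labels]
combined = concatenate(noise, labels)   # expected shape: [16, 110]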

vegans.utils.utils.create_gif(source_path, target_path=None)[source]

Create a GIF from images contained on the source path.

Parameters
  • source_path (string) – Path pointing to the source directory with .png files.

  • target_path (string, optional) – Name of the created GIF.
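
A minimal calling sketch; the paths are hypothetical:

from vegans.utils.utils import create_gif

# The source directory is expected to contain the .png files to be animated.
create_gif(source_path="./generated_images", target_path="./training_progress.gif")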

vegans.utils.utils.get_input_dim(dim1, dim2)[source]

Get the number of input dimensions from two inputs.

Tensors often need to be concatenated in different ways, especially for conditional algorithms leveraging label information. This function returns the output dimensions of a tensor after the concatenation of two 2D tensors (two vectors), two 4D tensors (two images) or one 2D tensor with a 4D tensor (vector with image). For both tensors the first dimension is the number of samples, which is not considered by this function. Therefore dim1 and dim2 are both either 1D or 3D, indicating the vector or image dimensions (nr_channels, height, width). In a usual use case dim1 is either the latent z dimension (often a vector) or a sample from the sample space (might be an image). dim2 often represents the conditional y dimension that is concatenated with the noise or a sample before passing it to a neural network.

This function can be used to get the input dimension for the generator, adversary, encoder or decoder in a conditional use case.

Parameters
  • dim1 (int, iterable) – Dimension of input 1.

  • dim2 (int, iterable) – Dimension of input 2.

Returns

Output dimension after concatenation.

Return type

list
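
A minimal sketch of two cases described above. The expected outputs are assumptions derived from that description (for the image case, the label vector is merged into the channel dimension):

from vegans.utils.utils import get_input_dim

# Vector with vector: 100-dimensional noise concatenated with 10 labels.
gen_input_dim = get_input_dim(dim1=100, dim2=10)          # expected: [110]

# Image with vector: a 3-channel 32x32 sample conditioned on 10 labels.
adv_input_dim = get_input_dim(dim1=[3, 32, 32], dim2=10)  # expected: [13, 32, 32]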

vegans.utils.utils.invert_channel_order(images)[source]
vegans.utils.utils.plot_images(images, labels=None, show=True, n=None)[source]

Plot a number of input images with optional labels.

Parameters
  • images (np.array) – Must be of shape [nr_samples, height, width] or [nr_samples, height, width, 3].

  • labels (np.array, optional) – Array of labels used in the title.

  • show (bool, optional) – If True, plt.show is called to visualise the images directly.

  • n (int, optional) – Number of images to be drawn; the maximum is 36.

Returns

Created figure and axis objects.

Return type

plt.figure, plt.axis
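
A minimal sketch with random data standing in for generated samples:

import numpy as np
from vegans.utils.utils import plot_images

images = np.random.rand(9, 28, 28)   # [nr_samples, height, width]
labels = np.arange(9)
fig, axs = plot_images(images, labels=labels, show=False, n=9)
fig.savefig("samples.png")           # save instead of displaying interactively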

vegans.utils.utils.plot_losses(losses, show=True, share=False)[source]

Plots losses for generator and discriminator on a common plot.

Parameters
  • losses (dict) – Dictionary containing the losses for some networks. The structure of the dictionary is:

    {
        mode1: {loss_type1_1: losses1_1, loss_type1_2: losses1_2, ...},
        mode2: {loss_type2_1: losses2_1, loss_type2_2: losses2_2, ...},
        ...
    }

    where mode is typically one of "Train" or "Test", loss_type might be "Generator", "Adversary", "Encoder", ... and the losses are lists of loss values collected during training.

  • show (bool, optional) – If True, plt.show is called to visualise the plot directly.

  • share (bool, optional) – If True, axis ticks are shared between plots.

Returns

Created figure and axis objects.

Return type

plt.figure, plt.axis
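
A minimal sketch using the dictionary structure described above; the loss values are made up for illustration:

from vegans.utils.utils import plot_losses

losses = {
    "Train": {"Generator": [1.2, 0.9, 0.7], "Adversary": [0.6, 0.8, 0.9]},
    "Test": {"Generator": [1.3, 1.0, 0.8], "Adversary": [0.7, 0.8, 1.0]},
}
fig, axs = plot_losses(losses, show=False, share=True)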

Module contents