tensormonk.activations

Activations

class Activations(tensor_size: tuple, activation: str = 'relu', **kwargs)[source]

Activation functions. In addition to those available in PyTorch, the following activations are implemented: "hsigm" & "hswish" (“Searching for MobileNetV3”), "maxo" (“Maxout Networks”), "mish" (“Mish: A Self Regularized Non-Monotonic Neural Activation Function”), "squash" (“Dynamic Routing Between Capsules”) and "swish" (“SWISH: A Self-Gated Activation Function”).

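For reference, the element-wise forms of these additional activations follow the cited papers. Below is a minimal, illustrative sketch using plain PyTorch ops; it is not tensormonk's implementation, and the Maxout piece count and the dimension over which "squash" normalizes are assumptions.

import torch
import torch.nn.functional as F

def hsigm(x):
    # hard sigmoid ("Searching for MobileNetV3"): relu6(x + 3) / 6
    return F.relu6(x + 3.) / 6.

def hswish(x):
    # hard swish ("Searching for MobileNetV3"): x * relu6(x + 3) / 6
    return x * F.relu6(x + 3.) / 6.

def mish(x):
    # "Mish": x * tanh(softplus(x))
    return x * torch.tanh(F.softplus(x))

def swish(x):
    # "SWISH": x * sigmoid(x)
    return x * torch.sigmoid(x)

def maxo(x):
    # "Maxout Networks" with two pieces: element-wise max over the two
    # halves of the channel dimension (assumes an even channel count)
    a, b = x.split(x.size(1) // 2, dim=1)
    return torch.max(a, b)

def squash(x, dim=-1, eps=1e-8):
    # "Dynamic Routing Between Capsules":
    # squash(s) = (|s|^2 / (1 + |s|^2)) * s / |s|
    norm2 = (x ** 2).sum(dim=dim, keepdim=True)
    return (norm2 / (1. + norm2)) * x / (norm2.sqrt() + eps)
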
Parameters
  • tensor_size (tuple, required) – Input tensor shape in BCHW (None/any integer >0, channels, height, width).

  • activation (str, optional) – The list of activation options are "elu", "gelu", "hsigm", "hswish", "lklu", "maxo", "mish", "prelu", "relu", "relu6", "rmxo", "selu", "sigm", "squash", "swish", "tanh". (default: "relu")

  • elu_alpha (float, optional) – alpha value used when activation = "elu". (default: 1.0)

  • lklu_negslope (float, optional) – negative slope used when activation = "lklu" (leaky ReLU). (default: 0.01)

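The example below prints the available activation options and applies "maxo", "squash" and "swish" to 4-D, 3-D and 2-D inputs, respectively.
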
import torch
import tensormonk

# available activation options
print(tensormonk.activations.Activations.METHODS)

# "maxo" (Maxout) applied to a 4-D BCHW input
tensor_size = (None, 16, 4, 4)
activation = "maxo"
maxout = tensormonk.activations.Activations(tensor_size, activation)
maxout(torch.randn(1, *tensor_size[1:]))

# "squash" applied to a 3-D input
tensor_size = (None, 16, 4)
activation = "squash"
squash = tensormonk.activations.Activations(tensor_size, activation)
squash(torch.randn(1, *tensor_size[1:]))

# "swish" applied to a 2-D input
tensor_size = (None, 16)
activation = "swish"
swish = tensormonk.activations.Activations(tensor_size, activation)
swish(torch.randn(1, *tensor_size[1:]))