Undocumented
Module | __drop | DropBlock, DropPath
Module | __helpers | Layer/Module Helpers. Hacked together by / Copyright 2020 Ross Wightman
Module | __weight | No module docstring; 1/4 function documented

From __init__.py:
Class | | DropBlock. See https://arxiv.org/pdf/1810.12890.pdf
Class | | Drop paths (Stochastic Depth) per sample (when applied in main path of residual blocks).
Function | drop | DropBlock. See https://arxiv.org/pdf/1810.12890.pdf
Function | drop | DropBlock. See https://arxiv.org/pdf/1810.12890.pdf
Function | drop | Drop paths (Stochastic Depth) per sample (when applied in main path of residual blocks).
Function | to | Undocumented
Function | trunc | Fills the input Tensor with values drawn from a truncated normal distribution. The values are effectively drawn from the normal distribution N(mean, std²) with values outside [a, b]...
(torch.Tensor, drop_prob: float = 0.1, block_size: int = 7, gamma_scale: float = 1.0, with_noise: bool = False, inplace: bool = False, batchwise: bool = False)
DropBlock. See https://arxiv.org/pdf/1810.12890.pdf
DropBlock with an experimental gaussian noise option. This layer has been tested on a few training runs with success, but needs further validation and possibly optimization for lower runtime impact.
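The key step in DropBlock is converting the target drop_prob into a per-position seed rate gamma (Eq. 1 of the paper), then zeroing a block_size × block_size square around each seeded position and rescaling the survivors. A minimal pure-Python sketch of that logic follows; it is illustrative only (the function names and the list-of-lists input are assumptions, not the timm implementation, which operates on 4-D tensors and supports the noise/inplace/batchwise options):

```python
import random

def drop_block_gamma(drop_prob: float, block_size: int, feat_size: int) -> float:
    """Seed rate gamma so the expected fraction of dropped activations
    is approximately drop_prob (per the DropBlock paper)."""
    valid = feat_size - block_size + 1  # positions where a full block fits
    return (drop_prob / block_size ** 2) * (feat_size ** 2) / (valid ** 2)

def drop_block_1sample(x, drop_prob=0.1, block_size=3):
    """Apply DropBlock to a square 2-D grid (list of lists) -- illustrative only."""
    h, w = len(x), len(x[0])
    gamma = drop_block_gamma(drop_prob, block_size, h)
    mask = [[1.0] * w for _ in range(h)]
    half = block_size // 2
    for i in range(half, h - half):          # seed only where a block fits
        for j in range(half, w - half):
            if random.random() < gamma:      # seed a block centred at (i, j)
                for di in range(-half, half + 1):
                    for dj in range(-half, half + 1):
                        mask[i + di][j + dj] = 0.0
    kept = sum(map(sum, mask))
    scale = h * w / max(kept, 1.0)           # rescale to preserve expectation
    return [[x[i][j] * mask[i][j] * scale for j in range(w)] for i in range(h)]
```

Note how gamma shrinks as block_size grows: each seed removes block_size² activations, so fewer seeds are needed for the same overall drop rate.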
(torch.Tensor, drop_prob: float = 0.1, block_size: int = 7, gamma_scale: float = 1.0, with_noise: bool = False, inplace: bool = False)
DropBlock. See https://arxiv.org/pdf/1810.12890.pdf
DropBlock with an experimental gaussian noise option. Simplified from the version above, without concern for a valid block mask at the feature-map edges.
(torch.Tensor, drop_prob: float = 0.0, training: bool = False, scale_by_keep: bool = True)
Drop paths (Stochastic Depth) per sample (when applied in main path of residual blocks).
This is the same as the DropConnect impl I created for EfficientNet, etc. networks; however, the original name is misleading, as 'Drop Connect' is a different form of dropout from a separate paper. See the discussion at https://github.com/tensorflow/tpu/issues/494#issuecomment-532968956. I've opted to change the layer and argument names to 'drop path' rather than mix DropConnect as a layer name and use 'survival rate' as the argument.
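The per-sample stochastic-depth behaviour described above reduces to one Bernoulli keep/drop decision per sample, with survivors optionally rescaled by 1/keep_prob so the expected value is preserved. A hedged pure-Python sketch of that logic (scalar per-sample values stand in for tensors; the function name is an assumption, not the timm API):

```python
import random

def drop_path_sample(batch, drop_prob=0.0, training=False, scale_by_keep=True):
    """Stochastic depth: zero the whole residual branch for a random
    subset of samples. `batch` is a list of per-sample values here;
    real implementations broadcast a per-sample mask over tensors."""
    if drop_prob == 0.0 or not training:
        return list(batch)                   # identity at inference time
    keep_prob = 1.0 - drop_prob
    out = []
    for x in batch:
        keep = random.random() < keep_prob   # one decision per sample
        if not keep:
            out.append(0.0)
        elif scale_by_keep:
            out.append(x / keep_prob)        # preserve expectation E[out] = x
        else:
            out.append(x)
    return out
```

Because the decision is per sample (not per element, as in ordinary dropout), an entire residual branch is skipped for the dropped samples.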
(torch.Tensor, mean: float = 0.0, std: float = 1.0, a: float = -2.0, b: float = 2.0) -> torch.Tensor
Fills the input Tensor with values drawn from a truncated normal distribution. The values are effectively drawn from the normal distribution N(mean, std²) with values outside [a, b] redrawn until they are within the bounds. The method used for generating the random values works best when a ≤ mean ≤ b.
NOTE: this impl is similar to PyTorch's trunc_normal_; the bounds [a, b] are applied while sampling the normal with mean/std already applied, so the a and b args should be adjusted to match the range implied by the mean and std args.
Args:
    tensor: an n-dimensional torch.Tensor
    mean: the mean of the normal distribution
    std: the standard deviation of the normal distribution
    a: the minimum cutoff value
    b: the maximum cutoff value

Examples:
    >>> w = torch.empty(3, 5)
    >>> nn.init.trunc_normal_(w)
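The redraw-until-within-bounds behaviour described in the docstring can be sketched with a simple rejection sampler. This is illustrative only (the function name is an assumption, and the actual implementations use a more efficient inverse-CDF method rather than rejection):

```python
import random

def trunc_normal(n, mean=0.0, std=1.0, a=-2.0, b=2.0):
    """Draw n values from N(mean, std**2), redrawing any sample that
    falls outside [a, b] -- naive rejection sampling."""
    out = []
    while len(out) < n:
        v = random.gauss(mean, std)
        if a <= v <= b:                  # keep only in-bounds draws
            out.append(v)
    return out
```

With the defaults, roughly 95% of draws from N(0, 1) already land in [-2, 2], so rejection is cheap here; it degrades when [a, b] is far from the mean, which is why library implementations prefer inverse-CDF sampling.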