## FPN
FPN is based on [Feature Pyramid Networks for Object Detection](https://arxiv.org/abs/1612.03144).
The Feature Pyramid Network (FPN) is designed to enhance the feature maps produced by the backbone and is typically used in detection models, so we recommend it primarily for detection tasks. FPN can also build a pyramid deeper than the feature pyramid it receives from the backbone; in that case, additional convolution or max pooling layers are appended. A configuration sketch follows, and the individual fields are documented in the field list below.
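As a hedged illustration of this behavior, the sketch below assumes a backbone that exposes three feature maps and asks FPN for a five-level pyramid, so two extra levels are appended through extra convolutions. The keys above the `neck` block are an assumption for illustration only.

```yaml
model:
  architecture:                    # surrounding keys are an assumption for illustration
    neck:
      name: fpn                    # select the FPN neck
      params:
        num_outs: 5                # request 5 pyramid levels
        start_level: 0             # use backbone feature maps starting at index 0
        end_level: -1              # ... up to the last one (3 input maps assumed here)
        add_extra_convs: on_output # build the 2 extra levels with convolutions on the outputs
        relu_before_extra_convs: false
```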
### Compatibility matrix
| Supporting backbones | Supporting heads | torch.fx | NetsPresso |
|---|---|---|---|
| ResNet<br />MobileNetV3<br />MixNet<br />CSPDarkNet<br />MobileViT<br />MixTransformer<br />EfficientFormer | ALLMLPDecoder<br />AnchorDecoupledHead<br />AnchorFreeDecoupledHead | Supported | Supported |
### Field list
| Field | Description |
|---|---|
| `name` | (str) Name must be "fpn" to use the FPN neck. |
| `params.num_outs` | (int) The number of output feature maps. This must be greater than or equal to the number of input feature maps. If `end_level` does not point to the last feature map produced by the backbone, extra levels (created by setting `num_outs` larger than the number of input feature maps) are not allowed. |
| `params.start_level` | (int) The starting index in the list of feature maps produced by the backbone. Together with `end_level`, it determines the number of input feature maps. |
| `params.end_level` | (int) The end index in the list of feature maps produced by the backbone. If -1, every feature map up to the last one is used. Together with `start_level`, it determines the number of input feature maps. |
| `params.add_extra_convs` | (str) Determines the source of the additional convolution layers used to build extra pyramid levels when `num_outs` is greater than the number of input feature maps. Options are `on_input`, `on_lateral`, and `on_output`. If `None`, the extra levels are created with max pooling instead. |
| `params.relu_before_extra_convs` | (bool) Whether to apply a ReLU activation before the generated extra convolution layers. |
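As a contrasting sketch to the example above, leaving `add_extra_convs` unset makes FPN fall back to max pooling for any level beyond the selected input feature maps. The keys above the `neck` block are again an assumption for illustration.

```yaml
model:
  architecture:                    # surrounding keys are an assumption for illustration
    neck:
      name: fpn
      params:
        num_outs: 4                # one more level than the 3 selected input maps
        start_level: 1             # skip the first backbone feature map (4 maps assumed)
        end_level: -1              # use the remaining maps up to the last one
        add_extra_convs: ~         # null: the extra level is produced by max pooling
        relu_before_extra_convs: false   # has no effect when no extra convolutions are used
```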