Commit 5c91eac

add api docs
1 parent 58bbb06 commit 5c91eac

20 files changed: +1494, -335 lines

Diff for: docs/apis/backbones.md

+182 lines
# paddleseg.models.backbones

The models subpackage contains backbones that extract features for semantic segmentation models.
- [ResNet_vd](#ResNet_vd)
- [HRNet](#HRNet)
- [MobileNetV3](#MobileNetV3)
- [XceptionDeeplab](#xceptiondeeplab)

## [ResNet_vd](../../paddleseg/models/backbones/resnet_vd.py)

ResNet_vd backbone from ["Bag of Tricks for Image Classification with Convolutional Neural Networks"](https://arxiv.org/pdf/1812.01187.pdf)

> CLASS paddleseg.models.backbones.ResNet_vd(layers=50, output_stride=None, multi_grid=(1, 1, 1), lr_mult_list=(0.1, 0.1, 0.2, 0.2), pretrained=None)

> > Args
> > > - **layers** (int, optional): The layers of ResNet_vd. The supported layers are [18, 34, 50, 101, 152, 200]. Default: 50.
> > > - **output_stride** (int, optional): The stride of output features compared to input images. It should be 8 or 16. Default: None.
> > > - **multi_grid** (tuple|list, optional): The grid of stage4. Default: (1, 1, 1).
> > > - **lr_mult_list** (tuple|list, optional): The learning rate multipliers of different stages. Default: (0.1, 0.1, 0.2, 0.2).
> > > - **pretrained** (str, optional): The path of the pretrained model.

> paddleseg.models.backbones.ResNet18_vd(**args)

Return an object of the ResNet_vd class with layers=18.

> paddleseg.models.backbones.ResNet34_vd(**args)

Return an object of the ResNet_vd class with layers=34.

> paddleseg.models.backbones.ResNet50_vd(**args)

Return an object of the ResNet_vd class with layers=50.

> paddleseg.models.backbones.ResNet101_vd(**args)

Return an object of the ResNet_vd class with layers=101.

> paddleseg.models.backbones.ResNet152_vd(**args)

Return an object of the ResNet_vd class with layers=152.

> paddleseg.models.backbones.ResNet200_vd(**args)

Return an object of the ResNet_vd class with layers=200.
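
For reference, a minimal sketch of building one of these backbones and inspecting its outputs; the input shape here is an arbitrary assumption, not a requirement of the API:

```python
import paddle
from paddleseg.models.backbones import ResNet50_vd

# Build a ResNet50_vd backbone with output stride 8, a common
# choice for dilated segmentation heads such as DeepLabV3+.
backbone = ResNet50_vd(output_stride=8)

# Dummy input: one 3-channel 512x512 image (assumed shape).
x = paddle.randn([1, 3, 512, 512])

# Backbones return a list of feature maps taken from several stages.
feat_list = backbone(x)
print([f.shape for f in feat_list])
```
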
## [HRNet](../../paddleseg/models/backbones/hrnet.py)

HRNet backbone from ["HRNet: Deep High-Resolution Representation Learning for Visual Recognition"](https://arxiv.org/pdf/1908.07919.pdf)

> CLASS paddleseg.models.backbones.HRNet(pretrained=None, stage1_num_modules=1, stage1_num_blocks=(4,), stage1_num_channels=(64,), stage2_num_modules=1, stage2_num_blocks=(4, 4), stage2_num_channels=(18, 36), stage3_num_modules=4, stage3_num_blocks=(4, 4, 4), stage3_num_channels=(18, 36, 72), stage4_num_modules=3, stage4_num_blocks=(4, 4, 4, 4), stage4_num_channels=(18, 36, 72, 144), has_se=False, align_corners=False)

> > Args
> > > - **pretrained** (str, optional): The path of the pretrained model.
> > > - **stage1_num_modules** (int, optional): Number of modules for stage1. Default: 1.
> > > - **stage1_num_blocks** (list, optional): Number of blocks per module for stage1. Default: (4,).
> > > - **stage1_num_channels** (list, optional): Number of channels per branch for stage1. Default: (64,).
> > > - **stage2_num_modules** (int, optional): Number of modules for stage2. Default: 1.
> > > - **stage2_num_blocks** (list, optional): Number of blocks per module for stage2. Default: (4, 4).
> > > - **stage2_num_channels** (list, optional): Number of channels per branch for stage2. Default: (18, 36).
> > > - **stage3_num_modules** (int, optional): Number of modules for stage3. Default: 4.
> > > - **stage3_num_blocks** (list, optional): Number of blocks per module for stage3. Default: (4, 4, 4).
> > > - **stage3_num_channels** (list, optional): Number of channels per branch for stage3. Default: (18, 36, 72).
> > > - **stage4_num_modules** (int, optional): Number of modules for stage4. Default: 3.
> > > - **stage4_num_blocks** (list, optional): Number of blocks per module for stage4. Default: (4, 4, 4, 4).
> > > - **stage4_num_channels** (list, optional): Number of channels per branch for stage4. Default: (18, 36, 72, 144).
> > > - **has_se** (bool, optional): Whether to use the Squeeze-and-Excitation module. Default: False.
> > > - **align_corners** (bool, optional): An argument of F.interpolate. It should be set to False when the feature size is even, e.g. 1024x512; otherwise it is True, e.g. 769x769. Default: False.

> paddleseg.models.backbones.HRNet_W18_Small_V1(**kwargs)

Return an object of the HRNet class with width 18; it is smaller than HRNet_W18_Small_V2.

> paddleseg.models.backbones.HRNet_W18_Small_V2(**kwargs)

Return an object of the HRNet class with width 18; it is smaller than HRNet_W18.

> paddleseg.models.backbones.HRNet_W18(**kwargs)

Return an object of the HRNet class with width 18.

> paddleseg.models.backbones.HRNet_W30(**kwargs)

Return an object of the HRNet class with width 30.

> paddleseg.models.backbones.HRNet_W32(**kwargs)

Return an object of the HRNet class with width 32.

> paddleseg.models.backbones.HRNet_W40(**kwargs)

Return an object of the HRNet class with width 40.

> paddleseg.models.backbones.HRNet_W44(**kwargs)

Return an object of the HRNet class with width 44.

> paddleseg.models.backbones.HRNet_W48(**kwargs)

Return an object of the HRNet class with width 48.

> paddleseg.models.backbones.HRNet_W60(**kwargs)

Return an object of the HRNet class with width 60.

> paddleseg.models.backbones.HRNet_W64(**kwargs)

Return an object of the HRNet class with width 64.
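
In practice a backbone is passed to a segmentation model rather than used alone. Below is a hedged sketch pairing HRNet_W18 with FCN, mirroring a common PaddleSeg configuration; num_classes is an example value:

```python
from paddleseg.models import FCN
from paddleseg.models.backbones import HRNet_W18

# HRNet keeps a high-resolution branch through all stages, so FCN
# only needs the last (fused) feature map: backbone_indices=(-1,).
model = FCN(num_classes=19, backbone=HRNet_W18(), backbone_indices=(-1,))
```
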
## [MobileNetV3](../../paddleseg/models/backbones/mobilenetv3.py)

MobileNetV3 backbone from ["Searching for MobileNetV3"](https://arxiv.org/pdf/1905.02244.pdf).

> CLASS paddleseg.models.backbones.MobileNetV3(pretrained=None, scale=1.0, model_name="small", output_stride=None)

> > Args
> > > - **pretrained** (str, optional): The path of the pretrained model.
> > > - **scale** (float, optional): The scale of channels. Default: 1.0.
> > > - **model_name** (str, optional): Model name. It determines the type of MobileNetV3. The value is 'small' or 'large'. Default: 'small'.
> > > - **output_stride** (int, optional): The stride of output features compared to input images. The value should be one of [2, 4, 8, 16, 32]. Default: None.

> paddleseg.models.backbones.MobileNetV3_small_x0_35(**args)

Return an object of the MobileNetV3 class with scale=0.35 and model_name='small'.

> paddleseg.models.backbones.MobileNetV3_small_x0_5(**args)

Return an object of the MobileNetV3 class with scale=0.5 and model_name='small'.

> paddleseg.models.backbones.MobileNetV3_small_x0_75(**args)

Return an object of the MobileNetV3 class with scale=0.75 and model_name='small'.

> paddleseg.models.backbones.MobileNetV3_small_x1_0(**args)

Return an object of the MobileNetV3 class with scale=1.0 and model_name='small'.

> paddleseg.models.backbones.MobileNetV3_small_x1_25(**args)

Return an object of the MobileNetV3 class with scale=1.25 and model_name='small'.

> paddleseg.models.backbones.MobileNetV3_large_x0_35(**args)

Return an object of the MobileNetV3 class with scale=0.35 and model_name='large'.

> paddleseg.models.backbones.MobileNetV3_large_x0_5(**args)

Return an object of the MobileNetV3 class with scale=0.5 and model_name='large'.

> paddleseg.models.backbones.MobileNetV3_large_x0_75(**args)

Return an object of the MobileNetV3 class with scale=0.75 and model_name='large'.

> paddleseg.models.backbones.MobileNetV3_large_x1_0(**args)

Return an object of the MobileNetV3 class with scale=1.0 and model_name='large'.

> paddleseg.models.backbones.MobileNetV3_large_x1_25(**args)

Return an object of the MobileNetV3 class with scale=1.25 and model_name='large'.
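
The aliases only fix `scale` and `model_name`; remaining keyword arguments pass through to the MobileNetV3 class. A small sketch (the output_stride value is an example choice):

```python
from paddleseg.models.backbones import (MobileNetV3_large_x1_0,
                                        MobileNetV3_small_x1_0)

# Same class, different presets: 'large' at scale 1.0 vs. 'small'.
large = MobileNetV3_large_x1_0(output_stride=16)
small = MobileNetV3_small_x1_0()
```
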
## [XceptionDeeplab](../../paddleseg/models/backbones/xception_deeplab.py)

Xception backbone of DeepLabV3+ from ["Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation"](https://arxiv.org/abs/1802.02611)

> CLASS paddleseg.models.backbones.XceptionDeeplab(backbone, pretrained=None, output_stride=16)

> > Args
> > > - **backbone** (str): Which type of Xception_DeepLab to select. It should be one of ('xception_41', 'xception_65', 'xception_71').
> > > - **pretrained** (str, optional): The path of the pretrained model.
> > > - **output_stride** (int, optional): The stride of output features compared to input images. It should be 8 or 16. Default: 16.

> paddleseg.models.backbones.Xception41_deeplab(**args)

Return an object of the XceptionDeeplab class with layers=41.

> paddleseg.models.backbones.Xception65_deeplab(**args)

Return an object of the XceptionDeeplab class with layers=65.

> paddleseg.models.backbones.Xception71_deeplab(**args)

Return an object of the XceptionDeeplab class with layers=71.
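
The aliases select the `backbone` string for you, so the two forms in this sketch should be equivalent (output_stride=16 is the default shown above):

```python
from paddleseg.models.backbones import XceptionDeeplab, Xception65_deeplab

# Alias form: picks 'xception_65' internally.
backbone = Xception65_deeplab(output_stride=16)

# Direct form: pass the variant name to the class.
same_backbone = XceptionDeeplab('xception_65', output_stride=16)
```
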

Diff for: docs/apis/core.md

+72 lines
# paddleseg.core

The interfaces for training, evaluation and prediction.

- [Training](#Training)
- [Evaluation](#Evaluation)
- [Prediction](#Prediction)

## [Training](../../paddleseg/core/train.py)

> paddleseg.core.train(model, train_dataset, val_dataset=None, optimizer=None, save_dir='output', iters=10000, batch_size=2, resume_model=None, save_interval=1000, log_iters=10, num_workers=0, use_vdl=False, losses=None)

Launch training.

> Args
> > - **model** (nn.Layer): A semantic segmentation model.
> > - **train_dataset** (paddle.io.Dataset): Used to read and process training datasets.
> > - **val_dataset** (paddle.io.Dataset, optional): Used to read and process validation datasets.
> > - **optimizer** (paddle.optimizer.Optimizer): The optimizer.
> > - **save_dir** (str, optional): The directory for saving the model snapshot. Default: 'output'.
> > - **iters** (int, optional): How many iters to train the model. Default: 10000.
> > - **batch_size** (int, optional): Mini batch size of one gpu or cpu. Default: 2.
> > - **resume_model** (str, optional): The path of the model to resume training from.
> > - **save_interval** (int, optional): How many iters between saving a model snapshot during training. Default: 1000.
> > - **log_iters** (int, optional): Display logging information every log_iters. Default: 10.
> > - **num_workers** (int, optional): The number of workers for the data loader. Default: 0.
> > - **use_vdl** (bool, optional): Whether to record the data to VisualDL during training. Default: False.
> > - **losses** (dict): A dict including 'types' and 'coef'. The 'types' item is a list of loss objects from paddleseg.models.losses, and the 'coef' item is a list of the corresponding coefficients. The length of 'coef' should be 1 or equal to len(losses['types']).
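
End to end, a hedged sketch of calling train() in the style of the PaddleSeg quick start; the model, dataset, transforms and hyperparameters are example choices, not defaults of this API:

```python
import paddle
import paddleseg.transforms as T
from paddleseg.core import train
from paddleseg.datasets import OpticDiscSeg
from paddleseg.models import BiSeNetV2
from paddleseg.models.losses import CrossEntropyLoss

# Example data pipeline and datasets (OpticDiscSeg can download itself).
transforms = [T.Resize(target_size=(512, 512)), T.Normalize()]
train_dataset = OpticDiscSeg(transforms=transforms, mode='train')
val_dataset = OpticDiscSeg(transforms=transforms, mode='val')

model = BiSeNetV2(num_classes=2)
optimizer = paddle.optimizer.Momentum(learning_rate=0.01,
                                      parameters=model.parameters())

# BiSeNetV2 emits five logits (one main, four auxiliary), so
# 'types' and 'coef' both carry five entries here.
losses = {'types': [CrossEntropyLoss()] * 5, 'coef': [1] * 5}

train(model=model,
      train_dataset=train_dataset,
      val_dataset=val_dataset,
      optimizer=optimizer,
      losses=losses,
      iters=1000,
      batch_size=4,
      save_interval=200,
      save_dir='output')
```
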
## [Evaluation](../../paddleseg/core/val.py)

> paddleseg.core.evaluate(model, eval_dataset, aug_eval=False, scales=1.0, flip_horizontal=True, flip_vertical=False, is_slide=False, stride=None, crop_size=None, num_workers=0)

Launch evaluation.

> Args
> > - **model** (nn.Layer): A semantic segmentation model.
> > - **eval_dataset** (paddle.io.Dataset): Used to read and process validation datasets.
> > - **aug_eval** (bool, optional): Whether to use multi-scale and flip augmentation for evaluation. Default: False.
> > - **scales** (list|float, optional): Scales for augmentation. It is valid when `aug_eval` is True. Default: 1.0.
> > - **flip_horizontal** (bool, optional): Whether to use horizontal flip augmentation. It is valid when `aug_eval` is True. Default: True.
> > - **flip_vertical** (bool, optional): Whether to use vertical flip augmentation. It is valid when `aug_eval` is True. Default: False.
> > - **is_slide** (bool, optional): Whether to evaluate by sliding window. Default: False.
> > - **stride** (tuple|list, optional): The stride of the sliding window; the first value is the width and the second is the height. It should be provided when `is_slide` is True.
> > - **crop_size** (tuple|list, optional): The crop size of the sliding window; the first value is the width and the second is the height. It should be provided when `is_slide` is True.
> > - **num_workers** (int, optional): The number of workers for the data loader. Default: 0.

> Returns
> > - **float**: The mIoU of validation datasets.
> > - **float**: The accuracy of validation datasets.
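
Continuing the training sketch above, a minimal evaluation call; the model is assumed to already hold trained weights:

```python
from paddleseg.core import evaluate

# Returns mIoU and pixel accuracy over the validation dataset.
miou, acc = evaluate(model, val_dataset)
print('mIoU: {:.4f}, accuracy: {:.4f}'.format(miou, acc))
```
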
## [Prediction](../../paddleseg/core/predict.py)

> paddleseg.core.predict(model, model_path, transforms, image_list, image_dir=None, save_dir='output', aug_pred=False, scales=1.0, flip_horizontal=True, flip_vertical=False, is_slide=False, stride=None, crop_size=None)

Launch prediction and visualize the results.

> Args
> > - **model** (nn.Layer): Used to predict for the input images.
> > - **model_path** (str): The path of the pretrained model.
> > - **transforms** (transform.Compose): Preprocessing for the input images.
> > - **image_list** (list): A list of image paths to be predicted.
> > - **image_dir** (str, optional): The root directory of the images to be predicted. Default: None.
> > - **save_dir** (str, optional): The directory for saving the predicted results. Default: 'output'.
> > - **aug_pred** (bool, optional): Whether to use multi-scale and flip augmentation for prediction. Default: False.
> > - **scales** (list|float, optional): Scales for augmentation. It is valid when `aug_pred` is True. Default: 1.0.
> > - **flip_horizontal** (bool, optional): Whether to use horizontal flip augmentation. It is valid when `aug_pred` is True. Default: True.
> > - **flip_vertical** (bool, optional): Whether to use vertical flip augmentation. It is valid when `aug_pred` is True. Default: False.
> > - **is_slide** (bool, optional): Whether to predict by sliding window. Default: False.
> > - **stride** (tuple|list, optional): The stride of the sliding window; the first value is the width and the second is the height. It should be provided when `is_slide` is True.
> > - **crop_size** (tuple|list, optional): The crop size of the sliding window; the first value is the width and the second is the height. It should be provided when `is_slide` is True.
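
A hedged sketch of predict(); the checkpoint path and image paths are placeholders for your own files:

```python
import paddleseg.transforms as T
from paddleseg.core import predict
from paddleseg.models import BiSeNetV2

model = BiSeNetV2(num_classes=2)
transforms = T.Compose([T.Resize(target_size=(512, 512)), T.Normalize()])

predict(model,
        model_path='output/best_model/model.pdparams',  # assumed checkpoint path
        transforms=transforms,
        image_list=['data/example.jpg'],                # assumed input image
        save_dir='output/result')
```
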
