Authors: Guangqun Chen
In this paper I study SfNet further. By reducing the number of layers in SfNet, I create two groups of networks: the HDDE6 series and the HDDE9 series. HDDE6-3 and HDDE6-2S have 9 layers; HDDE9-2 has 11 layers; HDDE9-2S and HDDE9-3 have 12 layers. In my experiments on the CALTECH-256 dataset, compared with SfNet, the classification accuracy of HDDE9-3 increases by about 4.35%, that of HDDE9-2 by about 3.07%, and that of HDDE9-2S by about 5.55%. Compared with VGG-16 at 83.63%, HDDE9-3 reaches 90.73%, HDDE9-2 reaches 89.81%, and HDDE9-2S reaches 92.28%. In the HDDE6 series, the feature-extraction part uses only 1 convolution layer and 5 MixedSCLayers, so these networks have far fewer parameters and run much faster, while HDDE6-3 matches the accuracy of HDDE9-3 on CALTECH-256. All the improvements are due to the use of Structure Composing Layers.
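The layer counts above can be cross-checked with simple arithmetic. A minimal sketch follows; the feature-extraction depth (1 convolution layer plus 5 MixedSCLayers) comes from the abstract, while splitting the remaining layers into a classifier head is an assumption made here for illustration, not a detail stated in the paper.

```python
# Hedged sketch: layer-count arithmetic for the HDDE6 series.
# The feature-extraction part (1 convolution layer + 5 MixedSCLayers)
# is stated in the abstract; treating the remaining layers as a
# classifier head is an assumption for illustration only.

FEATURE_LAYERS_HDDE6 = 1 + 5  # 1 convolution layer + 5 MixedSCLayers

def total_depth(feature_layers: int, classifier_layers: int) -> int:
    """Total layer count = feature-extraction depth + classifier depth."""
    return feature_layers + classifier_layers

# HDDE6-3 is stated to have 9 layers, which is consistent with an
# assumed 3-layer classifier head on top of the 6 feature layers.
print(total_depth(FEATURE_LAYERS_HDDE6, 3))  # → 9
```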
Comments: 10 Pages.