Enhanced Deep Learning with Improved Feature Subspace Separation

Mustafa Parlaktuna, Tennessee State University


This research proposes a new deep convolutional network architecture that improves feature subspace separation. In training, the system considers M classes of input sets {C_i}_{i=1}^M and M deep convolutional networks {DN_i}_{i=1}^M whose filters and other parameters are randomly initialized. For each input class C_i, the convolutional neural network (CNN) generates a set of features F_i. A local subspace S_i is then fitted to each feature set F_i. This is followed by full training of the deep convolutional network DN_i based on a decision criterion derived from the rejections of all features in {F_i}_{i=1}^M with respect to S_i. Ten different deep convolutional network topologies are used to show that the proposed technique works better for small network topologies and performs comparably to more complex networks.

This research also suggests that a trained deep network can potentially be used as a generic feature extractor to cluster images that the network cannot inherently identify. In recent years, many different network topologies have been applied in various areas of science and engineering. Those networks are typically trained with large datasets (e.g., millions of images) that contain diverse classes and categories with distinctive features. Intuitively, if a deep convolutional network is trained with a rich variety of classes, it may be capable of classifying data from unknown classes.
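The subspace-rejection idea described in the abstract can be sketched as follows. This is a minimal illustration, not the dissertation's actual implementation: it assumes each local subspace S_i is a linear (PCA-style) subspace fitted to the class's feature vectors via SVD, and that the "rejection" of a feature is the norm of its component orthogonal to that subspace. The function names and the choice of subspace dimension k are hypothetical.

```python
import numpy as np

def fit_local_subspace(features, k):
    """Fit a k-dimensional linear subspace to a set of feature vectors.

    features: (n, d) array, one feature vector per row (e.g., CNN features F_i).
    Returns the class mean and an orthonormal (k, d) basis of the subspace,
    obtained from the top-k right singular vectors of the centered data.
    """
    mean = features.mean(axis=0)
    _, _, Vt = np.linalg.svd(features - mean, full_matrices=False)
    return mean, Vt[:k]

def rejection(x, mean, basis):
    """Norm of the component of x orthogonal to the subspace (its rejection).

    A small rejection means x lies close to the local subspace S_i,
    so it likely belongs to class C_i.
    """
    centered = x - mean
    projection = basis.T @ (basis @ centered)   # project onto the subspace
    return np.linalg.norm(centered - projection)
```

Under this reading, a test feature would be assigned to the class whose local subspace yields the smallest rejection, and the rejections of all M feature sets against S_i would drive the training criterion for DN_i.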

Subject Area

Computer engineering|Artificial intelligence|Computer science

Recommended Citation

Mustafa Parlaktuna, "Enhanced Deep Learning with Improved Feature Subspace Separation" (2018). ETD Collection for Tennessee State University. Paper AAI10842927.