[5] viXra:2010.0225 [pdf] submitted on 2020-10-28 07:50:55
Authors: Molokwu C. Reginald, Molokwu C. Bonaventure, Molokwu C. Victor, Okeke C. Ogochukwu
Comments: 8 Pages.
Convolutional Neural Networks (CNNs) have become the state-of-the-art method for image classification in recent times. CNNs have proven very effective at identifying objects and human faces, and at powering machine vision in robots as well as self-driving cars. At this point, they perform better than human subjects on a large number of image datasets, a large portion of which depend on the idea of solid classes. Hence, image classification has become an exciting and appealing domain in Artificial Intelligence (AI) research. In this paper, we propose a unique framework, FUSIONET, to aid in image classification. Our proposition combines two novel models in parallel (MainNET, a 3 x 3 architecture, and AuxNET, a 1 x 1 architecture). Subsequently, the resultant feature maps extracted from this combination are fed as input features to a downstream classifier for classification tasks on the images in question. FUSIONET has been trained, tested, and evaluated on real-world datasets, achieving state-of-the-art results on the popular CINIC-10 dataset.
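As a rough illustration of the parallel-fusion idea described above, the following PyTorch sketch runs a 3 x 3 convolutional branch (standing in for MainNET) and a 1 x 1 branch (standing in for AuxNET) side by side and feeds the fused feature maps to a downstream classifier. The layer depths and widths, the channel-wise concatenation, and the pooled linear head are illustrative assumptions; the abstract does not specify them.

import torch
import torch.nn as nn

class FusionNetSketch(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        # MainNET-style branch: 3x3 convolutions (depth/width assumed)
        self.main_branch = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
        )
        # AuxNET-style branch: 1x1 convolutions (depth/width assumed)
        self.aux_branch = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=1), nn.ReLU(),
        )
        # Downstream classifier over the fused feature maps
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64 + 64, num_classes),
        )

    def forward(self, x):
        # Run both branches in parallel, then fuse channel-wise
        fused = torch.cat([self.main_branch(x), self.aux_branch(x)], dim=1)
        return self.classifier(fused)

# Example: a CINIC-10-sized batch of 32x32 RGB images
logits = FusionNetSketch(num_classes=10)(torch.randn(8, 3, 32, 32))
print(logits.shape)  # torch.Size([8, 10])

Channel-wise concatenation is the simplest fusion choice; the paper may well combine the two sets of maps differently.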
Category: Artificial Intelligence
[4] viXra:2010.0220 [pdf] replaced on 2020-11-01 02:24:46
Authors: Md Monzur Morshed
Comments: 11 Pages. This is a research proposal.
The internet can broadly be divided into three parts: surface, deep, and dark, the last of which offers anonymity to its users and hosts [1]. The Deep Web refers to an encrypted network that is not indexed by search engines such as Google. Users must use Tor to visit sites on the dark web [2]. Ninety-six percent of the web is considered deep web because it is hidden. It is like an iceberg: people can see only a small portion above the surface, while the largest part is hidden under the sea [3, 4, 5]. Basic methods of graph theory and data mining that deal with social network analysis can be comprehensively used to understand the Deep Web and detect cyber threats [6]. Since the internet is rapidly evolving and it is nearly impossible to censor the deep web, there is a need to develop standard mechanisms and tools to monitor it. In this proposed study, our focus will be to develop a standard research mechanism for understanding the Deep Web that will support researchers, academicians, and law-enforcement agencies in strengthening social stability and ensuring peace locally and globally.
Category: Artificial Intelligence
[3] viXra:2010.0147 [pdf] submitted on 2020-10-19 19:41:58
Authors: Eren Unlu
Comments: 4 Pages.
Fisher Discriminant Analysis (FDA), also known as Linear Discriminant Analysis (LDA), is a simple yet highly effective tool for classification across a vast range of datasets and settings. In this paper, we propose to leverage the discriminative potency of FDA for an unsupervised outlier detection algorithm. Unsupervised anomaly detection has been a topic of high interest in the literature due to its numerous practical applications and the inherently fuzzy, subjective interpretation of success; it is therefore important to have different types of algorithms that can deliver distinct perspectives. The proposed method selects the subset of outlier points by maximizing the LDA distance between the outlier class and the class of non-outliers via a genetic algorithm.
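The abstract's core loop (pick a candidate outlier subset, score the split by LDA separation, search with a genetic algorithm) can be sketched in NumPy as below. The fitness function uses the classical Fisher criterion with pooled within-class scatter; the fixed outlier budget, population size, and mutation rate are illustrative assumptions, not the paper's settings.

import numpy as np

rng = np.random.default_rng(0)

def fisher_separation(X, mask):
    # Fisher criterion between the flagged (mask) and unflagged groups
    a, b = X[mask], X[~mask]
    if len(a) < 2 or len(b) < 2:
        return -np.inf
    diff = a.mean(0) - b.mean(0)
    Sw = (np.cov(a, rowvar=False) * (len(a) - 1)
          + np.cov(b, rowvar=False) * (len(b) - 1))
    Sw += 1e-6 * np.eye(X.shape[1])  # regularize for invertibility
    return float(diff @ np.linalg.solve(Sw, diff))

def ga_outliers(X, n_out=10, pop=50, gens=100, p_mut=0.05):
    n = len(X)
    def random_mask():
        m = np.zeros(n, dtype=bool)
        m[rng.choice(n, n_out, replace=False)] = True
        return m
    population = [random_mask() for _ in range(pop)]
    for _ in range(gens):
        scores = [fisher_separation(X, m) for m in population]
        elite = [population[i] for i in np.argsort(scores)[-pop // 2:]]
        children = []
        while len(elite) + len(children) < pop:
            i, j = rng.choice(len(elite), 2, replace=False)
            # Uniform crossover followed by bit-flip mutation
            child = np.where(rng.random(n) < 0.5, elite[i], elite[j])
            child ^= rng.random(n) < p_mut
            # Repair: keep at most n_out flagged points
            on = np.flatnonzero(child)
            child[:] = False
            if len(on) >= n_out:
                child[rng.choice(on, n_out, replace=False)] = True
            else:
                child[on] = True
            children.append(child)
        population = elite + children
    return max(population, key=lambda m: fisher_separation(X, m))

# Toy data: a Gaussian blob plus 10 shifted outliers at indices 90..99
X = np.vstack([rng.normal(0, 1, (90, 2)), rng.normal(6, 1, (10, 2))])
print(np.flatnonzero(ga_outliers(X, n_out=10)))  # indices flagged as outliers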
Category: Artificial Intelligence
[2] viXra:2010.0078 [pdf] submitted on 2020-10-11 11:02:47
Authors: David M. W. Powers
Comments: 11 Pages. Accepted and presented at ConZealand2020. Rejected by arXiv as not in scope.
The history of robotics is older than the invention and exploitation of robots. The term ‘robot’ came from the Czech and was first used in a play a century ago. The term ‘robotics’ and the ethical considerations captured by ‘The Three Laws of Robotics’ come from a SciFi author born a century ago. SF leads the way! Similarly, the idea of Artificial Intelligence as a thinking machine goes back to the earliest days of computing, and in this paper we follow some of the key ideas through the work of the pioneers in the field.
We’ve come a long way since then, but are we there yet? Could we now build a conscious sentient thinking computer? What would it be like? Will it take over the world?
Category: Artificial Intelligence
[1] viXra:2010.0060 [pdf] submitted on 2020-10-09 20:01:48
Authors: Eren Unlu
Comments: 5 Pages.
We propose an innovative, simple yet effective unsupervised outlier detection algorithm called the Auto-Encoder Transposed Permutation Importance Outlier Detector (ATPI), which is based on the fusion of two machine learning concepts: autoencoders and permutation importance. As unsupervised anomaly detection is a subjective task, in which the desired results can vary with the demands of the application, we believe this kind of novel framework has great potential in the field.
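The abstract names ATPI's two ingredients, autoencoders and permutation importance, without detailing how they are fused. The sketch below is only one plausible combination under that description, not the paper's algorithm: an autoencoder supplies per-feature reconstruction errors, and the permutation importance of each feature (how much shuffling it inflates reconstruction error) weights those errors into a sample-level outlier score.

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# Toy data: inliers plus a handful of shifted outliers at indices 95..99
X = np.vstack([rng.normal(0, 1, (95, 4)), rng.normal(5, 1, (5, 4))])

# A small bottlenecked MLP trained to reproduce its input acts as the autoencoder
ae = MLPRegressor(hidden_layer_sizes=(2,), max_iter=3000, random_state=0)
ae.fit(X, X)

def recon_error(X):
    return ((ae.predict(X) - X) ** 2).mean()

# Permutation importance: error inflation when one feature is shuffled
base = recon_error(X)
importance = np.empty(X.shape[1])
for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])
    importance[j] = recon_error(Xp) - base
importance = np.clip(importance, 0, None)
importance /= importance.sum() + 1e-12

# Outlier score: importance-weighted per-feature reconstruction error
scores = ((ae.predict(X) - X) ** 2) @ importance
print(np.argsort(scores)[-5:])  # indices of the 5 most outlying points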
Category: Artificial Intelligence