Artificial Intelligence

1712 Submissions

[17] viXra:1712.0659 [pdf] submitted on 2017-12-29 06:21:14

TDBF: Two Dimensional Belief Function

Authors: Yangxue Li; Yong Deng
Comments: 15 Pages.

How to efficiently handle uncertain information is still an open issue. In this paper, a new method for dealing with uncertain information, named the two dimensional belief function (TDBF), is presented. A TDBF has two components, T=(mA,mB). The first component, mA, is a classical belief function. The second component, mB, is also a classical belief function, but it serves as a measure of the reliability of the first component. The definition of the TDBF and its discounting algorithm are proposed. Compared with the classical discounting model, the proposed TDBF is more flexible and reasonable. Numerical examples are used to show the efficiency of the proposed method.
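As an illustration of the structure described above, a minimal Python sketch, assuming mass functions are stored as dicts from frozenset focal elements to masses; the reliability frame and the classical discounting shown for comparison are assumptions, since the abstract does not spell out the paper's own discounting algorithm.

    def classical_discount(m, alpha, frame):
        """Classical Shafer discounting: scale each mass by the reliability
        factor alpha and move the remainder onto the whole frame."""
        out = {focal: alpha * mass for focal, mass in m.items()}
        theta = frozenset(frame)
        out[theta] = out.get(theta, 0.0) + (1.0 - alpha)
        return out

    # A TDBF is a pair of classical belief functions T = (mA, mB),
    # where mB expresses the reliability of mA.
    frame = {"a", "b"}
    mA = {frozenset({"a"}): 0.6, frozenset({"b"}): 0.3, frozenset(frame): 0.1}
    mB = {frozenset({"reliable"}): 0.8, frozenset({"reliable", "unreliable"}): 0.2}
    tdbf = (mA, mB)

    print(classical_discount(mA, 0.8, frame))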
Category: Artificial Intelligence

[16] viXra:1712.0647 [pdf] submitted on 2017-12-28 23:25:34

A Total Uncertainty Measure for D Numbers Based on Belief Intervals

Authors: Xinyang Deng, Wen Jiang
Comments: 14 Pages.

As a generalization of Dempster-Shafer theory, the theory of D numbers is a new theoretical framework for uncertainty reasoning. Measuring the uncertainty of knowledge or information represented by D numbers is an unsolved issue in that theory. In this paper, inspired by the distance-based uncertainty measures for Dempster-Shafer theory, a total uncertainty measure for a D number is proposed based on its belief intervals. The proposed total uncertainty measure can simultaneously capture the discord, non-specificity, and non-exclusiveness involved in D numbers. Some basic properties of this total uncertainty measure, including its range, monotonicity, and generalized set consistency, are also presented.
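The abstract does not give the measure's closed form, but the belief intervals it is built on are standard; a minimal sketch, assuming a classical BPA (D numbers additionally allow non-exclusive elements and incomplete mass, which this sketch omits):

    def belief_interval(m, x):
        """Belief interval [Bel({x}), Pl({x})] of a singleton x, with the
        mass function m given as a dict from frozenset to mass."""
        bel = sum(v for focal, v in m.items() if focal <= frozenset({x}))
        pl = sum(v for focal, v in m.items() if x in focal)
        return bel, pl

    m = {frozenset({"a"}): 0.5, frozenset({"a", "b"}): 0.3, frozenset({"b", "c"}): 0.2}
    for x in "abc":
        print(x, belief_interval(m, x))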
Category: Artificial Intelligence

[15] viXra:1712.0495 [pdf] submitted on 2017-12-18 08:50:22

Just Keep it in Mind: Information is a Complex Notion with Physical and Semantic Information Staying for Real and Imaginary Parts of the Expression

Authors: Emanuel Diamant
Comments: 3 Pages. Presented at the IS4SI 2017 Summit, Information Theory Section, Gothenburg, Sweden, 12–16 June 2017

Shannon's Information was devised to improve the performance of a data communication channel. Since then, the situation has changed drastically, and today a more generally applicable and suitable definition of information is urgently required. To meet this demand, I have proposed a definition of my own. According to it, information is a complex notion, with Physical and Semantic information standing for the Real and Imaginary parts of the term. The scientific community has received this idea very unfavorably. But without a better solution to the problems of: 1) intron-exon partition in genes, 2) information flow in neuronal networks, 3) memory creation and potentiation in brains, 4) the materialization of thoughts and thinking in human heads, and 5) the undeniable shift from the Computational (that is, data processing based) approach to the Cognitive (that is, information processing based) approach in the field of scientific research, they will one day be forced to admit that there is something worthy in this new definition.
Category: Artificial Intelligence

[14] viXra:1712.0494 [pdf] submitted on 2017-12-18 09:05:26

Shannon's Definition of Information is Obsolete and Inadequate. It is Time to Embrace Kolmogorov’s Insights on the Matter

Authors: Emanuel Diamant
Comments: 3 Pages. Presented at the 2016 ICSEE International Conference, Eilat, Israel, 16–18 November 2016.

Information Theory, as developed by Claude Shannon in 1948, was about the communication of messages as electronic signals via a transmission channel. Only the physical properties of the signal and the channel were taken into account, while the meaning of the message was ignored entirely. Such an approach to information met the requirements of a data communication channel very well. But recent advances in almost all sciences have created an urgent demand for including meaningful information in the body of a communicated message. To meet this demand, I have proposed a new definition of information. In this definition, information is seen as a complex notion composed of two inseparable parts: Physical information and Semantic information. The classical notions of information – Shannon's, Fisher's, and Renyi's information, Kolmogorov's complexity, and Chaitin's algorithmic information – are all variants of physical information. Semantic information is a new concept, and it deserves to be properly studied, treated, and used.
Category: Artificial Intelligence

[13] viXra:1712.0469 [pdf] submitted on 2017-12-15 23:33:47

Predicting Yelp Star Reviews Based on Network Structure with Deep Learning

Authors: Luis Perez
Comments: 12 pages, 17 figures

In this paper, we tackle the real-world problem of predicting Yelp star-review ratings based on business features (such as images and descriptions), user features (such as average previous ratings), and, of particular interest, network properties (which businesses a user has rated before). We compare multiple models on different sets of features -- from simple linear regression on network features only to deep learning models on network and item features. In recent years, breakthroughs in deep learning have led to increased accuracy in common supervised learning tasks, such as image classification, captioning, and language understanding. However, the idea of combining deep learning with network features and structure appears to be novel. While the problem of predicting future interactions in a network has been studied at length, these approaches have often ignored either node-specific data or global structure. We demonstrate that a mixed approach combining both node-level features and network information can effectively be used to predict Yelp review star ratings. We evaluate on the Yelp dataset by splitting our data along the time dimension (as would naturally occur in the real world) and comparing our model against others which do not take advantage of the network structure and/or deep learning.
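A minimal sketch of the mixed-feature idea, assuming scikit-learn and toy data; the feature names and the baseline regressor are illustrative, not the paper's actual pipeline:

    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    n = 1000
    item_features = rng.normal(size=(n, 5))      # e.g. business attributes
    network_features = rng.normal(size=(n, 3))   # e.g. degree, common neighbors
    X = np.hstack([item_features, network_features])
    y = rng.uniform(1, 5, size=n)                # star ratings in [1, 5]

    # Time-based split: train on the "past", evaluate on the "future".
    split = int(0.8 * n)
    model = LinearRegression().fit(X[:split], y[:split])
    print(model.score(X[split:], y[split:]))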
Category: Artificial Intelligence

[12] viXra:1712.0468 [pdf] submitted on 2017-12-15 23:41:37

The Effectiveness of Data Augmentation in Image Classification using Deep Learning

Authors: Luis Perez, Jason Wang
Comments: 8 Pages.

In this paper, we explore and compare multiple solutions to the problem of data augmentation in image classification. Previous work has demonstrated the effectiveness of data augmentation through simple techniques, such as cropping, rotating, and flipping input images. We artificially constrain our access to data to a small subset of the ImageNet dataset, and compare each data augmentation technique in turn. One of the more successful data augmentation strategies is the set of traditional transformations mentioned above. We also experiment with GANs to generate images of different styles. Finally, we propose a method that allows a neural net to learn augmentations which best improve the classifier, which we call neural augmentation. We discuss the successes and shortcomings of this method on various datasets.
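A minimal sketch of the traditional transformations named above (flip, rotate, crop), assuming Pillow; the parameter values and the input filename are illustrative, not the ones used in the paper:

    from PIL import Image, ImageOps

    def traditional_augmentations(img):
        w, h = img.size
        yield ImageOps.mirror(img)    # horizontal flip
        yield img.rotate(15)          # small rotation
        # central crop, resized back to the original dimensions
        yield img.crop((w // 10, h // 10, 9 * w // 10, 9 * h // 10)).resize((w, h))

    img = Image.open("example.jpg")   # placeholder input image
    augmented = list(traditional_augmentations(img))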
Category: Artificial Intelligence

[11] viXra:1712.0467 [pdf] submitted on 2017-12-15 23:43:11

Gaussian Processes for Crime Prediction

Authors: Luis Perez, Alex Wang
Comments: 8 Pages.

The ability to predict crime is incredibly useful for police departments, city planners, and many other parties, but current approaches have thus far not made use of recent developments in machine learning techniques. In this paper, we present a novel approach to this task: Gaussian process regression. Gaussian processes (GPs) are a rich family of distributions over functions. We train GPs on historic crime data to learn the underlying probability distribution of crime incidence and to make predictions about future crime distributions.
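A minimal sketch of GP regression on spatial crime data, assuming scikit-learn; the RBF-plus-noise kernel and the toy coordinates are assumptions, not the paper's setup:

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    # Toy historic data: (latitude, longitude) -> observed crime incidence.
    X = np.random.rand(200, 2)
    y = np.sin(6 * X[:, 0]) + np.cos(6 * X[:, 1]) + 0.1 * np.random.randn(200)

    kernel = RBF(length_scale=0.5) + WhiteKernel(noise_level=0.1)
    gp = GaussianProcessRegressor(kernel=kernel).fit(X, y)

    # Predict mean and uncertainty at a batch of future locations.
    mean, std = gp.predict(np.random.rand(10, 2), return_std=True)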
Category: Artificial Intelligence

[10] viXra:1712.0465 [pdf] submitted on 2017-12-16 00:36:46

Reinforcement Learning with Swingy Monkey

Authors: Luis Perez, Aidi Zhang, Kevin Eskici
Comments: 7 Pages.

This paper explores model-free, model-based, and mixture models for reinforcement learning in the setting of a SwingyMonkey game \footnote{The code is hosted on a public repository \href{https://github.com/kandluis/machine-learning}{here} under the prac4 directory.}. SwingyMonkey is a simple game with well-defined goals and mechanics and a relatively small state space. Using Bayesian optimization \footnote{The optimization took place using the open-source software made available by HIPS \href{https://github.com/HIPS/Spearmint}{here}.} on a simple Q-learning algorithm, we were able to obtain high scores within just a few training epochs. However, the system failed to scale well under continued training, and optimization over hundreds of iterations proved too time-consuming to be effective. After manually exploring multiple approaches, the best results were achieved using a mixture of $\epsilon$-greedy Q-learning with a stable learning rate $\alpha$ and a discount factor $\delta \approx 1$. Despite the theoretical limitations of this approach, these settings resulted in maximum scores of over 5000 points, with an average score of $\bar{x} \approx 684$ (averaged over the final 100 testing epochs; median $\bar{m} = 357.5$). The results show a continuing linear log-relation that caps only after 20,000 training epochs.
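A minimal sketch of the $\epsilon$-greedy Q-learning update described above; the state encoding, action set, and hyperparameter values are placeholders (the authors' actual implementation is in the linked repository):

    import random
    from collections import defaultdict

    alpha, delta, epsilon = 0.1, 0.99, 0.05   # learning rate, discount, exploration
    ACTIONS = [0, 1]                          # e.g. glide / jump
    Q = defaultdict(float)                    # Q[(state, action)] -> value

    def choose_action(state):
        if random.random() < epsilon:         # explore with probability epsilon
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: Q[(state, a)])

    def update(state, action, reward, next_state):
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (reward + delta * best_next - Q[(state, action)])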
Category: Artificial Intelligence

[9] viXra:1712.0464 [pdf] submitted on 2017-12-16 00:38:28

Multi-Document Text Summarization

Authors: Luis Perez, Kevin Eskici
Comments: 24 Pages.

We tackle the problem of multi-document extractive summarization by implementing two well-known algorithms for single-text summarization -- {\sc TextRank} and {\sc GrassHopper}. We use ROUGE-1 and ROUGE-2 precision scores with the DUC 2004 Task 2 data set to measure the performance of these two algorithms, with optimized parameters as described in their respective papers ($\alpha = 0.25$ and $\lambda = 0.5$ for {\sc GrassHopper} and $d = 0.85$ for {\sc TextRank}). We compare these algorithms to common baselines as well as non-naive, novel baselines, and we present the resulting ROUGE-1 and ROUGE-2 recall scores. Subsequently, we implement two novel algorithms as extensions of {\sc GrassHopper} and {\sc TextRank}, termed {\sc ModifiedGrassHopper} and {\sc ModifiedTextRank} respectively. The modified algorithms intuitively attempt to ``maximize'' diversity across the summary. We present the resulting ROUGE scores. We expect that with further optimizations, this unsupervised approach to extractive text summarization will prove useful in practice.
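A minimal sketch of extractive {\sc TextRank} with the damping factor $d = 0.85$ cited above, assuming networkx; the word-overlap similarity is a stand-in for the similarity function of the original TextRank paper:

    import networkx as nx

    def textrank_summary(sentences, k=2, d=0.85):
        def overlap(s1, s2):
            w1, w2 = set(s1.lower().split()), set(s2.lower().split())
            return len(w1 & w2) / (1 + len(w1 | w2))

        # Build a sentence-similarity graph and rank nodes with PageRank.
        g = nx.Graph()
        for i, si in enumerate(sentences):
            for j, sj in enumerate(sentences):
                if i < j and overlap(si, sj) > 0:
                    g.add_edge(i, j, weight=overlap(si, sj))
        scores = nx.pagerank(g, alpha=d, weight="weight")
        top = sorted(scores, key=scores.get, reverse=True)[:k]
        return [sentences[i] for i in sorted(top)]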
Category: Artificial Intelligence

[8] viXra:1712.0446 [pdf] submitted on 2017-12-13 08:17:06

A New Divergence Measure for Basic Probability Assignment and Its Applications in Extremely Uncertain Environments

Authors: Liguo Fei, Yong Hu, Yong Deng, Sankaran Mahadevan
Comments: 9 Pages.

Information fusion under extremely uncertain environments is an important issue in pattern classification and decision-making problems. Dempster-Shafer evidence theory (D-S theory) is more and more extensively applied to information fusion because of its advantages in dealing with uncertain information. However, results contrary to common sense are often obtained when combining different pieces of evidence using Dempster's combination rules. How to measure the difference between different pieces of evidence is still an open issue. In this paper, a new divergence is proposed based on the Kullback-Leibler divergence in order to measure the difference between different basic probability assignments (BPAs). Numerical examples are used to illustrate the computational process of the proposed divergence. Then the similarity between different BPAs is also defined based on the proposed divergence. Basic knowledge about pattern recognition is introduced, and a new classification algorithm is presented using the proposed divergence and similarity under extremely uncertain environments, illustrated by a small example on robot sensing. The proposed method is motivated by the desperate need to develop intelligent systems, such as sensor-based data-fusion manipulators, which must work in complicated, extremely uncertain environments where sensory data are 1) fragmentary and 2) collected from multiple levels of resolution.
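The abstract does not give the closed form of the proposed divergence, so the sketch below only illustrates its main ingredient, a symmetrized Kullback-Leibler divergence applied to two BPAs sharing focal elements; the paper's measure differs in how it treats focal sets:

    import math

    def symmetric_kl(m1, m2):
        """Symmetrized KL divergence between two mass functions sharing
        focal elements (dicts: frozenset -> mass, all masses > 0)."""
        kl12 = sum(p * math.log(p / m2[f]) for f, p in m1.items())
        kl21 = sum(q * math.log(q / m1[f]) for f, q in m2.items())
        return 0.5 * (kl12 + kl21)

    m1 = {frozenset({"a"}): 0.7, frozenset({"b"}): 0.3}
    m2 = {frozenset({"a"}): 0.4, frozenset({"b"}): 0.6}
    print(symmetric_kl(m1, m2))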
Category: Artificial Intelligence

[7] viXra:1712.0444 [pdf] submitted on 2017-12-13 08:59:01

Environmental Impact Assessment Using D-Vikor Approach

Authors: Liguo Fei, Yong Deng
Comments: 15 Pages.

Environmental impact assessment (EIA) is an open and important issue that depends on social, ecological, economic, and other factors. Due to human judgment, a variety of uncertainties are brought into the EIA process. Many existing methods seem powerless to represent and deal with this uncertainty effectively. A new theory called D numbers, because of its advantage in handling uncertain information, is widely used for uncertainty modeling and decision making. The VIKOR method has unique advantages in dealing with multiple-criteria decision-making (MCDM) problems, especially when the criteria are non-commensurable and even conflicting; it can also obtain a compromised optimal solution. In order to solve EIA problems more effectively, a D-VIKOR approach is proposed in this paper, which extends the VIKOR method with D numbers theory. In the proposed approach, assessment information for environmental factors is expressed and modeled by D numbers, and a new combination rule for multiple D numbers is defined. Subjective weights and objective weights are considered in the VIKOR process for more reasonable ranking results. A numerical example is conducted to analyze and demonstrate the practicality and effectiveness of the proposed D-VIKOR approach.
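For reference, a minimal sketch of the classical VIKOR ranking step that D-VIKOR extends, assuming benefit-type criteria and the usual compromise weight $v = 0.5$; the D-numbers modeling and the new combination rule are not reproduced here:

    import numpy as np

    def vikor(F, w, v=0.5):
        """F: alternatives x criteria matrix (larger is better); w: weights."""
        f_best, f_worst = F.max(axis=0), F.min(axis=0)
        D = w * (f_best - F) / (f_best - f_worst)   # normalized weighted regret
        S, R = D.sum(axis=1), D.max(axis=1)         # group utility / individual regret
        Q = (v * (S - S.min()) / (S.max() - S.min())
             + (1 - v) * (R - R.min()) / (R.max() - R.min()))
        return np.argsort(Q)                        # best alternative first

    F = np.array([[7.0, 8.0, 6.0], [6.5, 7.0, 9.0], [8.0, 6.0, 7.0]])
    w = np.array([0.4, 0.35, 0.25])
    print(vikor(F, w))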
Category: Artificial Intelligence

[6] viXra:1712.0439 [pdf] submitted on 2017-12-13 11:45:00

Large Scale Traffic Surveillance: Vehicle Detection and Classification Using Cascade Classifier and Convolutional Neural Network

Authors: Shaif Chowdhury
Comments: 8 Pages.

In this paper, we present a traffic surveillance system for the detection and classification of vehicles in large-scale videos. Vehicle detection is a crucial part of road safety, and many different intelligent systems have been proposed for traffic surveillance. The system presented here is based on two steps: a Haar-like image descriptor and a convolutional neural network classifier. A cascade classifier is used to extract objects rapidly, and a neural network is used for the final classification of cars. For the Haar cascades, the system is trained on a set of positive images (vehicles) and negative images (non-vehicles), and tested on another set of scenes. For the second step, we use the Faster R-CNN architecture. The cascade classifier gives faster processing times, and the neural network is used to increase the detection rate.
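A minimal sketch of the two-stage pipeline, assuming OpenCV; the cascade model file and the classify_patch callback are placeholders, not the authors' trained models:

    import cv2

    cascade = cv2.CascadeClassifier("vehicle_cascade.xml")  # hypothetical model file

    def detect_vehicles(frame, classify_patch):
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Stage 1: the cascade proposes candidate regions quickly.
        candidates = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=4)
        vehicles = []
        for (x, y, w, h) in candidates:
            patch = frame[y:y + h, x:x + w]
            # Stage 2: a CNN classifier confirms or rejects each candidate.
            if classify_patch(patch):
                vehicles.append((x, y, w, h))
        return vehicles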
Category: Artificial Intelligence

[5] viXra:1712.0432 [pdf] submitted on 2017-12-13 22:28:48

DS-Vikor: a New Methodology for Supplier Selection

Authors: Liguo Fei, Yong Deng, Yong Hu
Comments: 15 Pages.

How to select the optimal supplier is an open and important issue in supply chain management (SCM); it requires assessing and ranking the potential suppliers, and can be considered a multi-criteria decision-making (MCDM) problem. Experts' assessments play a very important role in the process of supplier selection, while the subjective judgment of human beings can introduce unpredictable uncertainty. However, existing methods seem powerless to represent and deal with this uncertainty effectively. Dempster-Shafer evidence theory (D-S theory) is widely used for uncertainty modeling, decision making, and conflict management due to its advantages in handling uncertain information. The VIKOR method has a great advantage in handling MCDM problems with non-commensurable and even conflicting criteria, and in obtaining a compromised optimal solution. In this paper, a DS-VIKOR method is proposed for the supplier selection problem, which extends the VIKOR method with D-S theory. In this method, the basic probability assignment (BPA) is used to denote the decision makers' assessments of suppliers, a Deng-entropy-based weighting method is defined and applied to determine the weights of the multiple criteria, and the VIKOR method is used to obtain the final ranking results. An illustrative real-life example is conducted to analyze and demonstrate the practicality and effectiveness of the proposed DS-VIKOR method.
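A minimal sketch of Deng entropy, the belief entropy underlying the weighting step named above; how the entropies are turned into criteria weights is the paper's contribution and is not reproduced here:

    import math

    def deng_entropy(m):
        """Deng entropy of a mass function m: dict of frozenset -> mass.
        Each focal element A contributes -m(A) * log2(m(A) / (2^|A| - 1))."""
        return -sum(mass * math.log2(mass / (2 ** len(focal) - 1))
                    for focal, mass in m.items() if mass > 0)

    m = {frozenset({"a"}): 0.6, frozenset({"a", "b"}): 0.4}
    print(deng_entropy(m))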
Category: Artificial Intelligence

[4] viXra:1712.0400 [pdf] submitted on 2017-12-13 06:52:57

Adaptively Evidential Weighted Classifier Combination

Authors: Liguo Fei, Bingyi Kang, Van-Nam Huynh, Yong Deng
Comments: 9 Pages.

Classifier combination plays an important role in classification. Due to its efficiency in handling and fusing uncertain information, Dempster-Shafer evidence theory is widely used in multi-classifier fusion. In this paper, a method of adaptively evidential weighted classifier combination is presented. In our proposed method, the output of each classifier is modelled by a basic probability assignment (BPA). Then, the weight of each individual classifier is determined adaptively according to the uncertainty degree of the corresponding BPA. The uncertainty degree is measured by a belief entropy named Deng entropy. The discounting-and-combination scheme of D-S theory is used to calculate the weighted BPAs and combine them into the final BPA for classification. The effectiveness of the proposed weighted combination method is illustrated by numerical experimental results.
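A minimal sketch of the combination half of the discounting-and-combination scheme, Dempster's rule applied to two already weight-discounted classifier BPAs; the adaptive, Deng-entropy-based weights themselves are the paper's contribution and are not reproduced here:

    def dempster_combine(m1, m2):
        """Dempster's rule for two mass functions (dicts: frozenset -> mass)."""
        fused, conflict = {}, 0.0
        for f1, v1 in m1.items():
            for f2, v2 in m2.items():
                inter = f1 & f2
                if inter:
                    fused[inter] = fused.get(inter, 0.0) + v1 * v2
                else:
                    conflict += v1 * v2
        # Normalize by the non-conflicting mass.
        return {f: v / (1.0 - conflict) for f, v in fused.items()}

    # Two classifiers' outputs over classes {c1, c2}, assumed already
    # discounted by their (hypothetical) reliability weights.
    m1 = {frozenset({"c1"}): 0.72, frozenset({"c1", "c2"}): 0.28}
    m2 = {frozenset({"c2"}): 0.30, frozenset({"c1", "c2"}): 0.70}
    print(dempster_combine(m1, m2))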
Category: Artificial Intelligence

[3] viXra:1712.0347 [pdf] submitted on 2017-12-07 09:10:57

Finding The Next Term Of Any Time Series Type Or Non Time Series Type Sequence Using Total Similarity & Dissimilarity {Version 6} ISSN 1751-3030.

Authors: Ramesh Chandra Bagadi
Comments: 3 Pages.

In this research investigation, the author details a novel scheme for finding the next term of any given time-series-type or non-time-series-type sequence.
Category: Artificial Intelligence

[2] viXra:1712.0138 [pdf] submitted on 2017-12-05 14:07:08

Topological Clustering as a Method of Control for Certain Critical-Point Sensitive Systems

Authors: Martin J. Dudziak
Comments: 6 Pages. submitted to CoDIT 2018 (Thessaloniki, Greece, April 2018)

New methods can provide more sensitive modeling and more reliable control through the use of dynamically alterable local neighborhood clusters comprised of the state-space parameters most disposed to be influential in non-linear systemic changes. Particular attention is directed to systems with extreme non-linearity and uncertainty in measurement and in control communications (e.g., micro-scale systems that are remote and inaccessible to real-time control). An architecture for modeling based upon topological similarity mapping principles is introduced as an alternative to classical Turing machine models, including new “quantum computers.”
Category: Artificial Intelligence

[1] viXra:1712.0071 [pdf] replaced on 2018-01-28 06:21:06

The Intelligence Quotient of the Artificial Intelligence

Authors: Dimiter Dobrev
Comments: 27 Pages. Bulgarian. Serdica Journal of Computing

To determine which programs are AI, it is enough to run an exam and recognize as AI those programs that pass it. The exam grade will be called IQ. We cannot say exactly how big the IQ has to be for a program to count as AI, but we will choose a specific value: our definition of AI will be any program whose IQ is above this value. This idea has already been realized in [3], but here we repeat the construction with some improvements.
Category: Artificial Intelligence