Artificial Intelligence

2112 Submissions

[8] viXra:2112.0155 [pdf] submitted on 2021-12-29 02:21:06

Comparison of Various Models for Stock Prediction

Authors: Jonathan Lee
Comments: 4 Pages. Thanks

Due to the high volatility during the COVID-19 pandemic, interest in stock investment has grown sharply. It is also said that money is flowing back from the cryptocurrency market into the domestic stock market. In this situation, we examined which model could more accurately predict the closing price.
Category: Artificial Intelligence

[7] viXra:2112.0135 [pdf] submitted on 2021-12-26 21:08:14

Directed Dependency Graph Obtained from a Correlation Matrix by the Highest Successive Conditionings Method

Authors: Ait-Taleb Nabil
Comments: 22 Pages.

In this paper we will propose a directed dependency graph obtained from a correlation matrix. This graph will include probabilistic causal sub-models for each node modeled by conditionings percentages. The directed dependency graph will be obtained using the highest successive conditionings method with a conditioning percentage value to be exceeded.
Category: Artificial Intelligence

[6] viXra:2112.0130 [pdf] submitted on 2021-12-24 04:23:06

The SP Challenge: that the SP System is More Promising as a Foundation for the Development of Human-Level Broad AI Than Any Alternative

Authors: J Gerard Wolff
Comments: 44 Pages.

The "SP Challenge" is the deliberately provocative theme of this paper: that the "SP System" (SPS), meaning the "SP Theory of Intelligence" and its realisation in the "SP Computer Model", is more promising as a foundation for the development of human-level broad AI, aka 'artificial general intelligence' (AGI), than any alternative. In that connection, the main strengths of the SPS are: 1) The adoption of a top-down, breadth-first research strategy with wide scope; 2) Recognition of the importance of information compression (IC) in human learning, perception, and cognition -- and, correspondingly, a central role for IC in the SPS; 3) The working hypothesis that all kinds of IC may be understood in terms of the matching and unification of patterns (ICMUP); 4) A resolution of the apparent paradox that IC may achieve decompression as well as compression; 5) The powerful concept of SP-multiple-alignment, a generalisation of six other variants of ICMUP; 6) The clear potential of the SPS to solve 19 problems in AI research; 7) Strengths and potential of the SPS in modelling several aspects of intelligence, including several kinds of probabilistic reasoning, versatility in the representation and processing of AI-related knowledge, and the seamless integration of diverse aspects of intelligence, and diverse kinds of knowledge, in any combination; 8) Several other potential benefits and applications of the SPS; 9) In "SP-Neural", abstract concepts in the SPS may be mapped into putative structures expressed in terms of neurons and their interconnections and intercommunications; 10) The concept of ICMUP provides an entirely novel perspective on the foundations of mathematics; 11) How to make generalisations from data, including the correction of over- and under-generalisations, and how to reduce or eliminate errors in data. There is discussion of how the SPS compares with some other potential candidates for the SP Challenge, and an outline of possible future directions for the research.
Category: Artificial Intelligence

[5] viXra:2112.0126 [pdf] submitted on 2021-12-23 04:31:07

PCARST: A Method of Weakening Conflict Evidence Based on Principal Component Analysis and Relatively Similar Transformation

Authors: Xuan Zhao, Huizi Cui, Zilong Xiao, Bingyi Kang
Comments: 26 Pages.

How to deal with conflict is a significant issue in Dempster-Shafer evidence theory (DST). Under the Dempster combination rule, conflicts can produce counter-intuitive phenomena, and many effective conflict-handling methods have therefore been presented. This paper proposes a new framework for reducing conflict based on principal component analysis and relatively similar transformation (PCARST), which can better reduce the impact of conflicting evidence on the results and yields more reasonable outcomes than existing methods. The main characteristics of the basic probability assignments (BPAs) are maintained while the conflicting evidence is treated as a noise signal to be weakened. A numerical example is used to illustrate the effectiveness of the proposed method. Results show that a higher belief degree in the correct proposition is obtained compared with previous methods.
Category: Artificial Intelligence

[4] viXra:2112.0122 [pdf] submitted on 2021-12-22 03:25:27

Feedforward Neural Networks: Efficiency and Performance of Backpropagation and Evolutionary Algorithms

Authors: Kasper van Maasdam
Comments: 31 Pages.

Artificial neural networks are important in everyday life and are becoming more widespread. For this reason, it is crucial that they are understood and tested. This paper tests and compares two training methods: reinforcement learning with backpropagation, and an evolutionary method. The hypothesis is that the method using backpropagation and reinforcement learning is more efficient at training a neural network to play a game than the evolutionary algorithm, but that the model trained with backpropagation and reinforcement learning will have lower performance than a model trained with the evolutionary algorithm. To examine this hypothesis, a feedforward neural network and how it works must first be explained.

Neural networks are systems inspired by the biological brain that enable a computer to predict, model, classify, and perform many other tasks. They do all this by learning from a set of training data to find general relations that can be applied to unseen data. A neural network model is essentially a function with potentially thousands of parameters. Just like any other function, input values are provided and, from those, the output is calculated. In a feedforward neural network, this process is called feedforward.
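The feedforward pass described above can be sketched in a few lines; the layer shapes and the sigmoid activation here are illustrative assumptions, not details taken from the paper:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def feedforward(x, layers):
    """Propagate input x through a list of (weights, biases) layers."""
    a = x
    for W, b in layers:
        # Each neuron computes a weighted sum of the previous layer's
        # activations plus a bias, passed through the activation function.
        a = [sigmoid(sum(w * ai for w, ai in zip(row, a)) + bi)
             for row, bi in zip(W, b)]
    return a

# A tiny 2-input, 1-output network with all-zero parameters:
# every neuron outputs sigmoid(0) = 0.5 regardless of the input.
out = feedforward([1.0, 2.0], [([[0.0, 0.0]], [0.0])])
```

With trained (non-zero) weights, the same loop computes the model's prediction for any input.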

The process of feedforward is meaningless with a model that has not yet been configured to do anything. A neural network must first be taught to perform a certain task. This is what is accomplished with machine learning. Backpropagation is an example of a machine learning method. For backpropagation two things are required: the input and the corresponding output. Backpropagation will adjust the parameters of a model so the next time the same input is provided, the output will be closer to the desired output. This is called optimisation.
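The parameter adjustment that backpropagation performs can be sketched with a single linear neuron and a squared-error loss; the neuron, loss, and learning rate are illustrative assumptions rather than the paper's setup:

```python
def backprop_step(w, x, target, lr=0.1):
    """One gradient-descent update for y = w * x with loss (y - target)**2."""
    y = w * x                       # feedforward
    grad = 2.0 * (y - target) * x   # dLoss/dw via the chain rule
    return w - lr * grad            # step against the gradient

w0 = 0.0
w1 = backprop_step(w0, x=1.0, target=1.0)
# After the update the output for the same input is closer to the target,
# so the loss shrinks: (0.0 - 1.0)**2 = 1.0 before, (0.2 - 1.0)**2 = 0.64 after.
```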

Reinforcement learning is a way to teach a neural network by giving it positive reinforcement when it does something good and negative reinforcement when it does something bad. This is used when no desired output is known so backpropagation cannot directly be applied.

An evolutionary algorithm is much more intuitive than backpropagation. It imitates natural selection in biology, but with self-determined factors deciding the fitness of a model. When training a neural network with an evolutionary algorithm, a large group of random models is generated, all performing the same task. Some models, however, will be better suited to this task than others. How well they are suited to their environment is their fitness, which determines which models survive and can therefore reproduce and create mutated offspring. This process is repeated as many times as required to reach the desired performance.
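The generate-select-mutate loop above can be sketched by evolving a single scalar parameter instead of a full network; the population size, mutation scale, and toy fitness function are illustrative assumptions:

```python
import random

def evolve(fitness, pop_size=20, generations=50, sigma=0.1, seed=0):
    """Maximise `fitness` by repeated selection and Gaussian mutation."""
    rng = random.Random(seed)
    pop = [rng.uniform(-1.0, 1.0) for _ in range(pop_size)]  # random initial "models"
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)        # rank by fitness
        survivors = pop[: pop_size // 2]           # the fittest half survives
        offspring = [p + rng.gauss(0.0, sigma) for p in survivors]  # mutated copies
        pop = survivors + offspring
    return max(pop, key=fitness)

# Fitness peaks at w = 0.5, so the population should converge near it.
best = evolve(lambda w: -(w - 0.5) ** 2)
```

Because the survivors are carried over unchanged, the best fitness in the population can never decrease from one generation to the next.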

The hypothesis of this paper has been proven wrong. Neural networks trained with an evolutionary algorithm do end up performing at a higher level than models trained with reinforcement learning and backpropagation. However, neural networks trained with an evolutionary algorithm are also more efficient, both in the number of training cycles needed to reach the same performance and in the time required.

Category: Artificial Intelligence

[3] viXra:2112.0097 [pdf] replaced on 2022-01-18 17:08:15

Phish: A Novel Hyper-Optimizable Activation Function

Authors: Philip Naveen
Comments: 8 Pages. Critical errors fixed, and additional experiments performed

Deep-learning models are trained using backpropagation. The activation function within hidden layers is a critical component in minimizing loss in deep neural networks. Rectified Linear (ReLU) has been the dominant activation function for the past decade. Swish and Mish are newer activation functions that have been shown to yield better results than ReLU under specific circumstances. Phish is a novel activation function proposed here. It is a composite function defined as f(x) = xTanH(GELU(x)), where no discontinuities are apparent in the differentiated graph on the domain observed. Generalized networks were constructed using different activation functions. SoftMax was the output function. Using images from the MNIST and CIFAR-10 databanks, these networks were trained to minimize sparse categorical crossentropy. A large-scale cross-validation was simulated using stochastic Markov chains to account for the law of large numbers for the probability values. Statistical tests support the research hypothesis that Phish could outperform other activation functions in classification. Future experiments would involve testing Phish in unsupervised learning algorithms and comparing it to more activation functions.
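The definition f(x) = xTanH(GELU(x)) can be implemented directly. The abstract does not say which form of GELU is used, so the exact (erf-based) variant below is an assumption:

```python
import math

def gelu(x):
    """Exact GELU: x * Phi(x), where Phi is the standard normal CDF."""
    return 0.5 * x * (1.0 + math.erf(x / math.sqrt(2.0)))

def phish(x):
    """Phish activation as defined in the abstract: f(x) = x * tanh(GELU(x))."""
    return x * math.tanh(gelu(x))

# phish(0) = 0, and for large positive x, phish(x) approaches x,
# since GELU(x) -> x and tanh saturates at 1.
```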
Category: Artificial Intelligence

[2] viXra:2112.0095 [pdf] replaced on 2022-02-24 21:03:49

TripleRE: Knowledge Graph Embeddings via Triple Relation Vectors

Authors: Long Yu, ZhiCong Luo, Deng Lin, HongZhu Li, HuanYong Liu, YaFeng Deng
Comments: 6 Pages.

Knowledge representation is a classic problem in knowledge graphs, and distance-based models have made great progress on it. The most significant recent developments in this direction have been RotatE[1] and PairRE[2], which express relationships as projections of nodes. The TransX series of models (TransE[3], TransH[4], TransR[5]), in contrast, expresses relationships as translations of nodes. To date, the combination of projection and translation has received scant attention in the research literature. Hence, we propose TripleRE, a method that models relationships with both projections and translations. Compared with other knowledge representation models, we achieve the best results on the ogbl-wikikg2 dataset.
Category: Artificial Intelligence

[1] viXra:2112.0012 [pdf] submitted on 2021-12-02 03:27:08

A Traffic Prediction Using Machine Learning: Literature Survey

Authors: Ji Yoon Kim
Comments: 4 Pages.

Accurate calculation of commute costs is crucial for the government when deciding whether to provide housing subsidies to disadvantaged workers, or when creating new ways to reduce those workers' commute costs by offering mass transit. Many studies have already shown that machine learning can predict traffic and commute times. Although different machine learning algorithms can be used, this study mainly uses Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) networks, which are based on the Recurrent Neural Network (RNN) architecture.
Category: Artificial Intelligence