Artificial Intelligence

2111 Submissions

[12] viXra:2111.0172 [pdf] submitted on 2021-11-30 05:08:24

New Evolutionary Computation Models and their Applications to Machine Learning

Authors: Mihai Oltean
Comments: 170 Pages.

Automatic Programming is one of the most important areas of computer science research today. Hardware speed and capability have increased exponentially, but software is years behind. The demand for software has also increased significantly, but it is still written in the old-fashioned way: by humans. There are multiple problems when the work is done by humans: cost, time, and quality. It is costly to pay humans, it is hard to keep them satisfied for a long time, it takes a lot of time to teach and train them, and the quality of their output is in most cases low (in software, mostly due to bugs). The real advances in human civilization appeared during the industrial revolutions. Before the first revolution, most people worked in agriculture; today, only a small percentage of people work in this field. A similar revolution must appear in the field of computer programming; otherwise, it will employ as many people as agriculture did in the past. How do people know how to write computer programs? Quite simply: by learning. Can we do the same for software? Can we make software learn how to write software? It seems that this is possible (to some degree), and the term for it is Machine Learning. It was coined in 1959 by the first person who made a computer perform a serious learning task, namely Arthur Samuel. However, things are not as easy as with humans (truth be told, for some humans it is impossible to learn how to write software). So far we do not have software that can learn to write software well. We have some particular cases where programs do better than humans, but the examples are sporadic at best. Learning from experience is difficult for computer programs. Instead of trying to simulate how humans teach humans to write computer programs, we can simulate nature.
Category: Artificial Intelligence

[11] viXra:2111.0171 [pdf] submitted on 2021-11-30 05:11:44

Multi Expression Programming

Authors: Mihai Oltean, D. Dumitrescu
Comments: 28 Pages. Technical Report, Babes-Bolyai Univ. 2002

Multi Expression Programming (MEP) is a new evolutionary paradigm intended for solving computationally difficult problems. MEP individuals are linear entities that encode complex computer programs. MEP chromosomes are represented in the same way as C or Pascal compilers translate mathematical expressions into machine code. MEP is used to solve difficult problems such as symbolic regression and game strategy discovery. MEP is compared with Gene Expression Programming (GEP) and Cartesian Genetic Programming (CGP) on several well-known test problems. For the considered problems MEP outperforms GEP and CGP; for these examples MEP is two orders of magnitude better than CGP.
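The linear encoding described above can be sketched roughly as follows. This is an illustrative simplification, not the paper's exact representation: the gene tuple layout and the function set are assumptions; the key MEP property shown is that each gene may reference only earlier genes, so one chromosome encodes an expression at every gene.

```python
def evaluate_mep(chromosome, inputs):
    """Evaluate every expression encoded by a linear MEP chromosome.

    Each gene is either ('var', name) -- a terminal -- or (op, i, j) -- a
    function applied to the results of earlier genes i and j (i, j < gene
    index). The chromosome therefore encodes one expression per gene,
    and fitness can be taken as the best of them.
    """
    ops = {'+': lambda a, b: a + b,
           '-': lambda a, b: a - b,
           '*': lambda a, b: a * b}
    values = []
    for gene in chromosome:
        if gene[0] == 'var':
            values.append(inputs[gene[1]])
        else:
            op, i, j = gene
            values.append(ops[op](values[i], values[j]))
    return values  # value of the expression rooted at each gene

# Example: genes encode x, y, x+y, (x+y)*y
chrom = [('var', 'x'), ('var', 'y'), ('+', 0, 1), ('*', 2, 1)]
print(evaluate_mep(chrom, {'x': 2, 'y': 3}))  # [2, 3, 5, 15]
```

Because all sub-expressions are evaluated in one left-to-right pass, decoding is linear in chromosome length, which mirrors how a compiler emits machine code for an expression.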
Category: Artificial Intelligence

[10] viXra:2111.0170 [pdf] replaced on 2024-05-31 10:52:26

Existence and Perception as the Basis of AGI (Artificial General Intelligence)

Authors: Victor Senkevich
Comments: 12 Pages. A slightly expanded version...

I believe that AGI (Artificial General Intelligence), unlike current AI models, must operate with meanings / knowledge. This is exactly what distinguishes it from neural-network-based AI. Successful AI implementations (playing chess, self-driving, face recognition, etc.) in no way operate with knowledge about the objects being processed and do not recognize their meanings / cognitive structure. This is not necessary for them; they demonstrate good results based on pre-training. But for AGI, which imitates human thinking, the ability to operate with knowledge is crucial. Numerous attempts to define the concept of "meaning" have one very significant drawback: such definitions are not rigorous and formalized, and therefore cannot be programmed. The procedure of searching for meaning / knowledge should use a formalized determination of its existence and of the possible forms of its perception, which is usually multimodal. For the practical implementation of AGI, it is necessary to develop such "ready-to-code" formalized definitions of the cognitive concepts of "meaning", "knowledge", "intelligence", and others related to them. This article attempts to formalize the definitions of such concepts.
Category: Artificial Intelligence

[9] viXra:2111.0169 [pdf] submitted on 2021-11-30 07:15:04

Evolving Evolutionary Algorithms using Multi Expression Programming

Authors: Mihai Oltean, Crina Grosan
Comments: 8 Pages. The 7th European Conference on Artificial Life, September 14-17, 2003, Dortmund, Edited by W. Banzhaf (et al), LNAI 2801, pp. 651-658, Springer-Verlag, Berlin, 2003.

Finding the optimal parameter setting (i.e. the optimal population size, mutation probability, evolutionary model, etc.) for an Evolutionary Algorithm (EA) is a difficult task. Instead of evolving only the parameters of the algorithm, we evolve an entire EA capable of solving a particular problem. For this purpose the Multi Expression Programming (MEP) technique is used. Each MEP chromosome encodes multiple EAs. A nongenerational EA for function optimization is evolved in this paper. Numerical experiments show the effectiveness of this approach.
Category: Artificial Intelligence

[8] viXra:2111.0161 [pdf] submitted on 2021-11-29 20:00:15

ANN Synthesis and Optimization of Electronically Scanned Coupled Planar Periodic and Aperiodic Antenna Arrays Modeled by the MoM-GEC Approach

Authors: B. Hamdi, A. Nouainia, T. Aguili, H. Baudrand
Comments: 6 Pages.

This paper proposes a new formulation relying on the moment method combined with the generalized equivalent circuit (MoM-GEC) to study a beamforming application for coupled periodic and quasi-periodic planar antenna arrays. Numerous voltage designs are used to show the effectiveness and reliability of the proposed approach. The radiators are modeled as planar dipoles, and mutual coupling effects are therefore taken into account. The proposed array shows a noticeable improvement over existing structures in terms of size, 3-D scanning, directivity, SLL reduction, and HPBW. The results verify that multilayer feed-forward neural networks are robust and can handle complex antenna problems. Moreover, an artificial neural network (ANN) can quickly produce optimization and synthesis results by generalizing with an early stopping method. A significant gain in running time and memory usage is obtained by employing this technique for improving generalization (early stopping). Simulations are carried out using MATLAB, and several simulation examples are shown to validate this work.
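The early-stopping scheme credited above for the gains in running time and memory can be sketched in a generic, framework-free form. This is a standard patience-based criterion and an assumption on my part; the paper's exact stopping rule may differ.

```python
def train_with_early_stopping(val_losses, patience=3):
    """Given a sequence of per-epoch validation losses, return
    (best_epoch, stop_epoch): training stops once the validation loss
    has not improved for `patience` consecutive epochs, which limits
    overfitting and saves training time."""
    best = float('inf')
    best_epoch = 0
    waited = 0
    for epoch, loss in enumerate(val_losses):
        if loss < best:
            best, best_epoch, waited = loss, epoch, 0
        else:
            waited += 1
            if waited >= patience:
                return best_epoch, epoch
    return best_epoch, len(val_losses) - 1

# Loss improves until epoch 2, then stalls: stop 3 epochs later.
print(train_with_early_stopping([1.0, 0.8, 0.7, 0.71, 0.72, 0.73]))  # (2, 5)
```

In an actual ANN training loop the weights from `best_epoch` would be restored, so the returned model is the one that generalized best on the validation set.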
Category: Artificial Intelligence

[7] viXra:2111.0080 [pdf] replaced on 2021-11-24 17:45:45

Discriminator Variance Regularization for Wasserstein GAN

Authors: Jeongik Cho
Comments: 5 Pages.

In Wasserstein GAN, it is important to regularize the discriminator so that its Lipschitz constant is not large. In this paper, I introduce discriminator variance regularization, which pushes the discriminator of a Wasserstein GAN toward a small Lipschitz constant. Discriminator variance regularization simply regularizes the variance of the discriminator's output to be small when the input comes from the real data distribution or the generated data distribution. Intuitively, a low variance of the discriminator output implies that the discriminator is more likely to have a low Lipschitz constant. Discriminator variance regularization does not explicitly regularize the Lipschitz constant of the discriminator through differentiation of the discriminator, but it lowers the probability that the Lipschitz constant of the discriminator is high. Discriminator variance regularization is used in a Wasserstein GAN with R1 regularization, which reduces the oscillation of the GAN. Discriminator variance regularization requires very little additional computation.
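The penalty described above can be sketched as a plain function of two batches of discriminator outputs. The weighting and the use of population variance are assumptions for illustration; in practice this term would be added to the discriminator loss of an automatic-differentiation framework.

```python
def variance_regularizer(d_real, d_fake, weight=1.0):
    """Penalty term: weight * (Var(D(real batch)) + Var(D(fake batch))).

    Keeping the output variance small on each input distribution makes a
    large Lipschitz constant less likely -- note it does not bound the
    Lipschitz constant directly, unlike a gradient penalty.
    """
    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    return weight * (var(d_real) + var(d_fake))

# Constant outputs on both batches incur no penalty:
print(variance_regularizer([1.0, 1.0, 1.0], [2.0, 2.0, 2.0]))  # 0.0
```

The cost is one mean and one sum of squares per batch, which matches the abstract's claim that the regularizer adds very little computation.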
Category: Artificial Intelligence

[6] viXra:2111.0069 [pdf] submitted on 2021-11-15 19:53:00

A Modified Belief Functions Distance Measure for Orderable Set

Authors: Xingyue Yang, Xuan Zhao, Bingyi Kang
Comments: 23 Pages.

This paper proposes a new method of measuring the distance between conflicting ordered sets, quantifying both the similarity between focal elements and their size. The method can effectively measure the conflict of belief functions on an ordered set without saturating when the focal elements do not overlap. It is proven that the method satisfies the properties of a distance. Examples from engineering budgeting and sensors show that the distance can effectively measure the conflict between ordered sets; comparison with existing methods shows that the proposed distance reflects the information of ordered sets more comprehensively, and that the resulting conflict metric between ordered sets is more robust and accurate.
Category: Artificial Intelligence

[5] viXra:2111.0065 [pdf] submitted on 2021-11-13 09:37:33

Robotic Autonomy: A Survey

Authors: Bora King
Comments: 7 Pages.

Robotic autonomy is key to the expansion of robotic applications. This paper reviews the success of robotic autonomy in industrial applications, as well as the requirements and challenges of expanding robotic autonomy to applications that need it, such as education, medical service, and home service. Through these discussions, the paper draws the conclusion that robotic intelligence is the bottleneck for the broad application of robotic technology.
Category: Artificial Intelligence

[4] viXra:2111.0060 [pdf] submitted on 2021-11-14 14:57:39

Application of Xgboost to Time Series Forecasting by Taking Advantage of Its Powerful Forecasting Performance

Authors: Tatsuhiko Yamato
Comments: 7 Pages.

XGBoost has the best forecasting performance among non-deep-learning methods. However, while it works well for interpolation problems and regression, it does not work well for future forecasting of time series data, which requires extrapolation. I think this tendency is difficult to avoid even if we add explanatory variables describing the background of the data. Possible explanatory variables include lags of one or several days, the month, the day, the day of the week, holidays, and so on. The increase or decrease of data values due to these factors is quite plausible, and they can serve as explanatory variables. Even so, they cannot capture the trend.
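The feature construction described above (lags plus calendar fields) can be sketched as follows. The choice of lags and fields is an assumption for illustration; the rows produced are what one would feed to a tree model such as XGBoost.

```python
from datetime import date, timedelta

def make_features(series, start, lags=(1, 7)):
    """Turn a daily series into (features, target) rows for a tree model.

    Features are lagged values plus calendar fields (month, weekday) --
    the kind of explanatory variables the abstract mentions. Note that
    none of these lets a tree model extrapolate a trend: predictions are
    bounded by the target values seen in training.
    """
    rows = []
    max_lag = max(lags)
    for t in range(max_lag, len(series)):
        day = start + timedelta(days=t)
        feats = {f'lag_{k}': series[t - k] for k in lags}
        feats['month'] = day.month
        feats['weekday'] = day.weekday()
        rows.append((feats, series[t]))
    return rows

rows = make_features(list(range(10)), date(2021, 11, 1))
print(rows[0])  # first usable day: lags 1 and 7, month/weekday, target 7
```

A common workaround for the extrapolation limit is to detrend first (e.g. predict differences rather than levels) and add the trend back after forecasting.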
Category: Artificial Intelligence

[3] viXra:2111.0035 [pdf] submitted on 2021-11-04 23:26:24

Bayesian Optimization for Category Space

Authors: Jun Jin
Comments: 2 Pages.

Hyperparameter optimization is widely used in AI. A hyperparameter is a value that controls the whole learning process but cannot itself be learned or tuned during training. Hyperparameters are very important because they greatly affect the learning result: a good hyperparameter set can lead to a much better result or much less training time, whereas a bad one usually ends in a local optimum, or even fails to converge. Hyperparameters can be of many different types: they may belong to the model itself (depth, node counts, etc.) or to the algorithm (learning rate, optimizer, etc.). Different models or algorithms usually need different hyperparameters, and even the same model or algorithm can use different hyperparameters to achieve better results. Hyperparameters thus appear in different parts of the training process, and some of them are categorical, meaning the parameter can only be chosen from a fixed set of options. This kind of parameter has special properties, and for it we propose a common optimization method here: by turning the categorical problem into a real-valued search space, we achieve a better result.
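One simple way to turn a categorical hyperparameter into a real-valued search space, in the spirit of the abstract, is to give each category an equal sub-interval of [0, 1). The equal-width binning and midpoint encoding below are my assumptions for illustration, not necessarily the paper's exact mapping.

```python
def categorical_to_real(categories):
    """Map a categorical hyperparameter onto the real interval [0, 1).

    Each category owns an equal-width sub-interval, so a real-valued
    optimizer (e.g. Bayesian optimization) can search over [0, 1) and
    any proposed point decodes back to a category.
    """
    n = len(categories)

    def decode(x):  # real value in [0, 1] -> category
        idx = min(int(x * n), n - 1)
        return categories[idx]

    def encode(cat):  # category -> midpoint of its sub-interval
        return (categories.index(cat) + 0.5) / n

    return encode, decode

encode, decode = categorical_to_real(['adam', 'sgd', 'rmsprop'])
print(decode(encode('sgd')))  # sgd
```

Any real-valued acquisition optimizer can now propose points in [0, 1); a round trip through `encode`/`decode` is lossless, while nearby reals decode to the same category.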
Category: Artificial Intelligence

[2] viXra:2111.0015 [pdf] submitted on 2021-11-02 20:44:50

A New Algorithm based on Extent Bit-array for Computing Formal Concepts

Authors: Jianqin Zhou, Sichun Yang, Xifeng Wang, Wanquan Liu
Comments: 12 Pages.

The emergence of Formal Concept Analysis (FCA) as a data analysis technique has increased the need for algorithms that can compute formal concepts quickly. The current efficient algorithms for FCA are variants of the Close-By-One (CbO) algorithm, such as In-Close2, In-Close3, and In-Close4, which are all based on horizontal storage of contexts. In this paper, building on In-Close4, a new algorithm based on vertical storage of contexts, called In-Close5, is proposed; it significantly reduces both the time complexity and the space complexity of In-Close4. Technically, the new algorithm stores both the context and the extent of a concept as vertical bit-arrays, whereas In-Close4 stores the context only as a horizontal bit-array, which makes finding the intersection of two extent sets very slow. Experimental results demonstrate that the proposed algorithm is much more effective than In-Close4, and it also has a broader scope of applicability: it can compute formal concepts in problems that In-Close4 cannot solve.
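The vertical bit-array idea above can be sketched with Python integers as bitsets: one bitmask per attribute column, so intersecting two extents is a single bitwise AND. This is an illustration of the storage scheme only, not the full In-Close5 algorithm.

```python
def column_bitset(context, attribute):
    """Store one context column (the extent of an attribute) as an
    integer bitmask: bit g is set iff object g has the attribute.
    `context` is a list of per-object attribute sets."""
    mask = 0
    for g, attrs in enumerate(context):
        if attribute in attrs:
            mask |= 1 << g
    return mask

def intersect_extents(a, b):
    """With vertical storage, intersecting two extents is one AND --
    the operation the horizontal layout of In-Close4 makes slow."""
    return a & b

def extent_objects(mask):
    """Decode a bitmask back to the list of object indices it contains."""
    return [g for g in range(mask.bit_length()) if (mask >> g) & 1]

ctx = [{'a', 'b'}, {'a'}, {'b'}]  # 3 objects, 2 attributes
ea = column_bitset(ctx, 'a')      # objects {0, 1}
eb = column_bitset(ctx, 'b')      # objects {0, 2}
print(extent_objects(intersect_extents(ea, eb)))  # [0]
```

Each AND processes machine-word-sized chunks of objects at once, which is where the speedup over per-object horizontal scans comes from.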
Category: Artificial Intelligence

[1] viXra:2111.0014 [pdf] replaced on 2022-01-11 21:52:43

Granule Description based on Compound Concepts

Authors: Jianqin Zhou, Sichun Yang, Xifeng Wang, Wanquan Liu
Comments: 16 Pages.

Concise descriptions of definable granules and approximate descriptions of indefinable granules are challenging and important issues in granular computing. The concept with only common attributes has been intensively studied. To investigate granules with special needs, we propose a novel type of compound concept in this paper: the common-and-necessary concept. Based on the definitions of the concept-forming operations, logical formulas are derived for each of the following types of concepts: formal concepts, object-induced three-way concepts, object-oriented concepts, and common-and-necessary concepts. Furthermore, by utilizing the logical relationships among the various concepts, we derive concise and unified equivalent conditions for definable granules and approximate descriptions for indefinable granules for all four kinds of concepts.
Category: Artificial Intelligence