Artificial Intelligence

1408 Submissions

[5] viXra:1408.0122 [pdf] submitted on 2014-08-18 13:10:12

Bird's-eye view on Noise-Based Logic

Authors: Laszlo B. Kish, Claes-Goran Granqvist, Tamas Horvath, Andreas Klappenecker, He Wen, Sergey M. Bezrukov
Comments: 5 Pages. In: Proceedings of the first conference on Hot Topics in Physical Informatics (HoTPI, 2013 November). Paper is in press at International Journal of Modern Physics: Conference Series (2014).

Noise-based logic is a practically deterministic logic scheme, inspired by the randomness of neural spikes, that uses a system of uncorrelated stochastic processes and their superpositions to represent logic states. We briefly discuss various questions, such as: (i) What does practical determinism mean? (ii) Is noise-based logic a Turing machine? (iii) Is there hope to beat (the dreams of) quantum computation with a classical physical noise-based processor, and what are the minimum hardware requirements for that? Finally, (iv) we address the problem of random number generators and show that the common belief that quantum random number generators are superior to classical (thermal) noise-based generators is nothing but a myth.
Category: Artificial Intelligence

[4] viXra:1408.0121 [pdf] submitted on 2014-08-18 13:12:46

Brain: Biological Noise-Based Logic

Authors: Laszlo B. Kish, Claes-Goran Granqvist, Sergey M. Bezrukov, Tamas Horvath
Comments: 4 Pages. In press at: Advances in Cognitive Neurodynamics Vol. 4 - Proc. 4th International Conference on Cognitive Neurodynamics (Springer, 2014)

Neural spikes in the brain form stochastic sequences, i.e., they belong to the class of pulse noises. This stochasticity is a counterintuitive feature because extracting information - such as the commonly supposed neural information of mean spike frequency - requires long observation times to reach a reasonably low error probability. The mystery could be solved by noise-based logic, wherein randomness has an important function and allows large speed enhancements for special-purpose tasks; the same mechanism appears to be at work in the brain-logic version of this concept.
Category: Artificial Intelligence

[3] viXra:1408.0017 [pdf] submitted on 2014-08-04 03:55:23

Mining Software Metrics from Jazz

Authors: Jacqui Finlay, Andy M. Connor, Russel Pears
Comments: 7 Pages. 9th International Conference on Software Engineering Research, Management and Applications

In this paper, we describe the extraction of source code metrics from the Jazz repository and the application of data mining techniques to identify the most useful of those metrics for predicting the success or failure of an attempt to construct a working instance of the software product. We present results from a systematic study using the J48 classification method. The results indicate that only a relatively small number of the available software metrics that we considered have any significance for predicting the outcome of a build. We discuss these significant metrics and the implications of the results, in particular the relative difficulty of predicting failed build attempts.
Category: Artificial Intelligence
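The splitting criterion behind J48 (Weka's implementation of C4.5) can be sketched in a few lines: at each node the tree picks the metric and threshold with the highest information gain over the class labels. Everything below (the metric names, the toy build data, the threshold grid) is illustrative, not the paper's actual setup:

```python
# Minimal sketch of C4.5/J48-style information-gain splitting on
# hypothetical per-build software metrics.
import math

def entropy(labels):
    n = len(labels)
    return -sum((labels.count(c) / n) * math.log2(labels.count(c) / n)
                for c in set(labels))

def info_gain(rows, labels, feature, threshold):
    left = [l for r, l in zip(rows, labels) if r[feature] <= threshold]
    right = [l for r, l in zip(rows, labels) if r[feature] > threshold]
    if not left or not right:
        return 0.0
    n = len(labels)
    remainder = (len(left) / n) * entropy(left) + (len(right) / n) * entropy(right)
    return entropy(labels) - remainder

# Hypothetical builds: (lines_changed, cyclomatic_complexity) -> outcome
builds = [(420, 31), (15, 4), (350, 28), (60, 9), (500, 35), (30, 5)]
labels = ["fail", "ok", "fail", "ok", "fail", "ok"]

# Choose the (feature, threshold) pair with the highest information gain,
# as C4.5 does at each node of the tree.
best = max(
    ((f, t) for f in range(2) for t in {r[f] for r in builds}),
    key=lambda ft: info_gain(builds, labels, ft[0], ft[1]),
)
```

A full J48 run recurses on the two resulting partitions and then prunes, but the gain computation above is the core of why only a few metrics end up near the root of the learned tree.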

[2] viXra:1408.0012 [pdf] submitted on 2014-08-04 02:42:53

Synthetic Minority Over-Sampling Technique (SMOTE) for Predicting Software Build Outcomes

Authors: Russel Pears, Jacqui Finlay, Andy M. Connor
Comments: 6 Pages. Twenty-Sixth International Conference on Software Engineering and Knowledge Engineering (SEKE 2014) held at Hyatt Regency, Vancouver, Canada, 2014-07-01 to 2014-07-03

In this research we use a data stream approach to mining data and construct Decision Tree models that predict software build outcomes in terms of software metrics derived from the source code used in the software construction process. The rationale for using the data stream approach was to track the evolution of the prediction model over time, as builds are incrementally constructed from previous versions either to remedy errors or to enhance functionality. As the volume of data available for mining from the software repository that we used was limited, we synthesized new data instances through the application of the SMOTE oversampling algorithm. The results indicate that a small number of the available metrics have significance for predicting software build outcomes. It is observed that classification accuracy steadily improves after approximately 900 instances of builds have been fed to the classifier. At the end of the data streaming process, classification accuracies of 80% were achieved, though some bias arises due to the distribution of data across the two classes over time.
Category: Artificial Intelligence
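The core of SMOTE is simple: each synthetic minority instance is interpolated between a real minority instance and one of its k nearest minority-class neighbours. A minimal pure-Python sketch follows; the feature vectors stand in for software metrics and the data is illustrative, not the paper's:

```python
# Minimal SMOTE sketch: oversample the minority class by interpolating
# between each sampled point and one of its k nearest minority neighbours.
import random

def smote(minority, n_synthetic, k=2, seed=0):
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_synthetic):
        x = rng.choice(minority)
        # k nearest neighbours of x among the other minority points
        neighbours = sorted(
            (p for p in minority if p is not x),
            key=lambda p: sum((a - b) ** 2 for a, b in zip(x, p)),
        )[:k]
        nb = rng.choice(neighbours)
        gap = rng.random()  # interpolation factor in [0, 1)
        synthetic.append(tuple(a + gap * (b - a) for a, b in zip(x, nb)))
    return synthetic

# Hypothetical failed-build metric vectors (the minority class):
# (lines_changed, cyclomatic_complexity)
failed = [(300.0, 25.0), (320.0, 30.0), (280.0, 22.0), (350.0, 27.0)]
new_points = smote(failed, n_synthetic=8)
```

Because every synthetic point lies on a segment between two real minority points, the oversampled class stays inside its original region of the metric space rather than simply duplicating instances.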

[1] viXra:1408.0008 [pdf] submitted on 2014-08-03 10:17:37

The Grow-Shrink Strategy for Learning Markov Network Structures Constrained by Context-Specific Independences

Authors: Alejandro Edera, Yanela Strappa, Facundo Bromberg
Comments: 12 Pages.

Markov networks are models for compactly representing complex probability distributions. They are composed of a structure and a set of numerical weights. The structure qualitatively describes independences in the distribution, which can be exploited to factorize the distribution into a set of compact functions. A key application of learning structures from data is to automatically discover knowledge. In practice, structure learning algorithms focused on "knowledge discovery" present a limitation: they use a coarse-grained representation of the structure. As a result, this representation cannot describe context-specific independences. Very recently, an algorithm called CSPC was designed to overcome this limitation, but it has a high computational complexity. This work mitigates that downside by presenting CSGS, an algorithm that uses the Grow-Shrink strategy to avoid unnecessary computations. In an empirical evaluation, the structures learned by CSGS achieve competitive accuracies at lower computational cost compared to those obtained by CSPC.
Category: Artificial Intelligence
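The Grow-Shrink pattern the paper builds on has two phases: grow a candidate set by adding variables that are dependent on the target given the current set, then shrink it by removing variables that have become conditionally independent. The sketch below illustrates this on a toy Markov blanket problem, using a hand-coded independence oracle over a chain X0 - X1 - T - X2 rather than the statistical tests a real implementation would run:

```python
# Minimal Grow-Shrink (GS) sketch for recovering the Markov blanket of a
# target variable, given a conditional-(in)dependence oracle.
def grow_shrink(target, variables, dependent):
    """dependent(a, b, cond) -> True iff a and b are dependent given cond."""
    blanket = set()
    # Grow phase: keep sweeping until no variable can be added.
    changed = True
    while changed:
        changed = False
        for v in variables:
            if v != target and v not in blanket and dependent(target, v, blanket):
                blanket.add(v)
                changed = True
    # Shrink phase: drop false positives admitted early in the grow phase.
    for v in list(blanket):
        if not dependent(target, v, blanket - {v}):
            blanket.discard(v)
    return blanket

# Oracle for the chain X0 - X1 - T - X2: X1 and X2 touch T directly,
# while X0 is dependent on T only when X1 is NOT in the conditioning set.
def oracle(a, b, cond):
    pair = frozenset((a, b))
    if pair in (frozenset(("T", "X1")), frozenset(("T", "X2"))):
        return True
    if pair == frozenset(("T", "X0")):
        return "X1" not in cond
    return False

blanket = grow_shrink("T", ["X0", "X1", "X2"], oracle)
```

Here X0 is admitted in the grow phase (the blanket is empty at that point) and correctly discarded in the shrink phase once X1 is available to condition on, leaving the true blanket {X1, X2}. The paper's CSGS applies this same grow-then-shrink discipline to structures that encode context-specific independences.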