Artificial Intelligence

1605 Submissions

[5] viXra:1605.0288 [pdf] submitted on 2016-05-29 02:24:48

Syllabic Networks: Measuring the Redundancy of Associative Syntactic Patterns

Authors: Bradly Alicea
Comments: 6 Pages. 3 figures, 1 table

The self-organization and diversity inherent in natural and artificial language can be revealed using a technique called syllabic network decomposition. The topology of such networks is determined by a series of linguistic strings that are broken apart at critical points, with the resulting fragments linked together in a non-linear fashion. Small proof-of-concept examples are given using words from the English language. A criterion for connectedness and two statistical parameters for measuring connectedness are applied to these examples. To conclude, we discuss some applications of this technique, ranging from improving models of speech recognition to bioinformatic analysis and recreational games.
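As a reading of the construction described above, a minimal Python sketch (using networkx) follows: words are hand-syllabified, with the split points standing in for the paper's critical points, and adjacent syllables are linked so that shared syllables fuse separate words into one network. The two connectivity statistics used (density and mean degree) are stand-in assumptions, since the abstract does not name the paper's two parameters.

import networkx as nx

# Hypothetical hand-syllabified English words; the split points stand in
# for the paper's "critical points", whose actual criterion is not given here.
words = [
    ["win", "dow"],
    ["win", "ter"],
    ["wa", "ter"],
    ["dow", "ry"],
]

g = nx.Graph()
for syllables in words:
    # Link each syllable to its successor; shared syllables such as
    # "win", "ter" and "dow" fuse separate words into one network.
    for a, b in zip(syllables, syllables[1:]):
        g.add_edge(a, b)

# A connectedness criterion plus two simple connectivity statistics.
print("connected:", nx.is_connected(g))   # True for this toy word set
print("density:", nx.density(g))          # edges / possible edges
degrees = [d for _, d in g.degree()]
print("mean degree:", sum(degrees) / len(degrees))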
Category: Artificial Intelligence

[4] viXra:1605.0190 [pdf] replaced on 2016-05-20 08:36:35

The Algorithm of the Thinking Machine

Authors: Dimiter Dobrev
Comments: 15 Pages. Presented on 12 May 2016 at the Faculty of Mathematics and Informatics, University of Sofia.

In this article we consider the questions 'What is AI?' and 'How can one write a program that satisfies the definition of AI?'. The article deals with the basic concepts and modules that must be at the heart of such a program. The most interesting concept discussed here is that of abstract signals. Each of these signals is related to the result of a particular experiment. An abstract signal is a function that, at any time point, returns the probability that the corresponding experiment would return true.
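The abstract-signal idea lends itself to a small sketch: an object that records runs of one experiment and, when queried, returns an estimated probability that the experiment returns true. The Laplace-smoothed running frequency below is a placeholder estimator of my own, not the paper's construction.

class AbstractSignal:
    """Tracks one experiment and returns, at any time point, an estimated
    probability that the experiment would return true. The Laplace-smoothed
    running frequency is a placeholder, not the paper's estimator."""

    def __init__(self):
        self.successes = 0
        self.trials = 0

    def observe(self, outcome: bool) -> None:
        # Record the result of one run of the associated experiment.
        self.trials += 1
        self.successes += int(outcome)

    def probability(self) -> float:
        # Laplace smoothing: returns 0.5 before any evidence has arrived.
        return (self.successes + 1) / (self.trials + 2)

signal = AbstractSignal()
for outcome in (True, True, False, True):
    signal.observe(outcome)
print(signal.probability())  # 3 successes in 4 trials -> (3+1)/(4+2) = 0.67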
Category: Artificial Intelligence

[3] viXra:1605.0178 [pdf] submitted on 2016-05-16 10:25:46

Artificial Intelligence Replaces Physicists

Authors: George Rajna
Comments: 19 Pages.

The artificial intelligence system's ability to set itself up quickly every morning and compensate for any overnight fluctuations would make this fragile technology much more useful for field measurements, said co-lead researcher Dr Michael Hush from UNSW ADFA. [11]

Quantum physicist Mario Krenn and his colleagues in the group of Anton Zeilinger from the Faculty of Physics at the University of Vienna and the Austrian Academy of Sciences have developed an algorithm which designs new useful quantum experiments. As the computer does not rely on human intuition, it finds novel, unfamiliar solutions. [10]

Researchers at the University of Chicago's Institute for Molecular Engineering and the University of Konstanz have demonstrated the ability to generate a quantum logic operation, or rotation of the qubit, that, surprisingly, is intrinsically resilient to noise as well as to variations in the strength or duration of the control. Their achievement is based on a geometric concept known as the Berry phase and is implemented through entirely optical means within a single electronic spin in diamond. [9]

New research demonstrates that particles at the quantum level can in fact be seen as behaving something like billiard balls rolling along a table, and not merely as the probabilistic smears that the standard interpretation of quantum mechanics suggests. But there is a catch: the tracks the particles follow do not always behave as one would expect from "realistic" trajectories, but often in a fashion that has been termed "surrealistic." [8]

Quantum entanglement, which occurs when two or more particles are correlated in such a way that they can influence each other even across large distances, is not an all-or-nothing phenomenon, but occurs in various degrees. The more a quantum state is entangled with its partner, the better the states will perform in quantum information applications. Unfortunately, quantifying entanglement is a difficult process involving complex optimization problems that give even physicists headaches. [7]

A trio of physicists in Europe has come up with an idea that they believe would allow a person to actually witness entanglement. Valentina Caprara Vivoli, with the University of Geneva, Pavel Sekatski, with the University of Innsbruck, and Nicolas Sangouard, with the University of Basel, have together written a paper describing a scenario in which a human subject would be able to witness an instance of entanglement; they have uploaded it to the arXiv server for review by others. [6]

The accelerating electrons explain not only the Maxwell Equations and Special Relativity, but also the Heisenberg Uncertainty Relation, Wave-Particle Duality and the electron's spin, building the bridge between the Classical and Quantum Theories. The Planck Distribution Law of the electromagnetic oscillators explains the electron/proton mass ratio and the Weak and Strong Interactions by the diffraction patterns. The Weak Interaction changes the diffraction patterns by moving the electric charge from one side of the diffraction pattern to the other, which violates CP and Time reversal symmetry. The diffraction patterns and the locality of the self-maintaining electromagnetic potential also explain Quantum Entanglement, presenting it as a natural part of relativistic quantum theory.
Category: Artificial Intelligence

[2] viXra:1605.0125 [pdf] submitted on 2016-05-12 08:13:31

Failure Mode and Effects Analysis Based on D Numbers and TOPSIS

Authors: Tian Bian, Haoyang Zheng, Likang Yin, Yong Deng, Sankaran Mahadevan
Comments: 39 Pages.

Failure mode and effects analysis (FMEA) is a widely used technique for assessing the risk of potential failure modes in designs, products, processes, systems and services. One of the main problems in FMEA is dealing with the variety of assessments given by FMEA team members and ranking the failure modes according to the degree of risk. Traditional FMEA uses the risk priority number (RPN), the product of the occurrence (O), severity (S) and detection (D) of a failure, to determine the risk priority ranking of failure modes. However, this becomes impractical when multiple experts give different risk assessments for one failure mode, which may be imprecise or incomplete, or when the weights of the risk factors are inconsistent. In this paper, a new risk priority model based on D numbers and the technique for order of preference by similarity to ideal solution (TOPSIS) is proposed to evaluate risk in FMEA. In the proposed model, the assessments given by FMEA team members are represented by D numbers, a representation that can effectively handle uncertain information. The TOPSIS method, a multi-criteria decision making (MCDM) method, is then used to rank the failure modes with respect to the risk factors. Finally, an application to the failure modes of the rotor blades of an aircraft turbine is provided to illustrate the efficiency of the proposed method.
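For contrast with the D-number model, a rough Python sketch of the two ranking schemes the abstract compares, the classical RPN product and a plain TOPSIS ranking, is given below. The scores, the equal risk-factor weights, and the treatment of O, S and D as higher-is-riskier criteria are all illustrative assumptions; the paper's D-number aggregation is not reproduced.

import numpy as np

# Crisp 1-10 scores for occurrence, severity and detection of four
# hypothetical failure modes (illustrative numbers, not from the paper).
scores = np.array([
    [7.0, 8.0, 3.0],   # FM1
    [4.0, 9.0, 6.0],   # FM2
    [6.0, 5.0, 5.0],   # FM3
    [8.0, 4.0, 7.0],   # FM4
])
weights = np.array([1/3, 1/3, 1/3])  # assumed equal risk-factor weights

# Traditional ranking: RPN = O * S * D.
rpn = scores.prod(axis=1)

# TOPSIS: vector-normalize, weight, then measure each alternative's
# distance to the highest-risk and lowest-risk reference profiles.
v = weights * scores / np.sqrt((scores**2).sum(axis=0))
ideal, anti = v.max(axis=0), v.min(axis=0)   # riskiest / least risky profile
d_plus = np.linalg.norm(v - ideal, axis=1)
d_minus = np.linalg.norm(v - anti, axis=1)
closeness = d_minus / (d_plus + d_minus)     # 1.0 = highest risk priority

for i in np.argsort(-closeness):
    print(f"FM{i+1}: RPN={rpn[i]:.0f}  TOPSIS closeness={closeness[i]:.3f}")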
Category: Artificial Intelligence

[1] viXra:1605.0078 [pdf] submitted on 2016-05-07 23:23:50

An Improvement in Monotonicity of the Distance-Based Total Uncertainty Measure in Belief Function Theory

Authors: Xinyang Deng, Yong Deng
Comments: 18 Pages.

Measuring the uncertainty of evidence is an open issue in belief function theory. Recently, a distance-based total uncertainty measure for belief function theory, denoted ${TU}^I$, was presented. Some experiments show the efficiency of ${TU}^I$ in measuring the degree of uncertainty. In this paper, a numerical example and theoretical analysis illustrate that monotonicity is not satisfied by ${TU}^I$. To address this issue, an improved uncertainty measure ${TU}^I_E$ is proposed, and its monotonicity is theoretically proved. Finally, through experimental comparison we show that ${TU}^I_E$ also has the desired high sensitivity to changes in the evidence, which further indicates that the proposed ${TU}^I_E$ is better than ${TU}^I$.
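As context, the ${TU}^I$ measure the abstract refers to can be sketched compactly in Python. The version below computes, for each singleton, its belief-plausibility interval [Bel, Pl] and measures its distance from the total-ignorance interval [0, 1] using the Wasserstein-type interval distance reported for ${TU}^I$ in the literature; treat the exact formula and normalization as a reading of the measure, not a reproduction of the paper's code.

def bel_pl(masses, theta):
    # Belief/plausibility interval of each singleton, given a mass
    # function `masses` mapping frozensets over `theta` to masses.
    out = {}
    for x in theta:
        bel = masses.get(frozenset([x]), 0.0)             # Bel({x}) = m({x})
        pl = sum(m for a, m in masses.items() if x in a)  # Pl({x})
        out[x] = (bel, pl)
    return out

def interval_distance(i, j):
    # Wasserstein-type distance between intervals [a1,b1] and [a2,b2]:
    # sqrt((midpoint difference)^2 + (half-width difference)^2 / 3).
    (a1, b1), (a2, b2) = i, j
    mid = (a1 + b1) / 2 - (a2 + b2) / 2
    half = (b1 - a1) / 2 - (b2 - a2) / 2
    return (mid**2 + half**2 / 3) ** 0.5

def tu_i(masses, theta):
    # TU^I = 1 - (sqrt(3)/n) * sum over singletons of the distance
    # between [Bel, Pl] and the total-ignorance interval [0, 1].
    ivs = bel_pl(masses, theta)
    return 1 - (3**0.5 / len(theta)) * sum(
        interval_distance(iv, (0.0, 1.0)) for iv in ivs.values())

theta = ("a", "b")
print(tu_i({frozenset(theta): 1.0}, theta))  # 1.0: total ignorance
print(tu_i({frozenset("a"): 1.0}, theta))    # 0.0: a precise, certain belief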
Category: Artificial Intelligence