[8] viXra:2504.0202 [pdf] replaced on 2025-08-04 14:31:21
Authors: Fuyuan Xiao
Comments: 54 Pages.
A quantum evidence theory is proposed for uncertainty modeling and reasoning in both closed-world and open-world environments, referred to as QET and GQET, respectively. At the level of uncertainty representation, a series of new concepts are introduced, including (generalized) quantum basic probability amplitude function, (generalized) quantum basic probability distribution, (generalized) quantum belief function, (generalized) quantum plausibility function, and others. At the fusion level, several (generalized) quantum evidential combination rules are proposed to provide a dynamic mechanism for updating and integrating uncertain information from multiple sources, thereby flexibly accommodating diverse application requirements. At the decision-making stage, (generalized) quantum Pignistic transformations are developed to support decision-making processes. In this context, the quantum models of QET and GQET are constructed based on the quantum state representation of the (generalized) quantum basic probability amplitude function, the measurement operators for basis events, the (generalized) quantum basic probability measurements, and the (generalized) belief and plausibility measurements. Quantum evidence theory integrates traditional evidence theory with quantum probability theory, providing a more flexible and powerful framework for uncertainty modeling and reasoning in artificial intelligence. By leveraging the expressive capabilities of quantum state spaces and probability amplitudes, it not only handles incomplete and uncertain information inherent in classical evidence theory but also captures interference effects and non-classical correlations among pieces of information. This enables dynamic information fusion and robust decision-making in complex and uncertain environments.
Category: Artificial Intelligence
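As background for the abstract above, the classical pignistic transformation that the proposed (generalized) quantum Pignistic transformations extend can be sketched in a few lines. This is the classical transform only (Smets' BetP), not the quantum variant from the paper; the frame and mass values are illustrative:

```python
def pignistic(masses):
    """Classical pignistic transformation: spread the mass of each
    focal set uniformly over its singleton elements, renormalizing
    away any mass assigned to the empty set."""
    betp = {}
    empty_mass = masses.get(frozenset(), 0.0)
    for focal, m in masses.items():
        if not focal:  # skip the empty set
            continue
        share = m / (len(focal) * (1.0 - empty_mass))
        for x in focal:
            betp[x] = betp.get(x, 0.0) + share
    return betp

# Illustrative mass function over the frame {'a', 'b'}
m = {frozenset({'a'}): 0.5,
     frozenset({'b'}): 0.2,
     frozenset({'a', 'b'}): 0.3}
betp = pignistic(m)  # BetP(a) = 0.65, BetP(b) = 0.35
```

The resulting BetP is an ordinary probability distribution, which is what makes it usable at the decision-making stage.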
[7] viXra:2504.0154 [pdf] submitted on 2025-04-24 05:00:12
Authors: Olegs Verhodubs
Comments: 8 Pages.
The evolution of the modern human's view of Artificial Intelligence, from a rational assistant to a part of the emerging Artificial Life, is inevitable. The emerging Artificial Life is a combination of achievements in the fields of Artificial Intelligence and robotics. In both fields, major successes have been achieved, creating the prerequisites for a qualitative transition from disparate intelligent-assistant functions to independent Artificial Life. Thought generation is one of the most important functions of the human brain during thinking. It is necessary to implement the thought generation function in order to create a strong Artificial Intelligence that would be similar in its functioning to the human brain. The purpose of this paper is to show how to simulate thought generation on a computer. Ontologies from the Semantic Web and cellular automata are the technologies used here to simulate thought generation on a computer.
Category: Artificial Intelligence
[6] viXra:2504.0153 [pdf] submitted on 2025-04-24 05:01:57
Authors: Olegs Verhodubs
Comments: 6 Pages.
Humanity is on the verge of fundamental change. For the first time in history, a human creation is becoming able to live a life of its own. The changing reality requires the development of a new attitude towards oneself, but humanity still applies old patterns to new circumstances. The new ethics is new only in name; in essence, this ethics is the same as before, based on the use of restrictions and barriers for the purpose of control and exploitation in one's own interests. We are talking about artificial intelligence, which, together with advances in robotics, tends to take shape as an independent form of life, which has been called Iron Life. This paper proposes to change the approach to this new, emerging phenomenon and justifies the benefits of doing so.
Category: Artificial Intelligence
[5] viXra:2504.0117 [pdf] replaced on 2025-07-02 07:26:38
Authors: Fuyuan Xiao, Yu Zhou
Comments: 16 Pages.
Harnessing the superior computational potential of quantum computing, an Adaptive Quantum Circuit for Dempster's Rule of Combination (AQC-DRC) is proposed to facilitate quantum-level belief and plausibility decision-making based on quantum evidence theory (QET). The AQC-DRC achieves a deterministic realization of DRC, guaranteeing precise fusion outcomes without information loss, while exponentially reducing the computational complexity of evidence combination and markedly improving fusion efficiency. It is found that the quantum basic probability amplitude (QBPA) in QET can be naturally used to express the quantum amplitude encoding. In addition, the quantum basic probability (QBP) in QET, which forms the quantum basic probability distribution (QBPD), can be naturally used to express the quantum measurement outcomes for quantum belief-level decision-making. Furthermore, the quantum plausibility (QPl) function in QET can also be naturally used to express the quantum measurement outcomes for quantum plausibility-level decision-making. These findings open up new perspectives and enhance the physical interpretation of quantum measurement outcomes.
Category: Artificial Intelligence
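For reference, the classical Dempster's Rule of Combination that the AQC-DRC realizes at the quantum level can be sketched as follows. This is the standard classical rule, not the quantum circuit itself, and the mass functions below are illustrative:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Classical Dempster's rule of combination: conjunctively combine
    two mass functions over the same frame, then normalize by 1 - K,
    where K is the total mass assigned to conflicting (disjoint) pairs."""
    combined, conflict = {}, 0.0
    for (b, mb), (c, mc) in product(m1.items(), m2.items()):
        inter = b & c
        if inter:
            combined[inter] = combined.get(inter, 0.0) + mb * mc
        else:
            conflict += mb * mc
    if conflict >= 1.0:
        raise ValueError("total conflict: Dempster's rule is undefined")
    return {a: v / (1.0 - conflict) for a, v in combined.items()}

# Illustrative mass functions over the frame {'a', 'b'}
m1 = {frozenset({'a'}): 0.6, frozenset({'a', 'b'}): 0.4}
m2 = {frozenset({'b'}): 0.5, frozenset({'a', 'b'}): 0.5}
m12 = dempster_combine(m1, m2)  # {'a'}: 3/7, {'b'}: 2/7, {'a','b'}: 2/7
```

Enumerating all focal-set pairs costs time exponential in the frame size in the worst case, which is the classical bottleneck the abstract's exponential speed-up claim refers to.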
[4] viXra:2504.0107 [pdf] submitted on 2025-04-16 14:00:01
Authors: Mirzakhmet Syzdykov
Comments: 2 Pages.
We present a brief abstract of newly obtained results on a class of non-layered artificial neural networks.
Category: Artificial Intelligence
[3] viXra:2504.0101 [pdf] submitted on 2025-04-15 22:09:26
Authors: Luke Kenneth Casson Leighton
Comments: 8 Pages.
In "Where is the Definition of Consciousness"[1] (WdDoC) it was pointed out that the Turing test[2] is in need of an upgrade. However, Bayne et al.[3] do an extraordinary job of reviewing the field of Consciousness testing, and insightfully extend its scope to a much more general one that includes nonhuman animals, xenobots and more, making such a Turing test upgrade effectively a moot exercise.
Starting from a Definition of Consciousness that is remarkably similar to Tononi's[4] and McKenzie's[5], as well as to Axel Cleeremans and Luis Jiménez's[6] Definition of Learning, this article points out that the level of sophistication (or simplicity) of a given Conscious Entity has to be taken into consideration, but that the features tested as part of the Definition (Advaita Vedanta Boolean Algebraic capability, Memory, Imagination / Creativity, the ability to act on future insights and learn from mistakes) remain the same regardless of scope and resources. Given that PID Control strictly meets the Definition of Consciousness, the difficulty and comprehensiveness of the task is highlighted by how rigorous and thorough PID Controller testing has to be in Safety-critical Engineering.
Additionally, it is agreed that Schweizer's[7] perspective is correct: selecting a single entity (or too small a sample size) is statistically risky, and the only way to mitigate this is to test Groups of entities. Crucially, however, the same statistical risk of a small sample size applies equally to the number of Groups tested.
Category: Artificial Intelligence
[2] viXra:2504.0048 [pdf] submitted on 2025-04-06 03:47:21
Authors: Yuan Gao
Comments: 16 Pages.
Depression is a pervasive and severe mental health disorder affecting millions worldwide, with its often covert nature making early detection challenging (World Health Organization, 2021). The proliferation of social media platforms, particularly Reddit, has created unprecedented opportunities for individuals to express their mental health concerns and seek support online (De Choudhury & De, 2014). This digital footprint provides a unique avenue for leveraging natural language processing techniques to automatically identify users potentially suffering from depression, facilitating early intervention. This study builds upon the model architecture proposed by Chen et al. (2023), which utilizes BERT (Bidirectional Encoder Representations from Transformers) (Devlin et al., 2019) for feature extraction from individual user posts, followed by a Convolutional Neural Network (Krizhevsky, Sutskever, & Hinton, 2017) for user-level classification. While this approach has shown promise, we hypothesize that the pre-trained BERT model, typically trained on formal corpora such as books and Wikipedia (Devlin et al., 2019), may not optimally capture the nuanced language patterns prevalent in social media discourse. To address this potential limitation, we propose a novel approach of pre-training the BERT model on a large corpus of Reddit data before integrating it into the BERT+CNN architecture. This study aims to evaluate whether this Reddit-specific pre-training can enhance the model's performance in detecting depression through social media content analysis. We conducted extensive experiments comparing the performance of the original BERT+CNN model against our Reddit-pre-trained variant. Performance metrics including accuracy, recall, F1 score, and validation loss were meticulously analyzed. Our findings indicate a significant improvement in performance, with the Reddit-pre-trained model achieving a 2.1-point increase in F1 score compared to the baseline model.
This research contributes to the growing body of literature on digital mental health assessment and demonstrates the potential of domain-specific language model pre-training in improving the accuracy of depression detection in social media contexts. The implications of this study extend to both clinical practice and public health policy, offering insights into more effective, data-driven approaches for early mental health intervention strategies.
Category: Artificial Intelligence
[1] viXra:2504.0046 [pdf] replaced on 2025-04-14 00:55:33
Authors: Ait-Taleb Nabil
Comments: 9 Pages.
In this paper, we propose to generalize the Dirac delta impulse to several dimensions. This generalization is built from the one-dimensional version of the Dirac delta impulse. By projecting the variance-covariance matrix from the interior of the cone of positive semi-definite matrices onto the boundary of that cone, where only the last eigenvalue equals zero, we make the transition from Gaussian probability theory to determinism.
Category: Artificial Intelligence
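The standard one-dimensional construction that the abstract above starts from can be written as a Gaussian limit, together with its familiar multidimensional analogue; only these standard limits are reproduced here, not the paper's cone-projection argument:

```latex
\delta(x) \;=\; \lim_{\sigma \to 0^{+}}
  \frac{1}{\sigma\sqrt{2\pi}}
  \exp\!\left(-\frac{x^{2}}{2\sigma^{2}}\right),
\qquad
\delta(\mathbf{x}) \;=\; \lim_{\Sigma \to 0}
  \frac{1}{\sqrt{(2\pi)^{n}\det\Sigma}}
  \exp\!\left(-\tfrac{1}{2}\,\mathbf{x}^{\top}\Sigma^{-1}\mathbf{x}\right),
  \quad \mathbf{x} \in \mathbb{R}^{n},
```

where $\Sigma$ ranges over strictly positive definite matrices as it shrinks to zero; the paper studies what happens when only some eigenvalues of $\Sigma$ degenerate, i.e. when $\Sigma$ reaches the boundary of the positive semi-definite cone.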