[3] viXra:2508.0109 [pdf] replaced on 2025-10-21 11:42:04
Authors: Hidehiko Okada
Comments: 8 Pages.
This study investigates the application of Evolution Strategy (ES) to training binary neural network controllers for the Atari game Space Invaders, extending previous work on control tasks such as Pendulum and Acrobot. Unlike conventional networks with real-valued weights, this approach restricts connection weights to binary values from the set {-1, 1}. Experiments evaluate multilayer perceptrons (MLPs) with varying numbers of hidden units and weight precisions (1-bit vs. 64-bit). Key findings indicate that 1-bit MLPs achieve performance comparable to 64-bit MLPs. Moreover, MLPs with only 2 hidden units perform comparably to those with 4, 8, or 16 hidden units, suggesting that binary quantization may not necessitate increased model complexity. Additionally, the results demonstrate that increasing the number of offspring per generation improves ES effectiveness more than increasing the number of generations. These findings highlight the potential of binary-weight neural networks for efficient and effective reinforcement learning in resource-constrained settings.
Category: Artificial Intelligence
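The abstract above does not specify the exact ES variant or network architecture, so the following is only a minimal sketch of the idea it describes: a tiny MLP whose weights are restricted to {-1, +1}, trained by a hypothetical (1+λ) ES that mutates weights by flipping signs, on a toy fitness function standing in for the Atari environment.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_binary_weights(shapes):
    # Each weight is drawn from {-1, +1}, as in the 1-bit MLPs described above.
    return [rng.choice([-1.0, 1.0], size=s) for s in shapes]

def forward(weights, x):
    # Two-layer MLP with tanh hidden activation; binary weights, no biases.
    w1, w2 = weights
    h = np.tanh(x @ w1)
    return h @ w2

def mutate(weights, flip_prob=0.05):
    # ES-style mutation for binary weights: flip each sign with small probability.
    out = []
    for w in weights:
        mask = rng.random(w.shape) < flip_prob
        out.append(np.where(mask, -w, w))
    return out

# Toy fitness (stand-in for game score): match a fixed target on random inputs.
X = rng.standard_normal((32, 8))
target = np.sign(X.sum(axis=1, keepdims=True))

def fitness(weights):
    return -np.mean((forward(weights, X) - target) ** 2)

parent = init_binary_weights([(8, 2), (2, 1)])  # 2 hidden units, as in the paper
best = fitness(parent)
for gen in range(200):
    # (1+λ) ES with λ = 8 offspring per generation; keep the best individual.
    for _ in range(8):
        child = mutate(parent)
        f = fitness(child)
        if f > best:
            parent, best = child, f
```

Note that increasing λ (offspring per generation) here corresponds to the abstract's finding that more offspring per generation helps more than more generations.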
[2] viXra:2508.0060 [pdf] submitted on 2025-08-09 03:34:03
Authors: Horacio Useche
Comments: 33 Pages. (Note by viXra Admin: Please submit article written with AI assistance to ai.viXra.org)
Artificial intelligence is here to stay with us until the end of time, providing services across every field of human knowledge. We have all witnessed its advances, and much is said about its repercussions, yet few people take the trouble to understand how AIs actually work and are trained. It is often said that an AI is a large language model (LLM) with billions of parameters powering its capabilities through neural networks, which are flows of tensors that use those parameters to process and respond to user requests. AIs cannot deal directly with things the way humans do; instead, they use mathematical objects that represent those things. A rock, a tree, a cat, a river, a galaxy, Gustavo Petro making a fool of himself: these are all examples of "things" that AIs represent using tensors and manipulate with tensor algebra in neural networks. In other words: numbers; once again, everything is numbers. This document briefly reviews how AIs function and introduces the concept of the embryonic tensor in the training and operation of AI systems. Experts are well aware of the role of ordinary tensors in this field, but few suspect that we can go further by introducing concepts that amplify the usefulness of tensors in AI training. In this spirit, we present embryonic tensors as a "super extension" of the ordinary tensor concept, which is already heavily used in training current LLMs such as ChatGPT, Gemini, Grok, and DeepSeek. For those concerned about the intrusion of AI into nearly every aspect of contemporary life, the author has also developed a "cure": the Kama technology. It converts any system file into a simple PNG graphic and, in doing so, encodes the digital information using advanced steganographic techniques that hide the data safely inside the image. To date, none of the aforementioned AIs has succeeded in decoding a Kama file, not even with help.
Kama also fools every web robot that accepts uploads of such files without questioning their contents, which, it should be noted, pose no danger to the web or to the application "zombified" by Kama.
Category: Artificial Intelligence
[1] viXra:2508.0003 [pdf] submitted on 2025-08-01 18:05:23
Authors: Sayed Amir Karim
Comments: 34 Pages. (Note by viXra Admin: Please submit article written with AI assistance to ai.viXra.org)
Semantic similarity systems face a fundamental trade-off between domain expertise and multilingual capability, as single embedding spaces cannot preserve both specialized knowledge and cross-linguistic connections. We decompose semantic similarity into three specialist layers (domain-specific, cross-linguistic, and cross-domain), fused with context-adaptive weights. On 783K scientific concepts (6 domains, 8 languages), the approach yields 15% higher Pearson correlation than strong ensembles (r = 0.831 vs 0.748, p < 0.001) at 1.1× computational cost. MTEB evaluation shows consistent 12% gains across 14 tasks. Our theoretical analysis provides mathematical proofs of superiority with O(d) complexity bounds and convergence guarantees. Production deployment on the AQEA Universal Platform processes 783K+ concepts with 16.8 ms latency and 99.97% uptime. Multi-Layer Network Theory establishes the first systematic solution to the semantic compression problem, enabling AI systems that maintain specialized expertise while preserving global multilingual accessibility. The framework's theoretical rigor, comprehensive validation, and production success position it for immediate adoption across scientific, educational, and commercial applications.
Category: Artificial Intelligence
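The abstract above does not give the fusion rule, so the following is only a plausible sketch of "three specialist layers fused with context-adaptive weights": each layer produces a similarity score, and a hypothetical softmax gate over context features produces a convex combination of the three.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax; output is positive and sums to 1.
    e = np.exp(z - z.max())
    return e / e.sum()

def fused_similarity(sims, context_features, W):
    # sims: scores from the three specialist layers
    # (domain-specific, cross-linguistic, cross-domain).
    # W: hypothetical linear gate mapping context features to layer logits.
    weights = softmax(W @ context_features)
    return float(weights @ np.asarray(sims))

# Illustrative example: 3 layers, a 4-dimensional context vector.
W = np.random.default_rng(1).standard_normal((3, 4))
ctx = np.array([1.0, 0.0, 0.5, -0.2])
score = fused_similarity([0.9, 0.6, 0.3], ctx, W)
```

Because the softmax weights form a convex combination, the fused score always lies between the lowest and highest specialist score, so no single layer can be overridden beyond its own estimate.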