[13] viXra:2511.0117 [pdf] submitted on 2025-11-24 01:45:21
Authors: Claire Nicholson
Comments: 17 Pages. (Note by viXra Admin: Please submit article written with AI assistance to ai.viXra.org)
Large language models often exhibit behavioural variability, adversarial drift, and structural inconsistency across repeated generations. This study presents empirical evidence that a structured prompt operator, referred to as the HelixScribe operator, can reliably stabilise these behaviours without modifying model weights. Across more than 1,100 generations spanning 120 paired business scenarios, the operator induced a compact behavioural manifold approximately 7.4 times smaller than that produced by vanilla prompting, with a centroid shift of 3.35σ in six-dimensional metric space. Outputs remained stable even under conflicting or adversarial instructions, whereas vanilla prompting showed marked degradation. These results suggest that operator-level syntax can act as a form of soft behavioural control, producing fine-tuning-like stability through prompt structure alone.
Category: Artificial Intelligence
[12] viXra:2511.0116 [pdf] submitted on 2025-11-23 00:56:33
Authors: Satyadhar Joshi
Comments: 35 Pages. (Note by viXra Admin: Please submit article written with AI assistance to ai.viXra.org)
This comprehensive analysis examines the profound impact of artificial intelligence (AI) on global labor markets, focusing on workforce disruption patterns, emerging skill requirements, and the critical rise of prompt engineering as a core competency. Drawing from over 70 authoritative sources, we find that AI is expected to affect approximately 40% of jobs globally, with generative AI potentially transforming up to 90% of existing occupations. While automation may displace 85 million jobs by 2025, it is projected to create 97 million new roles, representing a net positive employment shift. The impact, however, varies by region: advanced economies face higher disruption levels (around 60% of jobs affected), compared to emerging markets (40%) and low-income countries (26%). Prompt engineering has emerged as an essential cross-domain skill, spanning finance, healthcare, education, and creative industries. Organizations implementing structured AI training programs report 45-60% improvements in workforce adaptation and productivity, with prompt engineering training yielding performance effect sizes between 1.24 and 1.32 standard deviations based on current literature. These findings highlight the shifting nature of human-AI collaboration and underscore the urgency of integrating AI literacy and prompt design into professional development frameworks. This research concludes with strategic recommendations for policymakers, educators, and industry leaders, advocating for proactive investment in AI literacy, adaptive workforce policies, and equitable access to AI skill development. Such measures are critical to harness AI’s transformative potential while mitigating displacement risks, fostering resilient and inclusive labor markets in the era of intelligent automation. All results and proposals are from cited literature.
Category: Artificial Intelligence
[11] viXra:2511.0100 [pdf] submitted on 2025-11-20 00:34:48
Authors: Alberto Romero
Comments: 29 Pages. (Note by viXra Admin: Please submit article written with AI assistance to ai.viXra.org)
Recent advances in large language model (LLM) agent optimization reveal a fundamental limitation: single-dimension approaches (whether context engineering, test-time compute, or parameter tuning) are increasingly being surpassed by sophisticated hybrid systems that adaptively orchestrate multiple optimization strategies. We analyze Agentic Context Engineering (ACE) and 50+ papers from 2024-2025 to identify critical gaps in current optimization paradigms. Building on this analysis, we propose Meta-Adaptive Context Engineering (Meta-ACE), a novel framework that addresses ACE’s core limitations through adaptive multi-strategy optimization with learned meta-policies. Meta-ACE introduces a learned meta-controller that dynamically composes optimization strategies based on real-time assessment of task characteristics, model confidence, and feedback reliability. Rather than applying uniform context engineering, Meta-ACE treats optimization as a sequential decision problem, learning to allocate computational budget across six strategies: minimal context, ACE-style reflection, test-time compute, hierarchical verification, adaptive memory, and selective test-time training. Our framework addresses three critical limitations of ACE: dependency on strong reflectors, vulnerability to poor feedback quality, and uniform processing regardless of task complexity. Through hierarchical fallbacks, quality gates, and meta-reinforcement learning on diverse task distributions, Meta-ACE enables graceful degradation and achieves projected improvements of 8-11% on agent benchmarks and 6-8% on domain-specific tasks, while reducing computational costs by 30-40% through adaptive resource allocation. This work demonstrates that comprehensive, multi-dimensional optimization with learned coordination represents the next frontier in building robust, efficient, and self-improving AI agent systems.
Category: Artificial Intelligence
[10] viXra:2511.0071 [pdf] replaced on 2025-12-30 09:44:09
Authors: Dimiter Dobrev
Comments: 11 Pages.
If we aim to create AGI, our first job is to enable it to understand the world. The key to understanding has a name, and that name is world model. This is what AGI must look for. In fact, rather than looking for a model, we will aim to find a description of the world. For this purpose, we need a language for the description of worlds. We will use the game of chess to create the language we need. We have already done this in a previous paper, but then the agent was able to see the chessboard, while now it will play blind. Playing without seeing the chessboard makes the problem more complex and requires the addition of abstract ED models. The result will be a world model which will enable AGI to think in its mind and plan its actions.
Category: Artificial Intelligence
[9] viXra:2511.0059 [pdf] submitted on 2025-11-13 15:26:04
Authors: Gurpreet Singh, Trina Banerjee, Nishaa
Comments: 41 Pages.
Artificial Intelligence (AI) has evolved remarkably over the past seven decades, transforming from simple rule-based systems into complex multimodal and generative frameworks capable of reasoning, creativity, and perception. This review traces the chronological development of AI tools, highlighting key milestones that shaped the field, from the early symbolic programs like Logic Theorist and ELIZA to the emergence of modern large-scale models such as GPT-4, Gemini, and Claude. The study explores the progression across distinct eras: the foundational period of symbolic reasoning (1940s-1970s), the rise of machine learning and statistical modeling (1980s-2000s), the deep learning revolution (2010s), and the recent explosion of generative and multimodal systems (2020-2025). Each phase reflects a major shift in how intelligence is defined, represented, and implemented, from handcrafted logic to data-driven learning and now to context-aware multimodal understanding. By reviewing over fifty significant AI tools and frameworks, this paper provides a comprehensive overview of how incremental innovations in computation, data availability, and model architecture have collectively enabled the current state of AI. The work concludes with insights on how this evolution paves the way for the next generation of agentic and real-time AI systems capable of seamless interaction across text, image, audio, and video modalities.
Category: Artificial Intelligence
[8] viXra:2511.0044 [pdf] submitted on 2025-11-10 01:37:43
Authors: Olegs Verhodubs
Comments: 8 Pages. (Note by viXra Admin: Non-academic content blocked)
There is a great deal of knowledge on the Web, but there is no technological way to extract it. The bulk of this knowledge is embedded in texts, and machine text processing is so inefficient that it is necessary to use Semantic Web technologies [1]. Working with ontologies (part of the Semantic Web) is convenient, but the process of creating ontologies is still more of a manual task than an automatic process. This paper proposes to generate IF..THEN rules from raw texts (from sentences) on the Web, and then perform logical inference based on these rules. Moreover, semantic processing is proposed to be applied to the IF part and the THEN part, rather than to the entire raw text, generating an ontology from it. This method of generating rules and logical inference is being implemented in the Keyword Search Engine Enriched by Expert System Features [2], which will allow us to obtain expert assessments from many useful texts on the Web.
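The rule-generation-plus-inference idea described above can be sketched in a few lines of Python. This is a hypothetical illustration, not the paper's parser: it only recognises the literal sentence pattern "if A then B" and then forward-chains over the extracted rules.

```python
import re

def extract_rules(sentences):
    """Extract toy IF..THEN rules from sentences matching 'if <A> then <B>'."""
    rules = []
    for s in sentences:
        m = re.match(r"if (.+?) then (.+?)\.?$", s.strip(), re.IGNORECASE)
        if m:
            rules.append((m.group(1).lower(), m.group(2).lower()))
    return rules

def forward_chain(facts, rules):
    """Repeatedly apply rules until no new conclusion can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for cond, concl in rules:
            if cond in facts and concl not in facts:
                facts.add(concl)
                changed = True
    return facts

sentences = ["If the road is wet then driving is risky.",
             "If it rains then the road is wet."]
rules = extract_rules(sentences)
derived = forward_chain({"it rains"}, rules)
print(derived)  # includes "the road is wet" and "driving is risky"
```

A real system would need semantic processing of the IF and THEN parts, as the abstract notes; the regex stands in for that step only to make the inference loop concrete.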
Category: Artificial Intelligence
[7] viXra:2511.0043 [pdf] submitted on 2025-11-10 21:14:09
Authors: Subhojit Ghimire
Comments: 9 Pages.
Now that AI-driven moderation has become pervasive in everyday life, we often hear claims that "the AI is biased". While this is often said jokingly, the light-hearted remark reflects a deeper concern. How can we be certain that an online post flagged as "inappropriate" was not simply the victim of a biased algorithm? This paper investigates this problem using a dual approach. First, I conduct a quantitative benchmark of a widely used toxicity model (unitary/toxic-bert) to measure performance disparity between text in African-American English (AAE) and Standard American English (SAE). The benchmark reveals a clear, systematic bias: on average, the model scores AAE text as 1.8 times more toxic and 8.8 times higher for "identity hate". Second, I introduce an interactive pedagogical tool that makes these abstract biases tangible. The tool’s core mechanic, a user-controlled "sensitivity threshold," demonstrates that the biased score itself is not the only harm; instead, the more-concerning harm is the human-set, seemingly neutral policy that ultimately operationalises discrimination. This work provides both statistical evidence of disparate impact and a public-facing tool designed to foster critical AI literacy.
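The two measurements above (a mean-score disparity ratio and the effect of a human-set threshold) can be sketched as follows. The scores here are invented stand-ins for unitary/toxic-bert outputs, chosen only to make the mechanics visible; the real benchmark would score paired AAE/SAE texts with the model.

```python
# Hypothetical toxicity scores standing in for model outputs on paired texts.
aae_scores = [0.42, 0.55, 0.38, 0.61, 0.47]
sae_scores = [0.21, 0.30, 0.19, 0.35, 0.25]

mean = lambda xs: sum(xs) / len(xs)
disparity = mean(aae_scores) / mean(sae_scores)  # mean-score ratio, AAE vs SAE

def flag_rate(scores, threshold):
    """Fraction of texts a human-set sensitivity threshold would flag."""
    return sum(s >= threshold for s in scores) / len(scores)

# A seemingly neutral threshold of 0.4 flags most AAE samples and no SAE ones,
# illustrating how the policy operationalises the score disparity.
print(round(disparity, 2), flag_rate(aae_scores, 0.4), flag_rate(sae_scores, 0.4))
# prints: 1.87 0.8 0.0
```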
Category: Artificial Intelligence
[6] viXra:2511.0038 [pdf] submitted on 2025-11-10 19:25:41
Authors: Rana Shivang Singh
Comments: 10 Pages.
Federated Learning (FL) is an emerging method for training machine learning models without centralizing the data. By not centralizing the data, FL is compatible with security-conscious sectors like healthcare, finance, and IoT. However, despite this benefit, FL currently encounters three key issues hindering mainstream adoption: high energy consumption during distributed training, the requirement for trust among users, and the absence of good verifiability to ensure the result is correct and not adulterated. In the past few years, researchers have attempted to solve each of these problems individually. Initiatives under Green FL work towards minimizing the carbon and energy footprint. Blockchain-enabled solutions incorporate mechanisms for trust among clients as well as incentives. Cryptographic and auditing mechanisms allow for some extent of verifiability. The majority of the above works consider the problems in isolation. What is still absent is an integrated picture that examines their interplay, trade-offs, and the potential for common frameworks. This paper surveys 45 papers from 2021 to 2025 that relate to energy awareness, blockchain incorporation, or verifiability in FL. We categorise each paper with a straightforward coding scheme (Yes, Partial, No) on the three dimensions and study overlaps. The results show blockchain as the most progressed strand, energy efficiency dealt with moderately, while verifiability remains the least studied. The paper ends with gaps, open issues, and future work towards sustainable and trustworthy FL.
Category: Artificial Intelligence
[5] viXra:2511.0028 [pdf] submitted on 2025-11-07 01:41:13
Authors: Jay Dayal Guwalani
Comments: 15 Pages.
Predictive maintenance in automotive telematics signifies a revolutionary method for vehicle health management, using machine learning methods to foresee breakdowns and enhance maintenance schedules. This research utilizes machine learning methods to ascertain the loading status of trucks—loaded or empty—exclusively using data from the vehicle's communication network, particularly from the engine module. We attained an accuracy over 85% for small hauls (0.5 to 5 km) and approximately 95% for long hauls (5 to 500 km). This method optimizes fleet management by minimizing communication between managers and drivers, while also significantly contributing to research on fuel consumption reduction and advanced fault diagnostics. The findings demonstrate that machine learning-based predictive maintenance decreases unplanned downtime and maintenance expenses while also improving vehicle safety and durability. This paper provides a thorough examination of the efficacy of machine learning models in predictive maintenance, delineates the challenges associated with data privacy, computational efficiency, and integration with current automotive systems, and explores future avenues for creating more resilient and scalable predictive maintenance frameworks in the automotive sector.
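The loaded/empty classification idea above can be illustrated with a toy sketch. The features (mean engine load %, mean fuel rate) and the nearest-centroid decision rule are illustrative assumptions; the paper's actual feature set and model are not specified in this abstract.

```python
def centroid(rows):
    """Component-wise mean of a list of feature tuples."""
    n = len(rows)
    return tuple(sum(r[i] for r in rows) / n for i in range(len(rows[0])))

def classify(sample, loaded_c, empty_c):
    """Nearest-centroid decision over hypothetical engine-module features."""
    d = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    return "loaded" if d(sample, loaded_c) < d(sample, empty_c) else "empty"

# Hypothetical per-haul feature averages: (engine load %, fuel rate L/h)
loaded = [(72.0, 28.5), (68.0, 26.0), (75.0, 30.1)]
empty  = [(41.0, 15.2), (38.0, 14.0), (44.0, 16.8)]
lc, ec = centroid(loaded), centroid(empty)
print(classify((70.0, 27.0), lc, ec))  # -> loaded
```

Longer hauls average over more network messages, which is one plausible reason the abstract reports higher accuracy on 5-500 km hauls than on 0.5-5 km ones.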
Category: Artificial Intelligence
[4] viXra:2511.0019 [pdf] submitted on 2025-11-05 08:46:26
Authors: Hidehiko Okada
Comments: 8 Pages.
This study investigates the performance of a Genetic Algorithm (GA) for optimizing binary neural network controllers in the Atari Space Invaders task, extending prior work that applied an Evolution Strategy (ES) to the same optimization problem. The network topology and the activation function are kept consistent with the earlier study to enable direct comparison between GA and ES. Two GA configurations were utilized while varying the number of hidden units and the bit precision of connection weights. Experimental results revealed that, for the number of hidden units of 1, 2, 4, and 8, the game scores achieved by 1-bit networks were not significantly lower than those of 64-bit networks, consistent with prior ES-based findings. Moreover, even a single hidden unit exhibited competitive performance, unlike in the ES case where performance degraded markedly. GA outperformed ES under the configuration emphasizing the number of generations, while ES performed better under the configuration emphasizing population size; the former difference was statistically significant (p < .01). These findings suggest that GA provides a viable alternative to ES for training binary neural network controllers in reinforcement learning tasks.
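A GA over 1-bit weight vectors, as used in the study above, can be sketched minimally. The real fitness is the Space Invaders game score; here it is replaced by a toy target-matching objective, and the population size, generation count, and mutation rate are illustrative assumptions.

```python
import random

random.seed(0)

TARGET = [1, 0, 1, 1, 0, 0, 1, 0]  # stand-in "optimal" 1-bit weight pattern
fitness = lambda g: sum(a == b for a, b in zip(g, TARGET))

def evolve(pop_size=20, generations=40, p_mut=0.05):
    pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        def pick():
            # Binary tournament selection
            a, b = random.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        for _ in range(pop_size):
            p1, p2 = pick(), pick()
            cut = random.randrange(1, len(TARGET))           # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [b ^ (random.random() < p_mut) for b in child]  # bit-flip mutation
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))  # typically reaches or approaches the maximum of 8
```

Because 1-bit weights make the search space a plain bit string, crossover and bit-flip mutation apply directly, which is part of what makes GA a natural fit for the binary-network setting the study examines.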
Category: Artificial Intelligence
[3] viXra:2511.0014 [pdf] submitted on 2025-11-06 02:24:37
Authors: Khushi Kher
Comments: 7 Pages.
This report offers an examination of keystroke analysis as a method for authenticating users, employing machine learning techniques. The report encompasses a comprehensive exploration of the theoretical underpinnings and contemporary research in keystroke dynamics. Furthermore, it provides insights into the practical implementation of keystroke analysis for user authentication, elucidating the operational aspects and technical intricacies involved. Additionally, the report critically evaluates the limitations encountered within this authentication method, providing a detailed analysis of the challenges faced. The report concludes by outlining the potential of keystroke analysis in enhancing security measures and augmenting user experience. Overall, this report aims to contribute to the discourse on keystroke dynamics, shedding light on both its advancements and limitations while envisioning its future prospects in the realm of user authentication.
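The core mechanic of keystroke-dynamics authentication can be sketched as comparing a login attempt's inter-key timings against a stored profile. The timings, the flight-time feature, and the tolerance value below are illustrative assumptions, not values from the report.

```python
def flight_times(timestamps):
    """Intervals between successive key presses (ms)."""
    return [b - a for a, b in zip(timestamps, timestamps[1:])]

def authenticate(profile, attempt, tolerance=40.0):
    """Accept if the mean absolute deviation from the profile is within tolerance."""
    devs = [abs(p - a) for p, a in zip(profile, attempt)]
    return sum(devs) / len(devs) <= tolerance

enrolled = flight_times([0, 120, 250, 410, 530])   # user's typical rhythm
genuine  = flight_times([0, 130, 245, 400, 545])   # same user, slight variation
imposter = flight_times([0, 60, 300, 340, 700])    # different typing rhythm
print(authenticate(enrolled, genuine), authenticate(enrolled, imposter))
# prints: True False
```

Real systems typically also use dwell times (key-down to key-up) and a learned model rather than a fixed tolerance, which is where the machine learning techniques the report surveys come in.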
Category: Artificial Intelligence
[2] viXra:2511.0010 [pdf] submitted on 2025-11-03 19:59:26
Authors: Tadisetty Sai Yashwanth
Comments: 7 Pages.
Floating-point non-associativity makes fundamental deep learning operations, such as matrix multiplication (matmul) on GPUs, inherently non-deterministic. Despite this, the statistical structure of the resulting numerical error remains poorly understood. A common working assumption is that these errors behave as independent and identically distributed (i.i.d.) Gaussian noise. In this paper, we empirically test this assumption and show that it fails to describe real GPU behavior. By comparing outputs of single-input and batched matmuls, we find that while the i.i.d. model predicts non-zero output instability, empirical results show a 0.00% prediction flip rate. Through covariance analysis, we uncover the cause: the floating-point error is structured and highly correlated. For float16, nearly 50% of the total error variance lies in off-diagonal terms, revealing that the noise behaves as a coordinated, directional perturbation rather than random static. This result challenges the prevailing stochastic view of numerical noise and provides a principled foundation for analyzing deep learning reliability under hardware non-determinism.
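The two ingredients of the abstract above (non-associativity under different reduction orders, and measuring how much error variance sits off the covariance diagonal) can be illustrated in pure Python. The data are synthetic stand-ins; the paper's actual measurements come from GPU matmuls.

```python
import random

random.seed(1)

# (1) Floating-point non-associativity: summing the same values in a
# different order (as different GPU reduction orders do) can change the result.
xs = [random.uniform(-1, 1) * 10 ** random.randint(-8, 8) for _ in range(1000)]
print(sum(xs) == sum(reversed(xs)))  # often False for mixed-magnitude data

def offdiag_fraction(errors):
    """Fraction of total covariance mass in off-diagonal terms.

    `errors` is a list of error vectors, one per repeated run."""
    n, d = len(errors), len(errors[0])
    mean = [sum(e[j] for e in errors) / n for j in range(d)]
    cov = [[sum((e[i] - mean[i]) * (e[j] - mean[j]) for e in errors) / n
            for j in range(d)] for i in range(d)]
    total = sum(abs(cov[i][j]) for i in range(d) for j in range(d))
    off = sum(abs(cov[i][j]) for i in range(d) for j in range(d) if i != j)
    return off / total

# (2) Perfectly correlated 2-D error vectors put exactly half the covariance
# mass off-diagonal, mimicking the "coordinated perturbation" structure.
runs = [[random.gauss(0, 1)] * 2 for _ in range(200)]
print(offdiag_fraction(runs))  # -> 0.5
```

An i.i.d. noise model would put essentially all covariance mass on the diagonal; the near-50% off-diagonal share the paper reports for float16 is what rules that model out.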
Category: Artificial Intelligence
[1] viXra:2511.0002 [pdf] submitted on 2025-11-01 16:22:29
Authors: Gurpreet Singh
Comments: 26 Pages.
Large Language Models (LLMs) have rapidly become a central focus in both research and practical applications, owing to their remarkable ability to understand and generate text with a level of fluency comparable to human communication. Recently, these models have evolved into multimodal large language models (MM-LLMs), extending their capabilities beyond text to include images, audio, and video. This advancement has enabled a wide array of applications, including text-to-video synthesis, image captioning, and text-to-speech systems. MM-LLMs are developed either by augmenting existing LLMs with multi-modal functionality or by designing multi-modal architectures from the ground up. This paper presents a comprehensive review of the current landscape of LLMs with multi-modal capabilities, highlighting both foundational and cutting-edge MM-LLMs. It traces the historical development of LLMs, emphasizing the transformative impact of transformer-based architectures such as OpenAI's GPT series and Google's BERT, as well as the role of attention mechanisms in improving model performance. The review also examines key strategies for adapting pre-trained models to specific tasks, including fine-tuning and prompt engineering. Ethical challenges, including data bias and the potential for misuse, are discussed to stress the importance of responsible AI deployment. Finally, we explore the implications of open-source versus proprietary models for advancing research in this field. By synthesizing these insights, this paper underscores the significant potential of MM-LLMs to reshape diverse applications across multiple domains.
Category: Artificial Intelligence