Artificial Intelligence

2604 Submissions

[8] viXra:2604.0119 [pdf] submitted on 2026-04-30 23:15:01

What Does "Formally Verified" Actually Guarantee?

Authors: Stanislav Komarovsky
Comments: 12 Pages.

We prove that for any formal verification of any real system, the correspondence between the formal proposition and the system it describes cannot be established within any finite tower of formal languages. The proof follows from Tarski’s undefinability theorem applied iteratively: verifying that a proposition correctly describes a system requires expressing a correspondence claim that, by Tarski’s theorem, cannot be formulated within the proposition’s own language. Expressing the correspondence in a richer metalanguage generates a new correspondence claim that cannot be formulated in the metalanguage, producing an infinite regress that no finite extension of the formal framework can resolve. The result is structural, not contingent on current tooling or the complexity of the target system. We discuss five caveats (concerning human knowledge, the value of formal verification, partial gap closure, varying assumption strength, and the functionalist objection) and draw implications for the verification of AI-generated software artifacts.
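The regress the abstract describes can be sketched schematically (the notation below is mine, not the paper's):

```latex
% Let \varphi \in L_0 be the verified proposition and S the real system.
\begin{align*}
\mathrm{Corr}_0(\varphi, S) &\notin L_0
  && \text{(Tarski: correspondence for } L_0 \text{ is not definable in } L_0\text{)}\\
\mathrm{Corr}_0 \in L_1 &\Rightarrow \mathrm{Corr}_1(\mathrm{Corr}_0, S) \notin L_1
  && \text{(the same argument, one level up)}\\
&\;\;\vdots
  && \text{no finite tower } L_0 \subsetneq L_1 \subsetneq \cdots \subsetneq L_n \text{ closes the gap}
\end{align*}
```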
Category: Artificial Intelligence

[7] viXra:2604.0118 [pdf] submitted on 2026-04-30 05:12:54

A Modular Zero-Knowledge Credential Framework for Multi-System Attribute Verification with Scoped Unlinkability and Efficient Accumulator-Based Revocation

Authors: Sayan Bairagi, Sayan Singha Roy, Abir Rakshit, Anik Bhowmick
Comments: 41 Pages.

This work presents a zero-knowledge credential framework designed to enable secure and privacy-preserving attribute verification across multiple independent systems. The framework allows a user to prove statements of the form a ≥ t, where a ∈ Z_q represents a secret attribute and t denotes a public threshold, without revealing the attribute value itself. At the same time, the framework prevents the exposure of any globally stable identifier, thereby eliminating the risk of cross-domain tracking. The construction is based on Pedersen commitments, where each attribute is encoded as C = g^a h^r ∈ G, with G ⊆ Z*_p denoting a cyclic group of prime order q. The generators g and h are selected such that the discrete logarithm relation between them is unknown. This ensures that the commitment is computationally binding under the discrete logarithm assumption and perfectly hiding due to the use of randomness r. As a result, the committed attribute remains concealed while still allowing verification of statements about it. Predicate verification is achieved using a sigma protocol, which enables the prover to demonstrate knowledge of valid witnesses without revealing them. In particular, the protocol proves the relation C · g^{-t} = g^δ h^r, where δ = a − t. This transformation allows the system to verify threshold conditions such as a ≥ t without disclosing the value of a. The zero-knowledge property of the protocol ensures that the verifier learns only the validity of the statement and no additional information about the underlying attribute or randomness. To prevent correlation of user activity across different verification domains, the framework introduces scoped pseudonyms defined as ID_S = pk^{H(S)}, where pk = g^x is a public key derived from a secret key x, and H is a cryptographic hash function modeled as a random oracle. The scope S represents a domain-specific identifier.
This construction produces a unique identifier for each domain while ensuring that identifiers generated for different scopes cannot be linked without solving the discrete logarithm problem in G. Revocation is supported through an RSA accumulator constructed under the Strong RSA assumption. For a revoked set R = {r_i}, the accumulator value is defined as A = g^{∏ r_i} mod N, where N is an RSA modulus. The system enables efficient non-membership verification using witnesses derived from Bézout coefficients. This mechanism allows a verifier to confirm that a credential has not been revoked, while maintaining constant verification cost that does not depend on the size of the revoked set. Importantly, this process does not introduce any additional identifiers that could compromise user privacy. The framework follows a complete lifecycle consisting of system setup, credential issuance, proof generation, scoped identifier derivation, verification, and revocation checking. During issuance, attributes are committed and signed by an issuer, producing a credential that can later be used by the holder. During authentication, the holder generates a non-interactive zero-knowledge proof bound to a verifier-specific challenge, ensuring freshness and resistance to replay attacks. The verifier evaluates the proof, validates the credential, and performs revocation checks without accessing any underlying attribute values. The security of the system is grounded in well-established cryptographic assumptions, including the discrete logarithm assumption in prime-order groups, the Strong RSA assumption for accumulator security, and the random oracle model for hash functions. Under these assumptions, the framework provides attribute privacy, soundness of predicate proofs, scoped unlinkability across verification domains, and resistance to replay and collusion-based inference attacks.
The complete protocol stack has been implemented using 2048-bit security parameters within a modular architecture that includes issuer, holder, and multiple verifier components. The system is designed to be compatible with decentralized identity frameworks through the integration of decentralized identifiers (DID) and verifiable credentials (VC). Experimental evaluation demonstrates that the framework achieves an average verification latency below 3 milliseconds, a compact presentation size of approximately 3 kilobytes, and stable revocation verification performance for revoked sets containing up to 200 elements. The proposed framework demonstrates that selective disclosure, strong unlinkability, and efficient revocation can be achieved simultaneously without relying on pairing-based cryptography or trusted setup assumptions. The modular structure allows independent evolution of system components while maintaining consistent security guarantees. This makes the system suitable for practical deployment in multi-system identity verification scenarios where both security and privacy are essential.
Index Terms: Zero-knowledge proofs, Attribute-based credentials, Selective disclosure, Unlinkability, RSA accumulator, Pedersen commitments, Sigma protocols, Decentralized identity, Verifiable credentials
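The commitment, threshold relation, and scoped-pseudonym constructions can be sketched in a few lines of Python. This is an illustrative toy with deliberately tiny, insecure parameters and stand-in generators, not the authors' 2048-bit implementation; a real prover would prove knowledge of δ and r in zero knowledge rather than reveal them.

```python
# Toy sketch of the Pedersen commitment, threshold relation, and scoped
# pseudonym described above. Parameters are small and NOT secure.

import hashlib

# Small Schnorr group: p = 2q + 1 with q prime; G is the order-q subgroup of Z_p^*.
p, q = 2039, 1019                  # both prime, p = 2q + 1
g = pow(2, 2, p)                   # squaring lands in the order-q subgroup
h = pow(3, 2, p)                   # second generator (log relation unknown only by fiat here)

def commit(a, r):
    """Pedersen commitment C = g^a * h^r mod p."""
    return (pow(g, a, p) * pow(h, r, p)) % p

def threshold_relation_holds(C, a, r, t):
    """Check C * g^(-t) = g^(a - t) * h^r, the relation the sigma protocol proves.
    Negative exponents use Python 3.8+'s modular-inverse form of pow()."""
    lhs = (C * pow(g, -t, p)) % p
    rhs = (pow(g, (a - t) % q, p) * pow(h, r, p)) % p
    return lhs == rhs

def scoped_pseudonym(x, scope):
    """ID_S = pk^H(S) with pk = g^x: one identifier per scope S."""
    pk = pow(g, x, p)
    e = int.from_bytes(hashlib.sha256(scope.encode()).digest(), "big") % q
    return pow(pk, e, p)
```

The relation check is purely algebraic: C · g^{-t} = g^a h^r g^{-t} = g^{a-t} h^r, so it succeeds exactly when the claimed opening (a, r) matches the commitment.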
Category: Artificial Intelligence

[6] viXra:2604.0096 [pdf] submitted on 2026-04-26 18:24:54

True AI Should Not Be a Winner, But a Loser

Authors: Dimiter Dobrev
Comments: 6 Pages. In Bulgarian

There is an inaccuracy in the modern definition of AI. Today's definition says that AI is a program that is successful. Indeed, for a program to be successful, it must be intelligent, but the opposite is not true. A program can be intelligent but not successful, simply because it has other goals and does not strive for the success in question. From a theoretical point of view, the modern definition of AI is good enough, because it answers the question "What is AI?" even though it describes not all intelligent programs, but only some of them. From a practical point of view, however, this definition is not sufficient. The reason is that we are on the verge of creating true AI and we need to choose, from all intelligent programs, the one with which we will live best. It turns out that it is not good to choose one of the successful programs. It would be better to choose a program that does not blindly strive for victory. Such a program can be called a loser, because it will not be successful enough. However, in both humans and AI, reckless ambition is not a positive quality.
Category: Artificial Intelligence

[5] viXra:2604.0062 [pdf] submitted on 2026-04-16 18:44:39

DeepSlide: From Artifacts to Presentation Delivery

Authors: Ming Yang, Zhiwei Zhang, Jiahang Li, Haoseng Liu, Yuzheng Cai, Weiguo Zheng
Comments: 37 Pages. (Note by viXra Admin: Please submit article written with AI assistance to ai.viXra.org)

Presentations are a primary medium for scholarly communication, yet most AI slide generators optimize the artifact (a visually plausible deck) while under-optimizing the delivery process (pacing, narrative, and presentation preparation). We present DeepSlide, a human-in-the-loop multi-agent system that supports preparing the full presentation process, from requirement elicitation and time-budgeted narrative planning, to evidence-grounded slide-script generation, attention augmentation, and rehearsal support. DeepSlide integrates (i) a controllable logical-chain planner with per-node time budgets, (ii) a lightweight content-tree retriever for grounding, (iii) Markov-style sequential rendering with style inheritance, and (iv) sandboxed execution with minimal repair to ensure renderability. We further introduce a dual-scoreboard benchmark that cleanly separates static artifact quality from dynamic delivery excellence. Across 20 domains and diverse audience profiles, DeepSlide matches strong baselines on artifact quality while consistently achieving larger gains on delivery metrics, improving narrative flow, pacing precision, and slide-script synergy with clearer attention guidance.
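As an illustration of component (i), a per-node time budget could be allocated proportionally along the narrative chain. The function name and the proportional rule below are my assumptions for illustration, not DeepSlide's actual planner:

```python
# Hypothetical proportional allocator for per-node time budgets along a
# narrative chain (illustrative; not DeepSlide code).

def allocate_time_budgets(nodes, total_minutes):
    """nodes: list of (name, weight) pairs along the logical chain.
    Returns a dict of per-node minute budgets summing to total_minutes."""
    total_w = sum(w for _, w in nodes)
    return {name: total_minutes * w / total_w for name, w in nodes}
```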
Category: Artificial Intelligence

[4] viXra:2604.0059 [pdf] replaced on 2026-04-26 07:19:04

Reducing Credit Assignment Variance via Counterfactual Reasoning Paths

Authors: Fei Ding, Yongkang Zhang, Yeling Peng, Youwei Wang, Guoxiong Zhou, Zijian Zeng
Comments: 8 Pages.

Reinforcement learning for multi-step reasoning with large language models (LLMs) often relies on sparse terminal rewards, leading to poor credit assignment: the final feedback is propagated evenly across all intermediate decisions. This results in high gradient variance, unstable training, and numerous ineffective updates, ultimately causing training to stall and preventing sustained improvement. We introduce a counterfactual comparison-based credit assignment framework, which samples multiple reasoning trajectories under the same input. By treating their differences as an implicit approximation of alternative decisions, we construct an implicit process-level advantage estimator that transforms sparse terminal rewards into step-sensitive learning signals. Based on this, we propose Implicit Behavior Policy Optimization (IBPO), which significantly improves training stability and performance upper bounds on mathematical and code reasoning benchmarks, pointing to a promising direction for unlocking the performance potential of LLMs.
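The abstract does not spell out IBPO's estimator; under two assumptions of mine (a group-mean baseline and crediting only steps after a trajectory first diverges from its group), the idea of turning terminal rewards into step-sensitive signals can be sketched as:

```python
# Illustrative sketch (assumed details, not the paper's exact estimator):
# compare each trajectory in a same-prompt group against the group-mean
# reward, and assign the advantage only to steps past the first divergence,
# since shared prefixes carry no counterfactual signal.

def group_relative_step_advantages(trajs, rewards):
    """trajs: list of token-id lists sampled from one prompt;
    rewards: terminal reward per trajectory.
    Returns per-trajectory lists of step-level advantages."""
    baseline = sum(rewards) / len(rewards)
    out = []
    for k, traj in enumerate(trajs):
        adv = rewards[k] - baseline
        # earliest position where this trajectory differs from any other
        div = min(
            next((i for i, (a, b) in enumerate(zip(traj, other)) if a != b),
                 min(len(traj), len(other)))
            for j, other in enumerate(trajs) if j != k
        )
        out.append([0.0] * div + [adv] * (len(traj) - div))
    return out
```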
Category: Artificial Intelligence

[3] viXra:2604.0058 [pdf] submitted on 2026-04-15 20:10:37

Design Conditions for Intra-Group Learning of Sequence-Level Rewards: Token Gradient Cancellation

Authors: Fei Ding, Yongkang Zhang, Youwei Wang, Zijian Zeng
Comments: 8 Pages. (Note by viXra Admin: Please submit article written with AI assistance to ai.viXra.org)

Under sparse terminal rewards, intra-group comparisons have become the dominant paradigm for fine-tuning reasoning models via reinforcement learning. However, long-term training often leads to issues such as ineffective update accumulation (a "learning tax"), solution probability drift, and entropy collapse. This paper presents a necessary condition for algorithm design from a token-level credit assignment perspective: to prevent reward-irrelevant drift, intra-group objectives must maintain gradient exchangeability across token updates, enabling gradient cancellation on weak-credit, high-frequency tokens. We show that two common mechanisms disrupting exchangeability make "non-cancellation" a structural norm. Based on this, we propose minimal intra-group transformations to restore or approximate the cancellation structure in the shared token space. Experimental results demonstrate that these transformations stabilize training, improve sample efficiency, and enhance final performance, validating the value of this design condition.
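The cancellation property can be illustrated with a mean-baseline intra-group objective (my illustration; the abstract does not specify the paper's objectives): when advantages are centered to zero mean, a token emitted by every trajectory in the group receives a net-zero update weight, while tokens unique to high- or low-reward trajectories keep a signed weight.

```python
# Sum per-token advantage weights across a same-prompt group: with a
# mean baseline, weights on tokens shared by all trajectories cancel.

from collections import defaultdict

def net_token_weights(trajs, rewards):
    """trajs: lists of tokens per trajectory; rewards: terminal rewards.
    Returns net advantage weight accumulated per token."""
    baseline = sum(rewards) / len(rewards)
    weights = defaultdict(float)
    for traj, r in zip(trajs, rewards):
        for tok in traj:
            weights[tok] += r - baseline
    return dict(weights)
```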
Category: Artificial Intelligence

[2] viXra:2604.0018 [pdf] submitted on 2026-04-06 20:43:07

DisasterSim: A Reproducible Benchmark for Navigation and Coverage in Collapsed Structures with Empirical Analysis of Classical Exploration and Goal-Conditioned Learning-Based Methods

Authors: Newton Adhikari
Comments: 10 Pages. (Note by viXra Admin: Please submit article written with AI assistance to ai.viXra.org)

Autonomous navigation of collapsed buildings is critical for disaster response, yet no standardized simulation benchmark exists for reproducible evaluation of robot navigation and coverage policies in such environments. We present DisasterSim, an open-source benchmark built on ROS 2 Humble and Gazebo Classic that provides a physically realistic post-earthquake building interior with configurable obstacle density, a multimodal sensor suite with Extended Kalman Filter (EKF)-based fusion, four formally defined evaluation metrics with automated computation, and four reference baseline policies. The entire system (environment, robot, SLAM, navigation stack, metrics, and automated experiment runner) executes from a single command with frozen parameters to ensure full reproducibility. Our empirical study across 39 trials reveals a striking result: three fundamentally different classical exploration paradigms (reactive FSM, frontier-based, and potential field) converge to a statistically indistinguishable performance plateau of approximately 30% area coverage (p>0.79, |d|≤0.27). This convergence suggests that navigation constraints, not exploration strategy, form the primary performance bottleneck in cluttered disaster environments. A partially trained goal-conditioned PPO policy (370k of 600k planned steps), which navigates toward a fixed known survivor location rather than exploring freely, achieves higher incidental coverage (36.9% mean, 61.1% peak, Cohen's d=0.78), indicating that goal-directed learned navigation traverses more of the environment en route than classical explorers manage in the same time budget. We additionally identify a quantifiable coverage-localization trade-off (Pearson r=0.85, p<0.001), correct a data error present in an earlier draft, and discuss the design of a goal-free RL explorer as the next step toward a fully autonomous learned baseline. All code, configurations, experiment logs, and trained models are publicly available.
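The effect sizes and correlation quoted above (Cohen's d, Pearson r) follow standard formulas; the snippet below is a plain reference re-implementation, not DisasterSim code.

```python
# Cohen's d (pooled standard deviation) and Pearson correlation,
# the two statistics reported in the empirical study above.

from math import sqrt

def cohens_d(xs, ys):
    """Effect size between two samples using the pooled standard deviation."""
    nx, ny = len(xs), len(ys)
    mx, my = sum(xs) / nx, sum(ys) / ny
    vx = sum((x - mx) ** 2 for x in xs) / (nx - 1)
    vy = sum((y - my) ** 2 for y in ys) / (ny - 1)
    pooled = sqrt(((nx - 1) * vx + (ny - 1) * vy) / (nx + ny - 2))
    return (mx - my) / pooled

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two paired samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```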
Category: Artificial Intelligence

[1] viXra:2604.0003 [pdf] submitted on 2026-04-02 13:18:10

AI and Neuroengineering Powered Slavery: A Dystopian Future

Authors: Tianqi Zhu
Comments: 6 Pages.

Recent progress in minimally invasive brain-computer interfaces (BCIs), nanoscale neural interfacing, and multimodal neural decoding has enabled increasingly precise access to and interpretation of human brain activity. This paper analyzes the dual-use risks associated with these technologies when integrated with advanced artificial intelligence and adaptive social engineering methodologies. We formalize a conceptual architecture for "brain-invading systems," which leverage closed-loop neural interaction, personalized modeling, and behavioral manipulation strategies to influence cognitive and affective states. We examine enabling components, including remote-capable neural interfaces and high-fidelity decoding pipelines, and discuss their potential convergence into scalable manipulation frameworks. Key challenges in detecting such systems are evaluated, including signal attribution, adversarial interference, and limitations in current neurodiagnostic methods. We further discuss opportunities for detecting such neuro-AI systems used for malicious purposes based on EEG signals.
Category: Artificial Intelligence